16. February 2018
Posted by aisolab
01

Artificial Intelligence is finding its way into all areas of modern society. Up to now, however, the focus has been on areas of application remote from everyday life, such as medicine, research and high technology. The world of private entertainment and mobility followed shortly after: intelligent assistants in smartphones and cars, in consumer electronics and on the web. Now AI is also penetrating an area that is an integral part of everyday life outside the home: daily shopping.

Amazon, known primarily as an online retailer, is launching the concept of an intelligent supermarket. The company opened its first Amazon Go supermarket in Seattle, Washington, in January 2018. The aim is to offer customers everyday shopping, especially of foodstuffs, in a new, simplified form: no queuing at the checkout, no scanning of barcodes, no cashiers. The customer enters the shop, takes goods from the shelves, pockets them and leaves the store; that's it. The amount due is debited from the customer's Amazon account, and the products taken are deducted from stock in the inventory system.

What sounds so simple requires some intelligent technology in the background, and shopping does not work entirely without prerequisites: customers need the (free) Amazon Go app and log in when entering the shop by having the system scan a QR code from their smartphone. After that, however, the phone can safely be put away, because from that point on the Artificial Intelligence in charge, with its learning algorithms, takes over the process. While the system continuously monitors them via cameras and personal identification, customers take goods from the shelves, which in turn act as scales. The number of items of a product a customer takes is added to a virtual shopping cart; goods that are put back are removed again. When the customer leaves the shop, the AI registers the purchase as complete and shortly afterwards debits their Amazon account with the amount due.
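
To make this mechanism concrete, the following sketch models the shelf-and-cart bookkeeping described above: weight changes on a smart shelf become additions to, or removals from, a customer's virtual cart. Everything here, from the function names to the event structure, is an illustrative assumption; Amazon has not published how Amazon Go works internally.

```python
from collections import defaultdict

# Hypothetical bookkeeping for "shelves as scales": a weight change on a
# shelf becomes a cart update for the customer the vision system has
# associated with that shelf.
carts = defaultdict(lambda: defaultdict(int))  # customer_id -> product -> count

def on_shelf_event(customer_id, product_id, weight_delta_g, unit_weight_g):
    """Negative weight delta: items taken; positive: items put back."""
    units = round(-weight_delta_g / unit_weight_g)
    carts[customer_id][product_id] += units
    if carts[customer_id][product_id] <= 0:  # fully returned to the shelf
        del carts[customer_id][product_id]

def on_exit(customer_id):
    """The customer leaves the shop: finalise the cart for billing."""
    return dict(carts.pop(customer_id, {}))

# A customer takes two 250 g yoghurts, then puts one back.
on_shelf_event("alice", "yoghurt", -500.0, 250.0)
on_shelf_event("alice", "yoghurt", +250.0, 250.0)
print(on_exit("alice"))  # {'yoghurt': 1}
```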

The decisive technological progress in this respect is undoubtedly the naturalness with which the transfer of goods and money takes place. Customers do not have to scan in commodities, and the purchase is made under constant visual and sensory control of the AI, without trust in the honesty of the customers. But Amazon's intelligent supermarket cannot do without personnel: Employees provide support for the supply of goods on the shelves, prepare ready-to-eat food and supervise the sale of alcohol to comply with legal requirements. Amazon's statements on the question of whether the new concept is a feasibility study or the prototype of an entire chain of stores is also unclear. Generally speaking, Amazon's advance may be seen as part of a development that is increasingly changing the way the food and retail trade functions and looks: Gradually, cash registers and staff disappear from the shops, and the supermarket becomes - according to the vision - an intelligent autonomous system.

09. February 2018
Posted by aisolab
02

For some time now, machine learning and artificial neural networks have been widely discussed and are highly exciting topics of current research. Google has recently succeeded in creating an Artificial Intelligence (AI) that can produce its own "children", which are more reliable and precise than comparable human-made AIs.

What is Machine Learning?

Machine learning can be defined as learning in the sense of generating knowledge from experience, with the difference that the knowledge is gained automatically by a computer system. By automating data analysis, large amounts of data can be processed quickly for pattern recognition - much faster and more accurately than a human being could manage.
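
As a minimal illustration of this "knowledge from experience" idea, the sketch below trains a small neural network on labelled examples and applies what it has learned to unseen data. It uses the scikit-learn library; the dataset and model choice are our own and not tied to any system mentioned in this post.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# "Experience": labelled examples of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model generates "knowledge" (its weights) from the training data alone.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

# The learned patterns generalise to digits the system has never seen.
print(f"accuracy on unseen digits: {clf.score(X_test, y_test):.2f}")
```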

The AutoML project

Building on this approach, Google has now created an Artificial Intelligence that can generate AI children of its own. The parent AI, AutoML, proposes specific software architectures and algorithms for its children, which are then tested and improved based on the test results. To achieve this, the methodology of reinforcement learning is used: the AI draws up a plan of action, or strategy, without any human input and adapts it in response to positive and negative feedback. Through iterative improvement over many cycles, the AI children get better and better at performing their tasks.
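
A heavily simplified sketch of this reinforcement-learning loop is shown below: a controller samples candidate "child" networks, trains them, and uses their validation accuracy as the reward that shifts future sampling toward better designs. The search space, reward baseline and update rule are illustrative assumptions, far simpler than Google's actual AutoML.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Search space for the "child" networks: candidate hidden-layer widths.
choices = [8, 16, 32, 64, 128]
prefs = np.zeros(len(choices))  # controller preferences, shaped by reward

for step in range(20):
    probs = np.exp(prefs) / np.exp(prefs).sum()  # softmax over candidates
    i = rng.choice(len(choices), p=probs)        # controller samples a child
    child = MLPClassifier(hidden_layer_sizes=(choices[i],),
                          max_iter=200, random_state=0)
    child.fit(X_tr, y_tr)                        # train the child...
    reward = child.score(X_val, y_val)           # ...and test it
    # REINFORCE-style update: reinforce choices that beat the baseline.
    prefs[i] += 0.5 * (reward - 0.9)

print("controller's favourite width:", choices[int(np.argmax(prefs))])
```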

Tasks of the AI

The goal of the Google supercomputer is to detect objects such as humans, animals or cars in a video. NASNet, as the AI child is called, is very successful: it correctly identifies 82.7% of the objects in the videos it is shown. This makes it better than any human-designed AI tested under similar conditions.

Automatic object recognition at this level is of interest to many industries. The systems produced in this way are not only more precise, they can also tackle much more complex tasks. First applications can be found in self-driving cars, for example. Other companies will also benefit from being able to develop AIs for demanding tasks such as operations more quickly and accurately.

The ethical aspect

Despite the promising prospects, large companies currently researching Artificial Intelligence have to deal with ethical issues. For example, what happens if AutoML generates AI children faster than society can keep up with? And what happens if the AI develops a life of its own at some point? This is why the responsible development of Artificial Intelligence is one of the primary concerns of research, at Amazon and Facebook as well as at Google.

06. February 2018
Posted by aisolab
03

The Digital Demo Day in Düsseldorf was a very successful day!

We had many interesting conversations at our stand, both before and especially after our presentation, where we showed examples of visual computing with AI and our AI-Cam. It was our pleasure to welcome many new contacts at our booth and also to meet some friends from our network.

Our special thanks go to Klemens Gaida and his team for organising this great event!

02. February 2018
Posted by aisolab
04

In recent years, the rapid development of Artificial Intelligence has reached levels that were once considered pure science fiction. Particular attention - and, from some perspectives, unusually strong fears - is directed at every novelty that points in the direction of machine consciousness. Especially in the area of perception and its manipulation, the performance of machines is increasingly astounding amateurs and even experts.

A good example is the work of an AI research group in the development department of Nvidia, a company known primarily as a manufacturer of graphics cards. The researchers succeeded in teaching an artificial neural network to credibly alter specific characteristics of image and film sources. With this technology, it is possible to change the weather in a video or the breed of a dog in a picture. The crucial point is that images and videos can be manipulated almost arbitrarily without a human editor having to intervene. The results are not yet perfect, but they are likely to become even more convincing in the future.

Scientists from the University of Kyoto went one step further: they used a similar procedure to allow an Artificial Intelligence to recognise the mental images in the human brain, so that the AI reads a person's thoughts to some extent. In detail, this works as follows: a neural network is trained to match images that a human subject looks at with data obtained by functional magnetic resonance imaging (fMRI) of the person's corresponding brain activity. In this way, the AI learns to associate external stimuli (the pictures) with internal states in the brain (the fMRI patterns). If, after this learning phase, it receives only fMRI data as input, it can reconstruct what the person perceives from this information alone, without ever having seen the images themselves. The images of these mental processes produced by the AI are anything but photorealistic; however, they do resemble the original image.
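
The sketch below mimics that two-phase procedure with random stand-in data: a decoder is first trained on paired (fMRI, image) examples and then reconstructs images from fMRI data alone. The voxel counts, image sizes and the simple linear decoder are toy assumptions; the Kyoto group's actual pipeline uses deep networks and real scanner data.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in data: 200 viewing trials, each pairing a simulated fMRI
# response vector (1000 voxels) with the flattened 8x8 image shown.
images = rng.random((200, 64))
W_true = rng.standard_normal((64, 1000)) * 0.1   # unknown brain response map
fmri = images @ W_true + rng.normal(0, 0.01, (200, 1000))

# Learning phase: associate external stimuli (images) with brain states.
decoder = Ridge(alpha=1.0)
decoder.fit(fmri[:150], images[:150])

# Reconstruction phase: given only fMRI data, recover the perceived image.
reconstruction = decoder.predict(fmri[150:])
err = np.abs(reconstruction - images[150:]).mean()
print(f"mean pixel error on held-out trials: {err:.3f}")
```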

A question as threatening-sounding as "Can Artificial Intelligence read your thoughts?" causes far less discomfort on closer inspection. The actual "reading of thoughts", the look into the brain, is still performed by the MRI scanner, which is exactly what it is built for. The Artificial Intelligence, on the other hand, is limited to pattern recognition through a neural network and the application of what has been learned to new data. The strength of neural networks lies in their speed: while people need hours to learn new material, such a system can run millions of learning steps in the same time. A large number of passes creates a finely differentiated system of weights and states between the neurons in the network, so that with continued training the result becomes more and more similar to the target. The possible applications are manifold, but in one respect above all this technology offers a fascinating promise for the future: it could enable people who cannot communicate in speech or writing to convey their thoughts and inner images. Further applications are also conceivable, such as the direct "uploading" of intellectual content into computer networks.

29. January 2018
Posted by aisolab
05

The Digital Demo Day is just around the corner, and we are pleased to be part of it! On the 1st of February, we will be presenting our digital solutions and innovative AI technologies at the Digital Hub in Düsseldorf.

Join us at the Digital Demo Day to see the latest digital trends, talk about applications and get profound insights.

Tickets can be purchased here
See you there!

19. January 2018
Posted by aisolab
06

Cameras at intersections and other busy roads are not new. They enable traffic monitoring and provide images in case of accidents. In the future, the camera footage could become even more valuable - with the help of Artificial Intelligence. The evaluation of traffic data by supercomputers, which can analyse traffic in a short time, could soon make roads safer. Scientists from the Texas Advanced Computing Center (TACC), the Center for Transportation Research at the University of Texas and the city of Austin, Texas, are working on programs that use deep learning and data mining to make roads safer and eliminate traffic problems.

The scientists are developing an Artificial Intelligence that uses deep learning to evaluate video recordings from traffic points. The software should be able to recognise and classify objects correctly: cars, buses, trucks, motorcycles, traffic lights and people. It then determines how these objects move and behave, gathering information that can be analysed in detail to prevent traffic problems. The aim is to develop software that helps transport researchers evaluate data. The Artificial Intelligence should be flexible in use and, in the future, able to recognise traffic problems of all kinds without anyone having to program it explicitly for each purpose.

Thanks to deep learning, the supercomputers classify the objects correctly and estimate the relationships between detected objects in road traffic by following the movements of cars, people, etc. Once this groundwork was done, the scientists gave the software two tasks: count the number of vehicles driving along a road, and, more difficult, record near collisions between cars and pedestrians. The Artificial Intelligence processed 10 minutes of video footage and counted the vehicles with 95% accuracy. Being able to measure traffic accurately is a valuable capability. At present, numerous expensive sensors are needed to obtain such data, or dedicated studies must be carried out that only answer one specific question. The software, on the other hand, can monitor the volume of traffic over an extended period and thus provide far more accurate figures on traffic volumes. This makes it possible to take better decisions on the design of road transport.
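
The detect-and-count step can be illustrated with an off-the-shelf object detector, as in the sketch below. It uses a torchvision model pretrained on the COCO dataset; the frame path is hypothetical, and the TACC software is a custom system rather than this generic detector.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf detector pretrained on COCO, standing in for the custom
# traffic model described above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

VEHICLES = {3: "car", 4: "motorcycle", 6: "bus", 8: "truck"}  # COCO class ids

def count_vehicles(frame_path, threshold=0.7):
    """Count detected vehicles in a single traffic-camera frame."""
    img = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    return sum(1 for label, score in zip(pred["labels"], pred["scores"])
               if label.item() in VEHICLES and score.item() >= threshold)

print(count_vehicles("intersection_frame.jpg"))  # hypothetical frame capture
```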

In the case of near collisions, the Artificial Intelligence enabled the scientists to automatically identify situations in which pedestrians and vehicles came dangerously close to each other. This makes it possible to identify particularly dangerous spots in the road network before accidents happen. The analysed data could prove very revealing when it comes to eliminating future traffic problems.

The next project: the software will learn where pedestrians cross the road, how drivers react to signs pointing out pedestrian crossings, and how far pedestrians are willing to walk to reach a crossing. The project by the Texas Advanced Computing Center and the University of Texas illustrates how deep learning can help reduce the cost of analysing video material.

18. January 2018
Posted by aisolab
07

I initiated and helped to organise the already famous Handelsblatt AI Conference, which will take place in Munich on March 15th and 16th.

We were able to set up a great line of speakers and I am really looking forward to discussing the latest developments in AI with Damian Borth, Annika Schröder, Klaus Bauer, Norbert Gaus, Bernd Heinrichs, Andreas Klug, Dietmar Harhoff, Alexander Löser, Gesa Schöning, Oliver Gluth, Reiner Kraft, Thomas Jarzombek and Daniel Saaristo.

We are very proud to have Jürgen Schmidhuber, one of the godfathers of AI and the inventor of LSTM, among them.

Join us in getting profound insights and having exciting conversations.

See you in Munich!

Joerg Bienert

17. January 2018
Posted by aisolab
08

Bot better than a human for the first time

Internationally competing AI developers have built programs that can read and compete in the SQuAD test, demonstrating how far machine language learning and understanding have progressed. Almost all global technology companies, including Google, Facebook, IBM and Microsoft, use the prestigious Stanford Question Answering Dataset (SQuAD) reading-comprehension test to measure themselves against each other and against an average human subject. SQuAD contains more than 100,000 questions related to the content of over 500 Wikipedia articles and asks questions with objectively verifiable answers, such as "What causes rain?". Other questions include: "What nationality did Nikola Tesla have?", "What is the size of the rainforest?" or "Which musical group performed in the Super Bowl 50 halftime show?" The result: the bot holds its own. And not only that: for the first time, it was better than a human.

Machine learning development

The first AI reading machine to score better on SQuAD than a human being is new software from Alibaba Group Holding Ltd., developed in Hangzhou, China, by the company's Institute of Data Science and Technologies. The research department of the Chinese e-commerce giant said that the machine language processing program achieved a test score of 82.44 points, cracking the previous benchmark of 82.30 points set by humans. Alibaba thus underlines its leading role in the development of machine learning and Artificial Intelligence technologies. However, Microsoft is hot on Alibaba's heels: Microsoft's reading machine was only narrowly surpassed by the Alibaba program's test result.

What makes bots better

The bot program must accurately filter a variety of available information in order to give correct and relevant answers. The program, a neural network model loosely inspired by the human brain, works down from paragraphs to sentences and from there to words, trying to identify the sections that might contain the answer being sought.
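
As an illustration of this kind of extractive question answering, the sketch below uses the open-source Hugging Face transformers library to pick the answer span out of a passage. It is not Alibaba's proprietary model, merely the same general technique applied to one of the SQuAD-style questions mentioned above.

```python
from transformers import pipeline

# Extractive QA: the model scores candidate spans in the passage and
# returns the one most likely to answer the question.
qa = pipeline("question-answering")  # downloads a default pretrained model

context = (
    "Nikola Tesla was a Serbian-American inventor, electrical engineer "
    "and futurist, best known for his contributions to the design of "
    "the modern alternating current electricity supply system."
)

result = qa(question="What nationality did Nikola Tesla have?",
            context=context)
print(result["answer"], f"(confidence {result['score']:.2f})")
# e.g. "Serbian-American"
```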

The head of development for AI and language programs, Si Luo, said the findings make it clear that machines can now answer objective questions with high accuracy.

This development opens up new possibilities for Artificial Intelligence and its use in customer service. If reading robots could in the future also answer medical inquiries via the Internet, for example, this would significantly reduce the need for human input. Alibaba's bot program called "Xiaomi" relies on the capabilities of the reading AI engine and has already been used successfully: on Singles' Day, Asia's biggest annual shopping event on November 11, the software proved itself in practice.

Challenges still ahead for AI

However, bots still have difficulties with queries that are vague, colloquial, ironic or simply grammatically incorrect. If the text contains no ready answer, the program cannot find a suitable one by reading; it then fails to respond adequately and answers incorrectly.

15. January 2018
Posted by aisolab
09

The fusion of atomic nuclei could become one of the solutions to future energy problems and may represent a significant advance in the technological development of humanity. Fusion reactions are, however, still difficult to control: disruptions can occur at any time, interrupting the fusion process and potentially damaging the so-called tokamaks, the fusion reactors that use magnetic fields to generate energy from plasma. Artificial Intelligence can help to anticipate these disturbances and react correctly, so that damage is minimised and the process runs as smoothly as possible. Scientists are in the process of developing computer programs that enable them to predict the behaviour of the plasma.

Scientists from Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory are conducting initial experiments with Artificial Intelligence to test the software's predictive ability. The group is led by William Tang, a renowned physicist and professor at Princeton University. He and his team are developing the code for ITER, the "International Thermonuclear Experimental Reactor" in France, aiming to demonstrate the applicability of Artificial Intelligence in this area of science. The software is called "Fusion Recurrent Neural Network", FRNN for short, and uses a form of deep learning - an advanced variant of machine learning that can process considerably more data. FRNN is particularly good at evaluating sequential data with long-range patterns. The team is the first to use a deep learning program to predict the complexities of fusion reactions.
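
A toy stand-in for this idea is sketched below: a recurrent network (here an LSTM) reads a sequence of plasma sensor signals and outputs the probability that the shot ends in a disruption. The signal count, sequence length and synthetic training data are illustrative assumptions, not ITER or JET data and not the actual FRNN code.

```python
import torch
import torch.nn as nn

class DisruptionPredictor(nn.Module):
    """LSTM over a time series of plasma signals -> disruption logit."""
    def __init__(self, n_signals=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_signals)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # prediction at the final time step

model = DisruptionPredictor()
x = torch.randn(32, 200, 14)              # 32 synthetic shots, 200 time steps
y = torch.randint(0, 2, (32, 1)).float()  # 1 = shot ended in a disruption

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```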

This approach allowed the team to make more accurate predictions than before. So far, the experts have tested it on data from the Joint European Torus in Great Britain, the largest tokamak in operation; soon it will be ITER's turn. For ITER, the Fusion Recurrent Neural Network should be developed to the point where it predicts incidents with up to 95% accuracy while raising fewer than 3% false alarms. The deep learning program runs on GPUs ("graphics processing units"), which, unlike lower-throughput CPUs, make it possible to run thousands of computations at the same time. This is demanding work that has to be distributed across many GPUs. The supercomputer at the Oak Ridge Leadership Computing Facility, currently the fastest in the United States, is used for this purpose.

Initially, the experiments were carried out on Princeton University computers, where it turned out that the FRNN is perfectly capable of processing the vast amounts of data and making useful predictions. In this way, Artificial Intelligence provides a valuable service to science by predicting the behaviour of plasma with pinpoint accuracy. FRNN will soon be used at tokamaks all over the world and will make an essential contribution to the progress of humanity.

13. January 2018
Posted by aisolab
10

Machine learning is advancing into ever new dimensions. Artificial Intelligence can now do something that previously seemed to be reserved for humans: it understands emotions.

Artificial intelligence: model of the human brain

Modern AI and machine learning via neural networks have a natural model: the human brain, the most effective known tool for solving problems. However, a critical aspect of our intelligence was missing from previous AI programs: empathy and emotional intelligence. With these abilities, people can grasp feelings and make intuitive decisions "straight from the gut". To date, intelligent software programs have been able to understand speech, respond to it and act independently according to a given data template, i.e. to act intelligently in the everyday sense. But they do not feel anything. Now developers have moved a step closer to incorporating emotions into machine intelligence. Engineers have developed methods that allow a computer to recognise human feelings from physiological reactions and facial features. The pioneers of AI programs - Google, Microsoft and other giants - are very interested in this. They would like to integrate this aspect into their existing solutions or create computer-aided sentiment analysis that helps machines interpret human feelings correctly and act accordingly. These can be machines of all kinds, even construction machinery.

How does machine learning about emotions work?

Data that communicates a person's emotional state to a machine can be transmitted in many different ways, including:

  • tone of voice
  • speech patterns
  • use of certain expressions and phrases
  • facial expressions
  • physiological signals such as pulse, heart rate and body temperature
  • gestures
  • body language

Not every machine can measure physiology, because that requires dedicated sensors. All the other signals, however, are observable. Speech and facial expressions in particular contain various non-verbal cues that are very meaningful. Research results show that 55% of the message in a conversation is hidden in smiles, facial expressions and body signals such as a shrug of the shoulders, 38% in tone of voice and only 7% in the actual meaning of the words. Previous software solutions for speech analysis thus neglect most of the message; they only identify the words themselves. For example, a smartphone with speech recognition currently cannot tell whether a spoken phrase is an exclamation or a question. But companies working with Artificial Intelligence are learning quickly. Some want to assess the emotional impact of advertising spots, which becomes possible by turning on the laptop camera while the viewer watches an advertising video. Not much more research time should pass before we see a computer that truly "empathises" with us. Experts already point out that an ethical discussion could then arise: does the computer have feelings, and does it have rights?
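
To show what such a multimodal pipeline might look like in code, here is a late-fusion sketch: feature vectors from face, voice and words are concatenated and a single classifier is trained on top. The features and labels are random stand-ins; a real system would extract them with dedicated vision, audio and language models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy per-utterance feature vectors from three channels (random stand-ins).
n = 300
face = rng.random((n, 16))     # facial-expression features
voice = rng.random((n, 8))     # tone-of-voice features (pitch, energy, ...)
words = rng.random((n, 32))    # text features from the spoken words

# Synthetic labels: 1 = "positive emotion", driven mostly by face and voice,
# mirroring the weighting of non-verbal cues described above.
score = 0.55 * face.mean(1) + 0.38 * voice.mean(1) + 0.07 * words.mean(1)
labels = (score > np.median(score)).astype(int)

# Late fusion: concatenate all channels and train one classifier on top.
X = np.hstack([face, voice, words])
clf = LogisticRegression(max_iter=1000).fit(X[:200], labels[:200])
print(f"held-out accuracy: {clf.score(X[200:], labels[200:]):.2f}")
```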