06. February 2018
Posted by aisolab

The Digital Demo Day in Düsseldorf was a very successful day!

We had many interesting conversations at our booth, where we showed examples of visual computing with AI and our AI-Cam, both before and especially after our presentation. It was our pleasure to welcome many new contacts and to meet some friends from our network.

Our special thanks go to Klemens Gaida and his team for organising this great event!

02. February 2018
Posted by aisolab

In recent years, the rapid development of Artificial Intelligence has reached levels that were once considered pure science fiction. Novelties that point in the direction of machine consciousness are viewed with particular attention, and from some perspectives with unusually high fears. Especially in the area of perception and its manipulation, the performance of machines is currently astounding to an increasing degree, for amateurs and experts alike.

A good example is the work of an AI research group in the development department of Nvidia, a company known primarily as a manufacturer of graphics cards. The researchers succeeded in teaching an artificial neural network to credibly alter specific characteristics of image and film sources. With this technology, it is possible to change the weather in a video or the breed of a dog in a picture. The crucial point is that representations in image or video can be manipulated almost arbitrarily without a human editor having to intervene. The results are not yet perfect, but they are likely to become even more convincing in the future.

Scientists from Kyoto University went one step further: they used a similar procedure to let an Artificial Intelligence recognise the mental images in a human brain, so that the AI reads a person's thoughts to some extent. In detail, this works as follows: a neural network is trained to match images that a human subject looks at with data obtained by functional magnetic resonance imaging (fMRI) of the person's corresponding brain activity. In this way, the AI learns to associate external stimuli (pictures) with internal states in the brain (the fMRI patterns). If, after this learning phase, it receives only fMRI data as input, it can reconstruct what the person perceives from this information alone, without ever having seen the images. The images of these mental processes produced by the AI are anything but photorealistic, but they do resemble the original image.
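A minimal sketch of this training setup, using purely synthetic data and a simple linear (ridge regression) decoder in place of the actual deep network; every name, dimension, and value below is invented for illustration:

```python
import numpy as np

# Toy sketch: learn a linear map from fMRI voxel patterns to image
# feature vectors via ridge regression. All data here is synthetic.
rng = np.random.default_rng(0)
n_samples, n_voxels, n_feats = 200, 50, 10

true_w = rng.normal(size=(n_voxels, n_feats))          # unknown "brain code"
fmri = rng.normal(size=(n_samples, n_voxels))          # activity per viewed image
img_feats = fmri @ true_w + 0.01 * rng.normal(size=(n_samples, n_feats))

# Ridge solution: W = (X^T X + lam*I)^-1 X^T Y
lam = 1e-3
w_hat = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels),
                        fmri.T @ img_feats)

# At decoding time, only fMRI data is given; image features are reconstructed.
decoded = rng.normal(size=(1, n_voxels)) @ w_hat
```

The same two-phase structure (supervised pairing, then decoding from brain data alone) is what the Kyoto work does, only with deep networks and real fMRI recordings.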

A question as threatening-sounding as "Can Artificial Intelligence read your thoughts?" causes less discomfort on closer inspection. The actual "reading of thoughts", the look into the brain, is still done by the MRI scanner, which is what it is built for. The Artificial Intelligence, on the other hand, is limited to pattern recognition through a neural network and the application of what it has learned to new data. The strength of neural networks lies in their speed: while people need hours to learn new lessons, such a system can run through millions of learning passes in the same time. A large number of passes creates a finely differentiated system of weights between the neurons in the net, so that the result of continuous training becomes more and more similar to the target. The possible applications are manifold, but in one respect this technology offers especially fascinating promise: it could enable people who cannot communicate in speech or writing to convey their thoughts and inner images. Further applications are also conceivable, such as the direct "uploading" of mental content into computer networks.
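The repeated weight adjustment described above can be sketched in a few lines; this toy example trains a single weight rather than a full network, and all numbers are invented:

```python
# Toy sketch of gradient-descent training: a single "neuron" with one
# weight learns the relationship y = 2*x by minimising squared error.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.0, 4.0, 6.0]

w = 0.0          # untrained weight
lr = 0.05        # learning rate

for _ in range(500):               # many learning passes over the data
    for x, y in zip(xs, ys):
        pred = w * x
        grad = 2 * (pred - y) * x  # d/dw of (w*x - y)**2
        w -= lr * grad             # nudge the weight toward the target
```

After the loop, `w` has converged to roughly 2.0; a real network repeats this process across millions of weights at once.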


29. January 2018
Posted by aisolab


The Digital Demo Day is just around the corner, and we are pleased to be part of it! On February 1st, we will be presenting our digital solutions and innovative AI technologies at the Digital Hub in Düsseldorf.

Join us at the Digital Demo Day to see the latest digital trends, talk about applications and get profound insights.

Tickets can be purchased here
See you there!

19. January 2018
Posted by aisolab

Cameras at intersections and other busy roads are not new. They enable traffic monitoring and provide images in case of accidents. In the future, the camera footage could become even more valuable with the help of Artificial Intelligence: the evaluation of traffic data by supercomputers could soon make roads safer by analysing traffic within a short time. Scientists from the Texas Advanced Computing Center (TACC), the Center for Transportation Research at the University of Texas and the Texan city of Austin are working on programs that use deep learning and data mining to make roads safer and eliminate traffic problems.

The scientists are developing Artificial Intelligence that uses deep learning to evaluate video recordings of traffic points. This software should be able to recognise and classify objects correctly: cars, buses, trucks, motorcycles, traffic lights and people. The software determines how these objects move and behave, gathering information that can then be analysed more precisely to prevent traffic problems. The aim is to develop software that helps transport researchers evaluate data. The Artificial Intelligence should be flexible in its use and, in the future, able to recognise traffic problems of all kinds without anyone having to program it explicitly for the purpose.

Thanks to deep learning, supercomputers classify the objects correctly and estimate the relationships between them in road traffic by following the movements of cars, people, and so on. Once this groundwork was done, the scientists gave the software two tasks: count the number of vehicles driving along a road, and, more difficult, record near collisions between cars and pedestrians. The Artificial Intelligence processed 10 minutes of video footage and counted all vehicles with 95% accuracy. Being able to measure traffic accurately is a valuable capability. At present, numerous expensive sensors are needed to obtain such data, or dedicated studies must be carried out that only produce specific data. The software, on the other hand, can monitor the volume of traffic over an extended period and thus provide far more accurate figures on traffic volumes. This makes it possible to take better decisions on the design of road transport.
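As a rough illustration of the counting task, here is a naive sketch (not the TACC software): vehicles are counted from per-frame detections, and a detection is treated as a new vehicle when nothing of the same class appeared nearby in the previous frame. Classes, coordinates and the threshold are made up:

```python
# Naive vehicle counter over per-frame detections (class_label, x_center).
VEHICLES = {"car", "bus", "truck", "motorcycle"}

def count_vehicles(frames, max_shift=20.0):
    total = 0
    prev = []
    for dets in frames:
        current = [(c, x) for c, x in dets if c in VEHICLES]
        for c, x in current:
            # Nothing similar nearby in the last frame: count a new vehicle.
            if not any(pc == c and abs(px - x) <= max_shift for pc, px in prev):
                total += 1
        prev = current
    return total

frames = [
    [("car", 100.0), ("person", 50.0)],
    [("car", 110.0), ("truck", 300.0)],   # same car moved; a truck enters
    [("car", 121.0), ("truck", 290.0)],
]
```

A real system replaces the hand-set distance threshold with a learned detector and tracker, but the bookkeeping is the same in spirit.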

In the case of near collisions, the Artificial Intelligence enabled scientists to automatically identify situations where pedestrians and vehicles came threateningly close to each other. Thus, it is possible to identify particularly dangerous points in traffic before accidents happen. The analysed data could prove very revealing when it comes to eliminating future traffic problems.
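The near-collision check can be sketched as a simple distance threshold between tracked pedestrians and vehicles; this is an illustrative toy with invented positions, not the researchers' code:

```python
# Flag frames in which a pedestrian and a vehicle come closer than a threshold.
def near_collisions(tracks, threshold=2.0):
    """tracks: list of frames, each {object_id: (kind, x, y)} in metres."""
    events = []
    for t, frame in enumerate(tracks):
        peds = [(i, x, y) for i, (k, x, y) in frame.items() if k == "person"]
        cars = [(i, x, y) for i, (k, x, y) in frame.items() if k == "car"]
        for pi, px, py in peds:
            for ci, cx, cy in cars:
                if ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 < threshold:
                    events.append((t, pi, ci))   # (frame, pedestrian, vehicle)
    return events

frames = [
    {1: ("person", 0.0, 0.0), 2: ("car", 10.0, 0.0)},
    {1: ("person", 1.0, 0.0), 2: ("car", 2.5, 0.0)},  # only 1.5 m apart
]
```

Aggregating such events per location is what makes dangerous intersections visible before an accident occurs.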

The next project: The software will learn where pedestrians cross the road, how drivers react to signs pointing out pedestrians crossing the street, and how far they are willing to walk to reach the pedestrian path. The project by the Texas Advanced Computing Center and the University of Texas illustrates how deep learning can help reduce the cost of analysing video material.

18. January 2018
Posted by aisolab

I initiated and helped to organise the Handelsblatt AI Conference, which will take place in Munich on March 15th and 16th.

We were able to set up a great line-up of speakers, and I am really looking forward to discussing the latest developments in AI with Damian Borth, Annika Schröder, Klaus Bauer, Norbert Gaus, Bernd Heinrichs, Andreas Klug, Dietmar Harhoff, Alexander Löser, Gesa Schöning, Oliver Gluth, Reiner Kraft, Thomas Jarzombek and Daniel Saaristo.

We are also very proud to have Jürgen Schmidhuber on board, one of the godfathers of AI and the inventor of LSTM.

Join us in getting profound insights and having exciting conversations.

See you in Munich!

Joerg Bienert

17. January 2018
Posted by aisolab

Bot better than a human for the first time

AI developers around the world have built programs that can read and compete in the SQuAD test, demonstrating how far machines have come in learning and understanding language. Almost all global technology companies, including Google, Facebook, IBM and Microsoft, use the prestigious Stanford Question Answering Dataset (SQuAD) reading-comprehension test to measure themselves against each other and against an average human subject. SQuAD contains more than 100,000 questions on the content of over 500 Wikipedia articles. It asks questions with objectifiable answers, such as "What causes rain?". Other questions include: "What nationality did Nikola Tesla have?", "What is the size of the rainforest?" and "Which musical group performed in the Super Bowl 50 halftime show?". The result: the bot prevails. And not only that: for the first time, it was better than humans.

Machine learning development

The first AI reading machine to score better on SQuAD than a human being is new software from Alibaba Group Holding Ltd., developed in Hangzhou, China, by the company's Institute of Data Science and Technologies. The research arm of the Chinese e-commerce giant said that the machine language processing program achieved a test score of 82.44 points, cracking the previous record of 82.30 points, which was held by a human. With this, Alibaba underlines its leading role in the development of machine learning and Artificial Intelligence technologies. However, Microsoft is hot on Alibaba's heels: Microsoft's reading machine was only just edged out by the Alibaba program's test result.

What makes bots better

The bot program must accurately filter a variety of available information to give correct, and only relevant, answers. The program, which models the human brain as a neural network, works from paragraphs down to sentences and from there to words, trying to identify the sections that might contain the answer being sought.
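That paragraph-to-sentence-to-word narrowing can be caricatured with plain word overlap; a real SQuAD model uses learned neural representations, but this invented sketch shows the filtering idea:

```python
# Toy reading-comprehension filter: pick the sentence that shares the most
# words with the question, then keep the words not already in the question.
def answer(question, paragraph):
    q_words = set(question.lower().strip("?").split())
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    # Narrow from the paragraph to the best-overlapping sentence.
    best = max(sentences, key=lambda s: len(q_words & set(s.lower().split())))
    # Narrow from the sentence to candidate answer words.
    return [w for w in best.split() if w.lower() not in q_words]

para = ("Nikola Tesla was a Serbian-American inventor. "
        "He developed the alternating current system.")
```

Running `answer("What nationality did Nikola Tesla have?", para)` narrows to the first sentence and returns the leftover words, including "Serbian-American".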

Si Luo, the head of development for AI and language programs, said the findings make it clear that machines can now answer objective questions with high accuracy.

This development opens up new possibilities for Artificial Intelligence and its use in customer service. If reading robots could in the future also answer medical inquiries via the Internet, for example, this would significantly reduce the need for human input. Alibaba's bot program, called "Xiaomi", relies on the capabilities of the reading AI engine and has already been used successfully: on Singles' Day, the annually recurring shopping event on November 11 and the biggest in Asia, the software was applied with success.

A challenge still ahead for AI

However, bots still have difficulties with language queries that are vague, colloquial, ironic or simply grammatically incorrect. If no prepared answer exists, the bot cannot find a suitable solution while reading; it then fails to respond adequately and answers incorrectly.

15. January 2018
Posted by aisolab

The fusion of atomic nuclei could become one of the solutions to future energy problems and may represent a significant advance in the technological development of humanity. Fusion reactions are still difficult to control: disruptions can occur at any time, interrupting the fusion process and potentially damaging the so-called tokamaks, the fusion reactors that generate energy from plasma confined by magnetic fields. Artificial Intelligence can help to anticipate these disturbances and react correctly, so that damage is minimised and the process runs as smoothly as possible. Scientists are developing computer programs that enable them to predict the behaviour of the plasma.

Scientists from Princeton University and the U.S. Department of Energy's Princeton Plasma Physics Laboratory are conducting initial experiments with Artificial Intelligence to test the software's predictive ability. The group is led by William Tang, a renowned physicist and professor at Princeton University. He and his team are developing the code for ITER, the "International Thermonuclear Experimental Reactor" in France, aiming to demonstrate the applicability of Artificial Intelligence in this area of science. The software is called "Fusion Recurrent Neural Network", FRNN for short, and uses a form of deep learning, an advanced variant of machine learning that can process considerably more data. FRNN is particularly good at evaluating sequential data with complex patterns. The team is the first to use a deep learning program to predict the complexities of fusion reactions.

This approach allowed the team to make more accurate predictions than before. So far, the experts have tested their system on the Joint European Torus in Great Britain, the largest tokamak in operation; ITER is next. For ITER, the Fusion Recurrent Neural Network should be developed far enough to predict incidents with up to 95% accuracy while raising fewer than 3% false alarms. The deep learning program runs on GPUs ("graphics processing units") rather than lower-performance CPUs, which makes it possible to run thousands of computations at the same time. This is demanding work that has to be distributed across many GPUs; the supercomputer at the Oak Ridge Leadership Computing Facility, currently the fastest in the United States, is used for this purpose.
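A hand-rolled toy of the recurrent idea behind FRNN (not the actual FRNN code): a hidden state carries information across the time steps of a plasma signal, and a final squashing step turns it into a disruption-risk score. The weights and signals below are invented:

```python
import math

# Minimal recurrent cell: the hidden state h summarises the signal so far,
# and a sigmoid turns the final state into a probability-like risk score.
def rnn_disruption_score(signal, w_in=1.0, w_rec=0.5, w_out=2.0):
    h = 0.0
    for x in signal:                  # walk the time series step by step
        h = math.tanh(w_in * x + w_rec * h)
    return 1.0 / (1.0 + math.exp(-w_out * h))

calm = [0.1, 0.0, 0.1, 0.05]      # stable plasma signal (made up)
spike = [0.1, 0.5, 1.5, 3.0]      # runaway excursion (made up)
```

In FRNN the same recurrence is learned from real diagnostics over thousands of shots, but the principle is identical: a runaway excursion should push the score toward 1, a calm trace should not.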

The first experiments were carried out on computers at Princeton University, where it turned out that the Artificial Intelligence of the FRNN is perfectly capable of processing the vast amounts of data and making useful predictions. In this way, Artificial Intelligence provides a valuable service to science by predicting the behaviour of plasma with pinpoint accuracy. FRNN will soon be used at tokamaks all over the world and make an essential contribution to the progress of humanity.

13. January 2018
Posted by aisolab

Machine learning is advancing into ever new dimensions. Artificial Intelligence can now do something that previously seemed reserved for humans: it understands emotions.


Artificial intelligence: a model of the human brain

Modern AI and machine learning via neural networks have a natural model: the human brain, the most effective tool for solving all problems known to us. However, a critical aspect of our intelligence has been missing from previous AI programs: empathy and emotional intelligence. With these abilities, people can grasp feelings and make intuitive decisions "straight from the gut". To date, intelligent software programs have been able to understand speech, respond to it and act independently according to a given data template, i.e. to act intelligently in the everyday sense. But they do not feel anything. Now developers have moved a step closer to incorporating emotions into machine intelligence: engineers have developed a method that allows a computer to recognise human feelings from physiological reactions and facial features. The pioneers of AI, Google, Microsoft and other giants, are very interested in this. They would like to integrate this aspect into their existing solutions or create computer-aided sentiment analysis that helps machines interpret human feelings correctly and act accordingly. These can be machines of all kinds, even construction machinery.

How does machine learning about emotions work?

Data that communicates the emotional state of a person to a machine is transmitted in many different ways. These include:

  • tone of voice
  • speech patterns
  • use of certain expressions and phrases
  • facial expressions
  • physiological signals such as pulse, heart rate and body temperature
  • gestures
  • body language

Not every machine can measure physiology, because that requires dedicated sensors. But all the other signals can be seen and heard. Speech and facial expressions in particular contain various non-verbal cues, which are very meaningful. Research results suggest that 55% of a conversation's message is hidden in smiles, facial expressions and body signals such as a shrug of the shoulders, 38% in tone, and only 7% in the actual meaning of the words. Previous software solutions for speech analysis thus neglect most of the message; they only identify the words themselves. For example, a smartphone with speech recognition currently cannot tell whether a phrase ends in an exclamation mark or a question mark. But companies using Artificial Intelligence are learning quickly. Some of them want to assess the emotional impact of advertising spots, which becomes possible by turning on a laptop's camera while the viewer watches an advertising video. Not much more research time should pass before a computer really "empathises" with us. Experts already point out that an ethical discussion could then arise: does the computer have feelings, does it have rights?
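Taking the quoted proportions at face value, a toy fusion of the three channels might weight them 55/38/7; the function name and scores below are purely illustrative, not an established model:

```python
# Toy multimodal fusion using the 55/38/7 split quoted above.
def fuse_emotion(visual, tone, words):
    """Each input is a score in [0, 1] for 'positive emotion'."""
    return 0.55 * visual + 0.38 * tone + 0.07 * words

# A clear smile with flat tone and neutral words still reads mostly positive.
score = fuse_emotion(visual=0.9, tone=0.5, words=0.5)
```

The point of the sketch: a word-only analyser sees 0.5 (neutral), while the fused score of 0.72 reflects the dominant visual channel.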

09. January 2018
Posted by aisolab

Artificial Intelligence visionaries have never shied away from bold predictions. As early as the 1950s, research was accompanied by a series of utopian forecasts concerning future developments and their impact on society. As naive as such forecasts and desires may seem in retrospect, short-term trends in intelligent systems of any kind can certainly be identified and put to sensible use. In 2018, the development of Artificial Intelligence will gain momentum like never before. Here are seven trends in AI development that you should know about in 2018.

From Hype to Reality

After many promises of early AI research remained unfulfilled, the field of Artificial Intelligence grew calmer in the nineties. But at the turn of the millennium, Artificial Intelligence celebrated a comeback, which in the following years developed into a hype that was to overshadow even the naive optimism of the early days. For some years now, we have been approaching the point at which the long-standing, mostly theoretical debate on feasibility finally leads to practical implementations that can stand on their own. Somewhat philosophical topics, such as the question of whether strong, general AI is possible in principle, are increasingly being displaced by the power of the factual. Even old fears about the superiority of future intelligent machines are fading behind the utility value of technologies that are already foreseeable today. For 2018, we anticipate that the focus of Artificial Intelligence will shift even more strongly into practice. Fundamental questions recede into the background, while concrete applications of the various technologies conquer everyday life and come to be taken for granted by companies and end users.

Private companies set the tone

Even though fundamental considerations on machine intelligence have lost some of their weight, research in all areas of Artificial Intelligence continues at full speed. In 2018, the emphasis will be on two trends: in addition to the already mentioned stronger focus on practical aspects and the deployment of intelligent systems in practice, a shift of cutting-edge research from the university sector to a number of global information technology players can be observed. In fact, companies are increasingly willing to explore more radical concepts and to bring AI technologies into end devices rather than reserving them for the production sector. To this end, more and more groups are forming working groups and subsidiaries along the skunkworks principle, with research budgets that university departments can only dream of and utopian projects whose economic payoff is uncertain at best. The best-known example is probably Google's research company X.

Popularisation and Democratisation

For decades, AI concepts were regarded in programmers' circles as esoteric, as deep magic. The fact that Artificial Intelligence in application development is increasingly being made accessible to, and used by, non-specialists is another development we foresee for 2018. Whether in the form of program libraries and APIs or as building blocks in development environments for non-specialists, Artificial Intelligence will be available to a broader group of people for modular use in all kinds of systems. Particularly in areas such as data mining and pattern recognition, the increasingly noticeable shortage of skilled workers will make it necessary to open up the possibilities of AI to personnel who lack the specialist knowledge otherwise required. As a result, Artificial Intelligence will become more and more a matter of course for users as a freely available design element: for example, for assistance on the web, on mobile devices or in social networks.

Intelligent Virtual Assistants

We have known them for years: virtual assistants that respond to our commands on mobile devices or in home automation and, on request, browse the web, control the music volume or forward our shopping lists to retailers. Although this is already an everyday example of AI, a further trend for 2018 is that virtual assistants will become more flexible and intelligent. And they will continue to spread, whether as a source of information for customers or as a knowledgeable companion in everyday life. The already successful systems in this field, such as Amazon's Alexa or Google Home, have set standards here and will pave the way for new developments and improvements.

Personalisation of Information and Marketing

Anyone who is regularly on the Internet knows the effect: advertising and other product information are adapted to previous keyword searches or website visits. Based on the latest developments in machine learning and pattern recognition, AI will open up new possibilities in this area as well. Intelligent agents and assistants will not only be able to select ads and news results but will also be able to customise the entire user experience to the needs of the user. 2018 will be the year in which marketing and information management become intelligent, with tangible effects for users in terms of the possibilities for personalising and selecting services. At the same time, AI will become increasingly invisible from the end user's point of view: the result counts, not the technology behind it. Another exciting aspect will be the combination of these new technical possibilities with the still growing legal and political framework for data protection and the protection of privacy, especially in the European Union.

The real world

AI will not only enter virtual environments in 2018, but also physical reality. Robotics has long been regarded as a key discipline in the field; the underlying idea is that real intelligence can only arise through direct contact with the physical world. Be that as it may, 2018 will be the year in which Artificial Intelligence also becomes a key technology in the real world. Efforts to teach cars to drive autonomously are particularly visible in this area, and we will see tremendous progress in this field in the new year. But medical technology will also become more intelligent: AI systems use advanced pattern and image recognition techniques to interpret data from sources such as MRI, PET, CT or X-ray to facilitate the diagnosis of acute diseases. In general, machines are becoming more intelligent: through the Internet of Things, entire supply chains and production cycles become independently operating systems that use resources more efficiently and produce more with less energy, fewer raw materials and less time.

AI as an investment focus

The current and future role of AI naturally leads to a significant increase in investment in researching and implementing the resulting technologies. Companies will not only increase their expenditure in this area; the war for the best brains has also begun in recruiting. AI is becoming the focus of economic investment as well as of human resources development. For the foreseeable future, the topic will be accompanied both by a shortage of personnel in the economy and by great innovation potential. It remains to be seen whether this problem will have a lasting effect on the progress of development. It is up to society as a whole to fill the existing gaps and to address future deficits. Thus, 2018 will hopefully be a good year, and not only in the field of Artificial Intelligence.

20. December 2017
Posted by aisolab

Google's Artificial Intelligence has helped NASA to discover an eighth planet in a distant solar system. This system is very similar to ours, and life could exist there.

Life on a Kepler-90 planet?

The name of the discovered star is Kepler-90, and eight exoplanets orbit it: celestial bodies outside our solar system that circle a distant star, analogous to Earth, Venus, Mars, Mercury, Jupiter and so on. This is the first solar system with such a large family of planets that astronomers have ever discovered. The eighth of these exoplanets was identified by a neural network whose Artificial Intelligence, a product of Google, supported NASA in the discovery. The system tirelessly combed through the vast quantities of data collected by the Kepler space telescope. It cannot be ruled out that there is life around the star Kepler-90. The distant sun, 2,500 light-years away, is the only one known to us so far that gathers as many celestial bodies around it as our own sun. The discovery can therefore be considered spectacular, and NASA scientists initially even kept it secret. Also striking is the use of Artificial Intelligence, which made the discovery possible in the first place. The software was trained with planet signals and learned by itself which characteristics could point to an exoplanet. These celestial bodies are not visible from Earth, not even with the strongest telescope. Instead, minimal darkenings of stars are evaluated: only satellites of a particular order of magnitude and with a specific orbit can produce them, so they must be exoplanets, not comets or asteroids.

Artificial Intelligence: How did it work for astronomical purposes?

The Kepler datasets are usually analysed by experts in painstaking detail work. Over time, they develop tangible criteria for variations in the brightness of a star that indicate a planetary transit. These criteria then feed automated tests, but even these would take many decades to check all the data for exoplanets. The small brightness fluctuations can have many different causes, so smaller planets that are also relatively distant from their star are very difficult to detect. The neural network that Google provided to NASA, on the other hand, received 15,000 planet signals as input, which had previously been confirmed by astronomers. With these planet signals, the Artificial Intelligence learned how a signal must look to indicate a real exoplanet. This works even with weak signals. The accuracy of the neural network has now reached 96%, which researchers consider sensational.
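The transit signature the network is trained on can be illustrated with a toy dip detector on a simulated light curve; the threshold and brightness values are invented, and a real pipeline of course learns this decision rather than hard-coding it:

```python
# Toy transit detector: flag points where brightness falls clearly below
# the star's median level, as a planet passing in front of it would cause.
def find_dips(brightness, drop=0.01):
    """Return indices where brightness falls more than `drop` below the median."""
    ordered = sorted(brightness)
    median = ordered[len(ordered) // 2]
    return [i for i, b in enumerate(brightness) if median - b > drop]

# Simulated light curve: flat at 1.0 with two shallow transit-like dips.
curve = [1.0, 1.0, 0.985, 1.0, 1.0, 1.0, 0.984, 1.0, 1.0]
```

Periodic dips of consistent depth and duration are exactly the pattern the trained network picks out of the Kepler data, even when the signal is weak.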

Specifically, deep learning was the AI method used in the discovery of the eighth Kepler-90 planet, Kepler-90i. Here, a neural network permanently optimises its internal structure via its hidden layers, expanding and improving its learning algorithms. This results in stable learning success, which is also necessary for evaluating vast amounts of data. NASA scientists describe Kepler-90i as a Mercury-like celestial body with a surface temperature of about 420 °C and a size of nearly 130% of the Earth's, which orbits its star once every 14.4 days, a very short year. With deep learning, it may be possible to discover exoplanets even more similar to Earth, possibly bringing proof of extraterrestrial life within reach.