A picture is worth a thousand words, which is why we have condensed the essentials of Artificial Intelligence into a single image.
There are certainly other points of view and approaches, so we welcome feedback and suggestions, which we will gladly consider in the next release.
Here you can download the high-resolution AI Poster image.
The modern smartphone era, which began ten years ago with the introduction of the first iPhone, has now matured. When Google's Pixel 2 phones were introduced in San Francisco, CEO Sundar Pichai said that smartphone hardware has plateaued, making it difficult to develop exciting new hardware-based products. Accordingly, Google has shifted from a mobile-first to an AI-first company, as machine learning is one of Google's strongest assets.
Today, Google's neural machine translation covers 96 languages and delivers two billion translations per day. The live translation of a woman speaking Swedish, wearing wireless Google Pixel Buds, to an English speaker holding a Google Pixel smartphone was perhaps the best demonstration of the power of Google's newly integrated AI phone services.
The developer pre-release of TensorFlow Lite, Google's latest open-source machine-learning software, is an exciting development in AI. The company's commitment to AI that can run algorithms on a mobile device, with no internet connectivity, lays the foundation for the Artificial Intelligence of Things (AIoT) of the future.
As far as consumer products are concerned, voice assistants such as Google Assistant, Amazon's Alexa, and Apple's Siri are among the most popular mainstream AI applications. For 30 or 40 dollars, a person can get their own interactive AI assistant, provided that they have WiFi and access to charging.
TensorFlow Lite represents the first tangible steps towards making Artificial-Intelligence-powered devices not only accessible but even disposable, which could eventually mean the end of physical buttons. Developers can now preview TensorFlow Lite for Android and iOS. Instead of requiring new hardware for AI applications, existing hardware, such as Snapdragon processors, is used to execute algorithms that are normally impossible for mobile devices without connecting to the cloud.
With Google's new Lite artificial-intelligence platform, you can run AI models on a smartphone and, after adding new data, rerun these algorithms to get new results. It is on-the-go machine learning with no need for internet connectivity.
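The key idea can be illustrated without any ML framework at all: once a model's weights are stored locally, inference is just local arithmetic with no network calls. The sketch below is a toy stand-in (a hand-written logistic model with invented weights), not TensorFlow Lite's actual API:

```python
import math

# Toy stand-in for on-device inference: the "model" (weights stored
# locally on the phone) is evaluated entirely on the device, with no
# network calls. All weights and inputs are invented for illustration.

def load_model(weights, bias):
    """Return a closure that scores an input vector locally."""
    def predict(x):
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid score in (0, 1)
    return predict

# The model is fetched once; afterwards inference needs no connectivity.
predict = load_model(weights=[0.4, -0.2, 0.1], bias=0.05)
print(round(predict([1.0, 2.0, 3.0]), 3))  # -> 0.587
```

Adding new data and rerunning, as described above, then amounts to calling `predict` again locally, which is exactly what makes the approach work offline.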
If you are one of those people who fear that hundreds of devices in your house could spy on you via your internet connection, you will be happy to know that Google's researchers are designing TensorFlow Lite specifically to address such concerns.
According to the TensorFlow Lite website, the software was developed to fulfil the following criteria:
It is interesting to see what is next in Google's AI platform miniaturization effort. It paves the way for voice-controlled disposables based on cheap chips, and for AI-powered devices that will not expose your entire network to hacker attacks.
If Google continues to bring more value to less powerful devices, we will eventually live in a world where Artificial Intelligence can affordably be built into any device, even disposable ones. Google engineer and technical director of TensorFlow Pete Warden told MIT Technology Review: "What I want is a 50-cent chip that allows easy speech recognition and runs on a button battery for a year."
TensorFlow Lite takes the company one step closer to Warden's vision.
The main topics of this workshop include interactive presentations on AI as well as open discussions about technology, applications, social impact, and more.
I am looking forward to seeing you there!
A group of researchers at the UCL (University College London) Knowledge Lab and Pearson have published an interesting paper on AI in education and the overall transformation of learning and teaching through technology. The paper aims at two things: first, to explain what AIEd is and how it is built; second, to identify the benefits of AIEd and how artificial intelligence can positively transform education in the coming years.
The paper describes how an adequately designed and well-thought-out AIEd implementation can successfully contribute to the classroom environment. Importantly, the researchers do not see AIEd replacing teachers; instead, the role of teachers continues to evolve and is eventually transformed so that their time is used more effectively.
The researchers conclude that AIEd should be implemented in blended learning environments where digital technologies and traditional classroom activities complement each other. Achieving this means addressing the 'messiness' of real classrooms, universities, and workplace learning settings, and involving teachers and students in the application of AIEd, so that the design would look like this:
On 23 November 2017, the conference "IoT Future Trends" will take place, hosted by the eco Association and IHK Köln.
The central topic is how to intelligently use and evaluate the enormous amounts of data generated in the Internet of Things, with the help of artificial intelligence, deep learning, and other methods.
I am delighted to be giving a lecture there on the use of artificial intelligence; I have already given an interview in advance.
The eco Association has provided us with two free tickets, which we would like to offer you as part of a raffle.
Please send us a short e-mail to Tickets@aiso-lab.de
See you soon in Cologne!
A year ago, Google published the paper "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs" in JAMA. Diabetic Retinopathy (DR), one of the leading causes of blindness, is a medical condition in which the retina is damaged as a result of diabetes.
Google Brain, the company's AI team, has worked with specialists to help them diagnose DR. The group gathered more than 128,000 images, each assessed by 3-7 ophthalmologists, and used them to train a deep learning model that recognises Diabetic Retinopathy. The algorithm's performance was then tested on two separate datasets totalling about 12,000 images.
The use of Machine Learning (ML) for Diabetic Retinopathy is a leap forward for both AI and health care. Automated diagnosis of DR with high accuracy can help eye clinics screen more patients and prioritise treatment. This innovation can help fill the existing shortage in ophthalmology departments.
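Since each image carried 3-7 independent grades, those grades must be combined into a single reference label before training. A common aggregation is a majority vote; the tie-breaking rule below (towards the more severe grade) is our own assumption for illustration, not necessarily what the paper did:

```python
from collections import Counter

def consensus_grade(grades):
    """Majority-vote label from multiple ophthalmologist grades (0-4).
    Ties are broken towards the more severe (higher) grade - a
    conservative choice assumed here purely for illustration."""
    counts = Counter(grades)
    top = max(counts.values())
    return max(g for g, c in counts.items() if c == top)

print(consensus_grade([2, 2, 3]))     # -> 2 (clear majority)
print(consensus_grade([1, 3, 3, 1]))  # -> 3 (tie, severer grade wins)
```

A label produced this way serves as the training target for the deep learning model described above.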
Last year, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. The latest version of the program, named AlphaGo Zero, is the first to master Go without human guidance.
Figure 1: Self-play reinforcement learning in AlphaGo Zero
Google DeepMind researchers have presented an algorithm based exclusively on reinforcement learning, without human data, guidance, or domain knowledge beyond the rules of the game. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections, and it learns solely through self-play, starting from entirely random moves.
This neural network improves the quality of the tree search, resulting in better move selection and stronger self-play in the next iteration. Starting tabula rasa, the new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
Figure 2: MCTS in AlphaGo Zero
The results demonstrate that a pure reinforcement learning approach is entirely feasible, even in the most challenging domains. Moreover, this approach requires only a few additional hours of training and achieves much better asymptotic performance compared with training on human expert data.
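The core loop - learning purely from self-play with no human examples - can be sketched on a toy game. The snippet below applies the same tabula-rasa idea to 5-stone Nim (take 1 or 2 stones per turn; taking the last stone wins), with a simple tabular value estimate standing in for AlphaGo Zero's neural network and tree search:

```python
import random

# Tabula-rasa self-play on toy Nim: both players share one value table,
# start with random play, and improve only from game outcomes. No human
# data is used anywhere; all constants are illustrative choices.

random.seed(0)
values = {}  # (stones, action) -> estimated win rate for the mover

def choose(stones, explore=0.2):
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < explore:          # keep exploring new moves
        return random.choice(actions)
    return max(actions, key=lambda a: values.get((stones, a), 0.5))

def self_play():
    history, stones, player = [], 5, 0
    while stones > 0:
        a = choose(stones)
        history.append((player, stones, a))
        stones -= a
        player = 1 - player
    winner = 1 - player  # whoever just took the last stone wins
    for p, s, a in history:  # credit each move with the final outcome
        old = values.get((s, a), 0.5)
        reward = 1.0 if p == winner else 0.0
        values[(s, a)] = old + 0.1 * (reward - old)

for _ in range(5000):
    self_play()

# In Nim, taking 2 from 5 (leaving 3) is the winning move; after enough
# self-play the table should reflect that.
print(round(values[(5, 2)], 2), round(values[(5, 1)], 2))
```

The program is its own teacher in exactly the sense described above: each generation of the value table produces the games from which the next generation learns.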
To enable earlier diagnosis, researchers at the University of Bari in Italy developed a machine-learning algorithm to recognise structural changes in the brain caused by Alzheimer's disease. First, they trained the algorithm on 67 MRI (Magnetic Resonance Imaging) scans, 38 from people with Alzheimer's and 29 from healthy controls. The scans came from the Alzheimer's Disease Neuroimaging Initiative database at the University of Southern California in Los Angeles.
The resulting algorithm analyses MRI scans and flags structural changes in the brain caused by the disease with an overall accuracy of more than 80 percent: it distinguishes the brains of healthy subjects from those with Alzheimer's with 86% accuracy, and stable patients from those with MCI (Mild Cognitive Impairment) with 84% accuracy.
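As a rough illustration of this kind of classifier (the study itself used MRI-derived features with a more sophisticated model), here is a minimal nearest-centroid rule on invented three-number feature vectors standing in for regional brain measurements:

```python
import math

# Hypothetical sketch: nearest-centroid classification of scans using
# invented feature vectors (e.g. normalised regional volumes). This
# illustrates the idea only; it is not the study's actual method.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

train = {
    "healthy":    [[1.00, 0.98, 1.02], [0.97, 1.01, 0.99]],
    "alzheimers": [[0.80, 0.76, 0.83], [0.78, 0.81, 0.79]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}
print(classify([0.79, 0.80, 0.81], centroids))  # -> alzheimers
```

The reported accuracies come from evaluating a (far richer) version of this decision rule on held-out scans.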
At present, there is no cure for Alzheimer's; however, early diagnosis means that patients can receive treatment sooner and make better care arrangements. Doctors already use MRI scans to look for changes characteristic of Alzheimer's, but scientists believe artificial intelligence could help specialists diagnose the condition before such differences become visible. Researchers think this technique could eventually detect Alzheimer's and other neurodegenerative diseases up to ten years before symptoms appear.
The AICamp “Unconference” in Frankfurt follows the tradition of the Cloud Camps which have been taking place since 2009 and offer lectures and discussions.
I will talk about the "7 Challenges of AI Projects" and am looking forward to inspiring discussions afterwards.
A team of scientists at New York University has developed a new multi-scale, sliding-window approach that can be used for image classification, detection, and localization. Compared with other approaches on the ILSVRC datasets, it ranked 4th in classification, 1st in localization, and 1st in detection.
Another important contribution of this paper is the clarification of how ConvNets can be used effectively for detection and localization tasks; the team is the first to show how this can be achieved on ImageNet. The proposed scheme involves substantial modifications to the network design used for classification. The approach also shows how different tasks can be learned simultaneously with a single shared network.
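The basic mechanics of a multi-scale sliding window can be sketched independently of any ConvNet: slide a window over the image at several sizes, score each patch, and keep the best box. The scorer below is a dummy (mean patch intensity) standing in for a network's class score; all sizes and values are invented:

```python
def slide(image, window, step, score_fn):
    """Score every window-sized patch; return (score, (y, x, window))."""
    h, w = len(image), len(image[0])
    best = (float("-inf"), None)
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            patch = [row[x:x + window] for row in image[y:y + window]]
            best = max(best, (score_fn(patch), (y, x, window)))
    return best

def multi_scale(image, windows, step, score_fn):
    """Run the detector at several window sizes; keep the best box."""
    return max(slide(image, w, step, score_fn) for w in windows)

# Dummy "object": a bright 3x3 block; scorer: mean patch intensity.
img = [[0] * 8 for _ in range(8)]
for y in range(2, 5):
    for x in range(3, 6):
        img[y][x] = 1
mean = lambda p: sum(map(sum, p)) / (len(p) * len(p[0]))
score, (y, x, win) = multi_scale(img, [2, 3, 4], 1, mean)
print(score, (y, x, win))  # a fully bright window scores 1.0
```

In the paper's setting, the expensive per-patch scorer is a ConvNet, and the key efficiency trick is sharing its computation across overlapping windows rather than rescoring each patch from scratch.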