Generative Adversarial Networks | Learning by producing
Machines have spent years learning to recognize and identify the photos they see, and in 2013 they reached human-level performance. Machine learning systems produce a simple output from a complex input: they can detect almost every detail of a photo and show users exactly what they want.
On the other hand, these algorithms can sometimes be fooled. A machine learning algorithm can easily recognize an apple, but put the same apple in a net bag and it no longer understands what it is, because this is an unexpected, unusual image.
In 2014, Ian Goodfellow and his colleagues at the University of Montreal in Canada developed a new machine learning system that uses game theory to turn this weakness into an advantage.
"The coolest idea in machine learning in the last 20 years." - Yann LeCun, Facebook's artificial intelligence research director
Do you know why it is so important? Machine learning systems produce simple output from complex input. For instance, the photos you upload to social media are inputs. The ML algorithm analyzes them with its neural networks, detects the objects in each photo, and produces simple output such as tags. So when you search for something on the internet, the machine can quickly find matches by filtering tons of data with those produced tags.
The newly developed Generative Adversarial Networks (GANs) do the opposite: they produce complex output from simple input. Give the computer random numbers and it creates extremely complex, realistic photographs of human faces. So the machine not only learns, but also produces. It is learning by producing!
There is a generative network, which starts drawing a picture from random noise. But there is also a second network, the discriminator, which examines the images created by the generative network.
We can compare these two neural networks to opponents in a game. There is a constant struggle between them. The generator's goal is to trick the discriminator and convince it that the images it produces are real. The discriminator's aim is to catch as many fake images as possible by comparing them against real ones.
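This tug-of-war can be written down as two opposing loss functions. Here is a minimal sketch in plain NumPy, where `d_real` and `d_fake` stand for the discriminator's "this is real" probabilities for real and generated samples; the function names are illustrative, not taken from any particular library:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants d_real -> 1 (real images judged real)
    # and d_fake -> 0 (generated images caught as fake).
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the opposite: a fooled discriminator, d_fake -> 1.
    return -np.mean(np.log(d_fake))

# A confident, correct discriminator has a low loss...
print(discriminator_loss(np.array([0.9]), np.array([0.1])))
# ...while a fooled discriminator means a low loss for the generator.
print(generator_loss(np.array([0.9])))
```

Training alternates between the two: one step lowering the discriminator's loss, one step lowering the generator's, so each network keeps pushing the other to improve.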
You can play with Generative Adversarial Networks (GANs) in your browser by clicking the link below:
By competing against each other, they train each other until the random points form a straight line or a circle. As you will notice, the computer needs thousands of iterations to learn even such simple drawings. Now imagine feeding it real human photographs instead of a line or a circle.
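To make "learning by competing" concrete, here is a toy GAN in plain NumPy. Instead of photos, the real data are just numbers drawn from a Gaussian around 3; a one-line generator and a tiny logistic discriminator play the same game described above. Everything here (the parameter names, the target distribution, the learning rate) is an illustrative assumption, not the code behind the browser demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = a + b*z, starting far from the real distribution N(3, 0.5)
a, b = 0.0, 1.0
# Discriminator D(x) = sigmoid(w1*x + w2*x**2 + c)
w1, w2, c = 0.0, 0.0, 0.0

lr, batch = 0.02, 64
for step in range(4000):
    # --- 1) discriminator step: push D(real) -> 1, D(fake) -> 0 ---
    real = 3.0 + 0.5 * rng.standard_normal(batch)
    z = rng.standard_normal(batch)
    fake = a + b * z
    d_real = sigmoid(w1 * real + w2 * real**2 + c)
    d_fake = sigmoid(w1 * fake + w2 * fake**2 + c)
    # gradient of -log D(real) - log(1 - D(fake)) w.r.t. the pre-sigmoid score
    g_real = -(1.0 - d_real)
    g_fake = d_fake
    w1 -= lr * np.mean(g_real * real + g_fake * fake)
    w2 -= lr * np.mean(g_real * real**2 + g_fake * fake**2)
    c -= lr * np.mean(g_real + g_fake)

    # --- 2) generator step: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.standard_normal(batch)
    fake = a + b * z
    d_fake = sigmoid(w1 * fake + w2 * fake**2 + c)
    gs = -(1.0 - d_fake)               # d(-log D)/d(score)
    gx = gs * (w1 + 2.0 * w2 * fake)   # chain rule through the score
    a -= lr * np.mean(gx)
    b -= lr * np.mean(gx * z)

samples = a + b * rng.standard_normal(1000)
print(f"generated mean = {samples.mean():.2f} (real data mean is 3.0)")
```

After a few thousand of these alternating steps, the generated samples drift toward the real distribution, which is exactly the slow convergence you see in the browser demo, just in one dimension instead of two.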
When this system was developed in 2014, it produced only low-resolution, grayscale images. Over the last five years, the quality of the photographs that artificial intelligence can synthesize has steadily increased.
In those five years, machine learning researchers have both refined GAN techniques and started applying them in many different fields. For example, computers can now produce cartoon and anime characters.
The PokeGAN project designs new Pokemon characters by learning from existing ones. The CycleGAN project turns draft drawings into photographs; it can learn painters' styles and turn photographs into paintings, or satellite photographs into maps. The StackGAN project turns text into images. For example, you write "a little bird with a short beak, black and blue" and it produces exactly such a bird image. It is not searching for the bird but actually producing it; these birds do not exist.
What caused such great progress in such a short time? It is not just the advancement of technology.
The chart above shows the amount of data generated on the Internet. Notice anything? Since 2010, we have been experiencing an explosion in the data we produce. This is one of the main reasons the GAN technique has been increasingly successful since 2014. In the last five years, we have increased the number and variety of sample data by uploading the photos and videos we take to the internet and social media platforms.
Well, how are we producing so much data? The reason is very simple. Take FaceApp, the app that ages people's photos: this single application alone causes hundreds of millions of photos to be uploaded every week.
Let's hope they are only used for machine learning. Because we accepted the terms and conditions without reading them, we handed all rights to those photographs to Yaroslav Goncharov, the developer of the application.
FaceApp is just one of the applications that use machine learning and GAN techniques. Nowadays, generator networks perceive images as a set of styles. They can learn a person's pose, posture, hair, face shape, eyes, and skin color separately, and combine them to produce new photos.
Before FaceApp, which almost everyone has used, there was FakeApp, which only some people know. FakeApp can combine videos of celebrities with GAN techniques and make them appear to do things they never did.

Deepfake Videos
To produce deepfake videos, the machines again need to learn with the GAN technique. Although the examples in the link are impressive, they actually just blend two existing sets of images.
A few months ago, a technique developed in Samsung's artificial intelligence laboratories made it possible to produce a video from a single photo.
Normally, as we said, machine learning needs a lot of data. Now it is enough to give the computer a single image. The computer combines it with basic facial gestures to produce a short video.
Probably in the near future we will be able to produce videos automatically from the selfies we take with our phones, or turn on the camera and animate a photo like a puppet with our own facial expressions.
That's all for now! Make sure you are following me on social media, and if you found this useful, please share Reverse Python with your friends. See you in the next post, DEVs!