Generative Adversarial Network (GAN)

Rishav Walde
4 min read · Sep 14, 2024


Generative Adversarial Networks (GANs) are powerful neural networks introduced by Ian J. Goodfellow in 2014. They are used to support generative modeling in machine learning. GANs consist of two neural networks — a generator and a discriminator — that work in opposition to each other.

The generator creates new data samples, such as images, while the discriminator evaluates these samples against real data to determine whether they are genuine or fake. This competition between the two networks helps the GAN learn to produce increasingly realistic data.

GANs often use techniques like Convolutional Neural Networks (CNNs) to enhance their performance. Although GANs can be framed as supervised learning problems, they are primarily an unsupervised learning technique, learning patterns from input data to generate new instances similar to the original dataset. For example, if the generator creates random images of tables, the discriminator will compare these images to real-world table images to provide feedback, which helps the generator improve its output.
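As a minimal sketch of the two roles (not the author's code; the single-layer models and the shapes are assumptions chosen purely for illustration), the generator maps random noise to a sample, and the discriminator maps a sample to a "probability of being real":

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    # Map a random noise vector z to a fake sample
    # (here just one linear layer, for illustration).
    return z @ W

def discriminator(x, w):
    # Score a sample with a logistic unit:
    # outputs near 1 mean "real", near 0 mean "fake".
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Hypothetical dimensions: 2-D noise in, 3-D samples out.
W = rng.normal(size=(2, 3))  # generator weights
w = rng.normal(size=3)       # discriminator weights

z = rng.normal(size=2)          # random noise
fake = generator(z, W)          # generator's sample
score = discriminator(fake, w)  # discriminator's verdict, in (0, 1)
```

In a real GAN both functions are deep networks and their weights are updated against each other, but the interface is the same: noise in, sample out, score back.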

See the GAN structure in Fig. 1.

Why Were GANs Developed?

Machine learning algorithms and neural networks can sometimes be easily tricked into misclassifying data by adding noise. When noise is introduced, the likelihood of misclassification often increases.

To address this issue, Generative Adversarial Networks (GANs) were developed. GANs are designed to generate new, realistic data samples that resemble the original data. By doing so, GANs can help the model learn to recognize and visualize patterns more effectively, even in the presence of noise. This ability to generate realistic examples helps improve the robustness and performance of neural networks.

How Do GANs Work?

A GAN is an approach to generative modeling that produces new data resembling its training data. GANs have two main building blocks, two neural networks that compete with each other, enabling us to capture, copy, and analyze variations in a dataset. The two models are usually called the generator and the discriminator, which we will cover in the components of GANs. To understand the term GAN, let’s break it into three parts:

1. Generative- Learn a generative model that describes, in terms of a probabilistic model, how the data is produced. In simple words, it models the process by which the data could have been generated.

2. Adversarial- The model is trained in an adversarial setting, with the two networks working against each other.

3. Networks- Deep neural networks are used for training.

In GANs, there is a generator and a discriminator. The generator generates a fake sample of data, such as an image or audio, and tries to fool the discriminator.

The discriminator, on the other hand, tries to distinguish between the real and fake samples. The generator and the discriminator are both neural networks, and they run in competition with each other during the training phase.

The steps are repeated several times, and with each repetition, the generator and discriminator improve in their respective roles. The working can be visualized in the figure below.

Hence, the generative model captures the distribution of data and its trends in such a manner that it tries to maximize the probability of the discriminator making a mistake. The discriminator, on the other hand, is based on the model that estimates the probability that the sample it received is from the training data and not from the generator.
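The competing objectives can be made concrete with the discriminator's loss on one real and one fake score (a sketch; the scalar scores below are hypothetical values, not outputs of a trained network):

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator's objective on one real and one fake score:
    # it wants d_real -> 1 and d_fake -> 0, i.e. it maximizes
    # log(d_real) + log(1 - d_fake), written here as a loss to minimize.
    return -(np.log(d_real) + np.log(1.0 - d_fake))

# A discriminator that scores the real sample high and the fake low
# gets a small loss...
good = d_loss(d_real=0.9, d_fake=0.1)
# ...while one fooled by the generator (fake scored 0.9) gets a large loss.
fooled = d_loss(d_real=0.5, d_fake=0.9)
```

The generator's goal is exactly to push the discriminator from the first situation into the second.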

This is formulated as a minimax game: the discriminator tries to maximize its reward V(D, G), while the generator tries to minimize it, or in other words, maximize the discriminator’s loss.
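Written out, this is the value function from Goodfellow et al. (2014):

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

Here $D(x)$ is the discriminator's probability that $x$ is real, and $G(z)$ is the generator's sample from noise $z$.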

Training a GAN

Training a GAN has two alternating parts:

Part one: the discriminator is trained while the generator is idle (its weights are not updated). The discriminator is trained on real data for a number of epochs to see whether it can correctly predict them as real, and, in the same phase, on fake data produced by the generator to see whether it can correctly predict them as fake.

Part two: the generator is trained while the discriminator is idle. After the discriminator has been trained on the generator’s fake data, we take its predictions and use them as a training signal for the generator, so that the generator improves on its previous state and gets better at fooling the discriminator.

This method is repeated for a few epochs, and then the fake data is manually checked to see if it seems genuine. If it seems acceptable, then the training is stopped; otherwise, training continues for a few more epochs.
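The two-phase procedure above can be sketched end to end on a hypothetical 1-D toy problem (real data drawn from N(3, 1), an affine generator, a logistic discriminator, and hand-derived gradients; this is an illustration of the alternating updates, not a production implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
sig = lambda t: 1.0 / (1.0 + np.exp(-t))

a, c = 1.0, 0.0   # generator parameters: G(z) = a*z + c
w, b = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(500):
    x_real = rng.normal(3.0, 1.0)   # one real sample
    z = rng.normal()                # one noise sample
    x_fake = a * z + c              # generator's sample

    # Part one: update the discriminator (generator frozen).
    d_r, d_f = sig(w * x_real + b), sig(w * x_fake + b)
    # Gradients of -[log D(x_real) + log(1 - D(x_fake))] w.r.t. w and b.
    gw = -(1 - d_r) * x_real + d_f * x_fake
    gb = -(1 - d_r) + d_f
    w -= lr * gw
    b -= lr * gb

    # Part two: update the generator (discriminator frozen),
    # using the non-saturating loss -log D(G(z)).
    d_f = sig(w * x_fake + b)
    ga = -(1 - d_f) * w * z
    gc = -(1 - d_f) * w
    a -= lr * ga
    c -= lr * gc

# The generator's output mean should have drifted toward the real mean
# (3.0), though convergence is not guaranteed for GANs in general.
fake_mean = np.mean([a * rng.normal() + c for _ in range(1000)])
```

In practice, frameworks automate the gradients, each phase runs on mini-batches rather than single samples, and the "manual check" the article describes is done by inspecting generated samples every few epochs.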




Written by Rishav Walde


A passionate programmer and data scientist with success on LeetCode, GeeksforGeeks, HackerRank, and Kaggle. Skilled in Python (AI/ML/DL), SQL, and Tableau.
