December 17, 2024
Exploring GANs: Code a Basic Generative Adversarial Network in Keras
How can a computer create realistic images from scratch? Have you ever tried it? Generative Adversarial Networks (GANs) make this possible. In this post, I'll show you how to build a simple GAN in Keras so you can start experimenting with a technique that is redefining data creation and manipulation.
How GANs Work: The Generator and Discriminator
A GAN consists of two opposing neural networks: the generator and the discriminator. The generator turns random noise into fake data that imitates real data, while the discriminator tries to tell real samples apart from the generator's fakes.
This adversarial setup improves both networks: the generator produces increasingly convincing data, and the discriminator gets better at spotting fakes. The result? A system that gradually learns to create realistic images or data.
Setting Up the Environment
To build a GAN you need Python with TensorFlow and Keras installed. Before coding, set up your environment with these commands (Keras ships with TensorFlow 2.x, so the second install is optional):
pip install tensorflow
pip install keras
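To confirm the setup works, a quick sanity check is to import TensorFlow and print the versions (any recent 2.x release should be fine for this tutorial):
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)
print(keras.__version__)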
Coding the Basic GAN Model in Keras
The next step is to code the GAN's generator, discriminator, and combined model.
Building the Generator Network
The generator creates real-looking data from random noise. Here is a simple structure for a generator:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU
def build_generator():
    model = Sequential()
    model.add(Dense(128, input_dim=100))        # project 100-dim noise to a hidden layer
    model.add(LeakyReLU(alpha=0.01))
    model.add(Dense(784, activation='tanh'))    # output a flattened 28x28 image in [-1, 1]
    return model
This model maps random noise (a 100-dimensional vector) to data with the same dimensions as our target dataset (e.g., 28x28 images for MNIST).
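As an optional sanity check, you can push random noise through an untrained generator and confirm that the output shape is a flattened 28x28 image; the content will still be noise at this point:
import numpy as np

generator = build_generator()
noise = np.random.normal(0, 1, (1, 100))       # one 100-dimensional noise vector
sample = generator.predict(noise, verbose=0)
print(sample.shape)                            # (1, 784) -- a flattened 28x28 image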
Building the Discriminator Network
The discriminator classifies data as real or fake. It is a simple binary classifier:
from tensorflow.keras.layers import Dropout
def build_discriminator():
    model = Sequential()
    model.add(Dense(128, input_dim=784))        # flattened 28x28 image as input
    model.add(LeakyReLU(alpha=0.01))
    model.add(Dropout(0.3))                     # dropout helps the discriminator generalize
    model.add(Dense(1, activation='sigmoid'))   # probability that the input is real
    return model
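You can also check that the untrained discriminator returns a probability between 0 and 1 for any 784-dimensional input; the random vector below is only a stand-in for an image:
import numpy as np

discriminator = build_discriminator()
fake_sample = np.random.normal(0, 1, (1, 784))            # random stand-in for a flattened image
print(discriminator.predict(fake_sample, verbose=0))      # a single value in [0, 1]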
Combined Networks
Now we can combine the generator and discriminator into a single model. The two networks are trained alternately: first the discriminator learns to tell real data from the generator's fakes, then the generator is updated to produce better fakes that fool the discriminator.
from tensorflow.keras.optimizers import Adam
import numpy as np

# Build and compile the discriminator so it can be trained on its own
discriminator = build_discriminator()
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])

generator = build_generator()

# Freeze the discriminator's weights inside the combined model,
# so that training the GAN only updates the generator
discriminator.trainable = False

# Stack generator and discriminator: noise -> fake image -> real/fake score
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer=Adam())
During training, the GAN alternates between training the discriminator with real and fake data and training the generator to improve the quality of its generated data.
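To make the alternation concrete, here is a minimal sketch of a training loop on MNIST. It assumes the models built above, and names such as epochs, batch_size, and x_train are illustrative choices rather than requirements. It also relies on the classic tf.keras behavior where a model's trainable flag is captured at compile time; with newer Keras versions you may need to toggle discriminator.trainable around each call instead.
import numpy as np
from tensorflow.keras.datasets import mnist

# Load MNIST, scale pixels to [-1, 1] to match the generator's tanh output,
# and flatten each 28x28 image into a 784-dimensional vector
(x_train, _), _ = mnist.load_data()
x_train = (x_train.astype('float32') - 127.5) / 127.5
x_train = x_train.reshape(-1, 784)

epochs = 10000          # illustrative values; tune for your hardware
batch_size = 64
real_labels = np.ones((batch_size, 1))
fake_labels = np.zeros((batch_size, 1))

for step in range(epochs):
    # 1) Train the discriminator on a real batch and a fake batch
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_images = x_train[idx]
    noise = np.random.normal(0, 1, (batch_size, 100))
    fake_images = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real_images, real_labels)
    discriminator.train_on_batch(fake_images, fake_labels)

    # 2) Train the generator through the combined model: it is rewarded
    #    when the (frozen) discriminator labels its fakes as real
    noise = np.random.normal(0, 1, (batch_size, 100))
    gan.train_on_batch(noise, real_labels)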
Running and Observing Results
Once the model is trained, generate sample outputs to evaluate your GAN: feed random noise into the generator and look at the resulting images. They should resemble the target dataset (e.g., MNIST digits) rather than pure noise.
generated_data = generator.predict(np.random.normal(0, 1, (1, 100)))
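If you are generating MNIST-shaped data, a small matplotlib snippet (assuming matplotlib is installed) makes it easy to inspect the result; remember to reshape the flat 784-vector back to 28x28 and map the tanh range [-1, 1] back to [0, 1]:
import matplotlib.pyplot as plt

image = generated_data.reshape(28, 28)     # un-flatten the 784-dimensional output
image = (image + 1) / 2                    # rescale from [-1, 1] to [0, 1] for display
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.show()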
Conclusion and Next Steps
In this Keras tutorial, we coded a simple GAN and saw how the generator and discriminator interact through adversarial training. To take your GAN skills further, explore architectures such as DCGANs or conditional GANs, which produce more realistic and diverse outputs. With these tools, the creative applications are practically unlimited!