1 year ago
#372252
Asky
Why does my autoencoder reconstruct the anomalous image instead of the anomaly-free image?
I really need help with this. I am working on an anomaly-detection project using an autoencoder. The first part of the project is to build an autoencoder trained only on anomaly-free images, so that when it is tested on anomalous images (images it has never seen), it reconstructs the anomaly-free version. I can then compare the input image (anomalous) with the output image (reconstructed anomaly-free) and visualize the reconstruction loss. In my case, however, the autoencoder always reconstructs the input image as it is, whether it is anomaly-free or anomalous. It is not supposed to reproduce the anomalous images, since it was never trained on them. Can someone please tell me what I am missing?
My training dataset is composed of 10 identical RGB anomaly-free images, and my test dataset is composed of 1 anomalous image and 1 anomaly-free image. Below you can find the output of the autoencoder for the test images:
https://i.stack.imgur.com/dj8cz.jpg
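For reference, the comparison step I have in mind is roughly the sketch below (assuming a trained model `autoencoder` and a preprocessed test batch `test_images` of shape (N, 128, 128, 1) scaled to [0, 1]; the names are placeholders, not my exact code):

import numpy as np
import matplotlib.pyplot as plt

# Reconstruct the test images with the trained autoencoder.
reconstructions = autoencoder.predict(test_images)

# Pixel-wise squared error between input and reconstruction; averaging over
# the channel axis gives one anomaly map per image.
error_maps = np.mean(np.square(test_images - reconstructions), axis=-1)

# Per-image anomaly score (mean reconstruction error), usable for thresholding.
scores = error_maps.reshape(len(error_maps), -1).mean(axis=1)

# Visualize input, reconstruction, and error map for the first test image.
fig, axes = plt.subplots(1, 3, figsize=(9, 3))
axes[0].imshow(test_images[0].squeeze(), cmap="gray"); axes[0].set_title("input")
axes[1].imshow(reconstructions[0].squeeze(), cmap="gray"); axes[1].set_title("reconstruction")
axes[2].imshow(error_maps[0], cmap="jet"); axes[2].set_title("error map")
plt.show()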
Here is my autoencoder architecture. I was planning to modify it for better reconstruction, but I stopped when it started reconstructing the anomalous images instead of anomaly-free images.
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose, LeakyReLU
from tensorflow.keras.models import Model


def architecture_MVTEC(input_shape=(128, 128, 1), latent_dim=100):
    # Layer hyperparameters: the last encoder layer uses "valid" padding and an
    # 8x8 kernel to collapse the 8x8 feature map into a 1x1 latent code.
    parameters = dict()
    N_layers = 8
    parameters["filters"] = [32, 32, 64, 64, 128, 64, 32, latent_dim]
    parameters["kernel_size"] = [4, 3, 4, 3, 4, 3, 3, 8]
    parameters["strides"] = [2, 1, 2, 1, 2, 1, 1, 1]
    parameters["padding"] = ["same" for _ in range(N_layers - 1)] + ["valid"]

    # Input
    inputs = Input(shape=input_shape)
    x = inputs

    # Encoder
    for i in range(N_layers):
        x = Conv2D(
            filters=parameters["filters"][i],
            kernel_size=parameters["kernel_size"][i],
            strides=parameters["strides"][i],
            padding=parameters["padding"][i])(x)
        x = LeakyReLU(alpha=0.2)(x)

    # Decoder (mirrors the encoder with transposed convolutions)
    for i in reversed(range(N_layers)):
        x = Conv2DTranspose(
            filters=parameters["filters"][i],
            kernel_size=parameters["kernel_size"][i],
            strides=parameters["strides"][i],
            padding=parameters["padding"][i])(x)
        x = LeakyReLU(alpha=0.2)(x)

    # Output: project back to the number of input channels
    x = Conv2DTranspose(
        filters=input_shape[2],
        kernel_size=(3, 3),
        strides=(1, 1),
        padding="same")(x)
    outputs = x

    # Autoencoder
    autoencoder = Model(inputs, outputs, name="autoencoder")
    return autoencoder
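For completeness, the training setup is a plain reconstruction objective, roughly like the sketch below (assuming `train_images` holds the 10 anomaly-free images as an array of shape (10, 128, 128, 1) scaled to [0, 1]; the optimizer, learning rate, and epoch count are placeholders, not my exact settings):

import tensorflow as tf

# Build the model from the function above.
autoencoder = architecture_MVTEC(input_shape=(128, 128, 1), latent_dim=100)
autoencoder.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                    loss="mse")

# The targets are the inputs themselves: the autoencoder only ever sees the
# anomaly-free images during training.
autoencoder.fit(train_images, train_images,
                epochs=200,
                batch_size=2,
                shuffle=True)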
Here is the autoencoder's summary:
Model: "autoencoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 128, 128, 1)] 0
conv2d (Conv2D) (None, 64, 64, 32) 544
leaky_re_lu (LeakyReLU) (None, 64, 64, 32) 0
conv2d_1 (Conv2D) (None, 32, 32, 32) 16416
leaky_re_lu_1 (LeakyReLU) (None, 32, 32, 32) 0
conv2d_2 (Conv2D) (None, 32, 32, 32) 9248
leaky_re_lu_2 (LeakyReLU) (None, 32, 32, 32) 0
conv2d_3 (Conv2D) (None, 16, 16, 64) 32832
leaky_re_lu_3 (LeakyReLU) (None, 16, 16, 64) 0
conv2d_4 (Conv2D) (None, 16, 16, 64) 36928
leaky_re_lu_4 (LeakyReLU) (None, 16, 16, 64) 0
conv2d_5 (Conv2D) (None, 8, 8, 128) 131200
leaky_re_lu_5 (LeakyReLU) (None, 8, 8, 128) 0
conv2d_6 (Conv2D) (None, 8, 8, 64) 73792
leaky_re_lu_6 (LeakyReLU) (None, 8, 8, 64) 0
conv2d_7 (Conv2D) (None, 8, 8, 32) 18464
leaky_re_lu_7 (LeakyReLU) (None, 8, 8, 32) 0
conv2d_8 (Conv2D) (None, 1, 1, 100) 204900
leaky_re_lu_8 (LeakyReLU) (None, 1, 1, 100) 0
conv2d_transpose (Conv2DTranspose) (None, 8, 8, 100) 640100
leaky_re_lu_9 (LeakyReLU) (None, 8, 8, 100) 0
conv2d_transpose_1 (Conv2DTranspose) (None, 8, 8, 32) 28832
leaky_re_lu_10 (LeakyReLU) (None, 8, 8, 32) 0
conv2d_transpose_2 (Conv2DTranspose) (None, 8, 8, 64) 18496
leaky_re_lu_11 (LeakyReLU) (None, 8, 8, 64) 0
conv2d_transpose_3 (Conv2DTranspose) (None, 16, 16, 128) 131200
leaky_re_lu_12 (LeakyReLU) (None, 16, 16, 128) 0
conv2d_transpose_4 (Conv2DTranspose) (None, 16, 16, 64) 73792
leaky_re_lu_13 (LeakyReLU) (None, 16, 16, 64) 0
conv2d_transpose_5 (Conv2DTranspose) (None, 32, 32, 64) 65600
leaky_re_lu_14 (LeakyReLU) (None, 32, 32, 64) 0
conv2d_transpose_6 (Conv2DTranspose) (None, 32, 32, 32) 18464
leaky_re_lu_15 (LeakyReLU) (None, 32, 32, 32) 0
conv2d_transpose_7 (Conv2DTranspose) (None, 64, 64, 32) 16416
leaky_re_lu_16 (LeakyReLU) (None, 64, 64, 32) 0
conv2d_transpose_8 (Conv2DTranspose) (None, 128, 128, 32) 16416
leaky_re_lu_17 (LeakyReLU) (None, 128, 128, 32) 0
conv2d_transpose_9 (Conv2DTranspose) (None, 128, 128, 1) 289
=================================================================
Total params: 1,533,929
Trainable params: 1,533,929
Non-trainable params: 0
_________________________________________________________________
deep-learning
encoding
decoding
autoencoder
anomaly-detection
0 Answers