
Why are encoded representations bad for classification?

Suppose I have a pre-trained, well-performing auto-encoder. When I train a classifier on the encodings it produces, the classifier does very poorly. In particular, it does much worse than a classifier trained on the raw (unencoded) inputs.
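
For concreteness, the first setup looks roughly like this (a minimal sketch; `encoder` stands for the encoder half of my auto-encoder, and `x_train` is the CIFAR-100 training set, both placeholders):

encoder.trainable = False               # keep the pre-trained weights frozen
encodings = encoder.predict(x_train)    # latent codes, shape (n_samples, latent_dim)
# the classifier (defined below) is then trained on `encodings` instead of x_train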

However, when I fine-tune the encoder on the classification loss, the classifier does quite well.
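
The fine-tuning setup is roughly this (a sketch: the encoder is stacked with the `classifier` defined below and trained end-to-end; the optimizer and epoch count are just illustrative):

encoder.trainable = True                 # unfreeze the pre-trained encoder
finetuned = tf.keras.Sequential([encoder, classifier])
finetuned.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
finetuned.fit(x_train, y_train, epochs=10, validation_split=0.1)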

Why are encoded representations bad for classification?


Details: I'm working on CIFAR-100, trying to classify the coarse image labels, i.e. 20 classes (I think I had the same problem when doing classification on CIFAR-10). The classifier has 5 dense layers and uses dropout:

import tensorflow as tf

num_classes = 20  # CIFAR-100 coarse labels

classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation='relu', name='classifier_hidden_1'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(256, activation='relu', name='classifier_hidden_2'),
    tf.keras.layers.Dense(128, activation='relu', name='classifier_hidden_3'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu', name='classifier_hidden_4'),
    # no activation: the output layer returns logits
    tf.keras.layers.Dense(num_classes, activation=None, name='classifier_out'),
], name='classifier')
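
For completeness, this is roughly how I train it on the encodings (a sketch; the loss uses from_logits=True because the output layer has no activation, `encodings` is the encoder output from above, and the epoch count is just illustrative):

(x_train, y_train), _ = tf.keras.datasets.cifar100.load_data(label_mode='coarse')
x_train = x_train.astype('float32') / 255.0   # y_train holds the 20 coarse classes

classifier.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'])
classifier.fit(encodings, y_train, epochs=20, validation_split=0.1)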

tensorflow, deep-learning, classification, autoencoder, representation
