1 year ago

#285899

Denis D.

Adding randomness to image augmentation when using tf.data.Dataset.from_tensor_slices with tf.cond

Good day to all!

I am trying to add the ability to control the "randomness" of the augmentation applied to my image data, meaning that every augmentation operation is performed with a certain probability (for example, 0.1). Since I use plain TensorFlow 2 operations for augmentation, I used the tf.cond operator to choose whether to apply an augmentation or not. However, when executing the code, I get the following error:

OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.

A sample of the code is below:

    import tensorflow as tf

    # params
    augment = True
    augment_prob = tf.constant(0.1, dtype=tf.float32)
    # stating augmentation methods
    augment_methods = [
        random_rotate90_image,
        random_flip_vertical_image,
        random_flip_horizontal_image,
        random_crop_image,
        random_convert_to_grayscale_image]

    AUTOTUNE = tf.data.AUTOTUNE
    # load data
    train = data_load()
    # creating tf.data.Dataset and defining other operations
    dataset = tf.data.Dataset.from_tensor_slices((train.iloc[:, 0], train.iloc[:, 1:]))
    dataset = dataset.shuffle(train.shape[0])
    dataset = dataset.map(load_image, num_parallel_calls=AUTOTUNE)
    dataset = dataset.cache()
    # add augmentation if needed
    if augment:
        for method in augment_methods:
            dataset = dataset.map(lambda x, y: tf.cond(
                tf.math.less(tf.random.uniform([], 0., 1.), augment_prob),  # condition (if generated value is less than 0.1, perform augmentation)
                lambda: method(x, y),                                       # perform augmentation
                lambda: x, y),                                              # do not perform
                num_parallel_calls=AUTOTUNE)

    dataset = dataset.batch(batch_size)

    model = create_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='categorical_crossentropy')
    model.summary()

    model.fit(x=dataset, epochs=10)
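
For context, data_load, load_image, create_model and the random_* augmentation functions are defined elsewhere in my script and are not the point of the question. Purely as an illustration of their shape (this is a sketch, not my exact code; the file-path/JPEG assumption is just an example), load_image and one augmentation function look roughly like:

    import tensorflow as tf

    # illustrative sketch only: the first dataset element is assumed to be a file path,
    # the second the label row
    def load_image(path, label):
        image = tf.io.read_file(path)
        image = tf.image.decode_jpeg(image, channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        return image, label

    # illustrative sketch only: each augmentation takes (image, label) and returns (image, label)
    def random_flip_horizontal_image(image, label):
        image = tf.image.flip_left_right(image)
        return image, label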

Without the tf.cond statement everything works fine. As far as I can tell, the exception is raised inside tf.cond while the boolean condition is evaluated. I have searched for similar cases but unfortunately have not found any. Has anybody faced the same problem? Could you tell me how you overcame it? I would be very grateful for any help.
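
To make the question easier to reproduce, here is a stripped-down, self-contained sketch of the conditional-augmentation pattern I am trying to achieve, with dummy data and a single dummy augmentation (all names and shapes here are made up just for this example):

    import tensorflow as tf

    augment_prob = tf.constant(0.1, dtype=tf.float32)

    # stand-in for one of the real augmentation functions
    def dummy_augment(x, y):
        return tf.image.flip_left_right(x), y

    # two dummy RGB images with scalar labels
    images = tf.random.uniform([2, 32, 32, 3])
    labels = tf.constant([0, 1])
    dataset = tf.data.Dataset.from_tensor_slices((images, labels))

    # same conditional-augmentation pattern as in the full pipeline above
    dataset = dataset.map(lambda x, y: tf.cond(
        tf.math.less(tf.random.uniform([], 0., 1.), augment_prob),
        lambda: dummy_augment(x, y),
        lambda: (x, y)))

    for x, y in dataset:
        print(x.shape, y.numpy())

In this sketch both branches of tf.cond return an (image, label) tuple, so both sides of the condition produce the same structure.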

python, tensorflow, tensorflow2.x, tf.data.dataset

0 Answers
