No gradients provided for any variable in Tensorflow
I have the following issue when running the code below:
import tensorflow.compat.v1 as tfc
tfc.disable_eager_execution()  #needed under TF 2.x so the placeholders below work
image = tfc.Variable(tfc.zeros((1, 224, 224, 3)))
#It declares the image
x = tfc.placeholder(tfc.float32, (1, 224, 224, 3))
#our trainable adversarial input
x_hat = image
#Update ref by assigning value to it.
assign_op = tfc.assign(x_hat, x)
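#running assign_op in a session with feed_dict={x: some_image} copies that image into the x_hat variable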
#Define learning rate as a placeholder
learning_rate = tfc.placeholder(tfc.float32, ())
#defines y_hat as an int32 scalar (the target class index)
y_hat = tfc.placeholder(tfc.int32, ())
#defines labels as a one-hot vector over the 12 classes
labels = tfc.one_hot(y_hat, 12)
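#e.g. feeding y_hat = 3 gives labels = [0,0,0,1,0,0,0,0,0,0,0,0]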
#loss used: Measures the probability error in discrete classification tasks in which the classes are mutually
#exclusive (each entry is in exactly one class).
#For example, each CIFAR-10 image is labeled with one and only one label: an image can be a dog or a truck,
#but not both.
loss = tfc.nn.softmax_cross_entropy_with_logits(logits=logits, labels=[labels])
optim_step = tfc.train.GradientDescentOptimizer(learning_rate).minimize(loss, var_list=[x_hat])
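logits is produced earlier in the notebook by the model I am generating the adversarial example for; I have not included that part here. As far as I can tell, the loss op itself is fine in isolation: a dummy version with made-up values (unrelated to my model), like the one below, evaluates without error.
#stand-alone sanity check of the loss op, with made-up values
dummy_logits = tfc.constant([[2.0, 1.0, 0.1]])
dummy_labels = tfc.one_hot([0], 3)
dummy_loss = tfc.nn.softmax_cross_entropy_with_logits(logits=dummy_logits, labels=dummy_labels)
#evaluating dummy_loss in a session gives roughly [0.417]
So I suspect the problem is in how loss is connected to x_hat rather than in the loss op itself.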
Whenever I run this code, I get the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-41-b55e39746a98> in <module>
17 #assign_op = tf.assign(x_hat, x)
18 #The way the loss is optimized
---> 19 optim_step = tfc.train.GradientDescentOptimizer(learning_rate).minimize(loss, var_list=[x_hat])
/usr/local/lib/python3.8/dist-packages/tensorflow/python/training/optimizer.py in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
405 vars_with_grad = [v for g, v in grads_and_vars if g is not None]
406 if not vars_with_grad:
--> 407 raise ValueError(
408 "No gradients provided for any variable, check your graph for ops"
409 " that do not support gradients, between variables %s and loss %s." %
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'Variable_2:0' shape=(1, 224, 224, 3) dtype=float32>"] and loss Tensor("softmax_cross_entropy_with_logits_sg_11/Reshape_2:0", shape=(1,), dtype=float32).
What is wrong with my code?
python
tensorflow
conv-neural-network
gradient-descent