mattfox

How to add a custom loss function to Keras that solves an ODE?

I'm new to Keras, sorry if this is a silly question!

I am trying to get a single-layer neural network to find the solution to a first-order ODE. The neural network N(x) should be the approximate solution to the ODE. I defined the right-hand side function f, and a transformed function g that builds in the boundary condition. I then wrote a custom loss function that minimises only the residual of the approximate solution. I created some empty data for the optimizer to iterate over and set it going, but the optimizer does not seem to be able to adjust the weights to minimise this loss function. Am I thinking about this wrong?
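To make the boundary-condition trick concrete: for any network output N(x), the transform g(x) = x·N(x) + A satisfies g(0) = A automatically, so the loss never has to penalise the boundary. A minimal check with a dummy stand-in for N (plain NumPy; the lambda below is a hypothetical placeholder, not the actual network):

```python
import numpy as np

# Hypothetical stand-in for the network output N(x)
N = lambda x: np.sin(3 * x) - 2.0

A = 1.0  # boundary value at x = 0

# Transform: g(0) = 0 * N(0) + A = A, regardless of what N is
g = lambda x: x * N(x) + A

print(g(0.0))  # exactly A = 1.0
```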

import sys
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

# Define initial condition
A = 1.0

# Define empty training data
x_train = np.empty((10000, 1))
y_train = np.empty((10000, 1))

# Define transformed equation (forced to satisfy boundary conditions)
g = lambda x: N(x.reshape((1000,))) * x + A
# Define rhs function
f = lambda x: np.cos(2 * np.pi * x)

# Define loss function
def OdeLoss(g, f):
    epsilon = sys.float_info.epsilon
    def loss(y_true, y_pred):
        x = np.linspace(0, 1, 1000)
        R = K.sum(((g(x+epsilon)-g(x))/epsilon - f(x))**2)
        return R
    return loss

# Define input tensor
input_tensor = tf.keras.Input(shape=(1,))
# Define hidden layer
hidden = tf.keras.layers.Dense(32)(input_tensor)
# Define non-linear activation layer
activate = tf.keras.activations.relu(hidden)
# Define output tensor
output_tensor = tf.keras.layers.Dense(1)(activate)

# Define neural network
N = tf.keras.Model(input_tensor, output_tensor)

# Compile model
N.compile(loss=OdeLoss(g, f), optimizer='adam')

N.summary()

# Train model
history = N.fit(x_train, y_train, batch_size=1, epochs=1, verbose=1)
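For reference, with f(x) = cos(2πx) and u(0) = A = 1 as above, the ODE u'(x) = f(x) has the closed-form solution u(x) = A + sin(2πx)/(2π), which is what the trained network's transform g should approximate. A quick numerical sanity check in plain NumPy, independent of the Keras code:

```python
import numpy as np

A = 1.0
f = lambda x: np.cos(2 * np.pi * x)                    # right-hand side, as above
u = lambda x: A + np.sin(2 * np.pi * x) / (2 * np.pi)  # closed-form solution

# Check the boundary condition and the residual u'(x) - f(x)
# using central differences
x = np.linspace(0.0, 1.0, 101)
h = 1e-6
residual = (u(x + h) - u(x - h)) / (2 * h) - f(x)

print(abs(u(0.0) - A))           # 0.0 (boundary condition exact)
print(np.max(np.abs(residual)))  # small: u solves the ODE
```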

The method is based on Lecture 3.2 of MIT course 18.337J, by Chris Rackauckas, who does this in Julia. Cheers!

keras

ode

loss
