Nelworks
Season 2

S2-EP06: Physics-Informed Neural Networks and Objective Function Mismatch

Understanding physics-informed neural networks and objective function mismatch. Learn about PINNs, physics constraints, and when optimization objectives don't match real-world goals.

It learned! It learned the Theory of Everything!
It learned to wiggle.
It's not wiggling! It's a Deep Neural Network I trained to predict the arm's trajectory.
The Mean Squared Error is $10^{-6}$! It's basically zero!
The MSE is low. The R-squared is high. The numbers are happy.
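For reference, the two happy numbers in standard form ($y_i$ are measured positions, $\hat{y}_i$ the model's predictions, $\bar{y}$ their mean):

$$
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2,
\qquad
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}
$$

Both measure distance to the data points, and nothing else.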
Now, show me the energy consumption plot.
Oh. That part is a little... weird.
'Weird?' Shez, your robot just invented a perpetual motion machine.
According to your model, to perform that elegant little flip, the arm needs to *generate* 50 Watts of power out of thin air.
It violated the First Law of Thermodynamics.
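The broken law, stated as an energy balance (our phrasing; $P_{\text{motor}}$ is power supplied by the actuators, $P_{\text{loss}}$ friction and heat):

$$
\frac{dE}{dt} = P_{\text{motor}}(t) - P_{\text{loss}}(t), \qquad P_{\text{loss}}(t) \ge 0
$$

The arm's energy can only change by what the motors put in, minus losses. A flip that conjures 50 extra Watts has $dE/dt$ exceeding the supplied power, which no real arm can do.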
But the position prediction is perfect! Who cares about the energy?
Anyone who doesn't want their robot to tear a hole in spacetime.
I wish for a model that perfectly fits my data!
The optimizer is a genie. It will grant your wish *literally*.
You wished for a low MSE. You got a low MSE. You didn't wish for a model that respected reality.
This is the genie's contract.
All it says is 'Make the red line touch the blue dots.' It says *nothing* about Newton, Einstein, or the laws of energy.
So... the model has no common sense?
A calculator has no common sense. It just calculates.
Your neural network is just a very, very complicated calculator. It will happily divide by zero if you let it.
Let's ask your model to predict the arm's position if it moves 10% faster.
Whoa! What was that?!
That was **Extrapolation**. Your model is a genius inside the training data, and a complete lunatic outside of it.
So my beautiful model is just... a high-tech parrot? It can only repeat what it's seen?
Yes. Because you taught it imitation, not comprehension.
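Here is the parrot in miniature. This toy sketch (our stand-in: a small network fit to a sine on $[0, 1]$, not Shez's actual robot) reproduces the genius-inside, lunatic-outside behavior:

```python
import torch
import torch.nn as nn

# Fit a smooth 1-D function on a narrow time range...
torch.manual_seed(0)
t_train = torch.linspace(0.0, 1.0, 100).unsqueeze(1)   # training domain
x_train = torch.sin(2 * torch.pi * t_train)            # stand-in "arm position"

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(t_train), x_train)
    loss.backward()
    opt.step()

# ...then query inside and outside the training range.
t_in, t_out = torch.tensor([[0.5]]), torch.tensor([[1.5]])
print(net(t_in).item(), torch.sin(2 * torch.pi * t_in).item())    # close match
print(net(t_out).item(), torch.sin(2 * torch.pi * t_out).item())  # anyone's guess
```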
To get a smart genie, you need to write a smarter contract.
How do I write 'Don't Break Physics' in Python?
With more math.
We give the model a **Conscience**.
The sheep wants to eat the grass (Minimize Data Loss).
But we are going to build an electric fence around the field.
This fence is a **Physical Law**. If the sheep tries to cross it, it gets a shock.
The shock is a **Penalty**. We add it to the loss function.
We have two terms now.
**Data Loss:** 'Don't be wrong about what you see.'
**Physics Loss:** 'Don't be stupid about what you don't see.'
What's the lambda ($\lambda$)?
That's the voltage on the fence. How much we punish stupidity.
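A sketch of that contract in PyTorch (assumed from the "in Python" question; `physics_residual` and the unlabeled collocation points `t_colloc` are illustrative names, not the episode's code):

```python
import torch

def composite_loss(model, t_data, x_data, t_colloc, physics_residual, lam=1.0):
    """Two-term PINN loss: fit the data, but pay for breaking the law."""
    # Data loss: "don't be wrong about what you see."
    data_loss = torch.mean((model(t_data) - x_data) ** 2)

    # Physics loss: "don't be stupid about what you don't see."
    # physics_residual(model, t) should be ~0 wherever the law holds;
    # t_colloc need no labels, so the fence covers places the data can't.
    physics_loss = torch.mean(physics_residual(model, t_colloc) ** 2)

    # lam is the voltage on the fence: how hard stupidity is punished.
    return data_loss + lam * physics_loss
```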
But that has a derivative in it! $\frac{dE}{dt}$!
My neural network just outputs a position. How do I get the derivative?
We use **Automatic Differentiation**.
Because every operation in this network is differentiable, we can compute the exact derivative of the output with respect to the input (time).
We backpropagate not just the data error, but the *physics error* too.
So... the model learns to obey the fence on its own?
Yes. It finds a solution that is both close to the grass and far from the fence. It learns a physically plausible interpolation.
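Here is that derivative trick as a minimal sketch with `torch.autograd.grad`. The tiny network and the kinetic-energy stand-in ($E = \tfrac{1}{2}v^2$ for a unit mass, conserved for a free arm) are our assumptions:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Collocation times; requires_grad so we can differentiate through them.
t = torch.linspace(0.0, 1.0, 50).unsqueeze(1).requires_grad_(True)
x = net(t)                                   # predicted position x(t)

# dx/dt by automatic differentiation. create_graph=True keeps the result
# differentiable, so the physics error can be backpropagated too.
v = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
E = 0.5 * v ** 2                             # illustrative energy per point
dE_dt = torch.autograd.grad(E.sum(), t, create_graph=True)[0]

# The fence: energy should be conserved, so dE/dt should be ~0.
physics_loss = torch.mean(dE_dt ** 2)
physics_loss.backward()   # gradients now carry the physics error
```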
Retraining...
Huh. The MSE went up a little. $10^{-5}$. It's... statistically worse.
Of course. We constrained it. It can't wiggle perfectly anymore.
The energy... it makes sense now.
It generalizes. Because it learned the rule, not just the data points.
So I should pick a model that obeys physics first, even if its error isn't the absolute minimum.
A model is a tool. You told the first model to be perfect, and it lied to you about how reality works.
You told the second model to be a good engineer. It designed a movement that is constrained by physics.
I don't want to chase fake numbers anymore. I want to model the world.
That's the way of a scientist.