Physics-Informed Neural Nets (PINNs)
Physics-informed neural nets are a straightforward way to include knowledge about the underlying physics in the training of a neural net. If all or some of the equations governing the modelled behavior are known, a loss function can be constructed that penalizes solutions that do not fulfill those equations. These equations are typically Partial Differential Equations (PDEs) or Ordinary Differential Equations (ODEs). If some parameters of the equations are unknown, it is also possible to learn them.
As an example, let’s say that we have measured the flow velocity components (u, v, w) and the pressure (p) at a number of points in space (x, y, z) and time (t), and want to learn the function (u, v, w, p) = f(x, y, z, t) so that we can calculate the complete flow velocity field and increase the resolution in space and time. The naive approach would be to train a neural net with (x, y, z, t) as inputs and (u, v, w, p) as outputs using the measured data. However, we know that a fluid flow is governed by the Navier-Stokes equations, and the partial derivatives of (u, v, w, p) with respect to (x, y, z, t) can be calculated using the automatic differentiation available in machine learning frameworks such as JAX or Torch. We can therefore set up a custom loss function that penalizes the estimated velocity field when it violates some aspect of the Navier-Stokes equations, for instance conservation of mass. This loss function can be evaluated at the measured points as well as at other points where there are no measurements.
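The mass-conservation loss term can be sketched in JAX as follows. This is a minimal illustration, not a complete PINN: the names `net`, `continuity_residual`, and `physics_loss` are hypothetical, and the linear `net` is only a stand-in for a real neural network.

```python
import jax
import jax.numpy as jnp

def net(params, xyzt):
    # Stand-in for a neural net mapping (x, y, z, t) -> (u, v, w, p).
    # Here just a linear map so the example is self-contained.
    W, b = params
    return W @ xyzt + b

def continuity_residual(params, xyzt):
    # Jacobian of (u, v, w, p) with respect to (x, y, z, t),
    # computed with automatic differentiation.
    J = jax.jacfwd(net, argnums=1)(params, xyzt)
    # Incompressible mass conservation: du/dx + dv/dy + dw/dz = 0.
    return J[0, 0] + J[1, 1] + J[2, 2]

def physics_loss(params, points):
    # Mean squared residual over a batch of collocation points,
    # which may or may not coincide with measured points.
    res = jax.vmap(lambda pt: continuity_residual(params, pt))(points)
    return jnp.mean(res ** 2)
```

In training, this physics loss would be added (typically with a weighting factor) to the ordinary data-fitting loss on the measured (u, v, w, p) values.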
Lagrangian and Hamiltonian Neural Nets
For mechanical systems, instead of learning the function mapping position (p) and velocity (v) to acceleration, a = f(v, p), we can learn the Lagrangian that describes the system. This imposes properties such as conservation of total energy. Using automatic differentiation in the machine learning framework, the acceleration can be calculated from the Lagrangian and compared with measurements in the loss function. Additionally, a custom loss term can be added that ensures the Euler-Lagrange equation is satisfied for the learned Lagrangian, evaluated at the measured points and possibly at additional points.
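The step from a learned Lagrangian back to an acceleration can be sketched as below. The function names are illustrative, and a simple harmonic-oscillator Lagrangian stands in for the neural network that would normally be learned.

```python
import jax
import jax.numpy as jnp

def lagrangian(p, v):
    # Stand-in for a learned neural net L(p, v). Here: a harmonic
    # oscillator, L = kinetic - potential = 0.5*v^2 - 0.5*p^2.
    return 0.5 * jnp.dot(v, v) - 0.5 * jnp.dot(p, p)

def acceleration(p, v):
    # Euler-Lagrange: d/dt (dL/dv) = dL/dp. Expanding the time
    # derivative gives a linear system for the acceleration a:
    #   (d2L/dv2) a = dL/dp - (d2L/dv dp) v
    dLdp = jax.grad(lagrangian, argnums=0)(p, v)
    Hvv = jax.hessian(lagrangian, argnums=1)(p, v)  # d2L/dv2
    Hvp = jax.jacfwd(jax.grad(lagrangian, argnums=1), argnums=0)(p, v)  # d2L/dv dp
    return jnp.linalg.solve(Hvv, dLdp - Hvp @ v)
```

For the oscillator stand-in this recovers a = -p, as expected; with a learned Lagrangian, the same `acceleration` function would be compared against measured accelerations in the loss.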
In a similar way, if the system is described using Hamiltonian mechanics, instead of learning a function that maps the position (p) and momentum (m) to the velocity (v) and the time derivative of the momentum (dm), (v, dm) = f(p, m), we can learn the Hamiltonian that describes the system. This similarly imposes properties such as conservation of total energy. Using automatic differentiation, the velocity and the time derivative of the momentum can be calculated and compared with measurements in the loss function. Likewise, a custom loss term is added that ensures Hamilton's equations are satisfied for the learned Hamiltonian at the measured points and possibly at additional points.
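Hamilton's equations make this step very direct, as the following sketch shows. Again the harmonic-oscillator Hamiltonian is a hypothetical stand-in for the learned network, and the function names are illustrative.

```python
import jax
import jax.numpy as jnp

def hamiltonian(p, m):
    # Stand-in for a learned neural net H(p, m). Here: a harmonic
    # oscillator, H = 0.5*m^2 + 0.5*p^2.
    return 0.5 * jnp.dot(m, m) + 0.5 * jnp.dot(p, p)

def time_derivatives(p, m):
    # Hamilton's equations: v = dp/dt = dH/dm, dm/dt = -dH/dp.
    dHdp = jax.grad(hamiltonian, argnums=0)(p, m)
    dHdm = jax.grad(hamiltonian, argnums=1)(p, m)
    return dHdm, -dHdp
```

The returned (v, dm) pair plays the role of the f(p, m) output in the direct approach and can be compared with measurements in the loss function.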
Summary
There are multiple methods to include knowledge about the physics, constraints, or symmetries of the behavior being modelled; when included, they result in better models that require less training data. Take a look at the video below by Steve Brunton from the University of Washington for a detailed introduction to how physics can be incorporated in machine learning.