Physics-guided Neural Networks (PGNNs)

They present two approaches for this: (1) using physics theory, they calculate additional features (feature engineering) to feed into the model along with the measurements, and (2) they add a physical inconsistency term to the loss function in order to penalize physically inconsistent predictions.

(1) Feature engineering using a physics-based model
(2) Data + Physics driven loss function

The first approach, feature engineering, is extensively used in machine learning. The second approach works very much like adding a regularization term to penalize overfitting: a physical inconsistency term is added to the loss function. Now let's see how they applied (1) feature engineering and (2) the loss-function modification to this problem.

(1) For feature engineering, they used the General Lake Model (GLM) to generate new features and feed them into the NN. The GLM is a physics-based model that captures the processes governing the dynamics of temperature in a lake (heating due to the sun, evaporation, etc.).

(2) Let's now see how they defined the physical inconsistency term. The relevant physics is that denser water sinks: for a point A that lies above a point B, the predicted density ρA should not exceed ρB. The term is therefore ReLU(ρA − ρB) = max(0, ρA − ρB). If ρA > ρB (i.e. an inconsistency), the function gives a positive value that increases the loss function (the function we are trying to minimize); otherwise it gives zero, leaving the loss function unchanged. Two modifications are then needed to use this term properly: it is averaged over all depth pairs and time steps, and it is weighted by a hyperparameter that balances it against the prediction error.

The experiments compare four models:

PHY: the General Lake Model (GLM).
NN: a plain neural network.
PGNN0: a neural network with feature engineering; the results of the GLM are fed into the NN as additional features.
PGNN: the NN with feature engineering and the modified loss function.

And two metrics for evaluation:

RMSE: root mean square error.
Physical inconsistency: the fraction of time steps where the model makes physically inconsistent predictions.

Results on Lake Mille Lacs and Lake Mendota

Comparing the NN with PHY, we can conclude that the NN gives more accurate predictions at the expense of physically inconsistent results. Comparing PGNN0 and PGNN, we can see that the physical inconsistency is mostly eliminated thanks to the modified loss function. Moreover, incorporating our knowledge of the world into the loss function provides an elegant way to improve the generalization performance of machine learning models.
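To make these ideas more concrete, the sketches below show how each piece could be written in code. First, the feature-engineering step of PGNN0: the GLM's simulated temperature is simply appended to the raw driver measurements as an extra input column. The array names, shapes, and use of NumPy here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def build_pgnn0_inputs(drivers: np.ndarray, glm_temperature: np.ndarray) -> np.ndarray:
    """PGNN0-style feature engineering (a sketch).

    drivers:         raw input measurements, shape (n_samples, n_features)
    glm_temperature: temperature simulated by the General Lake Model for the
                     same depth/time points, shape (n_samples,)

    The physics-based simulation is appended as one more input column, so the
    NN sees both the measurements and the GLM's estimate.
    """
    return np.concatenate([drivers, glm_temperature.reshape(-1, 1)], axis=1)
```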
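Next, the modified loss function. This minimal PyTorch sketch follows the description above: predicted temperatures are converted to densities with the standard fresh-water temperature-to-density relation, ReLU(ρA − ρB) is averaged over all depth pairs and time steps, and the result is added to the usual MSE with a weight lambda_phy. The tensor layout (time × depth, ordered shallow to deep) and the default weight are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def density(temp_c: torch.Tensor) -> torch.Tensor:
    """Fresh-water density (kg/m^3) from temperature (deg C), using the
    standard polynomial relation; maximum density is reached near 4 deg C."""
    return 1000.0 * (1.0 - (temp_c + 288.9414) * (temp_c - 3.9863) ** 2
                     / (508929.2 * (temp_c + 68.12963)))

def physical_inconsistency(pred_temp: torch.Tensor) -> torch.Tensor:
    """pred_temp has shape (time_steps, depths), ordered shallow -> deep.
    Denser water should lie deeper, so density at depth d must not exceed
    density at depth d+1; ReLU(rho_A - rho_B) is positive only on violations."""
    rho = density(pred_temp)
    violation = F.relu(rho[:, :-1] - rho[:, 1:])
    return violation.mean()  # averaged over all depth pairs and time steps

def pgnn_loss(pred_temp: torch.Tensor,
              target_temp: torch.Tensor,
              lambda_phy: float = 1.0) -> torch.Tensor:
    """Empirical error plus the physics-based penalty, weighted by lambda_phy."""
    return F.mse_loss(pred_temp, target_temp) + lambda_phy * physical_inconsistency(pred_temp)
```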
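Finally, the two evaluation metrics written directly from their definitions, again in a self-contained NumPy sketch. The way a "physically inconsistent time step" is detected here (at least one adjacent depth pair where predicted density decreases with depth) is one reading of the definition above, so treat it as an assumption.

```python
import numpy as np

def density_np(temp_c: np.ndarray) -> np.ndarray:
    """Same fresh-water temperature-to-density relation as above, in NumPy."""
    return 1000.0 * (1.0 - (temp_c + 288.9414) * (temp_c - 3.9863) ** 2
                     / (508929.2 * (temp_c + 68.12963)))

def rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Root mean square error between predicted and observed temperatures."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def physical_inconsistency_fraction(pred_temp: np.ndarray) -> float:
    """Fraction of time steps with at least one depth pair where predicted
    density decreases with depth (shape: time_steps x depths, shallow -> deep)."""
    rho = density_np(pred_temp)
    violated = (rho[:, :-1] - rho[:, 1:]) > 0
    return float(violated.any(axis=1).mean())
```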
