Notes for PINN

This is a journal of reading notes, with a brief statement of each paper, the reading time, and a short summary. The main thread is the physics-informed neural network (PINN). If you find any mistakes, I would appreciate your feedback by email: [email protected].

2022.02.13 –

This time I will cover some basic knowledge. Starting from solving equations, I focus on improving the convergence of neural networks trained to approximate equations, including how to deal with loss imbalance and how to design the network structure.

  • Colby L. Wight, Jia Zhao. Solving Allen-Cahn and Cahn-Hilliard Equations using the Adaptive Physics Informed Neural Networks. arXiv:2007.04542, 2020.
    Adaptive Resampling, Time-Adaptive Approach; Allen-Cahn eq, Cahn-Hilliard eq. A sketch of the resampling idea follows this list.
  • Rafael Bischof, Michael Kraus. Multi-Objective Loss Balancing for Physics-Informed Deep Learning. arXiv:2110.09813, 2021.
    A review of loss-balancing methods, including Learning Rate Annealing, GradNorm, and SoftAdapt; the authors also propose a new method, Relative Loss Balancing with Random Lookback (ReLoBRaLo). Burgers’ eq, Kirchhoff Plate Bending eq, Helmholtz eq.
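
Here is a minimal PyTorch sketch of the adaptive resampling idea from the first paper, under my own assumptions: a `model` mapping $(x, t)$ to $u$, the common 1D Allen-Cahn benchmark form $u_t - 10^{-4}u_{xx} + 5u^3 - 5u = 0$, and illustrative domain bounds and sample sizes. It is a paraphrase of the idea, not the paper's exact algorithm.

```python
import torch

def allen_cahn_residual(model, x, t, eps=1e-4):
    # Pointwise residual of u_t - eps*u_xx + 5u^3 - 5u = 0
    # (a standard 1D Allen-Cahn benchmark form).
    u = model(torch.cat([x, t], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - eps * u_xx + 5.0 * u ** 3 - 5.0 * u

def adaptive_resample(model, n_candidates=10_000, n_keep=1_000):
    # Draw a large uniform candidate pool on (x, t) in [-1, 1] x [0, 1],
    # then keep the points with the largest residual, so the next round
    # of training focuses on the regions the network fits worst.
    x = (2 * torch.rand(n_candidates, 1) - 1).requires_grad_(True)
    t = torch.rand(n_candidates, 1).requires_grad_(True)
    score = allen_cahn_residual(model, x, t).abs().squeeze()
    idx = torch.topk(score, n_keep).indices
    return x[idx].detach(), t[idx].detach()
```

The returned points would then replace (or augment) the collocation set every so many optimizer steps.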

In most cases, a loss term whose order of magnitude is extremely low compared with the others (for example $10^{-12}$ versus $10^{-1}$) may signal that the loss function is trapped in a local minimum, and we should check the learning rate or the network structure. However, when the gap between the terms is small (for example $10^{-4}$ versus $10^{-2}$), loss-balancing tricks are worth considering: analyze the gradients of the different loss terms, then apply a suitable balancing method, and the result usually improves.
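
As a concrete example of such an analysis, here is a PyTorch sketch in the spirit of the Learning Rate Annealing trick reviewed in the second paper: the gradient magnitude of the residual term serves as an anchor, and each other term is scaled so its gradients have a comparable size. The function name, the anchor statistic, and the moving-average factor are my own illustrative choices, not the exact formulation from any of the papers.

```python
import torch

def balance_weights(losses, params, prev_weights=None, alpha=0.9):
    # `losses` = [L_r, L_i, L_b]; the residual term (first entry) is the
    # anchor whose weight stays fixed at 1. Each other term gets a weight
    # that matches its mean gradient magnitude to the anchor's max.
    grads = []
    for L in losses:
        g = torch.autograd.grad(L, params, retain_graph=True, allow_unused=True)
        grads.append(torch.cat([gi.flatten() for gi in g if gi is not None]))
    anchor = grads[0].abs().max()
    weights = [torch.tensor(1.0)]
    for g in grads[1:]:
        weights.append(anchor / (g.abs().mean() + 1e-12))
    if prev_weights is not None:
        # Exponential moving average keeps the weights from oscillating.
        weights = [alpha * p + (1 - alpha) * w
                   for p, w in zip(prev_weights, weights)]
    return [w.detach() for w in weights]

# Usage inside a training step, after computing L_r, L_i, L_b
# (initialize `weights = None` before the training loop):
#   weights = balance_weights([L_r, L_i, L_b], list(model.parameters()), weights)
#   total = sum(w * L for w, L in zip(weights, [L_r, L_i, L_b]))
```
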
In practice, the residual term usually ends up much lower than the initial and boundary terms. I have noticed that, under the same degree of imbalance, a residual term far below the initial and boundary terms (for example $L_r = 10^{-7}$, $L_i = 10^{-2}$, $L_b = 10^{-2}$) usually gives a bad result, whereas lower initial and boundary terms (for example $L_r = 10^{-2}$, $L_i = 10^{-4}$, $L_b = 10^{-4}$) give a better one. A differential equation alone has infinitely many solutions; the initial and boundary conditions pick out the right one. So when the initial and boundary losses are not low, the network can learn a wrong solution even though the residual loss is very low: it has probably learned a solution with different initial or boundary conditions. For this reason I always put more weight on the initial and boundary terms, so that the network optimizes the residual loss under the correct initial and boundary conditions, as in the sketch below.
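
A minimal sketch of that weighting: a composite PINN loss with fixed weights that emphasize the initial and boundary terms. The weight values here are my own illustrative choice, not taken from the papers.

```python
import torch

W_R, W_I, W_B = 1.0, 100.0, 100.0  # illustrative: favor IC/BC fitting

def composite_loss(residual, u0_pred, u0_true, ub_pred, ub_true):
    L_r = (residual ** 2).mean()                          # PDE residual term
    L_i = torch.nn.functional.mse_loss(u0_pred, u0_true)  # initial condition
    L_b = torch.nn.functional.mse_loss(ub_pred, ub_true)  # boundary condition
    return W_R * L_r + W_I * L_i + W_B * L_b
```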
