Title: Transfer learning based multi-fidelity physics informed deep neural network
Author: Souvik Chakraborty
Venue: Machine Learning (cs.LG); Computational Physics (physics.comp-ph)
Year: 2020
Paper link:
Code:
Motivation:
Prior work:
Based on the discussion above, (at least) two salient conclusions can be drawn about the existing multi-fidelity approaches:
The probability of failure of the system can be calculated as
$$P_{f}=\mathbb{P}\left(\boldsymbol{\Xi} \in \Omega_{f}\right)=\int_{\Omega_{f}} \mathrm{d} F_{\boldsymbol{\Xi}}(\boldsymbol{\xi})=\int_{\Omega} \mathbb{I}_{\Omega_{f}} \, \mathrm{d} F_{\boldsymbol{\Xi}}(\boldsymbol{\xi})$$
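In practice this integral is usually estimated by crude Monte Carlo simulation: sample $\boldsymbol{\xi}$ from its distribution and average the indicator. A minimal sketch, with a hypothetical limit-state function and a standard-normal variable chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def indicator_failure(g):
    """I_{Omega_f}: 1 where the limit-state function is negative (failure), else 0."""
    return (g < 0.0).astype(float)

def estimate_pf(limit_state, sampler, n=100_000):
    """Crude Monte Carlo: P_f ~= (1/N) * sum_i I_{Omega_f}(xi_i)."""
    xi = sampler(n)
    return indicator_failure(limit_state(xi)).mean()

# Toy example (not from the paper): xi ~ N(0, 1), failure when xi > 2,
# i.e. limit state g(xi) = 2 - xi. Exact value is 1 - Phi(2) ~= 0.0228.
pf = estimate_pf(lambda xi: 2.0 - xi, rng.standard_normal)
```

The point of the surrogate-model discussion below is that each evaluation of the limit state may require an expensive experiment or simulation, which is what makes this naive estimator impractical.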
If the number of data points $N_{h}$ is large, a surrogate model $\mathcal{M}:(\boldsymbol{\xi}, x, t) \rightarrow u$ can be trained directly and then used to evaluate the probability of failure in the equation above.
In reality, however, the number of laboratory experiments that can be performed is limited, and hence the number of available data points is often insufficient to train a surrogate model. To compensate for the fact that only a limited amount of high-fidelity data is available, an approximate (low-fidelity) governing equation of the system is considered:
(The indicator function appearing in the failure-probability integral above is defined as
$$\mathbb{I}_{c}(\boldsymbol{\xi})=\begin{cases}1 & \text{if } \boldsymbol{\xi} \in c \\ 0 & \text{if } \boldsymbol{\xi} \notin c\end{cases}$$)
$$u_{t}+h\left(u, u_{x}, u_{xx}, \ldots ; \boldsymbol{\xi}\right)=0$$
Purely data-driven DNNs require a large amount of high-fidelity data for training. This paper, however, focuses on problems where high-fidelity data is scarce, so directly applying a data-driven DNN is unlikely to yield satisfactory results. To address this over-reliance of data-driven DNNs on training data, PINNs were proposed.
PINNs have two main advantages. First, unlike other reliability-analysis tools such as data-driven DNNs, a PI-DNN requires no simulation data, which greatly reduces the computational cost. Second, a PI-DNN is trained by satisfying the governing differential equation of the system, so physical properties such as invariances and symmetries are respected.
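The physics-informed training idea can be sketched without any neural-network library: minimize the residual of the governing equation at collocation points, using no simulation data. In the dependency-free sketch below, the "network" is replaced by a small polynomial ansatz (so the minimization becomes a linear least-squares solve), and the ODE $\mathrm{d}u/\mathrm{d}t = -Zu$ with $Z = 1$ is an assumed test equation; a real PI-DNN would use a neural network trained with automatic differentiation.

```python
import numpy as np

Z = 1.0                                   # assumed ODE parameter
t = np.linspace(0.0, 1.0, 50)             # collocation points

# Ansatz u(t) = 1 + sum_k c_k t^(k+1): satisfies the IC u(0) = 1 by construction.
K = 4
basis = np.stack([t ** (k + 1) for k in range(K)], axis=1)
basis_dt = np.stack([(k + 1) * t ** k for k in range(K)], axis=1)

# Physics residual of du/dt + Z*u = 0 is linear in c: (basis_dt + Z*basis) c + Z,
# so minimizing the mean squared residual is a linear least-squares problem.
A = basis_dt + Z * basis
b = -Z * np.ones_like(t)
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u_pred = 1.0 + basis @ c                  # "physics-trained" surrogate
u_exact = np.exp(-Z * t)                  # analytical solution, for reference only
```

No data points were used: the fit is driven entirely by the equation residual, which is the defining feature of physics-informed training.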
For a PINN, however, the exact governing equation must be known, whereas in science and engineering there are many situations where the governing differential equation is unknown [4]. Even when the governing equation is known, it is often based on certain assumptions and approximations.
Neither the data-driven DNN of Section 3.1 nor the PINN of Section 3.2 can solve the reliability-analysis problem defined in Section 2.
Method: the approach consists of two main steps
Advantage (transfer learning is used)
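The two-step idea can be sketched with toy models (all functions, sample sizes, and feature choices below are assumptions for illustration, not the paper's architecture): first pretrain a surrogate on the approximate low-fidelity physics, then freeze most of it and fine-tune a small correction on the few available high-fidelity observations.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
u_low = np.exp(-t)                        # assumed low-fidelity solution
u_high = np.exp(-t) + 0.3 * t             # assumed "true" high-fidelity response

# Step 1 (pretraining): fit polynomial features to the low-fidelity model.
features = np.stack([np.ones_like(t), t, t ** 2, t ** 3], axis=1)
w_pre, *_ = np.linalg.lstsq(features, u_low, rcond=None)

# Step 2 (transfer learning): only N_h = 5 high-fidelity samples are available;
# keep the pretrained weights and fit a small correction (here: bias and slope
# only, mimicking fine-tuning a few parameters) to the high-fidelity residual.
idx = rng.choice(len(t), size=5, replace=False)
resid = u_high[idx] - features[idx] @ w_pre
dw, *_ = np.linalg.lstsq(features[idx][:, :2], resid, rcond=None)

w_fine = w_pre.copy()
w_fine[:2] += dw
u_mf = features @ w_fine                  # multi-fidelity prediction
```

The pretrained model alone is biased by the approximate physics; the handful of high-fidelity points is enough to correct it because most of the representation is transferred rather than relearned.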
Numerical examples are presented to illustrate the performance of the proposed approach. A wide variety of examples involving single and multiple stochastic variables, linear and non-linear problems, and ordinary and partial differential equations are selected.
$$\frac{\mathrm{d} u_{l}}{\mathrm{d} t}=-Z u_{l}$$
subject to the initial condition:
$$u_{l}(t=0)=1.0 \tag{21}$$
high-fidelity model:
$$u_{h}=t \sin(t)\left[\log\left(u_{l}^{4}\right)\right]^{2}+15 t^{3}+1.0 \tag{22}$$
The relation between the high-fidelity $u_{h}$ and the low-fidelity $u_{l}$ is non-linear. The limit-state function for this problem is defined as
$$\mathcal{J}\left(Z, t_{t}\right)=u_{h}\left(Z, t_{t}\right)-u_{0} \tag{23}$$
For this example, $t_{t}=1.0$ and $u_{0}=18.0$ are considered. It is assumed that 15 samples from the high-fidelity model are available, and for each of the 15 high-fidelity samples, observations are available at t = [0.0, 1.0]. Note that the data-generation process, i.e., Eq. (22), is not known: MF-PIDNN only has access to the high-fidelity data and to the low-fidelity model.
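Since the low-fidelity ODE has the closed-form solution $u_l(t) = e^{-Zt}$, a Monte Carlo reference for this example is easy to sketch. The excerpt does not state the distribution of $Z$, so $Z \sim \mathcal{N}(0, 1)$ below is purely an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def u_l(Z, t):
    return np.exp(-Z * t)                 # solution of du_l/dt = -Z*u_l, u_l(0) = 1

def u_h(Z, t):
    ul = u_l(Z, t)                        # high-fidelity response, Eq. (22)
    return t * np.sin(t) * np.log(ul ** 4) ** 2 + 15 * t ** 3 + 1.0

t_t, u0 = 1.0, 18.0                       # values used in the example
Z = rng.standard_normal(200_000)          # assumed distribution of Z
J = u_h(Z, t_t) - u0                      # limit-state function, Eq. (23)
pf = np.mean(J < 0.0)                     # MCS estimate of the failure probability
```

This is the brute-force MCS baseline the tables compare against; MF-PIDNN aims to reproduce it without access to Eq. (22).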
Process:
Table 1 shows the results obtained using MCS and MF-PIDNN.

In a second case, the responses are observed at t = [0.0, 0.5, 0.9], and the objective is to compute the reliability of the system at t = 1.0. The results obtained are shown in Table 2.

Next, the variation in the number of high-fidelity data points, $N_{h}$, is investigated. For each realization of Z, the responses are observed at t = [0.0, 1.0], and the probability of failure at $t_{t}=1.0$ is computed. The variation of the MF-PIDNN predicted probability of failure is shown in Fig. 2.

For the next example, the low-fidelity model is considered to be
$$\left(u_{l}\right)_{t}=\nu\left(u_{l}\right)_{xx}$$
The high-fidelity model for this problem is
$$\left(u_{h}\right)_{t}+u_{h}\left(u_{h}\right)_{x}=\nu\left(u_{h}\right)_{xx}$$
The boundary and the initial conditions:
$$u_{h}(t, x=-1)=1+\delta, \quad u_{h}(t, x=1)=-1, \quad u_{h}(t=0, x)=-1+(1+x)\left(1+\frac{\delta}{2}\right)$$
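The model discrepancy here is structural: the low-fidelity heat equation drops the non-linear convection term $u u_x$ of Burgers' equation. A finite-difference sketch makes the gap between the two right-hand sides explicit (the viscosity value and the test profile below are assumptions, not from the paper):

```python
import numpy as np

nu = 0.1                                  # assumed viscosity
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
u = -np.tanh(x / (2 * nu))                # a Burgers-like transition-layer profile

u_x = np.gradient(u, dx)                  # first spatial derivative
u_xx = np.gradient(u_x, dx)               # second spatial derivative

rhs_low = nu * u_xx                       # heat equation:    u_t = nu*u_xx
rhs_high = nu * u_xx - u * u_x            # Burgers equation: u_t = nu*u_xx - u*u_x
gap = rhs_high - rhs_low                  # model discrepancy = -u*u_x
```

Pretraining on the heat equation gives MF-PIDNN the diffusive part of the dynamics; the few high-fidelity samples are what inform it about the missing convection term.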
Process:
Along with MCS and MF-PIDNN results, LF-PIDNN and HF-DNN predicted results are also presented.
The variation of the probability of failure with the threshold $x_{0}$ is shown in Fig. 4.
In this paper, a multi-fidelity physics informed deep neural network (MF-PIDNN) is presented. The proposed approach is ideally suited for problems where the physics is known only in an approximate sense (low-fidelity physics) and only a few high-fidelity data points are available. MF-PIDNN blends physics-informed and data-driven deep learning. There are two distinct advantages of MF-PIDNN.
Subhayan De, Jolene Britton, Matthew Reynolds, Ryan Skinner, Kenneth Jansen, and Alireza Doostan. On transfer learning of neural networks using bi-fidelity data for uncertainty propagation. arXiv preprint arXiv:2002.04495, 2020. ↩︎
Xuhui Meng and George Em Karniadakis. A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems. Journal of Computational Physics, 401:109020, 2020. ↩︎
Dehao Liu and Yan Wang. Multi-fidelity physics-constrained neural network and its application in materials modeling. Journal of Mechanical Design, 141(12), 2019. ↩︎
Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016. ↩︎