PINN: Physics-Informed Neural Networks

Intro

https://en.wikipedia.org/wiki/Physics-informed_neural_networks

Physics-informed neural networks (PINNs) are a type of universal function approximator that can embed the knowledge of the physical laws governing a given data-set, laws described by partial differential equations (PDEs), into the learning process.[1] They overcome the low data availability of some biological and engineering systems, which makes most state-of-the-art machine learning techniques lack robustness and renders them ineffective in these scenarios.[1] The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the correctness of the function approximation. Embedding this prior information into a neural network thus enhances the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with few training examples.
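The regularization described above is usually implemented as a composite loss: a data/boundary term plus the PDE residual evaluated at collocation points. A minimal stdlib-only sketch of that idea for the toy ODE u'(x) = -u(x), u(0) = 1, where finite differences stand in for the automatic differentiation a real PINN framework (e.g. PyTorch or JAX) would use:

```python
import math
import random

random.seed(0)
H = 4  # hidden units in a tiny 1-H-1 tanh network

def net(p, x):
    # p layout: [w1[0..H), b1[0..H), w2[0..H), b2]
    return sum(p[2*H + i] * math.tanh(p[i] * x + p[H + i]) for i in range(H)) + p[3*H]

def pinn_loss(p):
    # Physics term: mean squared residual of u' + u = 0 at collocation
    # points, with a central finite difference approximating u'
    # (a real PINN would use automatic differentiation here).
    eps = 1e-4
    xs = [i / 10 for i in range(11)]
    pde = sum(((net(p, x + eps) - net(p, x - eps)) / (2 * eps) + net(p, x)) ** 2
              for x in xs) / len(xs)
    bc = (net(p, 0.0) - 1.0) ** 2  # data/boundary term: u(0) = 1
    return pde + bc

def train(steps=1000, lr=0.1, h=1e-5):
    # Plain gradient descent with finite-difference gradients on the
    # parameters, purely to keep the sketch dependency-free.
    p = [random.uniform(-1, 1) for _ in range(3 * H + 1)]
    for _ in range(steps):
        base = pinn_loss(p)
        grad = [(pinn_loss(p[:i] + [p[i] + h] + p[i+1:]) - base) / h
                for i in range(len(p))]
        p = [pi - lr * gi for pi, gi in zip(p, grad)]
    return p
```

The trained network should approximate u(x) = e^(-x); at the true solution both loss terms vanish, which is what makes the physics residual act as a regularizer rather than a competing objective.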

==> think of it as an SL DNN version of RL's expert trajectories

====> significantly reduces the hypothesis space and training time; enforces a baseline of "correctness" on the approximation results

==> how exciting! could be a paradigm for how humans and machines collaborate in the future.

Physics Informed Deep Learning


==> a concise and sufficiently technical poster to present PI-DL by its leading scholars

Authors

Maziar Raissi, Paris Perdikaris, and George Em Karniadakis

Abstract

We introduce physics informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given law of physics described by general nonlinear partial differential equations. We present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Depending on the nature and arrangement of the available data, we devise two distinct classes of algorithms, namely continuous time and discrete time models. The resulting neural networks form a new class of data-efficient universal function approximators that naturally encode any underlying physical laws as prior information. In the first part, we demonstrate how these networks can be used to infer solutions to partial differential equations, and obtain physics-informed surrogate models that are fully differentiable with respect to all input coordinates and free parameters. In the second part, we focus on the problem of data-driven discovery of partial differential equations.

==> the various example equations did not survive copy-paste, so check the linked article for them.
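For the second class of problems (data-driven discovery), the unknown PDE coefficients enter the optimization as extra trainable variables. Stripped of the network, the core idea can be sketched as a least-squares fit of a coefficient lambda to observed samples. This is a hypothetical toy, not the paper's method: data sampled from u' = -λu with true λ = 2, derivatives estimated by central differences:

```python
import math

# Hypothetical observations of u(x) = exp(-2x) on a uniform grid.
h = 0.1
xs = [i * h for i in range(11)]
u = [math.exp(-2 * x) for x in xs]

# Central-difference derivative estimates at interior grid points.
du = [(u[i + 1] - u[i - 1]) / (2 * h) for i in range(1, len(u) - 1)]
ui = u[1:-1]

# Least-squares lambda minimizing sum((du_i + lam * u_i)^2);
# closed form: lam = -sum(du * u) / sum(u * u).
lam = -sum(d * v for d, v in zip(du, ui)) / sum(v * v for v in ui)
```

A full PINN would replace the observed samples with a network u_theta and minimize the data misfit and this residual jointly over the network weights and lambda.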

Status Quo* of PINNs

https://arxiv.org/abs/2201.05624

*[Submitted on 14 Jan 2022 (v1), last revised 13 Feb 2022 (this version, v3)]

Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next

Salvatore Cuomo, Vincenzo Schiano di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, Francesco Piccialli

Physics-Informed Neural Networks (PINN) are neural networks (NNs) that encode model equations, like Partial Differential Equations (PDE), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, and integral-differential equations. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: while the primary goal of the study was to characterize these networks and their related advantages and disadvantages, the review also attempts to incorporate publications on a larger variety of issues, including physics-constrained neural networks (PCNN), where the initial or boundary conditions are directly embedded in the NN structure rather than in the loss functions. The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network structures, and loss function structures. Despite the wide range of applications for which PINNs have been used, by demonstrating their ability to be more feasible in some contexts than classical numerical techniques like Finite Element Method (FEM), advancements are still possible, most notably theoretical issues that remain unresolved.

PDF download link:

https://arxiv.org/pdf/2201.05624

A Frontier Application of PINN

https://www.quantamagazine.org/deep-learning-poised-to-blow-up-famed-fluid-equations-20220412/

Summary:

==> the goal is to find a singularity in solutions of the Euler equations for fluid flow.

==> there are proposed setups, but solving for the singularity by computer simulation is hard: computers cannot represent infinity, and precision loss across approximation steps can produce falsely identified singularities.

==> by manipulating the equations one can remove the time dependency and obtain a "self-similar" form: the solution reproduces itself under the same physical setup, with only the quantities of interest magnified.

==> the rewritten equations must be solved anew, and they contain an unknown magnification-rate parameter;

====> solving them with traditional approaches is hard, if possible at all

====> but such a problem is tailor-made for PINNs
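The structure described here, an unknown solution plus an unknown scalar parameter, both found by minimizing residuals, can be caricatured in a few lines. Assuming a hypothetical one-parameter ansatz u(x) = exp(-lam * x) for the toy equation u' + 2u = 0, gradient descent on the collocation residual recovers lam; a real PINN trains the network weights and the self-similar parameter jointly in the same way:

```python
import math

def residual_loss(lam):
    # Mean squared residual of u' + 2u = 0 for the ansatz u = exp(-lam * x);
    # the residual is (2 - lam) * exp(-lam * x), zero exactly at lam = 2.
    xs = [i / 10 for i in range(11)]  # collocation points on [0, 1]
    return sum(((2.0 - lam) * math.exp(-lam * x)) ** 2 for x in xs) / len(xs)

def fit(lam=0.5, lr=0.1, steps=200, h=1e-6):
    # Plain gradient descent with a finite-difference gradient; in a real
    # PINN, lam would sit alongside the network weights in the optimizer.
    for _ in range(steps):
        g = (residual_loss(lam + h) - residual_loss(lam)) / h
        lam -= lr * g
    return lam
```

The point of the caricature: nothing in the loss distinguishes "network weight" from "physical parameter", so an unknown magnification rate is just one more variable the optimizer adjusts.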

He, Lai, Wang and Javier Gómez-Serrano, a mathematician at Brown University and the University of Barcelona, established a set of physical constraints to help guide their PINN: conditions related to symmetry and other properties, as well as the equations they wanted to solve (they used a set of 2D equations, rewritten using self-similar coordinates, that are known to be equivalent to the 3D Euler equations at points approaching the cylindrical boundary).

They then trained the neural network to search for solutions — and for the self-similar parameter — that satisfied those constraints. “This method is very flexible,” Lai said. “You can always find a solution as long as you impose the correct constraints.” (In fact, the group showcased that flexibility by testing the method on other problems.)

The team’s answer looked a lot like the solution that Hou and Luo had arrived at in 2013. But the mathematicians hope that their approximation paints a more detailed picture of what’s happening, since it marks the first direct calculation of a self-similar solution for this problem. “The new result specifies more precisely how the singularity is formed,” Sverak said — how certain values will blow up, and how the equations will collapse.

“You’re really extracting the essence of the singularity,” Buckmaster said. “It was very difficult to show this without neural networks. It’s clear as night and day that it’s a much easier approach than traditional methods.”

Gómez-Serrano agrees. “This is going to be part of the standard toolboxes that people are going to have at hand in the future,” he said.

Once again, PINNs have revealed what Karniadakis called “hidden fluid mechanics” — only this time, they made headway on a far more theoretical problem than the ones PINNs are usually used for. “I haven’t seen anybody use PINNs for that,” Karniadakis said.

That’s not the only reason mathematicians are excited. PINNs might also be perfectly situated to find another type of singularity that’s all but invisible to traditional numerical methods. These “unstable” singularities might be the only ones that exist for certain models of fluid dynamics, including the Euler equations without a cylindrical boundary (which are already much more complicated to solve) and the Navier-Stokes equations. “Unstable things do exist. So why not find them?” said Peter Constantin, a mathematician at Princeton.
