Hammersley–Clifford theorem
The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution can be represented as a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields.[1] It states that a probability distribution with a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (complete subgraphs) of the graph.
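In symbols, the Gibbs factorization asserted by the theorem can be written as follows (the potential functions $\phi_C$ and the normalizing constant $Z$ are standard notation, not notation used elsewhere in this article):

$$\Pr(X_1, \dots, X_n) = \frac{1}{Z} \prod_{C \in \operatorname{cl}(G)} \phi_C(X_C), \qquad Z = \sum_{x} \prod_{C \in \operatorname{cl}(G)} \phi_C(x_C),$$

where $\operatorname{cl}(G)$ denotes the set of cliques of $G$, $X_C$ denotes the variables belonging to clique $C$, and each potential $\phi_C$ is strictly positive.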
The study of the relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin[2] and Frank Spitzer[3] in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971.[4][5] Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett,[6] Preston[7] and Sherman[8] in 1973, with a further proof by Julian Besag in 1974.[9]
A simple Markov network for demonstrating that any Gibbs random field satisfies every Markov property.
It is straightforward to show that a Gibbs random field satisfies every Markov property. The following example illustrates this fact:
In the image to the right, a Gibbs random field over the provided graph has the form $\Pr(A,B,C,D,E,F) \propto f_1(A,B,D)\, f_2(A,C,D)\, f_3(C,D,F)\, f_4(C,E,F)$. If variables $C$ and $D$ are fixed, then the global Markov property requires that $A,B \perp E,F \mid C,D$ (see conditional independence), since $C,D$ forms a barrier between $A,B$ and $E,F$.
With $C$ and $D$ held constant, $\Pr(A,B,E,F \mid C,D) \propto [f_1(A,B,D)\, f_2(A,C,D)] \cdot [f_3(C,D,F)\, f_4(C,E,F)] = g_1(A,B)\, g_2(E,F)$, where $g_1(A,B) = f_1(A,B,D)\, f_2(A,C,D)$ and $g_2(E,F) = f_3(C,D,F)\, f_4(C,E,F)$. This implies that $A,B \perp E,F \mid C,D$.
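The same conclusion can also be checked numerically. The following sketch (Python with NumPy; the random positive potentials over binary variables are hypothetical values chosen only for illustration, not taken from the article) builds the joint distribution from the four factors and verifies that $\Pr(A,B,E,F \mid C,D)$ equals the product of its $(A,B)$ and $(E,F)$ marginals for every fixed $(C,D)$:

    import numpy as np

    rng = np.random.default_rng(0)

    # Binary variables A, B, C, D, E, F; random strictly positive clique
    # potentials (hypothetical values, for illustration only).
    f1 = rng.uniform(0.1, 1.0, size=(2, 2, 2))  # f1(A, B, D)
    f2 = rng.uniform(0.1, 1.0, size=(2, 2, 2))  # f2(A, C, D)
    f3 = rng.uniform(0.1, 1.0, size=(2, 2, 2))  # f3(C, D, F)
    f4 = rng.uniform(0.1, 1.0, size=(2, 2, 2))  # f4(C, E, F)

    # Unnormalized joint: Pr(A,B,C,D,E,F) ∝ f1(A,B,D) f2(A,C,D) f3(C,D,F) f4(C,E,F)
    p = np.einsum('abd,acd,cdf,cef->abcdef', f1, f2, f3, f4)
    p /= p.sum()

    # Check A,B ⊥ E,F | C,D for every fixed assignment (C, D) = (c, d).
    for c in range(2):
        for d in range(2):
            cond = p[:, :, c, d, :, :]
            cond = cond / cond.sum()                 # Pr(A,B,E,F | C=c, D=d)
            ab = cond.sum(axis=(2, 3))               # marginal Pr(A,B | c, d)
            ef = cond.sum(axis=(0, 1))               # marginal Pr(E,F | c, d)
            prod = np.einsum('ab,ef->abef', ab, ef)  # product of the marginals
            assert np.allclose(cond, prod), (c, d)

    print("A,B is conditionally independent of E,F given C,D")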
To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proven:
Lemma 1 provides a means for combining factorizations as shown in this diagram. Note that in this image, the overlap between sets is ignored.
Lemma 1
Let $U$ denote the set of all random variables under consideration, and let $\Theta, \Phi_1, \Phi_2, \dots, \Phi_n \subseteq U$ and $\Psi_1, \Psi_2, \dots, \Psi_m \subseteq U$ denote arbitrary sets of variables. (Here, given an arbitrary set of variables $X$, $X$ will also denote an arbitrary assignment to the variables from $X$.)
If
$$\Pr(U) = f(\Theta) \prod_{i=1}^{n} g_i(\Phi_i) = \prod_{j=1}^{m} h_j(\Psi_j)$$
for functions $f, g_1, g_2, \dots, g_n$ and $h_1, h_2, \dots, h_m$, then there exist functions $h'_1, h'_2, \dots, h'_m$ and $g'_1, g'_2, \dots, g'_n$ such that
$$\Pr(U) = \bigg(\prod_{j=1}^{m} h'_j(\Theta \cap \Psi_j)\bigg) \bigg(\prod_{i=1}^{n} g'_i(\Phi_i)\bigg)$$
In other words, $\prod_{j=1}^{m} h_j(\Psi_j)$ provides a template for further factorization of $f(\Theta)$.
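As a minimal concrete instance of the lemma (the one-variable sets below are chosen purely for illustration and are not part of the original statement), let $U = \{X, Y, Z\}$ with $X$, $Y$, $Z$ mutually independent, and take $\Theta = \{X, Y\}$, $\Phi_1 = \{Z\}$, $\Psi_1 = \{X\}$, $\Psi_2 = \{Y, Z\}$. Then

$$\Pr(U) = \underbrace{p(X)\,p(Y)}_{f(\Theta)}\, \underbrace{p(Z)}_{g_1(\Phi_1)} = \underbrace{p(X)}_{h_1(\Psi_1)}\, \underbrace{p(Y)\,p(Z)}_{h_2(\Psi_2)},$$

and Lemma 1 yields $h'_1(\Theta \cap \Psi_1) = p(X)$, $h'_2(\Theta \cap \Psi_2) = p(Y)$ and $g'_1(\Phi_1) = p(Z)$, so that

$$\Pr(U) = h'_1(X)\, h'_2(Y)\, g'_1(Z):$$

the second factorization has acted as a template splitting $f(\Theta)$ into factors over the intersections $\Theta \cap \Psi_j$.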
The clique formed by vertices $x_1$, $x_2$, and $x_3$ is the intersection of $\{x_1\} \cup \partial x_1$, $\{x_2\} \cup \partial x_2$, and $\{x_3\} \cup \partial x_3$.
Lemma 1 provides a means of combining two different factorizations of $\Pr(U)$. The local Markov property implies that for any random variable $x \in U$, there exist factors $f_x$ and $f_{-x}$ such that:
$$\Pr(U) = f_x(x, \partial x)\, f_{-x}(U \setminus \{x\})$$
where $\partial x$ denotes the neighbors of node $x$. Applying Lemma 1 repeatedly eventually factors $\Pr(U)$ into a product of clique potentials (see the image on the right and the worked example below).
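To see how a single combination step works, consider a hypothetical path graph $X - Y - Z - W$ (this example is not in the original text). The local Markov property at $X$ and at $W$ gives two factorizations of the same distribution:

$$\Pr(U) = \underbrace{f_X(X, Y)}_{g_1(\Phi_1)}\, \underbrace{f_{-X}(Y, Z, W)}_{f(\Theta)} = \underbrace{f_W(W, Z)}_{h_1(\Psi_1)}\, \underbrace{f_{-W}(X, Y, Z)}_{h_2(\Psi_2)}.$$

Here $\Theta = \{Y, Z, W\}$, $\Phi_1 = \{X, Y\}$, $\Psi_1 = \{Z, W\}$ and $\Psi_2 = \{X, Y, Z\}$, so Lemma 1 gives

$$\Pr(U) = h'_1(\Theta \cap \Psi_1)\, h'_2(\Theta \cap \Psi_2)\, g'_1(\Phi_1) = h'_1(Z, W)\, h'_2(Y, Z)\, g'_1(X, Y),$$

a product of potentials over the cliques (edges) $\{Z, W\}$, $\{Y, Z\}$ and $\{X, Y\}$ of the path.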
End of Proof