A LaTeX example (using various packages)

% !Mode::"TeX:UTF-8"
\documentclass{article}
\usepackage[UTF8]{ctex}
\usepackage{listings}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage[table]{xcolor}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{pythonhighlight}
\pagestyle{fancy}


\cfoot{Page \thepage of \pageref{LastPage}}
\usepackage[top=2.54cm, bottom=2.54cm, left=3.18cm,right=3.18cm]{geometry}
\lstset{language=Python, breaklines=true, extendedchars=false}
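% Note: with ctex loaded, this file is typically compiled with xelatex;
% lualatex also works, and pdflatex may work depending on the installed
% CJK fonts.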

\begin{document}
%\begin{CJK*}{UTF8}{gbsn}
\title{AI第一次作业}

\date{July 2018}

\maketitle
\tableofcontents
\section{问题一}
证明Information gain $\ge 0$ (用香农熵):
解:信息增益度量$X$对预测$Y$的能力,表达式如下:
\[\operatorname{Gain}(X,Y)=H(Y)-H(Y|X)\]
条件信息熵为:
\[H(Y|X)=\sum_{i}p(x_i)\,H(Y|X=x_i),\quad p(X=x_i)=p(x_i)\]
简单写成:
\[H(Y|X)=\sum_{x\in X}p(x)\,H(Y|x)\]
已知Shannon Entropy:
\[H(X)=\sum_{i=1}^{m}p_i\log_2\frac{1}{p_i},\quad p_i=p(X=x_i)\]
简单写成:
\[H(X)=\sum_{x\in X}p(x)\log_2\frac{1}{p(x)}=-\sum_{x\in X}p(x)\log_2 p(x)\]
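% Sanity check (added, not in the original assignment): a fair coin has
% H(X) = -2*(1/2)*log2(1/2) = 1 bit, while a deterministic outcome has
% H(X) = 0.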
所以信息增益为:

\begin{equation}\label{eq:information-gain}
\begin{aligned}
  \operatorname{Gain}(X,Y)&=H(Y)-H(Y|X)\\
  &=H(X)-H(X|Y)\\
  &=H(X)+H(Y)-H(X,Y)
\end{aligned}
\end{equation}
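% (Added sketch, not in the original post.) The preamble configures
% listings and pythonhighlight for Python, but the body never uses them;
% this minimal python environment (provided by pythonhighlight) checks
% the entropy formula numerically. The values are illustrative only.
\begin{python}
import math

def entropy(ps):
    # Shannon entropy in bits; zero-probability outcomes are skipped
    return -sum(p * math.log2(p) for p in ps if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 -- a fair coin carries one bit
print(entropy([1.0]))       # -0.0 -- a deterministic outcome carries none
\end{python}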

\section{Motivation}
%\end{CJK*}
\end{document} 

Above is the code; below is the rendered output.

[Rendered output: the title "AI第一次作业", the date "July 2018", a table of contents, Section 1 "问题一" with the derivation above typeset as display math (the aligned steps numbered as equation (1)), Section 2 "Motivation", and a centered "Page … of …" footer from fancyhdr and lastpage.]

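One gap worth noting: equation (1) only rewrites the gain in equivalent forms and never establishes the inequality itself. A standard way to finish the proof, via Jensen's inequality (equivalently, the non-negativity of KL divergence), is the following sketch, which could be appended after equation (1):

\begin{align*}
  \operatorname{Gain}(X,Y) &= H(Y)-H(Y|X)
    = \sum_{x,y}p(x,y)\log_2\frac{p(x,y)}{p(x)\,p(y)}\\
  &= D_{\mathrm{KL}}\bigl(p(x,y)\,\big\|\,p(x)\,p(y)\bigr)\ge 0,
\end{align*}
since Jensen's inequality (concavity of $\log_2$) gives
\[
  \sum_{x,y}p(x,y)\log_2\frac{p(x)\,p(y)}{p(x,y)}
  \le \log_2\sum_{x,y}p(x)\,p(y) \le \log_2 1 = 0,
\]
with equality exactly when $p(x,y)=p(x)\,p(y)$, i.e. when $X$ and $Y$ are independent.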