Neural-Symbolic can be seen as an emerging research direction that unites connectionism and symbolism, two historically opposed camps in artificial intelligence (in fact, the earliest related work dates back to 1978). Below is a summary of ten representative papers in this area.
Neural-Symbolic literally means neural symbolism, but more fundamentally it couples the analytic and algebraic branches of modern mathematics: analysis excels at numbers, functions, and approximation, while algebra excels at deduction, abstraction, and structure. If the two can be combined appropriately, the potential power is considerable.
Based on the ten selected papers, their basic research approaches can be summarized as follows.
For example, encoding an embedded input sequence with an LSTM:

$$e_i^F,\, h_i^F = \operatorname{LSTM}\!\left(\Phi_E(x_i),\, h_{i-1}^F\right)$$
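The recurrence above can be sketched with a toy, scalar (1-dimensional) LSTM step; all weight names here are hypothetical, and the embedding $\Phi_E$ is taken to be the identity:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step with scalar states: returns the new hidden
    and cell state given the (embedded) input x and h_{i-1}."""
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])    # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])    # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])  # candidate cell
    c = f * c_prev + i * g      # new cell state
    h = o * math.tanh(c)        # new hidden state
    return h, c

W = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = lstm_step(1.0, 0.0, 0.0, W)
```

In practice one would use a vectorized library cell; this only makes the gate structure of the recurrence explicit.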
Another example uses a Markov model to derive information from logical knowledge [3]:
$$P(X=x)=\frac{1}{Z} \exp\!\left(\sum_{i=1}^{n} w_i f_i\big(x_{\{i\}}\big)\right)$$
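A minimal sketch of this log-linear distribution, assuming a tiny knowledge base small enough to enumerate every world (the example formula and weight are illustrative, not from any of the cited papers):

```python
import itertools
import math

def mln_distribution(variables, weighted_formulas):
    """Return {world: P(X=x)} with P(x) = exp(sum_i w_i f_i(x)) / Z,
    enumerating all truth assignments to compute the partition function Z."""
    worlds = [dict(zip(variables, vals))
              for vals in itertools.product([False, True], repeat=len(variables))]
    score = lambda w: math.exp(sum(wt * f(w) for wt, f in weighted_formulas))
    Z = sum(score(w) for w in worlds)
    return {tuple(sorted(w.items())): score(w) / Z for w in worlds}

# toy weighted rule: "Smokes => Cancer" with weight 1.5
formulas = [(1.5, lambda w: 1.0 if (not w["Smokes"]) or w["Cancer"] else 0.0)]
dist = mln_distribution(["Smokes", "Cancer"], formulas)
```

Worlds violating the rule still get nonzero probability, just exponentially less of it; that soft treatment of logic is the point of the formulation.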
Another example scores a knowledge-graph triple $(s, p, o)$ by a translation-based embedding distance:

$$f(s, p, o)=\|\mathbf{s}+\mathbf{p}-\mathbf{o}\|_2^2$$
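The score is just a squared Euclidean distance between the translated subject and the object embedding; a minimal sketch with plain lists as vectors:

```python
def transe_score(s, p, o):
    """Translation-based triple score ||s + p - o||_2^2.
    Lower means the triple (subject, predicate, object) is more plausible."""
    return sum((si + pi - oi) ** 2 for si, pi, oi in zip(s, p, o))

# a perfect translation s + p == o scores exactly zero
best = transe_score([1.0, 0.0], [0.0, 1.0], [1.0, 1.0])   # -> 0.0
worse = transe_score([1.0, 0.0], [0.0, 1.0], [0.0, 0.0])
```

Training then pushes true triples toward zero and corrupted triples above a margin.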
Another example is the semantic loss function [6]:
$$L^{s}(\alpha, \mathbf{p}) \propto -\log \sum_{\mathbf{x} \models \alpha} \;\prod_{i:\,\mathbf{x} \models X_i} p_i \prod_{i:\,\mathbf{x} \models \neg X_i} (1-p_i)$$
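A minimal sketch for one standard constraint, "exactly one of $X_1,\dots,X_n$ is true": the satisfying assignments are the $n$ one-hot vectors, so the weighted model count can be computed directly:

```python
import math

def semantic_loss_exactly_one(p):
    """Semantic loss -log WMC(alpha, p) for the exactly-one constraint:
    sum over the n one-hot assignments of prod_j p_j * prod_{i!=j} (1 - p_i)."""
    wmc = 0.0
    for j in range(len(p)):            # assignment with only X_j true
        term = p[j]
        for i, pi in enumerate(p):
            if i != j:
                term *= 1.0 - pi
        wmc += term
    return -math.log(wmc)

confident = semantic_loss_exactly_one([1.0, 0.0, 0.0])   # loss 0: constraint certain
uncertain = semantic_loss_exactly_one([0.5, 0.5, 0.5])
```

For general formulas $\alpha$ this enumeration is exponential; the paper compiles $\alpha$ into a circuit to make the count tractable.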
Here we summarize some highlight ideas that arguably go beyond the general patterns above.
For example, Neural Logic Machines [4] wire together predicates of adjacent arities, so that layer $i$ at arity $r$ sees its neighbors at arities $r-1$ and $r+1$:

$$I_i^{(r)} = \operatorname{Concat}\!\left(\operatorname{Expand}\big(O_{i-1}^{(r-1)}\big),\; O_{i-1}^{(r)},\; \operatorname{Reduce}\big(O_{i-1}^{(r+1)}\big)\right)$$
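A toy sketch of this wiring for $r=1$, using nested lists as tensors (the pooling choice and feature shapes here are illustrative assumptions): Expand copies lower-arity features to every object, Reduce pools out the extra object dimension, and the three pieces are concatenated per object:

```python
def expand(global_feats, n_objects):
    # arity r-1 -> arity r: copy the lower-arity features to every object
    return [list(global_feats) for _ in range(n_objects)]

def reduce_max(binary_feats):
    # arity r+1 -> arity r: max-pool out the second object dimension
    n, d = len(binary_feats), len(binary_feats[0][0])
    return [[max(binary_feats[i][j][k] for j in range(n)) for k in range(d)]
            for i in range(n)]

def nlm_input(global_feats, unary_feats, binary_feats):
    # I^(1) = Concat(Expand(O^(0)), O^(1), Reduce(O^(2))), per object
    ex = expand(global_feats, len(unary_feats))
    red = reduce_max(binary_feats)
    return [ex[i] + unary_feats[i] + red[i] for i in range(len(unary_feats))]

g = [1.0]                                   # one arity-0 (global) feature
u = [[0.0], [1.0]]                          # one unary feature per object
b = [[[0.0], [1.0]], [[1.0], [0.0]]]        # one binary feature per object pair
out = nlm_input(g, u, b)
```

In the real model an MLP with shared weights then acts on each object's concatenated features.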
Another is the rule-regularized projection of [7], which seeks a distribution $q$ close to the network's $p_\theta$ while softly satisfying logic constraints, with slack variables $\xi$:

$$\min_{q,\, \xi \ge 0}\; \operatorname{KL}\!\left(q(\boldsymbol{Y} \mid \boldsymbol{X}) \,\big\|\, p_\theta(\boldsymbol{Y} \mid \boldsymbol{X})\right) + C \sum_{l, g_l} \xi_{l, g_l}$$
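This projection has a closed-form solution: the rule-constrained distribution reweights the network's prediction by an exponential penalty. A minimal sketch over a discrete label set, with a hypothetical per-label rule-violation penalty:

```python
import math

def rule_projection(p, penalties, C=1.0):
    """Closed-form KL projection: q(y) proportional to p(y) * exp(-C * penalty(y)),
    where penalty(y) measures how strongly label y violates the logic rules."""
    unnorm = [py * math.exp(-C * g) for py, g in zip(p, penalties)]
    Z = sum(unnorm)
    return [u / Z for u in unnorm]

p = [0.6, 0.4]            # network distribution p_theta(y | x)
penalties = [1.0, 0.0]    # label 0 violates a rule, label 1 does not
q = rule_projection(p, penalties, C=2.0)
```

The "teacher" $q$ shifts probability mass away from rule-violating labels; the network is then trained to imitate $q$ as well as fit the data.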
Humans are creatures built from neurons, yet we ultimately communicate through symbolic language; so from the standpoint of learning mechanisms, pursuing artificial intelligence through Neural-Symbolic methods is well motivated. Mathematically, the combination of the analytic and algebraic branches is the theoretical essence of Neural-Symbolic.
So far, existing Neural-Symbolic work can only be said to combine neural network methods with a few concepts from symbolic logic, and it remains at a fairly preliminary, application-level stage. Both the neural and the symbolic sides urgently need deepening; otherwise it will be difficult to produce research of real depth.
[1] Boella, Guido, et al. "Learning and Reasoning about Norms Using Neural-Symbolic Systems." Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 2, 2012.
[2] Zhang, Xiangling, et al. "Knowledge Graph Completion via Local Semantic Contexts." International Conference on Database Systems for Advanced Applications, Springer International Publishing, 2016.
[3] Zhu, Yuke, A. Fathi, and L. Fei-Fei. "Reasoning about Object Affordances in a Knowledge Base Representation." European Conference on Computer Vision, Springer International Publishing, 2014.
[4] Dong, Honghua, et al. "Neural Logic Machines." 2019.
[5] Vedantam, Ramakrishna, et al. "Probabilistic Neural-Symbolic Models for Interpretable Visual Question Answering." 2018.
[6] Xu, Jingyi, et al. "A Semantic Loss Function for Deep Learning with Symbolic Knowledge." 2017.
[7] Hu, Zhiting, et al. "Harnessing Deep Neural Networks with Logic Rules." Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016.
[8] Socher, Richard, et al. "Reasoning With Neural Tensor Networks for Knowledge Base Completion." International Conference on Neural Information Processing Systems, Curran Associates Inc., 2013.
[9] Liang, Chen, et al. "Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision." 2016.
[10] Yi, Kexin, Jiajun Wu, et al. "Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding." 2019.