Automated Graph Algorithms
Uninformed Search Strategies
Sometimes better than BFS: Uniform-Cost Search
Safer DFS: Depth-Limited Search
Best of both: Iterative-Deepening Search
python-BFS-graph
python-BFS-tree
GfG tutorial
A, B, C, D, E, F, G, H, I (assuming the nodes on each level are visited from left to right).
First-in First-out!
(FIFO) Queue
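A minimal BFS sketch with a FIFO queue; the adjacency list below is an assumed layout chosen to reproduce the visit order A…I above:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search: visit nodes level by level using a FIFO queue."""
    order = [start]            # visit order; doubles as the 'seen' set
    queue = deque([start])     # FIFO frontier
    while queue:
        node = queue.popleft()             # dequeue the oldest node first
        for neighbour in graph[node]:      # expand children left to right
            if neighbour not in order:
                order.append(neighbour)
                queue.append(neighbour)
    return order

# Example graph (assumed, for illustration only)
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G", "H"],
         "D": [], "E": ["I"], "F": [], "G": [], "H": [], "I": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
```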
python-DFS-graph
python-DFS-tree
GfG tutorial
A, B, D, E, I, C, F, G, H (assuming the left child of each node is visited first).
Last-In First-Out
Stack (LIFO Queue)
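A matching DFS sketch over the same assumed graph, with an explicit stack instead of a queue:

```python
def dfs(graph, start):
    """Depth-first search using an explicit LIFO stack."""
    order = []
    stack = [start]  # LIFO frontier
    while stack:
        node = stack.pop()  # pop the most recently pushed node
        if node not in order:
            order.append(node)
            # push children in reverse so the leftmost child is expanded first
            stack.extend(reversed(graph[node]))
    return order

# Same assumed example graph as in the BFS sketch
graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G", "H"],
         "D": [], "E": ["I"], "F": [], "G": [], "H": [], "I": []}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'E', 'I', 'C', 'F', 'G', 'H']
```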
DLS is a compromise that provides some of the benefits of breadth-first search while reducing the memory cost.
Levels are counted starting from level 0 (the root).
Based on depth-first search.
Iterative Deepening Depth First Search(IDDFS)
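A minimal sketch of DLS and IDDFS (recursive, reusing the assumed example graph from the sketches above):

```python
def dls(graph, node, goal, limit, path=()):
    """Depth-limited search: DFS that does not descend below `limit` (root = level 0)."""
    path = list(path) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None  # cutoff: do not expand below this level
    for child in graph[node]:
        result = dls(graph, child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run DLS with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G", "H"],
         "D": [], "E": ["I"], "F": [], "G": [], "H": [], "I": []}
print(iddfs(graph, "A", "I"))  # ['A', 'B', 'E', 'I']
```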
Idea: always expand the cheapest path first, i.e., the successor with the lowest (overall) path cost.
python-ucs-graph
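A minimal UCS sketch; the weighted graph below is made up for illustration, and the frontier is a priority queue ordered by path cost g(n):

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier path with the lowest total cost."""
    frontier = [(0, start, [start])]  # (path cost g(n), node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph[node]:
            heapq.heappush(frontier, (cost + step_cost, neighbour, path + [neighbour]))
    return None

# Assumed weighted graph: node -> list of (neighbour, step cost)
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(ucs(graph, "A", "D"))  # (4, ['A', 'B', 'C', 'D'])
```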
Difference between Informed and Uninformed Search
How to formalise a problem for search
How to search
How do we come up with a heuristic for a problem?
Properties:
It does not overestimate the actual cost of reaching a goal state from n along the current path.
it does not overestimate the true path cost from any given node.
An admissible heuristic will ensure optimality when we use tree search.
The heuristic given here is admissible because it does not overestimate the true path cost from any given node.
The estimated cost of reaching the goal from n is no greater than the step cost to a successor n′ plus the estimated cost of reaching the goal from n′: h(n) ≤ c(n, n′) + h(n′).
An admissible heuristic may still be inconsistent for graph search:
A consistent heuristic will ensure optimality when we use graph search.
The heuristic given here is admissible because it does not overestimate the true path cost from any given node.
However, it is not consistent: the heuristic overestimates at node B, since the step cost from B to C plus the estimated cost from C to the goal is less than the estimate from B itself.
Examples of heuristic algorithms:
Greedy Best-First Search
In greedy best-first search, the evaluation function contains a simple heuristic f(n) = h(n) that estimates the cost of the cheapest path from the state at the current node to the node that has the goal state (straight-line distance).
Repeatedly expand the node with the smallest heuristic value (a combined sketch follows the A* properties below).
A* search: use an evaluation function f(n) = g(n) + h(n)
Jianshu tutorial
Lowest Evaluation Result First: g(n) + h(n)
Corresponding ADT: Priority Queue
Complete for finite spaces & positive path cost
Optimal: will always expand the currently best path
Optimally efficient (There is no other search algorithm that is guaranteed to expand fewer nodes and still find an optimal solution)
Space and time complexity depend on the heuristic chosen but can still be high in the worst case, similar to BFS: O(b^d)
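A combined sketch of greedy best-first search and A*; the graph and the straight-line-distance heuristic values below are made up (the heuristic is admissible for this graph):

```python
import heapq

def best_first(graph, h, start, goal, use_g=True):
    """Best-first search: f(n) = g(n) + h(n) when use_g is True (A*),
    f(n) = h(n) when use_g is False (greedy best-first)."""
    frontier = [(h[start], 0, start, [start])]  # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph[node]:
            g2 = g + step_cost
            f2 = (g2 if use_g else 0) + h[neighbour]
            heapq.heappush(frontier, (f2, g2, neighbour, path + [neighbour]))
    return None

# Assumed weighted graph and admissible heuristic values
graph = {"S": [("A", 2), ("B", 3)], "A": [("G", 4)], "B": [("G", 2)], "G": []}
h = {"S": 4, "A": 3, "B": 2, "G": 0}
print(best_first(graph, h, "S", "G"))               # A*:     (5, ['S', 'B', 'G'])
print(best_first(graph, h, "S", "G", use_g=False))  # greedy: (5, ['S', 'B', 'G'])
```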
Adversarial Search
Based on game theory.
Similar to DFS.
High utility values are good for MAX, low utility values are good for MIN.
Based on minimax, with alpha and beta bounds added.
Yields the same solution as minimax.
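A minimal sketch of minimax with alpha-beta pruning; the toy game tree and its leaf utilities are made up for illustration:

```python
# Assumed toy game tree: internal nodes list their children,
# leaves carry utility values (high favours MAX, low favours MIN).
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning: same value as plain minimax, fewer expansions."""
    if node in leaves:
        return leaves[node]  # terminal node: return its utility
    if maximizing:
        value = float("-inf")
        for child in tree[node]:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # MIN would never let play reach here: prune
                break
        return value
    else:
        value = float("inf")
        for child in tree[node]:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:   # MAX would never choose this branch: prune
                break
        return value

print(alphabeta("root", float("-inf"), float("inf"), True))  # 3 (leaf b2 is pruned)
```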
What is knowledge:
Where does knowledge come from:
Empiricism: acquired by learning from sensory stimulation (connectionist AI).
Rationalism: acquired by reasoning (classic AI).
A premise in relation to a well-defined domain, class, or condition.
A premise about a concrete instance, hypothesis, or condition.
A conclusion based on the premises, i.e. the instance belonging to the domain, inheriting the characteristics defined for the class, or satisfying the condition (e.g., all men are mortal; Socrates is a man; therefore Socrates is mortal).
Forward reasoning:
Search connects initial fact(s) with desired conclusion(s).
States are combinations of facts and each rule is a method for generating a single successor (i.e., it defines a single transition).
Backward reasoning:
Search connects a final conclusion with one or more initial facts.
States are combinations of required conclusions and the transitions are defined by the rules. Each rule is a method for generating further 'required conclusions' from existing required conclusions.
What are the states?
→Forward reasoning: Facts
→Backward reasoning: Goal facts (Antecedents of conditions)
How does the successor function work?
→Forward reasoning: Application of rules to yield more facts
→Backward reasoning: Application of rules to yield more sub-goals
Generally, the search tree in forward reasoning is an OR tree, because each node in a search tree branches according to the alternative transitions for a given state. In backward reasoning, however, each node branches in two different ways, because we are usually compounding sub-goals that all need to be fulfilled for a solution. The branches from any node in a backward reasoning tree divide up into groups of logical AND branches, and each group presents an alternative chain (OR sub-tree). Hence, the search tree in backward reasoning is an AND/OR tree.
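A minimal forward-chaining sketch under assumed rules and facts: each rule maps a set of antecedent facts to one consequent, and each rule application generates one successor state:

```python
# Assumed rule base: (antecedents, consequent) pairs
rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
    ({"slippery", "cycling"}, "danger"),
]
facts = {"rain", "cycling"}  # initial facts

changed = True
while changed:               # repeat until no rule yields a new fact
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)  # one rule application = one transition
            changed = True

print(facts)  # now also contains wet_ground, slippery and danger
```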
The combination of rule base and inference method can be viewed as a representation of knowledge for the domain, called a knowledge base.
A system which employs knowledge represented in this way is called a knowledge-based system or expert system.
An ideal knowledge representation
Formal methods of knowledge representation are also known as logics.
A formal logic is a system (often: symbols, grammar) for representing and analyzing statements in a precise, unambiguous way.
A logic usually has
Propositions: P, Q, R, S, etc.
Connectives: AND (∧), OR (∨), NOT (¬), IMPLIES (⇒)
Constants: True (T), False (F)
Others: parentheses for grouping expressions together
Truth tables:
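For reference, the truth table for the connectives listed above:

```
P Q | ¬P | P∧Q | P∨Q | P⇒Q
T T |  F |  T  |  T  |  T
T F |  F |  F  |  T  |  F
F T |  T |  F  |  T  |  T
F F |  T |  F  |  F  |  T
```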
Semantics:
This is called a referential semantics. The way one fact follows another should be mirrored by the way one sentence is entailed by another.
Move from simple propositions to predicates
Representation of properties, e.g., mortal(person)
Representation of relationships, e.g., likes(fred, sausages)
Existentially quantified variables, e.g., ∃x (There exists an x such that…)
Universally quantified variables, e.g., ∀x (For all x we can say that…)
∧ Conjunction (AND)
⇒ Implication (IMPLIES)
∀ Universal quantifier (“For all elements X…”)
∃ Existential quantifier (“Some X…”, “There is at least one X…”)
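For example, combining the predicates and quantifiers above (an assumed illustration): ∀x (person(x) ⇒ mortal(x)) reads "every person is mortal", and ∃x likes(x, sausages) reads "someone likes sausages".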
I wear a hat if it’s sunny:
sunny → hat
I wear a hat only if it’s sunny:
hat → sunny
p only if q: p → q
On 'if' vs. 'only if'
The difference between propositional logic and predicate logic
A fundamental difficulty with sentential representations is the frame problem.
It affects all kinds of knowledge representation, but is especially apparent where representations are evaluated according to truth and rules are used to define the outcomes of actions.
In formal logic, it is hard to keep track of the state of things in the world.
Technical solutions:
Specify effects and non-effects.
For a complete state representation, we would have to specify not only the effects of an action on the environment, but also the non-effects of that action (everything that does not change) in so-called frame axioms
However, this is infeasible for larger frames, so practical solutions to the frame problem usually try instead to add more general variables or predicates that register when change occurs or can occur.
Semantic networks use the structure of a directed graph to represent knowledge.
In a semantic network, nodes represent concepts, and the links between nodes represent relationships between those concepts.
Non-standardised semantic networks lack uniform construction rules and therefore suffer from consistency and bias problems.
Semantic networks support limited inference (e.g., sets, inheritance, induction).
Spreading-activation search (like BFS) is inefficient and not guided by knowledge.
Elements of a frame
Features:
Basics
Elements of a Script
Formal categorisation and description of a body of knowledge
Ontology Components
Ontology Mapping
Ontology Problems
How to specify Ontologies
Ontology Specification
Uncertainty: related to vague or imprecise knowledge.
The link between uncertainty and probability: event frequencies.
Membership functions specify the degree to which something belongs to a fuzzy set; they represent distributions of possibility rather than probability (see the sketch after this list).
Build a fuzzy associative matrix.
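A minimal sketch of a triangular membership function; the fuzzy set "warm" and its parameters are made up for illustration:

```python
def triangular(x, a, b, c):
    """Degree of membership: rises linearly from a to the peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Assumed fuzzy set "warm" over temperature in degrees Celsius
for t in (10, 18, 22, 30):
    print(t, triangular(t, 15, 22, 30))  # 0.0, ~0.43, 1.0, 0.0
```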
Applications
Antilock Braking system (ABS)
Washing machines (weight / intensity)
Expert Systems
Uncertainty can be measured with entropy.
Uncertainty is a function of the flatness of the probability distribution: the flatter the distribution, the higher the entropy.
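A short sketch illustrating this point (the example distributions are made up):

```python
from math import log2

def entropy(dist):
    """Shannon entropy H = -sum(p * log2(p)); maximal for flat distributions."""
    return -sum(p * log2(p) for p in dist if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0   (flat: maximum uncertainty)
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 (peaked: low uncertainty)
```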
In Bayesian reasoning, probabilities can help establish an agent’s state of belief
P(C|E) = (P(C) ∗ P(E|C)) / P(E)
P(C) and P(E) are called prior probabilities.
P(E|C) is the likelihood.
P(C|E) is called the posterior probability.
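A worked example of Bayes' theorem with made-up numbers (C = patient has a condition, E = test is positive):

```python
p_c = 0.01              # prior P(C)
p_e_given_c = 0.90      # likelihood P(E|C)
p_e_given_not_c = 0.05  # false-positive rate P(E|not C)

# Total probability: P(E) = P(E|C)*P(C) + P(E|not C)*P(not C)
p_e = p_e_given_c * p_c + p_e_given_not_c * (1 - p_c)

# Posterior: P(C|E) = P(C) * P(E|C) / P(E)
p_c_given_e = p_c * p_e_given_c / p_e
print(round(p_c_given_e, 3))  # 0.154
```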
A Bayesian network is a graphical model: a directed graph with no cycles.
Nodes: Variables (conditional probabilities)
Edges: Interactions between nodes (conditional interdependencies)
Limitations of Bayesian Networks
Conditional entropy quantifies the uncertainty we have about some variable given we have observed another variable.
Bayes’ theorem allows calculating the conditional probabilities between variables
Bayesian networks explicitly encode the conditional dependencies between their variables
Bayesian networks implicitly encode the full joint probabilities of their variables also
Bayesian networks work great if there are strong correlations between probabilities.
Probability distributions with high entropy disturb the reasoning process as they have a tendency to reduce the clarity of results
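A sketch of how a network's structure encodes the full joint distribution; the three-node rain/sprinkler/wet-grass network and its probabilities are made up:

```python
# Assumed network: Rain -> Sprinkler, Rain -> WetGrass, Sprinkler -> WetGrass.
# The joint factorises along the edges: P(R, S, W) = P(R) * P(S|R) * P(W|R, S)
p_r = {True: 0.2, False: 0.8}
p_s_given_r = {True: 0.01, False: 0.4}                     # P(S=true | R)
p_w_given_rs = {(True, True): 0.99, (True, False): 0.80,
                (False, True): 0.90, (False, False): 0.0}  # P(W=true | R, S)

def joint(r, s, w):
    ps = p_s_given_r[r] if s else 1 - p_s_given_r[r]
    pw = p_w_given_rs[(r, s)] if w else 1 - p_w_given_rs[(r, s)]
    return p_r[r] * ps * pw

# The eight joint probabilities sum to 1, as they must
print(sum(joint(r, s, w) for r in (True, False)
          for s in (True, False) for w in (True, False)))  # 1.0
```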
DBNs are a class of knowledge representations for reasoning with conditional probabilities over an additional dimension (usually time)
DBNs often come in the form of discrete Markov processes, which involve a number of simplifying assumptions
Markov models are popular in Machine learning applications, such as speech and handwriting recognition, translation, or image processing but also others, such as the modelling of biological processes, or crypto-analysis
A kind of DBN, also called a Markov Process
Describes the evolution of a system over time under uncertainty
There are different types of Markov models depending on the use of discrete or continuous variables.
Markov Assumption:
A Markov model where the state of a system at any time t depends solely on the previous state at t-1 is called a first-order Markov model.
Discrete Markov chains:
A model that describes the states of a system and their transitions step by step.
Time is discrete, and so is the state description.
The conditional probabilities of state transitions are combined via matrix operations to determine the probability of state transitions over multiple steps.
The model can remain in the same state or return to a previous state.
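A minimal discrete Markov chain sketch (the states and transition matrix are made up); matrix powers give the multi-step transition probabilities:

```python
import numpy as np

states = ["sunny", "rainy"]
T = np.array([[0.8, 0.2],   # row = current state, column = next state
              [0.4, 0.6]])  # rows sum to 1; the diagonal allows staying put

p0 = np.array([1.0, 0.0])   # start: certainly sunny

p1 = p0 @ T                             # distribution after one step: [0.8, 0.2]
p3 = p0 @ np.linalg.matrix_power(T, 3)  # after three steps: [0.688, 0.312]
print(p1, p3)
```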