These are all the glossary terms from my AI class ["Csce876"]
\usepackage{amsmath}%
\setcounter{MaxMatrixCols}{30}%
\usepackage{amsfonts}%
\usepackage{amssymb}%
\usepackage{graphicx}
\usepackage{geometry}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\geometry{left=0.5in,right=0.5in,top=0.5in,bottom=0.5in}
%%end-prologue%%
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
$A^*$ Search & The most widely-known form of best-first
search, $A^*$ evaluates nodes using the evaluation function: \[ f(n)
= g(n) + h(n) \] $A^*$ has several nice properties. Using the
Tree-Search algorithm and an {\em admissible} heuristic function,
$A^*$ is: \begin{itemize} \item Optimal \item Complete \item
Optimally efficient \end{itemize} If the heuristic function is
consistent (and Graph-Search is used), $A^*$ is: \begin{itemize}
\item Optimal \item Complete \end{itemize} (A small worked example
follows this table.) \\ \hline
$C^*$ & This is the actual cost of the optimal solution path. \\
\hline
$f(n)$ (Evaluation Function) & The evaluation function is "the"
problem-specific knowledge beyond the definition of the problem
itself that is used to determine which node is chosen for expansion
in the search process. \\ \hline
$g(n)$ (Cost to node function) & This function gives the actual
cost to reach node $n$ on the current path. \\ \hline
$h(n)$ (Heuristic function) & The estimated cost of the cheapest
path from $n$ to a (the) goal. \\ \hline
$k-$consistency & A CSP is $k-$consistent if, for any set of $k-1$
variables and for any consistent assignment to those variables, a
consistent value can always be assigned to any $k^{th}$ variable.
CITATION(aitextbook) (p 147) \\ \hline
$n$-queens problem & The goal of the $n$-queens problem is to
place $n$ queens on an $n \times n$ chess board so that no queen is
in a position to attack another queen. \\ \hline
Abstraction (definition, goal, example) & The process of removing
detail from a representation is called abstraction. We say the
abstraction is valid if we can expand any abstract solution into a
solution in the more detailed world. \\ \hline
Admissible Heuristic & $h(n)$ is an admissible heuristic provided
that $h(n)$ never overestimates the cost to reach the goal. The
direct consequence of this is that $f(n)$ for $A^*$ never
overestimates the true cost of a solution through $n$. \\ \hline
Agent & An agent is just something that acts CITATION(aitextbook) \\
\hline
Alldiff constraint & A constraint that indicates that all involved
variables must be assigned different values. CITATION(aitextbook)
(p 140) \\ \hline
Alliances & Alliances occur in many games involving more than two
players. Alliances involve a group of players that are cooperating
in a competitive environment. CITATION(aitextbook) (p 166) \\ \hline
Alpha-beta pruning & This procedure is based on a DFS. Consider a
node $n$ somewhere in the tree such that Player has a choice of
moving to that node. If Player has a better choice $m$ either at the
parent node of $n$ or at any choice point further up, then $n$ {\em
will never be reached in actual play}. So once we have found out
enough about $n$ (by examining its descendants in a DFS) to reach
this conclusion, we can prune it. Alpha-beta pruning gets its name
from the two parameters: \begin{description} \item[$\alpha =$] The
value of the best choice we have found so far at any choice point
along the path for $\max$. \item[$\beta =$] The value of the best
choice we have found so far at any choice point along the path for
$\min$. \end{description} CITATION(aitextbook) (p~168-169) \\ \hline
And-elimination & $\frac{A \wedge B}{A}$ CITATION(aitextbook) (p
211)
\\ \hline
Anytime algorithm & An anytime algorithm is an algorithm whose
output quality improves gradually over time, so that it has a
reasonable decision ready whenever it is interrupted.
CITATION(aitextbook) (p 971). \\ \hline
\end{tabular}
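\medskip \noindent A small worked example for the $A^*$ entry above
(the numbers are chosen here purely for illustration and are not from
the textbook): suppose the frontier contains two nodes $n_1$ and
$n_2$ with
\[ g(n_1)=4,\; h(n_1)=6 \quad\text{and}\quad g(n_2)=7,\; h(n_2)=2 . \]
Then $f(n_1)=10$ and $f(n_2)=9$, so $A^*$ expands $n_2$ first; if $h$
is admissible, neither $f$ value overestimates the cost of the
cheapest solution through that node. \medskip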
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Arc consistency & An arc from $X$ to $Y$ is consistent if, for
every value $x$ of $X$ there is some value $y$ of $Y$ that is
consistent with $x$. CITATION(aitextbook) (p 145) \\ \hline
Arity (of a relation or a function) & Arity is the number of
arguments CITATION(aitextbook) (p 246) \\ \hline
Assertion & A sentence of first-order logic added to the KB is
called an assertion. CITATION(aitextbook) (p 253) \\ \hline
Atmost constraint & The atmost constraint has a number $P$ and
variables $P_1,...,P_k$ that are assigned numbers such that
$\sum_{i=1}^{k} P_i \leq P$. CITATION(aitextbook) (p 148) \\ \hline
Atomic sentences & Consists of a single proposition symbol.
CITATION(aitextbook) (p 204) \\ \hline
Autonomy (autonomous agent) & Not relying on prior knowledge of
its designer, but rather on its own percepts. CITATION(aitextbook) \\
\hline
Axiom & Axioms provide the basic factual information from which
conclusions can be derived. CITATION(aitextbook) (p 255) \\ \hline
Background knowledge & These are the sentences of information that a
knowledge base is initialized with. CITATION(aitextbook) (p 196) \\
\hline
Backjumping & The backjumping method backtracks to the most recent
variable in the conflict set. CITATION(aitextbook) (p 149) \\ \hline
Backtracking Search & A variant of depth-first search that uses
still less memory. In backtracking, only one successor is generated
at a time rather than all successors; each partially expanded node
remembers which successor to generate next. Thus the memory
requirement is only $O(m)$ instead of $O(bm)$. \\ \hline
Backtracking search & The backtracking search is a depth-first
search that chooses values for one variable at a time and backtracks
when a variable has no legal values left to assign.
CITATION(aitextbook) (p 141) \\ \hline
Backward chaining & Backward chaining starts with the query for
the fact $q$ and finds those implications in the knowledge base that
conclude $q$. If all of the premises of one of these implications
can be proved true (through backward chaining), then $q$ is true.
CITATION(aitextbook) (p 220) \\ \hline
Belief State & In a sensorless problem, the agent may only
know what states it can possibly start in. By applying the successor
function to each of these states, it can determine the set of states
that could be reached by a specific action. The set of states that
the agent could be in at any one time is the {\em belief state}. \\
\hline
Best-first Search & Best-first search is an instance of the
general Tree-Search or Graph-Search algorithm in which a node is
selected for expansion based on an evaluation function, $f(n)$. \\
\hline
Biconditional & $A \Leftrightarrow B$ is a biconditional that
states that $A$ is true if and only if $B$ is true or alternately if
$A \Rightarrow B$ and $B \Rightarrow A$. CITATION(aitextbook) (p
205)
\\ \hline
Bidirectional search & The strategy is to run two simultaneous
searches, one forward from the initial state and one backward from
the goal state. This is not always possible, and requires that the
successor function be invertible. This should cut the search depth
in half, so that the storage requirement is $O(b^{d/2}) \ll O(b^d)$.
\\ \hline
Binary constraint & A binary constraint relates two variables.
CITATION(aitextbook) (p 140) \\ \hline
Binding list & When posing a query $\exists x~Predicate(x)$,
answering yes is not very helpful. The standard form of an answer to
such a query includes a set of values that satisfy the query. This
set is called a substitution or binding list. CITATION(aitextbook)
(p 254) \\ \hline
Blind Search & See Uninformed search \\ \hline
Body (of a Horn clause) & The negative literals of a horn clause
form the body. CITATION(aitextbook) (p 218) \\ \hline
Boolean CSPs & Boolean CSPs are CSPs where the domains of the
variables are either true or false. CITATION(aitextbook) (p 139) \\
\hline
Bounded differences (constraint of) & A bounded difference
constraint has the form $const_1 ~~ \theta ~~ x-y ~~ \theta ~~
const_2$ where $\theta \in \{<,>,\leq,\geq\}$. CITATION(ainotes) \\
\hline
Bounds Propagation & We say that a CSP is bounds-consistent if for
every variable $X$, and for both the lower bound and upper bound
values of $X$, there exists some value of $Y$ that satisfies the
constraint between $X$ and $Y$, for every variable $Y$. This kind of
constraint propagation is called {\em bounds propagation}.
CITATION(aitextbook) (p 148) \\ \hline
Branching Factor & The {\em branching factor} is the maximum
number of successors of any node. \\ \hline
Breadth-first Search & BFS expands nodes from a FIFO queue.
That is, as each node is generated it is put into a FIFO queue, so
all nodes at one depth are expanded before any node at the next
depth. The storage requirement is $O(b^{d+1})$, which quickly becomes
prohibitive. (An illustrative calculation follows this table.) \\ \hline
\end{tabular}
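\medskip \noindent An illustrative calculation for the breadth-first
search entry above (the numbers are my own, not the textbook's): with
branching factor $b=10$ and shallowest goal depth $d=5$,
\[ b^{d+1} = 10^{6}, \]
so on the order of a million nodes may have to be generated and
stored before the goal is found, which is why the $O(b^{d+1})$
memory requirement is called prohibitive. \medskip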
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Chinese room experiment & The system consists of a human, who
understands only English (CPU), equipped with a rule book (PROGRAM),
written in English, and various stacks of paper, some blank, some
with indecipherable inscriptions (memory). The system is inside a
room with a small opening to the outside. Through the opening appear
slips of paper with indecipherable symbols. The human finds matching
symbols in the rule book, and follows the instructions. The
instructions may include writing symbols on new slips of paper,
finding symbols in the stacks, rearranging the stacks and so on.
Eventually, the instructions will cause one or more symbols to be
transcribed onto a piece of paper that is passed back to the outside
world with the correct response. Searle argues against the Turing
test in that just running the right program does not generate
understanding. CITATION(aitextbook) \\ \hline
Chronological backtracking & The process of a backtrack search
where we backup to the preceding variable and try a different value
is called chronological backtracking, because the most recent
decision point is revisited. CITATION(aitextbook) (p 148) \\ \hline
Coercion & Coercion is the ability of an agent to reach a goal
state even in a sensorless problem. The agent knows only that it
starts in one of a set of states, but it can reach a goal by
reasoning about sets of states. See the vacuum cleaner agent on
p 84 for a good example. \\ \hline
Cognitive science & The interdisciplinary field of cognitive
science brings together computer models from AI and experimental
techniques from psychology to try to construct precise and testable
theories of the workings of the human mind. CITATION(aitextbook) \\
\hline
Competitive Agent & In a multiagent system, when one's performance
depends upon the performance of another in an inverse relationship,
then the two agents are in competition and may be called competitive
agents. \\ \hline
Complementary literals & Literals such that one is a negation of
the other. CITATION(aitextbook) (p 214) \\ \hline
Complete-state formulation & A {\em complete-state formulation}
starts with a complete proposed solution and then changes it to
find a solution that satisfies the goal test. \\ \hline
Completeness & If there exists a solution, then the algorithm is
guaranteed to find a solution. \\ \hline
Completeness (of inference) & An inference procedure $i$ is complete
if whenever KB $\models \alpha$, it is also true that KB $\vdash_i
\alpha$. That is, the procedure will answer any question whose
answer follows from what is known by the KB. CITATION(ainotes)
(10.23)
\\ \hline
Compositional (languages) & In a compositional language, the
meaning of a sentence is a function of the meaning of its parts.
CITATION(aitextbook) (p 241) \\ \hline
Compositionality & In a compositional language, the meaning of a
sentence is a function of the meaning of its parts.
CITATION(aitextbook) (p 241) \\ \hline
Condition-action rule & A condition action rule is a tuple
(Condition, Action). When the condition is matched by the sensor
input, the Simple Reflex agent will perform the "Action". \\ \hline
Conflict set & A more intelligent approach to backtracking than
chronological backtracking is to go all the way back to one of the
set of variables that {\em caused the failure}. This set is called
the {\bf conflict set}. CITATION(aitextbook) (p 148) \\ \hline
Conflict-directed backjumping & The backtracking method that backs
up to the source of a conflict that prohibits the finding of a
consistent assignment to the remaining variables.
CITATION(ainotes,aitextbook) (p 149) \\ \hline
Conjunctive normal form & A sentence expressed as a conjunction of
disjunctions of literals is said to be in conjunctive normal form.
CITATION(aitextbook) (p 215) \\ \hline
Connectionism & Connectionism models mental or behavioral
phenomena as the emergent processes of interconnected networks of
simple units. There are many different forms of connectionism, but
the most common forms utilize neural network models.
CITATION(Wikipedia:Connectionism) \\ \hline
Consistency (as a property of the $h$ function) & A heuristic
$h(n)$ is consistent if, for every node $n$ and every successor $n'$
of $n$ generated by any action $a$, the estimated cost of reaching
the goal from $n$ is no greater than the step cost of getting to
$n'$ plus the estimated cost of reaching the goal from $n'$:
\[ h(n) \leq c(n,a,n') + h(n'). \] (A numerical check follows this
table.) \\ \hline
Consistent assignment & An assignment that does not violate any
constraints is called a consistent or legal assignment.
CITATION(aitextbook) (p 137) \\ \hline
Constant symbol & A symbol in FOL that stands for an object.
CITATION(aitextbook) (p 246) \\ \hline
Constraint graph & A graph where nodes correspond to the variables
of the problem and arcs correspond to the constraints.
CITATION(aitextbook) (p 138) \\ \hline
Constraint hypergraph & A constraint hypergraph contains two types
of nodes: variables and constraints. Arcs from constraint nodes to
variable nodes indicate that the variable is involved in the
constraint. This is used for representing constraints with an arity
greater than 2. CITATION(ainotes) \\ \hline
\end{tabular}
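\medskip \noindent A quick numerical check of the consistency
condition above (values chosen for illustration): if the step cost is
$c(n,a,n')=2$ and the successor's estimate is $h(n')=5$, then
consistency requires
\[ h(n) \leq c(n,a,n') + h(n') = 7 , \]
so an estimate such as $h(n)=6$ is allowed, while $h(n)=8$ would
violate consistency. \medskip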
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Constraint propagation & This is the general term for propagating
the implications of a constraint on one variable onto other
variables. CITATION(aitextbook) (p 145) \\ \hline
Constraint scope & The scope of a constraint is the set of
variables that are involved in the constraint CITATION(ainotes). \\
\hline
Constructive Search & This is the standard search process where we
assign a value to a variable at each node, starting from the first
variable and working to the last. CITATION(ainotes). \\ \hline
Contingency problem & If the environment is partially observable
or if actions are uncertain, then the agent's percepts provide new
information after each action. Each possible percept defines a {\em
contingency} that must be planned for. A contingency problem is {\em
adversarial} if the uncertainty is caused by the actions of other
agents. \\ \hline
Continuous domains & Where the variables require continuous values
such as the real numbers we say we are using continuous domains.
Problems that require precise values often require continuous
domains. CITATION(aitextbook) (p 139) \\ \hline
Contours & The fact that $f$-costs are nondecreasing along any
path also means that we can draw contours in the state space, just
like the contours in a topographic map. \\ \hline
Crossover & In genetic algorithms, when two states are crossed we
choose a point at which to split both states and "cross" the pieces:
one offspring gets state 1's first part and state 2's second part,
and the other gets state 2's first part and state 1's second part.
The split point is called the "crossover" point. (A tiny
illustration follows this table.) \\ \hline
Current State & Local search algorithms operate using a single
"current state" and generally move only to neighboring states. The
current state is then a complete description of a solution that may
not meet all of the constraints or may not be optimal. (Of course it
could be optimal too, but then it would be a solution) \\ \hline
Cutset conditioning & Cutset conditioning solves a CSP by finding a
(small) cycle cutset, instantiating its variables in every consistent
way, and solving the remaining tree-structured problem for each
instantiation. CITATION(aitextbook) (p 154) \\
\hline
Cycle cutset & A subset $S$ called the cycle cutset is a set of
variables of a CSP such that the constraint graph becomes a tree
after the removal of $S$ CITATION(aitextbook) (p 153). \\ \hline
Data driven & Data driven reasoning is reasoning in which the
focus of attention starts with the known data. CITATION(aitextbook)
(p 219) \\ \hline
Data Mining & Data mining, also known as knowledge-discovery in
databases (KDD), is the practice of automatically searching large
stores of data for patterns. CITATION(Wikipedia:Datamining) \\
\hline
Decision problem & Is any problem that must be answered yes or
no. \\ \hline
Deduction & Logical inference. That is to use the rules of logic
to infer a conclusion. We say a conclusion is reached by deduction
if it is logically inferred by the premise(s). \\ \hline
Deduction theorem & For any sentences $\alpha$ and $\beta$,
$\alpha \models \beta$ if and only if the sentence $(\alpha
\Rightarrow \beta)$ is valid. CITATION(aitextbook) (p 210) \\ \hline
Definite clause & Horn clauses with exactly one positive literal
are called definite clauses. CITATION(aitextbook) (p 218) \\ \hline
Degree (ordering heuristic) & Attempts to reduce the branching
factor by selecting the variable that is involved in the largest
number of constraints. CITATION(aitextbook) (p~144) \\ \hline
Depth-First Search & This strategy expands nodes from a LIFO
queue, also known as a stack. This results in nodes being expanded
as deep as they can before exploring other nodes at the same level.
Thus it is called the DFS. The space requirements for DFS are
$O(bm)$. There is no guarantee that this will ever find a goal state
because it may get caught in a loop. \\ \hline
Depth-Limited Search & This is just a DFS where we limit
the depth to which the search may descend. It is also not guaranteed
to find a solution in general, but we may guarantee a goal state is
found if we know a goal exists within a certain depth. This has the
same requirements as DFS. \\ \hline
Diameter (of a graph) & Given a graph $G=(V,E)$, the distance
between two vertices $u$ and $v$ in $G$, denoted $d(u,v)$, is the
length of the shortest $u-v$ path in $G$. The {\em diameter} of $G$,
denoted diam$(G)$, is the maximum distance between any two vertices
in $G$. CITATION(GraphTheoryThulasiramanSwamy) \\ \hline
Domain (of a model) & The domain of a model is the set of objects
that it contains. CITATION(ainotes) (p 245) \\ \hline
Domain (of a variable) & Given a CSP, the domain of a variable is
the set of values that the variable may be assigned.
CITATION(ainotes,aitextbook) (p 137) \\ \hline
Domain/degree heuristics (not in textbook) & The degree heuristic
chooses the variable to assign that is involved in the largest
number of constraints on other unassigned variables.
CITATION(aitextbook) (p 144 - it is in the book) \\ \hline
\end{tabular}
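\medskip \noindent A tiny illustration of crossover (the states are
invented here for the example): crossing the parent states
$(1,2,3,4)$ and $(5,6,7,8)$ at a crossover point after the second
position yields the offspring
\[ (1,2,7,8) \quad\text{and}\quad (5,6,3,4). \]
\medskip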
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Domination (of two functions) & We say a heuristic function
$h_1(n)$ dominates another heuristic function $h_2(n)$ if \[ h_1(n)
\geq h_2(n) \quad \text{for all } n. \] \\ \hline
Effective branching factor CITATION(ainotes) & One way to
characterize the quality of a heuristic is the {\em effective
branching factor} $b^*$. In words, this is the branching factor that a
uniform tree of depth $d$ would have in order to contain $N+1$
nodes, where $d$ is the depth of the solution and $N$ is the
number of nodes generated. That is, summing over the depths starting
at $0$ we have: \[ N + 1 = 1 + b^* + (b^*)^2 + ... + (b^*)^d =
\frac{(b^*)^{d+1}-1}{b^* - 1} \] (A worked instance follows this
table.) \\ \hline
Entailment & Let KB be a knowledge base of sentences and let
$\alpha$ be a sentence. $\alpha$ is entailed by a KB if the models
of the KB are {\em all} models of $\alpha$. CITATION(ainotes) (11.5)
For my own notes: Entailment is similar to implication except the
domain of the variables are sentences (or propositions). KB $\models
\alpha$ iff all models of KB are models of $\alpha$. ($i.e., M(KB)
\subseteq M(\alpha)$) CITATION(ainotes) (11.5) \\ \hline
Environment: Deterministic vs. Stochastic & If the next state of
the environment is completely determined by the current state and
the action executed by the agent, then we say the environment is
deterministic, otherwise we say it is stochastic. \\ \hline
Environment: Discrete vs. Continuous & The discrete/continuous
distinction can be applied (1) to the state of the environment - is
it constantly changing?, (2) to the way {\em time} is handled - is
it continuous in the application or is it handled as discrete
points?, and (3) to the {\em percepts} and {\em actions} of the
agent - How can the agent perceive the environment, and how does it
act (continuously or discretely)?. \\ \hline
Environment: Episodic vs. Sequential & If the agent's experience is
broken up into atomic episodes, where the choice of action in each
episode does not depend on the actions taken in previous episodes,
we call the task environment episodic. If, on the other hand, current
actions depend upon the sequence of actions taken previously, then
we call the task environment sequential. \\ \hline
Environment: Observable: Fully vs Partially & A task environment
is effectively fully observable if the sensors detect all aspects
that are {\em relevant} to the choice of action. Otherwise we say
the task environment is partially observable. \\ \hline
Environment: Single agent vs. Multiagent & The key distinction is
whether an agent's behavior is best described as maximizing a
performance measure whose value depends on other agents' behavior.
\\ \hline
Environment: Static vs. Dynamic & If the environment can change
while the agent is deliberating, we say the environment is
\textbf{dynamic}. If the environment does not change while the agent
is deliberating, we call the environment \textbf{static}. If the
score changes (decreases) as the agent deliberates in a static
environment, we call the environment \textbf{semidynamic}. \\ \hline
Environment: Strategic & If the environment is deterministic
except for the actions of other agents, we say that the environment
is strategic. (This was under Deterministic vs Stochastic in the
book.) \\ \hline
Epistemological level & Abstract description of what the agent
knows about the world. CITATION(ainotes) (10.4) \\ \hline
Exploration Problem & When the states and actions of the
environment are unknown (e.g. you are dropped somewhere where you
don't speak the language) the agent must act to discover them. This
type of problem can be viewed as an extreme case of the contingency
problem. \\ \hline
Extended interpretation & An extended interpretation specifies a
domain element in addition to the mapping given by the
interpretation. CITATION(aitextbook) (p 249). \\ \hline
Extensionally defined constraint (not in textbook) & Extensionally
defined constraints: all allowed tuples are listed. This is
practical for defining arbitrary constraints. CITATION(ainotes) \\
\hline
Factoring & Factoring is the last step in resolution and involves
removing multiple copies of literals from the expression.
CITATION(aitextbook) (p 214) \\ \hline
Finite domains & Variables with finite (or discrete) domains are
the simplest kind of CSPs. CITATION(aitextbook) (p 139) \\ \hline
\end{tabular}
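\medskip \noindent A worked instance of the effective branching
factor above (these particular numbers are my own illustration): if a
search generates $N=52$ nodes and the solution is found at depth
$d=5$, then $b^*$ solves
\[ 52 + 1 = 1 + b^* + (b^*)^2 + \cdots + (b^*)^5 , \]
which gives $b^*\approx 1.92$; a heuristic whose $b^*$ stays close to
$1$ across many problem instances is a good heuristic. \medskip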
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
First-Order Logic & First-Order logic adds to propositional logic
\begin{itemize} \item symbols: objects, properties, functions and
relations \item the quantifiers $\exists$ and $\forall$
\end{itemize} CITATION(ainotes) (12.4,5) \\ \hline
Fitness function & In genetic algorithms each state produced in
the next generation is evaluated or rated using a fitness function.
This determines how "good" the produced state is. \\ \hline
Fixed point & The point at which applying inference rules produces
no new facts. CITATION(aitextbook) (p 218) \\ \hline
Forward chaining & If all the premises of an implication are
known, then its conclusion is added to the list of known facts. The
process continues until the desired fact is known. This process of
chaining horn clauses together is called forward chaining.
CITATION(aitextbook) (p 218) \\ \hline
Forward Checking & Whenever a variable $X$ is assigned, forward
checking looks at each unassigned variable $Y$ that is connected to
$X$ by a constraint and deletes from $Y$'s domain any value that is
inconsistent with the value chosen for $X$. (A small example follows
this table.) CITATION(aitextbook) (p 144) \\ \hline
Fringe & The collection of nodes that have been generated but not
yet expanded is called the fringe (p 70). \\ \hline
Function & In terms of FOL, a function is a relation in which
there is only one "value" for a given "input." CITATION(aitextbook)
(p 242) \\ \hline
Function (mathematical definition) & A relation $f:A \rightarrow
B$ is said to be a function if and only if $f$ maps each $a \in A$
to only one value in $B$. (Dr. Anderson) \\ \hline
Genetic algorithm & A genetic algorithm is a variant of stochastic
beam search in which successor states are generated by combining two
parent states. \\ \hline
Global constraint & Involves all the variables in a CSP.
CITATION(ainotes) \\ \hline
Global optimum (maximum or minimum) & If elevation in a state
space landscape corresponds to cost, then the aim is to find the
lowest valley (minimum). If the landscape corresponds to an
objective function, then the aim is to find the highest peak. \\
\hline
Goal Test & The goal test determines whether a given state is a
goal state. \\ \hline
Goal-directed reasoning & Goal directed reasoning is useful for
answering specific questions such as "What shall I do now?" and
"Where are my keys?" Backward chaining is an example of goal
directed reasoning. CITATION(aitextbook) (p 220) \\ \hline
Gradient ascent & The gradient defines the direction of the
steepest ascent (in this case) or descent. (From calculus) \\ \hline
Greedy Algorithm & A greedy algorithm always makes the choice that
looks best at the moment. CITATION(IntroToAlgorithms) Greedy
best-first search (also called simply "greedy search") tries to
expand the node that is closest to the goal, on the grounds that this
is likely to lead to a solution quickly. Thus it evaluates nodes
using just the heuristic function \[ f(n) = h(n) \] \\ \hline
Greedy local search & Hill climbing is sometimes called a greedy
local search because it grabs a good neighbor state without thinking
ahead about where to go next. \\ \hline
Ground term & A term with no variables is called a ground term.
CITATION(aitextbook) (p 249) \\ \hline
Grounding & Grounding is the process of determining if the logical
reasoning processes and the real environment in which the agent
exists are consistent. The question is, "how do we know that KB is
true in the real world?" Of course if you worried about this, you
would never have time to worry about anything else.
CITATION(aitextbook) (p 204) \\ \hline
Manhattan distance & This is the sum of the distances to the object
in each dimension. Sometimes called the city-block distance because
you can't cut through the buildings. \\ \hline
Head (of a Horn clause) & The positive literal of a horn clause is
called the head. CITATION(aitextbook) (p 205) \\ \hline
Heuristic Function & See $h(n)$. \\ \hline
Heuristic Search & See "Informed search" \\ \hline
Higher-Order Logic & Higher-Order logic views the relations and
functions in first-order logic as objects in themselves.
CITATION(aitextbook) (p 244) \\ \hline
Hill climbing: first choice & Implements stochastic hill climbing
by generating successors randomly until one is generated that is
better than the current state. \\ \hline
\end{tabular}
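\medskip \noindent A small example of forward checking (using the
familiar Australia map-coloring CSP; the particular assignment is my
own choice): suppose every variable may take the value red, green, or
blue, and we assign $WA = red$. Forward checking then deletes red
from the domains of $WA$'s neighbors $NT$ and $SA$, leaving them only
green and blue, so no later assignment can conflict with $WA$ on
those constraints. \medskip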
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Hill climbing: random restart & Because hill climbing will find it
harder and harder to find a better state as time goes on, we may
stop the search process and start it from a new random location.
This is called random restart. \\ \hline
Hill climbing: stochastic & Stochastic hill climbing chooses at
random from among the uphill moves where the probability of
selection can vary with the steepness of the uphill move. \\ \hline
Horn clauses & A horn clause is a disjunction of literals of which
at most one is positive. CITATION(aitextbook) (p 204) \\ \hline
Incremental formulation & An incremental formulation involves
operations that augment or {\em build} the state description. The
goal is to build a state that satisfies the goal test and at any
point in the process, if we find we can not satisfy the goal state
using the last augmentation, we may back track.\\ \hline
Induction & General rules acquired by exposure to repeated
associations between their elements. CITATION(aitextbook) \\ \hline
Inference & Deriving new sentences from old. In the case of
logical agents, inference must obey the fundamental requirement that
when one asks a question of the knowledge base, the answer should
follow from what has been told the knowledge base previously.
CITATION(aitextbook) (p 195) \\ \hline
Infinite domains & Domains where the size of the set is infinite.
This can include discrete domains such as the integers.
CITATION(aitextbook) (p 139) \\ \hline
Informed Search & Strategies that know whether one non-goal state
is more promising than another non-goal state are called informed or
heuristic searches. \\ \hline
Initial State & The state that the agent starts in. \\ \hline
Instantiated variable & A variable is instantiated if it has been
assigned a value from its domain. CITATION(ainotes) \\ \hline
Intended interpretation & The intended interpretation is the one
possible interpretation that we specify. CITATION(aitextbook) (p
246)
\\ \hline
Intensionally defined constraint (not in textbook) & Intensionally
defined constraints are used when it is not practical or even
possible to list all tuples. This is the form we normally see, using
variables and relationship predicates. CITATION(ainotes) \\ \hline
Interpretation & The semantics of FOL must relate sentences to
models in order to determine truth. An interpretation specifies
exactly which objects, relations and functions are referred to by
the constant, predicate and function symbols.CITATION(aitextbook) (p
246) An {\em interpretation} for a set $X$ of formulas {\bf is a
domain} $D$ together with a rule that \begin{itemize} \item assigns
to each $n$-place predicate symbol (that occurs in a formula) of $X$
an $n$-place predicate in $D$; \item assigns to each $n$-place
operation symbol of $X$ an $n$-place operation in $D$; \item assigns
to each constant symbol of $X$ an element of $D$; \item assigns to
$=$ the identity predicate $=$ in $D$, defined by: $a=b$ is true if
and only if $a$ and $b$ are the same. CITATION(FOML) \end{itemize} \\
\hline
Intractability & We say an algorithm is intractable if the time
to run the algorithm increases exponentially as a function of the
size of the problem. (From 310 or 823?) \\ \hline
Intractability & A problem is called intractable if the time
required to solve instances of the problem grows exponentially with
the size of the instances. (p 8) \\ \hline
Iterative-Deepening Search & Also called {\em iterative
deepening depth-first search}, this is just a DFS performed to a
depth limit $d$, where $d$ iteratively increases from $0$ to some
limiting value $l$ until a goal is found; this occurs when $d$
reaches the depth of the shallowest goal state. The space complexity
is $O(bd)$, comparable to DFS. Unlike DFS it is complete, and it
finds the optimal goal when the path cost is a nondecreasing function
of the depth of the node. {\em In general, iterative deepening is the
preferred uninformed search method when there is a large search space
and the depth of the solution is not known}. \\ \hline
Iterative-Lengthening Search & Identical to the
Iterative-Deepening Search except that the search uses an increasing
path-cost limit instead of an increasing depth limit. This
guarantees optimality while avoiding expensive memory requirements.
\\ \hline
k-CNF & A sentence expressed as a conjunction of disjunctions of
literals is in conjunctive normal form; it is in $k$-CNF if every
clause has exactly (or at most) $k$ literals. (An example follows
this table.) CITATION(aitextbook) (p 215) \\ \hline
\end{tabular}
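\medskip \noindent For example (a sentence made up here to illustrate
the $k$-CNF entry above), the sentence
\[ (A \vee \neg B) \wedge (\neg A \vee C) \]
is in CNF, and since every clause contains exactly two literals it is
in $2$-CNF. \medskip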
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Knowledge acquisition & Knowledge acquisition is the process of
acquiring knowledge. This can be done by exploration or by consulting
an outside expert to obtain the knowledge required (p 261). \\
\hline
Knowledge base & A set (conjunction) of sentences that represent
facts about the world. CITATION(aitextbook) (p 195) \\ \hline
Knowledge representation & One of the six disciplines of AI
dedicated to the "representation" of knowledge that an entity knows
or senses. CITATION(aitextbook) \\ \hline
Knowledge representation language & A formal language used to
represent sentences of information in the knowledge base.
CITATION(aitextbook) (p 195) \\ \hline
Leaf Node & A leaf node is an element of the fringe, that is, a
node with no successors in the tree. \\ \hline
Limited rationality & It is not always possible to do precisely
the right thing in complicated environments due to high computational
demands. CITATION(aitextbook) \\ \hline
Linear constraints & Constraints in which each variable appears
only in linear form. CITATION(ainotes) \\ \hline
Linear Programming & Linear programming involves CSPs where
constraints must be linear inequalities forming a convex region.
CITATION(aitextbook) (p 140) \\ \hline
Literal & A literal is a single symbol or its negation.
CITATION(aitextbook) (p 204) \\ \hline
Local Optimum & This corresponds to a peak or a valley in the
state space landscape that is not the best solution. This is where a
search may get stuck and need a random restart. \\ \hline
Local Search & Local searches do not care about the path to the
solution, and so they don't keep track of where they have been. They
use very little memory and can often find solutions in large or
infinite (continuous) state spaces for which systematic algorithms
are unavailable. \\ \hline
Logical connectives & Negation, conjunction, disjunction,
implication, and biconditional, which are respectively: $\neg,
\wedge, \vee, \Rightarrow, \Leftrightarrow$. CITATION(aitextbook)
(p 204-205) \\
\hline
Logical equivalence & Two sentences $A$ and $B$ are logically
equivalent if they are true in the same set of models. This is
written: $A \equiv B$. CITATION(aitextbook) (p 210) \\ \hline
Min-conflict heuristic & The min-conflicts heuristic chooses a
value for a variable that causes the fewest conflicts with other
variables. CITATION(ainotes) \\ \hline
Minimax algorithm & The minimax algorithm computes the minimax
decision from the current state. It uses a simple recursive
computation of the minimax values of each successor state, directly
implementing the defining equations. The recursion proceeds all the
way down to the leaves of the tree, and then the minimax values are
backed up through the tree as the recursion unwinds. (A worked
example follows this table.) CITATION(aitextbook) (p 165) \\ \hline
Minimax decision & When the minimax value has been computed for
each child of the current node, the minimax decision is the action
leading to the child with the largest value. That is, we choose
the action that is the optimal choice (max value) because it
guarantees the highest minimum value. CITATION(aitextbook) (p 165) \\
\hline
Minimax value &
\begin{equation}
Minimax\text{-}Value(n)=\left\{
\begin{array}[c]{ll}
Utility\left(n\right) & \text{if }n\text{ is a terminal state}\\
\max_{s\in Succ(n)}Minimax\text{-}Val(s) & \text{if }n\text{ is a }\max\text{ node}\\
\min_{s\in Succ(n)}Minimax\text{-}Val(s) & \text{if }n\text{ is a
}\min\text{ node.}
\end{array}
\right.
\end{equation}
\\ \hline
Minimum remaining values (a.k.a. least domain variable) & A
heuristic that chooses the variable with the fewest legal
values. CITATION(aitextbook) (p 143) \\ \hline
Missionaries and cannibals problem & Three missionaries and three
cannibals are on one side of a river, along with a boat that can
hold one or two people. Find a way to get everyone to the other
side without ever leaving a group of missionaries in one place
outnumbered by the cannibals in that place (p 90). \\ \hline
Model (in general) & A model is a world in which a sentence is
true under a particular interpretation. CITATION(ainotes) (11.4) \\
\hline
Model checking & Given a knowledge base KB and a sentence
$\alpha$, model checking enumerates all possible models to check
that a sentence $\alpha$ is true in all models in which KB is true.
OR, model checking ensures that $\alpha$ is true in each model
contained in KB. CITATION(aitextbook) (p 202) \\ \hline
Model in propositional logic & A model is a mapping from
proposition symbols directly to truth or falsehood. The models of a
sentence are the mappings that make the sentence true.
CITATION(ainotes) \\ \hline
Modus ponens & $\frac{\alpha \Rightarrow \beta,~~ \alpha}{\beta}$
CITATION(aitextbook) (p 211) \\ \hline
\end{tabular}
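\medskip \noindent A worked minimax example (the game tree and
utilities are invented for illustration): let the root be a $\max$
node with two successors $A$ and $B$, both $\min$ nodes, where $A$'s
terminal successors have utilities $3$ and $12$ and $B$'s have
utilities $2$ and $8$. Then
\[ Minimax\text{-}Value(A)=\min(3,12)=3,\quad
Minimax\text{-}Value(B)=\min(2,8)=2,\quad
Minimax\text{-}Value(root)=\max(3,2)=3, \]
so the minimax decision at the root is the move to $A$. \medskip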
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Monotonicity (as a property of the $h$ function) & A heuristic
function $h(n)$ is said to be monotone if it satisfies \[ h(n) \leq
c(n,n') + h(n') \quad \forall\, n, n' \text{ with } n' \in SCS(n), \]
where $SCS(n)$ gives the successors of $n$. \\ \hline
Monotonicity (of inference) & Monotonicity says that the set of
entailed sentences can only increase as information is added to the
knowledge base. CITATION(aitextbook) (p 213) \\ \hline
Mutation & In genetic algorithms, after a crossover between two
states has been completed we will sometimes alter the resulting
state by randomly changing one element of the state. This is called
a mutation. \\ \hline
Natural Language Processing & The ability of the computer to
process English or some other natural language to communicate
successfully. CITATION(aitextbook) \\ \hline
Node consistency & A term meaning that a CSP is 1-consistent, or
that any node by itself is consistent. CITATION(aitextbook) (p 147) \\
\hline
Node expansion (in search) & After we examine a state (node) $x$
and determine that it is not a goal state, we determine the
successor states (nodes) using the successor function. This set of
states (nodes) is the expansion of $x$ (p 69). \\ \hline
NP-completeness & The theory of NP-completeness provides a method
to recognize an intractable problem. Any problem class to which the
class of NP-complete problems can be reduced is likely to be
intractable. Cook and Karp showed the existence of large classes of
canonical combinatorial search and reasoning problems that are
NP-complete. CITATION(aitextbook) \\ \hline
Objective function & Objective functions are part of the
description of an optimization problem; they measure states to
determine the best state. (from Glossary 6) \\ \hline
Objective function (in optimization problem) & Objective functions
are part of the description of an optimization problem; they measure
states to determine the best state. \\ \hline
Occam's (or Ockham's) razor & is a principle attributed to the
14th-century English logician and Franciscan friar, William of
Ockham. It forms the basis of methodological reductionism, also
called the principle of parsimony or law of economy. In its simplest
form, Occam's Razor states that one should make no more assumptions
than needed. Put into everyday language, it says: Given two equally
predictive theories, choose the simpler.
CITATION(Wikipedia:OccamsRazor) \\ \hline
Omniscience & Literally "all knowing". In the AI sense it means
knowing all the information necessary to make the best choice that
optimizes the performance measure. \\ \hline
Open-loop & When given a {\bf static}, {\bf observable}, {\bf
discrete}, {\bf deterministic} environment, solutions can be
executed without paying attention to the percepts. Thus an agent
carries out its plan with its eyes closed. Control theorists call
this an {\bf open-loop} system because it breaks the loop between
agent and environment. \\ \hline
Operator & A successor function gives a description of the possible
actions available to the agent. An alternate formulation uses a set
of operators that can be applied to a state to generate successors.
(p 62) \\ \hline
Optimal solution & According to AIAMA: The optimal solution has
the lowest path cost among all solutions. \\ \hline
Optimally efficient & If an algorithm is optimally efficient it
means that no other optimal algorithm is guaranteed to expand fewer
nodes except possibly through tie-breaking among nodes with
$f(n)=C^*$. {\em We note that $A^*$ is optimally efficient {\bf and}
that it is also exponential in practice.} \\ \hline
Optimization Problem & An optimization problem has as its goal
finding the best state according to an objective function. \\ \hline
Path & A {\em path} in the state space is a sequence of states
connected by a sequence of actions. \\ \hline
Path consistency & Any pair of adjacent variables can always be
extended to a third neighboring variable - also called
$3-$consistency. CITATION(aitextbook) (p 147) \\ \hline
Path Cost & A {\em path cost} function assigns a numeric cost to
each path from the root node to any node in the tree. \\ \hline
Pathmax Equation CITATION(astar) & The PathMax function is an
optimization routine that ensures monotonicity: \[
\hat{f}(m)=\max\{f(n),f(m)\} \] for all $m \in \{succ(n)\}$. \\
\hline
\end{tabular}
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Payoff function & Also called the {\bf objective function} or {\bf
utility function}, it gives the numeric value for the terminal
states. (Objective and utility functions generally give a value to
each state). CITATION(aitextbook) (p 163) \\ \hline
Percept Sequence & An agent's percept sequence is the complete
history of everything the agent has ever perceived. \\ \hline
Performance Measure & A performance measure embodies the criterion
for success of an agent's behavior. \\ \hline
Planning & Planning is the process of finding a sequence that
achieves a desired effect given a suitable constructive inference
algorithm (p 330). \\ \hline
Plateau & An area in the state space landscape where the
evaluation function is flat. It could be a flat local maximum or a
shoulder. \\ \hline
Ply & This is essentially a turn in a turn-taking game. In a
two-player game, one move is made up of each of the two players
taking a turn; each turn is called a half-move or ply.
CITATION(aitextbook) (p 163) \\ \hline
Predecessors & Predecessors of a state $x$ are all those states
that have $x$ as a successor. \\ \hline
Predicate symbol & Stand for relations in FOL.
CITATION(aitextbook) (p 246) \\ \hline
Premise & The premise or antecedent is the left hand side of an
implication ($\Rightarrow$). CITATION(aitextbook) (p 205) \\ \hline
Problem Formulation & Problem formulation is the process of
deciding what actions and states to consider, given a goal. A
problem can be defined formally by four components:
\begin{enumerate} \setlength{\topsep}{0in} \setlength{\parsep}{0in}
\setlength{\parskip}{0in} \setlength{\itemsep}{0in} \item initial
state \item successor function \item goal test \item path cost
\end{enumerate} \\ \hline
Proof & A sequence of applications of inference rules is called a
proof. CITATION(aitextbook) (p 212) \\ \hline
Property & Unary relations CITATION(aitextbook) (p 242) \\ \hline
Proposition symbol & A symbol that can be assigned true or false
which represents a proposition (or statement). CITATION(aitextbook)
(p 204) \\ \hline
Propositional logic & Propositional logic is a very simple
language consisting of proposition symbols and logical connectives.
It can handle propositions that are known true, known false, or
completely unknown. CITATION(aitextbook) (p 233) Propositional logic
is made up of a set of symbols and Boolean connectives $\wedge,
\vee, \neg, \Rightarrow, \Leftrightarrow$ formed into sentences
using a formal grammar written in Backus-Naur Form. The semantics are
defined by the operation of the connectives as either True or False.
CITATION(ainotes) (11.3, 11.7) \\ \hline
Pruning & When a node $n'$ is explored and placed in the fringe as
part of a node expansion of $n$ and $h(n') > h(n)$, then $A^*$ will
never expand this node, because $f(n') > C^*$. When this happens
we say that $n'$ is {\em pruned}. \\ \hline
Pruning (in general) & Pruning is the process of eliminating
possibilities from consideration without having to examine them. In
a tree structure we often say that we prune a subtree or branch by
eliminating it from consideration. CITATION(aitextbook) (p 100) \\
\hline
Rationality & Rationality is the ideal concept of intelligence
(restricted rationality is listed above as limited rationality). It
simply means to do the right thing. CITATION(aitextbook) \\ \hline
Reductio ad absurdum & Proving $\beta$ from $\alpha$ by checking
the unsatisfiability of $(\alpha \wedge \neg \beta)$ corresponds
exactly to the standard mathematical proof technique of {\em
reductio ad absurdum}. This is also called refutation: $\alpha
\models \beta$ if and only if the sentence $(\alpha \wedge \neg
\beta)$ is unsatisfiable. (A small example follows this table.)
CITATION(aitextbook) (p 211) \\ \hline
Refutation completeness & Any complete search algorithm, applying
only the resolution rule, can derive any conclusion entailed by any
knowledge base in propositional logic. There is a caveat: resolution
is complete in a specialized sense. Given that $A$ is true, we
cannot use resolution to automatically generate the consequence that
$A \vee B$ is true. This is called {\bf refutation completeness},
meaning that resolution can always be used to either confirm or
refute a sentence, but it cannot be used to enumerate true
sentences. CITATION(aitextbook) (p 214) \\ \hline
Relation & A relation between $A$ and $B$ is defined as $R_{A,B}
\subseteq A \times B$ or $R_{A,B} \subseteq \{ (x,y) | x \in A, y
\in B\}$ CITATION(ainotes) \\ \hline
Relation (mathematical definition) & A relation between $A$ and
$B$ is defined as $R_{A,B} \subseteq A \times B$ or $R_{A,B}
\subseteq \{ (x,y) | x \in A, y \in B\}$ CITATION(ainotes) \\ \hline
\end{tabular}
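\medskip \noindent A small refutation example (the sentences are my
own illustration): to show $P \wedge (P \Rightarrow Q) \models Q$,
check that
\[ P \wedge (P \Rightarrow Q) \wedge \neg Q \]
is unsatisfiable: any model would have to make $P$ and $\neg Q$ true,
yet $P \Rightarrow Q$ then forces $Q$, a contradiction, so no such
model exists. \medskip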
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Relaxed problem & A problem with fewer restrictions on the actions
is called a {\em relaxed problem}. The cost of an optimal solution
to a relaxed problem is an admissible heuristic for the original
problem. \\ \hline
Resolution & If $l_i$ and $m_j$ are complementary literals, then
the resolution rule is: \begin{equation} \frac{l_1 \vee ...
\vee l_k, ~~~~ m_1 \vee ... \vee m_n} {l_1 \vee ... \vee l_{i-1}
\vee l_{i+1} \vee ... \vee l_k \vee m_1 \vee ... \vee m_{j-1} \vee
m_{j+1} \vee ... \vee m_n} \end{equation} (A one-step example follows
this table.) CITATION(aitextbook) (p 214)
\\ \hline
Resolution closure & Given a set of clauses $S$, resolution
closure is the set of all derivable clauses. CITATION(aitextbook) (p
217) \\ \hline
Route-finding problem & The Route-finding problem is defined in
terms of locations and transitions along links between them. \\
\hline
Satisfiability & A sentence is satisfiable if it is true in some
model. CITATION(aitextbook) (p 211) \\ \hline
Search & The search process consists of finding a path from the
initial state to a goal state (p 60). The essence of a search is to
check a state (node) to determine if it satisfies the goal state and
expanding it to find other options to examine later if it does not.
(p 69) \\ \hline
Search cost (vs. total cost) & Search cost typically depends on
the time complexity, but can also include a term for memory usage.
Total cost combines the search cost and the path cost of the
solution found (p 72). \\ \hline
Search node & A search node is a node in the search tree; the root
is the search node corresponding to the initial state. (p 69) \\ \hline
Search Strategy & The {\em search strategy} determines which
state to expand next. \\ \hline
Search Tree & A tree structure generated from the initial state
and children found using the successor function defines the search
tree. \\ \hline
Semantics (of a logic) & The semantics of the language defines the
truth of each sentence with respect to each possible world.
CITATION(aitextbook) (p 207) \\ \hline
Sensorless problem & (Also called conformant problems) If an agent
has no sensors at all, then it could be in one of several possible
initial states, and each action might lead to one of several
possible successor states. To know that a goal state has been
reached, all states in the successor set must be goal states. \\
\hline
Sentence & A sentence is a representation of a fact in the world
in a {\em formal language}. CITATION(ainotes) (10.3) \\ \hline
Sideways move & When reaching a shoulder, we cannot find a state
to move to that is better than the state we are in. To fix this we
allow moving to states that are equal in value, in hopes of finding
a way off the shoulder. This is called a sideways move. \\ \hline
Simulated annealing & This combines hill climbing with random walk
in such a way that the amount of randomness slowly decreases (cools)
over time. \\ \hline
Size of a CSP & The size of a CSP is characterized by the number
of constraints. CITATION(ainotes) \\ \hline
Solution quality & Solution quality is measured by the path cost
function. \\ \hline
Soundness (of inference) & An inference algorithm that derives
only entailed sentences is called sound. CITATION(aitextbook) (p
203)
\\ \hline
State space & Together, the initial state and successor function
implicitly define the state space which is the set of all states
reachable from the initial state. \\ \hline
Step Cost & The step cost of taking an action $a$ to go from state
$x$ to state $y$ is denoted by $c(x,a,y)$. \\ \hline
Straight-line distance & The shortest distance between two points.
This is an especially interesting heuristic, because it will always
be optimistic no matter how we define points and what measure we use
for distance in our problem, since the shortest distance between two
points is a straight line. \\ \hline
Strong $k-$consistency & A graph is strongly $k-$consistent if it
is $i-$consistent for $i \in \{1,\ldots,k\}$. CITATION(aitextbook)
(p 147) \\
\hline
Substitution & When posing a query $\exists x~Predicate(x)$,
answering yes is not very helpful. The standard form of an answer to
such a query includes a set of values that satisfy the query. This
set is called a substitution or binding list. CITATION(aitextbook)
(p 254) \\ \hline
Successor Function (in search) & The successor function maps a
state $s_i$ to a set of pairs $\langle A_j, s_{i+1}^{j}\rangle$,
where $A_j$ is an action applicable in $s_i$ and $s_{i+1}^{j}$ is
the state that results from applying $A_j$ in $s_i$. [notes] \\ \hline
Syllogism & A syllogism is an inference in which one proposition
(the conclusion) follows necessarily from two premises. Syllogisms
are irrefutable reasoning processes (or patterns) that consequently
always yield correct conclusions when the premises are correct.
CITATION(aitextbook) \\ \hline
\end{tabular}
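\medskip \noindent A one-step resolution example (the clauses are
invented for illustration): resolving the clauses $P \vee Q$ and
$\neg P \vee R$ on the complementary literals $P$ and $\neg P$ yields
\[ Q \vee R . \]
\medskip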
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Syntax (of a logic) & Defines the sentences in a language
(grammar) CITATION(ainotes) (10.16) \\ \hline
Task Environment$^*$ & PEAS: Performance, Environment, Actuators,
Sensors. \\ \hline
Tautology & A sentence that is valid. CITATION(aitextbook) (p 210) \\
\hline
Term & A term is a logical expression that refers to an object.
CITATION(aitextbook) (p 248) \\ \hline
Terminal states & States where the game has ended are called
terminal states. CITATION(aitextbook) (p 163) \\ \hline
Terminal test & A terminal test determines when the game is over.
CITATION(aitextbook) (p 163) \\ \hline
Ternary constraint & A constraint involving 3 variables.
CITATION(ainotes) \\ \hline
Theorem & Theorems are sentences entailed by axioms.
CITATION(aitextbook) (p 255) \\ \hline
Total Turing test & A human judge engages in a natural language
conversation with two other parties, one a human and the other a
machine; if the judge cannot reliably tell which is which, then the
machine is said to pass the test. To this "Turing test" we add a
video signal and the opportunity to pass physical objects "through
the hatch" to get the total Turing test.
CITATION(Wikipedia:TuringTest,aitextbook) \\ \hline
Touring Problem & Visit every given city (vertex) at least once,
starting and ending at a specific city. Or, more precisely, given a
graph $G=(V,E)$ and an initial vertex $v_0$, give a sequence of edges
such that each vertex is visited at least once (p 67). \\ \hline
Toy Problems & A toy problem is intended to illustrate or exercise
various problem-solving methods. It can be given a concise, exact,
description and is easily used by different researchers to compare
the performance of algorithms. \\ \hline
Transpositions & In games, repeated states occur frequently
because of transpositions, which are different permutations of the
move sequence that end up in the same position (or state).
CITATION(aitextbook) (p 170) \\ \hline
Traveling Salesperson Problem (TSP - give a {\em formal} definition)
CITATION(NPCompleteness) & INSTANCE: A finite set
$C=\{c_1,c_2,...,c_m\}$ of cities, a distance $d(c_i,c_j) \in Z^+$ for
each pair of cities $c_i,c_j \in C$, a bound $B \in Z^+$. QUESTION:
Is there a "tour" of all the cities in $C$ having total length no
more than $B$, that is, an ordering
$\langle c_{\pi(1)},c_{\pi(2)},...,c_{\pi(m)}\rangle$ of $C$ such that
\begin{equation}
\left(\sum_{i=1}^{m-1}d(c_{\pi(i)},c_{\pi(i+1)})\right)+d(c_{\pi(m)},c_{\pi(1)})\leq B?
\end{equation} \\ \hline
Tree decomposition (of a CSP) & The process of decomposing a
constraint graph into a tree of connected components. A tree
decomposition must satisfy the following three requirements: (1)
Every variable in the original problem appears in at least one of
the subproblems. (2) If two variables are connected by a constraint
in the original problem, they must appear together (along with the
constraint) in at least one of the subproblems. (3) If a variable
appears in two subproblems in the tree, it must appear in every
subproblem along the path connecting those subproblems.
CITATION(aitextbook) (p 154) \begin{definition} A
\emph{tree-decomposition} of a graph $G$ is a tree $T$ whose nodes,
called bags, are subsets of $V\left( G\right) $ such that:
\begin{enumerate} \item $\cup _{X\in V\left( T\right) }X=V\left(
G\right) ;$ \item $\forall \left\{ u,v\right\} \in E\left( G\right)
$, $\exists X\in V\left( T\right) $ such that $u,v\in X;$ and
\item $\forall X,Y,Z\in V\left( T\right) $, if $Y$ is on the path
from $X$ to $Z$ in $T$ then $X\cap Z\subseteq Y$. From Dourisboure
and Gavoille CITATION(doorisboure03) \end{enumerate}
\end{definition}
\\ \hline
Tree width of a graph (check a book on graph theory) & Let $T$ be
a tree-decomposition of a graph $G$. The width of $T$ is
\begin{equation} width(T) =\max_{X\in T}\{ \left\vert X\right\vert
-1\} \end{equation} Let $D$ be the set of all tree-decompositions of
$G$. Then the tree width of $G$ is $\min_{T\in D}width\left( T\right)
$. (A small illustration follows this table.) CITATION(doorisboure03)
\\ \hline
Triangle inequality & In AI we say a heuristic $h$ satisfies the
triangle inequality (i.e., is consistent) if \[ h(n) \leq c(n, a, n') +
h(n') \] for every node $n$ and every successor $n'$ of $n$ generated
by an action $a$; this is the triangle inequality for the triangle
formed by $n$, $n'$, and the nearest goal (see the note after this
table). \\ \hline
Truth table & A truth table specifies the truth value of a complex
sentence for each possible assignment of truth values.
CITATION(aitextbook) (p 206) \\ \hline
\end{tabular}
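\bigskip
\noindent A quick check (standard reasoning, not taken from the cited
glossary sources) of why the triangle inequality above matters: if $h$
is consistent, then $f$ never decreases along a path, since
\[
f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') \geq g(n) + h(n) = f(n),
\]
and this is what makes $A^*$ with graph search optimal for consistent
heuristics.

\noindent A small worked TSP instance (the numbers are my own
illustration): with $m=3$ cities and distances $d(c_1,c_2)=2$,
$d(c_2,c_3)=3$, $d(c_3,c_1)=4$, every tour has length $2+3+4=9$, so the
instance is a yes-instance for any bound $B\geq 9$ and a no-instance
otherwise.
\bigskip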
\noindent \begin{tabular}[c]{|p{2in}|p{4.75in}|}\hline
Turing machine & A Turing machine is a 7-tuple, $\left( Q,\Sigma
,\Gamma,\delta,q_{0},q_{accept},q_{reject}\right) $, where $Q$ is
the set of states, $\Sigma$ is the input alphabet not
containing the blank symbol $\_$, $\Gamma$ is the tape alphabet, where
$\_\in\Gamma$ and $\Sigma\subseteq\Gamma $, $\delta :
Q\times\Gamma\rightarrow Q\times\Gamma\times\left\{ L,R\right\} $
is the transition function, $q_{0}\in Q$ is the start state,
$q_{accept}\in Q$ is the accept state, and $q_{reject}\in Q$ is the
reject state, where $q_{accept}\neq q_{reject}$.
CITATION(IntroToTheTheoryOfComputation2) \\ \hline
Turing test & A human judge engages in a natural language
conversation with two other parties, one a human and the other a
machine; if the judge cannot reliably tell which is which, then the
machine is said to pass the test. CITATION(Wikipedia:TuringTest) \\
\hline
Unary constraint & The simplest kind of constraint which restricts
a single variable. CITATION(aitextbook) (p 140) \\ \hline
Uniform-cost search & This strategy expands the node $n$ with the
lowest path cost, so it uses a min-priority queue keyed on $g(n)$ to
determine which node to expand next (a sketch follows this table).
If all step costs are equal it is identical to breadth-first search.
Let $C^*$ be the cost of the optimal solution, and assume that every
action costs at least $\epsilon$. Then the algorithm's worst-case
time and space complexity is $O(b^{1+\lfloor C^*/\epsilon \rfloor})$,
which can be much greater than $b^d$. \\ \hline
Uninformed search & These searches have no information
beyond what is given in the problem definition (this is also called
blind search). All they can do is generate successors and
distinguish a goal state from a non-goal state. Six are
given in Section 3.4: BFS, uniform-cost search, DFS, depth-limited
search, iterative deepening depth-first search, and bidirectional
search. \\ \hline
Unit clause & A unit clause is a clause with just one literal.
CITATION(aitextbook) (p 221) \\ \hline
Universal constraint & A statement that two variables are not
involved in a constraint together. CITATION(ainotes) \\ \hline
Utility Function & A utility function maps a state (or sequence of
states) onto a real number that describes the associated degree of
happiness. This allows rational decisions to be made when goals
alone are inadequate: (1) when goals conflict, and (2) when
uncertainty is attached to several goals, the utility function gives
a way to determine which goal is most likely achievable. \\ \hline
Validity & A sentence is valid if it is true in all models.
CITATION(aitextbook) (p 210) \\ \hline
Validity (of a logical sentence) & A sentence is valid if it is
true in all models. CITATION(ainotes) \\ \hline
Value ordering heuristic & Determines the order in which you
choose values to assign to a variable from the domain.
CITATION(aitextbook) (p 143) \\ \hline
Variable (of a CSP) & Variables are part of a CSP that can take
values defined by their domains. CITATION(aitextbook) (p 137) \\
\hline
Variable ordering heuristic & Determines the order in which you
assign variables. CITATION(aitextbook) (p 143) \\ \hline
Zero-sum games & Environments in which the utility values at
the end of the game are always equal and opposite.
CITATION(aitextbook) (p 161) \\ \hline
\end{tabular}
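\bigskip
\noindent The uniform-cost search entry above mentions expanding the
node with the lowest path cost using a min-priority queue. The sketch
below is only an illustration of that idea (the function names and the
toy graph are my own assumptions, not from the textbook), using
Python's \texttt{heapq}:
\begin{verbatim}
import heapq, itertools

def uniform_cost_search(start, goal_test, successors):
    # Expand the cheapest frontier node first; return (cost, path) or None.
    # successors(state) yields (step_cost, next_state) pairs; step costs
    # are assumed nonnegative, as uniform-cost search requires.
    counter = itertools.count()          # tie-breaker so states need not be comparable
    frontier = [(0, next(counter), start, [start])]
    best = {}                            # cheapest cost at which a state was expanded
    while frontier:
        g, _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return g, path               # first goal popped has optimal cost
        if best.get(state, float("inf")) <= g:
            continue                     # already expanded via a cheaper path
        best[state] = g
        for step_cost, nxt in successors(state):
            heapq.heappush(frontier, (g + step_cost, next(counter), nxt, path + [nxt]))
    return None

# Toy usage: A->B costs 1, B->C costs 2, A->C costs 4; best path is A-B-C.
graph = {"A": [(1, "B"), (4, "C")], "B": [(2, "C")], "C": []}
print(uniform_cost_search("A", lambda s: s == "C", lambda s: graph[s]))  # (3, ['A', 'B', 'C'])
\end{verbatim}
With all step costs equal this behaves like breadth-first search, as
the entry notes.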
\bigskip
All definitions without a citation come from CITATION(aitextbook)