https://plato.stanford.edu/archives/spr2024/entries/logic-ai/
>[!info] Definition
>*Artificial Intelligence* (referred to hereafter by its nickname, “*AI*”) is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent.
- John McCarthy’s plan was to use ideas from philosophical logic to formalize commonsense reasoning.
- new setting: building working, large-scale computational models of rational agency.
## [1\. Logic in Artificial Intelligence](https://plato.stanford.edu/entries/logic-ai//#LogiArtiInte)
### [1.1 The Role of Logic in Artificial Intelligence](https://plato.stanford.edu/entries/logic-ai//#RoleLogiArtiInte)
- Logic can, for instance, provide a specification for a programming language, aiming at soundness and completeness.
- More loosely - logical ideas can inform parts of the software development process.
- Logical theory informs applications, and applications challenge logical theory and can lead to theoretical innovations.
- Core components:
- declarative representations
- their retrieval and maintenance
- the reasoning systems they service
- Knowledge representation
- declarative information/declarative representations
### [1.2 Philosophical Logic](https://plato.stanford.edu/entries/logic-ai//#PhilLogi)
- a desire to apply the methods of mathematical logic to nonmathematical domains
### [1.3 Logic in AI and Philosophical Logic](https://plato.stanford.edu/entries/logic-ai//#LogiAIPhilLogi)
- logical AI is philosophical logic constrained by an interest in large-scale formalization and in feasible, implementable reasoning.
- logic deals with reasoning—and relatively little of the reasoning we do is mathematical, while almost all of the mathematical reasoning done by nonmathematicians is mere calculation.
## [2\. John McCarthy and Commonsense Logicism](https://plato.stanford.edu/entries/logic-ai//#JohnMcCaCommLogi)
### [2.1 Logic and AI](https://plato.stanford.edu/entries/logic-ai//#LogiAI)
- McCarthy felt that even if AI implementations do not straightforwardly use logical reasoning techniques like theorem proving, a logical formalization will help to understand the reasoning problem itself.
- The claim is that without a logical account of the reasoning domain, it will not be possible to implement the reasoning itself.
### [2.2 The Formalization of Common Sense](https://plato.stanford.edu/entries/logic-ai//#FormCommSens)
- McCarthy’s long-term objective was to formalize _commonsense reasoning_, the prescientific reasoning that engages human thinking about everyday problems.
## [3\. Nonmonotonic Reasoning and Nonmonotonic Logics](https://plato.stanford.edu/entries/logic-ai//#NonmReasNonmLogi)
### [3.1 Nonmonotonicity](https://plato.stanford.edu/entries/logic-ai//#Nonm)
>[!info] Definition
>A logic is *monotonic* if its consequence relation has the property that if a set of formulas $T$ implies a consequence $B$ then a larger set $T \cup \{A\}$ will also imply $B$.
>A logic is *nonmonotonic* if its consequence relation lacks this property.
- While a mathematical proof must cover every contingency, practical reasoning routinely closes its eyes to some possibilities.
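A worked instance of the definition above (the bird/penguin default is the stock example from the nonmonotonic-reasoning literature, not a formula quoted from the entry). Writing $\mid\!\approx$ for a defeasible consequence relation:
$$\{\mathrm{Bird}(t)\} \;\mid\!\approx\; \mathrm{Flies}(t), \qquad \{\mathrm{Bird}(t),\ \mathrm{Penguin}(t)\} \;\mid\!\not\approx\; \mathrm{Flies}(t)$$
Against background knowledge that birds normally fly and penguins never do, the first premise set defeasibly yields $\mathrm{Flies}(t)$ while the enlarged set does not; adding information can retract a conclusion, which is exactly the failure of monotonicity.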
### [3.2 Beginnings](https://plato.stanford.edu/entries/logic-ai//#Begi)
Minsky's challenges to the logical approach, which helped drive the development of nonmonotonic logic:
- building large-scale representations
- reasoning efficiently
- representing control knowledge
- providing for the flexible revision of defeasible beliefs.
Applications that drove nonmonotonic logic:
- belief revision
- closed-world reasoning
- planning.
#### 3.2.1 Belief Revision
>[!info] Definition (Doyle)
>*Truth Maintenance System.* An algorithm providing a mechanism for updating the “beliefs” of a knowledge repository.
- The idea is to keep track of the support of beliefs, and to use the record of these support dependencies when it is necessary to revise beliefs.
- In a TMS, part of the support for a belief can consist in the _absence_ of some other belief. This introduces nonmonotonicity. For instance, it provides for defaults: beliefs that are induced by the absence of contrary beliefs.
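The dependency bookkeeping can be made concrete with a small sketch (illustrative Python of my own, not Doyle's algorithm or any API from the entry): each justification carries an in-list of beliefs that must be present and an out-list of beliefs that must be absent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Justification:
    consequent: str
    in_list: tuple = ()   # beliefs that must be PRESENT for the justification to hold
    out_list: tuple = ()  # beliefs that must be ABSENT; this is where nonmonotonicity enters

def labelling(justifications, premises):
    """Naive forward pass: keep adding any consequent whose justification is
    currently satisfied.  Assumes out-list members are only ever premises
    (never derived beliefs), so nothing added here needs retracting later;
    a real TMS also handles retraction when supports change."""
    believed = set(premises)
    changed = True
    while changed:
        changed = False
        for j in justifications:
            if (j.consequent not in believed
                    and all(a in believed for a in j.in_list)
                    and all(b not in believed for b in j.out_list)):
                believed.add(j.consequent)
                changed = True
    return believed

# Default: believe tweety_flies given tweety_is_bird, in the absence of tweety_is_penguin.
js = [Justification("tweety_flies",
                    in_list=("tweety_is_bird",),
                    out_list=("tweety_is_penguin",))]

print(labelling(js, {"tweety_is_bird"}))                       # includes 'tweety_flies'
print(labelling(js, {"tweety_is_bird", "tweety_is_penguin"}))  # does not
```

With only the bird premise the default fires and `tweety_flies` is believed; adding `tweety_is_penguin` blocks the justification, which is the default-by-absence behavior described above.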
#### 3.2.2 Closed-World Reasoning
>[!info] Definition
>The _closed-world assumption_: as far as simple claims (i.e. positive or negative literals) are concerned, the system assumes that it knows all that there is to be known.
- This is another case of inference from the absence of a proof. A negative is proved, in effect, by the failure of a systematic attempt to prove the positive.
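A minimal sketch of the idea (hypothetical Python, assuming a finite base of positive facts; not code from the entry): a negative is established simply by failing to find the positive atom in the base.

```python
# Toy closed-world query answering: the fact base lists every positive atom
# the system knows; a query that cannot be found there is treated as false.

facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}

def holds(atom):
    """Under the closed-world assumption, absence from the fact base is falsity."""
    return atom in facts

print(holds(("parent", "alice", "bob")))    # True: recorded as a fact
print(holds(("parent", "alice", "carol")))  # False: no proof found, so assumed false
```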
#### 3.2.3 Planning
>[!info] Definition
>*Causal inertia* - variables are unchanged by the performance of an action unless there is a special reason to think that they will change.
>*The frame problem:* the challenge of formalizing causal inertia in a temporal framework.
- When making plans, we assume causal inertia.
- This suggests that nonmonotonic temporal formalisms should apply usefully to reasoning about action and change, and in particular might address the frame problem.
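Causal inertia is often sketched with an abnormality predicate in the style of McCarthy's circumscription work (notation assumed here for illustration, not a formula quoted from the entry): a fluent $f$ that holds in situation $s$ still holds after action $a$ unless it is abnormal with respect to that action, and the extension of $\mathrm{Ab}$ is then minimized.
$$\mathrm{Holds}(f, s) \wedge \neg\mathrm{Ab}(f, a, s) \rightarrow \mathrm{Holds}(f, \mathrm{Result}(a, s))$$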
### [3.3 The Earliest Formalisms](https://plato.stanford.edu/entries/logic-ai//#EarlForm)
Three influential approaches to nonmonotonic logic:
- _circumscription_ (McCarthy): restricting attention to certain types of models
- _modal approaches_ (Doyle & McDermott): addition of modal operator $L$, informally interpreted as ‘provable’.
- _default logic_ (Reiter): addition of "default rules" of the form "in the presence of $A_{1}, \dots, A_{n}$ and the absence of $B_{1}, \dots, B_{m}$, conclude $C$".
The modal approach and default logic were proved to be "equivalent".
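For the default-logic item, the flagship normal default from the literature (again the stock birds-fly example, not taken from the entry) is standardly written as an inference-rule fraction:
$$\frac{\mathrm{Bird}(x) : \mathrm{Flies}(x)}{\mathrm{Flies}(x)}$$
Read: if $\mathrm{Bird}(x)$ has been derived and $\mathrm{Flies}(x)$ is consistent with the current beliefs (that is, $\neg\mathrm{Flies}(x)$ is absent), conclude $\mathrm{Flies}(x)$, an instance of the presence/absence schema above.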
### [3.4 Later Work in Nonmonotonic Logic](https://plato.stanford.edu/entries/logic-ai//#LateWorkNonmLogi)
## [4\. Reasoning about Action and Change](https://plato.stanford.edu/entries/logic-ai//#ReasAbouActiChan)
### [4.1 Priorian Tense Logic](https://plato.stanford.edu/entries/logic-ai//#PrioTensLogi)
### [4.2 Planning Problems and the Situation Calculus](https://plato.stanford.edu/entries/logic-ai//#PlanProbSituCalc)
### [4.3 Formalizing Microworlds](https://plato.stanford.edu/entries/logic-ai//#FormMicr)
### [4.4 Prediction and the Frame Problem](https://plato.stanford.edu/entries/logic-ai//#PredFramProb)
### [4.5 Nonmonotonic Treatments of Inertia and a Package of Problems](https://plato.stanford.edu/entries/logic-ai//#NonmTreaInerPackProb)
### [4.6 Some Emergent Frameworks](https://plato.stanford.edu/entries/logic-ai//#SomeEmerFram)
## [5\. Causal Reasoning](https://plato.stanford.edu/entries/logic-ai//#CausReas)
## [6\. Spatial Reasoning](https://plato.stanford.edu/entries/logic-ai//#SpatReas)
## [7\. Reasoning about Knowledge](https://plato.stanford.edu/entries/logic-ai//#ReasAbouKnow)
## [8\. Towards a Formalization of Common Sense](https://plato.stanford.edu/entries/logic-ai//#TowaFormCommSens)
## [9\. Taxonomic Representation and Reasoning](https://plato.stanford.edu/entries/logic-ai//#TaxoReprReas)
### [9.1 Concept-Based Classification](https://plato.stanford.edu/entries/logic-ai//#ConcBaseClas)
### [9.2 Nonmonotonic Inheritance](https://plato.stanford.edu/entries/logic-ai//#NonmInhe)
## [10\. Contextual Reasoning](https://plato.stanford.edu/entries/logic-ai//#ContReas)
## [11\. Prospects for a Logical Theory of Practical Reason](https://plato.stanford.edu/entries/logic-ai//#ProsForLogiTheoPracReas)
## [12\. Readings](https://plato.stanford.edu/entries/logic-ai//#Read)
## [Bibliography](https://plato.stanford.edu/entries/logic-ai//#Bib)
## [Academic Tools](https://plato.stanford.edu/entries/logic-ai//#Aca)
## [Other Internet Resources](https://plato.stanford.edu/entries/logic-ai//#Oth)
## [Related Entries](https://plato.stanford.edu/entries/logic-ai//#Rel)
___