
Logic

Graham Priest
Universities of Melbourne (Australia) and St Andrews (UK)

One of the most important cognitive abilities of people (and perhaps some other species) is the ability to reason. For example:

  • from the axioms of Euclidean Geometry, mathematicians deduce theorems about geometric figures that follow from these.
  • given some new theory in sub-atomic physics, physicists infer that certain observable consequences follow. These can be tested to assess the theory.
  • appealing to the facts of the matter, and various legal principles, statutes, and precedents, a barrister may argue that their client is innocent.
  • after we have read a book or seen a film, we may argue that a fictional character had certain motives (never stated explicitly) for acting in the way that they did.

Logic is the study of reasoning. It is not the study of how people actually reason. All too evidently, people often make mistakes when they reason. More importantly, the literature in cognitive psychology shows that people’s reasoning appears to be subject to systematic and predictable mistakes. Rather, logic is the theorisation of the norms of correct reasoning.

A central object of theorisation is the notion of validity. In a piece of reasoning, or inference, there is a conclusion (that which is meant to be established) and premises (things which are meant to ground the conclusion). The inference is valid if the premises, assuming them to be true, really do support the conclusion. Investigations of the notion of validity often require the theorisation of the behaviour of words and phrases that seem to be integral to much reasoning, such as (in English) ‘if … then …’ and ‘it is not the case that …’. Moreover, it seems impossible to form an adequate understanding of validity without understanding at least certain things about notions such as truth, probability, necessity, and meaning. Issues in logic therefore find themselves at the core of numerous central philosophical debates.

In Western logic there have been three great periods, in between which much of the sophisticated theorisation has, for various reasons, been forgotten. (There is a more continuous tradition in the East, especially India. But this never developed into anything like the sophisticated theorisation of modern Western logic.) The first was in Ancient Greece, where Aristotle and the Stoic logicians provided distinctive accounts of various forms of inference that they took to be valid, and why. The Aristotelian forms are called syllogisms, and are things like:

  All humans are animals.
  All animals are mortal.
  Therefore, all humans are mortal.

The second period was in the Middle Ages, and particularly in the great Medieval universities of Oxford and Paris. Logicians such as Ockham, Scotus, and Buridan took up their Greek heritage and developed sophisticated new theories on the basis of it, concerning, for example, suppositio, obligationes, and insolubilia (loosely: reference, the rules of debate, and paradoxes, respectively).

The third great period started towards the end of the 19th Century and continues today. Following the general 19th Century drive for rigour in mathematics, thinkers such as George Boole, Charles Peirce, Ernst Schröder, Gottlob Frege, and Bertrand Russell applied mathematical techniques, such as those of abstract algebra, to logic with a degree of sophistication never before seen in the area. The result was a system (or family of systems) of logic (so-called classical logic) much more powerful than anything achieved before, which could accommodate all mathematical reasoning of the time. Central to this was an analysis of the logic of quantifier phrases, such as ‘all numbers’ and ‘some sets’.

The structures pioneered by Frege and Russell provided fertile ground for investigation, elaboration, and variation throughout the 20th Century, by logicians of the stature of David Hilbert, Kurt Gödel, and many others. The foundations for the theory of machine computation were laid down by Alonzo Church, Alan Turing, and others in the 1930s. And when practical computing machines became available 30 years later, this, in turn, had a major impact on logic.

Central to modern logical investigations is the notion of a formal language. A formal language is an artificial language with a precisely defined vocabulary and syntax (like a computer programming language). Such languages behave in many ways like natural languages, but abstract away from the many vicissitudes and idiosyncrasies of the latter. Studying the validity of inferences couched in formal languages therefore permits a clarity and precision that would not otherwise be available.
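
For illustration, here is a minimal sketch, not part of the article, of how the syntax of a tiny propositional formal language might be pinned down precisely; the class names and the choice of connectives are just one possible design. Exactly the expressions built by these clauses, and nothing else, count as sentences of the language.

    # A sketch of a formal language: a precisely defined syntax for a tiny
    # propositional language with atomic sentences, negation
    # ('it is not the case that ...') and the conditional ('if ... then ...').
    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Atom:
        name: str                    # e.g. 'p', 'q'

    @dataclass(frozen=True)
    class Not:
        body: "Formula"              # negation of a formula

    @dataclass(frozen=True)
    class If:
        antecedent: "Formula"        # the 'if ...' part
        consequent: "Formula"        # the '... then ...' part

    # Every well-formed formula is built by finitely many applications
    # of the three clauses above.
    Formula = Union[Atom, Not, If]

    # 'if p then it is not the case that q', as an unambiguous syntax tree:
    example = If(Atom("p"), Not(Atom("q")))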

Generally speaking, there are two different approaches to the study of validity in formal languages. The first is purely combinatorial (proof-theoretic). Thus, a valid inference is defined as one that can be obtained by performing certain formal operations on strings of symbols in the language, with no reference to what those strings may mean. For example, whenever we have a string of the form A → B, and a string of the form A, we may obtain a string of the form B. (The symbol ‘→’ is often used as the formal counterpart of the English construction ‘if … then …’.) The combinatorial systems are themselves of different possible kinds. Historically, the oldest are axiom systems. But nowadays logicians tend to prefer systems called natural deduction systems or sequent calculi, which mirror more closely ordinary reasoning in natural language. Tableau systems are yet a different kind of system, which are particularly efficient when reasoning is mechanised.
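
As a toy illustration of this purely combinatorial conception, the rule just described can be applied to formulas treated as bare strings. This sketch is not part of the article; the ASCII arrow '->' stands in for ‘→’, and only simple, unparenthesised conditionals such as 'p -> q' are handled.

    # Close a set of strings under one application of the rule:
    # from 'A -> B' and 'A', obtain 'B', with no appeal to meaning.
    def apply_modus_ponens(derived: set[str]) -> set[str]:
        new = set(derived)
        for s in derived:
            if " -> " in s:
                antecedent, consequent = s.split(" -> ", 1)
                if antecedent in derived:
                    new.add(consequent)
        return new

    print(apply_modus_ponens({"p -> q", "p"}))   # {'p -> q', 'p', 'q'}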

The second general approach to validity is set-theoretic (semantic). In this, symbols of the language are assigned meanings in certain ways. This allows one to define what it is for a sentence of the language to be true of a certain situation—in the same sort of way that the English sentence ‘The head of state is elected’ is true of the contemporary political situation in the USA, but is not true of the contemporary political situation in the UK. The situations themselves (which are often called models or interpretations) may be actual, merely possible, or maybe even impossible, and are defined in precise mathematical (set theoretic) terms. Valid inferences are then characterised as ones that preserve truth in an appropriate sense: namely, whenever the premises are true in any situation of a certain kind, so is the conclusion.
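
Here is a small sketch of this semantic characterisation for a propositional language; it is not part of the article, and the tuple encoding of formulas and the function names are merely illustrative. An interpretation assigns a truth value to each atom, and an inference counts as valid when no interpretation makes all the premises true and the conclusion false.

    from itertools import product

    def evaluate(formula, interpretation):
        """Formulas are atoms ('p'), ('not', f), or ('if', f, g)."""
        if isinstance(formula, str):
            return interpretation[formula]
        if formula[0] == "not":
            return not evaluate(formula[1], interpretation)
        if formula[0] == "if":
            return (not evaluate(formula[1], interpretation)) or evaluate(formula[2], interpretation)
        raise ValueError(f"unknown connective: {formula[0]}")

    def atoms(formula):
        if isinstance(formula, str):
            return {formula}
        return set().union(*(atoms(part) for part in formula[1:]))

    def valid(premises, conclusion):
        names = sorted(set().union(*(atoms(f) for f in premises + [conclusion])))
        for values in product([True, False], repeat=len(names)):
            interpretation = dict(zip(names, values))
            if all(evaluate(p, interpretation) for p in premises) and not evaluate(conclusion, interpretation):
                return False      # a counter-model: premises true, conclusion false
        return True

    # Modus ponens is valid; affirming the consequent is not.
    print(valid([("if", "p", "q"), "p"], "q"))   # True
    print(valid([("if", "p", "q"), "q"], "p"))   # False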

For many languages, it is possible to give both a proof-theoretic characterisation and a semantic characterisation of validity which are equivalent (that is, such that the inferences that are valid according to the one characterisation are exactly the same as those that are valid according to the other). But there are cases (e.g., so-called second-order logic) in which the semantic characterisation is such that it is demonstrably impossible for there to be any corresponding proof-theoretic characterisation. Conversely, there are cases where there is a proof-theoretic characterisation, but adequate semantic characterisations are still highly contentious (e.g., for so-called sub-structural logics).
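
For classical first-order logic, for instance, this equivalence is the content of the soundness and completeness theorems. Schematically (in LaTeX notation, where \Sigma is a set of premises, A a conclusion, \vdash proof-theoretic derivability, and \vDash semantic consequence):

    \Sigma \vdash A \quad\Longleftrightarrow\quad \Sigma \vDash A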

Another important distinction between two notions of validity, which is much more traditional, and which cuts across the one we have just been looking at, is that between deductively valid inferences and non-deductively (sometimes, inductively) valid inferences. Loosely, deductively valid inferences are those such that the premises could not be true without the conclusion also being true, such as:

  If it is raining, then the streets are wet.
  It is raining.
  Therefore, the streets are wet.

In a non-deductively valid inference, the premises provide some ground, possibly excellent ground, for the truth of the conclusion. Yet it is nonetheless possible for the premises to be true whilst the conclusion is not. E.g.:

  Tweety is a bird.
  Therefore, Tweety can fly.

A feature of the latter kind of inference, but not the former, is that the inference can be made invalid by the addition of further premises. Witness:

  Tweety is a bird.
  Tweety is a penguin.
  Therefore, Tweety can fly.

This feature is called non-monotonicity.
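
To see the contrast in a computational setting, here is a toy sketch of a default-style rule, 'birds normally fly'; it is not from the article, and the vocabulary of facts is invented for illustration. Unlike deductive consequence, the conclusions it licenses can be withdrawn when further premises are added.

    # Draw default conclusions of the form 'X can fly' from premises of the
    # form 'X is a bird', unless a defeating premise 'X is a penguin' is present.
    def default_conclusions(premises: set[str]) -> set[str]:
        conclusions = set()
        for fact in premises:
            if fact.endswith(" is a bird"):
                name = fact[: -len(" is a bird")]
                if f"{name} is a penguin" not in premises:
                    conclusions.add(f"{name} can fly")
        return conclusions

    print(default_conclusions({"Tweety is a bird"}))
    # {'Tweety can fly'}
    print(default_conclusions({"Tweety is a bird", "Tweety is a penguin"}))
    # set() -- the extra premise defeats the earlier conclusion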

Historically, deductive validity has been studied much more intensively than non-deductive validity. It is natural to suppose that non-deductive validity has something to do with the notion of probability, though that notion was itself not available until the 18th Century. But contemporary investigations of non-monotonicity deploy the same sort of proof-theoretic and semantic techniques that are deployed in the study of deductive validity. Where probability does play a role in contemporary logic is in that branch of logic termed decision theory. This is the study of reasoning where the conclusions are of the form that one ought to act in such and such a way, and the premises contain information about the probable outcomes of one’s various possible actions, together with the values of those outcomes.
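
As a minimal illustration of such reasoning, not part of the article and with made-up actions, probabilities, and values, a decision-theoretic recommendation can be computed by weighting the value of each possible outcome by its probability and choosing the action with the greatest expected value.

    # Each action is paired with its possible outcomes as (probability, value).
    actions = {
        "take the umbrella":  [(0.3, 5), (0.7, 4)],    # 0.3 = assumed chance of rain
        "leave the umbrella": [(0.3, -10), (0.7, 6)],
    }

    def expected_value(outcomes):
        return sum(probability * value for probability, value in outcomes)

    for action, outcomes in actions.items():
        print(f"{action}: expected value {expected_value(outcomes):.1f}")

    # The recommended action is the one that maximises expected value.
    print("One ought to:", max(actions, key=lambda a: expected_value(actions[a])))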

The contemporary study of logic is carried out in university departments of philosophy, mathematics, and computer science (and, occasionally, linguistics and economics). The development of novel ideas, techniques, and results triggered by the revolution in logic just over 100 years ago shows no sign of slowing down.

Originally from Giandomenico Sica, Ed., The Language of Science. Monza: Polimetrica.


This article is licensed under the Creative Commons Attribution 2.5 licence (CC BY 2.5).