Though most of us never give a second thought to philosophers, we live in cultures whose underlying philosophical assumptions are rarely examined. Just as some things are hard to say in certain languages, some thoughts are hard to think in certain philosophies, including certain kinds of thoughts about ethics and morality.
Logic can be considered a sort of language of philosophical thought. Over the past century or two there has been a revolution in the type of logic taught and accepted by most philosophers, and that revolution has quietly insinuated itself into most Western cultures. A few philosophers think this almost unnoticed shift has had profound effects that almost nobody understands, and I’d like to take the rest of this space to explain why.
You might think that logic is just logic, but if you look a little deeper you will find that philosophers have come up with basically two different kinds of logic. Historically, the first philosopher to take a disciplined academic look at logic was Aristotle, who lived from about 384 BC to 322 BC.
His writings on logic set out various ways to argue from premises (things you believe or know to be true) to conclusions, which are things that must be true if the premises are true. One of these ways is the syllogism: “All men are mortal; Socrates is a man; therefore, Socrates is mortal.” The first two statements are the major and minor premises, respectively, and the third is the conclusion, which follows necessarily from them.
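The force of the syllogism can be sketched in modern set-theoretic terms (an anachronistic rendering, not Aristotle’s own, and the names below are purely illustrative): if the set of men is contained in the set of mortals, and Socrates belongs to the set of men, he cannot fail to belong to the set of mortals.

```python
# A set-theoretic sketch of the classic syllogism. The populations
# here are hypothetical samples, chosen only to illustrate the form.
men = {"Socrates", "Plato", "Aristotle"}
mortals = men | {"Bucephalus"}  # everything in 'men' is also in 'mortals'

# Major premise: all men are mortal (men is a subset of mortals).
assert men <= mortals
# Minor premise: Socrates is a man.
assert "Socrates" in men
# Conclusion, which cannot fail given the premises: Socrates is mortal.
assert "Socrates" in mortals
print("valid")
```

The point of the sketch is that validity is structural: any sets satisfying the two premises will satisfy the conclusion, regardless of what the members actually are.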
Philosopher Henry Veatch calls this classic Aristotelian kind of logic “what-logic,” because it is based on an intuitive, common-sense notion of what things are. Aristotle believed that people could look at things and determine what they were essentially, at least to some degree. Most people can tell apples from oranges, for example, and Aristotle would say that’s because the essential makeup of an apple is different from the essential makeup of an orange.
Aristotelian logic reigned until philosophers of the Enlightenment, such as David Hume (1711-1776) and Immanuel Kant (1724-1804), began to question things that had until then been thought obviously true. The details are complicated, but basically, they began to say things like, “When you see an apple, the only thing you can be sure of isn’t the apple itself — it’s the idea of an apple in your mind.”
Grossly oversimplified, Hume and Kant and their followers began to treat human thought and language as the only things we could be sure of, and tried to work outwards from thoughts to things in the outside world. They believed that we could never really know what a thing is, and there was no point in trying.
In the centuries since then, among most philosophers Aristotelian what-logic has been replaced by what Veatch calls “relating-logic.” Another name for it is symbolic logic. Electrical engineers have encountered it in the form of Boolean algebra, and it is embodied in all digital computers in the form of logic gates that perform logical functions like AND and OR.
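As a minimal illustration of the Boolean-algebra form that engineers encounter, here is a sketch of the AND and OR truth functions. (Hardware logic gates compute these same functions on voltage levels; the algebra is identical.)

```python
# Truth functions of Boolean algebra, the same functions that
# AND and OR logic gates compute inside a digital computer.
def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

# Print the full truth table for both functions.
for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), OR(a, b))
```

Note what the example shows: the symbols a and b can stand for anything at all. The logic manipulates their relations perfectly while remaining silent about what they are.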
While symbolic logic has proved to be extraordinarily useful — computers excel at it, naturally, and philosopher Peter Kreeft compares the change from what-logic to relating-logic to the shift from Roman to Arabic numerals — it has some basic shortcomings. The most serious is this: it can never tell you what a thing is; it can only say how things relate to each other. Here is Veatch on this defect:
To take the case merely of human actions … since there is no such thing as a human nature that can be appealed to in a relating-logic, there is no way in which one can determine what man’s function is or what sort of activity a characteristically human life must consist in.
For the purposes of ethical reasoning, these are major problems. If you can’t answer the question, “What is human nature?” you can’t very well argue that one aspect of human nature, or one action, is better than another.
Kreeft, who has taught philosophy at Boston College for four decades, says that symbolic or relating-logic has taken over so thoroughly that it is leaving his students unable to understand aspects of reasoning that depend on what-logic: analogies, for example.
After Kreeft had published a criticism of certain aspects of computers, a well-known expert phoned him and predicted that as society began to think more exclusively in the relating-logic computer mode, ordinary intuitive understanding would atrophy, and the SAT (originally the Scholastic Aptitude Test, a standardised exam used for US college admissions) would drop its section on analogies as fewer people could figure them out. This actually happened a few years later.
Kreeft dusted off some old logic exams from 1962 and gave them to his current 21st-century students, and they failed spectacularly, especially when it came to analogies. For example, in the sentence, “He pointed with his right hand to the hands of a clock,” the word “hands” is used analogically. But only three out of 75 students understood that.
Kreeft claims that losing the ability to do what-logic is tragic, and may be either a result or a cause (or both) of everything from the rise of utilitarian ethics (“the greatest good for the greatest number”) which is a favourite of engineers, to the Sexual Revolution, which does not recognise anything like the natural form or “nature” of human sexuality.
While relating-logic is great for making things work, it fails to tell us what anything really is. And to the extent that modern thought and discourse increasingly exclude types of reasoning based on what-logic, we seem to be dumbing ourselves down to act more like computers and less like human beings.
In The Magician’s Nephew, C. S. Lewis said, “Now the trouble about trying to make yourself stupider than you really are is that you very often succeed.” In leaving Aristotelian logic behind, the Western world may be doing exactly that.
This article has been republished from the Engineering Ethics blog.