Review - God, Human, Animal, Machine (O'Gieblyn)

Rating: 5/5

I wanted to read a book that would delve into the philosophical underpinnings of the ongoing AI hype: what is the root of the idea that humans, with all their confusions and complexities, could ever be replaced by computers? O’Gieblyn’s Wired column, Dear Cloud Support, is my favorite part of the magazine. Her writing is lucid and her references come from far and wide. Even so, I was not prepared for the philosophical depth on display in this book: O’Gieblyn goes back to the earliest philosophers (Plato, Aristotle) and ushers the reader through a series of “frames of mind.” She is averse to neither religion nor science, nor is she partial to any particular philosopher or school; she quotes from a huge variety of sources throughout the book to show the ways in which thinking has evolved. Her religious upbringing and her current vocation as a technology writer feature heavily throughout. After reading this book, I have a clear idea of where the foundation of the hype lies and how it has grown this extreme.

History of Ideas

Ideas do not just come out of nowhere; they are genetic, geographical.

– p.133

The concept that acts as the overture for this book is the Cartesian partition: Descartes’ claim that the body and the soul were separate, that the body was machine-like, whereas the soul was responsible for the “human characteristics” of a person, exemplified by internal experience. Descartes did not say much about the soul, or why it was supposed to be responsible for this internal experience. We can interpret this as his desire to analyze what he could and set aside what was intractable. Descartes’ aim with the partition between mind and body was to refute the leading theory of the time, Aristotle’s teleology, a doctrine which claims that ends are immanent in nature (a view akin to vitalism).

Aristotle’s teleology enchants the world with vitality and “ends” inherent to objects themselves. (A stone falls to the ground because it is the nature of a stone to seek the center of the Earth, and so on.) Descartes’ cold separation of ends from the interior position they had occupied disenchanted the world and made it susceptible to analysis. While the “soul” side of Descartes’ partition remained intractable, the “body” side could still be analyzed. This gives rise to modern science: Newtonian physics, computers, and information theory, which treats information as capable of existing on its own, without any particular physical form at all.

From here, the threads of thought get quite entangled. First, on the “soul” side of the partition, there were attempts to resolve the hard problem of consciousness. There was Anne Conway’s panpsychism: the idea that consciousness is everywhere and not unique to humans at all. Through Leibniz, this philosophy undergoes a transformation and eventually takes mathematical form in the Integrated Information Theory (IIT), proposed by the neuroscientist Giulio Tononi and championed by philosophers like Philip Goff. IIT proposes that consciousness is the result of a physical system having a high degree of integration. The similarities between IIT and panpsychism have not gone unnoticed: by IIT’s own math, even a small system of inactive logic gates comes out as conscious, a conclusion that makes no intuitive sense. Here we sense a vulnerability which is now being exploited by the peddlers of hype: no one is quite able to define what consciousness is, what it enables, or how to even identify it. IIT remains widely contested, and there is no agreement about whether it is the “theory of everything” of consciousness.

On the “body” side of the partition, things seem to be going swimmingly. From Newtonian physics we get many useful tools, the steam engine and computers being major among them. Newtonian physics’ focus on things (unsurprisingly) gives rise to materialism, the idea that matter is the fundamental substance in nature and that everything springs from it. Hegel’s idealist system was a reaction against this current; Marx and Engels, in turn, inverted Hegel and grounded their theory of human history in material conditions, ending up with a powerful critique of capitalism itself. Though that is a topic for another time. Modern materialism is also the root of other things, such as secularism and the alienation induced by modern economic systems.

The rise of materialism gives birth to the (frankly absurd) concept of emergence, the idea that consciousness will emerge out of a system that is sufficiently similar to the human brain. O’Gieblyn recounts experiments which test this: scientists who build robots and wait for them to become conscious. This was the most interesting part of the book for me. From the point of view of working with computers, it seems like an unhinged effort. Imagine a motherboard with a basic CPU, main memory, persistent storage, and input devices, but without any software at all, not even firmware. While that is an electrical circuit through which current will flow in some predictable manner, there is absolutely no way that the motherboard would behave anything like even the most rudimentary computer. O’Gieblyn sharply points out the contradiction: emergence operates on the “body” side of the Cartesian partition, so claiming that it in turn gives rise to the “soul” side, which was intentionally set aside as intractable, is incoherent.

Materialism spawns other ideas too. One is dualism, succinctly summarized in the book as the idea that “mind is software that is running on the brain’s hardware.” It assumes that whatever the mind (or consciousness) is, it must be part of the brain’s physical makeup, not something outside of it at all. Another stray idea is the mechanistic brain: machines can be conscious because consciousness is a mechanistic process, and we will eventually figure out how to replicate it in machines.

However, this apparent utopia with matter on top did not last long: challenges appeared from everywhere. Kierkegaard’s objection was that subjective experience is the only thing that matters; his philosophy, lightly touched on in the book, was told almost completely in the first person, without any affectation of a third-person point of view. Kierkegaard’s intellectual descendant is the famous physicist Bohr, who compared physics to poetry, saying that poetry was the task of creating images through the severely limited medium of language, just as physics used language and systems of reason to describe the world only approximately, never able to grasp everything at once. (Experts say that it is hard to classify Kierkegaard’s philosophy, so I am not sure on which side of the partition he put more emphasis. A rough guess would be that he favored the soul far more than the body.)

Returning to the “soul” side of the partition, panpsychism gives way to idealism: the concept of a single unified mind which is the universal substrate of every existing mind. There is no Archimedean point from which everything can be observed and commented upon. This concept also gives absolute primacy to subjective experience, because that is the only way of describing or learning more about the world.

Kierkegaard insisted on the value of subjective truth over objective systems of thought. Fear and Trembling was in fact written to combat the Hegelian philosophy that was popular at the time, which attempted to be a kind of theory of everything—a purely objective view of history that was rational and impersonal.

– p.133

Metaphors

A large part of this book is about metaphors and how they are abused. We used to say that computers process information like a brain does; now the metaphor is reversed, and we hear sentences like “I am still processing that information” or “What they said did not compute for me.” This de-anthropomorphization of human beings is insidious because it brings people and machines closer together without clarifying why.

Repeatedly, we see that what was a metaphor suddenly becomes the way things are talked about. All of science is a metaphorical way of explaining how things work to others and to ourselves. However, if we were to believe that science itself was not a metaphor describing how things are but the absolute system which makes things the way they are, then we go down the slippery slope of granting a system of symbols primacy over reality itself: making claims that information and ideas can survive on their own, forever, without any subjective experience.

Hype

Remember how the words consciousness and intelligence seemed to lack a clear definition? Everyone is sure about what consciousness looks like (after all, we all have it), but no one is quite sure how to recognize it in something that is not human, or perhaps not even organic at all. This is the vulnerability in the language which con artists use to get into the news:

Such aspirations necessarily require expanding the definitions of terms that are usually understood more narrowly. If “intelligence” means abstract thought, then it would be foolish to think that plants are engaging in it. But if it means merely the ability to solve problems or adapt to a particular environment, then it’s difficult to say that plants are not capable of intelligence. If “consciousness” denotes self-awareness in the strongest sense of the word, then nobody would claim that machines have this capacity. But if consciousness is simply awareness of one’s environment, or–as has long been the case in artificial intelligence–the ability to behave in ways that appear deliberate and intentional, then it becomes more difficult to insist on it as a phenomenon that is unique to humans and other animals.

– p.109

It is perhaps appropriate to mention the strange case of an engineer who thought that an AI system was sentient. This engineer explicitly said that they were talking about sentience as a “priest” and not as a “scientist.” That sentence is hard to interpret: What is the difference between a priest and a scientist? Is consciousness for one different from consciousness for the other? The latter question we can answer with certainty: yes, consciousness is different for every single person and profession, because there is no clear definition of it anywhere. But what about the former question, about their perceptions of consciousness? If a priest can claim that something is sentient, is that a higher or lower bar of evidence than a scientist’s?

This is the rub: traditional modernity has lionized Science to the point where the S is capitalized; it would be foolish for a modern mind to claim that the priest’s point of view was more valid than the Scientist’s. But sentience itself is a concept so far removed from science that it hardly matters what the profession of the person making the claim is. This is the point which is always missed: everyone focused on the credentials of the person making the claim, while that person was busy saying that they were not using their credentials to make the claim in the first place. It is clear how such a situation can be quickly monetized for a modern audience that is not interested in the claim or its philosophical backing. They see only the hype itself and share the story for its own sake, for the absurdity it represents, rather than for its truth content.

Arendt expressed this concern quite concisely:

For Arendt, the problem was not that we kept creating things in our image; it was that we imbued these artifacts with a kind of transcendent power. Rather than focusing on how to use science and technology to improve the human condition, we had come to believe that our instruments could connect us to higher truths.

– p.239

Empiricism and Science

Empiricism is the bedrock of modern science; it is the claim that “more data will eventually lead to more knowledge.” Yet more data did not lead to more knowledge; if anything, more data has led to more buying and selling of goods and services. Merely visualizing the data that almost all technology platforms collect requires a bevy of software. Organizations that collect user data are chaotic enough that companies claiming to help them “get their data under control” have multiplied rapidly. We frequently talk about data warehouses now. The problem is no longer the lack of data; it is the abundance of it. (A truism everyone is tired of hearing.)

So, a lot of data did not necessarily lead to more knowledge. How about the scientific process? Was that at least improved? No. Here, O’Gieblyn points out the most basic of insights: the whole point of Science was to understand how things worked, how mechanisms could be explained; in a sentence, it was to understand how an input (coal) could lead to an output (power). The more data we had about the input, the output, and the process in between, the more we could improve the machine. Steam engines are far more efficient now than when they were first invented. A lot of that came down to collecting more data about the factors which affect efficiency, mathematical formulations of what leads to more efficiency, and experiments that confirmed those formulations. This cycle is no longer in place.
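To make the input-output framing concrete, here is a minimal sketch (a standard thermodynamics example of my own, not one from the book) of the kind of formulation that closed the loop for steam engines: Carnot’s bound, which says the maximum efficiency of a heat engine depends only on the temperatures of its hot and cold reservoirs.

```python
# Carnot's bound: the maximum fraction of heat convertible to work between
# a hot reservoir (the boiler) and a cold one (the condenser), in kelvins.
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    return 1.0 - t_cold / t_hot

# Illustrative numbers only: a low-pressure engine vs. a high-pressure one.
print(carnot_efficiency(400.0, 300.0))  # 0.25 -> at most 25% efficient
print(carnot_efficiency(600.0, 300.0))  # 0.50 -> a hotter boiler raises the ceiling
```

A formulation like this tells the engineer exactly which measurement (boiler temperature) to push on next, which is the data-formula-experiment cycle the book describes.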

Every company that dabbles in the “product” business has a “data science” team. Using the most basic of statistical tools (namely, the test for statistical significance), these teams run A/B tests: assigning users randomly to two different experiences and measuring which one makes more money. The “sober” attitude towards testing is that user research tells the company what users want, and the A/B test merely confirms that what users want is also what makes the company the most money. The real attitude is more succinct: “Who knows why people do what they do. They do it, and we can measure that with very high fidelity!”
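For concreteness, here is a minimal sketch of the statistical test such teams lean on, a two-proportion z-test comparing conversion rates between the two experiences; the numbers are made up for illustration.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """z statistic and two-sided p-value for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# Variant A: 1000 conversions out of 20000 users; variant B: 1150 out of 20000.
z, p = two_proportion_z_test(1000, 20000, 1150, 20000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ~ 3.33, p ~ 0.0009 -> "ship variant B"
```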

While normal A/B tests are still run by actual engineers and analysts who dig insights out of the data, machine learning systems automate the process: a mathematical goal is set at the beginning, and the algorithm chooses among several variants, converging on the best one in a self-feeding feedback loop. This is the nature of most recommendation algorithms, used to decide which TV show should sit at the top of an on-demand streaming service, and which thumbnail should be used for the show, with the goal of making the user click on it and watch a few minutes. The problem is exacerbated in opaque industries like programmatic advertising, where decisions about which ad to show must be made within a fraction of a second.
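A minimal sketch of such a loop, using an epsilon-greedy bandit over hypothetical thumbnail click rates (an illustrative stand-in, not any particular service's algorithm):

```python
import random

def run_bandit(click_rates, rounds=10000, epsilon=0.1):
    """click_rates: the true click probability per thumbnail (unknown to the system)."""
    n_arms = len(click_rates)
    shows = [0] * n_arms   # times each thumbnail was shown
    clicks = [0] * n_arms  # clicks each thumbnail received
    for _ in range(rounds):
        if random.random() < epsilon:  # occasionally explore a random thumbnail
            arm = random.randrange(n_arms)
        else:                          # otherwise exploit the best-looking one so far
            arm = max(range(n_arms),
                      key=lambda a: clicks[a] / shows[a] if shows[a] else 0.0)
        shows[arm] += 1
        clicks[arm] += random.random() < click_rates[arm]
    return shows

# Three candidate thumbnails; the loop converges on showing the third almost always.
print(run_bandit([0.02, 0.03, 0.05]))
```

No one ever asks why the third thumbnail wins; the loop only measures that it does, which is exactly the attitude quoted above.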

Book of Job

The conclusions of this recommendation “algorithm” are to be accepted without questioning or attempting to understand how they were reached. This approach is commonly called science but, as O’Gieblyn neatly points out, is indistinguishable from the concept of an omniscient God. At the start of the Protestant Reformation, Luther and Calvin preached that God was radically other than human and that it was futile to try to understand why God did what He did. It was man’s duty simply to accept what was written in scripture and receive all knowledge as revelation.

The rise of ML explainability is itself a sign of the apparent inscrutability of the systems being built. However, the system used to explain the original system is independent of it, a completely different algorithm which only builds a “narrative” about the decision rather than actually “looking inside” (a sketch of this follows the quotation below). Being faceless algorithms gives these systems a veneer of neutrality and strong protection against claims that they are unjust in some humanly understandable way. The problem starts even before the system begins making decisions (“inferences”):

Moreover, because the training data sets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, […] undocumented training data perpetuates harm without recourse.”

We read the paper that forced Timnit Gebru out of Google. Here’s what it says. | MIT Technology Review (Retrieved: [2023-10-09 Mon 16:09])
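To make the “narrative” point concrete, here is a minimal, hypothetical sketch of a local surrogate explanation in the style of tools like LIME. The explainer below never inspects the black box's internals; it only fits a simple story to its outputs near one input.

```python
import numpy as np

def black_box(x: np.ndarray) -> np.ndarray:
    """Stand-in for an opaque trained model: a nonlinear score over two features."""
    return np.tanh(3 * x[:, 0]) + 0.5 * np.sin(4 * x[:, 1])

def local_surrogate(point: np.ndarray, n_samples=500, scale=0.1) -> np.ndarray:
    """Explain black_box near `point` with a linear fit over perturbed samples."""
    rng = np.random.default_rng(0)
    samples = point + rng.normal(0.0, scale, size=(n_samples, point.size))
    targets = black_box(samples)
    design = np.hstack([samples, np.ones((n_samples, 1))])  # features plus intercept
    coefs, *_ = np.linalg.lstsq(design, targets, rcond=None)
    return coefs[:-1]  # per-feature "importance" near this point

# A linear narrative about a nonlinear model, valid (at best) only locally.
print(local_surrogate(np.array([0.2, -0.1])))
```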

When the system finally starts making predictions, those predictions can change behavior. Systems such as PredPol (predictive policing) were designed to anticipate which neighborhoods would see the most crime; instead, they ended up reinforcing their sample data and producing the very outcome they predicted, because when the police went to a locality hoping to find crime, they often found it.
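A toy simulation of that feedback loop (my own illustration, assuming two districts with identical true crime rates):

```python
import random

true_rate = 0.3      # the same underlying crime rate in both districts
recorded = [2, 1]    # a small initial imbalance in the crime records

for _ in range(1000):
    # patrol the district with more recorded crime (the "prediction")
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    # crime is only *recorded* where the police are looking
    if random.random() < true_rate:
        recorded[patrolled] += 1

print(recorded)  # e.g. [302, 1]: the initial imbalance has hardened into "data"
```

The records diverge even though the districts are identical; the prediction manufactures its own confirmation.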

Before the Cartesian Partition

At long last, one is convinced that Descartes’ thought experiment, which led to the mind-body partition, gave rise to both modern science and modern hype. What came before it, though? Here again O’Gieblyn does not disappoint: she traces the history of ideas before the partition through the lens of earlier philosophers.

Plato’s philosophical system gave primacy to ideas and forms. The World of Forms was ever present and never changing; only in that world did the perfect form of an object exist. There was a separate World of Matter where actual objects existed; they shared some characteristics of the perfect form but could never hope to be identical to it, for the perfect form was unattainable. So, in Plato’s thinking, these “Universals” were characteristics which were always present, always certain, and most importantly, “common” or “generic.” These universals did not need to be proved.

The loss of universals came about in the Late Middle Ages (the late 14th century), when thinkers began to suggest that there was nothing universal or common about ideas at all: universals were just the names we give to commonalities we perceive in objects. This was the Nominalist style of thinking. Nominalism eventually found its way into the mainstream through Protestantism and subsequently spawned disenchantment and the scientific revolution.

Blumenberg’s thesis, which has since been reiterated by a number of philosophers and historians, is that nominalism, as it became widespread in Protestant theology, led to the Enlightenment, disenchantment, and the scientific revolution. The trauma of lost universals created an intolerable situation, one that reached the point of crisis in the thought experiments of Descartes, who so mistrusted his own powers of reason that it was not inconceivable, he imagined, that God somehow deceived him into thinking that a square had four sides or that two plus three equaled five.

– p.215

This is another path that leads to our work-obsessed culture. Protestant theology encouraged a work ethic powered by the anxiety of not knowing one’s predestination. (The concept of predestination decreed that one’s destination after death, hell or heaven, had already been decided and would remain unaffected by one’s conduct during life.) Predestination, coupled with the Protestant conviction that work benefits the individual and society as a whole, transformed the nature of jobs themselves, making constant, diligent work an obligation.1 These conditions were perfect for the rise of industrial capitalism, in which the harder people work for the same wage, the cheaper the resulting products become, the lower wages soon fall, and, counter-intuitively, the harder people must then work to earn what they earned before.