Will AI take over the world? How likely is it that computers will surpass human intelligence in the next few years?
I’ve been reviewing the 20th-century philosopher Martin Heidegger for clues about how to think about AI and personal science.
In previous weekly posts, we’ve discussed favorite books about science and the scientific method (e.g., philosophy of science, complementary science, hunter-gatherers, consciousness, emotions, and unpopular scientists). All are highly recommended for personal scientists. But is there a broader framework for thinking not just about science, but about human intelligence in general?
Academic philosophers often distinguish between “Analytic” and “Continental” styles. Americans tend to be better represented among the Analytics, who focus on logic and language — with obvious implications for practical subjects like computer science. But for thinking about whether intelligent computers are possible in the first place, nothing beats the work of Continental philosopher Martin Heidegger and those he influenced, including Jean-Paul Sartre, Michel Foucault, Jacques Derrida, Charles Taylor, and many others.
I think therefore I am (not quite)
Although Heidegger wrote prolifically on many subjects for decades (he died in 1976 at age 86), much of his work derived from his core interest in the metaphysical question of Being (“Sein”, in German) and what it means to exist. He argued that the entire basis of Western thought, especially since the time of Plato, began with an incorrect assumption that thinking (as in rationality or cogito) is the trait that makes humans special. Instead, Heidegger suggests that we are defined by our comportments, the way we face the world around us. An important consequence is his rejection of the subject-object distinction—the idea that humans are material objects, distinct from each other and the rest of the world. No, says Heidegger: we are being-in-the-world, his hyphenated neologism to describe the special relationships that tie all of us, independent of time and space.
This is a gross simplification that barely does justice to the complexity of these ideas, but it has implications that are important to personal science.
When you are deeply engaged in an activity, you’re not thinking. Skillfully driving a car, for example, requires that you not think about driving. If anything, your comportment is toward your destination—the reason you got in the car in the first place. And your comportment doesn’t stop with the destination; you drive there in order to do something else, which itself has another purpose, and on and on throughout your life. The ultimate comportment—and what makes humans special—is our obsession toward being—existing—which is wrapped up in our sense of temporality—our understanding of past, present, and future.
Computers, even powerful AI systems like ChatGPT, don’t have comportments. Relative to us humans, they’re no different from cars. They’re tools (Heidegger uses the term “equipment”). Even if, magically, an AI robot acted, talked, and looked the same as your fellow humans, it wouldn’t be the same. It doesn’t comport to anything. It has no temporality.
The reason we can even articulate this as a question (“can computers think?”) is due to a fundamental misunderstanding about what it means to be in the world. You are not some individual “mind”, separate from other minds and objects. You just feel that way because you were “thrown” into a society with assumptions built on assumptions, many of them so foundational that, like a fish in water, you can’t even tell they’re there. Other civilizations, including pre-Descartes Western culture, have different assumptions.
Science doesn’t think
When we use the term “science”, we’re referring broadly to a set of ideas about data, experimentation, reproducibility, etc. More fundamentally, we assume that the world is made of matter (atoms), that there are consistent laws that describe how matter works, and that those laws never change. “But of course!” you say, “how could it be any other way?”.
Well, many societies throughout history offer plenty of “other ways,” whether it’s some Abrahamic God keeping it together, or the Dao (道), or a Demiurge, and on and on. The point is that our “scientific” way is merely one of many explanations.
Now, you can argue that a scientific view offers the “best” explanation—and the impressive technology we see around us seems like good evidence for that—but whatever conclusions you get from science are limited to what is derivable from those original assumptions.
What do we do when we witness something that doesn’t conform to the (material, predictable, constant) assumptions of science? If we’re locked into a scientific world view, we’ll force-fit our observations into a materialist explanation, not because it’s the best explanation, but because we literally can’t conceive of any alternative.
“Science doesn’t think”, concludes Heidegger. In other words, the whole point of a scientific perspective is to limit what you allow yourself to consider.
So what?
Our biggest problem is that we are surrounded by too much information. There are far too many books, lectures, and Substacks, not to mention experts and gurus; we can’t possibly evaluate anything but a tiny fraction of them all, much less comprehend them. Yet we must make decisions about what and whom to trust. What can we do?
Science has proven itself effective on a large class of problems, from space travel and nuclear power to disease cures and more. If we can harness just part of that power for personal use — personal science — imagine the better, more effective decisions we can make.
But Heidegger reminds us to be wary of the assumptions that underlie scientific practices, especially the institutionalized version also called “science” as practiced by professionals. Fortunately, as personal scientists we can be deliberate about our assumptions. We can think (“authentically,” to use Heidegger’s term) and stretch our comportments in directions not permissible for salary-dependent professionals.
There’s much more to say about this, as I’ll continue to discuss in future posts.
References
I first encountered Heideggerian ideas while studying AI in college decades ago, partially thanks to a professor named Hubert Dreyfus, whose book What Computers Still Can’t Do is a classic in the genre for explaining the limits of artificial intelligence. Most of Dreyfus’ ideas are still applicable, even with ChatGPT, as I’ve reaffirmed over the last several months of a deep-dive review. Though he passed away in 2017 (age 87), you can still watch recordings of his classic lectures. Start with this YouTube Playlist from 2007 in 28 parts (includes auto-transcription).
No matter how smart you are, Heidegger’s main book Being and Time is not comprehensible without extreme effort. Although you’ll need it as a reference, I highly recommend starting first with the Dreyfus lectures and/or his book-length commentary Being in the World. I also recommend Richard Sembera’s Rephrasing Heidegger, which makes the impenetrable comprehensible.
For lighter reading, Dreyfus and his student Sean Kelly co-wrote All Things Shining: Reading the Western Classics to Find Meaning in a Secular Age, which is a gentler introduction. If you’d rather just watch the movie, see the hour-long 2009 documentary Being in the World (stream it for free with your local library card on Kanopy).
Many of the ideas of philosopher Thomas Kuhn (The Structure of Scientific Revolutions), though not directly influenced by Heidegger, are similar, and there are numerous commentaries that describe the consequences in much more detail. I’ll write more about this in future posts.