Before asking “Can machines think?”, we should clarify the key terms. First, defining a “machine” as any physical system with organized causal processes (including brains) leads us to ask whether such processes can replicate “thinking.” If we define it more narrowly as our current digital computers (e.g., ChatGPT, Gemini), we ask instead whether these computational programs can produce thought. Second, “thinking” can refer to behavior involving reasoning and problem-solving, to internal states tied to beliefs and desires, or even to subjective consciousness. Third, “can” introduces different kinds of possibility: empirical, technological, or moral constraints on whether machines can be considered thinkers. These distinctions frame the debate: functionalists like Turing and Putnam support the possibility of functional thinking, while Searle and Nagel argue that functional imitation does not imply genuine consciousness.

In this essay I argue that machines, under the broad definition, can replicate the behavioral and functional aspects of thinking, and thus can be said to think. However, they lack the capacity for phenomenal consciousness, the “what it is like” character of mental states that many consider essential to genuine thought.

To begin, Alan Turing famously replaces the question “Can machines think?” with an empirical test, the imitation game, now known as the Turing Test. He asks whether a machine can carry out a conversation that is indistinguishable from a human’s. The strength of this move is that it converts a metaphysical question into an operational one: it focuses on observable performance, aligns well with empirical science, and creates a standard of intelligence that is straightforward to apply. Yet Turing’s test has limits. Critics argue that passing the imitation game is at best evidence that a system behaves intelligently, not necessarily an indication of understanding or consciousness.

The imitation game measures intelligence by conversational fluency, assuming that mastery of language is sufficient evidence of thought. In Wittgenstein’s later philosophy, however, meaning arises from participation in shared “forms of life” and rule-governed language games. To genuinely understand a language is to be embedded in a practice where words are tied to actions, intentions, and worldly engagement.

An important condition for participating in a language game is understanding the language and the rules that govern the game. Those who support a mental-image, representational theory of mind argue that understanding something requires one to form a “mental image” of it. If I am trying to solve a problem in graph theory, I must have some mental image in my mind that represents the rules I should follow, or that represents my understanding of the problem. That is, understanding is representational, or pictorial, rather than propositional. Note that certain machines, such as LLMs, cannot satisfy such a representational theory of mind because they are trained on syntactic computation. An AGI, however, could perhaps achieve genuinely engaged rule-following.

Hilary Putnam transforms Turing’s criterion into a more structural conception of mind, claiming that mental states are functional states defined not by their material composition but by their causal roles within a system’s overall organization. As he writes, “pain is not a brain state, but a functional state of a whole organism” (Putnam, 1975). His reasoning rests on multiple realizability: the same mental state, such as pain, can occur in creatures with very different physical constitutions, so what makes it that state must be its causal role rather than its material substrate.

Putnam’s analogy with computational machines reinforces this point: a “probabilistic automaton” with the right causal structure could instantiate the same mental life as a biological organism. Hence, functionalism rescues theories of mind from both dualist mystery and reductive materialism, grounding psychology in systems theory rather than substance.
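To make the functionalist idea concrete, here is a minimal sketch of a toy probabilistic automaton in Python. The state names, stimuli, and probabilities are invented for illustration and are not Putnam’s own example; the point is only that “pain” here is identified by its place in the transition table, its causal role, and nothing in the description says what physically realizes the table.

```python
import random

# Hypothetical machine table: each "mental" state is specified only by how, given
# an input, it probabilistically maps onto a successor state and an output.
# Names and probabilities are invented for illustration.
MACHINE_TABLE = {
    "calm": {"tissue damage": [("pain", "wince", 0.9), ("calm", "ignore", 0.1)]},
    "pain": {"rest": [("calm", "relax", 1.0)]},
}

def step(state, stimulus):
    """Choose a successor state and output according to the table's probabilities."""
    transitions = MACHINE_TABLE[state][stimulus]
    r, cumulative = random.random(), 0.0
    for next_state, output, p in transitions:
        cumulative += p
        if r <= cumulative:
            return next_state, output
    return transitions[-1][0], transitions[-1][1]  # guard against rounding error

state, output = step("calm", "tissue damage")
print(state, output)  # e.g. "pain wince"
```

The same table could in principle be realized in neurons, silicon, or gears; on the functionalist reading, whatever realizes it would be in the same mental state.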

This normative orientation toward thought, and toward truth, can be clarified using Searle’s notion of “direction of fit.” Beliefs and intentions both have content, but they differ in how that content relates to the world. A belief has a mind-to-world direction of fit: if I believe there is flour in my pantry, my belief must conform to the facts. An intention, by contrast, has a world-to-mind direction of fit: if I intend to buy flour, the intention is fulfilled not by matching reality but by changing reality through action. The distinction captures a deeper divide between theoretical thought, which aims to represent the world truthfully, and practical thought, which aims to transform it. If machines merely generate outputs without possessing states that are answerable to truth (as beliefs are) or capable of guiding action (as intentions are), then their linguistic performances may lack the normative structure characteristic of genuine thinking. In other words, Searle views intentionality as grounded in the world we live in and act upon, the physical world our words refer to.

In his Chinese Room argument, Searle further challenges the claim that running the right program suffices for understanding. Imagine a person who blindly follows a program to manipulate Chinese symbols: from the outside, the outputs are coherent Chinese, but the person has no idea what the symbols mean. Searle’s point is that symbol manipulation, or syntax, is not sufficient for semantic understanding.
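A minimal sketch can illustrate what Searle means by syntax without semantics. The toy rulebook below is invented for illustration: the program maps incoming symbol strings onto outgoing ones purely by their shape, never consulting what any symbol means, yet from the outside the exchange can look fluent.

```python
# Toy "rulebook": maps Chinese input strings to Chinese output strings by shape alone.
# The mappings are invented for illustration.
RULEBOOK = {
    "你好吗": "我很好，谢谢",    # "How are you?" -> "I'm fine, thanks"
    "你会思考吗": "当然会",      # "Can you think?" -> "Of course"
}

def chinese_room(symbols: str) -> str:
    """Follow the rulebook by pattern matching alone; no meaning is consulted."""
    return RULEBOOK.get(symbols, "对不起，我不明白")  # default: "Sorry, I don't understand"

print(chinese_room("你好吗"))  # fluent-looking output, with no understanding anywhere
```

On Searle’s view, nothing changes if the rulebook grows to cover every possible input: the lookup becomes more impressive, but no semantics appears.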

The Chinese Room is compelling because it highlights an explanatory gap: programming explains behavior but not meaning. The argument is still debated today and remains one of the most contested thought experiments in the philosophy of mind. However, I will not survey the objections and replies here, because Searle’s argument opens the door to a deeper question which is of more concern to us: even if machines can replicate intelligent behavior, they may still fail to possess subjective thought.

Beyond semantics lies the deeper issue of phenomenal consciousness. Descartes defined the mind in terms of thinking and self-awareness: “I think, therefore I am.” Modern philosophers like Thomas Nagel emphasize the subjective character of experience, the “what it is like” of being a conscious creature, and argue that purely physical or functional accounts leave this out. If there is an irreducible subjective dimension, then no amount of behavioural mimicry or functional replication will produce true mental life. Nagel’s anti-reductionist worry is that the materialist story may capture structure but not the qualitative aspect of experience.

The strength of the phenomenal objection is its insistence on first-person facts: consciousness seems categorically different from third-person functional descriptions. Its weakness, however, is methodological. Subjective data resist public verification and scientific modelling; opponents propose that neuroscience will eventually redescribe subjective states in objective terms, dissolving the mystery rather than denying the facts (Churchland, 1986). The debate therefore remains open: either consciousness is a future scientific problem, or it is a genuine limit on material explanation.

In conclusion, if “thinking” is defined behaviorally or functionally, then machines can, in principle, think. The right causal organization, whether biological or artificial, should generate the capacity for reasoning, learning, and communication. But if thinking includes semantic understanding and phenomenal consciousness, the inner grasp and felt quality of mental states, then running programs or instantiating functional roles is not obviously sufficient.
