{"id":78,"date":"2026-03-17T08:56:48","date_gmt":"2026-03-17T08:56:48","guid":{"rendered":"https:\/\/thinkerivus.com\/?p=78"},"modified":"2026-03-21T03:24:31","modified_gmt":"2026-03-21T03:24:31","slug":"can-machines-think-and-feel","status":"publish","type":"post","link":"https:\/\/thinkerivus.com\/en\/can-machines-think-and-feel\/","title":{"rendered":"Can Machines Think and Feel"},"content":{"rendered":"<p>Before asking &#8220;Can machines think?&#8221;, we should clarify the key terms. First, defining a &#8220;machine&#8221; as any physical system with organized causal processes (including brains) leads us to ask whether these processes can replicate &#8220;thinking.&#8221; If we define it more narrowly as our current digital computers (i.e ChatGPT, Gemini), we question whether these computational programs can produce thought. Second, &#8220;thinking&#8221; can refer to behavior involving reasoning, problem-solving, internal states tied to beliefs and desires, or even subjective consciousness. Third, the idea of &#8220;can&#8221; introduces varying possibilities: empirical, technological, or moral constraints on whether machines can be considered thinkers. These distinctions guide the debate: Functionalists like Turing and Putnam support the possibility of functional thinking, while Searle and Nagel argue that functional imitation doesn\u2019t imply genuine consciousness.<\/p>\n\n\n\n<p>In this essay I argue that machines, under the broad definition, can replicate the behavioral and functional aspects of thinking, and thus can be said to think. They, however, lack the capacity for phenomenal consciousness and the \u201cwhat is it like\u201d character of mental states that man consider genuine thought.<\/p>\n\n\n\n<p>To begin, Alan Turing famously replaces the question \u201cCan machines think?\u201d with an empirical Turin Test. He asks whether a machine can carry out a conversation that is indistinguishable from a human. 
The strength of this move is that it converts a metaphysical question into an operational one: it focuses on observable performance, aligns well with empirical science, and creates a standard of intelligence that is straightforward to apply. Yet Turing\u2019s test has limits. Critics argue that passing the imitation game at best shows that a system behaves intelligently; it is not necessarily an indication of understanding or consciousness.<\/p>\n\n\n\n<p>The imitation game measures intelligence by conversational fluency, assuming that mastery of language is sufficient evidence of thought. In Wittgenstein\u2019s later philosophy, however, meaning arises from participation in shared \u201cforms of life\u201d and rule-governed language games. To genuinely understand a language is to be embedded in a practice where words are tied to actions, intentions, and worldly engagement.<\/p>\n\n\n\n<p>An important condition for participating in a language game is understanding the language and the rules that govern the game. Those who support a mental-image representational theory of mind argue that understanding something requires one to form a \u201cmental image\u201d of the thing in question. If I am trying to solve a problem in graph theory, I must have some mental image that represents the rules I should follow, or that represents my understanding of the problem. That is, understanding is representational, or pictorial, rather than propositional. Note that certain machines such as LLMs cannot satisfy such a representational theory of mind because they are trained on syntactic computation. AGIs, however, could perhaps commit to genuine, engaged rule-following.<\/p>\n\n\n\n<p>Hilary Putnam transforms Turing\u2019s criterion into a more structural conception of mind, claiming that mental states are functional states defined not by their material composition but by their causal roles within a system\u2019s overall organization. 
As he writes, \u201cpain is not a brain state, but a functional state of a whole organism\u201d (Putnam, 1975). His reasoning is as follows.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>P1: Mental states are not type-identical to physical states; e.g., pain is not c-fibers firing, because otherwise octopuses or other alien beings would not feel pain.<\/li>\n\n\n\n<li>P2: From P1, there is strong reason to deny type-physicalism and accept that mental properties are multiply realizable.<\/li>\n\n\n\n<li>P3: Accepting that mental properties are multiply realizable allows a given function or system to be implemented in different substrates, e.g. carbon-based brains, silicon circuits, or alien systems.<\/li>\n\n\n\n<li>C: Therefore, it is better to adopt a theory of thinking that unifies all thinkers under a shared functional architecture.<\/li>\n<\/ul>\n\n\n\n<p>Putnam\u2019s analogy with computational machines reinforces the claim that a \u201cprobabilistic automaton\u201d with the right causal structure could instantiate the same mental life as a biological organism. Hence, functionalism rescues theories of mind from both dualist mystery and reductive materialism, grounding psychology in systems theory rather than substance.<\/p>\n\n\n\n<p>The normative orientation of thought toward truth can be clarified using Searle\u2019s notion of \u201cdirection of fit.\u201d Beliefs and intentions both have content, but they differ in how that content relates to the world. A belief has a mind-to-world direction of fit: if I believe there is flour in my pantry, my belief must conform to the facts. An intention, by contrast, has a world-to-mind direction of fit: if I intend to buy flour, the intention is fulfilled not by matching reality, but by changing reality through action. The distinction captures a deeper divide between theoretical thought, which aims to represent the world truthfully, and practical thought, which aims to transform it. 
If machines merely generate outputs without possessing states that are answerable to truth (as beliefs are) or capable of guiding action (as intentions are), then their linguistic performances may lack the normative structure characteristic of genuine thinking. In other words, Searle views intentionality as grounded in the world we live in and in the physical world to which our words refer.<\/p>\n\n\n\n<p>In his Chinese Room argument, Searle further challenges the claim that running the right program suffices for understanding. Imagine a person who blindly follows a program to manipulate Chinese symbols: from the outside, the outputs are coherent Chinese, but the person has no idea what the symbols mean. Searle\u2019s point is that symbol manipulation, or syntax, is not sufficient for semantic understanding.<\/p>\n\n\n\n<p>The Chinese Room is compelling because it highlights an explanatory gap: programming explains behavior but not meaning. The argument is still debated to this day and remains one of the most contested positions in the philosophy of mind. However, I will not dive into the objections and replies, because Searle\u2019s argument opens the door to a deeper question that is of more concern to us: even if machines can replicate intelligent behavior, they fail to possess subjective thought.<\/p>\n\n\n\n<p>Beyond semantics lies the deeper issue of phenomenal consciousness. Descartes defined the mind in terms of thinking and self-awareness: \u201cI think, therefore I am.\u201d Modern philosophers like Thomas Nagel emphasize the subjective character of experience, the \u201cwhat it is like\u201d&#8211;ness of being a conscious creature, and argue that purely physical or functional accounts leave this out. If there is an irreducible subjective dimension, then no amount of behavioral mimicry or functional replication will produce true mental life. 
Nagel\u2019s anti-reductionism holds that the materialist story may capture the structure of experience but not its qualitative aspect.<\/p>\n\n\n\n<p>The strength of the phenomenal objection is its insistence on first-person facts: consciousness seems categorically different from third-person functional descriptions. Its weakness, however, is methodological. Subjective data resist public verification and scientific modeling; opponents propose that neuroscience will eventually redescribe subjective states in objective terms, dissolving the mystery rather than denying the facts (Churchland, 1986). The debate therefore remains open: either consciousness is a future scientific problem, or it is a genuine limit to material explanation.<\/p>\n\n\n\n<p>In conclusion, if \u201cthinking\u201d is defined behaviorally or functionally, then machines can, in principle, think. The right causal organization, whether biological or artificial, should generate the capacity for reasoning, learning, and communication. 
But if thinking includes semantic understanding and phenomenal consciousness, the inner grasp and felt quality of mental states, then running programs or instantiating functional roles is not obviously sufficient.<\/p>","protected":false},"excerpt":{"rendered":"<p>Before asking &#8220;Can machines think?&#8221;, we sho [&hellip;]<\/p>","protected":false},"author":1,"featured_media":1174,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[],"class_list":["post-78","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-proceedings"],"_links":{"self":[{"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/posts\/78"}],"collection":[{"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/comments?post=78"}],"version-history":[{"count":0,"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/posts\/78\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/media\/1174"}],"wp:attachment":[{"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/media?parent=78"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/categories?post=78"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thinkerivus.com\/en\/wp-json\/wp\/v2\/tags?post=78"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}