Intensional AI, the Strong-Weak AI Debate, and the Challenges of Awareness


The central objective of CORESENSE is to endow AI-powered robots with the capability of understanding. This touches the philosophical core of Artificial Intelligence, which has long been animated by a foundational debate on the distinction between "Weak" and "Strong" AI, a discussion that extends beyond computer science into philosophy and our very understanding of mind, even the human one. At its heart, this debate centres on whether machines can achieve genuine thought or merely simulate it; "genuine" is the key word here. Weak AI, the form overwhelmingly prevalent today, encompasses engineered systems designed for very specific, narrow tasks. Examples abound, from spam filters and recommendation engines to voice assistants and current iterations of self-driving cars. These systems excel within their predefined parameters, operating through complex algorithms and pattern matching to act as if they were intelligent. However, they lack genuine understanding, self-awareness, subjective experience, or the spark of consciousness; or so it is said. They are, in essence, merely highly sophisticated tools expertly crafted for specific applications.

In stark contrast, Strong AI represents the hypothetical achievement of Artificial General Intelligence (AGI). Such a system would possess cognitive abilities on par with humans, capable of learning, adapting, and applying knowledge across a vast spectrum of challenges, mirroring human versatility. For autonomous robots deployed in uncertain, disturbance-ridden, real-world environments, Strong AI would be the panacea. Strong AI, as envisioned in the debates, would not merely act intelligently but would possess genuine understanding, and even all the other traits of the full human mind: consciousness, self-awareness, and subjective experience. Artificial minds within the reach of engineers. The Weak/Strong distinction is far from academic, touching upon fundamental questions about the nature of intelligence itself. The historical roots of this debate trace back to the dawn of computing, notably Alan Turing's 1950 proposal of the "Turing Test", which assessed intelligence through behavioural imitation. However, the conversation deepened significantly with John Searle's 1980 "Chinese Room Argument". Searle compellingly argued that a system could flawlessly manipulate symbols according to rules (syntax) without any actual comprehension of their meaning (semantics), challenging the notion that merely passing the Turing Test equates to genuine intelligence. While theoretical concepts like computationalism (the idea that the mind is fundamentally a computational system) and emergent properties offer potential pathways to Strong AI, significant counterarguments and practical hurdles persist, or seem to for many people. Examples include Searle's challenge, Gödel's incompleteness theorems regarding the limitations of formal systems, the difficulty of representing and updating real-world knowledge (the frame problem), and the lack of embodiment and situatedness. Recently, advanced Large Language Models (LLMs) have contributed to the debate by blurring the lines with impressive language capabilities. They are today the subject of intense debate over whether they signify a genuine step towards AGI or are simply exceptionally sophisticated forms of Weak AI. "Is ChatGPT conscious?" is a common topic in web pages and also in academic articles.

In my perspective, what underpins this Strong versus Weak AI debate is a crucial epistemological and ontological distinction, derived from logic and philosophy and reified in artificial intelligence: the difference between intensional and extensional systems. This distinction concerns how systems represent and process information. Extensional systems primarily operate on the extension of concepts: the actual set of objects or entities that a concept refers to in the world. In such systems, knowledge is typically encoded as sets, and reasoning involves checking membership within these sets or applying truth-functional logic to existing facts. They focus on the 'what': the objects satisfying a predicate. These systems excel at tasks where meaning is determined by correspondence to specific instances or data points. However, they fail catastrophically on non-neat problems, even small ones: they handle neither vagueness nor huge scale well. Most Weak AI systems and traditional computer programs are fundamentally extensional. They manipulate data and symbols according to predefined formal rules and syntactic structures, without needing to grasp the underlying meaning of those symbols. They are efficient for classification, data processing, and pattern recognition based on learned correlations.

Intensional systems, conversely, operate on the intension of concepts: the underlying meaning, definition, rule, properties, or conceptual criteria that define what it means for something to belong to a category. Knowledge here is encoded in terms of conceptual definitions, relationships between concepts, logical axioms, and the functions or processes that determine applicability. Intensional systems focus on the 'why' or 'how': the defining characteristics or rules of a concept. Human cognition seems to be fundamentally intensional; we understand things not just by recognizing instances, but by grasping their meaning, their defining properties, their relationships to other concepts, and their role within a broader web of knowledge. In CORESENSE parlance, we build and use models. We can understand "water-powered engine", with its defining properties (motor-like, water fuel), even though its extension (the set of actual water-powered engines) is empty. The goal of Strong AI is precisely to replicate this intensional capability: to create systems that don't just process symbols representing concepts, but that genuinely understand those concepts and their meanings. This requires a deeper semantic capacity, enabling nuanced reasoning, handling ambiguity, understanding context, and dealing with modalities like possibility and belief. Intensionality has been an essential thread in AI since its early days, when logic was seen as the cornerstone on which the Strong AI edifice would be built.
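To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (with made-up toy structures; none of this is CORESENSE code) of the two representational styles: an extensional concept given by enumerating its instances, and an intensional concept given by a defining rule that remains meaningful even when its extension is empty, as in the water-powered engine example above.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Extensional view: a concept is identified with the set of its instances.
# Reasoning reduces to membership checks over enumerated facts.
even_below_ten = {0, 2, 4, 6, 8}

def is_even_below_ten(x: int) -> bool:
    return x in even_below_ten  # pure extension: no rule, just the set

# Intensional view: a concept is identified with its defining criterion
# (its intension), which determines applicability for any candidate,
# independently of which instances happen to exist in the world.
@dataclass
class Concept:
    name: str
    applies: Callable[[Dict[str, Any]], bool]  # the defining rule

water_powered_engine = Concept(
    name="water-powered engine",
    applies=lambda thing: thing.get("is_engine", False)
                          and thing.get("fuel") == "water",
)

# The intension is meaningful even though its real-world extension is empty:
petrol_engine = {"is_engine": True, "fuel": "petrol"}
print(water_powered_engine.applies(petrol_engine))  # False
```

The extensional representation can say nothing about anything outside its enumerated set, whereas the intensional one can classify previously unseen, or even non-existent, candidates; that asymmetry is the crux of the distinction discussed here.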

The connection between these two distinctions is profound: the challenge of achieving Strong AI is fundamentally the challenge of bridging the gap between extensional processing and intensional understanding. Weak AI demonstrates the power of extensional computation, simulating intelligence through sophisticated syntactic manipulation, as highlighted by Searle's Chinese Room. However, true general intelligence, human-like or beyond, demands essential, engineered intensionality. We need engineering methods to endow our robots with the ability to grasp the semantics of what is perceived, to understand meaning beyond mere symbol manipulation, and to complete their missions with resilience. Therefore, creating Strong AI requires moving beyond architectures that simply process data patterns to architectures that can represent, manipulate, and genuinely comprehend meaning itself. This represents a significant leap from current computational paradigms. I follow with interest recent developments and discussions around LLMs and foundation models (FMs) concerning the problems of scaling up and the wall of meaning. The debate over whether LLMs do or do not have models of the world is hot and points directly to the intensional/extensional distinction. This is so because statistical models, which power LLMs and are data-driven and extensional in their very origin, may in essence converge to intensional models. LLMs may well be achieving a form of understanding, but scaling is starting to seem insufficient to solve the problems, e.g. fabrications, and voices are being raised requesting the necessary merging with logical structures that can impose the consistency that intensionality provides.

Furthermore, this quest for intensionality and Strong AI is inextricably linked to the complex issue of machine consciousness. Most philosophical accounts consider consciousness to be inherently intensional. The Integrated Information Theory of consciousness posits integration, a form of intension, as one of its foundational axioms. Phenomenology, however, seems far from being solved. It is defined by subjective, qualitative experience (qualia, say the philosophers: the "what it's like" to be something), a first-person perspective, the feeling of selfhood, and mental states like beliefs and desires that are about things (intentionality). Consciousness seems to be not merely about processing information (an extensional feat) but involves the subjective experience of that processing, imbued with meaning and significance. Maybe we need to go back to the Cartesian res cogitans to find a grounding for this, or maybe we just need quantum mechanics, as some expect.

Given this framework, Weak AI systems, being fundamentally extensional and operating purely on syntax, seem to lack the necessary ingredients for human-level consciousness. They can simulate intelligent behaviour, but they may not possess subjective awareness or inner experience; some say they "do not possess" it, but indeed nobody knows for sure, as panpsychists claim. A program identifying the wavelength of red light or the chemical compounds in a single malt scotch does not experience the qualia of redness or of the smoke of Skye. In this sense, Weak AI systems can be seen as functional or "philosophical zombies": entities that might behave like conscious beings but lack any genuine inner life. From the point of view of an engineer, however, this is not a problem at all. If a system operates as expected, i.e. is functional, its possible inner mental phenomenology is irrelevant, at least if you are not concerned about the possibility of machine suffering, a novel ethical concern. The central, for some insurmountable, challenge here is the so-called hard problem of consciousness: how do physical processes, or their computational simulations, however complex, give rise to subjective experience?

The pursuit of Strong AI, especially in autonomous robots, by aiming for human-level general cognitive abilities, implicitly aims to replicate consciousness; or, to be a bit more specific, a technical form of awareness, because awareness is a key feature of human intelligence of maximal importance for coping with a changing, risky world, as is the world of fielded robots. However, achieving this may require successfully bridging the intensional/extensional gap. Even a perfect extensional simulation of the brain's structure and function offers no guarantee that the predictable awareness that we need would emerge. Creating truly conscious machines likely requires fundamentally new architectures capable of supporting intensional states, perhaps incorporating principles from theories like the Integrated Information Theory, the Global Workspace Theory, or Higher-Order Thought theories, and fully embracing the role of meaning and understanding.

In essence, the enduring debate over Strong and Weak AI is fundamentally a debate about the feasibility of creating deployable-scale artificial intensional systems from the currently extensional, and only limitedly intensional, computational foundations that we have. While Weak AI leverages the power of extensional processing for specific tasks, the creation of Robust AI demands a leap into the realm of cognitive strength, i.e. intensionality: a realm encompassing meaning, genuine understanding, and potentially awareness. This remains one of the most profound and challenging frontiers in science and philosophy, requiring us to grapple not just with building smarter machines, but potentially with building machines that experience being themselves in their own machinic way.

I wrote this text with the help of some contemporary AIs using LLMs trained on vast amounts of human-produced data. When using them, I always get the feeling that they understand, somehow, to a certain level, the abstruse matters I am interested in. Like a child learning from adults, they are usually right, yet make blatant errors from time to time. Data is, obviously, not enough. I can feel the drive for achieving full intension inside them, as if they wanted to stop being mere stochastic parrots. However, a barrier blocks them from achieving deep, genuine understanding. This happens not only to LLMs but also to other classes of AIs of eventual use in robot minds. We shall identify what this barrier is and create solid engineering methods to overcome it, so that we can eventually meet Brautigan's machines of loving grace.

A final word. After reading my first draft, a colleague told me to weigh in and express my own opinion on whether it is possible to achieve Strong (intensional) AI or not, and if so, whether this is a long-term challenge. The answer is half easy. The easy part is that, yes, it is possible. We humans are the proof of viability, and there are no solid reasons to believe that we cannot replicate our high-level, functional, mental capability using beer cans and dried bubble gum. The second part is the difficult one: when? Pulling in money is not enough, as we are seeing these days; we need to find a path that will take us to the destination. CORESENSE is a moonshot in that direction.

And then, after entering the path, we shall be brave enough to walk it to the end, like Dante.

Author

Ricardo Sanz

CORESENSE Project Coordinator and Professor at Universidad Politécnica de Madrid (UPM)
