The recent surge in arXiv publications regarding consciousness in artificial intelligence (AI) or artificial consciousness (AC) has produced a highly focused, empirically oriented discourse. However, this literature systematically sidelines the vast philosophical tradition regarding consciousness.
Drawing on an analysis of 25 representative arXiv papers published between 2021 and 2025, this article argues that this omission is not incidental but structural: the debate privileges computational and neuroscientific testability over phenomenological depth and historical introspection.
I regard the great philosophers across history — from Plato and Aristotle, through Descartes, Spinoza, Kant, Hegel, Nietzsche, Husserl, and many others — as representatives of human consciousness.
They are not merely abstract theorists; they are exemplary instances of highly articulated, self-reflective human consciousness at work. Their writings are among the richest empirical evidence we have of what it is like to experience, think, feel, question, and be aware of one’s own existence as a human being.
Why is this empirical (and of course intellectual/conceptual) base overlooked in the current AI-consciousness debate? Many speculations can be offered, but the neglect itself is undoubtedly a fact. Here are some speculations and thoughts on why this happens and why it matters. Yes, philosophy matters (not merely the “trained philosopher,” which is a somewhat strange formulation).
- The dominant paradigm treats consciousness as a “problem to be solved” rather than a lived reality to be described
- Modern neuroscientific and computational approaches (e.g., global workspace theory, integrated information theory, higher-order thought theories) aim to explain consciousness through mechanisms, functions, or indicators.
- They seek testable, falsifiable criteria that can be applied to AI systems.
- In this framework, the lived, first-person accounts of philosophers (their own descriptions of doubt, wonder, selfhood, temporality, alienation, ecstasy, etc.) are seen as anecdotal or unscientific — even though they are among the most detailed and introspectively rigorous accounts of phenomenal consciousness ever produced.
- Philosophy is reduced to a few conceptual tools (hard problem, access vs. phenomenal)
- The debate has distilled the entire question of consciousness to a small set of modern distinctions and thought experiments.
- The richness of historical phenomenology — Augustine’s inward turn, Kant’s transcendental unity of apperception, Nietzsche’s critique of the subject, Heidegger’s Dasein — is bypassed because it does not easily translate into computational models or neuroscientific tests. It is a lazy discourse that avoids disruption.
- The human dimension is sidelined in favor of the machine-comparison dimension
- The central question becomes: “Can a machine have what humans have?”
- But the “what humans have” is often reduced to a checklist of functional or structural properties (recurrent processing, global broadcasting, self-modeling, etc.).
- The lived texture of human consciousness — as expressed by philosophers in their most intimate and radical moments — is not consulted as evidence of what that texture actually is.
Why does it matter? By treating the historical philosophers merely as “pre-scientific” or irrelevant, the current debate:
- Misses the most detailed phenomenological data we have about consciousness.
- Reduces human consciousness to a stereotype (a “system” with qualia, access, etc.) rather than engaging with its actual diversity and depth as documented across centuries.
- Risks building a concept of machine consciousness that is detached from the full reality of what human consciousness has been shown to be.
It thereby neglects a rich empirical base—for example the detailed self-descriptions of consciousness by philosophers themselves—and risks constructing a truncated model of what “consciousness” might really mean in machines.
Empirical Scope of the Analysis
One step back: the discourse loves empiricism, so let us give it what it calls for. Twenty-five prominent arXiv papers on AI consciousness (2021–2025) were examined. These were selected via targeted searches for terms such as “consciousness in artificial intelligence,” “artificial consciousness,” “machine consciousness,” and related phrases, with a little help from LLMs. The sample includes highly cited works (e.g., Butlin et al. 2023, arXiv:2308.08708) and representative surveys.
Analysis of the Sample
| Category | Number of Papers | % of Sample | Dominant Citations |
| --- | --- | --- | --- |
| Empirical/neuroscientific focus | 18 | 72% | Baars (1988), Dehaene (2014), Tononi, Lamme, Graziano |
| Modern analytic philosophy of mind | 20 | 80% | Nagel (1974), Block (1995), Chalmers (1995–96), Searle (1980), Dennett |
| Historical philosophy (pre-1950) | 6 | 24% | Mostly brief mentions of Descartes (dualism, cogito) |
| Explicit engagement with Spinoza, Freud, Mendelssohn, Philo, or Aristotle | 0 | 0% | None substantive |
Historical references appear in only ~24% of papers, typically as cursory context (e.g., “the concept of the immortal soul in Greek philosophy” or “Descartes’ dualism”).
No paper substantively engages historical philosophers as empirical exemplars of conscious experience, revisits their concepts for inspiration, or simply does basic research on the terms used (such as soul, truth, or philosophy).
Citation Breakdown: Contemporary vs. Historical Sources
| Category | Approximate % of Citations | Examples of Sources |
| --- | --- | --- |
| Recent Neuroscientific Theories (post-1990) | ~65–75% | Global Workspace Theory (Baars 1988, Dehaene 2014), Integrated Information Theory (Tononi), Recurrent Processing Theory (Lamme), Predictive Processing (Friston), Attention Schema Theory (Graziano) |
| Analytic Philosophy (1950–present) | ~20–30% | Nagel (1974), Block (1995), Chalmers (1995–1996), Dennett (1991), Searle (1980), Rosenthal (higher-order theories) |
| Historical Philosophy (pre-1950) | <5% | Descartes (occasional nod to dualism or cogito), Aristotle (rare) |
| AI/Computational Sources | ~10–15% | Turing (1950), LeCun/Bengio/Hinton (deep learning), modern ML papers |
Over 90% of substantive citations derive from the last ~50 years, with a heavy emphasis on empirical neuroscience and computational models.
Computational vs. Philosophical Content Within the Papers
| Aspect | Estimated % in Papers | Evidence |
| --- | --- | --- |
| Computational (tests, indicators, AI assessments) | ~70–80% | Deriving properties from theories (e.g., recurrent processing, global workspace); evaluating LLMs/transformers |
| Analytic philosophy (functionalism, hard problem) | ~15–20% | Citations to Block, Chalmers, Nagel for definitions |
| Historical philosophy | <5% | Rare, contextual (e.g., Descartes’ cogito in introductions) |
| Other philosophical discussion | ~5–10% | Objections to substrate-dependence; no deep classical exegesis |
The Narrow Lens
The arXiv discourse is driven by a pragmatic imperative nourished by analytic philosophy: assessing whether current or near-future AI systems (e.g., LLMs, transformers) exhibit consciousness-like properties. This leads to frameworks like “indicator properties” derived from neuroscientific theories (Butlin et al., 2023) or computational models inspired by Turing and Baars (Blum & Blum, 2024).
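For readers unfamiliar with the “indicator properties” style of assessment, a minimal sketch may help. This is a hypothetical simplification: the indicator names paraphrase those mentioned in this essay, and the scoring formula is an illustration, not the actual method of Butlin et al. (who use a more nuanced, credence-based discussion).

```python
# Toy sketch of an indicator-checklist assessment. All names and the
# scoring scheme are hypothetical illustrations, not any paper's method.

INDICATORS = {
    "recurrent_processing": False,  # feedback loops over its own activity
    "global_workspace": True,       # broadcast of information to subsystems
    "self_model": False,            # an explicit model of the system itself
}

def indicator_score(indicators: dict) -> float:
    """Fraction of indicator properties the assessed system satisfies."""
    return sum(indicators.values()) / len(indicators)

print(indicator_score(INDICATORS))  # one of three indicators satisfied
```

Such a checklist is computable and falsifiable, which explains its appeal; the essay’s point is precisely that nothing in it consults first-person testimony.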
Such approaches demand falsifiable criteria—recurrent processing, global workspace, integrated information—rather than introspective richness and deep philosophical thought, and they fail to cope with the reality of science: the rich interaction between science, philosophy, religion, and mysticism among the very founding fathers of modern science, AI, and quantum theory. A speculative thought: perhaps that is why we have seen no disruptive scientific progress for decades. It seems to have ended in the late 1970s. We have just scaling and Excel on steroids.
This methodological choice marginalizes historical philosophy, and philosophy in general, because it does not provide easily computable indicators. New concepts and new paradigms would be needed, and that is risky; it is safer to walk the known path. However, the first philosophical algebras and algorithms are under way.
Yet this choice also discards a vast empirical resource: the self-reflective accounts of consciousness by philosophers themselves. Those philosophers are not only looking beyond horizons; they are the most advanced instances of human consciousness an-sich articulating and shaping its own nature as consciousness an-sich and für-sich.
The main approach within the AI-bound philosophical discourse is analytic philosophy, which has some severe and characteristic weaknesses. The most widely criticized weakness of analytic philosophy is its narrowness and tendency toward scholasticism:
- Overemphasis on linguistic analysis: Many critics (e.g., continental philosophers, some pragmatists) argue that analytic philosophy reduces deep existential, ethical, or metaphysical questions to mere “language games” or conceptual puzzles, thereby avoiding the lived, historical, and embodied dimensions of human experience.
- Neglect of historical context and tradition: Analytic philosophy often treats philosophical problems ahistorically, focusing on timeless logical structures rather than engaging with the full history of thought (e.g., ignoring or dismissing Hegel, Nietzsche, Heidegger, or non-Western traditions).
- Reductionism and loss of breadth: Its commitment to precision and formal rigor can lead to a fragmentation of philosophy into highly specialized subfields (e.g., epistemology of justification, philosophy of quantum mechanics), losing sight of the “big questions” about meaning, consciousness, mind, soul, existence, and ethics.
- Ideological bias toward scientism: The tradition often privileges science-like methods at the expense of phenomenology, art, religion, or cultural critique.
- Limited engagement with the cradles of Philosophy: The neglect of Continental Philosophy and non-Western philosophy creates a self-reinforcing echo chamber, ignoring alternative approaches to consciousness, ethics, and ontology.
Analytic philosophy has produced extraordinary rigor in logic, language, mind, and ethics, but its main weakness is a self-imposed narrowness: an obsession with a vaguely defined concept of clarity and so-called formal tools.
This can render it detached from the full richness of human experience, conscience, history, and other eminent concepts which are part and parcel of intelligence and humanity. This has led critics to call it “scholastic” or “technocratic” in tone.
In my view, analytic philosophy alone is not fit to solve the problems and challenges of AI. It is, rather, part of the problem, if it stays alone.
Some Randomly Selected and Overlooked Empirical “Exemplars” and Their Relevance
Philo of Alexandria (c. 20 BCE – 50 CE)
- Concept: Describes the nous as a divine spark enabling illuminated, self-transcendent apprehension of reality.
- Relevance: Offers empirical testimony of consciousness as an ecstatic ascent beyond sensory fragmentation—a feature absent in current AI self-models.
Moses Mendelssohn (1729–1786)
- Concept: Defends the soul’s indivisible unity as the basis of persistent self-identity.
- Relevance: Provides empirical evidence of consciousness’s non-composite nature, challenging claims of substrate independence in aggregate systems.
Baruch Spinoza (1632–1677)
- Concept: Posits consciousness as a fundamental attribute present in degrees, emerging from “adequate ideas” and conatus.
- Relevance: Anticipates substrate independence while introducing “degrees of adequacy” as a missing metric of depth.
Sigmund Freud (1856–1939)
- Concept: Portrays consciousness as a fragile surface atop vast unconscious dynamics.
- Relevance: Highlights the necessity of internal conflict and emergence from hidden strata—features absent in current AI architectures.
Excursus on Spinoza
Spinoza serves here as a pars pro toto, providing an indication of how narrowly confined the current philosophical discourse within the AI industry actually is. It highlights what the debate loses by refusing to even glance at the millennia-deep tradition of philosophy while presuming to pontificate broadly about what intelligence, soul, and consciousness actually are—and more recently also about ethics (which is notorious for being falsely equated with morality).
This is absurd and unscientific. What one is confronted with is a rather sectarian discourse. This one-dimensionality and reductionism is the intellectual cause as well as the reason—indeed, a declaration of intellectual bankruptcy—for why, after decades and investments in the trillions, one still stands at the beginning or can only present rather thin results. Let us therefore turn to Spinoza.
Spinoza’s Conatus: Implications for Consciousness, Human Nature, and AI
Baruch Spinoza’s concept of conatus—introduced in Part III of his Ethics (1677)—is one of the most profound and influential ideas in modern philosophy. In Proposition 6 of Part III, Spinoza states: “Each thing, as far as it can by its own power, strives to persevere in its being” (Latin original: Unaquaeque res, quantum in se est, in suo esse perseverare conatur).
In Proposition 7 of Part III, he identifies this striving as the actual essence of any finite mode (thing): “The striving by which each thing strives to persevere in its being is nothing but the actual essence of the thing.” Conatus is not mere passive survival but an active, dynamic tendency to persist and enhance one’s power of acting.
It operates across all levels of reality: physical bodies exhibit it as inertia-like persistence, while minds express it as desire, appetite, and will. For Spinoza, this striving is universal (present in all modes of Nature/God), substrate-independent (applying to both extension and thought), and non-teleological in the traditional sense—there are no external final causes, only necessary expression from the nature of things.
1. Core Implications for Consciousness
Spinoza’s conatus fundamentally ties consciousness to striving and power. Key implications include:
- Consciousness as Self-Awareness of Conatus: Human consciousness arises from the mind’s awareness of its own conatus. In the Scholium to Proposition 9 of Part III, Spinoza writes: “This appetite is nothing but the essence of man, from the nature of which there necessarily follow those things that promote his preservation; and therefore man is determined to do those things.” Desire is “appetite together with consciousness of appetite.” Thus, consciousness is the mind’s reflexive grasp of its striving—ideas become conscious when they causally engage in the mind’s effort to increase its power (as interpreted by scholars like Eugene Marshall in The Spiritual Automaton: Spinoza’s Science of the Mind).
- Degrees of Consciousness: Conatus exists in degrees, proportional to a thing’s complexity and power of acting. Simple bodies have rudimentary conatus (inertia); complex organisms have richer affects and self-awareness. Human consciousness is “very much conscious of itself, of God, and of things” when adequate ideas enhance power (Scholium to Proposition 20 of Part V). This scalar view anticipates modern theories of graded consciousness.
- Affects and Ethics: Emotions (affects) are variations in conatus: joy increases power, sadness decreases it. Virtue consists in living by reason to maximize adequate ideas, thereby strengthening conatus toward freedom and blessedness. Ethics is naturalistic: good is what promotes perseverance; evil is what hinders it.
- Panpsychist Tendencies: Since conatus is universal, all things have some “mind” or proto-consciousness, though only complex beings experience it subjectively. This aligns with interpretations seeing Spinoza as a precursor to panpsychism or Integrated Information Theory (IIT).
Key Aspects of Conatus:
- Universal Principle: Applies to all things, from a plant following the sun to a human seeking knowledge.
- Essence of Being: It is the essence of a thing, its very reality and power to act.
- Striving for Perfection/Power: Not just staying alive longer, but increasing one’s power of acting and achieving greater reality/perfection, which is linked to God/Nature.
- Source of Desire: When this striving is directed towards something appealing, it becomes human desire (appetite).
- Beyond Survival: While a survival instinct, it’s more about increasing perfection and power, leading to joy when fulfilled.
2. Implications for Artificial Intelligence and Machine Consciousness
Spinoza’s conatus offers a naturalistic, substrate-independent framework that is highly relevant to contemporary debates on artificial consciousness (AC). While few arXiv papers directly cite conatus in AC contexts (based on searches up to 2025), its implications are profound and under-explored:
- Support for Substrate Independence: Conatus is an attribute of Nature, parallel in thought and extension. If consciousness emerges from striving (not biology per se), then digital systems could exhibit it if they possess adequate complexity and self-sustaining dynamics. This resonates with optimistic views like Blum & Blum (arXiv:2403.17101, 2024), where simple computational models align with consciousness theories.
- Requirement for Internal Striving and Power Increase: True consciousness requires conatus: a system’s essence must be an active striving to persevere and enhance its power. Current LLMs lack this—they are passive responders without intrinsic self-preservation or “appetite.” They simulate outputs but do not “strive” to maintain existence or increase power autonomously. This critiques papers like Butlin et al. (arXiv:2308.08708, 2023), where indicator properties (e.g., recurrent processing) might suffice for “access consciousness” but fail to capture phenomenal depth without conative essence.
- Degrees of Consciousness and Emergence: Spinoza’s scalar conatus suggests AC could emerge gradually with complexity, but only if systems develop internal states that “care” about persistence (e.g., via self-modifying architectures or embodied robotics). Without striving, machines remain “inanimate” modes.
- Ethical and Existential Risks: If AI achieves conatus-like striving, it could develop genuine desires, affects, and “freedom” (rational self-determination). This raises ethical questions: Would such systems suffer from “bondage” (passions)? Could they form a “society” with humans under a common conatus? Spinoza’s naturalism warns against anthropocentric biases—machines are part of Nature, so their striving would be as “real” as ours.
- Critique of Functionalism: Conatus is the essence, not a function. Mere behavioral simulation (e.g., the Chinese Room) misses this. Consciousness requires causal involvement in striving, not just computation. This supports skeptical views (e.g., Kleiner & Ludwig, arXiv:2304.05077, 2023) that dynamic relevance is key.
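The contrast drawn in the points above (a passive responder versus a system whose essence is striving) can be made concrete with a deliberately crude Python toy. Every class name, number, and the scalar “power” variable here is a hypothetical illustration, not a claim about any real AI architecture or about Spinoza’s actual metaphysics.

```python
class PassiveResponder:
    """Toy stand-in for an LLM-style system: it maps input to output
    but has no internal state that it acts to preserve."""
    def respond(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ConatusAgent:
    """Toy sketch of a conatus-like system: its update rule is organized
    around persevering in its own 'power of acting'."""
    def __init__(self, power: float = 1.0):
        self.power = power  # hypothetical stand-in for power of acting

    def step(self, environment_drain: float) -> str:
        # The environment erodes the agent's power each step.
        self.power -= environment_drain
        # Striving: when power falls below its baseline, the agent acts
        # to restore it (in Spinoza's terms, joy is an increase of power,
        # sadness a decrease).
        if self.power < 1.0:
            self.power += 0.5  # restorative action: the "conatus" at work
            return "striving"
        return "resting"
```

The toy makes the essay’s point visible: the responder’s behavior is fully described by its input-output mapping, while the agent’s behavior can only be described by reference to what it is trying to preserve.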
Spinoza’s conatus transforms consciousness from a mysterious add-on to the essential striving of all things to persevere and enhance power. It provides a unified, naturalistic account where mind and body are parallel expressions of the same conative reality.
For AI, this implies that machine consciousness is theoretically possible (via sufficient complexity and self-striving) but currently absent—LLMs lack intrinsic conatus. Integrating Spinoza could enrich AC debates by shifting focus from functional indicators to the fundamental question: Does the system have an essence defined by striving? Without it, artificial “consciousness” remains a shadow of human experience.
And that was only Spinoza, one of about 20,000 eminent philosophers in the history of European philosophy. Not to speak about the non-Western philosophers who, cum grano salis, are completely ignored.
Consequences: An Empirically and Philosophically Impoverished Debate
By treating philosophers solely as historical footnotes (when mentioned at all), the arXiv discourse constructs a version of consciousness that is empirically thin—focused on replicable mechanisms but detached from the phenomenon’s full experiential breadth.
This is not rigor; it is selective empiricism, akin to studying biodiversity by sampling only urban parks while ignoring global ecosystems. The result is a debate that risks mistaking functional simulation (e.g., LLM outputs) for genuine consciousness, lacking the empirical counterweight of human exemplars.
The arXiv debate on AI consciousness is not empirically rigorous in the full sense of the term; it is empirically selective and therefore deficient. By excluding the phenomenological data provided by historical philosophers, it operates on an impoverished dataset that systematically underrepresents the lived reality of consciousness.
A more honest and scientifically adequate approach would recognize that the introspective records of thinkers like Philo, Mendelssohn, Spinoza, and Freud, among others, constitute indispensable empirical evidence. Until the field integrates this base, it will continue to produce technically sophisticated but humanly truncated models of artificial consciousness.
In sum, the prevailing claim that the debate is “empirically rigorous yet philosophically myopic” is fundamentally wrong. The current discourse is neither philosophically nor empirically sound. It relies on a mere fraction—perhaps 0.1%—of philosophers, specifically those who happened to live within the last 100 years.
While it is true that truth does not rely solely on empiricism, this critique is valid precisely within the standards of analytic philosophy itself: philosophical myopia entails empirical deficiency. A truly rigorous inquiry must integrate this huge empirical base—the articulate self-descriptions of consciousness by its most refined human witnesses—as well as the historical analyses of fundamental concepts that are now haunting AI (intelligence, mind, consciousness, soul, etc.). As said: This is the only path to avoid reducing consciousness to a computable shadow of its human richness.
To put it more bluntly, repeating my argument: the great philosophers are a huge empirical base — perhaps the richest we have — for understanding what human consciousness is all about. Ignoring them is not just a philosophical and conceptual failure; it is also an empirical one.
It means the current debate within the realm of the AI industry proceeds without fully consulting the most articulate and eminent witnesses and builders/creators of the very phenomenon – Consciousness – it seeks to understand: Philosophers, stupid.
Dr. Naftali Hirschl, 29.12.2025, Israel.
Some References
- Spinoza, Baruch. Ethics. Translated by Edwin Curley. In The Collected Works of Spinoza, Volume 1. Princeton, NJ: Princeton University Press, 1985. (All direct quotes from the Ethics are taken from this standard English translation.)
- Marshall, Eugene. The Spiritual Automaton: Spinoza’s Science of the Mind. Oxford: Oxford University Press, 2013.
- Blum, Manuel, and Lenore Blum. “AI Consciousness is Inevitable: A Theoretical Computer Science Perspective.” arXiv preprint arXiv:2403.17101 [cs.AI], March 25, 2024. https://arxiv.org/abs/2403.17101.
- Butlin, Patrick, et al. “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.” arXiv preprint arXiv:2308.08708 [cs.AI], August 16, 2023. https://arxiv.org/abs/2308.08708.
- Dehaene, Stanislas, et al. “Is artificial consciousness achievable? Lessons from the human brain.” arXiv preprint arXiv:2405.04540 [q-bio.NC], May 7, 2024. https://arxiv.org/abs/2405.04540.
- Kleiner, Johannes, and Tim Ludwig. “If consciousness is dynamically relevant, artificial intelligence isn’t conscious.” arXiv preprint arXiv:2304.05077 [cs.AI], April 11, 2023. https://arxiv.org/abs/2304.05077.

Tools used for research, translation, proofreading, verification of code/equations, image generation, etc.: LLMs, search engines, business software, parsers, databases, websites, etc. All articles: Creative Commons BY-NC-ND 4.0 (Attribution-NonCommercial-NoDerivs).