AI Consciousness and Methodological Agnosticism
Although it is still early in the development of AI, the field's rapid progress over the last few years has sparked discussion of AI consciousness. The debate can easily become heated on both sides, but it is by no means a new dilemma. The question of AI consciousness inherits all of the major difficulties of the classical problem of other minds in epistemology, the branch of philosophy concerned with the nature of knowledge. The problem of other minds asks, in essence: "if I can only observe the behavior of others, how can I really know they have a mind at all?"1 Subjective experience is directly accessible only to the entity experiencing it, and even in humans it cannot be reliably measured. We accept that humans are generally conscious because we believe ourselves to be conscious, but how do we determine the extent to which consciousness applies to animals, plants, objects, and so on? Consequently:
- There is no operational definition of consciousness.
- We cannot determine when consciousness begins.
- The nature of consciousness, whether it is binary or continuous, singular or multiple, remains unresolved.
These uncertainties create a structural epistemic blind spot that directly affects efforts to implement AI safety. Any attempt to treat consciousness as a safety variable would require speculative assumptions, which risk distorting decision-making in critical contexts.