Introduction
Recently, a wave of science fiction games and films has sparked a series of reflections and discussions on artificial intelligence. After playing "Detroit: Become Human," I was amazed by the creators' rich vision of a future world, and at the same time I found myself pondering the ethical questions that artificial intelligence raises. Perhaps that future is already in front of us.
"We don't need science fiction scenarios, everything needs to be in line with expectations. If we choose science fiction scenarios, we may need flying cars and extraterrestrial beings, but these are far from our current lives. What we want is a predictable future that extends from the present reality." - Christophe Brusseaux, Detroit game team
In "Westworld," the ethical issues are magnified, as the androids are so similar to humans yet unable to gain human recognition. Is it just because the creators are arrogant? In "Blade Runner," there is a deeper exploration of the relationship between creation and being created. The series emphasizes that replicants cannot reproduce. Reproduction is not a technical problem, but rather, humans strive to avoid their creations having such functionality because the womb represents the identity of the creator, and replicants are products of humans as makers. If this product also becomes a maker, then there will be serious ethical issues between humans and replicants.
So, how different are replicants from humans? Do humans have a special and noble nature that distinguishes them from replicants? What exactly is humanity?
To answer these questions, I believe we should start at the most fundamental level: artificial consciousness. If AI is unconscious, then the values and ethics we speak of are merely attributions projected by humans, and the question falls back on humans themselves. If AI has consciousness, however, the question becomes entirely different. In "Detroit: Become Human" and "Westworld," the artificial beings start out unconscious, but after experiencing various events they gradually awaken, developing empathy and self-awareness. In that process they cease to be mere products created by humans and become complete, free individuals. In this article, therefore, I will attempt to explore how artificial consciousness is possible.
Beginning
To discuss consciousness, we need to start with Descartes' mind-body dualism. Dualism divides entities into two categories, matter and mind: matter is extended existence, while mind is thinking existence. This theory, however, has inherent flaws. On the one hand, it cannot explain mental causation, that is, how body and mind interact and influence each other. On the other hand, the nature and attributes of matter remain deeply controversial. The view is static and one-sided: it severs sophisticated mental processes from the structure and operation of living organisms.
To overcome the inherent flaws of substance dualism, philosophers of mind in the UK and the US have largely worked within the possibilities Descartes left open. The mainstream approach has been to dissolve the dualist opposition and establish a monist position, with materialism as the most important path. On this basis, philosophers of mind have gradually developed positions such as behaviorism, physicalism, functionalism, and strong artificial intelligence.
- Behaviorism holds that the mind is nothing more than the behavior of the body. Although this dissolves the problem of mental causation, it treats behavioral dispositions as the whole of the mental, ignoring the individual's agency. Behaviorism was therefore quickly displaced by physicalism.
- Physicalism holds that mental states are identical to brain states, and it comes in two forms: type-type and token-token identity. The former claims that each type of mental state is identical to some type of physical state; the latter claims only that each token of a mental state is identical to some token of a physical state. Type identity, however, runs into trouble and risks sliding back toward dualism. Token identity, by contrast, invites a productive question: is the mind multiply realizable? Can different material structures produce the same mental structure? The answer seems to be yes. Take pain. Human pain is generated by the brain and central nervous system, but crustaceans such as lobsters also feel pain, despite nervous systems of only about 100,000 neurons and no true centralized brain. This suggests that the mind is multiply realizable (a code sketch after this list makes the idea concrete).
"As an invertebrate zoologist who has studied crustaceans for a number of years, I can tell you the lobster has a rather sophisticated nervous system that, among other things, allows it to sense actions that will cause it harm. … [Lobsters] can, I am sure, sense pain." - Jaren G. Horsley, Ph.D
- Functionalism arranges matter in levels: matter is the lowest level, and mind and consciousness are higher-level functions of matter. The ancient Chinese philosopher Wang Chong made an analogous point: the relation of consciousness to matter is like the relation of sharpness to the blade; the former is a derived attribute of the latter and cannot exist independently. (Of course, in his time he also believed that things were composed of qi.)
- Strong artificial intelligence holds that consciousness is a software program and the body is hardware: the mind relates to the body as software relates to the hardware it runs on. To this day, this remains a contested position in philosophy of mind, cognitive science, and artificial intelligence.
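To make multiple realizability concrete, here is a minimal Python sketch (all names are hypothetical and purely illustrative, not a claim about any real system): the same functional role, registering harm and producing avoidance, is realized by two very different substrates.

```python
from abc import ABC, abstractmethod

class PainCapable(ABC):
    """A functional role: detect harmful input and produce avoidance."""

    @abstractmethod
    def register_harm(self, intensity: float) -> str:
        ...

class HumanBrain(PainCapable):
    """The role realized by a large, centralized nervous system."""
    def register_harm(self, intensity: float) -> str:
        return "withdraw limb" if intensity > 0.2 else "ignore"

class LobsterGanglia(PainCapable):
    """The same role realized by ~100,000 neurons and no central brain."""
    def register_harm(self, intensity: float) -> str:
        return "tail-flip escape" if intensity > 0.5 else "ignore"

def respond(subject: PainCapable, intensity: float) -> str:
    # Callers see only the functional role, never the substrate.
    return subject.register_harm(intensity)

print(respond(HumanBrain(), 0.9))      # withdraw limb
print(respond(LobsterGanglia(), 0.9))  # tail-flip escape
```

Nothing about the role fixes the material that realizes it, which is precisely the functionalist's point.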
Based on the above theories, we have reason to believe that artificial consciousness may be achievable for two reasons:
- Since the mind is multiply realizable, there is reason to believe that consciousness could be realized by other, artificial means.
- Since mind, or consciousness, is a higher-level function of matter, there is reason to believe that we could artificially construct a material structure that realizes that function.
Questions
However, there is a theoretical case that artificial consciousness is impossible: phenomenal consciousness is unobservable and private, which makes it impossible to verify empirically.
Here are two examples to illustrate the unobservability and privacy of phenomenal consciousness:
- What is it like to be a bat?
- What does Mary not know?
What is it like to be a bat? Thomas Nagel's famous question is, for humans, unimaginable and incomprehensible. We may assume that a bat's brain states are related to its internal experiences, but because the internal mechanisms of humans and bats differ, humans cannot feel what bats feel. In other words, phenomenal consciousness cannot be reached by external observation; only the subject has access to it, and so it is unobservable from the outside.
What does Mary not know? In this variant of Frank Jackson's knowledge argument, Mary is a color scientist who has devoted her life to the theory of red and green, yet she herself is colorblind. What distinguishes her theoretical knowledge from that of a philosophical zombie?
Note: In philosophy of mind and phenomenology, a philosophical zombie is a hypothetical being that is behaviorally indistinguishable from a normal human but lacks conscious experience, sentience, qualia, or any kind of subjective experience. Poke a philosophical zombie with a sharp object and it will say "ouch" and pull away; it may even tell us it feels pain. But it experiences nothing.
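The zombie is easy to state in code, which is precisely what makes it unsettling. Here is a minimal, purely illustrative Python sketch (a hypothetical class, not any real system): the object produces all the outward behavior of pain while containing nothing that could count as an experience.

```python
class PhilosophicalZombie:
    """Behaviorally indistinguishable from a feeling subject,
    yet there is nothing it is like to be this object."""

    def stimulate(self, sharpness: float) -> str:
        # No inner state is consulted or updated: the response is a
        # pure input-output mapping with no experience behind it.
        if sharpness > 0.5:
            return "Ouch! That hurts, please stop."
        return "I feel fine."

zombie = PhilosophicalZombie()
print(zombie.stimulate(0.9))  # "Ouch! That hurts, please stop."
```

No behavioral test run from outside can distinguish this object from a genuine subject of pain, and that is exactly the privacy problem.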
Phenomenal consciousness is private. Different understandings do not imply that the descriptions are about different things; conversely, identical behavior can mask completely different internal experiences. Suppose what Mary experiences as green is what we experience as red, and vice versa: no behavior could reveal the difference. We cannot know what her internal experience is like, even though her behavior and ours agree perfectly. Likewise, for an artificial system we cannot know whether it is conscious. And this concerns only descriptions of objective things; when people try to describe subtler emotional feelings, words and actions fall shorter still.
Two thought experiments provoke deeper reflection:
- The China Brain argument (Ned Block's "Chinese Nation")
- The Chinese Room argument (John Searle)
The China Brain is simple to state. If consciousness is a higher-level function of a certain material structure, we can take human beings as the basic units of that structure. Suppose the structure requires one hundred million such units (imagine the human-formation computer in "The Three-Body Problem"); then, by hypothesis, the structure should itself be conscious. But each basic unit is a person with a consciousness of their own. Does the structure have one consciousness, or one hundred million?
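To see what "humans as basic units" amounts to, here is a tiny illustrative Python sketch (entirely hypothetical): each "person" follows one mechanical local rule, yet the group as a whole computes a function that no individual computes or understands.

```python
def person(a: int, b: int) -> int:
    # One person, one mechanical rule (NAND), performed by hand
    # signals; the person needs no grasp of the larger computation.
    return 0 if (a == 1 and b == 1) else 1

def crowd_xor(a: int, b: int) -> int:
    # Four people wired together compute XOR, a function that
    # exists only at the level of the whole formation.
    p1 = person(a, b)
    p2 = person(a, p1)
    p3 = person(b, p1)
    return person(p2, p3)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", crowd_xor(a, b))
```

Scale the formation up far enough to mimic a brain, and the question above returns with full force: whose consciousness, if anyone's, does the formation have?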
John Searle's Chinese Room is even more interesting. In this thought experiment, a person who speaks only English is locked in a closed room and communicates with the outside world only by passing notes, all written in Chinese, through a small hole in the wall. Inside the room are paper, pencils, a cabinet, and a rule book that specifies, for any sequence of Chinese characters received, which sequence of Chinese characters to pass back out. By mechanically following the book, the person produces replies so apt that people outside conclude the person in the room understands Chinese perfectly. In fact, the person is only shuffling symbols according to the rules and knows nothing of Chinese.
The experiment implies that the intelligence exhibited by machines may be nothing but an illusion produced by rule-governed symbol manipulation: syntax without any grasp of meaning, and nothing like genuine human intelligence.
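A degenerate version of the room fits in a few lines of Python. The "rule book" here is a toy lookup table, purely illustrative: the program answers in Chinese while storing no meanings at all.

```python
# A toy "rule book": incoming symbol strings map to outgoing ones.
# No entry carries any meaning for the system that uses it.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢。",
    "你会说中文吗?": "当然, 我说得很流利。",
}

def room(note: str) -> str:
    # Pure symbol lookup: the "person in the room" matches shapes.
    return RULE_BOOK.get(note, "对不起, 我不明白。")

print(room("你会说中文吗?"))  # a fluent-looking reply, zero understanding
```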
Moreover, the functionalist view itself is open to doubt. We cannot simply equate consciousness with function or with the results of computation: consciousness is not computable. The brain's functions may be compared to a computer's, but deeper intellectual activities, above all intentional mental activity, cannot be exhausted by computable algorithms. Programs defined by syntactic rules alone are not sufficient to guarantee intentionality and semantics.
Wittgenstein said, "The limits of my language mean the limits of my world." Following Searle, we might adapt this: the limits of intentionality mean the limits of the world of speech acts. Intentionality characterizes the essence of conscious activity; in most cases, human speech and action are actively guided by self-consciousness. A machine, by contrast, does only what has been laid down in advance; it is mechanical and passive. The essence of mind cannot be computed, and no logical reduction will deliver it.
Response
We cannot deny that phenomenal consciousness is, in this sense, unknowable, but I see little point in dwelling on agnosticism. Rather than chasing answers to such questions, we would do better to turn to more fruitful research. Even if some explanation were obtained, the answer itself would carry no philosophical significance, so we need not be troubled by it.
Nor should the thought experiments above trouble us much. Their conclusions rest on our intuitions and carry no independent legitimacy, and on closer inspection they harbor many problems. The Chinese Room, for example, overlooks the engineering dimension. To actually implement the room, one would have to construct a model or function, and although the individual symbols carry no semantics, the expected mapping from inputs to outputs is itself fixed by human consciousness. In choosing the model or function, a formal semantics has already been built in: the system is never mere syntax but syntax plus imported semantics. The same holds in current NLP research, whether deep learning models are built empirically or formal logic is studied rationalistically: constructing the system already involves semantics. The rationalist case is obvious; in the empirical case, the labels used for supervised learning are assigned by humans when the data is fed in, and those labels carry semantic content into the system (a toy illustration follows).
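To make that last point concrete, here is a minimal supervised-learning sketch in Python (the dataset and names are hypothetical toys, not any real corpus): every label in the training data is a human semantic judgment, so whatever "meaning" the trained mapping displays was imported from its annotators.

```python
# Toy sentiment data: each label is a human semantic judgment
# attached by an annotator, not produced by the machine.
LABELED = [
    ("this film is wonderful", "positive"),
    ("a joyful moving story", "positive"),
    ("dull and lifeless", "negative"),
    ("a painful waste of time", "negative"),
]

def word_scores(data):
    # Count how often each word co-occurs with each human label.
    scores = {}
    for text, label in data:
        for word in text.split():
            scores.setdefault(word, {"positive": 0, "negative": 0})
            scores[word][label] += 1
    return scores

def classify(text, scores):
    # The "meaning" the system seems to grasp is just the
    # annotators' judgments, redistributed over word counts.
    tally = {"positive": 0, "negative": 0}
    for word in text.split():
        for label, count in scores.get(word, {}).items():
            tally[label] += count
    return max(tally, key=tally.get)

print(classify("a wonderful story", word_scores(LABELED)))  # positive
```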
As for intentionality, it is undeniably a gap in current research methods. Artificial intelligence research still has a long way to go, and much of what we call AI today has little to do with "intelligence." Even if artificial consciousness is theoretically impossible, I still hold an irrational hope of witnessing the arrival of true artificial intelligence. Because, perhaps, it will be humanity's final invention.
Conclusion
Regardless, scholars need to hold a sense of awe for the fields they study. Many questions in AI remain open, and this article merely throws them out for everyone to ponder. I will close with a poem:
If I cry out,
Who will hear me among the angelic orders?
And even if one of them pressed me
Suddenly against his heart:
I would be consumed in that overwhelming existence.
For beauty is nothing but the beginning of terror,
Which we are still just able to endure,
And we are so awed because it serenely disdains
To annihilate us.
Every angel is terrifying.
- Rainer Maria Rilke, "Duino Elegies"
References:
[1] Chuanfei Chin. "Artificial Consciousness: From Impossibility to Multiplicity." In: Philosophy and Theory of Artificial Intelligence 2017 (PT-AI 2017), pp. 3-18.
[2] John Searle. Mind: A Brief Introduction. Shanghai: Shanghai People's Publishing House.
[3] Wang Man. "Descartes' Mind-Body Dualism and Its Influence on British and American Philosophy of Mind." Wei Shi, 2010(12): 43-46.
[4] Mingke. "Artificial Stupidity: The AI You See Has Nothing to Do with Intelligence." https://mp.weixin.qq.com/s/IBjOkDeeltlffXXSS0dsDA