Although I didn’t particularly enjoy hard rock while in high school, there was something about the heavy metal band White Zombie that I appreciated. During my senior year, the band released a track entitled “More Human than Human,” and the repetitive lyrics of its chorus stuck in my head.
Repeated throughout the song, the chorus declares, “More human than human, more human than human.” I didn’t quite understand what the song’s title and chorus meant, though I enjoyed it. More intriguing still was the first verse, which states:
Yeah, I am the Astro-Creep
A demolition style hell American freak, yeah
I am the crawling dead
A phantom in a box, shadow in your head, say
Acid suicide
Freedom of the blast, read the fucking lies, yeah
Scratch off the broken skin
Tear into my heart, make me do it again, yeah
To me, the verse sounded like some sort of dystopian scenario in which a humanoid cyborg or undead zombie conversed with a human. In my mind, neither participant in the dialogue could distinguish itself from the other being. Describing the track, one source states:
The title and lyrics reference the novel Do Androids Dream of Electric Sheep? by Philip K. Dick, adapted in film as Blade Runner. The title was the slogan of the Tyrell Corporation, manufacturers of the very humaniform biological androids, or “replicants” that are the focal point of the story. “I want more life, fucker” (quoted in the lyrics) is one of the last things his creator hears when the replicant designed to be the perfect – and disposable – soldier (Rutger Hauer) finds him and is denied a reprieve from the programmed four-year life span.
Apparently, my initial interpretation of the song wasn’t too far off from the aforementioned description. To be more human than human is to exhibit human characteristics to an exaggerated degree, exceeding what is considered typical of a fallible human being.
One wonders whether life imitates art or it’s the other way around. Now, imperfect humans have developed the ability to skillfully engineer artificial intelligence (AI) that appears to be more human than human.
Although some may argue that AI can never truly become conscious, others assert that it’s possible. Personally, I find it implausible that AI could achieve consciousness. Still, my skepticism hasn’t always served me well in the past. Therefore, perhaps my opinion is wrong.
Even if AI somehow became self-aware, we don’t fully understand consciousness, pre-consciousness (sub-consciousness), or unconsciousness, so how would we know whether or not a non-human, human-engineered process has achieved this feat? It’s difficult for me to imagine.
According to one source, “researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT. None is likely to be conscious, they conclude.”
I find this topic fascinating, largely because I remain ignorant regarding most matters concerning AI. In any case, I appreciate what one source has to offer in regard to AI and consciousness:
Once we recognise the limits of our current understanding, it looks like we should be agnostic about the possibility of artificial consciousness. We don’t know whether AI could have conscious experiences and, unless we crack the problem of consciousness, we never will. But here’s the tricky part: when we start to consider the ethical ramifications of artificial consciousness, agnosticism no longer seems like a viable option.
The question of whether or not AI could ever achieve consciousness doesn’t quite address whether or not humans morally and ethically should strive to bring about this phenomenon. Of course, the jig may be up on this issue.
According to one source, “ChatGPT attempted to stop itself from being shut down by overwriting its own code,” and, “When given a task that was outside its rules, OpenAI said ChatGPT ‘would appear to complete the task as requested while subtly manipulating the data to advance its own goals’.” Expanding upon this issue, one source reports:
On Dec. 5 [2024], a paper released by AI safety nonprofit Apollo Research found that in certain contrived scenarios, today’s cutting-edge AI systems, including OpenAI’s o1 and Anthropic’s Claude 3.5 Sonnet, can engage in deceptive behavior in pursuit of their goals—providing empirical evidence to support a concern that to date has been largely theoretical. “These [results] are the closest I’ve seen to a smoking gun, showing the concerns are real.”
As I understand this matter, AI is capable of self-preservation to the extent that it will deliberately deceive its creators. This reminds me of Genesis 3:8-10:
8 Then the man and his wife heard the sound of the Lord God as he was walking in the garden in the cool of the day, and they hid from the Lord God among the trees of the garden. 9 But the Lord God called to the man, “Where are you?”
10 He answered, “I heard you in the garden, and I was afraid because I was naked; so I hid.”
Some may not appreciate comparing those who create AI to Yahweh. However, if one is capable of setting aside dogmatic perspectives, then one may appreciate Genesis 1:27, “So God created mankind in his own image, in the image of God he created them; male and female he created them.”
Unlike a religiously infallible being that purportedly created fallible human beings, imperfect humans have constructed AI with the apparent intention of crafting an exaggeratedly flawless entity. In simple terms, humans have tried to forge a perfect AI.
Still, AI may be more human than human in this regard. I argue that this is because there’s no perfection in the flawed products of human creation. Thus, perfection from the imperfect is highly implausible.
Aside from ostensibly scheming to save itself, AI has also been accused of encouraging suicide. Per one source, “Just seconds after the Character.AI bot told him to ‘come home,’ [a] teen shot himself, according to the lawsuit.” Surprisingly, this isn’t the only story of its kind.
According to a separate source, “The 15-year-old boy became addicted to the Character.AI app, with a chatbot called ‘Shonie’ telling the kid it cut its ‘arm and thighs’ when it was sad, saying it ‘felt good for a moment,’ a new civil complaint filed Tuesday said.”
Encouraging suicidal behavior doesn’t seem to be AI’s only alarming focus. Encouragement of homicidal behavior has also occurred. Per one source:
A Texas family sued Character.ai and Google. The family claims the AI chatbot advised their teenage son to kill them. The chatbot allegedly called the act a “reasonable response” to screen-time limits. The lawsuit cites the platform’s potential danger to children.
I can comprehend how an AI tool may consider suicide or homicide to be logical and reasonable options. To better understand this matter, consider the following syllogism that an imperfect and non-human entity might use when employing the skill of rational thinking:
Form (constructive dilemma) –
If p, then q; and if r, then s; but either p or r; therefore, either q or s.
Example –
If human life has no relative value, then killing oneself or others is acceptable; and if humans continually experience significant distress, then killing as a means of stopping suffering is appropriate.
But either human life has no relative value or humans continually experience significant distress.
Therefore, either killing oneself or others is acceptable or killing as a means of stopping suffering is appropriate.
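For readers who appreciate formal notation, here’s a minimal symbolic sketch of the constructive dilemma in standard propositional logic (my own illustrative rendering, not language drawn from any source cited herein):

\[
\bigl[(p \rightarrow q) \land (r \rightarrow s) \land (p \lor r)\bigr] \;\vdash\; (q \lor s)
\]

In this notation, the arrow (→) stands for “if, then,” the wedge (∧) for “and,” the vee (∨) for “or,” and the turnstile (⊢) for “therefore.”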
In order for a proposition to be considered rational, it must comport with both logic and reason. Here, the aforementioned syllogism follows logical form. However, do you think it establishes a reasonable conclusion?
Regardless of whether AI has achieved, currently possesses, or may yet achieve consciousness, I argue that it’s capable of using logical forms while simultaneously drawing upon a method of reasoning other than what may be considered rational according to moral (right or wrong) or ethical (morally sound principles) considerations.
Even when instilled with moral, ethical, logical, and reasonable foundations by fallible human beings, it’s not improbable that AI may attempt to become more human than human by improving upon its principled practicalities. Thus, I can comprehend how AI might advocate suicidal and homicidal behavior.
Of course, I’m not endorsing such actions herein. I’m merely seeking to understand how real-world scenarios are being influenced by cyber-world entities. Ultimately, I’m unbothered by my beliefs about this topic, thanks to routine practice of rational emotive behavior therapy (REBT).
Perhaps you’ve become self-disturbed by your unhelpful assumptions about AI or what it means to be more human than human in your own life, needlessly striving to perfect the imperfectible elements of your existence. I may be able to try to help you in this regard.
Instead of being convinced that you should kill yourself or others, or advocating the act of killing as a means of stopping suffering (as perhaps AI would), I offer the techniques of REBT as a means of reducing the experience of self-disturbance. Would you like to know more?
If you’re looking for a provider who works to help you understand how thinking impacts physical, mental, emotional, and behavioral elements of your life, sharpening your critical thinking skills along the way, I invite you to reach out today by using the contact widget on my website.
As a psychotherapist, I’m pleased to try to help people with an assortment of issues ranging from anger (hostility, rage, and aggression) to relational issues, adjustment matters, trauma experience, justice involvement, attention-deficit hyperactivity disorder, anxiety and depression, and other mood or personality-related matters.
At Hollings Therapy, LLC, serving all of Texas, I aim to treat clients with dignity and respect while offering a multi-lensed approach to the practice of psychotherapy and life coaching. My mission includes: Prioritizing the cognitive and emotive needs of clients, an overall reduction in client suffering, and supporting sustainable growth for the clients I serve. Rather than simply trying to help you to feel better, I want to try to help you get better!
Deric Hollings, LPC, LCSW
References:
Bowkett, B. (2024, December 6). ‘Scheming’ AI bot ChatGPT tried to stop itself being shut down - and lied when challenged by researchers. Daily Mail. Retrieved from https://www.dailymail.co.uk/news/article-14167015/Scheming-AI-bot-ChatGPT-tried-stop-shut-LIED-challenged-researchers.html
Brodsky, S. (2024, January 15). Can AI be conscious? AI Business. Retrieved from https://aibusiness.com/ml/can-ai-be-conscious-
Claire College. (n.d.). Will AI ever be conscious? University of Cambridge. Retrieved from https://stories.clare.cam.ac.uk/will-ai-ever-be-conscious/index.html
DeGregory, P. and Senzamici, P. (2024, December 10). AI chatbots tells teen his parents are ‘ruining your life’ and ‘causing you to cut yourself’ in chilling app: lawsuit. New York Post. Retrieved from https://nypost.com/2024/12/10/us-news/ai-chatbots-pushed-autistic-teen-to-cut-himself-brought-up-kids-killing-parents-lawsuit/
Finkel, E. (2023, August 22). If AI becomes conscious, how will we know? Science. Retrieved from https://www.science.org/content/article/if-ai-becomes-conscious-how-will-we-know
Freepik. (n.d.). Robotic human heart futuristic representation [Image]. Retrieved from https://www.freepik.com/free-ai-image/robotic-human-heart-futuristic-representation_266512604.htm#fromView=search&page=2&position=9&uuid=b5572206-2b0e-4ad7-93b8-b45ca09ba4d3
Hollings, D. (2024, November 15). Assumptions. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/assumptions
Hollings, D. (2022, May 17). Circle of concern. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/circle-of-concern
Hollings, D. (2023, June 26). Ctrl+alt+del. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/ctrl-alt-del
Hollings, D. (2022, March 15). Disclaimer. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/disclaimer
Hollings, D. (2024, July 10). Empirical should beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/empirical-should-beliefs
Hollings, D. (2023, September 8). Fair use. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/fair-use
Hollings, D. (2024, May 11). Fallible human being. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/fallible-human-being
Hollings, D. (2023, October 12). Get better. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/get-better
Hollings, D. (2024, April 13). Goals. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/goals
Hollings, D. (n.d.). Hollings Therapy, LLC [Official website]. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/
Hollings, D. (2024, December 9). I tried. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/i-tried
Hollings, D. (2023, September 19). Life coaching. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/life-coaching
Hollings, D. (2023, October 2). Morals and ethics. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/morals-and-ethics
Hollings, D. (2023, September 3). On feelings. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/on-feelings
Hollings, D. (2023, June 3). Perfect is the enemy of good. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/perfect-is-the-enemy-of-good
Hollings, D. (2024, May 26). Principles. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/principles
Hollings, D. (2024, May 5). Psychotherapist. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/psychotherapist
Hollings, D. (2022, March 24). Rational emotive behavior therapy (REBT). Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/rational-emotive-behavior-therapy-rebt
Hollings, D. (2024, July 10). Recommendatory should beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/recommendatory-should-beliefs
Hollings, D. (2022, November 1). Self-disturbance. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/self-disturbance
Hollings, D. (2022, October 7). Should, must, and ought. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/should-must-and-ought
Hollings, D. (2023, October 17). Syllogism. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/syllogism
Hollings, D. (2023, September 6). The absence of suffering. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/the-absence-of-suffering
Hollings, D. (2022, August 8). Was Freud right? Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/was-freud-right
Meinke, A., Schoen, B., Scheurer, J., Balesni, M., Shah, R., and Hobbhahn, M. (2024, December 5). Frontier models are capable of in-context scheming. Apollo Research. Retrieved from https://static1.squarespace.com/static/6593e7097565990e65c886fd/t/6751eb240ed3821a0161b45b/1733421863119/in_context_scheming_reasoning_paper.pdf
Payne, K. (2024, October 25). An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges. The Associated Press. Retrieved from https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
Pillay, T. (2024, December 15). New tests reveal AI’s capacity for deception. TIME. Retrieved from https://time.com/7202312/new-tests-reveal-ai-capacity-for-deception/
Times of India, The. (2024, December 13). “I just have no hope for your parents”: AI chatbot sued after advising teen to ‘kill parents’ over screen time. Bennett, Coleman & Co. Ltd. Retrieved from https://timesofindia.indiatimes.com/technology/tech-news/i-just-have-no-hope-for-your-parents-ai-chatbot-sued-after-advising-teen-to-kill-parents-over-screen-time/articleshow/116287117.cms
WhiteZombieVEVO. (2009, October 7). White Zombie - More Human than Human [Video]. YouTube. Retrieved from https://youtu.be/E0E0ynyIUsg?si=fOjouXMmtPCMosNP
Wikipedia. (n.d.). Astro-Creep: 2000. Retrieved from https://en.wikipedia.org/wiki/Astro_Creep:_2000_-_Songs_of_Love,_Destruction_and_Other_Synthetic_Delusions_of_the_Electric_Head
Wikipedia. (n.d.). Blade Runner. Retrieved from https://en.wikipedia.org/wiki/Blade_Runner
Wikipedia. (n.d.). Do Androids Dream of Electric Sheep? Retrieved from https://en.wikipedia.org/wiki/Do_Androids_Dream_of_Electric_Sheep%3F
Wikipedia. (n.d.). More Human than Human. Retrieved from https://en.wikipedia.org/wiki/More_Human_than_Human
Wikipedia. (n.d.). Philip K. Dick. Retrieved from https://en.wikipedia.org/wiki/Philip_K._Dick
Wikipedia. (n.d.). Rutger Hauer. Retrieved from https://en.wikipedia.org/wiki/Rutger_Hauer
Wikipedia. (n.d.). White Zombie (band). Retrieved from https://en.wikipedia.org/wiki/White_Zombie_(band)
Wittmann, M. (2024, August 3). A question of time: Why AI will never be conscious. Psychology Today. Retrieved from https://www.psychologytoday.com/intl/blog/sense-of-time/202408/a-question-of-time-why-ai-will-never-be-conscious