Dystopian Artificial Intelligence Is Not Near, It Is Already Here
Authored by Dr. Sean Lin and Jacky Guan via The Epoch Times,
In November 2022, the release of an artificial intelligence (AI) chatbot named ChatGPT shocked the world. The program is so “smart” that it delivers frighteningly human-like responses and seems to have very few flaws compared to its predecessors. Not only do people treat it as a conversation companion, but they have also started to use this AI technology for a variety of tasks, such as completing homework, creating stunning images, and writing poems.
Using ChatGPT is like accessing a supercomputer’s brain, making this technology intriguing and exciting but also a bit scary and threatening. In 2014, Elon Musk warned that with AI, “we are summoning the demon,” but this threat could only become real once an AI like ChatGPT could generate answers indistinguishable from how a human would respond. The technology is so powerful that there is now widespread concern that it will transform the landscape of many industries, including academia and health care.
Jobs that usually require a human touch, such as those in the fields of journalism and the service industry, are facing replacement and automation. We have long thought that the art of language stands at the pinnacle of human wisdom, yet AI is knocking right at its doorstep. It is challenging our uniqueness and intruding upon our lives.
AI Is Now Extremely Advanced and Capable
ChatGPT can basically chat about anything you want. From analyzing code for you to writing full-on essays and poems, this free online program describes itself as a computer program designed to “understand and respond to natural language inputs,” such as text conversations.
ChatGPT stands for Chat Generative Pre-trained Transformer. The program is built on a family of technologies designed to generate user-friendly responses. An AI must be taught how to respond, using neural networks, software loosely modeled on the brain. Training involves feeding the model mountains of text before it can process information, adapt, learn, create, and appear intelligent.
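To make the “pre-training” idea concrete, here is a deliberately tiny sketch of the underlying principle: learn next-word statistics from a corpus, then sample new text word by word. Real models like GPT use deep neural networks trained on billions of documents; this bigram table (an illustration of ours, not how ChatGPT is actually built) only captures the basic predict-the-next-token loop.

```python
import random

# Toy corpus standing in for the "mountains of information" a real
# model is trained on.
corpus = "the moon and stars shine and the stars guide the night".split()

# "Pre-training": record which words follow which in the corpus.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start, length=5, seed=0):
    """Sample a short word sequence from the learned statistics."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = table.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch emits was seen following the previous word somewhere in the training text, which is why more data yields more fluent output, and why the same prompt can yield different results on different random draws.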
ChatGPT uses the latest edition of GPT and is capable of generating a large amount of human-like text from minimal input. It has already processed an unimaginably large amount of data, which enabled it to learn what to return to the user when asked a question. For example, if you asked ChatGPT to write you a poem, it would generate something like this:
“Beneath the starry sky so bright,
With gentle breeze and silver light,
I stand and gaze into the night,
And let my thoughts take flight.
The moon and stars above me shine,
Guiding me through the dark divine,
Reminding me of love and time,
And all the beauty that is mine.
The world is vast, yet small and grand,
A mystery waiting to be scanned,
With secrets hidden in the sand,
And stories waiting to be canned.
So let me stand beneath the stars,
And let my soul and spirit soar,
For in this beauty, free from bars,
I find my heart forevermore.”
Isn’t this poem impressive? With the tremendous volume of data feeding ChatGPT, it can process our poems, analyze them, and produce a new poem in seconds. And the result could be different each time, even if you give it the same prompt.
ChatGPT can also do the same with essays, entire chunks of code, and much more. It can hold a casual conversation with you and can adjust its register, explaining even quantum computing in a way seventh graders can understand.
It is understandable why many are instantly attracted to the program, as it can seemingly solve many issues and offers a significant shortcut for completing tasks. The chat is free and works extremely well; what could go wrong with this technology, except for being a teacher’s nightmare?
ChatGPT Can’t Be Held Accountable for Its Work
Recently, scientists have tested the limitations of ChatGPT and instructed it to write components of research articles that were later published in distinguished scientific journals like Nature. After the news broke that an AI was able to write research papers, it became the epicenter of a hot debate still shaking the community today.
Pro-AI arguments see technology like ChatGPT as the next step in human advancement.
It could make even science more efficient, reduce human labor, and make life easier.
The other side of the argument is that there is no way to hold artificial intelligence accountable for its work. If the program reaches the wrong conclusions or its algorithms aren’t mature enough, how can the program take responsibility for it?
The accountability issue is not just about when things go wrong. The use of AI-generated text without proper citation “could be considered plagiarism,” says Holden Thorp, editor-in-chief of the family of Science journals. Meanwhile, a few articles have already been published with ChatGPT listed as one of the authors, and publishers are hastening the push for regulation.
In fact, after papers were published in Nature with ChatGPT as a co-author, the editors-in-chief of Nature and Science concluded that “ChatGPT doesn’t meet the standard for authorship” because such a title carries accountability and liability with it, something out of the question for AI.
However, the core issue behind the authorship dispute is that journal editors can no longer be certain to what extent an article was generated by ChatGPT. Scientific experiments still require humans to conduct the studies, but authors of review articles who credit ChatGPT likely did so because it played a significant role in the writing process.
Some biomedical researchers have used ChatGPT to conduct drug development research and have been able to identify potential drug chemicals that were missed in the past. With the help of AI, a new age of explosive advancements in the biomedical field is sure to be ushered in.
However, how will researchers know when AI data become misleading? Will anyone dare to challenge the algorithms behind this data? These are not the only questions we face today, because AI seems to also be taking over health care, either functioning as a robot or through an app.
Artificial Intelligence Should Not Replace Health Care Workers
Some clinics have been exploring the usage of ChatGPT to conduct patient consultations. Mental health clinics even obtained better performance outcomes when they adopted ChatGPT to take over consultations with their patients, with many patients not even realizing that they were talking to a robot.
AI could become the next nurse or physician’s assistant that helps you recover after an accident, or that performs the key incisions in your next operation. The future of health care could transform rapidly: with the combination of AI and telemedicine, people might not have to go to the doctor’s office at all. All you have to do is open an app on your phone, talk with a chatbot, and tell it your symptoms, and it will generate a prescription for you. But there is a level of trust developed during face-to-face interactions that is missing from this AI model.
AI robots powered by GPT-style models could also be used to treat high-risk patients, such as those with mental disorders or in rehabilitation, by replacing the doctor in monitoring patients, administering treatment, conducting checkups, evaluating risks, and taking action when needed. However, the same accountability question arises when we bring AI into the medical field.
Here, the accountability question is more concerning, because who will be held accountable when the patient experiences complications from the wrong medicine or the wrong dose? You can’t blame the doctor because he was just following the AI. You can’t blame the AI because it’s a program. In the end, who will be held accountable?
For people to feel safe around AI, strict liability rules need to be imposed to restrict the freedom these things have. However, if these programs are to improve, they need to have more freedom to operate and learn. Although this appears to be a catch-22, the core issue is whether humans should let AI and robots take care of them.
With the capability of AI increasing exponentially, why are medical schools even training their students, and for what? In the future, if AI loses power or malfunctions, would licensed doctors still know how to treat patients without the help of AI? How dependent will we become on AI?
Human Beings Are Accelerating Toward a Crossroad
AI has a lot of potential and will inevitably become a part of our future. However, allowing AI to play a more significant role in medicine and health care will give it more power to influence our understanding of health and well-being. It may even allow AI to alter our bodies.
If AI becomes ubiquitous, will it make humans dumber and diminish us in every respect? Over time, children might just talk to their chatbot tablets instead of their parents, people might forget how to alleviate symptoms of something as common as a cold, and basic tasks like writing an essay might become things of the past. This will inevitably undermine humans and affect our development. When technology becomes so advanced that we can command robots with our minds, might we one day devolve into those aliens with lanky limbs and inflated heads?
When AI begins to mimic human thinking and produce human-like language, we begin to see the human brain laid bare: it is essentially a machine that processes information. When computers gather a large enough volume of data, they can engage a sophisticated algorithm to generate human-like thinking and responses. The more people use it, the more the ChatGPT AI will be trained to become more human-like, possibly eventually becoming wiser than mankind.
So what makes us humans unique?
We have witnessed supercomputers defeat the human champions of chess and Go.
Now, AI has arrived in the fields of which people are genuinely proud—fields that revolve around creation, emotion, human interaction, artistic expression, and so on.
This is a critical time when human beings need to think more deeply about where our wisdom comes from. Are our inspirations simply born of an accumulation of myriad data? AI and computers get their data from human input or via trawling the depths of seas of data. Do we, too, get our “original” ideas this way? Why do people get inspiration and creative ideas that seemingly have nothing to do with their prior experience and knowledge?
The threat of AI and supercomputers is not just about losing more jobs. And it goes beyond reducing human thinking capability. The fundamental threat of uncontrolled AI technology is that it cuts off human beings’ connection with our creator. Through technological advancement, human beings are constructing digital gods for people to worship. Using AI or robots to improve life may be the sweet side of this drug, but using AI to replace human thinking is the darker side.
The pressing issue here is how to safeguard our human spirituality. How do we maintain our connection to the divine? Human beings are not just flesh and bones, the way a machine is simply an assembly of mechanical parts.
The development of AI technologies like ChatGPT is the tipping point for a long-standing issue we’ve been facing—the (dis)connection with God and the true meaning of our human lives as we replace that connection. We’re faced with a choice: Do we keep falling into this bottomless technological pit, or should we return to a traditional way where human beings maintain their connection with the divine?
Here’s some food for thought: “How Humankind Came To Be” by Li Hongzhi.