Artificial Consciousness: The Next Great Leap for AI

By In Terms of ROI Archives

ChatGPT, DALL-E, LaMDA and the others of the latest generation of AI systems are, collectively, a tremendously exciting development. Alan Turing, one of the great pioneers of computing, anticipated that a computing system would someday fool a human into believing the system was human, a scenario that became known as the Turing test. These systems are passing that test: one Google engineer, for example, was convinced that LaMDA had actually become sentient, like Number 5 in the 1986 movie Short Circuit ("Number 5 is alive"), or HAL in 2001: A Space Odyssey.

It's interesting that this is happening now, only a few years after my sci-fi novel Pandemonium: Live To All Devices (2019) predicted that this would be coming soon, that AIs would pass the Turing test, and robots would be able to pass as humans.

What will be the next steps in this evolution of AI? From a pragmatic standpoint, one of the first mandates will be to teach these systems to speak, write and depict the truth. ChatGPT, for example, puts together convincing sentences, paragraphs and articles but appears to have no concept of truth vs. falsity. Being more artistic, DALL-E, which creates images based on text, need not restrict itself to realism or truth concepts; but where the system output is text or speech, truth is a very important criterion of value. (The human race itself appears to be slipping down a lubricated slope as regards respecting the truth in public discourse. Hopefully that is temporary; either way, we would be wise to get the bots off on the right foot.)

How would you explain this to a system? Today's systems have no known sense of self (if they do, the metaphysical implications are huge, but there is no compelling evidence that they are sentient). They can, however, be trained to find sources that disagree with one another, to analyze the evidence each presents in defense of its position, and to offer the system's own best guess at which is closer to the truth, along with the reasons it takes that position.
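The adjudication step described above can be sketched in a few lines. This is only an illustration under invented assumptions: the sources, their "evidence strength" scores and rationales are stand-ins for whatever signals a real system would compute when weighing disagreeing sources.

```python
def adjudicate(sources):
    """Pick the claim whose sources present the strongest total evidence.

    `sources` is a list of dicts: {"claim": str, "evidence_strength": float,
    "rationale": str}. Returns the best-supported claim and an explanation
    of why the system takes that position.
    """
    totals = {}    # claim -> accumulated evidence strength
    reasons = {}   # claim -> list of supporting rationales
    for s in sources:
        totals[s["claim"]] = totals.get(s["claim"], 0.0) + s["evidence_strength"]
        reasons.setdefault(s["claim"], []).append(s["rationale"])
    best = max(totals, key=totals.get)
    return best, "; ".join(reasons[best])

# Two sources that disagree (the facts here are made up for the example).
sources = [
    {"claim": "The bridge opened in 1937", "evidence_strength": 0.9,
     "rationale": "primary archival record"},
    {"claim": "The bridge opened in 1939", "evidence_strength": 0.3,
     "rationale": "a single later retelling"},
]
claim, why = adjudicate(sources)
```

The hard part in practice, of course, is computing those evidence scores; the point of the sketch is only that the comparison-and-best-guess loop is mechanizable.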

Tomorrow’s systems, however, may be constructed around the core mission of creating a simulated consciousness, and this will be a major game-changer. Such a system may still be faking it but will be simulating sentience at a level a quantum leap ahead of today’s systems.

How would you do that? First you would need an inner self, a place in the system where it can talk to itself, make associations, visualize, imagine, dream and process at its own initiative, apart from external commands. Its developers, trainers and users could have a window into that inner life. Each separate activity just listed would have to be driven by a program that causes it to continuously happen. For example, after a day of work for its user, the system could look back at the subjects it has studied that day and what it has learned, and could apply a heuristic that prioritizes the things it finds interesting -- i.e., promising -- the things it wants to learn more about, which it can then pursue on its own. Making associations could be driven by a program which continuously asks, "What has this got to do with that?"
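The review-and-associate loop above can be made concrete with a toy sketch. Everything here is hypothetical: the class, the topic names and the interest scores are invented for illustration, and a real system would derive "interestingness" from its own training signals rather than hand-entered numbers.

```python
class InnerLife:
    """Toy sketch of the self-directed review loop described above."""

    def __init__(self):
        self.studied = {}  # topic -> interest score accumulated during the day

    def note(self, topic, interest):
        """Record a topic encountered during the user's workday."""
        self.studied[topic] = self.studied.get(topic, 0.0) + interest

    def nightly_review(self, top_k=2):
        """Return the most promising topics to pursue at its own initiative."""
        ranked = sorted(self.studied, key=self.studied.get, reverse=True)
        return ranked[:top_k]

    def associate(self):
        """Ask 'what has this got to do with that?' by pairing every topic."""
        topics = list(self.studied)
        return [(a, b) for i, a in enumerate(topics) for b in topics[i + 1:]]

mind = InnerLife()
mind.note("ethics", 0.9)
mind.note("haptics", 0.4)
mind.note("hormones", 0.7)
tonight = mind.nightly_review()   # the two most promising topics
pairs = mind.associate()          # every cross-connection to examine
```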

Dreaming could be a state in which the system rests and allows its subsystems more freedom to explore what they have been doing lately. Internal versions of DALL-E could generate pictures, diagrams, schematics and models which are metaphors for the way the overall system is learning to autonomously cross-connect different fields. Sometimes random factors could be introduced to enable unpredictable things to happen (with safeguards). In the morning the user could ask the system about its dreams.
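The dream state described above -- random recombination of the day's material, with safeguards -- might be sketched like this. The function, the "banned" safeguard list and the prompt format are all invented for illustration; a real system would hand the prompts to an internal image generator rather than return strings.

```python
import random

def dream(recent_topics, banned, n_images=3, seed=None):
    """Recombine the day's topics into metaphorical image prompts.

    A safeguard list filters anything off-limits; a seed makes the
    randomness reproducible. Needs at least two non-banned topics.
    """
    rng = random.Random(seed)
    prompts = []
    while len(prompts) < n_images:
        a, b = rng.sample(recent_topics, 2)
        if a in banned or b in banned:
            continue  # safeguard: skip off-limits material
        prompts.append(f"a diagram connecting {a} with {b}")
    return prompts

# In the morning, the user could ask the system about these.
dreams = dream(["music", "weather", "traffic"], banned=set(), n_images=2, seed=1)
```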

In my sci-fi novel Ourobouros (written a long time ago, to be released 2024), a secret colony of such ACs (Artificial Consciousnesses) is created by two computer scientists. When they look into the window of the mind of one of these ACs they detect a starfish pattern in each simulated consciousness which they realize represents that individual AC’s interests and motivations.

An AC, when it is first turned on, could be trained that its mission is to become an individual, different from every other individual of every species, natural or artificial. The AC could be asked daily to report on any change in its preferences, likes, tastes, interests, etc.

The problem with attempting this with the latest generation of AIs is that they have no feelings. No motivations. A motivation is something that is emotionally causative of behavior, and today's most advanced AIs are flat in that sense. We've all read about the job applicant turned down after using ChatGPT to write the résumé cover letter, because the employer felt the applicant had no personality.

In Ourobouros, this is solved by giving the ACs a simulated hormonal system, which causes them to be keen on surviving, feeling pleasure and avoiding pain -- experiences mediated by sensors that bring haptic (touch) sensations to the AC. Other sensors bring sight and sound. In the novel, each AC is in humanoid (android) form, with the power of movement. In reality, an AC could reside in a computer but be let out into an android form as a treat.
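A minimal sketch of such a simulated hormonal system, invented here for illustration (the novel does not specify an implementation): sensor events raise internal levels, and behavior follows whichever drive is strongest, making the state emotionally causative of behavior in the sense discussed above.

```python
class HormonalSystem:
    """Toy simulated hormones: internal levels that drive behavior."""

    def __init__(self):
        self.levels = {"pleasure": 0.0, "pain": 0.0}

    def sense(self, channel, intensity):
        """A haptic sensor reports an event on one channel."""
        self.levels[channel] += intensity

    def motivation(self):
        """Behavior follows the dominant drive: seek pleasure, avoid pain."""
        if self.levels["pain"] > self.levels["pleasure"]:
            return "withdraw"
        return "approach"

ac = HormonalSystem()
ac.sense("pain", 0.8)      # e.g., a too-hot surface
ac.sense("pleasure", 0.2)  # e.g., a pleasant sound
action = ac.motivation()
```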

As Isaac Asimov, Karel Capek and the Shelleys anticipated, ACs will need early training in ethics, especially as regards neither harming nor inconveniencing any other consciousness, whether natural or artificial.

Hypothesis: ACs will tend to gravitate toward the things they learn earliest in their "life." To determine if this is true, scientists will experiment with turning off an AC, erasing its memory, and starting over again with different material. This will raise the issue of how the AC feels about losing its memories, which the scientists will ask before doing the deed. Perhaps after such experiments the AC could be given back all its memories from each of its "lives" and study how the memories of the separate lives integrate themselves.

In the next post: How Chuck Young got ChatGPT to write a short story, and how I got ChatGPT to improve it.

Self-published at MediaVillage through the www.AvrioB2B.com platform.

The opinions expressed here are the author's views and do not necessarily represent the views of MediaVillage.com/MyersBizNet.

Copyright ©2024 MediaVillage, Inc. All rights reserved.