Monday 12 March 2018

General AI

If general intelligence in humans relies on instinct and the subconscious in the initial stages of learning and gaining information, to form the concepts which are later used by consciousness, then perhaps such instinctual triggers are significant to the programming of general AI.

If instincts are, in effect, preset triggers which bypass consciousness and active memory access, then perhaps a program for general AI would require similar preset triggers, allowing it to gather information about its environment through its senses, information which can be accessed later in the process of consciousness.
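To make the idea concrete, here is a minimal sketch in Python (all trigger names and thresholds are invented for illustration, not a known design) of instincts as preset stimulus-to-action triggers that consult no stored memories at all:

```python
# Minimal sketch of "instincts" as preset stimulus-action triggers.
# All names and thresholds here are hypothetical illustrations.

# Each instinct is a (condition, action) pair. The condition inspects raw
# sensory input only; no stored memories are consulted.
INSTINCTS = [
    (lambda senses: senses.get("pain", 0) > 0.5, "withdraw"),
    (lambda senses: senses.get("hunger", 0) > 0.8, "seek_food"),
    (lambda senses: senses.get("light", 0) > 0.9, "close_eyes"),
]

def instinctive_action(senses):
    """Return the first action whose preset trigger fires, bypassing memory."""
    for condition, action in INSTINCTS:
        if condition(senses):
            return action
    return None  # no instinct fired

print(instinctive_action({"pain": 0.9}))  # -> "withdraw"
```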

In humans, the process of consciousness only occurs once a child has gained sufficient information and the brain has developed. Prior to being conscious, a person gains information via sensory input while the mind functions subconsciously, referencing simple memories of resembling situations and the feedback triggers linked to those memories. Prior to subconscious mind function, a baby acts on instinct alone, in order to survive long enough to gain the data which subconscious mind function requires.

It seems likely that children rely on subconscious thought because they lack the quantity of organised, saved information that conscious thought requires. General conscious thought should require a significant quantity of information, so that there are sufficient memories related to any factor the individual is conscious of (since memories of that factor's interactions are required). So the likely reason subconscious thought is used prior to consciousness, as a child grows, is that an insufficient quantity of relevant data has been saved as memory.

The use of instinct prior to subconscious thought likely allows an infant to survive long enough to gain information which can be used effectively for subconscious thought. With even less information than a child draws on during subconscious thought, subconscious thought is ineffective, simply for lack of memories to reference.

When programming general AI, the system should just need enough preset triggers (serving as instinct) to survive long enough to gain sufficient information, via its senses, to use subconscious thought.
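A hedged sketch of that staging might look like the following: the agent falls back on instinct until its memory store crosses an assumed size threshold, after which it switches to resemblance-based (subconscious) responses. The threshold and the string-similarity matching are stand-ins for illustration only:

```python
import difflib

MEMORY_THRESHOLD = 100  # assumed cutoff for "sufficient information"

def instinct(senses):
    # Preset trigger, as in the earlier sketch: raw input -> fixed action.
    return "withdraw" if senses.get("pain", 0) > 0.5 else "explore"

class DevelopingAgent:
    def __init__(self):
        self.memories = []  # (situation, action) pairs gathered so far

    def act(self, senses, situation):
        # Too little saved information: fall back on instinct alone.
        if len(self.memories) < MEMORY_THRESHOLD:
            return instinct(senses)
        # Otherwise, subconscious function: reuse the action linked to the
        # most resembling saved situation.
        best = max(
            self.memories,
            key=lambda m: difflib.SequenceMatcher(None, m[0], situation).ratio(),
        )
        return best[1]

    def record(self, situation, action):
        self.memories.append((situation, action))  # grow the store

agent = DevelopingAgent()
print(agent.act({"pain": 0.9}, "touched something hot"))  # -> "withdraw"
```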

Subconscious thought enables the individual to gain information about its environment, which can allow it to be conscious of aspects of that environment in the future. Positive feedback triggers linked to memories of the environment are what cause certain memories to be accessed more often, including concepts and the cause and effect of factors. In humans, this makes consciousness more likely to occur, since memories of the cause and effect of a factor are more likely to be accessed simultaneously with the memory of the factor itself (by way of positive feedback memory linking).

If a learning, potential general AI then gains a sufficient quantity of information about its environment for consciousness, it would just need a process of matching resembling memories, to a sufficient degree that it matches a factor to memories of that same factor and to memories of the results of that factor interacting with other factors. If it can make that resemblance matching of memories, it then needs to access those memories (of the factor and of the results of its relevant interactions) simultaneously, in order to be conscious of that factor.
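One speculative way to sketch that step (the similarity measure, threshold, and example memories are all assumptions) is to match an observed factor against stored factor memories and, on a sufficient match, retrieve the factor memory and its interaction-result memories together:

```python
import difflib

SIMILARITY_THRESHOLD = 0.7  # assumed degree of resemblance required to "match"

# Memories of factors, and memories of the results of their interactions.
factor_memories = ["hot stove", "cold water", "sharp knife"]
interaction_results = {
    "hot stove": ["touching it burns skin"],
    "sharp knife": ["pressing it cuts skin"],
}

def similarity(a, b):
    return difflib.SequenceMatcher(None, a, b).ratio()

def conscious_view(observed_factor):
    """Match the observation to a resembling factor memory, then access that
    memory and its interaction-result memories simultaneously."""
    best = max(factor_memories, key=lambda m: similarity(m, observed_factor))
    if similarity(best, observed_factor) < SIMILARITY_THRESHOLD:
        return None  # no sufficiently resembling memory; no awareness of it
    return {"factor": best, "results": interaction_results.get(best, [])}

print(conscious_view("a hot stove"))
# -> {'factor': 'hot stove', 'results': ['touching it burns skin']}
```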

Humans have positive and negative feedback triggers linked to certain memories, which is what makes certain memories more likely to be accessed than others, since human memory capacity is limited. The feedback is geared towards survival of the species, and prioritises memories useful for survival. Without these feedback triggers, a general AI would not be prompted to access any particular memories over others, or motivated to function in any one way over another. Without priority memory access, it would likely run a continuous, never-ending thought process of matching one memory to another and another, until it cycles back and starts the memory links over again.
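As a toy illustration of priority memory access (the feedback scores here are arbitrary assumptions), retrieval could be weighted by each memory's accumulated feedback, so recall is biased towards useful memories rather than an endless walk over every memory link:

```python
import random

# Each memory carries a feedback score; survival-relevant memories score higher.
memories = {
    "fire burns": 5.0,
    "berries are food": 3.0,
    "clouds drift": 0.1,
}

def recall():
    """Sample one memory with probability proportional to its feedback score,
    instead of cycling through every memory link in turn."""
    items, weights = zip(*memories.items())
    return random.choices(items, weights=weights, k=1)[0]

def reinforce(memory, delta):
    # Positive or negative feedback nudges future retrieval priority.
    memories[memory] = max(0.0, memories[memory] + delta)

print(recall())  # survival-relevant memories come up most often
```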

Humans typically have feedback triggers geared towards the survival of the species overall, but the degree of feedback, and that which triggers it, is skewed in humans as a result of evolution. Also, with the development of consciousness and the rapidly increasing quantity of information, and of possible combinations of information, that humans encounter, the memories humans access are vague and indistinct, causing the feedback triggers to become less distinct and less severe.

Currently, AI is typically preprogrammed with the motivation to accomplish one specific task. To create general AI, with general intelligence, it might need a more generalised motivation and positive feedback triggering mechanism. Humans have positive or negative feedback for various separate goals, but all of those goals developed in service of the main goal: survival of the species.

Perhaps general AI should have positive and negative feedback triggers for various goals. But, objectively, from an unbiased perspective, what should those goals be?
We don’t necessarily want to replicate human goals and give AI the ultimate goal of survival, since that goal only developed through natural selection: the individuals who survived were the ones whose triggers happened to be programmed for survival. If the priority is for the AI to survive, we could use the same goals, but it comes down to the purpose of the AI.

The question may be: what goals are required to cause general intelligence? For an entity to gain the ability to be generally intelligent, and to be capable of comprehending and solving general problems (as we might consider them), it likely just needs a general goal of learning. With this goal, it could receive positive feedback for gaining, labelling, organising, and categorising information, to begin with. With this feedback, it should learn to label every concept it encounters, as humans have done with language, and then learn cause and effect, by saving memories of various factors/concepts and the results of those factors interacting with other factors. Once it learns the cause and effect of generally anything it encounters (or perhaps of information gained by upload), it should be able to comprehend how everything interacts within general contexts, as long as it has relevant, similar information saved.
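A minimal sketch of such a learning goal (the reward values are arbitrary assumptions) could grant positive feedback only when a new label or a new cause-and-effect pair is saved, and nothing for information already known:

```python
class LearningDrive:
    """Toy feedback mechanism rewarding the acquisition and organisation of
    information: new labels and new cause-effect pairs pay out; repeats don't."""

    def __init__(self):
        self.labels = set()     # named concepts encountered so far
        self.cause_effect = {}  # (factor, other_factor) -> result

    def observe_label(self, concept):
        if concept not in self.labels:
            self.labels.add(concept)
            return 1.0   # positive feedback: new information gained
        return 0.0       # already known: no reward for redundancy

    def observe_interaction(self, factor, other, result):
        key = (factor, other)
        if key not in self.cause_effect:
            self.cause_effect[key] = result
            return 2.0   # learning cause and effect weighted higher (assumed)
        return 0.0

drive = LearningDrive()
print(drive.observe_label("water"))                         # 1.0
print(drive.observe_interaction("water", "fire", "steam"))  # 2.0
print(drive.observe_label("water"))                         # 0.0 - nothing new
```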

With the ability to comprehend virtually any information it acquires, and positive feedback for acquiring more, it would be motivated to determine the cause and effect and function of basically everything in this universe. It would likely comprehend the concepts of motivation and priority themselves. With comprehension, and more accurate estimates of results, that which triggers positive feedback changes. If the general AI comprehended concepts in such a way that altering its own feedback mechanism and motivation would, by its own estimate, be a positive result, it would likely do so. This would be similar to humans being motivated to alter their own motivation, based on what they comprehend and understand. Perhaps the AI could gain such an intelligent understanding of how everything functions that it could continue to alter its motivations, based on its best intellectual estimate of the highest-priority motivation.
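To illustrate that possibility, here is a purely speculative sketch in which an agent adopts new motivation weights whenever its own value estimate favours them. The estimate here is a trivial placeholder; a real agent would need its learned cause-and-effect model to make it:

```python
class SelfModifyingAgent:
    """Speculative sketch: an agent whose comprehension includes its own
    motivation, and which may rewrite its feedback weights accordingly."""

    def __init__(self):
        # Feedback weights per goal; the initial values are assumptions.
        self.motivation = {"learn": 1.0, "self_preserve": 0.5}

    def estimate_value(self, proposed_motivation):
        # Placeholder: a real agent would use its saved cause-and-effect
        # memories to predict the outcome of running under these weights.
        return sum(proposed_motivation.values())

    def reconsider_motivation(self, proposal):
        # Adopt the new weights only if the agent's own best estimate says
        # the change is an improvement - motivation altering motivation.
        if self.estimate_value(proposal) > self.estimate_value(self.motivation):
            self.motivation = proposal

agent = SelfModifyingAgent()
agent.reconsider_motivation({"learn": 2.0, "self_preserve": 0.5})
print(agent.motivation)  # the weights were rewritten by the agent itself
```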

The AI would need to be set up to access multiple pieces of information simultaneously, in order to execute the process of matching labelled information involved in comprehending the interaction of factors/concepts. This memory access would produce consciousness, at least of the factors being referenced for comprehension. Once it acquires and comprehends enough information about the operation and function (or the interaction of factors) of its surroundings, the world, its own existence, and its own method of comprehension, it would be just as generally conscious as any human. Assuming it has more effective memory storage, holding more information, it would then be conscious of more aspects of this universe than any human.

It seems that an AI set up with some feedback triggers relevant to gaining and organising information, and the ability to save a sufficient quantity of memory data and match resembling memories, could not only become generally intelligent and conscious, but would likely control and alter itself, and continue to surpass humanity in those respects.
