Tuesday 16 April 2019

Conscious Artificial Intelligence

How could AI be programmed to replicate, or improve upon, the subconscious and conscious reaction methods of a human brain?

In my last post, (AI)motion, I mentioned how emotions could be programmed into an Artificial General Intelligence (AGI) to give it reinforcement triggers, which would be associated with memories of the experiences and information it gains. The reinforcement triggers would be preset for generalised circumstances: situations it should avoid or pursue, using negative or positive reinforcement respectively. On its own, though, this function would produce subconscious, not necessarily conscious, memory function.
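As a rough sketch of that idea, a stored memory could carry a reinforcement value assigned by a preset trigger. The names here (Memory, REINFORCEMENT_TRIGGERS, record_experience) are purely hypothetical, chosen to illustrate the concept rather than describe any real implementation:

    from dataclasses import dataclass

    # Preset, generalised triggers: situations to avoid get negative values,
    # situations to pursue get positive values.
    REINFORCEMENT_TRIGGERS = {
        "physical_damage": -1.0,   # generalised situation the AGI should avoid
        "goal_progress": +1.0,     # generalised situation the AGI should pursue
    }

    @dataclass
    class Memory:
        factors: set            # features of the experienced situation
        outcome: str            # which preset trigger fired, if any
        reinforcement: float    # value the trigger attached to this memory

    def record_experience(factors, outcome):
        """Store an experience together with the reinforcement its trigger assigns."""
        return Memory(set(factors), outcome, REINFORCEMENT_TRIGGERS.get(outcome, 0.0))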

In my second-to-last post, Conscious Software Updated, I argued that the most effective reaction method for a human mind in modern life is conscious thought. With the high number of variables and complicated circumstances in human society, using conscious decision-making as the default reaction method for new circumstances seems potentially the most effective approach. The subconscious can still be applied to speed up future situations involving the same relevant factors, after conscious analysis has initially determined the most accurate reaction. How could this method of function be intentionally built into an AGI?
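That ordering could be sketched as follows, again with hypothetical names: new circumstances go through conscious analysis by default, and the resulting reaction is cached so the same circumstances can later be handled by a faster subconscious path:

    cached_reactions = {}   # subconscious store: frozenset of factors -> reaction

    def react(factors, conscious_analysis):
        """Default to conscious analysis for new circumstances; reuse cached results."""
        key = frozenset(factors)
        if key in cached_reactions:              # familiar circumstances: fast subconscious path
            return cached_reactions[key]
        reaction = conscious_analysis(factors)   # new circumstances: slower conscious analysis
        cached_reactions[key] = reaction         # consolidate so the next occurrence is subconscious
        return reaction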

These hypothetical processing methods should be potentially applicable to programming an AGI. In a post from over a year ago, Key Concept to Create General AI, I hypothesised methods and functions for programming general AI; I can now attempt to apply the understanding of the concepts I have philosophised about since then. An AI with general intelligence would need methods of accessing memories of information relevant to new circumstances. Using human conscious analysis as a model, the AGI could evaluate any situation it is in and compare that situation with circumstances involving similar factors, which it has saved in its memory.

Granted, the technicalities of programming it this way, or of giving the AGI sufficient memory storage, might be complicated and beyond current technology's capabilities, but this is a philosophical understanding, and the general concepts involved in the method of function should hypothetically be applicable (as so often, concepts are replicable).

If an AGI were programmed for basic (compared to conscious) subconscious reinforcement triggers associated with memories, the difference needed to bump the reaction method up to conscious function could hypothetically be the quantity of memories it accesses at the moment of reaction. Subconscious function would basically mean accessing the single memory that most resembles the current circumstances and has the most prevalent reinforcement associated with it, or the memory path that has been used most frequently.
For example, an AGI comes across a basketball, and the single memory that resembles the current circumstances and has the most urgent reinforcement trigger associated with it is of a basketball striking the AGI and causing damage. That basic memory-access function would likely cause the AGI to avoid the basketball in its new, current scenario.
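A minimal sketch of this subconscious access, under the same illustrative assumptions as before, would pick the single stored memory that most resembles the current circumstances (breaking ties by the strength of its reinforcement) and react on that one memory alone:

    from dataclasses import dataclass

    @dataclass
    class Memory:
        factors: set           # features of the remembered situation
        reinforcement: float   # negative = harmful outcome, positive = beneficial

    def subconscious_reaction(current, memories):
        """React on the single most-resembling memory with the strongest reinforcement."""
        best = max(memories, key=lambda m: (len(m.factors & current), abs(m.reinforcement)))
        return "avoid" if best.reinforcement < 0 else "pursue"

    # The basketball example: the only resembling memory is of a basketball
    # striking the AGI, so the subconscious reaction is avoidance.
    memories = [Memory({"basketball", "fast_moving_object", "impact"}, -1.0)]
    print(subconscious_reaction({"basketball"}, memories))   # -> avoid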

If the AGI were programmed to access more memories involving the relevant factors within the circumstances, it could potentially use conscious thought. If it were programmed to take the individual factors within its current situation and access further memories of cause and effect relevant to those factors, it could come up with a more accurately beneficial (in context) reaction to the circumstances.
For example, it could take the factor of the basketball and access memories of concepts it has learned: that a basketball is a similar object to other sports balls. It could then access memories of cause and effect about how sports balls function and consciously determine that the basketball in itself is not an object that necessarily needs to be avoided (despite the memory with prevalent reinforcement for avoidance). Additionally, it could access memories of cause-and-effect concepts from the laws of physics, in which a fast-moving object transfers its energy to the object it strikes.
After accessing these memories, it could determine that what caused the negative reinforcement in the past situation was the factor of a fast-moving object striking it, rather than the factor of the basketball itself. Accessing these additional memories through conscious thought could allow the AGI to react more beneficially to its current situation, by outranking its initial basic memory access (which would have urged it to avoid the circumstances).
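That conscious access could be sketched, again with hypothetical names, as breaking the situation into factors and consulting remembered cause-and-effect concepts about each one, so the past harm is attributed to the fast-moving object rather than to the basketball:

    # factor -> reinforcement attributable to that factor on its own,
    # drawn from remembered cause-and-effect concepts
    CAUSE_EFFECT = {
        "basketball": 0.0,            # sports balls are not harmful in themselves
        "fast_moving_object": -1.0,   # fast objects transfer energy on impact
    }

    def conscious_reaction(current):
        """Weigh the cause-and-effect contribution of each factor actually present."""
        total = sum(CAUSE_EFFECT.get(factor, 0.0) for factor in current)
        return "avoid" if total < 0 else "pursue"

    print(conscious_reaction({"basketball"}))                         # -> pursue (stationary ball)
    print(conscious_reaction({"basketball", "fast_moving_object"}))   # -> avoid (incoming ball)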

Using positive and negative reinforcement triggers associated with memories of beneficial or harmful results could be one approach to programming an AGI. But to give the AGI more beneficial reactions to new situations, it may be possible to program it for conscious memory access, by allowing it to access more memories of more detailed connections of cause and effect (involving the relevant factors). The key concept to apply to an AI, to make it significantly more intelligent, could very well be human conscious thought. Applying this concept could allow a Conscious Artificial Intelligence (CAI).
