Thursday, 11 April 2019

(AI)motion

Would an Artificial General Intelligence (AGI) need to be programmed with emotions?
How would “emotions” be programmed, or function effectively in an AGI?

In my recent post, The Notion of Emotion, I analysed the function of emotions as being, essentially, feedback triggers for the brain. Perhaps this concept of how emotions function in the brain could be replicated, with adjustments, in an AGI. For an AGI to decide on the best outcome in any given scenario, it would likely need such feedback triggers, first to guide it in an effective direction for interpreting its environment, and then to give it positive or negative reinforcement about which circumstances are most beneficial. This reinforcement, linked with memory, could allow it to make the necessary adjustments (decisions) in future, resembling circumstances.
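
As a very rough sketch of that loop, consider the following Python toy (every name in it is invented for illustration, not a real design): the agent stores the feedback each circumstance triggered, and when facing a choice, pursues the option that its memories associate with the best feedback.

```python
from collections import defaultdict

class FeedbackAgent:
    """Toy agent: feedback triggers reinforce memories of circumstances."""

    def __init__(self):
        # Each remembered circumstance keeps a running total of its feedback.
        self.memory = defaultdict(lambda: {"total": 0.0, "count": 0})

    def experience(self, circumstance, feedback):
        """Store the feedback (the 'emotion') a circumstance triggered."""
        entry = self.memory[circumstance]
        entry["total"] += feedback
        entry["count"] += 1

    def expected_feedback(self, circumstance):
        """Recall, on average, how a resembling circumstance felt before."""
        entry = self.memory[circumstance]
        return entry["total"] / entry["count"] if entry["count"] else 0.0

    def choose(self, circumstances):
        """Pursue the circumstance with the best remembered feedback."""
        return max(circumstances, key=self.expected_feedback)

agent = FeedbackAgent()
agent.experience("near_fire", -1.0)  # replicated 'pain'
agent.experience("charging", +1.0)   # replicated 'well-being'
print(agent.choose(["near_fire", "charging"]))  # -> charging
```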

I determined that emotional feedback is triggered instinctually by ranges of general circumstances. Beyond these genetically pre-set triggers, emotions have a second function: reinforcing the more specific details within circumstances after the individual has experienced a situation. Emotions reinforce the factors within experienced situations in memory, marking them to be avoided or pursued. This repeated reinforcement allows the individual to distinguish factors within circumstances more accurately than the rough genetic boundaries of factors they possess from birth.

Applying this concept of function to an AGI would mean programming it with positive or negative feedback for different ranges of general circumstances. To replicate pain, it could be pre-programmed with negative feedback for a rough range of circumstances that cause it harm, which would cause it to avoid damaging itself or being damaged. To replicate fear, it could be programmed to receive negative feedback for circumstances involving factors that are likely to harm it, perhaps causing it to "instinctively" avoid jumping into a pool, into a fire, or off a cliff, or moving in front of a large, heavy, fast-moving object. Happiness could be replicated by programming positive feedback for general circumstances involving factors that keep it well maintained and functioning properly.
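
One way to picture these pre-set triggers is as a hand-written function mapping rough ranges of sensed conditions to feedback values. Here is a minimal sketch, assuming hypothetical sensor readings and thresholds (none of which come from a real system):

```python
def innate_feedback(state):
    """Pre-programmed ('genetic') feedback for rough ranges of circumstances.

    `state` is a dict of hypothetical sensor readings; all thresholds are
    illustrative only.
    """
    # Replicated pain: negative feedback when harm is already occurring.
    if state.get("damage_rate", 0.0) > 0.0:
        return -1.0
    # Replicated fear: negative feedback for factors likely to cause harm,
    # e.g. a large, heavy, fast-moving object on a collision course.
    if state.get("collision_risk", 0.0) > 0.8 or state.get("drop_height_m", 0.0) > 2.0:
        return -0.5
    # Replicated happiness: positive feedback for well-maintained functioning.
    if state.get("battery", 1.0) > 0.9 and state.get("faults", 0) == 0:
        return +0.5
    return 0.0

print(innate_feedback({"collision_risk": 0.95}))  # -> -0.5
```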

Human emotions are mainly intended for survival and reproduction, but an AGI would likely be created with other intentions for its existence and function. As a change from the typical function of human emotions, the AGI might be programmed with alternative positive or negative feedback for sets of generalised circumstances. Perhaps it could be programmed to avoid situations involving harm to humans, to pursue acquiring information, or to pursue accomplishment.
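
Such designer-chosen goals might amount to nothing more than an extra table of feedback values layered over the innate triggers above; the categories and weights below are purely illustrative:

```python
# Hypothetical designer-specified feedback values (illustrative only).
DESIGNER_FEEDBACK = {
    "harming_human": -10.0,        # strongly avoided
    "acquired_information": +0.3,  # mildly pursued
    "task_accomplished": +1.0,     # pursued
}

def designer_feedback(events):
    """Sum the feedback for whichever designer-defined events occurred."""
    return sum(DESIGNER_FEEDBACK.get(event, 0.0) for event in events)

print(designer_feedback(["acquired_information", "task_accomplished"]))  # -> 1.3
```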

Beyond programming the AGI to avoid or pursue those general ranges of circumstances, it could also be programmed to reinforce its experiences. To replicate human emotion as reinforcement for subconscious or conscious memory, the AGI should save all of its experiences in memory and associate reinforcement triggers with those memories. Then, when it encounters circumstances involving more specific factors that were present in a past experience where it received positive feedback, it will be reinforced to pursue the more accurate factors associated with the positive circumstances. After recurring experiences, it should be able to determine more accurately which specific factors cause the positive results. That is, in essence, the function of emotion reinforcing subconscious memories.
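
A crude sketch of that refinement, with invented factor names: if each experience is saved as the set of factors present plus the feedback it triggered, then averaging the feedback per factor lets the factors that actually drive positive results stand out from the incidental ones after enough recurrences.

```python
from collections import defaultdict

class FactorMemory:
    """Associates feedback with the individual factors inside circumstances."""

    def __init__(self):
        self.stats = defaultdict(lambda: [0.0, 0])  # factor -> [total, count]

    def store(self, factors, feedback):
        """Save an experience: every factor present shares in the feedback."""
        for factor in factors:
            self.stats[factor][0] += feedback
            self.stats[factor][1] += 1

    def value(self, factor):
        """Average feedback associated with a factor across all memories."""
        total, count = self.stats[factor]
        return total / count if count else 0.0

memory = FactorMemory()
# Repeated experiences: 'docked' is what actually precedes positive feedback;
# 'dim_light' is merely incidental and appears with mixed feedback.
memory.store({"docked", "dim_light"}, +1.0)
memory.store({"docked"}, +1.0)
memory.store({"dim_light"}, -0.5)
print(round(memory.value("docked"), 2), round(memory.value("dim_light"), 2))
# -> 1.0 0.25
```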

When it comes to emotions, it doesn't seem entirely unrealistic to program them into AI. Emotions could potentially be programmed around circumstances similar to those of humans, but it seems likely that many adjustments would be made. Perhaps these adjustments would make the programming different enough from emotions to warrant a different term for the feedback triggers. If the triggers are a replica of emotion, for (A)rtificial (I)ntelligence, and cause -motion, in action and reaction, perhaps "Aimotion".
