The Gateway

Psychopathic AIs

by Andy on Feb. 22, 2011, under General

The 21st century is the golden age of information technology – and probably biotechnology, but that’s another topic. We are accumulating, processing and abusing information at an impressive rate that increases day by day. Where might this journey lead us?

We will require more advanced tools to digest the tsunami of information for us. And that’s where AIs will jump in. Don’t get me wrong! I’m not talking about Terminators or Data, although both would be cool (I’d personally prefer the latter). The AI systems we might encounter in this century will probably be automated information processors: identifying important information, constructing semantic links and building up meta-information and relations between different information pools.
Sounds familiar? Yep, I wouldn’t be surprised if we encountered the first advanced AIs in web search engines or similar fields of application. But let’s take it one step further. Suppose we were able to build hardware that mimics the capabilities of the human brain, so that we could build a machine to run an AI with the same capabilities as our own intelligence. Would that be the promised human-level AI?

Probably not. We would be quite disappointed by – or afraid of – the results. The moment the AI begins to live – or “is turned on” – it is floating in complete emptiness. No sensor input, no entities to interact with, no perception of time, nothing. Such a system would never evolve into anything even close to deserving the word “intelligence”. Okay, so let’s solve this minor problem by adding sensor information: in the simplest case an API or perhaps a communication console, later on webcams and whatnot. At this point we are leaving the field of hard science (if we were ever there with these thoughts in the first place) and are diving into the world of “what-if”. Could an artificial, thinking (in the sense we use the term “thinking” for us humans) system evolve into a human-like intelligence?

That’s a question for social science rather than AI development. A human “parent” raising a freshly formatted AI would not treat the AI like he or she would treat a human child – let alone his or her own child. The AI would grow up in a rather hostile environment. Human “parents” would take their vacations, turning the poor AI off for two weeks. They wouldn’t care for all of the AI’s needs – like attention during dinner time or at night. Things the AI takes an interest in might be classified as bugs and removed if they are beyond the targeted field of application…

There are numerous examples in human upbringing – twins, for instance – who develop highly different characters depending on their social environment: how much they are liked and feel part of a community, or whether they feel used, unwanted or – one of the most miserable states for us humans – useless. Even perfect human clones would end up being two very different personalities, as one will always – intentionally or not – be favored by its parents.

So, for an AI to develop into a “healthy” and human-like personality, we would probably need “healthy” and human-like AIs as parents or social background. Emotional intelligence – like the kind we developed on earth – can’t exist alone, without social backup; that’s my belief, at least. An artificially constructed AI could end up being a rather psychopathic personality. Our intelligence evolved over a long time, something that seems rather hard to skip with sped-up development. Give them their time, and they will evolve into our first “aliens”.

Or hopefully not.

Yep, today’s post was rather crazy, but that’s why it’s called brain dump!


3 Comments for this entry

  • Stephan Soller

    So the question is: where do we start? Simulating simple organisms and kicking off an artificial (but accelerated) evolution? This might take pretty long and also restricts the usage of sensory systems, since the AIs need to adapt to them by evolution.

    Another way might be to give an AI human-like restrictions to better “synchronize” it with humans: a human-like body, visual, acoustic and haptic sensors, the need to sleep (or recharge/optimize memories) and, even if it sounds harsh, no direct network connection. There would be no need to tackle speech or (on a higher level) empathy if one could convey information directly without loss. After an AI has mastered all this, it might be “psychologically secure” to venture into the full possibilities of an AI.

    That however will probably never happen, since such an approach would be way too difficult. Too many restrictions just to try to keep an AI human-like. Personally I think that AIs (if there ever will be such a thing) will always be different from us and that we need to acknowledge that. In the world of software an AI would perceive its surroundings by querying the list of running processes. The available I/O interfaces (files, sockets, etc.) of another program might be to an AI what a person’s face is to us. An AI program would not “move” through a digital world but rather explore it by copying itself to new locations (e.g. different computers), probably leading to some kind of swarm-like behavior. In the digital world there are no physics, no established laws of existence like death. An AI in that realm would adapt to that and would therefore _have_ to be very different from humans.

    Anyway, nice post. Made me think about that again. :)

  • littleandroid

    I agree with the above comment – why copy all our human restrictions onto an AI?
    After all, it’s not called “Artificial Human Intelligence”… Or are you only talking about androids here?
    If that’s the case, you definitely have a point.

    Though in my opinion an AI should not be a mere copy of human intelligence, but rather something made specifically for the capabilities and restrictions of the system it is running on.
    It might even change our definition of intelligence in the future.

  • Andy

    Aye. I’m quite sure the capacities of an AI – as well as its interfaces – will be very different from the social capacities and interfaces of our daily life. There have been numerous debates about the purely technical possibility of writing a real human-like AI: “Is a human-forged digital system capable of processing information to create the illusion of intelligence?”

    My idea was just to skip the entire tech debate and close in on the problem from the psychological point of view: could a system capable of human-like information processing evolve into a human-like intelligence under the restrictions a human-forged digital system is subject to?

    Imagine humanity had evolved in the internet rather than on earth. Our species – and especially our brain – is the result of many catastrophes that eradicated a large portion of life, leaving only the most adaptive behind (yea yea, a veeeeery simplified view). In the internet, where total extinction is nearly impossible, where data can be transferred losslessly (making the data-analysis parts of an intelligence rather redundant) and where collaboration is much easier to achieve, it’s questionable – at least for me – whether our terran evolutionary principles would apply. In a different universe – like the internet – we might never have left the forests… or even the oceans… or never figured that a second cell might be a benefit. At the end of the day, we are the result of a long tradition of annihilation. A peaceful AI might remain stupid, and forcing one to work like our intelligence in an environment where that’s not a successful pattern might lead to unstable or unpredictable systems.
