Is it possible that AI technology may get out of control?

Would artificial intelligence ever get out of human control? I.e., could we have scenarios like those in the films A.I. and I, Robot?
Simone Sharma from Manchester (Age 25-34)

3 Responses

  1. Not as per the film, but as described in the book, it’s possible.

    It is up to us to know what AI (computers) is doing. AI will make our lives much easier and enable us to do other things that we don’t have time for at the moment.

    An example: look back 50 years at how much time people spent washing their clothes. Yet today we throw our clothes into the washing machine and go and do something else instead. OK, so this isn’t hi-tech AI, but it is still automation benefiting our lives.

    So, I think as long as we keep an eye on what AI is doing, we will benefit from its continued development. Even AI that teaches itself (as in I, Robot) is fine as long as we know what it is doing.

  2. There’s a great deal of science fiction which explores this question and possible answers, but there isn’t a lot of real science to back it up yet. Back in the 1940s it became possible to build computers that could do in seconds calculations which took humans months or even years to do by hand. Today, computers are still glorified calculating machines, just able to do things for us even faster.

    In seventy short years we’ve made computers that can, under limited circumstances, act like humans or intelligent beings. In one famous example from the 1960s, a computer program called “ELIZA” was so successful at seeming to be a sympathetic listener that the creator’s secretary had private “chats” with the program about her own problems (a minimal sketch of ELIZA’s pattern-matching trick appears below). Today, we have computers that can reliably beat Chess Grandmasters with ease, but these are just cleverly human-designed machines with a great deal of calculation power. Similarly, search engines such as Google seem clever and intelligent but just have a great deal of storage and calculation capacity.
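
    ELIZA’s trick was not understanding but simple pattern matching and substitution. As a rough illustration only (these rules are invented for this sketch, not Weizenbaum’s original DOCTOR script), a minimal ELIZA-style responder in Python might look like this:

    ```python
    import random
    import re

    # Illustrative ELIZA-style rules: match a pattern, then reflect the
    # user's own words back inside a canned template. No understanding.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bi am (.+)", re.I),
         ["Why do you say you are {0}?"]),
        (re.compile(r"\bmy (.+)", re.I),
         ["Tell me more about your {0}."]),
    ]

    def respond(utterance: str) -> str:
        """Return a 'sympathetic' reply with no comprehension at all."""
        for pattern, templates in RULES:
            match = pattern.search(utterance)
            if match:
                topic = match.group(1).rstrip(".!?")
                return random.choice(templates).format(topic)
        return "Please go on."  # fallback when nothing matches

    print(respond("I feel nobody listens to my problems"))
    # e.g. -> "Why do you feel nobody listens to my problems?"
    ```

    A handful of such rules is enough to sustain an eerily plausible “conversation”, which is exactly why the program seemed intelligent without being so.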

    What will it take for a computer to become truly intelligent and autonomous from us? I suspect the problem with this question is that human “intelligence” and computer “intelligence” are completely different things. We’ve never been able to pinpoint our own mechanisms for intelligence, and we continue to design our computers as calculating machines.

    It may take a paradigm shift in thinking before machines of the future gain “intelligence” as we know it. We will continue to chip away at smaller problems like playing games, understanding language and logical reasoning. While these will be achievements in themselves, I believe true intelligence, the kind that also brings autonomy and control, will not happen until we finally understand what human intelligence actually is.

  3. Hello Simone

    Like Steve Price, I think that the answer to your question is, yes, it is theoretically possible.

    However, I think it is highly unlikely that AI technology will get out of control, for several reasons.

    Firstly, because building artificially intelligent systems with enough intelligence to get out of control is very, very difficult. Let me put it this way: the movie I, Robot is set in the year 2035, less than 30 years from now. In that movie the robots are capable of understanding and conversing with human beings, with speech, as well as doing everyday chores like walking the dogs or doing the cooking. Although we could argue about the level of intelligence portrayed, those robots have, on the face of it, human-level intelligence. Given that the smartest robots we can build now, in 2008, have an intelligence roughly somewhere between a slug and an ant, there is an enormous gap between where we are now and human-level AI. My own view is that human-level AI is theoretically possible but that it will take hundreds of years to achieve (although I should say not all roboticists agree with me). See my blog here for more discussion on why I think this: http://alanwinfield.blogspot.com/2006/02/on-wild-predictions-of-human-level-ai.html

    Secondly, AI technology is man-made and can easily break down. Let me illustrate what I mean with reference to the Bristol Robotics Laboratories’ Ecobots. These are robots that can digest and get their energy from unrefined food, using a kind of biological battery called a Microbial Fuel Cell. In a recent talk I described a possible future application of this technology in which a swarm of Ecobots would ‘live’ in a field of crops and control the weeds and pests by (1) being smart enough to know which are the weeds and the pests, (2) eating those and getting energy from them, and (3) excreting waste ‘fertiliser’.

    Someone asked me: if we can create robots that are autonomous and biologically embedded in the ecosystem, what are the implications of releasing these hybrids “into the wild”? My answer was: even if we could make versions of these robots that could ‘survive’ in the wild, they would still have two fundamental limitations. Firstly, they can’t reproduce, and secondly, they cannot repair (heal) themselves. Furthermore, they would not be very smart; they would just have a small number of instinctive behaviours and wouldn’t be able to learn or reason (a toy sketch of such an instinct-only control loop appears below). Thus, as soon as any of their components failed, the robots would simply stop working and, as it were, die. The same might happen for other reasons: the robot might get physically stuck, or be damaged by a real animal, for example. Ideally the robots would be built from bio-degradable materials so that when they did stop working they would, like real animals, just rot away.
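
    To make concrete what “a small number of instinctive behaviours” means, here is a toy sketch of an instinct-only control loop in the spirit of behaviour-based robotics. Everything in it (sensor names, thresholds, actions) is hypothetical, invented for illustration rather than taken from the real Ecobots:

    ```python
    import random

    # Hypothetical 'instinct-only' controller. There is no learning,
    # no reasoning and no self-repair: a fault simply ends the robot's
    # 'life', which is the limitation described above.

    def read_sensors():
        """Placeholder for real sensing: returns fake readings."""
        return {
            "energy": random.uniform(0.0, 1.0),  # Microbial Fuel Cell charge
            "pest_seen": random.random() < 0.3,  # crude weed/pest detector
            "fault": random.random() < 0.01,     # stuck, broken, damaged...
        }

    def choose_action(s):
        if s["energy"] < 0.2:   # instinct 1: survival comes first
            return "seek_food"
        if s["pest_seen"]:      # instinct 2: do the job (eat the pest)
            return "ingest_target"
        return "wander"         # instinct 3: default behaviour

    for step in range(10_000):
        sensors = read_sensors()
        if sensors["fault"]:    # no self-healing: any failure is terminal
            print(f"robot died at step {step}")
            break
        action = choose_action(sensors)
        # a real robot would drive motors/pumps here
    ```

    Because the behaviours are just fixed priorities over raw sensor readings, the robot cannot adapt, and the first unhandled fault ends its mission.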

    Thirdly, and finally, a good deal of research is going on right now to develop ways in which we can guarantee the ‘dependability’ of intelligent systems. In the same way that flight control systems for aircraft are put through incredibly rigorous processes of formal ‘proof of correctness’ as well as test, validation and certification, future AI systems will also need to be put through similar processes before they are ‘certified’ for use in the real world, and especially in human environments (a minimal sketch of one such dependability pattern, a safety guard, appears below). Of course even the best designed and certified systems can still go wrong (often because of human error in design, build or operation), but I believe there is no reason to think we cannot have the same degree of confidence in the dependability of AI technology as we now have for conventional technology.
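
    One simple engineering pattern used in dependability work of this kind is a safety guard (sometimes called a safety monitor): a small, fixed piece of code between the intelligent controller and the actuators that enforces hard limits no matter what the controller proposes. A minimal, hypothetical sketch, with invented limits and command format:

    ```python
    # A hypothetical safety guard. The envelope limits and command
    # fields are invented for illustration. The point: the guard is
    # small and fixed, so it can be exhaustively tested or formally
    # verified even if the controller behind it cannot be.

    SAFE_SPEED = (0.0, 1.0)   # m/s, illustrative certified limits
    SAFE_TURN = (-0.5, 0.5)   # rad/s

    def clamp(value, low, high):
        return max(low, min(high, value))

    def guard(command):
        """Force any proposed command back inside the safe envelope."""
        return {
            "speed": clamp(command["speed"], *SAFE_SPEED),
            "turn": clamp(command["turn"], *SAFE_TURN),
        }

    # However clever (or buggy) the controller, the actuators only
    # ever see guarded commands:
    runaway = {"speed": 9.9, "turn": -2.0}
    assert guard(runaway) == {"speed": 1.0, "turn": -0.5}
    ```

    The guard’s value is that it is simple enough to verify exhaustively, so confidence in the whole system does not rest on verifying the much more complex intelligence behind it.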

    Footnote: I am a roboticist, working primarily in the fields of swarm intelligence and swarm robotics, and my research is conducted in the Bristol Robotics Laboratories: http://www.brl.ac.uk/

    My web page is: http://www.ias.uwe.ac.uk/~a-winfie/
