Confusion permeates many discussions regarding artificial intelligence (AI). Much of the confusion stems from a failure to rigorously define AI. The "artificial" part is easy, defined as that which is made or produced by human beings rather than occurring naturally (1) - a machine, for instance. The "intelligence" part is not so easy, with the Oxford Dictionary defining it as the ability to acquire and apply knowledge and skills (2). This is vague and unsatisfying, though that is no fault of the dictionary. Despite their best efforts, many intelligent people have spent decades trying, and failing, to define intelligence for the purposes of AI.
In 1945, the mathematician and physicist John von Neumann described a design for a digital computer consisting of an input device such as a keyboard, a CPU to process the data according to a predefined set of instructions, a memory to store the data and instructions, and an output device such as a monitor (10). This simple design later came to be known as the von Neumann architecture, and together with the related Harvard and Modified Harvard architectures it provides the basic design for modern computers and most other forms of WNAI today. There is a lot of WNAI out there already, all of it executing special-purpose sets of instructions called algorithms - in addition to computers, common examples include calculators, web search engines, car navigation systems that display maps and offer advice (useful), medical decision support systems that aid in the interpretation of electrocardiograms (mediocre), and recommender systems that suggest books and music albums based on a user's previous choices (irritating) (11). Less common examples of WNAI based on the von Neumann architecture include specialized game-playing computers and programs; in addition to the chess-playing computer Deep Blue, the best human players in the world have been surpassed by specialized programs made exclusively for simpler games like checkers, backgammon, and Scrabble (11).
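To make the stored-program idea behind the von Neumann architecture concrete, here is a minimal sketch in Python (my own illustration, not taken from any of the cited sources): instructions and data share a single memory, and the CPU simply fetches, decodes, and executes one instruction after another until it is told to halt.

```python
# A toy von Neumann-style machine: program and data live in the same memory,
# and the CPU runs a simple fetch-decode-execute loop.

def run(memory):
    """Execute the program stored in `memory` and return the final memory state."""
    pc = 0   # program counter: address of the next instruction
    acc = 0  # accumulator: a single working register
    while True:
        op, arg = memory[pc]      # fetch the instruction at the program counter
        pc += 1
        if op == "LOAD":          # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":         # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":       # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return memory
        else:
            raise ValueError(f"unknown opcode: {op}")

# A tiny program: add the numbers at addresses 5 and 6, store the sum at address 7.
program = [
    ("LOAD", 5),     # address 0
    ("ADD", 6),      # address 1
    ("STORE", 7),    # address 2
    ("HALT", None),  # address 3
    None,            # address 4 (unused)
    2,               # address 5: data
    3,               # address 6: data
    0,               # address 7: result goes here
]

print(run(program)[7])  # prints 5
```

Everything such a machine "knows" lives in that one memory and that one loop, which is why this kind of WNAI excels only at whatever tasks its algorithm spells out.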
SGAI - strong general artificial intelligence - is not the only possible route to superintelligence, which is intelligence exceeding that of any human brain. In his 2014 book Superintelligence, the philosopher Nick Bostrom delineated four other routes - the first is whole brain emulation (copying a brain by scanning it and uploading it into a computer); the second is biological cognition (the creation of enhanced brains by genetically selecting embryos with desirable cognitive traits); the third is brain-computer interfaces (brain implants, such as the skull-embedded antenna of cyborg activist Neil Harbisson that allows him to take phone calls and connect to the internet directly from his head) (12); and the fourth is networks and organizations (technology that enhances and links human minds together) (11). However, since all of these routes involve copying, enhancing, or linking the human brain, they will always be confined by the brain's biological limits; they cannot vastly surpass human intelligence, not by orders of magnitude anyhow. The AI route, on the other hand, is not restricted by any biological limits, so it is the only route with the potential to vastly surpass human intelligence.
Presuming we can create SGAI, the next question we have to ask ourselves is whether we will really need it. To answer this question we must distinguish between wants and needs - a want is a desire to possess or do something (17); a need is something essential (18). Many people don't want to work and don't want to die; superintelligent SGAI could not only take on most or all of the work, it could also help humans find ways to extend their own lifespans. However, wants are not compelling enough to convince an intelligent person that SGAI ought to be pursued. There must be a need, something crucial for the survival of life as we know it, or at the very least human life. Frankly, there will be a need, and that need is deep space exploration. The Earth will not sustain humans forever; the Sun will exist for another five billion years, but as its luminosity increases, complex multicellular life on Earth has been estimated to become extinct in as little as 800 million years from now (19). Some people will argue that this is a lot of time - that may be so, but it is still finite. Moreover, it assumes that humans don't get taken out by a global extinction event - an asteroid impact, a supervolcano, or climate change - in the interim. There have been at least five global extinction events on Earth during the last 500 million years, each of which eradicated between 75% and 96% of all species on the planet (20). At that historical rate - roughly one event per hundred million years - another global extinction event is more likely than not over the next couple of hundred million years (a rough calculation is sketched below). So unless we want to set an extinction date for ourselves, we are going to have to reach for the stars.
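To make that likelihood concrete, here is a back-of-envelope sketch (my own arithmetic, not a figure taken from reference 20): five global extinction events in 500 million years averages out to one per 100 million years, and if such events arrive independently at that average rate, the chance of at least one over the next 200 million years comes to roughly 86 percent.

```python
import math

# Back-of-envelope estimate: treat global extinction events as arriving
# independently at the historical average rate of 5 events per 500 million years.
rate_per_my = 5 / 500   # events per million years
window_my = 200         # look ahead 200 million years

expected_events = rate_per_my * window_my        # = 2.0 expected events
p_at_least_one = 1 - math.exp(-expected_events)  # Poisson: ~0.86

print(f"Expected events in {window_my} million years: {expected_events:.1f}")
print(f"Chance of at least one: {p_at_least_one:.0%}")
```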
Recognizing that we will eventually need SGAI if humans are to explore deep space and survive, we must next ask whether a superintelligent SGAI will destroy us before we even get to that point. The same von Neumann who described the design for the digital computer also suggested, in remarks recounted in 1958, that a hypothetical event known as the technological singularity would occur once SGAI exceeded human intelligence (28). In 1965, the mathematician I. J. Good summarized the technological singularity as follows (29).
Every year, the topic of AI pops up more and more in the news. It is a topic that can't be avoided, nor should it be. Yet the field is full of misguided efforts to model SGAI after computers or neural networks rather than the only existing standard of true intelligence - the human brain. AI designers fervently chase SGAI, but most of them do not respect the human brain, and without that respect they simply won't get there. Outside of the AI field, many people fear SGAI and want to put the brakes on it altogether, but they are looking at the issue without seeing the eventual need for it in deep space exploration. Likewise, the experts who guess that the technological singularity could happen within a few decades, and that it will be bad for humanity, base those guesses on their knowledge of computers, which makes them far too soon and far too negative.
References