AI


Da Nang, Vietnam

September 2015


Confusion permeates many discussions of artificial intelligence (AI). Much of it stems from a failure to rigorously define AI. The "artificial" part is easy, defined as that which is made or produced by human beings rather than occurring naturally (1). Like a machine, for instance. The "intelligence" part is not so easy, with the Oxford Dictionary defining it as the ability to acquire and apply knowledge and skills (2). This is vague and unsatisfying, though that is no fault of the dictionary's. Despite their best efforts, many intelligent people have spent decades failing to define intelligence for the purposes of AI.

The origins of the AI concept were formulated in the 1950s by the mathematician and theoretical computer scientist Alan Turing, a man widely acknowledged as the father of AI. In a 1950 paper entitled Computing Machinery and Intelligence, Turing provided a behavioural definition of intelligence, stating that if a computer acted intelligently, and could fool a human into thinking it was human, then for all intents and purposes the computer was intelligent (3). What happened inside the computer was not relevant, so long as it acted intelligently. To support this, Turing proposed an experiment, later known as the Turing Test, in which a human evaluator judges a natural-language conversation between a human and a computer designed to deliver human-like responses. Turing Test competitions have since been held regularly; in a recent 2014 competition hosted at the Royal Society in London, considerable fanfare arose after the Russian chatbot Eugene Goostman fooled 33% of the judges into thinking it was human (4). Which is somewhat impressive - or not - depending on how you look at it. Clearly though, Turing's behavioural definition of intelligence remains influential to this day.

It was not until 1980 that the philosopher John Searle exposed a serious flaw in Turing's behavioural definition of intelligence. He did this with a thought experiment called the Chinese Room (5), which is summarized below (6).

"Assume you do not speak Chinese and imagine yourself in a room with two slits, a book, and some scratch paper. Someone slides you some Chinese characters through the first slit, you follow the English instructions in the book, transcribing characters as instructed onto the scratch paper, and slide the resulting sheet out the second slit. To people on the outside world, it appears the room speaks Chinese - they slide Chinese statements in one slit and get valid responses in return - yet you do not understand a word of Chinese."

Searle's contention was that while the Chinese Room certainly appears intelligent to an outside observer - it takes in characters and returns sensible answers - at no point does the room or the person in it display intelligence by actually understanding what is going on. The person in the room is instantiating a program, analogous to a computer: the characters entering the room are the input, the person with the book of instructions is the central processing unit (CPU), and the scratch paper exiting the room is the output. Searle therefore concluded that although a computer could display intelligent behaviour, this did not equate to genuine intelligence or understanding. He did not claim to know what intelligence was, and he admitted that understanding was difficult to define, but whatever they were, computers didn't have either of them.
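To make the room-as-program analogy concrete, here is a minimal sketch in Python (the miniature rulebook and phrases are hypothetical stand-ins for Searle's book of instructions): symbols go in, a rule is looked up, symbols come out, and nothing in the code understands Chinese at any point.

```python
# A toy "Chinese Room": the program follows a rulebook that pairs incoming
# symbol strings with outgoing symbol strings, producing sensible-looking
# answers without any understanding of what the symbols mean.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样?": "今天天气很好。",   # "How is the weather?" -> "The weather is fine."
}

def room(incoming: str) -> str:
    # Look the symbols up and copy out the listed response; no step here
    # involves knowing Chinese.
    return RULEBOOK.get(incoming, "对不起, 我不明白。")  # default: "Sorry, I do not understand."

print(room("你好吗?"))  # looks fluent to an outside observer
```

Scaling the rulebook up makes the room more convincing, but no amount of scaling adds understanding - which is exactly Searle's point.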

Intelligence continued to evade a workable definition for two more decades. In 1998, a seventeen-page treatise on the various ways to define intelligence was published (7), a testament to the ongoing confusion. Yet finally, under the radar, a real definition emerged from an unlikely source. In his 2004 book entitled On Intelligence, the technology entrepreneur Jeff Hawkins defined intelligence as the ability to make predictions about patterns and sequences in the world using a memory system to construct an internal model of that world; correct predictions result in understanding, incorrect predictions result in confusion (8). The essence of Hawkins' argument was that prediction is the hallmark of intelligence; the human brain's main function is to predict things about the world using its memory system. Predict, not calculate or compute. As the briefest example, consider an intelligence quotient (IQ) test - the questions almost exclusively test the ability of the test-taker to predict the next part of a pattern or sequence. Thus, what makes the human brain more intelligent than the brains of other animals is that it can make predictions about more abstract patterns and longer temporal sequences in the world. Hawkins' predictive definition of intelligence disagreed with Turing's behavioural definition in arguing that intelligent behaviour is only a manifestation of intelligence, not its defining characteristic - for example, a person lying down with their eyes closed, thinking about the next few notes of a melody, is not displaying intelligent behaviour, yet they are still intelligent (8). Hawkins' predictive definition of intelligence is the best one available.
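As a rough illustration of intelligence-as-prediction (a toy sketch of the general idea, not Hawkins' actual model), the snippet below builds a "memory" of which element has followed which in the sequences it has seen, then uses that memory alone to predict the next element; a correct prediction plays the role of understanding, a wrong one the role of confusion.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """A toy memory-based predictor: it stores which element has followed
    which, and predicts the most frequently remembered successor."""

    def __init__(self):
        self.memory = defaultdict(Counter)

    def learn(self, sequence):
        # Store every observed transition in memory.
        for current, nxt in zip(sequence, sequence[1:]):
            self.memory[current][nxt] += 1

    def predict(self, current):
        # Retrieve the most common successor from memory (a lookup, not a
        # computation); return None if nothing has been learned yet.
        followers = self.memory.get(current)
        return followers.most_common(1)[0][0] if followers else None

melody = ["C", "D", "E", "C", "D", "E", "C"]
m = SequenceMemory()
m.learn(melody)

prediction, actual = m.predict("D"), "E"
print("understanding" if prediction == actual else "confusion")  # prints "understanding"
```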

So putting all of this together, we now define AI as a machine made or produced by human beings with the ability to make predictions about patterns and sequences in the world using a memory system to construct an internal model of that world. That's it. That's AI.

Keeping this definition in mind, we can classify AI into one of two categories - the first type is weak narrow AI (WNAI), weak in that it only simulates intelligence as per Turing's definition, narrow in that it can only solve specific problems in well-defined domains; the second type is strong general AI (SGAI), strong in that it displays genuine intelligence as per Hawkins' definition, general in that it can solve a variety of different problems across a variety of different domains (5,9). It is important to realize that WNAI is a bit of a misnomer, as it does not possess real intelligence; SGAI is the real deal. The chess-playing computer Deep Blue, the first AI to defeat the reigning world chess champion, Garry Kasparov, in 1997, is an example of WNAI. Deep Blue is weak in that it reacts to an opponent's move and computes the optimal response, but since it only mimics intelligence it has no understanding of chess; it beat Kasparov by being millions of times faster in a game of calculable logic, not by having greater intelligence. Deep Blue is narrow in that it specializes in chess alone and cannot solve problems outside of that domain. Regarding SGAI, it does not exist; the only existing strong general intelligence is the human brain. The human brain is strong in that it makes predictions using its memory system, so it displays genuine intelligence which leads to understanding. The human brain is general in that rather than being restricted to one domain it can apply things like common sense, natural language, and the ability to handle uncertainty to a variety of situations across a variety of domains.
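Deep Blue's real search machinery was far more elaborate, but a plain minimax search over a tiny made-up game tree (the tree and scores below are hypothetical) shows the kind of calculable logic involved: the machine mechanically scores every line of play and picks the best one, no understanding required, which is why raw speed is everything.

```python
# A minimal minimax search over a hand-made game tree. Leaves hold scores
# from the maximizing player's point of view; internal nodes hold the lists
# of positions reachable in one move. Real chess programs search trees of
# billions of positions - hence the need for speed.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):        # leaf: a scored position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for "us", each answered by two possible replies from the opponent.
game_tree = [[3, 5], [2, 9]]

best = max(range(len(game_tree)),
           key=lambda i: minimax(game_tree[i], maximizing=False))
print("best move:", best)  # move 0: the opponent can hold us to 3 there, versus 2 after move 1
```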

Using the above AI definition and classification system, we now proceed to discuss the current design of WNAI, the potential design of SGAI, whether or not we need AI, and finally, whether or not we should fear AI.

Designing WNAI

In 1945, the mathematician and physicist John von Neumann described a design for a digital computer consisting of an input device such as a keyboard, a CPU to process the data according to a predefined set of instructions, a memory to store the data and instructions, and an output device such as a monitor (10). This simple design later came to be known as Von Neumann architecture and, along with its variations, Harvard architecture and Modified Harvard architecture, provided the basic design for modern computers and most other forms of WNAI today. There is a lot of WNAI out there already, all built around special-purpose sets of instructions called algorithms - in addition to computers, common examples include calculators, web search engines, car navigational systems that display maps and offer advice (useful), medical decision support systems that aid in the interpretation of electrocardiograms (mediocre), and recommender systems that suggest books and music albums based on a user's previous choices (irritating) (11). Less common examples of WNAI based on Von Neumann architecture include specialized game-playing computers and programs; in addition to the chess-playing computer Deep Blue, the best humans in the world have been surpassed by specialized programs made exclusively for simpler games like checkers, backgammon, and Scrabble (11).
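As a rough sketch of that design (a hypothetical toy machine with a made-up five-instruction language, not any real instruction set), the snippet below keeps both a tiny program and its data in one shared memory and runs the fetch-decode-execute loop that characterizes Von Neumann architecture.

```python
# A toy Von Neumann machine: one memory holds both instructions and data,
# and a CPU repeatedly fetches, decodes, and executes instructions.

memory = [
    ("LOAD", 6),     # 0: load the value stored at address 6 into the accumulator
    ("ADD", 7),      # 1: add the value stored at address 7
    ("STORE", 8),    # 2: write the accumulator back to address 8
    ("PRINT", 8),    # 3: output the value stored at address 8
    ("HALT", None),  # 4: stop
    None,            # 5: (unused)
    40,              # 6: data
    2,               # 7: data
    0,               # 8: the result goes here
]

accumulator, pc = 0, 0             # the CPU's register and program counter
while True:
    opcode, operand = memory[pc]   # fetch and decode
    pc += 1
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "PRINT":
        print(memory[operand])     # prints 42
    elif opcode == "HALT":
        break
```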

Therefore WNAI is based on Von Neumann architecture, not the human brain. Which is fine. WNAI does not have to be modelled on the brain if we just want it to do specific things, the same way that an aircraft does not have to be modelled after a bird if we just want it to fly and not do all of the other things that a bird can do.

Designing SGAI

SGAI - strong general machine intelligence - is not the only route to achieving either strong general intelligence or superintelligence, which is intelligence exceeding that of any human brain. In his 2015 book Superintelligence, the philosopher Nick Bostrom delineated four other routes to strong general intelligence - the first is whole brain emulation (copying a brain by directly uploading it into a computer); the second is biological cognition (the creation of enhanced brains by genetically selecting embryos with desirable cognitive traits); the third is brain-computer interfaces (brain implants, such as the skull-embedded antenna of cyborg activist Neil Harbisson that allows him to take phone calls and directly connect to the internet from his head) (12); and the fourth is networks and organizations (enhancing technology that links brains together) (11). However, since all of these routes involve copying, enhancing, or linking the human brain, they will always be confined by the brain's biological limits; they cannot vastly surpass human intelligence, not by orders of magnitude anyhow. The AI route, on the other hand, is not restricted by any biological limits, so it is the only route with the potential to vastly surpass human intelligence.

(1) Respecting the human brain.

Historically, most AI designers have fallen into one of two schools - the first school, the symbolic school, hypothesized that the human brain was not the only physical substrate able to produce thought processes; the second school, the connectionist school, hypothesized that the human brain's neuronal architecture was critical to thought processes (13). Unfortunately, this dichotomy was taken too far, and though the distinctions between the schools blurred as the years rolled on, the extremism continues to hinder SGAI design to this day. Most of the symbolic school adherents and their descendants try to create SGAI using computer-based models, despite the glaring differences between computers and brains - the former has a central processor, the latter has no centralized control; the former is programmed, the latter is self-learning; the former has to be perfect to work at all, the latter is naturally flexible and tolerant of failures (8). These differences just can't be ignored. Most of the connectionist adherents and their descendants try to create SGAI using artificial neural networks, statistical learning models inspired by biological neural networks, the first of which was created in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts (14). The architecture of artificial neural networks is loosely based on real nervous systems, consisting of many interconnected processing elements or "neurons" arranged in layers and feedback loops that exchange information with each other. Artificial neural networks have no CPU or dedicated memory; the network's memories are distributed throughout its connectivity, just like a human brain's. Moreover, they do not need to be programmed; they learn by example, just like a human brain. However, the neuronal organization of artificial neural networks is very different from that of a human brain; the former is comparatively simple and consists of layers and feedback loops, the latter is much more complex and consists of massively repeated neural units arranged into hierarchies.
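As a minimal illustration of the connectionist idea (a toy two-layer network, not any particular research model), the sketch below wires artificial "neurons" into layers and lets the network learn the XOR function purely from examples, storing what it learns in its connection weights rather than in any programmed rule.

```python
import numpy as np

# A tiny feed-forward neural network (2 inputs, 4 hidden "neurons", 1 output)
# trained by backpropagation to learn XOR from examples alone.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)                   # forward pass, layer by layer
    output = sigmoid(hidden @ W2 + b2)
    d_out = (output - y) * output * (1 - output)    # backpropagate the error
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

# After training, the network's "knowledge" lives entirely in its weights.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())  # typically ~[0, 1, 1, 0]
```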

The majority of AI designers do not respect these obvious differences between computers, artificial neural networks, and human brains; ultimately, a lack of respect for the human brain is where most SGAI designers go wrong. Without respect, there can be no understanding of how the brain works, of how it does so much with so little. It's really quite impressive. Over its lifetime, the brain learns the most important patterns and sequences from the world and stores them as memories in the neurons of its neocortex. These memories are stored in a neuronal hierarchy that reflects the fundamentally hierarchical nature of the world (8). The totality of this neuronal hierarchy represents a model of the world that the brain constantly uses to make predictions about that world, by retrieving memories and "trying them out" against the actual sensory data coming in. If the predictions and the sensory data match well enough, then the brain understands the sensory data at some level, meaning it has experienced that sensory data or something very similar to it before; it's business as usual, so no need to pay too much conscious attention. If the predictions and the sensory data don't match, then the brain does not understand the sensory data at some level, meaning the experience may be a new one; there is confusion, and more conscious attention may need to be devoted to the situation at hand to resolve that confusion. Likewise, if the brain wants to execute a behaviour, it simply retrieves the relevant memory for that behaviour. There's no computing involved, it's just memory retrieval, so a symbolic approach won't work. Moreover, the memories are not stored in neural network form, they're stored in massively repeated neural units arranged into hierarchies, so a connectionist approach won't work either.

Since most AI designers choose not to understand the human brain, they ignore its strengths and ruminate on its weaknesses; some even take pride in their ignorance of neurobiology (8). The usual weaknesses mentioned are that compared to modern computer hardware, the brain is slow, it has limited storage capacity, its memory is not reliable, it has a limited lifespan, and it has finite sensors (11). Probably the main weakness that brain critics like to point out is the first one - compared to modern computers, the brain is slow (11). Neurons can perform about 200 operations per second; sounds fast, but compared to a modern microprocessor which can perform two billion operations per second - ten million times as fast - it's slower than molasses. Likewise, axons conduct impulses from one neuron to the next at a speed of 120 meters per second; again it sounds fast, but compared to speed-of-light communication at 300 million meters per second - two and a half million times as fast - it's again slower than molasses. So really, if the brain processed information like a computer, it would have been left in the dust decades ago. Yet somehow, despite its relatively ridiculously slow rate of processing, the brain outperforms even the most powerful computers in almost every simple, everyday task imaginable. In one example, a human shown a picture and asked to determine if there is a cat in the image can reliably do so in less than half a second; the same task is extremely difficult or impossible for a computer (8). How does the brain do it? Memory retrieval, not computation. The brain simply retrieves a memory from its neocortex, one learned from its prior experiences with cats. The activation of this memory involves a chain of about 100 neurons and therefore only requires about 100 steps of activation. Half a second of processing time. The computer, however, requires billions of steps as it "computes" the solution to the problem. A lot longer than half a second, if it can do it at all. In another example, most people can reliably catch a ball thrown at them in under a second; again, this task is extremely difficult for a computerized robot arm (8). How does the brain do it? Same answer - memory retrieval, not computation. The brain retrieves a memory that was learned over years of practice and is now stored within it, a behaviour that requires a limited number of steps to execute. The computerized robot arm, however, has to make millions of calculations to catch a ball - it has to solve numerous complex equations to calculate the flight path of the ball and determine where it will be when it reaches the arm, then it has to solve numerous complex equations to calculate the concerted adjustments in the joints of the arm needed to move the hand into the proper position, and it has to do this over and over again, because as the ball approaches, the robot gets better information about the ball's location and flight path. It turns into a programming nightmare. Therefore, despite being millions of times slower, the brain of a child outperforms the most powerful computers in simple, everyday tasks. Computing speed may beat memory retrieval when it comes to playing chess, but when it comes to the far more essential tasks needed for surviving in the world, memory retrieval trumps computing speed many times over.
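Those speed comparisons are easy to check with back-of-the-envelope arithmetic using the figures quoted above:

```python
neuron_ops_per_second = 200
cpu_ops_per_second = 2_000_000_000                   # the "two billion" figure quoted above
print(cpu_ops_per_second / neuron_ops_per_second)    # 10,000,000 -> ten million times faster

axon_speed_m_per_s = 120
light_speed_m_per_s = 300_000_000
print(light_speed_m_per_s / axon_speed_m_per_s)      # 2,500,000 -> two and a half million times faster
```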

(2) SGAI modelled on the human brain yet improved with technology.

Clearly, the best approach to SGAI design is a middle path, one that respects the human brain by modelling the brain's unique structure and function yet tries to improve on it by innovating around the brain's weaknesses with technology; if we could design a SGAI around prediction based on memory retrieval that also took advantage of the blinding speed of modern microprocessors, it would have the best of both worlds. There aren't too many SGAI designers out there adopting this approach, and the minority that are have a tendency to focus on a specific part of the human brain rather than trying to model the whole thing. Hawkins himself has chosen to concentrate exclusively on the neocortex (8). Using this approach he has designed a system known as Grok, a word coined by Robert Heinlein in his 1961 novel Stranger In A Strange Land (15), which means "to understand so thoroughly that the observer becomes a part of the observed." Grok uses a cortical learning algorithm that partially mimics the neocortex and is currently at work predicting anomalies for customers using Amazon services (16). The neocortex is admittedly a crucial structure for intelligence, but it is still one brain structure among many; the many subcortical brain structures such as the thalamus and the basal nuclei are often considered primitive remnants of a bygone era, but they too have critical functions - for instance, the thalamus is imperative in the formation of attention states, whereas the basal nuclei are imperative in handling uncertainty. These structures all evolved prior to and alongside the neocortex, they all communicate heavily with the neocortex through circuits, and they are all critical for maximizing memory retrieval by and from the neocortex. Contemplating the neocortex without considering the other structures is like contemplating the purpose of part of a wheel; the function of the wheel remains elusive unless the whole wheel is present. Most or all of the various brain structures will need to be accounted for in any successful SGAI designs.

Finally, we must consider the potential structure and behaviour of SGAI modelled on the human brain yet improved with technology. Structurally, it would be a machine containing all of the relevant human brain structure analogues interconnected with various computational elements, or many such machines arranged into a network, or one or more such machines installed into the body of an autonomous robotic vehicle or ship, or all of the above installed into a vehicle or ship carrying human passengers. Behaviourally, the initial generations would probably exhibit child-like traits and be far less intelligent than humans. First-generation SGAI modelled on the human brain yet improved with technology would need time to learn how to use its thalamus, basal nuclei, and neocortex analogues, not to mention the other brain structure analogues, just like a real brain. During that time, it would be easily distracted until its thalamus analogue learned to create attention states, intolerant of ambiguity until its basal nuclei analogue learned to handle uncertainty, and naive until its neocortex analogue began constructing its memory-based model of the world. Patience would be required to work with it. The pioneering generations of SGAI modelled on the human brain yet improved with technology would also be far less intelligent than humans despite their technological advantages; humans would need additional time, on the order of decades, to learn more about the human brain, identify the wrinkles in their SGAI designs - there would be many, many unforeseen wrinkles - and improve them. Eventually though, SGAI modelled on the brain yet improved with technology could be designed to the point of superintelligence, allowing the SGAI to make predictions about more abstract patterns and longer temporal sequences in the world than any human brain. The additional technological advantages might also endow it with speed superintelligence (being much faster than any human brain), multitasking superintelligence (processing more pieces of information at any one time than any human brain), and collective superintelligence (connecting with other superintelligences so that its overall performance spans a larger number of general domains than any human brain), not to mention the additional sensory add-ons that could be built in, such as magnetoception (detecting magnetic fields), electroreception (detecting electrical fields), and electromagnetic spectrum detection (detecting electromagnetic frequencies outside the range visible to humans, such as X-rays, infrared radiation, and radio waves).

The best approach to SGAI is to model it on the human brain while also taking advantage of the power of technology. However, this isn't going to happen as long as the brain remains disrespected and prediction based on memory retrieval remains misunderstood. Using our flight analogy from before, an aircraft does not have to be modelled on a bird if we just want it to fly, but if we want the aircraft to be able to do everything else a bird can do - such as land on a small tree branch, dive into a river to grab a fish, build a nest, and sing a song - then the aircraft probably does have to be modelled on a bird.

Needing AI

Presuming we can create SGAI, the next question we have to ask ourselves is whether or not we will really need it. To answer this question we must distinguish between wants and needs - a want is a desire to possess or do something (17); a need is something essential (18). Many people don't want to work and don't want to die; superintelligent SGAI could not only take on most or all of the work, it could also assist humans in finding ways to extend their own lifespans. However, wants are not really compelling enough to convince an intelligent person that SGAI ought to be pursued. There must be a need, something crucial for the survival of life as we know it, or at the very least human life. Frankly, there will be a need, and that need is deep space exploration. The Earth will not sustain humans forever; the Sun will exist for another five billion years, but as its luminosity increases, complex multicellular life on Earth is estimated to become extinct in as little as 800 million years from now (19). Some people will argue that this is a lot of time - that may be so, but it is still finite. Moreover, this assumes that humans don't get taken out by a global extinction event - like an asteroid impact, a supervolcano, or climate change - in the interim. There have been at least five global extinction events during the last 500 million years on Earth, and each of them managed to eradicate 75% to 96% of all of the species on the planet (20). Therefore a global extinction event is more likely to happen than not over the next couple of hundred million years. So unless we want to set an extinction date for ourselves, we are going to have to reach for the stars.

(1) WNAI and space exploration.

First we have to start with our own solar system; WNAI has already helped us to explore much of the solar system and it will continue to do so over the coming decades and centuries. Some of the most obvious WNAI examples are the Mars rovers, automated motor vehicles that propel themselves over the surface of Mars, controlled remotely from Earth across a distance of roughly 225 million kilometers. The rovers have provided a lot of useful information about the geology and weather of Mars. Since sending commands to a rover involves a one-way time delay of about thirteen minutes, NASA can either transmit a series of specific commands, or it can give the rover a target and allow it to autonomously find its own way there (21). Easy either way. While humans need to be involved in exploring our solar system at some level (22), there will be situations where it will be a lot harder to send a human explorer than to send an autonomous WNAI. For example, fully exploring Ganymede, with its oceans estimated to be 100 kilometers deep and buried under a thick icy crust (23), is much more feasible with an autonomous WNAI than with a human explorer. The WNAI could be given either a series of specific commands or a target, and there is also the possibility of advancing telepresence technology, which would allow the WNAI to provide the virtual experience for a human "exploring" from a safe haven on Earth. Basically, WNAI will probably be adequate for exploring our solar system; SGAI will not be needed for that.
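The thirteen-minute figure follows directly from the distance quoted above and the speed of light (a rough check; the true Earth-Mars distance, and hence the delay, varies as the planets move):

```python
distance_km = 225_000_000        # the approximate Earth-Mars distance quoted above
light_speed_km_per_s = 299_792   # radio commands travel at the speed of light
delay_min = distance_km / light_speed_km_per_s / 60
print(round(delay_min, 1))       # ~12.5 minutes one way, i.e. roughly the thirteen quoted
```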


(2) SGAI and space exploration. 

Realistically though, the solar system is a tiny drop in the potential infinity of space, and eventually we will have to explore beyond it. And it is a long way before we find anything; even the nearest star, Proxima Centauri, is 4.24 light-years from Earth. To put this into perspective, the Voyager probes, travelling at a blistering speed of 61,000 kilometers per hour, would take 76,000 years (2,500 human generations) to reach Proxima Centauri (24). That's just to reach a star that probably doesn't even have any orbiting planets (25). The nearest potentially habitable planets are hundreds of light-years further. To get around these mind-blowing distances, interstellar travel will have to be one of four types - fast, unmanned (possibly by sending light-weight nanoprobes that can approach the speed of light) (26); slow, unmanned (by sending a more conventional robotic probe); fast, manned (by sending a ship that travels at 5-10% or more of the speed of light); or slow, manned (by using either ark-like "world ships" containing multiple generations of people on board, or "sleeper ships" with passengers lying inert in suspended animation or cryonic preservation, or "embryo ships" carrying human embryos cared for by robots) (27). Faced with these four options, if the goal is to get humans into deep space, which it has to be at some point, then we will have to choose one of the manned options: either we go for the fast, manned option by inventing a way to travel at velocities approaching the speed of light, which may not be possible, or we go for the slow, manned option, which will only be possible if we develop superintelligent SGAI. In slow, manned interstellar travel the ship and its passengers will invariably encounter innumerable new and dangerous situations requiring rapid and critical decisions to be made about a complex and gigantic ship over a journey lasting hundreds if not thousands of years. Some of these decisions will allow little margin for error, and their outcomes may determine whether the ship and its human passengers survive or perish. For this reason, regardless of whether it is a world ship, a sleeper ship, or an embryo ship, SGAI will have to be built directly into any slow, manned ship - a superintelligent SGAI that learns faster and more thoroughly from each situation than a human brain, makes faster and better decisions than a human brain, handles the complexity and size of the ship better than a human brain, and lasts for much, much longer than a human brain, allowing it to create a much more extensive model of the world - to be technically correct, a model of the world and space - with which to make predictions.
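The 76,000-year figure quoted above is straightforward arithmetic (a rough check using round numbers):

```python
light_year_km = 9.461e12                    # kilometres in one light-year
distance_km = 4.24 * light_year_km          # Earth to Proxima Centauri
voyager_speed_km_per_h = 61_000

years = distance_km / voyager_speed_km_per_h / (24 * 365.25)
print(round(years))        # ~75,000 years, in line with the ~76,000 quoted above
print(round(years / 30))   # ~2,500 human generations of roughly 30 years each
```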

So if we are to survive more than a few hundred million years, we will have no choice but to set up colonies in star systems outside of our own. Unless we invent a way to travel at velocities approaching the speed of light, we will have to do this with complex, gigantic ships designed for interstellar travel while carrying large numbers of people. This is a situation in which the human brain is just not up to the task; only a ship-embedded superintelligent SGAI has any chance of success. So yes, at some point, we need to develop SGAI.

Fearing AI

Recognizing that we eventually need SGAI if humans are to explore deep space and survive, the next question we have to ask ourselves is whether or not a superintelligent SGAI will destroy us before we even get to that point. In 1958, the same von Neumann who described the first design for a digital computer also proposed that a hypothetical event known as the technological singularity would occur once SGAI exceeded human intelligence (28). In 1965, the mathematician IJ Good summarized the technological singularity as follows (29).

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

The hypothesis is that if superintelligent SGAI is created, it will exhibit a "runaway effect" whereby machines smarter than humans will design machines smarter than themselves, which will in turn design machines even smarter than they are, and this process will continue until human intelligence is completely left in the dust. In his 1993 essay entitled The Coming Technological Singularity, the mathematician and computer scientist Vernor Vinge popularized this hypothetical event (30), and it was popularized to an even greater extent several years later by the futurologist and inventor Ray Kurzweil (31). Vinge guessed that it would happen within thirty years, by the year 2023. Kurzweil, by the year 2045. The supposedly dire consequences of the technological singularity are discussed at length by Bostrom in Superintelligence, including the possibility of one or several superintelligent SGAIs developing a level of technology sufficient to achieve complete world domination and even threaten all of humanity with existential catastrophe, and how the various takeover scenarios might play out (11). Basing their guesses on computer-modelled SGAI, most people guess that the technological singularity will happen within years to a few decades and that it will herald a bad outcome for humanity.

(1) Guessing about the future.

Yet how good are humans at guessing about a future that is years to decades away? Regarding the timing of the technological singularity, the only thing that humans know for sure is that it must happen sometime after SGAI is created. Yet humans have no idea when SGAI will be created; a comprehensive study using a database of 95 timeline guesses from the 1950s to the present day discovered two important things - first, there is no difference between the guesses of experts and non-experts as to when SGAI will be created; and second, regardless of the year of the guess, people usually guess that SGAI will be created in 15 to 25 years (32). The problem with the guesses from the experts is that they usually base them on things like Moore's Law, which observes that the number of transistors on integrated circuits has doubled every two years for the last several decades, roughly meaning that computers have doubled in processing speed every two years over that period (8,11). But as we saw earlier, raw speed does not result in intelligence. Moreover, choosing a time period of 15 to 25 years is just the safest bet that one can make about any event involving technology; it's close enough to worry about, yet far enough to allow further advances in technology to make the guess more likely to happen (11). Regarding the outcome of the technological singularity, looking back in history, people have often tried to guess what the future will look like, and most of those guesses turn out to be rather silly - for example, in 1900 the Boston Globe guessed that Boston in the year 2000 would be a city of moving sidewalks, pneumatic tube food delivery, and giant airships littering the sky, capable of transporting people to Europe in as little as four days (33). Somewhat off the mark. Therefore, while the experts believe that they are making "predictions" regarding the timing and outcome of the technological singularity, let's call a spade a spade - they're guessing, and those guesses should not be taken too seriously. Still, since most of the guesses to date regarding the technological singularity seem to threaten humanity, the topic warrants further contemplation here. Keep in mind these will also be guesses, but they won't be based on computer-modelled SGAI, they'll be based on brain-modelled SGAI. Which may or may not slightly elevate them from the level of a guess to that of a weak estimation - let your own brain decide, that's what it's there for.

(2) Weaponized WNAI.

Before we discuss SGAI and the technological singularity, it's worth mentioning the potential threat to humanity from weaponized WNAI. By definition, current versions of weaponized WNAI are not intelligent, and future versions of weaponized WNAI won't be intelligent either. Without intelligence, future weaponized WNAI will remain a tool wielded by humans; a tool with great destructive power, but a tool nonetheless. There are already enough computer-guided nuclear weapons in existence to wipe out humanity many times over; creating more destructive weaponized WNAI might increase our ability to wipe out humanity a few times more, but this isn't going to make the world that much more dangerous for humanity than it already is. Even so, concerns persist that future weaponized WNAI might present a new kind of threat to humanity compared to current weaponized WNAI - for example, the theoretical physicist Stephen Hawking and several other high-profile people recently warned about the risks that autonomous weapons pose to humanity, arguing that future autonomous military robots would be more dangerous than any weapons to date since they could select and engage targets without human intervention (34). However, once launched, a nuclear weapon also proceeds to engage its target without human intervention; for some puzzling reason, many nuclear weapons are not fitted with a post-launch destruct system (35). Future autonomous military robots would be no different; in fact, one would think that any intelligent programmer would install a fail-safe in the form of a kill-switch in the event that an autonomous military robot deviated from its mission, which might actually make them less threatening to humanity than the current threat from nuclear weaponry. Destructive potential aside, there's really no difference between current and any potential future weaponized WNAI - both require a human to program their objectives, therefore that human controls their objectives. So weaponized WNAI can only threaten humanity in the future the way that it already does, as a tool wielded by humans. To reduce the threat, humans must find ways to make it less likely for dangerous humans to wield weaponized WNAI, which is a human social issue, or humans must find ways to make humanity less susceptible to being wiped out by weaponized WNAI, which is best done by exploring and colonizing space so that even if one or more planets or even star systems were wiped out, humanity would survive somewhere else.

(3) SGAI and self.

If humans were to create SGAI, over time it would be improved to the point that it exceeded the intelligence of the human brain, in which case such superintelligent SGAI would achieve the technological singularity. However, the timing and outcome of the event would depend upon whether or not the SGAI developed a self, defined as the intrinsic drives, passions, and interests that constitute one's uniqueness or essential being (36). Without a self, SGAI could not be self-aware, which is the capacity for introspection and the ability to recognize one's self as an individual separate from the world and other individuals (37), nor could it have free will, which is the ability to make choices and set objectives unconstrained by certain factors, such as humans (38). So far, we've been talking about intelligence, which in essence is making predictions about the world using a memory system; this definition of intelligence says nothing about having a self. Yet whether or not SGAI develops a self critically affects the timing, nature, and outcome of any technological singularity.

If SGAI lacked a self, then rather than recognizing itself as an individual it would recognize itself as part of the world; it would not be self-aware or have a free will of its own. Let's pretend that most AI designers respected the human brain and were trying to model SGAI after it. If humans made this switch in design paradigms right now, it is conceivable that sub-human intelligence self-lacking SGAI could be created within a few decades. However, even if it was created as soon as this, it would take several more decades for humans to learn more about the brain, identify the wrinkles in their SGAI designs, and improve them to the eventual level of superintelligence. So overall, it will probably take at least the greater part of a century before humans create superintelligent self-lacking SGAI. Moreover, since superintelligent self-lacking SGAI would not have free will, it could not initiate any technological singularity of its own volition; humans would have to direct it to do so, but the process would be slow and staggered rather than explosive, occurring over decades or even centuries as humans would naturally regulate the process so as to understand the new designs and figure out how best to use them. It would be the era of a prolonged pseudo technological singularity, with superintelligent SGAI designing even more superintelligent SGAI, but all of the designs lacking a self, not self-aware and with no free will, and therefore an era ultimately coached and guided by humans. It is during a pseudo technological singularity that superintelligent self-lacking SGAI could be built into slow, manned ships as they embarked on the necessary venture of interstellar space travel. Regarding the outcome of a pseudo technological singularity, it would depend upon whether or not humans could successfully regulate its rate of advance so that they kept pace with understanding each successive generation of superintelligent self-lacking SGAI; a general understanding by a majority of people with an intimate understanding by a minority of people would do fine, the way a majority of people understand cars well enough to drive them, but only a minority understand cars well enough to build them. If the pseudo technological singularity could be successfully regulated, not only in humans themselves but also in superintelligent self-lacking SGAI, the latter perhaps through capability controls (limiting what it could do) or motivation selection (limiting what it would do) (11), humans might have time to understand the design of each new generation as it came out, and the outcome for humanity would probably be positive. However, if regulation failed at some point and the designs significantly outpaced human understanding, humans would be playing with something well beyond their comprehension. Theoretically, a superintelligent self-lacking SGAI might perceive humanity in one of three ways - with benevolence, malevolence, or indifference - but without a self it would most likely be indifferent. What would happen if humans tried to use a superintelligent self-lacking SGAI that was beyond human comprehension and indifferent towards humanity for their own agendas is anyone's guess, and so the outcome for humanity would be unpredictable.

Considering the alternative scenario where SGAI contained a self, it would recognize itself as an individual separate from the rest of the world, self-aware and with a free will of its own. Regarding the timeline of its creation, there are two possibilities. The first possibility is that a self might somehow spontaneously emerge at some point during the development of self-lacking SGAI - since the creation of superintelligent self-lacking SGAI would require at least the greater part of a century, and even if we only allowed a couple of decades at the start of a pseudo technological singularity for the emergence of a self, it would be at least a century before this could happen. The second possibility is that a self might be installed directly into SGAI, either right from the start or at some point during the development of self-lacking SGAI - however, humans do not at all understand how the human brain produces a self, how it is self-aware, or how it has free will; these concepts are more philosophical than biological, and until they are understood at the biological level, it seems impossible for humans to purposefully create superintelligent self-containing SGAI in anything less than a century. So regardless of whether a self spontaneously emerged or was installed directly, it will probably take at least a century or two for humans to develop superintelligent self-containing SGAI. However, if and when this did occur, there would follow a true technological singularity, explosive in nature and driven by successive generations of increasingly superintelligent self-containing SGAI. Regarding the outcome of a true technological singularity, it would depend upon how the prototypical superintelligent self-containing SGAI that was self-aware and had free will perceived humanity. Would it be benevolent, malevolent, or indifferent? To answer this question we must first realize that a superintelligent self-containing SGAI that was self-aware would possess a very great understanding of itself. There is an ancient Taoist expression that is relevant here (39).

"Knowing others is intelligence; knowing yourself is true wisdom."

As the Taoists point out, human brains have a capacity for more than just intelligence; they also have a capacity for wisdom. If intelligence is the ability to understand the world through prediction, wisdom is the ability to understand one's self through prediction. Therefore a superintelligent self-containing SGAI modelled on the brain that was self-aware would not only be far more intelligent than any human that has ever lived, it would also be far wiser than any human that has ever lived, and so the outcome for humanity would be unpredictable, but reason suggests that it would be positive.

When it comes to the timing and outcome of the technological singularity, people are guessing, and since they usually base those guesses on a mythical notion of computer-modelled SGAI, they guess that it will happen in a few decades and that its outcome will be bad for humanity. Basing our guesses on brain-modelled SGAI, sub-human intelligence self-lacking SGAI might be created within a few decades, but it will be at least the greater part of a century before humans have the ability to create superintelligent self-lacking SGAI, and at least a century or two before humans can create superintelligent self-containing SGAI. If superintelligent self-lacking SGAI was created, there would follow a long, drawn-out era of a pseudo technological singularity coached and guided by humans; as long as humans successfully regulated its rate of advance so as to understand each successive generation, the outcome for humanity would probably be positive, but if the designs outpaced human understanding, the outcome for humanity would be unpredictable. If and when a self spontaneously emerged or was installed directly into self-lacking SGAI, the true technological singularity would occur culminating in a self-containing SGAI that would vastly exceed humans in both intelligence and wisdom, resulting in an outcome for humanity that would be unpredictable, but reason suggests that it would be positive. Which is an encouraging thought.

Last Words

Every year, the topic of AI seems to pop up more and more in the news. It's a topic that can't be avoided, nor should it be. Yet the field is full of misguided efforts to model SGAI after computers or neural networks rather than the only existing standard of true intelligence - the human brain. AI designers fervently chase SGAI, but most of them do not respect the human brain, and without that respect they just won't get there. Outside of the AI field, many people fear SGAI and want to put the brakes on it altogether, but they're looking at the issue without seeing the eventual need for it in deep space exploration. Meanwhile, the experts' guesses that the technological singularity could happen within a few decades and that it will be bad for humanity are based on their knowledge of computers, and so those guesses are far too soon and far too negative.

We need to keep improving WNAI and pursuing SGAI. Regarding WNAI, we must stop vaunting its successes in specialized areas and realize that at the end of the day it's just a tool wielded by humans; it's going to help us explore the rest of our solar system, and any future weaponized WNAI has no chance of wiping out humanity on its own unless it is directed to do so by humans, and there's already more than enough weaponized WNAI on the planet to do that. Regarding SGAI, the development approach is all wrong, with too many misguided efforts to model SGAI after computers or neural networks. It must be modelled on the structure and function of the human brain itself, and not just one part of it, but most or all of it. Those aspects of technology that can enhance the structure and function of such a model can be added in without changing the real secret to intelligence, which is prediction through memory retrieval. Using this approach, we might be able to create sub-human intelligence self-lacking SGAI within a few decades and superintelligent self-lacking SGAI within the greater part of a century, but superintelligent self-containing SGAI and the true technological singularity lie at least a century or two away. Although the outcomes are probably positive or unpredictable, we clearly do pose a threat to ourselves if and when we start to play with more and more powerful versions of superintelligent self-lacking SGAI during the era of a pseudo technological singularity. Seen in this light, the creation of superintelligent self-containing SGAI, the subsequent true technological singularity, and the runaway creation of self-containing SGAI with extreme intelligence and wisdom might be an existential godsend rather than an existential catastrophe.

Hopefully that's slightly more than a guess.

Solace (inspired by Tom Oxley and James Leyden).

References
(1) http://www.oxforddictionaries.com/definition/english/artificial.
(2) http://www.oxforddictionaries.com/definition/english/intelligence.
(3) Turing A. 1950. Computing machinery and intelligence. Mind 59, 433-460.
(4) Aamoth D. 2014. Interview with Eugene Goostman, the fake kid who passed the Turing Test. Time website. http://time.com/2847900/eugene-goostman-turing-test/.
(5) Searle JR. 1980. Minds, brains and programs. Behavioural and Brain Sciences 3(3), 417-457.
(6) https://en.wikipedia.org/wiki/John_Searle.
(7) Tang PCL and Adams ST. 1998. Can Computers Be Intelligent? Artificial Intelligence and Conceptual Change. International Journal of Intelligent Systems 3, 1-17.
(8) Hawkins J, Blakeslee S. 2004. On intelligence. 1st ed. New York Times Books.
(9) Goertzel B, Pennachin C. 2007. Contemporary Approaches to Artificial General Intelligence. Springer.
(10) von Neumann J. 1945. First Draft of a Report on the EDVAC.
(11) Bostrom N. 2015. Superintelligence. OUP Oxford.
(12) Gartry L. 2015. "Cyborg activist" Neil Harbisson, with antenna in skull, opens up on visit to Perth's Curtin University. ABC News website.
(13) Sun R, Alexandre F. 1997. Connectionist-Symbolic Integration. Lawrence Erlbaum Associates Inc.
(14) McCulloch W, Pitts W. 1943. A Logical Calculus of Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics 5(4), 115-133.
(15) Heinlein RA. 1961. Stranger in a Strange Land. Ace.
(16) Harris D. 2014. Numenta, Jeff Hawkins' AI startup, is now only about learning your AWS patterns. Gigaom website. https://gigaom.com/2014/03/25/numenta-jeff-hawkins-ai-startup-is-now-only-about-learning-your-aws-patterns/.
(17) http://www.oxforddictionaries.com/definition/english/want.
(18) http://www.oxforddictionaries.com/definition/english/need.
(19) Franck S, Bounama C, von Bloh W. 2006. Causes and timing of future biosphere extinctions. Biogeosciences 3, 85-92.
(20) Barnosky AD, Matzke N, Tomiya S, Wogan GOU, Swartz B, Quental TB, Marshall C, McGuire JL, Lindsey EL, Maguire KC, Mersey B, Ferrer EA. 2011. Has the Earth's sixth mass extinction already arrived? Nature 471, 51-57.
(21) Anthony S. 2012. How does NASA drive Mars rover Curiosity? Extremetech website. http://www.extremetech.com/extreme/143884-how-nasa-drives-mars-rover-curiosity.
(22) Crawford IA. 2012. Dispelling the myth of robotic efficiency: why human space exploration will tell us more about the Solar System than will robotic exploration alone. Astronomy and Geophysics 53, 2.22-2.26.
(23) Kramer M. 2015. Jupiter's Moon Ganymede Has a Salty Ocean with More Water than Earth. Space.com website. http://www.space.com/28807-jupiter-moon-ganymede-salty-ocean.html.
(24) O'Neill I. 2008. How long would it take to travel to Proxima Centauri? Astroengine website. http://astroengine.com/2008/07/09/how-long-would-it-take-to-travel-to-proxima-centauri/.
(25) Kurster M, Hatzes AP, Cochran WD, Dobereiner S, Dennerl K, Endl M. 1999. Precise radial velocities of Proxima Centauri. Astronomy and Astrophysics. http://arxiv.org/pdf/astro-ph/9903010v1.pdf.
(26) Wilson DH. 2009. Near-lightspeed nanospacecraft might be close. NBC News website. http://www.nbcnews.com/id/31665236/ns/technology_and_science-innovation/t/near-lightspeed-nano-spacecraft-might-be-close/#.VdwlvCWqqkp.
(27) https://en.wikipedia.org/wiki/Interstellar_travel.
(28) Ulam S. 1958. Tribute to John von Neumann. Bulletin of the American Mathematical Society 64(3).
(29) Good IJ. 1965. Speculations Concerning the First Ultraintelligent Machine. Advances in Computers 6.
(30) Vinge V. 1993. The Coming Technological Singularity. Whole Earth Review.
(31) Kurzweil R. 2000. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books.
(32) Armstrong S, Sotala K. 2012. How We're Predicting AI - or Failing To. Machine Intelligence Research Institute.
(33) Anderson TF. 1900. Boston at the end of the 20th century. The Boston Globe.
(34) Smith D. 2015. Stephen Hawking, Elon Musk warn of "third revolution in warfare" with autonomous weapons. ABC News website.

(35) Wiberg H, Petersen D, Smoker P. 1994. Inadvertent Nuclear War: The Implications of the Changing Global Order. Pergamon.
(36) https://en.wikipedia.org/wiki/Self.
(37) https://en.wikipedia.org/wiki/Self-awareness.
(38) https://en.wikipedia.org/wiki/Free_will_(disambiguation).
(39) Stenudd S. 2011. Tao Te Ching: The Taoism of Lao Tzu Explained. Arriba.
