I hadn’t heard of these folks, the Singularity Institute, before they held their recent “Summit” (Sept 8–9 in San Francisco), although I’ve seen other envelope-is-expanding indicators, such as the recent prediction that Man will create “wet artificial life” within the next 3 to 10 years.
Of course I read Colossus (1966, D. F. Jones) in my early teens, and it was a decade old then (two giant computers hook up, go sentient, and try to control the world). And I read Neuromancer (William Gibson, 1984) several years after it was published, too (computer hackers meld hardware into their brains and otherwise technologically enhance their bodies and minds). These two science fiction books cover a couple of the future paths toward “the singularity” – and here is my chief complaint, now that I’ve typed it: ironically, there isn’t just one “singularity”. A singularity is a type of a thing, like a pinnacle or a sphere. There isn’t just one; every mountain is (or has) a pinnacle. All playground balls are spheres; maybe you could say that the Earth is the sphere, but I can make a heliocentric argument that our sun is the sphere. Saying that this topic comprises the singularity is dangerous hubris, and, again with some irony, it’s indicative of why humans may not be so easily subjugated in the universal intelligence rankings.
And I’m using a lot of fancy words, which brings me to another thing that bothered me about the Singularity folks’ manifesto: there isn’t nearly enough academia-speak. Sure, there are some long words, but once you’re comfortable with “singularity” and “artificial intelligence” there aren’t many mind-bending concepts. Even an introductory text to such an audacious topic should be rife with the pretentious use of multi-syllabic and obscure terms.
But back to my science fiction reading: these two books covered, respectively, “artificial intelligence” and “brain-computer interfaces”, apparently the primary technology pathways to the smarter-than-human singularity.
The Introduction does eventually get around to the question of what it means to be a smarter-than-human brain – but at this point they throw up their hands and say that we’re not smart enough to know – it’s the singularity, get it? I half-expect someone to elbow me in the ribs, nudge-nudge, wink-wink. Why is this impossible to measure? We’ve devised all sorts of tests to measure our own intelligence, and that of animals. I’m not sure we’ve been particularly successful at it, but I don’t think we’d be much less successful at measuring something smarter than ourselves than we are at measuring beings less smart. I clearly understand what it means to run at 25 miles per hour. I can’t do it, but I can measure it and observe it without blowing any neurons.
In fact, how smart are we? I’ve learned that some researchers say that adult chimps are about as smart as the average human toddler. Of course I don’t know any toddlers who could survive on the African savannah, even without predators; but I guess that’s not the kind of smart they mean. How smart are people? Some are clearly smarter than others, by orders of magnitude. So does passing this singularity mean that something has to exist in our environment that is smarter than the smartest person ever? Or smarter than the smartest potential person – that is, one who knows how to use all of their brain? (The Introduction discounts as an “urban legend” the widely held belief that we only effectively use a small portion of our brain, but I’ve been taught some of the research on people learning to use undamaged portions of their brain to replace damaged portions without further loss. The conclusion is consistently that portions of the brain were previously underutilized.) The Introduction also makes a big deal out of brain size. Sure, people have bigger brains than mammals of comparable size – but we don’t have really big brains in an absolute sense. Check out the brain size of elephants. Also, brain size is not a good predictor of intelligence among other mammals, or animals at large. Are house cats smarter than squirrel monkeys? Their brains are considerably larger, so they must be.
I’m going to contend that in the same way we could recognize the smartest guy in seventh-grade Algebra class, we can recognize something that is smarter than all people. But just like most of us can’t comprehend quantum physics or gene therapy, we won’t grok the super-brain. Should we panic? These Singularity Summit folks seem to think it’s a good idea to get to that point; that it’s some grand adventure, and inevitable, so let’s find out what happens in that next chapter ASAP. I’m a bit more cautious. Peter Thiel thinks it’s an investment opportunity. I agree that if we hit the singularity and the bad thing happens, losing your life savings will be a secondary concern. We may be better served by pondering how one should go about swimming in that world before we dive into it.
So what allowed humans to dominate the current environment? I propose that there are three pillars, three strengths that, in fairly equal measure, have allowed us to rise to the top (and I’m ignoring any religious angles here; it’s not too difficult to be propelled to the top of the heap if some omnipotent deity is pushing you).
Our brain. Okay, we’re comparatively smart. Most other animals just don’t measure up (neither do today’s computers); but there are a few animals that may be, or have even been argued to be, smarter by some raw measure of brain power, speed, and size. Dolphins spring to mind; elephants compete here too.
Our ability to communicate. A language with a large vocabulary is a fairly recent commodity for humankind, but we had pretty indicative grunts long before that. Sure, whales, dolphins, chimps, and (in some odd hive/colony way) insects are remarkably communicative – but none of them seem to have quite our knack for self-expression.
Our dexterity. Remember that opposable thumb you have? Right; actually you’ve probably got two of them, most of us do. It’s a big deal. Dolphins fall out here; they can’t manipulate their environment and build tools the way we can. Apes and monkeys almost can. Evolutionists will tell you that we’re just better apes – this is a big way we’re better. Watch out for the elephants in this category, though: an elephant’s trunk has more than 40,000 muscles. If they had two of them we might still be living in caves, watching Dumbo pave over our habitat.
So what about a smarter-than-human artificial mind? It trumps us on number one, by definition. Number two follows suit pretty well: with the internet, cell phones, and radio technology, it wouldn’t have any trouble talking to its neighbors. Number three is dicier. As I recall, that’s where Colossus fell down; people are just very maneuverable. And we can act autonomously or in concert as a huge mother-loving team. The wetware singularity might have us in trouble – I see that as farther off and less significant, at least initially; a second singularity, and an evolutionary one. The Bionic Woman is coming back to TV on September 26; I don’t see her, even if she were real, shattering our society. Not in our lifetime.
Of course the singularists spout off about the positive feedback loop that ensues after the singularity. An artificial intelligence may be self-improving, able to “rewrite its own source code”. And that’s intriguing (again, though, see pillar #3 to retain some composure). Humans aren’t done yet, not even after this singularity, but we could become more akin to pets, or beasts of burden; simple slaves, and that is a scary contemplation. Or we may be able to harness smarter minds with weaker bodies the same way we used horses through the ages, with their stronger bodies but weaker minds (and weaker communications). We already do it with computers, with their faster number crunching and algorithmic abilities. Just the same, I think I’m going to start storing my Roomba in the pantry, safely separated from my stationary computer, from now on.