Ask a ScienceBlogger: The Rapture for Nerds

The question for the week from the Seed overlords is:

“Will the ‘human’ race be around in 100 years?”

This is basically a Singularity question, and as such, I think it’s kind of silly. But then, I think the whole Singularity thing is sort of silly– as a literary device, it makes for some good SF, but as serious prognostication about the future, I think it’s crap.

Razib lays out the basic logic of the options: 1) Nerd Jesus arrives and spirits us all away in a cloud of nanobots, 2) We’re all gonna diiiieeee!!!, and 3) We muddle along more or less as always. PZ is more pessimistic, and also offers a concise argument against biological “transcendence” (“Four or five generations for a population as large as ours just isn’t enough time for major transformations”). My own take is below the fold.

I’m going to go for Door Number Three, on both lists of possibilities. Basically, I think we’ll continue to muddle along more or less as usual. There’ll be crises along the way, and there’ll be technological advances, but I think the disasters some people see in the future won’t be quite as bad as predicted, and the transformative technologies won’t pan out in quite the way that modern futurists expect.

So, a hundred years from now, I expect things will look more or less the way they do now, on a very coarse scale. There will be rich nations and poor nations, there will be arguments over whether the rich have too much, while the poor have far too little, there will be occasional wars and occasional famines and occasional plagues, but civilization as a whole will take the hits and keep going without a total collapse into barbarism.

Which nations are rich and which are poor is likely to shift a little bit (though I wouldn’t look for a complete reversal to put Sudan on top of the international power structure– I’m thinking more of India and China as global powers, and the US as, well, the modern UK), but I don’t expect there to be any total catastrophe that will wipe out the species.

And as for the idea that technological developments will render our descendants unrecognizable to us, I just don’t buy it. As PZ said, there’s no biological way for a population of six billion to be taken out in five generations, and the idea that other technologies would do the job also strikes me as incredibly improbable. In the next hundred years, we’re not only going to figure out how to implant supercomputers in the human brain, but also do six billion procedures to provide those computers to every man, woman, and child now alive? I don’t think so.

There’ll be humans around in 2106, and they’ll look and act pretty much like the humans of today. And they’ll be busily speculating on the ways that the humans of 2206 will be completely alien due to their advanced technology.

12 thoughts on “Ask a ScienceBlogger: The Rapture for Nerds”

  1. “As PZ said, there’s no biological way for a population of six billion to be taken out in five generations…”

    Tell that to the passenger pigeons.

    Shift your argument around a bit: “In the next hundred years, we’re not only going to figure out how to build small computers that communicate wirelessly with a global network, but also do six billion procedures to provide those computers to every man, woman, and child now alive? I don’t think so.” And yet in about 20 years we’re more than halfway towards giving every adult alive a cell phone.

    The point of the Singularity is not how it ends, but rather the power of an ever-increasing rate of change. I don’t know where we’ll be in a hundred years, but I doubt it will be “more or less” the same as today’s world. (Heck, today’s world is arguably not the same as the world of 1900…)

  2. Chad,

    You’ve said several times previously that you think the idea of a technological singularity is a silly one. But I don’t recall you ever giving a detailed argument against it. (Pointers gratefully accepted if I’ve missed it.)

    Vinge’s core argument (on which he has several variations) seems to be very simple:

    (1) We can expect computers that exceed human intelligence in the relatively near future.

    (2) Once (1) occurs, on a very short timescale we should expect computers that enormously exceed human intelligence.

    (3) Point (2) will result in an enormous burst of change that will very rapidly change the entire world.

    All three of these are certainly debatable. But I think they are all also quite plausible, and I’d be interested to hear if you think otherwise, or if you think there’s something I’m missing.

  3. Tell that to the passenger pigeons.

    There were six billion passenger pigeons?

    (1) We can expect computers that exceed human intelligence in the relatively near future.

    I know lots of humans. I’ve seen human intelligence both up close and from a distance. I’m not impressed.

    (2) Once (1) occurs, on a very short timescale we should expect computers that enormously exceed human intelligence.

    My 10-year-old pocket calculator in many ways already exceeds many humans’ intelligence.

    (3) Point (2) will result in an enormous burst of change that will very rapidly change the entire world.

    We’ll be able to predict the weather four days out instead of three? Carry Pi out to 17 billion digits? Figure out why dropped toast always lands on the buttered side?

    Wake me when future super-computers can find a way to get humans to start showing more compassion for each other and stop killing each other.

  4. As far as computers exceeding human intelligence goes (a difficult enough thing to define in any case), I suspect, as a software developer, that this is a LOT harder than the singularity people make out.

    i.e. 10^11 neurons, each with around 1000 synapses, gives 10^14 synapses. Given a ~60 Hz recycle rate, that implies 6×10^15 synaptic operations per second; that’s at least a million times more powerful than a good desktop PC (a quick script at the end of this comment checks the arithmetic). And that’s just a crude estimate of the processing power required; I suspect that once you start adding distributed memory and synaptic ‘calculation’, you add at least an order of magnitude to the problem. Then you have to somehow program and train the result (almost certainly neural-network-based); that takes around 20 years for a human brain, which has had millions of years of refinement to make it open to learning.

    Achieving that would require scaling Moore’s law to below the single-atom-per-component point, or some serious breakthroughs in 3D chip-making.

    And although people talk about exponential rates of change, the only long-term, solid ‘exponential’ rate has been Moore’s law (every other example I’ve seen is either very short-term or based on about 3 data points). Moore’s law is, however, an exponential consequence of a linear technology improvement: each steady die-shrink multiplies the chip’s component count by a constant factor, so repeated shrinks compound exponentially.

    I could also point out that the ‘ever-improving’ aircraft industry went from the Wright Brothers to Concorde in 70 years and has basically gone sideways, if not backwards, since then. For all the improvements in space satellites, we are still using improved V2 rockets to get them there. Per-capita food, energy and clean water haven’t improved since the 1970s, although the 1945–1973 trend was exponential. No exponential trend can be extrapolated indefinitely into the future, by definition.

    Sorry to be depressing, I actually like the idea of the singularity.
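
    For anyone who wants to check that arithmetic, here’s a minimal Python sketch; the constants are just the rough figures quoted above, and the desktop number is a ballpark assumption rather than a measurement:

        # Back-of-the-envelope brain-vs-desktop comparison, using the
        # rough mid-2000s figures from this comment.
        neurons = 1e11             # ~10^11 neurons in a human brain
        synapses_per_neuron = 1e3  # ~1000 synapses per neuron
        recycle_rate_hz = 60       # ~60 Hz firing ("recycle") rate

        brain_ops = neurons * synapses_per_neuron * recycle_rate_hz
        desktop_ops = 6e9          # assumed: a good desktop PC, ~10^9-10^10 ops/s

        print(f"brain estimate: {brain_ops:.1e} ops/s")           # 6.0e+15
        print(f"brain / desktop: {brain_ops / desktop_ops:.0e}")  # 1e+06, i.e. a million times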

  5. AndyD:

    Your argument is interesting, but it’s not strictly relevant to my comment. The question I’m addressing in my comment is not whether human-level computer intelligence is inevitable. It’s whether human-level computer intelligence (and also points 2 and 3 in my comment) is at least somewhat plausible. If points 1, 2 and 3 can all be reasonably defended, then the Singularity is not a silly idea, but one that can be taken at least somewhat seriously.

    From that perspective, your numbers argue in favour of taking human-level computer intelligence seriously. In particular, existing supercomputers are, conservatively, 1000 times more powerful than current desktop PCs, and the largest are much more powerful. So, using your numbers, with no more than 10 more generations of Moore’s law we’ll have hardware that matches the brain (the doubling arithmetic is sketched at the end of this comment).

    (Of course, your numbers can be debated, both pro and con. But that’s really my point, that this is a sensible debate to be having, and the Singularity cannot be written off a priori as a silly idea.)
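
    To make the doubling arithmetic explicit, a minimal sketch; the brain figure is AndyD’s estimate above, and the desktop and supercomputer numbers are the rough factors assumed in this comment:

        import math

        brain_ops = 6e15                        # AndyD's brain estimate, ops/s
        desktop_ops = brain_ops / 1e6           # "a million times" less than the brain
        supercomputer_ops = desktop_ops * 1000  # conservatively 1000x a desktop

        # Each Moore's-law generation roughly doubles available computing power.
        generations = math.ceil(math.log2(brain_ops / supercomputer_ops))
        print(generations)  # 10, since 2^10 = 1024 >= 1000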

    Vinge’s core argument (on which he has several variations) seems to be very simple:

    (1) We can expect computers that exceed human intelligence in the relatively near future.

    (2) Once (1) occurs, on a very short timescale we should expect computers that enormously exceed human intelligence.

    (3) Point (2) will result in an enormous burst of change that will very rapidly change the entire world.

    I’m writing from the conference Internet lab, so I don’t have time to go into much detail, but my problem is that I think step 1) is vastly more difficult than many people think (AI, like nuclear fusion, is ten years off and expected to remain that way). And even granting that step 1) is possible, I don’t know that step 2) automatically follows, and I’m a little doubtful that step 3) follows that in quite the way that Singularity enthusiasts expect.

    I can talk more about this later, but I really need to go see some talks.

  7. The Welfare State and Big Government are predicated upon continuous, escalating theft from infinitely deep Baby Boomer pockets. Boomers will undergo a fiscal phase transition from source to sink by 2015 retirement. Ubi est mea!!! (“Where’s mine?”) First World governments will phase invert from whipped cream to grease. Technological civilization will catastrophically and irreversibly collapse.

    The average human in 2100 AD will suffer bloody hands wondering why God buried steel rods in rock as fat priests exhort them about entering Heaven City.

  8. “As PZ said, there’s no biological way for a population of six billion to be taken out in five generations…”
    Tell that to the passenger pigeons.

    Well, it took at least a century from the passenger pigeon’s first encounter with firearms to extinction. Their drive to extinction took something more like 50 of their generations, not “five”.
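
    (The generation arithmetic, as a quick check; the roughly two-year pigeon generation time is an assumption on my part, not a figure from the thread:)

        years_to_extinction = 100  # "at least a century", per the comment above
        generation_years = 2       # assumed passenger-pigeon generation time
        print(years_to_extinction / generation_years)  # 50.0 -- "more like 50 generations"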

  9. “I’m writing from the conference Internet lab, so I don’t have time to go into much detail, but my problem is that I think step 1) is vastly more difficult than many people think (AI, like nuclear fusion, is ten years off and expected to remain that way). And even granting that step 1) is possible, I don’t know that step 2) automatically follows, and I’m a little doubtful that step 3) follows that in quite the way that Singularity enthusiasts expect.”

    I certainly agree that steps 2 and 3 aren’t automatic. I think reasonable arguments, both pro and con, can be made for both ideas.

    On step 1, I definitely disagree with you: I think AI is likely, although maybe several decades away. However, AI is one of those issues where it’s hard to say anything that hasn’t been said many times before, so I’ll leave it at that.

    I brought this up because I’ve rarely heard steps 2 and 3 discussed in much detail (unlike AI), and so was interested to hear whether you have a killer argument against them. Since your beef seems to be mainly with point 1, the point is moot.

  10. Two phrases: “Terrorists” and “Thermonuclear devices”. Technological certainty (close enough) in 100 years.

  11. Arrgh. Spend all weekend working on thesis stuff, miss the opportunity for a one-line end-of-question shutdown:
    So, using your numbers, with no more than 10 more generations of Moore’s law we’ll have hardware that matches the brain.

    Would anyone like to contact M. Nielsen and point out that there are at most two more generations of Moore’s law left, according to Intel’s roadmap documentation?
