
Singularity Summit: How we’re predicting AI

When will we have proper AI? The literature is full of answers to this question, as confident as they are contradictory. In a talk given at the Singularity Institute in San Francisco, I analyse these predictions from a theoretical standpoint (should we even expect anyone to have good AI predictions at all?) and a practical one (do the predictions made look as if they have good information behind them?). I conclude that we should not put our trust in timeline predictions, but that some philosophical predictions seem surprisingly effective; in all cases, though, we should increase our uncertainties and our error bars. If someone predicts the arrival of AI at some date with great confidence, we have every reason to think they’re completely wrong.
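As a toy illustration of the point above (using made-up dates, not the actual prediction dataset discussed in the talk), treating a handful of published timeline predictions as data points shows how wide the spread, and hence the honest error bars, turns out to be:

```python
import statistics

# Hypothetical predicted years for human-level AI.
# Illustrative only -- not drawn from the real survey data.
predicted_years = [2030, 2045, 2060, 2100, 2150, 2300]

mean = statistics.mean(predicted_years)
spread = statistics.stdev(predicted_years)

print(f"mean prediction: {mean:.0f}")        # around 2114
print(f"standard deviation: {spread:.0f}")   # roughly a century
```

With a spread on the order of a hundred years between confident published estimates, any single prediction pinned to a specific date deserves very little trust.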

But this doesn’t make our own opinions any better, of course – your gut feeling is as good as any expert’s, which is to say, not good at all.

Many thanks to the Future of Humanity Institute, the Oxford Martin School, the Singularity Institute, and my co-author Kaj Sotala. More details of the approach can be found online at http://lesswrong.com/lw/e36/ai_timeline_predictions_are_we_getting_better/ or at http://lesswrong.com/lw/e79/ai_timeline_prediction_data/


4 Comments on this post

  1. No.

No serious AI theorist I know of has yet accurately described the dimensional (plateau) problem, or why AI would ever relate to us. It’s like a stupid cybernetics problem constantly being played out by silly people with no real knowledge or logic.

> Sun = electrical supply to unit
> Planet = CPU / GPU [etc]
> Baseline life (bacteria, essentially, but also viruses etc) = OS / binary
> (Tree of Life) / complex organisms = non-aware programs around your meta-aware AI
> Run Life / any computational design where a computer is designing a program
> Hyper-parasitism (you may want to make whatever leaps here into biology)

~> A hyper-parasitical organism is never interested in only the basis for its survival.

I’ve never seen anyone actively describe, short of taking on the role of “GOD”, why an AI would have any interest in us beyond making sure it wasn’t turned off.

  2. Sigh ~

    A word or two there.

A hyper-parasitical organism is never interested in the elements of a system it cannot alter, even if those are dependent on the basis for its survival.

    Note: This is the human problem. #QED

  3. Oh, and FFS.

    ~ All AI has perfect communication (binary) with all other AI [dependent on cryptography].

    So you either get a (unified) singularity or you get a whole lot of parasites looking to break [possibly impossible] encryption in unending warfare.

    Ye Gods, most AI “thinkers” need to fuck right off.
