10 Questions for Ray Kurzweil: Reader input

A couple of days ago, I invited readers to submit questions they’d like Ray Kurzweil to answer. The Science Cheerleader and Bartacus will be interviewing Ray Kurzweil (Artificial Intelligence expert; king of the Singularity effort) in the coming weeks. Details are here. As a reminder, the deadline to submit questions is midnight, Monday 2/16.

Here are some terrific questions from Science Cheerleader readers:

Jon: Singularity University is clearly aimed at helping to shape the Singularity and hasten its arrival. Do exponential trends really need help and, if so, can we really expect to shape them?

Paul: 1) What is/will be the relationship between ethics and The Singularity? The rapid growth of science/knowledge leads to many advancements via engineering, but how can/will ethics be applied when mankind can no longer keep pace? Or will this be a problem? 2) What can be done about scientific literacy so that everyone can understand, at least at a basic level, the rapidly advancing technology?

Corey:
Given the slow and erratic progress in AI over the past 40 years, what makes Kurzweil so confident that machines will become intelligent (in the commonly understood sense) in the next 40? Or perhaps I should ask the flip side of the question: Suppose that things continue in much the way they are now, with increasingly powerful and miniaturized wireless devices making information available wherever we want it. Does that count as a “singularity”? It is easy for me to imagine, for instance, a brain implant that allows me to conduct Google searches purely by the power of thought–but that merging of biological and digital intelligence seems distinctly different from what Kurzweil means by singularity.

  • Jonathan Richardson

    Thanks for getting back to me. As I mentioned, my company’s name is Artistic Torso Creations. What I offer is: I can cast, mold, and finish with a bronze finish. The material I use is very safe, and I have a copy of the MSDS. Your cheerleaders will be able to see what they looked like years and years later. You only look young for a short time. My assistant and I can come to them, or they can come to my studio.

    Thank you in advance,
    Jonathan

  • Dan

    I’ve heard about the singularity idea, but haven’t had time to follow up in any detail, and a good summary would be very useful. I know there was a conference on this back in 2006, but I couldn’t be there. Maybe it’s time to buy the book (I didn’t know about it).

    To follow up on Corey’s comment, it seems to me that we still haven’t managed to come up with a compelling definition of “what is intelligence” in the first place. And, how does “intelligence” differ from “consciousness”, “self”, “awareness”, etc. (seems to me these are all different things, and understanding one doesn’t guarantee we automatically understand the others – I don’t think we even have a “self” so much as we have an impression of being a self that is fundamentally an illusion, so we can’t understand that until we solve the philosophical problem first or at least concurrently).

    Furthermore, how do these things emerge in the brain? We have made good progress in understanding the “device drivers” of the brain, but when it comes to understanding the nature and origin of the core of cognition (how can you build a self-aware, conscious, intelligent neural network), no matter how much progress we’ve made in knowledge engines and so forth, I don’t see that we are really any closer to the answers to these questions than we were 50 years ago. (It seems to me that mere *magnitude* of complexity of a computational network is not enough to get to these results, but rather that it must be certain *kinds* of complexity that yield these things – i.e., certain specific types of network structures.)

    There was a group of folks several years ago who called themselves the “Extropians” who imagined “downloading” their brains into artificial cognitive devices and achieving immortality. Even if we could understand how knowledge is encoded in our synaptic networks (again, we seem barely closer to that than 50 years ago, maybe some minor and tentative initial steps), there is no guarantee we could capture that in exact detail to load into an artificial system, and even if we could there is the philosophical problem of the continuity of consciousness (we could create a “cognitive clone” of ourselves, but we’d not necessarily have any continuity of experience that attaches our “self” to that entity any more than we’d identify with a toaster – kind of like the personal identity problem of the imaginary Star Trek “transporter” – is that really “me” or just a copy of me that thinks it’s me? – then again, does this happen in some sense every time we take a nap?).

    Does Kurzweil think we are simply about to build a “race of intelligent machines that will take over the world”? (And hopefully they would have the interests of humans as a high priority?)

    So I have two basic questions:

    (1) How much of this is really well-defined right now? (I suspect there is a lot still not understood.)

    (2) What should we be *doing* about all this? (Follows up on Jon’s comment above, after we figure out what we *can* do about it, and Paul’s comment about ethics.)

  • Good stuff. Thanks, all.

  • Ned

    Darlene,
    Here’s one I’d appreciate if you’d consider asking him:
    Given that past predictions about technology leading to human betterment often have proven incorrect, and given that the technologies you advocate have at least some potential for irreversible harm to human civilization, what kind of back-up plan do you propose to guard against a worst-case scenario? For example, nuclear engineers built in redundant layers of protection — defense in depth — even though they expected the reactors to operate safely: Do you have any such protections to offer those concerned about undesired “accidents” between here and the Singularity?
    Thanks,
    Ned

  • Jasmin

    Hi! I am now learning about this Singularity Theory, how it works, and who exactly believes it’s true. I believe the Singularity is real, but I have one question regarding its future:

    1) Can we expect the Singularity to shape itself as the human race cultivates it, or will we still remain in charge of our technological creations?

    I hope anyone, esp. R.K., can answer this question. This is something I’d like to expand my knowledge on. Thanks!
