I think this is a new clip, not sure. WSJ has a new article on Kurzweil updating his prediction and responding to critics, but it's behind the paywall.

I think he's still shooting for 2045. He says computers will be powerful enough to simulate the human brain by 2020, with the reverse engineering done by 2029.
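For a rough sense of the kind of extrapolation behind a claim like that, here's a back-of-envelope sketch. Every number in it is a placeholder assumption of mine, not a figure from the article or from Kurzweil:

```python
import math

# Sketch of the exponential extrapolation behind this kind of prediction.
# All three figures below are illustrative assumptions, not sourced numbers.
target_cps   = 1e16   # assumed calcs/sec needed for a functional brain simulation
baseline_cps = 1e12   # assumed calcs/sec per ~$1,000 of hardware today
doubling_yrs = 1.5    # assumed price-performance doubling period

doublings = math.log2(target_cps / baseline_cps)
print(f"~{doublings:.0f} doublings, roughly {doublings * doubling_yrs:.0f} years out")
# The answer swings by a decade or more depending on the baseline and
# doubling rate you plug in, which is where most of the disagreement lives.
```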

Pay-wall link:
http://online.wsj.com/article/SB10001424052970203358704577237660058194628.html

My somewhat related post on a human supercomputer:
https://plus.google.com/u/0/109667384864782087641/posts/BAkpftkJ6DP
 
I still don't care about reverse engineering the brain. A focus on cognitive augmentation pretty much nullifies the entire point of reverse engineering, and should CA become mainstream (which is likely) before a full brain model is complete, it will of course result in a quicker completion of a full brain model.
Same argument goes for attaching nerves to prosthetics instead of just building a wireless interface directly from the brain (wherever, really) to the prosthesis and waiting for the brain to do its neural-plastic thing until the prosthetic works.
 
Depends on the tech. They're already progressing on the reverse engineering. There will probably be some level of cognitive augmentation but I would suspect there will be some parts that elude us until there's a full model in a computer. There's a limit to what we can do with living subjects.

And when he says reverse engineering, I think that includes cognitive augmentation. But even understanding how the brain works doesn't mean we'll fully understand how to control dynamical changes and the long-term effects of those changes on the complexity. Nor will we necessarily have the ability to augment certain aspects of a living brain.

I question whether we're even going to want to if we determine how to transfer consciousness. If you can back yourself up to a running simulation, that will be interesting. Although I'm sure there's going to be some funny business, in that the simulation will have simulated sensations, etc.

It might be possible that sensation input/output feedback loops could eventually be routed to/from virtual robotic hosts. I can imagine picking up an apple on the other side of the world and actually feeling it in your biological brain, or in a virtual simulation. It might not be all that different.

It's going to be interesting, though, because I don't think everyone learns exactly the same way. If whatever that network is can be profiled and switched depending on the user, that would be interesting. Then again, there might be more or less energy-efficient patterns, and you might want to choose to actually alter yourself.

But biologically we're really going to be limited in what we can do except through drugs, unless they can figure out how to remap DNA in a living being and deal with the transformation between the two. Seems like a lot of work. It seems like people are going to say, screw it, just make me virtual and immortal.

The best way to change yourself would be in the backup-mapping process to a new version. You could then check to make sure it's running as intended before actually making the conscious switch.
 
To get around the paywall - just google the title and click on the WSJ article. They allow you to read one free article if you come from Google - part of their SEO strategy if I remember correctly.
 
There is something more imminent and dangerous than the Singularity. It is the increasing obsolescence of human labor. We are not discussing this. My take-away from the recent work of Jaron Lanier.
http://youtu.be/T5JZFx6rIlY
 
Sometimes I think what we think is dangerous or abnormal is really just natural. If it were natural that we go extinct, we would think it was dangerous and unnatural somehow. Mutations, ugly people, strange people: they're all normal, really.

Whatever happens is natural in my book, and not necessarily dangerous.
 
+Thomas Kindig Jaron's argument is this: bacteria are afraid of humans taking over the world; bacteria think they're better than humans.

Think of all the billions of bacteria who have either gone extinct or forfeited their individual rights for us to exist.
 
+Jonathan Langdale, starvation and homicide are both perfectly natural. Our responses to these natural phenomena are moral. Bacteria have perhaps not yet acquired morality.
 
There is no real morality. It's just an illusion created by our brain. Unless morality can be quantified to the point that we can say this string of digits is morality and this string is not, there is no morality. Morality is selfishness, that's all.
 
+Thomas Kindig Jaron's point about kids inventing themselves through social media as much as through real life seems significant. Although, I'm not ready to say it's necessarily dangerous. If their adult life is as much or more online, then it could be a good thing. I'm not sure we're smart enough yet to say either way, but our natural reaction is to assume the worst.

Jaron's attempt to separate non-geeks from geeks as a potential danger seems like a narrow view. This distinction has always existed in areas totally unrelated to computers. You could say the same thing about reading and writing, or theological understanding.

Jaron seems like a cool guy with some interesting points, but he seems biased and even somewhat narcissistic.
 
If there can be said to be an objective morality, it is one that is synonymous with the enlightened self-interest of members of any large group. Even the simplest of AI models are able to generate tit for tat through trial and error, and we have hundreds of millions of years of evolution on our side. I'd have to say that there is an objective morality: a set of rules that will tend to result in sum-positive outcomes for those who follow them and for the group at large. There may not be as many rules as in our culturally derived set handed down over hundreds of years, but I do believe we can find a common set of morals that can suffice even in a secular world. 'Don't be a dick' goes far.
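As a toy illustration of that sum-positive claim, here's a minimal Python sketch of a round-robin iterated prisoner's dilemma. The payoff values and the three strategies are the standard textbook setup and my own illustrative choices, not from any particular AI model, and it only shows tit for tat holding its own against a defector, not being discovered by trial and error:

```python
import random

# Standard prisoner's dilemma payoffs: my points given (my move, their move).
# C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0,
          ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def random_play(opponent_history):
    return random.choice('CD')

def match(strat_a, strat_b, rounds=200):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b = [], []          # moves made so far by a and by b
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)     # each side sees only the other's history
        move_b = strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {'tit_for_tat': tit_for_tat,
              'always_defect': always_defect,
              'random': random_play}

# Round-robin including self-play: each strategy's total is the sum of its
# own scores against every strategy in the pool.
totals = dict.fromkeys(strategies, 0)
for name_a, strat_a in strategies.items():
    for _, strat_b in strategies.items():
        my_score, _ = match(strat_a, strat_b)
        totals[name_a] += my_score

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(name, total)
```

With this particular mix, the reciprocating strategy tends to finish with the highest total even though it loses its head-to-head match with the pure defector, which is the 'enlightened self-interest of the group' point in miniature.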
 
If you trace morality back, it is defined by selfishness. Yet I would argue that there are no truly selfless acts. If there are no selfless acts, then selfishness loses its meaning; it only has meaning to us in that we live under the illusion that selflessness exists. This can then also be traced back to the atom and the Standard Model of particle interactions. Everything has a tendency or probability to react in accordance with natural laws. Morality is no different than gravity, in my opinion.

+Uriah Maynard I can imagine an alternative planet with intelligent life where a morality no more and no less "moral" than our own has developed, such that the general conclusion "don't be nice" holds. They may even celebrate death. They may even be more moral by our own standards.

Neither we nor they would be in any position to say which is universally moral. All that remains is our relative selfishness, as defined by our environment, which has designed our genetics through natural selection.
 
Imagination is one thing, but would it actually be able to work? You might have a species that is more intelligent than humans (it's possible that Neanderthals were), but how could they develop things like civilization or other advanced technologies without the capacity to work together that morality is fundamentally necessary to provide? Without the golden rule, which is hard-coded into our DNA and which emerges as a vital adaptation in any organism that interacts with others across multiple generations, how would the sort of complex thought that we are capable of ever hope to emerge? And if they were intelligent, how would we recognize that intelligence, when all our measures of intelligence are so firmly rooted in the needs of social beings? What forces would act on a species to develop that intelligence without first developing the ability to form and effectively manage tribes?
 
+Uriah Maynard Yeah, I could realistically imagine it working and functioning where they kill each other once they've passed their usefulness. Or they kill those who are unfit to reproduce, or those with a predisposition to be excessively nice or kind. In their environment, being nice might not be productive. Why does that prevent them from developing technology? Maybe they reproduce every month and grow much faster and live much longer. Maybe they do not sleep. Maybe their system has a binary star causing no nighttime, or perhaps they go through long periods of darkness. I don't see why this would prevent a social evolution from emerging. Most of the life that has ever existed is now extinct. Is that because of the golden rule? I think it's because of the environment, lack of food, etc.

The idea that there is a golden universal rule seems more like the idea that the world is flat. It's centered on us; I don't think this golden rule is what we think it is. We don't even follow our own golden rule, and yet we're successful. What we are is far from perfect, but we don't want to change. We're not even what we once were; who is to say what is optimal?

Would they have a respect for intelligent life? Probably. But why? Because they might see similarities in that we both reproduce through natural selection and have developed a subconscious and a conscious mind able to simulate and make more complicated decisions past a threshold that they consider intelligent. We might not even be close to that threshold. This respect would be selfish in that there's a similarity, and maybe we both like that. But that doesn't mean they would necessarily feel bad about killing us. And if we thought they were a danger, we'd probably kill them. Selfish. Just because we might feel bad about the lost opportunity doesn't mean much to me. Our desire to reach out and contact others seems to stem from our generally social nature. In a way, even that is selfish.

A more advanced form of morality might say that sending advanced knowledge by radio to another independently developed life form could be a bad thing, leading to their destruction (Prime Directive). What a waste all those millions of years of evolution would be if we caused some other world to prematurely go extinct as a result of learning about technology it was not ready for.
 
+Jonathan Langdale: I agree with the first part of this ("Whatever happens is natural in my book, and not necessarily dangerous") but not necessarily the second. It can be dangerous (at least for a subset, at least for a period). And I do want to know, because it is not clear from your post and the comments: do you buy Kurzweil? Not the morality of it, but whether you think it possible.