In the past few days I talked to people who do research associated with Jürgen Schmidhuber's work (see e.g. http://www.idsia.ch/~juergen/oops.html), and they told me that there have been breakthroughs and that true artificial general intelligence (AGI) is now only a few years away.
If they really are that close to AGI, then I actually agree with the AI risk advocates.
So I am asking you people: what do you make of those claims that AGI is almost here?
I just now had an opportunity to talk with someone who worked on Epilog, a story understanding framework. I asked him what was in the way of human-like performance for the current system. It seems that there are a lot of capabilities in the works which need to be hand-coded; for example, he worked on implementing reasoning about equality (which can be encoded axiomatically, but is much more efficiently implemented as a new logical capability with its own inference mechanism), and the system recently gained truth-maintenance capabilities (so that if an axiom is withdrawn, related conclusions can be withdrawn without having to re-derive the whole knowledge base). Some sentences are parsed very quickly, but others take longer, say 10 minutes, because of a large amount of ambiguity. More capabilities are needed to deal efficiently with those sentences.
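Roughly, the truth-maintenance idea is something like the following toy sketch (not the Epilog implementation, just a hypothetical illustration of the bookkeeping): every derived conclusion remembers which axioms justify it, so withdrawing one axiom retracts only the conclusions that depend on it.

```python
# Toy sketch of justification-based truth maintenance (hypothetical, not Epilog):
# each conclusion records the axioms that support it, so a withdrawn axiom
# retracts only its dependents instead of forcing a full re-derivation.

class TruthMaintenance:
    def __init__(self):
        self.axioms = set()
        self.justifications = {}  # conclusion -> set of supporting axioms

    def assert_axiom(self, axiom):
        self.axioms.add(axiom)

    def derive(self, conclusion, from_axioms):
        """Record a conclusion together with the axioms that justify it."""
        if all(a in self.axioms for a in from_axioms):
            self.justifications[conclusion] = set(from_axioms)

    def withdraw(self, axiom):
        """Retract an axiom and every conclusion justified by it,
        leaving the rest of the knowledge base untouched."""
        self.axioms.discard(axiom)
        self.justifications = {
            c: deps for c, deps in self.justifications.items()
            if axiom not in deps
        }

kb = TruthMaintenance()
kb.assert_axiom("bird(tweety)")
kb.assert_axiom("birds_fly")
kb.derive("flies(tweety)", ["bird(tweety)", "birds_fly"])
kb.withdraw("birds_fly")        # flies(tweety) is retracted automatically
print(kb.justifications)        # {}
```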
In any case, asking what is taking so long here is probably less interesting than asking what is keeping techniques like OOPS from scaling up, at least for this discussion. One thing is, no one is currently trying to train OOPS! If it were trained more, it should get more capable, right? So, shouldn't we see a line of papers showing increased capability on a variety of tasks? I asked Schmidhuber last year why not; his response was simply that other architectures had become more interesting (like Goedel machines). (Aug 2, 2012)
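For reference, the OOPS idea is roughly along these lines (a toy, hypothetical sketch under my own simplifications, not Schmidhuber's actual system): shorter candidate programs get a larger share of the runtime budget, the budget doubles each phase, and programs that solve a task are frozen so later tasks can reuse them.

```python
import itertools

# Hypothetical toy version of an OOPS-style search loop (assumed ops "+1" and
# "*2"; not the real OOPS): allocate runtime to programs in proportion to a
# length-based prior, double the total budget each phase, freeze solutions.

OPS = {"+1": lambda x: x + 1, "*2": lambda x: x * 2}

def run(program, x, max_steps):
    """Run a program (list of op names) on input x within a step budget."""
    if len(program) > max_steps:
        return None
    for op in program:
        x = OPS[op](x)
    return x

def oops_like_search(task, max_len=6):
    frozen = []                                   # previously solved programs
    budget = 1
    while True:
        budget *= 2                               # phase: double total budget
        candidates = frozen + [
            list(p) for n in range(1, max_len + 1)
            for p in itertools.product(OPS, repeat=n)
        ]
        for prog in candidates:
            steps = budget // (2 ** len(prog))    # prior-proportional share
            if steps == 0:
                continue
            if all(run(prog, x, steps) == y for x, y in task):
                frozen.append(prog)               # freeze solution for reuse
                return prog

# e.g. find a program mapping x -> 2x + 2
print(oops_like_search([(1, 4), (3, 8)]))         # ['+1', '*2']
```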
+Abram Demski How do you believe an AI in a box is going to acquire the capability not only to have nontrivial conversations with its gatekeeper, but to actually follow through on an advanced deceit strategy, possibly involving an advanced theory of mind of the gatekeeper?
Are such skills going to be hand-coded, or learnt by a simple algorithm without any human assistance? (Aug 2, 2012)
With direct natural-language approaches such as story understanding, such things would have to be specifically coded. But that is why those approaches seem less plausible for AGI; we want it to figure out more things on its own.
You mentioned that you would be worried if you thought an OOPS-like approach would succeed within 5 years. Care to elaborate? An OOPS-like approach is very much "tool AI", with no thought of betraying any master. Of course, it could be used to build agents. (Aug 2, 2012)
+Abram Demski I am much more worried about humans who control a superhuman tool AI than about some sort of partially idiotic paperclip maximizer that barely recognizes humans as agents, if at all.
I guess the only hope is that they would just use it to bring about something sufficiently stupid as to kill everyone quickly.
All I am worried about, when it comes to the kind of unfriendly AI that SIAI imagines, is those crazy acausal trade scenarios, which seem vague enough to discount even granting that I buy the general idea of UFAI.
Otherwise humans + tool AI is a much more worrisome possibility than human extinction.
For the same reason I would be worried about an AGI that is very human or a half-baked FAI.
But I really know much less about this topic than I know about climate change: close to nothing. (Aug 2, 2012)
+Alexander Kruel, if humans with tool AI are concerning, they should be concerning at longer timespans too, right? A 5-year timespan just makes it that much worse.
When you say it is more worrisome than human extinction, what do you mean? (Aug 4, 2012)
> If humans with tool AI are concerning, they should be concerning at longer timespans too, right?
Yes, but I don't see what can be done about tool AI / narrow AI. All you need is some really advanced, AI-supported monitoring system to enable some sort of dictatorship to continue indefinitely, or for long enough that everything turns into hell.
Friendly AI approaches and other ideas won't do anything about human demigods.
> When you say it is more worrisome than human extinction, what do you mean?
Augmented humans or half-baked friendly AI share enough of human values to turn the world into a hell, given the power they are supposed to have.
If I accept the line of reasoning employed by SIAI, then it seems to me that the closer you get to friendly AI, the worse the outcome becomes, and only if you cross the threshold where it becomes truly friendly do you get a positive-utility outcome.
As far as I understand it, an unfriendly AI is most likely going to disregard human values and just follow through on whatever goal it was given, and that goal is probably less likely to involve living humans than a friendly AI's goal would be.
Just think about a half-baked friendly AI that gets everything right but misses human boredom for some reason.
I think antinatalism is a much safer outcome. It is a negative-utility outcome, but its scope is clearly limited.
If paperclip maximizers are possible, then I say we should deliberately build such a thing to transform the whole universe into an inanimate state. (Aug 4, 2012)