Linas Vepstas
artificial intelligence research

Linas's posts

Post has shared content
no thoughts

Post has attachment
In case you want to be a rock star, but don't have the music chops ... you could go around explaining MMT to everyone you meet. 

Post has shared content
Our prez has always been interested in history.
I'm starting to hope that the entire Trump presidency is nothing more than a massive 4chan troll.

Really, it makes the most sense.

Post has shared content
The below was written by +Daniel Estrada:

// +Alexander Kruel I started working on AI just around the time +Eliezer Yudkowsky came onto the scene (~2005), but my interests were with social robots and cybernetics, and the LessWrong approach to AGI simply didn't fit anything I knew from cognitive or social psychology, or really anything from the philosophy of mind. But as LessWrong and Bostrom-style positions became the popular trend, I've felt increasingly alienated from entering the field, for exactly the reasons mentioned in the OP.

Well, I think I finally understand how to attack the LessWrong views directly. The whole position is based on the problem of "alignment": we want AI to be "friendly" to human goals, but small unintentional misalignments can have profound negative consequences. Yudkowsky uses the Sorcerer's Apprentice to make the point clear: The Sorcerer's Apprentice - Wikipedia

The alignment argument is right to point out that aligning with human goals is not a trivial task. Where the argument fails is in appreciating how we solve the general problem of social alignment as social agents, and in distinguishing clearly what unique challenges our computing machines bring to the task.

For starters: there is no clear orientation to humanity's goals! Humanity's goals are neither clear nor unambiguous even to humans. Moreover, participants in social systems can have goals that are in direct conflict without it being a fundamental challenge to the social order. For instance, Black Lives Matter has strong and direct criticisms of many aspects of the social order, and if they get their way it would cause major disruptions to our everyday life. For some this may appear to be a threat, but democracy and justice absolutely require that such voices be given space within the social order, and be empowered even to the point of changing that order directly. How do we deal with cases where social integrity depends on interactions of misaligned agents? The simple LessWrong story of AGI alignment is inadequate for dealing with even the simplest cases of social conflict.

The alternative is to understand how the social world is organized as a dynamical system of interacting agents with diverse perspectives and goals, and all the complexity and inconsistency of goal alignment involved in the process of having them work together. For ANY agent to engage the social world, it is not enough to merely align itself with some abstract set of goals. Whose goals? Which goals? How strongly? To even start answering these questions, the agent needs to situate itself as an agent within that social space so that it has some perspective for evaluating the goals at stake, which ones deserve alignment and which can be negotiated. This process of situation and identification is the cognitive root of all social agency and intelligence.

The upshot is that agency is not a function of some abstract measure of intelligence or absolute alignment of goals. A system's agency is a function of its capacity to integrate with the agents around it. You have more capacity as an agent to achieve your goals as those goals are better integrated with the goals of all the agents around you. If you are in an agential vacuum, your agency also goes to nil.

As a consequence, the very idea of "AGI" becomes unstable. Different social environments will support varying capacities of artificial "intelligence", and the success of those AI depends entirely on the agential field they are working in. A bot might successfully navigate that field without aligning to any particular agents or values within it; indeed, the bot might be proactive in realigning the humans in the field, and we might think it is important that the bot has the space to do it!

If the argument is too abstract, here's a short and sweet example: Tweenbot - tweenbots | kacie kinzer

Tweenbot can only move in a straight line, but needs to get to the other side of a winding park. It depends on the support of nearby human volunteers to accomplish this goal. And the humans go out of their way to assist the bot! As a result of the supportive social environment, Tweenbot is capable of significantly more complicated paths through the social space.

This is a clear case where intelligence and alignment are built from repeated interactions between participants in a shared social space. It is simply a mistake to believe that the former can be analyzed prior to and independent of the latter.

via +Greg Egan

Maciej Ceglowski - Superintelligence: The Idea That Eats Smart People

Will AI solve all the world's problems or take over the world, as some seem to hope or fear? It will definitely disrupt industries and economies, and thus the workplace as we knew it.

His keynote was given at WebCamp Zagreb 2016. Maciej is a very good writer and communicator, and one of the people whose articles are worth every minute. I highly recommend him too (as a satisfied customer).

h/t for the link to

Post has shared content
News of the weird ...
with all the truly batshit crazy in the news these days, how can parody outfits like +The Onion & Andy Borowitz even keep up?! #smfh

Post has shared content
subliminal french fries

Post has attachment
To my friends: please be aware of this. It's important. All that political activism: this is the reality that we are embedded in.

Post has attachment
This is making my head explode.

It's a brilliant depiction both of the current state of affairs (at the end of the article: it's not just Trump that is the issue here) and of the future, as well as a path to get from here to there.

Political government is our standard mechanism, our machinery, for solving tough social problems. The GOP/red-right has been interested in destroying much of that machinery, for many reasons: some good, some bad, and most recently because that machinery has failed the white working class, which is now voting red.

Destroying all policy-solving machinery will, of course, increase the number of unsolved problems that society faces. The good news is that the Trump shock seems to be waking up and engaging the silicon-valley class, who are the pre-eminent problem-solvers.

This suggests that we may have a "Cambrian explosion" of political problem-solving technology just around the corner. What can we do to promote, channel, and organize this explosion?

Post has attachment
And now for something completely different.