Deen Abiola

Shared publicly  - 
The blue whale heart is huge but not as large as has been commonly depicted.

What fascinates me more about whales is not their size but that their size increase does not come with a concomitant increase in cancer rates.

This is known as Peto's Paradox and "is the observation, due to Richard Peto, that at the species level, the incidence of cancer does not appear to correlate with the number of cells in an organism.[1] For example, the incidence of cancer in humans is much higher than the incidence of cancer in whales.[2] This is despite the fact that a whale has many more cells than a human. If the probability of carcinogenesis were constant across cells, one would expect whales to have a higher incidence of cancer than humans".

Puzzle: Among the warm-blooded, why doesn't this relationship between size and robustness hold as well for birds in general? (Birds, parrots especially, can be very small and yet have lifespans in the range of humans'. Bats and naked mole rats buck this trend among mammals too.) What's going on there?
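
The naive model the quoted passage describes is easy to make concrete: if every cell independently carries the same small lifetime risk of seeding a cancer, an organism with N cells gets cancer with probability 1 - (1 - p)^N. The per-cell risk and cell counts below are rough, illustrative assumptions, not measured values:

```python
# Naive constant-risk model behind Peto's Paradox: under independent,
# constant per-cell risk p, lifetime cancer probability is 1 - (1 - p)^N.
def cancer_probability(cells: float, per_cell_risk: float) -> float:
    """Lifetime cancer probability under the constant per-cell risk model."""
    return 1.0 - (1.0 - per_cell_risk) ** cells

p = 1e-14                               # assumed per-cell risk (illustrative)
human = cancer_probability(3e13, p)     # ~3e13 cells in a human (rough)
whale = cancer_probability(1e17, p)     # ~1e17 cells in a blue whale (rough)

print(f"human: {human:.2f}, whale: {whale:.2f}")
```

Under these made-up numbers the model predicts near-certain cancer for whales and a modest rate for humans, which is exactly what observation contradicts; that gap is the paradox.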

Whale anatomy is just bizarre.  No other animal can survive the changes in compression involved.  A blue whale can dive 500 meters and resurface without getting the bends.  That's a water pressure of over 5 megapascals, about 750 PSI.  Some whales are capable of diving 3,000 meters for a total time of 138 minutes.  That's about 30 megapascals, over 4,000 PSI.  The air in the whale's lungs is compressed to a small fraction of its original volume - along with everything else.  Just amazes me....
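
The figures in the comment can be sanity-checked with the hydrostatic formula P = rho * g * h and a Boyle's-law estimate of lung compression (treating the air as an ideal gas at constant temperature, which is only a rough approximation for a living animal):

```python
# Hydrostatic pressure at depth, plus a Boyle's-law estimate of how much
# the air in the lungs shrinks relative to the surface.
RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density
G = 9.81                # m/s^2
ATM = 101325.0          # Pa, surface (atmospheric) pressure

def pressure_at_depth(depth_m: float) -> float:
    """Absolute pressure in pascals at a given seawater depth."""
    return ATM + RHO_SEAWATER * G * depth_m

def lung_volume_fraction(depth_m: float) -> float:
    """Boyle's-law estimate of remaining air volume relative to the surface."""
    return ATM / pressure_at_depth(depth_m)

for depth in (500, 3000):
    p = pressure_at_depth(depth)
    print(f"{depth} m: {p/1e6:.1f} MPa ({p/6894.76:.0f} psi), "
          f"air at {lung_volume_fraction(depth):.1%} of surface volume")
```

At 500 m this gives roughly 5.1 MPa (about 740 psi), and at 3,000 m roughly 30 MPa; the lung air shrinks to a few percent of its surface volume at even the shallower depth.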

Deen Abiola

It is very easy to come up with a feature, test it on a handful of cases or a toy data-set and then declare success. What is difficult and immensely frustrating is throwing away features which ultimately end up being more effort than they're worth. Take the case of summarization: if I find myself spending more time trying to decipher a summary's meaning than it would have taken to read the text itself, then the feature has failed. The summary should give a good enough idea of the text, with high probability, in a manner that doesn't lead to frustration. This is a high threshold that has led to my throwing away the vast majority of ideas.

##The Tiers of a Feature

I group features into five tiers: Mud, Pyrite, Plastic, Gold and Platinum/Palladium.

*Mud features* are noise; they're the most abundant class of feature I think up, and they make everything worse by their mere existence. Testing ideas can sometimes be depressing because most of them will turn out to be mud. They're too numerous to enumerate. On the positive side, I seem to have gained the ability to automatically detect and cut short such ideas.

*Pyrite features* are novelty ideas that look promising, showing enormous potential, only to fall flat in practical application. They aren't outright failures; I count "not fast enough" and "works, but is ultimately a gimmick" among them. Most of the ideas that aren't mud fall into this category. One example is the phrase-based summaries I showcased in the previous article; I'll talk about how they fail later in this one.

*Plastic features* are borderline useful but not memorable or worth it. They're relatively reliable, but you likely would not care much if they were gone. Another common reason a feature doesn't make the cut is that its runtime is too slow, or its algorithmic complexity too high, to work in real time on an average machine. Many webpages have tens of thousands of words (for grounding, that's about 80 textbook pages), and there will be instances where you end up processing 3 or 4 such webpages in parallel while browsing normally, and you want results in no more than a second.

Another scenario might be analyzing dozens or more pages at a time while still not going more than a few seconds without results. Meeting those constraints has occupied much of my time and has been the cause of my throwing away a lot of ideas; getting something that's both fast and actually reliable is difficult. Later in this article, I'll talk about escalation and my approach with UX to get around this where possible.

A different requirement on some algorithms is that they be able to learn, in real time, using minimal memory. These last two constraints eliminate both cutting-edge ideas such as recurrent neural networks, which I spent a couple of weeks experimenting on, and old ideas disguised as new, such as word2vec. Even Conditional Random Fields and my Hidden Markov Model implementation proved too slow for speedy use. A corollary is that turnaround time on ideas is much slower with, say, deep neural networks.
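
To illustrate the "learn online, in bounded memory" constraint, here is a minimal sketch: a streaming term-frequency model that decays old counts and prunes its vocabulary so memory stays fixed no matter how much text flows through. The class name and thresholds are my own illustrative stand-ins, not the project's actual code:

```python
# Streaming term model with a hard memory bound: when the vocabulary
# exceeds max_terms, decay all counts and keep only the strongest terms.
from collections import Counter

class StreamingTermModel:
    def __init__(self, max_terms: int = 1000, decay: float = 0.99):
        self.counts = Counter()
        self.max_terms = max_terms
        self.decay = decay

    def update(self, tokens):
        for t in tokens:
            self.counts[t] += 1.0
        if len(self.counts) > self.max_terms:
            # Prune: decay everything, drop all but the top max_terms.
            self.counts = Counter({t: c * self.decay
                                   for t, c in self.counts.most_common(self.max_terms)})

    def top(self, k: int = 5):
        return [t for t, _ in self.counts.most_common(k)]

model = StreamingTermModel(max_terms=50)
for _ in range(100):
    model.update("the quick summary of a long document".split())
print(model.top(3))
```

The point is not this particular scheme but the shape of it: constant work per token, constant memory, and answers available at any moment mid-stream.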

This limits the rate of experimentation. Since most ideas are mud, there is a great deal more to do than figure out an appropriate architecture, and the tech is not yet sufficiently better to be worth the cost, I decided to drop that branch of the tree (after 14 hours nursing an RNN—the scenario is not unlike obsessing over a graphics engine when you're trying to build a game). Hopefully Moore's Law will address this in time, but for now such models are impractical for the sort of tasks one encounters in an intelligence-amplifier setting.

*Gold features* are extremely useful and reliable but perhaps only truly shine in a few contexts. An example would be the graphs of my previous post. It's not often I use such features but when I do, they're very useful for quickly getting some idea of a long and complex piece—papers are one example.

*Platinum/Palladium features* are rare and pivotal; they're what make the software something you'd want to incorporate into your daily routine. Some you use everywhere; others are used in only a handful of (but still important) scenarios.


There are two senses in which I use escalation, both inspired by games: a) the software must be useful at all skill levels, and b) most often, it must not overload the user (or symbiote) with options. As this post has grown too long, I've decided to split the discussion here; a future post will discuss a).

Typically, today, your only choices when given a text are to read it now, never read it, or save it to never read it later. What Project Int.Aug does is introduce layers below and above (or to the side of) that. You can get a few words and topics; look into people, places, sections of emphasis and concepts; read a summary at different levels of detail; read the text; or explore a network representation of the text. For the last one, the network, I am not certain that exploring it in full detail is actually any faster than reading the text, but I have found it (and hopefully you will too) a more enjoyable way to approach texts.

##What Games are Best at

The best games are really good at escalating difficulty, gradually introducing complexity, and making good use of a contextual interface that responds intelligently to your situation. Most software is not like that. In the next article I'll talk about how I try to emulate that; here I'll focus on my attempt to escalate: hiding things away while keeping them highly accessible.


While using different applications or browsing, you can invoke a ~500x300 transparent window (for single-screen folk, the annoyance factor is still not completely worked out; I may shift in favor of a pop-up). The window is purposely kept small as it's meant to be taken in at a glance (there's an option to move the analysis to a full window). The easiest-to-parse features should be the most quickly computed and displayed: keyword extraction, top nouns, top verbs. But how is this useful? Consider the choice of visiting a link today. It's a wasteful task that involves invoking a new browser tab or window, skimming or looking at the title, and then deciding that this was a waste of the last 20 seconds of your life. Trying to predict the content of a link is too inaccurate; being able to quickly peek at a handful of words from the text is an excellent compromise. Incidentally, I later realized that it's much harder to skim when you're blind, so the ability to extract key sections or query a document—my approximation of non-linear reading—is useful there too.
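
The cheapest useful signal for that "peek at a link" is a handful of frequent content words. A minimal sketch of frequency-based keyword extraction (the stopword list and tokenizer here are crude stand-ins, not the project's actual ones):

```python
# Quick keyword peek: tokenize, drop stopwords and very short words,
# return the most frequent remaining terms.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that", "for"}

def quick_keywords(text: str, k: int = 5):
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [w for w, _ in counts.most_common(k)]

page = ("The robot scientist Eve can screen drug compounds faster, "
        "helping researchers find promising drug candidates for malaria.")
print(quick_keywords(page, 3))
```

Even something this crude runs in microseconds on a full page, which is what makes it displayable before the heavier analyses finish.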

That last point is an example of the guiding principle of this software. If an application is going to have any chance of being part of a larger system of amplified intelligence, it needs to minimize friction; minimizing friction is required before the illusion of an extended self can even be considered. There are, I believe, two parts to friction. The first is latency: things need to happen at speeds below the conscious threshold, or where that's not possible, meaningful feedback needs to occur at similar speeds. The other is prediction—more specifically, preconscious prediction. When interacting with any system in the world, we're constantly predicting how it will respond to our actions; in software, features that are difficult to quickly learn to predict at a preconscious level induce too much friction (our brains do not like this). This is not the same as saying the software must be dumbed down, only that it be easy to use, easy to learn and easy to grow with (essentially, useful at all levels of skill—hard things will have some threshold you can't go below, but let there be useful easy things too). Software whose behavior you must constantly guess at, and can only guess with less than certainty, is an absolute failure.

Less important than friction (but still important) is that the cost of using a feature be lower than the value gained, and that it be unambiguously better than what it replaces. Consider a method that constantly offers irrelevant keywords, misclassifies people as locations at too high a rate, or a word-similarity function that induces more cognitive noise than clarity (even if it works perfectly). Finding out what is helpful in day-to-day use has not been easy. Note that speed and accuracy are at odds; always defer in favor of speed, because at scale a few percentage points of accuracy are just not worth losing even a hundred milliseconds per instance.

##Measures are Useless

In machine learning papers, unsupervised methods are typically scored under some measure. In practice, I've found such results useless for gauging the real-life utility of a method. The only real way to see how well a method works is to incorporate it into my daily activity and note whether it relieves or adds to cognitive overhead.

##The Interface

In building Project Int.Aug I have roughly five key goals:

* Augment my ability to recall sites, papers, etc. that I have read or visited; I should not have to remember the exact wording. This solves the problem of too many tabs and bookmarks.
* Augment association by displaying contextually useful definitions when called upon; these can be clippings, parts of a paper, etc., relevant to my current document, site or copied selection.
* Augment my ability to research and search. Show useful associations between the topics I'm researching and reduce my ramp-up time. Consider a "more like this" feature, across personal documents and search in general. Allow querying of multiple pages, and search agents that map out a few branches of a search tree. For example, someone claims a new mathematical result—is it really new? With document and concept vectors across a broad swathe of papers, the system should be able to interactively refine possible prior work with you, even if you do not have it on your machine.
* Reduce the amount of reading I have to do. Unless I'm reading for entertainment or edification, reading is a waste of time, because I'm only going to remember a few words anyway; so get me those words that would be the only things I would have remembered had I read the piece.
* Make use of the data and trails we all generate (and the ability to share them) while going through our day-to-day activity, in a way beneficial to us (corporations are already very good at this, though mainly for their own benefit).
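
The "more like this" goal above reduces, in its simplest form, to nearest-neighbour search over document vectors. A bag-of-words cosine-similarity sketch (the real document/concept vectors would be richer; document names and texts here are made up for illustration):

```python
# "More like this" as cosine similarity over bag-of-words vectors.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {
    "dhfr": "antimalarial drug targets dhfr enzyme parasite",
    "rnn": "recurrent neural network sequence training",
    "eve": "robot scientist drug discovery screening parasite",
}
query = vectorize("drug discovery for malaria parasite")
best = max(docs, key=lambda name: cosine(query, vectorize(docs[name])))
print(best)
```

Over a personal corpus, the same scoring ranks all documents against the current page or selection, which is the backbone of both recall and "more like this".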

I'll focus here on reducing required reading. Sometimes I forget that I'm trying to build an IA and not an AI. This means that spending too much time trying to perfect some piece is counter-productive in the face of all that needs to be done; achieving a high enough rate of signal over noise, plus UX to filter through the results, is more important. Our brains should not be passive in this relationship: they have an incredible and so far unique ability to cut through vast search spaces, and this is our form of creativity. On the other hand, we're not very good at considering alternatives we deem counter-intuitive (a casualty, I believe, of our tendency toward confirmation bias); computers can be good at this, and that is their form of creativity. Combining the two with a good interface creates something formidable indeed.

An example of poor results is the association-based summaries; they can be very hit or miss:

*Sample 1*

> Drugs that inhibit this molecule are currently routinely used to protect: attack the parasites that cause them using small molecule drugs/is used/run experiments using laboratory robotics; attack the parasites that cause them using small molecule drugs: make it more economical/to find a new antimalarial that targets DHFR/To improve this process; the robot can help identify promising new drug candidates: demonstrating a new approach/independently discover new scientific knowledge/increases the probability; an anti-cancer drug inhibits a key molecule known: say researchers writing/to automate/can be generated much faster; an artificially-intelligent 'robot scientist' could make drug discovery faster: select compounds that have a high probability/does not have the ability to synthesise such compounds/has the potential to improve the lives; 

*Sample 2*:

> The more such Internet users deploy “ do not track ” software: to make their users more valuable/assimilate more learning material/creating more flexible scheduling options and opportunities; is..far fetched Robotic caregiving makes far more sense: to use an adjective that makes sense only/change our sense/would the robotic seal appear a far less comparatively; is..less refined any more humane social order could arise: changing social norms/enables “ social networking ”/making some striking comparisons; one more incremental step: amasses around one person ’s account/want high ones/needs anchors; Data scientists create these new human kinds even: to create it/to create certain kinds/perfecting a new science; is..virtuous or vicious

Both of these are distillations of much longer pieces (the second of a very long and complex one). Trying to make sense of them is difficult, which ultimately makes this a feature I consider pyrite (the difficulty lies in the fact that the type of similarity it surfaces is not appropriate for this task). However, utility is task-dependent; while it is not sufficiently useful for a single article, I have a hypothesis that it will work better as a kind of broad overview when searching many pages at a time. The same failing applies to the single-word version of the "association" based summaries:

>ai develop/comprehend/detect is...specific, good, former
Similar: ai, researcher, arm, decline, hundred
>facebook hit/publish/answer is...memory-based, good, free
Similar: facebook, memory-based, weston, arm, boss
>memory use/see/discern is...central, neural, biological
Similar: memory, use, reason, understanding, over
>google built/modify/think is...specific, free, parallel
Similar: google, ai, university, baidu, try
>computer give/develop/detect is...implicit, top, brainy
Similar: computer, world, give, pattern, journal

It is easy to get stuck in a rut trying to fix this rather than remaining focused on the bigger picture. For example, one option would be to build an n-order Markov chain specific to the text, plus a general language model of sentences, and try to generate the shortest, most likely sentence expansion of these phrases. But why? A big part of this project has been learning to do the least amount of work that gets something good enough for what I want, or else scrap it—there is simply too much that needs doing. Sometimes the simplest thing is complex, but often, especially if you've made things modular and composable ahead of time, the method can have a surprisingly simple implementation (some might point out that composability hides complexity, which is exactly the point).
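
Part of the temptation of the Markov-chain fix is how little code it takes. An order-1, text-specific chain used to expand a seed word into a likely phrase is only a few lines (this is a generic sketch of the technique, not the project's code; the sample text is made up):

```python
# Text-specific order-1 Markov chain: map each word to the words that
# followed it, then walk the chain from a seed word.
import random
from collections import defaultdict

def build_chain(text: str):
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def expand(chain, start: str, max_len: int = 8, seed: int = 0):
    rng = random.Random(seed)   # seeded for reproducibility
    out = [start]
    while len(out) < max_len and out[-1] in chain:
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

chain = build_chain("the robot can help identify promising new drug candidates "
                    "and the robot can run experiments")
print(expand(chain, "the"))
```

A higher-order chain conditions on the last n words instead of one, which produces more fluent expansions at the cost of sparser statistics, which is exactly where a general language model would have had to come in.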

On the other hand, there are features which work really rather well: topic and keyword extraction, extracted summaries, concept and directional vectors. Named entity recognition is more of a plastic feature: it's okay, but it will more than serve as the basis of a question-answering system (for example, it sometimes labels books, papers, websites or genetic loci as locations, which actually makes a lot of sense). You can see for yourself the output of the analysis of 7 randomly selected websites of varying complexity. The summaries in particular are surprisingly good; most extractive summaries work best for simple news pieces but completely fall apart on interviews, forums, papers or long narrative reads, whereas this method degrades gracefully from simple news articles to interviews and thread posts. You can see some examples [in this link](, under the Full Summary sections. There are two methods to generate summaries, one using phrases and another using sentences. Sometimes the phrases are better (in particular for short or news pieces), but the fuller sentence-based summaries are more consistently good:

*Example of a better phrase based summary*
>Artificially intelligent robot scientist 'Eve' could boost search. Drugs that inhibit this molecule are currently routinely used to protect. Eve is designed to automate early-stage drug design. a compound shown to have anti-cancer properties might also be used. an anti-cancer drug inhibits a key molecule known. an artificially-intelligent 'robot scientist' could make drug discovery faster. attack the parasites that cause them using small molecule drugs. new drugs is becoming increasingly more urgent. the robot can help identify promising new drug candidates
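
Sentence extraction in its simplest form scores each sentence by the frequency of its content words and keeps the top scorers. The actual summarizer is more involved; this sketch just shows the family of method (sentence splitter, length threshold and sample text are illustrative):

```python
# Minimal extractive summarizer: score sentences by average corpus
# frequency of their words, return the n highest-scoring sentences.
import re
from collections import Counter

def extractive_summary(text: str, n: int = 1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # skip very short words

    def score(s):
        toks = re.findall(r"[a-z]+", s.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    return sorted(sentences, key=score, reverse=True)[:n]

text = ("Eve is designed to automate early-stage drug design. "
        "The weather was pleasant that day. "
        "The robot can help identify promising new drug candidates.")
print(extractive_summary(text, 1))
```

The normalization by sentence length matters: without it, long rambling sentences win simply by containing more words, which is one of the ways naive extractive summaries fall apart on interviews and forum threads.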

*Example of topics*:
> brain-based physiology of creativity, the human cerebellum, monkey cerebellum
> global poverty, AI risk, computer science, effective altruists, effective altruism, billion people, Repugnant Conclusion : the idea
> artificial intelligence, last year, few months, common sense, memory-based AI, Facebook AI researcher, Facebook AI boss, crusade for the thinking machine
> drug discovery, mass screening, machine learning, Robot scientists, robot scientist, fight against malaria
> feedback and control mechanisms of Big Data, Blog Theory : Feedback and Capture in the, sociotechnical system : Particular political economies, effect of “ bombshell ” surveillance

*Examples from Directional vectors*.

These vectors capture some directionality (which provides some refinement in capturing context); as such, you can recover common antecedent or succedent words.

>Similar to drug: drug
>Top 3 preceedings for drug: Concepts: exist, choose, early-stage | Index: compound, positives., early-stage
>Top 3 post/next words for drug: Concepts: target., design., discovery | Index: discovery, candidate
> ==================
>Similar to scientist: scientist
>Top 3 preceedings for scientist: Concepts: robot | Index: robot, clinical
>Top 3 post/next words for scientist: Concepts: 'eve' | Index: be, them
> ==================
>Similar to self: self, tool
>Top 3 preceedings for self: Concepts: construct, ‘data, algorithmic | Index: algorithmic, network, premack
> Top 3 post/next words for self: Concepts: balkinization, commit, comprehensively | Index: setting
> ==================
> Similar to risk: risk, researcher, obsession.
> Top 3 preceedings for risk: Concepts: existential, ai, recoil | Index: ai, existential, human
> Top 3 post/next words for risk: Concepts: panel, estimate, charity | Index: of
> ==================
>Similar to altruist: altruist, intervention., altruism
>Top 3 preceedings for altruist: Concepts: effective, lethality. | Index: effective, maximum
>Top 3 post/next words for altruist: Concepts: groups., explain, don | Index: potential, though
> ==================
> Similar to people: people
> Top 3 preceedings for people: Concepts: serious, marginalize, part | Index:
> Top 3 post/next words for people: Concepts: seek | Index: who, in, seek

Sometimes the result is less than ideal, but this is where UX can help. For example, consider a sentence starting with "They"; your first question will no doubt be: who? One way to fix this is to allow hovering over a sentence to get an inline display showing its context. However, hovering only works when popups are sparse; otherwise the interaction becomes very annoying, with things popping up at every mouse move. Instead, I've resorted to having text selection trigger a context search. Another issue: sometimes summaries are too short and improve with added length. You don't want too many options, however, so there are two modes, one set of parameters tuned for long output and one for short, each giving good results across a broad set of articles (the examples here are all "short").

The interface consists of three tabs: one for topics, one for gists/summaries and a final one for entities. The gists are further separated into phrases and sentences (though if over the next few days I find the phrase version induces too much cognitive overhead, I'll drop it), and the entities into people, locations, orgs, etc. (a literal etc.). You can easily use keyboard navigation or have the summaries read to you at high speed, and you can select text for more context.

There is a lot that needs to be done per text: generate document-specific vectors, tokenize, tag parts of speech, generate chunks, extract entities, extract keywords, generate summaries. Each of these occurs in milliseconds for the average document but can rise to 1-3 seconds for really long texts (a thread with 1000+ replies); updating asynchronously, with the most important results (keywords) displayed first, works around this speed issue. The important bit: because we are not building AI, we have more room for error, so long as signal overwhelms noise and we have good friction-removing tools to work around mistakes. In this way you can choose to go into as little or as much depth as you want—escalation—and unlike the case with skimming, the probability of hitting the important bits is significantly better than random.
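
The "cheapest signal first" scheduling can be sketched with standard-library concurrency: run the stages in parallel and surface each result the moment it lands, so keywords appear before the slower summaries. Stage names and timings below are illustrative stand-ins for the real pipeline:

```python
# Run analysis stages concurrently; display each as it completes,
# fastest (keywords) first.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def stage(name: str, seconds: float):
    time.sleep(seconds)          # stand-in for real analysis work
    return name

stages = {"keywords": 0.01, "entities": 0.05, "summary": 0.1}

completed = []
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(stage, n, s) for n, s in stages.items()]
    for fut in as_completed(futures):
        completed.append(fut.result())  # a real UI would render here

print(completed)
```

Because `as_completed` yields futures in finish order rather than submission order, the cheap results reach the user while the expensive ones are still running, which is the whole trick.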

Sometimes all I can see are the failings and shortfalls, and I feel down because things seem so far from the imagined ideal. But then I ask myself: if two people were trying to learn something new, one with Project Int.Aug and the other with browsers and Google, the person using tools like Project Int.Aug would without a doubt be better equipped. I might spin in circles, continuously replacing internal algorithms with something better, forever chasing perfection; but if the goal is to move forward to motorbikes and even hoverbikes of the mind, I've got to release something and get outside input.

But right now, in a world of walkers, Project Int.Aug is an electric bicycle for the mind*.


*If you're working on something like this too, please let me know!

Here you can look at the performance of the methods across a [random sample of 7 websites](

The network for the [sample image]( 

![alt text](images/birdbrain.png)
Yeah, I appreciate the complexity and difficulty. I'm only vaguely aware of the large network of existing knowledge sitting in my head that I can tap at whim to make associations and comparisons with some new advance; it's a convenient filter, one I never have to think about, that helps draw out the important bits.

But still, it'd be great to have an automated system that "put me out of a job" for this task so to speak :)

Deen Abiola


This is a really pretty visualization of how decision trees work. It's less about machine learning proper, which is actually a strength, since it can be that much more concrete.

My only super tiny quibble is with overview point 1). I'd say instead that drawing boundaries applies to discriminative learners only, not to more probabilistic methods (also, not all learners are statistical, but apparently there's a duality between sampling and search which muddies neat divisions).

I'd also further characterize over-fitting as memorizing the data: the model's complexity (its number of parameters) is unjustified by, or outmatches, the available data. It stems from a lack of smoothing, which is when you don't filter out noise but instead explain every little detail using some really impressive bat deduction [1]. Humans do this when concocting conspiracies or reasoning from stereotypes, initial impressions and anecdotes.
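
A tiny worked example of that point, using add-one (Laplace) smoothing, which is my choice of illustration rather than anything from the linked visualization: an unsmoothed maximum-likelihood estimate memorizes the sample exactly, assigning probability zero to anything unseen, while smoothing trades a little fit for generalization.

```python
# Memorization vs smoothing on a toy sample: three heads, no tails.
from collections import Counter

observed = Counter({"heads": 3})   # the entire training sample
vocab = ["heads", "tails"]

def mle(outcome):
    """Maximum likelihood: fits the sample perfectly, i.e. memorizes it."""
    return observed[outcome] / sum(observed.values())

def laplace(outcome, alpha=1.0):
    """Add-alpha smoothing: reserves probability mass for the unseen."""
    total = sum(observed.values()) + alpha * len(vocab)
    return (observed[outcome] + alpha) / total

print(mle("tails"), laplace("tails"))   # 0.0 vs 0.2
```

The MLE's confident "tails never happens" is the coin-flip version of a conspiracy theory: a perfect explanation of the anecdotes at hand.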

A Visual Introduction to Machine Learning

If you haven't seen this yet, it's pretty awesome!
What is machine learning? See how it works with our animated data visualization.

Deen Abiola

I think there's a third option: wake* the road network up. Children playing in the road—anyone crossing—should have earpieces hooked up to a giant distributed computation from cars running simulations of the next 5 seconds and planning accordingly. It's not far-out tech for anyone looking to cross the street to have access to some whispering Bayesian network that looks at the conditions of all the cars within some radius and suggests an optimal time to cross, if at all. This would all but eliminate the already rare trolley problem. The system would also be able to learn, if we put black boxes in cars to gather lots of data on the more plausible accident scenarios. Assuming car ownership is even still a thing and the car hasn't been hacked, it might decide not even to turn on (thanks to whatever models were learned from in-car black boxes) after deeming the probability of loss of control too high.

*I use wake in the loose sense of the emergence of interesting long range coherent oscillations. And something like a kindly giant that moves people out of the way so they're less likely to get stepped on.

I think a big part of the solution will be to stop thinking of cars as individual units and instead start realizing that traffic consisting of self driving cars will be a single extended thing unto itself.
Ever since seeing this article a few days ago, it's been bugging me. We know that self-driving cars will have to solve real-life "trolley problems:" those favorite hypotheticals of Philosophy 101 classes wherein you have to make a choice between saving, say, one person's life or five, or saving five people's lives by pushing another person off a bridge, or things like that. And ethicists (and even more so, the media) have spent a lot of time talking about how impossible it will be to ever trust computers with such decisions, and why, therefore, autonomous machines are frightening.

What bugs me about this is that we make these kinds of decisions all the time. There are plenty of concrete, real-world cases that actually happen: do you swerve into a tree rather than hit a pedestrian? (That's greatly increasing the risk to your life -- and your passengers' -- to save another person)

I think that part of the reason that we're so nervous about computerizing these ethical decisions is not so much that they're hard, as that doing this would require us to be very explicit about how we want these decisions made -- and people tend to talk around that very explicit decision, because when they do, it tends to reveal that their actual preferences aren't the same as the ones they want their neighbors to think they have.

For example: I suspect that most people, if driving alone in a vehicle, will go to fairly significant lengths to avoid hitting a pedestrian, including putting themselves at risk by hitting a tree or running into a ditch. I suspect that if the pedestrian is pushing a stroller with a baby, they'll feel even more strongly this way. But as soon as you have passengers in the car, things change: what if it's your spouse? Your children? What if you don't particularly like your spouse?

Or we can phrase it in the way that the headline below does: "Will your self-driving car be programmed to kill you if it means saving more strangers?" This phrasing is deliberately chosen to trigger a revulsion, and if I phrase it instead the way I did above -- in terms of running into a tree to avoid a pedestrian -- your answer might be different. The phrasing in the headline, on the other hand, seems to tap into a fear of loss of autonomy, which I often hear around other parts of discussions of the future of cars. Here's a place where a decision which you normally make -- based on secret factors which only you, in your heart, know, and which nobody else will ever know for sure -- is instead going to be made by someone else, and not necessarily to your advantage. We all suspect that it would sometimes make that decision in a way that, if we were making it secret (and with the plausible deniability that comes from it being hard to operate a car during an emergency), we might make quite differently.

Oddly, if you think about how we would feel about such decisions being made by a human taxi driver, people's reactions seem different, even though there's the same loss of autonomy, and now instead of a rule you can understand, you're subject to the driver's secret decisions. 

I suspect that the truth is this:

Most people would go to more lengths than they expect to save a life that they in some way cared about.

Most people would go to more lengths than they are willing to admit to save their own life: their actual balance, in the clinch, between protecting themselves and protecting others isn't the one they say it is. And most people secretly suspect that this is true, which is why the notion of the car "being programmed to kill you" in order to save other people's lives -- taking away that last chance to change your mind -- is frightening.

Most people's calculus about the lives in question is actually fairly complex, and may vary from day to day. But people's immediate conscious thoughts -- who they're happy with, who they're mad at -- may not accurately reflect what they would end up doing.

And so what's frightening about this isn't that the decision would be made by a third party, but that even if we ourselves individually made the decision, setting the knobs and dials of our car's Ethics-O-Meter every morning, we would be forcing ourselves to explicitly state what we really wanted to happen, and commit ourselves, staking our own lives and those of others on it. The opportunity to have a private calculus of life and death would go away.

As a side note, for cars this is less actually relevant, because there are actually very few cases in which you would have to choose between hitting a pedestrian and crashing into a tree which didn't come from driver inattention or other unsafe driving behaviors leading to loss of vehicle control -- precisely the sorts of things which self-driving cars don't have. So these mortal cases would be vanishingly rarer than they are in our daily lives, which is precisely where the advantage of self-driving cars comes from.

For robotic weapons such as armed drones, of course, these questions happen all the time. But in that case, we have a simple ethical answer as well: if you program a drone to kill everyone matching a certain pattern in a certain area, and it does so, then the moral fault lies with the person who launched it; the device may be more complex (and trigger our subconscious identification of it as being a "sort-of animate entity," as our minds tend to do), but ultimately it's no more a moral or ethical decision agent than a spear that we've thrown at someone, once it's left our hand and is on its mortal flight.

With the cars, the choice of the programming of ethics is the point at which these decisions are made. This programming may be erroneous, or it may fail in circumstances beyond those which were originally foreseen (and what planning for life and death doesn't?), but ultimately, ethical programming is just like any other kind of programming: you tell it you want X, and it will deliver X for you. If X was not what you really wanted, that's because you were dishonest with the computer.

The real challenge is this: if we agree on a standard ethical programming for cars, we have to agree and deal with the fact that we don't all want the same thing. If we each program our own car's ethical bounds, then we each have that individual responsibility. And in either case, these cars give us the practical requirement to be completely explicit and precise about what we do, and don't, want to happen when faced with a real-life trolley problem.
The computer brains inside autonomous vehicles will be fast enough to make life-or-death decisions. But should they? A bioethicist weighs in on a thorny problem of the dawning robot age.
...The whole point is to move the domain of discourse out of these false dilemmas and start thinking about what sort of new paradigms could evolve. An area with lots of children will result in the network adjusting itself such that the manner of driving makes trolley problems a close to nil occurrence.

It's simple. Self-driving cars will make these sorts of issues less common (while also making them more visible). Communicating self-driving cars will reduce them further. Communicating self-driving cars that include humans in their planning and actions over a wide radius, and that draw on models learnt from the past -- all leading to something that functions holistically -- would be best of all.

(Oh and self-aware cars. As in a car that is reluctant to drive based on its level of injury.)

Deen Abiola

Shared publicly  - 
This is a great article pointing out the dangers of getting most of your information on the state of AI (or anything) from press releases. Examples always show the best-case scenario and rarely ever acknowledge the existence of pathological cases highlighting how far the methodologies have to go (this is something I try not to do: either my examples are representative or I'll mention the limitations). The papers themselves are almost always more balanced.

The Deepmind games for example, are worth looking at in detail: you'll get a more grounded idea of what's possible (limited control/pole balancing games), what's novel (a better value prediction model, good!) and what's merely prestige fodder (cherry picked examples, misleading statements on who's first to first base).

But there is one section where I disagree with the essay.  Quoting:

> To sum up, CNN+RNN technology doesn’t understand numbers or colors. It doesn’t understand meaning of words. It’s nowhere near a real AI - probably closer to ten thousand monkeys striking keys at random in an attempt to replicate Shakespeare’s works.

That's a really unfair assessment. It's absolutely not closer to chaotic playwright monkeys. Random search would not get you to those results in a billion years. So it's closer to intelligence. Animal intelligence. But also very alien and very limited. 

What's been learned is a mapping from pixels to words (vectors) to sequences of words. Ultimately, giant computations on some set of functions. But there is real understanding: even though it's not at all focused on what we would view as most salient or important, it has hooked onto some meaningful set of discriminatory features which allow it to reason and make good predictions on examples from the same distribution as the training set. It has also achieved a compression of the data. That compression is a measure of understanding (see Brahe vs. Kepler). Alas, the errors tell us that even this style of understanding could be greatly improved, that the discriminatory features--alien though they might be--are still far from optimal.

I think though, that what people mean when they say it doesn't understand is that it hasn't learned a generative model but also, it hasn't learned a model from which non-trivial differences from the example set can be generated (Kepler vs Newton). In other words, if it really understood then it could tell stories, answer questions and infer non-visible states. Ultimately, though these machine learning models might be able to learn, they can't reason their way out of a paper bag. They're really rather inflexible. It is in that way that they can be both intelligent and incredibly stupid.
What you wanted to know about AI 2015-03-16 Recently a number of famous people, including Bill Gates, Stephen Hawking and Elon Musk, warned …
Totally agree with your rebuttal of the million monkeys claim, btw.

Deen Abiola

Shared publicly  - 
So, uhm, re-sharing to issue a correction/clarification by +Larry Tesler, who weighed in*, stating:

> What I believe I said (around 1970) was “Intelligence is whatever machines haven't done yet”. If and when artificial intelligence surpasses human intelligence, people might conclude, as you propose, that there is no such thing as intelligence. Or they might simply redefine intelligence as "whatever humans haven't done yet” as they try to catch up with AI.
//I'm thinking though, that the humans are not going to be very happy with such a state of affairs, preferring instead, to point out that intelligence is completely overrated anyways. Was it even ever good for anything? 

* So yeah that was totally unexpected. Still getting used to the idea that thanks to the internet, pioneers, once only found in books and archives, can temporarily exist as real people too.
Tesler's Theorem states that "AI is whatever hasn't been done yet." From this we can deduce that once AI reaches human parity we will have to conclude that there is no such thing as intelligence.

Deen Abiola

Shared publicly  - 
I’m Nobody! Who are you?
Are you – Nobody – too?
Then there’s a pair of us!
Don’t tell! they’d advertise – you know!

How dreary – to be – Somebody!
How public – like a Frog –  
To tell one’s name – the livelong June –  
To an admiring Bog!

-- Emily Dickinson
Google doesn't want to talk about numbers.

Deen Abiola

Shared publicly  - 
#The Current Best Hypothesis is that the Brain is Computable

There are many (most?) people who dispute the idea that the brain is computable—there is something different and special about the human brain, they say. This cannot be settled for now, but my own stance is a basic one: you may be right that the brain is somehow magical, but my position is simpler and, all things being equal, more likely to end up as the correct one.

The argument that the brain is not a machine broadly comes in three flavors. Some lean to science and say: something, something quantum or other (quantum gravity if you want to be really fancy). Others think the brain magical, such as possessing a soul. The final group simply argues that the brain is not computable.

##The Brain is not Computable

It is not uncommon to see the argument put forward that the brain is not computable, that what computers do is mere mechanistic cranking of mathematical algorithms. This is true, but who's to say the brain isn't doing the same?

Occam's razor, Bayesian smoothing and regularization are all tools to keep one from over-fitting the evidence and failing to generalize. They are not laws, but tools to help you minimize your regret—make the fewest learning mistakes—over time. They do not say your idea must be simple, only that it does not say more than is possible given the data. The idea that the brain is computable fits within this regime as the hypothesis that is the simplest fit to the data. Why?

I often hear the point made that people once compared the brain to clockwork and steam engines--comparisons we now know to be false--so what makes you think an equivalence (and not just analogy) with computers won't show the same failing in time? Small aside: the comparison between steam engines and the brain is, thanks to the link between thermodynamics and information, actually more interesting than one might at first think.

###Universal Turing Machines

Turing Machines are, unlike a clock, universal. They can emulate any machine or procedure that is "effectively calculable". Our physical theories might use crutches such as real numbers or infinities but are, at the end of the day, only testable using computable procedures and numbers. This is what sets Turing Machines apart: any testable quantitative theory about the universe that we can expect to devise will be simulatable (given enough time) on a Turing Machine. (Note: this is not the same thing as the Church-Turing Thesis; instead of placing the restriction on the universe as CT does, it places it on any testable theory that compresses data--that is, on anything more than a map from observation to expected outcome.)

Even if it turns out that some physical things like the brain cannot be computed, it is simpler to believe that whatever non-computability the brain exploits is not unique to the exact biochemical make-up of brains.

##Machines cannot have Souls

Interestingly, Occam's Razor applies here too, and my argument is short. Even if souls are a property of the universe unexplainable by science, it is still simpler to believe that the pattern and arrangement of matter that ends up with things acquiring souls is not unique to a gelatin soup of fats and proteins. Something that thinks and acts as if it is conscious, is (in essence, I drop the extra requirement that the object must also be an organic, human-brain-like thing). That, in a nutshell, is also Turing's argument.

But what is fascinating is that computer science has made the idea of a soul a scientific and testable hypothesis. If we do build intelligences (and maybe some of them will be more intelligent than humans in every way measurable) and yet they never wake up or attain consciousness or anything resembling it (that is, nothing ever passes for consistently conscious but humans), then this is very suggestive of something unique and special about human beings. Until then, that hypothesis is unnecessarily complex.

##Quantum Mechanics

Quantum mechanics is the go-to argument for people who want to appear scientific even while talking nonsense. However, it is possible that the brain does something that our current machines cannot.

It is overwhelmingly unlikely that the brain is a *Quantum Computer*: what we know about quantum mechanics makes it implausible that coherent and entangled states could survive in something as wet, noisy and hot as the brain. Additionally, humans do poorly at things we expect Quantum Computers to be good at (things such as factoring, or perceiving quantum interactions intuitively--simulating quantum evolution). In fact, regular Turing Machines already outpace us in many areas; we don't dwell on the fact that we're terrible at deductive reasoning, arithmetic or enumerating the possibilities of a large search space; for those things, it did not take long for computers to surpass human ability.

But suppose the brain were not a quantum computer yet still leveraged quantum mechanical artifacts for its functioning--artifacts unavailable to our machines--then it is possible that current efforts will not lead to AGI.

In a certain trivial sense everything is quantum mechanical, in that an agent adhering to predictions based on the theory will be able to explain the world with the highest accuracy. Of course, with such a broad definition, even the computer you are currently reading this on is a quantum one--not at all a helpful distinction.

Yet there is also a non-trivial sense in which quantum effects can be leveraged. We see this with our current processors; part of the difficulty with getting higher speeds and lower power is that (amongst other reasons) quantum tunneling effects get in the way. Biological homing mechanisms and photosynthesis have also been implicated in taking advantage of quantum effects.

Evolution is extremely powerful at coming up with unexpected uses for subtle phenomena. Consider the following, from a fascinating [article](

>A program is a sequence of logic instructions that the computer applies to the 1s and 0s as they pass through its circuitry.  So the evolution that is driven by genetic algorithms happens only in the virtual world of a programming language. What would happen, Thompson asked, if it were possible to strip away the digital constraints and apply evolution directly to the hardware?  Would evolution be able to exploit all the electronic properties of silicon components in the same way that it has exploited the biochemical structures of the organic world? 
>In order to ensure that his circuit came up with a unique result, Thompson deliberately left a clock out of the primordial soup of components from which the circuit evolved. Of course, a clock could have evolved. The simplest would probably be a "ring oscillator"--a circle of cells that change their output every time a signal passes through.
> But Thompson reckoned that a ring oscillator was unlikely to evolve because only 100 cells were available.  So how did evolution do it—and without a clock? When he looked at the final circuit, Thompson found the input signal routed through a complex assortment of feedback loops.  He believes that these probably create modified and time-delayed versions of the signal that interfere with the original signal in a way that enables the circuit to discriminate between the two tones. "But really, I don't have the faintest idea how it works," he says.  One thing is certain: the FPGA is working in an analogue manner. 
>Up until the final version, the circuits were producing analogue waveforms, not the neat digital outputs of 0 volts and 5 volts.  Thompson says the feedback loops in the final circuit are unlikely to sustain the 0 and 1 logic levels of a digital circuit. "Evolution has been free to explore the full repertoire of behaviours available from the silicon resources," says Thompson.
>Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit's operation.  Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all—there was no route of connections by which they could influence the output.  And yet if he disconnected them, the circuit stopped working. It appears that evolution made use of some physical property of these cells—possibly a capacitive effect or electromagnetic inductance—to influence a signal passing nearby.  Somehow, it seized on this subtle effect and incorporated it into the solution. 
>But how well would that design travel? To test this, Thompson downloaded the fittest configuration program onto another 10 by 10 array on the FPGA. The resulting circuit was unreliable. Another challenge is to make the circuit work over a wide temperature range. On this score, the human digital scheme proves its worth. Conventional microprocessors typically work between -20 °C and 80 °C. Thompson's evolved circuit only works over a 10 °C range--the temperature range in the laboratory during the experiment. This is probably because the temperature changes the capacitance, resistance or some other property of the circuit's components.

Although this is the result of a genetic algorithm, a similarity with its natural counterpart is found: the exploitation of subtle effects and specificity to the environment it was evolved within. The article shows us two things: evolution is not bounded by man's windowed creativity, and, even if our current designs do not leverage some subtle effect while brains do, there's no reason why we could not build a process that searches over hardware to leverage similarly powerful effects. The search could also be more guided: instead of random mutations, we could have something that learns via reinforcement which actions to take for a given state of components and connections (plus another component suggesting parts, to inject freshness); we then select the best-performing programs from the pool as the basis of the next round and reward the proposal generators appropriately.
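The select-the-best-and-reward loop just described can be illustrated with a toy, elitist evolutionary sketch. Everything here is invented for illustration--the function names, the bit-string standing in for a "circuit", the mutation rate; a real version would search over hardware configurations and learn its mutation proposals rather than flipping bits at random:

```python
import random

def evolve(fitness, mutate, seed, pool_size=20, generations=100):
    """Minimal (mu + lambda)-style loop: mutate everything in the pool,
    then keep the fittest pool_size candidates (elitist, so the best
    score never decreases)."""
    pool = [seed] * pool_size
    for _ in range(generations):
        offspring = [mutate(p) for p in pool]
        pool = sorted(pool + offspring, key=fitness, reverse=True)[:pool_size]
    return pool[0]

# Toy stand-in for a circuit: a bit-string whose fitness is its number of 1s.
random.seed(0)
flip = lambda bits: [b ^ 1 if random.random() < 0.1 else b for b in bits]
best = evolve(fitness=sum, mutate=flip, seed=[0] * 16)
```

Swapping in a learned proposal generator would mean replacing `flip` with something stateful that is rewarded whenever its mutations survive selection.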

Returning to the quantum: what if there were something subtle about ion channels or neuron vesicles that allowed more powerful computation than one might expect? Perhaps something akin to a very noisy quantum annealing process is available to all animal brains' optimization and problem-solving processes? The advantage need not even be quantum: it might be that subtle electromagnetic effects are leveraged in a way that allows more efficient computation per unit time. This argument is one I've never seen made--yet, still, it consists of much extra speculation. Plausible though it is, I will only shift the weight of my hypotheses in that direction if we hit some insurmountable wall in our attempts to build thinking machines. For now, after seeing how inherently mathematical the operations we perform with [our language are]( (some may dispute that this is cherry-picking, but the fact that this is possible at all is highly suggestive and strongly favors moving away from skepticism), it is premature to hold such (and other) needlessly complex hypotheses on the uniqueness of the human brain.


I have not argued against the soul, or against the claim that the brain is incomputable or somehow special; instead, I've argued that such hypotheses are unnecessary given what we know today. And even indirectly, when we look at history, we see one in which assumptions of specialness have tended not to hold. The Earth is not the center of the universe, the speed of light is finite, simultaneity is undefined, what can be formally proven in any given theory is limited, a universal optimal learner is impossible, most things are computationally intractable, entropy is impossible to escape, most things are incomputable, most things are unlearnable (and not interesting), there is only a finite amount of information that can be stored within a particular volume (which depends on surface area, not volume), the universe is expanding, baryonic matter makes up only a fraction of the universe, earth-like planets are common, some animals are capable of some mental feats that humans are not, the universe is fundamentally limited to being knowable by probabilistic means (which is not the same thing as the universe being non-deterministic)!

While one cannot directly draw any conclusions about the brain from these, when constructing our prior (beliefs) it perhaps behooves us to take them as evidence suggesting a weighting away from hypotheses reliant on exceptions and special clauses.
Human consciousness is just one type of consciousness. AI won't be human.  But they'll be our descendants in a sense; only built, not born. They'll have their roots in the world we experience, because that's what we tend to care most about, even as we extend the range of their capabilities beyond the useful but limited biological sensors that have developed here.

Human consciousness will also evolve as we enhance our biological experience with sensors and intelligent processing that extend our own awareness.

To facilitate discussions, we'll need definitions and scales for terms like consciousness and self-awareness. Definitions that could be applied to humans, other animals, and other forms of evolving sentience as well. We'll need to develop new areas of ethics and law to complement our evolving capabilities. 

We've already seen signs that other animals demonstrate aspects of consciousness [1], so I'm inclined to think we'll discover that the most human-like aspects of higher processing (self-awareness [2], sentience, compassion, empathy, grief, dignity, integrity, a sense of justice, fairness etc.) will be revealed as emergent properties of increasing intelligence and natural or artificial selection.

Emotional processing is crucial to the experience of being human - and to our natural selection and evolution. It will be interesting to see if, how, and why we incorporate those ingredients into purely computational intelligence.  


[2] I refer to the conscious awareness of self that +Mark Bruce mentioned, more than self-tests or sensory consciousness. A sensory perception of "blue" likely has meaning beyond the transmission of data that is more artistic or specific to humans and related animals. But outside the artistic and/or emotional idea of "beauty", it's a useful interpretation of distance (wavelengths), just as the sensation of hearing words is a combination of pressure, displacement, frequency, and more. We can use other sensors to gather the information; any specific sensory consciousness is just one way for our brains to efficiently interpret massive amounts of data. For example, we can toss a ball to a blind person by relaying the information through sensors mapped to their back or their tongue. We can already provide enough sensory experience for them to catch the ball. I'd imagine we could eventually develop a similar information exchange to help them experience the ball's color as well. The necessary information would be conveyed, but it would be a different sensory experience.

It's the higher self-awareness (e.g. that we are temporary and can be unplugged) that make things interesting. 

Deen Abiola

Shared publicly  - 
#Summarization via Visualization and Graphs. 

Also, there's a typo (okay, at least one) in the Iran Agreement (well, in the version posted on medium and as of this writing...).  

_G+ note, inferior duplication of medium version posted here:

Ah, this is not part of the order of posting I planned, but hey...it's not every day you get to analyze (and find a trivial mistake in) a government document. Since May, I've been writing a really fast, thread-safe, fully parallel NLP library because everything else I've tried is either too bloated, too slow to run or train, not thread-safe, too academic, too license-encumbered or uses too much memory.

More pertinently, I've also been on a life-long quest to figure out some way to effectively summarize documents. Unfortunately, technology is as yet, too far away for my dream intelligent abstract summarizer—every single one of my apparently clever ideas have been unmasked as impostors and pretenders, always self-annihilating in exasperating puffs of failure. Sigh.

However, I have been able to combine ideas that work efficiently on today's machines to arrive at a compromise (plenty more on that in the future). One key idea has been representing text at a layer above just strings, think Google's word2vec but requiring orders of magnitude less computation and data for good results (to be more specific, I use reflective random indexing and directional vectors—which go just a bit beyond bag of words). 
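For readers unfamiliar with random indexing, here is a toy sketch of the plain one-pass variant (the reflective variant I use re-runs the accumulation with the learned context vectors as the new index vectors). The dimensions, window size and function names here are illustrative, not taken from my library:

```python
import random

def random_index_vectors(docs, dim=200, nnz=8, seed=42):
    """One-pass random indexing: give every word a sparse random
    "index" vector, then build each word's dense "context" vector by
    summing the index vectors of its co-occurring neighbours."""
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    # Sparse ternary index vectors: a few +1/-1 entries, rest zero.
    index = {}
    for w in vocab:
        v = [0.0] * dim
        for pos in rng.sample(range(dim), nnz):
            v[pos] = rng.choice((1.0, -1.0))
        index[w] = v
    # Context vectors: sum index vectors within a +/-2 word window.
    context = {w: [0.0] * dim for w in vocab}
    for doc in docs:
        for i, w in enumerate(doc):
            for j in range(max(0, i - 2), min(len(doc), i + 3)):
                if j != i:
                    context[w] = [a + b for a, b in zip(context[w], index[doc[j]])]
    return context

docs = [["iran", "will", "permit", "iaea", "monitoring"],
        ["iaea", "will", "verify", "iran", "enrichment"]]
vecs = random_index_vectors(docs)
```

Unlike word2vec, there is no gradient training at all, which is why it is cheap enough to run on a single document in milliseconds.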

Once vectors have been generated (it took my machine 500 ms to do this) and sentences have been tagged with parts of speech, interesting possibilities open up. For example, the magnitude of a vector is an indication of how important a word is; it's similar to word count but orders words in a way that better reflects a word's importance (counts, once you remove common stopwords, are actually infuriatingly good at this already--infuriating because it can be hard to come up with something both better and less dumb). It can also work when few words are repeated, so it's more flexible. Applying this to the Iran document, I get as the top 10 most important nouns:

> "iran, iaea, fuel, year, centrifuge, reactor, uranium, enrichment, research, joint"

And for verbs: 

> "include, test, verify, modernise, permit, fabricate, redesign, monitor, intend, store"
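The magnitude ranking behind those lists can be sketched in a few lines (the function name and toy vectors below are mine, for illustration; the real vectors come from reflective random indexing):

```python
from math import sqrt

def top_words_by_magnitude(vectors, pos_tags, wanted_pos, k=10):
    """Rank words of one part of speech by the L2 norm of their vectors."""
    norm = lambda v: sqrt(sum(x * x for x in v))
    candidates = [w for w in vectors if pos_tags.get(w) == wanted_pos]
    return sorted(candidates, key=lambda w: norm(vectors[w]), reverse=True)[:k]

# Toy vectors standing in for reflective-random-indexing output.
toy_vecs = {"iran": [3.0, 4.0],    # norm 5.0
            "fuel": [1.0, 2.0],    # norm ~2.24
            "verify": [2.0, 2.0]}  # norm ~2.83
toy_tags = {"iran": "NN", "fuel": "NN", "verify": "VB"}
print(top_words_by_magnitude(toy_vecs, toy_tags, "NN", k=2))  # ['iran', 'fuel']
```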

This is useful and, being able to select a link, press a hot key and get a small window displaying a similar result for any page will, I think, be a useful capability to have in one's daily information processing toolkit. However, such a summary is limited. One idea is to take the top nouns, find their nearest neighbors but limit them to verbs and adjectives. Here's what I get: 

> "iran: include/produce/keep is...future, subsequent, consistent
> year: keep/conduct/initiate is...more, future, consistent
> iaea: monitor/verify/permit is...necessary, regular, daily
> fuel: fabricate/intend/meet is...non-destructive, ready, international
> uranium: seek/enter/intend is...future, natural, initial
> reactor: modernise/redesign/support is...iranian, international, light
> centrifuge: occur/remain/continue is...single, small, same
> production: include/need/produce is...current, future, consistent
> use: include/produce/meeting is...subsequent, initial, destructive
> arak: modernise/redesign/support is...light, iranian, international
> research: modernise/redesign/support, appropriate, light
> jcpoa: declare/implement/verify is...necessary, consistent, continuous

Reading this, I see the results are almost interpretable. There's the IAEA who will monitor Iran and JCPOA too, or something...I'm guessing. There's lots of emphasis on Iran's future and modernization, as well as limitations on uranium production and instruments—centrifuges in particular—in use (at this point, I'd like to point out that I've absolutely not even looked at the original document and don't ever plan to). I don't know if this method will ultimately prove useful; a lot of work involves experimenting with what actually works in day to day use. Some features are simply not worth the cognitive overhead of even just knowing they exist.
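The part-of-speech-restricted neighbour lookup used above can be sketched as follows (toy vectors and hypothetical names again; the real version runs over the RRI vectors):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def neighbours_of(word, vectors, pos_tags, allowed_pos, k=3):
    """Nearest neighbours of `word`, restricted to the given parts of speech."""
    scored = [(cosine(vectors[word], vectors[w]), w)
              for w in vectors
              if w != word and pos_tags.get(w) in allowed_pos]
    return [w for _, w in sorted(scored, reverse=True)[:k]]

# Toy vectors/tags invented for illustration.
vecs = {"iran": [1.0, 0.0], "include": [0.9, 0.1],
        "produce": [0.8, 0.3], "blue": [0.0, 1.0]}
tags = {"iran": "NN", "include": "VB", "produce": "VB", "blue": "JJ"}
print(neighbours_of("iran", vecs, tags, {"VB"}, k=2))  # ['include', 'produce']
```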

It was at this point that I decided to graph the result. The basic idea: connect all the words, with edge weights computed from pairwise cosine similarities, but limit connections to be of the type VERB=>NOUN=>VERB; then apply a maximum spanning tree to prune the edges and make the graph actually readable. Instead of just grouping words by similarity, we impose some grammatical structure and hopefully get something a bit more structured.
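The pruning step is a standard Kruskal-style maximum spanning tree; here is a minimal sketch, where the VERB=>NOUN=>VERB constraint is enforced by only generating noun-to-verb candidate edges and the weights are made-up stand-ins for cosine similarities:

```python
def max_spanning_tree(nodes, edges):
    """Kruskal's algorithm, heaviest edges first: keep an edge only if it
    joins two components that aren't yet connected (no cycles)."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    kept = []
    for w, a, b in sorted(edges, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            kept.append((w, a, b))
    return kept

# Candidate edges only link nouns to verbs (the VERB=>NOUN=>VERB shape);
# the weights stand in for pairwise cosine similarities.
nouns, verbs = ["iran", "fuel"], ["verify", "include"]
edges = [(0.9, "iran", "verify"), (0.7, "iran", "include"),
         (0.8, "fuel", "include"), (0.2, "fuel", "verify")]
tree = max_spanning_tree(nouns + verbs, edges)
# The weakest edge (fuel-verify) is pruned; the other three form the tree.
```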

It was while browsing that graph I found the typo:

![alt text](images/chennals_iran.png) 

I'm fairly certain that "Chennals" is not some fancy Nuclear Engineering jargon.

![alt text](images/whitehouse-typo.png)

##Network Examples

I also built a graph using an algorithm utilizing inputs from a phrase chunker, which then tries to build short understandable phrases (verb dominant phrases can only link to noun phrases), another on sentences and another from paragraphs. The gray shaded and golden edge nodes tend to be most important and are worth zooming into. Around those will be all the most similar phrases/sentences/paragraphs. 

##Click for: [Single Words Example](

![alt text](images/QKy5v9bhf5.gif)

Although this graph visualization was originally meant to compare and contrast (via orthogonal vectors) two or more documents, it works well enough as a summarization tool. In case you're curious, the graph visualization toolkit I'm using is the excellent vis.js (I welcome any suggestions that'll improve on the sometimes cluttered layout).

##Click for: [Phrases Example Network](

The Phrases example is clearly more comprehensible than the single-word approach but is not without flaws--there are incomplete thoughts and redundancies. On the other hand, we see that similar phrases are grouped together. It's worth noting that each phrase is represented by a single (200D) vector, hence the groupings are not based on string similarities. And, despite the algorithm not lowercasing all words, the method still groups differently cased words together, suggesting that it captures something more than "these words tend to be near each other". It also groups conjugations and phrases in a non-trivial sense, as seen with higher-level groupings like:

* produce fuel assemblies/fuel core reloads/fuel will be exhausted/spent fuel  
* can be used/future use

Those are not just cherry-picked samples, as you can see for yourself in the link above. The method holds generally in all documents I've tried. Additionally, it's worth remembering that nodes aren't just grouped by similarity but must also meet the very basic noun-phrase-ish => verb-phrase-ish structure I mentioned. The goal is to get something sufficiently comprehensible while being non-linear and more exploratory. By zooming in and out and hiding irrelevant nodes, I can go into more or less depth as I please. This, together with basic question answering on arbitrary text, forms my very basic approximation of non-linear reading/knowledge acquisition. You can think of skimming as a far distant ancestor of this approach.


[Paragraphs Example](

Zooming out is, I've found, important when dealing with longer text items (it removes clutter). You can then click a node, which hides anything not in its neighborhood, making it easier to read when zoomed in. Other useful features: the ability to search for a word, and the ability to hover over nodes to get at their text.

![alt text](images/summary_0.png)

##Text Summaries

Similar to connecting verbs and nouns, I tried connecting augmented noun phrases and verb phrases (using a very, very simple rule on how to join phrases to maximize coherence). With that, for the top 5 phrases, I got:

>"2. Iran will modernise the Arak heavy water research reactor to support peaceful nuclear research and radioisotopes production:
to be a multi-purpose research reactor comprising radio-isotope production/to support its peaceful nuclear research and production needs and purposes/to monitor Iran ’s production
>Iran ’s uranium isotope separation-related research and development or production activities will be exclusively based:
to any other future uranium conversion facility which Iran might decide to build/to verify the production/to minimise the production
>Iran ’s enrichment and enrichment R&D activities are:
to meet the enrichment and enrichment R&D requirements/conducting R&D/to enable future R&D activities
>Iran will maintain no more than 1044 IR-1 centrifuge machines:
will use no more than 348 IR-1 centrifuges/are only used to replace failed or damaged centrifuges/balancing these IR-1 centrifuges
>Iran will permit the IAEA to implement continuous monitoring:
will permit the IAEA to implement continuous monitoring/will permit the IAEA to verify the inventory/will allow the IAEA to monitor the quantities

This, I think, is actually a pretty decent summary. It's far from perfect, but I've got a much better idea of what's in the document despite the summary being fairly short. It's also not a verbatim extractive summarizer (since it's constructing and combining phrases, which, incidentally, also ends up compressing sentences. Although...if a proper generalizing summarizer were a human, this would be like the last common ancestor of humans and mice. Or maybe lice. sigh).

Closer to more typical extractive methods is a very simple method I came up with that generates vectors for sentences using RRI. The method takes the largest-magnitude sentence and then finds the nearest sentence that gets within x% of its magnitude (I use x = 50%). A running sum of all accepted vectors is kept, and a candidate sentence must also have > 0.7 similarity with this memory vector. This is repeated over all sentences. I've found that this method tends to create far more fluid summaries than is typical for extractive summarizers while working on almost all document types (it even does a fair job on complex papers and forum threads). For this Agreement, we get the below at 10% of the original document length:
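A minimal sketch of that greedy selection loop in Python. This is my own reconstruction from the description above, not the actual implementation: the RRI sentence vectors are assumed to already be computed, and `fluid_extract`, `mag_frac` and `mem_sim` are made-up names.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, defined as 0 for zero vectors."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0.0 or nb == 0.0 else float(a @ b) / (na * nb)

def fluid_extract(sent_vecs, mag_frac=0.5, mem_sim=0.7):
    """Greedy extractive selection over precomputed sentence vectors.

    Start from the largest-magnitude sentence. At each step, consider only
    sentences whose magnitude is within `mag_frac` of the current one's,
    walk to the nearest such sentence, and accept it only if it has
    > `mem_sim` cosine similarity with the running sum ("memory") of
    accepted vectors. Returns chosen indices in document order.
    """
    mags = [np.linalg.norm(v) for v in sent_vecs]
    current = int(np.argmax(mags))           # largest-magnitude sentence
    chosen = {current}
    memory = np.array(sent_vecs[current], dtype=float)
    while True:
        candidates = [i for i in range(len(sent_vecs))
                      if i not in chosen and mags[i] >= mag_frac * mags[current]]
        # nearest remaining candidate to the current sentence first
        candidates.sort(key=lambda i: cosine(sent_vecs[i], sent_vecs[current]),
                        reverse=True)
        picked = next((i for i in candidates
                       if cosine(sent_vecs[i], memory) > mem_sim), None)
        if picked is None:
            break
        chosen.add(picked)
        memory += sent_vecs[picked]          # update the memory vector
        current = picked
    return sorted(chosen)
```

The memory-vector check is what keeps the output coherent: a sentence that is large in magnitude but off-topic relative to everything already selected gets skipped, which plausibly explains the "fluid" feel of the resulting summaries.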

## More Fluid Extracted Summary

"Destructive and non-destructive testing of this fuel including Post-Irradiation-Examination (PIE) will take place in one of the participating countries outside of Iran and that country will work with Iran to license the subsequent fuel fabricated in Iran for the use in the redesigned reactor under IAEA monitoring.  

Iran will not produce or test natural uranium pellets, fuel pins or fuel assemblies, which are specifically designed for the support of the originally designed Arak reactor, designated by the IAEA as IR-40. Iran will store under IAEA continuous monitoring all existing natural uranium pellets and IR-40 fuel assemblies until the modernised Arak reactor becomes operational, at which point these natural uranium pellets and IR-40 fuel assemblies will be converted to UNH, or exchanged with an equivalent quantity of natural uranium. 

Iran will continue testing of the IR-6 on single centrifuge machines and its intermediate cascades and will commence testing of up to 30 centrifuge machines from one and a half years before the end of year 10. Iran will proceed from single centrifuge machines and small cascades to intermediate cascades in a logical sequence.  

Iran will commence, upon start of implementation of the JCPOA, testing of the IR- 8 on single centrifuge machines and its intermediate cascades and will commence the testing of up to 30 centrifuges machines from one and a half years before the end of year 10. Iran will proceed from single centrifuges to small cascades to intermediate cascades in a logical sequence. 

In case of future supply of 19.75% enriched uranium oxide (U3O8) for TRR fuel plates fabrication, all scrap oxide and other forms not in plates that cannot be fabricated into TRR fuel plates, containing uranium enriched to between 5% and 20%, will be transferred, based on a commercial transaction, outside of Iran or diluted to an enrichment level of 3.67% or less within 6 months of its production.  

Enriched uranium in fabricated fuel assemblies from other sources outside of Iran for use in Iran’s nuclear research and power reactors, including those which will be fabricated outside of Iran for the initial fuel load of the modernised Arak research reactor, which are certified by the fuel supplier and the appropriate Iranian authority to meet international standards, will not count against the 300 kg UF6 stockpile limit. 

 This Technical Working Group will also, within one year, work to develop objective technical criteria for assessing whether fabricated fuel and its intermediate products can be readily converted to UF6. Enriched uranium in fabricated fuel assemblies and its intermediate products manufactured in Iran and certified to meet international standards, including those for the modernised Arak research reactor, will not count against the 300 kg UF6 stockpile limit provided the Technical Working Group of the Joint Commission approves that such fuel assemblies and their intermediate products cannot be readily reconverted into UF6. This could for instance be achieved through impurities (e.g.  burnable poisons or otherwise) contained in fuels or through the fuel being in a chemical form such that direct conversion back to UF6 would be technically difficult without dissolution and purification. 

Iran will permit the IAEA to monitor, through agreed measures that will include containment and surveillance measures, for 25 years, that all uranium ore concentrate produced in Iran or obtained from any other source, is transferred to the uranium conversion facility (UCF) in Esfahan or to any other future uranium conversion facility which Iran might decide to build in Iran within this period.  

If the absence of undeclared nuclear materials and activities or activities inconsistent with the JCPOA cannot be verified after the implementation of the alternative arrangements agreed by Iran and the IAEA, or if the two sides are unable to reach satisfactory arrangements to verify the absence of undeclared nuclear materials and activities or activities inconsistent with the JCPOA at the specified locations within 14 days of the IAEA’s original request for access, Iran, in consultation with the members of the Joint Commission, would resolve the IAEA’s concerns through necessary means agreed between Iran and the IAEA. " 
Thanks +Mark Bruce it means a lot to hear that. I've been working on and off for the past year but it's only the past 3 months that things have been stable enough for me to really be able to focus on it.

Deen Abiola

Shared publicly  - 

This is a really pretty visualization of how decision trees work. It's less about machine learning proper, which is actually a strength since it can be that much more concrete.

My only super tiny quibble is with overview 1). I'd say instead that drawing boundaries applies only to discriminative learners and not to more probabilistic methods (also, not all learners are statistical, but apparently there's a duality between sampling and search which muddies neat divisions).

I'd also further characterize over-fitting as memorizing the data: model complexity (the number of parameters) is unjustified by, or outmatches, the available data. It stems from a lack of smoothing, which is when you don't filter out noise but instead explain every little detail using some really impressive bat deduction [1]. Humans do this when concocting conspiracies or reasoning from stereotypes, initial impressions and anecdotes.
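To make that concrete, here is a generic numpy toy (not from the linked post): fit a straight line and a degree-9 polynomial to a dozen noisy points drawn from a linear trend. The high-capacity model "memorizes" the noise, so its training error drops, but nothing about the data justifies the extra parameters.

```python
import numpy as np

# 12 noisy samples of a simple linear relationship (hypothetical toy data)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

def train_error(degree):
    """Mean squared error on the *training* points for a polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-9 polynomial has enough parameters to chase the noise, so its
# training error can never exceed the straight line's (the line is a special
# case of it) -- but the extra "detail" it explains is mostly noise, and it
# will generalize worse on fresh samples from the same line.
line_err, wiggle_err = train_error(1), train_error(9)
```

The point isn't that low training error is bad, it's that the gap between model capacity and data is where the conspiracy-theory-style over-explanation creeps in.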

A Visual Introduction to Machine Learning

If you haven't seen this yet, it's pretty awesome!
What is machine learning? See how it works with our animated data visualization.
3 comments on original post
Excellent additional explanation, +Deen Abiola. 

Deen Abiola

Shared publicly  - 
Just took a few days off after the DARPA Robotics Challenge. In case anyone is interested, here are the sideline reports that I sent back to CSAIL each evening, along with some video summaries.
Day 1:
MIT had a great (though not perfect) run yesterday, and I couldn't be prouder.
Long story short, we made a human operator error when transitioning the robot from driving mode to the "egress" mode, and forgot to turn the driving controller off. This conspired, through a series of events, into a tragic faceplant out of the car onto the asphalt. Our right arm was broken, as were a few of our key sensors (an arm encoder). We called a reset -- taking a 10 min penalty -- got the robot back up and ready to go... But our right arm was hanging completely limp. That was unfortunate because we were planning on doing all of the tasks right-handed.
In an incredible display of poise and cleverness from the team, and an impressive showing from the algorithms, we were able to adapt and perform almost all of the tasks left-handed. The only point we had to skip was the drill (we need both hands to turn the drill on). Even the walking on terrain and stairs looked fantastic despite having 10 kg flopping passively at the end of one arm.
After the officials' review of the video, we were awarded the egress point and are in 4th place (the best of the non-wheeled robots). The robot is fixed and we know that we are capable of beating the top scores from yesterday in our run today. It's scheduled for 1:30pm Pacific. Wish us luck!
- Russ
Day 2:
Day 2 was a roller coaster. Boston Dynamics was able to repair the robot damage from Day 1 that same evening -- they are amazing. But when we got in to test the robot very early on Day 2, it powered down after just a minute or two of operation. It turned out that a small problem with the coolant lines overheated the PDB and main pump motor. The next 8 hours were chock-full of high-stress robot debugging by Boston Dynamics and MIT (the heat caused collateral damage to the CPU BIOS and hard disks). Even at the start line we had a complete wrist failure and a last-minute actuator hot swap. I can only speak for myself, but I was physically and emotionally exhausted.
We finally started our run 30 min late. It started fantastically well. We actually passed the other top teams that were running on the parallel courses but had started up to 30 min earlier. We drove, egressed, walked through the door, turned the valve, picked up the drill, turned it on. And then... We pushed a little too hard into the wall. The wrist temperature was rising -- if we tripped the temperature fault then the wrist would have shut off completely (not good when you're holding a drill in a wall). We had to back off before the cut. Then we started cutting, but the bit slipped out of the wall during the cut. The operators saw it and tried to go back to fix it, but the drill has a 5 min automatic shutoff. Once off, it's extremely hard to turn back on. Our very real opportunity to win the entire competition slipped away from us in an instant.
We knew we had to get all of the points (and quickly) to win, so we tried the only thing we could. We told the robot to punch the wall. The drywall didn't fall. After a few tries something happened -- it looked like a lightning bolt hit the robot, some sort of fault caused the robot to fall. Our recovery and bracing planner kicked in automatically and the robot fell gently to the ground. But we had to pull it off the course to stand it up and start again.
With the win now out of reach, we decided to finish strong by doing the rough terrain and stairs (two of our favorites). They were beautiful to watch.
Our team had far more perception and planning autonomy than any of the other teams I was able to observe (most used teleop; ultimately the tasks were too easy). Our tools and our team were definitely capable of winning. There was just too much luck involved, and it wasn't our day.
We're incredibly disappointed, but I couldn't be prouder of our team and the tools. The amount of adversity that they overcame even this week is incredible. They did it with brains and class.
- Russ… (tells the story)… (shows the robot and our interface in action)
3 comments on original post
Interesting view into the challenges and work that go on behind the scenes of these demanding competitions. Congrats to the teams and their impressive results - thanks for sharing!

Deen Abiola

Shared publicly  - 
The abstract of DeepMind's recent publication in Nature [2] on learning to play video games claims: "While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces.” It also claims to bridge "the divide between high-dimensional sensory inputs and actions.” Similarly, the first sentence of the abstract of the earlier tech report version [1] of the article [2] claims to "present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning.”

However, the first such system [3] was created earlier at the Swiss AI Lab IDSIA, former affiliation of three authors of the Nature paper [2].

The system [3] indeed was able to "learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning” (quote from the abstract [2]), without any unsupervised pre-training. It was successfully applied to various problems such as video game-based race car driving from raw high-dimensional visual input streams.

It uses recent compressed recurrent neural networks [4] to deal with sequential video inputs in partially observable environments, while DeepMind's system [2] uses more limited feedforward networks for fully observable environments and other techniques from over two decades ago, namely, CNNs [5,6], experience replay [7], and temporal difference-based game playing like in the famous self-teaching backgammon player [8], which 20 years ago already achieved the level of human world champions (while the Nature paper [2] reports "more than 75% of the human score on more than half of the games”).

Neuroevolution also successfully learned to play Atari games [9].

The article [2] also claims "the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks”. Since other learning systems also can solve quite diverse tasks, this claim seems debatable at least.

Numerous additional relevant references can be found in Sec. 6 on "Deep Reinforcement Learning” in a recent survey [10]. A recent TED talk [11] suggests that the system [1,2] was a reason why Google bought DeepMind, indicating commercial relevance of this topic.


[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, M. Riedmiller. Playing Atari with Deep Reinforcement Learning. Tech Report, 19 Dec. 2013,

[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, D. Hassabis. Human-level control through deep reinforcement learning. Nature, vol. 518, p 529, 26 Feb. 2015.

[3] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez. Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. In Proc. Genetic and Evolutionary Computation Conference (GECCO), Amsterdam, July 2013.

[4] J. Koutnik, F. Gomez, J. Schmidhuber. Evolving Neural Networks in Compressed Weight Space. In Proc. Genetic and Evolutionary Computation Conference (GECCO-2010), Portland, 2010.

[5] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position - Neocognitron. Trans. IECE, J62-A(10):658–665, 1979.

[6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel. Back-propagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989

[7] L. Lin. Reinforcement Learning for Robots Using Neural Networks. PhD thesis, Carnegie Mellon University, Pittsburgh, 1993.

[8]  G. Tesauro. TD-gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219, 1994.

[9] M. Hausknecht, J. Lehman, R. Miikkulainen, P. Stone. A Neuroevolution Approach to General Atari Game Playing. IEEE Transactions on Computational Intelligence and AI in Games, 16 Dec. 2013.

[10] J. Schmidhuber. Deep Learning in Neural Networks: An Overview. Neural Networks, vol. 61, 85-117, 2015 (888 references, published online in 2014).

[11] L. Page. Where’s Google going next? Transcript of TED event, 2014

7 comments on original post
+Randall Lee Reetz Thanks! But this was Juergen Schmidhuber's post. He is a pioneer in the field and you can understand him being miffed by the lack of acknowledgement of prior art.

I found this annoying myself: the misleading statements essentially ignored, for example, the really impressive early work by Gerald Tesauro on TD-Gammon.
Information Synthesist
For me, building software is like sculpting. I know what is there; I just need to get rid of all the annoying rock that is in the way.
I like trying to write

I post now mostly as a duplicated devlog on a project of mine whose goal is an intelligence amplification tool, as inspired by the visions of Engelbart, Vannevar Bush and Licklider. I am, in order of skill, interested in:
  1. Functional Programming
  2. Machine Learning
  3. Artificial Intelligence
  4. Mathematics
  5. Computation Theory
  6. Complexity Theory
  7. Bioinformatics
  8. Physics
  9. Neurobiology
I'm also super interested in sustainable energy, synthetic biology and the use of technology to improve human living.

I believe the proper way to understand quantum mechanics is in terms of Bayesian probability theory, and that the many-worlds interpretation is the way it applies to the universe physically. Still trying to find a philosophically synergistic combo.

I also do bballing and bboying/breaking/"breakdance".

I have some "hippie" beliefs, like that dolphins are persons. Dolphins, whales, great apes, elephants and pigs should not be eaten, murdered or kept in captivity. I would really like to see the results of giving dolphins an appropriate interface to internet access.

Spent some time solving bioinformatics problems on Rosalind. It's a Project Euler for bioinformatics. Try it out if you enjoy algorithms and want to get some idea of biotech.

Favourite Books: Chronicles of Amber, Schild's Ladder, Diaspora, Permutation City, Blindsight, Ventus, The Peace War, Marooned in Realtime, A Fire Upon the Deep, Accelerando, The Death Gate Cycle, MythAdventures, A Wizard of Earthsea, the Tawny Man Trilogy, The Malloreon, The Riftwar Cycle and Harry Potter
