Fair play

> It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions certainly into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around and find new methods of proof, so that he ought to be able to reach a decision about any given formula. This would be the argument.

Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.

- Alan Turing, 1947

http://www.turing.org.uk/publications/ex11.html

// There are reports from nine countries. From the US report:

> Do bots have the capacity to influence the flow of political information over social media? This working paper answers this question through two methodological avenues: A) a qualitative analysis of how political bots were used to support United States presidential candidates and campaigns during the 2016 election, and B) a network analysis of bot influence on Twitter during the same event. Political bots are automated software programs that operate on social media, written to mimic real people in order to manipulate public opinion. The qualitative findings are based upon nine months of fieldwork on the campaign trail, including interviews with bot makers, digital campaign strategists, security consultants, campaign staff, and party officials. During the 2016 campaign, a bipartisan range of domestic and international political actors made use of political bots. The Republican Party, including both self-proclaimed members of the “alt-right” and mainstream members, made particular use of these digital political tools throughout the election. Meanwhile, public conversation from campaigners and government representatives is inconsistent about the political influence of bots. This working paper provides ethnographic evidence that bots affect information flows in two key ways: 1) by “manufacturing consensus,” or giving the illusion of significant online popularity in order to build real political support, and 2) by democratizing propaganda through enabling nearly anyone to amplify online interactions for partisan ends. We supplement these findings with a quantitative network analysis of the influence bots achieved within retweet networks of over 17 million tweets, collected during the 2016 US election. The results of this analysis confirm that bots reached positions of measurable influence during the 2016 US election. Ultimately, therefore, we find that bots did affect the flow of information during this particular event. This mixed-method approach shows that bots are not only emerging as a widely-accepted tool of computational propaganda used by campaigners and citizens, but also that bots can influence political processes of global significance.

More: http://comprop.oii.ox.ac.uk/2017/06/19/computational-propaganda-worldwide-executive-summary/
via Rob Jackson

> The New Inquiry’s Conspiracy Bot condenses this recursive symbiosis. Just like us, our bot produces conspiracies by drawing connections between news and archival images—sourced from Wikimedia Commons and publications such as the New York Times—where it is likely none exist. The bot’s computer vision software is sensitive to even the slightest variations in light, color, and positioning, and frequently misidentifies disparate faces and objects as one and the same. If two faces or objects appear sufficiently similar, the bot links them. These perceptual missteps are presented not as errors, but as significant discoveries, encouraging humans to read layers of meaning from randomness. If a “discovered” conspiracy finds some “true” reflection in the “real” world, such as linking two politicians that are actually colluding—and due to the sheer number of relationships it produces, it’s statistically likely—then the bot’s prediction appears more valid to the viewer, heightening the plausibility of its other predictions. Thus the nauseating cycle loops once more. First as news, then as fake news.

More: https://thenewinquiry.com/you-probably-think-this-bot-is-about-you/
https://conspiracy.thenewinquiry.com/
via Sophia Korb

// Me: 2017 couldn't be any more #2017
2017: Hold my beer

What next, movies ;) 

This colonoscopy robot will haunt your dreams – BGR

' ... Developed by the Rentschler Research Group, which hails from the University of Colorado – Boulder, this worm-like robot is the ultimate colonoscopy tool. Designed as an option in place of a traditional colonoscopy, the robot can actually navigate through a human colon all on its own, capturing images and taking samples to aid in diagnosis of various ailments and diseases. ... '

http://bgr.com/2017/06/15/colonoscopy-robot-icra-ieee/


Garry Kasparov, the chess legend who was beaten by Deep Blue in 1997, and Demis Hassabis, leader of the team behind AlphaGo, which recently beat the best human Go player, have a chat: their experiences playing chess computers before, during, and after Deep Blue; how players learn differently now; machines and humans working together; and the Moravec paradox, that computers are good at what humans are bad at and vice versa.

This New Atari-Playing AI Wants to Dethrone DeepMind

Artificial intelligence is not a contact sport. Not yet, at least. Currently, algorithms mostly just compete to win old Atari games, or accomplish historic board gaming feats like owning five human Go champions at once. These are just practice rounds, though, for the way more complicated (and practical) goal of teaching robots how to navigate human environments. But first, more Atari! Vicarious, an AI company, has developed a new AI that is absolutely slammin' at Breakout, the paddle vs. brick arcade classic. Its AI, called Schema Networks, even succeeds at tweaked versions of the game—for instance, when the paddle is moved closer to the bricks. Vicarious says Schema Networks outperforms AIs that use deep reinforcement learning (currently the dominant paradigm in AI). Some critics aren't convinced, however. They say that in order to truly claim top score, Schema Networks must show its stuff against the world's best game-playing AI.

The highest possible score on Ms. Pac-Man, 999,990, has been achieved by an AI, higher than any score ever achieved by a human. It was achieved by breaking the game into 4 sub-problems and developing a separate reinforcement learning algorithm for each. The 4 problems are: one that is rewarded for eating a pellet, one for eating a fruit, one for eating a blue ghost, and one with a large negative reward if Ms. Pac-Man gets eaten by a ghost. An aggregator looks at all 4 to decide what action Ms. Pac-Man takes. The AI was developed by Maluuba, recently acquired by Microsoft and now part of Microsoft Research.
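
// A minimal sketch of the aggregation idea described above, assuming tabular Q-values. The sub-agents, states and numbers here are invented for illustration; this is not Maluuba's actual architecture.

```python
# Hypothetical reward decomposition: one Q-table per sub-problem, plus an
# aggregator that sums the sub-agents' scores and picks the best joint action.

ACTIONS = ["up", "down", "left", "right"]

def aggregate(q_tables, state):
    """Sum each sub-agent's Q-values for `state`; return the highest-scoring action."""
    totals = {a: sum(q.get((state, a), 0.0) for q in q_tables) for a in ACTIONS}
    return max(totals, key=totals.get)

# Invented sub-agents: pellets, fruit, blue ghosts, and avoiding being eaten.
pellet_q = {("s0", "left"): 1.0}
fruit_q  = {("s0", "up"): 0.5}
ghost_q  = {("s0", "left"): 0.2}
avoid_q  = {("s0", "left"): -5.0}   # large negative value: a ghost lurks to the left

print(aggregate([pellet_q, fruit_q, ghost_q, avoid_q], "s0"))  # -> "up"
```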

These algorithms and strategies are agnostic to the internal details of the problem: put in energies and physical quantities and you solve physics problems; put in log probabilities and you solve learning problems. However, learning algorithms did not have to work this way, and nothing forces them to follow these constraints. It's just that, if you wish to learn effectively, you'll behave in a way parametrically equivalent to physical systems bound by a stationary principle. Physicists often arrived first because they had physical systems to study, observe and constrain their methods against. Computer scientists arrive later at the same spot, asking: how do we do this more efficiently?

Nonetheless, it is a bit uncanny how often probabilistic inference on particular graph structures has a precise correspondence with a physical system. Consider message passing: computing a marginal probability is, in one setting, inference, and in another, a calculation of local magnetization. Why belief propagation and Bethe-approximation ideas work as well as they do for computing a posterior probability distribution (a knowledge update) is not well understood.
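
A toy instance of that correspondence, with arbitrary coupling and field values: sum-product message passing on a three-spin Ising chain computes the marginal of the middle spin, which in physics language is just its local magnetization.

```python
import numpy as np

# Three-spin Ising chain s1 - s2 - s3 with coupling J and a field h on s1.
J, h = 0.5, 0.3
spins = np.array([-1.0, 1.0])
psi = np.exp(J * np.outer(spins, spins))   # pairwise potential exp(J * s_i * s_j)
phi1 = np.exp(h * spins)                   # unary potential on s1 only

# Sum-product messages flowing toward the middle spin s2 (exact on a chain).
m_1_to_2 = psi.T @ phi1                    # sum over s1 of phi1(s1) * psi(s1, s2)
m_3_to_2 = psi @ np.ones(2)                # sum over s3 of psi(s2, s3)

belief = m_1_to_2 * m_3_to_2               # unnormalized marginal of s2
belief /= belief.sum()

magnetization = float(spins @ belief)      # <s2>: the same number, read as physics
print(belief, magnetization)
```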

Here's what's key. As I point out above, variational methods are an approximation forced by computational limitations. If it turns out that the brain is likewise forced to optimize variational bounds, doing no better than a resource-bounded Turing machine, this too would be very suggestive that the brain itself is not just computable but bound by the laws of computational complexity!

In [5], Friston et al. also point out that the brain operates to minimize its Helmholtz free energy by minimizing its complexity, giving less complex representations to highly probable states. (You should not be surprised, then, when we find that expertise means less volume in highly trafficked brain areas, or less energy use for the mental processing of well-understood things.) Similarly, in the very interesting [6], Susanne Still shows that any non-equilibrium thermodynamic system driven by an external signal must alter its internal states so that they correspond to the driving signal at the coupling interface. As such, efficient dynamics corresponds to efficient prediction.

We thus arrive at an interesting separation: all systems we call alive (right down to bacteria) concern themselves with both variational and thermodynamic free energy, but digital AIs concern themselves only with the variational concept.

--------

Concluding

In summary, those who reject certain algorithms as AI are making a fundamental mistake by assuming that the algorithm is what makes an AI. Instead, it's where the algorithm is used that matters. A simple dot product (something no more complex than 5 × 3 + 6 × 2) might in one setting be a high-school math problem or a lighting calculation in a graphics engine. In another context, however, it might compare word-vector representations distilled from co-occurrence statistics, or encode faces in a primate brain. We should expect, then, that an AGI or an ASI will consist of narrow AIs joined together in some non-trivial fashion, but still no different from math.
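
A sketch of that point about context, reusing the numbers above (the vectors are made up): the identical dot product reads as shading in a graphics engine and as word similarity in an NLP pipeline.

```python
import numpy as np

def dot(a, b):
    return float(np.dot(a, b))

# Graphics context: Lambertian shading is (surface normal) . (light direction).
normal = np.array([0.0, 1.0, 0.0])
light  = np.array([0.0, 0.8, 0.6])
brightness = max(0.0, dot(normal, light))            # 0.8

# NLP context: cosine similarity of two tiny, invented "word vectors".
w1, w2 = np.array([5.0, 6.0]), np.array([3.0, 2.0])  # dot = 5*3 + 6*2 = 27
similarity = dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))

print(brightness, similarity)   # same arithmetic, two different meanings
```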

I additionally pointed out that a correspondence between inference and physical systems is not so surprising when both are viewed as aspects of something more general, analogous to a polymorphic function. But it is nonetheless not obvious why things ended up this way. Computational complexity limits and energy limits turn up in the same places surprisingly often and demand similar dues of informational and physical systems.

The link goes even deeper when we realize that the predictive efficiency and the thermodynamic efficiency of non-equilibrium systems are inextricably linked. Not just brains and predictive text autocomplete should count as performing inference, but also simple biological molecules. In fact, these systems might have become as complex as they did in order to be better at prediction, so as to more effectively use a positive free-energy flow for, say, replication or primitive metabolism.

A possible definition of Artificial Intelligence

I can now finally put forward a definition of AI. An AI is any algorithm put to the task of computing a probability distribution for use in some downstream task (decisions, predictions), acting as a filter that leverages the structure of what it is filtering, or performing a non-exhaustive search in some space. Autocomplete that enumerates alphabetically is not intelligent; autocomplete that predicts what I might type next is. From the perspective of intelligence amplification, an intelligent algorithm is any system that works cooperatively to reduce the working-memory load of its human partner.
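
A minimal sketch of that distinction, with an invented bigram table: the first completer merely enumerates, the second ranks candidates by an estimated conditional probability, and only the latter fits the definition above.

```python
# Invented bigram counts standing in for distilled co-occurrence statistics.
bigram_counts = {("hold", "on"): 30, ("hold", "my"): 12, ("hold", "music"): 3}

def complete_alphabetical(prev_word, counts):
    """Exhaustively enumerate candidates in alphabetical order: no prediction."""
    return sorted(w for (p, w) in counts if p == prev_word)

def complete_predictive(prev_word, counts):
    """Rank candidates by the estimated probability P(next | prev): prediction."""
    cands = {w: c for (p, w), c in counts.items() if p == prev_word}
    total = sum(cands.values())
    return sorted(cands, key=lambda w: cands[w] / total, reverse=True)

print(complete_alphabetical("hold", bigram_counts))  # ['music', 'my', 'on']
print(complete_predictive("hold", bigram_counts))    # ['on', 'my', 'music']
```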

In a future post I'll look into what algorithms the brain might be running. This will involve synthesizing the following proposals (what they have in common, where they differ, and how plausible they are): Equilibrium Propagation, sleeping experts, approximate loopy belief propagation, natural evolution strategies, and random projections.

Lexicon

*Weak AI* - Weak AIs are programs that learn how to perform one predictive or search task very well; they can be exceedingly good at it. Any AI is certainly a collection of weak AI algorithms.

*Narrow AI* - A synonym for weak AI. A better label.

*Machine Learning* - These days it's very difficult to tell machine learning apart from narrow AI. But a good rule of thumb: any algorithm derived from statistics, optimization or control theory, put to use in the service of an AI system, with an emphasis on predictive accuracy rather than statistical soundness.

*GOFAI* - Good Old-Fashioned AI: the expert systems and symbolic reasoners that many in the 1970s and 80s thought would lead to AI as flexible as a human. It led to the popular misconception that AIs must be perfectly logical. Another popular misconception is that GOFAI was a wasted effort; this is certainly incorrect. GOFAI led to languages like Lisp, Prolog and Haskell, and influenced Datalog, rules engines and even SQL. Knowledge-graph-style structures underlie many of the higher-order abilities of 'assistant' technologies like Siri, Google Now, Alexa, Cortana, and Wolfram Alpha.

Furthermore, its descendants are found in answer set programming, SMT solvers and the like, which are used for software/hardware verification and security. An incredible amount of value was generated from the detritus of those failed goals, which should tell us how far they sought to reach. Something else interesting about symbolic reasoners: they are the only AI-based systems capable of easily handling long, complex chains of reasoning (neither deep learning nor even humans manage this).

*True AI* - This is a rarely used term that is usually synonymous with AGI but sometimes means Turing Test passing AI.

*Artificial General Intelligence* - This is an AI that is at least as general and flexible as a human. Sometimes used to refer to Artificial Super Intelligence.

*Artificial Super Intelligence* - The subset of AGIs assumed to have broader and more precise capabilities than humans.

*Strong AI* - This has multiple meanings. Some people use it as a synonym for True AI, AGI or ASI. Others insist, as near as I can tell, that only biologically based systems can be strong AIs. But we can alter this definition to be fairer: any AGI that also maximizes thermodynamic efficiency, by maximizing the energy and memory efficiency of its predictions.

*AI* - An ambiguous and broad term. It can refer to AGI, ASI, True AI, Turing-Test-passing AI, narrow AI or Clippy, depending on the person, their mood and the weather. Ostensibly, it's just the use of math and algorithms to do filtering, prediction, inference and efficient search.

---

*Natural Language Processing/Understanding* - The use of machine learning and, supposedly, linguistics to convert the implicit structure of text into an explicitly structured representation. These days no one really pays attention to the linguistics, which is not necessarily a good thing. For example, NLP people spend far more time on dependency parsing even though constituency parsing better matches human language use.

Anyway, considering the amount of embedded structure in text, it is stubbornly hard to get results that are much better than the dumbest thing you can think of. On reflection, this is probably due to how much structure language has on the one hand and how flexible it is on the other. For example, simply averaging word vectors, with some minor corrections, does almost as well as, and sometimes generalizes better than, a whiz-bang recurrent neural net. The state of NLP, in particular the difficulty of extracting anything remotely close to meaning, is the strongest indicator that artificial general intelligence is not near. Do not be fooled by PR and artificial tests; the systems remain as brittle to edge cases as ever. Real systems are high-dimensional, and as such they are mostly edge cases.
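
To make that "dumb baseline" concrete, here is a sketch that averages word vectors into a sentence vector. The vectors are random stand-ins; a real system would use embeddings distilled from co-occurrence statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for real word embeddings (e.g. word2vec or GloVe vectors).
word_vectors = {w: rng.normal(size=50) for w in "the cat sat on the mat".split()}

def sentence_vector(sentence, vectors, dim=50):
    """Average the vectors of known words; ignore out-of-vocabulary words."""
    vs = [vectors[w] for w in sentence.lower().split() if w in vectors]
    return np.mean(vs, axis=0) if vs else np.zeros(dim)

a = sentence_vector("the cat sat on the mat", word_vectors)
b = sentence_vector("the mat sat on the cat", word_vectors)  # same bag of words
print(np.allclose(a, b))  # True: word order is thrown away, yet the baseline is hard to beat
```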

*Deep Learning* - These days used as a stand-in for machine learning, even though it is a subset of it; such labeling is about as useful as answering "what are you eating" with "food". DL, using neural networks, represents a computer program as a series of tables of numbers (each layer of a neural network is a matrix). A vector is transformed by multiplying it with a matrix and applying another function to each element; a favored choice clamps all negative numbers to zero and yields piecewise-linear function approximation. Each layer learns a more holistic representation based on the layers before it, until the final layer can be a really dumb linear regressor performing non-trivial separations.

The learned transformations often represent conditional probability distributions. Learning occurs by calculating derivatives of the function and adjusting parameters to do better against a loss function, seeking the (locally) optimal model within model space. Newer neural networks explicitly model latent/hidden/internal variables and are, as such, even closer to the variational approach mentioned above.
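
A bare-bones sketch of the description above (sizes and weights are arbitrary): each layer is a table of numbers, the vector is multiplied through, negatives are clamped to zero, and the last layer is a plain linear readout.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)                 # clamp negative numbers to zero

W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # layer 1: a matrix plus a bias
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # layer 2
Wo, bo = rng.normal(size=(1, 4)),  np.zeros(1)    # final "dumb" linear readout

def forward(x):
    h1 = relu(W1 @ x + b1)    # each layer re-represents the output of the previous one
    h2 = relu(W2 @ h1 + b2)
    return Wo @ h2 + bo       # the last layer performs the separation / regression

print(forward(rng.normal(size=8)))
```

Training would then adjust W1, W2 and Wo by following derivatives of a loss with respect to these tables, which is the gradient step described in the paragraph above.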

Speaking of latent variables, there is an unfortunate obsession with the clarity of model-generated images. Yet quality of generation does not necessarily equate with quality of representation, and quality of representation is what matters. Consider humans: the majority of our vision is peripheral (we get around this by saccading and stitching small sections together). Ruth Rosenholtz has shown that a good model of peripheral vision is one that captures summary statistics. Although people complain that the visual quality of variational autoencoders is poor because of their fuzziness, their outputs are not so far from models of peripheral vision.
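
As a caricature of the summary-statistics idea (not Rosenholtz's actual model), one can replace each image patch with a couple of pooled statistics and see how drastic, yet structured, the compression is:

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64))                 # stand-in for a grayscale image

def patch_summaries(img, patch=16):
    """Keep only (mean, std) per patch: a heavily lossy, 'fuzzy' code."""
    stats = []
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            block = img[i:i + patch, j:j + patch]
            stats.append((block.mean(), block.std()))
    return np.array(stats)

code = patch_summaries(image)
print(image.size, "pixels ->", code.size, "numbers")   # 4096 -> 32
```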

The obsession is even more questionable when we consider that internal higher-order representations have discarded all but the most important core of the visual information. Lossy compression is lazy = good = energy efficient. Given their clear variational motivation and the connection to the Information Bottleneck principle, I find it a bit unfortunate that work on VAEs has dropped off so much in favor of adversarial networks.
