Motion AI, a Chicago company that lets anyone easily build a bot without touching a line of code, has announced it is open for business after several months of private testing. What, another bot-builder? Dozens of other startups have launched to build bots for developers, especially after Facebook kicked off a bot craze last month that now has tens of thousands of developers building bots on Facebook Messenger alone. But Motion AI stands out because it hand-holds you through building every aspect of the bot’s flow, including deployment across most of the bot platforms (Facebook, Slack, SMS, email, Web and so on). Moreover, it has created what it calls bot “modules,” which package up the logic required for building particular bot features. This saves novices — and even experienced developers — multiple steps. These modules will soon be featured in a store (to be launched in about a month) where customers can take whatever module they need as they put together their bots, according to founder and chief executive David Nelson.
> The most important presumption about the brain that practopoietic theory challenges is the generally accepted idea that the dynamics of a neural network (with its excitatory and inhibitory mechanisms) is sufficient to implement a mind. Practopoiesis tells us that this is not enough. Something is missing. Practopoiesis also offers answers on what is missing, both theoretically and in a form of a concrete implementation. The theoretical answer is in T3-systems and the processes of anapoiesis. The concrete implementation in the brain is based on the neural adaptation mechanisms. These mechanisms enable us to adaptively deal with the context within which we have to operate and thus, to be intelligent.
> The main contribution to the mind-body problem: Practopoiesis suggests that we should think about mind differently from how we are used to. According to T3-theory, the mind cannot be implemented by a classical computation, which consists of symbol manipulation within a closed system (a “boxed” computation machine). Rather, a mind i.e., a thought, is a process of an adaptation to organism’s environment. This requires the “computation” system to be open and to interact with the environment while a thought or a percept is evolving. The reason why we are conscious and machines are not, is that our minds are interacting with the surrounding world while undergoing the process of thought, and machines are not — machines recode inputs into symbols and then juggle the symbols internally, devoid of any further influences from the outside.
// This view seems promising. I think it leans on the anti-representational arguments too hard, perhaps because I've been defending representationalism recently. I think the right view will be one that describes a cybernetic control system with robust representational resources at its disposal. It's correct to say that the brain isn't fundamentally a representational system: the nervous system is fundamentally a system for coordinating action. But in so coordinating, it really can juggle "internal" representations around and inspect them. In Kinds of Minds, Dennett calls these the Popperian creatures, after Karl Popper, who said that such thinking "permits our hypotheses to die in our stead."
My grad school instructor Dr. Brewer once gave the following argument for internal representations, which I still find completely convincing. Here's the challenge: close your eyes and tell me how many windows are in your parents' living room. It's unlikely that you've thought about this question explicitly before; if you can generate an answer, it's likely that you're conjuring an internal model of the room and counting the windows on that model. That's exactly a case of "juggling symbols internally". Our capacity to reason about such mental models was the subject of my advisor's 2006 book, Models and Cognition:
But my sense is that practopoiesis can deal with such cases fairly comfortably. Still, I want to deal with the last claim in the quote above, about the differences between us and machines. He's right, in some sense, that neurons are sensitive to much more of the nearby activity than electrical circuits are. But it's worth saying that quite a lot of the technical challenge in building microchips is in keeping the signals clear and distinct. The reason microchips can process at such high speeds is that we can keep these signals reliably clear at very small scales. In other words, this isn't an inevitable feature of our machines; it has been an explicit design goal.
From this perspective, it's worth mentioning that brains also do quite a lot of work insulating the signals from surrounding neurons. There are lots of neurons passing through any given space (see: http://goo.gl/jr2dHA), but only a few neurons are actually talking to each other. The rest are insulated from the signal by other types of neural cells. In other words, neural signals aren't entirely open to influence from the outside. But certainly they are tolerant of more influence than a microchip.
The other place to object, though, is the extent to which computation happens independent of outside influence. When I'm playing a video game, for instance, there's certainly a lot of processing happening "under the hood" of the machine, but there's also a lot of sensitivity to my behavior and interaction with that system, and so there's a lot of interdependence and interaction between the player and the computer. At a very low level the computation isn't interactive, but at the level of the game itself, the machine is interactive nearly to the point of immersion. And even that's not strictly true! Modern video cards will only render from the perspective of the player, which means that in a very direct way, the computations performed are tightly linked to what the player is doing, with very rapid response times. To describe such machines as "devoid of influence from the outside" seems strange from this perspective.
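That rendering point is, concretely, view-frustum culling: geometry the player can't see is never drawn. A minimal sketch of the idea, simplified to a 2D field-of-view test with a made-up scene (not how any particular engine does it):

```python
import math

def visible(player_pos, facing_deg, fov_deg, obj_pos):
    """Return True if obj_pos falls inside the player's field of view.

    A toy 2D stand-in for the culling a GPU pipeline performs:
    geometry outside the view volume is skipped entirely.
    """
    dx = obj_pos[0] - player_pos[0]
    dy = obj_pos[1] - player_pos[1]
    angle_to_obj = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between facing and object bearing.
    diff = (angle_to_obj - facing_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

scene = [(10, 0), (0, 10), (-10, 0)]   # hypothetical object positions
player = (0, 0)
# Facing along +x with a 90-degree field of view: only (10, 0) is drawn.
to_render = [p for p in scene if visible(player, 0, 90, p)]
```

The player's input (position and facing) directly determines which computations run at all, which is the sense in which the machine's processing is coupled to the player.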
What a lot of the criticisms of AI I've been ranting against fail to appreciate is the basic issue of multiple realizability: that the same high-level process might in fact be constituted by many different kinds of low-level processes, each of which produces a functional analog at the higher level by different means. There's nothing about silicon that fundamentally prevents it from being interactive in the appropriate ways. The myth that biology is fundamentally "alive" while electronics are "inert" must be resisted wherever it appears.
But I'm ranting, and none of these issues seem like big problems for practopoiesis; in fact, I'd expect the author to agree with most of what I've said. Some of these articles seem rather new, within the last year. I'll be interested to see what the scholarly response is.
A: More than five years. I refuse to say anything beyond five years because I don’t think we can see much beyond five years.
Q: In the ’80s, scientists in the AI field dismissed deep learning and neural networks. What changed?
A: Mainly the fact that it worked. At the time, it didn’t solve big practical AI problems, it didn’t replace the existing technology. But in 2009, in Toronto, we developed a neural network for speech recognition that was slightly better than the existing technology, and that was important, because the existing technology had 30 years of a lot of people making it work very well, and a couple grad students in my lab developed something better in a few months. It became obvious to the smart people at that point that this technology was going to wipe out the existing one.
Google was then the first to use their engineering to get it into their products, and in 2012 it came out in Android and made the speech recognition in Android work much better than before: it reduced the word-error rate to about 26 per cent. Then, in 2012, students in my lab took that technology that had been developed by other people and developed it even further, and while the existing technology was getting 26 per cent errors, we got 16 per cent errors. In the years after we did that, people said, ‘Wow, this really works.’ They were very skeptical for many, many years; they published papers dismissing it. Over the next years, they all switched to it.
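For reference, the "word-error rate" Hinton cites is the standard speech-recognition metric: word-level edit distance between the system's transcript and the reference, divided by the reference length. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

# Two missed words out of six: WER of about 0.33
print(word_error_rate("the cat sat on the mat", "the cat sat mat"))
```

So going from 26 to 16 per cent errors is a roughly 38 per cent relative improvement, which is why it turned heads.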
// For anyone confused by 's article from last week (https://goo.gl/Q3OU7Y):
There's an important distinction between the proponents of AI and the Singularitarians, and does an excellent job here of representing this space. For Hinton, there's no question of a machine's thinking (or believing, deciding, imagining, etc; see his classic 2007 Google talk, esp. ~24:00 https://goo.gl/qdZviJ). We really can build computers that do all those things to demonstrable effect. And yet Hinton's optimism about AI doesn't preclude any realism about the gap between computers and humans: he says brains operate at roughly a million times the capacity of our best artificial neural nets today, with a hundred thousand times less power consumption.
Those are big numbers, for sure, but the underlying point is crystal clear: that these are differences of scale which pose an engineering challenge, not ones of essence that pose an affront to logical necessity. And given that we've experienced a trillion-fold increase in computing power over the last 60 years (http://goo.gl/mLMZ4p), it's hard to interpret these numbers as impossible to overcome. It probably won't happen in the next five years, and as Hinton rightly notes, who knows what will happen beyond that. The future success of AI is not fated to happen. But we can still heap criticism on those who claim, as Floridi does, that "No conscious, intelligent entity is going to emerge from a Turing Machine." Such views have no place in contemporary discussions of AI.
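The trillion-fold figure is easy to sanity-check: 10^12 is about 2^40, so sixty years of trillion-fold growth works out to a doubling roughly every 18 months, the familiar Moore's-law cadence.

```python
import math

growth = 1e12                            # trillion-fold increase
years = 60
doublings = math.log2(growth)            # about 40 doublings
months_per_doubling = years * 12 / doublings   # roughly 18 months each
print(doublings, months_per_doubling)
```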
I quoted a piece in the interview crediting the success of commercial AI applications as part of what brought AI out of its winter. It is interesting to think about the socioeconomic viability of AI as critical to its development. It suggests that if AI does not close these large gaps in this iteration, there will be not just technical but also socioeconomic reasons for the failure. The first AI winter was also due to a lack of funding, but that was for academic research and not for smartphone apps that everyone uses.
Geoffrey Hinton, the godfather of ‘deep learning’—which helped Google’s AlphaGo beat a grandmaster—on the past, present and future of AI
Q: Beyond games, then—what might come next for AI?
A: It depends who you talk to. My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000-trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller, the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.
Q: Can the growth in computing continue, to allow applications of deep learning to keep expanding?
A: For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts, that’s about what the brain takes, it’s comparable to a light bulb. So hardware will be crucial to making much bigger neural networks, and it’s my guess we’ll need much bigger neural networks to get high-quality common sense.
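The two scale gaps in the interview reduce to simple ratios; a back-of-envelope check (the 300 kW value is my assumption, standing in for Hinton's "hundreds of kilowatts"):

```python
brain_synapses = 1e15     # "10 to the 15" from the interview
ann_parameters = 1e9      # the biggest nets at the time, per Hinton
capacity_gap = brain_synapses / ann_parameters   # a million times smaller

alphago_watts = 300_000   # assumed: "hundreds of kilowatts"
brain_watts = 30          # "comparable to a light bulb"
power_gap = alphago_watts / brain_watts          # ten thousand times hungrier

print(capacity_gap, power_gap)
```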
// Here's another shockingly bad popular article from a distinguished scholar who ought to know better. Like the Floridi article I criticized a few days ago (https://goo.gl/ZSGGrt), these criticisms aren't just differences of opinion. Both authors make elementary mistakes explaining the theory they purport to critique, mistakes that would be embarrassing in an undergraduate cognitive science course. The above quoted argument is a complete strawman of the view he rejects, and the implications he draws from it are utterly ridiculous. These mistakes do nothing more than reveal their authors to be either shameless partisans or astonishingly ignorant of the subjects about which they are supposed to be experts.
Epstein's view is that the information processing metaphor, where we talk about how brains "store" "representations" in "memory" for "processing" and the like, is simply a bad metaphor taken from computer science (the fanciest machines of our day) and grafted awkwardly to explain how the brain works. Epstein insists that this metaphor does not fit the brain at all, and that the whole IP metaphor should be abandoned, as it has become an impediment to better theories in psychology. He says:
> Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.
> But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
// To be clear, he's not just criticizing popular discussions of these topics on blogs and in magazines for being fast and loose with the IP metaphor. No, he's criticizing long established theories in psychology and cognitive science describing the information processing architecture of the brain. In other words, he's critiquing the use of these terms and theories in their full technical sense. He's arguing that information processing as a research paradigm in cognitive science is mistaken, and that decades of theoretical work should be abandoned.
But it is clear from this article that Epstein doesn't understand the first thing about information processing, computer science, or cognitive science. His arguments here are shockingly bad, completely misrepresenting these fields. Fortunately, I don't have to make these arguments myself; the internet has generated some decent backlash against the article, and several authors have come forward with point-by-point rebuttals. I'll document some below:
> Dr. Epstein made it very clear that he doesn’t understand computers.
> Anyone who understands the work of Turing realizes that computation is not the province of silicon alone. Any system that can do basic operations like storage and rewriting can do computation, whether it is a sandpile, or a membrane, or a Turing machine, or a person. Today we know (but Epstein apparently doesn't) that every such system has essentially the same computing power (in the sense of what can be ultimately computed, with no bounds on space and time).
> Nowhere does the IP thesis assert that we represent the world in specific, byte-like patterns. ‘Information’ doesn’t just mean ‘things that are coded into bytes.’
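The Turing-equivalence point in the rebuttals above is worth making concrete: "storage and rewriting" really is all computation requires. A minimal one-tape Turing machine interpreter, run on a toy machine that flips bits (an illustration, not any particular formal construction):

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {-1, +1}. Halts in state 'halt' or when no rule applies.
    """
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A tiny machine that inverts every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
print(run_turing_machine(flip, "1011"))  # -> 0100
```

Nothing here depends on silicon; the same rule table could be executed by a person with pencil and paper, which is exactly the point of the sandpile/membrane/person line in the quote.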
// In fact, the comments section on Aeon for this article is quite good, with representatives of both the representational and anti-representationalist (embodied) camps offering more cogent arguments for their positions. Both sides of this argument seem to agree that Epstein's article is disappointing and completely misrepresents the field and the debate.
// If anyone finds more good responses to this article, I'd love to collect them here. I've seen this article shared many times, and it would be nice to point people somewhere for a comprehensive response.
via , who says "I think the article is right on target." lol
Hypercomputation or super-Turing computation refers to models of computation that can provide outputs that are not Turing computable. A real computer (a sort of idealized analog computer) can perform hypercomputation if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation.
It seems natural that the possibility of time travel (the existence of closed timelike curves, or CTCs) makes hypercomputation possible by itself. However, this is not so, since a CTC does not provide (by itself) the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation. According to a 1992 paper, a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole could theoretically perform non-Turing computations. Access to a CTC may allow the rapid solution of PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.
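What makes hypercomputation interesting at all is that some perfectly well-defined functions, the halting function first among them, are provably beyond any Turing machine. The standard diagonal argument can even be written as code (the oracles here are hypothetical stand-ins, not real halting checkers):

```python
def make_paradox(halts):
    """Given any claimed halting oracle, build a program it misjudges."""
    def paradox():
        if halts(paradox):   # oracle says "halts"?
            while True:      # ...then loop forever
                pass
        # oracle says "loops" -> return immediately, i.e. halt
    return paradox

# The oracle that claims every program loops forever:
never = lambda prog: False
p = make_paradox(never)
p()   # returns immediately: the program halts, refuting the oracle
# The symmetric oracle (claiming everything halts) fails on its own
# paradox program by looping forever. No total computable halting
# function escapes this trap; that's Turing's theorem.
```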
It's based on a novel he wrote, and will be released later this year. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their writing, for example by letting it ingest their blog. This would allow the bot to adopt their "style, personality, and ideas."
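The article doesn't say how the bot models style. The crudest baseline for style mimicry is a word-level Markov chain over a writing sample; a toy sketch (corpus and names invented):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=8, seed=0):
    """Random-walk the chain to generate corpus-flavored text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the bot reads the blog and the bot writes like the blog"
print(babble(build_chain(corpus), "the"))
```

A real product would presumably use something far richer, but even this captures the basic trick: frequent word pairings in your writing become frequent in the bot's output.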
1) I can't even
2) No, you
3) Watch it, mister!
// The rhetorical prowess of , philosopher.
Imagine for a moment the following commands.
1) take this knife.
2) chop all five of these cookies into little pieces.
3) complete this task in as short a period of time as you can.
Enter the lab assistant, who takes one of the cookies and eats it ... those commands and that action are a combo waiting to go seriously wrong.
To misquote someone else, "I am not afraid of smart AI, I am afraid of the really stupid ones".
Herb Mugface - YouTube Channel (Herb is the robot below)
If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right. If it’s wrong in one direction, a dangerous criminal could go free. If it’s wrong in the other direction, someone could unfairly receive a harsher sentence or wait longer for parole than is appropriate.
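In classifier terms, those two error directions are false negatives (a defendant who goes on to reoffend scored low-risk) and false positives (one who doesn't scored high-risk). A minimal sketch with invented labels, not real scores:

```python
def error_rates(predicted_high_risk, reoffended):
    """False positive and false negative rates for a risk classifier.

    predicted_high_risk, reoffended: parallel lists of booleans
    (illustrative made-up data).
    """
    fp = sum(p and not r for p, r in zip(predicted_high_risk, reoffended))
    fn = sum(not p and r for p, r in zip(predicted_high_risk, reoffended))
    negatives = sum(not r for r in reoffended)   # did not reoffend
    positives = sum(r for r in reoffended)       # did reoffend
    return fp / negatives, fn / positives

pred = [True, True, False, False, True, False]
truth = [True, False, True, False, False, False]
fpr, fnr = error_rates(pred, truth)
# fpr: share of non-reoffenders labeled high risk (harsher sentences)
# fnr: share of reoffenders labeled low risk (dangerous people freed)
```

The point of tracking the two rates separately is that a system can look accurate overall while being badly skewed in one direction, or skewed differently across groups.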
The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.
But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”
Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.
via Randall Villarreal
Google is working on a new project to determine if artificial intelligence can ever be truly creative. The project, called Magenta, was unveiled this weekend at Moogfest in Durham, North Carolina, Quartz reports. During a presentation at Moogfest, Google Brain researcher Douglas Eck said the goal of the new group is to determine if AI is capable of creating original music and visual art somewhat independently of humans. The Magenta team will use Google's open-source machine learning software TensorFlow, and try to "train" AI to make art. They'll create tools (and make them available to the public), like one that helps researchers import data from MIDI files into TensorFlow, Quartz reports.
Burn, media, burn! Why we destroy comics, disco records, and TVs
Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights.
Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks
DVICE: The Internet weighs as much as a largish strawberry
philosophy bites: Adina Roskies on Neuroscience and Free Will
Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to a disconcerting conclusion.
Kickstarter Expects To Provide More Funding To The Arts Than NEA
NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come.
How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future.
NYT: Google to sell Android-based heads-up display glasses this year
It's not the first time that rumors have surfaced of Google working on some heads-up display glasses.