Imagine a business like this: you get highly trained experts to give you their research for free... and then you sell it back to them. Of course these experts need equipment, and they need to earn a living... so you get taxpayers to foot the bill.
And if the taxpayers want to actually read the papers they paid for? Then you charge them a big fee!
It's not surprising that with this business model, big publishers are getting rich while libraries go broke. Reed-Elsevier has a 37% profit margin!
But people are starting to fight back — from governments to energetic students like Alexandra Elbakyan here.
On Friday, the Competitiveness Council — a gathering of European ministers of science, innovation, trade, and industry — said that all publicly funded scientific papers published in Europe should be freely accessible by 2020.
This will start a big fight, and it may take longer than 2020. But Alexandra Elbakyan isn't waiting around.
In 2011, as a computer science grad student in Kazakhstan, she got sick of paying big fees to read science papers. She set up SciHub, a pirate website that steals papers from the publishers and sets them free.
SciHub now has 51,000,000 papers in its database. In October 2015, Elsevier sued the site. In November, its domain name was shut down. But it popped up somewhere else. By February, people were downloading 200,000 papers per day. Even scientists with paid access to the publishers' databases are starting to use SciHub, because it's easier to use.
Clearly piracy is not the ultimate solution. Elbakyan now lives in an undisclosed location, to avoid being extradited. But she gave the world a much-needed kick in the butt. The old business model of "get smart people to work for free and sell the product back to them" is on its way out.
For more, read:
John Bohannon, Who's downloading pirated papers? Everyone, Science, 28 April 2016, http://www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone
and especially the SciHub Twitter feed:
Also read this:
Martin Enserink, In dramatic statement, European leaders call for ‘immediate’ open access to all scientific papers by 2020, Science, 27 May 2016, http://www.sciencemag.org/news/2016/05/dramatic-statement-european-leaders-call-immediate-open-access-all-scientific-papers
The Dutch government is really pushing this! Congratulations to them!
Motion AI, a Chicago company that lets anyone easily build a bot without touching a line of code, has announced it is open for business after several months of private testing. What, another bot-builder? Dozens of other startups have launched to build bots for developers, especially after Facebook kicked off a bot craze last month that now has tens of thousands of developers building bots on Facebook Messenger alone. But Motion AI stands out because it hand-holds you through building every aspect of the bot’s flow, including deployment across most of the bot platforms (Facebook, Slack, SMS, email, Web and so on). Moreover, it has created what it calls bot “modules,” which package up the logic required for building particular bot features. This saves novices — and even experienced developers — multiple steps. These modules will soon be featured in a store (to be launched in about a month) where customers can take whatever module they need as they put together their bots, according to founder and chief executive David Nelson.
The most important presumption about the brain that practopoietic theory challenges is the generally accepted idea that the dynamics of a neural network (with its excitatory and inhibitory mechanisms) is sufficient to implement a mind. Practopoiesis tells us that this is not enough. Something is missing. Practopoiesis also offers answers about what is missing, both theoretically and in the form of a concrete implementation. The theoretical answer lies in T3-systems and the processes of anapoiesis. The concrete implementation in the brain is based on the neural adaptation mechanisms. These mechanisms enable us to deal adaptively with the context within which we have to operate and thus to be intelligent.
The main contribution to the mind-body problem: Practopoiesis suggests that we should think about mind differently from how we are used to. According to T3-theory, the mind cannot be implemented by a classical computation, which consists of symbol manipulation within a closed system (a “boxed” computation machine). Rather, a mind, i.e., a thought, is a process of adaptation to the organism’s environment. This requires the “computation” system to be open and to interact with the environment while a thought or a percept is evolving. The reason why we are conscious and machines are not is that our minds are interacting with the surrounding world while undergoing the process of thought, and machines are not — machines recode inputs into symbols and then juggle the symbols internally, devoid of any further influences from the outside.
// This view seems promising. I think it leans too hard on the anti-representational arguments, though, perhaps because I've been defending representationalism recently. I think the right view will be one that describes a cybernetic control system with robust representational resources at its disposal. It's correct to say that the brain isn't fundamentally a representational system: the nervous system is fundamentally a system for coordinating action. But in so coordinating, it really can juggle "internal" representations around and inspect them. In Kinds of Minds, Dennett calls these the Popperian creatures, after Karl Popper, who said that such thinking "permits our hypotheses to die in our stead."
My instructor from grad school, Dr. Brewer, once gave the following argument for internal representations, which I still find completely convincing. Here's the challenge: close your eyes and tell me how many windows are in your parents' living room. It's unlikely that you've thought about this question explicitly before; if you can generate an answer, it's likely that you're conjuring an internal model of the room and counting the windows on that model. That's exactly a case of "juggling symbols internally". Our capacity to reason about such mental models was the subject of my advisor's 2006 book, Models and Cognition.
But my sense is that practopoiesis can deal with such cases fairly comfortably. Still, I want to deal with the last claim in the quote above, about the differences between us and machines. He's right, in some sense, that neurons are sensitive to much more of the nearby activity than electrical circuits are. But it's worth saying that quite a lot of the technical challenge in building microchips lies in keeping the signals clear and distinct. The reason microchips can process at such high speeds is that we can keep these signals reliably clear at very small scales. In other words, this isn't an inevitable feature of our machines; it has been an explicit design goal.
From this perspective, it's worth mentioning that brains also do quite a lot of work insulating the signals from surrounding neurons. There are lots of neurons passing through any given space (see: http://goo.gl/jr2dHA), but only a few neurons are actually talking to each other. The rest are insulated from the signal by other types of neural cells. In other words, neural signals aren't entirely open to influence from the outside. But certainly they are tolerant of more influence than a microchip.
The other place to object, though, is the extent to which computation happens independent of outside influence. When I'm playing a video game, for instance, there's certainly a lot of processing happening "under the hood" of the machine, but there's also a lot of sensitivity to my behavior and interaction with that system, and so there's a lot of interdependence and interaction between the player and the computer. At a very low level the computation isn't interactive, but at the level of the game itself, the machine is interactive nearly to the point of immersion. And even that's not strictly true! Modern video cards will only render from the perspective of the player, which means that in a very direct way, the computations performed are tightly linked to what the player is doing, with very rapid response times. To describe such machines as "devoid of influence from the outside" seems strange from this perspective.
What a lot of the criticisms of AI I've been ranting against fail to appreciate is the basic issue of multiple realizability: that the same high-level process might in fact be constituted by many different kinds of low-level processes, each of which produces a functional analog at the higher level by different means. There's nothing about silicon that fundamentally prevents it from being interactive in the appropriate ways. The myth that biology is fundamentally "alive" while electronics are "inert" must be resisted wherever it appears.
But I'm ranting, and none of these issues seem like big problems for practopoiesis; in fact, I'd expect the author to agree with most of what I've said. Some of these articles seem rather new, within the last year. I'll be interested to see what the scholarly response is.
A: More than five years. I refuse to say anything beyond five years because I don’t think we can see much beyond five years.
Q: In the ’80s, scientists in the AI field dismissed deep learning and neural networks. What changed?
A: Mainly the fact that it worked. At the time, it didn’t solve big practical AI problems, it didn’t replace the existing technology. But in 2009, in Toronto, we developed a neural network for speech recognition that was slightly better than the existing technology, and that was important, because the existing technology had 30 years of a lot of people making it work very well, and a couple grad students in my lab developed something better in a few months. It became obvious to the smart people at that point that this technology was going to wipe out the existing one.
Google was then the first to use their engineering to get it into their products, and in 2012 it came out in Android, and made the speech recognition in Android work much better than before: it reduced the word-error rate to about 26 per cent. Then, in 2012, students in my lab took that technology that had been developed by other people and developed it even further; while the existing technology was getting 26 per cent errors, we got 16 per cent errors. In the years after we did that, people said, ‘Wow, this really works.’ They were very skeptical for many, many years; they published papers dismissing it. Over the next years, they all switched to it.
// For anyone confused by 's article from last week (https://goo.gl/Q3OU7Y):
There's an important distinction between the proponents of AI and the Singularitarians, and does an excellent job here of representing this space. For Hinton, there's no question of a machine's thinking (or believing, deciding, imagining, etc; see his classic 2007 Google talk, esp. ~24:00 https://goo.gl/qdZviJ). We really can build computers that do all those things to demonstrable effect. And yet Hinton's optimism about AI doesn't preclude any realism about the gap between computers and humans: he says brains operate at roughly a million times the capacity of our best artificial neural nets today, with a hundred thousand times less power consumption.
Those are big numbers, for sure, but the underlying point is crystal clear: that these are differences of scale which pose an engineering challenge, not ones of essence that pose an affront to logical necessity. And given that we've experienced a trillion-fold increase in computing power over the last 60 years (http://goo.gl/mLMZ4p), it's hard to interpret these numbers as impossible to overcome. It probably won't happen in the next five years, and as Hinton rightly notes, who knows what will happen beyond that. The future success of AI is not fated to happen. But we can still heap criticism on those who claim, as Floridi does, that "No conscious, intelligent entity is going to emerge from a Turing Machine." Such views have no place in contemporary discussions of AI.
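As a sanity check on that growth figure (my own arithmetic, not from the linked article): a trillion-fold increase over 60 years works out to the familiar Moore's-law doubling rate.

```python
import math

# A trillion-fold (1e12) increase in computing power over 60 years:
# how often must capacity double to achieve that?
growth = 1e12
years = 60

doublings = math.log2(growth)      # number of doublings needed, ~40
doubling_time = years / doublings  # years per doubling, ~1.5

print(round(doublings, 1), round(doubling_time, 2))
```

Roughly one doubling every eighteen months, sustained for six decades, gets you a factor of a trillion. Whether Hinton's "million times" capacity gap closes on a similar schedule is of course an open question.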
I quoted a piece in the interview crediting the success of commercial AI applications as part of what brought AI out of its winter. It is interesting to think about the socioeconomic viability of AI as critical to its development. It suggests that if AI does not close these large gaps in this iteration, there will be not just technical but also socioeconomic reasons for the failure. The first AI winter was also due to a lack of funding, but that was for academic research and not for smartphone apps that everyone uses.
Geoffrey Hinton, the godfather of ‘deep learning’—which helped Google’s AlphaGo beat a grandmaster—on the past, present and future of AI
Q: Beyond games, then—what might come next for AI?
A: It depends who you talk to. My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000-trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller, the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.
Q: Can the growth in computing continue, to allow applications of deep learning to keep expanding?
A: For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts, that’s about what the brain takes, it’s comparable to a light bulb. So hardware will be crucial to making much bigger neural networks, and it’s my guess we’ll need much bigger neural networks to get high-quality common sense.
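Hinton's power figures are estimates, but the gap they imply is easy to make concrete. A quick sketch (my own arithmetic; 200 kW is an assumed mid-range reading of "hundreds of kilowatts", not a figure from the interview):

```python
# Rough energy comparison from Hinton's figures.
# ASSUMPTION: "hundreds of kilowatts" taken as 200 kW for illustration.
alphago_watts = 200_000
brain_watts = 30          # Hinton's estimate for a human brain

ratio = alphago_watts / brain_watts
print(f"AlphaGo used roughly {ratio:,.0f}x more power than its opponent")
```

Under that assumption the machine is nearly four orders of magnitude less energy-efficient than its opponent, which is why he flags hardware as the crucial constraint.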
It's based on a novel he wrote, and will be released later this year. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their writing, for example by letting it ingest their blog. This would allow the bot to adopt their "style, personality, and ideas."
1) I can't even
2) No, you
3) Watch it, mister!
// The rhetorical prowess of , philosopher.
Imagine for a moment the following commands.
1) take this knife.
2) chop all these "5" cookies into little pieces.
3) complete this task in as short a period of time as you can.
Enter the lab assistant, who takes one of the cookies and eats it ... those commands and that action are a combo waiting to go seriously wrong.
To misquote someone else, "I am not afraid of smart AI, I am afraid of the really stupid ones".
Herb Mugface - YouTube Channel (Herb is the robot below)
If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right. If it’s wrong in one direction, a dangerous criminal could go free. If it’s wrong in another direction, it could result in someone unfairly receiving a harsher sentence or waiting longer for parole than is appropriate.
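The two ways the paragraph says the computer can be wrong are just the two off-diagonal cells of a confusion matrix: a false negative frees a dangerous person, a false positive punishes a safe one. A toy sketch with made-up labels (nothing here comes from the article or from Northpointe's actual model):

```python
# Hypothetical risk scores and outcomes, purely for illustration.
predictions = ["high", "low", "high", "low"]                  # model's risk ratings
outcomes    = ["reoffended", "reoffended", "clean", "clean"]  # what actually happened

# Wrong in one direction: a reoffender was rated low-risk.
false_negatives = sum(p == "low" and o == "reoffended"
                      for p, o in zip(predictions, outcomes))

# Wrong in the other direction: someone who stayed clean was rated high-risk.
false_positives = sum(p == "high" and o == "clean"
                      for p, o in zip(predictions, outcomes))

print(false_negatives, false_positives)  # 1 1
```

Any real evaluation of such a tool has to report both error rates separately, and separately per group, since a single "accuracy" number can hide a large imbalance between them.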
The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wisconsin. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with “staying on the right path.” His lawyer agreed to a plea deal.
But Judge James Babler had seen Zilly’s scores. Northpointe’s software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. “When I look at the risk assessment,” Babler said in court, “it is about as bad as it could be.”
Then Babler overturned the plea deal that had been agreed on by the prosecution and defense and imposed two years in state prison and three years of supervision.
via Randall Villarreal
Burn, media, burn! Why we destroy comics, disco records, and TVs
Americans love their media, but they also love to bash it—and not just figuratively. Inside the modern history of disco demolition nights.
Using Smiles (and Frowns) to Teach Robots How to Behave - IEEE Spectrum
Japanese researchers are using a wireless headband that detects smiles and frowns to coach robots how to do tasks
DVICE: The Internet weighs as much as a largish strawberry
philosophy bites: Adina Roskies on Neuroscience and Free Will
Recent research in neuroscience following on from the pioneering work of Benjamin Libet seems to point to a disconcerting conclusion.
Kickstarter Expects To Provide More Funding To The Arts Than NEA
NEW YORK — Kickstarter is having an amazing year, even by the standards of other white hot Web startup companies, and more is yet to come.
How IBM's Deep Thunder delivers "hyper-local" forecasts 3-1/2 days out
IBM's "hyperlocal" weather forecasting system aims to give government agencies and companies an 84-hour view into the future o
NYT: Google to sell Android-based heads-up display glasses this year
It's not the first time that rumors have surfaced of Google working on some heads-up display glasses.