Andrea Casalotti
Velorutionary - Entrepreneur - Bodhisattva

410 followers
Andrea's posts

Post has shared content
Google Brain’s new super fast and highly accurate AI: the Mixture of Experts Layer.

One of the big problems in artificial intelligence is the enormous number of GPUs (or computers) needed to train large networks. The training time of a neural network grows roughly quadratically as a function of its size. This is due to how the network is trained: for each example, the entire network is updated, even though some parts might not even activate while processing that particular example. At the same time, a network's memory depends directly on its size; the larger the network, the more patterns it can learn and remember. So we end up building giant neural networks to process the mountains of data that corporations like Google and Microsoft have.

That was the case until Google released their Mixture of Experts Layer paper. The rough concept is to keep multiple experts inside the network, where each expert is itself a neural network. This looks similar to the PathNet paper; however, in this case there is only one layer of modules. You can think of the experts as multiple humans specialized in different tasks.
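To make the idea concrete, here is a minimal sketch of a sparsely gated mixture-of-experts layer in PyTorch-style Python. The class name, layer sizes, and the choice of top-2 routing are illustrative assumptions, and the sketch omits the noisy gating and load-balancing loss of the actual paper; it only shows the core mechanism of routing each input to a few experts and mixing their outputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Sketch of a sparsely gated mixture-of-experts layer.

    Each expert is a small feed-forward network; a gating network scores the
    experts for each input, and only the top-k experts are run and mixed.
    Illustrative only: the real layer adds noisy gating and a balancing loss.
    """
    def __init__(self, d_model=512, d_hidden=1024, num_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(d_model, num_experts)
        self.k = k

    def forward(self, x):                      # x: (batch, d_model)
        scores = self.gate(x)                  # (batch, num_experts)
        top_vals, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_vals, dim=-1)  # mixing weights, (batch, k)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                routed = top_idx[:, slot] == e          # inputs sent to expert e
                if routed.any():
                    w = weights[routed, slot].unsqueeze(1)
                    out[routed] += w * expert(x[routed])
        return out

x = torch.randn(4, 512)
print(MoELayer()(x).shape)   # torch.Size([4, 512])
```

The point of the design is that only the selected experts run for a given input, so the computation per example grows with k rather than with the total number of experts, while the total number of parameters, and hence the patterns the network can remember, can keep growing.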

Daniel Dennett speaking with FT's John Thornhill
Although Dennett accepts that such a superintelligence is logically possible, he argues that it is a “pernicious fantasy” that is distracting us from far more pressing technological problems. In particular, he worries about our “deeply embedded and generous” tendency to attribute far more understanding to intelligent systems than they possess. Giving digital assistants names and cutesy personas worsens the confusion.

“All we’re going to see in our own lifetimes are intelligent tools, not colleagues. Don’t think of them as colleagues, don’t try to make them colleagues and, above all, don’t kid yourself that they’re colleagues,” he says.

Dennett adds that if he could lay down the law he would insist that the users of such AI systems were licensed and bonded, forcing them to assume liability for their actions. Insurance companies would then ensure that manufacturers divulged all of their products’ known weaknesses, just as pharmaceutical companies reel off all their drugs’ suspected side-effects. “We want to ensure that anything we build is going to be a systemological wonderbox, not an agency. It’s not responsible. You can unplug it any time you want. And we should keep it that way,” he says.


Post has attachment
"“The algorithm did it” is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences.

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness."

Post has attachment
Sicilians are really barbarians. Southern Italy is a cancer that is killing civil society in Italy.

Post has attachment
Last year DeepMind performed a similar analysis on Google's data centres, apparently netting a 15 percent reduction in electricity usage. DeepMind trained a neural network to more accurately predict future cooling requirements, in turn reducing the power usage of the cooling system by 40 percent. "Because that’s worked so well we're obviously expanding that capability around Google, but we'd like to look at doing it at National Grid-scale," Hassabis said to the FT.
“We think there’s no reason why you can’t think of a whole national grid of a country in the same way as you can the data centres.”

Post has attachment
Different opinions on whether we should place upper bounds on the capabilities of AI

Post has attachment
“A lot of companies are just using deep learning for this component or that component, while we view it more holistically,” says Reiley.

The most common implementation of the piecemeal approach to which they’re referring is the use of deep learning solely for perception. This form of artificial intelligence is good for, say, recognizing pedestrians in a camera image, because it excels at classifying things within an arbitrary scene. What’s more, it can, after having learned to recognize a particular pattern, extend that capability to objects that it hasn’t actually seen before. In other words, you don’t have to train it on every single pedestrian that could possibly exist for it to be able to identify a kindly old lady with a walker and a kid wearing a baseball cap as part of the same group of objects.

While a pedestrian in a camera image is a perceptual pattern, there are also patterns in decision making and motion planning (the right behavior at a four-way stop, or when turning right on red, to name two examples) to which deep learning can be applied. But that is where most self-driving car makers draw the line. Why? These are the kinds of variable, situation-dependent decisions that deep learning algorithms are better suited to making than the traditional rules-based approaches with which most carmakers feel more comfortable, Reiley and Tandon tell us. And though deep learning’s “human-like” pattern recognition leads to more nuanced behavior than you can expect from a rules-based system, sometimes this can get you into trouble.

A deep learning system’s ability to recognize patterns is a powerful tool, but because this pattern recognition occurs as part of algorithms running on neural networks, a major concern is that the system is a “black box.” Once the system is trained, data can be fed to it and a useful interpretation of those data will come out. But the actual decision making process that goes on between the input and output stages is not necessarily something that a human can intuitively understand. This is why many companies working on vehicle autonomy are more comfortable with using traditional robotics approaches for decision making, and restrict deep learning to perception. They reason: If your system makes an incorrect decision, you’d want to be able to figure out exactly what happened, and then make sure that the mistake won’t be repeated.
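As a rough, hypothetical illustration of that split (not any particular company's stack), the sketch below confines the learned component to perception and hands its label to a small rule-based planner whose every branch can be inspected. The network, the class labels, and the distance thresholds are made up for the example.

```python
import torch
import torch.nn as nn

# Learned perception: a stand-in for a trained scene classifier.
# (Untrained here; in practice this would be a detector trained on labelled scenes.)
perception = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),              # illustrative classes: 0=clear, 1=pedestrian, 2=vehicle
)

LABELS = ["clear", "pedestrian", "vehicle"]

def plan(label: str, distance_m: float) -> str:
    """Rule-based decision making: every branch is explicit and auditable,
    which is why many teams keep this part out of the neural network."""
    if label == "pedestrian" and distance_m < 15.0:
        return "brake"
    if label == "vehicle" and distance_m < 5.0:
        return "slow"
    return "proceed"

frame = torch.randn(1, 3, 64, 64)                        # fake camera frame
label = LABELS[perception(frame).argmax(dim=1).item()]   # perception output
print(label, "->", plan(label, distance_m=10.0))         # auditable decision
```

If the planner does something wrong, the offending rule can be found and fixed; if the perception network mislabels the scene, the failure is at least localized to the learned component.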

Post has attachment
Research in digital health tends to focus on the last step in this process – the diagnosis. While diagnosis is a crucial part of what makes a doctor, it is still only one of many tasks a doctor has to do, and focusing on this single aspect alone will not solve the problem. Don’t get me wrong, we at Babylon are very interested in diagnosis and invest considerable resources into developing the best possible automated diagnostic system. However, we also focus on the simple things, the things that we as humans take for granted because they are just so easy for us.

One major interest of ours is language: how can we make the machine understand what you are saying? I mean not just understanding that you are talking about your shoulder, but really understanding how “shoulder pain” and “I have shoulder pain for years and nobody can help me” talk about similar concepts yet convey different emotions and problems. Clearly, we are not the only ones who want to solve this problem. Nearly every application area of machine learning that is based on text faces the same challenge – and that is a great thing for us! Very clever people at big companies like Google and Facebook are working hard on these problems, and to our benefit they openly publish many of their insights. Given these advances in other fields, the question really becomes one of transfer learning: how do we take the amazing results somebody else has achieved in their domain and apply them to our own problems?

It turns out that this is exactly the sort of problem deep learning excels at! Why? It is basically the whole reason neural nets work so well in the first place. Without delving too deep into the technical details, any neural network that does classification basically consists of two parts: i) a sequence of complicated transformations of the network’s input (for example, an image), and ii) a relatively simple decision (say, whether it shows a cat). When we “train” the network, we show it thousands of examples of inputs and desired outputs and put it under a lot of stress, since initially it cannot do the task well. For the network, the only way to relieve the stress is to get better at the task. Because the decision part is so simple, the network has only one choice: it has to transform the input in a way that makes the decision easier.
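A minimal sketch of that two-part picture, and of transfer learning in the sense described above, might look like the following. The layer sizes, the frozen encoder, and the fake data are illustrative assumptions, not Babylon's actual system: the "complicated transformations" come from an encoder pretrained elsewhere and kept frozen, while only a new, simple decision layer is trained on the target task.

```python
import torch
import torch.nn as nn

# Part (i): the "complicated transformations" - stand-in for an encoder
# pretrained elsewhere (e.g. on general-domain text).
pretrained_encoder = nn.Sequential(
    nn.Linear(300, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)
for p in pretrained_encoder.parameters():
    p.requires_grad = False          # freeze: reuse the transformations, don't retrain them

# Part (ii): the "relatively simple decision" - a new head for our own task,
# e.g. classifying an utterance as a symptom description or not.
decision_head = nn.Linear(64, 2)

optimizer = torch.optim.Adam(decision_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on fake data (real use would feed
# sentence representations and labels from the target domain).
x = torch.randn(32, 300)             # pretend these are sentence embeddings
y = torch.randint(0, 2, (32,))       # pretend labels
optimizer.zero_grad()
logits = decision_head(pretrained_encoder(x))
loss = loss_fn(logits, y)
loss.backward()                      # gradients reach only the decision head
optimizer.step()
print(loss.item())
```

Because the heavy lifting was already learned in someone else's domain, only the small decision part needs data from ours.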

Post has attachment
Photo