Wayne Radinsky
Software Design Engineer
18,148 followers

Wayne's posts

Post has attachment
"Pruning deep neural networks to make them fast and small." "Pruning neural networks is an old idea going back to 1990 (with Yan Lecun's optimal brain damage work) and before. The idea is that among the many parameters in the network, some are redundant and don't contribute a lot to the output."

"If you could rank the neurons in the network according to how much they contribute, you could then remove the low ranking neurons from the network, resulting in a smaller and faster network."

"Getting faster/smaller networks is important for running these deep learning networks on mobile devices."

"The ranking can be done according to the L1/L2 mean of neuron weights, their mean activations, the number of times a neuron wasn't zero on some validation set, and other creative methods . After the pruning, the accuracy will drop (hopefully not too much if the ranking clever), and the network is usually trained more to recover."

Post has attachment
How big is Bitcoin? All of Bitcoin is smaller than the wealth of Bill Gates. I think the point of this is that Bitcoin is small and therefore has a lot of room to grow, but the main thing I noticed from this chart is how huge Apple is.

Post has attachment
"Comparing the top five computer vision APIs." Microsoft, IBM, Google, Cloudsight, and Clarifai.

"Most images are labeled with a correct high level category. Labels with more specificity are not as reliable. Outright errors are rare, but do happen. There is significant variance between vendors. Incorrect rotation can affect accuracy. Results are better if you can zoom in to areas of interest. It’s affordable at scale. It’s hard to evaluate a solution without a real problem."

"But one vendor, Cloud Sight seems to be too good to be true..." "The latency for most vendor APIs is sub-second, whereas Cloud Sight's docs ask you to wait 6 -- 15 seconds (and in practice I see something like 5 -- 30 seconds). Second, I found this buried deep in their privacy docs 'Our service employs a proprietary technology that utilizes both computer vision and crowdsourcing'. Third, some more searching turned up this Reddit thread on how they work, and this comment from a human tagger."

Post has attachment
Estonia "had its first experience with cyber-conflict back in 2007, when attacks originating from Russia managed to take fifty-eight Estonian websites offline at once, including those of the government, most newspapers and many banks. Although no information was lost during this event, Estonia had been backing up important data outside of its borders even before the attack, storing it in Estonian embassies across the world. Russia's annexation of Crimea in 2014 brought the question of continuity back to the forefront of public discussions in Estonia."

"The cloud technology provides a good opportunity, but the state also wants to maintain the full control and jurisdiction of their data and systems."

"The first data embassy will be based in a high-security data centre in Betzdorf, a commune in eastern Luxembourg. 'The Luxembourg site will store the copies of the most critical and confidential data,' Siim Sikkut, the government’s ICT policy adviser, explained, adding that the first data embassy should become operational by the end of this year, or at the latest, at the start of 2018."

Post has attachment
"Existing tools allow researchers to engineer cells so that when neurons turn on a gene called cfos, which helps cells respond to new information, they also turn on an artificially introduced gene for a fluorescent protein or another tagging molecule. The system is designed so that this labeling takes place only when the animals are exposed to a drug that activates the system, giving scientists control over the timing -- but not very precise control."

"Those activity-dependent tools have been hugely impactful, but those tools really only work on the timescale of a couple of days. If you think about the speed of the neural code, it's operating more at the pace of milliseconds."

"The researchers designed their tool to respond to calcium, because neurons experience an flux of calcium ions every time they fire an electrical impulse. However, the neurons are only labeled if this calcium flux occurs while the cell is also exposed to a beam of blue light delivered by the researchers. This combination of light exposure and calcium activity triggers the activation of a transcription factor that turns on a target gene that the researchers have engineered into the cells' genome. This gene could encode a fluorescent protein or anything else that could be used to label or manipulate neurons."

Post has attachment
Select memories can be erased, leaving others intact. At least in the marine snail Aplysia. "The new study tested that hypothesis by stimulating two sensory neurons connected to a single motor neuron of the marine snail Aplysia; one sensory neuron was stimulated to induce an associative memory and the other to induce a non-associative memory. By measuring the strength of each connection, the researchers found that the increase in the strength of each connection produced by the different stimuli was maintained by a different form of a Protein Kinase M (PKM) molecule (PKM Apl III for associative synaptic memory and PKM Apl I for non-associative). They found that each memory could be erased -- without affecting the other -- by blocking one of the PKM molecules."

"In addition, they found that specific synaptic memories may also be erased by blocking the function of distinct variants of other molecules that either help produce PKMs or protect them from breaking down."

Post has attachment
Neural networks that can answer reasoning questions. "Rather than training a single large network on lots of input/output pairs, we actually train a huge number of different networks at the same time, while tying their parameters together where appropriate."

"One of the remarkable things about this process is that we don't need to provide any low-level supervision for individual modules: the model never sees an isolated example of blue object or a 'left-of' relationship. Modules are learned only inside larger composed structures, with only (question, answer) pairs as supervision. But the training procedure is able to automatically infer the correct relationship between pieces of structure and the computations they're responsible for."

Post has attachment
"People whose minds tend to wander are less likely to stick to their long-term goals." "Those who could sustain focus in day-to-day life were more likely to report maintaining perseverance and passion in their long-term objectives." "We've shown that maintaining concentration over hours and days predicts passion over longer periods."

"The researchers' findings resulted from three separate studies. In the first two studies, surveys measured the mind wandering, inattention and grittiness of 280 participants. In the third study, 105 post-secondary students were asked to report on their mind-wandering habits during class and then fill out questionnaires to measure their grittiness."

Post has attachment
An AI inspection system has been developed to help manufacturers identify product defects on the assembly line. "Instrumental makes a hardware box that goes on the assembly line and takes a photo of every device that passes through and they recently announced their deep learning software called Detect, which highlights units that appear defective or anomalous, giving our customers a significant edge in discovering and resolving product issues."
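
Detect itself is proprietary, but one common way to highlight anomalous units from assembly-line photos is to train an autoencoder on images of known-good units and flag units whose reconstruction error is high. The sketch below shows that generic approach with made-up image sizes and random stand-in data; it is not Instrumental's actual method.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress and reconstruct flattened unit photos; trained on good units only."""
    def __init__(self, n_pixels=64 * 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pixels, 128), nn.ReLU(),
                                 nn.Linear(128, 32))
        self.dec = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                                 nn.Linear(128, n_pixels))
    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(model, images):
    """Per-image mean squared reconstruction error; high values are suspicious."""
    with torch.no_grad():
        recon = model(images)
    return ((images - recon) ** 2).mean(dim=1)

model = AutoEncoder()
good_units = torch.rand(100, 64 * 64)          # stand-in for photos of good units
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                            # brief illustrative training loop
    loss = ((model(good_units) - good_units) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

new_units = torch.rand(5, 64 * 64)             # units coming off the line
print(anomaly_scores(model, new_units))        # threshold these to highlight defects
```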

Post has attachment
Predicting bond energies with AI. A deep learning-based system that can accurately determine bond energies has been developed. "John Parkhill, Assistant Professor of Chemistry & Biochemistry at the University of Notre Dame in Indiana, and his team trained their neural network on a database of over 130,000 molecules."

"The network learns the total energies of the popular GDB9 database to a competitive MAE of 0.94 kcal/mol on molecules outside of its training set, is naturally linearly scaling, and applicable to molecules consisting of thousands of bonds. More importantly, it gives chemical insight into the relative strengths of bonds as a function of their molecular environment, despite only being trained on total energy information."