Profile

Wayne Radinsky
Attended University Of Colorado At Boulder
Lives in Denver
17,014 followers | 29,596,106 views

Stream

Wayne Radinsky

Shared publicly  - 
 
A film shot entirely by drones. "From the eyes of the drones we see two teenagers, each held by police order within the digital confines of their own council estate tower block in London. A network of drones surveys the council estates, like a roving flock of CCTV cameras, and our two characters are kept apart by this autonomous aerial infrastructure. We watch as they pass notes to each other via their own hacked and decorated drone, like kids in an old-fashioned classroom, scribbling messages with biro on paper, balling it up and stowing it in their drones. In this near-future city, drones are both agents of state surveillance and the co-opted aerial vehicles through which two teens fall in love."
‘In the Robot Skies’ imagines teenage love in the impending age of surveillance.
6

Wayne Radinsky

Shared publicly  - 
 
The Neural Photo Editor. Instead of manipulating individual pixels, you modify the image by changing model parameters, which enables you to make large semantic changes with ease. You paint with a "contextual paintbrush" which, instead of changing pixel values, back-propagates to determine the change in the latent model parameters that would produce the requested color change, then takes a gradient descent step in that direction, resulting in globally coherent changes. Under the hood, the system uses something called an Introspective Adversarial Network, which is a hybridization of a Generative Adversarial Network (a system where, instead of using a single generative neural network, you combine a generative network with a discriminator network and set them up as "adversaries") and a Variational Autoencoder (a system where a neural network is trained to encode its input into latent parameters, decode it back out, and compare the output with the input to correct itself -- I'm not clear how exactly this works).

Anyway, you can randomly sample the model to generate faces at will, then use the paintbrush to take walks in the latent space. He (Andrew Brock) says that normally, to get specific repeatable changes in a generative model, the model needs to be augmented with attribute labels during training, as there is no guarantee that specific latent vectors will correspond to meaningful features; but while this works for images generated by the model, it fails when you try to apply it to existing photos. His solution is masked interpolation, where the output is a composition of the original image and the reconstruction, weighted by the reconstruction error. By interpolating between the error and the requested change, you can get smooth, realistic changes in the original photo.
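To make the paintbrush idea concrete, here's a toy sketch of my own (not Brock's code; a linear map stands in for the trained decoder): a requested change at one pixel is back-propagated to the latent vector, and a single gradient descent step there shifts the whole image coherently, since every pixel depends on the latent code.

```python
import random

random.seed(0)
LATENT, PIXELS = 8, 16

# Toy stand-in for a trained decoder: a linear map from latent z to pixels.
# A real model would be a deep generator network.
W = [[random.gauss(0, 1) for _ in range(LATENT)] for _ in range(PIXELS)]

def decode(z):
    return [sum(w * v for w, v in zip(row, z)) for row in W]

z = [random.gauss(0, 1) for _ in range(LATENT)]
img = decode(z)

# The "paintbrush": request a brighter value at pixel p.
p = 6
target = img[p] + 1.0

# Back-propagate the squared error at that one pixel to z; for a
# linear decoder, d(img[p])/dz is just row p of W.
grad = [2 * (img[p] - target) * w for w in W[p]]

# One gradient-descent step in latent space. All pixels shift
# coherently, because they all depend on z.
z_new = [v - 0.1 * g for v, g in zip(z, grad)]
img_new = decode(z_new)
assert img_new[p] > img[p]  # painted pixel moved toward the target
```

With a deep generator the gradient comes from autograd rather than a hand-derived formula, but the loop is the same: paint, back-propagate to the latent code, step, re-decode.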
27
3

Wayne Radinsky

Shared publicly  - 
 
Intel is going to do 3 generations of 10 nanometer technology, which they call 10 nm, 10 nm+, and 10 nm++. This will push the debut of 7 nm technology out to 2021 or 2022. It also looks like the "Tri-Gate" FinFET transistor structure, which Intel debuted in 2011 at 22 nm, will be used for Intel's 7 nm transistors when they debut. So it won't be until 5 nm that Intel might use a transistor structure other than FinFET.
This future chip manufacturing technology looks evolutionary rather than revolutionary.
43
4
Emmanuel Bourmault, Travis Owens
2 comments
 
Approaching the limits of physics

Wayne Radinsky

Shared publicly  - 
 
"Google's image-captioning AI is getting scary good. The machine knows more than just what's in a picture. It's learning to understand what those people and objects are doing."

"A dog is sitting on a beach next to a dog."

"Our model does indeed develop the ability to generate accurate new captions when presented with completely new scenes, indicating a deeper understanding of the objects and context in the images. It learns how to express that knowledge in natural-sounding English phrases despite receiving no additional language training other than reading the human captions."
The machine knows more than just what's in a picture. It's learning to understand what those people and objects are doing.
37
4
hinh hoa, Wayne Radinsky, Terrence Lee Reed
5 comments
 
Just wait until it graduates from kindergarten.

Wayne Radinsky

Shared publicly  - 
 
Hangover-free alcohol has been invented, at least according to David Nutt, the Imperial College professor who claims to have invented it. The article says little about how it supposedly works, except that it's supposed to mimic the 'positive' effects of alcohol without the 'negative' side effects (including heart and liver damage, not just hangovers).
A new type of synthetic alcohol has been discovered which could allow people to enjoy the sociable effects of a few pints, but skip the hangover that usually follows.
13
Sy Bernot (Psybernaut), Titan ThinkTank, Pedro Marcal
4 comments
 
Things have changed at Imperial College. In my day I had a still in my lab that produced pure alcohol from wood alcohol for applying strain gauges and filling punch bowls.

Wayne Radinsky

Shared publicly  - 
 
A new species of ant was discovered in frog vomit. That just sounds funny. Anyway, the ant species is called Lenomyrmex hoelldobleri, the frog species is called Oophaga sylvatica, or more colloquially, diablito ("little devil" in Spanish), and they both live in Ecuador. Apparently this frog is useful for discovering ants because it loves to eat ants.
Sometimes scientists make discoveries in the strangest of places. Like the belly of a poison frog.
18
6
Titan ThinkTank
 
Whoa, I almost thought those lived inside the frogs.

Wayne Radinsky

Shared publicly  - 
 
Google launched a new neural network translation system (called Google Neural Machine Translation) for Chinese-to-English translations. They plan to roll it out for other language pairs over time.

"A few years ago we started using Recurrent Neural Networks (RNNs) to directly learn the mapping between an input sequence (e.g. a sentence in one language) to an output sequence (that same sentence in another language). Whereas Phrase-Based Machine Translation (PBMT) breaks an input sentence into words and phrases to be translated largely independently, Neural Machine Translation (NMT) considers the entire input sentence as a unit for translation."

"Since then, researchers have proposed many techniques to improve NMT, including work on handling rare words by mimicking an external alignment model, using attention to align input words and output words, and breaking words into smaller units to cope with rare words. Despite these improvements, NMT wasn't fast or accurate enough to be used in a production system, such as Google Translate."

The article goes on to describe how they made an LSTM neural network system with 8 encoder and 8 decoder layers and an "attention" mechanism to align the encoder with the decoder, a system for dividing rare words into "wordpieces", and a "beam search" system with a "coverage penalty" to ensure the output sentence covers all the words in the input sentence.
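As a rough illustration of the beam search step (a hypothetical toy of my own: a hand-coded next-token table stands in for the LSTM decoder, and Google's length normalization and coverage penalty are omitted), the decoder keeps only the top-scoring partial translations at each step instead of exploring every possible sentence:

```python
import math

# Toy next-token distribution. In a real NMT system these probabilities
# come from the decoder network conditioned on the encoder states.
PROBS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.4, "</s>": 0.1},
    "a":   {"cat": 0.3, "dog": 0.3, "</s>": 0.4},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def beam_search(beam_size=2, max_len=4):
    # Each hypothesis: (sum of log-probabilities, token sequence)
    beams = [(0.0, ["<s>"])]
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == "</s>":          # finished hypotheses carry over
                candidates.append((score, seq))
                continue
            for tok, p in PROBS[seq[-1]].items():
                candidates.append((score + math.log(p), seq + [tok]))
        # Prune: keep only the best `beam_size` partial translations.
        beams = sorted(candidates, reverse=True)[:beam_size]
    return beams[0][1]

print(beam_search())  # ['<s>', 'the', 'cat', '</s>']
```

The coverage penalty mentioned in the article would be an extra term added to each score, rewarding hypotheses whose attention has covered more of the input words.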
25
5
Wayne Radinsky, Daniel Estrada, Chris Knutson
7 comments
 
60% is a nice improvement, but that doesn't mean it doesn't still suck. I can get 20/100 on a test, improve by 60%, and yes, I would still suck.

The scores AFTER the 60% improvement are still lower than the starting scores of other languages. The other languages already had much better translation, so they arguably had less room for improvement, but still improved more than English<>Chinese.

I get that Chinese<>English translation is hard. I'm not saying it's not a big accomplishment, and I'm not belittling Google for this accomplishment. However, I would argue that yes, the translation still has a lot of room for improvement.

Wayne Radinsky

Shared publicly  - 
 
Beauty going high-tech. L'Oréal "has set up its own 'technology incubator' in San Francisco -- a small, nimble outfit working in partnership with university experts and specialist tech companies to produce a dazzling string of beauty innovations."

"One of the latest developments from the incubator is Lancôme's Le Teint Particulier, a personalised foundation system that is already available at Los Angeles' Nordstrom department store, and is set to hit Selfridges in the UK next spring."

"It's an optical scanner that analyses your skin tone, then, via a complex algorithm, commands the pigment machine on the make-up counter to squirt exactly the right amount of the right shades for you into a tube, before mixing them right in front of your eyes."
You might assume there is little connection between the techies beavering away in California…
6
1

Wayne Radinsky

Shared publicly  - 
 
An AI system has been developed that can find every theoretically viable combination of four chemical elements in the structure of elpasolite, a type of quaternary crystal (a crystal made up of four chemical elements). The system has accuracy similar to systems that compute directly from quantum physics, but is much faster.

"The researchers were able to detect basic trends in formation energy and identify 90 previously unknown crystals that should be thermodynamically stable, according to quantum mechanical predictions."

"Some of the newly discovered elpasolite crystals display exotic electronic characteristics and unusual compositions".
With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. They report on their findings in the scientific journal Physical Review Letters.
34
6

Wayne Radinsky

Shared publicly  - 
 
Microsoft has filed a patent for a "mediation component" that records everything you type in every application and sends it to Bing/Cortana as part of your search query the next time you do a search. "The search engine (e.g., Bing and Cortana) uses contextual rankers to adjust the default ranking of the default suggested queries to produce more relevant suggested queries for the point in time. The operating system, comprising the function of mediation component, tracks all textual data displayed to the user by any application, and then performs clustering to determine the user intent (contextually)."

Keep in mind this is a patent and not a product feature and most patents never see the light of day.
We’ve stated on many occasions that Windows 10 is an excellent operating system, albeit one with a few rough edges that could be smoothed-out. However, Microsoft has angered some users over the past year or so, in its willingness to dance right up to the line of what customers feel is acceptable practice for promoting adoption of its new OS.
35
9
Jim Gomes, Vb Wyrde, Erfan Ta
22 comments
 
Keep up the good work, Microsoft. Soon Windows won't be worthy to run on my physical hardware, and will be dumped into a KVM only to run games. Thanks to AMD and Intel for the AMD-Vi and VT-d extensions.

Wayne Radinsky

Shared publicly  - 
 
"Monsanto has licensed the use of CRISPR-Cas genome-editing technology from the Broad Institute of MIT and Harvard."

"Monsanto intends to use CRISPR to make crops like corn and soybeans more fruitful and more resistant to diseases and drought, says Tom Adams, Monsanto's head of biotechnology."
A licensing agreement between Monsanto and the Broad Institute will allow the biotech giant to use genome editing to modify plants like corn and tomatoes
12
7

Wayne Radinsky

Shared publicly  - 
 
A technique has been developed to "virtually unroll" fragile scrolls. It works by first performing a high-resolution scan, then identifying layers and looking at one layer at a time, then identifying dense regions, which are dense because of ink, then doing a "virtual flattening" of each layer, and finally combining all these flattened pieces together to make a single flat scroll.

This was done in 5 pieces with the En-Gedi scroll, which was found to be a copy of Leviticus, the earliest copy ever found.
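As a toy illustration of just the ink-detection step (my own sketch; the actual software is far more involved), ink regions can be pulled out of one layer's density data with a simple threshold, since ink is denser than the surrounding material:

```python
# Hypothetical sketch of ink detection on one unwrapped layer: in the
# scan data, ink shows up as regions of higher density than the
# surrounding material, so a threshold yields a binary ink mask.
def detect_ink(layer, threshold=0.5):
    """Return a binary ink mask for one layer (a list of rows of densities)."""
    return [[1 if v > threshold else 0 for v in row] for row in layer]

# A tiny fake layer: background density ~0.2, two "ink strokes" ~0.9.
layer = [
    [0.2, 0.9, 0.2, 0.2],
    [0.2, 0.9, 0.2, 0.9],
    [0.2, 0.2, 0.2, 0.9],
]
mask = detect_ink(layer)
print(mask)  # [[0, 1, 0, 0], [0, 1, 0, 1], [0, 0, 0, 1]]
```

The real pipeline then flattens each masked layer geometrically and stitches the flattened pieces into one continuous image of the scroll.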
24
4
hinh hoa
 
A dog is sitting on a beach next to a dog
Wayne's Collections
Story
Tagline
Software Design Engineer
Introduction
I'm a software engineer specializing in the design of great software. Every successful large software project ever made started out as a small software project that got larger. The key to a successful large project is knowing how to design software while it is small so that it is capable of growing. Poor design in the early stages leads to high-entropy software that is difficult to maintain and extend years down the line. Good design in the initial stages allows new features to be added easily. Good design doesn't take any more time than poor design, but you have to know how to do it.

Certain principles are essential to good design. The starting point is the program's data structures, which form the foundation of any software project. The key to good data structure design is to make sure the relationships between the pieces of data in your data structures match the relationships between the objects or ideas those data structures represent in the minds of your users. Any time these get out of sync, you are in for trouble -- but the trouble does not usually arrive immediately; it can arrive months or years down the line. This delayed feedback cycle is one reason many software projects run late or fail. When the data structures are out of sync with the minds of users, there is a temptation to "patch" the problem by adding more data structures that form a bridge between the existing structures and what you want to do. These "patches" are, unfortunately, "dirty hacks" that will add complexity to your software down the road. It is this complexity -- and more to the point, *unnecessary* complexity -- that makes it difficult to maintain or extend your software with new features in the future.
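A small illustration (the domain and names are hypothetical): if users think of an order as "a customer plus a list of line items", the data structure should say exactly that, and features then fall out naturally instead of requiring bridging structures:

```python
# Illustrative sketch: the structure mirrors the user's mental model
# of an order -- a customer plus an ordered list of line items.
order = {
    "customer": "Ada",
    "items": [
        {"sku": "WIDGET", "qty": 2},
        {"sku": "GADGET", "qty": 1},
    ],
}

# Because the structure matches the concept, a feature like
# "total quantity" is a one-liner -- no bridging tables needed.
total_qty = sum(item["qty"] for item in order["items"])
assert total_qty == 3
```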

It is also extremely important to design the code structure correctly. It is very common to make basic errors like using global variables. Globals are very powerful, but should be used with care -- they connect separate components of the software with each other. (And be aware that many variables are global even when they are not called "global" in your particular programming language -- they can have other names). When you *want* something to apply "everywhere", globals are the right choice, because you change them in one place and the change is applied everywhere. But more often than not, globals are used when they shouldn't be, causing a change in one part of a program to cause another part of the program, that seems unrelated, to break.
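A minimal illustration (the names and rates are hypothetical, chosen so the float arithmetic is exact) of how a global couples unrelated parts of a program, and the explicit-parameter alternative:

```python
# Hypothetical example: a module-level tax_rate silently couples two
# otherwise unrelated functions.
tax_rate = 0.25

def price_with_tax(price):
    return price * (1 + tax_rate)

def run_special_report():
    global tax_rate
    tax_rate = 0.5           # a "temporary" tweak for one report...

run_special_report()
# ...and now an unrelated part of the program changes behavior:
assert price_with_tax(100) == 150.0   # the caller expected 125.0

# The fix: pass the dependency explicitly, so each caller controls it.
def price_with_tax_explicit(price, rate):
    return price * (1 + rate)

assert price_with_tax_explicit(100, 0.25) == 125.0
```

When a value genuinely should apply everywhere, the global version is the right choice; the trouble starts when one caller mutates it for local convenience.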

Another minefield is object oriented programming. Objects are an extremely powerful and flexible programming metaphor -- and that's the problem. They are so flexible that they can mean almost anything, and they can make it easy for you to shoot yourself in the foot with excessive complexity. In reality, there is nothing wrong with non-object-oriented programming -- proper and thoughtful use of functions and libraries of functions -- so it is not necessary to use objects everywhere or make "everything" an object in your program. In particular, there is no advantage in doing "object-relational mapping" -- if you're doing this, it means you have designed all your data structures *twice* (once in the relational data model, and again in an object-oriented model), wasting effort. Furthermore, objects should only be used when they add *clarity* to a program, when they make it easier to understand how the program works, rather than more difficult. In certain situations, such as when polymorphism is needed to solve whatever problem your software needs to solve for the user, objects are a clear benefit, simplifying the design and adding clarity to the code. In many other situations, however, excess use of objects creates obfuscation, leading to maintainability problems and difficulty adding features to your software in the future.
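A small hypothetical example of polymorphism adding clarity rather than obfuscation: several notification channels behind one `send` interface, so the calling code needs no if/elif chain per channel type:

```python
# Illustrative: each channel implements the same send() interface.
class EmailChannel:
    def send(self, msg):
        return f"email: {msg}"

class SmsChannel:
    def send(self, msg):
        return f"sms: {msg}"

def broadcast(channels, msg):
    # The caller doesn't know or care which concrete channel it holds;
    # adding a new channel type requires no change here.
    return [ch.send(msg) for ch in channels]

print(broadcast([EmailChannel(), SmsChannel()], "hi"))
# ['email: hi', 'sms: hi']
```

The same program written with an if/elif per channel type would need editing in every dispatch site each time a channel is added; here the polymorphic call site stays untouched.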

And it is these complexity issues that impose limitations on how big your software can get, how many features it can have, and ultimately how well your business can grow and how well you can serve your customers.
Education
  • University Of Colorado At Boulder
    Computer Science
Basic Information
Gender
Male
Work
Occupation
Software Design and Development
Employment
  • Software Design and Development, present
Places
Map of the places this user has lived
Currently
Denver
Previously
Denver - Silicon Valley, California