Profile

Wayne Radinsky
Attended University of Colorado at Boulder
Lives in Denver
15,487 followers | 9,642,228 views

Stream

Wayne Radinsky

Shared publicly
 
In the middle of last summer, a massive crater, nearly 100 feet (30 m) in diameter, appeared in Siberia. "Russian scientists have now spotted a total of seven craters, five of which are in the Yamal Peninsula. Two of those holes have since turned into lakes. And one giant crater is rimmed by a ring of at least 20 mini-craters. Dozens more Siberian craters are likely still out there."

"No one has been hurt in any of the blasts, but given the size of some of the craters, it's fair to say the methane bursts are huge. Researchers are nervous about even studying them."
 
Possibly a hydraulic effect? As ice melts in surrounding areas, water pushes down, forcing gas up, where it is trapped by the ice cap until critical pressure is reached.

Wayne Radinsky

Shared publicly
 
Interviews with DeepMind founder Demis Hassabis and researchers Volodymyr Mnih and Koray Kavukcuoglu. Their system is a giant neural network with multiple layers that work at progressively more abstract levels. Its input is the video frames from the game plus the game score, and its outputs are the buttons on the video game controller. It can learn many games, but it has a weakness: if a game does not give it a direct path to immediate points, the system has nothing to work from to get started. It also has no long-term memory, so longer-term planning is an issue.

DeepMind's Google acquisition is mutually beneficial because Google has massive data and computation power and DeepMind has algorithms. Aren't they giving away their secrets by publishing a Nature paper? They say they benefit from outside validation of the quality of their work.
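The learning loop described above can be sketched in miniature. DeepMind's actual system is a deep convolutional network trained on raw pixels; the toy below substitutes a tabular Q-learner on a five-state corridor, so every name and number in it is illustrative rather than anything from the paper.

```python
import random

# Toy stand-in for the DQN idea: the agent sees a state, picks an
# action ("button"), receives a score change, and updates its value
# estimates. DQN replaces this table with a deep convolutional network
# reading raw pixels; everything here is a deliberate simplification.

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
N_STATES, ACTIONS = 5, [0, 1]          # 0 = move left, 1 = move right

def step(state, action):
    """Tiny corridor game: reach state 4 for a reward of +1."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:          # explore
            action = random.choice(ACTIONS)
        else:                                   # exploit current estimate
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should move right from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The "no immediate points" weakness shows up here too: the agent finds the goal only by stumbling onto it, and until the first reward arrives, every value estimate stays at zero.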
 
Go inside DeepMind

Want to get a glimpse of the research behind yesterday's DeepMind publication (goo.gl/wYRwNV)? Check out the Nature video interview with DeepMind founder Demis Hassabis and two of the paper authors, Volodymyr Mnih and +koray kavukcuoglu, to find out more about how the DQN algorithm works.
 
I have this paper:

http://arxiv.org/pdf/1312.5602.pdf

Looks like their Nature paper is a new paper but I don't have access to it.

Wayne Radinsky

Shared publicly
 
Music for cats. Something else that was done years ago that I didn't find out about until today. Species-specific music. If you scroll down the page you can hear some sample music. Spook's Ditty, Cozmo's Air, and Rusty's Ballad.
Home of authentic species-specific music: the NY Times #1 idea of 2009.
 
Oh, right. Should've waited 'til tomorrow. It's caturday eve.

Wayne Radinsky

Shared publicly
 
A lab in Berkeley accidentally discovered a fix for color blindness. "The glasses work by selectively removing certain wavelengths between the red and green cones that allow them to be in essence pushed apart again." The glasses were originally designed as protective eyewear for doctors during surgery.
For millions of Americans, a world without seeing color is reality. A solution for color blindness has been developed in a Bay Area lab.
 
About 20-30 wait...

Wayne Radinsky

Shared publicly
 
John Lanchester at the London Review of Books reviews Erik Brynjolfsson and Andrew McAfee's The Second Machine Age: Work, Progress and Prosperity in a Time of Brilliant Technologies. "The force at work here is a principle known as Moore's law. This isn't really a law at all, but rather the extrapolation of an observation made by Gordon Moore, one of the founders of the computer chip company Intel."

"The idea behind Watson was to build a computer that could understand ordinary language well enough to win a popular TV quiz show called Jeopardy!, playing against not just ordinary contestants, but record-holding champions. This would be, as Brynjolfsson and McAfee say, 'a stern test of a computer's pattern matching and complex communication abilities', much more demanding than another IBM project, the chess computer Deep Blue, which won a match against the world champion Garry Kasparov in 1997. The outcome is already a locus classicus in the study of computing, robotics and futurism, and is discussed at length in both John Kelly and Steve Hamm's Smart Machines and Tyler Cowen's Average Is Over. Watson won, easily. Its performance wasn't perfect: it thought Toronto was in the US, and when asked about a word with the double meaning 'stylish elegance, or students who all graduated in the same year', it answered 'chic', not 'class'. Still, its cash score at the end of the two-day contest was more than three times higher than that of the best of its human opponents. 'Quiz-show contestant may be the first job made redundant by Watson,' one of the vanquished men said, 'but I'm sure it won't be the last.'"
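The "extrapolation" point is easy to make concrete: Moore's observation is just compound doubling. A quick sketch; the two-year doubling period is the conventional figure, not something from the review:

```python
# Moore's "law" is just compounding: if transistor counts double every
# two years, growth over a decade is 2**(10/2) = 32x. Illustrative
# arithmetic only; the doubling period itself has varied historically.

def growth_factor(years, doubling_period_years=2.0):
    """Cumulative growth under a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

print(growth_factor(10))    # 32.0 -- one decade at a two-year doubling
print(growth_factor(40))    # about a million-fold over forty years
```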
Wayne Radinsky

Shared publicly
 
Why Nintendo doesn't release Mario on the App Store. "Apple doesn't have incentives to protect the value of content, since content, for Apple, is just something that attracts people who want to buy smartphones."

"Nintendo's worst nightmare is a future where the value of content keeps getting lower and lower. Protecting the value of its games seems to be what they have been thinking hard about, and this must be the part they are struggling with."
 
Fancy wording for "we still make more money by not selling Mario games for $2.99 on the iPhone." ;)

Wayne Radinsky

Shared publicly
 
Foxconn expects to replace 70% of its human workforce with robots within three years, according to a video of CEO Terry Gou (the video is in Chinese).
In three years, Foxconn will probably use automation to complete 70 percent of its assembly line work
 
hat tip +Randy LaVigne 

Wayne Radinsky

Shared publicly
 
Google bought the .app top level domain for $25 million. "The purchase gives the search giant rights to sell domains ending in .app extension. The company outbid several tech companies including Amazon and almost every large internet registry to grab the said top-level domain name."

Wayne Radinsky

Shared publicly
 
Microsoft is building fast, low-power neural networks with FPGAs. "Microsoft claims that new FPGA designs provide greatly improved processing speed over earlier versions while consuming a fraction of the power of GPUs. This type of work could represent a big shift in deep learning if it catches on, because for the past few years the field has been largely centered around GPUs as the computing architecture of choice."

"If there's a major caveat to Microsoft's efforts, it might have to do with performance. While Microsoft's research shows FPGAs consuming about one-tenth the power of high-end GPUs (25W compared with 235W), GPUs still process images at a much higher rate."
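The trade-off in that caveat is really a performance-per-watt question. A quick sketch of the comparison; the power figures (25 W and 235 W) come from the article, but the throughput numbers below are made-up placeholders, since the article doesn't give exact rates:

```python
# The article gives the power figures (25 W FPGA vs 235 W GPU) but not
# exact throughput; the images/sec values below are invented
# placeholders to show how the efficiency comparison works.

def images_per_joule(images_per_sec, watts):
    """Throughput per unit energy: higher is better."""
    return images_per_sec / watts

fpga = images_per_joule(images_per_sec=134, watts=25)    # hypothetical rate
gpu = images_per_joule(images_per_sec=824, watts=235)    # hypothetical rate

# Even at roughly one-sixth the raw speed, the FPGA can win on efficiency.
print(f"FPGA: {fpga:.2f} images/J, GPU: {gpu:.2f} images/J")
```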
Microsoft on Monday released a white paper explaining a current effort to run convolutional neural networks — the deep learning technique responsible for record-setting computer vision algorithms — on FPGAs rather than GPUs.
 
I met one of the NCSU students involved in using FPGAs for indexing. Normally you hear about GPU offloading but FPGAs make sense too. http://www.smartercomputingblog.com/power-systems/nc-state-university-students-data-power8/ 

Wayne Radinsky

Shared publicly
 
 
Google's DeepMind just published both its recent results and the code it uses to get those results.

DeepMind uses the new "deep learning" neural net approach to machine learning. The current paper reports that the system is able to learn to play a range of traditional Atari games from scratch, all using the same untuned neural net. The system gets nothing more than the pixels on the screen as input. Its outputs control the buttons/keys it presses to play the game. It is also given a continually updated score that indicates how well it is doing.

That's all pretty impressive -- although the result, that it can play Atari games, was announced a while ago. This video (http://youtu.be/xN1d3qHMIEQ) includes interviews with some of the DeepMind developers.

One of the questions was particularly revealing. What does the system have a hard time learning? The answer was that it has a hard time developing strategies with no immediate feedback.

With most of the Atari games, the system is told whenever it does something positive. It can build on those successes. But if winning a game involves strategic planning, e.g., traversing a maze, the system has no way to tell when it is making progress. That's one reason, for example, the system does not do well on Ms. PacMan.

Quite a few years ago I worked on genetic programming. Genetic programming made significant gains initially, but it never achieved its goal of learning how to write code with any level of skill. My analysis was that genetic programming, like genetic algorithms, succeeds when there are paths to the solution. If there are no paths to the solution, it's very difficult for an evolutionary system to find its way. That seems to be the same problem DeepMind is having. If there are no markers along the trail saying "you are doing well," the system has a hard time finding its way.

That doesn't mean that every little step must be marked as right or wrong. With enough processing speed and memory, a system can explore many potential paths. All it needs is a way to evaluate how good each partial path is. It doesn't need to know whether each step is a good step.

In many cases such systems can be even more sophisticated in that they can find multiple useful partial paths that on their own don't lead to a solution but that together do. That's one of the strengths of genetic algorithms. Different elements of the population may have discovered different useful partial strategies, which when combined yield significant improvement.

Nonetheless, it's significant that DeepMind seems to be encountering the same sort of problem that blocked earlier learning algorithms.

This is not to say that genetic programming has failed. There are significant genetic programming successes. (See, for example, the "Humies" awards: http://goo.gl/O0Ilxh.) It's just that none of them involve writing software of any level of sophistication.
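The "markers along the trail" point can be shown with a toy genetic algorithm. Below, a bitstring is evolved toward a target using a fitness function that rewards partial matches, which is exactly the kind of path-to-the-solution signal described above. All names and numbers are illustrative, not anyone's actual code:

```python
import random

# A fitness function that counts matching bits gives the search markers
# along the trail. An all-or-nothing fitness (1 only on an exact match)
# would give it nothing to climb, which is the failure mode discussed
# in the post. Purely illustrative code.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def partial_fitness(genome):
    """Counts matching bits: rewards partial progress."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(fitness, generations=200, pop_size=50):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the top half (elitism)
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

random.seed(42)
best = evolve(partial_fitness)
print(partial_fitness(best), len(TARGET))   # should reach a full match
```

This also illustrates the partial-path point: crossover lets two individuals that each match different halves of the target combine their progress.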

Wayne Radinsky

Shared publicly
 
Boeing has invented a four-part reusable satellite launch vehicle. Instead of just having one shuttle that goes up and down, or one rocket that goes up and down and can't be reused, they have a small rocket contained in a hypersonic aircraft docked to a supersonic aircraft mounted on a large carrier aircraft.

With this new system, they say they can cut satellite launch costs to one-third of current costs -- and we can fill low Earth orbit with more than three times as much space junk.
Boeing's new patent for launching satellites sounds unnecessarily complicated at first, but it could slash the cost of getting to low-Earth orbit.
 
Heh.
People
Have him in circles
15,487 people
anna grace lazo's profile photo
Mark Winegar's profile photo
Jamie Lance's profile photo
michael webb's profile photo
Aldo Morales's profile photo
Naveed Ali's profile photo
Numerical Friends Group of Mark Zuckerberg, Facebook Founding- Chairman and CEO (NFG-MZ/FB-FCC)'s profile photo
Nika Karapet's profile photo
Yassar Sarhan's profile photo
Work
Occupation
Software Design and Development
Employment
  • Software Design and Development, present
Places
Map of the places this user has lived
Currently
Denver
Previously
Denver - Silicon Valley, California
Links
Contributor to
Story
Tagline
Software Design Engineer
Introduction
I'm a software engineer specializing in software design. Every successful large software project started out as a small project that grew. The key to a successful large project is knowing how to design software when it is small so that it is capable of growing. Poor design in the early stages leads to high-entropy software that is difficult to maintain and extend years down the line. Good design in the initial stages allows new features to be added easily. Good design doesn't take any more time than poor design, but you have to know how to do it.

Certain principles are essential to good design. The first is the program's data structures, which form the foundation of any software project. The key to good data structure design is to make sure that the relationships between pieces of data in your data structures mirror the relationships between the objects or ideas those data structures represent in the minds of your users. Any time these drift out of sync, you are in for trouble -- but the trouble does not usually arrive immediately; it can arrive months or years later. This delayed feedback cycle is one reason many software projects run late or fail. When the data structures are out of sync with the users' mental model, there is a temptation to "patch" the problem by adding more data structures that form a bridge between the existing structures and what you want to do. These patches are, unfortunately, dirty hacks that add complexity down the road. It is this complexity -- and more to the point, *unnecessary* complexity -- that makes it more difficult to maintain or extend your software with new features in the future.
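A hypothetical example of the sync principle: suppose users think of a contact as having several phone numbers, but the original data structure allows only one, and the mismatch was "patched" with a side table. Both versions below are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical illustration. Users think of a contact as having
# *several* phone numbers, so the data structure should say the same
# thing directly, not via a bolted-on bridge structure.

# Out of sync: one number per contact, "patched" later with an extra
# dict mapping contact name -> overflow numbers.
legacy_contacts = {"Ada": "555-0100"}
extra_numbers = {"Ada": ["555-0199"]}      # the dirty hack

# In sync: the structure mirrors the users' mental model.
@dataclass
class Contact:
    name: str
    phone_numbers: list[str] = field(default_factory=list)

ada = Contact("Ada", ["555-0100", "555-0199"])
print(ada.phone_numbers)    # one place to look, no bridge structure
```

Every feature that touches phone numbers now reads one structure instead of reconciling two, which is exactly the maintenance cost the patch would have imposed.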

It is also extremely important to design the code structure correctly. It is very common to make basic errors like using global variables. Globals are very powerful, but should be used with care -- they connect separate components of the software with each other. (And be aware that many variables are global even when they are not called "global" in your particular programming language -- they can have other names). When you *want* something to apply "everywhere", globals are the right choice, because you change them in one place and the change is applied everywhere. But more often than not, globals are used when they shouldn't be, causing a change in one part of a program to cause another part of the program, that seems unrelated, to break.
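A minimal sketch of that coupling problem, with invented names:

```python
# Hypothetical sketch: a module-level global ties two otherwise
# unrelated functions together.

tax_rate = 0.07            # global: changing it affects *every* caller

def price_with_tax(price):
    return round(price * (1 + tax_rate), 2)

# Hidden coupling: this report silently changes if any other part of
# the program reassigns tax_rate for its own purposes.
def report(prices):
    return [price_with_tax(p) for p in prices]

# Decoupled alternative: make the dependency explicit as a parameter,
# so a change made for one caller cannot break another.
def price_with_tax_explicit(price, rate):
    return round(price * (1 + rate), 2)

print(report([10.0, 20.0]))                      # [10.7, 21.4]
print(price_with_tax_explicit(10.0, 0.07))       # 10.7
```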

Another minefield is object oriented programming. Objects are an extremely powerful and flexible programming metaphor -- and that's the problem. They are so flexible that they can mean almost anything, and they can make it easy for you to shoot yourself in the foot with excessive complexity. In reality, there is nothing wrong with non-object-oriented programming -- proper and thoughtful use of functions and libraries of functions -- so it is not necessary to use objects everywhere or make "everything" an object in your program. In particular, there is no advantage in doing "object-relational mapping" -- if you're doing this, it means you have designed all your data structures *twice* (once in the relational data model, and again in an object-oriented model), wasting effort. Furthermore, objects should only be used when they add *clarity* to a program, when they make it easier to understand how the program works, rather than more difficult. In certain situations, such as when polymorphism is needed to solve whatever problem your software needs to solve for the user, objects are a clear benefit, simplifying the design and adding clarity to the code. In many other situations, however, excess use of objects creates obfuscation, leading to maintainability problems and difficulty adding features to your software in the future.
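A small invented example of the distinction: polymorphism pays off when callers genuinely need to treat different kinds of things uniformly:

```python
# Hypothetical sketch: objects add clarity here because the caller
# needs to handle several shapes without caring which is which.

class Shape:
    def area(self):
        raise NotImplementedError

class Rect(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

# No type checks needed -- that is the clarity polymorphism buys.
def total_area(shapes):
    return sum(s.area() for s in shapes)

print(total_area([Rect(2, 3), Circle(1)]))    # 9.14159
```

When there is only one kind of thing and no dispatch is needed, a plain function is the simpler, clearer tool.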

And it is these complexity issues that impose limitations on how big your software can get, how many features it can have, and ultimately how well your business can grow and how well you can serve your customers.
Education
  • University of Colorado at Boulder
    Computer Science
Basic Information
Gender
Male