Wayne Radinsky
Software Design Engineer
18,638 followers

Posts

The Julia language's approach to machine learning is to modify the compiler itself, instead of using large frameworks like TensorFlow. "Where typical frameworks are all-encompassing monoliths in hundreds of thousands of lines of C++, Flux is only a thousand lines of straightforward Julia code. Simply take one package for gradients (Zygote.jl), one package for GPU support (CuArrays.jl), sprinkle with some light convenience functions, bake for fifteen minutes and out pops a fully-featured ML stack."

"Like the other next-gen ML systems, Flux is committed to providing an intuitive ('eager' or 'define-by-run') interface, and takes a hard line against any kind of graph building or performance annotations. We support all of the language's features, from control flow and data structures to macros. Users can code interactively in Jupyter notebooks and combine high-performance numerics with convenient plotting and visualisation. But we also want to get the benefits traditionally held by 'static graph' frameworks -- zero-overhead source-to-source AD, operator fusion, multi-GPU/distributed training, and single-binary deployment."

"How can we do all this? Effectively, we need to extract and analyse 'static graphs' directly from written Julia syntax, which is in fact the entirely normal job of a compiler. Most ML systems problems turn out to be standard and well-studied compiler problems, viewed through the right lens. Using a compiled language is enough to solve many issues, and extending that compiler is the best way to solve many more. We cover just a sample of our current work in this field -- namely taking gradients, compiling for GPUs and TPUs, and automatic batching."

"TensorFlow includes an implementation of the Keras API (in the tf.keras module) with TensorFlow-specific enhancements. These include support for eager execution for intuitive debugging and fast iteration, support for the TensorFlow SavedModel model exchange format, and integrated support for distributed training, including training on TPUs."

"Eager execution is especially useful when using the tf.keras model subclassing API. This API was inspired by Chainer, and enables you to write the forward pass of your model imperatively. tf.keras is tightly integrated into the TensorFlow ecosystem, and also includes support for: tf.data, enabling you to build high performance input pipelines," "distribution strategies, for distributing training across a wide variety of compute configurations, including GPUs and TPUs spread across many machines," "exporting models," "feature columns, for effectively representing and classifying structured data," "and more in the works."

"DeepMind Achieves Holy Grail." DeepMind published a paper revealing the inner workings of AlphaZero, the general-purpose game-playing system that taught itself to be the best player ever in Go, chess, and Shogi.

"The system, called AlphaZero, began its life last year by beating a DeepMind system that had been specialized just for Go. That earlier system had itself made history by beating one of the world's best Go players, but it needed human help to get through a months-long course of improvement. AlphaZero trained itself -- in just 3 days."

"'This work has, in effect, closed a multi-decade chapter in AI research,' writes Murray Campbell, an AI researcher at the IBM Thomas J. Watson Research Center in Yorktown Heights, NY, who was a member of the team that designed IBM's Deep Blue, which in 1997 defeated Garry Kasparov, then the world chess champion. 'AI researchers need to look to a new generation of games to provide the next set of challenges.'"

"AlphaZero can crack any game that provides all the information that's relevant to decision-making; the new generation of games to which Campbell alludes do not. Poker furnishes a good example of such games of 'imperfect' information: Players can hold their cards close to their chests. Other examples include many multiplayer games, such as StarCraft II, Dota, and Minecraft. But they may not pose a worthy challenge for long."

"Ranking, the process of ordering a list of items in a way that maximizes the utility of the entire list, is applicable in a wide range of domains, from search engines and recommender systems to machine translation, dialogue systems and even computational biology. In applications like these (and many others), researchers often utilize a set of supervised machine learning techniques called learning-to-rank. In many cases, these learning-to-rank techniques are applied to datasets that are prohibitively large -- scenarios where the scalability of TensorFlow could be an advantage. However, there is currently no out-of-the-box support for applying learning-to-rank techniques in TensorFlow. To the best of our knowledge, there are also no other open source libraries that specialize in applying learning-to-rank techniques at scale."

"Today, we are excited to share TF-Ranking, a scalable TensorFlow-based library for learning-to-rank."

"TF-Ranking is fast and easy to use, and creates high-quality ranking models." "We provide flexible API's, within which the users can define and plug in their own customized loss functions, scoring functions and metrics."

"The objective of learning-to-rank algorithms is minimizing a loss function defined over a list of items to optimize the utility of the list ordering for any given application. TF-Ranking supports a wide range of standard pointwise, pairwise and listwise loss functions."

"Reduction of nurse burnout is the primary mission of Moxi, a nurse assistant robot with social intelligence that started trials at hospitals in Texas in September."

"In its first month of operations shadowing nurses and understanding their daily workflow, Moxi has learned to take away soiled linen and bring fresh sheets, plus deliver the handful of supplies every patient needs." "The robot also makes sure there's water next to the bed."

"Josh Tippy, a nurse manager in the neurology unit at Texas Health Dallas, was surprised that many things Moxi did happened without them realizing it, because the robot operated primarily at night and in off-hours when there are fewer people in the building -- but also fewer nurses."

AWS RoboMaker is a cloud-based service for developing, simulating, and deploying applications built with the Robot Operating System (ROS).

"RoboMaker essentially serves as a platform to help speed up the time-consuming robotics development process. Among the tools offered by the service are Amazon's machine learning technologies and analytics that help create a simulation for real-world robotics development."

"The system can also be used to help manage fleet deployment for warehouse-style robotics designed to work in tandem."

"AWS RoboMaker automatically provisions the underlying infrastructure and it downloads, compiles, and configures the operating system, development software, and ROS. AWS RoboMaker's robotics simulation makes it easy to set up large-scale and parallel simulations with pre-built worlds, such as indoor rooms, retail stores, and racing tracks, so developers can test their applications on-demand and run multiple simulations in parallel."

New AlphaZero vs Stockfish chess game, with commentary by agadmator (the channel name is uncapitalized).

"24 Amazon workers hospitalized after robot punctures bear spray in warehouse." I know this is being presented as a "robot" story, but my first thought was, WTF is in bear spray? It must be crazy powerful stuff.

"An investigation revealed that 'an automated machine accidentally punctured a nine-ounce bear repellent can, releasing concentrated capsaicin.'"

Oh.