Pixyz is a library for developing deep generative models in PyTorch. "Recently, many papers on deep generative models have been published. However, they can be difficult to reproduce in code, since there is a gap between the mathematical formulas presented in these papers and their actual implementation. Our objective is to create a new library that fills this gap and makes these models easy to implement. With Pixyz, you can implement even complicated models just as if you were writing these formulas."
masa-su/pixyz (github.com)
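The gap the authors describe is easy to see with the VAE objective: the one-line ELBO formula, E_q(z|x)[log p(x|z)] - KL(q(z|x) || p(z)), turns into several lines of tensor bookkeeping when written by hand. The sketch below shows that hand-written version in plain PyTorch purely to illustrate the problem Pixyz targets; it does not use Pixyz's own API, and the encoder/decoder callables are placeholders.

    import torch
    import torch.nn.functional as F
    from torch.distributions import Normal, kl_divergence

    # Hand-written ELBO for a toy VAE with a Gaussian encoder and Bernoulli decoder:
    # the kind of bookkeeping a formula-level library aims to hide.
    def elbo(x, encoder, decoder):
        loc, scale = encoder(x)              # parameters of q(z|x); encoder is a placeholder
        q = Normal(loc, scale)
        z = q.rsample()                      # reparameterized sample from q(z|x)
        logits = decoder(z)                  # parameters of p(x|z); decoder is a placeholder
        log_px_z = -F.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        kl = kl_divergence(q, Normal(torch.zeros_like(loc),
                                     torch.ones_like(scale))).sum(-1)
        return (log_px_z - kl).mean()        # maximize this, i.e. minimize -elbo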

Google has announced that it's absorbing DeepMind Health, a part of its London-based AI lab DeepMind.

"DeepMind's founders said it was a 'major milestone' for the company that would help turn its Streams app -- which it developed to help the UK's National Health Service (NHS) -- into 'an AI-powered assistant for nurses and doctors' that combines 'the best algorithms with intuitive design.' Currently, the Streams app is being piloted in the UK as a way to help health care practitioners manage patients."

"DeepMind says its Streams team will remain in London and that it's committed to carrying out ongoing work with the NHS. These include a number of ambitious research projects, such as using AI to spot eye disease in routine scans."

torchdiffeq is a library of ordinary differential equation (ODE) solvers implemented in PyTorch. "As the solvers are implemented in PyTorch, algorithms in this repository are fully supported to run on the GPU."
torchdiffeq (github.com)
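A minimal usage sketch based on the repository's documented odeint interface (solver options and defaults may vary by version); the linear dynamics here are just a toy example:

    import torch
    from torchdiffeq import odeint

    # Toy linear ODE dy/dt = A y; the dynamics function can be any callable
    # (or nn.Module) mapping (t, y) -> dy/dt.
    A = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])

    def dynamics(t, y):
        return y @ A.T

    y0 = torch.tensor([1.0, 0.0])                  # initial state
    t = torch.linspace(0.0, 6.28, steps=100)       # times at which to report the solution
    ys = odeint(dynamics, y0, t)                   # (100, 2) trajectory
    # Move A, y0 and t to a CUDA device to run the solver on the GPU.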

"The genius neuroscientist who might hold the key to true AI." "When Karl Friston was inducted into the Royal Society of Fellows in 2006, the academy described his impact on studies of the brain as 'revolutionary' and said that more than 90 percent of papers published in brain imaging used his methods. Two years ago, the Allen Institute for Artificial Intelligence, a research outfit led by AI pioneer Oren Etzioni, calculated that Friston is the world's most frequently cited neuroscientist. He has an h-­index -- a metric used to measure the impact of a researcher's publications -- nearly twice the size of Albert Einstein's. Last year Clarivate Analytics, which over more than two decades has successfully predicted 46 Nobel Prize winners in the sciences, ranked Friston among the three most likely winners in the physiology or medicine category."

"What's remarkable, however, is that few of the researchers who make the pilgrimage to see Friston these days have come to talk about brain imaging at all."

"For the past decade or so, Friston has devoted much of his time and effort to developing an idea he calls the free energy principle." "With this idea, Friston believes he has identified nothing less than the organizing principle of all life, and all intelligence as well. 'If you are alive,' he sets out to answer, 'what sorts of behaviors must you show?'"

"First the bad news: The free energy principle is maddeningly difficult to understand."

"Speaker diarization, the process of partitioning an audio stream with multiple people into homogeneous segments associated with each individual, is an important part of speech recognition systems. By solving the problem of 'who spoke when', speaker diarization has applications in many important scenarios, such as understanding medical conversations, video captioning and more. However, training these systems with supervised learning methods is challenging -- unlike standard supervised classification tasks, a robust diarization model requires the ability to associate new individuals with distinct speech segments that weren't involved in training."

"All components in the speaker diarization system, including the estimation of the number of speakers, are trained in supervised ways, so that they can benefit from increasing the amount of labeled data available. On the NIST SRE 2000 CALLHOME benchmark, our diarization error rate (DER) is as low as 7.6%, compared to 8.8% DER from our previous clustering-based method, and 9.9% from deep neural network embedding methods. Moreover, our method achieves this lower error rate based on online decoding, making it specifically suitable for real-time applications."

"Modern speaker diarization systems are usually based on clustering algorithms such as k-means or spectral clustering. Since these clustering methods are unsupervised, they could not make good use of the supervised speaker labels available in data. Moreover, online clustering algorithms usually have worse quality in real-time diarization applications with streaming audio inputs. The key difference between our model and common clustering algorithms is that in our method, all speakers' embeddings are modeled by a parameter-sharing recurrent neural network (RNN), and we distinguish different speakers using different RNN states, interleaved in the time domain."

"Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and capacity for language require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. Our technique enables agents to learn and extract concepts from tasks, then use these concepts to solve other tasks in various domains. For example, our model can use concepts learned in a two-dimensional particle environment to let it carry out the same task on a three-dimensional physics-based robotic environment -- without retraining in the new environment."

"To create the energy function, we mathematically represent concepts as energy models. The idea of energy models is rooted in physics, with the intuition that observed events and states represent low-energy configurations."

"We construct the energy function as a neural network based on the relational network architecture, which allows it to take an arbitrary number of entities as input. The parameters of this energy function are what is being optimized by our training procedure; other functions are derived implicitly from the energy function."

Facebook AI has produced a Udacity course on PyTorch.

Bloomberg made a video about Canada's leadership in AI. Canada, eh?

It profiles leading researchers like Geoffrey Hinton, Yoshua Bengio, and Richard Sutton; industrial roboticist Suzanne Gildert; robot improv artist Kory Matthewson; "AI philosopher" George Dvorsky; and startups like Lyrebird and Kindred AI.

"A motorcyclist was injured in a collision with a Waymo self-driving car last month -- but Waymo says the accident underscores the robot cars' safety, as it was caused by the backup driver."

"As the Waymo car -- all of its cars are white Chrysler Pacifica minivans -- drove at 21 mph in the middle of three lanes, a car in the left lane began to merge into that middle lane. The test driver 'took manual control of the AV (autonomous vehicle) out of an abundance of caution, disengaged from self-driving mode, and began changing lanes into Lane 3' (the right-hand lane)."

"A motorcycle was in the right-hand lane traveling at 28 mph and beginning to overtake the Waymo car. Waymo's car and the motorcycle collided at the car's right rear bumper. The injured motorcyclist was transported to a hospital."
Carolyn Said (sfchronicle.com)

Robot herds cattle by waving trash bags.