Collin Grimm
ML | Coffee | Visual Art
35 followers

Posts

Post has attachment
Amazing! Starry Night in 3D and VR. Truly creative!

Post has shared content
Learning with music can change brain structure, study shows

Using musical cues to learn a physical task significantly develops an important part of the brain, according to a new study. People who practiced a basic movement task to music showed increased structural connectivity between the regions of the brain that process sound and control movement. The findings focus on white matter pathways, the wiring that enables brain cells to communicate with each other. The study could have positive implications for future research into rehabilitation for patients who have lost some degree of movement control. Thirty right-handed volunteers were divided into two groups and charged with learning a new task involving sequences of finger movements with the non-dominant, left hand. One group learned the task with musical cues, the other group without music. After four weeks of practice, both groups of volunteers performed equally well at learning the sequences, researchers at the University of Edinburgh found. MRI scans showed that the music group had a significant increase in structural connectivity in the white matter tract that links auditory and motor regions on the right side of the brain. The non-music group showed no change.


Post has shared content
Google.ai aims to make state of the art AI advances accessible to everyone

On the stage of Google I/O, CEO Sundar Pichai announced Google.ai, a new initiative to democratize the benefits of the latest machine learning research. Google.ai will serve as a center of Google's AI efforts, including research, tools and applied AI. The new site will host research from Google and its Brain Team. It also lets anyone quickly access fun experiments that highlight the company's progress in the field, including AutoDraw, which makes it possible for unskilled artists to put their ideas on paper; Duet, which can play along with piano players; and Quick, Draw!, a game in which an AI tries to guess your drawings. A selection of videos and posts about Google's AI-first efforts is also collected there. Google's TensorFlow has played a pivotal role in making machine learning accessible to a greater number of developers, but new research comes out of universities and private research labs every day, and Google wants to help make that accessible too. Pichai underscored that building machine learning models today is very time-consuming and often expensive because of the scarcity of engineers with the relevant skill sets. As Google Cloud and TensorFlow become more ubiquitous, engineers will be able to do more with less. Pichai alluded to AutoML and a future where neural nets can create new neural nets on their own, a natural next step as researchers gain greater control of Generative Adversarial Networks (GANs) and reinforcement learning is applied in new and more challenging contexts.



Post has shared content
Elon Musk-backed OpenAI is teaching robots how to learn just like humans do
Teaching AI using just a single example

OpenAI, the San Francisco-based nonprofit research lab backed by Elon Musk, today announced a research milestone in its robotics work. The achievement is a new algorithm that allows a human being to communicate a task to an AI by performing it first in virtual reality. The method is based on what's known as one-shot imitation learning, a technique OpenAI developed to allow the software guiding a robot to mimic a physical action from just a single example. In this case, OpenAI is trying to teach a robotic arm to stack a series of colored cube-shaped blocks. A human wearing a VR headset first performs the task manually within a virtual environment. OpenAI then has its vision network, a neural network trained on hundreds of thousands of simulated images, observe the action. This part of the process builds on previous OpenAI research into training AI using simulated data with ever-changing variables.
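The pipeline the post describes (one demonstration in, a reproducible plan out) can be caricatured in a few lines. This is a hypothetical toy, not OpenAI's network: the demonstration is a hand-coded list of stacks rather than VR footage, and the "policy" is plain Python that replays the demonstrated stacking order on blocks placed anywhere in a new scene.

```python
# Toy illustration of demo-conditioned imitation (not OpenAI's method).
# A single demonstration fixes the stacking order by colour; the policy
# then reproduces that order on a fresh arrangement of the same blocks.

def demo_to_plan(demo_stacks):
    # demo_stacks: list of stacks, each a bottom-to-top list of colours
    return [list(stack) for stack in demo_stacks]

def imitate(plan, scene):
    # scene: colour -> (x, y) position of each scattered block in the new scene
    actions = []
    for stack in plan:
        base = stack[0]
        for colour in stack[1:]:
            actions.append(("pick", colour, scene[colour]))
            actions.append(("place", colour, "on", base))
            base = colour
    return actions

demo = [["red", "blue", "green"]]  # one demonstrated stack, bottom to top
actions = imitate(demo_to_plan(demo),
                  {"red": (0, 0), "blue": (1, 2), "green": (3, 1)})
```

The point of the toy is only the structure: the same `imitate` function works for any starting positions, because everything task-specific comes from the single demonstration.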

Post has shared content
AI learns to play video game from instructions in plain English

An AI has learned to tackle one of the toughest Atari video games by taking instructions in plain English. The system, developed by a team at Stanford University in California, learned to play the game Montezuma’s Revenge, in which players scour an Aztec temple for treasure. The game is challenging for AI to learn because it offers sparse rewards, requiring players to make several moves before earning any points. Most video-game-playing AIs use reinforcement learning to develop a strategy, relying on feedback like game points to tell them when they are playing well. To help their AI pick up game tactics more quickly, the Stanford team gave their reinforcement learning system a helping hand in the form of natural language instructions, for example advising it to “climb up the ladder” or “get the key”. “Imagine teaching a kid to play tennis by handing them a racket and leaving them in front of a ball machine for 10 years. That’s basically how we teach AI right now,” says team member Russell Kaplan. “It turns out kids learn a lot faster with a coach.” Teaching an AI in this way could have far-reaching applications, because using natural language means anyone could advise the AI, not just computer programmers.
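As a rough illustration of the idea, not the Stanford system: the sketch below runs tabular Q-learning on a toy corridor with a sparse reward at the far end, and adds a small shaped bonus whenever the agent completes the sub-goal named by the current instruction. The instruction-to-cell mapping is hand-coded here; learning to ground the language is the hard part the actual research addresses.

```python
import random

random.seed(0)  # reproducible toy run

# Toy sketch only: tabular Q-learning on a 1-D corridor, cells 0..9.
# The agent starts at 0; the sparse "game" reward sits at cell 9.
# Hypothetical advice maps each instruction to a sub-goal cell, and the
# agent earns a shaped bonus when it completes the current instruction.
N = 10
INSTRUCTIONS = [
    ("get the key", 4),            # (advice, hand-coded sub-goal cell)
    ("climb up the ladder", 7),
    ("reach the treasure", 9),
]

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.5):
    Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
    for _ in range(episodes):
        s, goal = 0, 0
        for _ in range(50):
            if random.random() < eps:              # explore
                a = random.choice((-1, 1))
            else:                                  # exploit
                a = max((-1, 1), key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 10.0 if s2 == N - 1 else 0.0       # sparse game reward
            if goal < len(INSTRUCTIONS) and s2 == INSTRUCTIONS[goal][1]:
                r += 1.0                           # bonus for following advice
                goal += 1
            best_next = max(Q[(s2, b)] for b in (-1, 1))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if s == N - 1:
                break
    return Q

Q = train()
# After training, the greedy action from each cell should point right.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```

The bonuses act as intermediate rewards that bridge the long gap to the sparse payoff, which is exactly the role the article assigns to the coach's advice.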

Post has attachment
Hello all members of the group. I'm new here. It took me some time to figure out that this is a continuation of the PapyOS project (as per the post below).

I visited the website (http://papyros.io) and the GitHub repository, but there was no notice to inform newcomers about the switch.

I'd like to ask the admins of the project to add a notice to the website and the project page on GitHub. Thanks :)

Post has shared content
A Peek at Trends in Machine Learning

Have you looked at Google Trends? It's pretty cool: you enter some keywords and see how Google searches for that term vary through time. I thought, hey, I happen to have this arxiv-sanity database of 28,303 (arxiv) Machine Learning papers over the last 5 years, so why not do something similar and take a look at how Machine Learning research has evolved over the last 5 years? The results are fairly fun, so I thought I'd post them.
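A miniature of that kind of analysis might look like the sketch below, assuming each paper record is just a (year, title-or-abstract text) pair; the five records are made up, standing in for the 28,303-paper database.

```python
from collections import Counter

# Hypothetical miniature of a keyword-trend analysis: for each year, what
# fraction of papers mention a keyword? The records below are invented.
papers = [
    (2013, "learning deep representations with autoencoders"),
    (2014, "sequence to sequence learning with lstm networks"),
    (2015, "training very deep convolutional networks"),
    (2016, "generative adversarial networks for image synthesis"),
    (2017, "attention mechanisms in generative adversarial networks"),
]

def keyword_trend(papers, keyword):
    per_year, hits = Counter(), Counter()
    for year, text in papers:
        per_year[year] += 1
        if keyword in text.lower():
            hits[year] += 1
    # fraction of that year's papers mentioning the keyword
    return {y: hits[y] / per_year[y] for y in sorted(per_year)}

trend = keyword_trend(papers, "adversarial")
```

With real data the same per-year fractions could be plotted directly, Google-Trends-style, one curve per keyword.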