Artificial Intelligence Index 2018 annual report. AI papers are outpacing computer science papers in annual publication growth. Europe is the largest publisher of AI papers (China is #2, the US is #3). The number of papers on neural networks had a compound annual growth rate of 37% from 2014 to 2017.
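
For reference, compound annual growth rate (CAGR) is the constant yearly rate that takes a starting count to an ending count over a period. A quick sketch of the arithmetic, with made-up paper counts purely for illustration:

```python
# Compound annual growth rate (CAGR): the constant yearly rate that
# takes `start` to `end` over `years` years.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Hypothetical counts for illustration only (not figures from the report):
# 10,000 papers in 2014 growing to ~25,700 in 2017 is a 37% CAGR.
print(f"{cagr(10_000, 25_714, 3):.0%}")  # -> 37%
```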

Those publication figures are from Scopus, Elsevier's abstract and citation database. On arXiv, the fastest-growing category is computer vision and pattern recognition, followed by machine learning and computation & language. By field of study, China's AI papers focused more on engineering and technology and on agricultural sciences, the US's focused more on medical and health sciences and the humanities, and the three regions were roughly even in the social sciences and natural sciences.

Government-affiliated AI papers in China have increased 400%. The proportion of corporate AI papers in the US is 6.6x that in China. US AI authors are cited 83% more than the global average. Internationally mobile authors are cited more and publish more frequently. 70% of papers at the Association for the Advancement of Artificial Intelligence (AAAI) conference are from the US or China. Undergraduate enrollment in introductory AI and ML courses is increasing. AI course offerings are growing in China, Mexico, Canada, Austria, Israel, and Switzerland. 80% of AI professors are male.

Top AI conferences are the Neural Information Processing Systems (NeurIPS) conference, the Conference on Computer Vision and Pattern Recognition (CVPR), the International Conference on Machine Learning (ICML), and the International Conference on Robotics and Automation (ICRA). ICML has had the highest growth.

Among smaller conferences, the International Conference on Learning Representations (ICLR) is growing fastest: attendance at ICLR 2018 was 20x that of 2012.

Women in Machine Learning (WiML) has grown 600% since 2014 and AI4ALL has 900% more alumni than it had in 2015.

Since 2014, unique downloads of Robot Operating System (ROS) packages from ROS.org have increased 567%. ROS.org pageviews from China were 18x greater in 2017 than in 2012.

Active AI startups in the US increased 2.1x from 2015 to 2018. VC funding for US AI startups increased 4.5x from 2013 to 2017.

Among AI skills required in job openings, natural language processing (NLP) appears most often and deep learning is growing fastest. Men make up 71% of the applicant pool for AI jobs in the US.

The US has the most AI patents, but South Korea has the fastest growth in AI patents.

In business adoption of AI by region: Europe leads in robotic process automation; North America leads in machine learning and in natural language speech generation; China leads in conversational interfaces, computer vision, natural language text understanding, natural language generation, and autonomous vehicles; Europe and China tie in physical robotics. China also leads in robot installations.

Mentions of "artificial intelligence" in company earnings calls are increasing rapidly, as are government mentions of "machine learning" and "artificial intelligence".

TensorFlow has more GitHub stars than any other AI library.

The share of positive articles about AI grew 2.5x from 2016 to 2018.

ImageNet training time became 16x faster between June 2017 and November 2018.

Since 2015, the highest average precision achieved in the Common Objects in Context (COCO) challenge has increased 72%.

From 2003 to 2018, constituency parsing performance increased by 10%.

The English to German BLEU score is 3.5x higher today than in 2008.
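
As a reminder of what BLEU measures, here is a toy computation using NLTK's sentence_bleu; the sentences below are invented and unrelated to the English-to-German test data behind the report's numbers:

```python
# BLEU compares n-gram overlap between a candidate translation and one
# or more reference translations; smoothing avoids zero scores on short
# texts. Toy sentences for illustration only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.2f}")
```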

In 2018, scores on the AI2 Reasoning Challenge (ARC) benchmark rose from 63% to 69% on the Easy Set and from 27% to 42% on the Challenge Set.

On the General Language Understanding Evaluation (GLUE) benchmark, roughly half the gap between the first published baseline and the estimated non-expert human level has been closed in 2018.

VC investment has grown faster than academic enrollment and publishing.

The US has a national AI R&D strategy, the Summit on AI, the Select Committee on Artificial Intelligence, a $2B+ DARPA AI plan, and the AI Next program. Europe has the Declaration of Cooperation on AI, the Horizon 2020 program, the Digital Europe program, and (in the UK) the AI Sector Deal. The Chinese government has a series of national AI initiatives, including the Internet+ initiative, the robot industry development plan, and the New Generation AI Development Plan, with the goals of creating a $14.7B AI market in China by 2018 and ensuring that China leads the world in AI by 2030.

In 2018, AI matched human-level performance in Chinese-English translation (Microsoft), prostate cancer grading (Google), and Quake III Arena Capture The Flag (DeepMind). AI nearly matched human-level performance in Dota 2 (OpenAI), beating amateurs but not the top professionals.

What's missing? Common sense and understanding, cooperation with humans, AI reasoning and learning, robots with AI, quantitative metrics for military use.

"Following the success of neural networks for perception, we naturally asked ourselves the question: given that we had millions of miles of driving data (i.e., expert driving demonstrations), can we train a skilled driver using a purely supervised deep learning approach?"

"This post  --  based on research we've just published*  --  describes one exploration to push the boundaries of how we can employ expert data to create a neural network that is not only able to drive the car in challenging situations in simulation, but also reliable enough to drive a real vehicle at our private testing facility. As described below, simple imitation of a large number of expert demonstrations is not enough to create a capable and reliable self-driving technology. Instead, we've found it valuable to bootstrap from good perception and control to simplify the learning task, to inform the model with additional losses, and to simulate the bad rather than just imitate the good."

"In order to drive by imitating an expert, we created a deep recurrent neural network (RNN) named ChauffeurNet that is trained to emit a driving trajectory by observing a mid-level representation of the scene as an input."

"More than 1,200 artificial intelligence (AI) companies have been established in Israel since 2010; 79 percent of them are still active and 6% have been acquired, reports IVC Research, which adds that 'this sector's vital signs are positive.'"

"Exits (where a company is either acquired or goes public) were higher in the first half of 2018 than for all of 2017, IVC adds."

"The mix of AI companies in Israel has also changed -- particularly in the last four years."

"AI companies in Israel have traditionally focused on computer vision and this is where most of the development activity has been."

"Jerusalem-based Mobileye, for example, builds systems that 'watch' how your car is driving and sound an alert if you're getting too close to another vehicle or veer out of your lane. Computer vision technology is now the basis behind Mobileye's AI-centric approach to self-driving cars."

"Beginning in 2014, though, there has been an increase in the share of companies implementing 'data science' (a catchall name that encompasses data mining, statistical inference and prediction models) into their product lines. That's been accompanied by a decrease in companies whose technology is more about computer vision, recommendation systems and text analysis."

Art that fooled Tumblr's AI censor.

AI music mastering. "LANDR, which was launched in 2014, recently announced that more than 2 million musicians have used its music creation platform to master 10 million songs."

"Mastering is still creative, and humans can hear things that programs can't. But some aspects of mastering -- like equalizing the loudness levels of different songs on a CD or trying to match the spectral content in bass and high frequencies -- are a lot simpler to automate than composing a piece of music or doing music production."

"Ryan Petersen, a Nashville-based producer and songwriter, played around with LANDR a few years ago and ultimately abandoned the service to return to human colleagues. He said that while the algorithm is technologically impressive, it fell short because it lacked a taste algorithm in the part of the software dedicated to creative learning."

Ben Goertzel showed up on Joe Rogan's podcast/video. The topic of the conversation is "the Singularity". It contains lots of phrases like, "Make a fork of yourself."

He is more optimistic than I am that the Singularity will happen soon, largely because of his optimism that his projects, OpenCog and SingularityNET (a blockchain-based economy for AIs), will pan out and bring artificial general intelligence (AGI) into the world.

"Google Translate learns from hundreds of millions of already-translated examples from the web. Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form. So when the model produced one translation, it inadvertently replicated gender biases that already existed."

"Now you'll get both a feminine and masculine translation for a single word -- like 'surgeon' -- when translating from English into French, Italian, Portuguese or Spanish. You'll also get both translations when translating phrases and sentences from Turkish to English. For example, if you type 'o bir doktor' in Turkish, you'll now get 'she is a doctor' and 'he is a doctor' as the gender-specific translations."

The Julia language's approach to machine learning is to modify the compiler itself, instead of using large frameworks like TensorFlow. "Where typical frameworks are all-encompassing monoliths in hundreds of thousands of lines of C++, Flux is only a thousand lines of straightforward Julia code. Simply take one package for gradients (Zygote.jl), one package for GPU support (CuArrays.jl), sprinkle with some light convenience functions, bake for fifteen minutes and out pops a fully-featured ML stack."

"Like the other next-gen ML systems, Flux is committed to providing an intuitive ('eager' or 'define-by-run') interface, and takes a hard line against any kind of graph building or performance annotations. We support all of the language's features, from control flow and data structures to macros. Users can code interactively in Jupyter notebooks and combine high-performance numerics with convenient plotting and visualisation. But we also want to get the benefits traditionally held by 'static graph' frameworks -- zero-overhead source-to-source AD, operator fusion, multi-GPU/distributed training, and single-binary deployment."

"How can we do all this? Effectively, we need to extract and analyse 'static graphs' directly from written Julia syntax, which is in fact the entirely normal job of a compiler. Most ML systems problems turn out to be standard and well-studied compiler problems, viewed through the right lens. Using a compiled language is enough to solve many issues, and extending that compiler is the best way to solve many more. We cover just a sample of our current work in this field -- namely taking gradients, compiling for GPUs and TPUs, and automatic batching."

"TensorFlow includes an implementation of the Keras API (in the tf.keras module) with TensorFlow-specific enhancements. These include support for eager execution for intuitive debugging and fast iteration, support for the TensorFlow SavedModel model exchange format, and integrated support for distributed training, including training on TPUs."

"Eager execution is especially useful when using the tf.keras model subclassing API. This API was inspired by Chainer, and enables you to write the forward pass of your model imperatively. tf.keras is tightly integrated into the TensorFlow ecosystem, and also includes support for: tf.data, enabling you to build high performance input pipelines," "distribution strategies, for distributing training across a wide variety of compute configurations, including GPUs and TPUs spread across many machines," "exporting models," "feature columns, for effectively representing and classifying structured data," "and more in the works."