Monica Anderson
2,432 followers
Pioneering context-utilizing semantic technology named Artificial Intuition, a Connectome Algorithm

Posts

What's the difference between Big Data and Deep Learning?

"90% of Big Data is cleaning your data". Problems like lies, spam and duplication will distort your results.

In Deep Learning, by contrast, you must train on all the data.
Otherwise the system cannot learn the difference between good and bad.

This is a major hint that Deep Learning is on the path to true AGI.
It does more of the intelligence-demanding work: the Pattern gathering and the Model Making.

We are delegating (part of) our Understanding to the machine.
Which means we believe these systems are competent enough to form categories we might agree with.
Competent enough to perform Reduction on their own.

All we need to do at the end is tell the system "these were some of the good things and these were some of the bad things" in a supervised post-learning labeling phase. This is pretty much standard operating procedure in the DL world: most of the training is unsupervised, and the emergent concepts are labeled at the end, in supervised mode, using a much smaller labeled set.

It doesn't hurt that "not having to clean your training data" is a major timesaver in any project.
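The workflow described above (bulk unsupervised learning, then a small supervised labeling pass to name the emergent concepts) can be sketched in miniature. This is only an illustrative toy, using 2-means clustering in place of a real deep network; all data, names, and numbers here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncleaned training data: two noisy blobs, outliers and all.
good = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
bad = rng.normal(loc=5.0, scale=1.0, size=(200, 2))
unlabeled = np.vstack([good, bad])

# Unsupervised phase: a tiny 2-means clustering discovers the emergent
# concepts without any labels. Centers start at the leftmost and rightmost
# points so each cluster is non-empty from the first iteration.
centers = np.array([unlabeled[np.argmin(unlabeled[:, 0])],
                    unlabeled[np.argmax(unlabeled[:, 0])]])
for _ in range(20):
    dists = ((unlabeled[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assign = np.argmin(dists, axis=1)
    centers = np.array([unlabeled[assign == k].mean(axis=0) for k in range(2)])

# Supervised post-learning labeling phase: a much smaller labeled set
# merely names the clusters the system already found on its own.
labeled_points = np.array([[0.1, -0.2], [5.2, 4.9]])
labels = ["good", "bad"]
cluster_name = {int(np.argmin(((centers - p) ** 2).sum(axis=1))): lab
                for p, lab in zip(labeled_points, labels)}

def classify(x):
    """Assign a new point to the nearest emergent concept."""
    return cluster_name[int(np.argmin(((centers - x) ** 2).sum(axis=1)))]
```

Two labeled examples were enough to name categories that 400 unlabeled (and uncleaned) examples defined.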

Deep Learning is all about learning, not mathematics. Learning, information, knowledge, abstraction, reduction, novelty, salience, models, patterns, and general philosophy of science questions are discussed in the discipline of Epistemology. Discussing these issues in the domain of Mathematics is futile; I don't expect useful theories for deep learning to come from the Mathematics community.

Understanding arises from the ability to jump to conclusions on scant evidence, and Neural Networks are currently the best known way to do that. Doesn't sound scientific, does it? We don't use much math in Psychology; we shouldn't use math to explain how intelligence works either.

That would be a domain error.

From the article below:

"It is the guiding principle of many applied mathematicians that if something mathematical works really well, there must be a good underlying mathematical reason for it, and we ought to be able to understand it. In this particular case, it may be that we don’t even have the appropriate mathematical framework to figure it out yet. (Or, if we do, it may have been developed within an area of “pure” mathematics from which it hasn’t yet spread to other mathematical disciplines.)"

Both the author and I hold this up as part of the problem.

Jumping to conclusions, like humans and neural networks do, doesn't provide optimality, completeness, repeatability, parsimony, transparency (of process) or scrutability (of result). I don't see physics and mathematics operating under such conditions.

Math is manipulation of Models. Equations, formulas, theories, hypotheses. Computer programs.

Models are only useful in the Reductionist disciplines – or rather, in all disciplines when using Reductionist (Model Making) methods. Physics and Chemistry are reductionist. By the time you get to Biochemistry you are dealing with Life, and suddenly the interactions are so complex that comprehensive Models become nearly impossible. The further you move from Physics, the less you want to use Math and the more you want to use Model Free Methods. Like Machine Learning.

Model Free Methods are Manipulation of Patterns.
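To make the Model Based vs Model Free contrast concrete, here is a toy sketch: the same question answered once through an explicit equation (a Model) and once through nearest-neighbor lookup over stored observations (Patterns). The physical setup, constants, and function names are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Model Based: an explicit equation. Here, hypothetically, a linear spring
# law F = k * x with a known constant k.
def model_based_force(x, k=3.0):
    return k * x

# Model Free: no equation at all, just stored noisy observations of the
# world (Patterns) and nearest-neighbor lookup among them.
observed_x = rng.uniform(0.0, 10.0, size=500)
observed_f = 3.0 * observed_x + rng.normal(scale=0.1, size=500)

def model_free_force(x):
    nearest = np.argmin(np.abs(observed_x - x))
    return observed_f[nearest]
```

Both answer the same question; only the first does so by manipulating a Model, while the second merely manipulates remembered Patterns.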

The reason Deep Learning works is that every layer performs a small and reasonably easy fractional Reduction: it takes the slightly richer context at its input side and produces a slightly more Epistemically Reduced ("lossily abstracted"), slightly less rich context at its output side. Over many layers this leads to the discovery of high-level concepts with good disambiguation based on previous learning.
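One way to picture this layer-by-layer Reduction is a stack of mappings, each from a slightly richer context to a slightly narrower one. The sketch below uses made-up layer widths and random, untrained weights; it only illustrates the shape of the computation, not a working network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layer widths: each layer's output context is narrower
# ("more reduced") than its input context.
layer_widths = [256, 128, 64, 32, 16, 8]

# Random weight matrices, one per layer (untrained, illustration only).
weights = [rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_out))
           for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:])]

def forward(x):
    """Pass an input through the stack; each layer discards a little detail."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # linear map + ReLU: a small, lossy reduction
    return x

out = forward(rng.normal(size=256))
print(out.shape)  # (8,) -- the rich 256-wide context reduced to 8 numbers
```

Each application of `forward`'s loop is one "fractional Reduction"; the 256-dimensional input context ends up as an 8-dimensional abstraction.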

http://www.wired.com/2015/12/machine-learning-works-greatmathematicians-just-dont-know-why

This is my most important presentation so far.
At 20 minutes, it is also the shortest.
This is the place to start if you want to know what my last 16 years of AI research are all about.

Most important TED talk of the year. Watch it NOW.

Monica Anderson hung out with 6 people: purno o, Saila Tee, Danilo Aguiar da Rosa, tedjani tija, Bishwo Bhattarai, and jay jadeja.

I'm clearly getting daft in my old age. I can't figure out how to scroll GMail inbox to the next screenful of emails. It shows about 25 headers. How do I get to the 25 before that? Help! Through the fog of daftness I seem to recall there used to be arrows to push to allow me to see the previous 25 etc. Now I can't find them even though I'm wearing triple eyeglasses.

I even went and, shudder, browsed the help pages. I've tried all the bucky bits my keyboard has. I've tried a couple of browsers. There's no way to browse to emails from three days ago without a search.

Trying out Google's video sharing for my latest published talk about semantics of text.

Computer-based analysis of the Semantics of language expressed as text is an AI-level problem. Existing methods almost universally use Models of Language (Dictionaries, Grammars, Word Nets, Taxonomies, and Ontologies). The two simplest and most pervasive Models claim that Languages have Words and that those Words have Meanings. While acknowledging that good alternatives do not yet exist, this talk attempts to make it plausible that these two "obvious" but fatally incorrect Models automatically result in a cascading series of forced engineering decisions, each of which discards a fraction of the available semantics, until we end up with brittle systems that fail in catastrophic and memorable ways. The proposed alternative to word-centric Model Based methods of language analysis is Understanding Machines: systems capable of learning languages the way humans learn them in babyhood, using new classes of algorithms based on Model Free Methods.

More talks at http://videos.syntience.com

Watch this: Portal gun brought to life in video short
http://vrge.co/LZtPrR