I just added coverage that started today from +Search Engine Roundtable, as well as from
What kind of shake-ups are you seeing in your markets?
The challenge is that, when new material shows up on the scene, you don't yet have any human interactions -- and quite often, good material, things people would love, simply goes unnoticed and never builds up the interaction signals that would help. Detecting quality in these cases requires understanding the content itself, and the aspects of it that matter to people.
There are several hard aspects to this. One is simply understanding the content at the right granularity: "the color of the top-left pixel" or "the frequency of the word 'whenever'" are too fine-grained to give us a hint about whether people will like something, so we need to be able to group the content into more meaningful structures. For images, that might be "an image of a face in 3/4-profile," a certain color balance or contrast, a perspective or a cropping, and advances in image recognition in the past few years have (finally) made it possible to reliably identify such features. For text, it's much harder: there isn't yet even a clear idea of which features of text can both be measured and shown to determine people's tastes. (How do you measure "intellectually meaty" or "hinting at scandal"?)
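To make "meaningful structures" concrete for images, here is a minimal sketch of pulling high-level features out of a photo with a pretrained network. It assumes PyTorch and torchvision; the paper's actual feature extractor isn't described in this post, so a ResNet embedding stands in for it, and "photo.jpg" is a placeholder path.

```python
import torch
from torchvision import models
from PIL import Image

# Assumption: a pretrained ResNet stands in for whatever feature
# extractor the paper actually used. Its final embedding captures
# higher-level structure (faces, composition, texture) rather than
# raw pixel values like "the color of the top-left pixel."
weights = models.ResNet18_Weights.DEFAULT
extractor = models.resnet18(weights=weights)
extractor.fc = torch.nn.Identity()  # drop the classifier head, keep the 512-dim embedding
extractor.eval()

preprocess = weights.transforms()  # the resizing/normalization this model expects

img = Image.open("photo.jpg").convert("RGB")  # placeholder filename
with torch.no_grad():
    features = extractor(preprocess(img).unsqueeze(0)).squeeze(0)  # shape: (512,)
```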
This paper used the recent advances in image processing, together with recent advances in AI in general, to get a sense of which pictures people will like. It started by taking several thousand images and having them rated by humans for quality; that was used as "ground truth." Then those thousands of images were analyzed into meaningful features, and a neural network was trained to find patterns of image features which predict human taste.
This is what neural networks, and other kinds of "supervised" machine learning systems, do in general: they take as inputs a bunch of signals, and combine them using a large number of parameters -- the "weights" -- to produce predictions of some value that you want to measure. The weights are set by taking a large number of test examples ("golden data" or "ground truth") with known values of both the signals and the quantity being predicted; the weights are chosen ("trained") to maximize the quality of the system's predictions on this data. To make sure that the training doesn't just teach it to recognize those specific examples, the golden data is randomly split into two groups; one is used for training, and the trained model is then tested against the other group to make sure its predictions are still good. If they are, then you have a model which can predict -- given any set of measured signals -- the value you care about.
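In code, that whole loop is only a few lines. Here is a minimal sketch, assuming scikit-learn, with random stand-ins for the image features and the human ratings (the paper's real data and model aren't reproduced in this post):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Hypothetical ground truth: one row of content features per image and
# one human quality rating per image. Random stand-ins here.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 512))   # stand-in for per-image features
y = rng.uniform(0, 5, size=5000)   # stand-in for human ratings

# Hold out part of the golden data so we can check that the model
# generalizes instead of memorizing the training examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                     random_state=0)
model.fit(X_train, y_train)  # "training" chooses the weights

# On random stand-in data this score will be near zero; on real data,
# a good held-out score is the evidence that the weights generalize.
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```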
In this case, the signals are these features of the image, measured by a second machine learning system; the quantity being predicted is whether people will like it. Because these are all "content-based signals" -- that is, they're based on the contents of the image, and not on people's responses to it -- the resulting model can be applied to any image.
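That portability is the key property. Continuing the hypothetical sketches above (`extractor`, `preprocess`, and `model` are my stand-in names, not the paper's API), scoring a brand-new image that has no interaction history looks like:

```python
import torch
from PIL import Image

# `extractor` and `preprocess` come from the feature sketch, `model`
# from the training sketch; all of these names are hypothetical.
new_img = Image.open("brand_new_upload.jpg").convert("RGB")  # placeholder
with torch.no_grad():
    new_features = extractor(preprocess(new_img).unsqueeze(0)).numpy()  # shape: (1, 512)

# No favorites, views, or comments needed -- only the content itself.
predicted_rating = model.predict(new_features)[0]
print("predicted appeal:", predicted_rating)
```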
The team then applied this model to a set of 9 million images from Flickr with fewer than five "favorites." They tested the quality of its picks by having human raters compare that result set with the set of popular images on Flickr; the result was excellent, with its "hidden gems" scoring statistically indistinguishable from the most popular images on the site.
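The retrieval step itself is mechanically simple once the model exists. A hedged sketch, with a small random stand-in for the corpus (the real candidate set was roughly 9 million images, with features presumably precomputed):

```python
import numpy as np

# Hypothetical stand-ins: one precomputed feature row per little-known
# image (fewer than five favorites); `model` is the trained predictor
# from the earlier sketch.
rng = np.random.default_rng(1)
candidate_features = rng.normal(size=(10_000, 512))  # small stand-in corpus

scores = model.predict(candidate_features)

# Surface the highest-scoring "hidden gems" -- these are the picks that
# human raters compared against the site's most popular images.
top_gems = np.argsort(scores)[::-1][:100]
```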
I would expect a lot more work on related techniques over the next few years, and for this to have a significant impact on the way that content recommendation is done. The main upshot will be that more little-known works get the spotlight they deserve -- something critical, as more and more people are creating things of value that they want the world to see.
The first thing I do when searching is use Search Tools to set my location (I generally set it to UK).
This overrides Google's use of your Internet address when displaying results -- or rather, it used to override it.
This morning, no matter what I set my location to, Google still uses my Internet address.
Have Google switched off location settings in Search Tools, or is something broken?
Note: the same thing happens on IE, Chrome, and Firefox, whether signed in to Google or not.
"It seems simple enough, but is CLV is one of the most misunderstood or ignored concepts in marketing because it's inherently a long view, and our industry has suffered from shorter and shorter term tactical thinking as we rush from campaign to campaign. To adopt CLV is ultimately to take a more strategic method of measurement because it's considering the future, not simply tracking results of the past."
Introducing Search Response and Airings Data in TV Attribution - Analyti...
The following is a cross post from Adometry by Google, a Marketing Analytics and Attribution product. Mass media drives people to interact w...
The 25 Hottest Skills That Got People Hired in 2014
Believe it or not, 2014 is almost over and 2015 is right around the corner. With a new year comes new opportunities, and around this time we...
How to Not Suck at Local SEO - MN Search Summit 2014
What really matters in local search? Which activities are going to have a high return on investment, and which activities are just a waste o...
Local Citation Services Compared - MozLocal vs Yext vs UBL vs BrightLoca...
Read our detailed comparison of MozLocal vs Yext vs UBL vs BrightLocal vs. Whitespark and see which citation building service is right for yo...
Official Google Webmaster Central Blog: Understanding web pages better