**Understanding the Algorithms of our Digital World**

We live here, now. The roots of what we do and how we do it, however, were laid some time ago by people living in entirely different times. Andrey Kolmogorov was a Soviet mathematician (http://goo.gl/ctIw5) who took a redacted work of Claude Shannon (http://goo.gl/fIBz), father of *Information Theory*, managed to fill in the blanks himself using logic and mathematical intuition, and advanced it.

His work on algorithmic information theory and computational complexity bears directly upon us and highlights insights for search and Google+. He introduced the concept of complexity in the flow of information as a way of defining its semantic density. Using algorithms as an abstract rendering of something being described, he was able to show mathematically that the less complex an object is, the shorter the algorithm that accurately generates or describes it. An object needing an algorithm as long and complicated as the object itself indicates maximal complexity (i.e. no shortcuts).

He described his dilemma like this:

*The intuitive difference between “simple” and “complicated” objects has apparently been perceived a long time ago. On the way to its formalization, an obvious difficulty arises: something that can be described simply in one language may not have a simple description in another and it is not clear what method of description should be chosen.*

To overcome this he chose the obvious: the computer language (*any* computer language) of a universal Turing machine (http://goo.gl/Hnoh). The Kolmogorov complexity of an object, then, is the size, in bits, of the shortest algorithm needed to generate it. This is also the amount of information it contains. And it is also the degree of randomness it contains. Why is any of this important? For two reasons:
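The idea above can be sketched in code. Kolmogorov complexity itself is uncomputable, but a standard stand-in (an assumption of this sketch, not something from the original post) is the length of a compressed representation: a compressor that finds a pattern is, in effect, finding a shorter "algorithm" for the object. A highly regular string compresses to almost nothing; a random-looking one barely compresses at all, because there is no shortcut.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed data: a crude,
    computable upper bound on Kolmogorov complexity (which is
    itself uncomputable)."""
    return len(zlib.compress(data, 9))

# A highly regular object: the short rule "repeat 'ab'" generates it.
regular = b"ab" * 5000

# A random-looking object of the same length: no obvious pattern.
random.seed(42)
messy = bytes(random.randrange(256) for _ in range(10000))

print(compressed_size(regular))  # tiny: the pattern compresses away
print(compressed_size(messy))    # close to 10000: no shortcut found
```

The gap between the two numbers is the point: same length in raw bytes, vastly different "shortest description" lengths.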

A. Randomness and complexity are closely tied to information. A truly unpredictable person, for instance, is incredibly complex. If his actions cannot be computed then they can never be predicted. Predictability is part of the way we operate as humans. The ability to analyse our behavioural patterns and calculate them algorithmically is what makes Google Now (http://goo.gl/NNQSH), and the YouTube algorithms that predict and preload videos we would like to watch, so accurate. A lot of our behaviour includes repetitive redundancies (the mental 'shortcuts' we use so as to not have to think all the time) that make us a lot less complex than we think we are. But we have the *potential* for complexity.

B. Randomness also marks the wealth of information contained in any flow of data. Like summer blockbusters that are designed to be big on spills and light on the brain, predictability reduces some of the semantic richness. You can take your eyes off the screen, check your phone, reach for some popcorn and touch your date’s hand without missing any critical plot twists because… well, there aren’t many.

This brings us to Google+. The complexity of our connections here makes every conversation unpredictable and therefore both information-rich and semantically dense (and there is more than a little overlap in these two terms). That means that when we are here we need to fully engage, think, respond, analyse, learn, consider. There’s no taking our eye off the screen and running Google+ on autopilot. That’s what also sets off the mental fireworks and produces fresh insights at every conversation.

Since we are talking about complexity, you may want to check out this HOA with +Lee Smallwood and +Dejan SEO: http://goo.gl/qosw7y

Kolmogorov and Shannon would have both loved it here.

I know, it’s Thursday, make it a good one.


- +Rick Nimo This is a huge point, one I think that is highly neglected. It makes me think this: not only are some of the measurables highly questionable underpinnings of the pictures they supposedly present, skewing strongly towards certain kinds of behaviours and even persons; they necessarily are doing so because it is the only widely distributed data that we have. There is all kinds of interest that simply does not express itself in actions like clicking or publicly declaring *likiness* in one way or another; I in fact enjoy lots of things in my stream that I take zero action towards. More concerning than this, though, is that while we may assume that data collection will become more enriched beyond the primitive markers being used (and being treated as comprehensive on several levels), a culture of interpretation is being built, a kind of discourse of interest measurement which very likely is going to *grow out of* +1s and their like, rather than reverse itself against it. We are already seeing Facebook and Google+ treat these kinds of markers as significant in their modeling. Will future models not simply build on this approach, because that is where we will have the richest data history and interpretive experimentation? And lastly, as platforms bend their social products towards these markers, rewarding those people that give them more markers of a kind, they are in turn shaping the behaviours and expectations of social action, the very thing they are supposed to be measuring. The idea that we are producing effective pictures of wants and using them to enhance experiences is determining our future of social media more than we may know. Aug 22, 2013
- +Kevin von Duuglas-Ittu Kurt Gödel would have loved you, as would Claude Shannon (reading his biography started my post), because we are in the solipsistic area of his famous paradox that led to the theory of incompleteness. Pretty apt, actually, seeing how we started off looking at information theory and algorithms and, heck, this is a true G+ campus. There is always an area of mathematics where our best models break down and our ability to derive proofs fails. :) Aug 22, 2013
- I like your point +John Kellden. I guess for me the concern is a reflexive one: how do early successes with primitive modeling (and the shift of financial resources towards them) produce a picture of success (Google selling ads) that bends back and shapes Social in very strong ways (again, mobile is a great example), as well as steering the demands of modeling itself: let's model to produce a user base that is most susceptible to our ads.

I guess in the larger picture the Market Economies that drive commerce and the Gift Economies that drive social are at odds with each other in a radical, nearly logic-level way, and where they intersect, in this case in predictive modeling that will shape our social landscape, is a likely place of some turbulence and misrepresentations. Aug 22, 2013
- +Joseph Ruiz, you were very enthused about Big Data pictures some time back. Wondering what you think? Aug 22, 2013
- Thanks +Kevin von Duuglas-Ittu. I'm undercaffeinated, so not able to do justice to your many insightful perspectives. I do believe there are both clashes (at odds) as well as synergies, *in the same Venn intersections*. Aug 22, 2013
- Making a note to self to add one of +Lee Smallwood's posts to the Mapping the Googleverse series... Aug 22, 2013