Understanding the Algorithms of our Digital World

We live here, now. The roots of what we do and how we do it, however, were laid some time ago by people living in entirely different times. Andrey Kolmogorov was a Soviet mathematician (http://goo.gl/ctIw5) who took a redacted work of Claude Shannon (http://goo.gl/fIBz), father of Information Theory, filled in the blanks himself using logic and mathematical intuition, and advanced it. 

His work on algorithmic information theory and computational complexity bears directly upon us, with insights for search and Google+. He introduced the concept of complexity in the flow of information as a way of defining its semantic density. Using algorithms as an abstract rendering of the thing being described, he showed mathematically that the less complex an object is, the shorter the algorithm that can accurately generate or describe it. An object needing an algorithm as long and complicated as the object itself has maximal complexity (i.e. no shortcuts). 

He described his dilemma like this: “The intuitive difference between ‘simple’ and ‘complicated’ objects has apparently been perceived a long time ago. On the way to its formalization, an obvious difficulty arises: something that can be described simply in one language may not have a simple description in another, and it is not clear what method of description should be chosen.”

To overcome this he chose the obvious: the language (any computer language) of a universal Turing Machine (http://goo.gl/Hnoh). Any such language will do, because translating a program from one universal language to another costs at most a fixed overhead, so the choice changes the measure only by a constant. The Kolmogorov complexity of an object, then, is the size, in bits, of the shortest algorithm needed to generate it. This is also the amount of information it contains, and its degree of randomness. Why is any of this important? For two reasons: 
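Kolmogorov complexity itself is uncomputable, but there is a quick, hands-on way to get a feel for the idea: the compressed size of a string is a rough, computable upper bound on it. This is just an illustrative sketch (the strings and numbers here are my own example, not Kolmogorov's), comparing a highly regular string with a random-looking one:

```python
import random
import zlib

# A simple pattern: 2000 bytes that a very short "program" ("ab" * 1000)
# fully describes. Low complexity, big shortcut.
regular = b"ab" * 1000

# 2000 pseudorandom bytes standing in for a truly random string:
# no pattern for the compressor to exploit, so (almost) no shortcut.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(2000))

# Compressed size approximates (from above) the length of the shortest
# description: tiny for the regular string, near 2000 for the noisy one.
print(len(zlib.compress(regular)))
print(len(zlib.compress(noisy)))
```

The regular string compresses to a few dozen bytes, while the noisy one stays close to its original 2000: in Kolmogorov's terms, its shortest description is roughly the string itself.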

A. Randomness and complexity are closely tied to information. A truly unpredictable person, for instance, is incredibly complex: if his actions cannot be computed, they can never be predicted. Predictability is part of the way we operate as humans. The ability to analyse our behavioural patterns and calculate them algorithmically is what makes Google Now (http://goo.gl/NNQSH), and the YouTube algorithms that predict and preload videos we would like to watch, so accurate. A lot of our behaviour consists of repetitive redundancies (the mental ‘shortcuts’ we use so as not to have to think all the time) that make us a lot less complex than we think we are. But we have the potential for complexity. 

B. Randomness also marks the wealth of information contained in any flow of data. Like summer blockbusters that are designed to be big on spills and light on the brain, predictability reduces some of the semantic richness. You can take your eyes off the screen, check your phone, reach for some popcorn and touch your date’s hand without missing any critical plot twists because… well, there aren’t many. 

This brings us to Google+. The complexity of our connections here makes every conversation unpredictable and therefore both information-rich and semantically dense (and there is more than a little overlap in those two terms). That means that when we are here we need to fully engage, think, respond, analyse, learn, consider. There’s no taking our eyes off the screen and running Google+ on autopilot. That’s also what sets off the mental fireworks and produces fresh insights in every conversation. 

Since we are talking about complexity, you may want to check out this +Lee Smallwood and +Dejan SEO HOA: http://goo.gl/qosw7y

Kolmogorov and Shannon would have both loved it here. 

I know, it’s Thursday, make it a good one. 