The web has taught us that serving a request is only worthwhile if you can respond within the small window of time before the sale is lost. An SLA of five nines of uptime is worthless if you consistently take too long to respond. It is better to serve as many requests as possible within that window and simply reject the rest when you cannot keep up with demand.
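The idea above, often called load shedding, can be sketched as a gate that rejects a request up front whenever the existing backlog would already push its response past the latency window. This is a minimal illustration, not a production implementation; the `LoadShedder` class and its `window_ms`/`service_ms` parameters are assumptions invented for the example.

```python
from collections import deque

class LoadShedder:
    """Reject requests that could not be answered inside the latency window.

    Hypothetical sketch: capacity is how many queued requests fit in the
    response-time budget, given a rough per-request service-time estimate.
    """

    def __init__(self, window_ms=100.0, service_ms=50.0):
        self.capacity = max(1, int(window_ms // service_ms))
        self.queue = deque()

    def try_accept(self, request_id):
        # A request that would wait past the window is rejected immediately,
        # which is cheaper than producing a response nobody will wait for.
        if len(self.queue) >= self.capacity:
            return False
        self.queue.append(request_id)
        return True

    def complete(self):
        # Called when a request finishes, freeing a slot in the window.
        if self.queue:
            self.queue.popleft()

shedder = LoadShedder(window_ms=100.0, service_ms=50.0)  # capacity of 2
results = [shedder.try_accept(i) for i in range(4)]
# The first two requests fit in the window; the rest are shed up front.
```

The fast rejection is the point: a shed request costs almost nothing, while a slow success costs a sale.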
Perhaps it is time to do the same with data. Data is being generated at an exorbitant rate in today's world, forcing us to scale our systems to previously unimaginable levels. Yet if we cannot access that data at least as fast as it is generated, we are always working with stale data anyway, so what is the value in guaranteeing consistency in every transaction? It follows that to scale we must devalue consistency just as we devalued uptime: be as consistent as possible, but guarantee responses within a usable timeframe.
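The "as consistent as possible, within a usable timeframe" trade-off can be sketched as a read that attempts a slow, strongly consistent lookup under a latency budget and falls back to a possibly stale local copy when the budget expires. Everything here is an assumption for illustration: `consistent_read` stands in for a slow quorum read, and `CACHE` for a fast local replica.

```python
import concurrent.futures
import time

CACHE = {"inventory": 41}          # fast local replica, possibly stale

def consistent_read(key):
    # Stand-in for a slow, strongly consistent (e.g. quorum) read.
    time.sleep(0.5)
    return 42

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def bounded_read(key, budget_s=0.1):
    """Return the freshest value obtainable within the latency budget,
    serving the possibly stale cached copy rather than blocking."""
    future = pool.submit(consistent_read, key)
    try:
        value = future.result(timeout=budget_s)
        CACHE[key] = value          # refresh the local copy on success
        return value, "fresh"
    except concurrent.futures.TimeoutError:
        return CACHE[key], "stale"

value, source = bounded_read("inventory")
# The 0.5 s consistent read blows the 0.1 s budget, so the cached
# value is returned and labelled stale.
```

The caller always gets an answer in bounded time; what varies is how fresh that answer is, which is exactly the inversion of the usual consistency guarantee.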