Virendra Rajput
412 followers
<pure hacking>
Posts

Post has shared content
Man Lives In A Boeing 727 In The Middle Of The Woods

More information: http://bit.ly/1q5J0H6

Post has shared content
"How Pinterest started" as told by +Ben Silbermann at Startup Grind (http://bit.ly/1gDnJzm) by +Anna Vital.

Post has shared content
If You Are Designing Your Own REST backend You're Doing It Wrong

The only reason I know this is because I've been guilty of it myself. This is meant to start a dialogue, so please comment.

Let me first walk through what you are probably doing. You pick your favourite programming language and framework and get going, whether that's Node.js and restify, Python and Django, or Ruby and Rails.

Then you pick your database, whether that's the tried and tested MySQL or the shiny and new MongoDB. This choice is probably going to affect how you scale, and it usually comes down to how well you know the different databases and approaches.

Then you start coding. You hopefully care about what the URL schema looks like, so you make a really nice interface for developers to work from, like this:
GET     /users          - will get you all of your users
GET     /users/olafur   - will get you one user
POST    /users          - will make a new user
PUT     /users/olafur   - will update the user
DELETE  /users/olafur   - will delete the user

You will go through all of your objects, mapping them to REST like this, and hopefully you will end up with something sane. This is really nice to hook up to jQuery and mobile interfaces.
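
A minimal sketch of those routes, using Flask here purely for brevity (the post mentions Django, Rails and restify; the shape is the same in any framework), with an in-memory dict standing in for the database:

from flask import Flask, jsonify, request

app = Flask(__name__)
users = {}  # in-memory stand-in for the real database

@app.route("/users", methods=["GET"])
def list_users():
    return jsonify(list(users.values()))

@app.route("/users/<name>", methods=["GET"])
def get_user(name):
    return jsonify(users[name]) if name in users else ("not found", 404)

@app.route("/users", methods=["POST"])
def create_user():
    doc = request.get_json()
    users[doc["name"]] = doc
    return jsonify(doc), 201

@app.route("/users/<name>", methods=["PUT"])
def update_user(name):
    users[name] = request.get_json()
    return jsonify(users[name])

@app.route("/users/<name>", methods=["DELETE"])
def delete_user(name):
    users.pop(name, None)
    return "", 204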

Now you have to scale. You hope that writing data to the server isn't going to kill it, and that you don't get too much of that kind of traffic. At least you know how to handle reads: you put something like Nginx and Varnish in front, with Memcached behind, then you look for bottlenecks and see whether some more caching solves them. It's truly amazing to see the difference it makes.
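
For the read side, that caching layer boils down to something like this read-through cache (a sketch assuming the pymemcache client and a hypothetical load_user_from_database helper; Nginx and Varnish sit in front of the application):

from pymemcache.client.base import Client

cache = Client(("127.0.0.1", 11211))

def get_user(name):
    key = "user:" + name
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")       # cache hit: no database work at all
    value = load_user_from_database(name)   # hypothetical slow path to the database
    cache.set(key, value, expire=60)        # keep it around for a minute
    return value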

Now you hit an API that has to do some async behaviour, and now you're screwed. There are solutions for that, but they make the code really complex, even in Node.js.

But every one of these steps I've described is incorrect; now let me tell you why. Let's work our way back.

The first problem is that you have a lot of moving parts between the data you're trying to put into the database and the database itself. Your APIs lose data because of errors or downtime, and a lot of the Internet is going over unreliable wireless technologies. So your beautiful REST calls end up riddled with exception handling, because there are so many ways things can go wrong.
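
To make that concrete, here is roughly what one of those "beautiful" REST calls looks like once it is wrapped against flaky connections (a sketch using the requests library; the endpoint and payload are made up):

import time
import requests

def create_user(doc, retries=3):
    for attempt in range(retries):
        try:
            resp = requests.post("https://api.example.com/users", json=doc, timeout=5)
            resp.raise_for_status()
            return resp.json()
        except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
            time.sleep(2 ** attempt)  # back off and hope the network recovers
    raise RuntimeError("giving up: the write may or may not have happened")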

So what do you do? You stick a REST database in front of your API. What does that accomplish? Let's first talk about speed: we are talking about roughly 4x write speed improvements. You don't lose data when writing to the API, and the database is probably more solid than code you write. CouchDB is truly a speed freak when it's dealing with REST, and it has security and validation built in. When you need to scale, you have a multi-master database, so you stick one closest to your users and they all sync with each other. So we have covered scaling and dealing with the speed of writes and reads.
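
In practice, "sticking a REST database in front of your API" just means that clients (or a very thin API layer) write JSON documents straight to CouchDB over HTTP. A sketch, with made-up host, database name and credentials:

import requests

COUCH = "http://admin:secret@127.0.0.1:5984"

# create the database once (CouchDB answers 412 if it already exists)
requests.put(COUCH + "/users")

# write a document; CouchDB assigns the _rev used later for syncing and conflicts
resp = requests.post(COUCH + "/users",
                     json={"_id": "olafur", "type": "user", "name": "Olafur"})
print(resp.json())  # {'ok': True, 'id': 'olafur', 'rev': '1-...'}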

How do you deal with writing to the REST API if the clients have bad Internet connections? You don't. You write to a native implementation in the browser [1][2] or on mobile [3][4], then sync with the server when you have a connection. This also cuts down on the traffic you have to get from the server; trust me, it's an order of magnitude difference. You might say, "Why not implement syncing in your framework of choice?" If you did, you would probably have to rewrite it, because syncing in CouchDB works by keeping track of revisions and what has changed, and that is hard to retrofit onto a framework.
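
On the server side that sync is just CouchDB replication. A sketch of kicking it off through the _replicate endpoint (URLs and database names are placeholders; in the browser, PouchDB [2] handles the equivalent for you):

import requests

# continuously copy changed revisions from an edge node into the local "users" database
requests.post("http://127.0.0.1:5984/_replicate",
              json={"source": "http://edge-node:5984/users",
                    "target": "users",
                    "continuous": True})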

So then you don't have beautiful URLs, right? Nice-looking URLs might be a valid requirement for some simple API you have to maintain, but not for anything that has to scale. And it's still possible in CouchDB with rewrite rules.

I personally like the model of a staging database and a main database. You can create a rewrite rule so that all writes go to staging and all reads come from main. What this gives you is a record of all the incorrect API calls without polluting your main database. It also makes sense to give the documents you put into the database a type and to use views with map functions to sort them out. You don't have to do it like that, but you gain a lot from it.
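
Here is a sketch of the "typed documents plus views with map functions" idea: a design document whose view indexes documents by their type field. The map function is JavaScript stored as a string inside the document; the host and database names are made up:

import requests

design_doc = {
    "_id": "_design/app",
    "views": {
        "by_type": {
            "map": "function(doc) { if (doc.type) { emit(doc.type, doc._id); } }"
        }
    }
}
requests.put("http://127.0.0.1:5984/main/_design/app", json=design_doc)

# query it: the ids of all documents of type "user" (view keys are JSON-encoded)
rows = requests.get("http://127.0.0.1:5984/main/_design/app/_view/by_type",
                    params={"key": '"user"'}).json()["rows"]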

So this is all fine and dandy for stuff that doesn't require processing, but what if you actually want to do something more than just store data?

You can do what I did and write a service that watches for changes in a database and then puts those changes through plugins, in a flow-like structure. Or you can use my implementation [5], currently only in Python. It abstracts CouchDB away from your code, so you're just receiving information and sending back a response. I've written it in Tornado, but who knows, asyncio looks pretty good.
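
A stripped-down, blocking version of that "watch for changes and push them through plugins" loop, long-polling CouchDB's _changes feed (the real implementation [5] does this asynchronously with Tornado; this sketch only illustrates the flow):

import requests

def run(db_url, plugins, since=0):
    while True:
        feed = requests.get(db_url + "/_changes",
                            params={"feed": "longpoll", "include_docs": "true",
                                    "since": since},
                            timeout=70).json()
        since = feed["last_seq"]
        for change in feed["results"]:
            doc = change.get("doc")
            for plugin in plugins:
                plugin(doc)  # each plugin can process the document and write a response back

# run("http://127.0.0.1:5984/staging", [print])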

With that kind of architecture you don't need to process load at the rate it's being written to your servers; you process as much load as you want, balancing responsiveness against the cost of the machines. But the reads are still going to be fast.

So why isn't everybody doing this then? We are still learning how to structure things well, and I'm only able to write about this because of the awesome work of databases like CouchDB that are not afraid of being misunderstood, and of people who have formed best practices from all the mistakes they have made. I've made plenty of mistakes and will make plenty more in the future. The important thing is to learn from them.

I have to say that I really love REST and I love beautiful URLs but life is about doing the right thing, as often as you can get away with.

[1] https://github.com/olafura/sundaydata
[2] https://github.com/daleharvey/pouchdb
[3] http://www.couchbase.com/mobile
[4] https://cloudant.com/product/cloudant-features/sync/
[5] https://github.com/olafura/sundaytasks-py

#Python #CouchDB #NodeJS #Programming #RubyOnRails #REST

Post has shared content
There is a Pure HTML Gmail, and it Still Works
#gmail  /via +Paul Buchheit 

Paul Buchheit, recognized as the inventor of +Gmail ten years ago, said Gmail's reliance on AJAX (Asynchronous JavaScript and XML) was so scary that, in parallel, the team built an HTML-only version -- just in case. Guess what? It still works. Find it here: http://mail.google.com/mail/u/0/h/

Post has shared content
Wiring Corkboards

Inspired by +Urs Hölzle's post from earlier today, I'll relate my first visit to a datacenter. It was August, 1999, about a week after I had started at Google. Our traffic was growing very quickly, and we had just gotten a delivery of 400+ machines to our "cage" in the Exodus datacenter in Santa Clara. The machines were the infamous "corkboards", which we had designed in-house, and which featured a thin layer of cork to protect the motherboards from the cookie trays on which they were mounted (well, more like "rested haphazardly on"). Each tray of four machines was served by a single power supply (which made for interesting failure domains), and had 8 disks total. Four of the disks were nicely mounted with actual screws towards the back of each tray, but four were placed on a plexiglass sheet that was draped over the ribbon cables on top of the motherboards, giving a kind of springy base rather than a firm mounting point.

The two ops folks in the company at the time were a bit overwhelmed getting all of these machines cabled and ready to go, and the extra capacity was needed very soon, so a call was sent out for volunteers to come help with a server wiring party. We all went down to the cage in waves. The cage was small and so completely packed from floor to ceiling with computing equipment that only three or four people could work in there at once. Rack switches were bungee-corded to the water pipes above the cage to prevent them from toppling off the tops of the racks (in fact, while we were doing the wiring, one of the racks did exactly that and gave one of my fellow wirers a nasty gash on the head).

In the end, we got all the machines up and running, and our users were happy the next morning.

Other relevant info:

Urs's post from earlier today, showing Google's first contract for datacenter space:

https://plus.google.com/100873628951632372330/posts/UseinB6wvmh
(Good discussion in the comments of this post, as well).

More info about the "corkboard" machines:

http://americanhistory.si.edu/press/fact-sheets/google-corkboard-server-1999