Profile

Brandon Titus
Works at Mercury Intermedia
Attends University of Denver
Lives in Denver, CO
3,152 followers | 14,990 views

Stream

Brandon Titus

Shared publicly  - 
 
Dogs. Trying out auto enhance (hopefully)

Brandon Titus

Shared publicly  - 
 
Sebastian Delmont originally shared:
 
Good bye, Google Maps… thanks for all the fish

TL;DR: We at StreetEasy decided to build our own maps using, among other tools, OpenStreetMap, TileMill, MapBox and Leaflet, instead of paying hundreds of thousands of dollars per year to Google. And yes, the money pushed us into doing it, but we're happier with the result because we now control the contents of our maps.

We were all happy...
Our site, StreetEasy (http://streeteasy.com/), has been using Google Maps embedded in our pages for the last 6 years. We're a real estate portal, so most of our pages have maps in them. So when Google announced their new usage limits (see http://www.dailymail.co.uk/sciencetech/article-2056128/Google-Maps-start-charging--thousands-sites-apps-hit-fees.html), we were a little worried.

25,000 free map views per day, and $4 per 1,000 views (CPM) beyond that. On Christmas day, when everybody was opening their presents, we did ten times that. On a good day, we do 600K-700K pageviews (http://www.quantcast.com/streeteasy.com).

We did the math and came up with numbers that reminded me of Oracle licensing in 1999. Six, seven, eight hundred thousand dollars. We met with Google salespeople, expecting to negotiate better terms, and they were nice, and they offered us discounts, but only to about half of what we'd calculated.
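
For the curious, here's the back-of-envelope version of that math as a tiny script; the daily figure is an assumed round number, since not every pageview necessarily triggers a billable map load:

    // Back-of-envelope estimate; 600K/day is an assumed round number.
    var dailyViews = 600000;  // map views per day
    var freeViews = 25000;    // Google's free daily quota
    var ratePer1000 = 4;      // $4 per 1,000 views beyond the quota

    var dailyCost = (dailyViews - freeViews) / 1000 * ratePer1000;  // $2,300
    var yearlyCost = dailyCost * 365;                               // $839,500
    console.log('$' + dailyCost + '/day, ~$' + yearlyCost + '/year');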

In our opinion, their price was off by an order of magnitude. It's very, very hard to absorb a $2 CPM cost in any site's business model, when most of the time, if you're lucky, you're making $1 CPM off your pages. And yes, StreetEasy does much better than that, and it would not have bankrupted us, but it would have also meant giving away a significant chunk of our profits.

It was not just the money!
$200,000 to $300,000 a year is, at the very least, the same as hiring a very good engineer for a year (and paying all the taxes and benefits and costs and still having a lot of money left). It was enough money to finally push us into doing our own maps.

Because despite Google Maps being such an awesome product, it had its downsides. One is that your site looks just like every other site with maps on the Internet (and I know you can customize their colors now, but that costs even more!). Another is that you have no control over your maps, so when you're trying to point out the location of this wonderful apartment, Google might think it's a good idea to clutter the map with random local businesses (and yes, they've gotten better at it, but often it's just noise). Or they might have bad data, and there's very little you can do about it except report it and wait. (I've always been annoyed at "Classon Pointe" being shown in the middle of Harlem, probably a mistake by some mapping data company decades ago; again, something that has been corrected, but it highlights the problem.)

I've always wanted to have our own maps, but thought it would be impossible, or at the very least, a huge amount of work. Something not worth considering, given the rest of a long list of things we also wanted to build on StreetEasy. But with a potential invoice for a third of a million dollars hanging over our heads, we had enough "carrot" (or is it "stick"?) to revisit our priorities. At the very least, we should do more research and see what our options were.

Looking beyond GMaps
Our first option was, of course, Bing Maps. I'm sure Microsoft is having a great time helping all the Google Maps Refugees, and I have no doubt they would have offered us a very cheap licensing deal, but it still meant using someone else's maps, and it would leave us with license renegotiation risks a year or two down the road. But it was an option.

Then, my coworker +Jordan Anderson, sitting quietly across my desk, pointed out that his "other job", the site he had built with a friend before joining StreetEasy, the fabulous Ride The City (http://ridethecity.com/), did not use Google Maps, but their own tiles, and an open source JS library to display them.

A couple of days later, at a NYC Big Apps hackathon where we were showing off our public APIs, I met +Javier de la Torre (from http://vizzuality.com) and he showed me his awesome product, CartoDB (http://cartodb.com) and gave me a few more pointers. And I saw what +Alastair Coote was doing for his taxi app and got excited with the possibilities.

I spent the next week reading and browsing and searching, discovering the wonderful world of digital cartography, and being amazed at how far the open source tools had advanced in the last few years.

The world of Open Source Cartography
We now had a great tile renderer, Mapnik (http://mapnik.org/), that was at the core of pretty much every mapping tool out there. Great "geo" and "gis" functionality for Postgres, in the form of PostGIS (http://postgis.refractions.net/). A few javascript libraries to present the results inside web browsers, such as Leaflet (http://leaflet.cloudmade.com/), Open Layers (http://openlayers.org/) and Modest Maps (http://modestmaps.com/), and other libraries to abstract your mapping backend behind a common API, such as Wax (http://mapbox.com/wax/) or Mapstraction (http://mapstraction.com/).

But then I discovered the "second generation" of tools, built on top of what I just listed in the previous paragraph, and it blew my mind. Things like CartoDB or TileMill (http://mapbox.com/tilemill/) or Web Map Studio (http://cloudmade.com/products/web-maps-studio).

TileMill, in particular, was just amazing, and Carto CSS (http://developmentseed.org/blog/2011/feb/09/introducing-carto-css-map-styling-language/) made map design look like something I could actually do!

And of course, OpenStreetMap (http://www.openstreetmap.org/), the Wikipedia of mapping. An open source (well, technically, Creative Commons) data set, covering the entire globe, with lots of details (sometimes too much detail, like the voltage and gauge of a subway line!). It has a few errors here and there, but you can go and fix them yourself (as I've done http://www.openstreetmap.org/user/sdelmont/edits).

The path we took
I settled on Leaflet for the front end, mostly because it was small, fast, clean code with a good API that resembled Google Maps v2. It's a good thing that when we first implemented maps on StreetEasy, we did it through Ruby code that generated the JS, so all I had to do was "implement a new backend". If I were to do it today, I might use Wax or Mapstraction instead, to ensure I could change map APIs if I had to.

It was fairly easy to implement most basic features. Showing a map, adding markers, adding polygons, info popups (we had our own code for that, just had to hook it on the right events). I spent a couple of days getting our "polygon editor" to work (something I plan to contribute back to Leaflet as soon as I have time to clean up the code). And of course, the dreaded "does it run on IE?" time (I ran into some issues with onload events on script tags, but that was all).
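
For a flavor of what that looked like, here's a minimal sketch of those basics in Leaflet (written against today's Leaflet API, which differs a little from the 2011-era one; the tile URL and coordinates are placeholders):

    // Show a map with our own tiles (placeholder URL).
    var map = L.map('map').setView([40.7484, -73.9857], 13);
    L.tileLayer('https://tiles.example.com/{z}/{x}/{y}.png', {
      maxZoom: 18,
      attribution: 'Map data &copy; OpenStreetMap contributors'
    }).addTo(map);

    // A marker with an info popup, hooked onto the marker's events.
    L.marker([40.7505, -73.9934]).addTo(map).bindPopup('A wonderful apartment');

    // A polygon, e.g. a neighborhood boundary.
    L.polygon([
      [40.75, -74.00],
      [40.76, -73.99],
      [40.75, -73.98]
    ]).addTo(map);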

I installed Postgres and PostGIS, downloaded OSM extracts from http://download.geofabrik.de/osm/north-america/, because there is no point in downloading gigs and gigs of worldwide data when all I care about is the area around NYC. Imported it using osm2pgsql (http://wiki.openstreetmap.org/wiki/Osm2pgsql) and started playing with TileMill.

I discovered the work of Mike at Stamen (for example http://mike.teczno.com/notes/osm-us-terrain-layer.html) and was inspired by it. Found a couple of TileMill projects (https://github.com/mapbox/open-streets-style and https://github.com/mapbox/osm-bright) to better understand how to organize them. Ran into High Road (http://mike.teczno.com/notes/high-road.html), a set of queries that makes OSM roads much more manageable and stylable.

And I spent days and days tweaking maps. Just to get to a point where we were not unhappy with our maps. Something that was good enough to flip the switch.

We added building outlines from NYC Open Data (http://nycopendata.socrata.com/), and our own neighborhood boundaries to decide where to put the labels (and trust me, we have the best boundaries for NYC).

As soon as I had something that didn't cause my coworkers to vomit, I uploaded the tiles to S3 and started testing it on our site. A few days and a lot more map tweaks later, we started using the new maps for some of our users. And as of Jan 10th, we flipped the switch for all pageviews on our site.

We decided to host our tileset with MapBox (http://mapbox.com), from the great guys at Development Seed. We could have unpacked the mbtiles file produced by TileMill and just uploaded the images to S3 (see http://karchner.com/2011/02/21/extract-images-from-an-mbtiles-file-or-getting-actual/), but we went ahead and paid for MapBox, in part because it means fewer servers to worry about, in part because we want to support the guys who brought us TileMill, and in part because of the promise of more cool features down the road. And most importantly, because they promised to help us make our maps look nicer, and they know about nice maps.

Take a look at the results: http://streeteasy.com/nyc/sales/midtown-all-manhattan/status:open%7Cbeds:2?map_all=1

Where to now?
If I haven't made it clear, we're not completely happy with how our maps look, but we were happy enough to go ahead. We want to make them look great, with more data (such as subway stations) and better labels and lots of other little things. Development Seed will help us with that, and we've been learning a lot ourselves.

We'd also like to have a "live mapnik server", producing tiles on demand (and caching the results, duh) to make it easier to tweak our maps. Right now it takes a couple of days to go from OSM import to tile rendering to uploading multi-gigabyte files and finally showing them on the site. A live server would let us change a stylesheet and see the results right away.

We will try to contribute back to all these open source projects as much as we can. I already have some code for Leaflet for polygon editing and encoding, for example, and we've started doing edits on OSM.

What about geocoding?
You've probably noticed I didn't talk about geocoding (the "art" of converting a street address into a set of coordinates on a map, in case you didn't know). That's part of what Google offers as part of their Maps APIs.

Well, at StreetEasy we built our own geocoder for NYC, using the City's database of streets and buildings. So it's not something we had to worry about as part of this transition.
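
At its core, a geocoder like that is normalization plus a lookup. A hypothetical sketch (not our actual code, which handles far messier input):

    // Hypothetical sketch: normalize an address to the same form used in
    // the city's street/building file, then look up its coordinates.
    function geocode(address, buildingsByAddress) {
      var key = address.toUpperCase().replace(/[.,]/g, '').replace(/\s+/g, ' ').trim();
      var hit = buildingsByAddress[key];
      return hit ? { lat: hit.lat, lng: hit.lng } : null;
    }

    // 'buildings' is a preloaded address -> coordinates table.
    geocode('123 W 45th St', buildings);  // { lat: ..., lng: ... } or null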

But in case you need to do some geocoding, there are plenty of tools (for example http://highearthorbit.com/geocommons-open-sourced-geocoder/) that use OSM data.

The Year of the Open Map
I think that someone at Google got their pricing wrong by an order of magnitude. Large companies might be willing to pay that kind of license fee, but this is not the CMS market in 1998, where people would pay half a million for a Vignette license and another million for Oracle. There are so many open source options out there that the value of proprietary solutions has come down dramatically.

And if Google keeps pushing companies into experimenting with these open source solutions, it's only going to get better. I think 2012 is going to be the year of the Open Map. And I'm happy to be part of the front lines.

Brandon Titus

Shared publicly  - 
 
Great tips from the Google+ devs about speed.
Mark Knichel originally shared:
 
Hi everyone! I’m an engineer on the Google+ infrastructure team. When +Joseph Smarr made an appearance on Ask Me Anything back in July (http://goo.gl/GbdYv), many of you wanted to hear more about Google+'s technology stack. A few of us engineers decided to write a few posts about this topic and share them with you.

This first one has to do with something we take very seriously on the Google+ team: page render speed. We care a lot about performance at Google, and below you'll find 5 techniques we use to speed things up.

1. We <3 Closure

We like Closure. A lot. We use the Closure library, templates, and compiler to render every element on every page in Google+ -- including the JavaScript that powers these pages. But what really helps us go fast is the following:

- Closure templates can be used in both Java and JavaScript to render pages server-side and in the browser. This way, content always appears right away, and we can load JavaScript in the background ("decorating" the page, and hooking up event listeners to elements along the way)

- Closure lets us write JavaScript while still utilizing strict type and error checking, dead code elimination, cross module motion, and many other optimizations

(Visit http://code.google.com/closure/ for more information on Closure)
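
As a rough illustration (hypothetical template name, not our actual code): a compiled Closure Template is just a JavaScript function, so the same template our Java servers render can also run in the browser:

    // Hypothetical compiled Closure Template in use on the client; the same
    // .soy source also compiles to Java for server-side rendering.
    goog.require('views');  // namespace generated from the .soy file

    var html = views.post({author: 'Brandon', text: 'Hello!'});
    document.getElementById('stream').innerHTML += html;
    // Event listeners are then attached to the freshly rendered elements
    // ("decorating" the page).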

2. The right JavaScript, at the right time

To help manage the Javascript that powers Google+, we split our code into modules that can be loaded asynchronously from each other. You will only download the minimum amount of Javascript necessary. This is powered by 2 concepts:

- The client contains code to map the history token (the text in the URL that represents what page you are currently on) to the correct Javascript module.

- If the Javascript isn’t loaded yet, any action in the page will block until the necessary Javascript is loaded.

This framework is also the basis of our support for making client-side navigation in Google+ work without reloading the page.
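
A hedged sketch of the idea (hypothetical names, not our real implementation): map the history token to a module, load the module's script asynchronously, and queue the action until it arrives:

    // Hypothetical token -> module table.
    var MODULE_FOR_TOKEN = { stream: 'stream_mod', photos: 'photos_mod' };
    var loaded = {};   // modules that have arrived
    var queued = [];   // actions blocked until their module loads

    function loadScriptAsync(src, onload) {
      var s = document.createElement('script');
      s.src = src;
      s.async = true;
      s.onload = onload;
      document.body.appendChild(s);
    }

    function navigateTo(token) {
      var mod = MODULE_FOR_TOKEN[token];
      if (loaded[mod]) { render(token); return; }  // 'render' is hypothetical
      queued.push(token);
      loadScriptAsync('/js/' + mod + '.js', function () {
        loaded[mod] = true;
        while (queued.length) render(queued.shift());
      });
    }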

3. Navigating between pages, without refreshing the page

Once the Javascript is loaded, we render all content without going back to the server since it will be much faster. We install a global event listener that listens for clicks on anchor tags. If possible, we convert that click to an in page navigate. However, if we can’t client side render the page, or if you use a middle-click or control-click on the link, we let the browser open the link as normal.

The anchor tags on the page always point to the canonical version of the URL (i.e. the URL you'd get via HTML5 history), so you can easily copy/share links from the page.
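
In the spirit of the above, a sketch (the helper names are hypothetical):

    // Global listener: turn eligible left-clicks on anchors into in-page
    // navigations; anything else falls through to the browser.
    document.addEventListener('click', function (e) {
      var a = e.target.closest && e.target.closest('a');
      if (!a) return;
      if (e.button !== 0 || e.ctrlKey || e.metaKey || e.shiftKey) return; // modified click
      if (a.origin !== location.origin) return;        // external link
      if (!canRenderOnClient(a.pathname)) return;      // hypothetical check
      e.preventDefault();
      history.pushState(null, '', a.href);  // href is already the canonical URL
      renderOnClient(a.pathname);           // hypothetical client-side render
    });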

4. Flushing chunks (of HTML)

We also flush HTML chunks to the client to make the page become visible as soon as the data comes back, without waiting for the whole page to load.

We do this by
- Kicking off all data fetches asynchronously at the start of the request
- Only blocking on the data when we need to render that part of the page

This system also allows us to start loading the CSS, Javascript, images, and other resources as early as possible, making the site load faster and feel more responsive.
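
Roughly, in a Node/Express-flavored sketch (our stack is Java, and the helper names here are hypothetical):

    const express = require('express');
    const app = express();

    app.get('/', async (req, res) => {
      // Kick off all data fetches asynchronously at the start of the request.
      const streamP = fetchStream(req);    // hypothetical fetch helpers
      const sidebarP = fetchSidebar(req);

      res.write(renderHead());    // flush <head> so CSS/JS start loading now
      res.write(renderShell());   // flush the static page chrome

      // Block on each fetch only when rendering that part of the page.
      res.write(renderStream(await streamP));
      res.write(renderSidebar(await sidebarP));
      res.end(renderFooter());
    });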

5. iFrame is our friend

To load our Javascript in parallel and avoid browser blocking behavior (http://goo.gl/lzGq8), we load our Javascript in an iframe at the top of the body tag. Loading it in an iframe adds some complexity to our code (nicely handled through Closure), but the speed boost is worth it.
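
A minimal sketch of the iframe trick (the script URL is a placeholder):

    // Create a hidden iframe at the top of <body> and load the app's JS
    // inside it, so the download doesn't block the parent page.
    var iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    document.body.insertBefore(iframe, document.body.firstChild);

    var doc = iframe.contentWindow.document;
    doc.open();
    // The script runs in the iframe and reaches the page via window.parent.
    doc.write('<body><script src="/js/app.js"><\/script></body>');
    doc.close();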

On a side note, you may have noticed that we load our CSS via an XHR instead of a style tag - that is not for optimization reasons, that's because we hit Internet Explorer's max CSS selector limit per stylesheet!

Final Comments

This is just a small glimpse of how stuff works under the covers for Google+ and we hope to write more posts like this in the future. Leave your ideas for us in the comments!

Brandon Titus

Shared publicly  - 
 
Chris Wetherell's thoughts on the whole Google Reader thing. Brilliant guy.
Chris Wetherell originally shared:
 
There’s been some interesting critical discussions of some design and product changes within Google Reader recently and I’ve kind of stayed out of it since I’m heads down on making big changes elsewhere. But I grabbed a few minutes, and I’d like to share a few notes I’ve written about it…

• If Reader continues being understaffed, absorbed, or is eliminated then the internal culture at Google will adjust to a newly perceived lack of opportunity for building things that are treasured. No one knows what effect this will actually have, though. The response could be tiny.

• Technology will route around the diminishment or disappearance of Reader. Even if this means something other than feeds are being used.

It’s a tough call. Google’s leaders may be right to weaken or abandon Reader. I feel more people should acknowledge this.

• However, saying “no” to projects doesn’t make you Steve Jobs if you say no to inspiring things. It’s the discernment that’s meaningful, not the refusal. Anyone can point their thumb to the ground.

• The shareable social object of subscribe-able items makes Reader’s network unique and the answer to why change is painful for many of its users is because no obvious alternative network exists with exactly that object. The social object of Google+ is…nearly anything and its diffuse model is harder to evaluate or appreciate. The value of a social network seems to map proportionally to the perceived value of its main object. (Examples: sharing best-of-web links on Metafilter or sharing hi-res photos on Flickr or sharing video art on Vimeo or sharing statuses on Twitter/Facebook or sharing questions on Quora.) If you want a community with stronger ties, provide more definition to your social object.

Reader exhibits the best unpaid representation I’ve yet seen of a consumer’s relationship to a content producer. You pay for HBO? That’s a strong signal. Consuming free stuff? Reader’s model was a dream. Even better than Netflix. You get affinity (which has clear monetary value) for free, and a tracked pattern of behavior for the act of iterating over differently sourced items – and a mechanism for distributing that quickly to an ostensible audience which didn’t include social guilt or gameification – along with an extensible, scalable platform available via commonly used web technologies – all of which would be an amazing opportunity for the right product visionary.

• Reader is (was?) for information junkies; not just tech nerds. This market totally exists and is weirdly under-served (and is possibly affluent).

• The language for decisions based on deferred value is all about sight, which I find beautiful (and apt for these discussions). People are asking if Google is seeing the forest for the trees. I’d offer that Google is viewing this particular act-of-seeing as a distraction.

• Reader will be an interesting footnote in tech history. That’s neat and that’s enough for me; wasn’t it fun that we were able to test if it worked?

• Google is choosing to define itself by making excellent products in obvious markets that serve hundreds of millions of people. This is good. A great company with evident self-consciousness that even attempts to consider ethical consequences at that scale is awesome. But this is a perfect way to avoid the risk of creating entirely new markets which often go through a painful not-yet-serving-hundreds-of-millions period and which require a dream, some dreamers, and not-at-all-measurable luck. Seemingly Google+ could be viewed as starting a new market, but I'd argue that it mainly stands a chance of improving on the value unlocked by other social networks, which is healthy and a good thing, but which doesn't require an investigation into why it's valuable. That's self-evident in a Facebook world. Things like Reader still need a business wizard to help make sense of the value there.

• If Google is planning on deprecating Reader then its leaders are deliberately choosing to not defend decisions that fans or users will find indefensible. This would say a lot about how they would communicate to the marketplace for social apps and about how they'd be leading their workforce. If this is actually occurring and you’re internal to Google – it's ok, I can imagine you’d be feeling that these decisions are being made obtusely “just because” or since “we need to limit our scope to whatever we can cognitively or technically handle” or such but I’d offer that maybe it's needed for driving focus for a large team? I suppose sacrificing pet projects, public responsibility, and transparency could be worth it if the end is a remarkable dream fulfilled. But what if the thing you’re driving everyone toward isn’t the iPod but is instead the Zune? So just make sure it's not that.

• The following sentence is unfair but it's a kind of myth and fog that has been drifting into view about 'em: Google seems to be choosing efforts like SketchUp over Reader. I doubt there's a common calculus, but it’s now harder for Google's users to really know how important it is that many millions of people are using a product every day when Google is deciding its evolution and fate.
3 comments
 
I think +Chris Wage is correct in saying that a significant portion of people would pay (of course, significant is relative).

Google Reader combined with Google Books (with pre-fetch Instapapering) for 'bundled' reading would be a pretty nice addition that would further justify paying for that service.

Brandon Titus

Shared publicly  - 
 
Steve's first girlfriend's memories of a carefree Beat poet.

Unfortunately the article doesn't seem to be online :(

Brandon Titus

Shared publicly  - 
 
Jean-Baptiste Quéru originally shared:
 
Dizzying but invisible depth

You just went to the Google home page.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how browsers work, it's not quite that simple. You've just put into play HTTP, HTML, CSS, ECMAscript, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.

Let's simplify.

You just connected your computer to www.google.com.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how networks work, it's not quite that simple. You've just put into play DNS, TCP, UDP, IP, Wifi, Ethernet, DOCSIS, OC, SONET, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.

Let's simplify.

You just typed www.google.com in the location bar of your browser.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how operating systems work, it's not quite that simple. You've just put into play a kernel, a USB host stack, an input dispatcher, an event handler, a font hinter, a sub-pixel rasterizer, a windowing system, a graphics driver, and more, all of those written in high-level languages that get processed by compilers, linkers, optimizers, interpreters, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.

Let's simplify.

You just pressed a key on your keyboard.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how input peripherals work, it's not quite that simple. You've just put into play a power regulator, a debouncer, an input multiplexer, a USB device stack, a USB hub stack, all of that implemented in a single chip. That chip is built around thinly sliced wafers of highly purified single-crystal silicon ingot, doped with minute quantities of other atoms that are blasted into the crystal structure, interconnected with multiple layers of aluminum or copper, that are deposited according to patterns of high-energy ultraviolet light that are focused to a precision of a fraction of a micron, connected to the outside world via thin gold wires, all inside a packaging made of a dimensionally and thermally stable resin. The doping patterns and the interconnects implement transistors, which are grouped together to create logic gates. In some parts of the chip, logic gates are combined to create arithmetic and bitwise functions, which are combined to create an ALU. In another part of the chip, logic gates are combined into bistable loops, which are lined up into rows, which are combined with selectors to create a register bank. In another part of the chip, logic gates are combined into bus controllers and instruction decoders and microcode to create an execution scheduler. In another part of the chip, they're combined into address and data multiplexers and timing circuitry to create a memory controller. There's even more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.

Can we simplify further?

In fact, very scarily, no, we can't. We can barely comprehend the complexity of a single chip in a computer keyboard, and yet there's no simpler level. The next step takes us to the software that is used to design the chip's logic, and that software itself has a level of complexity that requires going back to the top of the loop.

Today's computers are so complex that they can only be designed and manufactured with slightly less complex computers. In turn the computers used for the design and manufacture are so complex that they themselves can only be designed and manufactured with slightly less complex computers. You'd have to go through many such loops to get back to a level that could possibly be re-built from scratch.

Once you start to understand how our modern devices work and how they're created, it's impossible to not be dizzy about the depth of everything that's involved, and to not be in awe about the fact that they work at all, when Murphy's law says that they simply shouldn't possibly work.

For non-technologists, this is all a black box. That is a great success of technology: all those layers of complexity are entirely hidden and people can use them without even knowing that they exist at all. That is the reason why many people can find computers so frustrating to use: there are so many things that can possibly go wrong that some of them inevitably will, but the complexity goes so deep that it's impossible for most users to be able to do anything about any error.

That is also why it's so hard for technologists and non-technologists to communicate together: technologists know too much about too many layers and non-technologists know too little about too few layers to be able to establish effective direct communication. The gap is so large that it's not even possible any more to have a single person be an intermediate between those two groups, and that's why e.g. we end up with those convoluted technical support call centers and their multiple tiers. Without such deep support structures, you end up with the frustrating situation that we see when end users have access to a bug database that is directly used by engineers: neither the end users nor the engineers get the information that they need to accomplish their goals.

That is why the mainstream press and the general population has talked so much about Steve Jobs' death and comparatively so little about Dennis Ritchie's: Steve's influence was at a layer that most people could see, while Dennis' was much deeper. On the one hand, I can imagine where the computing world would be without the work that Jobs did and the people he inspired: probably a bit less shiny, a bit more beige, a bit more square. Deep inside, though, our devices would still work the same way and do the same things. On the other hand, I literally can't imagine where the computing world would be without the work that Ritchie did and the people he inspired. By the mid 80s, Ritchie's influence had taken over, and even back then very little remained of the pre-Ritchie world.

Finally, last but not least, that is why our patent system is broken: technology has done such an amazing job at hiding its complexity that the people regulating and running the patent system are barely even aware of the complexity of what they're regulating and running. That's the ultimate bikeshedding: just like the proverbial discussions in the town hall about a nuclear power plant end up being about the paint color for the plant's bike shed, the patent discussions about modern computing systems end up being about screen sizes and icon ordering, because in both cases those are the only aspect that the people involved in the discussion are capable of discussing, even though they are irrelevant to the actual function of the overall system being discussed.

Brandon Titus

Shared publicly  - 
 
Irene Koehler originally shared:
 
Representative Gabrielle Giffords Announces She Will Step Down from Congress

What an inspiration she has been to so many. The courage and dedication to service she has demonstrated is remarkable.

Brandon Titus

Shared publicly  - 
 
Somehow I already want a giant poster of this: http://xkcd.com/980/

Brandon Titus

Shared publicly  - 
 
I'm extremely excited to use Nimbus on future projects. This thing is awesome! It's starting to come out of Three20's shadow in a big way.

Brandon Titus

Shared publicly  - 
 
Funny...you'd think they would have learned by now.
2 comments
 
Apple stock after the earnings call.
People
In his circles
194 people
Have him in circles
3,152 people
Work
Occupation
Student
Employment
  • Mercury Intermedia
    iOS Development Intern, present
  • University of Denver Technology Services
    Phone Support Consultant
Places
Currently
Denver, CO
Previously
Nashville, TN
Story
Introduction
Computers
I go to school in Denver. I enjoy writing some software in Java, a little C++ and Objective C/Cocoa. I also dabble in the web with CSS, Python, PHP, and some Ruby. I work at a computer help desk and like to play around with managing servers (Ubuntu, Windows Server 2003, OS X Server). I use Splunk, dd-wrt, and some other cool tools on a pretty regular basis.

Music
I've played drums for over 10 years in jazz bands, rock bands (garage and school), pit orchestras, and others. I love going to concerts and enjoy listening to all kinds of music (except most country and rap). My listening habits are easily viewable on my Last.fm profile (see right >>).
Education
  • University of Denver
    Information Technology & E-Commerce, 2008 - present
  • Montgomery Bell Academy
    2002 - 2008
Basic Information
Gender
Male