Isaac Wyatt
Isaac's posts

Post has attachment
New blog post about how I use my employer's product to get better dashboarding, reporting, and analytics out of our marketing automation platform.
http://blog.newrelic.com/2014/06/25/marketo/

Post has shared content
Very true. Same as with the word "robust", which usually has no actual dimensionality but is presented to the reader as if it does.
Have any of you noticed the fake dimension phenomenon in Web content?

I define it as any visual where one, two or all three dimensions are used to suggest that something is substantial when it is not. I don't mean things like Second Life or actual 3D. I mean something else.

It began with the "CD fan" phenomenon in infomercials. Maybe I am imagining things, but it grew particularly frantic after MP3s and digitization. People hawking audio content would fan out a 12-disc set, even while offering MP3 downloads.

When eBooks (especially the DIY PDF kind) became popular, and the "free ebook" hook seemed increasingly like a joke given information overload, people began creating photoshopped 3D-looking graphics for things that weren't ever published as paperbacks.

Then there is the always-lurking danger of 3D buttons and stuff in Web UIs (a little bit as a flourish is fine, but a site with heavy fake-3D is guaranteed to turn me off).

Then there is the type of blogger, graduating from PowerPoint probably, who feels the need to make conceptual-platonic graphics 3D (like pie charts in an isometric view with a non-zero thickness, or pyramids drawn to suggest 4-sidedness).

I am not interested in the visualization/aesthetics of this so much as the clear psychological anxieties at work.

My theory: A lot of people who create Web content are basically subconsciously troubled by a secret sense that they are producing fluff (true in 90% of the cases, lack of self-confidence in the other 10%). So they attempt to make it seem more substantial by making it visually more solid-looking.

This, even though it is obvious (think xkcd or gapingvoid) that the natural way to do most such visual elements on the Web is a spartan, vigorous comic-book style. Very low-fi.

Post has attachment
My take on Geeks, Nerds, Dorks, and Dweebs, and those who co-opt the term.

Post has shared content
Good tips on how to perform the anti-PIPA/SOPA blackout on your website on Jan 18th, 2012.
Website outages and blackouts the right way

tl;dr: Use a 503 HTTP status code but read on for important details.

Sometimes webmasters want to take their site offline for a day or so, perhaps for server maintenance or as political protest. We’re currently seeing some recommendations about how to do this that have a high chance of hurting how Google sees these websites, so we wanted to give you a quick how-to guide based on our current recommendations.

The most common scenario we’re seeing webmasters talk about implementing is to replace the contents on all or some of their pages with an error message (“site offline”) or a protest message. The following applies to this scenario (replacing the contents of your pages), so please ask (details below) if you’re thinking of doing something else.
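
Purely as an illustration of that scenario (a minimal sketch, not part of the original post; a real site would normally do this in its web server or CDN configuration rather than in application code), here is what serving the same protest page for every URL while signalling a 503 could look like with just the Python standard library:

from http.server import BaseHTTPRequestHandler, HTTPServer

# The temporary page shown for every URL during the blackout (hypothetical content).
PROTEST_HTML = b"<html><body><h1>Site offline in protest of SOPA/PIPA</h1></body></html>"

class BlackoutHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 503 signals a temporary outage, so crawlers don't treat this
        # page as the site's real content.
        self.send_response(503)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(PROTEST_HTML)))
        self.end_headers()
        self.wfile.write(PROTEST_HTML)

if __name__ == "__main__":
    HTTPServer(("", 8080), BlackoutHandler).serve_forever()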

1. The most important point: Webmasters should return a 503 HTTP status code for all the URLs participating in the blackout (parts of a site or the whole site); see the sketch after this list. This helps in two ways:

a. It tells us it's not the "real" content on the site and won't be indexed.

b. Because of (a), even if we see the same content (e.g. the “site offline” message) on all the URLs, it won't cause duplicate content issues.

2. Googlebot's crawling rate will drop when it sees a spike in 503 headers. This is unavoidable but as long as the blackout is only a transient event, it shouldn't cause any long-term problems and the crawl rate will recover fairly quickly to the pre-blackout rate. How fast depends on the site and it should be on the order of a few days.

3. Two important notes about robots.txt:

a. As Googlebot is currently configured, it will halt all crawling of the site if the site’s robots.txt file returns a 503 status code for robots.txt. This crawling block will continue until Googlebot sees an acceptable status code for robots.txt fetches (currently 200 or 404). This is a built-in safety mechanism so that Googlebot doesn't end up crawling content it's usually blocked from reaching. So if you're blacking out only a portion of the site, be sure the robots.txt file's status code is not changed to a 503.

b. Some webmasters may be tempted to change the robots.txt file to have a “Disallow: /” in an attempt to block crawling during the blackout. Don’t block Googlebot’s crawling like this, as it has a high chance of causing crawling issues for much longer than the few days expected for the crawl rate recovery.

4. Webmasters will see these errors in Webmaster Tools: it will report that we saw the blackout. Be sure to monitor the Crawl Errors section particularly closely for a couple of weeks after the blackout to ensure there aren't any unexpected lingering issues.

5. General advice: Keep it simple and don't change too many things, especially changes that take different times to take effect. Don't change the DNS settings. As mentioned above, don't change the robots.txt file contents. Also, don't alter the crawl rate setting in WMT. Keeping as many settings constant as possible before, during, and after the blackout will minimize the chances of something odd happening.
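
Tying points 1 and 3 together, a minimal sketch (again an illustration under the same assumptions as above, not the post's own code): every URL returns a 503 except robots.txt, which keeps its normal 200 response so Googlebot doesn't halt crawling of the whole site. The Retry-After header is a standard HTTP hint that the outage is temporary; the post itself doesn't ask for it.

from http.server import BaseHTTPRequestHandler, HTTPServer

PROTEST_HTML = b"<html><body><h1>Site offline in protest of SOPA/PIPA</h1></body></html>"
# The site's normal, unchanged robots.txt contents (hypothetical example).
ROBOTS_TXT = b"User-agent: *\nDisallow:\n"

class BlackoutHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            # Point 3a: robots.txt must NOT return a 503, otherwise Googlebot
            # halts all crawling until it sees a 200 or 404 again.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(ROBOTS_TXT)))
            self.end_headers()
            self.wfile.write(ROBOTS_TXT)
        else:
            # Point 1: every URL taking part in the blackout returns a 503.
            self.send_response(503)
            # Standard HTTP hint that the outage lasts about a day; optional.
            self.send_header("Retry-After", "86400")
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(PROTEST_HTML)))
            self.end_headers()
            self.wfile.write(PROTEST_HTML)

if __name__ == "__main__":
    HTTPServer(("", 8080), BlackoutHandler).serve_forever()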

Questions? Comment below or ask in our forums: http://www.google.com/support/forum/p/Webmasters?hl=en

Post has attachment
My version of the Geek, Nerd, Dork, Dweeb ecosystem:
Photo

"As a rule of thumb, useless arguments that [win disputes] without discovering truths are most often found in the application of unexamined "values." Such values are rarely about profound moral positions. They are more often a crutch for lazy thinkers."