Profile

Jaimie Sirovich
Works at SEO Egghead, Inc.
Attended Stevens Institute of Technology
Lives in Pelham, NY
219 followers|14,150 views

Stream

Jaimie Sirovich

General Tech Talk  - 
 
Thoughts?  Adam Audette does some awesome research on Google and all but confirms that Google is indeed crawling the web with a modified Chrome. Some "bad habits" can now be dispensed with, and "good habits" like physical-position-of-content-must-be-early (usually implemented by putting the header after the content in the source) should probably also be tossed.
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
Think Google can't handle JavaScript? Think again. Contributor Adam Audette shares the results of a series of tests conducted by his colleagues at Merkle | RKG to examine how different JavaScript functions would be crawled and indexed by Google.
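For anyone who hasn't clicked through: the tests boil down to whether content and links that exist only after JavaScript executes end up crawled and indexed. A trivial sketch of the kind of page they exercise (the element id and target URL here are made up, not taken from the article):

<div id="late-content"></div>
<script>
  // Neither the paragraph nor the link exists in the raw HTML response;
  // a crawler only sees them if it actually renders the page.
  var el = document.getElementById('late-content');
  el.innerHTML = '<p>Inserted client-side after load.</p>' +
                 '<a href="/hypothetical-deep-page">a link only a rendering crawler can discover</a>';
</script>

Per the writeup, Google handled this sort of thing just fine.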
2 comments
 
+Chris Koszo Yeah I remember that post (and came to the same conclusion), but it's good to see a decent test.  I don't believe Google ever wanted to be in the browser market, but it happened out of necessity.  There are things it does that make me think it's designed for scale, like not JIT'ing JS.

Jaimie Sirovich

General Tech Talk  - 
 
With HTTP/2.0 standardized without the requirement of SSL, does anyone (from Google or otherwise) know if Chrome is still considering the explicit warnings for non-SSL enabled sites?  See   http://www.bbc.com/news/technology-30505970
7 comments
 
OK, now I'm with you. I was reading about this indirectly recently in the context of tuning a Varnish/nginx configuration.

Jaimie Sirovich

commented on a post on Blogger.
Shared publicly  - 
 
Not fake! The Hoff even photobombs his own videos.  SEE => David Hasselhoff - Hooked on a Feeling

Jaimie Sirovich

General Tech Talk  - 
 
Faceted search recommendations from Google and a toot from my own horn. What I've been recommending for years is spot on. Thank God I wasn't wrong. I'd be really embarrassed.

http://googlewebmastercentral.blogspot.com/2014/02/faceted-navigation-best-and-5-of-worst.html

Jaimie Sirovich

General Tech Talk  - 
 
If I have no way to 301 at all, what's better? There are various reports of meta refresh working, but it's very iffy.

1. Screw it.  404.
2. <meta HTTP-EQUIV="refresh" CONTENT="0;URL=SOMEURL">
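For comparison, the JavaScript variant that keeps coming up in these discussions would be something like the following (SOMEURL is still a placeholder; a sketch, not an endorsement of either approach):

<script>
  // JS alternative to the meta refresh in option 2;
  // replace() keeps the dead URL out of the browser history.
  window.location.replace("SOMEURL");
</script>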
19 comments
 
+Jaimie Sirovich : the JavaScript redirect is and will always be ambiguous, because by definition it has to be interpreted and interpretation can change according to many variables, as you hinted.

On the contrary, a meta tag refresh requires little interpretation, because it's not a piece of programming language code but a piece of markup code which contains a command that browsers just execute.

I don't know why Google "endorses" the JS version of the redirect. Maybe they have some statistics that show that the meta tag refresh is less and less supported by browsers. Maybe they want to endorse something that will work on any JS-capable browser rather than a technique that is formally discouraged by the W3C. Or maybe the person who wrote the JS "endorsement" simply forgot to mention the meta tag alternative.  :-)

I wouldn't read anything negative into the fact that only the JS alternative has been cited on some (somewhat unrelated) FAQ page.


In his popular interview by Eric Enge, Matt Cutts said:

"In general, Google does a relatively good job of following the 301s, and 302s, and even Meta Refreshes and JavaScript."

(source: http://www.stonetemple.com/articles/interview-matt-cutts.shtml)

Personally, I would go with the meta tag version, because it is also supported by search engines that don't have a JS-enabled spider.

Also, depending on how the crawler has been designed, some search engines could notice and "execute" a meta tag redirection in an earlier phase of crawling, while the interpretation of a JS-based redirection could be assigned to a parser and interpreter called in a subsequent phase of the process.

Finally, I wouldn't worry about the "spammy" reputation associated with the meta tag refresh. Google is interested in punishing bad intentions, not the techniques used by the webmaster.

Jaimie Sirovich

General Tech Talk  - 
 
OK guys, this is a nerdy post.  And I think everyone should understand WHY this post on SEL is completely wrong from a computer-science/tech perspective.  The reason is that recall != rank.

http://searchengineland.com/matt-cutts-more-proof-google-does-count-links-from-press-releases-158350

Feel free to tell me if any of my analysis is wrong, but this is my take.  Recall != rank.

IOW, just because a term matches the document in boolean mode doesn't mean it contributes to rank. If you point leasreepressmm at a document and they model that as a term in the document, but flag it as NO_RANK or BUPKIS_RANK, or put it in an UNTRUSTED_ANCHOR_TEXT document section, it would do what Matt Cutts says while still returning the document.

It would be similar, I guess, to taking a BM25F function and giving said document section a weight of 0 or 0.0000001.

It could be totally possible that all those leasreepressmm links contribute a weight of 0, but this is very difficult to know because it's just 1 factor baked into or summed with 200 other factors.  So they just throw untrusted anchor text like that into a section that contributes little to no weight in the final score.

At least then, if it's a very obscure term, it will recall the documents.  But that's not helping them rank.  The fact that Matt Cutts ranks for this term could be a function of several non-text factors.
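To make the BM25F analogy concrete, here's a toy sketch (the section names, weights, and scoring math are all made up for illustration; this is not how Google actually does it):

// Per-section weights in a BM25F-style scorer. A term sitting only in the
// untrusted_anchor_text section still matches (recall) but adds nothing
// to the score (rank).
var SECTION_WEIGHTS = {
  body: 1.0,
  title: 2.5,
  trusted_anchor_text: 1.5,
  untrusted_anchor_text: 0.0
};

function termFrequency(doc, section, term) {
  return (doc[section] || '').split(/\s+/).filter(function (w) {
    return w === term;
  }).length;
}

function matches(doc, term) {
  // Boolean recall: the document is returned if any section contains the
  // term, regardless of that section's weight.
  return Object.keys(SECTION_WEIGHTS).some(function (s) {
    return termFrequency(doc, s, term) > 0;
  });
}

function score(doc, term) {
  var k1 = 1.2; // stand-in for the real BM25 saturation math
  return Object.keys(SECTION_WEIGHTS).reduce(function (sum, s) {
    var tf = termFrequency(doc, s, term);
    return sum + SECTION_WEIGHTS[s] * (tf * (k1 + 1)) / (tf + k1);
  }, 0);
}

// A document whose only occurrence of the term is untrusted anchor text is
// recalled, but the text factor contributes 0:
//   matches({untrusted_anchor_text: 'leasreepressmm'}, 'leasreepressmm') === true
//   score({untrusted_anchor_text: 'leasreepressmm'}, 'leasreepressmm') === 0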
5 comments
 
At Google, the right hand doesn't know what the left hand is doing. Just take a look at Google.jp which has been punished for link selling on the homepage... Twice...

Jaimie Sirovich

General Tech Talk  - 
 
I see Hittail running Facebook campaigns with the bold "Linkbuilding is dead" claims.  Thing is, isn't Hittail dead, too, if Google hides the q parameter from us via "secure" Google search?

Forget analytics, what happens to Hittail when 100% of organic traffic is anonymized?  We feed it with paid PPC traffic to get the same suggestions?

http://www.hittail.com/landing.asp?utm_source=facebook&utm_medium=ppc&utm_campaign=us31
4 comments
 
Title translation:

"Pay attention to meeeeeee"

Jaimie Sirovich

General Tech Talk  - 
 
"#!" died, officially:
https://www.seroundtable.com/google-ajax-deprecating-20218.html
https://www.seroundtable.com/google-ajax-guidelines-dead-19986.html

See note specifically about escaped_fragment.  Google is now viewing the web with the same rendering engine you use if you use Chrome.  I don't think Google wanted to create a browser originally.

But until I see "#" without "!" in the URLs going to an AJAX-powered page, I can't get excited.  And Google needs a URL that represents state to actually go there, and no amount of "magic" will fix that because they still send the traffic to that URL. 

So what are they going to do?
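The obvious candidate is the History API: give each AJAX state a real URL rather than a fragment at all, and be able to serve that URL directly. A sketch under that assumption (loadResults() and the paths are hypothetical):

function showFacet(path) {
  loadResults(path);                            // fetch + render via AJAX
  history.pushState({ path: path }, '', path);  // real URL, e.g. /shoes?color=red
}

window.onpopstate = function (e) {
  if (e.state) loadResults(e.state.path);       // back/forward restores the state
};

function loadResults(path) {
  // Stubbed here: request the results for `path` and render them client-side.
}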

Jaimie Sirovich

Shared publicly  - 
 
It was only a matter of time before we got these emails, right? Are you switching your entire web site to SSL?  Just remember that you'll have to serve all your static content via SSL as well.  Also remember that while domain verified SSL is anonymous, both Organization and EV SSL do a great job identifying you for Google.

Jaimie Sirovich

General Tech Talk  - 
 
RE: FACETED SEARCH IA -- LONG TECHNICAL QUESTION:  Google finally published http://googlewebmastercentral.blogspot.com/2014/02/faceted-navigation-best-and-5-of-worst.html, the most comprehensive writeup they've ever done on the topic.  I'll add (toot toot) that many of the things I've suggested in the past are there.  OK, so there are basically 2 solutions she cites, both of which I've investigated:

#1. Nofollow the link and canonicalize it to a superset.
#2. Exclude it via robots.txt
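Concretely, the two options look roughly like this (the "color" facet parameter and the URLs are just stand-ins):

#1: on the category page:      <a href="/shoes?color=red" rel="nofollow">Red</a>
    and on /shoes?color=red:   <link rel="canonical" href="/shoes">

#2: in robots.txt (using Google's wildcard syntax):
    Disallow: /*?color=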

Aside from the fact that she mentions nofollow+canonicalize first, she provides no obvious preference.  She does seem to imply #1 is better because "you can consolidate indexing signals from the unnecessary URLs with a searcher-valuable URL by adding rel=canonical."  OK, but then she says it only "minimizes the crawler’s discovery of unnecessary URLs ... rel=nofollow doesn’t prevent the unnecessary URLs from being crawled (only a robots.txt disallow prevents crawling)."

I've never completely understood this overloaded use of nofollow to indicate distrust (and, historically, to sculpt).  The only thing I can think of is that in the PageRank model "nofollow" means don't flow probability/PR (the wandering visitor) here.  +Alistair Lattimore +Traian Neacsu +Eric Wu?  Allegedly, it doesn't work for sculpting anymore, though this would be a perfect example of when it would be appropriate to sculpt -- tens of thousands of URLs that are superfluous/duplicative but not quite duplicate content.  But they have to exist.  Sculpting doesn't work, though.  So that advantage is out.

On the flipside, she notes that nofollow+canonicalize "doesn’t prevent the unnecessary URLs from being crawled (only a robots.txt disallow prevents crawling)."  What does she mean here?  Does she mean that if it's externally linked it will still get crawled?  Doesn't that mean all it takes is some hacky mirror site that leaves off the nofollow to get you into trouble?

HOWEVER, it will preserve link equity if someone does link to some odd filtered URL, whereas if the URL were robots.txted out it theoretically would not.  I just don't think those odds are high.

For the record, I've been the proponent of #2 for a long time because I didn't trust nofollow+canonicalizing.

More questions than answers in some ways :)
2 comments
 
Deep indeed.  The problem is that faceted search is:

1. Often used on the aforementioned large sites.
2. Often generating enormous numbers of URLs, counterintuitively many unless you've really thought about it.

I worry about low-quality spam mirror sites linking lots of content — we've all seen this — and Google either ignoring the rel="nofollow" now that it's external on said mirror, or the "nofollow" somehow disappearing on said spam mirror site.  That aside, it is statistically unlikely that a large proportion of faceted URLs will get lots of links organically, but there might be "here-and-there" links.  It is also likely that there are millions of finite states that neither Google nor user nor God will ever query — let alone link!

But "nofollow" is a very overloaded device.  It can mean:

1. No-trust (the meandering PR visitor doesn't follow)
2. Actually don't follow, but Google uses words like "we don't generally follow" here
3. Link is paid (see #1)

In my understanding, and from empirical observation, there is no guarantee Google won't follow even a nofollowed link from an external domain.  I'm pretty sure they do, and why wouldn't they want to discover content via that link?

Also, what about Bing?  Will they treat nofollow the same way?  In that regard robots.txt exclusion might win.  It is more obvious what's happening.

If you're truly paranoid you could nofollow and URL-exclude =)
(note that URL-exclude means via robots.txt not via meta tag)

I've been URL-excluding for years.  I've also tested nofollow+canonicalizing.  The differences are syntactic sugar, basically.  The former technique was first practiced by me to my knowledge.  I don't use a prefix — I use a parameter.  Maile probably (but I won't put words in her mouth) used a prefix for clarity in explanation. You don't need a wildcard rule in robots.txt her way.
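In other words, roughly (the paths and parameter are made up):

Prefix style (no wildcard needed):   Disallow: /filter/
Parameter style (what I do):         Disallow: /*?color=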

Lastly, PageRank is just one of many factors now.  However, this is one of the few cases where I think sculpting should work and is sort of appropriate.  It's also an example of how PageRank may not model reality in cases like this as a result of the sculpting "fix."  There is no reason for the so-called Linkjuice (Note: I hate that term) — OK, link equity — to be lost because of faceted search.  It's an unfortunate side-effect.  I agree that nofollow+canonicalizing has the nice possibility of better accruing link equity from those actual "here-and-there" links, but my understanding is that a robots.txted-out inbound link will still credit to the site as a whole (you said this as well, I think).

Pros and cons.  Maile did list nofollow+canonicalizing first, but I won't read too much into that.  With the sculpting aspect of nofollow lost it becomes more of a tossup for me.

Jaimie Sirovich

Shared publicly  - 
 
In case any of you get tired of Windows :)
5 comments
 
If you really miss it, $149 will make you happy:
http://www.ecomstation.com/

Jaimie Sirovich

General Tech Talk  - 
 
Has anyone mastered the art of getting the right data into the right spots in Google Analytics for faceted/parametric search? It isn't terribly straightforward. The best documentation I've found thus far is @ https://developers.google.com/commerce-search/docs/reporting#sitesearch re: how they suggest setting up Google Commerce Search (which does faceted search) with GA.  The first section covers search; the latter covers filters.

But they do kludgey things like tossing all the filters in as 'category' (probably unsorted, too, so selection order will produce different rows) and using events for everything else.  Is this really all you can do?
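For what it's worth, the "events for everything else" part would presumably look something like this with Universal Analytics. This is a hedged sketch, not their documented code: the category/action names are made up, and the selections are sorted so selection order doesn't create different rows.

// Assumes analytics.js (the ga() command queue) is already on the page.
function trackFacetSelection(selections) {
  // e.g. selections = ['material=leather', 'color=red']
  var label = selections.slice().sort().join('|'); // 'color=red|material=leather'
  ga('send', 'event', 'faceted-search', 'filter', label);
}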

Has anyone tried to get a handle on faceted search pages with GA?  Anyone have any tips?
People
Have him in circles
219 people
Huỳnh Minh Hoàng
Nhu Van
Suresh Sippy
Nikhil Raj
Jeremy Hong
Miq Moham
Jason Frovich
Phil Buckley
Hung Neu
Work
Occupation
Programmer; SEM
Employment
  • SEO Egghead, Inc.
    Programmer; SEM, present
  • RustyBrick, Inc.
Places
Currently
Pelham, NY
Previously
Story
Introduction
Jaimie Sirovich is a search engine marketing consultant. Officially Jaimie is a computer programmer, but he claims to enjoy marketing much more. He graduated from Stevens Institute of Technology with a degree in Computer Science. At present, Jaimie consults for several organizations regarding search marketing, supervises core web development projects, and keeps SEO Egghead afloat.
Education
  • Stevens Institute of Technology
Basic Information
Gender
Male
Other names
SEO Egghead