Paul Shapiro
Digital marketer. Programmer. Professional SEO. /r/BigSEO Mod. Horror movie fan.

Paul's posts

Post has attachment
I wrote a post detailing some examples of how to use amp-form to implement forms on an AMP page.

Post has attachment
Google’s Structured Data Testing Tool is a pretty awesome resource for auditing any sort of markup or other structured data formats, but if you need to audit multiple URLs, you’re out of luck...until now.
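For illustration, a bulk audit along those lines might start with a small JSON-LD extractor like this sketch (it only handles `<script type="application/ld+json">` blocks; fetching the list of URLs is left to the reader):

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects parsed <script type="application/ld+json"> blocks from a page."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
            self._buf = []

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.blocks.append(json.loads("".join(self._buf)))

def extract_structured_data(html):
    """Return every JSON-LD object declared on a page."""
    parser = JsonLdExtractor()
    parser.feed(html)
    return parser.blocks
```

Pair this with urllib.request over a list of URLs and you have the skeleton of a multi-URL structured data audit.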

Post has attachment
During a presentation I gave at Distilled’s SearchLove Boston conference in early May, I advocated that people use the slope formula and Google Trends data to determine whether interest in keywords has grown over time or whether it is slipping away into searcher…
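The slope idea can be illustrated with a least-squares fit over a series of interest values - the numbers below are made up, standing in for a Google Trends export:

```python
def trend_slope(values):
    """Least-squares slope of evenly spaced interest values.

    Positive means growing interest; negative means it's slipping away.
    """
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# A keyword whose monthly interest is climbing...
print(trend_slope([10, 20, 30, 40, 50]))  # -> 10.0
# ...versus one that's fading.
print(trend_slope([50, 40, 30, 20, 10]))  # -> -10.0
```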

Post has shared content
Hacking the Knowledge Graph

Google recently facilitated sharing of Knowledge Graph Panels, and some other features (thanks +Jennifer Slegg).

This comes not far on the heels of Google's release of the Knowledge Graph Search API in December of 2015.

In playing around with the URL now available from a Knowledge Panel, it became immediately apparent (and wasn't surprising) that the Knowledge Graph identifier there was the same one you can retrieve through a search using the Knowledge Graph Search API.
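As a sketch of that retrieval, a Knowledge Graph Search API request can be assembled with the standard library - the API key is a placeholder, and the response parsing assumes the documented itemListElement shape:

```python
import json
import urllib.parse
import urllib.request

def kg_search_url(query, api_key, limit=1):
    """Build a Knowledge Graph Search API request URL."""
    params = urllib.parse.urlencode(
        {"query": query, "key": api_key, "limit": limit}
    )
    return "https://kgsearch.googleapis.com/v1/entities:search?" + params

def first_kg_id(response_json):
    """Pull the "kg:/m/..." identifier out of the top search result."""
    return response_json["itemListElement"][0]["result"]["@id"]

# Live usage (requires a real API key):
# with urllib.request.urlopen(kg_search_url("donald trump", "YOUR_API_KEY")) as r:
#     print(first_kg_id(json.load(r)))
```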

For example, the ID returned from Knowledge Graph for everyone's favorite orange-haired politician is:

Which, transformed into an HTTP address using the "kg" prefix provided in the API search results, is:

Which resolves (via a 301) to:

Compare this to the Knowledge Panel share URL when one searches for "donald trump":

Resolves (via a 301) to:

The difference, as per the clue offered by the parameter "kponly" in the fully-resolved URL from the Knowledge Graph Search API results, is that it returns only the Knowledge Graph Panel, without any search context (the query term displayed for all Panels retrieved via the prefix URL is actually "knowledge graph search api") - again:

However, deconstructing the URL to which a shortened Knowledge Panel share resolves exposes some parameters that can be used to present a Knowledge Panel in a more meaningful context.

By modifying the "hl" parameter you can, of course, change the language of the content displayed - including the content of the Knowledge Panel:

It turns out this also works when using the "kg" URL provided in Knowledge Graph API search results:

But by hacking the structure of the share URL you can provide users with search results and an accompanying Knowledge Panel where the query is only related to the entity in the search result:

Or, as per the call-out image (which is a direct screenshot, not a Photoshop treatment), not related to the Knowledge Panel at all (in this case the query might be related - I'll let the reader decide):

I don't know what the other parameters do (though "source" and "entrypoint" seem straightforward enough).  "kgmid" and "q" in themselves seem sufficient to generate a search result accompanied by a Knowledge Panel:
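A sketch of that assembly, assuming only the "kgmid", "q", and "hl" parameters discussed above (the base search URL and the default language are illustrative):

```python
import urllib.parse

def knowledge_panel_url(kgmid, query, hl="en"):
    """Build a Google search URL pinning a Knowledge Panel to a query.

    kgmid is the "/m/..." style identifier (without the "kg:" prefix),
    query sets the search context, and hl sets the interface language.
    """
    params = urllib.parse.urlencode({"q": query, "kgmid": kgmid, "hl": hl})
    return "https://www.google.com/search?" + params
```

Swapping the hl value then yields the same Panel in another language, per the behavior described above.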

None of this, at first blush, has earth-shattering practical implications, although I now know how to generate a Knowledge Panel in a language other than English, and - should it ever become useful - I know now how to send a user to a Knowledge Panel with the search query context of my choosing.

Oh, and it's worth noting that the share URL now provides a method of retrieving a Knowledge Graph ID without using the search API: if you're able to generate a Knowledge Panel via search, you can simply expose the ID by copying and pasting the share link.
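Pulling the ID back out of such a resolved link is then a one-liner with the standard URL parser (the URL in the test usage is illustrative, not a real share link):

```python
from urllib.parse import urlparse, parse_qs

def kgmid_from_share_url(url):
    """Return the kgmid parameter from a resolved Knowledge Panel share URL,
    or None if the URL carries no kgmid."""
    qs = parse_qs(urlparse(url).query)
    return qs.get("kgmid", [None])[0]
```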

#knowledgegraph #google #identifiers

Post has attachment
My post on keyword research metrics just went live on +Search Engine Land. Give it a read.

Post has attachment
Calling all #SEO folk,

Can you spare a moment to fill out this 30-second survey about how long it takes you to do keyword research?



Post has shared content
An update (March 2016) on the current state & recommendations for JavaScript sites / Progressive Web Apps [1] in Google Search. We occasionally see questions about what JS-based sites can do and still be visible in search, so here's a brief summary for today's state:

# Don't cloak to Googlebot. Use "feature detection" & "progressive enhancement" [2] techniques to make your content available to all users. Avoid redirecting to an "unsupported browser" page. Consider using a polyfill or other safe fallback where needed. The features Googlebot currently doesn't support include Service Workers, the Fetch API, Promises, and requestAnimationFrame.

# Use rel=canonical [3] when serving content from multiple URLs is required.

# Avoid the AJAX-Crawling scheme on new sites. Consider migrating old sites that use this scheme soon. Remember to remove "meta fragment" tags when migrating. Don't use a "meta fragment" tag if the "escaped fragment" URL doesn't serve fully rendered content. [4]

# Avoid using "#" in URLs (outside of "#!"). Googlebot rarely indexes URLs with "#" in them. Use "normal" URLs with path/filename/query-parameters instead, and consider using the History API for navigation.

# Use Search Console's Fetch and Render tool [5] to test how Googlebot sees your pages. Note that this tool doesn't support "#!" or "#" URLs.

# Ensure that all required resources (including JavaScript files / frameworks, server responses, 3rd-party APIs, etc) aren't blocked by robots.txt. The Fetch and Render tool will list blocked resources discovered. If resources are uncontrollably blocked by robots.txt (e.g., 3rd-party APIs) or otherwise temporarily unavailable, ensure that your client-side code fails gracefully.
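One way to sanity-check that is with Python's standard robots.txt parser - the rules are fed inline here rather than fetched, and the resource URLs are hypothetical:

```python
from urllib.robotparser import RobotFileParser

def blocked_resources(robots_txt, resource_urls, agent="Googlebot"):
    """Return the subset of resource URLs that a robots.txt would block."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [u for u in resource_urls if not rp.can_fetch(agent, u)]

rules = """\
User-agent: *
Disallow: /assets/js/
"""
print(blocked_resources(rules, [
    "https://example.com/assets/js/app.js",
    "https://example.com/styles/main.css",
]))  # -> ['https://example.com/assets/js/app.js']
```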

# Limit the number of embedded resources, in particular the number of JavaScript files and server responses required to render your page. A high number of required URLs can result in timeouts & rendering without these resources being available (e.g., some JavaScript files might not be loaded). Use reasonable HTTP caching directives.

# Google supports the use of JavaScript to provide titles, description & robots meta tags, structured data, and other meta-data. When using AMP, the AMP HTML page must be static as required by the spec, but the associated web page can be built using JS/PWA techniques. Remember to use a sitemap file with correct "lastmod" dates for signaling changes on your website.
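The sitemap "lastmod" signaling mentioned above is straightforward to generate - a minimal sketch with a hypothetical URL list, using the standard sitemap namespace:

```python
import xml.etree.ElementTree as ET

def build_sitemap(entries):
    """Build sitemap XML from (url, lastmod) pairs; lastmod as YYYY-MM-DD."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([("https://example.com/", "2016-03-01")])
```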

# Finally, keep in mind that other search engines and web services accessing your content might not support JavaScript at all, or might support a different subset.

Looking at this list, none of these recommendations are completely new & limited to today -- and they'll continue to be valid for the foreseeable future. Working with modern JavaScript frameworks for search can be a bit intimidating at first, but they open up some really neat possibilities to make fast & awesome sites!

I hope this was useful! Let me know if I missed anything, or if you need clarifications for any part.

[1] PWA:
[2] Progressive enhancement:
[3] rel=canonical:
[4] AJAX Crawling scheme:
[5] Fetch and Render: