We just launched an awesome new robots.txt testing tool in Webmaster Tools. I recommend you try it out, even if you're sure that your robots.txt file is fine; some of these issues can be subtle and easy to miss. While you're at it, also double-check how the important pages of your site render with Googlebot, and whether you're accidentally blocking any JS or CSS files from crawling.
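For a rough local sanity check (not a stand-in for the launched tool), Python's standard-library urllib.robotparser can evaluate a set of robots.txt rules against specific URLs. Note that it only does prefix matching and doesn't implement Googlebot's wildcard handling, so the Webmaster Tools tester remains authoritative. The rules and URLs below are made up for illustration:

```python
from urllib import robotparser

# Hypothetical robots.txt rules, supplied as a list of lines.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Check whether Googlebot may fetch specific URLs under these rules.
print(rp.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))         # True
```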
I was hoping Google would mention the effect on crawling your already-indexed pages when you change the robots.txt within this editor. An auto-de-index option for when you block indexed pages with robots.txt would come in handy too.
looking forward to trying this out ...:-)
+Peter Driessen disallowing crawling with the robots.txt doesn't necessarily remove the URLs from search; if you want to remove something from search you need to allow crawling, and serve a noindex robots meta tag (or x-robots-tag HTTP header). It does seem like something we could mention in there somewhere, so I'll put that on our list :).
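For reference, the noindex signal John mentions can be served either in the page's HTML or as an HTTP response header (the page itself must remain crawlable so Googlebot can see the signal):

```html
<!-- In the <head> of the page you want dropped from search: -->
<meta name="robots" content="noindex">
```

The equivalent HTTP header, useful for non-HTML resources such as PDFs, is `X-Robots-Tag: noindex`.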
Hi John,

I tried to start a private thread as you suggested but I'm not sure that it worked as I didn't hear back from you.

Did you receive the message?

+John Mueller I'm familiar with the differences, but most webmasters aren't. I notice a lot of confusion in the product forums about the differences between noindex and robots.txt. This new feature could make the distinction clearer, and maybe even ask users whether they want to de-index the blocked pages right away (i.e. file a removal request once a page is blocked by robots.txt).
Yes, I tried it... it's an awesome tool!
Hi John, it works perfectly. Thanks for the update.
Hi John, is it a problem if I block JS and CSS in my robots.txt file?
+Gianluca Mileo I'd recommend not blocking JS & CSS via the robots.txt unless it causes technical problems on your server to have those files crawled. Often JS & CSS give us more information about the pages, so that we can give those pages the credit they deserve in search.
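As a sketch of the kind of rule John is warning about (the paths here are just examples), blocking asset directories looks like this in robots.txt; the fix is to remove the Disallow lines, or to explicitly re-allow the assets if a broader Disallow is needed:

```
# Problematic: hides rendering resources from Googlebot
User-agent: *
Disallow: /js/
Disallow: /css/

# Better: keep the broader block, but re-allow the assets
User-agent: *
Disallow: /private/
Allow: /*.css$
Allow: /*.js$
```

(Google's robots.txt parser supports the `*` and `$` wildcards shown here; not all crawlers do.)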
Great tool, John, thanks. When viewing the latest versions, any reason why it would say "Failure: No Content" one moment, and OK (200) a few minutes later? Or is this normal? I'm not trying to block anything; my robots.txt is just:

User-agent: *
+Mark Barrus that can happen if we have trouble reaching your server. We'll usually retry as soon as we can, so if the initial attempt fails and the retry works, we may list those as separate events there. Technically it's not awesome; practically, at least the retry worked, so nothing got lost :).
Thank you John,  all is well, as the retry always works. I love Fetch and Render by the way. :)
Why do I often get a message in Webmaster Tools saying that my robots.txt is not accessible? I have to use Fetch as Google every time. Is there any way to fix the robots.txt?
+Prince Namdev the "robots.txt unreachable" message means that we couldn't reach your server at all. Usually that's a sign of connectivity problems on your side (maybe your hosting provider is limiting the number of simultaneous users on your site). It doesn't mean that there's anything wrong with your robots.txt file, just that we couldn't check it.
Google doesn't pay tax; none of them do.