What was said about this is that those 404 errors don't affect your rankings and that you don't need to worry about them:
But if you have a big website (1,000,000 URLs) and there are 2 different "ghost" URLs on each of your pages, wouldn't that be a waste of time for Googlebot (2,000,000 extra URLs to try to crawl)? Shouldn't we block those URLs in the robots.txt file, since we know they are "useless" to Googlebot?
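(For illustration only: assuming the ghost URLs shared some recognizable prefix, which is purely hypothetical here since the real pattern isn't given, the robots.txt rule would be a single Disallow line.)

    # Hypothetical prefix; substitute whatever pattern the ghost URLs actually share
    User-agent: Googlebot
    Disallow: /phantom-path/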
Yes, Googlebot is crawling code and reporting it in WMT, and it shouldn't be. They aren't real links, and because they're in JS that's unique to each page, they can't be moved into a separate file (see the sketch below)...
Annoying, as the 404 reporting would otherwise be useful; our site's too large to crawl (easily) any other way.
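A minimal sketch of how that situation can arise (the element ID and path below are invented, not taken from the actual site): the "ghost" URL is just a string inside per-page inline JS, so Googlebot treats it as a link even though it points at nothing.

    <script>
      // Per-page inline JS: the string looks like a URL to Googlebot,
      // but "/widget/12345/data" isn't a real page, so crawling it returns a 404.
      // Because the "12345" part changes on every page, this snippet can't be
      // moved into a single shared external .js file.
      var endpoint = "/widget/12345/data";
      // Assumes an element with this ID exists on the page.
      document.getElementById("related-widget").setAttribute("data-src", endpoint);
    </script>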