Googlebot is Google's web crawling bot (sometimes also called a “spider”). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index. We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site. From Google: Googlebot
Google fetches, crawls, and renders your page content differently than a browser does. While Google can crawl scripts, that doesn't mean it will always be successful. And just because you test a redirect in your browser and it works doesn't mean that Googlebot is properly following that redirect. It took some dialogue between our team and the hosting company before we figured out what they were doing… and the key to finding out was using the Fetch as Google tool in Webmasters.
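To make the failure mode concrete, here is a minimal, self-contained sketch of how a redirect can work in a browser yet never reach Googlebot. It is a hypothetical reproduction, not the hosting company's actual setup: a tiny local server that issues a 301 only when the request looks like a browser, and serves Googlebot the old page instead. The user-agent strings and paths are illustrative.

```python
import http.client
import http.server
import threading

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36"

class Handler(http.server.BaseHTTPRequestHandler):
    """Hypothetical buggy server: redirects browsers, but not Googlebot."""
    def do_GET(self):
        if "Googlebot" in self.headers.get("User-Agent", ""):
            # Googlebot gets the stale page instead of the redirect
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"stale page")
        else:
            self.send_response(301)
            self.send_header("Location", "/new-page")
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo output quiet

def status_for(port, user_agent):
    """Request /old-page without following redirects; return the status code."""
    conn = http.client.HTTPConnection("localhost", port)
    conn.request("GET", "/old-page", headers={"User-Agent": user_agent})
    status = conn.getresponse().status
    conn.close()
    return status

# Start the demo server on an ephemeral port in a background thread.
server = http.server.HTTPServer(("localhost", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

browser_status = status_for(port, BROWSER_UA)
bot_status = status_for(port, GOOGLEBOT_UA)
print(browser_status, bot_status)  # → 301 200
server.shutdown()
```

Testing in a browser (or with the browser user-agent) shows a clean 301, while the crawler quietly gets a 200 — exactly the kind of discrepancy that only a crawler's-eye view like Fetch as Google makes visible.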
The Fetch as Google tool allows you to enter a path within your site, see whether Google was able to crawl it, and view the crawled content exactly as Google does. For our first client, we were able to show that Google was not reading the script as they had hoped. For our second client, we were able to use a different methodology to redirect the Googlebot.
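The article doesn't say which methodology was used for the second client; one common alternative to script-based redirects is a plain server-side 301, which applies uniformly to every client, Googlebot included. A minimal sketch as an Apache `.htaccess` rule, with hypothetical paths:

```
# Hypothetical server-side redirect (Apache mod_alias): a 301 at the HTTP
# level reaches all clients, including Googlebot, unlike a script that may
# not execute during a crawl.
Redirect 301 /old-page /new-page
```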
If you see crawl errors in Webmasters (in the Health section), use Fetch as Google to test your redirects and view the content that Google fetches.