Web Crawler Technology

A web crawler, also known as a network robot or spider, is a program or programmed script that automatically fetches large collections of web pages over the HTTP protocol according to clearly defined crawling strategies (e.g., the depth-first, breadth-first, and best-first strategies) [22,23].
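The difference between these strategies comes down to how the crawler orders its frontier of pending URLs. The following minimal sketch illustrates this: `fetch_links` is a hypothetical callback standing in for an HTTP download-and-parse step, and the `strategy` parameter switches the frontier between FIFO (breadth-first) and LIFO (depth-first) behavior.

```python
from collections import deque

def crawl(seed_urls, fetch_links, max_pages=100, strategy="breadth"):
    """Simplified crawl loop illustrating frontier ordering.

    fetch_links(url) is a hypothetical callback that retrieves a page
    over HTTP and returns the URLs it links to.
    """
    frontier = deque(seed_urls)
    visited = set()
    order = []
    while frontier and len(order) < max_pages:
        # Breadth-first takes the oldest frontier entry (FIFO);
        # depth-first takes the newest (LIFO).
        url = frontier.popleft() if strategy == "breadth" else frontier.pop()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in fetch_links(url):
            if link not in visited:
                frontier.append(link)
    return order
```

A best-first crawler would instead keep the frontier in a priority queue ordered by a relevance score for each candidate URL; real crawlers also add politeness delays, robots.txt checks, and URL normalization, which are omitted here.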