With the internet growing at an astonishing pace, and thousands of new pages and documents added to the World Wide Web every day, finding information relevant to what a user is searching for can be a huge challenge.
That is the problem internet search engines were designed to tackle. A search engine is a special website designed to help users find information on the internet. It compiles data from websites across the internet and stores that data in an index, so it can be retrieved when a user types in a specific key phrase or term.
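The index described above is commonly built as an inverted index, a structure that maps each word to the pages containing it. A minimal sketch, using made-up page names and text rather than real crawled data:

```python
from collections import defaultdict

# Hypothetical pages; in a real engine these would come from crawled sites.
pages = {
    "example.com/cats": "cats are popular pets and cats like to sleep",
    "example.com/dogs": "dogs are loyal pets",
}

# Build an inverted index: each word maps to the set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

# Looking up a search term is now a single dictionary lookup.
print(sorted(index["pets"]))
```

Because lookups go word-to-pages rather than scanning every page for the word, queries stay fast no matter how many documents are indexed.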
To find and index all of these sites, a software robot referred to as a spider is sent out to build lists of all the words found on a site. These robots travel to more pages and sites through the links found on other websites, indexing information across the internet and returning periodically to look for new content.
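The link-following behavior described above is, at its core, a graph traversal: visit a page, record it, and queue up every link it contains that has not been seen yet. A minimal sketch over an in-memory link graph with hypothetical page names, standing in for real HTTP fetches:

```python
from collections import deque

# A toy link graph standing in for the web; keys are pages,
# values are the links found on each page (all names hypothetical).
links = {
    "site-a": ["site-b", "site-c"],
    "site-b": ["site-c"],
    "site-c": ["site-a"],
}

def crawl(start):
    """Visit every page reachable from `start` by following links,
    skipping pages already seen, much as a spider does."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)          # a real spider would index the page here
        for link in links.get(page, []):
            if link not in seen:    # avoid revisiting the same page
                seen.add(link)
                queue.append(link)
    return order

print(crawl("site-a"))
```

The `seen` set is what keeps the spider from looping forever when pages link back to each other, as they constantly do on the real web.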
The spiders take note of the information on a page and compile lists of the words that are dominant there in order to assess its purpose. Each search engine has different algorithms that its spiders follow when retrieving words from a page: some put emphasis on meta tags, while others look for recurring words in titles and throughout the page. Finding these sites all over the internet and indexing them so they can be found is a monumental task, and one that is never finished.
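Spotting the dominant words on a page amounts to counting word frequencies, typically after discarding very common "stop words" that carry little meaning. A minimal sketch with hypothetical page text and a tiny stop-word list:

```python
from collections import Counter

# Hypothetical page text; a real spider would strip HTML tags first.
text = "search engines index pages so users can search pages quickly"

# Very common words carry little meaning, so they are usually ignored.
stop_words = {"so", "can", "the", "a"}

words = [w for w in text.lower().split() if w not in stop_words]
dominant = Counter(words).most_common(2)
print(dominant)
```

Here the counts alone suggest the page is about searching and pages, which is the kind of signal a spider uses to assess a page's purpose.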
When users go to Google, Yahoo or any of the many other search engines, they type in a search string related to what they are looking for. The search engine then retrieves from its index the websites that are most relevant and displays them in order of how important it believes those sites to be with regard to the search.
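Real engines rank results with far more sophisticated signals, but the basic retrieve-and-order step can be sketched by scoring each page on how often it contains the query's words. All page names and counts below are made up for illustration:

```python
# A tiny index mapping words to the pages that contain them, with a
# count of how often each word appears (all data hypothetical).
index = {
    "python": {"page-1": 3, "page-2": 1},
    "tutorial": {"page-1": 1, "page-3": 2},
}

def search(query):
    """Score each page by summing the counts of the query words it
    contains, then return pages from most to least relevant."""
    scores = {}
    for word in query.lower().split():
        for page, count in index.get(word, {}).items():
            scores[page] = scores.get(page, 0) + count
    return sorted(scores, key=scores.get, reverse=True)

print(search("python tutorial"))
```

Pages containing both query words naturally float to the top, which mirrors the intuition that a page matching more of the search string is more relevant.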
While it is very difficult to index every site, search engines are making it easier every day to navigate the internet for information.