Friday, October 12, 2007
How Search Engines Work
A lot of business owners are perplexed when it comes to how search engines work. Below you will find an excerpt from my book
“Search Engines from A to Z”.
Search engines are the key to finding specific information on the web. Without the use of sophisticated search engines, it would be virtually impossible to locate anything on the internet without knowing a specific URL.
URL: the address of a web site as it appears in the browser's address bar.
e.g., www.artistadesign.com
There are basically three types of search engines:
those that are powered by robots, or spiders; those that are powered by human submissions; and those that are a combination of the two.
Spider: a program that automatically visits web pages and collects their information. Spiders are used to feed web page content to search engines.
Crawler-based engines send crawlers, or spiders, out onto the internet. These spiders visit a website, read the information on the page, read the site's keywords, and follow the links the site connects to. The spider returns all of that information to a central database, where the data is indexed. The spider periodically returns to each site to check for any information that has changed; how often this happens depends on the search engine.
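If you are curious what that looks like under the hood, here is a tiny Python sketch of a spider. The starting page, the page limit, and the simple dictionary standing in for the central database are simplifications I made up purely for illustration; real spiders are vastly more sophisticated.

# A minimal web-spider sketch (illustration only): fetch a page, pull out
# its text and links, store the result, then follow the links it found.
# The seed URL, page limit, and in-memory "database" dict are assumptions
# made for this example, not how any real search engine is built.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects the visible text and the outgoing links of one page."""

    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []

    def handle_data(self, data):
        self.text_parts.append(data)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Visit pages starting from seed_url and return {url: page_text}."""
    database = {}                      # the "central database" described above
    queue = [seed_url]
    while queue and len(database) < max_pages:
        url = queue.pop(0)
        if url in database:
            continue                   # already visited this page
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue                   # skip pages that fail to load
        parser = PageParser()
        parser.feed(html)
        database[url] = " ".join(parser.text_parts)
        # Follow the links found on this page, just as described above.
        queue.extend(urljoin(url, link) for link in parser.links)
    return database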
Human-powered search engines rely on humans to submit information that is then indexed and catalogued. Only information that is submitted is put into the index.
In both cases, when you go to a search engine to locate information, you are actually searching through the index that the search engine has created. Search engines are built on giant databases of information that has been collected and stored. This explains why a search on a commercial search engine, such as Google or Yahoo!, will sometimes return results that are in fact dead pages. The search results are based on the information that was collected; if that information hasn't been updated and the web page has since become invalid, the search engine still treats the page as active. It will remain that way until the search engine updates its index.
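To make the "you are searching the index, not the live web" point concrete, here is a small Python sketch of an index being built and then searched. The two sample pages and the simple word-splitting rule are invented for this example; real indexes hold billions of documents.

# A toy "search the index, not the web" example. The page contents below are
# made up for illustration.
def build_index(pages):
    """pages: {url: text}. Returns {word: set of urls containing it}."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index


def search(index, query):
    """Return the urls whose stored text contains every query word."""
    results = None
    for word in query.lower().split():
        urls = index.get(word, set())
        results = urls if results is None else results & urls
    return results or set()


pages = {
    "example.com/flowers": "florist selling fresh flowers and roses",
    "example.com/cars":    "used cars and trucks for sale",
}
index = build_index(pages)
print(search(index, "fresh flowers"))   # {'example.com/flowers'}
# Note: if example.com/flowers went offline tomorrow, this index would still
# return it until the index itself was rebuilt -- the "dead pages" effect.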
So why does the same search on different search engines produce different results?
Partly it depends on what the spiders find or what people submit. More importantly, not every search engine uses the same criteria or algorithm when indexing the web page information.
Algorithm: the formula a search engine uses to determine the importance of the information on a web page.
One of the elements that a search engine algorithm scans is the frequency, location, and consistency of keywords on a website.
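As a rough illustration of frequency and location, here is a toy Python scoring function. The weighting in it (a keyword in the title counting five times as much as one in the body) is a number I made up for this example, not any real engine's formula.

# Toy keyword scoring: frequency plus a bonus for appearing in the title.
import re


def words(text):
    """Lower-case the text and split it into plain words."""
    return re.findall(r"[a-z]+", text.lower())


def keyword_score(keyword, title, body):
    keyword = keyword.lower()
    body_hits = words(body).count(keyword)      # frequency in the page body
    title_hits = words(title).count(keyword)    # location: the page title
    return body_hits + 5 * title_hits


print(keyword_score("flowers",
                    title="Fresh Flowers Delivered Daily",
                    body="We sell flowers, roses, and flower arrangements."))
# The body has 1 hit and the title has 1 hit, so the score is 1 + 5 * 1 = 6.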
Another common element that algorithms analyze is the way pages link to other pages. By analyzing how pages link to each other, a search engine can determine both what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking.
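The classic example of this kind of link analysis is Google's original PageRank idea: a page is important if important pages link to it. Below is a heavily simplified sketch in that spirit; the tiny four-page link graph and the number of iterations are invented for illustration only.

# A simplified PageRank-style calculation: a page's score depends on how
# many pages link to it and how important those pages are themselves.
# The four-page link graph below is made up purely for illustration.
def pagerank(links, damping=0.85, iterations=20):
    """links: {page: [pages it links to]}. Returns {page: score}."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                if target in new_rank:
                    new_rank[target] += share
        rank = new_rank
    return rank


links = {
    "a.com": ["d.com"],
    "b.com": ["d.com"],
    "c.com": ["d.com"],
    "d.com": ["a.com"],
}
for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))
# d.com is linked to by every other page, so it ends up with the highest score.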
Search engines are becoming wise to webmasters who build artificial links into their sites in order to inflate their rankings artificially.
That is why it is now more important than ever to have content that matches the keywords in your meta tags, to link to relevant sites, and to have backlinks from sites that are not only relevant but also have a Google ranking higher than 3.