Crawling

Crawling is the process of fetching web pages by following their titles, description tags, and the internal and external links that point to a site. The task is performed by automated software called a crawler or spider; in Google's case, this is Googlebot. Well-written meta descriptions are often picked up during crawling and later reused in search results.
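As a rough sketch of what a crawler extracts from each fetched page, here is a minimal parser built on Python's standard library. The class name `PageParser` and the sample HTML are illustrative only, not Googlebot's actual logic:

```python
from html.parser import HTMLParser

class PageParser(HTMLParser):
    """Collects the title, meta description, and outgoing links of one page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Sample page; a real crawler would fetch this with urllib.request.
html = """<html><head><title>Example</title>
<meta name="description" content="A sample page."></head>
<body><a href="/about">About</a><a href="https://other.site/">Other</a></body></html>"""

parser = PageParser()
parser.feed(html)
print(parser.title)        # Example
print(parser.description)  # A sample page.
print(parser.links)        # ['/about', 'https://other.site/']
```

A real crawler would then queue each discovered link for fetching in turn, which is how it walks outward from a starting page.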
Indexing

Indexing is the process of building an index of all the fetched pages. The search engine sorts them into a massive database from which they can later be retrieved. During this process it identifies the significant words on each page (the keywords) and stores them. It therefore makes sense to write page titles with the search engine's algorithm in mind.
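A simple way to picture this index is an inverted index that maps each keyword to the pages containing it. This sketch (the `build_index` helper and the sample pages are hypothetical) shows the idea:

```python
from collections import defaultdict

def build_index(pages):
    """Map each word (keyword) to the set of page IDs that contain it."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in text.lower().split():
            index[word].add(page_id)
    return index

pages = {
    "page1": "search engines crawl the web",
    "page2": "engines index crawled pages",
}
index = build_index(pages)
print(sorted(index["engines"]))  # ['page1', 'page2']
```

Looking up a keyword is then a single dictionary access rather than a scan of every stored page, which is what makes later retrieval fast.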
Make effective use of robots.txt
A sitemap will benefit your site.
Be aware of rel="nofollow" for links
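Crawlers that honor robots.txt check each URL against its rules before fetching. Python's standard `urllib.robotparser` can sketch this check; the robots.txt content and example.com URLs below are made up:

```python
from urllib import robotparser

# A hypothetical robots.txt that blocks one private directory.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
```

Using robots.txt this way lets you steer crawlers away from duplicate or low-value pages so the crawl budget goes to the content you want indexed.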
Processing

When a search request arrives, the search engine processes the search string against the indexed pages in its database.
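A bare-bones version of this matching step intersects the index entries for each query term (simple AND semantics); the `matching_pages` helper and the tiny index here are illustrative, not how any real engine is implemented:

```python
def matching_pages(index, query):
    """Return the pages that contain every term in the query."""
    terms = query.lower().split()
    result = None
    for term in terms:
        pages = index.get(term, set())
        # Intersect with the pages found so far.
        result = pages if result is None else result & pages
    return result or set()

index = {
    "search": {"page1", "page3"},
    "engine": {"page1", "page2"},
}
print(matching_pages(index, "search engine"))  # {'page1'}
```

Only the pages that survive this matching step move on to relevancy scoring.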
Offer quality content and optimize it
- Write better anchor text
- Optimize images
Relevancy of the website

The search engine then calculates the relevancy of each indexed page to the search string or keywords. That is why PageRank is so important; linking, site architecture, and keyword density also play an important role.
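Keyword density, one of the signals mentioned above, can be sketched as the fraction of a page's words that are query terms. This is a deliberately naive measure, and the `relevance` function is a hypothetical name, not a real ranking formula:

```python
def relevance(text, query):
    """Keyword density: fraction of the page's words that are query terms."""
    words = text.lower().split()
    terms = set(query.lower().split())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in terms)
    return hits / len(words)

page = "seo tips seo tools and general tips"
print(relevance(page, "seo"))  # 2 of 7 words match -> ~0.286
```

Real engines combine many such signals (links, structure, density) into a single score, but the principle of scoring each candidate page is the same.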
Improve the structure of your URLs
- Make your site easier to navigate
- Use heading tags appropriately
Retrieving Results

Finally, the search engine retrieves the best-matched results from the index. If a page has no meta tags, the text around the matched term within the body content is displayed in the snippet.
Sources from which Google extracts snippets:
1) Open Directory Project
2) Site Content
3) Meta tags
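The meta-description fallback described above can be sketched like this: prefer the page's description, otherwise show the text around the first occurrence of the search term. The `snippet` helper and the sample pages are illustrative:

```python
def snippet(page, term, width=30):
    """Prefer the meta description; otherwise show text around the match."""
    if page.get("description"):
        return page["description"]
    body = page.get("body", "")
    pos = body.lower().find(term.lower())
    if pos == -1:
        return body[:width]
    start = max(0, pos - width // 2)
    return body[start:start + width]

page_with_meta = {"description": "A site about crawling.", "body": "..."}
page_without = {"description": "",
                "body": "Engines build an index before retrieval happens."}

print(snippet(page_with_meta, "crawling"))  # A site about crawling.
print(snippet(page_without, "index"))
```

This is one more reason to write a good meta description: when it is present, it is the first candidate for the snippet shown to searchers.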