Indexing and Crawling

Started by lucky1, 12-03-2015, 00:54:45


lucky1 (Topic starter)

Hello friends,



I want to know: what is the difference between indexing and crawling? Please tell me.


RH-Calvin

Crawling is the reading of your webpage by search engine spiders, which fetch the page and store a cached copy. Indexing is the process of storing or updating that cached webpage in the search engine's database. Once indexed, the page is ready to be considered for search engine ranking.


ethanmillar

Crawling: The Google crawler (Googlebot) visits your web pages to discover and read their content; this process is called crawling.

Indexing: After crawling is done, Google stores the page so that it can appear in Google Search results; this is called indexing.

PrajeshNarayanswami

Crawling happens when a search engine fetches unique URIs that it discovers by following valid links from other web pages.

Indexing happens after the crawled URIs are processed. Note that some URIs may be crawled but not indexed: fewer pages will have their content processed and included in the index than were crawled.

Shikha Singh

Whenever we write a blog or start a website, the first thing we probably want is for people to find it. But how does that happen? We have to wait for Googlebot to crawl the site and add it to the Google index. To understand the difference between indexing and crawling clearly, we should know the key elements: index, crawl, and Googlebot.

Googlebot: Googlebot is a software bot whose main purpose is to collect information about documents on the web to add to Google's searchable index.

Crawling: Crawling is the process by which Googlebot goes from one website to another to collect new information to send back to Google. It traverses the billions of interlinked pages on the World Wide Web to update its information, noting things such as new sites or pages, changes to existing sites, and dead links. It's like Pac-Man following all those dots and eating them.

Indexing: Indexing is the processing of the information gathered by Googlebot during its crawling activities. In indexing, all accessible data is stored (cached) on search engine servers so it can be retrieved when a user performs a search query.

So it is clear that crawling is a reading process and indexing is a storing process: in crawling, Googlebot reads information about a blog or website's content by following links, and in indexing, it processes the collected data and saves the information on the search engine's servers.
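The crawl-then-index flow described above can be sketched as a small simulation. Here a hypothetical in-memory link graph stands in for the web, and a breadth-first traversal stands in for Googlebot following links from page to page; this is a toy illustration, not Google's actual implementation:

```python
from collections import deque

# Hypothetical "web": each URL maps to (page text, outgoing links).
WEB = {
    "a.com": ("dogs and cats", ["b.com", "c.com"]),
    "b.com": ("cats only", ["a.com"]),
    "c.com": ("dogs play fetch", []),
}

def crawl(seed):
    """Breadth-first crawl: follow links outward from a seed URL."""
    seen, queue, pages = set(), deque([seed]), {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in WEB:
            continue  # skip already-visited pages and dead links
        seen.add(url)
        text, links = WEB[url]
        pages[url] = text    # "read" the page content (crawling)
        queue.extend(links)  # discover new URLs to visit
    return pages

pages = crawl("a.com")
print(sorted(pages))  # all three pages are reachable from the seed
```

The `pages` dict returned by the crawl is exactly the raw material that indexing then processes and stores.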


hrishivardhan

Read this to get a better idea of indexing and crawling:
https://www.google.co.in/insidesearch/howsearchworks/crawling-indexing.html

HarshMehra

We use software known as "web crawlers" to discover publicly available web pages. The most well-known crawler is called Googlebot. Crawlers look at web pages and follow links on those pages, much like you would if you were browsing content on the web. They go from link to link and bring data about those web pages back to Google's servers.
The crawl process begins with a list of web addresses from past crawls and sitemaps provided by website owners. As the crawlers visit these websites, they look for links to other pages to visit. The software pays special attention to new sites, changes to existing sites, and dead links.
The web is like an ever-growing public library with billions of books and no central filing system. Google essentially gathers the pages during the crawl process and then creates an index, so we know exactly how to look things up. Much like the index in the back of a book, the Google index includes information about words and their locations. When you search, at the most basic level, our algorithms look up your search terms in the index to find the appropriate pages.
The search process gets much more complex from there. When you search for "dogs" you don't want a page with the word "dogs" on it hundreds of times. You probably want pictures, videos or a list of breeds. Google's indexing systems note many different aspects of pages, such as when they were published, whether they contain pictures and videos, and much more. With the Knowledge Graph, we're continuing to go beyond keyword matching to better understand the people, places and things you care about.
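The book-index analogy above can be made concrete with a tiny inverted index. This sketch (using three hypothetical pages) maps each word to the pages it appears on, which is roughly the structure a basic keyword lookup consults:

```python
from collections import defaultdict

# Hypothetical crawled pages (URL -> page text).
pages = {
    "a.com": "dogs and cats",
    "b.com": "cats only",
    "c.com": "dogs play fetch",
}

def build_index(pages):
    """Inverted index: each word maps to the set of pages containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, term):
    """Basic lookup: return the pages that contain the search term."""
    return sorted(index.get(term.lower(), set()))

index = build_index(pages)
print(search(index, "dogs"))  # ['a.com', 'c.com']
```

Real search engines layer ranking signals (freshness, media, links, and so on) on top of this basic word-to-location lookup.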

a4nuser

SEO is a very big sea, and to understand SEO we should know all of its basic terminology. Crawling and indexing are two such terms, and they form the basis of SEO. If you have been in the web world for a while, you must be aware of Google crawling and indexing; these are the two processes on which the whole web world depends. Let's understand both terms and get more in-depth information about them.


Rammadhur

In SEO, crawling is what the search engine uses to discover new websites, rewritten page content, and dead links, whereas indexing stores the crawled information so the search engine's algorithms can easily retrieve the relevant web pages for a query.