Introduction to Search Engine Optimization

What is search engine optimization?

When we submit a query to a search engine, it produces a list of suitable pages. That set of results is called the Search Results Record (SRR), more widely known as the search engine results page (SERP). To produce it quickly, the search engine does not scan the live web for every query: each website is pulled into the search engine's database and a replica of it is maintained there. Every query we type is run against that database and processed by several algorithms, such as the spider's crawling algorithm and the ranking algorithms.
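To make that idea concrete, here is a toy sketch in Python. It is an assumed, simplified illustration rather than how a real search engine works: a few made-up pages are indexed by keyword, and a query is answered by looking pages up in that index and ordering them by a simple score.

from collections import defaultdict

# Made-up pages standing in for the search engine's replica of the web.
pages = {
    "page1.html": "ladakh travel guide and ladakh photos",
    "page2.html": "travel tips for europe",
    "page3.html": "ladakh weather report",
}

# Build an inverted index: keyword -> set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return matching pages, ranked by how often the query words appear."""
    words = query.lower().split()
    matches = set().union(*(index.get(w, set()) for w in words))
    return sorted(matches,
                  key=lambda url: -sum(pages[url].lower().split().count(w) for w in words))

print(search("ladakh"))   # ['page1.html', 'page3.html']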

TYPES OF SEO:

  1. White hat SEO
  2. Black hat SEO
  3. Grey hat SEO

White hat SEO: In this process of optimization we improve our techniques and methods while following the guidelines of the search engine. Every page is given a ranking, and based on that ranking the search engine decides at what position our page will appear. If you use white hat SEO, it will take a long time to reach the highest rank.

Black hat SEO: In this process of optimization we improve our techniques and methods by violating the guidelines of the search engine. With this method the page can be ranked at the top in almost no time. Here much of the content is hidden from the visitor, such as hidden text, hidden URLs, etc.

Grey hat SEO: In this process of optimization we use a mix of the methods involved in white hat SEO and black hat SEO.

 

GOOGLE SEARCH ENGINE

What is crawling?

Crawling is the process of searching and scanning the images, keywords and titles of every webpage related to our search. Google's spider, also known as a crawler or bot, processes thousands of pages in a second. The pages it finds are stored in the search engine's database so they can appear in the search results record; this storage step is called indexing.
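As a rough illustration of what a crawler does with a single page, the sketch below uses only Python's standard library to download one page, pull out its title and collect the links it points to. The address https://example.com is just a placeholder; a real spider would also respect robots.txt and go on to follow the collected links.

from html.parser import HTMLParser
from urllib.request import urlopen

class PageScanner(HTMLParser):
    """Collects the <title> text and all <a href> links of one page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.links = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Placeholder page to scan.
html = urlopen("https://example.com").read().decode("utf-8", errors="ignore")
scanner = PageScanner()
scanner.feed(html)
print("Title:", scanner.title)
print("Links found:", scanner.links)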

To tell the spiders which parts of our website/web pages they may crawl, we use a robots.txt file. We can check our site's robots.txt file by using the Google Robots.txt Tester. Robots.txt contains simple path rules, which may include wildcard patterns rather than full regular expressions. Here is the snapshot of my blog on Blogger, which contains a robots.txt file by default.
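Besides the Google Robots.txt Tester, the same check can be done programmatically. The short sketch below uses Python's standard urllib.robotparser module to download a site's robots.txt and ask whether a given page may be crawled; the blog address and page path are placeholders.

import urllib.robotparser

site = "https://example.blogspot.com"   # placeholder; use your own blog's address

rp = urllib.robotparser.RobotFileParser()
rp.set_url(site + "/robots.txt")
rp.read()   # downloads and parses the robots.txt file

# May a generic crawler ("*") fetch this page?
print(rp.can_fetch("*", site + "/2024/01/some-post.html"))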

 

The robots.txt file is a major factor in whether a web page appears on the SRR or not.

The crawler finds related web pages based on keywords. For example, if you enter "Ladakh", the search engine looks through all the pages that contain "Ladakh" as a keyword and displays them in the SRR.

 

HOW TO CREATE A ROBOTS.txt FILE:

User-agent: this is used to specify the bot (crawler) to which the rules that follow apply. Based on this we control which bot may identify and index the page.

For example, to address Google's crawler we use the name Googlebot; Yahoo, Bing and the other search engines have their own bot names.

If we want our page to be recognized by every bot, we simply use *.

User-agent: *
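For instance, a robots.txt file can hold one group of rules for a specific bot and another group for every other bot; the paths here are purely illustrative:

User-agent: Googlebot
Disallow: /private/

User-agent: *
Allow: /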

Allow: here we can list the images, files, links, etc. that should be visible to the bots so that they can index them.

Allow: /path/filename

Disallow: here we can list the images, files, links, etc. that the bot should not index, so that they are not visible in the search engine.

Disallow: /path/filename

 

Sitemap: this is used to point the bots to our sitemap, an .xml file that lists the pages of our site, so we do not need to declare every page in the Allow or Disallow fields. The value is the full URL of the sitemap file.

Sitemap: https://demourl.com/sitemap.xml

SAMPLE ROBOTS.txt FILE

User-agent: *

Allow: /samplepath/sai.jpg

Allow: /samplepath/storage/file.txt

Disallow: /samplepath/hide.jpg

Disallow: /samplepath/hide.pdf

 

Sitemap: https://demo_site.com/sitemap.xml

We have to save the entire file as robots.txt (plain text, with a .txt extension) and place it at the root of the site, for example https://demo_site.com/robots.txt.
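Before uploading a file like the sample above, you can sanity-check it by feeding the same rules to Python's urllib.robotparser and confirming that the allowed and disallowed paths behave as intended:

import urllib.robotparser

sample = """\
User-agent: *
Allow: /samplepath/sai.jpg
Allow: /samplepath/storage/file.txt
Disallow: /samplepath/hide.jpg
Disallow: /samplepath/hide.pdf
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(sample.splitlines())

print(rp.can_fetch("*", "/samplepath/sai.jpg"))    # True  - explicitly allowed
print(rp.can_fetch("*", "/samplepath/hide.pdf"))   # False - explicitly disallowed
print(rp.can_fetch("*", "/samplepath/other.png"))  # True  - no rule matches, allowed by default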

Consider the robots.txt file of OpenSense Labs.

 
