SEO 101: Basic SEO Research

So, I think it is pretty obvious where to start when you want to learn something. You Google it. I assume the companies ranking at the top for 'learn SEO' must have really well-optimised websites, which means they know what they are talking about.

[Screenshot: SERP for 'Learn SEO']

I had heard the name 'Moz' somewhere, and as it was also first on the result list, I checked their page out. They have a massive resource for learning and implementing SEO, right from beginner to advanced, which I found pretty cool. I navigated to the beginners' section, 'The Beginner's Guide to SEO', went through all ten chapters, and found them very informative.

Now that I had a basic understanding, I needed a roadmap: the order in which I would proceed with learning and implementing SEO for my website. Obviously, I can't foresee everything, so it won't be a very strict roadmap, but it will play a vital role in keeping me pointed in the right direction.

Approach:

  1. List all the services provided by Tripin Studio and analyse the competition.
  2. Keyword Research for Optimisation
  3. Curating Relevant Content
  4. Technical Implementation for SEO
  5. Metrics for SEO
  6. Link Building

Before starting on my roadmap for Tripin Studio's SEO, I researched in detail the technical workings of a search engine. I think it is essential to know this, as it will keep everything in context while I implement the various methods.

So, in this new unexplored territory of the INTERNET, how does a search engine operate?
The internet is basically a lot of computers connected to each other to share and exchange data. A webpage's data is stored at a location on one of these computers. The website address points to that location, and the HTML stored there is rendered into a human-readable format by the browser.
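To make that a bit more concrete, here is a minimal Python sketch (using the third-party requests library, which is my own choice for illustration, not anything the search engines publish) of what happens when a page is requested: the address resolves to a server, and the server sends back raw HTML for the client to render or parse.

```python
import requests

url = "https://example.com/"  # hypothetical address; any public page works
response = requests.get(url, timeout=10)

print(response.status_code)                   # 200 means the server found and returned the page
print(response.headers.get("Content-Type"))   # usually 'text/html; charset=...'
print(response.text[:300])                    # the raw HTML a browser would render for humans
```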

A search engine basically has two jobs:

  1. Find content
  2. Analyse the content quality and context

Obviously, it is not that simple, but for our understanding and purposes, let us limit it to that.

A web crawler, or robot, deciphers the code of webpages and stores selected pieces (the metrics used by search algorithms) in massive databases. When a crawler comes across a link to another webpage in the content, it visits that page and crawls its content too. This process is repeated constantly to reach more and more content.
The metrics stored in these massive databases are fed to the engine's search algorithm, which decides a rank for the webpage against various keywords. They all work on a basic rationale: the more popular a website is, the more valuable the content it holds.
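Here is a toy sketch of that find-and-follow loop, assuming the requests and beautifulsoup4 libraries. It is only an illustration of the idea described above; a real crawler adds politeness rules, robots.txt handling, deduplication at scale, and far richer ranking metrics.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def crawl(seed_url, max_pages=10):
    queue, seen, index = deque([seed_url]), set(), {}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        # store a couple of toy "metrics" a real engine might keep: title and word count
        index[url] = {
            "title": soup.title.string.strip() if soup.title and soup.title.string else "",
            "words": len(soup.get_text().split()),
        }
        # follow every link found in the content, exactly the repeat step described above
        for a in soup.find_all("a", href=True):
            queue.append(urljoin(url, a["href"]))
    return index


if __name__ == "__main__":
    for page, data in crawl("https://example.com/").items():  # hypothetical seed URL
        print(page, data)
```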

With developments in machine intelligence, these search algorithms have become better at identifying good content and demoting spam. There are still certain limitations, like identifying content inside Flash files, images, GIFs, etc. Such content makes a page more human-friendly but decreases its visibility to search engines; it still cannot be dropped, as that would hurt the user experience.
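One standard way to soften that trade-off (my note, not something from the guide above) is descriptive alt text on images, which search engines can read even when they cannot "see" the picture. A small sketch, again assuming requests and beautifulsoup4, that flags images missing it on a page:

```python
import requests
from bs4 import BeautifulSoup


def images_missing_alt(url):
    # return the src of every <img> on the page that has no alt text
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [img.get("src", "") for img in soup.find_all("img") if not img.get("alt")]


print(images_missing_alt("https://example.com/"))  # hypothetical URL
```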

Now, the way I see it, this is very much like setting up a business for offline visibility. The first thing that needs to be done is to create the content to be marketed; this helps us figure out what needs to be said, or IDENTIFY KEYWORDS, to sell the services we offer. Second, we need to spread the word in our network, or BUILD LINKS, so that more and more people know what we do.

In my next blog, I will mainly discuss keyword research and keyword selection for a design and development agency.