How to make an AngularJS application crawlable

All we need is an easy explanation of the problem, so here it is.

I'm building a single-page app using Angular.js. My question is how to make the application crawlable, because routing is handled by ng-view on the client side and the server just returns a simple header file.

Site Link:

How to solve:

We know you're tired of this problem, so we're here to help! Take a deep breath and look at the explanation below. There are several solutions, but we recommend the first method because it is a tested and proven approach.

Method 1

I implemented crawling on my site using all of the points above and the link below:

Create static templates using PhantomJS.
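The idea behind this method is to pre-render each client-side route with PhantomJS and save the result as a static HTML snapshot, which the server then hands to crawlers. A minimal sketch of the route-to-snapshot mapping the server needs (the snapshots/ directory and the underscore naming scheme are assumptions for illustration, not from the original answer):

```javascript
// Map a client-side route to the static HTML snapshot that a
// PhantomJS pre-rendering pass would have written to disk.
// The "snapshots/" directory and naming scheme are assumptions.
function snapshotFileFor(route) {
  // Strip the hashbang prefix if present, e.g. "#!/products/5" -> "/products/5"
  var path = route.replace(/^#!?/, '');
  // The root route maps to the pre-rendered index page
  if (path === '/' || path === '') return 'snapshots/index.html';
  // "/products/5" -> "snapshots/products_5.html"
  return 'snapshots/' + path.replace(/^\//, '').replace(/\//g, '_') + '.html';
}

console.log(snapshotFileFor('#!/products/5')); // snapshots/products_5.html
console.log(snapshotFileFor('/'));             // snapshots/index.html
```

A cron job or build step would regenerate the snapshot files whenever content changes, so crawlers always see reasonably fresh markup.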

Method 2

The only working solution I know is the one the core AngularJS team uses for its documentation website.

  • First, they use HTML5 history for URLs, with a hashbang fallback. URLs with a hashbang make Google crawl them with _escaped_fragment_ in the query string.
  • Then they use AngularJS string interpolation and directives on the backend to render the templates as they will be in the DOM when the user loads the page and AngularJS parses it.
  • They pass that to Google and thus they have the same content in the search index as in the users’ browsers (so this is not cloaking).
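The URL rewrite described in the first bullet can be sketched as a pair of helpers. The function names are hypothetical; the transformation itself follows Google's (now-deprecated) AJAX crawling scheme, where the crawler turns a "#!" URL into an _escaped_fragment_ query parameter and the server translates it back:

```javascript
// Google's AJAX crawling scheme: a crawler rewrites "#!" URLs into an
// _escaped_fragment_ query parameter; the server must translate that
// back to the original client-side route before rendering a snapshot.

// What the crawler requests for a given hashbang URL.
function toEscapedFragmentUrl(url) {
  var parts = url.split('#!');
  if (parts.length < 2) return url; // no hashbang: requested as-is
  return parts[0] + (parts[0].indexOf('?') === -1 ? '?' : '&') +
         '_escaped_fragment_=' + encodeURIComponent(parts[1]);
}

// The route the server should pre-render for a crawler request.
function routeFromEscapedFragment(queryString) {
  var match = /(?:^|&)_escaped_fragment_=([^&]*)/.exec(queryString);
  return match ? decodeURIComponent(match[1]) : null;
}

console.log(toEscapedFragmentUrl('http://example.com/#!/about'));
// http://example.com/?_escaped_fragment_=%2Fabout
console.log(routeFromEscapedFragment('_escaped_fragment_=%2Fabout'));
// /about
```

On the server, any request carrying _escaped_fragment_ would be routed to the pre-rendered markup instead of the regular AngularJS bootstrap page.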

This was mentioned by the core developers in the AngularJS Google group. [1] [2] [3]

Judging from the rest of the threads there, they appear to be using PhantomJS and Node.js to render the pages.

Method 3

I came across this service that might be worth checking out. It runs a PhantomJS server and does all the legwork for you.

Method 4

Making a single-page app crawlable yet interactive is not a straightforward task. You have to think about access points from a UX perspective that allow the back button and deep linking. When the back button is pressed, for instance, the object state has to be recreated on the server without user interaction, generating the same markup that the client would produce on the way to that access point. PhantomJS can be used for this task, or client/server-agnostic JavaScript can run on both ends, or, as in the good ol' PHP days, the entire logic to replicate the state of each access point can be rewritten for the server. @Ajay Beniwal has detailed some links on how to create HTML snapshots.

Assume you have a web server that can emit bootstrapping markup for a particular object state. The state is supplied via a state identifier, which needs to be the URL to make your code crawlable. Libraries like AngularJS and Backbone.js supply mechanisms such as Backbone.Router, which use either link fragments or the HTML5 pushState() method to store the state identifier on the client. The beauty of HTML5 is that a refresh makes a direct request to the server for the right object state, without having to load an initial page that parses the supplied hash and redirects to the proper object-state URL. Although there is no other option for old browsers, architecting your application around the HTML5 paradigm will make it a cakewalk for crawlers, and most implementations of HTML5 pushState, such as Backbone.Router, degrade gracefully into hash-based state marking so that older browsers still get a working back button.
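The graceful degradation between pushState paths and hash-based routes can be sketched with two small helpers. These function names are illustrative, not from Backbone.Router or any library; they only show the URL shapes involved:

```javascript
// Degrade an HTML5 pushState path to a hash route for old browsers,
// and recover the canonical path from a hash URL on capable browsers
// (e.g. before calling history.replaceState to normalize the address).
// Names are illustrative, not part of any routing library.

// "/products/5" -> "/#/products/5" (what an old browser would show)
function toHashUrl(path) {
  return '/#' + path;
}

// "/#/products/5" -> "/products/5" (the crawlable canonical URL)
function fromHashUrl(url) {
  var i = url.indexOf('#');
  return i === -1 ? url : url.slice(i + 1);
}

console.log(toHashUrl('/products/5'));     // /#/products/5
console.log(fromHashUrl('/#/products/5')); // /products/5
```

The crawlable pushState form is what the server renders bootstrap markup for; the hash form only ever lives on the client, which is exactly why hash-only apps need the snapshot machinery from the earlier methods.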

Method 5

Since October 2015, when Google deprecated its AJAX crawling scheme, you don't need to do anything to make your application crawlable (assuming you mean crawling by Google): Googlebot now renders JavaScript itself.

Check this article:

Note: Use and implement Method 1, because it has been fully tested on our system.
Thank you 🙂

All methods were sourced from their original answers and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
