June 4, 2017 § 1 Comment
My research and observation over the last couple of years have convinced me that Alexa is a biased and dishonest ranking system, and it is only getting worse every day. Yet they still promote the idea that their ranking matters in business, which has had a seriously negative impact in many ways in less internet-aware countries like Bangladesh. In this post I'm going to show a few examples that clearly indicate how fraudulent their rankings are and have been.
Before going into the details, I would like to share a personal experience with Alexa ranking from some experiments I did in 2016. A site of mine that had almost no visitors (it contains only some password-protected backend panels) reached a rank of 600 in the country, overtaking many popular sites, purely from my own visits using some techniques that Alexa seemed to care about. That was just an experiment in preparation for writing an article like this one.
Alexa basically has two types of rankings in its business logic: one for sites that pay them handsome money every month, and one for sites that do not pay a penny. There is arguably a third, hidden category of sites that naturally attract enormous numbers of visitors on their own and do not need to care about Alexa or any sort of ranking.
Paid sites get an abrupt positive change in their ranking (upon payment, of course) no matter what their previous ranking was or how many visitors they really have. Alexa checks no parameters before listing them as top-ranked sites. I found that even domains registered only a few months earlier were placed above high-traffic sites without hesitation or calculation.
The sites I picked for this comparison (purely for technical analysis, no offense intended) are:
A few facts before more detail:
- The first two are the most popular online news portals in Bangladesh, while the latter two are still largely unknown.
- The first two have a Google PageRank of 4 and 5 respectively, while the latter two both have 0 (zero).
- The first two domains were registered in 2006 and 2004 respectively (more than 10 years old), while the latter two were registered at the end of 2016 (only a few months old).
Now let's see how many pages Google, the biggest search engine, has indexed for each of them:
- The first one has about 1.2 million (1,280K) pages indexed with Google.
- The 2nd one has 819K pages indexed with Google.
- The 3rd one has only 168 pages indexed with Google, and most of them are broken (they returned 404 during my tests).
- The 4th one has 3600 pages indexed with Google.
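As a rough illustration of how the "broken indexed pages" claim above can be verified, here is a minimal sketch. It assumes you have already collected a site's indexed URLs (e.g. by browsing a `site:` search manually); the status values in the example are placeholders, not real measurements:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def fetch_status(url, timeout=10):
    """Return the HTTP status code for a URL, or None on network failure."""
    try:
        req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code          # e.g. 404 for a broken page
    except URLError:
        return None            # DNS failure, timeout, etc.

def summarize(statuses):
    """Count how many checked pages are OK (2xx/3xx) vs broken (4xx/5xx or unreachable)."""
    ok = sum(1 for s in statuses if s is not None and 200 <= s < 400)
    broken = sum(1 for s in statuses if s is None or s >= 400)
    return {"ok": ok, "broken": broken}

# Placeholder status codes standing in for fetch_status() results on indexed URLs
print(summarize([200, 404, 404, 301, None]))  # {'ok': 2, 'broken': 3}
```

In practice you would map `fetch_status` over the real list of indexed URLs and feed the results to `summarize`.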
Now let's see what Alexa rank each of them has at this moment (June 4th, 2017). This is simply shocking:
Now let's see how abruptly the ranks of the 3rd and 4th sites changed in the Alexa ranking system:
This shows how their rankings climbed overnight. The red-circled points mark the dates when they were jumped abruptly from one level to another. In other words, these are the dates when their payments took effect (the green tick marks beside the line “Alexa Traffic Ranks” indicate a paid, and hence “verified,” site). In a real situation, a change like this is simply impossible.
I have many other points to mention in this regard, but I think this topic has received enough attention already.
Anyway, these 4 sites used in the current analysis are not the only ones of this kind. There are many more where similar discrimination can be found.
The purpose of this technical analysis is not to humiliate or undermine these 4 sites, or to publicize them in any way, but to let decision makers and business people who still consider Alexa a metric of a site's popularity know that the reality is quite different from the number shown as the rank.
If this helps even a single person or a single decision, this post will have served its purpose.
Thanks very much for reading this.
June 11, 2010 § 6 Comments
It's been quite a few weeks since I wrote a post on this blog. I know many of my well-wishers and clients check this blog regularly for updates about me and my work, so yesterday I thought I would share my latest development focus.
Based on recent project traffic, I chose two fields to concentrate on this year (or until better options appear on my horizon): 1) web scraping and 2) Drupal. Besides these, I have been developing some Facebook applications on request since the last week of last month. As a result, we expect to receive a big chunk of Facebook projects this month. If that happens, we will probably be engaged in FB development for the next few months; otherwise, I will concentrate on the selected fields, Drupal and web scraping.
The reason for choosing web scraping is the huge demand for it in the freelance market. Most webmasters nowadays want to fight their competitors smartly, but without spending a lot of time and money. For example, a store owner in Florida or NY cannot just sit relaxed at his desk today; he needs to continuously check who else is selling similar products, what their rates are, and whether his own rate is competitive in the market. So what does he need? Either people to monitor all the competitor stores' prices, or an automated way to learn the prices without using people. Web scraping is the solution here: good scraping tools can regularly deliver updated results to this store owner's computer. This is just one example; there are hundreds of cases where scraping is the only way to manage a business smartly.
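The price-monitoring idea above can be sketched in a few lines. This is only an illustration, not the tooling from my actual projects: the `class="price"` markup and the sample page fragment are hypothetical, standing in for a competitor page you would actually fetch:

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Collect the text of elements tagged with class 'price' (hypothetical markup)."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Flag that the next text chunk belongs to a price element
        if ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

# Sample fragment standing in for a fetched competitor page
sample_html = """
<ul>
  <li>Widget A <span class="price">$19.99</span></li>
  <li>Widget B <span class="price">$24.50</span></li>
</ul>
"""

parser = PriceParser()
parser.feed(sample_html)
print(parser.prices)  # ['$19.99', '$24.50']
```

In a real scraper you would replace `sample_html` with the body of `urllib.request.urlopen(competitor_url).read()` (or a more robust HTTP client), run this on a schedule, and diff the collected prices against your own.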
I started learning Drupal development in late 2009 to complete 2 projects for one of my most important clients. I had to scrape data from another site, create nodes dynamically, and then publish that data under different categories with SEO-friendly URLs, plus handle some management tasks in the backend. Doing all this, I fell in love with Drupal. I did some more projects at the beginning of this year and started working with it professionally, and now it is giving me good results. My latest Drupal development was for a big real estate site, automating the importing and updating of their properties from files supplied by agent companies.
That's all about my recent development focus. Thanks for reading.