    Qwanturank SEO contest

    A few days ago, Qwant launched its SEO contest, named Qwanturank. It had been announced for a long time, so it came as no surprise. In my opinion, the real surprise is rather to see a search engine organize an SEO contest, a mixing of genres that, to my knowledge, has never been seen on the Web since its creation. So why did Qwant set up this challenge? That's the question I asked myself, and it is at the origin of this article...

    SEO contests aren't my thing...

    I have to say at the outset that SEO contests (and there have been a lot of them over several decades of the Web) have never been my "thing". I have never participated in any of them, often for lack of time, always for lack of interest. I have always felt that the practice of "rotting" engine indexes with lots of useless sites and pages did not really move things forward and was certainly not in line with my vision of the business. That said, when the contests targeted totally invented queries (from "stork eater" to "sentimancho" to "tiger osmosis"), the impact was low after all, because it didn't bother many people. When a contest was about common and competitive queries on which "normal" sites were already trying to rank, on the other hand, it bothered me more...

    But I also know that some people had a lot of fun during these contests, had a really good time (even if, it must be said, some ethically questionable methods were also used...) and keep memorable memories of them. These contests also gave rise to many vocations in the profession. In that respect, they have certainly been useful.

    The only time I was involved in an SEO contest was with the .alsace extension and the CCI of Alsace, because I had asked that participants submit real content sites and that the methods used be strictly "white hat", a point I personally checked on each site (some were disqualified at the time for not following this rule). One must remain consistent with one's convictions...

    In short, SEO contests have never really been my thing (without judging those who take part, of course). Whether or not I participated in these contests doesn't really matter anyway. But at least I can say that I have enough distance to talk about them, and that passion won't make me talk nonsense.

    Qwanturank: what are the objectives for Qwant?

    Let's talk about Qwanturank, the Qwant contest. I want to ask the question of the reason for this contest: what does it bring to the search engine? Roughly speaking, I see several possibilities:

    • Notoriety? Certainly not, since the contest only concerns the SEO microcosm, which already knows Qwant. So nothing to expect from this side. Besides, the press coverage of the contest seems very limited for the moment.
    • A friendlier image of the engine in the eyes of the SEO community? Indeed, having largely missed its launch 6 years ago, Qwant had alienated many SEO providers at the time with sweeping statements and totally incoherent communication. The contest could soften its image in a community where many people are fond of this kind of challenge.
    • A response to recent criticism of the engine, accusing it of using Bing's index and results rather than running a truly "clean" engine of its own. This is a possibility, even a strong probability.

    In any case, Qwant's official statement leaves no doubt: "So it's time for SEO experts to put it to the test. To do so, we must constantly improve our algorithms and index in order to always offer better results and ensure our technological sovereignty. The objective is to test our algorithms in real conditions on new keywords without history, which will allow equal treatment between participants."

    At least it has the merit of being clear: the avowed goal is to test the engine's indexing and algorithm. So let's go with this vision, which seems the most logical and the most likely: the goal of the contest is to show that Qwant is a "real" engine, with an index, an algorithm and all the trimmings...

    Note that, on Twitter, the answer was slightly different in spirit: "Now imagine, from the point of view of the engineers working on the Qwant search engine, what they will learn by analyzing different strategies. It's instructive: analyzing the algorithm's behavior, analyzing the participants' strategies. All on a clean basis, a keyword without history, and web pages created at the same starting point on the timeline. So to answer your question, the purpose of this contest is to train engineers and improve Qwant's algorithms by analyzing the evolution of a contest from A to Z over 6 months." The answer here explains that the goal is, in a way, to train engineers in spam techniques so they can better fight them afterwards.

    That said, we can now analyze the impact of the contest and its interest for each of the "bricks" of an engine.

    Indexation

    Can the "Qwanturank" contest be used to test Qwant's ability to crawl the Web at large? Honestly, it seems impossible. Indeed, if a site wants to participate in the contest, it must submit its application through an online form, which returns a meta tag to integrate into its code. But from the moment you submit your site, the crawl work is already pre-digested, since Qwant then has its URL. It just has to visit this address to explore the site, and that's it. Let's imagine that 50 sites are competing and that each site has 100 pages (which is already very unlikely and rather a high estimate): we arrive at a total of 5,000 pages to crawl, which is within the reach of any entry-level crawler that can be found all over the Web in open source. A Screaming Frog or similar can do the job very well for a few euros at most.
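    To give an idea of the order of magnitude, here is a minimal sketch of such an entry-level crawler in Python, using only the standard library (the seed URL is purely hypothetical): a few thousand pages spread over a handful of small sites is a trivial workload for a script like this, let alone for a real engine.

    # Minimal single-site crawler sketch (standard library only). The seed
    # URL below is hypothetical. It fetches pages breadth-first, stays on
    # the submitted domain and stops after a page budget -- roughly the
    # workload Qwant faces once a participant hands over its URL via the
    # contest form.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import Request, urlopen

    class LinkExtractor(HTMLParser):
        """Collects href values from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=5000):
        domain = urlparse(seed).netloc
        queue, seen, pages = deque([seed]), {seed}, {}
        while queue and len(pages) < max_pages:
            url = queue.popleft()
            try:
                req = Request(url, headers={"User-Agent": "toy-crawler/0.1"})
                html = urlopen(req, timeout=10).read().decode("utf-8", "replace")
            except Exception:
                continue  # skip unreachable or non-HTML pages
            pages[url] = html
            parser = LinkExtractor()
            parser.feed(html)
            for href in parser.links:
                link = urljoin(url, href).split("#")[0]
                if urlparse(link).netloc == domain and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return pages

    if __name__ == "__main__":
        # Hypothetical participant site, for illustration only
        site = crawl("https://www.example-participant-site.com/")
        print(f"Crawled {len(site)} pages")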

    It is therefore impossible to imagine that the quality of the crawl by Qwant's robots (put plainly: the ability to crawl the Web) can be assessed through the results and workings of this contest. Moreover, if the goal was to make the naysayers understand that Qwant does not (or no longer) use Bing, it would have been more logical to ask the participants to block crawling by Bingbot, via the robots.txt of their sites. At least the situation would have been clear on that level. After reading the rules, nothing seems to be planned in that respect. That's a pity.

    To go further on this subject: if Qwant agrees to see its index more or less "rotted" by sites of no interest (because there is still a good chance that this will be the case, judging by the first participating sites, and on a large scale given the prizes offered to the winners), would it not have been wise to spare the other engines and ask, still via robots.txt, that Googlebot and the others be blocked from crawling? In short, logically, in the context of a contest organized by Qwant, participating sites should only be crawled by Qwant's robot(s). That would have seemed logical and consistent to me. But maybe I'm wrong... In any case, that's not what was done.
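    For illustration, such a policy would only take a few lines of robots.txt on each participating site. This is a hypothetical sketch, assuming Qwant's crawler identifies itself as "Qwantify"; the other user-agent names are the standard ones used by Bing and Google.

    # robots.txt sketch for a contest site: allow only Qwant's crawler
    # (assumed here to identify itself as "Qwantify") and block the others.
    User-agent: Qwantify
    Disallow:

    User-agent: Bingbot
    Disallow: /

    User-agent: Googlebot
    Disallow: /

    # Catch-all for every other crawler
    User-agent: *
    Disallow: /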

    Finally, only sites created for the occasion, whose domain name was bought after the announcement of the challenge, are accepted in this contest. It is therefore impossible to participate with an "old" site (of the Abondance type), which is a pity because it would have allowed the robot's crawl to be analyzed on large sites in the context of the contest. Obviously, crawling very small, recent sites is much easier...

    In short, evaluating Qwant's ability to crawl the entire Web by showing that it can do so on a few thousand pages is simply impossible, if not totally inconsistent. As far as crawling is concerned, it's a failure, in my opinion. No conclusion can be drawn from this contest.
