In a relatively rare and much-appreciated move, Microsoft issued an apology for its IE9 TestCenter, which included wrong tests and wrong success percentages for all major browsers. Let's not push that discussion further; the issue is now closed.

But the problem raises a logical discussion about Tests, their goal and their fate. In my personal opinion, Tests are of two kinds: the tests a browser vendor writes to help internally improve its layout engine, and the tests the standards body (read: the W3C in our case) uses to demonstrate that a spec can leave Working Draft status and move along the Recommendation track. Initially, these two categories were distinct and their goals were different, even if their intersection is not empty. Nowadays, browser vendors submit their test suites to the Consortium, and their tests feed the specs' Test Suites. That's good, that's really very good.

But Tests are also used these days to compare implementations, and I think that's bad if it's done by the browser vendors themselves. I'm probably influenced by my French local context, where comparative ads are forbidden. But I think you cannot claim to play fair competition while maintaining rather harsh marketing practices. Comparing browsers should not be done by browser vendors because it's not neutral from a Browser War point of view.

Engineers working for different browser vendors are competitors on the market, even if that word has less and less meaning in a world of standards compliance. We're competitors but often friends too. There's often deep respect and trust among us, because true geekiness is a world of trust. We work together in W3C Working Groups, and you'll find there an atmosphere that hardly suggests a daily Browser War.

I honestly prefer a world where browser vendors demonstrate THEIR OWN quality to a world where they demonstrate the weaknesses of others. Last time I checked, a product was evaluated in light of its feature set and overall quality, not in light of the weaknesses of competing products.

I'm urging browser vendors to adopt marketing practices more in line with the way we work in standards bodies: with respect. Saying the competitor is bad on a marketing web page is not the best way to prove your own product is the best, because it opens a Pandora's box: you'll rapidly face other marketing web pages demonstrating that your browser sucks compared to the competition on other technologies or, as in our case today, that some of your tests were wrong, tainting the whole set of results and even the marketing process. In other words, you are holding a double-edged knife. One edge wounds the competitors, but the other harms your own hand... In the end, it's the wrong way to go.

Microsoft, show me the value of YOUR browser. Competitors of Microsoft, show me the value of YOUR browser. And let the press aggregate the data and show the masses, with comparative charts, who's the best. Thanks.