I read with great interest Nicholas C. Zakas's article about the speed difference between getElementsByTagName() and querySelectorAll(). Since he tested this with Gecko, let me give more details here.

Nicholas is only partly right in his article. The fact that querySelectorAll() deals with the full power of CSS selectors does matter a lot here. When a call to that method is made, Firefox does the following:

  1. parse the selector into an nsCSSSelectorList; I'm sure you will easily agree that this is more expensive than dealing with a single token representing one element name or a "*" in getElementsByTagName().
  2. match that nsCSSSelectorList against the document tree; since a selector in that list can contain any kind of simple selector, combinator or pseudo, this requires far more testing than a plain tag-name walk (a rough benchmark sketch follows this list).

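If you want to get a feel for the gap yourself, a rough micro-benchmark like the one below shows the two code paths side by side; the helper name, iteration count and queried tag are mine and purely illustrative, only the relative difference means anything, and it varies across engines and builds:

    // Time many calls to each method; read .length so the lazy, live
    // collection returned by getElementsByTagName() actually gets populated.
    function time(label: string, fn: () => void, iterations = 1000): void {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) {
        fn();
      }
      console.log(`${label}: ${(performance.now() - start).toFixed(2)} ms`);
    }

    time("getElementsByTagName('div')", () => {
      void document.getElementsByTagName("div").length;
    });

    time("querySelectorAll('div')", () => {
      void document.querySelectorAll("div").length;
    });
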
So could selectorMatches() be optimized to avoid testing classes, attribute selectors and the whole set of CSS selectors using a RuleProcessor when the queried selector is restricted to, say, one single element name or one single ID? Probably not worth the extra bit of code. But querySelectorAll() could fall back to a static clone of the results of getElementsByTagName(), getElementById() or even getElementsByClassName() when it is given such a single simple selector, in other words a DOM Level 1 equivalent for the call. That would require some rather easy tests in nsGenericElement::doQuerySelectorAll to check whether the selectorList is "atomic" or not.
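At the script level, the shape of that shortcut would be something like the sketch below; the function name fastQuerySelectorAll and the regular expressions are mine and purely illustrative, and the real check would of course live in Gecko's C++ code, not in JavaScript:

    // Illustrative only: detect an "atomic" selector (single tag, #id or
    // .class) and delegate to the cheaper DOM Level 1 lookups, returning a
    // static snapshot the way querySelectorAll() does.
    function fastQuerySelectorAll(root: Document, selector: string): Element[] {
      const s = selector.trim();

      // Single type selector, e.g. "div"
      if (/^[a-zA-Z][a-zA-Z0-9]*$/.test(s)) {
        return Array.from(root.getElementsByTagName(s));
      }
      // Single ID selector, e.g. "#content"
      if (/^#[\w-]+$/.test(s)) {
        const el = root.getElementById(s.slice(1));
        return el ? [el] : [];
      }
      // Single class selector, e.g. ".comment"
      if (/^\.[\w-]+$/.test(s)) {
        return Array.from(root.getElementsByClassName(s.slice(1)));
      }
      // Anything else: full selector parsing and matching, as today.
      return Array.from(root.querySelectorAll(s));
    }

A real implementation would also have to preserve the exact NodeList semantics and document order, but the test for "atomicity" itself stays cheap.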

That said, the results of getElementsByTagName() are also cached: a second call with the same argument is much faster than the first one, something you can hardly do for querySelectorAll() given the diversity of its possible arguments...
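You can also observe from script how different the two return values are: getElementsByTagName() hands back a live collection (which is what makes that caching possible), while querySelectorAll() returns a static NodeList computed at call time. A small illustration, assuming a page whose body you can append to:

    // The live HTMLCollection reflects later DOM mutations; the static
    // NodeList from querySelectorAll() does not.
    const live = document.getElementsByTagName("p");
    const snapshot = document.querySelectorAll("p");

    const before = { live: live.length, snapshot: snapshot.length };
    document.body.appendChild(document.createElement("p"));

    console.log(before.live, "->", live.length);         // grew by one
    console.log(before.snapshot, "->", snapshot.length); // unchanged
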

All in all, querySelectorAll() is slower mostly because it does much, much more than getElementsByTagName(), and it does it very differently. Well, it *has* to do it very differently. Perhaps the optimization suggested above would be a good thing...