# <Glazblog/>

## Standards

Tuesday 28 December 2010

## The current CSS Gradients mess...

(this article uses SVG and MathML; Safari has issues with it because of the HTML mimetype, so please prefer Chrome or Firefox)

Let's take a square box: height and width are the same. We want to apply a red-to-black background gradient at, let's say, α degrees, running through the center C (50%, 50%) of the box. The W3C gradients draft says we find the start and end points of the gradient this way:

Let's suppose the size of the box is 100%*100%. In that case, finding the coordinates of the end point (for instance) is easy:

• α is our user-chosen angle
• let β be the angle between the horizontal and the line from C to D (the corner of the box lying in the gradient's direction); we have $\tan\beta = \frac{C_y - D_y}{D_x - C_x}$
• the distance between C and D is of course $l = \sqrt{(C_y - D_y)^2 + (C_x - D_x)^2}$
• the distance between C and the end point is then $l' = l \cdot \cos(\beta - \alpha)$
• and the coordinates of our end point are then $\left(C_x + l'\cos\alpha,\ C_y - l'\sin\alpha\right)$
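Under the same conventions (a y axis pointing down, and D taken as the top-right corner, so valid for angles between 0 and 90 degrees), the steps above can be sketched in Python; the function name and the corner choice are mine, for illustration only:

```python
import math

def gradient_end_point(width, height, alpha_deg):
    """End point of a linear gradient at alpha degrees through the
    center C of a width x height box, following the construction
    above. D is the top-right corner (valid for 0 <= alpha <= 90)."""
    cx, cy = width / 2.0, height / 2.0
    dx, dy = width, 0.0                    # corner D, y axis pointing down
    alpha = math.radians(alpha_deg)
    beta = math.atan2(cy - dy, dx - cx)    # tan(beta) = (Cy - Dy) / (Dx - Cx)
    l = math.hypot(cy - dy, dx - cx)       # distance between C and D
    l_prime = l * math.cos(beta - alpha)   # distance from C to the end point
    # y decreases upward, hence the minus sign on the sine term
    return (cx + l_prime * math.cos(alpha), cy - l_prime * math.sin(alpha))

# A 45-degree gradient on a 100x100 box ends exactly at the top-right corner:
print(gradient_end_point(100, 100, 45))
```

For α = 0 the same construction yields (100, 50), the midpoint of the right edge, as expected for a plain horizontal gradient.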

Of course, Gecko-based gradients use a start point and an angle to define a linear gradient while WebKit-based gradients use a start point and an end point.

But according to the above, we will get different absolute coordinates for our start and end points depending on the box's size even if the angle remains the same.

The above means that it's not possible, in the general case, to derive a -webkit-gradient(linear, ...) from a -moz-linear-gradient(...), or vice versa, without having access to the element's size.
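To make the size dependency concrete, here is a small self-contained Python sketch (the 30-degree angle and the box sizes are arbitrary picks for the demonstration, and D is taken as the top-right corner): the same angle produces different end points, even expressed as percentages of the box, for two boxes of different aspect ratios, so no size-independent conversion formula can exist.

```python
import math

def end_point_percent(width, height, alpha_deg):
    """End point of an alpha-degree gradient through the center,
    expressed as percentages of the box size. Sketch only: D is the
    top-right corner, valid for angles between 0 and 90 degrees."""
    cx, cy = width / 2.0, height / 2.0
    alpha = math.radians(alpha_deg)
    beta = math.atan2(cy, width - cx)      # angle of the line C-D
    l_prime = math.hypot(cy, width - cx) * math.cos(beta - alpha)
    x = cx + l_prime * math.cos(alpha)
    y = cy - l_prime * math.sin(alpha)
    return (100 * x / width, 100 * y / height)

# Same 30-degree angle, two different aspect ratios:
square = end_point_percent(100, 100, 30)   # square box
wide   = end_point_percent(200, 100, 30)   # wide box
print(square, wide)                        # different relative coordinates
```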

Conclusion: sorry, BlueGriffon will not output WebKit-based gradients outside of the trivial cases; it's just not possible.

Sunday 31 October 2010

## Where is Daniel

Attending the W3C Technical Plenary Meeting in Lyon. Back at the end of the week. Don't forget the W3C Meetup in Lyon, 4 November 2010, 7pm.

Saturday 25 September 2010

## the CSS Working Group needs you

(this message is posted with my CSS WG Co-chair hat on)

Yes, we need you. CSS 2.1 is a complex specification, and it has roughly 20,000 HTML4 and XHTML1 tests in its Test Suite. To make the document move from Candidate Recommendation to Proposed Recommendation, we need to show that each and every test in that Test Suite is passed by at least two different implementations. And that's where you can help:

if you have a few spare cycles and are able to run a few hundred or a few thousand of the tests in the Test Suite with the latest version (see below) of Opera, Firefox 4 beta, IE or WebKit, please help us by focusing on the least-tested tests or the ones that have only zero or one passing implementations.

The results are aggregated into a database. Thanks a lot for your help!

Builds to be tested (and only those, please):

Wednesday 22 September 2010

## Earthquake

Chris Wilson leaves Microsoft for Google.

Tuesday 22 June 2010

## ZDnet.fr

I have just been interviewed by ZDnet.fr about the W3C, HTML and all that. I will post the full interview on this blog after its publication (which apparently won't happen right away).

Thursday 3 June 2010

## Interview of Wolfgang Kriesing at SWDC 2010

Interview of Wolfgang Kriesing about Mobile Web Apps during SWDC 2010.

## Interview of Chris Heilmann at SWDC 2010

Interview of Chris Heilmann from Yahoo! at SWDC 2010.

## Interview of Rik Arends at SWDC 2010

Interview of Rik Arends about Ajax.org during SWDC 2010.

## Interview of Dylan Schiemann at SWDC 2010

Interview of Dylan Schiemann about Dojo during SWDC 2010.

## Interview of Robert Nyman at SWDC 2010

Robert Nyman interviewed on HTML5 during SWDC 2010.

Thursday 1 April 2010

## W3C HTML5/CSS3 Meetup - Paris, Wednesday 7 April 2010 - 7:00pm

Saturday 20 March 2010

## The IE9 Test Center

In a relatively rare and much appreciated move, Microsoft issued an apology for its IE9 TestCenter that included wrong tests and wrong success percentages for all major browsers. Let's not push that discussion further, the issue is now closed.

But the problem raises a logical discussion about Tests, their goal and their fate. In my personal opinion, Tests are of two kinds: the tests a browser vendor writes to help improve its layout engine internally, and the tests the standards body (read: the W3C in our case) uses to demonstrate that a spec can leave Working Draft status and move along the Recommendation track. Initially these two categories were distinct and their goals were different, even if their intersection was never empty. Nowadays, browser vendors submit their test suites to the Consortium and their tests feed the specs' Test Suites. That's good, that's really very good.

But Tests are also used these days to compare implementations, and I think that's bad when it's done by the browser vendors themselves. I'm probably influenced by my French local context, where comparative ads are forbidden. But I think you cannot claim to compete fairly while engaging in rather harsh marketing practices. Comparing browsers should not be done by browser vendors, because it's not neutral from a Browser War point of view.

Engineers working for different browser vendors are competitors on the market, even if this word has less and less meaning in a world of Standards Compliance. We're competitors but often friends too. There's often deep respect and trust among us, because true geekiness is a world of trust. We work together in W3C Working Groups, and you'll find there an atmosphere that hardly resembles a daily Browser War.

I honestly prefer a world where browser vendors demonstrate THEIR OWN quality to a world where they demonstrate the weaknesses of others. Last time I checked, a product was evaluated in the light of its feature set and overall quality, not in the light of the weaknesses of competing products.

I'm urging browser vendors to adopt marketing practices that are more in line with the way we work in standards bodies: with respect. Saying the competitor is bad on a marketing web page is not the best way to prove your own product is the best, because it opens a Pandora's box: you'll rapidly face other marketing web pages demonstrating that your browser falls short of competitors on other technologies or, as in our case today, that some of your tests were wrong, tainting the whole results and even the marketing process. In other words, you hold a double-edged knife: one edge wounds your competitors, but the other cuts your own hand... In the end, it's the wrong way.

Microsoft, show me the value of YOUR browser. Competitors to Microsoft, show me the value of YOUR browser. And let the press aggregate the data and show the masses who's the best with comparative charts. Thanks.

Wednesday 23 December 2009

## Microsoft, Word, i4i, XML #2

This is what I wrote on the 12th of August:

Microsoft ordered to stop selling Word... And basically most Office products and Visio and and and.

First personal reaction is shock; second reaction is "oh wait, I4I????"; third reaction is "oooooh shit".

Just for the record, and that's something the CNet article does not mention, I4I acquired Grif's assets when it collapsed... Oh, and my old boss Jean Paoli (XML 1.0 co-editor) moved from Grif to Microsoft a while before that.

I4I filed the patent in July 1994, i.e. at a time when the idea of a unified DOM and DOM API was starting to percolate slowly into the SGML community. As a matter of fact, the patent is not about the Web but really about SGML. Please note that the USPTO took four years to grant the patent!!! Four years, that's more than a generation in our web world. In 1994, the Web was still almost unknown. In 1998, the Web had already changed the world.

I am unfortunately not sure this patent fight qualifies as patent trolling. Patents on software are incredibly harmful: they are too weak a shield for the innovators who hold them, and a burden on the innovators who don't. Let's compare code, not ideas.

I was right. Microsoft just lost and has to pay \$290 million. For those of you who don't really understand what's going on here and how it could affect the XML world, let me explain a bit...

The original authors of XML had two kinds of document instances in mind. The first kind, well-known, conforms to a document model. Call it a DTD or a schema or whatever; such documents conform to some sort of structural description, and only what's allowed by that structure is found in the documents; validators are the tools that can confirm a given document conforms to a given structure. On the other hand, well-formed documents are documents that are XML, with tags and everything, but don't have a structure. You design them as you need them; you're the sole user of the format, so you don't really need a specified structure or validation.

"Custom XML" lives between these two species. If you're working with documents conforming to a given document model, how do you insert "custom" tags (no, don't think namespaces, think 1994...) in these documents, retaining validity and still enabling load/edit/save and everything? That's the purpose of i4i's patent.

Does it affect our daily work on XML, or our future work? I don't think so. First, inserting arbitrary XML tags without an associated DTD/schema and namespace into a given instance is nowadays probably a very marginal use case. Second, you could always declare an arbitrary namespace for your user-defined tags and let the user agent treat the document as a single document tree (in other words, you don't need to separate structure and content and recreate an internal structure for your arbitrary tags, and that is the heart of i4i's patent). Third, i4i's patent was filed at a time when the DOM and namespaces did not exist, and we now handle compound XML instances in a different way. Fourth, schemas can control where extra XML elements are allowed (the case of DTDs is a bit more complex).
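As an illustration of the second point, here is a minimal Python sketch (the namespace URIs and tag names are made up for the example): a user-defined tag declared under its own namespace simply lives in the same document tree as the host vocabulary, with no separate structure to maintain.

```python
import xml.etree.ElementTree as ET

# A host vocabulary plus a user-defined tag in its own (made-up) namespace
doc = """<doc xmlns="http://example.org/base"
             xmlns:my="http://example.org/custom">
  <title>Hello</title>
  <my:note>user-defined content</my:note>
</doc>"""

root = ET.fromstring(doc)

# Both vocabularies end up in one and the same document tree;
# ElementTree expands each tag to {namespace-URI}localname.
tags = [child.tag for child in root]
print(tags)
```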

Let's summarize: Microsoft fell into a nice trap, probably because their Legal Department did not do its job well enough. \$290m, that's severe, and a few lawyers deserve a kick in the butt. I also think the whole debate (and, to be more precise, the case) is totally rotten. Microsoft was judged on the presence of a "custom XML editor" add-on in Word, but I see no clear facts in the ruling about a technical infringement of i4i's patent. In other words, yes, Microsoft implemented and shipped a "custom XML editor", and a "custom XML editor" is described in i4i's patent; but no, it's not clear at all that they implemented it using the methods described in i4i's patent...

Again, I do believe software patents are a serious threat to software in general. In this case, code and algorithms were not even compared, and I find that not only ridiculous but also dangerous.

Wednesday 16 December 2009

## Browser War 2009, my W3C/INRIA talk of 25 November 2009

The title says it all, I think; it's in French and it's here.

Thursday 19 November 2009

## Talk at INRIA Sophia-Antipolis

I will be at INRIA Sophia-Antipolis on the morning of 25 November to give a one-hour talk entitled "Browser War 2009". Also present will be W3C employees including Bert Bos (co-inventor of CSS, former chairman of the CSS WG, Style Activity Lead at the W3C and current W3C Staff Contact of the CSS WG), and probably others. If you are interested in the state of the art of Web standards, want to see some pretty stunning demos of the future that Web browsers are preparing for us, or would like to learn about the W3C and why you should join the World Wide Web Consortium, admission is free (subject to available seats, of course...). Important note: the talk will be given in French, as its title indicates.

See you Wednesday!

Thursday 12 November 2009

## HTML 5 needs a way to say some element's contents are ajax-based

Webchunks (the Firefox add-on) lets you select an arbitrary element in a web page and turn it into a webchunk. Nothing magical here. A Webslice is, per Microsoft's "spec", an element carrying an ID and the class slice; in other words, an element selected by the selector #thatID.slice. Webchunks just extends that notion to allow any CSS selector, not only that one.

But selecting an arbitrary element can lead to unexpected results. Let's take a concrete example:

• the web page liberation.fr has a block of recent news on the right hand side ("Dernières Dépêches")
• that block can be selected using the selector #e1b3-1
• but the contents of that block are not available at load time; they are fetched through Ajax...
• so comparing a cached version of that element and a recently downloaded version is meaningless if you don't let all JavaScript code in the page run and compare them only afterwards...

I am thus unable to tell whether liberation.fr updated its Last News section just by loading the source of that HTML page... Painful.

Conclusion: I am missing something standard here (an attribute, a class, whatever, I don't care) in the DOM to let me know that the contents of an element are retrieved over the wire after page load. I am also missing a standard event letting me know those contents are now loaded and that the "final" content of the element can be observed.

Thursday 22 October 2009

## <a onlyreplace>

From time to time, the Web standards community unearths proposals made long - and sometimes very long - ago. That's currently the case with the excellent <a onlyreplace> thread on the WHATWG mailing-list. Although still unpolished, the proposal does make sense and is not logically very different from the overlays XUL authors use on a daily basis.

Thursday 8 October 2009

## My ParisWeb 2009 talk

http://www.slideshare.net/glazou/paris-web2009-one-web

Tuesday 6 October 2009

## ParisWeb 2009, I'll be there!

I'll be at ParisWeb 2009 on Thursday and Friday! What about you?

Wednesday 30 September 2009

## Message to past and present members of the CSS Working Group

The CSS Working Group is going to meet in Santa Clara, CA on the 2nd and 3rd of November (that's the Technical Plenary Meeting of the W3C). Since we did almost nothing for the tenth anniversary of CSS, we plan to have an informal gathering of all past and present members of the Group (warning: this list probably lacks a few names) who can attend, on the evening of the 1st of November, for a drink (not covered by us). Just for the pleasure of meeting again. Please ping me at daniel AT glazman DOT org if you participated in the CSS WG and would like to attend!
