Monday 19 September 2011


I had an interesting online chat with a friend in the US this morning (well, this night for him...). The topic was HTML and the recent Microsoft announcements. While we see HTML reach the next level with desktop apps, the lingua franca of the Web is still far, really far, from having everything that's needed for desktop apps, or even for web-based desktop-like apps. Form controls improved a lot between html4 and html5, but a wide range of UI elements remains out of reach. Well, let me explain before you shout: they can be coded, but each and every site has to reinvent the wheel with lots of UI theming and JavaScript-based controls. That's bad, because it's incoherent and expensive. HTML+CSS still lacks major, really major stuff like:

  • real menus
  • tabboxes
  • pop-ins and more powerful tooltips
  • trees (and when I say trees, I mean tree-like widgets like the XUL <tree>, able to render tens of thousands of lines without making the user experience awful)
  • better integration with the OS/WindowManager
  • etc.
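To illustrate the wheel-reinvention below: a tab box, one of the simplest widgets in that list, still requires every site to ship its own markup convention, theming and switching script. A minimal sketch, with all class names invented for the example:

```html
<!-- hypothetical hand-rolled tab box; nothing here is a native widget -->
<div class="tabbox">
  <ul class="tabs">
    <li class="tab selected">General</li>
    <li class="tab">Advanced</li>
  </ul>
  <div class="panel">General settings</div>
  <div class="panel hidden">Advanced settings</div>
</div>
<script>
  // every site rewrites this switching logic, with its own theming on top
  var tabs = document.querySelectorAll(".tab");
  var panels = document.querySelectorAll(".panel");
  for (var i = 0; i < tabs.length; i++) {
    (function (index) {
      tabs[index].addEventListener("click", function () {
        for (var j = 0; j < tabs.length; j++) {
          tabs[j].classList.toggle("selected", j === index);
          panels[j].classList.toggle("hidden", j !== index);
        }
      }, false);
    })(i);
  }
</script>
```

And that's before keyboard navigation, accessibility and OS-native theming, which a native widget would give for free.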

While we (the Standards Community) are now focusing on super-mega-hyper-useful stuff like an API to get the battery status (sic...), we're still unable to include such widgets in a web or desktop app with a native look & feel. There are zillions of frameworks to ease the pain, but none of them is ready for desktop apps, and HTML+CSS+JS desktop apps need them to become mainstream AND cross-platform.

I think a new standardization effort between HTML and CSS is then needed to make HTML-based UI happen. Let's look back at XUL and XAML a bit... Thoughts?

Thursday 1 September 2011

W3C Workshop Program: A Local Focus for the Multilingual Web

A W3C Multilingual Web workshop will be held in Limerick, Ireland, co-located with the 16th LRC Conference and hosted by the University of Limerick. I'll give the keynote speech, titled "Babel 2012 on the Web", on the 21st of September. See you there!

Saturday 25 June 2011

Why html5 elements INS and DEL suck

I have said it multiple times here, in W3C mailing-lists and in public between 1998 and now, but apparently it must be said again and again: the current HTML5 Last Call Working Draft - which does not reach at all the quality of other LCWDs in the W3C and did not meet the basic requirements for a LCWD in the W3C Process - still has not addressed this erratum. So let me repeat it: the html5 ins and del elements suck and should be dropped in favor of a better solution.

  • ins and del are, by definition, both inline-level and block-level elements. If, in a Wysiwyg editor, you select the textual contents of a paragraph, turn on a "Visible Modification Marks" feature and hit the Delete or Backspace key, the editor has the choice between <del><p>....</p></del> and <p><del>...</del></p>. The user has no way to tell the difference between the two, but the two are NOT strictly equivalent. In the latter case, it is still theoretically possible to place the caret in the paragraph but BEFORE or AFTER the del element and insert new characters. In the former case, the whole paragraph is deleted and the user can't insert anything inside it any more.
  • In the latter case just above, it's impossible for the user to know whether a caret placed at the beginning of the paragraph is before the paragraph, inside the paragraph but before the del element, or at the beginning of the del element.
  • Much more importantly, ins and del cannot cover one trivial case: since there is no equivalent to SGML inclusions in XML (see for instance this link for a rather clean explanation), the following is impossible: <ul><del><li>a</li></del><li>b</li></ul>. It is for instance totally impossible to mark an element as entirely deleted if the parent container's content model does not allow the del element...
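The problems above are easy to see in markup form. The first two serializations render identically to the user yet behave differently for editing, and the third snippet is simply invalid because the content model of ul only allows li children:

```html
<!-- deleting a whole paragraph: two non-equivalent serializations -->
<del><p>Some deleted paragraph.</p></del>
<p><del>Some deleted paragraph.</del></p>

<!-- invalid: del is not allowed as a child of ul -->
<ul><del><li>a</li></del><li>b</li></ul>
```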

The situation is unfortunately very clear: the ins and del elements as they exist now in the various html specs are unable to provide editing environments with a workable and predictable solution for Visible Modification Marks, the primary reason why the elements were originally introduced in HTML 4. As a matter of fact, almost no Wysiwyg editor implements them.

For the n-th time in 13 years, I strongly recommend dropping the ins and del elements in favor of the following attributes. All elements inside the body element should be able to carry them.

  • change attribute; possible values: inserted or deleted, optionally followed by whitespace and one of the keywords reviewed or to-be-reviewed.
  • review-by attribute; an arbitrary value, meaningful only when the change attribute contains the to-be-reviewed value, meant to be displayed for human consumption; can be for instance a name, an email address, a Twitter id, etc.
  • reviewed-by attribute; an arbitrary value, meaningful only when the change attribute contains the reviewed value, meant to be displayed for human consumption; can be for instance a name, an email address, a Twitter id, etc.
  • the cite and datetime attributes as currently defined in the html5 spec

This is the minimal attribute set needed to resolve the issue. Another attribute "tagging" the potential reviews of the proposed change could also be added.
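Here is what the attribute-based proposal above could look like in practice; the attribute names and value keywords come from the list above, while the reviewer ids and text are of course made up for the example:

```html
<p change="deleted to-be-reviewed"
   review-by="@glazou"
   datetime="2011-06-25T10:00:00Z">
  This paragraph is marked as deleted, pending review.
</p>

<p change="inserted reviewed"
   reviewed-by="editor@example.org">
  This paragraph was inserted and the insertion has been reviewed.
</p>
```

Note how the attributes sit on the elements themselves: deleting a whole list item is just `<li change="deleted">`, with no container problem at all.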

I really hope this change is going to happen. Again, the current ins and del html elements are totally hopeless.

Tuesday 21 June 2011


@johnfoliot If browser vendors spent 1/10th the time trying to kill off  #longdesc actually fixing the support, issue would be closed years ago

Tuesday 24 May 2011


I am hesitating between hilarity and shock. IMHO, W3C's HTML WG died today. Again.

Saturday 19 February 2011

A world of trust

Our geek world is a world of trust. Just like at the Diamond Bourse in Antwerp, people belong to only two categories: trustable or not trustable. In Software, most of the people around us are trustable. Around me, almost everyone is trustable, almost everyone has always been trustable. It's so rare to find someone untrustable that it always hits me as a shock. I still remember marca's words about the three challenges a company faces: "hire, hire and hire". A corollary of the hiring process is trust. Hire only people you trust. Hire only people you can respect. Hire only people who can do better than you, if they're not already doing better than you.

So yesterday, I had a shock. Three shocks, to be more precise. That's a bad beginning for 2011, since nobody had shocked me to that level in the last five years. Grrr.

PS: in fact, there are two other categories at the Diamond Bourse: those who can speak Yiddish and the others.

Wednesday 19 January 2011

The HTML... hum... logo

I discovered yesterday the HTML 5 "logo" and I find it completely misses its target. Except for the name, nothing in the logo's design is clearly related to the Web. Change "HTML" in that logo to "Interstate" and it could well be a road sign...

I already had a chance to give my opinion about the current "HTML 5 is everything" buzz during the last W3C Technical Plenary Meeting in Lyon. I find it counter-productive and in fact harmful. Oh, that's the only acronym journalists use to describe "the Open Web Platform"? Since when do journalists DO things instead of WRITING ABOUT them?

Being the co-chairman of the CSS Working Group, I am also puzzled by the "CSS 3 / Styling" thingy that goes with the "HTML 5 logo". See for yourself:

CSS 3 / Styling

Hum, to say the least! I just don't understand this beast. What the hell is it supposed to tell me? CSS, Presentation, Style, Fonts? Really?!?

Speaking only for myself here: who can seriously think I am going to use such a meaningless horror (see below) anywhere? Hmmm?

HTML5 Powered with CSS3 / Styling, Device Access, Graphics, 3D & Effects, Multimedia, Performance & Integration, Semantics, and Offline & Storage

Tuesday 28 December 2010

The current CSS Gradients mess...

(this article uses SVG and MathML; Safari has issues with it because of the HTML mimetype; please prefer Chrome or Firefox)

Let's take a given square box. Height and width are the same. We want to apply a red-to-black gradient background at, let's say, α degrees, through the center C (50%, 50%) of the box. The W3C gradients draft says we find the start and end points of the gradient this way:

[Figure: a box with corners (0%,0%) and (100%,100%), its center C, a corner D, the gradient line through C at angle α, the line (C, D) at angle β, and the resulting start and end points.]

Let's suppose the size of the box is 100%×100%. In that case, finding the coordinates of the end point (for instance) is easy:

  • α is our user-chosen angle
  • let β be the angle between the horizontal and the line (C, D); with D the relevant corner of the box we have tan β = (Cy − Dy) / (Dx − Cx)
  • the distance between C and D is of course l = √((Cy − Dy)² + (Cx − Dx)²)
  • the distance between C and the end point is then l′ = l · cos(β − α)
  • and the coordinates of our end point are then (Cx + l′ · cos(α), Cy − l′ · sin(α))
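Under my reading of the formulas above (screen coordinates with the y axis pointing down, α measured counter-clockwise from the horizontal, and D taken as the top-right corner, the relevant one for angles between 0° and 90°), the end point can be computed like this; function and variable names are mine, not from any spec:

```javascript
// Sketch of the computation above, under the stated assumptions.
function gradientEndPoint(width, height, alphaDeg) {
  var alpha = alphaDeg * Math.PI / 180;
  var C = { x: width / 2, y: height / 2 };        // center of the box
  var D = { x: width, y: 0 };                     // top-right corner (y down)
  var beta = Math.atan2(C.y - D.y, D.x - C.x);    // angle of the line (C, D)
  var l = Math.sqrt(Math.pow(C.y - D.y, 2) +
                    Math.pow(C.x - D.x, 2));      // distance between C and D
  var lp = l * Math.cos(beta - alpha);            // l' = l cos(beta - alpha)
  // note: for non-square boxes the end point may land outside the box
  return { x: C.x + lp * Math.cos(alpha),
           y: C.y - lp * Math.sin(alpha) };
}

// same angle, two box sizes: two different absolute end points
console.log(gradientEndPoint(100, 100, 45)); // square box: the corner itself
console.log(gradientEndPoint(200, 100, 45)); // wider box: a different point
```

For a 100×100 box at α = 45° this yields the corner (100, 0); for a 200×100 box the same angle yields (175, −25), so the absolute coordinates indeed depend on the box's size.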

Of course, Gecko-based gradients use a start point and an angle to define a linear gradient, while WebKit-based gradients use a start point and an end point.

But according to the above, we will get different absolute coordinates for our start and end points depending on the box's size even if the angle remains the same.

[Figure: the same gradient angle α applied to two boxes of different sizes, yielding different start and end points.]

The above means that it's not possible, in the general case, to derive a -webkit-gradient(linear, ...) from a -moz-linear-gradient(...) - and vice-versa - without having access to the element's size.

Conclusion: sorry, BlueGriffon will not output WebKit-based gradients outside of the trivial cases; it's just not possible.

Sunday 31 October 2010

Where is Daniel

Attending W3C Technical Plenary Meeting in Lyon. Back at the end of the week. Don't forget the W3C Meetup in Lyon, 04-nov-2010 7pm.

Saturday 25 September 2010

the CSS Working Group needs you

(this message is posted with my CSS WG Co-chair hat on)

Yes, we need you. CSS 2.1 is a complex specification, and it has roughly 20,000 HTML4 and XHTML1 tests in its Test Suite. To make the document move from Candidate Recommendation to Proposed Recommendation, we need to show that each and every test in that Test Suite is passed by at least two different implementations. And that's where you can help:

if you have a few spare cycles and are able to run a few hundred or a few thousand of the tests in the Test Suite against the latest version (see below) of Opera, Firefox 4 beta, IE or WebKit, please help us by focusing on the least-tested tests or the ones that currently have only 0 or 1 passing implementation.

The results are aggregated into a database. Thanks a lot for your help!

Builds to be tested (and only those, please):

Wednesday 22 September 2010


Chris Wilson leaves Microsoft for Google.

Tuesday 22 June 2010


I've just been interviewed by ZDNet.fr about the W3C, HTML and all that. I will post the full interview on this blog after they publish it (which apparently won't be right away).

Thursday 3 June 2010

Interview of Wolfgang Kriesing at SWDC 2010

Interview of Wolfgang Kriesing about Mobile Web Apps during SWDC 2010.

Interview of Chris Heilmann at SWDC 2010

Interview of Chris Heilmann from Yahoo! at SWDC 2010.

Interview of Rik Arends at SWDC 2010

Interview of Rik Arends about Ajax.org during SWDC 2010.

Interview of Dylan Schiemann at SWDC 2010

Interview of Dylan Schiemann about Dojo during SWDC 2010.

Interview of Robert Nyman at SWDC 2010

Robert Nyman interviewed on HTML5 during SWDC 2010.

Thursday 1 April 2010

W3C HTML5/CSS3 Meetup - Paris, Wednesday 7 April 2010 - 7pm

Sign up!!!

Saturday 20 March 2010

The IE9 Test Center

In a relatively rare and much appreciated move, Microsoft issued an apology for its IE9 Test Center, which included wrong tests and wrong success percentages for all major browsers. Let's not push that discussion further; the issue is now closed.

But the problem raises a logical discussion about Tests, their goal and their fate. In my personal opinion, Tests are of two kinds: the tests a browser vendor writes to help improve its layout engine internally, and the tests the standards body (read: the W3C in our case) uses to demonstrate that a spec can leave Working Draft status and move along the RECommendation track. Initially, these two categories were different and the goals were different, even if the intersection is not empty. Nowadays, browser vendors submit their test suites to the Consortium and their tests feed the specs' Test Suites. That's good, that's really very good. But Tests are also used these days to compare implementations, and I think that's bad if it's done by the browser vendors themselves. I'm probably influenced by my French local context, where comparative ads are forbidden. But I think you cannot enter a fair-competition mode and keep rather harsh marketing practices. Comparing browsers should not be done by browser vendors, because it's not neutral from a Browser War point of view.

Engineers working for different browser vendors are competitors on the market, even if that word has less and less meaning in a world of Standards Compliance. We're competitors but often friends too. There's often deep respect and trust among us, because true geekiness is a world of trust. We work together in W3C Working Groups, and you'll find there an atmosphere that hardly suggests an everyday Browser War.

I honestly prefer a world where browser vendors demonstrate THEIR OWN quality over a world where they demonstrate the weaknesses of others. Last time I checked, a product was evaluated in the light of its feature set and overall quality, not in the light of the weaknesses of competing products.

I'm urging browser vendors to adopt marketing practices that are more in line with the way we work in standards bodies: respect. Saying the competitor is bad on a marketing web page is not the best way to prove your own product is the best, because it opens a Pandora's box: you'll rapidly face other marketing web pages demonstrating that your browser sucks compared to its competitors on other technologies or, as in our case today, that some of your tests were wrong, plaguing the whole results and even the marketing process. In other terms, you hold a double-edged blade: one edge wounds the competitors, but the other harms your own hand... In the end, it's the wrong way.

Microsoft, show me the value of YOUR browser. Competitors to Microsoft, show me the value of YOUR browser. And let the press aggregate the data and show the masses who's the best with comparative charts. Thanks.

Wednesday 23 December 2009

Microsoft, Word, i4i, XML #2

This is what I wrote last 12th of August:

Microsoft ordered to stop selling Word... And basically most Office products and Visio and and and.

First personal reaction is shock; second reaction is "oh wait, i4i????"; third reaction is "oooooh shit".

Just for the record, and that's something the CNet article does not mention: i4i acquired Grif's assets when Grif collapsed... Oh, and my old boss Jean Paoli (XML 1.0 co-editor) moved from Grif to Microsoft a while before that.

i4i filed the patent in July 1994, i.e. at a time when the idea of a unified DOM and DOM API was starting to percolate slowly into the SGML community. As a matter of fact, the patent is not about the Web but really about SGML. Please note that the USPTO took four years to grant the patent!!! Four years, that's more than a generation in our web world. In 1994, the Web was still almost unknown. In 1998, the Web had already changed the world.

I am unfortunately not sure this patent fight is a patent troll's. Patents on software are incredibly harmful: they are too weak a shield for the innovators who hold them and a burden on the innovators who don't. Let's compare code, not ideas.

I was right. Microsoft just lost and has to pay $290 million. For those of you who don't really understand what's going on here and how it could affect the XML world, let me explain a bit...

The original authors of XML had two kinds of document instances in mind. The first ones, well-known, conform to a document model: call it a DTD or a schema or whatever, these valid documents conform to some sort of structural description, and only what's allowed by that structure is found in the documents; validators are the tools that can confirm a given document conforms to a given structure. On the other hand, well-formed documents are documents that are XML, with tags and everything, but don't follow a declared structure. You design them as you need them; you're the sole user of the format, so you don't really need a specified structure and validation.

"Custom XML" lives between these two species. If you're working with documents conforming to a given document model, how do you insert "custom" tags (no, don't think namespaces, think 1994...) in these documents, retaining validity and still enabling load/edit/save and everything? That's the purpose of i4i's patent.

Does it affect our daily work on XML, or our future work? I don't think so. First, inserting arbitrary XML tags without an associated DTD/schema or namespace into a given instance is nowadays probably a very marginal use case. Second, you could always declare an arbitrary namespace for your user-defined tags and let the user agent treat the document as a single document tree (in other terms, you don't need to separate structure and content and recreate an internal structure for your arbitrary tags, and that separation is the heart of i4i's patent). Third, i4i's patent was filed at a time when the DOM and namespaces did not exist, and we now handle compound XML instances in a different way. Fourth, schemas can control where extra XML elements are allowed (the case of DTDs is a bit more complex :-) ).
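The second point above, sketched as markup (the namespace URI and tag names are invented for the example): user-defined tags carrying their own namespace live in the same document tree as the structured content, with no separate internal structure to maintain:

```xml
<doc xmlns:c="http://example.org/custom-ns">
  <p>The <c:term>widget</c:term> shipped in <c:when>1994</c:when>.</p>
</doc>
```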

Let's summarize: Microsoft fell into a nice trap, probably because its Legal Department did not do its job well enough. $290m, that's severe, and a few lawyers deserve a kick in the butt. I also think the whole debate (and, to be more precise, the case) is totally rotten. Microsoft was judged on the presence of a "custom XML editor" add-on in Word, but I see no clear facts in the ruling about a technical infringement of i4i's patent. In other words, yes, Microsoft implemented and shipped a "custom XML editor", and a "custom XML editor" is described in i4i's patent; but no, it's not clear at all that they implemented it using the methods described in i4i's patent...

Again, I do believe software patents are a serious threat to Software in general. In this case, code and algorithms were not even compared, and I find that not only ridiculous but also dangerous.
