<Glazblog/>

Tuesday 28 May 2019

LR = La Rouste

Les Républicains have thus taken a massive, heavyweight thrashing. Two years after the presidential election, this party has one foot rather firmly in the grave. Despite the tearful commentary from some (search Twitter, but be warned, it makes for difficult early-morning reading: anti-abortion activists, deeply hardcore Catholics, and so on), Bellamy's campaign was not merely not good, it was bad. Let me explain...

Before talking about the man himself, we need to talk about strategy, campaign line and target electorate. The positioning of the European campaign was very clearly caught between two stools, between Fillon and Wauquiez. Yet the former failed to rally even his own party when he would have needed to reach well beyond it to break through, while the latter is an absolute repellent for the RPR/UMP old guard, who accuse him of unbridled "buissonisme" (rightly so, by the way...). Such a campaign, torn between bourgeois, traditionalist-Catholic values on one side and themes brushing far too close to those of the far right on the other, could not hit the mark in 2019. It is fascinating to note, by the way, that only 34% of Fillon's 2017 voters voted for Bellamy in 2019, no less. That's no longer a thrashing, it's a punch in the gut.

If, like me, you closely followed the European election "campaign", what has LR been offering us on TV and on the radio recently?

  • one of the most visible and most recognizable figures was Eric Ciotti. His lack of culture, his breathtaking incompetence, his out-of-touch, scattershot interventions make LR look ridiculous. He rubs most people the wrong way, even within his own party.
  • Bellamy is a philosopher and city councilman in Versailles (pronounce it Versââââââilles) and is utterly incapable of not showing it in his language, his presentation, his style, everything. He does not speak to the masses, and he barely speaks any better to his own electorate, whose entire history is one of following a Leader, with a capital L, not an unknown pulled out of a hat. He comes out of nowhere and takes positions worthy of 1960 in the France of 2019. He successively lost the non-Catholics, the non-bourgeois, the progressives and the women, and he finished by taking a position on end-of-life issues completely at odds with what the majority of this country's population wants. He does exactly what the Church does in France: the opposite of what people expect, and then wonders why the churches are empty. As for his position on abortion (yet another male jerk obsessing over women's uteruses instead of just trying to out-piss the next guy), one would happily recommend he move to a state in the American South. The vast majority of French people are horrified by what is happening over there, and by what happened in Ireland or Poland. Being that far against the current could only cost dearly.
  • no one has forgotten that Retailleau was Fillon's faithful lieutenant and that he still holds the keys to his micro-party. His positions have often been archaic, debilitating for his own party. The strategy of relying on the support of Sens Commun and the Manif pour Tous was devastating; it has not been forgotten and remains, for many LR voters, unforgivable and above all unacceptable.
  • the same goes for Valérie Boyer, who no longer knows what to invent in order to exist.
  • Laurent Wauquiez does not go down well in his own party. The graft is not taking. In my town, a historic RPR/UMP stronghold if ever there was one, LR got 16.36% while LREM got 39.07%. And I am delighted to note that the brownshirts re-dyed in beige got less than 10% there.
  • Sarkozy wisely refrained from putting a finger into such a snake pit, and he was right to.
  • as for ideas and proposals, LR was in an abyssal void. The campaign was invisible because it was, in fact, nonexistent. The entire press got it wrong: a good-looking young man who speaks well within his own microcosm, plus his party's Coué method of telling itself "really, things are moving!", is not enough.
It is incredible to see that LR really only has two names left to pull out of the hat: Nicolas Sarkozy on one hand and Xavier Bertrand on the other. Bertrand now has every chance, and I believe him perfectly capable of providing the modernizing, more centrist impulse needed to rebuild LR. But it will be difficult, and he must be asking himself a lot of questions right now. Take the risk of being atomized along with a party in a near-brain-dead coma, or continue a fine local career and try his luck directly at the very top?

Thursday 21 February 2019

ParcoursSup and Computer Science in France, utterly totalitudinous consternation

I just spent several hours with my son chewing through ParcoursSup, after spending quite some time helping him look for the right program. I had thought, I had believed, that I had not fathered two computer freaks for sons, but hey, reality caught up with me... The first wrote his first Android game at 15, and the second is passionate about artificial intelligence and Machine Learning.

While the elder dove into a MathsSup as the normal follow-up to a Terminale S (even though he could very well have applied to SciencesPo), the younger is less interested in the frankly insane pace of the CPGE prep classes. I can't say he's wrong... So we looked for something else.

Well, the result is fairly clear:

  • there are quite a few "computer science programs" in France, public and private. But "quite a few" remains far from enough. France graduates per year what Bangalore alone graduates; and in India, there's more than just Bangalore...
  • the private programs rarely seem up to the level of modern industry needs; they're designed to feed the SSII/ESN (IT services companies), not really to generate high-level innovation
  • the universities offer good programs, but only after quite a while, and with an obviously strong Research orientation... First you have to slog through endless hours of subjects you'll swallow for the exam, regurgitate during the exam, and obviously forget right after the exam. Any resemblance to the signal theory learned at TélécomParis by computer nuts like me is not coincidental.
  • the Ensimag (I myself entered MathSup to try to get in after stumbling on their brochure in Terminale, but I took a bit of a detour ;-)) proudly displays on its home page that it is an "École d'application de Polytechnique", but it has no integrated curriculum worthy of the name that is decoupled from the CPGE (and their meat grinder). The computer freaks who graduate from the Ensimag first had to prove to the whole world that they know Zermelo's Lemma, can chat about molecular diastereoisomerism, and above all know that the true hard scientist's corkscrew must be named Maxwell. All things that 98% of Ensimag students probably care about as much as their first sock, and will never use again in their lives. The Ensimag is a magnificent program, but gated behind the Concours Commun Polytechnique. Idiotic.
  • the Epita, whose name triggers in me a "fond" (between quotes) memory for reasons best left unmentioned here, very clearly seems to me the one and only serious French post-Bac program in Computer Science. The names InfoSup and InfoSpé given to their first and second years are well chosen, and fitting. The wanderings of Epita's early years have clearly been forgotten for ages, and they obviously position computer science not as a mere tool for mathematics or physics, but as a higher discipline (and I insist heavily on that adjective) in its own right. Granted, a minimal grounding in the "hard" sciences is on the curriculum, as is only proper; but there is no cramming, and nothing useless.

As for ParcoursSup, my son had to answer a self-administered multiple-choice questionnaire for the university/IUT applications. He chose, oh surprise, Maths/Computer Science. The questions were appallingly poor, even downright stupid. When I say self-administered, it means the kid sits alone in front of his web browser with tab 1 open on the questionnaire; it goes without saying that the slacker (not my son, though) will have tab 2 open on Wikipedia or any other immediately useful resource. What bashi-bazouk invented this thing? For some questions it was even worse: a four-minute video (yes, really) had to be watched, and the answers to three questions were given DIRECTLY in the video... All that done, the kid must then provide n, with n large as the saying goes, cover letters and even CVs! For Zeus' sake, we're talking about a 17-year-old barely out from behind his mother's skirts, with no professional experience whatsoever, precisely because he is counting on his post-Bac education for that. And with this we're going to flood every institution in France with nearly 700,000 cover letters times 10 applications per kid, i.e. nearly 7 MILLION COVER LETTERS. Let's stop being hypocrites: the vast majority of them are never read, the number of man-years needed to read them all is staggering; so they end up binned without further ado, and we should cut our losses before the next ParcoursSup session requires a stool sample to complete the application.

In conclusion, I would say that France needs some twenty Epitas. It also needs, as always, a good kick in the backside where Computer Science is concerned, but there, unfortunately, nothing new under the sun. As for ParcoursSup, it's a band-aid on a wooden leg. The French higher-education system was finalized at a time when fewer than 70% of an age cohort passed the Bac. Yet in 2018, the general Bac had a pass rate of 91.1%, no less. Our system is saturated, it can't take any more, it is cracking everywhere. We are paying for decades of under-investment in what is the country's future, its education. Let's stop taking people for fools: ParcoursSup is the selection we are not, for now, allowed to apply. Open your eyes: the new proletariat is ready, and it has the Bac, and nothing but the Bac...

PS: it took decades for INRIA to finally leave the destitute prefab buildings of Rocquencourt. I will long remember that American researcher I met at Stanford telling me about his bewilderment upon arriving at Rocquencourt, in a prefab between the fire station and the expressway... That says it all.

Wednesday 23 January 2019

WebExtensions v3 considered harmful

The Open Web Platform is a careful and fragile construction that billions of people, including millions of implementors, rely on. HTML, CSS, JavaScript, the Document Object Model, the Web API and more are all standardized one way or another; that means vendors and stakeholders gather around a table to discuss all changes, and that these changes must pass quality and/or availability criteria to be considered "shippable".

One notable absentee from the list of Web Standards is WebExtensions. WebExtensions is the generalized name for Google Chrome Extensions, which became mainstream when Google achieved dominance over the desktop browser market and when Mozilla abandoned its own, much more powerful, add-ons system based on XUL and privileged scripts.

As a reminder, the WebExtension API allows coders to implement extensions to the browser based on:

  • HTML/CSS/JS for each and every dialog created by the extension, including the ones "integrated" into the browser's UI
  • a dual model with "background scripts" with more privileges than "content scripts" that get added to visited web pages
  • a new API (the WebExtension API) that offers - and rather strictly controls - access to information that is not otherwise reachable from JavaScript
  • a permissions model that declares what parts of the aforementioned API the extension uses and which remote URLs the embedded scripts can access
  • a URL model that puts everything in the extension under a chrome-extension:// URL
  • a review process (on the Google Chrome Extension store) supposed to block harmful code and more
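To make that model concrete, here is a minimal, hypothetical manifest illustrating it; all names, file names and URLs are made up for the example, and a real extension would of course need more:

```json
{
  "manifest_version": 2,
  "name": "Hypothetical Example Extension",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"]
  },
  "content_scripts": [
    {
      "matches": ["https://*/*"],
      "js": ["content.js"]
    }
  ],
  "permissions": [
    "storage",
    "webRequest",
    "webRequestBlocking",
    "https://api.example.com/*"
  ]
}
```

The background script gets the privileged API, the content scripts get injected into visited pages, and the permissions array is what the store review process inspects.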

A while ago, back when Microsoft still had its own rendering engine, it initiated a Community Group on WebExtensions at the World Wide Web Consortium (W3C). With members from most browser vendors plus a few others, this seemed to be a very positive move, not only for implementors but also for users.

But unfortunately, that effort went nowhere. Between the lack of commitment from other browser vendors (Google in particular), Microsoft abandoning its own rendering engine, and a lax Community Group instead of a formal W3C Working Group, the WebExtensions draft specification has been in limbo for a while now, and WebExtensions clearly remain the poor relation of Web Standards even though most people have at least one browser extension installed (usually some sort of ad blocker).

Today, Google is pushing a deep change in its WebExtensions model:

  • Background HTML pages will be deprecated in favor of Service Workers. That change alone will imply a complete rearchitecture of existing extensions and will also impact their ability to create and deal with the dialogs their UX model requires.
  • The webRequest API, which billions of users activate on a daily basis to block advertising, trackers or undesirable content, is at stake and is to be replaced by a new declarative API that will no longer allow monitoring of the requested resources. At a time when the Web's advertising model is being harmed by ad blockers, one can only wonder whether this change is driven by technical considerations alone or whether ad strategy is also behind it... Furthermore, it will be limited to a few tens of thousands of declarations, which is far below the number of trackers and advertising scripts in the wild today.
  • Some heavily used APIs will be removed, without consideration for usage metrics or the cost of the change to implementors
  • Even the description of the top level of an extension (aka the "browser action" and the "page action") will change and impact extension vendors
  • All of that is, for the time being, decided on the Google side alone, with little or no visible contact with the other WebExtensions host (Mozilla) or the thousands of WebExtension providers (free or commercial). There is even a "migration plans" document, but it's not publicly available, the link being access-restricted
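To see what is at stake, here is a rough sketch of the kind of MV2-style blocking webRequest listener that the declarative replacement would rule out. The blocklist hosts are invented for the example, `shouldBlock` is a hypothetical helper, and the `chrome.webRequest` registration (which needs the "webRequest" and "webRequestBlocking" permissions) is shown commented out for context:

```javascript
// Hypothetical blocklist for the example; real blockers ship tens of
// thousands of rules, which is precisely what the new limits threaten.
const BLOCKLIST = ["tracker.example", "ads.example"];

// Pure helper: should this URL be blocked? Matches a listed host
// or any of its subdomains.
function shouldBlock(url) {
  const host = new URL(url).hostname;
  return BLOCKLIST.some((b) => host === b || host.endsWith("." + b));
}

// In a real MV2 background script (requires the "webRequest" and
// "webRequestBlocking" permissions plus host permissions in manifest.json):
//
// chrome.webRequest.onBeforeRequest.addListener(
//   (details) => ({ cancel: shouldBlock(details.url) }),
//   { urls: ["<all_urls>"] },
//   ["blocking"]
// );
```

The point of the blocking form is exactly the returned `{ cancel: ... }` object: the extension sees every request and decides, in code, whether it goes through. A declarative API cannot express that decision logic.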

On the webRequest part specifically, all major actors of the ad-blocking and security landscape are screaming (see also the chromium-extensions Google group). We at Privowny are also deeply concerned by the proposed v3 changes. Even Amnesty International complained in a recent message! To me, the most important message posted in reply to the proposed changes is the following one:

Hi, we are the developer of a child-protection add-on, which strives to make the Internet safer for minors. This change would cripple our efforts on Chrome.

Talk about "don't be evil"...

All of that gives a set of very bad signals to third-party implementors, including us at Privowny:

  1. WebExtensions are not a mature part of the Open Web Platform. They completely lack stability, and software vendors willing to use them must be ready for changes that are life-threatening (for them) at any time
  2. WebExtensions are fully in the hands of Google, which can and will change them at any time based solely on its own interests. They are not a Web Standard.
  3. Google is ready to make WebExtensions diverge from cross-browser interoperability at any time, killing precisely what brought vendors like us at Privowny to WebExtensions.
  4. Google Chrome is not what it seems to be: a browser based on an Open Source project that protects users, promotes openness and can serve as a basic tool for web citizens' protection.

Reading the above, and given the fact that Google is able to push changes of such magnitude with little or no impact study on vendors like us, we consider that WebExtensions are no longer a safe development platform. We will probably soon study extracting most of our code into a native desktop application, leaving only the minimum minimorum in the browser extension to communicate with web pages and, of course, with our native app.

After Mozilla, which severely harmed its amazing add-ons ecosystem (remember, it triggered Firefox's success), and after Apple, which partly walked away from JavaScript-based Safari extensions, jeopardizing its add-ons ecosystem so badly that it's now anemic (dying, I could even say), Google is making a move that is harmful to Chrome extension vendors. What is striking here is that Google is making the very same mistake Mozilla did: no prior discussion with stakeholders (read: extension implementors), release of a draft spec that was obviously going to trigger strong reactions, unmeasured impact (complexity, time and money) on implementors, more and more restrictions on what is possible to do, and a too-limited set of new features.

On the legal side of things, this unilateral change could probably even qualify as "abuse of dominant position" under the European Union's Article 102 TFEU, and could then cost Google a lot, really a lot...

The Open Web Platform is alive and vibrant. The Browser Extension ecosystem is in jail, subject to unpredictable harmful changes decided by one single actor. This must change; it's not viable any more.

Monday 7 January 2019

An open letter to Agnès Buzyn

Madame la Ministre, dear cousin (I am the son of Sarah Burzyn; my father Maurice and I send our warm regards to Elie. Maurice, who is happily still with us too, would be delighted to see him again),

I wish to alert you today to a recent change that gravely impacts people suffering from painful, often unclassified, rheumatic conditions. Versatis 700mg, a product whose marketing authorization (AMM) covers shingles pain, is a medicated Lidocaine patch. Under prescription, it is often and easily used for persistent tendon or joint pain, with very positive results: while it cures nothing, it can for instance reduce pain to the point of allowing sleep, and it presents little danger. Its usefulness in rheumatology is proven, and easy to demonstrate.

Yet, I repeat, this product's AMM covers shingles. Faced with the surge in rheumatological uses of the product, it was removed from reimbursement for any other use as of 1 January 2019. But each box of 30 patches costs a trifling 70€ or so... Even when prescribed by a rheumatologist, even when formally prescribed by the Rheumatology Center of the Henri-Mondor CHU, one must now pay full price for Versatis. This delisting therefore leaves all "rheumato" users of Versatis facing an impossible choice: permanent and/or disabling pain, or a substantial budget of unplanned expenses.

I personally use this product for a very painful plantar fasciitis, and my partner for a very painful unclassified rheumatic condition.

I therefore have the honor of asking you to cancel this delisting as a matter of urgency. At a time when pain is finally being taken into account, this delisting is an incomprehensible signal and leaves patients with no fallback option. While I can afford this expense, the idea that others cannot, and that this delisting creates an inequality in the face of pain, is unbearable to me.

Respectfully,

Daniel Glazman

Sunday 9 December 2018

Edge and Chromium, a different analysis

I am quite surprised by all the public reactions I have read about Microsoft's latest browser moving to Chromium. I think most if not all commenters have missed the real point, a real point that seems to me way bigger than Edge. Even Mozilla's CEO Chris Beard has not mentioned it. People at Microsoft must be smiling and letting out loud French « Ahlala... ». Let me remind everyone that a browser is, from a corporate point of view, a cost center and not a revenue center; if you're really nitpicking, you can call it a center of indirect revenue. So let's review and analyse the facts:

  • I am surprised by the codename supposedly attached to that future version of Microsoft's browser, Anaheim. That codename is not confirmed by Microsoft but I find it quite surprising for a web browser... First, Anaheim is in California and not in Washington State where most of the browser stuff is supposed to happen; yes, it's a detail but still, it's a surprising one. Secondly, Anaheim is really a weird codename in the history of browser codenames at Microsoft. So what happened in Anaheim, CA? A decisive meeting?
  • The blog article about Edge and Chromium was published by a Corporate Vice President of the Windows division. That's absolutely not normal for a browser-only decision.
  • Edge's and IE's market shares are, sorry my dear Microsoft friends, not enough to care that much about such a change. Yes, the browser ecosystem is like a real ecosystem, and the loss of genetic diversity that EdgeHTML's retirement implies (see also immediately below) is a global concern. But from a business point of view, nothing to see here, sorry.
  • The blog article and the Github readme page (most people have not seen that one...) say Edge will switch to Chromium. They don't say that EdgeHTML will die. As a matter of fact, EdgeHTML itself is mentioned in the blog article's title and only there, and not at all in the GH page.
  • Microsoft's CEO is currently driving a change that sounds to me like a new Samsung « Change everything but your wife and children ». The tech debt at Microsoft is immense, and Nadella has rung the alarm bell.

So I think the whole thing is not about Edge. The microcosm reacted, and reacted precisely as expected (again, probable laughter in Redmond), but this is really about Windows and the core of Microsoft's activity. Driving a change like a move to Chromium, and announcing it publicly through a Windows CVP, is, beyond technical and business choices, a political signal. It says « expect the unexpected ».

I think Microsoft Windows as we know it is about to change, and change drastically. Windows as we know it could even die, with Microsoft moving to another, new, different operating system, the Edge+Chromium announcement being only the tip of the iceberg. And it's well known that nine tenths of an iceberg remain below the water surface.

The company's center of gravity is then about to change too; Nadella probably knows all too well the impact the Windows division had on the rest of the company during the Vista years, and he certainly knows all too well the inter-division wars at Microsoft. It may be high time to shake the whole thing up. As I told Dean Hachamovitch long ago, « you need a commando and what you have now is a Mexican army with a lot of generals and not enough soldiers ». Still valid?

Of course, I could be partially or even totally wrong. But I don't think so. This announcement is weird on too many counts, and it's most certainly on purpose. It seems to be telling us « guys, read between the lines, the big message is right there ».

Monday 27 August 2018

Applications on OS X

Do you bitterly miss the « Anywhere » option that let you install and launch applications on your OS X without any check on their origin?

Before

A simple command line can help:

sudo spctl --master-disable

After quitting and relaunching System Preferences, your favorite option will be back:

After

To revert to the previous state, another command line:

sudo spctl --master-enable

Saturday 28 July 2018

Gerv, oh Gerv :-(

Gervase Markham

Thursday 7 June 2018

Browser detection inside a WebExtension

Just for the record, if you really need to know about the browser container of your WebExtension, do NOT rely on StackOverflow answers... Most of them are based, directly or not, on the User-Agent string. So spoofable, so unreliable. Some recommend relying on a given API implemented by Firefox and not Edge, or by Chrome and not the others. In general, valid for a limited time only... You can't even rely on chrome, browser or msBrowser, since there are polyfills for those to make WebExtensions cross-browser.

So the best and cleanest way is probably to rely on chrome.extension.getURL("/"). Its result can start with "moz", "chrome" or "ms-browser". Unlikely to change in the near future. Simple to code, and it works in both content and background scripts.
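As a sketch (assuming, per the above, that the extension's base URL scheme is moz-extension://, chrome-extension:// or ms-browser-extension:// depending on the host browser), the check can be isolated in a small pure helper fed with the result of chrome.extension.getURL("/"); the function name is mine, not a standard API:

```javascript
// Map the extension's base URL to a browser name. The schemes below are
// the ones observed at the time of writing; "unknown" covers anything else.
function browserFromBaseURL(baseURL) {
  if (baseURL.startsWith("moz-extension://")) return "firefox";
  if (baseURL.startsWith("chrome-extension://")) return "chrome";
  if (baseURL.startsWith("ms-browser-extension://")) return "edge";
  return "unknown";
}

// In the extension itself (content or background script):
// const browserName = browserFromBaseURL(chrome.extension.getURL("/"));
```

Keeping the string comparison in a pure function makes it trivially testable outside the browser.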

My pleasure :-)

Wednesday 2 May 2018

Nominating Florian Rivoal for a seat at the W3C Advisory Board

The World Wide Web Consortium (W3C) is at a crossroads. There are multiple reasons for that:

  • it's slow, extremely slow. Its Process, which rules the daily life of the Membership, cannot be changed fast even in the face of a major hiccup. It's not an exaggeration to say that important problems take years to fix, even when there is a clear, critical and immediate issue.
  • it's opaque and weirdly managed. Elections to the Advisory Board (AB) and the Technical Architecture Group (TAG) are the only elections I know of where the candidates' scores remain secret after the election ends. Even the candidates, successful or not, don't know their scores. It's the only organization I know where votes of the Membership can be biased by changes to the requests during the course of a vote... It's the only Standards Committee I know where the management regularly interferes with the local Process. The finances of the W3C are also opaque, with a Staff whose salaries have been frozen for many years now, a non-incorporated structure (not even as a Foundation) and extremely complicated relationships between the MIT-based foot of the W3C and its European and Asian feet.
  • it's unable to acknowledge the fact that some crucial parts of the Open Web Platform (OWP) it voluntarily abandoned long ago are now gone. html, DOM and other major layers of the OWP are now in the smart hands of the WHATWG. As an example, nobody really cares about the W3C versions of html and DOM because they're not what's implemented, because they diverge from implementations; it's just not usable in a real production environment. A clear side-effect is a counter-productive state of war with WHATWG that affects everyone and everything.
  • the merger with IDPF, mostly done behind closed doors and with little interaction with the Membership, is suboptimal, to say the least. The former IDPF has recreated its ivory tower inside the W3C. The original Charters of the Publishing Groups were not conformant to the W3C Process; the very spirit of these Groups is not conformant to the W3C spirit and its usual way of producing Standards. As I said above, the W3C is at a crossroads, but it has experience and expertise in delivering Standards to billions of people, like it or not.
  • the W3C Director, our Maaaaaster, is mostly gone. He's on a sabbatical now, but he has been mostly absent from W3C daily activities for the last decade. Decisions, technical or process-wise, are always made « in the name of the Director », but the Director no longer shows up, except for big events like Plenary Meetings. In my 7.5-year tenure as CSS WG Co-chair, not a single spec transition conference call was held by the Director (and the CSS WG is one of the major groups of the W3C). The W3C is unable to acknowledge that absence and materialize it in its Process.
  • a part of Standardization relies on some absolutely crucial Invited Experts that invest their own budget for us. Even the attendance cost to the Plenary Meetings can be too much for some of them. We started discussing a sponsoring scheme for Invited Experts a decade ago and we're still nowhere at all.

So I think the reasons why I ran myself in the past for a seat at the W3C Advisory Board still stand. The W3C needs reforms, and probably a management change. But I wrote here a while ago, « I am now 50 years old, I have been contributing to W3C for, er, almost 22 years and that's why I will not run any more. We need younger people, we need different perspectives, we need different ways of doing, we need different futures. We need a Consortium of 2017, we still have a Consortium of 2000, we still have the people of 2000. »

I pinged Florian Rivoal about all of that. Florian is an extremely talented, multicultural, brilliant French engineer (he also holds an MBA from INSEAD, often ranked the #1 MBA in the whole world) based in Japan. He has been a crucial contributor to the CSS Working Group, the Publishing activity and many other areas of daily W3C life, as an Advisory Committee representative for his various past employers. I trust him, I like his vision, I like his diplomatic talent, I appreciate that he deeply and truly cares about the future of the World Wide Web and the future of the W3C, and I love his technical expertise.

After a short chat, I told Florian that I wanted to nominate him for a seat on the W3C Advisory Board. After some thought, Florian accepted. Kodansha (one of the largest publishing companies in Japan) has agreed to sponsor his participation in the AB.

You can read his official candidacy in French, English, Japanese, Chinese and Korean. If your employer is a W3C Member and you're its AC-rep, please consider giving your vote to Florian; he would be a great addition to the W3C Advisory Board. And if you're not the AC-rep but your employer is a W3C Member, please consider telling your AC-rep about Florian's candidacy, with a recommendation to vote for him.

Thank you.

Thursday 1 February 2018

LibreOffice and EPUB

LibreOffice 6.0 is now available, and it was through the inevitable Korben that I discovered this morning that it has a built-in EPUB export. So let's take a closer look at that new beast and evaluate how it handles that painful task. Conformant EPUB? And which version of EPUB? Reusable XHTML and CSS? We'll see.

After installation (on a Mac), I created a new trivial text document; it contains a paragraph, a level 1 header, an image, a table, and an unordered list of three items. I did not touch fonts, styles, margins, etc. at all.

Trivial text document in LibreOffice 6.0

Then I discovered LibreOffice now has two new menu items: File > Export As... > Export directly as EPUB and File > Export As... > Export as EPUB... .

Export directly as EPUB

It directly opens a filepicker to select a destination *.epub file. Let's unzip the saved package and take a look at its guts:

  • the mimetype file is correctly placed as first file in the package and it's correctly stored without compression
  • other files are correctly stored using Deflate
  • the META-INF/container.xml is stored in last position in the zip, which is probably a mistake
  • the OPF file says it's an EPUB 3.0 package and its metadata are clean; AFAICT, the OPF file is conformant to the spec
  • XML and XHTML files in the package are serialized without carriage returns (if you except one after the XML prolog) or indentation...
  • a NCX is present
  • the Navigation Document (called toc.xhtml) and the NCX live side by side in an OEBPS folder (sigh)
  • there is an empty OEBPS/styles/stylesheet.css file
  • the content files are in an OEBPS/sections folder
  • that folder contains 2 files (!) section001.xhtml and section002.xhtml
  • looking at these files, LibreOffice seems to have split the original document at section breaks, hence the two sections found in the EPUB package
  • there is no title element in these files
  • there is clearly a problem with the exported CSS styles, the body of each generated document having no margins or paddings. And since there is no CSS reset either...
  • the set of LibreOffice styles (the leftmost dropdown in the toolbar) is not exported to CSS; the whole export relies on inline CSS styles (style attributes) and not on classes
  • the original document uses the "Liberation Serif" font, which is not registered under that name in the OS X Font Book (an old issue, well known in the OOXML world...). The final rendition in a browser is then buggy, font-wise. The font-family declarations in the document don't fall back to serif.
  • there is a very weird font-effect: outline property serialized on all paragraphs in table cells
  • strangely again, all these paragraphs have text-decoration: overline; text-shadow: 1px 1px 1px #666666; while the original text is not overlined nor shadowed
  • when a paragraph (a p in terms of OOXML) contains one single run of text (a r in OOXML), the output could be optimized getting rid of a span and adding its inline styles to the parent paragraph. The output is too verbose and will trigger issues in html editors, Wysiwyg or not.
  • the margin values in the document use a mix of inches and pixels, which is kind of weird
  • the image in the original document is lost in the EPUB package
  • headers are not generated as h1, h2, ... but as p elements with styles.
  • the EPUB version does not correctly deal with the unordered list and all list items become regular paragraphs. No ol or ul, bullet, no counter, no list-style-type. Semantics is lost.
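
As a reminder of what the container rules above imply (mimetype as the very first zip entry, stored uncompressed; everything else Deflated; META-INF/container.xml pointing at the OPF), here is a minimal sketch using Python's stdlib zipfile. The OPF path is a placeholder, not what LibreOffice actually emits:

```python
import zipfile

def make_epub_skeleton(path):
    """Create an EPUB container the OCF way: the 'mimetype' entry must be
    the first file in the zip and must be stored without compression."""
    container = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<container version="1.0" '
        'xmlns="urn:oasis:names:tc:opendocument:xmlns:container">\n'
        '  <rootfiles>\n'
        # "OEBPS/package.opf" is an illustrative path, not LibreOffice's
        '    <rootfile full-path="OEBPS/package.opf" '
        'media-type="application/oebps-package+xml"/>\n'
        '  </rootfiles>\n'
        '</container>\n'
    )
    with zipfile.ZipFile(path, "w") as z:
        # first entry, ZIP_STORED: readers sniff it at a fixed offset
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        # every other entry may use Deflate
        z.writestr("META-INF/container.xml", container,
                   compress_type=zipfile.ZIP_DEFLATED)

make_epub_skeleton("book.epub")
```

Note that this ordering is exactly why storing META-INF/container.xml last, as LibreOffice does, is suspicious even if technically tolerated.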

Firefox Quantum viewing the resulting section002.xhtml file. You can clearly see where the html+CSS export is buggy:

Firefox Quantum viewing the resulting section002.xhtml file

How iBooks sees that EPUB:

how iBooks sees that EPUB

Export as EPUB...

Aaah, that one is quite different since it first opens the following dialog:

Export as EPUB... dialog

The dialog offers the following choices:

  1. export as EPUB2 or EPUB3 (nice!)
  2. Split at Page Breaks or Headings (very nice feature but why not also a "Don't split" option?)

and confirming the dialog leads to the aforementioned *.epub filepicker.

Conclusion

This is an excellent start, really, and splitting the document at headers or page breaks is an excellent idea. Unfortunately, there are too many holes in the xhtml+CSS export at this time to make it really usable unless your document contains almost nothing but unstyled paragraphs. Some generated styles (overline?!?) are not present in the original document, it generates only paragraphs and tables, losing the header and list semantics, the LibreOffice styles are not serialized in a CSS stylesheet (bug?), and more. This will help some individuals but I am not sure it will help EPUB publication chains, at least for now.

Update: Wow. I ran an extra test: I compared the result of "Export to XHTML" with the XHTML inside an "Export to EPUB". In the former, styles are correctly exported as a stylesheet, classes are correctly used, h1 and ol/li are correctly used, the image is preserved, and the general rendering is MUCH better. So Export to EPUB has one of the two following problems: it reuses the "Export to XHTML" code and the splitting introduced a lot of bugs, OR it has its own export-to-xhtml code, which is a mistake since the existing one does quite a decent job...

Second update: LibreOffice's trunk does a significantly better job: the stylesheet is correctly generated, xhtml files get carriage returns and indentation, and images are preserved.

Thursday 18 January 2018

Announcing WebBook Level 1, a new Web-based format for electronic books

TL;DR: the title says it all, and it's available there.

Eons ago, at a time when BlueGriffon was only a Wysiwyg editor for the Web, my friend Mohamed Zergaoui asked why I was not turning BlueGriffon into an EPUB editor... I had been observing the electronic book market since the early days of Cytale and its Cybook, but I was not involved in it on a daily basis. That seemed not only an excellent idea but also a fairly workable one. EPUB is based on flavors of HTML, so I would not have to reinvent the wheel.

I started diving into the EPUB specs the very same day, EPUB 2.0.1 (released in 2009) at that time. I immediately discovered a technology that was not far away from the Web but that was also clearly not the Web. In particular, I immediately saw that two crucial features were missing: it was impossible to aggregate a set of Web pages into an EPUB book through a trivial zip, and it was impossible to unzip an EPUB book and make it trivially readable inside a Web browser, even with graceful degradation.

When the IDPF started working on EPUB 3.0 (with its 3.0.1 revision) and 3.1, I said this was coming too fast, and that the lack of test suites with interoperable implementations, as we often have in W3C exit criteria, was a critical issue. More importantly, the market was, in my opinion, not ready to absorb so quickly two major and one minor revisions of EPUB, given the huge cost to both publishing chains and existing ebook bases. I also thought - and said - that the EPUB 3.x specifications were suffering from clear technical issues, including the two missing features quoted above.

Today, times have changed and the Standards Committee that oversaw the future of EPUB, the IDPF, has now merged with the World Wide Web Consortium (W3C). As Jeff Jaffe, CEO of the W3C, said at that time,

Working together, Publishing@W3C will bring exciting new capabilities and features to the future of publishing, authoring and reading using Web technologies

Since the beginning of 2017, and with a steep acceleration during spring 2017, the Publishing@W3C activity has restarted work on the EPUB 3.x line and the future EPUB 4 line, creating an EPUB 3 Community Group (CG) for the former and a Publishing Working Group (WG) for the latter. While I had some reservations about the division of work between these two entities, the whole thing seemed to be a very good idea. In fact, I started advocating for the merger between IDPF and W3C back in 2012, at a moment when only a handful of people were willing to listen. It seemed to me that Publishing was an underrated first-class user of Web technologies and EPUB's growth was suffering from two critical ailments:

  1. IDPF members were not at W3C, so they could not confront their technical choices with browser vendors and the Web industry. It also meant they were inventing new solutions in a silo, without bringing them to W3C standardization tables and too often without even knowing if the rendering engine vendors would implement them.
  2. on the other hand, W3C members had too little knowledge of the Publishing activity, which was historically quite skeptical about the Web... Working Groups at W3C were lacking ebook expertise and were therefore designing things without having ebooks in mind.

I was then particularly happy when the merger I advocated for was announced.

As I recently wrote on Medium, I am not any more. I am not convinced by the current approach taken by Publishing@W3C, on many counts:

  • the organization of the Publishing@W3C activity, with a Publishing Business Group (BG) formally ruling (see Process section, second paragraph) the EPUB3 CG and a Steering Committee (see Process section, first paragraph), recreates the former IDPF structure inside W3C. The BG Charter even says that it « advises W3C on the direction of current and future publishing activity work », as if the IDPF and W3C had not merged and as if W3C were still only a Liaison. It also says « the initial members of the Steering Committee shall be the individuals who served on IDPF’s Board of Directors immediately prior to the effective date of the Combination of IDPF with W3C », maintaining the silo we wanted to eliminate.
  • the EPUB3 Community Group faces a major technical challenge, recently highlighted by representatives of the Japanese publishing industry: EPUB 3.1 represents too much of a technical change compared to EPUB 3.0.1 and is not implementable at a reasonable cost in a reasonable timeframe for them. Since EPUB 3 is recommended by the Japanese Government as the official ebook format in Japan, that's a bit of a blocker for EPUB 3.1 and its successors. The EPUB3 CG is then actively discussing a potential rescinding of EPUB 3.1, an extraction of the good bits we want to preserve, and the release of an EPUB 3.0.2 specification based on 3.0.1 plus those good bits. In short, the EPUB 3.1 line, which saw important clarifying changes from 3.0.1, is dead.
  • the Publishing Working Group is working on a collection of specifications known as Web Publications (WP), Packaged Web Publications (PWP), and EPUB 4. What these specifications represent is extremely complicated to describe. With a daily observation of the activities of the Working Group, I still can't firmly say what they're up to, even if I am already convinced that some technological choices (for instance JSON-LD for manifests) are highly questionable and do not « lead Publishing to its full Web potential », to paraphrase the famous W3C motto. It must also be said that the EPUB 3.1 hiatus in the EPUB3 CG shakes the EPUB 4 plan to the ground, since it's now extremely clear the ebook market is not ready at all to move to yet another EPUB version, potentially incompatible with EPUB 3.x (for the record, backwards-compatibility in the EPUB world is a myth).
  • the original sins of EPUB, including the two missing major features quoted in the second paragraph of the present article, are a minor requirement only. Editability of EPUB, one of the greatest flaws of that ecosystem, is still not a first-class requirement, if a requirement at all. Convergence with the Web is severely encumbered by personal agendas and technical choices made by one implementation vendor for its own sake; the whole W3C process based on consensus is worked around, not because there is no consensus (the WG minutes show consensus all the time) but mostly because the rendering engine vendors are still not in the loop and their potentially crucial contributions are sadly missed. And they are not in the loop because they don't understand a strategy that seems disconnected from the Web; the financial impact of any commitment to Publishing@W3C is then an understandable no-go.
  • the original design choices of EPUB, using painful-to-edit-or-render XML dialects, were also an original sin. We're about to make the same mistake, again and again, either retaining things that partly block the software ecosystem or imagining new silos that will be neither editable nor grokable by a Web browser. Simplicity, Web-centricity and mainstream implementations are not in sight.

Since the whole organization of Publishing@W3C is governed by the merger agreement between IDPF and W3C, I do not expect to change anyone's mind with the present article. I only felt the need to express my opinion, in both public and private fora. Unsurprisingly, the feedback to my private warnings was fairly negative. In short, it works as expected and I should stop spitting in the soup. Well, if that works as expected, the expectations were pretty low, sorry to say, and were not worth a merger between two standards bodies.

I have then decided to work on a different format for electronic books, called WebBook. A format strictly based on Web technologies, and when I say "Web technologies", I mean the most basic ones: html, CSS, JavaScript, SVG and friends; the class of specifications all Web authors use and master on a daily basis. Not all details are decided or even ironed out, and the proposal is still a work in progress at this point, but I know where I want to go.

I will of course happily accept all feedback. If people like my idea, great! If people disagree with it, too bad for me but fine! At least during the early moments of my proposal, and because my guts tell me my goals are A Good Thing™️, I'm running this as a Benevolent Dictator, not as a consensus-based effort. Convince me and your suggestions will make it in.

I have started from a list of requirements, something that was never done that way in the EPUB world:

  1. one URL is enough to retrieve a remote WebBook instance; there is no need to download every resource composing that instance

  2. the contents of a WebBook instance can be placed inside a Web site’s directory and are directly readable by a Web browser using the URL for that directory

  3. the contents of a WebBook instance can be placed inside a local directory and are directly readable by a Web browser opening its index.html or index.xhtml topmost file

  4. each individual resource in a WebBook instance, on a Web site or on a local disk, is directly readable by a Web browser

  5. any html document can be used as a content document inside a WebBook instance, without restriction

  6. any stylesheet, replaced resource (images, audio, video, etc.) or additional resource usable by an html document (JavaScript, manifests, etc.) can be used inside the navigation document or the content documents of a WebBook instance, without restriction

  7. the navigation document and the content documents inside a WebBook instance can be created and edited by any html editor

  8. the metadata and table of contents contained in the navigation document of a WebBook instance can be created and edited by any html editor

  9. the WebBook specification is backwards-compatible

  10. the WebBook specification is forwards-compatible, at the potential cost of graceful degradation of some content

  11. WebBook instances can be recognized without having to detect their MIME type

  12. it’s possible to deliver electronic books in a form that is compatible with both WebBook and EPUB 3.0.1

I also made a strong design choice: Level 1 of the specification will not be a fits-all-cases document. WebBook will start small, simple and extensible, and each use case will be evaluated individually and sequentially, resulting in light extensions at a speed the Publishing industry can bear. So don't tell me WebBook Level 1 doesn't support a given type of ebook or is not at feature parity with EPUB 3.x. It's on purpose.
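
One consequence of requirement 3 above is that a WebBook instance can be recognized on disk simply by its topmost index.html or index.xhtml file, with no MIME-type sniffing. As a purely illustrative sketch (this helper is my own, not part of the WebBook specification):

```python
import os

def looks_like_webbook(directory):
    """Hypothetical sanity check, not spec text: requirement 3 says a
    WebBook instance is a directory whose topmost file is index.html or
    index.xhtml, directly readable by a Web browser. We only test for
    that marker file here."""
    return any(os.path.isfile(os.path.join(directory, name))
               for name in ("index.html", "index.xhtml"))
```

A browser pointed at the directory's URL would pick up the same index file, which is the whole point of the requirement.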

With that said, WebBook Level 1 is available here and, again, I am happily accepting issues and PRs on GitHub. You'll find in the spec references to:

  • « Moby Dick » released as a WebBook instance
  • « Moby Dick » released as an EPUB3-compatible WebBook instance
  • a script usable with Node.js to automagically convert an EPUB3 package into an EPUB3-compatible WebBook

My EPUB Editor BlueGriffon is already modified to deal with WebBook. The next public version will allow users to create EPUB3-compatible WebBooks.

I hope this proposal will show stakeholders of the Publishing@W3C activity that another path to greater convergence with the Web is possible. Should they consider this proposal, I will of course happily contribute to the debate and, hopefully, the solution.

Thursday 11 January 2018

Web. Period.

Well. I have published something on Medium. #epub #web #future #eprdctn

Monday 25 December 2017

Christmas odds and ends

  • the EPUB-based electronic book world is on the verge of implosion. EPUB, under the auspices of the IDPF, which has since merged with the W3C, managed to tick just about every box of what not to do in a standard of this kind:
    • normative references to unstable documents
    • proprietary extensions not implemented by browsers
    • mechanisms nearly impossible to implement in content editors (in the eBook world, an extremely serious strategic error)
    • versions released too fast, too close to one another and, above all, without mandatory implementation reports
    • successive versions breaking backwards compatibility every single time
    • and so on and so forth

    and it's now blowing up: the Japanese publishing industry has tons and tons of EPUB 3.0.1 files that they can neither afford nor wish to migrate to EPUB 3.1. First, 3.0.1 is the electronic book format officially recommended by the famous MITI, the Japanese government's ministry of technology. Second, any migration has a cost, and the lack of backwards compatibility makes that cost explode. Finally, ebook readers are not (yet) conformant to 3.1, so there is no benefit whatsoever. Some people, including yours truly, had warned that « too early, too fast, too badly done, without validating implementations » could well end in an industrial fiasco. That's where we are now. The Japanese are asking for a 3.0.2 and would throw 3.1 in the trash if that were possible. Of course, to build 3.1, we had removed from 3.0.1 all the horrors that should never have been put there in the first place. Relaunching EPUB on a 3.0.2 would bring us back to nearly unsolvable troubles.

    I believe EPUB is now a dead end. EPUB must be killed so we can go back to fundamentals entirely based on Web standards: a zip, an index.html file and html documents, or indeed any format natively rendered by a browser. CSS, JS. No other constraint.

  • For a month now, the number of spam messages offering me bitcoin investments has exploded. And my patience with them. Bitcoin has a « price-to-earnings ratio (...) higher than that of stocks during the 1929 crisis and the dot-com bubble ». In any case, « in ten years, nobody has found a use for the blockchain yet ».
  • Yesterday, I snapped. At the 117th Web site offering to subscribe me to its notifications, and therefore the 117th "This Web site wants to send you notifications" popup in Firefox, I opened the about:config URL and changed the dom.webnotifications.enabled preference from true to false. You should do the same... Some Web standards start from a good intention, a good general idea, but end up being a real PITA for users.
  • I am currently fighting in the CSS WG against a change in the behavior of the border shorthand. Before border-image reached Candidate Recommendation, border assigned the border-*-style, border-*-width and border-*-color properties, and only those. Since then, it also resets the border-image sub-properties to their initial values. In plain words: before the change, if you set an element's border to red solid thin, side by side, you ended up with border: thin solid red in your stylesheet. Not any more. To get that, you would now also have to set the border-image-* properties to their initial values, which nobody does since border-image is almost never used on the Web. We broke the bijection by letting border reset properties it cannot set, and in doing so we changed a Web behavior that is more than twenty years old...
  • I appeared on Guillaume Erner's show SuperFail on France Culture, about SAIP.
  • I also appeared on his show Les Matins de France Culture, about the SNCF software failure.
  • The Government's attitude towards the Conseil National du Numérique has not ceased to stun me since 11 December. It is a phenomenal fiasco and I frankly wonder who will now dare take the risk of heading such a body after its "call to order"... Let's be clear: I do not like Rokhaya Diallo; but she had been chosen by Marie Ekeland and approved by the Government. The latter's about-face is therefore pathetic and, above all, counter-productive. Even "old-style politics" would not have dared act so stupidly. As for the arguments put forward by Mounir, the current Secretary of State for Digital Affairs, they were laborious to say the least. How to scuttle yourself in no time at all... I am not certain the Secretariat of State for Digital Affairs won't be the lucky winner of a handover at the next cabinet reshuffle.
  • Star Wars 8 is pretty bad, really. The script is unworthy, Adam Driver is bad, Mark Hamill is right to criticize Luke's role, the space chase is laughable, and far too many things ultimately leave a feeling of déjà-vu, of poor quality, of the unfinished, or of the never even started.
  • Apple throttles its old iPhones when their batteries age, without warning users. How is it still possible to make mistakes like that...

Tuesday 19 December 2017

Who will stop the State's runaway software?

I confess I shamelessly stole the title of a Point Éco article, because the question it asks is excellent. It perfectly matches what I said during my recent appearance on France Culture, on Guillaume Erner's morning show. Before reading on, take a few minutes to read this Libération article as well.

Ready? Good. So here it is: contrary to what the Le Point article says, I do not believe at all that « calling on start-ups » is the solution that will let the State stop spending 10 to 50, even 100 times (no less...) too much on its software. And that's when the software works, because it often happens that it never works correctly, or indeed never works at all.

The solution is a change of perspective: the State must move from « Not Invented Here » to « Invented Here ». In plain terms, it is time to stop relying mostly on external resources (IT services companies) that are horribly expensive and, above all, unreliable both now and over time. The State must, on the contrary, very significantly strengthen its internal software development capabilities. The savings made on outsourced services (SIRHEN? Half a billion euros and it's still not finished. LOUVOIS? Another half a billion euros in total and it does not work at all) would make it possible to staff not civil servants but quality contract employees paid at market rates. Even with such contract-based recruitment, and even without ripping off the State, the structure would achieve substantial savings, allowing it to keep hiring and to expand. In any case, current spending is so high that we can only do much better, and therefore save truly enormous amounts. Obviously, the structure in question would have to be strictly internal to the State; it cannot compete with the ESNs (formerly SSII) on the open market.

Bringing development in-house will avoid losing developers and their skills, will ensure the durability and above all the availability of expertise, will sharply reduce costs and, finally, will drastically improve quality. Only a permanent structure, with high-level people paid commensurately with their high level, can put a stop to the staggering waste we are witnessing today. It is perfectly feasible, and feasible quickly if we want it. A word to the wise...

Thursday 7 December 2017

Open Office XML shading patterns for JavaScript

Working on Open Office XML and html these days, I ended up reading and implementing section 17.18.78 of the ISO/IEC 29500 spec, the one dedicated to shading patterns. In words we're used to: predefined background images serving as pattern masks. It's not too long a list, but the PNGs or data URLs were not available as a public resource, and I found that rather painful. I am therefore making my own JavaScript implementation of ST_Shd public. Feel free to use it under MPL 2.0 if you need it.
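
The general trick such an implementation relies on is serving each small repeating swatch as a data URL that CSS can consume as a background-image. Here is an illustrative sketch (in Python rather than the JavaScript of the actual implementation, and with a made-up diagonal-stripe swatch, not one of the real ST_Shd bitmaps):

```python
import base64

def pattern_data_url(svg_markup):
    """Encode SVG pattern markup as a data URL usable as a CSS
    background-image. Illustrative only: the real ST_Shd patterns in
    ISO/IEC 29500 §17.18.78 are predefined bitmaps, not this SVG."""
    payload = base64.b64encode(svg_markup.encode("utf-8")).decode("ascii")
    return "data:image/svg+xml;base64," + payload

# a hypothetical diagonal-stripe swatch standing in for one ST_Shd value
stripes = ('<svg xmlns="http://www.w3.org/2000/svg" width="8" height="8">'
           '<path d="M0 8 L8 0" stroke="black" stroke-width="1"/></svg>')
css = "background-image: url(%s);" % pattern_data_url(stripes)
```

The upside of pre-generated data URLs is that no extra HTTP request or bundled image file is needed per pattern.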

Thursday 23 November 2017

XUL, Mac Touchbar, BlueGriffon

The title of this article says it all. First attempt, works fine, trivial to add to any XUL window. This is code I wrote for Postbox, used here with permission.

BlueGriffon with Mac Touchbar

Thursday 16 November 2017

BlueGriffon 3.0

I am insanely happy (and a bit proud too, ahem) to let you know that BlueGriffon 3.0 is now available. As I wrote earlier on this blog, implementing Responsive Design in a Wysiwyg editor supposed to handle all html documents, whatever their original source, has been a tremendous amount of work and something really painful to implement. Responsive Design in BlueGriffon is a commercial feature available to holders of a Basic or an EPUB license.

BlueGriffon Responsive Design

/* Enjoy! */

Friday 3 November 2017

Responsive Design in BlueGriffon

After nearly two years of failed attempts and revamped algos, it's finally time to shout that Wysiwyg Responsive Design in BlueGriffon is ready to ship, and that deserves a major version number for BlueGriffon :-) It was really, really painful and hard to implement, given that BlueGriffon is and must remain a Wysiwyg editor able to edit any arbitrary document, whatever its source. That means always being able to add styles as requested by the user: « I want this element to be bold when the viewport's width is between 400 and 500px, and I don't care if it's simple or hard because the Media Queries in that document are a real mess; just do it ». Most editors can't do that. They let you create and edit only "Mobile First" or only "Desktop First" media queries, or they're a source editor. With BlueGriffon, even a site that is pure Media Queries' hell like http://cnn.com can be modified...

Responsive Design will be available soon at no extra cost to Basic and EPUB license holders.

/* Enjoy! */

Wednesday 18 October 2017

OS X High Sierra installer hell (OSInstall.mpkg missing or corrupted)

Dear Apple, this is the fourth time in a row that one of your system upgrades on iOS or OS X has made me lose a day or two - when it did not make me lose a lot of data - and I am fed up with it. My last experience, with your High Sierra upgrade, is truly shocking:

  • this morning, I decided to finally upgrade my eligible MacBookPro to High Sierra
  • I did it the right way, and everything initially seemed to work fine
  • then suddenly the installer stopped, announcing that "macOS could not be installed on your computer" because "file OSInstall.mpkg was missing or damaged". Uuuuh???? What the hell?!? I was really scared since my backup was missing two days of data, some of it extremely important to me.
  • I tried the Recovery mode to install, no result
  • I tried to locate the missing file somewhere else in the installer's filesystem, no result
  • I tried the Disk Utility and it was worse, since the app was stuck with a spinning wheel...
  • I tried disk utils in the Terminal but my HD was gone. Just gone. Awful. I was so shaken I had to stay away from the computer for a few minutes.
  • then I discovered there are literally thousands of Mac users complaining about High Sierra's installer bricking their Macs with the same error... We're not speaking of a beta here; we're not speaking of something released yesterday. How can this remain broken?
  • fortunately, we have a few other Macs at home, so I downloaded High Sierra from another one and used the excellent and free Disk Creator to create a bootable USB version of the High Sierra installer
  • the install from that USB stick seemed to work and my data is still there, phew.

So for visitors hitting this article and willing to upgrade a Mac to High Sierra, these are my VERY strong recommendations:

  1. full Time Machine backup first. Full. Mandatory. More than ever with the filesystem change. Make 100% sure your backup completed correctly and is usable. Do it, whatever the time cost.
  2. download High Sierra from the App Store but do NOT install; hit Cmd-Q to close the installer.
  3. download Disk Creator (link above) and create a bootable USB version of the High Sierra installer (located in your /Applications folder). Of course, you need a USB key...
  4. shut down your Mac; insert your bootable USB key and reboot while pressing the Alt/Option key. At the prompt, use the arrows and the Return key to select the USB bootable installer.
  5. install High Sierra on your disk that way and, if it fails, use the Time Machine backup you fortunately made at step 1.

My Mac got bricked at 10am. All in all, it took me 6 hours and 36 minutes to find out how to fix it, stop being scared of launching a process that could wipe my whole HD, and do it. Let's be very clear: this is totally unacceptable. The High Sierra installer is still broken and thousands of people are hit by that breakage.

On the other hand, the last Windows 10 upgrade was so smooth it felt like old-days Apple, ahem.

I have had to recommend that my less geeky dad, kids and friends avoid High Sierra's installer if I am not around. Wake up, Apple, you're reaching unacceptable limits here. Your hardware is starting to suck (incredibly noisy and ugly keyboard, bad touchpad design, useless and expensive touchbar, USB-C hell, no more SD slot) and some of your software is now below expectations. Wake up. Now!

Tuesday 18 July 2017

A month with a new MacBookPro

I have been using a 2017 MacBookPro with touchbar for a month now and I can start giving some impressions about it:

  • loving the dark grey color
  • thinner, lighter; that's cool
  • better screen, that's cool too
  • USB-C is at the same time very nice and a true PITA. I need an adapter for so many of my USB devices that it's awful. It's just ridiculous there is not a single USB-3 port; I can't even connect my iPhone without an adapter. Thinner for thinner's sake is pointless in that case.
  • I just hate the noise of the new keyboard, INCREDIBLY noisier than the old MBP one, a huge negative point during conference calls. All in all, the old MBP keyboard seems to me ten times superior and less error-prone.
  • the Touchbar is cool - and I implemented touchbar support in Postbox - but after a month of usage, I clearly see it as a useless gadget. It's too easy to have a finger hover over the ESC key, and I erroneously sent an email before finishing it because a finger hovered over the "Send mail" key of the Touchbar in Apple Mail. All in all, I sincerely regret the real KEYS of the old MBP. The Touchbar is not worth the price difference and not worth the hassle. Please also note the Touchbar is 100% unusable in a sunny environment since you can't even see what's on it... Well done. Oh, and I suppose it draws more power too.
  • I have extremely mixed feelings about the larger touchpad... The right-click is painful to get, the left-click is too often unreliable, the touchpad is too tall and I am deeply missing the wider gap between the keyboard and the touchpad to let my thumbs on it. Because of that, I am too often hitting the touchpad when I am typing. All in all, I think this is the worst touchpad made by Apple, by far.
  • the power adapter is such a regression I could cry. The "wings" of the power adapter are gone, the longer power cord is now a costly option, and the incredibly great MagSafe is gone.
  • I do regret the SD/SDHC port, the DisplayPort port, all these things that now require an adapter. The new MBP is adapter's hell.
  • I noticed some static electricity on the MBP's shell when the battery is charging. Weird.
