<Glazblog/>

Wednesday 29 June 2016

Media Queries fun #1

So one of the painful bits when you have UI-based management of Media Queries is computing what to display in that UI. No, it's not as simple as walking all your stylesheets and style rules to retrieve the media attribute of each CSS OM object... Let's take a very concrete example:

<style type="text/css" media="screen and (min-width: 200px)">
@import url("foo.css") screen and (max-width: 750px);
</style>

where file foo.css contains the following rules:

@media screen and (max-width: 500px) {
h1 { color: red; }
}

What are the exact media constraints triggering a red foreground color for h1 elements in such a document? Right, it's screen and (min-width: 200px) and (max-width: 500px)... And to compute that from the inner style rule (the h1 { color: red; } bit), you have to climb the CSS OM up to the outermost stylesheet and intersect the various media queries between that stylesheet and the final style rule:

CSS OM object       | applicable medium | applicable min-width | applicable max-width
embedded stylesheet | screen            | 200px                | -
imported stylesheet | screen            | 200px                | 750px
@media rule         | screen            | 200px                | 750px
style rule          | screen            | 200px                | 500px
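
In JavaScript terms, that climb can be sketched with plain CSS OM properties (parentRule, parentStyleSheet, ownerRule). This is only a sketch, not BlueGriffon's actual code:

```javascript
// Sketch: collect every MediaList constraining a given style rule
// by climbing the CSS OM, from the rule up to the outermost stylesheet.
function collectMediaLists(rule) {
  const lists = [];
  let r = rule;
  while (r) {
    // enclosing @media rules
    for (let p = r.parentRule; p; p = p.parentRule) {
      if (p.media)
        lists.push(p.media);
    }
    const sheet = r.parentStyleSheet;
    if (!sheet)
      break;
    // a stylesheet's media comes from its owner node's media attribute
    // or from the @import rule that loaded it
    if (sheet.media && sheet.media.length)
      lists.push(sheet.media);
    // an imported sheet points back to its @import rule; keep climbing
    r = sheet.ownerRule;
  }
  return lists.reverse(); // outermost first
}
```

On the example above, this returns the three MediaLists in outermost-first order; intersecting them is then a separate problem.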

It becomes seriously complicated when the various constraints you have to intersect are expressed in different units because, unless you're working with computed values, you really can't intersect them. In BlueGriffon, I am working on specified values, so it's a huge burden. In short, dealing with a width Media Query that is not expressed in pixels is just a no-go.
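
Once everything is in pixels, the intersection itself is trivial interval arithmetic; a sketch, with hypothetical minWidth/maxWidth fields standing in for already-parsed queries:

```javascript
// Sketch: intersect the width constraints of a chain of media queries,
// assuming every width is already expressed in pixels.
function intersectWidths(chain) {
  let min = 0;        // no min-width constraint yet
  let max = Infinity; // no max-width constraint yet
  for (const q of chain) {
    if (q.minWidth !== undefined) min = Math.max(min, q.minWidth);
    if (q.maxWidth !== undefined) max = Math.min(max, q.maxWidth);
  }
  // null means the intersection is empty: the rule can never apply
  return min <= max ? { minWidth: min, maxWidth: max } : null;
}

// the example above: style element, @import, @media rule
intersectWidths([{ minWidth: 200 }, { maxWidth: 750 }, { maxWidth: 500 }]);
// → { minWidth: 200, maxWidth: 500 }
```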

I'd really love to have a CSS OM API doing the work described above for a given arbitrary rule and returning a MediaList. Maybe something for Houdini?

Thursday 23 June 2016

Implementing Media Queries in an editor

You're happy as a programmer? You think you know the Web? You're a browser implementor? You think dealing with Media Queries is easy? Do the following first:

Given a totally arbitrary HTML document with arbitrary stylesheets and arbitrary media constraints, write an algorithm that gives all h1 elements a red foreground color when the viewport width is between min and max, where min is a value in pixels (0 indicating no min-width in the media query) and max is a value in pixels or infinite (indicating no max-width in the media query). You can't use inline styles, of course. !important can be used ONLY if there is no other way of adding that style to the document. Oh, and you have to handle the case where some stylesheets are remote, so you're not allowed to modify them because you could not reserialize them :-)

What's hard? Eh:

  • media attributes on stylesheet owner node
  • badly ordered MQs
  • intersecting (but not contained) media queries
  • default styles between MQs
  • remote stylesheets
  • so funny to deal with the CSS OM where you can't insert a rule before or after another rule but have to deal with rule indices...
  • etc.
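
That last CSS OM bullet deserves an illustration; a minimal sketch of the kind of helper (hypothetical, not BlueGriffon's code) you end up writing:

```javascript
// Sketch: the CSS OM offers no insertBefore/insertAfter for rules, so
// you have to find the target rule's index yourself and call
// insertRule(text, index) on the stylesheet.
function insertRuleBefore(sheet, ruleText, targetRule) {
  const rules = sheet.cssRules;
  for (let i = 0; i < rules.length; i++) {
    if (rules[i] === targetRule)
      return sheet.insertRule(ruleText, i);
  }
  return sheet.insertRule(ruleText, rules.length); // not found: append
}
```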

My implementation in BlueGriffon is almost ready. Have fun...

Tuesday 14 June 2016

Synology meets Sony Vaio in my Hall of Worst

All in all, I am quite satisfied with all the hardware I have bought over the years. Computers, disks, memory, etc. All that geekery is reasonably good and the epic failures are rare. And even when an epic failure happens, the manufacturer is usually smart enough to issue a recall and a free replacement even if the two-year period after purchase is over (e.g. Apple with the graphics card failure that hit so many older MBPs a while ago). On two occasions only did the manufacturer show pretty bad behaviour:

  1. the first case was years ago, when my Sony Vaio laptop was hit by the infamous "inverted trackpad" bug. In short, Sony saved a three-cent expense by not isolating the trackpad's electrical system from the metal case of its Vaio line, and the trackpad was inverting the x axis, sometimes the y axis, sometimes both... Totally unusable, of course. Millions of Sony laptops were hit by the issue. I had to shout on the phone to obtain, a few months after purchasing the most expensive Vaio of that time, a replacement of my computer. Cured me of Sony computers forever.

  2. and the most recent case, happening right now, is my Synology DS412+ NAS server. In short, it's been plagued by a zillion issues, all severe, since let's say day 50. But every time I was ready to send the NAS back to Synology, their support asked me to try something (or gave that hint on their fora) that led to a normal reboot of the unit despite the motherboard warnings. On reboot, the warnings would go away... A few days ago, my DS412+ stopped working, with a forever-blinking blue LED. Motherboard failure, again and again and again.
    The Web is full of people explaining how bad the DS*12 series is, and the reports of motherboard failures are absolutely everywhere. They even issued a press release but never contacted customers, eh.
    Unlike Apple, Synology refused today to replace my motherboard. And it took a week, and tweets added to my emails, to eventually obtain that response from them. To someone else, they proposed a new motherboard for $395, i.e. 80% of the unit's price (yes, expensive!). They can rot in hell, of course.
    So Synology means crappy support for buggy motherboards sold at an extremely high price. If you're lucky with your motherboard, you will never see a problem. If, like me, you bought a unit that shipped buggy, Synology will refuse responsibility after the end of the warranty period. And during it, you'll have to ship the unit back at your own expense.
    Synology's DSM is cool, their units are cool. But they're not reliable enough, and the way Synology treats customers who paid a premium price is just a total shame, not in line with 2016 industry standards and most probably not with French law either.
    Conclusion: avoid Synology as much as you can. You've been warned.

Wednesday 1 June 2016

BlueGriffon 2.0 released

I am happy to announce that BlueGriffon 2.0 is now available from http://www.bluegriffon.org

Screenshot of BlueGriffon 2.0

Sunday 22 May 2016

CSS Variables in BlueGriffon

I guess the title says it all :-) Click on the thumbnail to enlarge it.

CSS Variables in BlueGriffon

Thursday 12 May 2016

BlueGriffon 2.0 approaching...

BlueGriffon 2.0 is approaching, a major revamp of my cross-platform WYSIWYG Gecko-based editor. You can find previews here for OSX, Windows and Ubuntu 16.04 (64-bit).

BlueGriffon 2.0

Warnings:

  • it's HIGHLY recommended NOT to overwrite your existing 1.7 or 1.8 version; install it for instance in /tmp instead of /Applications
  • it's VERY HIGHLY recommended to start it with a dedicated profile:
    • open BlueGriffon.app --args -profilemanager (on OSX)
    • bluegriffon.exe -profilemanager (on Windows)
  • add-ons will NOT work with it, don't even try to install them in your test profile
  • it's a work in progress, expect bugs, issues and more

Changes:

- major revamp, you won't even recognize the app :-)
- based on a very recent version of Gecko; that was a HUGE amount of work.
- no more floating panels, too hacky and expensive to maintain
- rendering engine support added for Blink, Servo, Vivliostyle and WeasyPrint!
- tons of debugging in *all* areas of the app
- BlueGriffon now uses the native colorpicker on OSX. Yay!!! The native colorpicker of Windows is so weak and ugly we just can't use it (it can't even deal with opacity...) and decided to stick to our own implementation. On Linux, the situation is more complicated: the colorpicker is not as ugly as the Windows one, but it's unfortunately too weak compared to what our own offers.
- more CSS properties handled
- helper link from each CSS property in the UI to MDN
- better templates handling
- auto-reload of HTML documents if modified outside of BlueGriffon
- better Markdown support
- zoom in Source View
- tech changes for future improvements: support for :active and other dynamic pseudo-classes, support for ::before and ::after pseudo-elements in CSS Properties; rely on Gecko's CSS lexer instead of our own. We're also working on cool new features on the CSS side like CSS Variables and even much cooler things than that :-)

Tuesday 26 April 2016

First things first

While implementing many new features in Postbox, I carefully read (several times) Mark Surman's recent article on Thunderbird's future. I also read Simon Phipps's report twice. Then there is the contract offer for a Thunderbird Architect posted by Mozilla, which must be read too:

... Thunderbird is facing a number of technical challenges, including but not limited to:

  • ...
  • The possible future deprecation of XUL, its current user interface technology and XPCOM, its current component technology, by Mozilla
  • ...

In practice, the last line above means for Thunderbird:

  1. rewrite the whole UI and the whole JS layer with it
  2. most probably rewrite the whole SMTP/MIME/POP/IMAP/LDAP/... layer
  3. most probably have a new Add-on layer or, far worse, no more Add-ons

Well, sorry to say, but that's a bit more than a « technical challenge »... So yes, that's indeed a « fork in the road », but let's be serious a second: it's unfortunately this kind of fork; rewriting the app is not a question of if but only a question of when. Unless Thunderbird dies entirely, of course.

Evaluating potential hosts for Thunderbird, and a fortiori choosing one, seems to me rather difficult without first discussing the XUL/XPCOM-less future of the app, i.e. without having in hand the second milestone delivered by the Thunderbird Architect. First things first. I would also be interested in knowing how many people MoCo will dedicate to the deXULXPCOMification of Firefox; that would allow some extrapolations and some pretty solid (and probably rather insurmountable...) requirements for TB's host.

Last but not least and from a more personal point of view, I feel devastated confronting Mark's article and the Mozilla Manifesto.

Friday 19 February 2016

A makefile-based build system for Swift

I'm currently playing with Swift, outside of Xcode (but of course, you need Xcode installed to have swiftc installed...), when I have a few spare cycles, and building more than just a few files was getting so painful that I started working on a minimal build structure. Since it's working pretty well and it was not always easy to figure out what to do, I've decided to share it.

First, clone the following git and copy the swiftbuild directory you'll find inside to your project's top directory: git clone https://github.com/therealglazou/swift-makefiles.git

Then tweak swiftbuild/config.mk according to your wishes (in the repository, it's set for a macosx x86_64 target).

Finally, add a Makefile to each directory inside your project. Each file should look like the following:

# TOPSRCDIR is a relative path to the top directory of your project
# for instance:
TOPSRCDIR = ../../..

# if you have subdirs to build, list them in DIRS. They will be built
# before the files directly in your directory
# for instance:
DIRS = public src

# if you're building a module from the swift files in this directory
# name your module in MODULE_NAME
# for instance:
MODULE_NAME = MyProjectCore

# if your directory contains your main.swift (mandatory for a
# standalone executable), give your app's name in APP_NAME
# for instance:
APP_NAME=MyProject

# now, always include $(TOPSRCDIR)/swiftbuild/config.mk
include $(TOPSRCDIR)/swiftbuild/config.mk

# your swift sources should go into SOURCES
# for instance:
SOURCES = foo.swift bar.swift

# if you need to import modules outside of your project
# use IMPORTS to list the paths, of course prepended by -I
# for instance:
IMPORTS = -I /Users/myname/trees/myotherproject/modules/

# if you need some dylibs to build your module or your app,
# then use LDFLAGS (always +=) and LIBS; for instance:
LDFLAGS += -L/Users/myname/trees/myotherproject/lib
LIBS = -lfooCore

# and finally always include $(TOPSRCDIR)/swiftbuild/rules.mk
include $(TOPSRCDIR)/swiftbuild/rules.mk

And that's it: you're now good to make build, make clean and make clobber. The first one builds everything, your dylibs and app ending up in $(OBJDIR) at the end of the build while your modules go to $(OBJDIR)/modules. The second one deletes all dylibs, *.o files and build remains from your directories but preserves $(OBJDIR). The last one performs a clean and deletes $(OBJDIR) too.

Let me know if it's useful please. And I'm of course accepting pull requests. Next steps: static linking and build of a complete foo.app OS X application...

Wednesday 10 February 2016

歌剧

When Opera switched to WebKit and started its deep reorganization, I had a chat with a few ex-Opera fellows. We agreed that the whole move was indeed a purely financial decision made to increase the value of the company on the "market" at a time when many larger companies and phone manufacturers were looking for a seat at the main players' table, the restricted list of companies owning either a rendering engine or a world-class rendering engine team. In other words, the whole strategy of the company was strictly oriented towards a sale. The often-quoted potential deal with AOL did not make any sense to me.

While I was thinking more of phone manufacturers like Samsung or Huawei, we all agreed the most plausible option was an Asian host. It seems we were right. It would have made a lot of sense for Samsung to acquire Opera. I even advocated a bit in that direction during my tenure there, but the company - or at least the people my message reached - either was not receptive or never forwarded the message. I still think Samsung missed a rather important opportunity here.

Golden Brick and Yonglian are acquiring the user base more than the engineering team, and I do have some concerns about the future of that team. In fact, I have some concerns about the whole thing. We'll see. Browser teams are a very small community around the world and I wish our Opera friends all the best for the coming change.

Now that Opera's not on the market any more, the next company to watch in that space is clearly, from my point of view, Igalia.

Monday 8 February 2016

Inventory and Strategy

“There’s class warfare, all right, but it’s my class, the native class, that’s making war, and we’re winning.” -- Android and iOS, blatantly stolen from Warren Buffett

Firefox OS tried to bring Web apps to the mobile world and it failed. It has been brain-dead - for phones - for three days and the tubes preserving its life will be turned off in May 2016. I don't believe at all in the IoT space being a savior for Mozilla. There are better and older competitors in that space, companies or projects that bring smaller, faster, cleaner software architectures to IoT, where footprint and performance are an even more important issue than in the mobile space. Yes, this is a very fragmented market; no, I'm not sure Firefox OS can address it and reach critical mass. In short, I don't believe in it at all.

Maybe it's time to discuss a little bit a curse word here: strategy. What would be a strategy for the near- and long-term future for Mozilla? Of course, what's below remains entirely my own view and I'm sure some readers will find it pure delirium. I don't really mind.

To do that, let's look a little bit at what Mozilla has in hands, and let's confront that and the conclusion drawn from the previous lines: native apps have won, at least for the time being.

  • Brains! So many hyper-talented brains at Mozilla!
  • Both desktop and mobile knowledge
  • An excellent, but officially unmaintained, runtime
  • Extremely high expertise on Web Standards and implementation of Web Standards
  • Extremely high expertise on JS
  • asm.js
  • Gaia, that implements a partial GUI stack from html but limited to mobile

We also need to take a look at Mozilla's past. This is neither an easy nor a pleasant inventory to make, but I think it must be done here, and to do it we need to go back as far as the Netscape era.

Anya (2003) — AOL (Netscape's parent company) did not want Anya, a remote browser moving most of the CPU constraints to the server, and it died despite being open-sourced by its author. At the same time, Opera successfully launched Opera Mini and eventually acquired its competitor SkyFire. Opera Mini has been a very successful product on legacy phones and even smartphones in areas with poor mobile connectivity.

XUL (2003-) — Netscape - and later Mozilla - did not see any interest in bringing XUL to Standards committees. When competitors eventually moved to XML-based languages for UI, they adopted solutions (XAML, Flex, ...) that were not interoperable with it.

Operating System (2003-) — A Linux+Gecko Operating System is not a new idea. It was already discussed back in 2003 - yes, 2003 - at Netscape and was too often met with laughter. It was mentioned again multiple times between 2003 and 2011, without any apparent success.

Embedding (2004-) — Embedding has always been the poor relation of the Gecko family. Officially dropped loooong ago, it drove embedders to WebKit and then Blink. At the time embedding should have been improved, the focus was solely on Firefox for desktop. While I completely understand the rationale behind a focus on Firefox for desktop at that time, the consequences of abandoning Embedding have been seriously underestimated.

Editing (2005-) — Back in 2004/2005, it was clear Gecko had the best in-browser core editor on the market. Former Netscape editor peers working on Dreamweaver compared mozilla/editor with what Macromedia/Adobe had in hands. The comparison was vastly in favor of Mozilla. It was also easy to predict that the aging Dreamweaver would soon need a replacement for its editor core. But editing was considered non-essential at that time, more a burden than an asset, and no workforce was permanently assigned to it.

Developer tools (2005) — In 2005, Mozilla was so completely mistaken about Developer Tools, a powerful attractor for early adopters and Web agencies, that it wanted to get rid of the Error Console. At the same moment, the community was calling for more developer tools.

Runtime (2003-) — XULRunner has been quite successful for such a complex technology. Some rather big companies believed enough in it to implement apps that, even if you don't know their name, are still everywhere. As an example, here's at least one very large automotive group in Europe, a world-wide known brand, that uses XULRunner in all its test environments for car engines. That means all garages dealing with that brand use a XULRunner-fueled box... But unfortunately, XULRunner was never considered essential, up to the point its name is still a codename. For some time, the focus was instead given to GRE, a shared runtime that was doomed to fail from the very first minute. Update: XULRunner just died...

Asian market (2005) — While the Asian market was exploding, Gecko was missing a major feature: vertical writing. It prevented Asian embedders from considering Gecko as the potential rendering engine to embed in ebook reading systems. It also closed access to the Asian market for many other usages. But vertical writing did not become an issue to fix for Mozilla until 2015.

Thunderbird (2007) — Despite growing adoption of Thunderbird in governmental organizations and some large companies, Mozilla decided to spin off Thunderbird into a Mail Corporation because it was unable to get a revenue stream from it. MailCo was eventually merged back with Mozilla and Thunderbird is again, in 2015/2016, in limbo at Mozilla.

Client Customization Kit (2003-) — Let's be clear, the CCK has never been seen as a useful or interesting project. Maintained only through the incredible will and talent of a single external contributor, it is what many corporations rely on to deploy Firefox to their users. Mozilla had no interest in corporate users. Don't we spend only 60% of our daily time at work?

E4X (2005-2012) — Everyone had high expectations about E4X and many were ready to switch to it to replace painful DOM manipulations. Unfortunately, it never allowed manipulating DOM elements (BMO bug 270553), making it totally useless. E4X support was deprecated in 2012 and removed after Firefox 17.

Prism (WebRunner) (2007-2009) — Prism was a webrunner, i.e. a desktop platform to run standalone self-contained web-based apps. Call them widgets if you wish. Prism was abandoned in 2009 and replaced by Mozilla Chromeless, which is itself inactive too.

Marketplace (2009) — Several people called for an improved marketplace where authors could sell add-ons and standalone apps. That required a licensing mechanism and the possibility to blackbox scripting. It was never implemented that way.

Browser Ballot (2010) — The BrowserChoice.eu thing was a useless battle. If it brought some users to Firefox on the Desktop, the real issue was clearly the lack of browser choice on iOS, world-wide. That issue still stands as of today.

Panorama (aka Tab Groups) (2010) — When Panorama saw the light, some in the Mozillian community (including yours truly) said it was bloated, not extensible, not localizable, based on painful code, hard to maintain in the long run, heterogeneous with the rest of Firefox, and that it was trying to change the center of gravity of the browser. Mozilla's answer came rather sharply and Panorama was retained. In late 2015, it was announced that Panorama would be retired because it's painful to maintain, is heterogeneous with the rest of Firefox and nobody uses it...

Jetpack (2010) — Jetpack was a good step on the path towards HTML-based UI, but a jQuery-like framework was not seen by the community as what authors needed and it missed a lot of critical things. It never really gained traction despite being the "official" add-on way. In 2015, Mozilla announced it would implement the WebExtensions global object promoted by Google Chrome, and WebExtensions is just a more modern and better-integrated Jetpack on steroids. It also means being Google's assistant to reach the two-implementations standardization constraint again...

Firefox OS (2011) — The idea of a Linux+Gecko Operating System finally touched ground. Four years later, the project is dead for mobile.

Versioning System (2011) — When Mozilla moved to faster releases for Firefox, large corporations with slower deployment processes reacted quite vocally. Mozilla replied it did not care about dinosaurs of the past. More complaints led to ESR releases.

Add-ons (2015) — XUL-based add-ons have been one of the largest attractors to Firefox. AdBlock Plus alone deserves kudos, but more globally, the power of XUL-based add-ons that could interact with the whole Gecko platform and all of Firefox's UI has been a huge market opener. In 2015/2016, Mozilla plans to ditch XUL-based add-ons without having a real feature-per-feature replacement for them.

Evangelism (2015) — While Google and Microsoft have built first-class tech-evangelism teams, Mozilla let its whole team flee in less than 18 months. I don't know (I really don't) the reason behind that intense bleeding but I read it as a very strong warning signal.

Servo (2016) — Servo is the new cool kid on the block. With parallel layout and a brand new architecture, it should allow new frontiers in the mobile world, finally unleashing the power of multicores. But instead of officially increasing the focus on Servo and decreasing the focus on Gecko, Gecko is going to benefit from Servo's Rust-based components to extend its life. This is the old sustaining/disruptive paradigm from Clayton Christensen.

(I hope I did not make too many mistakes in the table above. At least, that's my personal recollection of the events. If you think I made a mistake, please let me know and I'll update the article.)

Let's be clear then: Mozilla really succeeded only three times. First, with Firefox on the desktop. Second, enabling the Add-ons ecosystem for Firefox. Third, with its deals with large search engine providers. Most of the other projects and products were eventually ditched for lack of interest, misunderstanding, time-to-market and many other reasons. Mozilla is desperately looking for a fourth major opportunity, and that opportunity can only extend the success of the first one or be entirely different.

The market constraints I see are the following:

  • Native apps have won
  • Mozilla's reputation as a provider of embedded solutions among manufacturers will probably suffer a bit from the death of Firefox OS for phones. BTW, it probably suffers a bit among some employees too...

Given the assets and the skills, I see then only two strategic axes for Moz:

  1. Apple must accept third-party rendering engines even if it's necessary to sue Apple.
  2. If native apps have won, Web technologies remain the most widely adopted technologies by developers of all kinds and guess what, that's exactly Mozilla's core knowledge! Let's make native apps from Web technos then.

I won't discuss item 1. I'm not a US lawyer and I'm not even a lawyer. But for item 2, here's my idea:

  1. If asm.js "provides a model closer to C/C++" (quote from asmjs.org's FAQ), it's still not possible to compile asm.js-based JavaScript into native code. I suggest defining a subset of ES2015/2016 that can be compiled to native, for instance through C++, C#, Obj-C and Java, and building the corresponding multi-target compiler. Before telling me it's impossible, please look at Haxe.
  2. I suggest extending the HTML "dialect" Gaia implements to cross-platform native UI and submitting it immediately to Standards bodies. Think Qt's ubiquity. The idea is not to show native-like (or even native) UI inside a browser window... The idea is to directly generate browser-less native UI from an HTML-based UI language, CSS and JS that can deal with all platforms' UI elements. System menus, dock, icons, windows, popups, notifications, drawers, trees, buttons, whatever. Even if compiled, the UI should be DOM-modifiable just like XUL is today.
  3. WebComponents are ugly, and Google-centric. So many people think that and so few dare say it... Implementing them in Gecko acknowledges the power of Gmail and other Google tools, but WebComponents remain ugly and make Mozilla a follower. I understand why Firefox needs them. But for my purpose, a simpler and certainly cleaner way to componentize and compile (see item 1) the behaviours of these components to native would be better.
  4. Build a cross-platform, cross-device HTML+CSS+JS compiler to native apps from the above. It should be dead simple to install and use. A newbie should be able to get a native "Hello World!" app in minutes from a trivial HTML document. When a browser is included in the UI, make Gecko (or Servo) the default choice.
  5. Have a build farm where such html+CSS+JS are built for all platforms. Sell that service. Mozilla already knows pretty well how to do build farms.

That plan addresses:

  • Runtime requests
  • Embedding would become almost trivial, and far easier than Chromium Embedded Framework anyway... That will be a huge market opener.
  • XUL-less future for Firefox on Desktop and possibly even Thunderbird
  • XUL-less future for add-ons
  • unique source for web-based app and native app, whatever the platform and the device
  • far greater performance on mobile
  • A more powerful basis for Gaia's future
  • JavaScript is currently always readable through a few tools, from the Console to the JS debugger, and app authors don't want that.
  • a very powerful basis for Gaming, from html and script
  • More market share for Gecko and/or Servo
  • New revenue stream.

There are no real competitors here. All other players in that field use a runtime that does not completely compile script to native, or are not based on Web Standards, or they're not really ubiquitous.

I wish the next-generation native source editor, the next-gen native Skype app, the next-gen native text processor, the next-gen native online and offline Twitter client, the next native Facebook app, the next native video or 3D scene editor, etc. could be written in HTML+CSS+ECMAScript and compiled to native, and if they embed a browser, let it be a Mozilla browser if the platform allows it.

As I wrote at the top of this post, you may find the above unfeasible, dead stupid, crazy, arrogant, expensive, whatever. Fine by me. Yes, as a strategy document, that's rather light w/o figures, market studies, cost studies, and so on. Absolutely, totally agreed. Only allow me to think out loud, and please do the same. I do because I care.

Updates:

  • E4X added
  • update on Jetpack, based on feedback from Laurent Jouanneau
  • update on Versioning and ESR, based on feedback from Fabrice Desré (see comments below)
  • XULrunner has died...

Clarification: I'm not proposing to do semi-"compilation" of HTML à la Apache Cordova. I suggest turning a well-chosen subset of ES2015 into a really native app, and that's entirely different.

Friday 29 January 2016

Google, BlueGriffon.org and blacklists

Several painful things happened to bluegriffon.org yesterday... In chronological order:

  1. during the morning, two users reported that their browser did not let them reach the downloads section of bluegriffon.org without a security warning. I could not see it myself from here, whatever the browser or the platform.
  2. during the evening, I could see the warning using Chrome on OS X
  3. apparently, and if I believe the "Search Console", Google thought two files in my web repository of releases were infected. I launched a complete verification of the whole web site and ran all the software releases through three anti-virus systems (Sophos, Avast and AVG) and an anti-adware system. Nothing at all to report. No infection, no malware, no adware, nothing.
  4. since this was my only option, I deleted the two reported files from my server. Just for the record, the timestamps were unchanged, and I even verified the files were precisely the ones I uploaded in January and April 2012. Yes, 2012... Yesterday, without being touched/modified in any manner during the last four years, they were erroneously reported infected.
  5. this morning, Firefox also reports a security warning on most large sections of BlueGriffon.org and its Downloads section. I guess Firefox is also using the Google blacklist. Just for the record, both Spamhaus and CBL have nothing to say about bluegriffon.org...
  6. the Google Search Console now reports my site is ok but Firefox still reports it insecure, ahem.

I need to draw a few conclusions here:

  • Google does not tell you how the reported files are insecure, which is really lame. The online tool they provide to "analyze" a web site did not help at all.
  • Since all my antivir/antiadware declared all files in my repo clean, I had absolutely no option but to delete the files that are now missing from my repo
  • two reported files in bluegriffon.org/freshmeat/1.4/ and bluegriffon.org/freshmeat/1.5.1/ led to blacklisting of all of bluegriffon.org/freshmeat, and that's hundreds of files... Hey guys, we're all programmers, right? Surely you can do better than that?
  • during more than one day, customers fled from bluegriffon.org because of these security warnings, warnings I consider fake reports. Since no public antimalware app could find anything to say about my files, I suspect a fake report of human origin. How such a report can reach the blacklist when files are reported safe by four up-to-date antimalware apps, and without infection information reported to the webmaster, is far beyond my understanding.
  • blacklists are a tool that can be harmful to businesses if they're not well managed.

Update: oh, and I forgot one thing: during the evening, Earthlink.net blacklisted one of the Mail Transport Agents of Dreamhost. Not just my email address: the whole SMTP gateway at Dreamhost... So all my emails to one of my customers bounced and I can't even let her know some crucial information. I suppose thousands at Dreamhost are impacted. I reported the issue to both Earthlink and DH, of course.

Saturday 16 January 2016

Ebook pagination and CSS

Let's suppose you have a rather long document, for instance a book chapter, and you want to render it in your browser à la iBooks/Kindle. That's rather easy with just a dash of CSS:

body {
height: calc(100vh - 24px);
column-width: 45vw;
overflow: hidden;
margin-left: calc(-50vw * attr(currentpage integer));
}

Yes, yes, I know that no browser implements that extended attr() syntax. So put an inline style on your body with margin-left: calc(-50vw * <n>), where <n> is the page number you want minus 1.

Then add the fixed-positioned controls you need to let the user change pages, plus gesture detection. Add a transition on margin-left to make it nicer. Done. Works perfectly in Firefox, Safari, Chrome and Opera. I don't have a Windows box handy so I can't test on Edge.
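Until that extended attr() syntax exists, the inline-style trick is easy to script. A minimal sketch, where pageMarginLeft and currentPage are illustrative names, not any existing API:

```javascript
// Compute the inline margin-left for a 1-based page number,
// matching the -50vw-per-page scheme of the stylesheet above.
function pageMarginLeft(page) {
  return `calc(-50vw * ${page - 1})`;
}

// In a browser, a "next page" control would then do roughly:
//   currentPage += 1;
//   document.body.style.marginLeft = pageMarginLeft(currentPage);
```

With the transition on margin-left mentioned above, each call slides the columns over by one page width.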

Wednesday 13 January 2016

CSS Prefixed and unprefixed properties

I used to be quite a heavy user of Peter Beverloo's list of CSS properties, indicating all prefixed versions, but he stopped maintaining it a while ago, unfortunately. So I wrote my own, because I still need such data for BlueGriffon... In case you also need it, it's available from http://disruptive-innovations.com/zoo/cssproperties/ and is automatically refreshed from Gecko, WebKit and Blink sources on a daily basis.

Tuesday 29 December 2015

Star Wars episode VII

I was ten years old when the first Star Wars movie was released and I still remember my mood when I left the famous Grand Rex movie theater in Paris on that 22nd of October 1977. I was thinking to myself « this is going to change SciFi movies forever »... But I was not young, naive or blind enough to miss the fact that Star Wars was only the usual princess/prince/villain tale against a background of stars and space fighters.

  • the first hero is the lost son of the villain
  • the princess is the lost daughter of the villain
  • the princess is the sister of the first hero
  • the villain's army is trying to rule the world and the villain has magical powers
  • the first hero recovers the sword of his father
  • the second hero is a ruffian but he will eventually save the world and marry the princess
  • the villain abducts the princess and keeps her in custody in his space castle
  • the two heroes attack the space castle to deliver the princess
  • the first hero will eventually destroy the space castle, source of power of the villain threatening the world
  • and so on.

It's nothing but a space Grimms' tale.

Spoiler alert: don't read further if you haven't seen Star Wars episode VII and plan to see it in the near future...

I saw Episode VII yesterday with the kids and enjoyed it a lot while I was in the room. But when I left, I started thinking about it:

  • the new princess is clearly from the family of the former villain and the lost first hero
  • the new princess recovers the family's sword
  • the new second hero is a renegade from the new villain's army
  • the new third hero will eventually attack and destroy the new space castle
  • the new villain is the son of the princess and original second hero
  • the new villain's army is trying to rule the world and the new villain has magical powers
  • the new villain abducts the new princess and keeps her in custody in his space castle
  • the new second hero and the original second hero attack the space castle to deliver the princess
  • the space castle, source of power of the villain threatening the world, is eventually destroyed

Ah, that sense of déjà-vu. Where have the originality, the darkness, the characters' complexity seen in Revenge of the Sith gone?

All in all, The Force Awakens leaves me with very mixed feelings. A good show, a bad Star Wars. Even Harrison Ford totally lost his mojo and did not play well at all. And the final scene made me think J.J. Abrams lost it too, the theatrical intensity of the scene being abysmal (it should have ended on Luke taking the saber, a closer view of his hand pressing the button, the glowing light of the saber turning on screen-wide, and a sudden cut to black with The Force's theme). Ten years after Revenge of the Sith, I was expecting something that would renew the SciFi/space heroic fantasy genre. It's very far from it.

Tuesday 6 October 2015

OS X El Capitan and USB

I upgraded my MacBook Pro to El Capitan two days ago and this was a huge mistake: suddenly, all my external USB hard disks, including my TimeMachine backup (based on a huge Seagate disk), stopped working completely. Not only does El Capitan not show the disks in the Finder, Disk Utility does not see them any more. In the release notes of the OS X 10.11 beta, one could read:

  • USB Known Issues
    • USB storage devices, including internal SD card readers, may become unavailable after system sleep and require either re-plug or restart to recover.
    • USB input devices may become non-functional on some Macs after several days.
    • USB 1.0, 1.1 and older 2.0 devices may not function.

I have tried to reboot, reset the PRAM, reset the SMC, nothing works. Online fora are full of people experiencing blocking issues with USB since they upgraded to El Capitan. The issue was reported by many people during the beta program of OS 10.11 and nothing changed.

This is lame and not at the Apple « level ». Hurry up Apple, fix this.

Thursday 24 September 2015

Calendar on iOS

Dear Apple, I have a request for enhancement for the Calendar app on iOS: when you add a new entry in a calendar, there is a possibility to set a travel time. That travel time is limited to fixed values: 5, 15, 30, 60, 90 or 120 minutes. But since there is also the possibility to set an address for the entry, the travel time should be computed from the preferred transport method (car, foot, public transport), my current location and traffic data. With your Maps app, that's feasible, and it would be a superb addition to Calendar. Best.

Update: so apparently this exists when you set up an entry in the calendar yourself. But I received a meeting invitation by email, with the address provided and everything OK, and Calendar only lets me set one of the fixed travel times...

Friday 11 September 2015

Death of XUL-based add-ons to Firefox

I will not discuss here right now the big picture of the message sent a few weeks ago (although my company is deeply, very deeply worried about it) but I would like instead to dive into a major technical detail that seems to me left blatantly unresolved: the XUL tree element was introduced to handle very long lists of items with good performance. Add-ons touching bookmarks, lists of URLs, lists of email addresses, contacts, anti-spam lists, advertisement-blocking lists and more all need a performant tree solution.

As far as I am concerned, there is nothing on the html side approaching the performance of the XUL tree for very long lists. Having no replacement for it before the removal of XUL-based add-ons is a death signal sent to all the add-ons needing the XUL tree, and almost a "we don't care about you" signal sent to the authors of those add-ons. From an ecosystem's point of view, I find it purely suicidal.

So I have a question: what's the plan for the limited number of XUL elements that have no current replacement in html for deep technical reasons?

Update: yes, there are some OSS packages for dealing with trees in html, but until the XUL tree is removed from Firefox's UI, how do we deal with it if the XUL element is not reachable from add-ons?

Thursday 30 July 2015

CSS Vendor Prefixes

I have read everything and its contrary about CSS vendor prefixes in the last 48 hours. Twitter, blogs, Facebook are full of messages or articles about what are or are supposed to be CSS vendor prefixes. These opinions are often given by people who were not members of the CSS Working Group when we decided to launch vendor prefixes. These opinions are too often partly or even entirely wrong so let me give you my own perspective (and history) about them. This article is with my CSS Co-chairman's hat off, I'm only an old CSS WG member in the following lines...

  • CSS Vendor Prefixes as we know them were proposed by Mike Wexler from Adobe in September 1998 to allow browser vendors to ship proprietary extensions to CSS.

    In order to allow vendors to add private properties using the CSS syntax and avoid collisions with future CSS versions, we need to define a convention for private properties. Here is my proposal (slightly different than was talked about at the meeting). Any vendors that defines a property that is not specified in this spec must put a prefix on it. That prefix must start with a '-', followed by a vendor specific abbreviation, and another '-'. All property names that DO NOT start with a '-' are RESERVED for using by the CSS working group.

  • One of the largest shippers of prefixed properties at that time was Microsoft that introduced literally dozens of such properties in Microsoft Office.
  • The CSS Working Group slowly evolved from that to « vendor prefixes indicate proprietary features OR experimental features under discussion in the CSS Working Group ». In the latter case, the vendor prefixes were supposed to be removed when the spec stabilized enough to allow it, i.e. reaching an official Call for Implementation.
  • Unfortunately, some prefixed « experimental features » were so immensely useful to CSS authors that they spread at a fast pace on the Web, even though CSS authors were instructed not to use them. CSS Gradients (a feature we originally rejected: « Gradients are an example. We don't want to have to do this in CSS. It's only a matter of time before someone wants three colors, or a radial gradient, etc. ») are the perfect example of that. At some point in the past, my own editor BlueGriffon had to output several different versions of CSS gradients to accommodate the various implementation states available in the wild (WebKit, I'm looking at you...).
  • Unfortunately, some of those prefixed properties took a lot, really a lot, of time to reach a stable state in a Standard and everyone started relying on prefixed properties in production web sites...
  • Unfortunately again, some vendors did not apply the rules they decided themselves: since the prefixed version of some properties was so widely used, they maintained them with their early implementation and syntax in parallel to a "more modern" implementation matching, or not, what was in the Working Draft at that time.
  • We ended up just a few years ago in a situation where prefixed properties were so widely used they started being harmful to the Web. The incredible growth of first WebKit and then Chrome triggered a massive adoption of prefixed properties by CSS authors, to the point where other vendors seriously considered implementing the -webkit- prefix themselves, or at least simulating it.
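
To make the gradient example concrete, here is the kind of stack an editor like BlueGriffon had to output at the time. This is an illustrative reconstruction from memory of the era's syntaxes, not an exhaustive or authoritative list:

```css
/* One gradient, many dialects: */
background: -webkit-gradient(linear, left top, left bottom,
                             from(#fff), to(#000)); /* early WebKit */
background: -webkit-linear-gradient(top, #fff, #000); /* later WebKit */
background: -moz-linear-gradient(top, #fff, #000);    /* Gecko */
background: -o-linear-gradient(top, #fff, #000);      /* Presto */
background: linear-gradient(to bottom, #fff, #000);   /* standard */
```

Note that the syntaxes differ in more than the prefix: the early WebKit form takes point pairs, and the standard form reverses the meaning of the keyword (to bottom instead of top), which is exactly why authoring tools could not just string-replace the prefix.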

Vendor prefixes were not a complete failure. They allowed the release to the masses of innovative products and the deep adoption of HTML and CSS in products that were not originally made for Web Standards (like Microsoft Office). They allowed to ship experimental features and gather priceless feedback from our users, CSS Authors. But they failed for two main reasons:

  1. The CSS Working Group - and the Group is really made only of its Members, the vendors - took faaaar too much time to standardize critical features that saw immediate massive adoption.
  2. Some vendors did not update nor "retire" experimental features when they had to, themselves ditching the rules they originally agreed on.

From that perspective, putting experimental features behind a flag that is off by default in browsers is a much better option. It's not perfect though. I'm still under the impression that standardization becomes considerably harder when such a flag is turned on in a major browser before the spec becomes a Proposed Recommendation. A standardization process is not a straight line, and even at the latest stages of a given specification, issues can arise and trigger more work, and then a delay or even important technical changes. Even at PR stage, a spec can face a Formal Objection or an IPR issue delaying it. As CSS matures, we deal with more and more complex features and issues, and it's hard to predict when a feature will be ready for shipping. But we still need to gather feedback, we still need to "turn flags on" at some point to get real-life feedback from CSS Authors. Unfortunately, you can't easily remove things from the Web. Breaking millions of web sites to "retire" an experimental feature is still a difficult choice...

Flagged properties have another issue: they don't solve the problem of proprietary extensions to CSS that become mainstream. If a given vendor implements a proprietary feature for its own internal use, and that feature is so important to them that they have to "unflag" it, you can be sure some users will start using it if they can. The spread of such a feature remains a problem, because it changes the delicate balance of a World Wide Web that should be readable and usable from anywhere, on any platform, with any browser.

I think the solution is in the hands of browser vendors: they have to consider that experimental features are experimental whatever their spread in the wild. They don't have to care about the web sites they will break if they change, update or even ditch an experimental or proprietary feature. We have heard the message « sorry, can't remove it, it spread too much » too many times. It's a bad signal because it clearly tells CSS Authors that experimental features are reliable, since they will stay forever as they are. Vendors also have to work faster and avoid letting an experimental feature live for more than two years. That requires taking the following hard decisions:

  • if a feature does not stabilize within two years, that's probably because it's not ready, or too hard to implement, or not strategic at that moment, or because producing a Test Suite is too large an effort, or whatever. It then has to be dropped or postponed.
  • Tests are painful and time-consuming. But testing is one of the mandatory steps of our standardization process. We should "postpone" specs that can't get a Test Suite in time to move along the REC track in a reasonable timeframe. That implies removing the experimental feature from browsers, or at least turning the flag it lives behind off again. It's a hard and painful decision, but it's a reasonable one given all I said above and the danger of letting an experimental feature spread.

Monday 29 June 2015

CSS Working Group's future

Hello everyone.

Back in March 2008, I was extremely happy to announce my appointment as Co-chairman of the CSS Working Group. Seven and a half years later, it's time to move on. There are three main reasons for that change, which my co-chair Peter and I triggered ourselves with W3C Management's agreement:

  1. We never expected to stay in that role 7.5 years. Chris Lilley chaired the CSS Working Group 1712 days from January 1997 (IIRC) to 2001-oct-10 and that was at that time the longest continuous chairing in W3C's history. Bert Bos chaired it 2337 days from 2001-oct-11 to 2008-mar-05. Peter and I started co-chairing it on 2008-mar-06 and it will end at TPAC 2015. That's 2790 days so 7 years 7 months and 20 days! I'm not even sure those 2790 days hold a record, Steven Pemberton probably chaired longer. But it remains that our original mission to make the WG survive and flourish is accomplished, and we now need fresher blood. Stability is good, but smart evolution and innovation are better.
  2. Co-chairing a large, highly visible Working Group like the CSS Working Group is not a burden, far from it. But it's not a light task either. We start feeling the need for a break.
  3. There were good candidates for the role, unanimously respected in the Working Group.

So the time has come. The new co-chairs, Rossen Atanassov from Microsoft and Alan Stearns from Adobe, will take over during the Plenary Meeting of the W3C held in Sapporo, Japan, at the end of October, and that is A Good Thing™. You'll find below a copy of my message to W3C.

To all the people I've been in touch with while holding my co-chair's hat: thank you, sincerely and deeply. You, the community around CSS, made everything possible.

Yours truly.

Dear Tim, fellow ACs, fellow Chairs, W3C Staff, CSS WG Members,

After seven years and a half, it's time for me to pass the torch of the CSS Working Group's co-chairmanship. 7.5 years is a lot and fresh blood will bring fresh perspectives and new chairing habits. At a time the W3C revamps its activities and WGs, the CSS Working Group cannot stay entirely outside of that change even if its structure, scope and culture are retained. Peter and I decided it was time to move on and, with W3M's agreement, look for new co-chairs.

I am really happy to leave the Group in Alan's and Rossen's smart and talented hands, I'm sure they will be great co-chairs and I would like to congratulate and thank them for accepting to take over. I will of course help the new co-chairs on request for a smooth and easy transition, and I will stay in the CSS WG as a regular Member.

I'd like to deeply thank Tim for appointing me back in 2008, still one of the largest surprises of my career!

I also wish to warmly thank my good friends Chris Lilley, Bert Bos and Philippe Le Hégaret from W3C Staff for their crucial daily support during all these years. Thank you Ralph for the countless transition calls! I hope the CSS WG still holds the record for the shortest positive transition call!

And of course nothing would have been possible without all the members of the CSS Working Group, who tolerated me for so long and accepted the changes we implemented in 2008, and all our partners in the W3C (in particular the SVG WG) or even outside of it, so thank you all. The Membership of the CSS WG is a powerful engine and, as I often say, us co-chairs have only been a drop of lubricant allowing that engine to run a little bit better, smoother and without too much abrasion.

Last but not least, deep thanks to my co-chair and old friend Peter Linss for these great years; I accepted that co-chair's role to partner with Peter and enjoyed every minute of it. A long ride but such a good one!

I am confident the CSS Working Group is and will remain a strong and productive Group, with an important local culture. The CSS Working Group has both style and class (pun intended), and it has been an honour to co-chair it.

Thank you.

</Daniel>

Wednesday 3 June 2015

In praise of Rick Boykin and his Bulldozer editor

Twenty years ago this year, Rick Boykin started a side project while working at NASA. That project, presented a few months later as a poster session at the 4th International Web Conference in Boston (look in section II. Infrastructure), was Bulldozer, one of the first Wysiwyg editors natively made for the Web. I still remember his Poster session at the conference as the most surprising and amazing short demo of the conference. His work on Bulldozer was a masterpiece and I sincerely regretted he stopped working on it, or so it seemed, when he left NASA the next year.

I thanked you twenty years ago, Rick, and let me thank you again today. Happy 20th birthday, Bulldozer. You paved the way and I remember you.
