Steven Pemberton : Views and Feelings

Why XHTML

Molly Holzschlag asked me if I'd try to explain, clearly and simply, why XML parsing is advantageous and why XHTML is still relevant. This was my answer.

Firstly, some background. I sometimes give talks on why books are doomed. I think books are doomed for the same reasons that I used to think that the VCR was doomed, or film cameras were doomed. People present at the talks make the mistake of thinking that because I think books are doomed, I want them to be doomed, and get very cross with me. Very cross. But in fact, I love books, have thousands of them ... and think they are doomed.

Similarly, people make the mistake of thinking that because I am the voice behind XHTML, I therefore think that XML is totally perfect, the answer to all the world's problems, etc.

I don't think that, but:

  1. I was chartered to create XHTML, and so I did.
  2. XML is not perfect; in fact I think the designers were too print-oriented and failed to properly anticipate its use for applications. As Tim Bray said, "You know, the people who invented XML were a bunch of publishing technology geeks, and we really thought we were doing the smart document format for the future. Little did we know that it was going to be used for syndicated news feeds and purchase orders."
  3. I have often tried to get some of XML's worst errors fixed (not always successfully).
  4. I believe that you should row with the oars you have, and not wish that you had some other oars.
  5. XML is there, there are loads of tools for it, it is interoperable, and it really does solve some of the world's problems.

Parsing

So, parsing. Everyone has grown up with HTML's lax parsing and got used to it. It is meant to be user friendly: "Grandma's markup" is what I call it in talks. But there is an underlying problem that often gets swept under the carpet: there is a sort of contract between you and the browser; you supply markup, it processes it. Now, if you get the markup wrong, it tries to second-guess what you really meant and fixes it up. But then the contract is not fully honoured. If the page doesn't work properly, it is your fault, but you may not know it (especially if you are grandma), and since different browsers fix up in different ways, you are forced to try it in every browser to make sure it works properly everywhere. In other words, interoperability gets forced back to being the user's responsibility. (The same is true of the C programming language, by the way, for similar but different reasons.)
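
To make the contract concrete, here is a minimal sketch, in browser TypeScript, of the two behaviours side by side (DOMParser is a standard browser API; the malformed markup is just an invented example):

    const bad = "<p>one <b>two</p> three</b>";

    // Lax: the HTML parser silently repairs the misnesting and hands you
    // a DOM, with no indication that it had to guess.
    const lax = new DOMParser().parseFromString(bad, "text/html");
    console.log(lax.body.innerHTML); // a repaired tree, not what was written

    // Strict: the XML parser refuses. Browsers signal the failure by
    // returning a document containing a parsererror element (its exact
    // shape varies from browser to browser).
    const strict = new DOMParser().parseFromString(bad, "application/xml");
    if (strict.getElementsByTagName("parsererror").length > 0) {
      console.log("Not well-formed: the contract was not met.");
    }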

Now, if HTML had never had a lax parser, but had always been strict, there wouldn't be a syntactically incorrect HTML page on the planet, because everyone uses a 'suck it and see' approach:

  1. Write your page.
  2. Look at it in the browser; if there is a problem, fix it and look again.
  3. Is it OK? Then you're done.

and thus everyone keeps iterating on their page until it (looks) right. If that iteration also included getting the syntax right, no one would have complained. No one complains that compilers report syntax errors, but in the web world there is no feedback that a page has an error or has been fixed up.

Laxness was actually tried once with programming languages. PL/I was lax, and many programs did something other than what the programmer intended, and the programmer just didn't know. Luckily, other programming languages haven't followed its example.

For programming languages, laxness is a disaster; for HTML pages it is an inconvenience, though with Ajax it would be better if you really knew that the DOM was what you thought it was.
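
A small illustration, assuming a browser environment: HTML parsers quietly insert a tbody element into tables, so a script written against the markup as you typed it looks in the wrong place.

    // The markup as written: a table directly containing a row.
    const src = "<table><tr><td>cell</td></tr></table>";
    const doc = new DOMParser().parseFromString(src, "text/html");
    const table = doc.querySelector("table")!;

    // The fix-up quietly inserted a tbody, so the row is not where the
    // markup says it is.
    console.log(table.firstElementChild?.tagName); // "TBODY", not "TR"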

So for XML the designers said "Let us not make that mistake a second time", and if everyone had stuck to the agreement, it would have worked out fine. But in the web world, as soon as one player doesn't honour the agreement, you get an arms race, and everyone starts being lax again. So the chance was lost.

But, still, being told that your page is wrong, even if the processor goes on to fix it up for you, is better than not knowing. And I believe that draconian error handling doesn't have to be as draconian as some people would like us to think it is. I would like to know, without having to go to the extra lengths that I have to nowadays.

So I am a moderate supporter of strict parsing, just as I am with programming languages. I want the browsers to tell me when my pages are wrong, and to fix up other people's wrong pages, which I have no control over, so I can still see them.
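
To sketch what I mean (the helper function and its name are made up, but the parsing calls are standard browser API):

    // Insist on well-formedness for my own pages, but still render other
    // people's broken ones. A hypothetical helper, not a real library call.
    function parseWithWarning(markup: string, mine: boolean): Document {
      const strict =
        new DOMParser().parseFromString(markup, "application/xml");
      const failed = strict.getElementsByTagName("parsererror").length > 0;
      if (!failed) return strict;
      if (mine) console.warn("This page is not well-formed; please fix it.");
      // Fall back to the lax HTML parser so the page still displays.
      return new DOMParser().parseFromString(markup, "text/html");
    }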

There is one other thing on parsing: the world isn't only browsers. XML parsing is really easy; it is rather trivial to write an XML parser. HTML parsing is less easy, because of all the junk HTML out there that you have to deal with, so if you are going to write a tool to do something with HTML, you have to put in a lot of work to get it right (as I saw from a research project where I watched some people struggling with exactly that).
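
To back that claim up, here is a sketch of a recursive-descent parser for a simplified subset of XML, in TypeScript. It skips entities, comments, CDATA sections and processing instructions, which a real parser would also need, and all the names in it are mine:

    type XmlNode = {
      tag: string;
      attrs: Record<string, string>;
      children: (XmlNode | string)[];
    };

    function parseXml(src: string): XmlNode {
      let pos = 0;
      const fail = (msg: string): never => {
        throw new Error(`${msg} at position ${pos}`);
      };
      const skipSpace = () => {
        while (pos < src.length && /\s/.test(src[pos])) pos++;
      };

      function parseName(): string {
        const m = /^[A-Za-z_:][-A-Za-z0-9._:]*/.exec(src.slice(pos));
        if (!m) fail("expected a name");
        pos += m![0].length;
        return m![0];
      }

      function parseAttrs(): Record<string, string> {
        const attrs: Record<string, string> = {};
        for (;;) {
          skipSpace();
          if (src[pos] === ">" || src.startsWith("/>", pos)) return attrs;
          const name = parseName();
          skipSpace();
          if (src[pos] !== "=") fail("expected '=' after an attribute name");
          pos++;
          skipSpace();
          const quote = src[pos];
          if (quote !== '"' && quote !== "'") fail("expected a quoted value");
          const end = src.indexOf(quote, pos + 1);
          if (end < 0) fail("unterminated attribute value");
          attrs[name] = src.slice(pos + 1, end);
          pos = end + 1;
        }
      }

      function parseElement(): XmlNode {
        pos++; // consume '<'; the caller has already seen it
        const tag = parseName();
        const attrs = parseAttrs();
        const children: (XmlNode | string)[] = [];
        if (src.startsWith("/>", pos)) { // an empty element
          pos += 2;
          return { tag, attrs, children };
        }
        pos++; // consume '>'
        for (;;) {
          if (src.startsWith("</", pos)) { // the matching end tag...
            pos += 2;
            const closing = parseName();
            if (closing !== tag) fail(`</${closing}> does not match <${tag}>`);
            skipSpace();
            if (src[pos] !== ">") fail("expected '>'");
            pos++;
            return { tag, attrs, children };
          }
          if (src[pos] === "<") { // ...a child element...
            children.push(parseElement());
          } else { // ...or character data.
            const next = src.indexOf("<", pos);
            if (next < 0) fail(`missing </${tag}>`);
            children.push(src.slice(pos, next));
            pos = next;
          }
        }
      }

      skipSpace();
      if (src[pos] !== "<") fail("expected a root element");
      const root = parseElement();
      skipSpace();
      if (pos !== src.length) fail("content after the root element");
      return root;
    }

Feed it correct markup and you get a tree back; feed it misnested markup and it tells you so, with a position, rather than guessing. The junk-tolerant HTML equivalent is a great deal larger.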

Let me tell a story. I was once editor-in-chief of a periodical, and we accepted articles in just about any format, because we had filters that transformed the input into the publishing package we used. One of the formats we accepted was HTML, and the filter of course fixed up wrong input, as it had to. Once we had published the paper version of the periodical, we would then transform the articles from the publishing package into a website. One of the authors complained that the links in his article on the website weren't working, and asked me to fix them. The problem turned out to be that his HTML was incorrect: the input filters were fixing it up, but in a slightly different way to how his browser had been doing it. And I had to put in work to deal with this problem.

Another example was a publishing pipeline where one of the programs was producing HTML that was being fixed up, but in a way that broke the pipeline later on. Our only option was to break open the pipeline, feed the output into a file, edit the file by hand, and feed it into the second part of the pipeline.

Usability is where you try to make people's lives better by easing their task: making the task quicker, error-free, and enjoyable. By this definition, HTML's attempt to be more usable completely failed me in these cases.

XHTML

The relevance of XHTML also starts with the observation that not everything is a browser. A lot of the producers of XHTML do it because they have a long XML-based tool pipeline that spits out XHTML at the end, because it is an XML pipeline. Their databases talk XML, their production line produces and validates XML, and at the end out comes XML, in the form of XHTML. They just want the browsers to render their XHTML, since that is what they produce. That is why I believe it is perfectly acceptable to send XHTML to a browser using the media type text/html. All I want is to render the document, and with care there is nothing in XHTML that breaks the HTML processing model.
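
To show what "with care" means, here is a sketch, assuming a browser environment: the same XHTML source comes out the same whether it goes through the strict XML parser or the lax HTML one.

    // An XHTML fragment written with HTML compatibility in mind:
    // lower-case names, explicitly closed elements, and a space before
    // "/>" in void elements such as br.
    const xhtml =
      '<html xmlns="http://www.w3.org/1999/xhtml"><head>' +
      '<title>Test</title></head><body><p>one<br /></p></body></html>';

    // Parsed strictly, as an XML pipeline sees it ...
    const asXml = new DOMParser().parseFromString(xhtml,
                                                  "application/xhtml+xml");
    // ... and laxly, as a browser receiving text/html sees it.
    const asHtml = new DOMParser().parseFromString(xhtml, "text/html");

    // With care, both parses yield the same document structure.
    console.log(asXml.getElementsByTagName("p").length,
                asHtml.getElementsByTagName("p").length); // 1 1

(That sort of care is what the HTML Compatibility Guidelines appendix of the XHTML 1.0 specification spells out.)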

But there is more. The design of XML is to allow distributed markup design. Each bit of the markup story can be designed by domain experts in that area: graphics experts, maths experts, multi-media experts, forms experts and so on, and there is an architecture that allows these parts to be plugged together.

SVG, MathML, SMIL, XForms etc. are the results of that distributed design, and if anyone else has a niche that they need a markup language for, they are free to create one. It is a truly open process, and there are simple, open, well-defined ways to integrate their markup with the existing markups. (One of the problems with the current HTML5 process is that it is being designed as a monolithic lump, by people who are not experts in the areas they need to be experts in.)
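
A sketch of what that plugging together looks like in practice, again in browser TypeScript (the compound document below is invented): each vocabulary lives in its own namespace, which is how a processor knows which expert's rules apply to which element.

    // A compound document: XHTML with an SVG island plugged in.
    const compound =
      '<html xmlns="http://www.w3.org/1999/xhtml"><body>' +
      '<p>A circle:</p>' +
      '<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20">' +
      '<circle cx="10" cy="10" r="8"/></svg>' +
      '</body></html>';

    const doc2 = new DOMParser().parseFromString(compound,
                                                 "application/xhtml+xml");

    // Each element carries the namespace of the vocabulary it belongs to.
    for (const el of Array.from(doc2.getElementsByTagName("*"))) {
      console.log(el.localName, "->", el.namespaceURI);
    }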

So anyway, the reason behind the need for XHTML is that the XML architecture needs the hypertext bit to plug in. Many people misunderstood XHTML 1.* as offering next to no new functionality; in fact, the new functionality was SVG, SMIL, MathML and so on.

And my poster child for that architecture was Joost (alas no longer available), which combined a whole bunch of those technologies to make an extremely functional IPTV player, and you just didn't realise it was actually running in a browser (Mozilla in that case).

Anyway, out on the intranets there are loads of companies using that architecture to do their work, and then having to do extra work to push the results out to the world's browsers by making the results monolithic again.

What I anticipate is that we will see the emergence of XML javascript libraries that will allow you to push your XML documents to the browsers, which then just act as shells supplying javascript processors and renderers, which will process the XML and make it visible. HTML will become the assembly language of the web. HTML is just not addressing the use cases of the real world any more. We need higher levels of markup.
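
To give an idea of the shape such a library might take (the URL, the element names and the rendering rule below are all invented for illustration):

    // The "browser as shell" idea: fetch a domain-specific XML document
    // and let a script render it into HTML.
    async function renderArticle(url: string): Promise<void> {
      const text = await (await fetch(url)).text();
      const xml = new DOMParser().parseFromString(text, "application/xml");
      if (xml.getElementsByTagName("parsererror").length > 0) {
        throw new Error(`${url} is not well-formed XML`);
      }
      // A trivial "renderer": map each <para> in the source vocabulary to
      // an HTML <p>. A real library would dispatch on namespaces and do
      // a great deal more.
      for (const para of Array.from(xml.getElementsByTagName("para"))) {
        const p = document.createElement("p");
        p.textContent = para.textContent;
        document.body.appendChild(p);
      }
    }

    renderArticle("article.xml").catch(console.error);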

So in brief, XHTML is needed because 1) XML pipelines produce it; and 2) there really are people taking advantage of the XML architecture.
