Hypertext 2001 Trip Report, by Lloyd Rutledge

Overview

This year's conference was a good one, though it's hard to identify one overriding theme that distinguishes it from other years. The CWI regulars Lynda and Jacco were dearly missed, with few people greeting me without wondering where the others were (what am I, chopped liver?). If there were a theme, other than Lynda's and Jacco's absence, it could be the one Hugh Davis identified at the closing: "back to the link". He expressed disappointment that there were not more semantics and education papers, but was pleased that there were more papers specifically about linking. Much, if not all, of the conference was videotaped, and supposedly this material will be made available. Other material will certainly be available from the Website, and CDs for the whole Hypertext series (even beyond papers, I think) will be included in upcoming SIGWeb bulletins.

Getting There

The northbound flight did not offer the usual wonderful views of land and sea, but it did offer some nice clouds. It was nice to be above them for a change. I arrived at the hotel room at midnight. The room was okay, but sadly lacking in air conditioning. The hotel itself was located in the city center, conveniently close to the train station and the city hall reception, and far from the conference.

Tuesday's Structural Computing Workshop

I decided to attend the Structural Computing workshop since I did not quite understand what it meant. My impression now is that it is work relating to a philosophical observation that structure should have the same first-class status that data has. It is not necessarily an architectural or technical approach in and of itself. Perhaps the intention is that such approaches will come. The workshop was not given a thematic introduction, so we just leapt into the presentations.

Shapiro Talk

The presenter is an American-born Aarhus-based photographer. He has no computer background, but finds structural computing fascinating.

This talk covers the relation between visual arts and structural computing. There is a temporal order in how people perceive things. For example, a person walks around a sculpture in a particular way. Spatial layout shows a similarly accidental, incidental unpredictability, such as photographs strewn on a table.

Photographs themselves have structure. Certain pixels are a face, for example. There are different ways of encoding this. Histograms are one. Photographs can also have relations with each other.

"Movement as the carrier of meaning." Multimedia provides this. There is a limit to how far we can take it.

"Broadening Structural Computing Systems toward Hypermedia Development" Talk

How does hypermedia modeling fit in? This talk discussed hypermedia design, then hypermedia development. These processes necessitate a model of what is being conveyed. Topics discussed include abstractions, constraints, authoring-in-the-large and authoring-in-the-small. Authoring-in-the-large involves schemas: formal structural specifications. RMDM is one such schema language. OOHDM also uses schemas.

Schema management is a large problem with a complex solution. It is popular and has a long research thread. It comes from genres, like museums and education, that have instances.

Typing and arbitrary granularity are requirements. One should be able to have as many attributes as desired. The system should be scalable.

Monica: How is this structural computing, versus reuse of other systems?

Answer: These are requirements, not new features.

Aggregation is another requirement. Cardinality is yet another. It is an intrinsic feature of links. Generalisation and specialisation make up more requirements. There are designer constraints on authors, and author and designer constraints on readers. A constraint engine should be part of the infrastructure.

An architecture is presented. It involves the designer, author and reader all interacting through it. Schema and domain definitions are in a repository, stored with the data. A constraint engine is also involved. The designer handles schemas and the domain; the author handles the schema and the document; the reader handles the document.

Discussion was about why these techniques were chosen for this problem. The group also discussed detailing the architecture with tools.

Talk by Weigang Wang on GUIs

Icons can be used for visualisation. Structure unfolding in the GUI reveals relations between components. The GUI enables zoom navigation, as in SVG. Automated spatial browsing allows "precedence" links for creating paths. Animations can be used to fly through paths.

Ken's Talk

Software artifacts have many implicit and explicit relationships. They have used OHS, and now want to use structural computing as well. They want to integrate information, which is like schema (or ontology) merging. Their focus is on all aspects of handling relations. The system is called the InfiniTe Information Integration Environment. Translators get things in. Integrators relate them once they're in. The system uses W3C formats.

Problems with the prototype include that its XML- and servlet-based structure has little abstraction, making it hard to change. However, it is flexible. Many services were implemented from scratch. They end up with no structural computing advantages. XML may not be enough, but it provides rapid prototyping. XLink provides some help with typed links.

Pete N: You can't model Trellis with XLink, for example.
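For concreteness, here is a sketch of what a typed XLink extended link looks like; the element names, labels and role URI are my own invention, not InfiniTe's actual format:

    <links xmlns:xlink="http://www.w3.org/1999/xlink">
      <relation xlink:type="extended">
        <item xlink:type="locator" xlink:href="notes.xml#line3" xlink:label="src"/>
        <item xlink:type="locator" xlink:href="spec.xml#sec2" xlink:label="dst"/>
        <!-- the arc's arcrole URI is what carries the link type -->
        <connect xlink:type="arc" xlink:from="src" xlink:to="dst"
                 xlink:arcrole="http://example.org/relations/derivedFrom"/>
      </relation>
    </links>

Anything richer than a typed arc, such as Trellis-style behaviour, has to be layered on top.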

Structures are both within and between files. Documents have contexts.

Ken showed a demo. It's a month-old, proof-of-concept system. The TextTranslator translated a text file into the repository. The result is an XML file with basic encoding: a line tag for each line of text. This helps keyword search. Human integrators can then add more meaningful structural and relational tags. A structure server could dynamically perform different analyses on this.
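As I understood the demo, the translated repository file looks something like the following; this is my own reconstruction, with guessed element names rather than Ken's actual encoding:

    <!-- one element per line of the original text file -->
    <document src="notes.txt">
      <line n="1">Meeting notes from Monday</line>
      <line n="2">Discuss the new translator design</line>
      <line n="3">Action: draft the schema by Friday</line>
    </document>

The integrators then wrap or annotate these line elements with more meaningful tags.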

Monica: Why XPointer and not XPath?

Answer: XPointers contain XPath.
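In other words, an XPointer fragment can wrap an XPath expression, so a pointer into a line-tagged file like the one sketched above might read (my example, not from the demo):

    repository.xml#xpointer(/document/line[3])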

Ken then loaded a second file and gave it a context. Files can be added to contexts; he added the first one to it. Files can't be edited or deleted, à la Ted. They must be versioned.

Ken ran an integrator next. He typed in some keywords and a context. With this process, files get marked up with keyword tags. Then indices can be made from these keyword tags, exporting the information back into the file system.

Related work includes GeoWorlds and xlinkit.com. Structures are not first class in GeoWorlds. The tool xlinkit.com has different goals.

Sigi: Would you distinguish between instances of relation schemas and having a meta-schema, as some XML formats do? How do you handle versions of schemas? VML models instance types, or instances of instances, to handle this.

Answer: Sounds good.

Uffe's Talk on SC for the Web

SC is the topic for an upcoming issue of JNCA, with a submission deadline of December 1st.

There is already much SC on the Web, in various forms. Uffe's contribution is the Construct SC Environment's structure services. It provides authoring interfaces for certain types of structure. They are often interfaces built over other tool interfaces, like "viruses". The primary host tool is Netscape.

See http://ww.cs.aue.auc.dk/~kock/Publications/Construct/auecse-01-01.pdf.

The question-and-answer session discussed whether good HCI practice was used. Some spatial relations in the GUI were misleading.

Tata Talk

This talk compares SC with CSCW. Perhaps it is a good read for Stephane. There was again discussion of the HCI sins in the database.

Monica and Pete's talk on SC's relation to other fields

What semantic conceptual shifts are there in going from a data-centric view to having structure also be first class? First of all, structural computing is not structuralism. Structure does not embody fundamental knowledge, according to SC. Another shift is that data is inherently structured. Data and structure cannot be separated as concepts. Thus, a purely data-centric perspective does not make sense.

Post-modernism insists that such structures don't exist, because all structure is temporary. SC lies between structuralism and post-modernism. Some things benefit from structure, and may have inherent structure that consistently manifests itself under varying circumstances. But often it is hard to find a structure for all potential uses, for all time.

Peter then hops in for his part of this talk and says there is no data, only well and poorly structured structure. First he goes over all the presentations so far. Then he discusses how SC work is Engelbart's C-level work from his HT98 keynote: work to improve R&D, which in turn improves the end product. This leads into a CFP for his Metainformatics Symposium.

Wrapping up and What's Next?

This session involved particular philosophers and philosophies. There was talk of how feasible it is to establish a set lexicon. Pete rants a bit about Hypertext straying from its truer path, but in his defense he uses all the usual lucid disclaimers. They will set up a symposium series to be held separately from, but complement and cooperate with, Hypertext.

SMIL Tutorial

On the second day I gave the SMIL tutorial -- one week after SMIL 2.0 was released as a Recommendation. This was the first full-day SMIL tutorial, but I still have way too many slides. GRiNS worked like a charm, and I freely wandered into totally unknown features of it that made themselves obvious to me and the audience, and then worked. An audience member said he had tried to buy GRiNS off the Website but couldn't, so I surfed the laptop display to the Website and did a rehearsal of an editor purchase for them. The tutorial was favorably received.
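For anyone curious what the tutorial builds up to, a minimal SMIL 2.0 document looks roughly like this; the media file names are made up, and this is a toy example rather than an actual tutorial slide:

    <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
      <head>
        <layout>
          <root-layout width="320" height="240"/>
          <region id="vid" width="320" height="240"/>
        </layout>
      </head>
      <body>
        <!-- par plays its children in parallel -->
        <par>
          <video src="clip.mpg" region="vid"/>
          <audio src="narration.mp3"/>
        </par>
      </body>
    </smil>

Swapping the par for a seq plays the children one after the other, which covers most of what people need to get started.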

Main Conference

SIGWeb Meeting

Newsletters are a good place for non-peer-reviewed work, perhaps for some of our work. They need more submissions. They will start recruiting candidates soon (gulp). The website www.ht00.org was accidentally not renewed and then hijacked by a porn operator. The site now has a porn page with a link to more information on how to purchase the URL.

Session on Rhetoric

Jim Rosenberg's paper on conjunctive links

Conjunctive links are and-links. An acteme is a low-level activity. A simultaneity is one of multiple views of the same thing. Where are you after you traverse an n-ary link? Jim continues with a quick glossary of terms he uses. Following the talk is a large lexical exercise. A key term is actualizing a conjunctive link. This is a different issue from the one we had with disjunctive links, the old kind.

Traversal also gets tricky with conjunctive links, and with n-ary links. You can be in multiple places at once. Anti-conjunctive drift occurs when traversal in individual conjunction components causes the conjunction to disappear. Unfinished conjunction actualizations are pending, and then later get closed. This structural completeness gets tricky. Is there a rhetoric of "re-arrival"? Jim explores other boolean relations as well. Sequence is one.

This is an interesting talk, thick with concepts, but in a way that makes sense, and hints at things not understood that promise to make sense once investigated. This is often not true in wordy talks.

Miles Paper

Introduced as a "strong humanist" with a newcomer Nelson-candidate paper, but actually an old hand. Again, this is a lexically challenging talk, but now more in a referential sense. Adrian is a funny and disarming speaker, whose Australian manner lightens his potentially heavy topic. He wants to loosen up stuffy prohibitions on links. While the previous paper was fore-titled "and and", this one should start "not not". Sometimes links are just beautiful, sometimes they are just fun. They don't always have to generate a profit. He was a lot wordier than this, of course. He refers to film theory quite a bit, though he denies promoting film theory as an improvement over hypertext theory. All I got is that he repeats hypertext and film theory clichés to say that links can be simpler. In the technical background, his slideshow was a MOV file with scripts, which he mentions sometimes causes the need for a reboot. However, it offered nothing everyday tools like PowerPoint couldn't do. Is this what literati would call an analogy?

But in the end, this paper won the Nelson award for best newcomer paper. Perhaps I'm just too dense to see the point. The author, Adrian Miles, approached me asking if we'd like to participate in a project using SMIL to handle film analysis and film art stuff. That discussion is now in the works, and is primarily up to Frank Nack.

Cole Short Paper

This paper goes over the site www.onlinecaroline.com that was much acclaimed at last year's Hypertext, and elsewhere. It turns out the narrative is fixed, though your interaction changes some templated phrases in the narrative's description to the reader. He wasn't disappointed. The makers admit this lack of user control publicly. The story is about disempowerment, and the hypertext tease enhances the reader's sense of disempowerment. Hypertexts don't have to empower to have an effect.

Semantic Web Panel

This panel, organized at the last minute (after having been abandoned by Lynda ;), was chaired by Wendy and consisted of Carole, Cathy Marshall, David Durand, David DeRoure and myself. It got very positive feedback, with some people calling it their highlight of the conference. Wendy promised the audience that the video of this workshop would be made available. Wendy gave us panelists witty introductions, calling Carole "Miss Ontology UK" and saying, based on our earlier discussions, that "Lloyd isn't a semantics expert himself, but has lunch with semantic experts almost every day."

David DeRoure generally expressed the sober viewpoint that there are some neat things that may be achievable with the Semantic Web and are worth working for, but they tend not to be what people end up having in mind as features of the upcoming promised Semantic Web. David unfortunately did not get to speak as much as we would have liked, or as he certainly deserved, because the rest of the panel was so assertive and talkative.

Cathy gave the skeptic's viewpoint. Devastatingly for some of our conjectures in the Semantic Web agenda technical report, Cathy stated that she did early semantic hypertext research and stopped because she felt this research had failed. She told a story of how Tim Berners-Lee came to her in 1994 and told her about the Semantic Web and how great her earlier research was, and Cathy warned him that it was all faulty and that he shouldn't pursue the Semantic Web. Unfortunately, she didn't get very specific, and I never had a chance to catch her later to go over it.

Carole gave the informed expert's viewpoint, assuring the audience that the ontology and KR people doing the actual work on the Semantic Web are the first to soberly restrain descriptions of specifically what the Semantic Web means in terms of new end-user functionality. She gave some informed insider examples to confirm this.

David Durand was critical of the mix of research and standardization that the effort was drumming up. He stated that standardization is not a place for researchers because most of the work is political.

I forget most of what I said, but it's all on videotape. I suggested renaming the effort to "The somewhat more informative but still very messy Web", stating that there is indeed a positive contribution worth working for, but those knowing the details will not share in the magic and mystique the term "semantic" conjures.

WebDAV and DeltaV technical briefing by Jim Whitehead

Infrastructure is the key word. DeltaV puts versioning on top of WebDAV. Microsoft Windows 2000 and Office now use WebDAV, as do GoLive 5, Dreamweaver 4 and many others. WebDAV is seeing widespread and important adoption and deployment. Jim gives very clean and professional - and good, of course - talks. WebDAV records metadata, by the way. Properties are first-class objects that can be authored collaboratively. The big change is the new DeltaV work, to put the V back in WebDAV.
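To give a feel for the protocol, a WebDAV client reads properties with a PROPFIND request carrying an XML body in the DAV: namespace; the host name and the custom author property below are my own illustration, not something from Jim's talk:

    PROPFIND /docs/report.html HTTP/1.1
    Host: example.org
    Depth: 0
    Content-Type: text/xml

    <?xml version="1.0"?>
    <propfind xmlns="DAV:">
      <prop>
        <!-- a standard DAV: property and a made-up custom one -->
        <getlastmodified/>
        <author xmlns="http://example.org/props/"/>
      </prop>
    </propfind>

DeltaV then adds methods such as VERSION-CONTROL, CHECKOUT and CHECKIN on top of this, which is the "V" Jim was referring to.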

References

For more information, see the Hypertext 2001 Website (before it becomes a porn site ;). Also, see the media I captured with the CWI INS2 Media Capture Apparatus, at /ufs/lloyd/lib/media/CWI-INS2/13-18aug2001HT01/.