Hypertext 2005 Trip Report

by Lloyd Rutledge (so far)

Workshops

Adaptive and Personalized Semantic Web workshop

I was a bit disappointed that some of the papers here weren't about adaptation, personalization or the Semantic Web. The first paper discussed using link metrics and browse history to "learn" user practices. The second paper was a very technical discussion of wrappers.

The third paper was by Thanyalak Maneewatthana from Southampton, who discussed Southampton's general model (built on top of FOHM) for all this conference's (and most of CHIP's) keywords. Their model has many components familiar from those we would make. One distinction is that they distinguish a structure model from their RDF-encoded user and domain models. The structure model is highly rhetorical. Their user model is an overlay of "beginner", "intermediate", or "expert" on the components of the domain.

The fourth paper discussed adding inferencing to knowledge discovery on the Semantic Web. The speaker described their search for a reasoner, which resulted in the choice of RACER.

Narrative, Musical, Cinematic and Gaming Hyperstructure workshop

Katya's Creating a semantic based discourse model for hypermedia presentations: (un)discovered problems

Katya presented her paper. The resulting discussion was about templates versus grammars for genres and narrative generation. Templates are restricted grammars. Grammars give more flexibility, with more danger of producing nonsense.

Mischa's Towards the Narrative Annotation of Personal Information and Gaming Environments

Mischa of Southampton is involved in making their ontomedia ontology. Among his interests is making life stories from digital photographs and their metadata. He uses multiplayer games for some insight in doing this. We discussed a scenario where one takes pictures at a wedding. The pictures capture GPS and time, which correspond to an entry in the user's agenda for a wedding of two friends.

Frank's and Stefano's Generating Media Stories - Play it again, Sam

Frank discussed Auteur. Stefano discussed DISC and Vox Populi. They were presented as "closed world" versus "open world" approaches, and they wrapped up together. Conclusions include the clear need for genres, and how larger story structures are still a challenge.

The discussion started with how loose you can make it. How much a priori knowledge do you need for each genre? Auteur defines slapstick. Vox Populi defines a space of all possible stories.

Mark J. Weal's Exposing potential narrative structure through user choice

The talk resulted in a discussion of different sorts of underlying models. Hypertext, for example, is a directed graph.

David De Roure's The Sonification of Hyperstructure

This was presented by David Millard. We discussed what information the music communicated, and why music should be the form. There are possibilities unique to music, such as certain intervals for mood, conflict, resolution, pace and others.

Kevin McGee's Programmable Fiction

Kevin McGee asked good questions for this and previous talks. Discussion included what kind of semi-standardized model and interface he used. The game has a world state with rules for changing it and for defining winning. His take is that by allowing the programming of interaction macros, the user can define his own terms for interaction, instead of being limited to the objects provided.

Jean-Hugues Réty's Some issues about verifying interactive narrative structures

This paper is co-authored by Nicolas Szilas, who has spoken at INS2. The discussion again went to types of models and structures, what kinds of processes can occur on them, and whether these processes are tractable.

Future Direction Discussion

I would call all this "presentation-oriented metastructures". Kevin mentioned that all these are descriptions. Also, how do you represent and process aesthetics? All this workshop's keywords have structures. So do learning systems. Learning systems involve user models a lot. And much, much more.

Opening Keynote: Monika Henzinger's Hyperlink Analysis on the World Wide Web

Monika provided an overview of Google-ology, including such terms as term frequency (tf) and inverse document frequency (idf). What applies to terms also applies to properties and concepts (that is, any resource) in RDF search structuring, and we've discussed such things for Noadster and CHIP. If a property occurs more often in your selection than in the collection as a whole, then it is important to you.
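The tf/idf weighting she described can be sketched in a few lines. This is an illustrative Python version under my own assumptions (the function name and the token-list document representation are mine, and this is not Google's actual ranking code); it captures the intuition above: a term (or property) weighs more when it is frequent in your selection but rare in the collection as a whole.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute tf-idf weights for each term in each document.

    docs: list of token lists. Returns a list of {term: weight} dicts.
    Illustrative sketch only, not Google's implementation.
    """
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)  # raw term counts in this document
        weights.append({
            # tf (normalized by document length) times idf (log of
            # inverse document frequency across the collection)
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights
```

A term appearing in every document gets idf log(1) = 0, so it contributes nothing to distinguishing documents, which matches the selection-versus-collection point above.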

She also discussed at length Google's PageRank concept, comparing it often with HITS. She went further into new and more detailed aspects of searching, many of which apply to Noadster and CHIP. These included finding hubs. Authorities don't apply to semantics until you start accounting for "friends" who provide semantic annotations. Issues related to link analysis apply well, but those involving spam and trust aren't (yet) quite so necessary for semantics.
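For reference, the PageRank idea she compared with HITS reduces to a simple power iteration over the link graph. The following is a hedged sketch in Python (the function name, the dict-of-lists graph encoding, and the dangling-page handling are my own choices, not Google's implementation): each page repeatedly shares its rank among the pages it links to, with a damping factor modeling the random-surfer jump.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration sketch of PageRank (illustration only).

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank; the ranks sum to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iterations):
        # Base value: the random-surfer jump, shared equally.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # A page divides its damped rank among its out-links.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: distribute its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a symmetric two-page cycle this converges to equal ranks, while adding more in-links to a page raises its rank, which is the core of the link-analysis view she presented.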

Paper Presentations

Session 2: Authoring for Comprehension

Jill Walker's Feral Hypertext: When Hypertext Literature Escapes Control

Her thesis is that the tendency of hypertext concepts to go beyond our control and our original intentions is a good, useful thing. Folksonomies are an example. I discussed with her a comparison with "viral" programming for the Web. Here, both feral and viral mean having what you contribute to the Web be decentralized and act, and replicate, on its own, and adapt to many varied forms of use. Such contributions act along the lines of chaos theory, being considered as small agents with patterns of interaction with each other. Can we engineer hypermedia contributions by exploiting such factors?

David Millard's Mind the Semantic Gap

Dave presented an axial model for relating different hypermedia systems in terms of formality for authors and readers.

Claus Atzenbeck and Peter J. Nürnberg's Constraints in Spatial Structures

The paper uses real-world paper-based desktop spatial structures to model (or at least as an analogy for) what computers can do. They did some visual stylistic replication of this analogy on computers. This falls in the Hypertext "spin-off" field Structural Computing, which Peter champions.

Session 3: Quantifying and Computing with Structure

Stefano's Supporting the Generation of Argument Structure within Video Sequences

Stefano demonstrated the maturing (and maturity) of his PhD thesis work here well. David Millard asked about the inconsistency with RDF. (I warned Stefano on the train ride (and many other times) that he'd get this question.) Stefano answered that his structure was computationally less complex. Another question noted that the two sample sequences having opposing views indicates the challenge of giving a moderated perspective. Stefano answered that the bias or balance can be explicitly set by author or user. Frank Wagner asked about user access to the underlying argumentation.

Neoklis Polyzotis's Searching a File System using Inferred Semantic Links

Paul asked about dots in filename comparison, thus finding suffixes. He said they went to ignoring suffixes, only using them to structure search returns. Fabio asked about adding weights to certain extensions to improve search.

Panel

The first question was about where the meaning lies. Stefano replied that in his system the author can control what components mean. Neoklis added that structure is inherent information and is typically easy and useful to extract.

Does the structure have meaning internally or does the user perceive it? What are their different approaches? Neoklis feels the structure can be found automatically to provide the user additional information, but automatically finding the meaning of this structure can't be done. Stefano shares the generation of a semantic graph but uses underlying assumptions, not raw text.

Paul asked how well Stefano's system scales. Stefano replied that you can do the process in parallel with minimal interdependencies. He finds the effort to be of the same order of magnitude as editing. Neoklis asked if machines could do it. Stefano replied that perhaps some text extraction could enable Neoklis's work to happen, but it would have errors. Another question about automation had Stefano reply that one essential is getting good cuts, which requires human work. He then plugged his demo (good boy). Fabio asked if editors would bias the material. Stefano replied that the bias is introduced in the thesaurus.

Session 5: Transformations and Adaptations

Kumiyo Nakakoji's What Is the Space For? The Role of Space in Authoring Hypertext Representations

She made an interesting point about using space as a means (representation) vs as an end (presentation).

Ewald Ramp's High-Level Translation of Adaptive Hypermedia Applications

TU/e's (or Pittsburgh's?) own Ewald presented another chapter in AHA!. He offered a Pittsburgh-to-Eindhoven mapping as an example of his means for representing (and thus mapping) multiple systems, thus having a meta-system model (an adaptive Dexter, I suppose, but processible). It handles concept structure, adaptation and layout. The chair Jocelyn asked if he'd work with other systems. He said it would happen in the next paper.

Craig Stewart's (and Alexandra Christea's) Evaluation of Adaptive Hypermedia Systems' Conversion

This is an AH mapping paper like the last, but at an interface level, and between MOT and WHURLE.

Posters and Demos

Session 8: User Trails

Daniel Smith's (and monica schraefel's) The Evolving mSpace Platform: Leveraging the Semantic Web on the Trail of the Memex

Never put "evolving" in a paper title: it's as useless as "novel", but self-deprecating as well. The question was about how much knowledge structure is not in a matrix, such as once you select a composer, you are limited to one genre. Daniel answered that sometimes composers have multiple genres, which misses an important point: how can mSpace handle structure that is not inherently strongly matrixed?

Mária Bieliková's Improving Adaptation in Web-Based Educational Hypermedia by means of Knowledge Discovery

This paper is about data mining user behavior to provide adaptation. The result is a recommended sequence of concepts for the user to traverse. Paul asked about Christobal Romero's mining on user data from AHA!.

Frank Shipman's Parsing and Interpreting Ambiguous Structures in Spatial Hypermedia in Session 4: Patterns, Irregularities and Ambiguities

The question was about what type of user work this supports. Frank said it looks for general structures that occur in multiple activities. These particular structures are derived from those typical of user tasks in general, and work-oriented ones in particular. Supporting art is hard because the structures are more diverse.

Panel for User Trails

They started with general comments on what to learn about users from watching them. Watching groups is important, and learning from groups. This is analogous to paths forming in woods from scratch, with no paths fixed by non-using "experts". Is it possible to mine history logs of multiple users? Studies show most users will follow preset paths, so it's important to observe those who wander off the fixed trail. Folksonomies are group-formed paths over knowledge instead of navigation.

Session 10: Narratives

Frank Shipman's Hypervideo Expression: Experiences with Hyper-Hitchcock

I bothered Frank about comparisons with SMIL. They really just analyzed use of an authoring system and the behaviors it generated. He said they lost desired control if using SMIL. However, much of the desired end behavior user studies indicated has direct correspondence with particular SMIL constructs, such as exclusion.

Session 11: Annotations

Olivier Aubert's Advene: Active Reading through Hypervideo

A question was about how to detect scenes and how to share data. Scene detection is out of the research scope, so they do it simply. They now share metadata for the same video.

Ippokratis Pandis's Semantically Annotated Hypermedia Services

The question was about expressing workflows using services. It was unclear what kind of services he seeks to represent, and why they are useful.

Panel

Chair Marc asked about what level of precision they use. Olivier said they just need simple conceptual indicators and links (URIs) (I suppose the details are then emergent). Ippokratis said the level of detail of requirements varies greatly from person to person. Actual use is more important than its intent.

Stefano asked for their feeling on what users will perceive and need the most. Users should have power, but the author should be able to ensure a vision. The user's interaction is more visual: I ask, I get. You don't need the mechanical details. What user-perceived direct use do your systems enable?

Olivier replied that the metadata should be accessible to the end user -- often they can't do anything with it, even if it is theirs, as with DVDs. Defining one's own sequencing is desired for DVDs but denied, even though the needed information is in them.

Another asked about cross-video joining and using the tools for organization, and for adding subtitles. How can this data you collected be queried? Olivier replied that his system is an offline wiki for video.

The next question proposed having a service ontology before services come to market. Services devoted to authoring enable metadata transference in a standardized way.

Then they showed demos. First was Hypervideo showing HT04. Then was Stefano's Vox Populi. The interface looked impressive. He gave two short examples: one of attack and one of support.

Session 13: Enabling Frameworks and Foundations: Schemas, Part II

Franca Garzotto's Towards Enterprise Frameworks for Networked Hypermedia: a Case-Study in Cultural Tourism

Most hypermedia is content intensive. Tools should support user programming, that is, workflow and domain expert activity. A question was about the next step of layout customization. The answer was that they also customize navigation. Layout templates are in CSS now, requiring text editing. They will add an interface for easier customization.

Jessica Rubart's Supporting Joint Modeling by End Users

Semantic holism is modeling. Spatial hypertext is visual. Schema-based is typing. Combining these adds simplicity and flexibility. The question was about workflow in semantic holism's emergent structure. How does having basic building blocks help? The answer was that emergence is important and is enabled by the lack of strict workflow. Spatial hypertext helped.

Panel

Schemas are a compromise between enabling and restricting. Franca's approach starts with a schema and progresses by modification. Jessica's starts with primitives and progresses by adding. Jessica's approach encourages emergence more. Franca wants more powerful abstraction. She's waiting for something concrete from the Semantic Web. I asked about how to engineer chaos/emergence as a way of effectively combining the approaches.

Form through Stretching

Anders Fagerjord's Editing Stretchfilm

This paper is written and published as a hypertext, and a stretching (zooming) one at that.

Tor Brekke Skjøtskift's Syntagmatic- and Paradigmatic Stretchtext

Panel

How much will people repeatedly use stretching? Will they watch the whole short version first? The answer is that people are used to abridged versions. Time is the motivation. Repetition depends on genre. Friends don't watch your family videos repeatedly. DVDs have cut scenes and director's cuts. It would be nice if you could set a time, as with the Die Hard DVD. There is also a "spiral" pattern of short, then long, then short.

What motivates stretching? Again, it is genre-dependent. Sometimes you know the outcome, which makes stretching/shrinking less dangerous. I asked about maintaining narrative quality during stretching and shrinking. How much can you do this? Anders mentioned how footnotes can be spoilers, whereas other details are not.

The next question was about pace. Can you keep (same) good tempo for abridged or stretched versions? Choosing pace is separate from length. Trailers have quick pace and don't spoil. And they scramble order. Primarily, they set mood.

What other genres have stretch issues? News is one. It gets shortened. It has inverse pyramid form. Encyclopedia is another.

Some people prefer the annotations over the material itself. They guide the reader even more than the material does. This can help or hinder additional interpretations. Finding ways to shorten involves complex issues of narrative. What makes a movie pleasant can be more important than its content.

Coffee, Food and Beer

Dinner with Southampton

Mischa. Stefano.

Coffee with Ewald Ramp

Eindhoven and Pittsburgh.

Dinner with Franca

Lynda. Rijksmuseum website.

Dinner with Salzburg Research

No HT2006?

SIGWeb Meeting

Peter Nürnberg, president of SIGWeb, chaired, of course. He started by saying SIGWeb (and Hypertext) was much larger but went down in the late 90s. The membership drop stabilized around June 2004. Recent conferences have lost money. WWW2006 is agreed in principle as being in SIGWeb, along with ICWE and CIKM. We now do only CD proceedings. We get money from ACM DL hits on proceedings.

Hypertext does not perform well from a SIG perspective. We can help ensure HT's future by ... volunteering, especially by hosting. Especially in the US. Staying in the conference hotel also helps by meeting the room block. SIGWeb can no longer hold HT without justification. We will hold HT06, but as a symposium (and in Denmark). Uffe will address it more in his keynote. We have preliminary CFPs. HT07 is set, and also in Europe. There is no plan for HT08, which we need soon. This all is not panic, just manageable facts of life.