|
|
Note:
I am in the process of updating the abstracts and links on this page to
point to stable versions of documents. This has the disadvantage that
subscriptions may be required to access full-text versions of articles.
To help readers who don't have the required subscription, a draft copy
of each article is also provided. This version may not be the same as
the final published text. For the most complete copy, follow the link
to the referenced digital library. If you have questions about any item
here, please contact me.
|
|
|
|
|
SMIL 3.0: Interactive Multimedia for the Web, Mobile Devices and Daisy Talking Books
|
Author(s) |
Dick C.A. Bulterman and Lloyd Rutledge
|
Abstract |
The SMIL 3.0 language is the W3C's standard language for web-based multimedia. This book provides a comprehensive guide to all of SMIL's elements and attributes for all versions, from the complete SMIL Language profile to SMIL Tiny. Driven by dozens of examples and illustrated code fragments, every aspect of SMIL is treated in this book.
Starting with essential background on XML and the streaming media infrastructure, this book considers recent features such as smilText and smilState, as well as the structuring, layout and timing aspects required to master SMIL. In addition, the book describes advanced SMIL functionality, including transitions, animation and embedded metadata, all from an application-oriented perspective. |
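As a flavour of what the book covers, a minimal SMIL presentation of the kind developed in its examples might look as follows. This is a sketch rather than an excerpt from the book: the file names, region sizes and durations are invented for illustration.

```xml
<smil xmlns="http://www.w3.org/ns/SMIL" version="3.0" baseProfile="Language">
  <head>
    <layout>
      <root-layout width="320" height="240"/>
      <region id="main" width="320" height="240"/>
    </layout>
  </head>
  <body>
    <seq>
      <!-- show a photo and play narration together for ten seconds -->
      <par dur="10s">
        <img src="photo.jpg" region="main"/>
        <audio src="narration.mp3"/>
      </par>
      <!-- then display a closing caption using the smilText feature -->
      <smilText region="main" dur="5s">Thanks for watching.</smilText>
    </seq>
  </body>
</smil>
```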
Publisher |
Springer-Verlag, Heidelberg, December 2008.
|
Links |
|
|
|
|
|
SMIL 2.0: Interactive Multimedia for Web and Mobile Devices
|
Author(s) |
Dick C.A. Bulterman and Lloyd Rutledge
|
Abstract |
The SMIL 2.0 Language is the Web's solution for multimedia. This book
provides a comprehensive guide to all of SMIL 2.0's elements and
attributes for both desktop and mobile multimedia. Driven by dozens of
examples and illustrated code fragments, every aspect of SMIL is
treated. This book considers the document structure, media
integration, presentation layout and detailed timing constructs
available in a wide range of SMIL-compatible players and browsers. The
book also considers advanced SMIL topics of linking, content control
and SMIL animation, all from a practical, implementation-oriented
perspective. |
Publisher |
Springer-Verlag, Heidelberg, May 2004.
|
Links |
|
|
|
|
|
|
Multimedia Modelling 2001
|
Author(s) |
Lloyd Rutledge and Dick C.A. Bulterman, eds.
|
Abstract |
This book contains the edited proceedings of the 2001 Multimedia Modelling Conference, held at CWI in Amsterdam.
|
Publisher |
CWI, Amsterdam, 2001.
|
Links |
|
|
|
|
|
From One to Many Boxes: Mobile Devices as Primary and Secondary Screens; in A.C. Roibás, A. Marcus, and R. Sala (eds.), Mobile TV: Customizing Content and Experiences.
| Author(s) |
Pablo Cesar, Dick C.A. Bulterman, Hendrik Knoche
| Abstract |
 
|
Publisher |
Springer-Verlag, Heidelberg, Germany, 2010.
| Links | |
|
|
|
Present and Future of Software Graphics Architectures for IDTV; In G. Lekakos, K. Chorianopoulos, and G. Doukidis (eds.), Interactive Digital Television: Technologies and Applications.
| Author(s) |
P. Cesar, D. C.A. Bulterman, K. Baker, L.F. Gomes Soares, S. Cruz-Lara, A. Kaptein
| Abstract |
 
|
Publisher |
IGI Global Publishing, Hershey, PA, USA, 2007.
| Links | |
|
|
|
Readings in Multimedia Computing and Networking: Authoring Systems (K. Jeffay and H.J. Zhang, Eds.)
|
Author(s) |
Dick C.A. Bulterman (section editor)
|
Abstract |
Readings in Multimedia Computing and Networking captures the broad
areas of research and developments in this burgeoning field, distills
the key findings, and makes them accessible to professionals,
researchers, and students alike. For the first time, the most
influential and innovative papers on these topics are presented in a
cohesive form, giving shape to the diverse area of multimedia
computing. The seminal moments are recorded by a dozen visionaries in
the field and each contributing editor provides a context for their
area of research by way of a thoughtful, focused chapter introduction. |
Publisher |
Morgan Kaufmann, San Francisco, 2001.
|
Links |
|
|
|
Title |
Video Analysis Tools for Annotating User-Generated Content from Social Events
| Author(s) |
R.L. Guimaraes, R. Kaiser, A. Hofmann, P. Cesar, and D.C.A. Bulterman
| Abstract |
In this presentation we describe how low-level metadata extraction tools have been applied in the context of a pan-European project called Together Anywhere, Together Anytime (TA2). The TA2 project studies new forms of computer-mediated social communication between spatially and temporally distant people. In particular, we concentrate on automatic video analysis tools in an asynchronous community-based video sharing environment called MyVideos, in which users can experience and share personalized music concert videos within their social group.
| In |
Proceedings of the International Conference on Semantic and Digital Media Technologies (SAMT 2010), Saarbrücken, Germany, December 1-3, 2010.
| Links | | |
Title |
Beyond the Playlist: Seamless Playback of Structured Video Clips
| Author(s) |
B. Gao, A. J. Jansen, P.S. Cesar, D.C.A. Bulterman
| Abstract |
In this paper we introduce the design and implementation of seamless playback for video/audio in the Ambulant Player. The Ambulant Open SMIL Player is an open-source media player that supports SMIL 3.0. A typical SMIL multimedia presentation consists of a set of declarative references to video/audio clips, which are related to each other in terms of temporal and spatial relationships. Unfortunately, the declarative nature of SMIL often imposes performance delays, as individual items are fetched and presented. In this paper, we discuss the design and implementation of a caching and prefetching scheme that avoids service interruptions and eliminates switching delays among these clips. A collection of videos is thereby rendered as if it were streamed continuously from a single media container on a single media source. Experiments are carried out to validate that our techniques can significantly lower the start delay of media rendering and therefore realize the seamless playback of SMIL multimedia presentations.
| In |
IEEE Transactions on Consumer Electronics (IEEE TCE), 56(3): 1495-1501, 2010.
| Links | | |
Title |
A Model for Editing Operations on Active, Temporal Multimedia Documents
| Author(s) |
A. J. Jansen, P.S. Cesar, D.C.A. Bulterman
| Abstract |
Inclusion of content with temporal behavior in a structured document leads to such a document gaining temporal semantics. If we then allow changes to the document during its presentation, this brings with it a number of fundamental issues that are related to those temporal semantics. In this paper we study modifications of active multimedia documents and the implications of those modifications for temporal consistency. Such modifications are becoming increasingly important as multimedia documents move from being primarily a standalone presentation format to being a building block in a larger application. We present a categorization of modification operations, where each category has distinct consistency and implementation implications for the temporal semantics. We validate the model by applying it to the SMIL language, categorizing all possible editing operations. Finally, we apply the model to the design of a teleconferencing application, where multimedia composition is only a small component of the whole application, and needs to be reactive to the rest of the system. The primary contribution of this paper is the development of a temporal editing model and a general analysis which we feel can help application designers to structure their applications such that the temporal impact of document modification can be minimized.
| In |
Proceedings of the ACM Symposium on Document Engineering (ACM DocEng 2010), Manchester, UK, September 21-24, 2010.
| Links | | |
Title |
Creating and Sharing Personalized Time-Based Annotations of Videos on the Web
| Author(s) |
R. Laiola Guimaraes, P.S. Cesar, D.C.A. Bulterman
| Abstract |
This paper introduces a multimedia document model that can structure community comments about media. In particular, we describe a set of temporal transformations for multimedia documents that allow end-users to create and share personalized timed-text comments on third-party videos. The benefit over current approaches lies in the use of a rich captioning format that is not embedded into a specific video encoding format. Using a Web-based video annotation tool as an example, this paper describes the possibility of merging video clips from different video providers into a logical unit to be captioned, and tailoring the annotations to specific friends or family members. In addition, the described transformations allow for selective viewing and navigation through temporal links, based on end-users' comments. We also report on a predictive timing model for synchronizing unstructured comments with specific events within a video. The contributions described in this paper have significant implications for the analysis of rich media social networking sites and the design of next-generation video annotation tools.
| In |
Proceedings of the ACM Symposium on Document Engineering (ACM DocEng 2010), Manchester, UK, September 21-24, 2010
| Links | | |
Title |
Challenges for Model-Based User Interfaces: Multimedia and The Web of Things
| Author(s) |
A.J. Jansen, P.S. Cesar, D.C.A. Bulterman
| Abstract |
This paper discusses two upcoming challenges for model-based user interfaces: multimedia and the Web of Things. On the one hand, multimedia content and real-time media transmission have their own temporal model (state) that has to be seamlessly integrated into the user interface event model. On the other hand, thousands of interconnected objects that can act as input and output devices impose unforeseen challenges for mapping the user interface onto the real world.
| In |
W3C Workshop on Future Standards for Model-Based User Interfaces, Rome, Italy, May 13-14, 2010
| Links | | |
Title |
From IPTV Services to Shared Experiences: Challenges in Architecture Design
| Author(s) |
I. Vaishnavi, P.S. Cesar, D.C.A. Bulterman, O. Friedrich
| Abstract |
This paper discusses the architectural challenges of transitioning from services to experiences. In particular, it focuses on the evolution from traditional IPTV services to more social scenarios, in which groups of people in different locations watch synchronized multimedia content together. In addition to the multimedia content, the shared experiences envisioned in this article provide a real-time communication channel between the participants. Based on an implemented architecture, this paper identifies a number of challenges and analyzes them. The most important challenges highlighted in this article include: shared experience modeling, universal session handling, synchronization, and quality of service. This article is a first step toward a truly interoperable ecosystem that can offer cross-domain experiences to users.
| In |
Proceedings of IEEE International Conference on Multimedia & Expo (ICME 2010), Singapore, July 19-23, 2010
| Links | | |
Title |
Web-Mediated Communication: in Search of Togetherness
| Author(s) |
P.S. Cesar, D.C.A. Bulterman, R. Laiola Guimaraes, I. Kegel
| Abstract |
This paper introduces a community-based video sharing environment to support asynchronous communication among heterogeneous participants within a restricted social community. Unlike other community sharing efforts, our work is predicated on the desire to strengthen existing strong ties among group members, in which existing relationships can be nurtured. Using the example of a high school concert as a starting point, this paper discusses a sharing framework in which highly personalized music videos are constructed from a collection of independent parent-made recordings. The environment addresses a series of parent needs for producing tailored presentations with custom features, based on 'safe sharing' of common assets. We report on the user needs determined by a number of focus groups and on a web-based environment that can be used to manage the complex inter-personal relationships and time-variant social contexts within a community of diverse (but related) users.
| In |
Proceedings of the Web Science Conference (WebSci10), Raleigh (NC), USA, April 26-27, 2010.
| Links | | |
|
|
Title |
Fragment, Tag, Enrich, and Send: Enhancing the Social Sharing of Videos
| Author(s) |
P. Cesar, D.C.A. Bulterman, J. Jansen, D. Geerts, H. Knoche, and W. Seager
| Abstract |
The migration of media consumption to personal computers retains distributed social viewing, but only via nonsocial, strictly personal interfaces. This article presents an architecture and implementation for media sharing that allows for enhanced social interactions among users. Using a mixed-device model, our work allows targeted, personalized enrichment of content. All recipients see common content, while differentiated content is delivered to individuals via their personal secondary screens. We describe the goals, architecture, and implementation of our system in this article. In order to validate our results, we also present results from two user studies involving disjoint sets of test participants.
| In |
Proceedings of ACM Conference on Networks and OS Support for Digital Audio and Video (NOSSDAV 2009), Williamsburg, Virginia, June 2009
| Links | | |
Title |
Estimate and Serve: Scheduling Soft Real-Time Packets for Delay Sensitive Media Applications on the Internet
| Author(s) |
I. Vaishnavi, D.C.A. Bulterman
| Abstract |
| In |
Proceedings of ACM Conference on Networks and OS Support for Digital Audio and Video (NOSSDAV 2009), Williamsburg, Virginia, June 2009
| Links | | |
Title |
From One to Many Boxes: Mobile Devices as Primary and Secondary Screens
| Author(s) |
P. Cesar, D.C.A. Bulterman, H. Knoche
| Abstract |
| In |
Mobile TV: Customizing Content and Experiences. Springer-Verlag, 2009.
| Links | | |
Title |
Television Content Enrichment and Sharing: The Ambulant Annotator
| Author(s) |
P. Cesar, D.C.A. Bulterman, J. Jansen, R. Laiola Guimaraes
| Abstract |
| In |
Social Interactive Television: Immersive Shared Experiences and Perspectives, pp. 67 - 75. IGI Global Publishing, Hershey (PA). 2009. | Links | | |
Title |
Adding Dynamic Visual Manipulations to Declarative Multimedia Documents
| Author(s) |
A.M.M. Kuijk, R. Laiola Guimarães, P.S. Cesar, D.C.A. Bulterman
| Abstract |
The objective of this work is to define a document model extension that enables complex spatial and temporal interactions within multimedia documents. As an example we describe an authoring interface of a photo sharing system that can be used to capture stories in an open, declarative format. The document model extension defines visual transformations for synchronized navigation driven by dynamic associated content. Due to the open declarative format, the presentation content can be targeted to individuals, while maintaining the underlying data model. The impact of this work is reflected in its recent standardization in the W3C SMIL language. Multimedia players such as Ambulant and RealPlayer support the extension described in this paper.
| In |
Proceedings of ACM Symposium on Document Engineering (DocEng 2009), Munich, Germany. Sept 15 - 18, 2009.
| Links | | |
Title |
From Photos to Memories: A User-Centric Authoring Tool for Telling Stories with your Photos
| Author(s) |
A.M.M. Kuijk, R. Laiola Guimarães, P.S. Cesar, D.C.A. Bulterman
| Abstract |
Over the last few years we have witnessed a rapid transformation in how people use digital media. Thanks to innovative interfaces, non-professional users are becoming active nodes in the content production chain by uploading, commenting on, and sharing their media. As a result, people now use media for communication purposes, for sharing experiences, and for staying in touch. This paper introduces a user-centric authoring tool that enables common users to transform a static photo into a temporal presentation, or story, which can be shared with close friends and relatives. The most relevant characteristics of our approach are the use of a format-independent data model that can be easily imported and exported, the possibility of creating different storylines intended for different people, and the support of interactivity. As part of the activities carried out in the TA2 project, the system presented in this paper is a tool for end-users to nurture relationships.
| In |
Proceedings of International User Centric Media Conference (UCMedia 2009), Venice, Italy. December 9 - 11, 2009.
| Links | | |
Title |
SMIL State: an architecture and implementation for adaptive time-based web applications
| Author(s) |
A.J. Jansen, D.C.A. Bulterman
| Abstract |
| In |
Multimedia Tools and Applications, Vol. 43(3), pp. 203 - 224. Springer, 2009.
| Links | | |
Title |
Leveraging the User Impact: An Architecture for Secondary Screens Usage in an Interactive Television Environment
| Author(s) |
P.S. Cesar, A.J. Jansen, D.C.A. Bulterman
| Abstract |
This paper reports on an architecture, and a working implementation, for using secondary screens in the interactive television environment. While there are specific genres and programs that immerse the viewer in the television experience, there are situations in which people also perform a secondary task whilst watching. In the living room, people surf the web, use email, and chat using one or many secondary screens. Instead of focusing on activities unrelated to television watching, the architecture presented in this paper aims at related activities, i.e., to leverage the user's impact on the content being watched. After a comprehensive literature review and an analysis of working systems, the requirements for the secondary screen architecture are identified and modeled in the form of a taxonomy. The taxonomy is divided into three high-level categories: control, enrich, and share content. By control we refer to the decision of what to consume and where to render it. In addition, the viewer can use the secondary screen for enriching media content and for sharing the enriched material. The architecture is validated based on the taxonomy and by an inspection of the available services. The final intention of our work is to leverage the viewers' control over the consumed content in our multi-person, multi-device living rooms.
| In |
ACM Multimedia Systems Journal, Vol. 15(3), pp. 127 - 142. Springer, 2009
| Links | | |
|
|
Title |
Human-centered television: directions in interactive television research
| Author(s) |
P.S. Cesar, D.C.A. Bulterman and L.F.G. Soares
| Abstract |
The research area of interactive digital TV is in the midst of a significant revival. Unlike the first generation
of digital TV, which focused on producer concerns that effectively limited (re)distribution, the current generation of research
is closely linked to the role of the user in selecting, producing, and distributing content. The research field of interactive
digital television is being transformed into a study of human-centered television. Our guest editorial reviews relevant
aspects of this transformation in the three main stages of the content lifecycle: content production, content delivery,
and content consumption. While past research on content production tools focused on full-fledged authoring tools for professional
editors, current research studies lightweight, often informal end-user authoring systems. In terms of content delivery,
user-oriented infrastructures such as peer-to-peer are being seen as alternatives to more traditional broadcast solutions.
Moreover, end-user interaction is no longer limited to content selection, but now facilitates nonlinear participatory
television productions. Finally, user-to-user communication technologies have allowed television to become a central component
of an interconnected social experience. The background context given in this article provides a framework for appreciating
the significance of four detailed contributions that highlight important directions in transforming interactive television
research.
| In |
ACM Trans. Multimed. Comput. Comm. Appl., 4(4) October 2008, pp. 1-7.
| Links | | |
Title |
A Presentation Layer Mechanism for Multimedia Playback Mobility in Service Oriented Architectures
| Author(s) |
I.Vaishnavi, P. Cesar, D.C.A. Bulterman
| Abstract |
This paper presents a new approach for media presentation continuity in playback mode. We use the term presentation
continuity over session transfer since our solution is at the presentation layer. Previous research on this topic has focused
on transferring a particular stream or set of related streams at the session layer. We argue that in the realm of service
oriented architectures, such as telecom operator networks, this approach is not the best solution for media playback. Our
mechanism presents an alternative to the traditional approach, recognising the fact that a user is connected to a media
presentation, which may be composed of multiple sessions. The advantages of our system are: i) lower network control plane
overhead, thus reducing chances of presentation consistency loss; ii) lower network data overhead due to lesser need for
transcoding; iii) delegating presentation consistency issues, such as inter-media synchronisation, to the media player; and iv)
dynamically adapting the presentation to the new target devices without transcoding. At the end of this paper we present
experimental results, showing a comparison with previous approaches.
| In |
Proceedings of the International ACM Conference on Mobile and Ubiquitous Multimedia (MUM), Umeå, Sweden, December 3-5, 2008.
| Links | | |
Title |
Enhancing Social Sharing of Videos: Fragment, Annotate, Enrich, and Share
| Author(s) |
P. Cesar, D.C.A. Bulterman, D. Geerts, A.J. Jansen, H. Knoche, and W. Seager
| Abstract |
Media consumption is an inherently social activity, serving to communicate ideas and emotions across both
small- and large-scale communities. The migration of the media experience to personal computers retains social viewing,
but typically only via a non-social, strictly personal interface. This paper presents an architecture and implementation
for media content selection, content (re)organization, and content sharing within a user community that is heterogeneous
in terms of both participants and devices. In addition, our application allows the user to enrich the content as a differentiated
personalization activity targeted to his/her peer-group. We describe the goals, architecture and implementation of our system
in this paper. In order to validate our results, we also present results from two user studies involving disjoint sets of
test participants.
| In |
Proceedings of ACM Multimedia (ACM MM), Vancouver (BC), Canada, October 27 – November 1, 2008.
| Links | | |
Title |
The Implications of Program Genres for the Design of Social Television Systems
| Author(s) |
D. Geerts, P. Cesar, and D.C.A. Bulterman
| Abstract |
In this paper, we look at how television genres can play a role in the use of social interactive television
systems (social iTV). Based on a user study of a system for sending and receiving enriched video fragments to and from a
range of devices, we discuss which genres are preferred for talking while watching, talking about after watching and for
sending to users with different devices. The results show that news, soap, quiz and sport are genres during which our participants
talk most while watching and are thus suitable for synchronous social iTV systems. For asynchronous social iTV systems, film,
news, documentaries and music programs are potentially popular genres. The plot structure of certain genres influences whether
people are inclined to talk while watching or not, and to which device they would send a video fragment. We also discuss
how this impacts the design and evaluation of social iTV systems.
| In |
Proceedings of the International Conference on Designing Interactive User Experiences for TV and Video (UXTV 2008), Mountain View (CA), USA, October 22 - 24, 2008.
| Links | | |
Title |
Enabling adaptive time-based web applications with SMIL state
| Author(s) |
J. Jansen, and D.C.A. Bulterman
| Abstract |
In this paper we examine adaptive time-based web applications (or presentations). These are interactive presentations
where time dictates the major structure, and that require interactivity and other dynamic adaptation. We investigate the
current technologies available to create such presentations and their shortcomings, and suggest a mechanism for addressing
these shortcomings. This mechanism, SMIL State, can be used to add user-defined state to declarative time-based languages
such as SMIL or SVG animation, thereby enabling the author to create control flows that are difficult to realize within
the temporal containment model of the host languages. In addition, SMIL State can be used as a bridging mechanism between
languages, enabling easy integration of external components into the web application.
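The idea behind SMIL State can be sketched in markup. In the simplified fragment below, a state variable declared in the document head is updated when the user activates a clip, and an expr test then selects which branch plays. The element and attribute names follow the SMIL 3.0 State module, but the media file names and the data model are invented for illustration.

```xml
<smil xmlns="http://www.w3.org/ns/SMIL" version="3.0" baseProfile="Language">
  <head>
    <!-- user-defined state: a small XML data model -->
    <state>
      <data xmlns="">
        <answered>false</answered>
      </data>
    </state>
  </head>
  <body>
    <seq>
      <par>
        <video id="question" src="question.mp4"/>
        <!-- clicking the question clip records the answer in the data model -->
        <setvalue ref="answered" value="'true'" begin="question.activateEvent"/>
      </par>
      <!-- only the branch whose expr test matches the stored state is played -->
      <video src="answered.mp4" expr="answered='true'"/>
      <video src="skipped.mp4" expr="answered!='true'"/>
    </seq>
  </body>
</smil>
```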
| In |
Proceedings of the ACM Symposium on Document Engineering (ACM DocEng 2008), Sao Paulo, Brazil, September 16-19, 2008, pp. 18-27.
| Links | | |
Title |
Multimedia Adaptation in Ubiquitous Environments: Benefits of Structured Multimedia Documents
| Author(s) |
P. Cesar, I. Vaishnavi, R. Kernchen, S. Meissner, C. Hesselman, M. Boussard, A. Spedalieri, B. Gao, and D.C.A. Bulterman
| Abstract |
This paper demonstrates the advantages of using structured multimedia documents for session management and
media distribution in ubiquitous environments. We show how document manipulations can be used to perform powerful operations
such as content to context adaptation and presentation continuity. When consuming media in ubiquitous environments, where
the set of devices surrounding a user may change, dynamic media adaptation and session transfer become primary requirements.
This paper presents a working system, based on a representative scenario, in which multimedia content is distributed and
adapted to a movable user to best suit his/her contextual situation. The implemented scenario includes the following scenes:
content selection using a personal mobile phone, content distribution to the most suitable device according to the user's
context, and presentation continuity when the user moves to another location. This paper introduces the underlying document
manipulations that turn the scenario into a working system.
| In |
Proceedings of the ACM Symposium on Document Engineering (ACM DocEng 2008), Sao Paulo, Brazil, September 16-19, 2008, pp. 275-284.
| Links | | |
Title |
Multimedia Content Transformation: Fragmentation, Enrichment, and Adaptation
| Author(s) |
P. Cesar, D.C.A. Bulterman, J. Jansen, M.G.C. Pimentel, and S. Barbosa
| Abstract |
This working session will be an interactive discussion about multimedia content transformation. The basic
assumption is that content transformation activities should be provided as non-destructive operations. The final goal of
the panel is to gather researchers within the community interested in manipulating multimedia content for providing rich
user experiences. The organizers of the panel will moderate and shape the discussion; nevertheless, position papers from
the participants are expected.
| In |
Proceedings of the ACM Symposium on Document Engineering (ACM DocEng 2008), Sao Paulo, Brazil, September 16-19, 2008, pp. 1-2.
| Links | | |
Title |
Usages of the Secondary Screen in an Interactive Television Environment: control, enrich, share, and transfer television content
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
This paper investigates a number of techniques and services around a unifying concept: the secondary screen.
Far too often television consumption is considered a passive activity. While there are specific genres and programs that
immerse the viewer into the media experience, there are other times in which, whilst watching television, people talk, scan
the program guide, record another program or recommend a program by phone. This paper identifies four major usages of the
secondary screen in an interactive digital television environment: control, enrich, share, and transfer television content.
By control we refer to the decoupling of the television stream, optional enhanced content, and television controls. Moreover,
the user can use the secondary screen to enrich or author media content by, for example, including personalized media overlays
such as an audio commentary that can be shared with his peer group. Finally, the secondary screen can be used to bring along
the television content. This paper reviews previous work on the secondary screen, identifies the key usages, and based on
a working system provides the experiences of developing relevant scenarios as well as an initial evaluation of them.
| In |
Proceedings of the European Interactive TV Conference (EuroITV2008), Salzburg, Austria, July 3-4, 2008, pp. 168-177.
| Links | | |
Title |
Authoring from the Couch: Research Directions and Possibilities
| Author(s) |
R. L. Guimarães, P. Cesar, and D.C.A. Bulterman
| Abstract |
Although most authoring systems for digital TV assume the author is seated in front of a computer
on the broadcaster side, current research is interested in the new role of the viewer in producing and distributing content.
The goal of this paper is to identify a number of research directions around the authoring from the couch paradigm, an entertainment-oriented
approach in which the authoring task is performed incidentally.
| In |
Adjunct Proceedings of the European Interactive TV Conference (EuroITV2008), Salzburg, Austria, July 3-4, 2008, pp. 37-38.
| Links | | |
Title |
A Mechanism for Presentation-Layer Media Continuity in Media Playback Mode
| Author(s) |
I. Vaishnavi, P. Cesar, A.J. Jansen, B. Gao, and D.C.A. Bulterman
| Abstract |
This demo presents a new approach for media presentation continuity in playback mode. We use the term presentation
continuity over session transfer since our solution is at the presentation layer. Previous research on this topic has focused
on transferring a particular stream or set of related streams at the session layer. Our approach presents an alternative,
recognising the fact that a user is connected to a media presentation, which may be composed of multiple sessions. The
advantages of our approach are: i) lower network control plane overhead, thus reducing chances of semantic presentation loss;
ii) lower network data overhead due to lesser need for transcoding; iii) delegating presentation semantic issues, such as
inter-media synchronisation, to the player; and iv) dynamically adapting the presentation to the new target devices without transcoding.
| In |
Proceedings of the International workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV 2008), Braunschweig, Germany, May 28-30, 2008.
| Links | | |
Title |
Multimodal Adaptation and Enriched Interaction of Multimedia Content for Mobile Users
| Author(s) |
P. Cesar, D.C.A. Bulterman, R. Kernchen, C. Hesselman, M. Boussard, A. Spedalieri, I. Vaishnavi, and B. Gao
| Abstract |
This paper introduces an architecture, together with an implemented scenario, capable of dynamically adapting
the way mobile users consume and interact with multimedia content. The architecture is based on a representative scenario
identified by the European project SPICE, in which multimedia content is distributed to a user independently of his/her
contextual situation. The implemented scenario includes the following scenes: content selection using a personal mobile
phone, content distribution to the most suitable device according to the user's context, and presentation continuity when
the user moves to another location. This paper reports on the defined architecture and the current status of its implementation.
It shows the initial results in the form of screenshots.
| In |
Proceedings of the Taiwanese-French Conference on Information Technology (TFIT2008), Taipei, Taiwan, March 3-5, 2008, pp. 230-239.
| Links | | |
Title |
Temporal Manipulation and Sharing of Presentation State in Browser-Embedded Multimedia Documents
| Author(s) |
D.C.A. Bulterman, A.J. Jansen, and P. Cesar
| Abstract |
This paper describes an approach to defining,
manipulating and sharing state variables between a
web browser and a multimedia presentation engine in
functionally compound XML-based documents. This
framework, which we call smilState:
the SMIL XML State Architecture, is a fully declarative approach to
sharing state without the need for extensive scripting.
The state variables in smilState are defined using standard
Web technologies such as XPath, XForms and
XSLT, which have been integrated with a SMIL-specific
temporal component. The smilState architecture
enables interactive, user-centered applications to be
created that allow temporal semantics that extend
beyond the facilities currently available for integrating
a conventional
(X)HTML browser interface with SMIL,
SVG or HTML+Time content. The primary benefit of
this work is that it adds a controlled
temporal dimension
to existing state architectures that is free of document
scheduling side effects within the multimedia
content.
This paper provides use cases for the temporal
sharing of state across documents, describes the smilState architecture
in detail, and describes an implementation of smilState within the open-source Ambulant SMIL player.
| In |
Proceedings of the Taiwanese-French Conference on Information Technology (TFIT2008), Taipei, Taiwan, March 3-5, 2008, pp. 256-266.
| Links | | |
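A minimal sketch of the declarative state sharing described in the abstract above, using element and attribute names from the SMIL 3.0 State module (into which smilState evolved); the variable and media file names are hypothetical:

```xml
<smil xmlns="http://www.w3.org/ns/SMIL">
  <head>
    <!-- shared state variables, declared in the document head -->
    <state>
      <data>
        <lang>en</lang>
      </data>
    </state>
  </head>
  <body>
    <seq>
      <!-- declaratively update a variable 5 seconds into the presentation -->
      <setvalue begin="5s" ref="lang" value="'nl'"/>
      <!-- play this item only when the XPath expression holds -->
      <video src="intro-nl.mp4" expr="lang='nl'"/>
    </seq>
  </body>
</smil>
```

Note that no scripting is involved: both the update (setvalue) and the conditional playback (expr) are part of the document's declarative timing structure.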
Title |
A Framework for Video Interaction with Web Browsers
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
In order to make multimedia a first-class citizen on the Web, there is a need for major efforts across the
community. European projects such as Passepartout (ITEA) and SPICE (IST IP) show that there is a need for a standardized
mechanism to provide rich interaction for continuous media content. CWI is helping to build a framework that adds a temporal
dimension to existing a-temporal Web browsers.
| In |
ERCIM News, Number 72 (The Future of the Web), January 2008, pp. 25-26.
| Links | | |
Title |
Multimedia Systems, Languages, and Infrastructures for Interactive Television
| Author(s) |
P. Cesar, D.C.A. Bulterman, K. Chorianopoulos, and J.F. Jensen (eds.)
| Abstract |
For this special issue on Multimedia Systems, Languages, and Infrastructures for Interactive Television the
four best papers on multimedia systems and infrastructures were invited to extend their conference contribution in the form
of a journal paper. These papers cover a wide range of current challenges for multimedia systems: content recommendation,
participatory multimedia genres, evaluation of mobile media usage, and digital media narratives.
| In |
Springer/ACM Multimedia Systems Journal (MSJ), Volume: 14, Number 2, July 2008.
| Links | | |
|
|
Title |
Media presentation Synchronisation in Non Monolithic Rendering Architectures
| Author(s) |
I. Vaishnavi, D.C.A. Bulterman, P. Cesar, and B. Gao
| Abstract |
Coming soon...
| In |
Proceedings of the IEEE Symposium on Multimedia,
Taichung, Taiwan, R.O.C., December 10-12, 2007.
| Links | | |
Title |
Enabling Pro-Active User-Centered Recommender Systems: An Initial Evaluation
| Author(s) |
D.C.A. Bulterman, P. Cesar, A.J. Jansen, H. Knoche, and W. Seager
| Abstract |
Coming soon...
| In |
Proceedings of the IEEE Symposium on Multimedia,
Taichung, Taiwan, R.O.C., December 10-12, 2007.
| Links | | |
Title |
Social Sharing of Television Content: An Architecture
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
Coming soon...
| In |
Proceedings of the IEEE Symposium on Multimedia,
Taichung, Taiwan, R.O.C., December 10-12, 2007.
| Links | | |
Title |
Open Standard and Open Sourced: SMIL for Interactivity
| Author(s) |
D. Zucker and D.C.A. Bulterman
| Abstract |
Coming soon...
| In |
ACM <interactions>, November-December 2007.
| Links | | |
Title |
NeighbourCast: A Synchronisation Algorithm for Ad hoc Networks
| Author(s) |
I. Vaishnavi, D.C.A. Bulterman and P. Cesar
| Abstract |
Coming soon...
| In |
IASTED International Conference on Parallel and Distributed Computing and Systems (PDCS 2007), November 19-21, 2007, Cambridge, MA, USA. | Links | | |
Title |
An Efficient, Streamable Text Format for Multimedia Captions and Subtitles
| Author(s) |
D.C.A. Bulterman, A.J. Jansen, P. Cesar, and S. Cruz-Lara
| Abstract |
Coming soon...
| In |
Proceedings of the ACM Symposium on Document Engineering, Winnipeg, Canada, August 28-31, 2007, pp. 101-110. | Links | | |
Title |
Synchronized Multimedia Integration Language (SMIL 3.0)
| Author(s) |
D.C.A. Bulterman, et al.
| Abstract |
SMIL 3.0 is the next version of the W3C Synchronized Multimedia
Integration Language. This version supports a wide range of fundamental
enhancements in
the areas of metadata, state, animation, timing, layout and media
handling. It also includes a new internal timed text datatype.
The first public draft version of the language allows interested
users from within and outside W3C to comment on this evolving language.
| In |
W3C Last Call Public Working Draft 15 July 2007
| Links | | |
Title |
An Architecture for Non-Intrusive User Interfaces for Interactive Digital Television
| Author(s) |
D.C.A. Bulterman, P. Cesar, Z. Obrenovic, J. Ducret, and S. Cruz-Lara
| Abstract |
Coming soon...
| In |
Proceedings of the 5th European Interactive TV Conference, Amsterdam, The Netherlands, May 24-25, 2007, pp. 11-20. | Links | | |
Title |
Evaluating Viewer-Side Enrichment of Television Content
| Author(s) |
P. Cesar, D.C.A. Bulterman, A.J. Jansen, D. Boullier, S. Kocergin, and A. Visonneau
| Abstract |
Coming soon...
| In |
Workshop "Supporting non-professional users in the new media landscape" in conjunction with the ACM Computer-Human Interaction Conference, San Jose (CA), USA, 28 April - 3 May, 2007. | Links | | |
Title |
User-centered control within multimedia presentations
| Author(s) |
D.C.A. Bulterman
| Abstract |
The focus of much of the research on providing user-centered control of
multimedia has been on the definition of models and (meta-data)
descriptions that assist in locating or recommending media objects.
While this can provide a more efficient means of selecting content, it
provides little extra control for users once that content is rendered.
In this article, we consider various means for supporting user-centered
control of media within a collection of objects that are structured
into a multimedia presentation. We begin with an examination of the
constraints of user-centered control based on the characteristics of
multimedia applications and the media processing pipeline. We then
define four classes of control that can enable a more user-centric
manipulation within media content. Each of these control classes is
illustrated in terms of a common news viewing system. We continue with
reflections on the impact of these control classes on the development
of multimedia languages, rendering infrastructures and authoring
systems. We conclude with a discussion of our plans for infrastructure
support for user-centered multimedia control. | In |
ACM/Springer Multimedia Systems Journal, Volume 12, Numbers 4-5, March 2007, pp. 423-438. | Links | | |
|
|
Title |
Present and Future of Software Graphics Architectures for IDTV
| Author(s) |
P. Cesar, K. Baker, D.C.A. Bulterman, L.F.G. Soares, S. Cruz-Lara and A. Kaptein
| Abstract |
Coming soon...
| In |
Interactive Digital Television: technologies and applications, eds. G. Lekakos and K. Chorianopoulos.
| Publisher |
IDEA Group, 2006.
| Links | | |
Title |
An Architecture for End-User TV Content Enrichment
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
Coming soon...
| In |
Journal of Virtual Reality and Broadcasting (JVRB), Volume: 3, Number: 9, December 2006.
| Links | | |
Title |
Synchronized Multimedia Integration Language (SMIL 3.0)
| Author(s) |
D.C.A. Bulterman, et al.
| Abstract |
SMIL 3.0 is the next version of the W3C Synchronized Multimedia
Integration Language. This version supports a wide range of fundamental
enhancements in
the areas of metadata, state, animation, timing, layout and media
handling. It also includes a new internal timed text datatype.
The first public draft version of the language allows interested
users from within and outside W3C to comment on this evolving language.
| In |
W3C Working Draft 20 December 2006
| Links | | |
Title |
An Architecture for Viewer-Side Enrichment of TV Content
| Author(s) |
D.C.A. Bulterman, P. Cesar and A.J. Jansen
| Abstract |
Coming soon...
| In |
Proceedings of the ACM Multimedia Conference 2006, Santa Barbara, California, USA, October 23-27, 2006, pp. 651-654
| Links | | |
Title |
Benefits of Structured Multimedia Documents in iDTV: The End-User Enrichment System
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
Coming soon...
| In |
Proceedings of the ACM Symposium on Document Engineering, Amsterdam, The Netherlands, October 10-13, 2006, pp. 176-178
| Links | | |
Title |
The Ambulant Annotator: Empowering Viewer-Side Enrichment of Multimedia Content
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
Coming soon...
| In |
Proceedings of the ACM Symposium on Document Engineering, Amsterdam, The Netherlands, October 10-13, 2006, pp. 186-187.
| Links | | |
Title |
An Architecture for End-User TV Content Enrichment
| Author(s) |
P. Cesar, D.C.A. Bulterman, and A.J. Jansen
| Abstract |
Coming soon...
| In |
Proceedings of the 4th European Interactive TV Conference, Athens, Greece, May 25-26, 2006, pp. 39-47.
| Links | | |
Title |
A Rationale for Creating an Interactive-TV Profile for SMIL
| Author(s) |
D.C.A. Bulterman
| Abstract |
Coming soon...
| In |
Proceedings of the 4th European Interactive TV Conference, Athens, Greece, May 25-26, 2006, pp. 593-597.
| Links | | |
Title |
Experiences with User-Centered Multimedia Systems Deployment
| Author(s) |
D.C.A. Bulterman
| Abstract |
Coming soon...
| In |
TFIT 2006, Nancy, France, March, 2006. Also available as Springer LNCS in early 2007.
| Links | | |
|
|
Title |
SMIL 2.1 Layout Module Functional Specification
| Author(s) |
Dick C.A. Bulterman |
Abstract |
SMIL 2.1 Layout provides two classes of changes to SMIL 2.0 layout.
First, the SMIL 2.0 HierarchicalLayout module has been replaced by the
SubRegionLayout, AlignmentLayout, and OverrideLayout modules; this
allows differentiated features to be implemented in profiles without
necessarily requiring support for all of the functionality in the
HierarchicalLayout module. Second, several new elements and attributes
have been added to SMIL 2.1 Layout to provide expression for common
functions in an authoring-efficient manner; these functions include the
short-cut notations for media positioning now available in the
AlignmentLayout module and support for background image tiling in the
BackgroundTilingLayout. In addition, new support for simple audio
positioning has been added that allows audio placement to be supported
by those players that allow audio 2-D imaging. The new OverrideLayout
module groups existing support for per-media-object overrides of
BasicLayout attributes. | In |
SMIL 2.1 Draft Recommendation Specification
| Links | | |
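The alignment short-cuts and background tiling mentioned in the abstract above might look like the following sketch; element and attribute names follow the SMIL 2.1 Layout modules, while the region sizes and file names are illustrative:

```xml
<smil xmlns="http://www.w3.org/2005/SMIL21/Language">
  <head>
    <layout>
      <!-- reusable registration point: anchor media at the region centre -->
      <regPoint id="middle" top="50%" left="50%" regAlign="center"/>
      <!-- background image tiling on a region -->
      <region id="main" width="320" height="240"
              backgroundImage="tile.png" backgroundRepeat="repeat"/>
    </layout>
  </head>
  <body>
    <!-- the short-cut notation replaces explicit per-object offsets -->
    <img src="logo.png" region="main" regPoint="middle" dur="5s"/>
  </body>
</smil>
```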
Title |
SMIL 2.1 Mobile Profile
| Author(s) |
Dick C.A. Bulterman, Guido Grassel and Daniel F. Zucker |
Abstract |
The SMIL 2.1 Mobile Profile is a collection of SMIL 2.1 modules that
provide support for the SMIL 2.1 language within the context of a
mobile device. Such a device is expected to have sufficient display,
memory, and processor capabilities to render basic interactive
multimedia presentations in SMIL. The SMIL 2.1 Mobile Profile is a
super-set of the SMIL 2.1 Basic Profile and a sub-set of the SMIL 2.1
Extended Mobile Profile. The SMIL 2.1 Mobile Profile is largely
compatible with the SMIL profile that the Third Generation Partnership
Project (3GPP) has defined for the Multimedia Messaging Service (MMS) and the
enhanced packet-switched streaming (e-PSS) mobile services in its own
specification ([3GPP26.246R6]).
The functionality of the SMIL 2.1 Mobile Profile may be further
extended by using the SMIL 2.1 Scalability Framework. When extending
the functionality of this profile, it is highly recommended to include
functionality from the SMIL 2.1 Extended Mobile Profile first. | In |
SMIL 2.1 Draft Recommendation Specification
| Links | | |
Title |
SMIL 2.1 Media Module Functional Specification
| Author(s) |
Dick C.A. Bulterman |
Abstract |
The Media Object Modules for SMIL 2.1 introduce a facility to
predefine sets of common param element values in the document head
section and a facility to refer to these definitions from media object
references within the body section. This change is made to reduce the
size of a SMIL document containing many similar parameter definitions
and to ease the authoring and maintenance of SMIL 2.1 documents that
use the elements and attributes in the MediaParam Module. | In |
SMIL 2.1 Draft Recommendation Specification
| Links | | |
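The paramGroup facility described above can be sketched as follows; the parameter names and values are hypothetical, while the paramGroup element and attribute follow the SMIL 2.1 MediaParam module:

```xml
<smil xmlns="http://www.w3.org/2005/SMIL21/Language">
  <head>
    <!-- define a common parameter set once, in the document head -->
    <paramGroup id="streamDefaults">
      <param name="bufferTime" value="3s"/>
      <param name="errorRecovery" value="on"/>
    </paramGroup>
  </head>
  <body>
    <seq>
      <!-- each media object reference reuses the shared definitions -->
      <video src="clip1.mp4" paramGroup="streamDefaults"/>
      <video src="clip2.mp4" paramGroup="streamDefaults"/>
    </seq>
  </body>
</smil>
```

Without the paramGroup reference, each video element would need to repeat the full set of param children, which is the document-size and maintenance cost the module is designed to remove.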
Title |
Structured Multimedia Authoring
| Author(s) |
Dick C.A. Bulterman and Lynda Hardman |
Abstract |
Authoring context sensitive, interactive multimedia presentations is
much more complex than authoring either purely audiovisual applications
or text. Interactions among media objects need to be described as a set
of spatio-temporal relationships that account for synchronous and
asynchronous interactions, as well as on-demand linking behavior. This
paper considers the issues that need to be addressed by an authoring
environment. We begin with a partitioning of concerns based on seven
classes of authoring problems. We then describe a selection of
multimedia authoring environments within four different authoring
paradigms: structured, timeline, graph and scripting. We next provide
observations and insights into the authoring process and argue that the
structured paradigm provides the most useful framework for presentation
authoring. We close with an example application of the structured
multimedia authoring paradigm in the context of our own structure-based
system GRiNS. | In |
ACM Trans. on Multimedia Computing, Communications and Applications, 1(1) 2005.
| Links | | |
|
|
Title |
Is It Time for a Moratorium on Metadata?
| Author(s) |
Dick C.A. Bulterman
| Abstract |
This article provides a review of the state of metadata architecture
and application. It starts with a short story that serves as a
background to the article, the goal being to discover whether metadata
is, in fact, the greatest thing since sliced bread. | In |
IEEE Multimedia, 11(4) , Pp. 10-17.
| Links | | |
Title |
Animating Peer-Level Annotations Within Web-Based Multimedia.
| Author(s) |
Dick C.A. Bulterman |
Abstract |
The TabletPC is an example of a new generation of user interface device
where pen-based manipulation of information is integrated directly into
a user’s workflow. Using the TabletPC's existing pen and electronic ink
systems, a wide range of static documents can be created or annotated.
While the facilities of the TabletPC are useful for creating virtual
images containing ink that can be overlaid on text or picture context,
there is little support for creating annotations of time-based content
such as video.
This article describes an annotation authoring model and interface for
creating peer-level annotations to video media. Peer-level annotations
allow existing content to be enriched with additional content
annotations that can be co-presented with the original media. A system
for creating a SMIL language document containing SVG-based annotations
that exist alongside the visual content is described, along with a
discussion of the needs and limitations of supporting video markup in a
web context. An example using peer-level annotations in a medical
context is provided. | In |
Eurographics Multimedia 2004, Nanjing, China, 27-28 October 2004.
| Links | | |
Title |
Ambulant: A Fast, Multi-Platform Open Source SMIL Player
| Author(s) |
D.C.A. Bulterman, A.J. Jansen, K. Kleanthous, K. Blom, and D. Benden |
Abstract |
This paper provides an overview of the Ambulant Open SMIL player.
Unlike other SMIL implementations, the Ambulant Player is a
reconfigurable SMIL engine that can be customized for use as an
experimental media player core. The Ambulant Player is a reference SMIL
engine that can be integrated in a wide variety of media player
projects. This paper starts with an overview of our motivations for
creating a new SMIL engine, then discusses the architecture of the
Ambulant Core (including the scalability and custom integration
features of the player). We close with a discussion of our
implementation experiences with Ambulant instances for Windows, Mac and
Linux versions for desktop and PDA devices. | In |
Proc. ACM Multimedia 2004, New York, Oct 2004.
| Links | | |
Title |
Supporting the Production and Playback of Complex Multimedia Documents.
| Author(s) |
Dick C.A. Bulterman |
Abstract |
This paper discusses the work-flow control features provided by the
GRiNS editor for creating SMIL presentations. We start with an overview
of the generic presentation creation process workflow and then
introduce the general features supported by GRiNS. We then follow with
a detailed set of examples of how these features can be used to create
a simple slideshow of the type that can be played in mobile devices. We
then provide an analysis of the use of GRiNS’s feature set and contrast
these with features found in other SMIL editors. We close with a set of
directions for future work in supporting a presentation workflow. | In |
Proc. Workshop on Web Engineering 2004, Santa Cruz, CA, August 2004.
| Links | | |
Title |
A Linking and Interaction Evaluation Test Set for SMIL.
| Author(s) |
Dick C.A. Bulterman |
Abstract |
The SMIL 2.0 Language profile supports several mechanisms for
controlling interactivity in a SMIL 2.0 presentation. Unfortunately,
the SMIL standard testset does not verify complex interactions of
linking/interaction behavior of SMIL players and applications. This
paper describes a linking and interaction test suite that was developed
as part of the Ambulant SMIL Player project. We begin with a short
review of SMIL’s linking and interaction facilities, then describe
aspects of the test suite that have proven to highlight faults in
current SMIL players. | In |
ACM Hypertext 2004, Santa Cruz, CA, August 2004.
| Links | | |
|
|
Title |
Using SMIL to Encode Interactive, Peer-Level Multimedia Annotations.
| Author(s) |
Dick C.A. Bulterman
| Abstract |
This paper discusses applying facilities in SMIL 2.0 to the
problem of annotating multimedia presentations. Rather than
viewing annotations as collections of (abstract) meta-information
for use in indexing, retrieval or semantic processing, we
view annotations as a set of peer-level content with temporal
and spatial relationships that are important in presenting a
coherent story to a user. The composite nature of the collection
of media is essential to the nature of peer-level annotations:
you would typically annotate a single media item much differently
than that same media item in the context of a total presentation.
This paper focuses on the document engineering aspects of the
annotation system. We do not consider any particular user
interface for creating the annotations or any back-end storage
architecture to save/search the annotations. Instead, we focus
on how annotations can be represented within a common document
architecture and we consider means of providing document
facilities that meet the requirements of our user model.
We present our work in the context of a medical patient dossier
example.
| In |
Proc. of ACM DocumentEngineering 2003, Grenoble, France, November 2003, pp. 32-41.
| Links | | |
Title |
The Ambulant Annotator: Medical Multimedia Annotations on TabletPC’s.
| Author(s) |
Dick C.A. Bulterman
| Abstract |
A new generation of tablet computers has stimulated end-user interest in annotating
documents by adding pen-based commentary and spoken audio labels to otherwise static
documents. The typical application scenario for most annotation systems is to convert existing
content to a (virtual) image, capture annotation mark-up, and then save the annotations in a
database. While this is often acceptable for text documents, most multimedia documents are time-sensitive
and can be dynamic: content can change often, depending on the types of audio/video data
used. Our work looks at expanding the possibilities of annotation by integrating annotations onto
time-based media. This paper discusses the AMBULANT Annotation Architecture. We describe
requirements for multimedia annotations, the multimedia annotation architecture being developed
at CWI, and initial experience from providing various classes of temporal and spatial annotation
within the domain of medical documents.
| In |
Proc. E-Learn 2003, Phoenix, AZ, November 2003.
| Links | | |
Title |
SMIL Authoring Systems: The State of the Art.
| Author(s) |
Dick C.A. Bulterman
| Abstract |
This document provides background information to create slideshows
for the RealOne, IE-6’s HTML+TIME and 3GPP/Mobile players using the
GRiNS Pro Editor for SMIL 2.0 software (hereafter called simply: GRiNS/
Pro). We discuss how to create simple slideshows and how to integrate
transitions, animations and links for several SMIL players, as well as
creating links to the RealOne media browser and related info panes. You will
also learn how to publish your presentation for use with the RealOne player
and a streaming server and various other SMIL 2.0 media players.
| In |
Proc of SMIL Europe 2003, Paris, February 2003, 47-53.
| Links | | |
|
|
Title |
SMIL 2.0: Examples and Comparisons
| Author(s) |
Dick C.A. Bulterman
| Abstract |
The article is the second part of a two-part series on SMIL 2.0, the
newest version of the World Wide Web Consortium's Synchronized
Multimedia Integration Language. Part 1 looked in detail at various
aspects of the SMIL specification and the underlying SMIL timing model.
This part looks at simple and complex examples of SMIL 2.0's use and
compares SMIL with other multimedia formats. We focus on SMIL's textual
structure in its various implementation profiles. | In |
IEEE Multimedia, 9(1), 2002, pp. 68-79.
| Links | | |
|
|
Title |
SMIL 2.0: Overview, Concepts, and Structure
| Author(s) |
Dick C.A. Bulterman
| Abstract |
The World Wide Web Consortium's Synchronized Multimedia Integration
Language format for encoding multimedia presentations for delivery over
the Web is a little-known but widely used standard. First released in
mid-1998, SMIL has been installed on approximately 200,000,000 desktops
worldwide, primarily because of its adoption in RealPlayer G2,
Quicktime 4.1, and Internet Explorer 5.5. In August 2001, the W3C
released a significant update with SMIL 2.0. In a two-part report on
SMIL 2.0, the author will discuss the basics of SMIL 2.0 and compare
its features with other formats. This article will focus on SMIL's
basic concepts and structure. Part two, in the January-March 2002
issue, will look at detailed examples of SMIL 2.0, covering both simple
and complex examples. It'll also contrast the facilities in SMIL 2.0
and MPEG-4. | In |
IEEE Multimedia, 8(4), 2001, pp. 82-88.
| Links | | |
Title |
Synchronized Multimedia Integration Language (SMIL) 2.0
| Author(s) |
J. Ayers, Aaron Cohen, Dick C.A. Bulterman, et al.
| Abstract |
This document specifies the second version of the Synchronized
Multimedia Integration Language (SMIL, pronounced "smile"). SMIL 2.0
has the following two design goals:
* Define an XML-based language that allows authors to write
interactive multimedia presentations. Using SMIL 2.0, an author can
describe the temporal behavior of a multimedia presentation, associate
hyperlinks with media objects and describe the layout of the
presentation on a screen.
* Allow reusing of SMIL syntax and semantics in other XML-based
languages, in particular those who need to represent timing and
synchronization. For example, SMIL 2.0 components are used for
integrating timing into XHTML [XHTML10] and into SVG [SVG]. | In |
World Wide Web Consortium TR/REC-smil20-20010807, Aug. 2001.
| Links | | |
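The first design goal can be illustrated with a minimal, hypothetical SMIL 2.0 document: declarative timing via par/seq, screen layout via regions, and a hyperlink attached to a media object; all file names are placeholders:

```xml
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <region id="screen" width="320" height="240"/>
    </layout>
  </head>
  <body>
    <seq>
      <!-- video and narration play in parallel for ten seconds -->
      <par dur="10s">
        <video src="intro.mpg" region="screen"/>
        <audio src="narration.mp3"/>
      </par>
      <!-- a hyperlink associated with a media object -->
      <a href="next.smil">
        <img src="menu.png" region="screen" dur="5s"/>
      </a>
    </seq>
  </body>
</smil>
```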
Title |
Repurposing Broadcast Content for the Web
| Author(s) |
Dick C.A. Bulterman
| Abstract |
As end-user bandwidth increases to a level where the (re)distribution of audio/video
material via the Internet becomes attractive, XML-based standards that help
broadcasters migrate their existing content to the Web are becoming richer and more
powerful. SMIL 2.0 – developed by the World Wide Web Consortium (W3C) – is the
newest version of the Web’s most popular multimedia format.
This article provides an introduction to the concepts and facilities of the SMIL 2.0
language, in the context of the work flow requirements for taking existing broadcast
content and making it available for a Web-centric audience.
| In |
European Broadcast Union (EBU) Technical Review, 287, June 2001, pp. 1-10.
| Links | | |
|
|
Title |
Hypermedia: The Link with Time
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman and D. C.A. Bulterman
| Abstract |
This essay presents a brief discussion of combining temporal aspects of
multimedia presentations with hypertext links. Three ways of combining
linking with temporally synchronized components of a presentation are
described. We describe work that has been done to incorporate both
temporal and linking information within the W3C language SMIL
(Synchronized Multimedia Integration Language). We conclude with a
discussion of future directions, namely providing support for linking
within and among non-linear presentations and the ability to add
temporal information to existing XML document languages. | In |
ACM Computing Surveys, December 1999.
| Links | | |
Title |
GRiNS: An Authoring Environment for Web Multimedia
| Author(s) |
D.C.A. Bulterman, L. Rutledge, J. van Ossenbruggen and L. Hardman
| Abstract |
The W3C has recently released a language for Web-based Multimedia presentations
called SMIL: the Synchronized Multimedia Integration Language. GRiNS is an authoring and
presentation environment that can be used to create SMIL-compliant documents and to play
SMIL documents created with GRiNS or by hand. This paper discusses GRiNS as a tool for
creating multimedia education materials on the Web.
| In |
Proc. Ed-Media ‘99 — World Conference on Educational Multimedia,
Hypermedia & Educational Telecommunications, Seattle (WA), June
1999. | Links | | |
Title |
Anticipating SMIL 2.0: The Developing Cooperative Infrastructure for Multimedia on the Web
| Author(s) |
D.C.A. Bulterman, L. Rutledge, J. van Ossenbruggen and L. Hardman
| Abstract |
SMIL is the W3C recommendation for bringing synchronized multimedia to
the Web. Version 1.0 of SMIL was accepted as a recommendation in June.
Work is expected to be soon underway for preparing the next version of
SMIL, version 2.0. Issues that will need to be addressed in developing
version 2.0 include not just adding new features but also establishing
SMIL's relationship with various related existing and developing W3C
efforts. In this paper we offer some suggestions for how to address
these issues. Potential new constructs with additional features for
SMIL 2.0 are presented. Other W3C efforts and their potential
relationship with SMIL 2.0 are discussed. To provide a context for
discussing these issues, this paper explores various approaches for
integrating multimedia information with the World Wide Web. It focuses
on the modeling issues on the document level and the consequences of
the basic differences between text-oriented Web-pages and networked
multimedia presentations. | In |
Proceedings of The Eighth International World Wide Web Conference (WWW8), May 1999.
| Links | | |
Title |
Mix'n'Match: Exchangeable Modules of Hypermedia Style
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman and D.C.A. Bulterman
| Abstract |
Making hypermedia adaptable for multiple forms of presentation involves
enabling multiple distinct specifications for how a given collection of
hypermedia can have its presentation generated. The Standard Reference
Model for Intelligent Multimedia Presentation Systems describes how the
generation of hypermedia presentation can be divided into distinct but
cooperating layers. Earlier work has described how specifications for
generating presentations can be divided into distinct modules of code
corresponding to these layers. This paper explores how the modules for
each layer of a presentation specification can be exchanged for another
module encoded for that layer and result in the whole specification
remaining well functioning. This capability would facilitate specifying
presentation generation by allowing for the use of pre-programmed
modules, enabling the author to focus on particular aspects of the
presentation generation process. An example implementation of these
concepts that uses current and developing Web standards is presented to
illustrate how wide-spread modularized presentation generation might be
realized in the near future. | In |
Proceedings of ACM Hypertext 99, February 1999.
| Links | | |
Title |
Do You Have the Time? Composition and Linking in Time-based Hypermedia
| Author(s) |
L. Hardman, J. van Ossenbruggen, L. Rutledge, K. S. Mullender and D.C.A. Bulterman
| Abstract |
Most hypermedia documents don't incorporate time explicitly. This
prevents authors from having direct control over the temporal aspects
of a presentation. In this paper we discuss the concept of presentation
time - the timing of the individual parts of a presentation and the
temporal relations among them. We argue why time is necessary from a
presentation perspective, and discuss its relationship with other
temporal views of a presentation. We derive the requirements and
present our solution for incorporating temporal and linking information
in a model of time-based hypermedia. | In |
Proceedings of ACM Hypertext 99, February 1999.
| Links | | |
Title |
Supporting Adaptive and Adaptable Hypermedia Presentation Semantics
| Author(s) |
D.C.A. Bulterman, L. Rutledge, J. van Ossenbruggen and L. Hardman
| Abstract |
Having the content of a presentation adapt to the needs, resources and
prior activities of a user can be an important benefit of electronic
documents. While part of this adaptation is related to the encodings of
individual data streams, much of the adaptation can/should be guided by
the semantics in and among the objects of the presentation. The
semantics involved in having hypermedia presentations adapt can be
divided between adaptive hypermedia, which adapts autonomously, and
adaptable hypermedia, which requires presentation-external intervention
to be adapted. Understanding adaptive and adaptable hypermedia and the
differences between them helps in determining the best manner with
which to have a particular hypermedia implementation adapt to the
varying circumstances of its presentation. The choice of which type of
semantics to represent can affect the speed of the database management
system processing them. This paper reflects on research and
implementation approaches toward both adaptive and adaptable hypermedia
and how they apply to specifying the semantics involved in hypermedia
authoring and processing. We look at adaptive approaches by considering
CMIF and SMIL. The adaptable approaches, represented by the
SGML-related collection of formats and the Standard Reference Model
(SRM) for IPMS, are also reviewed. Based on our experience with both
adaptive and adaptable hypermedia, we offer recommendations on how each
approach can be supported at the data storage level. | In |
8th IFIP 2.6 Working Conference on Database Semantics (DS-8): Semantic
Issues in Multimedia Systems, Rotorua, New Zealand, January 1999. | Links | | |
|
|
Title |
User-Centered Abstractions for Adaptive Hypermedia Presentations
| Author(s) |
D.C.A. Bulterman
| Abstract |
This paper describes document modelling constructs that support
alternate content choices for generalized hypermedia presentations.
While there has been much work done on adaptive hypermedia documents in
the context of low-level quality-of-service adaptation, little
attention has been paid to support of user-level adaptation of
multimedia content. Taking examples from the domains of information
accessibility for the visual/hearing impaired, multi-lingual
information presentation, and content adaptation in distance learning,
we show how simple interfaces to rich hypermedia documents can give
decided benefits to the user community.
We discuss our work in terms of experiments from the CWI CMIF project
and indicate how these solutions have been integrated with the W3C SMIL
language in the GRiNS editor and player for Web use. | In |
Proc. ACM Multimedia 1998, ACM Press, November 1998, pp 145-150
| Links | | |
Title |
Structural Distinctions Between Hypermedia Storage and Presentation
| Author(s) |
D.C.A. Bulterman, L. Rutledge, J. van Ossenbruggen and L. Hardman
| Abstract |
In order to facilitate adaptability of hypermedia documents a
distinction is often made between the underlying conceptual structure
of a document and the structure of its presentation. This distinction
enables greater variety in how a presentation can be adapted to best
convey these underlying concepts in a given situation. What is often
confusing for those applying this distinction is that although both
levels of structure often share similar components, transformation from
the storage of a document to its presentation sometimes occurs directly
between these similar components and sometimes does not. These
similarities typically fall in the categories of space, time and
relationships between document portions. This paper identifies some
primary similarities between the structure of hypermedia storage and
presentation. It also explores how the transformation from storage to
presentation often does not follow these similarities. This discussion
is illustrated with the Fiets hypermedia application, which addresses
the issues of storage, presentation and transformation using public
domain formats and tools. The intention is to help authors who separate
storage from presentation to better understand this distinction. | In |
Proc. ACM Multimedia 1998, ACM Press, November 1998, pp 183-189
| Links | | |
Title |
Implementing Adaptability in the Standard Reference Model for Intelligent Multimedia Presentation Systems
| Author(s) |
D.C.A. Bulterman, L. Rutledge, J. van Ossenbruggen and L. Hardman
| Abstract |
This paper discusses the implementation of adaptability in environments
that are based on the Standard Reference Model for Intelligent
Multimedia Presentation Systems. This adaptability is explored in the
context of style sheets, which are represented in such formats as
DSSSL. The use of existing public standards and tools for this
implementation of style sheet-based adaptability is described. The
Berlage environment is presented, which integrates these standards and
tools into a complete storage-to-presentation hypermedia environment.
The integration of the SRM into the Berlage environment is introduced
in this work. This integration illustrates the issues involved in
implementing adaptability in the model. | In |
Proceedings of Multimedia Modeling 98, October 1998.
| Links | |
|
Title |
Practical Application of Existing Hypermedia Standards and Tools
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman and D.C.A. Bulterman
| Abstract |
In order for multimedia presentations to be stored, accessed
and played from a large library they should not be encoded
as final form presentations, since these consume storage
space and cannot easily be adapted to variations in
presentation-time circumstances such as user characteristics
and changes in end-user technology. Instead, a more
presentation independent approach needs to be taken that
allows the generation of multiple versions of a presentation
based on a presentation-independent description.
In order for such a generated presentation to be widely
viewable, it must be in a format that is widely implemented
and adopted. Such a format for hypermedia presentations
does not yet exist. However, the recent release of SMIL,
whose creation and promotion is managed by the World
Wide Web Consortium, promises to become such a format in
the short term and be for hypermedia what HTML is for
hypertext.
The technology for enabling this presentation-independent
approach is already available, but requires the use of large
and unapproachable standards, such as DSSSL and HyTime.
In this paper we show that these two standards can be used
with SMIL, and by concentrating on a particular application,
illustrate the use of publicly available tools to support the
generation of multiple presentations from a single
presentation-independent source.
| In |
Proceedings of Digital Libraries 98, June 1998
| Links | | |
Title |
Synchronized Multimedia Integration Language (SMIL) 1.0
| Author(s) |
S. Bugaj, D.C.A. Bulterman, et al.
| Abstract |
This document specifies version 1 of the Synchronized Multimedia
Integration Language (SMIL 1.0, pronounced "smile"). SMIL allows
integrating a set of independent multimedia objects into a synchronized
multimedia presentation. Using SMIL, an author can
1. describe the temporal behavior of the presentation
2. describe the layout of the presentation on a screen
3. associate hyperlinks with media objects
This specification is structured as follows: Section 1 presents the
specification approach. Section 2 defines the "smil" element. Section 3
defines the elements that can be contained in the head part of a SMIL
document. Section 4 defines the elements that can be contained in the
body part of a SMIL document. In particular, this Section defines the
time model used in SMIL. Section 5 describes the SMIL DTD. | In |
World Wide Web Consortium TR/REC-smil10-19980615, June, 1998.
| Links | | |
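For readers unfamiliar with the language, the three capabilities listed in the abstract above can be sketched in a minimal SMIL 1.0 document. This is an illustrative fragment only; the file names, region ids, and durations are invented:

```xml
<!-- Hypothetical example.smil; media file names and region ids are invented -->
<smil>
  <head>
    <layout>                                 <!-- (2) layout of the presentation on screen -->
      <root-layout width="320" height="240"/>
      <region id="image_pane" left="0" top="0" width="320" height="200"/>
      <region id="caption_pane" left="0" top="200" width="320" height="40"/>
    </layout>
  </head>
  <body>
    <seq>                                    <!-- (1) temporal behavior: children play in sequence -->
      <par>                                  <!-- ... these two items play in parallel -->
        <a href="details.smil">              <!-- (3) hyperlink associated with a media object -->
          <img src="photo1.jpg" region="image_pane" dur="5s"/>
        </a>
        <text src="caption1.txt" region="caption_pane" dur="5s"/>
      </par>
      <audio src="closing.wav"/>
    </seq>
  </body>
</smil>
```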
Title |
Presenting Multimedia on the Web and in TV Broadcast
| Author(s) |
W. ten Kate, P. Deunhouwer, L. Hardman, L. Rutledge and D.C.A. Bulterman |
Abstract |
This paper investigates the main issues related to the translation of SMIL
into MHEG documents. This is driven by the more general objective to
achieve interoperability between the domains of Web and digital TV, where
MHEG is used in the digital TV environment and SMIL is the Web format
to specify interactive synchronized multimedia presentations.
A summary of both formats is presented, on the basis of which it is
shown how SMIL translates into MHEG. Although the formats have
differences, such translation appears to be feasible. Aspects of authoring for
both domains and other interoperability issues are discussed.
| In |
Proceedings of The Third European Conference on Multimedia Applications, Services and Techniques (ECMAST 98), May 1998.
| Links | | |
Title |
Addressing Publishing Issues with Hypermedia Distributed on the Web
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman, D.C.A. Bulterman
| Abstract |
The content and structure of an electronically published document can
be authored and processed in ways that allow for flexibility in
presentation on different environments for different users. This
enables authors to craft documents that are more widely presentable.
Electronic publishing issues that arise from this separation of
document storage from presentation include (1) respecting the intent
and restrictions of the author and publisher in the document's
presentation, and (2) applying costs to individual document components
and allowing the user to choose among alternatives to control the price
of the document's presentation. These costs apply not only to the
individual media components displayed but also to the structure created
by document authors to bring these media components together as
multimedia.
A collection of ISO standards, primarily SGML, HyTime and DSSSL,
enable the representation of presentation-independent documents and the
creation of environments that process them for presentation. SMIL is a
W3C format under development for hypermedia documents distributed on
the World Wide Web. Since SMIL is SGML-compliant, it can easily be
incorporated into SGML/HyTime and DSSSL environments.
This paper discusses how to address these issues in the context of
a presentation-independent hypermedia storage. It introduces the
Berlage environment, which uses SGML, HyTime, DSSSL and SMIL to store,
process, and present hypermedia data. This paper also describes how the
Berlage environment can be used to enforce publisher restrictions on
media content and to allow users to control the pricing of document
presentations. Also explored is the ability of both SMIL and HyTime to
address these issues in general, enabling SMIL and HyTime systems to
consistently process documents of different document models authored in
different environments. | In |
Proceedings of ICCC/IFIP Conference on Electronic Publishing '98, April 1998.
| Links | | |
Title |
GRiNS: A GRaphical INterface for Creating and Playing SMIL Documents
| Author(s) |
D.C.A. Bulterman, L. Hardman, J. Jansen, K. S. Mullender and L. Rutledge
| Abstract |
The W3C working group on synchronized multimedia has developed a
language for Web-based Multimedia presentations called SMIL: the
Synchronized Multimedia Integration Language. This paper presents
GRiNS, an authoring and presentation environment that can be used to
create SMIL-compliant documents and to play SMIL documents created with
GRiNS or by hand. | In |
Proceedings of 7th Int. World Wide Web Conference (WWW7), April 1998.
| Links | | |
|
|
Title |
Document Model Issues for Hypermedia
| Author(s) |
L. Hardman, D.C.A. Bulterman
| Abstract |
A number of different systems exist for creating multimedia or
hypermedia applications—each with its own internal document model.
This leads to problems in comparing documents created by these
systems, and in describing the information captured by a document for
long-term (system independent) storage and future playback.
We discuss the components which should be considered for a
hypermedia document model. These include the hierarchical and linking
structure of a document and the spatial and temporal relations among
components of the document. Other aspects, such as transportability of
documents and information retrieval, are also addressed briefly.
We present the Amsterdam Hypermedia Model which, while expressing
only a subset of all possible structures, has been used as a basis for a
comprehensive authoring environment.
| In |
The Handbook of Multimedia Management Information, Eds. W.I. Grosky, R. Jain and R. Mehrotra, pp 39 - 68, 1997
| Links | | |
Title |
Models, Media and Motion: Using the Web to Support Multimedia Documents
| Author(s) |
D.C.A. Bulterman
| Abstract |
The World-Wide Web has been used extensively to present hypertext documents that
have a limited mixture of text and simple graphics which are distributed via the public
Internet. The performance characteristics of the Internet have made the delivery of
complex multimedia documents (that is, documents that include time-based
components) difficult. An effort is currently underway by members of industry,
research centers and user groups to define a standard document format that can be
used in conjunction with time-based transport protocols over Inter- and Intranets to
support rich multimedia presentations. This paper outlines the goals of the W3C’s
Synchronized Multimedia working group and presents an initial description of the first
version of the proposed multimedia document model and format.
| In |
Proc. Multimedia Modelling, Singapore, Nov 17-20 '97, 227-246
| Links | | |
Title |
A Framework for Generating Hypermedia Documents
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman, D.C.A. Bulterman
| Abstract |
Being able to author a hypermedia document once for presentation under
a wide variety of potential circumstances requires that it be stored in
a manner that is adaptable to these circumstances. Since the nature of
these circumstances is not always known at authoring time, specifying
how a document adapts to them must be a process that can be performed
separately from its original authoring. These distinctions include the
porting of the document to different platforms and formats and the
adapting of the document's presentation to suit the needs of the user
and of the current state of the presentation environment. In this paper
we discuss extensions to our CMIF hypermedia authoring and presentation
environment that provide adaptability through this distinction between
authoring and presentation specification. This extension includes the
use of HyTime for document representation and of DSSSL for presentation
specification. We also discuss the Berlage architecture, our extension
to HyTime that specifies the encoding of certain hypermedia concepts
useful for presentation specification. | In |
Proc. ACM Multimedia 97, November 1997.
| Links | | |
Title |
Cooperative Use of MHEG-5 and HyTime
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman, D.C.A. Bulterman
| Abstract |
Being able to author a hypermedia document once for presentation under
a wide variety of potential circumstances requires that it be stored in
a manner that is adaptable to these circumstances. Since the nature of
these circumstances is not always known at authoring time, specifying
how a document adapts to them must be a process that can be performed
separately from its original authoring. These distinctions include the
porting of the document to different platforms and formats and the
adapting of the document's presentation to suit the needs of the user
and of the current state of the presentation environment. In this paper
we discuss extensions to our CMIF hypermedia authoring and presentation
environment that provide adaptability through this distinction between
authoring and presentation specification. This extension includes the
use of HyTime for document representation and of DSSSL for presentation
specification. We also discuss the Berlage architecture, our extension
to HyTime that specifies the encoding of certain hypermedia concepts
useful for presentation specification. | In |
Proceedings of Hypertexts and Hypermedia: Products, Tools, Methods (HHPTM'97), September 1997.
| Links | | |
Title |
Use of Standards for Hypermedia Generic Structure and Presentation Specifications
| Author(s) |
L. Rutledge, J. van Ossenbruggen, L. Hardman, D.C.A. Bulterman, A. Eliëns
| Abstract |
We consider the generic hypermedia structure of a document to be a
means of representing the document that allows it to be processed into
a wide variety of presentations. Representing a document in this manner
requires additional specification and resources to render it into any
presentation. In this paper we discuss the relationship between the
generic hypermedia structure of documents and the processing of this
structure into presentation. Our discussion is expressed in terms of
existing models for hypertext and hypermedia systems and also in terms
of ISO standards for text and hypermedia document formatting and
processing. The discussion and the resulting formalisms are
illustrated with extension designs for the
hypermedia authoring and presentation environment developed at our
laboratory. | In |
Proceedings of ICCC/IFIP Conference on Electronic Publishing '97, April 1997.
| Links | | |
Title |
Integrating the Amsterdam Hypermedia Model with the Standard Reference Model for Intelligent Multimedia Presentation Systems
| Author(s) |
L. Hardman, M. Worring, D.C.A. Bulterman
| Abstract |
The standard reference model (SRM) for intelligent multimedia presentation systems describes a
framework for the automatic generation of multimedia presentations. This framework, however,
lacks an explicit document model of the presentation being generated. The Amsterdam
hypermedia model (AHM) describes the document features of a hypermedia presentation
explicitly. We take the AHM and use it as a basis for describing in detail the stages of generating
a hypermedia presentation within the SRM framework, which we summarise in a table. By doing
so, the responsibilities of the individual SRM layers become more apparent.
| In |
Computer Standards & Interfaces, vol 18 (6-7) 497-508.
| Links | | |
|
|
Title |
Challenges in Human-Computer Interfaces: Making the Technology Serve the User
| Author(s) |
D.C.A. Bulterman
| Abstract |
In this position statement, challenges in three areas of information
and data interface development are considered. These are: interfaces
to the technology that are used to deliver communication primitives;
interfaces to the presentation of a single information projection on a
particular system; and interfaces to the abstract information
containing the ultimate message that the technological messenger is
trying to present. Since the nature of this statement makes the
presentation cursory, I will focus on the needs of the user rather than
the potential of the infrastructure. | In |
ACM Computing Surveys, Vol. 28, No 4, December 1996.
| Links | | |
Title |
Multimedia User Interfaces: Who Should Interface to Whom?
| Author(s) |
D.C.A. Bulterman
| Abstract |
Coming soon...
| In |
Proc. AVI ’96, Gubbio, Italy, May 28-29, 1996.
| Links | | |
|
|
Title |
Embedded Video in Hypermedia Documents: Supporting Integration and Adaptive Control
| Author(s) |
Dick C.A. Bulterman
| Abstract |
As the availability of digital video becomes commonplace, a shift in
application focus will occur from merely accessing video as an
independent data stream to embedding video with other multimedia data
types into coordinated hypermedia presentations. The migration to
embedded video will present new demands on applications and support
environments: processing of any one piece of video data will depend on
how that data relates to other data streams active within the same
presentation. This article describes presentation, synchronization and
interaction control issues for manipulating embedded video. First, we
describe the requirements for embedded video, contrasted against other
forms of video use. Next we consider mechanisms for describing and
implementing the behavior of embedded video segments relative to other
data items in a document; these relationships form the basis of
implementing cooperative control among the events in a presentation.
Finally, we consider extending the possibilities for tailoring embedded
video to the characteristics of the local runtime environment; this
forms the basis for adaptive, application-level quality of service
control of a presentation. In all cases, we describe a mechanism to
externalize the behavior of hypermedia presentations containing
resource intensive data requirements so that effective control can be
implemented by low-level system facilities based on
application-specific requirements. We present our results in terms of
the CMIFed authoring/presentation system. | In |
ACM TOIS 13(4), October 1995, pp. 440-470.
| Links | | |
Title |
Multimedia Authoring Tools:
State of the Art and Research Challenges
| Author(s) |
Dick C.A. Bulterman and Lynda Hardman
| Abstract |
The integration of audio, video, graphics and text on the desktop
promises to fundamentally challenge the centuries-old model of the
printed document as the basis for information exchange. Before this
potential can be realized, however, systems must be devised that enable
the production and presentation of complex, inter-related media
objects. These systems are generically called multimedia authoring
tools. In this article, we consider the development of multimedia
authoring tools, examine the current state of the art, and then discuss
a set of research challenges that need to be addressed before the full
potential of multimedia output technology can be effectively utilized
to share information. | In |
Springer LNCS-1000, 1995, pp. 575-591.
| Links | | |
Title |
Using the Amsterdam Hypermedia Model for Abstracting Presentation Behavior
| Author(s) |
Lynda Hardman and Dick C.A. Bulterman
| Abstract |
We give a short description of the Amsterdam Hypermedia Model followed
by examples of its use in a number of existing and planned
applications. The main application to date has been as a basis of the
multimedia authoring system, CMIFed, along with its ability to specify
trade-offs for resource use. We discuss the model's potential for
generating differing document formats, followed by future work on using
it as a goal format for generating multimedia documents. | In |
Procs. of Effective Abstractions in Multimedia, ACM Multimedia '95 workshop, 4 November 1995.
| Links | | |
Title |
Authoring Support for Durable Interactive Multimedia Presentations
| Author(s) |
Lynda Hardman and Dick C.A. Bulterman
| Abstract |
There are two major problems with the current ways of creating
interactive multimedia presentations. Firstly, authoring a multimedia
presentation is a non-trivial task, requiring a range of skills such as
creating individual items in each medium as well as combining these
into a coherent presentation. Secondly, after having devoted a large
amount of time and effort to the creation of a presentation, there is
no guarantee that it can be played back on a platform other than the
one for which it was created, let alone whether it can be played back
by future systems.
To tackle both these problems, we first present an information
model for interactive multimedia, so that information can be stored
independently of the system that creates or plays it. We then
investigate a number of authoring systems. By differentiating four
authoring paradigms we classify and describe a selection of both
research and commercial systems. These provide examples of the types of
support that can be given to authors, and how this support can be
provided in practice. Using the approaches and features supported by
these systems as a base, we give an analysis of the facilities desired
in an ideal authoring system. | In |
State of the Art Report in Eurographics '95,
Maastricht, The Netherlands, 28 August - 1 September 1995.
| Links | | |
Title |
Towards the Generation of Hypermedia Structure
| Author(s) |
Lynda Hardman and Dick C.A. Bulterman
| Abstract |
We present an approach for generating hypermedia presentations from
multimedia information items distributed around a network. Our goal is
to create a media-independent description of a presentation, from which
multiple final presentations can be generated, taking into account the
user's information need, the user's task and network and end-user
platform resources.
In order to generate the structure of a hypermedia presentation
from existing media items we need to define a way of grouping similar
items and making links among the groups. This grouping can be based on
semantic annotations attached to the media items. Current approaches to
video annotation, as a complex example, are analysed. A number of
research questions arising from our approach are discussed. | In |
Proc. of First International Workshop on Intelligence
and Multimodality in Multimedia Interfaces,
Edinburgh, UK, July 1995.
| Links | | |
Title |
Adaptive Quality-of-Service Support in Heterogeneous Networks: Results of a Trans-European Experiment
| Author(s) |
D.C.A. Bulterman, P. Beertema, K.S. Mullender
| Abstract |
The increasing availability of high-speed networks has served as an
enabling technology for applications such as networked multimedia,
where a mix of information types can be retrieved from decentralized
information stores and presented at a user's workstation. Such
transfers work best when a guaranteed amount of network capacity is
presented to the application fetching the data. Unfortunately, as the
storage of information becomes more decentralized, and as the load on
individual information sources and sinks increase, it is often
difficult to obtain service guarantees for the duration of a lengthy
information exchange, especially when the information access is
determined by the dynamic behaviour of users, such as is present in a
hypermedia application environment.
This project defined an experiment in providing adaptive, runtime
control over hypermedia information in a wide-area network. The purpose
of the experiment was to determine the adaptive control mechanisms
required to provide decentralized access to complex data in an
unpredictable networked environment. The unpredictability of the
environment may be caused by transient reallocations of network
bandwidth, overloading of network servers or reliability problems
within the communications infrastructure. In the following sections, we
describe the planned and encountered environment, method and expected
results of a series of quality-of-service experiments conducted over
moderate-bandwidth links between various European network
organizations. We then discuss problems in producing the planned
results. We conclude with a travel/expense summary for the project. | In |
CEC RACE project STEN (1003) report, 1995. Copies available on request.
| |
|
|
Title |
The Amsterdam Hypermedia Model: Adding Time, Structure and Context to Hypertext
| Author(s) |
Lynda Hardman, Dick C. A. Bulterman, Guido van Rossum
| Abstract |
On the surface, hypermedia is a simple and natural extension of
multimedia and hypertext: multimedia provides a richness in data types
that facilitates flexibility in expressing information, while hypertext
provides an elegant way of navigating through this data in a
context-based manner. One popular approach to supporting hypermedia is
to take an existing hypertext model and augment it with multimedia
data types within the storage model. Although such a `marriage of
convenience' can provide some immediate results, the underlying control
assumptions of the hypertext model make this approach unsuitable for
describing and supporting generalized hypermedia documents. In
particular, conventional hypertext cannot adequately support complex
temporal relationships among data items, specifications that support
high-level presentation semantics or a notion of `information context'
that specifies global behavior when following links. All of these are
elements of fundamental importance in supporting multimedia. | In |
Communications of the ACM 37 (2), Feb 94, pp 50 - 62.
| |
Title |
Managing the Adaptive Processing of Distributed Multimedia Information
| Author(s) |
Dick C.A. Bulterman
| Abstract |
The term multimedia conjures up visions of desktop computers
reproducing digital movies, high-resolution images and stereo sound.
While many current systems support such functionality, none do so
elegantly, especially when data is fetched and synchronized from
dissimilar sources distributed across resource-limited networks. Our
research investigates general approaches for managing the flow of
multimedia information in a distributed computing environment,
providing adaptive support for time-sensitive retrieval and
presentation based on multimedia document specifications. The benefit
of our approach is that it provides flexible, content-based utilization
of resources without overburdening the application author/developer. | In |
CWI Quarterly, 7(1), Special Issue on Multimedia (D.C.A. Bulterman, Ed.),
March 1994, pp 3-25.
| |
Title |
Authoring Interactive Multimedia: Problems and Prospects
| Author(s) |
Lynda Hardman and Dick Bulterman
| Abstract |
The creation of a multimedia presentation is a non-trivial task. It
involves skills that are not readily available to users and it requires
support not generally available from authoring software. In order to
understand the basic problems of multimedia authoring, this article
considers the requirements for defining interactive, dynamic
presentations. When contrasted against the facilities available in
current-generation commercial authoring systems, we can see that their
focus is often on low-level details rather than high-level structure.
The prospects for future editing systems are somewhat brighter: support
for high level editing can be provided. As an example, we describe the
CMIFed authoring environment; CMIFed not only supports authoring at a
high level but also incorporates most low-level features found in
current systems. | In |
CWI Quarterly, 7(1), Special Issue on Multimedia (D.C.A. Bulterman, Ed.),
March 1994, pp 47-66.
| |
Title |
CWI's experimentation with High-Speed Communication: Life near the fast lane...
| Author(s) |
Dick Bulterman
| Abstract |
CWI has been investigating the placement and use of high-speed networks
for use in international, national and local communication. As part of
a multi-year grant from the Dutch Government, we are currently
upgrading our local network infrastructure to make use of ATM
technology, providing an environment to support research in areas such
as multimedia and scientific visualization, and to improve general
network service to our users. We have conducted several performance
tests during the past year to familiarize ourselves with the technology
and its potential in servicing user needs. Our initial results indicate
that while installation of network interfaces is relatively trivial,
much work remains to be done before the full potential of high-speed
communication can be enjoyed by general applications. | In |
ERCIM News, July 1994. Copies available on request.
| |
Title |
Supporting Adaptive Multimedia
| Author(s) |
Dick C.A. Bulterman
| Abstract |
Supporting intelligent multi-media multi-modal systems is a broad
problem that has many facets. One of these facets is defining the
phrase `intelligent multi-media multi-modal systems': where one puts
the intelligence, when one supports the multi-modality and how one
provides multi-media capabilities depend greatly on one's perspective
and assumptions about how information is to be manipulated and
presented within a user/computer environment. From the perspective of
the user, the entire problem may be paraphrased as follows: "how can
information be presented (and extracted) to make the solution of a set
of problems more accessible." From the perspective of the `system,' the
problem can be paraphrased as being: "how can information be
manipulated and presented to aid in the development of a solution." In
both cases, an emphasis exists on transforming information so that it
can be used to better suit the needs of a (phase of an) application. In
general, we refer to these transformations with respect to various
static and dynamic data types as adaptive multimedia. | In |
Proc. Workshop on Multi-Modal Multimedia Interfaces, AAAI Spring 1994 Symposium, Stanford University, Palo Alto, April 1994.
| |
|
|
Title |
Links in Hypermedia: The Requirement for Context
| Author(s) |
Lynda Hardman, Dick C.A. Bulterman and Guido van Rossum
| Abstract |
Taking the concept of a link from hypertext and adding it to the rich
collection of information formats found in multimedia systems provides
an intuitive extension to multimedia that is often called hypermedia.
Unfortunately, the implicit assumptions under which hypertext links
work do not extend well to time-based presentations that consist of a
number of simultaneously active media items. It is not obvious where
links should lead and there are no standard rules that indicate what
should happen to other parts of the presentation that are active.
This paper addresses the problems associated with links in hypermedia.
In order to provide a solution, we introduce the notion of context for
the source and the destination of a link. A context makes explicit
which part of a presentation is affected when a link is followed from
an anchor in the presentation. Given explicit source and destination
contexts for a link, an author is able to state the desired
presentation characteristics for following a link, including whether
the presentation currently playing should remain playing or be removed.
We first give an intuitive description of contexts for links, then
present a structural-based approach. We go on to describe the
implementation of contexts in our hypermedia authoring system CMIFed. | In |
ACM Hypertext '93, Seattle WA, Nov '93, 183-191.
| Links | | |
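The source/destination contexts described in this abstract can be sketched in miniature. This is an illustrative sketch only; the class and field names below are hypothetical and do not reproduce CMIFed's actual data model:

```python
# Illustrative sketch (not CMIFed's real API): a hypermedia link whose
# source and destination contexts make explicit which part of the playing
# presentation is affected when the link is followed.

from dataclasses import dataclass

@dataclass
class Context:
    """A part of the presentation affected by following a link."""
    name: str
    playing: bool = True

@dataclass
class Link:
    source: Context            # context containing the link's anchor
    destination: Context       # context to start playing
    keep_source: bool = False  # author's choice: keep source playing or remove it

    def follow(self):
        if not self.keep_source:
            self.source.playing = False  # source context is removed
        self.destination.playing = True  # destination context starts
        return self.destination

# Example: a link from a city map to a walking tour; the map is removed.
city_map = Context("city-map")
tour = Context("walking-tour", playing=False)
Link(source=city_map, destination=tour, keep_source=False).follow()
```

The `keep_source` flag stands in for the authored presentation behaviour the abstract mentions: whether the presentation currently playing remains playing or is removed when the link is followed.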
Title |
Specification and Support of Adaptable Networked Multimedia
| Author(s) |
Dick C.A. Bulterman
| Abstract |
Accessing multimedia information in a networked environment introduces
problems that don't exist when the same information is accessed
locally. These problems include: competing for network resources within
and across applications, synchronizing data arrivals from various
sources within an application, and supporting multiple data
representations across heterogeneous hosts. Often, special-purpose
algorithms can be defined to deal with these problems, but these
solutions usually are restricted to the context of a single
application. A more general approach is to define an adaptable
infrastructure that can be used to manage resources flexibly for all
currently active applications. This paper describes such an approach.
We begin by introducing a general framework for partitioning control
responsibilities among a number of cooperating system and application
components. We then describe a specification formalism that can be used
to encode an application's resource requirements, synchronization
needs, and interaction control. This specification can be used to
coordinate the activities of the application, the operating system(s)
and a set of adaptive information objects in matching the (possibly
flexible) needs of an application to the resources available in an
environment at run-time. The benefits of this approach are that it
allows adaptable application support with respect to system resources
and that it provides a natural way to support heterogeneity in
multimedia networks and multimedia data. | In |
Multimedia Systems 1(2), 68 - 76.
| Links | | |
Title |
Structured Multimedia Authoring
| Author(s) |
Lynda Hardman, Guido van Rossum and Dick Bulterman
| Abstract |
We present the user interface to the CMIF authoring environment for
constructing and playing multimedia presentations. The CMIF authoring
environment supports a rich hypermedia document model allowing
structure-based composition of multimedia presentations and the
specification of synchronization constraints between constituent media
items. An author constructs a multimedia presentation in terms of its
structure and additional synchronization constraints, from which the
CMIF player derives the precise timing information for the
presentation.
We discuss the advantages of a structured approach to authoring
multimedia, and describe the facilities in the CMIF authoring
environment for supporting this approach. The authoring environment
presents three main views of a multimedia presentation: the hierarchy
view is used for manipulating and viewing a presentation's hierarchical
structure; the channel view is used for managing logical resources and
specifying and viewing precise timing constraints; and the player for
playing the presentation.
We present the authoring environment in terms of a short example:
constructing a walking tour of Amsterdam. | In |
ACM Multimedia '93, Anaheim, Aug '93, 283 - 289.
| Links | | |
Title |
CMIFed: A Presentation Environment for Portable Hypermedia Documents
| Author(s) |
Guido van Rossum, Jack Jansen, K.Sjoerd Mullender and Dick C.A. Bulterman
| Abstract |
In this paper we discuss the architecture and implementation of CMIFed,
an editing and presentation environment for hypermedia documents.
Typically such documents contain a mixture of text, images, audio, and
video (and possibly other media), augmented with user interaction.
CMIFed allows the author flexibility in specifying what is presented
when, using multiple simultaneous output channels.
Unlike systems that use a timeline or scripting metaphor to control the
presentation, in CMIFed the user manipulates a collection of events and
timing constraints among those events. Common timing requirements can
be specified by grouping events together in a tree whose nodes indicate
sequential and parallel composition. More specific timing constraints
between events can be added in the form of synchronization arcs. User
interaction is supported in the form of hyperlinks.
We place CMIFed in the context of the Amsterdam model for hypermedia
documents, which formalizes the properties of hypermedia presentations
in a platform-independent manner. | In |
ACM Multimedia '93, Anaheim, Aug '93, 183 - 188.
| Links | | |
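The sequential/parallel composition tree that both CMIF papers describe can be sketched as a small scheduler. This is a minimal sketch under assumed semantics, not CMIFed's actual implementation; the tuple encoding is invented for illustration:

```python
# A minimal sketch: a tree of 'seq' and 'par' composition nodes from which
# a player can derive the start time and duration of every media item.

def schedule(node, start=0.0, out=None):
    """Return (duration, {item: (start, duration)}) for a composition tree.
    Leaves are ('media', name, duration); inner nodes are
    ('seq', child, ...) or ('par', child, ...)."""
    if out is None:
        out = {}
    kind = node[0]
    if kind == "media":
        _, name, dur = node
        out[name] = (start, dur)
        return dur, out
    children = node[1:]
    if kind == "seq":                      # children play one after another
        t = start
        for child in children:
            dur, _ = schedule(child, t, out)
            t += dur
        return t - start, out
    if kind == "par":                      # children start together
        longest = 0.0
        for child in children:
            dur, _ = schedule(child, start, out)
            longest = max(longest, dur)
        return longest, out
    raise ValueError(kind)

# A two-scene presentation: a title card, then narration with a photo.
tree = ("seq",
        ("media", "title", 3.0),
        ("par",
         ("media", "narration", 10.0),
         ("media", "photo", 8.0)))
total, times = schedule(tree)
# total is 13.0; 'photo' starts at 3.0 and runs for 8.0 seconds
```

Finer-grained constraints between arbitrary events (the synchronization arcs of the abstract) would be layered on top of this derived schedule rather than encoded in the tree itself.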
Title |
Retrieving (JPEG) Pictures in Portable Hypermedia Documents
| Author(s) |
Dick C.A. Bulterman
| Abstract |
In this paper, we single out one of the problem aspects of multimedia:
how can one efficiently store high-quality picture information in a
manner that does not make its retrieval characteristics incompatible
with the needs of a (distributed) multimedia application. These needs
include: timely access to data, predictable access to common (i.e.,
network) resources and ease in user specification of information. Our
approach to solving this problem is to define adaptive data objects
that adjust the amount and type of information given to an application
as a function of resource availability. The key to our approach is that
we transparently adapt the information presented to the application
based on a set of pre-specified conditions that were defined by the
application at author time. We discuss this work in the context of a
parallel JPEG image decoder that provides adaptive images (with respect
to data content and image representation) based on a transparent
client/server negotiation scheme. Our work is based on the Amsterdam
Multimedia Framework (AMF), a partitioning of control operations for
supporting distributed multimedia. The parallel JPEG algorithm, AMF and
the negotiated control algorithm are explained. | In |
Proc. Multimedia Modelling '93, Singapore, November 1993.
| |
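The negotiation idea in this abstract can be illustrated with a toy selection function. This is a hypothetical sketch of the general principle only; the paper's actual AMF negotiation protocol and parameters are not reproduced here:

```python
# Hypothetical sketch: an adaptive image object offers several
# representations, and the one actually delivered is chosen at run time
# from conditions the author fixed at document-authoring time.

# Representations the server can produce: (label, size in bytes, quality 0..1)
REPRESENTATIONS = [
    ("full-color", 400_000, 1.00),
    ("reduced-color", 150_000, 0.70),
    ("grayscale-preview", 40_000, 0.40),
]

def negotiate(available_bps, deadline_s, min_quality):
    """Pick the best representation that can arrive before the deadline
    and still meets the author's minimum acceptable quality."""
    budget = available_bps * deadline_s / 8       # bytes deliverable in time
    for label, size, quality in REPRESENTATIONS:  # best quality first
        if size <= budget and quality >= min_quality:
            return label
    return None                                   # conditions cannot be met

# On a 1 Mbit/s link with a 2-second deadline, the full image is too big,
# so the reduced-color version is delivered instead.
choice = negotiate(available_bps=1_000_000, deadline_s=2.0, min_quality=0.5)
```

The point of the transparency claim in the abstract is that the application sees only the delivered image; the choice among representations happens in the client/server layer below it.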
Title |
The Amsterdam Hypermedia Model: Extending Hypertext to Support Real Multimedia
| Author(s) |
Lynda Hardman, Dick C A Bulterman and Guido van Rossum
| Abstract |
We present a model of hypermedia that allows the combination of
"hyper-structured" information with dynamic multimedia information. The
model is derived by extending the Dexter hypertext reference model and
the CMIF multimedia model. The Amsterdam hypermedia model allows the
following, in addition to the model provided by Dexter:
· the composition of multiple dynamic media, in order to specify a
collection of time-based media making up a complete multimedia
presentation;
· the definition of channels for specifying default presentation
information, allowing the specification of the presentation
characteristics of nodes at a more general level than that for an
individual node;
· the composition of existing presentations into larger presentations,
taking into account possible clashes of resource usage;
· the inclusion of temporal relations while maintaining the separation
of structure and presentation information, where time-based
relationships are treated as presentation information;
· the definition of context for the source and destination anchors of a
link in order to specify the parts of a presentation affected on
following the link.
The Amsterdam hypermedia model enables the description of structured
multimedia documents, incorporating time at a fundamental level, and
extending the hypertext notion of links to time-based media and
compositions of different media.
The paper is organised as follows. The Dexter hypertext model and the
CMIF multimedia model are summarised, and their limitations for use as
a more general hypermedia model are discussed. The extensions included
in the Amsterdam hypermedia model are described and a summary of the
resulting model is given. | In |
Hypermedia Journal 5(1), July 1993, 47 - 69.
| Links | | |
Title |
A Distributed Approach to Retrieving JPEG Pictures in Portable Hypermedia Documents
| Author(s) |
Dick C.A. Bulterman and Dik T. Winter
| Abstract |
In this paper, we single out one of the problem aspects of multimedia:
how can one efficiently store high-quality picture information in a
manner that does not make its retrieval characteristics incompatible
with the needs of a (distributed) multimedia application. These needs
include: timely access to data, predictable access to common (i.e.,
network) resources and ease in user specification of information. Our
approach to solving this problem is to define adaptive data objects
that adjust the amount and type of information given to an application
as a function of resource availability. The key to our approach is that
we transparently adapt the information presented to the application
based on a set of pre-specified conditions that were defined by the
application at author time. We discuss this work in the context of a
parallel JPEG image decoder that provides adaptive images (with respect
to data content and image representation) based on a transparent
client/server negotiation scheme. Our work is based on the Amsterdam
Multimedia Framework (AMF), a partitioning of control operations for
supporting distributed multimedia. The parallel JPEG algorithm, AMF and
the negotiated control algorithm are explained. | In |
Multimedia Technologies and Future Applications
(Damper, Hall and Richards, Eds.). Pentech Press (London), 1994.
| Links | | |
|
|
Title |
Synchronization of Multi-Sourced Multimedia Data for Heterogeneous Target Systems
| Author(s) |
Dick C.A. Bulterman
| Abstract |
Accessing multimedia information in a networked environment introduces
problems to an application designer that don't exist when the same
information is fetched locally. These problems include "competing" for
the allocation of network resources across applications, synchronizing
data arrivals from various sources within an application, and
supporting multiple data representations across heterogeneous hosts. In
this paper, we present a general framework for addressing these
problems that is based on the assumption that time-sensitive data can
only be controlled by having the application, the operating system(s)
and a set of active, intelligent information objects coordinate their
activities based on an explicit specification of resource,
synchronization, and representation information. After presenting the
general framework, we describe a document specification structure and
two active system components that cooperatively provide support for
synchronization and data-transformation problems in a networked
multimedia environment. | In |
Proceedings of the 3rd International
Workshop on Network and OS Support for Digital
Audio/Video, San Diego, Nov. 1992.
| Links | | |
Title |
Multimedia Onderzoek: Drie Open Vragen
| Author(s) |
Dick C.A. Bulterman
| Abstract |
Multimedia is "in". Multimedia systems are for sale in almost every
computer shop, and multimedia software is now available for application
areas ranging from repair and maintenance to music education. Yet
current multimedia systems have not solved a single problem in the
field of information technology; instead, a number of existing problems
have come into sharper focus. If these are to be solved, a joint effort
by researchers from several disciplines is needed. Only then can
multimedia fulfil its promise of letting people communicate with
computers in a completely different way, and possibly even with each
other. |
In |
Informatie en Informatiebeleid,
Vol. 12, No. 4, Winter 1992 (in Dutch).
| |
|
|
Title |
Multimedia Synchronization and UNIX
| Author(s) |
Dick C.A. Bulterman and Robert van Liere
| Abstract |
One of the most important emerging developments for improving the
user/computer interface has been the addition of multimedia facilities
to high-performance workstations. Although the mention of multimedia
I/O often conjures up visions of moving images, talking text and
electronic music, multimedia I/O is not synonymous with interface bells
and whistles. Instead, multimedia should be synonymous with the
synchronization of bells and whistles so that application programs can
integrate data from a broad spectrum of independent sources (including
those with strict timing requirements). This paper considers the role
of the operating system (in general) and UNIX (in particular) in
supporting multimedia synchronization. The first section reviews the
requirements and characteristics that are inherent to the problem of
synchronizing a number of otherwise autonomous data sets. We then
consider the ability of UNIX to support decentralized data and complex
data synchronization requirements. While our conclusions on the
viability of UNIX for supporting generalized multimedia are not
optimistic, we offer an approach to solving some of the synchronization
problems of multimedia I/O without losing the benefits of a standard
UNIX environment. The basis of our approach is to integrate a
distributed operating system kernel as a "multimedia co-processor."
This co-processor is a programmable device that can implement
synchronization relationships in a manner that decouples I/O management
from (user) process support. The principal benefit of this approach is
that it integrates the potential of distributed I/O support with the
standardization provided by a "real" UNIX kernel. | In |
Proceedings of the
2nd International Workshop on Network and OS
Support for Digital Audio/Video, Heidelberg, Nov.
1991. (Also available in LNCS 614, Springer-Verlag, 1992.)
| Links | | |
Title |
Multimedia Synchronization and UNIX, or: If Multimedia Support is the Problem, Is UNIX the Solution?
| Author(s) |
Dick C.A. Bulterman, Guido van Rossum and Dik Winter
| Abstract |
This paper considers the role of UNIX in supporting multimedia
applications. In particular, we consider the ability of the UNIX
operating system (in general) and the UNIX I/O system (in particular)
to support the synchronization of a number of high-bandwidth data sets
that must be combined to support generalized multimedia systems. The
paper is divided into three main sections. The first section reviews
the requirements and characteristics that are inherent to multimedia
applications. The second section reviews the facilities provided by
UNIX and the UNIX I/O model. The third section contrasts the needs of
multimedia and the abilities of UNIX to support these needs, with
special attention paid to UNIX's problem aspects. We close by sketching
an approach we are studying to solve the multimedia processing problem:
the use of a distributed operating system to provide a separate data
and processing management layer for multimedia information. | In |
Proceedings of the EurOpen Autumn 1991
Conference, Budapest.
| Links | | |
Title |
A Structure for Transportable, Dynamic Multimedia Documents
| Author(s) |
Dick C.A. Bulterman, Guido van Rossum and Robert van Liere
| Abstract |
This paper presents a document structure for describing transportable,
dynamic multimedia documents. Multimedia documents consist of a set of
discrete data components that are joined together in time and space to
present a user (or reader) with a single coordinated whole.
Transportable documents are those in which the document structure can
be accessed across system environments independently of individual
component input or output dependencies; dynamic documents are those in
which the synchronization of document components is not statically
defined as an integral part of the data definition but is dynamically
defined as attributes of the general document structure.
The focus of this paper is the presentation of the basic building
blocks of the CWI Multimedia Interchange Format (CMIF). CMIF is used to
describe the temporal and structural relationships that exist in
multimedia documents. In order to put our work in a concrete context,
we start our discussion with a brief description of the portability
requirements for documents used within the CWI/Multimedia Pipeline. We
then provide a layered description of our document structure format;
this format provides a means for expressing a document in terms of
synchronization channels, event descriptors, data descriptors, data
blocks and synchronization arcs, each element of which contains a set
of appropriate descriptive attributes. The paper describes each of
these concepts abstractly as well as in the context of a uniform
example. The paper concludes with a discussion of our intended future
direction in using the various attribute descriptors to control a broad
range of activities within the CWI/Multimedia Pipeline. | In |
Proceedings of the Summer
1991 USENIX Conference, Nashville, TN, pp137-155.
| Links | | |
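The building blocks this abstract names (synchronization channels, event descriptors, data descriptors, data blocks, and synchronization arcs) can be sketched as plain data. The field names below are illustrative only, not CMIF's actual attribute set:

```python
# Illustrative sketch of the layered CMIF document structure: channels
# carry events, events reference data descriptors and data blocks, and
# sync arcs add timing constraints between events in different channels.

document = {
    "channels": [
        {"name": "video", "kind": "video"},
        {"name": "subtitles", "kind": "text"},
    ],
    "events": [
        {"id": "clip1", "channel": "video",
         "data": {"descriptor": "mpeg", "block": "tour.mpg"}},
        {"id": "caption1", "channel": "subtitles",
         "data": {"descriptor": "text", "block": "tour-en.txt"}},
    ],
    # sync arc: caption1 must start 0.5 s after clip1 starts
    "sync_arcs": [
        {"from": ("clip1", "start"), "to": ("caption1", "start"),
         "delay": 0.5},
    ],
}

def events_on(doc, channel):
    """All event ids scheduled on a given channel."""
    return [e["id"] for e in doc["events"] if e["channel"] == channel]
```

Because timing lives in the sync arcs and presentation defaults live in the channels, the same document structure can be transported across systems whose media capabilities differ, which is the portability point the abstract makes.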
|
|
Title |
Academic Networking: A Review of Options and Challenges
| Author(s) |
D. C. A. Bulterman
| In |
Proceedings of CCCD-89, Bombay, India, September 1989.
| |
Title |
Application-Level Performance Measurement Tools for Network-Based Systems
| Author(s) |
D. C. A. Bulterman and E. Manolis
| In |
IEEE Network, Vol. 1, Number 2, 1987.
| |
Title |
An Animated Modelling Environment for Parallel Architectures
| Author(s) |
D. C. A. Bulterman
| In |
8th International Conference on Computer Hardware Description Languages, Amsterdam, 1987.
| |
Title |
Instrumentation of Distributed Signal Processing Systems
| Author(s) |
D. C. A. Bulterman
| In |
Workshop on Instrumentation for Distributed Computing Systems, Sanibel Isl, FL (USA), April 1987.
| |
Title |
CASE: An Integrated Design Environment for Algorithm-Driven Architectures
| Author(s) |
D. C. A. Bulterman
| In |
24th IEEE/ACM Design Automation Conference, Miami Beach, FL (USA),
July 1987.
| |
Title |
A Heterogeneous Network Interface Using a Low-cost LAN Controller
| Author(s) |
D. C. A. Bulterman and J.A. Wong
| In |
Proceedings of IEEE Infocom '84, San Francisco, April 1984.
|
|
|
Last Updated: 12 March 2009
|