The convergence of several hardware and software technologies appears to make the Internet (and its external face, the World-Wide Web) a candidate for the integrated distribution of combined push and pull information streams. In our view, the short-term model for such integrated distribution will strongly mimic existing user interaction models. The push stream will be proactive and remain a variant of current broadcast-based TV programming: these programs are general-purpose in nature and aimed at a non-specific audience. The pull stream(s) will be reactive and will be delivered in response to a particular user's request, triggered by one or more link anchors contained in the basic broadcast stream.
There are several technological approaches that could lead to such an integrated environment. One is that the current Web infrastructure could replace the existing broadcast infrastructure. Another is that the broadcast infrastructure could subsume the existing Web by providing new transport technology and integrated TV and data services. In all probability--given the huge investments in existing infrastructure by both the computer and TV communities--individual pieces of content will be offered in hybrid forms to both environments, with a layer of (probably fairly coarse) integration that will help customers bridge between both environments. Until the social patterns of viewing behavior change, we expect that current broadcast TV will remain a shared experience and that interactive access will be primarily personal (except for high-level program selection and initial program activation).
While we do not foresee a short-term displacement of the TV by the PC, we do suspect that there will be a need to produce multiple versions of broadcast programming to fit the needs of particular user groups. One example is on-demand captioning of information, with some form of integrated language selection available. Another is the distinction between a `first-run' broadcast stream and a `second-run' personalized interactive version of the same content, in which there is more freedom to pause content and to link to secondary information streams. In order to provide this class of reusable and extensible presentations, authoring support must exist that allows content providers to generate multiple versions of their programming for both push and pull distribution.
CWI has a long history of developing authoring systems that support the adaptive presentation of multimedia information. The focus of our work has been to define a presentation as an abstract document that is not tied to a single presentation format or environment, allowing content providers to reuse their content streams through multiple format projections based on a single content base. In this respect, we have developed prototype environments that produce projections from our native formats to both the W3C SMIL format and the MHEG-5 format. SMIL is a candidate to become the Web's multimedia standard, and MHEG-5 is the presentation format used by the DAVIC consortium for processing multimedia information on set-top box systems. More recently, CWI has been researching the use of developing W3C formats such as XSL and XLink to automatically generate SMIL and MHEG-5 presentations from very general, highly presentation-independent media and metadata archives. This would give the browsing user more adaptable access to a broader, more widely integrated collection of media data.
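To make the idea of format projection concrete, the following is a minimal sketch (not CWI's actual system; the function and data shapes here are illustrative assumptions) of projecting a single abstract content description to a simplified SMIL serialization. An MHEG-5 projection would be a second function over the same abstract input.

```python
# Illustrative sketch of "multiple projection" authoring: one abstract
# content base, projected to a (simplified) SMIL-style serialization.
# The function name and input shape are hypothetical, not CWI's API.
import xml.etree.ElementTree as ET

def to_smil(items):
    """Project a list of (media_type, src) pairs, to be played in
    parallel, onto a minimal SMIL document string."""
    smil = ET.Element("smil")
    ET.SubElement(smil, "head")
    body = ET.SubElement(smil, "body")
    # SMIL's <par> container plays its children in parallel.
    par = ET.SubElement(body, "par")
    for media_type, src in items:
        ET.SubElement(par, media_type, src=src)
    return ET.tostring(smil, encoding="unicode")

print(to_smil([("video", "news.mpg"), ("text", "captions.html")]))
```

The point of the sketch is the architecture, not the output format: because the input is presentation-independent, adding a new target (MHEG-5, or a later W3C format) means adding a new projection function rather than re-authoring the content.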
Our interest in attending the workshop on TV and the Internet is to understand the trends that are shaping the convergence of these technologies. We hope to understand how a content provider can be assisted in reusing its (very costly) content by offering value-added additions to information in each of the infrastructures on which it is presented. For example, if a TV program is created for the broadcast medium, we are interested in investigating how a Web version of the same program--or of program fragments--could be delivered that makes use of an embedded link architecture for use in a Web environment.
We are prepared to discuss our experience with multiple projection authoring (in general) or our experiments with transforming Web-based content into MHEG-5 presentations (in particular). We are also prepared to discuss the hyper-architecture that we use to support the constrained presentation of information by preserving/altering the context of a presentation at the time the source anchor of a link is activated.
Lloyd Rutledge, Jacco van Ossenbruggen, Lynda Hardman and Dick C. A. Bulterman, "Cooperative Use of MHEG-5 and HyTime", Proceedings of Hypertexts and Hypermedia: Products, Tools, Methods (HHPTM'97), September 1997.
The standards MHEG and HyTime are interchange formats for hypermedia information. While they may seem to compete, they actually play separate and complementary roles in a complete and open hypermedia environment. MHEG is used for portable final-form hypermedia presentations. HyTime is used for the long-term, presentation-independent storage of hypermedia documents. Given these tasks, MHEG can be used to encode presentations of HyTime documents. This paper explores these two standards, the cooperative roles they play and their application to the CMIF hypermedia environment architecture. The issues discussed include the semantic overlap between the hypermedia models each standard represents and how the use of each standard affects the cooperative use of the other.
Ten Kate, W., Bulterman, D.C.A., Deunhouwer, P., Hardman, L. and Rutledge, L. "Presenting Multimedia on the Web and in TV Broadcast", in Proc. 3rd European Conference on Multimedia Applications, Services and Techniques (ECMAST 98), May 1998.
This paper provides a high-level description of Web-based multimedia using SMIL. It is based on an older version of the standard, but readers will get the general idea of how SMIL presentations are structured.
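As a rough structural illustration (not taken from the paper, and with hypothetical file names), a minimal SMIL 1.0 presentation consists of a head, which may declare a spatial layout, and a body built from timing containers such as par (parallel) and seq (sequential):

```xml
<!-- Minimal illustrative SMIL 1.0 document; media file names are hypothetical -->
<smil>
  <head>
    <layout>
      <root-layout width="320" height="240"/>
      <region id="v" left="0" top="0" width="320" height="240"/>
    </layout>
  </head>
  <body>
    <seq>
      <par>
        <!-- children of <par> are rendered in parallel -->
        <video src="intro.mpg" region="v"/>
        <audio src="intro.wav"/>
      </par>
    </seq>
  </body>
</smil>
```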