INS2 Video Footage

This is a list of available video material in RealVideo format.

Available Videos

Multimedia Metadata Standards Panel, World Wide Web Conference, 2007, Banff, Canada

The Role of Multimedia Metadata Standards in a (Semantic) Web 3.0 (web site, slides)

Abstract:

Are multimedia metadata really needed on the Semantic Web? Are "real" multimedia standards, such as MPEG-7, of any value at all to the Web community? Do we really need to invoke the complexity of formal knowledge representation languages and ontologies?

The Web already supports the storage and retrieval of media such as text, image, video and audio. In contrast to text, other media types require some sort of metadata in order to make them indexable and searchable. Surrounding text on the web page and user-assigned tags are the current methods that give millions of users access to millions of images and videos. The Web 2.0 paradigm has become successful partly because it asks almost nothing from its end users. So, is there even a problem to be solved?
Existing multimedia metadata standards, such as MPEG-7, provide a means of associating semantics with particular sections of audio-visual material. Expensive and complex tools are available for giving professional users access to this. Many other metadata standards, such as EXIF, ID3 and XMP, also exist, making it difficult for a typical Web user to see the wood for the trees.

The panel will discuss how to overcome the shortcomings of current multimedia metadata formats. In addition, the Web community needs to understand the benefits of explicit semantics for retrieving and processing multimedia content. We will discuss whether standards for multimedia metadata for Web content are really needed, and what role they could play in a future semantic multimedia web.
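
As a rough illustration of what explicit semantics for media metadata can look like in practice, the sketch below (our own illustration, not material from the panel) reads the EXIF tags of a photo with Pillow and republishes them as RDF triples with rdflib, so they can be queried alongside other Semantic Web data. The example.org namespace and the file name are invented for illustration.

  # A minimal sketch (not from the panel): lift the EXIF metadata of a photo
  # into RDF so it can be queried alongside other Semantic Web data.
  # Requires Pillow and rdflib; namespace and file name are illustrative.
  from PIL import Image, ExifTags
  from rdflib import Graph, Literal, Namespace, URIRef

  EX = Namespace("http://example.org/photo#")

  def exif_to_rdf(path, photo_uri):
      graph = Graph()
      photo = URIRef(photo_uri)
      for tag_id, value in Image.open(path).getexif().items():
          tag_name = ExifTags.TAGS.get(tag_id, str(tag_id))
          # Each EXIF tag becomes a property in the illustrative namespace.
          graph.add((photo, EX[tag_name], Literal(str(value))))
      return graph

  if __name__ == "__main__":
      g = exif_to_rdf("holiday.jpg", "http://example.org/photos/holiday.jpg")
      print(g.serialize(format="turtle"))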

Panelists:

PhD Defense Stefano Bocconi, 30-11-2006, Eindhoven

Vox Populi: Generating video documentaries from semantically annotated media repositories

Abstract:

/facet, International Semantic Web Conference (ISWC2006), November 2006

/facet: A browser for heterogeneous Semantic Web repositories (local, external)

Abstract:

Facet browsing has become popular as a user-friendly interface to data repositories. The Semantic Web raises new challenges due to the heterogeneous character of the data. First, users should be able to select and navigate through facets of resources of any type and to make selections based on properties of other, semantically related, types. Second, where traditional facet browsers require manual configuration of the software, a semantic web browser should be able to handle any RDFS dataset without any additional configuration. Third, hierarchical data on the semantic web is not designed for browsing: complementary techniques, such as search, should be available to overcome this problem. We address these requirements in our browser, /facet. Additionally, the interface allows the inclusion of facet-specific display options that go beyond the hierarchical navigation that characterizes current facet browsing. /facet is a tool for Semantic Web developers as an instant interface to their complete dataset. The automatic facet configuration generated by the system can then be further refined to configure it as a tool for end users. The implementation is based on current Web standards and open source software. The new functionality is motivated using a scenario from the cultural heritage domain.
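
As a hedged sketch of the automatic facet configuration idea (not the /facet implementation itself), the snippet below uses rdflib to propose candidate facets for one resource type by listing every property used on its instances together with the number of distinct values. The dataset file and the Artwork class URI are invented for illustration.

  # A rough sketch of automatic facet configuration (not the /facet code itself):
  # for a chosen resource type, list every property used on its instances and
  # count the distinct values -- each such property is a candidate facet.
  # Uses rdflib; the dataset file and the Artwork URI are illustrative.
  from rdflib import Graph, URIRef

  FACET_QUERY = """
  SELECT ?property (COUNT(DISTINCT ?value) AS ?values)
  WHERE {
    ?resource a ?type ;
              ?property ?value .
  }
  GROUP BY ?property
  ORDER BY DESC(?values)
  """

  graph = Graph()
  graph.parse("heritage.rdf")  # any RDFS dataset, no manual configuration

  artwork = URIRef("http://example.org/schema#Artwork")
  for prop, count in graph.query(FACET_QUERY, initBindings={"type": artwork}):
      print(f"candidate facet: {prop} ({count} distinct values)")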

Media day 2005, Groningen

A Visual Trip Report

Lloyd Rutledge's talk at the Scientific Meeting, 28/01/2005

"Presenting Knowledge -Bridging the World Wide and Semantic Webs"

Abstract:

The Semantic Web is emerging as a component of the World Wide Web. As a companion to the current document-based WWW, it promises a globally accessible representation of knowledge itself, standardizing technologies developed by knowledge representation research -- thus doing for knowledge what the WWW has done for documents. Defining the representation of and access to this emerging Semantic Web is the topic of much ongoing research. CWI-INS2's research focus is on providing a human-computer interface for the Semantic Web.

Just as most websites are now powered by underlying databases that provide the content of generated web pages, the Semantic Web will enable us to generate many different presentations from underlying information -- not just from a single data source but potentially from several sources, and presented in different ways.
We discuss three example applications (a small code sketch of this idea follows the list):

  1. the authoring of multimedia presentations using on-the-fly collection of material,
  2. automated montage of semantically annotated video fragments into biased sequences, and
  3. automated generation of web pages showing different perspectives on a single knowledge base.
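
A minimal sketch of the underlying idea, using invented data and rdflib rather than the systems discussed in the talk: a single small knowledge base is rendered as two different presentations, one grouped by artist and one ordered chronologically. All URIs and property names are illustrative.

  # A minimal sketch of the idea with invented data (not the systems from the
  # talk): one small knowledge base, two generated presentations of the same
  # information.
  from rdflib import RDF, Graph, Literal, Namespace

  EX = Namespace("http://example.org/art#")

  g = Graph()
  for title, artist, year in [("Sunflowers", "Van Gogh", 1888),
                              ("Irises", "Van Gogh", 1889),
                              ("The Kiss", "Klimt", 1908)]:
      work = EX[title.replace(" ", "_")]
      g.add((work, RDF.type, EX.Artwork))
      g.add((work, EX.title, Literal(title)))
      g.add((work, EX.artist, Literal(artist)))
      g.add((work, EX.year, Literal(year)))

  def by_artist(graph):
      # Presentation 1: works grouped under their artist.
      rows = graph.query("""SELECT ?artist ?title WHERE {
          ?w <http://example.org/art#artist> ?artist ;
             <http://example.org/art#title> ?title . } ORDER BY ?artist ?title""")
      return "\n".join(f"{a}: {t}" for a, t in rows)

  def timeline(graph):
      # Presentation 2: the same works ordered chronologically.
      rows = graph.query("""SELECT ?year ?title WHERE {
          ?w <http://example.org/art#year> ?year ;
             <http://example.org/art#title> ?title . } ORDER BY ?year""")
      return "\n".join(f"{y}: {t}" for y, t in rows)

  print(by_artist(g) + "\n---\n" + timeline(g))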

Jan Karel Lenstra's talk at the Scientific Meeting, 28/01/2005

"Graph coloring and clique partitioning"

Abstract:

I will discuss the formulation and solution of two practical optimization problems, which arose in 1969 at the University of Amsterdam and in 2003 at Georgia Tech.

ACM Multimedia 2004, October 10-16, New York, NY, USA

A Visual Trip Report

The Fourteenth ACM Conference on Hypertext and Hypermedia, August 26-30, 2003, Nottingham

A Visual Trip Report

Prof. Marc Davis - Mobile Media Metadata: The Future of Mobile Imaging - February 28, 2005

Mobile Media Metadata: The Future of Mobile Imaging

Speaker:

Prof. Marc Davis
University of California at Berkeley
School of Information Management and Systems
Garage Cinema Research
Homepage

Abstract:

The devices and usage context of consumer digital imaging are undergoing rapid transformation from the traditional camera-to-desktop-to-network image pipeline to an integrated mobile imaging experience.
Since 2003, more camera phones have been sold worldwide than digital cameras; since 2004, 5-megapixel cameraphones with optical zoom and camera flash have been on the market.
The ascendancy of these mobile media capture devices makes possible a significant new paradigm for digital imaging because, unlike traditional digital cameras, cameraphones integrate media capture, software-programmable processing, wireless networking, rich user interaction modalities, and automatically gathered contextual metadata (spatial, temporal, social) in one mobile device.
We will discuss our Mobile Media Metadata (MMM) prototypes, which leverage the spatio-temporal context and social community of media capture to infer the content and the sharing recipients of media captured on cameraphones. Over the past two years we have deployed and tested MMM1 (context-to-content inferencing on cameraphones to infer media content) and MMM2 (context-to-community inferencing on cameraphones to infer sharing recipients) with 60 users, in the fall of 2003 and 2004 respectively.
We will report on findings from our system development, user testing, ethnographic research, design research, and student generated product concepts for MMM-supported vertical applications.
As a result of our approach to context-aware mobile media computing, we believe our MMM research will help solve a fundamental problem in personal media production and reuse: the need to have content-based access to the media consumers capture and share on mobile imaging devices.
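
To make the context-to-content idea concrete, here is a hedged toy sketch in the spirit of MMM (not Garage Cinema Research's actual system): labels for a new photo are suggested by ranking the labels of earlier captures with similar spatial, temporal and social context. All weights and data are invented for illustration.

  # A toy sketch in the spirit of MMM (not Garage Cinema Research's system):
  # suggest labels for a new photo by ranking the labels of earlier captures
  # with similar spatial, temporal and social context. Weights and data are
  # invented for illustration. Requires Python 3.10+.
  from collections import Counter
  from dataclasses import dataclass

  @dataclass
  class Capture:
      cell_id: str                 # coarse spatial context (serving cell)
      hour: int                    # temporal context (hour of day)
      companions: frozenset        # social context (who was co-present)
      label: str | None = None     # content label, if the user assigned one

  def context_similarity(a: Capture, b: Capture) -> float:
      score = 0.0
      if a.cell_id == b.cell_id:
          score += 2.0             # same place: strongest cue
      if abs(a.hour - b.hour) <= 2:
          score += 1.0             # similar time of day
      return score + len(a.companions & b.companions)  # shared companions

  def suggest_labels(new: Capture, history: list[Capture], k: int = 3):
      votes = Counter()
      for old in history:
          if old.label:
              votes[old.label] += context_similarity(new, old)
      return [label for label, _ in votes.most_common(k)]

  history = [
      Capture("cell-42", 12, frozenset({"anna"}), "campus lunch"),
      Capture("cell-42", 13, frozenset({"anna", "bob"}), "campus lunch"),
      Capture("cell-07", 20, frozenset({"bob"}), "concert"),
  ]
  print(suggest_labels(Capture("cell-42", 12, frozenset({"anna"})), history))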

The Twelfth International World Wide Web Conference, 20-24 May 2003, Budapest, Hungary

A Visual Trip Report

Presentation by Raphaël Troncy: Formalization of documentary knowledge and conceptual knowledge with ontologies, 23 April 2004

Formalization of documentary knowledge and conceptual knowledge with ontologies

Speaker:

Raphaël Troncy

Outline of the talk (available here in PowerPoint)