SIGIR 2007, Amsterdam 23-27 Jul 2007

General Impression

This is my first SIGIR conference. I am very satisfied with it. The conference has a good balance between user-centered and technical papers. To be precise:

There are 27 sessions over 3 days, of which 3 are dedicated to personalization, users & the web, and interaction. In addition, there are 9 tutorials, one of them on how to do quantitative user-centered evaluation, and 9 workshops in total, with one dedicated to web information seeking & interaction.

What strikes me about the papers in this conference is that they are not very different from the way HCI research is conducted. Both technical and user-centric research are carried out with well-structured methods, place a strong emphasis on evaluation, and use benchmarks or baselines to judge the performance of an algorithm/approach/interface/idea.

I have a lot of respect for this community and personally think the Semantic Web (SW) community can learn a lot from the IR community.

Tutorial: Conducting User-centered IR Systems Evaluation by Diane Kelly

In a nutshell: all the fundamental statistics you need when you want to evaluate your interface. If you want to claim that your novel and brilliant interface is 'better' (easier, faster, or helps users in whatever aspect) than another interface for USERS, this is the only way to validate it.
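To make this concrete, here is a minimal sketch of one such test (my own illustration, not from the tutorial): a paired t-test comparing the task completion times of the same participants on two interfaces. The timing numbers and the 0.05 significance level are placeholder assumptions.

    # Minimal sketch (my own, not from the tutorial): paired t-test on
    # task completion times for the same users on two interfaces.
    # The numbers below are made-up placeholder data.
    from scipy import stats

    # Seconds to complete the same task, one pair per participant.
    baseline_times = [42.1, 55.3, 38.7, 61.0, 47.5, 52.2, 44.9, 58.4]
    new_ui_times   = [39.8, 50.1, 36.2, 57.3, 45.0, 48.8, 43.1, 54.6]

    t_stat, p_value = stats.ttest_rel(baseline_times, new_ui_times)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Only if p falls below the chosen significance level (e.g. 0.05)
    # can you claim the difference is unlikely to be due to chance.
    if p_value < 0.05:
        print("Difference is statistically significant.")

A paired test is the natural choice here because the same users try both interfaces; an unpaired test would waste the per-user pairing.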

Anyone interested: Michiel & Alia have the hard copies.

Workshop: Web Information Seeking and Interaction

I am delighted to be able to meet and talk to the IR people who do research in user-centric information seeking.

Things discussed in the workshop:

1. How to educate users to become more expert and to use more advanced functionality, along a spectrum from manual to automatic:

manual → teaching (classroom, curriculum) → focused teaching (tips, query feedback, expectation management) → exposing the model (keyword snippets, longer snippets) → 'it just works' (query expansion) → automatic

2. We need to encourage users to enter more keywords!

Note to self: this could be a research question! (though it is more of an IR research question.)

For example, testing a popup on the query 'television': 'Do you want to buy a television, or do you want to know more about television sets?'

3. How to test Google experimental search:

1) it is problematic because users like and are comfortable with the current interface; 2) the baseline is high, so a new interface may offer only a small improvement; 3) the issue is not usability, because the interface is already easy to use, but other measures may be interesting: effort and enjoyment.

The challenge is to find use cases where it is really useful.

One way to test it is with a closed and friendly community, such as the Google family.

4. Universal search is still the best answer: look at the Google blog; people are searching images, blogs, and text at the same time.


Workshop: Multimedia Information Retrieval

There is a lot of talk about setting up benchmarks and procedures for evaluating multimedia information retrieval systems, which currently do not exist.

Laura talked about using metadata and SW technology to retrieve images. Hers was a minority voice here. I can understand why the IR community does not take SW people seriously for text retrieval, but in the case of multimedia collections, metadata can help a lot in addition to content-based retrieval technology.

The fact is, IR people have been researching how to use controlled vocabularies to increase the performance of retrieval systems.
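To illustrate the idea, here is a minimal sketch (my own illustration, not from the workshop) of query expansion with a controlled vocabulary: a hypothetical thesaurus maps each query term to its preferred and related terms before the index is searched.

    # Minimal sketch (my own illustration): expanding a query with a
    # controlled vocabulary before searching. The tiny thesaurus below
    # is a hypothetical placeholder, not a real vocabulary.

    THESAURUS = {
        # term -> preferred/related terms from a controlled vocabulary
        "car": ["automobile", "motor vehicle"],
        "dog": ["canine", "domestic dog"],
    }

    def expand_query(terms):
        """Return the original terms plus their vocabulary expansions."""
        expanded = []
        for term in terms:
            expanded.append(term)
            expanded.extend(THESAURUS.get(term, []))
        return expanded

    print(expand_query(["car", "photo"]))
    # -> ['car', 'automobile', 'motor vehicle', 'photo']

For sparsely annotated multimedia collections this kind of expansion can bridge the gap between the words users type and the words in the metadata.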

Interesting Papers

Reviews of these papers will be posted later when I have more time.

Note to self:

- YAGO http://dbpedia.org/docs/

- http://www.seco.tkk.fi/publications/2007/ruotsalo-hyvonen-annotationrelevance.pdf

- check journal IP&M: ian & peter borland (personal information management journal)