Sunday, August 18, 2013

Recapping SIGIR 2013: Overview

Last time around I wrote about interpretability in machine learning, something that I have dabbled with in the past. Today I wanted to recap some of the highlights of SIGIR 2013.

To begin with, I should mention that I found the conference to be very well-organized, and several of the talks were interesting and stimulating. Compared to the last SIGIR I attended in 2010 (Geneva), there has been a noticeable change in the papers at the conference and the overall program content:

1. User modeling, and in particular devising models better suited to capturing search users' satisfaction, is a very popular topic. The MUBE workshop at SIGIR this year, which focused squarely on this topic, was well-attended and lively. KDD this year also had quite a few papers on user modeling.

Coming up with a good surrogate for user satisfaction is excellent for post-hoc analysis of different methods. However, I am slightly concerned about how some of these proposed measures can be employed to train better search models and improve the overall search experience; i.e., it is unclear to me whether current learning methods can optimize for these new measures instead of traditional measures like NDCG. Thus I feel there is more learning-oriented work to be done on this topic.
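For readers unfamiliar with the traditional measure mentioned above, here is a minimal sketch of NDCG (Normalized Discounted Cumulative Gain), using the common exponential-gain formulation; this is the textbook definition, not code from any of the papers discussed:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance labels are rewarded
    exponentially and discounted by the log of their rank (rank is 0-based,
    hence the +2 inside the log)."""
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances, k=None):
    """NDCG@k: DCG of the given ranking divided by the DCG of the ideal
    (descending-relevance) ordering of the same labels."""
    ranked = relevances[:k] if k is not None else relevances
    ideal = sorted(relevances, reverse=True)
    ideal = ideal[:k] if k is not None else ideal
    ideal_dcg = dcg(ideal)
    return dcg(ranked) / ideal_dcg if ideal_dcg > 0 else 0.0

# A perfectly ordered list scores 1.0; reversing it lowers the score.
print(ndcg([3, 2, 1, 0]))  # 1.0
print(ndcg([0, 1, 2, 3]))  # < 1.0
```

The rank-based discount is what makes NDCG awkward as a training objective: it is piecewise-constant in the model scores (non-differentiable), which is one reason optimizing newer satisfaction-based measures directly is an open question.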

2. In fact, it was not just the above topic that had reduced machine learning content, but the conference in general. In my opinion, work on learning-related topics like Learning to Rank (LTR) now seems better suited to KDD or ICML than SIGIR. I suppose I should not be shocked, given rumors that current search engine ranking learners have hit a plateau. Seeing how finely these rankers are tuned for different criteria and verticals, it is understandable why people would worry about the global changes a newly learned model could bring about, and hence why they tend not to use learning directly. While fine for the short term, I'm not sure how viable this strategy is in the long term. It seems the time is right for someone to come up with the next big thing in LTR.

3. People are a lot more interested in complex search tasks and session search, as evidenced by the substantial number of papers on the topic. I'll detail some of these papers in my next post.


As mentioned earlier, there were a lot of great papers at SIGIR this year, which has left me with a long reading list. Over the next few days I'll try to go over some of them: one of the highlight papers (in my opinion) along with papers on a specific topic.

You may also be interested in:

- Recapping SIGIR 2013 - Part 2 (Session/Task Search)
