As this contest has been documented at some length (New York Times Magazine, Wired Magazine), I’ll skip the setup and instead describe what has been learned about predicting the ratings people give to movies. To whet the appetite, first consider these two curious graphs. I’ll get to them soon.

The simplest model for predicting a new rating is to just guess a single number each time. What number to guess? The one that minimizes the prediction error. Since the contest measures error as root-mean-squared error, this happens to be the mean (3.7 stars I think). But while we have a very robust estimate of the rating mean, derived from 100 million examples, our model is too simple, and the error rate is around 1.05 stars (the Netflix “Cinematch” system is off by 0.95 stars; the winning system is off by 0.85 stars).
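A quick sanity check of that claim, on toy data (the ratings below are made up, not the Netflix data): among all constant guesses, the mean gives the lowest root-mean-squared error.

```python
import math

ratings = [5, 3, 4, 4, 2, 5, 3, 4]  # hypothetical star ratings

def rmse(guess, ratings):
    """Root-mean-squared error of guessing `guess` for every rating."""
    return math.sqrt(sum((r - guess) ** 2 for r in ratings) / len(ratings))

mean = sum(ratings) / len(ratings)

# Guessing the mean beats guessing any other constant.
best = min(rmse(g, ratings) for g in [1.0, 2.0, 3.0, 4.0, 5.0])
assert rmse(mean, ratings) <= best
```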

Let’s make the model a little fancier. We’ll add an offset for each movie and for each user to acknowledge that "The Shawshank Redemption" is better than average and I am more critical than average. The predicted rating according to the new model is the overall mean plus a movie offset plus a user offset. My predicted rating for Shawshank Redemption might be 3.7 (overall mean) + 0.8 (Shawshank offset) – 0.3 (my offset) = 4.2. This new model, which adds 500,000 user parameters and 20,000 movie parameters, improves the error rate to 0.97 stars. Again the parameters are estimated from the training data to minimize the error rate.
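The baseline model is simple enough to sketch directly. The offsets below are the illustrative numbers from the example, not values estimated from real data:

```python
# Baseline model: overall mean + movie offset + user offset.
overall_mean = 3.7
movie_offset = {"The Shawshank Redemption": 0.8}  # illustrative, not fitted
user_offset = {"me": -0.3}                        # illustrative, not fitted

def baseline(user, movie):
    """Predicted rating; unknown users/movies fall back to a zero offset."""
    return overall_mean + movie_offset.get(movie, 0.0) + user_offset.get(user, 0.0)

my_rating = baseline("me", "The Shawshank Redemption")  # 3.7 + 0.8 - 0.3 = 4.2
```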

We’ll call the estimate from this model **b**_{ui}, the baseline predicted rating for user **u** and item **i** (I’ll use item instead of movie because these methods are generally applicable). Next, we’d like to improve our predictions by including information about similarities between movies—if I rated Rushmore highly, I’m likely to rate "The Royal Tenenbaums" highly too. So let’s add to our baseline another offset, attributed to similarities with a movie’s neighbors. Say we’re trying to predict my rating for "The Royal Tenenbaums" (movie **i**).

For every other movie **j** I’ve rated, we compute (**r**_{uj} – **b**_{uj}) **w**_{ij}, where **w**_{ij} is the weight we want to place on movie **j**—large if **i** and **j** are similar. Then, if the rating I gave to movie **j** (**r**_{uj}) is higher than expected (**b**_{uj}), and movie **j** is very similar to movie **i** (**w**_{ij} is large), we boost our new prediction for movie **i**. If the rating I gave to movie **j** is just about what the baseline expected, then (**r**_{uj} – **b**_{uj}) will be close to zero, and we won’t adjust our new prediction for **i** regardless of the similarity.

Our final new prediction for movie **i** is estimated as the baseline prediction plus the sum of all the values (**r**_{uj} – **b**_{uj}) **w**_{ij}, one for each movie **j** that I’ve rated.

This all seems very sensible, but where do we get **w**_{ij}, the similarity between movies **i** and **j**? As with the simpler models, we estimate these parameters from the training data to minimize the prediction error rate. This estimation problem is no longer a matter of calculating averages. We want the set of **w**_{ij}s (there are 20,000 × 20,000 = 400,000,000 of them) such that the sum of the differences between real ratings and predicted ratings over the 100,000,000 examples in the training data is as small as possible. How do we go about finding this set, a tiny needle in a giant haystack?

Here is the key. Consider two sets of values for **w**_{ij}, A and B. Suppose set A gives a lower error rate predicting the ratings in the training data than set B. *Then set A is closer to the best possible set of **w**_{ij}s than set B.* Put another way, if the **w**_{ij}s are at a local minimum—small changes in the values give worse predictions—then this is the best solution, globally. The minimization function is *convex*. This means that we can go about finding the best set of **w**_{ij}s simply by comparing our current set with some other one, and keeping whichever works better. Practically, we choose the direction to move according to the derivative of the minimization function, but that’s starting to get too technical. Read about *stochastic gradient descent* if you want to know more.

So to learn the best set of **w**_{ij}s, we start with some initial guess (maybe every similarity is 0.5). Then we try predicting the rating that user **u** gave to movie **i**, compare the prediction to the real rating, and adjust the relevant parameters: **w**_{i1}, **w**_{i2}, **w**_{i3}, and so on. Then we move on to the next rating. One pass through the entire dataset, 100 million examples, is called an iteration. It takes around 20 iterations to find the best estimates of **w**_{ij}. Each iteration takes around an hour on a reasonable computer. This method, with a few adjustments, can reduce the error rate to around 0.90 stars.
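A minimal sketch of that loop, on tiny synthetic data. Two simplifications are assumptions of mine, not the contest setup: the baseline is a single constant rather than per-user/movie, and every user has rated every movie. For each observed rating we nudge the relevant w_ij values against the gradient of the squared error.

```python
import random

random.seed(0)
n_items, n_users = 5, 10
baseline = 3.7  # simplified: one constant instead of b_ui
# Synthetic ratings; in the contest these come from 100M real examples.
ratings = {(u, i): random.choice([1, 2, 3, 4, 5])
           for u in range(n_users) for i in range(n_items)}

w = {(i, j): 0.0 for i in range(n_items) for j in range(n_items) if i != j}
lr = 0.005  # learning rate

def predict(u, i):
    """Baseline plus neighborhood offsets from the user's other ratings."""
    return baseline + sum((ratings[(u, j)] - baseline) * w[(i, j)]
                          for j in range(n_items) if j != i)

def sse():
    """Sum of squared errors over the whole (toy) training set."""
    return sum((r - predict(u, i)) ** 2 for (u, i), r in ratings.items())

before = sse()
for _ in range(20):  # each pass over the data is one "iteration"
    for (u, i), r_ui in ratings.items():
        err = r_ui - predict(u, i)
        for j in range(n_items):
            if j != i:
                # Gradient step on the squared error for this one rating.
                w[(i, j)] += lr * err * (ratings[(u, j)] - baseline)
after = sse()
assert after < before  # training reduces the error
```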

So how do you get down to 0.85 stars? One of the most important insights is that ratings change over time, as the graphs above suggest. For each movie, ratings increase over time—on average nearly 10% over 5 years. The best explanation for this phenomenon is that people are less discriminating when they rent new releases. To rent an older movie, they need more encouragement—some indication that they will like it—so these end up better matched to their tastes.

There is something more insidious going on in the first plot: 1500 days after Netflix began collecting rating data, the average rating jumped strikingly, from 3.4 to 3.6 all at once. One hypothesis is that Netflix changed their recommendation system in 2004, improving the match between users and movies. A second, somewhat more appealing hypothesis is that the text accompanying the star ratings changed from an objective scale (excellent, good, fair, …) to a subjective scale (loved it, liked it, …), and people are less accommodating when they think they need to be objective.

Regardless of the underlying causes, modeling temporal dynamics is clearly valuable. Scaling the ratings based on absolute date and the time since their release allows for more meaningful comparison between ratings—like adjusting for inflation. Such adjustments can reduce the error rate to 0.89 stars.
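One very simple form such an adjustment could take (an assumption for illustration, not the winning team's actual method): subtract the mean rating of each calendar period, so that ratings from different eras become comparable, much like deflating prices.

```python
# Toy ratings tagged with the day they were given; values are made up to
# mimic the jump in the average rating partway through the dataset.
ratings = [
    {"day": 100,  "stars": 3.0},
    {"day": 200,  "stars": 3.8},
    {"day": 1600, "stars": 4.0},
    {"day": 1700, "stars": 3.6},
]

def bin_of(day, width=1000):
    """Coarse calendar bin for a rating date."""
    return day // width

# Mean rating within each bin.
bins = {}
for rec in ratings:
    bins.setdefault(bin_of(rec["day"]), []).append(rec["stars"])
bin_mean = {b: sum(v) / len(v) for b, v in bins.items()}

# "Inflation-adjusted" ratings: deviation from that era's mean.
adjusted = [rec["stars"] - bin_mean[bin_of(rec["day"])] for rec in ratings]
```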

What’s left? Dimensionality reduction methods attempt to represent each movie and user as a short list of factors (think violence, comedy, etc., though because the factors are derived purely statistically, they have no such simple meanings). Including such a technique, like *Singular Value Decomposition*, in the baseline estimate (above) is good for an error rate of 0.88.
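The idea can be sketched on a tiny, fully observed ratings matrix. (The real problem has mostly missing entries and needs an iterative fit, so this is only the core intuition, not the contest method; the ratings below are invented.)

```python
import numpy as np

# Users x movies, made-up ratings: two movies appeal to one group of
# users, the third movie to the other group.
R = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0],
              [2.0, 1.0, 4.0]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 1                                   # keep just one latent factor
R_hat = U[:, :k] * s[:k] @ Vt[:k, :]    # best rank-1 approximation of R

# Even a single statistically derived factor captures the taste split,
# approximating R better than any constant guess.
assert np.linalg.norm(R - R_hat) <= np.linalg.norm(R - R.mean())
```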

The rest of the gap seems to be covered by combining a wide variety of methods, all variants of what I’ve just described. Remember how we computed item-item similarities **w**_{ij}? Well, we could have used user-user similarities instead. In the end, four teams combined their results (probably estimating a final rating by taking a weighted average of all the different systems’ predicted ratings). I imagine each system as a blurry lens; stacking up a whole bunch of blurry lenses at slightly different angles, we manage to reduce the blur just a little bit.
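Such a blend is just a weighted average. The system names, predictions, and weights below are invented; in practice the weights are themselves fit on held-out data:

```python
# Each component system's predicted rating for one (user, movie) pair.
predictions = {"neighborhood": 4.3, "svd": 4.0, "baseline": 3.9}
# Blending weights (sum to 1), assumed here rather than fitted.
weights = {"neighborhood": 0.5, "svd": 0.3, "baseline": 0.2}

blended = sum(weights[m] * predictions[m] for m in predictions)
```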

**References:**

- Yehuda Koren, a member of the winning team, has written some very nice papers about his work on the contest, on which I based this article.

- Bellkor’s Pragmatic Chaos, the webpage for the winning team, has a number of related links.

- An article in IEEE Spectrum written by members of the winning team.

- The official Netflix Prize leaderboard
