Surprise is an easy-to-use, open-source Python library for recommender systems. Its goal is to make life easier for researchers who want to experiment with new algorithm ideas, for teachers who want teaching materials, and for students.
Surprise was designed with the following purposes in mind:
- Give users full control over their experiments. To this end, a strong emphasis is placed on documentation, which we have tried to make as clear and precise as possible by spelling out every detail of the algorithms.
- Alleviate the pain of dataset handling. Users can use both built-in datasets (Movielens, Jester) and their own custom datasets.
- Provide various ready-to-use prediction algorithms (see below) and similarity measures (cosine, MSD, Pearson...).
- Make it easy to implement new algorithm ideas.
- Provide tools to evaluate, analyse and compare algorithm performance. Cross-validation procedures can be run very easily.
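To give a flavour of the similarity measures mentioned above, here is a minimal pure-Python sketch of cosine and MSD similarities computed over the items two users have both rated. This is an illustrative simplification, not Surprise's actual implementation (which operates on the full rating matrix):

```python
from math import sqrt

def cosine_sim(ratings_u, ratings_v):
    """Cosine similarity over the items rated by both users.

    ratings_u, ratings_v: dicts mapping item id -> rating.
    """
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    num = sum(ratings_u[i] * ratings_v[i] for i in common)
    den = (sqrt(sum(ratings_u[i] ** 2 for i in common)) *
           sqrt(sum(ratings_v[i] ** 2 for i in common)))
    return num / den

def msd_sim(ratings_u, ratings_v):
    """Mean Squared Difference similarity: 1 / (1 + MSD), so that
    identical rating profiles get similarity 1."""
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    msd = sum((ratings_u[i] - ratings_v[i]) ** 2 for i in common) / len(common)
    return 1.0 / (1.0 + msd)

u = {'item_a': 5, 'item_b': 3, 'item_c': 4}
v = {'item_a': 4, 'item_b': 3, 'item_d': 2}
print(round(cosine_sim(u, v), 4))  # high: both users rate common items similarly
print(round(msd_sim(u, v), 4))
```

These pairwise similarities are the building blocks of the k-NN style algorithms listed below.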
At the moment, the available prediction algorithms are:
- NormalPredictor: an algorithm predicting a random rating based on the distribution of the training set, which is assumed to be normal.
- BaselineOnly: an algorithm predicting the baseline estimate for a given user and item.
- KNNBasic: a basic collaborative filtering algorithm.
- KNNWithMeans: a basic collaborative filtering algorithm, taking into account the mean ratings of each user.
- KNNBaseline: a basic collaborative filtering algorithm taking into account a baseline rating.
- SVD and PMF: the famous SVD algorithm, as popularized by Simon Funk during the Netflix Prize. The unbiased version is equivalent to Probabilistic Matrix Factorization.
- SVD++: an extension of SVD taking into account implicit ratings.
- NMF: a collaborative filtering algorithm based on Non-negative Matrix Factorization. (Available in latest version).
- Slope One: a simple yet accurate collaborative filtering algorithm. (Available in latest version).
- Co-clustering: a collaborative filtering algorithm based on co-clustering. (Available in latest version).
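To give a flavour of the matrix-factorization models above: SVD predicts a rating as the global mean plus user and item biases plus the dot product of the user's and item's latent factor vectors. Here is a minimal sketch of that prediction rule, with toy (untrained) values just to show the shape of the computation:

```python
def svd_predict(mu, b_u, b_i, p_u, q_i):
    """SVD prediction rule: r_hat = mu + b_u + b_i + q_i . p_u
    (global mean + user bias + item bias + dot product of factors)."""
    return mu + b_u + b_i + sum(pf * qf for pf, qf in zip(p_u, q_i))

# Toy values, not trained parameters -- for illustration only.
mu = 3.5               # global mean rating of the training set
b_u, b_i = 0.2, -0.1   # user and item biases
p_u = [0.1, -0.3]      # user latent factors
q_i = [0.4, 0.2]       # item latent factors

print(svd_predict(mu, b_u, b_i, p_u, q_i))
```

The baseline estimate used by BaselineOnly and KNNBaseline is the special case where the factor term is zero (mu + b_u + b_i), and the "unbiased" PMF variant is the opposite case where the biases are dropped and only the dot product remains.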
The name SurPRISE (roughly :) ) stands for Simple Python RecommendatIon System Engine.
The easiest way is to use pip (you'll need numpy):

```
$ pip install surprise
```
Or you can clone the repo and build the source (you'll need Cython and numpy):

```
$ git clone https://github.com/NicolasHug/surprise.git
$ python setup.py install
```
Here is a simple example showing how you can (down)load a dataset, split it for 3-fold cross-validation, and compute the MAE and RMSE of the SVD algorithm.
```python
from surprise import SVD
from surprise import Dataset
from surprise import evaluate

# Load the movielens-100k dataset (download it if needed),
# and split it into 3 folds for cross-validation.
data = Dataset.load_builtin('ml-100k')
data.split(n_folds=3)

# We'll use the famous SVD algorithm.
algo = SVD()

# Evaluate performances of our algorithm on the dataset.
perf = evaluate(algo, data, measures=['RMSE', 'MAE'])

print(perf)
```
Output:

```
Evaluating RMSE, MAE of algorithm SVD.

        Fold 1  Fold 2  Fold 3  Mean
MAE     0.7475  0.7447  0.7425  0.7449
RMSE    0.9461  0.9436  0.9425  0.9441
```
Surprise can also be used from the command line, e.g.:

```
python -m surprise -algo SVD -params "{'n_factors': 10}" -load-builtin ml-100k -n-folds 3
```
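For reference, the RMSE and MAE reported by the evaluation are the standard error metrics over all (true rating, estimate) pairs of each test fold. A minimal pure-Python sketch of how they are computed:

```python
from math import sqrt

def rmse(true_ratings, estimates):
    """Root Mean Squared Error: penalizes large errors more heavily."""
    n = len(true_ratings)
    return sqrt(sum((t - e) ** 2 for t, e in zip(true_ratings, estimates)) / n)

def mae(true_ratings, estimates):
    """Mean Absolute Error: average magnitude of the errors."""
    n = len(true_ratings)
    return sum(abs(t - e) for t, e in zip(true_ratings, estimates)) / n

true = [4.0, 3.0, 5.0, 2.0]
est = [3.5, 3.0, 4.0, 2.5]
print(round(rmse(true, est), 4))
print(round(mae(true, est), 4))
```

Because of the squaring, RMSE is always at least as large as MAE on the same predictions, which matches the tables below.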
Here are the average RMSE, MAE and total execution time of various algorithms (with their default parameters) on a 5-fold cross-validation procedure. The datasets are the Movielens 100k and 1M datasets. The folds are the same for all algorithms (the random seed is set to 0). All experiments were run on a small laptop (Intel Core i3, 1.7 GHz, 4 GB RAM). The execution time is the real execution time, as returned by the GNU time command.
Movielens 100k | RMSE | MAE | Time |
---|---|---|---|
NormalPredictor | 1.5228 | 1.2242 | 4 s |
BaselineOnly | 0.9445 | 0.7488 | 5 s |
KNNBasic | 0.9789 | 0.7732 | 27 s |
KNNWithMeans | 0.9514 | 0.7500 | 30 s |
KNNBaseline | 0.9306 | 0.7334 | 44 s |
SVD | 0.9392 | 0.7409 | 46 s |
SVD++ | 0.9200 | 0.7253 | 31 min |
NMF | 0.9634 | 0.7572 | 55 s |
Slope One | 0.9454 | 0.7430 | 25 s |
Co-clustering | 0.9678 | 0.7579 | 15 s |
Movielens 1M | RMSE | MAE | Time (min) |
---|---|---|---|
NormalPredictor | 1.5037 | 1.2051 | < 1 |
BaselineOnly | 0.9086 | 0.7194 | < 1 |
KNNBasic | 0.9207 | 0.7250 | 22 |
KNNWithMeans | 0.9292 | 0.7386 | 22 |
KNNBaseline | 0.8949 | 0.7063 | 44 |
SVD | 0.8936 | 0.7057 | 7 |
NMF | 0.9155 | 0.7232 | 9 |
Slope One | 0.9065 | 0.7144 | 8 |
Co-clustering | 0.9155 | 0.7174 | 2 |
The documentation with many other usage examples is available online on ReadTheDocs.
This project is licensed under the BSD 3-Clause license.
- Pierre-François Gimenez, for his valuable insights on software design.
Any kind of feedback/criticism would be greatly appreciated (software design, documentation, improvement ideas, spelling mistakes, etc.).
If you'd like to see some features or algorithms implemented in Surprise, please let us know! Some of the current ideas are:
- Bayesian PMF
- RBM for CF
Please feel free to contribute (see guidelines) and send pull requests!