Project Assignment 3, Semester 2, 2022
Marks : This assignment is worth 30% of the overall assessment for this course.
Due Date : Wed, 26 October 2022, 11:59PM (Week 14), via Canvas. Late penalties apply: a penalty of 10% of the total project score will be deducted per day, and no submissions will be accepted more than 5 days beyond the due date.
Objective
The key objective of this assignment is to learn how to compare and contrast several recommendation system algorithms. There are three major components to the assignment – a completed Jupyter notebook used to run your experiments, a written report, and a short video presentation where you describe what you did and your key findings.
The dataset you will use will be a sample of the Netflix Prize data. The problem is movie recommendation, and the data is already split into a training and validation set that you can use to run all of your experiments.
Provided files
The following template files are provided:
- SXXXXX-A3.ipynb : The primer Jupyter notebook file you should use to stage and run all of your experiments.
- netflix-5k.movie-titles.feather : The movie title dataframe that can be used to map a movieID to a title, as well as a list of genres.
- netflix-5k.train.feather : The training tuples for 5,000 users, where each tuple is
⟨userID,movieID,rating⟩.
- netflix-5k.validation.feather : A predefined set of validation tuples for the same users that can be used by you to benchmark the performance of various algorithms.
- A3.pdf : This specification file.
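As a quick sanity check (once your environment from the next section is set up), the feather files can be read straight into pandas. The sketch below is only an illustration; the column names shown (userID, movieID, rating) are taken from the descriptions above and should be verified against the actual dataframes once loaded.

import pandas as pd

# Load the provided dataframes (paths assume the files sit next to your notebook).
movies = pd.read_feather("netflix-5k.movie-titles.feather")
train = pd.read_feather("netflix-5k.train.feather")
validation = pd.read_feather("netflix-5k.validation.feather")

# Quick sanity checks: sizes, column names, and the rating range.
print(train.shape, validation.shape)
print(train.columns.tolist())
print(train["rating"].min(), train["rating"].max())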
Creating Your Workspace
Once again, you should rename the file SXXXXX-A3.ipynb appropriately based on your student ID.
Creating Your Anaconda Environment
In order to create your anaconda environment for this project, you should run the following command in a terminal shell:
conda create -n PDSA3 python=3.8
conda activate PDSA3
pip install jupyterhub notebook numpy pandas
pip install matplotlib scikit-learn seaborn
pip install kneed scikit-surprise
Note that both kneed and scikit-surprise can be finicky on some systems. For example, on my machine (MacBook Pro M1) an error was thrown during the compile of scikit-surprise, but the install still worked when pip fell back to an alternative install method. So if the pip install really does fail for you, then and only then resort to conda. This would break a pip-only requirements.txt and is not very reproducible, but it should work reliably for everyone. The magic commands would be:
conda install -c conda-forge kneed
conda install -c conda-forge scikit-surprise
You can type “pip freeze” to see a list of the packages that are correctly installed in your environment. You can also install scikit-learn-intelex and/or psutil if you want to use the Intel-based optimisations or debug memory management as shown in the sample Jupyter notebook.
If you also wish the timing and other jupyter extensions to be enabled in your notebook (optional but may be useful depending on how you decide to present your results), you need to run the following additional commands:
pip install jupyter_contrib_nbextensions
pip install jupyter_nbextensions_configurator
jupyter contrib nbextension install --user
jupyter nbextensions_configurator enable --user
Now you just need to type “jupyter notebook” to start up Jupyter correctly with access to the libraries you just installed. Also, recall that if you ever stop working in your environment and come back later, you must open a terminal, run “conda activate PDSA3” and then “jupyter notebook”; otherwise you will not be working in the virtual environment you created above, and most things you try to do will probably start failing. You should not use any other libraries to complete your assignment beyond the ones shown above without written permission from the course coordinator (Shane).
1 The Jupyter Notebook Primer (5 marks)
I have included the jupyter notebook I walked through at the end of the Week 10 Lectorial. This notebook will provide you with everything you need to correctly load the dataframe files from feather, and also includes an example of how to do both a grid search and randomised search for parameter tuning on one of the recommendation system algorithms included in Surprise. You should spend some time reading the API documentation and tutorial for this library provided at https://surpriselib.com. This will be critical information you should use to stage your experiments.
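For example, a parameter sweep over SVD using Surprise’s GridSearchCV looks roughly like the sketch below. This is only an illustration and not the exact code from the primer notebook; the column names and the 1–5 rating scale are assumptions you should confirm against the provided dataframes.

import pandas as pd
from surprise import SVD, Dataset, Reader
from surprise.model_selection import GridSearchCV

# Assumed column names and rating scale -- check these against the feather files.
train = pd.read_feather("netflix-5k.train.feather")
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(train[["userID", "movieID", "rating"]], reader)

# A small grid over a couple of SVD hyperparameters.
param_grid = {"n_factors": [50, 100], "reg_all": [0.02, 0.05]}
gs = GridSearchCV(SVD, param_grid, measures=["rmse", "mae"], cv=3)
gs.fit(data)

print(gs.best_score["rmse"])   # best cross-validated RMSE
print(gs.best_params["rmse"])  # parameter combination that achieved it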
2 The Report (15 marks)
The main component of your assignment will be to carefully write up your key findings. Your report should be no more than 5 pages using 11pt font. You may also have one additional Appendix page containing additional graphs or tables. Your final report must be submitted as a PDF file. You can use Microsoft Word, but I would strongly encourage you to consider writing up your report using LaTeX (https://overleaf.com).
Writing in LaTeX may seem daunting at first, but Overleaf provides plenty of tutorials and examples, and it is pretty easy once you get the hang of it. This spec file was written using LaTeX. Microsoft Word is not a good tool for writing technical documents, and in fact most Computer Science conferences require papers to be written in LaTeX. The quality of the presentation layer is easily discernible by most people in Computer Science. Write a paragraph or two with a graph or diagram in Overleaf and then in MS Word and compare the two – I bet you’ll see the difference immediately!
Regardless of which tool you use to generate your final PDF, the format of the report should be:
- Your name and student number at the top of page 1.
- Introduction (usually no more than 1/2 page).
- Methodology (usually about 1 page) – This would contain a clear description of each recommendation system algorithm you are using and a rationale as to why it is being used.
- Experiments (about 3 pages) – This is the main component of your report. Here you should document all of the parameters and algorithms used, include the tables, figures, or graphs that you create in order to compare and contrast all of the algorithms you have benchmarked, and discuss what you have discovered. There is no reason to include images of code snippets as you are submitting your Jupyter notebook already – you should instead include images of graphs you create, assuming they are important to the story you are telling.
- Conclusion (1/2 page) – A clear summary of your key findings.
- References (Separate page) – You can include as many references as you see fit and need in your report, and this is not counted against the 5 page limit.
- Appendix (Separate page) – Any additional graphs or tables that you think are important but that you could not include into the main document because of space constraints.
Summary – a report that has a 5 page body, 1+ pages of citations, and 1 optional page at the end as an Appendix.
You must compare at least four different algorithms from the surprise library in your report. Note that one algorithm (e.g. SVD) with two different parameter settings does not count as two different algorithms. That is one algorithm with two different parameter settings, i.e. one algorithm. You can certainly include results for multiple parameter settings for each of the four algorithms in your experiments, just make sure you are using 4 different algorithms in total in your shootout.
In Week 8, I provided several evaluation classes you can use to compute a wide variety of evaluation measures, and I encourage you to explore as many as you can, as doing so will provide additional evidence that your “winning” algorithm is really a winner. The “official” metric will be RMSE as this is the metric used in the original competition, but you should compare the 4 algorithms using a minimum of three different evaluation metrics. You should aim to have at least one algorithm that can achieve an RMSE score ≤ 0.800. This should be achievable with a little parameter tuning if you choose good algorithms from Surprise, and you may even decide that the algorithm that got the best RMSE score is not necessarily the “best” algorithm overall based on the experiments you have run.
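To give a rough idea of how such a shootout might be staged (again assuming the userID/movieID/rating column names; the four algorithms below are just examples, not a prescribed set), each algorithm can be fit on the training data and scored on the provided validation tuples using Surprise’s accuracy helpers:

import pandas as pd
from surprise import SVD, NMF, KNNBasic, BaselineOnly, Dataset, Reader, accuracy

train = pd.read_feather("netflix-5k.train.feather")
validation = pd.read_feather("netflix-5k.validation.feather")

reader = Reader(rating_scale=(1, 5))
trainset = Dataset.load_from_df(train[["userID", "movieID", "rating"]], reader).build_full_trainset()

# Surprise's test() accepts a plain list of (user, item, true rating) tuples.
valset = list(zip(validation["userID"], validation["movieID"], validation["rating"]))

for algo in [BaselineOnly(), KNNBasic(), SVD(), NMF()]:
    algo.fit(trainset)
    predictions = algo.test(valset)
    print(type(algo).__name__,
          accuracy.rmse(predictions, verbose=False),
          accuracy.mae(predictions, verbose=False))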
Hint: Think carefully about what “good” movie recommendations really mean to you. We all know what they are, so what do you think is the best way to prove or disprove that recommendations from Algorithm A are clearly better than the ones you get from Algorithm B? We covered a wide variety of evaluation measures in the Week 8 Lectorial, so go look at what was in that notebook and think about it.
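One ranking-flavoured example (not one of the Week 8 evaluation classes, just a common illustration) is precision at k, which can be computed directly from a list of Surprise predictions; the choice of k and the relevance threshold below are arbitrary assumptions:

from collections import defaultdict

def precision_at_k(predictions, k=10, threshold=4.0):
    # Group (estimated rating, true rating) pairs by user.
    user_est_true = defaultdict(list)
    for p in predictions:
        user_est_true[p.uid].append((p.est, p.r_ui))
    per_user = []
    for uid, ratings in user_est_true.items():
        # Rank this user's items by estimated rating, take the top k,
        # and count how many are actually "relevant" (true rating >= threshold).
        ratings.sort(key=lambda x: x[0], reverse=True)
        top_k = ratings[:k]
        per_user.append(sum(true_r >= threshold for _, true_r in top_k) / k)
    return sum(per_user) / len(per_user)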
Other key hints – (1) If you have a Figure, Table, or Diagram in the report, it must have a caption and you must reference it in your report and discuss it. By that I mean “Table 1 contains a table of RMSE results for Algorithms A-E. We can see that …” (2) If you use ideas or code from somewhere else, you must include it in your references. A typical “bibtex” citation for something taken from the web would look something like:
@misc{StackA,
  title        = {{Stackoverflow Discussion on X}},
  howpublished = {\url{http://www.example.com}},
  note         = {Accessed: 2022-10-05}
}
The preferred referencing style for Computer Science is usually APA. See https://libguides.murdoch.edu.au/APA/sample for an example. There are lots of tutorials available online on APA referencing, so if you have never seen it, just search for tutorials on APA referencing and you’ll find more than you could ever read/watch. (3) If it isn’t clear, you must include graphs, tables, and/or diagrams in your experimental section, which are to be used as evidence to back any claims about algorithm performance that you make.