Building Recommender Systems with Machine Learning and AI


Below are the top discussions from Reddit that mention this online Udemy course.

How to create machine learning recommendation systems with deep learning, collaborative filtering, and Python

Reddemy may receive an affiliate commission if you enroll in a paid course after using these buttons to visit Udemy. Thank you for supporting Reddemy.

Taught by
Sundog Education by Frank Kane


Reddit Posts and Comments

0 posts • 2 mentions • top 2 shown below

r/learnmachinelearning • comment
1 points • _Ventulus_

How are you taking the course? It looks like there is an offering on Udemy, and there's also a textbook.

r/MachineLearning • comment
1 points • GabberZuzie

In my opinion, the best way to learn and build a baseline is to use the Surprise package for Python (https://surprise.readthedocs.io/en/stable/).

With the package, you can easily build your own first baseline; note that it uses MovieLens as a built-in dataset. I have used Surprise on my own data, but many people use the Books dataset (can't find a link anymore, unfortunately). I did find a nice repository of datasets for you: http://cseweb.ucsd.edu/~jmcauley/datasets.html


Good evaluation metrics -> evaluation metrics can be either online or offline. Offline metrics are the typical ones such as accuracy, diversity, novelty, etc., and they are not as important as online evaluation (for example, A/B testing). YouTube studies have shown that even the best-performing offline metrics can still result in poor recommendations (I do not have a direct source, but I got the info from Frank Kane's course on RS https://www.udemy.com/course/building-recommender-systems-with-machine-learning-and-ai/ ). The idea here is that you can have an accuracy of 95% and your recommendations will still be poor, because you need to tune your parameters based on the opinions of the actual users, which you often do not have.
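As a rough illustration of what online evaluation looks like in practice, here is a sketch (all counts are invented) of a two-proportion z-test comparing click-through rates between two recommender variants in an A/B test:

```python
from math import sqrt

# Hypothetical A/B test: users who clicked a recommendation, out of
# all users shown each recommender variant (numbers are invented).
clicks_a, users_a = 1200, 20000   # current recommender
clicks_b, users_b = 1320, 20000   # candidate recommender

p_a = clicks_a / users_a
p_b = clicks_b / users_b

# Two-proportion z-test: is the lift in click-through rate significant?
p_pool = (clicks_a + clicks_b) / (users_a + users_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
z = (p_b - p_a) / se

print(f"CTR A = {p_a:.3f}, CTR B = {p_b:.3f}, z = {z:.2f}")
# z above ~1.96 suggests the lift is significant at the 5% level.
```

The point is that this measures what real users actually do with the recommendations, which no offline metric can capture.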


From your post I understand that you are just learning, so online evaluation is out of the question. I've built an RS for my Master's thesis, and from September I will be doing a PhD on RS, so let me give you some general advice.

Know why you are building a RS, what type and what is your goal.

I am assuming you've made an educated decision on WHY your RS should be top-N (in the end, you can build a knowledge-based RS instead of focusing on a collaborative filtering system, but that's up to you).

But let's follow up on the top-N build (user-based? item-based?).

If you opt for this option, be aware that there are evaluation metrics specifically built for top-N systems (hit rate, hit rate by ranking value, cumulative hit rate, average reciprocal hit rate). You will have to go into the definitions of each of these and understand what they entail. For your RS, you will have to decide which trade-offs are best for your recommendations and use only the metrics that maximize the trade-off for the desired goal.
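To make those definitions concrete, here is a sketch in plain Python (users, items, and ratings all invented) of hit rate, average reciprocal hit rate, and a threshold-based cumulative hit rate over held-out items:

```python
# Each user has one held-out (left-out) item and a top-N recommendation list.
left_out = {"u1": "A", "u2": "B", "u3": "C", "u4": "D"}
top_n = {
    "u1": ["A", "X", "Y"],   # hit at rank 1
    "u2": ["X", "B", "Y"],   # hit at rank 2
    "u3": ["X", "Y", "Z"],   # miss
    "u4": ["Y", "Z", "D"],   # hit at rank 3
}
# Invented ratings of the left-out items, for cumulative hit rate.
left_out_rating = {"u1": 5.0, "u2": 3.0, "u3": 4.0, "u4": 4.5}

hits = arhr = cum_hits = cum_total = 0
for user, item in left_out.items():
    if item in top_n[user]:
        hits += 1
        arhr += 1 / (top_n[user].index(item) + 1)  # reciprocal rank of the hit
    # Cumulative hit rate: only count users whose left-out rating clears a threshold.
    if left_out_rating[user] >= 4.0:
        cum_total += 1
        cum_hits += item in top_n[user]

n = len(left_out)
print("hit rate:", hits / n)                         # 3/4 = 0.75
print("ARHR:", arhr / n)                             # (1 + 1/2 + 1/3) / 4
print("cumulative hit rate:", cum_hits / cum_total)  # among ratings >= 4.0
```

Note how ARHR rewards hits near the top of the list, while plain hit rate treats a hit at rank 1 and rank 10 the same — that difference is exactly the kind of trade-off you have to weigh for your goal.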

But

Maybe you do not want to use any of the top-N-specific eval metrics because you like the "standard" ones more. Then you have to do exactly the same thing and decide. For example: novelty is often traded off against coverage; you have to decide which parameter you want to maximize.
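Both of those metrics are cheap to compute offline. Here is a sketch (catalogue, popularity counts, and recommendation lists all invented) using one common definition of each: catalogue coverage, and mean popularity of recommended items as an inverse proxy for novelty:

```python
# Invented catalogue and per-item interaction counts.
catalogue = ["A", "B", "C", "D", "E", "F"]
popularity = {"A": 100, "B": 80, "C": 40, "D": 10, "E": 5, "F": 1}

recs = {                      # top-N lists for three users
    "u1": ["A", "B", "C"],
    "u2": ["A", "B", "D"],
    "u3": ["B", "C", "A"],
}

# Coverage: what fraction of the catalogue is ever recommended?
all_recommended = {item for items in recs.values() for item in items}
coverage = len(all_recommended) / len(catalogue)

# Novelty proxy: mean popularity of recommended items (lower = more novel).
rec_items = [item for items in recs.values() for item in items]
mean_popularity = sum(popularity[i] for i in rec_items) / len(rec_items)

print(f"coverage = {coverage:.2f}, mean popularity of recs = {mean_popularity:.1f}")
```

Here only 4 of 6 catalogue items are ever recommended, and the lists lean heavily on popular items — pushing one of these numbers usually moves the other, which is the trade-off decision in practice.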


The way I've learnt the most was by using towardsdatascience and their guides. Build your first one with a guide as a baseline, and then build your own from your experience.

Here are some links for you:

General guide:

https://towardsdatascience.com/building-and-testing-recommender-systems-with-surprise-step-by-step-d4ba702ef80b

Eval:

https://towardsdatascience.com/evaluating-a-real-life-recommender-system-error-based-and-ranking-based-84708e3285b

https://towardsdatascience.com/recommendation-systems-models-and-evaluation-84944a84fb8e


Happy learning! If you have more questions, feel free to ask.