The main goal of reading a paper is not just to understand it. Try to grasp the key concepts, but also aim to draw new ideas and research directions from the paper.
Paper information
title: Collaborative Similarity Embedding for Recommender Systems
authors: Chen, C. M., Wang, C. J., Tsai, M. F., & Yang, Y. H.
venue: The World Wide Web Conference (WWW 2019), pp. 2637–2643.
abstract:
We present collaborative similarity embedding (CSE), a unified framework that exploits comprehensive collaborative relations available in a user-item bipartite graph for representation learning and recommendation. In the proposed framework, we differentiate two types of proximity relations: direct proximity and k-th order neighborhood proximity. While learning from the former exploits direct user-item associations observable from the graph, learning from the latter makes use of implicit associations such as user-user similarities and item-item similarities, which can provide valuable information especially when the graph is sparse. Moreover, for improving scalability and flexibility, we propose a sampling technique that is specifically designed to capture the two types of proximity relations. Extensive experiments on eight benchmark datasets show that CSE yields significantly better performance than state-of-the-art recommendation methods.
Summary: problems to address, key ideas, quick results
They propose a unified representation-learning framework for recommender systems that exploits the comprehensive collaborative relations in a user-item bipartite graph, covering both direct user-item proximity and k-th order user-user/item-item neighborhood proximity.
Its efficiency and scalability, achieved through the proposed sampling technique, are also impressive.
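To make the two proximity relations concrete, below is a minimal Python sketch of how training pairs for them could be sampled from the bipartite graph; the toy interactions, function names, and walk-based scheme are my own illustration under simplifying assumptions, not the authors' exact sampler.

```python
import random
from collections import defaultdict

# Toy implicit-feedback data: observed user-item edges of the bipartite graph.
interactions = [("u1", "iA"), ("u1", "iB"), ("u2", "iB"), ("u2", "iC"), ("u3", "iC")]

user_items = defaultdict(list)  # items adjacent to each user
item_users = defaultdict(list)  # users adjacent to each item
for u, i in interactions:
    user_items[u].append(i)
    item_users[i].append(u)

def sample_direct_pair():
    """Direct proximity: a user-item edge observed in the graph."""
    return random.choice(interactions)

def sample_kth_order_user_pair(k=2):
    """Neighborhood proximity between users: walk k round trips
    (user -> item -> user) and pair the start user with the user reached."""
    start = current = random.choice(list(user_items))
    for _ in range(k):
        item = random.choice(user_items[current])
        current = random.choice(item_users[item])
    return start, current

if __name__ == "__main__":
    random.seed(0)
    print("direct user-item pair:", sample_direct_pair())
    print("user-user pair from a k=2 walk:", sample_kth_order_user_pair(k=2))
```

An analogous walk starting from an item (item -> user -> item) would yield item-item pairs; in the paper, pairs of both kinds feed the similarity-embedding objective.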
What don't you like?
- They convert rating datasets into binary preference datasets and do not show the model's performance on graphs with arbitrary weighted edge distributions.
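For context on this point, here is a small sketch of the kind of binarization being criticized, assuming a hypothetical rating threshold of 4; whatever preference strength the ratings encode is discarded at this step, which is exactly the weighted-edge information lost.

```python
import pandas as pd

# Toy explicit ratings. The threshold of 4 is a hypothetical choice for
# illustration, not necessarily the one used in the paper's preprocessing.
ratings = pd.DataFrame({
    "user": ["u1", "u1", "u2", "u3"],
    "item": ["iA", "iB", "iB", "iC"],
    "rating": [5, 2, 4, 3],
})

# Keep only "liked" interactions and drop the rating values, i.e. reduce the
# weighted user-item graph to an unweighted (binary) one.
binary = ratings[ratings["rating"] >= 4][["user", "item"]]
print(binary)
```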
How to improve?
Any new ideas?
Reproducing results (if any)