Research paper on LLMs as zero-shot rankers for recommender systems
This repository provides the implementation for "Large Language Models are Zero-Shot Rankers for Recommender Systems," targeting researchers and practitioners in recommender systems. It demonstrates how Large Language Models (LLMs) can be utilized as zero-shot ranking models, offering a novel approach to personalized recommendations without task-specific fine-tuning.
How It Works
The project leverages LLMs within an instruction-following paradigm. For each user, it constructs natural language prompts that incorporate sequential interaction histories and candidate items. These prompts are then fed to LLMs, which are expected to output ranked results based on the instructions, enabling zero-shot ranking capabilities. This approach aims to harness the LLM's understanding of natural language for personalized ranking tasks.
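The prompting scheme can be sketched as follows. This is a minimal illustration, not the paper's exact template: the instruction wording, item titles, and output format here are assumptions for demonstration.

```python
def build_ranking_prompt(history, candidates):
    """Format a user's chronological interaction history and candidate
    items as a natural-language ranking instruction for an LLM.
    (Illustrative template; the paper's actual prompts differ.)"""
    history_str = "\n".join(f"{i}. {t}" for i, t in enumerate(history, 1))
    cand_str = "\n".join(f"{i}. {t}" for i, t in enumerate(candidates, 1))
    return (
        "I've interacted with the following items in chronological order:\n"
        f"{history_str}\n\n"
        f"Now there are {len(candidates)} candidate items:\n{cand_str}\n\n"
        "Please rank these candidates by how likely I am to interact with "
        "them next. Output only the ranked item titles, one per line."
    )

prompt = build_ranking_prompt(
    ["Titanic", "The Matrix", "Inception"],
    ["Interstellar", "Avatar", "Tenet"],
)
print(prompt)
```

The resulting string would then be sent to an LLM (e.g. via the OpenAI API), and the model's textual output parsed back into a ranked item list.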
Quick Start & Requirements
- Install dependencies: `pip install -r requirements.txt`
- Set up OpenAI API access (`llmrank/openai_api.yaml`)
- Prepare the dataset files (unzip `ml-1m.inter.zip` and `Games.inter.zip` in their respective directories)
- Run the zero-shot ranker: `cd llmrank/`, then `python evaluate.py -m Rank`
Highlighted Details
Maintenance & Community
The project is associated with authors Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. It acknowledges the implementation of asynchronous OpenAI API dispatching by @neubig. The work builds upon the RecBole library.
Licensing & Compatibility
The repository does not explicitly state a license in the provided README. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
LLMs may struggle to perceive the order of a user's interaction history without specific prompting. The project also identifies inherent biases (position, popularity) in LLM ranking, though it explores mitigation strategies. Finally, the primary dependency on the OpenAI API may limit offline use or the use of alternative LLMs.
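One common mitigation for position bias, of the kind the project explores, is bootstrapping: rank the same candidate set several times under different shuffles and aggregate the positions. The sketch below assumes a `rank_fn` stand-in for the actual LLM ranking call; it is illustrative, not the repository's implementation.

```python
import random
from collections import defaultdict

def bootstrap_rank(candidates, rank_fn, rounds=3, seed=0):
    """Shuffle the candidates each round, rank the shuffled list with
    rank_fn (a stand-in for an LLM ranking call), and aggregate by
    average position so that no item benefits from always appearing
    early in the prompt."""
    rng = random.Random(seed)
    positions = defaultdict(list)
    for _ in range(rounds):
        shuffled = candidates[:]
        rng.shuffle(shuffled)           # randomize prompt order
        ranked = rank_fn(shuffled)      # LLM call would go here
        for pos, item in enumerate(ranked):
            positions[item].append(pos)
    # Lower mean position = ranked higher overall.
    return sorted(candidates,
                  key=lambda it: sum(positions[it]) / len(positions[it]))

# Toy rank_fn that prefers shorter titles, for demonstration only.
result = bootstrap_rank(["Avatar", "Up", "Tenet"],
                        lambda c: sorted(c, key=len))
print(result)  # → ['Up', 'Tenet', 'Avatar']
```

Because the toy `rank_fn` is deterministic, every round agrees; with a real LLM, averaging over shuffles smooths out the model's tendency to favor items listed first.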