Are you fascinated by the way chatbots interact with humans? Have you ever wondered how these interactions could be optimized to better suit human preferences? The LMSYS – Chatbot Arena Human Preference Predictions competition is your chance to dive into this intriguing challenge. Here, you’re tasked with predicting which chatbot responses users will prefer in direct head-to-head comparisons, using real-world data from Chatbot Arena.
Join the Competition
To participate, you’ll need to verify your identity. This ensures a fair and secure competition environment.
Competition Overview
This competition revolves around the idea of enhancing chatbot interactions by predicting user preferences. You’ll work with a dataset of conversations where users have interacted with different large language models (LLMs) and chosen their preferred responses. Your mission is to build a machine learning model that can accurately predict these preferences, thus improving how chatbots align with human expectations.
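To make the task concrete, here is a deliberately naive baseline sketch (entirely hypothetical, not from the competition materials): it exploits the well-known tendency of users to prefer longer answers and returns probabilities for the three outcomes the competition asks you to predict.

```python
def length_baseline(resp_a, resp_b, margin=20):
    """Toy baseline: guess the winner from response length alone
    (a crude proxy for verbosity bias).

    Returns [P(model_a wins), P(model_b wins), P(tie)].
    The margin and probability values are arbitrary illustrations.
    """
    diff = len(resp_a) - len(resp_b)
    if abs(diff) < margin:
        # Similar lengths: lean toward a tie.
        return [0.30, 0.30, 0.40]
    # Favor the longer response, with some mass left on the other outcomes.
    return [0.55, 0.25, 0.20] if diff > 0 else [0.25, 0.55, 0.20]
```

A real entry would of course model the text itself, but even a baseline like this gives you a floor to beat.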
Key Dates
- Start Date: May 2, 2024
- Entry Deadline: July 29, 2024
- Team Merger Deadline: July 29, 2024
- Final Submission Deadline: August 5, 2024
Challenge Details
LLMs are increasingly becoming a part of our daily lives, but their effectiveness depends on how well they meet user preferences. This competition provides a platform to address this challenge using real-world data. You’ll predict user preferences in head-to-head chatbot responses, contributing to the development of more user-friendly AI systems.
This task relates to the concept of “reward models” or “preference models” in reinforcement learning from human feedback (RLHF). Traditional methods often suffer from biases such as position bias, verbosity bias, or self-enhancement bias. You’ll need to explore innovative machine-learning techniques to overcome these challenges and create a robust prediction model.
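One common mitigation for position bias is to score each pair twice, once in each order, and average the aligned predictions. The sketch below assumes a hypothetical `score_fn` that returns `[P(a wins), P(b wins), P(tie)]`; the swap-and-average wrapper itself is a standard technique, not something prescribed by the competition.

```python
import numpy as np

def swap_averaged_probs(score_fn, prompt, resp_a, resp_b):
    """Reduce position bias by averaging predictions over both orderings.

    score_fn(prompt, first, second) is assumed to return
    [P(first wins), P(second wins), P(tie)].
    """
    p_ab = np.asarray(score_fn(prompt, resp_a, resp_b))
    p_ba = np.asarray(score_fn(prompt, resp_b, resp_a))  # roles swapped
    # In the swapped call, "first wins" means model B won, so realign columns.
    p_ba_aligned = np.array([p_ba[1], p_ba[0], p_ba[2]])
    return (p_ab + p_ba_aligned) / 2
```

If the underlying model systematically favors whichever response appears first, the averaged output cancels that preference out.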
Evaluation
Your submissions will be evaluated based on the log loss between predicted probabilities and actual outcomes. The submission file format should be as follows:
id,winner_model_a,winner_model_b,winner_tie
136060,0.33,0.33,0.33
211333,0.33,0.33,0.33
1233961,0.33,0.33,0.33
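For intuition about the metric, here is a minimal sketch of multiclass log loss computed directly with NumPy (the exact scoring implementation on Kaggle's side may differ in details such as clipping):

```python
import numpy as np

def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Mean negative log-likelihood of the true class.

    y_true: integer class labels (0 = model A wins, 1 = model B wins, 2 = tie)
    y_pred: array of shape (n_rows, 3) with predicted probabilities
    """
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y_pred /= y_pred.sum(axis=1, keepdims=True)  # renormalize each row
    rows = np.arange(len(y_true))
    return float(-np.mean(np.log(y_pred[rows, y_true])))
```

A uniform prediction of 1/3 for every outcome scores ln(3) ≈ 1.0986, which is the natural baseline to improve on.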
Prizes
- 1st Place: $25,000
- 2nd Place: $20,000
- 3rd Place: $20,000
- 4th Place: $20,000
- 5th Place: $15,000
Submission Requirements
This is a code competition, and submissions must be made through Notebooks with the following conditions:
- CPU Notebook: ≤ 9 hours run-time
- GPU Notebook: ≤ 9 hours run-time
- Internet access: Disabled
- External data: Allowed if freely and publicly available, including pre-trained models
- Submission file: Must be named submission.csv
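Putting the requirements together, a submission file with the required header and placeholder uniform predictions can be written with the standard library alone (the `id` values below are the ones from the sample above; in practice they come from the test set):

```python
import csv

# Placeholder: uniform probabilities for each test-set id.
rows = [
    (136060, 1/3, 1/3, 1/3),
    (211333, 1/3, 1/3, 1/3),
    (1233961, 1/3, 1/3, 1/3),
]

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "winner_model_a", "winner_model_b", "winner_tie"])
    for rid, p_a, p_b, p_tie in rows:
        writer.writerow([rid, f"{p_a:.6f}", f"{p_b:.6f}", f"{p_tie:.6f}"])
```

Since internet access is disabled at submission time, the notebook must produce this file from locally available models and data only.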
Please review the Code Competition FAQ for more details and troubleshooting tips.
Citation
For more information and to cite this competition in your work, refer to the official citation:
Wei-lin Chiang, Lianmin Zheng, Lisa Dunlap, Joseph E. Gonzalez, Ion Stoica, Paul Mooney, Sohier Dane, Addison Howard, Nate Keating. (2024). LMSYS – Chatbot Arena Human Preference Predictions. Kaggle. https://kaggle.com/competitions/lmsys-chatbot-arena
Conclusion
By participating in this competition, you’ll be at the forefront of improving human-LLM interactions. Your contributions could lead to more intuitive and satisfying experiences with AI chatbots, making a significant impact in the field of artificial intelligence. So, gear up, get your models ready, and join the challenge to predict human preferences in the wild!