Investigating on RLHF methodology
Abstract
In this article, we investigate aligning Large Language Models with human preferences. We discuss training a Preference Model, which simulates human preferences, and the methods and details we found essential for achieving the best results. We also discuss using Reinforcement Learning to fine-tune Large Language Models, the challenges we faced, and how we overcame them. Additionally, we present our experience with the Direct Preference Optimization method, which aligns a Large Language Model with human preferences without a separate Preference Model. As our contribution, we introduce an approach for collecting a preference dataset through perplexity filtering, which makes creating such a dataset for a specific Language Model much easier and more cost-effective.
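The abstract only names perplexity filtering, so the sketch below is a rough illustration rather than the paper's procedure: it scores candidate preference pairs with a Hugging Face causal LM and keeps pairs whose chosen response the target model already assigns low perplexity. The model name and the perplexity threshold are hypothetical placeholders, and the paper's exact filtering criterion may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical base model; the abstract does not specify which LM is used.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


@torch.no_grad()
def perplexity(text: str) -> float:
    """Perplexity of `text` under the target LM (exp of mean token NLL)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return torch.exp(loss).item()


def filter_preference_pairs(pairs, max_ppl: float = 50.0):
    """Keep (prompt, chosen, rejected) pairs whose chosen response the
    target LM already finds likely (low perplexity), so the preference
    data stays close to the model's own distribution.
    The threshold 50.0 is an assumed placeholder, not a value from the paper."""
    kept = []
    for prompt, chosen, rejected in pairs:
        if perplexity(prompt + chosen) <= max_ppl:
            kept.append((prompt, chosen, rejected))
    return kept
```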
- Publication: arXiv e-prints
- Pub Date: October 2024
- DOI: 10.48550/arXiv.2410.01789
- arXiv: arXiv:2410.01789
- Bibcode: 2024arXiv241001789K
- Keywords: Computer Science - Machine Learning; Computer Science - Artificial Intelligence; 68T50; I.2.7
- E-Print: 23 pages, 6 figures, 6 tables