Gitstar Ranking
Fetched on 2025/01/09 12:32
astorfi / LLM-Alignment-Project
A comprehensive template for aligning large language models (LLMs) using Reinforcement Learning from Human Feedback (RLHF), transfer learning, and more. Build your own customizable LLM alignment solution with ease.
Stars: 29
Rank: 642880