Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism