Fine-tuning a LLaMA 2 model on Finance Alpaca using 4/8-bit quantization, easily feasible on Colab.
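The description names a standard workflow: load the base model quantized to 4 or 8 bits and train small LoRA adapters on top (QLoRA-style), which is what makes the run fit on a Colab GPU. Below is a minimal sketch of that setup assuming the usual `transformers` + `peft` + `bitsandbytes` stack; the model id, the `gbharti/finance-alpaca` dataset id, and the LoRA hyperparameters are illustrative assumptions, not read from this repository's code.

```python
# Minimal QLoRA-style sketch: 4-bit quantized base model + LoRA adapters.
# All ids and hyperparameters below are assumptions for illustration.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # gated model; requires HF access approval

# 4-bit NF4 quantization with bf16 compute; the 8-bit variant of the
# same config would use load_in_8bit=True instead.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; the quantized base stays frozen,
# so only a few million parameters are updated during fine-tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Finance Alpaca instruction data (hub id assumed here); tokenized
# examples would then be passed to a Trainer/SFTTrainer loop.
dataset = load_dataset("gbharti/finance-alpaca", split="train")
```

Because only the adapter weights are trained while the base model sits in 4-bit memory, the peak VRAM for a 7B model stays within what a free Colab T4 provides.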