Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs.