High-efficiency LLM inference engine in C++/CUDA. Run Llama 70B on an RTX 3090.