Chat with LLMs locally, using llamafile as the underlying model executor.
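As a minimal sketch of how such a local chat might look (assuming a llamafile model has been started on this machine and is serving its OpenAI-compatible API on the default port 8080; the model name and prompt below are illustrative), one could send a chat completion request like this:

```python
import json
import urllib.request

# Assumes a llamafile is already running locally, e.g. something like:
#   ./your-model.llamafile --server --nobrowser
# which exposes an OpenAI-compatible API at http://localhost:8080 by default.
request = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local-model",  # placeholder; llamafile serves its bundled model
        "messages": [
            {"role": "user", "content": "Hello! Summarize what llamafile does."}
        ],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Send the request and print the assistant's reply.
with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())
    print(reply["choices"][0]["message"]["content"])
```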