Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.