I mentioned this in another thread, but figured I’d make a separate topic for in-depth discussion.
I propose using something akin to Petals: https://github.com/bigscience-workshop/petals to set up a decentralized LLM compute pool. It would reduce our reliance on centralized compute providers for AI and give us more freedom in which models we can serve to users.
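To make the idea concrete, here's roughly what the client side looks like with Petals today. This is just a sketch adapted from their README; the model name is a placeholder and the exact class names can differ between Petals versions:

```python
# Sketch of a Petals client: only the embeddings/LM head load locally,
# while the transformer blocks run on peers in the swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom-560m"  # placeholder; any Petals-supported model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Generation calls are routed through whatever peers currently host the blocks.
inputs = tokenizer("A decentralized compute pool could", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On the other side, a node in the proposed role would do what `python -m petals.cli.run_server <model>` does now: host a slice of the model's transformer blocks and serve them to the swarm.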
I think it would be perfect to add as a new node role. Thoughts?