TensorOpera, a generative AI platform provider, has teamed up with Aethir, a distributed cloud infrastructure provider, to accelerate its newest foundation model, TensorOpera Fox-1. This makes Fox-1 the first mass-scale use case for LLM training on a decentralized physical infrastructure network.
The alliance equips TensorOpera with the cutting-edge GPU resources needed to train Fox-1. Aethir's partnerships with NVIDIA Cloud Partners, infrastructure funds, and other enterprise-grade hardware suppliers have established a global, large-scale GPU cloud.
The network delivers affordable, scalable GPU resources essential for high throughput, large memory capacity, and efficient parallel processing. With the support of Aethir's decentralized cloud infrastructure, TensorOpera gains the tools it needs to accelerate AI development, which demands substantial GPU power and high network bandwidth.
“I am thrilled about our partnership with Aethir,” said Salman Avestimehr, Co-Founder and CEO of TensorOpera. “In the dynamic landscape of generative AI, the ability to efficiently scale up and down during various stages of model development and in-production deployment is essential. Aethir’s decentralized infrastructure offers this flexibility, combining cost-effectiveness with high-quality performance. Having experienced these benefits firsthand during the training of our Fox-1 model, we decided to deepen our collaboration by integrating Aethir's GPU resources into TensorOpera's AI platform to empower developers with the resources necessary for pioneering the next generation of
AI technologies.”