More than 8,000 customers of the cloud services provider Snowflake will be able to build their own generative AI assistants under a new partnership announced by Nvidia and Snowflake.
The news was shared during a fireside chat between Snowflake CEO Frank Slootman and Nvidia CEO Jensen Huang at the Snowflake Summit.
The move means Snowflake users will be able to create custom AI models utilising their internal data. That is a big deal for businesses that want to use generative AI and large language models to answer questions specific to their own organisations.
In practice, Snowflake's customers will be able to build their own generative AI chatbots to pull information from their vast stores of data, among other uses.
Nvidia's underlying NeMo toolkit provides a large language model that Snowflake users can customise with their own data. Nvidia will also supply the necessary hardware, including the GPUs customers need to train their generative AI models.
The graphics chip maker announced a similar partnership with ServiceNow (NOW) in May. In that case, however, ServiceNow builds the generative AI models rather than letting users train them themselves. The aim there is to give customers a quick way to benefit from generative AI capabilities without having to train the platforms on their own data.
This is not the first offering to let businesses create their own generative AI applications.
Microsoft introduced Azure AI Studio in May, which lets users design custom copilots, AI-powered applications, for their own use. Like Snowflake's solution, copilots come in a variety of configurations, including chatbots.
Generative AI exploded onto the scene when OpenAI released ChatGPT in November 2022.