OpenAI is expanding Custom Model, a program that helps enterprise customers use its technology to develop generative AI models tailored to specific use cases, domains and applications.
Custom Model launched last year at OpenAI’s inaugural developer conference, DevDay, offering companies an opportunity to work with a group of dedicated OpenAI researchers to train and optimize models for specific domains. “Dozens” of customers have enrolled in Custom Model since. But OpenAI says that, in the course of working with this initial set of users, it’s perceived a need to “evolve” the program to “further maximize performance.”
Hence assisted fine-tuning.
Assisted fine-tuning, a new component of the Custom Model program, leverages techniques beyond fine-tuning — such as “additional hyperparameters and various parameter efficient fine tuning methods at a larger scale,” in OpenAI’s words — to help organizations set up data training pipelines, evaluation systems and other infrastructure aimed at bolstering model performance on particular tasks.
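OpenAI doesn’t disclose which parameter-efficient methods assisted fine-tuning actually uses. As a rough, purely illustrative sketch of what one such technique (LoRA) looks like in practice, here’s an example using the open-source Hugging Face transformers and peft libraries on an open model — none of this is OpenAI’s Custom Model tooling:

```python
# Illustrative only: OpenAI does not detail its assisted fine-tuning internals.
# This sketch shows one common parameter-efficient technique (LoRA) using the
# open-source transformers + peft libraries on an open stand-in model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # stand-in open model, not an OpenAI API model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of all model weights,
# which cuts the number of trainable parameters dramatically.
lora_config = LoraConfig(
    r=8,              # adapter rank, one of the extra hyperparameters in play
    lora_alpha=16,    # adapter scaling factor
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```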
OpenAI gives the example of SK Telecom, the Korean telecommunications giant, which worked with OpenAI to fine-tune GPT-4 to improve its performance in “telecom-related conversations” in Korean. Another assisted fine-tuning customer, Harvey, which is building AI-powered legal tools with support from OpenAI’s Startup Fund, teamed up with OpenAI to create a custom model for case law that incorporated hundreds of millions of words of legal text and feedback from expert attorneys.
“We believe that in the future, the vast majority of organizations will develop customized models that are personalized to their industry, business, or use case,” OpenAI writes in a blog post. “With a variety of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impact from their AI implementations.”
OpenAI is flying high, reportedly nearing an astounding $2 billion in annualized revenue. But there’s surely internal pressure to maintain pace, particularly as the company plots a $100 billion data center co-developed with Microsoft (if reports are to be believed). The cost of training and serving flagship generative AI models isn’t coming down anytime soon, after all — and consulting work like custom model training might just be the thing to keep revenue reliably flowing while OpenAI plots its next moves.
Alongside the expanded Custom Model program, OpenAI today announced new model fine-tuning features for developers working with GPT-3.5, including a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases) and enhancements to tooling. Mum’s the word on fine-tuning for GPT-4, which launched in early access during DevDay.
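For developers, the GPT-3.5 fine-tuning API is already available today. The sketch below shows roughly what creating a job looks like with the OpenAI Python SDK; the exact shape of the new Weights & Biases `integrations` payload is an assumption based on the announcement, so check the current fine-tuning API reference before relying on it.

```python
# Hedged sketch of creating a GPT-3.5 fine-tuning job with the OpenAI Python SDK.
# The `integrations` entry for Weights & Biases reflects the announced third-party
# integration, but its exact shape is assumed here; consult OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload chat-formatted JSONL training data for fine-tuning.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    model="gpt-3.5-turbo",
    training_file=training_file.id,
    hyperparameters={"n_epochs": 3},  # optional; defaults are chosen automatically
    integrations=[  # assumed shape of the W&B integration option
        {"type": "wandb", "wandb": {"project": "my-finetune-runs"}}
    ],
)
print(job.id, job.status)
```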