OpenAI

OpenAI has launched fine-tuning for GPT-3.5 Turbo, the company announced on Tuesday, adding that fine-tuning for GPT-4 will also launch this fall. The move allows developers to customize models, optimizing performance and achieving superior results tailored to their specific use cases.

“Early tests have shown a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks,” OpenAI said in the announcement.

There’s a common misunderstanding that fine-tuning means training the model on your data and then asking it questions about that data. That’s not what it is for. For that, a product like Coral by Cohere is a better fit.

Fine-tuning is essentially teaching the model how to do a task by showing it lots of examples; it’s like giving it a ton of practice. While you can approximate this with prompts, fine-tuning is meant for scale. As this quick tutorial explains, it is useful when you would otherwise have to pack many examples into the prompt, and when the model is called often and needs to respond quickly.
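To make “lots of examples” concrete, here is a minimal sketch of a chat-format training file, written in Python. The example conversations and the training_data.jsonl filename are illustrative, not taken from OpenAI’s announcement.

```python
import json

# Illustrative training examples in the chat format used for gpt-3.5-turbo
# fine-tuning: each line of the JSONL file is one complete example conversation.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a support agent for Acme Co. Reply in a friendly, concise tone."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Happy to help! Go to Settings > Security and choose 'Reset password'."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a support agent for Acme Co. Reply in a friendly, concise tone."},
        {"role": "user", "content": "Can I change my billing date?"},
        {"role": "assistant", "content": "Absolutely! Open Billing > Plan and pick the date that works for you."},
    ]},
]

# Write one JSON object per line, as expected for a fine-tuning training file.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```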

“You should consider fine-tuning when you have so many examples that your prompt has become a burden. OpenAI recommends fine-tuning on at least 50 examples to see clear improvements. Fine-tuning essentially front-loads the cost and time it takes to train a model, making future API calls faster.”
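As a sketch of how that front-loaded training step looks in practice, the snippet below uploads a training file and starts a fine-tuning job using the OpenAI Python SDK (v1 or later); method names differ in older versions of the library, so treat it as illustrative rather than definitive.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Upload the JSONL training file prepared earlier.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; training runs asynchronously on OpenAI's side.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```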

We’re already starting to see some examples in action on X and other social media platforms.

https://twitter.com/perplexity_ai/status/1695102995325710484

OpenAI’s announcement outlines some common use cases: improved steerability, which means making the model follow instructions more reliably; more consistent output formatting; and a custom tone that, for instance, better fits a business’s voice. These examples reflect how developers and businesses with private beta access to fine-tuning have used it, but the possible applications are certainly broader.
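Once a job finishes, the fine-tuned model is called the same way as the base model, just under its own name. The sketch below continues the earlier example; the ft:... model identifier, organization, and messages are placeholders, not values from OpenAI’s announcement.

```python
from openai import OpenAI

client = OpenAI()

# The model name is a placeholder: OpenAI returns the real identifier
# (of the form "ft:gpt-3.5-turbo-0613:org::id") when the job completes.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",
    messages=[
        {"role": "system", "content": "You are a support agent for Acme Co. Reply in a friendly, concise tone."},
        {"role": "user", "content": "Do you offer annual plans?"},
    ],
)
print(response.choices[0].message.content)
```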

The announcement also underscored that data sent into and out of the fine-tuning API is not used by OpenAI, or any other organization, to train models.

OpenAI charges developers two types of costs for fine-tuning: an initial training cost and a usage cost. Training costs $0.008 per 1,000 tokens, usage input costs $0.012 per 1,000 tokens, and usage output costs $0.016 per 1,000 tokens. In an example shared in the announcement, the company said a gpt-3.5-turbo fine-tuning job with a training file of 100,000 tokens trained for 3 epochs would have an expected cost of $2.40.
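The arithmetic behind that example is easy to reproduce; the helper function below is purely illustrative.

```python
TRAINING_RATE_PER_1K_TOKENS = 0.008  # dollars, per the announcement

def expected_training_cost(file_tokens: int, epochs: int) -> float:
    """Each epoch makes one full pass over the training file."""
    return file_tokens * epochs / 1000 * TRAINING_RATE_PER_1K_TOKENS

print(expected_training_cost(100_000, 3))  # 2.4, matching OpenAI's $2.40 example
```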

Following the launch of fine-tuning for GPT-3.5, OpenAI also announced Scale as its preferred partner to help companies use fine-tuning to customize models.