OpenAI adds fine-tuning for GPT-3.5 Turbo
San Francisco, Aug 23 (IANS) OpenAI has announced that fine-tuning for GPT-3.5 Turbo is now available, with fine-tuning for GPT-4 coming this fall.
"Developers can now run supervised fine-tuning to make this model perform better for their use cases," the company said in a blogpost on Tuesday.
According to early tests, a fine-tuned version of GPT-3.5 Turbo can match, or even outperform, base GPT-4-level capabilities on certain narrow tasks.
OpenAI further noted that similar to all its application programming interfaces (APIs), data sent in and out of the fine-tuning API is owned by the customer and is not used by the company, or any other organisation, to train other models.
Fine-tuning allows businesses to make the model follow instructions better, such as making outputs "terse" or always responding in a given language.
"Fine-tuning improves the model's ability to consistently format responses -- a crucial aspect for applications demanding a specific response format, such as code completion or composing API calls," the company said.
Also, businesses with a recognisable brand voice can use fine-tuning to make the model's responses more consistent with their tone.
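The training data behind such a job is a set of example conversations in the chat format OpenAI documents for fine-tuning. The sketch below writes the kind of file the earlier example uploads; the brand, the questions and the terse, on-tone answers are invented purely for illustration.

import json

# Each training example is one chat: a system message setting the brand voice,
# a user message, and the assistant reply the model should learn to imitate.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme's support assistant. Reply tersely and in British English."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Account > Reset password, then follow the emailed link."},
        ]
    },
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")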
In addition to increased performance, fine-tuning also enables businesses to shorten their prompts while ensuring similar performance.
"Fine-tuning with GPT-3.5-Turbo can also handle 4k tokens -- double our previous fine-tuned models," OpenAI said.
Also, early testers have reduced prompt size by up to 90 per cent by fine-tuning instructions into the model itself, speeding up each API call and cutting costs.
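In practice, the shortened prompt is simply sent to the fine-tuned model under its own name. A hypothetical call might look like the following, where the "ft:..." identifier stands in for the model ID returned once a fine-tuning job finishes.

import openai

# The model ID below is a placeholder; the real one is reported by the completed job.
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo-0613:acme::abc123",
    messages=[
        {"role": "user", "content": "How do I reset my password?"},
    ],
)

print(response.choices[0].message.content)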
The company further mentioned that support for fine-tuning with function calling and "gpt-3.5-turbo-16k" will be coming later this fall.