OpenAI has made some major announcements at its latest DevDay conference, which took place on Monday morning in San Francisco. Perhaps the biggest feature coming to ChatGPT is the ability to make custom GPTs tailored to specific needs, but there are also a lot of other upgrades to talk about.
The AI tech giant has introduced GPT-4 Turbo, a substantial improvement over the existing GPT-4 model. This includes an updated knowledge base, which now goes up to April 2023 instead of the previous September 2021, a much larger context window, and more.
While revealing GPT-4 Turbo on stage, OpenAI CEO Sam Altman said: “A lot of people have tasks that require a much longer context length. GPT-4 supported up to 8K and in some cases up to 32K context length. GPT-4 Turbo supports up to 128,000 tokens of context. This is 16 times longer than our 8K context.”
Previously, GPT-4 topped out at a 32,000-token context window, but GPT-4 Turbo expands this to a whopping 128,000 tokens. That is equivalent to more than 300 pages of text in a single prompt, which should let users work with much larger documents. It should also help ChatGPT take in more of a question and offer more thought-out responses.
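For developers, taking advantage of the bigger window is simply a matter of sending more text to the new model through the API. Below is a minimal sketch using the OpenAI Python SDK, assuming the preview identifier gpt-4-1106-preview for GPT-4 Turbo; the file name and prompts are placeholders.

```python
# Sketch: sending a long document to GPT-4 Turbo via the Chat Completions API.
# Assumes the OpenAI Python SDK (v1.x) and the preview model name "gpt-4-1106-preview";
# the document and the question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

long_document = open("annual_report.txt").read()  # hypothetical, very long document

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview identifier (assumed)
    messages=[
        {"role": "system", "content": "You summarize long documents accurately."},
        {"role": "user", "content": f"Summarize the key risks in this report:\n\n{long_document}"},
    ],
)
print(response.choices[0].message.content)
```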
Not only that, but OpenAI also says the new model will be three times cheaper than GPT-4 for input tokens and twice as cheap for output tokens. Input now costs just $0.01 per 1,000 tokens, down from $0.03 with GPT-4, while output is $0.03 per 1,000 tokens, down from $0.06.
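To put those rates in perspective, here is a rough back-of-the-envelope calculation using the per-token prices quoted above; the token counts are invented for illustration.

```python
# Rough cost sketch at the GPT-4 Turbo rates quoted above
# ($0.01 per 1,000 input tokens, $0.03 per 1,000 output tokens).
INPUT_RATE = 0.01 / 1000   # dollars per input token
OUTPUT_RATE = 0.03 / 1000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A prompt that nearly fills the 128K window plus a 2,000-token reply:
print(f"${estimate_cost(120_000, 2_000):.2f}")  # -> $1.26
```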
Moreover, GPT-4 Turbo will continue to work with image prompts and text-to-speech requests, and it will also integrate DALL-E 3, which was originally announced last month.
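Those capabilities are exposed through the API as well. The sketch below shows what a DALL-E 3 request could look like with the OpenAI Python SDK, assuming the dall-e-3 model identifier; the prompt is just an example.

```python
# Sketch: generating an image with DALL-E 3 through the Images API.
# Assumes the OpenAI Python SDK (v1.x) and the "dall-e-3" model identifier.
from openai import OpenAI

client = OpenAI()

image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a developer conference keynote",
    size="1024x1024",
    n=1,
)
print(image.data[0].url)  # temporary URL to the generated image
```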
Updated GPT-3.5 Turbo
OpenAI has also introduced upgrades to the existing GPT-3.5 Turbo model. It now has a 16K context window by default and gains many of the same updates as GPT-4 Turbo: improved instruction following, JSON mode, and parallel function calling, along with cheaper pricing.
The blog post from OpenAI says: “Our internal evals show a 38% improvement on format following tasks such as generating JSON, XML, and YAML. Developers can access this new model by calling gpt-3.5-turbo-1106 in the API. Applications using the gpt-3.5-turbo name will automatically be upgraded to the new model on December 11. Older models will continue to be accessible by passing gpt-3.5-turbo-0613 in the API until June 13, 2024.”
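Going by that quote, switching over should only require the new model name, and JSON mode is a single extra parameter. Here is a minimal sketch with the OpenAI Python SDK, assuming the response_format option for JSON mode; the prompt is illustrative.

```python
# Sketch: calling the updated model and enabling JSON mode.
# Assumes the OpenAI Python SDK (v1.x); the model name gpt-3.5-turbo-1106 comes from the quote above.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},  # JSON mode: the model must return valid JSON
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
)
print(response.choices[0].message.content)  # e.g. {"city": "Paris", "country": "France"}
```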
The updated GPT-3.5 Turbo is priced at $0.001 per 1,000 input tokens and $0.002 per 1,000 output tokens.
OpenAI introduced GPT-3.5 Turbo in March, positioning it as the top model for non-chat applications. In August, the company also unveiled a version that can be customized through fine-tuning.
Keep in mind that the improvements to GPT-3.5 Turbo and the new GPT-4 Turbo are still in preview and will roll out more broadly in the coming weeks.