OpenAI has introduced new updates, improving its text-generating models and lowering prices as competition in the generative AI market heats up.
What Do the Updates Bring to the Table?
According to the company’s announcement, new versions of GPT-3.5-turbo and GPT-4, the latter being OpenAI’s most recent text-generating model, were released today, both with support for function calling. Function calling, as described by OpenAI in a blog post, lets developers describe programming functions to GPT-3.5-turbo and GPT-4 and have the models generate the structured calls needed to execute those functions.
Function calling, for instance, can be used to build chatbots that respond to requests by invoking external tools, translate natural language into database queries, and extract structured data from text. According to OpenAI, “These models have been tuned to both detect when a function needs to be called… and to respond with JSON that adheres to the function signature.” Developers can more reliably retrieve structured data from models via function calling.
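To make this concrete, here is a minimal sketch of function calling using the openai Python package as it shipped around these releases (the ChatCompletion-style API). The get_current_weather function, its JSON schema, and the dated model identifier are illustrative assumptions, not something defined by OpenAI.

    # Minimal function-calling sketch; the openai library reads OPENAI_API_KEY from the environment.
    import json
    import openai

    # Hypothetical function schema the model is allowed to "call".
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. Paris"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        }
    ]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
        functions=functions,
    )

    message = response["choices"][0]["message"]
    if message.get("function_call"):
        # The model replies with the function name plus JSON arguments matching the schema.
        name = message["function_call"]["name"]
        args = json.loads(message["function_call"]["arguments"])
        print(name, args)  # e.g. get_current_weather {'city': 'Paris'}

The key point is that the model does not run anything itself: it returns a structured description of the call, and the application decides whether and how to execute it.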
The New GPT-3.5-Turbo
In addition to function calling, OpenAI is releasing a variant of GPT-3.5-turbo with a significantly larger context window. The context window, measured in tokens (raw chunks of text), is the text the model takes into account before producing any new text. Models with small context windows tend to “forget” the details of even recent conversations, which causes them to stray off topic, often problematically.
At twice the cost ($0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens), the new GPT-3.5-turbo offers four times the context length (16,000 tokens) of the original. OpenAI claims it can handle about 20 pages of text at a time, which is still far less than the hundreds of pages that Anthropic’s flagship model can process. (OpenAI is also testing a limited-release version of GPT-4 with a 32,000-token context window.)
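For developers deciding whether the larger window is worth the higher price, a quick way to check is to count tokens before sending a request. The sketch below uses the tiktoken library for that; the gpt-3.5-turbo-16k model name and the report.txt input are assumptions for illustration.

    # Rough token count to see whether a prompt fits in the expanded 16,000-token window.
    import tiktoken

    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
    document = open("report.txt").read()  # hypothetical long input
    num_tokens = len(encoding.encode(document))
    print(f"{num_tokens} tokens")

    if num_tokens <= 16_000:
        print("Should fit in the 16k variant (e.g. gpt-3.5-turbo-16k)")
    else:
        print("Needs chunking, or a model with an even larger window")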
Price Reductions
On a positive note, OpenAI is cutting the price of the original GPT-3.5-turbo, the version without the larger context window, by 25%. The model is now available to developers for $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to roughly 700 pages per dollar.
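As a back-of-the-envelope check on those numbers, the snippet below computes the cost of a single hypothetical request at the reduced rates; the 2,000-input/500-output token split is an invented example.

    # Cost of one request at the reduced gpt-3.5-turbo prices quoted above.
    INPUT_PRICE = 0.0015 / 1000   # USD per input token
    OUTPUT_PRICE = 0.002 / 1000   # USD per output token

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

    # A 2,000-token prompt with a 500-token reply costs less than half a cent.
    print(f"${request_cost(2000, 500):.4f}")  # $0.0040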
The cost of text-embedding-ada-002, one of OpenAI’s more popular text embedding models, is also going down. Text embeddings quantify the relatedness of text strings and are often used for search (where results are ranked by relevance to a query string) and recommendations (where items with related text are suggested).
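To show what “relatedness” means in practice, here is a small sketch that embeds two strings with text-embedding-ada-002 and compares them with cosine similarity; it assumes the same era of the openai Python package, and the example strings are invented.

    # Compare two strings by embedding them and measuring cosine similarity.
    import numpy as np
    import openai

    def embed(text: str) -> np.ndarray:
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return np.array(resp["data"][0]["embedding"])

    query = embed("affordable laptops for students")
    doc = embed("Budget notebook computers reviewed for college use")

    # Higher cosine similarity means the two strings are more closely related.
    similarity = np.dot(query, doc) / (np.linalg.norm(query) * np.linalg.norm(doc))
    print(round(float(similarity), 3))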
The price of text-embedding-ada-002 has been reduced by 75% to $0.0001 for every 1,000 tokens. OpenAI claims that the decrease was made possible by improved system efficiency; this is undoubtedly an important area of focus for the business, which invests hundreds of millions of dollars in infrastructure and R&D.