Unleashing AI Power: OpenAI’s New Models, Pricing, and What It Means for Cybersecurity

OpenAI has announced a set of updates aimed at developers building on its APIs: new embedding models, a price cut for GPT-3.5 Turbo, an updated GPT-4 Turbo preview, and a more capable content moderation model.

The San Francisco-based AI company says its new text-embedding-3-small and text-embedding-3-large models perform significantly better than their predecessors. For example, text-embedding-3-large scores an average of 54.9 percent on the MIRACL benchmark and 64.6 percent on the MTEB benchmark, a considerable jump from the previous text-embedding-ada-002 model, which scored 31.4 percent and 61 percent respectively.
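For developers wondering how the new models slot in, here is a minimal sketch using the official openai Python library; the sample sentence is illustrative and an OPENAI_API_KEY environment variable is assumed:

```python
# Sketch: requesting an embedding from one of the new models via the openai library.
# Assumes OPENAI_API_KEY is set in the environment; the input text is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-large",
    input="OpenAI has released new embedding models.",
)

vector = response.data[0].embedding
print(len(vector))  # text-embedding-3-large returns 3072-dimensional vectors by default
```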

OpenAI has also announced a 5x reduction in the price per 1,000 tokens for ‘text-embedding-3-small’ compared to ‘text-embedding-ada-002’, from $0.0001 down to $0.00002. To cut costs further, the company notes that developers can shorten the embeddings without a significant loss in accuracy.
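The shortening OpenAI describes is exposed through the dimensions parameter accepted by the text-embedding-3 models; a rough sketch, with 256 chosen purely as an example size:

```python
# Sketch: trimming embeddings with the dimensions parameter supported by the
# text-embedding-3 models; 256 is an arbitrary example size, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Shorter embeddings reduce storage and lookup costs.",
    dimensions=256,
)

print(len(response.data[0].embedding))  # 256 instead of the default 1536
```

At the new rate, embedding one million tokens works out to about $0.02 instead of $0.10, before any savings from shorter vectors.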

Next week, OpenAI will roll out an updated GPT-3.5 Turbo model, cutting input token prices by 50 percent and output token prices by 25 percent. This is the third price reduction for GPT-3.5 Turbo in the past year, part of the company’s push to drive broader adoption.
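For a sense of scale, here is a back-of-the-envelope comparison assuming the cuts land on per-1,000-token rates of $0.0010/$0.0020 before and $0.0005/$0.0015 after; the specific dollar figures and the workload are assumptions for illustration, as the article only states the percentages:

```python
# Rough cost comparison for the GPT-3.5 Turbo price cut.
# The per-1K-token rates below are assumptions implied by the announced 50%/25% cuts.
OLD_INPUT, OLD_OUTPUT = 0.0010, 0.0020   # USD per 1K tokens (before)
NEW_INPUT, NEW_OUTPUT = 0.0005, 0.0015   # USD per 1K tokens (after)

input_tokens, output_tokens = 10_000_000, 2_000_000  # hypothetical monthly workload

old_cost = input_tokens / 1000 * OLD_INPUT + output_tokens / 1000 * OLD_OUTPUT
new_cost = input_tokens / 1000 * NEW_INPUT + output_tokens / 1000 * NEW_OUTPUT

print(f"before: ${old_cost:.2f}, after: ${new_cost:.2f}")  # before: $14.00, after: $8.00
```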

OpenAI has also updated its GPT-4 Turbo preview to a new version named gpt-4-0125-preview, noting that more than 70 percent of requests have moved to GPT-4 Turbo since its release. The updated preview completes tasks such as code generation more thoroughly than the previous version.
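Switching to the updated preview is a matter of passing the new model name to the chat completions endpoint; a minimal sketch, with an illustrative prompt:

```python
# Sketch: targeting the updated GPT-4 Turbo preview by model name.
# Assumes OPENAI_API_KEY is set; the system and user messages are illustrative.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4-0125-preview",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
)

print(completion.choices[0].message.content)
```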

To help developers build safer AI applications, OpenAI has also released text-moderation-007, which it describes as its most robust content moderation model to date. The company says the new model identifies potentially harmful text more accurately than earlier versions.
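The model is reached through the existing moderation endpoint; a minimal sketch, assuming the text-moderation-latest alias resolves to the newest model:

```python
# Sketch: screening text with the moderation endpoint. The "text-moderation-latest"
# alias is assumed to point at the newest moderation model; the input is illustrative.
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    model="text-moderation-latest",
    input="Example user-submitted text to screen before further processing.",
)

flagged = result.results[0].flagged
print("flagged" if flagged else "clean")
print(result.results[0].categories)  # per-category booleans (hate, harassment, etc.)
```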

Finally, developers get finer-grained control over API keys and better visibility into usage. OpenAI says developers can assign permissions to individual keys and track consumption on a per-key basis, making it easier to monitor separate products or projects.

OpenAI says further platform improvements aimed at supporting larger development teams are planned in the coming months.
