GPU shortage hits OpenAI, delays rollout of GPT-4.5

OpenAI has had to stagger the release of its new GPT-4.5 model because it has run out of GPUs (Graphics Processing Units). These chips are used to train models, process large amounts of data, and perform complex calculations. If you're wondering why CPUs (Central Processing Units) aren't used instead, the answer is pretty simple: GPUs process data in parallel, which suits AI's matrix operations far better than the sequential processing CPUs rely on.
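To see why matrix math parallelizes so well, consider a minimal sketch in plain Python. Every cell of the output matrix is an independent dot product, and a GPU's thousands of cores can compute those cells simultaneously; the triple loop below does them one at a time, which is the CPU-style sequential approach (the function name is just for illustration):

```python
def matmul_sequential(A, B):
    """Plain-Python matrix multiply: one multiply-add at a time."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):          # every (i, j) cell computed below is
        for j in range(m):      # independent of every other cell --
            for p in range(k):  # exactly the structure a GPU exploits
                C[i][j] += A[i][p] * B[p][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_sequential(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

A model like GPT-4.5 performs this kind of operation on matrices with billions of entries, which is why the hardware choice matters so much.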
In a tweet, OpenAI CEO Sam Altman says that the cavalry is on the way. Specifically, Altman said that hundreds of thousands of GPU chips are coming soon, and he will probably put each one to work fairly quickly. The new model has been available only to those willing to shell out $200 per month for the Pro version of ChatGPT. Once the new GPUs arrive, ChatGPT Plus users paying $20 per month will also get to use the new model.
Running over to Walmart to pick up some more GPUs, the way you and I restock printer ink when we're out, is not a possibility here. This is why OpenAI is looking at developing its own chips, which would leave it less vulnerable to fluctuating inventory levels at NVIDIA.

Sam Altman admits that OpenAI is out of GPU chips. | Image credit: X
Discussing the new GPT-4.5 model in his tweet, Altman points out that it is not a reasoning model and won't set new benchmark records. He goes on to say that as far as GPT-4.5 is concerned, "it’s a different kind of intelligence and there’s a magic to it i haven’t felt before. really excited for people to try it!"
Altman does warn that GPT-4.5 is "a giant expensive model" that costs $75 per million input tokens and $150 per million output tokens. By comparison, GPT-4o costs a much less pricey $2.50 per million input tokens and $10 per million output tokens. A token is a unit of text that can be a single letter, a whole word, or even a punctuation mark; Large Language Models (LLMs) break text down into tokens before processing it.
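The per-million-token rates above make the gap easy to quantify. Here is a quick sketch using the article's published rates; `request_cost` is a hypothetical helper, not an official API, and the 1,000-input / 500-output request is just an illustrative size:

```python
def request_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Dollar cost of one request; rates are $ per million tokens."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A single request with 1,000 input tokens and 500 output tokens:
gpt45 = request_cost(1_000, 500, in_rate=75.0, out_rate=150.0)
gpt4o = request_cost(1_000, 500, in_rate=2.50, out_rate=10.0)
print(f"GPT-4.5: ${gpt45:.4f}")        # GPT-4.5: $0.1500
print(f"GPT-4o:  ${gpt4o:.4f}")        # GPT-4o:  $0.0075
print(f"ratio:   {gpt45 / gpt4o:.0f}x")  # ratio:   20x
```

At these rates, the same request costs 20 times more on GPT-4.5 than on GPT-4o, which puts some numbers behind Altman's "giant expensive model" warning.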
If you're wondering how NVIDIA became a Wall Street darling, rising an incredible 1,748.96% over the last five years, this is your answer. Nothing is hotter on that street we call Wall than AI, and demand for NVIDIA's flagship chips has been growing throughout the last few years.