cross-posted from: https://lemmy.sdf.org/post/29335261
cross-posted from: https://lemmy.sdf.org/post/29335160
Here is the original report.
The research firm SemiAnalysis has conducted an extensive analysis of what actually went into training DeepSeek, refuting the narrative that R1 is so efficient that compute resources from NVIDIA and others are unnecessary. Before we dive into the hardware DeepSeek actually used, let's look at what the industry initially perceived. It was claimed that DeepSeek spent only "$5 million" on its R1 model, a model on par with OpenAI's o1, and this triggered a retail panic that was reflected in the US stock market; now that the dust has settled, let's look at the actual figures.
…
That figure is the cost of the company's entire GPU stock, amortized over four years of ownership. I highly doubt they ran the training step for four years, using every hardware resource available to them, while somehow also destroying all of the GPUs in the process.
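The distinction above can be made concrete with straight-line amortization: a training run should only be billed for the days it actually occupied the hardware, not for the fleet's full purchase price. A minimal sketch, using hypothetical placeholder figures (not SemiAnalysis's numbers):

```python
# Illustrative only: all dollar amounts and durations below are made-up
# placeholders to show the shape of the argument, not reported figures.

def training_run_cost(hardware_cost, ownership_years, run_days, utilization=1.0):
    """Cost attributable to one training run under straight-line amortization.

    hardware_cost   -- total purchase price of the fleet
    ownership_years -- period the fleet's cost is spread over
    run_days        -- days the training run occupied the hardware
    utilization     -- fraction of the fleet the run actually used
    """
    daily_cost = hardware_cost / (ownership_years * 365)
    return daily_cost * run_days * utilization

# Hypothetical: a $500M GPU fleet owned for 4 years, with one 60-day
# training run using the entire fleet.
fleet_cost = 500e6
run_cost = training_run_cost(fleet_cost, ownership_years=4, run_days=60)
print(f"Amortized run cost: ${run_cost / 1e6:.1f}M of ${fleet_cost / 1e6:.0f}M total")
```

Even with the whole fleet occupied, the run's amortized share comes to only a small fraction of the total hardware spend, which is why quoting the full fleet price as "the training cost" overstates it so badly.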