[He/Him, Nosist, Touch typist, Enthusiast, Superuser impostorist, keen-eyed humorist, endeavourOS shillist, kotlin useist, wonderful bastard, professional pedant miser]
Stuped person says stuped things, people boom
I have trouble with using tone in my words but not interpreting tone from others’ words. Weird, isn’t it?
Formerly on kbin.social and dbzer0
And Reuters calls it “loosening enforcement”. Rare Reuters L.
My god, the preamble for that thing is so dang long. It runs to 13:30, with some AI sponsorship the comments mention that I may have accidentally skipped over, and only 10:27–11:37 deals with what you’re talking about. The video makes a good point that they have existing operating infrastructure. For the stockpiling accusation, though, the statements it cites come from the CEO of big competitor “Chips AI”, who cites nothing except “only costing $6 million is impossible, therefore it actually cost more and they must have cheated! I think they have 50,000 illegally imported Nvidia GPUs!”, which just sounds like the behavior of a cult ringleader trying to maintain power to me. The other source it cites for this claim is Elon Musk, whose reasoning was “Obviously”.
I just think that whether or not DeepSeek smuggled, an investigation into it was of course going to be launched. I do want more transparency regarding where the Singapore billing goes, but that alone is too shaky a basis for conclusions.
Note that s1 is transparently a distilled model rather than a model trained from scratch, meaning it inherits knowledge from an existing model (Gemini 2.0 in this case) and doesn’t need to relearn nearly as much as a model trained from scratch would. It’s still important, but the training resources aren’t really directly comparable.
Elaborate? Link? Please tell me this is not just an “allegedly”.
extra time which I’m not sure I want to spend
It’s your burden of proof, bud.
The AI models use the same fuel for energy.
You might think this is apples and oranges, but I think it’s just another dimension: whether it’s better to have quality and bountiful output, or whether such gains are eclipsed by the far wider appeal and adoption of the technology. Just like how the cotton gin’s massive efficiency and yield increase in processing harvested cotton skyrocketed the harvesting of cotton itself.
The issue might be that the energy it saves in training is offset by its more intensive techniques for answering questions, and by the long answers they produce.
It’s more like comparing them while they use the same fuel (the article directly compares them in joules): let’s say the train also runs on gasoline. The car is far more independent and controllable, and it doesn’t waste fuel driving to places you don’t want to go, so it’s seen as “better” and more appealing. But that wide appeal, and thus wide usage, creates far more demand for gasoline, dries up the planet, and clogs up the streets, wasting fuel idling at traffic stops.
The benchmark feels just like the referenced Jevons Paradox to me: efficiency gains are eclipsed by a rise in consumption to produce more/better products.
Remember that East Asian privacy culture is different
Not to mention, JSTOR didn’t press any charges.
I wonder what the post was? Either way this seems quite bad.
but me and four others just wanted to know what legolas saw