NinjaZ@infosec.pub to Technology@lemmy.world · English · 20 hours ago
China scientists develop flash memory 10,000× faster than current tech (interestingengineering.com)
69 comments
gravitas_deficiency@sh.itjust.works · 13 hours ago
You’re willing to pay $none to have hardware ML support for local training and inference? Well, I’ll just say that you’re gonna get what you pay for.
bassomitron@lemmy.world · 12 hours ago
No, I think they’re saying they’re not interested in ML/AI. They want this super fast memory available for regular servers for other use cases.