• Hackworth@lemmy.world

    True, but they also released a paper that detailed their training methods. Is the paper detailed enough that others could reproduce those methods? Beats me.

    • KingRandomGuy@lemmy.world

      I would say that compared to the standards of top ML conferences, the paper is relatively light on details. Nonetheless, some folks have been able to reimplement portions of their techniques.

      ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce, even when they're open source, since the optimization process is partly random (batch ordering, data augmentations, nondeterminism in GPU kernels, etc.), and unfortunately even with seeding, the randomness is not guaranteed to be consistent across platforms.
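      The seeding point can be sketched in plain Python (a toy illustration; `make_batch_order` is a hypothetical helper, not code from any particular paper or framework): a seed pins down CPU-side randomness like batch ordering, but that is only one of the sources listed above.

```python
import random

def make_batch_order(num_batches, seed):
    """Deterministically shuffle batch indices from a seed.

    Hypothetical helper illustrating the seeding point above.
    """
    rng = random.Random(seed)  # local RNG, unaffected by global state
    order = list(range(num_batches))
    rng.shuffle(order)
    return order

# Same seed, same platform: the batch ordering is reproducible.
assert make_batch_order(8, seed=42) == make_batch_order(8, seed=42)

# The result is always a permutation of all batch indices.
assert sorted(make_batch_order(8, seed=42)) == list(range(8))

# But seeding only pins down this CPU-side randomness. GPU kernels
# (e.g. floating-point atomics) can still be nondeterministic, and a
# seeded RNG stream is not guaranteed to match across library
# versions or platforms.
```

      Frameworks do expose switches for the GPU side, e.g. PyTorch's `torch.use_deterministic_algorithms(True)`, though typically at a performance cost, and it still doesn't guarantee bit-identical results across different hardware.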