• Alphane Moon@lemmy.world (OP, mod)

    There are numerous ways to measure AI throughput, making it difficult to compare chips. Google is using FP8 precision as its benchmark for the new TPU, but it’s comparing it to some systems, like the El Capitan supercomputer, that don’t support FP8 in hardware. So you should take its claim that Ironwood “pods” are 24 times faster than comparable segments of the world’s most powerful supercomputer with a grain of salt.

    This is not a grain of salt. This is premeditated lying.

    Honestly, the whole article reminds me of this:

    https://www.youtube.com/watch?v=GFRzIOna2oQ

    • UnfortunateShort@lemmy.world

      Especially given that the supercomputers that don't support FP8 do support AVX-512. And I'm almost positive you can get quite good results populating those vectors with INT8 or INT16, along the lines of the sketch below.
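
      For anyone curious what that looks like in practice, here's a minimal sketch of an INT8 dot product with AVX-512 VNNI intrinsics. The function name and setup are illustrative (not from the article), and it assumes a CPU that actually exposes AVX512VNNI; it's just meant to show the kind of low-precision integer throughput these vector units can deliver.

      ```c
      #include <immintrin.h>
      #include <stdint.h>
      #include <stddef.h>

      /* Dot product of n (unsigned x signed) int8 pairs; n must be a multiple of 64.
       * Requires AVX512F + AVX512VNNI, e.g. gcc -O2 -mavx512f -mavx512vnni. */
      static int32_t dot_u8s8(const uint8_t *a, const int8_t *b, size_t n) {
          __m512i acc = _mm512_setzero_si512();
          for (size_t i = 0; i < n; i += 64) {
              __m512i va = _mm512_loadu_si512((const void *)(a + i));
              __m512i vb = _mm512_loadu_si512((const void *)(b + i));
              /* u8 x s8 multiplies, groups of four summed into 32-bit lanes. */
              acc = _mm512_dpbusd_epi32(acc, va, vb);
          }
          /* Horizontal sum of the sixteen 32-bit accumulators. */
          return _mm512_reduce_add_epi32(acc);
      }
      ```

      Each `_mm512_dpbusd_epi32` call does 64 multiply-accumulates per instruction, which is why INT8 on a wide vector unit is nothing to sneeze at even without FP8 hardware.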