• tuckerm@feddit.online · +49 · edited · 14 hours ago

    I remembered seeing a post on Mastodon a while ago about an AI-generated vulnerability report, and this article reminded me of that. Turns out, that old one was also about curl. He has been dealing with this bullshit for a while now. https://mastodon.social/@bagder/111245232072475867

    On that old one, the indignant attitude of the guy who “reported” the vulnerability still irritates me. He admits that he used AI (this was when Google’s AI was called Bard, so that’s what he means by “I have searched in the Bard”), and still has this tone of “how dare you not have fixed this by now!”

• RedSnt 👓♂️🖥️@feddit.dk · +98 · 15 hours ago

    “One way you can tell is it’s always such a nice report. Friendly phrased, perfect English, polite, with nice bullet-points … an ordinary human never does it like that in their first writing,”

    Damn straight.

        • sp3ctr4l@lemmy.dbzer0.com · +3 · 5 hours ago

          I feel this is one of the few instances where I can say ‘takes one to know one’ and not mean it in some kind of rude or belittling way.

          Also: Etiquette!

          That’s the word I couldn’t think of; that’s what’s used in Shadowrun to describe the … set of vocabulary and base cultural knowledge that functionally constitutes a social class within those games.

• Akatsuki Levi@lemmy.world · +32/−2 · 14 hours ago

    I still don’t get it. Like, why tf would you use AI for this kind of thing? It can barely write a basic Python script, let alone actually handle a proper codebase or detect a vulnerability, even if it’s the most obvious vulnerability ever.

    • emzili@programming.dev · +18 · 9 hours ago

      It’s simple, actually: curl has a bug bounty program where reporting even a minor legitimate vulnerability can land you a minimum of $540.

    • kadup@lemmy.world · +8/−1 · edited · 10 hours ago

      We’ve had several scientific articles published and later found to have been generated with AI.

      If somebody is willing to ruin their academic reputation, something that takes years to build, don’t you think people are also using AI to cheat in a job interview and land a high-paying IT job?

    • milicent_bystandr@lemm.ee · +5/−2 · 11 hours ago

      I think it might be the developers of that AI, letting their system file bug reports to train it, seeing what works and what doesn’t (as is the way with training AI), and not caring about the people hurt in the process.

• macniel@feddit.org · +58/−4 · edited · 16 hours ago

    Those who use AI to file reports against open source projects and flood the volunteer devs who keep the world going should be disqualified from using those open source projects to begin with (even though that’s not feasible).

    • teawrecks@sopuli.xyz · +8 · 11 hours ago

      Consider that it may not be intended to be helpful, but could actually be a malicious DDoS attempt. If it slows devs down from fixing real vulnerabilities, then it empowers those holding zero-days for a widely used package (like curl).

    • Dave.@aussie.zone · +11 · 12 hours ago

      Those who use AI to file reports against open source projects and flood the volunteer devs who keep the world going should be disqualified from using those open source projects

      I propose a GPL-noAI licence with this clause added.

• sp3ctr4l@lemmy.dbzer0.com · +13/−1 · 12 hours ago

    On a barely related note:

    It would be funny to watch Markiplier try to take out a Tesla Bot, and then ASIMO, and then a humanoid Boston Dynamics robot, in hand-to-hand combat.

      • sp3ctr4l@lemmy.dbzer0.com · +2 · 5 hours ago

        I mean… the thumbnail looks almost exactly like Markiplier to me.

        All these years later, still can’t get his damn voice out of my head, purely from clicking on ‘really great vid’ links from randos on Discord… bleck.

• zarathustra0@lemmy.world · +26/−1 · 16 hours ago

    I have a dream that one day it will be possible to identify which AI a given piece of slop came from, and thus to charge the owners of said slop generator for releasing such a defective product on the world, uncontrolled.

• TheTechnician27@lemmy.world · +21/−4 · 16 hours ago

    Just rewrite curl in Rust so you can immediately close any AI slop reports talking about memory safety issues. /s