as seen here and here, some instances are feeding posts wholesale to prompts, for what seem like extremely unsound reasons to me

any of you run into this shit yet?

  • rainwall@piefed.social · edit-2 · 10 hours ago

    Once again. The admin in question DID NOT USE THE LLM TO DECIDE ON THE ADMIN ACTION. Can you understand this? Can you read this?

    Literally no one believes this corpo-speak bullshit. That they just coincidentally ran this unpublished Python tool, did their own work, then just happened to use an LLM to do the exact same work right after, totally innocently? That reads as absolute ass-covering and nothing more. This is the “I smelled weed” of cop stops, just filtered through nerdy fediverse bullshit.

    Then, because the above totally happened like you said it did, as a one-off joke no one would ever notice, the same admin opted to put a current OpenAI model name in the LLM field, in an absolutely not tongue-in-cheek way, for other admins to totally catch and joke about? Which of course happened, haha, y’all had a big laugh about it before this blew up, yeah?

    Oh, and of course this only happened the one time, and never again. Of course no one on your team used this unpublished, time-saving, thought-terminating tool again, of course not.

    Come the fuck on.