as seen here and here, some instances are feeding posts wholesale to prompts, for what seem like extremely unsound reasons to me

any of you run into this shit yet?

  • self@awful.systems · 13 hours ago

    think very carefully on this because I’m not sure pretending to not understand what I’m asking is working out for you

    • self@awful.systems · 13 hours ago

      assume I know and understand that the LLM did not literally do the banning

      maybe there’s a large body of existing research on how even human in the loop systems confirm and worsen biases? maybe it’s a bit obvious when you go through the process the moderator took to get to their decisions in your head? slowly now, maybe you’ll get there

      • db0@lemmy.dbzer0.com (Banned from community) · edited · 12 hours ago

        assume I know and understand that the LLM did not literally do the banning

        I am telling you, again, that the human did not use the LLM to think for them either. The admin made the decision to ban the user irrespective of the LLM, and the rest of our admin team, and me specifically, would never let an admin become a “human in the loop”. The LLM was used just to summarize, as part of the test, with a misguided inside joke about using OpenAI tech.

        I will readily admit that there were mistakes made by the admin. Not in their actions, but in the optics. Those optics were spun to keep feeding this made-up controversy. We didn’t use the LLM to decide or even guide our decision, but it appeared like we did, and we have already owned that.

        • self@awful.systems · 12 hours ago

          The admin took the decision to ban the user irrespective of the LLM, and the rest of our admin team and me specifically, would never let an admin become a “human in the loop”. The LLM was used just to summarize

          you don’t appear to have much understanding of how a human-in-the-loop system works in practice. LLM summaries are used to confirm biases, especially when the prompt is something along the lines of “do these posts contain <negative term>?”, though these systems are stochastic, so you’re going to get unpredictable biases regardless of the prompt.

          I don’t accept that the LLM summary didn’t influence the decision, because the mod in question confirmed that he knew the LLM agreed with him (that’s bias, and also not something LLMs are actually capable of doing), and because if it didn’t influence the decision, then the summary is worthless.

          which is why maybe you should just not have them in the future? just don’t touch LLMs when you’re doing mod work. either there’s no reason for it or you’re doing something monstrously wrong.
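The framing problem described above can be sketched directly in code. This is a minimal illustration of the difference between a leading yes/no prompt and an open-ended one; both templates are hypothetical, and no real moderation tool or LLM API is assumed:

```python
# Sketch: why prompt framing matters in LLM-assisted moderation review.
# Both prompt templates below are hypothetical illustrations.

def leading_prompt(posts: list[str], label: str) -> str:
    """A yes/no framing that presupposes the label and invites confirmation."""
    joined = "\n---\n".join(posts)
    return f"Do these posts contain {label}?\n\n{joined}"

def neutral_prompt(posts: list[str]) -> str:
    """An open framing that asks for a description without a presupposed label."""
    joined = "\n---\n".join(posts)
    return f"Summarize the tone and content of these posts:\n\n{joined}"

posts = ["first post text", "second post text"]

# The leading framing embeds the reviewer's suspected conclusion in the
# very first line of the prompt; the neutral framing does not mention it.
print(leading_prompt(posts, "harassment").splitlines()[0])
print(neutral_prompt(posts).splitlines()[0])
```

The point of the sketch is only that the leading template bakes the suspected verdict into the question itself, which is the confirmation-bias mechanism being described, independent of which model answers it.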

          • db0@lemmy.dbzer0.com (Banned from community) · edited · 12 hours ago

            I don’t accept that the LLM summary didn’t influence the decision because the mod in question confirmed that he knew the LLM agreed with him (that’s bias, and also not something LLMs are capable of actually doing) and because if it didn’t, then the summary is worthless

            In this case, according to the admin in question, the LLM summary came after the decision, as a sort of test. I.e. the admin made a decision, and wanted to see if an LLM would subsequently agree with that decision. In this specific case, it did, which is why they misguidedly decided to keep its summary in the modlog (opening us up to this whole shitstorm), but ultimately that admin decided anyway that having LLMs in the mix is not good at all, which is why you never again saw an LLM summary in the modlog.

            I can only put so much fault on a person for just testing shit out, yanno? I am not happy that they decided to use the output of the test, because they are not familiar with how quickly disinfo breeds, but ultimately they came to the right decision anyway. If they had not, and they had raised the issue of using LLMs officially, they would have been shut down.

            • [deleted]@piefed.world · 12 hours ago

              Having an LLM confirm a decision is the same thing as having the LLM make a decision and then checking whether the mod agrees with it. If they could have chosen not to rule based on the LLM output, then the LLM was part of the decision-making process. The order does not matter.

              Including an LLM output that implies a determination at any step automatically makes it part of the process.

                • self@awful.systems · 10 hours ago

                  hey fucko, you know we don’t have to take their word for it right? we can read all the relevant posts and come to the conclusion that actually the use of LLMs as stated fucking sucks, and that we don’t fucking want it. we can read something and come to a different conclusion than you, believe it or not.

    • db0@lemmy.dbzer0.com (Banned from community) · edited · 12 hours ago

      this human-in-the-loop shit is how corporations absolve themselves of responsibility for decisions taken purely on the word of an LLM. it lets them fire a worker instead of an executive. you’re sure this is the route you want to go?

      I completely agree with you. We have never and will never go that route.

      Here’s the answer to the only question you posted, which should be obvious from everything else I’ve said and done.

      • self@awful.systems · 12 hours ago

        cool! please make it clear to the mod in question that they shouldn’t be using an LLM for anything in the future, even summarizing posts. make it part of your instance’s policies.

        • midribbon_action@lemmy.blahaj.zone (Banned from community) · 12 hours ago

          How could an admin possibly enforce that? What if a mod created a summary locally and never shared it with anyone? The AI summary wasn’t used as evidence; that is already policy and has been explained to you multiple times. You are shifting the goalposts to the moon, and no policy change will ever satisfy you.

          • self@awful.systems · 12 hours ago

            oh no the goalposts! think of the theoretical shitheads who might do this in secret!

            please see the pinned post and fuck off