Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • sansruse@awful.systems · 17 points · 5 days ago

    this is extremely low hanging fruit but i have to do it:

    https://xcancel.com/pmarca/status/2051374498994364529?s=46

    marc andreessen reveals his AI prompt. my favorite part is where he tells it to use as many words as possible, as if LLMs are normally too terse. But i also really like the part where he tells it not to hallucinate, and the part where he tells it it’s really smart as if that will make it do a better job.

    really, the whole thing is an elaborate way to say “make no mistakes, but anti-wokely”. Thought Leader in the investment space btw.

    • self@awful.systems · 13 points · 4 days ago

      it’s so fucking funny to me that “do not lie do not hallucinate” is still one of the prompt incantations the boosters use because they get really embarrassed when you make fun of them for it

    • Architeuthis@awful.systems · 17 points · 4 days ago

      transcript

      Sam@mardiroos.bsky.social skeeted:

      You are a skillful and trusted vizier. You will advise me wisely on how best to rule the kingdom. You will not scheme or plot. You will not inveigle my other courtiers into turning against me. You will not lie to me about scheming or plotting. If you scheme or plot against me, you have to tell me,

    • fiat_lux 🆕 🏠@lemmy.zip · 17 points · 5 days ago

      Never hallucinate or make anything up.

      I know you already mentioned this part in your post, but I’m still completely taken aback that it’s just in there like this - as though it wouldn’t already be in the system prompt if it stood a chance of working.

      If I were the kind of person to be shilling LLMs and posting prompts, I would still be ashamed to share this one. It’s a tacit condemnation of both the tool itself and the tool posting it.

        • fiat_lux 🆕 🏠@lemmy.zip · 7 points · 4 days ago

          In this case because it’s ironically counterproductive. If it weren’t for the environmental impact, it might be amusing to watch him keep hitting himself.

          I tried this type of prompt a long while ago to see what the “thinking” output would reveal. What happened was the agent went and “verified” its weightings were accurate - but having no point of comparison, it obviously concluded it was correct.

          However, doing that consumes a significant quantity of tokens and contributes to filling up the context window. There are two likely results of evaluating this ultimately unactionable request (a toy sketch of the first follows the list).

          1. It will push this instruction (and the rest of the wishful thinking) off the stack more quickly - making the prompt even more futile than it already is.
          2. Given some agents re-inject a summary of the original prompt periodically to prevent the stack problem, it will keep narrowing the context window - which contributes to increasing the rate of hallucination for the actually actionable instructions.
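
          A toy sketch of that first effect (purely illustrative, nothing like a real agent’s actual context management): once the window fills, the oldest tokens - wishful instruction included - simply fall off the back.

          ```python
          # Toy sketch only: a fixed-size window where the oldest tokens fall off the back.
          from collections import deque

          WINDOW = 16                      # made-up context budget, in "tokens"
          context = deque(maxlen=WINDOW)   # oldest entries are dropped automatically

          context.extend(["never", "hallucinate", "or", "make", "anything", "up"])
          context.extend(f"chat_token_{i}" for i in range(30))  # ordinary conversation

          print("instruction still in context:", "hallucinate" in context)  # False
          ```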
        • YourNetworkIsHaunted@awful.systems · 15 points · 5 days ago

          The problem is less that the system would somehow ignore that part of the prompt and more that “hallucinate” or “make stuff up” aren’t special subroutines that get called on demand when prompted by an idiot, they’re descriptive of what an LLM does all the time. It’s following statistical patterns in a matrix created by the training data and reinforcement processes. Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements but if it was that easy then you wouldn’t have [insert the entire intellectual history of the human species].

          Even if you assume that the AI boosters are completely right and that the LLM inference process is directly analogous to how people think, does saying “don’t fuck up” actually make people less likely to fuck up? Like, the kind of errors you’re looking at here aren’t generated by some separate process. Someone who misremembers a fact doesn’t know they’ve misremembered until they get called out on the error either by someone else with a better memory or reality imposing the consequence of being wrong. Similarly the LLM isn’t doing anything special when it spits out bullshit.

          • scruiser@awful.systems · 6 points · 2 days ago

            Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements but if it was that easy then you wouldn’t have [insert the entire intellectual history of the human species].

            I’m chiming in to agree with Architeuthis and add a citation that explains more. LLMs have a hard minimum rate of hallucinations set by the rate of “monofacts” in their training data (https://arxiv.org/html/2502.08666v1). Basically, facts that appear independently and only once in the training data teach the LLM that a certain rate of disconnected “facts” appearing nowhere else is normal, so it in turn generates output in that same vein, which in practice is essentially random and thus all but guaranteed to be false.

            And as Architeuthis says, the ability of LLMs to “generalize” basically means they compose true information together in ways that are sometimes false. So to the extent you want your LLM to ever “generalize”, you also get an unavoidable minimum of hallucinations that way.

            So yeah, even given an even more absurdly big training data source that was also magically perfectly curated you wouldn’t be able to iron out the intrinsic flaws of LLMs.
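
            If a toy example helps (my own, not from the paper): the monofact rate is just the fraction of training facts that occur exactly once, and per the paper that rate, minus a miscalibration term, puts a floor under how often the model makes things up.

            ```python
            # Toy illustration of the "monofact rate" (my own example, not the paper's code).
            from collections import Counter

            training_facts = [
                "paris is the capital of france",
                "paris is the capital of france",           # repeated fact
                "water boils at 100C at sea level",
                "water boils at 100C at sea level",         # repeated fact
                "john smith was born in seattle in 1982",   # monofact: appears exactly once
            ]

            counts = Counter(training_facts)
            monofact_rate = sum(1 for n in counts.values() if n == 1) / len(training_facts)
            print(f"monofact rate: {monofact_rate:.2f}")  # 0.20
            # Per the paper (roughly): hallucination rate >= monofact rate - miscalibration,
            # so a corpus full of one-off facts guarantees some floor of made-up output.
            ```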

            • YourNetworkIsHaunted@awful.systems · 4 points · 1 day ago

              Thank you! Let me wildly oversimplify and make sure I understand.

              The fundamental problem is that if you train on a set that includes multiple independent facts, the generative aspect of the model - the ability to generate new text that is statistically consistent with the training data - requires remixing and combining tokens in a way that will inevitably result in factual errors.

              Like, if your training data includes “all men are mortal” and “all lions are cats” then in order to generate new text it has to be “loose” enough to output “all men are cats”. Feedback and reinforcement can adjust the probabilities to a degree, but because the model is fundamentally about token probabilities and doesn’t have any other way of accounting for whether a statement is actually true, there’s no way to completely remove it. You can reinforce that “all cats are mortal” is a better answer, but you can’t train it that “all men are cats” is invalid.
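
              Or, as a toy version of what I mean by “loose” (nothing like a real LLM, just word-pair statistics): a model fit on those two true sentences already treats the false mash-up as perfectly plausible.

              ```python
              # Toy illustration (not a real LLM): a bigram model trained on two true
              # sentences scores the false recombination as just as plausible.
              from collections import defaultdict

              corpus = ["all men are mortal", "all lions are cats"]

              follows = defaultdict(set)  # word -> words seen following it in training
              for sentence in corpus:
                  words = sentence.split()
                  for a, b in zip(words, words[1:]):
                      follows[a].add(b)

              def plausible(sentence):
                  """'Plausible' here means every adjacent word pair was seen in training."""
                  words = sentence.split()
                  return all(b in follows[a] for a, b in zip(words, words[1:]))

              print(plausible("all men are mortal"))  # True  (seen verbatim)
              print(plausible("all men are cats"))    # True  (never seen, and false)
              ```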

              • scruiser@awful.systems · 5 points · 1 day ago

                You’ve described the problem with generalization, yes. Well, you could maybe sort of train it not to generate “all men are cats”, but then that might also prevent it from making the more correct generalization “all cats are mortal” or even completely valid generalizations like combining “all men are mortal” and “Socrates is a man” to get “Socrates is mortal”.

                The problem with monofacts is a bit more subtle. Let’s say the fact “John Smith was born in Seattle in 1982, earned his PhD from Stanford in 2008, and now leads AI research at Tech Corp” appears only once in the training data set. The model will have seen some of the individual pieces many times and can generate tokens in the right way for them: Seattle as a location in the US, Stanford as a college, 2008 as a date, etc. But the combination describing John Smith appears uniquely, which trains the model that some of its output should be one-off combinations of data like that. So the model might make up a “fact” like “Jane Doe was born in Omaha in 1984, earned her master’s from Caltech in 2006, and is now CEO of Tech Corp” because it fits the pattern of a unique fact that was in its training data set.

                • Architeuthis@awful.systems · 2 points · edited · 20 hours ago

                  Well, you could maybe sort of train it not to generate “all men are cats”, but then that might also prevent it from making the more correct generalization “all cats are mortal” or even completely valid generalizations like combining “all men are mortal” and “Socrates is a man” to get “Socrates is mortal”.

                  Just wanted to say that that ‘tal’ comes after ‘mor’ when ‘soc-rate-s’ is in the near context and in agreement with the attention mechanism is a very different type of logic than what this phrasing implies. This is also in combination with the peculiarities of word embeddings (the technique by which tokens are translated to numeric vectors), like how it has a hard time making something useful out of numbers; it uh, gets uh, complicated.

                  The monofacts thing seems very post hoc and way too abstracted in comparison, and also the amount of text that can be categorized as strictly true or false isn’t that big all things considered.

                  Still, if the point was to formalize the very no-duh observation that a neural net isn’t supposed to output its dataset verbatim at all times, hence hallucinations, then fine, I guess. Their proposed sort-of solution (controlled miscalibration) even amounts to forcing the model to generalize less by memorizing more, which used to be the opposite of why you would choose to use this type of topology.

                • YourNetworkIsHaunted@awful.systems · 2 points · 1 day ago

                  That’s really interesting. So the model can generalize the form of what a fact looks like based on these monofacts but ends up basically playing mad libs with the actual subjects. And if I understand the inverse correlation they were describing between hallucination rate and calibration, even their best mechanism to reduce this (which seems to have applied some kind of back-end doubling to the specific monofacts to make the details stand out as much as the structure, I think?) made the model less well-calibrated. Though I’m not entirely sure what “less well-calibrated” amounts to overall. I think they’re saying it should be less effective at predicting the next token overall (more likely to output something nonsensical?) but also less prone to mad libs-style hallucinations.

          • Architeuthis@awful.systems · 10 points · edited · 4 days ago

            Theoretically if the people responsible for that training and reinforcement did their jobs well then those patterns should only include true statements

            That would only work if inference were some sort of massive if-then-else process. Hallucinations are downstream of neural networks’ ability to generalize from the dataset examples; they aren’t going anywhere even if you train on a corpus of perfectly correct statements.

            • scruiser@awful.systems · 3 points · 2 days ago

              For the chain-of-thought instruction-following model gpt-oss-20b, I’ve noticed its reasoning content often includes it talking about stuff it is supposed to avoid in the final output and double-checking that it doesn’t have that forbidden output. So it would waste tokens talking about pink elephants in its reasoning content, but then do okayish at avoiding pink elephants in its final output.

            • BioMan@awful.systems · 7 points · 4 days ago

              This would actually be an interesting question for the more rigorous end of the mechanistic interpretability people to study. They decompose the system to find ‘features’ within different layers that are associated with different behaviors or concepts in the inputs and outputs, and that activate or deactivate each other. The famous example is the time they identified a linear combination of activations in a layer that corresponded to ‘the Golden Gate Bridge’; when they reached in and kept its numbers high while the model ran, it would not stop talking about the bridge regardless of the topic, even while acknowledging that its answers were incorrect for the questions at hand.

              I actually would love to see what mechanistically happens to that feature when you put in the input ‘do not talk about the golden gate bridge’.
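
              Very roughly, the “reaching in” part amounts to adding a feature direction to one layer’s activations during the forward pass, something like the hook below (the model, the layer choice, and the random “feature” vector are all stand-ins of mine; the real interpretability work is finding the actual direction).

              ```python
              # Rough sketch of activation steering; model/layer names are stand-ins and the
              # "feature" is a random placeholder rather than a real learned direction.
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              model_name = "gpt2"  # stand-in; the famous example was Anthropic poking at Claude
              tok = AutoTokenizer.from_pretrained(model_name)
              model = AutoModelForCausalLM.from_pretrained(model_name)

              feature_direction = torch.randn(model.config.hidden_size)
              feature_direction /= feature_direction.norm()

              def boost_feature(module, inputs, output):
                  # Add the feature direction to every token position's hidden states.
                  hidden_states = output[0] + 10.0 * feature_direction
                  return (hidden_states,) + output[1:]

              # Clamp the feature high in a middle layer, then generate as usual.
              handle = model.transformer.h[6].register_forward_hook(boost_feature)
              ids = tok("Tell me about your day.", return_tensors="pt")
              print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
              handle.remove()
              ```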

    • ⠠⠵ avuko@infosec.exchange · 9 points · edited · 5 days ago

      @sansruse @BlueMonday1984

      “You are a world class expert in all domains.”

      Lolwut.

      And then some grown-ass adult answering in all seriousness:

      “fun fact: role prompting doesn’t work anymore

      It actually decreases output quality bc the model wastes compute on matching persona instead of problem solving”

      What the hell?!

      Go buy yourself a freaking tamagotchi, boys! You’ll learn to practise a modicum of care for something.

      FFS, this timeline is the absolute dumbest…