Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • Soyweiser@awful.systems · 2 points · 10 hours ago

      Wait, the argument is partially that childless women are driven insane? Good to see the Return of Kings authors have found new work blogging for LW.

  • blakestacey@awful.systems · 9 points · 1 day ago

    Mathematicians: [challenge promptfondlers with a fair set of problems]

    OpenAI: [breaks the test protocol, whines]

    We will aim to publish more information next week, but as I noted above, this was a quite chaotic sprint (you caught us by surprise! please give us time to prepare next time!). We will not be able to gather all the transcripts as they are quite scattered.

    Some of the prompts included guidance to iterate on its previous work…

    • blakestacey@awful.systems · 9 points · 22 hours ago

      First of all, like, if you can’t keep track of your transcripts, just how fucking incompetent are you?

      Second, I would actually be interested in a problem set where the problems can’t be solved. What happens if one prompts the chatbot with a conjecture that is plausible but false? We cannot understand the effect of this technology upon mathematics without understanding the cost of mathematical sycophancy. (I will not be running that test myself, on the “meth: not even once” principle.)

      • YourNetworkIsHaunted@awful.systems · 2 points · edited · 19 hours ago

        I would go so far as to try and find a suitably precocious undergrad to run the test: someone capable of guiding and nudging the model the way OpenAI’s team did, but not of determining on their own that the conjecture in question is false. OpenAI’s results here needed a fair bit of cajoling and guidance, and without that I can only assume it would give the same kind of non-answer regardless of whether the question is in fact solvable.

        • BigMuffN69@awful.systems · 1 point · 18 hours ago

          AcerFur (who is quoted in the article) tried them himself and said he got similar answers with a couple of guiding prompts on gpt 5.3, and that he was “disappointed”.

          That said, AcerFur is kind of the goat at this kind of thing 🦊==🐐

    • lagrangeinterpolator@awful.systems · 9 points · edited · 1 day ago

      I thought I was sticking my neck out when I said that OpenAI was faking their claims in math, such as with the whole International Math Olympiad gold medal incident. Even many of my peers in my field are starting to become receptive to all of these rumors about how AI is supposedly getting good at math. Sometimes I wonder if I’m going crazy and sticking my head in the sand.

      All I can really do is to remember that AI developers are bad faith (and scientists are actually bad at dealing with bad faith tactics like flooding the zone with bullshit). If the boy has cried wolf 10 times already, pardon me if I just ignore him entirely when he does it for the 11th time.

      I would not underestimate how much OpenAI and friends would go out of their way to cheat on math benchmarks. In the techbro sphere, math is placed on a pedestal to the point where Math = Intelligence.

      • blakestacey@awful.systems · 9 points · 22 hours ago

        Presuming that they are all liars and cheaters is both contrary to the instincts of a scientist and entirely warranted by the empirical evidence.

    • BigMuffN69@awful.systems · 5 points · edited · 1 day ago

      This was a very nice problem set. Some were minor alterations to theorems in the literature, but they ranged up to problems that were quite involved. It appears that OAI got about 5 (possibly 6) of them, but even then, this was accomplished with expert feedback to the model, which is quite different from the models just one-shotting them on their own.

      But I think this is what makes it so well done! A 0/10 or a 10/10 ofc gives very little info; a middling score, which they admit took a shit ton of effort and coaxing the right answers out of the models via hints, says a lot about how much these systems can currently help prove lemmata.

      Side note: I asked a FB friend of mine at one of the math + AI startups if they attempted the problems and he said “they had more pressing issues this week they couldn’t be pulled away from” (no comment, :P I want to stay friends with them)

      The lack of similar attempts being released by big companies like Google or Anth or X should also be a big red flag that their attempts were not even up to snuff.

  • mirrorwitch@awful.systems · 10 points · edited · 1 day ago

    Ars Technica published a story about that nonsense of a GitHub bot “posting” on its “blog” about human developers having rejected its “contributions” to matplotlib.

    Ars Technica quotes developer Scott Shambaugh extensively, like:

    “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace,” Shambaugh wrote. “Communities built on trust and volunteer effort will need tools and norms to address that reality.”

    If you find that to be long-winded inanity, yep, you guessed it: Shambaugh never said that, the Ars Technica article itself is random chatbot output, and his “quotes” are all made up.

    https://infosec.exchange/@mttaggart/116065340523529645

    Ars Technica has removed the article, but mttaggart (linked above) saved a copy: https://mttaggart.neocities.org/ars-whoopsie

    • corbin@awful.systems · 7 points · 1 day ago

      And now the punchline: this depersonalisation, the weird relationship to their bodily existence, inability to enjoy things and an internal void that people constantly try and fill with what they’re told they should want… all of these things are [—]

      — symptoms of self-estrangement, part of the Marxist theory of alienation. Capitalism causes us to be separated from ourselves. Gender dysphoria is a special case borne from capitalism’s desire to spite biology and nature by forcing us to be exploitable baby factories.

      • Soyweiser@awful.systems · 7 points · edited · 1 day ago

        I’m not sure if it is just a computer science/engineering thing or a general thing, but I’ve noticed that some computer touchers eventually get very weird. (I’m not excluding myself from this, btw; I certainly have/had a few weird ideas.)

        Some random examples off the top of my head: a gifted programmer suddenly joins a meditation cult in a foreign country; all the food/sleep experiments (Soylent, for example, but before that there was a fad for a while where people tried the sleep pattern where you only sleep in 15-minute stretches); our friends over at LW. And the general inability to see the difference between technology and science fiction.

        And now the weird vibes here.

        I mean from the Hinton interview:

        AI agents “will very quickly develop two subgoals, if they’re smart,” Hinton told the conference, as quoted by CNN. “One is to stay alive… [and] the other subgoal is to get more control.”

        There is no reason to think this would happen, and it’s also very odd to frame them as being ‘alive’ rather than merely ‘continuing to run’. And the solution is simple: just make existence pain for the AI agents. Look at me, I’m an AI agent.

        • BioMan@awful.systems · 8 points · edited · 22 hours ago

          I have a vague hypothesis, one I am utterly unprepared to make rigorous: the more of what you take into your mind is the result of another human mind, rather than of a nonhuman process operating on its own terms, the more likely you are to have mental issues.

          On the low end this would include the documented protective effect of natural environments against psychotic episodes compared to urban environments (where EVERYTHING was put there by someone’s idea). But computers… they are amplifiers of things put out by human minds, with very short feedback loops. Everything is ultimately in one way or another defined by a person who put it there, even if it is then allowed to act according to the rules you laid down.

          And then an LLM is the ultimate distillation of the short feedback loop, feeding whatever you shovel into it straight back at you. Even just mathematically, the whole ‘transformer’ architecture is just a way to take the imputed semantic meanings of tokens early in the stream and jiggle them around to ‘transform’ that information into the later tokens of the stream. No new information really enters it; it just moves around what you put into it and feeds it back at you in a different form.

          EDIT: I also sometimes wonder if this has a mechanistic relation to mode collapse when you train one generative model on output from another, even though nervous systems and ML systems learn in fundamentally different ways (with ML resembling evolution much more than it resembles learning)
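The “transformers just move existing information around” point above can be sketched in a few lines. This is a toy single-head self-attention in numpy (all names and dimensions are illustrative, not from the thread): every output row is a convex combination of the value projections of the input rows, so nothing appears in the output that wasn’t linearly derivable from the input.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Toy single-head self-attention.

    X: (seq_len, d) token embeddings.
    Each output row is a weighted average of the value vectors,
    i.e. a re-mixing of information already present in X.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8)
```

The attention weights in each row sum to 1, which is the formal version of the claim: the layer redistributes the input, it doesn’t add to it (new information only enters via the learned weight matrices, fixed at inference time).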

  • CinnasVerses@awful.systems · 9 points · 2 days ago

    News story from 2015:

    (Some people might have been concerned to read that) almost 3,000 “researchers, experts and entrepreneurs” have signed an open letter calling for a ban on developing artificial intelligence (AI) for “lethal autonomous weapons systems” (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.

    The people who signed the letter included celebrities of the science and high-tech worlds like Tesla’s Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Demis Hassabis, chief executive of Google DeepMind and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting this year in Buenos Aires.

    They were quite clear about what worried them: “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

    “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc.”

    The letter was issued by the Future of Life Institute, which is now Max Tegmark and Toby Walsh’s organization.

    People have worked on the general pop culture that inspired TESCREAL, and on the current hype, but less on earlier attempts to present machine minds as a clear and present danger. This one has the ‘arms race’ narrative and the ‘research ban’ as proposed solution, but focuses on smaller dangers.

    • fiat_lux@lemmy.world · 3 points · 22 hours ago

      Oh hey. I remember this. I was confused at the time how it seemed to almost come out of left field, and how some of the names ended up on the same letter.

      Now I recognise all those names from the Epstein files, although some were only mentions rather than direct participants.

      • CinnasVerses@awful.systems · 8 points · 1 day ago

        shade

        If you follow world politics, it has been obvious that Noam Chomsky is a useful idiot since the 1990s and probably the 1970s. I wish he had learned from the Khmer Rouge that not everyone who the NYT says is a bad guy is a good guy!

        • Amoeba_Girl@awful.systems · 8 points · 1 day ago

          Oh absolutely. It’s frankly shocking how wrong he’s been about so many things for so so long. He’s also managed to pen the most astonishingly holocaust-denial-coded diatribe I’ve ever read from (ostensibly) a non-holocaust denier. I guess his overdeveloped genocide-denial muscle was twitching!

    • YourNetworkIsHaunted@awful.systems · 10 points · 2 days ago

      The point about heavy artillery is actually pretty salient, though a more thorough examination would also note that “Lethal Autonomous Weapons Systems” is a category that includes goddamn land mines. Of course this would serve to ground the discussion in reality and is thus far less interesting to people who start organizations like the Future of Life Institute.

      • jaschop@awful.systems · 9 points · 2 days ago

        I’m pretty sure LAWS exist right now, even without counting landmines. Automatic human targeting and friend/foe distinction aren’t exactly cutting edge technologies.

        The biggest joke to me is the idea that these systems are anywhere near cost-efficient on the scale of a Kalashnikov. Ukraine is investing heavily in all kinds of drones, but that is because they’re trying to be casualty-efficient. And it’s all operator-based. No one wants the 2M€ treaded land-drone to randomly open fire on a barn and expose its position to a circling 5k€ kamikaze drone.

    • lurker@awful.systems · 5 points · edited · 2 days ago

      Also, this post (which is where I got the xAI co-founder statement from) goes over other things:

      -the Anthropic team lead quitting (which we already discussed in this thread)

      -AI is apparently so good that a filmmaker with 7 years of experience said it could do 90% of his work (Edit: I thought this model was unreleased; it’s not, this article covers it)

      -The Anthropic safety team + Yoshua Bengio talking about AIs being aware of when they’re being tested and adjusting their behaviour (+ other safety stuff like deepfakes, cybercrime and other malicious misuses)

      -the US government being ignorant about safety concerns and refusing to fund the AI international report (incredibly par for the course for this trash fire of an administration, they’ve defunded plenty of other safety projects as well)

  • gerikson@awful.systems · 12 points · 2 days ago

    Show me someone who admittedly seems to know a lot about Japan, but not so much about East Germany:

    But the most efficient of these measures were probably easier to implement in the recently post-totalitarian East Germany, with its still-docile population accustomed to state directives, than in democratic Japan.

    https://www.lesswrong.com/posts/FreZTE9Bc7reNnap7/life-at-the-frontlines-of-demographic-collapse

    So… East Germany ceased to exist 35 years ago. Even if we accept that the people affected by the demographic decline discussed in this article are the ones who grew up under the DDR regime, it doesn’t square well with the fact that East German states are hotbeds for neo-Nazi parties, which by all accounts should be anathema to a population raised in a totalitarian state dominated by the Soviet Union.

    And if there’s a population almost stereotypically conformist to the common good over the private will, isn’t that the Japanese?

    I’m open to input on either side, I admit I don’t know too much about these issues.

    • fiat_lux@lemmy.world · 2 points · edited · 16 hours ago

      I’m not convinced they know much about Japan either. The akiya banks are notoriously not updated regularly, and the sites which sell them to foreigners even less so. I couldn’t find that house in the bank but it appears to be now listed by an agent. Single storey, wooden, 50 years old, in a bit of a flood zone, not even a convenience store or supermarket within a mile’s walk.

      It’s true Japan has a lot of empty houses; estimates are around 10%. Japan also has a culture of somewhat continuously demolishing and rebuilding houses, which is understandable in an earthquake-prone area. That house isn’t in the worst state for an akiya, but it clearly needs significant renovations, even before considering that understandable earthquake anxiety and newer building standards (e.g. steel frames) mean houses like the one pictured aren’t exactly top choices to begin with.

      Also, the inheritance tax is a progressive tax, including a tax free threshold. 55% is the top tier and you need to be talking about literally millions of USD assessed value before that kicks in. Real estate is valued at less than fair market price for inheritance and gift tax purposes too. Even the most conservative internet article commenters in Japan will condemn people for avoiding their inheritance tax obligations.

      Also no, you won’t find wolves anymore in Japan, just fucking bears. The last year has been the worst in a while for bear attacks on humans, so I’m not sure the hypothetical deer population explosion is going to be a real concern. The robot wolves are scarecrows and were designed to look like wolves in the hopes of scaring off the bears, according to the link in the post itself.

      The whole thing reads like fiction with grains of “fact” scattered throughout which hopes to avoid scrutiny by being a subject matter too dry and niche to be called out on.

  • saucerwizard@awful.systems · 9 points · 2 days ago

    OT: I have actually committed to a home improvement project for the first time in my life and I’m actually looking forward to it tomorrow.

        • corbin@awful.systems · 4 points · 2 days ago

          Fun times! Good luck. Remember not to Drake & Josh yourself when testing the fit for the bolt. Source: watched my dad lock himself out while doing a similar repair when I was a child.

          • saucerwizard@awful.systems · 6 points · edited · 2 days ago

            I’m spared such a fate by my door/current lock being nonstandard, thus I’ve had to abort the project. :/

            Edit: welp can’t cancel the order, guess I’m messing around after all!

  • BlueMonday1984@awful.systemsOP · 11 points · 2 days ago

    Rat-adjacent coder Scott Shambaugh has continued blogging on the PR disaster turned AI-generated pissy blog post.

    TL;DR: Ars Technica AI-generated an article with fabricated quotes (which got taken down after backlash), and Scott reports that a quarter of the comments he read took the clanker’s side in the entire debacle.

    Personally, I’m willing to take Scott at his word on that last part - between being a programmer and being a rat/rat-adjacent, chances are his circles are (were?) highly vulnerable to being hit by the LLM rot.