Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • o7___o7@awful.systems · 9 points · 5 days ago

    Kelsey Piper posts a new fanfiction about Ed Zitron:

    https://www.theargumentmag.com/p/ais-biggest-critic-has-lost-the-plot

    Edit: Lately, Kelsey Piper has been serving as the ambassador to centrist liberals from lesswrong, which is why the “big mad” nature of the piece caught my attention.

    Included below is a previous example of Piper’s work for the benefit of the uninitiated:

    https://old.reddit.com/r/SneerClub/comments/1my5z3g/kelsey_piper_of_vox_cowrote_an_epic_eugenics

    • scruiser@awful.systems · 8 points · 4 days ago

      I am a pretty big fan of Ed’s work, so I’m going to hold my nose and read Kelsey’s piece thoroughly enough to do a line-by-line debunking:

      Over the last two years, he has called the top repeatedly:

      Well yes, but he has also explicitly said that the bubble peaking and popping would be a multiyear process. I’ve only been keeping up with his every article for the past year, but over that year, his median guess for the bubble pop becoming undeniable was 2027. I guess making timelines with big events in 2027 and hedging on the median number is only for the rationalists? Also, we are already starting to see the narrative fray as Anthropic and OpenAI experiment with price hikes and struggle to get ready for IPOs, which would count as meeting his predictions for the start of the bubble pop.

      In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.

      This is basically an admission that he can’t make the case in terms of the economics anymore.

      ??? Ed has been making the case for circular financing and investors being deceived because he thinks there are circular financing deals and investors being deceived. Ed has slightly softened on his position on exactly how useless or not LLMs are, but he is still holding to his economic case that the amount they cost isn’t worth the value they provide, extremely blatantly so once consumers start paying the real cost and not the VC-subsidized cost.

      By almost every metric, AI progress from 2024 to 2026 has been much faster than AI progress from 2022 to 2024.

      And she is quoting a rat-adjacent think tank for proof that AI improvement has been exponential. Even among the rationalists, the case has been made that the benchmarks are not reflective of real-world usage/value and that costs are growing along with “capabilities”.

      It can no longer argue that costs aren’t falling; they are.

      Even accepting the premise that real costs have fallen, Kelsey fails to address Ed’s case that the prices LLM companies charge are massively subsidized. If real costs are 10x the current subsidized prices (which have already been pushed up as far as they can be without losing customers), and inference costs miraculously drop 5x (which Kelsey would treat as a given, but I think is pretty unlikely barring some radical paradigm shifts), that still leaves a 2x gap.
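      The arithmetic here can be sanity-checked in a few lines; the 10x subsidy and the 5x cost drop are the illustrative assumptions from this comment, not audited figures:

```python
# Sanity check of the subsidy-gap arithmetic above.
# Illustrative assumptions (from the comment, not audited data):
subsidized_price = 1.0             # what customers pay today, normalized
real_cost = 10 * subsidized_price  # claim: true cost is ~10x the price charged
cost_reduction = 5                 # hypothetical 5x drop in inference costs

remaining_cost = real_cost / cost_reduction
gap = remaining_cost / subsidized_price
print(f"after a 5x cost drop, costs are still {gap:.0f}x the price charged")
# -> after a 5x cost drop, costs are still 2x the price charged
```

      Even under that generous assumption, the provider would have to double prices just to break even on inference.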

      It is a straightforward crime to claim $2 billion in monthly revenue if you mean that you are giving away services that would have a $2 billion market value.

      Yes, exactly. Technically OpenAI and Anthropic play games with ARR and “gross” revenue (i.e. magically excluding the cost of training the model in the first place), but in a just nation it would straightforwardly be a crime. Why does she find this hard to believe?

      Epoch AI has an in-depth analysis of the same financial questions from the same public information

      (Looks inside the Epoch AI article):

      So what are the profits? One option is to look at gross profits. This only considers the direct cost of running a model

      Ed has gone into detail repeatedly about why excluding the cost of training the model is bullshit.

      (More details from the article)

      But we can still do an illustrative calculation: let’s conservatively assume that OpenAI started R&D on GPT-5 after o3’s release last April. Then there’d still be four months between then and GPT-5’s release in August, during which OpenAI spent around $5 billion on R&D. But that’s still higher than the $2 billion of gross profits. In other words, OpenAI spent more on R&D in the four months preceding GPT-5, than it made in gross profits during GPT-5’s four-month tenure.

      Oh, that is surprising: the Epoch AI article actually acknowledges the point that these models are wildly unprofitable once you account for the training cost! Of course, they throw away their point in the next section by just magically assuming LLMs will prove to be massively valuable in the near future! (One of the exact things Ed has complained about!)
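      Restated as arithmetic (the $2B and $5B figures are the Epoch article’s rough estimates, not audited financials):

```python
# Restating the Epoch AI illustrative calculation quoted above.
gross_profit_bn = 2.0  # GPT-5's four-month "gross profit", training cost excluded
rd_spend_bn = 5.0      # estimated R&D spend in the four months before release

net_bn = gross_profit_bn - rd_spend_bn
print(f"gross profit minus R&D: {net_bn:+.1f} $bn")
# -> gross profit minus R&D: -3.0 $bn
```

      The headline “gross profit” flips negative the moment the model’s own development cost is counted, which is exactly Ed’s point.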

      He’s found too many grounds for dismissing all the financial information we have as dishonest or irrelevant to seriously engage with what any of it would imply if it were true.

      He has shown in detail how the companies use barely-technically-not-lying obfuscated bullshit metrics like gross profit or ARR to inflate their numbers, and if you try to un-obfuscate them the numbers look a lot worse.

      Kelsey goes on to try to claim that LLMs provide a lot of value:

      Making them more productive is a big deal, and in 2026, AI makes them more productive.

      Zitron can’t really contest this with contemporary data, so he cites 2024 and 2025 studies of much weaker AIs with much weaker productivity impacts.

      Two years to… 4 months ago! Such outdated information! In the first place, there have been very few rigorous studies of how much of a productivity boost LLM coding agents actually provide, and one of the few studies with even a passing attempt at rigor (while still below good academic standards) was METR’s study (and keep in mind they are a rat-adjacent think tank and not proper academics), which showed programmers thought they got a productivity boost but actually got a net productivity decrease!

      From this set of beliefs, you could, in fact, defend a delightful bespoke AI bubble take: that AI would have been a catastrophic investment bubble, but the AI companies were saved from their mistakes by the determined NIMBYs of America killing off the excess data center build-out.

      But that’s not Zitron’s stance. He seems to account “the build-out is too aggressive” and “the build-out is not happening as planned” as both independent strikes against AI — both things that show it’s bad, and the more of those he finds, the more bad it is.

      It could in fact be all 3! The hyped-up build-out, such as that indicated by OpenAI’s and Oracle’s 300 billion dollar deal, was completely, insanely too aggressive (for it to pay off, Ed calculated, LLMs would have to drastically exceed Netflix+Microsoft Office in terms of ubiquity and price point), not achievable given realistic build times for data centers (Ed has also brought the numbers here), and even at the reduced actual rate of build-out, still not financially viable (simply because the LLM companies aren’t charging enough). So yes, both things are bad, and one type of badness partway mitigates the other, but it is still all bad!

    • corbin@awful.systems · 16 points · 5 days ago

      Thanks for posting this; if you hadn’t, I would have. Piper really doesn’t seem to understand that bubbles form and pop over a span of three to five years. Like, I’m not sure how much charity I’m supposed to give to analyses like:

      When you read “AI is a bubble,” think of the dot-com boom of the late 1990s: Yes, the internet was going to be a big deal, but valuations soared for specific companies that had small or speculative revenue, often on the assumption that they would capture the value the internet would one day deliver. They didn’t, their stocks crashed, and the invested money was mostly lost. The internet was as big as imagined — bigger, even — but Pets.com didn’t survive to see it.

      Pets.com!? Kelsey, even reading a basic article about the dot-com bubble would have saved you embarrassment here. Zitron’s analogy is excellent because the bubble is multifactorial and the analogies that we can make are factor-to-factor. Here’s some things that caused the dot-com bubble; people were overly optimistic about:

      Compared to all of that, Kelsey, Pets.com was just an Amazon.com experiment. Remember Amazon.com? Did the dot-com bubble kill them? No? Anyway, Pets.com is kind of like the small labs that hover around OpenAI and Anthropic, trying out various little harnesses and adapters on top of their token APIs. Pets.com is like OpenClaw; it’s not that important of a player in the overall finances, just an example of how severely the big labs are distorting incentives for small labs.

      The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.

      The uselessness of the products in 2023 directly led to the bad investments in 2024 and the Enron-esque financial deals in 2025, Kelsey. The future is conditioned upon the past, y’know?

      • blakestacey@awful.systems · 12 points · 5 days ago

        Alleging widespread financial fraud?! How absurd! And to prove just how absurd it is, I will namedrop the infamous financial fraud from the industry full of exactly the same people. Checkmate atheists

        • scruiser@awful.systems · 6 points · 4 days ago

          Widespread financial fraud which was legitimized and in some cases directly backed by EAs! Surely there are no parallels!

      • scruiser@awful.systems · 7 points · 4 days ago

        Zitron’s analogy is excellent because the bubble is multifactorial and the analogies that we can make are factor-to-factor. Here’s some things that caused the dot-com bubble; people were overly optimistic about:

        Ed has also been clear there are a few factors that make this bubble worse (for the economy and the general public) than the dot-com bubble. For one, Ed is strongly convinced that GPU lifecycles are much shorter than fiber-optic lifecycles. You build fiber-optic infrastructure and it will last for decades. Meanwhile, GPUs used constantly at max load have lifecycles of 3-5 years. The end result of the internet is also much more useful and less of a double-edged sword than the slop generators which churn out propaganda and spam.

      • CinnasVerses@awful.systems · 9 points · 5 days ago

        All the legal and regulatory uncertainties make it very hard to talk about the financial viability of chatbots. What do you do if your $20 billion model is shut down forever by court order after it counsels the wrong person into suicide? Piper can overlook this because she is a hack with patrons - to my knowledge, she has never been paid to write by anyone outside the EA world. If she were a working writer who had to deal with chatbots driving up the cost of her website, creating knockoffs of her novels, and competing for editing gigs (let alone someone whose friend had a mental crisis after talking too long with friend computer) she might sound different.

        Zitron’s populist, conspiratorial tone reminds me of independent investigative reporters from the 1990s and 2000s who also had to find and keep paying readers. Piper just has to persuade one patron at a time that she has propaganda value.

    • CinnasVerses@awful.systems · 9 points · 5 days ago

      Kelsey Piper is a propagandist explaining Effective Altruism to centrist professionals and elected officials in the USA. She got into journalism because Vox wanted an Effective Altruism column and Effective Altruists were willing to fund it (and EA emerged out of the community around Yudkowsky). The Argument (a group blog on a Nazi site) feels like a step down from Vox (a fairly traditional media organization, although web-first).

        • CinnasVerses@awful.systems · 9 points · 5 days ago

          I wonder about her future because she is in the same niche that Scott Alexander used to have, but without his ability to build an enthusiastic online audience. I think she has the self-control not to share her weird beliefs on main, but if her patrons figure out that there is not much audience for technocratic centrism in the USA in 2026, she may be in trouble. Her friends’ biggest policy win, the legalization of prediction markets, is already getting a lot of bad press in the USA.

          • istewart@awful.systems · 8 points · 4 days ago

            if her patrons figure out that there is not much audience for technocratic centrism in the USA in 2026, she may be in trouble.

            I think Piper and Casey Newton are part of a class of media professionals, now in mature phases of their careers, who built those careers around posting online and assume that format will necessarily continue to be the core of their work going forward. It’s not just the EA/rationalist factor, although that certainly doesn’t help; it’s the idea of building outward from the Twitter hot-take and resulting discussion. A Substack post like the one we’re examining is a superset of tweets; the tweets are not a distillation of longer-form writing. (And also, of course, Substack itself is an attempt to cram simple blogging into a financialized walled garden, but that’s a separate issue.) People aren’t just disengaging from the 2010s formats of social media, they’re getting sick of that entire way of thinking. So these people who have bounced around from one fragile Web outlet to another, all the while clinging to their Twitter audience to drive their careers, are at substantial risk no matter what they believe. I don’t doubt that their financial backers will keep throwing good money after bad, though, even if they do cut loose a few of the line workers. After all, Scientology still manages to cling to prime real estate in this day and age.

            I’d also put people like Jamelle Bouie in this class, but Jamelle a) writes for the New York Times, for better or worse, and b) consciously considers himself part of a broader, enduring historical dialogue and struggle, not someone standing on a capstone or culmination of historical progress who can safely ignore history, as Piper presents herself here.

            • CinnasVerses@awful.systems · 7 points · 4 days ago

              I agree that many people launched careers in journalism or science communication by being on Twitter in the 2010s, and that many people tweet, skeet, or blog because they hope the same thing will happen to them even though Old Media has no more money to sponsor them with.

              I put Kelsey Piper in a different place than Ezra Klein, Matt Yglesias, or Scott Alexander because AFAIK she never built a huge and engaged online audience. Piper is paid by Effective Altruist organizations to write Effective Altruist messages on third-party sites. That is why I call her a hack: she is in the economic position of a PR worker but pretends to be a journalist. She has not shown that anyone else is willing to pay her to write.

              Edit: Her only media appearance that I can find that is not with an EA, Rationalist, or Libertarian outfit is on something called the Frames of Space podcast this spring. Compare Bret Devereaux, collecting bylines and podcast appearances, with a very engaged comment section and a paying Patreon fandom. Devereaux is a working writer and speaker who works to develop new sources of income; Piper is a propagandist whose entire career has been funded by Effective Altruists, mostly friends of her old schoolmate Caroline Ellison.

              • istewart@awful.systems · 6 points · 4 days ago

                Moreover, I think we agree that the EA funders will continue to pursue astroturfing places like Twitter and Substack well past the point that provides any effective entry into the mainstream public dialogue. Your point about the prediction market hype, and the gambling bubble more generally, indicates a likely catalyst of that collapse.

    • CinnasVerses@awful.systems · 5 points · 5 days ago

      I advise being very cautious about consuming Zitron’s posts, but the same is true of Piper. Many coders are using chatbots, but I don’t know of evidence that it makes them more productive since the “where is all the AI code?” study last year (especially when we consider the whole software lifecycle and not just lines of code pushed to codeberg).

      The paragraph about “what if you assume that all these pathological liars and PR hacks are not lying, wouldn’t that imply something amazing?” reminds me that she is not trained as a journalist.

      • scruiser@awful.systems · 9 points · 4 days ago

        I advise being very cautious about consuming Zitron’s posts

        He has a dramatic and vitriolic style, but as dgerard says, he has also dug through the numbers. I see lots of criticism of Ed’s style, but not nearly as much substantive criticism of the hard numbers he has come up with. The LLM companies put out contradictory and obfuscated numbers, and taken naively they seem to contradict Ed’s numbers, but as Ed has shown many, many times, when you start trying to un-obfuscate them they start looking really bad for everyone betting on LLMs.

        Many coders are using chatbots, but I don’t know of evidence that it makes them more productive

        So more and more coders are coming around to “actually AI code is okay”… but as we’ve seen repeatedly with LLM generated content, it is very easy for people to “Clever Hans” themselves and convince themselves LLMs are contributing more than they actually are, so I am not going to trust anecdotal reports.

        • David Gerard@awful.systems (mod) · 5 points · 2 days ago

          yeah, complaints about Ed’s tone tend to be in direct proportion to how many numbers he’s brought (and peppered with “fuck”)

      • gerikson@awful.systems · 5 points · 5 days ago

        I take Zitron’s takes with a massive grain of salt, but I think the fundamental difference between him and rats is that for him, AI is just another technology. He’s looking at the figures, seeing the adoption, and not premising his arguments on the supposition that Anthropic’s Claude is literally gonna escape and kill us all.

        Piper says she’s fine with paying $100/month for Claude. OK, but how large is the total addressable market for that kind of monthly expenditure - especially in a world where costs are rising? I’ve seen people stating that because they personally spend $200 on streaming services, increasing that load by 50% monthly is no big deal for them. But streaming services are much more mainstream than AI agents, and crucially, adding another subscriber to them is basically zero-cost for the provider on the margin. Not so with AI! The more people use them, the more they cost for the provider!

        We’re seeing “pricing adjustments” from both Anthropic and Microsoft, which sure doesn’t align with the idea that they have a huge inference pricing margin cushion. Everything is gonna get more expensive - fuel, chips, employees (who are gonna expect to be compensated for their own rising costs). Just based on what I’m reading in the news, the analysis tilts over in Ed’s favor.

        • David Gerard@awful.systems (mod) · 12 points · 5 days ago

          hello hello AI coverer here, Ed brings the numbers, which is insanely valuable work, and he’s at the stage where people just tell him shit now (it’s a great stage to be at), and Piper is a fucking idiot as usual