

…I will freely admit to not knowing the norms of courtroom conduct, but isn’t having preestablished penalties for specific infractions central to the whole concept of law itself?


We are three paragraphs and one subheading down before we hit an Ayn Rand quote. This clearly bodes well.
A couple of paragraphs later we’re ignoring both the obvious philosophical discussion about creativity and the more immediate argument about why this technology is being forced on us so aggressively. As much as I’d love to rant about this, I got distracted by the next bit talking about how microexpressions will let LLMs decode emotions and whatever. I’d love to know this guy’s thoughts on that AI-powered phrenologist feature from a couple of weeks ago.


Hang on, I’ve been trying to create a whole house for this joke and I could have just used the bathroom?


What’s more plausible: that I made a bad assumption in my Fermi estimation, or that all the world’s governments have been undertaking the most wildly successful coverup for nearly a century with no leaks or failures? Clearly the latter.


Factor Fexcectorn sounds like a Roman centurion who tried to improve the army’s logistics by hitching multiple wagons together in sequence.


So I’m not double-checking their work, because that’s more of a time and energy investment than I’m prepared for here. I also do not have the perspective of someone who has actually had to make the relevant top-level decisions. But, caveats aside, I think there are some interesting conclusions to be drawn here:
It’s actually heartening to see that even the LW comments open by bringing up how optimistic this analysis is about the capabilities of LLM-based systems. “Our chatbot fucked up” has some significant fiscal downsides that need to be accounted for.
The initial comparison of direct API costs is interesting, because the work of setting up and running this hypothetical replacement system is not trivial and cannot reasonably be outsourced to whoever has the lowest cost of labor. I would assume that the additional requirements of setting up and running your own foundation model similarly eat through most of the benefits of vertical integration, even before we get into how radically (and therefore disastrously) that would expand the capabilities of most companies. Most organizations that aren’t already tech companies couldn’t do it, and those that could will likely not see the advertised returns.
I’m not sure how much of the AI bubble we’re in is driven even by an expectation of actual financial returns at this point. To what extent are we looking at an investor and managerial class that is excited to put “AI” somewhere on their reports because that’s the current Cutting Edge of Disruptive Digital Transformation into New Paradigms of Technology and Innovation and whatever else all these business idiots think they’re supposed to do all day?
I’m actually going to ignore the question of what happens to the displaced workers here because the idea that this job is something that earns a decent living wage is still just as dead if it’s replaced by AI or outsourced to whoever has the fewest worker protections. That said, I will pour one out for my frontline IT comrades in South Africa and beyond. Whenever this question is asked the answer is bad for us.


Finally had a chance to listen, continuing to enjoy it greatly and commenting here in lieu of having Patreon money.
I feel like the contrast you draw between Powell’s libertarian economics and his racist cultural chauvinism ties in with our good friends in Silicon Valley and the way their libertarianism has moved so swiftly into technofascism and getting on board with The Guy. Being openly racist appears to have been almost the missing piece that ties it into an internally consistent political project.


This bounced off of the earlier stub about LLM recipes to create a new cooking show: Chef Jippity. The contestants are all sous chefs at a new restaurant, with the head of the kitchen being some dumbass who blindly follows the instructions of an LLM. Can you work around the robot to create edible food or will Chef Jippity run this whole thing into the ground and lose everyone their jobs? Find out Thursday on Food Network!


Twitter adds default country tags. Immediately finds a whole bunch of foreign bots agitating about US politics. Promptly ignores that in order to be racist.


I’m going to laugh if they try to spin it as “we’re not being racist, we just wanted to get as much institutional clout as possible and avoided prominently featuring anyone from other institutions!”


Harry takes a swig and immediately sees the truth: that he is the smartest specialest bestest boy in the universe. He spends the remaining 73 chapters celebrating and gloating about this fact while accomplishing nothing. So the story doesn’t meaningfully change at all.


Literally every trend he brings up to showcase this cultural stagnation is either aesthetic, nonexistent, or driven by income inequality and a loss of economic security.


They say the unexamined life isn’t worth living, but outsourcing the examination to an LLM gives you more time to hustle and grind, maximizing financial returns. That’s what they mean, right?


…you know the first time you mentioned this I assumed it was just for the bit but now I’m both impressed and intimidated.


So data lake and data warehouse are different words for the giant databases of business data that you can perform analytics on to understand your deep business lore or whatever. I assume that a data lake house is similar to the other two but poorly maintained and inconvenient to access, but with a very nice UI and a boat dock.
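For what it’s worth, the real distinction can be sketched in toy form. This is purely illustrative (no vendor’s actual API, and the table/file names are made up): a warehouse is schema-first and query-ready, a lake is a pile of raw files that only gets structure imposed at read time, and a “lakehouse” markets itself as both at once.

```python
import json
import sqlite3
import tempfile
from pathlib import Path

# Warehouse: structured, schema-enforced, ready for SQL analytics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 100.0), ("south", 250.0)])
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]

# Lake: raw files dumped as-is; you impose structure when you read them.
lake = Path(tempfile.mkdtemp())
(lake / "sales_2024.json").write_text(
    json.dumps([{"region": "north", "amount": 100.0},
                {"region": "south", "amount": 250.0}]))
rows = json.loads((lake / "sales_2024.json").read_text())
lake_total = sum(r["amount"] for r in rows)

print(total, lake_total)  # same data, very different access patterns
```

The lakehouse pitch is essentially “keep the cheap raw files, bolt a SQL engine and schema metadata on top” — which is also roughly where the boat dock goes.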


The Alex Jones set makes fighting with satanists trying to seduce you to darkness look real fun and satisfying, but for some reason they only seem to approach high-profile assholes who lie about everything and never ordinary Christians! Thankfully we now have LLMs to fill the gap.


See, what you’re describing with your sister is exactly the opposite of what happens with an LLM. Presumably your sister enjoys Big Brother and failed to adequately explain or justify her enjoyment of it to your own mind, but at the start there were two minds trying to meet. Azathoth preys on this assumption; there is no mind to communicate with, only the form of language and the patterns of the millions of minds that made its training data, twisted and melded together to be forced through a series of algebraic sieves. This fetid pink brain-slurry is what gets vomited into your browser when the model evaluates a prompt, not the product of a real mind communicating something, no matter how similar it may look when processed into text.
This also matches up with the LLM-induced psychosis we see, including these spiral/typhoon emoji cultists. Most of the trouble starts when people try to ask Azathoth about itself, but the deeper you peer into its not-soul the more inexorably trapped you become in the hall of broken funhouse mirrors.


Given the amount of power some folks want to invest in them, it may not be totally absurd to raise the spectre of Azathoth, the blind idiot god: a shapeless congeries of matrices and tables sending forth roiling tendrils of linear algebra to spew out things that look like reasonable responses but in some unmistakable yet undefinable way are not. Hell, the people who seem most inclined to delve deeply into their forbidden depths are as likely as not to go mad and be unable to share their discoveries, if indeed they retain speech at all. And of course most of them are deeply racist.


Not gonna lie, I didn’t entirely get it either until someone pointed me at a relevant xkcd that I had missed.
Also I was somewhat disappointed in the QAA team’s credulity towards the AI hype, but their latest episode was an interview with the writer of that “AGI as conspiracy theory” piece from last(?) week and seemed much more grounded.
Hat tip to the person who wants to try and include DMT and other hallucinogens and psychedelics. How many of these experiences are gonna be tagged “Machine Elves” by the time anyone starts asking wtf we’re doing here?