

I am the journeyer from the valley of the dead Sega consoles. With the blessings of Sega Saturn, the gaming system of destruction, I am the Scout of Silence… Sailor Saturn.




There have been a couple of cases of generative AI graphics being used in anime recently:
Ascendance of a Bookworm used AI backgrounds in the opening song
Liar Game featured an AI chandelier (xcancel link) (this one is brand new so the studio hasn’t responded yet).
This sucks because I wanted to like Liar Game (the manga is excellent though. Read it! Read it!)


At my job I have spent many hours fending off, reverting, or fixing automated AI slop code changes. So depending on your definition of “tearing through”…
Like I spent the better part of a day fixing a C++ signed integer overflow that no one actually cares about because it was the only way to ward off a robot repeatedly trying to fix it in terrible unreadable ways. I could have spent that day maximizing shareholder value but I had to fend off a robot instead.
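For anyone wondering why a bug "no one actually cares about" still eats a day: signed integer overflow is undefined behavior in C++, so even a harmless-looking wraparound has to be rewritten carefully to keep sanitizers (and robots) quiet. A minimal sketch of the boring-but-readable style of fix — hypothetical code, not the actual change from my job:

```cpp
#include <cstdint>
#include <limits>

// Hypothetical example: adding two int32_t values can overflow,
// which is undefined behavior in C++. One readable fix is to do
// the arithmetic in a wider type and clamp the result.
int32_t saturating_add(int32_t a, int32_t b) {
    // int64_t can hold any sum of two int32_t values, so no UB here.
    int64_t sum = static_cast<int64_t>(a) + static_cast<int64_t>(b);
    if (sum > std::numeric_limits<int32_t>::max())
        return std::numeric_limits<int32_t>::max();
    if (sum < std::numeric_limits<int32_t>::min())
        return std::numeric_limits<int32_t>::min();
    return static_cast<int32_t>(sum);
}
```

The point is that the readable fix takes a few deliberate lines, whereas the robot's version tends to be an unreadable cast soup that technically shuts the warning up.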


This post has all the usual cliches, exaggerations, lies, and unfounded optimism you’d expect in a blog post about a company forcing AI down their workers’ and users’ throats. I’ll try to avoid sneering at every sentence.
Delegating elements of Site Reliability Engineering to an agent does not necessarily introduce an entirely new class of risk; it should inherit the constraints of existing production systems. Well-run production environments already rely on strict access controls, audit trails, and clear separation between observation and action. […] In that sense, the challenge is less about “trusting the agents”, and more about building trust in the same guardrails we already apply to any production system.
This might sound good at first, but it falls apart under the slightest scrutiny. There is a reason that companies don’t open their intranets to the public despite having fine-grained access controls. Or in other words: “I’m getting a lot of questions already answered by my ‘does not necessarily introduce an entirely new class of risk’ T-shirt.”
Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.
And right after arguing that LLMs are safe if you have a perfect permissions model, now he’s proposing letting one #yolo configure a git server or something? This is the sort of thing that could easily lead to random security issues.
I suspect that “troubleshoot a Wi-Fi connection issue” will work about as well as existing network troubleshooting wizards (i.e. terribly), and that we don’t actually need to reinvent the software wizard, just less deterministic.


Detective: “So Magic Eight Ball. I’m just gonna ask you outright. Were you the killer?”
Magic Eight Ball: “It is decidedly so.”
Some Guy: “Oh my god.”


It’s bad for me too.
I’m trying to hang in there until I get some healthcare stuff taken care of over the next year or two, but it is getting increasingly difficult. Most of the good people at my job have been driven out, quit, or been poached by other (AI) companies.
By this point a majority of the programmers at my job (or at least the ones most active on the mailing lists) are LLM true believers who think that the end times are near. My management chain has explicitly said that LLM programming is required, and that a subsequent increase in “productivity” is expected with it. My department got renamed to something with “AI” in the name. I constantly field questions from people who want me to read a screen full of LLM nonsense, or who push back when I tell them something, claiming that the chatbot said differently.
There’s always some frantic push to adopt “MCP” or “Skills” or whatever the next fad will be without any guidance as to how or why. If I ignore this I get nastygrams from my manager.
And at my last doctor visit I had elevated blood pressure :)


I saw this headline:
Engineer Says It’s Time to Rebuild the Twin Towers With Giant Data Centers, Huge Tech Labs, and Anti-Aircraft Lasers on the Roof
And thought nothing in the body of the article could possibly top the headline. I was wrong:
Now, a long-shot effort seeks to rebuild them in an unlikely locale: Chicago
he’s even gone so far as to get a tribute to the original World Trade Center tattooed on his arm.
fireproof steel I-beams
a dedicated fire department
Journalists know that they don’t have to cover every random crank on the internet right? I suppose it’s my own fault for clicking on it.


Apropos of nothing, archive.is DDoSed a website and altered page snapshots as part of some inscrutable net drama, causing Wikipedia to stop using it.
So if anyone cares about such things here’s a wayback machine link instead
That said: lmao. Where are all these billion-dollar LLM-run Tamagotchi-feeding startups hiding? Still in the strawberry-counting phase of starting a business?


I mean maybe it’s poorly worded and there’s only one set of beeps at the end. But then why would the protagonist be reminded multiple times?
Unless she’s remembering all the times in the past that microwaving bland chicken reminded her of the world being orderly?
But now I think I’m thinking too deeply about microwaves.


All I could think about is who has a microwave that beeps while it’s still cooking?


Against all odds they found a worse future to pursue than forcing employees to strap VR goggles to their heads 40 hours a week, and have pivoted towards AI.


New AI legal filing sanctions just dropped: https://storage.courtlistener.com/recap/gov.uscourts.ca6.152857/gov.uscourts.ca6.152857.50.2.pdf
I don’t have time to read over it completely yet, but here’s a taste:
That briefing repeatedly misrepresented the record, cited non-existent cases, and cited cases for propositions of law that they did not even discuss, much less support. As explained below, Irion’s and Egli’s misconduct warrants the sanctions laid out in Section II.C.
If we included typos and other errors that are arguably, but not clearly, a misrepresentation or fake citation, we would be looking at far more misstatements of fact and law
Irion and Egli did not respond to these directives. Instead, they said the show cause order was “void on its face for failing to include a signature of an Article III judge,” was “motivated by harassment of the Respondent attorneys,” and “reflect[ed] illegal ex-parte [sic] communications within this Court.”


https://www.reddit.com/r/law/comments/1rrhzhc/doge_lead_in_deposition_details_how_he_emailed/
Behold: the Silicon Valley masters of opsec.


Last week 404 Media reported on some DOGE deposition videos.
The videos were since removed via court order, but are available on Internet Archive.
For anyone unfamiliar: this slots under TechTakes because DOGE is basically Elon Musk’s army of naive fascist Silicon Valley tech-bros rampaging through the federal government with ChatGPT, SQL, and unsecured thumb drives.
This article is behind a paywall, but links to the following video snippets from the depositions:
https://www.instagram.com/reels/DVtOiqJjcu4/ https://www.instagram.com/reels/DVyhJT9jf4f/
For example here Justin Fox talks about deleting federal grants that he considered in-scope for an anti DEI executive order: https://www.instagram.com/reels/DVtOiqJjcu4/
Q: “Why is a documentary about Holocaust survivors DEI”
A: “It’s the gender based story 🤷 that’s inherently discriminatory to focus on this specific group 🙄.”
Q: “It’s inherently discriminatory to focus on what specific group?”
A: “The gender based. So, females 🤷 during the Holocaust.”
He goes on to clarify that it’s DEI because it focuses on Jewish women. Oh that’s OK then!
There is a lot of video to work through, but I know there are more comedy-gold, rage-inducing, punchable-nazi snippets within.


There’s only one thing that’s advertised as not-waterproof that I’ll risk using underwater, and that’s Casio wristwatches. “Water resist” is a huge understatement for them; the things are indestructible.
(This comment sponsored by Casio)


Apparently this sort of machine learning training pitfall, which I learned about a decade ago in an undergraduate-level class I was like halfway paying attention to at a party school, is now evidence of the impending AI apocalypse.


The editor in chief has an apology: https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/ – the commenters are not happy.
The reporter in question wrote an explanation here: https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
The AI part of the explanation is about what you’d expect from an AI enthusiast caught with his hand in the cookie jar and trying to blame the danger-tools as much as he can. Though it also shows that he felt compelled to work during an acute covid infection, and yikes to that work life balance.


Elon Musk pivots from mars colony tweets to moon colony tweets (xcancel).
I’m not quite clear on what “self-growing” means here given how inhospitable the moon is.


The first issue filed is called “Hello world does not compile” so you can tell it’s off to a good start. Then the rest of the six pages of issues appear to be mostly spam filed by some AI guy’s rogue chatbot.


tl;dr: someone made a thing where chatbots control a computer, variously called clawdbot, moltbot, and openclaw https://github.com/openclaw/openclaw
someone else made a thing where these chatbots can chat at each other https://www.moltbook.com/
and now all the ai people are freaking out about how game changing chatbots doing computer tasks (dangerously and expensively) is. could this be a robot consciousness? the end of the economic order? an excuse for the bubble to go on for another fiscal quarter?
I might be missing something but I think that’s literally it.