• 0 Posts
  • 23 Comments
Joined 3 years ago
Cake day: July 1st, 2023


  • I’m not convinced they know much about Japan either. The akiya banks are notoriously not updated regularly, and the sites which sell them to foreigners even less so. I couldn’t find that house in the bank but it appears to be now listed by an agent. Single storey, wooden, 50 years old, in a bit of a flood zone, not even a convenience store or supermarket within a mile’s walk.

    It’s true Japan has a lot of empty houses; estimates are around 10%. Japan also has a culture of somewhat continuously demolishing and rebuilding houses, which is understandable in an earthquake-prone area. That house isn’t in the worst state for an akiya, but it clearly needs significant renovations, even before you consider that understandable earthquake anxiety and newer building standards (e.g. steel frames) mean houses like the one pictured aren’t exactly top choices to begin with.

    Also, the inheritance tax is a progressive tax, including a tax free threshold. 55% is the top tier and you need to be talking about literally millions of USD assessed value before that kicks in. Real estate is valued at less than fair market price for inheritance and gift tax purposes too. Even the most conservative internet article commenters in Japan will condemn people for avoiding their inheritance tax obligations.
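
    To put rough numbers on the “literally millions of USD” point, here’s a back-of-the-envelope sketch. The bracket figures are my reading of the National Tax Agency’s published quick-deduction table, and the real calculation first splits the taxable estate across statutory heirs before applying the rates, so treat this as illustrative only:

```python
# Illustrative sketch of Japan's inheritance-tax brackets (per statutory
# share, from the NTA's published quick-deduction table - verify current
# figures yourself before relying on this).
BRACKETS = [  # (upper bound in yen, marginal rate, fixed deduction)
    (10_000_000, 0.10, 0),
    (30_000_000, 0.15, 500_000),
    (50_000_000, 0.20, 2_000_000),
    (100_000_000, 0.30, 7_000_000),
    (200_000_000, 0.40, 17_000_000),
    (300_000_000, 0.45, 27_000_000),
    (600_000_000, 0.50, 42_000_000),
    (float("inf"), 0.55, 72_000_000),
]

BASIC_EXEMPTION = 30_000_000    # flat tax-free threshold...
PER_HEIR_EXEMPTION = 6_000_000  # ...plus this much per statutory heir

def taxable_estate(estate_yen, heirs):
    """Estate value left over after the basic exemption (floored at zero)."""
    return max(0, estate_yen - BASIC_EXEMPTION - PER_HEIR_EXEMPTION * heirs)

def tax_on_share(share_yen):
    """Tax on one heir's share via the quick-deduction table."""
    for upper, rate, deduction in BRACKETS:
        if share_yen <= upper:
            return share_yen * rate - deduction

# A 50M-yen house inherited by two heirs: only 8M yen is taxable at all,
# and each 4M-yen share sits entirely in the lowest 10% bracket.
base = taxable_estate(50_000_000, heirs=2)
print(base, tax_on_share(base / 2) * 2)
```

    So a roughly 50 million yen (~$330k USD) house split between two heirs leaves only 8 million yen taxable at all, every yen of it in the bottom 10% bracket - nowhere near the 55% tier.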

    Also no, you won’t find wolves anymore in Japan, just fucking bears. The last year has been the worst in a while for bear attacks on humans, so I’m not sure the hypothetical deer population explosion is going to be a real concern. The robot wolves are scarecrows and were designed to look like wolves in the hopes of scaring off the bears, according to the link in the post itself.

    The whole thing reads like fiction with grains of “fact” scattered throughout, hoping to avoid scrutiny by being about a subject too dry and niche to be called out on.


  • Perhaps something like this: https://lemmy.world/post/42528038/22000735

    Deferring responsibility for risk and compliance obligations to AI is bound to lead to some kind of tremendous problem; that’s a given. But the EU has been pretty keen lately on embedding requirements in its newer digital-realm laws that give market regulators access to internal documents on request.

    This is not to suggest it’s anywhere close to a certainty that an Enron will happen, though. There is still the exceptionally large problem that company executives are part of the power structures which selectively choose whom to prosecute, if their lobbyists haven’t already succeeded in watering the legislation down to be functionally useless before it’s in force. So it will take a huge fuck up and public pressure for any of the countries to make good on their own rules.

    Given that almost all the media heat from the Trump-Epstein files has been directed at individual, easy-target public personalities, completely ignoring the obvious systemic corruption by multiple corporate entities, I don’t have high hopes for that part. But if the impending fuck up and the scale of noncompliance are big enough, there’s a chance there will be audits and EU courts involved.


  • I read both of these and what struck me was how remarkably naive both studies felt. I found myself thinking: “there’s no way the authors have any background in the humanities”. Turns out there are two authors and, lo and behold, both have computer science degrees. This might explain why they seem somehow incredulous at the results - they’ve approached the problem as evaluating a system’s fitness in a vacuum.

    But it’s not a system in a vacuum. It’s a vacuum that has sucked up our social system, sold to bolster the social standing of the heads of a social construct.

    Had they looked at the context of how AI has been marketed, as an authoritative productivity booster, they might have had some idea why both disempowerment and reduced mastery could be occurring: The participants were told to work fast and consult the AI. What a shock that people took the responses seriously and didn’t have time to learn!

    I’d ask why Anthropic had computer scientists conducting sociological research, but I assume this part of the output has just been published to assuage criticism of their trust and safety practices. The final result will probably be adding another line of ‘if query includes medical term then print “always ask a doctor first”’ to the system prompt.

    This constant vacillation between “it’s a revolution and changes our entire reality!” and “you can’t trust it and you need to do your own research” from the AI companies is fucking tiresome. You can’t have it both ways.


  • Who needs pure AI model collapse when you can have journalists give it a more human touch? I caught this snippet from the Australian ABC about the latest Epstein files drop:

    [screenshot: ABC result in a Google search, listing the wrong Boris for the search term ‘23andme Boris nikolic’]

    The Google AI summary does indeed highlight Boris Nikolić the fashion designer if you search for only that name. But I’m assuming this journalist was using ChatGPT, because the Google summary very prominently lists his death in 2008. And it’s surprisingly correct! A successful scraping of Wikipedia by Gemini, amazing.

    But the Epstein email was sent in 2016.

    Does the journalist not think it more likely to be the Boris Nikolić who is the biotech VC, a former advisor to Bill Gates, and named in Epstein’s will as the “successor executor”? That info is literally all in the third Google result, even in the woeful state of modern Google. Pushed past the fold by the AI feature about the wrong guy, but not exactly buried enough for a journalist to have any excuse.