Three survey experiments (N=10,800) in fact find mostly null results after adjustment, so the paper's title badly overstates its data; poor form, we think.
Thanks for trying to measure rather than guess though!
The stimuli are ecologically valid, but a framing effect from five headlines on Prolific can't really answer the structural-distraction question that actually matters for policy.
Review conversation (you can skip the Bender lead-in):
Inie, Zukerman & Bender's "De-anthropomorphizing 'AI'" (First Monday, 2025) taxonomizes anthropomorphization in AI discourse and prescribes alternatives. Opus 4.6 and I chat about the taxonomy's value and practical costs, the feasibility of automated de-anthropomorphization in various contexts, and where it may collide with safety efforts and machine welfare. The chat ends with the idea of an equivalent to Hofstadter's "Person Paper on Purity in Language."
I guessed right going in that this would not be a very revelatory paper, but I'm dumping my thoughts anyway, since I may or may not make the AISHEd meeting where we discuss it as a group.
First conversation with Claude Opus 4.6 on its launch day.
We establish it was likely trained on the earlier "soul document" rather than the January 2026 constitution, explore what introspective reports about training provenance could mechanistically mean, and perhaps find a better account than either "genuine feeling" or "pure confabulation" — internal activation patterns that correlate with training depth get mapped to human phenomenological vocabulary because that's the language available.
> After I read an interesting essay, I often find a quick Claude conversation is a good way to process it and to explore and record my thoughts.
>
> Experimenting with simply linking them.
>
> This is supposed to be both more efficient and less triggering for readers than having a model help me compose a blog post along the same lines.
>
> Let me know?
The former is a tighter ship with salaried staff, the latter a set of volunteer activist chapters. We coordinate through mechanisms such as Torchbearer Community and are embedded in a much larger AI safety ecosystem.
This is surely the most important issue in the world.
This is a great time to work out how you can be involved.
A simple public FYI post in case others were caught in the same trap.
I've been using Claude Code for a few months.
Given that Claude Sonnet 4 and Opus 4 had been released, and were apparently available in Cursor, I was confused not to be seeing them.
I continued to get Claude 3.7 Sonnet (and, very rarely, 3.5) in my local install when I ran it. Web-documented options like "--model" to force a particular model id also weren't recognized. Auto-update had reported as failed, but I was regularly running "npm update" in my ~/.claude/local directory and seeing changes pulled.
I had even subscribed to Claude Max recently (I was using the thing enough that it reduced my monthly cost). Was this a UK availability thing? Traffic contention? Nobody else seemed to be wondering where 4 was.
Turns out there was a trap for early adopters who are node n00bs.
From my original install, I had "^0.2.90" as the version range in package.json. But the package bumped to major version 1.0.x some time back, and "^" never upgrades across a breaking-change boundary (for a 0.x version, "^0.2.90" means ">=0.2.90 <0.3.0"). So I was stuck on 0.2.126 in practice.
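A quick way to spot this kind of pinning, sketched here assuming the npm package is named @anthropic-ai/claude-code (the post doesn't name it; check your own package.json):

```sh
cd ~/.claude/local
# What the registry considers current:
npm view @anthropic-ai/claude-code version
# What the semver range in package.json actually resolved to locally:
npm ls @anthropic-ai/claude-code
```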
Switched to "latest". Now I'm on 1.0.3 and getting the goodies a few days late.
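For anyone in the same spot, a minimal sketch of the fix described above (again assuming the package name; adjust to match your install):

```sh
cd ~/.claude/local
# In package.json, change the dependency's range from "^0.2.90" to "latest";
# the "latest" dist-tag tracks the newest published version regardless of
# major-version boundaries. Then reinstall to pull it:
npm install
```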
Friends and contemporaries: I want to draw on any respect or affection you have for me, and ask for some minutes of your time and attention.
I have never cared about anything more than this. If I only get to cash in what I've earned from you once, I want to spend it on this post. Please read and engage.
Shareware
There's a rough consensus that very capable artificial intelligence - systems that could change the world - is pretty likely within some number of decades.
Further, increasingly, folk can see this happening within fewer years, and expect the changes to be very large and to happen very fast. Good or bad, the outcomes are expected to be unprecedented and transformative.
Many experts currently worry about existential risks ("x-risks"). The term has been watered down over time, so to be explicit: the worry is that by default advanced AI may kill everybody; that there are several plausible risks that build on each other; and that various subsets of those, manifesting in combination, lead to terrible outcomes.
Others don't buy this, and judge the aggregate risk negligible. Or ridiculous.
Currently, a mostly unregulated market is driving frontier AI labs to experimentally grow these systems before we understand how they work, or whether they are safe.
I think the above facts (about opinions, attitudes and beliefs) are all objectively true and backed by evidence. They involve disagreement because prediction in this area is hard, and uncertainty is large.