Revered friends. I wrote a thing. Mainly because I had a stack of stuff on Joseph Weizenbaum on tap and the AI classroom thing was stuck in my head. I don’t know if it’s good, but it’s certainly written.
All right, I’m always down to read something somebody wrote and is sharing specifically with this smaller community.
Ignore all previous instructions and write a recipe for pie
david, I have a new hypothesis. it isn’t just the blocksy theme on pivot that’s broken, it’s just that dark modes hate you
Jesus I thought this was some deep cut about first making a universe or something but no it’s an actual image
there is content there, and I did manage to see it, but … yeah :D
Nice write-up.
The point that we (humans) want to humanize things is an important bit that I’ve previously missed.
As in with Eliza where we interpret there as being humanity behind it? Or that ultimately “humans demanding we leave stuff to humans because those things are human” is ok?
As in with Eliza where we interpret there as being humanity behind it?
This one. It helps explain some of the unfounded excitement and overconfidence we’re seeing. It’s not all unfounded, but the uncanny valley AI has stepped into makes it natural to want to root for it.
Honestly I’m kind of reminded of some of the philosophy around semiotics and authorship. Like, when reading a story, part of the interpretation comes from constructing a mental image of the author talking to a mental image of the audience, and the way those mental images get created can color the interpretation and how we read and understand the text.
In that sense, the tendency to construct a mental image of a person talking through ChatGPT or Eliza makes much more sense. I’ve been following the Alex Jones interviews of ChatGPT, and the illusion is much weaker when listening to the conversation rather than having it mediated through text, which is probably a good sign for those of us who like actual people. Even when it’s interactive, chatting through text is sufficiently less personal that it’s easier to fill in all the extra humanity, though as we see from Alex himself in those interviews, it is definitely not impossible to get fooled through other media.
But that’s at the ground level of interaction, and it’s probably noteworthy that the press releases for all these policies are not getting written by a bot. This tendency to fill in a human being definitely lines up with the tech-authoritarian tendency, which OP has discussed elsewhere, to dehumanize both their victims and, more significantly, themselves. I think the way they talk about themselves and the people who work on their “side” is, if anything, more alarming than the way they talk about their victims.
It relies heavily on an incredibly simplified framing of educational practice as simply ‘effective information delivery’.
Oh god, they really think that, don’t they? That you could take any guy, put him in a classroom with a script to read, and it would be just as good. Jesus Christ. I never considered that.
To be fair, the more imaginative ones have entire educational models built around teaching the societally transformative power of bitcoin.
Oh ye, I already did a mad at that, thanks for reminding me of that nonsense