AI finally allowing grooming at scale is the kind of thing I’d expect to be the setup for a joke about Silicon Valley libertarians, not something that’s actually happening.
Computer scientists hate him: solve the halting problem by smashing all running computers with a sledgehammer.
Sure, we’ve been laying the groundwork for this for decades, but we wanted someone from our cult of personality to undermine democracy and replace it with explicit billionaire rule, not someone with his own cult of personality.
Reading the article explains the article, my dude.
I know next to nothing about C++ but I do know that I heard that closing line in the original voice and got goosebumps.
I’m pretty sure you could download a decent Markov chain generator onto a TI-89 and do basically the same thing with a more in-class-appropriate tool, but speaking as someone with dogshit handwriting, I’m so glad to have graduated before this was a concern. Godspeed, my friend.
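For anyone curious how low the bar is, here’s a minimal sketch of a word-level Markov chain generator in Python; the corpus file and chain order are my own assumptions, and the actual TI-89 version would be a much cruder TI-BASIC affair.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=60):
    """Random-walk the chain to spit out `length` words of plausible-looking filler."""
    state = random.choice(list(chain.keys()))
    output = list(state)
    while len(output) < length:
        followers = chain.get(state)
        if not followers:                      # dead end: restart somewhere random
            state = random.choice(list(chain.keys()))
            followers = chain[state]
        output.append(random.choice(followers))
        state = tuple(output[-len(state):])
    return " ".join(output)

# "notes.txt" is a stand-in for whatever course material you feed it.
chain = build_chain(open("notes.txt", encoding="utf-8").read())
print(generate(chain))
```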
Obviously mathematically comparing suffering is the wrong framework to apply here. I propose a return to Aristotelian virtue ethics. The best shrimp is a tasty one, the best man is a philosopher-king who agrees with everything I say, and the best EA never gets past drunkenly ranting at their fellow undergrads.
I mean, that kind of suggests that you could use chatGPT to confabulate work for his class and he wouldn’t have room to complain? Not that I’d recommend testing that, because using ChatGPT in this way is not indicative of an internally consistent worldview informing those judgements.
You’re doing the lord’s simulation-author’s work, my friend.
Since the Middle Ages we’ve reduced God’s divine realm from the glorious kingdom of heaven to an office chair in front of a computer screen, rather than an office chair behind it.
Oh the author here is absolutely a piece of work.
Here’s an interview where he’s talking about the biblical support for all of this and the ancient Greek origins of blah blah blah.
I can’t definitively predict this guy’s career trajectory, but one of those cults where they have to wear togas is not out of the question.
You’re missing the most obvious implication, though. If it’s all simulated or there’s a Cartesian demon afflicting me then none of you have any moral weight. Even more importantly if we assume that the SH is true then it means I’m smarter than you because I thought of it first (neener neener).
This feels like quackery but I can’t find a goal…
But if they both hold up to scrutiny, this is perhaps the first time scientific evidence supporting this theory has been produced – as explored in my recent book.
There it is.
How sneerable is the entire “infodynamics” field? Because it seems like it should be pretty sneerable. The first referenced paper on the “second law of infodynamics” seems to claim that information has some kind of concrete energy, which brings to mind that experiment where they tried to weigh someone as they died to identify the mass of the human soul. It also feels like a gross misunderstanding to describe a physical system as gaining or losing information in the Shannon framework: unless the size of the possibility space is changing, the total information doesn’t change. All strings of 100 characters from the same alphabet carry the same amount of information, even though only a very few actually mean anything in a given language. I’m not sure it makes sense to talk about the amount of information in a system naturally increasing or decreasing outside of data loss in transmission? IDK, I’m way out of my depth here, but it smells like BS, and the limited pool of citations doesn’t build confidence.
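To make the Shannon complaint concrete (a toy sketch in Python, my framing rather than anything from the paper): the information is a property of the source distribution, not of any particular string, so a uniform source over fixed-length strings has the same entropy no matter which string you observe, and a skewed source has less entropy per character because the distribution changed, not because the physical system “shed” anything.

```python
import math

def entropy_bits(dist):
    """Shannon entropy H = -sum(p * log2 p) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

ALPHABET = 27   # 26 letters plus space; an arbitrary toy choice
LENGTH = 100

# Uniform source: every 100-character string is equally likely, so each one
# carries exactly the same LENGTH * log2(ALPHABET) bits of self-information,
# meaningful English or keyboard mashing alike.
print(f"uniform source: {LENGTH * math.log2(ALPHABET):.1f} bits per string")

# Skewed source (made-up weights, normalized): lower entropy per character,
# because the *distribution* is different, not because the system lost information.
weights = [10, 8, 7, 6, 6, 5, 5, 4, 4, 3, 3, 3, 2, 2, 2, 2] + [1] * 11
skewed = [w / sum(weights) for w in weights]
print(f"skewed source:  {LENGTH * entropy_bits(skewed):.1f} bits per string")
```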
Anyone remember when Chrome had that issue with validating nested URL-encoded characters? Anyone for John%%80%80 Doe?
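For anyone who missed that era, here’s a generic sketch of the nested-encoding class of bug in Python; this isn’t the actual Chrome code path, just an illustration of why a validator that decodes once and a consumer that decodes again don’t agree on what the string contains.

```python
from urllib.parse import unquote

# Hypothetical payload: %25 is a percent-encoded percent sign, so the awkward
# bytes only appear after a second round of decoding.
name = "John%2580%2580 Doe"

validated = unquote(name)      # validator's view: "John%80%80 Doe", looks like plain text
consumed = unquote(validated)  # consumer decodes again: raw 0x80 bytes, invalid UTF-8,
                               # silently replaced with U+FFFD by default

print(validated)
print(consumed)
```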
I was watching an old Day9 stream today, and this story is bouncing off of some comments he made about the importance of degenerate players to competitive design. Like, this is a pretty dumb outcome of regulation, but at the same time, if you try to write regulations that carve out an exception for what “everybody knows” is in something, that process gets taken advantage of by self-interested manufacturers who will figure out how to convincingly argue that “everybody knows” their generic shampoo contains peanuts and shellfish or whatever. And in the context of this kind of regulation, that degenerate play will get people killed.
I mean, up until this year they would both have been beaten by Clallam County, WA, which had matched the national winner since Gerald Ford. So way to identify the best of the losers, I guess?
There’s also a common argument that the problem in AV accidents is primarily the other human drivers, which is a classic case of “if everyone just immediately changed over to doing things this way it would solve the problem!”
Honestly the most surprising and interesting part of that episode of Power(projection)Points with Perun was the idea of simple land mines as autonomous lethal systems.
Once again, the concept isn’t as new as they want you to think, moral and regulatory frameworks already exist, and the biggest contribution of the AI component is doing more complicated things than existing mechanisms but doing them badly.
There’s got to be some kind of licensing clarity that can be actually legislated. This is just straight-up price gouging through obscurantism.