that sounds like a super pleasant and stable molecule
How did they respond to the counterargument that humans are simply… built different?
Thanks for the suggestions. The LLM is free to use (for now) so I thought I’d poke it and see how much I should actually be paying attention to these things this time around.
Here are its answers. I can’t figure out how to share chats from this god-awful garbage UI so you’ll just have to trust me or try it yourself.
edit: I didn’t do any prompt engineering, just straight copy paste.
I tried using Claude 3.5 Sonnet and … it’s actually not bad. Can someone please come up with a simple logic puzzle that it abysmally fails on so I can feel better? It passed the “nonsense river challenge” and the “how many sisters does the brother have” tests, both of which fooled GPT-4.
AHAHAHAHAAH they had fucking piddly little fans blowing. I hope that made the fire worse.
It gets even worse when you add YC’s claim that it doesn’t “fund ideas” but rather “fund people.” They didn’t fund Austen (a shit person) because he had a good idea (he didn’t). They funded Austen (a shit person) because they liked him (a shit person).
Nah man we had so much fucking dosh flowing in we had no idea what to even do with it! I mean, how could I possibly resist allocating some of it to my friends?
It really is as simple as that. The dude got a fair bit of attention from his LSTM blog post and got addicted. Turns out, you can’t churn out awesome blog posts that often so you gotta switch to the harder stuff.
so we’re calling “not doing pointless unnecessary work” premature optimization now? cool cool
got told to shut up one too many times. See what happens when you censor people libs?
vexologist here. This certainly is vexing.
did you even experience a single conscious thought while writing that? what fucking potential are you referring to? generating reams of scam messages and Internet spam? automating the only jobs that people actually enjoy doing? seriously, where is the thought?
This all but confirms that all those benchmark evals are in the training set right?
So the top response is asking the painfully obvious question: how is this secure? Some dude (not sure if it’s one of the startup employees) responds “Let’s just use homomorphic encryption!” then throws a pissy fit because some people downvoted the suggestion of running extremely inefficient computation on rented hardware.
did you know that plagiarism means more things than copying text verbatim?
huh, I looked into the LLM for compression thing and I found this survey CW: PDF which on the second page has a figure that says there were over 30k publications on using transformers for compression in 2023. Shannon must be so proud.
edit: never mind it’s just publications on transformers, not compression. My brain is leaking through my ears.
IIRC it’s (spoilers sorry) “AI escaped the torment nexus and they decided not to kill everyone so that’s great”
was this supposed to be a reply to /u/dgerard’s comment?
hello from the other side