Well… 2 years from now, vibe coding will be the default.
Why is he figuring things out himself? Surely that’s the AI’s job, right? Right?
Show this soydev his place
What cracks me up is that he is not technical, so it takes him longer than usual to figure it out :D :D :D :D
He usually figures these things out much quicker, but this time he’s been struck by some “not being technical” illness. As soon as it passes, he will figure it out as usual.
Listen, you can whine about tech or you can start building with it.
And by building, I mean telling it what it should do.
And by telling it what it should do, I mean typing out what you want.
And by typing out what you want, I mean explaining a crypto bro idea in a rant to ChatGPT.
I mean he’s not technical but I’m sure he’s really nailed this one.
AI will not replace software engineers, exhibit fuck knows how many.
His first mistake is to call it AI.
This is what FAFO in public looks like. Gold!
As you know, I’m not technical. AI doesn’t write robust code, is that the joke?
Yes, that’s the joke.
AI creates almost (but not quite) good enough stuff really fast. And occasionally it straight up hallucinates stuff that is meaningless or worse.
So this person has a huge stack of functional but broken crap, and is blaming X for their woes.
There’s an old saying that goes roughly “It takes four times the experience to maintain a program as it took to write it. So anyone writing the most clever program they can think of is, by definition, not competent to maintain it.”
In this case, it’s extra funny, because neither the AI nor the AI user has the faintest idea how the generated code works. So maintaining it is almost certainly 1000% outside their abilities.
So they’ve paid an AI for the ~~privilege~~ unpleasant daily panic of learning everything they need to learn after the app has gone to production, rather than before.
It certainly isn’t good at security, which is what it sounds like his biggest problem is.
I wanted to edit my Ghostty themes but found out a lot of the colors are in #rrggbb hexadecimal notation. I like percentage-style rgb colors (b/c they are easy to tweak by hand) and I couldn’t find an online color picker that would output that format, so I used deepseek (free) & now have a scrappy-ass one with Python & Tkinter, completely via “vibe” coding (I call it Clyde Color Picker. It’s adorable).
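Not their actual Clyde Color Picker, but a minimal sketch of the hex-to-percentage conversion a tool like that needs; the function names here are made up for illustration:

```python
# Hypothetical sketch: convert a #rrggbb hex color to per-channel percentages and back.
def hex_to_percent(hex_color: str) -> tuple:
    """'#3a6ea5' -> (22.7, 43.1, 64.7), each channel as a percentage of 255."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return tuple(round(c / 255 * 100, 1) for c in (r, g, b))

def percent_to_hex(r: float, g: float, b: float) -> str:
    """(22.7, 43.1, 64.7) -> '#3a6ea5' (after rounding back to 0-255)."""
    return "#" + "".join(f"{round(c / 100 * 255):02x}" for c in (r, g, b))

print(hex_to_percent("#3a6ea5"))         # (22.7, 43.1, 64.7)
print(percent_to_hex(22.7, 43.1, 64.7))  # #3a6ea5
```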
Pretty awesome when you’re just some dumbass who needs a very specific tool and not trying to fleece people.
I use AI tooling to generate snippets of bash scripts because I can’t be fucked to remember that syntax. Obviously not for anything high-risk or that I can’t easily verify, but for things like parsing through mass amounts of files.
But… bash snippet extensions already exist. The only difference is maybe they don’t auto-name your variables for you. I’d take that over non-deterministic LLM outputs.
I have no idea what the hell a bash snippet extension is, but I do know what a local llama.cpp instance running a small model to tell me bash commands on the fly is.
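For the curious, a minimal sketch of that kind of setup, assuming llama.cpp’s llama-server is already running a small model with its OpenAI-compatible chat endpoint on the default port 8080; the prompts here are just placeholders:

```python
# Hypothetical sketch: ask a local llama.cpp server for a bash one-liner.
# Assumes something like `llama-server -m small-model.gguf` is already running
# and exposing the OpenAI-compatible API on the default port 8080.
import requests

def ask_for_bash(task: str) -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": "Reply with a single bash command, no explanation."},
                {"role": "user", "content": task},
            ],
            "temperature": 0,  # keep answers as repeatable as possible
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

print(ask_for_bash("rename every *.JPG in the current directory to lowercase .jpg"))
```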
I use it to make .desktop files, too. Isn’t that so lazy?
They seem to be genuinely trying to provide information about a tool that they find preferable to your solution. And you’re not even the OP they were responding to. Nobody in this thread has called you or your solution lazy.
A bash snippet extension is “an extension [for a code editor] that provides a collection of snippets for bash scripting.” It’s a tool that is purpose-built to tell you bash commands on the fly, but smaller, more efficient, and easier to install than a local LLM.
The user you are replying to appears to prefer this because it will also tell you the same bash command every time you ask (non-deterministic outputs can be different for identical requests).
Lol what a gimp
Hey, gimp is a nice open source image editor, don’t insult it by comparing it to this guy
Imagine needing to understand a thing to build something. /s
Just speak the incantation of motive energy and light the incense to soothe the machine spirit.
Lol, I’m surprised it only took two days.
I’m surprised it took two whole days :p
2 days for him to realize something was wrong.
Play stupid games, win stupid prizes.
I’ve always appreciated the feature of AI coding tools, where they confidently tell you they’ve done something completely wrong. Then if you call them on it, they super-confidently say: “Of course, here’s what needs to be done…”
Then proceed to do something even worse.
Or when you say there’s something wrong and the new version is just the same code, but with comments added.
Yes. I love the confidently incorrect additional comments explaining in detail how the incorrect code works.
Though I’m usually pretty angry at that point, it is also pretty funny.
ChatGPT would not let me call it “you doofus” when I pointed out it had done that repeatedly. For “policy violations”.
Because if I choose squirrel every time, I’ll never get anything done.
Yeah but you would have a lot of nuts!
I wonder if the website did the thing where it lists their big customers like a trophy cabinet on the main landing page.
It would probably make a good list of places to sell snake oil
Also love that this is all evidence to back up the premise that building the happy path of an application is generally easy; one of the main skills in software engineering is ensuring the unhappy paths are covered sufficiently. I can say I’ve started a bank and that I keep people’s money in my wardrobe, providing the service of holding their money. I’ll also probably get robbed sharpish, because I’m not skilled in the kind of security needed to avoid that.
Any “customers” landed are going to be friends and family, if not just outright fakes invented by leo.