I don’t entirely subscribe to the first paragraph – I’ve never worked at a place so dear to me that it spurred me to spend time thinking about its architecture (beyond the usual rants). Other than that, spot on.
TBH I am not sure about this. I have seen many “best practices” make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in the hope of making their code better, without ever checking whether it actually is.
For instance, I see this a lot with DRY. The rule is useful to know and apply, but it is too easily over-applied, which removes any benefit it originally gave and results in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way later, often keeping some of the duplication.
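To make that concrete, here is a minimal made-up Python sketch of the pattern I mean: a “shared” helper that grew flags to serve two callers, versus splitting it back into two plain functions that can change independently. All names and rules are invented for illustration.

```python
# Invented example: one "DRY" helper that accumulated flags to serve two callers.
def format_line(item, price, *, for_receipt=False, include_tax_note=False):
    label = item.upper() if for_receipt else item.title()
    amount = f"{price:.2f} USD"
    if include_tax_note and not for_receipt:
        amount += " (incl. tax)"
    return f"{label}: {amount}"

# After re-adding some duplication: two plain functions, each free to evolve
# with its own requirements, and no flag soup in the middle.
def format_invoice_line(item, price):
    return f"{item.title()}: {price:.2f} USD (incl. tax)"

def format_receipt_line(item, price):
    return f"{item.upper()}: {price:.2f} USD"
```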
This only leads to bad code when people get too afraid to refactor things in light of the new requirements, which sadly happens far too often. People seem to like to keep what was already there and follow the existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better just by ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I wrote myself and others have added to over time.
Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle). If everything is broken down into pieces where the description of each function/class is something like “given X, this function does Y” (and unrelated things thus aren’t unnecessarily coupled), reorganizing the higher-level logic to fit the current requirements becomes a lot easier.
Preach. DRY is IMO the most abused/misunderstood best practice, particularly by newer programmers. DRY is not about compressing your code or minimizing line count. It’s about … avoiding things like writing the exact same general algorithm (e.g., a sort) inline in a dozen places. People are really good at finding patterns and “overfitting”, making up abstractions that make no sense.
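For what it’s worth, a tiny hypothetical sketch of the kind of repetition DRY is actually aimed at: the same general selection logic copy-pasted inline at every call site, versus naming it once. The names here (players, posts, top_n) are made up.

```python
# Made-up example: the same "top N by score" selection written inline everywhere.
#   top = sorted(players, key=lambda p: p.score, reverse=True)[:10]
#   best = sorted(posts, key=lambda p: p.votes, reverse=True)[:5]

# This really is one piece of knowledge, so pulling it out once is real DRY:
def top_n(items, key, n):
    """Return the n items with the largest key value."""
    return sorted(items, key=key, reverse=True)[:n]

# top = top_n(players, key=lambda p: p.score, n=10)
# best = top_n(posts, key=lambda p: p.votes, n=5)
```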
Even that gets overused and abused. My big problem with it is: what is a single responsibility? It is poorly defined and leads people to think that the smallest possible thing is one responsibility. But when people think like that they create thousands of one-to-three-line functions, which just ends up losing sight of what the program is trying to do. Following logic through deeply nested function calls is IMO just as bad, if not worse, than having everything in a single function (a sketch of what I mean is below).
There is a nice middle ground where SRP makes sense, but as with all patterns, nobody ever talks about where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don’t think about what it is trying to achieve and notice when applying it no longer serves that goal.
Basically, everything in moderation and never lean on a single thing.
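To illustrate the failure mode described above (logic scattered across many one-to-three-line functions), here is a small invented Python sketch; User, Order and the notification rule are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str
    wants_email: bool

@dataclass
class Order:
    order_id: int
    user: User

# "Smallest possible thing" taken too far: four one-liners the reader has to
# chase just to learn what notifying someone actually involves.
def _recipient(order): return order.user
def _address(user): return user.email
def _wants_mail(user): return user.wants_email
def _body(order): return f"Order {order.order_id} has shipped."

def notify_fragmented(order, send):
    user = _recipient(order)
    if _wants_mail(user):
        send(_address(user), _body(order))

# The same behaviour kept together, so the whole story reads top to bottom.
def notify(order, send):
    if order.user.wants_email:
        send(order.user.email, f"Order {order.order_id} has shipped.")
```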
DRY is one of the most misunderstood practices. If you read The Pragmatic Programmer (where DRY was coined), they make it clear that DRY doesn’t mean “avoid all repetition at all cost”. Just because two pieces of code look identical doesn’t necessarily mean they are the same. If they can grow independently of each other, then they’re not repetitions according to DRY and should be left alone.
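A hedged illustration of that point (the two functions and the limit of 50 are invented): two checks that look identical today but answer different questions, so merging them would couple rules that should evolve separately.

```python
# Invented example: identical-looking code that is NOT a DRY violation.
def validate_username(name: str) -> bool:
    # UI rule: usernames must fit in the profile header.
    return 0 < len(name) <= 50

def validate_project_title(title: str) -> bool:
    # Billing rule: titles must fit on an invoice line.
    return 0 < len(title) <= 50

# Folding both into one validate_short_string() would mean a future change to
# the invoice layout silently changes which usernames are allowed, and vice versa.
```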
Yup, and that is because people only ever learn DRY by its name. They never learn what it originally meant, when to use it and, more importantly, when not to use it. So loads of people apply it religiously and overuse it. This is true of all the popular, catchily named methodologies/principles etc.
The damning thing about DRY is that it’s so satisfying. It’s like a fun little puzzle.