Hacker News

4 years ago by bwilliams

I read the article and didn't really feel that they disagreed with, or debunked any of the principles. It reads like they formed their own understanding of each principle and maybe disagreed with how they were taught, or how the principles are sometimes presented.

This change in outlook of existing thoughts/ideas is how many crafts grow, such as martial arts, painting, philosophy, etc (instead of stagnating). Sometimes we need to frame things in a more modern manner, and sometimes we need to discard them completely. In this case, I think re-framing the concepts is helpful, and I found it to be an interesting point of view. I agreed with a good amount of it, but I don't think we need to discard SOLID principles just yet.

4 years ago by pydry

This alludes to one of my main beefs with SOLID - the lack of a clear definition.

What counts as a "responsibility", for instance. Where I see one responsibility some people see two and vice versa.

4 years ago by barbazoo

I agree. In a way that's one of the strengths of SOLID in my opinion. All places I've worked at had slightly different versions of what e.g. SRP meant. And that's ok. It's a way to make sure your team writes code in a way their teammates would expect. Whether that's objectively good code or not doesn't matter as much to me.

4 years ago by SideburnsOfDoom

> What counts as a "responsibility", for instance. Where I see one responsibility some people see two and vice versa.

The Single Responsibility Principle is not a rigorously defined Scientific law, where the "responsibility count" can be exactly measured and minimised.

It is a subjective design guideline, with elements of experience, of context and of "I know it when I see it".

This does _not_ make it useless. It is still very useful indeed. But you do have to understand that disagreements over it are not always about "who is objectively right" but "which style makes the most sense in context".

4 years ago by pydry

shrug I think every time I've ever seen a disagreement about code quality, it's boiled down to both developers thinking "I know it when I see it" about their separate approaches.

If a set of principles lets them both think they're both correct and the other one is wrong, what exactly is the point of those principles?

This isn't just a coding thing. It's also, say, why society follows a body of laws rather than something like the 10 commandments.

4 years ago by raverbashing

Here's the problem:

Some developers will say a (data structure) Controller is a class obeying the SRP

Some others will say the class can manage itself and not need a controller, so M and C can be one thing.

Some others will argue that it's better to make 2 controllers, one for complicated thing 1 and another for complicated thing 2, both based on the same Model.

4 years ago by bick_nyers

I think that is a feature, not a bug. I think it makes sense in different contexts for "responsibility" to be abstracted differently. The two extremes are lots of files and functions versus fewer files and functions, and the optimal balance to strike probably depends on whether it is important for people to focus on the modules, or on the arrangement of those modules. For high-performance C++ libraries with a good design/architect, it could make sense to split into a lot of files/functions, so that each function can be iterated upon and improved. For a less performance-sensitive Java library where understanding and usage are most important, you would want fewer files/functions, so that the development focus is more on the high-level ideas and the arrangement of the parts (or refactoring).

With any paradigm, there is often ambiguity with certain elements, because those elements should be dynamic. What SOLID aims to do is say that these main points are not something you should dedicate brain cycles towards, as they are best spent elsewhere in the design.

4 years ago by pydry

>With any paradigm, there is often ambiguity with certain elements, because those elements should be dynamic.

It's because, unless you're careful, human language is insufficiently precise by its nature for many domains.

This is why mathematicians communicate with mathematical notation, for instance. It's why lawyers use special lawyer only terms (or why interpretation of the law is an explicitly separated function).

With SOLID the lack of precision renders the whole exercise rather pointless.

You're supposed to be able to use these principles as something you can agree upon with other developers so that you have a shared understanding of what constitutes "good" code.

However, it doesn't happen. Instead we argue over what constitutes a single responsibility.

SOLID isn't the only example of this. "Agile" is actually far worse.

4 years ago by shock

A responsibility is an obligation to perform a task or to know information (authoritatively). If an object performs two tasks, it has two responsibilities, and so on.

For example, an Entity that is part of the persistence layer has the responsibility to persist the state of the Entity to the database, but the responsibility to know the information lies with an object in the business logic layer. The information stored in the Entity so it can be persisted to the DB is just a cache of the information in the Business Object; the Entity is not responsible for it, it just holds a cache. If the same object were responsible for both holding the information and persisting it, it would have two responsibilities.
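A minimal sketch of that split (all class and field names here are invented for illustration): the business object is the authority on the information, while the entity only holds a cache of it for persistence.

```python
class Account:
    """Business-layer object: the authority on the information."""

    def __init__(self, owner: str, balance: int):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount: int) -> None:
        # The business rule lives here, not in the persistence layer.
        self.balance += amount


class AccountEntity:
    """Persistence-layer object: caches the data and knows how to
    persist it, but is not the authority on it."""

    def __init__(self, account: Account):
        # Snapshot (cache) of the business object's state.
        self.owner = account.owner
        self.balance = account.balance

    def to_row(self) -> dict:
        # One responsibility: map the cached state to a storage format.
        return {"owner": self.owner, "balance": self.balance}


acct = Account("alice", 100)
acct.deposit(50)
row = AccountEntity(acct).to_row()
```

Under this reading, merging `Account` and `AccountEntity` into one class would give it two responsibilities: knowing the information and persisting it.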

This might sound somewhat confusing and useless, but it isn't so. Imagine a future where computers have a form of RAM that is not volatile. There is no need for a database, in the classical sense – whatever is in RAM when the computer is powered off/rebooted will still be there when the program resumes running.

4 years ago by krona

The author presents a series of strawman arguments to debunk SOLID, and then suggests that instead we should make software as complex as possible but no more complex than the coder's own comprehension. In my experience this is commonly how incomprehensible codebases evolve; the code is comprehensible right up to the point that it isn't, and by that point it's too late to change anything.

4 years ago by jdlshore

> he suggests we should make code as complex as possible

I read no such thing. (Speaking of straw men...) instead, I read him saying that the alternative to SOLID was “Write simple code.”

I think there’s room to criticize this essay, but that’s a bizarre one to lead with.

4 years ago by jancsika

> instead, I read him saying that the alternative to SOLID was “Write simple code.”

The part you missed is in the same sentence as the three words you quoted.

"Instead I suggested to write simple code using the heuristic that it 'Fits In My Head'."

The obvious criticism here is that now any developer who wants to defend their code will simply claim a given spaghetti incident easily fit in their head. The author even seems to acknowledge this line of attack in the next paragraph:

> You might ask “Whose head?” For the purpose of the heuristic I assume the owner of the head can read and write idiomatic code in whichever languages are in play, and that they are familiar with the problem domain. If they need more esoteric knowledge than that, for instance knowing which of the many undocumented internal systems we need to integrate with to get any work done, then that should be made explicit in the code so that it will fit in their head.

But that just protects against spaghetti made from esoteric knowledge of internals. For example, in C a "big head" would still be able to justify using global variables (after all, that's idiomatic C), rolling their own buggy threading approach, and deeply nested side-effect-based programming.

I'd much prefer pointing big heads to the rule of "do one thing" than to ever read a response of the category, "Well, it fits in my head."

4 years ago by gregmac

> The obvious criticism here is that now any developer who wants to defend their code will simply claim a given spaghetti incident easily fit in their head.

The thing is: that's just a bad defense. If the people who are going to be maintaining it and working with it are saying it's bad code, it's bad. Even if this is the only developer who is going to maintain it for now (which itself is always a terrible idea), eventually someone else will take over, and the closest thing you have to that "someone else" is the rest of the current team.

I'm having a hard time imagining someone seriously making this argument, who is not also:

* Trying to establish themselves as the sole possible maintainer, so either the company can't fire them or is completely screwed if they ever leave, or:

* Saying the rest of their team is just too stupid to work with

In either case, this person is a massive liability, and the longer they are employed the more damage they'll do.

4 years ago by lupire

Not defending the article at all, but "whose head" should be at least a code reviewer's head, ideally two.

4 years ago by refactor_master

Fits in my head and works on my computer.

slaps on hood

4 years ago by coldtea

>The author presents a series of strawman arguments to debunk SOLID

Care to elaborate as to why those are strawmans?

Without that, this is a no-argument, which is probably worse than a strawman.

>and then suggests that instead we should make software as complex as possible but no more complex than the coder's own comprehension

The combo (go and make "software as complex as possible but no more complex than the coder's own comprehension") is suggested nowhere in the post.

The second part of the combo (make software "no more complex than the coder's own comprehension") is indeed said, and is very sensible advice.

It is, however, combined not with the inverse advice to "make it as complex as possible", as you claim, but with the advice to keep it simple.

4 years ago by eternalban

> The Single Responsibility Principle says that code should only do one thing. Another framing is that it should have “one reason to change”. I called this the “Pointlessly Vague Principle”. What is one thing anyway? Is ETL – Extract-Transform-Load – one thing (a DataProcessor) or three things? Any non-trivial code can have any number of reasons to change, which may or may not include the one you had in mind, so again this doesn’t make much sense to me.

The strawman here is the reading that SOLID (SRtC) says that a piece of software, composed of "code", should only do one thing. By that reasoning, SOLID would rule out any software that provides multiple capabilities. Your editor, for example, has multiple capabilities. It can save. It can highlight. It can cut. It can paste.

So a reasonable reading of SOLID does not rule out composing complex software from "simple single purpose code". OP, however, assumes a ridiculous reading (the strawman) that can only be valid for software with a single irreducible capability.

One can read SOLID SRtC in terms of capability as "compose complex software using single purpose code".

One can read SOLID SRtC in terms of change as "changes to code should consist of one, or a sequential set of, single purpose changes".

4 years ago by coldtea

>The strawman here is the reading that SOLID (SRtC) says that a piece of software, composed of "code", should only do one thing. By that reasoning, SOLID would rule out any software that provides multiple capabilities. Your editor, for example, has multiple capabilities. It can save. It can highlight. It can cut. It can paste.

That's not what TFA says here. It doesn't claim that SOLID says that software as in 'a full program' should do one thing.

It just says that SOLID says that a piece of code (as in a function or a class) should do one thing, which SOLID does say, and which the author of TFA disagrees with.

Perhaps the use of the word "code" (or "non-trivial code") is confusing, but the author doesn't mean the whole program by it; he means the same thing SOLID does (a unit of code):

"Code should fit in your head at any level of granularity, whether it is at method/function level, class/module level, components made up of classes, or entire distributed applications".

4 years ago by legulere

That's the point where you need refactoring. The basic problem is that you cannot look into the future and clearly see what exactly you are going to need (if you are lucky enough to implement something along a specification, you should use that knowledge though!).

As long as you manage to keep artificial complexity out of your code, you will always have the complexity of the problem mirrored in your code. A common problem with ideas about object-oriented programming like SOLID or Clean Code is that they focus on classes. If you keep your classes very simple, you will instead end up with a complex architecture, where you might have zero-responsibility layers or functions that just pass the call further down a layer.

4 years ago by geofft

In my opinion, that's in fact exactly how you should be developing your software.

Code is cheap to write, and mostly debt. A software product - a working system that meets some particular need - is not. The distinction is that building a software product is much more than banging out code; it's the experience of figuring out what exactly that need is (gathering requirements, getting a system out for users to test, getting real-world feedback). Sometimes you capture that in documentation, test cases, ADRs, comments, commit messages, etc. Sometimes it's in your head, which is okay as long as you're still there. (Of course, if it's in your head and you leave, then the next set of programmers will be scared to change the system until they redo the work of figuring out what the software is, despite the code being in front of them.)

If you have that understanding about what the software does, ideally in the form of automated test cases, you can rewrite the code. So you may as well bang out the code in a way that gets you a working system and remains comprehensible. Once you're making enough changes to it that you're worried about it getting incomprehensible, proceed to rewrite it. Probably the world has changed in many ways since you wrote it - maybe you can run the system a lot faster and cheaper with containers in the cloud talking to a SaaS database than with your expensive IBM mainframe talking to DB2. Or perhaps the requirements are changing significantly, and starting with the old code doesn't give you much of an advantage.

Trying to keep a codebase comprehensible over the long term makes it, if you'll excuse the pun, solidify - it becomes increasingly hard to make changes that weren't anticipated by the original design, and it also requires more and more effort to just do anything. You might be able to swap out the relational database for another one, but you probably can't switch to a design where you're using a key-value store with totally different performance characteristics. And even if you do just want to change the database, you have to figure out how ten classes get dependency-injected into each other before you can start coding.

All the time you spent making the code "future-proof" so it remains forever comprehensible could have been spent simply not doing that, delivering business value, and writing new code as the need arises.

4 years ago by js4ever

Loved it! I agree with most of it as well. I'm so fed up with preparing abstractions for things that will probably never occur (changing the underlying DB, for example) that I changed my coding style a decade ago to focus on producing code that is as simple as possible to read/maintain, with as few abstraction layers as possible.

4 years ago by ivan_gammel

That’s one of the traps every programmer will fall into at least once: copying some abstraction because someone else did it. I have not seen support of changing underlying DB as a business requirement yet (well, outside of frameworks and platforms, of course). I have seen many times that developers designed architectures, created APIs or added buttons to user interfaces because they felt it might be useful in the future, not because someone told them it would happen. That code was very different in quality, sometimes too abstract, sometimes a big multi-screen function doing plenty of magic. None of it had anything to do with SOLID — it was always a violation of another principle: KISS. Keeping it simple, an engineering variant of Ockham’s razor, does not contradict or replace SOLID; it complements it, defining the scope of application of the other principles. If your requirements are complex enough, you may need an abstraction; if you really need it, here are some guidelines. That’s it, now keep it simple.

4 years ago by jcelerier

> I have not seen support of changing underlying DB as a business requirement yet (well, outside of frameworks and platforms, of course).

... and you've never seen a business requirement for what used to be "software" to become a framework/platform instead? It happens all the time.

4 years ago by gregmac

I've seen bits of this, and I've seen it happen in code bases that were built as "abstract" so their components could be re-used.

But I've never seen it happen in any way close to resembling what the original architect thought would happen, and as a result, all those abstractions and generic implementations not only added time to the mainline development, but in the end actually got in the way of the abstraction that was needed.

4 years ago by ivan_gammel

Well, that - "any" software becoming a framework - does happen, but not all the time. Why does this matter?

4 years ago by youerbt

Yes, "let's abstract the database" is probably one of the most mindlessly applied rituals in enterprise software. And if running on different databases is not a feature of your application, it is a mostly harmful practice.

If you can decide on a language, framework, critical libraries etc. then you should be able to decide on a database. It's probably more important than your application anyway.

4 years ago by matsemann

Back before devops, containers, postgres etc were mainstream (so not more than ~6 years ago in my industry), so many were running Oracle DBs. And everyone shared the same instance, and it wasn't exactly trivial to get a new personal instance up and running (licensing, probably required a DBA's help). So then using hsqldb or something else lightweight was golden for local development or integration tests. So abstracting the DB was the default, and absolutely needed.

4 years ago by ivan_gammel

What platform are you talking about? In Java world JDBC existed from early days and was enough if you stick to standard SQL, in tests you may have needed only to switch driver classpath and connection string. ORMs existed at least since 2003-2004 (early versions of Hibernate).

At the same time, substituting Oracle with a lightweight DB in an environment where full-time DBA was developing a database with loads of stored procedures and DB-specific stuff wasn't something really feasible - no abstraction layer would solve that.

4 years ago by no-s

>>using hsqldb or something else lightweight were golden for local development or integration tests. So abstracting the DB was the default, and absolutely needed.

This also improves efficiency in operations, not necessarily development. If you used a library/framework for database access anyway, it's not an extra expense. There's ultimately a portability concern even if "vendoring"; it only imposes a cut-out to permit control of necessary change.

After a few unpleasant experiences I endlessly advocated we should always use an interface to access populated data objects and not interact with the database directly, not even running queries directly but always using at least lightweight IOC. I also advocated for testing where known result sets were fed through mock objects. After all, saved result sets could also be used to test/diagnose the database independently after schema/data changes. My experience predates a lot of ORM and framework responses.

Unfortunately later frameworks (intended to abstract these concerns) became ends in themselves, rather than a means to an end. These were used to satiate "enterprise-y" concerns (sacrifices to the enterprise gods). If you could afford to deploy operational Oracle, you wouldn't necessarily flinch at the cost of the extra (often pointless) layers of abstraction.

4 years ago by silentbugs

Exactly.

DRY and YAGNI are two basic concepts I've always tried to work with. Do I need this piece of code more than once? Should be extracted/created in a separate place. Would I need this in other projects? Library.

YAGNI is most likely directly conflicting with most of what's mentioned, if the developer simply asks themselves: do I really need this? Will this project benefit from having separate layers of abstraction for things that are most likely never going to change?

I always think twice before writing a single line of code, with the main point of focus being if future me (or anyone who reads that piece of code) will be able to understand and change things, if needed.

4 years ago by jehlakj

DRY seems to be one of those principles too many people take literally. Especially among junior devs from my observation (and experience).

Just because there is a block of code that’s being used in two different places doesn’t mean it should always be abstracted out. There’s a subtle yet mindful consideration of whether these two consumers are under the same domain, or whether it should exist as a utility function across all domains. And if that’s the case, changing the code to make it generic, simple and without side effects is ideal.

I’ve seen too many of these mindless DRY patterns over time, and they eventually end up with Boolean flags to check which domain it’s being used in as the codebase becomes larger.
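To make that failure mode concrete, here is a small invented sketch of what such a shared helper tends to grow into, next to the alternative of keeping a truly generic core and letting each domain own its own thin wrapper:

```python
# Mindless DRY: the shared helper accretes a flag per domain
# that uses it, and callers must know which domain they are in.
def format_price(amount_cents: int, is_invoice: bool = False) -> str:
    dollars = amount_cents / 100
    if is_invoice:
        return f"Amount due: ${dollars:.2f}"   # invoicing domain
    return f"${dollars:.2f}"                   # catalog domain


# Often clearer: a generic, side-effect-free core, with each
# domain owning its tiny bit of "duplication" around it.
def as_dollars(amount_cents: int) -> str:
    return f"${amount_cents / 100:.2f}"


def invoice_line(amount_cents: int) -> str:
    return f"Amount due: {as_dollars(amount_cents)}"
```

The second version deduplicates only the part that is genuinely shared across domains; the domain-specific wording stays with its domain, so no flag is needed.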

4 years ago by gfody

DRY should really be DRYUTETNRYDTP - don't repeat yourself unless the effort to not repeat yourself defeats the purpose

I also propose LAWYD - look at what you've done, a mandatory moment of honest reflection after drying out some code where you compare it to the original and decide if you've really improved the situation

4 years ago by sixothree

Just this week I came across a set of 20+ controls in a form. Every control downloaded some version of a file from "report". Not once was there any shared code behind any of these controls. Because different people over time touched this code, not all of the functions were in the same place. And those that were each had slightly different nuances.

Without DRY, this would be a perfectly acceptable practice. DRY gives me something I have in my head when I see this and refactor into something that is manageable. DRY gives me something I can point to and say "please for the love of god don't perpetuate this folly".

4 years ago by h3mb3

I think a good rule of thumb is to never prematurely apply DRY in source code BUT always try to aim for it when it comes to data and configuration unless there's a special need.

4 years ago by cjvirtucio

pylint irritates me a bit for this. I already created an abstraction, and I'm using the abstraction in a few places, but pylint doesn't like that and says it's duplicate code.

4 years ago by edoceo

Yea, 20+ years on PostgreSQL. Why would I change my DB away from the perfect one?

4 years ago by majkinetor

IME, because the client asked for a different one for whatever reason and is willing to pay the higher development cost.

You can argue about that just the same as when the client needs feature X.

In my case, I was very often getting away with YAGNI or 'how about we implement a nightly sync from pg to oracle', but not always.

4 years ago by rzzzt

Self-hosted, low volume or demo installations can benefit from a lightweight database (that is also in-process, so the user does not have to install and maintain it) like SQLite, H2 or Derby.

4 years ago by TimTheTinker

One never knows what the future may bring.

Long-lived OSS software is a relatively stable bet, but the point still stands.

4 years ago by jacques_chester

Maintaining database abstraction layers and the like is a real option used to hedge the risk of the holdup problem. But like any real option, it has a carrying cost. That means it's an economic question, there is no purely technical rule that can give you a robust answer to whether or not to abstract something away.

I feel the industry has a long hangover from the 80s and 90s in terms of the holdup problem. Oracle, basically, created a massive externality of anxiety about vendor lockin that continues to impose drag to this day.

4 years ago by rootsofallevil

>One never knows what the future may bring.

Exactly, so make the changes when you need them, otherwise you are relying on hitting the abstraction lottery

4 years ago by pydry

Decoupling your data model from postgres because you "might" need to swap database is a bet I've seen taken many times but it's never one I've seen paid off.

This is a clear example of where YAGNI applies, I think.

Extra work plus extra boilerplate to maintain. No payoff.

That said, extra code + boilerplate is a great way to treat riskier software.

4 years ago by paulryanrogers

Also love Pg and using flavor specific features where they deliver tangible value. That said, if a team maintains multiple flavors often then an abstraction can improve quality of life.

4 years ago by refactor_master

To me this blog doesn’t really formally/finally refute anything, but is simply saying “don’t over-engineer your solution”.

> “dependency inversion has single-handedly caused billions of dollars in irretrievable sunk cost and waste over the last couple of decades”

Oh please. Is there anything in programming that hasn’t had the “Irreparable harm to humanity” sticker attached to it by now?

4 years ago by gfody

I like to imagine a final analysis of all code ever written by humans, after some ai hypermind from the future has digested it, turning out to be 99.999% dependency injection boilerplate

4 years ago by layer8

Dependency injection has nothing to do with dependency inversion.

4 years ago by gfody

nothing? okay

4 years ago by jonnypotty

So because our industry is so unprofessional that you can literally point to loads of it and say "that has cost humanity billions", this entire argument is stupid and actually there is no problem at all?

Not sure that's the right attitude.

4 years ago by AlphaSite

It’s an industry which has probably generated trillions in value, so it may be a matter of perspective.

4 years ago by refactor_master

So what would you have done to end up in the alternate universe where it hadn’t cost humanity billions? Rational agents and perfect knowledge do not exist outside of economic theories.

4 years ago by hinkley

Cloud computing, but only because we haven’t hit the trough of disillusionment yet.

4 years ago by agnosias

Honest question: if you don't do dependency inversion, or if you don't depend on interfaces/abstractions that can be mocked - how do you unit test your code?

Unit testing is the only reason pretty much all of my code depends on interfaces. Some people seem to consider this a bad thing/over-engineering, but it's how I've seen it done in every place I've worked at.

How do you do it?

4 years ago by aszen

Firstly, if the dependency isn't doing any I/O, you can test your code as a whole along with its dependency. No need to mock.

More interesting is if your code relies on the outside world, then instead of abstracting out the connection with the outside world abstract out your business logic and test it separately.

So instead of a database repository being injected into your domain services, make your services rely on pure domain objects which could come from anywhere, be it tests or the database.

Make a thin outer shell which feeds data into your domain logic from the outside world and test that via integration tests if necessary.

I'll admit I don't have the full picture here, but I have used this technique to good effect. The core idea is don't embed your infrastructure code deep inside your architecture instead move it to the very top.
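A minimal sketch of that shape (all names here are invented): the domain logic takes plain objects and returns plain results, so its unit tests never touch a repository or the database, while a thin shell does the I/O at the edge.

```python
from dataclasses import dataclass


@dataclass
class Order:
    total_cents: int
    is_first_order: bool


def discount_for(order: Order) -> int:
    """Pure domain logic: no I/O, trivially unit-testable."""
    if order.is_first_order and order.total_cents >= 5000:
        return order.total_cents // 10
    return 0


def apply_discount(order_id: int, load, save) -> None:
    """Thin imperative shell: the only place that knows about storage.
    `load` and `save` stand in for whatever does the real I/O."""
    order = load(order_id)                  # I/O at the edge
    save(order_id, discount_for(order))     # pure logic in the middle


# In a unit test, feed the logic directly with in-memory objects:
assert discount_for(Order(6000, True)) == 600
```

The shell is small enough that an integration test (or none at all) can cover it, while the interesting rules live in `discount_for`.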

4 years ago by globular-toast

> Firstly if the dependency isn't doing any io, you can test your code as a whole along with its dependency. No need to mock.

This is the thing. People have a tendency to overuse mocks. The point of automated testing (whether it's unit tests or something else) is to enable refactoring. That's really the only reason. Code that you don't touch doesn't suddenly change its behaviour one day.

In a previous job the developers started to go crazy with mocking and it reached a kind of singularity where essentially if a function called anything, that thing was mocked. It definitely tests each function in complete isolation, but what's the point? It makes refactoring impossible, which is the entire point of the tests in the first place!
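An invented Python example of what that overfitted style looks like: the first test pins the implementation (which internal function was called, with what arguments), so a refactor that preserves behaviour still breaks it, while the behavioural test survives.

```python
import unittest.mock as mock


def compute_tax(subtotal: float) -> float:
    return subtotal * 0.2


def total_with_tax(subtotal: float) -> float:
    return subtotal + compute_tax(subtotal)


# Overfitted test: mocks the internal call. Inlining the tax
# computation into total_with_tax would break this test even
# though the observable behaviour is identical.
with mock.patch(__name__ + ".compute_tax", return_value=2.0) as m:
    assert total_with_tax(10.0) == 12.0
    m.assert_called_once_with(10.0)

# Behavioural test: survives any refactor that preserves the result.
assert total_with_tax(10.0) == 12.0
```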

This excellent talk completely changed the way I approached testing. Every developer who writes tests needs to watch this now! https://www.youtube.com/watch?v=EZ05e7EMOLM

4 years ago by chakspak

I generally agree with you, though I want to comment on this line:

> Code that you don't touch doesn't suddenly change its behaviour one day.

This can and does happen all the time, when the platforms and abstractions your code builds on change underneath you. This is why a compelling environment / dependency management story is so important. "Code rot" is real. =P

4 years ago by garethrowlands

Some of this is covered in "Functional Core, Imperative Shell", https://www.destroyallsoftware.com/screencasts/catalog/funct...

Haskell programs tend to have this structure because pure functions aren't allowed to call impure functions.

4 years ago by snidane

You are doing interface/function level testing and calling it unit testing.

That's what the industry converged on, that a function/method = a unit, but it apparently used to be that a module was meant to be a unit.

It can be that both interfaces and modules can be considered a "bag of functions".

Seems like a lot of confusion about unit testing, mocking and DI stems from this historical shift.

I believe that interface/method level testing is too granular and mostly results in overfitted tests. Testing implementation, not behaviour. Which can be useful for some algorithm package for example, but probably not so much applicable to typical business logic code.

TDD: where did it all go wrong. https://youtu.be/EZ05e7EMOLM

4 years ago by robertknight

An option in some languages is to simply create an alternate mock/fake version of the actual class or function that is depended upon and monkey-patch the code under test to use it for the duration of the test. This is commonly done in Python (see `unittest.mock.patch`) and JavaScript (e.g. with Jest mocks), for example.

The end result is the same as if you'd created an interface or abstraction with two implementations (real and fake/mock), but you skip the part where a separate interface is defined.

The upside to this is that the code is more straightforward to follow. The downside is that the code is exposed/potentially coupled to the full API of the real implementation, as opposed to some interface exposing only what is needed.

4 years ago by mrloba

Move your logic to pure functions. Use classes for plumbing. Now you can unit test the logic, and integration test the plumbing. You can easily test with a real database using docker for example. Note that you'll still use DI in this case, but you'll have far fewer classes and therefore fewer dependencies as well.

4 years ago by DanielBMarkham

1. Watching the tech community over a really long time, please guys, don't swing violently from "A is all good!" to "A is all bad!" It was good for something, else so many people wouldn't have successfully used it for so long. Work more on discrimination functions and less on hyperbole please. Future generations will thank you for it.

2. "...coupled with the way most developers conflate subtypes with subclasses..." Speaking as somebody who both likes SOLID and could write this essay/present the deck, I think there's a lot of confusion to go around. There are a lot of guys coding classes using rules better suited for types. There are a lot of guys applying OO paradigms to functional code and vice-versa. In general, whenever we swing from one side to the other on topics, it's a matter of definitions. There is no such thing as "code". There's "thinking/reasoning about code" and there's coding. You can't take the human element out and reason abstractly about the end product. Whatever the end product is, it's a result of one/many humans pounding away on keyboards to get there.

3. My opinion, for what it's worth: as OO took off, we had to come up with heuristics as to how to think about grouping code into classes, and do it in such a way that others might reasonably get to the same place ... or at least be able to look at your code and reason about how or why you did it. That's SOLID and the rest of it. Now we're seeing the revenge of the FP guys, and stuff like SOLID looks completely whack to them, as it should. It's a different way of thinking about and solving problems.

ADD: Trying to figure out who's right and who's wrong is a (philosophical) nonsense question. It's like asking "which smell is plaid?" Whatever answer you get is neither going to provide you with any information nor help you do anything useful in the future. (Shameless plug: Just submitted an essay I wrote last week that directly addresses reasoning about types in a large app)

4 years ago by elric

> don't swing violently from "A is all good!" to "A is all bad!"

Indeed. This clickbaity style of laying out arguments is not terribly constructive. Software is not black/white. It's entirely grey. And there's a lot of room for contextual nuance everywhere.

Principles like SOLID (and DRY, and YAGNI, etc) are principles. They are not laws. Principles are guidelines which can help you make solid (heh heh) decisions. They are subject to context and judgement.

If good software design were as easy as memorizing a couple of acronyms, we'd all be out of a job. But it's not. It takes practice and experience. Writers and academics can make things easier by presenting accumulated experience in principles and guidelines, but there are no silver bullets. It's unfair and pointless to expect SOLID (or anything else) to apply in any and all cases.

4 years ago by dvlsg

> It's unfair and pointless to expect SOLID (or anything else) to apply in any and all cases.

I think that's a big part of the problem, and how we end up with articles like this. A lot of developers do expect SOLID to apply to every case, and I've seen fine code get rejected in reviews because it wasn't SOLID enough.

4 years ago by no-s

>>A lot of developers do expect SOLID to apply to every case, and I've seen fine code get rejected in reviews because it wasn't SOLID enough.

Prescriptive principles without informed discretion. That is the problem, and it's not just software.

"A foolish consistency is the hobgoblin of little minds."

It's terrifying how easily a reasoned and seemingly sensible polemic can be interpreted as a foolish oppression, even by otherwise rational folk...

4 years ago by undefined

[deleted]

4 years ago by cratermoon

> Software is not black/white. It's entirely grey

entirely? There aren't parts that are black/white? That seems a bit black/white.

4 years ago by wizzwizz4

Ah, but that comment isn't software [0], is it?

[0]: https://esolangs.org/wiki/English

4 years ago by aroman

Let's settle on it being a gradient from black to white.

4 years ago by andrewprock

Well, the 0s are black and the 1s are white, except when it is the other way around.

4 years ago by elric

Touché. "Only the Sith deal in absolutes" and all that.

4 years ago by drewcoo

A thousand times this!

Principles, heuristics, "best practices," and just generally good ideas are not absolute truth. SOLID is like hand washing. Please do it! Unless you have a good reason not to.

The root of the "disprove a heuristic by a single counterexample" problem is a misunderstanding of logic. A heuristic is not a statement that universally all hands must always be washed. It is a milder claim that generally handwashing has proved useful via inductive means, so you should probably wash your hands if you want to minimize your risk.

Any expert in a given field should know times when not washing hands has been justified. But by the same token, those people know that they should still recommend hand washing to the general public because they won't know when it's not justified.

Wash your hands, please.

4 years ago by lostcolony

"It was good for something"

The post does seek out where the SOLID principles came from. And it's not really debunking them, just saying they're not absolutes. Which, yes, the title is click-baity, but I've certainly met people who treated them as absolutes, or tried to talk about code from a SOLID perspective, and I've never found that useful.

In fact, I've not found -any- "best practice" to be absolute in a generalizable sense, and it's never been useful rhetoric to bring them up in a design discussion because of that. Worse, they sometimes run counter to each other. "DRY would say we should combine these together" - "Yeah, but Single Responsibility; while the code is mostly the same, they fundamentally are dealing with different things in different contexts".

Learn the heuristics, then deal with the subjective realities each project brings; anyone who tries to treat a given codebase as having an objectively correct approach, rather than a nuanced and subjective series of tradeoffs, is not someone worth talking or listening to.

4 years ago by dllthomas

> "DRY would say we should combine these together" - "Yeah, but Single Responsibility; while the code is mostly the same, they fundamentally are dealing with different things in different contexts"

As originally coined, DRY speaks of ensuring every piece of knowledge is encoded once. If pieces of code are "dealing with different things" then those are two pieces of knowledge, and DRY (per that formulation) does not recommend combining them.

I agree that there is a prevalent notion of DRY that is more syntactic, but I find that version substantially less useful and so I try (as here) to push back on it. Rather than improving code, it's compressing it; I've joked that we should call it "Huffman coding" when someone tries to collapse unrelated things in ways that will be unmaintainable.

Note that it's not just that syntactic DRY sometimes goes too far - it also misses opportunities where the original DRY would recommend finding ways to collapse things: if I'm saying "there's a button here" in my HTML and in my JS and in my CSS, then I'm saying the same thing in three places (even though they look nothing alike) and maybe I should find a way to consolidate.

There are, of course, still tradeoffs - maybe the technology to unify the description isn't available, maybe deepening my tech stack makes things less inspectable, &c.
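One way to picture the knowledge-vs-syntax distinction, sketched in Python with hypothetical business rules: two functions can be textually identical while encoding different pieces of knowledge, in which case merging them is the "Huffman coding" trap:

```python
# Today these bodies are identical, but they encode two different
# pieces of knowledge that will change for different reasons.
def max_login_attempts():
    return 5  # security policy, owned by the auth team


def max_cart_items():
    return 5  # UX decision, owned by the checkout team


# Syntactic DRY would collapse both into one shared MAX = 5 constant.
# Knowledge-level DRY keeps them apart: when security later tightens
# logins to 3 attempts, carts should not silently shrink too.
assert max_login_attempts() == max_cart_items() == 5
```

The coincidence of values is not a repeated piece of knowledge, so the original formulation of DRY does not ask you to remove it.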

4 years ago by lostcolony

I replied to a sibling comment of yours, but wanted to say it here too: at that point it ceases to be a useful statement to ever bring up, because I've never seen a discussion where everyone agreed things were the same 'piece of knowledge' and one side was still saying it should be repeated. When I've heard "DRY" trotted out, it's -always- been in a situation where the other side was trying to claim/explain that they were different pieces of knowledge. Hence my statement - it's worth understanding the meaning of the principle, internalizing the lesson, as it were, but then the formulation ceases to be useful.

4 years ago by no-s

>>I've joked that we should call it "Huffman coding" when someone tries to collapse unrelated things in ways that will be unmaintainable.

heheh I've made that joke too, also "let's un-complicate this into a pointless complexity," when it goes over their head.

4 years ago by cjblomqvist

I understand that this is almost nitpicking (because the DRY example is not the point of your comment), but your DRY example is really a bad example of DRY itself, and rather a very good example of the lack of knowledge within the software community. According to Wikipedia, DRY means "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system" [1], NOT that you should deduplicate all code that looks the same. It's actually more or less exactly the same as the Single Responsibility principle.

PS. I'm interviewing senior devs, team leads and tech leads at the moment, and so far none (!) have been able to properly formulate this (even after I hint that it's not only about code deduplication); 75-90% believe it's all about code deduplication. IMO quite scary, and it tells you a fair bit about the shape of the software dev industry...

[1] https://en.m.wikipedia.org/wiki/Don%27t_repeat_yourself

4 years ago by no-s

>>none (!) have been able to properly formulate this (even after I'm hinting that it's not only about code deduplication) and 75-90% believe it's all about code deduplication

Well, duplicate code is the typical manifestation, as the sibling comment [1] relates:

>>> if I'm saying "there's a button here" in my HTML and in my JS and in my CSS, then I'm saying the same thing in three places (even though they look nothing alike) and maybe I should find a way to consolidate.

"and maybe" it's just otherwise hard to get the point across, as you've discovered. Isomorphisms are easier to distinguish (and validate!) versus homeomorphisms, but unnecessary pedantry usually results in MEGO. We shouldn't expect senior developers to automatically embody the virtues of mathematicians or philosopher-kings...

yeah, I am nitpicking worse, but after 40+ years in the industry and suffering through tens of thousands of marginally relevant distinctions in "gotcha" interview questions, I am without shame even though my head is nodding in acknowledgment of your points...

[1] oops, I meant https://news.ycombinator.com/item?id=26532424

4 years ago by lostcolony

Yes, but at that point it stops being a heuristic and instead is a tautology. "Don't repeat the things that shouldn't be repeated". Or even what you said, which, again, is a nice statement, but who decides what 'a piece of knowledge' is? I.e., when is it the same bit of knowledge, vs when is it different? That's often the heart of it; I've had those debates (and in fact, was referencing them in my parent comment), where someone feels these ~things~ are basically the same, and so should be treated mostly the same, with shared code, and where someone else feels that, no, the differences are sufficient that they should be treated differently. And that's a reasonable discussion to have. But it's one that trotting out "Don't Repeat Yourself!" or SOLID or etc adds -nothing- to; the principles themselves clash, and ignore the core difference you're trying to work out.

In short, the reasons for the principle matter, but if you know to look for and understand the reasons, the principles themselves are obvious and do not serve as useful heuristics.

4 years ago by andrewprock

Well code is data, so it's understandable that things get squishy inside people's heads.

4 years ago by cghendrix

+1 on the hyperboles. You see so many articles with titles like “Never use a singleton unless you want to lose your marriage like me”

4 years ago by undefined

[deleted]

4 years ago by spaetzleesser

In politics and in tech consulting there is a lot of money and fame to be made by going to the extremes and not allowing the middle ground. I just wish people wouldn't constantly fall for this, be it in politics or in tech.

Wait another 10-20 years and FP will suffer the same fate.

4 years ago by j_san

Uncle Bob has actually written a blog post about Dan's presentation: http://blog.cleancoder.com/uncle-bob/2020/10/18/Solid-Releva...

4 years ago by jdlshore

The article mentions Bob's post, and criticizes him for reacting to the slides out of context - that is, without seeing the pub talk the slides were from, or contacting Dan to understand what they were about.

4 years ago by mdoms

There's a big push across all of Dev Twitter right now to do away with SOLID. To say I think it's misguided is an understatement. There's a similar push underway to do away with unit tests. The direction of our industry right now is very concerning to me. And honestly, this may be an unpopular opinion, but I think a lot of it is driven by people who disagree so vehemently with Bob Martin's politics that they overcorrect and start throwing out his good work too.

4 years ago by resonantjacket5

I think the push against SOLID is fine. I've never really seen the 'single-responsibility' part ever really followed or used in a way that made sense.

I haven't really seen a strong push against unit tests?

4 years ago by gherkinnn

Booking.com famously doesn't write many tests at all. Something something move fäst. Well, other than the A/B kind. But you know what I mean. I also recall a recent Stack Overflow blog post mentioning that they don't have many either.

Regarding a push against unit tests: the Frontend world, for what it's worth, has a rising school of thought that favours integration tests based around what the user sees and interacts with.

4 years ago by mdoms

Booking.com is one of the most horrifically unreliable, buggy pieces of garbage on the internet, so this doesn't surprise me.

4 years ago by drooby

You’ve never seen SRP in practice? This is honestly concerning.

You’ve never seen like a User class that only encapsulates fields and methods relating to a User abstraction?

4 years ago by resonantjacket5

I meant more that for non-trivial classes, when people are deciding whether to break up a class, the "single responsibility" part is too loosely defined for me to have ever seen people actually use it as a metric. I agree classes can grow too large; the hard part is what rubric you actually use for delineating them, and just saying "single responsibility" hasn't, by itself, been useful.

4 years ago by callmeal

>There's a similar push underway to do away with unit tests.

I agree with that push, as long as there's some other way of validating requirements. In my team, that's done with integration tests and our core principle is: code that is integration tested does not require unit testing. That principle surprisingly covers a good 70-75% of the codebase, leaving us with a few core unit tests for the secret sauce of our product.
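The comment describes testing against a real database in Docker; as a self-contained sketch, this substitutes an in-memory SQLite database, and the schema and function names are hypothetical. The point is that the query and the summing logic are exercised together, so no separate unit test is needed:

```python
import sqlite3


def outstanding_invoices(conn, customer_id):
    # production code under test: a query plus a little logic
    rows = conn.execute(
        "SELECT amount FROM invoices WHERE customer_id = ? AND paid = 0",
        (customer_id,),
    ).fetchall()
    return sum(amount for (amount,) in rows)


# Integration-style test: drive the code through a real database
# connection instead of mocking it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (customer_id INT, amount REAL, paid INT)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, 100.0, 0), (1, 50.0, 1), (1, 25.0, 0), (2, 99.0, 0)],
)
assert outstanding_invoices(conn, 1) == 125.0
```

With Docker, the `sqlite3.connect` line would instead point at a containerized instance of the production database engine; the test body stays the same.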
