Hacker News

13 hours ago by bwilliams

I read the article and didn't really feel that they disagreed with, or debunked, any of the principles. It reads like they formed their own understanding of each principle and maybe disagreed with how they were taught, or how the principles are sometimes presented.

This change in outlook on existing thoughts/ideas is how many crafts grow, such as martial arts, painting, philosophy, etc. (instead of stagnating). Sometimes we need to frame things in a more modern manner, and sometimes we need to discard them completely. In this case, I think re-framing the concepts is helpful, and I found it to be an interesting point of view. I agreed with a good amount of it, but I don't think we need to discard SOLID principles just yet.

13 hours ago by pydry

This alludes to one of my main beefs with SOLID - the lack of a clear definition.

What counts as a "responsibility", for instance? Where I see one responsibility, some people see two, and vice versa.

13 hours ago by barbazoo

I agree. In a way that's one of the strengths of SOLID in my opinion. All places I've worked at had slightly different versions of what e.g. SRP meant. And that's ok. It's a way to make sure your team writes code in a way their teammates would expect. Whether that's objectively good code or not doesn't matter as much to me.

13 hours ago by SideburnsOfDoom

> What counts as a "responsibility", for instance? Where I see one responsibility, some people see two, and vice versa.

The Single Responsibility Principle is not a rigorously defined scientific law, where the "responsibility count" can be exactly measured and minimised.

It is a subjective design guideline, with elements of experience, of context and of "I know it when I see it".

This does _not_ make it useless. It is still very useful indeed. But you do have to understand that disagreements over it are not always about "who is objectively right" but "which style makes the most sense in context".

12 hours ago by pydry

*shrug* I think every time I've ever seen a disagreement about code quality, it's boiled down to both developers thinking "I know it when I see it" about their separate approaches.

If a set of principles lets them each think they're correct and the other one is wrong, what exactly is the point of those principles?

This isn't just a coding thing. It's also, say, why society follows a body of laws rather than something like the 10 commandments.

10 hours ago by raverbashing

Here's the problem:

Some developers will say a (data structure) Controller is a class obeying the SRP

Some others will say the class can manage itself and not need a controller, so M and C can be one thing.

Some others will argue that it's better to make 2 controllers, one for complicated thing 1 and another for complicated thing 2, all based on the same Model.

5 hours ago by shock

A responsibility is an obligation to perform a task or know information (authoritatively). If an object performs two tasks, it has two responsibilities, and so on.

For example, an Entity that is part of the persistence layer of the app has the responsibility to persist its state to the database, but the responsibility to know the information lies with an object that is part of the business logic layer. The information stored in the Entity so it can be persisted to the DB is just a cache of the information in the Business Object; the Entity is not responsible for it, it just holds a cache. If the same object were responsible for both holding the information and persisting it, it would have 2 responsibilities.
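To make that concrete, here is a rough sketch of the split (all names, and the sqlite3-style placeholders, are mine for illustration, not from any real codebase):

    from dataclasses import dataclass

    @dataclass
    class Customer:
        """Business object: responsible for *knowing* the information."""
        id: int
        email: str

        def change_email(self, new_email: str) -> None:
            # The business rule lives with the owner of the knowledge.
            if "@" not in new_email:
                raise ValueError("invalid email")
            self.email = new_email

    class CustomerEntity:
        """Persistence object: responsible only for *storing* a snapshot."""

        def __init__(self, customer: Customer) -> None:
            self.id = customer.id        # cached copy, not the authority
            self.email = customer.email

        def persist(self, connection) -> None:
            connection.execute(
                "INSERT INTO customers (id, email) VALUES (?, ?)",
                (self.id, self.email),
            )

Fold `persist` into `Customer` and the one class now has both responsibilities: knowing and storing.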

This might sound somewhat confusing and useless, but it isn't so. Imagine a future where computers have a form of RAM that is not volatile. There is no need for a database, in the classical sense – whatever is in RAM when the computer is powered off/rebooted will still be there when the program resumes running.

12 hours ago by bick_nyers

I think that is a feature, not a bug. I think it makes sense in different contexts for "responsibility" to be abstracted differently. The two extremes are lots of files and functions versus fewer files and functions, and the optimal balance to strike is probably based on whether it is important for people to focus on the modules, or the arrangement of those modules. For high-performance C++ libraries with a good design/architect, it could make sense to split things up into a lot of files/functions, so that each function can be iterated upon and improved. For a less performance-sensitive Java library where understanding and usage are most important, you would want fewer files/functions, such that the development focus is more on the high-level ideas, the arrangement of the parts (or refactoring).

With any paradigm, there is often ambiguity with certain elements, because those elements should be dynamic. What SOLID aims to do is say that these main points are not something you should dedicate brain cycles towards, as they are best spent elsewhere in the design.

12 hours ago by pydry

>With any paradigm, there is often ambiguity with certain elements, because those elements should be dynamic.

It's because, unless you're careful, human language is by its nature insufficiently precise for many domains.

This is why mathematicians communicate with mathematical notation, for instance. It's why lawyers use special lawyer-only terms (or why interpretation of the law is an explicitly separated function).

With SOLID the lack of precision renders the whole exercise rather pointless.

You're supposed to be able to use these principles as something you can agree upon with other developers so that you have a shared understanding of what constitutes "good" code.

However, it doesn't happen. Instead we argue over what constitutes a single responsibility.

SOLID isn't the only example of this. "Agile" is actually far worse.

13 hours ago by krona

The author presents a series of strawman arguments to debunk SOLID, and then suggests that instead we should make software as complex as possible but no more complex than the coder's own comprehension. In my experience this is commonly how incomprehensible codebases evolve; the code is comprehensible right up to the point that it isn't, and by that point it's too late to change anything.

12 hours ago by jdlshore

> he suggests we should make code as complex as possible

I read no such thing. (Speaking of straw men...) Instead, I read him saying that the alternative to SOLID was "Write simple code."

I think there's room to criticize this essay, but that's a bizarre one to lead with.

12 hours ago by jancsika

> Instead, I read him saying that the alternative to SOLID was "Write simple code."

The part you missed is in the same sentence as the three words you quoted.

"Instead I suggested to write simple code using the heuristic that it 'Fits In My Head'."

The obvious criticism here is that now any developer who wants to defend their code will simply claim a given spaghetti incident easily fits in their head. The author even seems to acknowledge this line of attack in the next paragraph:

> You might ask "Whose head?" For the purpose of the heuristic I assume the owner of the head can read and write idiomatic code in whichever languages are in play, and that they are familiar with the problem domain. If they need more esoteric knowledge than that, for instance knowing which of the many undocumented internal systems we need to integrate with to get any work done, then that should be made explicit in the code so that it will fit in their head.

But that just protects against spaghetti made from esoteric knowledge of internals. For example, in C a "big head" would still be able to justify using global variables (after all, that's idiomatic C), rolling their own buggy threading approach, and deeply nested side-effect based programming.

I'd much prefer pointing big heads to the rule of "do one thing" than to ever read a response of the category, "Well, it fits in my head."

10 hours ago by gregmac

> The obvious criticism here is that now any developer who wants to defend their code will simply claim a given spaghetti incident easily fits in their head.

The thing is: that's just a bad defense. If the people who are going to be maintaining it and working with it are saying it's bad code, it's bad. Even if this is the only developer who is going to maintain it for now (which itself is always a terrible idea), eventually someone else will take over, and the closest thing you have to that "someone else" is the rest of the current team.

I'm having a hard time imagining someone seriously making this argument, who is not also:

* Trying to establish themselves as the sole possible maintainer, so either the company can't fire them or is completely screwed if they ever leave, or:

* Saying the rest of their team is just too stupid to work with

In either case, this person is a massive liability, and the longer they are employed the more damage they'll do.

12 hours ago by lupire

Not defending the article at all, but "whose head" should be at least a code reviewer's head, ideally two.

11 hours ago by refactor_master

Fits in my head and works on my computer.

slaps on hood

9 hours ago by coldtea

>The author presents a series of strawman arguments to debunk SOLID

Care to elaborate as to why those are strawmen?

Without that, this is a no-argument, which is probably worse than a strawman.

>and then suggests that instead we should make software as complex as possible but no more complex than the coder's own comprehension

The combo (go and make "software as complex as possible but no more complex than the coder's own comprehension") is suggested nowhere in the post.

The second part of the combo (make software "no more complex than the coder's own comprehension") is indeed said, and is very sensible advice.

It is not, however, combined with the inverse advice ("make it as complex as possible") that you claim the author paired it with: it is combined with the advice to keep it simple.

5 hours ago by eternalban

> The Single Responsibility Principle says that code should only do one thing. Another framing is that it should have "one reason to change". I called this the "Pointlessly Vague Principle". What is one thing anyway? Is ETL – Extract-Transform-Load – one thing (a DataProcessor) or three things? Any non-trivial code can have any number of reasons to change, which may or may not include the one you had in mind, so again this doesn't make much sense to me.

The strawman here is the reading that SOLID (SRtC) says software, composed of "code", should only do one thing; SOLID is clearly not saying that. By that reading, SOLID would rule out any software that provides multiple capabilities. Your editor, for example, has multiple capabilities. It can save. It can highlight. It can cut. It can paste.

So, a reasonable reading of SOLID naturally is not ruling out composing complex software using "simple single purpose code". However, OP is assuming a ridiculous reading (the strawman) that basically can only be valid for software that has only one irreducible capability.

One can read SOLID SRtC in terms of capability as "compose complex software using single purpose code".

One can read SOLID SRtC in terms of change as "changes to code should consist of one, or a sequential set of, single purpose changes".
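To make the capability reading concrete, here is a rough sketch of ETL as three single-purpose pieces composed into one capability (file format and names invented for illustration):

    from typing import Iterable, Iterator

    def extract(path: str) -> Iterator[str]:
        with open(path) as f:
            yield from f

    def transform(rows: Iterable[str]) -> Iterator[dict]:
        for row in rows:
            name, value = row.rstrip("\n").split(",")  # two-column CSV
            yield {"name": name, "value": int(value)}

    def load(records: Iterable[dict], sink: list) -> None:
        sink.extend(records)

    def data_processor(path: str, sink: list) -> None:
        # One capability, three single-purpose responsibilities.
        load(transform(extract(path)), sink)

Whether you count one thing or three depends entirely on which level of the composition you point at.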

2 hours ago by coldtea

>The strawman here is the reading that SOLID (SRtC) says software, composed of "code", should only do one thing; SOLID is clearly not saying that. By that reading, SOLID would rule out any software that provides multiple capabilities. Your editor, for example, has multiple capabilities. It can save. It can highlight. It can cut. It can paste.

That's not what TFA says here. It doesn't claim that SOLID says that software as in 'a full program' should do one thing.

It just says that SOLID says that a piece of code (as in a function or a class) should do one thing, which SOLID does say, and which the author of TFA disagrees with.

Perhaps the use of the word "code" (or "non-trivial code") is confusing, but the author doesn't imply the whole program with that, but the same as SOLID does (a unit of code):

"Code should fit in your head at any level of granularity, whether it is at method/function level, class/module level, components made up of classes, or entire distributed applications".

11 hours ago by legulere

That's the point where you need refactoring. The basic problem is that you cannot look into the future and clearly see what exactly you are going to need (if you are lucky enough to implement something along a specification, you should use that knowledge though!).

As long as you manage to keep artificial complicatedness out of your code, you will always have the complexity of the problem mirrored in your code. A common problem of ideas about object oriented programming like SOLID or Clean Code is that they have a focus on classes. If you keep your classes very simple, you will instead end up with a complex architecture, where you might have zero-responsibility layers or functions that just pass the call further down to the next layer.

13 hours ago by rootsofallevil

That's an interesting take. What would you consider strawman arguments in the article?

12 hours ago by refactor_master

For example, the open-close principle. The author blames this advice on tooling of the 90s and proposes instead "Change the code to make it do something else".

This has nothing to do with tooling, but with the fact that pulling the rug out from under an established code base could have very unintended effects, compared to simply adding and extending functionality without touching what is already there.

By doing as the author suggests you'll end up with either 500 broken tests or 5000 compiler errors in the best case, or in the worst case an effectively instantly legacied code base where you can't trust anything to do what it says.

I once had to change an entire codebase's usage of ints to uuids, which took roughly 2 whole days of fixing types and tests, even though logically it was almost equivalent. Imagine changing anything and everything to "make it do something else".

12 hours ago by garethrowlands

It has a fair bit to do with tooling. For example, C++ suffers from the fragile base class problem and some changes can cause long compile times. Nowadays, we have tests and deployment pipelines that are explicitly designed to let us make and deploy changes safely.

Honestly, if you cannot change your code, you have a problem.

9 hours ago by coldtea

>This has nothing to do with tooling, but with the fact that pulling the rug out from under an established code base could have very unintended effects, compared to simply adding and extending functionality without touching what is already there.

That's still a matter of tooling. With type-checking, static analysis, and test suites, changing code doesn't have "very unintended effects".

Back in the day, without those, things were much more opaque.

12 hours ago by geofft

What's the alternative here? If you had to change a codebase's usage of ints to uuids, should the original author have used dependency inversion and required an IdentifierFactory that was ints at the time so you could just swap out the implementation? And if they did - why wouldn't they have just used UUIDs in the first place? You're betting on the fact that the original author anticipated a particular avenue of further change, but also made the wrong decision for their initial implementation, which seems like the wrong bet. If they made the wrong decision, they almost certainly didn't anticipate the need for another one, either.
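To be concrete, the bet would have looked something like this (IdentifierFactory is from the scenario above; the rest is invented):

    from itertools import count
    from typing import Protocol
    from uuid import UUID, uuid4

    class IdentifierFactory(Protocol):
        def next_id(self) -> object: ...

    class IntIdentifierFactory:
        def __init__(self) -> None:
            self._counter = count(1)

        def next_id(self) -> int:
            return next(self._counter)

    class UuidIdentifierFactory:
        def next_id(self) -> UUID:
            return uuid4()

    # The wager: every caller depends only on IdentifierFactory, so the
    # swap is "free" -- but only if the author guessed this exact change.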

And how long would it have taken for the original author to use an IdentifierFactory instead of ints and write meaningful tests for it? Less than two days?

13 hours ago by sorokod

Not the person you asked, but I would say it's the expectation that SOLID provides, or should provide, unambiguous guidance.

12 hours ago by tanseydavid

His entire screed on the single responsibility principle turns on semantics.

13 hours ago by js4ever

Loved it! I agree with most of it as well. I'm so fed up with preparing abstractions for things that will probably never occur (changing the underlying DB, e.g.) that I changed my coding style a decade ago to focus on producing code that is as simple as possible to read/maintain, with as few abstraction layers as possible.

13 hours ago by ivan_gammel

That's one of the traps every programmer will fall into at least once: copying some abstraction because someone else did it. I have not yet seen support for changing the underlying DB as a business requirement (well, outside of frameworks and platforms, of course). I have seen many times when developers designed architectures, created APIs or added buttons to user interfaces because they felt it might be useful in the future, not because someone told them it would happen. That code was very different in quality, sometimes too abstract, sometimes a big multi-screen function doing plenty of magic. None of it had anything to do with SOLID; it was always a violation of another principle: KISS. Keeping it simple, an engineering variant of Ockham's razor, does not contradict or replace SOLID, it complements it, defining the scope of application of the other principles. If your requirements are complex enough, you may need an abstraction; if you really do, here are some guidelines. That's it, now keep it simple.

12 hours ago by jcelerier

> I have not yet seen support for changing the underlying DB as a business requirement (well, outside of frameworks and platforms, of course).

... and you haven't ever seen a business requirement for what used to be "software" to become a framework/platform instead? It happens all the time.

9 hours ago by gregmac

I've seen bits of this, and I've seen it happen in code bases that were built as "abstract" so their components could be re-used.

But I've never seen it happen in any way close to resembling what the original architect thought would happen, and as a result, all those abstractions and generic implementations not only added time to the mainline development, but in the end actually got in the way of the abstraction that was needed.

11 hours ago by ivan_gammel

Well, that - "any" software becoming a framework - does happen, but not all the time. Why does this matter?

13 hours ago by youerbt

Yes, "let's abstract the database" is probably one of the most mindlessly applied rituals in enterprise software. And if running on different databases is not a feature of your application, it is a mostly harmful practice.

If you can decide on a language, framework, critical libraries etc. then you should be able to decide on a database. It's probably more important than your application anyway.

12 hours ago by matsemann

Back before devops, containers, postgres etc. were mainstream (so not more than ~6 years ago in my industry), so many were running Oracle DBs. And everyone shared the same instance, and it wasn't exactly trivial to get a new personal one up and running (licensing, and it probably required a DBA's help). So then using hsqldb or something else lightweight was golden for local development or integration tests. So abstracting the DB was the default, and absolutely needed.

10 hours ago by ivan_gammel

What platform are you talking about? In Java world JDBC existed from early days and was enough if you stick to standard SQL, in tests you may have needed only to switch driver classpath and connection string. ORMs existed at least since 2003-2004 (early versions of Hibernate).

At the same time, substituting Oracle with a lightweight DB in an environment where full-time DBA was developing a database with loads of stored procedures and DB-specific stuff wasn't something really feasible - no abstraction layer would solve that.

11 hours ago by no-s

>>using hsqldb or something else lightweight was golden for local development or integration tests. So abstracting the DB was the default, and absolutely needed.

This also improves efficiency in operations, not necessarily development. If you used a library/framework for database access anyway, it's not an extra expense. There's ultimately a portability concern even if "vendoring"; it only imposes a cut-out to permit control of necessary change.

After a few unpleasant experiences I endlessly advocated we should always use an interface to access populated data objects and not interact with the database directly, not even running queries directly but always using at least lightweight IOC. I also advocated for testing where known result sets were fed through mock objects. After all, saved result sets could also be used to test/diagnose the database independently after schema/data changes. My experience predates a lot of ORM and framework responses.

Unfortunately later frameworks (intended to abstract these concerns) became ends in themselves, rather than a means to an end. These were used to satiate "enterprise-y" concerns (sacrifices to the enterprise gods). If you could afford to deploy operational Oracle, you wouldn't necessarily flinch at the cost of the extra (often pointless) layers of abstraction.

12 hours ago by silentbugs

Exactly.

DRY and YAGNI are two basic concepts I've always tried to work with. Do I need this piece of code more than once? It should be extracted into a separate place. Would I need this in other projects? Library.

YAGNI is most likely directly conflicting with most of what's mentioned, if the developer simply asks themselves: do I really need this? Will this project benefit from having separate layers of abstraction for things that are most likely never going to change?

I always think twice before writing a single line of code, with the main point of focus being if future me (or anyone who reads that piece of code) will be able to understand and change things, if needed.

12 hours ago by jehlakj

DRY seems to be one of those principles too many people take literally. Especially among junior devs from my observation (and experience).

Just because there is a block of code that's being used in two different places doesn't mean it should always be abstracted out. There's a subtle yet important consideration of whether these two consumers are under the same domain, or if it should exist as a utility function across all domains. And if that's the case, changing the code to make it generic, simple and without side effects is ideal.

I've seen too many of these mindless DRY patterns over time, and they eventually end up with Boolean flags to check which domain it's being used in as the codebase becomes larger.
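The end state looks roughly like this (domains and rates invented):

    # Two domains "deduplicated" into one helper that now needs a flag --
    # a sign they were never the same piece of knowledge to begin with.
    def calculate_fee(amount: float, is_invoice_domain: bool) -> float:
        if is_invoice_domain:
            return amount * 1.02   # invoicing's surcharge rule
        return amount * 1.05       # subscription's pricing rule

    # Clearer: one function per domain, even though they look alike today.
    def invoice_fee(amount: float) -> float:
        return amount * 1.02

    def subscription_fee(amount: float) -> float:
        return amount * 1.05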

12 hours ago by gfody

DRY should really be DRYUTETNRYDTP - don't repeat yourself unless the effort to not repeat yourself defeats the purpose

I also propose LAWYD - look at what you've done, a mandatory moment of honest reflection after drying out some code where you compare it to the original and decide if you've really improved the situation

12 hours ago by sixothree

Just this week I came across a set of 20+ controls in a form. Every control downloaded some version of a file from "report". Not once was there any shared code behind any of these controls. Because different people over time touched this code, not all of the functions were in the same place. And those that were each had slightly different nuances.

Without DRY, this would be a perfectly acceptable practice. DRY gives me something I have in my head when I see this and refactor into something that is manageable. DRY gives me something I can point to and say "please for the love of god don't perpetuate this folly".

8 hours ago by h3mb3

I think a good rule of thumb is to never prematurely apply DRY in source code BUT always try to aim for it when it comes to data and configuration unless there's a special need.

12 hours ago by herval

Reading these comments makes me feel like a huge outlier. Practically every project I ever worked on included AT LEAST one database swap. That includes startups and big tech, for all sorts of different reasons.

12 hours ago by geofft

I've worked on projects with database swaps, too, but I find it hard to believe that use of abstraction in advance would have helped them. There's a couple of cases.

One is that you're using two SQL databases and you're not using any advanced features. You started dev on MySQL, and then the company says "Thou shalt use Postgres" (or whatever). You don't need anything fancy in your own code to handle this. You're still making SQL queries, you're just swapping out the database engine. Technically, this is an example of dependency inversion (depend on the SQL abstraction), but you also didn't set out to do it - basically any programming language you're likely to use has database libraries that share a common abstraction for sending queries. And you didn't specifically make sure you were writing generic SQL, you just happened not to need anything.
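In Python, for instance, that shared abstraction might be something like SQLAlchemy's engine URL; a rough sketch, assuming the queries really are generic SQL:

    from sqlalchemy import create_engine, text

    # Only this line changes when "Thou shalt use Postgres" arrives:
    engine = create_engine("sqlite:///app.db")
    # engine = create_engine("postgresql://user:pass@dbhost/app")

    with engine.connect() as conn:
        rows = conn.execute(text("SELECT id, email FROM customers")).all()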

More commonly, you're switching databases for a specific feature. Maybe you realize PostGIS (or whatever) is going to solve a problem for you very well. But then you're changing how you model data, what your schemata are, and even how your code is architected and accesses things in the database. You're deciding to move certain logic from your code to the database engine - might even decide to move certain logic from the frontend into the backend, or change how request routing works, or something. This is a fantastic reason to move databases, but no amount of abstraction can prepare you for it, because you're fundamentally changing what the abstraction is. And you're deliberately abandoning SOLID because you're picking up a dependency on a concrete database.

But the real case I've seen is where you're switching databases (or data storage layers, more generically) to a different model - MongoDB to not-MongoDB, a C/P database to an A/P one, a relational one to a key-value store, etc. This is the above case but even larger. There is no abstraction you could possibly write that could encompass the old and new cases. It requires rearchitecting how your code works.

And then there are the most boring of cases - the ones where a database swap sounds doable in theory and the code is supposedly using an abstraction layer, but no one has ever verified that the code doesn't make assumptions about what database it's on and the code has gotten too big, so we just get an architectural exemption from "Thou shalt" and we run our own instance of the wrong database, because the overhead of running our own DB costs the business less than getting the swap wrong.

(Some public examples of these sorts of database migrations that come to mind: https://slack.engineering/scaling-datastores-at-slack-with-v... is about how Slack couldn't move away from MySQL and the architectural assumptions they made about it and had to rule out migrations to non-relational databases out of hand, and https://about.gitlab.com/blog/2018/09/12/the-road-to-gitaly-... talks about how GitLab moved from NFS storage to an RPC service, requiring a lot of refactoring of callers.)

12 hours ago by herval

I've been through plenty of cases where companies successfully swap out databases exactly as you described - a document storage for an RDBMS or vice-versa. The bird social network is an example where the db was so well abstracted, they managed to swap these out with no need to rewrite any application code. Facebook is another. Slack is a clear example of what the complete lack of forethought on this leads to. (Disclaimer: I'm familiar with all 3, but obviously can't talk details - there's plenty of public posts on these cases, though)

FWIW, every single db abstraction I've ever witnessed was worth it - if only so that one could run tests in SQLite and run prod in something else, or as a way to contain vendor lock-in in the code (I've seen projects successfully migrate from a plsql-heavy system to mysql because the code was well segregated, and I worked at a startup that literally imploded because the database was metastasized all over the place)

Anyway, as I put it, abstracting data storage is a no-brainer for me, and it saved my skin every single time. I don't expect to convince anyone here to go do it. :-)

13 hours ago by refactor_master

To me this blog doesn't really formally/finally refute anything, but is simply saying "don't over-engineer your solution".

> "dependency inversion has single-handedly caused billions of dollars in irretrievable sunk cost and waste over the last couple of decades"

Oh please. Is there anything in programming that hasn't had the "Irreparable harm to humanity" sticker attached to it by now?

12 hours ago by gfody

I like to imagine a final analysis of all code ever written by humans, after some ai hypermind from the future has digested it, turning out to be 99.999% dependency injection boilerplate

11 hours ago by layer8

Dependency injection has nothing to do with dependency inversion.

2 hours ago by gfody

nothing? okay

12 hours ago by jonnypotty

So because our industry is so unprofessional that you can literally point to loads of it and say "that has cost humanity billions", this entire argument is stupid and actually there is no problem at all?

Not sure that's the right attitude.

12 hours ago by AlphaSite

It's an industry which has probably generated trillions in value, so it may be a matter of perspective.

12 hours ago by refactor_master

So what would you have done to end up in the alternate universe where it hadn't cost humanity billions? Rational agents and perfect knowledge do not exist outside of economic theories.

12 hours ago by hinkley

Cloud computing, but only because we haven't hit the trough of disillusionment yet.

13 hours ago by agnosias

Honest question: if you don't do dependency inversion, or if you don't depend on interfaces/abstractions that can be mocked - how do you unit test your code?

Unit testing is the only reason pretty much all of my code depends on interfaces. Some people seem to consider this a bad thing/over-engineering, but it's how I've seen it done in every place I've worked at.

How do you do it?

12 hours ago by aszen

Firstly, if the dependency isn't doing any I/O, you can test your code as a whole along with its dependency. No need to mock.

More interesting is if your code relies on the outside world; then, instead of abstracting out the connection with the outside world, abstract out your business logic and test it separately.

So instead of a database repository being injected into your domain services, make your services rely on pure domain objects, which could come from anywhere, be it tests or the database.

Make a thin outer shell which feeds data into your domain logic from the outside world and test that via integration tests if necessary.

I'll admit I don't have the full picture here, but I have used this technique to good effect. The core idea is: don't embed your infrastructure code deep inside your architecture; instead, move it to the very top.
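A rough sketch of that shape, with invented domain names:

    from dataclasses import dataclass

    @dataclass
    class Invoice:
        total: float
        paid: float

    def outstanding_balance(invoice: Invoice) -> float:
        # Pure domain logic: unit-testable with no database in sight.
        return max(invoice.total - invoice.paid, 0.0)

    # Thin imperative shell at the very top; integration-test this bit.
    def balance_report(repo) -> list[float]:
        return [outstanding_balance(inv) for inv in repo.all_invoices()]

The unit test is then just `assert outstanding_balance(Invoice(100, 30)) == 70`, no mocks required.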

12 hours ago by garethrowlands

Some of this is covered in "Functional Core, Imperative Shell", https://www.destroyallsoftware.com/screencasts/catalog/funct...

Haskell programs tend to have this structure because pure functions aren't allowed to call impure functions.

12 hours ago by globular-toast

> Firstly, if the dependency isn't doing any I/O, you can test your code as a whole along with its dependency. No need to mock.

This is the thing. People have a tendency to overuse mocks. The point of automated testing (whether it's unit tests or something else) is to enable refactoring. That's really the only reason. Code that you don't touch doesn't suddenly change its behaviour one day.

In a previous job the developers started to go crazy with mocking and it reached a kind of singularity where essentially if a function called anything, that thing was mocked. It definitely tests each function in complete isolation, but what's the point? It makes refactoring impossible, which is the entire point of the tests in the first place!
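A toy illustration of the difference (functions invented; the rate is chosen so the floats compare exactly):

    from unittest.mock import patch

    def subtotal(items):
        return sum(price for _, price in items)

    def total(items, tax_rate=0.25):
        return subtotal(items) * (1 + tax_rate)

    # Overfitted: pins the call graph, so inlining subtotal() breaks the
    # test even though behaviour is unchanged.
    @patch(f"{__name__}.subtotal", return_value=100)
    def test_total_isolated(mock_subtotal):
        assert total([("x", 100)]) == 125
        mock_subtotal.assert_called_once()

    # Behavioural: only inputs and outputs; refactor freely underneath it.
    def test_total():
        assert total([("x", 60), ("y", 40)]) == 125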

This excellent talk completely changed the way I approached testing. Every developer who writes tests needs to watch this now! https://www.youtube.com/watch?v=EZ05e7EMOLM

9 hours ago by chakspak

I generally agree with you, though I want to comment on this line:

> Code that you don't touch doesn't suddenly change its behaviour one day.

This can and does happen all the time, when the platforms and abstractions your code builds on change underneath you. This is why a compelling environment / dependency management story is so important. "Code rot" is real. =P

12 hours ago by snidane

You are doing interface/function level testing and calling it unit testing.

That's what the industry converged on, that a function/method = a unit, but apparently it used to be that a module was meant to be the unit.

It can be that both interfaces and modules can be considered a "bag of functions".

Seems like a lot of confusion about unit testing, mocking and DI stems from this historical shift.

I believe that interface/method level testing is too granular and mostly results in overfitted tests. Testing implementation, not behaviour. Which can be useful for some algorithm package for example, but probably not so much applicable to typical business logic code.

TDD: where did it all go wrong. https://youtu.be/EZ05e7EMOLM

13 hours ago by robertknight

An option in some languages is to simply create an alternate mock/fake version of the actual class or function that is depended upon and monkey-patch the code under test to use it for the duration of the test. This is commonly done in Python (see `unittest.mock.patch`) and JavaScript (e.g. with Jest mocks), for example.

The end result is the same as if you'd created an interface or abstraction with two implementations (real and fake/mock), but you skip the part where a separate interface is defined.

The upside to this is that the code is more straightforward to follow. The downside is that the code is exposed/potentially coupled to the full API of the real implementation, as opposed to some interface exposing only what is needed.
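In Python that looks roughly like this; `billing` and its functions are invented stand-ins for the real dependency:

    # billing.py (hypothetical)
    def fetch_rate(currency: str) -> float:
        raise NotImplementedError("imagine a slow HTTP call here")

    def convert(amount: float, currency: str = "EUR") -> float:
        return amount * fetch_rate(currency)

    # test_billing.py
    from unittest.mock import patch
    import billing

    def test_convert():
        # No hand-written interface: patch the real function for the
        # duration of the test; it is restored automatically afterwards.
        with patch("billing.fetch_rate", return_value=1.25):
            assert billing.convert(100) == 125.0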

12 hours ago by mrloba

Move your logic to pure functions. Use classes for plumbing. Now you can unit test the logic, and integration test the plumbing. You can easily test with a real database using docker for example. Note that you'll still use DI in this case, but you'll have far fewer classes and therefore fewer dependencies as well.
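For the real-database half, one way to do it is the testcontainers package; a sketch, assuming testcontainers, SQLAlchemy and a local Docker daemon are available:

    from sqlalchemy import create_engine, text
    from testcontainers.postgres import PostgresContainer

    def test_customer_roundtrip():
        # Spins up a throwaway Postgres in Docker for this test only.
        with PostgresContainer("postgres:16") as pg:
            engine = create_engine(pg.get_connection_url())
            with engine.begin() as conn:
                conn.execute(text("CREATE TABLE t (id int, email text)"))
                conn.execute(text("INSERT INTO t VALUES (1, 'a@b.c')"))
                got = conn.execute(text("SELECT email FROM t")).scalar_one()
            assert got == "a@b.c"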

13 hours ago by DanielBMarkham

1. Watching the tech community over a really long time, please guys, don't swing violently from "A is all good!" to "A is all bad!" It was good for something, else so many people wouldn't have successfully used it for so long. Work more on discrimination functions and less on hyperbole please. Future generations will thank you for it.

2. "...coupled with the way most developers conflate subtypes with subclasses..." Speaking as somebody who both likes SOLID and could write this essay/present the deck, I think there's a lot of confusion to go around. There are a lot of guys coding classes using rules better suited for types. There are a lot of guys applying OO paradigms to functional code and vice-versa. In general, whenever we swing from one side to the other on topics, it's a matter of definitions. There is no such thing as "code". There's "thinking/reasoning about code" and there's coding. You can't take the human element out and reason abstractly about the end product. Whatever the end product is, it's a result of one/many humans pounding away on keyboards to get there.

3. My opinion, for what it's worth: as OO took off, we had to come up with heuristics as to how to think about grouping code into classes, and do it in such a way that others might reasonably get to the same place ... or at least be able to look at your code and reason about how or why you did it. That's SOLID and the rest of it. Now we're seeing the revenge of the FP guys, and stuff like SOLID looks completely whack to them, as it should. It's a different way of thinking about and solving problems.

ADD: Trying to figure out who's right and who's wrong is a (philosophical) nonsense question. It's like asking "which smell is plaid?" Whatever answer you get is neither going to provide you with any information nor help you do anything useful in the future. (Shameless plug: Just submitted an essay I wrote last week that directly addresses reasoning about types in a large app)

12 hours ago by elric

> don't swing violently from "A is all good!" to "A is all bad!"

Indeed. This clickbaity style of laying out arguments is not terribly constructive. Software is not black/white. It's entirely grey. And there's a lot of room for contextual nuance everywhere.

Principles like SOLID (and DRY, and YAGNI, etc) are principles. They are not laws. Principles are guidelines which can help you make solid (heh heh) decisions. They are subject to context and judgement.

If good software design were as easy as memorizing a couple of acronyms, we'd all be out of a job. But it's not. It takes practice and experience. Writers and academics can make things easier by presenting accumulated experience in principles and guidelines, but there are no silver bullets. It's unfair and pointless to expect SOLID (or anything else) to apply in any and all cases.

12 hours ago by dvlsg

> It's unfair and pointless to expect SOLID (or anything else) to apply in any and all cases.

I think that's a big part of the problem, and how we end up with articles like this. A lot of developers do expect SOLID to apply to every case, and I've seen fine code get rejected in reviews because it wasn't SOLID enough.

11 hours ago by no-s

>>A lot of developers do expect SOLID to apply to every case, and I've seen fine code get rejected in reviews because it wasn't SOLID enough.

Prescriptive principles without informed discretion. That is the problem, and it's not just software.

"A foolish consistency is the hobgoblin of little minds."

It's terrifying how easily a reasoned and seemingly sensible polemic may be interpreted into a foolish oppression, even by otherwise rational folk...


12 hours ago by cratermoon

> Software is not black/white. It's entirely grey

entirely? There aren't parts that are black/white? That seems a bit black/white.

11 hours ago by wizzwizz4

Ah, but that comment isn't software, is it?

[0]: https://esolangs.org/wiki/English

11 hours ago by aroman

Let's settle on it being a gradient from black to white.

11 hours ago by elric

Touché. "Only the Sith deal in absolutes" and all that.

10 hours ago by andrewprock

Well, the 0s are black and the 1s are white, except when it is the other way around.

10 hours ago by drewcoo

A thousand times this!

Principles, heuristics, "best practices," and just generally good ideas are not absolute truth. SOLID is like hand washing. Please do it! Unless you have a good reason not to.

The root of the "disprove a heuristic by a single counterexample" problem is a misunderstanding of logic. A heuristic is not a statement that universally all hands must always be washed. It is a milder claim that generally handwashing has proved useful via inductive means, so you should probably wash your hands if you want to minimize your risk.

Any expert in a given field should know times when not washing hands has been justified. But by the same token, those people know that they should still recommend hand washing to the general public because they won't know when it's not justified.

Wash your hands, please.

12 hours ago by lostcolony

"It was good for something"

The post does seek out where the SOLID principles came from. And it's not really debunking them; just saying they're not absolutes. Which, yes, the title is click-baity, but I've certainly found people who treated them as absolutes, or tried to talk about code from SOLID perspective, and I've certainly never found that useful.

In fact, I've not found -any- "best practice" to ever be absolute in a generalizable sense, and it's never been useful rhetoric to bring them up in a design discussion because of that. In fact, they sometimes run counter to each other. "DRY would say we should combine these together" - "Yeah, but Single Responsibility; while the code is mostly the same, they fundamentally are dealing with different things in different contexts".

Learn the heuristics, then deal with the subjective realities each project brings; anyone who tries to treat a given codebase as having an objectively correct approach, rather than a nuanced and subjective series of tradeoffs, is not someone worth talking or listening to.

11 hours ago by dllthomas

> "DRY would say we should combine these together" - "Yeah, but Single Responsibility; while the code is mostly the same, they fundamentally are dealing with different things in different contexts"

As originally coined, DRY speaks of ensuring every piece of knowledge is encoded once. If pieces of code are "dealing with different things" then those are two pieces of knowledge, and DRY (per that formulation) does not recommend combining them.

I agree that there is a prevalent notion of DRY that is more syntactic, but I find that version substantially less useful and so I try (as here) to push back on it. Rather than improving code, it's compressing it; I've joked that we should call it "Huffman coding" when someone tries to collapse unrelated things in ways that will be unmaintainable.
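A tiny invented example of the distinction:

    # Looks like duplication, but these encode two independent decisions;
    # merging them couples marketing's choice to finance's contract terms.
    def trial_period_days() -> int:
        return 30   # marketing's choice

    def invoice_due_days() -> int:
        return 30   # finance's contractual term

    # What DRY (as coined) does target: one piece of knowledge restated.
    PAGE_SIZE = 25
    HELP_TEXT = "results are shown 25 per page"   # same fact again; will drift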

Note that it's not just that syntactic DRY sometimes goes too far - it also misses opportunities where the original DRY would recommend finding ways to collapse things: if I'm saying "there's a button here" in my HTML and in my JS and in my CSS, then I'm saying the same thing in three places (even though they look nothing alike) and maybe I should find a way to consolidate.

There are, of course, still tradeoffs - maybe the technology to unify the description isn't available, maybe deepening my tech stack makes things less inspectable, &c.

9 hours ago by lostcolony

I posted a reply to a sibling comment of yours, but wanted to say here too - at that point it ceases to be a useful statement to ever bring up, because I've never seen a discussion where everyone agreed things were the same 'piece of knowledge', and one side was saying it should be repeated. When I've heard "DRY" trotted out, it's -always- been in a situation where the other side was trying to claim/explain that they were different pieces of knowledge. Hence my statement - it's worth understanding the meaning of the principle, internalizing the lesson, as it were, but then the formulation ceases to be useful.

11 hours ago by no-s

>>I've joked that we should call it "Huffman coding" when someone tries to collapse unrelated things in ways that will be unmaintainable.

heheh I've made that joke too, also "let's un-complicate this into a pointless complexity," when it goes over their heads.

11 hours ago by cjblomqvist

I understand that this is almost nitpicking (because the DRY example is not the point of your comment), but your DRY example is a really bad example of this; rather, it's a very good example of the lack of knowledge within the software community. According to Wikipedia, DRY means "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system" [1], NOT to deduplicate all code that looks the same. It's actually more or less exactly the same as the Single Responsibility principle.

PS. Interviewing senior devs, team leads and tech leads atm. And so far none (!) have been able to properly formulate this (even after I'm hinting that it's not only about code deduplication) and 75-90% believe it's all about code deduplication. Imo quite scary, and tells you a fair bit about the shape of the software dev industry...

[1] https://en.m.wikipedia.org/wiki/Don%27t_repeat_yourself

10 hours ago by no-s

>>none (!) have been able to properly formulate this (even after I'm hinting that it's not only about code deduplication) and 75-90% believe it's all about code deduplication

Well, duplicate code is the typical manifestation, as the sibling comment [1] relates:

>>> if I'm saying "there's a button here" in my HTML and in my JS and in my CSS, then I'm saying the same thing in three places (even though they look nothing alike) and maybe I should find a way to consolidate.

"and maybe" it's just otherwise hard to get the point across, as you've discovered. Isomorphisms are easier to distinguish (and validate!) versus homeomorphisms, but unnecessary pedantry usually results in MEGO. We shouldn't expect senior developers to automatically embody the virtues of mathematicians or philosopher-kings...

yeah, I am nitpicking worse, but after 40+ years in the industry and suffering through tens of thousands of marginally relevant distinctions in "gotcha" interview questions, I am without shame even though my head is nodding in acknowledgment of your points...

[1] oops, I meant https://news.ycombinator.com/item?id=26532424

9 hours ago by lostcolony

Yes, but at that point it stops being a heuristic and instead is a tautology. "Don't repeat the things that shouldn't be repeated". Or even what you said, which, again, is a nice statement, but who decides what 'a piece of knowledge' is? I.e., when is it the same bit of knowledge, vs when is it different? That's often the heart of it; I've had those debates (and in fact, was referencing them in my parent comment), where someone feels these ~things~ are basically the same, and so should be treated mostly the same, with shared code, and where someone else feels that, no, the differences are sufficient that they should be treated differently. And that's a reasonable discussion to have. But it's one that trotting out "Don't Repeat Yourself!" or SOLID or etc adds -nothing- to; the principles themselves clash, and ignore the core difference you're trying to work out.

In short, the reasons for the principle matter, but if you know to look for and understand the reasons, the principles themselves are obvious and do not serve as useful heuristics.

10 hours ago by andrewprock

Well code is data, so it's understandable that things get squishy inside people's heads.

12 hours ago by cghendrix

+1 on the hyperboles. You see so many articles with titles like "Never use a singleton unless you want to lose your marriage like me"


12 hours ago by spaetzleesser

In politics and in tech consulting there is a lot of money and fame to be made by going to the extremes and not allowing the middle ground. I just wish people wouldn't constantly fall for this, be it in politics or in tech.

Wait another 10-20 years and FP will suffer the same fate.

3 hours ago by resonantjacket5

The main problem I see is that while the overall SOLID principles are still correct, the original definitions are too outdated and heavily imply inheritance everywhere.

Single Responsibility: honestly, I like 'separation of concerns' much more. People tend to think single responsibility means using tiny classes and single-line functions. The open-closed principle, stating code "should be open for extension, but closed for modification", suggests to many that it's about inheritance only; "open for customizability" would be much better. Liskov Substitution seems like it's talking about inheritance when it really also applies to interface usage in general. The Dependency Inversion principle is interpreted by many as "use dependency injection everywhere", etc...
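On the Liskov point, a quick sketch of substitutability with a plain structural interface, no inheritance anywhere (names invented):

    from typing import Protocol

    class KeyValueStore(Protocol):
        def get(self, key: str) -> str | None:
            """Contract: return None when the key is missing."""
            ...

    class DictStore:
        def __init__(self) -> None:
            self._data: dict[str, str] = {}

        def get(self, key: str) -> str | None:
            return self._data.get(key)   # honours the contract

    class StrictStore:
        def get(self, key: str) -> str | None:
            raise KeyError(key)          # matches the types, breaks the contract

    def greeting(store: KeyValueStore) -> str:
        # Correct for any honest substitute; blows up with StrictStore.
        return store.get("name") or "stranger"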

I wish someone would go and update these definitions for the modern world.

12 hours ago by mdoms

There's a big push across all of Dev Twitter right now to do away with SOLID. To say I think it's misguided is an understatement. There's a similar push underway to do away with unit tests. The direction of our industry right now is very concerning to me. And honestly, this may be an unpopular opinion, but I think a lot of it is driven by people who disagree so vehemently with Bob Martin's politics that they overcorrect and start throwing out his good work too.

8 hours ago by resonantjacket5

I think the push against SOLID is fine. I've never really seen the 'single-responsibility' part ever really followed or used in a way that made sense.

I haven't really seen a strong push against unit tests?

6 hours ago by gherkinnn

Booking.com famously doesn't write many tests at all. Something something move fäst. Well, other than the A/B kind. But you know what I mean. I also recall a recent Stack Overflow blog post mentioning that they don't have many either.

Regarding a push against unit tests, the Frontend world, for what it's worth, has a rising school of thought that favours integration tests based around what the user sees and interacts with.

6 hours ago by mdoms

Booking.com is one of the most horrifically unreliable, buggy pieces of garbage on the internet, so this doesn't surprise me.

7 hours ago by drooby

You've never seen SRP in practice? This is honestly concerning.

You've never seen like a User class that only encapsulates fields and methods relating to a User abstraction?

5 hours ago by resonantjacket5

I meant more that, for non-trivial classes, when people are deciding when to break up a class, the "single responsibility" part is too loosely defined, to the point where I've never seen people actually use it as a metric. I agree classes can grow too large; the hard part is what rubric is actually used for delineating that, and just saying "single responsibility" by itself really hasn't been useful.

11 hours ago by callmeal

>There's a similar push underway to do away with unit tests.

I agree with that push, as long as there's some other way of validating requirements. In my team, that's done with integration tests and our core principle is: code that is integration tested does not require unit testing. That principle surprisingly covers a good 70-75% of the codebase, leaving us with a few core unit tests for the secret sauce of our product.
