Valentina (Cupać) Jemuović’s Post

Technical Coach | Want zero bugs and faster software delivery? DM me to find out how I can help your team

Don't set code coverage goals. It's DANGEROUS. A mistake engineering managers make is to impose an arbitrary minimum code coverage metric (e.g. minimum 80% coverage) on teams that don't have the adequate skill set, leading to bad consequences!

Once upon a time, there was an engineering manager; let's call him Robert. The problem was that the software was buggy. He knew that the development teams rarely wrote any tests. They had given up on tests because (they said) the tests took too long to write, were too fragile, and were eventually just commented out. The only tests they had were some UI tests written by QA engineers, but those tests were also fragile and slow to run. So the main testing strategy was manual regression testing.

One day, Robert went to a conference, where he found THE answer to all his problems: enforcing code coverage metrics! He heard that code coverage measures whether the tests cover the code. So then, achieving 100% code coverage should be the goal, right? 100% code coverage means we have a high-quality test suite? Robert thought this was a "quick and easy" solution to "measure" the test suite's quality. By measuring code coverage, he could gain transparency into whether the team was writing enough tests: are the tests covering the code?

Robert returned and announced the new standard to the team: 100% code coverage is the target for the next quarter, and anything less than 80% would count against the team in performance appraisals. So the team was given a code coverage goal, but they didn't have the skill set for writing tests and weren't offered any support in acquiring it. They also didn't know what code coverage meant beyond being a formal target.

Robert looked at the code coverage report: 100%. But it was just an illusion of success. The reality was that the team didn't move forward; they moved backward. The situation was WORSE than before. The bug count remained. Delivery speed dropped. Motivation plummeted. How?
Why? (Coming up on Friday!) Read the full article on the Optivem Journal: https://buff.ly/3Cq2Acu #tdd #testdrivendevelopment #softwareengineering #optivem

Valentina (Cupać) Jemuović

Technical Coach | Want zero bugs and faster software delivery? DM me to find out how I can help your team

1y

NEWS UPDATE: I wrote a follow-up post that compares the presence of "holes" in test suites if we use Test First vs Test Last approaches https://www.linkedin.com/feed/update/urn:li:activity:6970277577198305280/

⚡️Michaël Azerhad

I build your complex front-end/back-end apps with my zero-bug guarantee; my clients pay for results! | Renowned TDD / Clean Architecture / DDD / CQRS trainer (over 1,500 trained!) | Creator of the WealCome Slack for Craft mentoring

1y

Coverage is like checking that you walked down every street in the city, meaning you traversed every line of code you wrote. At the end of the day, it can tell you: "Hey, great, you went everywhere today!" The caveat is that the expectation was: "Visit every street and count the homeless men and women." Your code just "walked" through the streets without checking anything, meaning without fulfilling the main expectation! When you look at the test, it says: "should check whether I traverse every street in the city"... Where is THE expectation about counting? Hmm... we don't have it. So you can have 100% coverage when your code merely matches a poor test written by a poor developer! To detect that, run a mutation testing process: it will REMOVE the lines where you count homeless people, and... the tests will still pass, revealing your poor test quality and the complete illusion of 100% coverage! To pass the mutation testing challenge in one shot, trust TDD, real TDD, because that discipline won't let you write a single line (or letter) of production code that isn't driven by a failing test.
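Michaël's analogy can be made concrete with a small sketch (hypothetical names, Python used for illustration): a test that merely executes the code achieves full line coverage, while a hand-made "mutant" that deletes the counting logic shows why such a test is worthless.

```python
# Hypothetical example: full line coverage without any real verification.

def count_homeless(streets):
    """Count entries flagged "homeless" across all streets."""
    total = 0
    for street in streets:
        total += street.count("homeless")
    return total

def count_homeless_mutant(streets):
    """A mutant: the counting increment has been removed, as a
    mutation testing tool would do."""
    total = 0
    for street in streets:
        pass  # counting deleted
    return total

def weak_test(count_fn):
    """Executes every line of count_fn (100% coverage) but asserts nothing."""
    count_fn([["homeless", "resident"], ["homeless"]])
    return True  # "passes" no matter what the function returns

def strong_test(count_fn):
    """Actually checks the count, so it can kill the mutant."""
    return count_fn([["homeless", "resident"], ["homeless"]]) == 2

# The weak test passes for both the original and the mutant:
assert weak_test(count_homeless) and weak_test(count_homeless_mutant)
# Only the strong test tells them apart:
assert strong_test(count_homeless) is True
assert strong_test(count_homeless_mutant) is False
```

Real mutation testing tools (e.g. PIT for Java, mutmut for Python) generate such mutants automatically and report which ones the test suite fails to kill.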

Alexander Pushkarev

Senior Software Engineer at TripAdvisor

1y

I used coverage goals as a forcing function to help people adopt TDD. The target was quite simple: 100%, and the meaningfulness of this coverage was evaluated during the code review stage. If it wasn't 100%, the build would fail. It is extremely painful to add coverage at that point, especially when code reviews are quite rigorous, so people would inevitably adopt TDD. Of course, it is not guaranteed to work, but it did work at least once.
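For reference, a build gate like the one Alexander describes can be expressed in most coverage tools; as a sketch, with Python's coverage.py the threshold could live in pyproject.toml like this (the tooling in his case is unknown, so this is only an illustration):

```toml
# pyproject.toml (fragment): make "coverage report" exit non-zero,
# failing the build, whenever total coverage drops below 100%.
[tool.coverage.report]
fail_under = 100
```

Equivalent knobs exist elsewhere, e.g. JaCoCo coverage rules for Maven/Gradle builds.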

Martin Heininger

Effective and efficient development processes/methods | Appointment? calendly.com/martinheininger

1y

Valentina Cupać (Валентина Цупаћ) In safety software testing this is a fully known and understood problem. Therefore you are never asked to provide "just" code coverage; you are always asked to provide code coverage derived from requirement-based tests applying boundary and equivalence class testing. Unfortunately, many testing books still define white-box tests as tests focused solely on reaching code coverage. Exactly this effect leads to the bad surprises described in the article. But why do we not rewrite our testing books?

Christian Hujer

Spammers will be blocked • 🧙‍♂️ Blogger Coach Speaker Trainer 🖦 Agile DevOps SWCraft BDD TDD XP • Lean Process Architect, Humanist • CEO Nelkinda

1y

I disagree with this. 1. This lacks differentiation between code execution coverage and code mutation coverage. 2. Everyone can make a chart. No offense: charts are useful to illustrate information, but without a published study it's not science, it's hearsay. Of course that's a generic problem; despite the size of our industry, solid scientific data is hard to come by. Still, without a link to the data and the method of study, the chart is just that: a chart. 3. We coaches have two biases here, both inherent to our job. One is that we focus on the negative, as it's our job to replace it with something better: we'll see the tests without meaning, and we won't see all the good tests, because they aren't causing trouble. The other is that we coaches are usually called into places where our help is needed a lot; places where things are going well enough are less likely to call for our help. 4. Goodhart's law isn't a law, it's an adage. It's useful, but just a model. I don't believe that, as a general rule, given coverage goals, developers will automatically write bad tests. That's not a law of nature; it's a matter of priorities and the overall situation. (Continued in reply.)

Richard Smith

Principal Software Engineer / Innovator at Preservica

1y

Ideally you want to achieve 100% *use case* coverage, though in all but the simplest applications this is likely not practical. But the important point is that increasing *code* coverage does not necessarily help you increase use case coverage. And setting code coverage metrics means people will inevitably pick the easiest possible tests that increase code coverage, which are always going to be ones targeted at class-level calls: the least useful and most brittle kind of test. I worked on a project where there had been code coverage targets, and all the property getters had unit tests. Literally this kind of stuff:

    void test_can_read_email() {
        instance.email = 'user@domain.net';
        AssertEqual('user@domain.net', instance.email);
    }

Using code coverage stats as an overall indicator of the health of a project can be OK (a project with 20% coverage probably is in trouble), but it needs to be done in a way that doesn't encourage developers to write easy but pointless tests, and that's tough.

Nafaa Azaiez

Passionate about TDD | DDD | Clean architecture | Hexagonal architecture | Software craftsmanship | Agility (the real kind) | CQRS | Event-driven architecture...

1y

Uncle Bob seems to disagree. According to him, we should push towards 100% coverage, but he admits that this is not really achievable and that it can give false feedback, a false sense of security... This is why he recommends combining high coverage with mutation testing... I got all this from this post: https://blog.cleancoder.com/uncle-bob/2016/06/10/MutationTesting.html

Jordan Carter

Director, Cloud Engineering at Fidelity Investments

1y

Honestly, this is presented as a false dichotomy. Arbitrary targets without meaningful execution and value-centric metrics can lead to gaming the system; however, if the proper quality controls and support are given, analysis of code coverage does provide insight for developers. That being said, one can easily argue that teams with the culture and mindset to build good testing capabilities don't need the arbitrary target in the first place, because they write tests to catch problems early, as a means of helping themselves and their fellow team members, not because tests are a tedious external requirement. 🙃

Frank Raiser

SW developer, coach, trainer, architect, consultant and other roles needed for a project's success.

1y

In my trainings I usually bring this point across with a simple thought experiment: assume the test suite follows a nice arrange-act-assert structure for all tests and you have 100% coverage. Now remove all the assertions and measure coverage again. Maybe the goal isn't the coverage after all. (I've also seen projects that were required to reach 100% coverage as a goal, and this sort of pointless testing is what you get for that goal when the deadline approaches.)
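Frank's thought experiment in miniature (a sketch with hypothetical names, Python for illustration): both tests below execute every line of discount(), so line coverage is identical, yet only one of them can ever fail.

```python
def discount(price, rate):
    """Apply a fractional discount to a price."""
    return price * (1 - rate)

def test_with_assertion():
    # Arrange / Act
    result = discount(100.0, 0.2)
    # Assert: the only part of the test that coverage does not measure
    assert result == 80.0

def test_without_assertion():
    # Same arrange and act, assertion removed: discount() is still
    # fully executed, so line coverage is unchanged at 100%.
    discount(100.0, 0.2)

test_with_assertion()
test_without_assertion()  # "passes" no matter what discount() returns
```

A coverage report cannot distinguish these two tests; a mutation testing tool, or simply breaking discount() on purpose, immediately can.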
