Thursday, June 13, 2019

Are your tests enough?

When you develop an application that has more than two features, it is calming to know that you have tests that can validate that the application is still functioning when you change something... at least some of you might have that feeling... or wish you did.

It is great to be able to balance over a safety-net, knowing that if you screw things up...you will not fall hard...the net will catch you...

If you work on products that have this...consider yourselves lucky...you bastards ...I envy you.
You, who do TDD by the book... and have written all the possible unit tests...and have all the coverage in the world...youuuuu...

Today we had a meeting about the rising concern that, although we strive to have as many regression tests as possible, we still have a feeling that this is not enough... bugs can go unnoticed... and QA might be blamed for it... and the consequences might be significant...
...and arguments were passed back and forth...and we agreed that there is no budget for this shit...cover it up...and back to work.

Something bugs my mind:
Let's assume that you have some tests... unit tests... or other functional tests... but automated.
How do you measure if you cover enough from the functionality of the product?
If you have several layers of APIs and an API surface as wide as the ocean... a gazillion modules that each do something... but are maybe not always used... how do you measure coverage?
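One crude way to put a number on that API surface (a minimal sketch, not a real tool — `OrderApi`, `ApiCoverage` and the idea of recording invoked method names are all hypothetical here): treat every public method of a module's API class as a unit to cover, record which ones the test suite actually called, and divide.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;

// Hypothetical API class standing in for one module of the product.
class OrderApi {
    public void create() {}
    public void cancel() {}
    public void archive() {}
}

public class ApiCoverage {
    // Names of public methods the test suite actually invoked.
    // In a real setup this would be filled in by a proxy or interceptor
    // wrapped around the API during the test run.
    static Set<String> invoked = new HashSet<>();

    // Fraction of the declared public API that the tests touched.
    static double coverage(Class<?> api) {
        Set<String> declared = new HashSet<>();
        for (Method m : api.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                declared.add(m.getName());
            }
        }
        long hit = declared.stream().filter(invoked::contains).count();
        return declared.isEmpty() ? 1.0 : (double) hit / declared.size();
    }

    public static void main(String[] args) {
        invoked.add("create");
        invoked.add("cancel");
        // 2 of 3 public methods exercised
        System.out.println(coverage(OrderApi.class));
    }
}
```

It says nothing about *how well* each method is tested, of course... only whether anything ever touched it... but "these 40% of the API were never called by any test" is at least a number you can argue about.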


If you can't measure coverage... you can't estimate the quality of your tests... regardless of whether they run for 3 hours straight... all you do is create an illusion of confidence... nothing more.

Measuring coverage is not a simple task... these days, Cobertura and friends are not your friends anymore... not if you have almost no chance of writing unit tests against your code...   (...and I'll let you scream all you want here... yes, this is the case...)

All these "off-the-shelf" tools do is instrument your code and measure how much of it you touched during your tests... or runs... but if you have 10 million lines of code and the product is a monster monolith... all these tools will report is an insignificant coverage percentage... one that is almost impossible to push to a value that can be shown to management.
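The arithmetic behind that disappointment is simple. A sketch of what Cobertura-style instrumentation boils down to (the probe array and the numbers are illustrative, not taken from any real run): every line gets a probe, coverage is probes hit over probes total... and against a 10-million-line denominator even a serious suite barely registers.

```java
// Minimal sketch of line-coverage instrumentation:
// one boolean probe per source line; the instrumented code
// flips its probe whenever the line executes.
public class LineCoverage {
    static boolean[] probes; // one flag per instrumented line

    static void hit(int line) { probes[line] = true; }

    static double percent() {
        int hits = 0;
        for (boolean p : probes) {
            if (p) hits++;
        }
        return 100.0 * hits / probes.length;
    }

    public static void main(String[] args) {
        probes = new boolean[10_000_000]; // the 10-million-line monolith
        // a long-running suite that still only touches a sliver of it
        for (int line = 0; line < 150_000; line++) {
            hit(line);
        }
        System.out.println(percent() + "%"); // prints "1.5%"
    }
}
```

Exercise 150 thousand lines out of 10 million and the headline number is 1.5%... which is why the raw percentage from these tools is useless as a management metric for a monolith, no matter how good the tests actually are.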

What about API coverage? The API that is touched by the developers who build on top of your monster... and dress it up in shiny skirts... to sell it to unaware customers...

How do you measure coverage of a messaging API? How do you measure code that is not purely functional?... it's OOP... remember... classes... hierarchies... dependencies...
Even Spring struggles with this... and all it does is make you end up with more code in src/test/java than in src/main/java... but at least you have tests... check.
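For the messaging case, one crude signal you *can* get cheaply (again a hypothetical sketch — the message-type names and the `MessageCoverage` recorder are invented for illustration): keep the set of message types the system declares, record which ones the tests ever put on the wire, and report the difference.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: coverage of a messaging API as "which declared message
// types did any test ever actually send?"
public class MessageCoverage {
    // Hypothetical message types the product declares.
    static final Set<String> DECLARED = Set.of(
        "OrderCreated", "OrderCancelled", "PaymentFailed", "InventoryLow");

    // Message types observed during the test run. In a real setup
    // this would be filled in by a test-only channel interceptor.
    static Set<String> seen = new HashSet<>();

    static void record(String messageType) { seen.add(messageType); }

    // Declared message types that no test ever exercised.
    static Set<String> neverExercised() {
        Set<String> missing = new HashSet<>(DECLARED);
        missing.removeAll(seen);
        return missing;
    }

    public static void main(String[] args) {
        record("OrderCreated");
        record("PaymentFailed");
        // the message types no test ever sent
        System.out.println(neverExercised());
    }
}
```

It won't tell you the handlers behave correctly... but a non-empty `neverExercised()` set is a concrete list of blind spots, which beats arguing about a feeling.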


If you can't measure it... you walk blindfolded...in a dark room...full of venomous snakes...starved to near death.

Are your tests enough? Do you cover enough?
