Tuesday, August 11, 2009

Quality Governance goes down a little easier when automated

It's been a while since my last post. I've recently started a new job contracting for a company in the health care industry. As one of the earliest developers on the project, I was tasked with setting up some continuous integration (we decided on Hudson) and the beginnings of some software quality governance (PMD for static analysis and Cobertura for test coverage).

Our client has some fairly strict requirements for static analysis and test coverage, but fortunately they already have a PMD ruleset defined- and their test coverage requirement amounts to 95%.

As we're early in the project, we're ramping up the team and still doing lots of 'infrastructure' work, like defining Ant tasks to automate certain build steps (although realistically that sort of thing never ends on an evolutionary project). I've never worked on a team where there was much "formal" governance- the team's aspirations for code quality were stated and agreed upon, but never enforced with strict rules. We tended to pair program occasionally and scrum regularly, so any wide discrepancies in practices were ferreted out.

But having automated our static analysis rules with the Hudson PMD plugin, and our test coverage with the Cobertura plugin, I must confess I'm liking the strict governance. Now the build fails not just when a test fails, but also when there aren't enough tests (coverage falls below the threshold) or the static analysis rules are violated.

I've been a big fan of Test Driven Development for quite a few years now, and although I've been fortunate to work on teams where pair programming wasn't out of the ordinary, it's always been a struggle to maintain high code coverage as a project evolves. Having a client that mandates challenging goals and strict governance will actually be a blessing.

As new developers join the team, it won't seem like such an artificial or arbitrary goal to have 95% coverage and strict analysis metrics. For one thing, it's being mandated by the client. But more importantly, it's now (and forever more) baked into the build- these developers will quickly understand that their work won't be accepted unless it passes the build- and in this case that means governance as well as functionality. I'm really hoping it makes the overall quality goals consistent, maintainable and achievable.

It also reinforces the Test Driven approach. For example, say up to this point we have an initial domain model, some services, some controllers and some stubbed out objects. The moment we add a persistence layer, we'll have to test it. The code coverage rules will fail if we don't. It might be easy (and very tempting) to just whip up some simple CRUD operations on a DAO without any tests (and let's face it, with frameworks like Hibernate, we can be fairly confident they'll work). But our build process won't allow it. This means we have to think about how we're going to test our persistence layer very early- do we go with DBUnit? Do we grow something ourselves? Those are nice questions to have answered (and working) before we actually have any persistence code working. Not to say that we must have it perfect out of the gate- but we must have it covered.

Having the governance automated makes swallowing that sort of pill just a little bit easier.
