One-page summary of my research plan

Thursday, June 11, 2009

I'm exploring the quality of climate modelling software. I'm investigating what software quality means to climate modellers, and how we can go about measuring and benchmarking it. For example, some of the broader questions that motivate me are: How can we determine the software quality of a climate model? How can we compare the quality of one model to another? How does the software quality of climate models compare to that of commercial products or other computational science applications? What do climate modellers themselves mean when they set out to build good quality software?

These are big questions. To start answering them I am going to do two small things. Firstly, I am going to inspect the climate modelling software itself. I will use fault density (i.e. the density of statically identifiable errors and "misuses" of the programming language) as well as bug density (i.e. the density of reported and fixed defects) to benchmark the software quality of several climate models. This analysis will carefully break these statistics down across the various defect dimensions (for example, pre- versus post-release, and defect type). See my blog post on counting defects for more details.
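To make that concrete, here is a minimal sketch of the kind of bookkeeping I have in mind. The record fields, the per-KLOC normalisation, and the example numbers are all working assumptions for illustration, not a settled part of the study:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Defect:
    # Minimal record for one defect; these fields are working assumptions.
    phase: str        # "pre-release" or "post-release"
    defect_type: str  # e.g. "logic", "interface", "language misuse"
    source: str       # "static-analysis" (a fault) or "bug-tracker" (a bug)


def density_per_kloc(defects, kloc):
    """Defects per thousand lines of code (one common normalisation)."""
    return len(defects) / kloc


def breakdown(defects, dimension):
    """Counts along one defect dimension, e.g. phase or defect_type."""
    return Counter(getattr(d, dimension) for d in defects)


# Hypothetical example: a 400 KLOC model with a handful of logged defects.
model_kloc = 400
defects = [
    Defect("pre-release", "language misuse", "static-analysis"),
    Defect("pre-release", "logic", "bug-tracker"),
    Defect("post-release", "interface", "bug-tracker"),
]

faults = [d for d in defects if d.source == "static-analysis"]
bugs = [d for d in defects if d.source == "bug-tracker"]

print("fault density:", density_per_kloc(faults, model_kloc))
print("bug density:  ", density_per_kloc(bugs, model_kloc))
print("by phase:     ", breakdown(defects, "phase"))
print("by type:      ", breakdown(defects, "defect_type"))
```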

I believe the more interesting questions about how climate modellers view software quality cannot be answered through their code or bug reports alone. So, I'd also like to interview some of the climate scientists who have built the software. Specifically, I'd like to ask them for the stories behind a selection of defects that they found and fixed during development or after a release. The intention here is that by asking questions about the circumstances of a defect and about the judgements made to find and solve it, I may start to piece together an understanding of their specific notion of software quality. See my blog post that describes this part of the study in more detail, and my blog post that describes some of the questions I can ask about defects.

Thoughts? Comments? Questions?

3 comments:

Carolyn said...

I like the idea of asking the scientists for the context. Are you planning to ask them about what you find in the inspection of the climate model in particular?

gvwilson said...

I think it would be interesting to ask if they've ever had a paper rejected (or rejected a paper) because of concerns over the quality of the software used to produce the results (vs. challenges to the model or science embodied in that software).

jon said...

@carolyn My inspection of the software is "just" a matter of counting defects and running the static analysis to check for "safe coding" practices. I wasn't planning any detailed, or even cursory, look at the code as part of my study (though I'll do it, of course!).

I plan to ask the scientists about specific defects that they themselves have logged.

But yeah, of course, if I notice anything interesting whilst I'm poking around (or from the output of the static analysis tool) I'll ask.

@gvwilson I think we both know the answer to that question! But sure, I'll include that question half-way through the interviews for comic relief.
