Static analysis software for Fortran

Friday, June 26, 2009

I started with the list of Fortran tools on fortran.com. Many of the projects referenced there are now defunct, as far as I can tell. I also came across a 1999 technical report from the Council for the Central Laboratory of the Research Councils (now the Science and Technology Facilities Council) that summarises the tools available at the time and lists their static analysis features. Much of it is still useful.

Note: these are all standalone analysis tools; I didn't look at the analysis that the various Fortran compilers can do themselves. Polyhedron Software has a very detailed-looking comparison chart.

tl;dr: Forcheck, Cleanscape FortranLint.
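
To make the eventual comparison concrete, here is a rough Python sketch of how I might drive one of these tools in batch over a model's source tree and tally its diagnostics. The analyser command and its output format are placeholders, not the real invocation of Forcheck or FortranLint (which I haven't pinned down yet).

    # Rough sketch: run a static analyser over every Fortran file in a source
    # tree and tally how many diagnostics it reports. The command and the
    # format of its output are placeholders -- each tool (Forcheck,
    # FortranLint, ...) has its own invocation and report format.
    import subprocess
    from pathlib import Path

    ANALYZE_CMD = ["fortran-analyzer", "--warn-all"]  # hypothetical command

    def count_diagnostics(source_root: str) -> dict:
        counts = {}
        for path in Path(source_root).rglob("*.f90"):
            result = subprocess.run(
                ANALYZE_CMD + [str(path)],
                capture_output=True, text=True, check=False,
            )
            # Assume one diagnostic per non-empty output line; a real parser
            # would match the tool's actual report format.
            n = sum(1 for line in result.stdout.splitlines() if line.strip())
            counts[str(path)] = n
        return counts

    if __name__ == "__main__":
        per_file = count_diagnostics("model/src")  # placeholder path
        print(f"total diagnostics: {sum(per_file.values())}")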

Currently reading

Friday, June 12, 2009

I haven't posted paper summaries in a while but here's a selection of what I've been reading:

On software quality:
On scientific software:
On empirical studies:

One-page summary of my research plan

Thursday, June 11, 2009

I'm exploring the software quality of climate modelling software: what software quality means for climate modellers, and how we can go about measuring and benchmarking it. Some of the broader questions that motivate me are: How can we determine the software quality of a climate model? How can we compare the quality of one model to another? How does that quality compare to commercial products or to other computational science software? And what do climate modellers themselves mean when they set out to build good quality software?

These are big questions. To start to answer them I am going to do two small things. Firstly, I am going to inspect the climate modelling software itself. I will use fault density (i.e. statically identifiable errors and "misuses" of the programming language) as well as bug density (i.e. reported and fixed defects) to benchmark the software quality of several climate models. This analysis will carefully consider these statistics across the various defect dimensions (for example, pre- and post-release and defect type). See my blog post on counting defects for more details.
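
As a toy illustration of the bookkeeping I have in mind, here is a minimal Python sketch that computes defect densities per thousand lines of code, broken down by release stage and defect type. All the counts and the line total are invented; only the structure of the breakdown matters.

    # Toy illustration of defect-density bookkeeping. All numbers are made up;
    # the point is only the breakdown by dimension (release stage, defect type).
    from collections import Counter

    kloc = 380.0  # model size in thousands of lines of code (hypothetical)

    # Each recorded defect is tagged with the dimensions I care about.
    defects = [
        {"stage": "pre-release",  "type": "logic"},
        {"stage": "pre-release",  "type": "interface"},
        {"stage": "post-release", "type": "logic"},
        {"stage": "post-release", "type": "data"},
    ]

    def density_by(dimension: str) -> dict:
        """Defects per KLOC, grouped by one dimension of the defect record."""
        counts = Counter(d[dimension] for d in defects)
        return {value: n / kloc for value, n in counts.items()}

    print("overall density:", len(defects) / kloc, "defects/KLOC")
    print("by stage:", density_by("stage"))
    print("by type:", density_by("type"))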

I believe the more interesting questions about how climate modellers view software quality cannot be answered through their code or bug reports alone. So I'd also like to interview some of the climate scientists who have built the software. Specifically, I'd like to ask them for the stories behind a selection of defects that they found and fixed during development or after a release. The intention is that by asking questions about the circumstances of a defect and about the judgements made to find and solve it, I may start to piece together an understanding of their specific notion of software quality. See my blog post that describes this part of the study in more detail, and my blog post that describes some of the questions I can ask about defects.

Thoughts? Comments? Questions?

The shape of the playing field

Wednesday, June 10, 2009

Ever since my first pitch of the defect density study I've been trying to work out what the bigger research questions are here. I'm not really content with just collecting the defect density results unless I can see how the results fit into a larger story.

My first instinct was to find out more about defects themselves and to see what other people have done with them in their studies. What is the relationship between defect densities and software quality? How do other people understand software quality and measure it? What can we really learn about software quality from looking at defect densities? And to what use can I put these results once I have them? I'm starting to get a picture of defect densities and their usefulness, and defect density is not nearly as good a tool as I had thought, but it is still worth evaluating.

The title of my talk last week was one possible framing of a much bigger question: why do climate modellers trust the code they write? As in Daniel Hook's presentation at SE-CSE '09, trustworthiness seems like an appropriate way to frame the discussion about software quality when it comes to climate models. Why? Because, coarsely, in computational science pursuits like climate modelling there are not always hard and fast rules to distinguish correct from incorrect results. As Hook says, there are no perfect oracles against which results can be checked (the oracle problem), and even if oracles existed, the approximations and measurement errors inherent in modelling can make it tricky to distinguish any error introduced by faulty code (the tolerance problem).

So, how then do the climate modellers know if they're on the right track when constructing their models? We know they employ a wide suite of sophisticated tests to tease out flaws in the conceptual model (validation) and errors in their implementation of that model (verification). My understanding is that underlying some of the validation work are judgement calls, gut checks, and tacit heuristics used to decide whether a model is doing the right thing. For example, climate modellers might ask of a model output, "is it raining where it ought to be raining?" The answer to this question isn't well-defined, but it can be answered with a lot of background knowledge and familiarity with the climate processes. This is partly the oracle problem at play: the model output is the result of a scientific experiment, not something we could hope to describe completely beforehand. I'm not saying validation is all guesswork -- not even close -- but just that there are unformalisable elements to model validation that I don't think we're used to considering when we discuss traditional software testing. We are used to thinking about software as having more explicit and testable requirements[1].

On the verification side, the tolerance problem means that, even if we set aside the conceptual problems with the model, it is still not straightforward to be certain that the code is correct. Uncertainties in the data, truncation error in approximations, and round-off error in computations can all hide real errors resulting from flaws in the model implementation.
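
To make the tolerance problem concrete, here is a small, self-contained Python toy (my own example, not climate code): a correct and a subtly buggy trapezoidal-rule integration both land inside a tolerance that has to be loose enough to absorb the expected discretisation error, so a check against the known answer cannot tell them apart.

    # Toy illustration of the tolerance problem (not climate code): a coding
    # error whose effect is comparable to the legitimate numerical error can
    # hide inside any tolerance loose enough to accept the correct answer.
    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n intervals."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return h * total

    def trapezoid_buggy(f, a, b, n):
        """Same rule, but an off-by-one drops the last interior point."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n - 1))
        return h * total

    exact = 2.0  # integral of sin(x) over [0, pi]
    n = 100
    tolerance = 5e-3  # loose enough to accommodate the expected truncation error

    for name, rule in [("correct", trapezoid), ("buggy", trapezoid_buggy)]:
        approx = rule(math.sin, 0.0, math.pi, n)
        err = abs(approx - exact)
        print(f"{name:>7}: error = {err:.2e}, passes tolerance: {err < tolerance}")

At n = 100 the correct rule is off by about 1.6e-4 and the buggy one by about 1.2e-3; both sit comfortably inside the 5e-3 acceptance threshold, so the coding error is invisible to this kind of check.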

Asking why climate modellers trust the code they write is one way of trying to understand what climate modellers are doing when they attempt to write good quality code. Given that they have such a radically different notion of requirements and correctness, how is their notion of software quality different? If you can't always write unit tests against the bulk of your work, and you can't always explicitly write down rules for correctness, what then do you mean by good code? I think it's important to start with these questions because the answers inform other questions about the usefulness of defect densities and of quality benchmarking. With a firmer idea of what quality actually is for climate modellers, we can then work out how best to measure or benchmark it.

To summarise, the primary question is:
What does software quality mean for climate modellers?
The software quality folks have come up with an impressive list of attributes of software quality known as the "ilities". Maybe a more specific version of the above question asks which quality attributes are most important to climate modellers.

I think there are two other companion questions that need to be asked:
How do climate modellers judge a piece of code against these quality attributes?

What practices do climate modellers follow to achieve high quality software (in terms of the identified quality attributes)?
If software quality were a game of football, the first question asks about the shape of the field and the rules of the game, the second asks where the goal posts are, and the third asks about the playbook. Ahem.

So, how do I go about answering these questions?

I could ask the climate modellers directly. This assumes that they know the answers explicitly. I'm not sure I could answer the same questions for myself.

I could also look at defects. I've defined a defect before as "something worth fixing". Can we say this means a defect is a part of the software, created or omitted, that indicates a failure to satisfy the important quality attributes? If so, then looking carefully at a defect and its circumstances, and in particular asking the climate modellers questions about reported defects, might provide some of the basis for answering the above three questions. Or at least the basis from which to ask more intelligent questions.

That is, investigating why and when a piece of climate modelling software falls short might be the very place to look for exposed notions of quality, quality goals, and the practices used to manage them.

Would interviewing scientists about defects give the complete story? Certainly not. For at least these reasons:
  1. I'd only be able to consider a sampling of the defects, and only interview a sampling of the modellers involved with them.
  2. As noted in earlier posts, some defects may go unreported. Put another way, the selection of reported defects depends on the type of testing that is done, and not necessarily on the nature of the defect itself. That is, defects are not found if no one goes looking for them.
  3. Refining that point a bit: the defects that are found may only be associated with the subset of the quality attributes that are the least well managed. That is, software may show fewer defects related to quality attributes for which there is a well-functioning process in place. These attributes would not appear to be as well represented and thus may not seem important when, in fact, they are.
[1] I feel pretty strange talking with such authority. Please jump in if you know better.

A framework for counting problems and defects

Monday, June 8, 2009

Last week I came across this technical report from SEI:
Software Quality Measurement: A Framework for Counting Problems and Defects

Abstract. This report presents mechanisms for describing and specifying two software measures–software problems and defects–used to understand and predict software product quality and software process efficacy. We propose a framework that integrates and gives structure to the discovery, reporting, and measurement of software problems and defects found by the primary problem and defect finding activities....
I haven't yet read through the report thoroughly, though the bits I have read seem immensely sensible. This report doesn't attempt anything too grand. It simply lays out clear definitions, and provides a set of questions to ask yourself when going about trying to understand and count problems and defects. Cool.

In fact, I think these questions will also be great to use when following up with climate modellers about specific bugs. Here they are:
  • Identification: What software product or software work product is involved?
  • Finding Activity: What activity discovered the problem or defect?
  • Finding Mode: How was the problem or defect found?
  • Criticality: How critical or severe is the problem or defect?
  • Problem Status: What work needs to be done to dispose of the problem?
  • Problem Type: What is the nature of the problem? If a defect, what kind?
  • Uniqueness: What is the similarity to previous problems or defects?
  • Urgency: What urgency or priority has been assigned?
  • Environment: Where was the problem discovered?
  • Timing: When was the problem reported? When was it discovered? When was it corrected?
  • Originator: Who reported the problem?
  • Defects Found In: What software artifacts caused or contain the defect?
  • Changes Made To: What software artifacts were changed to correct the defect?
  • Related Changes: What are the prerequisite changes?
  • Projected Availability: When are changes expected?
  • Released/Shipped: What configuration level contains the changes?
  • Applied: When was the change made to the baseline configuration?
  • Approved By: Who approved the resolution of the problem?
  • Accepted By: Who accepted the problem resolution?
I might add more why and how questions to this list: Why did the bug go unnoticed? Why was it important to fix this bug, at that time? How was the bug fixed? Why is the fix appropriate?
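
If I do end up transcribing defect reports along these lines, a small structured record would make the later tallying straightforward. Here is a minimal Python sketch of such a record; the field selection is my own subset of the SEI attributes above, and the example values are invented.

    # A minimal record for transcribing defect reports along the SEI dimensions.
    # The fields are a subset of the attributes listed above; the example
    # values are invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DefectRecord:
        identification: str          # software product or work product involved
        finding_activity: str        # activity that discovered the problem
        finding_mode: str            # how it was found
        criticality: str             # how severe it is
        problem_type: str            # nature of the problem / kind of defect
        environment: str             # where it was discovered
        reported: str                # when it was reported
        corrected: Optional[str]     # when it was corrected (None if still open)
        originator: str              # who reported it
        found_in: str                # artifact that caused or contains the defect
        changes_made_to: str         # artifacts changed to correct it

    example = DefectRecord(
        identification="ocean component, v2.1",
        finding_activity="integration test",
        finding_mode="failed regression comparison",
        criticality="moderate",
        problem_type="logic defect",
        environment="nightly build",
        reported="2009-05-14",
        corrected="2009-05-20",
        originator="developer",
        found_in="ocean/mixing.f90",
        changes_made_to="ocean/mixing.f90",
    )
    print(example.problem_type, "found by", example.finding_activity)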

A talk to Hausi Müller's group at UVic

Wednesday, June 3, 2009

I gave a short talk to Hausi Müller's group here at UVic about my research plans. I've posted the slides so you can take a look. Note the slightly provocative title.

Some of the feedback I got from the group:
  • You could try looking at how defect rate changes over time. Use the trends as comparison points, or build a prediction model and then compare parameters of the model. (see ISSTA)
  • Can you really investigate code quality separately from the concerns of model quality?
  • How would you (well, the climate scientists) know a perfectly correct piece of software if they had one?
  • The criticality of a defect has many dimensions.
  • Consider code churn when estimating code quality (high churn with low defects -> a high-quality process?). Even just making the churn data available might help ground the defect density data (a rough sketch of pulling churn from version control follows below).
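On the churn suggestion, here is a rough Python sketch of pulling churn figures out of a git history to put a defect count in context. The `git log --numstat` invocation is real, but the repository path and the defect count are placeholders (and the models in question may well not live in git at all).

    # Rough sketch: total line churn (added + deleted lines) over a period,
    # taken from `git log --numstat`, to put a defect count in context.
    # The repository path and the defect count below are placeholders.
    import subprocess

    def total_churn(repo_path: str, since: str) -> int:
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--numstat", "--since", since,
             "--pretty=format:"],
            capture_output=True, text=True, check=True,
        ).stdout
        churn = 0
        for line in out.splitlines():
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                churn += int(parts[0]) + int(parts[1])  # added + deleted lines
        return churn

    if __name__ == "__main__":
        churn = total_churn("path/to/model/repo", "1 year ago")
        defects_fixed = 42  # placeholder count from the bug tracker
        print(f"churn: {churn} lines, defects per 1k changed lines: "
              f"{1000 * defects_fixed / max(churn, 1):.2f}")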
I actually enjoyed giving this presentation. Partly because it was to a small group and I'm much happier (read: less nervous) in discussions than in presentations (I have been known to get incredibly nervous in front of groups). More than anything, though, it was helpful to put together the slides as a way to articulate what I'm doing. It's the first time I've felt that I have something resembling a cohesive story to tell.