My poster was in the form of nine "slides". Here they are, with a bit of explanation about each of the slides.
**Slide 1.** My study is, as you know, still underway. What I'm presenting here are the method and some preliminary results. I wanted to present at CSER to hear what other people would say about some of my findings so far, and whether anyone would have suggestions for where to go next.
**Slide 2.** As motivation for my study, consider how the computational scientist qua climatologist goes about trying to learn about the climate. In order to test their theories of the climate, they would like to run experiments. Since they cannot run experiments on the climate itself, they instead build a computer simulation of the climate (a climate model) according to their theories and then run their experiments on the model. At every step, approximations and errors are introduced. Moreover, the experiments they run cannot all be replicated in the real world, so there is no "oracle" they can use to check their results against. (I've talked about this before.) All of this might lead you to ask: why do climate modelers trust their models? Or...
**Slide 3.** ...for us as software researchers, we might ask: why do they trust their software? That is, irrespective of the validity of their theories, why do they trust their implementation of those theories in software? The second question should actually read "What does software quality mean to climate modelers*?" As I see it, you can try to answer the trust question by looking at the code or development practices, deciding whether they are satisfactory and, if they are, concluding that the scientists trust their software because they are building it well and it is, in some objective sense, of high quality. Or you can answer the question by asking the scientists themselves why they trust their software -- what plays into their judgment of good-quality software. In this case the emphasis in the question is slightly different: "Why do climate modelers trust their software?" The second, and to some extent the third, research question is aimed here. * Note how I alternate between using "climate scientist" and "climate modeler" to refer to the same group of people.
**Slide 4.** My approach to answering these questions is to do a defect density analysis (I'm not sure why I called it "repository analysis" on my slides; ignore that) of several climate models. Defect density is an intuitive and standard software engineering measure of software quality. The standard way to compute defect density is to count the number of reported defects for a release per thousand lines of code in that release. There are lots of problems with this measure, but one is that it is subject to how good the developers are at finding and reporting bugs. A more objective measure of quality may be static fault density, so I did this type of analysis as well. Finally, I interviewed modelers to gather their stories of finding and fixing bugs as a way to understand their view of, and decision-making around, software quality. There are five different modeling centres participating in various aspects of this study.
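For concreteness, here is a minimal sketch of the calculation. The function name and the numbers are mine, made up purely for illustration; they are not taken from any of the models in the study:

```python
def defect_density(defect_count, lines_of_code):
    """Reported defects per thousand lines of code (KLOC) for a release."""
    return defect_count / (lines_of_code / 1000.0)

# Hypothetical example: a release of 400,000 lines of code
# with 100 defects reported against it.
print(defect_density(100, 400_000))  # => 0.25 defects/KLOC
```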
**Slide 5.** A very general definition of a defect is: anything worth fixing. Deciding what is worth fixing is left up to the people working with the model, so we can be sure we are only counting relevant defects. Many of the modeling centres I've been in contact with use some sort of bug tracking system. That makes counting defects easy enough (the assumption being that if there is a ticket written up about an issue, and the ticket is resolved, then it was worth fixing and we'll call it a defect). Another way to identify defects is to look through the check-ins of the version control repository and decide whether a check-in was a fix for a defect simply by looking at the comment itself. Sure, it's not perfect, but it might be a more reliable measure across modeling centres.
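To give a flavour of what that kind of classification might look like, here is a rough sketch in Python. The keyword list and the ticket-reference format are my own guesses for illustration; they are not the actual heuristics used in the study, and each centre's conventions would need their own patterns:

```python
import re

# Words that suggest a check-in comment describes a defect fix
# (an assumed keyword list, for illustration only).
FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|fault|correct(s|ed)?)\b",
                         re.IGNORECASE)

# An assumed ticket-reference format, e.g. "#842"; real formats vary by centre.
TICKET_PATTERN = re.compile(r"#\d+")

def looks_like_defect_fix(comment):
    """Heuristically decide whether a check-in comment is a defect fix."""
    return bool(FIX_PATTERN.search(comment))

def referenced_tickets(comment):
    """Pull any ticket references out of a check-in comment."""
    return TICKET_PATTERN.findall(comment)

# Example:
msg = "Fix incorrect unit conversion in radiation scheme (closes #842)"
print(looks_like_defect_fix(msg))   # True
print(referenced_tickets(msg))      # ['#842']
```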
**Slide 6.** Presented here is the defect density for an arbitrary version of the model from each of the modeling centres. For perspective, along the x-axis of the chart I've labeled two ranges, "good" and "average", according to Norman Fenton's online book on software metrics. I've included a third bar, the middle one, that shows the defect density when you consider only those check-in comments which can be associated with tickets (i.e. there is a reference in the comment to a ticket marked as a defect). The top, "all defects", bar is the count of check-in comments that look like defect fixes. I have included in the count all of the comments made six months before and after the release date. You can see that this bar is divided into two parts: the left represents the pre-release defects, and the right represents the post-release defects. As yet, the main observation I have is that all of the models have a "low" defect density however you count defects (tickets or check-in comments). It's also apparent that the modeling centres use their ticketing systems to varying degrees, and that they have different habits about referencing tickets in their check-in comments.
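The pre/post split is simple to compute once each defect-fixing check-in has a date attached. A minimal sketch, with an assumed six-month window measured in days and invented dates:

```python
from datetime import datetime, timedelta

def split_pre_post(fix_dates, release_date, window_days=183):
    """Count defect-fixing check-ins in the ~6 months before and after a release."""
    window = timedelta(days=window_days)
    pre = sum(1 for d in fix_dates if release_date - window <= d < release_date)
    post = sum(1 for d in fix_dates if release_date <= d <= release_date + window)
    return pre, post

# Illustrative use with made-up dates:
release = datetime(2010, 6, 1)
fixes = [datetime(2010, 3, 15), datetime(2010, 5, 20), datetime(2010, 8, 2)]
print(split_pre_post(fixes, release))  # (2, 1)
```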
**Slide 7.** I ran the FLINT tool over a single configuration of (currently only) two climate models. The major faults I've found concern implicit type conversion and declaration. There is also a small but notable portion of faults that suggest dead code. Of course, because I'm analysing only a single configuration of the model, I can't be sure that this code is really dead. I've inspected the code where some of these faults occur and I've found instances both of dead code and of code that isn't really dead in other configurations. One example of dead code I found came from a module that had a collection of functions to perform analysis on different array types. The analysis was similar for each function, with a few changes to handle the particulars of each array type. The dead code found in this module consisted of variables that were declared and set but never referenced. My guess, from looking at the regularities in the code, is that because the functions were so similar, the developers just wrote one function and then copied it several times, tweaking it for each array type. In the process they forgot to remove code that didn't apply.
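A contrived illustration of that copy-and-tweak pattern (in Python rather than the models' Fortran, and not taken from any real model): the second function was copied from the first and adapted, but the scaling variable it no longer needs was left behind, set but never referenced:

```python
def mean_1d(values):
    scale = 1.0 / len(values)      # used below
    return sum(values) * scale

def column_means_2d(grid):
    scale = 1.0 / len(grid)        # leftover from the copied 1-D version: never used
    return [sum(col) / len(col) for col in zip(*grid)]

print(mean_1d([1.0, 2.0, 3.0]))                    # 2.0
print(column_means_2d([[1.0, 2.0], [3.0, 4.0]]))   # [2.0, 3.0]
```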
**Slide 8.** Unfortunately, I have as yet only been able to interview a couple of modelers specifically about defects they have found and fixed. I have done a dozen or so interviews with modelers and other computational scientists about how they do their development and about software quality in general. So this part of the study is still a little lightweight, and very preliminary. In any case, in these interviews I ask the modelers to go through a couple of bugs that they've found and fixed, roughly following the questions on the slide. Everyone I've talked to is quite aware that their models have bugs. This they accept as a fact of life. Partly this is a comment on the nature of a theory being an approximation, but they include software bugs here too. Interestingly, they still believe that, depending on the bug, they can extract useful science from the model. One interviewee described how, in the past, when computer time was more costly, if scientists found a bug partway through a six-month model run they might let the run continue and publish the results, but include a note about the bug and an analysis of its effect.
**Slide 9.** The other observation I have is connected to the last statement on the previous slide, as well as to this slide. Once the code has reached a certain level of stability, but before the code is frozen for a release of the model, scientists in the group will begin to run in-depth analyses on it. Both bug fixes and feature additions are code changes that have the potential to change the behaviour of the model, and so invalidate the analysis that has already been done on it. This is why I say that some bugs can be treated as "features" of a sort: just an idiosyncrasy of the model. Similarly, a new feature might be rejected as a "bug" if it's introduced too late in the game. In general, the criticality of a defect is judged in part on when it is found (like any other software project, I suppose). I've identified several factors that I've heard the modelers talk about when they consider how important a defect is. I've roughly categorised these factors into three groups: concerns that depend on the project timeline (momentum), concerns arising from high-level design and funding goals (design/funding), and the more immediate day-to-day concerns of running the model (operational). Very generally, these concerns carry more weight at different stages in the development cycle, which I've tried to represent on the chart. Describing these concerns in detail probably deserves a separate blog post.
2 comments:
So after all of that work, what sort of feedback did you get? Or is that another blog post?
Shhh...I'm trying to build suspense.
Check back for part II tomorrow.