Jon Pipitone

Validity and soundness in scientific software

Wednesday, November 4, 2009

In today's workshop on Software Engineering for Science we spent quite a bit of time discussing the different levels of correctness of scientific software. I was surprised since I had thought some of this was pretty basic stuff. After a bit of reflection I wonder if it isn't because we don't have common terms for these ideas.

To be clear, I'm referring to verification and validation. These activities are summed up by the questions, "Are we building the right thing?" (validation) and "Are we building the thing right?" (verification). Another way of looking at this is that verification is the act of checking that software meets its specifications, whereas validation is checking that software meets its requirements.

This comes up when you talk about scientific software since in many cases the software is supposed to enact a theory or mathematical model. Validation checks that the mathematical model is accurate, whereas verification checks that the software implements the mathematical model accurately.
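To make the distinction concrete, here is a minimal sketch, purely illustrative and entirely of my own invention (the toy model, function names, tolerances, and "observation" are not from any real climate code): verification asks whether the code matches the intended equation, while validation asks whether the equation matches the world.

```python
import math

def simulated_temperature(co2_ppm):
    """A stand-in 'model': a deliberately simple logarithmic temperature response."""
    return 14.0 + 3.0 * math.log(co2_ppm / 280.0, 2)

# Verification: does the code implement the intended equation correctly?
# Check the implementation against a hand-computed value of the formula.
assert abs(simulated_temperature(560) - 17.0) < 1e-9

# Validation: does the model itself say something true about the world?
# Compare against an observation, with a much looser, scientific tolerance.
observed_temperature = 14.5  # made-up "observation" for illustration only
assert abs(simulated_temperature(390) - observed_temperature) < 1.0
```

A verification failure points at the code; a validation failure points at the theory.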

Clearly we have words for "verification" and "validation", though I don't remember these words being used much today, or at all. The fact that they aren't commonly used and that we needed to discuss the distinction between these activities is curious to me.

But more so, whilst we have the words to discuss the activities we don't seem to have adjectives to refer to the software itself. (Do we? Tell me if we do.) I suppose we could use the terms "verified software" and "validated software". "Verified" is overloaded though. I immediately want to ask "by whom?", as if the term refers to software inspected and given a stamp of approval by an outside agency. "Validated software" seems okay though.

Borrowing from formal logic, could we refer to the "soundness" and "validity" of software?

Privilege

This deserves a much more in-depth discussion which I'm not going to go into here. But I wanted to just take a moment to publicly recognise how privileged I feel, and am, in school. Of course it's not just in being at school that I'm privileged: it's the country I live in, the socio-economic class I am part of, the people I know, my ethnicity, and so on. And school is a whole other level of privilege.

Today leaving the CASCON conference with two of my colleagues I thought again about how damned lucky I am to be a student here. This is truly a luxurious life. I spent today sitting around a table in a warm room talking with other students and professors about whatever the hell interested us at the moment. We talked while we ate our free lunch. (I repeat, we had a free lunch!) After that we went into another room and talked some more. Again, we talked about whatever interested us. At some point we paused to have tea and stretch. Then we returned to talking until we had had enough. A few of us went home together and spent the entire trip discussing ideas for tomorrow. It was a day of ideas.

And that was a day of work. Ah-mazing. When I'm not at a conference I get to spend an entire day at a sunny desk, spending my day as I please, reading, talking to people, making notes to myself, and generally working on projects as I please.

I feel so so lucky and grateful to be here. It's a fullness of feeling which I'm not sure I can explain all that well. The flip side is that I also feel upset at myself for the times when I take this life for granted. I find it easy to do. Take it for granted, I mean. There are times when, to the exclusion of other feelings, I feel worried about my future, or about a deadline, or how my research project might turn out, etc... But, peanuts! I am a king!

I'm not sure why, but I feel compelled to acknowledge and mention this right now. Maybe just as a reminder for myself. But I'd appreciate hearing any thoughts you have on this topic; so use the comments.

CSER poster session

Monday, November 2, 2009

This week I attended the poster session at the CSER gathering. This was a great thing to do for a few reasons. Just creating the poster helped me pull together some of my thoughts and results so far. In the same vein, just having to pitch my study and explain what I've been up to helped to clarify my thoughts or bring up new questions. Then, of course, there's the feedback and criticism I get from the attendees, and the new questions they raise (intentionally or otherwise). It's also just fun and validating to have people listen to what I've been up to and engage in a discussion about it... makes me feel like I'm doing something worth talking about.

My poster was in the form of nine "slides". Here they are, with a bit of explanation about each of the slides.





My study is, as you know, still underway. What I'm presenting here are the method and some preliminary results. I wanted to present at CSER because I wanted to hear what other people would say about some of my findings so far, and whether anyone would have suggestions of where to go next.



As a motivation for my study, consider how the computational scientist qua climatologist goes about trying to learn about the climate. In order to test their theories of the climate, they would like to run experiments. Since they cannot run experiments on the climate they instead build a computer simulation of the climate (a climate model) according to their theories and then run their experiments on the model.

At every step, approximations and errors are introduced. Moreover, the experiments that they run cannot all be replicated in the real world, so there is no "oracle" they can use to check their results against. (I've talked about this before.) All of this might lead you to ask: why do climate modelers trust their models? Or ...


... for us as software researchers, we might ask: why do they trust their software? That is, irrespective of the validity of their theories, why do they trust their implementation of those theories in software?

The second question should actually read "What does software quality mean to climate modelers*?"

As I see it, you can try to answer the trust question by looking at the code or development practices, deciding if they are satisfactory and, if they are, concluding that the scientists trust their software because they are building it well and it is, in some objective sense, of high quality.

Or you can answer this question by asking the scientists themselves why they trust their software -- what plays into their judgment of good quality software. In this case the emphasis in the question is slightly different: "Why do climate modelers trust their software?"

The second research question, and to some extent the third, is aimed at this.

* Note how I alternate between using "climate scientist" and "climate modeler" to reference the same group of people.


My approach to answering these questions is to do a defect density analysis of several climate models (I'm not sure why I called it "repository analysis" on my slides; ignore that). Defect density is an intuitive and standard software engineering measure of software quality.

The standard way to compute defect density is to count the number of reported defects for a release per thousand lines of code in that release. There are lots of problems with this measure, but one is that it is subject to how good the developers are at finding and reporting bugs. A more objective measure of quality may be the static fault density, so I did this type of analysis as well.
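For concreteness, here is the arithmetic as a tiny sketch (the numbers are entirely made up):

```python
def defect_density(defect_count, lines_of_code):
    """Reported defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000.0)

# Hypothetical release: 120 reported defects against 400,000 lines of code.
print(defect_density(120, 400_000))  # 0.3 defects per KLOC
```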

Finally, I interviewed modelers to gather their stories of finding and fixing bugs as a way to understand their view and decision-making around software quality.

There are five different modeling centres participating in various aspects of this study.




A very general definition of a defect is: anything worth fixing. Deciding what is worth fixing is left up to the people working with the model, so we can be sure we are only counting relevant defects.

Many of the modeling centres I've been in contact with use some sort of bug tracking system. That makes counting defects easy enough (the assumption being that if there is a ticket written up about an issue, and the ticket is resolved, then it was worth fixing and we'll call it a defect).

Another way to identify defects is to look through the check-ins of the version control repository and decide if the check-in was a fix for a defect simply by looking at the comment itself. Sure, it's not perfect, but it might be a more reliable measure across modeling centres.
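As a rough illustration of what that classification might look like, here is a minimal sketch; the keywords and the ticket-reference pattern are my own guesses, not what any of the modeling centres actually uses:

```python
import re

# Crude heuristic: treat a check-in as a defect fix if its comment mentions
# a fix-like word or references a ticket number (e.g. "#482").
FIX_WORDS = re.compile(r"\b(fix(es|ed)?|bug|defect|fault|error)\b", re.IGNORECASE)
TICKET_REF = re.compile(r"#\d+")

def looks_like_defect_fix(comment):
    return bool(FIX_WORDS.search(comment) or TICKET_REF.search(comment))

comments = [
    "Fix array bounds error in the ocean coupler",
    "Add new aerosol scheme",
    "Closes #482: wrong units in radiation output",
]
print(sum(looks_like_defect_fix(c) for c in comments))  # 2 of the 3 look like fixes
```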


Presented here is the defect density for an arbitrary version of the model from each of the modeling centres. For perspective, along the x-axis of the chart I've labeled two ranges "good" and "average" according to Norman Fenton's online book on software metrics. I've included a third bar, the middle one, that shows the defect density when you consider only those check-in comments which can be associated with tickets (i.e. there is a reference in the comment to a ticket marked as a defect).

The top, "all defects", bar is the count of check-in comments that look like defect fixes. I have included in the count all of the comments made in the six months before and after the release date. You can see that the bar is divided into two parts: the left represents the pre-release defects, and the right represents the post-release defects.

As yet, the main observation I have is that all of the models have a "low" defect density however you count defects (tickets, or check-in comments).

It's also apparent that the modeling centres use their ticketing systems to varying degrees, and that they have different habits about referencing tickets in their check-in comments.





I ran the FLINT tool over a single configuration of each of (currently only) two climate models. The major faults I've found concern implicit type conversion and declaration. As well, a small but notable portion of the faults suggest dead code. Of course, because I'm analysing only a single configuration of the model, I can't be sure that this code is really dead. I've inspected the code where some of these faults occur and I've found instances of both genuinely dead code and of code that is live in other configurations.

One example of dead code I found came from a module that had a collection of functions to perform analysis on different array types. The analysis was similar for each function, with a few changes to handle the particularities of each array type. The dead code found in this module was variables that were declared and set but never referenced. My guess, from looking at the regularities in the code, is that because the functions were so similar the developers just wrote one function, copied it several times, and tweaked each copy for its array type. In the process they forgot to remove code that didn't apply.
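The models themselves are Fortran, but the pattern is easy to mimic; here is a toy Python analogue (entirely made up, not taken from any model) of the kind of copy-and-tweak slip I'm describing:

```python
def analyse_surface_field(field):
    mean = sum(field) / len(field)
    spread = max(field) - min(field)  # used below
    return mean, spread

def analyse_level_field(field):
    # Copied from analyse_surface_field and tweaked to return only the mean;
    # 'spread' is still computed but never referenced -- the kind of
    # set-but-unused variable a static analyser flags as (possibly) dead code.
    mean = sum(field) / len(field)
    spread = max(field) - min(field)
    return mean
```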


Unfortunately, I have as yet only been able to interview a couple of modellers specifically about defects they have found and fixed. I have done a dozen or so interviews with modelers and other computational scientists to talk about how they do their development and about software quality in general. So this part of the study is still a little lightweight, and very preliminary.

In any case, in the interviews I have done I ask the modelers to go through a couple of bugs that they've found and fixed, roughly following the questions on the slide.

Everyone I've talked to is quite aware that their models have bugs. This they accept as a fact of life. Partly this is a comment on the nature of a theory being an approximation, but they include software bugs here too. Interestingly, they still believe that, depending on the bug, they can extract useful science from the model. One interviewee described how, in the past, when computer time was more costly, if scientists found a bug partway through a six-month model run they might let the run continue and publish the results, but include a note about the bug they found and an analysis of its effect.



The other observation I have is connected to the last statement on the previous slide, as well as to this slide.

Once the code has reached a certain level of stability, but before the code is frozen for a release of the model, scientists in the group will begin to run in-depth analyses on it. Both bug fixes and feature additions are code changes that have the potential to change the behaviour of the model, and so invalidate the analysis that has already been done on it. This is why I say that some bugs can be treated as "features" of a sort: just an idiosyncrasy of the model. Similarly, a new feature might be rejected as a "bug" if it's introduced too late in the game.

In general, the criticality of a defect is judged in part by when it is found (as in any other software project, I suppose). I've identified several factors that I've heard the modellers talk about when they consider how important a defect is. I've roughly categorised these factors into three groups: concerns that depend on the project timeline (momentum), concerns arising from high-level design and funding goals (design/funding), and the more immediate day-to-day concerns of running the model (operational). Very generally, these concerns carry more weight at different stages in the development cycle, which I've tried to represent on the chart.

Describing these concerns in detail probably involves a separate blog post.

Morning discussion for the WSRCC

Monday, October 26, 2009

This morning Jorge and I attempted to attend the Workshop on Software Research on Climate Change via a Skype phone call. But Skype wasn't cooperating, so we held our own mini-workshop instead. The purpose of the workshop is to respond to the challenge, "how can we apply our research strengths to make significant contributions to the problems of mitigation and adaptation of climate change?" But we interpreted the question as, "What can software researchers do to make significant contributions.... ?" As a result, we considered some alternatives that are probably out of scope for the workshop.
  • Drop out of research. We recognise that climate change is an urgent problem and that many scientific research projects have very indirect, uncertain, and long-term payoffs. For the most part, the problem of climate change is fairly well analysed and many solutions are known, but they need political organisation to carry them out. Perhaps what is really needed is for more people to "roll up their sleeves" and join a movement or organisation that's fighting for those solutions.
  • Engage in action research/participatory research. If you decide to stay in research, then we propose that you ground your studies by working on problems that you can be sure real stakeholders have. In particular, we suggest that you start with a stakeholder who is directly involved in solving the problem (e.g. activists, scientists, journalists, politicians) and work with them throughout your study. At the most basic level, they act as a reality-check for your ideas, but we think the best way to make this relationship work is through action research: joining their organisation to solve their problems, becoming directly involved in the solutions yourself. Finding publishable results is an added bonus, secondary to the pressing need.
  • Elicit the requirements of real-world stakeholders. As you can see from the last point, we're concerned that as software researchers we lack a good understanding of the problems holding us (society) back from dealing with climate change effectively. So, we suggest a specific research project that surveys all the actors to figure out their needs and the places where software research can contribute. This project would involve interviewing activists, scientists, journalists, politicians, and citizens to build a research roadmap.
  • Green metrics: dealing with accountability in a carbon market.  This idea is more vague, but simply a pointer to an area where we think software research may have some applicability. Assuming there is a compliance requirement for greenhouse gas pollution (e.g. a cap and trade system), then we will need to be able to accurately measure carbon emissions on all levels: from industry to homes.  
  • Software for emergencies. Like the last point, this one is rather vague. The idea is this: in doomsday future scenarios of climate change, the world is not a peaceful place, and potentially more decision-making is done by people in emergency situations. This context shift might change the rules for interface design: where, say, in peacetime a user might be unwilling to double-click on a link, or might be willing to spend time browsing menus, in a disaster scenario their preferences may change. So, how exactly do a user's preferences change in an emergency, and how might we design software to adjust to them?
  • Make video-conferencing actually easy. This was our experience all through the day. If we ever want to maintain our personal connections without travelling, we need to solve this problem. You'd think that we had already solved it, as we have the basic technology in place. We have Skype; it is just too flaky to rely on for important gatherings. Or maybe hotels and conference centres can't deal with the bandwidth demands. Or maybe conference organisers don't make remote attendance a priority.

    Even getting past the basic technological obstacles may not be enough for rich conference participation. Simply having video and audio feeds doesn't compare to face-to-face conversation. Maybe it never will, but surely we can do better?

Position papers from the 1st Intl. Workshop on Software Research and Climate Change

Sunday, October 25, 2009

Tomorrow the First International Workshop on Software Research and Climate Change is being held as part of the Onward! 2009 conference in Florida. Jorge and I are going to attempt to attend the workshop remotely, so wish us luck. I'll be blogging about the experience tomorrow.

To begin, and as a refresher, I thought I'd post a single sentence summary of each of the position papers submitted for this workshop. Position papers were solicited from participants and were to respond to the challenge stated on the opening page of the workshop. In summary, the challenge is: how do we apply our expertise in software research to save our butts from certain destruction due to climate collapse. Or, as Steve puts it, "how can we apply our research strengths to make significant contributions to the problems of mitigation and adaptation of climate change."

In answer to that challenge, the position papers suggest software research should...

"Data Centres vs. Community Clouds", Gerard Briscoe and Ruzanna Chitchya

... tackle the energy inefficiency of cloud computing by investigating decentralised models where consumer machines also become providers and coordinators of computing resources.

"Optimizing Energy Consumption in Software Intensive systems", Arjan de Roo, Hasan Sozer and Mehmet Aksit

... provide the tools and design patterns for building software systems that meet both their energy-consumption requirements and their functional design requirements.

"Modeling for Intermodal Freight Transportation Policy Analysis", J. Scott Hawker

... improve three aspects of decision-making tools (like, say, an intermodal freight transportation policy analysis model): make them easier to use and interact with (HCI-wise); deal with the complexity of the models and the trouble of integrating various existing implementations; and (my favourite) make sure the software is built well, since most of the folks doing the building are not trained as software developers.

"Computing Education with a Cause", Lisa Jamba

... investigate how to involve computer science students in research "toward improving health outcomes related to climate change" as part of the university curriculum.

"Some Thoughts on Climate Change and Software Engineering Research", Lin Liu, He Zhang, and Sheikh Iqbal Ahamed

... investigate how to navigate and integrate knowledge from many different disciplines and perspectives so as to help people communicate and work together; build decision-support, analysis and educational tools for people, companies, and government; build tools for incorporating environmental non-functional requirements into software construction.

"Refactoring Infrastructure: Reducing emissions and energy one step at a time", Chris Parnin and Carsten Görg.

... use insights from software refactoring to develop refactoring techniques for physical infrastructure (energy grid, water supply, etc.).

"In search for green metrics", Juha Taina and Pietu Pohjalainen

... establish a "framework for estimating or measuring the effects of a software systems' effect on climate change."

"Enabling Climate Scientists to Access Observational Data", David Woollard, Chris Mattmann, Amy Braverman, Rob Raskin, and Dan Crichton

... build systems to help climate scientists locate, transfer, and transform observational data from disparate sources.

"Context-aware Resource Sharing for People-centric Sensing", Jorge Vallejos, Matthias Stevens, Ellie D’Hondt, Nicolas Maisonneuve, Wolfgang De Meuter, Theo D’Hondt, and Luc Steels.

... investigate how to use our everyday hand-held devices as sensors to provide fine-grained environmental data.

"Language and Library Support for Climate Data Applications", Eric Van Wyk, Vipin Kumar, Michael Steinbach, Shyam Boriah, and Alok Choudhary

... build language extensions and libraries to make climate data analysis easier and more computationally efficient.

Modeling the solutions to climate change

Tuesday, October 20, 2009

For the past couple of weeks a few of us in the software engineering group have been meeting to take up Steve's modeling challenge: we are attempting to model (visually, not computationally) the proposed solutions from several popular books, in such a way that it's possible (easy?) to compare the differences and similarities between them. Here is the homepage* for the project, which roughly tracks what we're up to. I'm going to summarise our progress so far.

To start off, we narrowed our focus down to just comparing the books by their take on wind power solutions. We began with David MacKay's excellent book, Sustainable Energy -- without the hot air.

In our first few meetings we decided to just "shoot first and ask questions later". That is to say, we collaboratively built up a model of the chapters on wind power as we saw fit in the moment, without following any visual syntax and without worrying too much about what to include or what to ignore. The result looked like this:


At the bottom of that picture is our brainstorming about what other aspects to include (the left-hand column), the types of perspectives/analysis that MacKay uses and that may be useful to include in a future exercise (middle column), and the types of differences we expect to see when comparing models (right column).

The next step would have been to come up with the same sort of model for another book, and then start to figure out how best to make the models comparable so that it is visually easy to see the differences and similarities between the various models.

We didn't do that. Instead, we decided to try making a more principled model. Actually, a set of models. We decided to construct an entity-relationship (ER) model and a goal model (i*) for two books and then see how to go about making those models comparable.

We began with the entity-relationship model, again for MacKay's book. MacKay's book is fairly well segmented into chapters that have back-of-the-envelope-style analysis and others that have a broader discussion of the actors and issues. In our first attempt, shown above, we mainly modeled just the two chapters of wind-power analysis. But if we stuck to those chapters for the ER and goal models we'd be left with very impoverished models that miss all of the important contextual bits that frame the wind-power discussion. So we relaxed our wind-power focus slightly to include parts of the book that discuss the context. In the case of MacKay's book, chapter one covers this nicely.

After our first few meetings we've completed the ER domain model, as well as made a good start on the goal model.

For the wider context (chapter one), we built the following ER model:


This model is a bit of a monster, but I'm told that most models are like that. In addition to the standard UML relationship syntax, we have coloured the nodes to show whether a concept comes from the book directly (blue) or whether we included it because we felt it was implied or simply helpful for clarity (yellow).

Using the same process we created the following ER model for just the two chapters on wind:

As well, we've begun to go back over the first chapter and build up an i* goal model. Here it is so far:


Stay tuned for further updates on what we're up to. I'd suggest that at the moment these models should simply be taken as our first hack. We haven't done any work whatsoever to make them very readable or comparable, for instance.

* I feel like "homepage" is a rather outdated word now. Is that so?

Geoscientific Model Development

Sunday, October 18, 2009

I had a wonderful chat last week with Stephen Griffies from GFDL. It was a fascinating interview that I'll have to blog about over several posts because we just covered so much territory.

One especially interesting pointer Stephen gave me was to a new journal from the European Geosciences Union titled Geoscientific Model Development. This journal accepts articles about the nuts and bolts of building modelling software. It is apparently the only journal of its kind; most of the other journals that climate scientists publish in will only accept papers on the "science" derived from the use of such models.

For those of us interested in how climate models are developed, this journal will likely be very relevant. What I find particularly cool is the transparent peer-review process and open discussion. This means that for a particular article (say, this one on coupling software for earth-system modelling), you can read the paper and the current referee reviews, with the option to submit your own comments.

One issue with the journal Stephen mentioned is that it is currently not listed in any of the major scientific citation indices. Effectively this means that scientists do not get workplace "cred" for publishing in this journal. Thus, there is little motivation to publish even though, as Stephen put it, having a peer-reviewed publication to "rationalise" code and design decisions is essential to ensuring the scientific integrity of the models.