What I didn't do was give any reasons that are specific to scientific software. For that I'll first refer you to the slides from Daniel Hook's presentation at the SECSE '09 workshop. Daniel[1] described scientific software development as a series of model refinements: from measurements to theory to algorithms to source code to machine code. Each refinement introduces different kinds of acknowledged errors due to simplifying assumptions, truncation and round-off errors, and so on. There are also unacknowledged errors that come from concrete or conceptual mistakes. Validation and verification activities attempt to weed out these errors, but the testing process is frustrated by two problems unique(?) to scientific software:
- The Oracle Problem: "In general, scientific software testers only have access to approximate and/or limited oracles". As Neil points out in a comment, the output of scientific software is often what you're looking for -- the results of an experiment. If you knew exactly what you ought to get, you would not be doing science.
- The Tolerance Problem: "If an output exhibits acknowledged error then a tester cannot conclusively determine if that output is free from unacknowledged error: i.e., it can never be said that the output is 'correct.' This complicates output evaluation and means that many test techniques cannot be (naively) applied."
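To make the Tolerance Problem concrete, here's a minimal sketch in Python. The solver (`trapezoid`) and the tolerance value are my own hypothetical example, not from Daniel's slides: its output carries an acknowledged truncation error, so the best a test can do is assert closeness to an approximate oracle within a tolerance wide enough to cover that error.

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids.
    The truncation error here is an *acknowledged* error: it shrinks
    as n grows, so no exact-equality test can ever pass."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Approximate oracle: the analytic value of the integral of sin on [0, pi].
expected = 2.0
computed = trapezoid(math.sin, 0.0, math.pi, 1000)

# The test can only assert closeness within a tolerance chosen to cover
# the acknowledged truncation error; an unacknowledged bug whose effect
# is smaller than the tolerance slips through undetected -- which is
# exactly the Tolerance Problem.
assert math.isclose(computed, expected, rel_tol=1e-5)
```

Note that the `rel_tol` value is a judgment call: too tight and the test fails on acknowledged error, too loose and it masks real mistakes.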
[1] Strange: now that I've met Daniel, I find it more comfortable to use his first name rather than referring to him as "Hook 09". ;-) Maybe it's also because I'm referring to a presentation.