Sensitivity Ranges for Summarizing Simulation Output

Last Updated: April 8, 2016

Challenge: Simulation models have two benefits: they force us to formulate very explicitly what we think we know about a system, and they allow us to subject this abstraction to a battery of tests (a.k.a. sensitivity analyses) without doing any damage in the real world. However, the big challenge is the next step: how can we bring an avalanche of output from extensive model testing into a collaborative planning process?

Solution: Start with high-level comparisons, then drill into the details where relevant. For example, graphs of outcome ranges can be used to compare large numbers of scenarios; the group can then move to more in-depth discussion of those strategies or assumptions flagged as having the widest range of outcomes (i.e. where results are most sensitive). This sets the stage for developing a shortlist of candidate options for close scrutiny (e.g. trade-off analysis, fault-tree analysis). By showing multiple outcome ranges on a single graph, we can even spot some of the higher-order interactions between model components. For example, the effect of a management action (e.g. changing harvest level) might differ more between alternative biological assumptions than between the levels of the action itself (e.g. 30% vs. 50% harvest). The presentation below illustrates a few of the potential patterns.

[Thumbnail: Sensitivity Ranges presentation]
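As a minimal sketch of the idea, the snippet below computes the outcome range for each scenario across a set of alternative assumptions and sorts the scenarios so the most sensitive ones surface first. The scenario names, values, and the `outcome_ranges` helper are all hypothetical, invented purely for illustration; they are not from the Fraser River sockeye analysis.

```python
# Hypothetical example: flag scenarios whose outcomes are most
# sensitive to uncertain model assumptions.
# Keys = management scenarios; values = one simulated outcome per
# alternative assumption set (e.g. an abundance index).
outcomes = {
    "harvest_30pct": [820, 640, 910, 700],
    "harvest_50pct": [610, 330, 880, 450],
    "no_harvest":    [950, 900, 990, 930],
}

def outcome_ranges(results):
    """Return (scenario, low, high, width) tuples, widest range first."""
    rows = []
    for scenario, values in results.items():
        low, high = min(values), max(values)
        rows.append((scenario, low, high, high - low))
    # Widest range = most sensitive = top of the discussion list.
    return sorted(rows, key=lambda r: r[3], reverse=True)

for scenario, low, high, width in outcome_ranges(outcomes):
    print(f"{scenario:>14}: {low}-{high} (range {width})")
```

In a real workshop setting, each range would be drawn as a bar or whisker on a shared graph; the sorted table here is just the underlying bookkeeping that decides which scenarios earn the in-depth discussion.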

Also check out: Simulation trajectory diagnostics

Developed for: Research paper on Fraser River sockeye salmon (e.g. Figure 49)