Seminar: Steven Bankes

Robust Inference with Computational Science
and Long Term Policy Analysis

Steven Bankes
Pardee RAND Graduate School,
CTO, Evolving Logic Inc.

The world faces profound social, economic, environmental, and
technological transitions.  How we choose to meet our challenges --
stemming global terror, halting the spread of AIDS
and other infectious diseases, achieving sustainable development,
managing new genetic technologies, etc. -- will resonate throughout
the 21st century.  So it is important to think about the long term.
But even when we value the long term, it can be hard to translate
those concerns into action.  The inability to devise objective, actionable
plans for the long term often leaves goals relating to the future
unvoiced because they cannot be connected to credible near-term
actions.

Computer modeling can be very important in dealing with the complexity
of major societal problems, and innovations such as Agent-Based
Modeling provide a basis for capturing in computers much more of what
is known.  But no model, regardless of its quality, can be expected
to predict long term outcomes.  In order for computer modeling to be
rigorously applied to these and similar problems, methods are needed
to derive reliable inference from the knowledge embodied by models,
without requiring predictive accuracy.  This talk will describe one
framework for doing so, and its application to a variety of long term
policy problems.

These methods harness computation not to solve the intractable problem
of predicting the long-term future, but instead to enable a
fundamentally different, more sensible question: Given what we know
today, how should we act to best shape the future to our liking?  We
can use computers to create and consider myriad plausible futures,
likely to include at least one similar to what may actually unfold.
We can then discover near-term actions that perform well, compared to
the alternatives, over all these futures, often through clever hedging
actions and adaptation to updated information.  Finally, the computer
can be used to seek plausible futures that "break" a chosen strategy.
After repeated iterations to shore up revealed weaknesses, the
resulting strategy can support a consensus for successful action.  In
the end, the process yields near-term strategies not merely optimized
for some "best guess" scenario but rather robust across a multitude
of scenarios.
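
As a concrete illustration of this loop, the sketch below is a toy
Python version of such a scenario-ensemble analysis.  Everything in it
is hypothetical: the sampled uncertainties, the two candidate
strategies, the stand-in outcome model, and the use of worst-case
regret as the robustness measure are placeholders chosen for
illustration, not the speaker's actual framework or software.

    import random

    # Toy setup (all names and numbers are hypothetical): a "future" is a
    # dict of uncertain parameters, a "strategy" is a candidate near-term
    # action, and outcome() is a stand-in for a simulation model.

    def sample_future(rng):
        """Draw one plausible future by sampling the uncertain inputs."""
        return {"growth": rng.uniform(-0.02, 0.04),  # long-run growth rate
                "shock": rng.uniform(0.0, 1.0)}      # severity of a disruption

    def outcome(strategy, future):
        """Toy model: score a strategy in a given future (higher is better)."""
        if strategy == "optimize_for_growth":
            return 100 * future["growth"] - 50 * future["shock"]
        if strategy == "hedge_and_adapt":
            return 60 * future["growth"] - 10 * future["shock"]
        return 0.0

    def max_regret(strategy, futures, strategies):
        """Worst-case regret: how far the strategy falls short of the best
        available alternative, over all sampled futures."""
        worst = 0.0
        for f in futures:
            best = max(outcome(s, f) for s in strategies)
            worst = max(worst, best - outcome(strategy, f))
        return worst

    def breaking_futures(strategy, futures, strategies, top_k=5):
        """Find the sampled futures in which the strategy does worst relative
        to the best alternative -- the cases that 'break' it."""
        regrets = [(max(outcome(s, f) for s in strategies) - outcome(strategy, f), f)
                   for f in futures]
        return sorted(regrets, key=lambda r: -r[0])[:top_k]

    rng = random.Random(0)
    strategies = ["optimize_for_growth", "hedge_and_adapt"]
    futures = [sample_future(rng) for _ in range(10_000)]  # myriad plausible futures

    # Prefer the strategy with the lowest worst-case regret across the
    # ensemble, rather than the one optimized for a single best-guess future.
    robust = min(strategies, key=lambda s: max_regret(s, futures, strategies))
    print("robust choice:", robust)
    print("futures that stress it most:", breaking_futures(robust, futures, strategies, 3))

The point the sketch tries to capture is that the computation is spent
comparing strategies across an ensemble of futures and hunting for the
futures that stress the preferred strategy, rather than trying to
predict which single future will occur.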

The result is a powerful enhancement to the human capacity to reason
in the face of enormous uncertainty.  This approach combines some of
the best features of the qualitative scenario-building and
quantitative decisionmaking tools developed and applied for more than
five decades.  These new tools may help address a paradox of
decisionmaking: our greatest potential influence for shaping the
future may often be precisely over those time scales where our gaze is
most dim.  Further, they provide an avenue for escaping the fruitless
arguments that routinely arise among stakeholders over which future is
the one for which we must prepare.


References

Bankes, S. (1993). "Exploratory Modeling for Policy Analysis,"
Operations Research, vol. 41, no. 3, pp. 435-449.

Bankes, S. (2002). "Tools and Techniques for Developing Policies for
Complex and Uncertain Systems," Proceedings of the National Academy of
Sciences, vol. 99, pp. 7263-7266.

Popper, S. W., Lempert, R. J., and Bankes, S. C. (2005). "Shaping the
Future," Scientific American, vol. 292, no. 4, April.

Lempert, R. J., Popper, S. W., and Bankes, S. C. (2002). "Confronting
Surprise," Social Science Computer Review, 20(4): 420-440.
 

Location

University of Michigan
Thursday, March 30, 2006, 4 pm
335 West Hall

