Proper estimation of the expected NPV of a project normally requires the use of simulation techniques. Simulation is the only simple and generally applicable procedure for overcoming the limitations of sensitivity analysis, calculating the expected NPV, and analyzing risk. Simulation usually requires more information than sensitivity analysis, but the results in terms of improved project design are worth the effort. Proper estimation of the expected NPV requires three steps:
• Specifying the probability distributions of the important uncertain components
• Specifying the correlations between those components
• Combining this information to generate the expected NPV and the underlying probability distribution of project outcomes
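The first two steps can be illustrated with correlated draws from an assumed joint distribution. Everything in the sketch below is hypothetical: two uncertain components (a unit price and a unit cost) are modeled as normal, with an assumed correlation of 0.6 between them.

```python
import numpy as np

rng = np.random.default_rng(0)

means = np.array([10.0, 6.0])      # assumed means: unit price, unit cost
stds = np.array([2.0, 1.0])        # assumed standard deviations
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])      # assumed correlation between the two

# Build the covariance matrix from the marginal std devs and the
# correlation matrix, then draw correlated samples of both components.
cov = np.outer(stds, stds) * corr
draws = rng.multivariate_normal(means, cov, size=100_000)

price, cost = draws[:, 0], draws[:, 1]
```

Each row of `draws` is one joint realization of the two components, ready to be fed into an NPV calculation; ignoring the correlation and sampling the two variables independently would understate (or overstate) the spread of project outcomes.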
Generating the underlying distribution and calculating
the expected NPV through mathematical analysis is generally impossible. The
analyst must rely on computer-generated simulations. Using the specified
probability distributions of the uncertain project components, the computer simulates
as many outcomes as the analyst wishes. In Monte Carlo simulation, the computer
acts as if we were implementing the same project hundreds or thousands of times
under the specified conditions. Because we assume that some of the project
variables are uncertain, the simulated results are different each time.
Sometimes the resulting NPV may be negative, sometimes it may be highly
positive. The computer pools the results to obtain an estimate of the average result
and of its probability distribution. From the simulations, the computer generates
a probability distribution for the NPV, including the probability that the
project is a failure and the expected NPV. Software for performing these analyses is readily available. Although the techniques are as easy to apply as estimating the NPV or IRR of a project, they do require additional information and expert judgment concerning the probability distributions of the critical project components.
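The pooling procedure described above can be sketched in a few lines. All project figures below (initial outlay, benefit distribution, discount rate, horizon) are illustrative assumptions, not taken from any actual project; for simplicity the annual net benefits are drawn independently.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 50_000          # number of simulated "implementations" of the project
years = 10               # assumed project horizon
discount_rate = 0.10     # assumed discount rate
investment = 1_000.0     # assumed initial outlay

# Assumed uncertain component: each year's net benefit is drawn from a
# normal distribution with mean 180 and standard deviation 60.
annual_benefit = rng.normal(180.0, 60.0, size=(n_sims, years))

# Discount each year's benefit and sum across years to get one NPV
# per simulated project.
t = np.arange(1, years + 1)
discount_factors = 1.0 / (1.0 + discount_rate) ** t
npv = annual_benefit @ discount_factors - investment

# Pool the simulated outcomes: the average gives the expected NPV, and the
# share of negative outcomes estimates the probability of project failure.
expected_npv = npv.mean()
prob_failure = (npv < 0).mean()
print(f"Expected NPV: {expected_npv:.1f}")
print(f"P(NPV < 0): {prob_failure:.3f}")
```

The full array `npv` is the simulated probability distribution of outcomes, so any other risk measure (percentiles, standard deviation, a histogram) can be read off it directly.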
Assigning Probability Distributions to Project Components
Assigning probability distributions to project component
variables and specifying correlations is the most difficult step. Analysts
should base economic analysis on a realistic assessment of costs and benefits,
which in turn requires that the estimates of all relevant variables draw on
experience in the sector and the country. Quantity forecasts should be based on
clearly identified market factors and on experience-based behavioral,
technical, financial, institutional, and environmental assumptions.
Analysts can quantify judgment and experience at several
levels of sophistication, but even a rather simplified approach is useful in
project design. We do not usually need to consider a large number of variables.
Sensitivity analysis can help identify the variables for
which probability distributions should be most carefully specified. If, for
example, sensitivity analysis shows that the influence of a particular variable
is relatively minor, we can treat that variable as if it were certain without
introducing large errors. The specification of the probability
distribution for a selected variable need not rest solely on hard data. In
some cases a large sample of past observations is available that permits
fitting an assumed probability distribution; in others, the analyst must rely
on evidence of a more qualitative and subjective nature. The judgments
of experienced engineers, financial analysts, and others involved in the
project can be valuable in this context.
Finally, if the distributions are unknown, project
analysts can also make simplifying assumptions about the probability
distribution of variables. One of the simplest and most popular distributions
used in empirical risk analysis is the triangular distribution. Three
parameters completely describe this distribution: the most likely value (the
mode), the lowest possible value, and the highest possible value. The expected
value of a triangular distribution is one-third of the sum of the three
parameters.
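This property is easy to verify. The low, mode, and high values below are hypothetical, chosen only to show that the closed-form mean, one-third of the sum of the three parameters, matches the mean of a large sample.

```python
import numpy as np

# Hypothetical triangular distribution for an uncertain unit cost.
low, mode, high = 8.0, 10.0, 15.0

# Closed-form expected value: one-third of the sum of the three parameters.
expected = (low + mode + high) / 3.0    # = 11.0

# Cross-check by sampling; NumPy's argument order is (left, mode, right).
rng = np.random.default_rng(1)
samples = rng.triangular(low, mode, high, size=200_000)
print(f"Closed form: {expected:.3f}, sample mean: {samples.mean():.3f}")
```

Note that the triangular distribution is usually asymmetric, so the expected value generally differs from the most likely value (here 11.0 versus a mode of 10.0).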