

What is an acceptable schedule risk?

21 replies
Colin Cropley
User offline. Last seen 5 years 47 weeks ago. Offline
Joined: 1 Feb 2002
Posts: 57
Groups: None
A primary purpose of running a schedule risk simulation is to enable the project management team (PMT) to make more informed decisions based on the probability distribution of the project's forecast end date (and cost, if applicable), as well as the forecast dates of other activities and milestones in the schedule, as required.

The question I want to raise for discussion is: Assuming the simulation is based on a realistic model, what probability level should the PMT use for realistic planning and decision making?

An immediate response is likely to be: The probability chosen depends on the purpose to which the PMT wishes to apply the answer.

For example, if the purpose is setting a target date for completion of the project that is attainable but requires a concerted effort by everybody in the project team, perhaps a probability of 50-60% is appropriate.

But if the purpose is to secure project approval and funding, including a management reserve allocation to cover the possibility of things going wrong, a probability of 90-95% may be more appropriate.
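To make the P50 versus P90 distinction concrete, here is a minimal Python sketch (illustrative only, not how Pertmaster or any other tool implements it) that runs a toy simulation over a three-activity chain with triangular duration distributions and reads off the two confidence levels; all the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng()

# Hypothetical chain of activities: (optimistic, most likely, pessimistic) days
activities = [(20, 30, 50), (40, 55, 90), (10, 15, 30)]

N = 5000  # iterations
totals = np.zeros(N)
for lo, ml, hi in activities:
    totals += rng.triangular(lo, ml, hi, size=N)  # sample each activity

p50, p90 = np.percentile(totals, [50, 90])
print(f"P50 finish: {p50:.0f} days  (stretch target for the team)")
print(f"P90 finish: {p90:.0f} days  (approval/funding basis with reserve)")
```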
What do you think? Does your organisation have policies on this?

Replies

Tomasz Wiatr
User offline. Last seen 2 years 8 weeks ago. Offline
Joined: 22 Dec 2004
Posts: 35
Groups: None
Hello,

I see a big, fundamental expert discussion :-) about the suitable number of replications has grown out of Alan's short question :-)

In my opinion, if we have thousands of tasks we must replicate the model thousands of times :-) but seriously :-( we must evaluate this number. A simple (practical) method is to run several simulations with an increasing number of replications, keeping the other parameters constant. When the resulting PDF/CDF stops improving (is no smoother than before), we have a sufficient number of iterations! Of course the convergence option (in Pertmaster) is valuable here. In my opinion, chasing improvements finer than the 2% indicator is not effective, especially since this "chase" towards 0% is infinite! Every correction has meaning, but not every one is significant. Pertmaster does make one small simplification: the 2% criterion applies only to changes in time or cost, so other changes are not monitored and the 2% figure may not be fully representative, of course.
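As a sketch of that stepping procedure (my own illustration in Python, not Pertmaster's built-in convergence option), one can double the iteration count until a chosen statistic, such as the P90, moves by less than 2% between runs; the model and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng()

def simulate(n):
    # Stand-in model: sum of triangular activity durations (hypothetical data)
    return sum(rng.triangular(lo, ml, hi, size=n)
               for lo, ml, hi in [(20, 30, 50), (40, 55, 90), (10, 15, 30)])

prev, n = None, 500
while True:
    p90 = np.percentile(simulate(n), 90)
    if prev is not None and abs(p90 - prev) / prev < 0.02:  # the 2% test
        print(f"Converged at {n} iterations, P90 ~ {p90:.0f} days")
        break
    prev, n = p90, n * 2  # step up the replication count and re-run
```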

But the number of tasks is not the main factor affecting the required number of iterations - resources matter too (the number of resources, sub-resources, allocations and restrictions), not only the number of tasks. This is evident to experts, but I would like to highlight the aspect (our small addition). A very detailed model may be too fine-grained for some experiments, it is true, but if we have a detailed schedule (with small pieces of work) we automatically get a large number of tasks while the logic may remain simple (for example, serial and parallel flows of work), so simulation is still valuable here. Simulation is good for schedules of any size, provided the schedule itself is solid. That is my opinion and I stand by it ;-)))

Good luck to all
Tomek

P.S. I see one small technical problem if we want to simulate with 10,000 iterations on big and complex models of 10,000 tasks (for example): the large temporary file. This file can reach 0.5 GB for a 5 MB plan. It is not a critical problem today, of course, but it is not trivial either (especially if we have variants).
Philip Rawlings
User offline. Last seen 8 years 13 weeks ago. Offline
Joined: 16 Nov 2004
Posts: 41
Groups: None
Vladimir
My point exactly. If you rerun the same analysis (with a variable seed) and (for example) the probability of completion changes by 2%, then when you progress the model and it comes out 2% better (or worse) you cannot be sure that the project has in fact changed. Increase the iterations until that run-to-run change in % is small (small according to your own rules).

I would have to disagree that only a very detailed model (9,000 activities) can reflect the truth. I would be interested in what others think.
After calculating the initial plan you will start project execution and will be interested in estimating the current probabilities of meeting project targets. So you will repeat the Monte Carlo simulation. If the result shows that the current probability is 2% higher, does it mean that you performed your project well? With such a small number of iterations I cannot rely on this information.
All project links are at the detailed level. You cannot properly simulate uncertainties using a high-level model.
Philip Rawlings
User offline. Last seen 8 years 13 weeks ago. Offline
Joined: 16 Nov 2004
Posts: 41
Groups: None
Vladimir

1) In Pertmaster (and Monte Carlo 3.0, @Risk, Crystal Ball) you have the option of setting the random seed. This then samples the same series of random numbers on every simulation (set of iterations), so you get EXACTLY the same answer on every simulation. If you do not set the seed then you get a different set of random numbers on each iteration, and so different results on every simulation. How different the results are depends on the model itself and the number of iterations chosen. The above applies only as long as the input data does not change.
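The same behaviour can be demonstrated outside those tools; a minimal Python sketch (my illustration, hypothetical numbers):

```python
import numpy as np

def run(seed=None):
    rng = np.random.default_rng(seed)  # fixed seed => same random series
    finishes = rng.triangular(40, 55, 90, size=1000)  # hypothetical activity
    return round(np.percentile(finishes, 90), 1)

print(run(seed=42), run(seed=42))  # identical results on every simulation
print(run(), run())                # free-running seed: results differ
```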

2) The number of activities in a plan depends mainly on its intended usage. You may have 9,000 activities, but a management plan would have far fewer. The smaller plan will be a roll-up/summary, either at code level (as in P3, with logic only at the lower level) or a separate plan (with consistent logic). I contend that the 9,000-activity model is too detailed for a risk analysis (for the reasons stated previously), so a higher-level plan is preferred. That's all - it's your choice.
I don't agree with your estimate of the size of computer models for serious projects.
Let's assume that a project lasts three years (around 156 weeks) and that 50 tasks are performed each week. The size of your project will exceed 7,500 tasks.
The model for the Caspian Pipeline Construction consisted of 9,000 activities; the project portfolio model for the shipyard had 93,000 activities, etc.
A model for the construction of a 25-storey building had more than 5,000 activities; now assume that you are building several of them.
Some IT projects may consist of 500-800 activities, I agree. But most projects in construction are much larger.
I don't understand how you can receive exactly the same results several times using true Monte Carlo simulation and too small a number of iterations.
Philip Rawlings
User offline. Last seen 8 years 13 weeks ago. Offline
Joined: 16 Nov 2004
Posts: 41
Groups: None
Vladimir
In Pertmaster you can set the random seed to a fixed number so that if you repeat the risk analysis you will get EXACTLY the same answer (I recommended this). If you change some data you will indeed get different results - only with an infinite number of iterations would two simulations agree exactly regardless of seed. The issue here is that if you change one piece of data, you need to be sure the change in results reflects that change and not random variability. To check this you need to identify an acceptable change in results from one run to another - say, in a 5-10 year project, 3-4 days is probably acceptable (but it is your choice). Use the convergence feature, or change the seed and increase the number of iterations in steps, to see what the optimum number of iterations is. My experience IS that 5,000 is plenty for all but the most complex of models.

With a 10,000-activity plan, you are probably at too low a level to be able to assess the risk impact sensibly - the activities are too short to be able to set uncertainty on. An analogy: how long does it take to drive home (for me 40-50-70 minutes, 90 in extremis)? Now, how long to put on your coat, pick up your bag, open the door, descend two steps, etc.? See what I mean? I have run models of 2-3,000 non-completed activities (project budget $8 billion) but it is not ideal. A 10,000-activity model is too detailed for risk analysis and also for sensible high-level management appreciation/what-if exercises - it is more of a list than a plan (as may be used in a shutdown model, for example).
What do you do if the project model consists of, say, 10,000 activities?
I think that 5,000 iterations for a model consisting of 800 activities is not sufficient.
You will receive different estimates of the probabilities of the same events each time you perform risk analysis using Monte Carlo simulation.
Philip Rawlings
User offline. Last seen 8 years 13 weeks ago. Offline
Joined: 16 Nov 2004
Posts: 41
Groups: None
Alan
Risk analysis is often thought to be most usable at a model size of several hundred activities (say, 400-800). There is, of course, no absolute answer, BUT it depends very much on the complexity of the project. Also, if you have a competent plan already, stick with that rather than create a special risk model that may have 'buy-in' problems. We usually use at least 1,000 iterations - you can use the convergence feature in Pertmaster to determine how many iterations you need for your model and your tolerance for variation between runs. If you have some low-probability branches, you will need more iterations (probably 5,000).
Alan Davison
User offline. Last seen 13 years 30 weeks ago. Offline
Joined: 10 Jan 2001
Posts: 5
Groups: None
We are considering using PERTMASTER and would like to hear any comments from current users. The application would be a 5-10 year project of $5-10 billion AUD. How many iterations (100-200)? How many activities? Alan D
Tomasz Wiatr
User offline. Last seen 2 years 8 weeks ago. Offline
Joined: 22 Dec 2004
Posts: 35
Groups: None
Dear Colin,

We have no single answer to this canonical question, "what probability level should (...) use for realistic planning and decision making". From the experience of the first PERT researchers I know only that a probability greater than 60% wastes resources and a probability less than 25% carries too much risk. These ranges are not binding anywhere, especially for projects in non-industrial conditions. That is why risk analysis is interesting: we do not know these ranges in advance :-) and we must "foretell" them :-))

Probably the main problem here is not the probability but the level of risk acceptance in your project team. That is the key, although from a scientific point of view it is better to run a simple set of schedule simulations before any decision. Sorry for my simple advice ;-) Like a doctor, I want to prescribe Pertmaster :-) The right answer lies only in the hands of you and your team of decision makers. Maybe a risk expert from exactly the same branch of activity could help better, BUT any single value will still be a subjective selection!

Good Luck
Tom ;-)
Philip Rawlings
User offline. Last seen 8 years 13 weeks ago. Offline
Joined: 16 Nov 2004
Posts: 41
Groups: None
Re. risk bands - you can also use the Categories feature in Pertmaster, where you define risks (say, estimating uncertainty on design of 90%-100%-120%) and then allocate each risk to several activities. The risk distribution is then generated automatically.
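A minimal sketch of the banding idea in generic Python (not Pertmaster's actual Categories feature; the band names and factors below are invented for illustration): each band carries optimistic and pessimistic multipliers that are applied to the most-likely duration of every activity assigned to it:

```python
# Hypothetical risk bands: (optimistic, pessimistic) duration multipliers
BANDS = {
    "low":    (0.95, 1.10),
    "medium": (0.90, 1.20),  # cf. design uncertainty of 90%-100%-120%
    "high":   (0.80, 1.50),
}

def three_point(most_likely, band):
    """Derive an (optimistic, most likely, pessimistic) estimate from a band."""
    opt_f, pess_f = BANDS[band]
    return (most_likely * opt_f, most_likely, most_likely * pess_f)

print(three_point(20, "medium"))  # -> (18.0, 20, 24.0)
```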
Martijn Truijens
User offline. Last seen 13 years 2 weeks ago. Offline
Joined: 12 May 2003
Posts: 5
Dallas,

The risk banding technique really depends on the industry you're working in. I think it is a useful technique because it simply ranks groups of activities into low and high risk. It is the next step up from the 'quickrisk' function in Pertmaster, where a % variation is allocated to all activities.

It should be very useful in your line of business, I should think.

Martijn
Dallas Ewing
User offline. Last seen 19 years 20 weeks ago. Offline
Joined: 19 Apr 2002
Posts: 4
Groups: None
Thanks Martijn, that has addressed my concerns.

To put it simply, I was wondering why all of the literature I could find (with the exception of this forum) seemed to recommend using the difference between P50 and P90 for contingency. In a verbose way, I was simply articulating why I didn't feel comfortable with that proposition.

On another note, I've come across several articles recommending a "risk banding" technique for calculating pessimistic and optimistic durations. This is done by multiplying the most-likely duration by factors determined by categorising the activity into a "risk band".

Is anyone familiar with this technique?
Martijn Truijens
User offline. Last seen 13 years 2 weeks ago. Offline
Joined: 12 May 2003
Posts: 5
Dallas,

Not completely sure what you’re asking.

The most likely (or deterministic) schedule will indeed not be the same as your P50 schedule. That is the point of the analysis.

If you insist on having 90% confidence that your schedule is robust, then the difference between your P90 and your deterministic schedule is your calculated schedule contingency.
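In sketch form (Python, with invented dates):

```python
from datetime import date

deterministic_finish = date(2026, 3, 1)   # hypothetical CPM finish date
p90_finish = date(2026, 5, 15)            # hypothetical simulated P90 date

contingency = p90_finish - deterministic_finish
print(f"Schedule contingency: {contingency.days} days")  # 75 days
```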

Are you wondering why your P50 is not the same as what you started off with?

Martijn
Dallas Ewing
User offline. Last seen 19 years 20 weeks ago. Offline
Joined: 19 Apr 2002
Posts: 4
Groups: None
I’ve been preparing an organisational procedure for performing Monte Carlo analysis and using the output to determine how much contingency is allocated to the most-likely schedule.

From what I can find on the allocation of contingency, the recommendation seems to be an allocation based on the difference between the 50% and 90% confidence levels.

It seems to me that taking the 50% CL is more likely to result in over- or under-allocation of contingency. Does anyone know of any alternatives, or the reasoning behind this recommendation? Unless all your probability distributions are symmetrical around the modal value, the most-likely schedule will not necessarily sit at the 50% confidence level.

Dallas
The optimal probability of meeting project targets depends on the competition too. If you offer your customer a plan with a 90% success probability and your competitor can make a 70%-probability proposal, you will lose the competition. You must decide what proportion of successful versus late (and over-budget) projects is best for your business, taking into account that lower numbers mean more deals.
Alan Davison
User offline. Last seen 13 years 30 weeks ago. Offline
Joined: 10 Jan 2001
Posts: 5
Groups: None
Colin

The primary purpose of running a schedule risk simulation is to enable the project management team (PMT) to analyse the risk, and hence the level of contingency to allow in the budget and schedule. This is typical of a high-risk project, like a "first oil" date or a moon landing, where significant risks are involved over long time periods. Most projects do not look at these risks seriously enough and rely heavily on deterministic solutions, which really only reflect one person's window on a given situation.

The probability level the PMT should use for realistic planning and decision making should relate to the specific industry and be based on historical/statistical data, but it would presumably lie within the 75-90% range, or else the project is not viable.

"The probability chosen depends on the purpose to which the PMT wishes to apply the answer" is the reality as managers will manipulate the data to achieve the necessary result either in the feasability stage or to guarantee an end date.
If accurate historical data is used (3 time estimates etc ) and good planning ie minimum "hard logic" allowing the schedule to be free of too many constraints then a good indication of the end date and forecast cost at completion can be achieved .

Risk analysis is certainly an area that needs to be considered more by clients and contractors in light of recent disasters on Goro and AMC etc., where significant project "blow-outs" occurred.

Risk analysis was very popular years ago with Shell etc. in the North Sea, when PERT was in its infancy... I think we need to look again at how we use PAN/PERT/PERTMASTER and reintroduce these simple but effective techniques into project feasibility and definitive studies.

Alan D

Dinesh Kumar
User offline. Last seen 5 years 50 weeks ago. Offline
Joined: 3 Jan 2004
Posts: 37
Groups: None

Every project is unique in nature, and risk levels differ from one project to another. One cannot use the same probability level that others use on a particular project. In most cases the probability is also subjective.

In my opinion, the probability to use depends basically on:
- the risk involved with the project
- the detail used for the optimistic, realistic and pessimistic durations
- the decision maker's risk attitude, i.e. whether he is optimistic or pessimistic
- the information available
- data/experience from previous similar projects, etc.

Regards

Dinesh
Martijn Truijens
User offline. Last seen 13 years 2 weeks ago. Offline
Joined: 12 May 2003
Posts: 5
In my experience using risk analysis tools for scheduling medium to large offshore facilities, the probability level will be far less than that, Colin.

A concerted team effort to achieve completion is more likely in the 5-25% range.

Securing, say, funds or contractual requirements will range between 70-90%.

Two things are essential here: can one support the optimistic outcome with appropriate benchmarking data, and what is the extent of "merge bias" encountered in the risk analysis?
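Merge bias is easy to demonstrate with a toy simulation (my sketch, invented numbers): two independent parallel paths that each make a date 50% of the time finish together on time only about 25% of the time, because the merge waits for the later path:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20_000
target = 100  # hypothetical target, days

# Two independent parallel paths, each ~50% likely to make day 100
path_a = rng.normal(100, 10, N)
path_b = rng.normal(100, 10, N)
merge = np.maximum(path_a, path_b)  # the milestone waits for the later path

print(f"P(path A on time) = {np.mean(path_a <= target):.2f}")  # ~0.50
print(f"P(merge on time)  = {np.mean(merge <= target):.2f}")   # ~0.25
```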

As a consultant, however, it is safe to say (to a customer) that a 50% probability is a realistic target, whilst a 90-100% probability will ensure project success (and safeguard profitability).

But perhaps I should refer to Brasington’s Ninth Law: A carelessly planned project takes three times longer to complete; a carefully planned one will take only twice as long.....

cheers,
Tomas Rivera
User offline. Last seen 5 years 3 weeks ago. Offline
Joined: 2 May 2001
Posts: 139
Groups: None
Colin:

I do not do risk simulation. But I would like to say a couple of things.

The probability level you choose will also depend on the urgency or importance of attaining your project goal. In qualitative terms, this could be expressed by saying, for example, "we would like our project to finish on ...", or "our project should finish on ...", or "our project must finish on ...", or "our project must finish on ... with the utmost importance". Clearly, these different conditions should carry different probability levels when doing a risk analysis. In other words, they have different tolerance limits for delay.

Tomas Rivera
Altek System
Scheduling and control of high
performance construction projects