
Monte Carlo Risk Analysis

30 replies
Sankar Vijayan
User offline. Last seen 3 years 29 weeks ago. Offline
Joined: 21 Sep 2010
Posts: 58
Groups: None

Dear all,

My name is Sankar Vijayan. I am a fresher in the construction field, now working with an MNC. My boss asked me to prepare a risk analysis for a project, and while doing it I came across Monte Carlo risk analysis. I have read some articles on the internet. Can anyone explain, with a small practical example, how to implement it without using any specialised software, so that I can recommend this method to my boss? I am from the Tendering department.

 

Regards

Sankar Vijayan

Replies

Emily Foster
User offline. Last seen 1 year 46 weeks ago. Offline
Joined: 19 Aug 2011
Posts: 625
Groups: None

Hi Guys,

If anyone's interested, we've just completed a review of Full Monte, a cost and schedule risk analysis add-on to Microsoft Project. You can read it here: http://bit.ly/wnMkPo

I hope this is useful,

Emily

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Gary is right, you really need purpose-built software to do this.  Barbecana offers a product, Full Monte, which works within Microsoft Project.  Check out www.barbecana.com.

As for why it is important, there is a paper on the same site about merge bias. You can download it from http://barbecana.com/merge-bias/
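To give a rough, self-contained feel for what merge bias is (this sketch is mine, not taken from the paper, and the numbers are invented): two parallel paths of similar length feed one milestone, and because the milestone must wait for the slower path, its expected date is later than the expected date of either path on its own.

# Illustrative only: two parallel paths, each with a triangular duration of
# (8, 10, 14) days, merging into one milestone. The milestone waits for the
# slower of the two, so on average it finishes later than either single path.
import random

random.seed(0)
N = 20_000
single_path = [random.triangular(8, 14, 10) for _ in range(N)]
merge_point = [max(random.triangular(8, 14, 10), random.triangular(8, 14, 10))
               for _ in range(N)]

print(sum(single_path) / N)   # roughly 10.7 days for one path alone
print(sum(merge_point) / N)   # noticeably later once the two paths merge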

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

Tony,

-"Re. your crane example, I would argue that the logic is wrong.  The reason you planned to do the second task after the first task was the resource constraint, namely having only one crane.  It was not a valid logic constraint at all."

Yes, I agree with what you say; it is just an example [deliberately wrong] of why using activity logic as a substitute for a resource constraint is bad practice, a bad practice that frequently leads to out-of-sequence logic. Thanks for supporting the argument that the use of soft logic is bad practice, a bad practice repeated thousands of times if used in a Monte Carlo simulation.

-"You say that it is a PMI standard that floats after resource leveling should be based on resource leveling; can you give a reference for this?"

Resource Critical Path is a term used by Spider Project to distinguish float computations after resource leveling, because most software computes float values wrongly after resource leveling; the PMI definition of float remains the same. The software itself uses the PMI definition, which is the same for resource-leveled jobs, and uses the same term, "total float".

http://www.spiderproject.ru/library/pmie01_rcp.pdf

http://www.spiderproject.ru/library/london.ppt

Remaining float seems to be a poorly documented computation, and it is not necessarily equal to resource-constrained float. After leveling, activities that were critical may have float and activities that had float may become critical, so I do not understand the term "remaining". I cannot discuss this concept since I do not understand it. Because your software yields such a value, maybe you can tell us what it means and how it is computed, especially when, after updating the network, changes in resource availability and demand can shift the critical path.

About the pathetic resource leveling most software yields: perhaps you can prove you can manually perform better resource leveling than any software, or by using your own software, if you are willing to take the challenge. From the following links you can download a sample job on which you can test your resource leveling, and let us know your results in XML format. This is a very small job of two hundred activities and only 10 resources, nothing in comparison to a not-uncommon job such as a hospital.
 
https://rapidshare.com/files/2624143100/20x10_1.xml
       
https://rapidshare.com/files/1456729614/20x10_2.xml

20x10_1 is the XML file for the sample job with no constraints

20x10_2 is the sample file with some constraints added so as to display a leveled job without the need to perform a resource leveling run. It is one solution of many feasible ones, one of the worst possible. Later on I will provide my solution, using my software's out-of-the-box settings [no manual guessing of prioritization settings - very easy], and I will export the results in XML format so you can verify it is valid.

Perhaps you can run the same job using MSP alone; if the difference between the software is significant, then it is to be expected that the probability distribution curves will be significantly different. Something on which we seem to agree, but which can be better proved with actual examples. Not to mention how much fun it can be to compare resource leveling results.

Best regards,

Rafael

Please note my software can import/export MSP files if MSP is installed [for whatever reason]; because MSP is not on my machine, my only option is to use the XML format, which does not require MSP to be installed.

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Rafael, I am trying to keep my responses short so I might not cover every point you make.

Re. your crane example, I would argue that the logic is wrong.  The reason you planned to do the second task after the first task was the resource constraint, namely having only one crane.  It was not a valid logic constraint at all.

I still maintain that everything you say about the handling of out-of-sequence progress in Monte Carlo simulation applies equally to deterministic CPM.

I agree with the comments about Critical Chain.  There is no algorithm to determine where the buffers should be, etc., so it is meaningless.  I had never heard the term RCP but looked it up.  It seems to be an invention of Spider (which does not make it bad).  So, yes, now that I have looked it up I know how to calculate it, though it isn't really a path.

Resource scheduling algorithms I have written have output a value called remaining float, which is the amount of the total float not used up in the leveling process.  But the resource scheduling process never overwrote any values calculated by time analysis (CPM).  You say that it is a PMI standard that floats after resource leveling should be based on resource leveling; can you give a reference for this?

I agree in principle that it is not a good idea to use a different algorithm for deterministic and Monte Carlo calculations, but with non-resource-leveled CPM the differences are minor and the benefits are huge -- over 100 times faster.  We could provide our own deterministic calculation to close that loophole.  Of course, with resource leveling the differences could be much bigger, so we may have to go out of our way to do it as badly as MSP!  But there are bigger issues with combining risk analysis with resource leveling; the order in which things are scheduled would generally differ between iterations, creating lots of multi-modal distributions and making the results hard to interpret.

Finally, I agree that resource leveling algorithms do not try to find the best solution.  Or perhaps it is better to say that they do kind of try a little bit, but they are pathetic.  This is because the problem is one of n! (n factorial) complexity, which means that the time taken to solve it tends to go up with the factorial of the number of tasks.  (This is only a general trend because it also depends on the network logic; if everything is done in one long logic chain there is nothing to decide, so it's quick!)  So, if you could solve a 100-task problem in an hour, it would take 101 hours to solve a 101-task problem.  Nevertheless, I think the time may be ripe for a branch-and-bound algorithm which would be able to find provable optima in smaller cases and at least be able to examine tens of thousands or maybe hundreds of thousands of solutions in a reasonable time.  I have run this past a number of people and not elicited a lot of interest.  If there were interest this might be Barbecana's next project.  My impression is that very few people resource-level their projects, but that might be because of the miserable results from current algorithms!

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

-"Rafael, if it is possible to do things out of sequence, then surely the original logic was wrong."

There is no such thing as absolute logic; you make assumptions, and as the project moves you change course for the benefit of the job. This is most common when people substitute activity logic for resource dependencies.

If I have a single crane for my activities, do not use true resource leveling, and resort to soft logic, using an FS link to delay one activity after the other to manually solve the resource over-allocation, and then I suddenly get a crane from another job and use it to overlap the activities, it does not mean the original plan logic was wrong. It means that logic, like plans, is dynamic; it means a change in plans. You do not build a complex CPM and never adjust it; you must adjust it frequently, including the logic, as conditions change.

No one can deny that errors do happen, but an out-of-sequence event does not mean an error occurred. Horrors do happen if you do not correct soft logic as the schedule dates change and, with them, the demand for and availability of resources. This is usually handled effectively with automatic resource leveling, if your software provides optimization methods [note that most software does not even attempt to use optimization methods], as this does not require you to change soft logic nor to keep re-entering the ever-changing prioritization rules needed to get close to the shortest-duration schedule. But using manual resource leveling through soft logic in combination with Monte Carlo usually ends in horror. GIGO is still valid.

-"You say Monte Carlo does not take account of the possibility of out-of-sequence progress in the future, but surely that is equally true of deterministic CPM."

The deterministic CPM in all software I know deals with the possibility of out-of-sequence progress by means of some limited options like progress override and retained logic. What it cannot do is substitute for the planner and perform some magic such that it always makes the best correction when needed, "best" being subjective. This is a limitation it has, and the same goes for Monte Carlo methods. It is just about accepting this limitation.

-"In all the PM systems I am familiar with, floats are an output from the non-resource-leveled CPM process only."

In the PM system I use, Spider Project, float has a single definition, in agreement with the PMI standards. The values are valid output for the non-resource-leveled job if you do not perform any resource leveling run. They are output for the resource-leveled job if you perform a resource leveling run, and they are correct. This is a serious issue that 99% of the software out there cannot handle. It is simple: make use of new knowledge, and don't be surprised if you find hundreds of college research papers still looking for the diabolic Phantom Float.

Do you know how to calculate RCP?

http://www.spiderproject.ru/library/pmie01_rcp.pdf

http://users.encs.concordia.ca/~andrea/indu6230/INSE%20papers/9.pdf

Please take note that the following statement is not true; most resource leveling functionality in common software does not even attempt to find the shortest duration. It runs the algorithm based on fixed prioritization rules the user inputs; it does not search for the best combination of prioritization rules, which can change as soon as there is any progress in the schedule. Still, the reference can be used as an introduction to the topic of calculating resource-leveled float.

Limited-resource allocation algorithms aim to find the CPM schedule duration that is shortest [wrong assumption - not true in most software].

What is nuts is performing hundreds or thousands of resource leveling runs in Monte Carlo software that yields wrong float values, and then reporting wrong statistics about float.

If the software cannot handle correct values of float on resource-leveled jobs, then as soon as the scheduler performs resource-leveled runs it should erase these values and not display wrong ones. It is misleading and can induce management into serious errors.

Another relevant issue surfaces when the resource leveling algorithm of the software is poor and the resulting job duration after resource leveling is longer than necessary. Most software algorithms depend on a selection of prioritization variables that is unpredictable and can change with every iteration. The selection of these variables is not automatic and requires input by the user. Only a few packages, like Spider Project, provide an algorithm that searches for the combination that yields close to the shortest project duration. But the shortest duration is not always desirable; Spider Project acknowledges this and lets the user select among several algorithms, but this requires user intervention. In the actual state of available software, no Monte Carlo simulation will be a precise model of what will happen; useful, but not precise.

It makes no sense to use different software to predict schedule probabilities than the one you use to manage your job. For a start, it is possible that if your software's resource leveling algorithm is superior to the algorithm used by the Monte Carlo software, the difference in the deterministic duration can be such as to invalidate the results. The resulting Monte Carlo probabilities of finishing on time will be substantially shifted to the right when using Monte Carlo software with poor resource leveling; imprecise above tolerable limits.

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Rafael, if it is possible to do things out of sequence, then surely the original logic was wrong.  You say Monte Carlo does not take account of the possibility of out-of-sequence progress in the future, but surely that is equally true of deterministic CPM.

As for the incorrect floats, I probably need to clarify two points.  First of all, the first version of Full Monte does not address resources except insofar as they affect cost.  (The next release will, though our experience so far is that few people resource-load their networks.)

Secondly, we do _not_ utilise MSP's algorithms, so we would not duplicate their mistakes.  We have noted a number of situations where we give different dates or floats, usually by just a day, and in all cases we believe our results are correct.  (We have one case where if you change a task from ASAP to ALAP, its dates get earlier!)

Risk+, on the other hand, does use Microsoft's algorithm, which explains (a) why it can take account of resource constraints, since they sort of get it for free, (b) why it is so slow, and (c) if you are right, why it gets the floats wrong on resource-leveled projects.

Having said that, I am not sure I know what float means in the resource leveling case.  In all the PM systems I am familiar with, floats are an output from the non-resource-leveled CPM process only.  I have known some that output remaining float, being the float one didn't use up in the leveling process.  But one could argue that it should also take account of when the resources it uses are required by other tasks.

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

Vladimir,

Some time ago I made the following comments about this error. An error I believe was made not by omission but by convenience, as the members of the group are supposed to be "experts". I suspect some members of these groups might have commercial interests with competing software developers, and therefore there is a high probability they will conveniently omit any reference to the software they fear the most. I do not mean to say that all members of the group failed to notice or were dishonest, but that through some convenient final editing the correct reference was most probably omitted. They are "experts", or not?

They will forever omit any mention of the wrong values of float their software yields on resource-leveled jobs, at least until they figure it out. They will also avoid mentioning, in their Forensic Protocols, functionalities that do not work on resource-leveled networks, like longest path calculation algorithms that are wrong on resource-leveled jobs; the same goes for negative lag computations, which also fail on resource-leveled jobs, at least until they figure it out.


http://www.physicsoftheuniverse.com/scientists_lemaitre.html

http://news.discovery.com/space/faster-speed-of-light-110922.html

Best Regards,

Rafael

Rafael,

thank you for the reference.

It is interesting that the AACEI authors wrote about all software based on their limited knowledge and experience.

The statements that only Open Plan assigns calendars to lags, and that all resource-constrained floats are wrong, are just examples.

There are other activity types, there are volume lags, etc. The authors should mention their restricted knowledge and the software tools that were discussed.

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

-"Rafael, on second reading I am thinking that you are referring to out-of-sequence progress which is deliberately instigated as a result of trying to expedite a slipping schedule."

Out-of-sequence progress happens not because of a slipping schedule but because of the contrary: some event started before the planned logic would allow; if the network is slipping, that is a separate issue. For example, you plan to cement-plaster a typical concrete house after 100% of the masonry is installed, but eventually you change your mind and accelerate the job by starting the cement plaster after 50% is done, or after 60%, or after 30%; it is up to you as long as it is reasonable. It can be for your own convenience, to reduce overhead, to reduce the risk of finishing late, or to make use of now-available resources.

If out-of-sequence events happen (are probable) and your Monte Carlo model does not take this possibility into account, it is biased. Prior to running the simulation you know it can happen very frequently, yet you avoid the issue in your model. This is one of the many limitations of Monte Carlo simulation as actually applied to CPM schedules. The logic is static, and no attempt is made to model changes in logic that result from statistically predictable events.

All our jobs experience out-of-sequence progress several times; ignoring this possibility makes the simulation less reliable, biased. If the model does not consider something prone to happen, it is not good. The problem seems to be that some Monte Carlo simulations do not progress the network as it is run and keep it un-statused after the initial data date, something that I believe can be modeled but in actual practice is not.

To my understanding, not all Monte Carlo software works only on un-statused jobs; those displaying out-of-sequence progress must be verified against the latest version, but the software still misses the probability that it might happen again and again.

-"You or someone else also referred to the fact that mitigation plans cannot be modeled in the simulation, but this is not true. Full Monte and other systems support conditional branching, which allows one to specify different paths to be active depending upon conditions (e.g. if we get the drawings by Jan 1, do the job in house, otherwise outsource)."

The software I use provides for conditional branching and can accept different logic for the estimation of the probabilities. From your response I must assume your software also provides for conditional branching; a piece of cake. But there are many Black Swans; there is no way you can foresee all possible issues that will require mitigation on a complex job.

-"Yet another person (?) has suggested that it is impossible to know the right distribution. This is true, but it is a giant leap forward to acknowledge that there _is_ a distribution."

I believe in the value of Monte Carlo methods and other statistical methods, excluding original PERT, which was too biased. Some do not believe in them because at the very beginning the risk distribution is a mere guess, but the value can be proved using some sensitivity analysis runs. They are not precise, but they are in no way useless; on the contrary, they tell you more than deterministic models. The issue is about improving those things we can.

-"I would argue that if it is possible to do work out-of-sequence then the original network logic was just plain wrong."

I would argue that the above statement is plain wrong; schedules are dynamic, and as things happen the needs change. Some people believe CPM logic is absolute and static, but it is not; there is a lot of subjectivity in it, otherwise two different schedulers would yield exactly the same schedule for any given job.

Many (if not all) schedules use some kind of preferential logic. Say, for example, you scheduled to build a road from point A to point B, or from point B toward point A, or from the station at the center toward the ends. Which is correct? Well, the resulting mass diagrams will be different for each option; maybe none yields optimum hauling.

What if you planned a job with 10 hauling trucks, but unexpectedly 10 extra trucks become available from another job, and you adjust your schedule to work two crews in parallel, perhaps at different locations, so that your job does not become a junk-yard racing track? Would you say the schedule logic was wrong because you did not predict the unpredictable?

-"Someone else suggested that some "low quality" systems produce bad float values; if this is so it is presumably "just" a bug and should have no bearing upon a discussion about the merits of the technique."

Of course it has no bearing on the discussion of the technique, but it has bearing on the software that cannot yield correct float values, such as MSP; and if your software runs under MSP, the results are flawed because of the wrong values MSP yields. Other software is even worse: it brings in files from software that claims to yield correct values of float and models the network using an engine that yields wrong float values. This produces wrong probability distributions, distributions that mean nothing, distributions that are nuts.

The software I use yields correct resource-leveled float values; others attempt to, while MSP does not even attempt to give correct values of float after resource leveling. I suppose that if your add-on yields probabilistic curves for float values, it corrects MSP's computations of resource-leveled float, otherwise the results would be flawed.

From http://www.alphacorporation.com/49r-06.pdf:

Resource leveled schedules usually do not show correct float values. Thus, the total float method does not identify the correct critical path(s). After the schedule model has been calculated, resource leveling routines override early and late start dates of leveled activities, scheduling them when resources are available. Other successor activities are subsequently delayed to maintain network logic. Some CPM software may adjust float values to reflect reduced float caused by delaying activities, others do not. In either case, the resource-constrained float values displayed are incorrect. In a resource-restrained schedule, the concept of float, as the software displays it, breaks down and quite often the concept of a critical path breaks down.[3] This previously undefined resource-float has been called "phantom float" by Fondahl and others.[12]

The above is true regarding the fact that most CPM software yields wrong values of float, and that some yield wrongly reduced values, but it is wrong or outdated when it says "In either case, the resource-constrained float values displayed are incorrect". Maybe they are judging from their limited experience with flawed software. It is understandable that if their software cannot always yield correct values of float, they assume no one else can tackle this mathematical riddle.
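As a small invented illustration of the quoted point (the numbers are mine, not from the reference): take two 5-day activities with no logic between them, one shared crane, and a 10-day deadline. Pure CPM shows 5 days of float on each; after levelling, neither can really slip.

# Hypothetical example of "phantom float" after resource levelling.
deadline = 10
dur_a = dur_b = 5        # two independent activities sharing one crane

# Pure CPM (resources ignored): both can start on day 0.
cpm_float = deadline - (0 + dur_b)          # 5 days displayed for each activity

# After levelling: A holds the crane on days 0-5, B is pushed to days 5-10.
leveled_start_b = 0 + dur_a
resource_float_b = deadline - (leveled_start_b + dur_b)   # 0 days
resource_float_a = resource_float_b    # if A slips, B and the deadline slip too

print(cpm_float, resource_float_b)     # 5 vs 0: the displayed 5 days is phantom float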

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Rafael,  on second reading I am thinking that you are referring to out-of-sequence progress which is deliberately instigated as a result of trying to expedite a slipping schedule.  I would argue that if it is possible to do work out-of-sequence then the original network logic was just plain wrong. 

You or someone else also referred to the fact that mitigation plans cannot be modeled in the simulation, but this is not true.  Full Monte and other systems support conditional branching, which allows one to specify different paths to be active depending upon conditions (e.g. if we get the drawings by Jan 1, do the job in house, otherwise outsource).

Someone else suggested that some "low quality" systems produce bad float values; if this is so it is presumably "just" a bug and should have no bearing upon a discussion about the merits of the technique.

Yet another person (?) has suggested that it is impossible to know the right distribution.  This is true, but it is a giant leap forward to acknowledge that there _is_ a distribution.  As Sam Savage says in his book The Flaw of Averages, "In the land of averages the man with the wrong distribution is king."

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Rafael, I do not understand your point.  I _think_ you are implying that out-of-sequence progress can occur somehow as a result of the simulation, but that is impossible.  The simulation obeys the logic of the network, so it cannot create out-of-sequence progress.  Out-of-sequence progress can occur only as a result of user-entered progress data being incompatible with the network logic.  So, any out-of-sequence progress there may be is there before the simulation starts and can be handled in exactly the same way as it would be in the deterministic case, whether by user intervention or by automatic rules.

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

Some low quality Monte Carlo software yields float statistics that are far from even approximate. It levels the plan thousands of times and records wrong float values for thousands of activities on each iteration. It can do it wrong in a big way and even provide the user with a float probability distribution chart that is nuts on resource-leveled networks.

I agree with Rafael.

Monte Carlo simulation creates an illusion of accuracy and it is dangerous.

If there are delays or cost overruns, people implement corrective actions like fast tracking (out of sequence), using additional resources, changing work calendars, etc. They do not work as was planned initially.

Besides, there are other weak points in existing Monte Carlo models.

Monte Carlo simulation is still useful but it should be clear that it is not accurate.

Risk management (and simulation) is a permanent process throughout the project life cycle. It is not enough to do it at the project start.

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

-"Also, risk analysis tends to be most useful before a project starts, when the issue would not arise."

That is precisely the issue: if out-of-sequence progress happens and your Monte Carlo model does not take this possibility into account, it is a wrong model. Why so much fuss about running schedules thousands of times with different activity durations that will impact the project duration, while forgetting about out-of-sequence events that can equally impact the project duration, perhaps by an exaggerated amount if the retained logic "fix" shortcut is used and the logic is not fixed?

A deterministic model is different; you only fix out-of-sequence logic when it happens and, as many suggest, "if needed". In a Monte Carlo simulation you run different scenarios; some shall include out-of-sequence events and their solution, for the simulation to be a better representation of the possible outcomes that shall be taken into consideration.

I believe eventually more intelligence will be embedded into CPM models so as to suggest a better fix to out-of-sequence logic rather than a single pre-selected option. Take for example Asta PP: they added some additional options to the common retained logic and progress override, options that in some cases can be the correct logic fix. I do not use P6, but I would not be surprised if they also have a third option. Even with some intelligence, magic will not always happen, and every out-of-sequence "fix" must still be verified.

In the same way that a deterministic schedule is dynamic, a probabilistic model is also dynamic; new risks can surface, or it might happen that your buffers should be modified. As the job moves, the probabilities of success can change substantially, and some intervention by the planner is in order if he wants to keep his risks under control. These changes are not modeled by Monte Carlo, and this is another reason why in practice it is far from perfect, though still useful.

As Steve said, there are many, many "black swans".

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

"Out-of-sequence events are common, even when scheduling only with FF and zero lag they occur. Not modeling these can be called to be in error. Here Monte Carlo will fail miserably as it will choose one of the pre-defined "solutions" most probably retained logic a work around that is not always true solution, same as if progress override is selected. No way to make sure Monte Carlo will check out-of-sequence occurrences and magically solve any broken logic."

I don't really see why Monte Carlo is any more susceptible to out-of-sequence progress than deterministic CPM.  Ideally the error will be corrected by changing the logic.  But if it is not, then what Full Monte at least does is to retain all the logic.

Also, risk analysis tends to be most useful before a project starts, when the issue would not arise.

 

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

Steve,

Out-of-sequence events are common; even when scheduling only with FS links and zero lag they occur. Not modeling these can be called an error. Here Monte Carlo will fail miserably, as it will choose one of the pre-defined "solutions", most probably retained logic, a workaround that is not always the true solution, same as if progress override is selected. There is no way to make sure Monte Carlo will check out-of-sequence occurrences and magically solve any broken logic.

Monte Carlo is not perfect, especially if the Perfect Storm hits you. Monte Carlo is an imperfect way to model a job many times, many of those times wrongly.

Your request for it to directly model lags is a way to improve it a bit. I do not believe it will be perfect; it will not be able to predict how the network will be handled as variations in risk events call for the planner's intervention.

If we accept its limitations we can understand its value. It helps us identify how relevant the risk factors are, so we can concentrate on the most relevant ones as the job progresses. In any case, the belief that the deterministic schedule is accurate is shown wrong on the spot by even the worst of the statistical models; perhaps some are better models than others, but any of them is less bad than the deterministic one.

You should use the deterministic model without buffers at the project level, while at higher levels different buffers are applied. The statistical models are good at helping you determine and quantify your necessary buffers. You as a PM can include your buffers to help you comply with your contract needs. The Owner at his level must also include his own buffer protection for the case where delays are on his side of the responsibility; if he still wants to finish on his target, that should be taken into account. He might also want to consider the possibility of you being late and reserve some buffer protection for this possibility.

http://www.planningplanet.com/content/six-steps-success-driven-project-management

-"So project has a set of targets – tight targets for project team, reasonable targets for project management team that include sufficient contingency reserves, and more comfortable targets for project sponsor that include additional management reserves."

Unfortunately, on this side of the Atlantic the use of buffers by the Contractor is not encouraged; on the contrary, it is frequently prohibited when the specs call for the Contractor to schedule his work using all available contract time and interpret buffers as a mockery of this requirement. Many do not understand Parkinson's Law and are driven crazy when you show a baseline with a projected early finish, as if deterministic meant 100% probability of success; it never is, and frequently the probability is so low that it is unacceptable from a statistical point of view.

Best regards,

Rafael

Stephen Devaux
User offline. Last seen 17 weeks 6 days ago. Offline
Joined: 23 Mar 2005
Posts: 667

Hi, Rafael and Andrew.

Rafael wrote:

"A suggested work around is to convert lags into tasks. This idea I do not like, time consuming and might complicate too much the network."

Completely agree.

Andrew wrote:

"Stephen, regarding your Point 1) because Asta allows percentage lead and lags i.e Task B starts 50% of the way through Task A this percentage is respected on each iteration. So even if Task A duration varies by iteration Task B will always start half way through."

Very nice, Andrew!  It still doesn't obviate the problem of choosing the correct distribution for each activity, but it certainly should help handle lags much better!

So, does Asta compute critical path drag yet?  If not, it certainly should.  (Watch this space for the upcoming article in the Jan/Feb issue of Defense AT&L Magazine, "The Drag Efficient".)  And it would be nice if Asta incorporated some of the other TPC metrics and techniques: value breakdown structure, cost of time, DIPP and DPI functions, and the CLUB, for instance.  I assume Asta incorporates ROI, as Spider does?

Contact me if you're interested in talking about these other techniques.

Fraternally in project management,

Steve the Bajan

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

Stephen,

A suggested workaround is to convert lags into tasks. I do not like this idea; it is time consuming and might complicate the network too much.

On average, for every activity in our schedules we have over 6 links, and even if all are FS with lag 0, this 0 value can vary. We do not always start FS successors just in time; modeling schedules so that all activities always happen at their early dates is wrong. Some activities you will start at some time between their early and late dates, and eventually this delay can have an impact on your schedule as other activity durations vary.

I can see much value in your requirement about lags, more than it seems on the surface.

I consider Asta PP's approach to Monte Carlo correct; it is software to watch.

Regards,
Rafael

Andrew Willard
User offline. Last seen 11 years 40 weeks ago. Offline
Joined: 9 Dec 2009
Posts: 35

Hi,

Asta Powerproject's (www.astadev.com) Monte Carlo Risk Analysis module is part of the software, i.e. there is no import and export, and each iteration performs a new critical path calculation and respects resource levelling.

You can set either a default min and max duration and/or set these for specific tasks.

Stephen, regarding your Point 1): because Asta allows percentage leads and lags, i.e. Task B starts 50% of the way through Task A, this percentage is respected on each iteration. So even if Task A's duration varies by iteration, Task B will always start halfway through.
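A minimal sketch (hypothetical numbers, not Asta's internals) of the difference this makes within a single iteration, comparing a fixed SS+8 time lag with an SS+50% percentage lag:

import random

random.seed(2)
start_a = 0
dur_a = random.triangular(5, 25, 10)          # Task A duration sampled this iteration

start_b_fixed_lag   = start_a + 8             # SS+8 days, regardless of how long A takes
start_b_percent_lag = start_a + 0.5 * dur_a   # SS+50%: always halfway through A

print(round(dur_a, 1), start_b_fixed_lag, round(start_b_percent_lag, 1))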

Hope this helps

 

Andrew

 

 


Stephen Devaux
User offline. Last seen 17 weeks 6 days ago. Offline
Joined: 23 Mar 2005
Posts: 667

Rafael, your post is right on the money!  I especially liked your point about a hurricane delaying ALL work.  There are many, many "black swans" (I strongly recommend the book The Black Swan by Nassim Nicholas Taleb [http://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb] -- and no, it's not about ballet dancing! -- on the general subject of risk, Monte Carlo systems and the Black-Scholes theorem).  They lurk unpredictable, invisible, and eager to destroy your project!

The biggest problems with Monte Carlo systems are:

  1. I have yet to see a system that works with MS Project that varies lags.  So an activity with duration estimates of 5, 10 and 25 that is an SS+8 predecessor of another activity will be assumed by the Monte Carlo system to be an SS+8 predecessor whether the activity turns out to be 5, 10 or 25!  The lag of 8 is assumed to be a time lag and therefore fixed at 8, when most lags are volume lags (a distinction that MS Project doesn't recognize).  (I don't know if there are software packages that can probabilistically vary lags -- I've never seen one.)
  2. It is VERY difficult to determine the distribution shape for each activity and input each one independently.  As a result, EVERY scheduler I've ever met uses one of the "default" distributions (usually triangular, occasionally Beta), even though the functionality is there to select from a large menu of distributions -- that's a LOT of work!  A triangular distribution, with no other variance, will give a schedule that's 12-15% longer than a Beta at the 50% confidence level (see the sketch after this list).  That's a HUGE difference.  Which answer is right, triangular or Beta?  I don't know, and nobody else does either!
  3. And notice, the above problems exist even if the three estimates are based on solid historical data for the exact context (climate, time of year, weather, dependability of suppliers and subcontractors, etc.) of THIS activity!  (And how often do we have that?)
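As a rough check of point 2, here is a sketch using one invented three-point estimate, comparing the 50% point of a triangular distribution against a common Beta-PERT shape (the shape parameters below are the usual PERT convention, not any particular tool's):

import random

random.seed(3)
O, M, P = 5, 10, 25          # optimistic, most likely, pessimistic (days)
N = 20_000

tri = sorted(random.triangular(O, P, M) for _ in range(N))

alpha = 1 + 4 * (M - O) / (P - O)   # common Beta-PERT shape parameters
beta  = 1 + 4 * (P - M) / (P - O)
pert = sorted(O + (P - O) * random.betavariate(alpha, beta) for _ in range(N))

p50 = lambda xs: xs[len(xs) // 2]
print(round(p50(tri), 1), round(p50(pert), 1))   # the triangular P50 is noticeably higher here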

In the West Indies, we have obeah men who will tell you how long your project will take. They're less expensive than Monte Carlo systems, and they'll also throw in for free a potion that'll make your next door neighbour fall in love with you.

Fraternally in project management,

Steve the Bajan

Rafael Davila
User offline. Last seen 9 hours 18 min ago. Offline
Joined: 1 Mar 2004
Posts: 5229

I am 100% in agreement with Tony.

I was kind of suspicious, but I looked at his reference and it seems this add-in runs within MSP. This is how it should be, as it is a mathematical error to say I can run a Monte Carlo simulation that represents the probabilities of the outcomes of my model using different software.

Some vendors sell you Monte Carlo software to run files exported from other software, and this is wrong, especially if using resource leveling. Just for a start, the deterministic schedule duration you get from one software package after resource leveling can differ by over 30%. This means the S curves [cumulative probability curves] can be shifted from project start by over 30%.

If you use MSP to manage your jobs, the correct approach is to use the MSP engine, and to make sure you use the same MSP version for your Monte Carlo run as the one you use for managing your job; a different version might even be a cause of different schedule durations after resource leveling, if the algorithms differ between versions.

Of course there are other differences, like how different software models partial resource assignments and how shift work is modeled, among many others.

After running Monte Carlo using the same software, you have to realize it is still a model with some limitations; even if your risk probability distributions are 100% correct, the truth is that Monte Carlo will run static logic that, in case of slippage, would be modified in actual practice.

If a hurricane strikes your job, lack of electrical power will affect many activities, not just one; assuming the rest will happen randomly is wrong, this is not the way it works. At least with Monte Carlo you can model some correlation of risk among different activities, and this can make a significant difference. Yes, Monte Carlo is not a perfect tool, but it is no doubt a better one than PERT.
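A minimal sketch (invented activities and probabilities) of what correlating a shared risk looks like: one "hurricane" event, sampled once per iteration, delays every affected activity together instead of each activity being delayed independently.

import random

random.seed(4)
base_durations = {"excavation": 10, "structure": 20, "roofing": 8}   # days, invented
N = 10_000
totals = []
for _ in range(N):
    hurricane = random.random() < 0.10                 # same event for the whole iteration
    shared_delay = random.uniform(3, 10) if hurricane else 0.0
    sampled = [d + random.triangular(-1, 3, 0) + shared_delay
               for d in base_durations.values()]
    totals.append(sum(sampled))                        # simple serial chain, for illustration
print(round(sum(totals) / N, 1))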

Tony, my understanding about the limitation in original PERT is that it was not due to limitations in computer power, but a blooper that was fixed very soon with the Monte Carlo approach. Bloopers do happen, and re-happen. No need to be polite; it was a blooper.

http://herdingcats.typepad.com/my_weblog/2007/06/pert-analysis-r.html

For a compromise:

http://laserlightnetworks.com/Documents/Modeling%20Schedule%20Uncertainty%20without%20Monte%20Carlo%20Methods.pdf

Tony Welsh
User offline. Last seen 7 years 38 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

It is very hard work doing Monte Carlo simulation on a project network without specialist software.  There are several software packages available, some of which work from inside Microsoft Project.  My company (Barbecana) offers Full Monte, which is easy to use, fast, and inexpensive.  Take a look at www.barbecana.com.

PERT is different.  It looks only at the critical path, whereas the whole point of doing risk analysis is to take into account the fact that there may be several paths which are potentially critical.  It was invented in the 50s or 60s when computer power was expensive, but nowadays there is no reason to use it.

 

 

Jay S
User offline. Last seen 9 years 40 weeks ago. Offline
Joined: 10 Jun 2008
Posts: 36

You can create an optimistic schedule, your own schedule, and a pessimistic schedule (a normal PERT calculation): D = (O + 4R + P) / 6, where O is the optimistic, R the realistic (most likely) and P the pessimistic duration. You can apply this to lags and logic as well.

This will serve as an initial indication.
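A quick worked example of that formula, with invented numbers:

O, R, P = 8, 10, 18            # optimistic, realistic, pessimistic (days)
D = (O + 4 * R + P) / 6        # expected duration
print(D)                       # 11.0 days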

The big problem with Pertmaster is that it ignores logic (even if it runs 100,000,000,000 iterations).

Gary Whitehead
User offline. Last seen 4 years 46 weeks ago. Offline

No. You need specialist software such as Pertmaster.

Sankar Vijayan
User offline. Last seen 3 years 29 weeks ago. Offline
Joined: 21 Sep 2010
Posts: 58
Groups: None

Can we do Monte Carlo risk analysis in Oracle Primavera P6? Can we perform 1000 iterations and generate a graph? How good is the report?

Gary Whitehead
User offline. Last seen 4 years 46 weeks ago. Offline

Not with just Excel, no, because you need software to run these thousands of iterations of the schedule, each with different values for different risks.

Sankar Vijayan
User offline. Last seen 3 years 29 weeks ago. Offline
Joined: 21 Sep 2010
Posts: 58
Groups: None

Thank you, sir, for your reply. Can we perform it using Excel sheets?

Gary Whitehead
User offline. Last seen 4 years 46 weeks ago. Offline

I'm not sure you can perform Monte Carlo analysis without specialist software.

It works by running thousands of iterations of the schedule, each with different values of delay for each risk and uncertainty. You then do statistical analysis on the results to get the % chance of hitting your end date and to inform risk mitigation decisions.
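A minimal sketch (invented four-activity network, triangular three-point estimates) of the mechanics Gary describes; real tools add logic types, lags, calendars, resource levelling and correlation on top of this:

import random

random.seed(5)
# activity: ((optimistic, most likely, pessimistic), finish-to-start predecessors)
network = {
    "A": ((4, 5, 8),   []),
    "B": ((8, 10, 15), ["A"]),
    "C": ((6, 7, 12),  ["A"]),
    "D": ((3, 4, 6),   ["B", "C"]),
}
target = 28          # contractual finish, days
N = 5_000
hits = 0
for _ in range(N):
    finish = {}
    for act, ((o, m, p), preds) in network.items():    # insertion order is topological here
        start = max((finish[x] for x in preds), default=0.0)
        finish[act] = start + random.triangular(o, p, m)
    hits += finish["D"] <= target
print(f"Chance of finishing by day {target}: about {hits / N:.0%}")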