
# Pertmaster Iterations

How do you view a specific iteration? For example, after 1000 iterations are performed, I would like to examine the critical path of an iteration in the range of 80% probability so I can evaluate the durations.

Thanks

Andrew,

RE: I think you are missing the point of risk analysis, the aim is to quantify the probable risk within a project.

- The theory might be mathematically correct, but in practice the data source will not be accurate.
- Even when the statistical data is not accurate, the approximation is closer to reality than a single number presented as if there were no risk.
- Inaccurate but closer to reality, and that is all? A nice graph, so what?
- To me the aim is also to use inaccurate quantification in order to perform sensitivity analysis in a logical and efficient way.

RE: It looks as though you are trying to interrogate the program to identify the most probable "critical" risks.

- You got it right.
- Of course I want to know what is driving the probabilities, and it is not always the same activities that are on the critical path. As you consume float, near-critical activities become more relevant.
- Criticality Index and Tornado Graphs will tell.
- http://www.prcsoftware.com/product-primavera-risk-pertmaster-training/46-pertmaster-step-7-reports/118-oracle-primavera-risk-pertmaster-tornado-chart-report.html

RE: If you want to find activities that originally have float but become critical due to the amount of risk applied, you should be able to find this by analysing the activity total float and the increased duration due to risk.

- You can take a look at them with a filter for non-critical activities with a criticality index greater than 0. Usually at the beginning there are only a few, with low criticality indexes; as the project moves this changes, depending on how you consume float.
- You have to watch near-critical activities as well; the criticality index will tell you about priorities better than the float value alone.
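As a rough illustration of how a criticality index is computed (a generic Python sketch, not Pertmaster's implementation; the network and durations are invented), count how often each activity lands on the critical path across iterations:

```python
import random

random.seed(42)

# Toy two-path network: A and B run in parallel, both feed into merge C.
# An activity's criticality index is the fraction of iterations in which
# it lies on the critical path. Durations are illustrative.
ITERATIONS = 1000
critical_count = {"A": 0, "B": 0, "C": 0}

for _ in range(ITERATIONS):
    a = random.triangular(8, 14, 10)  # A's sampled duration
    b = random.triangular(7, 15, 9)   # B's sampled duration
    # The longer parallel path drives the merge this iteration,
    # so it is critical; the merge activity C is critical every time.
    if a >= b:
        critical_count["A"] += 1
    else:
        critical_count["B"] += 1
    critical_count["C"] += 1

for act, n in critical_count.items():
    print(f"{act}: criticality index = {n / ITERATIONS:.2f}")
```

An activity with float in the deterministic schedule can still show a non-zero criticality index once risk is applied, which is the point made above.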

Best Regards,

Rafael

WJM104,

I think you are missing the point of risk analysis; the aim is to quantify the probable risk within a project.

It looks as though you are trying to interrogate the programme to identify the most probable "critical" risks.

If you want to find activities that originally have float but become critical due to the amount of risk applied, you should be able to find this by analysing the activity total float and the increased duration due to risk.

Regards

Andy

Yes, it is clear we were talking about different things from the very beginning.

I hope the reference I gave you helped with your question. As I do not have Pertmaster or Primavera Risk, spending time on the software specifics makes no sense to me. I suggest looking at the reports; maybe you can get a listing of each run's critical path and the critical activities' durations, and maybe more information is provided in the reports. The general discussion by Pertmaster or Primavera Risk users is of interest to me, as it is related, although not identical, to how Spider Project works with probability models.

None of my clients use Primavera Risk because of its cost, but I hope this is going to change now that Asta PP and Spider Project already provide probabilistic functionality at a very affordable cost.

Best regards,

Rafael

We may be talking about different things. I am suggesting that the end dates of these thousands of simulations would ultimately be normally distributed, not that you would eventually end up with a normal distribution of a project.

If you run a thousand MC simulations, each of a thousand iterations, then with enough iterations the resulting distribution for each MC simulation will converge. Although I do not have Pertmaster or Primavera Risk, they seem to me to be relatively good products.

You are confusing a single Monte Carlo run of 1000 iterations with many runs. The frequency distribution you have illustrated is only one test; if you performed many of these, the results would be normally distributed per the central limit theorem. This is why we can rely on one run as representative.
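The distinction between one run and many runs can be sketched numerically. In this toy Python example (invented network and durations, not @RISK or Pertmaster output), each run's *mean* finish is a summary statistic, and per the central limit theorem those means cluster tightly around the same value even when the per-iteration finishes inside each run are skewed:

```python
import random
import statistics

random.seed(1)

def one_run(iterations=1000):
    """One Monte Carlo run of a toy two-path merge; returns the mean finish."""
    finishes = [max(random.triangular(8, 20, 10),
                    random.triangular(8, 20, 10))
                for _ in range(iterations)]
    return statistics.mean(finishes)

# 50 independent runs: the run-to-run spread of the means is tiny compared
# with the per-iteration spread inside any single run.
run_means = [one_run() for _ in range(50)]
print(f"spread of run means: {max(run_means) - min(run_means):.3f}")
```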

If we run 1000 runs of 1000 iterations, the results might *converge* to a required precision, or it might not be enough, depending on your particular model.

More often than not, the resulting distribution will not be a normal distribution but a non-symmetric distribution shifted to the right. The assumption that the resulting distribution must be a normal distribution was wrongly made by the original PERT developers, but the error was discovered very soon.
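One driver of that right shift is that activity duration distributions are themselves usually right-skewed: the most likely duration sits near the optimistic one, with a long pessimistic tail. A small Python sketch with invented numbers illustrates the effect on a single chain of activities:

```python
import random
import statistics

random.seed(7)

# Five activities in series, each with a right-skewed triangular duration
# (mode 11 sits near the optimistic 10, far from the pessimistic 20).
N = 10_000
finishes = [sum(random.triangular(10, 20, 11) for _ in range(5))
            for _ in range(N)]

mean = statistics.mean(finishes)
sd = statistics.pstdev(finishes)
# Sample skewness: a positive value means the right tail is longer.
skew = sum((x - mean) ** 3 for x in finishes) / (N * sd ** 3)
print(f"mean {mean:.1f}, skewness {skew:.2f} (positive = shifted right)")
```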

From: http://www.investopedia.com/terms/n/normaldistribution.asp#axzz1jfuUvps6

*Definition of 'Normal Distribution': A probability distribution that plots all of its values in a symmetrical fashion, and most of the results are situated around the probability's mean. Values are equally likely to plot either above or below the mean. Grouping takes place at values that are close to the mean and then tails off symmetrically away from the mean.*

If you look at the following Pertmaster distribution you will notice it is not symmetrical.

By the way, I do not have a Pertmaster or a Primavera Risk license; they are too expensive for my pocket. The above picture is from a Google search. I use Spider Project, which derives the distribution curves from another methodology, somewhat different from the original, flawed PERT methodology.

Pertmaster uses a seed number (or numbers) over which you might have some control, apparently based on the computer clock; this might be of interest if you want to replicate your model run.

http://www.prcsoftware.com/product-primavera-risk-pertmaster-training/

From step six, as follows:

Maybe here you can find a programmed method for what you are looking for. Because I do not use Primavera Risk I only took a bird's-eye view of the videos, but they looked good to me.

In @RISK you can step through each iteration or collect the data in a report. Incidentally, I disagree with your assertion that each set of iterations might generate randomly different results. If we run 1000 runs of 1000 iterations, the results will be normally distributed.

The iterations are indeed random, but let's say, for example, we want to examine the critical path of the iteration that produces a certain end date. In @RISK, a similar add-on product for MS Project, I can run a report that lists the (randomly selected) activity durations for each iteration. I can also include the end date for each iteration. This allows me to sort by end date and select the iterations that give me an end date I am interested in (say, the high end of the 80th percentile). I can move those durations into Project and run a time calculation. This gives me a critical path for the specific end date in question. I can then analyze the path and compare it to another result. I am wondering if I can do this in Pertmaster.
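For comparison, that workflow can be sketched generically in Python (the activities, durations, and single-chain network are invented; this is not @RISK's or Pertmaster's API):

```python
import random

random.seed(3)

ITERATIONS = 1000
records = []  # (finish date, sampled durations) for every iteration

for _ in range(ITERATIONS):
    durations = {
        "design": random.triangular(10, 20, 12),
        "build": random.triangular(30, 60, 40),
        "test": random.triangular(5, 15, 8),
    }
    finish = sum(durations.values())  # single chain, for illustration
    records.append((finish, durations))

# Sort by end date and locate the P80 finish.
records.sort(key=lambda r: r[0])
p80 = records[int(0.8 * ITERATIONS) - 1][0]

# Iterations finishing within one day of P80: their sampled durations are
# what you would feed back into the scheduler to recover a P80 critical path.
near_p80 = [r for r in records if abs(r[0] - p80) <= 1.0]
print(f"P80 finish = {p80:.1f}; {len(near_p80)} iterations within 1 day of it")
```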

Iterations are random. For every 100 iterations, 80 will be equal to or below the 80% value. Maybe no single iteration fits the 80% value exactly; maybe a few do. You [the computer] know [determine] the 80% value only after all iterations are run.

To get statistical significance the model is run thousands of times; with 1000 iterations, 800 outcomes will be equal to or less than the 80% value.

I am not sure you can set the number of iterations to one, but if you can, then you will be able to see the exact outcome of each 1-iteration run. Save the values for each run. Do this a thousand times. Manually generate the cumulative probability curve, and from it you will get the 80% value. Now go to all the saved runs below the 80% value (in this case there will be 800 hits) and take a look at each one, or maybe only at those few outcomes close to the 80% value.
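Building the cumulative probability (S) curve by hand from saved single-iteration outcomes could look like the following Python sketch (the outcome values are synthetic stand-ins for your saved runs):

```python
import random

random.seed(11)

# Stand-in for 1000 saved 1-iteration runs: one finish date per run.
saved_runs = [random.triangular(100, 160, 115) for _ in range(1000)]

# Sort the outcomes; the cumulative probability of the k-th smallest
# value is k / N, which traces out the S curve point by point.
outcomes = sorted(saved_runs)
s_curve = [(finish, (k + 1) / len(outcomes))
           for k, finish in enumerate(outcomes)]

p80 = outcomes[int(0.8 * len(outcomes)) - 1]  # the 800th of 1000 values
below = sum(1 for f in saved_runs if f <= p80)
print(f"P80 = {p80:.1f}; {below} of {len(saved_runs)} runs at or below it")
```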

This can be valid or not, depending on how the computer generates the values, as some computer-generated random number algorithms depend on a seed value, just like calculators from the '70s. I suggest that after you generate your S curve you compare it to the resulting S curve from an MC simulation set to 1000 iterations.
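The seed point is easy to demonstrate: two pseudo-random streams started from the same seed produce identical iteration values, which is what makes a simulation run replayable. A minimal Python sketch:

```python
import random

# Two generators seeded identically replay the same stream of samples.
g1 = random.Random(12345)
g2 = random.Random(12345)

run1 = [g1.triangular(5, 15, 8) for _ in range(1000)]
run2 = [g2.triangular(5, 15, 8) for _ in range(1000)]

print(run1 == run2)  # True: same seed, same iteration values
```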

Hi Whoever

Why not just limit the number of iterations to the one you want?

Best regards

Mike Testro
