I wrestled with the same issue, with the same result. My approach is to develop the ranges for both cost and schedule subjectively, present them to the group for validation, adjust, and then run the risk workshop. No real guidance here, only that every project will likely be different, either in nature (duration and cost) or in risk aversion/acceptance.
My suggestion is to question the default values and make the changes you think would reflect Very High, High, Medium, Low, or Very Low risks, or whatever rankings you think appropriate. (I do not think there is any limit on the number, but I have never tried to develop more.) Remember that the real purpose of the rankings is to drive focus to the major risks (threats/opportunities), not necessarily to develop the actual impact. That may come later with mitigation strategies.
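To make the idea of replacing the tool's defaults concrete, here is a minimal sketch of a project-specific impact scale. All the band thresholds below are hypothetical assumptions; the whole point, per the advice above, is that your group would adjust them in the validation session.

```python
# Hypothetical project-specific impact scale replacing tool defaults.
# Every threshold below is an assumption to be validated in the workshop.
IMPACT_SCALE = {
    "Very Low":  {"schedule_days": (0, 2),   "cost_pct": (0.0, 0.5)},
    "Low":       {"schedule_days": (2, 5),   "cost_pct": (0.5, 2.0)},
    "Medium":    {"schedule_days": (5, 15),  "cost_pct": (2.0, 5.0)},
    "High":      {"schedule_days": (15, 40), "cost_pct": (5.0, 10.0)},
    "Very High": {"schedule_days": (40, 90), "cost_pct": (10.0, 25.0)},
}

def rank(schedule_days):
    """Return the first rank whose schedule band contains the value."""
    for name, bands in IMPACT_SCALE.items():
        lo, hi = bands["schedule_days"]
        if lo <= schedule_days < hi:
            return name
    return "Very High"  # anything beyond the top band
```

Because the rankings only exist to drive focus, a coarse lookup like this is enough; precise impact figures can wait for the mitigation stage.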
Wish I could be more helpful. I would be interested in your eventual solution/approach.
PS: Consider using Liquidated Damages as a metric for determining cost and schedule impact. I have no experience with this myself, but it seems like a reasonable approach.
Submitted by Rafael Davila on Tue, 2011-11-15 14:54
If you are referring to individual risk events related to time and activity duration, I suggest looking at the Criticality Index and defining your own scale. Keep in mind that if two activities have the same Criticality Index, say a 50% chance of falling on the critical path, it does not mean their impact on the overall schedule will be the same.
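The Criticality Index is simply the fraction of Monte Carlo iterations in which an activity lands on the critical path. A minimal sketch, assuming a toy two-branch network with made-up triangular duration ranges:

```python
import random

random.seed(7)  # fixed seed so the demo is repeatable

# Toy network: Start -> A -> C -> End in parallel with Start -> B -> C -> End.
# Whichever of A or B runs longer drives the finish date that iteration.
# Criticality Index (CI) = fraction of iterations an activity is critical.
# All duration ranges below are illustrative assumptions.

def sample(low, likely, high):
    """Triangular sample for an activity duration."""
    return random.triangular(low, high, likely)

def criticality(n_iter=10_000):
    hits = {"A": 0, "B": 0}
    for _ in range(n_iter):
        a = sample(8, 10, 15)   # activity A: assumed range
        b = sample(7, 10, 16)   # activity B: assumed range
        hits["A" if a >= b else "B"] += 1
    return {k: v / n_iter for k, v in hits.items()}

print(criticality())
```

Note how A and B come out with roughly equal CI here, yet their duration spreads differ, which is exactly why equal CI does not imply equal schedule impact.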
If you are interested in the impact a specific event might have on the overall job, a sensitivity analysis may be in order. In that case I would run the PRA twice: once with the event scheduled without risk, and again with a risk distribution on the event. If the risk is on activity duration and the deterministic schedule shows a high float value relative to the distribution, the difference may be none; that is when the CI is zero. Otherwise it should disclose some difference between the cumulative distribution curves. The curves might look similar, but in reality they are different, shifted relative to project start.
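The two-run comparison above can be sketched as follows. The network, the 12 days of preceding work, and every duration range are illustrative assumptions, not anyone's real schedule; the point is only the mechanics of comparing the two finish-date curves.

```python
import random

random.seed(11)  # fixed seed so the demo is repeatable

def finish(event_duration):
    """One iteration: two parallel chains, take the longer."""
    other_chain = random.triangular(25, 35, 30)  # assumed competing chain
    event_chain = 12 + event_duration            # assumed preceding work
    return max(other_chain, event_chain)

def run(n, risky):
    """Model the event deterministically (14 d) or with a distribution."""
    results = []
    for _ in range(n):
        d = random.triangular(10, 25, 14) if risky else 14.0
        results.append(finish(d))
    results.sort()
    return results

base = run(5000, risky=False)
risked = run(5000, risky=True)
p80 = lambda xs: xs[int(0.8 * len(xs))]
print("P80 finish without risk:", round(p80(base), 1))
print("P80 finish with risk:   ", round(p80(risked), 1))
```

If the event carried enough float, the two curves would nearly coincide (the CI-zero case); here the risked curve sits to the right of the baseline, which is the shift relative to project start described above.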
Agree with the feedback provided.