
An initiative to conduct resource leveling engine comparison of different tools

Evgeny Z.
Joined: 13 Jan 2008
Posts: 397
Groups: None

Dear all,

In a parallel thread I published the results of a resource leveling engine comparison, which I ran on Microsoft Project, Primavera, Asta Powerproject, and Spider Project. However, some people pointed out that my test results may be a bit “colored”, as I used Test Schedules from Vladimir Liberzon (Spider Project). Since Spider came out by far the best in my testing, Lord mentioned that some other manufacturer may be able to come up with a specially crafted schedule where their tool would be better than those of the competition.

Therefore I had the idea of running a small project to compare the resource leveling engines of different tools in a more structured and transparent way.

However, to do this we first need to define the rules of the game. I have therefore drafted an initial version of a document, which I called “Tests description and requirements for test schedules for the automatic resource leveling functionality comparison test”.

Therefore I am kindly asking everybody who is interested to do the following:

  • Please comment if you think this is a worthwhile initiative.
  • If your answer to the 1st question is yes, then please provide me your comments on the document below. Please note that there are people on this site who are hundreds of times more experienced than I am in this subject (I am just a relatively experienced Microsoft Project user). Therefore I would really like to have your comments.




Tests description and requirements for test schedules for the automatic resource leveling functionality comparison test

1    Introduction:

Automatic resource leveling is considered by many to be an essential feature of scheduling software. However, unlike Critical Path calculation, there is no well-defined algorithm which always produces the optimal result. Different manufacturers therefore implement resource-leveling algorithms differently, and these often produce different results for the same schedule. Moreover, a specific tool often shows better results on one schedule whilst losing to an alternative tool on another.
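To make concrete why there is no single "right" answer, here is a minimal sketch (in Python, with made-up task data) of a serial schedule-generation scheme for one renewable resource. The task network and the two priority rules are purely illustrative assumptions, not taken from any of the tools discussed; the point is only that changing the priority rule alone can change the leveled makespan:

```python
# A minimal serial schedule-generation scheme for leveling one renewable
# resource. Task data and priority rules are hypothetical; the point is
# only that the rule alone can change the leveled makespan.

def level(tasks, capacity, priority):
    """tasks: {name: (duration, units_needed, [predecessors])}
    capacity: resource units available per time unit
    priority: key function; the eligible task with the lowest key goes next
    Returns ({name: start_time}, makespan)."""
    start, finish = {}, {}
    usage = {}  # time unit -> resource units already committed
    while len(start) < len(tasks):
        # eligible = not yet scheduled, all predecessors finished
        eligible = [t for t in tasks
                    if t not in start and all(p in finish for p in tasks[t][2])]
        t = min(eligible, key=priority)  # ties broken by dict order
        dur, need, preds = tasks[t]
        s = max((finish[p] for p in preds), default=0)
        # slide right until the task fits within capacity for its whole duration
        while any(usage.get(u, 0) + need > capacity for u in range(s, s + dur)):
            s += 1
        for u in range(s, s + dur):
            usage[u] = usage.get(u, 0) + need
        start[t], finish[t] = s, s + dur
    return start, max(finish.values())

tasks = {
    "A": (1, 1, []),        # duration 1, needs 1 of the 2 available units
    "B": (3, 2, ["A"]),
    "C": (3, 1, []),
    "D": (1, 1, []),
}
_, m_short = level(tasks, 2, priority=lambda t: tasks[t][0])   # shortest first
_, m_long = level(tasks, 2, priority=lambda t: -tasks[t][0])   # longest first
print(m_short, m_long)  # -> 7 6: "longest first" wins on this data
```

On this particular network "longest first" produces a shorter leveled schedule than "shortest first", while on other networks the opposite holds, which is exactly why a structured comparison needs many agreed test schedules rather than one.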

As a result of several discussions on the world’s largest planning community site ( ), an initiative came up to run a small project to compare the resource leveling algorithms of different scheduling tools with each other.

2    Purpose of this document

The idea is that selected tools will be tested on a set of Test Schedules, and the outcomes will then be published and analyzed. As a result, both users and manufacturers will be able to see the strong and weak points of the resource leveling of different scheduling tools in specific situations. Users will thus be able to make an informed decision about using a specific tool; manufacturers will be able to see the strengths of their software as well as points for improvement.
The plan is, on the one hand, to make the tests as unbiased as possible and, on the other, to allow different manufacturers to show the strong points of their tools against the competition by coming up with their own version of a test schedule (provided it meets the requirements agreed in this document).

This document will be used to:

  • collect and select Test Schedules
  • compare the results of different tools run on the Test Schedules

3    Scope of this document

  • Description of the tests to be run on the “Test Schedules”
  • Requirements for Test Schedules

4    Tests


4.1    Shortest resource leveled schedule

  1. A Test Schedule will be loaded into a tool and resource leveling on all resources will be performed. The shorter the resulting schedule, the better the result.
  2. No priorities (e.g. task ID) will be used. If a tool has any default priorities, they will be deactivated.
  3. Splitting of tasks (if available in a tool) will be disabled.
  4. No task rearrangements will be done. If tasks are to be rearranged, this will count as a new Test Schedule.

5    Requirements for Test Schedules


5.1    Generic  requirements

GR001 Test schedules shall target only the automatic resource leveling functionality, and not any other functionality which may or may not be available in the scheduling tools.

GR002 Tests and test schedules shall be easy and transparent enough to be reproduced with minimal effort.

GR003 The test schedule as well as the leveling results shall be simple enough to be understood and assessed by a human.

GR004 To allow comparison between different tools, a test schedule shall use only the most basic scheduling features, available in most (if not all) scheduling tools which have automatic resource leveling functionality. This implies that some useful resource leveling features may deliberately be excluded from testing for the sake of covering a wider variety of tools.

5.2    Specific  requirements


5.2.1 Size requirements

SRSZ001 Schedule size shall not exceed 20 tasks

/* EZ reasoning:

  • If it gets bigger, it will be difficult for a human to understand
  • 20 tasks is the limit of the trial version of Asta Powerproject */


SRSZ002 Schedule shall not have more than 7 resources to level
/* ?? I took 7 off the top of my head, so I need advice on what a reasonable number would be ?? */

5.2.2 General requirements

SRGN001 Schedule shall have only Finish to Start links

SRGN002 Schedule shall not have variable resource assignments.

SRGN003 Schedule shall have fixed resource assignments (no skills)

SRGN004 All tasks shall be fixed duration tasks.

SRGN005 Every resource, assigned to work on a task, shall be required to work from the very beginning till the very end of the task.

SRGN006 Tasks shall not be splittable.

SRGN007 Resources assigned to a task shall have the same assignment percentage throughout the whole duration of the task (no curves or time-phased assignments).

SRGN008 Schedule shall have only one calendar, applicable to all tasks and all resources.
/* EZ: this is just to keep it simple. */ 

5.2.3 Delivery requirements

SRDR001 Schedule shall be submitted in the format ??TBD??


/* EZ: what format shall we choose so as not to be biased? One could think of Microsoft Project 2007 or Microsoft Project *.XML, but I personally had problems importing an MSP XML file into Primavera. When I googled it, I found that this seems to be quite a common problem. */



Forest Peterson
Joined: 22 Mar 2011
Posts: 33

#Evgeny Z, include Vico Control - resource leveling is one of their core competencies

Rafael Davila
Joined: 1 Mar 2004
Posts: 4723


If you use the Aurora sample jobs, make sure it is something more challenging than the 8-activity job they reference in their paper. It is too easy for Spider to get the absolute minimum for this sample job; finding more than a single prioritization solution that yields the minimum was easy.

The following is one example of a solution that yields the minimum, which they know but did not show how to obtain using their software. Note that you might have to run the resource leveling several times in order for changes in float to yield better results.


No rocket science: total float is a common suspect for a prioritization that at times can yield good results, although on real, complex problems it will usually not be enough compared to optimization. If you select DRAG [ascending] you will get the same results, but this is no obvious suspect. Another logical prioritization rule you might try is resource hours.
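For readers who want to try the total-float rule by hand, here is a short sketch of computing total float with the standard CPM forward and backward passes; ascending float can then be used as the leveling priority. The five-activity network is a made-up example (Finish-to-Start links only, matching requirement SRGN001 of the draft), not taken from any tool's sample jobs:

```python
# CPM forward/backward passes to compute total float, which can then be
# used as a leveling priority (ascending float = most critical first).
# The 5-activity network is hypothetical, with Finish-to-Start links only.

def total_float(tasks):
    """tasks: {name: (duration, [predecessors])}, listed in topological order."""
    es, ef = {}, {}                      # forward pass: earliest start/finish
    for t, (dur, preds) in tasks.items():
        es[t] = max((ef[p] for p in preds), default=0)
        ef[t] = es[t] + dur
    project_end = max(ef.values())
    succs = {t: [s for s in tasks if t in tasks[s][1]] for t in tasks}
    lf, ls = {}, {}                      # backward pass: latest finish/start
    for t in reversed(list(tasks)):
        lf[t] = min((ls[s] for s in succs[t]), default=project_end)
        ls[t] = lf[t] - tasks[t][0]
    return {t: ls[t] - es[t] for t in tasks}

tasks = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
         "D": (1, ["B"]), "E": (2, ["C", "D"])}
print(total_float(tasks))  # -> {'A': 0, 'B': 1, 'C': 0, 'D': 1, 'E': 0}
```

Leveling the zero-float (critical) activities first is exactly the kind of simple rule that works well on small networks but, as noted above, will usually lose to true optimization on complex ones.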

Probably they missed how easy it can be with Spider, but they are very knowledgeable and their product must be excellent. They know that on complex problems, even if you try all possible prioritization combinations, only algorithms that look for optimization will consistently yield good results.

If you can get a readable display, or even better a file, of the longer schedule in the same paper, I would like to see it.

Also ask about the price for a single deployment. If you have a very special need, their customization should pay off many times over: not only for NASA and Boeing, but also for Massachusetts General Hospital, for which the scheduling of work shifts is very important. To me they are a true challenge.


Bernard Ertl
Joined: 20 Nov 2002
Posts: 757

Evgeny, resource leveling is not a subject that interests me, but in my travels participating in various industry conferences, I did get the chance to meet Dr. Robert Richards from Stottler Henke Associates, Inc.  He developed the resource leveling algorithm for their Aurora scheduling engine.  In conversation with him, he drew comparisons for his leveling engine against Spider.  It was clear in talking to him that he considered Spider to be his main competition from a gold standard perspective (of being the best). 

You might try contacting Rob and asking him for any test schedules where Aurora might have tested better than Spider. This, of course, may produce the same sort of bias, in having a test schedule that exploits a special condition which one software package has considered and another has not, but it may be your best opportunity to find a rival to Spider's leveling engine.

... I see that Stottler Henke is in fact offering:

"Take the Aurora Challenge! and request a free study to benchmark Aurora against your current schedules."

from their product page as linked above.

Evgeny Z.
Joined: 13 Jan 2008
Posts: 397
Groups: None


I also wish you and all participants of PlanningPlanet a happy and productive 2013.

Maybe the manufacturers will not answer, but at least we can agree on fair and reasonable rules of the game. And later anybody (not necessarily manufacturers, but maybe also enthusiasts) may come up with Test Schedules.




I don't think that you will get a response.

The test projects that I suggested were created by independent researchers in 1995.

If you want a smaller example, look at the sample project in the Spider Lite demo. It uses only two resource units and consists of 10 activities, including two milestones.

Best wishes in 2013,