Please be aware that I am trying to outline a framework for approaching the assessment of priority. The criteria I use are only examples to illustrate how this system might work. There will need to be group input to define the number of criteria and the definition and scoring for each criterion.
This framework is a criteria-based scoring process that will place requests on a value continuum (e.g. -220 to 225), with the highest priority given to the request with the highest score. It therefore adds a quantitative rather than purely qualitative approach to request assessment. These scores can be grouped into three priority bands if desired, e.g. -220 to 0 lowest, 1 to 150 middle and 151 to 225 highest. The scoring system will further prioritise within these bands.
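The banding described above can be sketched in a few lines. This is an illustration only: the boundaries used are the example ranges from this document (-220 to 0 lowest, 1 to 150 middle, 151 to 225 highest) and would need to be confirmed by the group.

```python
def priority_band(score: int) -> str:
    """Map a final score on the example continuum to a priority band.

    Band boundaries are the example values from the text, not agreed figures.
    """
    if score <= 0:
        return "lowest"
    elif score <= 150:
        return "middle"
    return "highest"

print(priority_band(-40))   # lowest
print(priority_band(120))   # middle
print(priority_band(200))   # highest
```

Because the underlying scores are retained, requests within the same band remain ranked relative to one another.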
This approach should allow the development of a process that will be:
Transparent – anyone can understand how the framework functions
Reproducible – any two individuals, with appropriate knowledge, should obtain the same or similar scores
Flexible – the scoring system can be easily modified
Furthermore, the framework will allow re-assessment of priority over time. For instance, a single Member request may start at a relatively low score; however, if over a period of time multiple Members make the same request, the score (and priority) will increase.
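One possible mechanism for this re-assessment is sketched below. The increment rule (each duplicate Member request adding the example Member-request value of 40 again) is an assumption for illustration only; the document does not specify how the score should grow.

```python
from collections import Counter

# Example value from this document's benefit table; not an agreed figure.
MEMBER_REQUEST_VALUE = 40

request_counts = Counter()

def record_member_request(request_id: str) -> int:
    """Record one more Member making this request and return its
    updated benefit score (count * per-request value, by assumption)."""
    request_counts[request_id] += 1
    return request_counts[request_id] * MEMBER_REQUEST_VALUE

record_member_request("update-topic-x")         # returns 40
record_member_request("update-topic-x")         # returns 80
print(record_member_request("update-topic-x"))  # 120
```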
The framework will assess both the benefit resulting from individual requests and the effort required to reach this benefit.
This area is split into major and minor criteria: the major criteria relate to high-level issues, with the minor criteria as sub-divisions within each. All criteria carry a numeric value, with the highest value reflecting the greatest benefit. For instance, the major criteria might be (examples only):
Error in existing content
Member request
SIRS request
Each of these major criteria has a range of minor criteria.
All these major and minor criteria will be assigned a value (see Figure 1) which, when summed, will provide the 'benefit score'.
However, to reach a final score it is also important to assess the effort required to achieve the assessed benefit.
While a significant benefit may accrue from a given request, the resources or effort required to reach this benefit may render the request unreasonable. Another value set is therefore necessary to assess the effort requirement and subsequent final ranking of a request's priority.
Rather than the major and minor criteria seen in the benefit assessment, this approach has only one set of criteria with associated values. Notably, some of the effort values can be negative (see Figure 1), allowing recognition that a high-benefit request may in fact not be reasonably achievable given the resourcing required (again, the following are examples only):
Figure 1. Example criteria and values (blank cells are sub-criteria still to be defined by the group).

Criteria | Sub-value | Value
--- | --- | ---
Error in existing content | | 50
| 25 | 75
| 20 | 70
| 5 | 55
| 0 | 50
Member request | | 40
| 25 | 65
| 15 | 55
| 0 | 40
SIRS request | | 25
| 25 | 50
| 15 | 40
| 10 | 35
| 5 | 30
Ease of Work | |
| | 50
| | 40
| | 10
| | 0
| | -100
Time requirement | |
| | 50
| | 20
| | 0
| | -50
| | -100
Subject matter expertise requirement | |
| | 50
| | 40
| | 10
| | 0
| | -50
Next Steps
If the sub-group feels that the framework is a useful way to proceed with the assessment of content tracker priorities, then next steps include: