
Considerations:

  • What are the criteria?
  • Is there an order the criteria should be applied in? 
    • Weighting? Spider graph?
  • How many is too many?
  • Comments on being able to review priorities

Draft list of criteria (not in order and under development)

  • Issues impacted by Content request submissions
    • Consider if there is a cut-off, e.g. if a tracker issue is only impacted by 4 SIRS requests, should it still have top priority?
  • Foundational hierarchies
  • Support Member priorities
  • High usage areas e.g. very common disorders
    • How's that determined? 

11 Comments

  1. One way to categorize criteria would be internal (to the terminology, examples include foundational hierarchies, facilitate modeling, increase consistency, ...) and external (to implementation projects, examples include SIRS, members priorities (by vote??), usage). Do we in any way prioritize criteria which can be measured vs. those which are based on opinion?

  2. From group call: Weighting of the criteria needs to be considered. A spider graph may be a tool to support this.

  3. Comments by Linda Parisien from Content Tracker prioritisation criteria Project 

    Hi Cathy and Elaine,

    Thank you for providing us with the two documents. This is helpful to set the mindset, although it contains a lot of information...

    I think as a starting point, some items should be more defined such as: 

    • what is a minor change vs a major change
    • when is the fast track process used over the 'regular' one
    • what are the outcomes we'd want to measure:
      • process simpler content sooner in the workflow?
      • increase the number of RFCs resolved using the fast track process?
      • decrease the number of unassigned RFCs on the tracker?
      • complete more RFCs in a cycle?

    While looking at the Content tracker, I had a hard time relating the 'Component' items to the 'Unresolved: By Priority' information on this page: /projects/IHTSDO/summary/statistics

    First, to me the Member Forum Priorities defined on a yearly basis are the top-1 priority; then, if a request is being made for an imminent implementation, that would be a top-2 priority, followed by QA issues (top-3 priority) and other RFCs (top-4 priority). Of course there are more complex use cases that could potentially modify the priority, such as:

    • When a request for the same content is made by more than one country, this should increase the priority
    • Any content that went through the approval for a concept model change or editorial change and SME guidance, and that meets the above criteria, should be added to the worklists and prioritised appropriately. This would require an agile process to update the tracker ASAP and get work started on the RFCs

    Then there is the level of effort required to achieve the changes to X number of concepts. This information will be useful to determine when and if the changes can be achieved within a time frame. That information, coupled with the priority, would perhaps be easier to manage... I am not sure about the categorisation, but let's say level of effort (LE) 1 = 1-99 concepts, 2 = 100-999 concepts, 3 = 1000-4999 concepts, 4 = 5000+ concepts.
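    As a minimal sketch only, the LE bands proposed above could be expressed as a simple lookup. The band boundaries are taken from the comment; the function name and the 0-5000+ scale cut points being half-open intervals are my assumptions.

```python
# Hypothetical sketch of the proposed level-of-effort (LE) bands.
# Band boundaries come from the comment above; everything else is illustrative.
def level_of_effort(concept_count: int) -> int:
    """Map a number of affected concepts to an LE category (1-4)."""
    if concept_count < 100:
        return 1  # LE 1: 1-99 concepts
    if concept_count < 1000:
        return 2  # LE 2: 100-999 concepts
    if concept_count < 5000:
        return 3  # LE 3: 1000-4999 concepts
    return 4      # LE 4: 5000+ concepts
```

    A categorisation like this makes the effort dimension cheap to record on a tracker issue, which matters if it is to be combined with a priority ranking.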

    From the Content Project document, we have clear criteria for when a project is assigned. That information is not currently available from the tracker. I would suggest adding something to the 'Component' field, or elsewhere, that identifies what the issue requires:

    • a concept model change (CM)
    • an editorial guidelines change (EG)
    • a SME guidance (SG)
    • Alignment to editorial guidelines (AEG)

    For the Content Project we could use the same LE indicator for the level of effort required, i.e. the number of concepts that need to be changed.

  4. Type of change: 

    Minor change: small number of concepts or simple changes with low impact.

    Major change: change to the concept model; impacts a significant number of concepts; impacts users, vendors, or software.

    Fast track projects: 

    • often the minor changes
    • don't need stakeholder engagement
    • The potential to progress a project via the fast track needs to be determined by some level of analysis. What appears simple may not always be so, when an issue is explored. 

    Current Word templates for content tracker projects. Note these are under review outside of this group; they are provided here just to show the difference. The first three documents are those used during the 'regular' process; the last of the three is just the first two combined. The final document is the one used for fast track projects.

    Comments:

    • Content development process as an FYI: The Content Development Process (under construction)
    • Content tracker issues are identified via several sources. Request submissions are only one source. 
    • Content tracker issues are worked on by Consultant Terminologists and internal team members. 
    • Components and Unresolved by Priority are separate. Components are like subject tags.   
    • The development work on the Content Tracker is not complete. For example the work done in the Criteria and Guidance document is yet to be applied. 
    • Effort is dependent on the number of concepts, the complexity of the issue, the amount of stakeholder engagement required, and whether the change can be managed in an automated way or needs to be done manually. Some "large projects" (from a number-of-concepts perspective) may be managed in an automated way. 
    • Issues which relate to alignment of content to Editorial Guidance are being moved to the Quality Assurance Tracker. 
    • QA issues are a priority; perhaps extent of impact and size need to be considered in conjunction with this. 
    • Changes in foundational hierarchies e.g. |Substance|, |Body structure| impact the logical definition and placement of concepts in other hierarchies e.g. |Clinical finding|. This is due to the defining attribute relationships. Work in a foundational hierarchy may not be seen as a 'priority' from an implementation perspective but is needed, e.g. for secondary use such as decision support. 
    • Areas of content that are of high use e.g. Diabetes Mellitus, Heart Disease may already have good content coverage but issues in modelling would need to be considered as priority given the use (and secondary use) of this content
    • Member Forum priorities are managed as work items themselves. Where a content tracker issue falls into the scope of work for that item it would be managed as part of that. There may be issues outside the scope of work that may be of value to do that would support the priority more broadly.
    • Content tracker issues that are impacting SIRS requests need to be prioritised. That noted, it would be worth considering a cut-off or sliding scale. 
    • Need to make the assessment of a content tracker item against the criteria easy to do. 


  5. Hi All,

    I've expanded my thinking on a criteria framework for assessing content tracker priorities as requested. The emphasis is on the framework itself and whether this may be a workable approach rather than the actual criteria/values themselves.

    I look forward to critique of the attached.

    Cheers

    John

     

  6. Thanks John,

    some comments (or just thinking out loud):

    • agree with all of the criteria, noting that the criteria are subjective and potentially hard/expensive to assess and might need to be complemented by (measurable) objective criteria
    • in line with the earlier proposal I would, in addition to a "total score", like to add a radar/spider chart approach.
    • specializations (operationalization) of criteria
      • Ease of work → content change vs. change in modeling pattern vs. concept model change + other measurable criteria
      • Time requirement → ease of work × number of concepts
    • additional (measurable) criteria could be (mainly for existing concepts):
      • number of concepts considered
      • number of concepts using those concepts in definitions (if existing concepts, approx. otherwise)
      • ratio of primitive vs. defined concepts (if existing concepts, approx. otherwise)
      • average number of stated relationships (if existing concepts, approx. otherwise)
      • number of related JIRA tickets/SIRS requests (given that such information exists)
    • more additional measurable proxy criteria:
      • age of concepts (as a ~proxy for use, more recent concepts have had less exposure to implementers?)
      • number of changes over time (as a proxy for complexity, stakeholder engagement necessary)
    • and finally, the criteria are not disjoint, e.g. an error reported via a SIRS request. How do we aggregate? Sum? Maximum?
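
    The aggregation question above can be sketched concretely. This is illustrative only: the criterion names, weights, and 0-5 scoring scale are invented for the example, not agreed values. It shows how a weighted "total score" (sum or maximum) and the per-criterion profile that a radar/spider chart would plot can coexist in one record.

```python
# Illustrative sketch only: criteria names and weights are hypothetical,
# not agreed values. Scores are assumed to be on a 0-5 scale per criterion.
from typing import Dict

WEIGHTS: Dict[str, float] = {
    "member_priority": 3.0,          # hypothetical weight
    "sirs_requests": 2.0,
    "foundational_hierarchy": 2.0,
    "usage": 1.5,
}

def total_score(scores: Dict[str, float], aggregate: str = "sum") -> float:
    """Aggregate weighted per-criterion scores by sum or by maximum."""
    weighted = [WEIGHTS[name] * value for name, value in scores.items()]
    return max(weighted) if aggregate == "max" else sum(weighted)

# One tracker issue's per-criterion scores; this dict is also exactly
# what a radar/spider chart of the issue would plot.
issue = {"member_priority": 4, "sirs_requests": 1,
         "foundational_hierarchy": 0, "usage": 3}
```

    The choice between sum and maximum matters: a sum rewards issues that score moderately on many criteria, while a maximum surfaces issues with one dominant driver, which is part of why keeping the full radar profile alongside any single number is useful.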
  7. Thanks John.  An excellent approach to this.

    Criteria look workable and are along the same principles as the Criteria and Guidance for Initiating a SNOMED CT Content Project.  This is important as the more information we use in priority setting that we expect to be provided by the initiator the better. Sections in this document also echo Daniel's suggestions above:

    • Concepts influence a significant number of other concept definitions
    • Concepts are intermediate primitives with a large number of descendants
    • Concepts are used, or have the potential to be used, as values in a significant number of concept definitions

     The impact of a change will generally be higher if:

    • It is a high-level concept (as opposed to a low-level or leaf concept)
    • It is referenced in a large number of concept definitions
    • If it is referenced in sufficiently defined concepts (particularly in high-level concepts)
    • If it is referenced in high use concepts

    So when initiating a project we would expect the above information to be available and stated in the issue document. It is also expected that the initiator will provide information about any SIRS related request and will have looked for linked issues.  

    An additional item (from the guidance for initiating a project) which is similar to what John has included:

    • Reliance on resources - does it require only content team effort or does it need external collaboration? 

    There could also be consideration if it relates to work in the current content work plan or relates to one of the member priorities.

     

  8. Thank you John for the development of the framework. It’s a good basis for our work.

     Comments in addition to those already provided:

    • We need to use the phrase ‘JIRA issue’ or ‘Content Tracker issue’ rather than request when referring to these items of work. They are not the same thing.
    • Member requests: John Fountain, would you clarify for me what you meant by that? Request submissions? Member priorities?
    • We need to look to make the priority determination process simple so resources can be focused on developing solutions. 
      • Agree with Elaine that asking for certain information to be provided when proposing a new issue for the tracker would be of value. (Template based on the Criteria and Guidance document yet to be developed).  
      • Agree with both Daniel and Elaine- measurable criteria over subjective and there are some good suggestions provided. 
      • Keeping across implementation activities to support prioritisation: This has the potential to be time consuming and resource intensive. While important to consider, the onus would need to be on those implementing to identify implementation activities e.g. by keeping SNOMED in Action up to date, to allow them to be considered. A willingness (where the skills exist) to actively contribute to the development of the solution could also support an increase in the priority. 
    • Effort Assessment: All issues on the Content tracker would require some level of investigation; they have been added because they can’t be easily resolved. The extent of investigation varies from a few hours to months (or more) of work. 

     

  9. All,

    since these criteria could be considered screening tests for priority, the focus should be on the ease with which the criteria can be collected and determined. There should be some time left for doing actual work once the work items have been prioritized (wink)

    /Daniel

  10. Hi, Thanks John for the drafted Framework.

    As Cathy asked, I am not sure how the Member request (in the Major criteria) is different from a SIRS request. I would like to see the Member countries' priority-related content requests prioritised when that is appropriate.

    I am in agreement with Daniel and Elaine's comments above requesting measurable criteria. For the effort assessment, I wonder if the 'Ease of work' category should be called the Complexity category, where a combination of required actions would score higher for effort. This would allow us to identify things like dependencies, concept model changes, tooling changes, editorial changes, etc. It seems to me that the number of concepts in play is very important, and perhaps having a specific category to represent that information would make more sense than trying to combine it with other information that is also crucial.

    Linda

  11. Apologies for missing the meeting today - wondered where you all were when I dialled in at 9:00!  Unfortunately I had it in my NHS diary for 9 and had forgotten we said an hour earlier.  I'm sure I'll catch up.