2022-09-26 - TRAG Meeting Agenda/Minutes

Date

  • 26th September 2022  -  11:00 - 12:30 (WEST) (10:00 - 11:30 UTC)

Room: Ajuda II


  • 28th September 2022  -  09:00 - 12:30 (WEST) (08:00 - 11:30 UTC)

Room: Ajuda I


Attendees


Apologies


Objectives

  • Briefly discuss each item
  • Agree on the plan to analyse and resolve each issue, and document the Action points
  • All those with Action points assigned to them agree to complete them before the next face-to-face conference meeting

Discussion items



Subject | Owner | Notes | Action
1. Welcome! (All)

Thanks to our members for all of their help. Welcome to our observers!

INTRODUCTIONS...

We've got several topics that we've resolved and closed down

As always, we won't waste time going through them again in detail, but if you'd like to read through them they're listed below...  

I'll also run through them very quickly from a high level, and if you have any further questions/news on any of the discussions please let me know now and we can decide whether or not to re-open them...

2. Conclusion of previous Discussion topics

3. Continual Improvement to the Frequent Delivery process (All)

OCTOBER 2021 - Confirm transition is in progress, and

  • request assistance from anyone interested in helping with the validation efforts for the soft launch releases...
  • provide early visibility of the changes to the original proposal (ie):
    • The map files (ICD-O/ICD-10) will no longer be separated from the INT Edition package
    • They will also not be put onto their own separate Release Schedule.  As long as the soft launch releases continue to go well, we will aim to keep the maps up to date with each INT Release, and therefore publish them as part of the INT Edition.
  • FINAL DECISIONS:
  • PROVIDED FULL VISIBILITY OF ALL DECISIONS + FINAL STATE OF THE PLANS FOR JAN 2022 onwards.
  • No objections or queries raised, so in theory this topic can be closed down in the April 2022 TRAG meeting, unless any new issues are raised in Q1 2022...
  • This is our most important topic for this week - we're currently in the development phase for all of the dependent work, so any questions or issues must be raised immediately if they are to be taken into account for the transition!
  • Introduction + HIGH LEVEL walk through of current Proposal...
  • APRIL 2021 - Confirm requirements are now complete and sufficient to successfully make the move to Continuous Delivery...
  •  
  • October 2021:
  • We are now in the trial period, whereby we're generating and validating "soft launch" monthly releases.
  • So far the progress has been good, with the main focus on:
    • Ensuring that the automated validation scope covers as many scenarios as possible
    • Confirming that the new authoring process is working, and that the content team have the support they require in order to complete the new gateways.
    • Verify that no issues are making it through to the Release cycle (which is now only a few days long)
      • Anything that does make it through needs to be added into the automated validation suite
    • Timing the end to end process to ensure that the timings are refined enough to hit the much shorter deadlines.
    •  
  • The initial soft launch releases have been sent to various stakeholders in the community for review
    • SO FAR NO FEEDBACK!! (positive or otherwise!)
      • Please respond asap...
    • PLEASE VOLUNTEER if you believe that you can help out with these validation efforts
      • It is in your own best interests to help ensure that the quality of the monthly releases is as high as possible
        • Although we will have the ability to release fixes to issues in the next monthly release, it still means re-importing the updated release into your systems!
  •  
  • April 2022:
    • We have now published the first 2 Monthly Releases!!
      1. Thanks to everyone for all of your help and feedback over the past few years in order to get us to this point!
      2. Has anyone managed to download and/or review the monthly releases?
        1. Yes, Gabor and Guillermo - all positive feedback so far!
      3. ANY FEEDBACK?!!
        1. All good so far for Feb + March 22 releases!
      4. Any feedback on the process itself?  
        1. Guillermo is very happy with the new process + the transparency with which we conducted the transition
      5. Does it feel as if anything has changed, or is it all transparent to you as users?
    • Obviously we're still proceeding across multiple fronts with improvements, in particular to the automation of the process + the validation
      1. Any recommendations for improvements?
      2. Any requests for changes to the packages?
      3. Any requests for changes to the process?
        1. USER DOCUMENTATION
        2. Yes, several end users have contacted Guillermo to request more information on the transition to more frequent delivery (and/or more frequent updates to the dependent content of both extensions and derivatives).
        3. It would be great if we could provide people with a white paper or presentation (both from a Content and Technical perspective) on 
          1. How the transition went
          2. Benefits realised
          3. Lessons learned
          4. Risks
          5. etc
        4. This should be targeted at both the Supplier level + the end users level for max effect
        5. Any issues with the maps?
      4. Any problems with the lack of Delta files?
    •  
4a)  Release Notes

The original plan was to automate the Release Notes in order to detail every change to the Release in each Month's Release Notes. 


  • However, in order to ensure that we deliver the full necessary scope of the work to support the actual Frequent RF2 releases, we have had to defer this to phase 2. 
  • The Release Notes will therefore continue to be manual for the first few releases, which may mean they are more high level and less detailed than they would otherwise have been.
    • Anyone experienced any issues as a result of this?
  • What's everyone's opinion on the current International Edition Release Notes?
    • Are they too high level?
    • Or are they too detailed?
  • We could in theory move almost ALL of the detail to the Early Visibility page, and just leave the monthly Release Notes high level, describing which projects are being published in each monthly release?  Thoughts?
5. IPS Terminology

This is a new product being published later this year:

  • It will be a sub-ontology based on the scope of IPS (plus EML)
  • It will NOT be a formal SNOMED CT product, rather
    • a) A pure snapshot containing no historical data, just ACTIVE IPS concepts + any extra concepts REQUIRED to make the sub-ontology (+ associated Relationships)
    • b) Not validated as per usual SNOMED CT products
    • c) Not, therefore, targeted at our normal audience, but rather at those who are curious about SNOMED but who have little knowledge/experience in using it
    • d) If they then want further functionality, they will be directed to use the full version(s) of SNOMED CT
    • e) It will therefore be published on a separate website (similar to the GPS release) rather than on MLDS
  • BETA Release to be published shortly, and then
    • subsequent Production releases to be published in Q3/Q4, based on the October IPS release each year.
  • Any questions / issues?
  •  
  • In particular, please pay close attention to the distribution format, which will be:
    • "RF2-like", purely from a structural point of view
    • HOWEVER, it will NOT be a full, formal RF2 package
    • It will contain nothing but SNAPSHOT data, because it's going to be re-generated every year based on the latest data, rather than authored from release to release
    • This means NO HISTORICAL mechanism will be provided - users will need to create their own if required
    • However, this is all intentional, as this product is intended to be a small sub-ontology that can be used as a quick intro to SNOMED for those who have never used it before
      • ...and so is not intended to be a proper SNOMED release
      • ...any users wanting to do any proper analytics or use it in Production systems should be directed towards the FULL version of SNOMED CT.
  •  The BETA Release will be published in a few weeks' time, so you can also review that - if you would like to be involved in that review process please let me know now, so that I can send it direct to you once it's been published...
    • PLEASE SHARE WITH ORSI!!
6. Bespoke Delta file creation tool
SUB-DISCUSSION of More Frequent Delivery...
  • Creation of a bespoke Delta using a new tool - Delta at the International level is very simple, but at the Extension level is much more complex due to all of the dependencies, etc. This could also become more involved when we modularise...
  • Australia intended to build this as well, but it never happened because no one requested it in the end!
  • The other issue was the traditional issue of never knowing (in a machine readable way within the Delta file itself) what the Delta file is a Delta from (ie) is it a delta from the Jan 2014 release, or the July 2016 release, etc.
  • So there was a lot of discussion over whether or not they should create roll-up Deltas, or provide the service - but in the end they found that only a few people were actually using Deltas, and those were people who knew what they were doing already, and so nothing was ever required!
  • So we need to decide whether or not this is useful...
  • We also need to be wary of the fact that there are two different things to be relative to - so you can have a Delta to a release, or a Delta to a date in time, and they can be very different things.
  • Suzy has always released a delta with multiple effectiveTimes in it (due to the Edition) and no one has ever had any issues with this.
  • If we remove the Delta files completely everyone would definitely need to provide a Service to download bespoke Deltas (both International and local Extension level) - AT THE SAME TIME WE SHOULD FIX THE ISSUE OF LACK OF METADATA PROVIDED FOR WHAT THE BASELINE OF THE DELTA IS
  • For local extensions this service does get a lot more complex than for International, as they need a range of Delta dates PER MODULE, as they have a lot more going on than just the International Edition - so the service would need to be a) clever enough to correctly get the relevant dependencies from all sources, plus b) Validate that the resulting Delta is correct and valid - provide a checksum of some kind (needs to be identified).
  • SNOMED INTERNATIONAL TO CREATE A SMALL, TARGETED SURVEY TO QUESTION WHETHER OR NOT THERE WOULD BE ANY IMPACT ON ANYONE OF PROVIDING A DELTA SERVICE INSTEAD OF DELTA FILES... Everyone will happily disseminate this to their users and get responses asap...
  • Current question is no longer whether or not we still believe this to be necessary, as we're all now agreed that it is.
  • Instead, the new question is what are the specific requirements?
    • a)  Deltas to be generated from any point in time to any other point in time
    • b)  Metadata to be included somehow (to be discussed further in the Metadata Working Group) to record critical information, such as which Dates the Delta is from + to, which Modules are incorporated, etc
    • c)  Compound Deltas (including ALL changes since the relevant date, including ALL changes in the dependent release package(s), rather than just the latest state - so these are "Full file to Full file" Deltas, as we are used to) are favoured so far, however we should continue to assess any potential use cases for Atomic Deltas (effectively "Snapshot file to Snapshot file" Deltas) as we go along, in case it becomes apparent that there is a valid Business Case to ensure that the new Delta generation tool can provide either or both of these Delta file types... (see the sketch after this list)
    • d)  It needs to support the future requirements for Service Based delivery, once we transition over
    •  
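To make requirement c) concrete, here is a minimal, hypothetical sketch of a "Full file to Full file" compound Delta: it reads an RF2 Full file (tab-separated, with id and effectiveTime columns) and emits every row whose effectiveTime falls after the baseline date and up to the target date, keeping all intermediate states. The file name, dates and column handling are illustrative assumptions only, not part of any agreed tool design.

import csv

def compound_delta(full_file_path, baseline_date, target_date):
    """Return all rows of an RF2 Full file whose effectiveTime is after
    baseline_date and not after target_date (dates as YYYYMMDD strings).
    Every intermediate state is kept - the "Full file to Full file" style
    of Delta described in point c) above."""
    delta_rows = []
    with open(full_file_path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            if baseline_date < row["effectiveTime"] <= target_date:
                delta_rows.append(row)
    return delta_rows

# Hypothetical usage: a Delta of the concept Full file from the Jan 2014
# baseline up to the July 2016 release (file name illustrative only).
# rows = compound_delta("sct2_Concept_Full_INT_20160731.txt", "20140131", "20160731")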
7. Plans for the transition from Stated Relationship file to OWL refset files (* MAG crossover) (All)

This is part of the wider Drugs and Substances improvements that are currently taking place. Other than the obvious content updates, these technical changes are those likely to have the highest impact on those within our AG.

We need to discuss the plan and ensure that we have answered all of the possible questions in advance, in order that we have a workable plan with no unwanted surprises over the next few release cycles. 

As a starting point, we should discuss the following: 

1. The schedule of changes (see here: January 2020 Early Visibility Release Notices - Planned changes to upcoming SNOMED International Release packages) (ie) 

July 2018 - initial OWL refsets introduced 
Jan 2019 - included in the Release package: a) Stated Relationship file b) the partial OWL axiom refset including all description logic features that cannot be represented in the stated relationship file. 
The Extended OWL refset file will be available on demand. 
July 2019 - the stated relationship file will be replaced by the complete OWL Axiom refset file. The stated relationship file will NOT be included in the international release; however, it may still be available on request to support migration to the OWL Axiom refset. 

2. The communications required to ensure that ALL impacted parties are completely informed of the Schedule, and the changes that they may need to make in order to transition cleanly to the new format. 

3. The technical changes that we need to make to the Release package itself, in order to support the planned schedule. 

For example, when we "replace" the Stated Relationship file in July 2019, do we remove the file from the release package immediately (in Jan 2020 once everyone has had a chance to run the inactivation file through their systems), or do we take the more measured approach of inactivating all records and leaving the inactivated file in the package for, say, 2 years, and then planning to deprecate the Stated Relationship file by July 2021? 

Further, should we be deprecating the file itself at all, or can we see any other (valid) use for the Stated Relationship file (obviously not just repurposing it for a completely different use!)? 
  • Harold Solbrig to talk to Yong and others in the MAG about his proposals for future proofing against the possibility of having multiple ontologies referenced, prefixed axioms, etc.
  • Harold confirmed nothing to report
  • Some opposition to reverting back to having the OWL file on-demand for Jan 2019 - need to discuss with Kai in tomorrow's session - preference is to release both Stated Rel's + the "additional" info only in the OWL files - as with July 2018 release. Is this the current intention?
  • Done - Jan 2019 was implemented as requested - did anyone manage to use it and trial it effectively? Any feedback?
    • YES - Australia downloaded it and trialled it in their systems!
    • Worked well - however they have not got a lot of new validation to cover either the OWL format or the content itself, so these were trials to ensure that they can use it and author against it, rather than testing the actual content of the Axioms...
  • Also, has the decision already been made to NOT create a full history back to 2002 (or 2011 at least)? Sounds like most extensions will do it anyway, so maybe we should? Decision made by content team - no history to be included
  • Discussion on whether or not to go back and re-represent the content all the way back to 2002 in the new complete OWL file:
    • Pros:
      • Prevents the need for new tooling providers to create support for the old Stated Rel way of doing things
      • If the International Edition doesn't go all the way back then the Extensions are restricted from doing it either; if the International Edition does, then the Extensions have a choice.
      • Ability to go back through history and analyse previous modelling decisions (if errors come up in future), even for those authors who haven't heard of Stated Rel's because they've now been deprecated for several years.
    • Cons:
      • Cost involved in creating the pure historical view
      • If the extensions have a choice as to whether or not to go back, then interoperability could be impacted - better to enforce going back if the international edition does.
      • Need to address the issue of some implementations having both Stated Rel + OWL Axioms in the same full files going forward.
      • Uncertain use cases for most implementers
  • This discussion needs further input in order to enable us to reach an informed conclusion. The relevant internal and external stakeholders (NRC's such as Australia) will take this away and come back with the results of feasibility studies and estimates as to how long the necessary work would take to complete..... a decision must then be made well in advance of the January 2019 International Edition, in order to ensure that we agree on the correct approach before creating the initial Alpha release in November...
    • We are currently proceeding on the assumption that there was no feedback from any sources that supported the retro-fitting of the OWL Axiom files? The major con here is breaking our own regulations on tampering with history - the Stated Relationships should remain in place in order to a) accurately represent history + b) prevent the false impression that extended functionality was available via OWL Axioms before July 2019!
  • DOES ANYONE ELSE HAVE ANY OTHER CONCERNS WHATSOEVER ON THE TRANSITION PLAN TO OWL, OR IS EVERYONE NOW COMFORTABLE WITH IT? YES! All good to go...
  •  
  • We need to work with the Shared Validation working group to share as many OWL based validation assertions as possible, so that we can all effectively cover:
    • Technical validation of the OWL file structure
    • Content validation of the OWL records
    • Modelling validation post OWL
  • Having worked with OWL for a few months now, does anyone have any suggestions for new validation assertions?
    • Linda and others are confident that the MRCM validator will cover most modelling scenarios for now, but we'll need to keep extending as we go
    • Dion investigating this as part of the CSIRO RVF project - have we confirmed as yet if any extended scope is required?
    • New idea for an RVF assertion regarding the ordering of OWL records (based on first concept) with disjoints:
      • Michael Lawley suggested it (and Kai agreed) in MAG last time -
        • Can we please discuss and agree if it's worth creating?
        • Michael confirmed (20210420) that he hasn't discussed with Kai yet, but the idea is to group Axioms together in the sort order in order to make the file more human readable, and thereby enable manual validation + debugging.  As this won't have any benefit for the current drive towards automation of the validation to support the transition to Frequent Delivery, we'll leave this requirement with Michael to discuss with Kai outside of this workstream.
    • Michael/Dion confirmed that this is not currently part of the scope for the CSIRO project, as the assumption is that the classifier will highlight any critical issues in the underlying Axiom data.  - we've therefore opened the question to the entire Advisory Group plus anyone within the community - they will provide feedback either direct to me or to SNOMED International.
    •  
  • Any ideas to be fed back into the RVF Improvement Project that CSIRO are currently working on (as per the presentation earlier above) - everyone invited to feedback either direct to myself, via the new working group, or through SNOMED International.
  •  
  •  
8. Discussion of proposed format and packaging of new combined Freeset product + Proposed new Freeset format (All)

TRAG to review and provide feedback and ideas for business case(s)...
  •  Andrew Atkinson to present the current proposal, and gather feedback
  • Feedback:
    • Uncertainty on use cases - however this was mitigated by the specific messaging from SNOMED licensed users to non-licensed recipients...
    • Content
      • DICOM in particular is not representative without sub-division, PLUS actually risky with unverified attributes...
      • AAT to discuss further with Jane, etc
      • Agreed that SI are confident that DICOM will provide some use
    • Using the US PT instead of the FSN (whilst providing less exposure of the IP) prevents visibility of the hierarchy (due to lack of semantic tag) - however the reason for this is because the target users (who are NOT current SNOMED licensed users) will find more use from the PT in drop-downs, messaging, etc than the FSN...
      • Now included both!
    • Everyone happy with each subsequent release being a snapshot - so additions added but inactivations just removed - as long as we include something in the legal licence statement to state that use of all concepts that have ever been included is in perpetuity (even after they've been inactivated)
      • New requirements have suggested that we need to now include a full historical audit trail, even in the Freeset formatted file!
      • This means we've included an Active flag column to allow this to be added in future releases...
      • We don't need to do this for a few months, so we need feedback now on whether or not we think this is a good idea?
      • Any potential drawbacks?
        • None identified in Oct 2019 - but no-one has used it yet!
        • Check again in April 2020 - no NONE - go ahead!
      • This is a dependency for signing off the final version of the Release packaging conventions and File Naming Conventions item (next)
    • In addition, Members would also like a Proposal to create an additional Simple refset (full RF2) of the entire GPS freeset in order to enable active/inactive querying etc by licenced users...
        • Potential to automate the creation of this using ECL queries if we ensure all freesets are included in the refset tool..

      • Would people still see a valid business case for including an RF2 refset file in the GPS package as well?

        • OCTOBER 2019 - NOT IN THE ROOM - BUT RORY HAS BEEN ASKED FOR IT BY SEVERAL PEOPLE, SO WE NEED TO DO IT

          • This will be in line with the September 2020 GPS release.

        • Any potential drawbacks with doing this?

          • NO

        • If so, should it be part of the existing GPS release package, or a separate file released at the same time?

          • Separate, released at same time - this is because the use case is different for each file type -

            • Users who don't have SNOMED CT will use freeset format file to scope which concepts they can receive successfully

            • Users who already have SNOMED CT will use the RF2 file format to scope which concepts they can send successfully to those who aren't regular SNOMED users...

  • APRIL 2020 - Any other feedback from actually using the GPS freeset file????

    • no - everyone would just like the RF2 file version in Sept 2020 as planned...

  •  
  • GPS RF2 format package published as promised on 30/09/2020 -
    • ANYONE USED IT ALREADY??  Any feedback yet?
      • Matt Cordell is about to use it - he will send feedback once he has loaded the new RF2 file.
      • .....Matt never loaded it in the end, so no feedback other than generally that the format looks useful
      • Peter Williams + Michael Lawley confirmed the real benefit was the PHIR project - it's working well for that so far
    • PLEASE CAN YOU DOWNLOAD IT AND PROVIDE FEEDBACK ASAP (even if it's just that it "all looks good"!)....
      • Anyone used this yet??
    • +++++ feedback on the Freeset format as well... 
11. Active Discussions for October 2022


12. Welcome and thank you!

Thanks very much to our outgoing members for all of your hard work - in this cycle Orsi has moved on, to take her turn as a member of the MAG

Welcome to new members!

  •  In her place we welcome Gabor, who has worked with SNOMED for many years now - just in case you've never met him I'll let him introduce himself...
13. Member Nominations

Please let us know if anyone is interested (and who has the requisite domain knowledge and expertise) in applying for a seat on the TRAG - thanks!


14. MedDRA Production release

The first SNOMED CT MedDRA Simple Map package Production Release will be published on 30/04/2021

This will include 2 maps - full details will be included in the Release Notes.

  • Does anyone have any last minute questions/issues to raise before the Production release is published?
  • No - in which case we'll proceed as planned
  • Topic to be closed down in October 2021 TRAG meetings (after Production release) unless new issues are raised...
  •  
  • TOPIC TO BE RE-OPENED DUE TO FEEDBACK FROM THE COMMUNITY ON THE FORMAT OF THE MedDRA to SNOMED MAP FILES:
    • We should open up a new topic to review the proposal (incoming from the Implementation team) for a new format for reverse direction maps...
    •  
    • FINAL DECISIONS:
    •  
    • This has been agreed in the topic "Redesign of the Map Reference Set formats"
      • We will take this proposal to the MAG, and if ratified we will:
        • a) take the plans forward with the content team, in order to include the necessary new concepts in the January 2022 International Edition
        • b) update the RF2 spec accordingly
    •  
    • NOW we need to agree how to communicate it out to the community ahead of the impending 2022 MedDRA release...
      • Is it enough to:
        • a) send out general comms to the Release distribution list confirming the upcoming changes
        • b) + send the same comms out to those users who we know downloaded the April 2021 MedDRA release package?
      • Or do we need to do something more?
        • No, that's adequate 
    •  
    • In addition, we need to agree how to build the MedDRA package in 2022, in order to clearly show a distinction from the April 2021 release (in the old format), whilst also retaining the historical audit trail.
      • Everyone agreed that we need to produce the April 2022 MedDRA package:
        • a)  in the new format (as per "Redesign of the Map Reference Set formats")
        • b)  with the April 2021 map file(s) removed
        • c)  BUT the new format map files should contain both the new data, PLUS all the historical MedDRA data (from 2021) in the NEW FORMAT.  This means that the NEW file should look exactly as it would have done if we had actually published the original April 2021 MedDRA release in the NEW FORMAT (with all original data from 2021 + all new inactivations/changes from the latest cycle)
        •  
    • NEW RELEASE PACKAGE IN THE NEW FORMAT HAS NOW BEEN PUBLISHED IN THE 2022 PRODUCTION MEDDRA RELEASE
    • ANY FURTHER FEEDBACK??
    • HAVE WE NOW RESOLVED ALL KNOWN ISSUES AND CAN CLOSE THIS TOPIC DOWN???
15. The possibility of updating inactive content (All)

MSSP-1670

Please see the ticket above for full explanation - in brief:

  • Descriptions in the 20220731 International edition snapshot description file appear to contain ASCII Character 160 for Non-breaking space, when the character should be ASCII 32 for a standard space. ASCII Character 160 could potentially create issues with ETL processes.
  • The suggestion is that issues caused by non-printable ASCII / UNICODE / UTF-8 characters need to be covered under their own policy because simple inactivation does not resolve the issues caused by these characters in ETL and interoperability processes.
  • Unfortunately removing these characters from inactive content contravenes our current policy, which is to only update inactive content (whether this be via the AP or via a back-end fix by the tech team) where a "critical issue" has been found. The term "critical" is used specifically to clearly denote only those issues which present risks such as clinical patient-safety or legal liability, for example.  Therefore, in order for us to flag up these inactive records as a clinical safety issue, we'd need evidence of reports from users explaining how they present such a risk to their patients.  

  • Confirmed by the content team that validation for non-breaking spaces is in place already for active content, and so no improvement to validation is required.
  • From what we can tell, many of these have been in the release for years now, but we have not received any feedback that it has caused an issue thus far. This is therefore not a "critical" issue - however we'd appreciate community to confirm if there would be any issues with making the fixes directly on inactive descriptions?
  • The following Descriptions in the 20220731 International edition snapshot description file were found to contain ASCII Character 160 for Non-breaking space, when the character should be ASCII 32 for a standard space. ASCII Character 160 creates issues with many ETL processes.

  • All of these issues are in inactive descriptions:

  • ID Column Issue
    2869833013 [term] Code:160, Position:27
    2870804019 [term] Code:160, Position:27
    2871691013 [term] Code:160, Position:16
    2880511019 [term] Code:160, Position:21
    2880958016 [term] Code:160, Position:118|Code:160, Position:149
    2881152012 [term] Code:160, Position:118|Code:160, Position:124
    2882107012 [term] Code:160, Position:118|Code:160, Position:149
    2882999016 [term] Code:160, Position:118|Code:160, Position:124
    2884068015 [term] Code:160, Position:21
    3030804017 [term] Code:160, Position:30
    3030901012 [term] Code:160, Position:30

  • The suggestion is that "issues caused by non-printable ASCII / UNICODE / UTF-8 characters need to be covered under their own policy because simple inactivation does not resolve the issues caused by these characters in ETL and interoperability processes. Given the amount of inactive SNOMED content present in the data stream, it would be best if these characters could be removed entirely from even inactive descriptions. While working in healthcare implementations, the presence of ASCII Character 160 (non-breaking space) in the LOINC descriptions broke the entire ETL process between the data warehouse and Research databases and required me to jump through some programming hoops to remove these characters from the LOINC descriptions."
  • Whilst we appreciate the issue that these characters might have on ETL processes,  from a content team perspective, this is not a critical issue.  All of the current issues are related to inactive descriptions and validations are in place to prevent this from occurring in the future. 

  • SI are always reluctant to change SNOMED CT history. However, there are situations where we have had to do that in the past. We are therefore bringing this to the TRAG for consideration...
  • A couple of people thought it might be easier to update the inactive content rather than getting repeated complaints over the years - however the vast majority disagreed, and thought that not only was it a waste of valuable resource to update inactive content, but more importantly actually contravened the spec at this level!  This is because the INT Edition specifies itself as a UTF-8 format, and the ASCII 160 characters are UTF-8 compliant!  Therefore where would we stop once we start excluding certain UTF-8 characters from the INT Edition? 
  • Instead, it should be the responsibility of the end implementations to exclude any characters that conflict with their ETL routines/programs (a minimal illustrative cleanup step is sketched below).
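As a practical illustration of the agreed position above (that end implementations handle such characters in their own ETL step rather than SI editing inactive content), here is a minimal, hypothetical cleanup sketch that replaces non-breaking spaces (U+00A0, the "ASCII 160" character discussed above) with ordinary spaces in the term column of a description file. The file paths and the assumption that the column is called "term" are illustrative only.

def normalise_terms(in_path, out_path):
    """Replace non-breaking spaces (U+00A0) with ordinary spaces in the
    'term' column of an RF2 description file. The published release stays
    untouched - the cleanup happens in the consumer's own ETL pipeline."""
    with open(in_path, newline="", encoding="utf-8") as src, \
         open(out_path, "w", newline="", encoding="utf-8") as dst:
        header = src.readline()
        dst.write(header)
        term_index = header.rstrip("\r\n").split("\t").index("term")
        for line in src:
            fields = line.rstrip("\r\n").split("\t")
            fields[term_index] = fields[term_index].replace("\u00a0", " ")
            dst.write("\t".join(fields) + "\n")

# Hypothetical usage:
# normalise_terms("sct2_Description_Snapshot-en_INT_20220731.txt", "descriptions_clean.txt")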
16. RVF improvement discussions

CSIRO have been working on improvements to the RVF, and would like to report on and discuss some of the results with us...
  • Dion to Present current status + plan...
  • Comments and feedback welcomed...
    • Plenty of feedback and so further discussions required as we move through the project...
  • The main feedback for the past few months has been the RVF failures for the new MDRS assertions, which appear at first glance to be false positives.  However, they have been proven to be valid failures, as long as you consider that the MDRS format itself is (and has always been) inherently flawed.
  • The closure of this topic is therefore dependent on the outcome of the discussions on the Proposal for a complementary file to the MDRS - the "ECRS" ("Edition Composition Reference Set")
    • If this concludes that we need to change the MDRS, then this RVF topic can be closed down.
    • If, however, we decide to retain the MDRS format, then we need to revisit these RVF assertions...
  • We need to use the new planned changes to .JSON metadata file:  Update to the .JSON file metadata - addition of "Package Composition" data in order to fix the RVF assertions and remove the false positives...
    • FRI-169
  •  
  • QUESTION FOR DION/MICHAEL - CAN WE JUST REFINE THE .JSON FILE (as per the proposals here:  Update to the .JSON file metadata - addition of "Package Composition" data)  IN ORDER TO ALLOW THE MDRS ASSERTIONS TO WORK PROPERLY FOR NOW????
  • YES, but the question is how best to do this?
  • NEED TO ADD EXAMPLES OF EACH USE CASE + SAMPLE MANIFESTS - This will not only help with MDRS assertions, but also Syndication info...
17. AttributeValue field immutability in the RF2 files (All)

Just a very quick one (especially for those who were in the MAG yesterday and have already heard this!) - the immutability of the valueID field is specified as being "depends on specific use" - see here:

The MAG are all happy to change this to "mutable", and so are we - however I just wanted to give those here who weren't in the MAG a chance to raise a valid objection in case anyone can identify a really strong reason why this field shouldn't be mutable??

18. IPS Terminology Product (All)

Quick run through of the changes that we're proposing to make in the final Production release in Q4 2022, as compared to the BETA release

(ie) discussion of the feedback that we accepted and have implemented in the Production release:

  1.  INCLUSION OF THE “EML” (new Drugs refset) IN THE FEEDER FOR THIS PRODUCT FROM 2022 ONWARDS
  2. IPS Terminology URI:

*** PLEASE SEE SECTION E here for final solution:

  • Reminder that this is a SNOMED International product, but NOT a SNOMED CT product, which means it's non-conformant to many of our normal standards
    • "RF2-like", purely from a structural point of view
    • HOWEVER, it will NOT be a full, formal RF2 package
    • It will contain nothing but SNAPSHOT data, because it's going to be re-generated every year based on the latest data, rather than authored from release to release
    • This means NO HISTORICAL mechanism will be provided - users will need to create their own if required
  • Another reminder that this product is NOT for members, it's only useful for non-members (mostly those new to SNOMED)
  • Questions on any changes planned?
  • OCTOBER 2022: Any final feedback before finalise first Production release?
    • YES!!! Following feedback received:
    • a)  FHIR and others have problems with the format - they have to add extra functionality for their non-member countries to use a new format - in addition, the user base is very diverse, across members and non-members - this is because entities like HL7 are trying to support different users across these domains.  This means that whilst the intention was only ever to target non-members with this product, this hasn't been the practical reality.... THEIR REQUEST IS TO THEREFORE:
    • b) Various users therefore require the MDRS file to be included, in order for this product to be more usable across both types of users.
    • c) It has also been requested that we consider turning this product into an EDITION!  This would mean including all of the historical information and potentially other content, in order to turn it into a "mini SNOMED", however this was never the intention of the original product, and so is not likely to be accepted. 
    • d)  The problem with using the IPS FREESET is that the Freeset contains only 8000 concepts, whereas the IPS SUB-ONTOLOGY expands everything to about 16,000 concepts!! 
    • It was therefore suggested that perhaps in order to address this requirement instead, we could scale up the IPS RF2 Refset product, to include all concepts in the IPS Terminology product?  (currently there are nearly double the amount in the IPS Terminology product due to the expansion of the sub-ontology). This would then allow the users to get the historical info from the IPS RF2 Refset product instead...
    • e) Michael Lawley raised the following query: 

      I know that IPS is "NOT SNOMED" and thus maybe doesn't need an update to the SNOMED URI spec?!?, but the following is not documented in that spec and again creates cost for vendors to do custom support for IPS rather than just re-using the existing http://snomed.info/xsct/... approach – it's not really clear what the value of using /ips/... is?

  • AAT to discuss with the business and come back to everyone with potential solutions on Wednesday...
    • With respect to the MDRS and history requests, there is no appetite to include these in the IPS Terminology release format.
    • However, we have one potential compromise - how about adding the additional content from the IPS Terminology scope into the IPS RF2 Refset release?  That way you naturally get both the MDRS + History mechanism included?
      • Only drawback we can see is that the removal of parents etc in the calculation of the sub-ontology would not then be perfectly represented in the RF2 inactivations, and so we'd have to all be happy that there may be concepts that are removed each cycle because of the unusual circular mechanism involved in
        • a) firstly calculating the sub-ontology based on the original scope of the IPS Freeset, then
        • b) using that wider scope to feedback into a new Refset in the Refset tool, and finally
        • c) basing the IPS RF2 Refset Release on this new refset in the tool
        • (otherwise if we just feed it straight back into the original IPS RF2 Refset in the tool, it will grow exponentially, because the next cycle of sub-ontology calculation will start from the 16,000+ scope and then expand it again from there!)
  • So we agreed to trial a new version of the IPS Terminology format:
    • MDRS file to be added to the package
    • IPS RF2 Freeset file to be added to the package
    • Change URL spec to "xsct" as per Michael Lawley's recommendation in the TRAG meeting...
    • Extra step in calculating the subontology Snapshot FROM 2023 ONWARDS (as no history required for this first 2022 Prod Release):
      • Add in any concepts that had "previously" been in the IPS Terminology package, BUT check why they are no longer included (see the sketch after this list), either because:
        1. They're now inactive, or
        2. The modelling has changed and these concepts are no longer in the scope of the sub-ontology
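A minimal sketch of that extra carry-over check, assuming we have the concept identifiers from the previous IPS Terminology package, the identifiers in the newly calculated sub-ontology snapshot, and an active/inactive lookup from the International Edition; all names and data structures are illustrative assumptions.

def classify_dropped_concepts(previous_ids, new_snapshot_ids, active_lookup):
    """For concepts that were in the previous IPS Terminology package but are
    missing from the newly calculated sub-ontology, report whether they were
    dropped because they are now inactive (case 1 above) or because the
    modelling changed and they fell out of the sub-ontology scope (case 2)."""
    dropped = previous_ids - new_snapshot_ids
    now_inactive = {cid for cid in dropped if not active_lookup.get(cid, False)}
    out_of_scope = dropped - now_inactive
    return now_inactive, out_of_scope

# Hypothetical usage with toy identifiers:
# inactive, out_of_scope = classify_dropped_concepts(
#     previous_ids={"73211009", "22298006"},
#     new_snapshot_ids={"73211009"},
#     active_lookup={"22298006": True, "73211009": True})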
19. Proposal to change the International Monthly release dates to the 1st of the month (All)

NEW DISCUSSION...
  • EVERYONE ON BOARD!!!
  • HOWEVER, IN ORDER TO PREVENT CONFLICTS WE NEED TO ENSURE THAT WE ARE NOT JUST MOVING THE DATE TO THE 1st OF EACH MONTH, BUT ALSO THAT THE SNAPSHOT CALCULATION INCORPORATES ModuleID CONSIDERATIONS
20. Extension Management in the new world of Frequent Delivery
  • With the change in release cycle to monthly, extension management has become intractable and merits consideration for tooling enhancements or procedural change on the part of SNOMED International. 
  • Since extension modules have dependencies on one or more other modules, periodic reconciliation with their parents is a requirement if they are to support interoperation of their content. 
  • Multiple dependencies for an extension create the opportunity for parent modules with dyssynchronous release cycles, introducing further complexity.
  • Since it is seemingly unrealistic to require alignment of versioning and release schedules between independent institutions, the situation calls for tooling support that would compare modules for reconciliation and prepare a systematic step-by-step workplan for the content manager to follow, enabling expedited, systematic reconciliation that will validate and classify.  The tooling would ideally execute the workplan to guide the manager through the process.
  • Potential extended requirement for the Delta Generation Tool??
21. Continual Improvement to the Frequent Delivery process - Presentation on Frequent Delivery

In our last TRAG meetings several people confirmed that many people within the community were asking for more information on our transition to Frequent Delivery - therefore it was requested that we create a presentation on how the migration went, benefits realised, lessons learned, etc

Maria and Andrew to Present findings on our transition to Frequent Delivery:

  • Processes transitioned
    • These are the benefits of the transition so far
      1. These are the issues we faced
        a) This is how we resolved them
      2. This is how we see the future going - improvements + benefits
  • Lessons learned
  • Questions?


  • We could set up a blog post with Kelly (similar to Jim's on collaborative authoring - https://www.snomed.org/news-and-events/articles/embracing-collaborative-authoring) that would be based on this presentation...
    • Would this be helpful to people? 
  • FEEDBACK ON PRESENTATION:
    • Would be useful to add an example monthly cycle for the Authoring team in terms of dates for cut-offs, delivery, etc
    • Would be great to clarify how often the authors have to "unpromote" concepts from MAIN because of conflicts, etc?
    • Mapping - no impact from keeping up with monthly cycles - useful to specify this so people aren't guessing


22. Continual Improvement to the Frequent Delivery process (All)

Potential Improvements:

  1. USER DOCUMENTATION
  2. RELEASE NOTES
  3. DELTA GENERATION TOOL
  4. VALIDATION advances:

  5. Critical Incident Policy update
  1. USER DOCUMENTATION
    • Several end users have contacted Guillermo to request more information on the transition to more frequent delivery (and/or more frequent updates to the dependent content of both extensions and derivatives).
      1. It would be great if we could provide people with a white paper or presentation (both from a Content and Technical perspective) on 
        1. How the transition went
        2. Benefits realised
        3. Lessons learned
        4. Risks
        5. etc
    • This should be targeted at both the Supplier level + the end users level for max effect
    • Does the presentation Maria and I gave cover this?  Or could we add an area on the website?
    •  
  2. RELEASE NOTES
    • Can translators please have more detailed release notes - automated to the extent of having EACH component change listed out?
    • We need to be able to automate the generation and publishing of the Release Notes -
      1. this work is still underway, but will take quite a while for the Dev team to complete....
    • NEW REQUIREMENTS FOR SONJA:
      1. Option for detailed release notes as well as the standard level for all Releases (as per Guillermo's requirement above) - either in the RNMS (preferable) or in the Delta Generation Tool?
      2. It would be great to also link each Component change to the relevant high level change note in the official Release Notes as well (eg) the example layout below (a small illustrative sketch of this grouping follows at the end of this sub-section)
      3. Release Notes March 2023:
        1. Drugs Changes:
          1. Component 1 changed
        2. Anatomy changes:
          1. Component 1 added
          2. Component 2 removed
          3. Component 3 changed
        3. Quality Initiative:
          1. Component 1 changed
          2. Component 2 inactivated
      4. INFRA-8739 - RAISED
        1. RNMS-65 created to track development
      5.  
      6. Explicit annotations in the Release Notes so that each component change in the Delta period is linked to a specific Template
      7. INFRA-8740 - RAISED
        1. MAINT-1958 created to track development
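To illustrate the kind of automation being requested (each component change grouped under the project it came from, mirroring the example layout above), here is a small hypothetical sketch. The change records, field names and project labels are assumptions for illustration, not the actual RNMS data model.

from collections import defaultdict

def render_detailed_notes(changes):
    """Group component-level changes by project and render a simple detailed
    release notes section, in the style of the example layout above.
    Each change is a dict like {"project": ..., "component": ..., "action": ...}."""
    by_project = defaultdict(list)
    for change in changes:
        by_project[change["project"]].append(change)
    lines = []
    for project, items in sorted(by_project.items()):
        lines.append(project + ":")
        for item in items:
            lines.append("  - " + item["component"] + " " + item["action"])
    return "\n".join(lines)

# Hypothetical usage:
# print(render_detailed_notes([
#     {"project": "Drugs Changes", "component": "Component 1", "action": "changed"},
#     {"project": "Anatomy changes", "component": "Component 1", "action": "added"},
# ]))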
  3. DELTA GENERATION TOOL
    • Has anyone trialled the Delta File Generation Tool as yet?  Any feedback?
    • YES GABOR HAS TRIED IT AND FED BACK ISSUES TO PETER - PETER IS STILL WORKING ON IMPROVEMENTS SUCH AS THE Snowstorm issue whereby, if there are multiple different states for the same component within the Delta period, Snowstorm loses the history of all those changes and just spits out a (random) one-line state!  So if a concept has started active, been inactivated and then reactivated in the Delta timeframe, Snowstorm might decide to spit out active or inactive as the latest state!  It will also lose those 3 changes and only export one change!
    • UPDATE ON THIS WORK:
      • "...the problem here is more with the Terminology Servers not being able to deal with deltas that contain more than one state for the same component than it with with the DGT itself.   

      • However we added a flag to the tool (see https://github.com/IHTSDO/delta-generator-tool/releases/tag/1.2.0 ) so that you can generate output that will contain multiple effective times, but only the most recent effective time for each component.   This is a workaround that means your TS does not end up with the correct Full file representation.

      • So really the only additional work would in theory be in Snowstorm so that it can treat deltas as successive updates to the Full file and generate release branches as it needs to.  However so far this has not been raised as a requirement by the community, and so is not planned work...

    • DO WE NEED FURTHER ENHANCEMENTS, OR NO APPETITE FOR THIS?? 
      • a)  Deltas to be generated from any point in time to any other point in time
      • b)  Metadata to be included somehow (to be discussed further in the Metadata Working Group) to record critical information, such as which Dates the Delta is from + to, which Modules are incorporated, etc
      • c)  Compound Deltas (including ALL changes since the relevant date, including ALL changes in the dependent release package(s), rather than just the latest state - so these are "Full file to Full file" Deltas, as we are used to) are favoured so far, however we should continue to assess any potential use cases for Atomic Deltas (effectively "Snapshot file to Snapshot file" Deltas) as we go along, in case it becomes apparent that there is a valid Business Case to ensure that the new Delta generation tool can provide either or both of these Delta file types...
      • d)  It needs to support the future requirements for Service Based delivery, once we transition over
      •  
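For clarity, the flag described above effectively collapses a multi-state delta so that each component appears only once, with its most recent effectiveTime. A rough sketch of that collapsing step is below; this is an illustration of the behaviour, not the DGT's actual implementation.

def collapse_to_latest_state(delta_rows):
    """Keep only the row with the most recent effectiveTime for each component
    id. This mirrors the workaround described above: intermediate states within
    the delta period are discarded, so a terminology server loading the result
    will not reconstruct the full change history."""
    latest = {}
    for row in delta_rows:
        current = latest.get(row["id"])
        if current is None or row["effectiveTime"] > current["effectiveTime"]:
            latest[row["id"]] = row
    return list(latest.values())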
  4. VALIDATION advances:
    • OWL testing - anyone worked on this as yet?

    • Template validation - thoughts?

    • Implementation testing feasible? (see Implementation Load Test topic below)

    • Need to identify Modelling areas that need improving - for example where concepts have 2x parents, this is usually an indication of areas that need re-modelling (see the sketch at the end of this sub-section)
    • Need automation of the QA system itself - so some quick way to validate RVF + DROOLS Assertions, both old + especially new!
    • Whitelisting - API required?
    • Specific extensions to the automation Validation scope (eg)
      • New idea for an RVF assertion regarding the ordering of OWL records (based on first concept) with disjoints:
    •  
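As one concrete example of the modelling checks mentioned above, here is a minimal sketch that flags concepts with more than one active "Is a" (116680003) parent in an RF2 relationship file; per the note above this is usually a hint for re-modelling rather than a hard error. The field names follow the standard RF2 relationship columns; everything else is illustrative.

from collections import defaultdict

def concepts_with_multiple_parents(relationship_rows, is_a_type_id="116680003"):
    """Return a map of sourceId -> set of parent destinationIds for concepts
    that have more than one active 'Is a' parent."""
    parents = defaultdict(set)
    for row in relationship_rows:
        if row["active"] == "1" and row["typeId"] == is_a_type_id:
            parents[row["sourceId"]].add(row["destinationId"])
    return {cid: ps for cid, ps in parents.items() if len(ps) > 1}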
  5. Critical Incident Policy update
    • We need to refine the Critical Incident Policy:
      • Need to ensure categorisation is solid (as otherwise requests may be made for minor issues to be fixed immediately as "Critical Incidents" just because it impacts one institution (but not Internationally))
      • Currently Content Team critical incident policy states:
        • If it's a Clinical Risk then it has to be fixed
        • OR if it's not a Clinical Risk but impacts certain number of members etc then still Critical, etc
    • October 2022
    • What other criteria do we need to use?
    • How strict should we be with the criteria?
      • We need to balance the risk of NOT fixing an issue vs the risk of impact to a stable Release candidate from the fix...
    • As discussed yesterday, we should keep the policy flexible on the solution that will need to be implemented in order to resolve any critical incidents:
      • The best solution is simply to mark the Release in question as "Invalid" and advise all users to download the next stable Release
        • However, can we reliably contact all users who've consumed a Release, given all the possible end users who've downloaded it via NRC's as well as direct from MLDS?
      • We should only use Negative Delta files and other potentially confusing and destructive techniques if there is no other option (eg) Critical Legal Incidents.
    • Any useful lessons learned from anyone else's Critical Incident Policies?
23. NNF Generation

Essentially it is about what is considered redundant in the NNF generation, and the implications that has for ECL. At the moment, things that are more specific than statements inherited from further up the ancestry replace the more general statements in the NNF calculation, which makes sense.

However, I introduced equivalence axioms which created necessary conditions that were equivalent - not more or less specific. This resulted in the NNF calculation removing (seemingly arbitrarily) one of the two sets of conditions. This is kind of right, because they are "redundant" in the sense that you don't need both; however, they also aren't redundant in the sense that they are both necessarily true and neither is "more specific" than the other. Which is picked will affect how ECL works and is evaluated.

I'm pushing the boundaries here by having equivalence axioms with expressions on both sides, but that should be theoretically possible, and I suppose what we need to determine is, if that's to be supported, what should the NNF look like. Presumably a deterministic selection of one of the axioms, or a merged set of all the necessarily true conditions, may be more useful for ECL.

There's a related point with property chains which I can demo with ECL too, to explain and provoke discussion.
24. Frequent Delivery for Managed Service

Whilst the overall MS move to Frequent Delivery won't be made available to MS customers until after the International Edition transition, we also don't want to diverge the code bases. 

Therefore, we need to consider and include configuration items within the code to allow the MS Projects to move through the new Frequent Delivery workflow WITHOUT moving to Frequent Delivery (for example, we could just enable the basic mandatory automated SAC and nothing else?)

  • We have already had to introduce a small amount of change into the MS authoring processes, in order to ensure that the MS code base remains in line with the International code.
  • Comments and feedback welcome...
  •  
  • **** SI have now made the decision to standardise ALL of our Products in terms of the format of the packages
    1. This means that the MS packages are now being migrated over to Delta-less packages
    2. Any feedback on this?
    3. Same goes for the Derivative products - so far:
      1. GMDN
      2. MedDRA
      3. Have been migrated over - any feedback?
  • APRIL 2022 - only feedback was from Guillermo, who confirmed they are still creating extensions with Delta files - we assured him that we're not at the point of enforcing the new standards across ALL SNOMED Releases, just across all products published by SNOMED INTERNATIONAL - so he can continue to include/exclude the Delta files as required in his own extensions.
  •  
  • MORE FEEDBACK FROM USERS NOW??? NEW REQUIREMENTS???
25. Computer readable metadata (* MAG crossover) (Andrew Atkinson)

Suzy introduced the topic for discussion...


Suzy would like to raise the question of creating computer readable metadata, including questions such as whether or not to include known namespaces & modules, or just the current metadata for the files in a machine readable format.


CAN WE PLEASE REQUEST THAT PEOPLE SHOUT NOW IF THEY HAVE ANY FURTHER REQUIREMENTS FOR THE METADATA PROJECT??

WE NEED TO FORMALISE THE SCOPE IF WE'RE GOING TO BE ABLE TO ADD THIS INTO THE WORKPLAN FOR 2023 - PLEASE SUBMIT REQUIREMENTS TO ME BY 30th September OTHERWISE THEY MAY NOT MAKE IT INTO THE SCOPE OF THE FIRST PHASE OF THIS PROJECT...

Suzy Roy to provide an update on progress:

  • All agreed that whilst this is a large topic, we should start somewhere, and get at least some of the quick wins in (then request the change to content via the CMAG):
  1. Check where the progress with the namespace metadata has got to - can we progress this?
  2. Code systems (and versions) of the map baselines
  3. Common strings such as boiler plate licence text etc
  4. Description of use cases for the various refsets (using the text definition of the Refset concepts themselves) - either json or markdown representation of multiple pieces of info within the same field.
  • Michael Lawley to provide an update from the related MAG topic...
  • TRAG agreed that this should be incorporated into the discussions with the continuous delivery, in order that we can plan the changes here in line with the transition to more frequent releases. To be continued over the next few months...
  • Michael Lawley to kindly provide an update on his work with David to help design and implement the solution - this will now be in the second TRAG meeting of the April 2019 conference, after they have met together....
  • Ideas:
    • Some human readable metadata could potentially live as descriptions (which can then be translated)? David to discuss further...
    • David will mock up something in Json...
  • Michael + David + Harold agreed to create a straw man to put up in the next meeting and take this further...
  • This should now be combined with the Reference set metadata topic, to address all updated metadata use cases - Human readable, Machine readable, etc
    • We need to setup a JOINT Working group to deal with this!
    • Dion kindly volunteered for this group
  • We've added a new machine-readable file to the International Edition this cycle, which can be refined for future usage:
  •  
  • Suzy Roy kindly volunteered to run a Project Group later in 2021 to refine and improve this data as needed going forward:
    • Volunteers confirmed in October 2021 TRAG meeting:
      • Dion, Mikael + Alejandro
      • + Andrew + Peter from tech team
      • + need 2 volunteers from MAG (as SMT decided we need wide range of views)
    • Suzy to setup meetings once we have MAG volunteers confirmed...
  • This will be rolled into the holistic discussions on Metadata in the new Metadata Working Group... Working Group: Refined Metadata 
    •  
    • Plus new requirements from other discussions:
      • HOWEVER, WE'RE STILL MISSING THE IDENTIFICATION OF THE ACTUAL MAP PRODUCT ITSELF, AND THE VERSION OF THAT ENTITY
        • (eg) "ICNP version Jan 2019" should exist as metadata somewhere within the ICNP map product package...
        • + possibly even the direct URI?
        • SUGGESTION IS TO USE THE JSON FILE FOR THIS - Andrew Atkinson  to take this forward in the Metadata working group...
        •  
      •  ANOTHER DISCUSSION POINT FOR THE JSON FILE:
        • Are the "DeltaToDate" and "DeltaFromDate" fields in the JSON file now misleading in the new world of Frequent Delivery where we have no Delta files in the INT package itself?!
          • "deltaFromDate" : "20210930",
            "deltaToDate" : "20211031",
        •  
        • FINAL DECISIONS:
          • Agreed that these fields should ONLY be available in packages with Delta files
          • Monthly International Releases going forward should instead just have (see the illustrative example at the end of this topic):
            • EffectiveTime
            • PreviousPublishedPackage (that the current release is based upon)
            • Any retracted releases + their replacements
      •  
    •  
    •  Examples of extending this metadata:
      • .json format 5 ?? (Please see Michael Lawley's comments on 16/04/2021 here:  Re: Working Group: Refined Metadata)
      • Namespace data
      • Individual external Refset data
      • ranges of permitted values
      • mutability, etc?
      • Package Name? (Please see Michael Lawley's comments on 20/04/2021 here: Re: Working Group: Refined Metadata: "Yes, regarding the "Name" entry, it would be ideal if it could be used to populate the "Product Name" field in a list of available packages (and other required and relevant fields for MLDS). Then the zip contents would be sufficient to automatically populate MLDS (or an ATOM-based Syndication feed).")
      • WE'RE STILL MISSING THE IDENTIFICATION OF THE ACTUAL MAP PRODUCT ITSELF, AND THE VERSION OF THAT ENTITY
        • (eg) "ICNP version Jan 2019" should exist as metadata somewhere within the ICNP map product package...
        • + possibly even the direct URI?
        • SUGGESTION IS TO USE THE JSON FILE FOR THIS - group to provide examples of how this would look to the TRAG for review...
      •  
      • ANYTHING TO SUPPORT FREQUENT DELIVERY USEFULLY???? 
    • Also create 2 new pages -
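
As a rough illustration of the final decisions above, the monthly package metadata could look something like the sketch below (written as Python dicts for readability; all field names other than deltaFromDate/deltaToDate are assumptions for discussion, not the agreed schema):

    import json

    # Hypothetical shape only - field names are assumptions, not the confirmed .json schema
    monthly_release_metadata = {
        "effectiveTime": "20220228",
        # the previously published package that this monthly release is based upon
        "previousPublishedPackage": "SnomedCT_InternationalRF2_PRODUCTION_20220131T120000Z.zip",
        # any retracted releases plus their replacements
        "retractedReleases": [
            {"retracted": "20211130", "replacedBy": "20211215"},
        ],
    }

    # deltaFromDate / deltaToDate would ONLY appear in packages that actually contain Delta files
    delta_package_metadata = {
        "deltaFromDate": "20210930",
        "deltaToDate": "20211031",
    }

    print(json.dumps(monthly_release_metadata, indent=2))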


26Refset containing the semantic tags?

This topic was closed down by the TRAG a few years ago due to the lack of requirements vs the complexity of finding a robust solution.

However, new requirements and a potential solution from Ed Cheetham have now been submitted for our review and discussion - please see here for details: Refset containing the semantic tags?

We will discuss in detail in the next TRAG meeting in April, however please feel free to contribute to the online discussion in the above link in the meantime.

  • Slide deck here for advanced review:
  • Feedback from the group:
    • Excellent identification of issues that need addressing
      • The first target should be to discuss the application of the Validation that Ed has kindly brought to us, both in the AP + Release validation stages.
      • The second aim is to bring the discussions on the potential Formalisation of the Semantic tags to the relevant AG's for further consideration (a minimal tag-extraction sketch follows below)
  • Yong has kindly agreed to add this to the agenda for the next MAG meeting, to be discussed further.....
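
For reference when reviewing the proposal, the semantic tag is conventionally the parenthesised suffix of the FSN, so a minimal extraction sketch (assuming well-formed FSNs) is simply:

    import re

    def semantic_tag(fsn):
        """Return the parenthesised suffix of an FSN (the semantic tag), or None if absent."""
        match = re.search(r"\(([^()]+)\)\s*$", fsn)
        return match.group(1) if match else None

    assert semantic_tag("Myocardial infarction (disorder)") == "disorder"
    assert semantic_tag("Term without a tag") is None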
27Dependent Releases for Derivative ProductsAll

To be discussed in April 2022, in time to make a final decision before the 2022 Derivative cycle begins shortly afterwards...

  • The original intention of more frequent delivery was to continue using the January + July Releases as the dependent INT Editions for all derivatives. This was to ease users through the transition, by allowing them to continue using the Jan + July releases indefinitely, rather than moving to individual monthly releases.

  • However, this causes a potential conflict with the July Derivative cycle. If we don't start the (many) feeder derivatives for the September GPS product until 1st August (instead of 1st July as we did last year, when we had the luxury of cutting off the July 21 editing cycle in May 21), we not only reduce the amount of time that everyone has to migrate the refsets and get external reviews completed, but more importantly we clash with the European holiday season in terms of getting reviews signed off by the key external stakeholders, who are often away during August.

  • We are in the process of discussing this with the relevant stakeholders, to see if they will be available in August 2022, but if not we are wondering if it would be acceptable for the 2022 Derivatives (including GPS) to be based on the May or June 22 release (instead of July 22)? Whilst this may initially seem inconvenient, it would have the benefit of increasing the quality of these derivative products by allowing thorough internal + external reviews before publishing.
  •  Feedback?
  •  
  • As Matt mentioned, another option is to try to feed the derivative authoring process with monthly updates, thus reducing the necessary workload in the final Release cycle. 
    • However, in order to have the desired effect, this would also require us to not only author new changes more frequently, but also to migrate each derivative multiple times per year, in line with each monthly release.
    • Whilst this could resolve the time crunch in August, it would necessarily introduce an additional overhead to the workload of the authors throughout the year,
      • ...as even though they'd technically migrate the same number of concepts over 6 monthly migrations as they would in one large migration per 6 months, the process is cumbersome enough to have an impact on capacity
      • ...this could (in theory) have a slightly positive effect, however, as doing the migration every month instead of every 6 or 12 months would mean that authors get to know the process more intimately!
      • We need to discuss with WCI to ensure that the tool would support this however...
        • ...for example, we'd almost certainly need a new Delta generation process in the Refset tool, in order to enable it to provide roll-up Delta files covering the past 6 or 12 months' worth of migrations in one file (see the sketch after this list)...
  • The vast majority of the group are in favour of retaining the Jan + July releases as the dependent releases for all derivatives, mostly because of the comms that we sent out confirming that most users won't be impacted by the Monthly Releases if they don't want to be, as they can continue to use only the Jan/July releases for the foreseeable future.
  • This is especially true for NRC's like Sweden, who Mikael says are using quite a few derivatives to package up different products for their users, and so having a conflict between the dependent releases of their extensions and those of the derivatives would be very unhelpful for them
  • We need, therefore, to explore different options, such as 
    • a) Updating the refsets monthly (though this is confirmed as an overhead for the team by Maria)
    • b) Removing the review stage for all derivatives (except those which are brand new), as most feedback on BAU derivatives finds nothing of use nowadays...
    • c) Postponing the final delivery of the refsets impacted by the lack of reviewers in July/August to, say, November, so the reviews can take place in September and work can continue after that.  This is probably the most popular option in the group, but then not many people in the group are dependent on the derivative releases...
    •  
  • REVIEW AGAIN IN 2023 TO SEE IF THE APPETITE TO REFINE THIS IS NOW THERE, BASED ON USERS' EXPERIENCES OF BASING EXTENSIONS ON DIFFERENT MONTHS, ETC??
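
To illustrate the roll-up Delta idea mentioned above, a minimal sketch could look as follows (assuming standard RF2 tab-delimited files where the first column is the id and the second the effectiveTime; this is an illustration only, not the Refset tool's actual implementation):

    import csv
    from pathlib import Path

    def rollup_delta(full_file, baseline, target, out_file):
        """From a Full RF2 file, keep the latest state of each component whose
        effectiveTime falls after the baseline release and up to the target release."""
        latest = {}
        with Path(full_file).open(newline="", encoding="utf-8") as fh:
            reader = csv.reader(fh, delimiter="\t")
            header = next(reader)
            for row in reader:
                component_id, effective_time = row[0], row[1]
                if baseline < effective_time <= target:
                    if component_id not in latest or latest[component_id][1] < effective_time:
                        latest[component_id] = row
        with Path(out_file).open("w", newline="", encoding="utf-8") as fh:
            writer = csv.writer(fh, delimiter="\t", lineterminator="\r\n")
            writer.writerow(header)
            writer.writerows(latest.values())

    # e.g. rollup_delta("der2_Refset_SimpleFull_INT_20220131.txt", "20210731", "20220131", "rollup_delta.txt")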
28NEW DEPRECATION PROCESS!
Link here (if JMI completed in time - if not push this to 2023)
  • 2022
  • We will shortly be refining the deprecation process for SNOMED CT Products, especially derivatives such as Nursing Activities + Nursing Health Issues.
  • If you have any pre-emptive ideas of how we can improve this process, please let us know now, as this is the time when we can most easily influence the final solution.
  • For example:
    • Communication improvements?
      • Communicate out as far and wide as possible...
    • Changes to the way we leave (or don't leave) the deprecated packages in MLDS?
      • Some suggested leaving on MLDS for a short period (1 year?) then removing to keep MLDS clean
      • Most others prefer to ALWAYS leave the latest (in this case Final Deprecated) version on MLDS permanently, so people know
        • HOWEVER, this should be accompanied by clear labelling on MLDS to state
          • a) That the product is deprecated
          • b) the reason for deprecation (no longer used vs INVALID vs Dangerous, etc!)
          • c) And keep the packages in a separate folder in MLDS marked "Deprecated" to make it very clear to only use them if you know what you're doing
      •  
    • Changes to the way in which we deprecate the RF2 records?  (inactivation, just leave them active but static, etc)
      • This should be optional depending on the Reason for Deprecation (ie)
        • a) If it is just no longer being maintained, then everything should remain Active, with a note clearly stating that there is no longer ACTIVE MAINTENANCE being done on this Product, and so it should be used with caution as it is definitely out of date
        • b) If the content is "WRONG" or "UNSAFE" then it should be inactivated and flagged as Unsafe for use
        • c)  etc 
    • Changes to the way we deal with the metadata? (inactivating refsetDescriptor records, module records, etc?)
      • Metadata can never be "unsafe" in and of itself, and so refsetDescriptor and Module records should always remain active in all cases
      •  
  •  
  • 2023
  • Confirm if everyone is happy with the new process?
  • Confirm if they are then also happy with applying the new process to all following planned deprecations:
    • 2x Nursing Refsets
    • Old FORMAT MedDRA Maps (but not entire Product)
    •  
29Redesign of the Map Reference Set formatsAll

Please find below a proposal for redesigning the map reference sets to support maps in either direction:

https://docs.google.com/document/d/14bmRaVQYI7-Kz2EPgv00muGqdO6wRrMycCPCJqp5W2s


  • This proposal was signed off, ready to take to the MAG on 20/10/2021...
    •  
    • APRIL 2022 - Review and sign off of final formats:
      • An opportunity has been identified to improve the format of the SNOMED International Map Reference Set products.  This will apply to all types of simple and complex Map Reference Sets going forward, including (but not limited to) the SNOMED CT MedDRA Simple Map package, first released back in April 2021.  

      • The existing SNOMED CT map reference sets were originally designed for maps in the direction from SNOMED CT to another code system, manifested by the use of a ‘mapTarget’ string attribute used to represent the code in the other code system.  The new and improved map reference set patterns will be introduced with a ‘mapSource’ attribute, in order to more accurately represent maps from other code systems to SNOMED CT. 

        The refined format provides users with more clarity when using maps of either direction, as well as additional map metadata representing the new refset patterns and correlation values.  Users will also benefit from clearer and more predictable naming of the map refsets, as the map reference set concepts have been reviewed and updated to follow the refined description patterns.  Please see the links below for the updated technical details including the improvements:


      • The first product to be improved using the new designs will be the SNOMED CT MedDRA Simple Map package.  After in-depth discussions with the community's expert advisory bodies, the users confirmed that their preference was to retain the historical data from April 2021.  


      • It was therefore agreed that the 2022 SNOMED CT MedDRA Map package will be published as follows:

        • …with all new 2022 content in the improved format 
        • …with all relevant historical MedDRA data (from April 2021) also in the new format
        • …with the historical April 2021 map records that were in the original format inactivated (in order to retire the relevant UUID’s) - the inactivations would likely be published a) in the new package in the new format, and b) in a separate file/package in the original format.  However this is still to be confirmed.
        • All of this means that the 2022 file will appear as if the original April 2021 MedDRA release was actually published in the new format.  Therefore, the 2022 MedDRA Release will be published as a complete, consolidated package, with all original data from 2021 plus all new inactivations/changes from the latest cycle presented in the new and improved format.
    •  
  • To be taken forward in metadata working group:
  • HOWEVER, WE'RE STILL MISSING THE IDENTIFICATION OF THE ACTUAL MAP PRODUCT ITSELF, AND THE VERSION OF THAT ENTITY
    • (eg) "ICNP version Jan 2019" should exist as metadata somewhere within the ICNP map product package...
    • + possibly even the direct URI?
    • EXAMPLES FOR MEDDRA??
    • SUGGESTION IS TO USE THE JSON FILE FOR THIS  - Unless we need to discuss now, we will take this forward in the Metadata working group...
  •  CONFIRMED THAT WE WANT THE FOLLOWING FIELDS ADDED TO THE .JSON FILE FOR RELEVANT DERIVATIVES (see the sketch at the end of this item):
    • External MapSource (or MapTarget) - (ie) If we're publishing a map from SNOMED TO GMDN then we should state that this is from 
        • SNOMED CT version Jan 2022 to 
        • GMDN Version 2019
      • If we're publishing a map from MedDRA to SNOMED CT we should state:
        • from MedDRA version 2023 to 
        • SNOMED CT Version July 2023
    • Directionality of the map - Some people would like the Directionality to be explicitly stated so that it's machine-readable, instead of just implied in the Map Package naming convention
    • (ie) If we're publishing a map from SNOMED TO MedDRA then we should state that this is 
      • Direction:  FROM SnomedCT TO MedDRA
    • (ie) If we're publishing a map from MedDRA to SNOMED CT then we should state that this is 
      • Direction:  FROM MedDRA to SNOMED CT
  •  
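
A rough sketch of how the agreed fields might be expressed in the package .json file (written as Python dicts for readability; the exact field names are assumptions to be confirmed by the Metadata working group):

    meddra_to_snomed_map_metadata = {
        "mapSource": {"codeSystem": "MedDRA", "version": "2023"},
        "mapTarget": {"codeSystem": "SNOMED CT", "version": "20230731"},
        "direction": "FROM MedDRA TO SNOMED CT",
    }

    snomed_to_gmdn_map_metadata = {
        "mapSource": {"codeSystem": "SNOMED CT", "version": "20220131"},
        "mapTarget": {"codeSystem": "GMDN", "version": "2019"},
        "direction": "FROM SNOMED CT TO GMDN",
    }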
30IMPROVEMENTS TO THE RELEASE FORMAT


31a) Proposed deprecation of the CTV3 Identifier simple map

Due to it coming to the end of its useful life, SNOMED International would like to propose planning for the deprecation of the CTV3 Identifier simple map (that currently resides in the RF2 International Edition package) as of the January 2020 International Edition. 

Some Member countries have already identified the reduction of the effectiveness of the product, and have already put plans in place to withdraw support for the CTV3 Identifiers from 2020 onwards. 

The TRAG therefore need to discuss whether or not there are any apparent problems with the proposed deprecation, and if so how they can be mitigated. 

We must also discuss the most effective method to pro-actively communicate out announcements to the community to warn them of the upcoming changes, in order to ensure that everyone who may still be using the Identifiers has plenty of notice in order to be able to make the necessary arrangements well in advance. 

Finally, we will need to decide on the best method for extricating it from the package, in order to ensure the smoothest transition for all parties, whilst remaining in line with the RF2 standards and best practices. 
  • AAT CHECKED THE PREVIOUS IMPLEMENTATIONS OF DEPRECATION OF BOTH ICD-9-CM and RT Identifiers, AND AS THOUGHT BOTH WERE IN THE CORE MODULE, AND REMAINED IN THE CORE MODULE IN THE STATIC PACKAGES - SO ANY ISSUES WITH DOING THIS AGAIN?
  • So the plan would be to follow the same deprecation process as we did with ICD-9-CM (ie)
    • move all of the content to a Static Package in July 2020, and inactivate all of the content
    • publish the reasons for inactivation in the historical associations
    • Release Notes similar to ICD-9 = SNOMED CT ICD-9-CM Resource Package - IHTSDO Release notes
    • CREATE A STATIC PACKAGE FOR CTV3 BASED ON THE JULY 2019 MAP FILES AND PUBLISH ON MLDS (and link through from Confluence link as well). ALSO LIFT THE CTV3 SPECIFIC DOCS FROM THE Jan 2020 RELEASE NOTES TO INCLUDE IN THE PACKAGE.
    1. Date of the files should be before the July 2020 edition (so say 1st June), in order to prevent inference of dependency on the July 2020 International edition
      1. So we set the effectiveTime of the Static package to be in between the relevant International Edition releases (eg) 1st June
      2. This is to ensure that it's clear that the dependency of the Static package will always be the previous International Edition (here Jan 2020), and not continually updated to future releases
      3. It cannot therefore have an effectiveTime of July 2020 (as we would normally expect because we're removing the records from the July 2020 Int Edition) as this would suggest a dependency on the July 2020 content which doesn't exist
      4. It also can't have an effectiveTime of Jan 2020 as we need to distinguish between the final published content which was Active in Jan 2020, and the new static package content where everything is Inactive.
    2. Inside the package should be all International Edition file structures, all empty except for the following (see the sanity-check sketch at the end of this item):
      1. Delta ComplexMap file needs to be cleared down (headers only), as no change in the content since the Jan 2020 files, so no Delta
      2. Full and Snapshot ComplexMap files exactly as they were in Jan 2020 release (including the effectiveTimes)
      3. ModuleDependency file needs to be blank, as CTV3 was in the core module (not in its own module like ICD-10 is), and therefore the dependency of the core module (and therefore the CTV3 content) on the Jan 2020 edition is already called out in the Jan 2020 ModuleDependency file, and therefore persists for the static package too.
      4. Date of all of the files inside the package should be the new date (1st June)
      5. But all effectiveTimes remain as they were in Jan 2020
      6. Leave refsetDescriptor records as they are in the International edition
      7. RELEASE Notes Should describe all of the thinking we went through when creating this package, why the moduleDependency file remains blank, and why we’ve wiped the Delta, etc (see above)
  • AND ALSO COMMS SAME AS WE DID WITH THE RT IDENTIFIER REFSET DEPRECATION:
    • RT Identifier Refset deprecation:

      We need additional comms around the July 2017 release, in addition to the usual Release Notes wording, in order to confirm what is happening and the rationale behind it.

      To re-iterate what was discussed on the previous call, Legal counsel confirmed that from a legal perspective, he doesn’t consider that it’s either necessary (or even advisable) for us to send CAP any further communications on this matter.  Legal counsel is confident that the informal discussions that we’ve already had with them (in order to remind them about what they need to do), are sufficient to cover our legal obligations, given that the licence is theirs and not SNOMED International's.  Therefore we no longer need to send a formal letter to CAP.

  • Has anyone identified any issues with the proposed deprecation?

    • If so what?

  • Is everyone still in favour of the refined process to use to deprecate??

  • If all good then Andrew Atkinson to begin formal deprecation process
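
For reference, a minimal sanity-check sketch of the static package rules described in the steps above (assuming standard RF2 tab-delimited files where the second column is the effectiveTime; the file names in the example are illustrative only, not the actual tooling):

    import csv
    from pathlib import Path

    def check_static_ctv3_package(delta_file, snapshot_file):
        """Minimal checks: the Delta should be header-only, and the Snapshot
        effectiveTimes should be unchanged (i.e. no later than the Jan 2020 release)."""
        with Path(delta_file).open(newline="", encoding="utf-8") as fh:
            rows = list(csv.reader(fh, delimiter="\t"))
        assert len(rows) == 1, "Delta file should contain the header row only"

        with Path(snapshot_file).open(newline="", encoding="utf-8") as fh:
            reader = csv.reader(fh, delimiter="\t")
            next(reader)  # skip header
            for row in reader:
                assert row[1] <= "20200131", "Unexpected effectiveTime %s in static snapshot" % row[1]

    # e.g. check_static_ctv3_package("der2_sRefset_SimpleMapDelta_INT_20200601.txt",
    #                                "der2_sRefset_SimpleMapSnapshot_INT_20200601.txt")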

32b) Proposal to remove the Stated Relationship file completely

Link to proposal

Inactivated all records in July 2019 - long enough now for everyone to be on Axioms?

  • Thoughts?
33c) Proposal to remove the Identifier file completely

Link to proposal

Has been completely empty for over 8 years now - what's the point in retaining it?

  • Thoughts?
34Proposal for a complementary file to the MDRS - the "ECRS" ("Edition Composition Reference Set")

The TRAG had discussions a couple of years ago to clarify the best application of the Module Dependency Reference Set (MDRS) - some background reading is here: 

  1. Re: 4.2.1.0 Using SNOMED CT with FHIR
  2. Proposal for a complimentary file to the MDRS - the "ECRS" ("Edition Composition Reference Set")
  3. Miscellaneous Documents


Michael and Dion then walked through the proposal and answered questions, but Michael and Linda both confirmed that the use case was not a critical priority at the time, and therefore didn't need to be actively discussed until new cases were proposed...

WE THEREFORE CLOSED THE DISCUSSION DOWN AT THE TIME DUE TO A LACK OF MULTIPLE USE CASES, AND SO THIS WAS DE-PRIORITISED UNTIL SUCH TIME AS MORE USE CASES CAME TO LIGHT.

We have now identified more use cases for this proposal, as the new automated MDRS validation picks up what appear at first to be false positives, but which are actually valid failures due to the historical shortcomings of the MDRS format.

  • We therefore need to discuss and agree an approach that allows us to both express the correct moduleDependencies + the new module composition (to express which modules comprise the Edition package, for URI + validation purposes).
  • This should then be used to properly validate the MDRS and moduleDependencies within the Edition and Extension packages.
  • There was a lot of feedback on the original proposal - however in this meeting we should:
  • a) Ask Dion/Michael to walk through the proposal in person to ensure that everyone's on the same page (and remembers the original discussions)
  • b) Answer the feedback (plus any new feedback in light of new situations and/or use cases)
  • c) Agree what the final proposal should be, and what are the next steps we need to take in order to get it signed off (MAG, design authority, etc?)
  • Michael, Dion and Reuben were going to create the Australian version as an example, in order to include that in Michael's updated version of the proposal document - did this happen?
    • New proposal for representing the ECRS information in the .JSON Metadata file will be kindly brought to the table by Dion + Michael tomorrow, for further review
      • This will include an example of how the INT Edition might look...
  •  
  • As part of the discussions on this topic, we need to decide what to do about the transitivity of dependencies in the MDRS - Linda will kindly present the background and options to discuss... 
    • Initial discussion were had on 18th October 2021, leading to a provisional decision that the best course of action might be to:
      • State that transitivity is the primary method, but that
      • Explicit statement of all moduleDependencies (even though they could be inferred through their transitive inclusion) would remain an option in all cases, to be used whenever the transitive dependencies would lead to potential confusion or conflict.  For example, two different components (eg. the ICD-10 map + the IPS refset) of an Edition (eg. the Pangea Edition) could themselves be dependent on two different versions of the same product (eg. the July 2021 INT Edition + the October 2021 INT Edition respectively).  In this case the MDRS in the Edition which incorporates the modules would explicitly state the dependencies of all its constituent modules, and therefore resolve the conflict that would otherwise have arisen -
        • so in this example, the  Pangea Edition would explicitly state that both ICD-10 + IPS modules were dependent on the October 2021 INT Edition
          • NB  the curator of the Pangea Edition would first be responsible for testing and confirming that the ICD-10 maps (which were implicitly dependent on the July 2021 INT Edition rather than October) worked cleanly with the October 2021 release as well, before publishing the Pangea Edition.
    • However, the one drawback raised in response to this option was that we need a strong use case to warrant changing the RF2 spec.  So we need to decide if we're happy that the use cases in the proposal are strong enough for that (ie) 
      1. Resolving issues with pre-existing Editions that did not originally spec out the URI with this in mind
      2. Enabling more comprehensive targeted automated validation of the MDRS files
        1. This is currently not possible without resolving the transitivity question, and 
        2. The imminent transition to Frequent Delivery brings this to the forefront of our current considerations, as without the necessary breadth in the automated validation, we cannot guarantee the quality of the monthly releases.
    • FINAL DECISIONS:
      • a)  We will use the new JSON data on Package Composition to resolve the issues with the false positive results in the current MDRS RVF assertions, by having the assertions check the new JSON data to identify modules that are not explicitly called out in the package's MDRS file (because they belong to an extension or similar), or that have conflicting versions (a minimal sketch of this cross-check follows at the end of this item).
      • b)  We will use the new .JSON data to allow correct resolution of URI's
      • c)  We will NOT change the RF2 spec to move to transitive dependencies in the MDRS. 
        • 5.2.4.2 Module Dependency Reference Set - currently states 

          "Dependencies are not transitive and this means that dependencies cannot be inferred from a chain of dependencies. If module-A depends on module-B and module-B depends on module-C, the dependency of module-A on module-C must still be stated explicitly."

        • Despite this being a valid theoretical stance (as dependencies are inherently transitive), the weight of historical data across all products over many years means that introducing a new approach (whereby all dependencies are assumed to be transitive, and only stated explicitly where they would otherwise cause a problem) could result in confusion when taken in the context of all previous releases, where stated dependencies are NOT only there if there's a problem!   We will therefore continue to review this use case in future TRAG meetings, to see if the case for changing the spec becomes strong enough to warrant a change to all our products, plus a change that runs contrary to all historical releases.
      • New planned changes to .JSON metadata file:  Update to the .JSON file metadata - addition of "Package Composition" data
      • FEED INTO THE METADATA WORKING GROUP DISCUSSIONS...
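
As a rough sketch of decision (a), the assertion logic could use the new JSON Package Composition data to avoid the current false positives along the following lines (the data shapes are assumed for illustration only, not the actual RVF code):

    def missing_mdrs_modules(modules_in_package, mdrs_module_ids, composition_module_ids):
        """Modules present in the RF2 content but covered neither by the MDRS
        nor by the declared package composition are genuine failures;
        anything declared in the composition data is whitelisted."""
        return set(modules_in_package) - set(mdrs_module_ids) - set(composition_module_ids)

    # Illustrative usage (the extension module id is a placeholder):
    print(missing_mdrs_modules(
        modules_in_package={"900000000000207008", "exampleExtensionModuleId"},
        mdrs_module_ids={"900000000000207008"},
        composition_module_ids={"exampleExtensionModuleId"},
    ))  # -> set(), i.e. no false positive for the extension module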
35

Reference set metadata

* MAG crossover

Replacement of the Refset Descriptor file with a machine readable Release Package metadata file

See David's proposal here: Reference set metadata (plus sub page here: Potential New Approach to Refset Descriptors)

  • Everyone confirmed no issues with the proposal in principle, in April 2018
  • However, do we consider this to just be relevant to refsets in the International Edition release package?
    • Or to all derivative products as well?
    • Both refsets and maps?
  • Also, are we talking about only human readable descriptive information, or also machine readable metadata such as
    • ranges of permitted values
    • mutability, etc?
  • Michael Lawley to kindly provide an update on his work with David to help design and implement the solution - this will now be in the second TRAG meeting of the April 2019 conference, after they have met together....
  • Michael + David + Harold agreed to create a straw man to put up in the next meeting and take this further...
    • Michael Lawley - where are the discussion on this currently?
    •  Michael confirmed (20210420) that this straw man was never created, and so we should use the published .json file as the straw man for future discussions... 
  • Can we link this in to the .JSON file above? (Computer readable metadata) - yes, done!
  •  
  • IN FACT, are there any requirements for machine-readable or human-readable metadata that can't be addressed with extensions to the new .JSON file in the release packages?
    • No, not that people can foresee!
    •  
  • This will be therefore be rolled into the holistic discussions on Metadata in the new Metadata Working Group...
  •  
  •  
36Refset Descriptor InactivationMatt Cordell

Question here is whether or not RefsetDescriptor records themselves should remain active for retired reference sets?

TRAG to decide on correct policy and feedback to Matt...

  • The consensus so far is that we should keep the RefsetDescriptor records themselves active, which has been the precedent for all cases in RF2 history so far, with the exception of the Non-human refset which was physically removed from the International Edition package.
  • The UKTC and others have previously requested these RefsetDescriptor records to be inactivated (ISRS-112, etc) - for consistency purposes, but the corollary of this is that the refset structure itself (which the refsetDescriptor describes) remains valid and active, despite the refset itself having been inactivated.
  • TRAG TO DISCUSS AND AGREE BEST SOLUTION...
    • Then propose an addition to the TIG to provide clear guidance on this for all users...
    • AGREED:
      • Happy to leave the RefsetDescriptor Active for all normal circumstances
      • If we're removing the Refset entirely from the Extension/Edition, we should 
        • a) if it's just for space or something, then leave refsetDescriptor record in place
        • b) if it's for CRITICAL INCIDENTS ONLY (and even then only certain subsets of this - most likely only legal issues), we'll remove RefsetDescriptor completely
      •  
      • Matt Cordell  to write up and send to all of us for review.... confirmed on 20/04/2021 that Matt will write this up and present to the TRAG in future meetings
      • FINAL DECISIONS:
        • Matt Presented - no contentious points, so Matt is ready to take this proposal further...
        •  
37Implementation Load TestAll

RVF has now been open sourced to allow people to contribute towards it more easily, so that Implementation issues can be reverse engineered into the assertions. All of the NRC validation systems should remain separate, in order to ensure as great a coverage across the board as possible.

However, it makes sense to ensure the critical tests are included in all systems, in order to ensure that if, say, one NRC doesn't have the capacity to run Alpha/Beta testing for a certain release, we don't miss critical checks out. We are working on this in the Working Group, and also in the RVF Improvement program, where we are including the DROOLS rules, etc. These are also being incorporated into the front end input validation for the SCA.

TRAG to therefore discuss taking the Implementation Load test forward, including the potential to incorporate key rules from NRC validation systems into the RVF. So we should discuss the tests that are specific to the Implementation of vendor and affiliate systems, in order that we can facilitate the best baseline for the RVF when agreeing the generic testing functionality in the Working Group.


  • Matt Cordell will promote some useful new ADHA specific rules to the RVF so we can improve the scope... report back in October 2019
  • Chris Morris to do the same - get the RVF up and running and then promote any missing rules that they run locally.... report back in October 2019
  •  
  • THIS NEEDS TO BE CONSIDERED AS PART OF THE OVERARCHING Shared Validation Service PROJECT GROUP
  • Anything we can add to the Shared Validation Service going forward?
  •  
38

NEW ITEM

Versioning Templates

* MAG crossover

* EAG crossover


The EAG have proposed the need to version templates in some way, and potentially even make them "Publishable" components (with all of the relevant metadata that goes along with that). Also the potential to make them language sensitive.

They would then also need to be automatically validated themselves, as well as then being used in the automated validation of the International Edition!

  • Keep an eye on EAG + MAG discussions on this topic
  • Ensure that the decisions are fed into our Continuous Delivery proposal
  •  
  • October 2021 TRAG meeting:
    • Peter confirmed no longer being discussed in EAG or MAG
    • Instead, Linda confirmed that the templates are still being developed internally, and once the final proposal is ready they will share it with the TRAG and MAG for review+ for decisions such as how best to publish them, in what format, etc.
    • So one to revisit in October 2022...
39

Release packaging conventions and File Naming Conventions

All

TRAG to review and provide final feedback.

Reuben to provide feedback on progress of the URI specs + FHIR specs updates...

  • Document updated by Andrew Atkinson in line with the recommendations from the last meeting, and then migrated to a Confluence page here: SNOMED CT Release Configuration and Packaging Conventions
  • To be reviewed in detail by everyone, and all feedback to be discussed in the meetings. AS OF OCTOBER 2017 MOST PEOPLE STILL NEEDED TIME TO REVIEW THE DOC - Andrew Atkinson INFORMED EVERYONE THAT THIS DOCUMENT WILL BE ENFORCED AS OF THE JAN 2018 RELEASE AND THEREFORE WE NEED REVIEWS COMPLETED ASAP... so now need to check if reviews still outstanding, or if all complete and signed off??
  • AAT to add in to the Release Versioning spec that the time stamp is UTC
  • AAT to add the trailing "Z" into the Release packaging conventions to bring us in line with ISO8601
  • AAT to add new discussion point in order to completely review the actual file naming conventions. An example would be to add into the Delta/Full/Snapshot element the dependent release that the Delta is from (eg) "_Delta-20170131_" etc. AAT to discuss with Linda/David. Or we hold a zero byte file in the Delta folder containing this info as this is less intrusive to existing users. Then publish the proposal, and everyone would then take this to their relevant stakeholders for feedback before the next meeting in October. If this is ratified, we would then update the TIG accordingly.
  • AAT to add in a statement to the section 4 (Release package configuration) to state that multiple Delta's are not advised within the same package.
  • AAT to add in appendix with human readable version of the folder structure. Done - see section 7
  • IN ADDITION, we should discuss both the File Naming convention recommendations in the Requirements section (at the top of the page), PLUS Dion's suggestions further below (with the diagram included).
  • Dion McMurtrie to discuss syndication options for MLDS in October 2018 - see what they've done (using Atom) and discuss with Rory as to what we can do. Suzy would be interested in this as well from an MS perspective. UK also interested. This shouldn't hold up the publishing of the document. Discussions to continue in parallel with the creation of this document...
  • Reuben Daniels to raise a ticket to update the fhir specs accordingly
  • Reuben Daniels to talk to Linda to get URI specs updated accordingly.
  • URI Specs to be updated and aligned accordingly - Reuben Daniels to assist
  • EVERYONE TO REVIEW TONIGHT AND SIGN OFF TOMORROW
  • ONLY outstanding point from earlier discussions was Dion's point from the joint AG where he talked about nailing down the rules for derivative modules... -
  • Dion McMurtrie to discuss/agree in the October 2018 meetings - REPORT FROM DION??
  • Everyone is now happy with the current version, therefore Andrew Atkinson to publish - we can then start refining it as we use it.
  • Andrew Atkinson to therefore agree all of the relevant changes that will be required as a result of this document internally in SNOMED International, and publish the document accordingly.
  • FIRST POINT WAS THEREFORE TO have it reviewed internally by all relevant stakeholders...
    • This has been completed and signed off
  • Do we consider anything in here needs to be incorporated into the TIG?
    • or perhaps just linked through?
    • or not relevant and just separate? YES - NOT RELEVANT!!
    • the litmus test should be whether or not implementers still use the TIG, or whether people now use separate documentation instead?
      • ??????????
  • We also need to make a decision on the final Freeset distribution format(s), as I want to ensure we only have a MAXIMUM of 2 distribution formats - RF2 + the agreed new Freeset format (whatever that may be)
    • YES everyone is happy with this!
    • Add this into the Release Packaging Conventions and publish
  • APRIL 2021 - DO WE NEED TO MAKE ANY REFINEMENTS IN ORDER TO PREPARE FOR CONTINUOUS DELIVERY? Did ADHA need any formatting changes when moving to monthly?
    • No, nothing beyond the new .json file and refinements to that 
    • We really need to tackle the Delta from and to release version in the Delta file naming, and possibly package file naming. At the moment it is impossible to know what a Delta is relative to, making it hard to safely process it (see the parsing sketch at the end of this item). Perhaps beyond the scope of this document, but quite important

  • THIS NEEDS TO BE CONTINUALLY REFINED OVER THE NEXT YEAR WHEN WORKING TOWARDS MORE FREQUENT DELIVERY:
    • Once all happy, the document will be published and opened up to anyone to view.
    •  
  • Everyone was invited to either join the Working Group, or contribute ideas towards it - we will therefore continue to report back on how this is going...
  •  
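
To make the file naming point concrete, here is a small parsing sketch of the idea floated earlier of carrying the baseline release in the Delta element (e.g. "_Delta-20170131_"); this is purely illustrative and not an agreed convention:

    def parse_proposed_name(filename):
        """Parse a file name using the proposed convention where the Delta element
        also carries the baseline release it is relative to."""
        stem = filename.rsplit(".", 1)[0]
        file_type, content_type, release_type, namespace, release_date = stem.split("_")
        delta_from = None
        if release_type.startswith("Delta-"):
            release_type, delta_from = "Delta", release_type.split("-", 1)[1]
        return {
            "fileType": file_type,
            "contentType": content_type,
            "releaseType": release_type,
            "deltaFrom": delta_from,      # the release this Delta is relative to
            "releaseDate": release_date,
        }

    # Hypothetical example name following the proposal:
    print(parse_proposed_name("sct2_Concept_Delta-20170131_INT_20170731.txt"))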
40Community ContentAll
  • COMMUNITY EDITION(s)

    1. What should the criteria be that differentiates between what goes in each Edition:
      1. SNOMED CT Core
      2. SNOMED CT International Edition
      3. SNOMED CT Community Edition
    2. What level of quality do we allow into the Community Edition? 
      1. Any quality (quick and sharable) vs validated (slower but better)
      2. One suggestion is that instead of certifying the content, we could certify the authors themselves - so we could differentiate between projects which are authored by newbies, vs those who have say passed our SNOMED CT authoring certification level 1, etc
      3. Another suggestion is that whoever delivers content to the Community content would have to provide the MRCM to support it, + conform to editorial guidelines, etc
        1. So a list of "quality indicators" could be automated against each project (eg, as sketched at the end of this item):
          1. MRCM compliant
          2. Automated validation clean
          3. Authors have SNOMED CT certification
          4. Peer reviewed
          5. Release Notes
          6. Etc
        2. And then people can make their own minds up about which projects to use based on comparing the quality indicators between projects
    3. SOME AGREEMENT TO SUPPORT AND MAINTAIN BY @SOMEONE@ AT LEAST…
      1. For example, what happens if we change something in the core which breaks someone way down deep in the Community Edition?  (Which we can’t possibly test when we make the change in the core)
      2. The idea here would be that whoever creates the branch in the Community Edition then manages and maintains it - so everyone maintains their own branch, and is therefore responsible for resolving the conflicts coming down from the core, etc
      3. Versioning also becomes important, as whoever creates it needs to specify which Versions of each dependency their work is based on - (eg) they would state that their work is based on the 20190131 International Edition, and therefore any impact we have on the downstream community content would only happen when the owners of that content decided to upgrade their dependency(s) to the new version
    4. Promotion criteria important - thoughts?
    5. Do we remove the need for local extensions, as they can then simply become part of the Community Edition, with any local content just existing in a “country specific” edition within the Community Edition
      1. This also provides some level of assurance of the quality of the content in the Community Edition - as these would be assured by the NRC’s (and SI in some cases) and therefore provide a good baseline of high quality content for people to then start modelling against
    6. ModuleDependency is going to be important - 
      1. perhaps we answer this by making the entire Community Edition part of the same module - therefore it will all classify as one entity?
      2. However a lot of people will ONLY want to cherry pick the things that they want to take - so we need a method for taking certain modules (or realms or whatever we call them) and allowing people to create a snapshot based on just that content instead of the entire community edition
    7. Dependencies need to be properly identified:
      1. Could the CORE be standalone and published separately?
      2. Or would the CORE need to have dependencies on the wider International Edition, etc?
    8. HOWEVER, how do we classify the entire Community Edition when there could be different projects dependent on different versions of the dependencies (such as the international Edition)?
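
As a sketch of how the automated "quality indicators" per project might be recorded (names and structure are assumptions for discussion, not an agreed schema):

    project_quality_indicators = {
        "project": "Example community content project",        # hypothetical project name
        "dependencyBaseline": "20190131 International Edition",
        "indicators": {
            "mrcmCompliant": True,
            "automatedValidationClean": True,
            "authorsSnomedCertified": False,
            "peerReviewed": True,
            "releaseNotesProvided": True,
        },
    }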
41

IHTSDO Release Management Communication Plan review

All

This document was reviewed in detail and all feedback was discussed and agreed upon - new version (v0.3) is available for review, attached to the IHTSDO Release Management Communication Plan review page.

AAT has added in details to state that we'll prefix the comms with "Change" or "Release" in order to distinguish between the type of communications. See version 0.4 now - IHTSDO Release Management Communication plan v0.4.docx

Once we've collated the feedback from the revised comms processes that we've implemented over the past year (in the items above), we'll incorporate that into the final version and discuss with the SNOMED International Executive Lead for Communications (Kelly Kuru), to ensure that it is aligned with the new overall Communication strategy. Once complete, the Release Management comms plan will be transferred to Confluence and opened up for everyone to view.

We have publicised the Release Management confluence portal to both NRC's and the end users to get people to sign up as and when they require the information. Do we know of anyone still not getting the information they need?

We also agreed last time that the community needs more visibility of significant, unusual changes (such as bulk plural change, or case significance change). These changes should be communicated out not just when they're assigned to a release, but actually well in advance (ie) as soon as the content team start authoring it, regardless of which future release it will actually make it in. I have therefore created a new Confluence page here: January 2020 Early Visibility Release Notices - Planned changes to upcoming SNOMED International Release packages

I've left the previous items up (from the July 2017 International Edition) because there are no examples yet from the Jan 2018 editing cycle - so please take a look and provide feedback on whether or not this is useful, and how it can be improved.

  • ACTION POINT FOR EVERYONE BEFORE OCTOBER 2018: (Dion McMurtrie, Matt Cordell, Orsolya Bali, Suzy Roy, Corey Smith, Harold Solbrig, Mikael Nyström, Chris Morris)
    The final version of the communication plan needs to be reviewed by everyone and any comments included before we agree the document internally and incorporate it into our communication strategy
  • Suzy Roy will discuss the end use cases of her users with them and come back to us with feedback on the practical uses of SNOMED CT and any improvements that we can make, etc 
  • We may now also need to add a new section in here wrt the comms for the TRAG, so that this is standardised and agreed with the community? Or is it outside of the scope for the Release Communication Plan? This was felt to be out of scope, as this should be restricted only to communication related to actual releases of products.
  • Everyone is now happy with the current version, therefore Andrew Atkinson to publish - we can then start refining it as we use it.
  • Andrew Atkinson to therefore agree all of the relevant changes that will be required as a result of this document internally in SNOMED International, and publish the document accordingly.
  • AAT MIGRATED THE DOCUMENT FROM WORD TO CONFLUENCE, AND THEN SENT IT TO THE EPS Team for first review.....
  • The feedback has been incorporated and the document refined accordingly.
  • https://confluence.ihtsdotools.org/display/RMT/SNOMED+CT+Release+Management+Communication+plan
  • Andrew Atkinson has now sent to the relevant members of the SMT for final sign off....
    • This has now been signed off and is ready for publication
  • Do we consider anything in here needs to be incorporated into the TIG?
    • or perhaps just linked through?
    • or not relevant and just separate?
    • the litmus test should be whether or not implementers still use the TIG, or whether people now use separate documentation instead?
  •  
  • THIS NEEDS TO BE CONTINUALLY REFINED OVER THE NEXT YEAR WHEN WORKING TOWARDS FREQUENT DELIVERY:
    • Do we need more Communications over the first few months to ensure that everyone knows what's going on? 
    • Or do we actually need LESS now that we have regular, monthly releases?
    • Once all happy, the document will be published and opened up to anyone to view
    •  
42What constitutes a true RF2 release?Harold would like to introduce this topic for discussion...
  • Language refset conflicts are not yet resolved - Linda has been discussing this in terms of how to merge Language refsets or dictate whether or not one should override the other in cases of multiple language refsets - in the UK they combine them all into one but this is not ideal either. In translation situations they use the EN-US preferred term as the default where there is no translated term in the local language. Perhaps we need to survey the Members to find out who's using what, and how.
  • Suzy Roy (or Harold Solbrig) to get Olivier's initial analysis and come back to us on what worked and what didn't, and we can take it from there.
  • Suzy would like to ask Matt Cordell if he can share his ppt from his CMAG extensions comparison project.
  • Matt Cordell will distribute this to everyone for review before the April 2019 meeting.....
  • Harold to continue analysis and report back with the results of reviewing the specific examples that Olivier identified in the next meeting....

  • Can you please present the revisited presentation Matt Cordell ?
  •  
43

Modularisation of SNOMED CT


* MAG crossover

All

Dion McMurtrie completed the Alpha release - did anyone have chance to review it? (I haven't had any requests for access to the remainder of the package)

The subject of Modularisation needs to be discussed between the various AG's who are considering the topic, before we can proceed with the Release-specific sections.


We need to discuss any red flags expected for the major areas of the strategy:

  1. Modularisation
  2. Members who want to abstain from monthly releases, and therefore need to use delta's with multiple effective times contained within.
  3. Also need to consider if we continue to hold the date against the root concept - works perhaps still for 12 monthly releases, but not necessarily for continuous delivery daily!
  • THIS NOW BECOMES CRITICAL TO THE STRATEGIC DIRECTION WE DISCUSSED IN TERMS OF MODULARISING OUR CONTENT, AND IMPROVING THE WAY THAT THE MDRS WORKS, IN ORDER TO ALLOW RANGES OF DEPENDENCIES. THIS WILL ALLOW THE "UNIT" OF RELEASE TO BE REFINED ACCORDING TO THE RELEVANT USE CASES.
  • Understand the Use cases thoroughly, and refine the proposal doc to provide people with more real information - Dion McMurtrie TO PROVIDE THESE USE CASES FOR Andrew Atkinson TO DOCUMENT
  • Does the POC allow for concepts to be contained within multiple modules? NO - BUT DION CAN'T THINK OF ANY CONCRETE EXAMPLES WHERE THIS WOULD BE NECESSARY
  • What about cross module dependencies? Michael Lawley's idea on having a separate Module purely for managing module dependencies
  • IN THE FINAL PROPOSAL, WE NEED TO CREATE A NESTED MDRS TO MANAGE THE INTER-MODULE DEPENDENCIES (as per Michael's comments)
  • NEED TO PROVIDE GOOD EXAMPLES AND WHITE PAPERS OF THE USE CASES FOR MODULARISATION IN ORDER TO ENGAGE OTHERS...

  • AFTER SIGNIFICANT DISCUSSION AND CONSIDERATION, THERE ARE NO VALID USE CASES LEFT FOR MODULARISATION. IT CAUSES A LOT OF WORK AND POTENTIAL CONFUSION, WITHOUT ANY TANGIBLE BENEFIT.
  • THE PERCEIVED BENEFIT OF HAVING A WAY TO REDUCE THE SIZE/SCOPE OF RELEASE PACKAGES IS BOTH a) invalid (due to everyone's experience of being unable to successfully do anything useful with any small part of SNOMED!), and b) easily answered by tooling that uses the ECL to identify sub-sections of SNOMED to pull out for research purposes, etc.
  • THEREFORE AS OF APRIL 2018 THE FEEDBACK FOR RORY AND THE STRATEGY TEAM WAS THAT MODULARISATION SHOULD NOT BE IMPLEMENTED UNLESS A VALID USE CASE CAN BE IDENTIFIED.
  • HOWEVER, KNOWING THE HISTORY OF THIS ISSUE, THIS WASN'T NECESSARILY GOING TO BE THE FINAL WORD ON THE MATTER, SO IS EVERYONE STILL SURE THAT THERE ARE NO KNOWN USE CASES FOR MODULARISATION?? (eg) linking modules to use cases, as Keith was talking about with Suicide risk assessment in Saturday's meeting,etc??
  • This topic came up several times again during other discussions in the April 2019 meetings, and it was clear that people had not yet given up on the idea of Modularisation - we therefore need to discuss further in October 2019....
  • Agreed to see where the linked discussions in the MAG etc end up going, and then discussing the proposals rather than just in abstract....
44"Negative Delta" file approachAllThis approach was successfully implemented in order to resolve the issues found in the September 2017 US Edition - is everyone comfortable with using this approach for all future similar situations? If so we can document it as the accepted practice in these circumstances...
  •  NO! Everyone is decidedly uncomfortable with this solution! In particular Keith Campbell, Michael Lawley and Guillermo are all vehemently opposed to changing history.
  • The consensus is that in the particular example of the US problem, we should have instead granted permission for the US to publish an update record in the International module, thus fixing the problem (though leaving the incorrect history in place). This would have been far preferable to changing history.
  • ACTION POINT FOR EVERYONE FOR OCTOBER 2018: (Dion McMurtrie, Matt Cordell, Orsolya Bali, Suzy Roy, Corey Smith, Harold Solbrig, Mikael Nyström, Chris Morris
    We therefore all need to come up with potential scenarios where going forward we may need to implement a similar solution to the Negative Delta, and send them to AAT. Once I've documented them all, we can then discuss again and agree on the correct approach in each place, then AAT will document all of these as standard, proportionate responses to each situation, and we will use these as guidelines in future. If we have issues come up that fall outside of these situations, we'll then come back to the group to discuss each one subjectively, and then add them back into the list of agreed solutions.
  1. Preference now is to retain EVERYTHING in the Full file, regardless of errors - this is because the Full file should show the state at that point in time, even if it was an error! There is no error in the Full file itself; the Full file is accurately representing the error in the content/data at that time.
  2. The problem here is that the tools are unable to cope with historical errors - so we perhaps need to update the tools to allow for these errors.
  3. So we need the tools to be able to whitelist the errors, and honestly document the KNOWN ISSUES (preferably in a machine readable manner), so that everyone knows what the historical errors were.
  4. The manner of this documentation is up for debate - perhaps we add it to a new refset? Then we could use something very similar in format to the Negative delta, but instead of it actually changing history retrospectively, we simply document them as known issues, and allowing people to deal with the information in their own extensions and systems in whatever way they feel is appropriate.
  5. Only situation we can think of where we couldn't apply the above gentle response, would be copyright infringement - whereby if we discovered (several releases after the fact) that we had released content that was in direct infringement of copyright, then we would potentially have to revoke all releases since the issues occurred. However, this would raise a very interesting situation where patient safety might be compromised - as if we remove all historical content that contravened the copyright, then we run the risk of patient data being impacted, thus potentially adversely affecting decision support. This is simple to resolve when the problem is in the latest release (simply recall the release), but if found in a 5 year old release for example, it could be very problematic to recall 5 years' worth of content and change it!
  • October 2018 - Guillermo proposed a separate possibility, which is to introduce a new Status (eg) -1 whereby if you find this status in the latest snapshot you would just ignore it. This doesn't, however, address the use case where there is a legal contravention and you need to physically remove the content from the package, nor the use case where something contravenes the RF2 paradigm - you can't use the RF2 format to correct something that is RF2 invalid! So this is unlikely to work...
    • Nobody is on board with this idea, as it's too fragile and introduces unnecessary complexity such as we had with RF1...

  • April 2019
  • If we're still all in agreement with this, then steps 1-5 above should all be documented and disseminated to get confirmation of approval from everyone??
  • Did everyone read through everything? Has anyone got any further scenarios that we can include in the documentation?

  • The EAG raised this issue again on 08/04/2019 - Peter to try to make it to the next TRAG to explain the use case that was raised today and elaborate on the new proposal...
  • The TRAG discussed this issue at length, and came to the conclusion that we cannot address ALL potential use cases with a standard, generic, solution (certainly not any of those offered above).
    • Instead the solution in each case should be agreed on given each specific use case that comes up each time
    • So INSTEAD we should update the Critical Incident Policy to very clearly define the process to be followed each time we need to remove something from the Published release(s):
      • Which group of people should make the decision on the solution
      • Perhaps we also provide examples of how each use case might be resolved:
        • For Legal/IP contraventions, we should either remove content from history entirely, or redact it (leave the records in place, but remove all content from fields except for UUID, effectiveTime, moduleID, etc - thus allowing traceability of the history of the components, without including the offending content itself) - see the redaction sketch at the end of this item
        • For Clinical risk issues, we can remove it from the Snapshot, but leave the Full file intact to leave a historical audit trail whilst ensuring that the dangerous content shouldn't get used again (as most people use the snapshot) - see Full file steps 1-5 above, etc
      • How to communicate it out to the users, etc
  • OCTOBER 2019 - DISCUSSION RE-OPENED AS PART OF THE MAG:
  • ONCE FEEDBACK OBTAINED FROM MAG:
  • Andrew Atkinson to update the Critical Incident Policy with
    • the various use cases that we've identified so far
    • the governing bodies who should be the deciding entities
    • the process for making the decision in each case
    • including the critical entities that need to be collaborated with in each case (all NRC's, plus 3rd party suppliers (termMed etc) who represent some of them), to ensure the final solution does not break outlying extensions or anything
    • the process for communicating out those decisions to ALL relevant users
    •  
  • UPDATES FROM MAG?
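
For reference, a minimal sketch of the "redaction" option described above for Legal/IP contraventions - keep the rows (so component history stays traceable) but blank everything except the fields named above. The column positions assume the standard RF2 layout (id, effectiveTime, active, moduleId, ...); this is an illustration, not an agreed tool:

    KEEP_COLUMNS = {0, 1, 3}  # id (UUID/SCTID), effectiveTime, moduleId - per the fields named above

    def redact_rf2_row(row):
        """Blank every field except those we keep for traceability."""
        return [value if i in KEEP_COLUMNS else "" for i, value in enumerate(row)]

    # Illustrative concept row (id is a placeholder): id, effectiveTime, active, moduleId, definitionStatusId
    print(redact_rf2_row(["123456789012345", "20170131", "1", "900000000000207008", "900000000000074008"]))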


45

Potential for adding a "withdrawn" reason for inactivated content


* MAG crossover

AllDiscussions around the future strategy for SNOMED CT have included the potential for adding new statuses for content. 

In particular, many people have suggested that problems are created for those either mapping or translating from content that's still "in development". If (as is often the case) they use Daily Builds etc as input data, they can often get tripped up by content which is created but then withdrawn before it's versioned and officially released. It would be extremely useful to those users to have access to traceability data describing the reasons behind why they were removed, in order to support accurate mapping/translation. 

In another use case, there's the possibility that content needs to be formally withdrawn from the International Edition AFTER it's been officially released. This would be the case if, for example, content has unintentionally been published that breaks the RF2 paradigm, or contravenes licensing laws, etc. In this case mere inactivation is not sufficient, the content instead needs to be completely withdrawn from the releases and sometimes even from history. 

The TRAG needs to discuss all of this and be ready with recommendations if these proposals are taken forward.
  • ONE OF THE POTENTIAL SOLUTIONS TO THE ISSUE ABOVE: "Negative Delta" file approach
  • Use cases:
    • undo a historical issue (that break RF2 paradigm, etc) but don't want to pretend it never happened - in this case we should use the Negative Delta approach - but only used in EXTREME circumstances
    • Legal contraventions - in this case we should use the Negative Delta approach - but only used in EXTREME circumstances
    • Dead on arrival components - it should be okay to have these, openly dead on arrival and therefore inactive, so that nobody maps to them, etc. However it's useful to be able to see these (even though they'd been activated + inactivated within the same release cycle) - so for those people who need to map/translate etc DURING the release cycle, they have to rely on the Daily Build and use live data still in development. Therefore if those concepts disappear by the time of the International Edition it causes problems for those maps/translations already including those concepts.
      • Therefore the best answer is for us to move to having 2x Daily Builds - the existing one + a separate true Daily Builds - where each Daily Build is built relative to the previous Day, and NOT to the previous Published release. This new Daily Build could then be properly relied upon by mapping and translation projects.
      • Can we align this with the transition to the more Frequent Releases?
  • HAS ANYONE HAD ANY MORE THOUGHTS ON THIS SINCE OUR LAST DISCUSSIONS??
  • MAG to discuss tomorrow (30/10/2019)
  • LATEST UPDATE FROM MAG?
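
For reference, a minimal sketch of how a "Negative Delta" file might be applied - this is an illustration only, not an agreed format. It assumes the Negative Delta file uses the same columns as the target Full file, and the file names are invented; each (id, effectiveTime) row listed in the Negative Delta is physically removed from the Full file:

    # Minimal sketch only: remove every (id, effectiveTime) row listed in a
    # hypothetical Negative Delta file from the corresponding Full file.
    FULL = "sct2_Concept_Full_INT_20220930.txt"
    NEG_DELTA = "sct2_Concept_NegativeDelta_INT_20220930.txt"
    OUT = "sct2_Concept_Full_INT_20220930_cleaned.txt"

    def row_keys(path):
        """Collect (id, effectiveTime) pairs - the first two RF2 columns."""
        with open(path, encoding="utf-8") as f:
            next(f)                                          # skip header
            return {tuple(line.split("\t")[:2]) for line in f}

    to_remove = row_keys(NEG_DELTA)

    with open(FULL, encoding="utf-8") as src, open(OUT, "w", encoding="utf-8") as dst:
        dst.write(next(src))                                 # copy header
        for line in src:
            if tuple(line.split("\t")[:2]) not in to_remove:
                dst.write(line)
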
46  Clean modularization  All

There are 22 module concepts that are on the 900000000000012004 |SNOMED CT model component| module.

I don't think it's documented anywhere, but we (AU) have made the assumption that the concept for a module should be on itself (a minimal consistency check is sketched below). I suspect we've started to discuss this before, but can't recall how accepted this position was. The 22 concepts below (including the core module) aren't part of the core release, but clutter up the hierarchy. We also get enquiries about this content, some of which is non-existent or unavailable.

  • Thoughts please from everyone on whether or not this proposal would have any impact (negative or positive) on the International Edition?
  • Ready to close down?
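
To make the assumption above testable, here is a minimal sketch (not an official validation rule) that flags any module concept whose moduleId in the concept file is not the concept itself; the set of module concept IDs and the file name are illustrative only:

    # Minimal sketch only: check whether each known module concept is "on itself",
    # i.e. its moduleId equals its own conceptId. IDs and file name are examples.
    MODULE_CONCEPT_IDS = {"900000000000207008", "900000000000012004"}   # extend to all 22
    CONCEPT_FILE = "sct2_Concept_Snapshot_INT_20220930.txt"

    with open(CONCEPT_FILE, encoding="utf-8") as f:
        next(f)                                              # skip header
        for line in f:
            concept_id, _effective_time, _active, module_id = line.rstrip("\r\n").split("\t")[:4]
            if concept_id in MODULE_CONCEPT_IDS and module_id != concept_id:
                print(f"Module concept {concept_id} is on module {module_id}, not on itself")
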
47  Proposal to increase the level of metadata available for authors to log decisions made during content authoring

Jim Case + Suzy Roy

This is a subject for which it would be helpful to include Jim in the discussions, as he has some definite opinions on how to improve the metadata in this area.

One suggestion would be to make more detailed information available for authors to describe their reasons for inactivation (especially in those areas where they are currently forced to use inactivation reason codes that don't fully represent the actual reason in that instance).

Adding Jim Case - for further discussion later...

48  Discussion on the conflict between Extension content and International content

All + Jim Case

The answer to this may be quite simple:

  1. If extensions promote content via RF2 delta, we just need to retain all IDs and change only the moduleId and effectiveTime; everything is then managed by effectiveTime.
  2. If IHTSDO rejects content, this is also managed in the same way.
  3. The only issue arises if IHTSDO wants to change the FSN: we then need a way to manage the change in the meaning of the concept without creating 2 FSNs, plus a feedback loop to ensure that it's also corrected at source in the extension as well as in the International Edition.

TRAG to continue the discussion and come to a conclusion that will work for all.

  • Has this been answered in its entirety by Jim's new agreed approach? (link here to his final position)
  • Most people consider that Jim's approach covers this under most circumstances. We also need to ensure that we follow the approach listed above - so we should confirm all of this has been working in practice since April 2018, and if so, close this down?
  • OR, do we have any new requirements here in order to ensure that Promotion/Demotion works efficiently once we move to more frequent delivery?
  • In addition to this, we have had several issues with Promotions recently, with concepts being promoted without their related components (descriptions, relationships, etc.) - so perhaps it's worth writing a full process document on exactly how, when and why content should be promoted, plus all related tasks that must take place at the same time to ensure a smooth and accurate promotion? (A minimal promotion sketch follows this item.)
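
As a very rough illustration of point 1 and the promotion-completeness concern above - a sketch only, with invented module IDs, effectiveTime and file names, not the actual promotion tooling - an ID-preserving promotion would rewrite only moduleId and effectiveTime, and a simple completeness check could warn about concepts promoted without any descriptions:

    # Minimal sketch only: promote extension delta rows by changing ONLY moduleId and
    # effectiveTime (all IDs preserved), then warn about concepts promoted without
    # descriptions in the same delta. All values below are illustrative.
    INT_MODULE = "900000000000207008"                        # SNOMED CT core module
    NEW_EFFECTIVE_TIME = "20230131"

    def promote(src_path, dst_path):
        """Copy a delta file, changing only the moduleId and effectiveTime columns."""
        with open(src_path, encoding="utf-8") as src, open(dst_path, "w", encoding="utf-8") as dst:
            dst.write(next(src))                             # copy header
            for line in src:
                fields = line.rstrip("\r\n").split("\t")
                fields[1] = NEW_EFFECTIVE_TIME               # effectiveTime
                fields[3] = INT_MODULE                       # moduleId (IDs and all other columns untouched)
                dst.write("\t".join(fields) + "\n")

    def check_descriptions(concept_delta, description_delta):
        """Warn about concepts promoted without any descriptions in the same delta."""
        with open(description_delta, encoding="utf-8") as f:
            next(f)
            described = {line.split("\t")[4] for line in f}  # conceptId column
        with open(concept_delta, encoding="utf-8") as f:
            next(f)
            for line in f:
                concept_id = line.split("\t")[0]
                if concept_id not in described:
                    print(f"Concept {concept_id} promoted without any descriptions")

    promote("sct2_Concept_Delta_Extension.txt", "sct2_Concept_Delta_Promoted.txt")
    promote("sct2_Description_Delta-en_Extension.txt", "sct2_Description_Delta-en_Promoted.txt")
    check_descriptions("sct2_Concept_Delta_Promoted.txt", "sct2_Description_Delta-en_Promoted.txt")
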
49  AG Declarations of Interest  All

Could each of you please go in and update your information? If there has been no change, then you can simply update the last column with the date.
50  Any other questions / issues?  All