2020-04-06 - TRAG Meeting Agenda/Minutes



Date

6th April 2020  -  10:00 - 12:00 BST (09:00 - 11:00 UTC)

Room: NONE - Conference Call ONLY!

https://snomed.zoom.us/j/289236148?pwd=NEdQZUhKZjgrTUxabGhCMzhpMWtWZz09  Password: 422062


6th April 2020  -  13:30 - 17:30 - CANCELLED

LONDON - CANCELLED

7th April 2020  -  09:00 - 12:30 - CANCELLED

LONDON - CANCELLED


Attendees

Apologies

Meeting Recording



Objectives

  • Briefly discuss each item
  • Agree on the plan to analyse and resolve each issue, and document the Action points
  • All those with Action points assigned to them to agree to complete them before the next face-to-face meeting

Discussion items



Subject | Owner | Notes | Action

1. Welcome! (All)

Thanks to our members for all of their help. Welcome to our observers!

INTRODUCTIONS...


2. Conclusion of previous Discussion topics

3. URGENT: Proposed format of the July 2019 Optional Release package (All)

We wanted to give you a quick heads-up on the proposed format of the July 2019 optional release package. Whilst we haven't received any adverse feedback from any users, we did find a couple of minor issues (during our internal testing) with the UUIDs, due to the divergence of the Production and Optional packages. For example, as the Jan 2019 optional release wasn't ever officially published, a few of the IDs in the MRCM and inferred files have been used for different records in the July 2019 Optional release than in the Jan 2019 Optional release.

Given these potential inconsistencies, we’re proposing to publish just the OWLAxiom files themselves in the July 2019 Optional release. This will ensure that the information provided is accurate, and given that there are no known use cases for anyone to use anything except for the Axiom files in this Optional package, it will serve to remove any possible confusion.
  • Please can everyone confirm whether or not you're happy with the final published format, and if not please provide details about why this would have an adverse effect on you or any other known users of this package?
  • Interested to know if anyone's intending to use it other than those I already know about?
  • Need feedback in April 2020 to see if we need to discuss further or
    • Provide any other data in an additional release?
    • IF NOT close down in April 2020
    • NO further feedback, so closing this topic down as planned
4.

5. Active discussions for April 2020

6. Potential for an additional interim International Edition release in June 2020 (All)

In order to continue supporting the efforts to combat COVID-19, there is the potential for an additional interim International Edition release in June 2020.

No-one had any valid use cases to propose in support of this suggestion: the standard July 2020 release will already be well underway, and the likelihood of anyone having the capability to take another interim release so close to the July date is so small that it would almost certainly not get used by anyone.

Therefore we will proceed for now on the basis that the July 2020 release will be the next International Edition published, until someone finds a use case for another interim release.

7. URGENT: Additional Delta file planned for Coronavirus descriptions (All)

APRIL 2020 - How did the Release go for the end users?

Did anyone use it yet or just re-publish?

Everyone now in agreement with the 5th option, as follows:

5.  We publish the formal July 2020 International Edition as if the March 2020 Interim release were an official release, with a Delta relative to March 2020, containing all of the new Corona content with an effectiveTime of 20200309.  This is transparent as it doesn't pretend the March 2020 release never happened, but assumes that the majority of July 2020 users DID consume the March 2020 Interim release.

  1. For those users who consumed the March 2020 Interim Release:
    1. It causes no problems for these users who want to use either the Full, Snapshot or Delta, as there will be no duplicate records.

  2. For those users who did NOT consume the March 2020 Interim Release:
    1. It does cause potential problems for these users, especially those who want to use the Deltas, as they will not see the new Corona content from March 2020. However, the TRAG are confident that there aren't many of these users, as the majority either use the Snapshot, or if they do use the Delta it's mostly simply to view the differences between this and the previous release, rather than actually updating systems using the Delta files.
    2. They would therefore have to either use the Snapshot, or use a new Rollup Delta package that we publish alongside the July 2020 Release:
      1. This July 2020 Rollup Delta package would be a separate resource with clear documentation.  It would include all changes since January 2020, with the actual, official effectiveTime. Therefore all of the new Corona content would have an effectiveTime of 20200309. The rest of the content would have an effectiveTime of 20200731.

      2. This rollup package should include instructions on how to cope with multiple effectiveTimes, and how to validate against the Snapshot/Full so that users can flag it up if they somehow missed some updates in the middle of their Delta period - e.g. if they are doing a delta for the entirety of 2019, but something goes wrong and the rollup delta doesn't include the updates for, say, March 2019. This would be very difficult to pick up in manual validation, so it needs an automated check against either multiple Snapshots, or the latest Full file.

      3. We can also use this Rollup package as an opportunity to trial our new proposal for the Delta file naming conventions - these will in future contain the effectiveTime of BOTH the current Delta, and also the release that the Delta is taken from - for example, if a Delta is from Jan 2020 to March 2020:

          1.  sct2_Concept_Delta_INT_20200131_20200309.txt 

    3. OR we simply ask them to first apply the March Interim release Delta and after that apply the official July 2020 Delta (relative to March).

      1. This option prevents confusion and a lot of extra documentation, and also prevents the users from having to potentially change their systems now in order to cope with loading in a rollup delta that contains multiple effectiveTimes.


  3. While there is some room for delta users to get confused about which one to use (the official Delta since the March 2020 release, or the Rollup Delta), the fact is that both deltas would be consistent, which therefore a) preserves RF2 protocols, and b) pushes forward the preparation for continuous delivery.

The consensus is that people should try to use option 2c wherever possible, but that we should provide the rollup package as well just in case this is easier for some users.

If we track the downloads for this rollup delta package, we should then get a really good idea of:

  1. roughly how many people would be likely to use a new "Delta" tool (currently planned for Continuous Delivery), because they aren't using the Snapshots and aren't able to simply load multiple deltas consecutively.
  2. then speak to these people and try to gently encourage them to move to using the Snapshots for future updates, in order to make strong foundations for the future migration to Continuous Delivery.
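The automated check against the Full file described above could be sketched roughly as follows - a minimal Python sketch, assuming standard RF2 tab-separated columns (id, effectiveTime, ...); the function names and file handling are illustrative, not part of any agreed tooling:

```python
import csv
import io

def load_rows(tsv_text):
    """Parse an RF2-style tab-separated file into a list of row dicts."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return list(reader)

def missing_from_rollup(full_rows, delta_rows, period_start, period_end):
    """Return (id, effectiveTime) pairs that appear in the Full file within
    the Delta period but are absent from the rollup Delta - i.e. updates
    that the rollup somehow dropped. Dates compare correctly as YYYYMMDD
    strings."""
    delta_keys = {(r["id"], r["effectiveTime"]) for r in delta_rows}
    return [
        (r["id"], r["effectiveTime"])
        for r in full_rows
        if period_start < r["effectiveTime"] <= period_end
        and (r["id"], r["effectiveTime"]) not in delta_keys
    ]
```

Any pairs returned would indicate updates missing from the rollup Delta - exactly the gap that would be very hard to spot in manual validation.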
8. Multiple effectiveTimes in Delta files - are they required? (All)

APRIL 2020 - This needs to be answered for interim releases like the one we've just had, but more importantly also for the future migration to Continuous Delivery

Everyone in agreement that multiple effective times will be necessary in Delta files for future continuous releases.

We also unequivocally need to include ALL historical records in a true Delta file.

HOWEVER, the entire use case for Delta files is becoming less and less certain. This is because fewer and fewer users are utilising Deltas to actually upload content; instead they simply use them to quickly and cleanly ascertain the latest changes between the previous and current releases.

This latter use case could perhaps be much better served by offering a Diff Tool, rather than the originally intended Delta tool:

  • This Diff tool would allow the user to enter the two releases they want to see the differences between, and then simply show the Diff between the two Snapshots

  • We could also parameterise whether or not they want to see ALL changes between the two dates or just the LATEST change... (i.e. whether it would be like a true delta, or just a snapshot diff with the latest change)

  • We could also potentially include information on WHAT components were changed to and why (i.e. historical associations) - e.g. this component was inactivated for this reason, and was REPLACED BY this other component...
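In its simplest "snapshot diff" mode, the Diff tool discussed above could work something like this - a hypothetical Python sketch (no tooling API has been defined yet, so all names here are assumptions):

```python
import csv
import io

def load_snapshot(tsv_text):
    """Index an RF2 Snapshot file by component id."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return {row["id"]: row for row in reader}

def snapshot_diff(old, new):
    """Diff two Snapshots: returns (added, removed, changed) component ids.
    This shows only the LATEST state of each component, not every
    intermediate version - the 'snapshot diff' mode discussed above."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(
        cid for cid in set(old) & set(new) if old[cid] != new[cid]
    )
    return added, removed, changed
```

The "ALL changes vs LATEST change" parameterisation would then simply switch between this snapshot-based diff and a true delta computed from the Full file.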


9. Refset Descriptor Inactivation (Matt Cordell)

TRAG to decide on correct policy and feedback to Matt... Matt wasn't available for this call, so we'll discuss this in the next TRAG meeting....
10. MRCM format update (All)

APRIL 2020 - everyone to discuss and agree. Everyone agreed that there are no predictable issues with this proposal, and so we can proceed as planned. Unless further feedback is received, we can close this item down in the next TRAG meeting...
11. URGENT: CONCRETE DOMAINS Consultation + Concrete Domains (MAG crossover) (All)

The short term proposal of precoordinating the numbers and measures as concepts (and therefore not changing the RF2 format) was generally well accepted, though there were concerns raised regarding the longevity of this approach, and whether or not this addresses the original target of the project (which was to allow a standardised approach across all extensions, instead of perpetuating distinct coding for different users). The other concern raised was that any solution needs to be implemented rapidly, as otherwise the various members will be forced to start/continue implementing their own solutions.

Peter G. Williams, therefore, has taken this forward in the Modelling AG and further implementation. The functionality has been rolled into the wider discussion of enhancing SNOMED's DL capabilities. The Modelling AG is planning a targeted discussion on this in June 2017, and will then produce a document which would then be reviewed by the MAG at the October conference. This Proposal document will be shared when complete.

Last update from Peter was that the OWL Refset solution allows us to classify with concrete domains. The thing we're still discussing is how to represent that in the release. The current most popular approach suggested is to create a 2nd Inferred file ("sct2_RelationshipConcreteValues_Delta_INT_[date].txt") which contains concrete values in the destination column, rather than SCTIDs. This allows them to be added without impact to the current approach, i.e. ignore it if you don't want to use them. The new file would only contain concrete values.

At the same time, existing drug strengths and counts expressed using concepts (which represent those same numeric values) will be inactivated. SNOMED International will inactivate the existing strength/concentration attributes which use concepts-as-numbers and replace them with new ones (using the same FSNs), switching the target/value to the corresponding concrete numeric.

This enhancement will increase the analytical power of SNOMED CT through the use of numeric queries and assist with interoperability by removing the need for extension maintainers to all - separately - add concepts representing numbers in order to publish their own drug dictionaries. 
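For consumers who opt in to the new file, handling the new "Value" field might look like this - a speculative sketch, since the exact value encoding was still under consultation at this point (the '#' prefix for numbers and double-quoted strings follow the eventual RF2 proposal, but are assumptions here):

```python
def parse_concrete_value(raw):
    """Interpret the 'value' column of the proposed concrete-values file.
    Assumed encoding (per the later RF2 proposal, not settled at the time
    of these minutes): numeric values carry a leading '#', strings are
    double-quoted; anything else is returned unchanged."""
    if raw.startswith("#"):
        text = raw[1:]
        # Integers and decimals both appear as strengths/counts
        return int(text) if text.lstrip("-").isdigit() else float(text)
    if raw.startswith('"') and raw.endswith('"'):
        return raw[1:-1]
    return raw
```

Because the concrete values live in a separate file, consumers who don't want them can simply skip the file - which is the "no impact to the current approach" property described above.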

  •  October 2018 - Harold Solbrig to give an update on the MAG's plans? No further updates yet, check back in April 2019....
  • Consultation: SNOMED International are now running a consultation around the introduction of Concrete Domains to SNOMED CT.

    If you are interested in this area, and/or wish to express an opinion on this proposed change, please read the following information and complete the feedback form if desired:

    http://www.snomed.org/news-and-events/articles/addition-concrete-domains-consultation

  • Update from Peter Williams after subsequent MAG discussions - NEW MAG Proposal:
  • ANYONE HAVE ANY FEEDBACK??
  • The MAG have only received formal feedback (via the online form - I know some of you commented direct on the page!) from ONE person so far, so can we please make a point of providing some feedback on this ASAP - even if it's just to say that you're in complete agreement? Thanks!

  • We should also be issuing advice to downstream users of the drug model to avoid using the current concepts as numbers as they will soon be disappearing - can we please have confirmation of who knows that they have users impacted by this, and that they'll provide the advice immediately?

  • Can anyone foresee any impact (negative or positive) on the Release(s)? NO

    • Does the introduction of a second inferred file present any risk of confusion, etc?
      • NO
    • Are there any perceived restrictions around the use of concrete domains in inferred format?
      • NO
    • Should the new inferred file take exactly the same format as the current file?
      • Current proposal removes the DestinationID field completely and replaces it with the new "Value" field
      • But in theory, we could just hold the Values in the existing DestinationID field, if there's a strong business case for people to need the same format as the existing Inferred file? (hard coding of field names in import systems, etc)
      • NO, THE NEW FORMAT IS ACCEPTABLE
    • Will the inactivation of the existing concepts containing drugs strengths/counts cause anyone problems?
      • NO, BUT
      • MORE COMMS NEEDED TO WARN PEOPLE THEY'LL BE INACTIVATED
    • Are there any users, for example, who currently use these concepts and are unable to switch to the new approach?
      • YES plenty, so we just need to continue to ensure this is an OPTIONAL file to consume (though it must be mandatory to include in the Release package)
    • April 2020 - Any further feedback? (especially from any further updates from MAG plans)
      • YES - the new question is whether or not we actually NEED to inactivate all previous concepts, or if (as we are only changing one attribute) we can leave them active and just update the attributes?
      • Question sent to MAG
      • Jim also kindly agreed to ask Editorial AG in meeting on 6th April...
12. Discussion of proposed format and packaging of new combined Freeset product + Proposed new Freeset format (All)

TRAG to review and provide feedback and ideas for business case(s)...
  •  Andrew Atkinson to present the current proposal, and gather feedback
  • Feedback:
    • Uncertainty on use cases - however this was mitigated by the specific messaging from SNOMED licensed users to non-licensed recipients...
    • Content
      • DICOM in particular is not representative without sub-division, PLUS actually risky with unverified attributes...
      • AAT to discuss further with Jane, etc
      • Agreed that SI are confident that DICOM will provide some use
    • Using the US PT instead of the FSN (whilst providing less exposure of the IP) prevents visibility of the hierarchy (due to lack of semantic tag) - however the reason for this is because the target users (who are NOT current SNOMED licensed users) will find more use from the PT in drop-downs, messaging, etc than the FSN...
      • Now included both!
    • Everyone happy with each subsequent release being a snapshot - so additions added but inactivations just removed - as long as we include something in the legal licence statement to state that use of all concepts that have ever been included is in perpetuity (even after they've been inactivated)
      • New requirements have suggested that we need to now include a full historical audit trail, even in the Freeset formatted file!
      • This means we've included an Active flag column to allow this to be added in future releases...
      • We don't need to do this for a few months, so we need feedback now on whether or not we think this is a good idea?
      • Any potential drawbacks?
        • None identified in Oct 2019 - but no-one has used it yet!
        • Check again in April 2020 - NO, NONE - go ahead!
      • This is a dependency for signing off the final version of the Release packaging conventions and File Naming Conventions item (next)
    • In addition, Members would also like a Proposal to create an additional Simple refset (full RF2) of the entire GPS freeset in order to enable active/inactive querying etc by licenced users...
        • Potential to automate the creation of this using ECL queries if we ensure all freesets are included in the refset tool..

      • Would people still see a valid business case for including an RF2 refset file in the GPS package as well?

        • OCTOBER 2019 - NOT IN THE ROOM - BUT RORY HAS BEEN ASKED FOR IT BY SEVERAL PEOPLE, SO WE NEED TO DO IT

          • This will be in line with the September 2020 GPS release.

        • Any potential drawbacks with doing this?

          • NO

        • If so, should it be part of the existing GPS release package, or a separate file released at the same time?

          • Separate, released at same time - this is because the use case is different for each file type -

            • Users who don't have SNOMED CT will use freeset format file to scope which concepts they can receive successfully

            • Users who already have SNOMED CT will use the RF2 file format to scope which concepts they can send successfully to those who aren't regular SNOMED users...


  • APRIL 2020 - Any other feedback from actually using the GPS freeset file?

    • no - everyone would just like the RF2 file version in Sept 2020 as planned...

  • Need to update the Release packaging conventions and File Naming Conventions with the final decisions on the freeset format, ONCE the next GPS release has been published and we've had time to receive any useful feedback on it all...

13. Release packaging conventions and File Naming Conventions (All)

TRAG to review and provide final feedback.

Reuben to provide feedback on progress of the URI specs + FHIR specs updates...

  • Document updated by Andrew Atkinson in line with the recommendations from the last meeting, and then migrated to a Confluence page here: SNOMED CT Release Configuration and Packaging Conventions
  • To be reviewed in detail by everyone, and all feedback to be discussed in the meetings. AS OF OCTOBER 2017 MOST PEOPLE STILL NEEDED TIME TO REVIEW THE DOC - Andrew Atkinson INFORMED EVERYONE THAT THIS DOCUMENT WILL BE ENFORCED AS OF THE JAN 2018 RELEASE AND THEREFORE WE NEED REVIEWS COMPLETED ASAP... so now need to check if reviews still outstanding, or if all complete and signed off??
  • AAT to add in to the Release Versioning spec that the time stamp is UTC
  • AAT to add the trailing "Z" into the Release packaging conventions to bring us in line with ISO8601
  • AAT to add new discussion point in order to completely review the actual file naming conventions. An example would be to add into the Delta/Full/Snapshot element the dependent release that the Delta is from, e.g. "_Delta-20170131_" etc. AAT to discuss with Linda/David. Or we hold a zero-byte file in the Delta folder containing this info, as this is less intrusive to existing users. Then publish the proposal, and everyone would then take this to their relevant stakeholders for feedback before the next meeting in October. If this is ratified, we would then update the TIG accordingly.
  • AAT to add in a statement to section 4 (Release package configuration) to state that multiple Deltas are not advised within the same package.
  • AAT to add in appendix with human readable version of the folder structure. Done - see section 7
  • IN ADDITION, we should discuss both the File Naming convention recommendations in the Requirements section (at the top of the page), PLUS Dion's suggestions further below (with the diagram included).
  • Dion McMurtrie to discuss syndication options for MLDS in October 2018 - see what they've done (using Atom) and discuss with Rory as to what we can do. Suzy would be interested in this as well from an MS perspective. UK also interested. This shouldn't hold up the publishing of the document. Discussions to continue in parallel with the creation of this document...
  • Reuben Daniels to raise a ticket to update the fhir specs accordingly
  • Reuben Daniels to talk to Linda to get URI specs updated accordingly.
  • URI Specs to be updated and aligned accordingly - Reuben Daniels to assist
  • EVERYONE TO REVIEW TONIGHT AND SIGN OFF TOMORROW
  • ONLY outstanding point from earlier discussions was Dion's point from the joint AG where he talked about nailing down the rules for derivative modules...
  • Dion McMurtrie to discuss/agree in the October 2018 meetings - REPORT FROM DION??
  • Everyone is now happy with the current version, therefore Andrew Atkinson to publish - we can then start refining it as we use it.
  • Andrew Atkinson to therefore agree all of the relevant changes that will be required as a result of this document internally in SNOMED International, and publish the document accordingly.
  • FIRST POINT WAS THEREFORE TO have it reviewed internally by all relevant stakeholders...
    • This has been completed and signed off
  • Do we consider anything in here needs to be incorporated into the TIG?
    • or perhaps just linked through?
    • or not relevant and just separate? YES - NOT RELEVANT!!
    • the litmus test should be whether or not implementers still use the TIG, or whether people now use separate documentation instead?
      • ??????????
  • We also need to make a decision on the final Freeset distribution format(s), as I want to ensure we only have a MAXIMUM of 2 distribution formats - RF2 + the agreed new Freeset format (whatever that may be)
    • YES everyone is happy with this!
    • Add this into the Release Packaging Conventions and publish
  • APRIL 2020 - DO WE NEED TO MAKE ANY REFINEMENTS IN ORDER TO PREPARE FOR CONTINUOUS DELIVERY? Did ADHA need any formatting changes when moving to monthly?
    • We really need to tackle the Delta from and to release version in the Delta file naming, and possibly package file naming. At the moment it is impossible to know what a Delta is relative to, making it hard to safely process it. Perhaps beyond the scope of this document, but quite important
    • This would require implementing the new proposal for the Delta file naming conventions - these will in future contain the effectiveTime of BOTH the current Delta, and also the release that the Delta is taken from - for example, if a Delta is from Jan 2020 to March 2020:
      1.  sct2_Concept_Delta_INT_20200131_20200309.txt , OR

      2. sct2_Concept_Delta_INT_20200309_20200131.txt

    • We can therefore use the July 2020 Rollup package as an opportunity to trial our new proposal


  • Once all happy, the document will be published and opened up to anyone to view.
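If the two-date Delta naming proposal (e.g. sct2_Concept_Delta_INT_20200131_20200309.txt) is trialled, consumers could recover the Delta's baseline mechanically - a small illustrative sketch, assuming the first date is the release the Delta is taken from and the second is the Delta's own effectiveTime (i.e. option 1 above):

```python
import re

# Proposed pattern: sct2_<Component>_Delta_<Namespace>_<fromDate>_<toDate>.txt
# (the field order is an assumption - the proposal lists both orderings)
DELTA_NAME = re.compile(
    r"^sct2_(?P<component>\w+)_Delta_(?P<ns>[A-Z0-9]+)_"
    r"(?P<from_date>\d{8})_(?P<to_date>\d{8})\.txt$"
)

def delta_baseline(filename):
    """Return (from_release, delta_effectiveTime) for the proposed naming,
    or None for files that don't follow it (e.g. legacy single-date names)."""
    m = DELTA_NAME.match(filename)
    return (m.group("from_date"), m.group("to_date")) if m else None
```

This is exactly the "machine-readable baseline" that is currently impossible to determine from a Delta package, which the notes above flag as the main blocker to safely processing Deltas.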
14. Continual Improvement to the Frequent Delivery process (All)

APRIL 2020 - Gather Practical requirements for the various deliverables that we need to implement in order to successfully make the move to Continuous Delivery...

Requirements:

  • Increase the scope of RVF validation to include:
    • All current authoring manual validation
    • Generic Validation Service assertions (from all community validators)
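A Generic Validation Service assertion of the kind mentioned could be as simple as a named predicate over release rows - a hypothetical example (the check itself, "active descriptions must reference active concepts", is a standard RF2 integrity rule, but the function shape is invented for illustration):

```python
def check_active_descriptions(concepts, descriptions):
    """RVF-style assertion sketch: every active description must point at
    an active concept. Rows are dicts keyed by RF2 column names; returns
    the ids of failing descriptions."""
    active_concepts = {c["id"] for c in concepts if c["active"] == "1"}
    return [
        d["id"]
        for d in descriptions
        if d["active"] == "1" and d["conceptId"] not in active_concepts
    ]
```

Assertions in this shape are easy to promote from an NRC's local validator into a central RVF, since each one is self-describing and returns a concrete list of failures.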
15. Continual Improvement to the Frequent Delivery process (All)

We need to continue discussions on this on-going item, in light of the strategic meeting before the conference. In addition we now have new members with additional experience, and we have also now lived with the more stable International Edition release process for the past couple of years.

Last time we discussed this, everyone thought it a good idea in principle, but were concerned that we are not yet in a position to deliver the same level of quality on a daily basis as on a monthly basis (due to the current gap in our manual/automated testing). Therefore we were going to discuss this once we had further progressed our automated testing - however, as the new working group for the RVF service will testify, this is a slow process, and therefore it may not be possible to wait for it to be completed in its entirety.

We have identified several additional potential issues with moving to Continuous Delivery, which we should consider before proposing a solution:

  • Perceived quality issues:

    • There would be no time for Alpha or Beta Releases - so all Members would have to be comfortable with issues remaining in Production until the next interim release

    • All issues that normally get tidied up as part of the normal Content Authoring cycle will become public - they will get fixed quickly but in the meantime there may be an impact to the reputation of the quality of SNOMED CT.

  • Roll up Releases:

    1. The 6-monthly delta releases would need to be relative to the prior 6-month release, and therefore named as such somehow, i.e. we would need to somehow make it explicit as to which previous release the delta is a differential to.
    2. The other possibility is that each month is the same interim release, and then every 6 months we also release the Deltas relative to the prior 6-monthly release, in addition to the usual monthly release.  In this case we would need to reserve the 31st Jan + 31st July effectiveTimes/package naming for the 6-monthly roll-up releases, so that the users who want to remain on the 6-monthly schedule would remain unaffected.
  • The other option is to have no roll up releases at all, thus releasing a stand-alone package every day/week/month, depending on the agreed frequency. The issue with this approach though is that anyone using the Delta files (rather than Snapshot) for uploads would need to keep up with the continuous schedule.


UPDATE FROM THE EVOLUTION WORKSHOP:

Pros

  • Allows people to choose whether the users take one release every 6 months, or frequent monthly releases...
  • Derivative maps wouldn't be a huge issue, as we would just release them whenever we had a chance, dependent on whichever edition
  • One of the plus points is that while we're still at 6-monthly releases, if the vendors miss a release it's a big deal, whereas if they miss monthly releases the impact is smaller


Cons

  • One drawback is for the non-English speaking members, who need to keep up with translations - though this shouldn't really have an impact if they keep up with each smaller release.
  • Could be painful for translations when a monthly release happens to contain a drop of a huge project like Drugs or something...
  • What about interoperability issues, with some people taking each monthly release, and others still waiting for every 6 months? ADHA believe this hasn't caused a huge problem for them, just an addition to the existing problem even with 6 monthly releases...
  • Also need to implement the metadata for identifying which dependent release each Delta is relative to...
  • Refsets aren't too much work to keep up to date - however Mapping is a different ball game - this can take some time
  • Maps that are still inherent in the Int Edition (ICD-O, ICD-11 etc) are potentially problematic, and the workflow would need to be carefully worked out...
  • If your projects happen to drop in-between the normal 6 monthly releases, then someone who might have taken Jan and July still, might miss out on the important big releases that happen in April and November!
  • Also quality might be an issue - we need to have the automated testing completely airtight before we move to continuous delivery! Thereafter we would run all major validation at the input stage and ensure authors only ever promote to MAIN when everything is perfectly clean. Then we run Daily builds with automated release validation every night, which provide a RAG status on release issues every morning. Then by the end of the month, we publish the last Green Daily build!
  • Andrew Atkinson to continue to feed all of this into the continued internal discussions on whether or not moving to more frequent Delivery is feasible, and if so plan what the timelines would look like.
  • Andrew Atkinson to create a survey to provide to everyone so that they can send it out to all users and get feedback on the proposed changes (especially multiple effectiveTimes in Delta files, and removal of Delta files - just a service now):
  • https://docs.google.com/forms/d/17Rhxc3TrMgPq1lnhAm2G6LkGsaN05_-TMKr69WRVdc4/edit
  • TRAG to add/update any of the questions....
  • Question: What questions would we like to ask the vendors and affiliates, to a) ensure we cover off all problems/potential issues, but b) NOT put us in a position where they think that we might not go ahead with the plans despite their answers - wording the survey to ensure that they know we're going ahead, but just want to ensure there's no negative impact to them that might tweak our plans; c) How much time do they need to adapt to the change for multiple effectiveTimes in the same Delta; and d) How do we promote the benefits? (responsiveness to changes with more frequent releases, improvement to quality with more frequent fixes, etc)
  • Andrew Atkinson to refine survey to ensure that it's accessible to those with more limited SNOMED knowledge/experience, as these are the preferable target market for the survey, given that the more advanced users will (or have already) speak up for themselves:
    • GDPR questions - verify with Terance whether or not we just need to provide a link to our data policy (https://www.iubenda.com/privacy-policy/46600952), or if we specifically need to ask the questions (of whether or not they're happy for us to store their data, etc) as questions in the survey? (check box) - If the latter, ask if we have standard legal wording I can use?
    • Small intro - description + pros/cons
    • Couple of fairly wide ranging questions as to whether or not they think they'll be impacted
    • If so, then either fill in the details here (conditional question in google forms) OR please just get in contact with your NRC to discuss
    • Avoid technical language for non native English speakers
    • Suzy to include in her UMLS survey in January
      • Not done yet as she's stuck with red tape in the NLM!
  • Andrew Atkinson refined the survey accordingly, and sent out to TRAG members for final review on 16/10/2018: https://docs.google.com/forms/d/17Rhxc3TrMgPq1lnhAm2G6LkGsaN05_-TMKr69WRVdc4/edit
  • Andrew Atkinson sent final survey to Terance and Kelly in particular, (from GDPR and comms perspective) to ensure in line with company strategy and verify whether or not they'd prefer this to be an SI survey or NRC surveys?
  • Survey sent to the TRAG to disseminate to their users
  • Survey also sent to Kelly for inclusion in the newsletter, and also on LinkedIn

AGREED:

  • Move to Monthly Releases before we go to full continuous delivery - yes, everyone agreed
  • How do we best automate all of the validation?
  • Best thing is to make the RVF the central source of truth for all International validation.
  • Therefore NRCs like Australia will promote all International-related content to the core RVF, and only retain and run validation that is local to themselves.
  • This would mean that whenever they identify a new issue, they can simply promote the new test up to us and we can run it and replicate the issue for ourselves, and therefore fix it quickly.
  • It will also share the burden of maintaining the validation rules.
  • Share Validation Service to address this...
  • Question: can we do any automation for Modelling issues? ECL? New validation using the editorial rules in the new templates as a basis for automating modelling QA?
    • ECL the best bet - plus MRCM doing well so far - can we extend this? (Australia so far only implemented modelling validation by writing manual rules for known issues)
  • What's the impact of multiple effectiveTimes in Delta files?
    • Should be negligible, Australia and US already implemented with no effect to users (despite initial complaints!)
  • Creation of a bespoke Delta using a new tool - Delta at the International level is very simple, but at the Extension level is much more complex due to all of the dependencies, etc. This could also become more involved when we modularise...
  • Australia intended to build this as well, but it never happened because no one requested it in the end!
  • The other issue was the traditional issue of never knowing (in a machine readable way within the Delta file itself) what the Delta file is a Delta from (ie) is it a delta from the Jan 2014 release, or the July 2016 release, etc.
  • So there was a lot of discussion over whether or not they should create roll-up Deltas, or provide the service - but in the end they found that only a few people were actually using Deltas, and those were the people who knew what they were doing already, and so nothing was ever required!
  • So we need to decide whether or not this is useful...
  • We also need to be wary of the fact that there are two different things to be relative to - so you can have a Delta to a release, or a Delta to a date in time, and they can be very different things.
  • Suzy has always released a delta with multiple effectiveTimes in it (due to the Edition) and no one has ever had any issues with this.
  • If we remove the Delta files completely we would definitely need to provide a Service to download bespoke Deltas (both International and local Extension level) - AT THE SAME TIME WE SHOULD FIX THE ISSUE OF LACK OF METADATA PROVIDED FOR WHAT THE BASELINE OF THE DELTA IS
  • For local extensions this service does get a lot more complex than for International, as they need a range of Delta dates PER MODULE, as they have a lot more going on than just the International Edition - so the service would need to be a) clever enough to correctly get the relevant dependencies from all sources, plus b) validate that the resulting Delta is correct and valid - provide a checksum of some kind (needs to be identified).
  • SNOMED INTERNATIONAL TO CREATE A SMALL, TARGETED SURVEY TO QUESTION WHETHER OR NOT THERE WOULD BE ANY IMPACT TO ANYONE TO PROVIDING A DELTA SERVICE INSTEAD OF DELTA FILES... Everyone will happily disseminate this to their users and get responses asap...
  • Release Notes automation -
    • simple, just attach notes metadata to each change in MAIN then export on Release
  • Question: Is it worth starting off with a trial using just Content Requests monthly, and then bring everything else in line once happy?
    • NO! Everyone feels strongly that there would be no benefit to this whatsoever, as the majority of urgent cases in CRS are to do with getting an ID to use in refsets etc. before the next 6 monthly release, and as this has already been mitigated by the new tooling providing those IDs early, there's no benefit in moving to CRS early. There is only a small risk in moving to monthly at all, so we're better off just moving everything at once, to prevent a) confusion for users, b) confusion in the message about continuous delivery, + c) overhead for SNOMED in managing 2 different delivery schedules during the pilot
  • Question: What are the next steps that we need to consider to help move this forward?
    • Central RVF service, communication with community (survey etc)
  • Question: Is everyone happy with the new plan to remove the Delta files from the RF2 packages completely, and just provide the Delta service to create Deltas on the fly? YES
  • Question: How can we get a survey out to as many implementers as possible in order to ask a lot of these questions and get the answers we need?
  • Question: How do we manage translations? (including the Spanish release) - How do we cope with the likelihood that one month could have only 50 changes, and the next month 50,000 (Drugs project, etc)? -
    • No impact, as this should allow for incremental translations - just need to not set expectations with your users that you stay one month behind the International Edition! Just need to decouple the translation release schedule from the International Edition schedule. ARTURO would prefer the Spanish edition to also move to Monthly (or even more frequent) releases, but he fully understands the natural latency required for translated Editions, and so understands that even if we went to monthly we can't keep up with the monthly content changes
  • Question: How do we manage extensions?
    • Again need to decouple them - MDRS will naturally get a lot bigger - also the versioning process internally currently takes a long time + a lot of effort for each upgrade to new International Edition...
  • Question: How do we manage derivatives?
    • Just keep them decoupled from the International Edition release schedule, and do not set false expectations by promising to keep them closely up to date with monthly International Releases!
  • Question: How do we manage maps?
    • So again there is a natural latency here where we can't keep up to date with monthly releases. WE ALSO NEED TO DEFINE WHAT AN ACCEPTABLE UNIT OF RELEASE IS FOR EACH TYPE OF CONTENT CHANGE (so what our concept of "DONE" is for each type of change) - FOR EXAMPLE, SOME CONCEPTS SHOULD NOT BE RELEASED UNTIL THE RELEVANT ICD-10 MAP CAN BE CREATED AND PUBLISHED AT THE SAME TIME. OTHERS COULD BE RELEASED NO PROBLEM AND WAIT FOR 6 MONTHS FOR THE RELATED MAPS...
  • WE ALSO NEED TO CAREFULLY DEFINE AND COMMUNICATE OUT WHAT THE SCOPE AND GOALS OF MOVING TO CONTINUOUS DELIVERY ARE - TO ENSURE THAT WE MANAGE EVERYONE'S EXPECTATIONS. FOR EXAMPLE, WHAT IT DOESN'T MEAN IS THAT EVERYONE WILL GET THEIR CHANGE INTO SNOMED WITHIN 4 WEEKS, JUST BECAUSE WE'RE RELEASING DAILY!!!
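The bespoke Delta service discussed above can be sketched as a simple filter over an RF2 Full file: everything with an effectiveTime after the requested baseline and up to the target date is, by definition, the Delta, and it naturally contains multiple effectiveTimes. This is purely an illustrative sketch (not SNOMED International tooling), and the column layout is abbreviated:

```python
import csv
import io

def rf2_delta(full_tsv, baseline, target):
    """Return rows from an RF2 Full file whose effectiveTime falls in
    (baseline, target] - i.e. a Delta relative to an arbitrary baseline.
    RF2 effectiveTimes (yyyyMMdd) sort correctly as strings."""
    reader = csv.DictReader(io.StringIO(full_tsv), delimiter="\t")
    return [row for row in reader if baseline < row["effectiveTime"] <= target]

# Abbreviated example of a concept Full file (columns truncated for brevity)
full_file = (
    "id\teffectiveTime\tactive\tmoduleId\n"
    "100005\t20170131\t1\t900000000000207008\n"
    "100005\t20180731\t0\t900000000000207008\n"
    "101009\t20190131\t1\t900000000000207008\n"
)

# A Delta relative to the Jan 2017 baseline, up to the Jan 2019 release:
delta = rf2_delta(full_file, baseline="20170131", target="20190131")
```

The same call with a different `baseline` yields a different Delta, which is exactly why the baseline metadata (and some form of checksum) needs to travel with the generated file.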

  • TRAG members to send out the survey...
  • Awaiting all results... ANYTHING BACK YET from anyone?
  • New questions:
    • NEW PROPOSAL IS TO MOVE STRAIGHT TO DAILY CONTINUOUS DELIVERY!
      • WHAT DO PEOPLE THINK ABOUT THIS???
    • ALL OTHER POINTS NOT YET DISCUSSED IN THIS LIST:
    • How do we validate translations? (NLP?)
      • Spain have had some experience with using a Spanish medical dictionary to validate translations - not the actual translations themselves obviously (they've still got no way to automate this), but at least testing that the translations are syntactically correct!
        • AAT to speak to Arturo about their experiences...
    • Implications on peripheral content (MRCM, on demand deltas)

      • LARGE impact on MRCM changes - we need to carefully consider whether or not we can publish MRCM concept model changes without first having waited for all of the concepts impacted by them to have been updated as well (so to isolate them all in a feature branch before publishing anything) - HOWEVER, this does restrict the time to market of the new concept model changes that might want to be published before we have time to update all of the relevant changes...

      • We need to discuss further with Linda, Yong, etc...

    • Implications for DERIVATIVE products:

      • Do we just continue with these as separate entities and decide arbitrarily when the cut off date for content is?

      • Or do we decide each cycle what the cut off date is for the dependent International Content, depending on which day the important in-flight content changes are being published?

      • Do we have the same cut off for each Derivative product? (even though they might be published weeks apart?)

        • Or do we just take a cut of the Daily release from the day we're starting each separate product's release cycle?


      • ICD-10 maps - should we remove them from the International Edition, as mappers might not be able to keep up to date when we move to daily! (or even monthly!)

        • Current plan is to keep them in the International package, but need to answer the below first...

      • YES - some people think we should remove it from the International Edition package, and have it on its own cycle with the moduleDependency clearly stated

      • Mikael knows users that would likely want to have ICD-10 completely up to date...

      • Other potential is to slow down the International Edition release frequency to ONLY release once ICD-10 is updated in line - so this is part of the gateway to being "DONE" before authors promote... this makes the most sense, BUT could result in projects being much slower to market because of waiting for the ICD-10 maps to be complete - especially for large projects where a lot of the content was uploaded quickly via bulk imports or templates, etc...

      • Different opinions - some people clearly don't want to take International content UNTIL ICD-10 is completely up to date, others want to push ahead with the International content regardless of whether or not ICD-10 is up to date yet!

      • We need to get estimates of how often the ICD-10 mapping might hold up the International Content - and decide whether or not ICD-10 needs to be part of the "DONE" gateway - this could be decided on a Project by Project basis....

      • AAT to discuss with content and mapping team...


    • Impact on NRCs present - how can they help out with testing/validation when Alpha/Beta periods are no longer in place?

    • Vendors (non-NRCs) view of frequent releases??

    • VALIDATION advances:

      • OWL testing - anyone worked on this as yet?

      • Template validation - thoughts?

      • Implementation testing feasible? (see Implementation Load Test topic below)

      • Need to identify Modelling areas that need improving - for example where concepts have 2x parents, this is usually an indication of areas that need re-modelling
      • Need automation of the QA system itself - so some quick way to validate RVF + DROOLS Assertions, both old + especially new!
      • MOST IMPORTANTLY, how do we avoid the usual pitfall of automated testing (ie) that the effort involved in maintaining the automated assertions and keeping them up to date with the content changes, doesn't start to exceed the effort involved in testing manually?!
        • Anyone with experience of a good answer to this?
      • Whitelisting - API required?

    • We need to re-consider the Critical Incident Policy, UNLESS we can get at least several different entities downloading and testing the monthly releases EVERY MONTH!
      • This is because if someone say only takes the release every 12 months (or even worse 24 months), and then finds a critical issue in a now 2 year old release, we would currently have to recall and republish 24 releases! 
      • Instead, we need to have agreement from the Community on a “Forward Only” approach, whereby any issues found (even Critical ones) are fixed from the next Release onwards (or possibly in several Releases time if they’re low priority issues).  Critical issues would simply have to be communicated out, warning everyone NOT to use any previous impacted releases.
      • For true critical incidents,
        1. do proper recall for ALL affected releases  +
        2. Provide patched versions of those releases, +
        3. Flag all those releases in MLDS to state that they're affected by critical incidents +
        4. Fix it going forward ASAP...

  • WHAT ARE THE BENEFITS OF MOVING TO CONTINUOUS DELIVERY?
    • Who does it help? Any use cases of institutions that will actually be able to use more frequent releases?
    • How do we best up-sell these benefits?
    • Do we go for a phased migration?
      • We could move to Daily Builds for internal use only (and perhaps a very select few members/suppliers), and just continue with the 6 monthly releases for now?
      • Or we move to Daily Builds for internal use only (and perhaps a very select few members/suppliers), and just increase the Frequency of publishing the International Edition to monthly?


  • HOW DO WE DEAL WITH INTEROPERABILITY?
    • Should we put out a white paper with the launch of Continuous Delivery to advise people as to how they can avoid the pitfalls of taking more frequent releases when others that they want to interact with might still be on annual or even longer update schedules?
    • Is it even feasible for anyone to take more frequent releases unless they want to work in silo?
    • If the rest of the community is still only updating annually then this could consign all others to the same pace for now?
  • COMMUNITY EDITION(s)

    1. What should the criteria be that differentiates between what goes in each Edition:
      1. SNOMED CT Core
      2. SNOMED CT International Edition
      3. SNOMED CT Community Edition
    2. What level of quality do we allow into the Community Edition? 
      1. Any quality (quick and sharable) vs validated (slower but better)
      2. One suggestion is that instead of certifying the content, we could certify the authors themselves - so we could differentiate between projects which are authored by newbies, vs those who have say passed our SNOMED CT authoring certification level 1, etc
      3. Another suggestion is that whoever delivers content to the Community content would have to provide the MRCM to support it, + conform to editorial guidelines, etc
        1. So a list of “quality indicators” could be automated against each project (eg):
          1. MRCM compliant
          2. Automated validation clean
          3. Authors have SNOMED CT certification
          4. Peer reviewed
          5. Release Notes
          6. Etc
        2. And then people can make their own minds up about which projects to use based on comparing the quality indicators between projects
    3. SOME AGREEMENT TO SUPPORT AND MAINTAIN BY @SOMEONE@ AT LEAST…
      1. For example, what happens if we change something in the core which breaks someone way down deep in the Community Edition?  (Which we can’t possibly test when we make the change in the core)
      2. The idea here would be that whoever creates the branch in the Community Edition then manages and maintains it - so everyone maintains their own branch, and is therefore responsible for resolving the conflicts coming down from the core, etc
      3. Versioning also becomes important, as whoever creates it needs to specify which Versions of each dependency their work is based on - (eg) they would state that their work is based on the 20190131 International Edition, and therefore any impact we have on the downstream community content would only happen when the owners of that content decided to upgrade their dependency(ies) to the new version
    4. Promotion criteria important - thoughts?
    5. Do we remove the need for local extensions, as they can then simply become part of the Community Edition, with any local content just existing in a “country specific” edition within the Community Edition
      1. This also provides some level of assurance of the quality of the content in the Community Edition - as these would be assured by the NRC’s (and SI in some cases) and therefore provide a good baseline of high quality content for people to then start modelling against
    6. ModuleDependency is going to be important - 
      1. perhaps we answer this by making the entire Community Edition part of the same module - therefore it will all classify as one entity?
      2. However a lot of people will ONLY want to cherry pick the things that they want to take - so we need a method for taking certain modules (or realms or whatever we call them) and allowing people to create a snapshot based on just that content instead of the entire community edition
    7. Dependencies need to be properly identified:
      1. Could the CORE be standalone and published separately?
      2. Or would the CORE need to have dependencies on the wider International Edition, etc?
    8. HOWEVER, how do we classify the entire Community Edition when there could be different projects dependent on different versions of the dependencies (such as the International Edition)?

17

Active discussions for October 2020


18

Computer readable metadata


* MAG crossover

Suzy introduced the topic for discussion...


Suzy would like to raise the question of creating computer readable metadata, and raise questions such as whether or not to include known namespace & modules? 
Or just the current metadata for the files in a machine readable format? 

Suzy Roy to provide an update on progress:

  • All agreed that whilst this is a large topic, we should start somewhere, and get at least some of the quick wins in (then request the change to content via the CMAG):
  1. Check where the progress with the namespace metadata has got to - can we progress this?
  2. Code systems (and versions) of the map baselines
  3. Common strings such as boiler plate licence text etc
  4. Description of use cases for the various refsets (using the text definition of the Refset concepts themselves) - either JSON or Markdown representation of multiple pieces of info within the same field.
  • Michael Lawley to provide an update from the related MAG topic...
  • TRAG agreed that this should be incorporated into the discussions with the continuous delivery, in order that we can plan the changes here in line with the transition to more frequent releases. To be continued over the next few months...
  • Michael Lawley to kindly provide an update on his work with David to help design and implement the solution - this will now be in the second TRAG meeting of the April 2019 conference, after they have met together....
  • Ideas:
    • Some human readable metadata could potentially live as descriptions (which can then be translated)? David to discuss further...
    • David will mock up something in Json...
  • Michael + David + Harold agreed to create a straw man to put up in the next meeting and take this further...
  • This should now be combined with the Reference set metadata topic, to address all updated metadata use cases - Human readable, Machine readable, etc
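As a concrete starting point for the straw man, the quick wins listed above could be carried in a single machine-readable JSON document per release package. Every field name below is an assumption for illustration only, not an agreed schema:

```python
import json

# Illustrative package-level metadata covering the four "quick wins":
# namespaces, map baselines, common boilerplate strings, refset descriptions.
# All field names are hypothetical, pending the agreed straw man.
package_metadata = {
    "packageName": "SnomedCT_InternationalRF2",
    "effectiveTime": "20190131",
    "namespaces": [{"namespace": "1000003", "owner": "Example NRC"}],
    "modules": ["900000000000207008", "900000000000012004"],
    "mapBaselines": [{"codeSystem": "ICD-10", "version": "2016"}],
    "licence": "Boilerplate licence text shared across packages...",
    "refsetDescriptions": {
        "900000000000509007": "US English language reference set usage notes...",
    },
}

encoded = json.dumps(package_metadata, indent=2)
decoded = json.loads(encoded)  # round-trips cleanly, so tooling can consume it
```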
19

Reference set metadata

* MAG crossover

Replacement of the Refset Descriptor file with a machine readable Release Package metadata file

See David's proposal here: Reference set metadata (plus sub page here: Potential New Approach to Refset Descriptors)

  • Everyone confirmed no issues with the proposal in principle, in April 2018
  • However, do we consider this to just be relevant to refsets in the International Edition release package?
    • Or to all derivative products as well?
    • Both refsets and maps?
  • Also, are we talking about only human readable descriptive information, or also machine readable metadata such as
    • ranges of permitted values
    • mutability, etc?
  • Michael Lawley to kindly provide an update on his work with David to help design and implement the solution - this will now be in the second TRAG meeting of the April 2019 conference, after they have met together....
  • Michael + David + Harold agreed to create a straw man to put up in the next meeting and take this further...
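To make the "human readable vs machine readable" question above concrete, one possible shape for a Refset Descriptor replacement is shown below, carrying both descriptive text and machine-readable constraints (permitted value ranges, mutability). The field names and the example ECL range are illustrative assumptions only, not David's actual proposal:

```python
import json

# Hypothetical machine-readable replacement for a Refset Descriptor entry.
# "permittedRange" uses an ECL expression purely as an illustration.
refset_descriptor = {
    "refsetId": "734138000",  # example refset id (illustrative)
    "description": "Example map reference set",
    "columns": [
        {
            "name": "referencedComponentId",
            "type": "sctId",
            "permittedRange": "<< 138875005 |SNOMED CT Concept|",
            "mutable": False,  # identity column, never changes
        },
        {
            "name": "mapTarget",
            "type": "string",
            "permittedRange": None,  # free text
            "mutable": True,         # may change between releases
        },
    ],
}

serialised = json.dumps(refset_descriptor)
```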
20

IHTSDO Release Management Communication Plan review

All

This document was reviewed in detail and all feedback was discussed and agreed upon - new version (v0.3) is available for review, attached to the IHTSDO Release Management Communication Plan review page.

AAT has added in details to state that we'll prefix the comms with "Change" or "Release" in order to distinguish between the type of communications. See version 0.4 now - IHTSDO Release Management Communication plan v0.4.docx

Once we've collated the feedback from the revised comms processes that we've implemented over the past year (in the items above), we'll incorporate that into the final version and discuss with the SNOMED International Executive Lead for Communications (Kelly Kuru), to ensure that it is aligned with the new overall Communication strategy. Once complete, the Release Management comms plan will be transferred to Confluence and opened up for everyone to view.

We have publicised the Release Management confluence portal to both NRC's and the end users to get people to sign up as and when they require the information. Do we know of anyone still not getting the information they need?

We also agreed last time that the community needs more visibility of significant, unusual changes (such as bulk plural change, or case significance change). These changes should be communicated out not just when they're assigned to a release, but actually well in advance (ie) as soon as the content team start authoring it, regardless of which future release it will actually make it in. I have therefore created a new Confluence page here: January 2020 Early Visibility Release Notices - Planned changes to upcoming SNOMED International Release packages

I've left the previous items up (from the July 2017 International Edition) because there are no examples yet from the Jan 2018 editing cycle - so please take a look and provide feedback on whether or not this is useful, and how it can be improved.

  • ACTION POINT FOR EVERYONE BEFORE OCTOBER 2018: (Dion McMurtrie, Matt Cordell, Orsolya Bali, Suzy Roy, Corey Smith, Harold Solbrig, Mikael Nyström, Chris Morris)
    The final version of the communication plan needs to be reviewed by everyone and any comments included before we agree the document internally and incorporate it into our communication strategy
  • Suzy Roy will discuss the end use cases of her users with them and come back to us with feedback on the practical uses of SNOMED CT and any improvements that we can make, etc 
  • We may now also need to add a new section in here wrt the comms for the TRAG, so that this is standardised and agreed with the community? Or is it outside of the scope for the Release Communication Plan? This was felt to be out of scope; the plan should be restricted only to communication related to actual releases of products.
  • Everyone is now happy with the current version, therefore Andrew Atkinson to publish - we can then start refining it as we use it.
  • Andrew Atkinson to therefore agree all of the relevant changes that will be required as a result of this document internally in SNOMED International, and publish the document accordingly.
  • AAT MIGRATED THE DOCUMENT FROM WORD TO CONFLUENCE, AND THEN SENT IT TO THE EPS Team for first review.....
  • The feedback has been incorporated and the document refined accordingly.
  • https://confluence.ihtsdotools.org/display/RMT/SNOMED+CT+Release+Management+Communication+plan
  • Andrew Atkinson has now sent to the relevant members of the SMT for final sign off....
    • This has now been signed off and is ready for publication
  • Do we consider anything in here needs to be incorporated into the TIG?
    • or perhaps just linked through?
    • or not relevant and just separate?
    • the litmus test should be whether or not implementers still use the TIG, or whether people now use separate documentation instead?
  • Once all happy, the document will be published and opened up to anyone to view
21What constitutes a true RF2 release?Harold would like to introduce this topic for discussion...
  • Language refset conflicts are not yet resolved - Linda has been discussing how to merge Language refsets, or whether one should override the other in cases of multiple language refsets - in the UK they combine them all into one, but this is not ideal either. In translation situations they use the EN-US preferred term as the default where there is no translated term in the local language. Perhaps we need to survey the members on who is using what, and how.
  • Suzy Roy (or Harold Solbrig) to get Olivier's initial analysis and come back to us on what worked and what didn't, and we can take it from there.
  • Suzy would like to ask Matt Cordell if he can share his ppt from his CMAG extensions comparison project.
  • Matt Cordell will distribute this to everyone for review before the April 2019 meeting.....
  • Harold to continue analysis and report back with the results of reviewing the specific examples that Olivier identified in the next meeting....

  • Can you please present the revisited presentation Matt Cordell ?
22Plans for the transition from Stated Relationship file to OWL refset filesAllThis is part of the wider Drugs and Substances improvements that are currently taking place. Other than the obvious content updates, these technical changes are those which will be likely to have the highest impact on those within our AG. 

We need to discuss the plan and ensure that we have answered all of the possible questions in advance, in order that we have a workable plan with no unwanted surprises over the next few release cycles. 

As a starting point, we should discuss the following: 

1. The schedule of changes (see here: January 2020 Early Visibility Release Notices - Planned changes to upcoming SNOMED International Release packages) (ie) 

July 2018 - initial OWL refsets introduced 
Jan 2019 - included in the Release package: a) Stated Relationship file b) the partial OWL axiom refset including all description logic features that cannot be represented in the stated relationship file. 
The Extended OWL refset file will be available on demand. 
July 2019 - the stated relationship file will be replaced by the complete OWL Axiom refset file. The stated relationship file will NOT be included in the international release; however, it may still be available on request to support migration to the OWL Axiom refset. 

2. The communications required to ensure that ALL impacted parties are completely informed of the Schedule, and the changes that they may need to make in order to transition cleanly to the new format. 

3. The technical changes that we need to make to the Release package itself, in order to support the planned schedule. 

For example, when we "replace" the Stated Relationship file in July 2019, do we remove the file from the release package immediately (in Jan 2020 once everyone has had a chance to run the inactivation file through their systems), or do we take the more measured approach of inactivating all records and leaving the inactivated file in the package for, say, 2 years, and then planning to deprecate the Stated Relationship file by July 2021? 

Further, should we be deprecating the file itself at all, or can we see any other (valid) use for the Stated Relationship file (obviously not just repurposing it for a completely different use!)? 
  • Harold Solbrig to talk to Yong and others in the MAG about his proposals for future proofing against the possibility of having multiple ontologies referenced, prefixed axioms, etc.
  • Harold confirmed nothing to report
  • Some opposition to reverting back to having the OWL file on-demand for Jan 2019 - need to discuss through with Kai in tomorrow's session - preference is to release both Stated Rels + the "additional" info only in the OWL files - as with the July 2018 release. Is this the current intention?
  • Done - Jan 2019 was implemented as requested - did anyone manage to use it and trial it effectively? Any feedback?
    • YES - Australia downloaded it and trialled it in their systems!
    • Worked well - however they have not got a lot of new validation to cover either the OWL format or the content itself, so these were trials to ensure that they can use it and author against it, rather than testing the actual content of the Axioms...
  • Also, has the decision already been made to NOT create a full history back to 2002 (or 2011 at least)? Sounds like most extensions will do it anyway, so maybe we should? Decision made by content team - no history to be included
  • Discussion on whether or not to go back and re-represent the content all the way back to 2002 in the new complete OWL file:
    • Pros:
      • Prevents the need for new tooling providers to create support for the old Stated Rel way of doing things
      • If the International Edition doesn't go all the way back then the Extensions are restricted to not doing it either; if the International Edition does, then the Extensions have a choice.
      • Ability to go back through history and analyse previous modelling decisions (if errors come up in future), even for those authors who haven't heard of Stated Rels because they've now been deprecated for several years.
    • Cons:
      • Cost involved in creating the pure historical view
      • If the extensions have a choice as to whether or not to go back, then interoperability could be impacted - better to enforce going back if the international edition does.
      • Need to address the issue of some implementations having both Stated Rels + OWL Axioms in the same full files going forward.
      • Uncertain use cases for most implementers
  • This discussion needs further input in order to enable us to reach an informed conclusion. The relevant internal and external stakeholders (NRC's such as Australia) will take this away and come back with the results of feasibility studies and estimates as to how long the necessary work would take to complete..... a decision must then be made well in advance of the January 2019 International Edition, in order to ensure that we agree on the correct approach before creating the initial Alpha release in November...
    • We are currently proceeding on the assumption that there was no feedback from any sources that supported the retro-fitting of the OWL Axiom files? The major con here is breaking our own regulations on tampering with history - the Stated Relationships should remain in place in order to a) accurately represent history + b) prevent the false impression that extended functionality was available via OWL Axioms before July 2019!
  • DOES ANYONE ELSE HAVE ANY OTHER CONCERNS WHATSOEVER ON THE TRANSITION PLAN TO OWL, OR IS EVERYONE NOW COMFORTABLE WITH IT? YES! All good to go...
  • We need to work with the Shared Validation working group to share as many OWL based validation assertions as possible, so that we can all effectively cover:
    • Technical validation of the OWL file structure
    • Content validation of the OWL records
    • Modelling validation post OWL
  • Having worked with OWL for a few months now, does anyone have any suggestions for new validation assertions?
  • Linda and others are confident that the MRCM validator will cover most modelling scenarios for now, but we'll need to keep extending as we go
  • We also need to continue to identify as many opportunities as possible to validate the new OWLExpression content - has anyone written anything for this?
  • New idea for an RVF assertion regarding the ordering of OWL records (based on first concept) with disjoints:
    • Michael Lawley suggested it (and Kai agreed) in MAG last time -
      • Can we please discuss and agree if it's worth creating?
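The proposed assertion on OWL record ordering could be prototyped roughly as below: check that each axiom member's referencedComponentId matches the first concept named in its owlExpression. This is a hedged sketch assuming the `:sctid` functional-syntax prefix convention; GCI and disjoint axioms would need whitelisting, and the real RVF assertion would of course be more nuanced:

```python
import re
from typing import Optional

def first_named_concept(owl_expression: str) -> Optional[str]:
    """Return the first SCTID referenced in an OWL functional-syntax
    expression, assuming the ':123456789' prefix convention."""
    match = re.search(r":(\d{6,18})", owl_expression)
    return match.group(1) if match else None

def axiom_order_ok(referenced_component_id: str, owl_expression: str) -> bool:
    # Candidate assertion: the member's referencedComponentId should be the
    # first concept named in the axiom (GCIs/disjoints would be whitelisted).
    return first_named_concept(owl_expression) == referenced_component_id

# Well-ordered axiom: referenced component appears first in the expression
ok = axiom_order_ok("73211009", "SubClassOf(:73211009 :362969004)")
# Mis-ordered axiom: a different concept appears first
bad = axiom_order_ok("73211009", "SubClassOf(:362969004 :73211009)")
```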
23Implementation Load TestAll

RVF has now been open sourced to allow people to contribute towards it more easily, so that Implementation issues can be reverse engineered into the assertions. All of the NRC validation systems should remain separate, in order to ensure as great a coverage across the board as possible.

However, it makes sense to ensure the critical tests are included in all systems, in order to ensure that if, say, one NRC doesn't have the capacity to run Alpha/Beta testing for a certain release, we don't miss critical checks out. We are working on this in the Working Group, and also in the RVF Improvement program, where we are including the DROOLS rules, etc. These are also being incorporated into the front end input validation for the SCA.

TRAG to therefore discuss taking the Implementation Load test forward, including the potential to incorporate key rules from NRC validation systems into the RVF. So we should discuss the tests that are specific to the Implementation of vendor and affiliate systems, in order that we can facilitate the best baseline for the RVF when agreeing the generic testing functionality in the Working Group.


  • Matt Cordell will promote some useful new ADHA specific rules to the RVF so we can improve the scope... report back in October 2019
  • Chris Morris to do the same - get the RVF up and running and then promote any missing rules that they run locally.... report back in October 2019
  • Updates?
24

Modularisation of SNOMED CT

All

Dion McMurtrie completed the Alpha release - did anyone have a chance to review it? (I haven't had any requests for access to the remainder of the package)

The subject of Modularisation needs to be discussed between the various AG's who are considering the topic, before we can proceed with the Release-specific sections.


We need to discuss any red flags expected for the major areas of the strategy:

  1. Modularisation
  2. Members who want to abstain from monthly releases, and therefore need to use deltas containing multiple effective times.
  3. We also need to consider whether we continue to hold the date against the root concept - this perhaps still works for 12-monthly releases, but not necessarily for daily continuous delivery!
  • THIS NOW BECOMES CRITICAL TO THE STRATEGIC DIRECTION WE DISCUSSED IN TERMS OF MODULARISING OUR CONTENT, AND IMPROVING THE WAY THAT THE MDRS WORKS, IN ORDER TO ALLOW RANGES OF DEPENDENCIES. THIS WILL ALLOW THE "UNIT" OF RELEASE TO BE REFINED ACCORDING TO THE RELEVANT USE CASES.
  • Understand the Use cases thoroughly, and refine the proposal doc to provide people with more real information - Dion McMurtrie TO PROVIDE THESE USE CASES FOR Andrew Atkinson TO DOCUMENT
  • Does the POC allow for concepts to be contained within multiple modules? NO - BUT DION CAN'T THINK OF ANY CONCRETE EXAMPLES WHERE THIS WOULD BE NECESSARY
  • What about cross module dependencies? Michael Lawley's idea on having a separate Module purely for managing module dependencies
  • IN THE FINAL PROPOSAL, WE NEED TO CREATE A NESTED MDRS TO MANAGE THE INTER-MODULE DEPENDENCIES (as per Michael's comments)
  • NEED TO PROVIDE GOOD EXAMPLES AND WHITE PAPERS OF THE USE CASES FOR MODULARISATION IN ORDER TO ENGAGE OTHERS...

  • AFTER SIGNIFICANT DISCUSSION AND CONSIDERATION, THERE ARE NO VALID USE CASES LEFT FOR MODULARISATION. IT CAUSES A LOT OF WORK AND POTENTIAL CONFUSION, WITHOUT ANY TANGIBLE BENEFIT.
  • THE PERCEIVED BENEFIT OF HAVING A WAY TO REDUCE THE SIZE/SCOPE OF RELEASE PACKAGES IS BOTH a) invalid (due to everyone's experience of being unable to successfully do anything useful with any small part of SNOMED!), and b) easily answered by tooling that uses the ECL to identify sub-sections of SNOMED to pull out for research purposes, etc.
  • THEREFORE AS OF APRIL 2018 THE FEEDBACK FOR RORY AND THE STRATEGY TEAM WAS THAT MODULARISATION SHOULD NOT BE IMPLEMENTED UNLESS A VALID USE CASE CAN BE IDENTIFIED.
  • HOWEVER, KNOWING THE HISTORY OF THIS ISSUE, THIS WASN'T NECESSARILY GOING TO BE THE FINAL WORD ON THE MATTER, SO IS EVERYONE STILL SURE THAT THERE ARE NO KNOWN USE CASES FOR MODULARISATION? (eg) linking modules to use cases, as Keith was talking about with Suicide risk assessment in Saturday's meeting, etc?
  • This topic came up several times again during other discussions in the April 2019 meetings, and it was clear that people had not yet given up on the idea of Modularisation - we therefore need to discuss further in October 2019....
  • Agreed to see where the linked discussions in the MAG etc end up going, and then discussing the proposals rather than just in abstract....
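Point 2 above (deltas containing multiple effective times) implies that consumers would need to apply delta rows in effectiveTime order rather than all at once. A minimal sketch of that idea, assuming simple tab-separated RF2 rows with id in the first column and effectiveTime in the second (an illustration only, not an agreed mechanism):

```python
# Hedged sketch: applying a delta whose rows carry multiple effectiveTimes,
# as anticipated for members who skip some monthly releases. Rows are assumed
# tab-separated with id in column 1 and effectiveTime in column 2; for each
# component id, the row with the latest effectiveTime wins.

def apply_multi_effective_time_delta(snapshot, delta_rows):
    """Apply delta rows to a {component_id: row} snapshot, in effectiveTime order."""
    for row in sorted(delta_rows, key=lambda r: r.split("\t")[1]):
        fields = row.split("\t")
        snapshot[fields[0]] = row    # later effectiveTimes overwrite earlier ones
    return snapshot
```

Because later effectiveTimes overwrite earlier ones per component id, a member skipping several monthly releases could apply one rolled-up delta and end up in the same state as someone who applied each monthly delta in turn.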
25Member Forum item: "Development of a validation service where releases can be submitted for testing" - The Shared Validation ServiceAll

Last meeting the TRAG proposed use cases for creating an actual service (with a user-friendly UI, etc) to enable people to load up their release packages and run them through the standard validation assertions.

Standardisation is the primary use case here - everyone agrees that there is a significant benefit to interoperability by ensuring that all RF2 packages are standard and conformant to the basic standards at least - and so this is a strong business case for the service.

We agreed that whilst we have the appetite to have one, this will be a long term goal - to get us started we should use the open sourced RVF as a basis to refine the rules.

We therefore set up a working group to decide a) what the scope/targets should be, b) what technology platform would be most appropriate, and c) what the high level rules should be (packaging format, content etc) - Working Group: Generic Validation service

The good news is that we've now used the initial discussions we had as part of the working group to refine the requirements for the ongoing RVF improvement program. This is due to complete within the next few months, at which point the working group will meet again in order to begin the full gap analysis between the various streams of validation that we all have.


Liara also discussed validation with ADHA during the London conference - Dion, do you have a quick update on where those discussions got to?


  • Plan is to ensure that the generic service is flexible enough to fail gracefully if certain extensions don't contain some of the expected files, etc.
  • We should also provide a standard Manifest to show what files they should include wherever possible (even if blank)
  • We'll now take this forward with the working group, using the comprehensive list of current SI assertions as the baseline:
  • AAT sent the list out to the working group in October 2018, and requested comparison analysis results to be posted asap, with a view to being able to report back to the TRAG on proposed scope in April 2019...
  • Suggestion for a new RVF assertion for transitive closure loops:
    • This would highlight situations where, through a path of is-a relationships, a concept is an ancestor of itself.
    • When validation is run, it would consider the source and destination of all active inferred is-a relationships: starting at each concept in turn, tracing from source to destination of each relevant relationship until reaching the root concept should NOT encounter the starting concept.
    • This would have taken a long time (and a lot of processor power) in earlier incarnations of the RVF, but with auto-scaling etc. it should now only take a few seconds
    • We've had some real life examples of this recently, so there is now a business case for adding this kind of assertion
    • Any objections?
  • Reports from the comparison analysis from the working group...
    • Suzy / Patrick?...
    • Matt...
    • Chris...
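The transitive-closure assertion proposed above could be sketched roughly as follows; the edge-list input format is an assumption for illustration, not the RVF's actual relationship-file handling:

```python
# Sketch of the proposed transitive-closure assertion: flag any concept that
# is an ancestor of itself via active inferred is-a relationships. The input
# here is a simple (sourceId, destinationId) edge list - an assumption made
# for illustration only.

def find_isa_cycles(isa_edges):
    """Return the set of concept ids that can reach themselves via is-a edges."""
    parents = {}
    for source, destination in isa_edges:
        parents.setdefault(source, set()).add(destination)

    in_cycle = set()
    for start in parents:
        seen = set()
        stack = [start]
        while stack:
            concept = stack.pop()
            for parent in parents.get(concept, ()):
                if parent == start:
                    in_cycle.add(start)      # reached the starting concept again
                elif parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
    return in_cycle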
26"Negative Delta" file approachAllThis approach was successfully implemented in order to resolve the issues found in the September 2017 US Edition - is everyone comfortable with using this approach for all future similar situations? If so we can document it as the accepted practice in these circumstances...
  •  NO! Everyone is decidedly uncomfortable with this solution! In particular Keith Campbell, Michael Lawley and Guillermo are all vehemently opposed to changing history.
  • The consensus is that in the particular example of the US problem, we should have instead granted permission for the US to publish an update record in the International module, thus fixing the problem (though leaving the incorrect history in place). This would have been far preferable to changing history.
  • ACTION POINT FOR EVERYONE FOR OCTOBER 2018: (Dion McMurtrie, Matt Cordell, Orsolya Bali, Suzy Roy, Corey Smith, Harold Solbrig, Mikael Nyström, Chris Morris)
    We all need to come up with potential scenarios where, going forward, we may need to implement a similar solution to the Negative Delta, and send them to AAT. Once they are documented, we can discuss again and agree on the correct approach in each case; AAT will then document these as standard, proportionate responses to each situation, and we will use them as guidelines in future. If issues come up that fall outside these situations, we'll come back to the group to discuss each one individually, and then add the agreed solutions back into the list.
  1. Preference now is to retain EVERYTHING in the Full file, regardless of errors - the Full file should show the state at that point in time, even if it was an error! Strictly speaking, there is no error in the Full file itself; the Full file accurately represents the error in the content/data at that time.
  2. The problem here is that the tools are unable to cope with historical errors - so we perhaps need to update the tools to allow for these errors.
  3. So we need the tools to be able to whitelist the errors, and honestly document the KNOWN ISSUES (preferably in a machine readable manner), so that everyone knows what the historical errors were.
  4. The manner of this documentation is up for debate - perhaps we add it to a new refset? We could then use something very similar in format to the Negative Delta, but instead of actually changing history retrospectively, we simply document the errors as known issues, allowing people to deal with the information in their own extensions and systems in whatever way they feel is appropriate.
  5. The only situation we can think of where we couldn't apply the above gentle response would be copyright infringement: if we discovered (several releases after the fact) that we had released content in direct infringement of copyright, we would potentially have to revoke all releases since the issue occurred. However, this would raise a very interesting situation where patient safety might be compromised - if we remove all historical content that contravened the copyright, we run the risk of patient data being impacted, thus potentially adversely affecting decision support. This is simple to resolve when the problem is in the latest release (simply recall the release), but if found in a 5-year-old release, for example, it could be very problematic to recall 5 years' worth of content and change it!
  • October 2018 - Guillermo proposed a separate possibility, which is to introduce a new status (eg -1) whereby, if you find this status in the latest snapshot, you simply ignore the record. This doesn't, however, address the use case where there is a legal contravention and you need to physically remove the content from the package - and where something contravenes the RF2 paradigm, you can't use the RF2 format to correct something that is RF2-invalid! So this is unlikely to work...
    • Nobody is on board with this idea, as it's too fragile and introduces unnecessary complexity such as we had with RF1...

  • April 2019
  • If we're still all in agreement with this, then steps 1-5 above should all be documented and disseminated to get confirmation of approval from everyone??
  • Did everyone read through everything? Has anyone got any further scenarios that we can include in the documentation?

  • The EAG raised this issue again on 08/04/2019 - Peter to try to make it to the next TRAG to explain the use case that was raised today and elaborate on the new proposal...
  • The TRAG discussed this issue at length, and came to the conclusion that we cannot address ALL potential use cases with a standard, generic, solution (certainly not any of those offered above).
    • Instead the solution in each case should be agreed on given each specific use case that comes up each time
    • So INSTEAD we should update the Critical Incident Policy to very clearly define the process to be followed each time we need to remove something from the Published release(s):
      • Which group of people should make the decision on the solution
      • Perhaps we also provide examples of how each use case might be resolved:
        • For Legal/IP contraventions, we should either remove content from history entirely, or redact it (leave the records in place, but remove all content from fields except for UUID, effectiveTime, moduleID, etc - thus allowing traceability of the history of the components, without including the offending content itself)
        • For Clinical risk issues, we can remove it from the Snapshot, but leave the Full file intact to leave a historical audit trail whilst ensuring that the dangerous content shouldn't get used again (as most people use the snapshot) - see Full file steps 1-5 above, etc
      • How to communicate it out to the users, etc
  • OCTOBER 2019 - DISCUSSION RE-OPENED AS PART OF THE MAG:
  • ONCE FEEDBACK OBTAINED FROM MAG:
  • Andrew Atkinson to update the Critical Incident Policy with
    • the various use cases that we've identified so far
    • the governing bodies who should be the deciding entities
    • the process for making the decision in each case
    • including the critical entities that need to be collaborated with in each case (all NRCs, plus 3rd party suppliers (termMed etc) who represent some of them), to ensure the final solution does not break outlying extensions
    • the process for communicating out those decisions to ALL relevant users
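As an illustration of the redaction option discussed above for Legal/IP contraventions (keep each offending row for traceability, but strip the offending content), a hedged sketch - the exact columns to retain would need to be agreed per file type, and the four kept here (id, effectiveTime, active, moduleId) are an assumption:

```python
# Hedged sketch of the "redaction" option for Legal/IP cases: the row stays
# in place so component history remains traceable, but every field after the
# leading RF2 columns (id, effectiveTime, active, moduleId) is blanked.
# Which columns to keep per file type would need to be agreed.

def redact_rf2_row(row, keep=4):
    """Blank all fields after the first `keep` columns of a tab-separated row."""
    fields = row.rstrip("\n").split("\t")
    redacted = fields[:keep] + [""] * (len(fields) - keep)
    return "\t".join(redacted)
```

The tab structure is preserved, so redacted files still parse with the normal RF2 column layout.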
27Re-activation policyThe standardisation of the SNOMED CT policy towards re-activation of RF2 records.

For example, we have had instances where MRCM records were originally inactivated, and replaced with updated versions that were subsequently proven to be less valid than the original records. The question was then whether or not to re-activate the original records, or to inactivate the new (invalid replacements) and create new valid records identical to the original records (but with new ID's).

Initially, it seems clear that the correct choice is re-activation, as this is cleaner, keeps churn in the files down to a minimum, and avoids confusion with any potential de-duping processes.

However, there is an argument from the users' point of view, as they seem to prefer to have complete visibility of the historical audit trail, and from this perspective having all inactivations and the final (new) active records in the snapshot + delta makes it easier for those who don't use the full file to see what decisions were taken and when.

So we would like to agree a standard, consistent approach to use, rather than deciding on a case by case basis.
  • Thoughts?
    • Everyone prefers reactivation
    • Andrew Atkinson to feed this back to Linda - DONE

  • Another question that's just come up in this area, is the assignment of inactivated concepts back to Primitive status.
    • The question is relevant now because this makes it much more awkward for authors to re-activate concepts, something that's becoming increasingly preferable (rather than creating new, duplicate concepts of the old originally inactivated concepts).
    • In particular, it means that sometimes you cannot revert an inactivated axiom because it refers to other concepts which have been made inactive.
    • Does anyone remember why the rule was originally put in place?
      • Not really!
      • NO-ONE has an issue with us changing this in particular, so Andrew Atkinson to feed this back to Kai

28



29Proposed deprecation of the CTV3 Identifier simple mapAllDue to it coming to the end of its useful life, SNOMED International would like to propose planning for the deprecation of the CTV3 Identifier simple map (that currently resides in the RF2 International Edition package) as of the January 2020 International Edition. 

Some Member countries have already identified the reduction of the effectiveness of the product, and have already put plans in place to withdraw support for the CTV3 Identifiers from 2020 onwards. 

The TRAG therefore need to discuss whether or not there are any apparent problems with the proposed deprecation, and if so how they can be mitigated. 

We must also discuss the most effective method to pro-actively communicate out announcements to the community to warn them of the upcoming changes, in order to ensure that everyone who may still be using the Identifiers has plenty of notice in order to be able to make the necessary arrangements well in advance. 

Finally, we will need to decide on the best method for extricating it from the package, in order to ensure the smoothest transition for all parties, whilst remaining in line with the RF2 standards and best practices. 
  • AAT CHECKED THE PREVIOUS IMPLEMENTATIONS OF DEPRECATION OF BOTH THE ICD-9-CM AND RT IDENTIFIERS, AND AS EXPECTED BOTH WERE IN THE CORE MODULE, AND REMAINED IN THE CORE MODULE IN THE STATIC PACKAGES - SO ARE THERE ANY ISSUES WITH DOING THIS AGAIN?
  • So the plan would be to follow the same deprecation process as we did with ICD-9-CM (ie)
    • move all of the content to a Static Package in July 2020, and inactivate all of the content
    • publish the reasons for inactivation in the historical associations
    • Release Notes similar to ICD-9 = SNOMED CT ICD-9-CM Resource Package - IHTSDO Release notes
    • CREATE A STATIC PACKAGE FOR CTV3 BASED ON THE JULY 2019 MAP FILES AND PUBLISH ON MLDS (and link through from Confluence link as well). ALSO LIFT THE CTV3 SPECIFIC DOCS FROM THE Jan 2020 RELEASE NOTES TO INCLUDE IN THE PACKAGE.
    1. Date of the files should be before the July 2020 edition (so, say, 1st June), in order to prevent inference of a dependency on the July 2020 International Edition
      1. So we set the effectiveTime of the Static package to be in between the relevant International Edition releases (eg 1st June)
      2. This is to ensure that it's clear that the dependency of the Static package will always be the previous International Edition (here Jan 2020), and not continually updated to future releases
      3. It cannot therefore have an effectiveTime of July 2020 (as we would normally expect, because we're removing the records from the July 2020 Int Edition), as this would suggest a dependency on July 2020 content which doesn't exist
      4. It also can't have an effectiveTime of Jan 2020, as we need to distinguish between the final published content which was Active in Jan 2020, and the new static package content where everything is Inactive.
    2. Inside the files should be all International edition file structures, all empty except for:
      1. Delta ComplexMap file needs to be cleared down (headers only), as no change in the content since the Jan 2020 files, so no Delta
      2. Full and Snapshot ComplexMap files exactly as they were in Jan 2020 release (including the effectiveTimes)
      3. ModuleDependency file needs to be blank: CTV3 was in the core module (not in its own module, like ICD-10 is), so the dependency of the core module (and therefore of the CTV3 content) on the Jan 2020 edition is already called out in the Jan 2020 ModuleDependency file, and persists for the static package too.
      4. Date of all of the files inside the package should be the new date (1st June)
      5. But all effectiveTimes remain as they were in Jan 2020
      6. Leave refsetDescriptor records as they are in the International edition
      7. RELEASE Notes Should describe all of the thinking we went through when creating this package, why the moduleDependency file remains blank, and why we’ve wiped the Delta, etc (see above)
  • AND ALSO COMMS SAME AS WE DID WITH THE RT IDENTIFIER REFSET DEPRECATION:
    • RT Identifier Refset deprecation:

      We need additional comms around the July 2017 release, in addition to the usual Release Notes wording, in order to confirm what is happening and the rationale behind it.

      To re-iterate what was discussed on the previous call, Legal counsel confirmed that from a legal perspective, he doesn’t consider that it’s either necessary (or even advisable) for us to send CAP any further communications on this matter.  Legal counsel is confident that the informal discussions that we’ve already had with them (in order to remind them about what they need to do), are sufficient to cover our legal obligations, given that the licence is theirs and not SNOMED International's.  Therefore we no longer need to send a formal letter to CAP.

  • Has anyone identified any issues with the proposed deprecation?

    • If so what?

  • Is everyone still in favour of the refined process to use to deprecate??

  • If all good then Andrew Atkinson to begin formal deprecation process
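The Delta clear-down step described above (headers only, since nothing changed after the Jan 2020 files) could be as simple as the following sketch; the file paths are hypothetical placeholders:

```python
# Illustrative sketch of preparing the static-package Delta described above:
# the Delta is reduced to its header row (no change in the ComplexMap content
# since Jan 2020 means an empty Delta), while Full and Snapshot files would be
# passed through unchanged with their Jan 2020 effectiveTimes intact.

def clear_delta(source_path, target_path):
    """Write a headers-only copy of an RF2 delta file."""
    with open(source_path, encoding="utf-8") as src:
        header = src.readline()          # RF2 files start with a single header row
    with open(target_path, "w", encoding="utf-8") as dst:
        dst.write(header)
```

The Full and Snapshot ComplexMap files would simply be copied through byte-for-byte, preserving their Jan 2020 effectiveTimes as agreed above.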

30Clean modularizationAll

There are 22 module concepts that are in the 900000000000012004 |SNOMED CT model component| module.

I don't think it's documented anywhere, but we (AU) have made the assumption that the concept for a module should belong to the module it identifies. I suspect we've started to discuss this before, but I can't recall how accepted this position was. The 22 concepts below (including the core module) aren't part of the core release, but clutter up the hierarchy. We also get enquiries about this content, some of which is non-existent/unavailable.

  • Thoughts please from everyone on whether or not this proposal would have any impact (negative or positive) on the International Edition?
31Proposal to increase the level of metadata available for authors to log decisions made during content authoring

Jim Case +

Suzy Roy

This is a subject that would be helpful to include Jim in the discussions, as he has some definite opinions on how to improve the metadata in this area. 

Some suggestions would be to make more detailed information available for authors to describe their reasons for inactivation (especially in those areas where currently they are forced to use inactivation reason codes that aren't completely representative of the reasons in that instance).

Adding Jim Case - for further discussion later...

32Potential for adding a "withdrawn" reason for inactivated contentAllDiscussions around the future strategy for SNOMED CT have included the potential for adding new statuses for content. 

In particular, many people have suggested that problems are created for those either mapping or translating from content that's still "in development". If (as is often the case) they use Daily Builds etc as input data, they can often get tripped up by content which is created but then withdrawn before it's versioned and officially released. It would be extremely useful to those users to have access to traceability data describing the reasons behind why they were removed, in order to support accurate mapping/translation. 

In another use case, there's the possibility that content needs to be formally withdrawn from the International Edition AFTER it's been officially released. This would be the case if, for example, content has unintentionally been published that breaks the RF2 paradigm, or contravenes licensing laws, etc. In this case mere inactivation is not sufficient, the content instead needs to be completely withdrawn from the releases and sometimes even from history. 

The TRAG needs to discuss all of this and be ready with recommendations if these proposals are taken forward.
  • ONE OF THE POTENTIAL SOLUTIONS TO THE ISSUE ABOVE: "Negative Delta" file approach
  • Use cases:
    • undo a historical issue (one that breaks the RF2 paradigm, etc) without pretending it never happened - in this case we should use the Negative Delta approach, but only in EXTREME circumstances
    • Legal contraventions - in this case we should use the Negative Delta approach, but only in EXTREME circumstances
    • Dead-on-arrival components - it should be okay to have these, openly dead on arrival and therefore inactive so that no-one maps to them, etc. However, it's useful to be able to see them (even though they were activated + inactivated within the same release cycle) - people who need to map/translate etc DURING the release cycle have to rely on the Daily Build and use live data still in development. If those concepts disappear by the time of the International Edition, it causes problems for the maps/translations that already include them.
      • Therefore the best answer is for us to move to having 2x Daily Builds - the existing one + a separate true Daily Build, where each day's build is built relative to the previous day, and NOT to the previous Published release. This new Daily Build could then be properly relied upon by mapping and translation projects.
      • Can we align this with the transition to the more Frequent Releases?
  • HAS ANYONE HAD ANY MORE THOUGHTS ON THIS SINCE OUR LAST DISCUSSIONS??
  • MAG to discuss tomorrow (30/10/2019)
33National pre-release processes- Joint work item with the CMAG

Suzy to provide update on progress...

Suzy introduced the topic and gave a brief update on the agreed scope and timelines. She also requested any input that people not already involved might feel would be useful and appropriate.

Who's already involved? Would anyone else like to become involved? We've still only had one call about this, so it's still a good time to join in. Some people (Feikje etc) were particularly interested in the Continuous Delivery discussions, so we'll fold that in later...

Adding Matt Cordell (currently on CMAG)

  • The working group has not progressed and the priority of this dictates that it is unlikely to do so quickly, so Suzy Roy / Patrick McLaughlin and Matt Cordell will continue to keep the TRAG informed of progress as and when we need to get involved...
  • The next likely work will be done by Suzy for documenting the US Edition process...
  • We also need to consider the work being done in the Shared Validation Service here, as this will help us to define and standardise National pre-release processes...



34Spanish Member Collaboration Process refinementsSpanish Edition users only

There was a presentation made at 17:10 by Arturo Romero Gutierrez, to walk through the improvements to this process that have been discussed and agreed since the inception of this new process, and what the Spanish Edition users need to commit to in order to be contributing part of this process.

Everyone was welcome to stay and participate!

  • Presentation from Arturo
  • Agreement from all Spanish Edition users who were present (Alejandra, Suzy + Alejandro) to collaborate and contribute to the refined process
  • We then formalised the process and distributed the document out to all interested parties

  • Arturo, Guillermo, and all others to report back on how the process is working for them?
    • October 2019 cycle worked well for Arturo + Guillermo.
    • April 2020......?
35Spanish Edition feedback processes and content improvementsSpanish contingentThis is for discussion with all those interested in the Spanish Edition, in particular the refinement of the content and the processes behind the feedback procedure.
36Discussion on the conflict between Extension content and International content

All

Jim Case

The answer to this may be quite simple:

  1. If extensions promote content via an RF2 delta, we just need to retain all IDs and only change the moduleId and effectiveTime; it is then all managed by effectiveTime.
  2. If IHTSDO rejects content, this is also managed.
  3. The only issue arises if IHTSDO wants to change the FSN: we then need a way to manage the change in the meaning of the concept without creating 2 FSNs, so we need a feedback loop to ensure that it's also corrected at source in the extension as well as in the International Edition.

TRAG to continue the discussion and come to a conclusion that will work for all.

  • Has this been answered in its entirety by Jim's new agreed approach? (link here to his final position)
  • Most people consider that Jim's approach covers this under most circumstances. We also need to ensure that we follow the approach listed to the left - so we should confirm all of this has been working in practice since April 2018, and if so close down?
37

NEW ITEM

Versioning Templates

* MAG crossover

* EAG crossover


The EAG have proposed the need to version templates in some way, and potentially even make them "Publishable" components (with all of the relevant metadata that goes along with that). Also the potential to make them language-sensitive.

They would then also need to be automatically validated themselves, as well as then being used in the automated validation of the International Edition!

  • Keep an eye on EAG + MAG discussions on this topic and
  • Ensure that the decisions are fed into our Continuous Delivery proposal
38AG Declarations of InterestAllCould each of you please go in and update your information? If there has been no change, then you can simply update the last column with the date. 
  •  October 2019
39Any other questions / issues?All
  •  




3URGENT: Proposed format of the July 2019 Optional Release packageAllWe wanted to give you a quick heads up on the proposed format of the July 2019 optional release package. Whilst we haven’t received any adverse feedback from any users, we did find a couple of minor issues (during our internal testing) with the UUID’s due to the divergence of the Production and Optional packages. For example, as the Jan 2019 optional release wasn’t ever officially published, a few of the ID’s in the MRCM and inferred files have been used for different records in the July 2019 Optional release than in the Jan 2019 Optional release.

Given these potential inconsistencies, we’re proposing to publish just the OWLAxiom files themselves in the July 2019 Optional release. This will ensure that the information provided is accurate, and given that there are no known use cases for anyone to use anything except for the Axiom files in this Optional package, it will serve to remove any possible confusion.
  • Please can everyone confirm whether or not you're happy with the final published format, and if not please provide details about why this would have an adverse effect on you or any other known users of this package?
  • Interested to know if anyone's intending to use it other than those I already know about?
  • Need feedback in April 2020 to see if we need to discuss further or
    • Provide any other data in an additional release?
    • IF NOT close down in April 2020
    • NO further feedback, so closing this topic down as planned
4



5

Active discussions for April 2020

6URGENT: Additional Delta file planned for Corona Virus descriptionsAll

APRIL 2020 - How did the Release go for the end users?

Did anyone use it yet or just re-publish?

Everyone now in agreement with the 5th option, as follows:

5.  We publish the formal July 2020 International Edition as if the March 2020 Interim release were an official release, with a Delta relative to March 2020, containing all of the new Corona content with an effectiveTime of 20200309.  This is transparent as it doesn't pretend the March 2020 release never happened, but assumes that the majority of July 2020 users DID consume the March 2020 Interim release.

  1. For those users who consumed the March 2020 Interim Release:
    1. It causes no problems for these users who want to use either the Full, Snapshot or Delta, as there will be no duplicate records.

  2. For those users who did NOT consume the March 2020 Interim Release:
    1. It does cause potential problems for these users, especially those who want to use the Deltas, as they will not see the new Corona content from March 2020. However, the TRAG are confident that there aren't many of these users, as the majority either use the Snapshot, or, if they do use the Delta, it's mostly simply to view the differences between this and the previous release, rather than actually updating systems using the Delta files.
    2. They would therefore have to either use the Snapshot, or use a new Rollup Delta package that we publish alongside the July 2020 Release:
      1. This July 2020 Rollup Delta package would be a separate resource with clear documentation.  It would include all changes since January 2020, with the actual, official effectiveTime. Therefore all of the new Corona content would have an effectiveTime of 20200309, and the rest of the content an effectiveTime of 20200731.

      2. This rollup package should include instructions on how to cope with multiple effectiveTimes, and on how to validate against the Snapshot/Full so that users flag up any updates they somehow missed in the middle of their Delta period - (eg) if they are taking a delta for the entirety of 2019, but something goes wrong and the rollup delta doesn't include the updates for, say, March 2019. This would be very difficult to pick up in manual validation, so it needs an automated check against either multiple Snapshots, or the latest Full file.

      3. We can also use this Rollup package as an opportunity to trial our new proposal for the Delta file naming conventions - these will in future contain the effectiveTime of BOTH the current Delta, and also the release that the Delta is taken from - for example, if a Delta is from Jan 2020 to March 2020:

          1.  sct2_Concept_Delta_INT_20200131_20200309.txt 

    3. OR we simply ask them to first apply the March Interim release Delta and after that apply the official July 2020 Delta (relative to March).

      1. This option prevents confusion and a lot of extra documentation, and also prevents the users from having to potentially change their systems now in order to cope with loading in a rollup delta that contains multiple effectiveTimes.


  3. While there is some room for delta users to get confused about which one to use (the official Delta since the March 2020 release, or the Rollup Delta), the fact is that both deltas would be consistent; this therefore a) preserves RF2 protocols, and b) pushes forward the prep for continuous delivery.

The consensus is that people should try to use option 2c wherever possible, but that we should provide the rollup package as well just in case this is easier for some users.

If we track the downloads for this rollup delta package, we should then get a really good idea of:

  1. roughly how many people would be likely to use a new "Delta" tool (currently planned for Continuous Delivery), because they aren't using the Snapshots and aren't able to simply load multiple deltas consecutively.
  2. then speak to these people and try to gently encourage them to move to using the Snapshots for future updates, in order to make strong foundations for the future migration to Continuous Delivery.
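To make the proposed dual-dated Delta naming concrete, here is a minimal sketch of parsing it - the regex pattern and field names are assumptions for illustration, not a published specification:

```python
import re

# Assumed pattern for the proposed convention: the file name carries BOTH the
# baseline release date the Delta is relative to AND the Delta's own
# effectiveTime, e.g. sct2_Concept_Delta_INT_20200131_20200309.txt
DELTA_NAME = re.compile(
    r"^sct2_(?P<component>\w+)_Delta_(?P<namespace>[A-Z]+)_"
    r"(?P<baseline>\d{8})_(?P<effective_time>\d{8})\.txt$"
)

def parse_delta_filename(name: str) -> dict:
    """Return the baseline release and effectiveTime encoded in the name."""
    m = DELTA_NAME.match(name)
    if not m:
        raise ValueError(f"not a dual-dated Delta file name: {name}")
    return m.groupdict()

info = parse_delta_filename("sct2_Concept_Delta_INT_20200131_20200309.txt")
# info["baseline"] is the release the Delta is taken from;
# info["effective_time"] is the Delta's own date.
```

With both dates machine-readable, a loader can refuse a Delta whose baseline doesn't match the release it currently holds - the exact gap the current single-dated names leave open.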
7 - Multiple effectiveTimes in Delta files - are they required? - All - APRIL 2020 - This needs to be answered for interim releases like the one we've just had, but more importantly also for the future migration to Continuous Delivery

Everyone in agreement that multiple effective times will be necessary in Delta files for future continuous releases.

We also unequivocally need to include ALL historical records in a true Delta file.

HOWEVER, the entire use case for Delta files is becoming less and less certain, because fewer and fewer users are using Deltas to actually upload content; most use them simply to quickly and cleanly ascertain the latest changes between the previous and current releases.
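As a sketch of why multiple effectiveTimes in one Delta are straightforward to consume (record shape here is simplified and assumed, not the full RF2 column set): folding such a Delta into a Snapshot just means keeping, per component id, the row with the highest effectiveTime:

```python
# Fold a Delta containing multiple effectiveTimes into an existing Snapshot.
# For each component id the row with the highest effectiveTime wins, however
# many intermediate states the Delta carries.
def apply_delta(snapshot: dict, delta_rows: list) -> dict:
    """snapshot: {id: row}; delta_rows: rows with 'id' and 'effectiveTime' keys."""
    merged = dict(snapshot)
    for row in delta_rows:
        current = merged.get(row["id"])
        if current is None or row["effectiveTime"] > current["effectiveTime"]:
            merged[row["id"]] = row
    return merged

snapshot = {"100": {"id": "100", "effectiveTime": "20200131", "active": "1"}}
delta = [
    {"id": "100", "effectiveTime": "20200309", "active": "0"},  # interim change
    {"id": "200", "effectiveTime": "20200731", "active": "1"},  # new component
]
result = apply_delta(snapshot, delta)
# Component 100 now reflects its March state; 200 is newly present.
```

The YYYYMMDD date format compares correctly even as plain strings, which is why a latest-wins merge is all that's needed regardless of how many effectiveTimes the Delta spans.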

This latter use case could perhaps be much better served by offering a Diff Tool, rather than the originally intended Delta tool:


  • This Diff tool would allow the user to enter the two releases they want to see the differences between, and then:

    • just show a diff between the 2x Snapshots, and ask whether or not they want to see ALL changes between the two dates or just the LATEST change...

    • We could also include information on WHAT concepts were changed to - (eg) historical associations - CHANGED TO....
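The "latest change only" behaviour described above can be sketched as follows - a simplified illustration with an assumed record shape, not the tool's actual design:

```python
# Sketch of the proposed Diff tool behaviour: compare two Snapshots loaded as
# {componentId: row} maps and report only the LATEST state of each component
# that differs, rather than every intermediate change.
def snapshot_diff(older: dict, newer: dict) -> dict:
    added = [row for cid, row in newer.items() if cid not in older]
    changed = [row for cid, row in newer.items()
               if cid in older and row != older[cid]]
    removed_ids = [cid for cid in older if cid not in newer]
    return {"added": added, "changed": changed, "removed_ids": removed_ids}

older = {"1": {"id": "1", "active": "1"}}
newer = {"1": {"id": "1", "active": "0"},   # inactivated since the older release
         "2": {"id": "2", "active": "1"}}   # brand new component
diff = snapshot_diff(older, newer)
```

Showing ALL intermediate changes would instead require the Full files; the "CHANGED TO" information (e.g. historical associations) would then hang off each entry in the changed list.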

8 - Refset Descriptor Inactivation - Matt Cordell - TRAG to decide on the correct policy and feed back to Matt...
9 - MRCM format update - All - APRIL 2020 - everyone to discuss and agree
10 - URGENT: CONCRETE DOMAINS Consultation + Concrete Domains (* MAG crossover) - All

The short term proposal of precoordinating the numbers and measures as concepts (and therefore not changing the RF2 format) was generally well accepted, though there were concerns raised regarding the longevity of this approach, and whether or not this addresses the original target of the project (which was to allow a standardised approach across all extensions, instead of perpetuating distinct coding for different users). The other concern raised was that any solution needs to be implemented rapidly, as otherwise the various members will be forced to start/continue implementing their own solutions.

Peter G. Williams has therefore taken this forward in the Modelling AG for further implementation. The functionality has been rolled into the wider discussion of enhancing SNOMED's DL capabilities. The Modelling AG is planning a targeted discussion on this in June 2017, and will then produce a document to be reviewed by the MAG at the October conference. This Proposal document will be shared when complete.

Last update from Peter was that the OWL Refset solution allows us to classify with concrete domains. The thing we're still discussing is how to represent that in the release. The current most popular approach suggested is to create a 2nd Inferred file ("sct2_RelationshipConcreteValues_Delta_INT_[date].txt") which contains concrete values in the destination column, rather than SCTIDs. This allows them to be added without impact to the current approach i.e. ignore it if you don't want to use them. The new file would only contain concrete values.
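An illustrative reader for the proposed concrete-values file - the columns here are abbreviated for the sketch (a real RF2 file would also carry moduleId, relationshipGroup, etc), the ids in the sample row are illustrative, and the '#' prefix on numeric values follows the OWL concrete-value convention described in the proposal:

```python
import csv
import io

# Sample row in the assumed (abbreviated) tab-separated layout: the value
# column carries a literal such as "#500" instead of a destination SCTID.
SAMPLE = (
    "id\teffectiveTime\tactive\tsourceId\tvalue\ttypeId\n"
    "900001\t20210131\t1\t322236009\t#500\t1142135004\n"
)

def read_concrete_values(text: str) -> list:
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    for row in rows:
        # Strip the '#' prefix and convert, so numeric range queries
        # (the analytical benefit cited in the proposal) become possible.
        row["numeric_value"] = float(row["value"].lstrip("#"))
    return rows

rows = read_concrete_values(SAMPLE)
```

Because the concrete values live in their own file, existing loaders that only read the current inferred relationship file can ignore it entirely - the "optional to consume" property discussed below.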

At the same time, existing drug strengths and counts expressed using concepts (which represent those same numeric values) will be inactivated. SNOMED International will inactivate the existing strength / concentration attributes which use concepts-as-numbers, replace them with new ones (using the same FSNs), and switch the target/value to the corresponding concrete numeric.

This enhancement will increase the analytical power of SNOMED CT through the use of numeric queries and assist with interoperability by removing the need for extension maintainers to all - separately - add concepts representing numbers in order to publish their own drug dictionaries. 

  •  October 2018 - Harold Solbrig to give an update on the MAG's plans? No further updates yet, check back in April 2019....
  • Consultation: SNOMED International are now running a consultation around the introduction of Concrete Domains to SNOMED CT.

    If you are interested in this area, and/or wish to express an opinion on this proposed change, please read the following information and complete the feedback form if desired:

    http://www.snomed.org/news-and-events/articles/addition-concrete-domains-consultation

  • Update from Peter Williams after subsequent MAG discussions - NEW MAG Proposal:
  • ANYONE HAVE ANY FEEDBACK??
  • The MAG have only received formal feedback (via the online form - I know some of you commented direct on the page!) from ONE person so far, so can we please make a point of providing some feedback on this ASAP - even if it's just to say that you're in complete agreement? Thanks!

  • We should also be issuing advice to downstream users of the drug model to avoid using the current concepts as numbers as they will soon be disappearing - can we please have confirmation of who knows that they have users impacted by this, and that they'll provide the advice immediately?

  • Can anyone foresee any impact (negative or positive) on the Release(s)? NO

    • Does the introduction of a second inferred file present any risk of confusion, etc?
      • NO
    • Are there any perceived restrictions around the use of concrete domains in inferred format?
      • NO
    • Should the new inferred file take exactly the same format as the current file?
      • Current proposal removes the DestinationID field completely and replaces it with the new "Value" field
      • But in theory, we could just hold the Values in the existing DestinationID field, if there's a strong business case for people to need the same format as the existing Inferred file? (hard coding of field names in import systems, etc)
      • NO, THE NEW FORMAT IS ACCEPTABLE
    • Will the inactivation of the existing concepts containing drugs strengths/counts cause anyone problems?
      • NO, BUT
      • MORE COMMS NEEDED TO WARN PEOPLE THEY'LL BE INACTIVATED
    • Are there any users, for example, who currently use these concepts but are unable to switch to the new approach?
      • YES plenty, so we just need to continue to ensure this is an OPTIONAL file to consume (it must remain mandatory to include in the Release package)
    • April 2020 - Any further feedback? (especially from any further updates from MAG plans)
11 - Discussion of proposed format and packaging of new combined Freeset product + Proposed new Freeset format - All - TRAG to review and provide feedback and ideas for business case(s)...
  •  Andrew Atkinson to present the current proposal, and gather feedback
  • Feedback:
    • Uncertainty on use cases - however this was mitigated by the specific messaging from SNOMED licensed users to non-licensed recipients...
    • Content
      • DICOM in particular is not representative without sub-division, PLUS actually risky with unverified attributes...
      • AAT to discuss further with Jane, etc
      • Agreed that SI are confident that DICOM will provide some use
    • Using the US PT instead of the FSN (whilst providing less exposure of the IP) prevents visibility of the hierarchy (due to lack of semantic tag) - however the reason for this is because the target users (who are NOT current SNOMED licensed users) will find more use from the PT in drop-downs, messaging, etc than the FSN...
      • Now included both!
    • Everyone happy with each subsequent release being a snapshot - so additions added but inactivations just removed - as long as we include something in the legal licence statement to state that use of all concepts that have ever been included is in perpetuity (even after they've been inactivated)
      • New requirements have suggested that we need to now include a full historical audit trail, even in the Freeset formatted file!
      • This means we've included an Active flag column to allow this to be added in future releases...
      • We don't need to do this for a few months, so we need feedback now on whether or not we think this is a good idea?
      • Any potential drawbacks?
        • None identified in Oct 2019 - but no-one has used it yet!
        • Check again in April 2020......
      • This is a dependency for signing off the final version of the Release packaging conventions and File Naming Conventions item (next)
    • In addition, Members would also like a Proposal to create an additional Simple refset (full RF2) of the entire GPS freeset in order to enable active/inactive querying etc by licenced users...
        • Potential to automate the creation of this using ECL queries if we ensure all freesets are included in the refset tool..

      • Would people still see a valid business case for including an RF2 refset file in the GPS package as well?

        • OCTOBER 2019 - NOT IN THE ROOM - BUT RORY HAS BEEN ASKED FOR IT BY SEVERAL PEOPLE, SO WE NEED TO DO IT

          • This will be in line with the September 2020 GPS release.

        • Any potential drawbacks with doing this?

          • NO

        • If so, should it be part of the existing GPS release package, or a separate file released at the same time?



  • APRIL 2020 - Any other feedback from actually using the GPS freeset file????
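A tiny sketch of how a consumer might handle the new Active flag column discussed above - column names are assumed for illustration, and the second row's id is made up:

```python
# Once the Freeset file carries an Active flag, inactive rows stay in the file
# as the historical audit trail (supporting the licence-in-perpetuity point),
# while day-to-day consumers filter down to active rows.
def active_members(rows: list) -> list:
    return [r for r in rows if r["active"] == "1"]

freeset = [
    {"conceptId": "73211009", "term": "Diabetes mellitus", "active": "1"},
    {"conceptId": "11111111", "term": "Previously included, now inactivated",
     "active": "0"},  # illustrative id - retained for auditability
]
current = active_members(freeset)
```

This keeps the simple "snapshot" consumption model intact for existing users while still allowing the full history to be published in the Freeset format itself.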



12 - Release packaging conventions and File Naming Conventions - All

TRAG to review and provide final feedback.

Reuben to provide feedback on progress of the URI specs + FHIR specs updates...

  • Document updated by Andrew Atkinson in line with the recommendations from the last meeting, and then migrated to a Confluence page here: SNOMED CT Release Configuration and Packaging Conventions
  • To be reviewed in detail by everyone, and all feedback to be discussed in the meetings. AS OF OCTOBER 2017 MOST PEOPLE STILL NEEDED TIME TO REVIEW THE DOC - Andrew Atkinson INFORMED EVERYONE THAT THIS DOCUMENT WILL BE ENFORCED AS OF THE JAN 2018 RELEASE AND THEREFORE WE NEED REVIEWS COMPLETED ASAP... so now need to check if reviews still outstanding, or if all complete and signed off??
  • AAT to add in to the Release Versioning spec that the time stamp is UTC
  • AAT to add the trailing "Z" into the Release packaging conventions to bring us in line with ISO8601
  • AAT to add new discussion point in order to completely review the actual file naming conventions. An example, would be to add into the Delta/Full/Snapshot element the dependent release that the Delta is from (eg) "_Delta-20170131_" etc. AAT to discuss with Linda/David. Or we hold a zero byte file in the Delta folder containing this info as this is less intrusive to existing users. Then publish the proposal, and everyone would then take this to their relevant stakeholders for feedback before the next meeting in October. If this is ratified, we would then update the TIG accordingly.
  • AAT to add in a statement to the section 4 (Release package configuration) to state that multiple Delta's are not advised within the same package.
  • AAT to add in appendix with human readable version of the folder structure. Done - see section 7
  • IN ADDITION, we should discuss both the File Naming convention recommendations in the Requirements section (at the top of the page), PLUS Dion's suggestions further below (with the diagram included).
  • Dion McMurtrie to discuss syndication options for MLDS in October 2018 - see what they've done (using Atom) and discuss with Rory as to what we can do. Suzy would be interested in this as well from an MS perspective. UK also interested. This shouldn't hold up the publishing of the document. Discussions to continue in parallel with the creation of this document...
  • Reuben Daniels to raise a ticket to update the fhir specs accordingly
  • Reuben Daniels to talk to Linda to get URI specs updated accordingly.
  • URI Specs to be updated and aligned accordingly - Reuben Daniels to assist
  • EVERYONE TO REVIEW TONIGHT AND SIGN OFF TOMORROW
  • ONLY outstanding point from earlier discussions was Dion's point from the joint AG where he talked about nailing down the rules for derivative modules... -
  • Dion McMurtrie to discuss/agree in the October 2018 meetings - REPORT FROM DION??
  • Everyone is now happy with the current version, therefore Andrew Atkinson to publish - we can then start refining it as we use it.
  • Andrew Atkinson to therefore agree all of the relevant changes that will be required as a result of this document internally in SNOMED International, and publish the document accordingly.
  • FIRST POINT WAS THEREFORE TO have it reviewed internally by all relevant stakeholders...
    • This has been completed and signed off
  • Do we consider anything in here needs to be incorporated into the TIG?
    • or perhaps just linked through?
    • or not relevant and just separate? YES - NOT RELEVANT!!
    • the litmus test should be whether or not implementers still use the TIG, or whether people now use separate documentation instead?
      • ??????????
  • We also need to make a decision on the final Freeset distribution format(s), as I want to ensure we only have a MAXIMUM of 2 distribution formats - RF2 + the agreed new Freeset format (whatever that may be)
    • YES everyone is happy with this!
    • AAT to then add this into the Release Packaging Conventions and publish
  • APRIL 2020 - DO WE NEED TO MAKE ANY REFINEMENTS IN ORDER TO PREPARE FOR CONTINUOUS DELIVERY? Did ADHA need any formatting changes when moving to monthly?
    • We really need to tackle the Delta from/to release versions in the Delta file naming, and possibly the package file naming. At the moment it is impossible to know what a Delta is relative to, making it hard to process it safely. Perhaps beyond the scope of this document, but quite important
  • Once all happy, the document will be published and opened up to anyone to view.
13 - Continual Improvement to the Frequent Delivery process - All

APRIL 2020 - Gather Practical requirements for the various deliverables that we need to implement in order to successfully make the move to Continuous Delivery...

Requirements:

  • Increase the scope of RVF validation to include:
    • All current authoring manual validation
    • Generic Validation Service assertions (from all community validators)
14 - Continual Improvement to the Frequent Delivery process - All

We need to continue discussions on this on-going item, in light of the strategic meeting before the conference. In addition we now have new members with additional experience, and we have also now lived with the more stable International Edition release process for the past couple of years.

Last time we discussed this everyone thought it a good idea in principle, but were concerned that we are not yet in a position to deliver the same level of quality on a daily basis as on a monthly basis (due to the current gap in our manual/automated testing). Therefore we were going to discuss this once we had further progressed our automated testing - however, as the new working group for the RVF service will testify, this is a slow process, and therefore it may not be possible to wait for it to be completed in its entirety.

We have identified several additional potential issues with moving to Continuous Delivery, which we should consider before proposing a solution:

  • Perceived quality issues:

    • There would be no time for Alpha or Beta Releases - so all Members would have to be comfortable with issues remaining in Production until the next interim release

    • All issues that normally get tidied up as part of the normal Content Authoring cycle will become public - they will get fixed quickly but in the meantime there may be an impact to the reputation of the quality of SNOMED CT.

  • Roll up Releases:

    1. The 6 monthly delta releases would need to be relative to the prior 6 month release, and therefore named as such somehow (ie) we would need to somehow make it explicit as to which previous release the delta is a differential to.
    2. The other possibility is that each month is the same interim release, and then every 6 months we also release the Deltas relative to the prior 6 monthly release, in addition to the usual monthly release.  In this case we would need to reserve the 31st Jan + 31st July effectiveTimes/package naming for the 6 monthly roll up releases, so that the users who want to remain on the 6 monthly schedule would remain unaffected.
  • The other option is to have no roll up releases at all, thus releasing a stand-alone package every day/week/month, depending on the agreed frequency. The issue with this approach though is that anyone using the Delta files (rather than Snapshot) for uploads would need to keep up with the continuous schedule.


UPDATE FROM THE EVOLUTION WORKSHOP:

Pros

  • Allows people to choose whether or not the users take one release every 6 months, or frequent monthly releases...
  • Derivative maps wouldn't be a huge issue as just release them whenever we had a chance, dependent on whichever edition
  • One of the plus points is that while we're still at 6 monthly releases, if the vendors miss a release it's a big deal, whereas if they miss a monthly release the impact is much smaller


Cons

  • One drawback is for the non english speaking members, who need to keep up with translations - shouldn't really have an impact if they keep up with each smaller release.
  • Could be painful for translations when a monthly release happens to contain a drop of a huge project like Drugs or something...
  • What about interoperability issues, with some people taking each monthly release, and others still waiting for every 6 months? ADHA believe this hasn't caused a huge problem for them, just an addition to the existing problem even with 6 monthly releases...
  • Also need to implement the metadata for identifying which dependent release each Delta is relative to...
  • Refsets aren't too much work to keep up to date - however Mapping is a different ball game - this can take some time
  • Maps that are still inherent in the Int Edition (ICD-O, ICD-11 etc) are potentially problematic, and the workflow would need to be carefully worked out...
  • If your projects happen to drop in-between the normal 6 monthly releases, then someone who might have taken Jan and July still, might miss out on the important big releases that happen in April and November!
  • Also quality might be an issue - we need to have the automated testing completely airtight before we move to continuous delivery! Thereafter we would run all major validation at input stage and ensure authors only ever promote to MAIN when everything is perfectly clean. Then we run Daily builds with automated release validation every night, providing a RAG status on release issues every morning. Then by the end of the month, we publish the last Green Daily build!
  • Andrew Atkinson to continue to feed all of this into the continued internal discussions on whether or not moving to more frequent Delivery is feasible, and if so plan what the timelines would look like.
  • Andrew Atkinson to create a survey to provide to everyone so that they can send out to all users and get feedback on the proposed changes (especially multiple effective Times in Delta files, and removal of Delta files - just a service now):
  • https://docs.google.com/forms/d/17Rhxc3TrMgPq1lnhAm2G6LkGsaN05_-TMKr69WRVdc4/edit
  • TRAG to add/update any of the questions....
  • Question: What questions would we like to ask the vendors and affiliates to a) Ensure we cover off all problems/potential issues, but b) do NOT put us in a position where they think that we might not go ahead with the plans despite their answers.... just wording the survey to ensure that they know we're going ahead, but just want to ensure there's no negative impact to them that might tweak our plans, and c) How much time do they need to adapt to the change for multiple effectiveTimes in the same Delta, and d) How do we promote the benefits? (responsiveness to changes with more frequent releases, improvement to quality with more frequent fixes, etc)
  • Andrew Atkinson to refine survey to ensure that it's accessible to those with more limited SNOMED knowledge/experience, as these are the preferable target market for the survey, given that the more advanced users will (or have already) speak up for themselves:
    • GDPR questions - verify with Terance whether or not we just need to provide a link to our data policy (https://www.iubenda.com/privacy-policy/46600952), or if we specifically need to ask the questions (of whether or not they're happy for us to store their data, etc) as questions in the survey? (check box) - If the latter, ask if we have standard legal wording I can use?
    • Small intro - description + pros/cons
    • Couple of fairly wide ranging questions as to whether or not they think they'll be impacted
    • If so, then either fill in the details here (conditional question in google forms) OR please just get in contact with your NRC to discuss
    • Avoid technical language for non native English speakers
    • Suzy to include in her UMLS survey in January
      • Not done yet as she's stuck with red tape in the NLM!
  • Andrew Atkinson refined the survey accordingly, and sent out to TRAG members for final review on 16/10/2018: https://docs.google.com/forms/d/17Rhxc3TrMgPq1lnhAm2G6LkGsaN05_-TMKr69WRVdc4/edit
  • Andrew Atkinson sent final survey to Terance and Kelly in particular, (from GDPR and comms perspective) to ensure in line with company strategy and verify whether or not they'd prefer this to be an SI survey or NRC surveys?
  • Survey sent to the TRAG to disseminate to their users
  • Survey also sent to Kelly for inclusion in the newsletter, and also on LinkedIn

AGREED:

  • Move to Monthly Releases before we go to full continuous delivery - yes, everyone agreed
  • How do we best automate all of the validation?
  • Best thing is to make the RVF the central source of truth for all International validation.
  • Therefore NRC's like Australia will promote all International related content to the core RVF, and only retain and run validation that is local to themselves.
  • This would mean that whenever they identify a new issue, they can simply promote the new test up to us and we can run it and replicate the issue for ourselves, and therefore fix it quickly.
  • It will also share the burden of maintaining the validation rules.
  • Share Validation Service to address this...
  • Question: can we do any automation for Modelling issues? ECL? New validation using the editorial rules in the new templates as a basis for automating modelling QA?
    • ECL the best bet - plus MRCM doing well so far - can we extend this? (Australia so far only implemented modelling validation by writing manual rules for known issues)
  • What's the impact of multiple effectiveTimes in Delta files?
    • Should be negligible, Australia and US already implemented with no effect to users (despite initial complaints!)
  • Creation of a bespoke Delta using a new tool - Delta at the International level is very simple, but at the Extension level is much more complex due to all of the dependencies, etc. This could also become more involved when we modularise...
  • Australia intended to build this as well, but it never happened because no one requested it in the end!
  • The other issue was the traditional issue of never knowing (in a machine readable way within the Delta file itself) what the Delta file is a Delta from (ie) is it a delta from the Jan 2014 release, or the July 2016 release, etc.
  • So there was a lot of discussion over whether or not they should create roll up Deltas, or provide the service - but in the end they found that only a few people were actually using Deltas, and those were the people who knew what they were doing already, and so nothing was ever required!
  • So we need to decide whether or not this is useful...
  • We also need to be wary of the fact that there are two different things to be relative to - so you can have a Delta to a release, or a Delta to a date in time, and they can be very different things.
  • Suzy has always released a delta with multiple effectiveTimes in it (due to the Edition) and no-one has ever had any issues with this.
  • If we remove the Delta files completely everyone would definitely need to provide a Service to download bespoke Delta's (both International and local Extension level) - AT THE SAME TIME WE SHOULD FIX THE ISSUE OF LACK OF METADATA PROVIDED FOR WHAT THE BASELINE OF THE DELTA IS
  • For local extensions this service does get a lot more complex than for International, as they need a range of Delta dates PER MODULE, as they have a lot more going on than just the International Edition - so the service would need to a) be clever enough to correctly get the relevant dependencies from all sources, plus b) validate that the resulting Delta is correct and valid - providing a checksum of some kind (needs to be identified).
  • SNOMED INTERNATIONAL TO CREATE A SMALL, TARGETED SURVEY TO QUESTION WHETHER OR NOT THERE WOULD BE ANY IMPACT TO ANYONE TO PROVIDING A DELTA SERVICE INSTEAD OF DELTA FILES... Everyone will happily disseminate this to their users and get responses asap...
  • Release Notes automation -
    • simple, just attach notes metadata to each change in MAIN then export on Release
  • Question: Is it worth starting off with a trial using just Content Requests monthly, and then bring everything else in line once happy?
    • NO! Everyone feels strongly that there would be no benefit to this whatsoever, as the majority of urgent cases in CRS are to do with getting an ID to use in refsets etc before the next 6 monthly release, and as this has already been mitigated due to the new tooling providing those ID's early, there's no benefit in moving to CRS early. Small risk in moving to monthly at all, so better off just moving everything at once to prevent a) confusion for users b) confusion in message about continuous delivery, + c) overhead for SNOMED managing 2 different delivery schedules during pilot
  • Question: What are the next steps that we need to consider to help move this forward?
    • Central RVF service, communication with community (survey etc)
  • Question: Is everyone happy with the new plan to remove the Delta files from the RF2 packages completely, and just provide the Delta service to create Delta's on the fly? YES
  • Question: How can we get a survey out to as many implementers as possible in order to ask a lot of these questions and get the
  • Question: How do we manage translations? (including the Spanish release) - How do we cope with the likelihood that one month could have only 50 changes, and the next month 50,000 (Drugs project, etc)? -
    • No impact, as should allow for incremental translations - just need to not set expectations with your users that you stay one month behind the International Edition! Just need to decouple the translation release schedule from the International Edition schedule. ARTURO would prefer the Spanish edition to also move to Monthly (or even more frequent) releases, but he fully understands the natural latency required for translated Editions, and so understands that even if we went to monthly we can't keep up with the monthly content changes
  • Question: How do we manage extensions?
    • Again need to decouple them - MDRS will naturally get a lot bigger - also the versioning process internally currently takes a long time + a lot of effort for each upgrade to new International Edition...
  • Question: How do we manage derivatives?
    • Just keep them decoupled from the International Edition release schedule, and do not set false expectations by promising to keep them closely up to date with monthly International Releases!
  • Question: How do we manage maps?
    • So again there is a natural latency here where we can't keep up to date with monthly releases. WE ALSO NEED TO DEFINE WHAT AN ACCEPTABLE UNIT OF RELEASE IS FOR EACH TYPE OF CONTENT CHANGE (so what our concept of "DONE" is for each type of change) - FOR EXAMPLE SOME CONCEPTS SHOULD NOT BE RELEASED UNTIL THE RELEVANT ICD-10 MAP CAN BE CREATED AND PUBLISHED AT THE SAME TIME. OTHERS COULD BE RELEASED NO PROBLEM AND WAIT FOR 6 MONTHS FOR THE RELATED MAPS...
  • WE ALSO NEED TO CAREFULLY DEFINE AND COMMUNICATE OUT WHAT THE SCOPE AND GOALS OF MOVING TO CONTINUOUS DELIVERY ARE - TO ENSURE THAT WE MANAGE EVERYONE'S EXPECTATIONS. FOR EXAMPLE, WHAT IT DOESN'T MEAN IS THAT EVERYONE WILL GET THEIR CHANGE INTO SNOMED WITHIN 4 WEEKS, JUST BECAUSE WE'RE RELEASING DAILY!!!
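The per-module baseline problem raised above for a bespoke Delta service can be sketched as follows - the record shape and module ids are illustrative assumptions, and real dependency resolution (and the checksum validation mentioned) would sit on top of this:

```python
# Simplified sketch of a bespoke Delta service for extensions: the consumer
# supplies the effectiveTime they already hold PER MODULE, and the service
# emits every Full-file row newer than the baseline for its own module.
def bespoke_delta(full_rows: list, baselines: dict) -> list:
    out = []
    for row in full_rows:
        # Modules the consumer has never loaded default to "send everything".
        baseline = baselines.get(row["moduleId"], "00000000")
        if row["effectiveTime"] > baseline:
            out.append(row)
    return out

full_rows = [
    {"id": "1", "moduleId": "INT", "effectiveTime": "20200131"},
    {"id": "1", "moduleId": "INT", "effectiveTime": "20200731"},  # later state
    {"id": "2", "moduleId": "AU",  "effectiveTime": "20200630"},
]
# Consumer holds International content up to Jan 2020 but has never loaded AU:
delta = bespoke_delta(full_rows, {"INT": "20200131"})
```

This also makes concrete why "Delta to a release" and "Delta to a date in time" differ: the baselines map keys off module dependency versions, not a single calendar date.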

  • TRAG members to send out the survey...
  • Awaiting all results... ANYTHING BACK YET from anyone?
  • New questions:
    • NEW PROPOSAL IS TO MOVE STRAIGHT TO DAILY CONTINUOUS DELIVERY!
      • WHAT DO PEOPLE THINK ABOUT THIS???
    • ALL OTHER POINTS NOT YET DISCUSSED IN THIS LIST:
    • How do we validate translations? (NLP?)

    • Implications on peripheral content (MRCM, on demand deltas)

      • LARGE impact on MRCM changes - we need to carefully consider whether or not we can publish MRCM concept model changes without first having waited for all of the concepts impacted by them to have been updated as well (so isolate them all in a feature branch before publishing anything) - HOWEVER, this does restrict the time to market of new concept model changes that we might want to publish before we have time to update all of the relevant content...

      • We need to discuss further with Linda, Yong, etc...

    • Implications for DERIVATIVE products:

      • Do we just continue with these as separate entities and decide arbitrarily when the cut off date for content is?

      • Or do we decide each cycle what the cut off date is for the dependent International Content, depending on which day the important in-flight content changes are being published?

      • Do we have the same cut off for each Derivative product? (even though they might be published weeks apart?)

        • Or do we just take a cut of the Daily release from the day we're starting each separate product's release cycle?

    • Impact on NRCs present - how can they help out with testing/validation when Alpha/Beta periods are no longer in place?

    • Vendors (non-NRCs) view of frequent releases??

    • VALIDATION advances:

      • OWL testing - anyone worked on this as yet?

      • Template validation - thoughts?

      • Implementation testing feasible? (see Implementation Load Test topic below)

      • Need to identify Modelling areas that need improving - for example where concepts have 2x parents, this is usually an indication of areas that need re-modelling
      • Need automation of the QA system itself - so some quick way to validate RVF + DROOLS Assertions, both old + especially new!
      • MOST IMPORTANTLY, how do we avoid the usual pitfall of automated testing - i.e. ensuring that the effort involved in maintaining the automated assertions and keeping them up to date with the content changes doesn't start to exceed the effort involved in testing manually?!
        • Anyone with experience of a good answer to this?
      • Whitelisting - API required?

    • We need to re-consider the Critical Incident Policy, UNLESS we can get at least several different entities downloading and testing the monthly releases EVERY MONTH!
      • This is because if someone say only takes the release every 12 months (or even worse 24 months), and then finds a critical issue in a now 2 year old release, we would currently have to recall and republish 24 releases! 
      • Instead, we need to have agreement from the Community on a “Forward Only” approach, whereby any issues found (even Critical ones) are fixed from the next Release onwards (or possibly in several Releases time if they’re low priority issues).  Critical issues would simply have to be communicated out, warning everyone NOT to use any previous impacted releases.
  • WHAT ARE THE BENEFITS OF MOVING TO CONTINUOUS DELIVERY?
    • Who does it help? Any use cases of institutions that will actually be able to use more frequent releases?
    • How do we best up-sell these benefits?
    • Do we go for a phased migration?
      • We could move to Daily Builds for internal use only (and perhaps a very select few members/suppliers), and just continue with the 6 monthly releases for now?
      • Or we move to Daily Builds for internal use only (and perhaps a very select few members/suppliers), and just increase the Frequency of publishing the International Edition to monthly?


  • HOW DO WE DEAL WITH INTEROPERABILITY?
    • Should we put out a white paper with the launch of Continuous Delivery to advise people as to how they can avoid the pitfalls of taking more frequent releases when others that they want to interact with might still be on annual or even longer update schedules?
    • Is it even feasible for anyone to take more frequent releases unless they want to work in silo?
    • If the rest of the community is still only updating annually then this could resign all others to the same pace for now?
  • COMMUNITY EDITION(s)

    1. What should the criteria be that differentiates between what goes in each Edition:
      1. SNOMED CT Core
      2. SNOMED CT International Edition
      3. SNOMED CT Community Edition
    2. What level of quality do we allow into the Community Edition? 
      1. Any quality (quick and sharable) vs validated (slower but better)
      2. One suggestion is that instead of certifying the content, we could certify the authors themselves - so we could differentiate between projects which are authored by newbies, vs those who have say passed our SNOMED CT authoring certification level 1, etc
      3. Another suggestion is that whoever delivers content to the Community Edition would have to provide the MRCM to support it, + conform to editorial guidelines, etc
        1. So a list of “quality indicators” could be automated against each project (eg):
          1. MRCM compliant
          2. Automated validation clean
          3. Authors have SNOMED CT certification
          4. Peer reviewed
          5. Release Notes
          6. Etc
        2. And then people can make their own minds up about which projects to use based on comparing the quality indicators between projects
    3. SOME AGREEMENT TO SUPPORT AND MAINTAIN BY @SOMEONE@ AT LEAST…
      1. For example, what happens if we change something in the core which breaks someone way down deep in the Community Edition?  (Which we can’t possibly test when we make the change in the core)
      2. The idea here would be that whoever creates the branch in the Community Edition then manages and maintains it - so everyone maintains their own branch, and is therefore responsible for resolving the conflicts coming down from the core, etc
      3. Versioning also becomes important, as whoever creates it needs to specify which versions of each dependency their work is based on - (eg) they would state that their work is based on the 20190131 International Edition, and therefore any impact we have on the downstream community content would only happen when the owners of that content decided to upgrade their dependency(ies) to the new version
    4. Promotion criteria important - thoughts?
    5. Do we remove the need for local extensions, as they can then simply become part of the Community Edition, with any local content just existing in a “country specific” edition within the Community Edition
      1. This also provides some level of assurance of the quality of the content in the Community Edition - as these would be assured by the NRC’s (and SI in some cases) and therefore provide a good baseline of high quality content for people to then start modelling against
    6. ModuleDependency is going to be important - 
      1. perhaps we answer this by making the entire Community Edition part of the same module - therefore it will all classify as one entity?
      2. However a lot of people will ONLY want to cherry pick the things that they want to take - so we need a method for taking certain modules (or realms or whatever we call them) and allowing people to create a snapshot based on just that content instead of the entire community edition
    7. Dependencies need to be properly identified:
      1. Could the CORE be standalone and published separately?
      2. Or would the CORE need to have dependencies on the wider International Edition, etc?
    8. HOWEVER, how do we classify the entire Community Edition when there could be different projects dependent on different versions of the dependencies (such as the International Edition)?
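The dependency questions above (points 6-8) could be made concrete with a toy resolver over MDRS-style rows. This is a sketch under assumed data - the module names and version strings are invented, and the real Module Dependency refset carries additional fields (sourceEffectiveTime, targetEffectiveTime, etc.):

```python
# Toy sketch of resolving transitive module dependencies from
# MDRS-style rows (dependent module -> depended-on module at a
# required version). Illustrative only -- not real release data.
MDRS = [
    # (dependent module, depended-on module, required version)
    ("communityProjectA", "internationalCore", "20190131"),
    ("communityProjectB", "internationalCore", "20190731"),
    ("communityProjectB", "communityProjectA", "20190601"),
]

def transitive_dependencies(module, rows):
    """All modules (with required versions) reachable from `module`."""
    result, stack = {}, [module]
    while stack:
        current = stack.pop()
        for dependent, dependency, version in rows:
            if dependent == current and dependency not in result:
                result[dependency] = version
                stack.append(dependency)
    return result

def conflicting_versions(rows):
    """Dependencies required at more than one version across projects --
    the classification problem raised in point 8: projects pinned to
    different International Edition versions cannot classify as one unit."""
    versions = {}
    for _, dependency, version in rows:
        versions.setdefault(dependency, set()).add(version)
    return {d: v for d, v in versions.items() if len(v) > 1}

print(transitive_dependencies("communityProjectB", MDRS))
print(conflicting_versions(MDRS))
```

In this invented data, communityProjectA and communityProjectB pin different versions of the core, which is exactly the situation where a single whole-edition classification becomes ill-defined.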

15



16

Active discussions for October 2020


17

Computer readable metadata


* MAG crossover

Suzy introduced the topic for discussion...


Suzy would like to raise the question of creating computer-readable metadata, including questions such as: should it include known namespaces & modules, or just the current metadata for the files in a machine-readable format?

Suzy Roy to provide an update on progress:

  • All agreed that whilst this is a large topic, we should start somewhere, and get at least some of the quick wins in (then request the change to content via the CMAG):
  1. Check where the progress with the namespace metadata has got to - can we progress this?
  2. Code systems (and versions) of the map baselines
  3. Common strings such as boiler plate licence text etc
  4. Description of use cases for the various refsets (using the text definitions of the Refset concepts themselves) - either a json or markdown representation of multiple pieces of info within the same field.
  • Michael Lawley to provide an update from the related MAG topic...
  • TRAG agreed that this should be incorporated into the discussions with the continuous delivery, in order that we can plan the changes here in line with the transition to more frequent releases. To be continued over the next few months...
  • Michael Lawley to kindly provide an update on his work with David to help design and implement the solution - this will now be in the second TRAG meeting of the April 2019 conference, after they have met together....
  • Ideas:
    • Some human readable metadata could potentially live as descriptions (which can then be translated)? David to discuss further...
    • David will mock up something in Json...
  • Michael + David + Harold agreed to create a straw man to put up in the next meeting and take this further...
  • This should now be combined with the Reference set metadata topic, to address all updated metadata use cases - Human readable, Machine readable, etc
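As a concrete starting point for the quick wins listed above, one possible shape for a machine-readable metadata payload is sketched below. Every field name here is an assumption for illustration only - the actual schema would come out of David's JSON mock-up and the related MAG discussions:

```python
import json

# Purely illustrative sketch of what a machine-readable release metadata
# file might contain -- all field names are assumptions, not an agreed
# schema. The refsetId shown is the ICD-10 extended map refset, used
# here only as an example.
release_metadata = {
    "packageName": "SnomedCT_InternationalRF2_PRODUCTION_20200131T120000Z",
    "effectiveTime": "20200131",
    "licence": "Boiler-plate licence text would live here once, not per file.",
    "namespaces": [
        {"namespaceId": "1000000", "owner": "Example NRC (hypothetical)"}
    ],
    "mapBaselines": [
        {"codeSystem": "ICD-10", "version": "2016"}
    ],
    "refsets": [
        {
            "refsetId": "447562003",
            "useCase": "ICD-10 map; a markdown description could go here."
        }
    ],
}

print(json.dumps(release_metadata, indent=2))
```

The point of the sketch is simply that common strings (licence text), code system baselines, namespaces, and per-refset use-case descriptions could all live in one parseable document rather than scattered prose.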
18

Reference set metadata

* MAG crossover

Replacement of the Refset Descriptor file with a machine readable Release Package metadata file

See David's proposal here: Reference set metadata (plus sub page here: Potential New Approach to Refset Descriptors)

  • Everyone confirmed no issues with the proposal in principle, in April 2018
  • However, do we consider this to just be relevant to refsets in the International Edition release package?
    • Or to all derivative products as well?
    • Both refsets and maps?
  • Also, are we talking about only human readable descriptive information, or also machine readable metadata such as
    • ranges of permitted values
    • mutability, etc?
  • Michael Lawley to kindly provide an update on his work with David to help design and implement the solution - this will now be in the second TRAG meeting of the April 2019 conference, after they have met together....
  • Michael + David + Harold agreed to create a straw man to put up in the next meeting and take this further...
19

IHTSDO Release Management Communication Plan review

All

This document was reviewed in detail and all feedback was discussed and agreed upon - new version (v0.3) is available for review, attached to the IHTSDO Release Management Communication Plan review page.

AAT has added in details to state that we'll prefix the comms with "Change" or "Release" in order to distinguish between the type of communications. See version 0.4 now - IHTSDO Release Management Communication plan v0.4.docx

Once we've collated the feedback from the revised comms processes that we've implemented over the past year (in the items above), we'll incorporate that into the final version and discuss with the SNOMED International Executive Lead for Communications (Kelly Kuru), to ensure that it is aligned with the new overall Communication strategy. Once complete, the Release Management comms plan will be transferred to Confluence and opened up for everyone to view.

We have publicised the Release Management confluence portal to both NRC's and the end users to get people to sign up as and when they require the information. Do we know of anyone still not getting the information they need?

We also agreed last time that the community needs more visibility of significant, unusual changes (such as bulk plural change, or case significance change). These changes should be communicated out not just when they're assigned to a release, but actually well in advance (ie) as soon as the content team start authoring it, regardless of which future release it will actually make it in. I have therefore created a new Confluence page here: January 2020 Early Visibility Release Notices - Planned changes to upcoming SNOMED International Release packages

I've left the previous items up (from the July 2017 International Edition) because there are no examples yet from the Jan 2018 editing cycle - so please take a look and provide feedback on whether or not this is useful, and how it can be improved.

  • ACTION POINT FOR EVERYONE BEFORE OCTOBER 2018: (Dion McMurtrie, Matt Cordell, Orsolya Bali, Suzy Roy, Corey Smith, Harold Solbrig, Mikael Nyström, Chris Morris)
    The final version of the communication plan needs to be reviewed by everyone and any comments included before we agree the document internally and incorporate it into our communication strategy
  • Suzy Roy will discuss the end use cases of her users with them and come back to us with feedback on the practical uses of SNOMED CT and any improvements that we can make, etc
  • We may now also need to add a new section in here with regard to the comms for the TRAG, so that this is standardised and agreed with the community? Or is it outside the scope of the Release Communication Plan? This was felt to be out of scope - it should be restricted only to communication related to actual releases of products.
  • Everyone is now happy with the current version, therefore Andrew Atkinson to publish - we can then start refining it as we use it.
  • Andrew Atkinson to therefore agree all of the relevant changes that will be required as a result of this document internally in SNOMED International, and publish the document accordingly.
  • AAT MIGRATED THE DOCUMENT FROM WORD TO CONFLUENCE, AND THEN SENT IT TO THE EPS Team for first review.....
  • The feedback has been incorporated and the document refined accordingly.
  • https://confluence.ihtsdotools.org/display/RMT/SNOMED+CT+Release+Management+Communication+plan
  • Andrew Atkinson has now sent to the relevant members of the SMT for final sign off....
    • This has now been signed off and is ready for publication
  • Do we consider anything in here needs to be incorporated into the TIG?
    • or perhaps just linked through?
    • or not relevant and just separate?
    • the litmus test should be whether or not implementers still use the TIG, or whether people now use separate documentation instead?
  • Once all happy, the document will be published and opened up to anyone to view
20What constitutes a true RF2 release?Harold would like to introduce this topic for discussion...
  • Language refset conflicts are not yet resolved - Linda has been discussing how to merge Language refsets, or whether one should override the other where multiple language refsets apply. In the UK they combine them all into one, but this is not ideal either. In translation situations they use the EN-US preferred term as the default where there is no translated term in the local language. Perhaps we need to survey the members on who is using what, and how.
  • Suzy Roy (or Harold Solbrig) to get Olivier's initial analysis and come back to us on what worked and what didn't, and we can take it from there.
  • Suzy would like to ask Matt Cordell if he can share his ppt from his CMAG extensions comparison project.
  • Matt Cordell will distribute this to everyone for review before the April 2019 meeting.....
  • Harold to continue analysis and report back with the results of reviewing the specific examples that Olivier identified in the next meeting....

  • Can you please present the revisited presentation Matt Cordell ?
21Plans for the transition from Stated Relationship file to OWL refset filesAllThis is part of the wider Drugs and Substances improvements that are currently taking place. Other than the obvious content updates, these technical changes are those which will be likely to have the highest impact on those within our AG. 

We need to discuss the plan and ensure that we have answered all of the possible questions in advance, in order that we have a workable plan with no unwanted surprises over the next few release cycles. 

As a starting point, we should discuss the following: 

1. The schedule of changes (see here: January 2020 Early Visibility Release Notices - Planned changes to upcoming SNOMED International Release packages) (ie) 

July 2018 - initial OWL refsets introduced 
Jan 2019 - included in the Release package: a) Stated Relationship file b) the partial OWL axiom refset including all description logic features that cannot be represented in the stated relationship file. 
The Extended OWL refset file will be available on demand. 
July 2019 - the stated relationship file will be replaced by the complete OWL Axiom refset file. The stated relationship file will NOT be included in the international release; however, it may still be available on request to support migration to the OWL Axiom refset. 

2. The communications required to ensure that ALL impacted parties are completely informed of the Schedule, and the changes that they may need to make in order to transition cleanly to the new format. 

3. The technical changes that we need to make to the Release package itself, in order to support the planned schedule. 

For example, when we "replace" the Stated Relationship file in July 2019, do we remove the file from the release package as soon as possible (in Jan 2020, once everyone has had a chance to run the inactivation file through their systems), or do we take the more measured approach of inactivating all records and leaving the inactivated file in the package for, say, 2 years, and then planning to deprecate the Stated Relationship file by July 2021? 

Further, should we be deprecating the file itself at all, or can we see any other (valid) use for the Stated Relationship file (obviously not just repurposing it for a completely different use!)? 
  • Harold Solbrig to talk to Yong and others in the MAG about his proposals for future proofing against the possibility of having multiple ontologies referenced, prefixed axioms, etc.
  • Harold confirmed nothing to report
  • Some opposition to reverting back to having the OWL file on-demand for Jan 2019 - need to discuss this with Kai in tomorrow's session - preference is to release both Stated Relationships + the "additional" info only in the OWL files - as with the July 2018 release. Is this the current intention?
  • Done - Jan 2019 was implemented as requested - did anyone manage to use it and trial it effectively? Any feedback?
    • YES - Australia downloaded it and trialled it in their systems!
    • Worked well - however they have not got a lot of new validation to cover either the OWL format or the content itself, so these were trials to ensure that they can use it and author against it, rather than testing the actual content of the Axioms...
  • Also, has the decision already been made to NOT create a full history back to 2002 (or 2011 at least)? Sounds like most extensions will do it anyway, so maybe we should? Decision made by content team - no history to be included
  • Discussion on whether or not to go back and re-represent the content all the way back to 2002 in the new complete OWL file:
    • Pros:
      • Prevents the need for new tooling providers to create support for the old Stated Relationship way of doing things
      • If the International Edition doesn't go all the way back then the Extensions are restricted to not doing it either; if the International Edition does, then the Extensions have a choice.
      • Ability to go back through history and analyse previous modelling decisions (if errors come up in future), even for those authors who haven't heard of Stated Relationships because they've now been deprecated for several years.
    • Cons:
      • Cost involved in creating the pure historical view
      • If the extensions have a choice as to whether or not to go back, then interoperability could be impacted - better to enforce going back if the international edition does.
      • Need to address the issue of some implementations having both Stated Relationships + OWL Axioms in the same full files going forward.
      • Uncertain use cases for most implementers
  • This discussion needs further input in order to enable us to reach an informed conclusion. The relevant internal and external stakeholders (NRC's such as Australia) will take this away and come back with the results of feasibility studies and estimates as to how long the necessary work would take to complete..... a decision must then be made well in advance of the January 2019 International Edition, in order to ensure that we agree on the correct approach before creating the initial Alpha release in November...
    • We are currently proceeding on the assumption that there was no feedback from any sources that supported the retro-fitting of the OWL Axiom files? The major con here is breaking our own regulations on tampering with history - the Stated Relationships should remain in place in order to a) accurately represent history + b) prevent the false impression that extended functionality was available via OWL Axioms before July 2019!
  • DOES ANYONE ELSE HAVE ANY OTHER CONCERNS WHATSOEVER ON THE TRANSITION PLAN TO OWL, OR IS EVERYONE NOW COMFORTABLE WITH IT? YES! All good to go...
  • We need to work with the Shared Validation working group to share as many OWL based validation assertions as possible, so that we can all effectively cover:
    • Technical validation of the OWL file structure
    • Content validation of the OWL records
    • Modelling validation post OWL
  • Having worked with OWL for a few months now, does anyone have any suggestions for new validation assertions?
  • Linda and others are confident that the MRCM validator will cover most modelling scenarios for now, but we'll need to keep extending as we go
  • We also need to continue to identify as many opportunities as possible to validate the new OWLExpression content - has anyone written anything for this?
  • New idea for an RVF assertion regarding the ordering of OWL records (based on first concept) with disjoints:
    • Michael Lawley suggested it (and Kai agreed) in MAG last time -
      • Can we please discuss and agree if it's worth creating?
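The proposed ordering assertion could be sketched as follows. This assumes the convention under discussion is that a member's referencedComponentId should be the first concept named in its OWL axiom expression - the exact rule (including how Disjoint axioms are handled) still needs to be agreed, and the rows below are hypothetical:

```python
import re

def first_sctid(owl_expression):
    """First SCTID mentioned in an OWL functional-syntax expression
    (SCTIDs are 6-18 digit identifiers, written here as :123456...)."""
    match = re.search(r":(\d{6,18})", owl_expression)
    return match.group(1) if match else None

def check_axiom_row(referenced_component_id, owl_expression):
    """Sketch of the proposed ordering assertion: flag refset rows where
    the referencedComponentId is not the first concept named in the
    axiom. Illustrative only -- not an agreed RVF rule."""
    return first_sctid(owl_expression) == referenced_component_id

# Hypothetical row: referencedComponentId matches the axiom's first concept.
print(check_axiom_row("73211009", "SubClassOf(:73211009 :362969004)"))
```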
22Implementation Load TestAll

RVF has now been open sourced to allow people to contribute towards it more easily, so that Implementation issues can be reverse engineered into the assertions. All of the NRC validation systems should remain separate, in order to ensure as great a coverage across the board as possible.

However, it makes sense to ensure the critical tests are included in all systems, in order to ensure that if, say, one NRC doesn't have the capacity to run Alpha/Beta testing for a certain release, we don't miss critical checks out. We are working on this in the Working Group, and also in the RVF Improvement program, where we are including the DROOLS rules, etc. These are also being incorporated into the front end input validation for the SCA.

TRAG to therefore discuss taking the Implementation Load test forward, including the potential to incorporate key rules from NRC validation systems into the RVF. So we should discuss the tests that are specific to the Implementation of vendor and affiliate systems, in order that we can facilitate the best baseline for the RVF when agreeing the generic testing functionality in the Working Group.


  • Matt Cordell will promote some useful new ADHA specific rules to the RVF so we can improve the scope... report back in October 2019
  • Chris Morris to do the same - get the RVF up and running and then promote any missing rules that they run locally.... report back in October 2019
  • Updates?
23

Modularisation of SNOMED CT

All

Dion McMurtrie completed the Alpha release - did anyone have a chance to review it? (I haven't had any requests for access to the remainder of the package)

The subject of Modularisation needs to be discussed between the various AG's who are considering the topic, before we can proceed with the Release-specific sections.


We need to discuss any red flags expected for the major areas of the strategy:

  1. Modularisation
  2. Members who want to abstain from monthly releases, and therefore need to use deltas with multiple effective times contained within.
  3. Also need to consider if we continue to hold the date against the root concept - works perhaps still for 12 monthly releases, but not necessarily for continuous delivery daily!
  • THIS NOW BECOMES CRITICAL TO THE STRATEGIC DIRECTION WE DISCUSSED IN TERMS OF MODULARISING OUR CONTENT, AND IMPROVING THE WAY THAT THE MDRS WORKS, IN ORDER TO ALLOW RANGES OF DEPENDENCIES. THIS WILL ALLOW THE "UNIT" OF RELEASE TO BE REFINED ACCORDING TO THE RELEVANT USE CASES.
  • Understand the Use cases thoroughly, and refine the proposal doc to provide people with more real information - Dion McMurtrie TO PROVIDE THESE USE CASES FOR Andrew Atkinson TO DOCUMENT
  • Does the POC allow for concepts to be contained within multiple modules? NO - BUT DION CAN'T THINK OF ANY CONCRETE EXAMPLES WHERE THIS WOULD BE NECESSARY
  • What about cross module dependencies? Michael Lawley's idea on having a separate Module purely for managing module dependencies
  • IN THE FINAL PROPOSAL, WE NEED TO CREATE A NESTED MDRS TO MANAGE THE INTER-MODULE DEPENDENCIES (as per Michael's comments)
  • NEED TO PROVIDE GOOD EXAMPLES AND WHITE PAPERS OF THE USE CASES FOR MODULARISATION IN ORDER TO ENGAGE OTHERS...

  • AFTER SIGNIFICANT DISCUSSION AND CONSIDERATION, THERE ARE NO VALID USE CASES LEFT FOR MODULARISATION. IT CAUSES A LOT OF WORK AND POTENTIAL CONFUSION, WITHOUT ANY TANGIBLE BENEFIT.
  • THE PERCEIVED BENEFIT OF HAVING A WAY TO REDUCE THE SIZE/SCOPE OF RELEASE PACKAGES IS BOTH a) invalid (due to everyone's experience of being unable to successfully do anything useful with any small part of SNOMED!), and b) easily answered by tooling that uses the ECL to identify sub-sections of SNOMED to pull out for research purposes, etc.
  • THEREFORE AS OF APRIL 2018 THE FEEDBACK FOR RORY AND THE STRATEGY TEAM WAS THAT MODULARISATION SHOULD NOT BE IMPLEMENTED UNLESS A VALID USE CASE CAN BE IDENTIFIED.
  • HOWEVER, KNOWING THE HISTORY OF THIS ISSUE, THIS WASN'T NECESSARILY GOING TO BE THE FINAL WORD ON THE MATTER, SO IS EVERYONE STILL SURE THAT THERE ARE NO KNOWN USE CASES FOR MODULARISATION?? (eg) linking modules to use cases, as Keith was talking about with Suicide risk assessment in Saturday's meeting,etc??
  • This topic came up several times again during other discussions in the April 2019 meetings, and it was clear that people had not yet given up on the idea of Modularisation - we therefore need to discuss further in October 2019....
  • Agreed to see where the linked discussions in the MAG etc end up going, and then discussing the proposals rather than just in abstract....
24Member Forum item: "Development of a validation service where releases can be submitted for testing" - The Shared Validation ServiceAll

Last meeting the TRAG proposed use cases for creating an actual service (with a user-friendly UI, etc) to enable people to load up their release packages and run them through the standard validation assertions.

Standardisation is the primary use case here - everyone agrees that there is a significant benefit to interoperability by ensuring that all RF2 packages are standard and conformant to the basic standards at least - and so this is a strong business case for the service.

We agreed that whilst we have the appetite to have one, this will be a long term goal - to get us started we should use the open sourced RVF as a basis to refine the rules.

We therefore setup a working group to decide a) What the scope/targets should be b) What technology platform would be most appropriate c) What the high level rules should be (packaging format, content etc) - Working Group: Generic Validation service

The good news is that we've now used the initial discussions we had as part of the working group to refine the requirements for the ongoing RVF improvement program. This is due to complete within the next few months, at which point the working group will meet again in order to begin the full gap analysis between the various streams of validation that we all have.


Liara also discussed validation with ADHA during the London conference - Dion, do you have a quick update on where those discussions got to?


  • Plan is to ensure that the generic service is flexible enough to fail gracefully if certain extensions don't contain some of the expected files, etc.
  • We should also provide a standard Manifest to show what files they should include wherever possible (even if blank)
  • We'll now take this forward with the working group, using the comprehensive list of current SI assertions as the baseline:
  • AAT sent the list out to the working group in October 2018, and requested comparison analysis results to be posted asap, with a view to being able to report back to the TRAG on proposed scope in April 2019...
  • Suggestion for a new RVF assertion for transitive closure loops:
    • This would be to highlight situations where through a path of is-a relationships a concept is an ancestor of itself.
    • So, when validation is run, considering the source and destination of all active inferred is-a relationships: starting at each concept in turn and tracing from source to destination of each relevant relationship until reaching the root concept, the trace should NOT encounter the starting concept again.
    • This would have taken a long time (and a lot of processor power) in earlier incarnations of the RVF, but with auto-scaling etc now this should only take a few seconds
    • We've had some real life examples of this recently, so there is now a business case for adding this kind of assertion
    • Any objections?
  • Reports from the comparison analysis from the working group...
    • Suzy / Patrick?...
    • Matt...
    • Chris...
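The proposed transitive-closure assertion amounts to cycle detection over the active inferred is-a graph. A minimal sketch, using invented concept identifiers rather than real content:

```python
# Sketch of the proposed RVF assertion: detect transitive-closure loops,
# i.e. concepts that are their own ancestors via active inferred is-a
# relationships. Illustrative code and data, not the RVF implementation.
def find_isa_cycles(isa_pairs):
    """Return the set of concepts reachable from themselves, given
    (source, destination) is-a pairs, via iterative depth-first search."""
    graph = {}
    for source, destination in isa_pairs:
        graph.setdefault(source, []).append(destination)

    looping = set()
    for start in graph:
        stack, seen = list(graph[start]), set()
        while stack:
            node = stack.pop()
            if node == start:
                looping.add(start)   # the start concept is its own ancestor
                break
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, []))
    return looping

# Hypothetical content: A -> B -> C -> A is a loop; D -> root is fine.
pairs = [("A", "B"), ("B", "C"), ("C", "A"), ("D", "root")]
print(sorted(find_isa_cycles(pairs)))
```

Each concept is visited at most once per starting point, which is consistent with the note above that, with auto-scaling, this check should only take seconds rather than the long runtimes of earlier RVF incarnations.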
25"Negative Delta" file approachAllThis approach was successfully implemented in order to resolve the issues found in the September 2017 US Edition - is everyone comfortable with using this approach for all future similar situations? If so we can document it as the accepted practice in these circumstances...
  •  NO! Everyone is decidedly uncomfortable with this solution! In particular Keith Campbell, Michael Lawley and Guillermo are all vehemently opposed to changing history.
  • The consensus is that in the particular example of the US problem, we should have instead granted permission for the US to publish an update record in the International module, thus fixing the problem (though leaving the incorrect history in place). This would have been far preferable to changing history.
  • ACTION POINT FOR EVERYONE FOR OCTOBER 2018: (Dion McMurtrie, Matt Cordell, Orsolya Bali, Suzy Roy, Corey Smith, Harold Solbrig, Mikael Nyström, Chris Morris
    We therefore all need to come up with potential scenarios where going forward we may need to implement a similar solution to the Negative Delta, and send them to AAT. Once I've documented them all, we can then discuss again and agree on the correct approach in each place, then AAT will document all of these as standard, proportionate responses to each situation, and we will use these as guidelines in future. If we have issues come up that fall outside of these situations, we'll then come back to the group to discuss each one subjectively, and then add them back into the list of agreed solutions.
  1. Preference now is to retain EVERYTHING in the Full file, regardless of errors - this is because the Full File should show the state at that point in time, even if it was an error! This is because there is not an error in the Full file, the Full file is accurately representing the error in the content/data at that time.
  2. The problem here is that the tools are unable to cope with historical errors - so we perhaps need to update the tools to allow for these errors.
  3. So we need the tools to be able to whitelist the errors, and honestly document the KNOWN ISSUES (preferably in a machine readable manner), so that everyone knows what the historical errors were.
  4. The manner of this documentation is up for debate - perhaps we add it to a new refset? We could then use something very similar in format to the Negative Delta, but instead of actually changing history retrospectively, we simply document the errors as known issues, allowing people to deal with the information in their own extensions and systems in whatever way they feel is appropriate.
  5. The only situation we can think of where we couldn't apply the gentle response above would be copyright infringement: if we discovered (several releases after the fact) that we had released content in direct infringement of copyright, we would potentially have to revoke all releases since the issue occurred. However, this raises a very interesting situation where patient safety might be compromised - if we remove all historical content that contravened the copyright, we run the risk of patient data being impacted, thus potentially adversely affecting decision support. This is simple to resolve when the problem is in the latest release (simply recall the release), but if it's found in a 5 year old release, for example, it could be very problematic to recall 5 years' worth of content and change it!
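Steps 3-4 above could be sketched as follows. This is a minimal, hypothetical illustration (the refset format, field names and the example identifier are all assumptions, not an agreed specification): known historical errors are recorded as (componentId, effectiveTime) pairs, and consumers flag matching rows rather than rewriting the Full file.

```python
# Hypothetical known-issues "refset": (componentId, effectiveTime) pairs
# documented as historical errors. Identifiers below are made up.
KNOWN_ISSUES = {
    ("900000000000000001", "20170731"),
}

def flag_known_issues(rows):
    """Annotate RF2-style rows (dicts) with a 'knownIssue' flag,
    leaving the historical rows themselves untouched."""
    for row in rows:
        row["knownIssue"] = (row["id"], row["effectiveTime"]) in KNOWN_ISSUES
    return rows

rows = [
    {"id": "900000000000000001", "effectiveTime": "20170731", "active": "1"},
    {"id": "900000000000000002", "effectiveTime": "20170731", "active": "1"},
]
flagged = flag_known_issues(rows)
```

Because the flag lives alongside the data rather than inside it, each extension or system can decide for itself whether to suppress, display or remap the flagged rows.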
  • October 2018 - Guillermo proposed a separate possibility, which is to introduce a new status (e.g. -1) whereby if you find this status in the latest snapshot you would just ignore the record. This doesn't, however, address the use case where there is a legal contravention and you need to physically remove the content from the package, nor the case where something contravenes the RF2 paradigm - you can't use the RF2 format to correct something that is RF2-invalid! So this is unlikely to work...
    • Nobody is on board with this idea, as it's too fragile and introduces unnecessary complexity such as we had with RF1...

  • April 2019
  • If we're still all in agreement with this, then steps 1-5 above should all be documented and disseminated to get confirmation of approval from everyone??
  • Did everyone read through everything? Has anyone got any further scenarios that we can include in the documentation?

  • The EAG raised this issue again on 08/04/2019 - Peter to try to make it to the next TRAG to explain the use case that was raised today and elaborate on the new proposal...
  • The TRAG discussed this issue at length, and came to the conclusion that we cannot address ALL potential use cases with a standard, generic, solution (certainly not any of those offered above).
    • Instead the solution in each case should be agreed on given each specific use case that comes up each time
    • So INSTEAD we should update the Critical Incident Policy to very clearly define the process to be followed each time we need to remove something from the Published release(s):
      • Which group of people should make the decision on the solution
      • Perhaps we also provide examples of how each use case might be resolved:
        • For Legal/IP contraventions, we should either remove content from history entirely, or redact it (leave the records in place, but remove all content from fields except for UUID, effectiveTime, moduleID, etc - thus allowing traceability of the history of the components, without including the offending content itself)
        • For Clinical risk issues, we can remove it from the Snapshot, but leave the Full file intact to leave a historical audit trail whilst ensuring that the dangerous content shouldn't get used again (as most people use the snapshot) - see Full file steps 1-5 above, etc
      • How to communicate it out to the users, etc
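The redaction option for Legal/IP contraventions could look something like this sketch (field names follow the general RF2 row layout; the set of retained fields and the example values are assumptions for illustration only): the offending content is blanked while the identifying fields stay in place, so component history remains traceable.

```python
# Fields retained for traceability; everything else is blanked.
# This is an illustrative assumption, not an agreed policy.
KEEP = {"id", "effectiveTime", "active", "moduleId"}

def redact(row):
    """Blank all fields of an RF2-style row except the traceability fields."""
    return {k: (v if k in KEEP else "") for k, v in row.items()}

row = {
    "id": "101013",                      # made-up component id
    "effectiveTime": "20200131",
    "active": "0",
    "moduleId": "900000000000207008",    # SNOMED CT core module
    "term": "Some infringing term",      # the content to be redacted
}
redacted = redact(row)
```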
  • OCTOBER 2019 - DISCUSSION RE-OPENED AS PART OF THE MAG:
  • ONCE FEEDBACK OBTAINED FROM MAG:
  • Andrew Atkinson to update the Critical Incident Policy with
    • the various use cases that we've identified so far
    • the governing bodies who should be the deciding entities
    • the process for making the decision in each case
    • including the critical entities that need to be collaborated with in each case (all NRC's, plus 3rd party suppliers (termMed etc) who represent some of them), to ensure the final solution does not break outlying extensions or anything
    • the process for communicating out those decisions to ALL relevant users
26Re-activation policyThe standardisation of the SNOMED CT policy towards re-activation of RF2 records.

For example, we have had instances where MRCM records were originally inactivated, and replaced with updated versions that were subsequently proven to be less valid than the original records. The question was then whether or not to re-activate the original records, or to inactivate the new (invalid replacements) and create new valid records identical to the original records (but with new ID's).

Initially, it seems clear that the correct choice is re-activation, as this is cleaner, keeps churn in the files down to a minimum, and avoids confusion with any potential de-duping processes.

However, there is an argument from the users' point of view: they seem to prefer to have complete visibility of the historical audit trail, and from this perspective having all inactivations and the final (new) active records in the Snapshot + Delta makes it easier for those who don't use the Full file to see what decisions were taken and when.

So we would like to agree a standard, consistent approach to use, rather than deciding on a case by case basis.
  • Thoughts?
    • Everyone prefers reactivation
    • Andrew Atkinson to feed this back to Linda - DONE
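The mechanics behind the reactivation preference can be sketched briefly (identifiers and dates below are made up): the Full file keeps every state of the component under its original id, and the Snapshot is simply the latest row per id, so reactivation adds one row rather than a new duplicate concept.

```python
# Full-file history for one component: active -> inactive -> re-activated.
# Reactivation reuses the same id with a new effectiveTime.
full = [
    ("3641000087101", "20190131", "1"),  # originally active
    ("3641000087101", "20190731", "0"),  # inactivated
    ("3641000087101", "20200131", "1"),  # re-activated: same id, new effectiveTime
]

def snapshot(rows):
    """Derive the Snapshot: the most recent state per component id."""
    latest = {}
    for comp_id, eff, active in sorted(rows, key=lambda r: r[1]):
        latest[comp_id] = (comp_id, eff, active)
    return list(latest.values())

snap = snapshot(full)
```

The alternative (inactivating the replacement and minting a new id identical to the original) would leave two ids in play for the same meaning, which is exactly the churn and de-duping confusion noted above.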

  • Another question that's just come up in this area, is the assignment of inactivated concepts back to Primitive status.
    • The question is relevant now because this makes it much more awkward for authors to re-activate concepts, something that's becoming increasingly preferable (rather than creating new concepts that duplicate the originally inactivated ones).
    • In particular, it means that sometimes you cannot revert an inactivated axiom because it refers to other concepts which have been made inactive.
    • Does anyone remember why the rule was originally put in place?
      • Not really!
      • NO-ONE has an issue with us changing this in particular, so Andrew Atkinson to feed this back to Kai

27



28Proposed deprecation of the CTV3 Identifier simple mapAllDue to it coming to the end of its useful life, SNOMED International would like to propose planning for the deprecation of the CTV3 Identifier simple map (that currently resides in the RF2 International Edition package) as of the January 2020 International Edition. 

Some Member countries have already identified the reduction of the effectiveness of the product, and have already put plans in place to withdraw support for the CTV3 Identifiers from 2020 onwards. 

The TRAG therefore need to discuss whether or not there are any apparent problems with the proposed deprecation, and if so how they can be mitigated. 

We must also discuss the most effective method to pro-actively communicate out announcements to the community to warn them of the upcoming changes, in order to ensure that everyone who may still be using the Identifiers has plenty of notice in order to be able to make the necessary arrangements well in advance. 

Finally, we will need to decide on the best method for extricating it from the package, in order to ensure the smoothest transition for all parties, whilst remaining in line with the RF2 standards and best practices. 
  • AAT CHECKED THE PREVIOUS IMPLEMENTATIONS OF DEPRECATION OF BOTH ICD-9-CM and RT Identifiers, AND AS THOUGHT BOTH WERE IN THE CORE MODULE, AND REMAINED IN THE CORE MODULE IN THE STATIC PACKAGES - SO ANY ISSUES WITH DOING THIS AGAIN?
  • So the plan would be to follow the same deprecation process as we did with ICD-9-CM (ie)
    • move all of the content to a Static Package in July 2020, and inactivate all of the content
    • publish the reasons for inactivation in the historical associations
    • Release Notes similar to ICD-9 = SNOMED CT ICD-9-CM Resource Package - IHTSDO Release notes
    • CREATE A STATIC PACKAGE FOR CTV3 BASED ON THE JULY 2019 MAP FILES AND PUBLISH ON MLDS (and link through from Confluence link as well). ALSO LIFT THE CTV3 SPECIFIC DOCS FROM THE Jan 2020 RELEASE NOTES TO INCLUDE IN THE PACKAGE.
    1. Date of the files should be before the July 2020 edition (so say 1st June), in order to prevent inference of dependency on the July 2020 International edition
      1. So we set the effectiveTime of the Static package to be in between the relevant International Edition releases (e.g.) 1st June
      2. This is to ensure that it's clear that the dependency of the Static package will always be the previous International Edition (here Jan 2020), and not continually updated to future releases
      3. It cannot therefore have an effectiveTime of July 2020 (as we would normally expect because we're removing the records from the July 2020 Int Edition) as this would suggest a dependency on the July 2020 content which doesn't exist
      4. It also can't have an effectiveTime of Jan 2020, as we need to distinguish between the final published content, which was Active in Jan 2020, and the new static package content, where everything is Inactive.
    2. Inside the files should be all International edition file structures, all empty except for:
      1. Delta ComplexMap file needs to be cleared down (headers only), as no change in the content since the Jan 2020 files, so no Delta
      2. Full and Snapshot ComplexMap files exactly as they were in Jan 2020 release (including the effectiveTimes)
      3. ModuleDependency file needs to be blank, as CTV3 was in the core module (not in its own module like ICD-10 is), and therefore the dependency of the core module (and therefore the CTV3 content) on the Jan 2020 edition is already called out in the Jan 2020 ModuleDependency file, and therefore persists for the static package too.
      4. Date of all of the files inside the package should be the new date (1st June)
      5. But all effectiveTimes remain as they were in Jan 2020
      6. Leave refsetDescriptor records as they are in the International edition
      7. RELEASE Notes Should describe all of the thinking we went through when creating this package, why the moduleDependency file remains blank, and why we’ve wiped the Delta, etc (see above)
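The invariants in steps 1-7 above lend themselves to an automated check. This is a minimal sketch, assuming file contents are represented as lists of lines; the function name and this simplified representation are illustrative, not part of any actual release tooling.

```python
def check_static_package(delta_lines, full_lines, jan2020_full_lines, mdr_lines):
    """Check the agreed static-package invariants:
    - Delta files cleared down to headers only
    - Full/Snapshot identical to the Jan 2020 release (incl. effectiveTimes)
    - ModuleDependency file left blank (header only)
    Returns a list of problems found (empty list = package is valid)."""
    problems = []
    if len(delta_lines) != 1:               # header row only, no data rows
        problems.append("Delta must be cleared down to headers only")
    if full_lines != jan2020_full_lines:    # byte-identical to Jan 2020
        problems.append("Full/Snapshot must match the Jan 2020 release exactly")
    if len(mdr_lines) > 1:                  # no dependency rows
        problems.append("ModuleDependency must be blank for core-module content")
    return problems

# A conforming package produces no problems:
problems = check_static_package(["hdr"], ["hdr", "row"], ["hdr", "row"], ["hdr"])
```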
  • AND ALSO COMMS SAME AS WE DID WITH THE RT IDENTIFIER REFSET DEPRECATION:
    • RT Identifier Refset deprecation:

      We need additional comms around the July 2017 release, in addition to the usual Release Notes wording, in order to confirm what is happening and the rationale behind it.

      To re-iterate what was discussed on the previous call, Legal counsel confirmed that from a legal perspective, he doesn’t consider that it’s either necessary (or even advisable) for us to send CAP any further communications on this matter.  Legal counsel is confident that the informal discussions that we’ve already had with them (in order to remind them about what they need to do), are sufficient to cover our legal obligations, given that the licence is theirs and not SNOMED International's.  Therefore we no longer need to send a formal letter to CAP.

  • Has anyone identified any issues with the proposed deprecation?

    • If so what?

  • Is everyone still in favour of the refined process to use to deprecate??

  • If all good then Andrew Atkinson to begin formal deprecation process

29Clean modularizationAll

There are 22 module concepts that are on the 900000000000012004|SNOMED CT model component| module.

I don't think it's documented anywhere, but we (AU) have made the assumption that the concept for a module should be on itself. I suspect we've started to discuss this before, but can't recall how accepted this position was. The 22 concepts below (including the core module) aren't part of the core release, but clutter up the hierarchy. We also get enquiries about this content, some of which is non-existent or unavailable.

  • Thoughts please from everyone on whether or not this proposal would have any impact (negative or positive) on the International Edition?
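The AU assumption could be checked mechanically, roughly as sketched below. The model component module id is real; the AU module id and the function name are illustrative assumptions.

```python
MODEL_COMPONENT = "900000000000012004"  # SNOMED CT model component module

def misplaced_module_concepts(concept_rows, module_concept_ids):
    """Return module concepts whose moduleId is not their own concept id,
    i.e. those not 'on themselves' per the AU assumption."""
    return [
        row for row in concept_rows
        if row["id"] in module_concept_ids and row["moduleId"] != row["id"]
    ]

rows = [
    # Core module concept, currently on the model component module:
    {"id": "900000000000207008", "moduleId": MODEL_COMPONENT},
    # Hypothetical AU module concept, on itself:
    {"id": "32506021000036107", "moduleId": "32506021000036107"},
]
bad = misplaced_module_concepts(rows, {"900000000000207008", "32506021000036107"})
```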
30Proposal to increase the level of metadata available for authors to log decisions made during content authoring

Jim Case +

Suzy Roy

This is a subject that would be helpful to include Jim in the discussions, as he has some definite opinions on how to improve the metadata in this area. 

Some suggestions would be to make more detailed information available for authors to describe their reasons for inactivation (especially in those areas where currently they are forced to use inactivation reason codes that aren't completely representative of the reasons in that instance).

Adding Jim Case - for further discussion later...

31Potential for adding a "withdrawn" reason for inactivated contentAllDiscussions around the future strategy for SNOMED CT have included the potential for adding new statuses for content. 

In particular, many people have suggested that problems are created for those either mapping or translating from content that's still "in development". If (as is often the case) they use Daily Builds etc as input data, they can often get tripped up by content which is created but then withdrawn before it's versioned and officially released. It would be extremely useful to those users to have access to traceability data describing the reasons behind why they were removed, in order to support accurate mapping/translation. 

In another use case, there's the possibility that content needs to be formally withdrawn from the International Edition AFTER it's been officially released. This would be the case if, for example, content has unintentionally been published that breaks the RF2 paradigm, or contravenes licensing laws, etc. In this case mere inactivation is not sufficient, the content instead needs to be completely withdrawn from the releases and sometimes even from history. 

The TRAG needs to discuss all of this and be ready with recommendations if these proposals are taken forward.
  • ONE OF THE POTENTIAL SOLUTIONS TO THE ISSUE ABOVE: "Negative Delta" file approach
  • Use cases:
    • undo a historical issue (one that breaks the RF2 paradigm, etc.) but without pretending it never happened - in this case we should use the Negative Delta approach, but only in EXTREME circumstances
    • Legal contraventions - in this case we should use the Negative Delta approach, but only in EXTREME circumstances
    • Dead on arrival components - it should be okay to have these, openly dead on arrival and therefore inactive, so that nobody maps to them. However, it's useful to be able to see them (even though they were activated + inactivated within the same release cycle) - people who need to map/translate etc. DURING the release cycle have to rely on the Daily Build and use live data still in development. If those concepts disappear by the time of the International Edition, it causes problems for any maps/translations already including them.
      • Therefore the best answer is for us to move to having 2x Daily Builds - the existing one + a separate true Daily Build, where each build is built relative to the previous day and NOT to the previous Published release. This new Daily Build could then be properly relied upon by mapping and translation projects.
      • Can we align this with the transition to the more Frequent Releases?
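The difference between the two builds can be sketched as follows: a "true" daily delta is computed against the previous day's state rather than the last published release, so content created and withdrawn mid-cycle stays visible from one day to the next. The data shapes and ids here are illustrative assumptions.

```python
def daily_delta(today, yesterday):
    """Components (id -> state) that are new or changed since yesterday's
    build, rather than since the last published release."""
    return {cid: state for cid, state in today.items()
            if yesterday.get(cid) != state}

# state = (active flag, effectiveTime); ids are made up
yesterday = {"101": ("1", "20200401"), "102": ("1", "20200401")}
today     = {"101": ("1", "20200401"),   # unchanged
             "102": ("0", "20200406"),   # withdrawn mid-cycle: still visible
             "103": ("1", "20200406")}   # newly authored
delta = daily_delta(today, yesterday)
```

A mapping or translation project consuming this delta would see "102" being withdrawn the day it happens, instead of the concept silently vanishing from a release-relative delta.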
  • HAS ANYONE HAD ANY MORE THOUGHTS ON THIS SINCE OUR LAST DISCUSSIONS??
  • MAG to discuss tomorrow (30/10/2019)
32National pre-release processes- Joint work item with the CMAG

Suzy to provide update on progress...

Suzy introduced the topic and gave a brief update on the agreed scope and timelines. She also requested input from anyone not already involved who feels it would be useful and appropriate.

Who's already involved? Would anyone else like to become involved? We've still only had one call about this, so it's still a good time to join in. Some people (Feikje etc) were particularly interested in the Continuous Delivery discussions, so we'll fold that in later...

Adding Matt Cordell (currently on CMAG)

  • The working group has not progressed and the priority of this dictates that it is unlikely to do so quickly, so Suzy Roy / Patrick McLaughlin and Matt Cordell will continue to keep the TRAG informed of progress as and when we need to get involved...
  • The next likely work will be done by Suzy for documenting the US Edition process...
  • We also need to consider the work being done in the Shared Validation Service here, as this will help us to define and standardise National pre-release processes...



33Spanish Member Collaboration Process refinementsSpanish Edition users only

There was a presentation made at 17:10 by Arturo Romero Gutierrez, to walk through the improvements to this process that have been discussed and agreed since the inception of this new process, and what the Spanish Edition users need to commit to in order to be a contributing part of this process.

Everyone was welcome to stay and participate!

  • Presentation from Arturo
  • Agreement from all Spanish Edition users who were present (Alejandra, Suzy + Alejandro) to collaborate and contribute to the refined process
  • We then formalised the process and distributed the document out to all interested parties

  • Arturo, Guillermo, and all others to report back on how the process is working for them?
    • October 2019 cycle worked well for Arturo + Guillermo.
    • April 2020......?
34Spanish Edition feedback processes and content improvementsSpanish contingentThis is for discussion with all those interested in the Spanish Edition, in particular the refinement of the content and the processes behind the feedback procedure.
35Discussion on the conflict between Extension content and International content

All

Jim Case

The answer to this may be quite simple:

  1. If extensions promote content via RF2 delta, we just need to retain all ID's, and only change the ModuleID and effectiveTime, and therefore it is all managed by effectiveTime.
  2. If IHTSDO reject content this is also managed
  3. The only issue comes if IHTSDO want to change the FSN - we then need a way to manage the change of the meaning of the concept without creating 2 FSN's, so we need a feedback loop to ensure that it's also corrected at source in the extension as well as in the International edition.
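Point 1 above can be sketched simply (the function name and example identifiers are illustrative assumptions; the core module id is real): promotion keeps the component's id and changes only moduleId and effectiveTime, so history remains managed by effectiveTime alone.

```python
INTERNATIONAL_CORE = "900000000000207008"  # SNOMED CT core module

def promote(row, new_effective_time):
    """Promote an extension component to the International Edition:
    same id, new moduleId and effectiveTime, everything else unchanged."""
    promoted = dict(row)
    promoted["moduleId"] = INTERNATIONAL_CORE
    promoted["effectiveTime"] = new_effective_time
    return promoted

ext_row = {"id": "12345678901",            # made-up extension component id
           "effectiveTime": "20190731",
           "active": "1",
           "moduleId": "32506021000036107"}  # made-up extension module id
intl_row = promote(ext_row, "20200131")
```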

TRAG to continue the discussion and come to a conclusion that will work for all.

  • Has this been answered in its entirety by Jim's new agreed approach? (link here to his final position)
  • Most people consider that Jim's approach covers this under most circumstances. We also need to ensure that we follow the approach listed to the left - so we should confirm all of this has been working in practice since April 2018, and if so close down?
36

NEW ITEM

Versioning Templates

* MAG crossover

* EAG crossover


The EAG have proposed the need to version templates in some way, and potentially even make them "Publishable" components (with all of the relevant metadata that goes along with that). There is also the potential to make them language sensitive.

They would then also need to be automatically validated themselves, as well as then being used in the automated validation of the International Edition!

  • Keep an eye on EAG + MAG discussions on this topic and
  • Ensure that the decisions are fed into our Continuous Delivery proposal
37AG Declarations of InterestAllCould each of you please go in and update your information? If there has been no change, then you can simply update the last column with the date. 
  •  October 2019
38Any other questions / issues?All
  •