Stream: R4A/B/R5 Discussion
Topic: Initiation
Grahame Grieve (Jul 29 2020 at 21:35):
I've created this stream to discuss the issues around an intermediate version between R4 and R5, on the grounds that it's likely to be a high-volume but short-lived discussion that people want to track specifically.
Grahame Grieve (Jul 29 2020 at 21:35):
The basis for this discussion is https://docs.google.com/document/d/1xbi1MUgGj4hYagSEsv8RtHnmguNQWu2tv1jggH7uLQg
David Pyke (Jul 30 2020 at 12:51):
Can we itemize "We want to support accelerated timelines for some content" so that we can determine real priority, not desire?
Rik Smithies (Jul 30 2020 at 12:54):
I would be interested to know how to tell between real priority and desire :-)
David Pyke (Jul 30 2020 at 12:55):
Real priority == the current one is actually unimplementable due to it breaking clinical (or other) workflows
Desirable == Implementation of it is difficult for X due to limitations of that stack's design
David Pyke (Jul 30 2020 at 12:57):
Nice to have == some groups use this in their workflows and extensions are hard.
Rik Smithies (Jul 30 2020 at 13:29):
It would be nice to have clear guidelines, I agree.
I was expecting you meant a way to evaluate "my project is really really important" - desire or real priority?
To yours though, there is a continuum between those 3 isn't there?
Nothing is unimplementable, because there is always a workaround, or an extension or the Basic resource etc. It becomes a question of what is "easily" implementable, and then it's somewhat qualitative again.
There is also the issue of how hard it is to work around vs how "important" it is (big or multiple projects/stakeholders etc). Together they make the value (volume of pain avoided).
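The Basic-resource workaround Rik refers to can be sketched concretely. In the sketch below, the code system and extension URLs are hypothetical placeholders (example.org), not from any published specification; real usage would define and profile proper extensions:

```python
# A minimal sketch of the "Basic resource" workaround: when a dedicated
# resource type does not exist in a release, the data can be carried in a
# FHIR R4 Basic resource whose meaning is conveyed by a code and by
# extensions. The code and extension URLs here are illustrative
# placeholders, not from any published specification.

def as_basic_resource(concept_code: str, payload: dict) -> dict:
    """Wrap domain data in a FHIR R4 Basic resource (illustrative)."""
    return {
        "resourceType": "Basic",
        "code": {
            "coding": [{
                "system": "http://example.org/fhir/CodeSystem/local-concepts",
                "code": concept_code,
            }]
        },
        # Each field is carried as an extension; receivers must know the
        # (hypothetical) extension URLs to interpret it.
        "extension": [
            {"url": f"http://example.org/fhir/StructureDefinition/{key}",
             "valueString": str(value)}
            for key, value in payload.items()
        ],
    }

example = as_basic_resource("medication-definition",
                            {"name": "Examplumab", "form": "tablet"})
```

The trade-off Rik alludes to is visible in the result: everything meaningful lives in generically-typed extensions, so receivers get none of the native structure, search parameters, or validation a dedicated resource would provide.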
David Pyke (Jul 30 2020 at 13:34):
In my mind, something that has a core structure that breaks workflow is urgent. Right now, I hear about resources that are not optimal or need some restructuring based on industry feedback. From the depths of my memory, profiles and extensions were the way to handle that in between releases. If enough people feel that is the way to go, they become part of core and if they're really important the resource is changed.
My desire is to have the feedback from the international community placed in front of us so that we can categorize it (with leadership from the responsible work group). I think that will solve how to handle each case that is being addressed.
Rik Smithies (Jul 30 2020 at 13:51):
not wanting necessarily to get onto specifics but, for example, the medication definition resources do not really exist in R4 - they have totally different names (we were asked to change them).
You can't really work around that with extensions and profiles. An API based on them would need to change totally between R4 and R5. And the tooling, server support etc is all different.
Plus they were new and very draft at that time, only just squeezing into R4. And now they have had many more times the amount of work and review/changes. It's not incremental.
The users of these resources are certainly international (Canada, US, and lots of countries in Europe. Oh, and UK).
Vassil Peytchev (Jul 30 2020 at 13:52):
I think "accelerated timelines for some content" reflects the need for continuous iterative development of the standard. There should be a way for STU-level content to evolve using the ballot process. It is much easier to look at a smaller set of proposed changes while the context of the whole specification remains unchanged, rather than having to review everything at once trying to ensure consistency. Having 4+ years between R4 and R5 without another ballot or publication in-between seems to be too long. If you are building a profile and expect something to be implemented with a one-off extension, by the time the extension is moved to core, no one will be "fixing" existing implementations.
It is probably a huge change for the tooling around the ballot and publication, but I think if it's not done now, we will suffer much more in the future.
David Pyke (Jul 30 2020 at 14:27):
I agree that 4 years is a huge time between releases. However, we have been moving very fast for a standard and not giving time for industry to catch up. If we keep having new full releases every two years, we're going to get people very upset because they can't get on their feet. My feeling is that new resources or workflow-fixing features with a big industry push warrant an incremental release (4.1/4B/etc.) but everything else should follow the extension route.
Vassil Peytchev (Jul 30 2020 at 14:33):
I am not advocating for new full releases every two years. I am advocating for changing the release and publication process in a way that will allow resources at FMM L3 and lower to go through STU ballots relative to the last "full release" in much shorter iterations.
Lloyd McKenzie (Jul 30 2020 at 14:55):
I think that long-term, Option D in the document will address our issues. You could introduce 'customized' resources in an IG if you really needed them and they would work with the reference implementations, tooling and public test servers. The timelines (and ballot review scope) would be IG-based, not a "whole new ballot of core" - which is a huge undertaking. Those systems that really needed the content would have a way to use it in an "officially ballot approved form", with the understanding that it would be introduced in the next official release (and, as with everything not normative, might undergo further changes as part of that process). It would mean that 'official' core releases could continue with a pace that industry (and HL7) can sustain, while still allowing urgent stuff to move faster.
Lloyd McKenzie (Jul 30 2020 at 14:55):
The problem is that Option D isn't legal in R4. So it's not really a viable option this round. :(
David Pyke (Jul 30 2020 at 14:56):
What does "not legal" mean?
Gino Canessa (Jul 30 2020 at 17:29):
Don't know if it's allowed, but I'd like to propose an option E (or modified B?) for changes to the release process.
Keep the existing release process for a major version (e.g., R4). Major releases are the only ones with new normative content.
On a fixed release schedule (e.g., 3 months?), do a minor release (e.g., R4.1). Require WGs to not make breaking changes in minor releases on anything FMM 'X' or higher. These can have a lighter process since nothing can be made normative, but should have enough in common that much of the work can be reused for the major release.
The next major release is just scheduled for after 'n' minor releases.
This lets people adopt a major release with confidence that the mature content will be supported for a long time, but allows for faster iteration on the things that aren't. It also limits the chaos, since there are only a fixed number of minor releases possible. It still leaves some possible lags (e.g., something that needs breaking changes and misses a major release window) and has some issues with compatibility, but it feels like a good compromise process to me.
David Pyke (Jul 30 2020 at 17:32):
While an interesting solution, it completely prevents implementation of any resource below FMM X. Because while they may not be breaking changes, they require time to implement and get into workflows. Every three months the same resource could change, requiring retooling of the workflows.
Lloyd McKenzie (Jul 30 2020 at 20:25):
There's no mechanism in R4 to be able to send instances that contain resources that aren't part of the R4 schema and be conformant. And no way to introduce it to R4 in a compatible way
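Lloyd's conformance point can be sketched with a toy check: an R4 consumer validates `resourceType` against the closed set baked into the R4 schema at release time, so an instance carrying a later-defined type is simply non-conformant. The validator below is a deliberate oversimplification (a real validator checks far more than the type name), and the resource list is a tiny subset:

```python
# Illustrative only: an R4 validator accepts a closed set of resource
# types frozen at release time, so a new resource type cannot be
# introduced "compatibly" -- any instance using it fails validation.

R4_RESOURCE_TYPES = {"Patient", "Observation", "Medication", "Basic"}  # (subset)

def is_conformant(instance: dict) -> bool:
    """Toy check: only the resourceType is inspected."""
    return instance.get("resourceType") in R4_RESOURCE_TYPES

accepted = is_conformant({"resourceType": "Patient"})
# A resource type defined after the R4 schema was frozen is rejected,
# no matter how well-formed the instance is:
rejected = is_conformant({"resourceType": "MedicinalProductDefinition"})
```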
Lloyd McKenzie (Jul 30 2020 at 20:25):
Doing minor releases doesn't actually help anyone if the folks who build and maintain tools, test servers, etc. don't actually implement them
Lloyd McKenzie (Jul 30 2020 at 20:26):
And implementing would mean writing transforms to internal storage, testing, etc.
Lloyd McKenzie (Jul 30 2020 at 20:26):
Putting out a release that nothing implements doesn't help people - but creates lots of confusion.
Rik Smithies (Jul 30 2020 at 21:02):
(deleted)
Gino Canessa (Jul 30 2020 at 21:08):
My thought is to split apart the implementers of generic long-term production software (which are unlikely to include resources below FMM X anyway) and those that are working in those areas, who need to be able to iterate quickly.
Speaking generally, we're not going to find a solution that allows for both long-lived consistency and rapid development. If we want aspects of both, we need to try and find a compromise that hopefully isn't too painful. As others have mentioned, this is going to be more and more common as the standard grows and matures.
Lloyd McKenzie (Jul 31 2020 at 01:31):
The notion with option D is that tool developers should only need to say "This is another version" and because all that can happen is adding a new resource (even if it's a variation on an existing one), iteration should have minimal impact on tools and test servers. The tool smiths couldn't possibly tolerate quarterly releases. Plus fragmenting the community doesn't help interoperability.
Gino Canessa (Jul 31 2020 at 15:41):
Yes, that's my thought on periodic limited scope releases as well.
Because if IGs define and update resources, tool authors need to update based on every individual IG's cadence. This will lead to constant updating and/or more fragmentation.
Lloyd McKenzie (Jul 31 2020 at 15:45):
If the tools can just suck in an IG and work, with no code changes, tool vendors won't have to worry about them. That's the objective - and is really the only thing that's workable for a faster cadence.
Lloyd McKenzie (Jul 31 2020 at 15:47):
We don't re-publish the tools when we add new profiles. The notion is to do the same sort of thing for a certain class of resources. (Wouldn't be able to do this for infrastructure resources though).
Gino Canessa (Jul 31 2020 at 15:47):
I'm confused. If an IG defines a new resource (for example), how is that better for a tool vendor than a new version in a release?
Lloyd McKenzie (Jul 31 2020 at 15:51):
Right now, new resources force a whole set of development changes for tool vendors. The notion with IG-defined resources would be for tools to handle them in a way similar to how tools handle profiles. If we can make that work, then it'll be easier (and relatively painless) to introduce new resources in IGs. These would still be 'special' IGs - HL7 would need to approve their existence so we're not opening the creation of resources to everyone. And there might be very small incremental change needed to tools to recognize the new IGs (and resources in them), but nothing like what's needed now
Gino Canessa (Jul 31 2020 at 15:52):
I guess that's where I'm confused - how is the effort to build that into tooling less than doing the equivalent so that it can do quarterly releases (with the same type of scope)?
Gino Canessa (Jul 31 2020 at 15:53):
e.g., if we say that any resource below FMM X is treated that way in the spec.
Lloyd McKenzie (Jul 31 2020 at 16:50):
It's a question of how the content is packaged. If FHIR Core changes, then anything in core could change, and all of the effort involved with dealing with a new release of core impacts tools. If there's an IG that defines supplemental stuff, then the FHIR core release is unchanged and tools can just suck in an external 'package'
Lloyd McKenzie (Jul 31 2020 at 16:51):
Core is a single package. We could theoretically look at moving low-maturity resources out to the IG approach if we wanted them in a distinct package
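The "distinct package" idea can be sketched as data-driven tooling: if a tool keeps resource definitions as data keyed by type name, supplemental content becomes a package merge rather than a code change. The package ids and contents below are illustrative (the supplemental package name is made up for this sketch):

```python
# Sketch of the "core + supplemental package" idea: a tool that loads
# resource definitions as data only needs to merge an extra package to
# pick up supplemental resource types. Package ids/contents are
# hypothetical placeholders.

def load_definitions(*packages: dict) -> dict:
    """Merge resource definitions from core plus supplemental packages.
    Later packages may add new types but must not redefine earlier ones."""
    merged: dict = {}
    for pkg in packages:
        for name, definition in pkg["definitions"].items():
            if name in merged:
                raise ValueError(f"package {pkg['id']} redefines {name}")
            merged[name] = definition
    return merged

core = {"id": "hl7.fhir.r4.core",
        "definitions": {"Patient": {"kind": "resource"},
                        "Basic": {"kind": "resource"}}}
supplemental = {"id": "hl7.fhir.r4.supplemental-1",  # hypothetical package
                "definitions": {"MedicinalProductDefinition": {"kind": "resource"}}}

defs = load_definitions(core, supplemental)
```

The key property is the one Lloyd describes: core stays byte-for-byte unchanged, and the "no redefinition" rule keeps the supplemental package purely additive.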
Gino Canessa (Jul 31 2020 at 17:00):
I'm quite concerned about having IGs define resources. It feels like it pushes the responsibilities the wrong direction and will cause more of the issues we are trying to resolve.
If we want two types of releases, I'd make it explicit and have a "FHIR Core" and "FHIR Supplemental" (or whatever names make sense) with the same caveats (e.g., R4 is Core, 4.1-4.x are Supplemental, compliant with R4 Core). This alleviates the burden from tool developers, but still restricts the chaos of expecting people to support arbitrarily defined resources.
Thoughts?
Lloyd McKenzie (Jul 31 2020 at 17:03):
D is sort of doing that. The key thing is that 'core' needs to be unchanged between major releases. As soon as you open core up for change - at all - the speed with which you can do anything new slows dramatically. But core + FHIR Supplemental 1, 2, 3 is much more manageable.
Lloyd McKenzie (Jul 31 2020 at 17:04):
We can call the extra 'package' "Supplemental" or something other than an IG if that makes things clearer.
Lloyd McKenzie (Jul 31 2020 at 17:05):
(unchanged between major releases still allows for technical corrections, but those are still painful and something we want to minimize)
Vassil Peytchev (Jul 31 2020 at 17:41):
What is making technical corrections painful? If we can identify and improve these parts, maybe that will open some possibilities that have not been considered yet.
Gino Canessa (Jul 31 2020 at 18:10):
Perhaps it's about the phrasing. I'm thinking anything FMM 0-X is Supplemental, and things move into Core at FMM X+1.
For me, this is quite different than letting IGs define resources, since it's still an established and controlled release cycle.
Ugh. This would be so much easier to get a bunch of people in a room and sort out :-/
Vassil Peytchev (Jul 31 2020 at 18:29):
I think one of the key points is "Core is a single package". I don't know what the practical manifestation of this is, but I suspect that it is an important part of what may need to change.
There are three key features that I think are important here:
- A Ballot has everything (1) in the FHIR Core specification
- A Release has everything (1) in the FHIR Core Specification
- The CI build has everything (2) in the FHIR Core Specification
How does "package" fit in here, and can we disconnect it from these features, while preserving them (the features)?
Note: the only difference between everything(1) and everything(2) is that the CI build has things that were never part of a ballot.
Lloyd McKenzie (Jul 31 2020 at 20:13):
- The core specification is built as a cohesive whole, so when you publish, you must publish from the source of everything, as it exists, in that branch. There is no ability to publish anything less than the core spec.
- It's possible to ballot subsets of the spec, but we can't do a "release" where anything's changed that hasn't gone through appropriate review. Generally that means formal ballot, though there's some degree of wiggle-room for small STU changes. What that means in practice is that if we can't guarantee that parts of the spec haven't changed, then we need to open that part of the spec to ballot.
- We have very limited ability to manage branches and merge changes from one branch to another because most of our key source is in spreadsheets and they don't merge worth beans. That means that it's very hard to start with a 'base' branch and pull a small set of changes across for a publication. It's not so bad when it's net new resources, but it's a real mess if you're trying to modify an existing resource and maintain the changes in both the 'ballot' branch and the 'master' branch
- Whatever goes to ballot and whatever gets published must be QAed. That process involves a lot of human review
- It takes about 2 person-days just to upload a new release of core (you're replacing the whole 'site', with links across the various versions, changing the headers on the old 'current' version to no longer be current, moving that out of the 'default' location, etc.)
@Grahame Grieve could provide more details. I know the last technical correction release of R4 cost him almost a month, though I think there were some special circumstances there.
Lloyd McKenzie (Jul 31 2020 at 20:14):
Following a more typical development cycle might be easier if we moved away from spreadsheets for authoring, but that, in itself, is probably a 9-month project involving development, training, etc. Plus the risk of being dependent on a custom authoring tool (which history has shown isn't a great place to be). Such a project would need funding and would consume a good chunk of Grahame's time, delaying other work.
Grahame Grieve (Aug 13 2020 at 12:10):
catching up on this... @Gino Canessa is basically right: option D is effectively giving up and doing what's easy for us and making the whole thing someone else's problem (TM). It's easy to see how we could pretend that it solves the problem, but not at all easy to see how I would deal with it as a tool smith
Grahame Grieve (Aug 13 2020 at 12:11):
I completely agree with @Vassil Peytchev's position and it was mine coming into this discussion. I just haven't figured out how to make it actually work
Grahame Grieve (Aug 13 2020 at 12:13):
I really don't think that spreadsheets are even a significant problem for the idea of forking the work on a new version. The real problem is that only a small part of the output of the main build is modular on the resources. Such a lot of it is across the board tooling and integrated package generation
Grahame Grieve (Aug 13 2020 at 12:13):
quality work applies to the entire specification
Grahame Grieve (Aug 13 2020 at 12:13):
in the next few days I have to release R4.4. I'll keep this discussion in mind as I go through that process
Vassil Peytchev (Aug 13 2020 at 12:52):
Grahame Grieve said:
The real problem is that only a small part of the output of the main build is modular on the resources. Such a lot of it is across the board tooling and integrated package generation
I definitely don't know all (most of?) the pieces of what goes into a release, but I am more than willing to learn. If you think the R4.4 snapshot might be a good place to get a better understanding of all the pieces, I will be happy to document what I learn, so that we can get closer to a solution.
Catherine Hosage Norman (Aug 13 2020 at 22:27):
The document states "We don't want to fork". There is no explanation. Why not branch? This is not the first software project that has new requirements. There are usually production, test, and development environments. If you do not want to use the features of GitHub for parallel workstreams, why not clone the build and have a separate path.
Other than that, I support D since I was already looking into AidBox since they allow creation of private resources. We cannot make any progress on developing an IG using the Medication Definition resources without being able to access them.
Catherine Hosage Norman (Aug 13 2020 at 22:37):
A 4 year cycle for versions cannot be considered "fast". How soon can option D be implemented?
Grahame Grieve (Aug 13 2020 at 22:52):
it's far from clear whether option D will work
Jose Costa Teixeira (Aug 13 2020 at 23:33):
As FHIR grows, what is the expectation that these challenges to release will become smaller, easier to automate or easier to delegate? That should be one of the criteria, right?
Grahame Grieve (Aug 13 2020 at 23:42):
probably. Sounds kind of difficult...
Scott Fradkin (Aug 14 2020 at 04:56):
This will be a rather naive comment since I don't know the full breadth of what is affected due to a version change. There's a lot of comments about how long it takes to do a release on the HL7 side and length of time for tool vendors to make changes. How can we approach making that better? What is the effort to automate things more than they might currently be? How can the release of artifacts be modified to make things easier for tool implementers? Does it make sense to expend effort to look into more automation on the HL7 side, look into how to output artifacts to help tool implementers more, look into how to make things more modular? Can we make changes to the release and balloting process to allow for faster cycle time if we can create releases in a modular manner?
Grahame Grieve (Aug 14 2020 at 05:45):
if anyone has ideas for what additional artifacts we can generate, we're all ears. But anything that we generate or don't doesn't get to the issues around adoption. As the maintainer of a couple of the reference implementations, I know that every release that we support creates ongoing work for me. But even that doesn't get to the heart of it - it's the actual adoption that matters.
Grahame Grieve (Aug 14 2020 at 05:48):
as for doing a release. I'm doing one in about 48 hours or so. It's largely a matter of pushing the buttons, and then waiting for the upload to happen. I've significantly improved my internet access since last time, so it might be quicker this time
Grahame Grieve (Aug 14 2020 at 05:49):
but it's not that that's really the limit - it's all the work that goes into ensuring coherence and quality and consistency that is the problem we are concerned about
Grahame Grieve (Aug 14 2020 at 05:50):
Also I'm open to more automation...
Alexander Henket (Aug 14 2020 at 08:14):
Reading the Google doc and the arguments here I have to say that we're mostly interested in an R5 release that contains change requests we asked for based on R4. I think I see a projected timeline where R5 is due by beginning of 2022. That does not fly well with our intent to start work on a major release of our functional models this year, with releases in 2021. Basically that means sticking with STU3 or move to R4. As far as I can tell R4+ would not be something for our context.
The current process seems to be that every X period, a new beta release is done from the build site. The stuff is thus frozen for use e.g. in a connectathon. These R5 preview builds are confusingly called 4.X as if they were formal releases of R4. I'd probably name those 5.0.0-beta1 to better reflect what they are.
In my mind you already have releases of the stuff people want in those betas/previews. So which problem do we need to solve? Is it that the new medication stuff wants to be released on a stable R4 rather than a beta R5? I think proposal D comes closest to that goal.
Medication could have an IG that claims to be R4 compatible and that would constitute your addon component. I think that is what proposal D is aiming for. If the IG tooling can generate reference implementations of new resources from an IG, people will take notice and create IGs like that as they see fit. That could be a double-edged sword. Is that what was meant by "chaos" in the cons? Also: the IG Publisher does the Java part, but .Net/Swift and others need to be on board with this too. I suppose they are, otherwise D would be a lot less attractive?
If you venture down the path of an addon IG, and suppose that works, would you ever merge that body of work back into FHIR Core? Is the assumption you would?
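Alexander's naming suggestion follows semantic-versioning precedence, under which a pre-release tag like `5.0.0-beta1` sorts after every 4.x release but before the final 5.0.0 — which is exactly the signal a preview build should send. A minimal hand-rolled comparison (not a full SemVer parser; it only handles the shapes shown):

```python
# Sketch of why "5.0.0-beta1" is clearer than calling an R5 preview
# "4.X": under semantic-versioning precedence, a pre-release sorts after
# every 4.x release but before its own final release. Hand-rolled and
# deliberately incomplete -- not a full SemVer implementation.

def version_key(version: str):
    core, _, pre = version.partition("-")
    numbers = tuple(int(part) for part in core.split("."))
    # A version with a pre-release tag precedes the same version without one.
    return (numbers, 0 if pre else 1, pre)

versions = ["5.0.0", "4.0.1", "5.0.0-beta1", "4.4.0"]
ordered = sorted(versions, key=version_key)
# ordered -> ["4.0.1", "4.4.0", "5.0.0-beta1", "5.0.0"]
```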
Vassil Peytchev (Aug 14 2020 at 12:37):
In my mind you already have releases of the stuff people want in those betas/previews. So which problem do we need to solve?
The problems are:
- these snapshots have not undergone ballot
- the snapshots have things in them that have not even passed WG review.
From my point of view, we need a process to add on new work on top of the last major release and be able to ballot it, and then publish it, without having to have tools and reference implementations deal with all the other things that are also in the CI build.
Brian Alper (Aug 14 2020 at 12:49):
I may not understand all the details of the technology and the balloting processes, but I wonder if a "clone" idea would work well for the EBM Resources. The FHIR infrastructure is tremendous to re-use, rather than re-solving the many problems that FHIR has already solved. But the deep interoperability between patient care resources that FHIR supports, and the deep interoperability between Evidence-related resources, are both needed, while the deep interoperability between these two sets of resources is not critical. Today we work between these communities without any interoperability. So an EBM-FHIR clone that can grow substantially for the EBM community AND facilitate easier relations between healthcare and EBM reporting would still be a tremendous advance, and not require core changes to FHIR just to support EBM needs. I don't know how easy or difficult this would be, but is this a good option?
Vassil Peytchev (Aug 14 2020 at 13:07):
Currently, all resources are part of FHIR Core. I think this is a very valuable underpinning of the specification. Unless the EBM work can be done as profiles on existing resources, I think there is a significant danger that creating an "EBM fork" will lead to divergence in the specification, where the same information is represented in two different ways.
Catherine Hosage Norman (Aug 14 2020 at 13:20):
Is there a time frame for making a decision on R5? We committed to using FHIR for PQ/CMC when R5 was scheduled for 2020. This has major impact on our project.
Catherine Hosage Norman (Aug 14 2020 at 13:59):
I do not see any reason why not to fork. The regulatory resources have very little to do with the rest of FHIR, which really goes into the minutiae of clinical care that is of no interest to us.
Brian Alper (Aug 14 2020 at 14:04):
I wonder if the divergence danger is limited for the "EBM fork". The key EBM Resources (Citation, Evidence, EvidenceVariable, EvidenceReport, Statistic (Datatype), OrderedDistribution (Datatype)) are not used yet outside the EBM community as far as I know. If the likelihood of substantive change to commonly used/shared datatypes is low there may be little cross-system changes that cause problems. We can agree to not change resources like Group that could be used to cross-communicate. Any of these concepts are possibly simplest if figured out at the "beginning" and we are close to this beginning now.
Vassil Peytchev (Aug 14 2020 at 14:23):
I think the piece that might be missing is that with a fork, no existing reference implementation will work with these new resources, and you would have to fork the IG publisher and validator.
If I understand correctly, the EBM need is for a set of Resources and Datatypes to be part of a balloted release ASAP. This can be achieved by a general solution that enables quicker turnaround of balloting, the problem is that we don't have such a general solution yet.
Brian Alper (Aug 14 2020 at 16:03):
The EBM need is for a set of Resources and Datatypes (not used by other implementers) to have a more rapid path for revisions to be applied in a server for actual use. Because it is not being used in other implementations yet, we have an opportunity to set up processes to minimize chaos, compatibility problems, etc.
Jose Costa Teixeira (Aug 15 2020 at 08:56):
I was wondering if Option D could be precursor for option E (Gino's suggestion):
Jose Costa Teixeira (Aug 15 2020 at 08:58):
Be very liberal / chaotic in local IGs - allow changes, custom resources.
Once some IGs say "we think we're on to something good", these custom resources could follow a process for inclusion in a minor / extraordinary release / extension.
Jose Costa Teixeira (Aug 15 2020 at 08:59):
I do not know the impact on tooling and processes, I was just thinking of the benefits and if this could be "proper".
Jose Costa Teixeira (Aug 15 2020 at 09:00):
For example I do not know how the balloting would work - ballot the IG? The minor release/extension?
Jose Costa Teixeira (Aug 15 2020 at 09:01):
I think this could allow the annealing process to be taken at the IG and implementers side, helping FHIR core to only get more robust stuff
John Moehrke (Aug 15 2020 at 14:04):
The whole problem "is" tooling and process. This is why we are trying to find a short-term path that is achievable given tooling and process.
Jose Costa Teixeira (Aug 15 2020 at 17:31):
I was pointing to a part of the process. I should have written "on tooling and adjacent processes"
Gino Canessa (Aug 16 2020 at 02:17):
I just don't see a great story for IGs defining resources. Tooling would have to include either run-time loading (which negates a lot of usefulness) or a build process before use (anything from Sass/LESS-style preprocessing to compiling libraries). Either way, adoption becomes harder instead of easier.
Grahame Grieve (Aug 16 2020 at 02:18):
right.
Grahame Grieve (Aug 16 2020 at 02:19):
After listening to this discussion I've concluded that all the answers are wrong. The one constraint we can change is the limitations on what we do in the specification. I'm considering what choices we have in the build
Vassil Peytchev (Aug 16 2020 at 22:16):
By "all the answers are wrong", do you mean options A-D in the Google document? I think there is basic alignment in what is the desired state, but there is lack of knowledge on how we can achieve that. Gino suggested a call to try and get a better alignment on what we all might be thinking, is this an option?
Grahame Grieve (Aug 16 2020 at 22:36):
There is basic alignment on what we want, but none of the options A-D are viable because of the constraints around the discussion. The most addressable constraint is around the way the build works, so I am considering how that might change.
No doubt the effect of the change will be to increase the amount of time I spend fighting with build infrastructure from about 40% to about 70%. And a similar increase to everybody else involved in the build
Grahame Grieve (Aug 16 2020 at 22:37):
And one of the least fun things I do is fight with the build infrastructure
Grahame Grieve (Aug 16 2020 at 22:43):
We can do a call right now but probably it won't be productive because I need to do analysis about how the build might be changed
Scott Fradkin (Aug 17 2020 at 03:06):
Would it be useful to have a call to share the knowledge about how the build works currently? This seems like one of those areas where more eyes would help, and it doesn't hurt to have a group of people who understand the build infrastructure and process.
Grahame Grieve (Aug 17 2020 at 03:50):
well, maybe. There's a group of us who understand it generally, and we all understand different parts deeply. I probably have the best overview, but I don't know it all
Lloyd McKenzie (Aug 23 2020 at 18:20):
If we presumed that tooling and review process could just work magically and do whatever we wanted, we'd still have an issue of adoption. The reality is that the market only moves so fast - and creating more frequent releases tends to result in a fragmented market, not faster adoption of new content. I think the community is best served by a relatively slow (2-3 years ish) pace of regular releases. Trying to push the implementer community to adopt faster than that is unlikely to be successful. (In reality, a decent portion of the community only moves every 4-5 years.)
What we can potentially do is push out minor releases that small communities can adopt that include new content - with the caveat that they won't ever likely be widely supported by the overall community. That may be sufficient for groups working around the fringes or operating within a relatively closed community that can afford to be on a release that doesn't have broad support. Improvements in tooling and release processes may help to make this more viable, though there's still a challenge with HL7's ballot and QA processes, particularly if we allow changes in more than limited parts of the spec.
Option D - even if it works - can't really be viable until after we pass a new full release of R5 that actually enables it as an option. So Option D isn't viable for the near term.
Michael Lawley (Aug 24 2020 at 04:44):
I'd push back a little on this frequency issue. When you have to swallow large infrequent changes it's a much bigger and harder job than consuming a steady stream of smaller changes. It also bakes "change" and management thereof into your workflow. What it doesn't help with are the occasional big changes, but these are like complexity - you can move it around but you can't make it go away.
Vassil Peytchev (Aug 24 2020 at 12:52):
If we presumed that tooling and review process could just work magically and do whatever we wanted, we'd still have an issue of adoption. The reality is that the market only moves so fast - and creating more frequent releases tends to result in a fragmented market, not faster adoption of new content.
With that presumption, you would still have major releases every 2-3 years, and rolling releases more frequently. Accommodating some amount of breakage in the rolling releases is likely a non-trivial task, but definitely worth it.
Lloyd McKenzie (Aug 24 2020 at 14:27):
I guess the question is what the interest/willingness is among the major implementers to move to a more frequent pace (and the corresponding requirement to support a larger number of versions simultaneously and handle inter-version conversion). The amount of breakage in the rolling releases would be somewhat random - sometimes minor, sometimes significant. Also, even with a 'rolling release' schedule, we'd be looking at roughly a 12-month cycle in terms of the time to get everything through the prep/ballot/reconciliation/QA/publication process.
John Moehrke (Aug 24 2020 at 14:31):
This impact is only on STU content, as normative content won't change... right? So the problem is self-limiting, and the problem diminishes as the overall specification matures. Right?
Josh Mandel (Aug 24 2020 at 14:32):
That would be true if the scope of the specification stopped expanding :-)
John Moehrke (Aug 24 2020 at 14:32):
that expansion would be new STU... I expect the rate of expansion to never get to zero, but to approach zero over time. right?
Josh Mandel (Aug 24 2020 at 15:53):
I guess the question is what the interest/willingness is in the major implementers to move to a more frequent pace (and the corresponding requirement to support a larger number of versions simultaneously
@Lloyd McKenzie I don't think this captures the whole question; it's also important to think about how well implementers can cope with branches/forks in the release tree, where some new features are released "on top of" a previous release, while other new features are developed along a "mainline" release cycle. This adds (at least) conceptual overhead for anyone thinking about which versions of FHIR to use/support/adopt.
Lloyd McKenzie (Aug 24 2020 at 16:44):
Normative content will change, just in more limited ways. It's certainly true that the rate of change for 'most' implementers will decline over time - and R4/R5 will probably be the peak of that 'change volume' for patient-centric clinical systems. (The peaks for public health, research, medication regulation and certain other areas will presumably come a bit later.)
Vassil Peytchev (Aug 24 2020 at 17:59):
This all makes certain assumptions:
requirement to support a larger number of versions simultaneously
Or is it a requirement for inter-version compatibility for "minor" versions?
cope with branches/forks in the release tree, where some new feature are released "on top of" a previous release, while other new features are developed along a "mainline" release cycle
One way to look at that is:
CI Build Publish
| |
| QA Process |
| <----------> |
| |\
| | \ R(X) branch
| | \____________
| | |
| | <-----------> |
| <----------> | | R(X).1
| | |
| | ...
| |
| |\
| | \ R(X+1) branch
| | \_________________
| | |
| | |
| | | R(X+1).1
| | |
| | ...
If we can apply sufficient tooling and automation to the process, I don't think it would be onerous... There may be many other approaches that are better/easier, just wanted to have a reference to be able to better discuss.
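[Editorial sketch] The branching model in the diagram above can be illustrated in a few lines of Python. The class and function names here are invented for illustration and are not part of any proposal; the point is just that a Publish trunk periodically cuts a release branch R(X), and each branch then tags its own point releases R(X).1, R(X).2, ... while newer branches proceed in parallel.

```python
from dataclasses import dataclass, field

# Rough model of the diagram above (names invented for illustration):
# the Publish trunk cuts a release branch per major version after QA,
# and each branch accumulates its own point-release tags.
@dataclass
class ReleaseBranch:
    major: int
    tags: list = field(default_factory=list)

    def tag_point_release(self) -> str:
        tag = f"R{self.major}.{len(self.tags) + 1}"
        self.tags.append(tag)
        return tag

publish_trunk = {}

def cut_release_branch(major: int) -> ReleaseBranch:
    # Branching off the Publish trunk once the QA process completes
    branch = ReleaseBranch(major)
    publish_trunk[major] = branch
    return branch

r4 = cut_release_branch(4)
r5 = cut_release_branch(5)
print(r4.tag_point_release())  # R4.1
print(r5.tag_point_release())  # R5.1
print(r4.tag_point_release())  # R4.2 - older branches keep releasing
```

The key property the diagram conveys is that point releases on R(X) and work toward R(X+1) are independent streams, which is exactly where the synchronization questions discussed below come from.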
Josh Mandel (Aug 24 2020 at 18:59):
This diagram is helpful for us to keep in mind. I want to get a handle on what kinds of changes people think would be appropriate to make within a given release branch. In general I think the world gets pretty confusing if new feature development like new resource families is happening on previous release branches rather than as part of the "upcoming" release. Bug fixes and technical corrections by all means.
Grahame Grieve (Aug 24 2020 at 19:18):
Congrats Vassil on building that diagram.
Grahame Grieve (Aug 24 2020 at 19:19):
but it's certainly not what I proposed
Grahame Grieve (Aug 24 2020 at 19:20):
I proposed 2 publish heads, one for publishing fast moving content, and one for publishing all content that moves in a slower time frame
Josh Mandel (Aug 24 2020 at 19:25):
How does that look different? Is your "fast moving" publication branch based off of a release? Is it branching from a common ancestor with the slow branch? Where/how do these heads jump after publication? A quick sketch would probably be useful.
Vassil Peytchev (Aug 24 2020 at 19:41):
if new feature development like new resource families is happening on previous release branches
Here is the short, incomplete, and probably fraught with problems flow:
- New features:
- new features start in CI build
- After some process/agreement, they move to Publish so that they will be part of the next major release
- Based on additional processes/criteria to determine if it is intended for, and appropriate to be part of the latest release, move to R(X) branch, to be part of R(X).n+1
- Bugs/technical corrections
- start in the current R(X) branch (latest tag R(X).n)
- move to Publish and CI Build (if appropriate)
- Tag R(X).n+1 when appropriate
This is most definitely not what Grahame had proposed, since I wasn't aware of the exact details of his proposal :-)
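[Editorial sketch] The two flows Vassil lists above can be summarized as routing rules; this toy Python function (all names invented for illustration) just encodes where each kind of change lands, in order.

```python
# Sketch of the two flows described above (names invented for illustration).
# New features: CI build -> Publish -> optionally the current R(X) branch.
# Bug fixes/technical corrections: R(X) branch -> Publish and CI build.
def route_change(kind: str, backport_to_current: bool = False) -> list:
    """Return the ordered list of places a change lands."""
    if kind == "feature":
        path = ["ci-build", "publish"]
        if backport_to_current:
            path.append("R(X) branch")  # ships in R(X).n+1
        return path
    if kind == "fix":
        return ["R(X) branch", "publish", "ci-build"]
    raise ValueError(f"unknown change kind: {kind}")

print(route_change("feature", backport_to_current=True))
print(route_change("fix"))
```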
Grahame Grieve (Aug 24 2020 at 19:51):
how do I draw a diagram like Vassil's quickly?
Vassil Peytchev (Aug 24 2020 at 19:54):
https://draw.io - there was nothing fast in that diagram :-) ASCII art, and years of experience in USENET...
Josh Mandel (Aug 24 2020 at 19:55):
Photo of a paper sketch works too!
Gino Canessa (Aug 24 2020 at 19:59):
Here's what I was thinking of, since diagrams seem to be easier :-)
image.png
Josh Mandel (Aug 24 2020 at 20:04):
In this @Gino Canessa diagram there is basically one "core" and one "supplemental" release considered current at any point in time, yes?
(Also to help spread knowledge: bonus points to anyone who provides a link to the tool or source for their diagrams, if applicable :-))
Vassil Peytchev (Aug 24 2020 at 20:05):
Isn't there a need to have a common part that is datatypes, API, etc.? Or is this synced on both branches all the time?
Grahame Grieve (Aug 24 2020 at 20:06):
that's closer, but it isn't clear that what's in the supplemental folds into Core when Core is published (nor would I call it core)
Grahame Grieve (Aug 24 2020 at 20:06):
data types + API only change on the core.
Josh Mandel (Aug 24 2020 at 20:07):
In the Gino diagram I don't think things do fold into core when core is published -- at least not if they are still low maturity. I think this may differentiate it from what you have in mind, Grahame.
Grahame Grieve (Aug 24 2020 at 20:08):
obviously. I suppose there might be things that we don't migrate; when I published both R3 and R4 I removed a few things in order to publish, since they weren't ready for publication. but I think we'd definitely migrate most things, and anything that already exists in core
Gino Canessa (Aug 24 2020 at 21:25):
The diagram is from sequencediagram.org =)
For what I'm proposing, any actual publication/package includes both Core + Supplemental. It's just that nothing in Core (FMM 4-5 or whatever criteria you want) changes*, so there's no balloting/review/etc. of that content.
*This should help with technical corrections too, since they can be done in a "Core" build and pushed into the next Supplemental release on schedule.
The packages could be kept separate just as easily (e.g., any R4 supplemental should theoretically work with any R4 Core), but I figured this type of change would be simpler (e.g., core build can look at everything or just look at whatever meets the criteria for Core).
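[Editorial sketch] Gino's packaging idea - every publication ships a frozen Core plus a faster-moving Supplemental - can be sketched as a merge where core always wins any overlap. The resource names and FMM values below are invented for illustration.

```python
# Toy sketch of the Core + Supplemental packaging idea described above.
# Resource names and FMM values are invented for illustration.
def assemble_release(core: dict, supplemental: dict) -> dict:
    """Merge the two artefact sets; the frozen core wins any overlap,
    since nothing in core changes between major releases."""
    merged = dict(supplemental)  # faster-moving, low-maturity content
    merged.update(core)          # frozen core overrides on conflict
    return merged

core = {"Patient": {"fmm": 5}, "Observation": {"fmm": 5}}
supplemental = {"EvidenceVariable": {"fmm": 1}, "Patient": {"fmm": 1}}
release = assemble_release(core, supplemental)
print(release["Patient"])  # the core definition of Patient wins
```

Keeping the packages separate instead (any R4 supplemental working with any R4 Core) would just mean publishing `core` and `supplemental` without the merge step.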
Gino Canessa (Aug 24 2020 at 21:33):
Issue is that things in supplemental may need revision moving to the next major release (e.g., normative changes break something)... but if nobody's working on the resource enough to keep it up to date, that probably says something as well.
Lloyd McKenzie (Aug 25 2020 at 02:23):
To clarify, @Grahame Grieve, is your intention that some of our fringe/low-maturity resources (and perhaps pages) would migrate into 'supplemental' and we would publish a base set of schemas for core plus a 'supplemental' schema for each release of supplemental? Over time, as things stabilized, we would migrate those things into the core spec. The namespace would be the same, it'd just be about where they'd publish. And because the source would be maintained separately, it'd be easier to manage balloting because you'd know exactly what could have changed. If you didn't implement any of the supplemental stuff, then new versions of that would be irrelevant. Toolsmiths would have to deal with two packages - the core package and the supplemental package. This would be similar to option D, but rather than having the 'extra' resources defined in IGs, we'd consolidate them all into the 'supplemental' spec.
Vassil - CI build isn't good enough. The medication folks, EBM folks and others want officially balloted content.
In general, I think I like this. Perhaps deciding you're at maturity 4+ means you move to core? That would also mean that once you're in core, you're committing to a slower pace for 'official' releases containing changes. (Which ought to be ok, seeing as there should be relatively high confidence in the usability of the resources by the time they hit that maturity.)
We'll still have snapshots of both core and supplemental. In general, releases of supplemental will need to depend on the most recent 'official' release of core. Once there's a new core, we'll have to do a hurry-up supplemental release that fixes all of the examples to align with the new core releases. My recommendation would be that core uses the current numbering convention, while the supplement uses something like 4A.0.1 - the 4 would indicate that it's based on 4.0.x core. The 'A' would indicate that it's the first supplemental release for that version of core. The remainder would be specific to the supplemental release. We could continue to have technical correction releases of the 4.0.x core as well as 4.1.x, 4.2.x releases as connectathon releases and ballot releases.
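[Editorial sketch] Lloyd's proposed numbering convention can be parsed mechanically. In this sketch the regex and field names are mine, not part of the proposal: core releases keep the usual "4.0.1" form, while a supplemental release inserts a letter after the major number, e.g. "4A.0.1" for the first supplemental series based on the 4.0.x core.

```python
import re

# Sketch of the numbering convention proposed above. Core releases look
# like "4.0.1"; supplemental releases add a letter after the major
# number, e.g. "4A.0.1". The regex and field names are illustrative only.
VERSION_RE = re.compile(r"^(\d+)([A-Z]?)\.(\d+)\.(\d+)$")

def parse_version(version: str) -> dict:
    m = VERSION_RE.match(version)
    if not m:
        raise ValueError(f"unrecognised version: {version}")
    major, supplement, minor, patch = m.groups()
    return {
        "core_major": int(major),          # which core it builds on
        "supplement": supplement or None,  # None for a core release
        "minor": int(minor),
        "patch": int(patch),
    }

print(parse_version("4.0.1"))   # a core technical correction
print(parse_version("4A.0.1"))  # first supplemental series on 4.0.x core
```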
Josh Mandel (Aug 25 2020 at 02:27):
I'm not following the suggestion about numbering ... Which makes me wonder whether I am following the idea about how releases would work at all.
Peter Bomberg (Aug 25 2020 at 03:51):
Lloyd, the naming/numbering convention 4A.0.1 made me wonder: do we mandate that supplemental builds are complete, i.e. 4B.0.1 has to include all aspects of the latest 4A.x build? I assume yes, as otherwise we are back to allowing forks. And you are correct, we have been restricted to only base large projects on officially balloted releases; however, reading the discussion, the supplemental builds would qualify.
Grahame, you made a comment that data types only change as part of core. While this makes sense, it means that any resource that introduces a new data type by definition has to be part of the "slower" stream, and if I understood Brian Alper's comment re: the EBM resources, I got the impression they needed to be part of the "faster" i.e. supplemental stream, yet they are introducing 2 data types to support their resources.
Unless there is a way to ensure that a supplemental build is (backward) compatible with the core it's based on, i.e. 4A.0.1 must accept all 4-based interactions, we may end up with regulators requiring parties exchanging information to use the supplemental builds, since as many people have stated the regulatory resources are not yet FMM4 (or whatever the cutoff will be) and thus not part of core.
Grahame Grieve (Aug 25 2020 at 03:54):
new data types would be allowed
Grahame Grieve (Aug 25 2020 at 04:04):
I think we need to think primarily in terms of contracts with the implementer.
- R5 ci-build+milestones: Focused on the next main release. Milestones have the latest of everything planned for full R5
- R4+ ci-build + milestones: All the same infrastructure as R4, but selected immature domains roll around faster, and are balloted as STU
If you're working with R4+, then you know that all the infrastructure works with R4 tools but the resource content is different in those domains.
Grahame Grieve (Aug 25 2020 at 04:04):
From the editor's viewpoint, which most of the discussion here has focused on, the key is to think in terms of 2 parallel streams with 2 trunks and 2 ci-builds
Vassil Peytchev (Aug 25 2020 at 04:09):
Lloyd, I am not suggesting that the CI build is sufficient for anything. In the above diagram, new things start at the CI build, and at some point they are considered ready, and move to Publish. At that point there are two choices - stay in Publish and be part of R(X+1) ballot and release, or move to the R(X).n branch to become part of the R(X).n+1 ballot and release.
Grahame Grieve (Aug 25 2020 at 06:20):
or both
Vassil Peytchev (Aug 25 2020 at 12:52):
If a change goes in R(X).n+1, it is already in Publish, so it will be there in R(X+1). With the caveat that between R(X).n+1 and R(X+1) there can be other changes affecting the same content.
Scott Fradkin (Aug 25 2020 at 15:02):
Lloyd mentions: "Vassil - CI build isn't good enough. The medication folks, EBM folks and others want officially balloted content." Which is interesting. Do they want faster balloted content that works against R4 or R5? Or both? Do we have any processes in place to ballot anything faster than the current process? I can understand having parallel official releases to push forward the "supplemental" content if we either are going to release it unballoted or have a faster balloting mechanism in place. Because otherwise why not just include the supplemental content into a regular release if it's just balloted with everything else?
Lloyd McKenzie (Aug 25 2020 at 15:12):
With the distinct R4 ci-build approach, how do we ensure:
- that none of the infrastructure changes - and in particular that there are not substantive changes to normative content?
- that whatever changes are made there are also reflected in the R5 release?
Scott Fradkin (Aug 25 2020 at 15:23):
Extra automated tooling and testing. Could be easier or harder, I suppose depending upon whether we're just using branches in the underlying source control. I don't have any knowledge of how the current process works. I'd love to learn and help out.
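[Editorial sketch] One form that "extra automated tooling and testing" could take is a build-time guard on the R4+ branch, failing the build if any normative definition diverges from the mainline. This is a hypothetical sketch: it assumes each branch can export its resource definitions as plain dicts, and it flattens normative status into a simple `status` field (real FHIR artefacts record this differently, via the standards-status extension).

```python
import hashlib
import json

# Hypothetical guard for Lloyd's first question: flag any normative
# definition on the branch whose substantive content differs from the
# mainline. Assumes a simplified export format where each definition
# carries a "status" field and an "elements" dict.
def fingerprint(definition: dict) -> str:
    body = json.dumps(definition.get("elements", {}), sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def diverged_normative(mainline: dict, branch: dict) -> list:
    """Names of normative resources whose substantive content changed."""
    diverged = []
    for name, definition in branch.items():
        if definition.get("status") != "normative":
            continue  # STU content is allowed to move on the branch
        base = mainline.get(name)
        if base is None or fingerprint(base) != fingerprint(definition):
            diverged.append(name)
    return diverged

mainline = {"Patient": {"status": "normative", "elements": {"name": "0..*"}}}
branch = {"Patient": {"status": "normative", "elements": {"name": "1..*"}}}
print(diverged_normative(mainline, branch))  # the changed Patient is flagged
```

The second question (reflecting branch changes back into R5) is the reverse check: the same comparison run against the R5 trunk, warning when a branch fix has no counterpart there.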
Vassil Peytchev (Aug 25 2020 at 16:16):
Do they want faster balloted content that works against R4 or R5? Or both?
AFAIK, they want faster balloted content. If it is faster to get to that state as R4.n, then the content will be later in R5 as well (it is not a fork). The later R5 ballot should allow changes to the R4.n content, however.
Do we have any processes in place to ballot anything faster than the current process?
No, we don't, and this is what this discussion is about.
I can understand having parallel official releases to push forward the "supplemental" content if we either are going to release it unballoted or have a faster balloting mechanism in place.
I don't think anyone is arguing for having an unballoted release. I also think that the faster balloting mechanism is not to enable "parallel official releases", as the releases will still be sequential, but to enable certain parts of the specification to get to ballot (and release) faster based on the latest current release, while still having major releases of everything every 2-3 years. Making sure everything is in sync is (one of) the hard part(s).
Alexander Henket (Sep 01 2020 at 10:16):
If we are to have supplemental builds, I really would expect any supplement to build off the previous one. Supplement 2 and further would all contain everything from the previous supplement and never conflict. I have experience with 5 supplements to the board game Carcassonne to tell me that the risk of conflicting supplements is real.
I would also expect version numbering for supplement 2 on R4 to be 4.2.x if it was integrated; that ship has sailed because 5.0.0 prereleases already occupy those numbers. So if the supplements are separate, they could follow any versioning scheme, just like other IGs.
But if supplements are like any other IG, and can define new datatypes, resources and such: what's keeping me from producing my Dutch resources for our financial concepts, or Belgium from theirs? Could Belgium and NL then create conflicting new resource names for example? Could Epic and Philips?
Somehow I'd rather see a single core moving at the pace and with the contents the community can handle and accept that the pace might not suit everyone equally.
Lloyd McKenzie (Sep 01 2020 at 15:52):
New supplements wouldn't necessarily be compatible with previous ones, but they would certainly represent 'new versions' of previous ones
Peter Bomberg (Sep 01 2020 at 16:53):
If we allow this degree of freedom, the reconciliation and alignment to produce the next major will be a real challenge. While I like the idea of supplemental builds, my fear is they might cause more harm than benefit if not well structured and managed.
Grahame Grieve (Sep 01 2020 at 20:40):
so the basic proposition with allowing resources to be defined by other parties is, at least in this case: "We can't manage the whole versioning issue, so we're giving up and kicking the can down the road. Good luck"
There may be justification in allowing other parties to define resources, but this argument isn't it; we need to solve it ourselves. I'm re-working the build to give us options we don't currently have, and then we'll revisit this subject
Brian Postlethwaite (Sep 15 2020 at 11:43):
This is sounding more like @Grahame Grieve 's custom resource proposal from way back.
David Hay (Sep 16 2020 at 06:50):
and I recall that we all pushed back on that...
Last updated: Apr 12 2022 at 19:14 UTC