IDMP Webinar Series - Process Optimization - Wim Cypers

IDMP Process Optimization – Webinar Transcript with Wim Cypers

Wim Cypers has long-standing experience in converting business requirements into application functionality and has consistently shown industry thought leadership through speaking engagements at several industry events. He has applied his pharmacist training to develop a keen understanding of how to solve complex business issues.

For several companies, the EMA's announcement that the DADI project will be the focus for regulatory data submissions meant that they needed to revisit and realign their IDMP implementation projects once again. But was this really needed, and if so, how can it be avoided going forward?

It cannot be denied that the approach needed to achieve compliance is shifting continuously, but at the same time it is important to realize that the underlying ISO IDMP structure remains the same. The benefits that implementing ISO IDMP brings to the life science industry also remain the same. As such, it is important that your implementation strategy is not focused solely on achieving compliance, but that it also keeps in mind the internal improvements in data quality and processes that implementing the ISO IDMP standard can bring. Achieving compliance should be an important part of your strategy, but your strategy should not be controlled by compliance alone.

IDMP Process Optimization – Webinar Background

During our webinar titled “Building a Flexible and Future-Proof IDMP Data & Process Management Strategy – Process Optimization”, Wim Cypers spoke specifically about how best to address the phased DADI approach and how to align it with the need to keep xEVMPD submissions going in parallel: what is needed internally to ensure that data is shared within your organization at the appropriate time, and how do you build processes that ensure data is not only shared but also stays in sync over time?

Why should you read the transcript?

  • DADI compliance process
  • Data collection and data governance processes
  • Using ISO IDMP for building efficient internal processes

A full transcript of Wim Cypers’ presentation is available to download and to read below.

IDMP Webinar Series – IDMP Data & Process Management Strategy

Get Instant Access To this 3 Part Webinar Series:

  • Part 1 – Data Management
  • Part 2 – Process Optimization
  • Part 3 – Cross Department Benefits

Presented by IDMP expert, Wim Cypers

Aligning Regulatory and Compliance Processes

I want to focus on the DADI process and what you will have to do according to what the EMA has currently published. As I said, I don’t want to look at history, but I do want to go back to one of the operational processes that you have today, or that you should have today.

xEVMPD Process

The reason I show this here is that when I talk about DADI in a second and what will happen there, you will have to realize that the process is going to be slightly different. Different in one way, but you must also realize that implementing DADI does not mean xEVMPD is going to go away. I think that is a very important lesson, or a very important takeaway at this point: it is not the case that if you implement DADI, xEVMPD will completely disappear. So, make sure that you keep that in mind when you look at your process and at the compliance aspect of your IDMP project.

I think the xEVMPD workflow is very clear. It basically follows the steps here: it is you who submits the information. The first few steps are all related to your regulatory submission process; they have nothing to do with xEVMPD. xEVMPD is a post-approval activity. As soon as you get approval, you basically have to submit an xEVMPD record to the Article 57 database, and it gets processed. There is a whole cycle here that I did not represent because I did not want to go to that level of detail, but you do get acknowledgements one, two and three back, or you get an MDM back. So even xEVMPD has a whole cycle related to it. It also depends on whether you do this process manually, by entering it on the web portal, or whether you use systems to submit it to a gateway.

But again, this process is not going to go away. Keep that in mind even if you already have something in place; the transition will be there. I would even say that if you don’t have an electronic process in place today, it is still worth considering, based on the previous slide with the central repository, investing in an xEVMPD solution, because as I said it does not go away either. But now let’s shift gears a little bit and look at the DADI project, or the eAF work that is being done now: what is going to happen, and what is the EMA requesting now?

[Slide: xEVMPD process]

DADI Process

The DADI process is going to be a little bit more complicated. Look at the bottom-left part of the diagram, at the PMS database. This database, by the way, is being filled out for you; you don’t have to do anything about it. That is something the authorities take care of.

[Slide: DADI process]

They have migrated the Article 57 database – they integrated it into the PMS database – and the data will go there. There are a few other data sources that feed into it, but you do not have to resubmit your PMS data. That is a good thing, because it means everything you have been doing for xEVMPD is not wasted. What you will need to do going forward starts when you are doing a submission. When you are doing the initial submission, the focus shifts from the post-approval activities with xEVMPD to pre-submission activities, because the eAF (electronic application form) is being adapted to make sure that it uses the PMS database.

Keep in mind – it is not explicitly mentioned here, but at first glance – that this process is only required for variations. So, when you do variations you will need to follow this for the initial submission, until further notice of course, while the xEVMPD process continues to populate the PMS database. What you will have to do is go into your eAF, into your DADI form, where you enter the information – something that is already available today – and it will pull the PMS data. You select your product and it says: this is the data that I know about your product. Keep in mind, we are only looking at this from a variations perspective, so it pulls the PMS data for the product that you submitted before. And then there are two possibilities: either that data is wrong, or you want to make a genuine change.

If the data in the PMS database is wrong – not if you want to make a change; for genuine lifecycle changes it will of course show you the old value and you provide the new one – but if the data is incorrect, you will still have to go and make a correction in the Article 57 database through an xEVMPD submission. Then the data in the PMS will be updated and your DADI process will be updated. So, this is also a quality check on the PMS data, to make sure that you are in alignment with it.

You update the DADI forms, you download the PDF together with the FHIR message, you include that in your eCTD sequence, and you do the submission. At the end of the submission you still have to do an xEVMPD submission into the Article 57 database. So, it’s a loop: xEVMPD and DADI are very closely linked. The key message here is that you get your data, your changes, in a structured format earlier in the process. Hopefully over time some things will go away as well. Hopefully, once they also cover initial submissions and start leveraging the FHIR message fully instead of relying on the xEVMPD message, this diagram will also change. So hopefully over time this will be completely modified – stay tuned to how it evolves.

There are still a few things which are unclear. For instance, with questions: the documentation describes that when you complete an eAF form you have to include it in your sequence, but it is not explicitly mentioned, as far as I have seen, whether you still have to do this with the eAF when you receive questions and must resubmit, or do a second or intermediate sequence. A second question mark is the FHIR message: why could that not update the PMS database directly? That is more about internal workings at the EMA, and maybe it is coming, because if whatever you enter in the FHIR message in the eAF form could automatically be uploaded into the PMS database, then technically speaking there is no need for xEVMPD anymore.

Although keep in mind that it is not 100% there, at least for iteration 1. The evolution will hopefully be that xEVMPD goes away, and that the more data you can enter at the very beginning of the process, the more also flows into your PMS database. The process for iteration one is very simple, very clean: you enter your data upfront, you put it in your eCTD, you follow your submission, and finally you do your xEVMPD submission. This is from a regulatory perspective.
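
To illustrate the kind of check this implies, here is a minimal sketch that compares the PMS-prefilled values against internal RIM records, so you can see where a correction via xEVMPD or a genuine change is needed. The field names and values are illustrative assumptions, not the actual PMS attribute names.

```python
# Sketch: compare PMS-prefilled product data against internal RIM records to
# decide which fields need a correction via xEVMPD and which are genuine
# lifecycle changes. Field names below are illustrative assumptions, not the
# actual PMS/SPOR attribute names.

def find_discrepancies(pms_record: dict, internal_record: dict) -> dict:
    """Return fields where the PMS pre-filled value differs from our RIM data."""
    discrepancies = {}
    for field, internal_value in internal_record.items():
        pms_value = pms_record.get(field)
        if pms_value != internal_value:
            discrepancies[field] = {"pms": pms_value, "internal": internal_value}
    return discrepancies


if __name__ == "__main__":
    pms_record = {"product_name": "Examplemab 50 mg", "marketing_status": "Authorised"}
    internal_record = {"product_name": "Examplemab 50 mg solution", "marketing_status": "Authorised"}

    for field, values in find_discrepancies(pms_record, internal_record).items():
        # A mismatch caused by an error in PMS still has to be corrected through
        # an xEVMPD submission; a genuine change goes into the variation itself.
        print(f"{field}: PMS has {values['pms']!r}, internal data has {values['internal']!r}")
```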

DADI and Your Internal Process (Opportunities)

I also want to look at it from an internal perspective and bring our central repository into the mix here. The one thing that I am a bit worried about – and it goes a bit further – is the fact that, as a company, you will still have to do double data entry.

[Slide: DADI and your internal process]

There is no way to upload your data from your central repository directly into the eAF form or into the PMS database. So, there are still two manual processes that you must follow: you will have to update your eAF form, and you will have to update your central repository. Of course, the data comes from your RIM system into the central repository, so you also update the data in your RIM system. You will therefore have a period where you must do double data entry.

Opportunities for Automation?

There are some opportunities I see short term to think about. Could you leverage that PDF and that FHIR message to upload the data directly into your central repository? If you look at the double data entry and it is not that much work – if you only have a variation every week or so – it could easily be manageable to live with the double data entry.

On the other hand, if you have too many of these, you might consider investing in and looking into whether the FHIR message that you download from the eAF could be imported into your central repository, or into your RIM system and from there into your central repository – that depends on your preferences. So essentially there could be some efficiency gains here by leveraging what the EMA provides. In the future you can also look at the other direction: hopefully there will be an opportunity to generate your own PDF and FHIR message.

So, step two: you enter the data in your RIM source or central repository, then you automatically generate your eAF or your FHIR message and include it without having to go to the web portal again. These future considerations may be thinking too far ahead, but I do know the EMA is probably also looking at these kinds of capabilities, to give you the opportunity to automate business processes at your end.
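
As an illustration of that kind of automation, here is a minimal sketch of extracting product names from the FHIR message downloaded from the eAF, so they could be staged for import into a central repository. It assumes the file is a FHIR Bundle in JSON containing MedicinalProductDefinition resources; the resource type and element names should be verified against the published EMA/FHIR specifications, and the file name is a placeholder.

```python
# Sketch: pull product names out of the FHIR message downloaded from the eAF so
# they can be staged for import into a central repository. Assumes the file is a
# FHIR Bundle in JSON containing MedicinalProductDefinition resources -- verify
# the actual resource types and element names against the EMA/FHIR specs.
import json


def extract_product_names(fhir_bundle_path: str) -> list[str]:
    with open(fhir_bundle_path, encoding="utf-8") as handle:
        bundle = json.load(handle)

    names = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") == "MedicinalProductDefinition":
            for name in resource.get("name", []):
                names.append(name.get("productName", ""))
    return names


if __name__ == "__main__":
    # "eaf_fhir_message.json" is a hypothetical file name for the downloaded message.
    for product_name in extract_product_names("eaf_fhir_message.json"):
        print(product_name)  # stage for review before loading into the repository
```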

The summary here, essentially on the compliance aspect, is that this is moving. You will have to adapt to it, and the processes and solutions you build around it will also have to be adaptable, in the same way that the EMA is evolving over time.

The EMA has clearly stated that it has taken an agile approach to this. They are only talking about variations going this way for now, not about initial submissions, so you can see that things will evolve over time. The key takeaway is to be compliant: understand what you need to do, but also make sure that whatever you put in place lets you leverage some of the data. Think about what you can bring to the table automatically on the one hand, and how you can make the solutions future-proof on the other, knowing again that the EMA will take an agile approach. So, variations are phase 1; who knows, maybe the next step will be “we’ve added this capability, make sure you can start using it.” So be compliant, but also be flexible in the solution that you build.

Aligning Your Internal Process Needs

In the second part of the presentation I really want to think about internal processes: internal needs that you might have, how you address them, and where you need to be careful.

Some of the questions that you need to ask upfront are, of course, the what, the when, the who, the quality that you need to build in, and how you are going to govern it. You need to think about all those aspects to build your end-to-end process. But at the beginning – and this is still partly about the data – you really need to think about what is needed, who can supply you with this information, and whether it is complete, accurate, and consistent.

Define the Ideal (“blue sky”) End-to-end Process

But now let’s focus on the process. Where do you start when you are doing your evaluation and looking at that process? I would recommend that the first thing you do is define what the needs are. Build your true end-to-end process, and this is where I would recommend building a “blue-sky” scenario: what do you need, what timeframe do you need it in, and which business cases and use cases do you want to support?

I would always recommend starting by thinking about a blue-sky or ideal solution. I know a lot of companies have started from the as-is and said, okay, let’s jump into it directly, see what we have, and build around that. But my experience is that if you start entirely from what you have, you will never reach an ideal scenario. You will never get to a stage where it can be efficient, because you get trapped in your current processes very quickly. It doesn’t mean that you have to build the full blue sky – it is still blue sky – but you want to aim for it, you want to reach it, and you can make compromises here and there. At least you start from the top layer going down, rather than from the bottom trying to get up.

That is really a very important aspect, I would say, when you are putting those particular processes in place: start from what the ideal scenario is, then see how you can get to that scenario, and if necessary you can scale it down.

Understand Your Own as-is Process Situation

The first thing that you need to do is look at your regulatory processes. What do you have in place? Does it exist? What KPIs do we have? Where does our data come from? Is everybody following the process?

It is more of an introspection that you have to do within your regulatory department. Are we using quality checks on our data? And, perhaps with some touches towards external data or external sources, you can of course start bringing those in. But first identify what is working within your regulatory organization and what is not, because keep in mind that regulatory will be the key supplier of the ISO IDMP data. So, look at it first and ask: what works, what doesn’t work, where could we make improvements? And when you have identified those improvements, you can ask the question “Who can help me with those improvements?”

Who can supply me with this data in the expected timeframe?

Am I doing manual data entry at the moment? Does another source give me the information that I then enter manually? Or do I have a gap in my data that could be supplied by another department? So, start asking the questions: who can fill the gaps, and who can make my process more efficient? Say I am doing manual entry from a document sent by clinical, for instance. Ask the question: can I automate this? Not just automate with NLP or a process where I turn the document back into data in my regulatory system, but go a step further and ask: where did clinical get this data from? Do they have a database from which they create a document that they give to me, so that I need to key in the data again?

If it is data to document to data, why don’t you just consider data to data? So, when you start looking at other departments, ask whether there is already a process in place. That is probably the first point you will look at: how do we exchange data now? I just gave you the example of clinical and regulatory and how you exchange that data, but you can do that for anything. Is there a process in place? Maybe there is even already a technology integration, which would be a step further than just a data exchange through documents. When you then start looking at other sources, I think it is also important to understand the process at the source. It goes a little bit further, in the sense of: don’t take the source for granted.
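
To make the “data to data” point concrete, here is a minimal sketch of reading a structured export from the source system instead of re-keying values from a document. The file name and column headers are hypothetical placeholders, not taken from any particular clinical system.

```python
# Sketch: "data to data" instead of "data to document to data".
# Instead of re-keying values from a PDF that clinical sends over, read the
# structured export their system already produces. The file name and column
# headers here are hypothetical placeholders.
import csv


def load_clinical_export(path: str) -> list[dict]:
    """Read a structured export from the clinical system, ready to map into RIM."""
    with open(path, newline="", encoding="utf-8") as handle:
        return list(csv.DictReader(handle))


if __name__ == "__main__":
    for row in load_clinical_export("clinical_trial_export.csv"):
        # Each row arrives as data, not as a document that has to be re-typed manually.
        print(row["study_id"], row["indication"])
```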

I know this is sometimes very difficult. If you go to another department and ask: is your data accurate? Is everything reliable? Are you getting everything into the system in time? Nobody wants to hear another department challenge them. But of course it must be done, because if you want to be compliant at the end, and the data from that source is going to go into your central repository, you must make sure that you get accurate data. So, you must ask those questions, regardless of how hard or painful they might be.

The other thing that you must make sure of – or that my experience has taught me – is to create awareness at the source. When you are building those processes and thinking about bringing this data across, with these transformations, into an end-to-end process – whether manual, paper-based, or technical – make sure that your source is already aware of what it is for.

The last point here is: do not try to force those changes; make sure that the source also gets benefits from it. You must tread carefully here. You don’t want to offend them by saying their data lacks quality, but at the same time you want to make sure that the solution and processes you put in place are future-proof. If they are not future-proof, you are not going to get the data that you want. You might get it once, you might get it for a couple of months, but it is not sustainable.

So that is really where it is also important to make sure they are aware of what IDMP is, what the benefits are, what benefits it could give them, and what data they can get from the central source of information that can make their business process easier. It sounds logical, and I’m sure everybody sitting there says it is all logical, but it is amazing how many companies forget it, because there is so much pressure in building that IDMP compliance solution that this is sometimes overlooked. So, make sure that you build it into your processes.

The last point that I want to make here is indeed: do not force changes. You must bring the benefits into place, but also think about downstream corrections that you can do. Think, for example, about translations or transformations where terminology alignment is needed – this is something you could also do downstream. You do not have to force people to start using an OMS or RMS library, dictionary, or code list; these are things that you can easily sort out downstream. But do not try to do everything downstream, because if you do, you put a lot of effort into clean-up in the middle and you lose some of the efficiencies and benefits. So again, when you are talking about external sources, think about what processes you have, how you can get them involved, what benefits you can give them, and how you can balance out the changes that you expect the source to make.

Quality – Is Data Complete, Accurate, Consistent?

The next step in the process is about quality, and this is probably also one of the biggest things that you need to think about. This step is a little bit difficult – is it a data check or a process check? – but of course it is all about data: about complete, accurate, and consistent data. I would also say that when you look at it, think about putting processes in place; just cleaning up the data once is not good enough. If you only have a process or a policy to clean up the data, that is a one-time effort.

You must have a long-term process in place that makes sure the errors you identified do not come back. So, think: we have a clean-up process, but what can we do to make sure it doesn’t happen again? You also need to put quality checks in place. You can put a process in place that ensures data quality, but make sure that you also have a process that checks that quality on a regular basis. If you put certain KPIs on the source with respect to timeliness, quality, accuracy, and completeness, make sure that you also put checks in place that verify them regularly once you are in production.

So, monitor the processes; I think that is very important, and it comes back to the compliance aspect. If the data is not correct at the source, think about where you can fix it: at the source, during transfer, at the destination (perhaps at your central repository), or by combining it with other sources so that whatever is missing in one source you can get from another. So, there are a lot of considerations to make here when it comes to data quality. But also realize, as I mentioned just now, that the further downstream you go in correcting those quality problems, the further you are away from your ideal solution. If you put those things in place as a temporary solution, make sure that you also have a roadmap or plan to remediate that over time. If it can’t happen today, yes, you can do it downstream, you can do it in your central data clean-up, but it cannot become the final process. There must be some sort of commitment to bring data quality corrections upstream, even though sometimes you must compromise in order to make compliance happen in time.
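
To make the idea of a repeatable check concrete, here is a minimal sketch. The field names, mandatory-field list, and allowed values are illustrative assumptions, not actual PMS or SPOR attribute names; the point is that the check is scripted so it can be rerun on a schedule rather than performed as a one-time clean-up.

```python
# Sketch: repeatable data quality checks instead of a one-time clean-up.
# The field names and rules are illustrative; the point is that the checks are
# scripted so they can run on a schedule, not just once.

REQUIRED_FIELDS = ["product_name", "authorised_pharmaceutical_form", "marketing_authorisation_number"]


def check_record(record: dict) -> list[str]:
    """Return a list of quality findings for one product record."""
    findings = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            findings.append(f"missing value for '{field}' (completeness)")
    if record.get("marketing_status") not in {"Authorised", "Withdrawn", "Suspended"}:
        findings.append("marketing_status not in the agreed code list (consistency)")
    return findings


if __name__ == "__main__":
    # Deliberately incomplete and inconsistent sample record.
    sample = {"product_name": "Examplemab 50 mg", "marketing_status": "Autorised"}
    for finding in check_record(sample):
        print(finding)
```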

How Can Technology Help with IDMP?

A lot of things will also happen through APIs and web services. But even when you bring the data in, there is still that outer layer – the technical layer you can see on the diagram. The inner layer is about the data remediations that might have to happen within your data repository.

You are never going to get the source data completely the way you want it. You will have to do some merging of data, and splitting of data in certain areas. You might have to do some mapping of code lists against OMS and RMS. So, it is very important that around that central repository you put data processes in place. You can do this either technically or as a written process, but you must make sure that you clearly identify: what data is coming in? Who are the data stewards for that data? Who is going to clean that data? How are we going to resolve those differences so that the data can work for other departments? These are very important questions, and a lot of people step away at this point and say, well, this is pure technology.
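
As an illustration of that mapping layer, here is a minimal sketch of aligning free-text source values to a controlled term list. The terms, synonyms, and identifiers are placeholders; in practice the controlled vocabularies would come from SPOR RMS and OMS, and unmapped values would be routed to a data steward.

```python
# Sketch: mapping layer around the central repository that aligns free-text
# source values to a controlled vocabulary (e.g. an RMS code list). The terms
# and identifiers below are placeholders; real lists come from SPOR RMS/OMS.

CODE_LIST = {
    "tablet": "CODE-0001",              # placeholder identifiers
    "film-coated tablet": "CODE-0002",
}

SYNONYMS = {
    "film coated tablet": "film-coated tablet",
    "tabl.": "tablet",
}


def map_to_code_list(source_value: str):
    """Return (controlled term, code), or None if the value needs data stewardship."""
    normalised = source_value.strip().lower()
    term = SYNONYMS.get(normalised, normalised)
    if term in CODE_LIST:
        return term, CODE_LIST[term]
    return None  # route to a data steward for manual resolution


if __name__ == "__main__":
    for raw in ["Film Coated Tablet", "capsule, hard"]:
        mapped = map_to_code_list(raw)
        print(raw, "->", mapped if mapped else "unmapped: escalate to data steward")
```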

There is still that business process layer in the middle, which will play an important role. Hopefully, over time, as you push quality back towards the source systems – as you move away from the central repository and get the data quality you want into the source systems – that intermediate layer around your central repository will diminish. I don’t think it will ever go away completely, but the responsibilities placed on it will diminish over time. Be aware, though, that in the beginning a lot of effort will need to go into this. Here, the experience somebody might have from other departments and other companies can be very beneficial; it can also help to engage an external party to investigate this and share some of the challenges that have been faced.

By the way, all of this applies to both inbound and outbound data. A lot of my examples are focused on inbound data into the central repository, but obviously you must look at the same things from the outbound view. If any department says, “I want to receive data from your central repository,” then all those same things need to happen: you might have to merge data, split data, or map data to make it work for the destination. So that element is also very important for internal as well as external data.

[Slide: how technology can help with IDMP]

How Can We Ensure that the Process is Future-Proof?

The initial process in your IDMP project will not go away, but over time it will transform. At the beginning it will be very much focused on data collection and data assessments; from data assessments it moves to processes – which processes you need to put in place to make sure everything happens. The integrations will happen, but the governance at the end is also a very important aspect.

How do you govern your data, and how do you make sure that what you have in place stays appropriate? How can you make sure that the effort you put in today is still accurate five or ten years from now, and that you can essentially keep on submitting your IDMP data to your regulators?

  • Get buy-in from the source department
  • Build not only the process but also a data governance process
  • Agree KPIs with teams and monitor them on a regular basis
  • Put technical monitoring in place
  • Run regular data quality checks
  • Define an escalation process

It is very important that you build not only the processes to bring the data in and move the data around, but also the governance: quality checks and controls. A minimal sketch of what that kind of monitoring could look like follows below.
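
Assuming KPIs such as a completeness percentage and a timeliness measure in days have been agreed with the source teams (the names and thresholds below are illustrative, not taken from the webinar), a scheduled check could flag breaches for the escalation process:

```python
# Sketch: monitor agreed KPIs on a regular basis and flag breaches for the
# escalation process. KPI names and thresholds are illustrative assumptions.

KPI_THRESHOLDS = {
    "records_complete_pct": 98.0,            # completeness of mandatory fields
    "avg_days_from_approval_to_entry": 5.0,  # timeliness of source updates
}


def evaluate_kpis(measured: dict) -> list[str]:
    """Return the KPIs that breach their agreed threshold."""
    breaches = []
    if measured.get("records_complete_pct", 0.0) < KPI_THRESHOLDS["records_complete_pct"]:
        breaches.append("records_complete_pct below target")
    if measured.get("avg_days_from_approval_to_entry", 0.0) > KPI_THRESHOLDS["avg_days_from_approval_to_entry"]:
        breaches.append("avg_days_from_approval_to_entry above target")
    return breaches


if __name__ == "__main__":
    this_month = {"records_complete_pct": 96.5, "avg_days_from_approval_to_entry": 7.2}
    for breach in evaluate_kpis(this_month):
        print("Escalate:", breach)  # feed into the agreed escalation process
```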

IDMP Process Optimization Conclusions

Regulatory process is mostly defined and needs to be followed

  • Follow regulatory process and timelines
  • However, build a solution which is flexible and extendable
  • Consider opportunities for efficiency gains on top of compliance

Internal processes

  • Do not force processes under the umbrella of “it’s compliance, so it’s mandatory”
  • Build a win/win solution that also considers the source
  • Ensure long-term governance for continued success

Celegence has built an IDMP transition and implementation framework focused not only on achieving compliance but also on ensuring additional benefits for the regulatory department and across your organization. Our team of technical and regulatory consultants can provide you with an in-depth analysis of your existing processes and technologies and create an accurate assessment of the changes your organization needs to ensure IDMP compliance. To learn more, reach out to us at info@celegence.com, contact us online, or read more about Celegence’s IDMP services.