FDA Quality Metrics – How to use it as a Business Case for QbD

Question: What was the most popular topic at pharmaceutical conferences over the last two years? Hint: ISPE, PDA, and IFPAC conferences have highlighted this topic so heavily that some organizations even created conferences dedicated to it.

Answer: FDA Quality Metrics.

Quality Metrics is a “quality” report card for companies, products, and facilities. Think of it as a scorecard similar to the U.S. News ranking of universities.

Why the excitement? When ranking is involved, everyone is interested because it breeds competition. As humans, we like to compare ourselves with our neighbors.

This is why the Quality Metrics initiative attracted such intense interest from the pharmaceutical industry.

As one who practices QbD, I was interested in how to ride the wave and leverage this attention.

My prediction in 2013 was that the Quality Metrics initiative would serve as a business justification for implementing QbD.

Did this come true? It’s too early to tell because we have not seen any data or FDA guidances.

The good news? Dr. Lawrence Yu (FDA) recently stated that the FDA is targeting a Quality Metrics guidance for 2015.

With the FDA reorganizing at this time, we’ll see how much progress is made this year.

Whatever the outcome, let’s focus on what we, the QbD scientists, can do within our circle of influence.

 

This article covers:

  1. Quality Metrics Updates (Jan. 2015)
  2. How we can take advantage of Quality Metrics
  3. Proposed Definitions of Quality Metrics

 

As a guerrilla QbD strategist, I look for FDA initiatives or business cases that can persuade management to invest in QbD. After all, the biggest hurdle for many organizations is simply getting started.

 

 1. What is New in Quality Metrics (Jan 2015)

What is new? According to ISPE, 18 companies and 44 sites signed up for the pilot evaluation. After numerous Quality Metrics workshops, the definitions (jump to section 3) have been further refined.

The 4 main metrics below remain unchanged from 2013.

  • Lot acceptance rate
  • Complaints rate (total and critical)
  • Confirmed OOS rate
  • US recall events (total and by class)

Then there are “other” candidate metrics categorized as:

Quantitative, Technology Specific and Additional Survey-based.

 

[Image: FDA Quality Metrics Quality Culture Survey (credit: ISPE)]

 Specifically, the metrics list is:

Quantitative metrics

 Winners (so far)

  • Lot acceptance rate
  • Complaints rate (total and critical)
  • Confirmed OOS rate
  • US recall events (total and by class)

 Runners-up

  • Stability Failure rate
  • Invalidated (unconfirmed) OOS rate
  • Right first time (Rework / Reprocessing) rate
  • APQR reviews completed on time
  • Recurring deviations rate
  • CAPA effectiveness rate

 Technology Specific metrics

Media fill (for sterile aseptic sites) failures

Environmental monitoring (for sterile aseptic sites)

Additional Survey-based metrics

Process capability

Quality culture

 

As you can see, not much has changed in the list of the metrics proposed.

Let’s focus on what applies to us, the QbD practitioners.

 

2. How to Take Advantage of Quality Metrics as a QbD Practitioner

Most of the Quality Metrics proposed above are manufacturing metrics and do not apply at the development phase, so they offer little direct connection to Quality by Design during drug development.

 Should we despair, then?

Not yet; there is still a glimmer of hope.

In a previous article, I pointed out that the ambiguous “Quality Culture” metric may offer that glimmer of hope.

 With the new updates, let’s look at what “Quality Culture” means in the Quality Metrics initiative.

 

A. Quality Culture Dimensions

The Quality Metrics working group issued a survey. The first component examined the perceived dimensions of “Quality Culture.”

There were 5 categories:

  • Integrity
  • Capabilities
  • Governance
  • Leadership
  • Mindset

Survey respondents rated the five categories on a scale of 1 (disagree) to 4 (agree) to prioritize the dimensions that apply to Quality Culture.

Here are the results:

[Image: FDA Quality Metrics Quality Culture survey results]

 

In rank order,

  1. Integrity
  2. Governance
  3. Mindset
  4. Capabilities
  5. Leadership

with Integrity and Governance coming out on top.

What does this mean? To the survey respondents, the most important dimensions of Quality Culture were Integrity and Governance.

How does this help Quality by Design initiatives?

Integrity relates to cGMP, especially given the industry’s ongoing data integrity and compliance issues. In QbD, cGMP compliance is assumed, so Integrity offers little direct leverage for a QbD case.

Let’s move on to Governance. Governance implies systems such as business processes and infrastructure. So if an organization’s goal is to achieve a “Quality Culture,” one could argue that a Quality by Design system is necessary. This means QbD tools, training, and SOPs.

Let’s move on. If you have a more persuasive argument, please share.

As you can see, I am trying to squeeze the most out of the Quality Culture section.

 

B. Quality Culture Survey Questions

The Quality Metrics working group issued a survey regarding Quality Culture.

Upon reviewing the survey, I found that only a few questions related to Quality by Design at the development phase.

From the survey, let’s extract the points we can leverage for QbD.

 1. I know which parameters of our products are particularly important for patients.

This is the only question directly related to Quality by Design. To understand which parameters are important to patients, one must perform a QbD risk assessment and then follow up with design space studies. The only QbD risk assessment tool that allows scientists to quantitatively correlate critical process parameters (CPPs) to the patient-centric quality target product profile (QTPP) is Lean QbD.

 

2. The training I have received clearly helps me to ensure quality in the end product

This one is too general. Perhaps we can promote QbD training.

3. All line workers are regularly involved in problem solving, troubleshooting and investigations.

Also more relevant to commercial manufacturing: this is about giving employees authority and empowerment.

 4. We recognize and celebrate both individual and group achievements in quality.

 This can be tied to a rewards system.

 5. Up-to-date quality metrics (e.g. defects, rejects, complaints) are posted and easily visible near each production line

This is Visual Management, an effective approach from Lean and the Toyota Production System.

6. Each line worker can explain what line quality information is tracked and why. 

If “line worker” is a development scientist or an assay analyst, perhaps we can relate this to design space and QbD risk assessment. In manufacturing, it’s more of a Critical-to-Quality concept.

7. We are regularly tracking variations in process parameters and using them to improve the processes.

Encourage the use of statistical process control (SPC) and control charts.
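As a minimal sketch of the SPC idea (using made-up assay values, not data from this article), the limits of an individuals (I) control chart can be computed from the average moving range:

```python
# Sketch: individuals (I) control chart limits from hypothetical assay results.
# The constant 2.66 = 3/d2, where d2 = 1.128 for a moving range of size 2.
def i_chart_limits(values):
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)  # average moving range
    center = sum(values) / len(values)                # process mean
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Hypothetical % assay results for consecutive lots
lcl, cl, ucl = i_chart_limits([99.8, 100.2, 100.1, 99.9, 100.4, 99.7])
```

Points falling outside `lcl`/`ucl` would signal a process shift worth investigating; the data and function name here are illustrative only.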

 8. Supervisors provide regular and sufficient support and coaching to line workers to help them improve quality.

This is more a matter of organizational culture.

9. We have daily quality metrics reviews and quality issues discussions on the shop floor. Management is on the floor several times a day, both for planned meetings and to observe and contribute to the daily activities.

This refers to the “daily huddle” meeting approach that is commonplace in many operations. I tried this with R&D teams and had mixed results (which I’ll share more about in the future).

10. Every line worker is aware of the biggest quality issues on their line and what is being done about them

Communication issue.

11. All employees see quality and compliance as their personal responsibility

KPI or performance metrics issue.

12. I am not afraid to bring quality issues to the management’s attention 

Transparency and no-blame culture.

13. People I work with do not exploit to their advantage inconsistencies or ‘grey areas’ in procedures

I’m unclear about this one.

14. All employees care about doing a good job and go the extra mile to ensure quality

Employee morale.

 

Conclusion

There was only one survey question directly related to Quality by Design:

 

“1. I know which parameters of our products are particularly important for patients”

 

To understand which parameters are important to the patients, one must perform a QbD Risk Assessment and follow up with Design Space studies.

Many already use the Lean QbD Risk Assessment Software, which allows scientists to quantitatively correlate critical process parameters (CPPs) to the patient-centric quality target product profile (QTPP).

 

Beyond this QbD takeaway, the rest were manufacturing- or operations-related questions. Granted, manufacturing metrics are highly correlated with how scientists developed the processes, but they are still lagging indicators. If Quality Metrics are to help Quality by Design initiatives effectively, we need to include development-stage metrics as well.

This is disappointing for QbD scientists and consultants. What else do you see?

 

I’d like to hear from you on how we could further leverage this Quality Metrics initiative.

 

 

Bonus:

For those who like to dig in, below are the specific definitions of Quality Metrics.

3. Definitions of Quality Metrics

 Per ISPE’s summary, here are the definitions proposed:

Lot acceptance rate =

Total lots released for shipping out of the total finally dispositioned lots for commercial use in the period 

▪ Total lots dispositioned = total number of lots for commercial use produced and/or packaged on site that went through final disposition during the period, i.e., were released for shipping or rejected (for destruction). Rejections should be counted as final dispositions regardless of the production stage at which the rejection occurred. Release means only final release for shipping. Excludes lots that were sent for rework or put on hold/quarantined in this period and hence were not finally dispositioned. Excludes lots that are not produced or packaged on site but are only released on behalf of CMOs.

▪ Total lots rejected = total number of full lots rejected for quality reasons. Rejected means intended for destruction or experimental use, not for rework or commercial use. Rejections should be counted regardless of the production stage at which the rejection occurred.

▪ Total lots released (“accepted”) = total lots dispositioned less total lots rejected
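Under these definitions the metric is simple arithmetic. A minimal sketch (the counts are made up for illustration, not from the proposal):

```python
# Sketch: lot acceptance rate per the proposed definition.
# accepted = dispositioned - rejected; rate = accepted / dispositioned
def lot_acceptance_rate(lots_dispositioned, lots_rejected):
    lots_released = lots_dispositioned - lots_rejected  # "accepted" lots
    return lots_released / lots_dispositioned

# Hypothetical reporting period: 250 lots dispositioned, 5 rejected
rate = lot_acceptance_rate(lots_dispositioned=250, lots_rejected=5)
```

The same released/dispositioned ratio pattern underlies most of the quantitative metrics below (RFT, media fill rate, etc.), with only the counting rules changing.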

 

Total complaints rate =

Total complaints received in the reporting period, related to the quality of products manufactured in the site, normalized by the number of packs released 

▪ Packs released = Total number of packs (final product form that leaves the plant, one level less than tertiary packs, most usually it is secondary packaging unit e.g. pack of blisters or bottle in carton pack) released in the period

▪ Total complaints = All complaints received in the reporting period related to the quality of products manufactured at the site, regardless of whether they were subsequently confirmed. All complaints received by the site should be counted, even if a complaint affects more than one site, or if the root cause analysis eventually attributes the issue to another site. Complaints related to lack of effect should be counted as well.

 

Critical complaints rate =

All critical complaints, normalized by the number of packs released 

Critical complaints = Complaints which may indicate a potential failure to meet product specifications, may impact product safety, and could lead to regulatory actions, up to and including product recalls. Critical (or expedited) complaints are identified upon intake, whether subsequently confirmed or not, based on the description provided by the complainant, and include, but may not be limited to:

i. Information concerning any incident that causes the drug product or its labelling to be mistaken for, or applied to, another article.

ii. Information concerning any bacteriological contamination, or any significant chemical, physical, or other change or deterioration in the distributed drug product, or any failure of one or more distributed batches of the drug product to meet the specification established for it in the application.

 

Confirmed OOS rate =

Total confirmed OOS (test results that fall outside the specifications or acceptance criteria), out of all lots dispositioned by the lab during the period 

▪ Total lots tested/dispositioned by the lab = total number of lots used for commercial production that are tested and dispositioned out of the lab in the period, i.e., have a QC pass or fail decision on them. Includes:

– Lots for release testing (counted as 1 lot, even if sampled separately for chemical and microbiological testing, or for in-process analytical testing in the lab or on the shop floor)

– Lots of incoming materials for analytical testing (count 1 per each analytically tested raw material and/or packaging material lot); includes water used as raw material

– Lots for stability testing in that period (counted as 1 per each timepoint and condition sampled per the approved stability protocol)

– Does not include environmental monitoring samples

▪ Confirmed OOS = all test results that fall outside the specifications or acceptance criteria established in drug applications, drug master files (DMFs), official compendia, or formularies, or applied by the manufacturer when there is no “official” monograph

 

Stability failure =

Total confirmed OOS related to stability testing 

Subset of the “Confirmed OOS rate” – based on stability lots tested and confirmed OOS related to stability only

 Invalidated (unconfirmed) OOS rate =

Total unconfirmed OOS, out of all lots tested during the period 

▪ Unconfirmed OOS = all OOS minus confirmed OOS (see the definition of confirmed OOS) 

 

Recall rate – US recall events (total and by class)

 Recall events = all US market recall events

▪ By class = all US market recall events, class I and II

▪ Recalled lots = Include lots recalled either voluntarily or by regulatory order (recall implies physical removal of product from field, not just a field action or correction). Include US market recalls only 

 

Right first time (RFT) rate (rework/reprocessing) =

Total lots that have not been through rework or reprocessing out of the total finally released lots for commercial use in the period 

Total lots released (“accepted”) = total lots dispositioned less total lots rejected (see the definition of Lot Acceptance Rate)

▪ Total lots reworked or reprocessed = all lots that have gone through rework (using an alternative process) or reprocessing (using the original process again) before final disposition in order to meet requirements for release. Only count rework or reprocessing necessitated by quality issues (for example, contract manufacturing sites should exclude rework due to customer order changes). If a lot was sent for rework and received a new lot number, it should still be counted as having undergone rework when finally dispositioned.

 

APQR reviews completed on time =

Number of Annual Product Quality Reviews in the period that were completed by the original due date, normalized by all products subject to APQR

▪ Products subject to APQR = Total number of products subject to Annual Product Quality Reviews – annual evaluations of the quality standards of each drug product to verify the consistency of the process and to highlight any trends in order to determine the need for changes in drug product specifications or manufacturing or control procedures (as required by CFR Sec. 211.180, General requirements, section (e) and ICH Q7, GMPs for APIs, section 2.5 or EU Guidelines for Good Manufacturing Practice for Medicinal Products for Human and Veterinary Use, Chapter 1, Pharmaceutical Quality System, section 1.10). Does not include the data packages that a site prepares to its customers when acting as a CMO

▪ Number of Annual Product Quality Reviews on time = completed by the original due date 

 

Recurring deviations rate =

Number of deviations that have reoccurred during the preceding 12 month period out of all closed deviations 

▪ Number of deviations = Any major or minor unplanned occurrence, problem, or undesirable incident or event representing a departure from approved processes or procedures, also includes OOS in manufacturing or laboratory or both. Please count only deviations that have been closed/resolved in the period. Deviations from one period, for which the investigation was closed in the next period, should be counted in the latter period.

▪ Recurring deviations = Number of deviations for which during the 12 month period preceding each deviation, at least one other deviation has occurred with the same root cause within the same process and/or work area. If redundant/duplicative processes or equipment exist, please consider deviation events common to the grouping/work center as recurring (still within the 12 month timeframe). For example, if a deviation for missing desiccant occurs twice, on two separate packaging lines with comparable equipment/systems, it should be counted as recurring (i.e. as 2 “same” deviations, rather than 1 “different” for each line) 

 

CAPA effectiveness rate =

Number of CAPAs effective out of all CAPAs with effectiveness check in the reporting period 

▪ CAPAs with effectiveness check = Number of CAPAs evaluated for effectiveness in the reporting period. All CAPAs should be counted, including those related to inspection or audit observations

▪ CAPAs effective = those evaluated CAPAs where the quality issue that was the subject of the CAPA was resolved and/or has not recurred, and there have been no unintended outcomes from the CAPA implementation

 

Media fill rate =

Number of media fills dispositioned as successful out of all media fills to support commercial products dispositioned during the period 

▪ Media fills = Total number of media fills (regardless of number of runs in each) to support commercial products that were dispositioned (as successful or failed) during the period. If the media fill was dispositioned as failure and a rerun was needed, that repeat is counted as a separate media fill. Includes all media fills – both for initial and periodic qualifications

▪ Successful media fills = All media fills that were not dispositioned as failures 

 

Environmental monitoring

– Lots with action limit excursions, normalized by all sterile dispositioned lots

– Lots rejected due to environmental monitoring reasons, normalized by all sterile dispositioned lots

Sterile dispositioned lots during the period (see definition for Lot acceptance rate)

▪ Lots with limit excursions = All sterile dispositioned lots during the period that had associated investigations related to exceeding environmental monitoring action limits. If a lot had more than one such investigation, please count only one per lot. If an investigation affected multiple lots, please count each lot separately. An action limit is an established microbial or airborne particle level that, when exceeded, should trigger an appropriate investigation and corrective action based on the investigation.

▪ Rejected lots due to environmental monitoring reasons = All sterile dispositioned lots during the period that were rejected for exceeding environmental monitoring action limits. Rejected means intended for destruction or experimental use, not for rework or commercial use. Rejections should be counted regardless of the production stage at which the rejection occurred.

 

Process capability questions 

▪ Do you measure that the process remains in a state of control (the validated state) during commercial manufacturing? (yes/no)

▪ For what % of products are these measures applied (based on your total number of products as reported in “Data by site”) – excluding packaging operations?

▪ If not applied on 100% of products, how do you choose/segregate/prioritize on which products to apply these metrics? (open question)
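Although the survey frames process capability as yes/no questions, capability is commonly quantified with the Cpk index, min((USL − μ)/3σ, (μ − LSL)/3σ). A minimal sketch with hypothetical assay data and specification limits (both are assumptions, not values from the proposal):

```python
import statistics

# Sketch: Cpk from hypothetical batch assay results against spec limits.
# Cpk = min((USL - mean) / 3*sigma, (mean - LSL) / 3*sigma)
def cpk(values, lsl, usl):
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample standard deviation
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical % assay results with assumed spec limits of 95.0-105.0
index = cpk([99.8, 100.2, 100.1, 99.9, 100.4, 99.7], lsl=95.0, usl=105.0)
```

A Cpk above roughly 1.33 is conventionally read as a capable process; this sketch ignores subgrouping and within/between-batch variance components that a real capability study would address.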

 

 
