
Quality Control in the Mass Spectrometry Proteomics Core: A Practical Primer


Published on Sep 11, 2024

Abstract

The past decade has seen widespread advances in quality control (QC) materials and software tools focused specifically on mass spectrometry–based proteomics, yet the rate of adoption is inconsistent. Despite the fundamental importance of QC, it typically falls behind learning new techniques, instruments, or software. Considering how important QC is in a core setting where data is generated for non–mass spectrometry experts and confidence in delivered results is paramount, we have created this quick-start guide focusing on off-the-shelf QC materials and relatively easy-to-use QC software. We hope that by providing a background on the different levels of QC, different materials and their uses, describing QC design options, and highlighting some current QC software, implementing QC in a core setting will be easier than ever. There continues to be development in each of these areas (such as new materials and software), and the current generation of QC for mass spectrometry–based proteomics is more than capable of conveying confidence in results as well as minimizing laboratory downtime by guiding experimental, technical, and analytical troubleshooting from sample to results.

ADDRESS CORRESPONDENCE TO: Benjamin A. Neely, Hollings Marine Laboratory, National Institute of Standards and Technology, 331 Fort Johnson Road, Charleston, SC 29412, USA (E-mail: [email protected]; Phone: 843-460-9841)

ADDRESS CORRESPONDENCE TO: Magnus Palmblad, Center for Proteomics and Metabolomics, Leiden University Medical Center, Postbus 9600, 2300 RC Leiden, The Netherlands (E-mail: [email protected]; Phone: +31(0)71 526 6969)

Competing Interests: The authors declare no competing interests.

Keywords: quality control, proteomics, software, materials, standards, best practices

INTRODUCTION

Ensuring the validity of results is important in all areas of science but is especially important to research cores generating and providing data and results to other scientists who must be confident in the measurements before drawing meaningful biological conclusions. Broadly, this is accomplished by using a quality system encompassing everything from the calibration of pipettes and the validation of methods to checking instrument performance and inspecting the final results. Quality control (QC) is a subset of quality systems, which for mass spectrometry cores includes everything from sample preparation to data acquisition and initial analysis. It is useful, arguably essential, to isolate and evaluate instrument performance before, during, and after a series of sample measurements (known as a run), independently of the experimental samples.[1],[2],[3] Isolating instrument performance from experimental processes is often referred to as system suitability testing (SST). On the other hand, QC materials of varying complexity and relevance can be processed and run in parallel with experimental samples to provide confidence in the total workflow from samples to results. Providing SST and QC performance to clients or users along with experimental results is essential, especially when results are negative or perceived as suboptimal. The goal of this review is to provide a “starter kit” of off-the-shelf materials, QC approaches, and QC software for any mass spectrometry proteomics core or research group to implement QC (skip to the Recommendations section) as well as to provide a reference to more in-depth QC concepts, which are helpful but not required reading.

Most mass spectrometry–based proteomics cores routinely run QC materials to ensure reliable and informative results. Such QC samples are used to verify instrument performance, validate and benchmark methods within and across laboratories, and even comply with regulatory standards.[4],[5] Especially in multi-laboratory consortia, large sample sets, or long-term data collection, QC is imperative to combine data across instruments, laboratories, and time.[6] Critically, QC samples allow separation of extrinsic measurement variance from intrinsic sample variability—something which is especially pertinent to laboratories serving a diverse group of users with varying experience levels. Historically, many mass spectrometry proteomics laboratories have manufactured their own QC samples, for example, by mixing selected compounds or digesting proteins from Escherichia coli or yeast whole-cell lysates. A number of mixtures suitable for QC purposes are now commercially available, including several developed in collaboration with Association of Biomolecular Resource Facilities (ABRF) research groups,[7],[8] aiding in reproducibility, comparability, and harmonization across experiments, instruments, laboratories, and time.

Common QC materials and analysis methods (including data processing) must be used so that data can be comparable across laboratories. We will focus on these QC standards and QC methods, with “standard” understood in the general sense rather than in the stricter definition from analytical chemistry implying certified mass fractions of a specific substance or substances that are absolutely quantified. There are QC materials used in proteomics with known mass fractions that can be used to evaluate quantitative accuracy and detection limits, but in general, these materials are comparative in nature. In an effort to simplify the discussion herein, the following designations will be used, modified from Bittremieux et al.[3] (Figure 1):

QC1: A known mixture of peptides or a digest of a single protein or set of proteins, typically used for SST or spiking into experimental or QC samples.

QC2: A whole-cell lysate or biofluid processed along with the experiment (used as a process QC) or a digest of a whole-cell lysate to be used for SST.

QC3: A spike of isotopically labeled peptides (like a QC1) into a complex whole-cell lysate or biofluid digest (like a QC2) to be used for SST.

QC4: A suite of 2 or more samples of whole-cell lysates or biofluids used within the experiment (as a QC) or predigested (SST) to quantify label-free or isotopically labeled quantification accuracy and precision. Each sample could contain different species mixtures, or be a biofluid, cell type, or environmental sample with known or predicted proteome-wide differences. This suite could likewise contain isotopically labeled peptides (QC1), similar to the construction of QC3. A variation of QC4 would be a suite of premixed isobarically labeled samples.

Figure 1

Four Levels of QC Material. In general, there are 4 types, or levels, of QC materials, described in the Introduction section and in detail throughout. A QC1 material is a mixture of peptides (shown in cyan and red), isotopically labeled or not; a QC2 material is a whole-cell digest or biofluid digest (shown as a green lined “cell” with its components); and a QC3 material is a mix of isotopically labeled peptides mixed into a more complex whole-cell digest or biofluid. Finally, a QC4 material is 2 or more samples of whole-cell lysate or biofluid digests (here, A and B) that have known mixtures of different species’ proteomes (shown as green versus pink cells, at ratios of 2:1 or 1:2). Note that this is only describing types of QC materials, while recommendations for implementing QC in experimental design can be found in the Methods for Implementing QC section.

These definitions reflect the types of material and their use cases, determining what QC information they can provide. A QC1 sample can be run on a short gradient independently of an experiment, can be designed to be shelf stable, and is relatively inexpensive. A QC2 sample would typically be analyzed with the same gradient length as the experimental samples and therefore be slightly more expensive in terms of instrument time but is excellent at checking experimental and instrument performance. Likewise, a QC3 sample can provide the same information as QC2 samples with the addition of quantitative detection limits of the method and/or retention time calibration and will be used nearly as frequently as QC2 in most laboratories. In contrast, QC4 requires multiple injections and therefore more instrument time and may not be run as regularly as QC1, QC2, or QC3 in most cores, despite potentially providing valuable insight into method accuracy and precision. The frequency of using a type of QC will vary between cores, depending on the types and scale of experiments that are analyzed. For instance, a core providing quantitative proteomics measurements requires higher levels of QC than a core that only provides protein identifications. A core performing metaproteomics analyses has more need for mixed proteome materials than a core that only analyzes human samples.

QC APPROACHES AND METHODS

How QC materials are used in a mass spectrometry proteomics core largely depends on the client/user needs and experimental designs. In a very basic sense, SST should be universal. All mass spectrometers need to be periodically calibrated, as mass measurements are affected by changes in the environment, notably temperature, in electronic components, in detectors, and in ion optics, especially as contamination builds up on internal components. System suitability testing is performed prior to the analysis of samples to confirm that the instrument is performing within instrument and experiment-specific operational margins, a concept formally known as statistical process control.[5] This concept relies on longitudinal QC measurements to establish the acceptable (inherent) variation of a system in order to detect unexpected deviations from this baseline that must be addressed before continuing. Normally, SSTs should encompass as much of the data acquisition path as possible. Since most mass spectrometry–based proteomics methods utilize liquid chromatography (LC), the SST should include the LC and not simply rely on a direct infusion of QC material onto the instrument. Likewise, the SST should be reasonably complex; otherwise, it may not identify unexpected deviations. For instance, modern instruments may still detect some QC1 within acceptable margins even when performing poorly, and therefore, some QC1 may not be fit for SST. Once the system is confirmed to be performing properly, the analysis of experimental samples can proceed with confidence.

Many software tools utilize retention time calibration to improve database searching and identification, and these retention time standards can also be used for QC.[9],[10] Retention time calibration standards[11] are typically added to each individual sample in LC–tandem mass spectrometry (LC-MS/MS) in order to align chromatograms[12] or normalize retention times.[13] These standards can also function as QC1 mixtures and be used for some QC tasks. Mass spectrometry calibration standards are less suited, however, as they are typically injected directly into the mass spectrometer. Conversely, both retention times[14] and mass spectra[15] can be aligned (retention time) or calibrated (m/z) without standards, using identified peptides that reveal drifts in either retention time or mass measurement. Typically, LC-MS/MS datasets are aligned in order to assist peak integration across runs and transfer identifications between the runs,[14] a practice also known as “match between runs.”[16] Normalizing or calibrating retention times takes this one step further and expresses retention times as a normalized elution time relative to the elution profiles of tryptic peptides, or estimates the fraction of organic solvent (e.g., acetonitrile) at elution, as recently demonstrated by Bouwmeester et al.[17] Knowledge of downstream analysis capabilities and whether a specific QC1 is required is essential to taking advantage of these techniques in the core.
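
As a minimal illustration of spike-in based retention time normalization, the sketch below fits a linear mapping from observed retention times of a set of reference peptides onto a common normalized scale; the peptide names, reference values, and observed times are hypothetical, and commercial kits ship their own reference values and software.

```python
# Minimal sketch of spike-in based retention time normalization (iRT-style).
# Peptide names and values below are illustrative, not values from any kit.
import numpy as np

# Reference scale for the spiked QC1 peptides (hypothetical values).
reference_scale = {"PEP_A": -20.0, "PEP_B": 0.0, "PEP_C": 40.0, "PEP_D": 100.0}

# Observed retention times (min) of the same peptides in one LC-MS/MS run.
observed_rt = {"PEP_A": 12.3, "PEP_B": 18.9, "PEP_C": 33.1, "PEP_D": 54.7}

# Fit a linear mapping from observed RT to the normalized scale.
x = np.array([observed_rt[p] for p in reference_scale])
y = np.array([reference_scale[p] for p in reference_scale])
slope, intercept = np.polyfit(x, y, 1)

def normalize_rt(rt_min: float) -> float:
    """Convert an observed retention time (min) to the normalized scale."""
    return slope * rt_min + intercept

# Any endogenous peptide's RT can now be expressed on the normalized scale,
# making runs with different gradients or columns directly comparable.
print(f"slope={slope:.3f}, intercept={intercept:.2f}")
print(f"peptide eluting at 25.0 min -> {normalize_rt(25.0):.1f} normalized units")
```

The residuals of such a fit can themselves serve as a simple chromatographic QC metric, since growing residuals or a drifting slope across runs point to changes in the separation.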

Many QC materials can be used as ground truth for benchmarking both data acquisition and data analysis methods and parameters with respect to peptide and protein identification or quantification. For instance, a known spiked-in peptide or protein, or whole proteomes mixed in known ratios, can be used to specifically evaluate if the acquired and analyzed data reflects the true changes in relative abundance in the sample. Data from some QC standards are also available in ProteomeXchange (Table 1).[18] This data can be used to QC and benchmark data analysis methods, determining whether an issue is due to the measurement or the data analysis. This can be especially useful since different peptide identification approaches affect protein quantification in quantitative proteomics experiments.[19] For instance, in metaproteomics, a large search space can drastically affect identification depth and biological conclusions.[20],[21] Using external datasets provides independent confidence of chosen data analysis workflows.

The different methods utilizing QC materials described above are vital when embarking on mass spectrometry–based proteomic analysis of hundreds or thousands of samples. QC samples of different levels should be distributed throughout the run to provide near real-time evaluation of experimental issues and instrument performance, including changes in chromatographic separation, mass measurement accuracy, and measurement sensitivity, and to provide an actionable go/no-go metric for the next sample injection. Here, QC2 materials run in parallel with the samples can be used to evaluate upstream preparation steps, for example, homogenization, enzymatic digestion, and clean-up errors, which are the primary source of technical variability in a proteomics workflow,[22] while QC3 samples serve as in-run SSTs, as they provide information on linearity and limit of quantitation. Any changes beyond acceptable margins can indicate that the sequence of analyses should be interrupted and problems addressed before continuing. The troubleshooting can itself be aided by the full range of QC materials. Even when measurements are running smoothly, the QC materials aid in quantifying experimental and technical variability and provide essential assurance to end-users seeking biological insights.

Lastly, it is important to note that QC samples can be used regardless of experimental design, but certain approaches require specific materials to provide desired quality metrics (discussed in detail in the QC Metrics section below). Historically, most QC metrics in proteomics, such as the numbers of peptide spectrum matches or identified proteins, have been based on bottom-up, data-dependent acquisition (DDA) and label-free analysis. However, in data-independent acquisition (DIA) and isotopically labeled experiments, modified or additional quality metrics are useful. In the future, DIA performance metrics may borrow more from DDA-centric metrics. One possibility might be that similar to how peptide spectral match identification rate can be reported from DDA data, in DIA data it may instead be the average number of peptides identified per DIA MS2 window per retention time window (since each window is deliberately chimeric and can yield multiple peptide matches).[23] Moreover, as multiple spectra are often used for quantitation in DIA, the number of data points per chromatographic peak is informative on the quality of the quantitative data. For some isobaric labeling methods, generating reporter ions at low m/z, such as iTRAQ[24] or tandem mass tags (TMT),[25] the resolving power and background in the reporter ion m/z region is particularly important for quantitative accuracy, whereas such measures have little relevance in label-free experiments. Though not discussed here, mass spectrometry–based metabolomics and lipidomics have many similarities in methods and approaches,[26],[27] albeit using different materials. Likewise, spatial methods such as MALDI[28] and DESI[29] mass spectrometry imaging require spatially informative QC samples and metrics,[30] but these deserve special consideration and are outside the scope of this primer. Finally, when working in an unfamiliar tissue or species, QC materials become even more valuable, since otherwise poor performance could be chalked up to data being the “first of its kind” or due to the FASTA used in searching, when in reality it could be underlying methodological or technical issues. Although using a tissue- and species-specific QC2 level material is optimal, this may not be possible, and in these cases, some sort of QC material should still be used regardless. Since some QC materials and metrics can be specific to the type of mass spectrometry analysis or experimental question, it is recommended that a QC plan be developed for each service that a mass spectrometry core offers. Specifically, this QC plan should define what QC materials and methods will be used and what information will be communicated to clients/users along with their results as well as a priori–defined threshold values that would lead to a reanalysis. Overall, there are many approaches to implementing QC, but we encourage cores to adopt some QC, since any QC is better than no QC.

QC MATERIALS

Two decades ago, there were no commercially available QC materials bespoke for LC-MS/MS–based proteomics, though QC1 single-protein (e.g., bovine serum albumin) tryptic digests were commonly used. In part to address this lacuna, the Proteomics Standards Research Group (sPRG) of the ABRF developed 2 multiprotein standard mixtures, UPS1 and UPS2, in collaboration with Sigma-Aldrich in 2006.[31] These standards are still commercially available in 2023. The sPRG has also developed a multispecies material that is available through JPT Peptide Technologies,[7] a phosphoproteomics standard that is commercialized by Thermo Fisher Scientific,[8] and a multispecies whole proteome mixture currently in development. These and other QC materials have different applications that in principle can be combined to provide a richer quality evaluation. Though laboratory-specific QC materials are still used, there is added value in using commercial materials continuously available to all laboratories regardless of instrumentation and geographic location. Broadly, QC materials fall into one or more of the categories defined above (QC1, QC2, QC3, and QC4) and will be discussed here in terms of increasing complexity (and listed in Table 1).

Peptides

The simplest and most accurately characterized QC materials are mixtures of synthetic peptides, including those that have been isotopically labeled to avoid interference with peptides from the samples (Table 1). Isotopically labeled or unlabeled, these peptide mixes fall under the QC1 category. The Pierce Peptide Retention Time Calibration Mixture, Biognosys iRT Kit, and Sigma-Aldrich MSRT1 standard are good examples, providing a mixture of unlabeled nonendogenous peptides that elute over a reverse phase gradient. These can be used to monitor chromatographic performance during the run and also for post-acquisition retention time normalization (or calibration onto a universal retention time scale). Mixtures of differentially isotopically labeled peptides are useful for checking quantitative accuracy and detection limits; examples include the Pierce LC-MS/MS System Suitability Standard (7 x 5 mixture), the Promega 6 x 5 LC-MS/MS Peptide Reference Mix, and the Sigma-Aldrich MSQC1 calibration standard for multiple-reaction monitoring (MRM) analyses. Due to the relative simplicity of these mixtures, they are normally spiked into a sample background or, in the case of retention time standards, into every sample. They can be spiked into digested proteome QC materials (QC2) to create a QC3 (described below). These peptide mixtures provide very specific QC information, which can easily be extracted in automated data analyses and often have associated QC software (described below). Depending on when in the process the peptides are added, they may provide different QC information on the experimental workflow. For instance, if the peptides are spiked into a sample following digestion but before peptide cleanup, the spike levels can be used to evaluate losses in the cleanup, which is critically important in samples with low protein concentration or single-cell analyses.[32] Spiking into the sample earlier covers more steps of the experiment but increases the ambiguity in the QC results when something goes wrong and may necessitate additional analyses during troubleshooting. Peptides should be spiked in as early as necessary and as late as possible to bracket experimental steps.

Proteins

As an alternative to peptide mixtures, but still relatively simple, purified undigested proteins can also be used for QC (Table 1). When undigested, these fall in the QC2 category, and unlike QC1 level peptides, they can be used to provide information on the efficiency of reduction, alkylation, and digestion. A good example is the QconCAT[33] method, in which a protein is made from a genetic construct, concatenating peptides into a protein that is added to samples to control for variability in reduction, alkylation, and digestion. The resulting peptides can be chosen from proteins targeted for quantitation by MRM and the entire protein inexpensively isotopically labeled for the resulting peptides to be used as internal standards. If the concatenated peptides span the chromatographic separation, they can also be used for retention time calibration and LC monitoring.[34],[35] Open-source software can be used to design these concatenated constructs.[36],[37] In addition to QconCATs, pure proteins with well-characterized modifications can be used to evaluate the enrichment and detection of phospho- or glycoproteins, such as β-casein for phosphoproteomics or ribonuclease B (high mannose type glycans) and bovine fetuin (complex, sialylated glycans) for glycoproteomics, though their lack of complexity limits their utility in providing more complex QC metrics. To generate a larger number of peptides than a single protein and provide a useful QC material for protein separation or fractionation methods, mixtures of a limited number of purified proteins have been designed as standards for proteomics and made commercially available. These include the abovementioned UPS1 and UPS2 standards of 48 proteins as well as the simpler UPS3 standard of 8 nonhuman proteins spanning 4 orders of magnitude in concentration. To evaluate protein inference in downstream data analysis workflows, mixtures of proteins with known shared and unique peptides can be used. Examples of such mixtures include those used in the 2016 ABRF Proteome Informatics Research Group study.[38],[39] These protein/proteome mixtures are more complex, and their analysis is less straightforward than simple peptide mixtures. Unless isotopically labeled, they likely also interfere with endogenous proteins in experimental samples and are therefore not suitable spikes. Their direct support by data analysis software is also less trivial than the simple peptide mixtures, as they do not necessarily contain a small set of peptides expected to be observed in each run. More examples are listed in Table 1.

Proteomes

With the latest generation of instrumentation, over 5,000 proteins can be detected in a matter of minutes[40],[41] and median protein coverage of 79% across the complete proteome has been reported[42] (reviewed in ref. [43]); using whole proteomes as QC is therefore becoming increasingly attractive and falls under the QC2 category. The benefit of using whole proteomes as QC materials is that the dynamic range of proteins in a proteome inherently covers many orders of magnitude, and the unattainable complete proteome coverage means that the performance scale should remain beyond instrument performance (that is, until vendors begin promising complete proteome coverage), allowing future comparisons of performance across system upgrades or when acquiring new mass spectrometers. Whole proteomes are also useful since they can track the complete experiment from digestion to data analysis and can be used with any protease. Having a material that includes extraction is valuable, as extraction can account for the majority of technical variability.[22] One notable proteome is the NIST RM 8461 (Human Liver for Proteomics), which is a relatively low-cost room temperature stable powder still requiring lysis and digestion.[44] There are other NIST materials containing proteins that may be similarly fit for purpose, such as yeast protein extract (RM 8323[45]), or that are not specifically intended for proteomics, like NIST SRM 1950 (Metabolites in Frozen Human Plasma), both of which are ready to digest. It is important to highlight that Promega provides paired digested and undigested proteomes, thus providing an additional opportunity for QC (Table 1).

Proteins and proteome digests

Predigested and purified protein (QC1) and proteome (QC2) digests are convenient as materials for LC-MS/MS QC, as they are quickly prepared for injection. Lyophilized digests are also shelf stable for long time periods and easy to distribute between laboratories. Commercially available digests of proteins and proteomes are listed in Table 1. Of specific note are whole-proteome digests that can be purchased along with the undigested version (e.g., Promega’s mass spectrometry–compatible protein extracts of yeast and human K562 cells and PolyQuant’s RePLiCal QconCAT protein[34]), thus adding an additional opportunity for checking experimental QC. The 2022 sPRG study mixed proteomes of human liver (NIST RM 8461), cow liver (NIST SRM 1577c), and trout tissue (NIST SRM 1947) in different ratios with plans to make both digested and lysate mixes available in the near future (note that these mixtures are no longer considered NIST materials, and their use in proteomics is beyond the original intent of the materials). Moreover, there are isotopically labeled proteomes such as the Pierce TMT11 yeast digest and the TKOpro9 and TKOpro16[46] standards created to evaluate relative quantitation using TMT.

Blended materials

Two or more well-defined materials for QC1 or QC2 can be combined to make bespoke but still well-defined materials for QC3 or QC4. Protein or proteome digests can be combined in known ratios for quantitative QC.[47],[48] Spiking peptides into a lysate or digest provides additional utility, for example, for simple chromatography and mass measurement quality control. The drawback of custom-mixed standards versus commercially available premixed standards is that they are only sold as their individual components; users must mix their own, meaning that data from these mixes varies across laboratories. However, some vendors now sell premixed QC1/QC2 materials, such as HeLa digests with PRTC Standard. Lastly, though we have described QC4 materials above (i.e., a suite of QC2 materials), there are currently no commercially available suites of materials for proteomics that fit this definition (aside from the sPRG suite currently in development). Arguably, the Pierce TMT11plex Yeast Digest provides similar utility to a true QC4; however, as it is prelabeled, it cannot be used to assess quantification across the complete experimental workflow. That said, there are many suites of biological materials available from NIST that could fit this need, albeit they are not explicitly listed as fit for purpose for proteomics. These include NIST SRM 1949 (Frozen Human Prenatal Serum), NIST Candidate RM 8231 (Frozen Human Plasma Suite for Metabolomics), NIST Candidate RM 8232 (Frozen Human Urine Suite for Metabolomics), and NIST Candidate RM 8462 (Frozen Human Liver Suite for Omics; “candidate” refers to materials near the end of the development cycle that are likely available soon, but not as of September 2024). We see QC4 suites as an opportunity for future development, which may include a demonstration of proteomics fit for purpose in existing suites or the creation and distribution of new suites.

Application-specific materials

Even though most of these QC materials are related to typical bottom-up proteomics measurements (including using isotopic labeling or DIA), there are materials for niche applications. Well-characterized monoclonal antibodies from Sigma-Aldrich and NIST have been used in large interlaboratory studies of top-down proteomics and glycosylation analysis.[49],[50] For metaproteomics, mock microbial communities such as ZymoBIOMICS Microbial Community Standards, ATCC whole-cell mixes, and those created in the Critical Assessment of MetaProteome Investigation studies (https://metaproteomics.org/campi/) can be used for QC and SST.[51] Note that ZymoBIOMICS and ATCC mock communities are available as cell lysates and DNA extracts, with the latter unsuitable for proteomics. Additional complex microbial materials, such as NIST Candidate RM 8048 (Human Fecal Material), are under development. In glycoproteomics, commercially available antibodies as well as many of the proteomes already mentioned can be used, either directly, after releasing glycans, or following enrichment of glycoproteins or glycopeptides. Glycan spectra from NIST RM 8671 (NISTmAb, Humanized IgG1κ Monoclonal Antibody) and the NIST SRM 1953 (Organic Contaminants in Non-Fortified Human Milk) are already in the NIST mass spectral reference libraries, helping provide a benchmark for these specific materials. Commercially available purified glycan standards include a long list of AdvanceBio N-glycans individually available from Agilent and a set of 13 abundant and commonly identified N-linked IgG glycans in the NIST SRM 3655 (Glycans in Solution).

Table 1

Commercially Available QC Materials

Note that this list is not exhaustive and is specific to LC/MS-based proteomics (as opposed to MALDI or other mass spectrometric techniques). The “public dataset” column refers to ProteomeXchange, if available. “Labeled” refers to isotopically (including isobarically) labeled.

| Type | Material Name | QC level(s) | Labeled | Supplier | Product no. | Description/application | Public datasets |
|---|---|---|---|---|---|---|---|
| peptides | Pierce Peptide Retention Time Calibration (PRTC) Mixture | QC1 | yes | Thermo Fisher Scientific | 88321 | 15 labeled peptides RT calibration | PXD034525, PXD016573, PXD001731 |
| peptides | MS RT Calibration Mix | QC1 | no | Sigma | MSRT1 | 14 peptides for RT cal | PXD008983, PXD004712 |
| peptides | MS Qual/Quant QC Mix | QC1 | both | Sigma | MSQC1 | 6 protein digest spiked with labeled peptides (2 to 3 per protein); LC-MS/MS SST, quant, LOD determination | MSV000089339 |
| peptides | iRT kit | QC1 | no | Biognosys | | 11 peptides for RT cal and QC | PXD015026, PXD043613, PXD017217 |
| peptides | 6 x 5 LC-MS/MS Peptide Reference Mix [52] | QC1 | yes | Promega | V749A | 6 sets of 5 isotopologues; RT calibration, LC-MS/MS SST, quant, LOD determination | |
| peptides | Pierce 6 Protein Digest, equimolar, LC-MS grade | QC1 | no | Thermo | 88342 | Digest of 6 proteins at equimolar levels | |
| peptides | Pierce LC-MS/MS System Suitability Standard (7 x 5 mixture) | QC1 | yes | Thermo Fisher Scientific | A40010 | 7 sets of 5 isotopologues; RT calibration, LC-MS/MS SST, quant, LOD determination | |
| peptides | HSA Peptide Standard Mix Kit | QC1 | no | Agilent | G2455-85001 | Human serum albumin peptides | PASSEL (PASS01166) |
| peptides | 10-peptide standard | QC1 | no | Agilent | 5190-0583 | | |
| peptides | MS PhosphoMix 1, 2, and 3 (light and heavy) | QC1 | yes | Sigma | MSP1L, MSP1H, MSP2L, MSP2H, MSP3L, MSP3H | All 3 mixes contain peptides of the same sequences with different sites of phosphorylation | PXD025754 |
| peptides | SureQuant Phosphopeptide Suitability Standards | QC1 | yes | Thermo | A51745 | 131 isotopically labeled phosphopeptides | MSV000090564 |
| peptides | SureQuant AKT Pathway (Phospho) Multiplex Panel (Absolute Quantitation) | QC1 | yes | Thermo | A40084 | 30 unique peptides from 10 AKT-mTOR signaling pathway target proteins | |
| peptides | SureQuant AKT Pathway Multiplex Panel (Relative Quantitation) | QC1 | yes | Thermo | A40080 | 30 unique peptides from 10 AKT-mTOR signaling pathway target proteins | PXD019426 |
| peptides | Waters MassPREP Digestion Standard Mix 1 and Mix 2 | QC1 | no | Waters | 186002865, 186002866 | Tryptic digest of 4 proteins mixed at known levels | PXD040205 |
| peptides | Retention Time Standardization Kit (PROCAL) | QC1 | no | JPT | RTK-1-10pmol | Pool of 40 non-naturally occurring endecamers for RT normalization | PXD006832 |
| peptides | JPT’s SpikeMix ABRF (cross-species standard) | QC1 | yes | JPT | SPT-ABRF-POOL-L-1pm | 1,000 heavy labeled proteotypic peptides for proteins conserved between rat, mouse, and human | PXD017385 |
| peptides/protein | MS QCAL Peptide Mix [35] | QC1/QC2 | QconCAT derived 22 peptides | Sigma, PolyQuant | MSQC2 (Sigma), PQ-CS-5370, PQ-CS-5371 (PolyQuant) | QconCAT derived 22 peptides (digested or undigested); RT monitoring, LOD | PXD023654 |
| peptides/protein | RePLiCal | QC1 | yes | Polyquant | PQ-CS-1560, PQ-CS-1561, PQ-CS-2560, PQ-CS-2561 | 27 lysine-terminating calibrant peptides from a QconCAT construct. The peptides do not contain any M, W, C, or N-terminal Q. | PXD017029, PXD012317 |
| peptides | Mass Spec-Compatible Yeast and Human Protein Extracts | QC2 | no | Promega | V7461, V6951 | Yeast or Human K562 digests; SST | |
| peptides | MassPREP E. coli Digest Standard | QC2 | no | Waters | 186003196 | E. coli digest for SST or in-run QC | |
| peptides | Pierce HeLa Protein Digest Standard | QC2 | no | Thermo | 88328 | HeLa digest for SST or in-run QC | |
| peptides | Pierce HeLa Digest/PRTC Standard | QC3 | both | Thermo | A47997 | HeLa digest + PRTC peptides for SST or in-run QC | PXD031426 |
| peptides | Pierce TMT11plex Yeast Digest Standard | QC2/QC3/QC4 | yes | Thermo | A40939 | TMT-labeled yeast digest for SST or in-run QC | |
| peptides | Pierce Yeast Digest Standard | QC2 | no | Thermo | A47951 | Yeast digest for SST or in-run QC | |
| peptides | NIST RM 8321—Peptide Mixture for Proteomics | QC1 | no | NIST | RM 8321 | Endogenous human peptide mixture | |
| protein | NIST RM 8671—NISTmAb, Humanized IgG1κ Monoclonal Antibody | QC2 | no | NIST | RM 8671 | mAb | |
| protein | SILuMAB Stable Isotope Labeled Universal Monoclonal Antibody Standard human | QC2 | yes | Sigma | MSQC3 | Internal standard for universal MAB sequences | |
| protein | NIST SRM 2926/2927—Recombinant Human Insulin-like Growth Factor 1 (15N-Labeled or not) | QC2 | no/yes | NIST | SRM 2926, SRM 2927 | | |
| protein mix | Protein Mass Spectrometry Calibration Standard (UPS1) | QC2 | no | Sigma | UPS1-1KT | 5 pmol each of 48 human proteins | PXD001819, PXD002370 |
| protein mix | Proteomics Dynamic Range Standard Set (UPS2) | QC2 | no | Sigma-Aldrich | UPS2-1SET | 6 mixtures of 8 proteins, each ranging from 50 pmol to 500 amol; LC-MS/MS SST, LFQ (iBAQ) | PXD000331 |
| protein mix | Universal Proteomics Standard 3 (UPS3) | QC2 | no | Sigma-Aldrich | UPS3-1VL | 8 nonhuman proteins, 100 to 0.1 pmol | PXD007039 |
| protein mix | Pierce Intact Protein Standard Mix | QC2 | no | Thermo | A33527 | | |
| protein mix | Protein Standard Mix 15 to 600 kDa | QC2 | no | Supelco | 69385 | 5 SEC markers | |
| cells/lysate | Mass Spec-Compatible Yeast and Human Protein Extracts | QC2 | no | Promega | V7341, V6941 | Yeast or Human K562 “intact extract” | |
| cells/lysate | NIST RM 8461—Human Liver for Proteomics | QC2 | no | NIST | RM 8461 | Stable freeze-dried powder, digestion method check | PXD009021, PXD013608 |
| cells/lysate | NIST RM 8323—Yeast Protein Extract | QC2 | no | NIST | RM 8323 | Yeast extract | described in ref. [45] |
| cells/lysate | NIST RGTM 10197—NISTCHO Test Material, Clonal CHO-K1 Cell Line Producing NISTmAb | QC2 | no | NIST | RGTM 10197 | CHO line expressing NIST mAb (see NIST RM 8671). This has not been demonstrated fit for purpose for proteomics, but NIST mAb has been extensively characterized. | |
| cells/lysate | ZymoBIOMICS Microbial Community Standards | QC2 | no | Zymo | D6300, D6310, D6320, D6321, D6323, D6331 | Whole-cell mock communities, mixes, or materials | PXD015500 |
| cells/lysate | ATCC whole-cell mixes | QC2 | no | ATCC | MSA-2002, MSA-2003, MSA-2004, MSA-2005, MSA-2006, MSA-2007, MSA-2008, MSA-2010, MSA-2014 | Whole-cell mock bacterial and virus communities, mixes, or materials | |

  • Single proteins discussed in the text are not listed in Table 1 since there are no specific vendor suggestions (e.g., β-casein, ribonuclease B, and bovine fetuin).

  • NIST offers numerous other materials that include human, animal, and plant tissues, human biofluids, etc., but are not specifically fit for purpose for proteomics. These are available directly from NIST at https://shop.nist.gov.

  • JPT Peptide Technologies (and other similar companies) manufacture reasonably complex (many 100s) mixtures of isotopically labeled peptides for many species and cell lines, which are not explicitly QC materials or standards but could be used similar to some mentioned above.

  • Biological specimen companies, such as Golden West Diagnostics and BioIVT, offer extensive catalogs (or services to generate) human sample matrices that could be used as QC2 (or even QC4 suites) material.

BENCHMARK DATA

Public datasets from commercially available materials are valuable complements to the physical materials, allowing direct comparison and benchmarking of experimental methods, instruments, and data analysis workflows using ground truth data. They also allow the construction of spectral libraries from externally acquired data that can be used to quickly identify peptides from one’s own data. Datasets from the ABRF RG studies, including those from sPRG studies resulting in commercially available standards, have seen frequent reuse by researchers developing new algorithms and software.[53],[54],[55],[56],[57],[58],[59],[60],[61],[62],[63],[64],[65],[66],[67],[68],[69],[70],[71],[72],[73],[74] Ground-truth datasets generated from no longer available (but in principle, reproducible) material can be used in core laboratories to evaluate new software, software updates, new processing hardware, or even just new search parameters and databases without having to analyze additional QC samples. A versatile dataset for checking and comparing data analysis workflows is ProteomeXchange dataset PXD007683 from a comparison of protein quantification methods by O’Connell et al.[75] Ideally, the methods and instrumentation that were used to generate the QC dataset are very similar to those in one’s own laboratory. Datasets from well-defined metaproteomes are also available, even if the material is not, such as PXD005776, PXD005759, and PXD005728 from the 24 bacterial species in the Mix24X reference[76] or the PXD034795 dataset from the 2020 Proteome Informatics Research Group metaproteomics study.[77]

METHODS FOR IMPLEMENTING QC

As described above, the QC level of a material dictates how and when it may be used, which in turn determines what information it can provide. Similarly, the point at which a QC material is used establishes when QC metrics are captured, which is essential both before and during an experiment. Specific approaches to using QC materials within an experiment itself are important to discuss. In addition to SST before the experimental run has started, QC materials can be used within runs to monitor changes in instrument performance. As a general heuristic, 1 out of every 20 samples should include some sort of QC check (i.e., 5% of samples are QC samples). This simple guideline does not replace the importance of proper experimental design and sample randomization. The QC samples may reveal batch effects,[4],[78] but they will not automatically correct them. Since replicates will be mentioned, for this paper we define them as follows: an experimental replicate is a sample that was processed and analyzed independently more than once; a technical replicate is a sample that was processed once but measured repeatedly on the instrument. More detailed descriptions and discussions of experimental design and QC concepts such as block designs can be found elsewhere,[3],[4],[78],[79] as can a turn-key tutorial for running QC in label-free quantitative proteomics experiments[80] or an implementation in isobaric tagging experiments.[81] The different commercially available QC materials that have been described in the sections above may be referred to as external QC (EQC), to differentiate this type of QC from a study pool QC (SPQC). An SPQC would be a QC2 level material but is laboratory (and experiment) specific and not commercially available. In general, the use of both EQC and SPQC samples is important. In a multi–well plate format, the EQC and SPQC can be spaced between samples,[3],[79] related to upstream process batching if needed (though these same concepts can be applied within a normal sample queue as well as a plate format). Within plates it is also important to consider the location of samples to capture effects within the plate, such as the unequal effect of heating during protein digestion.[82] Finally, when analyzing multiple plates, the same concepts of block design and normalization apply,[4],[78] and the use of QC materials can be essential to implementing them, especially when the desire is to compare to historical datasets or those collected in other laboratories.[6]
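
As one possible starting point for the 1-in-20 heuristic above, the sketch below builds a randomized run order with a QC injection bracketing each block of experimental samples; the sample names, QC labels (EQC/SPQC), and block size are placeholders, and a real design should also account for SST injections, blanks, and upstream processing batches.

```python
# Minimal sketch of a run order interleaving QC injections with randomized
# experimental samples, following the ~1-in-20 heuristic described above.
# Sample and QC names are placeholders.
import random

def build_run_order(samples, qc_names=("EQC", "SPQC"), block_size=20, seed=42):
    """Randomize samples and insert a QC injection before each block of samples,
    alternating between the provided QC types, plus a closing QC injection."""
    rng = random.Random(seed)          # fixed seed so the order is reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)

    run_order, qc_idx = [], 0
    for i in range(0, len(shuffled), block_size):
        run_order.append(qc_names[qc_idx % len(qc_names)])   # QC before each block
        qc_idx += 1
        run_order.extend(shuffled[i:i + block_size])
    run_order.append(qc_names[qc_idx % len(qc_names)])        # QC at the end of the run
    return run_order

samples = [f"sample_{i:03d}" for i in range(1, 61)]   # 60 hypothetical samples
for injection in build_run_order(samples):
    print(injection)
```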

QC METRICS

Up to this point, we have discussed why QC is important, what materials are available, and how to incorporate them into experiments. Foremost, QC attempts to quantify instrument and experimental performance to give confidence in the validity, uncertainty, and accuracy of the resulting data. Secondary to this, QC metrics can be used to troubleshoot issues in sample preparation, data acquisition, and data analysis. We will not go into specifics on what a deviation of QC metrics can indicate, but bracketing experimental and analytical steps with different QCs will aid in problem isolation. Collecting longitudinal QC data is also essential for defining acceptable norms for technical variation (i.e., statistical process control).

Coarse measures of system performance, such as “number of peptide spectral matches” of a known sample on a known gradient, are considered insufficient, since such metrics can mask underlying issues with the system, leading to false confidence, or simply not provide any actionable troubleshooting if the metric is too low. Therefore, there is a desire to provide more granular metrics, which include identity-free (ID-free) and ID-based metrics, covering protein extraction, digestion efficiency, chromatographic separation, ion mobility, MS, MS/MS, and peptide and protein quantification. The ID-free metrics are those metrics that can be computed independently of identifying any peptides and cover aspects such as column pressure, electrospray stability, chromatographic peak width, overall signal (total ion chromatograms), mass measurement precision, and charge state distribution.[83] Through direct comparisons of tandem mass spectra, the similarity between LC-MS/MS datasets can also be evaluated and used for experimental QC.[84] ID-based metrics are those reliant on peptide identification and include chromatographic reproducibility, mass measurement accuracy, collision energy, peptide identification rate, and most quantitative QC metrics. This general dichotomy is useful to separate QC metrics into those that can be calculated immediately from the data, making few or no assumptions, from those that first require running a data analysis pipeline, making many assumptions, in order to calculate the QC metrics. The former are more primitive, although their calculation is highly reliable and independent of search space (i.e., FASTA) or search algorithm. The latter are likely more relevant for data interpretation but dependent on assumptions being correct, for example, a calculation of the efficiency of enzymatic digestion will not yield meaningful results if no missed cleavages were allowed in the database search. This is a trivial example, but data analysis methods also influence ID-based QC metrics in less conspicuous ways; for example, the mass measurement error tolerance in the database search determines which set of peptides are identified and, in turn, which peptides are used to calculate mass measurement accuracy.
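
To make the dichotomy concrete, the sketch below computes a few ID-free metrics (MS1/MS2 scan counts, median MS1 total ion current, and the precursor charge state distribution) directly from an mzML file using the pyteomics library; the file name is a placeholder, and the spectrum keys follow standard mzML controlled vocabulary terms but may need adjusting for a given converter's output.

```python
# Minimal sketch of a few ID-free QC metrics computed directly from an mzML file
# using pyteomics (https://pyteomics.readthedocs.io). The file path is a placeholder.
from collections import Counter
from statistics import median
from pyteomics import mzml

ms1_tic, ms2_count, charge_states = [], 0, Counter()

with mzml.read("qc_injection.mzML") as spectra:           # hypothetical file
    for spectrum in spectra:
        level = spectrum.get("ms level")
        if level == 1:
            ms1_tic.append(spectrum.get("total ion current", 0.0))
        elif level == 2:
            ms2_count += 1
            try:                                          # precursor charge, if recorded
                ion = spectrum["precursorList"]["precursor"][0]["selectedIonList"]["selectedIon"][0]
                charge_states[int(ion["charge state"])] += 1
            except (KeyError, IndexError, TypeError, ValueError):
                charge_states["unknown"] += 1

print(f"MS1 scans: {len(ms1_tic)}, MS2 scans: {ms2_count}")
print(f"median MS1 TIC: {median(ms1_tic) if ms1_tic else 0:.3g}")
print(f"precursor charge distribution: {dict(charge_states)}")
```

Metrics like these can be computed the moment acquisition finishes, before any database search, which is why they are well suited to in-run go/no-go decisions.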

For this brief discussion of QC metrics, we recommend going back to arguably the start of mass spectrometry–based proteomics QC with the now defunct NIST MSQC, developed in 2010 to support Clinical Proteomic Tumor Analysis Consortium (CPTAC) efforts.[1],[85] The NIST MSQC included 46 metrics, which have since been expanded in numerous derived applications, such as QuaMeter,[86] and modeling-based metrics.[2] Shortly thereafter, in 2012, SIMPATIQCO[87] added the ability to automatically upload files into such QC programs, making automated longitudinal reports and dashboards possible, a concept that is still used in more recent QC software (Table 2). In general, these early metrics focused solely on the system performance using a blend of ID-free and ID-based metrics, but depending on the QC material used, additional metrics can be derived. For instance, LC separation performance and drift can be monitored using retention time standards like the PRTC or iRT, although using IDs of endogenous peptides themselves can also provide such metrics by modeling retention times.[6],[88] When proteins or proteomes are used, true digestion efficiency can be calculated along with ID-based metrics like tryptic miscleavage rates to accurately characterize digestion performance. Additionally, quantitative standards can be used to determine detection limits, either of a targeted assay (Pierce System Suitability Standard 7 x 5 mixture) or of isobaric labels (TMT11 yeast digest). Moreover, different labeling and acquisition techniques, such as isotopic and isobaric labeling, may give unique performance measures that capture labeling efficiency and channel missingness.[89],[90] These metrics are useful in that, when a problem arises anywhere between sample processing and data analysis, troubleshooting can be directed by the results.
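
As an example of a simple ID-based digestion metric, the sketch below computes the tryptic missed-cleavage rate from a list of identified peptide sequences; the sequences are illustrative only, and the rule used (cleavage C-terminal to K or R, not before P) is one common convention rather than a universal definition.

```python
# Minimal sketch of an ID-based digestion QC metric: the tryptic missed-cleavage
# rate among identified peptides. Input sequences are placeholders; assumes
# trypsin specificity (cleavage C-terminal to K/R, not before P).
import re

def missed_cleavages(peptide: str) -> int:
    """Count internal K/R residues not followed by P (i.e., missed tryptic sites)."""
    return sum(1 for m in re.finditer(r"[KR](?!P)", peptide)
               if m.start() < len(peptide) - 1)   # exclude the C-terminal residue

def missed_cleavage_rate(peptides) -> float:
    """Fraction of identified peptides carrying at least one missed cleavage."""
    flagged = sum(1 for p in peptides if missed_cleavages(p) > 0)
    return flagged / len(peptides) if peptides else 0.0

identified = ["LVNELTEFAK", "AEFVEVTKLVTDLTK", "YLYEIARR", "QTALVELLK"]  # example IDs
print(f"missed-cleavage rate: {missed_cleavage_rate(identified):.1%}")
# Tracked per QC2 injection, a rising rate points to incomplete digestion.
```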

In addition to capturing metrics per injection or experiment, it is essential to monitor these metrics longitudinally. These QC metrics can vary by LC column chemistry and flow rates, data acquisition type, and instrument platforms, and so defining acceptable margins is imperative.[5] It may be that a range of +/- 15% around a QC metric is acceptable, while anything outside this range would require the run to stop (a go/no-go metric). Defining these margins truly relies on an operator’s knowledge and experience with their specific setup and how these quality metrics manifest in the final data analysis (i.e., there are some metrics that do not have universal thresholds for QC). Being able to track values across long periods of time will invariably help when troubleshooting the system or when determining the need for periodic maintenance. This is also a case in which sharing data on the same material acquired with the same method between laboratories can be helpful, since drift in instrument performance and data quality can then be compared across similar setups.
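
A minimal sketch of such a go/no-go check is shown below, flagging a new QC metric value that falls outside either an illustrative +/- 15% band around the historical mean or 3 standard deviations of the historical values; the metric, history, and thresholds are hypothetical and should be replaced by values established for each laboratory's own setup.

```python
# Minimal sketch of a go/no-go check for one QC metric against longitudinal data.
# The +/-15% band and the 3-sigma rule are illustrative thresholds, not universal ones.
from statistics import mean, stdev

def check_metric(history, new_value, rel_tolerance=0.15, n_sigma=3.0):
    """Return (ok, message) comparing a new QC metric value to its history."""
    center = mean(history)
    spread = stdev(history) if len(history) > 1 else 0.0
    within_band = abs(new_value - center) <= rel_tolerance * abs(center)
    within_sigma = spread == 0.0 or abs(new_value - center) <= n_sigma * spread
    ok = within_band and within_sigma
    msg = (f"value {new_value:.1f} vs mean {center:.1f} "
           f"(band +/-{rel_tolerance:.0%}, {n_sigma:.0f} sigma = {n_sigma * spread:.1f})")
    return ok, msg

# Example: peptide identifications from the last ten QC2 injections (hypothetical).
history = [41200, 40150, 42800, 41900, 40800, 41500, 42100, 40500, 41700, 42300]
ok, msg = check_metric(history, new_value=33900)
print(("GO" if ok else "NO-GO") + ": " + msg)
```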

Finally, new quality metrics continue to be defined and implemented for specific use cases. For example, quantifying technical contaminants in samples (such as polymers and detergents[91]) is important but frequently overlooked. Tools that address this include mzsniffer (https://github.com/wfondrie/mzsniffer), custom Skyline templates,[92] and HowDirty.[93] Likewise, these concepts and tools could be applied to other informative contaminants, such as siloxane bleeding off columns. Similarly, there are QC tools for specific sample types, for instance, tools to evaluate plasma sample quality (http://www.plasmaproteomeprofiling.org/).[94] Moreover, as instruments improve and more readbacks are available, we expect there to be new metrics related to anything from spray voltage to ion mobility and single-ion detection. Each quality metric will improve our ability to define the normal and detect deviation from it.

QC SOFTWARE AND TOOLS

There is a plethora of QC scripts and tools described in the literature as “written in house,” without much further detail. For the sake of the mass spectrometry proteomics core, this discussion will focus on software and tools that are reasonably FAIR (Findable, Accessible, Interoperable, and Reusable)[95] and currently (December 15, 2023) usable with minimal setup. This includes most of the tools described in the bio.tools registry. Out of a total of 375 “quality control” software tools, 32 belong to the “Proteomics” domain (https://bio.tools/t?page=1&q=%27Quality%20control%27%2B%27Proteomics%27&sort=score). We list a few we consider to be of particular use in core laboratories and will note which software does and does not work within strict security and data privacy limitations. This table is by no means exhaustive and is complemented by an online database we make available on https://osf.io/nze34. To begin with, some software tools depend on a certain spike-in or QC material. For instance, QuiC (Biognosys) is standalone software made for data containing the iRT peptide spike and is able to track QC metrics longitudinally. Although from a commercial company, this software is free to use and generates many of the ID-free and ID-based metrics discussed above using DDA, DIA, or targeted (i.e., SRM/PRM) analyses. The triple knockout visualization tool (TVT)[46],[96] is a web portal that takes Thermo raw files as input. Though originally developed for the TMT11plex yeast material, the current version (2.5) supports all commercially available TMT sets up to and including TMT Pro 18-plex. The TVT provides numerous ID-based metrics using the known knockout proteomes while also providing ID-free metrics like peak widths and injection time and the ability to compare metrics between runs. Another material with associated software is the Promega 6 x 5 spike-in, which can be used along with PReMiS software. Lastly, though AutoQC can work with nearly anything with endless customization, numerous resources and templates are available to work specifically with the Pierce LC-MS/MS System Suitability Standard (7 x 5 mixture). More recent is the Bruker TwinScape “digital twin,” which collects QC data based on the Biognosys iRT kit in the cloud for longitudinal visualization in dashboards, although it is not included in Table 2 as it is commercial software. Coupling these QC materials and QC software tools lowers the activation energy to get QC set up in the core (see the Recommendations section).

Other software and tools typically can use any QC material to generate and track QC metrics, although they still may benefit from peptide spike-ins. For instance, the commercial software AutoQC Loader has built-in templates to use data from the Pierce LC-MS/MS System Suitability Standard (7 x 5 mixture). This generates specific metrics related to detection limits and retention time, but even without this spike-in, Panorama QC can also generate reports of ID-free and ID-based metrics as a function of time (analysis). Likewise, a free local or web-based option is QCloud2, which can track QC metrics over time for multiple instruments and QC materials, and acceptable performance margins can be set. Both RawHummus and QCloud2 can be set up locally and still accessed via a browser, but some knowledge of setting up and running containers is needed. The pmultiqc is a new tool integrated into the quantms workflow[97] based on the popular genomics QC tool multiqc[98] and using pyOpenMS and OpenMS tools.[99] This python-based tool generates QC reports that are easily shareable as html web pages, with metrics at the level of the MS1 and MS2 spectra and at the peptide and protein level. Overall, the specific software listed in Table 2 are easily installed, often have GUIs, and have adequate developer support to get any laboratory quickly up and running QC.

Table 2

Select Tools and Resources, Including Dedicated Software and Templates, for QC in Proteomics Core Facilities

| Software | MS data acquisition type | Input data format | QC levels | bio.tools entry | Associated QC material(s) |
|---|---|---|---|---|---|
| AutoQC[80] | targeted, DDA, DIA | Thermo RAW, mzXML, mzML | QC1, QC2, QC3 | panorama | Tutorials with Pierce System Suitability Standard (7 x 5 mixture)[80] |
| MaCProQC [100] | DDA | Thermo RAW, mzML | QC2 | MaCProQC | |
| MSstatsQC 2.0[101],[102], MSstatsQCgui | targeted, DDA, DIA | search results as tsv or mzTab | QC1, QC2, QC3 | MSstatsQC | |
| MsQuality | DDA | Thermo RAW, mzML, mzXML, MGF, other | - | MsQuality | |
| pmartR 2.0 [103],[104] | DDA, DIA | R data object of peptide intensities (can use MSNbase output) | QC2 | pmartR | |
| pmultiqc[97] | DDA, DIA | mzML, SDRF, mzTab | QC1, QC2, QC2 | pmultiqc | |
| PReMiS | DDA | Thermo raw, Sciex wiff, mzML | QC1, QC2, QC3 | PReMiS | Promega 6 x 5 Peptide Reference Mix |
| ProteomicsQC | DDA | Thermo RAW | QC2 | ProteomicsQC | |
| Proxl[105] | DDA (crosslinking) | proxl XML file (converted from other software outputs) | QC2 | proxl | |
| PTXQC[106] | DDA | search results as tsv (Maxquant) or mzTab (OpenMS) | QC2 | PTXQC | |
| QC-ART[107] | DDA | QC metrics by injection | QC2 | QC-ART | |
| QCloud2 [108],[109] | DDA | Thermo RAW | QC1, QC2, QC3 | QCloud2 | Currently BSA and HeLa only |
| QuaMeter | DDA | mzML | QC1, QC3 | QuaMeter | BSA, β-galactosidase, yeast lysate |
| QuiC | targeted, DDA, DIA | any vendor format | QC1, QC2, QC3 | QuiC | Biognosys iRT |
| Rapid QC-MS [110] | DDA | Any vendor by built-in msconvert | QC1, QC2, QC3 | Rapid-QC-MS | |
| RawBeans [111] | DDA, DIA | Thermo RAW natively, others by built-in msconvert | QC1, QC2, QC3 | RawBeans | |
| RawHummus [112] | DDA | mzML, mzXML | QC1, QC2, QC3 | RawHummus | |
| SimpatiQCo [87] | DDA | Thermo RAW | | SimpatiQCo | |
| SProCoP[5] | targeted, DIA | Skyline .sky files | QC1, QC2, QC3 | SProCoP | |
| TVT2.5[113] | DDA | Thermo RAW | QC1, QC2, QC3 | tvt_viewer | Pierce TMT11plex Yeast Digest |

These software are free to use, actively maintained, and functional, and many are also open source. The latter is essential for transparency on how QC benchmarks are derived from measurements. “Targeted” under “MS data acquisition type” refers to targeted data acquisition such as SRM, MRM, and PRM. The bio.tools entries contain additional information on software licenses, documentation, specific types of analysis operations performed by the software, input and output data types, and file formats. A more complete software list can be found in an online database we make available on https://osf.io/nze34.

Some tools, such as those in the TVT series, require data to be uploaded to a server. This is convenient, as it can be used from data acquisition computers without the need to install additional software, and results can be viewed in a web browser, as long as the computers are connected to the internet. For laboratories operating under tighter security or where data privacy is of concern, most of the above software can be installed and run on computers not directly connected to the internet. We note that data from QC samples interspersed with experimental samples may contain carryover from those samples, especially when run immediately after a sample. In this scenario, the QC sample could also be used to quantify carry-over (i.e., abundant peptides from experimental samples could be tracked in the QC sample). However, and more critically, this observation of carryover can also mean that the same privacy and security concerns that apply to data from experimental samples may also apply to the data from the QC samples. Overall, there is an abundance of software choices that could work within most limitations.

FUTURE DIRECTIONS

It is evident that there are a plethora of materials, metrics, and software tools to incorporate QC in the core laboratory. Still, there are much needed advancements on the horizon that will make this even easier by improving the interoperability and reusability of QC. For instance, as the mzQC standard is finalized and adopted (https://psidev.info/mzqc), it will become easier to compare, combine, and track QC metrics from different software tools into custom QC reports fit for the laboratory, study, or analysis as well as deposit this information alongside experimental data.[114] The idea of universal QC, a radical concept discussed in depth during a 2022 Lorentz Center Workshop on proteomics and machine learning,[115] is that one day researchers will be able to generate realistic synthetic data matching every sample, analytical procedure, and instrumentation in order to evaluate the quality of any dataset without commercial QC standards, making each sample its own QC sample. Such integrated models for synthetic data generation may still be a few years away from being implemented in core laboratories, but once they are, they will immediately provide a framework for computing many informative QC metrics based on the analysis results. Until we reach this point, the availability of a wide range of QC materials across QC levels coupled with QC software should enable any proteomics mass spectrometry core to generate QC metrics of their complete process, helping troubleshoot when needed and lending confidence in delivered results.

RECOMMENDATIONS

This paper has hopefully provided helpful information about commercially available QC materials and how they might be used in a core setting. Although we touch on special use cases such as metaproteomics, we would like to provide a general starting recommendation to a core providing discovery or quantitative proteomics services. First, choose a software option from Table 2 and load in data from your instrument(s), or use the relevant available datasets (Table 1) to evaluate the software-generated reports. These reports should help you decide if an injection is acceptable and, if not, help troubleshoot why not as well as be something that can be shared with end-users. Some software may rely on a technique you do not use (i.e., TVT2.5 is for TMT), and other software may not provide a desired feature (i.e., some focus more on longitudinal-based statistical process control). Once software has been selected, purchase the requisite material (likely a QC1 spike, or QC2/QC3 material), and begin to implement it in your core using the Methods for Implementing QC section as a starting point. The sooner you begin collecting QC metrics, the more historical information you will have to rely on.

ACKNOWLEDGMENTS

The authors wish to thank Christopher Ashwood for insight on commonly used materials in glycan and glycoproteomic analysis, Lindsay Pino for general discussions and expert insight, specifically about AutoQC, Matt Foster for a real-world QC perspective, and Clay Davis for assistance in navigating the NIST SRM catalog. We also wish to thank the reviewers of this manuscript for their time, energy, and essential feedback. These opinions, recommendations, findings, and conclusions do not necessarily reflect the views or policies of NIST or the United States Government. Identification of certain commercial equipment, instruments, software, or materials does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the products identified are necessarily the best available for the purpose. Yasset Perez-Riverol would like to acknowledge funding from Wellcome (Grant Number 208391/Z/17/Z) and European Molecular Biology Laboratory core funding.
