
qPCR Analysis: It’s What’s Inside That Counts

 :: Posted by avi_wener on 07-29-2010

If you watched the video on real-time quantitative PCR data analysis, you should have a good understanding of real-time quantitative PCR basics and the associated data analysis techniques. Classical quantification techniques such as the Livak (delta-delta Ct), delta Ct, and Pfaffl methods rely on linear regression analysis and are currently the most widely accepted methodologies for quantitative PCR.

In a paper published recently in PLoS ONE, Jensen et al. discuss several drawbacks of the conventionally accepted methodologies and propose an alternative technique for relative real-time qPCR data analysis. If you recall from the video, relative quantification can be done either by normalizing samples against a unit of mass or against a reference gene. When normalizing against a unit of mass, a calibrator sample is needed, usually chosen from a control sample. The calibrator's unit of mass, such as its cell number or amount of nucleic acid, is then accurately measured through empirical techniques. While using a unit of mass as a calibrator for relative quantification is fairly simple, its drawbacks include the need to precisely quantify the amount of starting material in each sample, a PCR reaction efficiency that is close to 100%, and minimal changes in gene expression between the experimental samples and control groups.
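As a rough illustration of the unit-of-mass approach (my own numbers, not the paper's): if every reaction receives the same carefully measured amount of input, say 10 ng of cDNA, and the reaction runs at close to 100% efficiency so that the target doubles each cycle, then a sample that crosses the threshold two cycles earlier than the calibrator contains roughly 2^2 = 4 times as much target per nanogram of input.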

When using a reference gene as a calibrator you don't need to accurately quantify the amount of starting material in each sample, but you do need a reference gene whose expression level remains constant under the treatment conditions. Furthermore, the Livak (delta-delta Ct) method requires that the PCR efficiencies for both the gene of interest and the reference gene be within 5% of each other and close to 100%.
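For readers who like to see the formulas written out, here is a minimal Python sketch of the Livak and Pfaffl calculations as they are commonly presented in the qPCR literature. The Ct values and efficiencies are invented purely for illustration.

# Minimal sketch of the two classical relative-quantification formulas.
# All Ct values and efficiencies below are made up for illustration.

# Threshold cycles for the gene of interest (GOI) and the reference gene
ct_goi_control, ct_goi_treated = 24.0, 21.5
ct_ref_control, ct_ref_treated = 18.0, 18.2

# Livak (delta-delta Ct): assumes both assays run at ~100% efficiency,
# i.e. the target doubles every cycle (amplification factor of 2)
ddct = (ct_goi_treated - ct_ref_treated) - (ct_goi_control - ct_ref_control)
ratio_livak = 2 ** -ddct

# Pfaffl: allows gene-specific amplification factors estimated from standard curves
e_goi, e_ref = 1.95, 1.90
ratio_pfaffl = (e_goi ** (ct_goi_control - ct_goi_treated)) / \
               (e_ref ** (ct_ref_control - ct_ref_treated))

print(f"Livak ratio:  {ratio_livak:.2f}")
print(f"Pfaffl ratio: {ratio_pfaffl:.2f}")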

Jensen et al. point out that there are several other drawbacks to the conventionally accepted methodologies:

1. “the proportionality factor between fluorescence and sequence numbers differ widely between targets”

What this means is that the dyes used for fluorescent detection may bind differently to different targets. For example, SYBR Green dye binds to the minor groove of double-stranded DNA, and its signal depends on the length and sequence of the amplicon as well as on the salt concentration of the reaction buffer. As such, the level of SYBR Green signal may differ from one DNA sequence to another despite an equal number of molecules being present in the PCR sample.

2. Low-capacity PCR machines are limited in the number of samples that can be run and therefore cannot accommodate a full set of standard curves in each PCR run. “This introduces a run-to-run variability that inevitably contributes to the error of (PCR) efficiency.” Furthermore, “even tiny errors of the efficiency estimate are critical and induce disproportionally large errors because efficiency constitutes the base of the exponential PCR function.”

Jensen concludes that “in this light, errors associated with run-to-run variability of PCR unknowns are highly undesirable.”
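To get a rough feel for how “tiny errors of the efficiency estimate” blow up (my own back-of-the-envelope numbers, not taken from the paper): if the true amplification factor is 1.90 per cycle but the standard curve estimates it as 2.00, an error of only about 5%, then over 25 cycles the implied starting quantities differ by a factor of (2.00/1.90)^25, roughly 3.6-fold.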

In order to rectify these inaccuracies, Jensen et al. propose several solutions:

1. Construct a fusion-PCR product in which the reference gene and the gene of interest are cloned together into the same plasmid. This way, you can be assured that both genes are present in equal concentrations, which allows for a more accurate linear regression estimate.

2. Use Run-Internal Mini Standard curves (RIMS), which consist of internal data points at two different concentrations, leading to more accurate intercept and slope estimates in the linear regression analysis.
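To give a feel for the mini-standard-curve idea, here is a simplified two-point standard curve in Python. This is my own sketch of the general principle, not the authors' actual RIMS equations, and all of the numbers are invented.

import math

# Two standards at widely spaced ("extreme") known concentrations,
# run on the same plate as the unknowns. Values are invented for illustration.
log_qty = [math.log10(1e6), math.log10(1e2)]   # log10 copies per reaction
ct_std = [15.0, 28.4]                          # measured Ct of the two standards

# With only two points, the "regression" is simply the line through them.
slope = (ct_std[1] - ct_std[0]) / (log_qty[1] - log_qty[0])
intercept = ct_std[0] - slope * log_qty[0]

# Run-specific amplification factor implied by the slope (2.0 = 100% efficient)
efficiency = 10 ** (-1.0 / slope)

# Quantify an unknown sample measured in the same run
ct_unknown = 22.1
copies = 10 ** ((ct_unknown - intercept) / slope)

print(f"slope = {slope:.2f}, intercept = {intercept:.1f}")
print(f"amplification factor per cycle = {efficiency:.2f}")
print(f"estimated copies in unknown = {copies:.0f}")

Because the two standards sit in the same run as the unknowns, the slope, intercept, and efficiency are re-estimated for every plate, which is exactly the run-to-run variability issue Jensen et al. are addressing.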

The results of Jensen's experiments indicate that internal standard curves based on fewer samples are preferable to larger, external standard curves. Furthermore, when using only two samples to construct a standard curve, it is desirable to choose standard concentrations that are far from each other (termed “extreme concentrations” by Jensen).

Finally, Jensen demonstrates that the RIMS-based approach, which uses an internal standard curve generated with only two reference points, decreases run-to-run variability and is more accurate than Livak's delta-delta Ct method calculated using an external standard curve.

A significant advantage of the RIMS-based approach is that it renders the calibrator sample superfluous, since the reference standards are run together with the experimental samples. As a result, run-to-run variability is eliminated.

As a side note, an alternative way to avoid run-to-run variability is to use a higher-capacity PCR machine (such as Bio-Rad's CFX-384), which allows you to run a full standard curve with every PCR experiment. This way, sample quantity can be calculated using a standard curve that was run in parallel with your experimental samples.

The math involved in this paper is quite complicated and I won't go into it here. Nonetheless, I definitely welcome comments from members of the American Biotechnologist community who can help “dumb down” the equations for the general molecular biology populace.

Bernth Jensen JM, Petersen MS, Stegger M, Ostergaard LJ, & Møller BK (2010). Real-Time Relative qPCR without Reference to Control Samples and Estimation of Run-Specific PCR Parameters from Run-Internal Mini-Standard Curves. PLoS ONE, 5(7). PMID: 20661435