Using RSD in Quality Control and Method Validation

How analysts use relative standard deviation to judge precision, set acceptance criteria, and compare assays — with concrete numbers from a calibration example.

By RSDCalc Team · March 15, 2026 · Applications

In a quality lab, you don’t just want a number — you want to know how trustworthy that number is. Relative standard deviation is the workhorse statistic for answering that question.

The validation question

When you validate an analytical method, you’re asking: if I run this assay ten times on the same sample, how close together will my answers be?

RSD gives you a single, scale-free number that summarizes that closeness.

A worked example

Suppose you run a calibration sample through your HPLC seven times and read off these peak areas:

98.1, 99.2, 97.8, 98.5, 98.9, 99.0, 98.3

  • Mean: 98.54
  • Sample SD: 0.513
  • RSD: 0.52%

About half a percent. By most pharmaceutical standards, that’s excellent precision — well below the typical 2% acceptance threshold.
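The arithmetic above is easy to reproduce with Python's standard library; `statistics.stdev` uses the sample (n − 1) denominator, which is what method validation calls for:

```python
from statistics import mean, stdev

# Peak areas from the seven HPLC injections above
areas = [98.1, 99.2, 97.8, 98.5, 98.9, 99.0, 98.3]

m = mean(areas)      # arithmetic mean of the replicates
s = stdev(areas)     # sample SD (n - 1 denominator)
rsd = s / m * 100    # relative standard deviation, as a percent

print(f"mean = {m:.2f}, SD = {s:.3f}, RSD = {rsd:.2f}%")
# → mean = 98.54, SD = 0.513, RSD = 0.52%
```

If you need the population SD instead (n denominator), swap in `statistics.pstdev` — for this data set it gives 0.475, which is why it matters to state which convention a reported RSD uses.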

Setting acceptance criteria

Different industries set different bars:

  • Pharmaceutical assays: ≤ 2% RSD
  • Bioanalytical methods: ≤ 15% (≤ 20% near LLOQ)
  • Spectroscopy quantification: ≤ 5%
  • Chromatographic peak areas: ≤ 1% for major peaks

These are starting points. Your specific method’s regulatory or internal limits override any rule of thumb.
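In practice this kind of check is often automated. A minimal sketch, with the function names and the 2% limit chosen here purely for illustration:

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Sample RSD (n - 1 denominator), as a percentage of the mean."""
    return stdev(values) / mean(values) * 100

def meets_criterion(values, limit_pct):
    """True if the replicate set's RSD is at or below the acceptance limit."""
    return rsd_percent(values) <= limit_pct

areas = [98.1, 99.2, 97.8, 98.5, 98.9, 99.0, 98.3]
print(meets_criterion(areas, 2.0))   # pharmaceutical-style 2% limit → True
print(meets_criterion(areas, 0.25))  # much stricter limit → False
```

The limit is a parameter, not a constant, precisely because — as noted above — the right threshold depends on the method and its regulatory context.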

What RSD doesn’t tell you

RSD measures precision (closeness of repeat measurements), not accuracy (closeness to the true value). You can have a tightly clustered set of readings that are all consistently wrong. Good QC programs measure both — RSD plus recovery against a reference standard.
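A quick numerical illustration of that distinction, using made-up readings against a hypothetical reference standard with a certified value of 100.0:

```python
from statistics import mean, stdev

# Hypothetical replicate readings: tightly clustered, but biased low
true_value = 100.0
readings = [92.1, 92.3, 91.9, 92.2, 92.0]

rsd = stdev(readings) / mean(readings) * 100   # precision
recovery = mean(readings) / true_value * 100   # accuracy vs. reference

print(f"RSD = {rsd:.2f}%")          # → RSD = 0.17%  (looks excellent)
print(f"Recovery = {recovery:.1f}%")  # → Recovery = 92.1%  (systematically wrong)
```

An RSD under 0.2% would pass almost any precision criterion, yet every reading is about 8% low — exactly the failure mode a recovery check catches.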

Try a real validation set

Paste your replicate readings into the calculator and switch to the Sample (n − 1) option. You’ll see RSD, the underlying SD, and the mean side by side.