
Omkar Phatak
May 10, 2019

Calculating error percentage is a basic task in any kind of scientific analysis. This post shows you how to do it correctly, step by step.

Error is the amount by which a measurement deviates from the accurate value. You cannot calculate it unless you make quantitative measurements of the various quantities involved in your experiment. Measurement lets us calculate the error and tells us how close our theoretical models are to reality.

The scientific method proceeds through observation, theory, and experiment. We observe a phenomenon in nature and come up with a theory, expressed in the language of mathematics, to explain it. The only way to test the validity of that theory is through experiment.

What confirms a theory is a precise match between the observed experimental results and the predicted theoretical results. The degree of error, which is the difference between the observed and predicted values, can be the undoing of a theoretical model.

Even small discrepancies mean that there is something we do not yet understand and need to account for. We keep improving the theory and calculating the error in experimental results until the required precision is achieved.

Small errors get magnified over time and create bigger problems later. This is especially true of large machinery. In any form of engineering, an uncorrected error that has exceeded its tolerance limits can destroy the best-laid plans.

To calculate error percentage, you do not need advanced mathematical machinery; the ability to subtract, divide, and multiply is enough. Let us look at the calculation, step by step.
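
In compact form, the procedure described below boils down to the standard formula: Percent Error = (|Observed Value - Accurate Value| ÷ Accurate Value) × 100. The following steps work through this formula one operation at a time.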

The first step is to obtain the observed value and the accurate value. Say you are making a measurement with a scale. The accurate value should be 30 cm, but your observed value is 29.7 cm.

The next step is to subtract the accurate value from the observed value. That is, perform the following operation: (29.7 cm - 30 cm) = -0.3 cm.

The error is negative. Next, we take the absolute value of this difference, that is, the value of the difference irrespective of its sign. The 'Raw Error' therefore becomes 0.3 cm.

Next, divide the raw error by the accurate value: (0.3 cm / 30 cm) = 0.01. The value you get after this division is the 'Relative Error', which is 0.01 here. Note that the units cancel, so the relative error is a pure number.

Multiply the relative error by 100 to get the percent error. In this case, (0.01 × 100) = 1%. Thus, our measurement had a 1 percent error.
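
If you would rather let a computer do the arithmetic, here is a minimal Python sketch of the same steps (the function name percent_error and the sample values are only illustrative, not part of the original post):

def percent_error(observed, accurate):
    raw_error = abs(observed - accurate)    # steps 1-2: take the difference, then its absolute value
    relative_error = raw_error / accurate   # step 3: divide by the accurate value
    return relative_error * 100             # step 4: convert to a percentage

print(round(percent_error(29.7, 30), 2))    # prints 1.0, i.e. a 1 percent error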

You will get used to this calculation as you advance through your lab courses in school.