The language of measurement
The following subject-specific vocabulary provides definitions of key terms used in our AS and A-level Science specifications.
Accuracy
A measurement result is considered accurate if it is judged to be close to the true value.
Calibration
Marking a scale on a measuring instrument.
This involves establishing the relationship between the indications of a measuring instrument and standard or reference quantity values, and then applying any correction this reveals.
For example, placing a thermometer in melting ice to see whether it reads 0 °C, in order to check if it has been calibrated correctly.
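As a sketch of this idea (the readings below are hypothetical), a calibration check at a reference point yields a correction that can then be applied to later readings:

```python
# Calibration sketch: check a thermometer at the ice point (0 °C)
# and apply the resulting correction to later readings.
# All values here are hypothetical.

reference_value = 0.0   # true temperature of melting ice, in °C
indicated_value = 0.4   # what this thermometer actually reads, in °C

correction = reference_value - indicated_value  # -0.4 °C

def corrected(reading):
    """Apply the calibration correction to a raw reading."""
    return reading + correction

# Apply the correction to a later raw reading of 20.4 °C
print(corrected(20.4))
```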
Data
Information, either qualitative or quantitative, that has been collected.
Errors
See also uncertainties.
measurement error
The difference between a measured value and the true value.
anomalies
These are values in a set of results which are judged not to be part of the variation caused by random uncertainty.
random error
These cause readings to be spread about the true value, due to results varying in an unpredictable way from one measurement to the next.
Random errors are present when any measurement is made, and cannot be corrected. The effect of random errors can be reduced by making more measurements and calculating a new mean.
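A short simulation (with made-up numbers) illustrates why averaging helps: random errors scatter individual readings about the true value, so the mean of many readings tends to lie closer to it than the mean of a few:

```python
# Sketch: random errors spread readings about the true value, and
# averaging many readings reduces their effect. Values are simulated.
import random

random.seed(1)
true_value = 9.81  # eg g in m/s^2 (hypothetical experiment)

def measure():
    # each reading is perturbed by an unpredictable (random) error
    return true_value + random.gauss(0, 0.05)

few = [measure() for _ in range(5)]
many = [measure() for _ in range(500)]

mean_few = sum(few) / len(few)
mean_many = sum(many) / len(many)

# The mean of many readings is typically closer to the true value.
print(abs(mean_few - true_value), abs(mean_many - true_value))
```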
systematic error
These cause readings to differ from the true value by a consistent amount each time a measurement is made.
Sources of systematic error can include the environment, methods of observation or instruments used.
Systematic errors cannot be dealt with by simple repeats. If a systematic error is suspected, the data collection should be repeated using a different technique or a different set of equipment, and the results compared.
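The same kind of simulation (again with made-up numbers) shows why repeats do not help here: a systematic error shifts every reading by the same amount, so the mean converges on the shifted value, not the true one:

```python
# Sketch: a systematic error offsets every reading equally, so taking
# more readings and averaging does not remove it. Values are simulated.
import random

random.seed(2)
true_value = 100.0  # hypothetical true length, in cm
offset = 1.5        # systematic error, eg a miscalibrated scale

readings = [true_value + offset + random.gauss(0, 0.2) for _ in range(1000)]
mean = sum(readings) / len(readings)

# The mean converges on (true_value + offset), not on the true value.
print(mean)
```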
zero error
Any indication that a measuring system gives a false reading when the true value of a measured quantity is zero, eg the needle on an ammeter failing to return to zero when no current flows.
A zero error may result in a systematic uncertainty.
Evidence
Data which has been shown to be valid.
Fair test
A fair test is one in which only the independent variable has been allowed to affect the dependent variable.
Hypothesis
A proposal intended to explain certain facts or observations.
Interval
The quantity between readings, eg a set of 11 readings equally spaced over a distance of 1 metre would give an interval of 10 centimetres.
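The arithmetic behind the example is simply the span divided by one less than the number of readings:

```python
# Sketch: n equally spaced readings over a span give a reading interval
# of span / (n - 1): 11 readings over 1 metre -> 10 cm interval.
def interval(span, n_readings):
    return span / (n_readings - 1)

print(interval(100, 11))  # span in cm -> 10.0
```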
Precision
Precise measurements are ones in which there is very little spread about the mean value.
Precision depends only on the extent of random errors – it gives no indication of how close results are to the true value.
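This distinction can be made concrete with the standard deviation, a common measure of spread. In the hypothetical readings below, the first set is precise but far from the true value, while the second is centred near it but imprecise:

```python
# Sketch (hypothetical readings): precision reflects only the spread
# about the mean, not closeness to the true value.
import statistics

true_value = 10.0
precise_but_inaccurate = [12.01, 12.02, 11.99, 12.00, 11.98]
accurate_but_imprecise = [9.1, 11.2, 10.4, 8.9, 10.5]

# Standard deviation measures the spread (precision)
print(statistics.stdev(precise_but_inaccurate))  # small spread: precise
print(statistics.stdev(accurate_but_imprecise))  # large spread: imprecise
```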
Prediction
A prediction is a statement suggesting what will happen in the future, based on observation, experience or a hypothesis.
Range
The maximum and minimum values of the independent or dependent variables; important in ensuring that any pattern is detected.
For example, a range of distances may be quoted as either:
'From 10 cm to 50 cm'
or
'From 50 cm to 10 cm'
Repeatable
A measurement is repeatable if the original experimenter repeats the investigation using the same method and equipment and obtains the same results.
Reproducible
A measurement is reproducible if the investigation is repeated by another person, or by using different equipment or techniques, and the same results are obtained.
Resolution
This is the smallest change in the quantity being measured (input) of a measuring instrument that gives a perceptible change in the reading.
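One way to picture resolution (a simplified model, not a description of any particular instrument) is that readings are rounded to the nearest step the instrument can display, so changes smaller than the resolution are invisible:

```python
# Sketch: an instrument with finite resolution reports the nearest
# representable value; smaller changes produce no change in the reading.
def indicated(true_input, resolution):
    return resolution * round(true_input / resolution)

# A ruler with 1 mm resolution cannot distinguish 12.3 mm from 12.4 mm:
print(indicated(12.3, 1))  # 12
print(indicated(12.4, 1))  # 12
print(indicated(12.6, 1))  # 13
```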
Sketch graph
A line graph, not necessarily on a grid, that shows the general shape of the relationship between two variables. It will not have any points plotted and although the axes should be labelled they may not be scaled.
True value
This is the value that would be obtained in an ideal measurement.
Uncertainty
The interval within which the true value can be expected to lie, with a given level of confidence or probability, eg 'the temperature is 20 °C ± 2 °C, at a level of confidence of 95%'.
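One common classroom estimate (one of several possible methods) takes the uncertainty in a set of repeated readings as half the range of the readings; the values below are hypothetical:

```python
# Sketch: estimate the uncertainty in repeated readings as half the
# range, and report the result as mean ± uncertainty. Hypothetical data.
readings = [20.2, 19.8, 20.1, 19.9, 20.0]  # temperatures in °C

mean = sum(readings) / len(readings)
uncertainty = (max(readings) - min(readings)) / 2

print(f"temperature = {mean:.1f} °C ± {uncertainty:.1f} °C")
```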
Validity
Suitability of the investigative procedure to answer the question being asked. For example, an investigation to find out if the rate of a chemical reaction depended upon the concentration of one of the reactants would not be a valid procedure if the temperature of the reactants was not controlled.
Valid conclusion
A conclusion supported by valid data, obtained from an appropriate experimental design and based on sound reasoning.
Measurement uncertainty can obscure science concepts like conservation of energy. Students need a solid foundation of measurement technique to be able to learn science.
Here is a common situation in today's inquiry-based science classroom: an instructor leads a lab activity intended to demonstrate the concept of conservation of mechanical energy. Students measure the energy of a pendulum at various points during its swing and compare the totals. No matter how careful they are, most students will measure different values for the energy of the pendulum at different locations. What does that mean? Is energy conserved or not? Some students will find that the energy increases as the pendulum swings; others will find that it decreases.
A common scapegoat is the catch-all culprit "error." But what do we as instructors mean when we say error? Are we implying that students made a mistake? Are the variations in measurements really errors? To be able to make sense of this situation, students need a firm understanding of measurement uncertainty. They need to know how to determine the measurement uncertainty, and how to propagate it through calculations. Finally, they need to be able to state results in terms of uncertainty. Given the trend towards teaching science by inquiry, students must be able to understand the role of measurement uncertainty when they use data to draw conclusions about science concepts.
Effective measurement technique includes these key concepts:
- Distinguishing between error and uncertainty
- Recognizing that all measurements have uncertainty
- Identifying types of error, sources of error, and how to detect/minimize error
- Estimating, describing, and expressing uncertainty in measurements and calculations
- Using uncertainty to describe the results of their own lab work
- Comparing measured values and determining whether they are the same within stated uncertainty
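The last point in the list above can be sketched as a simple overlap test (a common classroom rule of thumb, not the only possible criterion): two results agree within their uncertainties if their uncertainty intervals overlap:

```python
# Sketch: judge two measured results "the same within uncertainty"
# if their uncertainty intervals overlap. Values are hypothetical.
def agree(value_a, unc_a, value_b, unc_b):
    return abs(value_a - value_b) <= unc_a + unc_b

# Pendulum energy measured at two points of the swing, in joules:
print(agree(0.52, 0.03, 0.49, 0.02))  # True: intervals overlap
print(agree(0.52, 0.01, 0.49, 0.01))  # False: no overlap
```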
Defining Error and Uncertainty
Some of the terms in this module are used by different authors in different ways. As a result, the use of some terms here might conflict with other published uses. The definitions used in this module are intended to match the usage in documents such as the NIST Reference on Constants, Units and Uncertainty.
For example, the term error, as used here, means the difference between a measured value and the true value for a measurement. Since the exact or "true" value of a quantity often cannot be determined, the error in a measurement can rarely be determined either. Instead, it is more consistent with the NIST approach to quantify the uncertainty of a measurement.
Uncertainty as used here means the range of possible values within which the true value of the measurement lies. This definition changes the usage of some other commonly used terms. For example, the term accuracy is often used to mean the difference between a measured result and the actual or true value. Since the true value of a measurement is usually not known, the accuracy of a measurement is usually not known either. Because of these definitions, we modified how we report lab results. For example, when students report results of lab measurements, they do not calculate a percent error between their result and the actual value. Instead, they determine whether the accepted value falls within the range of uncertainty of their result.
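The reporting approach described above can be sketched as a simple containment check (hypothetical student data): instead of computing a percent error, ask whether the accepted value lies inside the measured range:

```python
# Sketch: check whether the accepted value falls within the range of
# uncertainty of the student's result. Values are hypothetical.
def consistent_with(measured, uncertainty, accepted):
    return measured - uncertainty <= accepted <= measured + uncertainty

# Hypothetical student result for g, in m/s^2, against accepted 9.81:
print(consistent_with(9.7, 0.2, 9.81))  # True: 9.81 lies in [9.5, 9.9]
print(consistent_with(9.2, 0.2, 9.81))  # False: 9.81 outside [9.0, 9.4]
```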
The materials presented here are intended to teach measurement technique to students grades 9 through introductory college level. In addition, we have presented examples showing how to integrate these concepts into existing lab activities.