Mass and Measurement

At the start of this unit, we introduced the conservation of matter (or mass-energy), which states that matter is neither created nor destroyed. Mass is a property of matter that describes the amount of matter in an object. In the International System of Units (SI), mass is measured in kilograms (kg). One gram (g) is 1/1000 of a kilogram. Mass can also be measured in atomic mass units (1 u ≈ 1.66 × 10⁻²⁷ kg), a smaller unit that is more convenient when expressing the masses of atoms and molecules.
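
To make these unit relationships concrete, here is a brief Python sketch. It is only an illustration: the constants are the approximate conversion values quoted above, and the function names are my own.

```python
# Approximate conversion constants quoted in the text above.
KG_PER_GRAM = 1e-3     # 1 g = 1/1000 kg
KG_PER_U = 1.66e-27    # 1 u ≈ 1.66 × 10⁻²⁷ kg

def grams_to_kg(grams: float) -> float:
    """Convert a mass in grams to kilograms."""
    return grams * KG_PER_GRAM

def u_to_kg(atomic_mass_units: float) -> float:
    """Convert a mass in atomic mass units to kilograms."""
    return atomic_mass_units * KG_PER_U

print(grams_to_kg(500))   # 0.5 (kg)
print(u_to_kg(12))        # about 1.99e-26 (kg), roughly the mass of a carbon-12 atom
```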

Many other properties of matter also depend on the amount of matter (or mass) in an object. The kinetic energy of an object, the amount of energy an object possesses because of its motion, can be calculated using the formula: Eₖ = ½mv², where m is the mass of the object and v is the velocity (or speed) of the object. Kinetic energy is commonly measured in joules (1 J = 1 kg·m²/s²), but energy can also be measured in other units such as calories (cal) or kilowatt-hours (kWh). The momentum of an object can be calculated using the formula: p = mv. And the weight of an object, the force of attraction between the object and the planet Earth, can be calculated using the formula: W = mg, where g is the Earth’s gravitational field strength. Weight is commonly measured in newtons (N) or pounds (lb).
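
The three formulas above are easy to check with a short Python sketch. The mass, speed, and field strength below are illustrative example values, not measurements from the text.

```python
G_EARTH = 9.8   # Earth's gravitational field strength, in N/kg (approximate)

def kinetic_energy(mass_kg: float, speed_m_per_s: float) -> float:
    """Eₖ = ½mv², in joules."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

def momentum(mass_kg: float, velocity_m_per_s: float) -> float:
    """p = mv, in kg·m/s."""
    return mass_kg * velocity_m_per_s

def weight(mass_kg: float, g: float = G_EARTH) -> float:
    """W = mg, in newtons."""
    return mass_kg * g

m, v = 2.0, 3.0   # a 2 kg object moving at 3 m/s
print(kinetic_energy(m, v))   # 9.0 J
print(momentum(m, v))         # 6.0 kg·m/s
print(weight(m))              # 19.6 N
```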

Because an object’s weight is directly proportional to its mass, and weight is much easier to measure than mass using standard scales and balances, we usually find an object’s mass by measuring its weight instead. This is considered an indirect measurement because you are not measuring the mass directly. In everyday usage, mass and weight are often used interchangeably, but this is imprecise. Mass is an unchanging property of an object, but weight depends on the strength of the local gravitational field, which changes depending on where you are on Earth (and changes even more if you are not on Earth, but on another planet).
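
A minimal Python sketch of this indirect measurement, assuming approximate field strengths for the Earth and the Moon, shows why the local value of g matters when working back from a weight to a mass.

```python
G_EARTH = 9.8   # N/kg, near Earth's surface (approximate)
G_MOON = 1.6    # N/kg, near the Moon's surface (approximate)

def mass_from_weight(weight_n: float, g: float) -> float:
    """Infer a mass in kilograms from a measured weight in newtons (m = W/g)."""
    return weight_n / g

# The same 5 kg object produces very different scale readings in different places...
mass_kg = 5.0
print(mass_kg * G_EARTH)   # weight on Earth: 49.0 N
print(mass_kg * G_MOON)    # weight on the Moon: 8.0 N

# ...so the correct local g must be used when inferring the mass from a reading.
print(mass_from_weight(49.0, G_EARTH))   # 5.0 kg
```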

Recording measurements

A central concept in modern science and the scientific method is that all evidence must be empirical (based on observation and experiment). Measurement is simply a way to quantify and increase the precision of our observations. When conducting experiments and making observations, it is important to be as objective as possible and to minimize the subjective bias of the experimenter or observer. One way to do that is to discipline yourself to only record what you can observe directly with your senses… not what you can infer.

If you are trying to measure the mass of a liquid, you might measure the mass of the liquid inside a container (237.5 g) and the mass of the empty container itself (62.8 g). Then the mass of the liquid would be 174.7 g (237.5 g − 62.8 g). What would you record as your measurements? You would record the mass of the liquid inside the container and the mass of the empty container in your observations. The mass of the liquid itself is an indirect measurement: it is something that you calculated and inferred (you did not measure it directly with your senses), and you would include it in your data analysis.
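
The same bookkeeping, written as a tiny Python sketch, makes it clear which numbers are direct measurements and which number is calculated.

```python
# Direct measurements, recorded as observations.
container_with_liquid_g = 237.5
empty_container_g = 62.8

# Indirect measurement, reported in the data analysis.
liquid_g = container_with_liquid_g - empty_container_g
print(f"{liquid_g:.1f} g")   # 174.7 g
```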

Significant digits and scientific notation

All measurements are approximations. If you measure a length to be 6.5 cm, it is not exactly 6.5 cm. You are limited by your perception and the accuracy of your measuring instruments. If you could zoom way, way, way in, you would almost certainly see that the length is actually a hair longer than 6.5 cm or a hair shorter than 6.5 cm. And even if you zoomed in infinitely and the length was still exactly at the 6.5 cm mark, how accurate is the ruler?

In 1799, the meter was defined as the length of a platinum bar stored at the National Archives in France. In 1889, the definition of the meter was revised to the length between two lines on a platinum-iridium bar held at the melting point of ice. The definition was updated again in 1960 to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. And the current definition of a meter was established in 1983 as “the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second.” With each revision, the definition of a meter became more precise. But even with this latest definition, a meter is still only precise to within 0.1 nanometers (1 nm = 0.000000001 m). You cannot say that any length is exactly 1 m, or 6.5 cm.

The ruler below is designed to measure lengths to a precision of 0.1 cm (the markings on the ruler are spaced 0.1 cm apart). However, in science, it is conventional practice to estimate a measurement to the next “significant digit”… in this case, to the nearest 0.01 cm.

[Figure: a ruler marked in 0.1 cm increments measuring a green bar whose end falls between the 6.5 cm and 6.6 cm marks]

From a distance, it looks like the length of the green bar is 6.5 cm. But if you look more closely, you will see that the length is actually a bit longer, between 6.5 cm and 6.6 cm. Your goal is to estimate the measurement to the next significant digit. You do this by imagining that the space between 6.5 and 6.6 is broken up into ten smaller spaces, and then choosing the nearest one. It looks like the length is a little closer to 6.52 cm than to 6.51 cm, so you would record the measurement as 6.52 cm. You are not saying that the length is exactly 6.52 cm. Because this is a convention, other scientists will know that you measured the 6.5 cm and only estimated the 0.02 cm. This tells them that the length is in the range of 6.515 - 6.525 cm (6.52 cm ± 0.005 cm).
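
This convention can be written as a small Python helper. The function below is my own illustration: a reading recorded to the nearest 0.01 cm carries an implied uncertainty of half that step.

```python
def implied_range(reading: float, last_recorded_place: float) -> tuple:
    """Return the (low, high) bounds implied by the last recorded digit."""
    half_step = last_recorded_place / 2
    return round(reading - half_step, 6), round(reading + half_step, 6)

print(implied_range(6.52, 0.01))   # (6.515, 6.525), i.e. 6.52 cm ± 0.005 cm
```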

In math, the numbers 3, 3.0, and 3.00 are all equivalent. But in science, the measurements 3 m, 3.0 m, and 3.00 m all mean something different. If a scientist sees that a length was measured and recorded as 3 m, she knows that the length is not exactly 3 m, but only approximately 3 m. A length measured as 3 m tells her that the length is in the range of 2.5 - 3.5 m (3 m ± 0.5 m), a length measured as 3.0 m is in the range of 2.95 - 3.05 m (3.0 m ± 0.05 m), and a length measured as 3.00 m is in the range of 2.995 - 3.005 m (3.00 m ± 0.005 m). We say that a measurement of 3 m has one significant digit while a measurement of 3.00 m has three significant digits (the zeros are significant because they were recorded, which means that they were measured). The number of significant digits indicates the precision of the measurement.
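
A short Python sketch (the helpers are hypothetical, written only for illustration) makes the same point: the recorded measurement is kept as a string so its trailing zeros are preserved, and the last recorded place determines the implied range.

```python
def last_recorded_place(recorded: str) -> float:
    """Size of the last recorded place: '3' -> 1, '3.0' -> 0.1, '3.00' -> 0.01."""
    if "." in recorded:
        return 10.0 ** -len(recorded.split(".")[1])
    return 1.0

def implied_bounds(recorded: str) -> tuple:
    """Range implied by a recorded measurement: value ± half the last recorded place."""
    value = float(recorded)
    half = last_recorded_place(recorded) / 2
    return round(value - half, 6), round(value + half, 6)

for reading in ["3", "3.0", "3.00"]:
    print(reading, "m ->", implied_bounds(reading))
# 3 m -> (2.5, 3.5)
# 3.0 m -> (2.95, 3.05)
# 3.00 m -> (2.995, 3.005)
```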

As a scientist, what do you know when the attendance at a football game is recorded as 30,000 people? Does that mean there were exactly 30,000 people (counted to five significant digits)? Somewhere between 25,000 and 35,000 people (counted to one significant digit)? Somewhere between 29,500 and 30,500 people (counted to two significant digits)? It is hard to know. Scientists created scientific notation to make numbers easier to read… at least for other scientists! If I report that the attendance at the football game is 3.0 × 10⁴, then you would know that I am reporting two significant digits. Scientific notation also makes it easier for other scientists to quickly interpret the magnitude (or size) of a number by its exponent, and to write extremely large and extremely small numbers. For example, Avogadro’s number has been measured to be 6.02214179 × 10²³. The “6.02214179” tells you that Avogadro’s number has been measured to nine significant digits (incredibly precise) and the “23” tells you that it is very large (on the order of 1 followed by twenty-three zeros).
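
If you want to see this mechanically, Python's built-in exponent formatting can report a value with a chosen number of significant digits. The helper below is illustrative, not a standard tool.

```python
def to_scientific(value: float, sig_digits: int) -> str:
    """Format a value in scientific notation with the given number of significant digits."""
    return f"{value:.{sig_digits - 1}e}"

print(to_scientific(30000, 2))           # 3.0e+04  (two significant digits)
print(to_scientific(30000, 5))           # 3.0000e+04  (five significant digits)
print(to_scientific(6.02214179e23, 9))   # 6.02214179e+23
```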
