EXPERIMENT TWO—LECTURE NOTES
 In future lab reports, I will look for:
o A brief abstract of the experiment of the form I handed out for Exp. 1
o The organization of your report. Specifically, is it clear and easy to read? Does it flow, or do I have to skip around to figure out where the next table or figure is located? Do you refer to all the figures and tables in the text (e.g., in the lab manual or on your written pages)?
o Proper treatment of uncertainty.
o Inclusion of all the appropriate experimental data.
o Numbers and titles on ALL tables and figures/graphs.
o Graph titles must not say what I can read on the axes. I.e., do not use “something vs. something else” for a title. E.g., don’t use “velocity squared vs. distance” when velocity squared and distance are the dimensions on the axes.
o Dimensions AND units on spreadsheet columns.
o Dimensions AND units on axes of graphs.
o Completeness and correctness of analysis.
o Accuracy, completeness, and clarity of answers to questions in the manual
§ Particularly important: Quantitative answers. I.e., numbers from your data or calculations to support your conclusion.
WHAT I WILL TAKE OFF POINTS FOR ON WEEKLY LAB REPORTS
 What you can learn in this experiment:
o How to determine and record measurement uncertainties when you:
§ Make only one measurement;
§ Make multiple measurements of the same thing.
o How to “propagate” these uncertainties when calculating a value derived from more than one measurement. In nearly every experiment, we will deal with values we do not measure. For example, volume. In this experiment, we will measure length, width, and depth, determine their uncertainties, and use these values and uncertainties to calculate volume and its uncertainty.
o How to determine if your experimental results are in agreement with an accepted value.
o The statistical basis for uncertainty propagation.
o How to maximize the accuracy (defined shortly) of measurements made with a meter stick by minimizing parallax
o How to measure using a vernier caliper
Background
 NO measurement can be made without some uncertainty. As a consequence, NO physical constant is known exactly. All are known within some range, expressed as “(numerical value) ± (numerical uncertainty) (units).” When you measure the width of your lab bench, you cannot know the answer exactly. You can only state that the width lies within a certain range of uncertainty. Your inability to determine exactly the width of the table is not a result of your lack of experience in experimentation. It is a fundamental characteristic of all physical measurements. No matter how much experience you have, no matter how many PhDs you accumulate, you CANNOT make any measurement without uncertainty. Whether experimental results agree with the accepted value (i.e., whether the experimental results are accurate) depends on whether the range of uncertainty (degree of precision) about the accepted value overlaps the range of uncertainty surrounding the experimental results. It also depends on the ability to take a number of measurements of the same thing whose values cluster closely to a mean value (more on that later). Consequently, the ability to determine and express uncertainties in measurements, and to determine the appropriate uncertainties of values calculated from these measurements, is critical in science.
o For example, if you are trying to determine experimentally the acceleration due to gravity, you will get a number from your experiment, plus or minus an associated uncertainty. Suppose your experimental result is 9.850 ± 0.038 m/s^{2}. The size of this uncertainty is a measure of the precision (defined shortly) of your measurements. The question is: Is your measured value accurate? That is, does this value agree with the accepted value of the acceleration due to gravity within the uncertainties of the experimental and accepted values? In order to answer this question we must know the accepted value and its associated uncertainty. Suppose the accepted value and its uncertainty is 9.810 ± 0.004 m/s^{2}. For your experimental result to agree with the accepted value within experimental uncertainty, the two ranges of values must overlap. Thus the accepted value for the acceleration due to gravity lies in the range between 9.806 and 9.814 m/s^{2}. The experimental value lies in the range between 9.812 and 9.888 m/s^{2}. Since the two ranges overlap, the experimental result agrees with the accepted value within experimental uncertainty. The correct statement of the result of the experiment is: “The experimental value is consistent with the hypothesis that the accepted value for the acceleration due to gravity is correct.” NOTE: This does not prove the hypothesis; it merely supports it.
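The overlap test described above is simple enough to automate. Here is a minimal sketch (the function name is my own invention, not part of the lab manual) that checks whether two uncertainty ranges overlap:

```python
def ranges_overlap(value1, unc1, value2, unc2):
    """Return True if [value1-unc1, value1+unc1] overlaps [value2-unc2, value2+unc2]."""
    return (value1 - unc1) <= (value2 + unc2) and (value2 - unc2) <= (value1 + unc1)

# The example from the text: g_exp = 9.850 +/- 0.038, g_acc = 9.810 +/- 0.004.
# Ranges (9.812, 9.888) and (9.806, 9.814) overlap, so the result is consistent.
print(ranges_overlap(9.850, 0.038, 9.810, 0.004))  # True
```

If the ranges did not overlap, the experimental result would not be consistent with the accepted value within experimental uncertainty.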
 Note: all the uncertainty values we will generate are estimates.
 You will need to understand the following terms to correctly treat uncertainty in the experiments and you will need to understand them for both exams.
o Random error. When a measurement is as likely to result in a value larger than the true value as in a value smaller than the true value, the resulting error (or uncertainty) is said to be random. Often, the results of such measurements fall on a normal distribution curve (also referred to as a Gaussian distribution). When the results of a series of measurements are normally distributed, we can use statistical techniques to estimate the uncertainty. We will not go into detail on the statistical basis for our estimates of uncertainty, but it is important that you realize that the techniques we use have a sound statistical basis. Random error arises from our inability to exactly repeat a measurement. For example, if you are using a stopwatch to time the fall of a tennis ball, as you will in this experiment, you will sometimes start or stop the watch too early and sometimes too late. The key assumption is that you are as likely to start or stop too early as too late, so any individual error is as likely to be too high as too low; therefore, we will assume the answers are randomly distributed around the true value. Think about the word “random.” Random means the results have no pattern; they are as likely to be too large as too small.
o Systematic error. Systematic errors have a pattern. They are typically all in one direction from the true value, and frequently by about the same amount. That is, they are all larger than the true value or all smaller. Possible sources of systematic error include: defective instruments, incorrect calibration of instruments, faulty experimental methods, improper zeroing of the instruments, regular observational errors, etc.
o Accuracy. We will define accuracy as the degree to which a measurement agrees with a known standard. For example, if you are given a standard width for your lab bench (for example, one specified by the manufacturer), then the difference between the result of your measurements and this standard value is a measure of the accuracy of your measurements. In the absence of a given standard, accuracy is determined by the calibration accuracy of the instruments used. In this class, we have no way to check the calibration accuracy of our instruments. Therefore, in this class, if you are not given an accepted value, you cannot determine accuracy.
o Precision. Precision describes the width of the spread in a series of measurements of the same object, using the same technique and instruments, and under the same conditions. It describes how closely you can repeat a series of measurements. For example, suppose you make a series of measurements of the width of a lab bench, and the measurements are: 1.251 m, 1.252 m, 1.249 m, 1.253 m, and 1.252 m. The precision describes the degree of variability in these measurements, i.e., the degree to which the measurements differ from a single value that is chosen to represent the series of measurements. The value usually chosen to represent a set of measurements is the average, 1.251 m for the above measurements. Thus the precision is a measure of the width of the distribution of measured values, from 1.249 m to 1.253 m in this example. It measures how much the measured values differ from the average.
o Actual or absolute uncertainty. This is the uncertainty you estimate (for a single measurement) or calculate (for multiple measurements of the same thing). It is expressed in the same units as the measurement. For example, if you measured the width of your lab bench and determined that the width was 1.63 ± .01 m, the ± .01 m is the absolute or actual uncertainty because it has the same units as the width of the table.
o Fractional uncertainty. This is the ratio of the uncertainty in a value to the value itself and is always dimensionless. In the above example, if we divide the actual uncertainty by the width, we find .01 m/1.63 m = .006, where the .006 is dimensionless.
o Percent uncertainty. This is the fractional uncertainty times 100. It is expressed in percent. In the above example, (.01/1.63) x 100 = 0.6%.
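The three forms of uncertainty just defined differ only by simple arithmetic. A quick sketch using the bench-width example above:

```python
# The bench-width example from the text: 1.63 m with 0.01 m absolute uncertainty.
width = 1.63   # m, the measurement
sigma = 0.01   # m, absolute (actual) uncertainty -- same units as the measurement

fractional = sigma / width   # dimensionless ratio
percent = fractional * 100   # expressed in percent

print(round(fractional, 3))  # 0.006
print(round(percent, 1))     # 0.6 (i.e., 0.6%)
```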
 Without a statement of the precision of your results, their value is little to none.
 Precision versus accuracy. Consider a meter stick soaked in water overnight. What would you expect to happen to the stick? Would it still accurately measure length? Suppose you made a series of measurements of the same object with this meter stick. Could the results be precise? Think about these two questions. If you can answer them correctly, you will have no problem with the questions on the two exams involving accuracy and precision.
 A meter stick soaked in water would no longer be accurately calibrated against the international standard for the meter. Thus measurements made with it would not be accurate; they would not correspond to the true length of the object measured.
 You would be able to make measurements with this soaked meter stick that would be as precise (repeatable) as those you could make with a properly calibrated meter stick. A key point here is that the precision of the use of the stick is not a property of the stick, but of your ability to properly use the stick repeatedly.
 Histogram. A histogram is a vertical-bar graph where the height of each bar represents the frequency of occurrence of values that fall in a specified range, or bin. For example, one bin might be all values from 1.0 up to (but not including) 2.0, the next bin all values from 2.0 up to (but not including) 3.0, and so on. The widths of the bins must all be the same, and the precise width is chosen based on an examination of the data. The width is chosen to assure no data value can fall in more than one bin and all data values fall in some bin. The result is a frequency distribution of the values in the data set. The height of each bar represents how many data values are in that bin. Said another way, the height of each bar represents the frequency with which data values in the range represented by that bin occur.
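The binning idea can be sketched in a few lines of code (the data, bin edges, and function name here are my own; the bins are half-open so that every value lands in exactly one bin, as the text requires):

```python
def histogram(values, low, width, nbins):
    """Count how many values fall in each of nbins equal-width bins starting at low."""
    counts = [0] * nbins
    for v in values:
        i = int((v - low) // width)   # index of the bin containing v
        if 0 <= i < nbins:
            counts[i] += 1
    return counts

data = [1.2, 1.8, 2.1, 2.4, 3.0, 3.7, 2.9]
# Bins [1, 2), [2, 3), [3, 4): the bar heights are the counts per bin.
print(histogram(data, 1.0, 1.0, 3))  # [2, 3, 2]
```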
 Determining uncertainty. There are two methods of determining uncertainty associated with a measurement or set of measurements. If there is only one measurement, then you must estimate the uncertainty. This is not just a guess. Carefully assess how accurately you believe you can read the instrument, in this experiment a meter stick. If there are multiple measurements, you may calculate the standard deviation of the measurements. Determining the standard deviation assumes the underlying distribution of values is normally distributed. The standard deviation gives a measure of the width of this normal distribution.
 Determining the standard deviation. Compute the average of your measurements. Calculate the difference between each measurement and the average. Square these differences. Add the squares of the differences. Calculate the average of this sum of the squares of the differences. You do this by dividing the sum of squares by the number of measurements. Take the square root of this average. This is the standard deviation; it is a measure of the width of the distribution of measurements. Let’s review the process:
o You take the average of the squares of the differences between each measurement and the average of all measurements;
o You take the square root of this average of the squares of the differences.
o The process is summarized in equation (1) below.
σ = [(1/N) Σ_{i=1}^{N} (x_{i} − x_{avg})^{2}]^{1/2} (1)
where x_{i} is the i-th of the N measurements and x_{avg} is their average.
 Calculating the standard deviation is the preferred method. Use the estimation method only when multiple measurements are not practical.
NOTE: Strictly speaking, equation (1) is the root-mean-square deviation, not the standard deviation. The standard deviation is a measure used in statistical analysis in which a subset of a large population is "sampled," and it divides the sum of squares by N − 1 rather than N. We will be deliberately a bit sloppy here in our use of the term, but it is important that you understand the distinction in case you run across the standard deviation in the future.
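The recipe above translates directly into code. This sketch implements equation (1) with the N divisor, using the five bench-width values from the precision example earlier:

```python
from math import sqrt

def std_dev(measurements):
    """Equation (1): root-mean-square deviation from the average (N divisor).

    Note this is NOT the N-1 'sample' standard deviation that spreadsheet
    functions typically compute.
    """
    n = len(measurements)
    avg = sum(measurements) / n
    sq_diffs = [(x - avg) ** 2 for x in measurements]   # squared differences
    return sqrt(sum(sq_diffs) / n)                      # root of their average

widths = [1.251, 1.252, 1.249, 1.253, 1.252]  # m, the bench-width example
print(round(std_dev(widths), 4))              # 0.0014
```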
 Uncertainty propagation. First, let's define what it means to “propagate” uncertainty and why we do it. Suppose you want to determine the volume of a solid body, such as a block of metal (this is the example we will actually do in this experiment). You measure its dimensions: length, width, and depth. Then you calculate the volume by multiplying length times width times depth. You have estimates of the actual uncertainties in the measurements of length, width, and depth. The problem is to use these estimated uncertainties in your measurements to calculate an uncertainty for the calculated volume. We say that we “propagate” the uncertainties of the individual measurements through the calculation of volume. So, the question is: For different types of calculations, how do we propagate the uncertainties in measured values to determine the uncertainty of the calculated value? There are two methods: the simple rules and the calculus method.
o Simple rules. The simple rules for uncertainty propagation are:
1. Simple rule for addition or subtraction:
If C = A + B or C = A − B, then
σ_{C} = (σ_{A}^{2} + σ_{B}^{2})^{1/2} (2)
Note: if C = rA + sB, where r and s are constants, then
σ_{C} = [(rσ_{A})^{2} + (sσ_{B})^{2}]^{1/2} (3)
2. Simple rule for multiplication or division:
If C = AB^{n}, then
σ_{C}/C = [(σ_{A}/A)^{2} + (nσ_{B}/B)^{2}]^{1/2} (4)
and
σ_{C} = C[(σ_{A}/A)^{2} + (nσ_{B}/B)^{2}]^{1/2} (5)
NOTE: The exponent goes inside the brackets and is squared. One of the most common errors students make in this class is not placing the exponent inside the brackets to be squared.
NOTE: The simple rule for multiplication or division works when the exponent n is positive (multiplication) or negative (division).
NOTE: If C = B^{n}, then
σ_{C}/C = [(nσ_{B}/B)^{2}]^{1/2} (6)
Equation (6) holds whether n is positive or negative.
NOTE: Rules 1 and 2 work ONLY when no variable occurs more than one time in the equation for which you are propagating uncertainty. If any variable occurs more than once, you must factor the expression so that no variable occurs more than once or you must use the basic calculus equation. (There will be a handout for the basic calculus equation.)
o Basic calculus equation. It is frequently confusing to use the simple rules. When in doubt, use the basic calculus equation. If D = f(A, B, C), where f(A, B, C) means “a function of the variables A, B, and C,” then

σ_{D} = [(∂f/∂A)^{2}σ_{A}^{2} + (∂f/∂B)^{2}σ_{B}^{2} + (∂f/∂C)^{2}σ_{C}^{2}]^{1/2} (7)

Often the basic calculus equation is the easiest to use.
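To make the two methods concrete, here is a sketch (all numerical values are made up for illustration) that applies simple rules 1 and 2, then checks rule 2 against the calculus equation using numerically approximated partial derivatives:

```python
from math import sqrt

A, sA = 2.00, 0.03   # hypothetical measurement A and its uncertainty
B, sB = 5.00, 0.04   # hypothetical measurement B and its uncertainty

# Rule 1 (eq. 2): C = A + B
sC_add = sqrt(sA**2 + sB**2)
print(round(sC_add, 3))    # 0.05

# Rule 2 (eqs. 4-5): C = A * B**n with n = 2. Note the n goes INSIDE
# the brackets and is squared -- the common error warned about above.
n = 2
C = A * B**n
sC_mult = C * sqrt((sA / A)**2 + (n * sB / B)**2)
print(round(sC_mult, 2))   # 1.1

# The calculus method (eq. 7), approximating each partial derivative
# with a central difference instead of doing the calculus by hand:
def propagate(f, values, sigmas, h=1e-6):
    total = 0.0
    for i, s in enumerate(sigmas):
        up, dn = list(values), list(values)
        up[i] += h
        dn[i] -= h
        dfdx = (f(*up) - f(*dn)) / (2 * h)   # numerical partial derivative
        total += (dfdx * s) ** 2
    return sqrt(total)

# Both methods agree for C = A * B**2:
print(round(propagate(lambda a, b: a * b**2, [A, B], [sA, sB]), 2))  # 1.1
```

Because rule 2 is derived from equation (7), the two answers must match whenever no variable appears more than once.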
 Significant figures versus decimals. For single measurements, uncertainty may be indicated explicitly as

(value) ± (uncertainty)(units) (8)

or it may be indicated implicitly by the number of significant figures. Significant figures in a number are those figures that are certain, plus the first one that is doubtful. For example, in measuring the width of your lab bench, you might determine that the width is between 1.253 and 1.254 m. Suppose you estimate the distance between the edge of the bench and the 1.253 m mark and determine that the measurement should be 1.2535 m. Thus, 1, 2, 5, and 3 are the certain digits. The final 5 is the first doubtful digit. The 3 is a measurement in millimeters; the meter stick is marked in millimeters, so I can tell with great confidence whether the point to which I am measuring is on one side of a millimeter line or the other. The final 5 is the “first uncertain digit.” That means that this is the first digit of which I am not certain. I acknowledge that I cannot be absolutely confident by what fraction of a millimeter I am beyond the 3 millimeter mark. Don't confuse significant figures with decimals (digits to the right of the decimal point). Significant figures may lie on either side of the decimal. Consider these examples.
o 2.713 has 4 sig. figs. and 3 decimals
o 0.0832 has 3 sig. figs. and 4 decimals
o 3710.0 has 5 sig. figs. and 1 decimal
o 3710 is ambiguous. It could mean 3 sig. figs. and no decimals, or 4 sig. figs. if the writer forgot to write the decimal point.
o 3710. has 4 sig. figs. and no decimals
o 3.710E3 has 4 sig. figs. and no decimals
o 3.71E-5 has 3 sig. figs. and 7 decimals
 The simple rule is: leading zeros do not count as sig. figs.; trailing zeros count (provided a decimal point is present).
 The question that always arises is: “How many significant figures should I carry through my calculations?”
o Your final answer should contain all certain figures and the first doubtful one. Recall the example of measuring your lab bench for the distinction between certain and doubtful. However, in order to report the answer in these terms, you must carry one extra digit through the calculations and round off at the end to get the final answer. For example, suppose you had the following set of measurements: 1.2533, 1.2535, 1.2536, 1.2534, and 1.2535m. The average of these measurements is: 1.25346m. Now there are 5 significant figures in each of the measurements, but 6 digits in the average. The sixth digit represents more information than was contained in the measurements. We can’t regard this sixth digit as significant, since where could this additional information have come from? (Think about this question for a moment. This concept of uncertainty and significant figures is new to you and thinking about this topic in terms of the available information may be helpful.) However, we will carry this sixth digit through our calculations since there may well be other calculations, like this average, that yield one more digit than our data. We will round off to five significant figures (all the information in the data) when we calculate the final result of our experiment. A simple rule of thumb is: “Carry one more figure than contained in the data.” When in doubt, ask your instructor.
o When multiplying or dividing two numbers with different numbers of significant figures, the number of significant figures in the answer is the smallest number of significant figures in any of the numbers you are combining.
o When adding or subtracting two numbers, it is the number of decimals that counts. For example: 132.41 + 9.8 = 142.2. Although 132.41 is reliably known to the nearest hundredth (0.01), 9.8 is only known to the nearest tenth (0.1). Thus, their sum is only reliably known to the nearest tenth. This is not really an easy concept to get your mind around. Ask for clarification in class.
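The “carry one extra digit, round at the end” rule can be sketched in code, using the bench-width average from the text (the helper function name is my own):

```python
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    return round(x, sig - 1 - floor(log10(abs(x))))

# The five measurements from the text, each with 5 significant figures:
measurements = [1.2533, 1.2535, 1.2536, 1.2534, 1.2535]
avg = sum(measurements) / len(measurements)   # 1.25346 -- one digit too many
print(round_sig(avg, 5))                      # 1.2535, back to 5 sig. figs.
```

The sixth digit of the average is carried through intermediate calculations and only dropped when reporting the final result.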
Experiment Two
 There are three parts to this lab. In Part IA, we will measure the length of our lab benches. We will learn how to estimate, and write, the uncertainty in an individual measurement that is not repeated and how to propagate the uncertainties in several such measurements to determine the uncertainty of their sum. In Part IB, we will measure the width of our lab benches. We will learn how to determine a “best estimate” of the value of a repeated measurement and how to estimate the uncertainty associated with repeated measurements of the same thing. In Part II, we will calculate the density of a block of metal and learn how to determine the uncertainty in a product of several measurements and one way to compare experimental results to an accepted value. In Part III, we will try to determine the acceleration due to gravity by dropping a tennis ball from a known height. We will learn how to calculate the uncertainty in a series of measurements and compare our experimental results with a known standard. I.e., we will determine the accuracy of our results.
 Purpose:
o Determine uncertainty when each measurement is made only once.
o Learn how to propagate uncertainties when adding measurements.
 Theoretical background
o Concept of endpoint error
o Simple rule for addition.
 You will use a meter stick to measure the length of your lab bench. Because the bench is longer than the meter stick, you will have to use the meter stick several times and keep track of the location of the ends of the meter stick each time.
 Sources of uncertainty:
o Determining the end points of the meter stick
o Parallax
o Moving the meter stick to the correct point between each pair of measurements
 Measurement tips.
o Because the ends of the meter sticks may be damaged, do not measure using the ends. You may use the 1 cm and 99 cm marks or the 2 cm and 98 cm marks (for example).
o You will need to keep track of the location of the 1 cm and 99 cm (or 2 cm and 98 cm) marks you use. You may use the edge of a piece of masking tape placed on the table, or make a small pencil mark on the table, or use the edge of a piece of paper placed on the table, or use some other method you devise.
o Parallax. If you place the meter stick flat on the table, the length you measure will depend on the angle between your line of sight and a line perpendicular to the table at the point you want to measure. This is called parallax. Try it. Looking down on the meter stick lying flat with the scale on top, move your head from side to side along the meter stick and see how the mark on the stick that aligns with your measured point changes as you move your head. If you place the meter stick on edge so that the marks on the stick actually touch the surface of the table, you can minimize parallax.
 Uncertainty. Each time you use the meter stick in this measurement process you will have uncertainties in your readings at both ends of the meter stick. These are called endpoint uncertainties. Note that each measurement is made only one time and that you determine the uncertainty by estimating how well you can read the meter stick. In Part IB, we will see how to determine uncertainty when you make multiple measurements of the same thing.
o A comment on notation. In this lab, we will be deliberately a little sloppy in our notation. Conventionally, σ (the lower case Greek letter sigma) is used to represent the standard deviation calculated from a series of measurements of the same thing. We will use σ for all uncertainties, even with only one measurement when we must estimate the uncertainty.
Consider a situation where you make only one measurement with the meter stick. We will call the two ends of the meter stick A and B. We will call the uncertainty associated with end A, σ_{A}, and the uncertainty associated with end B, σ_{B}.
o Nature of the precision uncertainty. Think about the example above for measuring your lab bench. We discussed the need to estimate the last digit in the measurement. You are looking at the marks on the meter stick, comparing them to some point on the table, and estimating how close the point on the table falls to some mark on the stick. It seems reasonable to believe that you would sometimes estimate the last digit a bit larger than the “true” value and sometimes a bit smaller. The point I am making is that there is no obvious reason to believe your estimates would be all too high or all too low. Thus it seems reasonable that your measurement uncertainties will be random. Random uncertainties contribute to precision uncertainty. That is, the random uncertainties cause measurements to be slightly different from each other, by (usually small) random amounts and in random directions (some too large and some too small). The result is that a set of measurements will be distributed about their average value. The width of this distribution is a measure of the precision of the measurements. A key point here is that if there are no accuracy uncertainties, then the average of our measurements should be very close to the “true” or accepted value. However, if the width of our distribution of measurements is large, then the results of our experiment are essentially meaningless. You will see this in Part III of this experiment when we attempt to measure the acceleration due to gravity, g, by dropping a tennis ball from a consistent height.
o Nature of accuracy uncertainty. It also seems possible that there could be a systematic (i.e., all measurements are too high or all too low) uncertainty introduced by such factors as:
§ The difference between the meter sticks and the known standard for the meter. I.e., our meter sticks may not agree exactly with the international standard for the meter. If you think about it for a minute, you will realize that the actual length of the meter stick will vary with:
· Temperature
· Amount of moisture in the wood
· Whether the stick is warped
§ Consistent mistakes in experimental procedure.
§ Consistent mistakes in use of the measuring equipment.
One key point is that a systematic uncertainty is consistent: consistently off in one direction or the other.
A second key point: If there is systematic uncertainty, the average of our measurements will not be close to the “true” value.
In this class, we will assume, for simplicity, systematic uncertainty is negligible.
How do we estimate the uncertainty in the measurement made using the meter stick one time? One obvious option is to simply add the two endpoint uncertainties. This approach is straightforward and has the elegance of simplicity. However, if we simply add the endpoint uncertainties to get the total uncertainty for our single measurement, we are overestimating the uncertainty. To see this, think about the endpoint uncertainties, σ_{A} and σ_{B}. Suppose σ_{A} is ± .001 m and σ_{B} is ± .001 m. There is no particular reason to think we could not read both ends of the meter stick with equal precision. Since we assume that systematic uncertainties are negligible, all our uncertainties are precision uncertainties and are, therefore, random in nature. Thus any given measurement is as likely to be too large as too small. If we simply add the uncertainties to get σ_{TOTAL} = ± .002 m, we overestimate the uncertainty. Think about it. Suppose we are making the measurement starting with the left end of the meter stick; call this A. Then we are assuming that any value we read on the meter stick at A that happens to be a bit too small will always be associated with a measurement at the other end of the meter stick, B, that is a bit too large. That is the “+” part of the ± 0.002 m. Similarly, we are assuming that any measurement at A that is a bit too large will always be associated with a measurement at B that is a bit too small. That is the “−” part of the ± 0.002 m. (Think about this until you understand it. Drawing a sketch may help. Ask for clarification during lecture if you are not clear about this point.) On the other hand, if, as we assume, the measurements at each end are independent of each other and randomly distributed, then if the measurement at one end is too large, the measurement at the other end is equally likely to be too large or too small, and frequently a “too large” measurement at one end will combine with a “too small” measurement at the other end, and vice versa.
We need to combine endpoint errors in a way that is less prone to overestimation. The method conventionally chosen is called addition in quadrature, or taking the square root of the sum of the squares, and is shown by
σ_{L1} = (σ_{A}^{2} + σ_{B}^{2})^{1/2}. (9)
In this experiment, we will assume the two endpoint errors for a single measurement are equal. If σ_{A} = σ_{B}, then

σ_{L1} = (σ_{A}^{2} + σ_{B}^{2})^{1/2} = (2σ_{A}^{2})^{1/2} = √2 · σ_{A} (10)
o This gives the total uncertainty for one use of the meter stick. When measuring the length of the table, you will need to use the meter stick several times, and there will be a total uncertainty, σ_{TOTAL}, associated with all uses. For each measurement, you will record the measurement along with the sum-in-quadrature of the two endpoint uncertainties in Table 2.1 in your manual. L_{TOTAL} is thus the sum of the length measurements. The uncertainty in L_{TOTAL}, σ_{TOTAL}, is determined by addition in quadrature of the uncertainties (σ_{Li}) for each measurement, or:

σ_{TOTAL} = (σ_{L1}^{2} + σ_{L2}^{2} + σ_{L3}^{2} + σ_{L4}^{2})^{1/2} (11)
 Record your data. Record each measurement and its associated uncertainty (computed by addition in quadrature of the two endpoint uncertainties) in Table 2.1. Calculate L_{TOTAL} and σ_{TOTAL} and record these values in the table. Be sure to show your work in calculating each σ_{Li} and σ_{TOTAL}.
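Part IA's bookkeeping can be sketched as follows (all readings here are hypothetical: four placements of the meter stick, each with equal endpoint uncertainties):

```python
from math import sqrt

sigma_end = 0.001               # m, estimated uncertainty at each end of the stick
sigma_L = sqrt(2) * sigma_end   # eq. (10): uncertainty for one placement

# Hypothetical individual length readings from four placements of the stick:
lengths = [0.980, 0.980, 0.980, 0.310]   # m
L_total = sum(lengths)

# eq. (11): combine the per-placement uncertainties in quadrature
sigma_total = sqrt(len(lengths) * sigma_L**2)
print(round(L_total, 3))      # 3.25
print(round(sigma_total, 4))  # 0.0028
```

Notice that the quadrature total (about 0.0028 m) is smaller than the naive sum of the eight endpoint uncertainties (0.008 m), for the reasons discussed above.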
 Purpose:
o Determine the best estimate of the result of a set of repeated measurements.
o Determine uncertainty when a measurement is repeated several times.
 Theoretical background:
o Concept of average as a best representation of repeated measurements.
o Concept of standard deviation as an estimate of the width of the distribution of a set of repeated measurements.
 Step 1. Measurement.
o Make ten measurements of the lab bench width using the 2-meter stick.
o Make the measurements at different places along the length of the table.
o Do not use the ends of the 2meter stick.
o Record your ten measurements of width in Table 2.2. Each partner should make five measurements. You must complete the measurement of ten widths before you calculate the average width.
 Step 2. Determine the average of your measurements. We choose the average as the best single representation of our set of repeated measurements. There are other possible choices such as the median, but it is customary to choose the mean.
o Record the sum of the ten measurements in Table 2.2.
o Calculate and record the average of the ten measurements.
§ The average is your estimate of the value that best represents the ten measurements.
 Step 3. Calculate the standard deviation.
o Calculate the values of the differences between each measurement (subscript “i”) and the average, and record them in Table 2.2.
o Square each difference and record the squares in Table 2.2.
o Compute the average of these squares of the differences.
§ Calculate sum of squares of differences and record in Table 2.2.
§ Divide this sum by the number of squares calculated.
o Take the square root of the average of the squared differences. Do not use the standard deviation routine in Excel or in your calculator; those routines typically divide by N − 1 (the “sample” standard deviation) rather than by N, so they will not match equation (12). Use equation (12) and show your work.
σ = [(1/N) Σ_{i=1}^{N} (x_{i} − x_{avg})^{2}]^{1/2} (12)
 Step 4. Fill in the blanks beneath Table 2.2. Percent uncertainty is defined as
% unc. = [(uncertainty)/(measurement)] x 100 (13)
 Step 5. Fill in Table 2.3. Absolute or actual uncertainty is the value you estimate or calculate. It always has the same dimensions as the measurement.
 Purpose:
o To see how to propagate uncertainties in a product
o To determine the accuracy of the results. I.e., we will compare the results of our measurements to the accepted value.
 Hypothesis 1: You can accurately determine the density of a metal block.
 Reading a vernier caliper. There are two scales on a vernier caliper. The long one is on the caliper body and the short one is on the vernier (the small portion that slides).
(Insert photo of vernier caliper with callouts or notes.)
To read the vernier caliper, place your thumb on the knurled, spring-loaded trigger on the vernier. Press the trigger, slide the vernier, and open the jaws of the caliper. Place the object to be measured snugly between the jaws. When you release the trigger, the vernier is locked in place. Read the millimeter mark on the caliper (longer scale) to the left of, but closest to, the zero mark on the vernier (shorter scale). This will be your first approximation to the measurement, e.g., 1.2 cm. Determine the next two significant figures by moving your eye along the vernier scale until you find a line on the vernier scale that exactly aligns with a line on the caliper scale. Read the next two significant figures off the scale on the vernier. Your final measurement will be something like 1.230 cm or 1.235 cm, depending on whether the “3” line or the “3.5” line on the vernier scale exactly aligns with a line on the main caliper scale.
 Step 1. Measurement.
o Use the vernier caliper to measure the length, width, and height of your metal block and record the values in Table 2.4 in your manual.
o Estimate the uncertainty of each measurement and enter the values in the table. NOTE: Actual uncertainty is your estimate of the uncertainty in each measurement. It has the same units as the basic measurement. The manufacturer states that the precision uncertainty in reading the vernier caliper is 1/20th of a millimeter. Hence, your uncertainty estimate will be approximately 0.00005 m.
o Calculate the percent uncertainty in each measurement by dividing the estimated uncertainty by the measurement and multiplying by 100. E.g., for 0.01635 ± 0.00005 m, we have (0.00005/0.01635) x 100 = 0.306%. Record the values in Table 2.4.
 Step 2. Calculate the volume and record it in the table.
V = LWH (14)
 Step 3. Calculate the uncertainty in the volume.
o Apply Simple Rule 2 to V = LWH.
σ_{V}/V = σ_{L}/L + σ_{W}/W + σ_{H}/H (15)
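A minimal sketch of Steps 2 and 3, assuming Simple Rule 2 means that the fractional uncertainties of a product add (the block dimensions below are hypothetical, not data from the manual):

```python
# Hypothetical caliper measurements of the block (m), each +/- 0.00005 m
L, W, H = 0.04235, 0.02540, 0.01635
sigma = 0.00005                      # caliper precision, 1/20 mm in metres

V = L * W * H                        # equation (14)

# Simple Rule 2 (as used here): fractional uncertainties of a
# product add, so sigma_V/V = sigma/L + sigma/W + sigma/H
frac_V = sigma / L + sigma / W + sigma / H
sigma_V = frac_V * V

print(f"V = {V:.3e} m^3 +/- {sigma_V:.1e} m^3 ({frac_V * 100:.2f}% unc.)")
```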
 Step 4. Measure the mass of the block using one of the balance scales.
o Use the small knurled knob beneath the scale tray to align the mark on the balance beam with the reference mark on the vertical scale before weighing your block.
o Calculate the density, D = M/V, and the uncertainty in the density.
o The instructor will give you the uncertainty in M, σ_{M}.
σ_{D}/D = σ_{M}/M + σ_{V}/V (16)
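The same rule carries over to the quotient D = M/V: the fractional uncertainties add. A sketch with hypothetical values (in the lab, σ_M comes from the instructor, and V and σ_V come from your volume calculation):

```python
# Hypothetical values for illustration only
M, sigma_M = 0.04765, 0.00005        # mass of block (kg) and its uncertainty
V, sigma_V = 1.759e-5, 1.1e-7        # volume (m^3) and its uncertainty

D = M / V                            # density, kg/m^3

# Fractional uncertainties of a quotient add, just as for a product
frac_D = sigma_M / M + sigma_V / V
sigma_D = frac_D * D

print(f"D = {D:.0f} +/- {sigma_D:.0f} kg/m^3")
```

With these made-up numbers the result lands near 2700 kg/m^3, the accepted density of aluminum, which is the kind of comparison Step 5 asks for.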
 Step 5. Compare the experimental and accepted values.
o Look up the accepted value of the density of your block in the Chemical Rubber Company (CRC) Handbook of Chemistry and Physics.
o Calculate the percent difference between your value for density, D, and the accepted value. NOTE: it is important you memorize the order of the measured and accepted values. You will use this formula frequently in this course and you will need to know it for both exams.
Percent difference = ((Measured – Accepted)/Accepted) x 100 (17)
NOTE: The percent difference can be negative.

Purpose:
o To determine the accuracy of your experimental results by comparing them to a known standard.
o To see how a poor experimental design can result in a huge range of data values, hence low precision, though accuracy may be reasonably high.

Hypothesis 2: You can accurately measure the acceleration due to gravity, g.
 In Part I, we had no standard to which we could compare our results and determine their accuracy. In part II, we had an accepted value of the density and we determined whether our experimental value agreed with the accepted value. In Part III, we will repeat a measurement several times and calculate the average and standard deviation. We will have an accepted value for the acceleration due to gravity and we will determine whether our experimental value is in agreement with the accepted value. If not, we will look at aspects of the experiment that may have contributed to inaccurate results.
 The experiment consists of dropping a tennis ball a known distance (two meters, if you can reach that high), using the 2-meter stick to measure the height. You are asked to measure the drop distance and drop time for your tennis ball and record these values in Table 2.5. DO NOT STAND ON A STOOL. TENNIS BALLS BOUNCE, STUDENTS DO NOT. If neither partner can reach a height of 2 meters, use a lower starting height. Your starting height is Y. Which starting height you choose is not important, but whatever height you select must be used for all measurements.
 One partner can hold the ball at the agreed starting height. (What part of the ball should be at the 2-meter mark: top, middle, or bottom? Why does this matter in reducing the uncertainty in our experimental results?) The other partner can start and stop the stopwatch. Alternatively, one partner can both drop the ball and operate the watch. Each partner should take five measurements. However you decide to do the experiment, take turns.
 Step 1. Measure drop time and drop distance and record the values in Table 2.5.
 Step 2. Calculate the value of g_{i} for each trial using the equation Y = 0.5 g_{i} t_{i}^{2}, i.e., g_{i} = 2Y/t_{i}^{2}, and record the values in the table.
 Step 3. Calculate the average of the g_{i} values, ḡ, and then calculate the difference between each value and the average, and the square of this difference, for each trial, and record them in the table.
 Step 4. Calculate the standard deviation of the g_{i} values and record the average value of g_{i} and the standard deviation beneath Table 2.5.
 Step 5. Calculate the percent uncertainty in your experimental value of g:
% unc. = (σ/ḡ) x 100 (18)
where ḡ = g_{average}, and record the result below Table 2.5.
 Step 6. Calculate the percent difference between your average value, ḡ, and the accepted value:
Percent difference = ((Measured – Accepted)/Accepted) x 100 (19)
NOTE: Percent difference can be negative.
 How do you know if the experimental value agrees with the accepted value? If the percent uncertainty in your experimental value is larger than the percent difference between the experimental and accepted values, then your experimental result agrees with the accepted value. Think about this relationship. If you do not understand, ask for clarification of the point in class.
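Steps 2 through 6 and the agreement test can be sketched end to end. The drop height and stopwatch times below are hypothetical numbers, not real data:

```python
import math

Y = 2.0                                  # drop height (m), hypothetical
t = [0.58, 0.66, 0.71, 0.60, 0.69]       # hypothetical stopwatch times (s)
g_accepted = 9.81                        # accepted value (m/s^2)

# Step 2: g_i from Y = 0.5 * g_i * t_i^2  ->  g_i = 2Y / t_i^2
g = [2 * Y / ti ** 2 for ti in t]

# Steps 3-4: average and standard deviation (equation (12), divide by N)
n = len(g)
g_avg = sum(g) / n
sigma = math.sqrt(sum((gi - g_avg) ** 2 for gi in g) / n)

# Step 5: percent uncertainty in the experimental value
pct_unc = sigma / g_avg * 100

# Step 6: percent difference from the accepted value (may be negative)
pct_diff = (g_avg - g_accepted) / g_accepted * 100

# Agreement test: |percent difference| smaller than percent uncertainty
agrees = abs(pct_diff) < pct_unc
print(f"g = {g_avg:.2f} +/- {sigma:.2f} m/s^2, "
      f"{pct_unc:.0f}% unc., {pct_diff:+.1f}% diff., agrees: {agrees}")
```

Notice how these made-up times give a percent uncertainty in the 15-20% range while the percent difference is only about 1%: low precision, but an accurate result that agrees with the accepted value.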
 Finally, answer the questions in the manual covering all parts of the experiment.
 One key point of Part III is to see how we determine whether the experimental result agrees with the accepted answer. The accepted value of the acceleration due to gravity given in your manual has no uncertainty specified. This is not strictly correct because, as we saw earlier, no physical constant is known without uncertainty. However, to keep things simple, we will assume for this experiment that the uncertainty in the accepted value is negligible compared to the precision uncertainty in the experimental result. When you see your results, you will agree that this was a reasonable assumption. You will be asked to compute the percent difference between the measured value and the accepted value of the acceleration due to gravity. You will also be asked to compute the percent uncertainty in your experimental result. If the percent uncertainty in your result is larger than the percent difference between your result and the accepted value, then your experimental result is in agreement with the accepted value; if not, it is not. Think about these two percentages for a moment. One is the percent difference between the experimental and accepted values. The other is the percent uncertainty in the experimental result. Do you see why the former must be smaller than the latter for the experimental result to agree with the accepted value? If not, ask for clarification.
o Try sketching a normal curve. Draw a vertical line representing the average. Draw two more vertical lines representing the (+ or −) uncertainty associated with this average. Sketch another vertical line between the uncertainty line and the average. Let this line represent the accepted value. Since the accepted value falls within the range of uncertainty of the measured value, our measured value agrees with the accepted value. Note that the difference between the average and the accepted value is less than the difference between the average plus uncertainty and the average. It should be obvious from inspection that the percent difference between the measured and accepted value
((average – accepted)/accepted) x 100 (%) (20)
o is also smaller than the percent uncertainty in the average
(σ/average) x 100 (%) (21)
Look at your sketch and the various vertical lines and think about it.
o Try the same sketch again, but this time place the vertical line representing the accepted value outside the uncertainty range and repeat the above thought process.
 This is a concept we will deal with numerous times in this class and it is important that you understand. Being able to determine whether an experimental result is in agreement with an accepted value is the point of all our treatment of uncertainty.
 An additional teaching point from Part III is that a poorly designed experiment, which this part was, can give you a very large spread in the measurements. Typical percent uncertainties for this part are in the range of 20-30%. Think about it: if the range of your data from one sigma above the mean to one sigma below is 60% of the size of your mean, the result of your measurement is nearly meaningless. The fact that the accepted value falls into a range that big gives you no confidence whatever in the outcome of your experiment.