Measurements and Error Analysis

"It is better to exist roughly right than precisely wrong." — Alan Greenspan

The Uncertainty of Measurements

Some numerical statements are exact: Mary has 3 brothers, and 2 + 2 = 4. However, all measurements have some degree of uncertainty that may come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. The complete statement of a measured value should include an estimate of the level of confidence associated with the value. Properly reporting an experimental result along with its uncertainty allows other people to make judgments about the quality of the experiment, and it facilitates meaningful comparisons with other similar values or a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or results from other experiments?" This question is fundamental for deciding if a scientific hypothesis is confirmed or refuted. When we make a measurement, we generally assume that some exact or true value exists based on how we define what is being measured. While we may never know this true value exactly, we attempt to find this ideal quantity to the best of our ability with the time and resources available. As we make measurements by different methods, or even when making multiple measurements using the same method, we may obtain slightly different results. So how do we report our findings for our best estimate of this elusive true value? The most common way to show the range of values that we believe includes the true value is:

( 1 )

measurement = (best estimate ± uncertainty) units

Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You do not want to jeopardize your friendship, so you want to get an accurate mass of the ring in order to charge a fair market price. You estimate the mass to be between 10 and 20 grams from how heavy it feels in your hand, but this is not a very precise estimate. After some searching, you find an electronic balance that gives a mass reading of 17.43 grams. While this measurement is much more precise than the original estimate, how do you know that it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the digital display of the balance is limited to two decimal places, you could report the mass as

m = 17.43 ± 0.01 g.

Suppose you use the same electronic balance and obtain several more readings: 17.46 g, 17.42 g, 17.44 g, so that the average mass appears to be in the range of

17.44 ± 0.02 g.

By now you may feel confident that you know the mass of this ring to the nearest hundredth of a gram, but how do you know that the true value definitely lies between 17.43 g and 17.45 g? Since you want to be honest, you decide to use another balance that gives a reading of 17.22 g. This value is clearly below the range of values found on the first balance, and under normal circumstances, you might not care, but you want to be fair to your friend. So what do you do now? The answer lies in knowing something about the accuracy of each instrument. To help answer these questions, we should first define the terms accuracy and precision:

Accuracy is the closeness of agreement between a measured value and a true or accepted value. Measurement error is the amount of inaccuracy.

Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement among independent measurements of the same quantity; also the reliability or reproducibility of the result.

The uncertainty estimate associated with a measurement should account for both the accuracy and precision of the measurement.

Note: Unfortunately the terms error and uncertainty are often used interchangeably to describe both imprecision and inaccuracy. This usage is so common that it is impossible to avoid entirely. Whenever you see these terms, make sure you understand whether they refer to accuracy or precision, or both. Notice that in order to determine the accuracy of a particular measurement, we have to know the ideal, true value. Sometimes we have a "textbook" measured value, which is well known, and we assume that this is our "ideal" value, and use it to estimate the accuracy of our result. Other times we know a theoretical value, which is calculated from basic principles, and this also may be taken as an "ideal" value. But physics is an empirical science, which means that the theory must be validated by experiment, and not the other way around. We can escape these difficulties and retain a useful definition of accuracy by assuming that, even when we do not know the true value, we can rely on the best available accepted value with which to compare our experimental value. For our example with the gold ring, there is no accepted value with which to compare, and both measured values have the same precision, so we have no reason to believe one more than the other. We could look up the accuracy specifications for each balance as provided by the manufacturer (the Appendix at the end of this lab manual contains accuracy data for most instruments you will use), but the best way to assess the accuracy of a measurement is to compare with a known standard. For this situation, it may be possible to calibrate the balances with a standard mass that is accurate within a narrow tolerance and is traceable to a primary mass standard at the National Institute of Standards and Technology (NIST). Calibrating the balances should eliminate the discrepancy between the readings and provide a more accurate mass measurement. Precision is often reported quantitatively by using relative or fractional uncertainty:

( 2 )

Relative Uncertainty = uncertainty / measured quantity

Example:

m = 75.5 ± 0.5 g

has a fractional uncertainty of:

0.5 g / 75.5 g = 0.006 = 0.7%.

Accuracy is often reported quantitatively by using relative error:

( 3 )

Relative Error = (measured value − expected value) / expected value

If the expected value for m is 80.0 g, then the relative error is:

(75.5 − 80.0) / 80.0 = −0.056 = −5.6%

Note: The minus sign indicates that the measured value is less than the expected value.
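
These two ratios are easy to compute directly. Here is a minimal sketch in Python (the variable names are ours, purely illustrative):

```python
# Relative uncertainty (precision) and relative error (accuracy)
# for the mass example above: m = 75.5 ± 0.5 g, expected value 80.0 g.

measured = 75.5      # g
uncertainty = 0.5    # g
expected = 80.0      # g

relative_uncertainty = uncertainty / measured          # ~0.007 -> 0.7%
relative_error = (measured - expected) / expected      # -0.056 -> -5.6%

print(f"relative uncertainty = {relative_uncertainty:.1%}")
print(f"relative error       = {relative_error:.1%}")
```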

When analyzing experimental data, it is important that you understand the difference between precision and accuracy. Precision indicates the quality of the measurement, without any guarantee that the measurement is "correct." Accuracy, on the other hand, assumes that there is an ideal value, and tells how far your answer is from that ideal, "right" answer. These concepts are directly related to random and systematic measurement errors.

Types of Errors

Measurement errors may be classified as either random or systematic, depending on how the measurement was obtained (an instrument could cause a random error in one situation and a systematic error in another).

Random errors are statistical fluctuations (in either direction) in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations (see standard error).

Systematic errors are reproducible inaccuracies that are consistently in the same direction. These errors are difficult to detect and cannot be analyzed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias. Unlike random errors, systematic errors cannot be detected or reduced by increasing the number of observations.

When making careful measurements, our goal is to reduce as many sources of error as possible and to keep track of those errors that we cannot eliminate. It is useful to know the types of errors that may occur, so that we may recognize them when they arise. Common sources of error in physics laboratory experiments:

• Incomplete definition (may be systematic or random): One reason that it is impossible to make exact measurements is that the measurement is not always clearly defined. For example, if two different people measure the length of the same string, they would probably get different results because each person may stretch the string with a different tension. The best way to minimize definition errors is to carefully consider and specify the conditions that could affect the measurement.

• Failure to account for a factor (usually systematic): The most challenging part of designing an experiment is trying to control or account for all possible factors except the one independent variable that is being analyzed. For instance, you may inadvertently ignore air resistance when measuring free-fall acceleration, or you may neglect to account for the effect of the Earth's magnetic field when measuring the field near a small magnet. The best way to account for these sources of error is to brainstorm with your peers about all the factors that could possibly affect your result. This brainstorm should be done before beginning the experiment in order to plan and account for the confounding factors before taking data. Sometimes a correction can be applied to a result after taking data to account for an error that was not detected earlier.

• Environmental factors (systematic or random): Be aware of errors introduced by your immediate working environment. You may need to take account for or protect your experiment from vibrations, drafts, changes in temperature, and electronic noise or other effects from nearby apparatus.

• Instrument resolution (random): All instruments have finite precision that limits the ability to resolve small measurement differences. For instance, a meter stick cannot be used to distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case). One of the best ways to obtain more precise measurements is to use a null difference method instead of measuring a quantity directly. Null or balance methods involve using instrumentation to measure the difference between two similar quantities, one of which is known very accurately and is adjustable. The adjustable reference quantity is varied until the difference is reduced to zero. The two quantities are then balanced and the magnitude of the unknown quantity can be found by comparison with a measurement standard. With this method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.

• Calibration (systematic): Whenever possible, the calibration of an instrument should be checked before taking data. If a calibration standard is not available, the accuracy of the instrument should be checked by comparing with another instrument that is at least as precise, or by consulting the technical data provided by the manufacturer. Calibration errors are usually linear (measured as a fraction of the full scale reading), so that larger values result in greater absolute errors.

• Zero offset (systematic): When making a measurement with a micrometer caliper, electronic balance, or electrical meter, always check the zero reading first. Re-zero the instrument if possible, or at least measure and record the zero offset so that readings can be corrected later. It is also a good idea to check the zero reading throughout the experiment. Failure to zero a device will result in a constant error that is more significant for smaller measured values than for larger ones.

• Physical variations (random): It is always wise to obtain multiple measurements over the widest range possible. Doing so often reveals variations that might otherwise go undetected. These variations may call for closer examination, or they may be combined to find an average value.

• Parallax (systematic or random): This error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or low (some analog meters have mirrors to help with this alignment).

• Instrument drift (systematic): Most electronic instruments have readings that drift over time. The amount of drift is generally not a concern, but occasionally this source of error can be significant.

• Lag time and hysteresis (systematic): Some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will result in a measurement that is too high or low. A common example is taking temperature readings with a thermometer that has not reached thermal equilibrium with its environment. A similar effect is hysteresis, where the instrument readings lag behind and appear to have a "memory" effect, as data are taken sequentially moving up or down through a range of values. Hysteresis is most commonly associated with materials that become magnetized when a changing magnetic field is applied.

• Personal errors come from carelessness, poor technique, or bias on the part of the experimenter. The experimenter may measure incorrectly, or may use poor technique in taking a measurement, or may introduce a bias into measurements by expecting (and inadvertently forcing) the results to agree with the expected outcome.

Gross personal errors, sometimes called mistakes or blunders, should be avoided and corrected if discovered. As a rule, personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The term human error should also be avoided in error analysis discussions because it is too general to be useful.

Estimating Experimental Uncertainty for a Single Measurement

Any measurement you make will have some uncertainty associated with it, no matter the precision of your measuring tool. So how do you determine and report this uncertainty?

The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.

For example, if you are trying to use a meter stick to measure the diameter of a tennis ball, the uncertainty might be

± 5 mm,

but if you used a Vernier caliper, the uncertainty could be reduced to maybe

± 2 mm.

The limiting factor with the meter stick is parallax, while the second case is limited by ambiguity in the definition of the tennis ball's diameter (it's fuzzy!). In both of these cases, the uncertainty is greater than the smallest divisions marked on the measuring tool (likely 1 mm and 0.05 mm respectively). Unfortunately, there is no general rule for determining the uncertainty in all measurements. The experimenter is the one who can best evaluate and quantify the uncertainty of a measurement based on all the possible factors that affect the result. Therefore, the person making the measurement has the obligation to make the best judgment possible and report the uncertainty in a way that clearly explains what the uncertainty represents:

( 4 )

Measurement = (measured value ± standard uncertainty) units

where the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).

Example: Diameter of tennis ball =

6.7 ± 0.2 cm.

Estimating Uncertainty in Repeated Measurements

Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of 5 measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.

( 5 )

Average (mean) = (x_1 + x_2 + ... + x_N) / N

For this situation, the best estimate of the period is the average, or mean.

Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers, which should be examined to determine if they are bad data points that should be omitted from the average or valid measurements that require further investigation). Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.
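
As a quick illustration, the mean of the five pendulum-period readings quoted above can be computed with Python's standard library (a sketch, not part of the original manual):

```python
import statistics

# Five repeated period measurements from the text, in seconds
periods = [0.46, 0.44, 0.45, 0.44, 0.41]

best_estimate = statistics.mean(periods)   # equation (5)
print(f"T = {best_estimate:.2f} s")        # 0.44 s
```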

Consider, as another example, the measurement of the width of a piece of paper using a meter stick. Being careful to keep the meter stick parallel to the edge of the paper (to avoid a systematic error which would cause the measured value to be consistently higher than the correct value), the width of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table. Note that the last digit is only a rough estimate, since it is difficult to read a meter stick to the nearest tenth of a millimeter (0.01 cm).

( 6 )

Average = (sum of observed widths) / (no. of observations) = 31.19 cm

This average is the best available estimate of the width of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value? One way to express the variation among the measurements is to use the average deviation. This statistic tells us on average (with 50% confidence) how much the individual measurements vary from the mean.

( 7 )

d = ( |x_1 − x̄| + |x_2 − x̄| + ... + |x_N − x̄| ) / N

However, the standard deviation is the most common way to characterize the spread of a data set. The standard deviation is always slightly greater than the average deviation, and is used because of its association with the normal distribution that is frequently encountered in statistical analyses.

Standard Deviation

To calculate the standard deviation for a sample of N measurements:

  • 1

    Sum all the measurements and divide by N to get the average, or mean.
  • 2

    Now, subtract this average from each of the N measurements to obtain N "deviations".
  • 3

    Square each of these N deviations and add them all up.
  • 4

    Divide this result by (N − 1) and take the square root.

We can write out the formula for the standard deviation as follows. Let the N measurements be called x_1, x_2, ..., x_N. Let the average of the N values be called x̄. Then each deviation is given by

δx_i = x_i − x̄, for i = 1, 2, ..., N.

The standard deviation is:

( 8 )

s = √[ (δx_1² + δx_2² + ... + δx_N²) / (N − 1) ]

In our previous example, the average width x̄ is 31.19 cm. The deviations are 0.14, 0.04, 0.07, 0.17, and 0.01 cm, so the average deviation is:

d = 0.086 cm.

The standard deviation is:

s = √[ ((0.14)² + (0.04)² + (0.07)² + (0.17)² + (0.01)²) / (5 − 1) ] = 0.12 cm.
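
The same numbers can be reproduced in a few lines of Python. Note that the individual widths below are hypothetical values chosen only to be consistent with the mean (31.19 cm) and the deviations quoted above, since the manual's data table is not reproduced here:

```python
import statistics

# Hypothetical paper widths (cm), consistent with the quoted mean and
# deviations (0.14, 0.04, 0.07, 0.17, 0.01 cm about 31.19 cm).
widths = [31.33, 31.15, 31.26, 31.02, 31.20]

mean = statistics.mean(widths)
avg_dev = sum(abs(x - mean) for x in widths) / len(widths)   # equation (7)
std_dev = statistics.stdev(widths)                           # equation (8), divides by N - 1

print(f"mean = {mean:.2f} cm")     # 31.19
print(f"d    = {avg_dev:.3f} cm")  # ~0.086
print(f"s    = {std_dev:.2f} cm")  # ~0.12
```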

The significance of the standard deviation is this: if you now make one more measurement using the same meter stick, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.12 cm of the estimated average of 31.19 cm. In fact, it is reasonable to use the standard deviation as the uncertainty associated with this single new measurement. However, the uncertainty of the average value is the standard deviation of the mean, which is always less than the standard deviation (see next section). Consider an example where 100 measurements of a quantity were made. The average or mean value was 10.5 and the standard deviation was s = 1.83. The figure below is a histogram of the 100 measurements, which shows how often a certain range of values was measured. For example, in 20 of the measurements, the value was in the range 9.5 to 10.5, and most of the readings were close to the mean value of 10.5. The standard deviation s for this set of measurements is roughly how far from the average value most of the readings fell. For a large enough sample, approximately 68% of the readings will be within one standard deviation of the mean value, 95% of the readings will be in the interval

x̄ ± 2s,

and nearly all (99.7%) of readings will lie within 3 standard deviations from the mean. The smooth curve superimposed on the histogram is the gaussian or normal distribution predicted by theory for measurements involving random errors. As more and more measurements are made, the histogram will more closely follow the bell-shaped gaussian curve, but the standard deviation of the distribution will remain approximately the same.

Figure 1: Histogram of the 100 measurements, with the gaussian (normal) curve superimposed.

Standard Deviation of the Mean (Standard Error)

When we report the average value of N measurements, the uncertainty we should associate with this average value is the standard deviation of the mean, often called the standard error (SE).

( 9 )

σ_x̄ = s / √N

The standard error is smaller than the standard deviation by a factor of 1/√N.

This reflects the fact that we expect the uncertainty of the average value to get smaller when we use a larger number of measurements, N. In the previous example, we find the standard error is 0.05 cm, where we have divided the standard deviation of 0.12 by √5.

The final result should then be reported as:

Average paper width = 31.19 ± 0.05 cm.
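
Continuing the sketch above, the standard error follows directly from equation (9):

```python
import math

s = 0.12   # standard deviation from the example above (cm)
N = 5      # number of measurements

standard_error = s / math.sqrt(N)        # equation (9)
print(f"SE = {standard_error:.2f} cm")   # ~0.05 cm
```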

Anomalous Data

The first step you should take in analyzing data (and even while taking data) is to examine the data set as a whole to look for patterns and outliers. Anomalous data points that lie outside the general trend of the data may suggest an interesting phenomenon that could lead to a new discovery, or they may simply be the result of a mistake or random fluctuations. In any case, an outlier requires closer examination to determine the cause of the unexpected result. Extreme data should never be "thrown out" without clear justification and explanation, because you may be discarding the most significant part of the investigation! However, if you can clearly justify omitting an inconsistent data point, then you should exclude the outlier from your analysis so that the average value is not skewed from the "true" mean.

Fractional Uncertainty Revisited

When a reported value is determined by taking the average of a set of independent readings, the fractional uncertainty is given by the ratio of the uncertainty divided by the average value. For this example,

( 10 )

Fractional uncertainty = 0.05 cm / 31.19 cm = 0.0016 ≈ 0.2%

Note that the fractional uncertainty is dimensionless but is often reported as a percentage or in parts per million (ppm) to emphasize the fractional nature of the value. A scientist might also make the statement that this measurement "is good to about 1 part in 500" or "precise to about 0.2%". The fractional uncertainty is also important because it is used in propagating uncertainty in calculations using the result of a measurement, as discussed in the next section.

Propagation of Uncertainty

Suppose we want to determine a quantity f, which depends on x and maybe several other variables y, z, etc. We want to know the error in f if we measure x, y, ... with errors σ_x, σ_y, ... Examples:

( 11 )

f = xy (Area of a rectangle)

( 12 )

f = p cos θ (x-component of momentum)

( 13 )

f = x / t (velocity)

For a single-variable function f(x), the deviation in f can be related to the deviation in x using calculus:

( 14 )

δf = (df/dx) δx
Thus, taking the square and the average:

( 15 )

⟨δf²⟩ = (df/dx)² ⟨δx²⟩   (where ⟨...⟩ denotes an average over many measurements)

and using the definition of σ , we get:

( 16 )

σ_f = |df/dx| σ_x

Examples:

(a) f = √x

( 17 )

df/dx = 1 / (2√x)

( 18 )

σ_f = σ_x / (2√x), or σ_f / f = (1/2)(σ_x / x)
(b) f = x²

df/dx = 2x, so σ_f = 2 |x| σ_x, or σ_f / f = 2 σ_x / x

(c) f = cos θ

( 22 )

σ_f = |sin θ| σ_θ, or σ_f / f = |tan θ| σ_θ


Note: in this situation, σ_θ must be in radians.

In the case where f depends on two or more variables, the derivation above can be repeated with minor modification. For two variables, f(x, y), we have:

δf = (∂f/∂x) δx + (∂f/∂y) δy

The partial derivative ∂f/∂x means differentiating f with respect to x while holding the other variables fixed. Taking the square and the average, we get the law of propagation of uncertainty:

⟨δf²⟩ = (∂f/∂x)² ⟨δx²⟩ + (∂f/∂y)² ⟨δy²⟩ + 2 (∂f/∂x)(∂f/∂y) ⟨δx δy⟩

If the measurements of x and y are uncorrelated, then ⟨δx δy⟩ = 0, and using the definition of σ, we get:

σ_f = √[ (∂f/∂x)² σ_x² + (∂f/∂y)² σ_y² ]

Examples:

(a) f = x + y

( 27 )

σ_f = √( σ_x² + σ_y² )

When adding (or subtracting) independent measurements, the absolute uncertainty of the sum (or difference) is the root sum of squares (RSS) of the individual absolute uncertainties. When adding correlated measurements, the uncertainty in the result is simply the sum of the absolute uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Adding or subtracting a constant does not change the absolute uncertainty of the calculated value as long as the constant is an exact value.
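
A minimal sketch of this RSS rule for a sum, using illustrative values of our own choosing (not from the manual):

```python
import math

def rss(*uncertainties):
    """Root sum of squares of the given absolute uncertainties."""
    return math.sqrt(sum(u**2 for u in uncertainties))

# f = x + y with independent (uncorrelated) errors, equation (27)
x, sigma_x = 2.0, 0.3   # illustrative values
y, sigma_y = 5.0, 0.4

f = x + y
sigma_f = rss(sigma_x, sigma_y)   # sqrt(0.09 + 0.16) = 0.5
print(f"f = {f:.1f} ± {sigma_f:.1f}")
```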

(b) f = xy

( 29 )

σ_f = √( y² σ_x² + x² σ_y² )

Dividing the previous equation by f = xy, we get:

σ_f / f = √( (σ_x/x)² + (σ_y/y)² )

(c) f = x / y

σ_f = √( σ_x² / y² + (x² / y⁴) σ_y² )

Dividing the previous equation by f = x / y, we get:

σ_f / f = √( (σ_x/x)² + (σ_y/y)² )

When multiplying (or dividing) independent measurements, the relative uncertainty of the product (quotient) is the RSS of the individual relative uncertainties. When multiplying correlated measurements, the uncertainty in the result is simply the sum of the relative uncertainties, which is always a larger uncertainty estimate than adding in quadrature (RSS). Multiplying or dividing by a constant does not change the relative uncertainty of the calculated value.

Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient depends on the relative uncertainty of each individual term. Example: Find the uncertainty in v, where

v = at

with a = 9.8 ± 0.1 m/s², t = 3.4 ± 0.1 s

( 34 )

σ_v / v = √( (σ_a/a)² + (σ_t/t)² ) = √( (0.010)² + (0.029)² ) = 0.031 or 3.1%

Notice that the relative uncertainty in t (2.9%) is significantly greater than the relative uncertainty for a (1.0%), and therefore the relative uncertainty in v is essentially the same as for t (about 3%). Graphically, the RSS is like the Pythagorean theorem:
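
The same arithmetic in code form (a sketch of equation (34), using the values quoted above):

```python
import math

# v = a*t with a = 9.8 ± 0.1 m/s² and t = 3.4 ± 0.1 s
a, sigma_a = 9.8, 0.1
t, sigma_t = 3.4, 0.1

v = a * t
rel_v = math.sqrt((sigma_a / a)**2 + (sigma_t / t)**2)   # RSS of relative uncertainties
print(f"relative uncertainty in v = {rel_v:.3f}")        # ~0.031 (3.1%)
print(f"v = {v:.0f} ± {v * rel_v:.0f} m/s")              # 33 ± 1 m/s
```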

Figure 2

The total uncertainty is the length of the hypotenuse of a right triangle with legs the length of each uncertainty component.

Timesaving approximation: "A chain is only as strong as its weakest link."
If one of the uncertainty terms is more than 3 times greater than the other terms, the root-squares formula can be skipped, and the combined uncertainty is simply the largest uncertainty. This shortcut can save a lot of time without losing any accuracy in the estimate of the overall uncertainty.

The Upper-Lower Bound Method of Uncertainty Propagation

An alternative, and sometimes simpler, procedure to the tedious propagation of uncertainty law is the upper-lower bound method of uncertainty propagation. This alternative method does not yield a standard uncertainty estimate (with a 68% confidence interval), but it does give a reasonable estimate of the uncertainty for practically any situation. The basic idea of this method is to use the uncertainty ranges of each variable to calculate the maximum and minimum values of the function. You can also think of this procedure as examining the best and worst case scenarios. For example, suppose you measure an angle to be: θ = 25° ± 1° and you needed to find f = cos θ, then:

( 35 )

f_max = cos(24°) = 0.9135

( 36 )

f_min = cos(26°) = 0.8988

( 37 )

f = 0.906 ± 0.007

where 0.007 is half the difference between f_max and f_min.

Note that even though θ was only measured to 2 significant figures, f is known to 3 figures. By using the propagation of uncertainty law:

σ_f = |sin θ| σ_θ = (0.423)(π/180) = 0.0074

(same result as above).

The uncertainty estimate from the upper-lower bound method is generally larger than the standard uncertainty estimate found from the propagation of uncertainty law, but both methods will give a reasonable estimate of the uncertainty in a calculated value.
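
Both methods are easy to compare numerically; here is a sketch for the cos θ example above:

```python
import math

# Upper-lower bound method for f = cos(theta), theta = 25° ± 1°
theta, dtheta = math.radians(25), math.radians(1)

f_max = math.cos(theta - dtheta)   # cos(24°) = 0.9135
f_min = math.cos(theta + dtheta)   # cos(26°) = 0.8988
f_best = (f_max + f_min) / 2
f_unc = (f_max - f_min) / 2
print(f"f = {f_best:.3f} ± {f_unc:.3f}")   # 0.906 ± 0.007

# Propagation of uncertainty law, equation (22), for comparison
sigma_f = abs(math.sin(theta)) * dtheta
print(f"sigma_f = {sigma_f:.4f}")          # ~0.0074
```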

The upper-lower bound method is especially useful when the functional relationship is not clear or is incomplete. One practical application is forecasting the expected range in an expense budget. In this case, some expenses may be fixed, while others may be uncertain, and the range of these uncertain terms could be used to predict the upper and lower bounds on the total expense.

Significant Figures

The number of significant figures in a value can be defined as all the digits between and including the first non-zero digit from the left, through the last digit. For instance, 0.44 has two significant figures, and the number 66.770 has 5 significant figures. Zeroes are significant except when used to locate the decimal point, as in the number 0.00030, which has 2 significant figures. Zeroes may or may not be significant for numbers like 1200, where it is not clear whether two, three, or four significant figures are indicated. To avoid this ambiguity, such numbers should be expressed in scientific notation (e.g. 1.20 × 10³ clearly indicates three significant figures). When using a calculator, the display will often show many digits, only some of which are meaningful (significant in a different sense). For example, if you want to estimate the area of a circular playing field, you might pace off the radius to be 9 meters and use the formula: A = πr². When you compute this area, the calculator might report a value of 254.4690049 m². It would be extremely misleading to report this number as the area of the field, because it would suggest that you know the area to an absurd degree of precision (to within a fraction of a square millimeter!). Since the radius is only known to one significant figure, the final answer should also contain only one significant figure: Area = 3 × 10² m². From this example, we can see that the number of significant figures reported for a value implies a certain degree of precision. In fact, the number of significant figures suggests a rough estimate of the relative uncertainty:

The number of significant figures implies an approximate relative uncertainty:
1 significant figure suggests a relative uncertainty of about 10% to 100%
2 significant figures suggest a relative uncertainty of about 1% to 10%
3 significant figures suggest a relative uncertainty of about 0.1% to 1%

To understand this connection more clearly, consider a value with 2 significant figures, like 99, which suggests an uncertainty of ±1, or a relative uncertainty of ±1/99 = ±1%. (Actually some people might argue that the implied uncertainty in 99 is ±0.5 since the range of values that would round to 99 is 98.5 to 99.4. But since the uncertainty here is only a rough estimate, there is not much point arguing about the factor of two.) The smallest 2-significant-figure number, 10, also suggests an uncertainty of ±1, which in this case is a relative uncertainty of ±1/10 = ±10%. The ranges for other numbers of significant figures can be reasoned in a similar manner.

Use of Significant Figures for Simple Propagation of Uncertainty

By following a few simple rules, significant figures can be used to find the appropriate precision for a calculated result for the four most basic math functions, all without the use of complicated formulas for propagating uncertainties.

For multiplication and division, the number of significant figures that are reliably known in a product or quotient is the same as the smallest number of significant figures in any of the original numbers.

Example:

6.6 (2 significant figures)
× 7328.7 (5 significant figures)
= 48369.42 = 48 × 10³ (2 significant figures)

For addition and subtraction, the result should be rounded off to the last decimal place reported for the least precise number.

Examples:

223.64 + 54 = 278
5560.5 + 0.008 = 5560.5

If a calculated number is to be used in further calculations, it is good practice to keep one extra digit to reduce rounding errors that may accumulate. Then the final answer should be rounded according to the above guidelines.

Uncertainty, Significant Figures, and Rounding

For the same reason that it is misleading to report a result with more significant figures than are reliably known, the uncertainty value should also not be reported with excessive precision. For example, it would be unreasonable for a student to report a result like:

( 38 )

measured density = 8.93 ± 0.475328 g/cm³   Wrong!

The uncertainty in the measurement cannot possibly be known so precisely! In most experimental work, the confidence in the uncertainty estimate is not much better than about ±50% because of all the various sources of error, none of which can be known exactly. Therefore, uncertainty values should be stated to only one significant figure (or perhaps 2 sig. figs. if the first digit is a 1).

Because experimental uncertainties are inherently imprecise, they should be rounded to one, or at most two, significant figures.

To help give a sense of the amount of confidence that can be placed in the standard deviation, the following table indicates the relative uncertainty associated with the standard deviation for various sample sizes. Note that in order for an uncertainty value to be reported to 3 significant figures, more than 10,000 readings would be required to justify this degree of precision! *The relative uncertainty is given by the approximate formula:

σ_s / s ≈ 1 / √( 2(N − 1) )

When an explicit uncertainty estimate is made, the uncertainty term indicates how many significant figures should be reported in the measured value (not the other way around!). For example, the uncertainty in the density measurement above is about 0.5 g/cm³, so this tells us that the digit in the tenths place is uncertain, and should be the last one reported. The other digits in the hundredths place and beyond are insignificant, and should not be reported:

measured density = 8.9 ± 0.5 g/cm³.

Right!

An experimental value should be rounded to be consistent with the magnitude of its uncertainty. This generally means that the last significant figure in any reported value should be in the same decimal place as the uncertainty.

In most instances, this practice of rounding an experimental result to be consistent with the uncertainty estimate gives the same number of significant figures as the rules discussed earlier for simple propagation of uncertainties for adding, subtracting, multiplying, and dividing.
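
A small helper can automate this rounding convention. This is our own illustrative function, not from any standard library, and it keeps only one significant figure in the uncertainty (ignoring the optional two-figure exception when the leading digit is a 1):

```python
import math

def round_to_uncertainty(value, uncertainty):
    """Hypothetical helper: round the uncertainty to one significant
    figure, then round the value to the same decimal place."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    u = round(uncertainty, -exponent)   # one significant figure
    v = round(value, -exponent)
    return v, u

v, u = round_to_uncertainty(8.93, 0.475328)
print(f"measured density = {v} ± {u} g/cm³")   # 8.9 ± 0.5 g/cm³
```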

Caution: When conducting an experiment, it is important to keep in mind that precision is expensive (both in terms of time and material resources). Do not waste your time trying to obtain a precise result when only a rough estimate is required. The cost increases exponentially with the amount of precision required, so the potential benefit of this precision must be weighed against the extra cost.

Combining and Reporting Uncertainties

In 1993, the International Standards Organization (ISO) published the first official worldwide Guide to the Expression of Uncertainty in Measurement. Before this time, uncertainty estimates were evaluated and reported according to different conventions depending on the context of the measurement or the scientific discipline. Here are a few key points from this 100-page guide, which can be found in modified form on the NIST website. When reporting a measurement, the measured value should be reported along with an estimate of the total combined standard uncertainty U_c of the value. The total uncertainty is found by combining the uncertainty components based on the two types of uncertainty analysis:
  • Type A evaluation of standard uncertainty - method of evaluation of uncertainty by the statistical analysis of a series of observations. This method primarily includes random errors.
  • Type B evaluation of standard uncertainty - method of evaluation of uncertainty by means other than the statistical analysis of a series of observations. This method includes systematic errors and any other uncertainty factors that the experimenter believes are important.
The individual uncertainty components u_i should be combined using the law of propagation of uncertainties, commonly called the "root-sum-of-squares" or "RSS" method. When this is done, the combined standard uncertainty should be equivalent to the standard deviation of the result, making this uncertainty value correspond with a 68% confidence interval. If a wider confidence interval is desired, the uncertainty can be multiplied by a coverage factor (usually k = 2 or 3) to provide an uncertainty range that is believed to include the true value with a confidence of 95% (for k = 2) or 99.7% (for k = 3). If a coverage factor is used, there should be a clear explanation of its meaning so there is no confusion for readers interpreting the significance of the uncertainty value. You should be aware that the ± uncertainty notation may be used to indicate different confidence intervals, depending on the scientific field or context. For example, a public opinion poll may report that the results have a margin of error of ±3%, which means that readers can be 95% confident (not 68% confident) that the reported results are accurate within 3 percentage points. Similarly, a manufacturer's tolerance rating generally assumes a 95% or 99% level of confidence.
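
A minimal sketch of combining Type A and Type B components by RSS and applying a coverage factor (the component values here are invented for illustration):

```python
import math

# Hypothetical standard uncertainty components
u_A = 0.05   # Type A: statistical (e.g. standard error of the mean)
u_B = 0.03   # Type B: non-statistical (e.g. calibration tolerance)

u_c = math.sqrt(u_A**2 + u_B**2)   # combined standard uncertainty (RSS), ~68%
k = 2                              # coverage factor for ~95% confidence
U = k * u_c                        # expanded uncertainty

print(f"u_c = {u_c:.3f} (68%), U = {U:.3f} (95%, k = 2)")
```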

Conclusion: "When do measurements agree with each other?"

We now have the resources to answer the fundamental scientific question that was asked at the beginning of this error analysis discussion: "Does my result agree with a theoretical prediction or results from other experiments?" Generally speaking, a measured result agrees with a theoretical prediction if the prediction lies within the range of experimental uncertainty. Similarly, if two measured values have standard uncertainty ranges that overlap, then the measurements are said to be consistent (they agree). If the uncertainty ranges do not overlap, then the measurements are said to be discrepant (they do not agree). However, you should recognize that these overlap criteria can give two opposite answers depending on the evaluation and confidence level of the uncertainty. It would be unethical to arbitrarily inflate the uncertainty range just to make a measurement agree with an expected value. A better procedure would be to discuss the size of the difference between the measured and expected values within the context of the uncertainty, and try to find the source of the discrepancy if the difference is truly significant. To examine your own data, you are encouraged to use the Measurement Comparison tool available on the lab website. Here are some examples using this graphical analysis tool:

Figure 3

A = 1.2 ± 0.4

B = 1.8 ± 0.4

These measurements agree within their uncertainties, despite the fact that the percent difference between their central values is 40%. However, with half the uncertainty (± 0.2), these same measurements do not agree since their uncertainties do not overlap. Further investigation would be needed to determine the cause for the discrepancy. Perhaps the uncertainties were underestimated, there may have been a systematic error that was not considered, or there may be a true difference between these values.

Figure 4

An alternative method for determining agreement between values is to calculate the difference between the values divided by their combined standard uncertainty. This ratio gives the number of standard deviations separating the two values. If this ratio is less than 1.0, then it is reasonable to conclude that the values agree. If the ratio is more than 2.0, then it is highly unlikely (less than about 5% probability) that the values are the same. Example from above with

u = 0.4: |1.8 − 1.2| / √(0.4² + 0.4²) = 1.1.

Therefore, A and B likely agree. Example from above with

u = 0.2: |1.8 − 1.2| / √(0.2² + 0.2²) = 2.1.

Therefore, it is unlikely that A and B agree.
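
The ratio test above is one line of arithmetic; here is a sketch reproducing both examples:

```python
import math

def agreement_ratio(a, ua, b, ub):
    """Number of combined standard uncertainties separating two values."""
    return abs(a - b) / math.sqrt(ua**2 + ub**2)

print(round(agreement_ratio(1.2, 0.4, 1.8, 0.4), 1))   # 1.1 -> likely agree
print(round(agreement_ratio(1.2, 0.2, 1.8, 0.2), 1))   # 2.1 -> unlikely to agree
```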

References

Baird, D.C. Experimentation: An Introduction to Measurement Theory and Experiment Design, 3rd ed. Prentice Hall: Englewood Cliffs, 1995.
Bevington, Phillip and Robinson, D. Data Reduction and Error Analysis for the Physical Sciences, 2nd ed. McGraw-Hill: New York, 1991.
ISO. Guide to the Expression of Uncertainty in Measurement. International Organization for Standardization (ISO) and the International Committee for Weights and Measures (CIPM): Switzerland, 1993.
Lichten, William. Data and Error Analysis, 2nd ed. Prentice Hall: Upper Saddle River, NJ, 1999.
NIST. Essentials of Expressing Measurement Uncertainty. http://physics.nist.gov/cuu/Uncertainty/
Taylor, John. An Introduction to Error Analysis, 2nd ed. University Science Books: Sausalito, 1997.
