OCR data from: Devore J.L. and Berk K.N. (2012), Modern Mathematical Statistics with Applications, 2nd ed., Springer.

Springer Texts in Statistics
Series Editors: G. Casella, S. Fienberg, I. Olkin
For further volumes: http://www.springer.com/series/417

Modern Mathematical Statistics with Applications

Jay L. Devore, California Polytechnic State University
Kenneth N. Berk, Illinois State University

Jay L. Devore, Statistics Department, California Polytechnic State University, San Luis Obispo, California, USA, jdevore@calpoly.edu
Kenneth N. Berk, Department of Mathematics, Illinois State University, Normal, Illinois, USA, kberk@ilstu.edu

ISBN 978-1-4614-0390-6
e-ISBN 978-1-4614-0391-3
DOI 10.1007/978-1-4614-0391-3
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011936004

© Springer Science+Business Media, LLC 2012

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper.
Springer is part of Springer Science+Business Media (www.springer.com)

To my wife Carol whose continuing support of my writing efforts over the years has made all the difference.
To my wife Laura who, as a successful author, is my mentor and role model.

Jay L. Devore

Jay Devore received a B.S. in Engineering Science from the University of California, Berkeley, and a Ph.D. in Statistics from Stanford University. He previously taught at the University of Florida and Oberlin College, and has had visiting positions at Stanford, Harvard, the University of Washington, New York University, and Columbia. He has been at California Polytechnic State University, San Luis Obispo, since 1977, where he was chair of the Department of Statistics for 7 years and recently achieved the exalted status of Professor Emeritus. Jay has previously authored or coauthored five other books, including Probability and Statistics for Engineering and the Sciences, which won a McGuffey Longevity Award from the Text and Academic Authors Association for demonstrated excellence over time. He is a Fellow of the American Statistical Association, has been an associate editor for both the Journal of the American Statistical Association and The American Statistician, and received the Distinguished Teaching Award from Cal Poly in 1991. His recreational interests include reading, playing tennis, traveling, and cooking and eating good food.

Kenneth N. Berk

Ken Berk has a B.S. in Physics from Carnegie Tech (now Carnegie Mellon) and a Ph.D. in Mathematics from the University of Minnesota. He is Professor Emeritus of Mathematics at Illinois State University and a Fellow of the American Statistical Association. He founded the Software Reviews section of The American Statistician and edited it for 6 years. He served as secretary/treasurer, program chair, and chair of the Statistical Computing Section of the American Statistical Association, and he twice co-chaired the Interface Symposium, the main annual meeting in statistical computing.
His published work includes papers on time series, statistical computing, regression analysis, and statistical graphics, as well as the book Data Analysis with Microsoft Excel (with Patrick Carey).

Contents

1 Overview and Descriptive Statistics
Introduction
1.1 Populations and Samples
1.2 Pictorial and Tabular Methods in Descriptive Statistics
1.3 Measures of Location
1.4 Measures of Variability

2 Probability
Introduction
2.1 Sample Spaces and Events
2.2 Axioms, Interpretations, and Properties of Probability
2.3 Counting Techniques
2.4 Conditional Probability
2.5 Independence

3 Discrete Random Variables and Probability Distributions
Introduction
3.1 Random Variables
3.2 Probability Distributions for Discrete Random Variables
3.3 Expected Values of Discrete Random Variables
3.4 Moments and Moment Generating Functions
3.5 The Binomial Probability Distribution
3.6 Hypergeometric and Negative Binomial Distributions
3.7 The Poisson Probability Distribution

4 Continuous Random Variables and Probability Distributions
Introduction
4.1 Probability Density Functions and Cumulative Distribution Functions
4.2 Expected Values and Moment Generating Functions
4.3 The Normal Distribution
4.4 The Gamma Distribution and Its Relatives
4.5 Other Continuous Distributions
4.6 Probability Plots
4.7 Transformations of a Random Variable

5 Joint Probability Distributions
Introduction
5.1 Jointly Distributed Random Variables
5.2 Expected Values, Covariance, and Correlation
5.3 Conditional Distributions
5.4 Transformations of Random Variables
5.5 Order Statistics

6 Statistics and Sampling Distributions
Introduction
6.1 Statistics and Their Distributions
6.2 The Distribution of the Sample Mean
6.3 The Mean, Variance, and MGF for Several Variables
6.4 Distributions Based on a Normal Random Sample
Appendix: Proof of the Central Limit Theorem

7 Point Estimation
Introduction
7.1 General Concepts and Criteria
7.2 Methods of Point Estimation
7.3 Sufficiency
7.4 Information and Efficiency

8 Statistical Intervals Based on a Single Sample
Introduction
8.1 Basic Properties of Confidence Intervals
8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion
8.3 Intervals Based on a Normal Population Distribution
8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population
8.5 Bootstrap Confidence Intervals

9 Tests of Hypotheses Based on a Single Sample
Introduction
9.1 Hypotheses and Test Procedures
9.2 Tests About a Population Mean
9.3 Tests Concerning a Population Proportion
9.4 P-Values
9.5 Some Comments on Selecting a Test Procedure

10 Inferences Based on Two Samples
Introduction
10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means
10.2 The Two-Sample t Test and Confidence Interval
10.3 Analysis of Paired Data
10.4 Inferences About Two Population Proportions
10.5 Inferences About Two Population Variances
10.6 Comparisons Using the Bootstrap and Permutation Methods

11 The Analysis of Variance
Introduction
11.1 Single-Factor ANOVA
11.2 Multiple Comparisons in ANOVA
11.3 More on Single-Factor ANOVA
11.4 Two-Factor ANOVA with Kij = 1
11.5 Two-Factor ANOVA with Kij > 1

12 Regression and Correlation
Introduction
12.1 The Simple Linear and Logistic Regression Models
12.2 Estimating Model Parameters
12.3 Inferences About the Regression Coefficient β1
12.4 Inferences Concerning μY·x* and the Prediction of Future Y Values
12.5 Correlation
12.6 Assessing Model Adequacy
12.7 Multiple Regression Analysis
12.8 Regression with Matrices

13 Goodness-of-Fit Tests and Categorical Data Analysis
Introduction
13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified
13.2 Goodness-of-Fit Tests for Composite Hypotheses
13.3 Two-Way Contingency Tables

14 Alternative Approaches to Inference
Introduction
14.1 The Wilcoxon Signed-Rank Test
14.2 The Wilcoxon Rank-Sum Test
14.3 Distribution-Free Confidence Intervals
14.4 Bayesian Methods

Appendix Tables
A.1 Cumulative Binomial Probabilities
A.2 Cumulative Poisson Probabilities
A.3 Standard Normal Curve Areas
A.4 The Incomplete Gamma Function
A.5 Critical Values for t Distributions
A.6 Critical Values for Chi-Squared Distributions
A.7 t Curve Tail Areas
A.8 Critical Values for F Distributions
A.9 Critical Values for Studentized Range Distributions
A.10 Chi-Squared Curve Tail Areas
A.11 Critical Values for the Ryan–Joiner Test of Normality
A.12 Critical Values for the Wilcoxon Signed-Rank Test
A.13 Critical Values for the Wilcoxon Rank-Sum Test
A.14 Critical Values for the Wilcoxon Signed-Rank Interval
A.15 Critical Values for the Wilcoxon Rank-Sum Interval
A.16 β Curves for t Tests

Answers to Odd-Numbered Exercises
Index

Preface

Purpose

Our objective is to provide a postcalculus introduction to the discipline of statistics that
• Has mathematical integrity and contains some underlying theory.
• Shows students a broad range of applications involving real data.
• Is very current in its selection of topics.
• Illustrates the importance of statistical software.
• Is accessible to a wide audience, including mathematics and statistics majors (yes, there are a few of the latter), prospective engineers and scientists, and those business and social science majors interested in the quantitative aspects of their disciplines.

A number of currently available mathematical statistics texts are heavily oriented toward a rigorous mathematical development of probability and statistics, with much emphasis on theorems, proofs, and derivations. The focus is more on mathematics than on statistical practice. Even when applied material is included, the scenarios are often contrived (many examples and exercises involving dice, coins, cards, widgets, or a comparison of treatment A to treatment B). So in our exposition we have tried to achieve a balance between mathematical foundations and statistical practice. Some may feel discomfort on grounds that because a mathematical statistics course has traditionally been a feeder into graduate programs in statistics, students coming out of such a course must be well prepared for that path. But that view presumes that the mathematics will provide the hook to get students interested in our discipline. This may happen for a few mathematics majors.
However, our experience is that the application of statistics to real-world problems is far more persuasive in getting quantitatively oriented students to pursue a career or take further coursework in statistics. Let's first draw them in with intriguing problem scenarios and applications. Opportunities for exposing them to mathematical foundations will follow in due course. We believe it is more important for students coming out of this course to be able to carry out and interpret the results of a two-sample t test or simple regression analysis than to manipulate joint moment generating functions or discourse on various modes of convergence.

Content

The book certainly does include core material in probability (Chapter 2), random variables and their distributions (Chapters 3–5), and sampling theory (Chapter 6). But our desire to balance theory with application/data analysis is reflected in the way the book starts out, with a chapter on descriptive and exploratory statistical techniques rather than an immediate foray into the axioms of probability and their consequences. After the distributional infrastructure is in place, the remaining statistical chapters cover the basics of inference. In addition to introducing core ideas from estimation and hypothesis testing (Chapters 7–10), there is emphasis on checking assumptions and examining the data prior to formal analysis. Modern topics such as bootstrapping, permutation tests, residual analysis, and logistic regression are included. Our treatment of regression, analysis of variance, and categorical data analysis (Chapters 11–13) is definitely more oriented to dealing with real data than with theoretical properties of models. We also show many examples of output from commonly used statistical software packages, something noticeably absent in most other books pitched at this audience and level.
Mathematical Level

The challenge for students at this level should lie with mastery of statistical concepts rather than with mathematical wizardry. Consequently, the mathematical prerequisites and demands are reasonably modest. Mathematical sophistication and quantitative reasoning ability are, of course, crucial to the enterprise. Students with a solid grounding in univariate calculus and some exposure to multivariate calculus should feel comfortable with what we are asking of them. The several sections where matrix algebra appears (transformations in Chapter 5 and the matrix approach to regression in the last section of Chapter 12) can easily be deemphasized or skipped entirely.

Our goal is to redress the balance between mathematics and statistics by putting more emphasis on the latter. The concepts, arguments, and notation contained herein will certainly stretch the intellects of many students. And a solid mastery of the material will be required in order for them to solve many of the roughly 1,300 exercises included in the book. Proofs and derivations are included where appropriate, but we think it likely that obtaining a conceptual understanding of the statistical enterprise will be the major challenge for readers.

Recommended Coverage

There should be more than enough material in our book for a year-long course. Those wanting to emphasize some of the more theoretical aspects of the subject (e.g., moment generating functions, conditional expectation, transformations, order statistics, sufficiency) should plan to spend correspondingly less time on inferential methodology in the latter part of the book. We have opted not to mark certain sections as optional, preferring instead to rely on the experience and tastes of individual instructors in deciding what should be presented.
We would also like to think that students could be asked to read an occasional subsection or even section on their own and then work exercises to demonstrate understanding, so that not everything would need to be presented in class. Remember that there is never enough time in a course of any duration to teach students all that we'd like them to know!

Acknowledgments

We gratefully acknowledge the plentiful feedback provided by reviewers and colleagues. A special salute goes to Bruce Trumbo for going way beyond his mandate in providing us an incredibly thoughtful review of 40+ pages containing many wonderful ideas and pertinent criticisms. Our emphasis on real data would not have come to fruition without help from the many individuals who provided us with data in published sources or in personal communications. We very much appreciate the editorial and production services provided by the folks at Springer, in particular Marc Strauss, Kathryn Schell, and Felix Portnoy.

A Final Thought

It is our hope that students completing a course taught from this book will feel as passionately about the subject of statistics as we still do after so many years in the profession. Only teachers can really appreciate how gratifying it is to hear from a student after he or she has completed a course that the experience had a positive impact and maybe even affected a career choice.

Jay L. Devore
Kenneth N. Berk

1 Overview and Descriptive Statistics

Introduction

Statistical concepts and methods are not only useful but indeed often indispensable in understanding the world around us. They provide ways of gaining new insights into the behavior of many phenomena that you will encounter in your chosen field of specialization. The discipline of statistics teaches us how to make intelligent judgments and informed decisions in the presence of uncertainty and variation.
Without uncertainty or variation, there would be little need for statistical methods or statisticians. If the yield of a crop were the same in every field, if all individuals reacted the same way to a drug, if everyone gave the same response to an opinion survey, and so on, then a single observation would reveal all desired information.

An interesting example of variation arises in the course of performing emissions testing on motor vehicles. The expense and time requirements of the Federal Test Procedure (FTP) preclude its widespread use in vehicle inspection programs. As a result, many agencies have developed less costly and quicker tests, which it is hoped replicate FTP results. According to the journal article "Motor Vehicle Emissions Variability" (J. Air Waste Manage. Assoc., 1996: 667–675), the acceptance of the FTP as a gold standard has led to the widespread belief that repeated measurements on the same vehicle would yield identical (or nearly identical) results. The authors of the article applied the FTP to seven vehicles characterized as "high emitters." Here are the results of four hydrocarbon and carbon monoxide tests on one such vehicle:

HC (g/mile):  13.8  18.3  32.2  32.5
CO (g/mile):  118   149   232   236

The substantial variation in both the HC and CO measurements casts considerable doubt on conventional wisdom and makes it much more difficult to make precise assessments about emissions levels.

How can statistical techniques be used to gather information and draw conclusions? Suppose, for example, that a biochemist has developed a medication for relieving headaches.
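To make the variability in the FTP data concrete, the four repeated measurements can be summarized with a mean, standard deviation, and coefficient of variation. The short sketch below is ours, not from the article; the summary measures themselves are developed formally in Sections 1.3 and 1.4.

```python
from statistics import mean, stdev

# Four repeated FTP measurements on one "high emitter" vehicle
hc = [13.8, 18.3, 32.2, 32.5]   # hydrocarbons, g/mile
co = [118, 149, 232, 236]       # carbon monoxide, g/mile

for name, data in [("HC", hc), ("CO", co)]:
    m, s = mean(data), stdev(data)
    # coefficient of variation: spread relative to the typical value
    print(f"{name}: mean = {m:.1f}, sd = {s:.1f}, cv = {s / m:.0%}")
```

The coefficients of variation come out near 40% for HC and 32% for CO: repeated tests on the very same vehicle differ by roughly a third of their average value, which is what undermines the belief that repeated FTP measurements are nearly identical.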
If this medication is given to different individuals, variation in conditions and in the people themselves will result in more substantial relief for some individuals than for others. Methods of statistical analysis could be used on data from such an experiment to determine on the average how much relief to expect.

Alternatively, suppose the biochemist has developed a headache medication in the belief that it will be superior to the currently best medication. A comparative experiment could be carried out to investigate this issue by giving the current medication to some headache sufferers and the new medication to others. This must be done with care lest the wrong conclusion emerge. For example, perhaps the two medications are really equally effective. However, the new medication may be applied to people who have less severe headaches and less stressful lives. The investigator would then likely observe a difference between the two medications attributable not to the medications themselves, but to a poor choice of test groups. Statistics offers not only methods for analyzing the results of experiments once they have been carried out but also suggestions for how experiments can be performed in an efficient manner to lessen the effects of variation and have a better chance of producing correct conclusions.

1.1 Populations and Samples

We are constantly exposed to collections of facts, or data, both in our professional capacities and in everyday activities. The discipline of statistics provides methods for organizing and summarizing data and for drawing conclusions based on information contained in the data. An investigation will typically focus on a well-defined collection of objects constituting a population of interest. In one study, the population might consist of all gelatin capsules of a particular type produced during a specified period. Another investigation might involve the population consisting of all individuals who received a B.S.
in mathematics during the most recent academic year. When desired information is available for all objects in the population, we have what is called a census. Constraints on time, money, and other scarce resources usually make a census impractical or infeasible. Instead, a subset of the population, a sample, is selected in some prescribed manner. Thus we might obtain a sample of pills from a particular production run as a basis for investigating whether pills are conforming to manufacturing specifications, or we might select a sample of last year's graduates to obtain feedback about the quality of the curriculum.

We are usually interested only in certain characteristics of the objects in a population: the amount of vitamin C in the pill, the gender of a mathematics graduate, the age at which the individual graduated, and so on. A characteristic may be categorical, such as gender or year in college, or it may be numerical in nature. In the former case, the value of the characteristic is a category (e.g., female or sophomore), whereas in the latter case, the value is a number (e.g., age = 23 years or vitamin C content = 65 mg). A variable is any characteristic whose value may change from one object to another in the population. We shall initially denote variables by lowercase letters from the end of our alphabet. Examples include

x = brand of calculator owned by a student
y = number of major defects on a newly manufactured automobile
z = braking distance of an automobile under specified conditions

Data comes from making observations either on a single variable or simultaneously on two or more variables. A univariate data set consists of observations on a single variable.
For example, we might consider the type of computer, laptop (L) or desktop (D), for ten recent purchases, resulting in the categorical data set

D L L L D L L D L L

The following sample of lifetimes (hours) of brand D batteries in flashlights is a numerical univariate data set:

5.6  5.1  6.2  6.0  5.8  6.5  5.8  5.5

We have bivariate data when observations are made on each of two variables. Our data set might consist of a (height, weight) pair for each basketball player on a team, with the first observation as (72, 168), the second as (75, 212), and so on. If a kinesiologist determines the values of x = recuperation time from an injury and y = type of injury, the resulting data set is bivariate with one variable numerical and the other categorical.

Multivariate data arises when observations are made on more than two variables. For example, a research physician might determine the systolic blood pressure, diastolic blood pressure, and serum cholesterol level for each patient participating in a study. Each observation would be a triple of numbers, such as (120, 80, 146). In many multivariate data sets, some variables are numerical and others are categorical. Thus the annual automobile issue of Consumer Reports gives values of such variables as type of vehicle (small, sporty, compact, midsize, large), city fuel efficiency (mpg), highway fuel efficiency (mpg), drive train type (rear wheel, front wheel, four wheel), and so on.

Branches of Statistics

An investigator who has collected data may wish simply to summarize and describe important features of the data. This entails using methods from descriptive statistics. Some of these methods are graphical in nature; the construction of histograms, boxplots, and scatter plots are primary examples. Other descriptive methods involve calculation of numerical summary measures, such as means, standard deviations, and correlation coefficients.
The wide availability of statistical computer software packages has made these tasks much easier to carry out than they used to be. Computers are much more efficient than human beings at calculation and the creation of pictures (once they have received appropriate instructions from the user!). This means that the investigator doesn't have to expend much effort on "grunt work" and will have more time to study the data and extract important messages. Throughout this book, we will present output from various packages such as MINITAB, SAS, and R.

Example 1.1 Charity is a big business in the United States. The website charitynavigator.com gives information on roughly 5500 charitable organizations, and there are many smaller charities that fly below the navigator's radar screen. Some charities operate very efficiently, with fundraising and administrative expenses that are only a small percentage of total expenses, whereas others spend a high percentage of what they take in on such activities. Here is data on fundraising expenses as a percentage of total expenditures for a random sample of 60 charities:

 6.1  12.6  34.7   1.6  18.8   2.2   3.0   2.2   5.6   3.8
 2.2   3.1   1.3   1.1  14.1   4.0  21.0   6.1   1.3  20.4
 7.5   3.9  10.1   8.1  19.5   5.2  12.0  15.8  10.4   5.2
 6.4  10.8  83.1   3.6   6.2   6.3  16.3  12.7   1.3   0.8
 8.8   5.1   3.7  26.3   6.0  48.0   8.2  11.7   7.2   3.9
15.3  16.6   8.8  12.0   4.7  14.7   6.4  17.0   2.5  16.2

Without any organization, it is difficult to get a sense of the data's most prominent features: what a typical (i.e., representative) value might be, whether values are highly concentrated about a typical value or quite dispersed, whether there are any gaps in the data, what fraction of the values are less than 20%, and so on. Figure 1.1 shows a histogram. In Section 1.2 we will discuss construction and interpretation of this graph.
For the moment, we hope you see how it describes the way the percentages are distributed over the range of possible values from 0 to 100.

[Figure 1.1: A MINITAB histogram for the charity fundraising % data]

Of the 60 charities, 36 use less than 10% on fundraising, and 18 use between 10% and 20%. Thus 54 out of the 60 charities in the sample, or 90%, spend less than 20% of money collected on fundraising. How much is too much? There is a delicate balance; most charities must spend money to raise money, but then money spent on fundraising is not available to help beneficiaries of the charity. Perhaps each individual giver should draw his or her own line in the sand. ■

Having obtained a sample from a population, an investigator would frequently like to use sample information to draw some type of conclusion (make an inference of some sort) about the population. That is, the sample is a means to an end rather than an end in itself. Techniques for generalizing from a sample to a population are gathered within the branch of our discipline called inferential statistics.

Human measurements provide a rich area of application for statistical methods. The article "A Longitudinal Study of the Development of Elementary School Children's Private Speech" (Merrill-Palmer Q., 1990: 443–463) reported on a study of children talking to themselves (private speech). It was thought that private speech would be related to IQ, because IQ is supposed to measure mental maturity, and it was known that private speech decreases as students progress through the primary grades.
The study included 33 students whose first-grade IQ scores are given here:

 82  96  99 102 103 103 106 107 108 108 108
108 109 110 110 111 113 113 113 113 115 115
118 118 119 121 122 122 127 132 136 140 146

Suppose we want an estimate of the average value of IQ for the first graders served by this school (if we conceptualize a population of all such IQs, we are trying to estimate the population mean). It can be shown that, with a high degree of confidence, the population mean IQ is between 109.2 and 118.2; we call this a confidence interval or interval estimate. The interval suggests that this is an above-average class, because the nationwide IQ average is around 100. ■

The main focus of this book is on presenting and illustrating methods of inferential statistics that are useful in research. The most important types of inferential procedures (point estimation, hypothesis testing, and estimation by confidence intervals) are introduced in Chapters 7–9 and then used in more complicated settings in Chapters 10–14. The remainder of this chapter presents methods from descriptive statistics that are most used in the development of inference.

Chapters 2–6 present material from the discipline of probability. This material ultimately forms a bridge between the descriptive and inferential techniques. Mastery of probability leads to a better understanding of how inferential procedures are developed and used, how statistical conclusions can be translated into everyday language and interpreted, and when and where pitfalls can occur in applying the methods. Probability and statistics both deal with questions involving populations and samples, but do so in an "inverse manner" to each other.
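Before pursuing that contrast, the IQ interval estimate quoted above can be reproduced numerically. A minimal sketch, assuming a 95% confidence level and a t critical value of 2.037 for 32 degrees of freedom (the level and critical value are our assumptions; the book develops the method itself in Chapter 8):

```python
from statistics import mean, stdev
from math import sqrt

iq = [82, 96, 99, 102, 103, 103, 106, 107, 108, 108, 108,
      108, 109, 110, 110, 111, 113, 113, 113, 113, 115, 115,
      118, 118, 119, 121, 122, 122, 127, 132, 136, 140, 146]

n = len(iq)                       # 33 first graders
xbar, s = mean(iq), stdev(iq)     # sample mean and standard deviation
t = 2.037                         # t critical value, 95% confidence, 32 df
half = t * s / sqrt(n)            # half-width of the interval
print(f"mean = {xbar:.1f}, 95% CI = ({xbar - half:.1f}, {xbar + half:.1f})")
# → mean = 113.7, 95% CI = (109.2, 118.2)
```

The computed interval agrees with the one stated in the text, and its center (the sample mean, about 113.7) is indeed well above the nationwide average of 100.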
In a probability problem, properties of the population under study are assumed known (e.g., in a numerical population, some specified distribution of the population values may be assumed), and questions regarding a sample taken from the population are posed and answered. In a statistics problem, characteristics of a sample are available to the experimenter, and this information enables the experimenter to draw conclusions about the population. The relationship between the two disciplines can be summarized by saying that probability reasons from the population to the sample (deductive reasoning), whereas inferential statistics reasons from the sample to the population (inductive reasoning). This is illustrated in Figure 1.2.

[Figure 1.2: The relationship between probability and inferential statistics]

Before we can understand what a particular sample can tell us about the population, we should first understand the uncertainty associated with taking a sample from a given population. This is why we study probability before statistics.

As an example of the contrasting focus of probability and inferential statistics, consider drivers' use of manual lap belts in cars equipped with automatic shoulder belt systems. (The article "Automobile Seat Belts: Usage Patterns in Automatic Belt Systems," Hum. Factors, 1998: 126–135, summarizes usage data.)
In probability, we might assume that 50% of all drivers of cars equipped in this way in a certain metropolitan area regularly use their lap belt (an assumption about the population), so we might ask, "How likely is it that a sample of 100 such drivers will include at least 70 who regularly use their lap belt?" or "How many of the drivers in a sample of size 100 can we expect to regularly use their lap belt?" On the other hand, in inferential statistics we have sample information available; for example, a sample of 100 drivers of such cars revealed that 65 regularly use their lap belt. We might then ask, "Does this provide substantial evidence for concluding that more than 50% of all such drivers in this area regularly use their lap belt?" In this latter scenario, we are attempting to use sample information to answer a question about the structure of the entire population from which the sample was selected.

Suppose, though, that a study involving a sample of 25 patients is carried out to investigate the efficacy of a new minimally invasive method for rotator cuff surgery. The amount of time that each individual subsequently spends in physical therapy is then determined. The resulting sample of 25 PT times is from a population that does not actually exist. Instead it is convenient to think of the population as consisting of all possible times that might be observed under similar experimental conditions. Such a population is referred to as a conceptual or hypothetical population. There are a number of problem situations in which we fit questions into the framework of inferential statistics by conceptualizing a population.

Sometimes an investigator must be very cautious about generalizing from the circumstances under which data has been gathered. For example, a sample of five engines with a new design may be experimentally manufactured and tested to investigate efficiency.
These five could be viewed as a sample from the conceptual population of all prototypes that could be manufactured under similar conditions, but not necessarily as representative of the population of units manufactured once regular production gets under way. Methods for using sample information to draw conclusions about future production units may be problematic. Similarly, a new drug may be tried on patients who arrive at a clinic, but there may be some question about how typical these patients are. They may not be representative of patients elsewhere or patients at the clinic next year. A good exposition of these issues is contained in the article "Assumptions for Statistical Inference" by Gerald Hahn and William Meeker (Amer. Statist., 1993: 1-11).

Collecting Data

Statistics deals not only with the organization and analysis of data once it has been collected but also with the development of techniques for collecting the data. If data is not properly collected, an investigator may not be able to answer the questions under consideration with a reasonable degree of confidence. One common problem is that the target population, the one about which conclusions are to be drawn, may be different from the population actually sampled. For example, advertisers would like various kinds of information about the television-viewing habits of potential customers. The most systematic information of this sort comes from placing monitoring devices in a small number of homes across the United States. It has been conjectured that placement of such devices in and of itself alters viewing behavior, so that characteristics of the sample may be different from those of the target population.

When data collection entails selecting individuals or objects from a list, the simplest method for ensuring a representative selection is to take a simple random sample.
This is one for which any particular subset of the specified size (e.g., a sample of size 100) has the same chance of being selected. For example, if the list consists of 1,000,000 serial numbers, the numbers 1, 2, ..., up to 1,000,000 could be placed on identical slips of paper. After placing these slips in a box and thoroughly mixing, slips could be drawn one by one until the requisite sample size has been obtained. Alternatively (and much to be preferred), a table of random numbers or a computer's random number generator could be employed.

Sometimes alternative sampling methods can be used to make the selection process easier, to obtain extra information, or to increase the degree of confidence in conclusions. One such method, stratified sampling, entails separating the population units into nonoverlapping groups and taking a sample from each one. For example, a manufacturer of DVD players might want information about customer satisfaction for units produced during the previous year. If three different models were manufactured and sold, a separate sample could be selected from each of the three corresponding strata. This would result in information on all three models and ensure that no one model was over- or underrepresented in the entire sample.

Frequently a "convenience" sample is obtained by selecting individuals or objects without systematic randomization. As an example, a collection of bricks may be stacked in such a way that it is extremely difficult for those in the center to be selected. If the bricks on the top and sides of the stack were somehow different from the others, resulting sample data would not be representative of the population. Often an investigator will assume that such a convenience sample approximates a random sample, in which case a statistician's repertoire of inferential methods can be used; however, this is a judgment call.
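The slips-in-a-box procedure for the 1,000,000 serial numbers is exactly what a computer's random number generator automates: a without-replacement draw in which every subset of the specified size is equally likely. A minimal sketch in Python (the population and sample sizes are just the illustrative figures from the text):

```python
import random

# Population: the 1,000,000 serial numbers from the text, labeled 1..1,000,000.
population = range(1, 1_000_001)

# A simple random sample of size 100: random.sample draws without
# replacement, so every subset of size 100 has the same chance of selection.
random.seed(1)  # fixed seed only so the draw is reproducible
sample = random.sample(population, k=100)

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- no serial number appears twice
```

Seeding is purely for reproducibility here; in practice the generator would be left unseeded so that each sample drawn is a fresh random selection.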
Most of the methods discussed herein are based on a variation of simple random sampling described in Chapter 6.

Researchers often collect data by carrying out some sort of designed experiment. This may involve deciding how to allocate several different treatments (such as fertilizers or drugs) to the various experimental units (plots of land or patients). Alternatively, an investigator may systematically vary the levels or categories of certain factors (e.g., amount of fertilizer or dose of a drug) and observe the effect on some response variable (such as corn yield or blood pressure).

An article in the New York Times (January 27, 1987) reported that heart attack risk could be reduced by taking aspirin. This conclusion was based on a designed experiment involving both a control group of individuals, who took a placebo having the appearance of aspirin but known to be inert, and a treatment group who took aspirin according to a specified regimen. Subjects were randomly assigned to the groups to protect against any biases and so that probability-based methods could be used to analyze the data. Of the 11,034 individuals in the control group, 189 subsequently experienced heart attacks, whereas only 104 of the 11,037 in the aspirin group had a heart attack. The incidence rate of heart attacks in the treatment group was only about half that in the control group. One possible explanation for this result is chance variation, that aspirin really doesn't have the desired effect and the observed difference is just typical variation in the same way that tossing two identical coins would usually produce different numbers of heads. However, in this case, inferential methods suggest that chance variation by itself cannot adequately explain the magnitude of the observed difference.

Exercises  Section 1.1 (1-9)

1. Give one possible sample of size 4 from each of the following populations:
a. All daily newspapers published in the United States
b. All companies listed on the New York Stock Exchange
c. All students at your college or university
d. All grade point averages of students at your college or university

2. For each of the following hypothetical populations, give a plausible sample of size 4:
a. All distances that might result when you throw a football
b. Page lengths of books published 5 years from now
c. All possible earthquake-strength measurements (Richter scale) that might be recorded in California during the next year
d. All possible yields (in grams) from a certain chemical reaction carried out in a laboratory

3. Consider the population consisting of all DVD players of a certain brand and model, and focus on whether a DVD player needs service while under warranty.
a. Pose several probability questions based on selecting a sample of 100 such DVD players.
b. What inferential statistics question might be answered by determining the number of such DVD players in a sample of size 100 that need warranty service?

4. a. Give three different examples of concrete populations and three different examples of hypothetical populations.
b. For one each of your concrete and your hypothetical populations, give an example of a probability question and an example of an inferential statistics question.

5. Many universities and colleges have instituted supplemental instruction (SI) programs, in which a student facilitator meets regularly with a small group of students enrolled in the course to promote discussion of course material and enhance subject mastery. Suppose that students in a large statistics course (what else?) are randomly divided into a control group that will not participate in SI and a treatment group that will participate. At the end of the term, each student's total score in the course is determined.
a. Are the scores from the SI group a sample from an existing population? If so, what is it? If not, what is the relevant conceptual population?
b. What do you think is the advantage of randomly dividing the students into the two groups rather than letting each student choose which group to join?
c. Why didn't the investigators put all students in the treatment group? [Note: The article "Supplemental Instruction: An Effective Component of Student Affairs Programming" (J. Coll. Stud. Dev., 1997: 577-586) discusses the analysis of data from several SI programs.]

6. The California State University (CSU) system consists of 23 campuses, from San Diego State in the south to Humboldt State near the Oregon border. A CSU administrator wishes to make an inference about the average distance between the hometowns of students and their campuses. Describe and discuss several different sampling methods that might be employed.

7. A certain city divides naturally into ten district neighborhoods. A real estate appraiser would like to develop an equation to predict appraised value from characteristics such as age, size, number of bathrooms, distance to the nearest school, and so on. How might she select a sample of single-family homes that could be used as a basis for this analysis?

8. The amount of flow through a solenoid valve in an automobile's pollution-control system is an important characteristic. An experiment was carried out to study how flow rate depended on three factors: armature length, spring load, and bobbin depth. Two different levels (low and high) of each factor were chosen, and a single observation on flow was made for each combination of levels.
a. The resulting data set consisted of how many observations?
b. Does this study involve sampling an existing population or a conceptual population?

9. In a famous experiment carried out in 1882, Michelson and Newcomb obtained 66 observations on the time it took for light to travel between two locations in Washington, D.C. A few of the measurements (coded in a certain manner) were 31, 23, 32, 36, 22, 26, 27, and 31.
a. Why are these measurements not identical?
b. Does this study involve sampling an existing population or a conceptual population?

1.2 Pictorial and Tabular Methods in Descriptive Statistics

There are two general types of methods within descriptive statistics. In this section we will discuss the first of these types, representing a data set using visual techniques. In Sections 1.3 and 1.4, we will develop some numerical summary measures for data sets. Many visual techniques may already be familiar to you: frequency tables, tally sheets, histograms, pie charts, bar graphs, scatter diagrams, and the like. Here we focus on a selected few of these techniques that are most useful and relevant to probability and inferential statistics.

Notation

Some general notation will make it easier to apply our methods and formulas to a wide variety of practical problems. The number of observations in a single sample, that is, the sample size, will often be denoted by n, so that n = 4 for the sample of universities {Stanford, Iowa State, Wyoming, Rochester} and also for the sample of pH measurements {6.3, 6.2, 5.9, 6.5}. If two samples are simultaneously under consideration, either m and n or n1 and n2 can be used to denote the numbers of observations. Thus if {3.75, 2.60, 3.20, 3.79} and {2.75, 1.20, 2.45} are grade point averages for students on a mathematics floor and the rest of the dorm, respectively, then m = 4 and n = 3.

Given a data set consisting of n observations on some variable x, the individual observations will be denoted by x1, x2, x3, ..., xn. The subscript bears no relation to the magnitude of a particular observation.
Thus x1 will not in general be the smallest observation in the set, nor will xn typically be the largest. In many applications, x1 will be the first observation gathered by the experimenter, x2 the second, and so on. The ith observation in the data set will be denoted by xi.

Stem-and-Leaf Displays

Consider a numerical data set x1, x2, ..., xn for which each xi consists of at least two digits. A quick way to obtain an informative visual representation of the data set is to construct a stem-and-leaf display.

STEPS FOR CONSTRUCTING A STEM-AND-LEAF DISPLAY
1. Select one or more leading digits for the stem values. The trailing digits become the leaves.
2. List possible stem values in a vertical column.
3. Record the leaf for every observation beside the corresponding stem value.
4. Order the leaves from smallest to largest on each line.
5. Indicate the units for stems and leaves someplace in the display.

If the data set consists of exam scores, each between 0 and 100, the score of 83 would have a stem of 8 and a leaf of 3. For a data set of automobile fuel efficiencies (mpg), all between 8.1 and 47.8, we could use the tens digit as the stem, so 32.6 would then have a leaf of 2.6. Usually, a display based on between 5 and 20 stems is appropriate.

For a simple example, assume a sample of seven test scores: 93, 84, 86, 78, 95, 81, 72. Then the first-pass stem plot would be

7 | 8 2
8 | 4 6 1
9 | 3 5

With the leaves ordered this becomes

7 | 2 8        stem: tens digit
8 | 1 4 6      leaf: ones digit
9 | 3 5

The use of alcohol by college students is of great concern not only to those in the academic community but also, because of potential health and safety consequences, to society at large. The article "Health and Behavioral Consequences of Binge Drinking in College" (J. Amer. Med. Assoc., 1994: 1672-1677) reported on a comprehensive study of heavy drinking on campuses across the United States.
A binge episode was defined as five or more drinks in a row for males and four or more for females. Figure 1.3 shows a stem-and-leaf display of 140 values of x = the percentage of undergraduate students who are binge drinkers. (These values were not given in the cited article, but our display agrees with a picture of the data that did appear.)

0 | 4
1 | 1345678889
2 | 1223456666777889999                     Stem: tens digit
3 | 0112233344555666677777888899999         Leaf: ones digit
4 | 11222223344445566666677788888999
5 | 00111222233455666667777888899
6 | 01111244455666778

Figure 1.3 Stem-and-leaf display for percentage binge drinkers at each of 140 colleges

The first leaf on the stem 2 row is 1, which tells us that 21% of the students at one of the colleges in the sample were binge drinkers. Without the identification of stem digits and leaf digits on the display, we wouldn't know whether the stem 2, leaf 1 observation should be read as 21%, 2.1%, or .21%.

The display suggests that a typical or representative value is in the stem 4 row, perhaps in the mid-40% range. The observations are not highly concentrated about this typical value, as would be the case if all values were between 20% and 49%. The display rises to a single peak as we move downward, and then declines; there are no gaps in the display. The shape of the display is not perfectly symmetric, but instead appears to stretch out a bit more in the direction of low leaves than in the direction of high leaves. Lastly, there are no observations that are unusually far from the bulk of the data (no outliers), as would be the case if one of the 26% values had instead been 86%. The most surprising feature of this data is that, at most colleges in the sample, at least one-quarter of the students are binge drinkers. The problem of heavy drinking on campuses is much more pervasive than many had suspected.
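The five construction steps given earlier can be sketched in a few lines of code. The helper below is an illustrative sketch (not from the text), using the tens digit as the stem and the ones digit as the leaf, applied to the seven test scores from the earlier example:

```python
from collections import defaultdict

def stem_and_leaf(data):
    """Ordered stem-and-leaf rows: tens digit as stem, ones digit as leaf."""
    rows = defaultdict(list)
    for x in data:
        rows[x // 10].append(x % 10)      # step 3: record each leaf by stem
    return [
        f"{stem}|" + "".join(str(leaf) for leaf in sorted(rows[stem]))
        for stem in sorted(rows)          # steps 2 and 4: order stems and leaves
    ]

scores = [93, 84, 86, 78, 95, 81, 72]     # the seven test scores
print(stem_and_leaf(scores))              # ['7|28', '8|146', '9|35']
```

For brevity this sketch lists only stems that actually occur; a faithful display would also list any empty stems between the smallest and largest so that gaps in the data remain visible.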
A stem-and-leaf display conveys information about the following aspects of the data:

• Identification of a typical or representative value
• Extent of spread about the typical value
• Presence of any gaps in the data
• Extent of symmetry in the distribution of values
• Number and location of peaks
• Presence of any outlying values

Figure 1.4 presents stem-and-leaf displays for a random sample of lengths of golf courses (yards) that have been designated by Golf Magazine as among the most challenging in the United States. Among the sample of 40 courses, the shortest is 6433 yards long, and the longest is 7280 yards. The lengths appear to be distributed in a roughly uniform fashion over the range of values in the sample. Notice that a stem choice here of either a single digit (6 or 7) or three digits (643, ..., 728) would yield an uninformative display, the first because of too few stems and the latter because of too many.

64 | 33 35 64 70              Stem: thousands and hundreds digits
65 | 06 26 27 83              Leaf: tens and ones digits
66 | 05 14 94
67 | 00 13 45 70 70 90 98
68 | 50 70 73 90
69 | 00 04 27 36
70 | 05 11 22 40 50 51
71 | 05 13 31 65 68 69
72 | 09 80

Figure 1.4 Stem-and-leaf displays of golf course yardages: (a) two-digit leaves, shown above; (b) display from MINITAB with truncated one-digit leaves

Dotplots

A dotplot is an attractive summary of numerical data when the data set is reasonably small or there are relatively few distinct data values. Each observation is represented by a dot above the corresponding location on a horizontal measurement scale. When a value occurs more than once, there is a dot for each occurrence, and these dots are stacked vertically. As with a stem-and-leaf display, a dotplot gives information about location, spread, extremes, and gaps.
Figure 1.5 shows a dotplot for the first-grade IQ data introduced in Example 1.2 in the previous section. A representative IQ value is around 110, and the data is fairly symmetric about the center.

Figure 1.5 A dotplot of the first-grade IQ scores

If the data set discussed in Example 1.6 had consisted of the IQ average from each of 100 classes, each recorded to the nearest tenth, it would have been much more cumbersome to construct a dotplot. Our next technique is well suited to such situations. It should be mentioned that for some software packages (including R) the dot plot is entirely different.

Histograms

Some numerical data is obtained by counting to determine the value of a variable (the number of traffic citations a person received during the last year, the number of persons arriving for service during a particular period), whereas other data is obtained by taking measurements (weight of an individual, reaction time to a particular stimulus). The prescription for drawing a histogram is generally different for these two cases.

Consider first data resulting from observations on a "counting variable" x. The frequency of any particular x value is the number of times that value occurs in the data set. The relative frequency of a value is the fraction or proportion of times the value occurs:

relative frequency of a value = (number of times the value occurs) / (number of observations in the data set)

Suppose, for example, that our data set consists of 200 observations on x = the number of major defects in a new car of a certain type. If 70 of these x values are 1, then

frequency of the x value 1:  70
relative frequency of the x value 1:  70/200 = .35

Multiplying a relative frequency by 100 gives a percentage; in the defect example, 35% of the cars in the sample had just one major defect.
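This frequency bookkeeping is easy to automate. In the sketch below, only the "70 of 200 observations equal 1" fact comes from the text; the remaining counts in the illustrative data set are invented to fill out the 200 observations:

```python
from collections import Counter

def relative_frequencies(data):
    """Map each distinct value to the proportion of observations equal to it."""
    n = len(data)
    return {value: count / n for value, count in sorted(Counter(data).items())}

# Hypothetical 200 observations on x = number of major defects per car;
# only the seventy 1's are given in the text, the other counts are made up.
defects = [0] * 80 + [1] * 70 + [2] * 35 + [3] * 15
rel = relative_frequencies(defects)

print(rel[1])             # 0.35, i.e. 35% of the cars had one major defect
print(sum(rel.values()))  # the proportions sum to 1 (up to rounding)
```

Multiplying each entry of `rel` by 100 gives the percentages discussed above.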
The relative frequencies, or percentages, are usually of more interest than the frequencies themselves. In theory, the relative frequencies should sum to 1, but in practice the sum may differ slightly from 1 because of rounding. A frequency distribution is a tabulation of the frequencies and/or relative frequencies.

A HISTOGRAM FOR COUNTING DATA
First, determine the frequency and relative frequency of each x value. Then mark possible x values on a horizontal scale. Above each value, draw a rectangle whose height is the relative frequency (or alternatively, the frequency) of that value.

This construction ensures that the area of each rectangle is proportional to the relative frequency of the value. Thus if the relative frequencies of x = 1 and x = 5 are .35 and .07, respectively, then the area of the rectangle above 1 is five times the area of the rectangle above 5.

How unusual is a no-hitter or a one-hitter in a major league baseball game, and how frequently does a team get more than 10, 15, or even 20 hits? Table 1.1 is a frequency distribution for the number of hits per team per game for all nine-inning games that were played between 1989 and 1993. Notice that a no-hitter happens only about once in 1000 games, and 22 or more hits occurs with about the same frequency. The corresponding histogram in Figure 1.6 rises rather smoothly to a single peak and then declines.
The histogram extends a bit more on the right (toward large values) than it does on the left, a slight "positive skew."

Table 1.1 Frequency distribution for hits in nine-inning games

Hits/game   Number of games   Relative frequency
 0                 20               .0010
 1                 72               .0037
 2                209               .0108
 3                527               .0272
 4               1048               .0541
 5               1457               .0752
 6               1988               .1026
 7               2256               .1164
 8               2403               .1240
 9               2256               .1164
10               1967               .1015
11               1509               .0779
12               1230               .0635
13                834               .0430
14                569               .0294
15                393               .0203
16                253               .0131
17                171               .0088
18                 97               .0050
19                 53               .0027
20                 31               .0016
21                 19               .0010
22                 13               .0007
23                  5               .0003
24                  1               .0001
25                  0               .0000
26                  1               .0001
27                  1               .0001
Total          19,383              1.0005

Figure 1.6 Histogram of number of hits per nine-inning game

Either from the tabulated information or from the histogram itself, we can determine the following:

proportion of games with at most two hits
  = (relative frequency for x = 0) + (relative frequency for x = 1) + (relative frequency for x = 2)
  = .0010 + .0037 + .0108 = .0155

Similarly,

proportion of games with between 5 and 10 hits (inclusive) = .0752 + .1026 + ... + .1015 = .6361

That is, roughly 64% of all these games resulted in between 5 and 10 (inclusive) hits.

Constructing a histogram for measurement data (observations on a "measurement variable") entails subdividing the measurement axis into a suitable number of class intervals or classes, such that each observation is contained in exactly one class. Suppose, for example, that we have 50 observations on x = fuel efficiency of an automobile (mpg), the smallest of which is 27.8 and the largest of which is 31.4. Then we could use the class boundaries 27.5, 28.0, 28.5, ..., and 31.5 as shown here:

27.5   28.0   28.5   29.0   29.5   30.0   30.5   31.0   31.5

One potential difficulty is that occasionally an observation falls on a class boundary and therefore does not lie in exactly one interval, for example, 29.0. One way to deal with this problem is to use boundaries like 27.55, 28.05, ..., 31.55. Adding a hundredths digit to the class boundaries prevents observations from falling on the resulting boundaries. The approach that we will follow is to write the class intervals as 27.5-28, 28-28.5, and so on and use the convention that any observation falling on a class boundary will be included in the class to the right of the observation. Thus 29.0 would go in the 29-29.5 class rather than the 28.5-29 class. This is how MINITAB constructs a histogram. However, the default histogram in R does it the other way, with 29.0 going into the 28.5-29.0 class.

A HISTOGRAM FOR MEASUREMENT DATA: EQUAL CLASS WIDTHS
Determine the frequency and relative frequency for each class. Mark the class boundaries on a horizontal measurement axis. Above each class interval, draw a rectangle whose height is the corresponding relative frequency (or frequency).

Power companies need information about customer usage to obtain accurate forecasts of demands. Investigators from Wisconsin Power and Light determined energy consumption (BTUs) during a particular period for a sample of 90 gas-heated homes. An adjusted consumption value was calculated as follows:

adjusted consumption = consumption / [(weather, in degree days)(house area)]

This resulted in the accompanying data (part of the stored data set FURNACE.MTW available in MINITAB), which we have ordered from smallest to largest.

 2.97   4.00   5.20   5.56   5.94   5.98   6.35   6.62   6.72   6.78
 6.80   6.85   6.94   7.15   7.16   7.23   7.29   7.62   7.62   7.69
 7.73   7.87   7.93   8.00   8.26   8.29   8.37   8.47   8.54   8.58
 8.61   8.67   8.69   8.81   9.07   9.27   9.37   9.43   9.52   9.58
 9.60   9.76   9.82   9.83   9.83   9.84   9.96  10.04  10.21  10.28
10.28  10.30  10.35
10.36  10.40  10.49  10.50  10.64  10.95  11.09  11.12  11.21  11.29
11.43  11.62  11.70  11.70  12.16  12.19  12.28  12.31  12.62  12.69
12.71  12.91  12.92  13.11  13.38  13.42  13.43  13.47  13.60  13.96
14.24  14.35  15.12  15.24  16.06  16.90  18.26

We let MINITAB select the class intervals. The most striking feature of the histogram in Figure 1.7 is its resemblance to a bell-shaped (and therefore symmetric) curve, with the point of symmetry roughly at 10.

Figure 1.7 Histogram of the energy consumption data from Example 1.8

Class                1-3   3-5   5-7   7-9   9-11  11-13  13-15  15-17  17-19
Frequency             1     1    11    21    25     17      9      4      1
Relative frequency  .011  .011  .122  .233  .278   .189   .100   .044   .011

From the histogram,

proportion of observations less than 9 ≈ .01 + .01 + .12 + .23 = .37  (exact value: 34/90 = .378)

The relative frequency for the 9-11 class is about .27, so we estimate that roughly half of this, or .135, is between 9 and 10. Thus

proportion of observations less than 10 ≈ .37 + .135 = .505  (slightly more than 50%)

The exact value of this proportion is 47/90 = .522.

There are no hard-and-fast rules concerning either the number of classes or the choice of classes themselves. Between 5 and 20 classes will be satisfactory for most data sets. Generally, the larger the number of observations in a data set, the more classes should be used. A reasonable rule of thumb is

number of classes ≈ √(number of observations)

Equal-width classes may not be a sensible choice if a data set "stretches out" to one side or the other. Figure 1.8 shows a dotplot of such a data set.
Using a small number of equal-width classes results in almost all observations falling in just one or two of the classes. If a large number of equal-width classes are used, many classes will have zero frequency. A sound choice is to use a few wider intervals near extreme observations and narrower intervals in the region of high concentration.

Figure 1.8 Selecting class intervals for "stretched-out" dots: (a) many short equal-width intervals; (b) a few wide equal-width intervals; (c) unequal-width intervals

A HISTOGRAM FOR MEASUREMENT DATA: UNEQUAL CLASS WIDTHS
After determining frequencies and relative frequencies, calculate the height of each rectangle using the formula

rectangle height = (relative frequency of the class) / (class width)

The resulting rectangle heights are usually called densities, and the vertical scale is the density scale. This prescription will also work when class widths are equal.

There were 106 active players on the two Super Bowl teams (Green Bay and Pittsburgh) of 2011. Here are their weights in order:

180 180 184 185 186 190 190 191 191 191 194 195 195 196 198 199 200 200
200 200 200 202 203 205 205 207 207 207 208 208 208 209 209 213 215 216
216 217 218 219 225 225 225 229 230 230 231 233 234 235 236 238 239 241
242 243 245 245 247 248 250 250 250 252 252 254 255 255 255 256 260 262
263 265 270 280 285 285 290 298 300 300 304 305 305 305 305 306 308 308
314 315 316 318 318 318 319 320 324 325 325 337 338 340 344 365

and here they are in categories:

Class               180-  190-  200-  210-  220-  240-  260-  300-  310-  320-  330-
                     190   200   210   220   240   260   300   310   320   330   370
Frequency             5    11    17     7    13    17    10    10     7     4     5
Relative frequency  .047  .104  .160  .066  .123  .160  .094  .094  .066  .038  .047
Density            .0047 .0104 .0160 .0066 .0061 .0080 .0024 .0094 .0066 .0038 .0012

The resulting histogram appears in Figure 1.9.
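The density row of the table can be reproduced directly from the boxed formula. Below is a sketch (not MINITAB output) using the class edges and frequencies from the weight table; the final computation previews the area property discussed shortly, that the rectangle areas are the relative frequencies and therefore sum to 1:

```python
# Class boundaries and frequencies from the Super Bowl weight table.
edges = [180, 190, 200, 210, 220, 240, 260, 300, 310, 320, 330, 370]
freqs = [5, 11, 17, 7, 13, 17, 10, 10, 7, 4, 5]
n = sum(freqs)  # 106 players in all

# density = (relative frequency of the class) / (class width)
densities = [
    (f / n) / (right - left)
    for left, right, f in zip(edges, edges[1:], freqs)
]
print(round(densities[0], 4))  # 0.0047 for the 180-190 class
print(round(densities[4], 4))  # 0.0061 for the 220-240 class

# Each rectangle's area is width * density = relative frequency,
# so the total area of the density histogram is 1.
total_area = sum(d * (r - l) for d, l, r in zip(densities, edges, edges[1:]))
print(round(total_area, 2))    # 1.0
```

Note that the two classes with relative frequency .160 (200-210 and 240-260) get different heights, .0160 versus .0080, precisely because their widths differ; plotting relative frequency instead of density here would distort the areas.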
Figure 1.9 A MINITAB density histogram for the weight data of Example 1.9

This histogram has three rather distinct peaks: the first corresponding to lightweight players like defensive backs and wide receivers, the second to "medium weight" players like linebackers, and the third to the heavyweights who play offensive or defensive line positions.

When class widths are unequal, not using a density scale will give a picture with distorted areas. For equal class widths, the divisor is the same in each density calculation, and the extra arithmetic simply results in a rescaling of the vertical axis (i.e., the histogram using relative frequency and the one using density will have exactly the same appearance).

A density histogram does have one interesting property. Multiplying both sides of the formula for density by the class width gives

relative frequency = (class width)(density) = (rectangle width)(rectangle height) = rectangle area

That is, the area of each rectangle is the relative frequency of the corresponding class. Furthermore, because the sum of relative frequencies must be 1.0 (except for roundoff), the total area of all rectangles in a density histogram is 1. It is always possible to draw a histogram so that the area equals the relative frequency (this is true also for a histogram of counting data); just use the density scale. This property will play an important role in creating models for distributions in Chapter 4.

Histogram Shapes

Histograms come in a variety of shapes. A unimodal histogram is one that rises to a single peak and then declines. A bimodal histogram has two different peaks. Bimodality can occur when the data set consists of observations on two quite different kinds of individuals or objects.
For example, consider a large data set consisting of driving times for automobiles traveling between San Luis Obispo and Monterey in California (exclusive of stopping time for sightseeing, eating, etc.). This histogram would show two peaks, one for those cars that took the inland route (roughly 2.5 h) and another for those cars traveling up the coast (3.5-4 h). However, bimodality does not automatically follow in such situations. Only if the two separate histograms are "far apart" relative to their spreads will bimodality occur in the histogram of combined data. Thus a large data set consisting of heights of college students should not result in a bimodal histogram because the typical male height of about 69 in. is not far enough above the typical female height of about 64-65 in. A histogram with more than two peaks is said to be multimodal.

A histogram is symmetric if the left half is a mirror image of the right half. A unimodal histogram is positively skewed if the right or upper tail is stretched out compared with the left or lower tail and negatively skewed if the stretching is to the left. Figure 1.10 shows "smoothed" histograms, obtained by superimposing a smooth curve on the rectangles, that illustrate the various possibilities.

Figure 1.10 Smoothed histograms: (a) symmetric unimodal; (b) bimodal; (c) positively skewed; and (d) negatively skewed

Qualitative Data

Both a frequency distribution and a histogram can be constructed when the data set is qualitative (categorical) in nature; in this case, "bar graph" is synonymous with "histogram." Sometimes there will be a natural ordering of classes (for example, freshmen, sophomores, juniors, seniors, graduate students) whereas in other cases the order will be arbitrary (for example, Catholic, Jewish, Protestant, and the like).
With such categorical data, the intervals above which rectangles are constructed should have equal width.

Each member of a sample of 120 individuals owning motorcycles was asked for the name of the manufacturer of his or her bike. The frequency distribution for the resulting data is given in Table 1.2 and the histogram is shown in Figure 1.11.

Table 1.2 Frequency distribution for motorcycle data

Manufacturer          Frequency    Relative frequency
1. Honda                  41             .34
2. Yamaha                 27             .23
3. Kawasaki               20             .17
4. Harley-Davidson        18             .15
5. BMW                     3             .03
6. Other                  11             .09
                         120            1.01

Figure 1.11 Histogram for motorcycle data

Multivariate Data

The techniques presented so far have been exclusively for situations in which each observation in a data set is either a single number or a single category. Often, however, the data is multivariate in nature. That is, if we obtain a sample of individuals or objects and on each one we make two or more measurements, then each "observation" would consist of several measurements on one individual or object. The sample is bivariate if each observation consists of two measurements or responses, so that the data set can be represented as (x₁, y₁), ..., (xₙ, yₙ). For example, x might refer to engine size and y to horsepower, or x might refer to brand of calculator owned and y to academic major. We briefly consider the analysis of multivariate data in several later chapters.

Exercises | Section 1.2 (10–29)

10. Consider the IQ data given in Example 1.2.
a. Construct a stem-and-leaf display of the data. What appears to be a representative IQ value? Do the observations appear to be highly concentrated about the representative value or rather spread out?
b. Does the display appear to be reasonably symmetric about a representative value, or would you describe its shape in some other way?
c. Do there appear to be any outlying IQ values?
d. What proportion of IQ values in this sample exceed 100?

11. Every score in the following batch of exam scores is in the 60's, 70's, 80's, or 90's. A stem-and-leaf display with only the four stems 6, 7, 8, and 9 would not give a very detailed description of the distribution of scores. In such situations, it is desirable to use repeated stems. Here we could repeat the stem 6 twice, using 6L for scores in the low 60's (leaves 0, 1, 2, 3, and 4) and 6H for scores in the high 60's (leaves 5, 6, 7, 8, and 9). Similarly, the other stems can be repeated twice to obtain a display consisting of eight rows. Construct such a display for the given scores. What feature of the data is highlighted by this display?

74 89 80 93 64 67 72 70 66 85 89 81 81
71 74 82 85 63 72 81 81 95 84 81 80 70
69 66 60 83 85 98 84 68 90 82 69 72 87
88

12. The accompanying specific gravity values for various wood types used in construction appeared in the article "Bolted Connection Design Values Based on European Yield Model" (J. Struct. Engrg., 1993: 2169–2186):

.31 .35 .36 .36 .37 .38 .40 .40 .40
.41 .41 .42 .42 .42 .42 .42 .43 .44
.45 .46 .46 .47 .48 .48 .48 .51 .54
.54 .55 .58 .62 .66 .66 .67 .68 .75

Construct a stem-and-leaf display using repeated stems (see the previous exercise), and comment on any interesting features of the display.

13. The accompanying data set consists of observations on shower-flow rate (L/min) for a sample of n = 129 houses in Perth, Australia ("An Application of Bayes Methodology to the Analysis of Diary Records in a Water Use Study," J. Amer. Statist. Assoc., 1987: 705–711):

4.6 12.3 7.1 7.0 4.0 9.2 6.7 6.9 11.5 5.1
…
8.3 6.5 7.6 9.3 9.2 7.3 5.0 6.3 13.8 6.2
5.4 4.8 7.5 6.0 6.9 10.8 7.5 6.6 5.0 3.3
…
8.4 7.3 10.3 11.9 6.0 5.6 9.5 9.3 10.4 9.7
5.1 6.7 10.2 6.2 8.4 7.0 4.8 5.6 10.5 14.6
10.8 15.5 7.5 6.4 3.4 5.5 6.6 5.9 15.0 9.6
7.8 7.0 6.9 4.1 3.6 11.9 3.7 5.7 6.8 11.3
9.3 9.6 10.4 9.3 6.9 9.8 9.1 10.6 4.5 6.2
8.3 3.2 4.9 5.0 6.0 8.2 6.3 3.8 6.0

a. Construct a stem-and-leaf display of the data.
b. What is a typical, or representative, flow rate?
c. Does the display appear to be highly concentrated or spread out?
d. Does the distribution of values appear to be reasonably symmetric? If not, how would you describe the departure from symmetry?
e. Would you describe any observation as being far from the rest of the data (an outlier)?

14. Do running times of American movies differ somehow from running times of French movies? The authors investigated this question by randomly selecting 25 recent movies of each type, resulting in the following running times:

94 90 98 93 …
… 116 … 102 … 110
96 … 116 90 97 103 95
…
95 125 122 103 96 111 81
119 128 93 …

Construct a comparative stem-and-leaf display by listing stems in the middle of your paper and then placing the Am leaves out to the left and the Fr leaves out to the right. Then comment on interesting features of the display.

15. Temperature transducers of a certain type are shipped in batches of 50. A sample of 60 batches was selected, and the number of transducers in each batch not conforming to design specifications was determined, resulting in the following data:

5 1 7 4 0 1 3 2 0 5 3 3 1 3 2 4 7 0 2 3
0 4 2 1 3 1 1 3 4 1 2 3 2 2 8 4 5 1 3 1
5 0 2 3 2 1 0 6 4 2 1 6 0 3 3 3 6 1 2 3

a. Determine frequencies and relative frequencies for the observed values of x = number of nonconforming transducers in a batch.
b. What proportion of batches in the sample have at most five nonconforming transducers? What proportion have fewer than five? What proportion have at least five nonconforming units?
c. Draw a histogram of the data using relative frequency on the vertical scale, and comment on its features.

16. In a study of author productivity ("Lotka's Test," Collection Manage., 1982: 111–118), a large number of authors were classified according to the number of articles they had published during a certain period. The results were presented in the accompanying frequency distribution:

Number of papers   1  2  3  4  5  6  7  8
Frequency          …  …  …  …  …  …  …  …

Number of papers   9  10  11  12  13  14  15  16  17
Frequency          …   …   …   …   …   …   5   3   3

a. Construct a histogram corresponding to this frequency distribution. What is the most interesting feature of the shape of the distribution?
b. What proportion of these authors published at least five papers? At least ten papers? More than ten papers?
c. Suppose the five 15's, three 16's, and three 17's had been lumped into a single category displayed as "≥15." Would you be able to draw a histogram? Explain.
d. Suppose that instead of the values 15, 16, and 17 being listed separately, they had been combined into a 15–17 category with frequency 11. Would you be able to draw a histogram? Explain.

17. The article "Ecological Determinants of Herd Size in the Thornicroft's Giraffe of Zambia" (Afric. J. Ecol., 2010: 962–971) gave the following data (read from a graph) on herd size for a sample of 1570 herds over a 34-year period.

Herd size    1    2    3    4    5   6   7   8
Frequency  589  190  176  157  115  89  57  55

Herd size    …    …    …    …    …   …   …   …
Frequency   33   31   22   10    4  10  11   5

Herd size   18   19   20   22   23  24  26  32
Frequency    2    4    2    2    2   2   1   4

a. What proportion of the sampled herds had just one giraffe?
b. What proportion of the sampled herds had six or more giraffes (characterized in the article as "large herds")?
c. What proportion of the sampled herds had between five and ten giraffes, inclusive?
d. Draw a histogram using relative frequency on the vertical axis. How would you describe the shape of the histogram?

18. The article "Determination of Most Representative Subdivision" (J. Energy Engrg., 1993: 43–55) gave data on various characteristics of subdivisions that could be used in deciding whether to provide electrical power using overhead lines or underground lines. Here are the values of the variable x = total length of streets within a subdivision:

1280 5320 4390 2100 1240 3060 4770
…
1320 530 3350 540 3870 1250 2400
960 1120 2120 450 2250 2320 2400
3150 5700 5220 500 1850 2460 5850
2700 2730 1670 100 5770 3150 1890
510 240 396 1419 2109

a. Construct a stem-and-leaf display using the thousands digit as the stem and the hundreds digit as the leaf, and comment on the various features of the display.
b. Construct a histogram using class boundaries 0, 1000, 2000, 3000, 4000, 5000, and 6000. What proportion of subdivisions have total length less than 2000? Between 2000 and 4000? How would you describe the shape of the histogram?

19. The article cited in Exercise 18 also gave the following values of the variables y = number of culs-de-sac and z = number of intersections:

y: …
z: 2 0 3 0 1 1 0 1 3 2 4 6 6 0 1 1 8 3 3 5 …

a. Construct a histogram for the y data. What proportion of these subdivisions had no culs-de-sac? At least one cul-de-sac?
b. Construct a histogram for the z data. What proportion of these subdivisions had at most five intersections? Fewer than five intersections?

20. How does the speed of a runner vary over the course of a marathon (a distance of 42.195 km)? Consider determining both the time to run the first 5 km and the time to run between the 35-km and 40-km points, and then subtracting the former time from the latter time. A positive value of this difference corresponds to a runner slowing down toward the end of the race. The accompanying histogram is based on times of runners who participated in several different Japanese marathons ("Factors Affecting Runners' Marathon Performance," Chance, Fall 1993: 24–30). What are some interesting features of this histogram? What is a typical difference value? Roughly what proportion of the runners ran the late distance more quickly than the early distance?
Histogram for Exercise 20: frequency (vertical axis, 0 to 200) versus time difference (horizontal axis, −100 to 800).

21. In a study of warp breakage during the weaving of fabric (Technometrics, 1982: 63), 100 specimens of yarn were tested. The number of cycles of strain to breakage was determined for each yarn specimen, resulting in the following data:

86 146 251 653 98 249 400 292 131 169
175 176 76 264 15 364 195 262 88 264
157 220 42 321 180 198 38 20 61 121
282 224 149 180 325 250 196 90 229 166
38 337 65 151 341 40 40 135 597 246
211 180 93 315 353 571 124 279 81 186
497 182 423 185 229 400 338 290 398 71
246 185 188 568 55 55 61 244 20 284
393 396 203 829 239 236 286 194 277 143
198 264 105 203 124 137 135 350 193 188

a. Construct a relative frequency histogram based on the class intervals 0–100, 100–200, …, and comment on features of the distribution.
b. Construct a histogram based on the following class intervals: 0–50, 50–100, 100–150, 150–200, 200–300, 300–400, 400–500, 500–600, 600–900.
c. If weaving specifications require a breaking strength of at least 100 cycles, what proportion of the yarn specimens in this sample would be considered satisfactory?

22. The accompanying data set consists of observations on shear strength (lb) of ultrasonic spot welds made on a type of alclad sheet. Construct a relative frequency histogram based on ten equal-width classes with boundaries 4000, 4200, … . [The histogram will agree with the one in "Comparison of Properties of Joints Prepared by Ultrasonic Welding and Other Means" (J. Aircraft, 1983: 552–556).] Comment on its features.

5434 4948 4521 4570 4990 5702 5241
5112 5015 4659 4806 4637 5670 4381
4820 5043 4886 4599 5288 5299 4848
5378 5260 5055 5828 5218 4859 4780
5027 5008 4609 4772 5133 5095 4618
4848 5089 5518 5333 5164 5342 5069
4755 4925 5001 4803 4951 5679 5256
5207 5621 4918 5138 4786 4500 5461
5049 4974 4592 4173 5296 4965 5170
4740 5173 4568 5653 5078 4900 4968
5248 5245 4723 5275 5419 5205 4452
5227 5555 5388 5498 4681 5076 4774
4931 4493 5309 5582 4308 4823 4417
5364 5640 5069 5188 5764 5273 5042
5189 4986

23. A transformation of data values by means of some mathematical function, such as √x or 1/x, can often yield a set of numbers that has "nicer" statistical properties than the original data. In particular, it may be possible to find a function for which the histogram of transformed values is more symmetric (or, even better, more like a bell-shaped curve) than the original data. As an example, the article "Time-Lapse Cinematographic Analysis of Beryllium-Lung Fibroblast Interactions" (Environ. Res., 1983: 34–43) reported the results of experiments designed to study the behavior of certain individual cells that had been exposed to beryllium. An important characteristic of such an individual cell is its interdivision time (IDT). IDTs were determined for a large number of cells both in exposed (treatment) and unexposed (control) conditions. The authors of the article used a logarithmic transformation, that is, transformed value = log₁₀(original value). Consider the following representative IDT data:

… 31.2 13.7 46.0 25.8 16.8 34.8
62.3 28.0 17.9 19.5 21.1 31.9 28.9
60.1 23.7 18.6 21.4 26.6 26.2 32.0
…
48.9 21.4 20.7 57.3 40.9

Use class intervals 10–20, 20–30, … to construct a histogram of the original data. Use intervals 1.1–1.2, 1.2–1.3, … to do the same for the transformed data. What is the effect of the transformation?

24. Unlike most packaged food products, alcohol beverage container labels are not required to show calorie or nutrient content. The article "What Am I Drinking? The Effects of Serving Facts Information on Alcohol Beverage Containers" (J. of Consumer Affairs, 2008: 81–99) reported on a pilot study in which each individual in a sample was asked to estimate the calorie content of a 12 oz can of light beer known to contain 103 cal. The following information appeared in the article:

Class        Percentage
0–<50            …
50–<75           …
75–<100         23
100–<125        31
125–<150         …
…                …

a. Construct a histogram of the data and comment on any interesting features.
b. What proportion of the estimates were at least 100? Less than 200?

25. The article "Study on the Life Distribution of Microdrills" (J. Engrg. Manuf., 2002: 301–305) reported the following observations, listed in increasing order, on drill lifetime (number of holes that a drill machines before it breaks) when holes were drilled in a certain brass alloy.

11 14 20 23 31 36 39 44 47 50
50 61 65 67 68 71 74 76 78 79
81 84 85 89 91 93 96 99 101 104
105 105 112 118 123 136 139 141 148 158
161 168 184 206 248 263 289 322 388 513

a. Construct a frequency distribution and histogram of the data using class boundaries 0, 50, 100, …, and then comment on interesting characteristics.
b. Construct a frequency distribution and histogram of the natural logarithms of the lifetime observations, and comment on interesting characteristics.
c. What proportion of the lifetime observations in this sample are less than 100? What proportion of the observations are at least …?

26. Consider the following data on type of health complaint (J = joint swelling, F = fatigue, B = back pain, M = muscle weakness, C = coughing, N = nose running/irritation, O = other) made by tree planters. Obtain frequencies and relative frequencies for the various categories, and draw a histogram. (The data is consistent with percentages given in the article "Physiological Effects of Work Stress and Pesticide Exposure in Tree Planting by British Columbia Silviculture Workers," Ergonomics, 1993: 951–961.)

O O N J C F B B F O J O O M
O F F O O N O N J F J B O C
J O J J F N O B M O J M O B
O F J O O B N C O O O M B F
J O O F N

27. A Pareto diagram is a variation of a histogram for categorical data resulting from a quality control study. Each category represents a different type of product nonconformity or production problem. The categories are ordered so that the one with the largest frequency appears on the far left, then the category with the second largest frequency, and so on. Suppose the following information on nonconformities in circuit packs is obtained: failed component, 126; incorrect component, 210; insufficient solder, 67; excess solder, 54; missing component, 131. Construct a Pareto diagram.

28. The cumulative frequency and cumulative relative frequency for a particular class interval are the sum of frequencies and relative frequencies, respectively, for that interval and all intervals lying below it. If, for example, there are four intervals with frequencies 9, 16, 13, and 12, then the cumulative frequencies are 9, 25, 38, and 50, and the cumulative relative frequencies are .18, .50, .76, and 1.00. Compute the cumulative frequencies and cumulative relative frequencies for the data of Exercise 22.

29. Fire load (MJ/m²) is the heat energy that could be released per square meter of floor area by combustion of contents and the structure itself. The article "Fire Loads in Office Buildings" (J. Struct. Engrg., 1997: 365–368) gave the following cumulative percentages (read from a graph) for fire loads in a sample of 388 rooms:

Value          0    150   300   450   600
Cumulative %   0   19.3  37.6  62.7  77.5

Value         750   900  1050  1200  1350
Cumulative % 87.2  93.8  95.7  98.6  99.1

Value        1500  1650  1800  1950
Cumulative % 99.5  99.6  99.8  100.0

a. Construct a relative frequency histogram and comment on interesting features.
b. What proportion of fire loads are less than 600? At least 1200?
c. What proportion of the loads are between 600 and 1200?
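The cumulative-frequency computation described above (four class intervals with frequencies 9, 16, 13, and 12) can be sketched directly with the standard library. This is an illustrative sketch, not part of the text:

```python
# Sketch of cumulative frequencies and cumulative relative frequencies
# for four class intervals with frequencies 9, 16, 13, and 12.
from itertools import accumulate

freqs = [9, 16, 13, 12]
n = sum(freqs)                                # 50 observations in all
cum_freqs = list(accumulate(freqs))           # running totals of frequencies
cum_rel_freqs = [cf / n for cf in cum_freqs]  # running totals / n

print(cum_freqs)                              # [9, 25, 38, 50]
print([round(c, 2) for c in cum_rel_freqs])   # [0.18, 0.5, 0.76, 1.0]
```

The last cumulative relative frequency is always 1.00, since every observation lies at or below the top interval.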
Measures of Location

Visual summaries of data are excellent tools for obtaining preliminary impressions and insights. More formal data analysis often requires the calculation and interpretation of numerical summary measures. That is, from the data we try to extract several summarizing numbers: numbers that might serve to characterize the data set and convey some of its most important features. Our primary concern will be with numerical data; some comments regarding categorical data appear at the end of the section.

Suppose, then, that our data set is of the form x₁, x₂, ..., xₙ, where each xᵢ is a number. What features of such a set of numbers are of most interest and deserve emphasis? One important characteristic of a set of numbers is its location, and in particular its center. This section presents methods for describing the location of a data set; in Section 1.4 we will turn to methods for measuring variability in a set of numbers.

The Mean

For a given set of numbers x₁, x₂, ..., xₙ, the most familiar and useful measure of the center is the mean, or arithmetic average, of the set. Because we will almost always think of the xᵢ's as constituting a sample, we will often refer to the arithmetic average as the sample mean and denote it by x̄.

DEFINITION The sample mean x̄ of observations x₁, x₂, ..., xₙ is given by

x̄ = (x₁ + x₂ + ⋯ + xₙ)/n = (Σᵢ₌₁ⁿ xᵢ)/n

The numerator of x̄ can be written more informally as Σxᵢ, where the summation is over all sample observations.

For reporting x̄, we recommend using decimal accuracy of one digit more than the accuracy of the xᵢ's. Thus if observations are stopping distances with x₁ = 125, x₂ = 131, and so on, we might have x̄ = 127.3 ft.

Example 1.11 A class was assigned to make wingspan measurements at home. The wingspan is the horizontal measurement from fingertip to fingertip with outstretched arms. Here are the measurements given by 21 of the students.
x₁ = 60   x₂ = 64   x₃ = 72   x₄ = 63   x₅ = 66   x₆ = 62   x₇ = 75
x₈ = 66   x₉ = 59   x₁₀ = 75  x₁₁ = 69  x₁₂ = 62  x₁₃ = 63  x₁₄ = 61
x₁₅ = 65  x₁₆ = 67  x₁₇ = 65  x₁₈ = 69  x₁₉ = 95  x₂₀ = 60  x₂₁ = 70

Figure 1.12 shows a stem-and-leaf display of the data; a wingspan in the 60's appears to be "typical."

5H | 9
6L | 00122334
6H | 5566799
7L | 02
7H | 55
8L |
8H |
9L | 5

Figure 1.12 A stem-and-leaf display of the wingspan data

With Σxᵢ = 1408, the sample mean is

x̄ = 1408/21 = 67.0

a value consistent with information conveyed by the stem-and-leaf display.

A physical interpretation of x̄ demonstrates how it measures the location (center) of a sample. Think of drawing and scaling a horizontal measurement axis, and then representing each sample observation by a 1-lb weight placed at the corresponding point on the axis. The only point at which a fulcrum can be placed to balance the system of weights is the point corresponding to the value of x̄ (see Figure 1.13). The system balances because, as shown in the next section, Σ(xᵢ − x̄) = 0, so the net total tendency to turn about x̄ is 0.

Figure 1.13 The mean (67.0) as the balance point for a system of weights

Just as x̄ represents the average value of the observations in a sample, the average of all values in the population can in principle be calculated. This average is called the population mean and is denoted by the Greek letter μ. When there are N values in the population (a finite population), then μ = (sum of the N population values)/N. In Chapters 3 and 4, we will give a more general definition for μ that applies to both finite and (conceptually) infinite populations. Just as x̄ is an interesting and important measure of sample location, μ is an interesting and important (often the most important) characteristic of a population. In the chapters on statistical inference, we will present methods based on the sample mean for drawing conclusions about a population mean.
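The wingspan calculation above, including the balance-point property Σ(xᵢ − x̄) = 0, can be verified in a few lines of Python (the values are transcribed from the example; this is an illustrative sketch, not part of the text):

```python
# Sketch of the sample-mean computation for the wingspan data above
# (21 student measurements, in inches).
wingspans = [60, 64, 72, 63, 66, 62, 75, 66, 59, 75, 69,
             62, 63, 61, 65, 67, 65, 69, 95, 60, 70]

n = len(wingspans)         # 21 observations
total = sum(wingspans)     # 1408
xbar = total / n           # sample mean

print(round(xbar, 1))      # 67.0, matching the text

# Balance-point property: the deviations from the mean sum to zero
# (up to floating-point rounding).
print(abs(sum(x - xbar for x in wingspans)) < 1e-9)
```

Reporting the mean to one more decimal place than the data (67.0 in. for whole-inch measurements) follows the convention stated above.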
For example, we might use the sample mean x̄ = 67.0 computed in Example 1.11 as a point estimate (a single number that is our "best" guess) of μ = the true average wingspan for all students in introductory statistics classes.

The mean suffers from one deficiency that makes it an inappropriate measure of center under some circumstances: its value can be greatly affected by the presence of even a single outlier (unusually large or small observation). In Example 1.11, the value x₁₉ = 95 is obviously an outlier. Without this observation, x̄ = 1313/20 = 65.7; the outlier increases the mean by 1.3 in. The value 95 is clearly an error; this student is only 70 in. tall, and there is no way such a student could have a wingspan of almost 8 ft. As Leonardo da Vinci noticed, wingspan is usually quite close to height.

Data on housing prices in various metropolitan areas often contains outliers (those lucky enough to live in palatial accommodations), in which case the use of average price as a measure of center will typically be misleading. We will momentarily propose an alternative to the mean, namely the median, that is insensitive to outliers (recent New York City data gave a median price of less than $700,000 and a mean price exceeding $1,000,000). However, the mean is still by far the most widely used measure of center, largely because there are many populations for which outliers are very scarce. When sampling from such a population (a normal or bell-shaped distribution being the most important example), outliers are highly unlikely to enter the sample. The sample mean will then tend to be stable and quite representative of the sample.

The Median

The word median is synonymous with "middle," and the sample median is indeed the middle value when the observations are ordered from smallest to largest. When the observations are denoted by x₁, ..., xₙ, we will use the symbol x̃ to represent the sample median.
DEFINITION The sample median is obtained by first ordering the n observations from smallest to largest (with any repeated values included, so that every sample observation appears in the ordered list). Then,

x̃ = the single middle value if n is odd: the ((n + 1)/2)th ordered value
x̃ = the average of the two middle values if n is even: the average of the (n/2)th and (n/2 + 1)th ordered values

Example 1.12 People not familiar with classical music might tend to believe that a composer's instructions for playing a particular piece are so specific that the duration would not depend at all on the performer(s). However, there is typically plenty of room for interpretation, and orchestral conductors and musicians take full advantage of this. We went to the website ArkivMusic.com and selected a sample of 12 recordings of Beethoven's Symphony #9 (the "Choral," a stunningly beautiful work), yielding the following durations (min) listed in increasing order:

62.3 62.8 63.6 65.2 65.7 66.4 67.4 68.4 68.8 70.8 75.7 79.0

Since n = 12 is even, the sample median is the average of the n/2 = 6th and (n/2 + 1) = 7th values from the ordered list:

x̃ = (66.4 + 67.4)/2 = 66.90

Note that if the largest observation 79.0 had not been included in the sample, the resulting sample median for the n = 11 remaining observations would have been the single middle value 67.4 (the (n + 1)/2 = 6th ordered value, i.e., the 6th value in from either end of the ordered list). The sample mean is x̄ = Σxᵢ/n = 816.1/12 = 68.01, a bit more than a full minute larger than the median. The mean is pulled out a bit relative to the median because the sample "stretches out" somewhat more on the upper end than on the lower end.

The data in Example 1.12 illustrates an important property of x̃ in contrast to x̄. The sample median is very insensitive to a number of extremely small or extremely large data values.
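The two-case rule in the definition can be written out directly. The sketch below applies it to the Beethoven duration data (values transcribed from the example); it is an illustration, not part of the text:

```python
# Sketch of the sample-median rule for the 12 Beethoven durations above.
# n = 12 is even, so the median averages the 6th and 7th ordered values.
durations = [62.3, 62.8, 63.6, 65.2, 65.7, 66.4,
             67.4, 68.4, 68.8, 70.8, 75.7, 79.0]  # already ordered

n = len(durations)
if n % 2 == 1:
    # odd n: the single middle, i.e. the ((n + 1)/2)th ordered value
    median = durations[(n + 1) // 2 - 1]
else:
    # even n: average of the (n/2)th and (n/2 + 1)th ordered values
    median = (durations[n // 2 - 1] + durations[n // 2]) / 2

print(round(median, 2))   # 66.9, as computed in the text
```

Note the `- 1` offsets: the definition counts ordered values starting from 1, while Python lists are indexed from 0.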
If, for example, we increased the two largest xᵢ's from 75.7 and 79.0 to 95.7 and 99.0, respectively, x̃ would be unaffected. Thus, in the treatment of outlying data values, x̄ and x̃ are at opposite ends of a spectrum: x̄ is sensitive to even one such value, whereas x̃ is insensitive to a large number of outlying values. Because the large values in the sample of Example 1.12 affect x̄ more than x̃, x̃ < x̄ for that data. Although x̄ and x̃ both provide a measure for the center of a data set, they will not in general be equal, because they focus on different aspects of the sample.

Analogous to x̃ as the middle value in the sample is a middle value in the population, the population median, denoted by μ̃. As with x̄ and μ, we can think of using the sample median x̃ to make an inference about μ̃. In Example 1.12, we might use x̃ = 66.90 as an estimate of the median duration in the entire population from which the sample was selected. A median is often used to describe income or salary data (because it is not greatly influenced by a few large salaries). If the median salary for a sample of statisticians were x̃ = $66,416, we might use this as a basis for concluding that the median salary for all statisticians exceeds $60,000.

The population mean μ and median μ̃ will not generally be identical. If the population distribution is positively or negatively skewed, as pictured in Figure 1.14, then μ ≠ μ̃. When this is the case, in making inferences we must first decide which of the two population characteristics is of greater interest and then proceed accordingly.

Figure 1.14 Three different shapes for a population distribution: (a) negative skew; (b) symmetric; (c) positive skew

Other Measures of Location: Quartiles, Percentiles, and Trimmed Means

The median (population or sample) divides the data set into two parts of equal size. To obtain finer measures of location, we could divide the data into more than two such parts.
Roughly speaking, quartiles divide the data set into four equal parts, with the observations above the third quartile constituting the upper quarter of the data set, the second quartile being identical to the median, and the first quartile separating the lower quarter from the upper three-quarters. Similarly, a data set (sample or population) can be even more finely divided using percentiles; the 99th percentile separates the highest 1% from the bottom 99%, and so on. Unless the number of observations is a multiple of 100, care must be exercised in obtaining percentiles. We will use percentiles in Chapter 4 in connection with certain models for infinite populations and so postpone discussion until that point.

The sample mean and sample median are influenced by outlying values in a very different manner: the mean greatly and the median not at all. Since extreme behavior of either type might be undesirable, we briefly consider alternative measures that are neither as sensitive as x̄ nor as insensitive as x̃. To motivate these alternatives, note that x̄ and x̃ are at opposite extremes of the same "family" of measures. After the data set is ordered, x̃ is computed by throwing away as many values on each end as one can without eliminating everything (leaving just one or two middle values) and averaging what is left. On the other hand, to compute x̄ one throws away nothing before averaging. To paraphrase, the mean involves trimming 0% from each end of the sample, whereas for the median the maximum possible amount is trimmed from each end. A trimmed mean is a compromise between x̄ and x̃. A 10% trimmed mean, for example, would be computed by eliminating the smallest 10% and the largest 10% of the sample and then averaging what remains.
Consider the following 20 observations, ordered from smallest to largest, each one representing the lifetime (in hours) of a type of incandescent lamp:

612 623 666 744 883 898 964 970 983 1003
1016 1022 1029 1058 1085 1088 1122 1135 1197 1201

The average of all 20 observations is x̄ = 965.0, and x̃ = 1009.5. The 10% trimmed mean is obtained by deleting the smallest two observations (612 and 623) and the largest two (1197 and 1201) and then averaging the remaining 16 to obtain x̄tr(10) = 979.1. The effect of trimming here is to produce a "central value" that is somewhat above the mean (x̄ is pulled down by a few small lifetimes) and yet considerably below the median. Similarly, the 20% trimmed mean averages the middle 12 values to obtain x̄tr(20) = 999.9, even closer to the median. (See Figure 1.15.)

Figure 1.15 Dotplot of lifetimes (in hours) of incandescent lamps

Generally speaking, using a trimmed mean with a moderate trimming proportion (between 5% and 25%) will yield a measure that is neither as sensitive to outliers as the mean nor as insensitive as the median. For this reason, trimmed means have merited increasing attention from statisticians for both descriptive and inferential purposes. More will be said about trimmed means when point estimation is discussed in Chapter 7.

As a final point, if the trimming proportion is denoted by α and nα is not an integer, then it is not obvious how the 100α% trimmed mean should be computed. For example, if α = .10 (10%) and n = 22, then nα = (22)(.10) = 2.2, and we cannot trim 2.2 observations from each end of the ordered sample. In this case, the 10% trimmed mean would be obtained by first trimming two observations from each end and calculating x̄tr, then trimming three and calculating x̄tr, and finally interpolating between the two values to obtain x̄tr(10).
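The trimming computation above amounts to slicing k values off each end of the ordered sample and averaging the rest. A sketch on the lamp-lifetime data (values transcribed from the example; an illustration, not part of the text):

```python
# Sketch of trimmed means for the 20 ordered lamp lifetimes above.
lifetimes = [612, 623, 666, 744, 883, 898, 964, 970, 983, 1003,
             1016, 1022, 1029, 1058, 1085, 1088, 1122, 1135, 1197, 1201]

def trimmed_mean(ordered, k):
    """Drop the k smallest and k largest values, then average the rest."""
    kept = ordered[k:len(ordered) - k] if k > 0 else ordered
    return sum(kept) / len(kept)

print(trimmed_mean(lifetimes, 0))   # 964.95   (ordinary mean, ~965.0)
print(trimmed_mean(lifetimes, 2))   # 979.125  (10% trimmed, ~979.1)
print(trimmed_mean(lifetimes, 4))   # 999.9166... (20% trimmed, ~999.9)
```

With n = 20, trimming 10% means k = 2 and trimming 20% means k = 4, so nα is an integer in both cases and no interpolation of the kind described above is needed.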
Categorical Data and Sample Proportions

When the data is categorical, a frequency distribution or relative frequency distribution provides an effective tabular summary of the data. The natural numerical summary quantities in this situation are the individual frequencies and the relative frequencies. For example, if a survey of individuals who own laptops is undertaken to study brand preference, then each individual in the sample would identify the brand of laptop that he or she owned, from which we could count the number owning Sony, Macintosh, Hewlett-Packard, and so on. Consider sampling a dichotomous population: one that consists of only two categories (such as voted or did not vote in the last election, does or does not own a laptop, etc.). If we let x denote the number in the sample falling in category A, then the number in category B is n − x. The relative frequency or sample proportion in category A is x/n, and the sample proportion in category B is 1 − x/n.

Let's denote a response that falls in category A by a 1 and a response that falls in category B by a 0. A sample size of n = 10 might then yield the responses 1, 1, 0, 1, 1, 1, 0, 0, 1, 1. The sample mean for this numerical sample is

x̄ = (x₁ + x₂ + ⋯ + xₙ)/n = (1 + 1 + 0 + ⋯ + 1 + 1)/10 = 7/10 = .70 = x/n = sample proportion

(because the number of 1's is x = 7). This result can be generalized and summarized as follows: if in a categorical data situation we focus attention on a particular category and code the sample results so that a 1 is recorded for an individual in the category and a 0 for an individual not in the category, then the sample proportion of individuals in the category is the sample mean of the sequence of 1's and 0's. Thus a sample mean can be used to summarize the results of a categorical sample.
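The coding argument above is easy to check numerically: with category A coded as 1 and category B as 0, the sample mean of the codes equals the sample proportion x/n. A sketch using the n = 10 responses from the text (an illustration, not part of the text):

```python
# Sketch: the sample mean of 0/1 codes equals the sample proportion x/n.
responses = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]   # the n = 10 sample above

n = len(responses)
x = sum(responses)           # number of responses in category A: 7
proportion = x / n           # sample proportion x/n
mean = sum(responses) / n    # sample mean of the 0/1 codes

print(x, proportion, mean)   # 7 0.7 0.7
```

The same identity holds for any 0/1 coding, which is why inference about a population proportion p can be framed as inference about the mean of a 0/1 population.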
These remarks also apply to situations in which categories are defined by grouping values in a numerical sample or population (e.g., we might be interested in knowing whether individuals have owned their present automobile for at least 5 years, rather than studying the exact length of ownership). Analogous to the sample proportion x/n of individuals falling in a particular category, let p represent the proportion of individuals in the entire population falling in the category. As with x/n, p is a quantity between 0 and 1. While x/n is a sample characteristic, p is a characteristic of the population. The relationship between the two parallels the relationship between x̃ and μ̃ and between x̄ and μ. In particular, we will subsequently use x/n to make inferences about p. If, for example, a sample of 100 car owners reveals that 22 owned their cars at least 5 years, then we might use 22/100 = .22 as a point estimate of the proportion of all owners who have owned their car at least 5 years. We will study the properties of x/n as an estimator of p and see how x/n can be used to answer other inferential questions. With k categories (k > 2), we can use the k sample proportions to answer questions about the population proportions p₁, …, pₖ.

Exercises  Section 1.3 (30–40)

30. The May 1, 2009 issue of The Montclarion reported the following home sale amounts for a sample of homes in Alameda, CA that were sold the previous month (1000s of $):
590 815 575 608 350 1285 408 540 555 679
a. Calculate and interpret the sample mean and median.
b. Suppose the 6th observation had been 985 rather than 1285. How would the mean and median change?
c. Calculate a 20% trimmed mean by first trimming the two smallest and two largest observations.
d. Calculate a 15% trimmed mean.

31. In Superbowl XXXVII, Michael Pittman of Tampa Bay rushed (ran with the football) 17 times on first down, and the results were the following gains in yards:
23 1 4 1 6 5 9 6 7 −1 3 2 0 2 24 1 1
a. Determine the value of the sample mean.
b. Determine the value of the sample median. Why is it so different from the mean?
c. Calculate a trimmed mean by deleting the smallest and largest observations. What is the corresponding trimming percentage? How does the value of this trimmed mean compare to the mean and median?

32. The minimum injection pressure (psi) for injection molding specimens of high amylose corn was determined for eight different specimens (higher pressure corresponds to greater processing difficulty), resulting in the following observations (from "Thermoplastic Starch Blends with a Polyethylene-Co-Vinyl Alcohol: Processability and Physical Properties," Polymer Engrg. & Sci., 1994: 17–23):
15.0 13.0 18.0 14.5 12.0 11.0 8.9 8.0
a. Determine the values of the sample mean, sample median, and 12.5% trimmed mean, and compare these values.
b. By how much could the smallest sample observation, currently 8.0, be increased without affecting the value of the sample median?
c. Suppose we want the values of the sample mean and median when the observations are expressed in kilograms per square inch (ksi) rather than psi. Is it necessary to reexpress each observation in ksi, or can the values calculated in part (a) be used directly? [Hint: 1 kg = 2.2 lb.]

33. A sample of 26 offshore oil workers took part in a simulated escape exercise, resulting in the accompanying data on time (sec) to complete the escape ("Oxygen Consumption and Ventilation During Escape from an Offshore Platform," Ergonomics, 1997: 281–292):
389 356 359 363 375 424 325 394 402
373 373 370 364 366 364 325 339 393
302 369 374 359 356 403 334 397
a. Construct a stem-and-leaf display of the data. How does it suggest that the sample mean and median will compare?
b. Calculate the values of the sample mean and median. [Hint: Σxᵢ = 9638.]
c. By how much could the largest time, currently 424, be increased without affecting the value of the sample median? By how much could this value be decreased without affecting the value of the sample median?
d. What are the values of x̄ and x̃ when the observations are reexpressed in minutes?

34. The article "Snow Cover and Temperature Relationships in North America and Eurasia" (J. Climate Appl. Meteorol., 1983: 460–469) used statistical techniques to relate the amount of snow cover on each continent to average continental temperature. Data presented there included the following ten observations on October snow cover for Eurasia during the years 1970–1979 (in million km²):
6.5 12.0 14.9 10.0 10.7 7.9 21.9 12.5 14.5 9.2
What would you report as a representative, or typical, value of October snow cover for this period, and what prompted your choice?

35. Blood pressure values are often reported to the nearest 5 mmHg (100, 105, 110, etc.). Suppose the actual blood pressure values for nine randomly selected individuals are
118.6 127.4 138.4 130.0 113.7 122.0 108.3 131.5 133.2
a. What is the median of the reported blood pressure values?
b. Suppose the blood pressure of the second individual is 127.6 rather than 127.4 (a small change in a single value). How does this affect the median of the reported values? What does this say about the sensitivity of the median to rounding or grouping in the data?

36. The propagation of fatigue cracks in various aircraft parts has been the subject of extensive study in recent years. The accompanying data consists of propagation lives (flight hours/10⁴) to reach a given crack size in fastener holes intended for use in military aircraft ("Statistical Crack Propagation in Fastener Holes under Spectrum Loading," J. Aircraft, 1983: 1028–1032):
.736 .863 .865 .913 .915 .937 .983 1.007
1.011 1.064 1.109 1.132 1.140 1.153 1.253 1.304
a. Compute and compare the values of the sample mean and median.
b. By how much could the largest sample observation be decreased without affecting the value of the median?

37. Compute the sample median, 25% trimmed mean, 10% trimmed mean, and sample mean for the microdrill data given in Exercise 25, and compare these measures.

38. A sample of n = 10 automobiles was selected, and each was subjected to a 5-mph crash test. Denoting a car with no visible damage by S (for success) and a car with such damage by F, results were as follows:
S S F S S S F F S S
a. What is the value of the sample proportion of successes x/n?
b. Replace each S with a 1 and each F with a 0. Then calculate x̄ for this numerically coded sample. How does x̄ compare to x/n?
c. Suppose it is decided to include 15 more cars in the experiment. How many of these would have to be S's to give x/n = .80 for the entire sample of 25 cars?

39. a. If a constant c is added to each xᵢ in a sample, yielding yᵢ = xᵢ + c, how do the sample mean and median of the yᵢ's relate to the mean and median of the xᵢ's? Verify your conjectures.
b. If each xᵢ is multiplied by a constant c, yielding yᵢ = cxᵢ, answer the question of part (a). Again, verify your conjectures.

40. An experiment to study the lifetime (in hours) for a certain type of component involved putting ten components into operation and observing them for 100 hours. Eight of the components failed during that period, and those lifetimes were recorded. Denote the lifetimes of the two components still functioning after 100 hours by 100+. The resulting sample observations were
48 79 100+ 35 92 86 57 100+ 17 29
Which of the measures of center discussed in this section can be calculated, and what are the values of those measures? [Note: The data from this experiment is said to be "censored on the right."]

1.4 Measures of Variability

Reporting a measure of center gives only partial information about a data set or distribution. Different samples or populations may have identical measures of center yet differ from one another in other important ways. Figure 1.16 shows dotplots of three samples with the same mean and median, yet the extent of spread about the center is different for all three samples. The first sample has the largest amount of variability, the third has the smallest amount, and the second is intermediate to the other two in this respect.

Figure 1.16  Samples with identical measures of center but different amounts of variability

Measures of Variability for Sample Data

The simplest measure of variability in a sample is the range, which is the difference between the largest and smallest sample values. Notice that the value of the range for sample 1 in Figure 1.16 is much larger than it is for sample 3, reflecting more variability in the first sample than in the third. A defect of the range, though, is that it depends on only the two most extreme observations and disregards the positions of the remaining n − 2 values. Samples 1 and 2 in Figure 1.16 have identical ranges, yet when we take into account the observations between the two extremes, there is much less variability or dispersion in the second sample than in the first.

Our primary measures of variability involve the deviations from the mean, x₁ − x̄, x₂ − x̄, …, xₙ − x̄. That is, the deviations from the mean are obtained by subtracting x̄ from each of the n sample observations. A deviation will be positive if the observation is larger than the mean (to the right of the mean on the measurement axis) and negative if the observation is smaller than the mean.
If all the deviations are small in magnitude, then all xᵢ's are close to the mean and there is little variability. On the other hand, if some of the deviations are large in magnitude, then some xᵢ's lie far from x̄, suggesting a greater amount of variability. A simple way to combine the deviations into a single quantity is to average them (sum them and divide by n). Unfortunately, there is a major problem with this suggestion:

sum of deviations = Σᵢ₌₁ⁿ (xᵢ − x̄) = 0

so that the average deviation is always zero. The verification uses several standard rules of summation and the fact that Σ x̄ = x̄ + x̄ + ⋯ + x̄ = nx̄:

Σ(xᵢ − x̄) = Σ xᵢ − Σ x̄ = Σ xᵢ − nx̄ = Σ xᵢ − n((1/n) Σ xᵢ) = 0

How can we change the deviations to nonnegative quantities so the positive and negative deviations do not counteract each other when they are combined? One possibility is to work with the absolute values of the deviations and calculate the average absolute deviation Σ|xᵢ − x̄|/n. Because the absolute value operation leads to a number of theoretical difficulties, consider instead the squared deviations (x₁ − x̄)², (x₂ − x̄)², …, (xₙ − x̄)². Rather than use the average squared deviation Σ(xᵢ − x̄)²/n, for several reasons we will divide the sum of squared deviations by n − 1 rather than n.

DEFINITION  The sample variance, denoted by s², is given by

s² = Σ(xᵢ − x̄)²/(n − 1) = Sₓₓ/(n − 1)

The sample standard deviation, denoted by s, is the (positive) square root of the variance:

s = √s²

The unit for s is the same as the unit for each of the xᵢ's. If, for example, the observations are fuel efficiencies in miles per gallon, then we might have s = 2.0 mpg. A rough interpretation of the sample standard deviation is that it is the size of a typical or representative deviation from the sample mean within the given sample.
Thus if s = 2.0 mpg, then some xᵢ's in the sample are closer than 2.0 to x̄, whereas others are farther away; 2.0 is a representative (or "standard") deviation from the mean fuel efficiency. If s = 3.0 for a second sample of cars of another type, a typical deviation in this sample is roughly 1.5 times what it is in the first sample, an indication of more variability in the second sample.

Example 1.14  The website www.fueleconomy.gov contains a wealth of information about fuel characteristics of various vehicles. In addition to EPA mileage ratings, there are many vehicles for which users have reported their own values of fuel efficiency (mpg). Consider Table 1.3 with n = 11 efficiencies for the 2009 Ford Focus equipped with an automatic transmission (for this model, the EPA reports an overall rating of 27 mpg: 24 mpg in city driving and 33 mpg in highway driving). Effects of rounding account for the sum of deviations not being exactly zero. The numerator of s² is Sₓₓ = 314.110, from which

s² = Sₓₓ/(n − 1) = 314.110/10 = 31.41    s = 5.60

The size of a representative deviation from the sample mean 33.26 is roughly 5.6 mpg. [Note: Of the nine people who also reported driving behavior, only three did more than 80% of their driving in highway mode; we bet you can guess which cars they drove. We haven't a clue why all 11 reported values exceed the EPA figure; maybe only drivers with really good fuel efficiencies communicate their results.]

Table 1.3  Data for Example 1.14

        xᵢ      xᵢ − x̄    (xᵢ − x̄)²
 1     27.3    −5.96      35.522
 2     27.9    −5.36      28.730
 3     32.9    −0.36       0.130
 4     35.2     1.94       3.764
 5     44.9    11.64     135.490
 6     39.9     6.64      44.090
 7     30.0    −3.26      10.628
 8     29.7    −3.56      12.674
 9     28.5    −4.76      22.658
10     32.0    −1.26       1.588
11     37.6     4.34      18.836

Σxᵢ = 365.9    Σ(xᵢ − x̄) = .04    Σ(xᵢ − x̄)² = 314.110
x̄ = 33.26

Motivation for s²

To explain why s² rather than the average squared deviation is used to measure variability, note first that whereas s² measures sample variability, there is a measure of variability in the population called the population variance.
We will use σ² (the square of the lowercase Greek letter sigma) to denote the population variance and σ to denote the population standard deviation (the square root of σ²). When the population is finite and consists of N values,

σ² = Σᵢ₌₁ᴺ (xᵢ − μ)²/N

which is the average of all squared deviations from the population mean (for the population, the divisor is N and not N − 1). More general definitions of σ² appear in Chapters 3 and 4.

Just as x̄ will be used to make inferences about the population mean μ, we should define the sample variance so that it can be used to make inferences about σ². Now note that σ² involves squared deviations about the population mean μ. If we actually knew the value of μ, then we could define the sample variance as the average squared deviation of the sample xᵢ's about μ. However, the value of μ is almost never known, so the sum of squared deviations about x̄ must be used. But the xᵢ's tend to be closer to their average x̄ than to the population average μ, so to compensate for this the divisor n − 1 is used rather than n. In other words, if we used a divisor n in the sample variance, then the resulting quantity would tend to underestimate σ² (produce estimated values that are too small on the average), whereas dividing by the slightly smaller n − 1 corrects this underestimation.

It is customary to refer to s² as being based on n − 1 degrees of freedom (df). This terminology results from the fact that although s² is based on the n quantities x₁ − x̄, x₂ − x̄, …, xₙ − x̄, these sum to 0, so specifying the values of any n − 1 of the quantities determines the remaining value. For example, if n = 4 and x₁ − x̄ = 8, x₂ − x̄ = −6, and x₄ − x̄ = −4, then automatically x₃ − x̄ = 2, so only three of the four values of xᵢ − x̄ are freely determined (3 df).
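The claim that a divisor of n tends to underestimate σ² can be illustrated with a small simulation. This sketch is not from the text; the normal population, sample size, seed, and number of replications are arbitrary choices:

```python
# Repeatedly draw samples of size n from a population with known variance
# sigma2, and compare the long-run average of the sum of squared deviations
# divided by n versus divided by n - 1.
import random

random.seed(1)
mu, sigma2, n, reps = 0.0, 4.0, 5, 100_000
sum_div_n = sum_div_nm1 = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)   # sum of squared deviations
    sum_div_n += ss / n
    sum_div_nm1 += ss / (n - 1)

# The n divisor averages near sigma2 * (n - 1)/n = 3.2; n - 1 averages near 4.
print(sum_div_n / reps, sum_div_nm1 / reps)
```

With the divisor n, the average settles near σ²(n − 1)/n = 3.2 rather than 4, while the divisor n − 1 is correct on average; this is exactly the underestimation described above.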
A Computing Formula for s²

Computing and squaring the deviations can be tedious, especially if enough decimal accuracy is being used in x̄ to guard against the effects of rounding. An alternative formula for the numerator of s² circumvents the need for all the subtraction necessary to obtain the deviations. The formula involves both (Σ xᵢ)², summing and then squaring, and Σ xᵢ², squaring and then summing.

An alternative expression for the numerator of s² is

Sₓₓ = Σ(xᵢ − x̄)² = Σ xᵢ² − (Σ xᵢ)²/n

Proof  Because x̄ = Σ xᵢ/n, nx̄² = n(Σ xᵢ)²/n² = (Σ xᵢ)²/n. Then

Σ(xᵢ − x̄)² = Σ(xᵢ² − 2x̄xᵢ + x̄²) = Σ xᵢ² − 2x̄ Σ xᵢ + nx̄² = Σ xᵢ² − 2x̄(nx̄) + nx̄² = Σ xᵢ² − nx̄² = Σ xᵢ² − (Σ xᵢ)²/n  ■

Example 1.15  Traumatic knee dislocation often requires surgery to repair ruptured ligaments. One measure of recovery is range of motion (measured as the angle formed when, starting with the leg straight, the knee is bent as far as possible). The given data on postsurgical range of motion appeared in the article "Reconstruction of the Anterior and Posterior Cruciate Ligaments After Knee Dislocation" (Amer. J. Sports Med., 1999: 189–197):

154 142 137 133 122 126 135 135 108 120 127 134 122

The sum of these 13 sample observations is Σ xᵢ = 1695, and the sum of their squares is

Σ xᵢ² = 154² + 142² + ⋯ + 122² = 222,581

Thus the numerator of the sample variance is

Sₓₓ = Σ xᵢ² − (Σ xᵢ)²/n = 222,581 − (1695)²/13 = 1579.0769

from which s² = 1579.0769/12 = 131.59 and s = 11.47.

The shortcut method can yield values of s² and s that differ from the values computed using the definitions. These differences are due to effects of rounding and will not be important in most samples. To minimize the effects of rounding when using the shortcut formula, intermediate calculations should be done using several more significant digits than are to be retained in the final answer. Because the numerator of s²
is the sum of nonnegative quantities (squared deviations), s² is guaranteed to be nonnegative. Yet if the shortcut method is used, particularly with data having little variability, a slight numerical error can result in the numerator being zero or negative [Σ xᵢ² less than or equal to (Σ xᵢ)²/n]. Of course, a negative s² is wrong, and a zero s² should occur only if all data values are the same. As an example of the potential difficulties with the formula, consider the data 1001, 1002, 1003. The formula gives

Sₓₓ = 1001² + 1002² + 1003² − (1001 + 1002 + 1003)²/3 = 3,012,014 − 3,012,012 = 2

Thus, we could carry six digits and still get the wrong answer of 3,012,010 − 3,012,010 = 0. All seven digits must be carried to get the right answer. The problem occurs because we are subtracting two numbers of nearly equal size, so the number of accurate digits in the answer is many fewer than in the numbers being subtracted.

Several other properties of s² can facilitate its computation.

PROPOSITION  Let x₁, x₂, …, xₙ be a sample and c be a constant.
1. If y₁ = x₁ + c, y₂ = x₂ + c, …, yₙ = xₙ + c, then s²_y = s²_x, and
2. If y₁ = cx₁, …, yₙ = cxₙ, then s²_y = c²s²_x and s_y = |c|s_x,
where s²_x is the sample variance of the x's and s²_y is the sample variance of the y's.

In words, Result 1 says that if a constant c is added to (or subtracted from) each data value, the variance is unchanged. This is intuitive, because adding or subtracting c shifts the location of the data set but leaves distances between data values unchanged. According to Result 2, multiplication of each xᵢ by c results in s² being multiplied by a factor of c². These properties can be proved by noting in Result 1 that ȳ = x̄ + c and in Result 2 that ȳ = cx̄ (see Exercise 59).

Boxplots

Stem-and-leaf displays and histograms convey rather general impressions about a data set, whereas a single summary such as the mean or standard deviation focuses on just one aspect of the data.
In recent years, a pictorial summary called a boxplot has been used successfully to describe several of a data set's most prominent features. These features include (1) center, (2) spread, (3) the extent and nature of any departure from symmetry, and (4) identification of "outliers," observations that lie unusually far from the main body of the data. Because even a single outlier can drastically affect the values of x̄ and s, a boxplot is based on measures that are "resistant" to the presence of a few outliers: the median and a measure of spread called the fourth spread.

DEFINITION  Order the n observations from smallest to largest and separate the smallest half from the largest half; the median x̃ is included in both halves if n is odd. Then the lower fourth is the median of the smallest half and the upper fourth is the median of the largest half. A measure of spread that is resistant to outliers is the fourth spread fₛ, given by

fₛ = upper fourth − lower fourth

Roughly speaking, the fourth spread is unaffected by the positions of those observations in the smallest 25% or the largest 25% of the data. The simplest boxplot is based on the following five-number summary:

smallest xᵢ    lower fourth    median    upper fourth    largest xᵢ

First, draw a horizontal measurement scale. Then place a rectangle above this axis; the left edge of the rectangle is at the lower fourth, and the right edge is at the upper fourth (so box width = fₛ). Place a vertical line segment or some other symbol inside the rectangle at the location of the median; the position of the median symbol relative to the two edges conveys information about skewness in the middle 50% of the data. Finally, draw "whiskers" out from either end of the rectangle to the smallest and largest observations. A boxplot with a vertical orientation can also be drawn by making obvious modifications in the construction process.
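The definition of the fourths can be sketched in a few lines of code (not from the text; the helper names are our own). Applied to the pit-depth data analyzed in the next example, it reproduces the five-number summary reported there:

```python
# Fourths as defined above: split the ordered sample into a smallest half and
# a largest half (the median belongs to both halves when n is odd), then take
# the median of each half.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def five_number_summary(values):
    s = sorted(values)
    n = len(s)
    half = (n + 1) // 2                  # include median in both halves if n odd
    lower_fourth = median(s[:half])
    upper_fourth = median(s[n - half:])
    return s[0], lower_fourth, median(s), upper_fourth, s[-1]

# Pit-depth data (milli-in.) from the corrosion example below:
depth = [40, 52, 55, 60, 70, 75, 85, 85, 90, 90, 92, 94, 94,
         95, 98, 100, 115, 125, 125]
mn, lf, med, uf, mx = five_number_summary(depth)
fs = uf - lf                              # fourth spread
print(mn, lf, med, uf, mx, fs)            # 40 72.5 90 96.5 125 24.0
```

Note that software quartiles (e.g., MINITAB's Q1 and Q3) are computed slightly differently, so they need not match the fourths exactly.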
Example 1.16  Ultrasound was used to gather the accompanying corrosion data on the thickness of the floor plate of an aboveground tank used to store crude oil ("Statistical Analysis of UT Corrosion Data from Floor Plates of a Crude Oil Aboveground Storage Tank," Mater. Eval., 1994: 846–849); each observation is the largest pit depth in the plate, expressed in milli-in.:

40 52 55 60 70 75 85 85 90 90 92 94 94 95 98 100 115 125 125

The five-number summary is as follows:

smallest xᵢ = 40    lower fourth = 72.5    x̃ = 90    upper fourth = 96.5    largest xᵢ = 125

Figure 1.17 shows the resulting boxplot. The right edge of the box is much closer to the median than is the left edge, indicating a very substantial skew in the middle half of the data. The box width (fₛ) is also reasonably large relative to the range of the data (distance between the tips of the whiskers).

Figure 1.17  A boxplot of the corrosion data

Figure 1.18 shows MINITAB output from a request to describe the corrosion data. The trimmed mean is the average of the 17 observations that remain after the largest and smallest values are deleted (trimming percentage ≈ 5%). Q1 and Q3 are the lower and upper quartiles; these are similar to the fourths but are calculated in a slightly different manner. SE Mean is s/√n; this will be an important quantity in our subsequent work concerning inferences about μ.

Variable    N     Mean    Median   TrMean   StDev   SE Mean
depth       19    86.32   90.00    86.76    23.32   5.35

Variable    Minimum   Maximum   Q1      Q3
depth       40.00     125.00    70.00   98.00

Figure 1.18  MINITAB description of the pit-depth data

Boxplots That Show Outliers

A boxplot can be embellished to indicate explicitly the presence of outliers.

DEFINITION  Any observation farther than 1.5fₛ from the closest fourth is an outlier.
An outlier is extreme if it is more than 3fₛ from the nearest fourth, and it is mild otherwise.

Let's now modify our previous construction of a boxplot by drawing a whisker out from each end of the box to the smallest and largest observations that are not outliers. Each mild outlier is represented by a closed circle and each extreme outlier by an open circle. Some statistical computer packages do not distinguish between mild and extreme outliers.

Example 1.17  The Clean Water Act and subsequent amendments require that all waters in the United States meet specific pollution reduction goals to ensure that water is "fishable and swimmable." The article "Spurious Correlation in the USEPA Rating Curve Method for Estimating Pollutant Loads" (J. Environ. Eng., 2008: 610–618) investigated various techniques for estimating pollutant loads in watersheds; the authors "discuss the imperative need to use sound statistical methods" for this purpose. Among the data considered is the following sample of TN (total nitrogen) loads (kg N/day) from a particular Chesapeake Bay location, displayed here in increasing order.

9.69 13.16 17.09 18.12 23.70 24.07 24.29 26.43 30.75 31.54
35.07 36.99 40.32 42.51 45.64 48.22 49.98 50.06 55.02 57.00
58.41 61.31 64.25 65.24 66.14 67.68 81.40 90.80 92.17 92.42
100.82 101.94 103.61 106.28 106.80 108.69 114.61 120.86 124.54 143.27
143.75 149.64 167.79 182.50 192.55 193.53 271.57 292.61 312.45 352.09
371.47 444.68 460.86 563.92 690.11 826.54 1529.35

Relevant summary quantities are

x̃ = 92.17    lower fourth = 45.64    upper fourth = 167.79
fₛ = 122.15    1.5fₛ = 183.225    3fₛ = 366.45

Subtracting 1.5fₛ from the lower fourth gives a negative number, and none of the observations are negative, so there are no outliers on the lower end of the data. However,

upper fourth + 1.5fₛ = 351.015    upper fourth + 3fₛ = 534.24

Thus the four largest observations (563.92, 690.11, 826.54, and 1529.35) are extreme outliers, and 352.09, 371.47, 444.68, and 460.86 are mild outliers.
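The outlier rule just stated is easy to mechanize; here is a sketch (not from the text, with a function name of our own choosing), using the fourths computed for the TN loads:

```python
# Mild outlier: more than 1.5*fs beyond the nearest fourth.
# Extreme outlier: more than 3*fs beyond the nearest fourth.
def classify(x, lower_fourth, upper_fourth):
    fs = upper_fourth - lower_fourth
    dist = max(lower_fourth - x, x - upper_fourth)  # distance beyond nearest fourth
    if dist > 3 * fs:
        return "extreme outlier"
    if dist > 1.5 * fs:
        return "mild outlier"
    return "not an outlier"

lf, uf = 45.64, 167.79                    # fourths for the TN-load sample
for x in (312.45, 352.09, 460.86, 563.92, 1529.35):
    print(x, classify(x, lf, uf))
```

Running this reproduces the classification above: 312.45 is not an outlier, 352.09 and 460.86 are mild, and 563.92 and 1529.35 are extreme.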
The whiskers in the boxplot in Figure 1.19 extend out to the smallest observation 9.69 on the low end and 312.45, the largest observation that is not an outlier, on the upper end. There is some positive skewness in the middle half of the data (the median line is somewhat closer to the right edge of the box than to the left edge) and a great deal of positive skewness overall.

Figure 1.19  A boxplot of the nitrogen load data showing mild and extreme outliers

Comparative Boxplots

A comparative or side-by-side boxplot is a very effective way of revealing similarities and differences between two or more data sets consisting of observations on the same variable.

Example 1.18  In recent years, some evidence suggests that high indoor radon concentration may be linked to the development of childhood cancers, but many health professionals remain unconvinced. The article "Indoor Radon and Childhood Cancer" (Lancet, 1991: 1537–1538) presented the accompanying data on radon concentration (Bq/m³) in two different samples of houses. The first sample consisted of houses in which a child diagnosed with cancer had been residing. Houses in the second sample had no recorded cases of childhood cancer. Figure 1.20 presents a stem-and-leaf display of the data.

1. Cancer                              2. No cancer
             9987653 | 0 | 3356677889999
88876665553321111000 | 1 | 11111223477
            73322110 | 2 | 11449999
                9843 | 3 | 389
                   5 | 4 |
                   7 | 5 | 55
                     | 6 |
                     | 7 |
                     | 8 | 5
HI: 210                                Stem: Tens digit
                                       Leaf: Ones digit

Figure 1.20  Stem-and-leaf display for Example 1.18

Numerical summary quantities are as follows:

             x̄      x̃      s      fₛ
Cancer      22.8   16.0   31.7   11.0
No cancer   19.2   12.0   17.0   18.0

The values of both the mean and median suggest that the cancer sample is centered somewhat to the right of the no-cancer sample on the measurement scale.
The values of s suggest more variability in the cancer sample than in the no-cancer sample, but this impression is contradicted by the fourth spreads. The observation 210, an extreme outlier, is the culprit. Figure 1.21 shows a comparative boxplot from the R computer package. The no-cancer box is stretched out compared with the cancer box (f, = 18 vs. f, = 11), and the positions of the median lines in the two boxes show much more skewness in the middle half of the no-cancer sample than the cancer sample. Were the cancer victims exposed to more radon, as you would expect if there is a relationship between cancer and radon? This is not evident from the plot, where the cancer box fits well within the no-cancer box and there is little difference in the highest and lowest values if you ignore outliers. Because the R package boxplot does not normally distinguish between mild and extreme outliers, a few commands were needed to get the hollow circles and filled circles in Figure 1.21 (the commands are available on the web pages for this book). 200 ° _ 150 2 g 8 100 5 * (4 50 : * ——, | m ; : Cancer No Cancer Figure 1.21 A boxplot of the data in Example 1.18, from R : Exercises | Section 1.4 (41-59) 41. The article “Oxygen Consumption During Fire ¢. The sample standard deviation Suppression: Error of Heart Rate Estimation” d,s? using the shortcut method (Ergonomics, 1991: 1469-1474) reported the fol- 49, THe value of Young’s modulus (GPa) was deter- lowing data on oxygen consumption (mL/kg/ : mod abi eps . mined for cast plates consisting of certain inter- min) for a sample of ten firefighters performing : : afi adon simulation? me metallic substrates, resulting in the following avtife: Suppression: simnufaltons sample observations (“Strength and Modulus of 29.5 49.3 30.6 28.2 28.0 26.3 33.9 294 235 316 a Molybdenum-Coated Ti-25AI-10Nb-3U-1Mo Caapite neTalEWing: Intermetallic,” J. Mater. Engrg. Perform., 1997: a. The sample range 46-50): b. 
The sample variance s* from the definition 116.4 115.9 114.6 115.2 115.8 by. Bist: computing: devianons, chen squaring a. Calculate ¥ and the deviations from the mean. them, etc.) --- Trang 55 --- 42 CHAPTER 1 Overview and Descriptive Statistics b. Use the deviations calculated in part (a) to injuries were caused by the keyboard (Genessy obtain the sample variance and the sample v. Digital Equipment Corp.). The jury awarded standard deviation. about $3.5 million for pain and suffering, but ¢. Calculate s? by using the computational for- the court then set aside that award as being mula for the numerator S,.. unreasonable compensation. In making this d. Subtract 100 from each observation to obtain a determination, the court identified a “norma- sample of transformed values. Now calculate tive” group of 27 similar cases and specified a the sample variance of these transformed reasonable award as one within two standard values, and compare it to s? for the original deviations of the mean of the awards in the 27 data, State the general principle. cases. The 27 awards were (in $1000s) 37, 60, ‘ ss 75, 115, 135, 140, 149, 150, 238, 290, 340 43. The 2 2 servations on stabilized eh hde 1S os Oa A TOU, Zobace Os BA wiseosy GD) tors «cies ofa or ain sails 410, 600, 750, 750, 750, 1050, 1100, 1139, peony Noe) A eee ee 8 1150, 1200, 1200, 1250, 1576, 1700, 1825, and of asphalt with 18% rubber added are from the ‘ é aera ave 2000, from which Sox; = 20,179, 3x2 = article “Viscosity Characteristics of Rubber- ; i : : Ree 24,657,511. What is the maximum possible Modified Asphalts” (J. Mater. Civil Engrg., amount that could be awarded under the two- 1996: 153-156): , ; standard-deviation rule? 9 27812900, 3013, 2856-2888 ag. The anticle “A Thin-Film Oxygen Uptake Test a. What are the values of the sample mean and for the Evaluation of Automotive Crankcase sample median? Lubricants” (Lubric, Engrg., 1984: 75-83) b. 
… Calculate the sample variance using the computational formula. [Hint: First subtract a convenient number from each observation.]

43. The article reported the following data on oxidation-induction time (min) for various commercial oils:
87 103 130 160 180 195 132 145 211 105 145 183 152 138 87 99 93 119 129
a. Calculate the sample variance and standard deviation.
b. If the observations were reexpressed in hours, what would be the resulting values of the sample variance and sample standard deviation? Answer without actually performing the reexpression.

44. Calculate and interpret the values of the sample median, sample mean, and sample standard deviation for the following observations on fracture strength (MPa, read from a graph in "Heat-Resistant Active Brazing of Silicon Nitride: Mechanical Evaluation of Braze Joints," Welding J., Aug. 1997):
87 93 96 98 105 114 128 131 142 168

45. Exercise 33 in Section 1.3 presented a sample of 26 escape times for oil workers in a simulated escape exercise. Calculate and interpret the sample standard deviation. [Hint: Σxi = 9638 and Σxi² = 3,587,566.]

46. A study of the relationship between age and various visual functions (such as acuity and depth perception) reported the following observations on area of scleral lamina (mm²) from human optic nerve heads ("Morphometry of Nerve Fiber Bundle Pores in the Optic Nerve Head of the Human," Exper. Eye Res., 1988):
2.75 2.62 2.74 3.85 2.34 2.74 3.93 4.21 3.88 4.33 3.46 4.52 2.43 3.65 2.78 3.56 3.01
a. Calculate Σxi and Σxi².
b. Use the values calculated in part (a) to compute the sample variance s² and then the sample standard deviation s.

47. In 1997 a woman sued a computer keyboard manufacturer, charging that her repetitive stress …

48. The first four deviations from the mean in a sample of n = 5 reaction times were .3, .9, 1.0, and 1.3. What is the fifth deviation from the mean? Give a sample for which these are the five deviations from the mean.

50. Reconsider the data on area of scleral lamina given in Exercise 46.
a. Determine the lower and upper fourths.
b. Calculate the value of the fourth spread.
c. If the two largest sample values, 4.33 and 4.52, had instead been 5.33 and 5.52, how would this affect fs? Explain.
d. By how much could the observation 2.34 be increased without affecting fs? Explain.
e. If an 18th observation, x18 = 4.60, is added to the sample, what is fs?

51. Reconsider these values of rushing yardage from Exercise 31 of this chapter: …
a. What are the values of the fourths, and what is the value of fs?
b. Construct a boxplot based on the five-number summary, and comment on its features.
c. How large or small does an observation have to be to qualify as an outlier? As an extreme outlier?
d. By how much could the largest observation be decreased without affecting fs?

52. Here is a stem-and-leaf display of the escape time data given in Exercise 33 of this chapter:
32 | 55
33 | 49
34 |
35 | 6699
36 | 34469
37 | 03345
38 | 9
39 | 2347
40 | 23
41 |
42 | 4
a. Determine the value of the fourth spread.
b. Are there any outliers in the sample? Any extreme outliers?
c. Construct a boxplot and comment on its features.
d. By how much could the largest observation, currently 424, be decreased without affecting the value of the fourth spread?

53. Many people who believe they may be suffering from the flu visit emergency rooms, where they are subjected to long waits and may expose others or themselves be exposed to various diseases. The article "Drive-Through Medicine: A Novel Proposal for the Rapid Evaluation of Patients During an Influenza Pandemic" (Ann. Emerg. Med., 2010: 268–273) described an experiment to see whether patients could be evaluated while remaining in their vehicles. The following total processing times (min) for a sample of 38 individuals were read from a graph that appeared in the cited article:
9 16 16 17 19 20 20 20 23 23 23 23 24 24 25 25 26 26 27 27 28 28 29 29 29 30 32 33 33 34 36 37 43 44 46 48 53 92
a. Calculate several different measures of center and compare them.
b. Are there any outliers in this sample? Any extreme outliers?
c. Construct a boxplot and comment on any interesting features.

54. Here is summary information on the alcohol percentage for a sample of 25 beers:
lower fourth = 4.35   median = 5   upper fourth = 5.95
The bottom three are 3.20 (Heineken Premium Light), 3.50 (Amstel Light), and 4.03 (Shiner Light), and the top three are 7.90 (Terrapin All-American Imperial Pilsner), 9.10 (Great Divide Hercules Double IPA), and 11.60 (Rogue Imperial Stout).
a. Are there any outliers in the sample? Any extreme outliers?
b. Construct a boxplot that shows outliers, and comment on any interesting features.

55. A company utilizes two different machines to manufacture parts of a certain type. During a single shift, a sample of n = 20 parts produced by each machine is obtained, and the value of a particular critical dimension for each part is determined. The comparative boxplot below is constructed from the resulting data. Compare and contrast the two samples.
[Figure: comparative boxplot of the critical dimension for machines 1 and 2.]

56. Blood cocaine concentration (mg/L) was determined both for a sample of individuals who had died from cocaine-induced excited delirium (ED) and for a sample of those who had died from a cocaine overdose without excited delirium (non-ED). The accompanying data was read from a comparative boxplot in the article "Fatal Excited Delirium Following Cocaine Use" (J. Forensic Sci., 1997: 25–31).
ED: 0 0 0 0 .1 .1 .1 .1 .2 .2 .3 .3 .3 .4 .5 .7 .8 1.0 1.5 2.7 2.8 3.5 4.0 8.9 9.2 11.7 21.0
Non-ED: 0 0 0 0 0 .1 .1 .1 .1 .2 .2 .2 .3 .3 .3 .4 .5 .5 .6 .8 .9 1.0 1.2 1.4 1.5 1.7 2.0 3.2 3.5 4.1 4.3 4.8 5.0 5.6 5.9 6.0 6.4 7.9 8.3 8.7 9.1 9.6 9.9 11.0 11.5 12.2 12.7 14.0 16.6 17.8
a. Determine the medians, fourths, and fourth spreads for the two samples.
b. Are there any outliers in either sample? Any extreme outliers?
c. Construct a comparative boxplot, and use it as a basis for comparing and contrasting the ED and non-ED samples.

57. At the beginning of the 2007 baseball season, each American League team had nine starting position players (this includes the designated hitter but not the pitcher). Here are the salaries for the New York Yankees and the Cleveland Indians in thousands of dollars:
Yankees: … 21,600 …
Indians: …
Construct a comparative boxplot and comment on interesting features. Compare the salaries of the two teams. The Indians won more games than the Yankees in the regular season and defeated the Yankees in the playoffs.

58. The comparative boxplot below of gasoline vapor coefficients for vehicles in Detroit appeared in the article "Receptor Modeling Approach to VOC Emission Inventory Validation" (J. Environ. Engrg., 1995: 483–490). Discuss any interesting features.
[Figure: comparative boxplots of gas vapor coefficient at times 6 a.m., 8 a.m., 12 noon, 2 p.m., and 10 p.m.]

59. Let x1, …, xn be a sample and let a and b be constants. If yi = axi + b for i = 1, 2, …, n, how does fs (the fourth spread) for the yi's relate to fs for the xi's? Substantiate your assertion.
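The computations requested in several of the preceding exercises, the shortcut formula for s², the fourths, the fourth spread, and the 1.5fs and 3fs outlier cutoffs, can be sketched in Python. This is an illustrative addition, not part of the original text; the fracture-strength data of Exercise 44 is used, and the fourths are taken as medians of the lower and upper halves of the sorted sample, one common convention.

```python
from statistics import median, variance

# Fracture strength data (MPa) from Exercise 44
x = [87, 93, 96, 98, 105, 114, 128, 131, 142, 168]
n = len(x)

# Computational (shortcut) formula: s^2 = (sum of x^2 - (sum of x)^2 / n) / (n - 1)
s2 = (sum(v * v for v in x) - sum(x) ** 2 / n) / (n - 1)
assert abs(s2 - variance(x)) < 1e-9  # agrees with the definitional formula

# Fourths: medians of the lower and upper halves of the sorted sample
xs = sorted(x)
half = (n + 1) // 2
lower_fourth = median(xs[:half])
upper_fourth = median(xs[-half:])
fs = upper_fourth - lower_fourth  # fourth spread

# Outliers: more than 1.5*fs beyond a fourth (extreme: more than 3*fs)
mild = [v for v in x if v < lower_fourth - 1.5 * fs or v > upper_fourth + 1.5 * fs]
extreme = [v for v in x if v < lower_fourth - 3 * fs or v > upper_fourth + 3 * fs]
print(s2, fs, mild, extreme)
```

For this sample the two variance formulas agree exactly, and no observation falls outside the outlier cutoffs.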
60. Consider the following information from a sample of four Wolferman's cranberry citrus English muffins, which are said on the package label to weigh 116 g: x̄ = 104.4 g, s = 4.1497 g, smallest weighs 98.7 g, largest weighs 108.0 g. Determine the values of the two middle sample observations (and don't do it by successive guessing!).

61. Three different C2F6 flow rates (SCCM) were considered in an experiment to investigate the effect of flow rate on the uniformity (%) of the etch on a silicon wafer used in the manufacture of integrated circuits, resulting in the following data:
Flow rate 125: 2.6 2.7 3.0 3.2 3.8 4.6
Flow rate 160: 3.6 4.2 4.2 4.6 4.9 5.0
Flow rate 200: 2.9 3.4 3.5 4.1 4.6 5.1
Compare and contrast the uniformity observations resulting from these three different flow rates.

Supplementary Exercises

62. The amount of radiation received at a greenhouse plays an important role in determining the rate of photosynthesis. The accompanying observations on incoming solar radiation were read from a graph in the article "Radiation Components over Bare and Planted Soils in a Greenhouse" (Solar Energy, 1990: 1011–1016).
6.3 6.4 7.7 8.4 8.5 8.8 8.9 9.0 9.1 10.0 10.1 10.2 10.6 10.6 10.7 10.7 10.8 10.9 11.1 11.2 11.2 11.4 11.9 11.9 12.2 13.1
Use some of the methods discussed in this chapter to describe and summarize this data.

63. The following data on HC and CO emissions for one particular vehicle was given in the chapter introduction.
HC (g/mile): 13.8 18.3 57.3 32.5
CO (g/mile): 118 149 232 236
a. Compute the sample standard deviations for the HC and CO observations. Does the widespread belief appear to be justified?
b. The sample coefficient of variation s/x̄ (or 100·s/x̄) assesses the extent of variability relative to the mean. Values of this coefficient for several different data sets can be compared to determine which data sets exhibit more or less variation. Carry out such a comparison for the given data.

64. A sample of 77 individuals working at a particular office was selected and the noise level (dBA) experienced by each one was determined, yielding the following data ("Acceptable Noise Levels for Construction Site Offices," Build. Serv. Engr. Res. Technol., 2009: 87–94).
55.3 55.3 55.3 55.9 55.9 55.9 55.9 56.1 56.1 56.1 56.1 56.1 56.1 56.8 56.8 57.0 57.0 57.0 57.8 57.8 57.8 57.9 57.9 57.9 58.8 58.8 58.8 59.8 59.8 59.8 62.2 62.2 63.8 63.8 63.8 63.9 63.9 63.9 64.7 64.7 64.7 65.1 65.1 65.1 65.3 65.3 65.3 65.3 67.4 67.4 67.4 67.4 68.7 68.7 68.7 68.7 69.0 70.4 70.4 71.2 71.2 71.2 73.0 73.0 73.1 73.1 74.6 74.6 74.6 74.6 79.3 79.3 79.3 79.3 83.0 83.0 83.0
Use various techniques discussed in this chapter to organize, summarize, and describe the data.

65. Fifteen air samples from a certain region were obtained, and for each one the carbon monoxide concentration was determined. The results (in ppm) were
9.3 10.7 8.5 9.6 12.2 15.6 9.2 10.5 9.0 13.2 11.0 8.8 13.7 12.1 9.8
Using the interpolation method suggested in Section 1.3, compute the 10% trimmed mean.

66. a. For what value of c is the quantity Σ(xi − c)² minimized? [Hint: Take the derivative with respect to c, set equal to 0, and solve.]
b. Using the result of part (a), which of the two quantities Σ(xi − x̄)² and Σ(xi − μ)² will be smaller than the other (assuming that x̄ ≠ μ)?

67. a. Let a and b be constants and let yi = axi + b for i = 1, 2, …, n. What are the relationships between x̄ and ȳ and between sx and sy?
b. The Australian army studied the effect of high temperatures and humidity on human body temperature (Neural Network Training on Human Body Core Temperature Data, Technical Report DSTO TN-0241, Combatant Protection Nutrition Branch, Aeronautical and Maritime Research Laboratory). They found that, at 30°C and 60% relative humidity, the sample average body temperature for nine soldiers was 38.21°C, with standard deviation .318°C. What are the sample average and the standard deviation in °F?

68. Elevated energy consumption during exercise continues after the workout ends. Because calories burned after exercise contribute to weight loss and have other consequences, it is important to understand this process. The paper "Effect of Weight Training Exercise and Treadmill Exercise on Post-Exercise Oxygen Consumption" (Med. Sci. Sports Exercise, 1998: 518–522) reported the accompanying data from a study in which oxygen consumption (liters) was measured continuously for 30 minutes for each of 15 subjects both after a weight training exercise and after a treadmill exercise.
Subject:       1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
Weight (x):  14.6 14.4 19.5 24.3 16.3 22.1 23.0 18.7 19.0 17.0 19.1 19.6 23.2 18.5 15.9
Treadmill (y): 11.3 5.3 9.1 15.2 10.1 19.6 20.8 10.3 10.3 2.6 16.6 22.4 23.6 12.6 4.4
a. Construct a comparative boxplot of the weight and treadmill observations, and comment on what you see.
b. Because the data is in the form of (x, y) pairs, with x and y measurements on the same variable under two different conditions, it is natural to focus on the differences within pairs: d1 = x1 − y1, …, dn = xn − yn. Construct a boxplot of the sample differences. What does it suggest?

69. Anxiety disorders and symptoms can often be effectively treated with benzodiazepine medications. It is known that animals exposed to stress exhibit a decrease in benzodiazepine receptor binding in the frontal cortex. The paper "Decreased Benzodiazepine Receptor Binding in Prefrontal Cortex in Combat-Related Posttraumatic Stress Disorder" (Amer. J. Psychiatry, 2000: 1120–1126) described the first study of benzodiazepine receptor binding in individuals suffering from PTSD. The accompanying data on a receptor binding measure (adjusted distribution volume) was read from a graph in the paper.
PTSD: 10, 20, 25, 28, 31, 35, 37, 38, 38, 39, 39, 42, 46
Healthy: 23, 39, 40, 41, 43, 47, 51, 58, 63, 66, 67, 69, 72
Use various methods from this chapter to describe and summarize the data.

70. The article "Can We Really Walk Straight?" (Amer. J. Phys. Anthropol., 1992: 19–27) reported on an experiment in which each of 20 healthy men was asked to walk as straight as possible to a target 60 m away at normal speed. Consider the following observations on cadence (number of strides per second):
.95 .85 .92 .95 .93 .86 1.00 .92 .85 .81 .78 .93 .93 1.05 .93 1.06 1.06 .96 .81 .96
Use the methods developed in this chapter to summarize the data; include an interpretation or discussion wherever appropriate. [Note: The author of the article used a rather sophisticated statistical analysis to conclude that people cannot walk in a straight line and suggested several explanations for this.]

71. The mode of a numerical data set is the value that occurs most frequently in the set.
a. Determine the mode for the cadence data given in Exercise 70.
b. For a categorical sample, how would you define the modal category?

72. Specimens of three different types of rope wire were selected, and the fatigue limit (MPa) was determined for each specimen, resulting in the accompanying data.
Type 1: 350 350 350 358 370 370 370 371 371 372 372 384 391 391 392
Type 2: 350 354 359 363 365 368 369 371 373 374 376 380 383 388 392
Type 3: 350 361 362 364 364 365 366 371 377 377 377 379 380 380 392
a. Construct a comparative boxplot, and comment on similarities and differences.
b. Construct a comparative dotplot (a dotplot for each sample with a common scale). Comment on similarities and differences.
c. Does the comparative boxplot of part (a) give an informative assessment of similarities and differences? Explain your reasoning.

73. The three measures of center introduced in this chapter are the mean, median, and trimmed mean. Two additional measures of center that are occasionally used are the midrange, which is the average of the smallest and largest observations, and the midfourth, which is the average of the two fourths. Which of these five measures of center are resistant to the effects of outliers and which are not? Explain your reasoning.

74. The authors of the article "Predictive Model for Pitting Corrosion in Buried Oil and Gas Pipelines" (Corrosion, 2009: 332–342) provided the data on which their investigation was based.
a. Consider the following sample of 61 observations on maximum pitting depth (mm) of pipeline specimens buried in clay loam soil.
0.41 0.41 0.41 0.41 0.43 0.43 0.43 0.48 0.48 0.58 0.79 0.79 0.81 0.81 0.81 0.91 0.94 0.94 1.02 1.04 1.04 1.17 1.17 1.17 1.17 1.17 1.17 1.17 1.19 1.19 1.27 1.40 1.40 1.59 1.59 1.60 1.68 1.91 1.96 1.96 1.96 2.10 2.21 2.31 2.46 … 4.75 5.33 7.65 7.70 8.13 10.41 13.44
Construct a stem-and-leaf display in which the two largest values are shown in a last row labeled HI.
b. Refer back to (a), and create a histogram based on eight classes with 0 as the lower limit of the first class and class widths of .5, .5, .5, .5, 1, 2, 5, and 5, respectively.
c. The accompanying comparative boxplot from MINITAB shows plots of pitting depth for four different types of soils. Describe its important features.
[Figure: comparative boxplots of maximum pit depth (mm), roughly 0 to 14, for soil types C, CL, SCL, and SYCL.]

75. Consider a sample x1, x2, …, xn and suppose that the values of x̄, s², and s have been calculated.
a. Let yi = xi − x̄ for i = 1, …, n. How do the values of s² and s for the yi's compare to the corresponding values for the xi's? Explain.
b. Let zi = (xi − x̄)/s for i = 1, …, n. What are the values of the sample variance and sample standard deviation for the zi's?

76. Let x̄n and s²n denote the sample mean and variance for the sample x1, …, xn, and let x̄n+1 and s²n+1 denote these quantities when an additional observation xn+1 is added to the sample.
a. Show how x̄n+1 can be computed from x̄n and xn+1.
b. Show that
n·s²n+1 = (n − 1)·s²n + (n/(n + 1))·(xn+1 − x̄n)²
so that s²n+1 can be computed from xn+1, x̄n, and s²n.
c. Suppose that a sample of 15 strands of drapery yarn has resulted in a sample mean thread elongation of 12.58 mm and a sample standard deviation of .512 mm. A 16th strand results in an elongation value of 11.8. What are the values of the sample mean and sample standard deviation for all 16 elongation observations?

77. Lengths of bus routes for any particular transit system will typically vary from one route to another. The article "Planning of City Bus Routes" (J. Institut. Engrs., 1995: 211–215) gives the following information on lengths (km) for one particular system:
Length: 6–8  8–10  10–12  12–14  14–16  16–18  18–20  20–22  22–24  24–26  26–28  28–30  30–35  35–40  40–45
Freq.:   6    23    30     35     32     48     42     40     28     27     26     14     27     11     2
a. Draw a histogram corresponding to these frequencies.
b. What proportion of these route lengths are less than 20? What proportion of these routes have lengths of at least 30?
c. Roughly what is the value of the 90th percentile of the route length distribution?
d. Roughly what is the median route length?

78. A study carried out to investigate the distribution of total braking time (reaction time plus accelerator-to-brake movement time, in msec) during real driving conditions at 60 km/h gave the following summary information on the distribution of times
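The updating formulas of Exercise 76 can be checked numerically. A sketch in Python (an illustrative addition, not part of the original text; the function name is mine), using the drapery-yarn numbers of part (c) as the running example:

```python
from statistics import mean, variance

def update(n, xbar, s2, x_new):
    """Update the sample mean and sample variance when observation x_new
    is added to a sample of size n with mean xbar and variance s2."""
    xbar_new = (n * xbar + x_new) / (n + 1)
    # n * s2_new = (n - 1) * s2 + (n / (n + 1)) * (x_new - xbar)^2
    s2_new = ((n - 1) * s2 + (n / (n + 1)) * (x_new - xbar) ** 2) / n
    return xbar_new, s2_new

# Sanity check against direct computation on an arbitrary small sample
data = [12.1, 12.9, 13.4, 11.8, 12.6]
xb, s2 = update(4, mean(data[:4]), variance(data[:4]), data[4])
assert abs(xb - mean(data)) < 1e-9 and abs(s2 - variance(data)) < 1e-9

# Exercise 76(c): n = 15, mean 12.58 mm, s = .512 mm; a 16th strand gives 11.8
xb16, s2_16 = update(15, 12.58, 0.512 ** 2, 11.8)
print(xb16, s2_16 ** 0.5)  # updated mean and standard deviation
```

The recurrence reproduces the directly computed mean and variance, so the full sample never needs to be stored.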
("A Field Study on Braking Responses during Driving," Ergonomics, 1995: 1903–1910):
mean = 535   median = 500   mode = 500
sd = 96   minimum = 220   maximum = 925
5th percentile = 400   10th percentile = 430
90th percentile = 640   95th percentile = 720
What can you conclude about the shape of a histogram of this data? Explain your reasoning.
[Note: A relevant reference is the article "Simple Statistics for Interpreting Environmental Data," Water Pollution Contr. Fed. J., 1981: 167–175.]

79. The sample data x1, x2, …, xn sometimes represents a time series, where xt = the observed value of a response variable x at time t. Often the observed series shows a great deal of random variation, which makes it difficult to study longer-term behavior. In such situations, it is desirable to produce a smoothed version of the series. One technique for doing so involves exponential smoothing. The value of a smoothing constant α is chosen (0 < α < 1). Then with x̄t = smoothed value at time t, we set x̄1 = x1, and for t = 2, 3, …, n, x̄t = αxt + (1 − α)x̄t−1.
a. Consider the following time series in which xt = temperature (°F) of effluent at a sewage treatment plant on day t: 47, 54, 53, 50, 46, 46, 47, 50, 51, 50, 46, 52, 50, 50. Plot each xt against t on a two-dimensional coordinate system (a time-series plot). Does there appear to be any pattern?
b. Calculate the x̄t's using α = .1. Repeat using α = .5. Which value of α gives a smoother x̄t series?
c. Substitute x̄t−1 = αxt−1 + (1 − α)x̄t−2 on the right-hand side of the expression for x̄t, then substitute x̄t−2 in terms of xt−2 and x̄t−3, and so on. On how many of the values xt, xt−1, …, x1 does x̄t depend? What happens to the coefficient on xt−k as k increases?
d. Refer to part (c). If t is large, how sensitive is x̄t to the initialization x̄1 = x1? Explain.

80. Consider numerical observations x1, …, xn. It is frequently of interest to know whether the xi's are (at least approximately) symmetrically distributed about some value. If n is at least moderately large, the extent of symmetry can be assessed from a stem-and-leaf display or histogram. However, if n is not very large, such pictures are not particularly informative. Consider the following alternative. Let y1 denote the smallest xi, y2 the second smallest xi, and so on. Then plot the following pairs as points on a two-dimensional coordinate system: (yn − x̃, x̃ − y1), (yn−1 − x̃, x̃ − y2), (yn−2 − x̃, x̃ − y3), …. There are n/2 points when n is even and (n − 1)/2 when n is odd.
a. What does this plot look like when there is perfect symmetry in the data? What does it look like when observations stretch out more above the median than below it (a long upper tail)?
b. The accompanying data on rainfall (acre-feet) from 26 seeded clouds is taken from the article "A Bayesian Analysis of a Multiplicative Treatment Effect in Weather Modification" (Technometrics, 1975: 161–166). Construct the plot and comment on the extent of symmetry or nature of departure from symmetry.
4.1 7.7 17.5 31.4 32.7 40.6 92.4 115.3 118.3 119.0 129.6 198.6 200.7 242.5 255.0 274.7 274.7 302.8 334.1 430.0 489.1 703.4 978.0 1656.0 1697.8 2745.6

Bibliography

Chambers, John, William Cleveland, Beat Kleiner, and Paul Tukey, Graphical Methods for Data Analysis, Brooks/Cole, Pacific Grove, CA, 1983. A highly recommended presentation of both older and more recent graphical and pictorial methodology in statistics.

Freedman, David, Robert Pisani, and Roger Purves, Statistics (4th ed.), Norton, New York, 2007. An excellent, very nonmathematical survey of basic statistical reasoning and methodology.

Hoaglin, David, Frederick Mosteller, and John Tukey, Understanding Robust and Exploratory Data Analysis, Wiley, New York, 1983. Discusses why, as well as how, exploratory methods should be employed; it is good on details of stem-and-leaf displays and boxplots.

Hoaglin, David, and Paul Velleman, Applications, Basics, and Computing of Exploratory Data Analysis, Duxbury Press, Boston, 1980. A good discussion of some basic exploratory methods.

Moore, David, Statistics: Concepts and Controversies (7th ed.), Freeman, San Francisco, 2010. An extremely readable and entertaining paperback that contains an intuitive discussion of problems connected with sampling and designed experiments.

Peck, Roxy, and Jay Devore, Statistics: The Exploration and Analysis of Data (7th ed.), Brooks/Cole, Boston, MA, 2012. The first few chapters give a very nonmathematical survey of methods for describing and summarizing data.

Peck, Roxy, et al. (eds.), Statistics: A Guide to the Unknown (4th ed.), Thomson-Brooks/Cole, Belmont, CA, 2006. Contains many short, nontechnical articles describing various applications of statistics.

2 Probability

Introduction

The term probability refers to the study of randomness and uncertainty. In any situation in which one of a number of possible outcomes may occur, the theory of probability provides methods for quantifying the chances, or likelihoods, associated with the various outcomes. The language of probability is constantly used in an informal manner in both written and spoken contexts. Examples include such statements as "It is likely that the Dow Jones Industrial Average will increase by the end of the year," "There is a 50–50 chance that the incumbent will seek reelection," "There will probably be at least one section of that course offered next year," "The odds favor a quick settlement of the strike," and "It is expected that at least 20,000 concert tickets will be sold." In this chapter, we introduce some elementary probability concepts, indicate how probabilities can be interpreted, and show how the rules of probability can be applied to compute the probabilities of many interesting events.
The methodology of probability will then permit us to express in precise language such informal statements as those given above. The study of probability as a branch of mathematics goes back over 300 years, where it had its genesis in connection with questions involving games of chance. Many books are devoted exclusively to probability and explore in great detail numerous interesting aspects and applications of this lovely branch of mathematics. Our objective here is more limited in scope: We will focus on those topics that are central to a basic understanding and also have the most direct bearing on problems of statistical inference.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_2, © Springer Science+Business Media, LLC 2012

2.1 Sample Spaces and Events

An experiment is any action or process whose outcome is subject to uncertainty. Although the word experiment generally suggests a planned or carefully controlled laboratory testing situation, we use it here in a much wider sense. Thus experiments that may be of interest include tossing a coin once or several times, selecting a card or cards from a deck, weighing a loaf of bread, ascertaining the commuting time from home to work on a particular morning, obtaining blood types from a group of individuals, or calling people to conduct a survey.

The Sample Space of an Experiment

DEFINITION The sample space of an experiment, denoted by S, is the set of all possible outcomes of that experiment.

Example 2.1 The simplest experiment to which probability applies is one with two possible outcomes. One such experiment consists of examining a single fuse to see whether it is defective. The sample space for this experiment can be abbreviated as S = {N, D}, where N represents not defective, D represents defective, and the braces are used to enclose the elements of a set.
Another such experiment would involve tossing a thumbtack and noting whether it landed point up or point down, with sample space S = {U, D}, and yet another would consist of observing the gender of the next child born at the local hospital, with S = {M, F}. ■

Example 2.2 If we examine three fuses in sequence and note the result of each examination, then an outcome for the entire experiment is any sequence of N's and D's of length 3, so
S = {NNN, NND, NDN, NDD, DNN, DND, DDN, DDD}
If we had tossed a thumbtack three times, the sample space would be obtained by replacing N by U in S above. A similar notational change would yield the sample space for the experiment in which the genders of three newborn children are observed. ■

Example 2.3 Two gas stations are located at a certain intersection. Each one has six gas pumps. Consider the experiment in which the number of pumps in use at a particular time of day is determined for each of the stations. An experimental outcome specifies how many pumps are in use at the first station and how many are in use at the second one. One possible outcome is (2, 2), another is (4, 1), and yet another is (1, 4). The 49 outcomes in S are displayed in the accompanying table. The sample space for the experiment in which a six-sided die is thrown twice results from deleting the 0 row and 0 column from the table, giving 36 outcomes.

                              Second Station
First Station   0       1       2       3       4       5       6
0             (0, 0)  (0, 1)  (0, 2)  (0, 3)  (0, 4)  (0, 5)  (0, 6)
1             (1, 0)  (1, 1)  (1, 2)  (1, 3)  (1, 4)  (1, 5)  (1, 6)
2             (2, 0)  (2, 1)  (2, 2)  (2, 3)  (2, 4)  (2, 5)  (2, 6)
3             (3, 0)  (3, 1)  (3, 2)  (3, 3)  (3, 4)  (3, 5)  (3, 6)
4             (4, 0)  (4, 1)  (4, 2)  (4, 3)  (4, 4)  (4, 5)  (4, 6)
5             (5, 0)  (5, 1)  (5, 2)  (5, 3)  (5, 4)  (5, 5)  (5, 6)
6             (6, 0)  (6, 1)  (6, 2)  (6, 3)  (6, 4)  (6, 5)  (6, 6)
■

Example 2.4 If a new type-D flashlight battery has a voltage that is outside certain limits, that battery is characterized as a failure (F); if the battery has a voltage within the prescribed limits, it is a success (S). Suppose an experiment consists of testing each battery as it comes off an assembly line until we first observe a success. Although it may not be very likely, a possible outcome of this experiment is that the first 10 (or 100 or 1000 or …) are F's and the next one is an S. That is, for any positive integer n, we may have to examine n batteries before seeing the first S. The sample space is S = {S, FS, FFS, FFFS, …}, which contains an infinite number of possible outcomes. The same abbreviated form of the sample space is appropriate for an experiment in which, starting at a specified time, the gender of each newborn infant is recorded until the birth of a male is observed. ■

Events

In our study of probability, we will be interested not only in the individual outcomes of S but also in any collection of outcomes from S.

DEFINITION An event is any collection (subset) of outcomes contained in the sample space S. An event is said to be simple if it consists of exactly one outcome and compound if it consists of more than one outcome.

When an experiment is performed, a particular event A is said to occur if the resulting experimental outcome is contained in A. In general, exactly one simple event will occur, but many compound events will occur simultaneously.

Example 2.5 Consider an experiment in which each of three vehicles taking a particular freeway exit turns left (L) or right (R) at the end of the exit ramp. The eight possible outcomes that comprise the sample space are LLL, RLL, LRL, LLR, LRR, RLR, RRL, and RRR. Thus there are eight simple events, among which are E1 = {LLL} and E5 = {LRR}. Some compound events include
A = {RLL, LRL, LLR} = the event that exactly one of the three vehicles turns right
B = {LLL, RLL, LRL, LLR} = the event that at most one of the vehicles turns right
C = {LLL, RRR} = the event that all three vehicles turn in the same direction
Suppose that when the experiment is performed, the outcome is LLL. Then the simple event E1 has occurred and so also have the events B and C (but not A). ■

(Example 2.3 continued) When the number of pumps in use at each of two six-pump gas stations is observed, there are 49 possible outcomes, so there are 49 simple events: E1 = {(0, 0)}, E2 = {(0, 1)}, …, E49 = {(6, 6)}. Examples of compound events are
A = {(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)} = the event that the number of pumps in use is the same for both stations
B = {(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)} = the event that the total number of pumps in use is four
C = {(0, 0), (0, 1), (1, 0), (1, 1)} = the event that at most one pump is in use at each station ■

(Example 2.4 continued) The sample space for the battery examination experiment contains an infinite number of outcomes, so there are an infinite number of simple events. Compound events include
A = {S, FS, FFS} = the event that at most three batteries are examined
E = {FS, FFFS, FFFFFS, …} = the event that an even number of batteries are examined ■

Some Relations from Set Theory

An event is nothing but a set, so relationships and results from elementary set theory can be used to study events. The following operations will be used to construct new events from given events.

DEFINITION
1. The union of two events A and B, denoted by A ∪ B and read "A or B," is the event consisting of all outcomes that are either in A or in B or in both events (so that the union includes outcomes for which both A and B occur as well as outcomes for which exactly one occurs), that is, all outcomes in at least one of the events.
2. The intersection of two events A and B, denoted by A ∩ B and read "A and B," is the event consisting of all outcomes that are in both A and B.
3. The complement of an event A, denoted by A′, is the set of all outcomes in S that are not contained in A.
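Finite sample spaces and events translate directly into sets on a computer. The following is an illustrative sketch in Python (not part of the original text; variable names are my own) for the two-station pump experiment of Example 2.3 and the three-vehicle experiment of Example 2.5:

```python
from itertools import product

# Example 2.3: number of pumps (0-6) in use at each of two stations
S_pumps = set(product(range(7), repeat=2))
assert len(S_pumps) == 49  # the 49 outcomes from the table

# Example 2.5: each of three vehicles turns left (L) or right (R)
S_turns = {"".join(t) for t in product("LR", repeat=3)}
assert len(S_turns) == 8

# Compound events from Example 2.5, built as subsets of the sample space
A = {o for o in S_turns if o.count("R") == 1}   # exactly one turns right
B = {o for o in S_turns if o.count("R") <= 1}   # at most one turns right
C = {"LLL", "RRR"}                              # all turn the same direction

# An event occurs if the observed outcome is contained in it
outcome = "LLL"
print(outcome in A, outcome in B, outcome in C)
```

The membership tests mirror the definition of occurrence: the outcome LLL makes B and C occur but not A.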
(Example 2.3 continued) For the experiment in which the number of pumps in use at a single six-pump gas station is observed, let A = {0, 1, 2, 3, 4}, B = {3, 4, 5, 6}, and C = {1, 3, 5}. Then
A ∪ B = {0, 1, 2, 3, 4, 5, 6} = S    A ∪ C = {0, 1, 2, 3, 4, 5}
A ∩ B = {3, 4}    A ∩ C = {1, 3}    A′ = {5, 6}    (A ∪ C)′ = {6} ■

(Example 2.4 continued) In the battery experiment, define A, B, and C by
A = {S, FS, FFS}, B = {S, FFS, FFFFS}, and C = {FS, FFFS, FFFFFS, …}
Then
A ∪ B = {S, FS, FFS, FFFFS}    A ∩ B = {S, FFS}
A′ = {FFFS, FFFFS, FFFFFS, …}
and
C′ = {S, FFS, FFFFS, …} = {an odd number of batteries are examined} ■

Sometimes A and B have no outcomes in common, so that the intersection of A and B contains no outcomes.

DEFINITION When A and B have no outcomes in common, they are said to be disjoint or mutually exclusive events. Mathematicians write this compactly as A ∩ B = ∅, where ∅ denotes the event consisting of no outcomes whatsoever (the "null" or "empty" event).

Example 2.6 A small city has three automobile dealerships: a GM dealer selling Chevrolets and Buicks; a Ford dealer selling Fords and Lincolns; and a Chrysler dealer selling Jeeps and Chryslers. If an experiment consists of observing the brand of the next car sold, then the events A = {Chevrolet, Buick} and B = {Ford, Lincoln} are mutually exclusive because the next car sold cannot be both a GM product and a Ford product. ■

The operations of union and intersection can be extended to more than two events. For any three events A, B, and C, the event A ∪ B ∪ C is the set of outcomes contained in at least one of the three events, whereas A ∩ B ∩ C is the set of outcomes contained in all three events. Given events A1, A2, A3, …, these events are said to be mutually exclusive (or pairwise disjoint) if no two events have any outcomes in common.

A pictorial representation of events and manipulations with events is obtained by using Venn diagrams. To construct a Venn diagram, draw a rectangle whose interior will represent the sample space S.
Then any event A is represented as the interior of a closed curve (often a circle) contained in S. Figure 2.1 shows examples of Venn diagrams.

Figure 2.1 Venn diagrams: (a) Venn diagram of events A and B; (b) shaded region is A ∩ B; (c) shaded region is A ∪ B; (d) shaded region is A′; (e) mutually exclusive events

Exercises | Section 2.1 (1–12)

1. Ann and Bev have each applied for several jobs at a local university. Let A be the event that Ann is hired and let B be the event that Bev is hired. Express in terms of A and B the events
a. Ann is hired but not Bev.
b. At least one of them is hired.
c. Exactly one of them is hired.

2. Two voters, Al and Bill, are each choosing between one of three candidates – 1, 2, and 3 – who are running for city council. An experimental outcome specifies both Al's choice and Bill's choice, e.g., the pair (3, 2).
a. List all elements of S.
b. List all outcomes in the event A that Al and Bill make the same choice.
c. List all outcomes in the event B that neither of them votes for candidate 2.

3. Four universities—1, 2, 3, and 4—are participating in a holiday basketball tournament. In the first round, 1 will play 2 and 3 will play 4. Then the two winners will play for the championship, and the two losers will also play. One possible outcome can be denoted by 1324 (1 beats 2 and 3 beats 4 in first-round games, and then 1 beats 3 and 2 beats 4).
a. List all outcomes in S.
b. Let A denote the event that 1 wins the tournament. List outcomes in A.
c. Let B denote the event that 2 gets into the championship game. List outcomes in B.
d. What are the outcomes in A ∪ B and in A ∩ B? What are the outcomes in A′?

4. Suppose that vehicles taking a particular freeway exit can turn right (R), turn left (L), or go straight (S). Consider observing the direction for each of three successive vehicles.
a. List all outcomes in the event A that all three vehicles go in the same direction.
b. List all outcomes in the event B that all three vehicles take different directions.
c. List all outcomes in the event C that exactly two of the three vehicles turn right.
d. List all outcomes in the event D that exactly two vehicles go in the same direction.
e. List outcomes in D′, C ∪ D, and C ∩ D.

5. Three components are connected to form a system as shown in the accompanying diagram. Because the components in the 2–3 subsystem are connected in parallel, that subsystem will function if at least one of the two individual components functions. For the entire system to function, component 1 must function and so must the 2–3 subsystem.

[Diagram: component 1 in series with the parallel pair of components 2 and 3]

The experiment consists of determining the condition of each component [S (success) for a functioning component and F (failure) for a nonfunctioning component].
a. What outcomes are contained in the event A that exactly two out of the three components function?
b. What outcomes are contained in the event B that at least two of the components function?
c. What outcomes are contained in the event C that the system functions?
d. List outcomes in C′, A ∪ C, A ∩ C, B ∪ C, and B ∩ C.

6. Each of a sample of four home mortgages is classified as fixed rate (F) or variable rate (V).
a. What are the 16 outcomes in S?
b. Which outcomes are in the event that exactly three of the selected mortgages are fixed rate?
c. Which outcomes are in the event that all four mortgages are of the same type?
d. Which outcomes are in the event that at most one of the four is a variable-rate mortgage?
e. What is the union of the events in parts (c) and (d), and what is the intersection of these two events?
f. What are the union and intersection of the two events in parts (b) and (c)?

7. A family consisting of three persons—A, B, and C—belongs to a medical clinic that always has a doctor at each of stations 1, 2, and 3. During a certain week, each member of the family visits the clinic once and is assigned at random to a station. The experiment consists of recording the station number for each member. One outcome is (1, 2, 1) for A to station 1, B to station 2, and C to station 1.
a. List the 27 outcomes in the sample space.
b. List all outcomes in the event that all three members go to the same station.
c. List all outcomes in the event that all members go to different stations.
d. List all outcomes in the event that no one goes to station 2.

8. A college library has five copies of a certain text on reserve. Two copies (1 and 2) are first printings, and the other three (3, 4, and 5) are second printings. A student examines these books in random order, stopping only when a second printing has been selected. One possible outcome is 5, and another is 213.
a. List the outcomes in S.
b. Let A denote the event that exactly one book must be examined. What outcomes are in A?
c. Let B be the event that book 5 is the one selected. What outcomes are in B?
d. Let C be the event that book 1 is not examined. What outcomes are in C?

9. An academic department has just completed voting by secret ballot for a department head. The ballot box contains four slips with votes for candidate A and three slips with votes for candidate B. Suppose these slips are removed from the box one by one.
a. List all possible outcomes.
b. Suppose a running tally is kept as slips are removed. For what outcomes does A remain ahead of B throughout the tally?

10. A construction firm is currently working on three different buildings. Let Ai denote the event that the ith building is completed by the contract date. Use the operations of union, intersection, and complementation to describe each of the following events in terms of A1, A2, and A3, draw a Venn diagram, and shade the region corresponding to each one.
a. At least one building is completed by the contract date.
b. All buildings are completed by the contract date.
c. Only the first building is completed by the contract date.
d. Exactly one building is completed by the contract date.
e. Either the first building or both of the other two buildings are completed by the contract date.

11. Use Venn diagrams to verify the following two relationships for any events A and B (these are called De Morgan's laws):
a. (A ∪ B)′ = A′ ∩ B′
b. (A ∩ B)′ = A′ ∪ B′

12. a. In Example 2.10, identify three events that are mutually exclusive.
b. Suppose there is no outcome common to all three of the events A, B, and C. Are these three events necessarily mutually exclusive? If your answer is yes, explain why; if your answer is no, give a counterexample using the experiment of Example 2.10.

Axioms, Interpretations, and Properties of Probability

Given an experiment and a sample space S, the objective of probability is to assign to each event A a number P(A), called the probability of the event A, which will give a precise measure of the chance that A will occur. To ensure that the probability assignments will be consistent with our intuitive notions of probability, all assignments should satisfy the following axioms (basic properties) of probability.

AXIOM 1    For any event A, P(A) ≥ 0.
AXIOM 2    P(S) = 1.
AXIOM 3    If A1, A2, A3, ... is an infinite collection of disjoint events, then

P(A1 ∪ A2 ∪ A3 ∪ ···) = Σ P(Ai)    (the sum over i = 1, 2, 3, ...)

You might wonder why the third axiom contains no reference to a finite collection of disjoint events. It is because the corresponding property for a finite collection can be derived from our three axioms. We want our axiom list to be as short as possible and not contain any property that can be derived from others on the list. Axiom 1 reflects the intuitive notion that the chance of A occurring should be nonnegative.
The sample space is by definition the event that must occur when the experiment is performed (S contains all possible outcomes), so Axiom 2 says that the maximum possible probability of 1 is assigned to S. The third axiom formalizes the idea that if we wish the probability that at least one of a number of events will occur and no two of the events can occur simultaneously, then the chance of at least one occurring is the sum of the chances of the individual events.

PROPOSITION    P(∅) = 0, where ∅ is the null event. This in turn implies that the property contained in Axiom 3 is valid for a finite collection of events.

Proof  First consider the infinite collection A1 = ∅, A2 = ∅, A3 = ∅, .... Since ∅ ∩ ∅ = ∅, the events in this collection are disjoint and ∪Ai = ∅. The third axiom then gives

P(∅) = Σ P(∅)

This can happen only if P(∅) = 0.

Now suppose that A1, A2, ..., Ak are disjoint events, and append to these the infinite collection Ak+1 = ∅, Ak+2 = ∅, Ak+3 = ∅, .... Again invoking the third axiom,

P(A1 ∪ ··· ∪ Ak) = P(A1 ∪ A2 ∪ A3 ∪ ···) = Σ (i = 1 to ∞) P(Ai) = Σ (i = 1 to k) P(Ai)

since P(Ai) = 0 for i > k, as desired.        ■

Example 2.11  Consider tossing a thumbtack in the air. When it comes to rest on the ground, either its point will be up (the outcome U) or down (the outcome D). The sample space for this experiment is therefore S = {U, D}. The axioms specify P(S) = 1, so the probability assignment will be completed by determining P(U) and P(D). Since U and D are disjoint and their union is S, the foregoing proposition implies that

1 = P(S) = P(U) + P(D)

It follows that P(D) = 1 − P(U). One possible assignment of probabilities is P(U) = .5, P(D) = .5, whereas another possible assignment is P(U) = .75, P(D) = .25. In fact, letting p represent any fixed number between 0 and 1, P(U) = p, P(D) = 1 − p is an assignment consistent with the axioms.        ■

Example 2.12  Consider the experiment in Example 2.4, in which batteries coming off an assembly line are tested one by one until one having a voltage within prescribed limits is found.
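For a finite sample space, an assignment of probabilities to the individual outcomes determines every event probability by summation, so Axiom 3 holds automatically; only Axioms 1 and 2 need checking. A minimal Python sketch (the helper name `satisfies_axioms` is ours, not the text's), applied to the thumbtack space:

```python
# Check Axioms 1 and 2 for an outcome-level assignment on a finite sample
# space. Axiom 3 needs no separate check here: once each outcome has a
# probability, event probabilities are defined by summing, which is additive.
def satisfies_axioms(assignment, tol=1e-12):
    """assignment: dict mapping each outcome of S to its probability."""
    nonneg = all(prob >= 0 for prob in assignment.values())    # Axiom 1
    sums_to_one = abs(sum(assignment.values()) - 1.0) < tol    # Axiom 2
    return nonneg and sums_to_one

# Any fixed p between 0 and 1 gives a legitimate thumbtack assignment:
print(satisfies_axioms({"U": 0.75, "D": 0.25}))   # True
print(satisfies_axioms({"U": 1.2, "D": -0.2}))    # False (violates Axiom 1)
```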
The simple events are E1 = {S}, E2 = {FS}, E3 = {FFS}, E4 = {FFFS}, .... Suppose the probability of any particular battery being satisfactory is .99. Then it can be shown that P(E1) = .99, P(E2) = (.01)(.99), P(E3) = (.01)²(.99), ... is an assignment of probabilities to the simple events that satisfies the axioms. In particular, because the Ei's are disjoint and S = E1 ∪ E2 ∪ E3 ∪ ···, it must be the case that

1 = P(S) = P(E1) + P(E2) + P(E3) + ··· = .99[1 + .01 + (.01)² + (.01)³ + ···]

Here we have used the formula for the sum of a geometric series:

1 + r + r² + r³ + ··· = 1/(1 − r)

However, another legitimate (according to the axioms) probability assignment of the same "geometric" type is obtained by replacing .99 by any other number p between 0 and 1 (and .01 by 1 − p).        ■

Interpreting Probability

Examples 2.11 and 2.12 show that the axioms do not completely determine an assignment of probabilities to events. The axioms serve only to rule out assignments inconsistent with our intuitive notions of probability. In the tack-tossing experiment of Example 2.11, two particular assignments were suggested. The appropriate or correct assignment depends on the nature of the thumbtack and also on one's interpretation of probability. The interpretation that is most frequently used and most easily understood is based on the notion of relative frequencies.

Consider an experiment that can be repeatedly performed in an identical and independent fashion, and let A be an event consisting of a fixed set of outcomes of the experiment. Simple examples of such repeatable experiments include the tack-tossing and die-tossing experiments previously discussed. If the experiment is performed n times, on some of the replications the event A will occur (the outcome will be in the set A), and on others, A will not occur. Let n(A) denote the number of replications on which A does occur. Then the ratio n(A)/n is called the relative frequency of occurrence of the event A in the sequence of n replications.
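The "geometric" assignment in Example 2.12 is easy to check numerically. A small Python sketch (notation ours) computes P(Ek) = (.01)^(k−1)(.99) and confirms that the probabilities total 1, as Axiom 2 requires:

```python
# "Geometric" assignment from the battery example: the first satisfactory
# battery is found on examination k with probability
#     P(E_k) = (1 - p)**(k - 1) * p,    here with p = .99.
p = 0.99

def P(k):
    return (1 - p) ** (k - 1) * p

# Partial sums of P(E_1) + P(E_2) + ... approach 1, as Axiom 2 requires
# (this is the geometric series .99 * [1 + .01 + .01**2 + ...]).
total = sum(P(k) for k in range(1, 200))
print(total)   # essentially 1, up to floating-point error
```

Replacing `p = 0.99` by any other value strictly between 0 and 1 gives another assignment of the same type that also sums to 1.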
Empirical evidence, based on the results of many of these sequences of repeatable experiments, indicates that as n grows large, the relative frequency n(A)/n stabilizes, as pictured in Figure 2.2. That is, as n gets arbitrarily large, the relative frequency approaches a limiting value we refer to as the limiting relative frequency of the event A. The objective interpretation of probability identifies this limiting relative frequency with P(A).

Figure 2.2 Stabilization of relative frequency [relative frequency n(A)/n plotted against n = number of experiments performed]

If probabilities are assigned to events in accordance with their limiting relative frequencies, then we can interpret a statement such as "The probability of that coin landing with the head facing up when it is tossed is .5" to mean that in a large number of such tosses, a head will appear on approximately half the tosses and a tail on the other half.

This relative frequency interpretation of probability is said to be objective because it rests on a property of the experiment rather than on any particular individual concerned with the experiment. For example, two different observers of a sequence of coin tosses should both use the same probability assignments since the observers have nothing to do with limiting relative frequency. In practice, this interpretation is not as objective as it might seem, because the limiting relative frequency of an event will not be known. Thus we will have to assign probabilities based on our beliefs about the limiting relative frequency of events under study. Fortunately, there are many experiments for which there will be a consensus with respect to probability assignments. When we speak of a fair coin, we shall mean P(H) = P(T) = .5, and a fair die is one for which limiting relative frequencies of the six outcomes are all equal, suggesting probability assignments P({1}) = ··· = P({6}) = 1/6.
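The stabilization pictured in Figure 2.2 can be reproduced by simulation. A brief Python sketch (the function name is ours) tosses a simulated fair coin n times and reports the relative frequency of heads:

```python
import random

# Relative frequency n(A)/n of "heads" in n simulated fair-coin tosses;
# as n grows, it stabilizes near .5 (the behavior pictured in Figure 2.2).
random.seed(1)   # fixed seed so the run is reproducible

def rel_freq(n, p=0.5):
    """Relative frequency of an event of probability p in n replications."""
    hits = sum(random.random() < p for _ in range(n))
    return hits / n

for n in (100, 10_000, 1_000_000):
    print(n, rel_freq(n))
```

For small n the printed frequencies wander noticeably; by n = 1,000,000 they sit very close to .5, which is exactly the stabilization the objective interpretation relies on.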
Because the objective interpretation of probability is based on the notion of limiting frequency, its applicability is limited to experimental situations that are repeatable. Yet the language of probability is often used in connection with situations that are inherently unrepeatable. Examples include: "The chances are good for a peace agreement;" "It is likely that our company will be awarded the contract;" and "Because their best quarterback is injured, I expect them to score no more than 10 points against us." In such situations we would like, as before, to assign numerical probabilities to various outcomes and events (e.g., the probability is .9 that we will get the contract). We must therefore adopt an alternative interpretation of these probabilities. Because different observers may have different prior information and opinions concerning such experimental situations, probability assignments may now differ from individual to individual. Interpretations in such situations are thus referred to as subjective. The book by Robert Winkler listed in the chapter references gives a very readable survey of several subjective interpretations.

More Probability Properties

PROPOSITION    For any event A, P(A) = 1 − P(A′).

Proof  Since by definition of A′, A ∪ A′ = S while A and A′ are disjoint, 1 = P(S) = P(A ∪ A′) = P(A) + P(A′), from which the desired result follows.        ■

This proposition is surprisingly useful because there are many situations in which P(A′) is more easily obtained by direct methods than is P(A).

Consider a system of five identical components connected in series, as illustrated in Figure 2.3.

Figure 2.3 A system of five components connected in series

Denote a component that fails by F and one that doesn't fail by S (for success). Let A be the event that the system fails. For A to occur, at least one of the individual components must fail.
Outcomes in A include SSFSS (1, 2, 4, and 5 all work, but 3 does not), FFSSS, and so on. There are in fact 31 different outcomes in A. However, A′, the event that the system works, consists of the single outcome SSSSS. We will see in Section 2.5 that if 90% of all these components do not fail and different components fail independently of one another, then P(A′) = P(SSSSS) = (.9)⁵ = .59. Thus P(A) = 1 − .59 = .41; so among a large number of such systems, roughly 41% will fail.        ■

In general, the foregoing proposition is useful when the event of interest can be expressed as "at least ...," because the complement "less than ..." may be easier to work with. (In some problems, "more than ..." is easier to deal with than "at most ....") When you are having difficulty calculating P(A) directly, think of determining P(A′).

PROPOSITION    For any event A, P(A) ≤ 1.

This follows from the previous proposition: 1 = P(A) + P(A′) ≥ P(A), because P(A′) ≥ 0.

When A and B are disjoint, we know that P(A ∪ B) = P(A) + P(B). How can this union probability be obtained when the events are not disjoint?

PROPOSITION    For any events A and B, P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Notice that the proposition is valid even if A and B are disjoint, since then P(A ∩ B) = 0. The key idea is that, in adding P(A) and P(B), the probability of the intersection A ∩ B is actually counted twice, so P(A ∩ B) must be subtracted out.

Proof  Note first that A ∪ B = A ∪ (B ∩ A′), as illustrated in Figure 2.4. Because A and (B ∩ A′) are disjoint, P(A ∪ B) = P(A) + P(B ∩ A′). But B = (B ∩ A) ∪ (B ∩ A′) (the union of that part of B in A and that part of B not in A). Furthermore, (B ∩ A) and (B ∩ A′) are disjoint, so that P(B) = P(B ∩ A) + P(B ∩ A′).
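The complement-rule arithmetic for the series system is a one-liner. A quick Python check (assuming, as the text does, 90% component reliability and independent failures):

```python
# Complement rule for the five-component series system:
# A = {system fails}, A' = {all five components work} = {SSSSS}.
p_work = 0.9   # assumed reliability of each component (from the text)
n = 5          # components in series

p_all_work = p_work ** n    # P(A') = (.9)**5, independence assumed
p_fail = 1 - p_all_work     # P(A) = 1 - P(A')
print(round(p_all_work, 2), round(p_fail, 2))   # 0.59 0.41
```

Computing P(A) directly would mean summing over all 31 failure outcomes; the complement reduces the work to a single outcome.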
Combining these results gives

P(A ∪ B) = P(A) + P(B ∩ A′) = P(A) + [P(B) − P(A ∩ B)] = P(A) + P(B) − P(A ∩ B)        ■

Figure 2.4 Representing A ∪ B as a union of disjoint events

In a certain residential suburb, 60% of all households get internet service from the local cable company, 80% get television service from that company, and 50% get both services from the company. If a household is randomly selected, what is the probability that it gets at least one of these two services from the company, and what is the probability that it gets exactly one of the services from the company?

With A = {gets internet service from the cable company} and B = {gets television service from the cable company}, the given information implies that P(A) = .6, P(B) = .8, and P(A ∩ B) = .5. The previous proposition then applies to give

P(gets at least one of these two services from the company) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = .6 + .8 − .5 = .9

The event that a household gets only television service from the company can be written as A′ ∩ B [(not internet) and television]. Now Figure 2.4 implies that

.9 = P(A ∪ B) = P(A) + P(A′ ∩ B) = .6 + P(A′ ∩ B)

from which P(A′ ∩ B) = .3. Similarly, P(A ∩ B′) = P(A ∪ B) − P(B) = .1. This is all illustrated in Figure 2.5, from which we see that

P(exactly one) = P(A ∩ B′) + P(A′ ∩ B) = .1 + .3 = .4

Figure 2.5 Probabilities for Example 2.14 [P(A ∩ B′) = .1, P(A ∩ B) = .5, P(A′ ∩ B) = .3]        ■

The probability of a union of more than two events can be computed analogously. For three events A, B, and C, the result is

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)

This can be seen by examining a Venn diagram of A ∪ B ∪ C, which is shown in Figure 2.6. When P(A), P(B), and P(C) are added, outcomes in certain intersections are double counted and the corresponding probabilities must be subtracted. But this results in P(A ∩ B ∩ C) being subtracted once too often, so it must be added back.
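The bookkeeping in the cable-company example can be verified mechanically. A short Python sketch of the inclusion-exclusion arithmetic (variable names ours):

```python
# Inclusion-exclusion for the cable-company example:
# P(A) = .6 (internet), P(B) = .8 (television), P(A ∩ B) = .5.
P_A, P_B, P_AB = 0.6, 0.8, 0.5

P_union = P_A + P_B - P_AB      # P(A ∪ B), at least one service
P_only_B = P_union - P_A        # P(A′ ∩ B), television only
P_only_A = P_union - P_B        # P(A ∩ B′), internet only
P_exactly_one = P_only_A + P_only_B

print(round(P_union, 2), round(P_exactly_one, 2))   # 0.9 0.4
```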
One formal proof involves applying the previous proposition to P((A ∪ B) ∪ C), the probability of the union of the two events A ∪ B and C. More generally, a result concerning P(A1 ∪ ··· ∪ Ak) can be proved by induction or by other methods.

Figure 2.6 A ∪ B ∪ C

Determining Probabilities Systematically

When the number of possible outcomes (simple events) is large, there will be many compound events. A simple way to determine probabilities for these events that avoids violating the axioms and derived properties is to first determine probabilities P(Ei) for all simple events. These should satisfy P(Ei) ≥ 0 and Σi P(Ei) = 1. Then the probability of any compound event A is computed by adding together the P(Ei)'s for all Ei's in A:

P(A) = Σ P(Ei)    (the sum over all Ei's in A)

During off-peak hours a commuter train has five cars. Suppose a commuter is twice as likely to select the middle car (#3) as to select either adjacent car (#2 or #4), and is twice as likely to select either adjacent car as to select either end car (#1 or #5). Let pi = P(car i is selected) = P(Ei). Then we have p3 = 2p2 = 2p4 and p2 = 2p1 = 2p5 = p4. This gives

1 = Σ P(Ei) = p1 + 2p1 + 4p1 + 2p1 + p1 = 10p1

implying p1 = p5 = .1, p2 = p4 = .2, and p3 = .4. The probability that one of the three middle cars is selected (a compound event) is then p2 + p3 + p4 = .8.        ■

Equally Likely Outcomes

In many experiments consisting of N outcomes, it is reasonable to assign equal probabilities to all N simple events. These include such obvious examples as tossing a fair coin or fair die once or twice (or any fixed number of times), or selecting one or several cards from a well-shuffled deck of 52. With p = P(Ei) for every i,

1 = Σ P(Ei) = Σ p = p · N    so    p = 1/N

That is, if there are N possible outcomes, then the probability assigned to each is 1/N. Now consider an event A, with N(A) denoting the number of outcomes contained in A.
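The commuter-train probabilities can be checked in a few lines; a Python sketch (ours) starting from p1 = .1:

```python
# Commuter-train example: p3 = 2*p2 = 2*p4 and p2 = p4 = 2*p1 = 2*p5,
# so 1 = p1 + 2*p1 + 4*p1 + 2*p1 + p1 = 10*p1.
p1 = 1 / 10
p = [p1, 2 * p1, 4 * p1, 2 * p1, p1]   # cars 1 through 5

print(round(sum(p), 10))               # 1.0, as the axioms require
print(round(p[1] + p[2] + p[3], 10))   # 0.8, one of the three middle cars
```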
Then

P(A) = Σ P(Ei) = Σ 1/N = N(A)/N    (both sums over all Ei's in A)

Once we have counted the number N of outcomes in the sample space, to compute the probability of any event we must count the number of outcomes contained in that event and take the ratio of the two numbers. Thus when outcomes are equally likely, computing probabilities reduces to counting.

Example 2.16  When two dice are rolled separately, there are N = 36 outcomes (delete the first row and column from the table in Example 2.3). If both the dice are fair, all 36 outcomes are equally likely, so P(Ei) = 1/36. Then the event A = {sum of two numbers = 7} consists of the six outcomes (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), and (6, 1), so

P(A) = N(A)/N = 6/36 = 1/6        ■

Exercises | Section 2.2 (13–30)

13. A mutual fund company offers its customers several different funds: a money-market fund, three different bond funds (short, intermediate, and long-term), two stock funds (moderate- and high-risk), and a balanced fund. Among customers who own shares in just one fund, the percentages of customers in the different funds are as follows:

Money-market        20%        High-risk stock        18%
Short bond          15%        Moderate-risk stock    25%
Intermediate bond   10%        Balanced                7%
Long bond            5%

A customer who owns shares in just one fund is randomly selected.
a. What is the probability that the selected individual owns shares in the balanced fund?
b. What is the probability that the individual owns shares in a bond fund?
c. What is the probability that the selected individual does not own shares in a stock fund?

14. Consider randomly selecting a student at a certain university, and let A denote the event that the selected individual has a Visa credit card and B be the analogous event for a MasterCard. Suppose that P(A) = .5, P(B) = .4, and P(A ∩ B) = .25.
a. Compute the probability that the selected individual has at least one of the two types of cards (i.e., the probability of the event A ∪ B).
b. What is the probability that the selected individual has neither type of card?
c. Describe, in terms of A and B, the event that the selected student has a Visa card but not a MasterCard, and then calculate the probability of this event.

15. A consulting firm presently has bids out on three projects. Let Ai = {awarded project i}, for i = 1, 2, 3, and suppose that P(A1) = .22, P(A2) = .25, P(A3) = .28, P(A1 ∩ A2) = .11, P(A1 ∩ A3) = .05, P(A2 ∩ A3) = .07, P(A1 ∩ A2 ∩ A3) = .01. Express in words each of the following events, and compute the probability of each event:
a. A1 ∪ A2
b. A1′ ∩ A2′ [Hint: (A1 ∪ A2)′ = A1′ ∩ A2′]
c. A1 ∪ A2 ∪ A3
d. A1′ ∩ A2′ ∩ A3′
e. A1′ ∩ A2′ ∩ A3
f. (A1′ ∩ A2′) ∪ A3

16. A particular state has elected both a governor and a senator. Let A be the event that a randomly selected voter has a favorable view of a certain party's senatorial candidate, and let B be the corresponding event for that party's gubernatorial candidate. Suppose that P(A′) = .44, P(B′) = .57, and P(A ∪ B) = .68 (these figures are suggested by the 2010 general election in California).
a. What is the probability that a randomly selected voter has a favorable view of both candidates?
b. What is the probability that a randomly selected voter has a favorable view of exactly one of these candidates?
c. What is the probability that a randomly selected voter has an unfavorable view of at least one of these candidates?

17. Consider the type of clothes dryer (gas or electric) purchased by each of five different customers at a certain store.
a. If the probability that at most one of these customers purchases an electric dryer is .428, what is the probability that at least two purchase an electric dryer?
b. If P(all five purchase gas) = .116 and P(all five purchase electric) = .005, what is the probability that at least one of each type is purchased?

18. An individual is presented with three different glasses of cola, labeled C, D, and P. He is asked to taste all three and then list them in order of preference. Suppose the same cola has actually been put into all three glasses.
a. What are the simple events in this ranking experiment, and what probability would you assign to each one?
b. What is the probability that C is ranked first?
c. What is the probability that C is ranked first and D is ranked last?

19. Let A denote the event that the next request for assistance from a statistical software consultant relates to the SPSS package, and let B be the event that the next request is for help with SAS. Suppose that P(A) = .30 and P(B) = .50.
a. Why is it not the case that P(A) + P(B) = 1?
b. Calculate P(A′).
c. Calculate P(A ∪ B).
d. Calculate P(A′ ∩ B′).

20. A box contains four 40-W bulbs, five 60-W bulbs, and six 75-W bulbs. If bulbs are selected one by one in random order, what is the probability that at least two bulbs must be selected to obtain one that is rated 75 W?

21. Human visual inspection of solder joints on printed circuit boards can be very subjective. Part of the problem stems from the numerous types of solder defects (e.g., pad nonwetting, knee visibility, voids) and even the degree to which a joint possesses one or more of these defects. Consequently, even highly trained inspectors can disagree on the disposition of a particular joint. In one batch of 10,000 joints, inspector A found 724 that were judged defective, inspector B found 751 such joints, and 1159 of the joints were judged defective by at least one of the inspectors. Suppose that one of the 10,000 joints is randomly selected.
a. What is the probability that the selected joint was judged to be defective by neither of the two inspectors?
b. What is the probability that the selected joint was judged to be defective by inspector B but not by inspector A?

22. A factory operates three different shifts. Over the last year, 200 accidents have occurred at the factory. Some of these can be attributed at least in part to unsafe working conditions, whereas the others are unrelated to working conditions. The accompanying table gives the percentage of accidents falling in each type of accident–shift category.

Shift      Unsafe Conditions    Unrelated to Conditions
Day              10%                    35%
Swing             8%                    20%
Night             5%                    22%

Suppose one of the 200 accident reports is randomly selected from a file of reports, and the shift and type of accident are determined.
a. What are the simple events?
b. What is the probability that the selected accident was attributed to unsafe conditions?
c. What is the probability that the selected accident did not occur on the day shift?

23. An insurance company offers four different deductible levels—none, low, medium, and high—for its homeowner's policyholders and three different levels—low, medium, and high—for its automobile policyholders. The accompanying table gives proportions for the various categories of policyholders who have both types of insurance. For example, the proportion of individuals with both low homeowner's deductible and low auto deductible is .06 (6% of all such individuals).

                    Homeowner's
Auto        N       L       M       H
L          .04     .06     .05     .03
M          .07     .10     .20     .10
H          .02     .03     .15     .15

Suppose an individual having both types of policies is randomly selected.
a. What is the probability that the individual has a medium auto deductible and a high homeowner's deductible?
b. What is the probability that the individual has a low auto deductible? A low homeowner's deductible?
c. What is the probability that the individual is in the same category for both auto and homeowner's deductibles?
d. Based on your answer in part (c), what is the probability that the two categories are different?
e. What is the probability that the individual has at least one low deductible level?
f. Using the answer in part (e), what is the probability that neither deductible level is low?

24. The route used by a driver in commuting to work contains two intersections with traffic signals. The probability that he must stop at the first signal is .4, the analogous probability for the second signal is .5, and the probability that he must stop at one or more of the two signals is .6. What is the probability that he must stop
a. At both signals?
b. At the first signal but not at the second one?
c. At exactly one signal?

25. The computers of six faculty members in a certain department are to be replaced. Two of the faculty members have selected laptop machines and the other four have chosen desktop machines. Suppose that only two of the setups can be done on a particular day, and the two computers to be set up are randomly selected from the six (implying 15 equally likely outcomes; if the computers are numbered 1, 2, ..., 6, then one outcome consists of computers 1 and 2, another consists of computers 1 and 3, and so on).
a. What is the probability that both selected setups are for laptop computers?
b. What is the probability that both selected setups are desktop machines?
c. What is the probability that at least one selected setup is for a desktop computer?
d. What is the probability that at least one computer of each type is chosen for setup?

26. Use the axioms to show that if one event A is contained in another event B (i.e., A is a subset of B), then P(A) ≤ P(B). [Hint: For such A and B, A and B ∩ A′ are disjoint and B = A ∪ (B ∩ A′), as can be seen from a Venn diagram.] For general A and B, what does this imply about the relationship among P(A ∩ B), P(A), and P(A ∪ B)?

27. The three major options on a car model are an automatic transmission (A), a sunroof (B), and an upgraded stereo (C). If 70% of all purchasers request A, 80% request B, 75% request C, 85% request A or B, 90% request A or C, 95% request B or C, and 98% request A or B or C, compute the probabilities of the following events. [Hint: "A or B" is the event that at least one of the two options is requested; try drawing a Venn diagram and labeling all regions.]
a. The next purchaser will request at least one of the three options.
b. The next purchaser will select none of the three options.
c. The next purchaser will request only an automatic transmission and neither of the other two options.
d. The next purchaser will select exactly one of these three options.

28. A certain system can experience three different types of defects. Let Ai (i = 1, 2, 3) denote the event that the system has a defect of type i. Suppose that

P(A1) = .12        P(A2) = .07        P(A3) = .05
P(A1 ∪ A2) = .13        P(A1 ∪ A3) = .14        P(A2 ∪ A3) = .10
P(A1 ∩ A2 ∩ A3) = .01

a. What is the probability that the system does not have a type 1 defect?
b. What is the probability that the system has both type 1 and type 2 defects?
c. What is the probability that the system has both type 1 and type 2 defects but not a type 3 defect?
d. What is the probability that the system has at most two of these defects?

29. In Exercise 7, suppose that any incoming individual is equally likely to be assigned to any of the three stations irrespective of where other individuals have been assigned. What is the probability that
a. All three family members are assigned to the same station?
b. At most two family members are assigned to the same station?
c. Every family member is assigned to a different station?

30. Apply the proposition involving the probability of A ∪ B to the union of the two events (A ∪ B) and C in order to verify the result for P(A ∪ B ∪ C).

Counting Techniques

When the various outcomes of an experiment are equally likely (the same probability is assigned to each simple event), the task of computing probabilities reduces to counting.
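As an illustration of probability-as-counting, the earlier two-dice sum calculation can be carried out by enumerating the equally likely outcomes; a short Python sketch (ours) using exact fractions:

```python
from fractions import Fraction

# Two fair dice: N = 36 equally likely outcomes; count those with sum 7.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = [pair for pair in outcomes if sum(pair) == 7]

P_A = Fraction(len(A), len(outcomes))   # P(A) = N(A)/N
print(len(outcomes), len(A), P_A)       # 36 6 1/6
```

Using `Fraction` keeps the answer as the exact ratio N(A)/N rather than a floating-point approximation.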
In particular, if N is the number of outcomes in a sample space and N(A) is the number of outcomes contained in an event A, then

P(A) = N(A)/N        (2.1)

If a list of the outcomes is available or easy to construct and N is small, then the numerator and denominator of Equation (2.1) can be obtained without the benefit of any general counting principles.

There are, however, many experiments for which the effort involved in constructing such a list is prohibitive because N is quite large. By exploiting some general counting rules, it is possible to compute probabilities of the form (2.1) without a listing of outcomes. These rules are also useful in many problems involving outcomes that are not equally likely. Several of the rules developed here will be used in studying probability distributions in the next chapter.

The Product Rule for Ordered Pairs

Our first counting rule applies to any situation in which a set (event) consists of ordered pairs of objects and we wish to count the number of such pairs. By an ordered pair, we mean that, if O1 and O2 are objects, then the pair (O1, O2) is different from the pair (O2, O1). For example, if an individual selects one airline for a trip from Los Angeles to Chicago and (after transacting business in Chicago) a second one for continuing on to New York, one possibility is (American, United), another is (United, American), and still another is (United, United).

PROPOSITION    If the first element or object of an ordered pair can be selected in n1 ways, and for each of these n1 ways the second element of the pair can be selected in n2 ways, then the number of pairs is n1n2.

Example 2.17  A homeowner doing some remodeling requires the services of both a plumbing contractor and an electrical contractor. If there are 12 plumbing contractors and 9 electrical contractors available in the area, in how many ways can the contractors be chosen? If we denote the plumbers by P1, ...
, P12 and the electricians by Q1, ..., Q9, then we wish the number of pairs of the form (Pi, Qj). With n1 = 12 and n2 = 9, the product rule yields N = (12)(9) = 108 possible ways of choosing the two types of contractors. ■

In Example 2.17, the choice of the second element of the pair did not depend on which first element was chosen or occurred. As long as there is the same number of choices of the second element for each first element, the product rule is valid even when the set of possible second elements depends on the first element.

Example 2.18  A family has just moved to a new city and requires the services of both an obstetrician and a pediatrician. There are two easily accessible medical clinics, each having two obstetricians and three pediatricians. The family will obtain maximum health insurance benefits by joining a clinic and selecting both doctors from that clinic. In how many ways can this be done? Denote the obstetricians by O1, O2, O3, and O4 and the pediatricians by P1, ..., P6. Then we wish the number of pairs (Oi, Pj) for which Oi and Pj are associated with the same clinic. Because there are four obstetricians, n1 = 4, and for each there are three choices of pediatrician, so n2 = 3. Applying the product rule gives N = n1n2 = 12 possible choices. ■

Tree Diagrams

In many counting and probability problems, a configuration called a tree diagram can be used to represent pictorially all the possibilities. The tree diagram associated with Example 2.18 appears in Figure 2.7. Starting from a point on the left side of the diagram, for each possible first element of a pair a straight-line segment emanates rightward. Each of these lines is referred to as a first-generation branch. Now for any given first-generation branch we construct another line segment emanating from the tip of the branch for each possible choice of a second element of the pair. Each such line segment is a second-generation branch.
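The two product-rule examples above are small enough to check by brute-force enumeration. The sketch below (Python; the clinic rosters are hypothetical labels consistent with the example, not data from the text) counts the ordered pairs directly, including the case where the set of possible second elements depends on the first.

```python
from itertools import product

# Example 2.17: any of 12 plumbers may be paired with any of 9 electricians.
plumbers = [f"P{i}" for i in range(1, 13)]
electricians = [f"Q{j}" for j in range(1, 10)]
pairs = list(product(plumbers, electricians))
print(len(pairs))  # n1 * n2 = 12 * 9 = 108

# Example 2.18: both doctors must come from the same clinic, so the possible
# second elements depend on the first (hypothetical clinic assignments).
clinic_of_obstetrician = {"O1": 1, "O2": 1, "O3": 2, "O4": 2}
clinic_of_pediatrician = {"P1": 1, "P2": 1, "P3": 1, "P4": 2, "P5": 2, "P6": 2}
same_clinic_pairs = [
    (o, p)
    for o, c_o in clinic_of_obstetrician.items()
    for p, c_p in clinic_of_pediatrician.items()
    if c_o == c_p
]
print(len(same_clinic_pairs))  # n1 * n2 = 4 * 3 = 12
```

Enumerating and counting like this is exactly the "list of outcomes" approach that the product rule lets us avoid when the counts are large.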
Because there are four obstetricians, there are four first-generation branches, and three pediatricians for each obstetrician yields three second-generation branches emanating from each first-generation branch.

Figure 2.7 Tree diagram for Example 2.18

Generalizing, suppose there are n1 first-generation branches, and for each first-generation branch there are n2 second-generation branches. The total number of second-generation branches is then n1n2. Since the end of each second-generation branch corresponds to exactly one possible pair (choosing a first element and then a second puts us at the end of exactly one second-generation branch), there are n1n2 pairs, verifying the product rule.

The construction of a tree diagram does not depend on having the same number of second-generation branches emanating from each first-generation branch. If the second clinic had four pediatricians, then there would be only three branches emanating from two of the first-generation branches and four emanating from each of the other two first-generation branches. A tree diagram can thus be used to represent pictorially experiments when the product rule does not apply.

A More General Product Rule

If a six-sided die is tossed five times in succession rather than just twice, then each possible outcome is an ordered collection of five numbers such as (1, 3, 1, 2, 4) or (6, 5, 2, 2, 2). We will call an ordered collection of k objects a k-tuple (so a pair is a 2-tuple and a triple is a 3-tuple). Each outcome of the die-tossing experiment is then a 5-tuple.

PRODUCT RULE FOR k-TUPLES  Suppose a set consists of ordered collections of k elements (k-tuples) and that there are n1 possible choices for the first element; for each choice of the first element, there are n2 possible choices of the second element; ...; for each possible choice of the first k−1 elements, there are nk choices of the kth element.
Then there are n1 · n2 · ... · nk possible k-tuples.

This more general rule can also be illustrated by a tree diagram; simply construct a more elaborate diagram by adding third-generation branches emanating from the tip of each second-generation branch, then fourth-generation branches, and so on, until finally kth-generation branches are added.

(Example 2.17 continued)  Suppose the home remodeling job involves first purchasing several kitchen appliances. They will all be purchased from the same dealer, and there are five dealers in the area. With the dealers denoted by D1, ..., D5, there are N = n1n2n3 = (5)(12)(9) = 540 3-tuples of the form (Di, Pj, Qk), so there are 540 ways to choose first an appliance dealer, then a plumbing contractor, and finally an electrical contractor. ■

(Example 2.18 continued)  If each clinic has both three specialists in internal medicine and two general surgeons, there are n1n2n3n4 = (4)(3)(3)(2) = 72 ways to select one doctor of each type such that all doctors practice at the same clinic. ■

Permutations

So far the successive elements of a k-tuple were selected from entirely different sets (e.g., appliance dealers, then plumbers, and finally electricians). In several tosses of a die, the set from which successive elements are chosen is always {1, 2, 3, 4, 5, 6}, but the choices are made "with replacement" so that the same element can appear more than once. We now consider a fixed set consisting of n distinct elements and suppose that a k-tuple is formed by selecting successively from this set without replacement so that an element can appear in at most one of the k positions.

DEFINITION  Any ordered sequence of k objects taken from a set of n distinct objects is called a permutation of size k of the objects. The number of permutations of size k that can be constructed from the n objects is denoted by P_{k,n}.

The number of permutations of size k is obtained immediately from the general product rule.
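Before specializing to permutations, the two continued examples above can be checked against the k-tuple product rule numerically (a minimal sketch using only the standard library; the counts come from the text):

```python
from math import prod

# (Example 2.17 continued): appliance dealer, then plumber, then electrician.
print(prod([5, 12, 9]))    # n1*n2*n3 = 540 possible 3-tuples (Di, Pj, Qk)

# (Example 2.18 continued): one doctor of each of four types, all at one clinic.
print(prod([4, 3, 3, 2]))  # n1*n2*n3*n4 = 72 ways
```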
The first element can be chosen in n ways, for each of these n ways the second element can be chosen in n−1 ways, and so on; finally, for each way of choosing the first k−1 elements, the kth element can be chosen in n−(k−1) = n−k+1 ways, so

P_{k,n} = n(n−1)(n−2) · ... · (n−k+2)(n−k+1)

Ten teaching assistants are available for grading papers in a particular course. The first exam consists of four questions, and the professor wishes to select a different assistant to grade each question (only one assistant per question). In how many ways can assistants be chosen to grade the exam? Here n = the number of assistants = 10 and k = the number of questions = 4. The number of different grading assignments is then P_{4,10} = (10)(9)(8)(7) = 5040. ■

The use of factorial notation allows P_{k,n} to be expressed more compactly.

DEFINITION  For any positive integer m, m! is read "m factorial" and is defined by m! = m(m−1) · ... · (2)(1). Also, 0! = 1.

Using factorial notation, (10)(9)(8)(7) = (10)(9)(8)(7)(6!)/6! = 10!/6!. More generally,

P_{k,n} = n(n−1) · ... · (n−k+1) = [n(n−1) · ... · (n−k+1)(n−k)(n−k−1) · ... · (2)(1)] / [(n−k)(n−k−1) · ... · (2)(1)]

which becomes

P_{k,n} = n!/(n−k)!

For example, P_{3,9} = 9!/(9−3)! = 9!/6! = 9·8·7·6!/6! = 9·8·7. Note also that because 0! = 1, P_{n,n} = n!/(n−n)! = n!/0! = n!/1 = n!, as it should.

Combinations

Often the objective is to count the number of unordered subsets of size k that can be formed from a set consisting of n distinct objects. For example, in bridge it is only the 13 cards in a hand and not the order in which they are dealt that is important; in the formation of a committee, the order in which committee members are listed is frequently unimportant.

DEFINITION  Given a set of n distinct objects, any unordered subset of size k of the objects is called a combination. The number of combinations of size k that can be formed from n distinct objects will be denoted by \binom{n}{k}.
(This notation is more common in probability than C_{k,n}, which would be analogous to the notation for permutations.)

The number of combinations of size k from a particular set is smaller than the number of permutations because, when order is disregarded, some of the permutations correspond to the same combination. Consider, for example, the set {A, B, C, D, E} consisting of five elements. There are 5!/(5−3)! = 60 permutations of size 3. There are six permutations of size 3 consisting of the elements A, B, and C because these three can be ordered 3·2·1 = 3! = 6 ways: (A, B, C), (A, C, B), (B, A, C), (B, C, A), (C, A, B), and (C, B, A). These six permutations are equivalent to the single combination {A, B, C}. Similarly, for any other combination of size 3, there are 3! permutations, each obtained by ordering the three objects. Thus,

60 = P_{3,5} = \binom{5}{3} · 3!   so   \binom{5}{3} = 60/3! = 10

These ten combinations are

{A, B, C}  {A, B, D}  {A, B, E}  {A, C, D}  {A, C, E}  {A, D, E}  {B, C, D}  {B, C, E}  {B, D, E}  {C, D, E}

When there are n distinct objects, any permutation of size k is obtained by ordering the k unordered objects of a combination in one of k! ways, so the number of permutations is the product of k! and the number of combinations. This gives

\binom{n}{k} = P_{k,n}/k! = n!/(k!(n−k)!)

Notice that \binom{n}{n} = 1 and \binom{n}{0} = 1 because there is only one way to choose a set of (all) n elements or of no elements, and \binom{n}{1} = n since there are n subsets of size 1.

A bridge hand consists of any 13 cards selected from a 52-card deck without regard to order. There are \binom{52}{13} = 52!/(13! · 39!) different bridge hands, which works out to approximately 635 billion. Since there are 13 cards in each suit, the number of hands consisting entirely of clubs and/or spades (no red cards) is \binom{26}{13} = 26!/(13! · 13!) = 10,400,600.
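The permutation and combination counts used here can be reproduced with Python's standard library (`math.perm` and `math.comb`, available since Python 3.8); this is a numerical check, not part of the text's derivation.

```python
from math import comb, factorial, perm

print(perm(10, 4))   # P_{4,10} = 10*9*8*7 = 5040 grading assignments
print(perm(5, 3))    # 60 ordered triples from {A, B, C, D, E}
print(comb(5, 3))    # 60 / 3! = 10 unordered subsets of size 3

# The identity binom(n, k) = P_{k,n} / k! for one case:
assert perm(5, 3) == comb(5, 3) * factorial(3)

print(comb(52, 13))  # number of bridge hands: 635,013,559,600
print(comb(26, 13))  # all-black hands (clubs and/or spades): 10,400,600
```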
One of these (7) hands consists entirely of spades, and one consists entirely of clubs, so there are [(7$) — 2] hands that consist entirely of clubs and --- Trang 84 --- 2.3 Counting Techniques P| spades with both suits represented in the hand. Suppose a bridge hand is dealt from a well-shuffled deck (i.e., 13 cards are randomly selected from among the 52 possi- bilities) and let A= {the hand consists entirely of spades and clubs with both suits represented} B= (the hand consists of exactly two suits} The N = 2) possible outcomes are equally likely, so N(A) (3) 2 P(A) = ——= = .0000164 (4) N 52 13 Since there are (3) = 6 combinations consisting of two suits, of which spades and clubs is one such combination, 26 {(s) 7 N(B 13 P(B) = N(B) _ N37) goo0983 N 52 13 That is, a hand consisting entirely of cards from exactly two of the four suits will occur roughly once in every 10,000 hands. If you play bridge only once a month, it is likely that you will never be dealt such a hand. a A university warehouse has received a shipment of 25 printers, of which 10 are laser printers and 15 are inkjet models. If 6 of these 25 are selected at random to be checked by a particular technician, what is the probability that exactly 3 of those selected are laser printers (so that the other 3 are inkjets)? Let D3 = {exactly 3 of the 6 selected are inkjet printers}. Assuming that any particular set of 6 printers is as likely to be chosen as is any other set of 6, we have equally likely outcomes, so P(D3) =N(D3)/N, where N is the number of ways of choosing 6 printers from the 25 and N(D3) is the number of ways of choosing 3 laser printers and 3 inkjet models. Thus N = (7). To obtain N(D3), think of first choosing 3 of the 15 inkjet models and then 3 of the laser printers. 
There are \binom{15}{3} ways of choosing the 3 inkjet models, and there are \binom{10}{3} ways of choosing the 3 laser printers; N(D3) is now the product of these two numbers (visualize a tree diagram—we are really using a product rule argument here), so

P(D3) = N(D3)/N = \binom{15}{3}\binom{10}{3} / \binom{25}{6} = [15!/(3! 12!)] · [10!/(3! 7!)] / [25!/(6! 19!)] = .3083

Let D4 = {exactly 4 of the 6 printers selected are inkjet models} and define D5 and D6 in an analogous manner. Then the probability that at least 3 inkjet printers are selected is

P(D3 ∪ D4 ∪ D5 ∪ D6) = P(D3) + P(D4) + P(D5) + P(D6)
= [\binom{15}{3}\binom{10}{3} + \binom{15}{4}\binom{10}{2} + \binom{15}{5}\binom{10}{1} + \binom{15}{6}\binom{10}{0}] / \binom{25}{6} = .8530 ■

Exercises: Section 2.3 (31–44)

31. The College of Science Council has one student representative from each of the five science departments (biology, chemistry, statistics, mathematics, physics). In how many ways can
a. Both a council president and a vice president be selected?
b. A president, a vice president, and a secretary be selected?
c. Two members be selected for the Dean's Council?

32. A friend is giving a dinner party. Her current wine supply includes 8 bottles of zinfandel, 10 of merlot, and 12 of cabernet (she drinks only red wine), all from different wineries.
a. If she wants to serve 3 bottles of zinfandel and serving order is important, how many ways are there to do this?
b. If 6 bottles of wine are to be randomly selected from the 30 for serving, how many ways are there to do this?
c. If 6 bottles are randomly selected, how many ways are there to obtain two bottles of each variety?
d. If 6 bottles are randomly selected, what is the probability that this results in two bottles of each variety being chosen?
e. If 6 bottles are randomly selected, what is the probability that all of them are the same variety?

33. a. Beethoven wrote 9 symphonies and Mozart wrote 27 piano concertos. If a university radio station announcer wishes to play first a Beethoven symphony and then a Mozart concerto, in how many ways can this be done?
b. The station manager decides that on each successive night (7 days per week), a Beethoven symphony will be played, followed by a Mozart piano concerto, followed by a Schubert string quartet (of which there are 15). For roughly how many years could this policy be continued before exactly the same program would have to be repeated?

34. A chain of stereo stores is offering a special price on a complete set of components (receiver, compact disc player, speakers). A purchaser is offered a choice of manufacturer for each component:

Receiver: Kenwood, Onkyo, Pioneer, Sony, Yamaha
Compact disc player: Onkyo, Pioneer, Sony, Panasonic
Speakers: Boston, Infinity, Polk

A switchboard display in the store allows a customer to hook together any selection of components (consisting of one of each type). Use the product rules to answer the following questions:
a. In how many ways can one component of each type be selected?
b. In how many ways can components be selected if both the receiver and the compact disc player are to be Sony?
c. In how many ways can components be selected if none is to be Sony?
d. In how many ways can a selection be made if at least one Sony component is to be included?
e. If someone flips switches on the selection in a completely random fashion, what is the probability that the system selected contains at least one Sony component? Exactly one Sony component?

35. A particular iPod playlist contains 100 songs, of which 10 are by the Beatles. Suppose the shuffle feature is used to play the songs in random order (the randomness of the shuffling process is investigated in "Does Your iPod Really Play Favorites?" (The Amer. Statistician, 2009: 263–268)). What is the probability that the first Beatles song heard is the fifth song played?

36. A production facility employs 20 workers on the day shift, 15 workers on the swing shift, and 10 workers on the graveyard shift. A quality control consultant is to select 6 of these workers for in-depth interviews. Suppose the selection is made in such a way that any particular group of 6 workers has the same chance of being selected as does any other group (drawing 6 slips without replacement from among 45).
a. How many selections result in all 6 workers coming from the day shift? What is the probability that all 6 selected workers will be from the day shift?
b. What is the probability that all 6 selected workers will be from the same shift?
c. What is the probability that at least two different shifts will be represented among the selected workers?
d. What is the probability that at least one of the shifts will be unrepresented in the sample of workers?

37. An academic department with five faculty members narrowed its choice for department head to either candidate A or candidate B. Each member then voted on a slip of paper for one of the candidates. Suppose there are actually three votes for A and two for B. If the slips are selected for tallying in random order, what is the probability that A remains ahead of B throughout the vote count (for example, this event occurs if the selected ordering is AABAB, but not for ABBAA)?

38. An experimenter is studying the effects of temperature, pressure, and type of catalyst on yield from a chemical reaction. Three different temperatures, four different pressures, and five different catalysts are under consideration.
a. If any particular experimental run involves the use of a single temperature, pressure, and catalyst, how many experimental runs are possible?
b. How many experimental runs involve use of the lowest temperature and two lowest pressures?

39. Refer to Exercise 38 and suppose that five different experimental runs are to be made on the first day of experimentation. If the five are randomly selected from among all the possibilities, so that any group of five has the same probability of selection, what is the probability that a different catalyst is used on each run?

40. A box in a certain supply room contains four 40-W lightbulbs, five 60-W bulbs, and six 75-W bulbs. Suppose that three bulbs are randomly selected.
a. What is the probability that exactly two of the selected bulbs are rated 75 W?
b. What is the probability that all three of the selected bulbs have the same rating?
c. What is the probability that one bulb of each type is selected?
d. Suppose now that bulbs are to be selected one by one until a 75-W bulb is found. What is the probability that it is necessary to examine at least six bulbs?

41. Fifteen telephones have just been received at an authorized service center. Five of these telephones are cellular, five are cordless, and the other five are corded phones. Suppose that these components are randomly allocated the numbers 1, 2, ..., 15 to establish the order in which they will be serviced.
a. What is the probability that all the cordless phones are among the first ten to be serviced?
b. What is the probability that after servicing ten of these phones, phones of only two of the three types remain to be serviced?
c. What is the probability that two phones of each type are among the first six serviced?

42. Three molecules of type A, three of type B, three of type C, and three of type D are to be linked together to form a chain molecule. One such chain molecule is ABCDABCDABCD, and another is BCDDAAABDBCC.
a. How many such chain molecules are there? [Hint: If the three A's were distinguishable from one another—A1, A2, A3—and the B's, C's, and D's were also, how many molecules would there be? How is this number reduced when the subscripts are removed from the A's?]
b. Suppose a chain molecule of the type described is randomly selected. What is the probability that all three molecules of each type end up next to each other (such as in BBBAAADDDCCC)?

43. Three married couples have purchased theater tickets and are seated in a row consisting of just six seats. If they take their seats in a completely random fashion (random order), what is the probability that Jim and Paula (husband and wife) sit in the two seats on the far left? What is the probability that Jim and Paula end up sitting next to one another? What is the probability that at least one of the wives ends up sitting next to her husband?

44. Show that \binom{n}{k} = \binom{n}{n−k}. Give an interpretation involving subsets.

Conditional Probability

The probabilities assigned to various events depend on what is known about the experimental situation when the assignment is made. Subsequent to the initial assignment, partial information about or relevant to the outcome of the experiment may become available. Such information may cause us to revise some of our probability assignments. For a particular event A, we have used P(A) to represent the probability assigned to A; we now think of P(A) as the original or unconditional probability of the event A.

In this section, we examine how the information "an event B has occurred" affects the probability assigned to A. For example, A might refer to an individual having a particular disease in the presence of certain symptoms. If a blood test is performed on the individual and the result is negative (B = negative blood test), then the probability of having the disease will change (it should decrease, but not usually to zero, since blood tests are not infallible). We will use the notation P(A|B) to represent the conditional probability of A given that the event B has occurred.
DEFINITION  For any two events A and B with P(B) > 0, the conditional probability of A given that B has occurred is defined by

P(A|B) = P(A ∩ B)/P(B)    (2.3)

Suppose that of all individuals buying a certain digital camera, 60% include an optional memory card in their purchase, 40% include an extra battery, and 30% include both a card and battery. Consider randomly selecting a buyer and let A = {memory card purchased} and B = {battery purchased}. Then P(A) = .60, P(B) = .40, and P(both purchased) = P(A ∩ B) = .30. Given that the selected individual purchased an extra battery, the probability that an optional card was also purchased is

P(A|B) = P(A ∩ B)/P(B) = .30/.40 = .75

That is, of all those purchasing an extra battery, 75% purchased an optional memory card. Similarly,

P(battery | memory card) = P(B|A) = P(A ∩ B)/P(A) = .30/.60 = .50

Notice that P(A|B) ≠ P(A) and P(B|A) ≠ P(B). ■

A news magazine includes three columns entitled "Art" (A), "Books" (B), and "Cinema" (C). Reading habits of a randomly selected reader with respect to these columns are

Read regularly:  A    B    C    A ∩ B   A ∩ C   B ∩ C   A ∩ B ∩ C
Probability:    .14  .23  .37   .08     .09     .13     .05

(See Figure 2.9.)

Figure 2.9 Venn diagram for Example 2.26

We thus have

P(A|B) = P(A ∩ B)/P(B) = .08/.23 = .348

P(A|B ∪ C) = P(A ∩ (B ∪ C))/P(B ∪ C) = (.04 + .05 + .03)/.47 = .12/.47 = .255

P(A | reads at least one) = P(A|A ∪ B ∪ C) = P(A ∩ (A ∪ B ∪ C))/P(A ∪ B ∪ C) = P(A)/P(A ∪ B ∪ C) = .14/.49 = .286

and

P(A ∪ B|C) = P((A ∪ B) ∩ C)/P(C) = (.04 + .05 + .08)/.37 = .459 ■

The Multiplication Rule for P(A ∩ B)

The definition of conditional probability yields the following result, obtained by multiplying both sides of Equation (2.3) by P(B).

THE MULTIPLICATION RULE  P(A ∩ B) = P(A|B) · P(B)

This rule is important because it is often the case that P(A ∩ B) is desired, whereas both P(B) and P(A|B) can be specified from the problem description.
Consideration of P(B|A) gives

P(A ∩ B) = P(B|A) · P(A)

Four individuals have responded to a request by a blood bank for blood donations. None of them has donated before, so their blood types are unknown. Suppose only type O+ is desired and only one of the four actually has this type. If the potential donors are selected in random order for typing, what is the probability that at least three individuals must be typed to obtain the desired type?

Making the identification B = {first type not O+} and A = {second type not O+}, P(B) = 3/4. Given that the first type is not O+, two of the three individuals left are not O+, so P(A|B) = 2/3. The multiplication rule now gives

P(at least three individuals are typed) = P(A ∩ B) = P(A|B) · P(B) = (2/3) · (3/4) = 6/12 = .5 ■

The multiplication rule is most useful when the experiment consists of several stages in succession. The conditioning event B then describes the outcome of the first stage and A the outcome of the second, so that P(A|B)—conditioning on what occurs first—will often be known. The rule is easily extended to experiments involving more than two stages. For example,

P(A1 ∩ A2 ∩ A3) = P(A3|A1 ∩ A2) · P(A1 ∩ A2) = P(A3|A1 ∩ A2) · P(A2|A1) · P(A1)    (2.4)

where A1 occurs first, followed by A2, and finally A3. For the blood typing experiment of Example 2.27,

P(third type is O+) = P(third is | first isn't ∩ second isn't) · P(second isn't | first isn't) · P(first isn't) = (1/2) · (2/3) · (3/4) = 1/4 = .25

When the experiment of interest consists of a sequence of several stages, it is convenient to represent these with a tree diagram. Once we have an appropriate tree diagram, probabilities and conditional probabilities can be entered on the various branches; this will make repeated use of the multiplication rule quite straightforward.

A chain of video stores sells three different brands of DVD players. Of its DVD player sales, 50% are brand 1 (the least expensive), 30% are brand 2, and 20% are brand 3.
Each manufacturer offers a 1-year warranty on parts and labor. It is known that 25% of brand 1's DVD players require warranty repair work, whereas the corresponding percentages for brands 2 and 3 are 20% and 10%, respectively.
1. What is the probability that a randomly selected purchaser has bought a brand 1 DVD player that will need repair while under warranty?
2. What is the probability that a randomly selected purchaser has a DVD player that will need repair while under warranty?
3. If a customer returns to the store with a DVD player that needs warranty repair work, what is the probability that it is a brand 1 DVD player? A brand 2 DVD player? A brand 3 DVD player?

The first stage of the problem involves a customer selecting one of the three brands of DVD player. Let Ai = {brand i is purchased}, for i = 1, 2, and 3. Then P(A1) = .50, P(A2) = .30, and P(A3) = .20. Once a brand of DVD player is selected, the second stage involves observing whether the selected DVD player needs warranty repair. With B = {needs repair} and B′ = {doesn't need repair}, the given information implies that

P(B|A1) = .25,  P(B|A2) = .20,  and  P(B|A3) = .10

The tree diagram representing this experimental situation is shown in Figure 2.10. The initial branches correspond to different brands of DVD players; there are two second-generation branches emanating from the tip of each initial branch, one for "needs repair" and the other for "doesn't need repair."

Figure 2.10 Tree diagram for Example 2.29: the products P(B ∩ A1) = .125, P(B ∩ A2) = .060, and P(B ∩ A3) = .020 appear at the tips of the "repair" branches, and P(B) = .205 is their sum.

The probability P(Ai) appears on the ith initial branch, whereas the conditional probabilities P(B|Ai) and P(B′|Ai) appear on the second-generation branches.
To the right of each second-generation branch corresponding to the occurrence of B, we display the product of probabilities on the branches leading out to that point. This is simply the multiplication rule in action. The answer to the question posed in 1 is thus

P(A1 ∩ B) = P(B|A1) · P(A1) = .125

The answer to question 2 is

P(B) = P[(brand 1 and repair) or (brand 2 and repair) or (brand 3 and repair)]
= P(A1 ∩ B) + P(A2 ∩ B) + P(A3 ∩ B) = .125 + .060 + .020 = .205

Finally,

P(A1|B) = P(A1 ∩ B)/P(B) = .125/.205 = .61
P(A2|B) = P(A2 ∩ B)/P(B) = .060/.205 = .29

and

P(A3|B) = 1 − P(A1|B) − P(A2|B) = .10

Notice that the initial or prior probability of brand 1 is .50, whereas once it is known that the selected DVD player needed repair, the posterior probability of brand 1 increases to .61. This is because brand 1 DVD players are more likely to need warranty repair than are the other brands. The posterior probability of brand 3 is P(A3|B) = .10, which is much less than the prior probability P(A3) = .20. ■

Bayes' Theorem

The computation of a posterior probability P(Aj|B) from given prior probabilities P(Aj) and conditional probabilities P(B|Aj) occupies a central position in elementary probability. The general rule for such computations, which is really just a simple application of the multiplication rule, goes back to the Reverend Thomas Bayes, who lived in the eighteenth century. To state it we first need another result. Recall that events A1, ..., Ak are mutually exclusive if no two have any common outcomes. The events are exhaustive if one Ai must occur, so that A1 ∪ ··· ∪ Ak = S.

THE LAW OF TOTAL PROBABILITY  Let A1, ..., Ak be mutually exclusive and exhaustive events. Then for any other event B,

P(B) = P(B|A1) · P(A1) + ··· + P(B|Ak) · P(Ak) = Σ_{i=1}^{k} P(B|Ai)P(Ai)    (2.5)

Proof  Because the Ai's are mutually exclusive and exhaustive, if B occurs it must be in conjunction with exactly one of the Ai's.
That is, B = (A1 and B) or ... or (Ak and B) = (A1 ∩ B) ∪ ··· ∪ (Ak ∩ B), where the events (Ai ∩ B) are mutually exclusive. This "partitioning of B" is illustrated in Figure 2.11. Thus

P(B) = Σ_{i=1}^{k} P(Ai ∩ B) = Σ_{i=1}^{k} P(B|Ai)P(Ai)

as desired. ■

Figure 2.11 Partition of B by mutually exclusive and exhaustive Ai's

An example of the use of Equation (2.5) appeared in answering question 2 of Example 2.29, where A1 = {brand 1}, A2 = {brand 2}, A3 = {brand 3}, and B = {repair}.

BAYES' THEOREM  Let A1, ..., Ak be a collection of mutually exclusive and exhaustive events with P(Ai) > 0 for i = 1, ..., k. Then for any other event B for which P(B) > 0,

P(Aj|B) = P(Aj ∩ B)/P(B) = P(B|Aj)P(Aj) / Σ_{i=1}^{k} P(B|Ai)P(Ai)    j = 1, ..., k    (2.6)

The transition from the second to the third expression in (2.6) rests on using the multiplication rule in the numerator and the law of total probability in the denominator. The proliferation of events and subscripts in (2.6) can be a bit intimidating to probability newcomers. As long as there are relatively few events in the partition, a tree diagram (as in Example 2.29) can be used as a basis for calculating posterior probabilities without ever referring explicitly to Bayes' theorem.

INCIDENCE OF A RARE DISEASE  Only 1 in 1000 adults is afflicted with a rare disease for which a diagnostic test has been developed. The test is such that when an individual actually has the disease, a positive result will occur 99% of the time, whereas an individual without the disease will show a positive test result only 2% of the time. If a randomly selected individual is tested and the result is positive, what is the probability that the individual has the disease? [Note: The sensitivity of this test is 99%, whereas the specificity (how specific positive results are to the disease) is 98%.
As an indication of the accuracy of medical tests, an article in the October 29, 2010 New York Times reported that the sensitivity and specificity for a new DNA test for colon cancer were 86% and 93%, respectively. The PSA test for prostate cancer has sensitivity 85% and specificity about 30%, while the mammogram for breast cancer has sensitivity 75% and specificity 92%. All tests are less than perfect.]

To use Bayes' theorem, let A1 = {individual has the disease}, A2 = {individual does not have the disease}, and B = {positive test result}. Then P(A1) = .001, P(A2) = .999, P(B|A1) = .99, and P(B|A2) = .02. The tree diagram for this problem is in Figure 2.12.

Figure 2.12 Tree diagram for the rare-disease problem: the products P(A1 ∩ B) = .00099 and P(A2 ∩ B) = .01998 appear next to the positive-test branches.

Next to each branch corresponding to a positive test result, the multiplication rule yields the recorded probabilities. Therefore,

P(B) = .00099 + .01998 = .02097

from which we have

P(A1|B) = P(A1 ∩ B)/P(B) = .00099/.02097 = .047

This result seems counterintuitive; because the diagnostic test appears so accurate, we expect someone with a positive test result to be highly likely to have the disease, whereas the computed conditional probability is only .047. However, because the disease is rare and the test only moderately reliable, most positive test results arise from errors rather than from diseased individuals. The probability of having the disease has increased by a multiplicative factor of 47 (from prior .001 to posterior .047); but to get a further increase in the posterior probability, a diagnostic test with much smaller error rates is needed. If the disease were not so rare (e.g., 25% incidence in the population), then the error rates for the present test would provide good diagnoses.
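The rare-disease numbers, and the DVD-player posteriors of Example 2.29, both follow the same two-step pattern (law of total probability, then Bayes' theorem), which is easy to capture in a few lines of code (a sketch; the function name and structure are ours, not from the text):

```python
def posteriors(priors, likelihoods):
    """Bayes' theorem: posteriors P(A_i | B) from priors P(A_i) and P(B | A_i)."""
    joints = [p * l for p, l in zip(priors, likelihoods)]  # multiplication rule: P(A_i and B)
    total = sum(joints)                                    # law of total probability: P(B)
    return total, [j / total for j in joints]

# Rare disease: prior .001, sensitivity P(B|A1) = .99, false-positive rate P(B|A2) = .02.
p_pos, post_disease = posteriors([0.001, 0.999], [0.99, 0.02])
print(round(p_pos, 5), round(post_disease[0], 3))   # 0.02097 0.047

# DVD players (Example 2.29): brand shares .50/.30/.20, repair rates .25/.20/.10.
p_repair, post_brand = posteriors([0.50, 0.30, 0.20], [0.25, 0.20, 0.10])
print(round(p_repair, 3), [round(p, 2) for p in post_brand])  # 0.205 [0.61, 0.29, 0.1]
```

The function is just Equation (2.6) applied componentwise; the denominator it computes is Equation (2.5).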
This example shows why it makes sense to be tested for a rare disease only if you are in a high-risk group. For example, most of us are at low risk for HIV infection, so testing would not be indicated, but those who are in a high-risk group should be tested for HIV. For some diseases the degree of risk is strongly influenced by age. Young women are at low risk for breast cancer and should not be tested, but older women do have increased risk and need to be tested. There is some argument about where to draw the line. If we can find the incidence rate for our group and the sensitivity and specificity for the test, then we can do our own calculation to see if a positive test result would be informative.

An important contemporary application of Bayes' theorem is in the identification of spam e-mail messages. A nice expository article on this appears in Statistics: A Guide to the Unknown (see the Chapter 1 bibliography).

Exercises | Section 2.4 (45-65)

45. The population of a particular country consists of three ethnic groups. Each individual belongs to one of the four major blood groups. The accompanying joint probability table gives the proportions of individuals in the various ethnic group–blood group combinations.

                        Blood Group
    Ethnic Group     O      A      B      AB
         1         .082   .106   .008   .004
         2         .135   .141   .018   .006
         3         .215   .200   .065   .020

Suppose that an individual is randomly selected from the population, and define events by A = {type A selected}, B = {type B selected}, and C = {ethnic group 3 selected}.
a. Calculate P(A), P(C), and P(A ∩ C).
b. Calculate both P(A | C) and P(C | A) and explain in context what each of these probabilities represents.
c. If the selected individual does not have type B blood, what is the probability that he or she is from ethnic group 1?

46. Suppose an individual is randomly selected from the population of all adult males living in the United States. Let A be the event that the selected individual is over 6 ft in height, and let B be the event that the selected individual is a professional basketball player. Which do you think is larger, P(A | B) or P(B | A)? Why?

47. Return to the credit card scenario of Exercise 14 (Section 2.2), where A = {Visa}, B = {MasterCard}, P(A) = .5, P(B) = .4, and P(A ∩ B) = .25. Calculate and interpret each of the following probabilities (a Venn diagram might help).
a. P(B | A)
b. P(B′ | A)
c. P(A | B)
d. P(A′ | B)
e. Given that the selected individual has at least one of the two types of cards, what is the probability that he or she has a Visa card?

48. Reconsider the system defect situation described in Exercise 28 (Section 2.2).
a. Given that the system has a type 1 defect, what is the probability that it has a type 2 defect?
b. Given that the system has a type 1 defect, what is the probability that it has all three types of defects?
c. Given that the system has at least one type of defect, what is the probability that it has exactly one type of defect?
d. Given that the system has both of the first two types of defects, what is the probability that it does not have the third type of defect?

49. If two bulbs are randomly selected from the box of lightbulbs described in Exercise 40 (Section 2.3) and at least one of them is found to be rated 75 W, what is the probability that both of them are 75-W bulbs? Given that at least one of the two selected is not rated 75 W, what is the probability that both selected bulbs have the same rating?

50. A department store sells sport shirts in three sizes (small, medium, and large), three patterns (plaid, print, and stripe), and two sleeve lengths (long and short). The accompanying tables give the proportions of shirts sold in the various category combinations.

    Short-sleeved
               Pattern
    Size     Pl     Pr     St
     S      .04    .02    .05
     M      .08    .07    .12
     L      .03    .07    .08

    Long-sleeved
               Pattern
    Size     Pl     Pr     St
     S      .03    .02    .03
     M      .10    .05    .07
     L      .04    .02    .08

a. What is the probability that the next shirt sold is a medium, long-sleeved, print shirt?
b. What is the probability that the next shirt sold is a medium print shirt?
c. What is the probability that the next shirt sold is a short-sleeved shirt? A long-sleeved shirt?
d. What is the probability that the size of the next shirt sold is medium? That the pattern of the next shirt sold is a print?
e. Given that the shirt just sold was a short-sleeved plaid, what is the probability that its size was medium?
f. Given that the shirt just sold was a medium plaid, what is the probability that it was short-sleeved? Long-sleeved?

51. One box contains six red balls and four green balls, and a second box contains seven red balls and three green balls. A ball is randomly chosen from the first box and placed in the second box. Then a ball is randomly selected from the second box and placed in the first box.
a. What is the probability that a red ball is selected from the first box and a red ball is selected from the second box?
b. At the conclusion of the selection process, what is the probability that the numbers of red and green balls in the first box are identical to the numbers at the beginning?

52. A system consists of two identical pumps, #1 and #2. If one pump fails, the system will still operate. However, because of the added strain, the extra remaining pump is now more likely to fail than was originally the case. That is, r = P(#2 fails | #1 fails) > P(#2 fails) = q. If at least one pump fails by the end of the pump design life in 7% of all systems and both pumps fail during that period in only 1%, what is the probability that pump #1 will fail during the pump design life?

53. A certain shop repairs both audio and video components. Let A denote the event that the next component brought in for repair is an audio component, and let B be the event that the next component is a compact disc player (so the event B is contained in A). Suppose that P(A) = .6 and P(B) = .05. What is P(B | A)?

54. In Exercise 15, Ai = {awarded project i}, for i = 1, 2, 3. Use the probabilities given there to compute the following probabilities:
a. P(A2 | A1)
b. P(A2 ∩ A3 | A1)
c. P(A2 ∪ A3 | A1)
d. P(A1 ∩ A2 ∩ A3 | A1 ∪ A2 ∪ A3)
Express in words the probability you have calculated.

55. For any events A and B with P(B) > 0, show that P(A | B) + P(A′ | B) = 1.

56. If P(B | A) > P(B), show that P(B′ | A) < P(B′). [Hint: Add P(B′ | A) to both sides of the given inequality and then use the result of Exercise 55.]

57. Show that for any three events A, B, and C with P(C) > 0, P(A ∪ B | C) = P(A | C) + P(B | C) − P(A ∩ B | C).

58. At a gas station, 40% of the customers use regular gas (A1), 35% use mid-grade gas (A2), and 25% use premium gas (A3). Of those customers using regular gas, only 30% fill their tanks (event B). Of those customers using mid-grade gas, 60% fill their tanks, whereas of those using premium, 50% fill their tanks.
a. What is the probability that the next customer will request mid-grade gas and fill the tank (A2 ∩ B)?
b. What is the probability that the next customer fills the tank?
c. If the next customer fills the tank, what is the probability that regular gas is requested? Mid-grade gas? Premium gas?

59. Seventy percent of the light aircraft that disappear while in flight in a certain country are subsequently discovered. Of the aircraft that are discovered, 60% have an emergency locator, whereas 90% of the aircraft not discovered do not have such a locator. Suppose a light aircraft has disappeared.
a. If it has an emergency locator, what is the probability that it will not be discovered?
b. If it does not have an emergency locator, what is the probability that it will be discovered?

60. Components of a certain type are shipped to a supplier in batches of ten. Suppose that 50% of all such batches contain no defective components, 30% contain one defective component, and 20% contain two defective components. Two components from a batch are randomly selected and tested. What are the probabilities associated with 0, 1, and 2 defective components being in the batch under each of the following conditions?
a. Neither tested component is defective.
b. One of the two tested components is defective.
[Hint: Draw a tree diagram with three first-generation branches for the three different types of batches.]

61. Show that P(A ∩ B | C) = P(A | B ∩ C) · P(B | C).

62. For customers purchasing a full set of tires at a particular tire store, consider the events
    A = {tires purchased were made in the United States}
    B = {purchaser has tires balanced immediately}
    C = {purchaser requests front-end alignment}
along with A′, B′, and C′. Assume the following unconditional and conditional probabilities:

    P(A) = .75    P(B | A) = .9    P(B | A′) = .8
    P(C | A ∩ B) = .8    P(C | A ∩ B′) = .6
    P(C | A′ ∩ B) = .7    P(C | A′ ∩ B′) = .3

a. Construct a tree diagram consisting of first-, second-, and third-generation branches and place an event label and appropriate probability next to each branch.
b. Compute P(A ∩ B ∩ C).
c. Compute P(B ∩ C).
d. Compute P(C).
e. Compute P(A | B ∩ C), the probability of a purchase of U.S. tires given that both balancing and an alignment were requested.

63. A professional organization (for statisticians, of course) sells term life insurance and major medical insurance. Of those who have just life insurance, 70% will renew next year, and 80% of those with only a major medical policy will renew next year. However, 90% of policyholders who have both types of policy will renew at least one of them next year. Of the policyholders, 75% have term life insurance, 45% have major medical, and 20% have both.
a. Calculate the percentage of policyholders that will renew at least one policy next year.
b. If a randomly selected policyholder does in fact renew next year, what is the probability that he or she has both life and major medical insurance?

64. At a large university, in the never-ending quest for a satisfactory textbook, the Statistics Department has tried a different text during each of the last three quarters. During the fall quarter, 500 students used the text by Professor Mean; during the winter quarter, 300 students used the text by Professor Median; and during the spring quarter, 200 students used the text by Professor Mode. A survey at the end of each quarter showed that 200 students were satisfied with Mean's book, 150 were satisfied with Median's book, and 160 were satisfied with Mode's book. If a student who took statistics during one of these quarters is selected at random and admits to having been satisfied with the text, is the student most likely to have used the book by Mean, Median, or Mode? Who is the least likely author? [Hint: Draw a tree diagram or use Bayes' theorem.]

65. A friend who lives in Los Angeles makes frequent consulting trips to Washington, D.C.; 50% of the time she travels on airline #1, 30% of the time on airline #2, and the remaining 20% of the time on airline #3. For airline #1, flights are late into D.C. 30% of the time and late into L.A. 10% of the time. For airline #2, these percentages are 25% and 20%, whereas for airline #3 the percentages are 40% and 25%. If we learn that on a particular trip she arrived late at exactly one of the two destinations, what are the posterior probabilities of having flown on airlines #1, #2, and #3? Assume that the chance of a late arrival in L.A. is unaffected by what happens on the flight to D.C. [Hint: From the tip of each first-generation branch on a tree diagram, draw three second-generation branches labeled, respectively, 0 late, 1 late, and 2 late.]

Independence

The definition of conditional probability enables us to revise the probability P(A) originally assigned to A when we are subsequently informed that another event B has occurred; the new probability of A is P(A | B).
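As a quick numerical illustration of this revision, here is a short Python sketch (ours, not the text's) using the joint probability table of Exercise 45: learning that the selected individual is from ethnic group 3 revises the probability of type A blood downward, from P(A) to P(A | C).

```python
# Joint probability table of Exercise 45.
# Keys: (ethnic group 1-3, blood group); values: proportions.
table = {
    (1, 'O'): .082, (1, 'A'): .106, (1, 'B'): .008, (1, 'AB'): .004,
    (2, 'O'): .135, (2, 'A'): .141, (2, 'B'): .018, (2, 'AB'): .006,
    (3, 'O'): .215, (3, 'A'): .200, (3, 'B'): .065, (3, 'AB'): .020,
}

p_A = sum(p for (g, b), p in table.items() if b == 'A')   # marginal P(type A)
p_C = sum(p for (g, b), p in table.items() if g == 3)     # marginal P(group 3)
p_A_and_C = table[(3, 'A')]                               # P(A ∩ C)
p_A_given_C = p_A_and_C / p_C                             # revised probability

print(round(p_A, 3), round(p_C, 3), round(p_A_given_C, 3))   # 0.447 0.5 0.4
```

Here P(A | C) = .4 differs from P(A) = .447, so the information "C has occurred" changed the chance of A; in the terminology introduced next, A and C are dependent events.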
In our examples, it was frequently the case that P(A | B) was unequal to the unconditional probability P(A), indicating that the information "B has occurred" resulted in a change in the chance of A occurring. There are other situations, though, in which the chance that A will occur or has occurred is not affected by knowledge that B has occurred, so that P(A | B) = P(A). It is then natural to think of A and B as independent events, meaning that the occurrence or nonoccurrence of one event has no bearing on the chance that the other will occur.

DEFINITION
Two events A and B are independent if P(A | B) = P(A) and are dependent otherwise.

The definition of independence might seem "unsymmetrical" because we do not demand that P(B | A) = P(B) also. However, using the definition of conditional probability and the multiplication rule,

    P(B | A) = P(A ∩ B) / P(A) = P(A | B) P(B) / P(A)    (2.7)

The right-hand side of Equation (2.7) is P(B) if and only if P(A | B) = P(A) (independence), so the equality in the definition implies the other equality (and vice versa). It is also straightforward to show that if A and B are independent, then so are the following pairs of events: (1) A′ and B, (2) A and B′, and (3) A′ and B′.

Consider an ordinary deck of 52 cards comprising the four "suits" spades, hearts, diamonds, and clubs, with each suit consisting of the 13 denominations ace, king, queen, jack, ten, ..., and two. Suppose someone randomly selects a card from the deck and reveals to you that it is a face card (that is, a king, queen, or jack). What now is the probability that the card is a spade? If we let A = {spade} and B = {face card}, then P(A) = 13/52, P(B) = 12/52 (there are three face cards in each of the four suits), and P(A ∩ B) = P(spade and face card) = 3/52. Thus

    P(A | B) = P(A ∩ B) / P(B) = (3/52) / (12/52) = 3/12 = 1/4 = 13/52 = P(A)

Therefore, the likelihood of getting a spade is not affected by knowledge that a face card had been selected. Intuitively this is because the fraction of spades among face cards (3 out of 12) is the same as the fraction of spades in the entire deck (13 out of 52). It is also easily verified that P(B | A) = P(B), so knowledge that a spade has been selected does not affect the likelihood of the card being a jack, queen, or king.

Let A and B be any two mutually exclusive events with P(A) > 0. For example, for a randomly chosen automobile, let A = {car is blue} and B = {car is red}. Since the events are mutually exclusive, if B occurs, then A cannot possibly have occurred, so P(A | B) = 0 ≠ P(A). The message here is that if two events are mutually exclusive, they cannot be independent. When A and B are mutually exclusive, the information that A occurred says something about B (it cannot have occurred), so independence is precluded.

P(A ∩ B) When Events Are Independent

Frequently the nature of an experiment suggests that two events A and B should be assumed independent. This is the case, for example, if a manufacturer receives a circuit board from each of two different suppliers, each board is tested on arrival, and A = {first is defective} and B = {second is defective}. If P(A) = .1, it should also be the case that P(A | B) = .1; knowing the condition of the second board shouldn't provide information about the condition of the first. Our next result shows how to compute P(A ∩ B) when the events are independent.

PROPOSITION
A and B are independent if and only if

    P(A ∩ B) = P(A) · P(B)    (2.8)

To paraphrase the proposition, A and B are independent events iff¹ the probability that they both occur (A ∩ B) is the product of the two individual probabilities. The verification is as follows:

    P(A ∩ B) = P(A | B) · P(B) = P(A) · P(B)    (2.9)

where the second equality in Equation (2.9) is valid iff A and B are independent.
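The proposition can be checked by brute force for the card example above. The following Python sketch (ours, not the text's) represents the 52 equally likely outcomes explicitly and verifies the multiplication property with exact arithmetic:

```python
from fractions import Fraction

# Build the deck of 52 equally likely outcomes.
suits = ['spade', 'heart', 'diamond', 'club']
ranks = ['A', 'K', 'Q', 'J', '10', '9', '8', '7', '6', '5', '4', '3', '2']
deck = [(s, r) for s in suits for r in ranks]

A = {c for c in deck if c[0] == 'spade'}            # A = {spade}
B = {c for c in deck if c[1] in {'K', 'Q', 'J'}}    # B = {face card}

# Probability of an event: favorable outcomes over total outcomes.
P = lambda event: Fraction(len(event), len(deck))

assert P(A & B) == P(A) * P(B)        # 3/52 = (13/52)(12/52): independent
print(P(A), P(B), P(A & B))           # 1/4 3/13 3/52
```

Using `Fraction` keeps the check exact, so the equality in (2.8) is tested without floating-point rounding.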
Because of the equivalence of independence with Equation (2.8), the latter can be used as a definition of independence.²

¹ "Iff" is an abbreviation for "if and only if."
² However, the multiplication property is satisfied if P(B) = 0, yet P(A | B) is not defined in this case. To make the multiplication property completely equivalent to the definition of independence, we should append to that definition that A and B are also independent if either P(A) = 0 or P(B) = 0.

It is known that 30% of a certain company's washing machines require service while under warranty, whereas only 10% of its dryers need such service. If someone purchases both a washer and a dryer made by this company, what is the probability that both machines need warranty service?

Let A denote the event that the washer needs service while under warranty, and let B be defined analogously for the dryer. Then P(A) = .30 and P(B) = .10. Assuming that the two machines function independently of each other, the desired probability is

    P(A ∩ B) = P(A) · P(B) = (.30)(.10) = .03

The probability that neither machine needs service is

    P(A′ ∩ B′) = P(A′) · P(B′) = (.70)(.90) = .63

Note that, although the independence assumption is reasonable here, it can be questioned. In particular, if heavy usage causes a breakdown in one machine, it could also cause trouble for the other one.

Each day, Monday through Friday, a batch of components sent by a first supplier arrives at a certain inspection facility. Two days a week, a batch also arrives from a second supplier. Eighty percent of all supplier 1's batches pass inspection, and 90% of supplier 2's do likewise. What is the probability that, on a randomly selected day, two batches pass inspection? We will answer this assuming that on days when two batches are tested, whether the first batch passes is independent of whether the second batch does so. Figure 2.13 displays the relevant information.

[Figure 2.13 Tree diagram for Example 2.34: first-generation branches distinguish one batch received (.6) from two batches received (.4); second-generation branches show each batch passing or failing, with probability (.8 × .9) on the path where both batches pass]

    P(two pass) = P(two received ∩ both pass)
      = P(both pass | two received) · P(two received)
      = [(.8)(.9)](.4) = .288

Independence of More Than Two Events

The notion of independence of two events can be extended to collections of more than two events. Although it is possible to extend the definition for two independent events by working in terms of conditional and unconditional probabilities, it is more direct and less cumbersome to proceed along the lines of the last proposition.

DEFINITION
Events A1, ..., An are mutually independent if for every k (k = 2, 3, ..., n) and every subset of indices i1, i2, ..., ik,

    P(A_i1 ∩ A_i2 ∩ ··· ∩ A_ik) = P(A_i1) · P(A_i2) ··· P(A_ik)

To paraphrase the definition, the events are mutually independent if the probability of the intersection of any subset of the n events is equal to the product of the individual probabilities. As was the case with two events, we frequently specify at the outset of a problem the independence of certain events. The definition can then be used to calculate the probability of an intersection.

The article "Reliability Evaluation of Solar Photovoltaic Arrays" (Solar Energy, 2002: 129-141) presents various configurations of solar photovoltaic arrays consisting of crystalline silicon solar cells. Consider first the system illustrated in Figure 2.14a. There are two subsystems connected in parallel, each one containing three cells. In order for the system to function, at least one of the two parallel subsystems must work. Within each subsystem, the three cells are connected in series, so a subsystem will work only if all cells in the subsystem work. Consider a particular lifetime value t0, and suppose we want to determine the probability that the system lifetime exceeds t0. Let Ai denote the event that the lifetime of cell i exceeds t0 (i = 1, 2, ..., 6).
We assume that the A;’s are independent events (whether any particular cell lasts more than fg hours has no bearing on whether any other cell does) and that P(A;) = .9 for every i since the cells are identical. Then P(system lifetime exceeds fo) = P[(Ay 1 Az .A3) U (Aq M As 1.Ap)] = P(A, NA? MNA3) + P(Ag NAs NAG) — P[(A1 N.A2A3) N (Ag As NAo)] = (.9)(.9)(.9) + (-9)(-9)(-9) — (.9)(.9)(.9)(.9)(.9)(.9) = .927 Alternatively, P(system lifetime exceeds fo) = 1 — P(both subsystem lives are < to) = 1 — [P(subsystem life is < to)]* = 1—[1 — P(subsystem life is>7p)]? =1-[1-(9))? =.927 a b 1}421H43 1H{2Hi3 4t{sHe 44{sHle Figure 2.14 System configurations for Example 2.35: (a) series-parallel; (b) total- cross-tied Next consider the total-cross-tied system shown in Figure 2.14b, obtained from the series-parallel array by connecting ties across each column of junctions. Now the system fails as soon as an entire column fails, and system lifetime exceeds fo only if the life of every column does so. For this configuration, P(system lifetime exceeds fo) = [P (column lifetime exceeds fo)}* = [1 —P(column lifetime is < to)}* = [1 —P (both cells in a column have lifetime < f)]* =1-[1-(.9)?)P =.970 = --- Trang 102 --- 2.5 Independence 89 Exercises | Section 2.5 (66-83) 66. Reconsider the credit card scenario of Exercise 47 a. If 20% of all seams need reworking, what is the (Section 2.4), and show that A and B are depen- probability that a rivet is defective? dent first by using the definition of independence b. How small should the probability of a defec- and then by verifying that the multiplication prop- tive rivet be to ensure that only 10% of all erty does not hold. seams need reworking? 67. An oil exploration company currently has two 73. A boiler has five identical relief valves. The prob- active projects, one in Asia and the other in Eur- ability that any particular valve will open on ope. Let A be the event that the Asian project is demand is .95. 
Assuming independent operation successful and B be the event that the European of the valves, calculate P(at least one valve opens) project is successful. Suppose that A and B are and P(at least one valve fails to open). wie Agente ae 4 ane me 74. ‘Two pumps connected in parallel fail indepen- is the probability that the European project is dently of-each other on/any:given dey: The proba also not successful? Explain your reasoning. Dulity that only the elder pumpi will. tails It) and b. What is the probability that at least one of the the probability that only the-newer-pump will fail . tie 6 ce te will - “ casstill, . is .05. What is the probability that the pumping c. neh that at least one of the two projects is system will fail on any given day (which happens A a eis ff both pumps fail)? successful, what is the probability that only the if both pumps fail) Asian project is successful? 75. Consider the system of components connected as the acc i icture. C ts 1 and 68. In Exercise 15, is any A; independent of any other ovane-cenne ed ait ican a ao - es i Doses using themultepbeahon property tor works iff either 1 or 2 works; since 3 and 4 are anon soe connected in series, that subsystem works iff both 69. If A and B are independent events, show that A’ 3 and 4 work. If components work independently and B are also independent. [Hint: First establish of one another and P(component works) = 9, cal- a relationship among P(4’NB), P(B), and culate P(system works). P(AMB).] 70. Suppose that the proportions of blood phenotypes t in a particular population are as follows: AB ABO a 4 10 04 Assuming that the phenotypes of two randomly q A selected individuals are independent of each ther, what is the )bability that both phenotypes 7 sre G2 Whats ts SOU stage nich es 76. Refer back to the series-parallel system configu- of two randomly selected individuals match? ‘tabon/antrodived mnExample, 2:4p,/und suppose 71. 
The probability that a grader will make a marking thal thee a ey oe ee haitet a. as * error on any particular question of a multiple- cack parallel aubsystem [in Figure 2 Uta-elinit choice exam is .1. If there are ten questions and Hive celle'S galt, ans senuesber-celiy Sand 5 467 questions are marked independently, what is the aid AF Using (Aj) ==.9; the probability: that-eye: probability that no errors are made? That at least fen lifetime exceeds to ts edsily seen to be!.9639: one error is made? If there are n questions and the Ror iter; value; would..9 have 19 ,becchanged. an probability of a marking error is p rather than .1, order to increase the system lifetime reliability give expressions for these two probabilities. from .9639 to 99? [Hint: Let P(A) =p, express system reliability in terms of p, and then 72. An aircraft seam requires 25 rivets. The seam will let x=p?] have to be reworked if any of these rivets is . . defective, Suppose rivets are defective indepen, 77+ Consider independently rolling two fair dice, one dently of one another, each with the same proba- red and the other green. Let A be the event that the bil oe : e red die shows 3 dots, B be the event that the green --- Trang 103 --- 90 — carrer 2 Probability die shows 4 dots, and C be the event that the total of these boards (2000) are actually too green to number of dots showing on the two dice is 7. Are be used in first-quality construction. Two these events pairwise independent (i.e., are A and boards are selected at random, one after the B independent events, are A and C independent, other. Let A= (the first board is green} and and are B and C independent)? Are the three B=(the second board is green). Compute events mutually independent? P(A), P(B), and P(A 2 B) (a tree diagram 2 i 78. Components arriving at a distributor are checked b mer rye Awa ant 8 eae een, P= for defects by two different inspectors (each com- Becca, wiay fe PE BVT Th ee ponent is checked by both inspectors). 
The first Ae ae there bet ef ‘over ana inspector detects 90% of all defectives that are BOLTS sant ta) Bec oupeges of leat present, and the second inspector does likewise. a aa tecalaadia gl part at At least one inspector fails to detect a defect on ings 8) ccanjweassume;that, 2 and Bo) 20% of all defective components. What is the Part (a) are independent to obtain essentially probability that the following occur? the correct probability? . fi * cc. Suppose the lot consists of ten boards, of which a. A defective component will be detected only nae bythe Hie agpedin” BY Wkaally one aF ite. two are green. Does the assumption of indepen- (we ‘epectore? dence now yield approximately the correct b. All three defective components in a batch grower for BUA C.8)? What ip the serstital 7 4 Sey encasdaeanrat difference between the situation here and escape detection by both inspectors (assuming habof a)? When d ink ‘th inspections of different components are inde- thatiot-parti(a)? Whendo youthinic that-an cndent of one another)? independence assumption would be valid in Pp ° obtaining an approximately correct answer to 79. A quality control inspector is inspecting newly P(ANB)? Produced items for faults. The inspector searches 54 peter tq the assumptions stated’ in Exercise 75 an item for faults in a series of independent fixa- : 4 tions, each of a fixed duration. Given that a flaw is andl answer the question’ posed thers forthe:system actually present, let p denote the probability that in Se ecompaning ee How vig the the flaw is detected during any one fixation (this Deobebuuty) change Ui tus were (2 subsystem model is discussed in “Human Performance in connected in parallel to the subsystem pictured i dad ig) ve Sampling Inspection,” Hum. Factors, 1979: maigure ay 99-105). i oo a. Assuming that an item has a flaw, what is the probability that it is detected by the end of the ud second fixation (once a flaw has been detected, 2 sl the sequence of fixations terminates)? b. 
Give an expression for the probability that a 82 Professor Stander Deviation can take one of two Haw, will. be detected by the end of the wth routes on his way home from work. On the first cation: route, there are four railroad crossings. The prob- ¢. If when a flaw has not been detected in three ability that he will be stopped by a train at any fixations, the item is passed, what is the proba- particular one of the crossings is .1, and trains bility that a flawed item will pass inspection? operate independently at the four crossings. The d. Suppose 10% of all items contain a flaw other route is longer but there are only two cross- [Piandomiy’ chosen: item. ig flawed) 11. ings, independent of each other, with the same With the assumption of part (c), what is the stoppage probability for each as on the first probability that a randomly chosen item will route. On a particular day, Professor Deviation pass inspection (it will automatically pass if it has a meeting scheduled at home for a certain ig Hot. awed, Dit Deu AIO pues FAB time. Whichever route he takes, he calculates flawed)? that he will be late if he is stopped by trains at e. Given that an item has passed inspection least half the crossings encountered. (no flaws in three fixations), what is the proba- a. Which route should he take to minimize the bility that it is actually flawed? Calculate for probability of being late to the meeting? p=5. b. If he tosses a fair coin to decide on a route and . . he is late, what is the probability that he took 80. a. A lumber company has just taken delivery on a the four-crossing route? lot of 10,000 2 x4 boards. Suppose that 20% --- Trang 104 --- Supplementary Exercises 91 83. Suppose identical tags are placed on both the left (involving 7) for the probability that exactly one ear and the right ear of a fox. The fox is then let tag is lost given that at most one is lost (“Ear Tag loose for a period of time. Consider the two events Loss in Red Foxes,” J. 
Wildlife Manag., 1976: Cy = [left ear tag is lost} and Cy = {right ear tag is 164-167). [Hint: Draw a tree diagram in which lost}. Let z= P(C,)=P(C2), and assume C and the two initial branches refer to whether the left C) are independent events. Derive an expression ear tag was lost.] | Supplementary Exercises| laguney (84-109) 84. A small manufacturing company will start 86. An employee of the records office at a university operating a night shift. There are 20 machinists currently has ten forms on his desk awaiting pro- employed by the company. cessing. Six of these are withdrawal petitions and a, If a night crew consists of 3 machinists, how the other four are course substitution requests. many different crews are possible? a. If he randomly selects six of these forms to b. If the machinists are ranked 1, 2, ... , 20 in give to a subordinate, what is the probability order of competence, how many of these crews that only one of the two types of forms remains would not have the best machinist? on his desk? ¢. How many of the crews would have at least 1 b. Suppose he has time to process only four of of the 10 best machinists? these forms before leaving for the day. If these d. If one of these crews is selected at random to four are randomly selected one by one, what is work on a particular night, what is the proba- the probability that each succeeding form is of a bility that the best machinist will not work that different type from its predecessor? night? 87. One satellite is scheduled to be launched from 85. A factory uses three production lines to manufac- Cape Canaveral in Florida, and another launching ture cans of a certain type. The accompanying is scheduled for Vandenberg Air Force Base in table gives percentages of nonconforming cans, California. Let A denote the event that the Van- categorized by type of nonconformance, for each denberg launch goes off on schedule, and let B of the three lines during a particular time period. 
represent the event that the Cape Canaveral launch goes off on schedule. If A and B are Line1 Line2 Line3 independent events with P(A)>P(B) and eee P(A UB) =.626, P(A NB) =.144, determine the Blemish 15 12 20 values of P(A) and P(B). Crack 50 44 40 88. A transmitter is sending a message by using a Pull-Tab Problem = 21 28 24 binary code, namely, a sequence of 0's and 1's. Surface Defect 10 8 15 Each transmitted bit (0 or 1) must pass through Other 4 8 2 three relays to reach the receiver. At each relay, the probability is .20 that the bit sent will be differ- During this period, line 1 produced 500 noncon- ent from the bit received (a reversal). Assume that forming cans, line 2 produced 400 such cans, and the slays operate independently of cne another: line 3 was responsible for 600 nonconforming Transmitter > Relay 1 — Relay 2— Relay 3 cans. Suppose that one of these 1500 cans is — Receiver randomly selected. an . . a. What is the probability that the can was pro- A nie et Crome tne stegeltiee wha te duced by line 1? That the reason for noncon- probability-dhata! is(senithy all'thiee-relays? forties 18 aera? b. Ifal is sent from the transmitter, what is the b. If the selected can came from line 1, what is the BODAb ty: tise Vas se eiVER Ey MieuRcenv ee? probability that it had a blemish? (Hint: The eight experimental outcomes can be ¢. Given that the selected can had a surface defect, cisplayed oma treesciagram: with three meneras we ° : tions of branches, one generation for each what is the probability that it came from line 1? lay] --- Trang 105 --- 92 CHAPTER 2 Probability c. Suppose 70% of all bits sent from the transmit- b. Given that a fastener passed inspection, what is ter are 1’s. Ifa 1 is received by the receiver, the probability that it passed the initial inspec- what is the probability that a 1 was sent? tion and did not need recrimping? 89. Individual A has a circle of five close friends 93. 
89. Individual A has a circle of five close friends (B, C, D, E, and F). A has heard a certain rumor from outside the circle and has invited the five friends to a party to circulate the rumor. To begin, A selects one of the five at random and tells the rumor to the chosen individual. That individual then selects at random one of the four remaining individuals and repeats the rumor. Continuing, a new individual is selected from those not already having heard the rumor by the individual who has just heard it, until everyone has been told.
a. What is the probability that the rumor is repeated in the order B, C, D, E, and F?
b. What is the probability that F is the third person at the party to be told the rumor?
c. What is the probability that F is the last person to hear the rumor?

90. Refer to Exercise 89. If at each stage the person who currently "has" the rumor does not know who has already heard it and selects the next recipient at random from all five possible individuals, what is the probability that F has still not heard the rumor after it has been told ten times at the party?

93. One percent of all individuals in a certain population are carriers of a particular disease. A diagnostic test for this disease has a 90% detection rate for carriers and a 5% detection rate for noncarriers. Suppose the test is applied independently to two different blood samples from the same randomly selected individual.
a. What is the probability that both tests yield the same result?
b. If both tests are positive, what is the probability that the selected individual is a carrier?

94. A system consists of two components. The probability that the second component functions in a satisfactory manner during its design life is .9, the probability that at least one of the two components does so is .96, and the probability that both components do so is .75. Given that the first component functions in a satisfactory manner throughout its design life, what is the probability that the second one does also?
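Exercise 93 is a typical Bayes'-rule calculation with conditionally independent tests. A short illustrative sketch (not part of the text; the variable names are my own):

```python
p_carrier = 0.01
p_pos_carrier, p_pos_noncarrier = 0.90, 0.05

# part (a): both tests agree (both positive or both negative),
# conditioning on whether the individual is a carrier
p_same = (p_carrier * (p_pos_carrier**2 + (1 - p_pos_carrier)**2)
          + (1 - p_carrier) * (p_pos_noncarrier**2 + (1 - p_pos_noncarrier)**2))

# part (b): Bayes' rule given two positive results
num = p_carrier * p_pos_carrier**2
p_carrier_given_2pos = num / (num + (1 - p_carrier) * p_pos_noncarrier**2)
```

Even with two positive tests, the posterior probability of being a carrier stays well below 1, because the disease is rare to begin with.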
91. A chemist is interested in determining whether a certain trace impurity is present in a product. An experiment has a probability of .80 of detecting the impurity if it is present. The probability of not detecting the impurity if it is absent is .90. The prior probabilities of the impurity being present and being absent are .40 and .60, respectively. Three separate experiments result in only two detections. What is the posterior probability that the impurity is present?

92. Fasteners used in aircraft manufacturing are slightly crimped so that they lock enough to avoid loosening during vibration. Suppose that 95% of all fasteners pass an initial inspection. Of the 5% that fail, 20% are so seriously defective that they must be scrapped. The remaining fasteners are sent to a recrimping operation, where 40% cannot be salvaged and are discarded. The other 60% of these fasteners are corrected by the recrimping process and subsequently pass inspection.
a. What is the probability that a randomly selected incoming fastener will pass inspection, either initially or after recrimping?
b. Given that a fastener passed inspection, what is the probability that it passed the initial inspection and did not need recrimping?

95. A certain company sends 40% of its overnight mail parcels via express mail service E1. Of these parcels, 2% arrive after the guaranteed delivery time (denote the event "late delivery" by L). If a record of an overnight mailing is randomly selected from the company's file, what is the probability that the parcel went via E1 and was late?

96. Refer to Exercise 95. Suppose that 50% of the overnight parcels are sent via express mail service E2 and the remaining 10% are sent via E3. Of those sent via E2, only 1% arrive late, whereas 8% of the parcels handled by E3 arrive late.
a. What is the probability that a randomly selected parcel arrived late?
b. If a randomly selected parcel has arrived on time, what is the probability that it was not sent via E1?
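For Exercise 91, the likelihood of two detections in three independent runs is binomial under either hypothesis, so the posterior follows from Bayes' rule. An illustrative check (the helper binom_pmf is my own):

```python
from math import comb

prior_present, prior_absent = 0.40, 0.60
p_detect_present = 0.80      # detection probability when the impurity is present
p_detect_absent = 0.10       # false-detection probability (1 - .90) when absent

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success prob p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

like_present = binom_pmf(2, 3, p_detect_present)   # 2 detections in 3 runs, impurity present
like_absent = binom_pmf(2, 3, p_detect_absent)     # same data, impurity absent
posterior = (prior_present * like_present) / (
    prior_present * like_present + prior_absent * like_absent)
```

Two detections are far more likely when the impurity is present (.384 vs. .027), which is why the posterior probability is so much larger than the .40 prior.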
97. A company uses three different assembly lines (A1, A2, and A3) to manufacture a particular component. Of those manufactured by line A1, 5% need rework to remedy a defect, whereas 8% of A2's components need rework and 10% of A3's need rework. Suppose that 50% of all components are produced by line A1, 30% are produced by line A2, and 20% come from line A3. If a randomly selected component needs rework, what is the probability that it came from line A1? From line A2? From line A3?

98. Disregarding the possibility of a February 29 birthday, suppose a randomly selected individual is equally likely to have been born on any one of the other 365 days.
a. If ten people are randomly selected, what is the probability that all have different birthdays? That at least two have the same birthday?
b. With k replacing ten in part (a), what is the smallest k for which there is at least a 50-50 chance that two or more people will have the same birthday?
c. If ten people are randomly selected, what is the probability that either at least two have the same birthday or at least two have the same last three digits of their Social Security numbers? [Note: The article "Methods for Studying Coincidences" (F. Mosteller and P. Diaconis, J. Amer. Statist. Assoc., 1989: 853–861) discusses problems of this type.]

100. In a Little League baseball game, team A's pitcher throws a strike 50% of the time and a ball 50% of the time, successive pitches are independent of each other, and the pitcher never hits a batter. Knowing this, team B's manager has instructed the first batter not to swing at anything. Calculate the probability that
a. The batter walks on the fourth pitch.
b. The batter walks on the sixth pitch (so two of the first five must be strikes), using a counting argument or constructing a tree diagram.
c. The batter walks.
d. The first batter up scores while no one is out (assuming that each batter pursues a no-swing strategy).
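Part (b) of Exercise 98 is the classical birthday problem, and the smallest qualifying k can be found by brute force. This sketch is illustrative only (the function name p_all_different is my own); it reproduces the well-known answer k = 23:

```python
def p_all_different(k, days=365):
    """P(k randomly selected people all have different birthdays)."""
    p = 1.0
    for i in range(k):
        p *= (days - i) / days
    return p

p_distinct_10 = p_all_different(10)     # part (a): all ten birthdays differ
p_shared_10 = 1 - p_distinct_10         # part (a): at least two share a birthday

k = 1
while 1 - p_all_different(k) < 0.5:     # part (b): smallest k with >= 50-50 chance
    k += 1
```

The product form comes from multiplying conditional probabilities: the second person must avoid 1 occupied day, the third must avoid 2, and so on.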
99. One method used to distinguish between granitic (G) and basaltic (B) rocks is to examine a portion of the infrared spectrum of the sun's energy reflected from the rock surface. Let R1, R2, and R3 denote measured spectrum intensities at three different wavelengths; typically, for granite R1 < R2 < R3, whereas for basalt R3 < R1 < R2. When measurements are made remotely, various orderings of the Ri's may arise whether the rock is granite or basalt. Suppose that the probabilities of the possible orderings given each rock type, along with P(granite) and P(basalt), are as given in the accompanying table.
a. If measurements yielded R1 < R2 < R3, would you classify the rock as granite or basalt?
b. If measurements yielded R1 < R3 < R2, how would you classify the rock? Answer the same question for R3 < R1 < R2.

101. Four candidates, A, B, C, and D, have been scheduled for job interviews at 10 a.m. on Friday, January 13, at Random Sampling, Inc. The personnel manager has scheduled the four for interview rooms 1, 2, 3, and 4, respectively. Unaware of this, the manager's secretary assigns them to the four rooms in a completely random fashion (what else!). What is the probability that
a. All four end up in the correct rooms?

103. Four candidates, whom the manager ranks 1, 2, 3, and 4 in order of preference, will be interviewed in random order. However, at the conclusion of each interview, the manager will know only how the current candidate compares to those previously interviewed. For example, the interview order 3, 4, 1, 2 generates no information after the first interview. The manager will interview the first s candidates without hiring any of them, and then hire the first subsequent candidate who is better than all those already interviewed (if no such candidate appears, the last one interviewed is hired). For example, with s = 2, the order 3, 4, 1, 2 would result in the best being hired, whereas the order 3, 1, 2, 4 would not. Of the four possible s values (0, 1, 2, and 3), which one maximizes P(best is hired)? [Hint: Write out the 24 equally likely interview orderings; s = 0 means that the first candidate is automatically hired.]
104. Consider four independent events A1, A2, A3, and A4, and let pi = P(Ai) for i = 1, 2, 3, 4. Express the probability that at least one of these four events occurs in terms of the pi's, and do the same for the probability that at least two of the events occur.

105. A box contains the following four slips of paper, each having exactly the same dimensions: (1) win prize 1; (2) win prize 2; (3) win prize 3; (4) win prizes 1, 2, and 3. One slip will be randomly selected. Let A1 = {win prize 1}, A2 = {win prize 2}, and A3 = {win prize 3}. Show that A1 and A2 are independent, that A1 and A3 are independent, and that A2 and A3 are also independent (this is pairwise independence). However, show that P(A1 ∩ A2 ∩ A3) ≠ P(A1) · P(A2) · P(A3), so the three events are not mutually independent.
106. Consider a woman whose brother is afflicted with hemophilia, which implies that the woman's mother has the hemophilia gene on one of her two X chromosomes (almost surely not both, since that is generally fatal). Thus there is a 50-50 chance that the woman's mother has passed on the bad gene to her. The woman has two sons, each of whom will independently inherit the gene from one of her two chromosomes. If the woman herself has a bad gene, there is a 50-50 chance she will pass this on to a son. Suppose that neither of her two sons is afflicted with hemophilia. What then is the probability that the woman is indeed the carrier of the hemophilia gene? What is this probability if she has a third son who is also not afflicted?

107. Jurors may be a priori biased for or against the prosecution in a criminal trial. Each juror is questioned by both the prosecution and the defense (the voir dire process), but this may not reveal bias. Even if bias is revealed, the judge may not excuse the juror for cause because of the narrow legal definition of bias. For a randomly selected candidate for the jury, define events B0, B1, and B2 as the juror being unbiased, biased against the prosecution, and biased against the defense, respectively. Also let C be the event that bias is revealed during the questioning and D be the event that the juror is eliminated for cause. Let bi = P(Bi) (i = 0, 1, 2), c = P(C | B1) = P(C | B2), and d = P(D | B1 ∩ C) = P(D | B2 ∩ C) ["Fair Number of Peremptory Challenges in Jury Trials," J. Amer. Statist. Assoc., 1979: 747–753].
a. If a juror survives the voir dire process, what is the probability that he/she is unbiased (in terms of the bi's, c, and d)? What is the probability that he/she is biased against the prosecution? What is the probability that he/she is biased against the defense? [Hint: Represent this situation using a tree diagram with three generations of branches.]
b. What are the probabilities requested in (a) if b0 = .50, b1 = .10, b2 = .40 (all based on data relating to the famous trial of the Florida murderer Ted Bundy), c = .85 (corresponding to the extensive questioning appropriate in a capital case), and d = .7 (a "moderate" judge)?

108. Allan and Beth currently have $2 and $3, respectively. A fair coin is tossed. If the result of the toss is H, Allan wins $1 from Beth, whereas if the coin toss results in T, then Beth wins $1 from Allan. This process is then repeated, with a coin toss followed by the exchange of $1, until one of the two players goes broke (one of the two gamblers is ruined). We wish to determine a2 = P(Allan is the winner | he starts with $2). To do so, let's also consider ai = P(Allan wins | he starts with $i) for i = 0, 1, 3, 4, and 5.
a. What are the values of a0 and a5?
b. Use the law of total probability to obtain an equation relating a2 to a1 and a3. [Hint: Condition on the result of the first coin toss, realizing that if it is an H, then from that point Allan starts with $3.]
c. Using the logic described in (b), develop a system of equations relating ai (i = 1, 2, 3, 4) to ai−1 and ai+1. Then solve these equations. [Hint: Write each equation so that ai − ai−1 is on the left-hand side. Then use the result of the first equation to express each ai − ai−1 as a function of a1, and add together all four of these expressions (i = 2, 3, 4, 5).]
d. Generalize the result to the situation in which Allan's initial fortune is $a and Beth's is $b. [Note: The solution is a bit more complicated if p = P(Allan wins $1) ≠ .5.]
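The system of equations in Exercise 108 can also be solved numerically. The sketch below (illustrative only; the function name allan_win_probs is my own) simply iterates the total-probability recursion from part (b) until it converges; for the fair-coin case with fortunes $2 and $3 it reproduces a2 = 2/5:

```python
def allan_win_probs(total=5, p=0.5, sweeps=5000):
    """a[i] = P(Allan ends up with all $`total` | he currently has $i).

    Boundary values: a[0] = 0, a[total] = 1.  Interior values satisfy
    a[i] = p * a[i+1] + (1 - p) * a[i-1] (condition on the first toss).
    The linear system is solved here by repeated in-place sweeps.
    """
    a = [0.0] * (total + 1)
    a[total] = 1.0
    for _ in range(sweeps):
        for i in range(1, total):
            a[i] = p * a[i + 1] + (1 - p) * a[i - 1]
    return a

a = allan_win_probs()   # fair coin: a[i] converges to i/5
```

This agrees with the generalization in part (d): with a fair coin and initial fortunes $a and $b, Allan wins with probability a/(a + b).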
For a randomly Allan’s initial fortune is $a and Beth’s is $b. selected candidate for the jury, define events Bo, ‘Notes The solition 15 abit tiorecomplicated if p = P(Allan wins $1) 4.5. --- Trang 108 --- Bibliography 95 109. Prove that if P(BIA) > P(B) [in which case we probability 1/2 and a with probability 1/2, say that “A attracts B”], then P(A| B) > P(A) whereas the second contributes A for sure. The [“B attracts A”). resulting offspring will be either AA or Aa, and therefore will be dark colored. Assume that this 110. Suppose a single gene determines whether the : ‘ : i ‘ child then mates with an Aa animal to produce a coloring of a certain animal is dark or light. The pee : : raeee: : grandchild with dark coloring. In light of this coloring will be dark if the genotype is either AA : : : , fi : information, what is the probability that the or Aa and will be light only if the genotype is aa a . F = Ge ioe first-generation offspring has the Aa genotype (so A is dominant and a is recessive). 3 : (is heterozygous)? [Hint: Construct an appropri- Consider two parents with genotypes Aa and te tree diag ] AA. The first contributes A to an offspring with Ate ee: agra. Durrett, Richard, Elementary Probability for Applica- introduction to probability, written at a slightly tions, Cambridge Univ. Press, London, England, higher mathematical level than this text but con- 2009. A concise presentation at a slightly higher taining many good examples. level than this text. Ross, Sheldon, A First Course in Probability (8th ed.), Mosteller, Frederick, Robert Rourke, and George Prentice Hall, Upper Saddle River, NJ, 2010. Thomas, Probability with Statistical Applications Rather tightly written and more mathematically (2nd ed.), Addison-Wesley, Reading, MA, 1970. sophisticated than this text but contains a wealth A very good precalculus introduction to probabil- of interesting examples and exercises. 
Winkler, Robert, Introduction to Bayesian Inference and Decision (2nd ed.), Probabilistic Publishing, Sugar Land, Texas, 2003. A very good introduction to subjective probability.

CHAPTER THREE

Discrete Random Variables and Probability Distributions

Whether an experiment yields qualitative or quantitative outcomes, methods of statistical analysis require that we focus on certain numerical aspects of the data (such as a sample proportion x/n, mean x̄, or standard deviation s). The concept of a random variable allows us to pass from the experimental outcomes themselves to a numerical function of the outcomes. There are two fundamentally different types of random variables: discrete random variables and continuous random variables. In this chapter, we examine the basic properties and discuss the most important examples of discrete variables. Chapter 4 focuses on continuous random variables.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_3, © Springer Science+Business Media, LLC 2012

3.1 Random Variables

In any experiment, numerous characteristics can be observed or measured, but in most cases an experimenter will focus on some specific aspect or aspects of a sample. For example, in a study of commuting patterns in a metropolitan area, each individual in a sample might be asked about commuting distance and the number of people commuting in the same vehicle, but not about IQ, income, family size, and other such characteristics. Alternatively, a researcher may test a sample of components and record only the number that have failed within 1000 hours, rather than record the individual failure times.
In general, each outcome of an experiment can be associated with a number by specifying a rule of association (e.g., the number among the sample of ten components that fail to last 1000 h or the total weight of baggage for a sample of 25 airline passengers). Such a rule of association is called a random variable: a variable because different numerical values are possible and random because the observed value depends on which of the possible experimental outcomes results (Figure 3.1).

Figure 3.1 A random variable

DEFINITION  For a given sample space 𝒮 of some experiment, a random variable (rv) is any rule that associates a number with each outcome in 𝒮. In mathematical language, a random variable is a function whose domain is the sample space and whose range is the set of real numbers.

Random variables are customarily denoted by uppercase letters, such as X and Y, near the end of our alphabet. In contrast to our previous use of a lowercase letter, such as x, to denote a variable, we will now use lowercase letters to represent some particular value of the corresponding random variable. The notation X(s) = x means that x is the value associated with the outcome s by the rv X.

Example 3.1  When a student attempts to connect to a university computer system, either there is a failure (F) or there is a success (S). With 𝒮 = {S, F}, define an rv X by X(S) = 1, X(F) = 0. The rv X indicates whether (1) or not (0) the student can connect.

In Example 3.1, the rv X was specified by explicitly listing each element of 𝒮 and the associated number. If 𝒮 contains more than a few outcomes, such a listing is tedious, but it can frequently be avoided.
Example 3.2  Consider the experiment in which a telephone number in a certain area code is dialed using a random number dialer (such devices are used extensively by polling organizations), and define an rv Y by

  Y = 1 if the selected number is unlisted
  Y = 0 if the selected number is listed in the directory

For example, if 5282966 appears in the telephone directory, then Y(5282966) = 0, whereas Y(7727350) = 1 tells us that the number 7727350 is unlisted. A word description of this sort is more economical than a complete listing, so we will use such a description whenever possible.

In Examples 3.1 and 3.2, the only possible values of the random variable were 0 and 1. Such a random variable arises frequently enough to be given a special name, after the individual who first studied it.

DEFINITION  Any random variable whose only possible values are 0 and 1 is called a Bernoulli random variable.

We will often want to define and study several different random variables from the same sample space.

Example 3.3  Example 2.3 described an experiment in which the number of pumps in use at each of two gas stations was determined. Define rv's X, Y, and U by

  X = the total number of pumps in use at the two stations
  Y = the difference between the number of pumps in use at station 1 and the number in use at station 2
  U = the maximum of the numbers of pumps in use at the two stations

If this experiment is performed and s = (2, 3) results, then X((2, 3)) = 2 + 3 = 5, so we say that the observed value of X is x = 5. Similarly, the observed value of Y would be y = 2 − 3 = −1, and the observed value of U would be u = max(2, 3) = 3.

Each of the random variables of Examples 3.1–3.3 can assume only a finite number of possible values. This need not be the case.

Example 3.4  In Example 2.4, we considered the experiment in which batteries were examined until a good one (S) was obtained. The sample space was 𝒮 = {S, FS, FFS, ...}.
Define an rv X by

  X = the number of batteries examined before the experiment terminates

Then X(S) = 1, X(FS) = 2, X(FFS) = 3, ..., X(FFFFFFS) = 7, and so on. Any positive integer is a possible value of X, so the set of possible values is infinite.

Example 3.5  Suppose that in some random fashion, a location (latitude and longitude) in the continental United States is selected. Define an rv Y by

  Y = the height above sea level at the selected location

For example, if the selected location were (39°50′N, 98°35′W), then we might have Y((39°50′N, 98°35′W)) = 1748.26 ft. The largest possible value of Y is 14,494 (Mt. Whitney), and the smallest possible value is −282 (Death Valley). The set of all possible values of Y is the set of all numbers in the interval between −282 and 14,494; that is,

  {y : y is a number, −282 ≤ y ≤ 14,494}

and there are an infinite number of numbers in this interval.

Two Types of Random Variables

In Section 1.2 we distinguished between data resulting from observations on a counting variable and data obtained by observing values of a measurement variable. A slightly more formal distinction characterizes two different types of random variables.

DEFINITION  A discrete random variable is an rv whose possible values either constitute a finite set or else can be listed in an infinite sequence in which there is a first element, a second element, and so on. A random variable is continuous if both of the following apply:
1. Its set of possible values consists either of all numbers in a single interval on the number line (possibly infinite in extent, e.g., from −∞ to ∞) or all numbers in a disjoint union of such intervals (e.g., [0, 10] ∪ [20, 30]).
2. No possible value of the variable has positive probability, that is, P(X = c) = 0 for any possible value c.
Although any interval on the number line contains an infinite number of numbers, it can be shown that there is no way to create an infinite listing of all these values; there are just too many of them. The second condition describing a continuous random variable is perhaps counterintuitive, since it would seem to imply a total probability of zero for all possible values. But we shall see in Chapter 4 that intervals of values have positive probability; the probability of an interval will decrease to zero as the width of the interval shrinks to zero.

Example 3.6  All random variables in Examples 3.1–3.4 are discrete. As another example, suppose we select married couples at random and do a blood test on each person until we find a husband and wife who both have the same Rh factor. With X = the number of blood tests to be performed, possible values of X are D = {2, 4, 6, 8, ...}. Since the possible values have been listed in sequence, X is a discrete rv.

To study basic properties of discrete rv's, only the tools of discrete mathematics (summation and differences) are required. The study of continuous variables requires the continuous mathematics of the calculus (integrals and derivatives).

Exercises  Section 3.1 (1–10)

1. A concrete beam may fail either by shear (S) or flexure (F). Suppose that three failed beams are randomly selected and the type of failure is determined for each one. Let X = the number of beams among the three selected that failed by shear. List each outcome in the sample space along with the associated value of X.
2. Give three examples of Bernoulli rv's (other than those in the text).

3. Using the experiment in Example 3.3, define two more random variables and list the possible values of each.

4. Let X = the number of nonzero digits in a randomly selected zip code. What are the possible values of X? Give three possible outcomes and their associated X values.

5. If the sample space 𝒮 is an infinite set, does this necessarily imply that any rv X defined from 𝒮 will have an infinite set of possible values? If yes, say why. If no, give an example.

6. Starting at a fixed time, each car entering an intersection is observed to see whether it turns left (L), right (R), or goes straight ahead (A). The experiment terminates as soon as a car is observed to turn left. Let X = the number of cars observed. What are possible X values? List five outcomes and their associated X values.

7. For each random variable defined here, describe the set of possible values for the variable, and state whether the variable is discrete.
a. X = the number of unbroken eggs in a randomly chosen standard egg carton
b. Y = the number of students on a class list for a particular course who are absent on the first day of classes
c. U = the number of times a duffer has to swing at a golf ball before hitting it
d. X = the length of a randomly selected rattlesnake
e. Z = the amount of royalties earned from the sale of a first edition of 10,000 textbooks
f. Y = the pH of a randomly chosen soil sample
g. X = the tension (psi) at which a randomly selected tennis racket has been strung
h. X = the total number of coin tosses required for three individuals to obtain a match (HHH or TTT)

8. Each time a component is tested, the trial is a success (S) or failure (F). Suppose the component is tested repeatedly until a success occurs on three consecutive trials. Let Y denote the number of trials necessary to achieve this. List all outcomes corresponding to the five smallest possible values of Y, and state which Y value is associated with each one.
9. An individual named Claudius is located at the point 0 of the accompanying diagram. [Diagram: a square with corner points A1, A2, A3, A4, side points B1, B2, B3, B4, and 0 at the center.] Using an appropriate randomization device (such as a tetrahedral die, one having four sides), Claudius first moves to one of the four locations B1, B2, B3, B4. Once at one of these locations, he uses another randomization device to decide whether he next returns to 0 or next visits one of the other two adjacent points. This process then continues; after each move, another move to one of the (new) adjacent points is determined by tossing an appropriate die or coin.
a. Let X = the number of moves that Claudius makes before first returning to 0. What are possible values of X? Is X discrete or continuous?
b. If moves are allowed also along the diagonal paths connecting 0 to A1, A2, A3, and A4, respectively, answer the questions in part (a).

10. The number of pumps in use at both a six-pump station and a four-pump station will be determined. Give the possible values for each of the following random variables:
a. T = the total number of pumps in use
b. X = the difference between the numbers in use at stations 1 and 2
c. U = the maximum number of pumps in use at either station
d. Z = the number of stations having exactly two pumps in use

3.2 Probability Distributions for Discrete Random Variables

When probabilities are assigned to various outcomes in 𝒮, these in turn determine probabilities associated with the values of any particular rv X. The probability distribution of X says how the total probability of 1 is distributed among (allocated to) the various possible X values.

Example 3.7  Six lots of components are ready to be shipped by a supplier. The number of defective components in each lot is as follows:

  Lot                   1  2  3  4  5  6
  Number of defectives  0  2  0  1  2  0

One of these lots is to be randomly selected for shipment to a customer. Let X be the number of defectives in the selected lot. The three possible X values are 0, 1, and 2. Of the six equally likely simple events, three result in X = 0, one in X = 1, and the other two in X = 2.
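The tally just described for Example 3.7 can be generated mechanically. In this illustrative Python sketch (added here, not part of the text), the exact pmf is built from the lot data using the standard Counter and Fraction classes:

```python
from collections import Counter
from fractions import Fraction

defectives = [0, 2, 0, 1, 2, 0]            # number of defectives in lots 1-6
counts = Counter(defectives)               # how many lots yield each X value

# each of the six lots is equally likely to be selected
pmf = {x: Fraction(n, len(defectives)) for x, n in counts.items()}
```

Using Fraction keeps the probabilities exact (1/2, 1/6, 1/3) instead of rounded decimals.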
Let p(0) denote the probability that X = 0, and let p(1) and p(2) represent the probabilities of the other two possible values of X. Then

  p(0) = P(X = 0) = P(lot 1 or 3 or 6 is sent) = 3/6 = .500
  p(1) = P(X = 1) = P(lot 4 is sent) = 1/6 = .167
  p(2) = P(X = 2) = P(lot 2 or 5 is sent) = 2/6 = .333

That is, a probability of .500 is distributed to the X value 0, a probability of .167 is placed on the X value 1, and the remaining probability, .333, is associated with the X value 2. The values of X along with their probabilities collectively specify the probability distribution or probability mass function of X. If this experiment were repeated over and over again, in the long run X = 0 would occur one-half of the time, X = 1 one-sixth of the time, and X = 2 one-third of the time.

DEFINITION  The probability distribution or probability mass function (pmf) of a discrete rv is defined for every number x by

  p(x) = P(X = x) = P(all s ∈ 𝒮 : X(s) = x)¹

In words, for every possible value x of the random variable, the pmf specifies the probability of observing that value when the experiment is performed. The conditions p(x) ≥ 0 and Σp(x) = 1, where the summation is over all possible x, are required of any pmf.

¹P(X = x) is read "the probability that the rv X assumes the value x." For example, P(X = 2) denotes the probability that the resulting X value is 2.

Example 3.8  Consider randomly selecting a student at a large public university, and define a Bernoulli rv by X = 1 if the selected student does not qualify for in-state tuition (a success from the university administration's point of view) and X = 0 if the student does qualify. If 20% of all students do not qualify, the pmf for X is

  p(0) = P(X = 0) = P(the selected student does qualify) = .8
  p(1) = P(X = 1) = P(the selected student does not qualify) = .2
  p(x) = P(X = x) = 0 for x ≠ 0 or 1
An equivalent description is

  p(x) = .8 if x = 0
  p(x) = .2 if x = 1
  p(x) = 0  if x ≠ 0 or 1

Figure 3.2 is a picture of this pmf, called a line graph.

Figure 3.2 The line graph for the pmf in Example 3.8

Example 3.9  Consider a group of five potential blood donors (A, B, C, D, and E), of whom only A and B have type O+ blood. Five blood samples, one from each individual, will be typed in random order until an O+ individual is identified. Let the rv Y = the number of typings necessary to identify an O+ individual. Then the pmf of Y is

  p(1) = P(Y = 1) = P(A or B typed first) = 2/5 = .4
  p(2) = P(Y = 2) = P(C, D, or E first, and then A or B)
       = P(C, D, or E first) · P(A or B next | C, D, or E first) = (3/5)(2/4) = .3
  p(3) = P(Y = 3) = P(C, D, or E first and second, and then A or B) = (3/5)(2/4)(2/3) = .2
  p(4) = P(Y = 4) = P(C, D, and E all done first) = (3/5)(2/4)(1/3) = .1
  p(y) = 0 for y ≠ 1, 2, 3, 4

The pmf can be presented compactly in tabular form:

  y     1   2   3   4
  p(y)  .4  .3  .2  .1

where any y value not listed receives zero probability. This pmf can also be displayed in a line graph (Figure 3.3).

Figure 3.3 The line graph for the pmf in Example 3.9

The name "probability mass function" is suggested by a model used in physics for a system of "point masses." In this model, masses are distributed at various locations x along a one-dimensional axis. Our pmf describes how the total probability mass of 1 is distributed at various points along the axis of possible values of the random variable (where and how much mass at each x).

Another useful pictorial representation of a pmf, called a probability histogram, is similar to histograms discussed in Chapter 1. Above each y with p(y) > 0, construct a rectangle centered at y. The height of each rectangle is proportional to p(y), and the base is the same for all rectangles. When possible values are equally spaced, the base is frequently chosen as the distance between successive y values (though it could be smaller).
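The pmf of Y in Example 3.9 can be confirmed by brute-force enumeration of all 120 equally likely typing orders (an illustrative sketch added here, not part of the text):

```python
from itertools import permutations
from fractions import Fraction

donors = "ABCDE"                  # A and B are the two O+ donors
perms = list(permutations(donors))
counts = {}
for order in perms:
    # Y = position of the first O+ donor (1-based)
    y = min(order.index("A"), order.index("B")) + 1
    counts[y] = counts.get(y, 0) + 1

pmf = {y: Fraction(n, len(perms)) for y, n in sorted(counts.items())}
```

All 5! = 120 orders are equally likely, so counting how many give each Y value reproduces the conditional-probability products computed above.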
Figure 3.4 shows two probability histograms.

Figure 3.4 Probability histograms: (a) Example 3.8; (b) Example 3.9

A Parameter of a Probability Distribution

In Example 3.8, we had p(0) = .8 and p(1) = .2 because 20% of all students did not qualify for in-state tuition. At another university, it may be the case that p(0) = .9 and p(1) = .1. More generally, the pmf of any Bernoulli rv can be expressed in the form p(1) = α and p(0) = 1 − α, where 0 < α < 1. Because the pmf depends on the particular value of α, we often write p(x; α) rather than just p(x):

  p(x; α) = 1 − α if x = 0
  p(x; α) = α     if x = 1        (3.1)
  p(x; α) = 0     otherwise

Then each choice of α in Expression (3.1) yields a different pmf.

DEFINITION  Suppose p(x) depends on a quantity that can be assigned any one of a number of possible values, with each different value determining a different probability distribution. Such a quantity is called a parameter of the distribution. The collection of all probability distributions for different values of the parameter is called a family of probability distributions.

The quantity α in Expression (3.1) is a parameter. Each different number α between 0 and 1 determines a different member of a family of distributions; two such members are

  p(x; .6) = .4 if x = 0,  .6 if x = 1,  0 otherwise

and

  p(x; .5) = .5 if x = 0,  .5 if x = 1,  0 otherwise

Every probability distribution for a Bernoulli rv has the form of Expression (3.1), so it is called the family of Bernoulli distributions.

Example 3.10  Starting at a fixed time, we observe the gender of each newborn child at a certain hospital until a boy (B) is born. Let p = P(B), assume that successive births are independent, and define the rv X by X = number of births observed. Then

  p(1) = P(X = 1) = P(B) = p
  p(2) = P(X = 2) = P(GB) = P(G) · P(B) = (1 − p)p

and

  p(3) = P(X = 3) = P(GGB) = P(G) · P(G) · P(B) = (1 − p)²p

Continuing in this way, a general formula emerges:
p(x) = (1 − p)^(x−1) p   x = 1, 2, 3, ...                    (3.2)
       0                 otherwise

The quantity p in Expression (3.2) represents a number between 0 and 1 and is a parameter of the probability distribution. In the gender example, p = .51 might be appropriate, but if we were looking for the first child with Rh-positive blood, then we might have p = .85.

The Cumulative Distribution Function

For some fixed value x, we often wish to compute the probability that the observed value of X will be at most x. For example, the pmf in Example 3.7 was

p(x) = .500  x = 0
       .167  x = 1
       .333  x = 2
       0     otherwise

The probability that X is at most 1 is then

P(X ≤ 1) = p(0) + p(1) = .500 + .167 = .667

In this example, X ≤ 1.5 iff X ≤ 1, so P(X ≤ 1.5) = P(X ≤ 1) = .667. Similarly, P(X ≤ 0) = P(X = 0) = .5, and P(X ≤ .75) = .5 also. Since 0 is the smallest possible value of X, P(X ≤ −1.7) = 0, P(X ≤ −.0001) = 0, and so on. The largest possible X value is 2, so P(X ≤ 2) = 1, and if x is any number larger than 2, P(X ≤ x) = 1; that is, P(X ≤ 5) = 1, P(X ≤ 10.23) = 1, and so on. Notice that P(X < 1) = .5 ≠ P(X ≤ 1), since the probability of the X value 1 is included in the latter probability but not in the former. When X is a discrete random variable and x is a possible value of X, P(X < x) < P(X ≤ x).

DEFINITION  The cumulative distribution function (cdf) F(x) of a discrete rv X with pmf p(x) is defined for every number x by

F(x) = P(X ≤ x) = Σ_{y: y ≤ x} p(y)

For the geometric pmf of Expression (3.2), the cdf in Example 3.12 is

F(x) = 0                  x < 1
       1 − (1 − p)^[x]    x ≥ 1

where [x] is the largest integer ≤ x (e.g., [2.7] = 2). Thus if p = .51 as in the birth example, then the probability of having to examine at most five births to see the first boy is F(5) = 1 − (.49)^5 = 1 − .0282 = .9718, whereas F(10) ≈ 1.0000. This cdf is graphed in Figure 3.6.

Figure 3.6 A graph of F(x) for Example 3.12

In our examples thus far, the cdf has been derived from the pmf.
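The closed-form cdf above is easy to evaluate directly. A minimal Python sketch, using p = .51 as in the birth example (the function name is ours):

```python
import math

def geom_cdf(x, p):
    """P(X <= x) for X = trial number of the first success; 0 for x < 1."""
    if x < 1:
        return 0.0
    return 1 - (1 - p) ** math.floor(x)   # floor(x) is [x], the largest integer <= x

p = 0.51
print(geom_cdf(5, p))     # about .9718: at most five births to see the first boy
print(geom_cdf(5.7, p))   # same value: F is flat between integers
print(geom_cdf(10, p))    # about .9992, essentially 1
```

The flatness between integers is what gives the cdf of a discrete rv its staircase shape in Figure 3.6.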
This process can be reversed to obtain the pmf from the cdf whenever the latter function is available. Suppose, for example, that X represents the number of defective components in a shipment consisting of six components, so that possible X values are 0, 1, ..., 6. Then

p(3) = P(X = 3)
     = [p(0) + p(1) + p(2) + p(3)] − [p(0) + p(1) + p(2)]
     = P(X ≤ 3) − P(X ≤ 2)
     = F(3) − F(2)

More generally, the probability that X falls in a specified interval is easily obtained from the cdf. For any two numbers a and b with a ≤ b,

P(a ≤ X ≤ b) = F(b) − F(a−)

where a− represents the largest possible X value that is strictly less than a. In particular, if the only possible values are integers and if a and b are integers, then P(a ≤ X ≤ b) = F(b) − F(a − 1), and taking a = b yields P(X = a) = F(a) − F(a − 1).

3.3 Expected Values of Discrete Random Variables

DEFINITION  Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X) or μ_X, is

E(X) = μ_X = Σ_{x ∈ D} x · p(x)

This expected value will exist provided that Σ_{x ∈ D} |x| · p(x) < ∞.

When it is clear to which X the expected value refers, μ rather than μ_X is often used. For the pmf in (3.6),

μ = 1 · p(1) + 2 · p(2) + ··· + 7 · p(7)
  = (1)(.01) + (2)(.03) + ··· + (7)(.02)
  = .01 + .06 + .39 + 1.00 + 1.95 + 1.02 + .14 = 4.57

If we think of the population as consisting of the X values 1, 2, ..., 7, then μ = 4.57 is the population mean. In the sequel, we will often refer to μ as the population mean rather than the mean of X in the population.

In Example 3.15, the expected value μ was 4.57, which is not a possible value of X. The word expected should be interpreted with caution because one would not expect to see an X value of 4.57 when a single student is selected.

Example 3.16: Just after birth, each newborn child is rated on a scale called the Apgar scale. The possible ratings are 0, 1, ..., 10, with the child's rating determined by color, muscle tone, respiratory effort, heartbeat, and reflex irritability (the best possible score is 10).
Let X be the Apgar score of a randomly selected child born at a certain hospital during the next year, and suppose that the pmf of X is

x     0     1     2     3     4    5    6    7    8    9    10
p(x)  .002  .001  .002  .005  .02  .04  .18  .37  .25  .12  .01

Then the mean value of X is

E(X) = μ = (0)(.002) + (1)(.001) + ··· + (8)(.25) + (9)(.12) + (10)(.01) = 7.15

Again, μ is not a possible value of the variable X. Also, because the variable refers to a future child, there is no concrete existing population to which μ refers. Instead, we think of the pmf as a model for a conceptual population consisting of the values 0, 1, 2, ..., 10. The mean value of this conceptual population is then μ = 7.15.

Example 3.17: Let X = 1 if a randomly selected component needs warranty service and X = 0 otherwise. Then X is a Bernoulli rv with pmf

p(x) = 1 − p  x = 0
       p      x = 1
       0      x ≠ 0, 1

from which E(X) = 0 · p(0) + 1 · p(1) = 0(1 − p) + 1(p) = p. That is, the expected value of X is just the probability that X takes on the value 1. If we conceptualize a population consisting of 0's in proportion 1 − p and 1's in proportion p, then the population average is μ = p.

Example 3.18: From Example 3.10, the general form for the pmf of X = the number of children born up to and including the first boy is

p(x) = (1 − p)^(x−1) p   x = 1, 2, 3, ...
       0                 otherwise

From the definition,

E(X) = Σ_D x · p(x) = Σ_{x=1}^∞ x p(1 − p)^(x−1) = p Σ_{x=1}^∞ [− d/dp (1 − p)^x]        (3.9)

If we interchange the order of taking the derivative and the summation, the sum is that of a geometric series. After the sum is computed, the derivative is taken, and the final result is E(X) = 1/p. If p is near 1, we expect to see a boy very soon, whereas if p is near 0, we expect many births before the first boy. For p = .5, E(X) = 2.

There is another frequently used interpretation of μ. Consider the pmf
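The result E(X) = 1/p in Example 3.18 can be checked numerically by summing enough terms of Σ x p(1 − p)^(x−1); the series converges fast, so a modest truncation (our choice) already matches 1/p to many digits:

```python
def geom_mean_partial(p, n_terms=500):
    # partial sum of E(X) = sum over x of x * p * (1 - p)**(x - 1)
    return sum(x * p * (1 - p) ** (x - 1) for x in range(1, n_terms + 1))

for p in (0.85, 0.51, 0.5):
    print(p, geom_mean_partial(p), 1 / p)   # partial sum is essentially 1/p
```

For p = .5 the partial sum is essentially 2, matching the coin-tossing interpretation that follows in the text.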
p(x) = (.5)(.5)^(x−1)   x = 1, 2, 3, ...
       0                otherwise

This is the pmf of X = the number of tosses of a fair coin necessary to obtain the first H (a special case of Example 3.18). Suppose we observe a value x from this pmf (toss a coin until an H appears), then observe independently another value (keep tossing), then another, and so on. If after observing a very large number of x values we average them, the resulting sample average will be very near to μ = 2. That is, μ can be interpreted as the long-run average observed value of X when the experiment is performed repeatedly.

Example 3.19: Let X, the number of interviews a student has prior to getting a job, have pmf

p(x) = k/x²   x = 1, 2, 3, ...
       0      otherwise

where k is chosen so that Σ_{x=1}^∞ (k/x²) = 1. (Because Σ_{x=1}^∞ (1/x²) = π²/6, the value of k is 6/π².) The expected value of X is

μ = E(X) = Σ_{x=1}^∞ x · (k/x²) = k Σ_{x=1}^∞ (1/x)        (3.10)

The sum on the right of Equation (3.10) is the famous harmonic series of mathematics and can be shown to equal ∞. E(X) is not finite here because p(x) does not decrease sufficiently fast as x increases; statisticians say that the probability distribution of X has "a heavy tail." If a sequence of X values is chosen using this distribution, the sample average will not settle down to some finite number but will tend to grow without bound. Statisticians use the phrase "heavy tails" in connection with any distribution having a large amount of probability far from μ (so heavy tails do not require μ = ∞). Such heavy tails make it difficult to make inferences about μ.

The Expected Value of a Function

Often we will be interested in the expected value of some function h(X) rather than X itself. Suppose a bookstore purchases ten copies of a book at $6.00 each to sell at $12.00 with the understanding that at the end of a 3-month period any unsold copies can be redeemed for $2.00. If X represents the number of copies sold, then

net revenue = h(X) = 12X + 2(10 − X) − 60 = 10X − 40
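The divergence in Example 3.19 can be seen numerically: the partial sums of k Σ (1/x) keep growing with the truncation point, roughly like k · ln(n). A short sketch (the cutoffs are arbitrary choices of ours):

```python
import math

k = 6 / math.pi ** 2          # normalizing constant: sum of k/x**2 equals 1

def partial_mean(n):
    # sum_{x=1}^{n} x * (k / x**2) = k * (1 + 1/2 + ... + 1/n)
    return k * sum(1 / x for x in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, partial_mean(n))  # grows without bound: E(X) is not finite
```

The growth like k · ln(n) is exactly why a sample average from this distribution never settles down.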
An easy way of computing the expected value of h(X) is suggested by the following example.

The cost of a certain diagnostic test on a car depends on the number of cylinders (4, 6, or 8) in the car's engine. Let X denote the number of cylinders on a randomly chosen vehicle about to undergo this test, and suppose the cost function is h(X) = 20 + 3X + .5X². Since X is a random variable, so is h(X); denote this latter rv by Y. The pmf's of X and Y are as follows:

x     4    6    8          y     40   56   76
p(x)  .5   .3   .2         p(y)  .5   .3   .2

With D* denoting the set of possible values of Y,

E(Y) = E[h(X)] = Σ_{D*} y · p(y)
     = (40)(.5) + (56)(.3) + (76)(.2)
     = h(4) · (.5) + h(6) · (.3) + h(8) · (.2)
     = Σ_D h(x) · p(x)        (3.11)

According to Equation (3.11), it was not necessary to determine the pmf of Y to obtain E(Y); instead, the desired expected value is a weighted average of the possible h(x) (rather than x) values.

PROPOSITION  If the rv X has a set of possible values D and pmf p(x), then the expected value of any function h(X), denoted by E[h(X)] or μ_h(X), is computed by

E[h(X)] = Σ_D h(x) · p(x)

assuming that Σ_D |h(x)| · p(x) is finite.

According to this proposition, E[h(X)] is computed in the same way that E(X) itself is, except that h(x) is substituted in place of x.

Example 3.22: A computer store has purchased three computers at $500 apiece. It will sell them for $1000 apiece. The manufacturer has agreed to repurchase any computers still unsold after a specified period at $200 apiece. Let X denote the number of computers sold, and suppose that p(0) = .1, p(1) = .2, p(2) = .3, and p(3) = .4. With h(X) denoting the profit associated with selling X units, the given information implies that

h(X) = revenue − cost = 1000X + 200(3 − X) − 1500 = 800X − 900
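Equation (3.11) says the pmf of Y is never actually needed. A quick check with the diagnostic-cost example, computing E(Y) both ways (the variable names are ours):

```python
# pmf of X = number of cylinders, and the cost function from the example
pmf_x = {4: 0.5, 6: 0.3, 8: 0.2}

def h(x):
    return 20 + 3 * x + 0.5 * x ** 2

# route 1: weighted average of the h(x) values (Equation 3.11)
e_h = sum(h(x) * p for x, p in pmf_x.items())

# route 2: build the pmf of Y = h(X) first, then average over y values
pmf_y = {}
for x, p in pmf_x.items():
    pmf_y[h(x)] = pmf_y.get(h(x), 0) + p
e_y = sum(y * p for y, p in pmf_y.items())

print(e_h, e_y)   # both routes give the same expected cost
```

Route 2 would matter only if h were not one-to-one, in which case probabilities of x values sharing the same h(x) get pooled; the `get(..., 0)` accumulation handles that case too.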
The expected profit is then

E[h(X)] = h(0) · p(0) + h(1) · p(1) + h(2) · p(2) + h(3) · p(3)
        = (−900)(.1) + (−100)(.2) + (700)(.3) + (1500)(.4)
        = $700

The h(X) function of interest is quite frequently a linear function aX + b. In this case, E[h(X)] is easily computed from E(X).

PROPOSITION

E(aX + b) = a · E(X) + b        (3.12)

(Or, using alternative notation, μ_{aX+b} = a · μ_X + b.)

To paraphrase, the expected value of a linear function equals the linear function evaluated at the expected value E(X). Since h(X) in Example 3.22 is linear and E(X) = 2, E[h(X)] = 800(2) − 900 = $700, as before.

Proof

E(aX + b) = Σ_D (ax + b) · p(x) = a Σ_D x · p(x) + b Σ_D p(x) = a · E(X) + b

Two special cases of the proposition yield two important rules of expected value.

1. For any constant a, E(aX) = a · E(X) [take b = 0 in (3.12)].
2. For any constant b, E(X + b) = E(X) + b [take a = 1 in (3.12)].

Multiplication of X by a constant a changes the unit of measurement (from dollars to cents, where a = 100; inches to cm, where a = 2.54; etc.). Rule 1 says that the expected value in the new units equals the expected value in the old units multiplied by the conversion factor a. Similarly, if the constant b is added to each possible value of X, then the expected value will be shifted by that same constant amount.

The Variance of X

The expected value of X describes where the probability distribution is centered. Using the physical analogy of placing point mass p(x) at the value x on a one-dimensional axis, if the axis were then supported by a fulcrum placed at μ, there would be no tendency for the axis to tilt. This is illustrated for two different distributions in Figure 3.7.
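The linearity rule (3.12) can be checked against the direct computation in Example 3.22, a minimal sketch:

```python
# pmf of X = number of computers sold (Example 3.22)
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

e_x = sum(x * p for x, p in pmf.items())                        # E(X) = 2
profit_direct = sum((800 * x - 900) * p for x, p in pmf.items())
profit_linear = 800 * e_x - 900                                 # E(aX + b) = a*E(X) + b

print(e_x, profit_direct, profit_linear)   # about 2, 700, 700
```

Both routes give the $700 expected profit; for a linear h the proposition saves recomputing the whole weighted sum.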
Figure 3.7 Two different probability distributions with μ = 4

Although both distributions pictured in Figure 3.7 have the same center μ, the distribution of Figure 3.7b has greater spread or variability or dispersion than does that of Figure 3.7a. We will use the variance of X to assess the amount of variability in (the distribution of) X, just as s² was used in Chapter 1 to measure variability in a sample.

DEFINITION  Let X have pmf p(x) and expected value μ. Then the variance of X, denoted by V(X) or σ²_X, or just σ², is

V(X) = σ² = Σ_D (x − μ)² · p(x) = E[(X − μ)²]

The standard deviation (SD) of X is

σ_X = √(σ²_X)

The quantity h(X) = (X − μ)² is the squared deviation of X from its mean, and σ² is the expected squared deviation. If most of the probability distribution is close to μ, then σ² will typically be relatively small. However, if there are x values far from μ that have large p(x), then σ² will be quite large.

Example 3.23: Consider again the distribution of the Apgar score X of a randomly selected newborn described in Example 3.16. The mean value of X was calculated as μ = 7.15, so

V(X) = σ² = Σ_{x=0}^{10} (x − 7.15)² · p(x)
     = (0 − 7.15)²(.002) + ··· + (10 − 7.15)²(.01) = 1.5815

The standard deviation of X is σ = √1.5815 = 1.26.

When the pmf p(x) specifies a mathematical model for the distribution of population values, both σ² and σ measure the spread of values in the population; σ² is the population variance, and σ is the population standard deviation.

A Shortcut Formula for σ²

The number of arithmetic operations necessary to compute σ² can be reduced by using an alternative computing formula.

PROPOSITION

V(X) = σ² = [Σ_D x² · p(x)] − μ² = E(X²) − [E(X)]²

In using this formula, E(X²) is computed first without any subtraction; then E(X) is computed, squared, and subtracted (once) from E(X²). Referring back to the Apgar score scenario of Examples 3.16 and 3.23,
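The Apgar variance can be reproduced straight from the definition, using the pmf of Example 3.16 (a minimal sketch):

```python
import math

# Apgar pmf from Example 3.16
pmf = {0: .002, 1: .001, 2: .002, 3: .005, 4: .02, 5: .04,
       6: .18, 7: .37, 8: .25, 9: .12, 10: .01}

mu = sum(x * p for x, p in pmf.items())                 # mean: 7.15
var = sum((x - mu) ** 2 * p for x, p in pmf.items())    # variance: 1.5815
sd = math.sqrt(var)                                     # SD: about 1.26

print(mu, var, sd)
```

The same numbers reappear below via the shortcut formula, which avoids the eleven subtractions of μ.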
E(X²) = Σ_{x=0}^{10} x² · p(x) = (0²)(.002) + (1²)(.001) + ··· + (10²)(.01) = 52.704

Thus σ² = 52.704 − (7.15)² = 1.5815 as before.

Proof of the Shortcut Formula  Expand (x − μ)² in the definition of σ² to obtain x² − 2μx + μ², and then carry Σ through to each of the three terms:

σ² = Σ_D x² · p(x) − 2μ · Σ_D x · p(x) + μ² · Σ_D p(x)
   = E(X²) − 2μ · μ + μ² = E(X²) − μ²

Rules of Variance

The variance of h(X) is the expected value of the squared difference between h(X) and its expected value:

V[h(X)] = σ²_h(X) = Σ_D {h(x) − E[h(X)]}² · p(x)        (3.13)

When h(X) is a linear function, V[h(X)] is easily related to V(X) (Exercise 40).

PROPOSITION

V(aX + b) = σ²_{aX+b} = a² · σ²_X   and   σ_{aX+b} = |a| · σ_X

This result says that the addition of the constant b does not affect the variance, which is intuitive, because the addition of b changes the location (mean value) but not the spread of values. In particular,

σ²_{aX} = a² · σ²_X        σ_{aX} = |a| · σ_X        (3.14)

The reason for the absolute value in σ_{aX} is that a may be negative, whereas a standard deviation cannot be negative; a² results when a is brought outside the term being squared in Equation (3.13).

In the computer sales scenario of Example 3.22, E(X) = 2 and

E(X²) = (0²)(.1) + (1²)(.2) + (2²)(.3) + (3²)(.4) = 5

so V(X) = 5 − (2)² = 1. The profit function h(X) = 800X − 900 then has variance (800)² · V(X) = (640,000)(1) = 640,000 and standard deviation 800.

Exercises  Section 3.3 (28–43)

28. The pmf for X = the number of major defects on a randomly selected appliance of a certain type is

x     0    1    2    3    4
p(x)  .08  .15  .45  .27  .05

Compute the following:
a. E(X)
b. V(X) directly from the definition
c. The standard deviation of X
d. V(X) using the shortcut formula

29. An individual who has automobile insurance from a company is randomly selected. Let Y be the number of moving violations for which the individual was cited during the last 3 years. The pmf of Y is

y     0    1    2    3
p(y)  .60  .25  .10  .05

a. Compute E(Y).
b. Suppose an individual with Y violations incurs a surcharge of $100Y². Calculate the expected amount of the surcharge.

30. Refer to Exercise 12 and calculate V(Y) and σ_Y. Then determine the probability that Y is within 1 standard deviation of its mean value.

31. An appliance dealer sells three different models of upright freezers having 13.5, 15.9, and 19.1 cubic feet of storage space, respectively. Let X = the amount of storage space purchased by the next customer to buy a freezer. Suppose that X has pmf

x     13.5  15.9  19.1
p(x)  .2    .5    .3

a. Compute E(X), E(X²), and V(X).
b. If the price of a freezer having capacity X cubic feet is 25X − 8.5, what is the expected price paid by the next customer to buy a freezer?
c. What is the variance of the price 25X − 8.5 paid by the next customer?
d. Suppose that although the rated capacity of a freezer is X, the actual capacity is h(X) = X − .01X². What is the expected actual capacity of the freezer purchased by the next customer?

32. Let X be a Bernoulli rv with pmf as in Example 3.17.
a. Compute E(X²).
b. Show that V(X) = p(1 − p).
c. Compute E(X⁷⁹).

33. Suppose that the number of plants of a particular type found in a rectangular region (called a quadrat by ecologists) in a certain geographic area is an rv X with pmf

p(x) = c/x³   x = 1, 2, 3, ...
       0      otherwise

Is E(X) finite? Justify your answer (this is another distribution that statisticians would call heavy-tailed).

34. A small market orders copies of a certain magazine for its magazine rack each week. Let X = demand for the magazine, with pmf

x     1     2     3     4     5     6
p(x)  1/15  2/15  3/15  4/15  3/15  2/15

Suppose the store owner actually pays $2.00 for each copy of the magazine and the price to customers is $4.00. If magazines left at the end of the week have no salvage value, is it better to order three or four copies of the magazine? [Hint: For both three and four copies ordered, express net revenue as a function of demand X, and then compute the expected revenue.]

35. Let X be the damage incurred (in $) in a certain type of accident during a given year. Possible X values are 0, 1000, 5000, and 10000, with probabilities .8, .1, .08, and .02, respectively. A particular company offers a $500 deductible policy. If the company wishes its expected profit to be $100, what premium amount should it charge?

36. The n candidates for a job have been ranked 1, 2, 3, ..., n. Let X = the rank of a randomly selected candidate, so that X has pmf

p(x) = 1/n   x = 1, 2, 3, ..., n
       0     otherwise

(this is called the discrete uniform distribution). Compute E(X) and V(X) using the shortcut formula. [Hint: The sum of the first n positive integers is n(n + 1)/2, whereas the sum of their squares is n(n + 1)(2n + 1)/6.]

37. Let X = the outcome when a fair die is rolled once. If before the die is rolled you are offered either (1/3.5) dollars or h(X) = 1/X dollars, would you accept the guaranteed amount or would you gamble? [Note: It is not generally true that 1/E(X) = E(1/X).]

38. A chemical supply company currently has in stock 100 lb of a chemical, which it sells to customers in 5-lb containers. Let X = the number of containers ordered by a randomly chosen customer, and suppose that X has pmf

x     1   2   3   4
p(x)  .2  .4  .3  .1

Compute E(X) and V(X). Then compute the expected number of pounds left after the next customer's order is shipped and the variance of the number of pounds left. [Hint: The number of pounds left is a linear function of X.]

39. a. Draw a line graph of the pmf of X in Exercise 34. Then determine the pmf of −X and draw its line graph. From these two pictures, what can you say about V(X) and V(−X)?
b. Use the proposition involving V(aX + b) to establish a general relationship between V(X) and V(−X).

40. Use the definition in Expression (3.13) to prove that V(aX + b) = a² · σ²_X. [Hint: With h(X) = aX + b, E[h(X)] = aμ + b, where μ = E(X).]

41. Suppose E(X) = 5 and E[X(X − 1)] = 27.5. What is
a. E(X²)? [Hint: E[X(X − 1)] = E[X² − X] = E(X²) − E(X).]
b. V(X)?
c. The general relationship among the quantities E(X), E[X(X − 1)], and V(X)?

42. Write a general rule for E(X − c) where c is a constant. What happens when you let c = μ, the expected value of X?

43. A result called Chebyshev's inequality states that for any probability distribution of an rv X and any number k that is at least 1, P(|X − μ| ≥ kσ) ≤ 1/k². In words, the probability that the value of X lies at least k standard deviations from its mean is at most 1/k².
a. What is the value of the upper bound for k = 2? k = 3? k = 4? k = 5? k = 10?
b. Compute μ and σ for the distribution of Exercise 13. Then evaluate P(|X − μ| ≥ kσ) for the values of k given in part (a). What does this suggest about the upper bound relative to the corresponding probability?
c. Let X have three possible values, −1, 0, and 1, with probabilities 1/18, 8/9, and 1/18, respectively. What is P(|X − μ| ≥ 3σ), and how does it compare to the corresponding bound?
d. Give a distribution for which P(|X − μ| ≥ 5σ) = .04.

3.4 Moments and Moment Generating Functions

Sometimes the expected values of integer powers of X and X − μ are called moments, terminology borrowed from physics. Expected values of powers of X are called moments about 0, and expected values of powers of X − μ are called moments about the mean. For example, E(X²) is the second moment about 0, and E[(X − μ)³] is the third moment about the mean. Moments about 0 are sometimes simply called moments.

Suppose the pmf of X, the number of points earned on a short quiz, is given by

x     0   1   2   3
p(x)  .1  .2  .3  .4

The first moment about 0 is the mean:

μ = E(X) = Σ_{x∈D} x p(x) = 0(.1) + 1(.2) + 2(.3) + 3(.4) = 2

The second moment about the mean is the variance:

σ² = E[(X − 2)²] = Σ_{x∈D} (x − 2)² p(x)
   = (0 − 2)²(.1) + (1 − 2)²(.2) + (2 − 2)²(.3) + (3 − 2)²(.4) = 1

The third moment about the mean is also important.
E[(X − μ)³] = Σ_{x∈D} (x − μ)³ p(x)
            = (0 − 2)³(.1) + (1 − 2)³(.2) + (2 − 2)³(.3) + (3 − 2)³(.4) = −.6

We would like to use this as a measure of lack of symmetry, but E[(X − μ)³] depends on the scale of measurement. That is, if X is measured in feet, the value is different from what would be obtained if X were measured in inches. Scale independence results from dividing the third moment about the mean by σ³:

E[(X − μ)³]/σ³ = E{[(X − μ)/σ]³}

This is our measure of departure from symmetry, called the skewness. For a symmetric distribution the third moment about the mean would be 0, so the skewness in that case is 0. However, in the present example the skewness is E[(X − μ)³]/σ³ = −.6/1³ = −.6. When the skewness is negative, as it is here, we say that the distribution is negatively skewed or that it is skewed to the left. Generally speaking, it means that the distribution stretches farther to the left of the mean than to the right. If the skewness were positive, then we would say that the distribution is positively skewed or that it is skewed to the right. For example, reverse the order of the probabilities in the p(x) table above, so the probabilities of the values 0, 1, 2, and 3 are now .4, .3, .2, and .1, respectively (a much harder quiz). This changes the sign but not the magnitude of the skewness, so it becomes .6 and the distribution is skewed right (see Exercise 57).

Moments are not always easy to obtain, as shown by the calculation of E(X) in Example 3.18. We now introduce the moment generating function, which will help in the calculation of moments and the understanding of statistical distributions. We have already discussed the expected value of a function, E[h(X)]. In particular, let e denote the base of the natural logarithms, with approximate value 2.71828. Then we may wish to calculate expected values such as E(e^{2X}) = Σ e^{2x} p(x) or E(e^{−7X}).
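The skewness computation for the quiz pmf, and the sign flip under reversal of the probabilities, can be verified with a few lines of Python (the helper name is ours):

```python
def skewness(pmf):
    # third central moment divided by sigma cubed
    mu = sum(x * p for x, p in pmf.items())
    var = sum((x - mu) ** 2 * p for x, p in pmf.items())
    third = sum((x - mu) ** 3 * p for x, p in pmf.items())
    return third / var ** 1.5

quiz = {0: .1, 1: .2, 2: .3, 3: .4}
harder = {0: .4, 1: .3, 2: .2, 3: .1}   # probabilities reversed

print(skewness(quiz), skewness(harder))   # -.6 and +.6
```

Reversing the probabilities mirrors the distribution about the midpoint of its support, which flips the sign of every cubed deviation but leaves the variance unchanged, exactly the fact Exercise 57 asks you to prove.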
That is, for any particular number t, the expected value E(e^{tX}) is meaningful. When we consider this expected value as a function of t, the result is called the moment generating function.

DEFINITION  The moment generating function (mgf) of a discrete random variable X is defined to be

M_X(t) = E(e^{tX}) = Σ_{x∈D} e^{tx} p(x)

where D is the set of possible X values. We will say that the moment generating function exists if M_X(t) is defined for an interval of numbers that includes zero as well as positive and negative values of t (an interval including 0 in its interior).

If the mgf exists, it will be defined on a symmetric interval of the form (−t₀, t₀), where t₀ > 0, because t₀ can be chosen small enough so that the symmetric interval is contained in the interval of the definition. When t = 0, for any random variable X,

M_X(0) = E(e^{0·X}) = Σ_{x∈D} e^{0·x} p(x) = Σ_{x∈D} 1 · p(x) = 1

That is, M_X(0) is the sum of all the probabilities, so it must always be 1. However, in order for the mgf to be useful in generating moments, it will need to be defined for an interval of values of t including 0 in its interior, and that is why we do not bother with the mgf otherwise. As you might guess, the moment generating function fails to exist in cases when moments themselves fail to exist, as in Example 3.19. See Example 3.30 below.

The simplest example of an mgf is for a Bernoulli distribution, where only the X values 0 and 1 receive positive probability.

Example 3.27: Let X be a Bernoulli random variable with p(0) = 1/3 and p(1) = 2/3. Then

M_X(t) = E(e^{tX}) = Σ_{x∈D} e^{tx} p(x) = e^{0·t}(1/3) + e^{1·t}(2/3) = 1/3 + (2/3)e^t

It should be clear that a Bernoulli random variable will always have an mgf of the form p(0) + p(1)e^t. This mgf exists because it is defined for all t.

The idea of the mgf is to have an alternate view of the distribution based on an infinite number of values of t.
That is, the mgf for X is a function of t, and we get a different function for each different distribution. When the function is of the form of one constant plus another constant times e^t, we know that it corresponds to a Bernoulli random variable, and the constants tell us the probabilities. This is an example of the following "uniqueness property."

PROPOSITION  If the mgf exists and is the same for two distributions, then the two distributions are the same. That is, the moment generating function uniquely specifies the probability distribution; there is a one-to-one correspondence between distributions and mgf's.

Example 3.28: Let X be the number of claims in a year by someone holding an automobile insurance policy with a company. The mgf for X is M_X(t) = .7 + .2e^t + .1e^{2t}. Then we can say that the pmf of X is given by

x     0   1   2
p(x)  .7  .2  .1

Why? If we compute E(e^{tX}) based on this table, we get the correct mgf. Because X and the random variable described by the table have the same mgf, the uniqueness property requires them to have the same distribution. Therefore, X has the given pmf.

Example 3.29: This is a continuation of Example 3.18, except that here we do not consider the number of births needed to produce a male child. Instead we are looking for a person whose blood type is Rh+. Set p = .85, which is the approximate probability that a random person has blood type Rh+. If X is the number of people we need to check until we find someone who is Rh+, then

p(x) = p(1 − p)^(x−1) = .85(.15)^(x−1)   x = 1, 2, 3, ...

Determination of the moment generating function here requires using the formula for the sum of a geometric series:

a + ar + ar² + ··· = a/(1 − r)

where a is the first term, r is the ratio of successive terms, and |r| < 1. The moment generating function is

M_X(t) = E(e^{tX}) = Σ_{x=1}^∞ e^{tx} (.85)(.15)^(x−1) = .85e^t Σ_{x=1}^∞ [e^t(.15)]^(x−1)
       = .85e^t / (1 − .15e^t)

The condition on r requires |.15e^t| < 1.
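The geometric-series step in Example 3.29 can be sanity-checked by comparing the closed form against a truncated version of the defining sum. A sketch (the t values and truncation point are arbitrary choices within t < −ln .15):

```python
import math

p = 0.85          # P(Rh+) from Example 3.29

def mgf_closed(t):
    return p * math.exp(t) / (1 - (1 - p) * math.exp(t))

def mgf_series(t, n_terms=200):
    # truncated version of E(e^{tX}) = sum of e^{tx} * p * (1-p)^{x-1}
    return sum(math.exp(t * x) * p * (1 - p) ** (x - 1)
               for x in range(1, n_terms + 1))

for t in (-1.0, 0.0, 1.5):                  # all below -ln(.15), about 1.90
    print(t, mgf_closed(t), mgf_series(t))  # the two columns agree

print(mgf_closed(0.0))   # 1.0: an mgf always equals 1 at t = 0
```

The check at t = 0 is exactly the one the text recommends after any mgf computation: the value must be the sum of all the probabilities, namely 1.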
Dividing by .15 and taking logs, this gives t < −ln(.15) ≈ 1.90. The result is an interval of values that includes 0 in its interior, so the mgf exists. What about the value of the mgf at 0? Recall that M_X(0) = 1 always, because the value at 0 amounts to summing the probabilities. As a check, after computing an mgf we should make sure that this condition is satisfied. Here M_X(0) = .85/(1 − .15) = 1.

Example 3.30: Reconsider Example 3.19, where p(x) = k/x², x = 1, 2, 3, ... . Recall that E(X) does not exist, so there might be problems with the mgf, too:

M_X(t) = E(e^{tX}) = Σ_{x=1}^∞ e^{tx} (k/x²)

With the help of tests for convergence such as the ratio test, we find that the series converges if and only if e^t ≤ 1, which means that t ≤ 0. Because zero is on the boundary of this interval, not the interior of the interval (the interval must include both positive and negative values), this mgf does not exist. Of course, it could not be useful for finding moments, because X does not have even a first moment (mean).

How does the mgf produce moments? We will need various derivatives of M_X(t). For any positive integer r, let M_X^(r)(t) denote the rth derivative of M_X(t). By computing this and then setting t = 0, we get the rth moment about 0.

THEOREM  If the mgf exists,

E(X^r) = M_X^(r)(0)

Proof  We show that the theorem is true for r = 1 and r = 2. A proof by mathematical induction can be used for general r. Differentiate:

d/dt M_X(t) = d/dt Σ_{x∈D} e^{tx} p(x) = Σ_{x∈D} [d/dt e^{tx}] p(x) = Σ_{x∈D} x e^{tx} p(x)

where we have interchanged the order of summation and differentiation. This is justified inside the interval of convergence, which includes 0 in its interior.
Next we set t = 0 and get the first moment:

M_X'(0) = M_X^(1)(0) = Σ_{x∈D} x p(x) = E(X)

Differentiate again:

d²/dt² M_X(t) = d/dt Σ_{x∈D} x e^{tx} p(x) = Σ_{x∈D} x² e^{tx} p(x)

Set t = 0 to get the second moment:

M_X''(0) = M_X^(2)(0) = Σ_{x∈D} x² p(x) = E(X²)

This is a continuation of Example 3.28, where X represents the number of claims in a year, with pmf and mgf

x     0   1   2
p(x)  .7  .2  .1          M_X(t) = .7 + .2e^t + .1e^{2t}

First, find the derivatives:

M_X'(t) = .2e^t + .1(2)e^{2t}
M_X''(t) = .2e^t + .1(2)(2)e^{2t}

Setting t to 0 in the first derivative gives the first moment:

E(X) = M_X'(0) = .2e⁰ + .1(2)e⁰ = .2 + .1(2) = .4

Setting t to 0 in the second derivative gives the second moment:

E(X²) = M_X''(0) = .2e⁰ + .1(2)(2)e⁰ = .2 + .1(2)(2) = .6

To get the variance, recall the shortcut formula from the previous section:

V(X) = σ² = E(X²) − [E(X)]² = .6 − .4² = .6 − .16 = .44

Taking the square root gives σ = .66 approximately. Do a mean of .4 and a standard deviation of .66 seem about right for a distribution concentrated mainly on 0 and 1?

Example 3.32 (Example 3.29 continued): Recall that p = .85 is the probability of a person having Rh+ blood and we keep checking people until we find one with this blood type. If X is the number of people we need to check, then p(x) = .85(.15)^(x−1), x = 1, 2, 3, ..., and the mgf is

M_X(t) = E(e^{tX}) = .85e^t / (1 − .15e^t)

Differentiating with the help of the quotient rule,

M_X'(t) = .85e^t / (1 − .15e^t)²

Setting t = 0,

E(X) = M_X'(0) = .85/.85² = 1/.85

Recalling that .85 corresponds to p, we see that this agrees with Example 3.18. To get the second moment, differentiate again:

M_X''(t) = .85e^t(1 + .15e^t) / (1 − .15e^t)³

Setting t = 0,

E(X²) = M_X''(0) = .85(1.15)/.85³ = 1.15/.85²

Now use the shortcut formula for the variance from the previous section:

V(X) = σ² = E(X²) − [E(X)]² = 1.15/.85² − 1/.85² = .15/.85² = .2076
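The theorem can be illustrated numerically for the claims mgf: differentiate M_X(t) = .7 + .2e^t + .1e^{2t} exactly as in the example, and cross-check the derivatives at 0 by finite differences (the step size h is our choice):

```python
import math

def M(t):
    # mgf of the claims example
    return 0.7 + 0.2 * math.exp(t) + 0.1 * math.exp(2 * t)

# exact derivatives at 0, as computed in the example
m1 = 0.2 + 0.1 * 2             # M'(0)  = E(X)   = .4
m2 = 0.2 + 0.1 * 2 * 2         # M''(0) = E(X^2) = .6
var = m2 - m1 ** 2             # shortcut formula: .44

# finite-difference check of M'(0) and M''(0)
h = 1e-5
d1 = (M(h) - M(-h)) / (2 * h)
d2 = (M(h) - 2 * M(0) + M(-h)) / h ** 2

print(m1, m2, var, d1, d2)
```

The central differences reproduce .4 and .6 to several digits, a useful sanity check when an mgf is too messy to differentiate comfortably by hand.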
There is an alternative way of doing the differentiation that can sometimes make the effort easier. Define R_X(t) = ln[M_X(t)], where ln(u) is the natural log of u. In Exercise 54 you are asked to verify that if the moment generating function exists,

μ = E(X) = R_X'(0)
σ² = V(X) = R_X''(0)

Here we apply R_X(t) to Example 3.32. Using ln(e^t) = t,

R_X(t) = ln[M_X(t)] = ln[.85e^t/(1 − .15e^t)] = ln(.85) + t − ln(1 − .15e^t)

The first derivative is

R_X'(t) = 1/(1 − .15e^t)

and the second derivative is

R_X''(t) = .15e^t/(1 − .15e^t)²

Setting t to 0 gives

μ = E(X) = R_X'(0) = 1/.85
σ² = V(X) = R_X''(0) = .15/.85² = .2076

These are in agreement with the results of Example 3.32.

As mentioned at the end of the previous section, it is common to transform X using a linear function Y = aX + b. What happens to the mgf when we do this?

PROPOSITION  Let X have the mgf M_X(t) and let Y = aX + b. Then M_Y(t) = e^{bt} M_X(at).

Example: Let X be a Bernoulli random variable with p(0) = 20/38 and p(1) = 18/38. Think of X as the number of wins, 0 or 1, in a single play of roulette. If you play roulette at an American casino and you bet red, then your chances of winning are 18/38, because 18 of the 38 possible outcomes are red. Then from Example 3.27, M_X(t) = 20/38 + (18/38)e^t. Let your bet be $5 and let Y be your winnings. If X = 0 then Y = −5, and if X = 1 then Y = +5. The linear equation Y = 10X − 5 gives the appropriate relationship. The equation is of the form Y = aX + b with a = 10 and b = −5, so by the proposition

M_Y(t) = e^{−5t} M_X(10t) = e^{−5t}[20/38 + (18/38)e^{10t}] = (20/38)e^{−5t} + (18/38)e^{5t}

From this we can read off the probabilities for Y: p(−5) = 20/38 and p(5) = 18/38.

Exercises | Section 3.4 (44–57)

44. For a new car the number of defects X has the distribution given by the accompanying table. Find M_X(t) and use it to find E(X) and V(X).

x      0    1    2    3    4    5    6
p(x)  .04  .20  .34  .20  .15  .04  .03

45. In flipping a fair coin let X be the number of tosses to get the first head. Then p(x) = .5^x for x = 1, 2, 3, .... Find M_X(t) and use it to get E(X) and V(X).

46. Given M_X(t) = .2 + .3e^t + .5e^{3t}, find p(x), E(X), and V(X).

47. Using a calculation similar to the one in Example 3.29, show that, if X has the distribution of Example 3.18, then its mgf is

M_X(t) = pe^t / [1 − (1 − p)e^t]

If Y has mgf M_Y(t) = .75e^t/(1 − .25e^t), determine the probability mass function p(y) with the help of the uniqueness property.

48. Let X have the moment generating function of Example 3.29 and let Y = X − 1. Recall that X is the number of people who need to be checked to get someone who is Rh+, so Y is the number of people checked before the first Rh+ person is found. Find M_Y(t) using the second proposition.

49. If M_X(t) = e^(…), find E(X) and V(X) by differentiating
a. M_X(t)
b. R_X(t)

50. Prove the result in the second proposition, i.e., M_{aX+b}(t) = e^{bt} M_X(at).

51. Let M_X(t) = … and let Y = (X − 5)/2. Find M_Y(t) and use it to find E(Y) and V(Y).

52. If you toss a fair die with outcome X, p(x) = 1/6 for x = 1, 2, 3, 4, 5, 6. Find M_X(t).

53. If M_X(t) = 1/(1 − …), find E(X) and V(X) by differentiating M_X(t).

54. Prove that the mean and variance are obtainable from R_X(t) = ln[M_X(t)]:

μ = E(X) = R_X'(0)
σ² = V(X) = R_X''(0)

55. Show that g(t) = te^t cannot be a moment generating function.

56. If M_X(t) = e^{5(e^t − 1)}, find E(X) and V(X) by differentiating
a. M_X(t)
b. R_X(t)

57. Let X be the number of points earned by a randomly selected student on a 10-point quiz, with possible values 0, 1, 2, ..., 10 and pmf p(x), and suppose the distribution has a skewness of c. Now consider reversing the probabilities in the distribution, so that p(0) is interchanged with p(10), p(1) is interchanged with p(9), and so on. Show that the skewness of the resulting distribution is −c. [Hint: Let Y = 10 − X and show that Y has the reversed distribution. Use this fact to determine μ_Y and then the value of skewness for the Y distribution.]

The Binomial Probability Distribution

Many experiments conform either exactly or approximately to the following list of requirements:

1. The experiment consists of a sequence of n smaller experiments called trials, where n is fixed in advance of the experiment.
2. Each trial can result in one of the same two possible outcomes (dichotomous trials), which we denote by success (S) or failure (F).
3. The trials are independent, so that the outcome on any particular trial does not influence the outcome on any other trial.
4. The probability of success is constant from trial to trial; we denote this probability by p.

DEFINITION  An experiment for which Conditions 1–4 are satisfied is called a binomial experiment.

Example: The same coin is tossed successively and independently n times. We arbitrarily use S to denote the outcome H (heads) and F to denote the outcome T (tails). Then this experiment satisfies Conditions 1–4. Tossing a thumbtack n times, with S = point up and F = point down, also results in a binomial experiment.

Some experiments involve a sequence of independent trials for which there are more than two possible outcomes on any one trial. A binomial experiment can then be created by dividing the possible outcomes into two groups.

Example: The color of pea seeds is determined by a single genetic locus. If the two alleles at this locus are AA or Aa (the genotype), then the pea will be yellow (the phenotype), and if the allele is aa, the pea will be green. Suppose we pair off 20 Aa seeds and cross the two seeds in each of the ten pairs to obtain ten new genotypes. Call each new genotype a success S if it is aa and a failure otherwise. Then with this identification of S and F, the experiment is binomial with n = 10 and p = P(aa genotype).
If each member of the pair is equally likely to contribute a or A, then

p = P(a) · P(a) = (1/2)(1/2) = 1/4

Example 3.37: Suppose a city has 50 licensed restaurants, of which 15 currently have at least one serious health code violation and the other 35 have no serious violations. There are five inspectors, each of whom will inspect one restaurant during the coming week. The name of each restaurant is written on a different slip of paper, and after the slips are thoroughly mixed, each inspector in turn draws one of the slips without replacement. Label the ith trial as a success if the ith restaurant selected (i = 1, ..., 5) has no serious violations. Then

P(S on first trial) = 35/50 = .70

and

P(S on second trial) = P(SS) + P(FS)
  = P(second S | first S)P(first S) + P(second S | first F)P(first F)
  = (34/49)(35/50) + (35/49)(15/50) = (35/50)(34/49 + 15/49) = 35/50 = .70

Similarly, it can be shown that P(S on ith trial) = .70 for i = 3, 4, 5. However,

P(S on fifth trial | SSSS) = 31/46 = .67

whereas

P(S on fifth trial | FFFF) = 35/46 = .76

The experiment is not binomial because the trials are not independent. In general, if sampling is without replacement, the experiment will not yield independent trials. If each slip had been replaced after being drawn, then trials would have been independent, but this might have resulted in the same restaurant being inspected by more than one inspector.

Example 3.38: Suppose a state has 500,000 licensed drivers, of whom 400,000 are insured. A sample of ten drivers is chosen without replacement. The ith trial is labeled S if the ith driver chosen is insured. Although this situation would seem identical to that of Example 3.37, the important difference is that the size of the population being sampled is very large relative to the sample size. In this case

P(S on 2 | S on 1) = 399,999/499,999 = .80000

and

P(S on 10 | S on first 9) = 399,991/499,991 = .799996 ≈ .80000

These calculations suggest that although the trials are not exactly independent, the conditional probabilities differ so slightly from one another that for practical purposes the trials can be regarded as independent with constant P(S) = .8. Thus, to a very good approximation, the experiment is binomial with n = 10 and p = .8.

We will use the following rule of thumb in deciding whether a "without-replacement" experiment can be treated as a binomial experiment.

RULE  Consider sampling without replacement from a dichotomous population of size N. If the sample size (number of trials) n is at most 5% of the population size, the experiment can be analyzed as though it were exactly a binomial experiment.

By "analyzed," we mean that probabilities based on the binomial experiment assumptions will be quite close to the actual "without-replacement" probabilities, which are typically more difficult to calculate. In Example 3.37, n/N = 5/50 = .1 > .05, so the binomial experiment is not a good approximation, but in Example 3.38, n/N = 10/500,000 < .05.

The Binomial Random Variable and Distribution

In most binomial experiments, it is the total number of S's, rather than knowledge of exactly which trials yielded S's, that is of interest.

DEFINITION  Given a binomial experiment consisting of n trials, the binomial random variable X associated with this experiment is defined as

X = the number of S's among the n trials

Suppose, for example, that n = 3. Then there are eight possible outcomes for the experiment:

SSS  SSF  SFS  SFF  FSS  FSF  FFS  FFF

From the definition of X, X(SSF) = 2, X(SFF) = 1, and so on. Possible values for X in an n-trial experiment are x = 0, 1, 2, ..., n. We will often write X ~ Bin(n, p) to indicate that X is a binomial rv based on n trials with success probability p.
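The rule of thumb stated above can be seen in action numerically. The exact "without-replacement" probabilities below are computed by the subset-counting argument that Section 3.6 develops into the hypergeometric distribution; the function names are our own, and the sketch assumes the restaurant and driver populations of Examples 3.37 and 3.38:

```python
from math import comb

def without_replacement_pmf(k, n, M, N):
    # exact P(exactly k S's in n draws without replacement
    # from a population containing M S's and N - M F's)
    return comb(M, k) * comb(N - M, n - k) / comb(N, n)

def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Example 3.37: N = 50 restaurants, M = 35 with no violations, n = 5 drawn.
# Here n/N = .10 > .05, and the binomial values are noticeably off.
small_pop = [(without_replacement_pmf(k, 5, 35, 50), binomial_pmf(k, 5, 0.7))
             for k in range(6)]

# Example 3.38: N = 500,000 drivers, M = 400,000 insured, n = 10.
# Here n/N is tiny and the two models agree to several decimal places.
exact_8 = without_replacement_pmf(8, 10, 400_000, 500_000)
approx_8 = binomial_pmf(8, 10, 0.8)
print(small_pop[5], exact_8, approx_8)
```

Printing the full `small_pop` list shows discrepancies in the second or third decimal place, while `exact_8` and `approx_8` agree far more closely, which is exactly what the 5% rule predicts.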
NOTATION  Because the pmf of a binomial rv X depends on the two parameters n and p, we denote the pmf by b(x; n, p).

Consider first the case n = 4, for which each outcome, its probability, and the corresponding x value are listed in Table 3.1. For example,

P(SSFS) = P(S) · P(S) · P(F) · P(S)   (independent trials)
        = p · p · (1 − p) · p          [constant P(S)]
        = p³(1 − p)

Table 3.1  Outcomes and probabilities for a binomial experiment with four trials

Outcome  x  Probability      Outcome  x  Probability
SSSS     4  p⁴               FSSS     3  p³(1 − p)
SSSF     3  p³(1 − p)        FSSF     2  p²(1 − p)²
SSFS     3  p³(1 − p)        FSFS     2  p²(1 − p)²
SSFF     2  p²(1 − p)²       FSFF     1  p(1 − p)³
SFSS     3  p³(1 − p)        FFSS     2  p²(1 − p)²
SFSF     2  p²(1 − p)²       FFSF     1  p(1 − p)³
SFFS     2  p²(1 − p)²       FFFS     1  p(1 − p)³
SFFF     1  p(1 − p)³        FFFF     0  (1 − p)⁴

In this special case, we wish b(x; 4, p) for x = 0, 1, 2, 3, and 4. For b(3; 4, p), we identify which of the 16 outcomes yield an x value of 3 and sum the probabilities associated with each such outcome:

b(3; 4, p) = P(FSSS) + P(SFSS) + P(SSFS) + P(SSSF) = 4p³(1 − p)

There are four outcomes with x = 3, and each has probability p³(1 − p) (the probability depends only on the number of S's, not on the order of S's and F's), so

b(3; 4, p) = [number of outcomes with X = 3] · [probability of any particular outcome with X = 3]

Similarly, b(2; 4, p) = 6p²(1 − p)², which is also the product of the number of outcomes with X = 2 and the probability of any such outcome. In general,

b(x; n, p) = [number of sequences of length n consisting of x S's] · [probability of any particular such sequence]

Since the ordering of S's and F's is not important, the second factor in the previous equation is p^x (1 − p)^(n−x) (e.g., the first x trials resulting in S and the last n − x resulting in F). The first factor is the number of ways of choosing x of the n trials to be S's, that is, the number of combinations of size x that can be constructed from the n distinct objects (trials here).
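The counting argument for n = 4 can be verified by brute-force enumeration of all 2⁴ = 16 outcome sequences; a short sketch (the value of p is a hypothetical choice, and the names are ours):

```python
from itertools import product
from math import comb

p = 0.25  # any success probability works; this value is hypothetical
n = 4

# enumerate all 2^4 = 16 outcome sequences of S's and F's (Table 3.1)
prob_by_x = {}
for seq in product("SF", repeat=n):
    x = seq.count("S")
    prob = p ** x * (1 - p) ** (n - x)  # probability of this one sequence
    prob_by_x[x] = prob_by_x.get(x, 0.0) + prob

# each total agrees with C(n, x) p^x (1 - p)^(n - x)
for x in range(n + 1):
    assert abs(prob_by_x[x] - comb(n, x) * p ** x * (1 - p) ** (n - x)) < 1e-12
print(prob_by_x)
```

In particular, the x = 3 entry collects exactly the four sequences FSSS, SFSS, SSFS, SSSF, reproducing b(3; 4, p) = 4p³(1 − p).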
THEOREM

b(x; n, p) = C(n, x) p^x (1 − p)^(n−x)   for x = 0, 1, 2, ..., n
b(x; n, p) = 0                            otherwise

Example: Each of six randomly selected cola drinkers is given a glass containing cola S and one containing cola F. The glasses are identical in appearance except for a code on the bottom to identify the cola. Suppose there is no tendency among cola drinkers to prefer one cola to the other. Then p = P(a selected individual prefers S) = .5, so with X = the number among the six who prefer S, X ~ Bin(6, .5). Thus

P(X = 3) = b(3; 6, .5) = C(6, 3)(.5)³(.5)³ = 20(.5)⁶ = .313

The probability that at least three prefer S is

P(3 ≤ X) = Σ_{x=3}^{6} b(x; 6, .5) = Σ_{x=3}^{6} C(6, x)(.5)^x(.5)^(6−x) = .656

For X ~ Bin(n, p), the cumulative probabilities P(X ≤ x) = B(x; n, p) = Σ_{y=0}^{x} b(y; n, p) are tabulated in Appendix Table A.1 for selected values of n and p. For instance, suppose X ~ Bin(15, .2) counts the number among 15 items that fail. The probability that at least 8 fail is

P(X ≥ 8) = 1 − P(X ≤ 7) = 1 − B(7; 15, .2) = 1 − (entry in x = 7 row of p = .2 column) = 1 − .996 = .004

Finally, the probability that between 4 and 7, inclusive, fail is

P(4 ≤ X ≤ 7) = P(X = 4, 5, 6, or 7) = P(X ≤ 7) − P(X ≤ 3) = B(7; 15, .2) − B(3; 15, .2) = .996 − .648 = .348

Notice that this latter probability is the difference between the entries in the x = 7 and x = 3 rows, not the x = 7 and x = 4 rows.

Example: An electronics manufacturer claims that at most 10% of its power supply units need service during the warranty period. To investigate this claim, technicians at a testing laboratory purchase 20 units and subject each one to accelerated testing to simulate use during the warranty period. Let p denote the probability that a power supply unit needs repair during the period (the proportion of all such units that need repair). The laboratory technicians must decide whether the data resulting from the experiment supports the claim that p ≤ .10. Let X denote the number among the 20 sampled that need repair, so X ~ Bin(20, p). Consider the decision rule:

Reject the claim that p ≤ .10 in favor of the conclusion that p > .10 if x ≥ 5 (where x is the observed value of X), and consider the claim plausible if x ≤ 4.
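The behavior of this decision rule is governed by the cumulative binomial probabilities B(x; n, p) used above; a small sketch (function names ours):

```python
from math import comb

def B(x, n, p):
    # binomial cdf: B(x; n, p) = P(X <= x)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(x + 1))

# decision rule: reject the claim p <= .10 if x >= 5 among n = 20 tested units
p_reject_when_claim_true = 1 - B(4, 20, 0.10)  # ≈ .043
p_accept_when_p_is_20pct = B(4, 20, 0.20)      # ≈ .630
print(round(p_reject_when_claim_true, 3), round(p_accept_when_p_is_20pct, 3))
```

These are the two "error probabilities" discussed next; computing them directly avoids any table lookup.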
The probability that the claim is rejected when p = .10 (an incorrect conclusion) is

P(X ≥ 5 when p = .10) = 1 − B(4; 20, .1) = 1 − .957 = .043

The probability that the claim is not rejected when p = .20 (a different type of incorrect conclusion) is

P(X ≤ 4 when p = .2) = B(4; 20, .2) = .630

The first probability is rather small, but the second is intolerably large. When p = .20, so that the manufacturer has grossly understated the percentage of units that need service, and the stated decision rule is used, 63% of all samples will result in the manufacturer's claim being judged plausible!

One might think that the probability of this second type of erroneous conclusion could be made smaller by changing the cutoff value 5 in the decision rule to something else. However, although replacing 5 by a smaller number would yield a probability smaller than .630, the other probability would then increase. The only way to make both "error probabilities" small is to base the decision rule on an experiment involving many more units.

Note that a table entry of 0 signifies only that a probability is 0 to three significant digits; all entries in the table are actually positive. Statistical computer packages such as MINITAB will generate either b(x; n, p) or B(x; n, p) once values of n and p are specified. In Chapter 4, we will present a method for obtaining quick and accurate approximations to binomial probabilities when n is large.

The Mean and Variance of X

For n = 1, the binomial distribution becomes the Bernoulli distribution. From Example 3.17, the mean value of a Bernoulli variable is μ = p, so the expected number of S's on any single trial is p. Since a binomial experiment consists of n trials, intuition suggests that for X ~ Bin(n, p), E(X) = np, the product of the number of trials and the probability of success on a single trial. The expression for V(X) is not so intuitive.
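The intuition E(X) = np, and the companion variance formula np(1 − p) derived later in this section via the mgf, can be confirmed by brute-force summation against the pmf; a sketch (names ours):

```python
from math import comb, sqrt

def b(x, n, p):
    # binomial pmf b(x; n, p)
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

# brute-force moments from the pmf, compared with np and np(1 - p)
for n, p in [(10, 0.75), (20, 0.2), (6, 0.5)]:
    mean = sum(x * b(x, n, p) for x in range(n + 1))
    var = sum(x ** 2 * b(x, n, p) for x in range(n + 1)) - mean ** 2
    assert abs(mean - n * p) < 1e-9
    assert abs(var - n * p * (1 - p)) < 1e-9
    print(n, p, mean, var, sqrt(var))
```

The (n, p) pairs here are arbitrary test values; any others would pass the same checks.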
PROPOSITION  If X ~ Bin(n, p), then E(X) = np, V(X) = np(1 − p) = npq, and σ_X = √(npq) (where q = 1 − p).

Thus, calculating the mean and variance of a binomial rv does not necessitate evaluating summations. The proof of the result for E(X) is sketched in Exercise 74, and both the mean and the variance are obtained below using the moment generating function.

Example: If 75% of all purchases at a store are made with a credit card and X is the number among ten randomly selected purchases made with a credit card, then X ~ Bin(10, .75). Thus E(X) = np = (10)(.75) = 7.5, V(X) = npq = 10(.75)(.25) = 1.875, and σ = √1.875. Again, even though X can take on only integer values, E(X) need not be an integer. If we perform a large number of independent binomial experiments, each with n = 10 trials and p = .75, then the average number of S's per experiment will be close to 7.5.

The Moment Generating Function of X

Let's find the moment generating function of a binomial random variable. Using the definition,

M_X(t) = E(e^{tX}) = Σ_{x∈D} e^{tx} p(x) = Σ_{x=0}^{n} e^{tx} C(n, x) p^x (1 − p)^(n−x)
       = Σ_{x=0}^{n} C(n, x) (pe^t)^x (1 − p)^(n−x) = (pe^t + 1 − p)^n

Here we have used the binomial theorem, Σ_{x=0}^{n} C(n, x) a^x b^(n−x) = (a + b)^n. Notice that the mgf satisfies the property required of all moment generating functions, M_X(0) = 1, because the sum of the probabilities is 1. The mean and variance can be obtained by differentiating M_X(t):

M_X'(t) = n(pe^t + 1 − p)^(n−1) pe^t   and   μ = M_X'(0) = np

Then the second derivative is

M_X''(t) = n(n − 1)(pe^t + 1 − p)^(n−2) pe^t · pe^t + n(pe^t + 1 − p)^(n−1) pe^t

and

E(X²) = M_X''(0) = n(n − 1)p² + np

Therefore,

σ² = V(X) = E(X²) − [E(X)]² = n(n − 1)p² + np − n²p² = np − np² = np(1 − p)

in accord with the foregoing proposition.

Exercises | Section 3.5 (58–79)

58. Compute the following binomial probabilities directly from the formula for b(x; n, p):
a. …
b. …
c. b(6; 10, .7)
d. P(2 ≤ X ≤ 4) when X ~ Bin(10, .3)
e. P(2 ≤ X) when X ~ Bin(10, .3)
f. P(X ≤ 1) when X ~ Bin(10, .7)
g. P(2 … 5) …

60. Let X be the number of defective circuit boards in a random sample of 25 boards.
a. …
b. …
c. Determine P(1 ≤ X ≤ 4).
d. What is the probability that none of the 25 boards is defective?
e. Calculate the expected value and standard deviation of X.

61. A company that produces fine crystal knows from experience that 10% of its goblets have cosmetic flaws and must be classified as "seconds."
a. Among six randomly selected goblets, how likely is it that only one is a second?
b. Among six randomly selected goblets, what is the probability that at least two are seconds?
c. If goblets are examined one by one, what is the probability that at most five must be selected to find four that are not seconds?

62. Suppose that only 25% of all drivers come to a complete stop at an intersection having flashing red lights in all directions when no other cars are visible. What is the probability that, of 20 randomly chosen drivers coming to an intersection under these conditions,
a. At most 6 will come to a complete stop?
b. Exactly 6 will come to a complete stop?
c. At least 6 will come to a complete stop?
d. How many of the next 20 drivers do you expect to come to a complete stop?

63. Exercise 29 (Section 3.3) gave the pmf of Y, the number of traffic citations for a randomly selected individual insured by a company. What is the probability that among 15 randomly chosen such individuals
a. At least 10 have no citations?
b. Fewer than half have at least one citation?
c. The number that have at least one citation is between 6 and 10, inclusive? ("Between a and b, inclusive" is equivalent to a ≤ X ≤ b.)

64. A particular type of tennis racket comes in a midsize version and an oversize version. Sixty percent of all customers at a store want the oversize version.
a. Among ten randomly selected customers who want this type of racket, what is the probability that at least six want the oversize version?
b. Among ten randomly selected customers, what is the probability that the number who want the oversize version is within 1 standard deviation of the mean value?
c. The store currently has seven rackets of each version. What is the probability that all of the next ten customers who want this racket can get the version they want from current stock?

65. Twenty percent of all telephones of a certain type are submitted for service while under warranty. Of these, 60% can be repaired, whereas the other 40% must be replaced with new units. If a company purchases ten of these telephones, what is the probability that exactly two will end up being replaced under warranty?

66. The College Board reports that 2% of the two million high school students who take the SAT each year receive special accommodations because of documented disabilities (Los Angeles Times, July 16, 2002). Consider a random sample of 25 students who have recently taken the test.
a. What is the probability that exactly 1 received a special accommodation?
b. What is the probability that at least 1 received a special accommodation?
c. What is the probability that at least 2 received a special accommodation?
d. What is the probability that the number among the 25 who received a special accommodation is within 2 standard deviations of the number you would expect to be accommodated?
e. Suppose that a student who does not receive a special accommodation is allowed 3 h for the exam, whereas an accommodated student is allowed 4.5 h. What would you expect the average time allowed the 25 selected students to be?

67. Suppose that 90% of all batteries from a supplier have acceptable voltages. A certain type of flashlight requires two type-D batteries, and the flashlight will work only if both its batteries have acceptable voltages. Among ten randomly selected flashlights, what is the probability that at least nine will work? What assumptions did you make in the course of answering the question posed?

68. A very large batch of components has arrived at a distributor. The batch can be characterized as acceptable only if the proportion of defective components is at most .10. The distributor decides to randomly select 10 components and to accept the batch only if the number of defective components in the sample is at most 2.
a. What is the probability that the batch will be accepted when the actual proportion of defectives is .01? .05? .10? .20? .25?
b. Let p denote the actual proportion of defectives in the batch. A graph of P(batch is accepted) as a function of p, with p on the horizontal axis and P(batch is accepted) on the vertical axis, is called the operating characteristic curve for the acceptance sampling plan. Use the results of part (a) to sketch this curve for 0 < p < 1.
c. Repeat parts (a) and (b) with "1" replacing "2" in the acceptance sampling plan.
d. Repeat parts (a) and (b) with "15" replacing "10" in the acceptance sampling plan.
e. Which of the three sampling plans, that of part (a), (c), or (d), appears most satisfactory, and why?

69. An ordinance requiring that a smoke detector be installed in all previously constructed houses has been in effect in a city for 1 year. The fire department is concerned that many houses remain without detectors. Let p = the true proportion of such houses having detectors, and suppose that a random sample of 25 homes is inspected. If the sample strongly indicates that fewer than 80% of all houses have a detector, the fire department will campaign for a mandatory inspection program. Because of the costliness of the program, the department prefers not to call for such inspections unless sample evidence strongly argues for their necessity. Let X denote the number of homes with detectors among the 25 sampled. Consider rejecting the claim that p ≥ .8 if x ≤ 15.
a. What is the probability that the claim is rejected when the actual value of p is .8?
b. What is the probability of not rejecting the claim when p = .7? When p = .6?
c. How do the "error probabilities" of parts (a) and (b) change if the value 15 in the decision rule is replaced by 14?

70. A toll bridge charges $1.00 for passenger cars and $2.50 for other vehicles. Suppose that during daytime hours, 60% of all vehicles are passenger cars. If 25 vehicles cross the bridge during a particular daytime period, what is the resulting expected toll revenue? [Hint: Let X = the number of passenger cars; then the toll revenue h(X) is a linear function of X.]

71. A student who is trying to write a paper for a course has a choice of two topics, A and B. If topic A is chosen, the student will order two books through interlibrary loan, whereas if topic B is chosen, the student will order four books. The student believes that a good paper necessitates receiving and using at least half the books ordered for either topic chosen. If the probability that a book ordered through interlibrary loan actually arrives in time is .9 and books arrive independently of one another, which topic should the student choose to maximize the probability of writing a good paper? What if the arrival probability is only .5 instead of .9?

72. Let X be a binomial random variable with fixed n.
a. Are there values of p (0 ≤ p ≤ 1) for which V(X) = 0? Explain why this is so.
b. For what value of p is V(X) maximized? [Hint: Either graph V(X) as a function of p or else take a derivative.]

73. a. Show that b(x; n, 1 − p) = b(n − x; n, p).
b. Show that B(x; n, 1 − p) = 1 − B(n − x − 1; n, p). [Hint: At most x S's is equivalent to at least (n − x) F's.]
c. What do parts (a) and (b) imply about the necessity of including values of p > .5 in Appendix Table A.1?

74. Show that E(X) = np when X is a binomial random variable. [Hint: First express E(X) as a sum with lower limit x = 1. Then factor out np, let y = x − 1 so that the remaining sum is from y = 0 to y = n − 1, and show that it equals 1.]

75. Customers at a gas station pay with a credit card (A), debit card (B), or cash (C). Assume that successive customers make independent choices, with P(A) = .5, P(B) = .2, and P(C) = .3.
a. Among the next 100 customers, what are the mean and variance of the number who pay with a debit card? Explain your reasoning.
b. Answer part (a) for the number among the 100 who don't pay with cash.

76. An airport limousine can accommodate up to four passengers on any one trip. The company will accept a maximum of six reservations for a trip, and a passenger must have a reservation. From previous records, 20% of all those making reservations do not appear for the trip. In the following questions, assume independence, but explain why there could be dependence.
a. If six reservations are made, what is the probability that at least one individual with a reservation cannot be accommodated on the trip?
b. If six reservations are made, what is the expected number of available places when the limousine departs?
c. Suppose the probability distribution of the number of reservations made is given in the accompanying table.

Number of reservations   3    4    5    6
Probability             .1   .2   .3   .4

Let X denote the number of passengers on a randomly selected trip. Obtain the probability mass function of X.

77. Refer to Chebyshev's inequality given in Exercise 43 (Section 3.3). Calculate P(|X − μ| ≥ kσ) for k = 2 and k = 3 when X ~ Bin(20, .5), and compare to the corresponding upper bounds. Repeat for X ~ Bin(20, .75).

78. At the end of this section we obtained the mean and variance of a binomial rv using the mgf. Obtain the mean and variance instead from R_X(t) = ln[M_X(t)].

79. Obtain the moment generating function of the number of failures, n − X, in a binomial experiment, and use it to determine the expected number of failures and the variance of the number of failures. Are the expected value and variance intuitively consistent with the expressions for E(X) and V(X)? Explain.

Hypergeometric and Negative Binomial Distributions

The hypergeometric and negative binomial distributions are both closely related to the binomial distribution. Whereas the binomial distribution is the approximate probability model for sampling without replacement from a finite dichotomous (S–F) population, the hypergeometric distribution is the exact probability model for the number of S's in the sample. The binomial rv X is the number of S's when the number n of trials is fixed, whereas the negative binomial distribution arises from fixing the number of S's desired and letting the number of trials be random.

The Hypergeometric Distribution

The assumptions leading to the hypergeometric distribution are as follows:

1. The population or set to be sampled consists of N individuals, objects, or elements (a finite population).
2. Each individual can be characterized as a success (S) or a failure (F), and there are M successes in the population.
3. A sample of n individuals is selected without replacement in such a way that each subset of size n is equally likely to be chosen.

The random variable of interest is X = the number of S's in the sample. The probability distribution of X depends on the parameters n, M, and N, so we wish to obtain P(X = x) = h(x; n, M, N).

Example 3.43: During a particular period a university's information technology office received 20 service orders for problems with printers, of which 8 were laser printers and 12 were inkjet models. A sample of 5 of these service orders is to be selected for inclusion in a customer satisfaction survey.
Suppose that the 5 are selected in a completely random fashion, so that any particular subset of size 5 has the same chance of being selected as does any other subset (think of putting the numbers 1, 2, ..., 20 on 20 identical slips of paper, mixing up the slips, and choosing 5 of them). What then is the probability that exactly x (x = 0, 1, 2, 3, 4, or 5) of the selected service orders were for inkjet printers?

In this example, the population size is N = 20, the sample size is n = 5, and the number of S's (inkjet = S) and F's in the population are M = 12 and N − M = 8, respectively. Consider the value x = 2. Because all outcomes (each consisting of 5 particular orders) are equally likely,

P(X = 2) = h(2; 5, 12, 20) = (number of outcomes having X = 2) / (number of possible outcomes)

The number of possible outcomes in the experiment is the number of ways of selecting 5 from the 20 objects without regard to order, that is, C(20, 5). To count the number of outcomes having X = 2, note that there are C(12, 2) ways of selecting 2 of the inkjet orders, and for each such way there are C(8, 3) ways of selecting the 3 laser orders to fill out the sample. The product rule from Chapter 2 then gives C(12, 2) · C(8, 3) as the number of outcomes with X = 2, so

h(2; 5, 12, 20) = C(12, 2) C(8, 3) / C(20, 5) = 77/323 = .238

In general, if the sample size n is smaller than the number of successes in the population (M), then the largest possible X value is n. However, if M < n, the largest possible X value is M, and whenever the number of failures in the population, N − M, is less than the sample size, the smallest possible X value is n − (N − M). Thus the possible values of X satisfy max(0, n − N + M) ≤ x ≤ min(n, M), and the counting argument of the example gives the general pmf

h(x; n, M, N) = C(M, x) C(N − M, n − x) / C(N, n)

for integers x with max(0, n − N + M) ≤ x ≤ min(n, M).

Example 3.44: Five animals from a population thought to be near extinction in a region are caught, tagged, and released to mix into the population. After they have had an opportunity to mix, a random sample of 10 animals is selected. If there are actually N = 25 animals of this type in the region and X = the number of tagged animals in the sample, the probability that the sample contains at most 2 tagged animals is

P(X ≤ 2) = Σ_{x=0}^{2} h(x; 10, 5, 25) = .057 + .257 + .385 = .699

Comprehensive tables of the hypergeometric distribution are available, but because the distribution has three parameters, these tables require much more space than tables for the binomial distribution. MINITAB, R, and other statistical software packages will easily generate hypergeometric probabilities. As in the binomial case, there are simple expressions for E(X) and V(X) for hypergeometric rv's.
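The hypergeometric calculations above are easy to reproduce; a small sketch (function name ours):

```python
from math import comb

def h(x, n, M, N):
    # hypergeometric pmf: C(M, x) C(N - M, n - x) / C(N, n)
    return comb(M, x) * comb(N - M, n - x) / comb(N, n)

# printer example: 5 orders sampled from 20 (12 inkjet = S, 8 laser = F)
print(round(h(2, 5, 12, 20), 3))  # ≈ .238

# the pmf sums to 1, and its mean equals n * M / N
total = sum(h(x, 5, 12, 20) for x in range(6))
mean = sum(x * h(x, 5, 12, 20) for x in range(6))
print(total, mean)  # mean = 5 * 12/20 = 3.0
```

Replacing the arguments with (x, 10, 5, 25) and summing x from 0 to 2 reproduces the .699 of the animal-tagging example.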
PROPOSITION  The mean and variance of the hypergeometric rv X having pmf h(x; n, M, N) are

E(X) = n · (M/N)
V(X) = ((N − n)/(N − 1)) · n · (M/N) · (1 − M/N)

The proof will be given in Section 6.3. We do not give the moment generating function for the hypergeometric distribution, because the mgf is more trouble than it is worth here.

The ratio M/N is the proportion of S's in the population. Replacing M/N by p in E(X) and V(X) gives

E(X) = np                                    (3.16)
V(X) = ((N − n)/(N − 1)) · np(1 − p)

Expression (3.16) shows that the means of the binomial and hypergeometric rv's are equal, whereas the variances of the two rv's differ by the factor (N − n)/(N − 1), often called the finite population correction factor. This factor is less than 1, so the hypergeometric variable has smaller variance than does the binomial rv. The correction factor can be written (1 − n/N)/(1 − 1/N), which is approximately 1 when n is small relative to N.

Example (Example 3.44 continued): In the animal-tagging example, n = 10, M = 5, and N = 25, so p = 5/25 = .2 and

E(X) = 10(.2) = 2
V(X) = (15/24)(10)(.2)(.8) = (.625)(1.6) = 1

If the sampling were carried out with replacement, V(X) = 1.6.

Suppose the population size N is not actually known, so the value x is observed and we wish to estimate N. It is reasonable to equate the observed sample proportion of S's, x/n, with the population proportion, M/N, giving the estimate

N̂ = M · n / x

If M = 100, n = 40, and x = 16, then N̂ = 250.

Our general rule of thumb in Section 3.5 stated that if sampling is without replacement but n/N is at most .05, then the binomial distribution can be used to compute approximate probabilities involving the number of S's in the sample. A more precise statement is as follows: Let the population size, N, and the number of population S's, M, get large with the ratio M/N approaching p.
Then h(x; n, M, N) approaches b(x; n, p); so for n/N small, the two are approximately equal provided that p is not too near either 0 or 1. This is the rationale for our rule of thumb.

The Negative Binomial Distribution

The negative binomial rv and distribution are based on an experiment satisfying the following conditions:

1. The experiment consists of a sequence of independent trials.
2. Each trial can result in either a success (S) or a failure (F).
3. The probability of success is constant from trial to trial, so P(S on trial i) = p for i = 1, 2, 3, ....
4. The experiment continues (trials are performed) until a total of r successes has been observed, where r is a specified positive integer.

The random variable of interest is X = the number of failures that precede the rth success, and X is called a negative binomial random variable. In contrast to the binomial rv, the number of successes is fixed and the number of trials is random.

Why the name "negative binomial"? Binomial probabilities are related to the terms in the binomial theorem, and negative binomial probabilities are related to the terms in the binomial theorem when the exponent is a negative integer. For details, see the proof of the last proposition of this section.

Possible values of X are 0, 1, 2, .... Let nb(x; r, p) denote the pmf of X. The event {X = x} is equivalent to {r − 1 S's in the first (x + r − 1) trials and an S on the (x + r)th trial} (e.g., if r = 5 and x = 10, then there must be four S's in the first 14 trials and trial 15 must be an S). Since trials are independent,

nb(x; r, p) = P(X = x) = P(r − 1 S's on the first x + r − 1 trials) · P(S)     (3.17)

The first probability on the far right of Expression (3.17) is the binomial probability

C(x + r − 1, r − 1) p^(r−1) (1 − p)^x   where P(S) = p

PROPOSITION  The pmf of the negative binomial rv X with parameters r = number of S's and p = P(S) is

nb(x; r, p) = C(x + r − 1, r − 1) p^r (1 − p)^x,   x = 0, 1, 2, ...
A pediatrician wishes to recruit 5 couples, each of whom is expecting their first child, to participate in a new natural childbirth regimen. Let p = P(a randomly selected couple agrees to participate). If p = .2, what is the probability that 15 couples must be asked before 5 are found who agree to participate? That is, with S = {agrees to participate}, what is the probability that 10 F's occur before the fifth S? Substituting r = 5, p = .2, and x = 10 into nb(x; r, p) gives

nb(10; 5, .2) = C(14, 4)(.2)^5(.8)^10 = .034

The probability that at most 10 F's are observed (at most 15 couples are asked) is

P(X ≤ 10) = Σ_{x=0}^{10} nb(x; 5, .2) = (.2)^5 Σ_{x=0}^{10} C(x + 4, x)(.8)^x = .164  ■

In some sources, the negative binomial rv is taken to be the number of trials X + r rather than the number of failures.

In the special case r = 1, the pmf is

nb(x; 1, p) = (1 − p)^x p    x = 0, 1, 2, ...    (3.18)

In Example 3.10, we derived the pmf for the number of trials necessary to obtain the first S, and the pmf there is similar to Expression (3.18). Both X = number of F's and Y = number of trials (= 1 + X) are referred to in the literature as geometric random variables, and the pmf in (3.18) is called the geometric distribution. The name is appropriate because the probabilities form a geometric series: p, (1 − p)p, (1 − p)²p, .... To see that the sum of the probabilities is 1, recall that the sum of a geometric series is a + ar + ar² + ··· = a/(1 − r) if |r| < 1, so for p > 0,

p + (1 − p)p + (1 − p)²p + ··· = p / (1 − (1 − p)) = 1

In Example 3.18, the expected number of trials until the first S was shown to be 1/p, so that the expected number of F's until the first S is (1/p) − 1 = (1 − p)/p. Intuitively, we would expect to see r · (1 − p)/p F's before the rth S, and this is indeed E(X). There is also a simple formula for V(X).
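The childbirth-regimen calculation is easy to check with a few lines of code. The sketch below (not part of the text; the helper `nb` is ours) evaluates nb(10; 5, .2) and the cumulative probability P(X ≤ 10) directly from the pmf.

```python
# Check of the natural-childbirth example: X = number of F's before the 5th S.
from math import comb

def nb(x, r, p):
    """Negative binomial pmf nb(x; r, p): exactly x failures precede the r-th success."""
    return comb(x + r - 1, r - 1) * p**r * (1 - p)**x

print(round(nb(10, 5, 0.2), 3))                          # .034, as in the text
print(round(sum(nb(x, 5, 0.2) for x in range(11)), 3))   # P(X <= 10) = .164
```

As a cross-check, {X ≤ 10} is the same event as "at least 5 S's in the first 15 trials," so the second number also equals 1 minus a Bin(15, .2) cdf value at 4.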
PROPOSITION  If X is a negative binomial rv with pmf nb(x; r, p), then

M_X(t) = [ p / (1 − e^t(1 − p)) ]^r        E(X) = r(1 − p)/p        V(X) = r(1 − p)/p²

Proof  In order to derive the moment generating function, we will use the binomial theorem as generalized by Isaac Newton to allow negative exponents, and this will help to explain the name of the distribution. If n is any real number, not necessarily a positive integer,

(a + b)^n = Σ_{x=0}^∞ C(n, x) a^(n−x) b^x

where

C(n, x) = n(n − 1)···(n − x + 1)/x!    except that C(n, 0) = 1

In the special case that x > 0 and n is a negative integer, n = −r,

C(−r, x) = (−r)(−r − 1)···(−r − x + 1)/x! = (−1)^x (r + x − 1)(r + x − 2)···r/x! = (−1)^x C(r + x − 1, x)

Using this in the generalized binomial theorem with a = 1 and b = −u,

(1 − u)^(−r) = Σ_{x=0}^∞ C(−r, x)(1)^(−r−x)(−u)^x = Σ_{x=0}^∞ C(r + x − 1, x) u^x

Now we can find the moment generating function for the negative binomial distribution:

M_X(t) = Σ_{x=0}^∞ e^(tx) C(r + x − 1, r − 1) p^r (1 − p)^x = p^r Σ_{x=0}^∞ C(r + x − 1, r − 1) [e^t(1 − p)]^x = p^r [1 − e^t(1 − p)]^(−r)

The mean and variance of X can now be obtained from the moment generating function (Exercise 91). ■

Finally, by expanding the binomial coefficient in front of p^r(1 − p)^x and doing some cancellation, it can be seen that nb(x; r, p) is well defined even when r is not an integer. This generalized negative binomial distribution has been found to fit observed data quite well in a wide variety of applications.

Exercises  Section 3.6 (80–92)

80. A bookstore has 15 copies of a particular textbook, of which 6 are first printings and the other 9 are second printings (later printings provide an opportunity for authors to correct mistakes). Suppose that 5 of these copies are randomly selected, and let X be the number of first printings among the selected copies.
a. What kind of a distribution does X have (name and values of all parameters)?
b. Compute P(X = 2), P(X ≤ 2), and P(X ≥ 2).
c. Calculate the mean value and standard deviation of X.

81. Each of 12 refrigerators has been returned to a distributor because of an audible, high-pitched, oscillating noise when the refrigerator is running. Suppose that 7 of these refrigerators have a defective compressor and the other 5 have less serious problems. If the refrigerators are examined in random order, let X be the number among the first 6 examined that have a defective compressor. Compute the following:
a. P(X = 5)
b. P(X ≤ 4)
c. The probability that X exceeds its mean value by more than 1 standard deviation.
d. Consider a large shipment of 400 refrigerators, of which 40 have defective compressors. If X is the number among 15 randomly selected refrigerators that have defective compressors, describe a less tedious way to calculate (at least approximately) P(X ≤ 5) than to use the hypergeometric pmf.

82. An instructor who taught two sections of statistics last term, the first with 20 students and the second with 30, decided to assign a term project. After all projects had been turned in, the instructor randomly ordered them before grading. Consider the first 15 graded projects.
a. What is the probability that exactly 10 of these are from the second section?
b. What is the probability that at least 10 of these are from the second section?
c. What is the probability that at least 10 of these are from the same section?
d. What are the mean value and standard deviation of the number among these 15 that are from the second section?
e. What are the mean value and standard deviation of the number of projects not among these first 15 that are from the second section?

83. A geologist has collected 10 specimens of basaltic rock and 10 specimens of granite. The geologist instructs a laboratory assistant to randomly select 15 of the specimens for analysis.
a. What is the pmf of the number of granite specimens selected for analysis?
b. What is the probability that all specimens of one of the two types of rock are selected for analysis?
c. What is the probability that the number of granite specimens selected for analysis is within 1 standard deviation of its mean value?

84. Suppose that 20% of all individuals have an adverse reaction to a particular drug. A medical researcher will administer the drug to one individual after another until the first adverse reaction occurs. Define an appropriate random variable and use its distribution to answer the following questions.
a. What is the probability that when the experiment terminates, four individuals have not had adverse reactions?
b. What is the probability that the drug is administered to exactly five individuals?
c. What is the probability that at most four individuals do not have an adverse reaction?
d. How many individuals would you expect to not have an adverse reaction, and to how many individuals would you expect the drug to be given?
e. What is the probability that the number of individuals given the drug is within 1 standard deviation of what you expect?

85. Twenty pairs of individuals playing in a bridge tournament have been seeded 1, ..., 20. In the first part of the tournament, the 20 are randomly divided into 10 east–west pairs and 10 north–south pairs.
a. What is the probability that x of the top 10 pairs end up playing east–west?
b. What is the probability that all of the top five pairs end up playing the same direction?
c. If there are 2n pairs, what is the pmf of X = the number among the top n pairs who end up playing east–west? What are E(X) and V(X)?

86. A second-stage smog alert has been called in an area of Los Angeles County in which there are 50 industrial firms. An inspector will visit 10 randomly selected firms to check for violations of regulations.
a. If 15 of the firms are actually violating at least one regulation, what is the pmf of the number of firms visited by the inspector that are in violation of at least one regulation?
b. If there are 500 firms in the area, of which 150 are in violation, approximate the pmf of part (a) by a simpler pmf.
c. For X = the number among the 10 visited that are in violation, compute E(X) and V(X) both for the exact pmf and the approximating pmf in part (b).

87. Suppose that p = P(male birth) = .5. A couple wishes to have exactly two female children in their family. They will have children until this condition is fulfilled.
a. What is the probability that the family has x male children?
b. What is the probability that the family has four children?
c. What is the probability that the family has at most four children?
d. How many male children would you expect this family to have? How many children would you expect this family to have?

88. A family decides to have children until it has three children of the same gender. Assuming P(B) = P(G) = .5, what is the pmf of X = the number of children in the family?

89. Three brothers and their wives decide to have children until each family has two female children. Let X = the total number of male children born to the brothers. What is E(X), and how does it compare to the expected number of male children born to each brother?

90. Individual A has a red die and B has a green die (both fair). If they each roll until they obtain five "doubles" (1–1, ..., 6–6), what is the pmf of X = the total number of times a die is rolled? What are E(X) and V(X)?

91. Use the moment generating function of the negative binomial distribution to derive
a. The mean
b. The variance

92. If X is a negative binomial rv, then Y = r + X is the total number of trials necessary to obtain r S's. Obtain the mgf of Y and then its mean value and variance. Are the mean and variance intuitively consistent with the expressions for E(X) and V(X)? Explain.

3.7 The Poisson Probability Distribution

The binomial, hypergeometric, and negative binomial distributions were all derived by starting with an experiment consisting of trials or draws and applying the laws of probability to various outcomes of the experiment. There is no simple experiment on which the Poisson distribution is based, although we will shortly describe how it can be obtained by certain limiting operations.

DEFINITION  A random variable X is said to have a Poisson distribution with parameter λ (λ > 0) if the pmf of X is

p(x; λ) = e^(−λ) λ^x / x!    x = 0, 1, 2, ...

We shall see shortly that λ is in fact the expected value of X, so the pmf can be written using μ in place of λ. Because λ must be positive, p(x; λ) > 0 for all possible x values. The fact that Σ_{x=0}^∞ p(x; λ) = 1 is a consequence of the Maclaurin infinite series expansion of e^λ, which appears in most calculus texts:

e^λ = 1 + λ + λ²/2! + λ³/3! + ··· = Σ_{x=0}^∞ λ^x/x!    (3.19)

If the two extreme terms in Expression (3.19) are multiplied by e^(−λ) and then e^(−λ) is placed inside the summation, the result is

1 = Σ_{x=0}^∞ e^(−λ) λ^x / x!

which shows that p(x; λ) fulfills the second condition necessary for specifying a pmf.

Example 3.47  Let X denote the number of creatures of a particular type captured in a trap during a given time period. Suppose that X has a Poisson distribution with λ = 4.5, so on average traps will contain 4.5 creatures. [The article "Dispersal Dynamics of the Bivalve Gemma gemma in a Patchy Environment" (Ecol. Monogr., 1995: 1–20) suggests this model; the bivalve Gemma gemma is a small clam.] The probability that a trap contains exactly five creatures is

P(X = 5) = e^(−4.5)(4.5)^5/5! = .1708

The probability that a trap has at most five creatures is

P(X ≤ 5) = Σ_{x=0}^5 e^(−4.5)(4.5)^x/x! = e^(−4.5)[1 + 4.5 + (4.5)²/2! + ··· + (4.5)^5/5!] = .7029  ■

The Poisson Distribution as a Limit

The rationale for using the Poisson distribution in many situations is provided by the following proposition.

PROPOSITION  Suppose that in the binomial pmf b(x; n, p) we let n → ∞ and p → 0 in such a way that np approaches a value λ > 0. Then b(x; n, p) → p(x; λ).

Proof  Begin with the binomial pmf:

b(x; n, p) = C(n, x) p^x (1 − p)^(n−x) = [n(n − 1)···(n − x + 1)/x!] p^x (1 − p)^(n−x)

Include n^x in both the numerator and denominator:

b(x; n, p) = [(n/n)((n − 1)/n)···((n − x + 1)/n)/x!] (np)^x (1 − p)^n/(1 − p)^x

Taking the limit as n → ∞ and p → 0 with np → λ,

lim b(x; n, p) = 1 · (λ^x/x!) · lim_{n→∞} (1 − p)^n / 1

The limit on the right can be obtained from the calculus theorem that says the limit of (1 − a_n/n)^n is e^(−a) if a_n → a. Because np → λ,

lim_{n→∞} b(x; n, p) = (λ^x/x!) · lim_{n→∞} (1 − np/n)^n = (λ^x/x!) e^(−λ) = p(x; λ)  ■

It is interesting that Siméon Poisson discovered his distribution by this approach in the 1830s, as a limit of the binomial distribution.

According to the proposition, in any binomial experiment for which n is large and p is small, b(x; n, p) ≈ p(x; λ) where λ = np. As a rule of thumb, this approximation can safely be applied if n > 50 and np < 5.

Example 3.48  If a publisher of nontechnical books takes great pains to ensure that its books are free of typographical errors, so that the probability of any given page containing at least one such error is .005 and errors are independent from page to page, what is the probability that one of its 400-page novels will contain exactly one page with errors? At most three pages with errors? With S denoting a page containing at least one error and F an error-free page, the number X of pages containing at least one error is a binomial rv with n = 400 and p = .005, so np = 2. We wish

P(X = 1) = b(1; 400, .005) ≈ p(1; 2) = e^(−2)(2)^1/1! = .270671

The binomial value is b(1; 400, .005) = .270669, so the approximation is good to five decimal places here.
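The quality of the Poisson approximation in the typographical-error example can be reproduced in a few lines. This sketch (not from the text; the helper names are ours) evaluates the exact binomial pmf and the approximating Poisson pmf side by side for n = 400, p = .005, λ = np = 2.

```python
# Sketch: binomial-to-Poisson approximation for the typo example
# (n = 400, p = .005, so lambda = np = 2).
from math import comb, exp, factorial

def b(x, n, p):
    """Binomial pmf b(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def pois(x, lam):
    """Poisson pmf p(x; lam)."""
    return exp(-lam) * lam**x / factorial(x)

n, p = 400, 0.005
lam = n * p
for x in range(4):
    # exact binomial vs. Poisson approximation; they agree to ~5 decimals
    print(x, round(b(x, n, p), 6), round(pois(x, lam), 6))
```

Summing the first four terms of each column recovers the text's cumulative values .8576 (binomial) and .8571 (Poisson).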
Similarly,

P(X ≤ 3) ≈ Σ_{x=0}^3 p(x; 2) = Σ_{x=0}^3 e^(−2) 2^x/x! = .135335 + .270671 + .270671 + .180447 = .8571

and this again is quite close to the binomial value P(X ≤ 3) = .8576. ■

Table 3.2 shows the Poisson distribution for λ = 3 along with three binomial distributions with np = 3, and Figure 3.8 (from R) plots the Poisson along with the first two binomial distributions. The approximation is of limited use for n = 30, but of course the accuracy is better for n = 100 and much better for n = 300.

Table 3.2  Comparing the Poisson and three binomial distributions

x    n = 30, p = .1    n = 100, p = .03    n = 300, p = .01    Poisson, λ = 3
0    0.042391          0.047553            0.049041            0.049787
1    0.141304          0.147070            0.148609            0.149361
2    0.227656          0.225153            0.224414            0.224042
3    0.236088          0.227474            0.225170            0.224042
4    0.177066          0.170606            0.168877            0.168031
5    0.102305          0.101308            0.100985            0.100819
6    0.047363          0.049610            0.050153            0.050409
7    0.018043          0.020604            0.021277            0.021604
8    0.005764          0.007408            0.007871            0.008102
9    0.001565          0.002342            0.002580            0.002701
10   0.000365          0.000659            0.000758            0.000810

Figure 3.8  Comparing a Poisson and two binomial distributions

Appendix Table A.2 exhibits the cdf F(x; λ) for λ = .1, .2, ..., 1, 2, ..., 10, 15, and 20. For example, if λ = 2, then P(X ≤ 3) = F(3; 2) = .857 as in Example 3.48, whereas P(X = 3) = F(3; 2) − F(2; 2) = .180. Alternatively, many statistical computer packages will generate p(x; λ) and F(x; λ) upon request.

The Mean, Variance, and MGF of X

Since b(x; n, p) → p(x; λ) as n → ∞, p → 0, np → λ, the mean and variance of a binomial variable should approach those of a Poisson variable. These limits are np → λ and np(1 − p) → λ.

PROPOSITION  If X has a Poisson distribution with parameter λ, then E(X) = V(X) = λ.
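The entries of Table 3.2 are straightforward to regenerate. The sketch below (ours, not the book's; it reuses the same pmf helpers as in the earlier approximation check) prints each row of the table, making it easy to see the binomial columns converging to the Poisson column as n grows with np = 3 fixed.

```python
# Sketch: regenerate Table 3.2 -- three binomial pmfs with np = 3
# versus the Poisson pmf with lambda = 3.
from math import comb, exp, factorial

def b(x, n, p):
    """Binomial pmf b(x; n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def pois(x, lam):
    """Poisson pmf p(x; lam)."""
    return exp(-lam) * lam**x / factorial(x)

for x in range(11):
    row = (b(x, 30, 0.1), b(x, 100, 0.03), b(x, 300, 0.01), pois(x, 3))
    print(x, "  ".join(f"{v:.6f}" for v in row))
```

Note the small curiosity visible in the λ = 3 column: p(2; 3) = p(3; 3), since for integer λ the Poisson pmf takes equal values at λ − 1 and λ.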
These results can also be derived directly from the definitions of mean and variance (see Exercise 104 for the mean).

Example 3.47 (continued)  Both the expected number of creatures trapped and the variance of the number trapped equal 4.5, and σ_X = √λ = √4.5 = 2.12.

The moment generating function of the Poisson distribution is easy to derive, and it gives a direct route to the mean and variance (Exercise 108).

PROPOSITION  The Poisson moment generating function is M_X(t) = e^(λ(e^t − 1)).

Proof  The mgf is by definition

M_X(t) = E(e^(tX)) = Σ_{x=0}^∞ e^(tx) e^(−λ) λ^x/x! = e^(−λ) Σ_{x=0}^∞ (λe^t)^x/x! = e^(−λ) e^(λe^t) = e^(λ(e^t − 1))

This uses the series expansion Σ_{x=0}^∞ u^x/x! = e^u. ■

The Poisson Process

A very important application of the Poisson distribution arises in connection with the occurrence of events of a particular type over time. As an example, suppose that starting from a time point that we label t = 0, we are interested in counting the number of radioactive pulses recorded by a Geiger counter. We make the following assumptions about the way in which pulses occur:

1. There exists a parameter α > 0 such that for any short time interval of length Δt, the probability that exactly one pulse is received is α · Δt + o(Δt).
2. The probability of more than one pulse being received during Δt is o(Δt) [which, along with Assumption 1, implies that the probability of no pulses during Δt is 1 − α · Δt − o(Δt)].
3. The number of pulses received during the time interval Δt is independent of the number received prior to this time interval.

Informally, Assumption 1 says that for a short interval of time, the probability of receiving a single pulse is approximately proportional to the length of the time interval, where α is the constant of proportionality. Now let P_k(t) denote the probability that k pulses will be received by the counter during any particular time interval of length t.
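The mgf proposition can be checked numerically without any calculus: differentiating M_X(t) = e^(λ(e^t − 1)) at t = 0 by finite differences should return E(X) = λ and E(X²) − [E(X)]² = λ. The sketch below (ours, using the trap example's λ = 4.5) does exactly that.

```python
# Sketch: numerically check that the Poisson mgf M(t) = exp(lam*(e^t - 1))
# gives M'(0) = lam (the mean) and M''(0) - M'(0)^2 = lam (the variance).
from math import exp

lam = 4.5          # Example 3.47: expected creatures per trap

def M(t):
    return exp(lam * (exp(t) - 1))

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)             # central difference ~ M'(0) = E(X)
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h**2   # ~ M''(0) = E(X^2)
print(m1)           # ~ 4.5
print(m2 - m1**2)   # ~ 4.5, i.e. V(X) = lam as well
```

Both numbers come out within roughly 10^-6 of 4.5, consistent with E(X) = V(X) = λ.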
PROPOSITION  P_k(t) = e^(−αt)(αt)^k/k!, so that the number of pulses during a time interval of length t is a Poisson rv with parameter λ = αt. The expected number of pulses during any such time interval is then αt, so the expected number during a unit interval of time is α. See Exercise 107 for a derivation.

Suppose pulses arrive at the counter at an average rate of 6 per minute, so that α = 6. To find the probability that in a .5-min interval at least one pulse is received, note that the number of pulses in such an interval has a Poisson distribution with parameter αt = 6(.5) = 3 (.5 min is used because α is expressed as a rate per minute). Then with X = the number of pulses received in the 30-s interval,

P(1 ≤ X) = 1 − P(X = 0) = 1 − e^(−3)(3)^0/0! = .950

If in Assumptions 1–3 we replace "pulse" by "event," then the number of events occurring during a fixed time interval of length t has a Poisson distribution with parameter αt. Any process that has this distribution is called a Poisson process, and α is called the rate of the process. Other examples of situations giving rise to a Poisson process include monitoring the status of a computer system over time, with breakdowns constituting the events of interest; recording the number of accidents in an industrial facility over time; answering calls at a telephone switchboard; and observing the number of cosmic-ray showers from an observatory over time.

Instead of observing events over time, consider observing events of some type that occur in a two- or three-dimensional region. For example, we might select on a map a certain region R of a forest, go to that region, and count the number of trees. Each tree would represent an event occurring at a particular point in space.

[Footnote]  A quantity is o(Δt) (read "little o of delta t") if, as Δt approaches 0, so does o(Δt)/Δt. That is, o(Δt) is even more negligible than Δt itself. The quantity (Δt)² has this property, but sin(Δt) does not.
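The Geiger-counter calculation can also be confirmed by simulating the process itself: a Poisson process at rate α has exponential interarrival times with mean 1/α. The Monte Carlo sketch below (ours, not from the text) counts pulses in many simulated half-minute windows and estimates P(at least one pulse).

```python
# Sketch: Monte Carlo check of the Geiger-counter example. Pulses arrive
# as a Poisson process with rate alpha = 6 per minute; we estimate
# P(at least one pulse in a .5-min interval) = 1 - e^{-3} ~ .950.
import random
from math import exp

random.seed(1)
alpha, t, trials = 6.0, 0.5, 100_000

def count_events(rate, horizon):
    """Count arrivals in [0, horizon) by summing exponential interarrival times."""
    total, n = random.expovariate(rate), 0
    while total < horizon:
        n += 1
        total += random.expovariate(rate)
    return n

hits = sum(count_events(alpha, t) >= 1 for _ in range(trials))
print(hits / trials)     # ~ .950 (exact value 1 - e^{-3} = .9502)
```

With 100,000 replications the estimate typically lands within about .002 of the exact value, illustrating that "rate 6 per minute for half a minute" really does behave like a Poisson rv with parameter 3.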
--- Trang 164 --- 3.7 The Poisson Probability Distribution 154 Under assumptions similar to 1-3, it can be shown that the number of events occurring in a region R has a Poisson distribution with parameter « - a(R), where a(R) is the area or volume of R. The quantity « is the expected number of events per unit area or volume. Exercises | Section 3.7 (93-109) 93. Let X, the number of flaws on the surface of a c. If two disks are independently selected, what randomly selected carpet of a particular type, is the probability that neither contains a have a Poisson distribution with parameter missing pulse? agit ine 5 J 5. Ase Appendie Table Ato COmpuee Me! gy A, aciislesin the Los Anseles Times (eos following probabilities: : ; Pe oe 1993) reports that 1 in 200 people carry the igs oa defective gene that causes inherited colon can- nO 8) cer. In a sample of 1000 individuals, what is c PO 10, what is the probability that any particular diode will fail is .01. Suppose 2 : : a cifeuit boar’ Coutains'200 diodes that y arrive during the hour, of which ten a. How many diodes would you expect to fail, havesneivielations) # a aid What ievthe standard deviation of the ¢. What is the probability that ten “no-violation’ number that are expected to fail? cars arrive during the next hour? [Hint: Sum b. What is the (approximate) probability that at the probabilities in part (b) from y = 10to.00.] least four diodes will fail on a randomly 107. a. In a Poisson process, what has to happen in selected board? both the time interval (0, f) and the interval c. If five boards are shipped to a particular cus- (t, t+ Ad) so that no events occur in the tomer, how likely is it that at least four of entire interval (0, t + Af)? Use this and them will work properly? (A board works Assumptions 1-3 to write a relationship properly only if all its diodes work.) between Po(t + Ar) and Po(t). 103. 
The article “Reliability-Based Service-Life be Use is seule art ve iia meet Assessment of Aging Concrete Structures” (/. Tae th ae aa ae hs sh Struct. Engrg., 1993: 1600-1621) suggests that en divide by AF and’ Tet ar & to obtain a Poisson process can be used to represent the an equation involving (ddi)Po(), the deriva- occurrence of structural loads over time. Suppose VE OL FOG) Writ TES pEERIG ty ; the mean time between occurrences of loads ee Nenty thee Rett)ie= ¢~*eatisnesichelequation (which can be shown to be = 1/a) is .5 year, of part:(b). . a. How many loads can be expected to occur d. It can be shown in a manner similar to parts during a 2-year period? (a) and (b) that the P,(s)’s must satisfy the b. What is the probability that more than five system of differential equations loads occur during a 2-year period? a ¢. How long must a time period be so that the 5 Palt) = aPea(t) —aPe(t) k= 1,2,3,... probability of no loads occurring during that period is at most .1? Verify that P,(t) = e “(at)'/k! satisfies the sys- . . tem. (This is actually the only solution.) 104, Let X have a Poisson distribution with parameter 2. Show that E(X) = 4 directly from the defini- 108. a, Use derivatives of the moment generating tion of expected value. (Hint: The first term in function to obtain the mean and variance for the sum equals 0, and then x can be canceled. the Poisson distribution. Now factor out 2 and show that what is left b. As discussed in Section 3.4, obtain the Pois- sums to 1] son mean and variance from Ry(f) = In oo. . [Mx(0]. In terms of effort, how does this 105. Suppose that trees are distributed in a forest method compare with the one in part (a)? according to a two-dimensional Poisson process . . . with parameter , the expected number of trees 109. Show that the binomial moment generating func- per acre, equal to 80. tion converges to the Poisson moment generating --- Trang 166 --- Supplementary Exercises. 
153 function if we let n + 00 and p — 0 in such a saying that convergence of the mgf implies con- way that np approaches a value 2 > 0. [Hint: Use vergence of the probability distribution. In par- the calculus theorem that was used in showing ticular, convergence of the binomial mgf to the that the binomial probabilities converge to the Poisson mgf implies h(x; n, p) > p(x; 4). Poisson probabilities.] There is in fact a theorem 110. Consider a deck consisting of seven cards, what is the probability that the requests of marked 1, 2, ... , 7. Three of these cards are these 15 customers can all be met from exist- selected at random. Define an rv W by W = the ing stock? sum of the resulting numbers, and compute the amitie ici pif Gt W. Then'compute j and”. (Hint Con.. 414A fiend recently planned a, camping trip. He ss had two flashlights, one that required a single 6- sider outcomes as unordered, so that (1, 3, 7) . fag: V battery and another that used two size-D and (3, 1, 7) are not different outcomes. Then . : batteries. He had previously packed two 6-V there are 35 outcomes, and they can be listed. : sean Fs : : : and four size-D batteries in his camper. Sup- (This type of rv actually arises in connection iis ko i Z pose the probability that any particular battery with Wilcoxon's rank-sum test, in which there : : : i works is p and that batteries work or fail inde- is an x sample and a y sample and W is the sum. : Fhe Tan BEER fi alee _ pendently of one another. Our friend wants to ofthestanks Gbthe-<\ ene te combiied sample] take just one flashlight, For what values of 111. After shuffling a deck of 52 cards, a dealer deals p should he take the 6-V flashlight? out 5. Let X = the number of suits represented: 45 4 ¢ suiof n system is-one that will function if in the five-card hand. if pane chawaka sen obe and only if at least k of the n individual compo- Bao ROME RIDER as nents in the system function. 
If individual com- x 1 2 3 4 ponents function independently of one another, each with probability .9, what is the probability pix) | .002 146 588 264 that a 3-out-of-5 system functions? 116. A manufacturer of flashlight batteries wishes to [Hint: p(1) = 4P(all spades), p(2) = 6P(only control the quality of its product by rejecting spades and hearts with at least one of each), any lot in which the proportion of batteries and p(4) = 4P(2 spades M one of each other having unacceptable voltage appears to be too suit).] 5 high. To this end, out of each large lot (10,000 b. Compute 1, 0°, and o. batteries), 25 will be selected and tested. If at 112. The negative binomial rv X was defined as the least 5 of these generate an unacceptable volt- number of F’s preceding the rth S. Let ¥ = the age, the entire lot will be rejected. What is the number of trials necessary to obtain the rth S. In probability that a lot will be rejected if the same manner in which the pmf of X was a. Five percent of the batteries in the lot have derived, derive the pmf of Y. unacceptable voltages? -. hacine 5 ae b. Ten percent of the batteries in the lot have 113. Of all customers Surcising automatic garage- unacceptable voltages? door ‘openers,.75% purchase’ a ichain-deiven c. Twenty percent of the batteries in the lot model. pet X = the numbed among the next have unacceptable voltages? 15 purchasers who select the chain-driven d. What would happen to the probabilities in motel soc yy parts (a)-(c) if the critical rejection number a. What is the pmf of X were increased from 5 t0 6? b. Compute P(X > 10). c. Compute P(6 < X < 10). 117. Of the people passing through an airport metal d. Compute pz and 07. detector, .5% activate it; let X¥ = the number e. If the store currently has in stock 10 chain- among a randomly selected group of 500 who driven models and 8 shaft-driven models, activate the detector. --- Trang 167 --- 154 = cuaprer3 Discrete Random Variables and Probability Distributions a. 
What is the (approximate) pmf of X? ideas to a communication system in which the b. Compute P(X = 5). dichotomy was active/ idle user rather than dis- c. Compute P(S < X). eased/nondiseased.] 118. An educational consulting firm is trying to 120, Let p, denote the probability that any particular decide whether high school students who have code syfabol is erroneously transmitled through a never before used a hand-held calculator can communication system. Assume that on different solve a certain type of problem more easily symbols, errors occur independently of one with a calculator that uses reverse Polish logic anoiier: Suppose also’ thawsdikyprobabiliey, ps or one that does not use this logic. A sample of aittecronegussyynibel is'currectedduponsreceint: 25 students is selected and allowed to practice Let X denote the number of correct symbols in a on both calculators. Then each student is asked message block consisting of n symbols (after the to work one problem on the reverse Polish cal- correction process has ended). What is the prob- culator and a similar problem on the other. Let ability distribution of X? p = P(S), where S indicates that a student . worked the problem more quickly using reverse 124+ The purchaser of a power-generating unit Polish logic than without, and let X = number requires c consecutive successful start-ups before of S’s. the unit will be accepted. Assume that the out- a. If p =.5, what is P(7 < X < 18)? comes of individual start-ups are independent of b. If p = .8, what is P(7 < x < 18)? one another. Let p denote the probability that any ¢. If the claim that p = .5 is to be rejected when Particular start-up is successful. The random either X < 7 or X > 18, what is the probabil- variable of interest is X = the number of start- ity of rejecting the claim when it is actually ups that must be made prior to acceptance. Give decir the pmf of X for the case ¢ = 2. If p = .9, what is d. If the decision to reject the claim p = 5 is P(X < 8)? 
[Hint: For x 2 5, express p(x) “recur- made as in part (c), what is the probability that sively” a8. tetmisof (he pat evaluated at the the claim is not rejected when p = .6? When smaller values x — 3, x — 4, ... , 2.] (This p= 8? problem was suggested by the article “Evalua- e. What decision rule would you choose for tion of a Start-Up Demonstration Test,” J. Qual. rejecting the claim p = .5 if you wanted the Tech., 1983: 103-106.) probability in part (c) to be at most .01? 122. A plan for an executive travelers’ club has been 119. Consider a disease whose presence can be iden- developed by an airline on the premise that 10% tified by carrying out a blood test. Let p denote of its current customers would qualify for mem- the probability that a randomly selected individ- bership: oo ual has the disease. Suppose n individuals are a. Assuming the validity of this premise, among independently selected for testing. One way to 2o randomly velected current.customers, what proceed is to carry out a separate test on each of ag the mrobabihty-that Between Zand GGncla: the n blood samples. A potentially more econom- sive) qualify for membership? . ; i b. Again assuming the validity of the premise, ical approach, group testing, was introduced dur- itt STRUT BEI eA UAC ERT E'SUIORTS ing World War II to identify syphilitic men : ct among army inductees. First, take a part of each vb alilify se the standa eciation Pike blood sample, combine these specimens, and HF npeese noah iG arsandomisample cof carry out a single test. If no one has the disease, ee Lee deals he mertberiim:arrandom sample the result will be negative, and only the one test of 25 current customers who qualify for mem- is required. If at least one individual is diseased, Wefiip. Consider rejecting te ‘compaiy’s Hiedtest onthe scombined.: sample! will. yield 4. premise in favor of the claim that p > .10 if = pany’s premise is rejected when it is actually what is the expected number of tests using this valid? 
procedure? What is the expected number when d. Refer to the decision rule introduced in part n= 5? [The article “Random Multiple-Access (©. What is the. probability ‘that the com- Communication and Group Testing” (EEE anys praise $8 at rejected BVeH OH Trans. Commun. 1984: 769-774) applied these p= 20 (ie, 20% qualify)? --- Trang 168 --- Supplementary Exercises. 155 123. Forty percent of seeds from maize (modern-day 128, Individuals A and B begin to play a sequence of com) ears carry single spikelets, and the other chess games. Let § = {A wins a game}, and sup- 60% carry paired spikelets. A seed with single pose that outcomes of successive games are inde- spikelets will produce an ear with single spikelets pendent with P(S) = p and P(F) = 1 — p (they 29% of the time, whereas a seed with paired never draw). They will play until one of them wins spikelets will produce an ear with single spikelets ten games. Let X = the number of games played 26% of the time. Consider randomly selecting (with possible values 10, 11, ..., 19). ten seeds. a. Forx = 10, 11,..., 19, obtain an expression a, What is the probability that exactly five of for p(x) = P(X =»). these seeds carry a single spikelet and pro- b. Ifa draw is possible, with p = P(S),q = PF), duce an ear with a single spikelet? 1 — p — q = P(draw), what are the possible b. What is the probability that exactly five of the values of X? What is P(20 < X)? [Hint: ears produced by these seeds have single spi- P20 < X) = 1- PX < 20)] kelets? What is the probability that at most 499 4 test for the presence of a disease has probabil- five ears have single spikelets? : ge a ae ona ity .20 of giving a false-positive reading (indicat- 124. A trial has just resulted in a hung jury because ing that an individual has the disease when this is eight members of the jury were in favor of a guilty not the case) and probability .10 of giving a false- verdict and the other four were for acquittal. If the negative result. 
124. A trial has just resulted in a hung jury because eight members of the jury were in favor of a guilty verdict and the other four were for acquittal. If the jurors leave the jury room in random order and each of the first four leaving the room is accosted by a reporter in quest of an interview, what is the pmf of X = the number of jurors favoring acquittal among those interviewed? How many of those favoring acquittal do you expect to be interviewed?

125. A reservation service employs five information operators who receive requests for information independently of one another, each according to a Poisson process with rate α = 2/min.
a. What is the probability that during a given 1-min period, the first operator receives no requests?
b. What is the probability that during a given 1-min period, exactly four of the five operators receive no requests?
c. Write an expression for the probability that during a given 1-min period, all of the operators receive exactly the same number of requests.

126. Grasshoppers are distributed at random in a large field according to a Poisson distribution with parameter λ = 2 per square yard. How large should the radius R of a circular sampling region be taken so that the probability of finding at least one grasshopper in the region equals .99?

127. A newsstand has ordered five copies of a certain issue of a photography magazine. Let X = the number of individuals who come in to purchase this magazine. If X has a Poisson distribution with parameter λ = 4, what is the expected number of copies that are sold?

129. A test for the presence of a disease has probability .20 of giving a false-positive reading (indicating that an individual has the disease when this is not the case) and probability .10 of giving a false-negative result. Suppose that ten individuals are tested, five of whom have the disease and five of whom do not. Let X = the number of positive readings that result.
a. Does X have a binomial distribution? Explain your reasoning.
b. What is the probability that exactly three of the ten test results are positive?

130. The generalized negative binomial pmf is given by
nb(x; r, p) = k(r, x) · p^r (1 − p)^x,  x = 0, 1, 2, …
where k(r, x) = (x + r − 1)(x + r − 2) ··· (r)/x! for x = 1, 2, … and k(r, 0) = 1. Let X, the number of plants of a certain species found in a particular region, have this distribution with p = .3 and r = 2.5. What is P(X = 4)? What is the probability that at least one plant is found?

131. Define a function p(x; λ, μ) by
p(x; λ, μ) = (1/2)e^(−λ)λ^x/x! + (1/2)e^(−μ)μ^x/x! for x = 0, 1, 2, …, and 0 otherwise.
a. Show that p(x; λ, μ) satisfies the two conditions necessary for specifying a pmf. [Note: If a firm employs two typists, one of whom makes typographical errors at the rate of λ per page and the other at rate μ per page, and they each do half the firm's typing, then p(x; λ, μ) is the pmf of X = the number of errors on a randomly chosen page.]
b. If the first typist (rate λ) types 60% of all pages, what is the pmf of X of part (a)?
c. What is E(X) for p(x; λ, μ) given by the displayed expression?
d. What is σ² for p(x; λ, μ) given by the displayed expression?

132. The mode of a discrete random variable X with pmf p(x) is that value x* for which p(x) is largest (the most probable x value).
a. Let X ~ Bin(n, p). By considering the ratio b(x + 1; n, p)/b(x; n, p), show that b(x; n, p) increases with x as long as x < np − (1 − p). Conclude that the mode x* is the integer satisfying (n + 1)p − 1 ≤ x* ≤ (n + 1)p.
b. Show that if X has a Poisson distribution with parameter λ, the mode is the largest integer less than λ. If λ is an integer, show that both λ − 1 and λ are modes.

133. For a particular insurance policy the number of claims by a policyholder in 5 years is Poisson distributed. If the filing of one claim is four times as likely as the filing of two claims, find the expected number of claims.

134. If X is a hypergeometric rv, show directly from the definition that E(X) = nM/N (consider only the case n ≤ M). [Hint: Factor nM/N out of the sum for E(X), and show that the terms inside the sum are of the form h(y; n − 1, M − 1, N − 1), where y = x − 1.]

135. Use the fact that
Σ over all x of (x − μ)² p(x)  ≥  Σ over x with |x − μ| ≥ kσ of (x − μ)² p(x)
to prove Chebyshev's inequality, given in Exercise 43 (Section 3.3).

136. The simple Poisson process of Section 3.7 is characterized by a constant rate α at which events occur per unit time. A generalization is to suppose that the probability of exactly one event occurring in the interval (t, t + Δt) is α(t) · Δt + o(Δt). It can then be shown that the number of events occurring during an interval [t1, t2] has a Poisson distribution with parameter
μ = ∫ from t1 to t2 of α(t) dt
The occurrence of events over time in this situation is called a nonhomogeneous Poisson process. The article "Inference Based on Retrospective Ascertainment," J. Amer. Statist. Assoc., 1989: 360–372, considers the intensity function
α(t) = ae^(bt)
as appropriate for events involving transmission of HIV (the AIDS virus) via blood transfusions. Suppose that a = 2 and b = .6 (close to values suggested in the paper), with time in years.
a. What is the expected number of events in the interval [0, 4]? In [2, 6]?
b. What is the probability that at most 15 events occur in the interval [0, .9907]?

137. A store sells two models of a particular brand, a basic model selling for $30 and a fancy one selling for $50. Let X be the number of people among the next 25 purchasing this brand who choose the fancy one. Then h(X) = revenue = 50X + 30(25 − X) = 20X + 750, a linear function. If the choices are independent and have the same probability, then how is X distributed? Find the mean and standard deviation of h(X). Explain why the choices might not be independent with the same probability.

138. Let X be a discrete rv with possible values 0, 1, 2, … or some subset of these. The function
h(s) = E(s^X) = Σ from x = 0 to ∞ of s^x · p(x)
is called the probability generating function (pgf) [e.g., h(2) = Σ2^x p(x), h(3.7) = Σ(3.7)^x p(x), etc.].
a. Suppose X is the number of children born to a family, and p(0) = .2, p(1) = .5, and p(2) = .3. Determine the pgf of X.
b. Determine the pgf when X has a Poisson distribution with parameter λ.
c. Show that h(1) = 1.
d. Show that h′(s) evaluated at s = 0 equals p(1) (assuming that the derivative can be brought inside the summation, which is justified). What results from taking the second derivative with respect to s and evaluating at s = 0? The third derivative? Explain how successive differentiation of h(s) and evaluation at s = 0 "generates the probabilities in the distribution." Use this to recapture the probabilities of (a) from the pgf. [Note: This shows that the pgf contains all the information about the distribution; knowing h(s) is equivalent to knowing p(x).]

139. Three couples and two single individuals have been invited to a dinner party. Assume independence of arrivals to the party, and suppose that the probability of any particular individual or any particular couple arriving late is .4 (the two members of a couple arrive together). Let X = the number of people who show up late for the party. Determine the pmf of X.

140. Consider a sequence of identical and independent trials, each of which will be a success S or failure F. Let p = P(S) and q = P(F).
a. Define a random variable X as the number of trials necessary to obtain the first S. In Example 3.18 we determined E(X) directly from the definition. Here is another approach. Just as P(B) = P(B|A)P(A) + P(B|A′)P(A′), it can be shown that E(X) = E(X|A)P(A) + E(X|A′)P(A′), where E(X|A) denotes the expected value of X given that the event A has occurred. Now let A = {S on 1st trial}. Show again that E(X) = 1/p. [Hint: Denote E(X) by μ. Then given that the first trial is a failure, one trial has been performed and, starting from the second trial, we are still looking for the first S. This implies that E(X|A′) = 1 + μ.]
b. The expected value property of part (a) can be extended as follows. Let A1, A2, …, Ak be a partition of the sample space (so that when the experiment is performed, exactly one of the Ai will occur). Then E(X) = E(X|A1)·P(A1) + E(X|A2)·P(A2) + ··· + E(X|Ak)·P(Ak). Let X = the number of trials necessary to obtain two consecutive S's, and determine E(X). [Hint: Consider the partition with k = 3 and A1 = {F}, A2 = {SS}, A3 = {SF}.] [Note: It is not possible to determine E(X) directly from the definition; the complication is the word consecutive.]

Bibliography

Durrett, Richard, Elementary Probability for Applications, Cambridge Univ. Press, London, England, 2009.
Johnson, Norman, Samuel Kotz, and Adrienne Kemp, Univariate Discrete Distributions (3rd ed.), Wiley-Interscience, New York, 2005. An encyclopedia of information on discrete distributions.
Olkin, Ingram, Cyrus Derman, and Leon Gleser, Probability Models and Applications (2nd ed.), Macmillan, New York, 1994. Contains an in-depth discussion of both general properties of discrete and continuous distributions and results for specific distributions.
Pitman, Jim, Probability, Springer-Verlag, New York, 1993.
Ross, Sheldon, Introduction to Probability Models (9th ed.), Academic Press, New York, 2006. A good source of material on the Poisson process and generalizations and a nice introduction to other topics in applied probability.

CHAPTER FOUR

Continuous Random Variables and Probability Distributions

Introduction

As mentioned at the beginning of Chapter 3, the two important types of random variables are discrete and continuous. In this chapter, we study the second general type of random variable that arises in many applied problems. Sections 4.1 and 4.2 present the basic definitions and properties of continuous random variables, their probability distributions, and their moment generating functions.
In Section 4.3, we study in detail the normal random variable and distribution, unquestionably the most important and useful in probability and statistics. Sections 4.4 and 4.5 discuss some other continuous distributions that are often used in applied work. In Section 4.6, we introduce a method for assessing whether given sample data is consistent with a specified distribution. Section 4.7 discusses methods for finding the distribution of a transformed random variable.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_4, © Springer Science+Business Media, LLC 2012

4.1 Probability Density Functions and Cumulative Distribution Functions

A discrete random variable (rv) is one whose possible values either constitute a finite set or else can be listed in an infinite sequence (a list in which there is a first element, a second element, etc.). A random variable whose set of possible values is an entire interval of numbers is not discrete. Recall from Chapter 3 that a random variable X is continuous if (1) possible values comprise either a single interval on the number line (for some A < B, any number x between A and B is a possible value) or a union of disjoint intervals, and (2) P(X = c) = 0 for any number c that is a possible value of X.

Example 4.1 If in the study of the ecology of a lake, we make depth measurements at randomly chosen locations, then X = the depth at such a location is a continuous rv. Here A is the minimum depth in the region being sampled, and B is the maximum depth.

Example 4.2 If a chemical compound is randomly selected and its pH X is determined, then X is a continuous rv because any pH value between 0 and 14 is possible. If more is known about the compound selected for analysis, then the set of possible values might be a subinterval of [0, 14], such as 5.5 ≤ x ≤ 6.5, but X would still be continuous.

Example 4.3 Let X represent the amount of time a randomly selected customer spends waiting for a haircut before his/her haircut commences. Your first thought might be that X is a continuous random variable, since a measurement is required to determine its value. However, there are customers lucky enough to have no wait whatsoever before climbing into the barber's chair. So it must be the case that P(X = 0) > 0. Conditional on no chairs being empty, though, the waiting time will be continuous since X could then assume any value between some minimum possible time A and a maximum possible time B. This random variable is neither purely discrete nor purely continuous but instead is a mixture of the two types.

One might argue that although in principle variables such as height, weight, and temperature are continuous, in practice the limitations of our measuring instruments restrict us to a discrete (though sometimes very finely subdivided) world. However, continuous models often approximate real-world situations very well, and continuous mathematics (the calculus) is frequently easier to work with than the mathematics of discrete variables and distributions.

Probability Distributions for Continuous Variables

Suppose the variable X of interest is the depth of a lake at a randomly chosen point on the surface. Let M = the maximum depth (in meters), so that any number in the interval [0, M] is a possible value of X. If we "discretize" X by measuring depth to the nearest meter, then possible values are nonnegative integers less than or equal to M. The resulting discrete distribution of depth can be pictured using a probability histogram.
If we draw the histogram so that the area of the rectangle above any possible integer k is the proportion of the lake whose depth is (to the nearest meter) k, then the total area of all rectangles is 1. A possible histogram appears in Figure 4.1(a).

If depth is measured much more accurately and the same measurement axis as in Figure 4.1(a) is used, each rectangle in the resulting probability histogram is much narrower, although the total area of all rectangles is still 1. A possible histogram is pictured in Figure 4.1(b); it has a much smoother appearance than the histogram in Figure 4.1(a). If we continue in this way to measure depth more and more finely, the resulting sequence of histograms approaches a smooth curve, as pictured in Figure 4.1(c). Because for each histogram the total area of all rectangles equals 1, the total area under the smooth curve is also 1. The probability that the depth at a randomly chosen point is between a and b is just the area under the smooth curve between a and b. It is exactly a smooth curve of the type pictured in Figure 4.1(c) that specifies a continuous probability distribution.

Figure 4.1 (a) Probability histogram of depth measured to the nearest meter; (b) probability histogram of depth measured to the nearest centimeter; (c) a limit of a sequence of discrete histograms

DEFINITION Let X be a continuous rv. Then a probability distribution or probability density function (pdf) of X is a function f(x) such that for any two numbers a and b with a ≤ b,
P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx
That is, the probability that X takes on a value in the interval [a, b] is the area above this interval and under the graph of f(x). For f(x) to be a legitimate pdf, it must satisfy the following two conditions:
1. f(x) ≥ 0 for all x
2. ∫ from −∞ to ∞ of f(x) dx = [area under the entire graph of f(x)] = 1

Example 4.4 The direction of an imperfection with respect to a reference line on a circular object such as a tire, brake rotor, or flywheel is, in general, subject to uncertainty. Consider the reference line connecting the valve stem on a tire to the center point, and let X be the angle measured clockwise to the location of an imperfection. One possible pdf for X is
f(x) = 1/360 for 0 ≤ x < 360, and f(x) = 0 otherwise.
The pdf is graphed in Figure 4.3. Clearly f(x) ≥ 0. The area under the density curve is just the area of a rectangle: (height)(base) = (1/360)(360) = 1. The probability that the angle is between 90° and 180° is
P(90 ≤ X ≤ 180) = ∫ from 90 to 180 of (1/360) dx = 90/360 = .25

DEFINITION A continuous rv X is said to have a uniform distribution on the interval [A, B] if the pdf of X is
f(x; A, B) = 1/(B − A) for A ≤ x ≤ B, and 0 otherwise.

Example 4.5 In traffic flow, time headway is the elapsed time between the instant that one car finishes passing a fixed point and the instant that the next car begins to pass that point. Let X = the time headway between two randomly selected consecutive cars (s), and suppose the pdf of X is
f(x) = .15e^(−.15(x−.5)) for x ≥ .5, and f(x) = 0 otherwise.
Clearly f(x) ≥ 0; to show that ∫ from −∞ to ∞ of f(x) dx = 1, we use the calculus result ∫ from a to ∞ of e^(−kx) dx = (1/k)e^(−k·a). Then
∫ from −∞ to ∞ of f(x) dx = ∫ from .5 to ∞ of .15e^(−.15(x−.5)) dx = .15e^(.075) ∫ from .5 to ∞ of e^(−.15x) dx = .15e^(.075) · (1/.15)e^(−.15(.5)) = 1

Figure 4.4 The density curve for headway time in Example 4.5

The probability that headway time is at most 5 s is
P(X ≤ 5) = ∫ from −∞ to 5 of f(x) dx = ∫ from .5 to 5 of .15e^(−.15(x−.5)) dx
= .15e^(.075) ∫ from .5 to 5 of e^(−.15x) dx = e^(.075)(−e^(−.75) + e^(−.075))
= 1.078(−.472 + .928) = .491 = P(less than 5 s) = P(X < 5)

Unlike discrete distributions such as the binomial, hypergeometric, and negative binomial, the distribution of any given continuous rv cannot usually be derived using simple probabilistic arguments. Instead, one must make a judicious choice of pdf based on prior knowledge and available data. Fortunately, some general pdf families have been found to fit well in a wide variety of experimental situations; several of these are discussed later in the chapter.

Just as in the discrete case, it is often helpful to think of the population of interest as consisting of X values rather than individuals or objects. The pdf is then a model for the distribution of values in this numerical population, and from this model various population characteristics (such as the mean) can be calculated. Several of the most important concepts introduced in the study of discrete distributions also play an important role for continuous distributions.
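As a numerical aside (not part of the book; the helper functions and step counts are our own choices), both computations in Example 4.5, the total area of 1 and P(X ≤ 5) ≈ .491, can be checked with a simple midpoint-rule integration:

```python
import math

def f(x):
    """pdf of time headway from Example 4.5"""
    return 0.15 * math.exp(-0.15 * (x - 0.5)) if x >= 0.5 else 0.0

def integrate(g, a, b, n=100000):
    """crude midpoint-rule approximation of the integral of g over [a, b]"""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0.5, 200)   # tail beyond 200 s is negligible
p_at_most_5 = integrate(f, 0.5, 5)
print(round(total, 4))           # ≈ 1.0
print(round(p_at_most_5, 4))     # ≈ 0.4908, i.e. .491 to three decimals
```

The agreement with the closed-form answer confirms that the exponential-tail model really does place about 49% of its probability below 5 s.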
Definitions analogous to those in Chapter 3 involve replacing summation by integration.

The Cumulative Distribution Function

The cumulative distribution function (cdf) F(x) for a discrete rv X gives, for any specified number x, the probability P(X ≤ x). It is obtained by summing the pmf p(y) over all possible values y satisfying y ≤ x. The cdf of a continuous rv gives the same probabilities and is obtained by integrating the pdf.

DEFINITION The cumulative distribution function F(x) for a continuous rv X is defined for every number x by
F(x) = P(X ≤ x) = ∫ from −∞ to x of f(y) dy
For each x, F(x) is the area under the density curve to the left of x.

Example 4.6 Let X have a uniform distribution on [A, B]. For x < A, F(x) = 0, since there is no area under the density curve to the left of such an x. For x ≥ B, F(x) = 1, since all the area is accumulated to the left of such an x. Finally, for A ≤ x ≤ B,
F(x) = ∫ from −∞ to x of f(y) dy = ∫ from A to x of 1/(B − A) dy = (x − A)/(B − A)

Figure 4.6 The pdf for a uniform distribution

The entire cdf is
F(x) = 0 for x < A;  F(x) = (x − A)/(B − A) for A ≤ x < B;  F(x) = 1 for x ≥ B
The graph of this cdf appears in Figure 4.7.

Figure 4.7 The cdf for a uniform distribution

Using F(x) to Compute Probabilities

The importance of the cdf here, just as for discrete rv's, is that probabilities of various intervals can be computed from a formula or table for F(x).

PROPOSITION Let X be a continuous rv with pdf f(x) and cdf F(x). Then for any number a,
P(X > a) = 1 − F(a)
and for any two numbers a and b with a < b,
P(a ≤ X ≤ b) = F(b) − F(a)

Example 4.7 Suppose the pdf of the magnitude X of a dynamic load on a bridge (in newtons) is f(x) = 1/8 + (3/8)x for 0 ≤ x ≤ 2, and 0 otherwise. For any number x between 0 and 2,
F(x) = ∫ from 0 to x of (1/8 + (3/8)y) dy = x/8 + (3/16)x²
Thus
P(X > 1) = 1 − P(X ≤ 1) = 1 − F(1) = 1 − [1/8 + 3/16] = .688

Figure 4.9 The pdf and cdf for Example 4.7

Once the cdf has been obtained, any probability involving X can easily be calculated without any further integration.

Obtaining f(x) from F(x)

For X discrete, the pmf is obtained from the cdf by taking the difference between two F(x) values. The continuous analog of a difference is a derivative. The following result is a consequence of the Fundamental Theorem of Calculus.

PROPOSITION If X is a continuous rv with pdf f(x) and cdf F(x), then at every x at which the derivative F′(x) exists, F′(x) = f(x).

Example 4.8 (Example 4.6 continued) When X has a uniform distribution, F(x) is differentiable except at x = A and x = B, where the graph of F(x) has sharp corners. Since F(x) = 0 for x < A and F(x) = 1 for x > B, F′(x) = 0 = f(x) for such x. For A < x < B,
F′(x) = d/dx [(x − A)/(B − A)] = 1/(B − A) = f(x)

Percentiles of a Continuous Distribution

When we say that an individual's test score was at the 85th percentile of the population, we mean that 85% of all population scores were below that score and 15% were above. Similarly, the 40th percentile is the score that exceeds 40% of all scores and is exceeded by 60% of all scores.

DEFINITION Let p be a number between 0 and 1. The (100p)th percentile of the distribution of a continuous rv X, denoted by η(p), is defined by
p = F(η(p)) = ∫ from −∞ to η(p) of f(y) dy    (4.2)

According to Expression (4.2), η(p) is that value on the measurement axis such that 100p% of the area under the graph of f(x) lies to the left of η(p) and 100(1 − p)% lies to the right. Thus η(.75), the 75th percentile, is such that the area under the graph of f(x) to the left of η(.75) is .75. Figure 4.10 illustrates the definition.

Figure 4.10 The (100p)th percentile of a continuous distribution

Example 4.9 The distribution of the amount of gravel (in tons) sold by a construction supply company in a given week is a continuous rv X with pdf
f(x) = (3/2)(1 − x²) for 0 ≤ x ≤ 1, and 0 otherwise.
The cdf of sales for any x between 0 and 1 is
F(x) = ∫ from 0 to x of (3/2)(1 − y²) dy = (3/2)(x − x³/3)
The (100p)th percentile of this distribution satisfies p = F(η(p)). For the 50th percentile, solving .5 = (3/2)(η − η³/3) gives η(.5) ≈ .347; in the long run, half of all weeks result in sales of less than .347 ton.

DEFINITION The median of a continuous distribution, denoted by μ̃, is the 50th percentile, so μ̃ satisfies .5 = F(μ̃). That is, half the area under the density curve is to the left of μ̃ and half is to the right.

Exercises Section 4.1

2. The grade point average (GPA) X of a randomly selected student is a continuous rv with pdf f(x) = k[1 − (x − 3)²] for 2 ≤ x ≤ 4, and 0 otherwise.
a. Sketch the graph of f(x).
b. Find the value of k.
c. Find the probability that a GPA exceeds 3.
d. Find the probability that a GPA is within .25 of 3.
e. Find the probability that a GPA differs from 3 by more than .5.

3. The error involved in making a certain measurement is a continuous rv X with pdf f(x) = (3/32)(4 − x²) for −2 ≤ x ≤ 2, and 0 otherwise.
a. Sketch the graph of f(x).
b. Compute P(X > 0).
c. Compute P(−1 < X < 1).
d. Compute P(X < −.5 or X > .5).

4. Let X denote the vibratory stress (psi) on a wind turbine blade at a particular wind speed in a wind tunnel. The article "Blade Fatigue Life Assessment with Application to VAWTS" (J. Solar Energy Engrg., 1982: 107–111) proposes the Rayleigh distribution, with pdf
f(x; θ) = (x/θ²)e^(−x²/(2θ²)) for x > 0, and 0 otherwise,
as a model for the X distribution.
a. Verify that f(x; θ) is a legitimate pdf.
b. Suppose θ = 100 (a value suggested by a graph in the article). What is the probability that X is at most 200? Less than 200? At least 200?

5. The cdf of checkout duration X described in Exercise 1 is F(x) = x²/4 for 0 ≤ x < 2, with F(x) = 0 for x < 0 and F(x) = 1 for x ≥ 2. Use this to compute
a. P(X ≤ 1)
b. P(.5 ≤ X ≤ 1)
c. P(X > .5)
d. The median checkout duration μ̃ [solve .5 = F(μ̃)]

7. The time X (min) for a lab assistant to prepare the equipment for a certain experiment is believed to have a uniform distribution with A = 25 and B = 35.
a. Write the pdf of X and sketch its graph.
b. What is the probability that preparation time exceeds 33 min?
c. What is the probability that preparation time is within 2 min of the mean time? [Hint: Identify μ from the graph of f(x).]
d. For any a such that 25 < a < a + 2 < 35, what is the probability that preparation time is between a and a + 2 min?

8. Suppose the total waiting time Y (min) of a commuter has pdf f(y) = y/25 for 0 ≤ y < 5, f(y) = 2/5 − y/25 for 5 ≤ y ≤ 10, and 0 otherwise.
a. Sketch the pdf of Y.
b. Verify that ∫ from −∞ to ∞ of f(y) dy = 1.
c. What is the probability that total waiting time is at most 3 min?
d. What is the probability that total waiting time is at most 8 min?
e. What is the probability that total waiting time is between 3 and 8 min?
f. What is the probability that total waiting time is either less than 2 min or more than 6 min?

9. Consider again the pdf of X = time headway given in Example 4.5. What is the probability that time headway is
a. At most 6 s?
b. More than 6 s? At least 6 s?
c. Between 5 and 6 s?

10. A family of pdf's that has been used to approximate the distribution of income, city population size, and size of firms is the Pareto family. The family has two parameters, k and θ, both > 0, and the pdf is
f(x; k, θ) = kθ^k / x^(k+1) for x ≥ θ, and 0 otherwise.
a. Sketch the graph of f(x; k, θ).
b. Verify that the total area under the graph equals 1.
c. If the rv X has pdf f(x; k, θ), for any fixed b > θ, obtain an expression for P(X ≤ b).
d. For θ < a < b, obtain an expression for the probability P(a ≤ X ≤ b).

11. The cdf of X = measurement error (Exercise 3) is F(x) = 0 for x < −2, F(x) = 1/2 + (3/32)(4x − x³/3) for −2 ≤ x < 2, and F(x) = 1 for x ≥ 2.
a. Compute P(X < 0).
b. Compute P(−1 < X < 1).
c. Compute P(.5 < X).
d. Verify that f(x) is as given in Exercise 3 by obtaining F′(x).
e. Verify that μ̃ = 0.

12. Let Y be a rv obtained from X by the linear relation Y = aX + b.
b. How is the 90th percentile of the Y distribution related to the 90th percentile of the X distribution? Verify your conjecture.
c. More generally, how is any particular percentile of the Y distribution related to the corresponding percentile of the X distribution?

13. Example 4.5 introduced the concept of time headway in traffic flow and proposed a particular distribution for X = the headway between two randomly selected consecutive cars (s). Suppose that in a different traffic environment, the distribution of time headway has the form
f(x) = k/x⁴ for x > 1, and 0 for x ≤ 1.
a. Determine the value of k for which f(x) is a legitimate pdf.
b. Obtain the cumulative distribution function.
c. Use the cdf from (b) to determine the probability that headway exceeds 2 s and also the probability that headway is between 2 and 3 s.

14. [This type of cdf is suggested in the article "Variability in Measured Bedload-Transport Rates" (Water Resources Bull., 1985: 39–48) as a model for a hydrologic variable.]

4.2 Expected Values and Moment Generating Functions

In Section 4.1 we saw that the transition from a discrete cdf to a continuous cdf entails replacing summation by integration. The same thing is true in moving from expected values and mgf's of discrete variables to those of continuous variables.

Expected Values

For a discrete random variable X, E(X) was obtained by summing x · p(x) over possible X values. Here we replace summation by integration and the pmf by the pdf to get a continuous weighted average.

DEFINITION The expected or mean value of a continuous rv X with pdf f(x) is
μ_X = E(X) = ∫ from −∞ to ∞ of x f(x) dx
This expected value will exist provided that ∫ from −∞ to ∞ of |x| f(x) dx < ∞.

Example 4.10 (Example 4.9 continued) The pdf of weekly gravel sales X was
f(x) = (3/2)(1 − x²) for 0 ≤ x ≤ 1, and 0 otherwise,
so
E(X) = ∫ from 0 to 1 of x · (3/2)(1 − x²) dx = (3/2)(1/2 − 1/4) = 3/8

DEFINITION The moment generating function (mgf) of a continuous rv X is
M_X(t) = E(e^(tX)) = ∫ from −∞ to ∞ of e^(tx) f(x) dx
As in the discrete case, the mgf exists when M_X(t) is defined for an interval of numbers that includes zero in its interior.

Example 4.15 Consider the pdf f(x) = 2e^(−2x) for x > 0, and f(x) = 0 otherwise. Then
M_X(t) = ∫ from −∞ to ∞ of e^(tx) f(x) dx = ∫ from 0 to ∞ of e^(tx)(2e^(−2x)) dx = ∫ from 0 to ∞ of 2e^(−(2−t)x) dx = 2/(2 − t)  if t < 2.
This mgf exists because it is defined for an interval of values including 0 in its interior. Notice that M_X(0) = 2/(2 − 0) = 1. Of course, M_X(0) = 1 must always be the case, but it is useful as a check to set t = 0 and see if the result is 1.

Recall that in the discrete case we had a proposition stating the uniqueness principle: The mgf uniquely identifies the distribution.
This proposition is equally valid in the continuous case. Two distributions have the same pdf if and only if they have the same moment generating function, assuming that the mgf exists.

Example 4.16 Let X be a random variable with mgf M_X(t) = 2/(2 − t), t < 2. Can we find the pdf f(x)? Yes, because we know from Example 4.15 that if f(x) = 2e^(−2x) when x > 0, and f(x) = 0 otherwise, then M_X(t) = 2/(2 − t), t < 2. The uniqueness principle implies that this is the only pdf with the given mgf, and therefore f(x) = 2e^(−2x) for x > 0, and f(x) = 0 otherwise.

In the discrete case we had a theorem on how to get moments from the mgf, and this theorem applies also in the continuous case: E(X^r) = M_X^(r)(0), the rth derivative of the mgf with respect to t evaluated at t = 0, if the mgf exists.

Example 4.17 In Example 4.15 for the pdf f(x) = 2e^(−2x) when x > 0, and f(x) = 0 otherwise, we found M_X(t) = 2/(2 − t) = 2(2 − t)^(−1), t < 2. To find the mean and variance, first compute the derivatives:
M′_X(t) = −2(2 − t)^(−2)(−1) = 2/(2 − t)²
M″_X(t) = (−2)(2)(2 − t)^(−3)(−1) = 4/(2 − t)³
Setting t to 0 in the first derivative gives the expected checkout time as E(X) = M′_X(0) = .5. Setting t to 0 in the second derivative gives the second moment E(X²) = M″_X(0) = .5. The variance of the checkout time is then
V(X) = σ² = E(X²) − [E(X)]² = .5 − .5² = .25

As mentioned in Section 3.4, there is another way of doing the differentiation that is sometimes more straightforward. Define R_X(t) = ln[M_X(t)], where ln(u) is the natural log of u. Then if the moment generating function exists,
μ = E(X) = R′_X(0)
σ² = V(X) = R″_X(0)
The derivation for the discrete case in Exercise 54 of Section 3.4 also applies here in the continuous case.

We will sometimes need to transform X using a linear function Y = aX + b. As discussed in the discrete case, if X has the mgf M_X(t) and Y = aX + b, then M_Y(t) = e^(bt)M_X(at).
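The derivative computations in Example 4.17 can be sanity-checked numerically, which is a useful habit whenever an mgf is differentiated by hand. This sketch is our own illustration (the finite-difference helpers and step sizes are arbitrary choices, not part of the book):

```python
def mgf(t):
    """mgf of the pdf f(x) = 2e^(-2x), x > 0 (Example 4.15); valid for t < 2"""
    return 2.0 / (2.0 - t)

def deriv(g, t, h=1e-5):
    """numerical first derivative via central difference"""
    return (g(t + h) - g(t - h)) / (2 * h)

def deriv2(g, t, h=1e-4):
    """numerical second derivative via central difference"""
    return (g(t + h) - 2 * g(t) + g(t - h)) / h ** 2

mean = deriv(mgf, 0.0)            # E(X) = M'(0) = .5
second_moment = deriv2(mgf, 0.0)  # E(X^2) = M''(0) = .5
variance = second_moment - mean ** 2
print(round(mean, 4), round(second_moment, 4), round(variance, 4))  # 0.5 0.5 0.25
```

The numerical values match the exact derivatives 2/(2 − t)² and 4/(2 − t)³ evaluated at t = 0.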
Example 4.18 Let X have a uniform distribution on the interval [A, B], so its pdf is f(x) = 1/(B − A) for A ≤ x ≤ B, and 0 otherwise. Then for t ≠ 0,
M_X(t) = ∫ from A to B of e^(tx)/(B − A) dx = (e^(tB) − e^(tA)) / (t(B − A))
and M_X(0) = 1.

Exercises Section 4.2

If X has the Pareto distribution introduced in Exercise 10 of Section 4.1:
a. If k > 1, compute E(X).
b. What can you say about E(X) if k = 1?
c. If k > 2, show that V(X) = kθ²/[(k − 1)²(k − 2)].
d. If k = 2, what can you say about V(X)?

For the total waiting time Y introduced in Exercise 8 of Section 4.1:
a. Compute and sketch the cdf of Y. [Hint: Consider separately 0 ≤ y < 5 and 5 ≤ y ≤ 10.]

Show that the value of the skewness for the given pdf is .566. What would the skewness be for the pdf f(x) = 3(1 − x)², 0 < x < 1?

If Y = 2X, show that M_Y(t) = E(e^(t·2X)) = M_X(2t).

4.3 The Normal Distribution

DEFINITION A continuous rv X is said to have a normal distribution with parameters μ and σ, where −∞ < μ < ∞ and σ > 0, if the pdf of X is
f(x; μ, σ) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)),  −∞ < x < ∞

The case μ = 0, σ = 1 gives the standard normal distribution; a random variable having this distribution is denoted by Z, and its cdf P(Z ≤ z) is denoted by Φ(z). Values of Φ(z) are tabulated in Appendix Table A.3.

Example 4.19 Determine the following standard normal probabilities: (a) P(Z ≤ 1.25), (b) P(Z >
The 99th percentile of the standard normal distribution is that value on the horizontal axis such that the area under the curve to the left of the value is .9900. Now Appendix Table A.3 gives for fixed z the area under the standard normal curve --- Trang 196 --- 4.3 The Normal Distribution 183 to the left of z, whereas here we have the area and want the value of z. This is the “inverse” problem to P(Z < z) = ? so the table is used in an inverse fashion: Find in the middle of the table .9900; the row and column in which it lies identify the 99th z percentile. Here .9901 lies in the row marked 2.3 and column marked .03, so the 99th percentile is (approximately) z = 2.33. (See Figure 4.17.) By symmetry, the first percentile is the negative of the 99th percentile, so it equals —2.33 (1% lies below the first and above the 99th). (See Figure 4.18.) Shaded area = .9900 ‘¢ - z curve ———— EES | 99th percentile Figure 4.17 Finding the 99th percentile / curve Shaded area = .01 | ° | —2.33 = 1st percentile 2.33 = 99th percentile Figure 4.18 The relationship between the 1st and 99th percentiles . In general, the (100p)th percentile is identified by the row and column of Appendix Table A.3 in which the entry p is found (e.g., the 67th percentile is obtained by finding .6700 in the body of the table, which gives z = .44). If p does not appear, the number closest to it is often used, although linear interpolation gives amore accurate answer. For example, to find the 95th percentile, we look for .9500 inside the table. Although .9500 does not appear, both .9495 and .9505 do, corresponding to z = 1.64 and 1.65, respectively. Since .9500 is halfway between the two probabilities that do appear, we will use 1.645 as the 95th percentile and —1.645 as the Sth percentile. Zq Notation In statistical inference, we will need the values on the measurement axis that capture certain small tail areas under the standard normal curve. 
--- Trang 197 --- 184 = cuarrer4 Continuous Random Variables and Probability Distributions NOTATION Z, will denote the value on the measurement axis for which « of the area under the z curve lies to the right of z,. (See Figure 4.19.) For example, 29 captures upper-tail area .10 and zo; captures upper-tail area .O1. Zz ame Shaded area = P(Z>z,) = a aw, ‘ | Figure 4.19 z, notation illustrated Since « of the area under the standard normal curve lies to the right of z,, 1 — a of the area lies to the left of z,. Thus z, is the 100(1 — «)th percentile of the standard normal distribution. By symmetry the area under the standard normal curve to the left of —z, is also x. The z,’s are usually referred to as z critical values. Table 4.1 lists the most useful standard normal percentiles and z,, values. Table 4.1 Standard normal percentiles and critical values Percentile 90. 95 975 99 99.5 99.9 99.95 2 (tail area) ol 05 025 01 .005 001 .0005 z, = 100(1 — «)th percentile 1.28 1.645 196 2.33 2.58 3.08 3.27 The 100(1 — .05)th = 95th percentile of the standard normal distribution is zs, $0 Zo5 = 1.645. The area under the standard normal curve to the left of —z95 is also .05. (See Figure 4.20.) 2 curve ‘Shaded area = .05 \ ‘Shaded area = .05 | ° | 1.645 = -295 2.95 = 95th percentile = 1.645 Figure 4.20 Finding zo5 . --- Trang 198 --- 4.3 The Normal Distribution 185 Nonstandard Normal Distributions When X ~ N(j, o°), probabilities involving X are computed by “standardizing.” The standardized variable is (X — 2)/o. Subtracting y shifts the mean from jp to zero, and then dividing by o scales the variable so that the standard deviation is 1 rather than o. PROPOSITION Tf X has a normal distribution with mean yu and standard deviation o, then ya kot o has a standard normal distribution. 
Thus

$$P(a \le X \le b) = P\left(\frac{a-\mu}{\sigma} \le Z \le \frac{b-\mu}{\sigma}\right) = \Phi\left(\frac{b-\mu}{\sigma}\right) - \Phi\left(\frac{a-\mu}{\sigma}\right)$$

$$P(X \le a) = \Phi\left(\frac{a-\mu}{\sigma}\right) \qquad P(X \ge b) = 1 - \Phi\left(\frac{b-\mu}{\sigma}\right)$$

The key idea of the proposition is that by standardizing, any probability involving X can be expressed as a probability involving a standard normal rv Z, so that Appendix Table A.3 can be used. This is illustrated in Figure 4.21. The proposition can be proved by writing the cdf of Z = (X − μ)/σ as

$$P(Z \le z) = P(X \le \sigma z + \mu) = \int_{-\infty}^{\sigma z + \mu} f(x; \mu, \sigma)\,dx$$

and then differentiating with respect to z to show that the resulting pdf is the standard normal pdf f(z; 0, 1).

Example 4.22  Suppose the reaction time X of a randomly selected driver to a particular stimulus has a normal distribution with mean value μ = 1.25 s and standard deviation σ = .46 s. The probability that reaction time is between 1.00 s and 1.75 s is

$$P(1.00 \le X \le 1.75) = P\left(\frac{1.00 - 1.25}{.46} \le Z \le \frac{1.75 - 1.25}{.46}\right) = P(-.54 \le Z \le 1.09) = \Phi(1.09) - \Phi(-.54) = .8621 - .2946 = .5675$$

This is illustrated in Figure 4.22. Similarly, if we view 2 s as a critically long reaction time, the probability that actual reaction time will exceed this value is

$$P(X > 2) = P\left(Z > \frac{2 - 1.25}{.46}\right) = P(Z > 1.63) = 1 - \Phi(1.63) = .0516$$

Figure 4.22 Normal curves for Example 4.22

Standardizing amounts to nothing more than calculating a distance from the mean value and then re-expressing the distance as some number of standard deviations. For example, if μ = 100 and σ = 15, then x = 130 corresponds to z = (130 − 100)/15 = 30/15 = 2.00. Thus 130 is 2 standard deviations above (to the right of) the mean value. Similarly, standardizing 85 gives (85 − 100)/15 = −1.00, so 85 is 1 standard deviation below the mean. The z table applies to any normal distribution provided that we think in terms of number of standard deviations away from the mean value.

Example 4.23  The return on a diversified investment portfolio is normally distributed. What is the probability that the return is within 1 standard deviation of its mean value? This question can be answered without knowing either μ or σ, as long as the distribution is known to be normal; in other words, the answer is the same for any normal distribution:

$$P(\mu - \sigma \le X \le \mu + \sigma) = P(-1.00 \le Z \le 1.00) = \Phi(1.00) - \Phi(-1.00) = .6826$$

Example 4.24  The amount X dispensed by a filling machine is normally distributed with μ = 64 oz and σ = .78 oz. What capacity c is exceeded by only .5% of all fills? We want the value c for which P(X > c) = .005, or, equivalently, P(X ≤ c) = .995. Thus c is the 99.5th percentile of the normal distribution with μ = 64 and σ = .78.
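The standardizing calculations of Example 4.22, and the "backwards" percentile problem just set up, can both be sketched with the standard library (small differences from the text's four-decimal answers come from the table's rounding of z to two decimals):

```python
from statistics import NormalDist

# Example 4.22: reaction time, mu = 1.25 s, sigma = .46 s
X = NormalDist(mu=1.25, sigma=0.46)
p1 = X.cdf(1.75) - X.cdf(1.00)      # P(1.00 <= X <= 1.75); text gives .5675
p2 = 1 - X.cdf(2.00)                # P(X > 2); text gives .0516

# Example 4.24: c is the 99.5th percentile of a N(64, .78) distribution
c = NormalDist(mu=64, sigma=0.78).inv_cdf(0.995)

print(round(p1, 4), round(p2, 4), round(c, 1))
```

Note that `inv_cdf` handles the nonstandard distribution directly; no separate standardizing step is needed.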
The 99.5th percentile of the standard normal distribution is 2.58, so

c = 99.5th percentile = μ + (2.58)σ = 64 + (2.58)(.78) = 64 + 2.0 = 66 oz

This is illustrated in Figure 4.23.

Figure 4.23 Distribution of amount dispensed for Example 4.24

The Normal Distribution and Discrete Populations

The normal distribution is often used as an approximation to the distribution of values in a discrete population. In such situations, extra care must be taken to ensure that probabilities are computed in an accurate manner.

Example 4.25  IQ (as measured by a standard test) is known to be approximately normally distributed with μ = 100 and σ = 15. What is the probability that a randomly selected individual has an IQ of at least 125? Letting X = the IQ of a randomly chosen person, we wish P(X ≥ 125). The temptation here is to standardize X ≥ 125 immediately as in previous examples. However, the IQ population is actually discrete, since IQs are integer-valued, so the normal curve is an approximation to a discrete probability histogram, as pictured in Figure 4.24.

Figure 4.24 A normal approximation to a discrete distribution

The rectangles of the histogram are centered at integers, so IQs of at least 125 correspond to rectangles beginning at 124.5, as shaded in Figure 4.24. Thus we really want the area under the approximating normal curve to the right of 124.5. Standardizing this value gives P(Z ≥ 1.63) = .0516. If we had standardized X ≥ 125, we would have obtained P(Z ≥ 1.67) = .0475. The difference is not great, but the answer .0516 is more accurate. Similarly, P(X = 125) would be approximated by the area between 124.5 and 125.5, since the area under the normal curve above the single value 125 is zero.

The correction for discreteness of the underlying distribution in Example 4.25 is often called a continuity correction.
It is useful in the following application of the normal distribution to the computation of binomial probabilities. The normal distribution was actually created as an approximation to the binomial distribution (by Abraham De Moivre in the 1730s).

Approximating the Binomial Distribution

Recall that the mean value and standard deviation of a binomial random variable X are μX = np and σX = √(npq), respectively, where q = 1 − p. Figure 4.25 displays a probability histogram for the binomial distribution with n = 20, p = .6 [so μ = 20(.6) = 12 and σ = √(20(.6)(.4)) = 2.19]. A normal curve with mean value and standard deviation equal to the corresponding values for the binomial distribution has been superimposed on the probability histogram. Although the probability histogram is a bit skewed (because p ≠ .5), the normal curve gives a very good approximation, especially in the middle part of the picture. The area of any rectangle (probability of any particular X value) except those in the extreme tails can be accurately approximated by the corresponding normal curve area. Thus P(X = 10) = B(10; 20, .6) − B(9; 20, .6) = .117, whereas the area under the normal curve between 9.5 and 10.5 is P(−1.14 ≤ Z ≤ −.68) = .120. More generally, as long as the binomial probability histogram is not too skewed, binomial probabilities can be well approximated by normal curve areas. It is then customary to say that X has approximately a normal distribution.

Figure 4.25 Binomial probability histogram for n = 20, p = .6 with normal approximation curve superimposed

PROPOSITION

Let X be a binomial rv based on n trials with success probability p. Then if the binomial probability histogram is not too skewed, X has approximately a normal distribution with μ = np and σ = √(npq). In particular, for x = a possible value of X,

$$P(X \le x) = B(x; n, p) \approx \left(\text{area under the normal curve to the left of } x + .5\right) = \Phi\left(\frac{x + .5 - np}{\sqrt{npq}}\right)$$

In practice, the approximation is adequate provided that both np ≥ 10 and nq ≥ 10.
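The n = 20, p = .6 comparison above is easy to verify; a quick sketch comparing the exact binomial value of P(X = 10) with the continuity-corrected normal-curve area between 9.5 and 10.5:

```python
from math import comb, sqrt
from statistics import NormalDist

n, p = 20, 0.6
mu, sigma = n * p, sqrt(n * p * (1 - p))   # 12 and 2.19

# exact: B(10; 20, .6) - B(9; 20, .6) is just the pmf at 10
exact = comb(n, 10) * p**10 * (1 - p)**10          # text gives .117

# normal approximation with continuity correction
N = NormalDist(mu, sigma)
approx = N.cdf(10.5) - N.cdf(9.5)                  # text gives .120

print(round(exact, 3), round(approx, 3))
```

Repeating this for values out in the tails shows the approximation deteriorating there, as the text warns.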
If either np < 10 or nq < 10, the binomial distribution may be too skewed for the (symmetric) normal curve to give accurate approximations.

Example 4.26  Suppose that 25% of all licensed drivers in a state do not have insurance. Let X be the number of uninsured drivers in a random sample of size 50 (somewhat perversely, a success is an uninsured driver), so that p = .25. Then μ = 12.5 and σ = 3.062. Since np = 50(.25) = 12.5 ≥ 10 and nq = 37.5 ≥ 10, the approximation can safely be applied:

$$P(X \le 10) = B(10; 50, .25) \approx \Phi\left(\frac{10 + .5 - 12.5}{3.062}\right) = \Phi(-.65) = .2578$$

Similarly, the probability that between 5 and 15 (inclusive) of the selected drivers are uninsured is

$$P(5 \le X \le 15) = B(15; 50, .25) - B(4; 50, .25) \approx \Phi\left(\frac{15.5 - 12.5}{3.062}\right) - \Phi\left(\frac{4.5 - 12.5}{3.062}\right) = \Phi(.98) - \Phi(-2.61) = .8320$$

Exercises | Section 4.3

63. Suppose that 10% of all steel shafts produced by a certain process are nonconforming but can be reworked (rather than having to be scrapped). Consider a random sample of 200 shafts, and let X denote the number among these that are nonconforming and can be reworked. What is the (approximate) probability that X is
a. At most 30?
b. Less than 30?
c. Between 15 and 25 (inclusive)?

65. Chebyshev's inequality states that for any number k satisfying k ≥ 1, P(|X − μ| ≥ kσ) ≤ 1/k² (see Exercise 43 in Section 3.3 for an interpretation and Exercise 135 in the Chapter 3 Supplementary Exercises for a proof). Obtain this probability in the case of a normal distribution for k = 1, 2, and 3, and compare to the upper bound.

66. Show that if X has a normal distribution with parameters μ and σ, then Y = aX + b (a linear function of X) also has a normal distribution. What are the parameters of the distribution of Y [i.e., E(Y) and V(Y)]? [Hint: Write the cdf of Y, P(Y ≤ y), as an integral involving the pdf of X.]

67. There is no closed-form expression for Φ(z), but good approximations appear in the literature; one approximation with error of magnitude less than .042% is P(Z ≥ z) ≈ .5 exp{−[(83z + 351)z + 562]/[(703/z) + 165]}. Use this to calculate approximations to the following probabilities, and compare whenever possible to the probabilities obtained from Appendix Table A.3.
a. P(Z ≥ 1)
b. P(Z < −3)
c. P(−4 < Z < 4)
d. P(Z > 5)
64. Suppose only 70% of all drivers in a certain state regularly wear a seat belt. A random sample of 500 drivers is selected. What is the probability that
a. Between 320 and 370 (inclusive) of the drivers in the sample regularly wear a seat belt?
b. Fewer than 325 of those in the sample regularly wear a seat belt? Fewer than 315?

68. The moment generating function can be used to find the mean and variance of the normal distribution.
a. Use derivatives of MX(t) to verify that E(X) = μ and V(X) = σ².
b. Repeat (a) using RX(t) = ln[MX(t)], and compare with part (a) in terms of effort.

4.4 The Gamma Distribution and Its Relatives

The graph of any normal pdf is bell-shaped and thus symmetric. In many practical situations, the variable of interest to the experimenter might have a skewed distribution. A family of pdf's that yields a wide variety of skewed distributional shapes is the gamma family. To define the family of gamma distributions, we first need to introduce a function that plays an important role in many branches of mathematics.

DEFINITION

For α > 0, the gamma function Γ(α) is defined by

$$\Gamma(\alpha) = \int_0^\infty x^{\alpha-1} e^{-x}\,dx \qquad (4.5)$$

The most important properties of the gamma function are the following:
1. For any α > 1, Γ(α) = (α − 1) · Γ(α − 1) (via integration by parts)
2. For any positive integer n, Γ(n) = (n − 1)!
3. Γ(1/2) = √π

By Expression (4.5), if we let

$$f(x; \alpha) = \begin{cases} \dfrac{x^{\alpha-1} e^{-x}}{\Gamma(\alpha)} & x > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (4.6)$$

then f(x; α) ≥ 0 and ∫₀^∞ f(x; α) dx = Γ(α)/Γ(α) = 1, so f(x; α) satisfies the two basic properties of a pdf.

The Family of Gamma Distributions

DEFINITION

A continuous random variable X is said to have a gamma distribution if the pdf of X is

$$f(x; \alpha, \beta) = \begin{cases} \dfrac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\,\Gamma(\alpha)} & x > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (4.7)$$

where the parameters α and β satisfy α > 0, β > 0. The standard gamma distribution has β = 1, so the pdf of a standard gamma rv is given by (4.6).
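The three properties of Γ can be checked numerically with math.gamma from Python's standard library (a quick sketch, not part of the text):

```python
import math

# Property 1: Gamma(a) = (a - 1) * Gamma(a - 1), here with a = 5.3
recursion_ok = math.isclose(math.gamma(5.3), 4.3 * math.gamma(4.3))

# Property 2: Gamma(n) = (n - 1)! for a positive integer n
factorial_ok = math.isclose(math.gamma(6), math.factorial(5))   # both are 120

# Property 3: Gamma(1/2) = sqrt(pi)
half_ok = math.isclose(math.gamma(0.5), math.sqrt(math.pi))

print(recursion_ok, factorial_ok, half_ok)
```

Property 2 is why the gamma function is often described as the continuous extension of the factorial.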
Figure 4.26(a) illustrates the graphs of the gamma pdf for several (α, β) pairs, whereas Figure 4.26(b) presents graphs of the standard gamma pdf. For the standard pdf, when α ≤ 1, f(x; α) is strictly decreasing as x increases; when α > 1, f(x; α) rises to a maximum and then decreases. The parameter β in (4.7) is called the scale parameter because values other than 1 either stretch or compress the pdf in the x direction.

Figure 4.26 (a) Gamma density curves; (b) standard gamma density curves

PROPOSITION

The moment generating function of a gamma random variable is

$$M_X(t) = \frac{1}{(1 - \beta t)^{\alpha}} \qquad t < 1/\beta$$

Proof  By definition, the mgf is

$$M_X(t) = E(e^{tX}) = \int_0^\infty e^{tx}\,\frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\,\Gamma(\alpha)}\,dx = \int_0^\infty \frac{x^{\alpha-1} e^{-x(1/\beta - t)}}{\beta^{\alpha}\,\Gamma(\alpha)}\,dx$$

One way to evaluate the integral is to express the integrand in terms of a gamma density. This means writing the exponent in the form −x/b and having b take the place of β. We have −x(1/β − t) = −x[(1 − βt)/β] = −x/[β/(1 − βt)]. Multiplying and at the same time dividing the integrand by [β/(1 − βt)]^α gives

$$M_X(t) = \frac{1}{(1 - \beta t)^{\alpha}} \int_0^\infty \frac{x^{\alpha-1} e^{-x/[\beta/(1 - \beta t)]}}{[\beta/(1 - \beta t)]^{\alpha}\,\Gamma(\alpha)}\,dx$$

But now the integrand is a gamma pdf [with scale parameter β/(1 − βt)], so it integrates to 1. This establishes the result. ∎

The mean and variance can be obtained from the moment generating function (Exercise 80), but they can also be obtained directly through integration (Exercise 81).

PROPOSITION

The mean and variance of a random variable X having the gamma distribution f(x; α, β) are

E(X) = μ = αβ    V(X) = σ² = αβ²

When X is a standard gamma rv, the cdf of X,

$$F(x; \alpha) = \int_0^x \frac{y^{\alpha-1} e^{-y}}{\Gamma(\alpha)}\,dy \qquad x > 0 \qquad (4.8)$$

is called the incomplete gamma function [sometimes the incomplete gamma function refers to Expression (4.8) without the denominator Γ(α) in the integrand].
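When α is a positive integer n, repeated integration by parts of (4.8) gives the closed form F(x; n) = 1 − Σ_{k=0}^{n−1} e^{−x} x^k / k!, so small tabulations like Appendix Table A.4 can be reproduced directly. A sketch (the checks against .908 and .999 correspond to gamma-cdf values used later in this section):

```python
import math

def gamma_cdf(x, n, beta=1.0):
    """Gamma cdf F(x; n, beta) = F(x/beta; n) for integer shape n."""
    y = x / beta
    return 1 - sum(math.exp(-y) * y**k / math.factorial(k) for k in range(n))

f42 = gamma_cdf(4, 2)                    # F(4; 2) = 1 - 5e^(-4); Table A.4: .908
p_surv = 1 - gamma_cdf(30, 8, beta=15)   # 1 - F(30/15; 8)

print(round(f42, 3), round(p_surv, 3))
```

The closed form also makes the gamma–Poisson connection explicit: the subtracted sum is a Poisson cdf.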
There are extensive tables of F(x; α) available; in Appendix Table A.4, we present a small tabulation for α = 1, 2, …, 10 and x = 1, 2, …, 15.

Example 4.27  Suppose the reaction time X of a randomly selected individual to a certain stimulus has a standard gamma distribution with α = 2. The probability that the reaction time is more than 4 s is

$$P(X > 4) = 1 - P(X \le 4) = 1 - F(4; 2) = 1 - .908 = .092$$

The incomplete gamma function can also be used to compute probabilities involving nonstandard gamma distributions.

PROPOSITION

Let X have a gamma distribution with parameters α and β. Then for any x > 0, the cdf of X is given by

$$P(X \le x) = F(x; \alpha, \beta) = F\!\left(\frac{x}{\beta};\, \alpha\right)$$

where F(·; α) is the incomplete gamma function.

Example 4.28  Suppose X has a gamma distribution with parameters α = 8 and β = 15 (so that X might, for example, model a survival time in weeks, with mean αβ = 120). Then

$$P(X \ge 30) = 1 - P(X < 30) = 1 - F\!\left(\frac{30}{15};\, 8\right) = 1 - .001 = .999$$

The Exponential Distribution

The family of exponential distributions provides probability models that are widely used in engineering and science disciplines.

DEFINITION

X is said to have an exponential distribution with parameter λ (λ > 0) if the pdf of X is

$$f(x; \lambda) = \begin{cases} \lambda e^{-\lambda x} & x \ge 0 \\ 0 & \text{otherwise} \end{cases} \qquad (4.9)$$

The exponential pdf is a special case of the general gamma pdf (4.7) in which α = 1 and β has been replaced by 1/λ [some authors use the form (1/β)e^{−x/β}]. The mean and variance of X are then

$$\mu = \alpha\beta = \frac{1}{\lambda} \qquad \sigma^2 = \alpha\beta^2 = \frac{1}{\lambda^2}$$

Both the mean and standard deviation of the exponential distribution equal 1/λ. Graphs of several exponential pdf's appear in Figure 4.27.

Figure 4.27 Exponential density curves

Unlike the general gamma pdf, the exponential pdf can be easily integrated. In particular, the cdf of X is

$$F(x; \lambda) = \begin{cases} 0 & x < 0 \\ 1 - e^{-\lambda x} & x \ge 0 \end{cases}$$

Example 4.29  The response time X at an on-line computer terminal (the elapsed time between the end of a user's inquiry and the beginning of the system's response to that inquiry) has an exponential distribution with expected response time equal to 5 s. Then E(X) = 1/λ = 5, so λ = .2. The probability that the response time is at most 10 s is

$$P(X \le 10) = F(10; .2) = 1 - e^{-(.2)(10)} = 1 - e^{-2} = 1 - .135 = .865$$
The probability that response time is between 5 and 10 s is

$$P(5 \le X \le 10) = F(10; .2) - F(5; .2) = (1 - e^{-2}) - (1 - e^{-1}) = .233$$

The exponential distribution is frequently used as a model for the time between occurrences of successive events, such as customers arriving at a service facility or calls coming in to a switchboard. The reason is its close relationship to the Poisson process: if the number of events occurring in any time interval of length t has a Poisson distribution with parameter αt, then the elapsed time between successive events is exponentially distributed with parameter λ = α. Although a complete proof is beyond the scope of the text, the result is easily verified for the time X₁ until the first event occurs:

$$P(X_1 > t) = P[\text{no events in } (0, t)] = \frac{e^{-\alpha t}(\alpha t)^0}{0!} = e^{-\alpha t}$$

so that P(X₁ ≤ t) = 1 − e^{−αt}, which is exactly the cdf of the exponential distribution.

Example 4.30  Calls are received at a 24-hour "suicide hotline" according to a Poisson process with rate α = .5 call per day. Then the number of days X between successive calls has an exponential distribution with parameter value .5, so the probability that more than 2 days elapse between calls is

$$P(X > 2) = 1 - P(X \le 2) = 1 - F(2; .5) = e^{-(.5)(2)} = .368$$

The expected time between successive calls is 1/.5 = 2 days.

Another important application of the exponential distribution is to model the distribution of component lifetime. A partial reason for the popularity of such applications is the "memoryless" property of the exponential distribution. Suppose component lifetime is exponentially distributed with parameter λ. After putting the component into service, we leave for a period of t₀ hours and then return to find the component still working; what now is the probability that it lasts at least an additional t hours? In symbols, we wish P(X ≥ t + t₀ | X ≥ t₀). By the definition of conditional probability,

$$P(X \ge t + t_0 \mid X \ge t_0) = \frac{P[(X \ge t + t_0) \cap (X \ge t_0)]}{P(X \ge t_0)}$$

But the event X ≥ t₀ in the numerator is redundant, since both events can occur if and only if X ≥ t + t₀. Therefore,

$$P(X \ge t + t_0 \mid X \ge t_0) = \frac{P(X \ge t + t_0)}{P(X \ge t_0)} = \frac{1 - F(t + t_0; \lambda)}{1 - F(t_0; \lambda)} = \frac{e^{-\lambda(t + t_0)}}{e^{-\lambda t_0}} = e^{-\lambda t}$$

This conditional probability is identical to the original probability P(X ≥ t) that the component lasted t hours. Thus the distribution of additional lifetime is exactly the same as the original distribution of lifetime, so at each point in time the component shows no effect of wear. In other words, the distribution of remaining lifetime is independent of current age.
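Both the hotline probability and the memoryless identity are one-liners from the exponential cdf; a small sketch (λ is from the example; the particular t₀ and t values below are arbitrary illustrative choices):

```python
import math

def exp_surv(t, lam):
    """P(X > t) = e^(-lambda * t) for an exponential rv."""
    return math.exp(-lam * t)

p = exp_surv(2, 0.5)                  # P(X > 2) for the hotline; text gives .368

# memoryless property: P(X > t0 + t | X > t0) equals P(X > t)
lam, t0, t = 0.5, 3.0, 2.0
conditional = exp_surv(t0 + t, lam) / exp_surv(t0, lam)

print(round(p, 3), round(conditional, 3), round(exp_surv(t, lam), 3))
```

The last two printed values agree, which is exactly the "no effect of wear" statement of the derivation above.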
Although the memoryless property can be justified at least approximately in many applied problems, in other situations components deteriorate with age or occasionally improve with age (at least up to a certain point). More general lifetime models are then furnished by the gamma, Weibull, and lognormal distributions (the latter two are discussed in the next section).

The Chi-Squared Distribution

DEFINITION

Let ν be a positive integer. Then a random variable X is said to have a chi-squared distribution with parameter ν if the pdf of X is the gamma density with α = ν/2 and β = 2. The pdf of a chi-squared rv is thus

$$f(x; \nu) = \begin{cases} \dfrac{x^{(\nu/2)-1} e^{-x/2}}{2^{\nu/2}\,\Gamma(\nu/2)} & x > 0 \\ 0 & x \le 0 \end{cases} \qquad (4.10)$$

The parameter ν is called the number of degrees of freedom (df) of X. The symbol χ² is often used in place of "chi-squared."

The chi-squared distribution is important because it is the basis for a number of procedures in statistical inference. The reason for this is that chi-squared distributions are intimately related to normal distributions (see Exercise 79). We will discuss the chi-squared distribution in more detail in Section 6.4 and the chapters on inference.

Exercises | Section 4.4 (69–81)

69. Evaluate the following:
a. Γ(6)
b. Γ(5/2)
c. F(4; 5) (the incomplete gamma function)
d. F(5; 4)
e. F(0; 4)

70. Let X have a standard gamma distribution with α = 7. Evaluate the following:
a. P(X ≤ 5)
b. P(X < 5)
c. P(X > 8)

74. Let X denote the distance (m) that an animal moves from its birth site to the first territorial vacancy it encounters. Suppose that for banner-tailed kangaroo rats, X has an exponential distribution with parameter λ = .01386 (as suggested in the article "Competition and Dispersal from Multiple Nests," Ecology, 1997: 873–889).
a. What is the probability that the distance is at most 100 m? At most 200 m? Between 100 and 200 m?
b. What is the probability that distance exceeds the mean distance by more than 2 standard deviations?
80. a. Find the mean and variance of the gamma distribution by differentiating the moment generating function MX(t).
b. Find the mean and variance of the gamma distribution by differentiating RX(t) = ln[MX(t)].

81. Find the mean and variance of the gamma distribution using integration to obtain E(X) and E(X²). [Hint: Express the integrand in terms of a gamma density.]

4.5 Other Continuous Distributions

The normal, gamma (including exponential), and uniform families of distributions provide a wide variety of probability models for continuous variables, but there are many practical situations in which no member of these families fits a set of observed data very well. Statisticians and other investigators have developed other families of distributions that are often appropriate in practice.

The Weibull Distribution

The family of Weibull distributions was introduced by the Swedish physicist Waloddi Weibull in 1939; his 1951 article "A Statistical Distribution Function of Wide Applicability" (J. Appl. Mech., 18: 293–297) discusses a number of applications.

DEFINITION

A random variable X is said to have a Weibull distribution with parameters α and β (α > 0, β > 0) if the pdf of X is

$$f(x; \alpha, \beta) = \begin{cases} \dfrac{\alpha}{\beta^{\alpha}}\, x^{\alpha-1} e^{-(x/\beta)^{\alpha}} & x \ge 0 \\ 0 & x < 0 \end{cases} \qquad (4.11)$$

In some situations there are theoretical justifications for the appropriateness of the Weibull distribution, but in many applications f(x; α, β) simply provides a good fit to observed data for particular values of α and β. When α = 1, the pdf reduces to the exponential distribution (with λ = 1/β), so the exponential distribution is a special case of both the gamma and Weibull distributions.
However, there are gamma distributions that are not Weibull distributions and vice versa, so one family is not a subset of the other. Both α and β can be varied to obtain a number of different distributional shapes, as illustrated in Figure 4.28. Note that β is a scale parameter, so different values stretch or compress the graph in the x direction.

Figure 4.28 Weibull density curves

Integrating to obtain E(X) and E(X²) yields

$$\mu = \beta\,\Gamma\!\left(1 + \frac{1}{\alpha}\right) \qquad \sigma^2 = \beta^2\left\{\Gamma\!\left(1 + \frac{2}{\alpha}\right) - \left[\Gamma\!\left(1 + \frac{1}{\alpha}\right)\right]^2\right\}$$

The computation of μ and σ² thus necessitates using the gamma function. The integration ∫₀ˣ f(y; α, β) dy is easily carried out to obtain the cdf of X. The cdf of a Weibull rv having parameters α and β is

$$F(x; \alpha, \beta) = \begin{cases} 0 & x < 0 \\ 1 - e^{-(x/\beta)^{\alpha}} & x \ge 0 \end{cases} \qquad (4.12)$$

Example 4.31  In recent years the Weibull distribution has been used to model engine emissions of various pollutants. Let X denote the amount of NOx emission (g/gal) from a randomly selected four-stroke engine of a certain type, and suppose that X has a Weibull distribution with α = 2 and β = 10 (suggested by information in the article "Quantification of Variability and Uncertainty in Lawn and Garden Equipment NOx and Total Hydrocarbon Emission Factors," J. Air Waste Manag. Assoc., 2002: 435–448). The corresponding density curve has exactly the shape of the α = 2, β = 1 curve in Figure 4.28, stretched along the horizontal axis by the scale factor β = 10. Then

$$P(X \le 10) = F(10; 2, 10) = 1 - e^{-(10/10)^2} = 1 - e^{-1} = .632$$

Similarly, P(X ≤ 25) = .998, so the distribution is almost entirely concentrated on values between 0 and 25.
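The closed-form cdf (4.12) makes both the emission probabilities and — by straightforward algebraic inversion of (4.12) — percentiles easy to compute; a sketch with the example's α = 2 and β = 10:

```python
import math

def weibull_cdf(x, alpha, beta):
    """F(x; alpha, beta) = 1 - exp(-(x/beta)^alpha) for x >= 0 (Expression 4.12)."""
    return 1 - math.exp(-((x / beta) ** alpha)) if x >= 0 else 0.0

def weibull_percentile(p, alpha, beta):
    """Invert (4.12): the (100p)th percentile is beta * (-ln(1 - p))^(1/alpha)."""
    return beta * (-math.log(1 - p)) ** (1 / alpha)

p10 = weibull_cdf(10, 2, 10)            # P(X <= 10); text gives .632
p25 = weibull_cdf(25, 2, 10)            # P(X <= 25); text gives .998
c95 = weibull_percentile(0.95, 2, 10)   # 95th percentile of the emission distribution

print(round(p10, 3), round(p25, 3), round(c95, 1))
```

Because no table is needed, the Weibull family is particularly convenient for this kind of percentile work.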
The value c, which separates the 5% of all engines having the largest amounts of NOx emissions from the remaining 95%, satisfies

$$.95 = 1 - e^{-(c/10)^2}$$

Isolating the exponential term on one side, taking logarithms, and solving the resulting equation gives c ≈ 17.3 as the 95th percentile of the emission distribution.

Frequently, in practical situations, a Weibull model may be reasonable except that the smallest possible X value may be some value γ not assumed to be zero (this would also apply to a gamma model). The quantity γ can then be regarded as a third parameter of the distribution, which is what Weibull did in his original work. For, say, γ = 3, all curves in Figure 4.28 would be shifted 3 units to the right. This is equivalent to saying that X − γ has the pdf (4.11), so that the cdf of X is obtained by replacing x in (4.12) by x − γ.

Example 4.32  An understanding of the volumetric properties of asphalt is important in designing mixtures that will result in high-durability pavement. The article "Is a Normal Distribution the Most Appropriate Statistical Distribution for Volumetric Properties in Asphalt Mixtures?" (J. of Testing and Evaluation, Sept. 2009: 1–11) used the analysis of some sample data to recommend that for a particular mixture, X = air void volume (%) be modeled with a three-parameter Weibull distribution. Suppose the values of the parameters are γ = 4, α = 1.3, and β = .8 (quite close to estimates given in the article). For x ≥ 4, the cumulative distribution function is

$$F(x; \alpha, \beta, \gamma) = F(x; 1.3, .8, 4) = 1 - e^{-[(x-4)/.8]^{1.3}}$$

The probability that the air void volume of a specimen is between 5% and 6% is

$$P(5 \le X \le 6) = F(6; 1.3, .8, 4) - F(5; 1.3, .8, 4) = e^{-(1.25)^{1.3}} - e^{-(2.5)^{1.3}} \approx .263 - .037 = .226$$

The Lognormal Distribution

DEFINITION

A nonnegative rv X is said to have a lognormal distribution if the rv Y = ln(X) has a normal distribution. The resulting pdf of a lognormal rv when ln(X) is normally distributed with parameters μ and σ is

$$f(x; \mu, \sigma) = \begin{cases} \dfrac{1}{\sqrt{2\pi}\,\sigma x}\, e^{-[\ln(x) - \mu]^2 / (2\sigma^2)} & x \ge 0 \\ 0 & x < 0 \end{cases}$$

Be careful here; the parameters μ and σ are not the mean and standard deviation of X but of ln(X).
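Because ln(X) is normal, lognormal probabilities reduce to standard normal ones, and the moment formula E(X) = e^{μ+σ²/2} can be checked by direct numerical integration of x·f(x). A sketch with illustrative (hypothetical) parameters μ = 0, σ = .5 for ln(X):

```python
import math
from statistics import NormalDist

mu, sigma = 0.0, 0.5            # hypothetical parameters of ln(X)

def lognormal_cdf(x):           # cdf via the standard normal cdf
    return NormalDist().cdf((math.log(x) - mu) / sigma)

formula_mean = math.exp(mu + sigma**2 / 2)

# crude midpoint-rule integral of x * f(x) on (0, 20]
# (the factor x cancels the 1/x inside the lognormal pdf)
step = 0.001
numeric_mean = sum(
    step * math.exp(-((math.log(x) - mu) ** 2) / (2 * sigma**2))
    / (math.sqrt(2 * math.pi) * sigma)
    for x in (step * (i + 0.5) for i in range(20000))
)

print(round(lognormal_cdf(1.0), 3), round(formula_mean, 4), round(numeric_mean, 4))
```

Note that lognormal_cdf(e^μ) = .5, i.e., e^μ is the median; since the mean exceeds e^μ, the distribution's positive skew is visible even without a plot.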
The mean and variance of X can be shown to be

$$E(X) = e^{\mu + \sigma^2/2} \qquad V(X) = e^{2\mu + \sigma^2}\left(e^{\sigma^2} - 1\right)$$

In Chapter 6, we will present a theoretical justification for this distribution in connection with the Central Limit Theorem, but as with other distributions, the lognormal can be used as a model even in the absence of such justification. Figure 4.29 illustrates graphs of the lognormal pdf; although a normal curve is symmetric, a lognormal curve has a positive skew.

Figure 4.29 Lognormal density curves

Because ln(X) has a normal distribution, the cdf of X can be expressed in terms of the cdf Φ(z) of a standard normal rv Z. For x > 0,

$$F(x; \mu, \sigma) = P(X \le x) = P[\ln(X) \le \ln(x)] = P\left(Z \le \frac{\ln(x) - \mu}{\sigma}\right) = \Phi\left(\frac{\ln(x) - \mu}{\sigma}\right) \qquad (4.13)$$

Exercises | Section 4.5

83. The authors of the article "A Probabilistic Insulation Life Model for Combined Thermal-Electrical Stresses" (IEEE Trans. Electr. Insul., 1985: 519–522) state that "the Weibull distribution is widely used in statistical problems relating to aging of solid insulating materials subjected to aging and stress." They propose the use of the distribution as a model for time (in hours) to failure of solid insulating specimens subjected to ac voltage. The values of the parameters depend on the voltage and temperature; suppose α = 2.5 and β = 200 (values suggested by data in the article).
a. What is the probability that a specimen's lifetime is at most 250? Less than 250? More than 300?
b. What is the probability that a specimen's lifetime is between 100 and 250?
c. What value is such that exactly 50% of all specimens have lifetimes exceeding that value?

84. Let X = the time (in 10⁻¹ weeks) from shipment of a defective product until the customer returns the product.

85. Let X have a Weibull distribution with the pdf from Expression (4.11). Verify that μ = βΓ(1 + 1/α). [Hint: In the integral for E(X), make the change of variable y = (x/β)^α, so that x = βy^{1/α}.]

86. a. In Exercise 82, what is the median lifetime of such tubes? [Hint: Use Expression (4.12).]
b. In Exercise 84, what is the median return time?
c. If X has a Weibull distribution with the cdf from Expression (4.12), obtain a general expression for the (100p)th percentile of the distribution.
d. In Exercise 84, the company wants to refuse to accept returns after t weeks. For what value of t will only 10% of all returns be refused?

87. Let X denote the ultimate tensile strength (ksi) at −200° of a randomly selected steel specimen of a certain type that exhibits "cold brittleness" at low temperatures. Suppose that X has a Weibull distribution with α = 20 and β = 100.
a. What is the probability that X is at most 105 ksi?
b. If specimen after specimen is selected, what is the long-run proportion having strength values between 100 and 105 ksi?
c. What is the median of the strength distribution?

88. The authors of a paper from which the data in Exercise 25 of Chapter 1 was extracted suggested that a reasonable probability model for drill lifetime was a lognormal distribution with μ = 4.5 and σ = .8.
a. What are the mean value and standard deviation of lifetime?
b. What is the probability that lifetime is at most 100?
c. What is the probability that lifetime is at least 200? Greater than 200?

89. Let X = the hourly median power (in decibels) of received radio signals transmitted between two cities. The authors of the article "Families of Distributions for Hourly Median Power and Instantaneous Power of Received Radio Signals" (J. Res. Nat. Bureau Standards, vol. 67D, 1963: 753–762) argue that the lognormal distribution provides a reasonable probability model for X. If the parameter values are μ = 3.5 and σ = 1.2, calculate the mean value and standard deviation of received power.

92. The article "The Statistics of Phytotoxic Air Pollutants" (J. Roy. Statist. Soc., 1989: 183–198) suggests the lognormal distribution as a model for SO₂ concentration above a forest. Suppose the
parameter values are μ = 1.9 and σ = .9.
a. What are the mean value and standard deviation of concentration?
b. What is the probability that concentration is at most 10? Between 5 and 10?

93. What condition on α and β is necessary for the standard beta pdf to be symmetric?

94. Suppose the proportion X of surface area in a randomly selected quadrate that is covered by a certain plant has a standard beta distribution.
a. Compute E(X) and V(X).
b. Compute P(X ≤ .2).
c. Compute P(.2 ≤ X ≤ .4).

4.6 Probability Plots

An investigator will often want to know whether it is plausible that a sample came from a normal (or other specified) population distribution. One informal diagnostic is a probability plot: the ith smallest sample observation is paired with the [100(i − .5)/n]th percentile of the assumed distribution, and the resulting (percentile, observation) pairs are plotted. If the sample actually came from the assumed distribution, the points should fall close to a straight line. Figure 4.34 shows such a plot for a sample of breakdown voltage observations.

Figure 4.34 Normal probability plot of the breakdown voltage data from MINITAB

A nonnormal population distribution can often be placed in one of the following three categories:

1. It is symmetric and has "lighter tails" than does a normal distribution; that is, the density curve declines more rapidly out in the tails than does a normal curve.
2. It is symmetric and heavy-tailed compared to a normal distribution.
3. It is skewed.

A uniform distribution is light-tailed, since its density function drops to zero outside a finite interval. The density function f(x) = 1/[π(1 + x²)], for −∞ < x < ∞, is one example of a heavy-tailed distribution, since 1/(1 + x²) declines much less rapidly than does e^{−x²/2}. Lognormal and Weibull distributions are among those that are skewed. When the points in a normal probability plot do not adhere well to a straight line, the pattern will frequently suggest that the population distribution is in a particular one of these three categories.

When the sample is from a light-tailed distribution, the largest and smallest observations are usually not as extreme as would be expected from a normal random sample. Visualize a straight line drawn through the middle part of the plot; points on the far right tend to fall below the line (observed value < z percentile), whereas points on the far left tend to fall above the line (observed value > z percentile). The result is an S-shaped pattern of the type pictured in Figure 4.32. A sample from a heavy-tailed distribution also tends to produce an S-shaped plot.
However, in contrast to the light-tailed case, the left end of the plot curves downward (observed < z percentile), as shown in Figure 4.35(a). If the underlying distribution is positively skewed (a short left tail and a long right tail), the smallest sample observations will be larger than expected from a normal sample and so will the largest observations. In this case, points on both ends of the plot will fall above a straight line through the middle part, yielding a curved pattern, as illustrated in Figure 4.35(b). A sample from a lognormal distribution will usually produce such a pattern. A plot of the [z percentile, ln(x)] pairs should then resemble a straight line.

Figure 4.35 Probability plots that suggest a nonnormal distribution: (a) a plot consistent with a heavy-tailed distribution; (b) a plot consistent with a positively skewed distribution

Even when the population distribution is normal, the sample percentiles will not coincide exactly with the theoretical percentiles because of sampling variability. How much can the points in the probability plot deviate from a straight-line pattern before the assumption of population normality is no longer plausible? This is not an easy question to answer. Generally speaking, a small sample from a normal distribution is more likely to yield a plot with a nonlinear pattern than is a large sample. The book Fitting Equations to Data (see the Chapter 12 bibliography) presents the results of a simulation study in which numerous samples of different sizes were selected from normal distributions. The authors concluded that there is typically greater variation in the appearance of the probability plot for sample sizes smaller than 30, and only for much larger sample sizes does a linear pattern generally predominate.
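Constructing the (z percentile, observation) pairs, and summarizing the plot's straightness by their correlation coefficient, takes only a few lines; a sketch using a small made-up sample (the data below are illustrative, not from the text):

```python
from statistics import NormalDist

# illustrative (hypothetical) sample, sorted; n = 18
sample = sorted([24.5, 25.6, 26.2, 26.4, 26.7, 27.2, 27.3, 27.5, 27.7,
                 27.9, 28.3, 28.5, 28.5, 28.9, 29.1, 29.1, 29.5, 30.9])
n = len(sample)

# z percentiles for percentages 100(i - .5)/n, i = 1, ..., n
z = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]

def corr(xs, ys):
    """Sample correlation coefficient of the plotted pairs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

r = corr(z, sample)
print(round(r, 3))
```

A value of r close to 1 corresponds to an approximately linear plot; a value much below 1 corresponds to the curved patterns described above.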
When a plot is based on a small sample size, only a very substantial departure from linearity should be taken as conclusive evidence of nonnormality. A similar comment applies to probability plots for checking the plausibility of other types of distributions. Given the limitations of probability plots, there is a need for an alternative. In Section 13.2 we introduce a formal procedure for judging whether the pattern of points in a normal probability plot is far enough from linear to cast doubt on population normality.

Beyond Normality

Consider a family of probability distributions involving two parameters, θ₁ and θ₂, and let F(x; θ₁, θ₂) denote the corresponding cdf's. The family of normal distributions is one such family, with θ₁ = μ, θ₂ = σ, and F(x; μ, σ) = Φ[(x − μ)/σ]. Another example is the Weibull family, with θ₁ = α, θ₂ = β, and

$$F(x; \alpha, \beta) = 1 - e^{-(x/\beta)^{\alpha}}$$

Still another family of this type is the gamma family, for which the cdf is an integral involving the incomplete gamma function that cannot be expressed in any simpler form. The parameters θ₁ and θ₂ are said to be location and scale parameters, respectively, if F(x; θ₁, θ₂) is a function of (x − θ₁)/θ₂. The parameters μ and σ of the normal family are location and scale parameters, respectively. Changing μ shifts the location of the bell-shaped density curve to the right or left, and changing σ amounts to stretching or compressing the measurement scale (the scale on the horizontal axis when the density function is graphed). Another example is given by the cdf

$$F(x; \theta_1, \theta_2) = 1 - e^{-e^{(x - \theta_1)/\theta_2}} \qquad -\infty < x < \infty$$

A random variable with this cdf is said to have an extreme value distribution; it appears in applications involving component lifetime and material strength. Because this cdf is a function of (x − θ₁)/θ₂, its parameters θ₁ and θ₂ are location and scale parameters. In contrast, the parameter β of the Weibull distribution is a scale parameter, but α is not a location parameter; α is usually referred to as a shape parameter, and a similar comment applies to the parameters α and β of the gamma distribution. In the usual form, the density function for any member of either the gamma or the Weibull family is positive for x > 0 and zero otherwise. A location parameter γ can be introduced as a third parameter (we did this for the Weibull distribution) to shift the density function so that it is positive if x > γ and zero otherwise.
When the family under consideration has only location and scale parameters, the issue of whether any member of the family is a plausible population distribution can be addressed via a single, easily constructed probability plot. One first obtains the percentiles of the standard distribution, the one with θ1 = 0 and θ2 = 1, for percentages 100(i − .5)/n (i = 1, ..., n). The n (standardized percentile, observation) pairs give the points in the plot. This is, of course, exactly what we did to obtain an omnibus normal probability plot.

Somewhat surprisingly, this methodology can be applied to yield an omnibus Weibull probability plot. The key result is that if X has a Weibull distribution with shape parameter α and scale parameter β, then the transformed variable ln(X) has an extreme value distribution with location parameter θ1 = ln(β) and scale parameter θ2 = 1/α. Thus a plot of the [extreme value standardized percentile, ln(x)] pairs that shows a strong linear pattern provides support for choosing the Weibull distribution as a population model.

The accompanying observations are on lifetime (in hours) of power apparatus insulation when thermal and electrical stress acceleration were fixed at particular values ("On the Estimation of Life of Power Apparatus Insulation Under Combined Electrical and Thermal Stress," IEEE Trans. Electr. Insul., 1985: 70-78). A Weibull probability plot necessitates first computing the 5th, 15th, ..., and 95th percentiles of the standard extreme value distribution. The (100p)th percentile η(p) satisfies

p = F[η(p)] = 1 − e^{−e^{η(p)}}

from which η(p) = ln[−ln(1 − p)].

Percentile   −2.97   −1.82   −1.25    −.84    −.51
x              282     501     741     851   1,072
ln(x)         5.64    6.22    6.61    6.75    6.98

Percentile    −.23     .05     .33     .64    1.10
x            1,122   1,202   1,585   1,905   2,138
ln(x)         7.02    7.09    7.37    7.55    7.67

The pairs (−2.97, 5.64), (−1.82, 6.22), ..., (1.10, 7.67) are plotted as points in Figure 4.36.
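The percentile computation in this example can be reproduced with a few lines of Python (an illustrative sketch, not part of the text). It rebuilds the η(p) values from η(p) = ln[−ln(1 − p)] and summarizes the straightness of the [percentile, ln(x)] plot by the correlation of the plotted pairs:

```python
import math

# Insulation lifetimes (hours) from the IEEE Trans. Electr. Insul. example.
lifetimes = [282, 501, 741, 851, 1072, 1122, 1202, 1585, 1905, 2138]
n = len(lifetimes)

# Standard extreme value percentiles eta(p) = ln(-ln(1 - p)) for p = (i - .5)/n.
ps = [(i - 0.5) / n for i in range(1, n + 1)]
etas = [math.log(-math.log(1 - p)) for p in ps]
logs = [math.log(x) for x in lifetimes]

# A strong linear pattern in the (eta, ln(x)) pairs supports a Weibull model;
# summarize straightness with the sample correlation coefficient.
mean_e, mean_l = sum(etas) / n, sum(logs) / n
sxy = sum((e - mean_e) * (l - mean_l) for e, l in zip(etas, logs))
sxx = sum((e - mean_e) ** 2 for e in etas)
syy = sum((l - mean_l) ** 2 for l in logs)
r = sxy / math.sqrt(sxx * syy)
print(round(etas[0], 2), round(r, 3))
```

A correlation near 1 is what the strongly linear pattern in Figure 4.36 reflects.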
The straightness of the plot argues strongly for using the Weibull distribution as a model for insulation life, a conclusion also reached by the author of the cited article.

[Figure 4.36 A Weibull probability plot of the insulation lifetime data]

The gamma distribution is an example of a family involving a shape parameter for which there is no transformation h(x) such that h(X) has a distribution that depends only on location and scale parameters. Construction of a probability plot necessitates first estimating the shape parameter from sample data (some methods for doing this are described in Chapter 7).

Sometimes an investigator wishes to know whether the transformed variable X^θ has a normal distribution for some value of θ (by convention, θ = 0 is identified with the logarithmic transformation, in which case X has a lognormal distribution). The book Graphical Methods for Data Analysis, listed in the Chapter 1 bibliography, discusses this type of problem as well as other refinements of probability plotting.

Exercises | Section 4.6 (97-107)

97. The accompanying normal probability plot was constructed from a sample of 30 readings on tension for mesh screens behind the surface of video display tubes. Does it appear plausible that the tension distribution is normal?

[Figure: normal probability plot of the 30 tension readings, tension versus z percentile]

For Exercise 100: the article argues that fracture toughness in concrete specimens should have a Weibull distribution and presents several histograms of data that appear well fit by superimposed Weibull curves. Consider the following sample of size n = 18 observations on toughness for high-strength concrete (consistent with one of the histograms); values of p_i = (i − .5)/18 are also given.

Observation    .47    .58    .65    .69    .72    .74
p_i          .0278  .0833  .1389  .1944  .2500  .3056
Observation    .77    .79    .80    .81    .82    .84
p_i          .3611  .4167  .4722  .5278  .5833  .6389
Observation    .86    .89    .91    .95   1.01   1.04
p_i          .6944  .7500  .8056  .8611  .9167  .9722

Construct a Weibull probability plot and comment.
98. A sample of 15 female collegiate golfers was selected and the clubhead velocity (km/h) while swinging a driver was determined for each one, resulting in the following data ("Hip Rotational Velocities during the Full Golf Swing," J. of Sports Science and Medicine, 2009: 296-299):

69.0  69.7  72.7  80.3  81.0
85.0  86.0  86.3  86.7  87.7
89.3  90.7  91.0  92.5  93.0

The corresponding z percentiles are

−1.83  −1.28  −0.97  −0.73  −0.52
−0.34  −0.17    .00    .17    .34
  .52    .73    .97   1.28   1.83

Construct a normal probability plot and a dotplot. Is it plausible that the population distribution is normal?

99. Construct a normal probability plot for the following sample of observations on coating thickness for low-viscosity paint ("Achieving a Target Value for a Manufacturing Process: A Case Study," J. Qual. Tech., 1992: 22-26). Would you feel comfortable estimating population mean thickness using a method that assumed a normal population distribution?

101. Construct a normal probability plot for the escape time data given in Exercise 33 of Chapter 1. Does it appear plausible that escape time has a normal distribution? Explain.

102. The article "The Load-Life Relationship for M50 Bearings with Silicon Nitride Ceramic Balls" (Lubricat. Engrg., 1984: 153-159) reports the accompanying data on bearing load life (million revs.) for bearings tested at a 6.45-kN load.

47.1   68.1   68.1   90.8  103.6  106.0  115.0
126.0  146.6  229.0  240.0  240.0  278.0  278.0
289.0  289.0  367.0  385.9  392.0  505.0

a. Construct a normal probability plot. Is normality plausible?
b. Construct a Weibull probability plot. Is the Weibull distribution family plausible?

103. Construct a probability plot that will allow you to assess the plausibility of the lognormal distribution as a model for the rainfall data of Exercise 80 in Chapter 1.

104. The accompanying observations are precipitation values during March over a 30-year period in Minneapolis-St. Paul.
.77  1.20  3.00  1.62  2.81  2.48
1.74  .47  3.09  1.31  1.87   .96
.81  1.43  1.51   .32  1.18  1.89
1.20  3.37  2.10   .59  1.35   .90
1.95  2.20   .52   .81  4.75  2.05

a. Construct and interpret a normal probability plot for this data set.
b. Calculate the square root of each value and then construct a normal probability plot based on this transformed data. Does it seem plausible that the square root of precipitation is normally distributed?
c. Repeat part (b) after transforming by cube roots.

Data for Exercise 99 (coating thickness):

.83   .88   .88  1.04  1.09  1.12  1.29  1.31
1.48  1.49  1.59  1.62  1.65  1.71  1.76  1.83

100. The article "A Probabilistic Model of Fracture in Concrete and Size Effects on Fracture Toughness" (Mag. Concrete Res., 1996: 311-320) gives arguments for why fracture toughness in concrete specimens should have a Weibull distribution; the toughness data and p_i values for this exercise are given with Exercise 97 above.

105. Use a statistical software package to construct a normal probability plot of the shower-flow rate data given in Exercise 13 of Chapter 1, and comment.

106. Let the ordered sample observations be denoted by y1, y2, ..., yn (y1 being the smallest and yn the largest). Our suggested check for normality is to plot the (Φ^{-1}[(i − .5)/n], y_i) pairs. Suppose we believe that the observations come from a distribution with mean 0, and let w1 ≤ w2 ≤ ... ≤ wn be the ordered absolute values of the x_i's. A half-normal plot is a probability plot of the w_i's. More specifically, since P(|Z| ≤ w) = P(−w ≤ Z ≤ w) = 2Φ(w) − 1, a half-normal plot is a plot of the (Φ^{-1}[(p_i + 1)/2], w_i) pairs, where p_i = (i − .5)/n. The virtue of this plot is that small or large outliers in the original sample will now appear only at the upper end of the plot rather than at both ends. Construct a half-normal plot for the following sample of measurement errors, and comment: −3.78, −1.27, 1.44, −.39, 12.38, −43.40, 1.15, −3.96, −2.34, 30.84.

107. The following failure time observations (1,000's of hours) resulted from accelerated life testing of 16 integrated circuit chips of a certain type:

82.8   11.6  359.5  502.5  307.8  179.7
242.0  26.5  244.8  304.3  379.1  212.6
229.9  558.9  366.7  204.6

Use the corresponding percentiles of the exponential distribution with λ = 1 to construct a probability plot. Then explain why the plot assesses the plausibility of the sample having been generated from any exponential distribution.

4.7 Transformations of a Random Variable

Example 4.38: Suppose X, the time between calls (in minutes), has an exponential distribution with mean 2, so its pdf is f_X(x) = (1/2)e^{−x/2} for x > 0. Can we find the pdf of Y = 60X, so Y is the number of seconds? In order to get the pdf, we first find the cdf. The cdf of Y is

F_Y(y) = P(Y ≤ y) = P(60X ≤ y) = P(X ≤ y/60) = ∫_0^{y/60} (1/2)e^{−x/2} dx = 1 − e^{−y/120},  y > 0

The distribution of Y is exponential with mean 120 s (2 min). Sometimes it isn't possible to evaluate the cdf in closed form. Could we still find the pdf of Y without evaluating the integral? Yes, and it involves differentiating the integral with respect to the upper limit of integration. The rule, which is sometimes presented as part of the Fundamental Theorem of Calculus, is

(d/dx) ∫_a^x h(u) du = h(x)

Now, setting x = y/60 and using the chain rule, we get the pdf using the rule for differentiating integrals:

f_Y(y) = (d/dy)F_Y(y) = (d/dy)F_X(x)|_{x=y/60} = (dx/dy)(d/dx)F_X(x) = (1/60)(d/dx)∫_0^x (1/2)e^{−u/2} du = (1/60)(1/2)e^{−x/2} = (1/120)e^{−y/120},  y > 0

Although it is useful to have the integral expression of the cdf here for clarity, it is not necessary. A more abstract approach is just to use differentiation of the cdf to get the pdf. That is, with x = y/60 and again using the chain rule,

f_Y(y) = (d/dy)F_Y(y) = (d/dy)F_X(x)|_{x=y/60} = (dx/dy)(d/dx)F_X(x) = (1/60)f_X(x) = (1/60)(1/2)e^{−y/120} = (1/120)e^{−y/120}

Is it plausible that, if X ~ exponential with mean 2, then 60X ~ exponential with mean 120? In terms of time between calls, if it is exponential with mean 2 min, then this should be the same as exponential with mean 120 s. Generalizing, there is nothing special here about 2 and 60, so it should be clear that if we multiply an exponential random variable with mean μ by a positive constant c we get another exponential random variable with mean cμ. This is also easily verified using a moment generating function argument.
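The two derivations above can be checked numerically. In this Python sketch (mine, not the book's), F_X is the exponential cdf with mean 2, and both the cdf identity F_Y(y) = F_X(y/60) = 1 − e^{−y/120} and the chain-rule pdf f_Y(y) = (1/60)f_X(y/60) are verified at a few points:

```python
import math

# X ~ exponential with mean 2 (minutes); Y = 60X is the time in seconds.
def F_X(x):
    # cdf of an exponential rv with mean 2
    return 1 - math.exp(-x / 2) if x > 0 else 0.0

# F_Y(y) = P(60X <= y) = F_X(y/60) should equal the exponential(mean 120) cdf.
for y in [30.0, 120.0, 300.0]:
    assert math.isclose(F_X(y / 60), 1 - math.exp(-y / 120))

# Chain rule: f_Y(y) = (1/60) f_X(y/60) = (1/120) e^{-y/120}
f_Y = lambda y: (1 / 60) * (0.5 * math.exp(-(y / 60) / 2))
print(round(f_Y(120.0), 6))
```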
The method illustrated above can be applied to other transformations.

THEOREM  Let X have pdf f_X(x) and let Y = g(X), where g is monotonic (either strictly increasing or strictly decreasing) so it has an inverse function X = h(Y). Assume that h has a derivative h'(y). Then

f_Y(y) = f_X(h(y)) |h'(y)|

Proof  Here is the proof assuming that g is monotonically increasing; the proof for g monotonically decreasing is similar. We follow the last method in Example 4.38. First find the cdf:

F_Y(y) = P(Y ≤ y) = P[g(X) ≤ y] = P[X ≤ h(y)] = F_X[h(y)]

Now differentiate the cdf, letting x = h(y):

f_Y(y) = (d/dy)F_Y(y) = (d/dy)F_X[h(y)] = (dx/dy)(d/dx)F_X(x) = h'(y)f_X(x) = h'(y)f_X[h(y)]

The absolute value is needed on the derivative only in the other case, where g is decreasing. The set of possible values for Y is obtained by applying g to the set of possible values for X. ∎

A heuristic view of the theorem (and a good way to remember it) is to say that f_X(x)dx = f_Y(y)dy, so

f_Y(y) = f_X(x)(dx/dy) = f_X(h(y))h'(y)

Of course, because pdf's must be nonnegative, the absolute value is required on the derivative if it is negative. Sometimes it is easier to find the derivative of g than to find the derivative of h. In this case, remember that

h'(y) = dx/dy = 1/(dy/dx)

Let's apply the theorem to the situation introduced in Example 4.38. There Y = g(X) = 60X and X = h(Y) = Y/60, so

f_Y(y) = f_X(h(y))|h'(y)| = (1/2)e^{−(y/60)/2} · (1/60) = (1/120)e^{−y/120},  y > 0

Here is an even simpler example. Suppose the arrival time of a delivery truck will be somewhere between noon and 2:00. We model this with a random variable X that is uniform on [0, 2], so f_X(x) = 1/2 on that interval. Let Y be the time in minutes, starting at noon: Y = g(X) = 60X, so X = h(Y) = Y/60 and

f_Y(y) = f_X(h(y))|h'(y)| = (1/2)(1/60) = 1/120,  0 ≤ y ≤ 120

If Z is a standard normal rv, then Y = Z^2 can be shown to have pdf

f_Y(y) = (1/√(2πy)) e^{−y/2},  y > 0

This is the chi-squared distribution (with 1 degree of freedom) introduced in Section 4.4.
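As a quick sanity check on the theorem (an illustrative simulation of my own, not from the text), take X uniform on [0, 2] and the monotonic transform Y = X^2. The theorem gives f_Y(y) = (1/2) · 1/(2√y) = 1/(4√y) for 0 < y < 4, so F_Y(y) = √y/2 and P(1 < Y < 2.25) = .25, which a simulation should approximately reproduce:

```python
import random

# X ~ Uniform(0, 2), Y = g(X) = X^2 (monotonic on [0, 2]); h(y) = sqrt(y), so
# the theorem gives f_Y(y) = (1/2) * 1/(2*sqrt(y)) = 1/(4*sqrt(y)), 0 < y < 4,
# hence F_Y(y) = sqrt(y)/2 and P(1 < Y < 2.25) = (1.5 - 1)/2 = .25.
random.seed(1)
trials = 200_000
hits = sum(1 for _ in range(trials) if 1 < random.uniform(0, 2) ** 2 < 2.25)
emp = hits / trials
print(round(emp, 3))
```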
The squares of normal random variables are important because the sample variance is built from squares, and we will need the distribution of the variance: the sample variance for normal data is proportional to a chi-squared rv.

You were asked to believe that f_U(u) = 2f_X(u) on an intuitive basis. Here is a little derivation that works as long as f_X(x) is an even function [i.e., f_X(−x) = f_X(x)]. If u > 0,

F_U(u) = P(U ≤ u) = P(|X| ≤ u) = P(−u ≤ X ≤ u) = F_X(u) − F_X(−u)

and differentiating with respect to u gives f_U(u) = f_X(u) + f_X(−u) = 2f_X(u).

Exercises | Section 4.7 (108-123)

108. … for x > 1. The reciprocal Y = 1/X represents the ratio of the time for the winner divided by the time of the other runner. Find the pdf of Y. Explain why Y also represents the speed of the other runner relative to the winner.

109. If X has the pdf f_X(x) = 2x, 0 < x < 1, find the pdf of Y = 1/X. The distribution of Y is a special case of the Pareto distribution (see Exercise 10).

110. Let X have the pdf f_X(x) = 2/x^3, x > 1. Find the pdf of Y = √X.

111. Let X have the chi-squared distribution with 2 degrees of freedom, so f_X(x) = (1/2)e^{−x/2}, x > 0. Find the pdf of Y = √X. Suppose you choose a point in two dimensions randomly, with the horizontal and vertical coordinates chosen independently from the standard normal distribution. Then X has the distribution of the squared distance from the origin and Y has the distribution of the distance from the origin. Because Y is the length of a vector with normal components, there are lots of applications in physics, and its distribution has the name Rayleigh.

112. If X is distributed as N(μ, σ^2), find the pdf of Y = e^X. The distribution of Y is lognormal, as discussed in Section 4.5.

113. If the side of a square X is random with the pdf f_X(x) = …

…. If X has the pdf f_X(x) = …, find a transformation Y = g(X) such that Y is uniformly distributed on [0, 1].

118. If X is uniformly distributed on [−1, 1], find the pdf of Y = |X|.

119. If X is uniformly distributed on [−1, 1], find the pdf of Y = X^2.

120. Ann is expected at 7:00 pm after an all-day drive. She may be as much as 1 h early or as much as … late. Assuming that her arrival time is uniformly distributed over that interval, find the pdf of |X − 7|, the unsigned difference between her actual and predicted arrival times.

121. If X is uniformly distributed on [−1, 3], find the pdf of Y = X^2.

122. If X is distributed as N(0, 1), find the pdf of |X|.

123. A circular target has radius 1 ft. Assume that you hit the target (we shall ignore misses) and that the probability of hitting any region of the target is proportional to the region's area. If you hit the target at a distance Y from the center, let X = πY^2 be the corresponding area. Show that
a. X is uniformly distributed on (0, π]. [Hint: Show that F_X(x) = P(X ≤ x) = x/π.]
b. Y has pdf f_Y(y) = 2y, 0 < y < 1.

Supplementary Exercises

…. b. … P(X > 10). c. Obtain the cdf F(x). d. Compute E(X) and …

…. a. What is the probability that the voltage of a single diode is between 39 and 42?
b. What value is such that only 15% of all diodes have voltages exceeding that value?
c. If four diodes are independently selected, what is the probability that at least one has a voltage exceeding 42?

128. A 12-in. bar clamped at both ends is subjected to an increasing amount of stress until it snaps. Let Y = the distance from the left end at which the break occurs. Suppose Y has pdf

f(y) = (y/24)(1 − y/12)  0 ≤ y ≤ 12  (0 otherwise)

Compute the following:
a. …
b. P(Y ≤ 4), P(Y > 6), and P(4 ≤ Y ≤ 6).
c. E(Y), E(Y^2), and V(Y).
d. The probability that the break point occurs more than 2 in. from the expected break point.
e. The expected length of the shorter segment when the break occurs.

132. The article "Computer Assisted Net Weight Control" (…) suggests a normal distribution with mean value 137.2 oz and standard deviation … for the actual contents of jars of a certain type; the stated contents is ….
a. What is the probability that a single jar contains more than the stated contents?
b. Among ten randomly selected jars, what is the probability that at least eight contain more than the stated contents?
c. Assuming that the mean remains at 137.2, to
what value would the standard deviation have to be changed so that 95% of all jars contain 129, Let X denote the time to failure (in years) of a more'than‘the:stated:contents? hydraulic component. Suppose the pdf of X is fx) = 32/lx + 4) for.x > 0. 133. When circuit boards used in the manufacture of a. Verify that f(x) is a legitimate pdf. compact disc players are tested, the long-run b. Determine the cdf. percentage of defectives is 5%. Suppose that a &. Use the teault oF part’ @&) te calculate the batch of 250 boards has been received and that probability that time to failure is between the condition of any particular board is indepen- 2 and 5 years. dent of that of any other board. d. What is the expected time to failure? a. What is the approximate probability that at least e. If the component has a salvage value equal to 10% of the boards in the batch are defective? 100/(4 + x) when its time to failure is x, what bei What, is the :approximate:;probability that is the expected salvage value? there are exactly ten defectives in the batch? 130. The completion time X for a task has cdf F(x) 134. Let X be a non-negative continuous random var- aieny iable with pdf f(x), cdf F(x), and mean E(X). a. Show that E(X) = f° [1 — F(y)]dy. [Hint: In 0 x<0 the expression for E(X), write x in the inte- 3 grand as [1 dy, and then reverse the order in a O ; and B, and how does it compare to the median fa) = - =) ieee? and mode? Sketch the graph of the density 0 otherwise function. [Note: This is called the largest extreme value distribution.| i: sac pe gorken i Os it 139. Let = the amount of sales tax a retailer owes the c. Is 0 the median temperature at which the govetniient fora eotiailt period, The article "Sta ee Te tistical Sampling in Tax Audits” (Statistics and temperature smaller or larger than 0? the’ Te 2B 320 398) peonoses moxieling: te d. 
Suppose this reaction is independently carried uncertainty: iy by regarding it asacnonnally out once in each of ten different labs and that ei oc ete pm varia ee ieament vale thepdl of reaction me impeach labavareiven: and standard deviation @ (in the article, these Let Y =the number among the ten labs at two parameters are estimated from the results of whichthie tempersture-exceeds 1: What kind a tax audit involving n sampled transactions). If a of distribution does RAVE? (Givelthe Hume, represents the amount the retailer is assessed, then and values of any parameters.) an underassessment results if ¢ > a and an over- assessment if a > ft. We can express this in terms 137. The article “Determination of the MTF of Posi- of a loss function, a function that shows zero loss tive Photoresists Using the Monte Carlo Method” if t = a but increases as the gap between f and a (Photographic Sci. Engrg., 1983: 254-260) pro- increases. The proposed loss function is L(a,t) = poses the exponential distribution with parameter t—aift>aand=ka—dsift lis 2, = 93 as a model for the distribution of a suggested to incorporate the idea that overassess- photon’s free path length (”m) under certain ment is more serious than underassessment). circumstances. Suppose this is the correct model. a, Show that a* = + ¢@"!(1/(k +1) is the a. What is the expected path length, and what is value of a that minimizes the expected loss, the standard deviation of path length? where 7! is the inverse function of the stan- b. What is the probability that path length dard normal cdf. exceeds 3.0? What is the probability that b. If k=2 (suggested in the article), path length is between 1.0 and 3.0? 1 = $100,000, and o = $10,000, what is the c. What value is exceeded by only 10% of all optimal value of a, and what is the resulting path lengths? probability of overassessment? --- Trang 242 --- Supplementary Exercises 229 140. A mode of a continuous distribution is a value x* ¢. 
TEX has fix; A1, 22, p) as its pdf, what is E(X)? that maximizes fx). d. Using the fact that E(X?) = 2/2? when X has a. What is the mode of a normal distribution an exponential distribution with parameter 2, with parameters jt and o? compute E(X?) when X has pdf f(x; A1, 2. p). b. Does the uniform distribution with para- Then compute V(X). meters A and B have a single mode? Why or e. The coefficient of variation of a random vari- why not? able (or distribution) is CV = o/j1. What is the ¢. What is the mode of an exponential distribu- CV for an exponential rv? What can you say tion with parameter 2? (Draw a picture.) about the value of CV when X has a hyper- d. IfX has a gamma distribution with parameters exponential distribution? x and f, and % > 1, find the mode. [Hint: f. What is the CV for an Erlang distribution with In{f()] will be maximized if and only if f(x) parameters 2 and 7 as defined in Exercise 76? is, and it may be simpler to take the derivative (Note: In applied work, the sample CV is used of Inffix)].] to decide which of the three distributions e, What is the mode of a chi-squared distribution might be appropriate.] having y degtees of feedom? 143. Suppose a state allows individuals filing tax 141, The article “Error Distribution in Navigation” returns to itemize deductions only if the total of (J. Institut. Navigation, 1971: 429-442) suggests all itemized deductions is at least $5,000. Let X that the frequency distribution of positive errors (in 1,000’s of dollars) be the total of itemized (magnitudes of errors) is well approximated by deductions on a randomly chosen form. Assume an exponential distribution. Let X = the lateral that X has the pdf position error (nautical miles), which can be either negative or positive. Suppose the pdf of flea) = {ur x25 Xis ea 0 otherwise f(x) = (ye 0 0 b. What is the probability that the output current = 0 otherwise is more than twice the input current? c. 
What are the expected value and variance of This is often called the hyperexponential or the ratio of output to input current? mixed exponential distribution. This distribution 145, ‘The article “Response of SiCy/SisNy Composites is also proposed as a model for rainfall amount in Under Static and Cyclic Loading—An Experi- “Modeling Monsoon Affected Rainfall of Paki- mental and Statistical Analysis” (J. Engrg. Mate- stan by Point Processes” (J. Water Resources rials Tech., 1997: 186-193) suggests that tensile Planning Manag., 1992: 671-688). strength (MPa) of composites under specified a. Verify that fos /1, 42, p) is indeed a pdf. conditions can be modeled by a Weibull distri- b. What is the edf F(x; 41, 42, p)? bution with x = 9 and f = 180. --- Trang 243 --- 230 = = cuarrer 4 Continuous Random Variables and Probability Distributions a. Sketch a graph of the density function. conception and birth) could be modeled as b. What is the probability that the strength of a having a normal distribution with mean randomly selected specimen will exceed 175? value 280 days and standard deviation 19.88 Will be between 150 and 175? days. The due dates for the three Utah sisters c. If two randomly selected specimens are cho- were March 15, April 1, and April 4, respec- sen and their strengths are independent of tively. Assuming that all three due dates are at each other, what is the probability that at the mean of the distribution, what is the prob- least one has strength between 150 and 175? ability that all births occurred on March 11? d. What strength value separates the weakest [Hint: The deviation of birth date from due 10% of all specimens from the remaining date is normally distributed with mean 0.] 90%? d, Explain how you would use the information 146, a. Suppose the lifetime X of a component, when anapart (©) toscaleulatesthesprobability gf Bia ak common birth date. measured in hours, has a gamma distribution with parameters x and f. 
Let Y = lifetime 149, Let X denote the lifetime of a component, with measured in minutes. Derive the pdf of Y. fx) and F(x) the pdf and cdf of X. The proba- b. IfX has a gamma distribution with parameters bility that the component fails in the interval a and f, what is the probability distribution of (x, x + Ax) is approximately f(x) - Ax. The condi- Y =cX? tional probability that it fails in (x, x + Ax) given 147, Based on data from a dart-throwing experiment, that it has lasted at leastiaas fix) An/L1. EC), the article “Shooting Darts” (Chance, Summer Taviding tins) by: Au-produees the future rate 1997: 16-19) proposed that the horizontal and function; vertical errors from aiming at a point target FO) should be independent of each other, each with r(x) = nee : 1—F(x) a normal distribution having mean 0 and vari- ance a”. It can then be shown that the pdf of the An increasing failure rate function indicates distance V from the target to the landing point is that older components are increasingly likely to wear out, whereas a decreasing failure rate is fo) a2 FIA) ys evidence of increasing reliability with age. In o practice, a “bathtub-shaped” failure is often oo ; assumed. a:i-Thisypdf isa. membersof what family intro: a. IfX is exponentially distributed, what is r(x)? ducedin this chapter! b. If X has a Weibull distribution with para- b. If = 20mm (close to the value suggested in acteie ania, Wha 107(2) FOr wh gerain- the: paper), Whatasithe probability that.adart eter values will r(x) be increasing? For what will and within 25 mm (roughly 1 in.) of the parniviter values will) decrease wittta? target? c. Since r(x) = —(d/dx)In[1 — FO), 148, The article “Three Sisters Give Birth on the In{1 — F(a)] = — r(x) dx. Suppose Same Day”(Chance, Spring 2001: 23-25) used the fact that three Utah sisters had all given _ a1 -3) O q. Now write an 7 integral expression for expected profit (as a func- rs pa § Tangent tion of g) and differentiate.] = line A 4 _ 156. 
An insurance company issues a policy covering losses up to 5 (in thousands of dollars). The loss, X, follows a distribution with density function

f(x) = …  for x ≥ 1;  f(x) = 0  for x < 1

…

5 Joint Probability Distributions

5.1 Jointly Distributed Random Variables

…

Example 5.1: A large insurance agency services a number of customers who have purchased both a homeowner's policy and an automobile policy from the agency. For each type of policy, a deductible amount must be specified. For an automobile policy, the choices are $100 and $250, whereas for a homeowner's policy, the choices are 0, $100, and $200. Suppose an individual with both types of policy is selected at random from the agency's files. Let X = the deductible amount on the auto policy and Y = the deductible amount on the homeowner's policy. Possible (X, Y) pairs are then (100, 0), (100, 100), (100, 200), (250, 0), (250, 100), and (250, 200); the joint pmf specifies the probability associated with each one of these pairs, with any other pair having probability zero. Suppose the joint pmf is given in the accompanying joint probability table:

p(x, y)     y = 0   y = 100   y = 200
x = 100       .20      .10       .20
x = 250       .05      .15       .30

Then p(100, 100) = P(X = 100 and Y = 100) = P($100 deductible on both policies) = .10. The probability P(Y ≥ 100) is computed by summing the probabilities of all (x, y) pairs for which y ≥ 100:

P(Y ≥ 100) = p(100, 100) + p(250, 100) + p(100, 200) + p(250, 200) = .75

A function p(x, y) can be used as a joint pmf provided that p(x, y) ≥ 0 for all x and y and Σ_x Σ_y p(x, y) = 1. The pmf of one of the variables alone is obtained by summing p(x, y) over values of the other variable.
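Summing over the other variable is mechanical enough to script. This Python sketch (an illustration, not part of the text) stores the deductible example's joint probability table as a dictionary and computes the row totals, column totals, and P(Y ≥ 100):

```python
# Joint pmf from the deductible example (x = auto deductible, y = homeowner's).
joint = {
    (100, 0): .20, (100, 100): .10, (100, 200): .20,
    (250, 0): .05, (250, 100): .15, (250, 200): .30,
}

p_X = {}
p_Y = {}
for (x, y), p in joint.items():
    p_X[x] = p_X.get(x, 0) + p   # sum over y values: row totals
    p_Y[y] = p_Y.get(y, 0) + p   # sum over x values: column totals

prob_Y_ge_100 = sum(p for (x, y), p in joint.items() if y >= 100)
print(p_X, p_Y, round(prob_Y_ge_100, 2))
```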
The result is called a marginal pmf because when the p(x, y) values appear in a rectangular table, the sums are just marginal (row or column) totals.

DEFINITION  The marginal probability mass functions of X and of Y, denoted by p_X(x) and p_Y(y), respectively, are given by

p_X(x) = Σ_y p(x, y)    p_Y(y) = Σ_x p(x, y)

Thus to obtain the marginal pmf of X evaluated at, say, x = 100, the probabilities p(100, y) are added over all possible y values. Doing this for each possible X value gives the marginal pmf of X alone (without reference to Y). From the marginal pmf's, probabilities of events involving only X or only Y can be computed.

(Example 5.1 continued)  The possible X values are x = 100 and x = 250, so computing row totals in the joint probability table yields

p_X(100) = p(100, 0) + p(100, 100) + p(100, 200) = .50

and

p_X(250) = p(250, 0) + p(250, 100) + p(250, 200) = .50

The marginal pmf of X is then

p_X(x) = .5 for x = 100, 250;  0 otherwise

Similarly, the marginal pmf of Y is obtained from column totals as

p_Y(y) = .25 for y = 0, 100;  .50 for y = 200;  0 otherwise

so P(Y ≥ 100) = p_Y(100) + p_Y(200) = .75 as before. ∎

The Joint Probability Density Function for Two Continuous Random Variables

The probability that the observed value of a continuous rv X lies in a one-dimensional set A (such as an interval) is obtained by integrating the pdf f(x) over the set A. Similarly, the probability that the pair (X, Y) of continuous rv's falls in a two-dimensional set A (such as a rectangle) is obtained by integrating a function called the joint density function.

DEFINITION  Let X and Y be continuous rv's. Then f(x, y) is the joint probability density function for X and Y if for any two-dimensional set A

P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy

In particular, if A is the two-dimensional rectangle {(x, y): a ≤ x ≤ b, c ≤ y ≤ d}, then

P[(X, Y) ∈ A] = P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_a^b ∫_c^d f(x, y) dy dx

For f(x, y) to be a candidate for a joint pdf, it must satisfy f(x, y) ≥ 0 and ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1. We can think of f(x, y) as specifying a surface at height f(x, y) above the point (x, y) in a three-dimensional coordinate system. Then P[(X, Y) ∈ A] is the volume underneath this surface and above the region A, analogous to the area under a curve in the one-dimensional case. This is illustrated in Figure 5.1.

[Figure 5.1  P[(X, Y) ∈ A] = volume under density surface above A]

A bank operates both a drive-up facility and a walk-up window. On a randomly selected day, let X = the proportion of time that the drive-up facility is in use (at least one customer is being served or waiting to be served) and Y = the proportion of time that the walk-up window is in use. Then the set of possible values for (X, Y)
Then P(X, Y) € A] is the volume underneath this surface and above the region A, analogous to the area under a curve in the one-dimensional case. This is illustrated in Figure 5.1. f(xy) y Surface f(x, y) ws Zagh A = Shaded rectangle x Figure 5.1. PI(X, Y) € A] = volume under density surface above A A bank operates both a drive-up facility and a walk-up window. On a randomly selected day, let X = the proportion of time that the drive-up facility is in use (at least one customer is being served or waiting to be served) and Y = the proportion of time that the walk-up window is in use. Then the set of possible values for (X, Y) --- Trang 249 --- 236 = cuarrer 5 Joint Probability Distributions is the rectangle D = {(x,y):0 0 and mo poo 16 | | f(x,y) dx dy = | | =(x+y")dx dy ps deed Jo Jo 5 1 pl 1 pl 6 6 = =xdx ay+| | =y"dx dy \, I, 5 0 Jo 5 1 1 6 6» 6 6 = [3 act |S dy a9 457 1 The probability that neither facility is busy more than one-quarter of the time is 1 1 1/4 pl/4 6 5 P(0 0. To verify the second condition on a joint pdf, recall that a double integral is computed as an iterated integral by holding one variable fixed (such as x as in Figure 5.2), integrating over values of the other variable lying along --- Trang 251 --- 238 © cuarrer 5 Joint Probability Distributions the straight line passing through the value of the fixed variable, and finally integrating over all possible values of the fixed variable. Thus 20 poo 1 pix [LF soordac=[[ronaac=[ {[ 240 aha ne o Vo D 1 y y=l-x 1 -| 24x | | 12x(1 —x)'dx = 1 0 21-0 0 To compute the probability that the two types of nuts together make up at most 50% of the can, let A = {(x,y):0, has an exponential distribution with parameter 22. Then the joint pdf is f(x1,%2) = fix (%1) - fi (x2) Aye Age" = Aydge xy > 0,22 > 0 0 otherwise --- Trang 253 --- 240 = cuarrer 5 Joint Probability Distributions Let 2, = 1/1000 and A, = 1/1200, so that the expected lifetimes are 1000 h and 1200 h, respectively. 
The probability that both component lifetimes are at least 1500 h is

P(1500 ≤ X1, 1500 ≤ X2) = P(1500 ≤ X1) · P(1500 ≤ X2) = e^{−λ1(1500)} · e^{−λ2(1500)} = (.2231)(.2865) = .0639 ∎

More than Two Random Variables

To model the joint behavior of more than two random variables, we extend the concept of a joint distribution of two variables.

DEFINITION  If X1, X2, ..., Xn are all discrete random variables, the joint pmf of the variables is the function

p(x1, x2, ..., xn) = P(X1 = x1, X2 = x2, ..., Xn = xn)

If the variables are continuous, the joint pdf of X1, X2, ..., Xn is the function f(x1, x2, ..., xn) such that for any n intervals [a1, b1], ..., [an, bn],

P(a1 ≤ X1 ≤ b1, ..., an ≤ Xn ≤ bn) = ∫_{a1}^{b1} ... ∫_{an}^{bn} f(x1, ..., xn) dxn ... dx1

… For instance, with n = 10 and p1 = .25, p2 = .5, p3 = .25,

P(X1 = 2, X2 = 5, X3 = 3) = p(2, 5, 3) = [10!/(2!·5!·3!)](.25)^2(.5)^5(.25)^3 = .0769 ∎

When a certain method is used to collect a fixed volume of rock samples in a region, there are four resulting rock types. Let X1, X2, and X3 denote the proportion by volume of rock types 1, 2, and 3 in a randomly selected sample (the proportion of rock type 4 is 1 − X1 − X2 − X3, so a variable X4 would be redundant). Suppose the joint pdf of X1, X2, X3 is

f(x1, x2, x3) = kx1x2(1 − x3) for 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1, 0 ≤ x3 ≤ 1, …;  0 otherwise

…

DEFINITION  The random variables X1, X2, ..., Xn are said to be independent if for every subset X_{i1}, X_{i2}, ..., X_{ik} of the variables (each pair, each triple, and so on), the joint pmf or pdf of the subset is equal to the product of the marginal pmf's or pdf's.

Thus if the variables are independent with n = 4, then the joint pmf or pdf of any two variables is the product of the two marginals, and similarly for any three variables and all four variables together. Most important, once we are told that n variables are independent, then the joint pmf or pdf is the product of the n marginals.
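The factorization used in the lifetime computation is easy to verify numerically (a minimal Python sketch, not from the text):

```python
import math

# Independent exponential lifetimes with expected values 1000 h and 1200 h.
lam1, lam2 = 1 / 1000, 1 / 1200

# P(X1 >= 1500 and X2 >= 1500) factors by independence into two survival terms.
p1 = math.exp(-lam1 * 1500)   # survival probability for component 1
p2 = math.exp(-lam2 * 1500)   # survival probability for component 2
print(round(p1, 4), round(p2, 4), round(p1 * p2, 4))
```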
If X1, ..., Xn represent the lifetimes of n components, the components operate independently of each other, and each lifetime is exponentially distributed with parameter λ, then

   f(x1, x2, ..., xn) = (λe^{−λx1})·(λe^{−λx2}) ··· (λe^{−λxn}) = λⁿ e^{−λΣxi}  for x1 ≥ 0, ..., xn ≥ 0, and 0 otherwise

If these n components are connected in series, so that the system will fail as soon as a single component fails, then the probability that the system lasts past time t is

   P(X1 > t, ..., Xn > t) = ∫_t^∞ ··· ∫_t^∞ f(x1, ..., xn) dx1 ··· dxn = ( ∫_t^∞ λe^{−λx} dx )ⁿ = (e^{−λt})ⁿ = e^{−nλt}

Therefore,

   P(system lifetime ≤ t) = 1 − e^{−nλt}  for t ≥ 0

which shows that system lifetime has an exponential distribution with parameter nλ; the expected value of system lifetime is 1/(nλ).

In many experimental situations to be considered in this book, independence is a reasonable assumption, so that specifying the joint distribution reduces to deciding on appropriate marginal distributions.

Exercises  Section 5.1 (1–17)

1. A service station has both self-service and full-service islands. On each island, there is a single regular unleaded pump with two hoses. Let X denote the number of hoses being used on the self-service island at a particular time, and let Y denote the number of hoses on the full-service island in use at that time. The joint pmf of X and Y appears in the accompanying tabulation.

      p(x, y)    y = 0    y = 1    y = 2
      x = 0      .10      .04      .02
      x = 1      .08      .20      .06
      x = 2      .06      .14      .30

   a. What is P(X = 1 and Y = 1)?
   b. Compute P(X ≤ 1 and Y ≤ 1).
   c. Give a word description of the event {X ≠ 0 and Y ≠ 0}, and compute the probability of this event.
   d. Compute the marginal pmf of X and of Y. Using pX(x), what is P(X ≤ 1)?
   e. Are X and Y independent rv's? Explain.

2. When an automobile is stopped by a roving safety patrol, each tire is checked for tire wear, and each headlight is checked to see whether it is properly aimed. Let X denote the number of headlights that need adjustment, and let Y denote the number of defective tires.
   a.
If X and Y are independent with px(0) = .5, px(1) = 3,px(2) = 2, and py(0) = .6, py(1) = .1,py(2) = py(3) = .05, py(4) = .2, display a, What is P(X = 1 and Y = 1)? the joint pmf of (X, Y) ina joint probability table. --- Trang 256 --- 5.1 Jointly Distributed Random Variables 243 b. Compute P(X <1 and Y < 1) from the joint d. Ina random sample of 10 candies, what is the probability table, and verify that it equals the probability that there are at most 3 orange product P(X < 1) -P(¥ < 1) candies? (Hint: Think of an orange candy as a ¢. What is P(X +Y = 0) (the probability of no success and any other color as a failure.] violations)? e. Ina random sample of 10 candies, what is the d. Compute P(X + Y < 1) probability that at least 7 are either blue, 3. A market has both an express checkout line and a orange, or green? superexpress checkout line. Let X; denote the num- 5. The number of customers waiting for gift-wrap ser- ber of customers in line at the express checkout at a vice at a department store is an rv X with possible particular time of day, and let X> denote the number values 0, 1, 2, 3, 4 and corresponding probabilities of customers in line at the superexpress checkout at <1, 2,.3,.25, .15. Arandomly selected customer will the same time. Suppose the joint pmf of X, and X, have 1, 2, or 3 packages for wrapping with prob- is as given in the accompanying table. abilities .6, .3, and .1, respectively. Let Y = the total number of packages to be wrapped for the customers oy waiting in line (assume that the number of packages 0 1 2 3 submitted by one customer is independent of the number submitted by any other customer). Oo | 08 7 04 00 a. Determine P(X = 3, Y = 3), that is, p(3, 3). 1 06 AS 05 04 b. Determine p(4, 11). x 2 | 05 04 10 06 3 00 03 04 07 6. Let X denote the number of Canon digital cameras 4 | 00 ‘Ol ‘05 06 sold during a particular week by a certain store. The pmf of X is a. 
What is P(X; = 1, X) = 1), thats, the probabil- ity that there is exactly one customer in each line? x 0 1 2 3 4 b. What is P(X; =X), that is, the probability e_|e 1 @ 3 % that the numbers of customers in the two lines Px) Poe 3 25S are identical? c. Let A denote the event that there are at least two. Sixty percent of all customers who purchase these more customers in one line than in the other caineras alsc buly(an extended ‘warrabty... Let ¥ line, Express, A in termsvof, X; and %,, and denote the number of purchasers during. this calculate the probability of this event. week: who buy-amextended warranty. d. What is the probability that the total number of a. What is P(X = 4, Y = 2)? (Hint: This proba- customers in the two lines is exactly four? At bility equals P(Y = 2|X = 4) - P(X = 4); now Lease TUE? think of the four purchases as four trials of a Déletinine the marginal ‘pint of Xj, and then binomial experiment, with success on a trial calculate the expected number of customers in Contesponding. to buying an. extended war: line at the express checkout. ranty.] f. Determine the marginal pmf of X>. b. Calculate P(X = Y) g. By inspection of the probabilities P(X, = 4), ¢. Determine the joint pmf of X and Y and then the P(Xp = 0), and P(X; =4,X)=0), are X, marginal pmf of Y. and Xz independent random variables? Explain. 7, The joint probability distribution of the number X i Neorg tthe MaRS Candy Compay HE Tog of cars and the number Y of buses per signal cycle run percentages of various colors of M&M milk it‘. proposed: lefé-furn: lane! is, diéplayed ‘in. the chocolate candies are as follows: accompanyitig joint probability table. Blue: Orange: Green: Yellow: Red: Brown: y 24% 20% «16% += «14% «= 13% «13% shied 0 1 2 a, In a random sample of 12 candies, what is the é we ge @i0 probability that there are exactly two of each 1 050 030 020 calor? , t 2 125 075 .050 b. 
In a random sample of 6 candies, what is the 3 150) ‘090 “050 probability that at least one color isnot included? 4 100 060 040 ¢. Ina random sample of 10 candies, what is the : oe a “00 probability that there are exactly 3 blue candies and exactly 2 orange candies? --- Trang 257 --- 244 = cuarter 5 Joint Probability Distributions a. What is the probability that there is exactly one b. What is the probability that they both arrive car and exactly one bus during a cycle? between 5:15 and 5:45? b. What is the probability that there is at most one cc. If the first one to arrive will wait only 10 min car and at most one bus during a cycle? before leaving to eat elsewhere, what is the cc. What is the probability that there is exactly one probability that they have dinner at the health- car during a cycle? Exactly one bus? food restaurant? [Hint: The event of interest is d. Suppose the left-turn lane is to have a capacity A={(x,y):by—yl ( jew =(a+b)" metric distribution — sampling without replace- =O, ment from a finite population consisting of for any a, b.] more than two categories.) . . . 7 12. Two components of a computer have the follow- 9. Each front tire of a vehicle is supposed to be filled ing joint pdf for their useful lifetimes X and ¥: to a pressure of 26 psi. Suppose the actual air pressure in each tire is a random variable — X se for the right tire and Y for the left tire, with f(xy) = {* Se 220 apd 9 20 heat 0 otherwise joint pdf Bes Bs a. What is the probability that the lifetime X of rene {“" +y*) 200<%<30, 0 .8, moderate if .5 < Ipl <.8, and weak if Ipl < .5. If we think of p(x, y) or f(x, y) as prescribing a mathematical model for how the two numerical variables X and Y are distributed in some population (height and weight, verbal SAT score and quantitative SAT score, etc.), then p is a population characteristic or parameter that measures how strongly X and Y are related in the population. 
In Chapter 12, we will consider taking a sample of pairs (x1, y1), ..., (xn, yn) from the population. The sample correlation coefficient r will then be defined and used to make inferences about ρ.

The correlation coefficient ρ is actually not a completely general measure of the strength of a relationship.

PROPOSITION
1. If X and Y are independent, then ρ = 0, but ρ = 0 does not imply independence.
2. ρ = 1 or −1 iff Y = aX + b for some numbers a and b with a ≠ 0.

Exercise 29 and Example 5.17 relate to Property 1, and Property 2 is investigated in Exercises 32 and 35. This proposition says that ρ is a measure of the degree of linear relationship between X and Y; only when the two variables are perfectly related in a linear manner will ρ be as positive or negative as it can be. A ρ less than 1 in absolute value indicates only that the relationship is not completely linear, but there may still be a very strong nonlinear relation. Also, ρ = 0 does not imply that X and Y are independent, but only that there is a complete absence of a linear relationship. When ρ = 0, X and Y are said to be uncorrelated. Two variables could be uncorrelated yet highly dependent because of a strong nonlinear relationship, so be careful not to conclude too much from knowing that ρ = 0.

Example 5.17  Let X and Y be discrete rv's with joint pmf

   p(x, y) = 1/4  for (x, y) = (−4, 1), (4, −1), (2, 2), (−2, −2), and 0 otherwise

The points that receive positive probability mass are identified on the (x, y) coordinate system in Figure 5.5. It is evident from the figure that the value of X is completely determined by the value of Y and vice versa, so the two variables are completely dependent. However, by symmetry μX = μY = 0 and

   E(XY) = (−4)(1)(1/4) + (4)(−1)(1/4) + (2)(2)(1/4) + (−2)(−2)(1/4) = 0

so Cov(X, Y) = E(XY) − μX·μY = 0 and thus ρX,Y = 0. Although there is perfect dependence, there is also a complete absence of any linear relationship!
2 Figure 5.5 The population of pairs for Example 5.17 . A value of p near | does not necessarily imply that increasing the value of X causes Y to increase. It implies only that large X values are associated with large Y values. For example, in the population of children, vocabulary size and number of cavities are quite positively correlated, but it is certainly not true that cavities cause vocabulary to grow. Instead, the values of both these variables tend to increase as the value of age, a third variable, increases. For children of a fixed age, there is probably a very low correlation between number of cavities and vocabulary size. In summary, association (a high correlation) is not the same as causation. --- Trang 265 --- 252 = cuaprer 5 Joint Probability Distributions Exercises | Section 5.2 (18-35) 18. An instructor has given a short quiz consisting of Annie’s arrival time by X, Alvie’s by Y, and sup- two parts, For a randomly selected student, let X = pose X and ¥ are independent with pdf's the number of points earned on the first part and Y = the number of points earned on the second 3x2 O 0, and p = —1 implies and let Zy be the standardized Y,Zy = that Y = aX +b where a < 0. (¥ — fy)/oy. Use Exercise 31 to show that Cort(X,¥) = Cov(Zy,Zy) = E(ZxZy) Conditional Distributions The distribution of Y can depend strongly on the value of another variable X. For example, if X is height and Y is weight, the distribution of weight for men who are 6 ft tall is very different from the distribution of weight for short men. The conditional distribution of Y given X = x describes for each possible x how probability is distributed over the set of possible y values. We define the conditional distribution of Y given X, but the conditional distribution of X given Y can be obtained by just reversing the roles of X and Y. Both definitions are analogous to that of the conditional probability P(AIB) as the ratio P(A 1 B)/P(B). 
DEFINITION  Let X and Y be two discrete random variables with joint pmf p(x, y) and marginal X pmf pX(x). Then for any x value such that pX(x) > 0, the conditional probability mass function of Y given X = x is

   p_{Y|X}(y|x) = p(x, y)/pX(x)

An analogous formula holds in the continuous case. Let X and Y be two continuous random variables with joint pdf f(x, y) and marginal X pdf fX(x). Then for any x value such that fX(x) > 0, the conditional probability density function of Y given X = x is

   f_{Y|X}(y|x) = f(x, y)/fX(x)

For a discrete example, reconsider Example 5.1, where X represents the deductible amount on an automobile policy and Y represents the deductible amount on a homeowner's policy. Here is the joint distribution again.

      p(x, y)    y = 0    y = 100    y = 200
      x = 100    .20      .10        .20
      x = 250    .05      .15        .30

The distribution of Y depends on X. In particular, let's find the conditional probability that Y is 200, given that X is 250, using the definition of conditional probability from Section 2.4:

   P(Y = 200 | X = 250) = P(Y = 200 and X = 250)/P(X = 250) = .30/(.05 + .15 + .30) = .6

With our new definition we obtain the same result:

   p_{Y|X}(200|250) = p(250, 200)/pX(250) = .30/(.05 + .15 + .30) = .6

The conditional probabilities for the two other possible values of Y are

   p_{Y|X}(0|250) = p(250, 0)/pX(250) = .05/(.05 + .15 + .30) = .1
   p_{Y|X}(100|250) = p(250, 100)/pX(250) = .15/(.05 + .15 + .30) = .3

Thus, p_{Y|X}(0|250) + p_{Y|X}(100|250) + p_{Y|X}(200|250) = .1 + .3 + .6 = 1. This is no coincidence; conditional probabilities satisfy the properties of ordinary probabilities. They are nonnegative and they sum to 1. Essentially, the denominator in the definition of conditional probability is designed to make the total be 1.
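The defining ratio p_{Y|X}(y|x) = p(x, y)/pX(x) is mechanical enough to sketch in a few lines of Python; the dictionary below simply encodes the deductible table above (this is an illustrative sketch, not part of the text's development):

```python
# Joint pmf of the two deductible amounts, keyed by (x, y)
p = {(100, 0): .20, (100, 100): .10, (100, 200): .20,
     (250, 0): .05, (250, 100): .15, (250, 200): .30}

def p_x(x):
    # marginal pmf of X: sum the joint pmf over all y values
    return sum(v for (xi, y), v in p.items() if xi == x)

def p_y_given_x(y, x):
    # conditional pmf: p_{Y|X}(y|x) = p(x, y) / p_X(x)
    return p[(x, y)] / p_x(x)

cond = {y: round(p_y_given_x(y, 250), 2) for y in (0, 100, 200)}
print(cond)                # {0: 0.1, 100: 0.3, 200: 0.6}
print(sum(cond.values()))  # 1.0 -- conditional probabilities sum to 1
```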
Reversing the roles of X and Y, we find the conditional probabilities for X, given that Y = 0:

   p_{X|Y}(100|0) = p(100, 0)/pY(0) = .20/(.20 + .05) = .8
   p_{X|Y}(250|0) = p(250, 0)/pY(0) = .05/(.20 + .05) = .2

Again, the conditional probabilities add to 1.

For a continuous example, recall Example 5.5, where X is the weight of almonds and Y is the weight of cashews in a can of mixed nuts. The sum X + Y is at most one pound, the total weight of the can of nuts. The joint pdf of X and Y is

   f(x, y) = 24xy  for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1, and 0 otherwise

Since the marginal pdf of X is fX(x) = 12x(1 − x)², the conditional density of Y given X = x is

   f_{Y|X}(y|x) = 24xy/[12x(1 − x)²] = 2y/(1 − x)²  for 0 ≤ y ≤ 1 − x

The Bivariate Normal Distribution

The bivariate normal pdf of X and Y is

   f(x, y) = [1/(2πσ1σ2√(1 − ρ²))] exp{ −[ ((x − μ1)/σ1)² − 2ρ((x − μ1)/σ1)((y − μ2)/σ2) + ((y − μ2)/σ2)² ] / [2(1 − ρ²)] }

[Figure 5.6  A graph of the bivariate normal pdf]

If ρ = 0, then f(x, y) = fX(x)·fY(y), where X is normal with mean μ1 and standard deviation σ1, and Y is normal with mean μ2 and standard deviation σ2. That is, X and Y have independent normal distributions. In this case the plot in three dimensions has elliptical contours that reduce to circles. Recall that in Section 5.2 we emphasized that independence of X and Y implies ρ = 0 but, in general, ρ = 0 does not imply independence. However, we have just seen that when X and Y are bivariate normal, ρ = 0 does imply independence. Therefore, in the bivariate normal case, ρ = 0 if and only if the two rv's are independent.

What do we get for the marginal distributions? As you might guess, the marginal distribution of X is just a normal distribution with mean μ1 and standard deviation σ1:

   fX(x) = [1/(σ1√(2π))] e^{−(x − μ1)²/(2σ1²)}

The integration to show this [integrating f(x, y) on y from −∞ to ∞] is rather messy. More generally, any linear combination of the form aX + bY, where a and b are constants, is normally distributed. We get the conditional density by dividing the marginal density of X into f(x, y). Unfortunately, the algebra is again a mess, but the result is fairly simple.
The conditional density f_{Y|X}(y|x) is a normal density with mean and variance given by

   μ_{Y|X=x} = E(Y | X = x) = μ2 + ρσ2 (x − μ1)/σ1
   σ²_{Y|X=x} = V(Y | X = x) = σ2²(1 − ρ²)

Notice that the conditional mean is a linear function of x and the conditional variance doesn't depend on x at all. When ρ = 0, the conditional mean is the mean of Y and the conditional variance is just the variance of Y. In other words, if ρ = 0, then the conditional distribution of Y is the same as the unconditional distribution of Y. This says that if ρ = 0 then X and Y are independent, but we already saw that previously in terms of the factorization of f(x, y) into the product of the marginal densities. When ρ is close to 1 or −1, the conditional variance will be much smaller than V(Y), which says that knowledge of X will be very helpful in predicting Y. If ρ is near 0, then X and Y are nearly independent and knowledge of X is not very useful in predicting Y.

Let X be mother's height and Y be daughter's height. A similar situation was one of the first applications of the bivariate normal distribution, by Francis Galton in 1886, and the data were found to fit the distribution very well. Suppose a bivariate normal distribution with mean μ1 = 64 in. and standard deviation σ1 = 3 in. for X, and mean μ2 = 65 in. and standard deviation σ2 = 3 in. for Y. Here μ2 > μ1, which is in accord with the increase in height from one generation to the next. Assume ρ = .4. Then

   μ_{Y|X=x} = 65 + (.4)(3)(x − 64)/3 = 65 + .4(x − 64) = .4x + 39.4
   σ²_{Y|X=x} = V(Y | X = x) = σ2²(1 − ρ²) = 9(1 − .4²) = 7.56  and  σ_{Y|X=x} = 2.75

Notice that the conditional variance is 16% less than the variance of Y. Squaring the correlation gives the percentage by which the conditional variance is reduced relative to the variance of Y.
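The conditional-mean and conditional-variance formulas above can be sketched in a few lines; the parameter values below are the ones assumed in the heights example (μ1 = 64, σ1 = 3, μ2 = 65, σ2 = 3, ρ = .4):

```python
import math

# Mother/daughter heights example: assumed bivariate normal parameters
mu1, sigma1 = 64, 3   # mother's height X
mu2, sigma2 = 65, 3   # daughter's height Y
rho = 0.4

def cond_mean(x):
    # E(Y | X = x) = mu2 + rho * sigma2 * (x - mu1) / sigma1
    return mu2 + rho * sigma2 * (x - mu1) / sigma1

cond_var = sigma2**2 * (1 - rho**2)   # V(Y | X = x), free of x

print(cond_mean(69))                  # 67.0: a mother 5 in. above her mean
print(round(cond_var, 2))             # 7.56
print(round(math.sqrt(cond_var), 2))  # 2.75
```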
Regression to the Mean

The formula for the conditional mean can be re-expressed as

   (μ_{Y|X=x} − μ2)/σ2 = ρ·(x − μ1)/σ1

In words, when the formula is expressed in terms of standardized variables, the standardized conditional mean is just ρ times the standardized x. In particular, for the example of heights,

   (μ_{Y|X=x} − 65)/3 = .4·(x − 64)/3

If the mother is 5 in. above the mean of 64 in. for mothers, then the daughter's conditional expected height is just 2 in. above the mean for daughters. In this example, with equal standard deviations for Y and X, the daughter's conditional expected height is always closer to its mean than the mother's height is to its mean. In general, the conditional expected Y is closer when it is measured in terms of standard deviations. One can think of the conditional expectation as being pulled back toward the mean, and that is why Galton called this regression to the mean.

Regression to the mean occurs in many contexts. For example, let X be a baseball player's average for the first half of the season and let Y be the average for the second half. Most of the players with a high X (above .300) will not have such a high Y. The same kind of reasoning applies to the "sophomore jinx," which says that if a player has a very good first season, then the player is unlikely to do as well in the second season.

The Mean and Variance Via the Conditional Mean and Variance

From the conditional mean we can obtain the mean of Y, and from the conditional mean and the conditional variance, the variance of Y. The following theorem uses the idea that the conditional mean and variance are themselves random variables, as illustrated in the tables of Example 5.20.

THEOREM
a. E(Y) = E[E(Y|X)]
b. V(Y) = V[E(Y|X)] + E[V(Y|X)]

The result in (a) says that E(Y) is a weighted average of the conditional means E(Y | X = x), where the weights are given by the pmf or pdf of X.
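Part (a) of the theorem can be checked directly for any finite joint pmf. Here is an illustrative sketch using the deductible table from earlier in this chapter, computing E(Y) both from the joint pmf and as the weighted average E[E(Y|X)]:

```python
# Joint pmf from the deductible example (x = auto deductible, y = home deductible)
p = {(100, 0): .20, (100, 100): .10, (100, 200): .20,
     (250, 0): .05, (250, 100): .15, (250, 200): .30}
xs, ys = (100, 250), (0, 100, 200)

# marginal pmf of X
p_x = {x: sum(p[(x, y)] for y in ys) for x in xs}

def e_y_given_x(x):
    # E(Y | X = x) = sum over y of y * p_{Y|X}(y|x)
    return sum(y * p[(x, y)] for y in ys) / p_x[x]

# E(Y) two ways: directly from the joint pmf, and via the tower property E[E(Y|X)]
e_y_direct = sum(y * v for (x, y), v in p.items())
e_y_tower = sum(e_y_given_x(x) * p_x[x] for x in xs)
print(round(e_y_direct, 6), round(e_y_tower, 6))  # 125.0 125.0
```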
We give the proof of just part (a) in the discrete case:

   E[E(Y|X)] = Σ_x E(Y | X = x) pX(x) = Σ_x Σ_y y·p_{Y|X}(y|x)·pX(x) = Σ_y y Σ_x p(x, y) = Σ_y y·pY(y) = E(Y)

To try to get a feel for the theorem, let's apply it to Example 5.20. Here again is the table for the conditional mean and variance of Y given X.

      x      P(X = x)    E(Y | X = x)    V(Y | X = x)
      100    .5          100             8000
      250    .5          150             4500

Compute

   E[E(Y|X)] = E(Y | X = 100)P(X = 100) + E(Y | X = 250)P(X = 250) = 100(.5) + 150(.5) = 125

Compare this with E(Y) computed directly:

   E(Y) = 0·P(Y = 0) + 100·P(Y = 100) + 200·P(Y = 200) = 0(.25) + 100(.25) + 200(.5) = 125

For the variance, first compute the mean of the conditional variance:

   E[V(Y|X)] = V(Y | X = 100)P(X = 100) + V(Y | X = 250)P(X = 250) = 8000(.5) + 4500(.5) = 6250

Then comes the variance of the conditional mean. We have already computed the mean of this random variable to be 125. The variance is

   V[E(Y|X)] = .5(100 − 125)² + .5(150 − 125)² = 625

Finally, do the sum in part (b) of the theorem:

   V(Y) = V[E(Y|X)] + E[V(Y|X)] = 625 + 6250 = 6875

To compare this with V(Y) calculated from the pmf of Y, compute first

   E(Y²) = 0²·P(Y = 0) + 100²·P(Y = 100) + 200²·P(Y = 200) = 0(.25) + 10,000(.25) + 40,000(.5) = 22,500

Thus, V(Y) = E(Y²) − [E(Y)]² = 22,500 − 125² = 6875, in agreement with the calculation based on the theorem.

Here is an example where the theorem is helpful in finding the mean and variance of a random variable that is neither discrete nor continuous. The probability of a claim being filed on an insurance policy is .1, and only one claim can be filed. If a claim is filed, the amount is exponentially distributed with mean $1000. Recall from Section 4.4 that the mean and standard deviation of the exponential distribution are the same, so the variance is the square of this value. We want to find the mean and variance of the amount paid. Let X be the number of claims (0 or 1) and let Y be the payment.
We know that E(Y1 X = 0) = 0 and E(¥1 X = 1) = 1000. Also, V(YIX = 0) = 0 and V(YIX = 1) = 1000* = 1,000,000. Here is a table for the distribution of E(YIX = x) and VX =»): x P(X =x) E(Y|X = x) Vy x = x) 0 2 0 0 1 il 1000 1,000,000 Therefore, E(Y) = E[E(¥|X)] = E(¥|X = 0)P(X = 0) + E(Y|X = 1)P(X = 1) = 0(.9) + 1000(.1) = 100 The variance of the conditional mean is V[E(Y|X)] = .9(0 — 100)? + .1(1000 — 100)? = 90, 000 The expected value of the conditional variance is E\V(¥|X)] = .9(0) + .1(1, 000, 000) = 100, 000 Finally, use part (b) of the theorem to get V(Y): V(Y) = VIE(Y|X)] + E[V(¥|X)] = 90, 000 + 100, 000 = 190, 000 Taking the square root gives the standard deviation, cy = $435.89. Suppose that we want to compute the mean and variance of Y directly. Notice that X is discrete, but the conditional distribution of Y given X = 1 is continuous. The random variable Y itself is neither discrete nor continuous, because it has probability .9 of being 0, but the other .1 of its probability is spread out from 0 to oo. Such “mixed” distributions may require a little extra effort to evaluate means and variances, although it is not especially hard in this case. Compute --- Trang 276 --- 5.3 Conditional Distributions 263 = = SL -y/1000 7) — = by = E(Y) = cI Yp00° ‘dy = (.1)(1000) = 100 E(¥?) = w| ye e-v/t00 gy = (.1)2(10007) = 200,000 o> 1000 V(Y) = E(Y”) — [E(Y) = 200,000 — 10,000 = 190,000 These agree with what we found using the theorem. Ll) Exercises | Section 5.3 (36-57) 36. According to an article in the August 30, 2002 d. Are X and Y independent? Explain. issue of the Chronicle of Higher Education, e. Determine the conditional mean of Y given 30% of first-year college students are liberals, X = x. Is E(YIX = x) a linear function of x? 20% are conservatives, and 50% characterize f. Determine the conditional variance of Y given themselves as middle-of-the-road. Choose two X=x. students at random, let X be the number of liber- 38. Refer back to Exercise 37. 
als, and let Y be the number of conservatives. * . moe ai eae a. Determine the marginal density of Y. a Using the multinomial distribution from Determine the conditional density of X given Section 5.1, give the joint probability mass func- Y=. Hong y of X and ¥, Give the joint probability ¢. Determine the conditional mean of X given table showing all nine values, of which three PE apenas he : HABE BE. Y=y. Is E(XIY = y) a linear function of y? . . oye d. Determine the conditional variance of X given b. Determine the marginal probability mass func- at tions by summing p(x, y) numerically. How a could these be obtained directly? [Hint: What 39. A pizza place has two phones. On each phone the are the univariate distributions of X and Y?] waiting time until the first call is exponentially ¢. Determine the conditional probability mass distributed with mean one minute. Each phone is function of Y given X = x for x = 0, 1, 2. not influenced by the other. Let X be the shorter of Compare with the Bin{2—x, .2/(.2 + .5)] distri- the two waiting times and let Y be the longer. bution. Why should this work? It can be shown that the joint pdf of X and Y is d. Are X and Y independent? Explain. F (x,y) = 20), O 25). Part (c). ¢. If the pressure in the right tire is found to be 41. A stick is one foot long. You break it at a point X 22 psi, what is the expected pressure in the left (measured from the left end) chosen randomly tire, and what is the standard deviation of pres- uniformly along its length. Then you break the sure in this tire? left part at a point ¥ chosen randomly uniformly 45 sunnose that X is uniformly distributed between along its length. In other words, X is uniformly : : eee A 0 and 1. Given X = x, Y is uniformly distributed distributed between 0 and | and, given X = x, Y is Between ORNS uniformly distributed between 0 and x. a. Determine E(/K'= x) and then'V(VIK = 30. a. Determine E(YIX = x) and then V(YIX = x). 
Is epee ch ct : Z E i Is E(YIX = x) a linear function of x? E(YIX = x) a linear function of x? b. Determine ftx.y) using f(x) and fng(st) b. Determine f(x,y) using f(x) and fyx(ylv). ‘ ‘ie Gaede wx . ¢. Determine f(y). ¢. Determine f(y). ‘i d. Use f(y) from (c) to get E(Y) and V(Y). 46. This is a continuation of the previous exercise. e. Use (a) and the theorem of this section to get a. Use fy(y) from Exercise 45(c) to get E(Y) and E(Y) and V(¥). vy). a . b. Use Exercise 45(a) and the theorem of this 42. A system consisting of two components will con- am a4 section to get E(Y) and V(¥). tinue to operate only as long as both components function. Suppose the joint pdf of the lifetimes 47. David and Peter independently choose at random a (months) of the two components in a system number from 1, 2, 3, with each possibility equally is given by f(x, y) = c[l0—(x+y)] forx > 0, likely. Let X be the larger of the two numbers, and y>0,x+y< 10 let Y be the smaller. a. If the first component functions for exactly a. Determine p(x, y). 3 months, what is the probability that the sec- b. Determine px(x), x = 1, 2, 3. ond functions for more than 2 months? ¢. Determine pyy(ylx). b. Suppose the system will continue to work only d. Determine E(YIX = ). Is this a linear function as long as both components function. Among 20 of x? of these systems that operate independently of e. Determine V(YIX = x). each other, what is the probability that at least 4g In Exercise 47 find half work for more than 3 months? a. E(X) 43. Refer to Exercise 1 and answer the following b. py). questions: . E(Y) using py(y). a. Given that X = 1, determine the conditional d. E(Y) using E(Y1X). pmf of Y—that is, py(Oll), pyx(IIl), and e. E(X) + E(Y). Intuitively, why should this be 4? Pyx(2ll). --- Trang 278 --- 5.4 Transformations of Random Variables 265 49. In Exercise 47 find That is, the nine digits other than X are equally a. puy(aly). likely for Y. b. E(XIY = y). Is this a linear function of y? a. 
Determine py(x), prx(vlx), pyy(.y). ce. V(XIY = y). b. Determine a formula for E(YIX = x). Is this a 50. For a Calculus I class, the final exam score Y and Hnsacchinction olze the average of the four earlier tests X are bivariate 54. In our discussion of the bivariate normal, there is normal with mean jt; = 73, standard deviation an expression for E(YIX = x). oy = 12, mean pr, =70, standard deviation o = a. By reversing the roles of X and Y give a similar 15. The correlation is p =.71. Determine formula for E(XIY = y). a. Ly b. Both E(YIX = x) and E(XIY = y) are linear b. Giy=x functions. Show that the product of the two c: Gy. slopes is p?. d. POY > 901X'= 80), i.., the probability thatthe 56 Tis week the number X of claims coming into an final exam score exceeds 90 given that the nels i Fi 7 5 ve insurance office is Poisson with mean 100. The average of the four earlier tests is 80 : cise aie) probability that any particular claim relates to 51. Let X and Y, reaction times (sec) to two different automobile insurance is .6, independent of any stimuli, have a bivariate normal distribution with other claim. If Y is the number of automobile mean jt; = 20 and standard deviation o, = 2 for X claims, then Y is binomial with X trials, each and mean jr =30 and standard deviation 6; = 5 with “success” probability .6. for Y. Assume p =.8. Determine a. Determine E(YIX = x) and V(YIX = x). a. ly» b. Use part (a) to find E(Y). btn. . Use part (a) to find V(Y). 5 ee eine 8 56. In Exercise 55 show that the distribution of Y is . PY > 461X = 25) Poisson with mean 60. You will need to recognize 52. Consider three ping pong balls numbered 1, 2, and 3. the Maclaurin series expansion for the exponential Two balls are randomly selected with replacement. function. Use the knowledge that Y is Poisson with If the sum of the two resulting numbers exceeds 4, mean 60 to find E(Y) and V(¥). two balls are again selected. 
This process continues 5 1 4+ ¥ and Y be the times for a randomly selected until the sum is at most 4, Let X and Y denote the " i i individual to complete two different tasks, and last two numbers selected. Possible (X, Y) pairs are : oe two slsttaricg 1. D,(,2), (3), @, ), (2,2),,1 assume that (X, Y) has a bivariate normal distribution iK ge 22), (1,3), 2, D, 2,2), 8, DI. with jy = 100, ox = 50, wy = 25, oy =5, p = 5. a. Determine px.v(x,y). From statistical software we obtain P(X < 100, b. Determine pyy(ylx). _ Z Determine E(YIX = x). Is this a li fini Y < 25) = .3333, P(X < 50, Y < 20) = .0625, ¢ Betermine =). Ts tthis a linear finction, P(X < 50, Y < 25) = .1274, and P(X < 100, Y < aes 20) = .1274 d. Determine E(XIY = y). What special property (a) Determine P(S0 &X-< 100,20 < ¥ X25); of p(x, y) allows us to get this from (c)? , . 7 ) (b) Leave the other parameters the same but e. Determine V(YIX = x). change the correlation to p = 0 (indepen- 53. Let X be a random digit (0, 1, 2,..., 9 are equally dence). Now recompute the answer to part likely) and let Y be a random digit not equal to X. (a). Intuitively, why should the answer to part (a) be larger? Transformations of Random Variables In the previous chapter we discussed the problem of starting with a single random variable X, forming some function of X, such as X or e*, to obtain a new random variable Y = h(X), and investigating the distribution of this new random variable. We now generalize this scenario by starting with more than a single random variable. Consider as an example a system having a component that can be replaced just once before the system itself expires. Let X, denote the lifetime of the original --- Trang 279 --- 266 = cuarrer 5 Joint Probability Distributions component and X, the lifetime of the replacement component. Then any of the following functions of X, and Xz may be of interest to an investigator: 1. The total lifetime X, + X> 2. 
The ratio of lifetimes X1/X2; for example, if the value of this ratio is 2, the original component lasted twice as long as its replacement
3. The ratio X1/(X1 + X2), which represents the proportion of system lifetime during which the original component operated

The Joint Distribution of Two New Random Variables

Given two random variables X1 and X2, consider forming two new random variables Y1 = u1(X1, X2) and Y2 = u2(X1, X2). We now focus on finding the joint distribution of these two new variables. Since most applications assume that the Xi's are continuous, we restrict ourselves to that case. Some notation is needed before a general result can be given. Let

   f(x1, x2) = the joint pdf of the two original variables
   g(y1, y2) = the joint pdf of the two new variables

The u1(·) and u2(·) functions express the new variables in terms of the original ones. The general result presumes that these functions can be inverted to solve for the original variables in terms of the new ones:

   x1 = v1(y1, y2),  x2 = v2(y1, y2)

For example, if

   y1 = x1 + x2  and  y2 = x1/(x1 + x2)

then multiplying y2 by y1 gives an expression for x1, and then we can substitute this into the expression for y1 and solve for x2:

   x1 = y1y2 = v1(y1, y2)
   x2 = y1(1 − y2) = v2(y1, y2)

In a final burst of notation, let

   S = {(x1, x2): f(x1, x2) > 0}    T = {(y1, y2): g(y1, y2) > 0}

That is, S is the region of positive density for the original variables and T is the region of positive density for the new variables; T is the "image" of S under the transformation.

THEOREM  Suppose that the partial derivative of each vi(y1, y2) with respect to both y1 and y2 exists for every (y1, y2) pair in T and is continuous.
Form the 2 × 2 matrix

M = [ ∂v1(y1, y2)/∂y1   ∂v1(y1, y2)/∂y2
      ∂v2(y1, y2)/∂y1   ∂v2(y1, y2)/∂y2 ]

The determinant of this matrix, called the Jacobian, is

det(M) = (∂v1/∂y1)(∂v2/∂y2) − (∂v1/∂y2)(∂v2/∂y1)

The joint pdf for the new variables then results from taking the joint pdf f(x1, x2) for the original variables, replacing x1 and x2 by their expressions in terms of y1 and y2, and finally multiplying this by the absolute value of the Jacobian:

g(y1, y2) = f(v1(y1, y2), v2(y1, y2)) · |det(M)|,   (y1, y2) ∈ T

The theorem can be rewritten slightly by using the notation

det(M) = ∂(x1, x2)/∂(y1, y2)

Then we have

g(y1, y2) = f(x1, x2) · |∂(x1, x2)/∂(y1, y2)|

which is the natural extension of the univariate result g(y) = f(x)·|dx/dy| (transforming a single rv X to obtain a single new rv Y) discussed in Chapter 4.

Example: Continuing with the component lifetime situation, suppose that X1 and X2 are independent, each having an exponential distribution with parameter λ. Let's determine the joint pdf of

Y1 = u1(X1, X2) = X1 + X2   and   Y2 = u2(X1, X2) = X1/(X1 + X2)

We have already inverted this transformation:

x1 = v1(y1, y2) = y1·y2,   x2 = v2(y1, y2) = y1(1 − y2)

The image of the transformation, i.e., the set of (y1, y2) pairs with positive density, is 0 < y1 and 0 < y2 < 1. The four relevant partial derivatives are

∂v1/∂y1 = y2,   ∂v1/∂y2 = y1,   ∂v2/∂y1 = 1 − y2,   ∂v2/∂y2 = −y1

from which the Jacobian is −y1·y2 − y1(1 − y2) = −y1. Since the joint pdf of X1 and X2 is

f(x1, x2) = λe^{−λx1} · λe^{−λx2} = λ²e^{−λ(x1+x2)},   x1 > 0, x2 > 0

we have

g(y1, y2) = λ²e^{−λy1} · y1 = λ²y1e^{−λy1} · 1,   0 < y1, 0 < y2 < 1

Since this joint pdf factors into a function of y1 alone (a gamma pdf with α = 2 and β = 1/λ) times a function of y2 alone (a uniform pdf on (0, 1)), the two new variables are independent, and the individual (i.e., marginal) pdf's of the two new variables were obtained from the joint pdf without any further effort. Often this will not be the case; that is, Y1 and Y2 will not be independent. Then to obtain the marginal pdf of Y1, the joint pdf must be integrated over all values of the second variable.
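The factorization just obtained is easy to corroborate numerically. The following sketch (plain Python; the choice λ = 1 and the seed are illustrative, not from the text) simulates the two component lifetimes and checks the sample means of Y1 and Y2 against the gamma mean αβ = 2/λ and the uniform mean 1/2:

```python
import math
import random

random.seed(42)
lam = 1.0            # illustrative rate parameter; any lam > 0 works
n = 200_000

y1s, y2s = [], []
for _ in range(n):
    # exponential(lam) draws via the inverse-cdf method
    x1 = -math.log(1 - random.random()) / lam
    x2 = -math.log(1 - random.random()) / lam
    y1s.append(x1 + x2)           # total lifetime Y1
    y2s.append(x1 / (x1 + x2))    # proportion Y2 from the original component

mean_y1 = sum(y1s) / n   # theory: E(Y1) = 2/lam (gamma with alpha = 2, beta = 1/lam)
mean_y2 = sum(y2s) / n   # theory: E(Y2) = 1/2 (uniform on (0, 1))
```

A fuller check would also verify that Y1 and Y2 are uncorrelated, which independence implies.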
In fact, in many applications an investigator wishes to obtain the distribution of a single function u1(X1, X2) of the original variables. To accomplish this, a second function u2(X1, X2) is selected, the joint pdf of the two new variables is obtained, and then y2 is integrated out. There are of course many ways to select the second function. The choice should be made so that the transformation can be easily inverted and the integration in the last step is straightforward.

Example: Consider a rectangular coordinate system with a horizontal x1 axis and a vertical x2 axis as shown in Figure 5.7(a). First a point (X1, X2) is randomly selected, where the joint pdf of X1 and X2 is positive on the region S pictured there; Figure 5.7(b) shows T, the image of S under the transformation.

[Figure 5.7: (a) the region S of positive density for (X1, X2); (b) its image T under the transformation]

The Joint Distribution of More Than Two New Variables

The foregoing development can be generalized, for example by starting with three random variables X1, X2, and X3 and forming three new variables Y1, Y2, and Y3. Suppose again that the transformation can be inverted to express the original variables in terms of the new ones:

x1 = v1(y1, y2, y3),   x2 = v2(y1, y2, y3),   x3 = v3(y1, y2, y3)

Then the foregoing theorem can be extended to this new situation. The Jacobian matrix has dimension 3 × 3, with the entry in the ith row and jth column being ∂vi/∂yj. The joint pdf of the new variables results from replacing each xi in the original pdf f(·) by its expression in terms of the yj's and multiplying by the absolute value of the Jacobian.

Example: Consider n = 3 identical components with independent lifetimes X1, X2, X3, each having an exponential distribution with parameter λ. If the first component is used until it fails, is replaced by the second one, which remains in service until it fails, and finally the third component is used until failure, then the total lifetime of these components is Y3 = X1 + X2 + X3. To find the distribution of total lifetime, let's first define two other new variables: Y1 = X1 and Y2 = X1 + X2 (so that Y1 < Y2 < Y3). After finding the joint pdf of all three variables, we integrate out the first two variables to obtain the desired information.
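The derivation that follows shows that the total lifetime Y3 has a gamma distribution. As a preview, a short Monte Carlo check (assuming λ = .5 and t = 5 purely for illustration) compares a simulated probability for Y3 = X1 + X2 + X3 with the closed-form Erlang cdf P(Y3 ≤ t) = 1 − e^{−λt}[1 + λt + (λt)²/2]:

```python
import math
import random

random.seed(0)
lam, n, t = 0.5, 100_000, 5.0    # illustrative parameter choices

count = 0
for _ in range(n):
    # sum of three independent exponential(lam) lifetimes
    total = sum(-math.log(1 - random.random()) / lam for _ in range(3))
    if total <= t:
        count += 1
mc = count / n

# Erlang (gamma with integer alpha = 3, beta = 1/lam) cdf in closed form
u = lam * t
exact = 1 - math.exp(-u) * (1 + u + u * u / 2)
```

With λt = 2.5 the closed form gives roughly .456, and the simulated proportion lands nearby.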
Solving for the old variables in terms of the new gives

x1 = y1,   x2 = y2 − y1,   x3 = y3 − y2

It is obvious by inspection of these expressions that the three diagonal elements of the Jacobian matrix are all 1's and that the elements above the diagonal are all 0's, so the determinant is 1, the product of the diagonal elements. Since

f(x1, x2, x3) = λ³e^{−λ(x1+x2+x3)},   x1 > 0, x2 > 0, x3 > 0

by substitution,

g(y1, y2, y3) = λ³e^{−λy3},   0 < y1 < y2 < y3

Integrating out y1 (from 0 to y2) and then y2 (from 0 to y3) gives

g3(y3) = (λ³y3²/2)·e^{−λy3},   y3 > 0

This is a gamma pdf. The result is easily extended to n components. It can also be obtained (more easily) by using a moment generating function argument.

Exercises  Section 5.4 (58–64)

58. Consider two components whose lifetimes X1 and X2 are independent and exponentially distributed with parameters λ1 and λ2, respectively. Obtain the joint pdf of the total lifetime X1 + X2 and the proportion of total lifetime X1/(X1 + X2) during which the first component operates.

59. Let X1 denote the time (hr) it takes to perform a first task and X2 the time it takes to perform a second one. The second task always takes at least as long to perform as the first task. The joint pdf of these variables is

f(x1, x2) = 2(x1 + x2),   0 ≤ x1 ≤ x2 ≤ 1,   and 0 otherwise

a. Obtain the pdf of the total completion time for the two tasks.
b. Obtain the pdf of the difference X2 − X1 between the longer completion time and the shorter time.

60. An exam consists of a problem section and a short-answer section. Let X1 denote the amount of time (hr) that a student spends on the problem section and X2 represent the amount of time the same student spends on the short-answer section. Suppose the joint pdf of these two times is

f(x1, x2) = c·x1·x2 on the region of positive density, and 0 otherwise

a. What is the value of c?
b. If the student spends exactly .25 hr on the short-answer section, what is the probability that at most .60 hr was spent on the problem section? [Hint: First obtain the relevant conditional distribution.]
c. What is the probability that the amount of time spent on the problem part of the exam exceeds the amount of time spent on the short-answer
part by at least .5 hr?
d. Obtain the joint distribution of Y1 = X2/X1, the ratio of the two times, and Y2 = X2. Then obtain the marginal distribution of the ratio.

61. Consider randomly selecting a point (X1, X2, X3) in the unit cube {(x1, x2, x3): 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1, 0 ≤ x3 ≤ 1}, and then form the rectangular solid whose vertices are (0, 0, 0), (X1, 0, 0), (0, X2, 0), (X1, X2, 0), (0, 0, X3), (X1, 0, X3), (0, X2, X3), and (X1, X2, X3). The volume of this solid is Y3 = X1·X2·X3. Obtain the pdf of this volume. [Hint: Let Y1 = X1 and Y2 = X1·X2.]

62. Let X1 and X2 be independent, each having a standard normal distribution. The pair (X1, X2) corresponds to a point in a two-dimensional coordinate system. Consider now changing to polar coordinates via the transformation

Y1 = X1² + X2²

Y2 = arctan(X2/X1)          X1 > 0, X2 ≥ 0
   = arctan(X2/X1) + 2π     X1 > 0, X2 < 0
   = arctan(X2/X1) + π      X1 < 0
   = 0                      X1 = 0

from which X1 = √Y1·cos(Y2), X2 = √Y1·sin(Y2). Obtain the joint pdf of the new variables and then the marginal distribution of each one. [Note: It would be nice if we could simply let Y2 = arctan(X2/X1), but the case-by-case definition allows Y2 to assume any value between 0 and 2π.]

63. Let U1 and U2 be independent uniform(0, 1) rv's, and then let

Y1 = −2 ln(U1)   Y2 = 2πU2
Z1 = √Y1·cos(Y2)   Z2 = √Y1·sin(Y2)

Show that the Zi's are independent standard normal. [Note: This is called the Box–Muller transformation after the two individuals who discovered it. Now that statistical software packages will generate almost instantaneously observations from a normal distribution with any mean and variance, it is thankfully no longer necessary for people like you and us to carry out the transformations just described; let the software do it!]

64. Let X1 and X2 be independent random variables, each having a standard normal distribution. Show that the pdf of the ratio Y = X1/X2 is given by f(y) = 1/[π(1 + y²)] for −∞ < y < ∞ (this is called the standard Cauchy distribution).
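The Box–Muller transformation of Exercise 63 is short enough to implement directly. The sketch below (standard-library Python; the seed and sample size are arbitrary choices, not from the text) generates pairs (Z1, Z2) and checks that the pooled values have mean near 0 and variance near 1, as standard normal variables must:

```python
import math
import random

random.seed(7)
n = 100_000

z = []
for _ in range(n):
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2 * math.log(u1))   # sqrt(Y1), where Y1 = -2 ln(U1) is exponential
    theta = 2 * math.pi * u2           # Y2 = 2*pi*U2 is a uniform angle
    z.append(r * math.cos(theta))      # Z1
    z.append(r * math.sin(theta))      # Z2

m = sum(z) / len(z)
v = sum((x - m) ** 2 for x in z) / len(z)
```

In the exceedingly unlikely event that `random.random()` returns exactly 0.0, the logarithm would fail; a production implementation would guard against that.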
5.5 Order Statistics

Many statistical procedures involve ordering the sample observations from smallest to largest and then manipulating these ordered values in various ways. For example, the sample median is either the middle value in the ordered list or the average of the two middle values, depending on whether the sample size n is odd or even. The sample range is the difference between the largest and smallest values. And a trimmed mean results from deleting the same number of observations from each end of the ordered list and averaging the remaining values.

Suppose that X1, X2, ..., Xn is a random sample from a continuous distribution with cumulative distribution function F(x) and density function f(x). Because of continuity, for any i, j with i ≠ j, P(Xi = Xj) = 0. This implies that with probability 1, the n sample observations will all be different (of course, in practice all measuring instruments have accuracy limitations, so tied values may in fact result).

DEFINITION  The order statistics from a random sample are the random variables Y1, ..., Yn given by

Y1 = the smallest among X1, X2, ..., Xn
Y2 = the second smallest among X1, X2, ..., Xn
⋮
Yn = the largest among X1, X2, ..., Xn

so that with probability 1, Y1 < Y2 < ··· < Yn.

Example 5.28: Suppose five identical components are connected in a series arrangement, so that the system fails as soon as the first component does; system lifetime is then Y1, the smallest of the five component lifetimes X1, ..., X5. If the lifetimes are independent, each exponentially distributed with parameter λ = .01, then for y > 0,

P(Y1 ≤ y) = 1 − P(X1 > y, X2 > y, ..., X5 > y)
          = 1 − P(X1 > y)·P(X2 > y)···P(X5 > y)
          = 1 − [e^{−.01y}]⁵ = 1 − e^{−.05y}

This is the form of an exponential cdf with parameter .05. More generally, if the n components in a series connection have lifetimes that are independent, each exponentially distributed with the same parameter λ, then system lifetime will be exponentially distributed with parameter nλ. The expected system lifetime will then be 1/(nλ), much smaller than the expected lifetime of an individual component.

[Figure 5.8: Systems of components for Example 5.28: (a) parallel connection; (b) series connection]
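The series-system conclusion can be illustrated by simulation. Assuming λ = .01 and n = 5 as in the example (the seed and number of trials are arbitrary), the smallest of five exponential lifetimes should itself behave like an exponential rv with parameter .05, and so have expected value 1/.05 = 20:

```python
import math
import random

random.seed(5)
lam, n, trials = 0.01, 5, 100_000   # five components, each exponential(lam = .01)

mins = [min(-math.log(1 - random.random()) / lam for _ in range(n))
        for _ in range(trials)]

mean_min = sum(mins) / trials   # theory: Y1 is exponential(n*lam = .05), mean 20
```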
An argument parallel to that of the previous example for a general sample size n and an arbitrary pdf f(x) gives the following general results.

PROPOSITION  Let Y1 and Yn denote the smallest and largest order statistics, respectively, based on a random sample from a continuous distribution with cdf F(x) and pdf f(x). Then the cdf and pdf of Yn are

Gn(y) = [F(y)]ⁿ,   gn(y) = n[F(y)]^{n−1}·f(y)

The cdf and pdf of Y1 are

G1(y) = 1 − [1 − F(y)]ⁿ,   g1(y) = n[1 − F(y)]^{n−1}·f(y)

Example: Let X denote the contents of a one-gallon container, and suppose that its pdf is f(x) = 2x for 0 ≤ x ≤ 1 (and 0 otherwise) with corresponding cdf F(x) = x² on the interval of positive density. Consider a random sample of four such containers. Let's determine the expected value of Y4 − Y1, the difference between the contents of the most-filled container and the least-filled container; Y4 − Y1 is just the sample range. The pdfs of Y4 and Y1 are

g4(y) = 4(y²)³·2y = 8y⁷,   g1(y) = 4(1 − y²)³·2y,   0 ≤ y ≤ 1

The Joint Distribution of All n Order Statistics

We now consider the joint pdf of all n order statistics. Take first a random sample X1, X2, X3 of size three from a pdf f(x). The joint pdf g(y1, y2, y3) of the order statistics will be positive only for values of y1, y2, y3 satisfying y1 < y2 < y3. What is this joint pdf at the values y1 = 28.4, y2 = 29.0, y3 = 30.5? There are six different ways to obtain these ordered values:

X1 = 28.4, X2 = 29.0, X3 = 30.5     X1 = 28.4, X2 = 30.5, X3 = 29.0
X1 = 29.0, X2 = 28.4, X3 = 30.5     X1 = 29.0, X2 = 30.5, X3 = 28.4
X1 = 30.5, X2 = 28.4, X3 = 29.0     X1 = 30.5, X2 = 29.0, X3 = 28.4

These six possibilities come from the 3! ways to order the three numerical observations once their values are fixed. Thus

g(28.4, 29.0, 30.5) = f(28.4)·f(29.0)·f(30.5) + ··· + f(30.5)·f(29.0)·f(28.4)
                    = 3!·f(28.4)·f(29.0)·f(30.5)

Analogous reasoning with a sample of size n yields the following result:

PROPOSITION  Let g(y1, y2, ..., yn) denote the joint pdf of the order statistics Y1, Y2, ..., Yn resulting from a random sample of Xi's from a pdf f(x). Then

g(y1, y2, ..., yn) = n!·f(y1)·f(y2)···f(yn),   y1 < y2 < ··· < yn

(and 0 otherwise).

Example: Suppose X1, X2, X3, and X4 are independent random variables, each uniformly distributed on the interval from 0 to 1.
The joint pdf of the four corresponding order statistics Y1, Y2, Y3, and Y4 is

g(y1, y2, y3, y4) = 4!·1,   0 < y1 < y2 < y3 < y4 < 1

Let's compute the probability that the four values are all separated by more than .2, that is, that Y2 − Y1 > .2, Y3 − Y2 > .2, and Y4 − Y3 > .2. This probability results from integrating the joint pdf of the Yi's over the region .6 < y4 < 1, .4 < y3 < y4 − .2, .2 < y2 < y3 − .2, 0 < y1 < y2 − .2:

P(Y2 − Y1 > .2, Y3 − Y2 > .2, Y4 − Y3 > .2) = ∫∫∫∫ 4! dy1 dy2 dy3 dy4

The inner integration gives 4!(y2 − .2), and this must then be integrated between .2 and y3 − .2. Making the change of variable z2 = y2 − .2, the integration of z2 is from 0 to y3 − .4. The result of this integration is 4!·(y3 − .4)²/2. Continuing with the third and fourth integrations, each time making an appropriate change of variable so that the lower limit of each integration becomes 0, the result is

P(Y2 − Y1 > .2, Y3 − Y2 > .2, Y4 − Y3 > .2) = .4⁴ = .0256

A more general multiple integration argument for n independent uniform(0, B) rv's shows that, for 0 ≤ d ≤ B/(n − 1),

P(all values are separated by more than d) = [1 − (n − 1)d/B]ⁿ

and this probability is 0 if d > B/(n − 1). As an application, consider a year that has 365 days, and suppose that the birth time of someone born in that year is uniformly distributed throughout the 365-day period. Then in a group of 10 independently selected people born in that year, the probability that all of their birth times are separated by more than 24 h (d = 1 day) is (1 − 9/365)¹⁰ = .779. Thus the probability that at least two of the 10 birth times are separated by at most 24 h is .221. As the group size n increases, it becomes more likely that at least two people have birth times that are within 24 h of each other (but not necessarily on the same day). For n = 16 this probability is .489, and for n = 17 it is .533. So with as few as 17 people in the group, it is more likely than not that at least two of the people were born within 24 h of each other. Coincidences such as this are not as surprising as one might think.
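The separation formula covers both the uniform(0, 1) spacing calculation and the birth-time application. A small helper (a sketch, not from the text) reproduces the numbers quoted above:

```python
def p_all_separated(n, d, B):
    """P(all n iid uniform(0, B) values are pairwise separated by more than d)."""
    if d > B / (n - 1):
        return 0.0
    return (1 - (n - 1) * d / B) ** n

p_spacings = p_all_separated(4, 0.2, 1)       # the order-statistic integral: .4^4 = .0256
p_10 = p_all_separated(10, 1, 365)            # ten birth times all > 24 h apart: about .779
p_17 = 1 - p_all_separated(17, 1, 365)        # some pair within 24 h when n = 17: about .533
```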
The probability that at least two people are born on the same day (assuming equally likely birthdays) is much easier to calculate than what we have shown here; see Exercise 2.98.

The Distribution of a Single Order Statistic

We have already obtained the (marginal) distribution of the largest order statistic Yn and also that of the smallest order statistic Y1. Let's now focus on an intermediate order statistic Yi where 1 < i < n. For concreteness, consider a random sample X1, X2, ..., X6 of n = 6 component lifetimes, and suppose we wish the distribution of the third smallest lifetime, Y3. Now the joint pdf of all six order statistics is

g(y1, y2, ..., y6) = 6!·f(y1)···f(y6),   y1 < y2 < ··· < y6

To obtain the pdf of Y3 alone, the other five variables must be integrated out:

1. Integrate y2 from y1 to y3, and then integrate y1 from −∞ to y3.
2. Integrate y6 from y5 to ∞, then integrate y5 from y4 to ∞, and finally integrate y4 from y3 to ∞.

That is,

g3(y3) = ∫_{y3}^{∞} ∫_{y4}^{∞} ∫_{y5}^{∞} ∫_{−∞}^{y3} ∫_{−∞}^{y2} 6!·f(y1)·f(y2)···f(y6) dy1 dy2 dy6 dy5 dy4
       = 6!·[∫_{−∞}^{y3} ∫_{−∞}^{y2} f(y1)f(y2) dy1 dy2]·[∫_{y3}^{∞} ∫_{y4}^{∞} ∫_{y5}^{∞} f(y4)f(y5)f(y6) dy6 dy5 dy4]·f(y3)

In these integrations we use the following general results:

∫_{−∞}^{y} [F(x)]^k f(x) dx = (1/(k + 1))·[F(y)]^{k+1}   [let u = F(x)]
∫_{y}^{∞} [1 − F(x)]^k f(x) dx = (1/(k + 1))·[1 − F(y)]^{k+1}   [let u = 1 − F(x)]

Therefore

∫_{−∞}^{y3} ∫_{−∞}^{y2} f(y1)f(y2) dy1 dy2 = ∫_{−∞}^{y3} F(y2)f(y2) dy2 = (1/2)[F(y3)]²

and

∫_{y3}^{∞} ∫_{y4}^{∞} ∫_{y5}^{∞} f(y4)f(y5)f(y6) dy6 dy5 dy4 = ∫_{y3}^{∞} ∫_{y4}^{∞} [1 − F(y5)]f(y5)f(y4) dy5 dy4
    = ∫_{y3}^{∞} (1/2)[1 − F(y4)]² f(y4) dy4 = (1/(3·2))[1 − F(y3)]³

Thus

g3(y3) = (6!/(2!·3!))·[F(y3)]²·[1 − F(y3)]³·f(y3),   −∞ < y3 < ∞

What about the joint pdf of two order statistics, say Y3 and Y5? Integrating the joint pdf of all six order statistics with respect to the other four variables in the same fashion gives

g3,5(y3, y5) = (6!/(2!·1!·1!))·[F(y3)]²·[F(y5) − F(y3)]·[1 − F(y5)]·f(y3)·f(y5),   −∞ < y3 < y5 < ∞

In the general case, the numerator in the leading expression involving factorials becomes n! and the denominator becomes (i − 1)!(j − i − 1)!(n − j)!. The three exponents on bracketed terms change in a corresponding way.
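The g3 formula just derived can be cross-checked without calculus: Y3 ≤ t exactly when at least three of the six observations are ≤ t, a binomial event. The sketch below (an illustration, not from the text) compares that binomial cdf with a numerical integration of g3, written in terms of u = F(y) so that it applies to any continuous F:

```python
import math

def G3_binomial(p):
    """P(Y3 <= t) for n = 6, where p = F(t): at least 3 of 6 observations fall at or below t."""
    return sum(math.comb(6, k) * p**k * (1 - p) ** (6 - k) for k in range(3, 7))

def G3_density(p, m=100_000):
    """Integrate g3 via the substitution u = F(y): the density becomes 60 u^2 (1-u)^3."""
    h = p / m
    return sum(60 * u * u * (1 - u) ** 3 * h          # midpoint rule on (0, p)
               for u in ((k + 0.5) * h for k in range(m)))

p = 0.4                       # any value of F(t) in (0, 1) works
a, b = G3_binomial(p), G3_density(p)
```

The two routes agree to numerical precision, and G3_density(1.0) confirms that g3 integrates to 1.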
An Intuitive Derivation of Order Statistic PDF's

Let Δ be a number quite close to 0, and consider the three class intervals (−∞, y], (y, y + Δ], and (y + Δ, ∞). For a single X, the probabilities of these three classes are

π1 = F(y),   π2 = ∫_y^{y+Δ} f(x) dx ≈ f(y)·Δ,   π3 = 1 − F(y + Δ)

For a random sample of size n, it is very unlikely that two or more X's will fall in the second interval. The probability that the ith order statistic falls in the second interval is then approximately the probability that i − 1 of the X's are in the first interval, one is in the second, and the remaining n − i X's are in the third class. This is just a multinomial probability:

P(y < Yi ≤ y + Δ) ≈ [n!/((i − 1)!·1!·(n − i)!)]·[F(y)]^{i−1}·[f(y)·Δ]·[1 − F(y + Δ)]^{n−i}

Dividing both sides by Δ and letting Δ → 0 yields the pdf of the ith order statistic:

g_i(y) = [n!/((i − 1)!(n − i)!)]·[F(y)]^{i−1}·[1 − F(y)]^{n−i}·f(y)

Exercises  Section 5.5 (65–74)

65. … [Hint: P(X ≤ −x) = F(−x). For the second question, consider W = X − μ; what is the median of the distribution of W?]

66. … before there will be a payout. Suppose the amount (1000s of dollars) of a randomly selected claim is a continuous rv with pdf f(x) = 3/x⁴ for x > 1. Consider a random sample of three claims.
a. What is the probability that at least one of the claim amounts exceeds $5000?
b. What is the expected value of the largest amount claimed?

69. A store is expecting n deliveries between the hours of noon and 1 p.m. Suppose the arrival time of each delivery truck is uniformly distributed on this one-hour interval and that the times are independent of each other. What are the expected values of the ordered arrival times?

70. Suppose the cdf F(x) is strictly increasing, and let F⁻¹(u) denote the inverse function for 0 < u < 1. …

71. … Give expressions involving the gamma function for both the mean and variance of the ith smallest amount of time Yi from a random sample of n such time periods.

72. The logistic pdf is f(x) = e^{−x}/(1 + e^{−x})², −∞ < x < ∞. …

74. Conjecture the form of the joint pdf of three order statistics Yi, Yj, Yk in a random sample of size n.

73. Let Y1 and Yn denote the smallest and largest order statistics, respectively, from a random sample of size n, and let W2 = Yn − Y1 (this is the sample range).
a. Let W1 = Y1. Obtain the joint pdf of the Wi's
(use the method of Section 5.4), and then derive an expression involving an integral for the pdf of the sample range W2.

Supplementary Exercises

80. … you find the message "You will receive as a refund the difference between the cost of the more expensive and the less expensive meal that you have chosen." How much does the restaurant expect to refund?

81. A health-food store stocks two different brands of a type of grain. Let X = the amount (lb) of brand A on hand and Y = the amount (lb) of brand B on hand. Suppose the joint pdf of X and Y is …

84. Let X1 and X2 be quantitative and verbal scores on one aptitude exam, and let Y1 and Y2 be corresponding scores on another exam. If Cov(X1, Y1) = 5, Cov(X1, Y2) = 1, Cov(X2, Y1) = 2, and Cov(X2, Y2) = 8, what is the covariance between the two total scores X1 + X2 and Y1 + Y2?

87. Suppose the number of children born to an individual has pmf p(x). A Galton–Watson branching process unfolds as follows: At time t = 0, the population consists of a single individual. Just prior to time t = 1, this individual gives birth to …

85. Simulation studies are important in investigating various characteristics of a system or process. They are generally employed when the mathematical analysis necessary to answer important questions is too complicated to yield closed-form solutions. For example, in a system where the time between successive customer arrivals has a particular pdf and the service time of any particular customer has another pdf, simulation can provide information about the probability that the system is empty when a customer arrives, the expected number of customers in the system, and the expected waiting time in queue. Such studies depend on being able to generate observations
from a specified probability distribution.

The rejection method gives a way of generating an observation from a pdf f(·) when we have a way of generating an observation from g(·) and the ratio f(x)/g(x) is bounded, that is, f(x)/g(x) ≤ c for some finite c. The steps are as follows:

1. Use a software package's random number generator to obtain a value u from a uniform distribution on the interval from 0 to 1.
2. Generate a value y from the distribution with pdf g(y).
3. If u ≤ f(y)/[c·g(y)], set x = y ("accept" x); otherwise return to step 1. That is, the procedure is repeated until at some stage u ≤ f(y)/[c·g(y)].

a. Argue that c ≥ 1. [Hint: If c < 1, then f(y) < g(y) for all y; why is this bad?]
b. Show that this procedure does result in an observation from the pdf f(·); that is, P(accepted value ≤ x) = F(x). [Hint: This probability is P({U ≤ f(Y)/[c·g(Y)]} ∩ {Y ≤ x}); to calculate it, first integrate with respect to u for fixed y and then integrate with respect to y.]

87. (continued) … gives birth to X1 individuals according to the pmf p(x), so there are X1 individuals in the first generation. Just prior to time t = 2, each of these X1 individuals gives birth independently of the others according to the pmf p(x), resulting in X2 individuals in the second generation (e.g., if X1 = 3, then X2 = Y1 + Y2 + Y3, where Yi is the number of progeny of the ith individual in the first generation). This process then continues to yield a third generation of size X3, and so on.
a. If X1 = 3, Y1 = 4, Y2 = 0, Y3 = 1, draw a tree diagram with two generations of branches to represent this situation.
b. Let A be the event that the process ultimately becomes extinct (one way for A to occur would be to have X1 = 3 with none of these three second-generation individuals having any progeny), and let p* = P(A). Argue that p* satisfies the equation

p* = Σx (p*)^x · p(x)

That is, p* = h(p*), where h(s) is the probability
generating function introduced in Exercise 138 from Chapter 3. [Hint: A = ∪x (A ∩ {X1 = x}), so the law of total probability can be applied. Now given that X1 = 3, A will occur if and only if each of the three separate branching processes starting from the first generation ultimately becomes extinct; what is the probability of this happening?]
c. Verify that one solution to the equation in (b) is p* = 1. It can be shown that this equation has just one other solution, and that the probability of ultimate extinction is in fact the smaller of the two roots. If p(0) = .3, p(1) = .5, and p(2) = .2, what is p*? Is this consistent with the expected number of progeny (here .9) from a single individual? What happens if p(0) = .2, p(1) = .5, and p(2) = .3?

85. (continued)
c. Show that the probability of "accepting" at any particular stage is 1/c. What does this imply about the expected number of stages necessary to obtain an acceptable value? What kind of value of c is desirable?
d. Let f(x) = 20x(1 − x)³ for 0 ≤ x ≤ 1, a particular beta distribution. Show that taking g(y) to be a uniform pdf on (0, 1) works. What is the best value of c in this situation?

86. You are driving on a highway at speed X1. Cars entering this highway after you travel at speeds X2, X3, .... Suppose these Xi's are independent and identically distributed with pdf f(x) and cdf F(x). Unfortunately there is no way for a faster car to pass a slower one; it will catch up to the slower one and then travel at the same speed. For example, if X1 = 52.3, X2 = 37.5, and X3 = 42.8, then no car will catch up to yours, but the third car will catch up to the second. Let N = the number of cars that ultimately travel at your speed (in your "cohort"), including your own car. Possible values of N are 1, 2, 3, .... Show that the pmf of N is p(n) = 1/[n(n + 1)], and then determine the expected number of cars in your cohort. [Hint: N = 3 requires that X1 < X2, X1 < X3, X4 < X1.]

88. Let f(x) and g(y) be pdf's with corresponding cdf's F(x) and G(y), respectively. With c denoting a numerical constant satisfying |c| ≤ 1, consider
Possible values f(x,y) =f (x)g(y){1 + ¢[2F (x) — 1][2G(y) — 1} --- Trang 295 --- 282 = cuarrer 5 Joint Probability Distributions Show that f(x, y) satisfies the conditions necessary Poisson process with “rate” .5 plant per square to specify a joint pdf for two continuous rv’s. foot. Let Y denote the number of plants in the What is the marginal pdf of the first variable X? region. Of the second variable Y? For what values of c are a. Find E(Y|X = x) and V(Y|X = x) X and Y independent? If f(x) and g(y) are normal b. Use part (a) to find E(Y). pdf's, is the joint distribution of X and Y bivariate ¢. Use part (a) to find V(Y). ninetnisey 91. The number of individuals arriving at a post office 89. The joint cumulative distribution function to mail packages during a certain period is a of two random variables X and Y, denoted by Poisson random variable X with mean value 20. F(, y), is defined by Independently of the others, any particular cus- tomer will mail either 1, 2, 3, or 4 packages with probabilities .4, .3, .2, and .1, respectively. Let ¥ F(x,y) = P(X = 0), .5 (if either X, = 0 and X, = 1 or X,; = | and X, = 0), 1, 1.5, .... The probability distribution of X specifies P(X = 0), P(X = .5) and so on, from which other probabilities such as P(1 2.5) can be calculated. Similarly, if for a sample of size n = 2, the only possible values of the sample variance are 0, 12.5, and 50 (which is the case if X,; and X, can each take on only the values 40, 45, and 50), then the probability distribution of S* gives P(S? = 0), P(S* = 12.5), and P(S* = 50). The probability distribution of a statistic is sometimes referred to as its sampling distribution to emphasize that it describes how the statistic varies in value across all samples that might be selected. Random Samples The probability distribution of any particular statistic depends not only on the population distribution (normal, uniform, etc.) and the sample size n but also on the method of sampling. 
Consider selecting a sample of size n = 2 from a population consisting of just the three values 1, 5, and 10, and suppose that the statistic of interest is the sample variance. If sampling is done "with replacement," then S² = 0 will result if X1 = X2. However, S² cannot equal 0 if sampling is "without replacement." So P(S² = 0) = 0 for one sampling method, and this probability is positive for the other method. Our next definition describes a sampling method often encountered (at least approximately) in practice.

DEFINITION  The rv's X1, X2, ..., Xn are said to form a (simple) random sample of size n if

1. The Xi's are independent rv's.
2. Every Xi has the same probability distribution.

Conditions 1 and 2 can be paraphrased by saying that the Xi's are independent and identically distributed (iid). If sampling is either with replacement or from an infinite (conceptual) population, Conditions 1 and 2 are satisfied exactly. These conditions will be approximately satisfied if sampling is without replacement, yet the sample size n is much smaller than the population size N. In practice, if n/N ≤ .05 (at most 5% of the population is sampled), we can proceed as if the Xi's form a random sample. The virtue of this sampling method is that the probability distribution of any statistic can be more easily obtained than for any other sampling method.

There are two general methods for obtaining information about a statistic's sampling distribution. One method involves calculations based on probability rules, and the other involves carrying out a simulation experiment.

Deriving the Sampling Distribution of a Statistic

Probability rules can be used to obtain the distribution of a statistic provided that it is a "fairly simple" function of the Xi's and either there are relatively few different X values in the population or else the population distribution has a "nice" form.
Our next two examples illustrate such situations.

Example 6.2: A certain brand of MP3 player comes in three configurations: with 2 GB of memory, costing $80; a 4 GB model priced at $100; and an 8 GB version with a price tag of $120. If 20% of all purchasers choose the 2 GB model, 30% choose the 4 GB model, and 50% choose the 8 GB model, then the probability distribution of the cost X of a single randomly selected MP3 player purchase is given by

x      | 80   100   120
p(x)   | .2   .3    .5          with μ = 106, σ² = 244     (6.1)

Suppose only two MP3 players are sold today. Let X1 = the cost of the first player and X2 = the cost of the second. Suppose that X1 and X2 are independent, each with the probability distribution shown in (6.1), so that X1 and X2 constitute a random sample from the distribution (6.1). Table 6.2 lists possible (x1, x2) pairs, the probability of each computed using (6.1) and the assumption of independence, and the resulting x̄ and s² values. [When n = 2, s² = (x1 − x̄)² + (x2 − x̄)².]

Table 6.2  Outcomes, probabilities, and values of x̄ and s² for Example 6.2

x1     x2     p(x1, x2)        x̄      s²
80     80     (.2)(.2) = .04   80     0
80     100    (.2)(.3) = .06   90     200
80     120    (.2)(.5) = .10   100    800
100    80     (.3)(.2) = .06   90     200
100    100    (.3)(.3) = .09   100    0
100    120    (.3)(.5) = .15   110    200
120    80     (.5)(.2) = .10   100    800
120    100    (.5)(.3) = .15   110    200
120    120    (.5)(.5) = .25   120    0

Now to obtain the probability distribution of X̄, the sample average cost per MP3 player, we must consider each possible value x̄ and compute its probability. For example, x̄ = 100 occurs three times in the table, with probabilities .10, .09, and .10, so

P(X̄ = 100) = .10 + .09 + .10 = .29

Similarly, s² = 800 appears twice in the table with probability .10 each time, so

P(S² = 800) = P(X1 = 80, X2 = 120) + P(X1 = 120, X2 = 80) = .10 + .10 = .20

The complete sampling distributions of X̄ and S² appear in (6.2) and (6.3):
x̄         | 80    90    100   110   120
p_X̄(x̄)   | .04   .12   .29   .30   .25          (6.2)

s²         | 0     200   800
p_S²(s²)   | .38   .42   .20                      (6.3)

Figure 6.2 pictures a probability histogram for both the original distribution of X (6.1) and the X̄ distribution (6.2). The figure suggests first that the mean (i.e., expected value) of X̄ is equal to the mean $106 of the original distribution, since both histograms appear to be centered at the same place. Indeed, from (6.2),

E(X̄) = Σ x̄·p_X̄(x̄) = 80(.04) + ··· + 120(.25) = 106 = μ

[Figure 6.2: Probability histograms for (a) the underlying population distribution and (b) the sampling distribution of X̄ in Example 6.2]

Second, it appears that the X̄ distribution has smaller spread (variability) than the original distribution, since the values of x̄ are more concentrated toward the mean. Again from (6.2),

V(X̄) = Σ (x̄ − 106)²·p_X̄(x̄) = (80 − 106)²(.04) + ··· + (120 − 106)²(.25) = 122

Notice that V(X̄) = 122 = 244/2 = σ²/2 is exactly half the population variance; the division by 2 here is a consequence of the fact that n = 2. Finally, the mean value of S² is

E(S²) = Σ s²·p_S²(s²) = 0(.38) + 200(.42) + 800(.20) = 244 = σ²

That is, the X̄ sampling distribution is centered at the population mean μ, and the S² sampling distribution (histogram not shown) is centered at the population variance σ².

If four MP3 players had been purchased on the day of interest, the sample average cost X̄ would be based on a random sample of four Xi's, each having the distribution (6.1). More calculation eventually yields the distribution of X̄ for n = 4 as

x̄      | 80      85      90      95      100     105     110     115     120
p(x̄)   | .0016   .0096   .0376   .0936   .1761   .2340   .2350   .1500   .0625

From this, E(X̄) = 106 = μ and V(X̄) = 61 = σ²/4. Figure 6.3 is a probability histogram of this distribution.
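The "more calculation" for n = 4 is exactly the kind of tedium a short enumeration script removes. The following sketch (an illustration, not from the text) reproduces the n = 2 sampling distribution and the n = 2 and n = 4 summary values by brute-force enumeration:

```python
from itertools import product

pmf = {80: 0.2, 100: 0.3, 120: 0.5}   # population distribution (6.1): mu = 106, sigma^2 = 244

def xbar_distribution(n):
    """Exact sampling distribution of the mean via enumeration of all n-tuples."""
    dist = {}
    for combo in product(pmf, repeat=n):
        p = 1.0
        for x in combo:
            p *= pmf[x]                # independence: multiply marginal probabilities
        xbar = sum(combo) / n
        dist[xbar] = dist.get(xbar, 0.0) + p
    return dist

d2 = xbar_distribution(2)
mean2 = sum(x * p for x, p in d2.items())               # should match mu = 106
var2 = sum((x - 106) ** 2 * p for x, p in d2.items())   # should match sigma^2/2 = 122

# E(S^2) for n = 2, using s^2 = (x1 - xbar)^2 + (x2 - xbar)^2
e_s2 = sum(((x1 - (x1 + x2) / 2) ** 2 + (x2 - (x1 + x2) / 2) ** 2) * pmf[x1] * pmf[x2]
           for x1 in pmf for x2 in pmf)                 # should match sigma^2 = 244

var4 = sum((x - 106) ** 2 * p for x, p in xbar_distribution(4).items())  # sigma^2/4 = 61
```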
[Figure 6.3: Probability histogram for X̄ based on n = 4 in Example 6.2]

Example 6.2 should suggest first of all that the computation of p_X̄(x̄) and p_S²(s²) can be tedious. If the original distribution (6.1) had allowed for more than three possible values 80, 100, and 120, then even for n = 2 the computations would have been more involved. The example should also suggest, however, that there are some general relationships between E(X̄), V(X̄), E(S²), and the mean μ and variance σ² of the original distribution. These are stated in the next section. Now consider an example in which the random sample is drawn from a continuous distribution.

Example: The time that it takes to serve a customer at the cash register in a minimarket is a random variable having an exponential distribution with parameter λ. Suppose X1 and X2 are service times for two different customers, assumed independent of each other. Consider the total service time T0 = X1 + X2 for the two customers, also a statistic. The cdf of T0 is, for t ≥ 0,

F_T0(t) = P(X1 + X2 ≤ t) = ∫0^t ∫0^{t−x1} λe^{−λx1}·λe^{−λx2} dx2 dx1
        = ∫0^t [λe^{−λx1} − λe^{−λt}] dx1 = 1 − e^{−λt} − λt·e^{−λt}

Differentiating gives the pdf

f_T0(t) = λ²t·e^{−λt} for t ≥ 0, and 0 otherwise     (6.4)

This is a gamma pdf (α = 2 and β = 1/λ). This distribution for T0 can also be derived by a moment generating function argument. The pdf of X̄ = T0/2 can be obtained by the method of Section 4.7 as

f_X̄(x̄) = 4λ²x̄·e^{−2λx̄} for x̄ > 0, and 0 otherwise     (6.5)

The mean and variance of the underlying exponential distribution are μ = 1/λ and σ² = 1/λ². Using Expressions (6.4) and (6.5), it can be verified that E(X̄) = 1/λ, V(X̄) = 1/(2λ²), E(T0) = 2/λ, and V(T0) = 2/λ². These results again suggest some general relationships between means and variances of X̄, T0, and the underlying distribution.

Simulation Experiments

The second method of obtaining information about a statistic's sampling distribution is to perform a simulation experiment. This method is usually used when a derivation via probability rules is too difficult or complicated to be carried out.
Such an experiment is virtually always done with the aid of a computer. The following characteristics of an experiment must be specified:

1. The statistic of interest (X̄, S, a particular trimmed mean, etc.)
2. The population distribution (normal with μ = 100 and σ = 15, uniform with lower limit A = 5 and upper limit B = 10, etc.)
3. The sample size n (e.g., n = 10 or n = 50)
4. The number of replications k (e.g., k = 1000)

Then use a computer to obtain k different random samples, each of size n, from the designated population distribution. For each such sample, calculate the value of the statistic and construct a histogram of the k calculated values. This histogram gives the approximate sampling distribution of the statistic. The larger the value of k, the better the approximation will tend to be (the actual sampling distribution emerges as k → ∞). In practice, k = 1000 may be enough for a "fairly simple" statistic and population distribution, but modern computers allow a much larger number of replications.

Example 6.4: The population distribution for our first simulation study is normal with μ = 8.25 and σ = .75, as pictured in Figure 6.5. [The article "Platelet Size in Myocardial Infarction" (British Med. J., 1983: 449–451) suggests this distribution for platelet volume in individuals with no history of serious heart problems.] We actually performed four different experiments, with 500 replications for each one. In the first experiment, 500 samples of n = 5 observations each were generated using MINITAB, and the sample sizes for the other three were n = 10, n = 20, and n = 30, respectively. The sample mean was calculated for each sample, and the resulting histograms of x̄ values appear in Figure 6.6.

[Figure 6.5 Normal distribution with μ = 8.25 and σ = .75]
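An experiment of this kind is easy to carry out in any language with a random number generator. The sketch below is our own illustration (the text used MINITAB); it runs the four experiments of Example 6.4 with k = 500 replications each:

```python
import random
import statistics

random.seed(42)
MU, SIGMA, K = 8.25, 0.75, 500   # population parameters and replication count (Example 6.4)

def xbar_sample(n, k=K):
    """k replications of the sample mean, each from a sample of size n."""
    return [statistics.fmean(random.gauss(MU, SIGMA) for _ in range(n))
            for _ in range(k)]

results = {n: xbar_sample(n) for n in (5, 10, 20, 30)}
centers = {n: statistics.fmean(xs) for n, xs in results.items()}
spreads = {n: statistics.stdev(xs) for n, xs in results.items()}
# Each center should be near mu = 8.25, and the spreads should shrink with n
# (theory: the standard deviation of X-bar is 0.75 / sqrt(n)).
```

Plotting a histogram of `results[n]` for each n reproduces the qualitative behavior of Figure 6.6: all four are centered near 8.25, with less spread as n grows.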
[Figure 6.6 Sample histograms of x̄ based on 500 samples, each consisting of n observations: (a) n = 5; (b) n = 10; (c) n = 20; (d) n = 30]

The first thing to notice about the histograms is their shape. To a reasonable approximation, each of the four looks like a normal curve. The resemblance would be even more striking if each histogram had been based on many more than 500 x̄ values. Second, each histogram is centered approximately at 8.25, the mean of the population being sampled. Had the histograms been based on an unending sequence of x̄ values, their centers would have been exactly the population mean, 8.25.

The final aspect of the histograms to note is their spread relative to each other. The smaller the value of n, the greater the extent to which the sampling distribution spreads out about the mean value. This is why the histograms for n = 20 and n = 30 are based on narrower class intervals than those for the two smaller sample sizes. For the larger sample sizes, most of the x̄ values are quite close to 8.25. This is the effect of averaging. When n is small, a single unusual x value can result in an x̄ value far from the center. With a larger sample size, any unusual x values, when averaged in with the other sample values, still tend to yield an x̄ value close to μ. Combining these insights yields a result that should appeal to your intuition: X̄ based on a large n tends to be closer to μ than does X̄ based on a small n.

Example 6.5: Consider a simulation experiment in which the population distribution is quite skewed.
Figure 6.7 shows the density curve for the lifetimes of a certain type of electronic control. (This is actually a lognormal distribution with E[ln(X)] = 3 and V[ln(X)] = .16; that is, ln(X) is normal with mean 3 and variance .16.) Again the statistic of interest is the sample mean X̄. The experiment utilized 500 replications and considered the same four sample sizes as in Example 6.4. The resulting histograms, along with a normal probability plot from MINITAB for the 500 x̄ values based on n = 30, are shown in Figure 6.8.

[Figure 6.7 Density curve for the simulation experiment of Example 6.5: E(X) = μ = 21.7584, V(X) = σ² = 82.1449]

Unlike the normal case, these histograms all differ in shape. In particular, they become progressively less skewed as the sample size n increases. The averages of the 500 x̄ values for the four different sample sizes are all quite close to the mean value of the population distribution. If each histogram had been based on an unending sequence of x̄ values rather than just 500, all four would have been centered at exactly 21.7584. Thus different values of n change the shape but not the center of the sampling distribution of X̄. Comparison of the four histograms in Figure 6.8 also shows that as n increases, the spread of the histograms decreases. Increasing n results in a greater degree of concentration about the population mean value and makes the histogram look more like a normal curve. The histogram of Figure 6.8(d) and the normal probability plot in Figure 6.8(e) provide convincing evidence that a sample size of n = 30 is sufficient to overcome the skewness of the population distribution and give an approximately normal X̄ sampling distribution.
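A version of Example 6.5's experiment can be scripted in the same way. The sketch below is ours (we use k = 2000 replications rather than 500 to stabilize the skewness estimates); it confirms that the x̄ values stay centered near 21.7584 while their skewness shrinks as n grows:

```python
import math
import random

random.seed(7)
MEAN_LOG, SD_LOG = 3.0, 0.4                      # ln(X) ~ N(3, .16), as in Example 6.5
POP_MEAN = math.exp(MEAN_LOG + SD_LOG ** 2 / 2)  # lognormal mean, about 21.7584

def skewness(xs):
    """Sample skewness: third central moment over cubed standard deviation."""
    m = sum(xs) / len(xs)
    s2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m3 = sum((x - m) ** 3 for x in xs) / len(xs)
    return m3 / s2 ** 1.5

def sim(n, k=2000):
    """k replications of X-bar for samples of size n from the lognormal population."""
    return [sum(random.lognormvariate(MEAN_LOG, SD_LOG) for _ in range(n)) / n
            for _ in range(k)]

xbars = {n: sim(n) for n in (5, 30)}
centers = {n: sum(v) / len(v) for n, v in xbars.items()}
skews = {n: skewness(v) for n, v in xbars.items()}
# centers[5] and centers[30] are both near POP_MEAN;
# skews[30] is noticeably smaller than skews[5].
```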
[Figure 6.8 Results of the simulation experiment of Example 6.5: (a) x̄ histogram for n = 5; (b) x̄ histogram for n = 10; (c) x̄ histogram for n = 20; (d) x̄ histogram for n = 30; (e) normal probability plot for n = 30 (from MINITAB)]

Exercises  Section 6.1 (1–10)

1. A particular brand of dishwasher soap is sold in three sizes: 25 oz, 40 oz, and 65 oz. Twenty percent of all purchasers select a 25-oz box, 50% select a 40-oz box, and the remaining 30% choose a 65-oz box. Let X₁ and X₂ denote the package sizes selected by two independently selected purchasers.
a. Determine the sampling distribution of X̄, calculate E(X̄), and compare to μ.
b. Determine the sampling distribution of the sample variance S², calculate E(S²), and compare to σ².

2. There are two traffic lights on the way to work. Let X₁ be the number of lights that are red, requiring a stop, and suppose that the distribution of X₁ is as follows:

x₁       0    1    2
p(x₁)    .2   .5   .3        μ = 1.1, σ² = .49

Let X₂ be the number of lights that are red on the way home; X₂ is independent of X₁. Assume that X₂ has the same distribution as X₁, so that X₁, X₂ is a random sample of size n = 2.
a. Let T_o = X₁ + X₂ and determine the probability distribution of T_o.
b. Calculate μ_{T_o}. How does it relate to μ, the population mean?
c. Calculate σ²_{T_o}. How does it relate to σ², the population variance?

3. It is known that 80% of all brand A DVD players work in a satisfactory manner throughout the warranty period (are "successes"). Suppose that n = 10 players are randomly selected. Let X = the number of successes in the sample. The statistic X/n is the sample proportion (fraction) of successes. Obtain the sampling distribution of this statistic. [Hint: One possible value of X/n is .3, corresponding to X = 3. What is the probability of this value (what kind of random variable is X)?]

4. A box contains ten sealed envelopes numbered 1, ..., 10. The first five contain no money, the next three each contain $5, and there is a $10 bill in each of the last two. A sample of size 3 is selected with replacement (so we have a random sample), and you get the largest amount in any of the envelopes selected. If X₁, X₂, and X₃ denote the amounts in the selected envelopes, the statistic of interest is M = the maximum of X₁, X₂, and X₃.
a. Obtain the probability distribution of this statistic.
b. Describe how you would carry out a simulation experiment to compare the distributions of M for various sample sizes. How would you guess the distribution would change as n increases?

5. Let X be the number of packages being mailed by a randomly selected customer at a shipping facility. Suppose the distribution of X is as follows:

x       1    2    3    4
p(x)    .4   .3   .2   .1

a. Consider a random sample of size n = 2 (two customers), and let X̄ be the sample mean number of packages shipped. Obtain the probability distribution of X̄.
b. Refer to part (a) and calculate P(X̄ ≤ 2.5).
c. Again consider a random sample of size n = 2, but now focus on the statistic R = the sample range (difference between the largest and smallest values in the sample). Obtain the distribution of R. [Hint: Calculate the value of R for each outcome and use the probabilities from part (a).]
d. If a random sample of size n = 4 is selected, what is P(X̄ ≤ 1.5)? [Hint: You should not have to list all possible outcomes, only those for which x̄ ≤ 1.5.]

6. A company maintains three offices in a region, each staffed by two employees. Information concerning yearly salaries (1000s of dollars) is as follows:

Office      1      1      2      2      3      3
Employee    1      2      3      4      5      6
Salary      29.7   33.8   30.2   33.0   25.8   29.7

a. Suppose two of these employees are randomly selected from among the six (without replacement). Determine the sampling distribution of the sample mean salary X̄.
b. Suppose one of the three offices is randomly selected. Let X₁ and X₂ denote the salaries of the two employees. Determine the sampling distribution of X̄.
c. How does E(X̄) from parts (a) and (b) compare to the population mean salary μ?

7. The number of dirt specks on a randomly selected square yard of polyethylene film of a certain type has a Poisson distribution with a mean value of 2 specks per square yard. Consider a random sample of n = 5 film specimens, each having area 1 square yard, and let X̄ be the resulting sample mean number of dirt specks. Obtain the first 21 probabilities in the X̄ sampling distribution. [Hint: What does a moment generating function argument say about the distribution of X₁ + ... + X₅?]

8. Suppose the amount of liquid dispensed by a machine is uniformly distributed with lower limit A = 8 oz and upper limit B = 10 oz. Describe how you would carry out simulation experiments to compare the sampling distribution of the (sample) fourth spread for sample sizes n = 5, 10, 20, and 30.

9. Carry out a simulation experiment using a statistical computer package or other software to study the sampling distribution of X̄ when the population distribution is Weibull with α = 2 and β = 5, as in Example 6.1. Consider the four sample sizes n = 5, 10, 20, and 30, and in each case use 500 replications. For which of these sample sizes does the X̄ sampling distribution appear to be approximately normal?

10. Carry out a simulation experiment using a statistical computer package or other software to study the sampling distribution of X̄ when the population distribution is lognormal with E[ln(X)] = 3 and V[ln(X)] = 1. Consider the four sample sizes n = 10, 20, 30, and 50, and in each case use 500 replications. For which of these sample sizes does the X̄ sampling distribution appear to be approximately normal?

The Distribution of the Sample Mean

The importance of the sample mean X̄ springs from its use in drawing conclusions about the population mean μ. Some of the most frequently used inferential procedures are based on properties of the sampling distribution of X̄. A preview of these properties appeared in the calculations and simulation experiments of the previous section, where we noted relationships between E(X̄) and μ and also among V(X̄), σ², and n.

PROPOSITION  Let X₁, X₂, ..., Xₙ be a random sample from a distribution with mean value μ and standard deviation σ. Then
1. E(X̄) = μ_X̄ = μ
2. V(X̄) = σ²_X̄ = σ²/n  and  σ_X̄ = σ/√n
In addition, with T_o = X₁ + ... + Xₙ (the sample total), E(T_o) = nμ, V(T_o) = nσ², and σ_{T_o} = √n · σ.

Proofs of these results are deferred to the next section. According to Result 1, the sampling (i.e., probability) distribution of X̄ is centered precisely at the mean of the population from which the sample has been selected. Result 2 shows that the X̄ distribution becomes more concentrated about μ as the sample size n increases. In marked contrast, the distribution of T_o becomes more spread out as n increases. Averaging moves probability in toward the middle, whereas totaling spreads probability out over a wider and wider range of values.

The amount of time that a patient spends in a certain outpatient surgery center is a random variable with a mean value of 4.5 h and a standard deviation of 2 h. Let X₁, ..., X₂₅ be the times for a random sample of 25 patients.
Then the expected value of the sample mean amount of time is E(X̄) = μ = 4.5, and the expected total time for the 25 patients is E(T_o) = nμ = 25(4.5) = 112.5. The standard deviations of X̄ and T_o are

    σ_X̄ = σ/√n = 2/√25 = .4
    σ_{T_o} = √n · σ = √25 (2) = 10

If the sample size increases to n = 100, E(X̄) is unchanged, but σ_X̄ = .2, half of its previous value (the sample size must be quadrupled to halve the standard deviation of X̄).

The Case of a Normal Population Distribution

Looking back to the simulation experiment of Example 6.4, we see that when the population distribution is normal, each histogram of x̄ values is well approximated by a normal curve. The precise result follows (see the next section for a derivation).

PROPOSITION  Let X₁, X₂, ..., Xₙ be a random sample from a normal distribution with mean μ and standard deviation σ. Then for any n, X̄ is normally distributed (with mean μ and standard deviation σ/√n), as is T_o (with mean nμ and standard deviation √n · σ).

We know everything there is to know about the X̄ and T_o distributions when the population distribution is normal. In particular, probabilities such as P(a ≤ X̄ ≤ b) and P(c ≤ T_o ≤ d) can be obtained simply by standardizing. Figure 6.9 illustrates the proposition.

[Figure 6.9 A normal population distribution and X̄ sampling distributions]

The time that it takes a randomly selected rat of a certain subspecies to find its way through a maze is a normally distributed rv with μ = 1.5 min and σ = .35 min. Suppose five rats are selected. Let X₁, ..., X₅ denote their times in the maze. Assuming the Xᵢ's to be a random sample from this normal distribution, what is the probability that the total time T_o = X₁ + ... + X₅ for the five is between 6 and 8 min? By the proposition, T_o has a normal distribution with μ_{T_o} = nμ = 5(1.5) = 7.5 and variance σ²_{T_o} = nσ² = 5(.1225) = .6125, so σ_{T_o} = .783.
To standardize T_o, subtract μ_{T_o} and divide by σ_{T_o}:

    P(6 ≤ T_o ≤ 8) = P((6 − 7.5)/.783 ≤ Z ≤ (8 − 7.5)/.783)
                   = P(−1.92 ≤ Z ≤ .64) = Φ(.64) − Φ(−1.92) = .7115

The Central Limit Theorem

THEOREM (The Central Limit Theorem)  Let X₁, X₂, ..., Xₙ be a random sample from a distribution with mean μ and variance σ². Then, as n → ∞, the standardized versions of X̄ and T_o have the standard normal distribution. That is,

    lim_{n→∞} P((X̄ − μ)/(σ/√n) ≤ z) = P(Z ≤ z) = Φ(z)

and

    lim_{n→∞} P((T_o − nμ)/(√n · σ) ≤ z) = P(Z ≤ z) = Φ(z)

where Z is a standard normal rv. Thus when n is sufficiently large, X̄ has approximately a normal distribution with mean μ and standard deviation σ/√n, and T_o has approximately a normal distribution with mean nμ and standard deviation √n · σ.

Consider the distribution shown in Figure 6.11 for the amount purchased (rounded to the nearest dollar) by a randomly selected customer at a particular gas station. (A similar distribution for purchases in Britain (in £) appeared in the article "Data Mining for Fun and Profit," Statistical Science, 2000: 111–131; there were big spikes at the values 10, 15, 20, 25, and 30.) The distribution is obviously quite non-normal.

[Figure 6.11 Probability distribution of X = amount of gasoline purchased ($)]

We asked MINITAB to select 1000 different samples, each consisting of n = 15 observations, and to calculate the value of the sample mean X̄ for each one. Figure 6.12 is a histogram of the resulting 1000 values; this is the approximate sampling distribution of X̄ under the specified circumstances. This distribution is clearly approximately normal even though the sample size is not all that large. As further evidence of normality, Figure 6.13 shows a normal probability plot of the 1000 x̄ values; the linear pattern is very prominent. It is typically not non-normality in the central part of the population distribution that causes the CLT to fail, but rather very substantial skewness.
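The limit in the theorem is easy to probe numerically. In this sketch (our own illustration; we take an exponential population with λ = 1, so μ = σ = 1), the simulated probability that the standardized X̄ is at most 1 is compared with Φ(1) ≈ .8413:

```python
import math
import random

random.seed(123)
MU = SIGMA = 1.0        # exponential population with rate 1: mu = sigma = 1
N, K = 40, 20_000       # sample size and number of replications

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

hits = 0
for _ in range(K):
    xbar = sum(random.expovariate(1.0) for _ in range(N)) / N
    z = (xbar - MU) / (SIGMA / math.sqrt(N))   # standardized X-bar
    if z <= 1.0:
        hits += 1

estimate = hits / K     # should be close to phi(1), about 0.8413
```

Even for a markedly skewed population, n = 40 already brings the estimated probability within about .01 of the normal value.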
[Figure 6.12 Approximate sampling distribution of the sample mean amount purchased when n = 15 and the population distribution is as shown in Figure 6.11]

[Figure 6.13 Normal probability plot from MINITAB of the 1000 x̄ values based on samples of size n = 15]

A practical difficulty in applying the CLT is in knowing when n is sufficiently large. The problem is that the accuracy of the approximation for a particular n depends on the shape of the original underlying distribution being sampled. If the underlying distribution is symmetric and there is not much probability in the tails, then the approximation will be good even for a small n, whereas if it is highly skewed or there is a lot of probability in the tails, then a large n will be required. For example, if the distribution is uniform on an interval, then it is symmetric with no probability in the tails, and the normal approximation is very good for n as small as 10. However, at the other extreme, a distribution can have such fat tails that the mean fails to exist and the Central Limit Theorem does not apply, so no n is big enough. We will use the following rule of thumb, which is frequently somewhat conservative.

RULE OF THUMB  If n > 30, the Central Limit Theorem can be used.

Of course, there are exceptions, but this rule applies to most distributions of real data.

Other Applications of the Central Limit Theorem

The CLT can be used to justify the normal approximation to the binomial distribution discussed in Chapter 4. Recall that a binomial variable X is the number of successes in a binomial experiment consisting of n independent success/failure trials with p = P(S) for any particular trial.
Define new rv's X₁, X₂, ..., Xₙ by

    Xᵢ = 1 if the ith trial results in a success, and 0 if the ith trial results in a failure    (i = 1, ..., n)

Because the trials are independent and P(S) is constant from trial to trial, the Xᵢ's are iid (a random sample from a Bernoulli distribution). The CLT then implies that if n is sufficiently large, both the sum and the average of the Xᵢ's have approximately normal distributions. When the Xᵢ's are summed, a 1 is added for every S that occurs and a 0 for every F, so X₁ + ... + Xₙ = X = T_o. The sample mean of the Xᵢ's is X̄ = X/n, the sample proportion of successes. That is, both X and X/n are approximately normal when n is large. The necessary sample size for this approximation depends on the value of p: When p is close to .5, the distribution of each Xᵢ is reasonably symmetric (see Figure 6.14), whereas the distribution is quite skewed when p is near 0 or 1. Using the approximation only if both np ≥ 10 and n(1 − p) ≥ 10 ensures that n is large enough to overcome any skewness in the underlying Bernoulli distribution.

[Figure 6.14 Two Bernoulli distributions: (a) p = .4 (reasonably symmetric); (b) p = .1 (very skewed)]

Recall from Section 4.5 that X has a lognormal distribution if ln(X) has a normal distribution.

PROPOSITION  Let X₁, X₂, ..., Xₙ be a random sample from a distribution for which only positive values are possible [P(Xᵢ > 0) = 1]. Then if n is sufficiently large, the product Y = X₁X₂ ··· Xₙ has approximately a lognormal distribution; that is, ln(Y) has a normal distribution.

To verify this, note that

    ln(Y) = ln(X₁) + ln(X₂) + ... + ln(Xₙ)

Since ln(Y) is a sum of independent and identically distributed rv's [the ln(Xᵢ)'s], it is approximately normal when n is large, so Y itself has approximately a lognormal distribution.
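The product-to-lognormal proposition can also be checked by simulation. In the sketch below (ours; the factors are uniform on (0.5, 1.5), an arbitrary positive-valued choice), ln(Y) for products of n = 50 factors has a sample mean close to n·E[ln(X)] and nearly zero skewness, as the normal limit requires:

```python
import math
import random

random.seed(11)
N, K = 50, 5000     # factors per product, number of simulated products
A, B = 0.5, 1.5     # assumed Uniform(A, B) distribution for each positive factor

# E[ln X] for X ~ Uniform(A, B): integral of ln(x)/(B - A) over [A, B]
MU_LN = (B * math.log(B) - B - (A * math.log(A) - A)) / (B - A)

logs = []
for _ in range(K):
    # ln(Y) = sum of ln(X_i); summing logs avoids underflow in the raw product
    logs.append(sum(math.log(random.uniform(A, B)) for _ in range(N)))

m = sum(logs) / K                                      # near N * MU_LN
s2 = sum((v - m) ** 2 for v in logs) / K
skew = sum((v - m) ** 3 for v in logs) / K / s2 ** 1.5  # near 0 for a normal shape
```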
As an example of the applicability of this result, it has been argued that the damage process in plastic flow and crack propagation is a multiplicative process, so that variables such as percentage elongation and rupture strength have approximately lognormal distributions.

The Law of Large Numbers

Recall the first proposition in this section: If X₁, X₂, ..., Xₙ is a random sample from a distribution with mean μ and variance σ², then E(X̄) = μ and V(X̄) = σ²/n. What happens to X̄ as the number of observations becomes large? The expected value of X̄ remains at μ, but the variance approaches zero. That is, V(X̄) = E[(X̄ − μ)²] → 0. We say that X̄ converges in mean square to μ because the mean of the squared difference between X̄ and μ goes to zero. This is one form of the Law of Large Numbers, which says that X̄ → μ as n → ∞.

The Law of Large Numbers should be intuitively reasonable. For example, consider a fair die with equal probabilities for the values 1, 2, ..., 6, so μ = 3.5. After many repeated throws of the die x₁, x₂, ..., xₙ, we should be surprised if x̄ is not close to 3.5.

Another form of convergence can be shown with the help of Chebyshev's inequality (Exercises 43 and 135 in Chapter 3), which states that for any random variable Y, P(|Y − μ_Y| ≥ kσ_Y) ≤ 1/k² whenever k ≥ 1. In words, the probability that Y is at least k standard deviations away from its mean value is at most 1/k²; as k increases, the probability gets closer to 0. Apply this to the mean Y = X̄ of a random sample X₁, X₂, ..., Xₙ from a distribution with mean μ and variance σ². Then E(Y) = E(X̄) = μ and V(Y) = V(X̄) = σ²/n, so the σ in Chebyshev's inequality needs to be replaced by σ/√n. Now let ε be a positive number close to 0, such as .01 or .001, and consider P(|X̄ − μ| ≥ ε), the probability that X̄ differs from μ by at least ε (at least .01, at least .001, etc.). What happens to this probability as n → ∞? Setting ε = kσ/√n and solving for k gives k = ε√n/σ.
Thus

    P(|X̄ − μ| ≥ ε) = P(|X̄ − μ| ≥ (ε√n/σ)(σ/√n)) ≤ 1/k² = σ²/(nε²) → 0 as n → ∞

That is, X̄ also converges in probability to μ.

THEOREM (The Law of Large Numbers)  If X₁, X₂, ..., Xₙ is a random sample from a distribution with mean μ and variance σ², then X̄ converges to μ
a. in mean square: E[(X̄ − μ)²] → 0 as n → ∞
b. in probability: P(|X̄ − μ| ≥ ε) → 0 as n → ∞ for any ε > 0

Often we do not know μ, so we use X̄ to estimate it. According to the theorem, X̄ will be an accurate estimator if n is large. Estimators that are close for large n are called consistent.

Let's apply the Law of Large Numbers to the repeated flipping of a fair coin. Intuitively, the fraction of heads should approach ½ as we get more and more coin flips. For i = 1, ..., n, let Xᵢ = 1 if the ith toss is a head and Xᵢ = 0 if it is a tail. Then the Xᵢ's are independent, and each Xᵢ is a Bernoulli rv with μ = .5 and standard deviation σ = .5. Furthermore, the sum X₁ + X₂ + ... + Xₙ is the total number of heads, so X̄ is the fraction of heads. Thus the fraction of heads approaches the mean, μ = .5, by the Law of Large Numbers.

Exercises  Section 6.2 (11–26)

11. The inside diameter of a randomly selected piston ring is a random variable with mean value 12 cm and standard deviation .04 cm.
a. If X̄ is the sample mean diameter for a random sample of n = 16 rings, where is the sampling distribution of X̄ centered, and what is the standard deviation of the X̄ distribution?
b. Answer the questions posed in part (a) for a sample size of n = 64 rings.
c. For which of the two random samples, the one of part (a) or the one of part (b), is X̄ more likely to be within .01 cm of 12 cm? Explain your reasoning.

12. Refer to Exercise 11. Suppose the distribution of diameter is normal.
a. Calculate P(11.99 ≤ X̄ ≤ 12.01) when n = 16.
b. How likely is it that the sample mean diameter exceeds 12.01 when n = 25?

13. The National Health Statistics Reports dated Oct. 22, 2008, stated that for a sample size of 277 18-year-old American males, the sample mean waist circumference was 86.3 cm. A somewhat complicated method was used to estimate various population percentiles, resulting in the following values:

5th    10th   25th   50th   75th   90th   95th
69.6   70.9   75.2   81.3   95.4   107.1  116.4

a. Is it plausible that the waist size distribution is at least approximately normal? Explain your reasoning. If your answer is no, conjecture the shape of the population distribution.
b. Suppose that the population mean waist size is 85 cm and that the population standard deviation is 15 cm. How likely is it that a random sample of 277 individuals will result in a sample mean waist size of at least 86.3 cm?
c. Referring back to (b), suppose now that the population mean waist size is 82 cm (closer to the median than the mean). Now what is the (approximate) probability that the sample mean will be at least 86.3? In light of this calculation, do you think that 82 is a reasonable value for μ?

14. There are 40 students in an elementary statistics class. On the basis of years of experience, the instructor knows that the time needed to grade a randomly chosen first examination paper is a random variable with an expected value of 6 min and a standard deviation of 6 min.
a. If grading times are independent and the instructor begins grading at 6:50 p.m. and grades continuously, what is the (approximate) probability that he is through grading before the 11:00 p.m. TV news begins?
b. If the sports report begins at 11:10, what is the probability that he misses part of the report if he waits until grading is done before turning on the TV?

15. The tip percentage at a restaurant has a mean value of 18% and a standard deviation of 6%.
a. What is the approximate probability that the sample mean tip percentage for a random sample of 40 bills is between 16% and 19%?
b. If the sample size had been 15 rather than 40, could the probability requested in part (a) be calculated from the given information?

16. The time taken by a randomly selected applicant for a mortgage to fill out a certain form has a normal distribution with mean value 10 min and standard deviation 2 min. If five individuals fill out a form on 1 day and six on another, what is the probability that the sample average amount of time taken on each day is at most 11 min?

17. The lifetime of a type of battery is normally distributed with mean value 10 h and standard deviation 1 h. There are four batteries in a package. What lifetime value is such that the total lifetime of all batteries in a package exceeds that value for only 5% of all packages?

18. Let X represent the amount of gasoline (gallons) purchased by a randomly selected customer at a gas station. Suppose that the mean value and standard deviation of X are 11.5 and 4.0, respectively.
a. In a sample of 50 randomly selected customers, what is the approximate probability that the sample mean amount purchased is at least 12 gallons?
b. In a sample of 50 randomly selected customers, what is the approximate probability that the total amount of gasoline purchased is at most 600 gallons?
c. What is the approximate value of the 95th percentile for the total amount purchased by 50 randomly selected customers?

19. Suppose the sediment density (g/cm³) of a randomly selected specimen from a region is normally distributed with mean 2.65 and standard deviation .85 (suggested in "Modeling Sediment and Water Column Interactions for Hydrophobic Pollutants," Water Res., 1984: 1169–1174).
a. If a random sample of 25 specimens is selected, what is the probability that the sample average sediment density is at most 3.00? Between 2.65 and 3.00?
b. How large a sample size would be required to ensure that the first probability in part (a) is at least .99?

20. The first assignment in a statistical computing class involves running a short program. If past experience indicates that 40% of all students will make no programming errors, compute the (approximate) probability that in a class of 50 students
a. At least 25 will make no errors [Hint: Normal approximation to the binomial]
b. Between 15 and 25 (inclusive) will make no errors

21. The number of parking tickets issued in a certain city on any given weekday has a Poisson distribution with parameter λ = 50. What is the approximate probability that
a. Between 35 and 70 tickets are given out on a particular day? [Hint: When λ is large, a Poisson rv has approximately a normal distribution.]
b. The total number of tickets given out during a 5-day week is between 225 and 275?

22. Suppose the distribution of the time X (in hours) spent by students at a certain university on a particular project is gamma with parameters α = 50 and β = 2. Because α is large, it can be shown that X has approximately a normal distribution. Use this fact to compute the probability that a randomly selected student spends at most 125 h on the project.

23. The Central Limit Theorem says that X̄ is approximately normal if the sample size is large. More specifically, the theorem states that the standardized X̄ has a limiting standard normal distribution. That is, (X̄ − μ)/(σ/√n) has a distribution approaching the standard normal. Can you reconcile this with the Law of Large Numbers? If the standardized X̄ is approximately standard normal, then what about X̄ itself?

24. Consider a sequence of independent trials, each with probability p of success. Use the Law of Large Numbers to show that the proportion of successes approaches p as the number of trials becomes large.

25. Let Yₙ be the largest order statistic in a sample of size n from the uniform distribution on [0, θ]. Show that Yₙ converges in probability to θ, that is, that P(|Yₙ − θ| ≥ ε) → 0 as n approaches ∞. [Hint: The pdf of the largest order statistic appears in Section 5.5, so the relevant probability can be obtained by integration (Chebyshev's inequality is not needed).]

26. A friend commutes by bus to and from work 6 days/week. Suppose that waiting time is uniformly distributed between 0 and 10 min, and that waiting times going and returning on various days are independent of each other. What is the approximate probability that total waiting time for an entire week is at most 75 min? [Hint: Carry out a simulation experiment using statistical software to investigate the sampling distribution of T_o under these circumstances. The idea of this problem is that even for an n as small as 12, T_o should be approximately normal when the parent distribution is uniform. What do you think?]

The Mean, Variance, and MGF for Several Variables

The sample mean X̄ and sample total T_o are special cases of a type of random variable that arises very frequently in statistical applications.

DEFINITION  Given a collection of n random variables X₁, X₂, ..., Xₙ and n numerical constants a₁, ..., aₙ, the rv

    Y = a₁X₁ + ... + aₙXₙ = Σᵢ₌₁ⁿ aᵢXᵢ                (6.6)

is called a linear combination of the Xᵢ's.

Taking a₁ = a₂ = ... = aₙ = 1 gives Y = X₁ + ... + Xₙ = T_o, and a₁ = a₂ = ... = aₙ = 1/n yields Y = (1/n)X₁ + ... + (1/n)Xₙ = (1/n)(X₁ + ... + Xₙ) = (1/n)T_o = X̄. Notice that we are not requiring the Xᵢ's to be independent or identically distributed. All the Xᵢ's could have different distributions and therefore different mean values and variances. We first consider the expected value and variance of a linear combination.

PROPOSITION  Let X₁, X₂, ..., Xₙ have mean values μ₁, ..., μₙ, respectively, and variances σ₁², ..., σₙ², respectively.
1. Whether or not the Xᵢ's are independent,

    E(a₁X₁ + ... + aₙXₙ) = a₁E(X₁) + ... + aₙE(Xₙ) = a₁μ₁ + ... + aₙμₙ        (6.7)

2. If X₁, ..., Xₙ are independent,

    V(a₁X₁ + ... + aₙXₙ) = a₁²V(X₁) + ... + aₙ²V(Xₙ) = a₁²σ₁² + ... + aₙ²σₙ²   (6.8)

and

    σ_{a₁X₁+...+aₙXₙ} = √(a₁²σ₁² + ... + aₙ²σₙ²)                                (6.9)

3.
For any X₁, X₂, ..., Xₙ,

V(a₁X₁ + ··· + aₙXₙ) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢaⱼ Cov(Xᵢ, Xⱼ)    (6.10)

Proofs are sketched out later in the section. A paraphrase of (6.7) is that the expected value of a linear combination is the same linear combination of the expected values: for example, E(2X₁ + 5X₂) = 2μ₁ + 5μ₂. The result (6.8) in Statement 2 is a special case of (6.10) in Statement 3; when the Xᵢ's are independent, Cov(Xᵢ, Xⱼ) = 0 for i ≠ j and = V(Xᵢ) for i = j (this simplification actually occurs when the Xᵢ's are uncorrelated, a weaker condition than independence). Specializing to the case of a random sample (Xᵢ's iid) with aᵢ = 1/n for every i gives E(X̄) = μ and V(X̄) = σ²/n, as discussed in Section 6.2. A similar comment applies to the rules for T₀.

Example 6.12  A gas station sells three grades of gasoline: regular, plus, and premium. These are priced at $3.50, $3.65, and $3.80 per gallon, respectively. Let X₁, X₂, and X₃ denote the amounts of these grades purchased (gallons) on a particular day. Suppose the Xᵢ's are independent with μ₁ = 1000, μ₂ = 500, μ₃ = 300, σ₁ = 100, σ₂ = 80, and σ₃ = 50. The revenue from sales is Y = 3.5X₁ + 3.65X₂ + 3.8X₃, and

E(Y) = 3.5μ₁ + 3.65μ₂ + 3.8μ₃ = $6465
V(Y) = 3.5²σ₁² + 3.65²σ₂² + 3.8²σ₃² = 243,864
σ_Y = √243,864 = $493.83

The results of the previous proposition allow for a straightforward derivation of the mean and variance of a hypergeometric rv, which were given without proof in Section 3.6. Recall that the distribution is defined in terms of a population with N items, of which M are successes and N − M are failures. A sample of size n is drawn, of which X are successes. It is equivalent to view this as a random arrangement of all N items, followed by selection of the first n. Let Xᵢ be 1 if the ith item is a success and 0 if it is a failure, i = 1, 2, ..., N. Then

X = X₁ + X₂ + ··· + Xₙ

According to the proposition, we can find the mean and variance of X if we can find the means, variances, and covariances of the terms in the sum.
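Before carrying out that derivation, formulas (6.7)–(6.9) can be sanity-checked numerically on Example 6.12. The sketch below assumes NumPy is available; the normal marginals are an assumption made only to drive the simulation, since the formulas themselves require no distributional assumption:

```python
import numpy as np

# Example 6.12: revenue Y = 3.5*X1 + 3.65*X2 + 3.8*X3, X_i independent.
a = np.array([3.5, 3.65, 3.8])
mu = np.array([1000.0, 500.0, 300.0])
sigma = np.array([100.0, 80.0, 50.0])

mean_Y = a @ mu              # formula (6.7): 6465
var_Y = (a**2) @ (sigma**2)  # formula (6.8): 243,864
sd_Y = np.sqrt(var_Y)        # formula (6.9): about 493.83

# Monte Carlo check; normality of the X_i is assumed here purely so that
# something concrete can be simulated.
rng = np.random.default_rng(0)
Y = rng.normal(mu, sigma, size=(200_000, 3)) @ a
print(mean_Y, var_Y, round(sd_Y, 2))
print(Y.mean(), Y.var())   # should land close to the exact values above
```

The empirical mean and variance agree with (6.7) and (6.8) up to Monte Carlo noise, which is the point: the algebraic rules, not the simulation, carry the result.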
By symmetry, all N of the Xᵢ's have the same mean and variance, and all of their covariances are the same. Because each Xᵢ is a Bernoulli random variable with success probability p = M/N,

E(Xᵢ) = p = M/N    V(Xᵢ) = p(1 − p) = (M/N)(1 − M/N)

Therefore,

E(X) = E(Σᵢ₌₁ⁿ Xᵢ) = np

Here is a trick for finding the covariances Cov(Xᵢ, Xⱼ) for i ≠ j, all of which equal Cov(X₁, X₂). The sum of all N of the Xᵢ's is M, which is a constant, so its variance is 0. We can use Statement 3 of the proposition to express the variance in terms of N identical variances and N(N − 1) identical covariances:

0 = V(M) = V(Σᵢ₌₁ᴺ Xᵢ) = N·V(X₁) + N(N − 1)Cov(X₁, X₂) = Np(1 − p) + N(N − 1)Cov(X₁, X₂)

Solving this equation for the covariance,

Cov(X₁, X₂) = −p(1 − p)/(N − 1)

Thus, using Statement 3 of the proposition with n identical variances and n(n − 1) identical covariances,

V(X) = V(Σᵢ₌₁ⁿ Xᵢ) = n·V(X₁) + n(n − 1)Cov(X₁, X₂)
     = np(1 − p) + n(n − 1)·[−p(1 − p)/(N − 1)]
     = np(1 − p)[1 − (n − 1)/(N − 1)]
     = np(1 − p)·(N − n)/(N − 1)

The Difference Between Two Random Variables

An important special case of a linear combination results from taking n = 2, a₁ = 1, and a₂ = −1:

Y = a₁X₁ + a₂X₂ = X₁ − X₂

We then have the following corollary to the proposition.

COROLLARY  E(X₁ − X₂) = E(X₁) − E(X₂) and, if X₁ and X₂ are independent, V(X₁ − X₂) = V(X₁) + V(X₂).

The expected value of a difference is the difference of the two expected values, but the variance of a difference between two independent variables is the sum, not the difference, of the two variances. There is just as much variability in X₁ − X₂ as in X₁ + X₂ [writing X₁ − X₂ = X₁ + (−1)X₂, (−1)X₂ has the same amount of variability as X₂ itself].

An automobile manufacturer equips a particular model with either a six-cylinder engine or a four-cylinder engine.
Let X₁ and X₂ be fuel efficiencies for independently and randomly selected six-cylinder and four-cylinder cars, respectively. With μ₁ = 22, μ₂ = 26, σ₁ = 1.2, and σ₂ = 1.5,

E(X₁ − X₂) = μ₁ − μ₂ = 22 − 26 = −4
V(X₁ − X₂) = σ₁² + σ₂² = 1.2² + 1.5² = 3.69
σ_{X₁−X₂} = √3.69 = 1.92

If we relabel so that X₁ refers to the four-cylinder car, then E(X₁ − X₂) = 4, but the variance of the difference is still 3.69.

The Case of Normal Random Variables

When the Xᵢ's form a random sample from a normal distribution, X̄ and T₀ are both normally distributed. Here is a more general result concerning linear combinations. The proof will be given toward the end of the section.

PROPOSITION  If X₁, X₂, ..., Xₙ are independent, normally distributed rv's (with possibly different means and/or variances), then any linear combination of the Xᵢ's also has a normal distribution. In particular, the difference X₁ − X₂ between two independent, normally distributed variables is itself normally distributed.

The total revenue from the sale of the three grades of gasoline on a particular day (Example 6.12) was Y = 3.5X₁ + 3.65X₂ + 3.8X₃, and we calculated μ_Y = 6465 and (assuming independence) σ_Y = 493.83. If the Xᵢ's are normally distributed, the probability that revenue exceeds 5000 is

P(Y > 5000) = P(Z > (5000 − 6465)/493.83) = P(Z > −2.967) = 1 − Φ(−2.967) = .9985

The CLT can also be generalized so it applies to certain linear combinations. Roughly speaking, if n is large and no individual term is likely to contribute too much to the overall value, then Y has approximately a normal distribution.

Proofs for the Case n = 2

For the result concerning expected values, suppose that X₁ and X₂ are continuous with joint pdf f(x₁, x₂). Then

E(a₁X₁ + a₂X₂) = ∫∫ (a₁x₁ + a₂x₂) f(x₁, x₂) dx₁ dx₂ = a₁E(X₁) + a₂E(X₂)

Summation replaces integration in the discrete case.
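As an aside, the revenue probability computed just above is a one-line check with scipy.stats (a sketch assuming SciPy is available; `norm.sf` is the upper-tail area 1 − Φ):

```python
from scipy.stats import norm

# P(Y > 5000) for Y normal with mean 6465 and SD 493.83 (the revenue example).
p = norm.sf(5000, loc=6465, scale=493.83)
print(round(p, 4))  # 0.9985
```

This matches the table-based calculation 1 − Φ(−2.967) = .9985.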
The argument for the variance result does not require specifying whether either variable is discrete or continuous. Recalling that V(Y) = E[(Y − μ_Y)²],

V(a₁X₁ + a₂X₂) = E{[a₁X₁ + a₂X₂ − (a₁μ₁ + a₂μ₂)]²}
             = E{a₁²(X₁ − μ₁)² + a₂²(X₂ − μ₂)² + 2a₁a₂(X₁ − μ₁)(X₂ − μ₂)}

The expression inside the braces is a linear combination of the variables Y₁ = (X₁ − μ₁)², Y₂ = (X₂ − μ₂)², and Y₃ = (X₁ − μ₁)(X₂ − μ₂), so carrying the E operation through to the three terms gives a₁²V(X₁) + a₂²V(X₂) + 2a₁a₂Cov(X₁, X₂), as required.

The previous proposition has a generalization to the case of two linear combinations:

PROPOSITION  Let U and V be linear combinations of the independent normal rv's X₁, X₂, ..., Xₙ. Then the joint distribution of U and V is bivariate normal. The converse is also true: if U and V have a bivariate normal distribution, then they can be expressed as linear combinations of independent normal rv's. The proof uses the methods of Section 5.4 together with a little matrix theory.

How can we create two bivariate normal rv's X and Y with a specified correlation ρ? Let Z₁ and Z₂ be independent standard normal rv's and let

X = Z₁    Y = ρZ₁ + √(1 − ρ²)·Z₂

Then X and Y are linear combinations of independent normal random variables, so their joint distribution is bivariate normal. Furthermore, they each have standard deviation 1 (verify this for Y), and their covariance is ρ, so their correlation is ρ.

Moment Generating Functions for Linear Combinations

We shall use moment generating functions to prove the proposition on linear combinations of normal random variables, but we first need a general proposition on the distribution of linear combinations. This will be useful for normal random variables and others. Recall that the second proposition in Section 5.2 shows how to simplify the expected value of a product of functions of independent random variables.
We now use this to simplify the moment generating function of a linear combination of independent random variables.

PROPOSITION  Let X₁, X₂, ..., Xₙ be independent random variables with moment generating functions M_{X₁}(t), M_{X₂}(t), ..., M_{Xₙ}(t), respectively. Define Y = a₁X₁ + a₂X₂ + ··· + aₙXₙ, where a₁, a₂, ..., aₙ are constants. Then

M_Y(t) = M_{X₁}(a₁t)·M_{X₂}(a₂t)·…·M_{Xₙ}(aₙt)

In the special case that a₁ = a₂ = ··· = aₙ = 1,

M_Y(t) = M_{X₁}(t)·M_{X₂}(t)·…·M_{Xₙ}(t)

That is, the mgf of a sum of independent rv's is the product of the individual mgf's.

Proof  First, we write the moment generating function of Y as the expected value of a product:

M_Y(t) = E(e^{tY}) = E(e^{t(a₁X₁ + a₂X₂ + ··· + aₙXₙ)}) = E(e^{ta₁X₁}·e^{ta₂X₂}·…·e^{taₙXₙ})

Next, we use the second proposition in Section 5.2, which says that the expected value of a product of functions of independent random variables is the product of the expected values:

E(e^{ta₁X₁}·e^{ta₂X₂}·…·e^{taₙXₙ}) = E(e^{ta₁X₁})·E(e^{ta₂X₂})·…·E(e^{taₙXₙ}) = M_{X₁}(a₁t)·M_{X₂}(a₂t)·…·M_{Xₙ}(aₙt)

Now let's apply this to prove the previous proposition about normality for a linear combination of independent normal random variables. If Y = a₁X₁ + a₂X₂ + ··· + aₙXₙ, where Xᵢ is normally distributed with mean μᵢ and standard deviation σᵢ, and aᵢ is a constant, i = 1, 2, ..., n, then M_{Xᵢ}(t) = e^{μᵢt + σᵢ²t²/2}. Therefore,

M_Y(t) = M_{X₁}(a₁t)·M_{X₂}(a₂t)·…·M_{Xₙ}(aₙt)
       = e^{μ₁a₁t + σ₁²a₁²t²/2}·e^{μ₂a₂t + σ₂²a₂²t²/2}·…·e^{μₙaₙt + σₙ²aₙ²t²/2}
       = e^{(a₁μ₁ + a₂μ₂ + ··· + aₙμₙ)t + (σ₁²a₁² + σ₂²a₂² + ··· + σₙ²aₙ²)t²/2}

Because the moment generating function of Y is the moment generating function of a normal random variable, it follows that Y is normally distributed by the uniqueness principle for moment generating functions.
In agreement with the first proposition in this section, the mean is the coefficient of t,

E(Y) = a₁μ₁ + a₂μ₂ + ··· + aₙμₙ

and the variance is the coefficient of t²/2,

V(Y) = a₁²σ₁² + a₂²σ₂² + ··· + aₙ²σₙ²

Suppose X and Y are independent Poisson random variables, where X has mean λ and Y has mean ν. We can show that X + Y also has the Poisson distribution and that its mean is λ + ν, with the help of the proposition on the moment generating function of a linear combination. According to the proposition,

M_{X+Y}(t) = M_X(t)·M_Y(t) = e^{λ(eᵗ−1)}·e^{ν(eᵗ−1)} = e^{(λ+ν)(eᵗ−1)}

Here we have used for both X and Y the moment generating function of the Poisson distribution from Section 3.7. The resulting moment generating function for X + Y is the moment generating function of a Poisson random variable with mean λ + ν. By the uniqueness property of moment generating functions, X + Y is Poisson distributed with mean λ + ν.

Exercises  Section 6.3 (27–45)

27. A shipping company handles containers in three different sizes: (1) 27 ft³ (3 × 3 × 3), (2) 125 ft³, and (3) 512 ft³. Let Xᵢ (i = 1, 2, 3) denote the number of type i containers shipped during a given week. With μᵢ = E(Xᵢ) and σᵢ² = V(Xᵢ), suppose that the mean values and standard deviations are as follows: μ₁ = 200, μ₂ = 250, μ₃ = 100; σ₁ = 10, σ₂ = 12, σ₃ = 8.
a. Assuming that X₁, X₂, X₃ are independent, calculate the expected value and variance of the total volume shipped. [Hint: Volume = 27X₁ + 125X₂ + 512X₃.]
b. Would your calculations necessarily be correct if the Xᵢ's were not independent? Explain.
c. Suppose that the Xᵢ's are independent with each one having a normal distribution. What …

29. Five automobiles of the same type are to be driven on a 300-mile trip. The first two will use an economy brand of gasoline, and the other three will use a name brand. Let X₁, X₂, X₃, X₄, and X₅ be the observed fuel efficiencies (mpg) for the five cars. Suppose these variables are independent and normally distributed with μ₁ = μ₂ = 20, μ₃ = μ₄ = μ₅ = 21, and σ² = 4 for the economy brand and 3.5 for the name brand. Define an rv Y by

Y = (X₁ + X₂)/2 − (X₃ + X₄ + X₅)/3

so that Y is a measure of the difference in efficiency between economy gas and name-brand gas. Compute P(0 ≤ Y) and P(−1 ≤ Y ≤ 1).

…before intermission. The time taken to play each piece has a normal distribution. Assume that the three times are independent of each other. The mean times are 15, 30, and 20 min, respectively, and the standard deviations are 1, 2, and 1.5 min, respectively. What is the probability that this part of the concert takes at most 1 h? Are there reasons to question the independence assumption? Explain.

32. Refer to Exercise 3 in Chapter 5.
a. Calculate the covariance between X₁ = the number of customers in the express checkout and X₂ = the number of customers in the superexpress checkout.
b. Calculate V(X₁ + X₂). How does this compare to V(X₁) + V(X₂)?

33. Suppose your waiting time for a bus in the morning is uniformly distributed on [0, 8], whereas waiting time in the evening is uniformly distributed on [0, 10] independent of morning waiting time.
a. If you take the bus each morning and evening for a week, what is your total expected waiting time?
b. What is the variance of your total waiting time?
c. What are the expected value and variance of the difference between morning and evening waiting times on a given day?
d. What are the expected value and variance of the difference between total morning waiting time and total evening waiting time for a particular week?

34. An insurance office buys paper by the ream, … . Each ream lasts an average of … days, with standard deviation 1 day. The distribution is normal, independent of previous reams.
a. Find the probability that the next ream outlasts the present one by more than … days.
b. How many reams must be purchased if they are to last at least 60 days with probability at least 80%?

35. If two loads are applied to a cantilever beam as shown in the accompanying drawing, the bending moment at 0 due to the loads is a₁X₁ + a₂X₂.
a. Suppose that X₁ and X₂ are independent rv's with means 2 and 4 kips, respectively, and standard deviations .5 and 1.0 kip, respectively. If a₁ = 5 ft and a₂ = 10 ft, what is the expected bending moment and what is the standard deviation of the bending moment?
b. If X₁ and X₂ are normally distributed, what is the probability that the bending moment will exceed 75 kip-ft?
c. Suppose the positions of the two loads are random variables. Denoting them by A₁ and A₂, assume that these variables have means of 5 and 10 ft, respectively, that each has a standard deviation of .5, and that all Aᵢ's and Xᵢ's are independent of each other. What is the expected moment now?
d. For the situation of part (c), what is the variance of the bending moment?
e. If the situation is as described in part (a) except that Corr(X₁, X₂) = .5 (so that the two loads are not independent), what is the variance of the bending moment?

36. One piece of PVC pipe is to be inserted inside another piece. The length of the first piece is normally distributed with mean value 20 in. and standard deviation .5 in. The length of the second piece is a normal rv with mean and standard deviation 15 in. and .4 in., respectively. The amount of overlap is normally distributed with mean value 1 in. and standard deviation .1 in. Assuming that the lengths and amount of overlap are independent of each other, what is the probability that the total length after insertion is between 34.5 and 35 in.?

37. Two airplanes are flying in the same direction in adjacent parallel corridors. At time t = 0, the first airplane is 10 km ahead of the second one. Suppose the speed of the first plane (km/h) is normally distributed with mean 520 and standard deviation 10, and the second plane's speed, independent of the first, is also normally distributed with mean and standard deviation 500 and 10, respectively.
a. What is the probability that after 2 h of flying, the second plane has not caught up to the first plane?
b. Determine the probability that the planes are separated by at most 10 km after 2 h.

38. Three different roads feed into a particular freeway entrance. Suppose that during a fixed time period, the number of cars coming from each road onto the freeway is a random variable, with expected value and standard deviation as given in the table.

                      Road 1   Road 2   Road 3
Expected value          800     1000      600
Standard deviation       16       25       18

a. What is the expected total number of cars entering the freeway at this point during the period? [Hint: Let Xᵢ = the number from road i.]
b. What is the variance of the total number of entering cars? Have you made any assumptions about the relationship between the numbers of cars on the different roads?
c. With Xᵢ denoting the number of cars entering from road i during the period, suppose that Cov(X₁, X₂) = 80, Cov(X₁, X₃) = 90, and Cov(X₂, X₃) = 100 (so that the three streams of traffic are not independent). Compute the expected total number of entering cars and the standard deviation of the total.

39. Suppose we take a random sample of size n from a continuous distribution having median 0 so that the probability of any one observation being positive is .5. We now disregard the signs of the observations, rank them from smallest to largest in absolute value, and then let W = the sum of the ranks of the observations having positive signs. For example, if the observations are −.3, +.7, +2.1, and −2.5, then the ranks of positive observations are 2 and 3, so W = 5. In Chapter 14, W will be called Wilcoxon's signed-rank statistic. W can be represented as follows:

W = 1·Y₁ + 2·Y₂ + 3·Y₃ + ··· + n·Yₙ = Σᵢ₌₁ⁿ i·Yᵢ

where the Yᵢ's are independent Bernoulli rv's, each with p = .5.

40. …the beam is of uniform thickness and density, so that the resulting load is uniformly distributed on the beam. If the weight of the beam is random, the resulting load from the weight is also random; denote this load by W (kip-ft).
a. If the beam is 12 ft long, W has mean 1.5 and standard deviation .25, and the fixed loads are as described in part (a) of Exercise 35, what are the expected value and variance of the bending moment? [Hint: If the load due to the beam were w kip-ft, the contribution to the bending moment would be w∫₀¹² x dx.]
b. If all three variables (X₁, X₂, and W) are normally distributed, what is the probability that the bending moment will be at most 200 kip-ft?

41. A professor has three errands to take care of in the Administration Building. Let Xᵢ = the time that it takes for the ith errand (i = 1, 2, 3), and let X₄ = the total time in minutes that she spends walking to and from the building and between each errand. Suppose the Xᵢ's are independent, normally distributed, with the following means and standard deviations: μ₁ = 15, σ₁ = 4, μ₂ = 5, σ₂ = 1.2, μ₃ = 8, σ₃ = 2, μ₄ = 12, σ₄ = 3. She plans to leave her office at precisely 10:00 a.m. and wishes to post a note on her door that reads, "I will return by t a.m." What time t should she write down if she wants the probability of her arriving after t to be .01?

42. For males the expected pulse rate is 70/min and the standard deviation is 10/min. For women the expected pulse rate is 77/min and the standard deviation is 12/min. Let X̄ = the sample average pulse rate for a random sample of 40 men and let Ȳ = the sample average pulse rate for a random sample of 36 women.
a. What is the approximate distribution of X̄? Of Ȳ?
b. What is the approximate distribution of X̄ − Ȳ? Justify your answer.
c. Calculate (approximately) the probability P(−2 ≤ …).

6.4 Distributions Based on a Normal Random Sample

DEFINITION  For ν a positive integer, a random variable X is said to have a chi-squared distribution with ν degrees of freedom if its pdf is

f(x) = x^{ν/2 − 1} e^{−x/2} / (2^{ν/2} Γ(ν/2))    for x > 0

and f(x) = 0 for x < 0.

We use the notation χ²_ν to indicate a chi-squared variable with ν df (degrees of freedom). The mean, variance, and moment generating function of a chi-squared rv follow from the fact that the chi-squared distribution is a special case of the gamma distribution, with α = ν/2 and β = 2:

μ = αβ = ν    σ² = αβ² = 2ν    M_X(t) = (1 − 2t)^{−ν/2}

Here is a result that is not at all obvious, a proposition showing that the square of a standard normal variable has the chi-squared distribution.

PROPOSITION  If Z has a standard normal distribution and X = Z², then the pdf of X is

f(x) = x^{−1/2} e^{−x/2} / (2^{1/2} Γ(1/2))    for x > 0

and f(x) = 0 for x < 0. That is, X is chi-squared with 1 df, X ~ χ²₁.

Proof  The proof involves determining the cdf of X and differentiating to get the pdf. If x > 0,

P(X ≤ x) = P(Z² ≤ x) = P(−√x ≤ Z ≤ √x) = Φ(√x) − Φ(−√x) = 2Φ(√x) − 1

Differentiating with respect to x and using Γ(1/2) = √π gives the pdf

f(x) = 2φ(√x)·[1/(2√x)] = x^{−1/2} e^{−x/2}/√(2π) = x^{−1/2} e^{−x/2}/(2^{1/2} Γ(1/2))

PROPOSITION  If X₁ ~ χ²_{ν₁} and X₂ ~ χ²_{ν₂}, and they are independent, then X₁ + X₂ ~ χ²_{ν₁+ν₂}.

Proof  The proof uses moment generating functions. Recall from Section 6.3 that, if random variables are independent, then the moment generating function of their sum is the product of their moment generating functions. Therefore,

M_{X₁+X₂}(t) = M_{X₁}(t)·M_{X₂}(t) = (1 − 2t)^{−ν₁/2}(1 − 2t)^{−ν₂/2} = (1 − 2t)^{−(ν₁+ν₂)/2}

Because the sum has the moment generating function of a chi-squared variable with ν₁ + ν₂ degrees of freedom, the uniqueness principle implies that the sum has the chi-squared distribution with ν₁ + ν₂ degrees of freedom.

By combining the previous two propositions we can see that the sum of two independent standard normal squares is chi-squared with two degrees of freedom, the sum of three independent standard normal squares is chi-squared with three degrees of freedom, and so on.

PROPOSITION  If Z₁, Z₂, ..., Zᵥ are independent and each has the standard normal distribution, then Z₁² + Z₂² + ··· + Zᵥ² ~ χ²_ν.

Now the meaning of the degrees of freedom parameter is clear.
It is the number of independent standard normal squares that are added to build a chi-squared variable.

Figure 6.15 shows graphs of the chi-squared pdf for 1, 2, 3, and 5 degrees of freedom. Notice that the pdf is unbounded for 1 df and the pdf is exponentially decreasing for 2 df. Indeed, the chi-squared for 2 df is exponential with mean 2, f(x) = (1/2)e^{−x/2} for x > 0. If ν > 2 the pdf is unimodal with a peak at x = ν − 2, as shown in Exercise 49. The distribution is skewed, but it becomes more symmetric as the degrees of freedom increase, and for large df values the distribution is approximately normal (see Exercise 47).

Figure 6.15  The chi-squared pdf for 1, 2, 3, and 5 df

Except for a few special cases, it is difficult to integrate a chi-squared pdf, so Table A.6 in the appendix has critical values for chi-squared distributions. For example, the second row of the table is for 2 df, and under the heading .01 the value 9.210 indicates that P(χ²₂ > 9.210) = .01. We use the notation χ²_{.01,2} = 9.210, where in general χ²_{α,ν} = c means that P(χ²_ν > c) = α.

In Section 1.4 we defined the sample variance in terms of x̄,

s² = [1/(n − 1)] Σ(xᵢ − x̄)²

which gives an estimate of σ² when the population mean is unknown. If we happen to know the value of μ, then the appropriate estimate is

σ̂² = (1/n) Σ(xᵢ − μ)²

Replacing xᵢ's by Xᵢ's results in S² and σ̂² becoming statistics (and therefore random variables). A simple function of σ̂² is a chi-squared rv. First recall that if X is normally distributed, then (X − μ)/σ is a standard normal rv. Thus

nσ̂²/σ² = Σᵢ₌₁ⁿ [(Xᵢ − μ)/σ]²

is the sum of n independent standard normal squares, so it is χ²_n.

A similar relationship connects the sample variance S² to the chi-squared distribution. First, compute

Σ(Xᵢ − μ)² = Σ[(Xᵢ − X̄) + (X̄ − μ)]²
           = Σ(Xᵢ − X̄)² + 2(X̄ − μ)Σ(Xᵢ − X̄) + n(X̄ − μ)²

The middle term on the second line vanishes (why?).
Dividing through by σ²,

Σᵢ₌₁ⁿ [(Xᵢ − μ)/σ]² = Σᵢ₌₁ⁿ [(Xᵢ − X̄)/σ]² + [(X̄ − μ)/(σ/√n)]²    (6.11)

The last term is the square of a standard normal rv, and therefore a χ²₁ rv. It is crucial here that the two terms on the right be independent. This is equivalent to saying that S² and X̄ are independent. Although it is a bit much to show this rigorously, one approach is based on the covariances between the sample mean and the deviations from the sample mean. Using the linearity of the covariance operator, and noting that Cov(Xᵢ, X̄) = Cov(Xᵢ, Xᵢ/n) = σ²/n by the independence of the observations,

Cov(Xᵢ − X̄, X̄) = Cov(Xᵢ, X̄) − Cov(X̄, X̄) = Cov(Xᵢ, X̄) − V(X̄) = σ²/n − σ²/n = 0

This shows that X̄ is uncorrelated with all the deviations of the observations from their mean. In general, this does not imply independence, but in the special case of the bivariate normal distribution, being uncorrelated is equivalent to independence. Both X̄ and Xᵢ − X̄ are linear combinations of the independent normal observations, so they are bivariate normal, as discussed in Section 5.3. Because the sample variance S² is composed of the deviations Xᵢ − X̄, we have this result.

PROPOSITION  If X₁, X₂, ..., Xₙ are a random sample from a normal distribution, then X̄ and S² are independent.

To understand this proposition better we can look at the relationship between the sample standard deviation and mean for a large number of samples. In particular, suppose we select sample after sample of size n from a particular population distribution, calculate x̄ and s for each one, and then plot the resulting (x̄, s) pairs. Figure 6.16(a) shows the result for 1000 samples of size n = 5 from a standard normal population distribution. The elliptical pattern, with axes parallel to the coordinate axes, suggests no relationship between x̄ and s, that is, independence of the statistics X̄ and S (equivalently X̄ and S²).
However, this independence fails for data from a nonnormal distribution, and Figure 6.16(b) illustrates what happens for samples of size 5 from an exponential distribution with mean 1. This plot shows a strong relationship between the two statistics, which is what might be expected for data from a highly skewed distribution.

Figure 6.16  Plot of (x̄, s) pairs

We will use the independence of X̄ and S² together with the following proposition to show that S² is proportional to a chi-squared random variable.

PROPOSITION  If X₃ = X₁ + X₂, where X₃ ~ χ²_{ν₃}, X₁ ~ χ²_{ν₁}, ν₃ > ν₁, and X₁ and X₂ are independent, then X₂ ~ χ²_{ν₃−ν₁}.

The proof is similar to that of the proposition involving the sum of independent chi-squared variables, and it is left as an exercise (Exercise 51). From Equation (6.11),

Σᵢ₌₁ⁿ [(Xᵢ − μ)/σ]² = (n − 1)S²/σ² + [(X̄ − μ)/(σ/√n)]²

Assuming a random sample from the normal distribution, the term on the left is χ²_n, and the last term is the square of a standard normal variable, so it is χ²₁.

Putting the last two propositions together gives the following:

PROPOSITION  If X₁, X₂, ..., Xₙ are a random sample from a normal distribution, then (n − 1)S²/σ² ~ χ²_{n−1}.

Intuitively, the degrees of freedom make sense because s² is built from the deviations (x₁ − x̄), (x₂ − x̄), ..., (xₙ − x̄), which sum to zero:

Σ(xᵢ − x̄) = Σxᵢ − nx̄ = nx̄ − nx̄ = 0

The last deviation is determined by the first (n − 1) deviations, so it is reasonable that s² has only (n − 1) degrees of freedom. The degrees of freedom help to explain why the definition of s² has (n − 1) and not n in the denominator. Knowing that (n − 1)S²/σ²
~ χ²_{n−1}, it can be shown (see Exercise 50) that the expected value of S² is σ², and also that the variance of S² approaches 0 as n becomes large.

The t Distribution

Let Z be a standard normal rv and let X be a χ²_ν rv independent of Z. Then the t distribution with ν degrees of freedom is defined to be the distribution of the ratio

T = Z/√(X/ν)

Sometimes we will include a subscript to indicate the df, T = T_ν. From the definition it is not obvious how the t distribution can be applied to data, but the next result puts the distribution in more directly usable form.

THEOREM  If X₁, X₂, ..., Xₙ is a random sample from a normal distribution N(μ, σ²), then

T = (X̄ − μ)/(S/√n)

has the t distribution with (n − 1) degrees of freedom, t_{n−1}.

Proof  First we express T in a slightly different way:

T = (X̄ − μ)/(S/√n) = [(X̄ − μ)/(σ/√n)] / √{[(n − 1)S²/σ²]/(n − 1)}

The numerator on the right is standard normal because the mean of a random sample from N(μ, σ²) is normal with population mean μ and variance σ²/n. The denominator is the square root of a chi-squared variable with (n − 1) degrees of freedom, divided by its degrees of freedom. This chi-squared variable is independent of the numerator, so the ratio has the t distribution with (n − 1) degrees of freedom.

It is not hard to obtain the pdf for T.

PROPOSITION  The pdf of a random variable T having a t distribution with ν degrees of freedom is

f(t) = [Γ((ν + 1)/2)/(√(πν) Γ(ν/2))] · 1/(1 + t²/ν)^{(ν+1)/2}    −∞ < t < ∞

Proof  We first find the cdf of T and then differentiate to obtain the pdf. A t variable is defined in terms of a standard normal Z and a chi-squared variable X with ν degrees of freedom. They are independent, so their joint pdf f(x, z) is the product of their individual pdfs. Then

P(T ≤ t) = P(Z/√(X/ν) ≤ t) = P(Z ≤ t√(X/ν)) = ∫₀^∞ ∫_{−∞}^{t√(x/ν)} f(x, z) dz dx

Differentiating with respect to t (the derivative acts on the upper limit of the inner integral),

f(t) = ∫₀^∞ f(x, t√(x/ν))·√(x/ν) dx

Now substitute the joint pdf and integrate:

f(t) = ∫₀^∞ [x^{ν/2−1} e^{−x/2}/(2^{ν/2} Γ(ν/2))]·[e^{−t²x/(2ν)}/√(2π)]·√(x/ν) dx
     = [1/(2^{ν/2} Γ(ν/2) √(2πν))] ∫₀^∞ x^{(ν+1)/2 − 1} e^{−x[1/2 + t²/(2ν)]} dx

The integral can be evaluated by writing the integrand in terms of a gamma pdf.
f(t) = [Γ((ν + 1)/2)/(2^{ν/2} Γ(ν/2) √(2πν) [1/2 + t²/(2ν)]^{(ν+1)/2})] ∫₀^∞ {[1/2 + t²/(2ν)]^{(ν+1)/2}/Γ((ν + 1)/2)} x^{(ν+1)/2 − 1} e^{−x[1/2 + t²/(2ν)]} dx

The integral of the gamma pdf is 1, so

f(t) = Γ((ν + 1)/2)/(2^{ν/2} Γ(ν/2) √(2πν) [1/2 + t²/(2ν)]^{(ν+1)/2}) = [Γ((ν + 1)/2)/(√(πν) Γ(ν/2))] · 1/(1 + t²/ν)^{(ν+1)/2}

The pdf has a maximum at 0 and decreases symmetrically as |t| increases. As ν becomes large, the t pdf approaches the standard normal pdf, as shown in Exercise 54. It makes sense that the t distribution would be close to the standard normal for large ν, because T = Z/√(X/ν) and X/ν converges to 1 by the law of large numbers, as shown in Exercise 48.

Figure 6.17 shows t density curves for ν = 1, 5, and 20 along with the standard normal curve. Notice how fat the tails are for 1 df, as compared to the standard normal. However, as the degrees of freedom increase, the t pdf becomes more like the standard normal; for 20 df there is not much difference.

Figure 6.17  Comparison of t curves to the z curve

Integration of the t pdf is difficult except for low degrees of freedom, so values of upper-tail areas are given in Table A.7. For example, the value in the column labeled 2 and the row labeled 3.0 is .048, meaning that for two degrees of freedom P(T > 3.0) = .048. We write this as t_{.048,2} = 3.0, and in general we write t_{α,ν} = c if P(T_ν > c) = α. A tabulation of these t critical values (i.e., t_{α,ν}) for frequently used tail areas α appears in Table A.5.

Using ν = 1 and Γ(1/2) = √π in the t pdf, we obtain the pdf for the t distribution with one degree of freedom as 1/[π(1 + t²)]. It has another name, the Cauchy distribution. This distribution has such fat tails that the mean does not exist (Exercise 55).

The mean and variance of a t variable can be obtained directly from the pdf, but there is another route, through the definition in terms of independent standard normal and chi-squared variables, T = Z/√(X/ν).
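The Table A.7 entry quoted above, and the Cauchy special case, can both be checked with scipy.stats (a sketch assuming SciPy is available; `t.sf` is the upper-tail area):

```python
import math
from scipy.stats import t

# Table A.7: P(T > 3.0) = .048 for 2 df, i.e., t_{.048,2} = 3.0.
tail = t.sf(3.0, df=2)
print(round(tail, 3))  # 0.048

# For 1 df the t pdf coincides with the Cauchy pdf 1/[pi(1 + t^2)].
x = 1.3
assert abs(t.pdf(x, df=1) - 1 / (math.pi * (1 + x**2))) < 1e-12
```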
Recall from Section 5.2 that E(UV) = E(U)E(V) if U and V are independent. Thus E(T) = E(Z)·E(1/√(X/ν)). Of course, E(Z) = 0, so E(T) = 0 if the second expected value on the right exists. Let's compute it from a more general expectation, E(Xᵏ) for any k if X is chi-squared:

E(Xᵏ) = ∫₀^∞ xᵏ·[1/(2^{ν/2} Γ(ν/2))] x^{ν/2 − 1} e^{−x/2} dx
      = [2^{k+ν/2} Γ(k + ν/2)/(2^{ν/2} Γ(ν/2))] ∫₀^∞ [1/(2^{k+ν/2} Γ(k + ν/2))] x^{(k+ν/2) − 1} e^{−x/2} dx

The second integrand is a gamma pdf (with α = k + ν/2 and β = 2), so its integral is 1 if k + ν/2 > 0, and otherwise the integral does not exist. Therefore,

E(Xᵏ) = 2ᵏ Γ(k + ν/2)/Γ(ν/2)    (6.12)

if k + ν/2 > 0, and otherwise the expectation does not exist. The requirement k + ν/2 > 0 translates, when k = −1/2 [recall that we need the existence of E(1/√(X/ν))], into ν > 1. The mean of a t variable fails to exist if ν = 1, and the mean is indeed 0 otherwise.

For the variance of T we need E(T²) = E(Z²)·E[1/(X/ν)] = 1·ν·E(1/X). Using k = −1 in Equation (6.12), we obtain, with the help of Γ(α + 1) = αΓ(α),

E(X⁻¹) = 2⁻¹ Γ(−1 + ν/2)/Γ(ν/2) = 2⁻¹ Γ(ν/2 − 1)/[(ν/2 − 1)Γ(ν/2 − 1)] = 1/(ν − 2)    if ν > 2

and therefore V(T) = ν/(ν − 2). For 1 or 2 degrees of freedom the variance does not exist. The variance always exceeds 1, and for large df the variance is close to 1. This is appropriate because any t curve spreads out more than the z curve, but for large df the t curve approaches the z curve.

The F Distribution

Let X₁ and X₂ be independent chi-squared random variables with ν₁ and ν₂ degrees of freedom, respectively. The F distribution with ν₁ numerator degrees of freedom and ν₂ denominator degrees of freedom is defined to be the distribution of the ratio

F = (X₁/ν₁)/(X₂/ν₂)    (6.13)

Sometimes the degrees of freedom will be indicated with subscripts, F_{ν₁,ν₂}. Suppose that we have a random sample of m observations from the normal population N(μ₁, σ₁²) and an independent random sample of n observations from a second normal population N(μ₂, σ₂²).
Then for the sample variance from the first group we know that (m − 1)S₁²/σ₁² is χ²_{m−1}, and similarly for the second group (n − 1)S₂²/σ₂² is χ²_{n−1}. Thus, according to Equation (6.13),

$$F_{m-1,\,n-1}=\frac{[(m-1)S_1^2/\sigma_1^2]/(m-1)}{[(n-1)S_2^2/\sigma_2^2]/(n-1)}=\frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \qquad (6.14)$$

The F distribution, via Equation (6.14), will be used in Chapter 10 to compare the variances from two independent groups. Also, for several independent groups, in Chapter 11 we will use the F distribution to see if the differences among sample means are bigger than would be expected by chance.

What happens to F if the degrees of freedom are large? Suppose that ν₂ is large. Then, using the law of large numbers, we can see (Exercise 48) that the denominator of Equation (6.13) will be close to 1, and approximately the F will be just the numerator chi-squared over its degrees of freedom. Similarly, if both ν₁ and ν₂ are large, then both the numerator and denominator will be close to 1, and the F ratio therefore will be close to 1.

The pdf of a random variable having an F distribution is

$$f(x)=\begin{cases}\dfrac{\Gamma[(\nu_1+\nu_2)/2]}{\Gamma(\nu_1/2)\,\Gamma(\nu_2/2)}\left(\dfrac{\nu_1}{\nu_2}\right)^{\nu_1/2}\dfrac{x^{\nu_1/2-1}}{(1+\nu_1 x/\nu_2)^{(\nu_1+\nu_2)/2}} & x>0\\[1ex] 0 & x\le 0\end{cases}$$

Its derivation (Exercise 60) is similar to the derivation of the t pdf. Figure 6.18 shows F density curves for several choices of ν₁ and ν₂ = 10. It should be clear by comparison with Figure 6.15 that the numerator degrees of freedom determine a lot about the shapes in Figure 6.18. For example, with ν₁ = 1 the pdf is unbounded at x = 0, just as in Figure 6.15 with ν = 1. For ν₁ = 2 the pdf is positive at x = 0, just as in Figure 6.15 with ν = 2. For ν₁ > 2 the pdf is 0 at x = 0, just as in Figure 6.15 with ν > 2. However, the F pdf has a fatter tail, especially for low values of ν₂. This should be evident because the F pdf does not decrease to 0 exponentially as the chi-squared pdf does.
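The F pdf above can be checked numerically. The following sketch (not from the text; a simple trapezoid rule with only the standard library) verifies that the pdf integrates to 1 and that the mean comes out near ν₂/(ν₂ − 2), the value asked for in Exercise 57.

```python
import math

def f_pdf(x, v1, v2):
    """pdf of the F distribution with v1 numerator and v2 denominator df."""
    if x <= 0:
        return 0.0
    c = (math.gamma((v1 + v2) / 2) / (math.gamma(v1 / 2) * math.gamma(v2 / 2))
         * (v1 / v2) ** (v1 / 2))
    return c * x ** (v1 / 2 - 1) / (1 + v1 * x / v2) ** ((v1 + v2) / 2)

def trapz(g, a, b, n=100000):
    """Composite trapezoid rule for integrating g over [a, b]."""
    h = (b - a) / n
    return h * (sum(g(a + i * h) for i in range(1, n)) + (g(a) + g(b)) / 2)

v1, v2 = 5, 10
total = trapz(lambda x: f_pdf(x, v1, v2), 0.0, 200.0)
mean = trapz(lambda x: x * f_pdf(x, v1, v2), 0.0, 200.0)
print(total)               # close to 1
print(mean, v2 / (v2 - 2)) # close to 1.25
```

Truncating the integral at 200 is safe here because for ν₂ = 10 the tail of the pdf decays like a power of x fast enough that the omitted mass is negligible.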
[Figure 6.18 F density curves (ν₁ = 1, 2, and 5, each with ν₂ = 10)]

Except for a few special choices of degrees of freedom, integration of the F pdf is difficult, so F critical values (values that capture specified F distribution tail areas) are given in Table A.8. For example, the value in the column labeled 1, the row labeled 2, and tail area .100 is 8.53, meaning that for one numerator degree of freedom and two denominator degrees of freedom P(F > 8.53) = .100. We can express this as F_{.100,1,2} = 8.53, where F_{α,ν₁,ν₂} = c means that P(F_{ν₁,ν₂} > c) = α.

What about lower-tail areas? Since 1/F = (X₂/ν₂)/(X₁/ν₁), the reciprocal of an F variable also has an F distribution, but with the degrees of freedom reversed, and this can be used to obtain lower-tail critical values. For example, .100 = P(F_{1,2} > 8.53) = P(1/F_{1,2} < 1/8.53) = P(F_{2,1} < .117). This can be written as F_{.900,2,1} = .117 because .900 = P(F_{2,1} > .117). In general we have

$$F_{1-\alpha,\,\nu_1,\,\nu_2}=\frac{1}{F_{\alpha,\,\nu_2,\,\nu_1}} \qquad (6.15)$$

Recalling that T = Z/√(X/ν), it follows that the square of this t random variable is an F random variable with 1 numerator degree of freedom and ν denominator degrees of freedom, T² = F_{1,ν}. We can use this to obtain tail areas. For example, .100 = P(F_{1,2} > 8.53) = P(T₂² > 8.53) = P(|T₂| > √8.53) = 2P(T₂ > 2.92), and therefore .05 = P(T₂ > 2.92). We previously determined that .048 = P(T₂ > 3.0), which is very nearly the same statement. In terms of our notation, t_{.05,2} = √F_{.10,1,2}, and we can similarly show that in general t_{α,ν} = √F_{2α,1,ν} if 0 < α < .5. The mean of an F variable is ν₂/(ν₂ − 2) if ν₂ > 2, and it does not exist if ν₂ ≤ 2 (Exercise 57).

Summary of Relationships

Is it clear how the standard normal, chi-squared, t, and F distributions are related? Starting with a sequence of n independent standard normal random variables (let's use five, Z₁, Z₂, ..., Z₅, to be specific), can we construct random variables having the other distributions?
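For ν = 2 the t cdf has a closed form, P(T₂ ≤ t) = 1/2 + t/(2√(2 + t²)), a standard antiderivative of the 2-df t pdf that is not derived in the text. The sketch below (not from the text) uses it to confirm the two tail statements just made: P(T₂ > 3.0) ≈ .048 and P(F_{1,2} > 8.53) = 2P(T₂ > √8.53) ≈ .100.

```python
import math

def t2_upper_tail(t):
    """P(T > t) for the t distribution with 2 df (closed form)."""
    return 0.5 - t / (2 * math.sqrt(2 + t * t))

# Matches the Table A.7 entry t_.048,2 = 3.0
print(t2_upper_tail(3.0))

# P(F_1,2 > 8.53) = P(T_2^2 > 8.53) = 2 P(T_2 > sqrt(8.53)) ~ .100
print(2 * t2_upper_tail(math.sqrt(8.53)))
```

The second line is exactly the relation t_{.05,2} = √F_{.10,1,2}: the square root of the F critical value 8.53 is 2.92, the .05 t critical value.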
For example, the chi-squared distribution with n degrees of freedom is the sum of n independent standard normal squares, so Z₁² + Z₂² + Z₃² has the chi-squared distribution with 3 degrees of freedom. Recall that the ratio of a standard normal rv to the square root of an independent chi-squared rv, divided by its df ν, has the t distribution with ν df. This implies that Z₄/√[(Z₁² + Z₂² + Z₃²)/3] has the t distribution with 3 degrees of freedom. Why would it be wrong to use Z₁ in place of Z₄? Building a random variable with the F distribution requires two independent chi-squared rv's. We already have Z₁² + Z₂² + Z₃² with 3 df, and similarly we obtain Z₄² + Z₅², chi-squared with 2 df. Dividing each chi-squared rv by its df and taking the ratio gives an F_{2,3} random variable, [(Z₄² + Z₅²)/2]/[(Z₁² + Z₂² + Z₃²)/3].

Exercises Section 6.4 (46–66)

46. a. Use Table A.6 to find χ²_{.45,5}.
b. Verify the answer to (a) by integrating the pdf.
c. Verify the answer to (a) by using software (e.g., TI-89 calculator or MINITAB).

47. Why should χ²_ν be approximately normal for large ν? What theorem applies here, and why?

48. Apply the Law of Large Numbers to show that χ²_ν/ν approaches 1 as ν becomes large.

49. Show that the χ² pdf has a maximum at ν − 2 if ν > 2.

50. Knowing that (n − 1)S²/σ² ~ χ²_{n−1} for a normal random sample,
a. Show that E(S²) = σ².
b. Show that V(S²) = 2σ⁴/(n − 1). What happens to this variance as n gets large?
c. Apply Equation (6.12) to show that
$$E(S)=\frac{\sqrt{2}\,\Gamma(n/2)}{\sqrt{n-1}\,\Gamma[(n-1)/2]}\,\sigma$$
Then show that E(S) = σ√(2/π) if n = 2. Is it true that E(S) = σ for normal data?

51. Use moment generating functions to show that if X₃ = X₁ + X₂ with X₁ ~ χ²_{ν₁}, X₃ ~ χ²_{ν₃}, ν₃ > ν₁, and X₁ and X₂ are independent, then X₂ ~ χ²_{ν₃−ν₁}.

52. a. Use Table A.7 to find t_{.102,1}.
b. Verify the answer to part (a) by integrating the pdf.
c. Verify the answer to part (a) using software (e.g., TI-89 calculator or MINITAB).

53. a. Use Table A.7 to find t_{.005,10}.
b. Use Table A.8 to find F_{.01,1,10} and relate this to the value you obtained in part (a).
c. Verify the answer to part (b) using software (e.g., TI-89 calculator or MINITAB).

54. Show that the t pdf approaches the standard normal pdf for large df values. [Hint: Use (1 + a/x)^x → e^a and Γ(x + 1/2)/[√x Γ(x)] → 1 as x → ∞.]

55. Show directly from the pdf that the mean of a t₁ (Cauchy) random variable does not exist.

56. Show that the ratio of two independent standard normal random variables has the t₁ distribution. Apply the method used to derive the t pdf in this section. [Hint: Split the domain of the denominator into positive and negative parts.]

57. Let X have an F distribution with ν₁ numerator df and ν₂ denominator df.
a. Determine the mean value of X.
b. Determine the variance of X.

58. Is it true that E(F_{ν₁,ν₂}) = E(X₁/ν₁)/E(X₂/ν₂)? Explain.

59. Show that F_{p,ν₁,ν₂} = 1/F_{1−p,ν₂,ν₁}.

60. Derive the F pdf by applying the method used to derive the t pdf.

61. a. Use Table A.8 to find F_{.1,2,4}.
b. Verify the answer to part (a) using the pdf.
c. Verify the answer to part (a) using software (e.g., TI-89 calculator or MINITAB).

62. a. Use Table A.7 to find t_{.25,10}.
b. Use (a) to find the median of F_{1,10}.
c. Verify the answer to part (b) using software (e.g., TI-89 calculator or MINITAB).

63. Show that if X has a gamma distribution and c (> 0) is a constant, then cX has a gamma distribution. In particular, if X is chi-squared distributed, then cX has a gamma distribution.

64. Let Z₁, Z₂, ..., Z₁₀ be independent standard normal. Use these to construct
a. A χ²₁₀ random variable.
b. A t₅ random variable.
c. An F₄,₆ random variable.
d. A Cauchy random variable.
e. An exponential random variable with mean 2.
f. An exponential random variable with mean 1.
g. A gamma random variable with mean 1 and variance 4. [Hint: Use part (a) and Exercise 63.]

65. a. Use Exercise 47 to approximate P(χ²₅₀ > 70), and compare the result with the answer given by software, .03237.
b. Use the formula given at the bottom of Table A.6, χ²_{α,ν} ≈ ν[1 − 2/(9ν) + z_α√(2/(9ν))]³, to approximate P(χ²₅₀ > 70), and compare with part (a).

66. The difference of two independent normal variables itself has a normal distribution. Is it true that the difference between two independent chi-squared variables has a chi-squared distribution? Explain.

Supplementary Exercises (67–81)

67. In cost estimation, the total cost of a project is the sum of component task costs. Each of these costs is a random variable with a probability distribution. It is customary to obtain information about the total cost distribution by adding together characteristics of the individual component cost distributions; this is called the "roll-up" procedure. For example, E(X₁ + ··· + Xₙ) = E(X₁) + ··· + E(Xₙ), so the roll-up procedure is valid for mean cost. Suppose that there are two component tasks and that X₁ and X₂ are independent, normally distributed random variables. Is the roll-up procedure valid for the 75th percentile? That is, is the 75th percentile of the distribution of X₁ + X₂ the same as the sum of the 75th percentiles of the two individual distributions? If not, what is the relationship between the percentile of the sum and the sum of percentiles? For what percentiles is the roll-up procedure valid in this case?

68. Suppose that for a certain individual, calorie intake at breakfast is a random variable with expected value 500 and standard deviation 50, calorie intake at lunch is random with expected value 900 and standard deviation 100, and calorie intake at dinner is a random variable with expected value 2000 and standard deviation 180. Assuming that intakes at different meals are independent of each other, what is the probability that average calorie intake per day over the next (365-day) year is at most 3500? [Hint: Let Xᵢ, Yᵢ, and Zᵢ denote the three calorie intakes on day i; then the total intake is Σ(Xᵢ + Yᵢ + Zᵢ).]

69. The mean weight of luggage checked by a randomly selected tourist-class passenger flying between two cities on a certain airline is 40 lb, and the standard deviation is 10 lb. The mean and standard deviation for a business-class passenger are 30 lb and 6 lb, respectively.
a. If there are 12 business-class passengers and 50 tourist-class passengers on a particular flight, what are the expected value of total luggage weight and the standard deviation of total luggage weight?
b. If individual luggage weights are independent, normally distributed rv's, what is the probability that total luggage weight is at most 2500 lb?

70. If X₁, X₂, ..., Xₙ are independent rv's, each with mean μ and variance σ², we have seen that E(X₁ + X₂ + ··· + Xₙ) = nμ and V(X₁ + X₂ + ··· + Xₙ) = nσ². In some applications, the number of Xᵢ's under consideration is not a fixed number n but instead an rv N. For example, let N be the number of components of a certain type brought into a repair shop on a particular day and let Xᵢ represent the repair time for the ith component. Then the total repair time is S_N = X₁ + X₂ + ··· + X_N, the sum of a random number of random variables. Suppose that N is independent of the Xᵢ's.
a. Obtain an expression for E(S_N) in terms of μ and E(N). [Hint: Refer back to the theorem involving the conditional mean and variance in Section 5.3.]
b. Obtain an expression for V(S_N) in terms of μ, σ², E(N), and V(N).
c. Customers submit orders for stock purchases, where the number of orders submitted during a particular 4-h period is a Poisson rv and the amount purchased by any particular customer (in 1000s of dollars) has an exponential distribution with mean 30. What is the expected total amount ($) purchased during a particular 4-h period, and what is the standard deviation of this total amount?

71. Suppose the proportion of rural voters in a certain state who favor a particular gubernatorial candidate is .45 and the proportion of suburban and urban voters favoring the candidate is .60. If a sample of 200 rural voters and 300 urban and suburban voters is obtained, what is the approximate probability that at least 250 of these voters favor this candidate?

72. Let μ denote the true pH of a chemical compound. A sequence of n independent sample pH determinations will be made. Suppose each sample pH is a random variable with expected value μ and standard deviation .1. How many determinations are required if we wish the probability that the sample average is within .02 of the true pH to be at least .95? What theorem justifies your probability calculation?

73. The amount of soft drink that Ann consumes on any given day is independent of consumption on any other day and is normally distributed with μ = 13 oz and σ = 2. If she currently has two six-packs of 16-oz bottles, what is the probability that she still has some soft drink left at the end of 2 weeks (14 days)? Why should we worry about the validity of the independence assumption here?

74. A large university has 500 single employees who are covered by its dental plan. Suppose the number of claims filed during the next year by such an employee is a Poisson rv with mean value 2.3. Assuming that the number of claims filed by any such employee is independent of the number filed by any other employee, what is the approximate probability that the total number of claims filed is at least 1200?

75. A student has a class that is supposed to end at 9:00 a.m. and another that is supposed to begin at 9:10 a.m. Suppose the actual ending time of the 9 a.m. class is a normally distributed rv X₁ with mean 9:02 and standard deviation 1.5 min, and that the starting time of the next class is also a normally distributed rv X₂ with mean 9:10 and standard deviation 1 min. Suppose also that the time necessary to get from one classroom to the other is a normally distributed rv X₃ with mean 6 min and standard deviation 1 min. What is the probability that the student makes it to the second class before the lecture starts? (Assume independence of X₁, X₂, and X₃, which is reasonable if the student pays no attention to the finishing time of the first class.)

76. a. Use the general formula for the variance of a linear combination to write an expression for V(aX + Y). Then let a = σ_Y/σ_X, and show that ρ ≥ −1. [Hint: Variance is always ≥ 0, and Cov(X, Y) = σ_X σ_Y ρ.]
b. By considering V(aX − Y), conclude that ρ ≤ 1.
c. Use the fact that V(W) = 0 only if W is a constant to show that ρ = 1 only if Y = aX + b.

77. A rock specimen from a particular area is randomly selected and weighed two different times. Let W denote the actual weight and X₁ and X₂ the two measured weights. Then X₁ = W + E₁ and X₂ = W + E₂, where E₁ and E₂ are the two measurement errors. Suppose that the Eᵢ's are independent of each other and of W and that V(E₁) = V(E₂) = σ_E².
a. Express ρ, the correlation coefficient between the two measured weights X₁ and X₂, in terms of σ_W², the variance of actual weight, and σ_X², the variance of measured weight.
b. Compute ρ when σ_W = 1 kg and σ_E = .01 kg.

78. Let A denote the percentage of one constituent in a randomly selected rock specimen, and let B denote the percentage of a second constituent in that same specimen. Suppose D and E are measurement errors in determining the values of A and B so that the measured values are X = A + D and Y = B + E, respectively. Assume that measurement errors are independent of each other and of actual values.
a. Show that
$$\mathrm{Corr}(X,Y)=\mathrm{Corr}(A,B)\cdot\sqrt{\mathrm{Corr}(X_1,X_2)}\cdot\sqrt{\mathrm{Corr}(Y_1,Y_2)}$$
where X₁ and X₂ are replicate measurements on the value of A, and Y₁, Y₂ are defined analogously with respect to B. What effect does the presence of measurement error have on the correlation?
b. What is the maximum value of Corr(X, Y) when Corr(X₁, X₂) = .8100 and Corr(Y₁, Y₂) = .9025? Is this disturbing?

79. Let X₁, ..., Xₙ be independent rv's with mean values μ₁, ..., μₙ and variances σ₁², ..., σₙ². Consider a function h(x₁, ..., xₙ), and use it to define a new rv Y = h(X₁, ..., Xₙ). Under rather general conditions on the h function, if the σᵢ's are all small relative to the corresponding μᵢ's, it can be shown that E(Y) ≈ h(μ₁, ..., μₙ) and

$$V(Y)\approx\left(\frac{\partial h}{\partial x_1}\right)^2\sigma_1^2+\cdots+\left(\frac{\partial h}{\partial x_n}\right)^2\sigma_n^2$$

where each partial derivative is evaluated at (x₁, ..., xₙ) = (μ₁, ..., μₙ). Suppose three resistors with resistances X₁, X₂, X₃ are connected in parallel across a battery with voltage X₄. Then by Ohm's law, the current is

$$Y=X_4\left(\frac{1}{X_1}+\frac{1}{X_2}+\frac{1}{X_3}\right)$$

Let μ₁ = 10 ohms, σ₁ = 1.0 ohm, μ₂ = 15 ohms, σ₂ = 1.0 ohm, μ₃ = 20 ohms, σ₃ = 1.5 ohms, μ₄ = 120 V, σ₄ = 4.0 V. Calculate the approximate expected value and standard deviation of the current (suggested by "Random Samplings," CHEMTECH, 1984: 696–697).

80. A more accurate approximation to E[h(X₁, ..., Xₙ)] in Exercise 79 is

$$h(\mu_1,\ldots,\mu_n)+\frac{\sigma_1^2}{2}\left(\frac{\partial^2 h}{\partial x_1^2}\right)+\cdots+\frac{\sigma_n^2}{2}\left(\frac{\partial^2 h}{\partial x_n^2}\right)$$

Compute this for Y = h(X₁, X₂, X₃, X₄) given in Exercise 79, and compare it to the leading term h(μ₁, ..., μₙ).

81. Explain how you would use a statistical software package capable of generating independent standard normal observations to obtain observed values of (X, Y), where X and Y are bivariate normal with means 100 and 50, standard deviations 5 and 2, and correlation .5. [Hint: Example 6.16.]

Bibliography

Larsen, Richard, and Morris Marx, An Introduction to Mathematical Statistics and Its Applications (4th ed.), Prentice Hall, Englewood Cliffs, NJ, 2005. More limited coverage than in the book by Olkin et al., but well written and readable.

Olkin, Ingram, Cyrus Derman, and Leon Gleser, Probability Models and Applications (2nd ed.), Macmillan, New York, 1994. Contains a careful and comprehensive exposition of limit theorems.

Appendix: Proof of the Central Limit Theorem

First, here is a restatement of the theorem. Let X₁, X₂, ..., Xₙ be a random sample from a distribution with mean μ and variance σ².
Then, if Z is a standard normal random variable,

$$\lim_{n\to\infty}P\!\left(\frac{\bar X-\mu}{\sigma/\sqrt{n}}\le z\right)=P(Z\le z)$$

The theorem says that the distribution of the standardized X̄ approaches the standard normal distribution. Our proof is only for the special case in which the moment generating function exists, which implies also that all its derivatives exist and that they are continuous. We will show that the moment generating function of the standardized X̄ approaches the moment generating function of the standard normal distribution. However, convergence of the moment generating function does not by itself imply the desired convergence of the distribution. This requires a theorem, which we will not prove, showing that convergence of the moment generating function implies the convergence of the distribution.

The standardized X̄ can be written as

$$\frac{\bar X-\mu}{\sigma/\sqrt{n}}=\frac{\{[(X_1-\mu)/\sigma+(X_2-\mu)/\sigma+\cdots+(X_n-\mu)/\sigma]/n\}-0}{1/\sqrt{n}}$$

The mean and standard deviation for the first ratio come from the first proposition of Section 6.2, and the second ratio is algebraically equivalent to the first. It says that, if we define W to be the standardized X, so Wᵢ = (Xᵢ − μ)/σ, i = 1, 2, ..., n, then the standardized X̄ can be written as the standardized W̄,

$$\frac{\bar X-\mu}{\sigma/\sqrt{n}}=\frac{\bar W-0}{1/\sqrt{n}}$$

This allows a simplification of the proof because we can work with the simpler variable W, which has mean 0 and variance 1. We need to obtain the moment generating function of

$$Y=\frac{\bar W-0}{1/\sqrt{n}}=\sqrt{n}\,\bar W=\frac{W_1+W_2+\cdots+W_n}{\sqrt{n}}$$

from the moment generating function M(t) of W. With the help of the Section 6.3 proposition on moment generating functions of linear combinations of independent random variables, we get M_Y(t) = [M(t/√n)]ⁿ. We want to show that this converges to the moment generating function of a standard normal random variable, M_Z(t) = e^{t²/2}. It is easier to take the logarithm of both sides and show instead that ln[M_Y(t)] = n ln[M(t/√n)] → t²/2.
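The limit n ln[M(t/√n)] → t²/2 can be watched happening numerically before we prove it. The sketch below (not from the text) takes a specific W with mean 0 and variance 1, namely W uniform on (−√3, √3), whose mgf is M(t) = sinh(√3 t)/(√3 t), and evaluates n ln[M(t/√n)] for increasing n.

```python
import math

def mgf_unif(t):
    """mgf of W ~ Uniform(-sqrt(3), sqrt(3)), which has mean 0 and variance 1."""
    a = math.sqrt(3) * t
    return math.sinh(a) / a if a != 0 else 1.0

t = 1.0
for n in (10, 1000, 100000):
    # n * ln M(t / sqrt(n)) should approach t^2 / 2 = 0.5
    print(n, n * math.log(mgf_unif(t / math.sqrt(n))))
```

The printed values settle on 0.5 = t²/2, the log-mgf of the standard normal at t = 1, exactly as the argument that follows establishes in general.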
This is equivalent because the logarithm and its inverse are continuous functions. The limit can be obtained from two applications of L'Hôpital's rule if we set x = 1/√n,

$$\ln[M_Y(t)]=n\ln[M(t/\sqrt{n})]=\frac{\ln[M(tx)]}{x^2}$$

Both the numerator and the denominator approach 0 as n gets large and x gets small (recall that M(0) = 1 and M(t) is continuous), so L'Hôpital's rule is applicable. Thus, differentiating the numerator and denominator with respect to x,

$$\lim_{x\to 0}\frac{\ln[M(tx)]}{x^2}=\lim_{x\to 0}\frac{M'(tx)\,t/M(tx)}{2x}=\lim_{x\to 0}\frac{M'(tx)\,t}{2x\,M(tx)}$$

Recall that M(0) = 1, M′(0) = E(W) = 0, and M(t) and its derivative M′(t) are continuous, so both the numerator and denominator of the limit on the right approach 0. Thus we can use L'Hôpital's rule again:

$$\lim_{x\to 0}\frac{M'(tx)\,t}{2x\,M(tx)}=\lim_{x\to 0}\frac{M''(tx)\,t^2}{2M(tx)+2x\,M'(tx)\,t}=\frac{t^2\cdot 1}{2(1)+2(0)(0)t}=\frac{t^2}{2}$$

In evaluating the limit we have used the continuity of M(t) and its derivatives and M(0) = 1, M′(0) = E(W) = 0, M″(0) = E(W²) = 1. We conclude that the mgf converges to the mgf of a standard normal random variable.

7 Point Estimation

Introduction

Given a parameter of interest, such as a population mean μ or population proportion p, the objective of point estimation is to use a sample to compute a number that represents in some sense a good guess for the true value of the parameter. The resulting number is called a point estimate. In Section 7.1, we present some general concepts of point estimation. In Section 7.2, we describe and illustrate two important methods for obtaining point estimates: the method of moments and the method of maximum likelihood. Obtaining a point estimate entails calculating the value of a statistic such as the sample mean X̄ or sample standard deviation S. We should therefore be concerned that the chosen statistic contains all the relevant information about the parameter of interest. The idea of no information loss is made precise by the concept of sufficiency, which is developed in Section 7.3.
Finally, Section 7.4 further explores the meaning of efficient estimation and properties of maximum likelihood.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_7, © Springer Science+Business Media, LLC 2012

7.1 General Concepts and Criteria

Statistical inference is frequently directed toward drawing some type of conclusion about one or more parameters (population characteristics). To do so requires that an investigator obtain sample data from each of the populations under study. Conclusions can then be based on the computed values of various sample quantities. For example, let μ (a parameter) denote the average duration of anesthesia for a short-acting anesthetic. A random sample of n = 10 patients might be chosen, and the duration for each one determined, resulting in observed durations x₁, x₂, ..., x₁₀. The sample mean duration x̄ could then be used to draw a conclusion about the value of μ. Similarly, if σ² is the variance of the duration distribution (population variance, another parameter), the value of the sample variance s² can be used to infer something about σ².

When discussing general concepts and methods of inference, it is convenient to have a generic symbol for the parameter of interest. We will use the Greek letter θ for this purpose. The objective of point estimation is to select a single number, based on sample data, that represents a sensible value for θ. Suppose, for example, that the parameter of interest is μ, the true average lifetime of batteries of a certain type. A random sample of n = 3 batteries might yield observed lifetimes (hours) x₁ = 5.0, x₂ = 6.4, x₃ = 5.9. The computed value of the sample mean lifetime is x̄ = 5.77, and it is reasonable to regard 5.77 as a very plausible value of μ, our "best guess" for the value of μ based on the available sample information.
Suppose we want to estimate a parameter of a single population (e.g., μ or σ) based on a random sample of size n. Recall from the previous chapter that before data is available, the sample observations must be considered random variables (rv's) X₁, X₂, ..., Xₙ. It follows that any function of the Xᵢ's (that is, any statistic), such as the sample mean X̄ or sample standard deviation S, is also a random variable. The same is true if the available data consists of more than one sample. For example, we can represent the durations of anesthesia of m patients on anesthetic A and n patients on anesthetic B by X₁, ..., Xₘ and Y₁, ..., Yₙ, respectively. The difference between the two sample mean durations is X̄ − Ȳ, the natural statistic for making inferences about μ₁ − μ₂, the difference between the population mean durations.

DEFINITION A point estimate of a parameter θ is a single number that can be regarded as a sensible value for θ. A point estimate is obtained by selecting a suitable statistic and computing its value from the given sample data. The selected statistic is called the point estimator of θ.

In the battery example just given, the estimator used to obtain the point estimate of μ was X̄, and the point estimate of μ was 5.77. If the three observed lifetimes had instead been x₁ = 5.6, x₂ = 4.5, and x₃ = 6.1, use of the estimator X̄ would have resulted in the estimate x̄ = (5.6 + 4.5 + 6.1)/3 = 5.40. The symbol θ̂ ("theta hat") is customarily used to denote both the estimator of θ and the point estimate resulting from a given sample.¹ Thus μ̂ = X̄ is read as "the point estimator of μ is the sample mean X̄." The statement "the point estimate of μ is 5.77" can be written concisely as μ̂ = 5.77. Notice that in writing θ̂ = 72.5, there is no indication of how this point estimate was obtained (what statistic was used). It is recommended that both the estimator and the resulting estimate be reported.
Example 7.1 An automobile manufacturer has developed a new type of bumper, which is supposed to absorb impacts with less damage than previous bumpers. The manufacturer has used this bumper in a sequence of 25 controlled crashes against a wall, each at 10 mph, using one of its compact car models. Let X = the number of crashes that result in no visible damage to the automobile. The parameter to be estimated is p = the proportion of all such crashes that result in no damage [alternatively, p = P(no damage in a single crash)]. If X is observed to be x = 15, the most reasonable estimator and estimate are

estimator p̂ = X/n,  estimate = x/n = 15/25 = .60

If for each parameter of interest there were only one reasonable point estimator, there would not be much to point estimation. In most problems, though, there will be more than one reasonable estimator.

Example 7.2 Reconsider the accompanying 20 observations on dielectric breakdown voltage for pieces of epoxy resin introduced in Example 4.36 (Section 4.6).

24.46 25.61 26.25 26.42 26.66 27.15 27.31 27.54 27.74 27.94
27.98 28.04 28.28 28.49 28.50 28.87 29.11 29.13 29.50 30.88

The pattern in the normal probability plot given there is quite straight, so we now assume that the distribution of breakdown voltage is normal with mean value μ. Because normal distributions are symmetric, μ is also the median of the distribution. The given observations are then assumed to be the result of a random sample X₁, X₂, ..., X₂₀ from this normal distribution. Consider the following estimators and resulting estimates for μ:

a. Estimator = X̄, estimate = x̄ = Σxᵢ/n = 555.86/20 = 27.793
b. Estimator = X̃ (the sample median), estimate = x̃ = (27.94 + 27.98)/2 = 27.960
c. Estimator = Xₑ = [min(Xᵢ) + max(Xᵢ)]/2 = the midrange (the average of the two extreme observations), estimate = [min(xᵢ) + max(xᵢ)]/2 = (24.46 + 30.88)/2 = 27.670
d. Estimator = X̄_tr(10), the 10% trimmed mean (discard the smallest and largest 10% of the sample and then average),
estimate = x̄_tr(10) = (555.86 − 24.46 − 25.61 − 29.50 − 30.88)/16 = 27.838

¹ Following earlier notation, we could use Θ̂ (an uppercase theta) for the estimator, but this is cumbersome to write.

Each one of the estimators (a)-(d) uses a different measure of the center of the sample to estimate μ. Which of the estimates is closest to the true value? We cannot answer this without knowing the true value. A question that can be answered is, "Which estimator, when used on other samples of Xᵢ's, will tend to produce estimates closest to the true value?" We will shortly consider this type of question.

Example 7.3 Studies have shown that a calorie-restricted diet can prolong life. Of course, controlled studies are much easier to do with lab animals. Here is a random sample of eight lifetimes (days) taken from a population of 106 rats that were fed a restricted diet (from "Tests and Confidence Sets for Comparing Two Mean Residual Life Functions," Biometrics, 1988: 103–115):

716 1144 1017 1138 389 1221 530 958

Let X₁, ..., X₈ denote the lifetimes as random variables, before the observed values are available. We want to estimate the population variance σ². A natural estimator is the sample variance:

$$\hat\sigma^2=S^2=\frac{\sum(X_i-\bar X)^2}{n-1}=\frac{\sum X_i^2-\left(\sum X_i\right)^2/n}{n-1}$$

The corresponding estimate is

$$\hat\sigma^2=s^2=\frac{6{,}991{,}551-(7113)^2/8}{7}=\frac{667{,}205}{7}=95{,}315$$

The estimate of σ would then be σ̂ = s = √95,315 = 309.

An alternative estimator would result from using divisor n instead of n − 1 (i.e., the average squared deviation):

$$\hat\sigma^2=\frac{\sum(X_i-\bar X)^2}{n},\qquad \text{estimate}=\frac{667{,}205}{8}=83{,}401$$

We will indicate shortly why many statisticians prefer S² to the estimator with divisor n.

In the best of all possible worlds, we could find an estimator θ̂ for which θ̂ = θ always. However, θ̂ is a function of the sample Xᵢ's, so it is a random variable.
For some samples, θ̂ will yield a value larger than θ, whereas for other samples θ̂ will underestimate θ. If we write

θ̂ = θ + error of estimation

then an accurate estimator would be one resulting in small estimation errors, so that estimated values will be near the true value.

Mean Squared Error

A popular way to quantify the idea of θ̂ being close to θ is to consider the squared error (θ̂ − θ)². Another possibility is the absolute error |θ̂ − θ|, but this is more difficult to work with mathematically. For some samples, θ̂ will be quite close to θ and the resulting squared error will be very small, whereas the squared error will be quite large whenever a sample produces an estimate θ̂ that is far from the target. An omnibus measure of accuracy is the mean squared error (expected squared error), which entails averaging the squared error over all possible samples and resulting estimates.

DEFINITION The mean squared error of an estimator θ̂ is E[(θ̂ − θ)²].

A useful result when evaluating mean squared error is a consequence of the following rearrangement of the shortcut for evaluating a variance V(Y):

$$V(Y)=E(Y^2)-[E(Y)]^2 \;\Longrightarrow\; E(Y^2)=V(Y)+[E(Y)]^2$$

That is, the expected value of the square of Y is the variance plus the square of the mean value. Letting Y = θ̂ − θ, the estimation error, the left-hand side is just the mean squared error. The first term on the right-hand side is V(θ̂ − θ) = V(θ̂), since θ is just a constant. The second term involves E(θ̂ − θ) = E(θ̂) − θ, the difference between the expected value of the estimator and the value of the parameter. This difference is called the bias of the estimator. Thus

MSE = V(θ̂) + [E(θ̂) − θ]² = variance of estimator + (bias)²

Example 7.4 (Example 7.1 continued) Consider once again estimating a population proportion of "successes" p. The natural estimator of p is the sample proportion of successes p̂ = X/n. The number of successes X in the sample has a binomial distribution with parameters n and p, so E(X) = np and V(X) = np(1 − p). The expected value of the estimator is

$$E(\hat p)=E\!\left(\frac{X}{n}\right)=\frac{1}{n}E(X)=\frac{1}{n}\,np=p$$

Thus the bias of p̂ is p − p = 0, giving the mean squared error as

$$E[(\hat p-p)^2]=V(\hat p)=V\!\left(\frac{X}{n}\right)=\frac{1}{n^2}V(X)=\frac{p(1-p)}{n}$$

Now consider the alternative estimator p̃ = (X + 2)/(n + 4). That is, add two successes and two failures to the sample and then calculate the sample proportion of successes. One intuitive justification for this estimator is that

$$\left|\frac{X}{n}-.5\right|=\frac{|X-.5n|}{n}\ \ge\ \frac{|X-.5n|}{n+4}=\left|\frac{X+2}{n+4}-.5\right|$$

from which we see that the alternative estimator is always somewhat closer to .5 than is the usual estimator. It seems particularly reasonable to move the estimate toward .5 when the number of successes in the sample is close to 0 or n. For example, if there are no successes at all in the sample, is it sensible to estimate the population proportion of successes as zero, especially if n is small?

The bias of the alternative estimator is

$$E\!\left(\frac{X+2}{n+4}\right)-p=\frac{np+2}{n+4}-p=\frac{2-4p}{n+4}=\frac{2/n-4p/n}{1+4/n}$$

This bias is not zero unless p = .5. However, as n increases the numerator approaches zero and the denominator approaches 1, so the bias approaches zero. The variance of the estimator is

$$V\!\left(\frac{X+2}{n+4}\right)=\frac{V(X)}{(n+4)^2}=\frac{np(1-p)}{(n+4)^2}=\frac{p(1-p)}{n+8+16/n}$$

This variance approaches zero as the sample size increases. The mean squared error of the alternative estimator is

$$\mathrm{MSE}=\frac{p(1-p)}{n+8+16/n}+\left(\frac{2/n-4p/n}{1+4/n}\right)^2$$

So how does the mean squared error of the usual estimator, the sample proportion, compare to that of the alternative estimator? If one MSE were smaller than the other for all values of p, then we could say that one estimator is always preferred to the other (using MSE as our criterion).
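The two MSE formulas just derived are easy to evaluate directly. The short Python sketch below (not from the text) compares them for n = 10 at a few values of p:

```python
def mse_usual(p, n):
    """MSE of p_hat = X/n: variance only, since the bias is 0."""
    return p * (1 - p) / n

def mse_alt(p, n):
    """MSE of p_tilde = (X+2)/(n+4): variance + bias^2."""
    var = n * p * (1 - p) / (n + 4) ** 2
    bias = (2 - 4 * p) / (n + 4)
    return var + bias ** 2

n = 10
for p in (0.1, 0.5, 0.9):
    print(p, round(mse_usual(p, n), 4), round(mse_alt(p, n), 4))
```

Near p = .5 the alternative estimator has the smaller MSE, while at extreme values of p the usual estimator wins, so neither dominates the other.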
But as Figure 7.1 shows, this is not the case for the sample sizes $n = 10$ and $n = 100$ displayed there, and in fact neither MSE is uniformly smaller for any other sample size. According to Figure 7.1, the two MSEs are quite different when $n$ is small. In this case the alternative estimator is better for values of $p$ near .5 (since it moves the sample proportion toward .5) but not for extreme values of $p$. For large $n$ the two MSEs are quite similar, but again neither dominates the other.

Figure 7.1 Graphs of MSE for the usual and alternative estimators of $p$: (a) $n = 10$; (b) $n = 100$

Seeking an estimator whose mean squared error is smaller than that of every other estimator for all values of the parameter is generally too ambitious a goal. One common approach is to restrict the class of estimators under consideration in some way, and then seek the estimator that is best in that restricted class. A very popular restriction is to impose the condition of unbiasedness.

Unbiased Estimators

Suppose we have two measuring instruments; one instrument has been accurately calibrated, but the other systematically gives readings smaller than the true value being measured. When each instrument is used repeatedly on the same object, because of measurement error the observed measurements will not be identical. However, the measurements produced by the first instrument will be distributed about the true value in such a way that on average the instrument measures what it purports to measure, so it is called an unbiased instrument. The second instrument yields observations that have a systematic error component, or bias.

DEFINITION A point estimator $\hat{\theta}$ is said to be an unbiased estimator of $\theta$ if $E(\hat{\theta}) = \theta$ for every possible value of $\theta$. If $\hat{\theta}$ is not unbiased, the difference $E(\hat{\theta}) - \theta$ is called the bias of $\hat{\theta}$.
That is, $\hat{\theta}$ is unbiased if its probability (i.e., sampling) distribution is always "centered" at the true value of the parameter. Suppose $\hat{\theta}$ is an unbiased estimator; then if $\theta = 100$, the $\hat{\theta}$ sampling distribution is centered at 100; if $\theta = 27.5$, the $\hat{\theta}$ sampling distribution is centered at 27.5; and so on. Figure 7.2 pictures the distributions of several biased and unbiased estimators. Note that "centered" here means that the expected value, not the median, of the distribution of $\hat{\theta}$ is equal to $\theta$.

Figure 7.2 The pdf's of a biased estimator $\hat{\theta}_1$ and an unbiased estimator $\hat{\theta}_2$ for a parameter $\theta$

It may seem as though it is necessary to know the value of $\theta$ (in which case estimation is unnecessary) to see whether $\hat{\theta}$ is unbiased. This is usually not the case, however, because unbiasedness is a general property of the estimator's sampling distribution (where it is centered), which is typically not dependent on any particular parameter value. For example, in Example 7.4 we showed that $E(\hat{p}) = p$ when $\hat{p}$ is the sample proportion of successes. Thus if $p = .25$, the sampling distribution of $\hat{p}$ is centered at .25 (centered in the sense of mean value); when $p = .9$, the sampling distribution is centered at .9; and so on. It is not necessary to know the value of $p$ to know that $\hat{p}$ is unbiased.

PROPOSITION When $X$ is a binomial rv with parameters $n$ and $p$, the sample proportion $\hat{p} = X/n$ is an unbiased estimator of $p$.

Example 7.5 Suppose that $X$, the reaction time to a stimulus, has a uniform distribution on the interval from 0 to an unknown upper limit $\theta$ (so the density function of $X$ is rectangular in shape, with height $1/\theta$ for $0 \le x \le \theta$). An investigator wants to estimate $\theta$ on the basis of a random sample $X_1, X_2, \ldots, X_n$ of reaction times.
Since $\theta$ is the largest possible time in the entire population of reaction times, consider as a first estimator the largest sample reaction time: $\hat{\theta}_1 = \max(X_1, \ldots, X_n)$. If $n = 5$ and $x_1 = 4.2$, $x_2 = 1.7$, $x_3 = 2.4$, $x_4 = 3.9$, $x_5 = 1.3$, the point estimate of $\theta$ is $\hat{\theta}_1 = \max(4.2, 1.7, 2.4, 3.9, 1.3) = 4.2$.

Unbiasedness implies that some samples will yield estimates that exceed $\theta$ and other samples will yield estimates smaller than $\theta$; otherwise $\theta$ could not possibly be the center (balance point) of $\hat{\theta}_1$'s distribution. However, our proposed estimator will never overestimate $\theta$ (the largest sample value cannot exceed the largest population value) and will underestimate $\theta$ unless the largest sample value equals $\theta$. This intuitive argument shows that $\hat{\theta}_1$ is a biased estimator. More precisely, using our earlier results on order statistics, it can be shown (see Exercise 50) that

$$E(\hat{\theta}_1) = \frac{n}{n+1}\,\theta < \theta \qquad \left(\text{since } \frac{n}{n+1} < 1\right)$$

The bias of $\hat{\theta}_1$ is given by $n\theta/(n+1) - \theta = -\theta/(n+1)$, which approaches 0 as $n$ gets large.

It is easy to modify $\hat{\theta}_1$ to obtain an unbiased estimator of $\theta$. Consider the estimator

$$\hat{\theta}_2 = \frac{n+1}{n}\,\hat{\theta}_1 = \frac{n+1}{n}\max(X_1, \ldots, X_n)$$

Using this estimator on the data gives the estimate $(6/5)(4.2) = 5.04$. The fact that $(n+1)/n > 1$ implies that $\hat{\theta}_2$ will overestimate $\theta$ for some samples and underestimate it for others. The mean value of this estimator is

$$E(\hat{\theta}_2) = E\left[\frac{n+1}{n}\max(X_1, \ldots, X_n)\right] = \frac{n+1}{n}\,E[\max(X_1, \ldots, X_n)] = \frac{n+1}{n}\cdot\frac{n}{n+1}\,\theta = \theta$$

If $\hat{\theta}_2$ is used repeatedly on different samples to estimate $\theta$, some estimates will be too large and others will be too small, but in the long run there will be no systematic tendency to underestimate or overestimate $\theta$. ■

Statistical practitioners who buy into the Principle of Unbiased Estimation would employ an unbiased estimator in preference to a biased estimator.
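The bias claims for these two estimators are easy to check by simulation; a minimal sketch (our own, with the arbitrary choices $\theta = 10$ and $n = 5$):

```python
import random

# Monte Carlo check that theta1 = max(X) systematically underestimates
# theta for a Uniform(0, theta) sample, while theta2 = (n+1)/n * max(X)
# shows no systematic tendency either way.
random.seed(1)
theta, n, reps = 10.0, 5, 100_000
t1 = t2 = 0.0
for _ in range(reps):
    m = max(random.uniform(0, theta) for _ in range(n))
    t1 += m
    t2 += (n + 1) / n * m
print(t1 / reps)  # close to n*theta/(n+1) = 8.33..., not 10
print(t2 / reps)  # close to theta = 10
```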
On this basis, the sample proportion of successes should be preferred to the alternative estimator $(X+2)/(n+4)$ of $p$, and the unbiased estimator $\hat{\theta}_2$ should be preferred to the biased estimator $\hat{\theta}_1$ in the uniform distribution scenario of the previous example.

Let's turn now to the problem of estimating $\sigma^2$ based on a random sample $X_1, \ldots, X_n$. First consider the estimator $S^2 = \sum (X_i - \bar{X})^2/(n-1)$, the sample variance as we have defined it. Applying the result $E(Y^2) = V(Y) + [E(Y)]^2$ to

$$S^2 = \frac{1}{n-1}\left[\sum X_i^2 - \frac{1}{n}\left(\sum X_i\right)^2\right]$$

from Section 1.4 gives

$$E(S^2) = \frac{1}{n-1}\left\{\sum E(X_i^2) - \frac{1}{n}E\left[\left(\sum X_i\right)^2\right]\right\}$$
$$= \frac{1}{n-1}\left\{\sum (\sigma^2 + \mu^2) - \frac{1}{n}\left[V\left(\sum X_i\right) + \left(E\sum X_i\right)^2\right]\right\}$$
$$= \frac{1}{n-1}\left\{n\sigma^2 + n\mu^2 - \frac{1}{n}\,n\sigma^2 - \frac{1}{n}(n\mu)^2\right\} = \frac{1}{n-1}\left\{n\sigma^2 - \sigma^2\right\} = \sigma^2$$

Thus we have shown that the sample variance $S^2$ is an unbiased estimator of $\sigma^2$. The estimator that uses divisor $n$ can be expressed as $(n-1)S^2/n$, so

$$E\left[\frac{(n-1)S^2}{n}\right] = \frac{n-1}{n}E(S^2) = \frac{n-1}{n}\,\sigma^2$$

This estimator is therefore biased. The bias is $(n-1)\sigma^2/n - \sigma^2 = -\sigma^2/n$. Because the bias is negative, the estimator with divisor $n$ tends to underestimate $\sigma^2$, and this is why the divisor $n-1$ is preferred by many statisticians (although when $n$ is large, the bias is small and there is little difference between the two).

This is not quite the whole story, however. Suppose the random sample has come from a normal distribution. Then from Section 6.4, we know that the rv $(n-1)S^2/\sigma^2$ has a chi-squared distribution with $n-1$ degrees of freedom. The mean and variance of a chi-squared variable with $\nu$ df are $\nu$ and $2\nu$, respectively. Let's now consider estimators of the form

$$\hat{\sigma}^2 = c\sum (X_i - \bar{X})^2$$

The expected value of the estimator is

$$E\left[c\sum (X_i - \bar{X})^2\right] = c(n-1)E(S^2) = c(n-1)\sigma^2$$

so the bias is $c(n-1)\sigma^2 - \sigma^2$. The only unbiased estimator of this type is the sample variance, with $c = 1/(n-1)$.
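Under normality, the chi-squared result just cited gives $V[c\sum(X_i - \bar{X})^2] = 2c^2(n-1)\sigma^4$, so the bias and variance of any estimator in this class can be combined into its MSE and compared directly. A sketch (our own illustration; the function name is ours):

```python
# MSE of the estimator c * sum((xi - xbar)^2) for a normal sample:
# bias = c(n-1)sigma^2 - sigma^2, variance = 2 c^2 (n-1) sigma^4.
def mse(c, n, sigma2=1.0):
    bias = c * (n - 1) * sigma2 - sigma2
    var = 2 * c ** 2 * (n - 1) * sigma2 ** 2
    return var + bias ** 2

n = 10
for label, c in [("1/(n-1)", 1 / (n - 1)), ("1/n", 1 / n), ("1/(n+1)", 1 / (n + 1))]:
    print(label, mse(c, n))
```

The divisor $n + 1$ gives the smallest MSE of the three, consistent with the minimizing value $c = 1/(n+1)$ derived in the text.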
Similarly, the variance of the estimator is

$$V\left[c\sum (X_i - \bar{X})^2\right] = c^2\,V[(n-1)S^2] = c^2\sigma^4\,V\!\left[\frac{(n-1)S^2}{\sigma^2}\right] = 2c^2(n-1)\sigma^4$$

Substituting these expressions into the relationship MSE = variance + (bias)², the value of $c$ for which the MSE is minimized can be found by taking the derivative with respect to $c$, equating the resulting expression to zero, and solving for $c$. The result is $c = 1/(n+1)$. So in this situation, the principle of unbiasedness and the principle of minimum MSE are at loggerheads.

As a final blow, even though $S^2$ is unbiased for estimating $\sigma^2$, it is not true that the sample standard deviation $S$ is unbiased for estimating $\sigma$. This is because the square root function is not linear, so the expected value of the square root is not the square root of the expected value. Well, if $S$ is biased, why not find an unbiased estimator for $\sigma$ and use it rather than $S$? Unfortunately, there is no estimator of $\sigma$ that is unbiased irrespective of the nature of the population distribution (although in special cases, e.g., a normal distribution, an unbiased estimator does exist). Fortunately, the bias of $S$ is not serious unless $n$ is quite small, so we shall generally employ it as an estimator. ■

In Example 7.2, we proposed several different estimators for the mean $\mu$ of a normal distribution. If there were a unique unbiased estimator for $\mu$, the estimation dilemma could be resolved by using that estimator. Unfortunately, this is not the case.

PROPOSITION If $X_1, X_2, \ldots, X_n$ is a random sample from a distribution with mean $\mu$, then $\bar{X}$ is an unbiased estimator of $\mu$. If in addition the distribution is continuous and symmetric, then the sample median $\tilde{X}$ and any trimmed mean are also unbiased estimators of $\mu$.

The fact that $\bar{X}$ is unbiased is just a restatement of one of our rules of expected value: $E(\bar{X}) = \mu$ for every possible value of $\mu$ (for discrete as well as continuous distributions).
The unbiasedness of the other estimators is more difficult to verify; the argument requires invoking results on distributions of order statistics from Section 5.5. According to this proposition, the principle of unbiasedness by itself does not always allow us to select a single estimator. When the underlying population is normal, even the third estimator in Example 7.2 is unbiased, and there are many other unbiased estimators. What we now need is a way of selecting among unbiased estimators.

Estimators with Minimum Variance

Suppose $\hat{\theta}_1$ and $\hat{\theta}_2$ are two estimators of $\theta$ that are both unbiased. Then, although the distribution of each estimator is centered at the true value of $\theta$, the spreads of the distributions about the true value may be different.

PRINCIPLE OF MINIMUM VARIANCE UNBIASED ESTIMATION Among all estimators of $\theta$ that are unbiased, choose the one that has minimum variance. The resulting $\hat{\theta}$ is called the minimum variance unbiased estimator (MVUE) of $\theta$. Since MSE = variance + (bias)², seeking an unbiased estimator with minimum variance is the same as seeking an unbiased estimator that has minimum mean squared error.

Figure 7.3 pictures the pdf's of two unbiased estimators, the first having smaller variance than the second. The first estimator is then more likely than the second to produce an estimate close to the true $\theta$. The MVUE is, in a certain sense, the most likely among all unbiased estimators to produce an estimate close to the true $\theta$.

Figure 7.3 Graphs of the pdf's of two different unbiased estimators

We argued in Example 7.5 that when $X_1, \ldots, X_n$ is a random sample from a uniform distribution on $[0, \theta]$, the estimator

$$\hat{\theta}_1 = \frac{n+1}{n}\max(X_1, \ldots, X_n)$$

is unbiased for $\theta$ (we previously denoted this estimator by $\hat{\theta}_2$). This is not the only unbiased estimator of $\theta$.
The expected value of a uniformly distributed rv is just the midpoint of its interval of positive density, so $E(X_i) = \theta/2$. This implies that $E(\bar{X}) = \theta/2$, from which $E(2\bar{X}) = \theta$. That is, the estimator $\hat{\theta}_2 = 2\bar{X}$ is unbiased for $\theta$.

If $X$ is uniformly distributed on the interval $[A, B]$, then $V(X) = \sigma^2 = (B - A)^2/12$ (Exercise 23 in Chapter 4). Thus, in our situation, $V(X_i) = \theta^2/12$, $V(\bar{X}) = \sigma^2/n = \theta^2/(12n)$, and $V(\hat{\theta}_2) = V(2\bar{X}) = 4V(\bar{X}) = \theta^2/(3n)$. The results of Exercise 50 can be used to show that $V(\hat{\theta}_1) = \theta^2/[n(n+2)]$. The estimator $\hat{\theta}_1$ has smaller variance than $\hat{\theta}_2$ if $3n < n(n+2)$, that is, if $0 < n^2 - n = n(n-1)$. As long as $n > 1$, $V(\hat{\theta}_1) < V(\hat{\theta}_2)$, so $\hat{\theta}_1$ is a better estimator than $\hat{\theta}_2$. More advanced methods can be used to show that $\hat{\theta}_1$ is the MVUE of $\theta$: every other unbiased estimator of $\theta$ has variance that exceeds $\theta^2/[n(n+2)]$. ■

One of the triumphs of mathematical statistics has been the development of methodology for identifying the MVUE in a wide variety of situations. The most important result of this type for our purposes concerns estimating the mean $\mu$ of a normal distribution. For a proof in the special case that $\sigma$ is known, see Exercise 45.

THEOREM Let $X_1, \ldots, X_n$ be a random sample from a normal distribution with parameters $\mu$ and $\sigma$. Then the estimator $\hat{\mu} = \bar{X}$ is the MVUE for $\mu$.

Whenever we are convinced that the population being sampled is normal, this result says that $\bar{X}$ should be used to estimate $\mu$. In Example 7.2, then, our estimate would be $\bar{x} = 27.793$.

Once again, in some situations such as the one in Example 7.6, it is possible to obtain an estimator with small bias that would be preferred to the best unbiased estimator. This is illustrated in Figure 7.4. However, MVUEs are often easier to obtain than the type of biased estimator whose distribution is pictured.
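The two variance formulas can be confirmed by simulation; a sketch (our own, with the arbitrary choices $\theta = 1$ and $n = 10$):

```python
import random

# Simulated variances of the two unbiased estimators for Uniform(0, theta):
# theta1 = (n+1)/n * max(X) and theta2 = 2 * Xbar.
random.seed(2)
theta, n, reps = 1.0, 10, 100_000
est1, est2 = [], []
for _ in range(reps):
    xs = [random.uniform(0, theta) for _ in range(n)]
    est1.append((n + 1) / n * max(xs))
    est2.append(2 * sum(xs) / n)

def var(a):
    m = sum(a) / len(a)
    return sum((x - m) ** 2 for x in a) / (len(a) - 1)

v1, v2 = var(est1), var(est2)
print(v1)  # close to theta^2 / (n(n+2)) = 1/120
print(v2)  # close to theta^2 / (3n)     = 1/30
```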
Figure 7.4 A biased estimator that is preferable to the MVUE

More Complications

The last theorem does not say that in estimating a population mean $\mu$, the estimator $\bar{X}$ should be used irrespective of the distribution being sampled. Suppose we wish to estimate the number of calories $\theta$ in a certain food. Using standard measurement techniques, we will obtain a random sample $X_1, \ldots, X_n$ of $n$ calorie measurements. Let's assume that the population distribution is a member of one of the following three families:

$$f(x;\theta) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-(x-\theta)^2/(2\sigma^2)} \qquad -\infty < x < \infty$$

$$f(x;\theta) = \frac{1}{\pi[1 + (x-\theta)^2]} \qquad -\infty < x < \infty$$

$$f(x;\theta) = \frac{1}{2c} \qquad \theta - c \le x \le \theta + c$$

The first is the normal family, the second the Cauchy family (a bell-shaped curve with much heavier tails than the normal), and the third the uniform family (no tails at all); which estimator of $\theta$ is best turns out to depend on which family is actually being sampled.

…

Consider now an experiment in which $n$ components, whose lifetimes are independent and exponentially distributed with parameter $\lambda$ (so the mean lifetime is $\mu = 1/\lambda$), are placed on test, and the experiment is terminated as soon as $r$ of the components have failed; such an experiment is said to be censored. Let $Y_1$ denote the first failure time (the smallest lifetime), let $Y_2$ denote the time at which the second failure occurs (the second smallest lifetime), and so on. Since the experiment terminates at time $Y_r$, the total accumulated lifetime at termination is

$$T_r = \sum_{i=1}^{r} Y_i + (n-r)Y_r$$

We now demonstrate that $\hat{\mu} = T_r/r$ is an unbiased estimator for $\mu$. To do so, we need two properties of exponential variables:

1. The memoryless property (see Section 4.4) says that at any time point, remaining lifetime has the same exponential distribution as original lifetime.
2. If $X_1, \ldots, X_k$ are independent, each exponentially distributed with parameter $\lambda$, then $\min(X_1, \ldots, X_k)$ is exponential with parameter $k\lambda$ and has expected value $1/(k\lambda)$. See Example 5.28.

Since all $n$ components last until $Y_1$, $n-1$ of them last an additional $Y_2 - Y_1$, $n-2$ an additional $Y_3 - Y_2$ amount of time, and so on, another expression for $T_r$ is

$$T_r = nY_1 + (n-1)(Y_2 - Y_1) + (n-2)(Y_3 - Y_2) + \cdots + (n-r+1)(Y_r - Y_{r-1})$$

But $Y_1$ is the minimum of $n$ exponential variables, so $E(Y_1) = 1/(n\lambda)$. Similarly, $Y_2 - Y_1$ is the smallest of the $n-1$ remaining lifetimes, each exponential with parameter $\lambda$ (by the memoryless property), so $E(Y_2 - Y_1) = 1/[(n-1)\lambda]$. Continuing, $E(Y_{i+1} - Y_i) = 1/[(n-i)\lambda]$, so

$$E(T_r) = nE(Y_1) + (n-1)E(Y_2 - Y_1) + \cdots + (n-r+1)E(Y_r - Y_{r-1})$$
$$= n\cdot\frac{1}{n\lambda} + (n-1)\cdot\frac{1}{(n-1)\lambda} + \cdots + (n-r+1)\cdot\frac{1}{(n-r+1)\lambda} = \frac{r}{\lambda}$$

Therefore, $E(T_r/r) = (1/r)E(T_r) = (1/r)(r/\lambda) = 1/\lambda = \mu$,
as was claimed. As an example, suppose 20 components are put on test and $r = 10$. Then if the first ten failure times are 11, 15, 29, 33, 35, 40, 47, 55, 58, and 72, the estimate of $\mu$ is

$$\hat{\mu} = \frac{t_r}{r} = \frac{11 + 15 + \cdots + 72 + (10)(72)}{10} = 111.5$$

The advantage of the experiment with censoring is that it terminates more quickly than the uncensored experiment. However, it can be shown that $V(T_r/r) = 1/(\lambda^2 r)$, which is larger than $1/(\lambda^2 n)$, the variance of $\bar{X}$ in the uncensored experiment. ■

Reporting a Point Estimate: The Standard Error

Besides reporting the value of a point estimate, some indication of its precision should be given. The usual measure of precision is the standard error of the estimator used.

DEFINITION The standard error of an estimator $\hat{\theta}$ is its standard deviation $\sigma_{\hat{\theta}} = \sqrt{V(\hat{\theta})}$. If the standard error itself involves unknown parameters whose values can be estimated, substituting these estimates into $\sigma_{\hat{\theta}}$ yields the estimated standard error (estimated standard deviation) of the estimator. The estimated standard error can be denoted either by $\hat{\sigma}_{\hat{\theta}}$ (the hat over $\sigma$ emphasizes that $\sigma_{\hat{\theta}}$ is being estimated) or by $s_{\hat{\theta}}$.

Example 7.2 (continued) Assuming that breakdown voltage is normally distributed, $\hat{\mu} = \bar{X}$ is the best estimator of $\mu$. If the value of $\sigma$ is known to be 1.5, the standard error of $\bar{X}$ is $\sigma_{\bar{X}} = \sigma/\sqrt{n} = 1.5/\sqrt{20} = .335$. If, as is usually the case, the value of $\sigma$ is unknown, the estimate $\hat{\sigma} = s = 1.462$ is substituted into $\sigma_{\bar{X}}$ to obtain the estimated standard error $\hat{\sigma}_{\bar{X}} = s_{\bar{X}} = s/\sqrt{n} = 1.462/\sqrt{20} = .327$. ■

Example 7.1 (continued) The standard error of $\hat{p} = X/n$ is

$$\sigma_{\hat{p}} = \sqrt{V(X/n)} = \sqrt{\frac{V(X)}{n^2}} = \sqrt{\frac{np(1-p)}{n^2}} = \sqrt{\frac{pq}{n}}$$

Since $p$ and $q = 1 - p$ are unknown (else why estimate?), we substitute $\hat{p} = x/n$ and $\hat{q} = 1 - x/n$ into $\sigma_{\hat{p}}$, yielding the estimated standard error $\hat{\sigma}_{\hat{p}} = \sqrt{\hat{p}\hat{q}/n} = \sqrt{(.6)(.4)/25} = .098$. Alternatively, since the largest value of $pq$ is attained when $p = q = .5$, an upper bound on the standard error is $\sqrt{1/(4n)} = .10$.
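The standard-error arithmetic in the two continued examples can be reproduced directly; a minimal sketch (our own, using the values quoted in the text):

```python
import math

# Estimated standard errors for the two running examples:
# breakdown voltage (n = 20) and the sample proportion (p-hat = .6, n = 25).
n_voltage, sigma_known, s = 20, 1.5, 1.462
se_known = sigma_known / math.sqrt(n_voltage)   # sigma known
se_est = s / math.sqrt(n_voltage)               # sigma estimated by s
n_prop, p_hat = 25, 0.6
se_p = math.sqrt(p_hat * (1 - p_hat) / n_prop)  # estimated SE of p-hat
se_bound = math.sqrt(1 / (4 * n_prop))          # worst case, at p = .5
print(round(se_known, 3), round(se_est, 3), round(se_p, 3), round(se_bound, 2))
# 0.335 0.327 0.098 0.1
```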
■

When the point estimator $\hat{\theta}$ has approximately a normal distribution, which will often be the case when $n$ is large, we can be reasonably confident that the true value of $\theta$ lies within approximately 2 standard errors (standard deviations) of $\hat{\theta}$. Thus if measurement of prothrombin (a blood-clotting protein) in 36 individuals gives $\hat{\mu} = \bar{x} = 20.5$ and $s = 3.6$ mg/100 ml, then $s/\sqrt{n} = .60$, so "within 2 estimated standard errors of $\hat{\mu}$" translates to the interval $20.50 \pm (2)(.60) = (19.30, 21.70)$.

If $\hat{\theta}$ is not necessarily approximately normal but is unbiased, then it can be shown (using Chebyshev's inequality, introduced in Exercises 43, 77, and 135 of Chapter 3) that the estimate will deviate from $\theta$ by as much as 4 standard errors at most 6% of the time. We would then expect the true value to lie within 4 standard errors of $\hat{\theta}$ (and this is a very conservative statement, since it applies to any unbiased $\hat{\theta}$). Summarizing, the standard error tells us roughly within what distance of $\hat{\theta}$ we can expect the true value of $\theta$ to lie.

The Bootstrap

The form of the estimator $\hat{\theta}$ may be sufficiently complicated that standard statistical theory cannot be applied to obtain an expression for $\sigma_{\hat{\theta}}$. This is true, for example, in the case $\theta = \sigma$, $\hat{\theta} = S$; the standard deviation of the statistic $S$, $\sigma_S$, cannot in general be determined. In recent years, a computer-intensive method called the bootstrap has been introduced to address this problem. Suppose that the population pdf is $f(x;\theta)$, a member of a particular parametric family, and that the data $x_1, x_2, \ldots, x_n$ gives $\hat{\theta} = 21.7$. We now use the computer to obtain "bootstrap samples" from the pdf $f(x; 21.7)$, and for each sample we calculate a "bootstrap estimate" $\hat{\theta}^*$:

First bootstrap sample: $x_1^*, x_2^*, \ldots, x_n^*$; estimate $= \hat{\theta}_1^*$
Second bootstrap sample: $x_1^*, x_2^*, \ldots, x_n^*$; estimate $= \hat{\theta}_2^*$
⋮
$B$th bootstrap sample: $x_1^*, x_2^*, \ldots, x_n^*$; estimate $= \hat{\theta}_B^*$

$B = 100$ or 200 is often used. Now let $\bar{\theta}^* = \sum \hat{\theta}_i^*/B$, the sample mean of the bootstrap estimates.
The bootstrap estimate of 0’s standard error is now just the sample standard deviation of the 07's: I ae aN? %=Ve5 UG -*) =r dG (In the bootstrap literature, B is often used in place of B — 1; for typical values of B, there is usually little difference between the resulting estimates.) --- Trang 359 --- 346 carrer 7 Point Estimation A theoretical model suggests that X, the time to breakdown of an insulating fluid between electrodes at a particular voltage, has f(x; 2) = Ae", an exponential distri- bution. A random sample of n = 10 breakdown times (min) gives the following data: 41.53 18.73 2.99 30.34 12.33 117.52 73.02 223.63 4.00 26.78 Since E(X) = 1/2, E(X)=1/4, so a reasonable estimate of A is d= 1/x = 1/55.087 = .018153. We then used a statistical computer package to obtain B = 100 bootstrap samples, each of size 10, from f(x; .018153). The first such sample was 41.00, 109.70, 16.78, 6.31, 6.76, 5.62, 60.96, 78.81, 192.25, 27.61, from which > x7 = 545.8 and 7} = 1/54.58 = .01832. The average of the 100 bootstrap estimates is 7 = .02153, and the sample standard deviation of these 100 estimates is s; = .0091, the bootstrap estimate of 2’s standard error. A histogram of the 100 As was somewhat positively skewed, suggesting that the sampling distribution of / also has this property. a Sometimes an investigator wishes to estimate a population characteristic without assuming that the population distribution belongs to a particular parametric family. An instance of this occurred in Example 7.8, where a 10% trimmed mean was proposed for estimating a symmetric population distribution’s center 6. The data of Example 7.2 gave 0= X.x(10) = 27.838, but now there is no assumed fix; 0), so how can we obtain a bootstrap sample? The answer is to regard the sample itself as constituting the population (the n = 20 observations in Example 7.2) and take B different samples, each of size n, with replacement from this population. We expand on this idea in Section 8.5. 
Exercises Section 7.1 (1–20)

1. The accompanying data on IQ for first-graders at a university lab school was introduced in Example 1.2.

82 96 99 102 103 103 106 107 108 108 108 108 109 110 110 111 113 113 113 113 115 115 118 118 119 121 122 122 127 132 136 140 146

a. Calculate a point estimate of the mean value of IQ for the conceptual population of all first-graders at this school, and state which estimator you used. [Hint: $\sum x_i = 3753$.]
b. Calculate a point estimate of the IQ value that separates the lowest 50% of all such students from the highest 50%, and state which estimator you used.
c. Calculate and interpret a point estimate of the population standard deviation $\sigma$. Which estimator did you use? [Hint: $\sum x_i^2 = 432{,}015$.]
d. Calculate a point estimate of the proportion of all such students whose IQ exceeds 100. [Hint: Think of an observation as a "success" if it exceeds 100.]
e. Calculate a point estimate of the population coefficient of variation $\sigma/\mu$, and state which estimator you used.

2. A sample of 20 students who had recently taken elementary statistics yielded the following information on brand of calculator owned (T = Texas Instruments, H = Hewlett-Packard, C = Casio, S = Sharp):

T T H T C T T S C H S S T H C T T T H T

a. Estimate the true proportion of all such students who own a Texas Instruments calculator.
b. Of the ten students who owned a TI calculator, four had graphing calculators. Estimate the proportion of students who do not own a TI graphing calculator.

3. Consider the following sample of observations on coating thickness for low-viscosity paint ("Achieving a Target Value for a Manufacturing Process: A Case Study," J. Qual. Technol., 1992: 22–26):

.83 .88 .88 1.04 1.09 1.12 1.29 1.31 1.48 1.49 1.59 1.62 1.65 1.71 1.76 1.83

Assume that the distribution of coating thickness is normal (a normal probability plot strongly supports this assumption).
a. Calculate a point estimate of the mean value of coating thickness, and state which estimator you used.
b. Calculate a point estimate of the median of the coating thickness distribution, and state which estimator you used.
c. Calculate a point estimate of the value that separates the largest 10% of all values in the thickness distribution from the remaining 90%, and state which estimator you used. [Hint: Express what you are trying to estimate in terms of $\mu$ and $\sigma$.]
d. Estimate $P(X < 1.5)$, i.e., the proportion of all thickness values less than 1.5. [Hint: If you knew the values of $\mu$ and $\sigma$, you could calculate this probability. These values are not available, but they can be estimated.]
e. What is the estimated standard error of the estimator that you used in part (b)?

4. The data set mentioned in Exercise 1 also includes these third-grade verbal IQ observations for males:

117 103 121 112 120 132 113 117 132 149 125 131 136 107 108 113 136 114

and females:

… 90 114 109 102 114 127 127 103

Prior to obtaining data, denote the male values by $X_1, \ldots, X_m$ and the female values by $Y_1, \ldots, Y_n$. Suppose that the $X_i$'s constitute a random sample from a distribution with mean $\mu_1$ and standard deviation $\sigma_1$ and that the $Y_i$'s form a random sample (independent of the $X_i$'s) from another distribution with mean $\mu_2$ and standard deviation $\sigma_2$.
a. Use rules of expected value to show that $\bar{X} - \bar{Y}$ is an unbiased estimator of $\mu_1 - \mu_2$. Calculate the estimate for the given data.
b. Use rules of variance from Chapter 6 to obtain an expression for the variance and standard deviation (standard error) of the estimator in part (a), and then compute the estimated standard error.
c. Calculate a point estimate of the ratio $\sigma_1/\sigma_2$ of the two standard deviations.
d. Suppose one male third-grader and one female third-grader are randomly selected. Calculate a point estimate of the variance of the difference $X - Y$ between male and female IQ.

5. As an example of a situation in which several different statistics could reasonably be used to calculate a point estimate, consider a population of $N$ invoices. Associated with each invoice is its "book value," the recorded amount of that invoice. Let $T$ denote the total book value, a known amount. Some of these book values are erroneous. An audit will be carried out by randomly selecting $n$ invoices and determining the audited (correct) value for each one. Suppose that the sample gives the following results (in dollars):

Invoice:        1    2    3    4    5
Book value:   300  720  526  200  127
Audited value: 300  520  526  200  157
Error:          0  200    0    0  -30

Let $\bar{X}$ = the sample mean audited value, $\bar{Y}$ = the sample mean book value, and $\bar{D}$ = the sample mean error. Propose three different statistics for estimating the total audited (i.e., correct) value: one involving just $N$ and $\bar{X}$, another involving $N$, $T$, and $\bar{D}$, and the last involving $T$ and $\bar{X}/\bar{Y}$. Then calculate the resulting estimates when $N$ = 5,000 and $T$ = 1,761,300. (The article "Statistical Models and Analysis in Auditing," Statistical Science, 1989: 2–33 discusses properties of these estimators.)

6. Consider the accompanying observations on stream flow (1000s of acre-feet) recorded at a station in Colorado for the period April 1–August 31 over a 31-year span (from an article in the 1974 volume of Water Resources Res.):

127.96 210.07 203.24 108.91 178.21 285.37 100.85 89.59 185.36 126.94 200.19 66.24 247.11 299.87 109.64 125.86 114.79 109.11 330.33 85.54 117.64 302.74 280.55 145.11 95.36 204.91 311.13 150.58 262.09 477.08 94.33

An appropriate probability plot supports the use of the lognormal distribution (see Section 4.5) as a reasonable model for stream flow.
a. Estimate the parameters of the distribution. [Hint: Remember that $X$ has a lognormal distribution with parameters $\mu$ and $\sigma^2$ if $\ln(X)$ is normally distributed with mean $\mu$ and variance $\sigma^2$.]
b. Use the estimates of part (a) to calculate an estimate of the expected value of stream flow. [Hint: What is $E(X)$?]

7. a. A random sample of 10 houses in a particular area, each of which is heated with natural gas, is selected, and the amount of gas (therms) used during the month of January is determined for each house. The resulting observations are 103, 156, 118, 89, 125, 147, 122, 109, 138, 99. Let $\mu$ denote the average gas usage during January by all houses in this area. Compute a point estimate of $\mu$.
b. Suppose there are 10,000 houses in this area that use natural gas for heating. Let $\tau$ denote the total amount of gas used by all of these houses during January. Estimate $\tau$ using the data of part (a). What estimator did you use in computing your estimate?
c. Use the data in part (a) to estimate $p$, the proportion of all houses that used at most 100 therms.
d. Give a point estimate of the population median usage (the middle value in the population of all houses) based on the sample of part (a). What estimator did you use?

8. In a random sample of 80 components of a certain type, 12 are found to be defective.
a. Give a point estimate of the proportion of all such components that are not defective.
b. A system is to be constructed by randomly selecting two of these components and connecting them in series. The series connection implies that the system will function if and only if neither component is defective (i.e., both components work properly). Estimate the proportion of all such systems that work properly. [Hint: If $p$ denotes the probability that a component works properly, how can P(system works) be expressed in terms of $p$?]
c. Let $\hat{p}$ be the sample proportion of successes. Is $\hat{p}^2$ an unbiased estimator for $p^2$? [Hint: For any rv $Y$, $E(Y^2) = V(Y) + [E(Y)]^2$.]

9. Each of 150 newly manufactured items is examined and the number of scratches per item is recorded (the items are supposed to be free of scratches), yielding the following data:

Number of scratches per item:  0   1   2   3   4   5   6   7
Observed frequency:           18  37  42  30  13   7   2   1

Let $X$ = the number of scratches on a randomly chosen item, and assume that $X$ has a Poisson distribution with parameter $\lambda$.
a. Find an unbiased estimator of $\lambda$ and compute the estimate for the data. [Hint: $E(X) = \lambda$ for $X$ Poisson, so $E(\bar{X}) = ?$]
b. What is the standard deviation (standard error) of your estimator? Compute the estimated standard error. [Hint: $\sigma_X^2 = \lambda$ for $X$ Poisson.]

10. Using a long rod that has length $\mu$, you are going to lay out a square plot in which the length of each side is $\mu$. Thus the area of the plot will be $\mu^2$. However, you do not know the value of $\mu$, so you decide to make $n$ independent measurements $X_1, X_2, \ldots, X_n$ of the length. Assume that each $X_i$ has mean $\mu$ (unbiased measurements) and variance $\sigma^2$.
a. Show that $\bar{X}^2$ is not an unbiased estimator for $\mu^2$. [Hint: For any rv $Y$, $E(Y^2) = V(Y) + [E(Y)]^2$. Apply this with $Y = \bar{X}$.]
b. For what value of $k$ is the estimator $\bar{X}^2 - kS^2$ unbiased for $\mu^2$? [Hint: Compute $E(\bar{X}^2 - kS^2)$.]

11. Of $n_1$ randomly selected male smokers, $X_1$ smoked filter cigarettes, whereas of $n_2$ randomly selected female smokers, $X_2$ smoked filter cigarettes. Let $p_1$ and $p_2$ denote the probabilities that a randomly selected male and female, respectively, smokes filter cigarettes.
a. Show that $(X_1/n_1) - (X_2/n_2)$ is an unbiased estimator for $p_1 - p_2$. [Hint: $E(X_i) = n_i p_i$ for $i = 1, 2$.]
b. What is the standard error of the estimator in part (a)?
c. How would you use the observed values $x_1$ and $x_2$ to estimate the standard error of your estimator?
d. If $n_1 = n_2 = 200$, $x_1 = 127$, and $x_2 = 176$, use the estimator of part (a) to obtain an estimate of $p_1 - p_2$.
e. Use the result of part (c) and the data of part (d) to estimate the standard error of the estimator.

12. Suppose a certain type of fertilizer has an expected yield per acre of $\mu_1$ with variance $\sigma^2$, whereas the expected yield for a second type of fertilizer is $\mu_2$ with the same variance $\sigma^2$. Let $S_1^2$ and $S_2^2$ denote the sample variances of yields based on sample sizes $n_1$ and $n_2$, respectively, of the two fertilizers. Show that the pooled (combined) estimator

$$\hat{\sigma}^2 = \frac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}$$

is an unbiased estimator of $\sigma^2$.

13. Consider a random sample $X_1, \ldots, X_n$ from the pdf

$$f(x;\theta) = .5(1 + \theta x) \qquad -1 \le x \le 1$$

where $-1 < \theta < 1$ (this distribution arises in particle physics). Show that $\hat{\theta} = 3\bar{X}$ is an unbiased estimator of $\theta$. [Hint: First determine $\mu = E(X) = E(\bar{X})$.]

14. A sample of $n$ captured Pandemonium jet fighters results in serial numbers $x_1, x_2, x_3, \ldots, x_n$. The CIA knows that the aircraft were numbered consecutively at the factory starting with $\alpha$ and ending with $\beta$, so that the total number of planes manufactured is $\beta - \alpha + 1$ (e.g., if $\alpha = 17$ and $\beta = 29$, then $29 - 17 + 1 = 13$ planes having serial numbers 17, 18, 19, …, 28, 29 were manufactured). However, the CIA does not know the values of $\alpha$ or $\beta$. A CIA statistician suggests using the estimator $\max(X_i) - \min(X_i) + 1$ to estimate the total number of planes manufactured.
a. If $n = 5$, $x_1 = 237$, $x_2 = 375$, $x_3 = 202$, $x_4 = 525$, and $x_5 = 418$, what is the corresponding estimate?
b. Under what conditions on the sample will the value of the estimate be exactly equal to the true total number of planes? Will the estimate ever be larger than the true total? Do you think the estimator is unbiased for estimating $\beta - \alpha + 1$? Explain in one or two sentences. (A similar method was used to estimate German tank production in World War II.)

15. Let $X_1, X_2, \ldots, X_n$ represent a random sample from a Rayleigh distribution with pdf

$$f(x;\theta) = \frac{x}{\theta}\,e^{-x^2/(2\theta)} \qquad x > 0$$

a. It can be shown that $E(X^2) = 2\theta$. Use this fact to construct an unbiased estimator of $\theta$ based on $\sum X_i^2$ (and use rules of expected value to show that it is unbiased).
b. Estimate $\theta$ from the following $n = 10$ measurements of blood plasma beta concentration (in pmol/L) for men:

16.88 10.23 4.59 6.66 13.68 14.23 19.87 9.40 6.51 10.95

16. Suppose the true average growth $\mu$ of one type of plant during a 1-year period is identical to that of a second type, but the variance of growth for the first type is $\sigma^2$, whereas for the second type the variance is $4\sigma^2$. Let $X_1, \ldots, X_m$ be $m$ independent growth observations on the first type [so $E(X_i) = \mu$, $V(X_i) = \sigma^2$], and let $Y_1, \ldots, Y_n$ be $n$ independent growth observations on the second type [$E(Y_i) = \mu$, $V(Y_i) = 4\sigma^2$]. Let $c$ be a numerical constant and consider the estimator $\hat{\mu} = c\bar{X} + (1-c)\bar{Y}$. For any $c$ between 0 and 1, this is a weighted average of the two sample means, e.g., $.7\bar{X} + .3\bar{Y}$.
a. Show that for any $c$ the estimator is unbiased.
b. For fixed $m$ and $n$, what value of $c$ minimizes $V(\hat{\mu})$? [Hint: The estimator is a linear combination of the two sample means, and these means are independent. Once you have an expression for the variance, differentiate with respect to $c$.]

17. In Chapter 3, we defined a negative binomial rv as the number of failures that occur before the $r$th success in a sequence of independent and identical success/failure trials. The probability mass function (pmf) of $X$ is

$$nb(x; r, p) = \binom{x + r - 1}{r - 1}\,p^r (1-p)^x \qquad x = 0, 1, 2, \ldots$$

and 0 otherwise.
a. Suppose that $r \ge 2$. Show that $\hat{p} = (r-1)/(X + r - 1)$ is an unbiased estimator for $p$. [Hint: Write out $E(\hat{p})$ and cancel $x + r - 1$ inside the sum.]
b. A reporter wishing to interview five individuals who support a certain candidate begins asking people whether (S) or not (F) they support the candidate. If the sequence of responses is SFFSFFFSSS, estimate $p$ = the true proportion who support the candidate.

18. Let $X_1, X_2, \ldots, X_n$ be a random sample from a pdf $f(x)$ that is symmetric about $\mu$, so that the sample median $\tilde{X}$ is an unbiased estimator of $\mu$. If $n$ is large, it can be shown that $V(\tilde{X}) \approx 1/\{4n[f(\mu)]^2\}$. When the underlying pdf is Cauchy (see Example 7.8), $V(\bar{X}) = \infty$, so $\bar{X}$ is a terrible estimator. What is $V(\tilde{X})$ in this case when $n$ is large?

19. An investigator wishes to estimate the proportion of students at a certain university who have violated the honor code. Having obtained a random sample of $n$ students, she realizes that asking each, "Have you violated the honor code?" will probably result in some untruthful responses. Consider the following scheme, called a randomized response technique. The investigator makes up a deck of 100 cards, of which 50 are of type I and 50 are of type II.

Type I: Have you violated the honor code (yes or no)?
Type II: Is the last digit of your telephone number a 0, 1, or 2 (yes or no)?

Each student in the random sample is asked to mix the deck, draw a card, and answer the resulting question truthfully. Because of the irrelevant question on type II cards, a yes response no longer stigmatizes the respondent, so we assume that responses are truthful. Let $p$ denote the proportion of honor-code violators (i.e., the probability of a randomly selected student being a violator), and let $\lambda$ = P(yes response). Then $\lambda$ and $p$ are related by $\lambda = .5p + (.5)(.3)$.
a. Let $Y$ denote the number of yes responses, so $Y \sim \text{Bin}(n, \lambda)$. Thus $Y/n$ is an unbiased estimator of $\lambda$. Derive an estimator for $p$ based on $Y$. If …
b. Use the fact that $E(Y/n) = \lambda$ to show that your estimator $\hat{p}$ is unbiased.
c. If there were 70 type I and 30 type II cards, what would be your estimator for $p$?

20. Return to the problem of estimating the population proportion $p$ and consider another adjusted estimator, namely

$$\hat{p} = \frac{X + \sqrt{n/4}}{n + \sqrt{n}}$$

The justification for this estimator comes from the Bayesian approach to point estimation to be introduced in Section 14.4.
a. Determine the mean squared error of this estimator. What do you find interesting about this MSE?
b.
Compare the MSE of this estimator to the n= 80 and y = 20, what is your estimate? MSE of the usual estimator (the sample [Hint: Solve 2 = .5p + .15 for p and then sub- proportion). stitute Y/n for 4.) Methods of Point Estimation So far the point estimators we have introduced were obtained via intuition and/or educated guesswork. We now discuss two “constructive” methods for obtaining point estimators: the method of moments and the method of maximum likelihood. By constructive we mean that the general definition of each type of estimator suggests explicitly how to obtain the estimator in any specific problem. Although maximum likelihood estimators are generally preferable to moment estimators because of certain efficiency properties, they often require significantly more computation than do moment estimators. It is sometimes the case that these methods yield unbiased estimators. The Method of Moments The basic idea of this method is to equate certain sample characteristics, such as the mean, to the corresponding population expected values. Then solving these equa- tions for unknown parameter values yields the estimators. DEFINITION Let X;,...,X,, be arandom sample from a pmf or pdf f(x). For k = 1,2,3,..., the kth population moment, or kth moment of the distribution f(x), is EX). The kth sample moment is (1/1) 30"; X*. Thus the first population moment is E(X) = u and the first sample moment is ¥X;/n =X. The second population and sample moments are E(X*) and YX? /n, respectively. The population moments will be functions of any unknown parameters 0), 02,.... --- Trang 364 --- 7.2 Methods of Point Estimation 351 DEFINITION Let X,, Xz, .. ., X, be a random sample from a distribution with pmf or pdf FO iy <5 Om), where O155 535 Om are parameters whose values are unknown. Then the moment estimators 0), ... , 0, are obtained by equating the first m sample moments to the corresponding first m population moments and solving for 0), -. «+ On. 
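The kth sample moment in the definition above is a one-line computation. As a small numerical illustration (my own sketch, not from the text), the snippet below computes the first two sample moments of a toy data set and applies the one-parameter recipe to an exponential model, where E(X) = 1/λ:

```python
def sample_moment(xs, k):
    """kth sample moment: (1/n) * sum of x_i**k."""
    return sum(x ** k for x in xs) / len(xs)

xs = [1.0, 2.0, 3.0]
m1 = sample_moment(xs, 1)   # first sample moment = the sample mean, 2.0
m2 = sample_moment(xs, 2)   # second sample moment, (1 + 4 + 9)/3

# One-parameter case: equate E(X) to m1 and solve for the parameter.
# For an exponential distribution E(X) = 1/lambda, so lambda_hat = 1/m1.
lambda_hat = 1 / m1
```

With two unknown parameters, the same recipe equates both m1 and m2 to E(X) and E(X²) and solves the pair of equations.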
If, for example, m = 2, E(X) and E(X²) will be functions of θ₁ and θ₂. Setting E(X) = (1/n)ΣXᵢ (= X̄) and E(X²) = (1/n)ΣXᵢ² gives two equations in θ₁ and θ₂. The solution then defines the estimators. For estimating a population mean μ, the method gives μ̂ = X̄, so the estimator is the sample mean.

Example 7.13  Let X₁, ..., Xₙ represent a random sample of service times of n customers at a certain facility, where the underlying distribution is assumed exponential with parameter λ. Since there is only one parameter to be estimated, the estimator is obtained by equating E(X) to X̄. Since E(X) = 1/λ for an exponential distribution, this gives 1/λ = X̄ or λ = 1/X̄. The moment estimator of λ is then λ̂ = 1/X̄. ▪

Example 7.14  Let X₁, ..., Xₙ be a random sample from a gamma distribution with parameters α and β. From Section 4.4, E(X) = αβ and E(X²) = β²Γ(α + 2)/Γ(α) = β²(α + 1)α. The moment estimators of α and β are obtained by solving

   X̄ = αβ,  (1/n)ΣXᵢ² = α(α + 1)β²

Since α(α + 1)β² = α²β² + αβ² and the first equation implies α²β² = (X̄)², the second equation becomes

   (1/n)ΣXᵢ² = (X̄)² + αβ²

Now dividing each side of this second equation by the corresponding side of the first equation and substituting back gives the estimators

   α̂ = (X̄)² / [(1/n)ΣXᵢ² − (X̄)²],  β̂ = [(1/n)ΣXᵢ² − (X̄)²] / X̄

To illustrate, the survival time data mentioned in Example 4.28 is

   152 115 109  94  88 137 152  77 160 165
   125  40 128 123 136 101  62 153  83  69

with x̄ = 113.5 and (1/20)Σxᵢ² = 14,087.8. The estimates are

   α̂ = (113.5)² / [14,087.8 − (113.5)²] = 10.7
   β̂ = [14,087.8 − (113.5)²] / 113.5 = 10.6

These estimates of α and β differ from the values suggested by Gross and Clark because they used a different estimation technique. ▪

Example 7.15  Let X₁, ..., Xₙ be a random sample from a generalized negative binomial distribution with parameters r and p (Section 3.6). Since E(X) = r(1 − p)/p and V(X) = r(1 − p)/p², E(X²) = V(X) + [E(X)]² = r(1 − p)(r − rp + 1)/p². Equating E(X) to X̄ and E(X²) to (1/n)ΣXᵢ² eventually gives

   p̂ = X̄ / [(1/n)ΣXᵢ² − (X̄)²],  r̂ = (X̄)² / [(1/n)ΣXᵢ² − (X̄)² − X̄]

As an illustration, Reep, Pollard, and Benjamin ("Skill and Chance in Ball Games," J. Roy. Statist. Soc. Ser. A, 1971: 623–629) consider the negative binomial distribution as a model for the number of goals per game scored by National Hockey League teams. The data for 1966–1967 follows (420 games):

   Goals     |  0   1   2   3   4   5   6   7   8   9  10
   Frequency | 29  71  82  89  65  45  24   7   4   1   3

Then,

   x̄ = Σxᵢ/420 = [(0)(29) + (1)(71) + ··· + (10)(3)]/420 = 2.98
   Σxᵢ²/420 = [(0)²(29) + (1)²(71) + ··· + (10)²(3)]/420 = 12.40

Thus,

   p̂ = 2.98 / [12.40 − (2.98)²] = .85,  r̂ = (2.98)² / [12.40 − (2.98)² − 2.98] = 16.46

Although r by definition must be positive, the denominator of r̂ could be negative, indicating that the negative binomial distribution is not appropriate (or that the moment estimator is flawed). ▪

Maximum Likelihood Estimation

The method of maximum likelihood was first introduced by R. A. Fisher, a geneticist and statistician, in the 1920s. Most statisticians recommend this method, at least when the sample size is large, since the resulting estimators have certain desirable efficiency properties (see the proposition on large-sample behavior toward the end of this section).

Example 7.16  A sample of ten new bike helmets manufactured by a company is obtained. Upon testing, it is found that the first, third, and tenth helmets are flawed, whereas the others are not. Let p = P(flawed helmet) and define X₁, ..., X₁₀ by Xᵢ = 1 if the ith helmet is flawed and zero otherwise. Then the observed xᵢ's are 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, so the joint pmf of the sample is

   f(x₁, x₂, ..., x₁₀; p) = p(1 − p)p ··· p = p³(1 − p)⁷    (7.4)

We now ask, "For what value of p is the observed sample most likely to have occurred?" That is, we wish to find the value of p that maximizes the pmf (7.4) or, equivalently, maximizes the natural log of (7.4).² Since

   ln[f(x₁, x₂, ..., x₁₀; p)] = 3 ln(p) + 7 ln(1 − p)    (7.5)

and this is a differentiable function of p, equating the derivative of (7.5) to zero gives the maximizing value³:

   (d/dp) ln[f(x₁, x₂, ..., x₁₀; p)] = 3/p − 7/(1 − p) = 0  ⇒  p = 3/10 = x/n

where x is the observed number of successes (flawed helmets). The estimate of p is now p̂ = 3/10. It is called the maximum likelihood estimate because for fixed x₁, ..., x₁₀, it is the parameter value that maximizes the likelihood (joint pmf) of the observed sample. The likelihood and log likelihood are graphed in Figure 7.5. Of course, the maximum on both graphs occurs at the same value, p = .3.

[Figure 7.5  Likelihood and log likelihood plotted against p]

Note that if we had been told only that among the ten helmets there were three that were flawed, Equation (7.4) would be replaced by the binomial pmf

   C(10, 3) p³(1 − p)⁷

which is also maximized for p̂ = 3/10. ▪

² Since ln[g(x)] is a monotonic function of g(x), finding x to maximize ln[g(x)] is equivalent to maximizing g(x) itself. In statistics, taking the logarithm frequently changes a product to a sum, which is easier to work with.
³ This conclusion requires checking the second derivative, but the details are omitted.

DEFINITION  Let X₁, ..., Xₙ have joint pmf or pdf

   f(x₁, x₂, ..., xₙ; θ₁, ..., θₘ)    (7.6)

where the parameters θ₁, ..., θₘ have unknown values. When x₁, ..., xₙ are the observed sample values and (7.6) is regarded as a function of θ₁, ..., θₘ, it is called the likelihood function.
The maximum likelihood estimates θ̂₁, ..., θ̂ₘ are those values of the θᵢ's that maximize the likelihood function, so that

   f(x₁, ..., xₙ; θ̂₁, ..., θ̂ₘ) ≥ f(x₁, ..., xₙ; θ₁, ..., θₘ)  for all θ₁, ..., θₘ

When the Xᵢ's are substituted in place of the xᵢ's, the maximum likelihood estimators (mle's) result.

The likelihood function tells us how likely the observed sample is as a function of the possible parameter values. Maximizing the likelihood gives the parameter values for which the observed sample is most likely to have been generated, that is, the parameter values that "agree most closely" with the observed data.

Example 7.17  Suppose X₁, ..., Xₙ is a random sample from an exponential distribution with parameter λ. Because of independence, the likelihood function is a product of the individual pdf's:

   f(x₁, ..., xₙ; λ) = (λe^{−λx₁}) ··· (λe^{−λxₙ}) = λⁿ e^{−λΣxᵢ}

The ln(likelihood) is

   ln[f(x₁, ..., xₙ; λ)] = n ln(λ) − λΣxᵢ

Equating (d/dλ)[ln(likelihood)] to zero results in n/λ − Σxᵢ = 0, or λ = n/Σxᵢ = 1/x̄. Thus the mle is λ̂ = 1/X̄; it is identical to the method of moments estimator, but it is not an unbiased estimator, since E(1/X̄) ≠ 1/E(X̄). ▪

Example 7.18  Let X₁, ..., Xₙ be a random sample from a normal distribution. The likelihood function is

   f(x₁, ..., xₙ; μ, σ²) = [1/√(2πσ²)] e^{−(x₁−μ)²/(2σ²)} ··· [1/√(2πσ²)] e^{−(xₙ−μ)²/(2σ²)}
                         = [1/(2πσ²)]^{n/2} e^{−Σ(xᵢ−μ)²/(2σ²)}

so

   ln[f(x₁, ..., xₙ; μ, σ²)] = −(n/2) ln(2πσ²) − (1/(2σ²))Σ(xᵢ − μ)²

To find the maximizing values of μ and σ², we must take the partial derivatives of ln(f) with respect to μ and σ², equate them to zero, and solve the resulting two equations. Omitting the details, the resulting mle's are

   μ̂ = X̄,  σ̂² = Σ(Xᵢ − X̄)²/n

The mle of σ² is not the unbiased estimator, so two different principles of estimation (unbiasedness and maximum likelihood) yield two different estimators. ▪

Example 7.19  In Chapter 3, we discussed the use of the Poisson distribution for modeling the number of "events" that occur in a two-dimensional region.
Assume that when the region R being sampled has area a(R), the number X of events occurring in R has a Poisson distribution with parameter λa(R) (where λ is the expected number of events per unit area) and that nonoverlapping regions yield independent X's.

Suppose an ecologist selects n nonoverlapping regions R₁, ..., Rₙ and counts the number of plants of a certain species found in each region. The joint pmf (likelihood) is then

   p(x₁, ..., xₙ; λ) = {[λa(R₁)]^{x₁} e^{−λa(R₁)}/x₁!} ··· {[λa(Rₙ)]^{xₙ} e^{−λa(Rₙ)}/xₙ!}
                     = {[a(R₁)]^{x₁} ··· [a(Rₙ)]^{xₙ}} · λ^{Σxᵢ} · e^{−λΣa(Rᵢ)} / (x₁! ··· xₙ!)

The ln(likelihood) is

   ln[p(x₁, ..., xₙ; λ)] = Σxᵢ ln[a(Rᵢ)] + ln(λ)Σxᵢ − λΣa(Rᵢ) − Σ ln(xᵢ!)

Taking d/dλ of the ln(likelihood) and equating it to zero yields

   Σxᵢ/λ − Σa(Rᵢ) = 0,  so  λ = Σxᵢ / Σa(Rᵢ)

The mle is then λ̂ = ΣXᵢ/Σa(Rᵢ). This is intuitively reasonable because λ is the true density (plants per unit area), whereas λ̂ is the sample density, since Σa(Rᵢ) is just the total area sampled. Because E(Xᵢ) = λ·a(Rᵢ), the estimator is unbiased.

Sometimes an alternative sampling procedure is used. Instead of fixing regions to be sampled, the ecologist will select n points in the entire region of interest and let yᵢ = the distance from the ith point to the nearest plant. The cumulative distribution function (cdf) of Y = distance to the nearest plant is

   F_Y(y) = P(Y ≤ y) = 1 − P(Y > y) = 1 − P(no plants in a circle of radius y)
          = 1 − e^{−λπy²}(λπy²)⁰/0! = 1 − e^{−λπy²}

Taking the derivative of F_Y(y) with respect to y yields

   f_Y(y; λ) = 2λπy e^{−λπy²},  y ≥ 0; 0 otherwise

If we now form the likelihood f_Y(y₁; λ) ··· f_Y(yₙ; λ), differentiate ln(likelihood), and so on, the resulting mle is

   λ̂ = n / (πΣYᵢ²) = number of plants observed / total area sampled

which is also a sample density. It can be shown that in a sparse environment (small λ), the distance method is in a certain sense better, whereas in a dense environment, the first sampling method is better. ▪

Example 7.20  Let X₁, ..., Xₙ be a random sample from a Weibull pdf

   f(x; α, β) = (α/βᵅ) x^{α−1} e^{−(x/β)ᵅ},  x ≥ 0; 0 otherwise

Writing the likelihood and ln(likelihood), then setting both (∂/∂α)[ln(f)] = 0 and (∂/∂β)[ln(f)] = 0 yields the equations

   α = [ Σxᵢᵅ ln(xᵢ)/Σxᵢᵅ − Σln(xᵢ)/n ]^{−1},  β = [ Σxᵢᵅ/n ]^{1/α}

These two equations cannot be solved explicitly to give general formulas for the mle's α̂ and β̂. Instead, for each sample x₁, ..., xₙ, the equations must be solved using an iterative numerical procedure. Even moment estimators of α and β are somewhat complicated (see Exercise 22). The iterative mle computations can be done on a computer, and they are available in some statistical packages. MINITAB gives maximum likelihood estimates for both the Weibull and the gamma distributions (under "Quality Tools"). Stata has a general procedure that can be used for these and other distributions. For the data of Example 7.14 the maximum likelihood estimates for the Weibull distribution are α̂ = 3.799 and β̂ = 125.88. (The mle's for the gamma distribution are α̂ = 8.799 and β̂ = 12.893, a little different from the moment estimates in Example 7.14.) Figure 7.6 shows the Weibull log likelihood as a function of α and β. The surface near the top has a rounded shape, allowing the maximum to be found easily, but for some distributions the surface can be much more irregular, and the maximum may be hard to find.

[Figure 7.6  Weibull log likelihood for Example 7.20] ▪

Some Properties of MLEs

In Example 7.18, we obtained the mle of σ² when the underlying distribution is normal. The mle of σ = √σ², as well as many other mle's, can be easily derived using the following proposition.

PROPOSITION (The Invariance Principle)  Let θ̂₁, θ̂₂, ..., θ̂ₘ be the mle's of the parameters θ₁, θ₂, ..., θₘ. Then the mle of any function h(θ₁, θ₂, ..., θₘ) of these parameters is the function h(θ̂₁, θ̂₂, ..., θ̂ₘ) of the mle's.
Proof  For an intuitive idea of the proof, consider the special case m = 1, with θ₁ = θ, and assume that h(·) is a one-to-one function. On the graph of the likelihood as a function of the parameter θ, the highest point occurs where θ = θ̂. Now consider the graph of the likelihood as a function of h(θ). In the new graph the same heights occur, but the height that was previously plotted at θ = a is now plotted at h(a), and the highest point is now plotted at h(θ̂). Thus, the maximum remains the same, but it now occurs at h(θ̂). ▪

Example 7.18 (continued)  In the normal case, the mle's of μ and σ² are μ̂ = X̄ and σ̂² = Σ(Xᵢ − X̄)²/n. To obtain the mle of the function h(μ, σ²) = √σ² = σ, substitute the mle's into the function:

   σ̂ = √σ̂² = [ (1/n)Σ(Xᵢ − X̄)² ]^{1/2}

The mle of σ is not the sample standard deviation S, although they are close unless n is quite small. Similarly, the mle of the population coefficient of variation 100σ/μ is 100σ̂/μ̂. ▪

Example 7.20 (continued)  The mean value of an rv X that has a Weibull distribution is

   μ = β·Γ(1 + 1/α)

The mle of μ is therefore μ̂ = β̂Γ(1 + 1/α̂), where α̂ and β̂ are the mle's of α and β. In particular, X̄ is not the mle of μ, although it is an unbiased estimator. At least for large n, μ̂ is a better estimator than X̄. ▪

Large-Sample Behavior of the MLE

Although the principle of maximum likelihood estimation has considerable intuitive appeal, the following proposition provides additional rationale for the use of mle's. (See Section 7.4 for more details.)

PROPOSITION  Under very general conditions on the joint distribution of the sample, when the sample size is large, the maximum likelihood estimator of any parameter θ is close to θ (consistency), is approximately unbiased [E(θ̂) ≈ θ], and has variance that is nearly as small as can be achieved by any unbiased estimator. Stated another way, the mle θ̂ is approximately the MVUE of θ.

Because of this result and the fact that calculus-based techniques can usually be used to derive the mle's (although often numerical methods, such as Newton's method, are necessary), maximum likelihood estimation is the most widely used estimation technique among statisticians. Many of the estimators used in the remainder of the book are mle's. Obtaining an mle, however, does require that the underlying distribution be specified. Note that there is no similar result for method of moments estimators. In general, if there is a choice between maximum likelihood and moment estimators, the mle is preferable. For example, the maximum likelihood method applied to estimating gamma distribution parameters tends to give better estimates (closer to the parameter values) than does the method of moments, so the extra computation is worth the price.

Some Complications

Sometimes calculus cannot be used to obtain mle's.

Example 7.21  Suppose the waiting time for a bus is uniformly distributed on [0, θ] and the results x₁, ..., xₙ of a random sample from this distribution have been observed. Since f(x; θ) = 1/θ for 0 ≤ x ≤ θ and 0 otherwise,

   f(x₁, ..., xₙ; θ) = 1/θⁿ if 0 ≤ x₁ ≤ θ, ..., 0 ≤ xₙ ≤ θ; 0 otherwise

As long as θ ≥ max(xᵢ), the likelihood is 1/θⁿ, which is positive; as soon as θ < max(xᵢ), the likelihood drops to 0. Because 1/θⁿ is decreasing in θ, the likelihood is maximized by taking θ as small as possible subject to θ ≥ max(xᵢ), so the mle is θ̂ = max(Xᵢ). Calculus cannot be used here because the maximum occurs at a point where the likelihood is not differentiable. ▪

Suppose X₁, ..., Xₙ is a random sample from a pdf f(x; θ) that is symmetric about θ, but the investigator is unsure of the form of the f function. It is then desirable to use an estimator θ̂ that is robust, that is, one that performs well for a wide variety of underlying pdf's. One such estimator is a trimmed mean. In recent years, statisticians have proposed another type of estimator, called an M-estimator, based on a generalization of maximum likelihood estimation. Instead of maximizing the log likelihood Σln[f(xᵢ; θ)] for a specified f, one seeks to maximize Σρ(xᵢ; θ). The "objective function" ρ is selected to yield an estimator with good robustness properties. The book by David Hoaglin et al. (see the bibliography) contains a good exposition on this subject.

Exercises | Section 7.2 (21–31)

21. A random sample of n bike helmets manufactured
by a company is selected. Let X = the number among the n that are flawed, and let p = P(flawed). Assume that only X is observed, rather than the sequence of S's and F's.
   a. Derive the maximum likelihood estimator of p. If n = 20 and x = 3, what is the estimate?
   b. Is the estimator of part (a) unbiased?
   c. If n = 20 and x = 3, what is the mle of the probability (1 − p)⁵ that none of the next five helmets examined is flawed?

22. Let X have a Weibull distribution with parameters α and β, so

   E(X) = β·Γ(1 + 1/α)
   V(X) = β²{Γ(1 + 2/α) − [Γ(1 + 1/α)]²}

   a. Based on a random sample X₁, ..., Xₙ, write equations for the method of moments estimators of β and α. Show that, once the estimate of α has been obtained, the estimate of β can be found from a table of the gamma function and that the estimate of α is the solution to a complicated equation involving the gamma function.
   b. If n = 20, x̄ = 28.0, and Σxᵢ² = 16,500, compute the estimates. [Hint: [Γ(1.2)]²/Γ(1.4) = .95.]

23. Let X denote the proportion of allotted time that a randomly selected student spends working on a certain aptitude test. Suppose the pdf of X is

   f(x; θ) = (θ + 1)xᶿ,  0 ≤ x ≤ 1; 0 otherwise

   where −1 < θ. A random sample of ten students yields data x₁ = .92, x₂ = .79, x₃ = .90, x₄ = .65, x₅ = .86, x₆ = .47, x₇ = .73, x₈ = .97, x₉ = .94, x₁₀ = .77.
   a. Use the method of moments to obtain an estimator of θ, and then compute the estimate for this data.
   b. Obtain the maximum likelihood estimator of θ, and then compute the estimate for the given data.

24. Two different computer systems are monitored for a total of n weeks. Let Xᵢ denote the number of breakdowns of the first system during the ith week, and suppose the Xᵢ's are independent and drawn from a Poisson distribution with parameter λ₁. Similarly, let Yᵢ denote the number of breakdowns of the second system during the ith week, and assume independence with each Yᵢ Poisson with parameter λ₂. Derive the mle's of λ₁, λ₂, and λ₁ − λ₂. [Hint: Using independence, write the joint pmf (likelihood) of the Xᵢ's and Yᵢ's together.]

25. Refer to Exercise 21. Instead of selecting n = 20 helmets to examine, suppose we examine helmets in succession until we have found r = 3 flawed ones. If the 20th helmet is the third flawed one (so that the number of helmets examined that were not flawed is x = 17), what is the mle of p? Is this the same as the estimate in Exercise 21? Why or why not? Is it the same as the estimate computed from the unbiased estimator of Exercise 17?

26. Six Pepperidge Farm bagels were weighed, yielding the following data (grams):

   117.6  109.5  111.6  109.2  119.1  110.8

   a. Assuming that the six bagels are a random sample and the weight is normally distributed, estimate the true average weight and standard deviation of the weight using maximum likelihood.
   b. Again assuming a normal distribution, estimate the weight below which 95% of all bagels will have their weights. [Hint: What is the 95th percentile in terms of μ and σ? Now use the invariance principle.]
   c. Suppose we choose another bagel and weigh it. Let X = weight of the bagel. Use the given data to obtain the mle of P(X ≤ 113.4). [Hint: P(X ≤ 113.4) = Φ[(113.4 − μ)/σ].]

27. Suppose a measurement is made on some physical characteristic whose value is known, and let X denote the resulting measurement error. For an unbiased measuring instrument or technique, the mean value of X is 0. Assume that any particular measurement error is normally distributed with variance σ². Let X₁, ..., Xₙ be a random sample of measurement errors.
   a. Obtain the method of moments estimator of σ².
   b. Obtain the maximum likelihood estimator of σ².

28. Let X₁, ..., Xₙ be a random sample from a gamma distribution with parameters α and β.
   a. Derive the equations whose solution yields the maximum likelihood estimators of α and β. Do you think they can be solved explicitly?
   b. Show that the mle of μ = αβ is μ̂ = X̄.

29. Let X₁, X₂, ..., Xₙ represent a random sample from the Rayleigh distribution with density function given in Exercise 15. Determine
   a. The maximum likelihood estimator of θ, and then calculate the estimate for the vibratory stress data given in that exercise. Is this estimator the same as the unbiased estimator suggested in Exercise 15?
   b. The mle of the median of the vibratory stress distribution. [Hint: First express the median in terms of θ.]

30. Consider a random sample X₁, X₂, ..., Xₙ from the shifted exponential pdf

   f(x; λ, θ) = λe^{−λ(x−θ)},  x ≥ θ; 0 otherwise

   Taking θ = 0 gives the pdf of the exponential distribution considered previously (with positive density to the right of zero). An example of the shifted exponential distribution appeared in Example 4.5, in which the variable of interest was time headway in traffic flow and θ = .5 was the minimum possible time headway.
   a. Obtain the maximum likelihood estimators of θ and λ.
   b. If n = 10 time headway observations are made, resulting in the values 3.11, .64, 2.55, 2.20, 5.44, 3.42, 10.39, 8.93, 17.82, and 1.30, calculate the estimates of θ and λ.

31. At time t = 0, 20 identical components are put on test. The lifetime distribution of each is exponential with parameter λ. The experimenter then leaves the test facility unmonitored. On his return 24 h later, the experimenter immediately terminates the test after noticing that y = 15 of the 20 components are still in operation (so 5 have failed). Derive the mle of λ. [Hint: Let Y = the number that survive 24 h. Then Y ~ Bin(n, p). What is the mle of p? Now notice that p = P(Xᵢ ≥ 24), where Xᵢ is exponentially distributed. This relates λ to p, so the former can be estimated once the latter has been.]
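Several of the exercises above (22, 26, 28) involve Weibull and gamma fits of the kind discussed in Example 7.20, where the likelihood equations must be solved iteratively. Below is a minimal pure-Python sketch of one such iteration (my own, not the book's MINITAB/Stata route; function and variable names are mine): substituting the β equation into the α equation leaves a single equation in α, whose root can be found by bisection.

```python
import math

# Survival-time data used in Examples 7.14 and 7.20 (n = 20).
data = [152, 115, 109, 94, 88, 137, 152, 77, 160, 165,
        125, 40, 128, 123, 136, 101, 62, 153, 83, 69]

def alpha_equation(a, xs):
    # After substituting beta^alpha = sum(x^alpha)/n, the alpha equation becomes
    # sum(x^a ln x)/sum(x^a) - mean(ln x) - 1/a = 0.
    sa = sum(x ** a for x in xs)
    sal = sum(x ** a * math.log(x) for x in xs)
    mean_log = sum(math.log(x) for x in xs) / len(xs)
    return sal / sa - mean_log - 1 / a

# Bisection: the left-hand side is negative for small alpha, positive for large,
# and increases in between, so the root is bracketed by [0.1, 20].
lo, hi = 0.1, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    if alpha_equation(mid, data) > 0:
        hi = mid
    else:
        lo = mid
alpha_hat = (lo + hi) / 2
beta_hat = (sum(x ** alpha_hat for x in data) / len(data)) ** (1 / alpha_hat)
```

For this data the iteration should land near the values reported in the text, α̂ = 3.799 and β̂ = 125.88.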
The lifetime distribution of each is " Sufficiency An investigator who wishes to make an inference about some parameter 6 will base conclusions on the value of one or more statistics — the sample mean X, the sample variance S?, the sample range Y,, — Y;, and so on. Intuitively, some statistics will contain more information about @ than will others. Sufficiency, the topic of this section, will help us decide which functions of the data are most informative for making inferences. As a first point, we note that a statistic T = t(X), . . ., X,) will not be useful for drawing conclusions about @ unless the distribution of T depends on 0. Consider, for example, a random sample of size n = 2 from a normal distribution with mean jy and variance o”, and let T = X, — X2. Then T has a normal distribution with mean 0 and variance 207, which does not depend on yu. Thus this statistic cannot be used as a basis for drawing any conclusions about , although it certainly does carry information about the variance 6°. The relevance of this observation to sufficiency is as follows. Suppose an investigator is given the value of some statistic T, and then examines the condi- tional distribution of the sample X, X2, . . ., Xn given the value of the statistic — for example, the conditional distribution given that X = 28.7. If this conditional distribution does not depend upon 6, then it can be concluded that there is no additional information about @ in the data over and above what is provided by T. In this sense, for purposes of making inferences about 0, it is sufficient to know the value of T, which contains all the information in the data relevant to 0. An investigation of major defects on new vehicles of a certain type involved selecting a random sample of n = 3 vehicles and determining for each one the value of X = the number of major defects. This resulted in observations x; = 1, X2 = 0, and x3 = 3. 
You, as a consulting statistician, have been provided with a description of the experiment, from which it is reasonable to assume that X has a Poisson distribution, and told only that the total number of defects for the three sampled vehicles was four. Knowing that T = ΣXᵢ = 4, would there be any additional advantage in having the observed values of the individual Xᵢ's when making an inference about the Poisson parameter λ? Or rather is it the case that the statistic T contains all relevant information about λ in the data?

To address this issue, consider the conditional distribution of X₁, X₂, X₃ given that ΣXᵢ = 4. First of all, there are only a few possible (x₁, x₂, x₃) triples for which x₁ + x₂ + x₃ = 4. For example, (0, 4, 0) is a possibility, as are (2, 2, 0) and (1, 0, 3), but not (1, 2, 3) or (5, 0, 2). That is,

   P(X₁ = x₁, X₂ = x₂, X₃ = x₃ | ΣXᵢ = 4) = 0  unless x₁ + x₂ + x₃ = 4

Now consider the triple (2, 1, 1), which is consistent with ΣXᵢ = 4. If we let A denote the event that X₁ = 2, X₂ = 1, and X₃ = 1 and B denote the event that ΣXᵢ = 4, then the event A implies the event B (i.e., A is contained in B), so the intersection of the two events is just the smaller event A. Thus

   P(X₁ = 2, X₂ = 1, X₃ = 1 | ΣXᵢ = 4) = P(A|B) = P(A ∩ B)/P(B)
     = P(X₁ = 2, X₂ = 1, X₃ = 1) / P(ΣXᵢ = 4)

A moment generating function argument shows that ΣXᵢ has a Poisson distribution with parameter 3λ. Thus the desired conditional probability is

   [ (λ²e^{−λ}/2!)(λe^{−λ}/1!)(λe^{−λ}/1!) ] / [ (3λ)⁴e^{−3λ}/4! ] = 4!/(2!·1!·1!·3⁴) = 12/81 = 4/27

Similarly,

   P(X₁ = 1, X₂ = 0, X₃ = 3 | ΣXᵢ = 4) = 4!/(1!·0!·3!·3⁴) = 4/81

The complete conditional distribution is as follows:

   P(X₁ = x₁, X₂ = x₂, X₃ = x₃ | ΣXᵢ = 4) =
      6/81  for (x₁, x₂, x₃) = (2,2,0), (2,0,2), (0,2,2)
     12/81  for (x₁, x₂, x₃) = (2,1,1), (1,2,1), (1,1,2)
      1/81  for (x₁, x₂, x₃) = (4,0,0), (0,4,0), (0,0,4)
      4/81  for (x₁, x₂, x₃) = (3,1,0), (1,3,0), (3,0,1), (1,0,3), (0,1,3), (0,3,1)

This conditional distribution does not involve λ.
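The conditional distribution just tabulated can be verified mechanically: given ΣXᵢ = t, iid Poisson counts follow a multinomial distribution with t trials and equal cell probabilities 1/n, so λ cancels. A quick exact-arithmetic check (my own sketch, not from the text):

```python
from math import factorial
from fractions import Fraction
from itertools import product

def cond_prob(x1, x2, x3, t=4, n=3):
    """P(X1=x1, X2=x2, X3=x3 | sum = t) for iid Poisson: multinomial(t; 1/n each)."""
    if x1 + x2 + x3 != t:
        return Fraction(0)
    coef = factorial(t) // (factorial(x1) * factorial(x2) * factorial(x3))
    return Fraction(coef, n ** t)

assert cond_prob(2, 1, 1) == Fraction(12, 81)
assert cond_prob(1, 0, 3) == Fraction(4, 81)

# The probabilities over all triples summing to 4 add to 1.
total = sum(cond_prob(*xs) for xs in product(range(5), repeat=3))
```

Note that λ never appears in the computation, which is exactly the point of sufficiency here.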
Thus once the value of the statistic ΣXᵢ has been provided, there is no additional information about λ in the individual observations. To put this another way, think of obtaining the data from the experiment in two stages:

1. Observe the value of T = X₁ + X₂ + X₃ from a Poisson distribution with parameter 3λ.
2. Having observed T = 4, now obtain the individual xᵢ's from the conditional distribution

   P(X₁ = x₁, X₂ = x₂, X₃ = x₃ | ΣXᵢ = 4)

Since the conditional distribution in step 2 does not involve λ, there is no additional information about λ resulting from the second stage of the data generation process. This argument holds more generally for any sample size n and any value t other than 4 (e.g., the total number of defects among ten randomly selected vehicles might be ΣXᵢ = 16). Once the value of ΣXᵢ is known, there is no further information in the data about the Poisson parameter. ▪

DEFINITION  A statistic T = t(X₁, ..., Xₙ) is said to be sufficient for making inferences about a parameter θ if the joint distribution of X₁, X₂, ..., Xₙ given that T = t does not depend upon θ for every possible value t of the statistic T.

The notion of sufficiency formalizes the idea that a statistic T contains all relevant information about θ. Once the value of T for the given data is available, it is of no benefit to know anything else about the sample.

The Factorization Theorem

How can a sufficient statistic be identified? It may seem as though one would have to select a statistic, determine the conditional distribution of the Xᵢ's given any particular value of the statistic, and keep doing this until hitting paydirt by finding one that satisfies the defining condition. This would be terribly time-consuming, and when the Xᵢ's are continuous there are additional technical difficulties in obtaining the relevant conditional distribution. Fortunately, the next result provides a relatively straightforward way of proceeding.
THE NEYMAN FACTORIZATION THEOREM Let $f(x_1, x_2, \ldots, x_n; \theta)$ denote the joint pmf or pdf of $X_1, X_2, \ldots, X_n$. Then $T = t(X_1, \ldots, X_n)$ is a sufficient statistic for $\theta$ if and only if the joint pmf or pdf can be represented as a product of two factors in which the first factor involves $\theta$ and the data only through $t(x_1, \ldots, x_n)$, whereas the second factor involves $x_1, \ldots, x_n$ but does not depend on $\theta$:

$$f(x_1, x_2, \ldots, x_n; \theta) = g(t(x_1, \ldots, x_n); \theta) \cdot h(x_1, \ldots, x_n)$$

Before sketching a proof of this theorem, we consider several examples.

Let's generalize the previous example by considering a random sample $X_1, X_2, \ldots, X_n$ from a Poisson distribution with parameter $\lambda$, for example, the numbers of blemishes on $n$ independently selected DVDs or the numbers of errors in $n$ batches of invoices where each batch consists of 200 invoices. The joint pmf of these variables is

$$f(x_1, \ldots, x_n; \lambda) = \frac{e^{-\lambda}\lambda^{x_1}}{x_1!} \cdot \frac{e^{-\lambda}\lambda^{x_2}}{x_2!} \cdots \frac{e^{-\lambda}\lambda^{x_n}}{x_n!} = \frac{e^{-n\lambda}\lambda^{\sum x_i}}{x_1!\,x_2! \cdots x_n!} = \Big(e^{-n\lambda}\lambda^{\sum x_i}\Big)\Big(\frac{1}{x_1!\,x_2! \cdots x_n!}\Big)$$

The factor inside the first set of parentheses involves the parameter $\lambda$ and the data only through $\sum x_i$, whereas the factor inside the second set of parentheses involves the data but not $\lambda$. So we have the desired factorization, and the sufficient statistic is $T = \sum X_i$, as we previously ascertained directly from the definition of sufficiency. ■

A sufficient statistic is not unique; any one-to-one function of a sufficient statistic is itself sufficient. In the Poisson example, the sample mean $\bar{X} = (1/n)\sum X_i$ is a one-to-one function of $\sum X_i$ (knowing the value of the sum of the $n$ observations is equivalent to knowing their mean), so the sample mean is also a sufficient statistic.

Suppose that the waiting time for a bus on a weekday morning is uniformly distributed on the interval from 0 to $\theta$, and consider a random sample $X_1, \ldots, X_n$ of waiting times (i.e., times on $n$ independently selected mornings).
The joint pdf of these times is

$$f(x_1, \ldots, x_n; \theta) = \begin{cases} \dfrac{1}{\theta^n} & 0 \le x_1 \le \theta, \ldots, 0 \le x_n \le \theta \\[4pt] 0 & \text{otherwise} \end{cases}$$

To obtain the desired factorization, we introduce notation for an indicator function of an event $A$: $I(A) = 1$ if $(x_1, x_2, \ldots, x_n)$ lies in $A$ and $I(A) = 0$ otherwise. Now let $A = \{(x_1, x_2, \ldots, x_n): 0 \le x_1 \le \theta, \ldots, 0 \le x_n \le \theta\}$. The condition that every $x_i$ lie in $[0, \theta]$ is equivalent to requiring that $\min(x_i) \ge 0$ and $\max(x_i) \le \theta$, so the joint pdf can be written as

$$f(x_1, \ldots, x_n; \theta) = \frac{1}{\theta^n}\,I(\max(x_i) \le \theta) \cdot I(\min(x_i) \ge 0)$$

The first factor involves $\theta$ and the data only through $\max(x_i)$, and the second factor does not depend on $\theta$. By the factorization theorem, $T = \max(X_i)$, the largest waiting time in the sample, is a sufficient statistic for $\theta$. ■

Proof of the Factorization Theorem We give a proof for the discrete case, and to simplify the notation we denote $X_1, X_2, \ldots, X_n$ by $X$ and $x_1, x_2, \ldots, x_n$ by $x$. Suppose first that $T = t(X)$ is sufficient, so that $P(X = x \mid T = t)$ does not depend upon $\theta$. Focus on a value $t$ for which $t(x) = t$ (e.g., $x = (3, 0, 1)$ and $t(x) = \sum x_i$, so $t = 4$). The event that $X = x$ is then identical to the event that both $X = x$ and $T = t$, because the former equality implies the latter one. Thus

$$f(x; \theta) = P(X = x; \theta) = P(X = x, T = t; \theta) = P(X = x \mid T = t) \cdot P(T = t; \theta)$$

Since the first factor in this latter product does not involve $\theta$ and the second one involves the data only through $t$, we have our desired factorization. Now let's go the other way: assume a factorization $f(x; \theta) = g(t(x); \theta)\,h(x)$, and show that $T$ is sufficient, i.e., that the conditional probability that $X = x$ given that $T = t$ does not involve $\theta$:

$$P(X = x \mid T = t; \theta) = \frac{P(X = x, T = t; \theta)}{P(T = t; \theta)} = \frac{P(X = x; \theta)}{\displaystyle\sum_{u: t(u) = t} P(X = u; \theta)} = \frac{g(t; \theta)\,h(x)}{\displaystyle\sum_{u: t(u) = t} g(t; \theta)\,h(u)} = \frac{h(x)}{\displaystyle\sum_{u: t(u) = t} h(u)}$$

Sure enough, this latter ratio does not involve $\theta$. ■

Jointly Sufficient Statistics

When the joint pmf or pdf of the data involves a single unknown parameter $\theta$, there is frequently a single statistic (a single function of the data) that is sufficient. However, when there are several unknown parameters, such as the mean $\mu$ and standard deviation $\sigma$ of a normal distribution, or the shape parameter $\alpha$ and scale parameter $\beta$ of a gamma distribution, we must expand our notion of sufficiency.

DEFINITION Suppose the joint pmf or pdf of the data involves $k$ unknown parameters $\theta_1, \theta_2, \ldots, \theta_k$.
The $m$ statistics $T_1 = t_1(X_1, \ldots, X_n)$, $T_2 = t_2(X_1, \ldots, X_n)$, $\ldots$, $T_m = t_m(X_1, \ldots, X_n)$ are said to be jointly sufficient for the parameters if the conditional distribution of the $X_i$'s given that $T_1 = t_1$, $T_2 = t_2$, $\ldots$, $T_m = t_m$ does not depend on any of the unknown parameters, and this is true for all possible values $t_1, t_2, \ldots, t_m$ of the statistics.

Consider a random sample of size $n = 3$ from a continuous distribution, and let $T_1$, $T_2$, and $T_3$ be the three order statistics: $T_1 =$ the smallest of the three $X_i$'s, $T_2 =$ the second smallest $X_i$, and $T_3 =$ the largest $X_i$ (these order statistics were previously denoted by $Y_1$, $Y_2$, and $Y_3$). Then for any values $t_1$, $t_2$, and $t_3$ satisfying $t_1 < t_2 < t_3$,

$$P(X_1 = x_1, X_2 = x_2, X_3 = x_3 \mid T_1 = t_1, T_2 = t_2, T_3 = t_3) = \begin{cases} \frac{1}{6} & (x_1, x_2, x_3) \text{ is a permutation of } (t_1, t_2, t_3) \\ 0 & \text{otherwise} \end{cases}$$

For example, if the three ordered values are 21.4, 23.8, and 26.0, then the conditional probability distribution of the three $X_i$'s places probability $\frac{1}{6}$ on each of the 6 permutations of these three numbers ((23.8, 21.4, 26.0), and so on). This conditional distribution clearly does not involve any unknown parameters. Generalizing this argument to a sample of size $n$, we see that for a random sample from a continuous distribution, the order statistics are jointly sufficient for $\theta_1, \theta_2, \ldots, \theta_k$, regardless of whether $k = 1$ (e.g., the exponential distribution has a single parameter) or 2 (the normal distribution) or even $k > 2$. ■

The factorization theorem extends to the case of jointly sufficient statistics: $T_1, T_2, \ldots, T_m$ are jointly sufficient for $\theta_1, \theta_2, \ldots, \theta_k$ if and only if the joint pmf or pdf of the $X_i$'s can be represented as a product of two factors, where the first involves the $\theta_i$'s and the data only through $t_1, t_2, \ldots, t_m$ and the second does not involve the $\theta_i$'s.

Let $X_1, \ldots, X_n$ be a random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. The joint pdf is

$$f(x_1, \ldots, x_n; \mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-(x_i - \mu)^2/(2\sigma^2)} = \Big(\frac{1}{2\pi\sigma^2}\Big)^{n/2} e^{-\left(\sum x_i^2 - 2\mu\sum x_i + n\mu^2\right)/(2\sigma^2)}$$

This factorization shows that the two statistics $\sum X_i$ and $\sum X_i^2$ are jointly sufficient for the two parameters $\mu$ and $\sigma^2$. Since $\sum(X_i - \bar{X})^2 = \sum X_i^2 - n(\bar{X})^2$, there is a one-to-one correspondence between the two sufficient statistics and the statistics $\bar{X}$ and $\sum(X_i - \bar{X})^2$; that is, values of the two original sufficient statistics uniquely determine values of the latter two statistics, and vice versa. This implies that the latter two statistics are also jointly sufficient, which in turn implies that the sample mean and sample variance (or sample standard deviation) are jointly sufficient statistics. The sample mean and sample variance encapsulate all the information about $\mu$ and $\sigma^2$ that is contained in the sample data. ■

Minimal Sufficiency

When $X_1, \ldots, X_n$ constitute a random sample from a normal distribution, the $n$ order statistics $Y_1, \ldots, Y_n$ are jointly sufficient for $\mu$ and $\sigma^2$, and the sample mean and sample variance are also jointly sufficient. Both the order statistics and the pair $(\bar{X}, S^2)$ reduce the data without any information loss, but the sample mean and variance represent a greater reduction. In general, we would like the greatest possible reduction without information loss. A minimal (possibly jointly) sufficient statistic is a function of every other sufficient statistic. That is, given the value(s) of any other sufficient statistic(s), the value(s) of the minimal sufficient statistic(s) can be calculated. The minimal sufficient statistic is the sufficient statistic having the smallest dimensionality, and thus represents the greatest possible reduction of the data without any information loss. A general discussion of minimal sufficiency is beyond the scope of our text.
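The joint sufficiency of $\sum X_i$ and $\sum X_i^2$ in the normal example above means that two data sets sharing these two totals must have identical likelihood functions. The sketch below (our own illustration; the function and variable names are ours) builds two different samples of size 3 with the same sum and the same sum of squares and confirms that their log likelihoods agree for every $(\mu, \sigma)$ tried:

```python
import math

def normal_loglik(xs, mu, sigma):
    """Log of the joint normal pdf of the sample xs at parameters (mu, sigma)."""
    n = len(xs)
    return (-n / 2 * math.log(2 * math.pi * sigma**2)
            - sum((x - mu)**2 for x in xs) / (2 * sigma**2))

a = [0.0, 3.0, 3.0]
b = [2 + math.sqrt(3), 2 - math.sqrt(3), 2.0]   # same sum (6) and sum of squares (18)
assert abs(sum(a) - sum(b)) < 1e-12
assert abs(sum(x * x for x in a) - sum(x * x for x in b)) < 1e-12

for mu, sigma in [(0.0, 1.0), (2.0, 0.5), (-1.0, 3.0)]:
    assert abs(normal_loglik(a, mu, sigma) - normal_loglik(b, mu, sigma)) < 1e-10
print("likelihoods agree for every (mu, sigma)")
```

Any inference based on the likelihood therefore cannot distinguish between the two samples, which is exactly what sufficiency asserts.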
In the case of a normal distribution with values of both $\mu$ and $\sigma$ unknown, it can be shown that the sample mean and sample variance are jointly minimal sufficient (so the same is true of $\sum X_i$ and $\sum X_i^2$). It is intuitively reasonable that because there are two unknown parameters, there should be a pair of sufficient statistics. It is indeed often the case that the number of the (jointly) sufficient statistic(s) matches the number of unknown parameters. But this is not always true. Consider a random sample $X_1, \ldots, X_n$ from the Cauchy distribution with pdf $f(x; \theta) = 1/\{\pi[1 + (x - \theta)^2]\}$ for $-\infty < x < \infty$. Here there is just a single unknown parameter, the point of symmetry $\theta$, yet it can be shown that the minimal jointly sufficient statistics are the $n$ order statistics: no reduction of the data below dimension $n$ is possible without losing information.

Improved Estimators

Sufficiency can be used to improve upon an unbiased estimator. Let $U$ be an unbiased estimator of $\theta$, and let $T$ be a sufficient statistic. Consider the new estimator $U^* = E(U \mid T)$. Because $T$ is sufficient, this conditional expectation does not involve $\theta$, so $U^*$ is a legitimate statistic. It is also unbiased, since $E(U^*) = E[E(U \mid T)] = E(U) = \theta$. Furthermore, the variance decomposition $V(U) = V[E(U \mid T)] + E[V(U \mid T)] = V(U^*) + E[V(U \mid T)]$ shows that when $U$ is not already a function of $T$, the second term is positive, so $V(U) > V(U^*)$, as desired. ■

Suppose that the number of major defects on a randomly selected new vehicle of a certain type has a Poisson distribution with parameter $\lambda$. Consider estimating $e^{-\lambda}$, the probability that a vehicle has no such defects, based on a random sample of $n$ vehicles. Let's start with the estimator $U = I(X_1 = 0)$, the indicator function of the event that the first vehicle in the sample has no defects. That is,

$$U = \begin{cases} 1 & \text{if } X_1 = 0 \\ 0 & \text{if } X_1 > 0 \end{cases}$$

Then

$$E(U) = 1 \cdot P(X_1 = 0) + 0 \cdot P(X_1 > 0) = P(X_1 = 0) = e^{-\lambda}\lambda^0/0! = e^{-\lambda}$$

Our estimator is therefore unbiased for estimating the probability of no defects. The sufficient statistic here is $T = \sum X_i$, so of course the estimator $U$ is not a function of $T$. The improved estimator is $U^* = E(U \mid \sum X_i) = P(X_1 = 0 \mid \sum X_i)$. Let's consider $P(X_1 = 0 \mid \sum X_i = t)$ where $t$ is some nonnegative integer. The event that $X_1 = 0$ and $\sum X_i = t$ is identical to the event that the first vehicle has no defects and the total number of defects on the last $n - 1$ vehicles is $t$. Thus

$$P\Big(X_1 = 0 \,\Big|\, \sum_{i=1}^{n} X_i = t\Big) = \frac{P\big(X_1 = 0, \sum_{i=1}^{n} X_i = t\big)}{P\big(\sum_{i=1}^{n} X_i = t\big)} = \frac{P\big(X_1 = 0, \sum_{i=2}^{n} X_i = t\big)}{P\big(\sum_{i=1}^{n} X_i = t\big)}$$

A moment generating function argument shows that the sum of all $n$ $X_i$'s has a Poisson distribution with parameter $n\lambda$ and the sum of the last $n - 1$ $X_i$'s has a Poisson distribution with parameter $(n - 1)\lambda$.
Furthermore, $X_1$ is independent of the other $n - 1$ $X_i$'s, so it is independent of their sum, from which

$$P\Big(X_1 = 0 \,\Big|\, \sum_{i=1}^{n} X_i = t\Big) = \frac{e^{-\lambda} \cdot \dfrac{e^{-(n-1)\lambda}[(n-1)\lambda]^t}{t!}}{\dfrac{e^{-n\lambda}(n\lambda)^t}{t!}} = \frac{(n-1)^t}{n^t} = \Big(1 - \frac{1}{n}\Big)^t$$

The improved unbiased estimator is then $U^* = (1 - 1/n)^T$. If, for example, there are a total of 15 defects among 10 randomly selected vehicles, then the estimate is $(1 - \frac{1}{10})^{15} = .206$. For this sample, $\hat{\lambda} = \bar{x} = 1.5$, so the maximum likelihood estimate of $e^{-\lambda}$ is $e^{-1.5} = .223$. Here, as in some other situations, the principles of unbiasedness and maximum likelihood are in conflict. However, if $n$ is large, the improved estimate is $(1 - 1/n)^t = [(1 - 1/n)^n]^{\bar{x}} \approx e^{-\bar{x}}$, which is the mle. That is, the unbiased and maximum likelihood estimators are "asymptotically equivalent." ■

We have emphasized that in general there will not be a unique sufficient statistic. Suppose there are two different sufficient statistics $T_1$ and $T_2$ such that the first one is not a one-to-one function of the second (e.g., we are not considering $T_1 = \sum X_i$ and $T_2 = \bar{X}$). Then it would be distressing if we started with an unbiased estimator $U$ and found that $E(U \mid T_1) \ne E(U \mid T_2)$, so that our improved estimator depended on which sufficient statistic we used. Fortunately there are general conditions under which, starting with a minimal sufficient statistic $T$, the improved estimator is the MVUE (minimum variance unbiased estimator). That is, the new estimator is unbiased and has smaller variance than any other unbiased estimator. Please consult one of the chapter references for more detail.

Further Comments

Maximum likelihood is by far the most popular method for obtaining point estimates, so it would be disappointing if maximum likelihood estimators did not make full use of sample information. Fortunately, the mle's do not suffer from this defect. If $T_1, \ldots, T_m$ are jointly sufficient statistics for parameters $\theta_1$, . .
., $\theta_k$, then the joint pmf or pdf factors as follows:

$$f(x_1, \ldots, x_n; \theta_1, \ldots, \theta_k) = g(t_1, \ldots, t_m; \theta_1, \ldots, \theta_k) \cdot h(x_1, \ldots, x_n)$$

Maximizing the likelihood over the $\theta_i$'s is then equivalent to maximizing the factor $g$, because $h$ does not involve the parameters. Since $g$ depends on the data only through the values $t_1, \ldots, t_m$ of the jointly sufficient statistics, the maximum likelihood estimators are functions of the sufficient statistics.

Exercises Section 7.3 (37–41)

37. For $\theta > 0$ consider a random sample from a uniform distribution on the interval from $\theta$ to $2\theta$ (pdf $1/\theta$ for $\theta < x < 2\theta$), and use the factorization theorem to determine a sufficient statistic for $\theta$.

38. Suppose that survival time $X$ has a lognormal distribution with parameters $\mu$ and $\sigma$ (which are the mean and standard deviation of $\ln(X)$, not of $X$ itself). Are $\sum X_i$ and $\sum X_i^2$ jointly sufficient for the two parameters? If not, what is a pair of jointly sufficient statistics?

39. The probability that any particular component of a certain type works in a satisfactory manner is $p$. If $n$ of these components are independently

b. Consider the estimator $\hat{\theta} = I(X_1 \le c)$. Obtain an improved unbiased estimator based on the sufficient statistic (it is actually the minimum variance unbiased estimator). [Hint: You may use the following facts: (1) The joint distribution of $X_1$ and $\bar{X}$ is bivariate normal with means $\mu$ and $\mu$, variances $\sigma^2$ and $\sigma^2/n$, respectively, and correlation $\rho$ (which you should determine). (2) If $Y_1$ and $Y_2$ have a bivariate normal distribution, then the conditional distribution of $Y_1$ given that $Y_2 = y_2$ is normal with mean $\mu_1 + (\rho\sigma_1/\sigma_2)(y_2 - \mu_2)$ and variance $\sigma_1^2(1 - \rho^2)$.]

7.4 Information and Efficiency

In this section we introduce the idea of Fisher information and two of its applications. The first application is to find the minimum possible variance for an unbiased estimator. The second application is to show that the maximum likelihood estimator is asymptotically unbiased and normal (that is, for large $n$ it has expected value approximately $\theta$ and it has approximately a normal distribution) with the minimum possible variance. Here the notation $f(x; \theta)$ will be used for a probability mass function or a probability density function with unknown parameter $\theta$. The Fisher information is intended to measure the precision in a single observation.
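The variance reduction delivered by conditioning on a sufficient statistic is easy to see numerically. The following simulation sketch (our own, using NumPy; the variable names are ours) revisits the defect example of Section 7.3, comparing the crude unbiased estimator $U = I(X_1 = 0)$ of $e^{-\lambda}$ with its improvement $U^* = (1 - 1/n)^{\sum X_i}$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 1.5, 10, 100_000
target = np.exp(-lam)                      # quantity being estimated, e^{-1.5} ≈ .223

x = rng.poisson(lam, size=(reps, n))
u      = (x[:, 0] == 0).astype(float)      # U  = I(X1 = 0), crude unbiased estimator
u_star = (1 - 1/n) ** x.sum(axis=1)        # U* = E(U | sum Xi) = (1 - 1/n)^T

print(u.mean(), u_star.mean())             # both close to .223 (both unbiased)
print(u.var(), u_star.var())               # variance of U* is far smaller
```

Both sample means are close to $e^{-1.5} \approx .223$, while the sampling variance of $U^*$ is an order of magnitude smaller than that of $U$.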
Consider the random variable $U$ obtained by taking the partial derivative of $\ln[f(x; \theta)]$ with respect to $\theta$ and then replacing $x$ by $X$: $U = \partial \ln[f(X; \theta)]/\partial\theta$. For example, if the pdf is $\theta x^{\theta - 1}$ for $0 < x < 1$ (where $\theta > 0$), then

$$\frac{\partial}{\partial\theta}\ln(\theta x^{\theta - 1}) = \frac{\partial}{\partial\theta}[\ln(\theta) + (\theta - 1)\ln(x)] = \frac{1}{\theta} + \ln(x)$$

so $U = 1/\theta + \ln(X)$.

DEFINITION The Fisher information $I(\theta)$ in a single observation from a pmf or pdf $f(x; \theta)$ is the variance of the random variable $U = \partial \ln[f(X; \theta)]/\partial\theta$:

$$I(\theta) = V\Big[\frac{\partial}{\partial\theta}\ln f(X; \theta)\Big] \tag{7.7}$$

It may seem strange to differentiate the logarithm of the pmf or pdf, but this is exactly what is often done in maximum likelihood estimation. In what follows we will assume that $f(x; \theta)$ is a pmf, but everything that we do applies also in the continuous case if appropriate assumptions are made. In particular, it is important to assume that the set of possible $x$'s does not depend on the value of the parameter.

When $f(x; \theta)$ is a pmf, we know that $1 = \sum_x f(x; \theta)$. Therefore, differentiating both sides with respect to $\theta$ and using the fact that $[\ln(f)]' = f'/f$, we find that the mean of $U$ is 0:

$$0 = \sum_x \frac{\partial}{\partial\theta} f(x; \theta) = \sum_x \Big[\frac{\partial}{\partial\theta}\ln f(x; \theta)\Big] f(x; \theta) = E\Big[\frac{\partial}{\partial\theta}\ln f(X; \theta)\Big] = E(U) \tag{7.8}$$

This involves interchanging the order of differentiation and summation, which requires certain technical assumptions if the set of possible $x$ values is infinite. We will omit those assumptions here and elsewhere in this section, but we emphasize that switching differentiation and summation (or integration) is not allowed if the set of possible values depends on $\theta$. For example, if the summation were from $-\theta$ to $\theta$, additional terms accounting for the limits of summation would be needed.
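For the pdf $f(x; \theta) = \theta x^{\theta - 1}$ used in the example above, the claim that $E(U) = 0$ can be checked by simulation. In this sketch (our own illustration; since the cdf is $F(x; \theta) = x^\theta$ on $(0, 1)$, inverse-cdf sampling gives $X = V^{1/\theta}$ with $V$ uniform), the sample mean of $U = 1/\theta + \ln(X)$ is near 0, and its sample variance, which by the definition is the Fisher information, is numerically close to $1/\theta^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 3.0

# Inverse-cdf sampling: F(x; theta) = x^theta on (0, 1), so X = V^(1/theta).
x = rng.uniform(size=500_000) ** (1 / theta)

u = 1 / theta + np.log(x)     # U = d/dtheta ln f(X; theta) = 1/theta + ln X
print(u.mean())               # close to 0, in accord with E(U) = 0
print(u.var())                # sample version of the Fisher information I(theta)
```

The simulated variance agrees with the value $I(\theta) = 1/\theta^2$ derived analytically later in this section.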
There is an alternative expression for $I(\theta)$ that is sometimes easier to compute than the variance in the definition:

$$I(\theta) = -E\Big[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\Big] \tag{7.9}$$

This is a consequence of taking another derivative in (7.8):

$$0 = \sum_x \Big[\frac{\partial^2}{\partial\theta^2}\ln f(x; \theta)\Big] f(x; \theta) + \sum_x \Big[\frac{\partial}{\partial\theta}\ln f(x; \theta)\Big]^2 f(x; \theta) = E\Big[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\Big] + E\Big\{\Big[\frac{\partial}{\partial\theta}\ln f(X; \theta)\Big]^2\Big\} \tag{7.10}$$

To complete the derivation of (7.9), recall that $U$ has mean 0, so its variance is

$$I(\theta) = V\Big[\frac{\partial}{\partial\theta}\ln f(X; \theta)\Big] = E\Big\{\Big[\frac{\partial}{\partial\theta}\ln f(X; \theta)\Big]^2\Big\} = -E\Big[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\Big]$$

where Equation (7.10) is used in the last step.

Let $X$ be a Bernoulli rv, so $f(x; p) = p^x(1 - p)^{1 - x}$, $x = 0, 1$. Then

$$\frac{\partial}{\partial p}\ln f(X; p) = \frac{\partial}{\partial p}[X\ln(p) + (1 - X)\ln(1 - p)] = \frac{X}{p} - \frac{1 - X}{1 - p} = \frac{X - p}{p(1 - p)} \tag{7.11}$$

This has mean 0, in accord with Equation (7.8), because $E(X) = p$. Computing the variance of the partial derivative, we get the Fisher information:

$$I(p) = V\Big[\frac{\partial}{\partial p}\ln f(X; p)\Big] = V\Big[\frac{X - p}{p(1 - p)}\Big] = \frac{V(X)}{[p(1 - p)]^2} = \frac{p(1 - p)}{[p(1 - p)]^2} = \frac{1}{p(1 - p)} \tag{7.12}$$

The alternative method uses Equation (7.9). Differentiating Equation (7.11) with respect to $p$ gives

$$\frac{\partial^2}{\partial p^2}\ln f(X; p) = -\frac{X}{p^2} - \frac{1 - X}{(1 - p)^2} \tag{7.13}$$

Taking the negative of the expected value in Equation (7.13) gives the information in an observation:

$$I(p) = -E\Big[\frac{\partial^2}{\partial p^2}\ln f(X; p)\Big] = \frac{p}{p^2} + \frac{1 - p}{(1 - p)^2} = \frac{1}{p} + \frac{1}{1 - p} = \frac{1}{p(1 - p)} \tag{7.14}$$

Both methods yield the answer $I(p) = 1/[p(1 - p)]$, which says that the information is the reciprocal of $V(X)$. It is reasonable that the information is greatest when the variance is smallest. ■

Information in a Random Sample

Now assume a random sample $X_1, X_2, \ldots, X_n$ from a distribution with pmf or pdf $f(x; \theta)$. Let $f(x_1, x_2, \ldots, x_n; \theta) = f(x_1; \theta) \cdot f(x_2; \theta) \cdots f(x_n; \theta)$ be the likelihood function. The Fisher information $I_n(\theta)$ for the random sample is the variance of the score function

$$\frac{\partial}{\partial\theta}\ln f(X_1, X_2, \ldots, X_n; \theta) = \frac{\partial}{\partial\theta}\ln[f(X_1; \theta) \cdot f(X_2; \theta) \cdots f(X_n; \theta)]$$

The log of a product is the sum of the logs, so the score function is a sum:

$$\frac{\partial}{\partial\theta}\ln f(X_1, X_2, \ldots, X_n; \theta) = \frac{\partial}{\partial\theta}\ln f(X_1; \theta) + \frac{\partial}{\partial\theta}\ln f(X_2; \theta) + \cdots + \frac{\partial}{\partial\theta}\ln f(X_n; \theta) \tag{7.15}$$

This is a sum of terms for which the mean is zero, by Equation (7.8), and therefore

$$E\Big[\frac{\partial}{\partial\theta}\ln f(X_1, X_2, \ldots, X_n; \theta)\Big] = 0 \tag{7.16}$$

The right-hand side of Equation (7.15) is a sum of independent identically distributed random variables, and each has variance $I(\theta)$. Taking the variance of both sides of Equation (7.15) gives the information $I_n(\theta)$ in the random sample:

$$I_n(\theta) = V\Big[\frac{\partial}{\partial\theta}\ln f(X_1, X_2, \ldots, X_n; \theta)\Big] = nV\Big[\frac{\partial}{\partial\theta}\ln f(X_1; \theta)\Big] = nI(\theta) \tag{7.17}$$

Therefore, the Fisher information in a random sample is just $n$ times the information in a single observation. This should make sense intuitively, because it says that twice as many observations yield twice as much information.

Continuing with Example 7.31, let $X_1, X_2, \ldots, X_n$ be a random sample from the Bernoulli distribution with $f(x; p) = p^x(1 - p)^{1 - x}$, $x = 0, 1$. Suppose the purpose is to estimate the proportion $p$ of drivers who are wearing seat belts. We saw that the information in a single observation is $I(p) = 1/[p(1 - p)]$, and therefore the Fisher information in the random sample is $I_n(p) = nI(p) = n/[p(1 - p)]$. ■

The Cramér–Rao Inequality

We will use the concept of Fisher information to show that if $T = t(X_1, X_2, \ldots, X_n)$ is an unbiased estimator of $\theta$, then its minimum possible variance is the reciprocal of $I_n(\theta)$. Harald Cramér in Sweden and C. R. Rao in India independently derived this inequality during World War II, but R. A. Fisher had some notion of it 20 years previously.

THEOREM (THE CRAMÉR–RAO INEQUALITY) Assume a random sample $X_1, X_2, \ldots, X_n$ from the distribution with pmf or pdf $f(x; \theta)$ such that the set of possible values does not depend on $\theta$.
If the statistic $T = t(X_1, X_2, \ldots, X_n)$ is an unbiased estimator for the parameter $\theta$, then

$$V(T) \ge \frac{1}{V\big\{\frac{\partial}{\partial\theta}[\ln f(X_1, \ldots, X_n; \theta)]\big\}} = \frac{1}{nI(\theta)}$$

Proof The basic idea here is to consider the correlation $\rho$ between $T$ and the score function, and the desired inequality will result from $-1 \le \rho \le 1$. If $T = t(X_1, X_2, \ldots, X_n)$ is an unbiased estimator of $\theta$, then

$$\theta = E(T) = \sum_{x_1} \cdots \sum_{x_n} t(x_1, \ldots, x_n)\,f(x_1, \ldots, x_n; \theta)$$

Differentiating this with respect to $\theta$,

$$1 = \sum_{x_1} \cdots \sum_{x_n} t(x_1, \ldots, x_n)\,\frac{\partial}{\partial\theta} f(x_1, \ldots, x_n; \theta)$$

Multiplying and dividing the last term by the likelihood $f(x_1, \ldots, x_n; \theta)$ gives

$$1 = \sum_{x_1} \cdots \sum_{x_n} t(x_1, \ldots, x_n)\,\frac{\frac{\partial}{\partial\theta} f(x_1, \ldots, x_n; \theta)}{f(x_1, \ldots, x_n; \theta)}\,f(x_1, \ldots, x_n; \theta)$$

which is equivalent to

$$1 = \sum_{x_1} \cdots \sum_{x_n} t(x_1, \ldots, x_n)\,\Big\{\frac{\partial}{\partial\theta}[\ln f(x_1, \ldots, x_n; \theta)]\Big\}\,f(x_1, \ldots, x_n; \theta) = E\Big\{t(X_1, \ldots, X_n) \cdot \frac{\partial}{\partial\theta}[\ln f(X_1, \ldots, X_n; \theta)]\Big\}$$

Therefore, because of Equation (7.16), the covariance of $T$ with the score function is

$$1 = \mathrm{Cov}\Big\{T,\,\frac{\partial}{\partial\theta}[\ln f(X_1, \ldots, X_n; \theta)]\Big\} \tag{7.18}$$

Recall from Section 5.2 that the correlation between two rv's $X$ and $Y$ is $\rho_{X,Y} = \mathrm{Cov}(X, Y)/(\sigma_X \sigma_Y)$, and that $-1 \le \rho_{X,Y} \le 1$. Therefore, $\mathrm{Cov}(X, Y)^2 = \rho_{X,Y}^2\,\sigma_X^2\,\sigma_Y^2 \le \sigma_X^2\,\sigma_Y^2$. Apply this to Equation (7.18):

$$1 = \Big(\mathrm{Cov}\Big\{T,\,\frac{\partial}{\partial\theta}[\ln f(X_1, \ldots, X_n; \theta)]\Big\}\Big)^2 \le V(T) \cdot V\Big\{\frac{\partial}{\partial\theta}[\ln f(X_1, \ldots, X_n; \theta)]\Big\} \tag{7.19}$$

Dividing both sides by the variance of the score function, which by (7.17) is $nI(\theta)$, gives $V(T) \ge 1/[nI(\theta)]$. ■

The ratio of the Cramér–Rao lower bound to the variance of an unbiased estimator is called the efficiency of the estimator, and an unbiased estimator whose variance achieves the bound is called efficient. For example, for the Bernoulli random sample consider the unbiased estimator $T = \bar{X} = \sum X_i/n$. Then $V(T) = V(\sum X_i)/n^2 = np(1 - p)/n^2 = p(1 - p)/n$, which is exactly the lower bound $1/I_n(p)$. Because $T$ is unbiased and $V(T)$ is equal to the lower bound, $T$ has efficiency 1 and therefore it is an efficient estimator. ■

Large Sample Properties of the MLE

As discussed in Section 7.2, the maximum likelihood estimator $\hat{\theta}$ has some nice properties. First of all, it is consistent, which means that it converges in probability to the parameter $\theta$ as the sample size increases. A verification of this is beyond the level of this book, but we can use it as a basis for showing that the mle is asymptotically normal with mean $\theta$ (asymptotic unbiasedness) and variance equal to the Cramér–Rao lower bound.

THEOREM Given a random sample $X_1, X_2, \ldots, X_n$ from a distribution with pmf or pdf $f(x; \theta)$, assume that the set of possible $x$ values does not depend on $\theta$.
Then for large $n$ the maximum likelihood estimator has approximately a normal distribution with mean $\theta$ and variance $1/[nI(\theta)]$. More precisely, the limiting distribution of $\sqrt{n}(\hat{\theta} - \theta)$ is normal with mean 0 and variance $1/I(\theta)$.

Proof Consider the score function

$$S(\theta) = \frac{\partial}{\partial\theta}\ln f(X_1, X_2, \ldots, X_n; \theta)$$

Its derivative $S'(\theta)$ at the true $\theta$ is approximately equal to the difference quotient

$$S'(\theta) \approx \frac{S(\hat{\theta}) - S(\theta)}{\hat{\theta} - \theta} \tag{7.20}$$

and the error approaches zero asymptotically because $\hat{\theta}$ approaches $\theta$ (consistency). Equation (7.20) connects the mle $\hat{\theta}$ to the score function, so the asymptotic behavior of the score function can be applied to $\hat{\theta}$. Because $\hat{\theta}$ is the maximum likelihood estimate, $S(\hat{\theta}) = 0$, so in the limit

$$\hat{\theta} - \theta = -\frac{S(\theta)}{S'(\theta)}$$

Multiplying both sides by $\sqrt{n}$, then dividing numerator and denominator by $n\sqrt{I(\theta)}$,

$$\sqrt{n}(\hat{\theta} - \theta) = \frac{-\sqrt{n}\,S(\theta)}{S'(\theta)} = \frac{S(\theta)/\sqrt{nI(\theta)}}{-(1/n)S'(\theta)/\sqrt{I(\theta)}}$$

Now rewrite $S(\theta)$ and $S'(\theta)$ as sums using Equation (7.15):

$$\sqrt{n}(\hat{\theta} - \theta) = \frac{\dfrac{1}{n}\Big[\dfrac{\partial}{\partial\theta}\ln f(X_1; \theta) + \cdots + \dfrac{\partial}{\partial\theta}\ln f(X_n; \theta)\Big] \Big/ \sqrt{I(\theta)/n}}{\Big\{-\dfrac{1}{n}\Big[\dfrac{\partial^2}{\partial\theta^2}\ln f(X_1; \theta) + \cdots + \dfrac{\partial^2}{\partial\theta^2}\ln f(X_n; \theta)\Big]\Big\} \Big/ \sqrt{I(\theta)}} \tag{7.21}$$

The denominator braces contain a sum of independent identically distributed rv's each with mean

$$I(\theta) = -E\Big[\frac{\partial^2}{\partial\theta^2}\ln f(X; \theta)\Big]$$

by Equation (7.9). Therefore, by the law of large numbers, the denominator average $-\frac{1}{n}\sum\{\cdot\}$ converges to $I(\theta)$. Thus the denominator converges to $I(\theta)/\sqrt{I(\theta)} = \sqrt{I(\theta)}$. The numerator average $\frac{1}{n}\sum\{\cdot\}$ is the mean of independent identically distributed rv's with mean 0 [by Equation (7.8)] and variance $I(\theta)$, so the numerator ratio is an average minus its expected value, divided by its standard deviation $\sqrt{I(\theta)/n}$. Therefore, by the Central Limit Theorem it is approximately normal with mean 0 and standard deviation 1. Thus the ratio in Equation (7.21) has a numerator that is approximately $N(0, 1)$ and a denominator that is approximately $\sqrt{I(\theta)}$, so the ratio is approximately normal with mean 0 and variance $1/I(\theta)$.
That is, $\sqrt{n}(\hat{\theta} - \theta)$ is approximately $N(0, 1/I(\theta))$, and it follows that $\hat{\theta}$ is approximately normal with mean $\theta$ and variance $1/[nI(\theta)]$, the Cramér–Rao lower bound. ■

Continuing with the previous example, let $X_1, X_2, \ldots, X_n$ be a random sample from the Bernoulli distribution. The objective is to estimate the proportion $p$ of drivers who are wearing seat belts. The pmf is $f(x; p) = p^x(1 - p)^{1 - x}$, $x = 0, 1$, so the likelihood is

$$f(x_1, x_2, \ldots, x_n; p) = p^{\sum x_i}(1 - p)^{n - \sum x_i}$$

Then the log likelihood is

$$\ln[f(x_1, x_2, \ldots, x_n; p)] = \Big(\sum x_i\Big)\ln(p) + \Big(n - \sum x_i\Big)\ln(1 - p)$$

and therefore its derivative, the score function, is

$$\frac{\partial}{\partial p}\ln[f(x_1, x_2, \ldots, x_n; p)] = \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1 - p} = \frac{\sum x_i - np}{p(1 - p)}$$

Conclude that the maximum likelihood estimator is $\hat{p} = \bar{X} = \sum X_i/n$. Recall from Example 7.33 that this is unbiased and efficient, with the minimum variance of the Cramér–Rao inequality. It is also asymptotically normal by the Central Limit Theorem. These properties are in accord with the asymptotic distribution given by the theorem: $\hat{p}$ is approximately $N(p, 1/[nI(p)])$. ■

Let $X_1, X_2, \ldots, X_n$ be a random sample from the distribution with pdf $f(x; \theta) = \theta x^{\theta - 1}$ for $0 < x < 1$, where $\theta > 0$. Here $X_i$, $i = 1, 2, \ldots, n$, represents the fraction of a perfect score assigned to the $i$th applicant by a recruiting team. The Fisher information is the variance of

$$U = \frac{\partial}{\partial\theta}\ln[f(X; \theta)] = \frac{\partial}{\partial\theta}[\ln(\theta) + (\theta - 1)\ln(X)] = \frac{1}{\theta} + \ln(X)$$

However, it is easier to use the alternative method of Equation (7.9):

$$I(\theta) = -E\Big\{\frac{\partial^2}{\partial\theta^2}\ln[f(X; \theta)]\Big\} = -E\Big\{\frac{\partial}{\partial\theta}\Big[\frac{1}{\theta} + \ln(X)\Big]\Big\} = -E\Big(-\frac{1}{\theta^2}\Big) = \frac{1}{\theta^2}$$

To obtain the maximum likelihood estimator, we first find the log likelihood:

$$\ln[f(x_1, x_2, \ldots, x_n; \theta)] = \ln\Big(\theta^n \prod x_i^{\theta - 1}\Big) = n\ln(\theta) + (\theta - 1)\sum\ln(x_i)$$

Its derivative, the score function, is

$$\frac{\partial}{\partial\theta}\ln[f(x_1, \ldots, x_n; \theta)] = \frac{n}{\theta} + \sum\ln(x_i)$$

Setting this to 0, we find that the maximum likelihood estimate is

$$\hat{\theta} = -\frac{1}{\sum\ln(x_i)/n} \tag{7.22}$$
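The theorem's conclusions for this example can be checked by simulation. In the sketch below (our own, using NumPy), many samples of size $n = 200$ are drawn via the inverse cdf $x = v^{1/\theta}$, and the resulting mle's $\hat{\theta} = -n/\sum\ln(x_i)$ average close to $\theta$ with variance close to the Cramér–Rao bound $\theta^2/n$:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.0, 200, 50_000

v = rng.uniform(size=(reps, n))
x = v ** (1 / theta)                      # samples from f(x; theta) = theta * x^(theta-1)
theta_hat = -n / np.log(x).sum(axis=1)    # mle from Eq. (7.22)

print(theta_hat.mean())                   # close to theta = 2 (asymptotically unbiased)
print(theta_hat.var())                    # close to theta^2 / n = 0.02
```

A histogram of the simulated `theta_hat` values would also look approximately normal, as the theorem asserts.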
The expected value of $\ln(X)$ is $-1/\theta$, because $E(U) = 0$, so the denominator of (7.22) converges in probability to $-1/\theta$ by the law of large numbers. Therefore $\hat{\theta}$ converges in probability to $\theta$, which means that $\hat{\theta}$ is consistent. We knew this because the mle is always consistent, but it is also nice to show it directly. By the theorem, the asymptotic distribution of $\hat{\theta}$ is normal with mean $\theta$ and variance $1/[nI(\theta)] = \theta^2/n$. ■

Exercises Section 7.4 (42–48)

42. Assume that the number of defects in a car has a Poisson distribution with parameter $\lambda$. To estimate $\lambda$ we obtain the random sample $X_1, X_2, \ldots, X_n$.
a. Find the Fisher information in a single observation using two methods.
b. Find the Cramér–Rao lower bound for the variance of an unbiased estimator of $\lambda$.
c. Use the score function to find the mle of $\lambda$ and show that the mle is an efficient estimator.
d. Is the asymptotic distribution of the mle in accord with the second theorem? Explain.

43. In Example 7.23, $f(x; \theta) = 1/\theta$ for $0 < x < \theta$ and 0 otherwise. Given a random sample, the maximum likelihood estimate $\hat{\theta}$ is the largest observation.
a. Letting $\tilde{\theta} = [(n + 1)/n]\hat{\theta}$, show that $\tilde{\theta}$ is unbiased and find its variance.
b. Find the Cramér–Rao lower bound for the variance of an unbiased estimator of $\theta$.
c. Compare the answers in parts (a) and (b), and explain why it is apparent that they disagree. What assumption is violated, causing the theorem not to apply here?

44. Survival times have the exponential distribution with pdf $f(x; \lambda) = \lambda e^{-\lambda x}$, $x > 0$, and $f(x; \lambda) = 0$ otherwise, where $\lambda > 0$. However, we wish to estimate the mean $\mu = 1/\lambda$ based on the random sample $X_1, X_2, \ldots, X_n$, so let's re-express the pdf in the form $(1/\mu)e^{-x/\mu}$.
a. Find the information in a single observation and the Cramér–Rao lower bound.
b. Use the score function to find the mle of $\mu$.
c. Find the mean and variance of the mle.
d. Is the mle an efficient estimator? Explain.

45. Let $X_1, X_2, \ldots, X_n$ be a random sample from the normal distribution with unknown mean $\mu$ and known standard deviation $\sigma$.
a. Find the mle of $\mu$.
b. Find the distribution of the mle.
c. Is the mle an efficient estimator? Explain.
d. How does the answer to part (b) compare with the asymptotic distribution given by the second theorem?

46. Let $X_1, X_2, \ldots, X_n$ be a random sample from the normal distribution with known mean $\mu$ but with the variance $\sigma^2$ as the unknown parameter.
a. Find the information in a single observation and the Cramér–Rao lower bound.
b. Find the mle of $\sigma^2$.
c. Find the distribution of the mle.
d. Is the mle an efficient estimator? Explain.
e. Is the answer to part (c) in conflict with the asymptotic distribution of the mle given by the second theorem? Explain.

47. Let $X_1, X_2, \ldots, X_n$ be a random sample from the normal distribution with known mean $\mu$ but with the standard deviation $\sigma$ as the unknown parameter.
a. Find the information in a single observation.
b. Compare the answer in part (a) to the answer in part (a) of Exercise 46. Does the information depend on the parameterization?

48. Let $X_1, X_2, \ldots, X_n$ be a random sample from a continuous distribution with pdf $f(x; \theta)$. For large $n$, the variance of the sample median is approximately $1/\{4n[f(\tilde{\mu}; \theta)]^2\}$, where $\tilde{\mu}$ denotes the population median. If $X_1, X_2, \ldots, X_n$ is a random sample from the normal distribution with known standard deviation $\sigma$ and unknown $\mu$, determine the efficiency of the sample median.

Supplementary Exercises

49. At time $t = 0$, there is one individual alive in a certain population. A pure birth process then unfolds as follows. The time until the first birth is exponentially distributed with parameter $\lambda$. After the first birth, there are two individuals alive. The time until the first gives birth again is exponential with parameter $\lambda$, and similarly for the second individual. Therefore, the time until the next birth is the minimum of two exponential($\lambda$) variables, which is exponential with parameter $2\lambda$. Similarly, once the second birth has occurred, there are three individuals alive, so the time until the next birth is an exponential rv with parameter $3\lambda$, and so on (the memoryless property of the exponential distribution is being used here). Suppose the process is observed until the sixth birth has occurred and the successive birth times are 25.2, 41.7, 51.2, 55.5, 59.5, 61.8 (from which you should calculate the times between successive births). Derive the mle of $\lambda$. [Hint: The likelihood is a product of exponential terms.]

50. Let $X_1, \ldots, X_n$ be a random sample from a uniform distribution on the interval $[-\theta, \theta]$.
a. Determine the mle of $\theta$. [Hint: Look back at what we did in Example 7.23.]
b. Give an intuitive argument for why the mle is either biased or unbiased.
c. Determine a sufficient statistic for $\theta$. [Hint: See Example 7.27.]
d. Determine the joint pdf of the smallest order statistic $Y_1$ ($= \min(X_i)$) and the largest order statistic $Y_n$ ($= \max(X_i)$). [Hint: In Section 5.5 we determined the joint pdf of two particular order statistics.] Then use it to obtain the expected value of the mle. [Hint: Draw the region of joint positive density for $Y_1$ and $Y_n$, and identify what the mle is for each part of this region.]
e. What is an unbiased estimator for $\theta$?

51. Carry out the details for minimizing MSE in Example 7.6: show that $c = 1/(n + 1)$ minimizes the MSE of $\hat{\sigma}^2 = c\sum(X_i - \bar{X})^2$ when the population distribution is normal.

52. Let $X_1, \ldots, X_n$ be a random sample from a pdf that is symmetric about $\mu$. An estimator for $\mu$ that has been found to perform well for a variety of underlying distributions is the Hodges–Lehmann estimator. To define it, first compute for each $i \le j$ the pairwise average $\bar{X}_{i,j} = (X_i + X_j)/2$. Then the estimator is $\hat{\mu} =$ the median of the $\bar{X}_{i,j}$'s. Compute the value of this estimate using the data of Exercise 41 of Chapter 1. [Hint: Construct a square table with the $x_i$'s listed on the left margin and on top. Then compute averages on and above the diagonal.]

53. For a normal population distribution, the statistic $\mathrm{median}\{|X_1 - \tilde{X}|, \ldots, |X_n - \tilde{X}|\}/.6745$, where $\tilde{X}$ is the sample median, can be used to estimate $\sigma$. This estimator is more resistant to the effects of outliers (observations far from the bulk of the data) than is the sample standard deviation. Compute both the corresponding point estimate and $s$ for the data of Example 7.2.

54. When the sample standard deviation $S$ is based on a random sample from a normal population distribution, it can be shown that

$$E(S) = \sqrt{2/(n - 1)}\;\Gamma(n/2)\,\sigma/\Gamma[(n - 1)/2]$$

Use this to obtain an unbiased estimator for $\sigma$ of the form $cS$. What is $c$ when $n = 20$?

55. Each of $n$ specimens is to be weighed twice on the same scale. Let $X_i$ and $Y_i$ denote the two observed weights for the $i$th specimen. Suppose $X_i$ and $Y_i$ are independent of each other, each normally distributed with mean value $\mu_i$ (the true weight of specimen $i$) and variance $\sigma^2$.
a. Show that the mle of $\sigma^2$ is $\hat{\sigma}^2 = \sum(X_i - Y_i)^2/(4n)$. [Hint: If $\bar{z} = (z_1 + z_2)/2$, then $\sum(z_i - \bar{z})^2 = (z_1 - z_2)^2/2$.]
b. Is the mle $\hat{\sigma}^2$ an unbiased estimator of $\sigma^2$? Find an unbiased estimator of $\sigma^2$. [Hint: For any rv $Z$, $E(Z^2) = V(Z) + [E(Z)]^2$. Apply this to $Z = X_i - Y_i$.]

56. For $0 < \theta < 1$ consider a random sample from a uniform distribution on the interval from $\theta$ to $1/\theta$. Identify a sufficient statistic for $\theta$.

57. Let $p$ denote the proportion of all individuals who are allergic to a particular medication. An investigator tests individual after individual to obtain a group of $r$ individuals who have the allergy. Let $X_i = 1$ if the $i$th individual tested has the allergy and $X_i = 0$ otherwise ($i = 1, 2, 3, \ldots$). Recall that in this situation, $X =$ the number of nonallergic individuals tested prior to obtaining the desired group has a negative binomial distribution. Use the definition of sufficiency to show that $X$ is a sufficient statistic for $p$.

58. The fraction of a bottle that is filled with a particular liquid is a continuous rv $X$ with pdf $f(x; \theta)$, where $\theta > 0$.
a. Obtain the method of moments estimator for $\theta$.
b. Is the estimator of (a) a sufficient statistic? If not, what is a sufficient statistic, and what is an estimator of $\theta$ based on a sufficient statistic?

59. Let $X_1, \ldots, X_n$ be a random sample from a normal distribution with both $\mu$ and $\sigma$ unknown. An unbiased estimator of $\theta = P(X \le c)$ based on the jointly sufficient statistics is desired. Let $s = S(n - 1)/\sqrt{n}$ and $w = (c - \bar{X})/s$. Then it can be shown that the minimum variance unbiased estimator for $\theta$ is

$$\hat{\theta} = \begin{cases} 0 & w \le -1 \\[2pt] P\Big(T \le \dfrac{w\sqrt{n - 2}}{\sqrt{1 - w^2}}\Big) & -1 < w < 1 \\[2pt] 1 & w \ge 1 \end{cases}$$

where $T$ has a $t$ distribution with $n - 2$ df. The article "Big and Bad: How the S.U.V. Ran over Automobile Safety" (The New Yorker, Jan. 24, 2004) reported that when an engineer with Consumers Union (the product testing and rating organization that publishes Consumer Reports) performed three different trials in which a Chevrolet Blazer was accelerated to 60 mph and then suddenly braked, the stopping distances (ft) were 146.2, 151.6, and 153.4, respectively. Assuming that braking distance is normally distributed, obtain the minimum variance unbiased estimate for the probability that distance is at most 150 ft, and compare to the maximum likelihood estimate of this probability.

60. Here is a result that allows for easy identification of a minimal sufficient statistic: Suppose there is a function $t(x_1, \ldots, x_n)$ such that for any two sets of observations $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$, the likelihood ratio $f(x_1, \ldots, x_n; \theta)/f(y_1, \ldots, y_n; \theta)$ doesn't depend on $\theta$ if and only if $t(x_1, \ldots, x_n) = t(y_1, \ldots, y_n)$. Then $T = t(X_1, \ldots, X_n)$ is a minimal sufficient statistic. The result is also valid if $\theta$ is replaced by $\theta_1, \ldots, \theta_k$, in which case there will typically be several jointly minimal sufficient statistics. For example, if the underlying pdf is exponential with parameter $\lambda$, then the likelihood ratio is $e^{-\lambda(\sum x_i - \sum y_i)}$, which will not depend on $\lambda$ if and only if $\sum x_i = \sum y_i$, so $T = \sum X_i$ is a minimal sufficient statistic for $\lambda$ (and so is the sample mean).
a. Identify a minimal sufficient statistic when the $X_i$'s are a random sample from a Poisson distribution.
b. Identify a minimal sufficient statistic or jointly minimal sufficient statistics when the $X_i$'s are a random sample from a normal distribution with mean $\theta$ and variance $\theta$.
c. Identify a minimal sufficient statistic or jointly minimal sufficient statistics when the $X_i$'s are a random sample from a normal distribution with mean $\theta$ and standard deviation $\theta$.

61. The principle of unbiasedness (prefer an unbiased estimator to any other) has been criticized on the grounds that in some situations the only unbiased estimator is patently unreasonable. Let $X$ have a Poisson distribution with parameter $\lambda$, and suppose we wish to estimate $\theta = e^{-2\lambda}$ based on the single observation $X$. Unbiasedness requires that $\sum_x \delta(x)e^{-\lambda}\lambda^x/x! = e^{-2\lambda}$ for all $\lambda > 0$; multiply both sides by $e^{\lambda}$, expand the resulting $e^{-\lambda}$ on the right as a power series in $\lambda$, and compare the two sides to determine $\delta(X)$. If $X = 200$, what is the estimate? Does this seem reasonable? What is the estimate if $X = 199$? Is this reasonable?

62. Let $X$, the payoff from playing a certain game, have pmf

$$f(x; \theta) = \begin{cases} \theta & x = -1 \\ (1 - \theta)^2\theta^x & x = 0, 1, 2, \ldots \end{cases}$$

a. Verify that $f(x; \theta)$ is a legitimate pmf, and determine the expected payoff. [Hint: Look back at the properties of a geometric random variable discussed in Chapter 3.]
b. Let $X_1, \ldots, X_n$ be the payoffs from $n$ independent games of this type. Determine the mle of $\theta$. [Hint: Let $Y$ denote the number of observations among the $n$ that equal $-1$ (that is, $Y = \sum I(X_i = -1)$, where $I(A) = 1$ if the event $A$ occurs and 0 otherwise), and write the likelihood as a single expression in terms of $\sum x_i$ and $y$.]
c. What is the approximate variance of the mle when $n$ is large?

63. Let $x$ denote the number of items in an order and $y$ denote the time (min) necessary to process the order. Processing time may be determined by various factors other than order size, so for any particular value of $x$ we now regard the value of total production time as a random variable $Y$. Consider the following data, obtained by specifying various values of $x$ and determining the total production time for each one.

x:  10  15  18  20  25  27  30   35   36   40
y: 301 455 533 599 750 810 903 1054 1088 1196

a.
Plot each observed (x, y) pair as a point on a grounds that in some situations the only unbiased two-dimensional coordinate system with a hor- estimator is patently ridiculous. Here is one such izontal axis labeled x and vertical axis labeled y. example. Suppose that the number of major Do all points fall exactly on a line passing defects X on a randomly selected vehicle has a through (0, 0)? Do the points tend to fall close Poisson distribution with parameter 7. You are to such a line? going to purchase two such vehicles and wish to b. Consider the following probability model for estimate 0 = P(X; =0, X2=0)=e, the the data. Values 11,12, ....x, are specified, and probability that neither of these vehicles has any at each x; we observe a value of the dependent major defects. Your estimate is based on observ- variable y. Prior to observation, denote the y ing the value of X for a single vehicle. Denote this values by ¥;, Y2,. . ., Y,, where the use of estimator by 0 = 5(X). Write the equation implied uppercase letters here is appropriate because by the condition of unbiasedness, E[6(X)] = e 7”, we are regarding the y values as random vari- cancel e~ from both sides, then expand what ables. Assume that the Y;’s are independent and remains on the right-hand side in an infinite series, normally distributed, with Y; having mean --- Trang 394 --- Bibliography 381 value fx; and variance o°. That is, rather than product of individual normal likelihoods with assume that y = fx, a linear function of x different mean values and the same variance. passing through the origin, we are assuming Proceed as in the estimation via maximum that the mean value of Y is a linear function likelihood of the parameters jc and a? based of x and that the variance of Y is the same for ona random sample from a normal population any particular x value. 
Obtain formulas for the distribution (but here the data does not consti- maximum likelihood estimates of f and 0”, and tute a random sample as we have previously then calculate the estimates for the given data. defined it, since the Y;’s have different mean How would you interpret the estimate of /? values and therefore don’t have the same dis- What value of processing time would you pre- tribution). [Note: This model is referred to as dict when x = 25? [Hint: The likelihood is a regression through the origin.] DeGroot, Morris, and Mark Schervish, Probability good chapters on robust point estimation, including and Statistics (3rd ed.), Addison-Wesley, Boston, one on M-estimation. MA, 2002. Includes an excellent discussion of Hogg, Robert, Allen Craig, and Joseph McKean, Intro- both general properties and methods of point esti- duction to Mathematical Statistics (6th ed.), Pren- mation; of particular interest are examples tice Hall, Englewood Cliffs, NJ, 2005. A good showing how general principles and methods discussion of unbiasedness. can yield unsatisfactory estimators in particular Larsen, Richard, and Morris Marx, Introduction to situations. Mathematical Statistics (4th ed.), Prentice Hall, Efron, Bradley, and Robert Tibshirani, An Introduc- Englewood Cliffs, NJ, 2005. A very good discus- tion to the Bootstrap, Chapman and Hall, New sion of point estimation from a slightly more math- York, 1993. The bible of the bootstrap. ematical perspective than the present text. Hoaglin, David, Frederick Mosteller, and John Tukey, — Rice, John, Mathematical Statistics and Data Analysis Understanding Robust and Exploratory Data (Grd ed.), Duxbury Press, Belmont, CA, 2007. Analysis, Wiley, New York, 1983. Contains several A nice blending of statistical theory and data. 
8 Statistical Intervals Based on a Single Sample

Introduction

A point estimate, because it is a single number, by itself provides no information about the precision and reliability of estimation. Consider, for example, using the statistic X̄ to calculate a point estimate for the true average breaking strength (g) of paper towels of a certain brand, and suppose that x̄ = 9322.7. Because of sampling variability, it is virtually never the case that x̄ = μ. The point estimate says nothing about how close it might be to μ. An alternative to reporting a single sensible value for the parameter being estimated is to calculate and report an entire interval of plausible values, an interval estimate or confidence interval (CI). A confidence interval is always calculated by first selecting a confidence level, which is a measure of the degree of reliability of the interval. A confidence interval with a 95% confidence level for the true average breaking strength might have a lower limit of 9162.5 and an upper limit of 9482.9. Then at the 95% confidence level, any value of μ between 9162.5 and 9482.9 is plausible. A confidence level of 95% implies that 95% of all samples would give an interval that includes μ, or whatever other parameter is being estimated, and only 5% of all samples would yield an erroneous interval. The most frequently used confidence levels are 95%, 99%, and 90%. The higher the confidence level, the more strongly we believe that the value of the parameter being estimated lies within the interval (an interpretation of any particular confidence level will be given shortly).

Information about the precision of an interval estimate is conveyed by the width of the interval. If the confidence level is high and the resulting interval is quite narrow, our knowledge of the value of the parameter is reasonably precise.
A very wide confidence interval, however, gives the message that there is a great deal of uncertainty concerning the value of what we are estimating. Figure 8.1 shows 95% confidence intervals for true average breaking strengths of two different brands of paper towels.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_8, © Springer Science+Business Media, LLC 2012

Figure 8.1 Confidence intervals indicating precise (brand 1) and imprecise (brand 2) information about μ

One of these intervals suggests precise knowledge about μ, whereas the other suggests a very wide range of plausible values.

8.1 Basic Properties of Confidence Intervals

The basic concepts and properties of confidence intervals (CIs) are most easily introduced by first focusing on a simple, albeit somewhat unrealistic, problem situation. Suppose that the parameter of interest is a population mean μ and that

1. The population distribution is normal.
2. The value of the population standard deviation σ is known.

Normality of the population distribution is often a reasonable assumption. However, if the value of μ is unknown, it is unlikely that the value of σ would be available (knowledge of a population's center typically precedes information concerning spread). In later sections, we will develop methods based on less restrictive assumptions.

Industrial engineers who specialize in ergonomics are concerned with designing workspace and devices operated by workers so as to achieve high productivity and comfort. The article "Studies on Ergonomically Designed Alphanumeric Keyboards" (Hum. Factors, 1985: 175–187) reports on a study of preferred height for an experimental keyboard with large forearm–wrist support.
A sample of n = 31 trained typists was selected, and the preferred keyboard height was determined for each typist. The resulting sample average preferred height was x̄ = 80.0 cm. Assuming that the preferred height is normally distributed with σ = 2.0 cm (a value suggested by data in the article), obtain a CI for μ, the true average preferred height for the population of all experienced typists.

The actual sample observations x₁, x₂, ..., xₙ are assumed to be the result of a random sample X₁, ..., Xₙ from a normal distribution with mean value μ and standard deviation σ. The results of Chapter 6 then imply that irrespective of the sample size n, the sample mean X̄ is normally distributed with expected value μ and standard deviation σ/√n. Standardizing X̄ by first subtracting its expected value and then dividing by its standard deviation yields the variable

Z = (X̄ − μ)/(σ/√n)   (8.1)

which has a standard normal distribution. Because the area under the standard normal curve between −1.96 and 1.96 is .95,

P(−1.96 < (X̄ − μ)/(σ/√n) < 1.96) = .95   (8.2)

The next step in the development is to manipulate the inequalities inside the parentheses in (8.2) so that they appear in the equivalent form l < μ < u, where the endpoints l and u involve X̄ and σ/√n. This is achieved through the following sequence of operations, each one yielding inequalities equivalent to those we started with:

1. Multiply through by σ/√n to obtain −1.96·σ/√n < X̄ − μ < 1.96·σ/√n.
2. Subtract X̄ from each term.
3. Multiply through by −1 to eliminate the minus sign in front of μ (which reverses the direction of each inequality); that is,

X̄ − 1.96·σ/√n < μ < X̄ + 1.96·σ/√n

so the interval has lower limit X̄ − 1.96·σ/√n and upper limit X̄ + 1.96·σ/√n. For the preferred height data, substituting x̄ = 80.0, σ = 2.0, and n = 31 gives 80.0 ± (1.96)(2.0)/√31 = 80.0 ± .7 = (79.3, 80.7).

Figure 8.3 Repeated construction of 95% CIs

According to this interpretation, the confidence level 95% is not so much a statement about any particular interval such as (79.3, 80.7), but pertains to what would happen if a very large number of like intervals were to be constructed using the same formula.
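The long-run interpretation illustrated by Figure 8.3 is easy to check by simulation. The sketch below repeatedly draws samples in the setting of the keyboard example (μ = 80, σ = 2.0, n = 31, values from the text; the scaffolding around them is our own) and counts how often the interval x̄ ± 1.96σ/√n captures μ:

```python
import random
from statistics import mean

random.seed(2012)
mu, sigma, n = 80.0, 2.0, 31          # the keyboard-height setting
half_width = 1.96 * sigma / n ** 0.5  # fixed half-width: sigma is known
trials, hits = 10_000, 0
for _ in range(trials):
    xbar = mean(random.gauss(mu, sigma) for _ in range(n))
    if xbar - half_width <= mu <= xbar + half_width:
        hits += 1                     # this interval captured the true mean
print(hits / trials)                  # long-run capture proportion, near .95
```

The printed proportion hovers around .95, exactly the sense in which the confidence level describes the procedure rather than any one interval.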
Although this may seem unsatisfactory, the root of the difficulty lies with our interpretation of probability: it applies to a long sequence of replications of an experiment rather than just a single replication. There is another approach to the construction and interpretation of CIs that uses the notion of subjective probability and Bayes' theorem, as discussed in Section 14.4. The interval presented here (as well as each interval presented subsequently) is called a "classical" CI because its interpretation rests on the classical notion of probability (although the main ideas were developed as recently as the 1930s).

Other Levels of Confidence

The confidence level of 95% was inherited from the probability .95 for the initial inequalities in (8.2). If a confidence level of 99% is desired, the initial probability of .95 must be replaced by .99, which necessitates changing the z critical value from 1.96 to 2.58. A 99% CI then results from using 2.58 in place of 1.96 in the formula for the 95% CI. This suggests that any desired level of confidence can be achieved by replacing 1.96 or 2.58 with the appropriate standard normal critical value. As Figure 8.4 shows, a probability of 1 − α is achieved by using z_(α/2) in place of 1.96.

Figure 8.4 P(−z_(α/2) < Z < z_(α/2)) = 1 − α

DEFINITION A 100(1 − α)% confidence interval for the mean μ of a normal population when the value of σ is known is given by

(x̄ − z_(α/2)·σ/√n, x̄ + z_(α/2)·σ/√n)   (8.5)

or, equivalently, by x̄ ± z_(α/2)·σ/√n.

A finite mathematics course has recently been changed, and the homework is now done online via computer instead of from the textbook exercises. How can we see if there has been improvement? Past experience suggests that the distribution of final exam scores is normally distributed with mean 65 and standard deviation 13.
It is believed that the distribution is still normal with standard deviation 13, but the mean has likely changed. A sample of 40 students has a mean final exam score of 70.7. Let's calculate a confidence interval for the population mean using a confidence level of 90%. This requires that 100(1 − α) = 90, from which α = .10 and z_(α/2) = z_(.05) = 1.645 (corresponding to a cumulative z-curve area of .9500). The desired interval is then

70.7 ± 1.645·13/√40 = 70.7 ± 3.4 = (67.3, 74.1)

With a reasonably high degree of confidence, we can say that 67.3 < μ < 74.1. Furthermore, we are confident that the population mean has improved over the previous value of 65.

Confidence Level, Precision, and Choice of Sample Size

Why settle for a confidence level of 95% when a level of 99% is achievable? Because the price paid for the higher confidence level is a wider interval. The 95% interval extends 1.96·σ/√n to each side of x̄, so the width of the interval is 2(1.96)·σ/√n = 3.92·σ/√n. Similarly, the width of the 99% interval is 2(2.58)·σ/√n = 5.16·σ/√n. That is, we have more confidence in the 99% interval precisely because it is wider. The higher the desired degree of confidence, the wider the resulting interval. In fact, the only 100% CI for μ is (−∞, ∞), which is not terribly informative because, even before sampling, we knew that this interval covers μ.

If we think of the width of the interval as specifying its precision or accuracy, then the confidence level (or reliability) of the interval is inversely related to its precision. A highly reliable interval estimate may be imprecise in that the endpoints of the interval may be far apart, whereas a precise interval may entail relatively low reliability. Thus it cannot be said unequivocally that a 99% interval is to be preferred to a 95% interval; the gain in reliability entails a loss in precision.
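Calculations like the exam-score interval above are one-liners once the definition (8.5) is wrapped in a helper. A minimal Python sketch (the function name is ours; `NormalDist().inv_cdf` supplies the z critical value):

```python
from statistics import NormalDist

def z_interval(xbar, sigma, n, conf_level=0.95):
    # 100(1-alpha)% CI for mu with sigma known:
    # xbar +/- z_{alpha/2} * sigma / sqrt(n)
    z = NormalDist().inv_cdf((1 + conf_level) / 2)
    half_width = z * sigma / n ** 0.5
    return xbar - half_width, xbar + half_width

lo, hi = z_interval(70.7, 13, 40, conf_level=0.90)
print(round(lo, 1), round(hi, 1))  # reproduces the interval (67.3, 74.1)
```

Changing `conf_level` to 0.99 widens the interval, illustrating the reliability/precision trade-off just discussed.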
An appealing strategy is to specify both the desired confidence level and interval width and then determine the necessary sample size.

Extensive monitoring of a computer time-sharing system has suggested that response time to a particular editing command is normally distributed with standard deviation 25 ms. A new operating system has been installed, and we wish to estimate the true average response time μ for the new environment. Assuming that response times are still normally distributed with σ = 25, what sample size is necessary to ensure that the resulting 95% CI has a width of (at most) 10? The sample size n must satisfy

10 = 2·(1.96)·(25/√n)

Rearranging this equation gives

√n = 2·(1.96)·(25)/10 = 9.80  so  n = (9.80)² = 96.04

Since n must be an integer, a sample size of 97 is required.

The general formula for the sample size n necessary to ensure an interval width w is obtained from w = 2·z_(α/2)·σ/√n as

n = (2z_(α/2)·σ/w)²   (8.6)

The smaller the desired width w, the larger n must be. In addition, n is an increasing function of σ (more population variability necessitates a larger sample size) and of the confidence level 100(1 − α) (as α decreases, z_(α/2) increases).

The half-width 1.96·σ/√n of the 95% CI is sometimes called the bound on the error of estimation associated with a 95% confidence level; that is, with 95% confidence, the point estimate x̄ will be no farther than this from μ. Before obtaining data, an investigator may wish to determine a sample size for which a particular value of the bound is achieved. For example, with μ representing the average fuel efficiency (mpg) for all cars of a certain type, the objective of an investigation may be to estimate μ to within 1 mpg with 95% confidence. More generally, if we wish to estimate μ to within an amount B (the specified bound on the error of estimation) with 100(1 − α)% confidence, the necessary sample size results from replacing 2/w by 1/B in (8.6).
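Formula (8.6) translates directly into code. A sketch (function name ours) that reproduces the n = 97 of the response-time example:

```python
import math
from statistics import NormalDist

def sample_size_for_width(sigma, width, conf_level=0.95):
    # n = (2 * z_{alpha/2} * sigma / w)^2, rounded up to an integer  (8.6)
    z = NormalDist().inv_cdf((1 + conf_level) / 2)
    return math.ceil((2 * z * sigma / width) ** 2)

print(sample_size_for_width(25, 10))  # 97, as in the example
```

For a bound B on the error of estimation, calling the same function with `width = 2 * B` gives the required n.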
Deriving a Confidence Interval

Let X₁, X₂, ..., Xₙ denote the sample on which the CI for a parameter θ is to be based. Suppose a random variable satisfying the following two properties can be found:

1. The variable depends functionally on both X₁, ..., Xₙ and θ.
2. The probability distribution of the variable does not depend on θ or on any other unknown parameters.

Let h(X₁, X₂, ..., Xₙ; θ) denote this random variable. For example, if the population distribution is normal with known σ and θ = μ, the variable h(X₁, ..., Xₙ; θ) = (X̄ − μ)/(σ/√n) satisfies both properties; it clearly depends functionally on μ, yet has the standard normal probability distribution, which does not depend on μ. In general, the form of the h function is usually suggested by examining the distribution of an appropriate estimator θ̂. For any α between 0 and 1, constants a and b can be found to satisfy

P(a < h(X₁, ..., Xₙ; θ) < b) = 1 − α

with tail probabilities α₁ > 0 and α₂ > 0 satisfying α₁ + α₂ = α.

8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion

When n is sufficiently large, the standardized variable (X̄ − μ)/(S/√n) has approximately a standard normal distribution, so that

x̄ ± z_(α/2)·s/√n   (8.8)

is a large-sample confidence interval for μ with confidence level approximately 100(1 − α)%. Generally speaking, n > 40 will be sufficient to justify the use of this interval. This is somewhat more conservative than the rule of thumb for the CLT because of the additional variability introduced by using S in place of σ.

Haven't you always wanted to own a Porsche? One of the authors thought maybe he could afford a Boxster, the cheapest model. So he went to www.cars.com on Nov. 18, 2009 and found a total of 1,113 such cars listed. Asking prices ranged from $3,499 to $130,000 (the latter price was one of only two exceeding $70,000). The prices depressed him, so he focused instead on odometer readings (miles). Here are reported readings for a sample of 50 of these Boxsters:

2948 2996 7197 8338 8500 8759 12710 12925 15767 20000 23247 24863 26000 26210 30552 30600 35700 36466 40316 40596 41021 41234 43000
44607 45000 45027 45442 46963 41978 49518 52000 53334 54208 56062 57000 57365 60020 60265 60803 62851 64404 72140 74594 79308 79500 80000 80000 84000 113000 118634

A boxplot of the data (Figure 8.5) shows that, except for the two mild outliers at the upper end, the distribution of values is reasonably symmetric (in fact, a normal probability plot exhibits a reasonably linear pattern, though the points corresponding to the two smallest and two largest observations are somewhat removed from a line fit through the remaining points).

Figure 8.5 A boxplot of the odometer reading data from Example 8.6

Summary quantities include n = 50, x̄ = 45,679.4, x̃ = 45,013.5, s = 26,641.675, fs = 34,265. The mean and median are reasonably close (if the two largest values were each reduced by 30,000, the mean would fall to 44,479.4 while the median would be unaffected). The boxplot and the magnitudes of s and fs relative to the mean and median both indicate a substantial amount of variability. A confidence level of about 95% requires z_(.025) = 1.96, and the interval is

45,679.4 ± (1.96)(26,641.675/√50) = 45,679.4 ± 7,384.7 = (38,294.7, 53,064.1)

That is, 38,294.7 < μ < 53,064.1 with 95% confidence. This interval is rather wide because a sample size of 50, even though large by our rule of thumb, is not large enough to overcome the substantial variability in the sample. We do not have a very precise estimate of the population mean odometer reading.

Is the interval we've calculated one of the 95% that in the long run includes the parameter being estimated, or is it one of the "bad" 5% that does not do so? Without knowing the value of μ, we cannot tell. Remember that the confidence level refers to the long-run capture percentage when the formula is used repeatedly on various samples; it cannot be interpreted for a single sample and the resulting interval.
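The odometer-reading computation can be replicated from the summary quantities alone (a sketch; the numbers are those reported in the example):

```python
n, xbar, s = 50, 45_679.4, 26_641.675  # summary quantities from the example
z = 1.96                               # z_.025 for roughly 95% confidence
half_width = z * s / n ** 0.5          # z times the estimated standard error
lo, hi = xbar - half_width, xbar + half_width
print(round(lo, 1), round(hi, 1))      # the interval (38294.7, 53064.1)
```

Note that only n, x̄, and s enter the interval; the individual observations matter here only through these three numbers.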
Unfortunately, the choice of sample size to yield a desired interval width is not as straightforward here as it was for the case of known σ. This is because the width of (8.8) is 2z_(α/2)·s/√n. Since the value of s is not available before data collection, the width of the interval cannot be determined solely by the choice of n. The only option for an investigator who wishes to specify a desired width is to make an educated guess as to what the value of s might be. By being conservative and guessing a larger value of s, an n larger than necessary will be chosen. The investigator may be able to specify a reasonably accurate value of the population range (the difference between the largest and smallest values). Then if the population distribution is not too skewed, dividing the range by four gives a ballpark value of what s might be. The idea is that roughly 95% of the data lie within ±2σ of the mean, so the range is roughly 4σ (range/6 might be too optimistic).

An investigator wishes to estimate the true average score on an algebra placement test. Suppose she believes that virtually all values in the population are between 10 and 30. Then (30 − 10)/4 = 5 gives a reasonable value for s. The appropriate sample size for estimating the true average score to within one with confidence level 95%, that is, for the 95% CI to have a width of 2, is

n = [(1.96)(5)/1]² ≈ 96

A General Large-Sample Confidence Interval

The large-sample intervals x̄ ± z_(α/2)·σ/√n and x̄ ± z_(α/2)·s/√n are special cases of a general large-sample CI for a parameter θ. Suppose that θ̂ is an estimator satisfying the following properties: (1) It has approximately a normal distribution; (2) it is (at least approximately) unbiased; and (3) an expression for σ_θ̂, the standard deviation of θ̂, is available.
For example, in the case θ = μ, μ̂ = X̄ is an unbiased estimator whose distribution is approximately normal when n is large, and σ_μ̂ = σ_X̄ = σ/√n. Standardizing θ̂ yields the rv Z = (θ̂ − θ)/σ_θ̂, which has approximately a standard normal distribution. This justifies the probability statement

P(−z_(α/2) < (θ̂ − θ)/σ_θ̂ < z_(α/2)) ≈ 1 − α   (8.9)

Suppose, first, that σ_θ̂ does not involve any unknown parameters (e.g., known σ in the case θ = μ). Then replacing each < in (8.9) by = results in θ = θ̂ ± z_(α/2)·σ_θ̂, so the lower and upper confidence limits are θ̂ − z_(α/2)·σ_θ̂ and θ̂ + z_(α/2)·σ_θ̂, respectively. Now suppose that σ_θ̂ does not involve θ but does involve at least one other unknown parameter. Let s_θ̂ be the estimate of σ_θ̂ obtained by using estimates in place of the unknown parameters (e.g., s/√n estimates σ/√n). Under general conditions (essentially that s_θ̂ be close to σ_θ̂ for most samples), a valid CI is θ̂ ± z_(α/2)·s_θ̂. The interval x̄ ± z_(α/2)·s/√n is an example.

Finally, suppose that σ_θ̂ does involve the unknown θ. This is the case, for example, when θ = p, a population proportion. Then solving (θ̂ − θ)/σ_θ̂ = z_(α/2) for θ can be difficult. An approximate solution can often be obtained by replacing θ in σ_θ̂ by its estimate θ̂. This results in an estimated standard deviation s_θ̂, and the corresponding interval is again θ̂ ± z_(α/2)·s_θ̂.

A Confidence Interval for a Population Proportion

Let p denote the proportion of "successes" in a population, where a success identifies an individual or object that has a specified property. A random sample of n individuals is to be selected, and X is the number of successes in the sample. Provided that n is small compared to the population size, X can be regarded as a binomial rv with E(X) = np and σ_X = √(np(1 − p)). Furthermore, if n is large (np > 10 and nq > 10), X has approximately a normal distribution. The natural estimator of p is p̂ = X/n, the sample fraction of successes.
Since p̂ is just X multiplied by the constant 1/n, p̂ also has approximately a normal distribution. As shown in Section 7.1, E(p̂) = p (unbiasedness) and σ_p̂ = √(p(1 − p)/n). The standard deviation σ_p̂ involves the unknown parameter p. Standardizing p̂ by subtracting p and dividing by σ_p̂ then implies that

P(−z_(α/2) < (p̂ − p)/√(p(1 − p)/n) < z_(α/2)) ≈ 1 − α

Proceeding as suggested in the subsection "Deriving a Confidence Interval" (Section 8.1), the confidence limits result from replacing each < by = and solving the resulting quadratic equation for p. With q̂ = 1 − p̂, this gives the two roots

p = (p̂ + z²_(α/2)/2n)/(1 + z²_(α/2)/n) ± z_(α/2)·√(p̂q̂/n + z²_(α/2)/4n²)/(1 + z²_(α/2)/n)

PROPOSITION Let p̃ = (p̂ + z²_(α/2)/2n)/(1 + z²_(α/2)/n). Then a confidence interval for a population proportion p with confidence level approximately 100(1 − α)% is

p̃ ± z_(α/2)·√(p̂q̂/n + z²_(α/2)/4n²)/(1 + z²_(α/2)/n)   (8.10)

where q̂ = 1 − p̂ and, as before, the − in (8.10) corresponds to the lower confidence limit and the + to the upper confidence limit. This is often referred to as the "score CI" for p.

If the sample size n is very large, then z²/2n is generally quite negligible (small) compared to p̂ and z²/n is quite negligible compared to 1, from which p̃ ≈ p̂. In this case z²/4n² is also negligible compared to p̂q̂/n (n² is a much larger divisor than is n); as a result, the dominant term in the ± expression is z_(α/2)·√(p̂q̂/n) and the score interval is approximately

p̂ ± z_(α/2)·√(p̂q̂/n)   (8.11)

This latter interval has the general form θ̂ ± z_(α/2)·s_θ̂ of a large-sample interval suggested in the last subsection. The approximate CI (8.11) is the one that for decades has appeared in introductory statistics textbooks. It clearly has a much simpler and more appealing form than the score CI. So why bother with the latter?

First of all, suppose we use z_(.025) = 1.96 in the traditional formula (8.11).
Then our nominal confidence level (the one we think we're buying by using that z critical value) is approximately 95%. So before a sample is selected, the probability that the random interval includes the actual value of p (i.e., the coverage probability) should be about .95. But as Figure 8.6 shows for the case n = 100, the actual coverage probability for this interval can differ considerably from the nominal probability .95, particularly when p is not close to .5 (the graph of coverage probability versus p is very jagged because the underlying binomial probability distribution is discrete rather than continuous). This is generally speaking a deficiency of the traditional interval: the actual confidence level can be quite different from the nominal level even for reasonably large sample sizes. Recent research has shown that the score interval rectifies this behavior; for virtually all sample sizes and values of p, its actual confidence level will be quite close to the nominal level specified by the choice of z_(α/2). This is due largely to the fact that the score interval is shifted a bit toward .5 compared to the traditional interval. In particular, the midpoint p̃ of the score interval is always a bit closer to .5 than is the midpoint p̂ of the traditional interval. This is especially important when p is close to 0 or 1.

Figure 8.6 Actual coverage probability for the interval (8.11) for varying values of p when n = 100

In addition, the score interval can be used with nearly all sample sizes and parameter values. It is thus not necessary to check the conditions np̂ > 10 and n(1 − p̂) > 10, which would be required were the traditional interval employed. So rather than asking when n is large enough for (8.11) to yield a good approximation to (8.10), our recommendation is that the score CI should always be used.
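Following that recommendation costs very little in practice: both the score interval (8.10) and the traditional interval (8.11) are a few lines of code. A Python sketch (function names ours):

```python
from statistics import NormalDist

def score_ci(x, n, conf_level=0.95):
    # Score ("Wilson") interval (8.10); its center p-tilde is pulled toward .5
    z = NormalDist().inv_cdf((1 + conf_level) / 2)
    phat = x / n
    center = (phat + z * z / (2 * n)) / (1 + z * z / n)
    half = z * (phat * (1 - phat) / n + z * z / (4 * n * n)) ** 0.5 \
           / (1 + z * z / n)
    return center - half, center + half

def traditional_ci(x, n, conf_level=0.95):
    # Traditional interval (8.11): phat +/- z * sqrt(phat * qhat / n)
    z = NormalDist().inv_cdf((1 + conf_level) / 2)
    phat = x / n
    half = z * (phat * (1 - phat) / n) ** 0.5
    return phat - half, phat + half
```

For example, 16 successes in 48 trials give roughly (.217, .475) from the score interval and (.200, .467) from the traditional one, the computation carried out by hand in the example that follows.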
The slight additional tediousness of the computation is outweighed by the desirable properties of the interval.

Example 8.8 The article "Repeatability and Reproducibility for Pass/Fail Data" (J. Testing Eval., 1997: 151–153) reported that in n = 48 trials in a particular laboratory, 16 resulted in ignition of a particular type of substrate by a lighted cigarette. Let p denote the long-run proportion of all such trials that would result in ignition. A point estimate for p is p̂ = 16/48 = .333. A confidence interval for p with a confidence level of approximately 95% is

(.333 + 1.96²/96)/(1 + 1.96²/48) ± 1.96·√((.333)(.667)/48 + 1.96²/(4·48²))/(1 + 1.96²/48) = .346 ± .129 = (.217, .475)

The traditional interval is

.333 ± 1.96·√((.333)(.667)/48) = .333 ± .133 = (.200, .466)

These two intervals would be in much closer agreement were the sample size substantially larger.

Equating the width of the CI for p to a prespecified width w gives a quadratic equation for the sample size n necessary to give an interval with a desired degree of precision. Suppressing the subscript in z_(α/2), the solution is

n = [2z²p̂q̂ − z²w² + √(4z⁴p̂q̂(p̂q̂ − w²) + w²z⁴)]/w²   (8.12)

Neglecting the terms in the numerator involving w² gives

n ≈ 4z²p̂q̂/w²

This latter expression is what results from equating the width of the traditional interval to w. These formulas unfortunately involve the unknown p̂. The most conservative approach is to take advantage of the fact that p̂q̂ [= p̂(1 − p̂)] is a maximum when p̂ = .5. Thus if p̂ = q̂ = .5 is used in (8.12), the width will be at most w regardless of what value of p̂ results from the sample. Alternatively, if the investigator believes strongly, based on prior information, that p ≤ p₀ < .5, then p₀ can be used in place of p̂. A similar comment applies when p ≥ p₀ > .5.

The width of the 95% CI in Example 8.8 is .258.
The value of n necessary to ensure a width of .10 irrespective of the value of p̂ is

n = [2(1.96)²(.25) − (1.96)²(.01) + √(4(1.96)⁴(.25)(.25 − .01) + (.01)(1.96)⁴)] / .01 = 380.3

Thus a sample size of 381 should be used. The expression for n based on the traditional CI gives a slightly larger value of 385.

One-Sided Confidence Intervals (Confidence Bounds)

The confidence intervals discussed thus far give both a lower confidence bound and an upper confidence bound for the parameter being estimated. In some circumstances, an investigator will want only one of these two types of bounds. For example, a psychologist may wish to calculate a 95% upper confidence bound for true average reaction time to a particular stimulus, or a surgeon may want only a lower confidence bound for true average remission time after colon cancer surgery. Because the cumulative area under the standard normal curve to the left of 1.645 is .95,

P((X̄ − μ)/(S/√n) < 1.645) = .95

Manipulating the inequality inside the parentheses to isolate μ on one side and replacing rv's by calculated values gives the inequality μ > x̄ − 1.645s/√n; the expression on the right is the desired lower confidence bound. Starting with P(−1.645 < Z) = .95 and manipulating the inequality results in the upper confidence bound. A similar argument gives a one-sided bound associated with any other confidence level.

PROPOSITION A large-sample upper confidence bound for μ is

μ < x̄ + z_α · s/√n

and a large-sample lower confidence bound for μ is

μ > x̄ − z_α · s/√n

A one-sided confidence bound for p results from replacing z_{α/2} by z_α and ± by either + or − in the CI formula (8.10) for p. In all cases the confidence level is approximately 100(1 − α)%.
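The proposition translates directly into code. A minimal sketch using the standard normal quantile from Python's statistics module (the function names are ours):

```python
from math import sqrt
from statistics import NormalDist

def upper_bound(xbar, s, n, conf=0.95):
    """Large-sample upper confidence bound for mu: xbar + z_alpha * s/sqrt(n)."""
    z = NormalDist().inv_cdf(conf)  # e.g. about 1.645 for conf = .95
    return xbar + z * s / sqrt(n)

def lower_bound(xbar, s, n, conf=0.95):
    """Large-sample lower confidence bound for mu: xbar - z_alpha * s/sqrt(n)."""
    z = NormalDist().inv_cdf(conf)
    return xbar - z * s / sqrt(n)
```

For the outpatient-clinic example that follows, `upper_bound(40.3, 28.0, 50)` reproduces the 46.8 bound to one decimal place.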
A random sample of 50 patients who had been seen at an outpatient clinic was selected, and the waiting time to see a physician was determined for each one, resulting in a sample mean time of 40.3 min and a sample standard deviation of 28.0 min (suggested by the article "An Example of Good but Partially Successful OR Engagement: Improving Outpatient Clinic Operations", Interfaces 28, #5). An upper confidence bound for true average waiting time with a confidence level of roughly 95% is

40.3 + (1.645)(28.0)/√50 = 40.3 + 6.5 = 46.8

That is, with a confidence level of about 95%, μ < 46.8. Note that the sample standard deviation is quite large relative to the sample mean. If these were the values of σ and μ, respectively, then population normality would not be sensible because there would then be quite a large probability of obtaining a negative waiting time. But because n is large here, our confidence bound is valid even though the population distribution is probably positively skewed.

Exercises | Section 8.2 (12-28)

12. A random sample of 110 lightning flashes in a region resulted in a sample average radar echo duration of .81 s and a sample standard deviation of .34 s ("Lightning Strikes to an Airplane in a Thunderstorm," J. Aircraft, 1984: 607-611). Calculate a 99% (two-sided) confidence interval for the true average echo duration μ, and interpret the resulting interval.

13. The article "Extravisual Damage Detection: Defining the Standard Normal Tree" (Photogrammetric Engrg. Remote Sensing, 1981: 515-522) discusses the use of color infrared photography in identification of normal trees in Douglas fir stands. Among data reported were summary statistics for green-filter analytic optical densitometric measurements on samples of both healthy and diseased trees. For a sample of 69 healthy trees, the sample mean dye-layer density was 1.028, and the sample standard deviation was .163.
a. Calculate a 95% (two-sided) CI for the true average dye-layer density for all such trees.
b. Suppose the investigators had made a rough guess of .16 for the value of s before collecting data. What sample size would be necessary to obtain an interval width of .05 for a confidence level of 95%?

14. The article "Evaluating Tunnel Kiln Performance" (Amer. Ceramic Soc. Bull., Aug. 1997: 59-63) gave the following summary information for fracture strengths (MPa) of n = 169 ceramic bars fired in a particular kiln: x̄ = 89.10, s = 3.73.
a. Calculate a (two-sided) confidence interval for true average fracture strength using a confidence level of 95%. Does it appear that true average fracture strength has been precisely estimated?
b. Suppose the investigators had believed a priori that the population standard deviation was about 4 MPa. Based on this supposition, how large a sample would have been required to estimate μ to within .5 MPa with 95% confidence?

15. Determine the confidence level for each of the following large-sample one-sided confidence bounds:
a. Upper bound: x̄ + .84s/√n
b. Lower bound: x̄ − 2.05s/√n
c. Upper bound: x̄ + .67s/√n

16. A sample of 66 obese adults was put on a low-carbohydrate diet for a year. The average weight loss was 11 lb and the standard deviation was 19 lb. Calculate a 99% lower confidence bound for the true average weight loss. What does the bound say about confidence that the mean weight loss is positive?

17. A study was done on 41 first-year medical students to see if their anxiety levels changed during the first semester. One measure used was the level of serum cortisol, which is associated with stress. For each of the 41 students the level was compared during finals at the end of the semester against the level in the first week of classes. The average difference was 2.08 with a standard deviation of 7.88. Find a 95% lower confidence bound for the true average difference in cortisol level. Does the bound suggest that the mean population stress change is necessarily positive?

18. The article "Ultimate Load Capacities of Expansion Anchor Bolts" (J. Energy Engrg., 1993: 139-158) gave the following summary data on shear strength (kip) for a sample of 3/8-in. anchor bolts: n = 78, x̄ = 4.25, s = 1.30. Calculate a lower confidence bound using a confidence level of 90% for true average shear strength.

19. The article "Limited Yield Estimation for Visual Defect Sources" (IEEE Trans. Semicon. Manuf., 1997: 17-23) reported that, in a study of a particular wafer inspection process, 356 dies were examined by an inspection probe and 201 of these passed the probe. Assuming a stable process, calculate a 95% (two-sided) confidence interval for the proportion of all dies that pass the probe.

20. The Associated Press (October 9, 2002) reported that in a survey of 4722 American youngsters aged 6-19, 15% were seriously overweight (a body mass index of at least 30; this index is a measure of weight relative to height). Calculate and interpret a confidence interval using a 99% confidence level for the proportion of all American youngsters who are seriously overweight.

21. A random sample of 539 households from a midwestern city was selected, and it was determined that 133 of these households owned at least one firearm ("The Social Determinants of Gun Ownership: Self-Protection in an Urban Environment," Criminology, 1997: 629-640). Using a 95% confidence level, calculate a lower confidence bound for the proportion of all households in this city that own at least one firearm.

22. In a sample of 1000 randomly selected consumers who had opportunities to send in a rebate claim form after purchasing a product, 250 of these people said they never did so ("Rebates: Get What You Deserve", Consumer Reports, May 2009: 7). Reasons cited for their behavior included too many steps in the process, amount too small, missed deadline, fear of being placed on a mailing list, lost receipt, and doubts about receiving the money. Calculate an upper confidence bound at the 95% confidence level for the true proportion of such consumers who never apply for a rebate. Based on this bound, is there compelling evidence that the true proportion of such consumers is smaller than 1/3? Explain your reasoning.

23. The article "An Evaluation of Football Helmets Under Impact Conditions" (Amer. J. Sports Med., 1984: 233-237) reports that when each football helmet in a random sample of 37 suspension-type helmets was subjected to a certain impact test, 24 showed damage. Let p denote the proportion of all helmets of this type that would show damage when tested in the prescribed manner.
a. Calculate a 99% CI for p.
b. What sample size would be required for the width of a 99% CI to be at most .10, irrespective of p?

24. A sample of 56 research cotton samples resulted in a sample average percentage elongation of 8.17 and a sample standard deviation of 1.42 ("An Apparent Relation Between the Spiral Angle φ, the Percent Elongation E₁, and the Dimensions of the Cotton Fiber," Textile Res. J., 1978: 407-410). Calculate a 95% large-sample CI for the true average percentage elongation μ. What assumptions are you making about the distribution of percentage elongation?

25. A state legislator wishes to survey residents of her district to see what proportion of the electorate is aware of her position on using state funds to pay for abortions.
a. What sample size is necessary if the 95% CI for p is to have a width of at most .10 irrespective of p?
b. If the legislator has strong reason to believe that at least 2/3 of the electorate know of her position, how large a sample size would you recommend?

26. The superintendent of a large school district, having once had a course in probability and statistics, believes that the number of teachers absent on any given day has a Poisson distribution with parameter λ. Use the accompanying data on absences for 50 days to derive a large-sample CI for λ. [Hint: The mean and variance of a Poisson variable both equal λ, so

Z = (X̄ − λ)/√(λ/n)

has approximately a standard normal distribution. Now proceed as in the derivation of the interval for p by making a probability statement (with probability 1 − α) and solving the resulting inequalities for λ (see the argument just after (8.10)).]

Number of absences: 0  1  2  3  4  5  6  7  8  9  10
Frequency:          1  4  8  10 8  7  5  3  2  1  1

27. Reconsider the CI (8.10) for p, and focus on a confidence level of 95%. Show that the confidence limits agree quite well with those of the traditional interval (8.11) once two successes and two failures have been appended to the sample [i.e., (8.11) based on (x + 2) S's in (n + 4) trials]. [Hint: 1.96 ≈ 2.] [Note: Agresti and Coull showed that this adjustment of the traditional interval also has actual confidence level close to the nominal level.]

28. Young people may feel they are carrying the weight of the world on their shoulders, when what they are actually carrying too often is an excessively heavy backpack. The article "Effectiveness of a School-Based Backpack Health Promotion Program" (Work, 2003: 113-123) reported the following data for a sample of 131 sixth graders: for backpack weight (lb), x̄ = 13.83, s = 5.05; for backpack weight as a percentage of body weight, a 95% CI for the population mean was (13.62, 15.89).
a. Calculate and interpret a 99% CI for population mean backpack weight.
b. Obtain a 99% CI for population mean weight as a percentage of body weight.
c. The American Academy of Orthopedic Surgeons recommends that backpack weight be at most 10% of body weight.
What does your calculation of (b) suggest, and why?

Intervals Based on a Normal Population Distribution

The CI for μ presented in Section 8.2 is valid provided that n is large. The resulting interval can be used whatever the nature of the population distribution. The CLT cannot be invoked, however, when n is small. In this case, one way to proceed is to make a specific assumption about the form of the population distribution and then derive a CI tailored to that assumption. For example, we could develop a CI for μ when the population is described by a gamma distribution, another interval for the case of a Weibull population, and so on. Statisticians have indeed carried out this program for a number of different distributional families. Because the normal distribution is more frequently appropriate as a population model than is any other type of distribution, we will focus here on a CI for this situation.

ASSUMPTION The population of interest is normal, so that X₁, ..., Xₙ constitutes a random sample from a normal distribution with both μ and σ unknown.

The key result underlying the interval in Section 8.2 is that for large n, the rv Z = (X̄ − μ)/(S/√n) has approximately a standard normal distribution. When n is small, S is no longer likely to be close to σ, so the variability in the distribution of Z arises from randomness in both the numerator and the denominator. This implies that the probability distribution of (X̄ − μ)/(S/√n) will be more spread out than the standard normal distribution. Inferences are based on the following result from Section 6.4 using the family of t distributions:

THEOREM When X̄ is the mean of a random sample of size n from a normal distribution with mean μ, the rv

T = (X̄ − μ)/(S/√n)    (8.13)

has the t distribution with n − 1 degrees of freedom (df).

Properties of t Distributions

Before applying this theorem, a review of properties of t distributions is in order.
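Before turning to those properties, the theorem's qualitative content, that T is more variable than Z when n is small, is easy to check by simulation. A small illustration of ours (not from the text): for n = 5 the theorem says T has a t distribution with 4 df, whose variance is ν/(ν − 2) = 2, twice that of a standard normal variable.

```python
import random
from statistics import mean, stdev, pvariance

random.seed(1)

def t_stat(n, mu=0.0, sigma=1.0):
    """One simulated value of T = (xbar - mu)/(s/sqrt(n)) from a normal sample."""
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    return (mean(xs) - mu) / (stdev(xs) / n ** 0.5)

# 20,000 replications with n = 5; the spread should be well above 1,
# consistent with Var(t_4) = 4/(4 - 2) = 2
ts = [t_stat(5) for _ in range(20000)]
spread = pvariance(ts)
```

Repeating the experiment with larger n shrinks `spread` toward 1, in line with the t curves approaching the z curve.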
Although the variable of interest is still (X̄ − μ)/(S/√n), we now denote it by T to emphasize that it does not have a standard normal distribution when n is small. Recall that a normal distribution is governed by two parameters, the mean μ and the standard deviation σ. A t distribution is governed by only one parameter, the number of degrees of freedom of the distribution, abbreviated df and denoted by ν. Possible values of ν are the positive integers 1, 2, 3, .... Each different value of ν corresponds to a different t distribution. The density function for a random variable having a t distribution was derived in Section 6.4. It is quite complicated, but fortunately we need concern ourselves only with several of the more important features of the corresponding density curves.

PROPERTIES OF t DISTRIBUTIONS
1. Each t_ν curve is bell-shaped and centered at 0.
2. Each t_ν curve is more spread out than the standard normal (z) curve.
3. As ν increases, the spread of the t_ν curve decreases.
4. As ν → ∞, the sequence of t_ν curves approaches the standard normal curve (so the z curve is often called the t curve with df = ∞).

Recall the notation for values that capture particular upper-tail t-curve areas.

NOTATION Let t_{α,ν} = the number on the measurement axis for which the area under the t curve with ν df to the right of t_{α,ν} is α; t_{α,ν} is called a t critical value.

This notation is illustrated in Figure 8.7. Appendix Table A.5 gives t_{α,ν} for selected values of α and ν. The columns of the table correspond to different values of α. To obtain t_{.05,15}, go to the α = .05 column, look down to the ν = 15 row, and read t_{.05,15} = 1.753. Similarly, t_{.05,22} = 1.717 (.05 column, ν = 22 row), and t_{.01,22} = 2.508.

Figure 8.7 A pictorial definition of t_{α,ν} (a t_ν curve with shaded upper-tail area α to the right of t_{α,ν})

The values of t_{α,ν} exhibit regular behavior as we move across a row or down a column.
For fixed ν, t_{α,ν} increases as α decreases, since we must move farther to the right of zero to capture area α in the tail. For fixed α, as ν is increased (i.e., as we look down any particular column of the t table) the value of t_{α,ν} decreases. This is because a larger value of ν implies a t distribution with smaller spread, so it is not necessary to go so far from zero to capture tail area α. Furthermore, t_{α,ν} decreases more slowly as ν increases. Consequently, the table values are shown in increments of 2 between 30 and 40 df and then jump to ν = 50, 60, 120, and finally ∞. Because t_∞ is the standard normal curve, the familiar z_α values appear in the last row of the table. The rule of thumb suggested earlier for use of the large-sample CI (if n > 40) comes from the approximate equality of the standard normal and t distributions for ν ≥ 40.

The One-Sample t Confidence Interval

The standardized variable T has a t distribution with n − 1 df, and the area under the corresponding t density curve between −t_{α/2,n−1} and t_{α/2,n−1} is 1 − α (area α/2 lies in each tail), so

P(−t_{α/2,n−1} < T < t_{α/2,n−1}) = 1 − α …

Exercises | Section 8.3

37. … and let X̄_new denote the average of these two values. Modify the formula for a PI for a single x value to obtain a PI for X̄_new, and calculate a 95% two-sided interval based on the data.

38. A study of the ability of individuals to walk in a straight line ("Can We Really Walk Straight?" Amer. J. Phys. Anthropol., 1992: 19-27) reported the accompanying data on cadence (strides per second) for a sample of n = 20 randomly selected healthy men.

.95 .85 .92 .95 .93 .86 1.00 .92 .85 .81
.78 .93 .93 1.05 .93 1.06 1.06 .96 .81 .96

A normal probability plot gives substantial support to the assumption that the population distribution of cadence is approximately normal. A descriptive summary of the data from MINITAB follows:

Variable  N   Mean    Median  TrMean  StDev   SEMean
cadence   20  0.9255  0.9300  0.9261  0.0809  0.0181

Min     Max     Q1      Q3
0.7800  1.0600  0.8525  0.9600

a. Calculate and interpret a 95% confidence interval for population mean cadence.
b. Calculate and interpret a 95% prediction interval for the cadence of a single individual randomly selected from this population.

39. A sample of 25 pieces of laminate used in the manufacture of circuit boards was selected and the amount of warpage (in.) under particular conditions was determined for each piece, resulting in a sample mean warpage of .0635 and a sample standard deviation of .0065. Calculate a prediction interval for the amount of warpage of a single piece of laminate …

41. Here are the lengths (in minutes) of the 63 nine-inning games from the first week of the 2001 major league baseball season:

194 160 176 203 187 163 162 183 152 177
177 151 173 188 179 194 149 165 186 187
187 177 187 186 187 173 136 150 173 173
136 153 152 149 152 180 186 166 174 176
…
151 172 216 149 207 212 216 166 190 165
176 158 198

Assume that this is a random sample of nine-inning games (the mean differs by 12 s from the mean for the whole season).
a. Give a 95% confidence interval for the population mean.
b. Give a 95% prediction interval for the length of the next nine-inning game. On the first day of the next week, Boston beat Tampa Bay 3-0 in a nine-inning game of 152 min. Is this within the prediction interval?
c. Compare the two intervals and explain why one is much wider than the other.
d. Explore the issue of normality for the data and explain how this is relevant to parts (a) and (b).

42. A more extensive tabulation of critical values than what appears in this book shows that for the t distribution with 20 df, the areas to the right of the values .687, .860, 1.064, 1.325, and 1.725 are .25, .20, .15, .10, and .05, respectively. What is the confidence level for each of the following three confidence intervals for the mean μ of a normal population distribution? Which of the three intervals would you recommend be used, and why?
a. (x̄ − .687s/√21, x̄ + 1.725s/√21)
b. (x̄ − .860s/√21, x̄ + 1.325s/√21)
c. (x̄ − 1.064s/√21, x̄ + 1.064s/√21)

43. Use the results of Section 6.4 to show that the variable T on which the PI is based does in fact have a t distribution with n − 1 df.

Confidence Intervals for the Variance and Standard Deviation of a Normal Population

Although inferences concerning a population variance σ² or standard deviation σ are usually of less interest than those about a mean or proportion, there are occasions when such procedures are needed. In the case of a normal population distribution, inferences are based on the following result from Section 6.4 concerning the sample variance S².

THEOREM Let X₁, X₂, ..., Xₙ be a random sample from a normal distribution with parameters μ and σ². Then the rv

(n − 1)S²/σ² = Σ(Xᵢ − X̄)²/σ²

has a chi-squared (χ²) probability distribution with n − 1 df.

As discussed in Sections 4.4 and 6.4, the chi-squared distribution is a continuous probability distribution with a single parameter ν, the number of degrees of freedom, with possible values 1, 2, 3, .... To specify inferential procedures that use the chi-squared distribution, recall the notation for critical values from Section 6.4.

NOTATION Let χ²_{α,ν}, called a chi-squared critical value, denote the number on the measurement axis such that α of the area under the chi-squared curve with ν df lies to the right of χ²_{α,ν}.

Because the t distribution is symmetric, it was necessary to tabulate only upper-tail critical values (t_{α,ν} for small values of α). The chi-squared distribution is not symmetric, so Appendix Table A.6 contains values of χ²_{α,ν} for α both near 0 and near 1, as illustrated in Figure 8.9(b). For example, χ²_{.025,14} = 26.119 and χ²_{.95,20} (the 5th percentile) = 10.851.
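The theorem is easy to check by simulation (an illustration of ours, not from the text): if (n − 1)S²/σ² really is chi-squared with n − 1 df, its long-run mean should be n − 1 and its variance 2(n − 1).

```python
import random
from statistics import mean, pvariance, variance

random.seed(2)

def scaled_sample_variance(n, mu=5.0, sigma=2.0):
    """(n - 1) * S^2 / sigma^2 for one normal random sample of size n."""
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    return (n - 1) * variance(xs) / sigma ** 2

# 20,000 samples of size n = 10: chi-squared with 9 df has mean 9, variance 18
vals = [scaled_sample_variance(10) for _ in range(20000)]
m, v = mean(vals), pvariance(vals)
```

Note that the particular values of μ and σ used do not matter; the distribution of (n − 1)S²/σ² is free of both parameters, which is exactly what makes it usable for interval estimation.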
Figure 8.9 χ²_{α,ν} notation illustrated: in (a) the shaded upper-tail area is α; in (b) each shaded tail area is .01

The rv (n − 1)S²/σ² satisfies the two properties on which the general method for obtaining a CI is based: It is a function of the parameter of interest σ², yet its probability distribution (chi-squared) does not depend on this parameter. The area under a chi-squared curve with ν df to the right of χ²_{α/2,ν} is α/2, as is the area to the left of χ²_{1−α/2,ν}. Thus the area captured between these two critical values is 1 − α. As a consequence of this and the theorem just stated,

P(χ²_{1−α/2,n−1} < (n − 1)S²/σ² < χ²_{α/2,n−1}) = 1 − α …

Exercises | Section 8.4

45. … P(χ² < 37.652), where χ² is a chi-squared rv with ν = 25 …

46. Exercise 34 gave a random sample of 20 ACT scores from students taking college freshman calculus. Calculate a 99% CI for the standard deviation of the population distribution. Is this interval valid whatever the nature of the distribution? Explain.

47. Here are the names of 12 orchestra conductors and their performance times in minutes for Beethoven's Ninth Symphony: … all timings. Based on your results, is this true …

48. Refer to the baseball game times in Exercise 41. Calculate an upper confidence bound with confidence level 95% for the population standard deviation of game time. Interpret your interval. Explore the issue of normality for the data and explain how this is relevant to your interval.

Bootstrap Confidence Intervals

How can we find a confidence interval for the mean if the population distribution is not normal and the sample size n is not large? Can we find confidence intervals for other parameters such as the population median or the 90th percentile of the population distribution? The bootstrap, developed by Bradley Efron in the late 1970s, allows us to calculate estimates in situations where statistical theory does not produce a formula for a confidence interval.
The method substitutes heavy computation for theory, and it has become feasible only fairly recently with the availability of fast computers. The bootstrap was introduced in Section 7.1 for applications with known distribution (the parametric bootstrap), but here we are concerned with the case of unknown distribution (the nonparametric bootstrap).

In a student project, Erich Brandt studied tips at a restaurant. Here is a random sample of 30 observed tip percentages:

22.7, 16.3, 13.6, 16.8, 29.9, 15.9, 14.0, 15.0, 14.1, 18.1, 22.8, 27.6, 16.4, 16.1, 19.0, 13.5, 18.9, 20.2, 19.7, 18.2, 15.4, 15.7, 19.0, 11.5, 18.4, 16.0, 16.9, 12.0, 40.1, 19.2

We would like to get a confidence interval for the population mean tip percentage at this restaurant. However, this is not a large sample and there is a problem with positive skewness, as shown in the normal probability plot of Figure 8.10.

Figure 8.10 Normal probability plot from MINITAB of the tip percentages (Mean 18.43, N 30, AD 1.828, P-Value < 0.005)

Most of the tips are between 10% and 20%, but a few big tips cause enough skewness to invalidate the normality assumption. The sample mean is 18.43% and the sample standard deviation is 5.76%. If population normality were plausible, then we could form a confidence interval using the mean and standard deviation calculated from the sample. From Section 8.3, the resulting 95% confidence interval for the population mean would be

x̄ ± t_{.025,n−1}·s/√n = 18.43 ± 2.045(5.76/√30) = 18.43 ± 2.15 = (16.3, 20.6)

How does the bootstrap approach differ from this? For the moment, we regard the 30 observations as constituting a population, and take a large number of random samples (999 is a common choice), each of size 30, from this population. These are samples with replacement, so repetitions are allowed.
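The resampling scheme just described takes only a few lines of code. A sketch using Python's standard library (the text's computations were done in MINITAB and R; the function name and seed here are ours, and a different seed would give a different set of resamples):

```python
import random
from statistics import mean, stdev

# The 30 observed tip percentages
tips = [22.7, 16.3, 13.6, 16.8, 29.9, 15.9, 14.0, 15.0, 14.1, 18.1,
        22.8, 27.6, 16.4, 16.1, 19.0, 13.5, 18.9, 20.2, 19.7, 18.2,
        15.4, 15.7, 19.0, 11.5, 18.4, 16.0, 16.9, 12.0, 40.1, 19.2]

random.seed(0)

def bootstrap_means(data, B=999):
    """Draw B samples of size len(data) WITH replacement; return their means."""
    return [mean(random.choices(data, k=len(data))) for _ in range(B)]

boot = bootstrap_means(tips)
s_boot = stdev(boot)  # plays the role of the text's s_boot (1.043 there)
```

With a different seed the 999 means differ, but `s_boot` should stay close to the estimated standard error s/√n ≈ 1.05.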
For each of these samples we compute the mean (or the median or whatever statistic estimates the population parameter). Then we use the distribution of these 999 means to get a confidence interval for the population mean. To help get a feeling for how this works, here is the first of the 999 samples:

22.8, 16.8, 16.0, 19.0, 19.2, 20.2, 13.6, 15.9, 22.8, 11.5, 15.9, 14.0, 29.9, 19.2, 16.0, 27.6, 14.1, 13.5, 16.8, 15.4, 20.2, 16.4, 20.2, 16.9, 16.8, 22.8, 19.7, 18.2, 22.7, 18.2

This sample has mean x̄₁* = 18.41, where the asterisk emphasizes that this is the mean of a bootstrap sample. Of course, when we take a random sample with replacement, repetitions usually occur as they do here, and this implies that not all of the 30 observations will appear in each sample. After doing this 998 more times and computing the means x̄₂*, ..., x̄₉₉₉* for these 999 samples, we construct Figure 8.11, the histogram of the 999 x̄* values.

Figure 8.11 Histogram of the tip bootstrap distribution, from MINITAB

This describes approximately the sampling distribution of X̄ for samples of 30 from the true tip population. That is, if we could draw the pdf for the true population distribution of X̄ values, then it should look something like the histogram in Figure 8.11. Does the distribution appear to be normal? The histogram is not exactly symmetric, and the distribution looks skewed to the right. Figure 8.12 has the normal probability plot from MINITAB.

Figure 8.12 Normal plot of the tip bootstrap distribution (Mean 18.46, StDev 1.043, N 999, AD 3.101, P-Value < 0.005)

The pattern in this plot gives evidence of slight positive skewness (see Section 4.6). If this plot were straighter, then we could form a 95% confidence interval for the population mean in the following way. Let s_boot
denote the sample standard deviation of the 999 bootstrap means. That is, defining x̄* to be the mean of the 999 bootstrap means,

s²_boot = Σᵢ(x̄ᵢ* − x̄*)² / (999 − 1)

The value of s_boot turns out to be 1.043. The sample mean of the original 30 tip percentages is x̄ = 18.43, giving the 95% confidence interval

x̄ ± z_{.025}·s_boot = 18.43 ± 1.96(1.043) = 18.43 ± 2.04 = (16.4, 20.5)

Notice that this is very similar to the previous interval based on the method of Section 8.3. The difference is mainly due to using the z critical value instead of the t critical value, because the bootstrap standard deviation s_boot = 1.043 is close to the estimated standard error s/√n = 1.052. There should be good agreement if the original data set looks normal. Even if the normality assumption is not satisfied, there should be good agreement if the sample size n is big enough.

The Percentile Interval

In the case that the bootstrap distribution (as represented here by the histogram of Figure 8.11) is normal, the foregoing interval uses the middle 95% of the bootstrap distribution. Because the 999 bootstrap means do not fit a normal curve, we need an alternative approach to finding a confidence interval. To allow for a nonnormal bootstrap distribution, we need to use something other than the standard deviation and the t table to determine the confidence limits. The percentile interval uses the 2.5 percentile and the 97.5 percentile of the bootstrap distribution for confidence limits of a 95% confidence interval. Computationally, one way to find the two percentiles is to sort the 999 means and then use the 25th value from each end.
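Computationally this is just a sort plus indexing. A sketch (our own helper, shown on a synthetic sorted list so the k = 25 rule is easy to see):

```python
def percentile_interval(boot_stats, alpha=0.05):
    """Bootstrap percentile CI: sort the B statistics and take the kth value
    from each end, where k = alpha*(B + 1)/2 (rounded to an integer here)."""
    s = sorted(boot_stats)
    B = len(s)
    k = round(alpha * (B + 1) / 2)
    return s[k - 1], s[B - k]

# With B = 999 and alpha = .05, k = (.05)(1000)/2 = 25,
# so the limits are the 25th value from each end of the sorted list.
demo = list(range(1, 1000))  # stand-in for 999 sorted bootstrap means
lo, hi = percentile_interval(demo)  # picks the 25th and 975th values
```

The same helper applies unchanged to bootstrap medians or any other bootstrapped statistic.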
DEFINITION The bootstrap percentile interval with a confidence level of 100(1 − α)% for a specified parameter is obtained by first generating B bootstrap samples, for each one calculating the value of some particular statistic that estimates the parameter, and sorting these values from smallest to largest. Then we compute k = α(B + 1)/2 and choose the kth value from each end of the sorted list. These two values form the confidence limits for the confidence interval. If k is not an integer, then interpolation can be used, but this is not crucial.

As an example, if α = .05 and B = 999, then k = α(B + 1)/2 = (.05)(999 + 1)/2 = 25.

Example 8.15 (continued) For the tip data the 2.5 percentile is 16.65 and the 97.5 percentile is 20.80, so the 95% bootstrap percentile interval is (16.65, 20.80). Because the bootstrap distribution is positively skewed, the percentile interval is shifted slightly to the right compared to the interval based on a normal bootstrap distribution.

A Refined Interval

When the percentile method is used to obtain a confidence interval, under some circumstances the actual confidence level may differ substantially from the nominal level (the level you think you are getting); in our example, the nominal level was 95%, and the actual level could be quite different from this. There are refined bootstrap intervals that often yield an improvement in this respect. In particular, the BCa (bias-corrected and accelerated) interval, implemented in the R, Stata, and Systat software packages, is a method that corrects for bias. Here bias refers to the difference between the mean of the bootstrap distribution and the value of the estimate based on the original sample. For example, in estimating the mean for the tip data, the mean of the 30 tips in the original sample is 18.43 but the mean of the 999 bootstrap sample means is 18.46, so there is just a slight bias of 18.46 − 18.43 = .03.
The acceleration aspect of the BCa interval is an adjustment for dependence of the standard error of the estimator on the parameter that is being estimated. For example, suppose we are trying to estimate the mean in the case of exponential data. In this case the standard deviation is equal to the mean, and the standard error of X̄ is σ/√n = μ/√n, so the standard error of the estimator X̄ depends strongly on the parameter μ that is being estimated. If the histogram in Figure 8.11 resembled the exponential pdf, we would expect the BCa method to make a substantial correction to the percentile interval.

Example 8.16 (continued) Recall that the percentile interval for the mean of the tip data is (16.65, 20.80). Compared to this, the BCa interval (16.9, 21.8) is shifted a little to the right.

Is the bootstrap guaranteed to work, or is it possible that the method can give grossly incorrect estimates? The key here is how closely the original sample represents the whole distribution of the random variable X. When the sample is small, then there is a possibility that important features of the distribution are not included in the data set. In terms of our 30 observations, the value 40.1% is highly influential. If we drew another sample of 30 observations independent of this sample, the luck of the draw might give no values above 25, and the sample would yield very different conclusions. The bootstrap is a useful method for making inferences from data, but it is dependent on a good sample. If this is all the data that we can get, we will never know how well our sample represents the distribution, and therefore how good our answer is. Of course, no statistical method will give good answers if the sample is not representative of the population.

Bootstrapping the Median

We do have a statistic that is less sensitive to the influence of individual observations. For the 30 tip percentages, the median is 16.85, substantially less than the mean of 18.43.
The mean is pulled upward by the few large values, but these extremes have little effect on the median. In general, the median is less affected by outliers than the mean. However, it is more difficult to get confidence intervals for the median. There is a nice statistic to estimate the standard deviation of the mean (S/√n), but unfortunately there is nothing like this for the median.

Example 8.15 (continued): Let's use the bootstrap method to get a confidence interval for the median of the tip data. We can use the same 999 samples of 30 as we did previously, but now we instead look at the 999 medians. The first sample has mean x̄*₁ = 18.41, whereas its median is x̃*₁ = 17.55. The histogram of this and the other 998 bootstrap medians x̃*₂, ..., x̃*₉₉₉ is shown in Figure 8.13.

[Figure 8.13 Histogram of the bootstrap medians from R]

It should be apparent that the distribution of the 999 bootstrap medians is not normal. As is often the case with the median, the bootstrap distribution takes on just a few values and there are many repeats. Instead of 999 different values, as would be expected if we took 999 samples from a true continuous distribution, here there are only 72 values, and some appear more than 50 times. These are apparent in the normal probability plot, shown in Figure 8.14. In contrast to what MINITAB does, the values here are plotted vertically, so the horizontal segments indicate repeats.

[Figure 8.14 Normal probability plot of the bootstrap medians from R]

The mean of the 999 bootstrap medians is 17.20 with standard deviation .917.
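The heavy repetition is easy to reproduce. This Python sketch (hypothetical skewed data standing in for the tips, since the original observations are not listed here) bootstraps the median 999 times and counts how few distinct values occur:

```python
import random
import statistics
from collections import Counter

random.seed(3)
# Hypothetical sample of 30 values from a continuous, right-skewed distribution.
sample = [round(random.lognormvariate(2.8, 0.3), 2) for _ in range(30)]

B = 999
boot_medians = [
    statistics.median(random.choices(sample, k=len(sample))) for _ in range(B)
]

counts = Counter(boot_medians)
# A median of 30 resampled values is the average of the 15th and 16th order
# statistics, so only a limited set of values can ever occur, and many repeat.
print(len(counts), counts.most_common(1)[0][1])
```

Even though the underlying distribution is continuous, the number of distinct bootstrap medians is far below 999, mirroring the 72 distinct values seen for the tip data.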
Even though the procedure is inappropriate because of nonnormality, we can for comparative purposes use the median x̃ = 16.85 of the original 30 observations together with the bootstrap standard deviation s_boot = .917 to get a confidence interval based on the normal distribution:

x̃ ± z.025 · s_boot = 16.85 ± 1.96(.917) = 16.85 ± 1.80 = (15.1, 18.6)

Because the bootstrap distribution is so nonnormal, it is more appropriate to use the percentile interval, in which the confidence limits for a 95% confidence interval are taken from the 2.5 and 97.5 percentiles of the bootstrap distribution. When the 999 bootstrap medians are sorted, the 25th value is 15.94 and the 25th value from the top is 18.98, so the 95% confidence interval for the population median is (15.94, 18.98). In accord with the nonnormal bootstrap distribution, this interval differs from the interval that assumes normality.

The bias corrected and accelerated BCa refinement gives only a slight change to the percentile interval for the median. To estimate the bias, subtract the median of the original sample from the mean of the bootstrap medians: 17.20 − 16.85 = .35. The resulting refinement of the percentile interval is slight, from (15.94, 18.98) to (15.87, 18.94). ■

We should be a bit uncomfortable with the results of bootstrapping the median. Given that the bootstrap distribution takes on just a few values but the true sampling distribution is continuous, we should worry a little about how well the bootstrap distribution approximates the true sampling distribution. On the other hand, the situation here is nowhere near as bad as it could be. Sometimes, especially when the sample size is smaller, the bootstrap distribution has far fewer values. What can be done to see if the bootstrap results are valid for the median?
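One way to answer that question is a simulation experiment: repeatedly draw samples from a distribution whose mean and median are known, bootstrap each sample, and count how often the percentile intervals cover the true values. A minimal Python sketch (exponential data; B is kept at 199 rather than 999 for speed, so k = .05(200)/2 = 5 values are trimmed from each end):

```python
import math
import random
import statistics

random.seed(4)
n, B, reps = 30, 199, 100
true_mean, true_median = 1.0, math.log(2)   # Exponential with mean 1

hits_mean = hits_median = 0
for _ in range(reps):
    data = [random.expovariate(1.0) for _ in range(n)]
    # Bootstrap distributions of the mean and the median for this sample.
    means = sorted(sum(random.choices(data, k=n)) / n for _ in range(B))
    medians = sorted(statistics.median(random.choices(data, k=n)) for _ in range(B))
    # 95% percentile interval: 5th value from each end of the 199 sorted values.
    hits_mean += means[4] <= true_mean <= means[-5]
    hits_median += medians[4] <= true_median <= medians[-5]

print(hits_mean, hits_median)
```

With only 100 replications the hit counts are themselves random, so they will land near the nominal 95 rather than exactly on it.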
We performed a simulation experiment with data from the exponential distribution, a distribution that is more strongly skewed than the tip percentages. We generated 100 samples, each of size 30, and then took 999 bootstrap samples from each of them. In this way we obtained 95% percentile confidence intervals for the mean and the median from each of the 100 samples. We used the exponential distribution with mean μ = 1/λ = 1, for which the median is μ̃ = ln(2) = .693. In checking each of the 100 confidence intervals for the mean, we found that 93 of them contained the true mean. Similarly, we found that 93 of the confidence intervals for the median contained the true median. It is gratifying to see that, in spite of the strange distribution of the bootstrapped medians, the performance of the percentile confidence intervals is reasonably on target.

The Mean Versus the Median

For the tip percentages, is it better to use the mean or the median? The median is much less affected by the extreme observations in this skewed data set. This suggests that the mean will vary a lot depending on whether a particular sample has outliers. Here, the variability shows up in a higher standard deviation, 1.043, for the 999 bootstrap means as compared to the standard deviation .917 for the 999 bootstrap medians. Furthermore, the percentile interval with 95% confidence for the mean has width 4.15, whereas the interval for the median has a width of only 3.04. In terms of precision, we are better off with the median. For a prospective server at this restaurant, it might also be more meaningful to give the median, the middle tip value in the sense that roughly half are above and half are below.

Of course, it is not always necessary to choose one statistic over the other. Sometimes a case can be made for presenting both the mean and the median.
In the case of salaries, the median salary may be more relevant to an employee, but the mean may be more useful to the employer because the mean is proportional to the total payroll.

Exercises | Section 8.5 (49-57)

49. In a survey, students gave their study time per week (h); here are the 22 values:
15.0 10.0 10.0 15.0 25.0 7.0 3.0 8.0 10.0 10.0 11.0 7.0 5.0 15.0 7.5 7.5 12.0 7.0 9.5 2.5 10.5 6.0
We would like to get a 95% confidence interval for the population mean.
a. Compute the t-based confidence interval of Section 8.3.
b. Check for normality to see if part (a) is valid. Is the sample large enough that the interval might be valid anyway?
c. Generate a bootstrap sample of 999 means.
d. Use the standard deviation for part (c) to get a 95% confidence interval for the population mean.
e. Investigate the distribution of the bootstrap means to see if the CI of part (d) is valid.
f. Use part (c) to form the 95% confidence interval using the percentile method.
g. Say which interval should be used and explain why.

50. We would like to obtain a 95% confidence interval for the population median of the study hours data in Exercise 49.
a. Obtain a bootstrap sample of 999 medians.
b. Use the standard deviation for part (a) to get a 95% confidence interval for the population median.
c. Investigate the distribution of the bootstrap medians and discuss the validity of part (b). Does the distribution take on just a few values?
d. Use part (a) to form a 95% confidence interval for the median using the percentile method.
e. For the study hours data, state your preference between the median and the mean and explain your reasoning.

51. Here are 68 weight gains in pounds for pregnant women from conception to delivery ("Classifying …," Teach. Statist., Autumn 2002: 96-101):
9 14 20 38 21 22 36 38 35 47 35 24 31 28 25 32 23 30 39 26 38 20 21 11 35 42 31 25 … 23 … 38 21 76 22 … 10 19 … 25 31 34 36 35 33 24 44 35 43 32 25 27 31 14 25 16 25 47 35 14 65 40 35 45 27 24
We would like to get a 95% confidence interval for the population mean.
a. Compute the t-based confidence interval of Section 8.3.
b. Display a normal plot. Is it apparent that the data set is not normal, so that the t-based interval is lacking in validity?
c. Generate a bootstrap sample of 999 means.
d. Use the standard deviation for part (c) to get a 95% confidence interval for the population mean.
e. Investigate the distribution of the bootstrap means to see if the CI of part (d) is valid.
f. Use part (c) to form the 95% confidence interval using the percentile method.
g. Compare the intervals. If they are all close, then the bootstrap supports the CI of part (a).

52. We would like to obtain a 95% confidence interval for the population median weight gain using the data in Exercise 51.
a. Obtain a bootstrap sample of 999 medians.
b. Use the standard deviation for part (a) to get a 95% confidence interval for the population median.
c. Investigate the distribution of the bootstrap medians and discuss the validity of part (b). Does the distribution take on just a few values?
d. Use part (a) to form a 95% confidence interval for the median using the percentile method.
e. For the weight gain data, state your preference between the median and the mean and explain your reasoning.

53. Nine Australian soldiers were subjected to extreme conditions, which involved a 100-min walk with a 25-lb pack when the temperature was 40°C (104°F). One of them overheated (above 39°C) and was removed from the study. Here are the rectal Celsius temperatures of the other eight at the end of the walk ("Neural Network Training on Human Body Core Temperature Data," Combatant Protection and Nutrition Branch, Aeronautical and Maritime Research Laboratory of Australia, DSTO TN-0241, 1999):
38.4 38.7 39.0 38.5 38.5 39.0 38.5 38.6
We would like to get a 95% confidence interval for the population mean.
a. Compute the t-based confidence interval of Section 8.3.
b. Check for the validity of part (a).
c. Generate a bootstrap sample of 999 means.
d. Use the standard deviation for part (c) to get a 95% confidence interval for the population mean.
e. Investigate the distribution of the bootstrap means to see if part (d) is valid.
f. Use part (c) to form the 95% confidence interval using the percentile method.
g. Compare the intervals and explain your preference.
h. Based on your knowledge of normal body temperature, would you say that body temperature can be influenced by environment?

54. We would like to obtain a 95% confidence interval for the population median temperature using the data in Exercise 53.
a. Obtain a bootstrap sample of 999 medians.
b. Use the standard deviation for part (a) to get a 95% confidence interval for the population median.
c. Investigate the distribution of the bootstrap medians and discuss the validity of part (b). Does the distribution take on just a few values?
d. Use part (a) to form a 95% confidence interval for the median using the percentile method.
e. Compare all the intervals for the mean and median. Are they fairly similar? How do you explain that?

55. If you go to a major league baseball game, how long do you expect the game to be? From the 2,429 games played in 2001, here is a random sample of 25 times in minutes:
352 150 164 167 225 159 142 182 229 163 188 197 189 235 161 195 177 166 195 160 154 130 189 188 225
This is one of those rare instances in which we can do a confidence interval and compare with the true population mean. The mean of all 2,429 lengths is 178.29 (almost 3 h).
a. Compute the t-based confidence interval of Section 8.3.
b. Use a normal plot to see if part (a) is valid.
c. Generate a bootstrap sample of 999 means.
d. Use the standard deviation for part (c) to get a 95% confidence interval for the population mean.
e. Investigate the distribution of the bootstrap means to see if the CI of part (d) is valid.
f. Use part (c) to form the 95% confidence interval using the percentile method.
g. Say which interval should be used and explain why. Does your interval include the true value, 178.29?

56. The median might be a more meaningful statistic for the length-of-game data in Exercise 55. The median of all 2,429 lengths is 175.
a. Obtain a bootstrap sample of 999 medians.
b. Use the standard deviation for part (a) to get a 95% confidence interval for the population median.
c. Investigate the distribution of the bootstrap medians and discuss the validity of part (b). Does the distribution take on just a few values?
d. Use part (a) to form a 95% confidence interval for the median using the percentile method. Compare your answer with the population median, 175.
e. Comparing the percentile intervals for the mean and the median, is there much difference in their widths? If not, and you are forced to choose between them for the length-of-game data, which do you choose and why?

57. We would like to obtain a 95% confidence interval for the study time population standard deviation using the data in Exercise 49.
a. Obtain a bootstrap sample of 999 standard deviations and use it to form a 95% confidence interval for the population standard deviation using the percentile method.
b. Recalling that it requires normal data, use the method of Section 8.4 to obtain a 95% confidence interval for the population standard deviation. Discuss normality for the study hours data. How does this interval compare with the percentile interval?

Supplementary Exercises

58. According to the article "Fatigue Testing of Condoms" (Polymer Testing, 2009: 567-571), "tests currently used for condoms are surrogates for the challenges they face in use," including a test for holes, an inflation test, a package seal test, and tests of dimensions and lubricant quality. The investigators developed a new test that adds cyclic strain to a level well below breakage and determines the number of cycles to break. A sample of 20 condoms of one particular type resulted in a sample mean number of cycles of … and a sample standard deviation of 607. Calculate and interpret a confidence interval at the 95% confidence level for the true average number of cycles to break. [Note: The article presented the results of hypothesis tests based on the t distribution; the validity of these depends on assuming normal population distributions.]

59. The reaction time (RT) to a stimulus is the interval of time between the stimulus presentation and the start of a discernible movement of a certain type. The article "Relationship of Reaction Time and Movement Time in a Gross Motor Skill" (Percept. Motor Skills, 1973: 453-454) reports that the sample average RT for 16 experienced swimmers to a pistol start was .214 s and the sample standard deviation was .036 s.
a. Making any necessary assumptions, derive a 90% CI for true average RT for all experienced swimmers.
b. Calculate a 90% upper confidence bound for the standard deviation of the reaction time distribution.
c. Predict RT for another such individual in a way that conveys information about precision and reliability.

60. For each of 18 preserved cores from oil-wet carbonate reservoirs, the amount of residual gas saturation after a solvent injection was measured at water flood-out. Observations, in percentage of pore volume, were
… 44.5 35.7 33.5 39.3 22.0 51.2
(See "Relative Permeability Studies of Gas-Water Flow Following Solvent Injection in Carbonate Rocks," Soc. Petrol. Eng. J., 1976: 23-30.)
a. Construct a boxplot of this data, and comment on any interesting features.
b. Is it plausible that the sample was selected from a normal population distribution?
c. Calculate a 98% CI for the true average amount of residual gas saturation.

61. A manufacturer of college textbooks is interested in estimating the strength of the bindings produced by a particular binding machine. Strength can be measured by recording the force required to pull the pages from the binding. If this force is measured in pounds, how many books should be tested to estimate the average force required to break the binding to within .1 lb with 95% confidence? Assume that σ is known to be .8.

62. The Pew Forum on Religion and Public Life reported on Dec. 9, 2009 that in a survey of 2003 American adults, 25% said they believed in astrology.
a. Calculate and interpret a confidence interval at the 99% confidence level for the proportion of all adult Americans who believe in astrology.
b. What sample size would be required for the width of a 99% CI to be at most .05 irrespective of the value of p̂?
c. The upper limit of the CI in (a) gives an upper confidence bound for the proportion being estimated. What is the corresponding confidence level?

63. There were 12 first-round heats in the men's 100-m race at the 1996 Atlanta Summer Olympics. Here are the reaction times in seconds (time to first movement) of the top four finishers of each heat. The first 12 are the 12 winners, then the second-place finishers, and so on.

1st: …
2nd: .168 .140 .214 .163 .202 .173 .175 .154 .160 .169 .148 .144
3rd: .159 .145 .187 .222 .190 .158 .202 .162 .156 .141 .167 .155
4th: .156 .164 .160 .145 .163 .170 .182 .187 .148 .183 .162 .186

Because reaction time has little if any relationship to the order of finish, it is reasonable to view the times as a random sample.
a. Estimate the population mean in a way that conveys information about precision and reliability. [Note: Σxᵢ = 8.08100, Σxᵢ² = 1.37813.] Do the runners seem to react faster than the swimmers in Exercise 59?
b. Calculate a 95% confidence interval for the population proportion of reaction times that are below .15. Reaction times below .10 are regarded as false starts, meaning that the runner anticipates the starter's gun, because such times are considered physically impossible. Linford Christie, who had a reaction time of .160 in placing second in his first-round heat, had two such false starts in the finals and was disqualified.

64. Aphid infestation of fruit trees can be controlled either by spraying with pesticide or by inundation with ladybugs. In a particular area, four different groves of fruit trees are selected for experimentation. The first three groves are sprayed with pesticides 1, 2, and 3, respectively, and the fourth is treated with ladybugs, with the following results on yield:

Treatment   nᵢ (number of trees)   x̄ᵢ (bushels/tree)   sᵢ
1           100                    10.5                 1.5
2            90                    10.0                 1.3
3           100                    10.1                 1.8
4           120                    10.7                 1.6

Let μᵢ denote the true average yield (bushels/tree) after receiving the ith treatment. Then
θ = (μ₁ + μ₂ + μ₃)/3 − μ₄
measures the difference in true average yields between treatment with pesticides and treatment with ladybugs. When n₁, n₂, n₃, and n₄ are all large, the estimator θ̂ obtained by replacing each μᵢ by X̄ᵢ is approximately normal. Use this to derive a large-sample 100(1 − α)% CI for θ, and compute the 95% interval for the given data.

65. It is important that face masks used by firefighters be able to withstand high temperatures because firefighters commonly work in temperatures of 200-500°F. In a test of one type of mask, 11 of 55 masks had lenses pop out at 250°. Construct a 90% CI for the true proportion of masks of this type whose lenses would pop out at 250°.

66. A journal article reports that a sample of size 5 was used as a basis for calculating a 95% CI for the true average natural frequency (Hz) of delaminated beams of a certain type. The resulting interval was (229.764, 233.504). You decide that a confidence level of 99% is more appropriate than the 95% level used. What are the limits of the 99% interval? [Hint: Use the center of the interval and its width to determine x̄ and s.]

67. Chronic exposure to asbestos fiber is a well-known health hazard. The article "The Acute Effects of Chrysotile Asbestos Exposure on Lung Function" (Envir. Res., 1978: 360-372) reports results of a study based on a sample of construction workers who had been exposed to asbestos over a prolonged period. Among the data given in the article were the following (ordered) values of pulmonary compliance (cm³/cm H₂O) for each of 16 subjects 8 months after the exposure period (pulmonary compliance is a measure of lung elasticity, or how effectively the lungs are able to inhale and exhale):
167.9 180.8 184.8 189.8 194.8 200.2 201.9 206.9 207.2 208.4 226.3 227.7 228.5 232.4 239.8 258.6
a. Is it plausible that the population distribution is normal?
b. Compute a 95% CI for the true average pulmonary compliance after such exposure.

68. In Example 7.9, we introduced the concept of a censored experiment in which n components are put on test and the experiment terminates as soon as r of the components have failed. Suppose component lifetimes are independent, each having an exponential distribution with parameter λ. Let Y₁ denote the time at which the first failure occurs, Y₂ the time at which the second failure occurs, and so on, so that Tᵣ = Y₁ + ⋯ + Yᵣ + (n − r)Yᵣ is the total accumulated lifetime at termination. Then it can be shown that 2λTᵣ has a chi-squared distribution with 2r df. Use this fact to develop a 100(1 − α)% CI formula for true average lifetime 1/λ. Compute a 95% CI from the data in Example 7.9.

69. Exercise 63 from Chapter 7 introduced "regression through the origin" to relate a dependent variable y to an independent variable x. The assumption there was that for any fixed x value, the dependent variable is a random variable Y with mean value βx and variance σ² (so that Y has mean value zero when x = 0). The data consists of n independent (xᵢ, Yᵢ) pairs, where each Yᵢ is normally distributed with mean βxᵢ and variance σ². The likelihood is then a product of normal pdf's with different mean values but the same variance.
a. Show that the mle of β is β̂ = ΣxᵢYᵢ/Σxᵢ².
b. Verify that the mle of (a) is unbiased.
c. Obtain an expression for V(β̂) and then for σ_β̂.
d. For purposes of obtaining a precise estimate of β, is it better to have the xᵢ's all close to 0 or spread out quite far above 0? Explain your reasoning.
e. Let S² = Σ(Yᵢ − β̂xᵢ)²/(n − 1), which is analogous to our earlier sample variance S² = Σ(Xᵢ − X̄)²/(n − 1) for a univariate sample X₁, ..., Xₙ (in which case X̄ is a natural prediction for each Xᵢ). Then it can be shown that T = (β̂ − β)/(S/√Σxᵢ²) has a t distribution based on n − 1 df. Use this to obtain a CI formula for estimating β, and calculate a 95% CI using the data from the cited exercise.

70. Let X₁, X₂, ..., Xₙ be a random sample from a uniform distribution on the interval [0, θ], so that f(x; θ) = 1/θ for 0 ≤ x ≤ θ. Then if Y = max(Xᵢ), it can be shown that the rv U = Y/θ has density function f_U(u) = nu^(n−1) for 0 ≤ u ≤ 1.
a. Use f_U(u) to verify that
P((α/2)^(1/n) ≤ Y/θ ≤ (1 − α/2)^(1/n)) = 1 − α
and use this to derive a 100(1 − α)% CI for θ.
b. Verify that P(α^(1/n) ≤ Y/θ ≤ 1) = 1 − α, and derive a 100(1 − α)% CI for θ based on this probability statement.
c. Which of the two intervals derived previously is shorter? If your waiting time for a morning bus is uniformly distributed and observed waiting times are x₁ = 4.2, x₂ = 3.5, x₃ = 1.7, x₄ = 1.2, and x₅ = 2.4, derive a 95% CI for θ by using the shorter of the two intervals.

71. Let 0 ≤ γ ≤ α. Then a 100(1 − α)% CI for μ when n is large is
(x̄ − z_γ · s/√n, x̄ + z_(α−γ) · s/√n)
The choice γ = α/2 yields the usual interval derived in Section 8.2; if γ ≠ α/2, this confidence interval is not symmetric about x̄. The width of the interval is w = s(z_γ + z_(α−γ))/√n. Show that w is minimized for the choice γ = α/2, so that the symmetric interval is the shortest. [Hints: (a) By definition of z_α, Φ(z_α) = 1 − α, so that z_α = Φ⁻¹(1 − α); (b) the relationship between the derivative of a function y = f(x) and the inverse function x = f⁻¹(y) is (d/dy) f⁻¹(y) = 1/f′(x).]

72. Suppose x₁, x₂, ..., xₙ are observed values resulting from a random sample from a symmetric but possibly heavy-tailed distribution. Let x̃ and f_s denote the sample median and fourth spread, respectively. Chapter 11 of Understanding Robust and Exploratory Data Analysis (see the bibliography in Chapter 7) suggests the following robust 95% CI for the population mean (point of symmetry):
x̃ ± (conservative t critical value) · f_s/√n
The value of the quantity in parentheses is 2.10 for n = 10, 1.94 for n = 20, and 1.91 for n = 30. Compute this CI for the restaurant tip data of Example 8.15, and compare to the t CI appropriate …

73. … Use the result of part (a) to obtain a 95% lower confidence bound for the probability that a lifetime exceeds 100 min.

74. Let θ₁ and θ₂ denote the mean weights for animals of two different species. An investigator wishes to estimate the ratio θ₁/θ₂. Unfortunately the species are extremely rare, so the estimate will be based on finding a single animal of each species. Let Xᵢ denote the weight of the species i animal (i = 1, 2), assumed to be normally distributed with mean θᵢ and standard deviation 1.
a. What is the distribution of the variable
h(X₁, X₂; θ₁, θ₂) = (θ₂X₁ − θ₁X₂)/√(θ₁² + θ₂²)?
Show that this variable depends on θ₁ and θ₂ only through θ₁/θ₂ (divide numerator and denominator by θ₂).
b. … The resulting confidence set is an interval provided that … > 1.96 …, whereas if this inequality is not satisfied, the resulting confidence set is the complement of an interval.

75. The one-sample CI for a normal mean and PI for a single observation from a normal distribution were both based on the central t distribution. A CI for a particular percentile (e.g., the 1st percentile or the 95th percentile) of a normal population distribution is based on the noncentral t distribution. A particular distribution of this type is specified by both df and the value of the noncentrality parameter δ (δ = 0 gives the central t distribution). The key result is that the variable
T = [X̄ − (μ + (z percentile)·σ)]/(S/√n)
has a noncentral t distribution with df = n − 1 and δ = −(z percentile)·√n. Let t_(.025,ν,δ) and t_(.975,ν,δ) denote the critical values that capture upper-tail area .025 and lower-tail area .025, respectively, under the noncentral t curve with ν df and noncentrality parameter δ (when δ = 0, t_.975 = −t_.025, since central t distributions are symmetric about 0).
a. Use the given information to obtain a formula for a 95% confidence interval for the (100p)th percentile of a normal population distribution.
b. For δ = 6.58 and df = 15, t_.025 and t_.975 are (from MINITAB) 4.1690 and 10.9684, respectively. Use this information to obtain a 95% CI for the 5th percentile of the beer alcohol distribution considered in Example ….

76. The one-sample t CI for μ is also a confidence interval for the population median μ̃ when the population distribution is normal. We now develop a CI for μ̃ that is valid whatever the shape of the population distribution as long as it is continuous. Let X₁, ..., Xₙ be a random sample from the distribution and let Y₁, ..., Yₙ denote the corresponding order statistics (smallest observation, second smallest, and so on).
a. What is P(Yₙ < μ̃)? [Hint: What condition involving all of the Xᵢ's is equivalent to the largest being smaller than the population median?]
b. …
c. What is P(Y₁ < μ̃ < Yₙ)? What does this imply about the confidence level associated with the CI (y₁, yₙ)? Determine the confidence interval and the associated confidence level for these observations: 31.2, 36.0, 31.5, 28.7, 37.2, 35.4, 33.3, 39.3, 42.0, 29.9. Also calculate the one-sample t CI using the same level and compare the two intervals.

77. Consider the situation described in the previous exercise.
a. What is P({X₁ < μ̃} ∩ {X₂ > μ̃} ∩ ⋯ ∩ {Xₙ > μ̃}), that is, the probability that only the first observation is smaller than the median?
b. What is the probability that exactly one of the n observations is smaller than the median?
c. What is P(μ̃ < Y₂)? [Hint: The event in parentheses occurs if all n of the observations exceed the median. How else can it occur?] What does this imply about the confidence level associated with the CI (y₂, yₙ₋₁)? Determine the confidence level and CI for the data given in the previous exercise.

78. The previous two exercises considered a CI for a population median μ̃ based on the n order statistics from a random sample. Let's now consider a prediction interval for the next observation Xₙ₊₁.
a. What is P(Xₙ₊₁ < X₁)? What is P({Xₙ₊₁ < X₁} ∩ {Xₙ₊₁ < X₂})?
b. What is P(Xₙ₊₁ < Y₁)? What is P(Xₙ₊₁ > Yₙ)?
c. What is P(Y₁ < Xₙ₊₁ < Yₙ)? What does this say about the prediction level for the PI (y₁, yₙ)? Determine the prediction level and interval for the data given in the previous exercise.

79. Consider CIs for two different parameters θ₁ and θ₂, and let Aᵢ (i = 1, 2) denote the event that the value of θᵢ is included in the random interval that results in the CI. Thus P(Aᵢ) = .95.
a. Suppose that the data on which the CI for θ₁ is based is independent of the data used to obtain the CI for θ₂ (e.g., we might have θ₁ = μ, the population mean height for American females, and θ₂ = p, the proportion of all Kodak digital cameras that don't need warranty service). What can be said about the simultaneous (i.e., joint) confidence level for the two intervals? That is, how confident can we be that the first interval contains the value of θ₁ and that the second contains the value of θ₂? [Hint: Consider P(A₁ ∩ A₂).]
b. Now suppose the data for the first CI is not independent of the data for the second one. …

9 Tests of Hypotheses Based on a Single Sample

9.1 Hypotheses and Test Procedures

… Yet another example of a hypothesis is the assertion that the stopping distance for a car under particular conditions has a normal distribution. Hypotheses of this latter sort will be considered in Chapter 13. In this and the next several chapters, we concentrate on hypotheses about parameters.

In any hypothesis-testing problem, there are two contradictory hypotheses under consideration. One hypothesis might be the claim μ = $311 and the other μ ≠ $311, or the two contradictory statements might be p ≥ .50 and p < .50. The objective is to decide, based on sample information, which of the two hypotheses is correct. There is a familiar analogy to this in a criminal trial. One claim is the assertion that the accused individual is innocent. In the U.S. judicial system, this is the claim that is initially believed to be true. Only in the face of strong evidence to the contrary should the jury reject this claim in favor of the alternative assertion that the accused is guilty. In this sense, the claim of innocence is the favored or protected hypothesis, and the burden of proof is placed on those who believe in the alternative claim. Similarly, in testing statistical hypotheses, the problem will be formulated so that one of the claims is initially favored.
This initially favored claim will not be rejected in favor of the alternative claim unless sample evidence contradicts it and provides strong support for the alternative assertion.

DEFINITION The null hypothesis, denoted by H₀, is the claim that is initially assumed to be true (the "prior belief" claim). The alternative hypothesis, denoted by Hₐ, is the assertion that is contradictory to H₀. The null hypothesis will be rejected in favor of the alternative hypothesis only if sample evidence suggests that H₀ is false. If the sample does not strongly contradict H₀, we will continue to believe in the plausibility of the null hypothesis. The two possible conclusions from a hypothesis-testing analysis are then reject H₀ or fail to reject H₀.

A test of hypotheses is a method for using sample data to decide whether the null hypothesis should be rejected. Thus we might test H₀: μ = .75 against the alternative Hₐ: μ ≠ .75. Only if sample data strongly suggests that μ is something other than .75 should the null hypothesis be rejected. In the absence of such evidence, H₀ should not be rejected, since it is still quite plausible.

Sometimes an investigator does not want to accept a particular assertion unless and until data can provide strong support for the assertion. As an example, suppose a company is considering putting a new additive in the dried fruit that it produces. The true average shelf life with the current additive is known to be 200 days. With μ denoting the true average life for the new additive, the company would not want to make a change unless evidence strongly suggested that μ exceeds 200. An appropriate problem formulation would involve testing H₀: μ = 200 against Hₐ: μ > 200. The conclusion that a change is justified is identified with Hₐ, and it would take conclusive evidence to justify rejecting H₀ and switching to the new additive.
Scientific research often involves trying to decide whether a current theory should be replaced by a more plausible and satisfactory explanation of the phenomenon under investigation. A conservative approach is to identify the current theory with H₀ and the researcher's alternative explanation with Hₐ. Rejection of the current theory will then occur only when evidence is much more consistent with the new theory. In many situations, Hₐ is referred to as the "research hypothesis," since it is the claim that the researcher would really like to validate. The word null means "of no value, effect, or consequence," which suggests that H₀ should be identified with the hypothesis of no change (from current opinion), no difference, no improvement, and so on. Suppose, for example, that 10% of all computer circuit boards produced by a manufacturer during a recent period were defective. An engineer has suggested a change in the production process in the belief that it will result in a reduced defective rate. Let p denote the true proportion of defective boards resulting from the changed process. Then the research hypothesis, on which the burden of proof is placed, is the assertion that p < .10. Thus the alternative hypothesis is Hₐ: p < .10.

In our treatment of hypothesis testing, H₀ will generally be stated as an equality claim. If θ denotes the parameter of interest, the null hypothesis will have the form H₀: θ = θ₀, where θ₀ is a specified number called the null value of the parameter (the value claimed for θ by the null hypothesis). As an example, consider the circuit board situation just discussed. The suggested alternative hypothesis was Hₐ: p < .10, the claim that the defective rate is reduced by the process modification. A natural choice of H₀ in this situation is the claim that p ≥ .10, according to which the new process is either no better or worse than the one currently used. We will instead consider H₀: p = .10 versus Hₐ: p < .10.
The rationale for using this simplified null hypothesis is that any reasonable decision procedure for deciding between H0: p = .10 and Ha: p < .10 will also be reasonable for deciding between the claim that p ≥ .10 and Ha. The use of a simplified H0 is preferred because it has certain technical benefits, which will be apparent shortly.

The alternative to the null hypothesis H0: θ = θ0 will look like one of the following three assertions:
1. Ha: θ > θ0 (in which case the implicit null hypothesis is θ ≤ θ0)
2. Ha: θ < θ0 (so the implicit null hypothesis states that θ ≥ θ0)
3. Ha: θ ≠ θ0

For example, let σ denote the standard deviation of the distribution of outside diameters (inches) for an engine piston. If the decision was made to use the piston unless sample evidence conclusively demonstrated that σ > .0001 in., the appropriate hypotheses would be H0: σ = .0001 versus Ha: σ > .0001. The number θ0 that appears in both H0 and Ha (it separates the alternative from the null) is called the null value.

Test Procedures

A test procedure is a rule, based on sample data, for deciding whether to reject H0. A test of H0: p = .10 versus Ha: p < .10 in the circuit board problem might be based on examining a random sample of n = 200 boards. Let X denote the number of defective boards in the sample, a binomial random variable; x represents the observed value of X. If H0 is true, E(X) = np = 200(.10) = 20, whereas we can expect fewer than 20 defective boards if Ha is true. A value x just a bit below 20 does not strongly contradict H0, so it is reasonable to reject H0 only if x is substantially less than 20. One such test procedure is to reject H0 if x ≤ 15 and not reject H0 otherwise. This procedure has two constituents: (1) a test statistic, or function of the sample data used to make a decision, and (2) a rejection region consisting of those x values for which H0 will be rejected in favor of Ha.
For the rule just suggested, the rejection region consists of x = 0, 1, 2, ..., 15. H0 will not be rejected if x = 16, 17, ..., 199, or 200.

A test procedure is specified by the following:
1. A test statistic, a function of the sample data on which the decision (reject H0 or do not reject H0) is to be based
2. A rejection region, the set of all test statistic values for which H0 will be rejected

The null hypothesis will then be rejected if and only if the observed or computed test statistic value falls in the rejection region.

As another example, suppose a cigarette manufacturer claims that the average nicotine content μ of brand B cigarettes is (at most) 1.5 mg. It would be unwise to reject the manufacturer's claim without strong contradictory evidence, so an appropriate problem formulation is to test H0: μ = 1.5 versus Ha: μ > 1.5. Consider a decision rule based on analyzing a random sample of 32 cigarettes. Let X̄ denote the sample average nicotine content. If H0 is true, E(X̄) = μ = 1.5, whereas if H0 is false, we expect X̄ to exceed 1.5. Strong evidence against H0 is provided by a value x̄ that considerably exceeds 1.5. Thus we might use X̄ as a test statistic along with the rejection region x̄ ≥ 1.60.

In both the circuit board and nicotine examples, the choice of test statistic and form of the rejection region make sense intuitively. However, the choice of cutoff value used to specify the rejection region is somewhat arbitrary. Instead of rejecting H0: p = .10 in favor of Ha: p < .10 when x ≤ 15, we could use the rejection region x ≤ 14. For this region, H0 would not be rejected if 15 defective boards are observed, whereas this occurrence would lead to rejection of H0 if the initially suggested region is employed. Similarly, the rejection region x̄ ≥ 1.55 might be used in the nicotine problem in place of the region x̄ ≥ 1.60.
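The chance that the circuit board rule "reject H0 if x ≤ 15" rejects a true H0: p = .10 can be computed directly from the binomial distribution. A minimal Python sketch (the helper name binom_cdf is ours, not the book's):

```python
from math import comb

def binom_cdf(x, n, p):
    """B(x; n, p) = P(X <= x) for X ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

# Circuit board test: H0: p = .10 vs Ha: p < .10 with n = 200 boards.
# The rule "reject H0 if x <= 15" rejects a true H0 with probability
# P(X <= 15 | p = .10), even though E(X) = 20 under H0.
prob_reject_true_h0 = binom_cdf(15, 200, 0.10)
```

Shifting the cutoff from 15 to 14 shrinks this probability, which is exactly the arbitrariness discussed above: the cutoff choice trades off the two kinds of mistaken conclusions examined in the next subsection.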
Errors in Hypothesis Testing

The basis for choosing a particular rejection region lies in an understanding of the errors that one might be faced with in drawing a conclusion. Consider the rejection region x ≤ 15 in the circuit board problem. Even when H0: p = .10 is true, it might happen that an unusual sample results in x = 13, so that H0 is erroneously rejected. On the other hand, even when Ha: p < .10 is true, an unusual sample might yield x = 20, in which case H0 would not be rejected, again an incorrect conclusion. Thus it is possible that H0 may be rejected when it is true or that H0 may not be rejected when it is false. These possible errors are not consequences of a foolishly chosen rejection region. Either one of these two errors might result when the region x ≤ 14 is employed, or indeed when any other sensible region is used.

DEFINITION A type I error consists of rejecting the null hypothesis H0 when it is true. A type II error involves not rejecting H0 when H0 is false.

In the nicotine scenario, a type I error consists of rejecting the manufacturer's claim that μ = 1.5 when it is actually true. If the rejection region x̄ ≥ 1.60 is employed, it might happen that x̄ = 1.63 even when μ = 1.5, resulting in a type I error. Alternatively, it may be that H0 is false and yet x̄ = 1.52 is observed, leading to H0 not being rejected (a type II error).

In the best of all possible worlds, test procedures for which neither type of error is possible could be developed. However, this ideal can be achieved only by basing a decision on an examination of the entire population, which is almost always impractical. The difficulty with using a procedure based on sample data is that because of sampling variability, an unrepresentative sample may result. Even though E(X̄) = μ, the observed value x̄ may differ substantially from μ (at least if n is small).
Thus when μ = 1.5 in the nicotine situation, x̄ may be much larger than 1.5, resulting in erroneous rejection of H0. Alternatively, it may be that μ = 1.6 yet an x̄ much smaller than this is observed, leading to a type II error.

Instead of demanding error-free procedures, we must look for procedures for which either type of error is unlikely to occur. That is, a good procedure is one for which the probability of making either type of error is small. The choice of a particular rejection region cutoff value fixes the probabilities of type I and type II errors. These error probabilities are traditionally denoted by α and β, respectively. Because H0 specifies a unique value of the parameter, there is a single value of α. However, there is a different value of β for each value of the parameter consistent with Ha.

Example 9.1 An automobile model is known to sustain no visible damage 25% of the time in 10-mph crash tests. A modified bumper design has been proposed in an effort to increase this percentage. Let p denote the proportion of all 10-mph crashes with this new bumper that result in no visible damage. The hypotheses to be tested are H0: p = .25 (no improvement) versus Ha: p > .25. The test will be based on an experiment involving n = 20 independent crashes with prototypes of the new design. Intuitively, H0 should be rejected if a substantial number of the crashes show no damage. Consider the following test procedure:

Test statistic: X = the number of crashes with no visible damage
Rejection region: R8 = {8, 9, 10, ..., 19, 20}; that is, reject H0 if x ≥ 8, where x is the observed value of the test statistic

This rejection region is called upper-tailed because it consists only of large values of the test statistic.

When H0 is true, X has a binomial probability distribution with n = 20 and p = .25.
Then

α = P(type I error) = P(H0 is rejected when it is true)
  = P[X ≥ 8 when X ~ Bin(20, .25)] = 1 − B(7; 20, .25) = 1 − .898 = .102

That is, when H0 is actually true, roughly 10% of all experiments consisting of 20 crashes would result in H0 being incorrectly rejected (a type I error).

In contrast to α, there is not a single β. Instead, there is a different β for each different p that exceeds .25. Thus there is a value of β for p = .3 [in which case X ~ Bin(20, .3)], another value of β for p = .5, and so on. For example,

β(.3) = P(type II error when p = .3)
      = P(H0 is not rejected when it is false because p = .3)
      = P[X ≤ 7 when X ~ Bin(20, .3)] = B(7; 20, .3) = .772

When p is actually .3 rather than .25 (a "small" departure from H0), roughly 77% of all experiments of this type would result in H0 being incorrectly not rejected! The accompanying table displays β for selected values of p (each calculated for the rejection region R8). Clearly, β decreases as the value of p moves farther to the right of the null value .25. Intuitively, the greater the departure from H0, the more likely it is that such a departure will be detected.

p    |  .3    .4    .5    .6    .7    .8
β(p) | .772  .416  .132  .021  .001  .000

The proposed test procedure is still reasonable for testing the more realistic null hypothesis that p ≤ .25. In this case, there is no longer a single α, but instead there is an α for each p that is at most .25: α(.25), α(.23), α(.20), α(.15), and so on. It is easily verified, though, that α(p) < α(.25) = .102 if p < .25. That is, the largest value of α occurs for the boundary value .25 between H0 and Ha. Thus if α is small for the simplified null hypothesis, it will also be as small or smaller for the more realistic H0.

Example 9.2 The drying time of a type of paint under specified test conditions is known to be normally distributed with mean value 75 min and standard deviation 9 min. Chemists have proposed a new additive designed to decrease average drying time.
It is believed that drying times with this additive will remain normally distributed with σ = 9. Because of the expense associated with the additive, evidence should strongly suggest an improvement in average drying time before such a conclusion is adopted. Let μ denote the true average drying time when the additive is used. The appropriate hypotheses are H0: μ = 75 versus Ha: μ < 75. Only if H0 can be rejected will the additive be declared successful and used.

Experimental data is to consist of drying times from n = 25 test specimens. Let X1, ..., X25 denote the 25 drying times, a random sample of size 25 from a normal distribution with mean value μ and standard deviation σ = 9. The sample mean drying time X̄ then has a normal distribution with expected value μ_X̄ = μ and standard deviation σ_X̄ = σ/√n = 9/√25 = 1.80. When H0 is true, μ_X̄ = 75, so only an x̄ value substantially less than 75 would strongly contradict H0. A reasonable rejection region has the form x̄ ≤ c, where the cutoff value c is suitably chosen. Consider the choice c = 70.8, so that the test procedure consists of the test statistic X̄ and rejection region x̄ ≤ 70.8. Because the rejection region consists only of small values of the test statistic, the test is said to be lower-tailed. Calculation of α and β now involves a routine standardization of X̄ followed by reference to the standard normal probabilities of Appendix Table A.3:

α = P(type I error) = P(H0 is rejected when it is true)
  = P(X̄ ≤ 70.8 when X̄ ~ normal with μ_X̄ = 75, σ_X̄ = 1.8)
  = Φ((70.8 − 75)/1.8) = Φ(−2.33) = .01

β(72) = P(type II error when μ = 72)
      = P(H0 is not rejected when it is false because μ = 72)
      = P(X̄ > 70.8 when X̄ ~ normal with μ_X̄ = 72, σ_X̄ = 1.8)
      = 1 − Φ((70.8 − 72)/1.8) = 1 − Φ(−.67) = 1 − .2514 = .7486

β(70) = 1 − Φ((70.8 − 70)/1.8) = 1 − Φ(.44) = .3300

β(67) = .0174

For the specified test procedure, only 1% of all experiments carried out as described will result in H0 being rejected when it is actually true.
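The α and β calculations for the c = 70.8 rejection region can be checked numerically. A sketch using Python's math.erf to build the standard normal cdf Φ (the table values above round z to two decimals, so these results may differ slightly in the third decimal):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf Phi(z), via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

se = 9 / sqrt(25)                      # sigma_xbar = 1.80

alpha   = phi((70.8 - 75) / se)        # P(Xbar <= 70.8 | mu = 75), about .01
beta_72 = 1 - phi((70.8 - 72) / se)    # about .75
beta_70 = 1 - phi((70.8 - 70) / se)    # about .33
beta_67 = 1 - phi((70.8 - 67) / se)    # about .017
```

The same two lines of code, with 70.8 replaced by any other cutoff c, give the α and β values for that alternative rejection region.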
However, the chance of a type II error is very large when μ = 72 (only a small departure from H0), somewhat less when μ = 70, and quite small when μ = 67 (a very substantial departure from H0). These error probabilities are illustrated in Figure 9.1. Notice that α is computed using the probability distribution of the test statistic when H0 is true, whereas determination of β requires knowing the test statistic's distribution when H0 is false.

As in Example 9.1, if the more realistic null hypothesis μ ≥ 75 is considered, there is an α for each parameter value for which H0 is true: α(75), α(75.8), α(76.5), and so on. It is easily verified, though, that α(75) is the largest of all these type I error probabilities. Focusing on the boundary value amounts to working explicitly with the "worst case."

The specification of a cutoff value for the rejection region in the examples just considered was somewhat arbitrary. Use of the rejection region R8 = {8, 9, ..., 20} in Example 9.1 resulted in α = .102, β(.3) = .772, and β(.5) = .132. Many would think these error probabilities intolerably large. Perhaps they can be decreased by changing the cutoff value.

Example 9.1 (continued) Let us use the same experiment and test statistic X as previously described in the automobile bumper problem, but now consider the rejection region R9 = {9, 10, ..., 20}.
Since X still has a binomial distribution with parameters n = 20 and p,

α = P(H0 is rejected when p = .25) = P[X ≥ 9 when X ~ Bin(20, .25)]
  = 1 − B(8; 20, .25) = .041

[Figure 9.1 α and β illustrated for Example 9.2: (a) the distribution of X̄ when μ = 75 (H0 true); (b) the distribution of X̄ when μ = 72 (H0 false); (c) the distribution of X̄ when μ = 70 (H0 false)]

The type I error probability has been decreased by using the new rejection region. However, a price has been paid for this decrease:

β(.3) = P(H0 is not rejected when p = .3) = P[X ≤ 8 when X ~ Bin(20, .3)]
      = B(8; 20, .3) = .887
β(.5) = B(8; 20, .5) = .252

Both of these β's are larger than the corresponding error probabilities .772 and .132 for the region R8. In retrospect, this is not surprising; α is computed by summing probabilities of test statistic values in the rejection region, whereas β is the probability that X falls in the complement of the rejection region. Making the rejection region smaller must therefore decrease α while increasing β for any fixed alternative value of the parameter.

Example 9.2 (continued) The use of cutoff value c = 70.8 in the paint-drying example resulted in a very small value of α (.01) but rather large β's. Consider the same experiment and test statistic X̄ with the new rejection region x̄ ≤ 72.
Because X̄ is still normally distributed with mean value μ_X̄ = μ and σ_X̄ = 1.8,

α = P(H0 is rejected when it is true)
  = P[X̄ ≤ 72 when X̄ ~ N(75, 1.8²)]
  = Φ((72 − 75)/1.8) = Φ(−1.67) = .0475 ≈ .05

β(72) = P(H0 is not rejected when μ = 72)
      = P(X̄ > 72 when X̄ is a normal rv with mean 72 and standard deviation 1.8)
      = 1 − Φ((72 − 72)/1.8) = 1 − Φ(0) = .5

β(70) = 1 − Φ((72 − 70)/1.8) = 1 − Φ(1.11) = .1335

β(67) = .0027

The change in cutoff value has made the rejection region larger (it includes more x̄ values), resulting in a decrease in β for each fixed μ less than 75. However, α for this new region has increased from the previous value .01 to approximately .05. If a type I error probability this large can be tolerated, though, the second region (c = 72) is preferable to the first (c = 70.8) because of the smaller β's.

The results of these examples can be generalized in the following manner.

PROPOSITION Suppose an experiment and a sample size are fixed and a test statistic is chosen. Then decreasing the size of the rejection region to obtain a smaller value of α results in a larger value of β for any particular parameter value consistent with Ha.

This proposition says that once the test statistic and n are fixed, there is no rejection region that will simultaneously make both α and all β's small. A region must be chosen to effect a compromise between α and β.

Because of the suggested guidelines for specifying H0 and Ha, a type I error is usually more serious than a type II error (this can always be achieved by proper choice of the hypotheses). The approach adhered to by most statistical practitioners is then to specify the largest value of α that can be tolerated and find a rejection region having that value of α rather than anything smaller. This makes β as small as possible subject to the bound on α. The resulting value of α is often referred to as the significance level of the test.
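The recipe "specify the largest tolerable α, then use the largest rejection region whose α does not exceed it" can be carried out by direct search in the binomial setting of Example 9.1. A Python sketch (the helper names binom_cdf and upper_tail_cutoff are ours):

```python
from math import comb

def binom_cdf(x, n, p):
    """B(x; n, p) = P(X <= x) for X ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def upper_tail_cutoff(n, p0, alpha_bound):
    """Smallest c with P(X >= c | p = p0) <= alpha_bound.
    This gives the largest upper-tailed region {c, ..., n} satisfying the
    bound on alpha, hence the smallest betas subject to that bound."""
    for c in range(n + 1):
        if 1 - binom_cdf(c - 1, n, p0) <= alpha_bound:
            return c
    return n + 1

# Example 9.1 bumper test (n = 20, p0 = .25):
c = upper_tail_cutoff(20, 0.25, 0.10)      # cutoff for the bound alpha <= .10
alpha_at_c = 1 - binom_cdf(c - 1, 20, 0.25)
```

With the bound .10, the search lands on the region R9 = {9, ..., 20} with α ≈ .041, because R8's α = .102 just misses the bound; for a discrete test statistic the achievable α typically falls strictly below the stated bound.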
Traditional levels of significance are .10, .05, and .01, although the level in any particular problem will depend on the seriousness of a type I error: the more serious this error, the smaller the significance level should be. The corresponding test procedure is called a level α test (e.g., a level .05 test or a level .01 test). A test with significance level α is one for which the type I error probability is controlled at the specified level.

Consider the situation mentioned previously in which μ was the true average nicotine content of brand B cigarettes. The objective is to test H0: μ = 1.5 versus Ha: μ > 1.5 based on a random sample X1, X2, ..., X32 of nicotine contents. Suppose the distribution of nicotine content is known to be normal with σ = .20. It follows that X̄ is normally distributed with mean value μ_X̄ = μ and standard deviation σ_X̄ = .20/√32 = .0354.

Rather than use X̄ itself as the test statistic, let's standardize X̄ assuming that H0 is true.

Test statistic: Z = (X̄ − 1.5)/(σ/√n) = (X̄ − 1.5)/.0354

Z expresses the distance between X̄ and its expected value when H0 is true as some number of standard deviations. For example, z = 3 results from an x̄ that is 3 standard deviations larger than we would have expected it to be were H0 true. Rejecting H0 when x̄ "considerably" exceeds 1.5 is equivalent to rejecting H0 when z "considerably" exceeds 0. That is, the form of the rejection region is z ≥ c. Let's now determine c so that α = .05. When H0 is true, Z has a standard normal distribution. Thus

α = P(type I error) = P(rejecting H0 when it is true)
  = P[Z ≥ c when Z ~ N(0, 1)]

The value c must capture upper-tail area .05 under the z curve. Either from Section 4.3 or directly from Appendix Table A.3, c = z.05 = 1.645. Notice that z ≥ 1.645 is equivalent to x̄ − 1.5 ≥ (.0354)(1.645), that is, x̄ ≥ 1.56.
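The critical value z.05 = 1.645 can be recovered numerically by inverting Φ. A sketch using bisection (the helper names phi and z_critical are ours):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf Phi(z)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def z_critical(alpha):
    """c capturing upper-tail area alpha under the z curve, by bisection."""
    lo, hi = 0.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if 1 - phi(mid) > alpha:
            lo = mid          # tail area still too big: move right
        else:
            hi = mid
    return (lo + hi) / 2

c = z_critical(0.05)                       # about 1.645
xbar_cutoff = 1.5 + c * 0.20 / sqrt(32)    # about 1.56: reject H0 if xbar >= this
```

The same z_critical call with α = .025 or .005 reproduces the familiar values 1.96 and 2.58 used for two-tailed tests later in the chapter.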
Then β is the probability that X̄ < 1.56 and can be calculated for any value of μ exceeding 1.5.

Exercises Section 9.1 (1–14)

1. For each of the following assertions, state whether it is a legitimate statistical hypothesis and why:
a. H: σ > 100
b. H: x̄ = 45
c. H: s ≤ .20
d. H: σ1/σ2 < 1
e. H: X̄ − Ȳ = 5
f. H: λ ≤ .01, where λ is the parameter of an exponential distribution used to model component lifetime

2. For the following pairs of assertions, indicate which do not comply with our rules for setting up hypotheses and why (the subscripts 1 and 2 differentiate between quantities for two different populations or samples):
a. H0: μ = 100, Ha: μ > 100
b. H0: σ = 20, Ha: σ ≤ 20
c. H0: p ≠ .25, Ha: p = .25
d. H0: μ1 − μ2 = 25, Ha: μ1 − μ2 > 100
e. H0: S1² = S2², Ha: S1² ≠ S2²
f. H0: μ = 120, Ha: μ = 150
g. H0: σ1/σ2 = 1, Ha: σ1/σ2 ≠ 1
h. H0: p1 − p2 = −.1, Ha: p1 − p2 < −.1

3. To determine whether the girder welds in a new performing arts center meet specifications, a random sample of welds is selected, and tests are conducted on each weld in the sample. Weld strength is measured as the force required to break the weld. Suppose the specifications state that mean strength of welds should exceed 100 lb/in²; the inspection team decides to test H0: μ = 100 versus Ha: μ > 100. Explain why it might be preferable to use this Ha rather than μ < 100.

4. Let μ denote the true average radioactivity level (picocuries per liter). The value 5 pCi/L is considered the dividing line between safe and unsafe water. Would you recommend testing H0: μ = 5 versus Ha: μ > 5 or H0: μ = 5 versus Ha: μ < 5? Explain your reasoning. [Hint: Think about the consequences of a type I and type II error for each possibility.]

5. Before agreeing to purchase a large order of polyethylene sheaths for a particular type of high-pressure oil-filled submarine power cable, a company wants to see conclusive evidence that the true standard deviation of sheath thickness is less than .05 mm. What hypotheses should be tested, and why? In this context, what are the type I and type II errors?

6. Many older homes have electrical systems that use fuses rather than circuit breakers. A manufacturer of 40-amp fuses wants to make sure that the mean amperage at which its fuses burn out is in fact 40. If the mean amperage is lower than 40, customers will complain because the fuses require replacement too often. If the mean amperage is higher than 40, the manufacturer might be liable for damage to an electrical system due to fuse malfunction. To verify the amperage of the fuses, a sample of fuses is to be selected and inspected. If a hypothesis test were to be performed on the resulting data, what null and alternative hypotheses would be of interest to the manufacturer? Describe type I and type II errors in the context of this problem situation.

7. Water samples are taken from water used for cooling as it is being discharged from a power plant into a river. It has been determined that as long as the mean temperature of the discharged water is at most 150°F, there will be no negative effects on the river's ecosystem. To investigate whether the plant is in compliance with regulations that prohibit a mean discharge-water temperature above 150°, 50 water samples will be taken at randomly selected times and the temperature of each sample recorded. The resulting data will be used to test the hypotheses H0: μ = 150° versus Ha: μ > 150°. In the context of this situation, describe type I and type II errors. Which type of error would you consider more serious? Explain.

8. A regular type of laminate is currently being used by a manufacturer of circuit boards. A special laminate has been developed to reduce warpage. The regular laminate will be used on one sample of specimens and the special laminate on another sample, and the amount of warpage will then be determined for each specimen. The manufacturer will then switch to the special laminate only if it can be demonstrated that the true average amount of warpage for that laminate is less than for the regular laminate. State the relevant hypotheses, and describe the type I and type II errors in the context of this situation.

9. Two different companies have applied to provide cable television service in a region. Let p denote the proportion of all potential subscribers who favor the first company over the second. Consider testing H0: p = .5 versus Ha: p ≠ .5 based on a random sample of 25 individuals. Let X denote the number in the sample who favor the first company and x represent the observed value of X.
a. Which of the following rejection regions is most appropriate and why?
R1 = {x: x ≤ 7 or x ≥ 18}, R2 = {x: x ≤ 8}, R3 = {x: x ≥ 17}
b. In the context of this problem situation, describe what type I and type II errors are.
c. What is the probability distribution of the test statistic X when H0 is true? Use it to compute the probability of a type I error.
d. Compute the probability of a type II error for the selected region when p = .3, again when p = .4, and also for both p = .6 and p = .7.
e. Using the selected region, what would you conclude if 6 of the 25 queried favored company 1?

10. For healthy individuals the level of prothrombin in the blood is approximately normally distributed with mean 20 mg/100 mL and standard deviation 4 mg/100 mL. Low levels indicate low clotting ability. In studying the effect of gallstones on prothrombin, the level of each patient in a sample is measured to see if there is a deficiency. Let μ be the true average level of prothrombin for gallstone patients.
a. What are the appropriate null and alternative hypotheses?
b. Let X̄ denote the sample average level of prothrombin in a sample of n = 20 randomly selected gallstone patients. Consider the test procedure with test statistic X̄ and rejection region x̄ ≤ 17.92. What is the probability distribution of the test statistic when H0 is true? What is the probability of a type I error for the test procedure?
c. What is the probability distribution of the test statistic when μ = 16.7? Using the test procedure of part (b), what is the probability that gallstone patients will be judged not deficient in prothrombin, when in fact μ = 16.7 (a type II error)?
d. How would you change the test procedure of part (b) to obtain a test with significance level .05? What impact would this change have on the error probability of part (c)?
e. Consider the standardized test statistic Z = (X̄ − 20)/(σ/√n) = (X̄ − 20)/.8944. What are the values of Z corresponding to the rejection region of part (b)?

11. The calibration of a scale is to be checked by weighing a 10-kg test specimen 25 times. Suppose that the results of different weighings are independent of one another and that the weight on each trial is normally distributed with σ = .200 kg. Let μ denote the true average weight reading on the scale.
a. What hypotheses should be tested?
b. Suppose the scale is to be recalibrated if either x̄ ≥ 10.1032 or x̄ ≤ 9.8968. What is the probability that recalibration is carried out when it is actually unnecessary?
c. What is the probability that recalibration is judged unnecessary when in fact μ = 10.1? When μ = 9.8?
d. Let z = (x̄ − 10)/(σ/√n). For what value c is the rejection region of part (b) equivalent to the "two-tailed" region either z ≥ c or z ≤ −c?
e. If the sample size were only 10 rather than 25, how should the procedure of part (d) be altered so that α = .05?
f. Using the test of part (e), what would you conclude from the following sample data?
9.981  10.006  9.857  10.107  9.888
9.728  10.439  10.214  10.190  9.793
g. Re-express the test procedure of part (b) in terms of the standardized test statistic Z = (X̄ − 10)/(σ/√n).

12. A new design for the braking system on a certain type of car has been proposed. For the current system, the true average braking distance at 40 mph under specified conditions is known to be 120 ft. It is proposed that the new design be implemented only if sample data strongly indicates a reduction in true average braking distance for the new design.
a. Define the parameter of interest and state the relevant hypotheses.
b. Suppose braking distance for the new system is normally distributed with σ = 10. Let X̄ denote the sample average braking distance for a random sample of 36 observations. Which of the following rejection regions is appropriate: R1 = {x̄: x̄ ≥ 124.80}, R2 = {x̄: x̄ ≤ 115.20}, R3 = {x̄: either x̄ ≥ 125.13 or x̄ ≤ 114.87}?
c. What is the significance level for the appropriate region of part (b)? How would you change the region to obtain a test with α = .001?
d. What is the probability that the new design is not implemented when its true average braking distance is actually 115 ft and the appropriate region from part (b) is used?
e. Let Z = (X̄ − 120)/(σ/√n). What is the significance level for the rejection region {z: z ≤ −2.33}? For the region {z: z ≤ −2.88}?

13. Let X1, ..., Xn denote a random sample from a normal population distribution with a known value of σ.
a. For testing the hypotheses H0: μ = μ0 versus Ha: μ > μ0 (where μ0 is a fixed number), show that the test with test statistic X̄ and rejection region x̄ ≥ μ0 + 2.33σ/√n has significance level .01.
b. Suppose the procedure of part (a) is used to test H0: μ ≤ μ0 versus Ha: μ > μ0. If μ0 = 100, n = 25, and σ = 5, what is the probability of committing a type I error when μ = 99? When μ = 98? In general, what can be said about the probability of a type I error when the actual value of μ is less than μ0? Verify your assertion.

14. Reconsider the situation of Exercise 11 and suppose the rejection region is {x̄: x̄ ≥ 10.1004 or x̄ ≤ 9.8940} = {z: z ≥ 2.51 or z ≤ −2.65}.
a. What is α for this procedure?
b. What is β when μ = 10.1? When μ = 9.9? Is this desirable?

9.2 Tests About a Population Mean

The general discussion in Chapter 8 of confidence intervals for a population mean μ focused on three different cases. We now develop test procedures for these same three cases.

Case I: A Normal Population with Known σ

Although the assumption that the value of σ is known is rarely met in practice, this case provides a good starting point because of the ease with which general procedures and their properties can be developed. The null hypothesis in all three cases will state that μ has a particular numerical value, the null value, which we will denote by μ0. Let X1, ..., Xn represent a random sample of size n from the normal population. Then the sample mean X̄ has a normal distribution with expected value μ_X̄ = μ and standard deviation σ_X̄ = σ/√n. When H0 is true, μ_X̄ = μ0. Consider now the statistic Z obtained by standardizing X̄ under the assumption that H0 is true:

Z = (X̄ − μ0)/(σ/√n)

Substitution of the computed sample mean x̄ gives z, the distance between x̄ and μ0 expressed in "standard deviation units." For example, if the null hypothesis is H0: μ = 100, σ_X̄ = σ/√n = 10/√25 = 2.0, and x̄ = 103, then the test statistic value is z = (103 − 100)/2.0 = 1.5. That is, the observed value of x̄ is 1.5 standard deviations (of X̄) above what we expect it to be when H0 is true. The statistic Z is a natural measure of the distance between X̄, the estimator of μ, and its expected value when H0 is true. If this distance is too great in a direction consistent with Ha, the null hypothesis should be rejected.

Suppose first that the alternative hypothesis has the form Ha: μ > μ0. Then an x̄ value less than μ0 certainly does not provide support for Ha.
Such an x̄ corresponds to a negative value of z (since x̄ − μ0 is negative and the divisor σ/√n is positive). Similarly, an x̄ value that exceeds μ0 by only a small amount (corresponding to a z which is positive but small) does not suggest that H0 should be rejected in favor of Ha. The rejection of H0 is appropriate only when x̄ considerably exceeds μ0, that is, when the z value is positive and large. In summary, the appropriate rejection region, based on the test statistic Z rather than X̄, has the form z ≥ c.

As discussed in Section 9.1, the cutoff value c should be chosen to control the probability of a type I error at the desired level α. This is easily accomplished because the distribution of the test statistic Z when H0 is true is the standard normal distribution (that's why μ0 was subtracted in standardizing). The required cutoff c is the z critical value that captures upper-tail area α under the standard normal curve. As an example, let c = 1.645, the value that captures tail area .05 (z.05 = 1.645). Then,

α = P(type I error) = P(H0 is rejected when H0 is true)
  = P[Z ≥ 1.645 when Z ~ N(0, 1)] = 1 − Φ(1.645) = .05

More generally, the rejection region z ≥ z_α has type I error probability α. The test procedure is upper-tailed because the rejection region consists only of large values of the test statistic.

Analogous reasoning for the alternative hypothesis Ha: μ < μ0 suggests a rejection region of the form z ≤ c, where c is a suitably chosen negative number (x̄ is far below μ0 if and only if z is quite negative). Because Z has a standard normal distribution when H0 is true, taking c = −z_α yields P(type I error) = α. This is a lower-tailed test. For example, z.10 = 1.28 implies that the rejection region z ≤ −1.28 specifies a test with significance level .10.

Finally, when the alternative hypothesis is Ha: μ ≠ μ0, H0 should be rejected if x̄ is too far to either side of μ0. This is equivalent to rejecting H0 either if z ≥ c or if z ≤ −c. Suppose we desire α = .05.
Then,

.05 = P(Z ≥ c or Z ≤ −c when Z has a standard normal distribution)
    = Φ(−c) + 1 − Φ(c) = 2[1 − Φ(c)]

Thus c is such that 1 − Φ(c), the area under the standard normal curve to the right of c, is .025 (and not .05!). From Section 4.3 or Appendix Table A.3, c = 1.96, and the rejection region is z ≥ 1.96 or z ≤ −1.96. For any α, the two-tailed rejection region z ≥ z_{α/2} or z ≤ −z_{α/2} has type I error probability α (since area α/2 is captured under each of the two tails of the z curve). Again, the key reason for using the standardized test statistic Z is that because Z has a known distribution when H0 is true (standard normal), a rejection region with desired type I error probability is easily obtained by using an appropriate critical value.

The test procedure for Case I is summarized in the accompanying box, and the corresponding rejection regions are illustrated in Figure 9.2.

Null hypothesis: H0: μ = μ0
Test statistic value: z = (x̄ − μ0)/(σ/√n)

Alternative Hypothesis    Rejection Region for Level α Test
Ha: μ > μ0                z ≥ z_α (upper-tailed test)
Ha: μ < μ0                z ≤ −z_α (lower-tailed test)
Ha: μ ≠ μ0                either z ≥ z_{α/2} or z ≤ −z_{α/2} (two-tailed test)

[Figure 9.2 Rejection regions for z tests: (a) upper-tailed test; (b) lower-tailed test; (c) two-tailed test]

Use of the following sequence of steps is recommended when testing hypotheses about a parameter.

1. Identify the parameter of interest and describe it in the context of the problem situation.
2. Determine the null value and state the null hypothesis.
3. State the appropriate alternative hypothesis.
4. Give the formula for the computed value of the test statistic (substituting the null value and the known values of any other parameters, but not those of any sample-based quantities).
5. State the rejection region for the selected significance level α.
6. Compute any necessary sample quantities, substitute into the formula for the test statistic value, and compute that value.
7. Decide whether H0 should be rejected and state this conclusion in the problem context.

The formulation of hypotheses (steps 2 and 3) should be done before examining the data.

Example: A manufacturer of sprinkler systems used for fire protection in office buildings claims that the true average system-activation temperature is 130°F. A sample of n = 9 systems, when tested, yields a sample average activation temperature of 131.08°F. If the distribution of activation temperatures is normal with standard deviation 1.5°F, does the data contradict the manufacturer's claim at significance level α = .01?

1. Parameter of interest: μ = true average activation temperature.
2. Null hypothesis: H0: μ = 130 (null value = μ0 = 130).
3. Alternative hypothesis: Ha: μ ≠ 130 (a departure from the claimed value in either direction is of concern).
4. Test statistic value: z = (x̄ − μ0)/(σ/√n) = (x̄ − 130)/(1.5/√n)
5. Rejection region: The form of Ha implies use of a two-tailed test with rejection region either z ≥ z.005 or z ≤ −z.005. From Section 4.3 or Appendix Table A.3, z.005 = 2.58, so we reject H0 if either z ≥ 2.58 or z ≤ −2.58.
6. Substituting n = 9 and x̄ = 131.08,

   z = (131.08 − 130)/(1.5/√9) = 1.08/.5 = 2.16

   That is, the observed sample mean is a bit more than 2 standard deviations above what would have been expected were H0 true.
7. The computed value z = 2.16 does not fall in the rejection region (−2.58 < 2.16 < 2.58), so H0 cannot be rejected at significance level .01. The data does not give strong support to the claim that the true average differs from the design value of 130.
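As a quick numerical check of steps 4–7 (a sketch, not from the book; it assumes Python with scipy installed), the two-tailed z test above can be reproduced in a few lines:

```python
from math import sqrt

from scipy import stats

# Two-tailed z test of H0: mu = 130 vs Ha: mu != 130, sigma known (Case I)
n, xbar, mu0, sigma, alpha = 9, 131.08, 130.0, 1.5, 0.01

z = (xbar - mu0) / (sigma / sqrt(n))      # standardized test statistic
z_crit = stats.norm.ppf(1 - alpha / 2)    # z_{alpha/2}, here z_.005
reject = abs(z) >= z_crit                 # two-tailed rejection rule

print(round(z, 2), round(z_crit, 2), reject)  # 2.16 2.58 False
```

Since |2.16| < 2.58, the code reaches the same conclusion as step 7: H0 is not rejected at level .01.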
Another view of the analysis in the previous example involves calculating a 99% CI for μ based on Equation 8.5:

x̄ ± 2.58·σ/√n = 131.08 ± 2.58(1.5/√9) = 131.08 ± 1.29 = (129.79, 132.37)

Notice that the interval includes μ0 = 130, and it is not hard to see that the 99% CI excludes μ0 if and only if the two-tailed hypothesis test rejects H0 at level .01. In general, the 100(1 − α)% CI excludes μ0 if and only if the two-tailed hypothesis test rejects H0 at level α. Although we will not always call attention to it, this kind of relationship between hypothesis tests and confidence intervals will occur over and over in the remainder of the book. It should be intuitively reasonable that the CI will exclude a value when the corresponding test rejects the value. There is a similar relationship between lower-tailed tests and upper confidence bounds, and also between upper-tailed tests and lower confidence bounds.

β and Sample Size Determination

The z tests for Case I are among the few in statistics for which there are simple formulas available for β, the probability of a type II error. Consider first the upper-tailed test with rejection region z ≥ zα. This is equivalent to x̄ ≥ μ0 + zα·σ/√n, so H0 will not be rejected if x̄ < μ0 + zα·σ/√n. Now let μ′ denote a particular value of μ that exceeds the null value μ0. Then,

β(μ′) = P(H0 is not rejected when μ = μ′)
      = P(X̄ < μ0 + zα·σ/√n when μ = μ′)
      = P[ (X̄ − μ′)/(σ/√n) < zα + (μ0 − μ′)/(σ/√n) when μ = μ′ ]
      = Φ( zα + (μ0 − μ′)/(σ/√n) )

where Φ(z) denotes the standard normal cdf. As μ′ increases, μ0 − μ′ becomes more negative, so β(μ′) will be small when μ′ greatly exceeds μ0. The type II error probabilities for the lower-tailed and two-tailed tests are obtained in a similar way:

Alternative Hypothesis    β(μ′)
Ha: μ > μ0                Φ( zα + (μ0 − μ′)/(σ/√n) )
Ha: μ < μ0                1 − Φ( −zα + (μ0 − μ′)/(σ/√n) )
Ha: μ ≠ μ0                Φ( zα/2 + (μ0 − μ′)/(σ/√n) ) − Φ( −zα/2 + (μ0 − μ′)/(σ/√n) )

The sample size n for which a level α test also has β(μ′) = β at the alternative value μ′ is

n = [ σ(zα + zβ)/(μ0 − μ′) ]²      for a one-tailed (upper or lower) test
n = [ σ(zα/2 + zβ)/(μ0 − μ′) ]²    for a two-tailed test (an approximate solution)

Example 9.7: Let μ denote the true average tread life of a certain type of tire.
Consider testing H0: μ = 30,000 versus Ha: μ > 30,000 based on a sample of size n = 16 from a normal population distribution with σ = 1500. A test with α = .01 requires zα = z.01 = 2.33. The probability of making a type II error when μ = 31,000 is

β(31,000) = Φ( 2.33 + (30,000 − 31,000)/(1500/√16) ) = Φ(−.34) = .3669

Since z.1 = 1.28, the requirement that the level .01 test also have β(31,000) = .1 necessitates

n = [ 1500(2.33 + 1.28)/(30,000 − 31,000) ]² = (−5.42)² = 29.32

The sample size must be an integer, so n = 30 tires should be used.

Case II: Large-Sample Tests

When the sample size is large, the z tests for Case I are easily modified to yield valid test procedures without requiring either a normal population distribution or known σ. The key result was used in Chapter 8 to justify large-sample confidence intervals: A large n implies that the sample standard deviation s will be close to σ for most samples, so that the standardized variable

Z = (X̄ − μ)/(S/√n)

has approximately a standard normal distribution. Substitution of the null value μ0 in place of μ yields the test statistic

Z = (X̄ − μ0)/(S/√n)

which has approximately a standard normal distribution when H0 is true. The use of the rejection regions given previously for Case I (e.g., z ≥ zα when the alternative hypothesis is Ha: μ > μ0) then results in test procedures for which the significance level is approximately (rather than exactly) α. The rule of thumb n > 40 will again be used to characterize a large sample size.

Example 9.8: A sample of bills for meals was obtained at a restaurant (by Erich Brandt). For each of 70 bills the tip was found as a percentage of the raw bill (before taxes). Does it appear that the population mean tip percentage for this restaurant exceeds the standard 15%?
Here are the 70 tip percentages:

14.21 20.24 20.10 14.94 15.69 15.04 12.04 20.16 17.85 16.35
19.12 20.37 15.29 18.39 27.55 16.01 10.94 13.52 17.42 14.48
29.87 17.92 19.74 22.73 14.56 15.16 16.09 16.42 19.07 13.74
13.46 16.79 19.03 19.19 19.23 12.39 16.89 18.93 13.56 17.70
11.48 13.96 21.58 11.94 19.02 17.73 20.07 40.09 19.88 22.79
15.23 16.09 19.19 11.91 18.21 15.37 16.31 16.03 48.77 12.31
21.53 12.76 18.07 14.11 15.86 20.67 15.66 18.54 27.88 13.81

Figure 9.3 MINITAB descriptive summary for the tip data of Example 9.8. (Anderson–Darling normality test: A² = 4.47, P-value < .005. N = 70, mean 17.986, StDev 5.937, variance 35.247, skewness 2.9391, kurtosis 12.0154; minimum 10.940, 1st quartile 14.540, median 16.840, 3rd quartile 19.358. 95% CIs: mean (16.570, 19.402), median (15.913, 18.402), StDev (5.090, 7.124).)

Figure 9.3 shows a descriptive summary obtained from MINITAB. The sample mean tip percentage is greater than 15. Notice that the distribution is positively skewed because there are some very large tips (and a normal probability plot therefore does not exhibit a linear pattern), but the large-sample z test does not require a normal population distribution.

1. μ = true average tip percentage
2. H0: μ = 15
3. Ha: μ > 15
4. z = (x̄ − 15)/(s/√n)
5. Using a test with significance level .05, H0 will be rejected if z ≥ 1.645 (an upper-tailed test).
6. With n = 70, x̄ = 17.99, and s = 5.937,

   z = (17.99 − 15)/(5.937/√70) = 2.99/.7096 = 4.21

7. Since 4.21 ≥ 1.645, H0 is rejected. There is evidence that the population mean tip percentage exceeds 15%.
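The large-sample z test of Example 9.8 can be verified from the summary statistics alone. The following sketch (not part of the text; scipy is assumed available) also computes the upper-tail area beyond the observed z, the P-value idea taken up in Section 9.4:

```python
from math import sqrt

from scipy import stats

# Upper-tailed large-sample z test of H0: mu = 15 vs Ha: mu > 15 (Case II)
n, xbar, s, mu0, alpha = 70, 17.99, 5.937, 15.0, 0.05

z = (xbar - mu0) / (s / sqrt(n))           # s replaces the unknown sigma
p_value = 1 - stats.norm.cdf(z)            # upper-tail area beyond z
reject = z >= stats.norm.ppf(1 - alpha)    # compare with z_alpha = 1.645

print(round(z, 2), reject)                 # 4.21 True
```

The tiny upper-tail area beyond z = 4.21 explains why the conclusion is so clear-cut at level .05.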
Determination of β and the necessary sample size for these large-sample tests can be based either on specifying a plausible value of σ and using the Case I formulas (even though s is used in the test) or on using the methods to be introduced shortly in connection with Case III.

Case III: A Normal Population Distribution with Unknown σ

When n is small, the Central Limit Theorem (CLT) can no longer be invoked to justify the use of a large-sample test. We faced this same difficulty in obtaining a small-sample confidence interval (CI) for μ in Chapter 8. Our approach here will be the same one used there: We will assume that the population distribution is at least approximately normal and describe test procedures whose validity rests on this assumption. If an investigator has good reason to believe that the population distribution is quite nonnormal, a distribution-free test from Chapter 14 can be used. Alternatively, a statistician can be consulted regarding procedures valid for specific families of population distributions other than the normal family. Or a bootstrap procedure can be developed.

The key result on which tests for a normal population mean are based was used in Chapter 8 to derive the one-sample t CI: If X1, X2, …, Xn is a random sample from a normal distribution, the standardized variable

T = (X̄ − μ)/(S/√n)

has a t distribution with n − 1 degrees of freedom (df). Consider testing H0: μ = μ0 against Ha: μ > μ0 by using the test statistic T = (X̄ − μ0)/(S/√n). That is, the test statistic results from standardizing X̄ under the assumption that H0 is true (using S/√n, the estimated standard deviation of X̄, rather than σ/√n). When H0 is true, the test statistic has a t distribution with n − 1 df. Knowledge of the test statistic's distribution when H0 is true (the "null distribution") allows us to construct a rejection region for which the type I error probability is controlled at the desired level.
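A small simulation makes the null-distribution claim concrete: if samples really do come from a normal distribution with mean μ0, then T should exceed the upper-tail t critical value with probability α. This is an illustrative sketch only (the sample size, parameter values, and seed below are arbitrary choices, and numpy/scipy are assumed available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n, mu0, sigma, alpha = 5, 25.0, 5.0, 0.05
t_crit = stats.t.ppf(1 - alpha, df=n - 1)   # t_{.05, 4} = 2.132

# Draw many samples under H0 and estimate P(T >= t critical value)
reps = 200_000
samples = rng.normal(mu0, sigma, size=(reps, n))
T = (samples.mean(axis=1) - mu0) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

print(round((T >= t_crit).mean(), 3))       # close to alpha = .05
```

The empirical rejection rate lands near .05, as the theory promises, even though S varies from sample to sample.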
In particular, use of the upper-tail t critical value tα,n−1 to specify the rejection region t ≥ tα,n−1 implies that

P(type I error) = P(H0 is rejected when it is true)
               = P(T ≥ tα,n−1 when T has a t distribution with n − 1 df)
               = α

The test statistic is really the same here as in the large-sample case but is labeled T to emphasize that its null distribution is a t distribution with n − 1 df rather than the standard normal (z) distribution. The rejection region for the t test differs from that for the z test only in that a t critical value tα,n−1 replaces the z critical value zα. Similar comments apply to alternatives for which a lower-tailed or two-tailed test is appropriate.

THE ONE-SAMPLE t TEST

Null hypothesis: H0: μ = μ0
Test statistic value: t = (x̄ − μ0)/(s/√n)

Alternative Hypothesis    Rejection Region for a Level α Test
Ha: μ > μ0                t ≥ tα,n−1 (upper-tailed)
Ha: μ < μ0                t ≤ −tα,n−1 (lower-tailed)
Ha: μ ≠ μ0                either t ≥ tα/2,n−1 or t ≤ −tα/2,n−1 (two-tailed)

Example: A well-designed and safe workplace can contribute greatly to increased productivity. It is especially important that workers not be asked to perform tasks, such as lifting, that exceed their capabilities. The accompanying data on maximum weight of lift (MAWL, in kg) for a frequency of four lifts/min was reported in the article "The Effects of Speed, Frequency, and Load on Measured Hand Forces for a Floor-to-Knuckle Lifting Task" (Ergonomics, 1992: 833–843); subjects were randomly selected from the population of healthy males age 18–30. Assuming that MAWL is normally distributed, does the following data suggest that the population mean MAWL exceeds 25?

25.8  36.6  26.3  21.8  27.2

Let's carry out a test using a significance level of .05.

1. μ = population mean MAWL
2. H0: μ = 25
3. Ha: μ > 25
4. t = (x̄ − 25)/(s/√n)
5. Reject H0 if t ≥ tα,n−1 = t.05,4 = 2.132.
6. Σxi = 137.7 and Σxi²
= 3911.97, from which x̄ = 27.54, s = 5.47, and

t = (27.54 − 25)/(5.47/√5) = 2.54/2.45 = 1.04

The accompanying MINITAB output from a request for a one-sample t test has the same calculated values (the P-value is discussed in Section 9.4).

Test of mu = 25.00 vs mu > 25.00
Variable  N  Mean   StDev  SE Mean  T     P-Value
mawl      5  27.54  5.47   2.45     1.04  0.18

7. Since 1.04 does not fall in the rejection region (1.04 < 2.132), H0 cannot be rejected at significance level .05. It is still plausible that μ is (at most) 25.

β and Sample Size Determination

The calculation of β at the alternative value μ′ in Case I was carried out by expressing the rejection region in terms of x̄ (e.g., x̄ ≥ μ0 + zα·σ/√n) and then subtracting μ′ to standardize correctly. An equivalent approach involves noting that when μ = μ′, the test statistic Z = (X̄ − μ0)/(σ/√n) still has a normal distribution with variance 1, but now the mean value of Z is given by (μ′ − μ0)/(σ/√n). That is, when μ = μ′, the test statistic still has a normal distribution, though not the standard normal distribution. Because of this, β(μ′) is an area under the normal curve corresponding to mean value (μ′ − μ0)/(σ/√n) and variance 1. Both α and β involve working with normally distributed variables.

The calculation of β(μ′) for the t test is much less straightforward. This is because the distribution of the test statistic T = (X̄ − μ0)/(S/√n) is quite complicated when H0 is false and Ha is true. Thus, for an upper-tailed test, determining

β(μ′) = P(T < tα,n−1 when μ = μ′ rather than μ0)

involves integrating a very unpleasant density function. This must be done numerically, and the results are summarized in sets of β curves in the appendix, one set for a one-tailed test at level .05, one for a one-tailed test at level .01, and one each for two-tailed tests at those same levels. Both β and the necessary sample size n depend on μ0, μ′, and σ only through the quantity

d = |μ0 − μ′|/σ

which requires specifying a plausible value of σ. Once d has been computed, locate it on the horizontal axis of the relevant set of curves, move up to the curve for the degrees of freedom n − 1 being used, and read β from the vertical axis; to find the n achieving a specified β, instead locate the point (d, β) and see which curve it falls closest to.

Example 9.10: Suppose the true average voltage for a certain type of circuit component is supposed to be at most 2.5 volts, and H0: μ = 2.5 is to be tested against Ha: μ > 2.5 based on a sample of n = 10 voltage measurements, using a t test with significance level α = .05. If the standard deviation of the voltage distribution is σ = .100, how likely is it that H0 will not be rejected when μ = 2.6? With d = |2.5 − 2.6|/.100 = 1.0, the point on the β curve at 9 df for a one-tailed test with α = .05 above 1.0 has height approximately .1, so β ≈ .1.
The investigator might think that this is too large a value of β for such a substantial departure from H0 and may wish to have β = .05 for this alternative value of μ. Since d = 1.0, the point (d, β) = (1.0, .05) must be located. This point is very close to the 14 df curve, so using n = 15 will give both α = .05 and β = .05 when the value of μ is 2.6 and σ = .10. A larger value of σ would give a larger β for this alternative, and an alternative value of μ closer to 2.5 would also result in an increased value of β.

Most of the widely used statistical computer packages will also calculate type II error probabilities and determine necessary sample sizes. As an example, we asked MINITAB to do the calculations from Example 9.10. Its computations are based on power, which is simply 1 − β. We want β to be small, which is equivalent to asking that the power of the test be large. For example, β = .05 corresponds to a value of .95 for power. Here is the resulting MINITAB output.

Power and Sample Size
1-Sample t Test
Testing mean = null (versus > null)
Calculating power for mean = null + 0.1
Alpha = 0.05  Sigma = 0.1
Sample Size   Power
10            0.8975

Power and Sample Size
1-Sample t Test
Testing mean = null (versus > null)
Calculating power for mean = null + 0.1
Alpha = 0.05  Sigma = 0.1
Sample Size   Target Power   Actual Power
13            0.9500         0.9597

Notice from the second part of the output that the sample size necessary to obtain a power of .95 (β = .05) for an upper-tailed test with α = .05 when σ = .1 and μ′ is .1 larger than μ0 is only n = 13, whereas eyeballing our β curves gave 15. When available, this type of software is more trustworthy than the curves.

Exercises  Section 9.2 (15–35)

15. Let the test statistic Z have a standard normal distribution when H0 is true. Give the significance level for each of the following situations:
a. Ha: μ > μ0, rejection region z ≥ 1.88
b. Ha: μ < μ0, rejection region z ≤ −2.75
c. Ha: μ ≠ μ0, rejection region z ≥ 2.88 or z ≤ −2.88

16. Let the test statistic T have a t distribution when H0 is true. Give the significance level for each of the following situations:
a. Ha: μ > μ0, df = 15, rejection region t ≥ 3.733
b. Ha: μ < μ0, n = 24, rejection region t ≤ −2.500
c. Ha: μ ≠ μ0, n = 31, rejection region t ≥ 1.697 or t ≤ −1.697

17. Answer the following questions for the tire problem in Example 9.7.
a. If x̄ = 30,960 and a level α = .01 test is used, what is the decision?
b. If a level .01 test is used, what is β(30,500)?
c. If a level .01 test is used and it is also required that β(30,500) = .05, what sample size n is necessary?
d. If x̄ = 30,960, what is the smallest α at which H0 can be rejected (based on n = 16)?

18. Reconsider the paint-drying situation of Example 9.2, in which drying time for a test specimen is normally distributed with σ = 9. The hypotheses H0: μ = 75 versus Ha: μ < 75 are to be tested using a random sample of n = 25 observations.
a. How many standard deviations (of X̄) below the null value is x̄ = 72.3?
b. If x̄ = 72.3, what is the conclusion using α = .01?
c. What is α for the test procedure that rejects H0 when z ≤ −2.88?
d. For the test procedure of part (c), what is β(70)?
e. If the test procedure of part (c) is used, what n is necessary to ensure that β(70) = .01?
f. If a level .01 test is used with n = 100, what is the probability of a type I error when μ = 76?

19. The melting point of each of 16 samples of a brand of hydrogenated vegetable oil was determined, resulting in x̄ = 94.32. Assume that the distribution of melting point is normal with σ = 1.20.
a. Test H0: μ = 95 versus Ha: μ ≠ 95 using a two-tailed level .01 test.
b. If a level .01 test is used, what is β(94), the probability of a type II error when μ = 94?
c. What value of n is necessary to ensure that β(94) = .1 when α = .01?

20. Lightbulbs of a certain type are advertised as having an average lifetime of 750 h. The price of these bulbs is very favorable, so a potential customer has decided to go ahead with a purchase arrangement unless it can be conclusively demonstrated that the true average lifetime is smaller than what is advertised. A random sample of 50 bulbs was selected, the lifetime of each bulb determined, and the appropriate hypotheses were tested using MINITAB, resulting in the accompanying output.

Variable  N   Mean    StDev  SE Mean  Z      P-Value
lifetime  50  738.44  38.20  5.40     -2.14  0.016

What conclusion would be appropriate for a significance level of .05? A significance level of .01? What significance level and conclusion would you recommend?

21. The true average diameter of ball bearings of a certain type is supposed to be .5 in. A one-sample t test will be carried out to see whether this is the case. What conclusion is appropriate in each of the following situations?
a. n = 13, t = 1.6, α = .05
b. n = 13, t = −1.6, α = .05
c. n = 25, t = −2.6, α = .01
d. n = 25, t = −3.9

22. The article "The Foreman's View of Quality Control" (Quality Engrg., 1990: 257–280) described an investigation into the coating weights for large pipes resulting from a galvanized coating process. Production standards call for a true average weight of 200 lb per pipe. The accompanying descriptive summary and boxplot are from MINITAB.

Variable  N   Mean    Median  TrMean  StDev  SE Mean
ctg wt    30  206.73  206.00  206.81  6.35   1.16

Variable  Min     Max     Q1      Q3
ctg wt    193.00  218.00  202.75  212.00

a. What does the boxplot suggest about the status of the specification for true average coating weight?
b. A normal probability plot of the data was quite straight. Use the descriptive output to test the appropriate hypotheses.

23. Exercise 33 in Chapter 1 gave n = 26 observations on escape time (sec) for oil workers in a simulated exercise, from which the sample mean and sample standard deviation are 370.69 and 24.36, respectively. Suppose the investigators had believed a priori that true average escape time would be at most 6 min. Does the data contradict this prior belief? Assuming normality, test the appropriate hypotheses using a significance level of .05.

24. Reconsider the sample observations on stabilized viscosity of asphalt specimens introduced in Exercise 43 in Chapter 1 (2781, 2900, 3013, 2856, and 2888). Suppose that for a particular application, it is required that true average viscosity be 3000. Does this requirement appear to have been satisfied? State and test the appropriate hypotheses.

25. Recall the first-grade IQ scores of Example 1.2. Here is a random sample of 10 of those scores:

107 113 108 127 146 103 108 118 111 119

The IQ test score has approximately a normal distribution with mean 100 and standard deviation 15 for the entire U.S. population of first-graders. Here we are interested in seeing whether the population of first-graders at this school is different from the national population. Assume that the normal distribution with standard deviation 15 is valid for the school, and test at the .05 level to see whether the school mean differs from the national mean. Summarize your conclusion in a sentence about these first-graders.

26. In recent years major league baseball games have averaged 3 h in duration. However, because games in Denver tend to be high-scoring, it might be expected that the games would be longer there. In 2001, the 81 games in Denver averaged 185.54 min with standard deviation 24.6 min. What would you conclude?

27. On the label, Pepperidge Farm bagels are said to weigh four ounces each (113 g). A random sample of six bagels resulted in the following weights (in grams):

117.6  109.5  111.6  109.2  119.1  110.8

a. Based on this sample, is there any reason to doubt that the population mean is at least 113 g?
b. Assume that the population mean is actually 110 g and that the distribution is normal with standard deviation 4 g. In a z test of H0: μ = 113 against Ha: μ < 113 with α = .05, find the probability of rejecting H0 with a sample of 6 observations.
c. Under the conditions of part (b) with α = .05, how many more observations would be needed in order for the power to be at least .95?

28. Minor surgery on horses under field conditions requires a reliable short-term anesthetic producing good muscle relaxation, minimal cardiovascular and respiratory changes, and a quick, smooth recovery with minimal aftereffects so that horses can be left unattended. The article "A Field Trial of Ketamine Anesthesia in the Horse" (Equine Vet. J., 1984: 176–179) reports that for a sample of n = 73 horses to which ketamine was administered under certain conditions, the sample average lateral recumbency (lying-down) time was 18.86 min and the standard deviation was 8.6 min. Does this data suggest that true average lateral recumbency time under these conditions is less than 20 min? Test the appropriate hypotheses at level of significance .10.

29. The amount of shaft wear (.0001 in.) after a fixed mileage was determined for each of n = 8 internal combustion engines having copper lead as a bearing material, resulting in x̄ = 3.72 and s = 1.25.
a. Assuming that the distribution of shaft wear is normal with mean μ, use the t test at level .05 to test H0: μ = 3.50 versus Ha: μ > 3.50.
b. Using σ = 1.25, what is the type II error probability β(μ′) of the test for the alternative μ′ = 4.00?

30. The recommended daily dietary allowance for zinc among males older than age 50 years is 15 mg/day. The article "Nutrient Intakes and Dietary Patterns of Older Americans: A National Study" (J. Gerontol., 1992: M145–150) reports the following summary data on intake for a sample of males age 65–74 years: n = 115, x̄ = 11.3, and s = 6.43. Does this data indicate that average daily zinc intake in the population of all males age 65–74 falls below the recommended allowance?

31. In an experiment designed to measure the time necessary for an inspector's eyes to become used to the reduced amount of light necessary for penetrant inspection, the sample average time for n = 9 inspectors was 6.32 s and the sample standard deviation was 1.65 s. It has previously been assumed that the average adaptation time was at least 7 s. Assuming adaptation time to be normally distributed, does the data contradict prior belief? Use the t test with α = .1.

32. A sample of 12 radon detectors of a certain type was selected, and each was exposed to 100 pCi/L of radon. The resulting readings were as follows:

105.6  90.9  91.2  96.9  96.5  91.3
100.1  105.0  99.6  107.7  103.3  92.4

a. Does this data suggest that the population mean reading under these conditions differs from 100? State and test the appropriate hypotheses using α = .05.
b. Suppose that prior to the experiment, a value of σ = 7.5 had been assumed. How many determinations would then have been appropriate to obtain β = .10 for the alternative μ′ = 95?

33. Show that for any Δ > 0, when the population distribution is normal and σ is known, the two-tailed test satisfies β(μ0 − Δ) = β(μ0 + Δ), so that β(μ′) is symmetric about μ0.

34. For a fixed alternative value μ′, show that β(μ′) → 0 as n → ∞ for either a one-tailed or a two-tailed z test in the case of a normal population distribution with known σ.

35. The industry standard for the amount of alcohol poured into many types of drinks (e.g., gin for a gin and tonic, whiskey on the rocks) is 1.5 oz. Each individual in a sample of 8 bartenders with at least 5 years of experience was asked to pour rum for a rum and coke into a short, wide (tumbler) glass, resulting in the following data:

2.00  1.78  2.16  1.91  1.70  1.67  1.83  1.48

(Summary quantities agree with those given in the article "Bottoms Up! The Influence of Elongation on Pouring and Consumption Volume," J. Consumer Res., 2003: 455–463.)
a. What does a boxplot suggest about the distribution of the amount poured?
b. Carry out a test of hypotheses to decide whether there is strong evidence for concluding that the true average amount poured differs from the industry standard.
c. Does the validity of the test you carried out in (b) depend on any assumptions about the population distribution? If so, check the plausibility of such assumptions.
d. Suppose the actual standard deviation of the amount poured is .20 oz. Determine the probability of a type II error for the test of (b) when the true average amount poured is actually (1) 1.6, (2) 1.7, (3) 1.8.

9.3 Tests Concerning a Population Proportion

Let p denote the proportion of individuals or objects in a population who possess a specified property (e.g., cars with manual transmissions or smokers who smoke a filter cigarette). If an individual or object with the property is labeled a success (S), then p is the population proportion of successes. Tests concerning p will be based on a random sample of size n from the population. Provided that n is small relative to the population size, X (the number of S's in the sample) has (approximately) a binomial distribution. Furthermore, if n itself is large, both X and the estimator p̂ = X/n are approximately normally distributed.
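That normal approximation for p̂ is easy to check numerically. The sketch below (illustrative values only, not from the text; scipy and numpy are assumed available) compares an exact binomial probability for X with the corresponding normal-curve area for p̂ = X/n:

```python
import numpy as np
from scipy import stats

# Compare the exact Bin(n, p) distribution of X with the normal
# approximation for p_hat = X/n (illustrative values, not from the text)
n, p = 200, 0.7
sd_phat = np.sqrt(p * (1 - p) / n)          # standard deviation of p_hat

exact = stats.binom.cdf(130, n, p)          # P(p_hat <= 0.65), exact
approx = stats.norm.cdf(0.65, loc=p, scale=sd_phat)

print(round(exact, 3), round(approx, 3))
```

For n this large the two probabilities agree to about two decimal places, which is what makes the z procedures of this section work.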
We first consider large-sample tests based on this latter fact and then turn to the small-sample case that directly uses the binomial distribution.

Large-Sample Tests

Large-sample tests concerning p are a special case of the more general large-sample procedures for a parameter θ. Let θ̂ be an estimator of θ that is (at least approximately) unbiased and has approximately a normal distribution. The null hypothesis has the form H0: θ = θ0, where θ0 denotes a number (the null value) appropriate to the problem context. Suppose that when H0 is true, the standard deviation of θ̂, σθ̂, involves no unknown parameters. For example, if θ = μ and θ̂ = X̄, then σθ̂ = σX̄ = σ/√n, which involves no unknown parameters only if the value of σ is known. A large-sample test statistic results from standardizing θ̂ under the assumption that H0 is true [so that E(θ̂) = θ0]:

Test statistic: Z = (θ̂ − θ0)/σθ̂

If the alternative hypothesis is Ha: θ > θ0, an upper-tailed test whose significance level is approximately α is specified by the rejection region z ≥ zα. The other two alternatives, Ha: θ < θ0 and Ha: θ ≠ θ0, are tested using a lower-tailed z test and a two-tailed z test, respectively.

In the case θ = p, σθ̂ will not involve any unknown parameters when H0 is true, but this is atypical. When σθ̂ does involve unknown parameters, it is often possible to use an estimated standard deviation sθ̂ in place of σθ̂ and still have Z approximately normally distributed when H0 is true (because when n is large, sθ̂ ≈ σθ̂ for most samples). The large-sample test of the previous section furnishes an example of this: Because σ is usually unknown, we use sθ̂ = sX̄ = s/√n in place of σ/√n in the denominator of z.

The estimator p̂ = X/n is unbiased [E(p̂) = p], has approximately a normal distribution, and its standard deviation is σp̂ = √(p(1 − p)/n). These facts were used in Section 8.2 to obtain a confidence interval for p. When H0 is true, E(p̂) = p0 and σp̂ = √(p0(1 − p0)/n), so σp̂ does not involve any unknown parameters.
It then follows that when n is large and H0 is true, the test statistic

Z = (p̂ − p0)/√(p0(1 − p0)/n)

has approximately a standard normal distribution. If the alternative hypothesis is Ha: p > p0 and the upper-tailed rejection region z ≥ zα is used, then

P(type I error) = P(H0 is rejected when it is true)
               = P(Z ≥ zα when Z has approximately a standard normal distribution) ≈ α

Thus the desired level of significance α is attained by using the critical value that captures area α in the upper tail of the z curve. Rejection regions for the other two alternative hypotheses, lower-tailed for Ha: p < p0 and two-tailed for Ha: p ≠ p0, are justified in an analogous manner.

Null hypothesis: H0: p = p0
Test statistic value: z = (p̂ − p0)/√(p0(1 − p0)/n)

Alternative Hypothesis    Rejection Region
Ha: p > p0                z ≥ zα (upper-tailed)
Ha: p < p0                z ≤ −zα (lower-tailed)
Ha: p ≠ p0                either z ≥ zα/2 or z ≤ −zα/2 (two-tailed)

These test procedures are valid provided that np0 ≥ 10 and n(1 − p0) ≥ 10.

Example: Recent information suggests that obesity is an increasing problem in America among all age groups. The Associated Press (Oct. 9, 2002) reported that 1276 individuals in a sample of 4115 adults were found to be obese (a body mass index exceeding 30; this index is a measure of weight relative to height). A 1998 survey based on people's own assessment revealed that 20% of adult Americans considered themselves obese. Does the recent data suggest that the true proportion of adults who are obese is more than 1.5 times the percentage from the self-assessment survey? Let's carry out a test of hypotheses using a significance level of .10.

1. p = the proportion of all American adults who are obese.
2. Saying that the current percentage is 1.5 times the self-assessment percentage is equivalent to the assertion that the current percentage is 30%, from which we have the null hypothesis as H0: p = .30.
3. The phrase "more than" in the problem description implies that the alternative hypothesis is Ha: p > .30.
4. Since np0 = 4115(.3) ≥ 10 and n(1 − p0) = 4115(.7) ≥ 10, the large-sample z test can certainly be used. The test statistic value is z = (p̂ − .3)/√((.3)(.7)/n).
5. The form of Ha implies that an upper-tailed test is appropriate: Reject H0 if z ≥ z.10 = 1.28.
6. p̂ = 1276/4115 = .310, from which z = (.310 − .3)/√((.3)(.7)/4115) = .010/.0071 = 1.40.
7. Since 1.40 exceeds the critical value 1.28, z lies in the rejection region. This justifies rejecting the null hypothesis. Using a significance level of .10, it does appear that more than 30% of American adults are obese.

β and Sample Size Determination

When H0 is true, the test statistic Z has approximately a standard normal distribution. Now suppose that H0 is not true and that p = p′. Then Z still has approximately a normal distribution (because it is a linear function of p̂), but its mean value and variance are no longer 0 and 1, respectively. Instead,

E(Z) = (p′ − p0)/√(p0(1 − p0)/n)      V(Z) = [p′(1 − p′)/n] / [p0(1 − p0)/n]

The probability of a type II error for an upper-tailed test is β(p′) = P(Z < zα when p = p′). This can be computed by using the given mean and variance to standardize and then referring to the standard normal cdf. In addition, if it is desired that the level α test also have β(p′) = β for a specified value of β, this equation can be solved for the necessary n as in Section 9.2. General expressions for β(p′) and n are given in the accompanying box.

Alternative Hypothesis    β(p′)
Ha: p > p0    Φ[ (p0 − p′ + zα√(p0(1 − p0)/n)) / √(p′(1 − p′)/n) ]
Ha: p < p0    1 − Φ[ (p0 − p′ − zα√(p0(1 − p0)/n)) / √(p′(1 − p′)/n) ]
Ha: p ≠ p0    Φ[ (p0 − p′ + zα/2√(p0(1 − p0)/n)) / √(p′(1 − p′)/n) ]
= 222, Vo = po) /a Vel —p')/n The sample size n for which the level « test also satisfies B(p’) = B is 2 zi i Z (1 =p) zav/poll = po) + Zev PLP) oe — tailed test P — Po n= 2 24/2\/Po(1 — Po) + 2p,/P'(1 — p')| two — tailed test (an P'—Po approximate solution) --- Trang 466 --- 9.3 Tests Conceming a Population Proportion 453 A package-delivery service advertises that at least 90% of all packages brought to its office by 9 a.m. for delivery in the same city are delivered by noon that day. Let p denote the true proportion of such packages that are delivered as advertised and consider the hypotheses Ho: p = .9 versus Hy: p < .9. If only 80% of the packages are delivered as advertised, how likely is it that a level .01 test based on n = 225 packages will detect such a departure from Ho? What should the sample size be to ensure that B(.8) = .01? With « = .01, po = .9, p’ = .8, and n = 225, 9 —8'-'2.33,/ (.9)(.1)/225 B(.8)=1-® 9= 8=2.33y/(9)(1)/225) _ | — (2.00) = .0228 /(.8)(.2)/225 Thus the probability that Ho will be rejected using the test when p = .8 is .9772— roughly 98% of all samples will result in correct rejection of Ho. Using z,, = zg = 2.33 in the sample size formula yields 2 2.33 ,/(.9)(.1) + 2.33 \/(.8)(.2) 266 a= |e eee | a) . Small-Sample Tests Test procedures when the sample size n is small are based directly on the binomial distribution rather than the normal approximation. Consider the alternative hypoth- esis H,: p > po and again let X be the number of successes in the sample. Then X is the test statistic, and the upper-tailed rejection region has the form x > c. When A is true, X has a binomial distribution with parameters n and po, so P(type I error) = P(Hp is rejected when it is true) = P|X > c when X ~ Bin(n, po)] = 1—P[X po). 
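The type I error calculation just derived can be carried out numerically to choose the critical value c. Here is a minimal stdlib-only Python sketch; the setting n = 20, p0 = .5 is a hypothetical illustration, not an example from the text:

```python
from math import comb

def binom_cdf(k, n, p):
    """B(k; n, p) = P(X <= k) for X ~ Bin(n, p), summed directly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def exact_level(c, n, p0):
    """P(type I error) = 1 - B(c - 1; n, p0) for the upper-tailed region x >= c."""
    return 1 - binom_cdf(c - 1, n, p0)

# Hypothetical setting: n = 20 trials, H0: p = .5, Ha: p > .5.
# Find the smallest c whose exact level does not exceed .05.
n, p0, alpha = 20, 0.5, 0.05
c = next(c for c in range(n + 1) if exact_level(c, n, p0) <= alpha)
print(c, round(exact_level(c, n, p0), 4))   # c = 15, exact level ≈ .0207
```

Because X is discrete, the achievable levels jump: here the region x ≥ 14 has level ≈ .058, so no region of this form has level exactly .05, and c = 15 (level ≈ .0207) is the natural choice when α = .05 must not be exceeded.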
When p = p′, X ~ Bin(n, p′), so

β(p′) = P(type II error when p = p′) = P[X ≤ c − 1 when X ~ Bin(n, p′)] = B(c − 1; n, p′)

An analogous derivation applies to a lower-tailed test with rejection region x ≤ c. For instance, for such a test of H0: p = .9 versus Ha: p < .9 based on a sample of n = 20 cans with rejection region x ≤ 15, when p = .8

β(.8) = P[X ≥ 16 when X ~ Bin(20, .8)] = 1 − B(15; 20, .8) = 1 − .370 = .630

That is, when p = .8, 63% of all samples consisting of n = 20 cans would result in H0 being incorrectly not rejected. This error probability is high because 20 is a small sample size and p′ = .8 is close to the null value p0 = .9. ■

Exercises  Section 9.3 (36–44)

36. State DMV records indicate that of all vehicles undergoing emissions testing during the previous year, 70% passed on the first try. A random sample of 200 cars tested in a particular county during the current year yields 124 that passed on the initial test. Does this suggest that the true proportion for this county during the current year differs from the previous statewide proportion? Test the relevant hypotheses using α = .05.

37. A manufacturer of nickel-hydrogen batteries randomly selects 100 nickel plates for test cells, cycles them a specified number of times, and determines that 14 of the plates have blistered.
a. Does this provide compelling evidence for concluding that more than 10% of all plates blister under such circumstances? State and test the appropriate hypotheses using a significance level of .05. In reaching your conclusion, what type of error might you have committed?
b. If it is really the case that 15% of all plates blister under these circumstances and a sample size of 100 is used, how likely is it that the null hypothesis of part (a) will not be rejected by the level .05 test? Answer this question for a sample size of 200.
c. How many plates would have to be tested to have β(.15) = .10 for the test of part (a)?

38. A random sample of 150 recent donations at a blood bank reveals that 82 were type A blood. Does this suggest that the actual percentage of type A donations differs from 40%, the percentage of the population having type A blood? Carry out a test of the appropriate hypotheses using a significance level of .01. Would your conclusion have been different if a significance level of .05 had been used?

39. A university library ordinarily has a complete shelf inventory done once every year. Because of new shelving rules instituted the previous year, the head librarian believes it may be possible to save money by postponing the inventory. The librarian decides to select at random 1000 books from the library's collection and have them searched in a preliminary manner. If evidence indicates strongly that the true proportion of misshelved or unlocatable books is < .02, then the inventory will be postponed.
a. Among the 1000 books searched, 15 were misshelved or unlocatable. Test the relevant hypotheses and advise the librarian what to do (use α = .05).
b. If the true proportion of misshelved and lost books is actually .01, what is the probability that the inventory will be (unnecessarily) taken?
c. If the true proportion is .05, what is the probability that the inventory will be postponed?

40. The article "Statistical Evidence of Discrimination" (J. Amer. Statist. Assoc., 1982: 773-783) discusses the court case Swain v. Alabama (1965), in which it was alleged that there was discrimination against blacks in grand jury selection. Census data suggested that 25% of those eligible for grand jury service were black, yet a random sample of 1050 people called to appear for possible duty yielded only 177 blacks. Using a level .01 test, does this data argue strongly for a conclusion of discrimination?

41. A plan for an executive traveler's club has been developed by an airline on the premise that 5% of its current customers would qualify for membership. A random sample of 500 customers yielded 40 who would qualify.
a. Using this data, test at level .01 the null hypothesis that the company's premise is correct against the alternative that it is not correct.
b. What is the probability that when the test of part (a) is used, the company's premise will be judged correct when in fact 10% of all current customers qualify?

42. Each of a group of 20 intermediate tennis players is given two rackets, one having nylon strings and the other synthetic gut strings. After several weeks of playing with the two rackets, each player will be asked to state a preference for one of the two types of strings. Let p denote the proportion of all such players who would prefer gut to nylon, and let X be the number of players in the sample who prefer gut. Because gut strings are more expensive, consider the null hypothesis that at most 50% of all such players prefer gut. We simplify this to H0: p = .5, planning to reject H0 only if sample evidence strongly favors gut strings.
a. Which of the rejection regions {15, 16, 17, 18, 19, 20}, {0, 1, 2, 3, 4, 5}, or {0, 1, 2, 3, 17, 18, 19, 20} is most appropriate, and why are the other two not appropriate?
b. What is the probability of a type I error for the chosen region of part (a)? Does the region specify a level .05 test? Is it the best level .05 test?
c. If 60% of all enthusiasts prefer gut, calculate the probability of a type II error using the appropriate region from part (a). Repeat if 80% of all enthusiasts prefer gut.
d. If 13 out of the 20 players prefer gut, should H0 be rejected using a significance level of .10?

43. A manufacturer of plumbing fixtures has developed a new type of washerless faucet. Let p = P(a randomly selected faucet of this type will develop a leak within 2 years under normal use). The manufacturer has decided to proceed with production unless it can be determined that p is too large; the borderline acceptable value of p is specified as .10. The manufacturer decides to subject n of these faucets to accelerated testing (approximating 2 years of normal use). With X = the number among the n faucets that leak before the test concludes, production will commence unless the observed x is too large. It is decided that if p = .10, the probability of not proceeding should be at most .10, whereas if p = .30 the probability of proceeding should be at most .10. Can n = 10 be used? n = 20? n = 25? What is the appropriate rejection region for the chosen n, and what are the actual error probabilities when this region is used?

44. Scientists have recently become concerned about the safety of Teflon cookware and various food containers because perfluorooctanoic acid (PFOA) is used in the manufacturing process. An article in the July 27, 2005, New York Times reported that of 600 children tested, 96% had PFOA in their blood. According to the FDA, 90% of all Americans have PFOA in their blood.
a. Does the data on PFOA incidence among children suggest that the percentage of all children who have PFOA in their blood exceeds the FDA percentage for all Americans? Carry out an appropriate test of hypotheses.
b. If 95% of all children have PFOA in their blood, how likely is it that the null hypothesis tested in (a) will be rejected when a significance level of .01 is employed?
c. Referring back to (b), what sample size would be necessary for the relevant probability to be .10?

9.4 P-Values

Using the rejection region method to test hypotheses entails first selecting a significance level α. Then after computing the value of the test statistic, the null hypothesis H0 is rejected if the value falls in the rejection region and is otherwise not rejected. We now consider another way of reaching a conclusion in a hypothesis-testing analysis. This alternative approach is based on calculation of a certain probability called a P-value.
One advantage is that the P-value provides an intuitive measure of the strength of evidence in the data against H0.

DEFINITION  The P-value is the probability, calculated assuming that the null hypothesis is true, of obtaining a value of the test statistic at least as contradictory to H0 as the value calculated from the available sample.

The definition is quite a mouthful. Here are some key points:
• The P-value is a probability.
• This probability is calculated assuming that the null hypothesis is true.
• To determine the P-value, we must first decide which values of the test statistic are at least as contradictory to H0 as the value obtained from our sample.

Example 9.14  Urban storm water can be contaminated by many sources, including discarded batteries. When ruptured, these batteries release metals of environmental significance. The paper "Urban Battery Litter" (J. Environ. Engr., 2009: 46-57) presented summary data for characteristics of a variety of batteries found in urban areas around Cleveland. A sample of 51 Panasonic AAA batteries gave a sample mean zinc mass of 2.06 g and a sample standard deviation of .141 g. Does this data provide compelling evidence for concluding that the population mean zinc mass exceeds 2.0 g?

With μ denoting the true average zinc mass for such batteries, the relevant hypotheses are H0: μ = 2.0 versus Ha: μ > 2.0. The sample size is large enough so that a z test can be used without making any specific assumption about the shape of the population distribution. The test statistic value is

z = (x̄ − 2.0)/(s/√n) = (2.06 − 2.0)/(.141/√51) = 3.04

Now we must decide which values of z are at least as contradictory to H0. Let's first consider an easier task: Which values of x̄ are at least as contradictory to the null hypothesis as 2.06, the mean of the observations in our sample? Because > appears in Ha, it should be clear that 2.10 is at least as contradictory to H0 as is 2.06, so is 2.25, and so in fact is any x̄ value that exceeds 2.06. But an x̄ value that exceeds 2.06 corresponds to a value of z that exceeds 3.04. Thus the P-value is

P-value = P(Z > 3.04 when μ = 2.0)

Since the test statistic Z was created by subtracting the null value 2.0 in the numerator, when μ = 2.0 (i.e., when H0 is true) Z has approximately a standard normal distribution. As a result,

P-value = P(Z > 3.04 when μ = 2.0) = area under the z curve to the right of 3.04 = 1 − Φ(3.04) = .0012 ■

We will shortly illustrate how to determine the P-value for any z or t test; that is, any test where the reference distribution is the standard normal distribution (and z curve) or some t distribution (and corresponding t curve). For the moment, though, let's focus on reaching a conclusion once the P-value is available. Because it is a probability, the P-value must be between 0 and 1. What kinds of P-values provide evidence against the null hypothesis? Consider two specific instances:
• P-value = .250: In this case, fully 25% of all possible test statistic values are more contradictory to H0 than the one that came out of our sample. So our data is not that contradictory to the null hypothesis.
• P-value = .0018: Here, only .18%, much less than 1%, of all possible test statistic values are at least as contradictory to H0 as what we obtained. Thus the sample appears to be highly contradictory to the null hypothesis.

More generally, the smaller the P-value, the more evidence there is in the sample data against the null hypothesis and for the alternative hypothesis. That is, H0 should be rejected in favor of Ha when the P-value is sufficiently small. So what constitutes "sufficiently small"?

DECISION RULE BASED ON THE P-VALUE  Select a significance level α (as before, the desired type I error probability). Then reject H0 if P-value ≤ α; do not reject H0 if P-value > α.

Thus if the P-value exceeds the chosen significance level, the null hypothesis cannot be rejected at that level.
But if the P-value is less than or equal to α, then there is enough evidence to justify rejecting H0. In Example 9.14, we calculated P-value = .0012. Then using a significance level of .01, we would reject the null hypothesis in favor of the alternative hypothesis because .0012 ≤ .01. However, suppose we select a significance level of only .001, which requires more substantial evidence from the data before H0 can be rejected. In this case we would not reject H0 because .0012 > .001.

How does the decision rule based on the P-value compare to the decision rule employed in the rejection region approach? The two procedures, the rejection region method and the P-value method, are in fact identical. Whatever the conclusion reached by employing the rejection region approach with a particular α, the same conclusion will be reached via the P-value approach using that same α.

The nicotine content problem discussed in Example 9.5 involved testing H0: μ = 1.5 versus Ha: μ > 1.5 using a z test (i.e., a test that utilizes the z curve as the reference distribution). The inequality in Ha implies that the upper-tailed rejection region z ≥ zα is appropriate. Suppose z = 2.10. Then using exactly the same reasoning as in Example 9.14 gives P-value = 1 − Φ(2.10) = .0179. Consider now testing with several different significance levels:

α = .10 ⇒ zα = z.10 = 1.28 ⇒ 2.10 ≥ 1.28 ⇒ reject H0
α = .05 ⇒ zα = z.05 = 1.645 ⇒ 2.10 ≥ 1.645 ⇒ reject H0
α = .01 ⇒ zα = z.01 = 2.33 ⇒ 2.10 < 2.33 ⇒ do not reject H0

Because P-value = .0179 ≤ .10 and also .0179 ≤ .05, using the P-value approach results in rejection of H0 for the first two significance levels. However, for α = .01, 2.10 is not in the rejection region and .0179 is larger than .01. More generally, whenever α is smaller than the P-value .0179, the critical value zα will lie beyond the computed z = 2.10, and H0 cannot be rejected by either method. This is illustrated in Figure 9.5.
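The arithmetic above is a one-liner once Φ is available. A small Python sketch using only the standard library (Φ built from math.erf) reproduces the P-value and the three decisions:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cdf Φ(z), expressed via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Upper-tailed z test: P-value = 1 - Φ(z) for the computed z.
z = 2.10
p_value = 1 - phi(z)
print(round(p_value, 4))            # ≈ .0179, matching Table A.3

for alpha in (0.10, 0.05, 0.01):
    print(alpha, "reject H0" if p_value <= alpha else "do not reject H0")
```

The same comparison of P-value against each α gives the same decisions as comparing z against each critical value zα.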
Figure 9.5 Relationship between α and the tail area captured by the computed z: (a) tail area .0179 captured by the computed z = 2.10; (b) when α > .0179, zα < 2.10 and H0 is rejected; (c) when α < .0179, zα > 2.10 and H0 is not rejected

Let's reconsider the P-value .0012 in Example 9.14 once again. H0 can be rejected only if .0012 ≤ α. Thus the null hypothesis can be rejected if α = .05 or .01 or .005 or .0015 or .00125. What is the smallest significance level α here for which H0 can be rejected? It is the P-value .0012.

PROPOSITION  The P-value is the smallest significance level α at which the null hypothesis can be rejected.

Because of this, the P-value is alternatively referred to as the observed significance level (OSL) for the data. It is customary to call the data significant when H0 is rejected and not significant otherwise. The P-value is then the smallest level at which the data is significant.

Figure 9.6 Comparing α and the P-value: (a) reject H0 when α lies here; (b) do not reject H0 when α lies here

An easy way to visualize the comparison of the P-value with the chosen α is to draw a picture like that of Figure 9.6. The calculation of the P-value depends on whether the test is upper-, lower-, or two-tailed. However, once it has been calculated, the comparison with α does not depend on which type of test was used.

The true average time to initial relief of pain for a best-selling pain reliever is known to be 10 min. Let μ denote the true average time to relief for a company's newly developed reliever. Suppose that when data from an experiment involving the new pain reliever was analyzed, the P-value for testing H0: μ = 10 versus Ha: μ < 10 was calculated as .0384. Since α = .05 is larger than the P-value [.05 lies in the interval (a) of Figure 9.6], H0 would be rejected by anyone carrying out the test at level .05. However, at level .01, H0 would not be rejected because .01 is smaller than the smallest level (.0384) at which H0 can be rejected. ■

The most widely used statistical computer packages automatically include a P-value when a hypothesis-testing analysis is performed. A conclusion can then be drawn directly from the output, without reference to a table of critical values. With the P-value in hand, an investigator can see at a quick glance for which significance levels H0 would or would not be rejected. Also, each individual can then select his or her own significance level. In addition, knowing the P-value allows a decision maker to distinguish between a close call (e.g., α = .05, P-value = .0498) and a very clear-cut conclusion (e.g., α = .05, P-value = .0003), something that would not be possible just from the statement "H0 can be rejected at significance level .05."

P-Values for z Tests

The P-value for a z test (one based on a test statistic whose distribution when H0 is true is at least approximately standard normal) is easily determined from the information in Appendix Table A.3. Consider an upper-tailed test and let z denote the computed value of the test statistic Z. The null hypothesis is rejected if z ≥ zα, and the P-value is the smallest α for which this is the case. Since zα increases as α decreases, the P-value is the value of α for which z = zα. That is, the P-value is just the area captured by the computed value z in the upper tail of the standard normal curve. The corresponding cumulative area is Φ(z), so in this case P-value = 1 − Φ(z). An analogous argument for a lower-tailed test shows that the P-value is the area captured by the computed value z in the lower tail of the standard normal curve. More care must be exercised in the case of a two-tailed test. Suppose first that z is positive.
Then the P-value is the value of α satisfying z = zα/2 (i.e., computed z = upper-tail critical value). This says that the area captured in the upper tail is half the P-value, so that P-value = 2[1 − Φ(z)]. If z is negative, the P-value is the α for which z = −zα/2, or, equivalently, −z = zα/2, so P-value = 2[1 − Φ(−z)]. Since −z = |z| when z is negative, P-value = 2[1 − Φ(|z|)] for either positive or negative z.

P-value = 1 − Φ(z)       for an upper-tailed test
P-value = Φ(z)           for a lower-tailed test
P-value = 2[1 − Φ(|z|)]  for a two-tailed test

Each of these is the probability of getting a value at least as extreme as what was obtained (assuming H0 true). The three cases are illustrated in Figure 9.7.

Figure 9.7 Determination of the P-value for a z test: (1) upper-tailed test (Ha contains the inequality >), P-value = area 1 − Φ(z) in the upper tail; (2) lower-tailed test (Ha contains the inequality <), P-value = area Φ(z) in the lower tail; (3) two-tailed test (Ha contains the inequality ≠), P-value = 2[1 − Φ(|z|)], the sum of the areas in the two tails

The next example illustrates the use of the P-value approach to hypothesis testing by means of a sequence of steps modified from our previously recommended sequence.

The target thickness for silicon wafers used in a type of integrated circuit is 245 μm. A sample of 50 wafers is obtained and the thickness of each one is determined, resulting in a sample mean thickness of 246.18 μm and a sample standard deviation of 3.60 μm. Does this data suggest that true average wafer thickness is something other than the target value?
1. Parameter of interest: μ = true average wafer thickness
2. Null hypothesis: H0: μ = 245
3. Alternative hypothesis: Ha: μ ≠ 245
4. Formula for test statistic value: z = (x̄ − 245)/(s/√n)
5. Calculation of test statistic value: z = (246.18 − 245)/(3.60/√50) = 2.32
6. Determination of P-value: Because the test is two-tailed, P-value = 2[1 − Φ(2.32)] = .0204
7. Conclusion: Using a significance level of .01, H0 would not be rejected since .0204 > .01. At this significance level, there is insufficient evidence to conclude that true average thickness differs from the target value. ■

P-Values for t Tests

Just as the P-value for a z test is a z curve area, the P-value for a t test will be a t curve area. Figure 9.8 illustrates the three different cases. The number of df for the one-sample t test is n − 1. The table of t critical values used previously for confidence and prediction intervals doesn't contain enough information about any particular t distribution to allow for accurate determination of desired areas. So we have included another t table in Appendix Table A.7, one that contains a tabulation of upper-tail t curve areas. Each different column of the table is for a different number of df, and the rows are for calculated values of the test statistic t ranging from 0.0 to 4.0 in increments of .1. For example, the number .074 appears at the intersection of the 1.6 row and the 8 df column, so the area under the 8 df curve to the right of 1.6 (an upper-tail area) is .074. Because t curves are symmetric, .074 is also the area under the 8 df curve to the left of −1.6 (a lower-tail area).

Suppose, for example, that a test of H0: μ = 100 versus Ha: μ > 100 is based on the 8 df t distribution. If the calculated value of the test statistic is t = 1.6, then the P-value for this upper-tailed test is .074. Because .074 exceeds .05, we would not be able to reject H0 at a significance level of .05. If the alternative hypothesis is Ha: μ < 100 and a test based on 20 df yields t = −3.2, then Appendix Table A.7 shows that the P-value is the captured lower-tail area .002. The null hypothesis can be rejected at either level .05 or .01.
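The upper-tail areas in Appendix Table A.7 can be reproduced numerically, since the t tail area has no simple closed form in general. The following stdlib-only Python sketch integrates the t density with Simpson's rule; the step count and integration span are arbitrary choices:

```python
from math import gamma, pi, sqrt

def t_pdf(x, df):
    """Density of the t distribution with df degrees of freedom."""
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_upper_area(t, df, steps=20000, span=60.0):
    """Upper-tail area P(T > t), by Simpson's rule over [t, t + span].
    The tail beyond t + span is negligible for moderate df."""
    h = span / steps
    total = t_pdf(t, df) + t_pdf(t + span, df)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * t_pdf(t + i * h, df)
    return total * h / 3

print(round(t_upper_area(1.6, 8), 3))    # ≈ .074, the Table A.7 entry
print(round(t_upper_area(3.2, 20), 3))   # ≈ .002
```

By symmetry of the t curves, the same function gives lower-tail areas (use −t) and, doubled, two-tailed P-values.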
Consider testing H0: μ1 − μ2 = 0 versus Ha: μ1 − μ2 ≠ 0; the null hypothesis states that the means of the two populations are identical, whereas the alternative hypothesis states that they are different without specifying a direction of departure from H0. If a t test is based on 20 df and t = 3.2, then the P-value for this two-tailed test is 2(.002) = .004. This would also be the P-value for t = −3.2. The tail area is doubled because values both larger than 3.2 and smaller than −3.2 are more contradictory to H0 than what was calculated (values farther out in either tail of the t curve).

Figure 9.8 P-values for t tests: (1) upper-tailed test (Ha contains the inequality >), P-value = area in the upper tail of the t curve for the relevant df; (2) lower-tailed test (Ha contains the inequality <), P-value = area in the lower tail; (3) two-tailed test (Ha contains the inequality ≠), P-value = sum of the areas in the two tails

In Example 9.9, we carried out a test of H0: μ = 25 versus Ha: μ > 25 based on 4 df. The calculated value of t was 1.04. Looking to the 4 df column of Appendix Table A.7 and down to the 1.0 row, we see that the entry is .187, so the P-value ≈ .187. This P-value is clearly larger than any reasonable significance level α (.01, .05, and even .10), so there is no reason to reject the null hypothesis. The MINITAB output included in Example 9.9 has P-value = .18. P-values from software packages will be more accurate than what results from Appendix Table A.7 since values of t in our table are accurate only to the tenths digit. ■

More on Interpreting P-Values

The P-value resulting from carrying out a test on a selected sample is not the probability that H0 is true, nor is it the probability of rejecting the null hypothesis. Once again, it is the probability, calculated assuming that H0 is true, of obtaining a test statistic value at least as contradictory to the null hypothesis as the value that actually resulted. For example, consider testing H0: μ = 50 against Ha: μ < 50 using a lower-tailed z test. If the calculated value of the test statistic is z = −2.00, then

P-value = P(Z ≤ −2.00 when μ = 50) = area under the z curve to the left of −2.00 = .0228

But if a second sample is selected, the resulting value of z will almost surely be different from −2.00, so the corresponding P-value will also likely differ from .0228. Because the test statistic value itself varies from one sample to another, the P-value will also vary from one sample to another. That is, the test statistic is a random variable, and so the P-value will also be a random variable. A first sample may give a P-value of .0228, a second sample may result in a P-value of .1175, a third may yield .0606 as the P-value, and so on. If H0 is false, we hope the P-value will be close to 0 so that the null hypothesis can be rejected. On the other hand, when H0 is true, we'd like the P-value to exceed the selected significance level so that the correct decision to not reject H0 is made. The next example presents simulations to show how the P-value behaves both when the null hypothesis is true and when it is false.

Example 9.19  The fuel efficiency (mpg) of any particular new vehicle under specified driving conditions may not be identical to the EPA figure that appears on the vehicle's sticker. Suppose that four different vehicles of a particular type are to be selected and driven over a certain course, after which the fuel efficiency of each one is to be determined. Let μ denote the true average fuel efficiency under these conditions. Consider testing H0: μ = 20 versus Ha: μ > 20 using the one-sample t test based on the resulting sample.
Since the test is based on n − 1 = 3 degrees of freedom, the P-value for an upper-tailed test is the area under the t curve with 3 df to the right of the calculated t. Let's first suppose that the null hypothesis is true. We asked MINITAB to generate 10,000 different samples, each containing 4 observations, from a normal population distribution with mean value μ = 20 and standard deviation σ = 2. The first sample and resulting summary quantities were

x1 = 20.830, x2 = 22.232, x3 = 20.276, x4 = 17.718
x̄ = 20.264    s = 1.8864    t = (20.264 − 20)/(1.8864/√4) = .2799

The P-value is the area under the 3-df t curve to the right of .2799, which according to MINITAB is .3989. Using a significance level of .05, the null hypothesis would of course not be rejected. The values of t for the next four samples were −1.7591, .6082, −.7020, and 3.1053, with corresponding P-values .912, .293, .733, and .0265.

Figure 9.9(a) shows a histogram of the 10,000 P-values from this simulation experiment. About 4.5% of these P-values are in the first class interval from 0 to .05. Thus when using a significance level of .05, the null hypothesis is rejected in roughly 4.5% of these 10,000 tests. If we continue to generate samples and carry out the test for each one at significance level .05, in the long run 5% of the P-values would be in the first class interval, because when H0 is true and a test with significance level .05 is used, by definition the probability of rejecting H0 is .05. Looking at the histogram, it appears that the distribution of P-values is relatively flat. In fact, it can be shown that when H0 is true, the probability distribution of the P-value is a uniform distribution on the interval from 0 to 1.
That is, the density curve is completely flat on this interval, and thus must have a height of 1 if the total area under the curve is to be 1. Since the area under such a curve to the left of .05 is (.05)(1) = .05, we again have that the probability of rejecting H0 when it is true is .05, the chosen significance level.

Figure 9.9 P-value simulation results for Example 9.19: histograms of the 10,000 simulated P-values when (a) μ = 20, (b) μ = 21, and (c) μ = 22

Now consider what happens when H0 is false because μ = 21. We again had MINITAB generate 10,000 different samples of size 4, each from a normal distribution with μ = 21 and σ = 2, calculate t = (x̄ − 20)/(s/√4) for each one, and then determine the P-value. The first such sample resulted in x̄ = 20.6411, s = .49637, t = 2.5832, P-value = .0408. Figure 9.9(b) gives a histogram of the 10,000 resulting P-values. The shape of this histogram is quite different from that of Figure 9.9(a): there is a much greater tendency for the P-value to be small (closer to 0) when μ = 21 than when μ = 20. Again H0 is rejected at significance level .05 whenever the P-value is at most .05 (in the first class interval). Unfortunately this is the case for only about 19% of the 10,000 P-values. So only about 19% of the 10,000 tests correctly reject the null hypothesis; for the other 81%, a type II error is committed. The difficulty is that the sample size is quite small and 21 is not very different from the value asserted by the null hypothesis.

Figure 9.9(c) illustrates what happens to the P-value when H0 is false because μ = 22 (still with n = 4 and σ = 2). The histogram is even more concentrated toward values close to 0 than was the case when μ = 21.
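A simulation along these lines is easy to rerun. The stdlib-only Python sketch below is not the MINITAB run from the text: it uses 2,000 replications per case rather than 10,000, an arbitrary seed, and the closed-form expression for the 3-df t tail area to turn each t value into a P-value. The rejection proportions at level .05 should land near the percentages quoted above.

```python
import random
from math import atan, pi, sqrt
from statistics import mean, stdev

def p_upper_3df(t):
    """P(T > t) for the t distribution with 3 df (this df has a closed form)."""
    x = t / sqrt(3.0)
    return 0.5 - (atan(x) + x / (1 + x * x)) / pi

random.seed(1)                     # arbitrary seed, for reproducibility only
n, mu0, reps = 4, 20.0, 2000

def p_value(true_mu):
    """P-value of the upper-tailed one-sample t test for one simulated sample."""
    x = [random.gauss(true_mu, 2.0) for _ in range(n)]
    t = (mean(x) - mu0) / (stdev(x) / sqrt(n))
    return p_upper_3df(t)

results = {}
for true_mu in (20.0, 21.0, 22.0):
    frac = sum(p_value(true_mu) <= 0.05 for _ in range(reps)) / reps
    results[true_mu] = frac
    # expected: near .05 when mu = 20 (H0 true), increasing as mu moves right
    print(true_mu, frac)
```

As a check on the tail-area formula, p_upper_3df(.2799) reproduces the P-value .3989 reported for the first simulated sample.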
In general, as μ moves further to the right of the null value 20, the distribution of the P-value will become more and more concentrated on values close to 0. Even here a bit fewer than 50% of the 10,000 P-values are smaller than .05. So it is still slightly more likely than not that the null hypothesis is incorrectly not rejected. Only for values of μ much larger than 20 (e.g., at least 24 or 25) is it highly likely that the P-value will be smaller than .05 and thus give the correct conclusion.

The big idea of this example is that because the value of any test statistic is random, the P-value will also be a random variable and thus have a distribution. The farther the actual value of the parameter is from the value specified by the null hypothesis, the more the distribution of the P-value will be concentrated on values close to 0, and the greater the chance that the test will correctly reject H0 (corresponding to smaller β).

Exercises  Section 9.4 (45–59)

45. For which of the given P-values would the null hypothesis be rejected when performing a level .05 test?
a. .001
b. .021
c. .078
d. .047
e. .148

46. Pairs of P-values and significance levels, α, are given. For each pair, state whether the observed P-value would lead to rejection of H0 at the given significance level.
a. P-value = .084, α = .05
b. P-value = .003, α = .001
c. P-value = .498, α = .05
d. P-value = .084, α = .10
e. P-value = .039, α = .01
f. P-value = .218, α = .10

47. Let μ denote the mean reaction time to a certain stimulus. For a large-sample z test of H0: μ = 5 versus Ha: μ > 5, find the P-value associated with each of the given values of the z test statistic.
a. 1.42
b. .90
c. 1.96
d. 2.48
e. −.11

48. Newly purchased tires of a certain type are supposed to be filled to a pressure of 30 lb/in². Let μ denote the true average pressure. Find the P-value associated with each given z statistic value for testing H0: μ = 30 versus Ha: μ ≠ 30.
a. 2.10
b. −1.75
c. −.55
d. 1.41
e. −5.3

49. Give as much information as you can about the P-value of a t test in each of the following situations:
a. Upper-tailed test, df = 8, t = 2.0
b. Lower-tailed test, df = 11, t = −2.4
c. Two-tailed test, df = 15, t = −1.6
d. Upper-tailed test, df = 19, t = −.4
e. Upper-tailed test, df = 5, t = 5.0
f. Two-tailed test, df = 40, t = −4.8

50. The paint used to make lines on roads must reflect enough light to be clearly visible at night. Let μ denote the true average reflectometer reading for a new type of paint under consideration. A test of H0: μ = 20 versus Ha: μ > 20 will be based on a random sample of size n from a normal population distribution. What conclusion is appropriate in each of the following situations?
a. n = 15, t = 3.2, α = .05
b. n = 9, t = 1.8, α = .01
c. n = 24, t = −.2

51. Let μ denote true average serum receptor concentration for all pregnant women. The average for all women is known to be 5.63. The article "Serum Transferrin Receptor for the Detection of Iron Deficiency in Pregnancy" (Amer. J. Clin. Nutrit., 1991: 1077–1081) reports that P-value > .10 for a test of H0: μ = 5.63 versus Ha: μ ≠ 5.63 based on n = 176 pregnant women. Using a significance level of .01, what would you conclude?

52. An aspirin manufacturer fills bottles by weight rather than by count. Since each bottle should contain 100 tablets, the average weight per tablet should be 5 grains. Each of 100 tablets taken from a very large lot is weighed, resulting in a sample average weight per tablet of 4.87 grains and a sample standard deviation of .35 grain. Does this information provide strong evidence for concluding that the company is not filling its bottles as advertised? Test the appropriate hypotheses using α = .01 by first computing the P-value and then comparing it to the specified significance level.

53. Because of variability in the manufacturing process, the actual yielding point of a sample of mild steel subjected to increasing stress will usually differ from the theoretical yielding point. Let p denote the true proportion of samples that yield before their theoretical yielding point. If on the basis of a sample it can be concluded that more than 20% of all specimens yield before the theoretical point, the production process will have to be modified.
a. If 15 of 60 specimens yield before the theoretical point, what is the P-value when the appropriate test is used, and what would you advise the company to do?
b. If the true percentage of "early yields" is actually 50% (so that the theoretical point is the median of the yield distribution) and a level .01 test is used, what is the probability that the company concludes a modification of the process is necessary?

54. Many consumers are turning to generics as a way of reducing the cost of prescription medications. The article "Commercial Information on Drugs: Confusing to the Physician?" (J. Drug Issues, 1988: 245–257) gives the results of a survey of 102 doctors. Only 47 of those surveyed knew the generic name for the drug methadone. Does this provide strong evidence for concluding that fewer than half of all physicians know the generic name for methadone? Carry out a test of hypotheses with a significance level of .01 using the P-value method.

55. A random sample of soil specimens was obtained, and the amount of organic matter (%) in the soil was determined for each specimen, resulting in the accompanying data (from "Engineering Properties of Soil"):

1.10 5.09 0.97 1.59 4.60 0.32 0.55 1.45
0.14 4.47 1.20 3.50 5.02 4.67 5.22 2.69
3.98 3.17 3.03 2.21 0.69 4.47 3.31 1.17
0.76 1.17 1.57 2.62 1.66 2.05

The values of the sample mean, sample standard deviation, and (estimated) standard error of the mean are 2.481, 1.616, and .295, respectively. Does this data suggest that the true average percentage of organic matter in such soil is something other than 3%? Carry out a test of the appropriate hypotheses at significance level .10 by first determining the P-value. Would your conclusion be different if α = .05 had been used? [Note: A normal probability plot of the data shows an acceptable pattern in light of the reasonably large sample size.]

56. The times of first sprinkler activation for a series of tests with fire-prevention sprinkler systems using an aqueous film-forming foam were (in sec)

27 41 22 27 23 35 30 33 24 27 28 22 24

(see "Use of AFFF in Sprinkler Systems," Fire Technology, 1976: 5). The system has been designed so that true average activation time is at most 25 sec under such conditions. Does the data strongly contradict the validity of this design specification? Test the relevant hypotheses at significance level .05 using the P-value approach.

57. A pen has been designed so that true average writing lifetime under controlled conditions (involving the use of a writing machine) is at least 10 h. A random sample of 18 pens is selected, the writing lifetime of each is determined, and a normal probability plot of the resulting data supports the use of a one-sample t test.
a. What hypotheses should be tested if the investigators believe a priori that the design specification has been satisfied?
b. What conclusion is appropriate if the hypotheses of part (a) are tested, t = −2.3, and α = .05?
c. What conclusion is appropriate if the hypotheses of part (a) are tested, t = −1.8, and α = .01?
d. What should be concluded if the hypotheses of part (a) are tested and t = −3.6?

58. A spectrophotometer used for measuring CO concentration [ppm (parts per million) by volume] is checked for accuracy by taking readings on manufactured gas (called span gas) in which the CO concentration is very precisely controlled at 70 ppm. If the readings suggest that the spectrophotometer is not working properly, it will have to be recalibrated. Assume that if it is properly calibrated, measured concentration for span gas samples is normally distributed. On the basis of the six readings 85, 77, 82, 68, 71, and 79, is recalibration necessary? Carry out a test of the relevant hypotheses using the P-value approach with α = .05.

59. The relative conductivity of a semiconductor device is determined by the amount of impurity "doped" into the device during its manufacture. A silicon diode to be used for a specific purpose requires an average cut-on voltage of .60 V, and if this is not achieved, the amount of impurity must be adjusted. A sample of diodes was selected and the cut-on voltage was determined. The accompanying SAS output resulted from a request to test the appropriate hypotheses.

N    Mean       StdDev     T          Prob > |T|
15   0.0453333  0.0899100  1.9527887  0.0711

[Note: SAS explicitly tests H0: μ = 0, so to test H0: μ = .60, the null value .60 must be subtracted from each xᵢ; the reported mean is then the average of the (xᵢ − .60) values. Also, SAS's P-value is always for a two-tailed test.] What would be concluded for a significance level of .01? .05? .10?

9.5 Some Comments on Selecting a Test Procedure

Once the experimenter has decided on the question of interest and the method for gathering data (the design of the experiment), construction of an appropriate test procedure consists of three distinct steps:
1. Specify a test statistic (the decision is based on this function of the data).
2.
Decide on the general form of the rejection region (typically, reject H0 for suitably large values of the test statistic, reject for suitably small values, or reject for either small or large values).
3. Select the specific numerical critical value or values that will separate the rejection region from the acceptance region (by obtaining the distribution of the test statistic when H0 is true, and then selecting a level of significance).

In the examples thus far, both steps 1 and 2 were carried out in an ad hoc manner through intuition. For example, when the underlying population was assumed normal with mean μ and known σ, we were led from X̄ to the standardized test statistic

Z = (X̄ − μ0) / (σ/√n)

For testing H0: μ = μ0 versus Ha: μ > μ0, intuition then suggested rejecting H0 when z was large. Finally, the critical value was determined by specifying the level of significance α and using the fact that Z has a standard normal distribution when H0 is true. The reliability of the test in reaching a correct decision can be assessed by studying type II error probabilities.

Issues to be considered in carrying out steps 1–3 encompass the following questions:
1. What are the practical implications and consequences of choosing a particular level of significance once the other aspects of a test procedure have been determined?
2. Does there exist a general principle, not dependent just on intuition, that can be used to obtain best or good test procedures?
3. When two or more tests are appropriate in a given situation, how can the tests be compared to decide which should be used?
4. If a test is derived under specific assumptions about the distribution or population being sampled, how well will the test procedure work when the assumptions are violated?
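As a concrete instance of steps 1–3, here is a minimal sketch of the upper-tailed z test just described. The function name and the illustrative numbers are ours, not the book's; the computation itself follows the displayed statistic exactly.

```python
import math
from statistics import NormalDist, fmean

def upper_tailed_z_test(x, mu0, sigma, alpha):
    """Steps 1-3 for testing H0: mu = mu0 versus Ha: mu > mu0, sigma known."""
    n = len(x)
    z = (fmean(x) - mu0) / (sigma / math.sqrt(n))   # step 1: the test statistic
    z_crit = NormalDist().inv_cdf(1 - alpha)        # step 3: critical value z_alpha
    return z, z_crit, z >= z_crit                   # step 2: reject H0 for large z

# Hypothetical data: n = 25 observations averaging 103, sigma = 10, mu0 = 100.
# Then z = (103 - 100)/(10/5) = 1.5 < z_.05 = 1.645, so H0 is not rejected.
z, z_crit, reject = upper_tailed_z_test([103.0] * 25, mu0=100.0, sigma=10.0, alpha=0.05)
```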
Statistical Versus Practical Significance

Although the process of reaching a decision by using the methodology of classical hypothesis testing involves selecting a level of significance and then rejecting or not rejecting H0 at that level, simply reporting the α used and the decision reached conveys little of the information contained in the sample data. Especially when the results of an experiment are to be communicated to a large audience, rejection of H0 at level .05 will be much more convincing if the observed value of the test statistic greatly exceeds the 5% critical value than if it barely exceeds that value. This is precisely what led to the notion of P-value as a way of reporting significance without imposing a particular α on others who might wish to draw their own conclusions.

Even if a P-value is included in a summary of results, however, there may be difficulty in interpreting this value and in making a decision. This is because a small P-value, which would ordinarily indicate statistical significance in that it would strongly suggest rejection of H0 in favor of Ha, may be the result of a large sample size in combination with a departure from H0 that has little practical significance. In many experimental situations, only departures from H0 of large magnitude would be worthy of detection, whereas a small departure from H0 would have little practical significance. Consider as an example testing H0: μ = 100 versus Ha: μ > 100 where μ is the mean of a normal population with σ = 10.

Table 9.1 An illustration of the effect of sample size on P-values and β

n        P-value when x̄ = 101    β(101) for level .01 test
25       .3085                    .9664
100      .1587                    .9082
400      .0228                    .6293
900      .0013                    .2514
1600     .0000317                 .0475
2500     .000000287               .0038
10,000   7.69 × 10⁻²⁴             .0000
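The entries of Table 9.1 are straightforward to reproduce. The sketch below (our own helper, not from the text) computes, for a given n, the P-value of x̄ = 101 for the upper-tailed z test and β(101) for the level .01 test; it matches the tabled values to rounding accuracy.

```python
from statistics import NormalDist

def table_row(n, mu0=100.0, sigma=10.0, xbar=101.0, alpha=0.01):
    """P-value of xbar for the upper-tailed z test of H0: mu = mu0, and
    beta(101) = P(not rejecting at level alpha when the true mean is 101)."""
    phi = NormalDist()
    se = sigma / n ** 0.5
    p_value = 1 - phi.cdf((xbar - mu0) / se)       # tail area beyond observed z
    z_crit = phi.inv_cdf(1 - alpha)                # about 2.33 for alpha = .01
    beta = phi.cdf(z_crit - (101.0 - mu0) / se)    # type II error prob at mu = 101
    return p_value, beta

# table_row(25) reproduces the first row; increasing n drives the P-value of
# xbar = 101 toward 0 even though the practical departure from 100 is tiny.
```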
Suppose a true value of μ = 101 would not represent a serious departure from H0 in the sense that not rejecting H0 when μ = 101 would be a relatively inexpensive error. For a reasonably large sample size n, this μ would lead to an x̄ value near 101, so we would not want this sample evidence to argue strongly for rejection of H0 when x̄ = 101 is observed. For various sample sizes, Table 9.1 records both the P-value when x̄ = 101 and also the probability of not rejecting H0 at level .01 when μ = 101.

The second column in Table 9.1 shows that even for moderately large sample sizes, the P-value of x̄ = 101 argues very strongly for rejection of H0, whereas the observed x̄ itself suggests that in practical terms the true value of μ differs little from the null value μ0 = 100. The third column points out that even when there is little practical difference between the true μ and the null value, for a fixed level of significance a large sample size will almost always lead to rejection of the null hypothesis at that level. To summarize, one must be especially careful in interpreting evidence when the sample size is large, since any small departure from H0 will almost surely be detected by a test, yet such a departure may have little practical significance.

Best Tests for Simple Hypotheses

The test procedures presented thus far are (hopefully) intuitively reasonable, but have not been shown to be best in any sense. How can an optimal test be obtained, one for which the type II error probability is as small as possible, subject to controlling the type I error probability at the desired level? Our starting point here will be a rather unrealistic situation from a practical viewpoint: testing a simple null hypothesis against a simple alternative hypothesis. A simple hypothesis is one which, when true, completely specifies the distribution of the sample Xᵢ's. Suppose, for example, that the Xᵢ's form a random sample from an exponential distribution with parameter λ.
Then the hypothesis H: λ = 1 is simple, since when H is true each Xᵢ has an exponential distribution with parameter λ = 1. We might then consider H0: λ = 1 versus Ha: λ = 2, both of which are simple hypotheses. The hypothesis H: λ ≤ 1 is not simple, because when H is true, the distribution of each Xᵢ might be exponential with λ = 1 or with λ = .8 or … . Similarly, if the Xᵢ's constitute a random sample from a normal distribution with known σ, then H: μ = 100 is a simple hypothesis. But if the value of σ is unknown, this hypothesis is not simple because the distribution of each Xᵢ is then not completely specified; it could be normal with μ = 100 and σ = 15, or normal with μ = 100 and σ = 12, or normal with μ = 100 and any other positive value of σ. For a hypothesis to be simple, the value of every parameter in the pmf or pdf of the Xᵢ's must be specified.

The next result was a milestone in the theory of hypothesis testing: a method for constructing a best test for a simple null hypothesis versus a simple alternative hypothesis. Let f(x₁, …, xₙ; θ) be the joint pmf or pdf of the Xᵢ's. Then our null hypothesis will assert that θ = θ0 and the relevant alternative hypothesis will claim that θ = θa. The result will carry over to the case of more than one parameter as long as the value of each parameter is completely specified in both H0 and Ha.

THE NEYMAN–PEARSON THEOREM
For testing a simple null hypothesis H0: θ = θ0 versus a simple alternative hypothesis Ha: θ = θa, let k be a positive fixed number and form the rejection region

R* = {(x₁, …, xₙ): f(x₁, …, xₙ; θa) / f(x₁, …, xₙ; θ0) ≥ k}

Thus R* is the set of all observations for which the likelihood ratio (the ratio of the alternative likelihood to the null likelihood) is at least k.
The probability of a type I error for the test with this rejection region is

α* = P[(X₁, …, Xₙ) ∈ R* when θ = θ0]

whereas the type II error probability β* is the probability that the Xᵢ's lie in the complement of R* (in the "acceptance" region) when θ = θa. Then for any other test procedure with type I error probability α satisfying α ≤ α*, the probability of a type II error must satisfy β ≥ β*. Thus the test with rejection region R* has the smallest type II error probability among all tests for which the type I error probability is at most α*.

The choice of the constant k in the rejection region will determine the type I error probability α*. In the continuous case, k can be selected to give one of the traditional significance levels .05, .01, and so on, whereas in the discrete case α* = .057 or .039 may be as close as one can get to .05.

Example 9.20   Consider randomly selecting n = 5 new vehicles of a certain type and determining the number of major defects on each one. Letting Xᵢ denote the number of such defects for the ith selected vehicle (i = 1, …, 5), suppose that the Xᵢ's form a random sample from a Poisson distribution with parameter λ. Let's find the best test for testing H0: λ = 1 versus Ha: λ = 2. The Poisson likelihood is

f(x₁, …, x₅; λ) = e^(−5λ) λ^(Σxᵢ) / Π xᵢ!

Substituting first λ = 2, then λ = 1, and then taking the ratio of these two likelihoods gives the rejection region

R* = {(x₁, …, x₅): e^(−5) · 2^(Σxᵢ) ≥ k}

Multiplying both sides of the inequality by e⁵ and letting k′ = ke⁵ gives the rejection region 2^(Σxᵢ) ≥ k′. Now take the natural logarithm of both sides and let c = ln(k′)/ln(2) to obtain the rejection region Σxᵢ ≥ c. This latter rejection region is completely equivalent to R*: for any particular value k there will be a corresponding value c, and vice versa.
But it is much easier to express the rejection region in this latter form and then select c to obtain a desired significance level than it is to determine an appropriate value of k for the likelihood ratio. In particular, T = ΣXᵢ has a Poisson distribution with parameter 5λ (via a moment generating function argument), so when H0 is true T has a Poisson distribution with parameter 5. From the 5.0 column of our Poisson table (Table A.2), the cumulative probabilities for the values 8 and 9 are .932 and .968, respectively. Thus if we use c = 9 in the rejection region,

α* = P(Poisson rv with parameter 5 is ≥ 9) = 1 − .932 = .068

Choosing instead c = 10 gives α* = .032. If we insist that the significance level be at most .05, then the optimal rejection region is Σxᵢ ≥ 10. When Ha is true, the test statistic has a Poisson distribution with parameter 10. Thus

β* = P(H0 is not rejected when Ha is true) = P(Poisson rv with parameter 10 is ≤ 9) = .458

Obviously this type II error probability is quite large. This is because the sample size n = 5 is too small to allow for effective discrimination between λ = 1 and λ = 2. For a sample size of 10, the Poisson table reveals that the best test having significance level at most .05 uses c = 16, for which α* = .049 (Poisson parameter = 10) and β* = .157 (Poisson parameter = 20).

Finally, returning to a sample size of 5, c = 10 implies that 10 = ln(ke⁵)/ln(2), from which k = 2¹⁰/e⁵ = 6.9. For the best test to have a significance level of at most .05, the null hypothesis should be rejected only when the likelihood for the alternative value of λ is more than about 7 times what it is for the null value. ■

Example 9.21   Let X₁, …, Xₙ be a random sample from a normal distribution with mean μ and variance 1 (the argument to be given will work for any other known value of σ²).
Consider testing H0: μ = μ0 versus Ha: μ = μa, where μa > μ0. The likelihood ratio is

(1/2π)^(n/2) e^(−(1/2)Σ(xᵢ−μa)²) / [(1/2π)^(n/2) e^(−(1/2)Σ(xᵢ−μ0)²)] = e^(μaΣxᵢ − μ0Σxᵢ − (n/2)(μa² − μ0²)) = [e^(−(n/2)(μa² − μ0²))] · [e^((μa−μ0)Σxᵢ)]

The term in the first set of brackets is a numerical constant. Then μa − μ0 > 0 implies that the likelihood ratio will be at least k if and only if Σxᵢ ≥ c′, that is, if and only if x̄ ≥ c″, which means if and only if

z = (x̄ − μ0)/(1/√n) ≥ c

If we now let c = z.01 = 2.33, this z test (one for which the test statistic has a standard normal distribution when H0 is true) will have minimum β among all tests for which α ≤ .01. ■

The key idea in these last two examples cannot be overemphasized: Write an expression for the likelihood ratio, and then manipulate the inequality likelihood ratio ≥ k so it is equivalent to an inequality involving a test statistic whose distribution when H0 is true is known or can be derived. Then this known or derived distribution can be used to obtain a test with the desired α. In the first example the distribution was Poisson with parameter 5, and in the second it was the standard normal distribution.

Proof of the Neyman–Pearson Theorem: We shall consider the case in which the Xᵢ's have a discrete distribution, so that type I and type II error probabilities are obtained by summation. In the continuous case, integration replaces summation. Then

R* = {(x₁, …, xₙ): f(x₁, …, xₙ; θa) ≥ k f(x₁, …, xₙ; θ0)}
α* = P[(X₁, …, Xₙ) ∈ R* when θ = θ0] = Σ_{R*} f(x₁, …, xₙ; θ0)
1 − β* = P[(X₁, …, Xₙ) ∈ R* when θ = θa] = Σ_{R*} f(x₁, …, xₙ; θa)

(β* is the sum over values in the complement of the rejection region). Suppose that R is a rejection region different from R* whose type I error probability is at most α*; that is,

α = P[(X₁, …, Xₙ) ∈ R when θ = θ0] = Σ_R f(x₁, …, xₙ; θ0) ≤ α*

Consider the difference

Σ_{R*} [f(x₁, …, xₙ; θa) − k f(x₁, …, xₙ; θ0)] − Σ_R [f(x₁, …, xₙ; θa) − k f(x₁, …, xₙ; θ0)]

The terms for points belonging to both R* and R cancel. This last difference is nonnegative (i.e., ≥ 0) because the term in the square brackets is ≥ 0 for any set of xᵢ's in R* and is negative for any set of xᵢ's not in R*. It then follows that

0 ≤ Σ_{R*} f(x₁, …, xₙ; θa) − k Σ_{R*} f(x₁, …, xₙ; θ0) − Σ_R f(x₁, …, xₙ; θa) + k Σ_R f(x₁, …, xₙ; θ0)
  = (1 − β*) − kα* − (1 − β) + kα
  = β − β* − k(α* − α)
  ≤ β − β*   (since α ≤ α* implies that the term being subtracted is nonnegative)

Thus we have shown that β* ≤ β, as desired. ■

Power and Uniformly Most Powerful Tests

The Neyman–Pearson theorem can be restated in a slightly different way by considering the power of a test, first introduced in Section 9.2.

DEFINITION   Let Ω0 and Ωa be two disjoint sets of possible values of θ, and consider testing H0: θ ∈ Ω0 versus Ha: θ ∈ Ωa using a test with rejection region R. Then the power function of the test, denoted by π(·), is the probability of rejecting H0 considered as a function of θ:

π(θ′) = P[(X₁, …, Xₙ) ∈ R when θ = θ′]

Since we don't want to reject the null hypothesis when θ ∈ Ω0 and do want to reject it when θ ∈ Ωa, we wish a test for which the power function is close to 0 whenever θ′ is in Ω0 and close to 1 whenever θ′ is in Ωa. The power is easily related to the type I and type II error probabilities:

π(θ′) = P(type I error when θ = θ′) = α(θ′)   when θ′ ∈ Ω0
π(θ′) = 1 − P(type II error when θ = θ′) = 1 − β(θ′)   when θ′ ∈ Ωa

Thus large power when θ′ ∈ Ωa is equivalent to small β for such parameter values.

Example 9.22   The drying time (min) of a particular brand and type of paint on a test board under controlled conditions is known to be normally distributed with μ = 75 and σ = 9.4. A new additive has been developed for the purpose of improving drying time. Assume that drying time with the additive is still normally distributed with the same standard deviation, and consider testing H0: μ ≥ 75 versus Ha: μ < 75 based on a sample of size n = 100. A test with significance level .01 rejects H0 if z ≤ −2.33, where z = (x̄ − 75)/(9.4/√100) = (x̄ − 75)/.94.
Manipulating the inequality in the rejection region to isolate x̄ gives the equivalent rejection region x̄ ≤ 72.81. Thus the power of the test when μ = 70 (a substantial departure from the null hypothesis) is

π(70) = P(X̄ ≤ 72.81 when μ = 70) = Φ((72.81 − 70)/(9.4/√100)) = Φ(2.99) = .9986

so β = .0014. It is easily verified that π(75) = .01, the significance level. The power when μ = 76 (a parameter value for which H0 is true) is

π(76) = P(X̄ ≤ 72.81 when μ = 76) = Φ((72.81 − 76)/(9.4/√100)) = Φ(−3.39) = .0003

which is quite small, as it should be. By repeating this calculation for various other values of μ we obtain the entire power function. A graph of the ideal power function appears in Figure 9.10(a) and the actual power function is graphed in Figure 9.10(b). The maximum power for μ ≥ 75 (i.e., in Ω0) occurs at μ = 75, on the boundary between Ω0 and Ωa. Because the power function is continuous, there are values of μ smaller than 75 for which the power is quite small. Even with a large sample size, it is difficult to detect a very small departure from the null hypothesis. ■

[Figure 9.10 Graphs of power functions for Example 9.22: (a) ideal, (b) actual; power plotted against the mean over the range 68 to 77]

The Neyman–Pearson theorem says that when Ω0 consists of a single value θ0 and Ωa also consists of a single value θa, the rejection region R* specifies a test for which the power π(θa) at the alternative value θa (which is just 1 − β) is maximized subject to π(θ0) ≤ α for some specified value of α. That is, R* specifies a most powerful test subject to the restriction on the power when the null hypothesis is true. What about best tests when at least one of the two hypotheses is composite, that is, when Ω0 or Ωa (or both) consist of more than a single value?
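The power calculations of Example 9.22 are easy to reproduce numerically. This sketch (the helper function is ours, not the book's) evaluates π(μ) = Φ((72.81 − μ)/(9.4/√100)) using Python's standard-library normal distribution:

```python
from statistics import NormalDist

def power(mu, cutoff=72.81, sigma=9.4, n=100):
    """pi(mu) = P(Xbar <= cutoff when the true mean is mu), Example 9.22."""
    return NormalDist().cdf((cutoff - mu) / (sigma / n ** 0.5))

# power(70) is about .9986, power(75) is about .01 (the significance level),
# and power(76) is tiny; the function decreases continuously in mu, which is
# why small departures below 75 are hard to detect.
```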
(Example 9.20 continued)   Consider again a random sample of size n = 5 from a Poisson distribution, and suppose we now wish to test H0: λ ≤ 1 versus Ha: λ > 1. Both of these hypotheses are composite. Arguing as in Example 9.20, for any value λa exceeding 1, a most powerful test of H0: λ = 1 versus Ha: λ = λa with significance level (power when λ = 1) .032 rejects the null hypothesis when Σxᵢ ≥ 10. Furthermore, it is easily verified that the power of this test at λ′ is smaller than .032 if λ′ < 1. Thus the test that rejects H0: λ ≤ 1 in favor of Ha: λ > 1 when Σxᵢ ≥ 10 has maximum power for any λ′ > 1 subject to the condition that π(λ′) ≤ .032 for λ′ ≤ 1. This test is uniformly most powerful. ■

More generally, a uniformly most powerful (UMP) level α test is one for which π(θ′) is maximized for any θ′ ∈ Ωa subject to π(θ′) ≤ α for any θ′ ∈ Ω0. Unfortunately UMP tests are fairly rare, especially in commonly encountered situations when H0 and Ha are assertions about a single parameter θ, whereas the distribution of the Xᵢ's involves not only θ but also at least one other "nuisance parameter." For example, when the population distribution is normal with the values of both μ and σ unknown, σ is a nuisance parameter when testing H0: μ = μ0 versus Ha: μ ≠ μ0. Be careful here: the null hypothesis is not simple, because Ω0 consists of all pairs (μ, σ) for which μ = μ0 and σ > 0, and there is certainly more than one such pair. In this situation, the one-sample t test is not UMP.

However, suppose we restrict attention to unbiased tests, those for which the smallest value of π(θ′) for θ′ ∈ Ωa is at least as large as the largest value of π(θ′) for θ′ ∈ Ω0. Unbiasedness simply says that we are at least as likely to reject the null hypothesis when H0 is false as we are to reject it when H0 is true.
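The claims made in the continuation of Example 9.20 are easy to verify numerically. The sketch below (our own helpers, not from the text) evaluates the power function π(λ′) = P(ΣXᵢ ≥ 10) of that test, confirming that it equals .032 at λ′ = 1, stays below .032 throughout the null set λ′ ≤ 1, and increases on the alternative, so the test is in particular unbiased.

```python
import math

def poisson_cdf(k, mean):
    # P(X <= k) for X ~ Poisson(mean), by direct summation of the pmf
    term = total = math.exp(-mean)
    for i in range(1, k + 1):
        term *= mean / i
        total += term
    return total

def power(lam, n=5, c=10):
    """pi(lambda') = P(sum of the Xi >= c), where the sum is Poisson(n*lambda')."""
    return 1.0 - poisson_cdf(c - 1, n * lam)

# power(1) is about .032 (the significance level) and 1 - power(2) is about
# .458, the type II error probability beta* found in Example 9.20.
```

Because power is increasing in λ′, its value on Ωa always exceeds its value on Ω0: exactly the unbiasedness property defined above.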
The test proposed in Example 9.22 involving paint drying times is unbiased because, as Figure 9.10(b) shows, the power function at or to the right of 75 is smaller than it is to the left of 75. It can be shown that the one-sample t test is UMP unbiased; that is, it is uniformly most powerful among all tests that are unbiased. Several other commonly used tests also have this property. Please consult one of the chapter references for more details.

Likelihood Ratio Tests

The likelihood ratio (LR) principle is the most frequently used method for finding an appropriate test statistic in a new situation. As before, denote the joint pmf or pdf of X₁, …, Xₙ by f(x₁, …, xₙ; θ). In the case of a random sample, it will be a product f(x₁; θ) · … · f(xₙ; θ). When the xᵢ's are the actual observations and f(x₁, …, xₙ; θ) is regarded as a function of θ, it is called the likelihood function. Again consider testing H0: θ ∈ Ω0 versus Ha: θ ∈ Ωa, where Ω0 and Ωa are disjoint sets, and let Ω = Ω0 ∪ Ωa. In the Neyman–Pearson theorem, we focused on the ratio of the likelihood when θ ∈ Ωa to the likelihood when θ ∈ Ω0, rejecting H0 when the value of the ratio was "sufficiently large." Now we consider the ratio of the likelihood when θ ∈ Ω0 to the likelihood when θ ∈ Ω. A very small value of this ratio argues against the null hypothesis, since a small value arises when the data is much more consistent with the alternative hypothesis than with the null hypothesis. More formally,

1. Find the largest value of the likelihood for any θ ∈ Ω0 by finding the maximum likelihood estimate of θ within Ω0 and substituting this mle into the likelihood function to obtain L(Ω0).
2. Find the largest value of the likelihood for any θ ∈ Ω by finding the maximum likelihood estimate of θ within Ω and substituting this mle into the likelihood function to obtain L(Ω).
Because Ω0 is a subset of Ω, this likelihood L(Ω) can't be any smaller than the likelihood L(Ω0) obtained in the first step, and it will be much larger when the data is much more consistent with Ha than with H0.

3. Form the likelihood ratio L(Ω0)/L(Ω) and reject the null hypothesis in favor of the alternative when this ratio is ≤ k. The critical value k is chosen to give a test with the desired significance level.

In practice, the inequality L(Ω0)/L(Ω) ≤ k is manipulated into an equivalent condition on a statistic whose distribution when H0 is true is known or can be derived. As an illustration, consider testing H0: μ = μ0 versus Ha: μ ≠ μ0 based on a random sample X₁, …, Xₙ from a normal distribution with both μ and σ² unknown. Here Ω = {(μ, σ²): −∞ < μ < ∞, σ² > 0}, and the likelihood function is

f(x₁, …, xₙ; μ, σ²) = (1/(2πσ²))^(n/2) e^(−Σ(xᵢ−μ)²/(2σ²))

In Section 7.2 we obtained the mle's as μ̂ = x̄, σ̂² = Σ(xᵢ − x̄)²/n. Substituting these estimates back into the likelihood function gives

L(Ω) = (1/(2πΣ(xᵢ − x̄)²/n))^(n/2) e^(−n/2)

Within Ω0, μ in the foregoing likelihood is replaced by μ0, so that only σ² must be estimated. It is easily verified that the mle is σ̂0² = Σ(xᵢ − μ0)²/n. Substitution of this estimate in the likelihood function yields

L(Ω0) = (1/(2πΣ(xᵢ − μ0)²/n))^(n/2) e^(−n/2)

Thus we reject H0 in favor of Ha when

L(Ω0)/L(Ω) = [Σ(xᵢ − x̄)² / Σ(xᵢ − μ0)²]^(n/2) ≤ k

Raising both sides of this inequality to the power 2/n, we reject H0 whenever

Σ(xᵢ − x̄)² / Σ(xᵢ − μ0)² ≤ k^(2/n)

This is intuitively quite reasonable: the value μ0 is implausible for μ if the sum of squared deviations about the sample mean is much smaller than the sum of squared deviations about μ0. The denominator of this latter ratio can be expressed as

Σ(xᵢ − μ0)² = Σ[(xᵢ − x̄) + (x̄ − μ0)]² = Σ(xᵢ − x̄)² + 2(x̄ − μ0)Σ(xᵢ − x̄) + n(x̄ − μ0)²

The middle (i.e., cross-product) term in this expression is 0, because the constant x̄ − μ0 can be moved outside the summation, and then the sum of deviations from the sample mean is 0.
Thus we should reject H0 when

Σ(xᵢ − x̄)² / [Σ(xᵢ − x̄)² + n(x̄ − μ0)²] = 1 / [1 + n(x̄ − μ0)²/Σ(xᵢ − x̄)²] ≤ k^(2/n)

This latter ratio will be small when the second term in the denominator is large, so the condition for rejection becomes

n(x̄ − μ0)² / Σ(xᵢ − x̄)² ≥ c′

Dividing both sides by n − 1 and taking square roots gives the rejection region

either (x̄ − μ0)/(s/√n) ≥ c or (x̄ − μ0)/(s/√n) ≤ −c

If we now let c = t(α/2, n−1), we have exactly the two-tailed one-sample t test. The bottom line is that when testing H0: μ = μ0 against the two-sided alternative, the one-sample t test is the likelihood ratio test. This is also true of the upper-tailed version of the t test when the alternative is Ha: μ > μ0 and of the lower-tailed test when the alternative is Ha: μ < μ0. We could trace back through the argument to recover the critical constant k from c, but there is no point in doing this; the rejection region in terms of t is much more convenient than the rejection region in terms of the likelihood ratio. ■

A number of tests discussed subsequently, including the "pooled" t test from the next chapter and various tests from ANOVA (the analysis of variance) and regression analysis, can be derived by the likelihood ratio principle. Rather frequently the inequality for the rejection region of a likelihood ratio test cannot be manipulated to express the test procedure in terms of a simple statistic whose distribution can be ascertained. The following large-sample result, valid under fairly general conditions, can then be used: If the sample size n is sufficiently large, then the statistic −2[ln(likelihood ratio)] has approximately a chi-squared distribution with ν degrees of freedom, where ν is the difference between the number of "freely varying" parameters in Ω and the number of such parameters in Ω0. For example, if the distribution sampled is bivariate normal with the 5 parameters μ1, μ2, σ1, σ2, and ρ and the null hypothesis asserts that μ1 = μ2 and σ1 = σ2, then ν = 5 − 3 = 2.

By definition L(Ω0)/L(Ω) ≤ 1, and the likelihood ratio test rejects H0 when this likelihood ratio is much less than 1. This is equivalent to rejecting when the logarithm of the likelihood ratio is quite negative, that is, when −ln(LR) is quite positive. The large-sample version of the test is thus upper-tailed: H0 should be rejected if −2 ln(likelihood ratio) ≥ χ²(α, ν) (an upper-tail critical value extracted from Table A.6).

Suppose a scientist makes n measurements of some physical characteristic, such as the specific gravity of a liquid. Let X₁, …, Xₙ denote the resulting measurement errors. Assume that these Xᵢ's are independent and identically distributed according to the double exponential (Laplace) distribution:

f(x; θ) = .5e^(−|x−θ|)   for −∞ < x < ∞

The measurement process is unbiased if θ = 0, so consider testing H0: θ = 0 versus Ha: θ ≠ 0. The likelihood is (.5)ⁿ e^(−Σ|xᵢ−θ|), so L(Ω0) = (.5)ⁿ e^(−Σ|xᵢ|), and maximizing the likelihood over Ω requires finding the value of θ for which Σ|xᵢ − θ| is minimized. The absolute value function is not differentiable, and therefore differential calculus cannot be used. Instead, consider for a moment the case n = 5 and let y₁, …, y₅ denote the values of the xᵢ's ordered from smallest to largest, so the yᵢ's are the observed values of the order statistics. For example, a random sample of size five from the Laplace distribution with θ = 0 is −.24998, .75446, −.19053, 1.16237, .83229, so (y₁, …, y₅) = (−.24998, −.19053, .75446, .83229, 1.16237). Then for θ below y₁,

Σ|yᵢ − θ| = y₁ + y₂ + y₃ + y₄ + y₅ − 5θ

which decreases as θ increases toward y₁. Continuing in this way, Σ|yᵢ − θ| keeps decreasing until θ reaches the middle ordered value y₃ and increases thereafter, so the minimizing value is the sample median; the same argument works for any sample size. Denoting the sample median by x̃, we have L(Ω) = (.5)ⁿ e^(−Σ|xᵢ−x̃|), so

L(Ω0)/L(Ω) = e^(−(Σ|xᵢ| − Σ|xᵢ−x̃|))

Here Ω contains one freely varying parameter and Ω0 contains none, so ν = 1, and H0 should be rejected if

−2 ln(LR) = 2(Σ|xᵢ| − Σ|xᵢ − x̃|) ≥ χ²(α, 1)

for the large-sample version of the LR test.

Exercises  Section 9.5 (60–71)

60. Reconsider the paint-drying problem discussed in Example 9.22. The hypotheses were H0: μ ≥ 75 versus Ha: μ < 75, with σ assumed to have value 9.0.
Consider the alternative value yp = 74, which 61. Consider the large-sample level .01 test in Sec- in the context of the problem would presumably : . . tion 9.3 for testing Ho: p = .2 against H,: p > .2. not be a practically significant departure from Ho. eee ‘i b . a. For the alternative value p = .21, compute a. For a level .01 test, compute f at this alterna- B21) fe seni = 100, 2500. ive for sample sizes n = 100, 900, and 2500. BC21) for sample sizes n= 100, J we ee sa aa 10,000, 40,000, and 90,000. b. If the observed value of X is ¥ = 74, what can ee you say about the resulting P-value when b. For p = x/n = 21, compute the P-value when nn = 2500? Is the data statistically significant = 100,250, 10,000, and 40,000, at any of the standard values of «? --- Trang 492 --- 9.5 Some Comments on Selecting a Test Procedure 479 c. In most situations, would it be reasonable to a. Obtain a most powerful test for Ho: 4 = 1 use a level .01 test in conjunction with a sample versus H,: 2 = .5, and express the rejection size of 40,000? Why or why not? region in terms of a “simple” statistic. 62. For a random sample of n individuals taking a be Bs theitest of Ca) uniformly most powerful: for licensing exam, let X; = 1 if the ith individual in Hod 1 vyerus, Hands 12 qustify yyour the sample passes the exam and X, = 0 otherwise ane wey @=Igexeyn): 66. Consider a random sample of size n from the a. With p denoting the proportion of all exam- “shifted exponential” distribution with _ pdf takers who pass, show that the most powerful F(x: 0) =e) for.x > 0 and 0 otherwise (the test of Ho: p = .5 versus H,: p = .75 rejects Ho graph is that of the ordinary exponential pdf with when Ex; > c. 2. = | shifted so that it begins its descent at rather b. If n = 20 and you want % < .05 for the test of than at 0). Let Y; denote the smallest order statistic, (a), would you reject Ho if 15 of the 20 indivi- and show that the likelihood ratio test of Ho: 0 < 1 duals in the sample pass the exam? 
versus H,: § > 1 rejects the null hypothesis if y,, c. What is the power of the test you used in (b) the observed value of Yi, is > ¢. - 8 a oni fee duals is classified according to his/her genotype the hypotheses Ho: p = .5 versus H,: p >.5? : ‘ ae Exblainsyoursreaxcdine with respect to a particular genetic characteristic plain y ig. e. Graph the power function n(p) of the test for the and that the three possible genotypes are AA, Aa, hypotheses of (d) when n = 20 and 2 < .05. and aa with long-run proportions (probabilities) 0°, f. Return to the scenario of (a), and suppose the 261-0); and C1—B)’,- respectively (0-<0'< 1). test is based on a sample size of 50. If the Te iby then (steaigiitorward “te stow" that dhe - : : likelihood is probability of a type II error is approximately .025, what is the approximate significance level oxy e Se of the test (use a normal approximation)? oP - 20(1 — a)” - (1 — 8) 63. The error X in a measurement has a normal distri- where x), x2, and x3 are the number of individuals bution with mean value 0 and variance 6°. Con- in the sample who have the AA, Aa, and aa geno- sider testing Ho: 0” = 2 versus H,: 6” = 3 based types, respectively. Show that the most powerful ona random sample X;,... , X, of errors. test for testing Ho: @ = .5 versus Hy: 0 = 8 a. Show that a most powerful test rejects Hy when rejects the null hypothesis when 2x, + x2 > c. Is ve c this test UMP for the alternative H,: 0 > .5? b. For n = 10, find the value of c for the test in Explain. [Note: The fact that the joint distribution (a) that results in « = .05. of X,, Xo, and X3 is multinomial can be used to c. Is the test of (a) UMP for Ho: 7 = 2 versus obtain the value of ¢ that yields a test with any H,: a? > 22 Justify your assertion. desired significance level when n is large.] 64, Suppose that X, the fraction of a container that is 68. The error in a measurement is normally dis- filled, has pdf fo) = @x""' for 0 0), and let X, ... 
, X, be a random Consider a random sample of n errors, and show sample from this distribution. that the likelihood ratio test for Ho: 41 = 0 versus a. Show that the most powerful test for Ho: 0 = 1 H,: 4 # 0 rejects the null hypothesis when either versus H,: @ = 2 rejects the null hypothesis if X¥>c or X¥<—c. What is c for a test with Lin(x) > c. % = .05? How does the test change if the standard b. Is the test of (a) UMP for testing Ho: @ = 1 deviation of an error is co (known) and the rele- versus H,: 0 > 1? Explain your reasoning. vant hypotheses are Ho: jt = 0 versus Hy: jt #Ulo? ¢. If = 50, what is the (approximate) value of ¢o Measurement ertor in a particular situation is for which the test has significance level .05? ne F normally distributed with mean value jo and 65. Consider a random sample of n component life standard deviation 4. Consider testing Ho: jt = 0 times, where the distribution of lifetime is expo- versus H,: 4 # 0 based on a sample of n = 16 nential with parameter /. measurements. a. Verify that the usual test with significance level 05 rejects Ho if either x > 1.96 or --- Trang 493 --- 480 = ciarrer9 Tests of Hypotheses Based on a Single Sample X < —1.96. [Note: That this test is unbiased a. Determine the significance level (type I error follows from the fact that the way to capture probability) for each rejection region. the largest area under the z curve above an b. Determine the power of each test when interval having width 3.92 is to center that inter- p = A9. Is the test with rejection region Ry a val at 0 (so it extends from —1.96 to 1.96).] uniformly most powerful level .033. test? b. Consider the test that rejects Ho if either Explain. X > 2.17 orx < —1.81. What is @, that is, 7(0)? c. Is the test with rejection region Ry unbiased? c. What is the power of the test proposed in (b) Explain. when jr = .1 and when jr = —.1? (Note that .1 d. 
Sketch the power function for the test with and —.1 are very close to the null value, so one rejection region Rj, and then do so for the test would not expect large power for such values). with the rejection region R>. What does your Is the test unbiased? intuition suggest about the desirability of using d. Calculate the power of the usual test when the rejection region R2? ft = .1 and when ps aad Is the usual test a 71. Consider Example 9.24. most powerful test? [Hint: Refer to your calcu- a — : a. With t= (¥ — jl) /(s/ Vi), show that the likeli- lations in (c).] [Note: It can be shown that . '. 2 -n/2 hood ratio is equal to 2 = [1 + F/(n — DY", the usual test is most powerful among all such ecosnmed -_ Betis, and therefore the approximate chi-square statis- unbiased tests.) tic is —2[In(a)] = n Inf + Pn — DI. 70. A test of whether a coin is fair will be based on b. Apply part (a) to test the hypotheses of n = 50 tosses. Let X be the resulting number of Exercise 55, using the data given there. Com- heads. Consider two rejection regions: Ry = {x: pare your results with the answers found in either x < 17 or x > 33} and Ry = {x either Exercise 55. x < 18 orx > 37}. 72. A sample of 50 lenses used in eyeglasses yields a b. What conclusion would be reached for a sig- sample mean thickness of 3.05 mm and a sample nificance level of .05, and why? Answer the standard deviation of .34 mm. The desired true same question for a significance level of .10. average thickness of such lenses is3.20mm. Does 95° One method for straightening wire before coiling the data strongly suggest that the true average scares i a " mene ‘ ; it to make a spring is called “roller straightening. thickness of such lenses is something other than rh icle “The Effect of Roller and S hat is desired? Test using a = 05 The article “The Effect of Roller and Spinner RE Cee oe Wire Straightening on Coiling Performance and 73. 
In Exercise 72, suppose the experimenter had Wire Properties” (Springs, 1987: 27-28) reports believed before collecting the data that the value on the tensile properties of wire. Suppose a sample of @ was approximately .30. If the experimenter of 16 wires is selected and each is tested to deter- wished the probability of a type II error to be .05 mine tensile strength (N/mm). The resulting sam- when j¢ = 3.00, was a sample size of 50 unneces- ple mean and standard deviation are 2160 and 30, sarily large? respectively. 74. It is specified that a certain type of iron should a TDS ican stenslerstength for sprigs wade ‘ - | , using spinner straightening is 2150 N/mm’. contain .85 g of silicon per 100 g of iron (.85%). : ee as What hypotheses should be tested to determine The silicon content of each of 25 randomly } A ‘ whether the mean tensile strength for the roller selected iron specimens was determined, and the 4 a f method exceeds 2150? accompanying MINITAB output resulted from a . oa as ‘ : = b. Assuming that the tensile strength distribution test of the appropriate hypotheses. . ‘ . is approximately normal, what test statistic Vartable Ni Meat “StDey a t @& would you use to test the hypotheses in part (a)? ean sata ae silcont 25 0/8880 0.1807 0.0361 1.05-0.30 c. bis is the value of the test statistic for this lata? a. What hypotheses were tested? d. What is the P-value for the value of the test statistic computed in part (c)? --- Trang 494 --- Supplementary Exercises 481 e. For a level .05 test, what conclusion would you a conclusion at significance level .05 and also at reach? level .01. 76. Anew method for measuring phosphorus levels in TEST OF MU = 25.000 VS MUG. 7. 25.000 soil is described in the article “A Rapid Method to N MEAN STDEV SEMEAN T —-PVALUE Determine Total Phosphorus in Soils” (Soil Sci. time 13 27.923 5.619 1.559 1.88 0.043 ay 7 AEB: A) nee asample of 90. 
The true average breaking strength of ceramic ie eee WN iced ache te ce insulators of a certain type is supposed to be at ee oma ain S18 ait vias Mane =a least 10 psi. They will be used for a particular a od fy en ah ne © ie » an %0 application unless sample data indicates conclu- nective in Phosphorus level are 987 and 10, sively that this specification has not been met. TORRE A test of hypotheses using x = .01 is to be based Js there evidence:that the mean phosphorus on a random sample of ten insulators. Assume develireponted by Chssnew: method differs sia? that the breaking-strength distribution is normal nificantly from the true value of 548 mg/kg? withvurinownseradate deviation b ee OE a ake forth a. If the true standard deviation is .80, how - What Hants bea aauinie? make for the test likely is it that insulators will be judged satis- ft Pah Ca) to He appropriate? factory when true average breaking strength is 77. The article “Orchard Floor Management Utilizing actually only 9.5? Only 9.0? Soil-Applied Coal Dust for Frost Protection” b. What sample size would be necessary to have (Agric. Forest Meteorol., 1988: 71-82) reports a 75% chance of detecting that true average the following values for soil heat flux of eight breaking strength is 9.5 when the true stan- plots covered with coal dust. dard deviation is .802 34.7 35.4 34.7 37.7 32.5 28.0 18.4 24.9 81, The accompanying observations on residual flame a . time (sec) for strips of treated children’s nightwear ed men so peat we Ge ee covered ony were given in the article “An Introduction to Some SE Ect ns Pano UNE Aaa: Wen eens Precision and Accuracy of Measurement Pro- distribution is approximately normal, does. the lems” (7 Tesh Evat,, 1982: 132-140), Suppose & data suggest that the coal dust is effective in ig oe eRe tHe OR ACHR’. 75had bash mncreasing ithe mean:heat flux cover (that for mandated. Does the data suggest that this condition grass? Test the appropriate hypotheses using has not been met? 
Carry out an appropriate test 78. The article “Caffeine Knowledge, Attitudes, and ave a ins en she piles By basso: Consumption in Adult Women” (J. Nutrit. Ed., x : 1992: 179-184) reports the following summary DBS 293 (295 OTT 86T DBT BGT data on daily caffeine consumption for a sample on4 985 1275 983 992 One 8 of adult women: n = 47, ¥= 215mg, s = 235 O88 Be) BOP a8 Bye Bee mg, and range = 5—1176. ; 82. The incidence of a certain type of chromosome a. Does it appear plausible that the population defect in the U.S. adult male population is distribution of daily caffeine consumption is believed to be 1 in 75, Asandom:-sample of £00 normal? Is it necessary to assume a normal individuals in U.S. penal institutions reveals 16 Population distribution to test hypotheses who have such defects. Can it be concluded that about the value of the population mean con- the incidence rate of this defect among prisoners sumption? Explain your reasoning. differs from the presumed rate for the entire adult b. Suppose it had previously been believed that male population? mean consumption was at most 200 mg. Does a, State and test the relevant hypotheses using the given data contradict this prior belief? Test a = .05. What type of error might you have the appropriate hypotheses at significance level made in reaching a conclusion? -10 and include a P-value in your analysis. b. What P-value is associated with this test? 79. The accompanying output resulted when MINI- ‘Based ont this B2valne,could:Hy/eite jectedtat TAB was used to test the appropriate hypotheses significance level .20? about true average activation time based on the 3, In an investigation of the toxin produced by a data in Exercise 56. 
Use this information to reach Cera polan ous Stake, w-iesearchet prepared 26 --- Trang 495 --- 482 = cuarrer9 Tests of Hypotheses Based on a Single Sample different vials, each containing | g of the toxin, and true average number of weekly requests exceeds then determined the amount of antitoxin needed to 4.0? Test using « = .02. neutralize the toxin. The sample average amount of’ ga notmby manufactufer’ advertises that with its antitexin necessary: way:found.to-be 18mg, and. heating equipment, a temperature of 100°F can be tes garmples Standard deviation Wwagiea2, Erevious achieved in at most 15 min. A random sample of 32 research had indicated that the true average neutra- tubs is selected, and the time necessary to achieve a lizinigamountwas 175 ‘mg/g af toxin, Does\ the 100°F temperature is determined for each tub. The new data contradict the value suggested by prior sample average time and sample standard deviation research? Test the selevant-hypotieses using: the are 17.5 min and 2.2 min, respectively. Does this a aie appicaets Doss ihe sar oe . sour! He data cast doubt on the company’s claim? Compute ye ari on cae Nirah ae er in seat ation. the P-value and use it to reach a conclusion at level Bete DUnon oF neutralizing amounts Spans .05 (assume that the heating-time distribution is 84. The sample average unrestrained compressive approximately normal). strength for 45 specimens of a particular type of gg. Chapter 8 presented a Cl for the variance 0? of a ‘briekéwas computed to'be.3107 pst) and the sample: normal population distribution. The key result standard deviation was 188. The distribution of there was that the. rv 92-= (a 1)S2/22 as a Usitestraliiod eomprodsive: stenetti may Be Sele: chi-squared distribution with n — 1 df. Consider what skewed. Does the data strongly indicate that ihe ‘ail Biypotkeig Ho: o? =a (Gduivalenily thet rims (avetage ‘untestrained compressive = Go). 
Then when Ho is true, the test statistic strength is less than the design value of 3200? PANS fo ae oa gure EE Test using a 2001 with n—1 df. If the relevant alternative is 85. To test the ability of auto mechanics to identify Hy: 0? >a, rejecting Ho if (n—1)S?/o) > simple engine problems, an automobile with a Zen) gives a test with significance level x. To sige guclt problem was taken'in turn'to /2iditter: ensure reasonably uniform characteristics for a Ent cat repair facilities ‘Only 42/of thie72.miechan- particular application, it is desired that the true ics who worked on the car correctly identified the standard deviation of the softening point of a problem. Does this strongly indicate that the frite certain type of petroleum pitch be at most .50°C. proportion |of mechanics who;conld adentify "this The softening points of ten different specimens problem is less than .75? Compute the P-value and were determined, yielding a sample standard devi- teach a conclusion accordingly. ation of .58°C. Does this strongly contradict the 86. When X;, X>, ... , X are independent Poisson uniformity specification? Test the appropriate variables, each with parameter 2, and n is large, hypotheses using 2 = .O1. the sample mean X has approximately a normal gg, Referring to Exercise 88, suppose an investigator distribution with 4 = E(X) = 4 and 6? = V(X) = wishes to test Ho: 02 = .04 versus H,: a? < .04 A/a. This implies that based on a sample of 21 observations. The com- ye puted value of 20s7/,04 is 8.58. Place bounds z= on the P-value and then reach a conclusion at valn level .O1. has approximately a standard normal distribution. 99. When the population distribution is normal and n is For testing Ho: 2 = Ay, we can replace A by Ay in large, the sample standard deviation S has approxi- the equation for Z to obtain a test statistic. This mately a normal distribution with E(S) + o and statistic is actually preferred to the large-sample V(S) = 07/(2n). 
We already know that in this statistic with denominator S/,/n (when the X;’s case, for any n, X is normal with E(X) = pe and are Poisson) because it is tailored explicitly to the V(X) = 02/n. Poisson assumption. If the number of requests for ao Astunning that tne cindetning:- WiaIbitiON % consulting received by a certain statistician during normal, what is an approximately unbiased a 5-day work week has a Poisson distribution and estimator of the 99th percentile 0 = pt + 2.330? the total number of consulting requests during a b. As discussed in Section 6.4, when the X;'s are 36-week period is 160, does this suggest that the normal Y and § are independent rv’s (one mea- sures location whereas the other measures --- Trang 496 --- Bibliography 483 spread). Use this to compute V(0) and aq for b. Derive an expression for B(ju’). [Hint: Express the estimator @ of part (a). What is the esti- the test in the form “reject Ho if either mated standard error 65? ¥>c1 or 0. For what values of } (relative to a) has approximately a standard normal distribu- will B(uo + A) < Blo — A)? tion when Ho is true. If soil pH is normally 95. 4 fer a period of apprenticeship, an organization distributed in a certain region and 64 soil sam- : : ee . . gives an exam that must be passed to be eligible ples yield ¥ = 6.33, s = .16, does this provide ms ‘ for membership. Let p = P(randomly chosen strong evidence for concluding that at most : oa iran Mont é apprentice passes). The organization wishes an 99% of all possible samples would have a pH een f less than 6.75? Test usine ¢ = 01 exam that most but not all should be able to pass, SheSs RLS LOSE URNE Os so it decides that p = .90 is desirable. For a par- 91. Let X,, Xo, ..., X,, be a random sample from an ticular exam, the relevant hypotheses are Ho: exponential distribution with parameter 2. Then it p = .90 versus the alternative H,: p 4 .90. 
Sup- can be shown that 22X; has a chi-squared distri- pose ten people take the exam, and let X = the bution with v = 2n(by first showing that 22X; has number who pass. a chi-squared distribution with vy = 2). a. Does the lower-tailed region {0, 1, ... , 5} a. Use this fact to obtain a test statistic and rejec- specify a level .01 test? tion region that together specify a level x test b. Show that even though H, is two-sided, no for Ho: Jt = lo versus each of the three com- two-tailed test is a level .01 test. monly encountered alternatives. [Hint: E(X;) = ¢. Sketch a graph of f(p’) as a function of p! for f= 1/2, so ff = fo is equivalent to 2 = 1/1.) this test. Is this desirable? b. Suppose that ten identical components, each 94 4 service station has six gas pumps. When no having exponentially distributed time until ie ' . . . . vehicles are at the station, let p; denote the proba- failure, are tested. The resulting failure times ae . bility that the next vehicle will select pump i aS (i= 1, 2, ... , 6). Based on a sample of size n, 95 16 11 3 42 71 225 64 87 123 we wish to test Hy: p; = ... = pg Versus the alter- . - 2 native Hy: pi =p3=Ps, P2 = Ps = Po (note Use the test. procedureof :part (a).to decide that H, is not a simple hypothesis). Let X be the whether the data strongly suggests that the : : ‘ : : number of customers in the sample that select an true average lifetime is less than the previously slaimed value of 75. even-numbered pump. claimed wale oh > a, Show that the likelihood ratio test rejects Ho if 92. Suppose the population distribution is normal with either X > c or X z, orz < —z,_, where b. Let n = 10 andc = 9. Determine the power of the test statistic is Z = (X — py)/(a//n). the test both when Ho is true and also when a. Show that P(type I error) = x. P2 = Ps =Po =H: Pi =P3 = Ps =H- See the bibliographies for Chapters 7 and 8. 
10
Inferences Based on Two Samples

Introduction

Chapters 8 and 9 presented confidence intervals (CIs) and hypothesis-testing procedures for a single mean μ, a single proportion p, and a single variance σ². Here we extend these methods to situations involving the means, proportions, and variances of two different population distributions. For example, let μ₁ and μ₂ denote the true average decreases in cholesterol for two drugs. Then an investigator might wish to use results from patients assigned at random to two different groups as a basis for testing the hypothesis H₀: μ₁ = μ₂ versus the alternative hypothesis Hₐ: μ₁ ≠ μ₂. As another example, let p₁ denote the true proportion of all Catholics who plan to vote for the Republican candidate in the next presidential election, and let p₂ represent the true proportion of all Protestants who plan to vote Republican. Based on a survey of 500 Catholics and 500 Protestants, we might like an interval estimate for the difference p₁ − p₂.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_10, © Springer Science+Business Media, LLC 2012

10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means

The inferences discussed in this section concern a difference μ₁ − μ₂ between the means of two different population distributions. An investigator might, for example, wish to test hypotheses about the difference between the true average weight losses of two diets. One such hypothesis would state that μ₁ − μ₂ = 0, that is, that μ₁ = μ₂. Alternatively, it may be appropriate to estimate μ₁ − μ₂ by computing a 95% CI. Such inferences are based on a sample of weight losses for each diet.

BASIC ASSUMPTIONS
1. X₁, X₂, …, X_m is a random sample from a population with mean μ₁ and variance σ₁².
2.
Y₁, Y₂, …, Yₙ is a random sample from a population with mean μ₂ and variance σ₂².
3. The X and Y samples are independent of each other.

The natural estimator of μ₁ − μ₂ is X̄ − Ȳ, the difference between the corresponding sample means. The test statistic results from standardizing this estimator, so we need expressions for the expected value and standard deviation of X̄ − Ȳ.

PROPOSITION  The expected value of X̄ − Ȳ is μ₁ − μ₂, so X̄ − Ȳ is an unbiased estimator of μ₁ − μ₂. The standard deviation of X̄ − Ȳ is

σ_{X̄−Ȳ} = √( σ₁²/m + σ₂²/n )

Proof  Both these results depend on the rules of expected value and variance presented in Chapter 6. Since the expected value of a difference is the difference of expected values,

E(X̄ − Ȳ) = E(X̄) − E(Ȳ) = μ₁ − μ₂

Because the X and Y samples are independent, X̄ and Ȳ are independent quantities, so the variance of the difference is the sum of V(X̄) and V(Ȳ):

V(X̄ − Ȳ) = V(X̄) + V(Ȳ) = σ₁²/m + σ₂²/n

The standard deviation of X̄ − Ȳ is the square root of this expression. ■

If we think of μ₁ − μ₂ as a parameter θ, then its estimator is θ̂ = X̄ − Ȳ with standard deviation σ_θ̂ given by the proposition. When σ₁² and σ₂² both have known values, the test statistic will have the form (θ̂ − null value)/σ_θ̂; this form of a test statistic was used in several one-sample problems in the previous chapter. When σ₁² and σ₂² are unknown, the sample variances must be used to estimate σ_θ̂.

Test Procedures for Normal Populations with Known Variances

In Chapters 8 and 9, the first CI and test procedure for a population mean μ were based on the assumption that the population distribution was normal with the value of the population variance σ² known to the investigator. Similarly, we first assume here that both population distributions are normal and that the values of both σ₁² and σ₂² are known. Situations in which one or both of these assumptions can be dispensed with will be presented shortly.
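The proposition can also be checked empirically. The following short simulation is an addition to the text, not part of it; the means, standard deviations, and sample sizes are arbitrary illustrative choices. It estimates the mean and standard deviation of X̄ − Ȳ from many replications and compares them with μ₁ − μ₂ and √(σ₁²/m + σ₂²/n):

```python
# Monte Carlo check of E(Xbar - Ybar) = mu1 - mu2 and
# SD(Xbar - Ybar) = sqrt(sigma1^2/m + sigma2^2/n).
# All numeric settings below are illustrative assumptions.
import math
import random

random.seed(1)
mu1, sigma1, m = 5.0, 2.0, 8
mu2, sigma2, n = 3.0, 1.5, 12
reps = 50_000

diffs = []
for _ in range(reps):
    xbar = sum(random.gauss(mu1, sigma1) for _ in range(m)) / m
    ybar = sum(random.gauss(mu2, sigma2) for _ in range(n)) / n
    diffs.append(xbar - ybar)

mean_diff = sum(diffs) / reps
var_diff = sum((d - mean_diff) ** 2 for d in diffs) / (reps - 1)

theory_mean = mu1 - mu2                               # 2.0
theory_sd = math.sqrt(sigma1**2 / m + sigma2**2 / n)  # about .829
print(mean_diff, math.sqrt(var_diff))
```

With 50,000 replications the simulated mean and standard deviation of X̄ − Ȳ agree with the proposition's values to roughly two decimal places.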
Because the population distributions are normal, both X̄ and Ȳ have normal distributions. This implies that X̄ − Ȳ is normally distributed, with expected value μ₁ − μ₂ and standard deviation σ_{X̄−Ȳ} given in the foregoing proposition. Standardizing X̄ − Ȳ gives the standard normal variable

Z = ( X̄ − Ȳ − (μ₁ − μ₂) ) / √( σ₁²/m + σ₂²/n )    (10.1)

In a hypothesis-testing problem, the null hypothesis will state that μ₁ − μ₂ has a specified value. Denoting this null value by Δ₀, the null hypothesis becomes H₀: μ₁ − μ₂ = Δ₀. Often Δ₀ = 0, in which case H₀ says that μ₁ = μ₂. A test statistic results from replacing μ₁ − μ₂ in Expression (10.1) by the null value Δ₀. Because the test statistic Z is obtained by standardizing X̄ − Ȳ under the assumption that H₀ is true, it has a standard normal distribution in this case. Consider the alternative hypothesis Hₐ: μ₁ − μ₂ > Δ₀. A value x̄ − ȳ that considerably exceeds Δ₀ (the expected value of X̄ − Ȳ when H₀ is true) provides evidence against H₀ and for Hₐ. Such a value of x̄ − ȳ corresponds to a positive and large value of z. Thus H₀ should be rejected in favor of Hₐ if z is greater than or equal to an appropriately chosen critical value. Because the test statistic Z has a standard normal distribution when H₀ is true, the upper-tailed rejection region z ≥ z_α gives a test with significance level (type I error probability) α. Rejection regions for the other alternatives Hₐ: μ₁ − μ₂ < Δ₀ and Hₐ: μ₁ − μ₂ ≠ Δ₀ that yield tests with desired significance level α are lower-tailed and two-tailed, respectively.
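The procedure just described can be sketched in code. This sketch is mine, not the book's; the helper names and the numbers in the usage line at the bottom are illustrative assumptions:

```python
# Two-sample z test with known variances, following Expression (10.1)
# with mu1 - mu2 replaced by the null value delta0.
from math import erf, sqrt

def std_normal_cdf(z: float) -> float:
    """Phi(z), the standard normal cdf, via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_sample_z(xbar, ybar, var1, var2, m, n, delta0=0.0):
    """z statistic for H0: mu1 - mu2 = delta0 (sigma1^2, sigma2^2 known)."""
    return (xbar - ybar - delta0) / sqrt(var1 / m + var2 / n)

def p_value(z, alternative="two-sided"):
    """P-value for the upper-, lower-, or two-tailed alternative."""
    if alternative == "greater":
        return 1.0 - std_normal_cdf(z)
    if alternative == "less":
        return std_normal_cdf(z)
    return 2.0 * (1.0 - std_normal_cdf(abs(z)))

# Illustrative values: xbar = 105, ybar = 100, variances 64 and 81, m = n = 50
z = two_sample_z(105.0, 100.0, 64.0, 81.0, 50, 50)
print(z, p_value(z, "greater"))
```

For an upper-tailed test at level α one would reject H₀ when z ≥ z_α, or equivalently when the returned P-value is at most α.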
Null hypothesis: H₀: μ₁ − μ₂ = Δ₀

Test statistic value:  z = ( x̄ − ȳ − Δ₀ ) / √( σ₁²/m + σ₂²/n )

Alternative Hypothesis          Rejection Region for Level α Test
Hₐ: μ₁ − μ₂ > Δ₀                z ≥ z_α (upper-tailed test)
Hₐ: μ₁ − μ₂ < Δ₀                z ≤ −z_α (lower-tailed test)
Hₐ: μ₁ − μ₂ ≠ Δ₀                either z ≥ z_{α/2} or z ≤ −z_{α/2} (two-tailed test)

Because these are z tests, a P-value is computed as it was for the z tests in Chapter 9 [e.g., P-value = 1 − Φ(z) for an upper-tailed test].

Example 10.1  Each student in a class of 21 responded to a questionnaire that requested their grade point average (GPA) and the number of hours each week that they studied. For those who studied less than 10 h/week the GPAs were

2.80  3.40  4.00  3.60  2.00  3.00  3.47  2.80  2.60  2.00

and for those who studied at least 10 h/week the GPAs were

3.00  3.00  2.20  2.40  4.00  2.96  3.41  3.27  3.80  3.10  2.50

Normal plots for both sets are reasonably linear, so the normality assumption is tenable. Because the standard deviation of GPAs for the whole campus is .6, it is reasonable to apply that value here. The sample means are 2.97 for the <10 study hours group and 3.06 for the ≥10 study hours group. Treating the two samples as random, is there evidence that true average GPA differs for the two study times? Let's carry out a test of significance at level .05.

1. The parameter of interest is μ₁ − μ₂, the difference between true mean GPA for the <10 (conceptual) population and true mean GPA for the ≥10 population.
2. The null hypothesis is H₀: μ₁ − μ₂ = 0.
3. The alternative hypothesis is Hₐ: μ₁ − μ₂ ≠ 0; if Hₐ is true then μ₁ and μ₂ are different. Although it would seem unlikely that μ₁ − μ₂ > 0 (those with low study hours have higher mean GPA), we will allow it as a possibility and do a two-tailed test.
4. With Δ₀ = 0, the test statistic value is

z = ( x̄ − ȳ ) / √( σ₁²/m + σ₂²/n )

5. The inequality in Hₐ implies that the test is two-tailed. For α = .05, α/2 = .025 and z_{α/2} = z_{.025} = 1.96. H₀ will be rejected if z ≥ 1.96 or z ≤ −1.96.
6. Substituting m = 10, x̄ = 2.97, σ₁² = .36, n = 11, ȳ = 3.06, and σ₂² = .36 into the formula for z yields

z = (2.97 − 3.06) / √( .36/10 + .36/11 ) = −.09/.262 = −.34

That is, the value of x̄ − ȳ is only one-third of a standard deviation below what would be expected when H₀ is true.
7. Because the value of z is not even close to the rejection region, there is no reason to reject the null hypothesis. This test shows no evidence of any relationship between study hours and GPA. ■

Using a Comparison to Identify Causality

Investigators are often interested in comparing either the effects of two different treatments on a response or the response after treatment with the response after no treatment (treatment vs. control). If the individuals or objects to be used in the comparison are not assigned by the investigators to the two different conditions, the study is said to be observational. The difficulty with drawing conclusions based on an observational study is that although statistical analysis may indicate a significant difference in response between the two groups, the difference may be due to some underlying factors that had not been controlled rather than to any difference in treatments.

A letter in the Journal of the American Medical Association (May 19, 1978) reports that of 215 male physicians who were Harvard graduates and died between November 1974 and October 1977, the 125 in full-time practice lived an average of 48.9 years beyond graduation, whereas the 90 with academic affiliations lived an average of 43.2 years beyond graduation.
Does the data suggest that the mean lifetime after graduation for doctors in full-time practice exceeds the mean lifetime for those who have an academic affiliation? (If so, those medical students who say that they are "dying to obtain an academic affiliation" may be closer to the truth than they realize; in other words, is "publish or perish" really "publish and perish"?)

Let μ₁ denote the true average number of years lived beyond graduation for physicians in full-time practice, and let μ₂ denote the same quantity for physicians with academic affiliations. Assume the 125 and 90 physicians to be random samples from populations 1 and 2, respectively (which may not be reasonable if there is reason to believe that Harvard graduates have special characteristics that differentiate them from all other physicians—in this case inferences would be restricted just to the "Harvard populations"). The letter from which the data was taken gave no information about variances, so for illustration assume that σ₁ = 14.6 and σ₂ = 14.4. The relevant hypotheses are H₀: μ₁ − μ₂ = 0 versus Hₐ: μ₁ − μ₂ > 0, so Δ₀ is zero. The computed value of z is

z = (48.9 − 43.2) / √( (14.6)²/125 + (14.4)²/90 ) = 5.70 / √(1.70 + 2.30) = 2.85

The P-value for an upper-tailed test is 1 − Φ(2.85) = .0022. At significance level .01, H₀ is rejected (because α > P-value) in favor of the conclusion that μ₁ − μ₂ > 0 (μ₁ > μ₂). This is consistent with the information reported in the letter.

This data resulted from a retrospective observational study; the investigator did not start out by selecting a sample of doctors and assigning some to the "academic affiliation" treatment and the others to the "full-time practice" treatment, but instead identified members of the two groups by looking backward in time (through obituaries!) to past records.
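The arithmetic in the test above is easy to reproduce directly; this check is an addition to the text, using only the numbers quoted in the example:

```python
# Reproduce the z statistic and upper-tailed P-value for the
# physicians example (xbar = 48.9, ybar = 43.2, sigma1 = 14.6,
# sigma2 = 14.4, m = 125, n = 90).
from math import erf, sqrt

def phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

xbar, ybar = 48.9, 43.2
sigma1, m = 14.6, 125
sigma2, n = 14.4, 90

z = (xbar - ybar) / sqrt(sigma1**2 / m + sigma2**2 / n)
p = 1.0 - phi(z)   # upper-tailed P-value
print(round(z, 2), round(p, 4))   # 2.85 and .0022, as in the text
```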
Can the statistically significant result here really be attributed to a difference in the type of medical practice after graduation, or is there some other underlying factor (e.g., age at graduation, exercise regimens, etc.) that might also furnish a plausible explanation for the difference?

Once upon a time, it could be argued that the studies linking smoking and lung cancer were all observational, and therefore that nothing had been proved. This was the view of the great (perhaps the greatest) statistician R. A. Fisher, who maintained till his death in 1962 that the observational studies did not show causation. He said that people who choose to smoke might be more susceptible to lung cancer. This explanation for the relationship had plenty of opposition then, and few would support it now. At that time few women got lung cancer because few women had smoked, but when smoking increased among women, so did lung cancer. Furthermore, the incidence of lung cancer was higher for those who smoked more, and quitters had reduced incidence. Eventually, the physiological effects on the body were better understood, and nonobservational animal studies made it clear that smoking does cause lung cancer. ■

A randomized controlled experiment results when investigators assign subjects to the two treatments in a random fashion. When statistical significance is observed in such an experiment, the investigator and other interested parties will have more confidence in the conclusion that the difference in response has been caused by a difference in treatments. A famous example of this type of experiment and conclusion is the Salk polio vaccine experiment described in Section 10.4. These issues are discussed at greater length in the (nonmathematical) books by Moore and by Freedman et al., listed in the Chapter 1 bibliography.
β and the Choice of Sample Size

The probability of a type II error is easily calculated when both population distributions are normal with known values of σ₁ and σ₂. Consider the case in which the alternative hypothesis is Hₐ: μ₁ − μ₂ > Δ₀. Let Δ′ denote a value of μ₁ − μ₂ that exceeds Δ₀ (a value for which H₀ is false). The upper-tailed rejection region z ≥ z_α can be re-expressed in the form x̄ − ȳ ≥ Δ₀ + z_α·σ_{X̄−Ȳ}. Thus the probability of a type II error when μ₁ − μ₂ = Δ′ is

  β(Δ′) = P(not rejecting H₀ when μ₁ − μ₂ = Δ′)
        = P(X̄ − Ȳ < Δ₀ + z_α·σ when μ₁ − μ₂ = Δ′)
        = Φ(z_α − (Δ′ − Δ₀)/σ)

where σ = σ_{X̄−Ȳ} = √(σ₁²/m + σ₂²/n). The expressions for all three alternative hypotheses are:

  Alternative Hypothesis    β(Δ′)
  Hₐ: μ₁ − μ₂ > Δ₀        Φ(z_α − (Δ′ − Δ₀)/σ)
  Hₐ: μ₁ − μ₂ < Δ₀        1 − Φ(−z_α − (Δ′ − Δ₀)/σ)
  Hₐ: μ₁ − μ₂ ≠ Δ₀        Φ(z_{α/2} − (Δ′ − Δ₀)/σ) − Φ(−z_{α/2} − (Δ′ − Δ₀)/σ)

CHAPTER 10  Inferences Based on Two Samples

Example (Example 10.1 continued): If μ₁ and μ₂ (the true average GPAs for the two levels of effort) differ by as much as .5, what is the probability of detecting such a departure from H₀ based on a level .05 test with sample sizes m = 10 and n = 11? The value of σ for these sample sizes (the denominator of z) was previously calculated as .262. The probability of a type II error for the two-tailed level .05 test when μ₁ − μ₂ = Δ′ = .5 is

  β(.5) = Φ(1.96 − (.5 − 0)/.262) − Φ(−1.96 − (.5 − 0)/.262)
        = Φ(.0516) − Φ(−3.868) = .521

By symmetry we also have β(−.5) = .521. Thus the probability of detecting such a departure is 1 − β(.5) = .479. Clearly, we do not have a very good chance of detecting a difference of .5 with these sample sizes. We should not conclude from Example 10.1 that there is no relationship between study time and GPA, because the sample sizes were insufficient.

As in Chapter 9, sample sizes m and n can be determined that will satisfy both P(type I error) = a specified α and P(type II error when μ₁ − μ₂ = Δ′) = a specified β.
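The β(Δ′) expressions above are straightforward to evaluate numerically. The sketch below (standard-library Python; the function name is ours) reproduces the two-tailed calculation from the example. The individual σ₁, σ₂ values are chosen only so that σ matches the .262 quoted in the text:

```python
from math import sqrt
from statistics import NormalDist

def type2_error_two_tailed(delta_prime, delta0, sigma1, m, sigma2, n, alpha=0.05):
    """beta(Delta') for the two-tailed z test, per the formula in the text."""
    Z = NormalDist()
    sigma = sqrt(sigma1**2 / m + sigma2**2 / n)   # SD of Xbar - Ybar
    z_half = Z.inv_cdf(1 - alpha / 2)             # z_{alpha/2}
    shift = (delta_prime - delta0) / sigma
    return Z.cdf(z_half - shift) - Z.cdf(-z_half - shift)

# Example 10.1 (continued): sigma1 = sigma2 = .6 with m = 10, n = 11
# gives sigma close to the .262 used in the text (an assumption for illustration).
beta = type2_error_two_tailed(0.5, 0, 0.6, 10, 0.6, 11, alpha=0.05)
print(round(beta, 3))   # close to the .521 in the text
```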
For an upper-tailed test, equating the previous expression for β(Δ′) to the specified value of β gives

  σ₁²/m + σ₂²/n = ((Δ′ − Δ₀)/(z_α + z_β))²

When the two sample sizes are equal, this equation yields

  m = n = (σ₁² + σ₂²)(z_α + z_β)²/(Δ′ − Δ₀)²

These expressions are also correct for a lower-tailed test, whereas α is replaced by α/2 for a two-tailed test.

Large-Sample Tests

The assumptions of normal population distributions and known values of σ₁ and σ₂ are unnecessary when both sample sizes are large. In this case, the Central Limit Theorem guarantees that X̄ − Ȳ has approximately a normal distribution regardless of the underlying population distributions. Furthermore, using S₁² and S₂² in place of σ₁² and σ₂² in Expression (10.1) gives a variable whose distribution is approximately standard normal:

  Z = (X̄ − Ȳ − (μ₁ − μ₂))/√(S₁²/m + S₂²/n)

A large-sample test statistic results from replacing μ₁ − μ₂ by Δ₀, the expected value of X̄ − Ȳ when H₀ is true. This statistic Z then has approximately a standard normal distribution when H₀ is true, so level α tests are obtained by using z critical values exactly as before. Use of the test statistic value

  z = (x̄ − ȳ − Δ₀)/√(s₁²/m + s₂²/n)

along with the previously stated upper-, lower-, and two-tailed rejection regions based on z critical values gives large-sample tests whose significance levels are approximately α. These tests are usually appropriate if both m > 40 and n > 40. A P-value is computed exactly as it was for our earlier z tests.

Example 10.4: A study was carried out in an attempt to improve student performance in a low-level university mathematics course. Experience had shown that many students had fallen by the wayside, meaning that they had dropped out or completed the course with minimal effort and low grades. The study involved assigning the students to sections based on odd or even Social Security number.
It is important that the assignment to sections not be on the basis of student choice, because then differences in performance might be attributable to differences in student attitude or ability. Half of the sections were taught traditionally, whereas the other half were taught in a way that, it was hoped, would keep the students involved: they were given frequent assignments that were collected and graded, they had frequent quizzes, and they were allowed retakes on exams. Lotus Hershberger conducted the experiment, and he supplied the data. Here are the final exam scores for the 79 students taught traditionally (the control group) and for the 85 students taught with more involvement (the experimental group):

Control
37 22 29 29 33 22 32 36 29 06 04 37 00 36 00 32 27 07 19 35
26 22 28 28 32 35 28 33 35 24 21 00 32 28 27 08 30 37 09 33
30 36 28 03 08 31 29 09 00 00 35 25 29 03 33 33 28 32 39 20
32 22 24 20 32 07 08 33 29 09 00 30 26 25 32 38 22 29 29

Experimental
34 27 26 33 23 37 24 34 22 23 32 05 30 35 28 25 37 28 26 29
22 33 31 23 37 29 00 30 34 26 28 27 32 29 31 33 28 21 34 29
33 06 08 29 36 07 21 30 28 34 28 35 30 34 09 38 09 27 25 33
09 23 32 25 37 28 23 26 34 32 34 00 24 30 36 28 38 35 16 37
25 34 38 34 31

Table 10.1 summarizes the data. Does this information suggest that the true mean for the experimental condition exceeds that for the control condition? Let's use a test with α = .05.

Table 10.1  Summary results for Example 10.4

  Group         Sample Size   Sample Mean   Sample SD
  Control            79          23.87        11.60
  Experimental       85          27.34         8.85

Let μ₁ and μ₂ denote the true mean scores for the control condition and the experimental condition, respectively. The two hypotheses are H₀: μ₁ − μ₂ = 0 versus Hₐ: μ₁ − μ₂ < 0. H₀ will be rejected if z ≤ −z.05 = −1.645. Then

  z = (23.87 − 27.34)/√(11.60²/79 + 8.85²/85) = −3.47/1.620 = −2.14

Since −2.14 ≤ −1.645, H₀ is rejected at significance level .05.
Alternatively, the P-value for a lower-tailed z test is

  P-value = Φ(z) = Φ(−2.14) = .016

which implies rejection at significance level .05. Also, if the test had been two-tailed, then the P-value would be 2(.016) = .032, so the two-tailed test would also reject H₀ at the .05 level.

We have shown fairly conclusively that the experimental method of instruction is an improvement. Nevertheless, there is more to be said. It is important to view the data graphically to see if there is anything strange. Figure 10.1 shows a plot from Systat combining a boxplot and a dotplot.

[Figure 10.1  Boxplot/dotplot for the teaching experiment]

The plot shows that both groups have outlying observations at the low end; some students showed up for the final but performed very poorly. What happens if we compare the groups while ignoring the low performers, whose scores are below 10? The resulting summary information is in Table 10.2.

Table 10.2  Summary results without poor performers

  Group         Sample Size   Sample Mean   Sample SD
  Control            61          29.59        5.005
  Experimental       76          29.88        4.950

Notice that the means and standard deviations for the two groups are now very similar. Indeed, based on Table 10.2 the z-statistic value is −.34, giving no reason to reject the null hypothesis. For the majority of the students, there appears to be not much effect from the experimental treatment. It is the low performers who make a big difference in the results. There were 18 low performers in the control group but only 9 in the experimental group. The effect of the experimental instruction is to decrease the number of students who perform at the bottom of the scale. This is in accord with the goals of the experimental treatment, which was designed to keep students on track.
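Both z statistics in this example (−2.14 from the full data, −.34 after dropping the sub-10 scores) follow directly from the two summary tables. A short check in standard-library Python (the helper function is ours):

```python
from math import sqrt

def two_sample_z(xbar, s1, m, ybar, s2, n, delta0=0.0):
    """Large-sample z statistic for H0: mu1 - mu2 = delta0."""
    return (xbar - ybar - delta0) / sqrt(s1**2 / m + s2**2 / n)

# Table 10.1: all 164 students
z_full = two_sample_z(23.87, 11.60, 79, 27.34, 8.85, 85)
# Table 10.2: low performers (scores below 10) removed
z_trim = two_sample_z(29.59, 5.005, 61, 29.88, 4.950, 76)

print(round(z_full, 2), round(z_trim, 2))   # -2.14 and -0.34
```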
Confidence Intervals for μ₁ − μ₂

When both population distributions are normal, standardizing X̄ − Ȳ gives a random variable Z with a standard normal distribution. Since the area under the z curve between −z_{α/2} and z_{α/2} is 1 − α, it follows that

  P(−z_{α/2} < (X̄ − Ȳ − (μ₁ − μ₂))/√(σ₁²/m + σ₂²/n) < z_{α/2}) = 1 − α

Manipulation of the inequalities inside the parentheses to isolate μ₁ − μ₂ shows that a 100(1 − α)% CI for μ₁ − μ₂ has lower limit x̄ − ȳ − z_{α/2}·√(σ₁²/m + σ₂²/n) and upper limit x̄ − ȳ + z_{α/2}·√(σ₁²/m + σ₂²/n). If both m and n are large, the CLT implies that this interval is valid even without the assumption of normal populations, and in that case the sample variances may replace the population variances, giving the large-sample interval

  x̄ − ȳ ± z_{α/2}·√(s₁²/m + s₂²/n)

whose confidence level is approximately 100(1 − α)%. This interval is generally appropriate when m > 40 and n > 40.

Example 10.5: For many calculus instructors it seems that students taking Calculus I in the fall semester are better prepared than are the students taking it in the spring. If so, it would be nice to have some measure of the difference. We use data from a study of the influence of various predictors on calculus performance, "Factors Affecting Achievement in the First Course in Calculus" (J. Exper. Educ., 1984: 136–140). Here are the ACT mathematics scores for the fall and spring students:

Fall
27 29 30 34 29 30 29 28 28 31 25 34 27 28 31 26 24 30 25 25
27 27 28 27 27 27 26 33 27 26 35 27 32 30 27 30 30 28 28 30
26 31 28 26 23 28 31 28 33 24 32 20 28 34 33 30 29 16 30 30
26 29 26 27 26 25 31 18 29 29 30 29 29 30 33 29 29 27 28 28

Spring
29 26 25 24 14 31 25 33 27 30 27 29 26 27 29 31 25 28 26 23
28 27 27 19 28 25 23 20 34 25 33 30 26 19 18 25 17 26 24 29
20 27 26 26 27 20 28 26 27 24 28 28 30 27 27 27 14 25 27 32
35 13 28 25 29 25 19 27 30 15 28 27 28 32

Figure 10.2 shows a graph from Systat combining a boxplot and a dotplot.

[Figure 10.2  Boxplot/dotplot for fall and spring ACT mathematics scores]

It is evident that there are more high scorers in the fall and more low scorers in the spring. Table 10.3 summarizes the data.
Table 10.3  Summary results for Example 10.5

  Group    Sample Size   Sample Mean   Sample SD
  Fall          80          28.25         3.25
  Spring        74          25.88         4.59

Let's now calculate a confidence interval for the difference between true average fall ACT score and true average spring ACT score, using a confidence level of 95%:

  28.25 − 25.88 ± (1.96)√(3.25²/80 + 4.59²/74) = 2.37 ± (1.96)(.6456)
                                               = 2.37 ± 1.265 = (1.10, 3.64)

That is, with 95% confidence, 1.10 < μ₁ − μ₂ < 3.64. We can therefore be highly confident that the true fall average exceeds the true spring average by between 1.10 and 3.64. It makes sense that the fall average should be higher, because students who were less prepared in the fall (as judged by an algebra placement test) were required to take a fall-semester college algebra course before taking Calculus I in the spring.

If the variances σ₁² and σ₂² are at least approximately known and the investigator uses equal sample sizes, then the sample size n for each sample that yields a 100(1 − α)% interval of width w is

  n = 4z²_{α/2}(σ₁² + σ₂²)/w²

which will generally have to be rounded up to an integer.

Exercises  Section 10.1 (1–19)

1. An article in the November 1983 Consumer Reports compared various types of batteries. The average lifetimes of Duracell Alkaline AA batteries and Eveready Energizer Alkaline AA batteries were given as 4.1 h and 4.5 h, respectively. Suppose these are the population average lifetimes.
   a. Let X̄ be the sample average lifetime of 100 Duracell batteries and Ȳ be the sample average lifetime of 100 Eveready batteries. What is the mean value of X̄ − Ȳ (i.e., where is the distribution of X̄ − Ȳ centered)? How does your answer depend on the specified sample sizes?
   b. Suppose the population standard deviations of lifetime are 1.8 h for Duracell batteries and 2.0 h for Eveready batteries. With the sample sizes given in part (a), what is the variance of the statistic X̄ − Ȳ, and what is its standard deviation?
   c.
For the sample sizes given in part (a), draw a picture of the approximate distribution curve of X̄ − Ȳ (include a measurement scale on the horizontal axis). Would the shape of the curve necessarily be the same for sample sizes of 10 batteries of each type? Explain.

2. Let μ₁ and μ₂ denote true average tread lives for two competing brands of size P205/65R15 radial tires. Test H₀: μ₁ − μ₂ = 0 versus Hₐ: μ₁ − μ₂ ≠ 0 at level .05 using the following data: m = 45, x̄ = 42,500, s₁ = 2200, n = 45, ȳ = 40,400, and s₂ = 1900.

3. Let μ₁ denote true average tread life for a premium brand of P205/65R15 radial tire and let μ₂ denote the true average tread life for an economy brand of the same size. Test the appropriate hypotheses using the following data: m = 45, x̄ = 42,500, s₁ = 2200, n = 45, ȳ = 36,800, and s₂ = 1500.

4. a. Use the data of Exercise 2 to compute a 95% CI for μ₁ − μ₂. Does the resulting interval suggest that μ₁ − μ₂ has been precisely estimated?
   b. Use the data of Exercise 3 to compute a 95% upper confidence bound for μ₁ − μ₂.

5. Persons having Raynaud's syndrome are apt to suffer a sudden impairment of blood circulation in fingers and toes. In an experiment to study the extent of this impairment, each subject immersed a forefinger in water and the resulting heat output (cal/cm²/min) was measured. For m = 10 subjects with the syndrome, the average heat output was x̄ = .64, and for n = 10 nonsufferers, the average output was 2.05. Let μ₁ and μ₂ denote the true average heat outputs for the two types of subjects. Assume that the two distributions of heat output are normal with σ₁ = .2 and σ₂ = .4.
   a. Consider testing H₀: μ₁ − μ₂ = −1.0 versus Hₐ: μ₁ − μ₂ < −1.0 at level .01. Describe in words what Hₐ says, and then carry out the test.
   b. Compute the P-value for the value of z obtained in part (a).
   c. What is the probability of a type II error when the actual difference between μ₁ and μ₂ is μ₁ − μ₂ = −1.2?
   d. Assuming that m = n, what sample sizes are required to ensure that β = .1 when μ₁ − μ₂ = −1.2?

6. An experiment to compare the tension bond strength of polymer latex modified mortar (Portland cement mortar to which polymer latex emulsions have been added during mixing) to that of unmodified mortar resulted in x̄ = 18.12 kgf/cm² for the modified mortar (m = 40) and ȳ = 16.87 kgf/cm² for the unmodified mortar (n = 32). Let μ₁ and μ₂ be the true average tension bond strengths for the modified and unmodified mortars, respectively. Assume that the bond strength distributions are both normal.
   a. Assuming that σ₁ = 1.6 and σ₂ = 1.4, test H₀: μ₁ − μ₂ = 0 versus Hₐ: μ₁ − μ₂ > 0 at level .01.
   b. Compute the probability of a type II error for the test of part (a) when μ₁ − μ₂ = 1.
   c. Suppose the investigator decided to use a level .05 test and wished β = .10 when μ₁ − μ₂ = 1. If m = 40, what value of n is necessary?
   d. How would the analysis and conclusion of part (a) change if σ₁ and σ₂ were unknown but s₁ = 1.6 and s₂ = 1.4?

7. Are male college students more easily bored than their female counterparts? This question was examined in the article "Boredom in Young Adults—Gender and Cultural Comparisons" (J. Cross-Cult. Psych., 1991: 209–223). The authors administered a scale called the Boredom Proneness Scale to 97 male and 148 female U.S. college students. Does the accompanying data support the research hypothesis that the mean Boredom Proneness Rating is higher for men than for women? Test the appropriate hypotheses using a .05 significance level.

   Gender   Sample Size   Sample Mean   Sample SD
   Male          97          10.40         4.83
   Female       148           9.26         4.68

8. Is touching by a coworker sexual harassment? This question was included on a survey given to federal employees, who responded on a scale of 1–5, with 1 meaning a strong negative and 5 indicating a strong yes. The table summarizes the results.

   Gender   Sample Size   Sample Mean   Sample SD
   Female      4343         4.6056        .8650
   Male        3903         4.1709       1.2157

   Of course, with 1–5 being the only possible values, the normal distribution does not apply here, but the sample sizes are sufficient that it does not matter. Obtain a two-sided confidence interval for the difference in population means. Does your interval suggest that females are more likely than males to regard touching as harassment? Explain your reasoning.

9. The article "Evaluation of a Ventilation Strategy to Prevent Barotrauma in Patients at High Risk for Acute Respiratory Distress Syndrome" (New Engl. J. Med., 1998: 355–358) reported on an experiment in which 120 patients with similar clinical features were randomly divided into a control group and a treatment group, each consisting of 60 patients. The sample mean ICU stay (days) and sample standard deviation for the treatment group were 19.9 and 39.1, respectively, whereas these values for the control group were 13.7 and 15.8.
   a. Calculate a point estimate for the difference between true average ICU stay for the treatment and control groups. Does this estimate suggest that there is a significant difference between true average stays under the two conditions?
   b. Answer the question posed in part (a) by carrying out a formal test of hypotheses. Is the result different from what you conjectured in part (a)?
   c. Does it appear that ICU stay for patients given the ventilation treatment is normally distributed? Explain your reasoning.
   d. Estimate true average length of stay for patients given the ventilation treatment in a way that conveys information about precision and reliability.

10. An experiment was performed to compare the fracture toughness of high-purity 18 Ni maraging steel with commercial-purity steel of the same type (Corrosion Sci., 1971: 723–736). The sample average toughness was x̄ = 65.6 for m = 32 specimens of the high-purity steel, whereas for n = 38 specimens of commercial steel ȳ = 59.8. Because the high-purity steel is more expensive, its use for a certain application can be justified only if its fracture toughness exceeds that of commercial-purity steel by more than 5. Suppose that both toughness distributions are normal.
    a. Assuming that σ₁ = 1.2 and σ₂ = 1.1, test the relevant hypotheses using α = .001.
    b. Compute β for the test conducted in part (a) when μ₁ − μ₂ = 6.

11. What impact does fast-food consumption have on various dietary and health characteristics? The accompanying table summarizes data on daily calorie intake for a sample of teens who said they typically do not eat fast food and a sample of teens who said they do.

    Eat Fast Food   Sample Size   Sample Mean   Sample SD
    No                  663          2258         1519
    Yes                 413          2637         1138

    a. Estimate the difference between true average calorie intake for teens who typically don't eat fast food and true average intake for those who do eat fast food, and do so in a way that conveys information about reliability and precision.
    b. Does this data provide strong evidence for concluding that true average calorie intake for teens who typically eat fast food exceeds true average intake for those who don't typically eat fast food by more than 200 cal/day? Carry out a test at significance level .05 based on determining the P-value.

12. A study was carried out to see if fluoride toothpaste helps to prevent cavities ("Clinical Testing of Fluoride and Non-fluoride Containing Dentifrices in Hounslow School Children," British Dental J., Feb. 1971: 154–158). The dependent variable was the DMFS increment, the number of new Decayed, Missing, and Filled Surfaces. The table gives summary data.

    Group      Sample Size   Sample Mean   Sample SD
    Control        289          12.83         8.31
    Fluoride       260           9.78         7.51

    Calculate and interpret a 99% confidence interval for the difference between true means. Is fluoride toothpaste beneficial?

13. A study seeks to compare hospitals based on the performance of their intensive care units. The dependent variable is the mortality ratio, the ratio of the number of deaths to the predicted number of deaths based on the condition of the patients. The comparison will be between hospitals with nurse staffing problems and hospitals without such problems. Assume, based on past experience, that the standard deviation of the mortality ratio is …

14. The article "Reduced Monoamine Oxidase Activity in Blood Platelets from Schizophrenic Patients" (Nature, July 28, 1972: 295–296) reported data on monoamine oxidase activity in blood platelets for schizophrenic patients and normal subjects. … Express H₀ and Hₐ in terms of θ, estimate θ, and derive …

15. a. Show for the upper-tailed test with σ₁ and σ₂ known that as either m or n increases, β decreases when μ₁ − μ₂ > Δ₀.
    b. For the case of equal sample sizes (m = n) and fixed α, what happens to the necessary sample size n as β is decreased, where β is the desired type II error probability at a fixed alternative?

16. To decide whether chemistry or physics majors have higher starting salaries in industry, n B.S. graduates of each major are surveyed, yielding the following results (in $1000s):

    Major       Sample Average   Sample SD
    Chemistry        42.5           2.5
    Physics          42.0           2.5

    Calculate the P-value for the appropriate two-sample z test, assuming that the data was based on n = 100. Then repeat the calculation for n = 400. Is the small P-value for n = 400 indicative of a difference that has practical significance? Would you have been satisfied with just a report of the P-value? Comment briefly.

17. Much recent research has focused on comparing business environment cultures across several countries. The article "Perception of Internal Factors for Corporate Entrepreneurship: A Comparison of Canadian and U.S. Managers" (Entrep. Theory Pract., 1999: 9–24) presented summary data on hours per week managers spent thinking about new ideas. …

18. It has been estimated that about … of all college students possess credit cards, and 80% of these students received cards during their first year of college. The article "College Students' Credit Card Debt and the Role of Parental Involvement: Implications for Public Policy" (J. Public Policy Mark., 2001: 105–113) reported that for 209 students whose parents had no involvement whatsoever in credit card acquisition or payments, the sample mean total account balance was $421 with a sample standard deviation of $686, whereas for 75 students whose parents assisted with payments even though they were under no legal obligation to do so, the sample mean and sample standard deviation were $666 and $1048, respectively.
    a. Do you think it is plausible that the distributions of total debt for these two types of students are normal? Why or why not? Is it necessary to assume normality in order to compare the two groups using an inferential procedure described in this chapter? Explain.
    b. Estimate the true average difference between total balance for noninvolvement students and postacquisition-involvement students using a method that incorporates precision into the estimate. Then interpret the estimate. [Note: Data was also reported in the article for preacquisition involvement only and for both pre- and postacquisition involvement.]

19. Returning to the previous exercise, the mean and standard deviation of the number of credit cards for the no-involvement group were 2.22 and 1.58, respectively, whereas the mean and standard deviation for the payment-help group were 2.09 and 1.65, respectively. Does it appear that the true average number of cards for no-involvement students exceeds the average for payment-help students? Carry out an appropriate test of significance.

10.2 The Two-Sample t Test and Confidence Interval

In practice, it is virtually always the case that the values of the population variances are unknown.
In the previous section, we illustrated for large sample sizes the use of a test procedure and CI in which the sample variances were used in place of the population variances. In fact, for large samples, the CLT allows us to use these methods even when the two populations of interest are not normal.

In many problems, though, at least one sample size is small and the population variances have unknown values. In the absence of the CLT, we proceed by making specific assumptions about the underlying population distributions. The use of inferential procedures that follow from these assumptions is then restricted to situations in which the assumptions are at least approximately satisfied.

ASSUMPTIONS: Both populations are normal, so that X₁, X₂, …, X_m is a random sample from a normal distribution and so is Y₁, …, Y_n (with the X's and Y's independent of each other). The plausibility of these assumptions can be judged by constructing a normal probability plot of the xᵢ's and another of the yᵢ's.

The test statistic and confidence interval formula are based on the same standardized variable developed in Section 10.1, but the relevant distribution is now t rather than z.

THEOREM: When the population distributions are both normal, the standardized variable

  T = (X̄ − Ȳ − (μ₁ − μ₂))/√(S₁²/m + S₂²/n)     (10.2)

has approximately a t distribution with df ν estimated from the data by

  ν = (se₁² + se₂²)²/[se₁⁴/(m − 1) + se₂⁴/(n − 1)]

where se₁ = s₁/√m and se₂ = s₂/√n (round ν down to the nearest integer).

We can give some justification for the theorem. Dividing the numerator and denominator of (10.2) by the standard deviation of the numerator, we get

  T = [(X̄ − Ȳ − (μ₁ − μ₂))/√(σ₁²/m + σ₂²/n)] / √[(S₁²/m + S₂²/n)/(σ₁²/m + σ₂²/n)]

The numerator of this ratio is a standard normal rv because it results from standardizing X̄ − Ȳ, which is normally distributed because it is the difference of independent normal rv's.
The denominator is independent of the numerator because the sample variances are independent of the sample means. However, in order for (10.2) to be a t random variable, the denominator needs to be the square root of a chi-squared rv over its degrees of freedom, and unfortunately this is not generally true. However, we can try to write

  S₁²/m + S₂²/n ≈ (σ₁²/m + σ₂²/n)·(W/ν)

where W is approximately a chi-squared rv with ν degrees of freedom. To determine ν we equate the means and variances of both sides, with the help of E(W) = ν, V(W) = 2ν, (m − 1)S₁²/σ₁² ~ χ²_{m−1}, and (n − 1)S₂²/σ₂² ~ χ²_{n−1} from Section 6.4. It follows that E(S₁²) = σ₁², V(S₁²) = 2σ₁⁴/(m − 1), and similarly for S₂². The mean of the left-hand side is

  E(S₁²/m + S₂²/n) = σ₁²/m + σ₂²/n

which is also the mean of the right-hand side, so the means are equal. The variance of the left-hand side is

  V(S₁²/m + S₂²/n) = 2σ₁⁴/[(m − 1)m²] + 2σ₂⁴/[(n − 1)n²]

and the variance of the right-hand side is

  (σ₁²/m + σ₂²/n)²·V(W)/ν² = 2(σ₁²/m + σ₂²/n)²/ν

We then equate the two, substituting sample variances for the unknown population variances, and solve for ν. This gives the ν of the theorem.

Manipulating T in a probability statement to isolate μ₁ − μ₂ gives a CI, whereas a test statistic results from replacing μ₁ − μ₂ by the null value Δ₀.

TWO-SAMPLE t PROCEDURES: The two-sample t confidence interval for μ₁ − μ₂ with confidence level 100(1 − α)% is then

  x̄ − ȳ ± t_{α/2,ν}·√(s₁²/m + s₂²/n)

A one-sided confidence bound can be calculated as described earlier.

The two-sample t test for testing H₀: μ₁ − μ₂ = Δ₀ is as follows:

  Test statistic value:  t = (x̄ − ȳ − Δ₀)/√(s₁²/m + s₂²/n)

  Alternative Hypothesis    Rejection Region for Approximate Level α Test
  Hₐ: μ₁ − μ₂ > Δ₀        t ≥ t_{α,ν} (upper-tailed test)
  Hₐ: μ₁ − μ₂ < Δ₀        t ≤ −t_{α,ν} (lower-tailed test)
  Hₐ: μ₁ − μ₂ ≠ Δ₀        either t ≥ t_{α/2,ν} or t ≤ −t_{α/2,ν} (two-tailed test)

A P-value can be computed as described in Section 9.4 for the one-sample t test.
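The df formula is tedious by hand but trivial in code. Below is a small sketch (standard-library Python; the function names are ours) that computes ν and the two-sample t interval, with the t critical value supplied from a table; the numbers used are the 18°C champagne data from the example that follows:

```python
from math import sqrt, floor

def welch_df(s1, m, s2, n):
    """Estimated df for the two-sample t procedures (before rounding down)."""
    se1_sq, se2_sq = s1**2 / m, s2**2 / n
    return (se1_sq + se2_sq)**2 / (se1_sq**2 / (m - 1) + se2_sq**2 / (n - 1))

def two_sample_t_ci(xbar, s1, m, ybar, s2, n, t_crit):
    """CI endpoints; t_crit = t_{alpha/2, nu} must be looked up from a t table."""
    margin = t_crit * sqrt(s1**2 / m + s2**2 / n)
    return xbar - ybar - margin, xbar - ybar + margin

nu = welch_df(0.5, 4, 0.3, 4)                # about 4.9; round down to 4 df
lo, hi = two_sample_t_ci(4.0, 0.5, 4, 3.7, 0.3, 4, t_crit=4.604)  # t_{.005,4}
print(floor(nu), round(lo, 1), round(hi, 1))  # 4, -1.0, 1.6
```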
Example: Which way of dispensing champagne, the traditional vertical method or a tilted "beer-like" pour, preserves more of the tiny gas bubbles that improve flavor and aroma? The following data was reported in the article "On the Losses of Dissolved CO₂ during Champagne Serving" (J. Agr. Food Chem., 2010: 8768–8775).

  Temperature (°C)   Type of Pour   n   Mean (g/L)   SD
  18                 Traditional    4      4.0       .5
  18                 Slanted        4      3.7       .3
  12                 Traditional    4      3.3       .2
  12                 Slanted        4      2.0       .3

Assuming that the sampled distributions are normal, let's calculate confidence intervals for the difference between true average dissolved CO₂ loss for the traditional pour and that for the slanted pour at each of the two temperatures. For the 18°C temperature, the number of degrees of freedom for the interval is

  ν = (.5²/4 + .3²/4)²/[(.5²/4)²/3 + (.3²/4)²/3] = .007225/.00147083 = 4.91

Rounding down, the CI will be based on 4 df. For a confidence level of 99%, we need t.005,4 = 4.604. The desired interval is

  4.0 − 3.7 ± (4.604)√(.5²/4 + .3²/4) = .3 ± (4.604)(.2915) = .3 ± 1.3 = (−1.0, 1.6)

Thus we can be highly confident that −1.0 < μ₁ − μ₂ < 1.6, where μ₁ and μ₂ are true average losses for the traditional and slant methods, respectively. Notice that this CI contains 0, so at the 99% confidence level, it is plausible that μ₁ − μ₂ = 0, that is, that μ₁ = μ₂.

The df formula for the 12°C comparison yields ν = .00105625/.00020208 = 5.23, necessitating the use of t.005,5 = 4.032 for a 99% CI. The resulting interval is (.6, 2.0). Thus 0 is not a plausible value for this difference. It appears from the CI that the true average loss when the slant method is used is smaller than that when the traditional method is used, so the slant method is better at this temperature. This in fact was the conclusion reported in the popular media.

Example: The deterioration of many municipal pipeline networks across the country is a growing concern.
One technology proposed for pipeline rehabilitation uses a flexible liner threaded through existing pipe. The article "Effect of Welding on a High-Density Polyethylene Liner" (J. Mater. Civil Eng., 1996: 94–100) reported the following data on tensile strength (psi) of liner specimens both when a certain fusion process was used and when this process was not used.

  No fusion   2748  2700  2655  2822  2511  3149  3257  3213  3220  2753
              m = 10   x̄ = 2902.8   s₁ = 277.3

  Fused       3027  3356  3359  3297  3125  2910  2889  2902
              n = 8    ȳ = 3108.1   s₂ = 205.9

Figure 10.3 shows normal probability plots from MINITAB. The linear pattern in each plot supports the assumption that the tensile strength distributions under the two conditions are both normal.

[Figure 10.3  Normal probability plots from MINITAB for the tensile strength data]

The authors of the article stated that the fusion process increased the average tensile strength. The message from the comparative boxplot of Figure 10.4 is not all that clear. Let's carry out a test of hypotheses to see whether the data supports this conclusion.

1. Let μ₁ be the true average tensile strength of specimens when the no-fusion treatment is used and μ₂ denote the true average tensile strength when the fusion treatment is used.
2. H₀: μ₁ − μ₂ = 0 (no difference in the true average tensile strengths for the two treatments)
3. Hₐ: μ₁ − μ₂ < 0 (true average tensile strength for the no-fusion treatment is less than that for the fusion treatment, so that the investigators' conclusion is correct)

[Figure 10.4  A comparative boxplot of the tensile strength data]

4.
The null value is Δ₀ = 0, so the test statistic is

  t = (x̄ − ȳ)/√(s₁²/m + s₂²/n)

5. We now compute both the test statistic value and the df for the test:

  t = (2902.8 − 3108.1)/√(277.3²/10 + 205.9²/8) = −205.3/113.97 = −1.80

Using s₁²/m = 7689.529 and s₂²/n = 5299.351,

  ν = (7689.529 + 5299.351)²/[(7689.529)²/9 + (5299.351)²/7] = 168,711,004/10,581,747 = 15.94

so the test will be based on 15 df.

6. Appendix Table A.7 shows that the area under the 15 df t curve to the right of 1.8 is .046, so the P-value for a lower-tailed test is also .046. The following MINITAB output summarizes all the computations:

  Two-sample T for No fusion vs. Fused
               N     Mean    StDev   SE Mean
  No fusion   10     2903      277        88
  Fused        8     3108      206        73
  95% C.I. for mu No fusion - mu Fused: (-488, 38)
  T-Test mu No fusion = mu Fused (vs <): T = -1.80  P = 0.046  DF = 15

7. Using a significance level of .05, we can barely reject the null hypothesis in favor of the alternative hypothesis, confirming the conclusion stated in the article. However, someone demanding more compelling evidence might select α = .01, a level for which H₀ cannot be rejected.

If the question posed had been whether fusing increased true average strength by more than 100 psi, then the relevant hypotheses would have been H₀: μ₁ − μ₂ = −100 versus Hₐ: μ₁ − μ₂ < −100; that is, the null value would have been Δ₀ = −100.

Pooled t Procedures

Alternatives to the two-sample t procedures just described result from assuming not only that the two population distributions are normal but also that they have equal variances (σ₁² = σ₂²). That is, the two population distribution curves are assumed normal with equal spreads, the only possible difference between them being where they are centered. Let σ² denote the common population variance. Then standardizing X̄ − Ȳ gives

  Z = (X̄ − Ȳ − (μ₁ − μ₂))/√(σ²(1/m + 1/n))

which has a standard normal distribution.
Before this variable can be used as a basis for making inferences about μ₁ − μ₂, the common variance must be estimated from sample data. One estimator of σ² is S₁², the variance of the m observations in the first sample, and another is S₂², the variance of the second sample. Intuitively, a better estimator than either individual sample variance results from combining the two sample variances. A first thought might be to use (S₁² + S₂²)/2, the ordinary average of the two sample variances. However, if m > n, then the first sample contains more information about σ² than does the second sample, and an analogous comment applies if m < n. The following weighted average of the two sample variances, called the pooled (i.e., combined) estimator of σ², adjusts for any difference between the two sample sizes:

   S_p² = [(m − 1)/(m + n − 2)]·S₁² + [(n − 1)/(m + n − 2)]·S₂²

We can show that S_p² is proportional to a chi-squared rv with m + n − 2 df. Recall that (m − 1)S₁²/σ₁² ~ χ²_{m−1} and (n − 1)S₂²/σ₂² ~ χ²_{n−1}. Furthermore, S₁² and S₂² are independent, so with σ₁² = σ₂² = σ²,

   (m + n − 2)S_p²/σ² = (m − 1)S₁²/σ² + (n − 1)S₂²/σ²

is the sum of two independent chi-squared rv's with m − 1 and n − 1 df, respectively, so the sum is a chi-squared rv with (m − 1) + (n − 1) = m + n − 2 df. Furthermore, it is also independent of X̄ and Ȳ because the sample means are independent of the sample variances. Now consider the ratio

   {[X̄ − Ȳ − (μ₁ − μ₂)]/[σ√(1/m + 1/n)]} / √{[(m + n − 2)S_p²/σ²]/(m + n − 2)} = [X̄ − Ȳ − (μ₁ − μ₂)]/[S_p√(1/m + 1/n)]

On the left is the ratio of a standard normal rv to the square root of an independent chi-squared rv over its degrees of freedom, m + n − 2, so the ratio has the t distribution with m + n − 2 degrees of freedom. We see therefore that if S_p replaces σ in the expression for Z, the resulting standardized variable has a t distribution. In the same way that earlier standardized variables were used as a basis for deriving confidence intervals and test procedures, this t variable immediately leads to the pooled t confidence interval for estimating μ₁ − μ₂ and the pooled t test for testing hypotheses about a difference between means.
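A minimal sketch of the pooled computation (the `pooled_t` helper below is illustrative, not from the text; scipy is assumed available as a cross-check, since its default `ttest_ind` with `equal_var=True` performs exactly the pooled t test):

```python
import math
import statistics
from scipy import stats  # assumed available; used only as a cross-check

def pooled_t(x, y, delta0=0.0):
    """Pooled t statistic and its df (m + n - 2), assuming
    normal populations with a common variance."""
    m, n = len(x), len(y)
    # weighted average of the sample variances, weights (m-1) and (n-1)
    sp_sq = ((m - 1) * statistics.variance(x)
             + (n - 1) * statistics.variance(y)) / (m + n - 2)
    se = math.sqrt(sp_sq) * math.sqrt(1 / m + 1 / n)
    t = (statistics.mean(x) - statistics.mean(y) - delta0) / se
    return t, m + n - 2

# Pooled analysis of the Example 10.7 tensile strength data
no_fusion = [2748, 2700, 2655, 2822, 2511, 3149, 3257, 3213, 3220, 2753]
fused = [3027, 3356, 3359, 3297, 3125, 2910, 2889, 2902]
t, df = pooled_t(no_fusion, fused)
t_sp, _ = stats.ttest_ind(no_fusion, fused, equal_var=True)
```

Here the pooled statistic (t ≈ −1.74 on 16 df) differs slightly from the Welch statistic (−1.80 on 15 df), illustrating that the two procedures coincide only approximately even when the sample variances are similar.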
In the past, many statisticians recommended these pooled t procedures over the two-sample t procedures. The pooled t test, for example, can be derived from the likelihood ratio principle, whereas the two-sample t test is not a likelihood ratio test. Furthermore, the significance level for the pooled t test is exact, whereas it is only approximate for the two-sample t test. However, recent research has shown that although the pooled t test does outperform the two-sample t test by a bit (smaller β's for the same α) when σ₁² = σ₂², the former test can easily lead to erroneous conclusions if applied when the variances are different. Analogous comments apply to the behavior of the two confidence intervals. That is, the pooled t procedures are not robust to violations of the equal variance assumption.

It has been suggested that one could carry out a preliminary test of H₀: σ₁² = σ₂² and use a pooled t procedure if this null hypothesis is not rejected. Unfortunately, the usual "F test" of equal variances (Section 10.5) is quite sensitive to the assumption of normal population distributions, much more so than t procedures. We therefore recommend the conservative approach of using two-sample t procedures unless there is really compelling evidence for doing otherwise, particularly when the two sample sizes are different.

Type II Error Probabilities

Determining type II error probabilities (or equivalently, power = 1 − β) for the two-sample t test is complicated. There does not appear to be any simple way to use the β curves of Appendix Table A.16. The most recent version of MINITAB (Version 16) will calculate power for the pooled t test but not for the two-sample t test. However, the UCLA Statistics Department homepage (http://www.stat.ucla.edu) permits access to a power calculator that will do this.
For example, we specified m = 10, n = 8, σ₁ = 300, σ₂ = 225 (these are the sample sizes for Example 10.7, whose sample standard deviations are somewhat smaller than these values of σ₁ and σ₂) and asked for the power of a two-tailed level .05 test of H₀: μ₁ − μ₂ = 0 when μ₁ − μ₂ = 100, 250, and 500. The resulting values of the power were .1089, .4609, and .9635 (corresponding to β = .89, .54, and .04), respectively. In general, β will decrease as the sample sizes increase, as α increases, and as μ₁ − μ₂ moves farther from 0. The software will also calculate sample sizes necessary to obtain a specified value of power for a particular value of μ₁ − μ₂.

Exercises | Section 10.2 (20-38)

20. Determine the number of degrees of freedom for the two-sample t test or CI in each of the following situations:
    a. m = 10, n = 10, s₁ = 5.0, s₂ = 6.0
    b. m = 10, n = 15, s₁ = 5.0, s₂ = 6.0
    c. m = 10, n = 15, s₁ = 2.0, s₂ = 6.0
    d. m = 12, n = 24, s₁ = 5.0, s₂ = 6.0

21. Expert and amateur pianists were compared in a study "Maintaining Excellence: Deliberate Practice and Elite Performance in Young and Older Pianists" (J. Exp. Psychol. Gen., 1996: 331–340). The researchers used a keyboard that allowed measurement of the force applied by a pianist in striking a key. All 48 pianists played Prelude Number 1 from Bach's Well-Tempered Clavier. For 24 amateur pianists the mean force applied was 74.5 with standard deviation 6.29, and for 24 expert pianists the mean force was 81.8 with standard deviation 8.64. Do expert pianists hit the keys harder? Assuming normally distributed data, state and test the relevant hypotheses and interpret the results.

22. The article "Supervised Exercise Versus Non-Supervised Exercise for Reducing Weight in Obese Adults" (J. Sport. Med. Phys. Fit., 2009: 85–90) reported on an investigation in which participants were randomly assigned either to a supervised exercise program or a control group. Those in the control group were told only that they should take measures to lose weight. After 4 months, the sample mean decrease in body fat for the 17 individuals in the experimental group was 6.2 kg with a sample standard deviation of 4.5 kg, whereas the sample mean and sample standard deviation for the 17 people in the control group were 1.7 kg and 3.1 kg, respectively. Assume normality of the two body fat loss distributions (as did the investigators).
    a. Calculate a 99% lower prediction bound for the body fat loss of a single randomly selected individual subjected to the supervised exercise program. Can you be highly confident that such an individual will actually lose body fat?
    b. Does it appear that true average decrease in body fat is more than 2 kg larger for the experimental condition than for the control condition? Carry out a test of appropriate hypotheses using a significance level of .01.

23. Fusible interlinings are being used with increasing frequency to support outer fabrics and improve the shape and drape of various pieces of clothing. The article "Compatibility of Outer and Fusible Interlining Fabrics in Tailored Garments" (Textile Res. J., 1997: 137–142) gave the accompanying data on extensibility (%) at 100 g/cm for both high-quality fabric (H) and poor-quality fabric (P) specimens.

    H  1.2  .9  .7  1.0  1.7  1.7  1.1  .9  1.7  1.9  1.3  2.1  1.6  1.8  1.4  1.3  1.9  1.6  .8  2.0  1.7  1.6  2.3  2.0
    P  1.6  1.5  1.1  2.1  1.5  1.3  1.0  2.6

    a. Construct normal probability plots to verify the plausibility of both samples having been selected from normal population distributions.
    b. Construct a comparative boxplot. Does it suggest that there is a difference between true average extensibility for high-quality fabric specimens and that for poor-quality specimens?
    c. The sample mean and standard deviation for the high-quality sample are 1.508 and .444, respectively, and those for the poor-quality sample are 1.588 and .530. Use the two-sample t test to decide whether true average extensibility differs for the two types of fabric.

24. Low-back pain (LBP) is a serious health problem in many industrial settings. The article "Isodynamic Evaluation of Trunk Muscles and Low-Back Pain Among Workers in a Steel Factory" (Ergonomics, 1995: 2107–2117) reported the accompanying summary data on lateral range of motion (degrees) for a sample of workers without a history of LBP and another sample with a history of this malady.

    Condition   Sample Size   Sample Mean   Sample SD
    No LBP      28            91.5          5.5
    LBP         31            88.3          7.8

    Calculate a 90% confidence interval for the difference between population mean extent of lateral motion for the two conditions. Does the interval suggest that population mean lateral motion differs for the two conditions? Is the message different if we use a confidence level of 95%?

25. Research has shown that good hip range of motion and strength in throwing athletes results in improved performance and decreased body stress. The article "Functional Hip Characteristics of Baseball Pitchers and Position Players" (Am. J. Sport Med., 2010: 383–388) reported on a study involving samples of 40 professional pitchers and 40 professional position players. For the pitchers, the sample mean trail leg total arc of motion (degrees) was 75.6 with sample standard deviation 5.9, whereas the sample mean and sample standard deviation for position players were 79.6 and 7.6, respectively. Assuming normality, test appropriate hypotheses to decide whether true average range of motion for the pitchers is less than that for the position players (as hypothesized by the investigators). In reaching your conclusion, what type of error might you have committed?

26. Tennis elbow is thought to be aggravated by the impact experienced when hitting the ball. The article "Forces on the Hand in the Tennis One-Handed Backhand" (Int. J. Sport Biomech., 1991: 282–292) reported the force (Newtons) on the hand just after impact on a one-handed backhand drive for six advanced players and for eight intermediate players.

    Type of Player    Sample Size   Sample Mean   Sample SD
    1. Advanced       6             40.3          11.3
    2. Intermediate   8             21.4          8.3

    In their analysis of the data, the authors assumed that both force distributions were normal. Calculate a 95% CI for the difference between true average force for advanced players (μ₁) and true average force for intermediate players (μ₂). Does your interval provide compelling evidence for concluding that the two μ's are different? Would you have reached the same conclusion by calculating a CI for μ₂ − μ₁ (i.e., by reversing the 1 and 2 labels on the two types of players)? Explain.

27. As the population ages, there is increasing concern about accident-related injuries to the elderly. The article "Age and Gender Differences in Single-Step Recovery from a Forward Fall" (J. Gerontol. A Biol. Sci. Med. Sci., 1999, 54(1): M44–50) reported on an experiment in which the maximum lean angle (the farthest a subject is able to lean and still recover in one step) was determined for both a sample of younger females (21–29 years) and a sample of older females (67–81 years). The following observations are consistent with summary data given in the article:
    YF: 29, 34, 33, 27, 28, 32, 31, 34, 32, 27
    OF: 18, 15, 23, 13, 12
    Does the data suggest that true average maximum lean angle for older females is more than 10 degrees smaller than it is for younger females? State and test the relevant hypotheses at significance level .10 by obtaining a P-value.

28. The article "Effect of Internal Gas Pressure on the Compression Strength of Beverage Cans and Plastic Bottles" (J. Testing Eval., 1993: 129–131) includes the accompanying data on compression strength (lb) for a sample of 12-oz aluminum cans filled with strawberry drink and another sample filled with cola. Does the data suggest that the extra carbonation of cola results in a higher average compression strength? Base your answer on a P-value. What assumptions are necessary for your analysis?

    Beverage           Sample Size   Sample Mean   Sample SD
    Strawberry drink   15            540           21
    Cola               15            554           15

29. Which foams more when you pour it, Coke or Pepsi? Here are measurements by Diane Warfield on the foam volume (mL) after pouring a 12-oz can of Coke, based on a sample of 12 cans:

    312.2  292.6  331.7  355.1  362.9  331.7
    292.6  245.8  280.9  320.0  273.1  288.7

    and here are measurements for Pepsi based on a sample of 12 cans:

    148.3  210.7  152.2  117.1  89.7   140.5
    128.8  167.8  156.1  136.6  124.9  136.6

    a. Verify graphically that normality is an appropriate assumption.
    b. Calculate a 99% confidence interval for the population difference in mean volumes.
    c. Does the upper limit of your interval in (b) give a 99% lower confidence bound for the difference between the two μ's? If not, calculate such a bound and interpret it in terms of the relationship between the foam volumes of Coke and Pepsi.
    d. Summarize in a sentence or two what you have learned about the foam volumes of Coke and Pepsi.

30. The accompanying data set gives expenses (including tuition and fees but not room and board) for 16 colleges from the 2008 edition of U.S. News and World Report's America's Best Colleges, which lists 248 national liberal arts colleges in four tiers. The first two tiers are combined in a list of 125 colleges. We drew a random sample of size 8 from the 62 in the first tier and another random sample of size 8 from the 63 in the next tier, excluding non-private colleges.

    Tier  College                  Expenses
    1     Gettysburg               —
    1     Harvey Mudd              34891
    1     Scripps                  35850
    1     Macalester               33694
    1     Hamilton                 36860
    1     —                        —
    1     Oberlin                  36282
    1     Franklin and Marshall    36480
    2     Goucher                  31082
    2     Randolph-Macon           26830
    2     Thomas Aquinas           20400
    2     Beloit                   30138
    2     Austin                   21586
    2     Ursinus                  35160
    2     Siena                    22685
    2     Juniata                  28920

    a. Construct a comparative boxplot of expenses, and comment on any interesting features.
    b. Obtain a 95% confidence interval for the difference of population means. Interpret your result in terms of the additional cost of attending a more prestigious college. Moving up from tier 2 to tier 1 raises the cost by roughly what percentage?

31. The article "Characterization of Bearing Strength Factors in Pegged Timber Connections" (J. Struct. Engrg., 1997: 326–332) gave the following summary data on proportional stress limits for specimens constructed using two different types of wood:

    Type of Wood   Sample Size   Sample Mean   Sample SD
    Red oak        14            8.48          .79
    Douglas fir    10            6.65          1.28

    Assuming that both samples were selected from normal distributions, carry out a test of hypotheses to decide whether the true average proportional stress limit for red oak joints exceeds that for Douglas fir joints by more than 1 MPa.

32. According to the article "Fatigue Testing of Condoms" (Polym. Test., 2009: 567–571), "tests currently used for condoms are surrogates for the challenges they face in use," including a test for holes, an inflation test, a package seal test, and tests of dimensions and lubricant quality (all fertile territory for the use of statistical methodology!). The investigators developed a new test that adds cyclic strain to a level well below breakage and determines the number of cycles to break. The cited article reported that for a sample of 20 natural latex condoms of a certain type, the sample mean and sample standard deviation of the number of cycles to break were 4358 and 2218, respectively, whereas a sample of 20 polyisoprene condoms gave a sample mean and sample standard deviation of 5805 and 3990, respectively. Is there strong evidence for concluding that the true average number of cycles to break for the polyisoprene condom exceeds that for the natural latex condom by more than 1000 cycles? [Note: The article presented the results of hypothesis tests based on the t distribution; the validity of these depends on assuming normal population distributions.]

33. Consider the pooled t variable

    T = [X̄ − Ȳ − (μ₁ − μ₂)]/[S_p√(1/m + 1/n)]

    which has a t distribution with m + n − 2 df when both population distributions are normal with σ₁ = σ₂ (see the Pooled t Procedures subsection for a description of S_p).
    a. Use this t variable to obtain a pooled t confidence interval formula for μ₁ − μ₂.
    b. A sample of ultrasonic humidifiers of one particular brand was selected for which the observations on maximum output of moisture (oz) in a controlled chamber were 14.0, 14.3, 12.2, and 15.1. A sample of the second brand gave output values 12.1, 13.6, 11.9, and 11.2 ("Multiple Comparisons of Means Using Simultaneous Confidence Intervals," J. Qual. Techn., 1989: 232–241). Use the pooled t formula from part (a) to estimate the difference between true average outputs for the two brands with a 95% confidence interval.
    c. Estimate the difference between the two μ's using the two-sample t interval discussed in this section, and compare it to the interval of part (b).

34. Refer to Exercise 33. Describe the pooled t test for testing H₀: μ₁ − μ₂ = 0 when both population distributions are normal with σ₁ = σ₂. Then use this test procedure to test the hypotheses suggested in Exercise 32.

35. Exercise 35 from Chapter 9 gave the following data on amount (oz) of alcohol poured into a short, wide (tumbler) glass by a sample of experienced bartenders: 2.00, 1.78, 2.16, 1.91, 1.70, 1.67, 1.83, 1.48. The cited article also gave summary data on the amount poured by a different sample of experienced bartenders into a tall, slender (highball) glass; the following observations are consistent with the reported summary data: 1.67, 1.57, 1.64, 1.69, 1.74, 1.75, 1.70, 1.60.
    a. What does a comparative boxplot suggest about similarities and differences in the data?
    b. Carry out a test of hypotheses to decide whether the true average amount poured is different for the two types of glasses; be sure to check the validity of any assumptions necessary to your analysis, and report a P-value.

36. Is the incidence of head or neck pain among video display terminal users related to the monitor angle (degrees from horizontal)? The paper "An Analysis of VDT Monitor Placement and Daily Hours of Use for Female Bifocal Users" (Work, 2003: 77–80) reported the accompanying data. Carry out an appropriate test of hypotheses (be sure to include a P-value in your analysis).

    Pain   Sample Size   Sample Mean   Sample SD
    Yes    32            2.20          3.42
    No     40            3.20          2.52

37. The article "Gender Differences in Individuals with Comorbid Alcohol Dependence and Post-Traumatic Stress Disorder" (Amer. J. Addiction, 2003: 412–423) reported the accompanying data on total score on the Obsessive-Compulsive Drinking Scale (OCDS).

    Gender   Sample Size   Sample Mean   Sample SD
    Male     44            19.93         7.74
    Female   40            16.26         7.58

    Formulate hypotheses and carry out an appropriate analysis. Does your conclusion depend on whether a significance level of .05 or .01 was employed? (The cited paper reported P-value < .05; presumably .05 would have been replaced by .01 if the P-value were really that small.)

38. Which factors are relevant to the time a consumer spends looking at a product on the shelf prior to selection? The article "Effects of Base Price upon Search Behavior of Consumers in a Supermarket" (J. Econ. Psychol., 2003: 637–652) reported the following data on elapsed time (sec) for fabric softener purchasers and washing-up liquid purchasers; the former product is significantly more expensive than the latter. These products were chosen because they are similar with respect to allocated shelf space and number of alternative brands.

    Product             Sample Size   Sample Mean   Sample SD
    Fabric softener     15            30.47         19.15
    Washing-up liquid   19            26.53         15.37

    a. What if any assumptions are needed before an inferential procedure can be used to compare true average elapsed times?
    b. If just the two sample means had been reported, would they provide persuasive evidence for a significant difference between true average elapsed times for the two products?
    c. Carry out an appropriate test of significance and state your conclusion.

10.3 Analysis of Paired Data

In Sections 10.1 and 10.2, we considered estimating or testing for a difference between two means μ₁ and μ₂. This was done by utilizing the results of a random sample X₁, X₂, ..., X_m from the distribution with mean μ₁ and a completely independent (of the X's) sample Y₁, ..., Y_n from the distribution with mean μ₂. That is, either m individuals were selected from population 1 and n different individuals from population 2, or m individuals (or experimental objects) were given one treatment and another n individuals were given the other treatment. In contrast, there are a number of experimental situations in which there is only one set of n individuals or experimental objects, and two observations are made on each individual or object, resulting in a natural pairing of values.
Trace metals in drinking water affect the flavor, and unusually high concentrations can pose a health hazard. The article "Trace Metals of South Indian River" (Envir. Studies, 1982: 62–66) reports on a study in which six river locations were selected (six experimental objects) and the zinc concentration (mg/L) determined for both surface water and bottom water at each location. The six pairs of observations are displayed in the accompanying table. Does the data suggest that true average concentration in bottom water exceeds that of surface water?

Location                                   1     2     3     4     5     6
Zinc concentration in bottom water (x)   .430  .266  .567  .531  .707  .716
Zinc concentration in surface water (y)  .415  .238  .390  .410  .605  .609
Difference                               .015  .028  .177  .121  .102  .107

Figure 10.5a displays a plot of this data. At first glance, there appears to be little difference between the x and y samples. From location to location, there is a great deal of variability in each sample, and it looks as though any differences between the samples can be attributed to this variability. However, when the observations are identified by location, as in Figure 10.5b, a different view emerges. At each location, bottom concentration exceeds surface concentration. This is confirmed by the fact that all x − y differences (bottom water concentration − surface water concentration) displayed in the bottom row of the data table are positive. As we will see, a correct analysis of this data focuses on these differences. ■

Figure 10.5 Plot of paired data from Example 10.8: (a) observations not identified by location; (b) observations identified by location

ASSUMPTIONS  The data consists of n independently selected pairs (X₁, Y₁), (X₂, Y₂), ..., (X_n, Y_n), with E(X_i) = μ₁ and E(Y_i) = μ₂. Let D₁ = X₁ − Y₁, D₂ = X₂ − Y₂, ..., D_n = X_n − Y_n, so the D_i's are the differences within pairs.
Then the D_i's are assumed to be normally distributed with mean value μ_D and variance σ_D².

We are again interested in hypothesis testing or estimation for the difference μ₁ − μ₂. The denominator of the two-sample t statistic was obtained by first applying the rule V(X̄ − Ȳ) = V(X̄) + V(Ȳ). However, with paired data, the X and Y observations within each pair are often not independent, so X̄ and Ȳ are not independent of each other, and the rule is not valid. We must therefore abandon the two-sample t procedures and look for an alternative method of analysis.

The Paired t Test

Because different pairs are independent, the D_i's are independent of each other. If we let D = X − Y, where X and Y are the first and second observations, respectively, within an arbitrary pair, then the expected difference is

   μ_D = E(X − Y) = E(X) − E(Y) = μ₁ − μ₂

(the rule of expected values used here is valid even when X and Y are dependent). Thus any hypothesis about μ₁ − μ₂ can be phrased as a hypothesis about the mean difference μ_D. But since the D_i's constitute a normal random sample (of differences) with mean μ_D, hypotheses about μ_D can be tested using a one-sample t test. That is, to test hypotheses about μ₁ − μ₂ when data is paired, form the differences D₁, D₂, ..., D_n and carry out a one-sample t test (based on n − 1 df) on the differences.

THE PAIRED t TEST

Null hypothesis: H₀: μ_D = Δ₀ (where D = X − Y is the difference between the first and second observations within a pair, and μ_D = μ₁ − μ₂)

Test statistic value: t = (d̄ − Δ₀)/(s_D/√n) (where d̄ and s_D are the sample mean and standard deviation, respectively, of the d_i's)

Alternative Hypothesis    Rejection Region for Level α Test
Hₐ: μ_D > Δ₀              t ≥ t_{α,n−1}
Hₐ: μ_D < Δ₀              t ≤ −t_{α,n−1}
Hₐ: μ_D ≠ Δ₀              either t ≥ t_{α/2,n−1} or t ≤ −t_{α/2,n−1}

A P-value can be calculated as was done for earlier t tests.
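As a sketch (not from the text), the boxed test can be applied to the zinc concentration data of Example 10.8; since the paired test is exactly a one-sample t test on the differences, scipy's `ttest_rel` (assumed available, with the `alternative` keyword of recent scipy versions) reproduces the hand computation:

```python
import math
from scipy import stats  # assumed available; used for P-values and cross-check

bottom = [.430, .266, .567, .531, .707, .716]   # Example 10.8 zinc data
surface = [.415, .238, .390, .410, .605, .609]

d = [x - y for x, y in zip(bottom, surface)]    # within-pair differences
n = len(d)
dbar = sum(d) / n
s_d = math.sqrt(sum((di - dbar) ** 2 for di in d) / (n - 1))
t = (dbar - 0) / (s_d / math.sqrt(n))           # test statistic for H0: mu_D = 0

# Upper-tailed P-value for Ha: mu_D > 0, from the t curve with n - 1 df
p = stats.t.sf(t, n - 1)

# Library form of the same one-sample t test on the differences
t_rel, p_rel = stats.ttest_rel(bottom, surface, alternative='greater')
```

The statistic comes out near 3.70 on 5 df with a P-value below .01, consistent with the claim that bottom concentration exceeds surface concentration.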
Musculoskeletal neck-and-shoulder disorders are all too common among office staff who perform repetitive tasks using visual display units. The article "Upper-Arm Elevation During Office Work" (Ergonomics, 1996: 1221–1230) reported on a study to determine whether more varied work conditions would have any impact on arm movement. The accompanying data was obtained from a sample of n = 16 subjects. Each observation is the amount of time, expressed as a proportion of total time observed, during which arm elevation was below 30°. The two measurements from each subject were obtained 18 months apart. During this period, work conditions were changed, and subjects were allowed to engage in a wider variety of work tasks. Does the data suggest that true average time during which elevation is below 30° differs after the change from what it was before the change? This particular angle is important because in Sweden, where the research was conducted, workers' compensation regulations assert that arm elevation less than 30° is not harmful.

Subject       1   2   3   4   5   6   7   8
Before       81  87  86  82  90  86  96  73
After        78  91  78  78  84  67  92  70
Difference    3  −4   8   4   6  19   4   3

Subject       9  10  11  12  13  14  15  16
Before       74  75  72  80  66  72  56  82
After        58  62  70  58  66  60  65  73
Difference   16  13   2  22   0  12  −9   9

Figure 10.6 shows a normal probability plot of the 16 differences; the pattern in the plot is quite straight, supporting the normality assumption. A boxplot of these differences appears in Figure 10.7; the box is located considerably to the right of zero, suggesting that perhaps μ_D > 0 (note also that 13 of the 16 differences are positive and only two are negative).

Figure 10.6 A normal probability plot from MINITAB of the differences in Example 10.9 (Average: 6.75; StDev: 8.23408; N of data: 16; W-test for Normality, P-value (approx) > 0.1000)

Figure 10.7 A boxplot of the differences in Example 10.9

Let's now use the recommended sequence of steps to test the appropriate hypotheses.

1. Let μ_D denote the true average difference between elevation time before the change in work conditions and time after the change.
2. H₀: μ_D = 0 (there is no difference between true average time before the change and true average time after the change)
3. Hₐ: μ_D ≠ 0
4. t = (d̄ − 0)/(s_D/√n) = d̄/(s_D/√n)
5. n = 16, Σd_i = 108, Σd_i² = 1746, from which d̄ = 6.75, s_D = 8.234, and

   t = 6.75/(8.234/√16) = 3.28 ≈ 3.3

6. Appendix Table A.7 shows that the area to the right of 3.3 under the t curve with 15 df is .002. The inequality in Hₐ implies that a two-tailed test is appropriate, so the P-value is approximately 2(.002) = .004 (MINITAB gives .0051).
7. Since .004 ≤ .01, the null hypothesis can be rejected at either significance level .05 or .01. It does appear that the true average difference between times is something other than zero; that is, true average time after the change is different from that before the change. Recalling that arm elevation should be kept under 30°, we can conclude that the situation became worse because the amount of time below 30° decreased. ■

When the number of pairs is large, the assumption of a normal difference distribution is not necessary. The CLT validates the resulting z test.

A Confidence Interval for μ_D

In the same way that the t CI for a single population mean μ is based on the t variable T = (X̄ − μ)/(S/√n), a t confidence interval for μ_D (= μ₁ − μ₂) is based on the fact that

   T = (D̄ − μ_D)/(S_D/√n)

has a t distribution with n − 1 df.
Manipulation of this t variable, as in previous derivations of CIs, yields the following 100(1 − α)% CI:

   The paired t CI for μ_D is  d̄ ± t_{α/2,n−1}·s_D/√n

A one-sided confidence bound results from retaining the relevant sign and replacing t_{α/2} by t_α. When n is small, the validity of this interval requires that the distribution of differences be at least approximately normal. For large n, the CLT ensures that the resulting z interval is valid without any restrictions on the distribution of differences.

Adding computerized medical images to a database promises to provide great resources for physicians. However, there are other methods of obtaining such information, so the issue of efficiency of access needs to be investigated. The article "The Comparative Effectiveness of Conventional and Digital Image Libraries" (J. Audiov. Media Med., 2001: 8–15) reported on an experiment in which 13 computer-proficient medical professionals were timed both while retrieving an image from a library of slides and while retrieving the same image from a computer database with a web front end.

Subject      1   2   3   4   5   6   7   8   9  10  11  12  13
Slide       30  35  40  25  20  30  35  62  40  51  25  42  33
Digital     25  16  15  15  10  20   7  16  15  13  11  19  19
Difference   5  19  25  10  10  10  28  46  25  38  14  23  14

Let μ_D denote the true mean difference between slide retrieval time (sec) and digital retrieval time. Using the paired t confidence interval to estimate μ_D requires that the difference distribution be at least approximately normal. The linear pattern of points in the normal probability plot from MINITAB (Figure 10.8) validates the normality assumption. (Only 9 points appear because of ties in the differences.)

Figure 10.8 Normal probability plot of the differences in Example 10.10 (Average: 20.5385; StDev: 11.9625; R: 0.9724; N: 13; P-value (approx) > 0.1000)

Relevant summary quantities are Σd_i = 267, Σd_i² = 7201, from which d̄ = 20.5, s_D = 11.96. The t critical value required for a 95% confidence level is t_{.025,12} = 2.179, and the 95% CI is

   d̄ ± t_{α/2,n−1}·s_D/√n = 20.5 ± 2.179·(11.96/√13) = 20.5 ± 7.2 = (13.3, 27.7)

Thus we can be highly confident (at the 95% confidence level) that 13.3 < μ_D < 27.7. This interval of plausible values is rather wide, a consequence of the sample standard deviation being large relative to the sample mean. A sample size much larger than 13 would be required to estimate with substantially more precision. Notice, however, that 0 lies well outside the interval, suggesting that μ_D > 0; this is confirmed by a formal hypothesis test. It is not hard to show that 0 is outside the 95% CI if and only if the two-tailed test rejects H₀: μ_D = 0 at the .05 level. We can conclude from the experiment that computer retrieval appears to be faster on average. ■

Paired Data and Two-Sample t Procedures

Consider using the two-sample t test on paired data. The numerators of the paired t and two-sample t test statistics are identical, since

   d̄ = Σd_i/n = [Σ(x_i − y_i)]/n = (Σx_i)/n − (Σy_i)/n = x̄ − ȳ

The difference between the two statistics is due entirely to the denominators. Each test statistic is obtained by standardizing X̄ − Ȳ (= D̄), but in the presence of dependence the two-sample t standardization is incorrect.
To see this, recall from Section 6.3 that

   V(X ± Y) = V(X) + V(Y) ± 2 Cov(X, Y)

Since the correlation between X and Y is

   ρ = Corr(X, Y) = Cov(X, Y)/[√V(X)·√V(Y)]

it follows that

   V(X − Y) = σ₁² + σ₂² − 2ρσ₁σ₂

Applying this to X̄ − Ȳ yields

   V(X̄ − Ȳ) = V[(1/n)ΣD_i] = V(D̄) = (σ₁² + σ₂² − 2ρσ₁σ₂)/n

The two-sample t test is based on the assumption of independence, in which case ρ = 0. But in many paired experiments, there will be a strong positive dependence between X and Y (large X associated with large Y), so that ρ will be positive and the variance of X̄ − Ȳ will be smaller than σ₁²/n + σ₂²/n. Thus whenever there is positive dependence within pairs, the denominator for the paired t statistic should be smaller than for t of the independent-samples test. Often two-sample t will be much closer to zero than paired t, considerably understating the significance of the data. Similarly, when data is paired, the paired t CI will usually be narrower than the (incorrect) two-sample t CI. This is because there is typically much less variability in the differences than in the x and y values.

Paired Versus Unpaired Experiments

In our examples, paired data resulted from two observations on the same subject (Example 10.9) or experimental object (location in Example 10.8). Even when this cannot be done, paired data with dependence within pairs can be obtained by matching individuals or objects on one or more characteristics thought to influence responses. For example, in a medical experiment to compare the efficacy of two drugs for lowering blood pressure, the experimenter's budget might allow for the treatment of 20 patients. If 10 patients are randomly selected for treatment with the first drug and another 10 independently selected for treatment with the second drug, an independent-samples experiment results.
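Before weighing the pros and cons of such designs, the variance formula V(X − Y) = σ₁² + σ₂² − 2ρσ₁σ₂ derived above can be checked with a short simulation (a sketch only, using hypothetical values ρ = .9 and σ₁ = σ₂ = 1, not data from the text):

```python
import random
import statistics

random.seed(1)
rho, sigma = 0.9, 1.0   # hypothetical within-pair correlation and common SD

def correlated_pair():
    """One (X, Y) pair, each with standard deviation sigma and Corr(X, Y) = rho."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return sigma * z1, sigma * (rho * z1 + (1 - rho**2) ** 0.5 * z2)

# Empirical Var(X - Y) versus sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2
diffs = [x - y for x, y in (correlated_pair() for _ in range(100_000))]
var_d = statistics.variance(diffs)
theory = 2 * sigma**2 * (1 - rho)   # 0.2 here, versus 2.0 if X, Y were independent
```

With strong positive within-pair correlation, the simulated variance of the differences is roughly a tenth of what the independence formula σ₁² + σ₂² would predict, which is exactly why the paired analysis gains precision.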
However, the experimenter, knowing that blood pressure is influenced by age and weight, might decide to pair off patients so that within each of the resulting 10 pairs, age and weight were approximately equal (although there might be sizable differences between pairs). Then each drug would be given to a different patient within each pair, for a total of 10 observations on each drug. Without this matching (or "blocking"), one drug might appear to outperform the other just because patients in one sample were lighter and younger, and thus more susceptible to a decrease in blood pressure, than the heavier and older patients in the second sample. However, there is a price to be paid for pairing: a smaller number of degrees of freedom for the paired analysis. So we must ask when one type of experiment should be preferred to the other.

There is no straightforward and precise answer to this question, but there are some useful guidelines. If we have a choice between two t tests that are both valid (and carried out at the same level of significance α), we should prefer the test that has the larger number of degrees of freedom. The reason is that a larger number of degrees of freedom means a smaller β for any fixed alternative value of the parameter or parameters. That is, for a fixed type I error probability, the probability of a type II error is decreased by increasing degrees of freedom.

However, if the experimental units are quite heterogeneous in their responses, it will be difficult to detect small but significant differences between two treatments. This is essentially what happened in the data set in Example 10.8; for both "treatments" (bottom water and surface water) there is great between-location variability, which tends to mask differences in treatments within locations. If there is a high positive correlation within experimental units or subjects, the variance of D̄ = X̄ − Ȳ will be much smaller than the unpaired variance.
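The size of this variance reduction is easy to see numerically. The sketch below uses hypothetical values (σ₁ = 3, σ₂ = 4, ρ = .8, chosen only for illustration) in the identity V(X − Y) = σ₁² + σ₂² − 2ρσ₁σ₂:

```python
sigma1, sigma2, rho = 3.0, 4.0, 0.8   # hypothetical values for illustration

var_if_independent = sigma1 ** 2 + sigma2 ** 2                       # rho = 0
var_paired = sigma1 ** 2 + sigma2 ** 2 - 2 * rho * sigma1 * sigma2   # rho = .8

print(var_if_independent, round(var_paired, 1))   # 25.0 versus 5.8
```

With this much positive correlation, the per-pair variance of a difference drops from 25 to 5.8, so D̄ is far less variable than the independence formula suggests.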
Because of this reduced variance, it will be easier to detect a difference with paired samples than with independent samples. The pros and cons of pairing can now be summarized as follows.

1. If there is great heterogeneity between experimental units and a large correlation within experimental units (large positive ρ), then the loss in degrees of freedom will be compensated for by the increased precision associated with pairing, so a paired experiment is preferable to an independent-samples experiment.
2. If the experimental units are relatively homogeneous and the correlation within pairs is not large, the gain in precision due to pairing will be outweighed by the decrease in degrees of freedom, so an independent-samples experiment should be used.

Of course, values of σ₁², σ₂², and ρ will not usually be known very precisely, so an investigator will be required to make a seat-of-the-pants judgment as to whether Situation 1 or 2 obtains. In general, if the number of observations that can be obtained is large, then a loss in degrees of freedom (e.g., from 40 to 20) will not be serious; but if the number is small, then the loss (say, from 16 to 8) because of pairing may be serious if not compensated for by increased precision. Similar considerations apply when choosing between the two types of experiments to estimate μ₁ − μ₂ with a confidence interval.

Exercises | Section 10.3 (39–47)

39. The Weaver–Dunn procedure with a fiber mesh tape augmentation is commonly used to treat AC joint (a joint in the shoulder) separations requiring surgery. The article "TightRope Versus Fiber Mesh Tape Augmentation of Acromioclavicular Joint Reconstruction" (Am. J. Sport Med., 2010: 1204–1208) described the investigation of a new method which was hypothesized to provide superior stability (less movement) compared to the W-D procedure. The authors of the cited article kindly provided the accompanying data on ante-posterior (forward-backward) movement (mm) for six matched pairs of shoulders:

   Subject:     1   2   3   4   5   6
   Fiber mesh: 20  30  20  32  35  33
   TightRope:  15  18  16  19  10  12

   Carry out a test of hypotheses at significance level .01 to see if true average movement for the TightRope treatment is indeed less than that for the fiber mesh treatment. Be sure to check any assumptions underlying your analysis.

40. Hexavalent chromium has been identified as an inhalation carcinogen and an air toxin of concern in a number of different locales. The article "Airborne Hexavalent Chromium in Southwestern Ontario" (J. Air Waste Manage., 1997: 905–910) gave the accompanying data on both indoor and outdoor concentration (nanograms/m³) for a sample of houses selected from a certain region.

   House:     1    2    3    4    5    6    7    8    9
   Indoor:   .07  .08  .09  .12  .12  .12  .13  .14  .15
   Outdoor:  .29  .68  .47  .54  .97  .35  .49  .84  .86

   House:    10   11   12   13   14   15   16   17
   Indoor:   .15  .17  .17  .18  .18  .18  .18  .19
   Outdoor:  .28  .32  .32  1.55 .66  .29  .21  1.02

   House:    18   19   20   21   22   23   24   25
   Indoor:   .20  .22  .22  .23  .23  .25  .26  .28
   Outdoor:  1.59 .90  .52  .12  .54  .88  .49  1.24

   House:    26   27   28   29   30   31   32   33
   Indoor:   .28  .29  .34  .39  .40  .45  .54  .62
   Outdoor:  .48  .27  .37  1.26 .70  .76  .99  .36

   a. Calculate a confidence interval for the population mean difference between indoor and outdoor concentrations using a confidence level of 95%, and interpret the resulting interval.
   b. If a 34th house were to be randomly selected from the population, between what values would you predict the difference in concentrations to lie?

41. Shoveling is not exactly a high-tech activity, but it will continue to be a required task even in our information age. The article "A Shovel with a Perforated Blade Reduces Energy Expenditure Required for Digging Wet Clay" (Hum. Factors, 2010: 492–502) reported on an experiment in which each of 13 workers was provided with both a conventional shovel and a shovel whose blade was perforated with small holes. The authors of the cited article provided the following data on stable energy expenditure [kcal/kg(subject)/lb(clay)]:

   Worker:        1      2      3      4      5      6      7
   Conventional: .0011  .0014  .0018  .0022  .0010  .0016  .0028
   Perforated:   .0011  .0010  .0019  .0013  .0011  .0017  .0024

   Worker:        8      9     10     11     12     13
   Conventional: .0020  .0015  .0014  .0023  .0017  .0020
   Perforated:   .0020  .0013  .0013  .0017  .0015  .0013

   a. Calculate a confidence interval at the 95% confidence level for the true average difference between energy expenditure for the conventional shovel and the perforated shovel (a normal probability plot of the sample differences shows a reasonably linear pattern). Based on this interval, does it appear that the shovels differ with respect to true average energy expenditure? Explain.
   b. Carry out a test of hypotheses at significance level .05 to see if true average energy expenditure using the conventional shovel exceeds that using the perforated shovel; include a P-value in your analysis.

42. Scientists and engineers frequently wish to compare two different techniques for measuring or determining the value of a variable. In such situations, it is useful to test whether the mean difference in measurements is zero. The article "Evaluation of the Deuterium Dilution Technique Against the Test Weighing Procedure for the Determination of Breast Milk Intake" (Amer. J. Clin. Nutrit., 1983: 996–1003) reports the accompanying data on measuring the amount of milk ingested by each of 14 randomly selected infants.

   Infant:       1     2     3     4     5     6     7     8     9    10    11    12    13    14
   Isotopic:   1509  1418  1561  1556  2169  1760  1098  1198  1479  1281  1414  1954  2174  2058
   Test:       1498  1254  1336  1565  2000  1318  1410  1129  1342  1124  1468  1604  1722  1518
   Difference:   11   164   225    −9   169   442  −312    69   137   157   −54   350   452   540

   a. Is it plausible that the population distribution of differences is normal?
   b. Does it appear that the true average difference between intake values measured by the two methods is something other than zero? Determine the P-value of the test, and use it to reach a conclusion at significance level .05.
   c. What happens if the two-sample t test is (incorrectly) used? [Hint: s₁ = 352.970, s₂ = 234.042.]

43. In an experiment designed to study the effects of illumination level on task performance ("Performance of Complex Tasks Under Different Levels of Illumination," J. Illumin. Engrg., 1976: 235–242), subjects were required to insert a fine-tipped probe into the eyeholes of 10 needles in rapid succession both for a low light level with a black background and a higher level with a white background. Each data value is the time (sec) required to complete the task.

   Subject:   1      2      3      4      5
   Black:   25.85  28.84  32.05  25.74  20.89
   White:   18.23  20.84  22.96  19.68  19.50

   Subject:   6      7      8      9
   Black:   41.05  25.01  24.96  27.47
   White:   24.98  16.61  16.07  24.59

   Does the data indicate that the higher level of illumination yields a decrease of more than 5 sec in true average task completion time? Test the appropriate hypotheses using the P-value approach.

44. It has been estimated that between 1945 and 1971, as many as 2 million children were born to mothers treated with diethylstilbestrol (DES), a nonsteroidal estrogen recommended for pregnancy maintenance. The FDA banned this drug in 1971 because research indicated a link with the incidence of cervical cancer. The article "Effects of Prenatal Exposure to Diethylstilbestrol (DES) on Hemispheric Laterality and Spatial Ability in Human Males" (Hormones Behav., 1992: 62–75) discussed a study in which 10 males exposed to DES and their unexposed brothers underwent various tests. This is the summary data on the results of a spatial ability test: x̄ = 12.6 (exposed), ȳ = 13.7, and standard error of mean difference = .5. Test at level .05 to see whether exposure is associated with reduced spatial ability by obtaining the P-value.

45. Cushing's disease is characterized by muscular weakness due to adrenal or pituitary dysfunction. To provide effective treatment, it is important to detect childhood Cushing's disease as early as possible. Age at onset of symptoms and age at diagnosis for 15 children suffering from the disease were given in the article "Treatment of Cushing's Disease in Childhood and Adolescence by Transphenoidal Microadenomectomy" (New Engl. J. Med., 1984: 889). Here are the values of the differences between age at onset of symptoms and age at diagnosis:

   −24  −12  −55  −15  −30  −60  −14  −21  −48  −12  −25  −53  −61  −69  −80

   a. Does the accompanying normal probability plot (a plot of the differences against z percentiles, not reproduced here) cast strong doubt on the approximate normality of the population distribution of differences?
   b. Calculate a lower 95% confidence bound for the population mean difference, and interpret the resulting bound.
   c. Suppose the (age at diagnosis) − (age at onset) differences had been calculated. What would be a 95% upper confidence bound for the corresponding population mean difference?

46. Example 1.2 describes a study of children's private speech (talking to themselves). The 33 children were each observed in about 100 ten-second intervals in the first grade, and again in the second and third grades. Because private speech occurs more in challenging circumstances, the children were observed while doing their mathematics. The speech was classified as on task (about the math lesson), off task, or mumbling (the observer could not tell what was said). Here are the first-grade mumble scores:

   21.6 32.1 48.1 19.5 19.2 43.0 26.3 22.7 49.4 35.4 56.8 45.4 28.7
   42.2 20.3 20.0 34.0 26.9 48.4 27.6 52.6 5.9 38.5 22.1 22.2

   and here are the third-grade mumble scores:

   28.8 57.0 23.9 46.9 50.0 64.6 54.2 55.3 21.4 38.3 78.5 38.1 44.3
   11.7 58.6 76.1 76.4 48.6 37.2 69.8 29.1 60.4 57.8 38.7 46.5 50.0
   69.6 69.8 59.4 22.7 84.9 42.0 67.2

   The numbers are in the same order for each grade; for example, the third student mumbled in 19.4% of the intervals in the first grade and 23.9% of the intervals in the third grade.
   a. Verify graphically that normality is plausible.
   b. Calculate a confidence interval for the difference of population means, and interpret the result.

47. Construct a paired data set for which t = ∞, so that the data is highly significant when the correct analysis is used, yet the two-sample t test is quite near zero, so the incorrect analysis yields an insignificant result.

10.4 Inferences About Two Population Proportions

Having presented methods for comparing the means of two different populations, we now turn to the comparison of two population proportions. The notation for this problem is an extension of the notation used in the corresponding one-population problem. We let p₁ and p₂ denote the proportions of individuals in populations 1 and 2, respectively, who possess a particular characteristic. Alternatively, if we use the label S for an individual who possesses the characteristic of interest (does favor a particular proposition, has read at least one book within the last month, etc.), then p₁ and p₂ represent the probabilities of seeing the label S on a randomly chosen individual from populations 1 and 2, respectively. We will assume the availability of a sample of m individuals from the first population and n from the second. The variables X and Y will represent the number of individuals in each sample possessing the characteristic that defines p₁ and p₂. Provided the population sizes are much larger than the sample sizes, the distribution of X can be taken to be binomial with parameters m and p₁; similarly, Y is taken to be a binomial variable with parameters n and p₂. Furthermore, the samples are assumed to be independent of each other, so that X and Y are independent rv's.
The obvious estimator for p₁ − p₂, the difference in population proportions, is the corresponding difference in sample proportions X/m − Y/n. With p̂₁ = X/m and p̂₂ = Y/n, the estimator of p₁ − p₂ can be expressed as p̂₁ − p̂₂.

PROPOSITION  Let X ~ Bin(m, p₁) and Y ~ Bin(n, p₂), with X and Y independent variables. Then

   E(p̂₁ − p̂₂) = p₁ − p₂

so p̂₁ − p̂₂ is an unbiased estimator of p₁ − p₂, and

   V(p̂₁ − p̂₂) = p₁q₁/m + p₂q₂/n   (where qᵢ = 1 − pᵢ)   (10.3)

Proof  Since E(X) = mp₁ and E(Y) = np₂,

   E(X/m − Y/n) = (1/m)E(X) − (1/n)E(Y) = (1/m)(mp₁) − (1/n)(np₂) = p₁ − p₂

Since V(X) = mp₁q₁, V(Y) = np₂q₂, and X and Y are independent,

   V(X/m − Y/n) = V(X/m) + V(Y/n) = (1/m²)V(X) + (1/n²)V(Y) = p₁q₁/m + p₂q₂/n   ■

We will focus first on situations in which both m and n are large. Then because p̂₁ and p̂₂ individually have approximately normal distributions, the estimator p̂₁ − p̂₂ also has approximately a normal distribution. Standardizing p̂₁ − p̂₂ yields a variable Z whose distribution is approximately standard normal:

   Z = [p̂₁ − p̂₂ − (p₁ − p₂)] / √(p₁q₁/m + p₂q₂/n)

A Large-Sample Test Procedure

Analogously to the hypotheses for μ₁ − μ₂, the most general null hypothesis an investigator might consider would be of the form H₀: p₁ − p₂ = Δ₀, where Δ₀ is again a specified number. Although for population means the case Δ₀ ≠ 0 presented no difficulties, for population proportions the cases Δ₀ = 0 and Δ₀ ≠ 0 must be considered separately. Since the vast majority of actual problems of this sort involve Δ₀ = 0 (i.e., the null hypothesis p₁ = p₂), we will concentrate on this case. When H₀: p₁ − p₂ = 0 is true, let p denote the common value of p₁ and p₂ (and similarly for q). Then the standardized variable

   Z = (p̂₁ − p̂₂ − 0) / √(pq(1/m + 1/n))   (10.4)

has approximately a standard normal distribution when H₀ is true. However, this Z cannot serve as a test statistic because the value of p is unknown: H₀ asserts only that there is a common value of p, but does not say what that value is.
To obtain a test statistic having approximately a standard normal distribution when H₀ is true (so that use of an appropriate z critical value specifies a level α test), p must be estimated from the sample data. Assuming then that p₁ = p₂ = p, instead of separate samples of size m and n from two different populations (two different binomial distributions), we really have a single sample of size m + n from one population with proportion p. Since the total number of individuals in this combined sample having the characteristic of interest is X + Y, the estimator of p is

   p̂ = (X + Y)/(m + n) = [m/(m + n)]p̂₁ + [n/(m + n)]p̂₂   (10.5)

The second expression for p̂ shows that it is actually a weighted average of the estimators p̂₁ and p̂₂ obtained from the two samples. If we take (10.5) (with q̂ = 1 − p̂) and substitute back into (10.4), the resulting statistic has approximately a standard normal distribution when H₀ is true.

   Null hypothesis: H₀: p₁ − p₂ = 0
   Test statistic value (large samples):  z = (p̂₁ − p̂₂) / √(p̂q̂(1/m + 1/n))

   Alternative Hypothesis      Rejection Region for Approximate Level α Test
   Hₐ: p₁ − p₂ > 0             z ≥ z_α
   Hₐ: p₁ − p₂ < 0             z ≤ −z_α
   Hₐ: p₁ − p₂ ≠ 0             either z ≥ z_{α/2} or z ≤ −z_{α/2}

A P-value is calculated in the same way as for previous z tests.

Some defendants in criminal proceedings plead guilty and are sentenced without a trial, whereas others who plead innocent are subsequently found guilty and then are sentenced. In recent years, legal scholars have speculated as to whether sentences of those who plead guilty differ in severity from sentences for those who plead innocent and are subsequently judged guilty. Consider the accompanying data on defendants from San Francisco County accused of robbery, all of whom had previous prison records ("Does It Pay to Plead Guilty? Differential Sentencing and the Functioning of Criminal Courts," Law Soc. Rev., 1981–1982: 45–69).
Does this data suggest that the proportion of all defendants in these circumstances who plead guilty and are sent to prison differs from the proportion who are sent to prison after pleading innocent and being found guilty?

   Plea:                          Guilty      Not guilty
   Number judged guilty           m = 191     n = 64
   Number sentenced to prison     x = 101     y = 56
   Sample proportion              p̂₁ = .529   p̂₂ = .875

Let p₁ and p₂ denote the two population proportions. The hypotheses of interest are H₀: p₁ − p₂ = 0 versus Hₐ: p₁ − p₂ ≠ 0. At level .01, H₀ should be rejected if either z ≥ z_{.005} = 2.58 or z ≤ −2.58. The combined estimate of the common success proportion is p̂ = (101 + 56)/(191 + 64) = .616. The value of the test statistic is then

   z = (.529 − .875) / √((.616)(.384)(1/191 + 1/64)) = −.346/.070 = −4.94

Since −4.94 ≤ −2.58, H₀ must be rejected. The P-value for a two-tailed z test is

   P-value = 2[1 − Φ(|z|)] = 2[1 − Φ(4.94)] < 2[1 − Φ(3.49)] = .0004

A more extensive standard normal table yields P-value = .0000006. This P-value is so minuscule that at any reasonable level α, H₀ should be rejected. The data very strongly suggests that p₁ ≠ p₂ and, in particular, that initially pleading guilty may be a good strategy as far as avoiding prison is concerned.

The cited article also reported data on defendants in several other counties. The authors broke down the data by type of crime (burglary or robbery) and by nature of prior record (none, some but no prison, and prison). In every case, the conclusion was the same: Among defendants judged guilty, those who pleaded that way were less likely to receive prison sentences. ■

Type II Error Probabilities and Sample Sizes

Here the determination of β is a bit more cumbersome than it was for other large-sample tests. The reason is that the denominator of Z is an estimate of the standard deviation of p̂₁ − p̂₂, assuming that p₁ = p₂ = p.
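Returning briefly to the plea example, its entire computation (pooled proportion, z, and two-tailed P-value) can be sketched in a few lines. Only the counts from the text are used; the helper name is my own, and Φ is computed from the standard library's erf:

```python
import math

def std_normal_cdf(z: float) -> float:
    """Phi(z), the standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

m, x = 191, 101   # pleaded guilty: number judged guilty, number sent to prison
n, y = 64, 56     # pleaded not guilty and found guilty

p1_hat, p2_hat = x / m, y / n
p_pooled = (x + y) / (m + n)                       # combined estimate, about .616
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / m + 1 / n))
z = (p1_hat - p2_hat) / se                         # about -4.93
p_value = 2 * (1 - std_normal_cdf(abs(z)))
print(f"z = {z:.2f}, P-value = {p_value:.7f}")
```

Working from the unrounded proportions gives z ≈ −4.93 rather than the −4.94 obtained in the text from the rounded values .529, .875, and .070; the conclusion is unchanged.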
When H₀ is false, p̂₁ − p̂₂ must be restandardized using

   σ = √(p₁q₁/m + p₂q₂/n)   (10.6)

The form of σ implies that β is not a function of just p₁ − p₂, so we denote it by β(p₁, p₂).

   Alternative Hypothesis      β(p₁, p₂)
   Hₐ: p₁ − p₂ > 0      Φ( [z_α√(p̄q̄(1/m + 1/n)) − (p₁ − p₂)] / σ )
   Hₐ: p₁ − p₂ < 0      1 − Φ( [−z_α√(p̄q̄(1/m + 1/n)) − (p₁ − p₂)] / σ )
   Hₐ: p₁ − p₂ ≠ 0      Φ( [z_{α/2}√(p̄q̄(1/m + 1/n)) − (p₁ − p₂)] / σ )
                           − Φ( [−z_{α/2}√(p̄q̄(1/m + 1/n)) − (p₁ − p₂)] / σ )

where p̄ = (mp₁ + np₂)/(m + n), q̄ = (mq₁ + nq₂)/(m + n), and σ is given by (10.6).

Proof  For the upper-tailed test (Hₐ: p₁ − p₂ > 0),

   β(p₁, p₂) = P( p̂₁ − p̂₂ < z_α√(p̂q̂(1/m + 1/n)) )
             = P( [p̂₁ − p̂₂ − (p₁ − p₂)]/σ < [z_α√(p̂q̂(1/m + 1/n)) − (p₁ − p₂)]/σ )

When m and n are both large,

   p̂ = (mp̂₁ + np̂₂)/(m + n) ≈ (mp₁ + np₂)/(m + n) = p̄

and q̂ ≈ q̄, which yields the previous (approximate) expression for β(p₁, p₂). ■

Alternatively, for specified p₁, p₂ with p₁ − p₂ = d, the sample sizes necessary to achieve β(p₁, p₂) = β can be determined. For example, for the upper-tailed test, we equate −z_β to the argument of Φ(·) (i.e., what's inside the parentheses) in the foregoing box. If m = n, there is a simple expression for the common value. For the case m = n, the level α test has type II error probability β at the alternative values p₁, p₂ with p₁ − p₂ = d when

   n = [z_α√((p₁ + p₂)(q₁ + q₂)/2) + z_β√(p₁q₁ + p₂q₂)]² / d²   (10.7)

for an upper- or lower-tailed test, with α/2 replacing α for a two-tailed test.

One of the truly impressive applications of statistics occurred in connection with the design of the 1954 Salk polio vaccine experiment and analysis of the resulting data. Part of the experiment focused on the efficacy of the vaccine in combating paralytic polio. Because it was thought that without a control group of children there would be no sound basis for assessment of the vaccine, it was decided to administer the vaccine to one group and a placebo injection (visually indistinguishable from the vaccine but known to have no effect) to a control group.
For ethical reasons, and also because it was thought that knowledge of vaccine administration might have an effect on treatment and diagnosis, the experiment was conducted in a double-blind manner. That is, neither the individuals receiving injections nor those administering them actually knew who was receiving vaccine and who was receiving the placebo (samples were numerically coded). Remember, at that point it was not at all clear whether the vaccine was beneficial.

Let p₁ and p₂ be the probabilities of a child getting paralytic polio for the control and treatment conditions, respectively. The objective was to test the hypotheses H₀: p₁ − p₂ = 0 versus Hₐ: p₁ − p₂ > 0 (the alternative hypothesis states that a vaccinated child is less likely to contract polio than an unvaccinated child). Supposing the true value of p₁ is .0003 (an incidence rate of 30 per 100,000), the vaccine would be a significant improvement if the incidence rate was halved, that is, p₂ = .00015. Using a level α = .05 test, it would then be reasonable to ask for sample sizes for which β = .1 when p₁ = .0003 and p₂ = .00015. Assuming equal sample sizes, the required n is obtained from (10.7) as

   n = [1.645√((.00045)(1.99955)/2) + 1.28√((.00015)(.99985) + (.0003)(.9997))]² / (.0003 − .00015)²
     = [(.0349 + .0271)/.00015]² ≈ 171,000

The actual data for this experiment follows. Sample sizes of approximately 200,000 were used. The reader can easily verify that z = 6.43, a highly significant value. The vaccine was judged a resounding success!

   Placebo:  m = 201,229   x = number of cases of paralytic polio = 110
   Vaccine:  n = 200,745   y = 33   ■

A Large-Sample Confidence Interval for p₁ − p₂

As with means, many two-sample problems involve the objective of comparison through hypothesis testing, but sometimes an interval estimate for p₁ − p₂ is appropriate.
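As a numerical check on the sample-size calculation in the vaccine example, formula (10.7) can be evaluated directly. The function name below is my own; α = .05 gives z_α = 1.645 and β = .10 gives z_β = 1.28:

```python
import math

def n_per_group(p1: float, p2: float, z_alpha: float, z_beta: float) -> float:
    """Common sample size m = n from (10.7) for an upper- or lower-tailed test."""
    q1, q2 = 1 - p1, 1 - p2
    term1 = z_alpha * math.sqrt((p1 + p2) * (q1 + q2) / 2)
    term2 = z_beta * math.sqrt(p1 * q1 + p2 * q2)
    return ((term1 + term2) / (p1 - p2)) ** 2

n = n_per_group(p1=0.0003, p2=0.00015, z_alpha=1.645, z_beta=1.28)
print(round(n))   # about 171,000 children per group
```

The tiny incidence rates are what drive the enormous required sample size: the difference to be detected (.00015) is minute relative to the standard deviation of p̂₁ − p̂₂.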
Both p̂₁ = X/m and p̂₂ = Y/n have approximate normal distributions when m and n are both large. If we identify θ with p₁ − p₂, then θ̂ = p̂₁ − p̂₂ satisfies the conditions necessary for obtaining a large-sample CI. In particular, the estimated standard deviation of θ̂ is √(p̂₁q̂₁/m + p̂₂q̂₂/n). The 100(1 − α)% interval θ̂ ± z_{α/2}·σ̂_{θ̂} then becomes

   p̂₁ − p̂₂ ± z_{α/2}√(p̂₁q̂₁/m + p̂₂q̂₂/n)

Notice that the estimated standard deviation of p̂₁ − p̂₂ (the square-root expression) is different here from what it was for hypothesis testing when Δ₀ = 0.

Recent research has shown that the actual confidence level for the traditional CI just given can sometimes deviate substantially from the nominal level (the level you think you are getting when you use a particular z critical value, e.g., 95% when z_{α/2} = 1.96). The suggested improvement is to add one success and one failure to each of the two samples and then replace the p̂'s and q̂'s in the foregoing formula by p̃'s and q̃'s, where p̃₁ = (x + 1)/(m + 2), etc. This interval can also be used when sample sizes are quite small.

The authors of the article "Adjuvant Radiotherapy and Chemotherapy in Node-Positive Premenopausal Women with Breast Cancer" (New Engl. J. Med., 1997: 956–962) reported on the results of an experiment designed to compare treating cancer patients with only chemotherapy to treatment with a combination of chemotherapy and radiation. Of the 154 individuals who received the chemotherapy-only treatment, 76 survived at least 15 years, whereas 98 of the 164 patients who received the hybrid treatment survived at least that long. With p₁ denoting the proportion of all such women who, when treated with just chemotherapy, survive at least 15 years and p₂ denoting the analogous proportion for the hybrid treatment, p̂₁ = 76/154 = .494 and p̂₂ = 98/164 = .598.
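These two sample proportions feed directly into the large-sample formula. The sketch below (the helper name is not from the text) computes both the traditional 99% interval and the adjusted interval that adds one success and one failure to each sample:

```python
import math

def two_prop_ci(x: int, m: int, y: int, n: int, z: float):
    """Large-sample CI for p1 - p2 based on counts x out of m and y out of n."""
    p1, p2 = x / m, y / n
    half = z * math.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / n)
    return (p1 - p2 - half, p1 - p2 + half)

z = 2.58                                      # z_{.005}, for ~99% confidence
traditional = two_prop_ci(76, 154, 98, 164, z)
adjusted = two_prop_ci(77, 156, 99, 166, z)   # one success and one failure added
print(traditional, adjusted)
```

The traditional interval reproduces the (−.247, .039) computed in the text, and the adjusted interval differs only in the third decimal place, consistent with the remark that the two are essentially identical.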
A confidence interval for the difference between proportions based on the traditional formula with a confidence level of approximately 99% is

   .494 − .598 ± 2.58√((.494)(.506)/154 + (.598)(.402)/164) = −.104 ± .143 = (−.247, .039)

At the 99% confidence level, it is plausible that −.247 < p₁ − p₂ < .039. This interval is reasonably wide, a reflection of the fact that the sample sizes are not terribly large for this type of interval. Notice that 0 is one of the plausible values of p₁ − p₂, suggesting that neither treatment can be judged superior to the other. Using p̃₁ = 77/156 = .494, q̃₁ = 79/156 = .506, p̃₂ = 99/166 = .596, q̃₂ = .404, based on sample sizes of 156 and 166, respectively, the "improved" interval here is essentially identical to the earlier interval. ■

Small-Sample Inferences

On occasion an inference concerning p₁ − p₂ may have to be based on samples for which at least one sample size is small. Appropriate methods for such situations are not as straightforward as those for large samples, and there is more controversy among statisticians as to recommended procedures. One frequently used test, called the Fisher–Irwin test, is based on the hypergeometric distribution.

Exercises | Section 10.4 (48–59)

48. Is someone who switches brands because of a financial inducement less likely to remain loyal than someone who switches without inducement? Let p₁ and p₂ denote the true proportions of switchers to a certain brand with and without inducement, respectively, who subsequently make a repeat purchase. Test H₀: p₁ − p₂ = 0 versus Hₐ: p₁ − p₂ < 0 using α = .01 and the following data:

   m = 200   number of successes = 30
   n = 600   number of successes = 180

   (Similar data is given in "Impact of Deals and Deal Retraction on Brand Switching," J. Marketing, 1980: 62–70.)

49. A sample of 300 urban adult residents of a particular state revealed 63 who favored increasing the highway speed limit from 55 to 65 mph, whereas a sample of 180 rural residents yielded 75 who favored the increase. Does this data indicate that the sentiment for increasing the speed limit is different for the two groups of residents?
   a. Test H₀: p₁ = p₂ versus Hₐ: p₁ ≠ p₂ using α = .05, where p₁ refers to the urban population.
   b. If the true proportions favoring the increase are actually p₁ = .20 (urban) and p₂ = .40 (rural), what is the probability that H₀ will be rejected using a level .05 test with m = 300, n = 180?

50. It is thought that the front cover and the nature of the first question on mail surveys influence the response rate. The article "The Impact of Cover Design and First Questions on Response Rates for a Mail Survey of Skydivers" (Leisure Sci., 1991: 67–76) tested this theory by experimenting with different cover designs. One cover was plain; the other used a picture of a skydiver. The researchers speculated that the return rate would be lower for the plain cover.

   Cover       Number Sent   Number Returned
   Plain           207            104
   Skydiver        213            109

   Does this data support the researchers' hypothesis? Test the relevant hypotheses using α = .10 by first calculating a P-value.

51. Do teachers find their work rewarding and satisfying? The article "Work-Related Attitudes" (Psych. Rep., 1991: 443–450) reports the results of a survey of 395 elementary school teachers and 266 high school teachers. Of the elementary school teachers, 224 said they were very satisfied with their jobs, whereas 126 of the high school teachers were very satisfied with their work. Estimate the difference between the proportion of all elementary school teachers who are satisfied and all high school teachers who are satisfied by calculating a CI.

52. A random sample of 5726 telephone numbers from a certain region taken in March 2002 yielded 1105 that were unlisted, and 1 year later a sample of 5384 yielded 980 unlisted numbers.
   a. Test at level .10 to see whether there is a difference in true proportions of unlisted numbers between the 2 years.
   b. If p₁ = .20 and p₂ = .18, what sample sizes (m = n) would be necessary to detect such a difference with probability .90?

53. Ionizing radiation is being given increasing attention as a method for preserving horticultural products. The article "The Influence of Gamma-Irradiation on the Storage Life of Red Variety Garlic" (J. Food Process. Preserv., 1983: 179–183) reports that 153 of 180 irradiated garlic bulbs were marketable (no external sprouting, rotting, or softening) 240 days after treatment, whereas only 119 of 180 untreated bulbs were marketable after this length of time. Does this data suggest that ionizing radiation is beneficial as far as marketability is concerned?

54. In medical investigations, the ratio θ = p₁/p₂ is often of more interest than the difference p₁ − p₂ (e.g., individuals given treatment 1 are how many times as likely to recover as those given treatment 2?). Let θ̂ = p̂₁/p̂₂. When m and n are both large, the statistic ln(θ̂) has approximately a normal distribution with approximate mean value ln(θ) and approximate standard deviation [(m − x)/(mx) + (n − y)/(ny)]^{1/2}.
   a. Use these facts to obtain a large-sample 95% CI formula for estimating ln(θ), and then a CI for θ itself.
   b. Return to the heart attack data of Example 1.3, and calculate an interval of plausible values for θ at the 95% confidence level. What does this interval suggest about the efficacy of the aspirin treatment?

55. Sometimes experiments involving success or failure responses are run in a paired or before/after manner. Suppose that before a major policy speech by a political candidate, n individuals are selected and asked whether (S) or not (F) they favor the candidate. Then after the speech the same n people are asked the same question. The responses can be entered in a table as follows:

                     After
                     S     F
   Before    S      X₁    X₂
             F      X₃    X₄

   where X₁ + X₂ + X₃ + X₄ = n. Let p₁, p₂, p₃, and p₄ denote the four cell probabilities, so that p₁ = P(S before and S after), and so on. We wish to test the hypothesis that the true proportion of supporters (S) after the speech has not increased against the alternative that it has increased.
   a. State the two hypotheses of interest in terms of p₁, p₂, p₃, and p₄.
   b. Construct an estimator for the after/before difference in success probabilities.
   c. When n is large, it can be shown that the rv (Xᵢ − Xⱼ)/n has approximately a normal distribution with variance given by [pᵢ + pⱼ − (pᵢ − pⱼ)²]/n. Use this to construct a test statistic with approximately a standard normal distribution when H₀ is true (the result is called McNemar's test).
   d. If x₁ = 350, x₂ = 150, x₃ = 200, and x₄ = 300, what do you conclude?

56. The Chicago Cubs won 73 games and lost 71 in 1995. This was described as a much more successful season for them than 1994, when they won only 49 and lost 64.
   a. Based on a binomial model with p₁ for 1994 and p₂ for 1995, carry out a two-tailed test for the difference. Based on your result, could the difference in sample proportions be attributed to luck (bad in 1994, good in 1995)?
   b. Criticize the binomial model. Do baseball games satisfy the assumptions?

57. Using the traditional formula, a 95% CI for p₁ − p₂ is to be constructed based on equal sample sizes from the two populations. For what value of n (= m) will the resulting interval have width at most .1, irrespective of the results of the sampling?

58. Statin drugs are used to decrease cholesterol levels, and therefore hopefully to decrease the chances of a heart attack. In a British study ("MRC/BHF Heart Protection Study of Cholesterol Lowering with Simvastatin in 20,536 High-Risk Individuals: A Randomized Placebo-Controlled Trial," Lancet, 2002: 7–22), 20,536 at-risk adults were assigned randomly to take either a 40-mg statin pill or placebo. The subjects had coronary disease, artery blockage, or diabetes. After 5 years there were 1328 deaths (587 from heart attack) among the 10,269 in the statin group and 1507 deaths (707 from heart attack) among the 10,267 in the placebo group.
   a. Give a 95% confidence interval for the difference in population death proportions.
   b. Give a 95% confidence interval for the difference in population heart attack death proportions.
   c. Is it reasonable to say that most of the difference in death rates is due to heart attacks, as might be expected?

59. A study of male navy enlisted personnel was reported in the Bloomington, Illinois, Daily Pantagraph, Aug. 23, 1993. It was found that 90 of 231 left-handers had been hospitalized for injuries, whereas 623 of 2148 right-handers had been hospitalized for injuries. Test for equal population proportions at the .01 level, find the P-value for the test, and interpret your results. Can it be concluded that there is a causal relationship between handedness and proneness to injury? Explain.

10.5 Inferences About Two Population Variances

Methods for comparing two population variances (or standard deviations) are occasionally needed, though such problems arise much less frequently than those involving means or proportions. For the case in which the populations under investigation are normal, the procedures are based on the F distribution, as discussed in Section 6.4.

Testing Hypotheses

A test procedure for hypotheses concerning the ratio σ₁²/σ₂², as well as a CI for this ratio, are based on the following result from Section 6.4.

THEOREM  Let X₁, ..., X_m be a random sample from a normal distribution with variance σ₁², let Y₁, ..., Y_n be another random sample (independent of the Xᵢ's) from a normal distribution with variance σ₂², and let S₁² and S₂² denote the two sample variances.
Then the rv

    F = (S1²/σ1²) / (S2²/σ2²)        (10.8)

has an F distribution with ν1 = m − 1 and ν2 = n − 1.

Under the null hypothesis of equal population variances, (10.8) reduces to the ratio of sample variances. For a test statistic we use this ratio of sample variances, and the claim that σ1² = σ2² is rejected if the ratio differs by too much from 1.

Chapter 10 Inferences Based on Two Samples

THE F TEST FOR EQUALITY OF VARIANCES

    Null hypothesis: H0: σ1² = σ2²
    Test statistic value: f = s1²/s2²

    Alternative Hypothesis       Rejection Region for a Level α Test
    Ha: σ1² > σ2²                f ≥ F_{α,m−1,n−1}
    Ha: σ1² < σ2²                f ≤ F_{1−α,m−1,n−1}
    Ha: σ1² ≠ σ2²                either f ≥ F_{α/2,m−1,n−1} or f ≤ F_{1−α/2,m−1,n−1}

Since critical values are tabled only for α = .10, .05, .01, and .001, the two-tailed test can be performed only at levels .20, .10, .02, and .002. More extensive tabulations of F critical values are available elsewhere, including calculators and computer software.

Is there less variation in weights of some baked goods than others? Here are the weights (in grams) for a sample of Bruegger's bagels (their Iowa City shop) and another sample of Wolferman's muffins (made in Kansas City):

    B: 99.8  105.4  94.7  107.8  114.3  106.3
    W: 99.0  98.2  98.1  102.1  102.9  104.1  98.8  99.5

The normality assumption is very important for the use of Expression (10.8), so we check the normal plot from MINITAB, shown in Figure 10.9. There is no apparent reason to doubt normality here.

Figure 10.9 Normal plot for baked goods (Bruegger's: mean 104.7, StDev 6.765, n = 6, AD = .206, P = .762; Wolferman's: mean 100.3, StDev 2.338, n = 8, AD = .548, P = .107)

Notice the difference in slopes for the two sources. This suggests different variabilities, because the vertical axis is the z-score, which is related to the horizontal axis (grams) by z = (grams − mean)/(std dev). Thus, when the z-score is plotted against grams, the slope is the reciprocal of the standard deviation. Now let's test H0: σ1² = σ2² against a two-tailed alternative with α = .02.
We need the critical values F_{.01,5,7} = 7.46 and F_{.99,5,7} = 1/F_{.01,7,5} = 1/10.46 = .0956. We have

    f = s1²/s2² = 6.765²/2.338² = 8.37

which exceeds 7.46, so the hypothesis of equal variances is rejected. We conclude that there is a difference in weight variation, and the English muffins are less variable. Notice that it is not really necessary to use the lower-tailed critical value here if the groups are chosen so the first group has the larger variance, and therefore the value of f = s1²/s2² exceeds 1. Because f > 1, the only comparison is between the computed f and the upper critical value 7.46. It does not change the result of the test to fix things so f > 1, so it is not cheating to simplify the test in this way. ■

P-Values for F Tests

Recall that the P-value for an upper-tailed t test is the area under the relevant t curve (the one with appropriate df) to the right of the calculated t. In the same way, the P-value for an upper-tailed F test is the area under the F curve with appropriate numerator and denominator df to the right of the calculated f. Figure 10.10 illustrates this for a test based on ν1 = 4 and ν2 = 6.

Figure 10.10 A P-value for an upper-tailed F test (F curve for ν1 = 4, ν2 = 6; the shaded area to the right of f = 6.23 is the P-value, .025)

Unfortunately, tabulation of F curve upper-tail areas is much more cumbersome than for t curves because two df's are involved. For each combination of ν1 and ν2, our F table gives only the four critical values that capture areas .10, .05, .01, and .001. Figure 10.11 shows what can be said about the P-value depending on where f falls relative to the four critical values. For example, for a test with ν1 = 4 and ν2 = 6,

    f = 5.70  ⇒  .01 < P-value < .05
    f = 2.16  ⇒  P-value > .10
    f = 25.03 ⇒  P-value < .001

Only if f equals a tabulated value do we obtain an exact P-value (e.g., if f = 4.53, then P-value = .05).
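The variance-ratio test just carried out can be sketched in standard-library Python; the data and the tabled upper critical value F_{.01,5,7} = 7.46 are taken from the example above, and the arrangement with the larger-variance group in the numerator means only the upper critical value is needed.

```python
import statistics

# Weights (grams) from the example: Bruegger's bagels (B) and
# Wolferman's muffins (W)
B = [99.8, 105.4, 94.7, 107.8, 114.3, 106.3]
W = [99.0, 98.2, 98.1, 102.1, 102.9, 104.1, 98.8, 99.5]

# Test statistic: ratio of sample variances, larger-variance group first
f = statistics.variance(B) / statistics.variance(W)

# Upper critical value F_.01,5,7 = 7.46 from the F table
# (two-tailed test at level alpha = .02)
reject = f > 7.46
print(round(f, 2), reject)   # 8.37 True
```

With software in place of tables, one would instead compute an exact P-value from the F distribution with 5 and 7 df rather than bracketing it between tabled critical values.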
Once we know that .01 < P-value < .05, H0 would be rejected at a significance level of .05 but not at a level of .01. When P-value < .001, H0 should be rejected at any reasonable significance level.

The F tests discussed in succeeding chapters will all be upper-tailed. If, however, a lower-tailed F test is appropriate, then (6.15) should be used to obtain lower-tailed critical values so that a bound or bounds on the P-value can be established. In the case of a two-tailed test, the bound or bounds from a one-tailed test should be multiplied by 2. For example, if f = 5.82 when ν1 = 4 and ν2 = 6, then since 5.82 falls between the .05 and .01 critical values, 2(.01) < P-value < 2(.05), giving .02 < P-value < .10. H0 would then be rejected if α = .10.

Figure 10.11 Obtaining P-value information for an upper-tailed F test (for ν1 = 4 and ν2 = 6 the tabled critical values are 3.18 for upper-tail area .10, 4.53 for .05, and 9.15 for .01)

10.6 Comparisons Using the Bootstrap and Permutation Methods

    x̄ − ȳ ± t_{.025,21} √(s1²/m + s2²/n) = 6.906 − 1.813 ± 2.080(2.1825) = 5.093 ± 4.540 = (.55, 9.63)

The degrees of freedom ν = 21 come from the messy formula in the theorem of Section 10.2. The confidence interval does not include 0, which implies that we would reject the hypothesis μ1 = μ2 against a two-tailed alternative at the .05 level. This is in agreement with what we get in testing this hypothesis directly: t = 2.33, P-value .030.

The t method is of questionable validity, because of sample sizes that might not be enough to compensate for the nonnormality. The bootstrap method involves drawing a random sample of size 18 with replacement from the 18 boys, drawing a random sample of size 15 with replacement from the 15 girls, and calculating the difference of means. Then this process is repeated to give a total of 999 differences of means. The distribution of these 999 differences of means is the bootstrap distribution.
To help clarify the procedure, here are random samples from the boys and girls:

    B: 0.0, 3.0, 2.8, 0.9, 3.0, 0.0, 0.0, 6.5, 6.4, 8.7, 6.4, 1.0, 0.9, 5.5, 17.0, 17.0, 0.0, 3.0
    G: 1.3, 0.0, 0.0, 0.0, 0.0, 1.3, 1.3, 0.0, 3.2, 0.0, 1.3, 5.2, 0.0, 0.0, 0.0

Of course, in sampling with replacement some values will occur more than once and some will not occur at all. For these two samples, the difference of means is 4.56 − .91 = 3.65. Doing this 999 times (using the R package boot) gives the bootstrap distribution displayed in Figure 10.12. The distribution looks almost normal, but with some positive skewness. The idea of the bootstrap, with its samples taken from the original samples of boys and girls, is for this histogram to resemble the true distribution of the difference of means. If the original samples of boys and girls are representative of their populations, then our histogram should be a reasonable imitation of the population distribution for the difference of means.

Figure 10.12 Histogram and normal plot of the bootstrapped difference in means from R

In spite of the nonnormality of the bootstrap distribution, we will use its standard deviation to compute a confidence interval to see how much it differs from the percentile interval. The standard deviation of the bootstrap distribution (i.e., of the 999 x̄* − ȳ* values) is s_boot = 2.1874, very close to the 2.1825 that was computed for the square root in the t interval above. Using 2.1874 instead of 2.1825 gives the 95% confidence interval

    x̄ − ȳ ± z_{.025} s_boot = 6.906 − 1.813 ± 1.96(2.1874) = 5.093 ± 4.287 = (.81, 9.38)

This is very similar to the t interval, (.55, 9.63), except that using z_{.025} (common bootstrap practice) instead of t_{.025,21} shortens the interval.
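The text's own computations use R's boot package; the resampling loop can also be sketched in standard-library Python. Because this excerpt does not list the full original samples of 18 boys and 15 girls, the two illustrative resamples shown above serve here as stand-in data, so the resulting interval is only a demonstration of the mechanics.

```python
import random
from statistics import mean

def bootstrap_diff_means(x, y, B=999, seed=1):
    """Return B bootstrap replicates of mean(x*) - mean(y*),
    resampling each group with replacement at its own size."""
    rng = random.Random(seed)
    return [mean(rng.choices(x, k=len(x))) - mean(rng.choices(y, k=len(y)))
            for _ in range(B)]

# Stand-in data: the illustrative resamples printed in the text,
# NOT the study's full original samples
boys  = [0.0, 3.0, 2.8, 0.9, 3.0, 0.0, 0.0, 6.5, 6.4, 8.7,
         6.4, 1.0, 0.9, 5.5, 17.0, 17.0, 0.0, 3.0]
girls = [1.3, 0.0, 0.0, 0.0, 0.0, 1.3, 1.3, 0.0, 3.2, 0.0,
         1.3, 5.2, 0.0, 0.0, 0.0]

diffs = sorted(bootstrap_diff_means(boys, girls))
# 95% percentile interval: 25th sorted value in from each end of the 999
lo, hi = diffs[24], diffs[974]
print(round(mean(boys) - mean(girls), 2))   # 3.65, matching the text
```

The same sorted list yields both the percentile interval (as above) and, via its standard deviation, the z-based interval discussed in the text.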
Note that the R package boot produces a slightly different interval, because it replaces the difference 5.093 with the average of the 999 bootstrap mean differences.

In the presence of a nonnormal bootstrap distribution, we now use the percentile interval, which for a 95% confidence interval finds the middle 95% of the bootstrap distribution. The confidence limits for a 95% confidence interval are the 2.5 percentile and the 97.5 percentile. When the 999 bootstrap differences of means are sorted, the 25th value from the bottom is 1.029 and the 25th value from the top is 9.760. This gives a 95% CI of (1.029, 9.760). The skewness of the bootstrap distribution pushes the endpoints a little to the right of the endpoints computed from s_boot. In addition, one can compute the bias-corrected and accelerated refinement, as discussed in Section 8.5. The improved interval (1.625, 10.446), obtained from R, is moved even farther to the right compared to the previous intervals. ■

Permutation Tests

How should we test hypotheses when the validity of the t test is in doubt? Permutation tests do not require any specific distribution for the data. The idea is that under the null hypothesis, every observation has the same distribution and thus the same expected value, so we can rearrange the group labels without changing the group population means. We look at all possible arrangements, compute the difference of means for each of these, and compute a P-value by seeing how extreme our original difference of means is. That is, the P-value is the fraction of arrangements that are at least as extreme as the value computed for the original data.

Consider a small-scale version of the off-task private speech data. The first three values for the boys are 4.9, 5.5, 6.5 and the first two values for the girls are 0.0, 1.3. To demonstrate the permutation test, we will act as if this is the whole data set.
First, we compute the difference of means of the boys versus the girls, 5.63 − .65 = 4.98. Under the null hypothesis of equal population means, it should not matter if we reassign boys and girls. Therefore, we consider all ways of selecting three from among the five observations to be in the boys sample, leaving the other two for the girls sample. Under the null hypothesis, the following ten choices are equally likely.

    Boys             x̄      Girls        ȳ      x̄ − ȳ
    4.9  5.5  6.5    5.63   0.0  1.3     .65     4.98
    4.9  5.5  0.0    3.47   6.5  1.3    3.90     −.43
    4.9  5.5  1.3    3.90   0.0  6.5    3.25      .65
    4.9  6.5  0.0    3.80   5.5  1.3    3.40      .40
    4.9  6.5  1.3    4.23   5.5  0.0    2.75     1.48
    4.9  0.0  1.3    2.07   5.5  6.5    6.00    −3.93
    5.5  6.5  0.0    4.00   4.9  1.3    3.10      .90
    5.5  6.5  1.3    4.43   4.9  0.0    2.45     1.98
    5.5  0.0  1.3    2.27   6.5  4.9    5.70    −3.43
    6.5  0.0  1.3    2.60   5.5  4.9    5.20    −2.60

How extreme is our original difference of means (4.98) in this set of ten differences? Because it is the largest of the ten, our P-value for an upper-tailed alternative hypothesis is 1/10 = .10. That is, for an upper-tailed test the P-value is the fraction of arrangements that give a difference at least as large as our original difference. For a two-tailed test we simply double the one-tailed P-value, giving P = .20 for this example. ■

When m = 3 and n = 2, it is simple enough to deal with all C(5, 3) = 10 arrangements. What happens when we try to use the whole set of 18 boys and 15 girls in the private speech data set?

Consider a permutation test for the full private speech data. Here we are dealing with C(33, 18) = 1,037,158,320 arrangements of the 18 boys and 15 girls, more than a billion arrangements. Even on a reasonably fast computer it might take a while to generate this many differences and see how many are at least as big as the value x̄ − ȳ = 6.906 − 1.813 = 5.093 computed for the original data. It took around an hour on an 800-MHz Dell using the free program BLOSSOM, which can be downloaded from the Internet.
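The ten-arrangement enumeration above is small enough to reproduce exactly. Here is a standard-library Python sketch of the exact permutation test on the five scaled-down observations:

```python
from itertools import combinations
from statistics import mean

# Small-scale data: first three boys' values and first two girls' values
boys, girls = [4.9, 5.5, 6.5], [0.0, 1.3]
pooled = boys + girls
observed = mean(boys) - mean(girls)          # 5.63 - .65 = 4.98

# All C(5, 3) = 10 ways to label three pooled observations "boys"
diffs = []
for combo in combinations(range(5), 3):
    b = [pooled[i] for i in combo]
    g = [pooled[i] for i in range(5) if i not in combo]
    diffs.append(mean(b) - mean(g))

# One-tailed P-value: fraction of arrangements at least as extreme
p_upper = sum(d >= observed for d in diffs) / len(diffs)
print(len(diffs), p_upper, 2 * p_upper)      # 10 0.1 0.2
```

Replacing the exhaustive `combinations` loop with a few thousand random shuffles of `pooled` gives the approximate version discussed next for the full 33-child data set.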
The two-tailed P-value is .0203, a little less than the P-value .030 from the t test. There is fairly strong evidence, at least at the 5% level, that the boys engage in more off-task private speech than the girls. We might have expected that the hypothesis test would reject the null hypothesis (of zero difference in means) at the 5% level with a two-tailed test: recall that all three of our 95% confidence intervals in Example 10.16 consisted of only positive values, so none of the intervals included zero.

The number of arrangements goes up very quickly as the group sizes increase. If there are 20 boys and 20 girls, then the number of arrangements is more than 100 times as big as when there are 18 boys and 15 girls. Doing the test exactly, using all of the arrangements, becomes entirely impractical, but there is an approximate alternative. We can take a random sample of a few thousand arrangements and get quite close to the exact answer. For example, with our 18 boys and 15 girls, BLOSSOM gives (almost instantaneously) a P-value of .0204, which is certainly close enough to the exact answer of .0203. An approximate computation is also available in R (in the boot package) and Stata, and can easily be programmed in other software such as MINITAB. ■

PERMUTATION TESTS  Let θ1 and θ2 be the same parameters (means, medians, standard deviations, etc.) for two different populations, and consider testing H0: θ1 = θ2 based on independent samples of sizes m and n, respectively. Suppose that when H0 is true, the two population distributions are identical in all respects, so all m + n observations have actually been selected from the same population distribution. In this case, the labels 1 and 2 are arbitrary, as any m of the m + n observations have the same chance of ending up in the first sample (leaving the remaining n for the second sample).
An exact permutation test computes a suitable comparison statistic for all possible rearrangements and sets the P-value equal to the fraction of these that are at least as extreme as the statistic computed on the original samples. This is the P-value for a one-tailed test; it needs to be doubled for a two-tailed test. For an approximate permutation test, instead of all possible arrangements we take a random sample with replacement from the set of all possible arrangements.

Permutation tests are nonparametric, meaning that they do not assume a specific underlying distribution such as the normal distribution. However, this does not mean that there are no assumptions whatsoever. The null hypothesis in a permutation test is that the two distributions are the same, and any deviation can increase the probability of rejecting the null hypothesis. Thus, strictly speaking, we are doing a test for equal means only if the distributions are alike in all other respects, and this means that the two distributions have the same shape. In particular, it requires the distributions to have the same spread. See Exercise 84 for an example in which the permutation test underestimates the true P-value.

Inferences About Variability

Section 10.5 discussed the use of the F distribution for comparing two variances, but this inferential method is strongly dependent on normality. For highly skewed data the F test for equal variances will tend to reject the null hypothesis too often.

Consider the off-task private speech data from Example 10.16. The sample standard deviations for boys and girls are 8.72 and 2.85, respectively. Then the method of Section 10.5 gives for the ratio of male to female variances the 95% confidence interval

    ( (s1²/s2²)(1/F_{.025,17,14}) , (s1²/s2²)F_{.025,14,17} ) = ( (8.72²/2.85²)(1/2.90) , (8.72²/2.85²)(2.75) ) = (3.23, 25.77)

Taking the square root gives (1.80, 5.08) as the 95% confidence interval for the ratio of standard deviations. However, the legitimacy of this interval is seriously in question because of the skewed distributions.

What about a hypothesis test of equal population variances? The ratio of male variance to female variance is s1²/s2² = 8.72²/2.85² = 9.385. Comparing this to the F distribution with 17 numerator degrees of freedom and 14 denominator degrees of freedom, we find that the one-tailed P-value is .000061, and therefore the two-tailed P-value is .00012. This is consistent with the 95% confidence interval not including 1. It would be strong evidence for the male variance being greater than the female variance, except that the validity of the test is in doubt because of nonnormality.

Figure 10.13 Histogram and normal plot of bootstrap standard deviation ratios from R

Let's apply the bootstrap to this problem. Begin with a sample from the boys, standard deviation 5.264, and a sample from the girls, standard deviation 1.505, with ratio 5.264/1.505 = 3.498. We do this 999 times using the boot package in R, and the resulting distribution of ratios is shown in Figure 10.13. The bootstrap distribution is strongly skewed to the right. For a 95% confidence interval, the percentile method uses the middle 95% of the bootstrap distribution. The 2.5 percentile is 1.013 and the 97.5 percentile is 7.888, so the 95% confidence interval for the population ratio of standard deviations is (1.013, 7.888). The bias-corrected and accelerated (BCa) refinement gives the interval (0.885, 7.382). These two intervals differ in an important respect: the percentile interval excludes 1 but the BCa refinement includes 1.
In other words, the BCa interval allows the possibility that the two standard deviations are the same, but the percentile interval does not. We expect the BCa method to be an improvement, and this is verified in the next example, where we see that the BCa result is consistent with the results of a permutation test. ■

Consider using a permutation test for H0: σ1 = σ2. From Example 10.19 we know that the ratio of sample standard deviations for off-task private speech, males versus females, is 8.72/2.85 = 3.064. The idea of the permutation test is to find out how unusual this value is if we blur the distinction between males and females. That is, we remove the labels from the 18 males and 15 females and then consider all possible choices of 18 from the 33 children. For each of these possible choices we find the ratio of the standard deviation of the first 18 to the standard deviation of the last 15. The one-tailed P-value is the fraction that are at least as big as the original value, 3.064. Because there are more than a billion possible choices of 18 from 33, we instead selected 4999 random choices. This gives a total of 5000 when the original selection of males and females is included. Of these, 432 are at least as big as 3.064, so the one-tailed P-value is 432/5000 = .0864. For a two-tailed P-value we double this and get .1728. The permutation test does not reject at the 5% level (or the 10% level) the null hypothesis that the two population standard deviations are the same.

How does the permutation test result compare with the other results? Recall that the F interval and the percentile interval ruled out the possibility that the two standard deviations are the same, but the BCa refinement disagreed, because 1 is included in the BCa interval. Taking it for granted that the permutation test is a valid approach, and since the permutation test does not reject the equality of standard deviations, the BCa interval is the only one of the three CIs consistent with this result. ■
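Both procedures in this subsection — the bootstrap percentile interval for σ1/σ2 and the label-shuffling permutation test of H0: σ1 = σ2 — can be sketched together in standard-library Python. The book's own computations use R's boot package and BLOSSOM, and since the full 18-boy/15-girl data set is not reproduced in this excerpt, the samples below are stand-ins, so the numbers will differ from those quoted above.

```python
import random
from statistics import stdev

# Stand-in samples; NOT the study's full original data
boys  = [0.0, 3.0, 2.8, 0.9, 3.0, 0.0, 0.0, 6.5, 6.4, 8.7,
         6.4, 1.0, 0.9, 5.5, 17.0, 17.0, 0.0, 3.0]
girls = [1.3, 0.0, 0.0, 0.0, 0.0, 1.3, 1.3, 0.0, 3.2, 0.0,
         1.3, 5.2, 0.0, 0.0, 0.0]
rng = random.Random(1)

# --- Bootstrap percentile CI for sigma1/sigma2 ---
ratios = []
while len(ratios) < 999:
    bs = rng.choices(boys, k=len(boys))
    gs = rng.choices(girls, k=len(girls))
    if stdev(gs) > 0:                  # skip a degenerate all-equal resample
        ratios.append(stdev(bs) / stdev(gs))
ratios.sort()
ci = (ratios[24], ratios[974])         # 25th sorted value in from each end

# --- Permutation test of H0: sigma1 = sigma2 ---
observed = stdev(boys) / stdev(girls)
pooled = boys + girls
count = 1                              # the original labeling counts itself
for _ in range(4999):
    rng.shuffle(pooled)
    if stdev(pooled[:18]) / stdev(pooled[18:]) >= observed:
        count += 1
p_one_tailed = count / 5000
```

Including the original labeling among the 5000 arrangements, as the text does, guarantees the estimated P-value is never exactly zero.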
The Analysis of Paired Data

The bootstrap can be used for paired data if we work with the paired differences, as in the paired t methods of Section 10.3.

The private speech study was introduced in Examples 1.2 and 10.16. The study included the percentage of intervals with on-task private speech for 33 children in the first, second, and third grades. Here we will consider just the 15 girls in the first and second grades. Is there a change in on-task private speech when the girls go from the first to the second grade? Here are the percentages of intervals in which on-task private speech occurred, and also the differences.

    Grade 1   Grade 2   Difference
    25.7      18.6        7.1
    36.0      17.4       18.6
    27.6       2.6       25.0
    29.7       0.9       28.8
    36.0       1.5       34.5
    35.1      14.1       21.0
    42.0       3.3       38.7
     7.6       1.6        6.0
    14.1       0.0       14.1
    25.0       1.5       23.5
    20.2       0.0       20.2
    24.4       2.1       22.3
    10.4      18.4       −8.0
    21.1       2.6       18.5
     5.6      26.0      −20.4

Our null hypothesis is that the population mean difference between first- and second-grade percentages is zero. Figure 10.14 shows a histogram for the differences; it shows a negatively skewed distribution.

Figure 10.14 Histogram of differences for girls from Stata

The paired t method of Section 10.3 requires normality, so the skewness might invalidate it, but we will show the results here anyway for comparison purposes. The mean of the differences is d̄ = 16.66 with standard deviation sD = 15.43, so the 95% confidence interval for the population mean difference is

    d̄ ± t_{.025,15−1} sD/√15 = 16.66 ± 2.145(15.43/√15) = 16.66 ± 8.54 = (8.12, 25.20)

What about the bootstrap for paired data? The bootstrap focuses on the 15 differences and uses the method of Section 8.5. Using Stata, we draw 999 samples of size 15 with replacement from the 15 differences, and these 999 samples constitute the bootstrap distribution. Figure 10.15 shows the histogram.
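The paired bootstrap just described (the text uses Stata) can be sketched in standard-library Python using the 15 differences from the table above; the summary statistics match the text exactly, while the resampled interval endpoints depend on the random seed.

```python
import random
from statistics import mean, stdev

# The 15 differences (grade 1 minus grade 2) from the table above
d = [7.1, 18.6, 25.0, 28.8, 34.5, 21.0, 38.7, 6.0, 14.1,
     23.5, 20.2, 22.3, -8.0, 18.5, -20.4]

print(round(mean(d), 2), round(stdev(d), 2))   # 16.66 15.43, as in the text

# 999 bootstrap means of resamples of size 15, drawn with replacement
rng = random.Random(4)
boot_means = sorted(mean(rng.choices(d, k=15)) for _ in range(999))

# 95% percentile interval: 25th sorted value in from each end
lo, hi = boot_means[24], boot_means[974]
```

The standard deviation of `boot_means` plays the role of s_boot in the z-based interval discussed in the text.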
Figure 10.15 Histogram of bootstrap differences for girls from Stata

The histogram shows negative skewness, which is expected because of the negative skewness shown in Figure 10.14 for the original sample. The skewness implies that a symmetric confidence interval will not be entirely appropriate, but we show it for comparison with the other intervals. The standard deviation of the bootstrap distribution is s_boot = 3.994, compared to the estimated standard error sD/√15 = 15.43/√15 = 3.984. The 95% bootstrap confidence interval is narrower because it uses z_{.025} instead of t_{.025,14}:

    d̄ ± z_{.025} s_boot = 16.66 ± 1.96(3.994) = 16.66 ± 7.83 = (8.83, 24.49)

This is slightly different from what Stata produces, because Stata uses t_{.025,B−1} = t_{.025,999−1}, where B is the size of the bootstrap sample.

The 95% percentile interval uses the 2.5 percentile = 7.91 and the 97.5 percentile = 23.97 of the bootstrap distribution, so the confidence interval is (7.91, 23.97). This interval is to the left of the t intervals because of the negative skewness of the bootstrap distribution. The bias-corrected and accelerated refinement from Stata yields the interval (6.43, 23.12), which is even farther to the left. All of the intervals agree that there is a substantial population difference between first grade and second grade: there is a strong reduction in the on-task private speech of girls between first and second grades. ■

A permutation test for paired data involves permutations within the pairs. Under the null hypothesis, the two observations in a pair have the same population mean, so the population mean difference is zero even if the order within a pair is reversed. Therefore, we consider all possible orderings of the n pairs. Because there are two possible orderings within each pair, there are 2^n arrangements of n pairs.
The one-tailed P-value is the fraction of the 2^n differences that are at least as extreme as the observed value, and the two-tailed P-value is double this.

To see how the permutation test works for paired data, consider a scaled-down version of the data from Example 10.21 with only the first three pairs. These are (25.7, 18.6), (36.0, 17.4), (27.6, 2.6). They give a mean difference of (7.1 + 18.6 + 25.0)/3 = 16.9. Here are all 8 = 2³ permutations with the corresponding means.

    Arrangements                                   Mean difference
    (25.7, 18.6)  (36.0, 17.4)  (27.6, 2.6)             16.90
    (25.7, 18.6)  (36.0, 17.4)  (2.6, 27.6)               .23
    (25.7, 18.6)  (17.4, 36.0)  (27.6, 2.6)              4.50
    (25.7, 18.6)  (17.4, 36.0)  (2.6, 27.6)            −12.17
    (18.6, 25.7)  (36.0, 17.4)  (27.6, 2.6)             12.17
    (18.6, 25.7)  (36.0, 17.4)  (2.6, 27.6)             −4.50
    (18.6, 25.7)  (17.4, 36.0)  (27.6, 2.6)              −.23
    (18.6, 25.7)  (17.4, 36.0)  (2.6, 27.6)            −16.90

Because the mean difference for the original sample is the highest value of the eight, the one-tailed P-value is 1/8 = .125, and the two-tailed P-value is 2(1/8) = .25. ■

Let's now apply the permutation test to the paired data for the 15 girls of Example 10.21. In principle it is no harder to deal with the 2^n = 2^15 = 32,768 arrangements when all 15 pairs are included, but this exact approach is generally approximated using a random sample. We used Stata to draw an additional 4999 samples. Of the 4999, none yielded a mean difference as large as the value of 16.66 obtained for the original sample of 15 differences. Therefore, the one-tailed P-value is 1/5000 = .0002, and the two-tailed P-value is 2(.0002) = .0004. Rejection of the null hypothesis at the 5% level was to be expected, given that none of the confidence intervals in Example 10.21 included 0. It is interesting to compare the permutation test result with the t test of Section 10.3.
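The eight-arrangement sign-flip enumeration above can be reproduced exactly in standard-library Python: reversing the order within a pair simply flips the sign of its difference.

```python
from itertools import product
from statistics import mean

# The first three pairs from Example 10.21
pairs = [(25.7, 18.6), (36.0, 17.4), (27.6, 2.6)]
observed = mean(a - b for a, b in pairs)        # 16.90

# 2^3 = 8 sign patterns: each pair's difference keeps or flips its sign
perm_means = [mean(s * (a - b) for s, (a, b) in zip(signs, pairs))
              for signs in product([1, -1], repeat=3)]

# One-tailed P-value: fraction of arrangements at least as extreme
p_one = sum(m >= observed for m in perm_means) / len(perm_means)
print(len(perm_means), p_one, 2 * p_one)        # 8 0.125 0.25
```

For the full 15 pairs, replacing `repeat=3` by `repeat=15` enumerates all 32,768 arrangements, or a few thousand random sign patterns give the approximate version the text computes with Stata.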
For testing the null hypothesis of 0 population mean difference, the value of ¢ is d-0 _ 16.66 = 4183 sp/V15— 15.425//15 The two-tailed P-value for this is .0009, not very different from the result of the permutation test. a Exercises | Section 10.6 (69-84) 69. A student project by Heather Kral studied L: 2.00, 2.25, 2.60, 2.90, 3.00, 3.00, 3.00, 3.00, students on “lifestyle floors” of a dormitory in 3.00, 3.20, 3.20, 3.25, 3.30, 3.30, 3.32, 3.50, comparison to students on other floors. On a life- 3.50, 3.60, 3.60, 3.70, 3.75, 3.75, 3.79, 3.80, style floor the students share a common major, 3.80, 3.90, 4.00, 4.00, 4.00, 4.00. and there are a faculty coordinator and resident N: 1.20, 2.00, 2.29, 2.45, 2.50, 2.50, 2.50, 2.50, assistant from that department. Here are the grade 2.65, 2.70, 2.75, 2.75, 2.79, 2.80, 2.80, 2.80, point averages of 30 students on lifestyle floors 2.86, 2.90, 3.00, 3.07, 3.10, 3.25, 3.50, 3.54, (L) and 30 students on other floors (N): 3.56, 3.60, 3.70, 3.75, 3.80, 4.00. --- Trang 555 --- 542 = cuarrer 10 Inferences Based on Two Samples Notice that the lifestyle grade point averages c. Use the standard deviation of the bootstrap have a large number of repeats and the distribu- distribution along with the mean and ¢ critical tion is skewed, so there is some question about value from (a) to get a 95% confidence inter- normality. val for the difference of means. a. Obtain a 95% confidence interval for the dif- d. Use the bootstrap sample and the percentile ference of population means using the method method to obtain a 95% confidence interval based on the theorem of Section 10.2. for the difference of means. b. Obtain a bootstrap sample of 999 differences e. Compare your three confidence intervals. If of means. Check the bootstrap distribution for you used a standard normal critical value in normality using a normal probability plot. place of the f critical value in (c), why would c. 
Use the standard deviation of the bootstrap that make this interval more like the one in distribution along with the mean and f critical (d)? Why should the three intervals be fairly value from (a) to get a 95% confidence inter- similar for this data set? val for the difference of means. f. Interpret your results. Is there a substantial d. Use the bootstrap sample and the percentile difference between the two locations? Com- method to obtain a 95% confidence interval pare the difference with what you thought it for the difference of means. would be. If you were a major league pitcher, e. Compare your three confidence intervals. If would you want to be traded to the Rockies? they are very similar, why do you think thisis 74, For the data of Exercise 70 we want to compare sieease? . population medians for the runs in Denver versus f. Interpret your results. Is there a substantial dif- f ference between lifestyle and other floors? Why tis naps in: Phoenix: a do you think the difference is as big as it is? gest lamn a boots pi ample-ob 29) differences of medians. Check the bootstrap distribution 70. In this application from major league baseball, for normality using a normal probability the populations represent an abstraction of what plot. the players can do, so the populations will vary b. Use the standard deviation of the bootstrap from year to year. The Colorado Rockies and the distribution along with the difference of the Arizona Diamondbacks played nine games in medians in the original sample and the f criti- Phoenix and ten games in Denver in 2001. The cal value from Exercise 70(a) to get a 95% thinner air in Denver causes curve balls to curve confidence interval for the difference of less and it allows fly balls to travel farther. Does population medians. this mean that more runs are scored in Denver? c. 
Use the bootstrap sample and the percentile The numbers of runs scored by the two teams in method to obtain a 95% confidence interval the nine Phoenix games (P) and ten Denver for the difference of population medians. games (D) are d. Compare the two confidence intervals. e. How do the results for the median compare P: 5.09 15.88 3 847 11.65 6.48 11.65 7.41 9.53 with the results for the mean? In terms of D: 10 18 15.56 19 81 1413.76 10 20.12 precision (measured by the width of the con- 10.59 fidence interval) which gives the best results? ‘The fractions occur because the numbers have 72. For the data of Exercise 69 now consider testing been adjusted for nine innings (54 outs). For the hypothesis of equal population variances. example, in the third Denver game the Rockies a. Carry out a 2-tailed test using the method of won 10 to 7 on a home run with two out in the Section 10.5, Recall that this method requires bottom of the tenth inning, so there were 59 outs the data to be normal, and the method is sensi- instead of 54, and the number of runs is adjusted tive to departures from normality. Check the to (54/59)(17) = 15.56. We want to compare the data for normality to see if the F test is justi- average runs in Denver with the average runs in fied. Phoenix. b. Carry out a 2-tailed permutation test for the a. Find a 95% confidence interval for the differ- hypothesis of equal population variances (or ence of population means using the method standard deviations). Why does it not matter given in the theorem of Section 10.2. whether you use variances or standard devia- b. Obtain a bootstrap sample of 999 differences tions? of means. Check the bootstrap distribution for ¢. Compare the two results and summarize your normality using a normal probability plot. conclusions. --- Trang 556 --- 10.6 Comparisons Using the Bootstrap and Permutation Methods 543 73. For the data of Exercise 69 we want a 95% this is the case? 
10.6 Comparisons Using the Bootstrap and Permutation Methods 543

73. For the data of Exercise 69 we want a 95% confidence interval for the ratio of population standard deviations.
a. Use the method of Section 10.5. Recall that this method requires the data to be normal, and the method is sensitive to departures from normality. Check the data for normality to see if the F distribution can be used for the ratio of sample variances.
b. With a bootstrap sample of size 999, use the percentile method to obtain a 95% confidence interval for the ratio of standard deviations.
c. Compare the two results and discuss the relationship of the results to those of Exercise 72.

74. Can the right diet help us cope with diseases associated with aging such as Alzheimer's disease? A study ("Reversals of Age-Related Declines in Neuronal Signal Transduction, Cognitive, and Motor Behavioral Deficits with Blueberry, Spinach, or Strawberry Dietary Supplementation," J. Neurosci., 1999: 8114–8121) investigated the effects of fruit and vegetable supplements in the diet of rats. The rats were 19 months old, which is aged by rat standards, and they were randomly assigned to four diets, of which we will consider just the blueberry diet and the control diet here. After 6 weeks on their diets, the rats were given a number of tests. We give the data for just one of them, which measured how long in seconds a rat could walk on a rod. Here are the times for the ten control rats (C) and the blueberry rats (B):

C: ASOT 7D TA S60 Roh Gah A BR 320 95
B: Siz 988 1877 1503 GST 791 788 En0 eT 828

The objective is to obtain a 95% confidence interval for the difference of population means.
a. Determine a 95% confidence interval for the difference of population means using the method based on the theorem of Section 10.2.
b. Obtain a bootstrap sample of 999 differences of means. Check the bootstrap distribution for normality using a normal probability plot.
c. Use the standard deviation of the bootstrap distribution along with the mean and t critical value from (a) to get a 95% confidence interval for the difference of means.
d. Use the bootstrap sample and the percentile method to obtain a 95% confidence interval for the difference of means.
e. Compare your three confidence intervals. If they are very similar, why do you think this is the case? If you had used a critical value from the normal table rather than the t table, would the result of (c) agree better with the result of (d)? Why?
f. Interpret your results. Do the blueberries make a substantial difference?

75. For the data of Exercise 74, we now want to test the hypothesis of equal population means.
a. Carry out a 2-tailed test using the method given in the theorem of Section 10.2. Although this test requires normal data, it will still work pretty well for moderately nonnormal data. Nevertheless, you should check the data for normality to see if the test is justified.
b. Carry out a 2-tailed permutation test for the hypothesis of equal population means.
c. Compare the results of (a) and (b). Would you expect them to be similar for the data of this problem? Discuss their relationship to the results of Exercise 74. Summarize your conclusions about the effectiveness of blueberries.

76. Researchers at the University of Alaska have been trying to find inexpensive feed sources for Alaska reindeer growers ("Effects of Two Barley-Based Diets on Body Mass and Intake Rates of Captive Reindeer During Winter," Poster Presentation: School of Agriculture and Land Resources Management, University of Alaska Fairbanks, 2002). They are focusing on Alaska-grown barley because commercially available feed supplies are too expensive for farmers. Typically, reindeer lose weight in the fall and winter, and the researchers are trying to find a feed to minimize this loss. Thirteen pregnant reindeer were randomly divided into two groups to be fed on two different varieties of barley, thual and finaska. Here are the weight gains between October 1 and December 15 for the seven that were fed thual barley (T) and the six that were fed finaska barley (F).

T: SSB SIS SS ESS RAB 3.35 EMT
F: Rofl G8 4 8 ISS OS Orin fy AISI

The weight gains are all negative, indicating that all of the animals lost weight. The thual barley is …, and the … rates for the two varieties of barley were very nearly the same, so the experimenters expected less weight loss for the thual variety.
a. Determine a 95% confidence interval for the difference of population means using the method given in the theorem of Section 10.2.
b. Obtain a bootstrap sample of 999 differences of means. Check the bootstrap distribution for normality using a normal probability plot.
c. Use the standard deviation of the bootstrap distribution along with the mean and t critical value from (a) to get a 95% confidence interval for the difference of means.
d. Use the bootstrap sample and the percentile method to obtain a 95% confidence interval for the difference of means.
e. Compare your three confidence intervals. If they are very similar, why do you think this is the case?
f. Interpret your results. Is there a substantial difference? Is it in the direction anticipated by the experimenters?

544 CHAPTER 10 Inferences Based on Two Samples
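Several of the surrounding exercises (75, 77, and 79) ask for two-tailed permutation tests. A minimal sketch of the equal-means version follows; the samples are hypothetical, the test uses 9999 random permutations rather than full enumeration, and the "+1" convention counts the observed arrangement itself.

```python
import random
import statistics

def perm_test_means(x, y, n_perm=9999, seed=1):
    """Two-tailed permutation test of equal population means.

    Pool the samples, repeatedly shuffle, split into groups of the
    original sizes, and count how often the permuted |difference of
    means| is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(x) - statistics.mean(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = abs(statistics.mean(pooled[:len(x)]) -
                statistics.mean(pooled[len(x):]))
        if d >= observed:
            count += 1
    # +1 in numerator and denominator includes the observed split
    return (count + 1) / (n_perm + 1)

# Hypothetical samples with clearly separated means (illustration only)
a = [12.0, 13.1, 12.7, 14.0, 13.5, 12.9]
b = [9.1, 8.7, 10.0, 9.5, 8.9, 9.8]
p = perm_test_means(a, b)
```

For an equal-variances permutation test, the same loop applies with the statistic replaced by the ratio (or absolute difference) of the two group variances.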
77. For the data of Exercise 76 we want to test the hypothesis of equal population variances.
a. Carry out a 2-tailed test using the method of Section 10.5. Recall that this method requires the data to be normal, and the method is sensitive to departures from normality. Check the data for normality to see if the F test is justified.
b. Carry out a 2-tailed permutation test for the hypothesis of equal population variances (or standard deviations).

78. …
b. Obtain a bootstrap sample of 999 differences of means. Check the bootstrap distribution for normality using a normal probability plot.
c. Use the standard deviation of the bootstrap distribution along with the mean and t critical value from (a) to get a 95% confidence interval for the difference of means.
d. Use the bootstrap sample and the percentile method to obtain a 95% confidence interval for the difference of means.
e. Compare your three confidence intervals. If they are very similar, why do you think this is the case? In the light of your results for (c) and (d), does the z method of (a) seem to work, regardless of normality? Explain.
f. Are your results consistent with the results of Example 10.4? Explain.

79. For the data of Example 10.4 we want to try a permutation test.
a. Carry out a 2-tailed permutation test for the hypothesis of equal population means.
b. Compare the results for (a) and Example 10.4. Why should you have expected (a) and Example 10.4 to give similar results?

Supplementary Exercises

… is at least min(m − 1, n − 1). This is why some authors suggest using min(m − 1, n − 1) as df in place of the formula given in the text. What impact does this have on the CI and test procedure?

88. Compression strength data appear in the article "Compression of Single-Wall Corrugated Shipping Containers Using Fixed and Floating Test Platens" (J. Test. Eval., 1992: 318–320). The authors stated that "the difference between the compression strength using fixed and floating platen method was found to be small compared to normal variation in compression strength between identical boxes." Do you agree?

Method    Sample Size  Sample Mean  Sample SD
Fixed         10           807          …
Floating      10           757          …

89. The authors of the article "Dynamics of Canopy Structure and Light Interception in Pinus elliottii, …

90. Is the response rate for questionnaires affected by including some sort of incentive to respond along with the questionnaire? In one experiment, 110 questionnaires with no incentive resulted in 75 being returned, whereas 98 questionnaires that included a chance to win a lottery yielded 66 returned. Does this data suggest that including an incentive increases the likelihood of a response? State and test the relevant hypotheses at significance level …

91. The article "Quantitative MRI and Electrophysiology of Preoperative Carpal Tunnel Syndrome in a Female Population" (Ergonomics, 1997: 642–649) reported that (−473.3, 1691.9) was a large-sample 95% confidence interval for the difference between true average thenar muscle volume (mm³) for sufferers of carpal tunnel syndrome and true average volume for nonsufferers. Calculate and interpret a 90% confidence interval for this difference.

Supplementary Exercises 547

92. The following summary data on bending strength (lb-in/in) of joints is taken from the article "Bending Strength of Corner Joints Constructed with Injection Molded Splines" (Forest Products J., April 1997: 89–92). Assume normal distributions.

Type                  Sample Size  Sample Mean  Sample SD
Without side coating      10          80.95        9.59
With side coating         10          63.23        5.96

a. Calculate a 95% lower confidence bound for true average strength of joints with a side coating.
b. Calculate a 95% lower prediction bound for the strength of a single joint with a side coating.
c. Calculate a 95% confidence interval for the difference between true average strengths for the two types of joints.

93. An experiment was carried out to compare various properties of cotton/polyester spun yarn finished with softener only and yarn finished with softener plus 5% DP-resin ("Properties of a Fabric Made with Tandem Spun Yarns," Textile Res. J., 1996: 607–611). One particularly important characteristic of fabric is its durability, that is, its ability to resist wear. For a sample of 40 softener-only specimens, the sample mean stoll-flex abrasion resistance (cycles) in the filling direction of the yarn was 3975.0, with a sample standard deviation of 245.1. Another sample of 40 softener-plus specimens gave a sample mean and sample standard deviation of 2795.0 and 293.7, respectively. Calculate a confidence interval with confidence level 99% for the difference between true average abrasion resistances for the two types of fabrics. Does your interval provide convincing evidence that true average resistances differ for the two types of fabrics? Why or why not?

94. The derailment of a freight train due to the catastrophic failure of a traction motor armature bearing provided the impetus for a study reported in the article "Locomotive Traction Motor Armature Bearing Life Study" (Lubricat. Engrg., Aug. 1997: 12–19). A sample of 17 high-mileage traction motors was selected, and the amount of cone penetration (mm/10) was determined both for the pinion bearing and for the commutator armature bearing, resulting in the following data:

Motor        1    2    3    4    5    6
Commutator  211  273  305  258  270  209
Pinion      226  278  259  244  273  236

Motor        7    8    9   10   11   12
Commutator  223  288  296  233  262  291
Pinion      290  287  315  242  288  242

Motor       13   14   15   16   17
Commutator 278  275  210  272  264
Pinion     278  208  281  274  268

Calculate an estimate of the population mean difference between penetration for the commutator armature bearing and penetration for the pinion bearing, and do so in a way that conveys information about the reliability and precision of the estimate. [Note: A normal probability plot validates the necessary normality assumption.] Would you say that the population mean difference has been precisely estimated? Does it look as though population mean penetration differs for the two types of bearings? Explain.

95. The article "Two Parameters Limiting the Sensitivity of Laboratory Tests of Condoms as Viral Barriers" (J. Test. Eval., 1996: 279–286) reported that, in brand A condoms, among 16 tears produced by a puncturing needle, the sample mean tear length was 74.0 μm, whereas for the 14 brand B tears, the sample mean length was 61.0 μm (determined using light microscopy and scanning electron micrographs). Suppose the sample standard deviations are 14.8 and 12.5, respectively (consistent with the sample ranges given in the article). The authors commented that the thicker brand B condom displayed a smaller mean tear length than the thinner brand A condom. Is this difference in fact statistically significant? State the appropriate hypotheses and test at α = .05.
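Tests like the one in Exercise 95 are computed entirely from summary statistics. A sketch of the two-sample (unpooled) t statistic and its approximate degrees of freedom, using the values stated in Exercise 95; the P-value comparison would still come from a t table.

```python
import math

def two_sample_t(m, xbar, s1, n, ybar, s2):
    """Two-sample t statistic and approximate df from summary statistics."""
    se1, se2 = s1**2 / m, s2**2 / n
    t = (xbar - ybar) / math.sqrt(se1 + se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se1 + se2)**2 / (se1**2 / (m - 1) + se2**2 / (n - 1))
    return t, df

# Summary statistics from Exercise 95 (brand A vs. brand B tear lengths)
t, df = two_sample_t(16, 74.0, 14.8, 14, 61.0, 12.5)
```

Here t comes out near 2.6 on roughly 28 df, so the observed difference of 13.0 μm is several standard errors from zero.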
96. Information about hand posture and forces generated by the fingers during manipulation of various daily objects is needed for designing high-tech hand prosthetic devices. The article "Grip Posture and Forces During Holding Cylindrical Objects with Circular Grips" (Ergonomics, 1996: 1163–1176) reported that for a sample of 11 females, the sample mean four-finger pinch strength (N) was 98.1 and the sample standard deviation was 14.2. For a sample of 15 males, the sample mean and sample standard deviation were 129.2 and 39.1, respectively.
a. A test carried out to see whether true average strengths for the two genders were different resulted in t = 2.51 and P-value = .019. Does the appropriate test procedure described in this chapter yield this value of t and the stated P-value?
b. Is there substantial evidence for concluding that true average strength for males exceeds that for females by more than 25 N? State and test the relevant hypotheses.

97. The article "Pine Needles as Sensors of Atmospheric Pollution" (Environ. Monitor., 1982: 273–286) reported on the use of neutron-activity analysis to determine pollutant concentration in pine needles. According to the article's authors, "These observations strongly indicated that for those elements which are determined well by the analytical procedures, the distribution of concentration is lognormal. Accordingly, in tests of significance the logarithms of concentrations will be used." The given data refers to bromine concentration in needles taken from a site near an oil-fired steam plant and from a relatively clean site. The summary values are means and standard deviations of the log-transformed observations.

Site  Sample Size  Mean Log Concentration  SD of Log Concentration
1          8               1.80                    .42
2          9                …                      .46

Let μ₁* be the true average log concentration at the first site, and define μ₂* analogously for the second site.
a. Use the pooled t test (based on assuming normality and equal standard deviations) to decide at significance level .05 whether the two concentration distribution means are equal.
b. If σ₁* and σ₂*, the standard deviations of the two log concentration distributions, are not equal, would μ₁ and μ₂, the means of the concentration distributions, be the same if μ₁* = μ₂*? Explain your reasoning.

98. Torsion during hip external rotation (ER) and extension may be responsible for certain kinds of injuries in golfers and other athletes. The article "Hip Rotational Velocities During the Full Golf Swing" (J. Sport Sci. Med., 2009: 296–299) reported on a study in which peak ER velocity and peak IR (internal rotation) velocity (both in deg/s) were determined for a sample of 15 female collegiate golfers during their swings. The following data was supplied by the article's authors.

Golfer     ER       IR     diff   z perc
1       −130.6    −98.9   −31.7   −1.28
2       −125.1   −115.9    −9.2   −0.97
3        −51.7   −161.6   109.9    0.34
…          …        …       …       …
14      −184.4   −140.6   −43.8   −1.83
15         …        …       …       …

a. Is it plausible that the differences came from a normally distributed population?
b. The article reported that Mean(SD) = −145.3(68.0) for ER velocity and −227.8(96.6) for IR velocity. Based just on this information, could a test of hypotheses about the difference between true average IR velocity and true average ER velocity be carried out? Explain.
c. Do an appropriate hypothesis test about the difference between true average IR velocity and true average ER velocity and interpret the result.

99. The accompanying summary data on the ratio of strength to cross-sectional area for knee extensors is taken from the article "Knee Extensor and Knee Flexor Strength: Cross-Sectional Area Ratios in Young and Elderly Men" (J. Gerontol., 1992: M204–M210).

Group        Sample Size  Sample Mean  Standard Error
Young men        13           7.47          .22
Elderly men      12           6.71          .28

Does this data suggest that the true average ratio for young men exceeds that for elderly men? Carry out a test of appropriate hypotheses using α = .05. Be sure to state any assumptions necessary for your analysis.

100. The accompanying data on response time appeared in the article "The Extinguishment of Fires Using Low-Flow Water Hose Streams—Part II" (Fire Techn., 1991: 291–320). The samples are independent, not paired.

Good visibility: .43 1.17 .37 .47 .68 .58 .50 2.75
Poor visibility: 1.47 .80 1.58 1.53 4.33 4.23 3.25 3.22

The authors analyzed the data with the pooled t test. Does the use of this test appear justified? [Hint: Check for normality. The normal scores for n = 8 are −1.53, −.89, −.49, −.15, .15, .49, .89, and 1.53.]

101. The accompanying data on the alcohol content of wine is representative of that reported in a study in which wines from the years 1999 and 2000 were randomly selected and the actual content was determined by laboratory analysis (London Times, Aug. 5, 2001).

Wine     1     2     3     4     5     6
Actual  14.2  14.5  14.0  14.9  13.6  12.6
Label   14.0  14.0  13.5  15.0  13.0  12.5

The two-sample t test gives a test statistic value of .62 and a two-tailed P-value of .55. Does this convince you that there is no significant difference between true average actual alcohol content and true average content stated on the label? Explain.

102. The article "The Accuracy of Stated Energy Contents of Reduced-Energy, Commercially Prepared Foods" (J. Am. Diet. Assoc., 2010: 116–123) presented the accompanying data on vendor-stated gross energy and measured value (both in kcal) for 10 different supermarket convenience meals:

Meal       1    2    3    4    5    6    7    8    9   10
Stated    180  220  190  230  200  370  250  240   80  180
Measured  212  319  231  306  211  431  288  265  145  228

Obtain a 95% confidence interval for the difference of population means. By roughly what percentage are the actual calories higher than the stated calories? Note that the article calls this a convenience sample and suggests that therefore it should have limited value for inference. However, even if the ten meals were a random sample from their local store, there could still be a problem in drawing conclusions about a purchase at your store.

103. How does energy intake compare to energy expenditure? One aspect of this issue was considered in the article "Measurement of Total Energy Expenditure by the Doubly Labelled Water Method in Professional Soccer Players" (J. Sports Sci., 2002: 391–397), which contained the accompanying data (MJ/day).

Player        1     2     3     4     5     6     7
Expenditure  14.4  12.1  14.3  14.2  15.2  15.5  17.8
Intake       14.6   9.2  11.8  11.6  12.7  15.0  16.3

Test to see whether there is a significant difference between intake and expenditure. Does the conclusion depend on whether a significance level of .05, .01, or .001 is used?

104. An experimenter wishes to obtain a CI for the difference between true average breaking strength for cables manufactured by company I and by company II. Suppose breaking strength is normally distributed for both types of cable with σ₁ = 30 psi and σ₂ = 20 psi.
a. If costs dictate that the sample size for the type I cable should be three times the sample size for the type II cable, how many observations are required if the 99% CI is to be no wider than 20 psi?
b. Suppose a total of 400 observations is to be made. How many of the observations should be made on type I cable samples if the width of the resulting interval is to be a minimum?

105. An experiment to determine the effects of temperature on the survival of insect eggs was described in the article "Development Rates and a Temperature-Dependent Model of Pales Weevil" (Environ. Entomol., 1987: 956–962). At 11°C, 73 of 91 eggs survived to the next stage of development. At 30°C, 102 of 110 eggs survived. Do the results of this experiment suggest that the survival rate (proportion surviving) differs for the two temperatures? Calculate the P-value and use it to test the appropriate hypotheses.
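The paired analysis called for in Exercise 103 works with the within-player differences. A sketch using the intake and expenditure values from the table above; judging the result at the .05, .01, and .001 levels would still use a t table with n − 1 = 6 df.

```python
import math
import statistics

# Energy values (MJ/day) for the seven players in Exercise 103
expenditure = [14.4, 12.1, 14.3, 14.2, 15.2, 15.5, 17.8]
intake = [14.6, 9.2, 11.8, 11.6, 12.7, 15.0, 16.3]

# Paired t: reduce to the within-player differences
d = [e - i for e, i in zip(expenditure, intake)]
n = len(d)
dbar = statistics.mean(d)        # mean difference
sd = statistics.stdev(d)         # sample SD of the differences
t = dbar / (sd / math.sqrt(n))   # t statistic with n - 1 = 6 df
```

The mean difference is about 1.76 MJ/day and t comes out near 3.9, which is why the choice of significance level matters here: with 6 df the result is significant at .05 and .01 but not at .001.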
106. The insulin-binding capacity (pmol/mg protein) was measured for four different groups of rats: (1) nondiabetic, (2) untreated diabetic, (3) diabetic treated with a low dose of insulin, (4) diabetic treated with a high dose of insulin. The accompanying table gives sample sizes and sample standard deviations. Denote the sample size for the ith treatment by nᵢ and the sample variance by Sᵢ² (i = 1, 2, 3, 4). Assuming that the true variance for each treatment is σ², construct a pooled estimator of σ² that is unbiased, and verify using rules of expected value that it is indeed unbiased. What is your estimate for the following actual data? [Hint: Modify the pooled estimator Sₚ² from Section 10.2.]

Treatment     1    2    3    4
Sample Size  16   18    8   12
Sample SD    64   81   51   35

107. Suppose a level .05 test of H₀: μ₁ − μ₂ = 0 versus Hₐ: μ₁ − μ₂ > 0 is to be performed, assuming σ₁ = σ₂ = 10 and normality of both distributions, using equal sample sizes (m = n). Evaluate the probability of a type II error when μ₁ − μ₂ = 1 and n = 25, 100, 2500, and 10,000. Can you think of real problems in which the difference μ₁ − μ₂ = 1 has little practical significance? Would sample sizes of n = 10,000 be desirable in such problems?

108. The following data refers to airborne bacteria count (number of colonies/ft³) both for m = 8 carpeted hospital rooms and for n = 8 uncarpeted rooms ("Microbial Air Sampling in a Carpeted Hospital," J. Environ. Health, 1968: 405).

Carpeted:   11.8  8.2  7.1  13.0  10.8  10.1  14.6  14.0
Uncarpeted: 12.1  8.3  3.8   7.2  12.0  11.1  10.1  13.7

Does there appear to be a difference in true average bacteria count between carpeted and uncarpeted rooms? Suppose you later learned that all carpeted rooms were in a veterans' hospital, whereas all uncarpeted rooms were in a children's hospital. Would you be able to assess the effect of carpeting? Comment.

109. Researchers sent 5000 resumes in response to job ads that appeared in the Boston Globe and Chicago Tribune. The resumes were identical except that 2500 of them had "white-sounding" first names, such as Brett and Emily, whereas the other 2500 had "black-sounding" names such as Tamika and Rasheed. The resumes of the first type elicited 250 responses and the resumes of the second type only 167 responses (these numbers are very consistent with information that appeared in a January 15, 2003, report by the Associated Press). Does this data strongly suggest that a resume with a "black" name is less likely to result in a response than is a resume with a "white" name?

110. McNemar's test, developed in Exercise 55, can also be used when individuals are paired (matched) to yield n pairs and then one member of each pair is given treatment 1 and the other is given treatment 2. Then X₁ is the number of pairs in which both treatments were successful, and similarly for X₂, X₃, and X₄. The test statistic for testing equal efficacy of the two treatments is given by (X₂ − X₃)/√(X₂ + X₃), which has approximately a standard normal distribution when H₀ is true. Use this to test whether the drug ergotamine is effective in the treatment of migraine headaches.

            Ergotamine
             S    F
Placebo  S  44   34
         F   4   30

The data is fictitious, but the conclusion agrees with that in the article "Controlled Clinical Trial of Ergotamine Tartrate" (British Med. J., 1970: 325–327).

111. Let X₁, …, Xₘ be a random sample from a Poisson distribution with parameter λ₁, and let Y₁, …, Yₙ be a random sample from another Poisson distribution with parameter λ₂. We wish to test H₀: λ₁ − λ₂ = 0 against one of the three standard alternatives. Since μ = λ for a Poisson distribution, when m and n are large the large-sample z test of Section 10.1 can be used. However, the fact that V(X̄) = λ/n suggests that a different denominator should be used in standardizing X̄ − Ȳ. Develop a large-sample test procedure appropriate to this problem, and then apply it to the following data to test whether the plant densities for a particular species are equal in two different regions (where each observation is the number of plants found in a randomly located square sampling quadrat having area 1 m², so for region 1 there were 40 quadrats in which one plant was observed, etc.):

Frequency   0   1   2   3   4   5   6   7
Region 1   28  40  28  17   8   2   1   1   (m = 125)
Region 2   14  25  30  18  49   2   1   1   (n = 140)

112. Referring to Exercise 111, develop a large-sample confidence interval formula for λ₁ − λ₂. Calculate the interval for the data given there using a confidence level of 95%.
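The test statistic given in Exercise 110 is easy to compute once the two discordant counts are identified. A minimal sketch; the counts passed in below are hypothetical, not the ergotamine table, and the cutoff for a one- or two-tailed conclusion would come from the standard normal table.

```python
import math

def mcnemar_z(x2, x3):
    """Statistic from Exercise 110: Z = (X2 - X3) / sqrt(X2 + X3).

    x2 and x3 are the discordant counts (treatment 1 success with
    treatment 2 failure, and the reverse).  Under H0 (equal efficacy)
    Z is approximately standard normal.
    """
    return (x2 - x3) / math.sqrt(x2 + x3)

# Hypothetical discordant counts, for illustration only
z = mcnemar_z(30, 12)
```

Note that the concordant counts X₁ and X₄ never enter the statistic: pairs where both treatments agree carry no information about which treatment is better.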
113. Let R₁ be a rejection region with significance level α for testing H₀₁: θ ∈ Ω₁ versus Hₐ₁: θ ∉ Ω₁, and let R₂ be a level α rejection region for testing H₀₂: θ ∈ Ω₂ versus Hₐ₂: θ ∉ Ω₂, where Ω₁ and Ω₂ are two disjoint sets of possible values of θ. Now consider testing H₀: θ ∈ Ω₁ ∪ Ω₂ versus the alternative Hₐ: θ ∉ Ω₁ ∪ Ω₂. The proposed rejection region for this latter test is R₁ ∩ R₂. That is, H₀ is rejected only if both H₀₁ and H₀₂ can be rejected. This procedure is called a union–intersection test (UIT).
a. Show that the UIT is a level α test.
b. As an example, let μ_T denote the mean value of a particular variable for a generic (test) drug, and μ_R denote the mean value of this variable for a brand-name (reference) drug. In bioequivalence testing, the relevant hypotheses are H₀: μ_T/μ_R ≤ δ_L or μ_T/μ_R ≥ δ_U (not bioequivalent) versus Hₐ: δ_L < μ_T/μ_R < δ_U (bioequivalent). The limits δ_L and δ_U are standards set by regulatory agencies; for certain purposes the FDA uses .80 and 1.25 = 1/.80, respectively. By taking logarithms and letting η = ln(μ), τ_L = ln(δ_L), τ_U = ln(δ_U), the hypotheses become H₀: either η_T − η_R ≤ τ_L or η_T − η_R ≥ τ_U versus Hₐ: τ_L < η_T − η_R < τ_U. With this setup, a type I error involves saying the drugs are bioequivalent when they are not.

Let D be an estimator of η_T − η_R with standard error S_D such that the standardized variable T = [D − (η_T − η_R)]/S_D has a t distribution with ν df. The standard test procedure is referred to as TOST for "two one-sided tests," and is based on the two test statistics T_U = (D − τ_U)/S_D and T_L = (D − τ_L)/S_D. If ν = 20, state the appropriate conclusion in each of the following cases: (1) t_U = −2.0, t_L = 1.5; (2) t_U = −1.5, t_L = 2.0; (3) t_U = −2.0, t_L = 2.0.

Bibliography

See the bibliography at the end of Chapter 8.

CHAPTER ELEVEN

The Analysis of Variance

Introduction

In studying methods for the analysis of quantitative data, we first focused on problems involving a single sample of numbers and then turned to a comparative analysis of two different samples. Now we are ready for the analysis of several samples.
The analysis of variance, or more briefly ANOVA, refers broadly to a collection of statistical procedures for the analysis of quantitative responses. The simplest ANOVA problem is referred to variously as a single-factor, single-classification, or one-way ANOVA and involves the analysis of data sampled from two or more numerical populations (distributions). The characteristic that labels the populations is called the factor under study, and the populations are referred to as the levels of the factor. Examples of such situations include the following:

1. An experiment to study the effects of five different brands of gasoline on automobile engine operating efficiency (mpg)
2. An experiment to study the effects of four different sugar solutions (glucose, sucrose, fructose, and a mixture of the three) on bacterial growth
3. An experiment to investigate whether hardwood concentration in pulp (%) has an effect on tensile strength of bags made from the pulp
4. An experiment to decide whether the color density of fabric specimens depends on the amount of dye used

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_11, © Springer Science+Business Media, LLC 2012

In (1) the factor of interest is gasoline brand, and there are five different levels of the factor. In (2) the factor is sugar, with four levels (or five, if a control solution containing no sugar is used). In both (1) and (2), the factor is qualitative in nature, and the levels correspond to possible categories of the factor. In (3) and (4), the factors are concentration of hardwood and amount of dye, respectively; both these factors are quantitative in nature, so the levels identify different settings of the factor. When the factor of interest is quantitative, statistical techniques from regression analysis (discussed in Chapter 12) can also be used to analyze the data.

In this chapter we first introduce single-factor ANOVA. Section 11.1 presents the F test for testing the null hypothesis that the population means are identical. Section 11.2 considers further analysis of the data when H₀ has been rejected. Section 11.3 covers some other aspects of single-factor ANOVA. Many experimental situations involve studying the simultaneous impact of more than one factor. Various aspects of two-factor ANOVA are considered in the last two sections of the chapter.

11.1 Single-Factor ANOVA

Single-factor ANOVA focuses on a comparison of two or more populations. Let

I = the number of populations or treatments being compared
μ₁ = the mean of population 1 (or the true average response when treatment 1 is applied)
⋮
μ_I = the mean of population I (or the true average response when treatment I is applied)

Then the hypotheses of interest are

H₀: μ₁ = μ₂ = ⋯ = μ_I versus Hₐ: at least two of the μᵢ's are different

If I = 4, H₀ is true only if all four μᵢ's are identical. Hₐ would be true, for example, if μ₁ = μ₂ ≠ μ₃ = μ₄, if μ₁ = μ₂ = μ₃ ≠ μ₄, or if all four μᵢ's differ from one another. A test of these hypotheses requires that we have available a random sample from each population or treatment.

Example 11.1 The article "Compression of Single-Wall Corrugated Shipping Containers Using Fixed and Floating Test Platens" (J. Test. Eval., 1992: 318–320) describes an experiment in which several different types of boxes were compared with respect to compression strength (lb). Table 11.1 presents the results of a single-factor ANOVA experiment involving I = 4 types of boxes (the sample means and standard deviations are in good agreement with values given in the article).

554 CHAPTER 11 The Analysis of Variance
Table 11.1 The data and summary quantities for Example 11.1

Type of box   Compression strength (lb)                  Sample mean   Sample SD
1             655.5  788.3  734.3  721.4  679.1  699.4      713.00       46.55
2             789.2  772.5  786.9  686.1  732.1  774.8      756.93       40.34
3             737.1  639.0  696.3  671.7  717.2  727.1      698.07       37.20
4             535.1  628.7  542.4  559.0  586.9  520.0      562.02       39.87
                                             Grand mean = 682.50

With $\mu_i$ denoting the true average compression strength for boxes of type $i$ ($i = 1, 2, 3, 4$), the null hypothesis is $H_0\colon \mu_1 = \mu_2 = \mu_3 = \mu_4$. Figure 11.1(a) shows a comparative boxplot for the four samples. There is a substantial amount of overlap among observations on the first three types of boxes, but compression strengths for the fourth type appear considerably smaller than for the other types. This suggests that $H_0$ is not true. The comparative boxplot in Figure 11.1(b) is based on adding 120 to each observation in the fourth sample (giving mean 682.02 and the same standard deviation) and leaving the other observations unaltered. It is no longer obvious whether $H_0$ is true or false. In situations such as this, we need a formal test procedure.

Figure 11.1 Boxplots for Example 11.1: (a) original data; (b) altered data

Notation and Assumptions

In two-sample problems, we used the letters $X$ and $Y$ to designate the observations in the two samples. Because this is cumbersome for three or more samples, it is customary to use a single letter with two subscripts. The first subscript identifies the sample number, corresponding to the population or treatment being sampled, and the second subscript denotes the position of the observation within that sample. Let

$X_{ij}$ = the random variable (rv) denoting the $j$th measurement from the $i$th population
$x_{ij}$ = the observed value of $X_{ij}$ when the experiment is performed

The observed data is usually displayed in a rectangular table, such as Table 11.1. There, samples from the different populations appear in different rows of the table, and $x_{ij}$ is the $j$th number in the $i$th row. For example, $x_{23} = 786.9$ (the third observation from the second population), and $x_{41} = 535.1$. When there is no ambiguity, we will write $x_{ij}$ rather than $x_{i,j}$ (e.g., if there were 15 observations on each of 12 treatments, $x_{112}$ could mean $x_{1,12}$ or $x_{11,2}$). It is assumed that the $X_{ij}$'s within any particular sample are independent (a random sample from the $i$th population or treatment distribution) and that different samples are independent of each other.

In some experiments, different samples contain different numbers of observations. However, the concepts and methods of single-factor ANOVA are most easily developed for the case of equal sample sizes; unequal sample sizes will be considered in Section 11.3. Restricting ourselves for the moment to equal sample sizes, let $J$ denote the number of observations in each sample ($J = 6$ in Example 11.1). The data set consists of $IJ$ observations. The individual sample means will be denoted by $\bar{X}_{1\cdot}, \bar{X}_{2\cdot}, \ldots, \bar{X}_{I\cdot}$. That is,

$$\bar{X}_{i\cdot} = \frac{\sum_{j=1}^{J} X_{ij}}{J} \qquad i = 1, 2, \ldots, I$$

The dot in place of the second subscript signifies that we have added over all values of that subscript while holding the other subscript value fixed, and the horizontal bar indicates division by $J$ to obtain an average. Similarly, the average of all $IJ$ observations, called the grand mean, is

$$\bar{X}_{\cdot\cdot} = \frac{\sum_{i=1}^{I}\sum_{j=1}^{J} X_{ij}}{IJ}$$

For the strength data in Table 11.1, $\bar{x}_{1\cdot} = 713.00$, $\bar{x}_{2\cdot} = 756.93$, $\bar{x}_{3\cdot} = 698.07$, $\bar{x}_{4\cdot} = 562.02$, and $\bar{x}_{\cdot\cdot} = 682.50$. Additionally, let $S_1^2, S_2^2, \ldots, S_I^2$ represent the sample variances:

$$S_i^2 = \frac{\sum_{j=1}^{J}(X_{ij} - \bar{X}_{i\cdot})^2}{J - 1} \qquad i = 1, 2, \ldots, I$$

From Example 11.1, $s_1 = 46.55$, $s_1^2 = 2166.90$, and so on.

ASSUMPTIONS The $I$ population or treatment distributions are all normal with the same variance $\sigma^2$.
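The dot-notation summary quantities just defined are easy to verify numerically. The following is a brief sketch (not part of the original text) using only the Python standard library; the data are those of Table 11.1:

```python
# Verifying the summary quantities of Table 11.1:
# row means x-bar_i., sample SDs s_i, and the grand mean x-bar_..
from statistics import mean, stdev

x = {
    1: [655.5, 788.3, 734.3, 721.4, 679.1, 699.4],  # type 1 boxes
    2: [789.2, 772.5, 786.9, 686.1, 732.1, 774.8],  # type 2
    3: [737.1, 639.0, 696.3, 671.7, 717.2, 727.1],  # type 3
    4: [535.1, 628.7, 542.4, 559.0, 586.9, 520.0],  # type 4
}

xbar_i = {i: mean(xs) for i, xs in x.items()}     # sample means (dot notation)
s_i = {i: stdev(xs) for i, xs in x.items()}       # sample standard deviations
grand = mean(v for xs in x.values() for v in xs)  # grand mean over all IJ obs

print({i: round(m, 2) for i, m in xbar_i.items()})  # 713.00, 756.93, 698.07, 562.02
print(round(grand, 2))                              # 682.50
```

Running this reproduces the means, SDs, and grand mean shown in the table.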
That is, each $X_{ij}$ is normally distributed with

$$E(X_{ij}) = \mu_i \qquad V(X_{ij}) = \sigma^2$$

In previous chapters, a normal probability plot was suggested for checking normality. The individual sample sizes in ANOVA are typically too small for $I$ separate plots to be informative. A single plot can be constructed by subtracting $\bar{x}_{1\cdot}$ from each observation in the first sample, $\bar{x}_{2\cdot}$ from each observation in the second, and so on, and then plotting these $IJ$ deviations against the $z$ percentiles. The deviations are called residuals, so this plot is the normal plot of the residuals. Figure 11.2 gives the plot for the residuals of Example 11.1. The straightness of the pattern gives strong support to the normality assumption.

Figure 11.2 A normal probability plot based on the data of Example 11.1

At the end of the section we discuss Levene's test for the equal variance assumption. For the moment, a rough rule of thumb is that if the largest $s$ is not much more than twice the smallest $s$, it is reasonable to assume equal variances. This is especially true if the sample sizes are equal or close to equal. In Example 11.1, the largest $s$ is only about 1.25 times the smallest.

Sums of Squares and Mean Squares

If $H_0$ is true, the $J$ observations in each sample come from a normal population distribution with the same mean value $\mu$, in which case the sample means $\bar{X}_{1\cdot}, \bar{X}_{2\cdot}, \ldots, \bar{X}_{I\cdot}$ should be reasonably close. The test procedure is based on comparing a measure of differences among these sample means ("between-samples" variation) to a measure of variation calculated from within each sample. These measures involve quantities called sums of squares.

DEFINITION The treatment sum of squares SSTr is given by

$$\mathrm{SSTr} = J\sum_i (\bar{X}_{i\cdot} - \bar{X}_{\cdot\cdot})^2 = J\bigl[(\bar{X}_{1\cdot} - \bar{X}_{\cdot\cdot})^2 + \cdots + (\bar{X}_{I\cdot} - \bar{X}_{\cdot\cdot})^2\bigr]$$

and the error sum of squares SSE is

$$\mathrm{SSE} = \sum_i\sum_j (X_{ij} - \bar{X}_{i\cdot})^2 = \sum_j (X_{1j} - \bar{X}_{1\cdot})^2 + \cdots + \sum_j (X_{Ij} - \bar{X}_{I\cdot})^2 = (J-1)S_1^2 + (J-1)S_2^2 + \cdots + (J-1)S_I^2$$
$$= (J-1)\bigl[S_1^2 + S_2^2 + \cdots + S_I^2\bigr]$$

Now recall a result from Section 6.4: if $X_1, \ldots, X_n$ is a random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$, then the sample mean $\bar{X}$ and the sample variance $S^2$ are independent. Also, $\bar{X}$ is normally distributed, and $(n-1)S^2/\sigma^2$ [i.e., $\sum (X_i - \bar{X})^2/\sigma^2$] has a chi-squared distribution with $n - 1$ df. That is, dividing the sum of squares $\sum (X_i - \bar{X})^2$ by $\sigma^2$ gives a chi-squared random variable. Similar results hold in our ANOVA situation.

THEOREM When the basic assumptions of this section are satisfied, $\mathrm{SSE}/\sigma^2$ has a chi-squared distribution with $I(J-1)$ df (each sample contributes $J - 1$ df, and df's add because the samples are independent). Furthermore, when $H_0$ is true, $\mathrm{SSTr}/\sigma^2$ has a chi-squared distribution with $I - 1$ df [there are $I$ deviations $\bar{X}_{1\cdot} - \bar{X}_{\cdot\cdot}, \ldots, \bar{X}_{I\cdot} - \bar{X}_{\cdot\cdot}$, but 1 df is lost because $\sum_i (\bar{X}_{i\cdot} - \bar{X}_{\cdot\cdot}) = 0$]. Lastly, SSE and SSTr are independent random variables.

If we let $Y_i = \bar{X}_{i\cdot}$, $i = 1, \ldots, I$, then $Y_1, Y_2, \ldots, Y_I$ are independent and normally distributed with the same mean under $H_0$ and with variance $\sigma^2/J$. Thus, by the key result from Section 6.4, $(I-1)S_Y^2/(\sigma^2/J)$ has a chi-squared distribution with $I - 1$ df. Furthermore, $(I-1)S_Y^2/(\sigma^2/J) = J\sum (\bar{X}_{i\cdot} - \bar{X}_{\cdot\cdot})^2/\sigma^2 = \mathrm{SSTr}/\sigma^2$, so $\mathrm{SSTr}/\sigma^2 \sim \chi^2_{I-1}$. Independence of SSTr and SSE follows from the fact that SSTr is based on the individual sample means whereas SSE is based on the sample variances, and $\bar{X}_{i\cdot}$ is independent of $S_i^2$ for each $i$.

The expected value of a chi-squared variable with $\nu$ df is just $\nu$. Thus

$$E\!\left(\frac{\mathrm{SSE}}{\sigma^2}\right) = I(J-1) \;\Rightarrow\; E\!\left(\frac{\mathrm{SSE}}{I(J-1)}\right) = \sigma^2$$

$$H_0 \text{ true} \;\Rightarrow\; E\!\left(\frac{\mathrm{SSTr}}{\sigma^2}\right) = I - 1 \;\Rightarrow\; E\!\left(\frac{\mathrm{SSTr}}{I-1}\right) = \sigma^2$$

Whenever the ratio of a sum of squares over $\sigma^2$ has a chi-squared distribution, we divide the sum of squares by its degrees of freedom to obtain a mean square ("mean" is used in the sense of "average").

DEFINITION The mean square for treatments is $\mathrm{MSTr} = \mathrm{SSTr}/(I-1)$ and the mean square for error is $\mathrm{MSE} = \mathrm{SSE}/[I(J-1)]$.
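As an illustration of these definitions, SSTr, SSE, and the two mean squares can be computed directly for the compression-strength data of Example 11.1. The following sketch (not part of the original text) uses only the Python standard library:

```python
# Computing SSTr, SSE, MSTr, and MSE straight from the definitions,
# for the I = 4, J = 6 compression-strength samples of Example 11.1.
from statistics import mean, variance

samples = [
    [655.5, 788.3, 734.3, 721.4, 679.1, 699.4],
    [789.2, 772.5, 786.9, 686.1, 732.1, 774.8],
    [737.1, 639.0, 696.3, 671.7, 717.2, 727.1],
    [535.1, 628.7, 542.4, 559.0, 586.9, 520.0],
]
I, J = len(samples), len(samples[0])
grand = mean(v for s in samples for v in s)

sstr = J * sum((mean(s) - grand) ** 2 for s in samples)  # J * sum (x-bar_i. - x-bar_..)^2
sse = sum((J - 1) * variance(s) for s in samples)        # sum of (J-1) s_i^2
mstr, mse = sstr / (I - 1), sse / (I * (J - 1))

print(round(mstr, 1), round(mse, 1))  # MSTr is far larger than MSE for this data
```

Here MSTr greatly exceeds MSE, anticipating the F test developed below.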
Notice that uppercase $X$'s and $S$'s are used in defining the sums of squares and thus the mean squares, so the SS's and MS's are statistics (random variables). We will follow tradition and also use MSTr and MSE (rather than mstr and mse) to denote the calculated values of these statistics. The foregoing results concerning expected values can now be restated:

$E(\mathrm{MSE}) = \sigma^2$; that is, MSE is an unbiased estimator of $\sigma^2$
$H_0$ true $\Rightarrow E(\mathrm{MSTr}) = \sigma^2$; so MSTr is then an unbiased estimator of $\sigma^2$

MSTr is unbiased for $\sigma^2$ when $H_0$ is true, but what about when $H_0$ is false? It can be shown (Exercise 10) that in this case $E(\mathrm{MSTr}) > \sigma^2$. This is because the $\bar{X}_{i\cdot}$'s tend to differ more from each other, and therefore from the grand mean, when the $\mu_i$'s are not identical than when they are the same.

The F Test

The test statistic is the ratio $F = \mathrm{MSTr}/\mathrm{MSE}$, a ratio of two estimators of $\sigma^2$. The numerator (the between-samples estimator), MSTr, is unbiased when $H_0$ is true but tends to overestimate $\sigma^2$ when $H_0$ is false, whereas the denominator (the within-samples estimator), MSE, is unbiased regardless of the status of $H_0$. Thus if $H_0$ is true the $F$ ratio should be reasonably close to 1, but if the $\mu_i$'s differ considerably from each other, $F$ should greatly exceed 1. A value of $F$ considerably exceeding 1 therefore argues for rejection of $H_0$.

In Section 6.4 we introduced a family of probability distributions called F distributions. If $Y_1$ and $Y_2$ are two independent chi-squared random variables with $\nu_1$ and $\nu_2$ df, respectively, then the ratio $F = (Y_1/\nu_1)/(Y_2/\nu_2)$ has an F distribution with $\nu_1$ numerator df and $\nu_2$ denominator df. Figure 11.3 shows an F density curve and the corresponding upper-tail critical value $F_{\alpha,\nu_1,\nu_2}$. Appendix Table A.8 gives these critical values for $\alpha = .10, .05, .01$, and $.001$. Values of $\nu_1$ are identified with different columns of the table, and the rows are labeled with various values of $\nu_2$.
For example, the F critical value that captures upper-tail area .05 under the F curve with $\nu_1 = 4$ and $\nu_2 = 6$ is $F_{.05,4,6} = 4.53$, whereas $F_{.05,6,4} = 6.16$ (so don't accidentally switch numerator and denominator df!). The key theoretical result that justifies the test procedure is that the test statistic $F$ has an F distribution when $H_0$ is true.

Figure 11.3 An F curve and critical value $F_{\alpha,\nu_1,\nu_2}$ (the shaded upper-tail area is $\alpha$)

THEOREM The test statistic in single-factor ANOVA is $F = \mathrm{MSTr}/\mathrm{MSE}$. We can write this as

$$F = \frac{\mathrm{SSTr}/(I-1)}{\mathrm{SSE}/[I(J-1)]}$$

When $H_0$ is true, the previous theorem implies that the numerator and denominator of $F$ are independent chi-squared variables divided by their df's, in which case $F$ has an F distribution with $I - 1$ numerator df and $I(J-1)$ denominator df. The rejection region $f > F_{\alpha,I-1,I(J-1)}$ then specifies an upper-tailed test that has the desired significance level $\alpha$.

The P-value for an upper-tailed F test is the area under the relevant F curve (the one with correct numerator and denominator df's) to the right of the calculated $f$. Refer to Section 10.5 to see how P-value information for F tests can be obtained from the table of F critical values. Alternatively, statistical software packages will automatically include the P-value with ANOVA output.

Computational Formulas

The calculations leading to $f$ can be done efficiently by using formulas similar to the computing formula for the numerator of the sample variance $s^2$ from Section 1.4. The first two computational formulas here are essentially repetitions of that formula with new notation. Let $x_{i\cdot}$ represent the sum (not the average, since there is no overbar) of the $x_{ij}$'s for fixed $i$ (the total of the $J$ observations in the $i$th sample). Similarly, let $x_{\cdot\cdot}$ denote the sum of all $IJ$ observations (the grand total). We also need a third sum of squares in addition to SSTr and SSE.
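The table lookups quoted above are easy to reproduce numerically. A sketch follows; it assumes the SciPy library, which is not referenced in the original text:

```python
# Reproducing the Appendix Table A.8 lookups F_{.05,4,6} and F_{.05,6,4}.
# ppf(0.95, ...) gives the value with upper-tail area .05.
from scipy.stats import f

f_05_4_6 = f.ppf(0.95, dfn=4, dfd=6)  # numerator df = 4, denominator df = 6
f_05_6_4 = f.ppf(0.95, dfn=6, dfd=4)  # swapping the df's changes the value

print(round(f_05_4_6, 2), round(f_05_6_4, 2))  # 4.53 and 6.16
```

The asymmetry in the output is exactly the reason for the warning about switching numerator and denominator df.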
Sum of Squares      df          Definition                                         Computing Formula
Total = SST         $IJ - 1$    $\sum_i\sum_j (x_{ij} - \bar{x}_{\cdot\cdot})^2$   $\sum_i\sum_j x_{ij}^2 - x_{\cdot\cdot}^2/(IJ)$
Treatment = SSTr    $I - 1$     $J\sum_i (\bar{x}_{i\cdot} - \bar{x}_{\cdot\cdot})^2$   $\frac{1}{J}\sum_i x_{i\cdot}^2 - x_{\cdot\cdot}^2/(IJ)$
Error = SSE         $I(J-1)$    $\sum_i\sum_j (x_{ij} - \bar{x}_{i\cdot})^2$       SST $-$ SSTr

Both SST and SSTr involve $x_{\cdot\cdot}^2/(IJ)$, which is called either the correction factor or the correction for the mean. SST results from squaring each observation, adding these squares, and then subtracting the correction factor. Calculation of SSTr entails squaring each sample total (each row total from the data table), summing these squares, dividing the sum by $J$, and again subtracting the correction factor. SSTr is subtracted from SST to give SSE (it must be the case that SST > SSTr), after which MSTr, MSE, and finally $f$ are calculated.

The computational formula for SSE is a consequence of the fundamental ANOVA identity

$$\mathrm{SST} = \mathrm{SSTr} + \mathrm{SSE} \tag{11.1}$$

The identity implies that once any two of the SS's have been calculated, the remaining one is easily obtained by addition or subtraction. The two that are most easily calculated are SST and SSTr. The proof of the identity follows from squaring both sides of the relationship

$$x_{ij} - \bar{x}_{\cdot\cdot} = (x_{ij} - \bar{x}_{i\cdot}) + (\bar{x}_{i\cdot} - \bar{x}_{\cdot\cdot}) \tag{11.2}$$

and summing over all $i$ and $j$. This gives SST on the left and SSE and SSTr as the two extreme terms on the right; the cross-product term is easily seen to be zero (Exercise 9).

The interpretation of the fundamental identity is an important aid to understanding ANOVA. SST is a measure of total variation in the data: the sum of all squared deviations about the grand mean. The identity says that this total variation can be partitioned into two pieces; it is this decomposition of SST that gives rise to the name "analysis of variance" (more appropriately, "analysis of variation"). SSE measures variation that would be present (within samples) even if $H_0$ were true and is thus the part of total variation that is unexplained by the status of $H_0$ (true or false).
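The computing formulas and the fundamental identity (11.1) can be checked numerically. Here is a sketch (not part of the original presentation) using the compression-strength data of Example 11.1 and only the Python standard library:

```python
# Computing SST and SSTr by the computing formulas, then SSE by subtraction,
# and confirming identity (11.1) against a definition-based SSE.
from statistics import mean

data = [
    [655.5, 788.3, 734.3, 721.4, 679.1, 699.4],
    [789.2, 772.5, 786.9, 686.1, 732.1, 774.8],
    [737.1, 639.0, 696.3, 671.7, 717.2, 727.1],
    [535.1, 628.7, 542.4, 559.0, 586.9, 520.0],
]
I, J = len(data), len(data[0])
x_dd = sum(sum(row) for row in data)        # grand total x..
cf = x_dd ** 2 / (I * J)                    # correction factor x..^2 / (IJ)

sst = sum(v * v for row in data for v in row) - cf
sstr = sum(sum(row) ** 2 for row in data) / J - cf
sse = sst - sstr                            # computing formula for SSE

# definition-based SSE, to confirm SST = SSTr + SSE
sse_direct = sum((v - mean(row)) ** 2 for row in data for v in row)

print(round(sst, 1), round(sstr, 1), round(sse, 1))
```

The two versions of SSE agree (up to rounding error), which is exactly the content of the identity.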
SSTr is the part of total variation (between samples) that can be explained by possible differences in the $\mu_i$'s. If explained variation is large relative to unexplained variation, then $H_0$ is rejected in favor of $H_a$.

Once SSTr and SSE are computed, each is divided by its associated df to obtain a mean square (mean in the sense of average). Then $F$ is the ratio of the two mean squares:

$$\mathrm{MSTr} = \frac{\mathrm{SSTr}}{I-1} \qquad \mathrm{MSE} = \frac{\mathrm{SSE}}{I(J-1)} \qquad F = \frac{\mathrm{MSTr}}{\mathrm{MSE}} \tag{11.3}$$

The computations are often summarized in a tabular format, called an ANOVA table, as displayed in Table 11.2. Tables produced by statistical software customarily include a P-value column to the right of $f$.

Table 11.2 An ANOVA table

Source of Variation   df         Sum of Squares   Mean Square                f
Treatments            $I-1$      SSTr             MSTr = SSTr/$(I-1)$        MSTr/MSE
Error                 $I(J-1)$   SSE              MSE = SSE/$[I(J-1)]$
Total                 $IJ-1$     SST

Example 11.2 The accompanying data resulted from an experiment comparing the degree of soiling for fabric copolymerized with three different mixtures of methacrylic acid (similar data appeared in the article "Chemical Factors Affecting Soiling and Soil Release from Cotton DP Fabric," Am. Dyest. Rep., 1983: 25-30).

Mixture   Degree of soiling              $x_{i\cdot}$   $\bar{x}_{i\cdot}$
1         .56  1.12  .90  1.07  .94      4.59           .918
2         .72  .69   .87  .78   .91      3.97           .794
3         .62  1.08  1.07 .99   .93      4.69           .938
                                         $x_{\cdot\cdot} = 13.25$

Let $\mu_i$ denote the true average degree of soiling when mixture $i$ is used ($i = 1, 2, 3$). The null hypothesis $H_0\colon \mu_1 = \mu_2 = \mu_3$ states that the true average degree of soiling is identical for the three mixtures. We will carry out a test at significance level .01 to see whether $H_0$ should be rejected in favor of the assertion that the true average degree of soiling is not the same for all mixtures. Since $I - 1 = 2$ and $I(J-1) = 12$, the F critical value for the rejection region is $F_{.01,2,12} = 6.93$. Squaring each of the 15 observations and summing gives $\sum\sum x_{ij}^2 = (.56)^2 + (1.12)^2 + \cdots + (.93)^2 = 12.1351$.
The values of the three sums of squares are

$$\mathrm{SST} = 12.1351 - (13.25)^2/15 = 12.1351 - 11.7042 = .4309$$
$$\mathrm{SSTr} = \tfrac{1}{5}\bigl[4.59^2 + 3.97^2 + 4.69^2\bigr] - 11.7042 = 11.7650 - 11.7042 = .0608$$
$$\mathrm{SSE} = .4309 - .0608 = .3701$$

The remaining computations are summarized in the accompanying ANOVA table. Because $f = .99$ is not at least $F_{.01,2,12} = 6.93$, $H_0$ is not rejected at significance level .01. The mixtures appear to be indistinguishable with respect to degree of soiling ($f = .99 < F_{.10,2,12} = 2.81 \Rightarrow$ P-value > .10).

Source of Variation   df   Sum of Squares   Mean Square   f
Treatments            2    .0608            .0304         .99
Error                 12   .3701            .0308
Total                 14   .4309

When the F test causes $H_0$ to be rejected, the experimenter will often be interested in further analysis to decide which $\mu_i$'s differ from which others. Procedures for doing this are called multiple comparison procedures, and several are described in the next two sections.

Testing for the Assumption of Equal Variances

One of the two assumptions for ANOVA is that the populations have equal variances. If the likelihood ratio principle is applied to the problem of testing for equal variances for normal data, the result is Bartlett's test. This is a generalization of the F test for equal variances given in Section 10.5, and it is very sensitive to the normality assumption.

The Levene test is much less sensitive to the assumption of normality. Essentially, this test involves performing an ANOVA on the absolute values of the residuals, which are the deviations $x_{ij} - \bar{x}_{i\cdot}$, $j = 1, 2, \ldots, J$, for each $i = 1, 2, \ldots, I$. That is, a residual is the difference between an observation and its row mean (the mean for its sample). The Levene test performs an ANOVA F test using the absolute residuals $|x_{ij} - \bar{x}_{i\cdot}|$ in place of the $x_{ij}$. The idea is to use the absolute residuals to compare the variability of the samples. Consider the data of Example 11.2.
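Before turning to the residual analysis, the hand computation of Example 11.2 itself can be verified with software. A sketch follows; it assumes SciPy's `f_oneway`, which is not mentioned in the original text:

```python
# Checking Example 11.2: the soiling data should give f close to .99,
# with a P-value well above .10, so H0 is not rejected at level .01.
from scipy.stats import f_oneway

mix1 = [0.56, 1.12, 0.90, 1.07, 0.94]
mix2 = [0.72, 0.69, 0.87, 0.78, 0.91]
mix3 = [0.62, 1.08, 1.07, 0.99, 0.93]

stat, p = f_oneway(mix1, mix2, mix3)
print(round(stat, 2))  # 0.99, matching the ANOVA table
print(p > 0.10)        # True: P-value exceeds .10, as the text concludes
```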
Example 11.2 (continued) Here are the observations again, along with the means and the absolute values of the residuals.

             Observations                       $\bar{x}_{i\cdot}$   $\sum_j |x_{ij}-\bar{x}_{i\cdot}|$
Mixture 1    .56   1.12  .90   1.07  .94       .918
|residual 1| .358  .202  .018  .152  .022                            .752
Mixture 2    .72   .69   .87   .78   .91       .794
|residual 2| .074  .104  .076  .014  .116                            .384
Mixture 3    .62   1.08  1.07  .99   .93       .938
|residual 3| .318  .142  .132  .052  .008                            .652
                                                          Total:     1.788

Now apply ANOVA to the absolute residuals. The sum of all 15 squared absolute residuals is .3701, so

$$\mathrm{SST} = .3701 - (1.788)^2/15 = .3701 - .2131 = .1570$$
$$\mathrm{SSTr} = \tfrac{1}{5}\bigl[.752^2 + .384^2 + .652^2\bigr] - .2131 = .2276 - .2131 = .0145$$
$$\mathrm{SSE} = .1570 - .0145 = .1425$$

$$f = \frac{.0145/2}{.1425/12} = .61$$

Compare .61 to the critical value $F_{.10,2,12} = 2.81$. Because .61 is much smaller than 2.81, there is no reason to doubt that the variances are equal.

Given that the absolute residuals are not normally distributed, it might seem like a dumb idea to do an ANOVA on them. However, the ANOVA F test is robust to the assumption of normality, meaning that the assumption can be relaxed somewhat. Thus the Levene test works in spite of the normality assumption. Note also that the residuals are dependent because they sum to zero within each sample (row), but this again is not a problem if the samples are of sufficient size. (If $J = 2$, why does each sample have both absolute residuals the same?) A sample size of 10 is sufficient for excellent accuracy in the Levene test, but smaller samples can still give useful results when only approximate critical values are needed. This occurs when the test value is either far beyond the nominal critical value or well below it, as in Example 11.3.

Some software packages perform the Levene test, but they will not necessarily get the same answer, because they do not necessarily use absolute deviations from the mean. For example, MINITAB uses absolute residuals with respect to the median, an especially good idea in the case of skewed data.
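The mean-centered version of the test carried out by hand above is also available in software. The following sketch assumes SciPy's `scipy.stats.levene` (not mentioned in the original text); its `center='mean'` option uses the same absolute residuals as the hand computation:

```python
# Levene's test on the soiling data with deviations taken about the mean,
# mirroring the hand computation (f = .61).
from scipy.stats import levene

mix1 = [0.56, 1.12, 0.90, 1.07, 0.94]
mix2 = [0.72, 0.69, 0.87, 0.78, 0.91]
mix3 = [0.62, 1.08, 1.07, 0.99, 0.93]

stat, p = levene(mix1, mix2, mix3, center='mean')
print(round(stat, 2))  # 0.61, far below F_{.10,2,12} = 2.81
```

Using `center='median'` instead would mimic the MINITAB (Brown-Forsythe style) variant described above.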
By default, SAS uses the squared deviations from the mean, although the absolute deviations from the mean can be requested. SAS also allows absolute deviations from the median (known as the BF test, because Brown and Forsythe studied this procedure).

The ANOVA F test is fairly robust to both the normality and the constant variance assumptions: the test will still work under moderate departures from these two assumptions. When the sample sizes are all the same, as we are assuming so far, the test is especially insensitive to unequal variances. Also, there is a generalization of the two-sample t test of Section 10.2 to more than two samples, and it does not demand equal variances; this test is available in JMP, R, and SAS. If there is a major violation of assumptions, then the situation can sometimes be corrected by a data transformation, as discussed in Section 11.3. Alternatively, the bootstrap can be used, by generalizing the method of Section 10.6 from two groups to several. There is also a nonparametric test (no normality required), as discussed in Exercise 37 of Chapter 14.

Exercises Section 11.1 (1-10)

1. An experiment to compare $I = 5$ brands of golf balls involved using a robotic driver to hit $J = 7$ balls of each brand. The resulting between-sample and within-sample estimates of $\sigma^2$ were MSTr = 123.50 and MSE = 22.16, respectively.
a. State and test the relevant hypotheses using a significance level of .05.
b. What can be said about the P-value of the test?

2. The lumen output was determined for each of $I = 3$ different brands of 60-watt soft-white lightbulbs, with $J = 8$ bulbs of each brand tested. The sums of squares were computed as SSE = 4773.3 and SSTr = 591.2. State the hypotheses of interest (including word definitions of parameters), and use the F test of ANOVA ($\alpha = .05$) to decide whether there are any differences in true average lumen outputs among the three brands for this type of bulb, obtaining as much information as possible about the P-value.

3. In a study to assess the effects of malaria infection on mosquito hosts ("Plasmodium cynomolgi: Effects of Malaria Infection on Laboratory Flight Performance of Anopheles stephensi Mosquitos," Exp. Parasitol., 1977: 397-404), mosquitoes were fed on either infective or noninfective rhesus monkeys. Subsequently the distance they flew during a 24-h period was measured using a flight mill. The mosquitoes were divided into four groups of eight mosquitoes each: infective rhesus and sporozoites present (IRS), infective rhesus and oocysts present (IRD), infective rhesus and no infection developed (IRN), and noninfective (C). The summary data values are $\bar{x}_1 = 4.39$ (IRS), $\bar{x}_2 = 4.52$ (IRD), $\bar{x}_3 = 5.49$ (IRN), $\bar{x}_4 = 6.36$ (C), $\bar{x}_{\cdot\cdot} = 5.19$, and $\sum\sum x_{ij}^2 = 911.91$. Use the ANOVA F test at level .05 to decide whether there are any differences between the true average flight times for the four treatments.

4. Consider the following summary data on the modulus of elasticity ($\times 10^6$ psi) for lumber of three different grades (in close agreement with values in the article "Bending Strength and Stiffness of Second-Growth Douglas-Fir Dimension Lumber," Forest Products J., 1991: 35-43, except that the sample sizes there were larger):

Grade   J    $\bar{x}$   s
1       10   1.63        .27
2       10   1.56        .24
3       10   1.42        .26

Use this data and a significance level of .01 to test the null hypothesis of no difference in mean modulus of elasticity for the three grades.

5. The article "Origin of Precambrian Iron Formations" (Econ. Geol., 1964: 1025-1057) reports the following data on total Fe for four types of iron formation (1 = carbonate, 2 = silicate, 3 = magnetite, 4 = hematite):

1: 20.5 28.1 27.8 27.0 28.0 25.2 25.3 27.1 20.5 31.3
2: 26.3 24.0 26.2 20.2 23.7 34.0 17.1 26.8 23.7 24.9
3: 29.5 34.0 27.5 29.4 27.9 26.2 29.9 29.5 30.0 35.6
4: 36.5 44.2 34.1 30.3 31.4 33.1 34.1 32.9 36.3 25.5

Carry out an analysis of variance F test at significance level .01, and summarize the results in an ANOVA table.

6. In an experiment to investigate the performance of four different brands of spark plugs intended for use on a 125-cc two-stroke motorcycle, five plugs of each brand were tested for the number of miles (at a constant speed) until failure. The partial ANOVA table for the data is given here. Fill in the missing entries, state the relevant hypotheses, and carry out a test by obtaining as much information as you can about the P-value.

Source of Variation   df   Sum of Squares   Mean Square   f
Treatments
Error
Total                      …10,800.78

7. A study of the properties of metal plate-connected trusses used for roof support ("Modeling Joints Made with Light-Gauge Metal Connector Plates," Forest Products J., 1979: 39-44) yielded the following observations on axial stiffness index (kips/in.) for plate lengths 4, 6, 8, 10, and 12 in.:

4:  309.2 409.5 311.0 326.5 316.8 349.8 309.7
6:  402.1 347.2 361.0 404.5 331.0 348.9 381.7
8:  392.4 366.2 351.0 357.1 409.9 367.3 382.0
10: 346.7 452.9 461.4 433.1 410.6 384.2 362.6
12: 407.4 441.8 419.9 410.7 473.4 441.2 465.8

a. Check the ANOVA assumptions with a normal plot and a test for equal variances.
b. Does variation in plate length have any effect on true average axial stiffness? State and test the relevant hypotheses using analysis of variance with $\alpha = .01$. Display your results in an ANOVA table. [Hint: $\sum\sum x_{ij}^2 = 5{,}241{,}420.79$.]

8. Six samples of each of four types of cereal grain grown in a certain region were analyzed to determine thiamin content, resulting in the following data (μg/g):

Wheat    5.2  4.5  6.0  6.1  6.7  5.8
Barley   6.5  8.0  6.1  7.5  5.9  5.6
Maize    5.8  4.7  6.4  4.9  6.0  5.2
Oats     8.3  6.1  7.8  7.0  5.5  7.2

a. Check the ANOVA assumptions with a normal probability plot and a test for equal variances.
b. Test to see whether at least two of the grains differ with respect to true average thiamin content. Use an $\alpha = .05$ test based on the P-value method.

9. Derive the fundamental identity SST = SSTr + SSE by squaring both sides of Equation (11.2) and summing over all $i$ and $j$. [Hint: For any particular $i$, $\sum_j (x_{ij} - \bar{x}_{i\cdot}) = 0$.]

10. In single-factor ANOVA with $I$ treatments and $J$ observations per treatment, let $\bar{\mu} = (1/I)\sum \mu_i$.
a. Express $E(\bar{X}_{\cdot\cdot})$ in terms of $\bar{\mu}$. [Hint: $\bar{X}_{\cdot\cdot} = (1/I)\sum \bar{X}_{i\cdot}$.]
b. Compute $E(\bar{X}_{i\cdot}^2)$. [Hint: For any rv $Y$, $E(Y^2) = V(Y) + [E(Y)]^2$.]
c. Compute $E(\bar{X}_{\cdot\cdot}^2)$.
d. Compute $E(\mathrm{SSTr})$ and then show that
$$E(\mathrm{MSTr}) = \sigma^2 + \frac{J}{I-1}\sum (\mu_i - \bar{\mu})^2$$
e. Using the result of part (d), what is $E(\mathrm{MSTr})$ when $H_0$ is true? When $H_0$ is false, how does $E(\mathrm{MSTr})$ compare to $\sigma^2$?

11.2 Multiple Comparisons in ANOVA

When the computed value of the F statistic in single-factor ANOVA is not significant, the analysis is terminated because no differences among the $\mu_i$'s have been identified. But when $H_0$ is rejected, the investigator will usually want to know which of the $\mu_i$'s are different from each other. A method for carrying out this further analysis is called a multiple comparisons procedure. Several of the most frequently used such procedures are based on the following central idea: first calculate a confidence interval for each pairwise difference $\mu_i - \mu_j$ with $i < j$. Tukey's procedure rests on the studentized range distribution; let $Q_{\alpha,I,I(J-1)}$ denote its upper-tail $\alpha$ critical value (Appendix Table A.9). Then

$$1 - \alpha = P\Bigl(\max_{i,j}\bigl|\bar{X}_{i\cdot} - \bar{X}_{j\cdot} - (\mu_i - \mu_j)\bigr| \le Q_{\alpha,I,I(J-1)}\sqrt{\mathrm{MSE}/J}\Bigr)$$
$$= P\Bigl(-Q_{\alpha}\sqrt{\mathrm{MSE}/J} \le \bar{X}_{i\cdot} - \bar{X}_{j\cdot} - (\mu_i - \mu_j) \le Q_{\alpha}\sqrt{\mathrm{MSE}/J}\ \text{for all}\ i, j\Bigr)$$
$$= P\Bigl(\bar{X}_{i\cdot} - \bar{X}_{j\cdot} - Q_{\alpha}\sqrt{\mathrm{MSE}/J} \le \mu_i - \mu_j \le \bar{X}_{i\cdot} - \bar{X}_{j\cdot} + Q_{\alpha}\sqrt{\mathrm{MSE}/J}\ \text{for all}\ i, j\Bigr)$$

(whew!). Replacing $\bar{X}_{i\cdot}$, $\bar{X}_{j\cdot}$, and MSE by the values calculated from the data gives the following result.

PROPOSITION For each $i < j$, form the interval

$$\bar{x}_{i\cdot} - \bar{x}_{j\cdot} \pm Q_{\alpha,I,I(J-1)}\sqrt{\mathrm{MSE}/J} \tag{11.4}$$

There are $\binom{I}{2} = I(I-1)/2$ such intervals: one for $\mu_1 - \mu_2$, another for $\mu_1 - \mu_3$, $\ldots$, and the last for $\mu_{I-1} - \mu_I$.
Then the simultaneous confidence level that every interval includes the corresponding value of $\mu_i - \mu_j$ is $100(1-\alpha)\%$. Notice that the second subscript on $Q_\alpha$ is $I$, whereas the second subscript on $F_\alpha$ used in the F test is $I - 1$. We will say more about the interpretation of "simultaneous" shortly.

Each interval that doesn't include 0 yields the conclusion that the corresponding values of $\mu_i$ and $\mu_j$ are different; we say that $\mu_i$ and $\mu_j$ "differ significantly" from each other. For purposes of deciding which $\mu_i$'s differ significantly from which others (i.e., identifying the intervals that don't include 0), much of the arithmetic associated with calculating the CIs can be avoided. The following box gives details and describes how differences can be displayed using an "underscoring pattern."

TUKEY'S PROCEDURE FOR IDENTIFYING SIGNIFICANTLY DIFFERENT $\mu_i$'s Select $\alpha$, extract $Q_{\alpha,I,I(J-1)}$ from Appendix Table A.9, and calculate $w = Q_{\alpha,I,I(J-1)}\cdot\sqrt{\mathrm{MSE}/J}$. Then list the sample means in increasing order and underline those pairs that differ by less than $w$. Any pair of sample means not underscored by the same line corresponds to a pair of population or treatment means that are judged significantly different. The quantity $w$ is sometimes referred to as Tukey's honestly significant difference (HSD).

Suppose, for example, that $I = 5$ and that the sample means in increasing order are $\bar{x}_{5\cdot} = 13.1$, $\bar{x}_{3\cdot} = 13.3$, $\bar{x}_{2\cdot} = 13.8$, $\bar{x}_{4\cdot} = 14.3$, $\bar{x}_{1\cdot} = 14.5$, with $w = .4$. Consider first the smallest mean: since $\bar{x}_{3\cdot} - \bar{x}_{5\cdot} < .4$, these two means can be underscored with the same line. Moving one mean to the right, the pair $\bar{x}_{3\cdot}$ and $\bar{x}_{2\cdot}$ cannot be underscored because these means differ by more than .4. Again moving to the right, the next mean, 13.8, cannot be connected to any further to the right, and finally the last two means can be underscored with the same line segment.
The resulting configuration of underscored means is

$\bar{x}_{5\cdot}$   $\bar{x}_{3\cdot}$   $\bar{x}_{2\cdot}$   $\bar{x}_{4\cdot}$   $\bar{x}_{1\cdot}$
13.1    13.3    13.8    14.3    14.5
------------            ------------

Thus brands 1 and 4 are not significantly different from each other, but they are significantly higher than the other three brands in their true average amounts captured. Brand 2 is significantly better than 3 and 5 but worse than 1 and 4, and brands 3 and 5 do not differ significantly.

If $\bar{x}_{2\cdot} = 14.15$ rather than 13.8, with the same computed $w$, then the configuration of underscored means would be

$\bar{x}_{5\cdot}$   $\bar{x}_{3\cdot}$   $\bar{x}_{2\cdot}$   $\bar{x}_{4\cdot}$   $\bar{x}_{1\cdot}$
13.1    13.3    14.15    14.3    14.5
------------    --------------------

Example 11.5 A biologist wished to study the effects of ethanol on sleep time. A sample of 20 rats, matched for age and other characteristics, was selected, and each rat was given an oral injection having a particular concentration of ethanol per kg of body weight. The rapid eye movement (REM) sleep time for each rat was then recorded for a 24-h period, with the following results:

Treatment (ethanol)   REM time                      $x_{i\cdot}$   $\bar{x}_{i\cdot}$
0 (control)           88.6  73.2  91.4  68.0  75.2     396.4          79.28
1 g/kg                63.0  53.9  69.2  50.1  71.5     307.7          61.54
2 g/kg                44.9  59.5  40.2  56.3  38.7     239.6          47.92
4 g/kg                31.0  39.6  45.3  25.2  22.7     163.8          32.76
                                        $x_{\cdot\cdot} = 1107.5$   $\bar{x}_{\cdot\cdot} = 55.375$

Does the data indicate that the true average REM sleep time depends on the concentration of ethanol? (This example is based on an experiment reported in "Relationship of Ethanol Blood Level to REM and Non-REM Sleep Time and Distribution in the Rat," Life Sci., 1978: 839-846.)

The $\bar{x}_{i\cdot}$'s differ rather substantially from each other, but there is also a great deal of variability within each sample, so to answer the question precisely we must carry out the ANOVA. With $\sum\sum x_{ij}^2 = 68{,}697.6$ and correction factor $x_{\cdot\cdot}^2/(IJ) = (1107.5)^2/20 = 61{,}327.8$, the computing formulas yield

$$\mathrm{SST} = 68{,}697.6 - 61{,}327.8 = 7369.8$$
$$\mathrm{SSTr} = \tfrac{1}{5}\bigl[396.40^2 + 307.70^2 + 239.60^2 + 163.80^2\bigr] - 61{,}327.8 = 67{,}210.2 - 61{,}327.8 = 5882.4$$
$$\mathrm{SSE} = 7369.8 - 5882.4 = 1487.4$$

Table 11.4 is a SAS ANOVA table. The last column gives the P-value, which is .0001.
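The entries of Table 11.4, and the Tukey quantities used next, can be reproduced in software. The following sketch assumes SciPy (`studentized_range` requires SciPy 1.7 or later); none of this code is part of the original text:

```python
# Reproducing the REM sleep-time ANOVA (F = 21.09) and Tukey's w.
from scipy.stats import f_oneway, studentized_range

control = [88.6, 73.2, 91.4, 68.0, 75.2]  # 0 g/kg ethanol
g1 = [63.0, 53.9, 69.2, 50.1, 71.5]       # 1 g/kg
g2 = [44.9, 59.5, 40.2, 56.3, 38.7]       # 2 g/kg
g4 = [31.0, 39.6, 45.3, 25.2, 22.7]       # 4 g/kg

stat, p = f_oneway(control, g1, g2, g4)
print(round(stat, 2), p)  # F = 21.09; the exact P-value is far below .0001

# Tukey's honestly significant difference: Q_{.05,4,16} * sqrt(MSE/J)
mse, J = 92.9625, 5
q_crit = studentized_range.ppf(0.95, 4, 16)  # about 4.05 per Table A.9
w = q_crit * (mse / J) ** 0.5                # about 17.45
print(round(q_crit, 2), round(w, 2))
```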
Actually, the P-value is .0000083, but SAS does not output anything lower than .0001. It does not output .0000, because this could be misinterpreted to say that the P-value is 0. Using a significance level of .05, we reject the null hypothesis $H_0\colon \mu_1 = \mu_2 = \mu_3 = \mu_4$, since the given P-value $= .0001 < .05 = \alpha$. True average REM sleep time does appear to depend on ethanol concentration.

Table 11.4 SAS ANOVA table

Analysis of Variance Procedure
Dependent Variable: TIME
                        Sum of        Mean
Source           DF     Squares       Square       F Value   Pr > F
Model             3     5882.35750    1960.78583     21.09    .0001
Error            16     1487.40000      92.96250
Corrected Total  19     7369.75750

There are $I = 4$ treatments and 16 df for error, so $Q_{.05,4,16} = 4.05$ and $w = 4.05\sqrt{93.0/5} = 17.47$. Ordering the means and underscoring yields

$\bar{x}_{4\cdot}$   $\bar{x}_{3\cdot}$   $\bar{x}_{2\cdot}$   $\bar{x}_{1\cdot}$
32.76    47.92    61.54    79.28
---------------
         ---------------

The interpretation of this underscoring must be done with care, since we seem to have concluded that treatments 2 and 3 do not differ, 3 and 4 do not differ, yet 2 and 4 do differ. The suggested way of expressing this is to say that although evidence allows us to conclude that treatments 2 and 4 differ from each other, neither has been shown to be significantly different from 3. Treatment 1 has a significantly higher true average REM sleep time than any of the other treatments. This treatment involves 0 ethanol (alcohol), and there is a trend toward less sleep with more ethanol, although not all differences are significant.

Figure 11.4 shows SAS output from the application of Tukey's procedure.

Alpha = 0.05   df = 16   MSE = 92.9625
Critical Value of Studentized Range = 4.046
Minimum Significant Difference = 17.446
Means with the same letter are not significantly different.
Tukey Grouping        Mean   N   TREATMENT
     A              79.280   5   0 (control)
     B              61.540   5   1 gm/kg
     B
C    B              47.920   5   2 gm/kg
C
C                   32.760   5   4 gm/kg

Figure 11.4 Tukey's method using SAS

The Interpretation of α in Tukey's Procedure

We stated previously that the simultaneous confidence level is controlled by Tukey's method. So what does "simultaneous" mean here? Consider calculating a 95% CI for a population mean $\mu$ based on a sample from that population and then a 95% CI for a population proportion $p$ based on another sample selected independently of the first one. Prior to obtaining data, the probability that the first interval will include $\mu$ is .95, and this is also the probability that the second interval will include $p$. Because the two samples are selected independently of each other, the probability that both intervals will include the values of the respective parameters is $(.95)(.95) = (.95)^2 \approx .90$. Thus the simultaneous or joint confidence level for the two intervals is roughly 90%: if pairs of intervals are calculated over and over again from independent samples, in the long run roughly 90% of the time the first interval will capture $\mu$ and the second will include $p$. Similarly, if three CIs are calculated based on independent samples, the simultaneous confidence level will be $100(.95)^3\% \approx 86\%$. Clearly, as the number of intervals increases, the simultaneous confidence level that all intervals capture their respective parameters will decrease.

Now suppose that we want to maintain the simultaneous confidence level at 95%. Then for two independent samples, the individual confidence level for each would have to be $100\sqrt{.95}\% \approx 97.5\%$. The larger the number of intervals, the higher the individual confidence level would have to be to maintain the 95% simultaneous level.
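The joint-confidence arithmetic above amounts to a few lines of code; a trivial sketch (not part of the original text):

```python
# Joint confidence level for independent intervals, each at level 1 - alpha.
two = 0.95 ** 2      # two independent 95% CIs: joint level about .90
three = 0.95 ** 3    # three independent 95% CIs: about .86
indiv = 0.95 ** 0.5  # individual level needed so that two CIs are jointly 95%

print(round(two, 4), round(three, 4), round(indiv, 4))  # 0.9025 0.8574 0.9747
```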
The tricky thing about the Tukey intervals is that they are not based on independent samples: MSE appears in every one, and various intervals share the same x̄i.'s (e.g., in the case I = 4, three different intervals all use x̄1.). This implies that there is no straightforward probability argument for ascertaining the simultaneous confidence level from the individual confidence levels. Nevertheless, if Q.05 is used, the simultaneous confidence level is controlled at 95%, whereas using Q.01 gives a simultaneous 99% level. To obtain a 95% simultaneous level, the individual level for each interval must be considerably larger than 95%. Said in a slightly different way, to obtain a 5% experimentwise or family error rate, the individual or per-comparison error rate for each interval must be considerably smaller than .05. MINITAB asks the user to specify the family error rate (e.g., 5%) and then includes on output the individual error rate (see Exercise 16).

Confidence Intervals for Other Parametric Functions

In some situations, a CI is desired for a function of the μi's more complicated than a difference μi − μj. Let θ = Σ ciμi, where the ci's are constants. One such function is ½(μ1 + μ2) − ⅓(μ3 + μ4 + μ5), which in the context of Example 11.4 measures the difference between the group consisting of the first two brands and that of the last three brands. Because the Xij's are normally distributed with E(Xij) = μi and V(Xij) = σ², the estimator θ̂ = Σ ciX̄i. is normally distributed, unbiased for θ, and

V(θ̂) = V(Σ ciX̄i.) = Σ ci²·V(X̄i.) = (σ²/J)·Σ ci²

Estimating σ² by MSE and forming σ̂_θ̂ results in a t variable (θ̂ − θ)/σ̂_θ̂, which can be manipulated to obtain the following 100(1 − α)% confidence interval for Σ ciμi:

Σ cix̄i. ± t_α/2,I(J−1)·√(MSE·Σ ci²/J)    (11.5)

Example 11.4 (continued)  The parametric function for comparing the first two (store) brands of oil filter with the last three (national) brands is θ = ½(μ1 + μ2) − ⅓(μ3 + μ4 + μ5), from which

Σ ci² = (½)² + (½)² + (−⅓)² + (−⅓)² + (−⅓)² = 5/6

With θ̂ = ½(x̄1. + x̄2.) − ⅓(x̄3. + x̄4. + x̄5.) = .583 and MSE = .088, a 95% interval is

.583 ± 2.021·√[5(.088)/((6)(9))] = .583 ± .182 = (.401, .765)    ■

Notice that in the foregoing example the coefficients c1, ..., c5 satisfy Σ ci = ½ + ½ − ⅓ − ⅓ − ⅓ = 0. When the coefficients sum to 0, the linear combination θ = Σ ciμi is called a contrast among the means, and the analysis is available in a number of statistical software programs. Sometimes an experiment is carried out to compare each of several "new" treatments to a control treatment. In such situations, a multiple comparisons technique called Dunnett's method is appropriate.

Exercises | Section 11.2 (11–21)

11. An experiment to compare the spreading rates of five different brands of yellow interior latex paint available in a particular area used 4 gallons (J = 4) of each paint. The sample average spreading rates (ft²/gal) for the five brands were x̄1. = 462.0, x̄2. = 512.8, x̄3. = 437.5, x̄4. = 469.3, and x̄5. = 532.1. The computed value of F was found to be significant at level α = .05. With MSE = 272.8, use Tukey's procedure to investigate significant differences in the true average spreading rates between brands.

12. In Exercise 11, suppose x̄3. = 427.5. Now which true average spreading rates differ significantly from each other? Be sure to use the method of underscoring to illustrate your conclusions, and write a paragraph summarizing your results.

13. Repeat Exercise 12 supposing that x̄2. = 502.8 in addition to x̄3. = 427.5.

14. Use Tukey's procedure on the data in Exercise 3 to identify differences in true average flight times among the four types of mosquitoes.

15. Use Tukey's procedure on the data of Exercise 5 to identify differences in true average total Fe among the four types of formations (use MSE = 15.64).

16. Reconsider the axial stiffness data given in Exercise 7. ANOVA output from MINITAB follows:

Analysis of Variance for stiffness
Source   DF      SS      MS      F      P
length    4   43993   10998  10.48  0.000
Error    30   31475    1049
Total    34   75468

Level    N     Mean   StDev
4        7   333.21   36.59
6        7        …       …
8        7        …       …
10       7        …       …
12       7   437.17   26.00

Pooled StDev = 32.39

Tukey's pairwise comparisons
Family error rate = 0.0500
Individual error rate = 0.00693
Critical value = 4.10

Intervals for (column level mean) − (row level mean)

        4                 6                 8                10
6    (…, 15.4)
8    (−92.1, 8.3)    (−57.3, 43.1)
10   (…, 35.8)       (−89.5, …)        (−82.8, …)
12   (−154.2, …)     (−119.3, …)       (−112.2, …)      (−80.0, …)

a. Use the output (without reference to our F table) to test the relevant hypotheses.
b. Use the Tukey intervals given in the output to determine which means differ, and construct the corresponding underscoring pattern.

17. Refer to Exercise 4. Compute a 95% t CI for the contrast θ = ½(μ1 + μ2) − μ3.

18. Consider the accompanying data on plant growth after the application of different types of growth hormone.

Hormone 1:   13  17   7  14
Hormone 2:   21  13  20  17
Hormone 3:   18  15  20  17
Hormone 4:    7   …  18  10
Hormone 5:    6   …  15   8

a. Perform an F test at level α = .05.
b. What happens when Tukey's procedure is applied?

19. Consider a single-factor ANOVA experiment in which I = 3, J = 5, x̄1. = 10, x̄2. = 12, and x̄3. = 20. Find a value of SSE for which f > F.05,2,12, so that H0: μ1 = μ2 = μ3 is rejected, yet when Tukey's procedure is applied none of the μi's differ significantly from each other.

20. Refer to Exercise 19 and suppose x̄1. = 10, x̄2. = 15, and x̄3. = 20. Can you now find a value of SSE that produces such a contradiction between the F test and Tukey's procedure?

21. The article "The Effect of Enzyme Inducing Agents on the Survival Times of Rats Exposed to Lethal Levels of Nitrogen Dioxide" (Toxicol. Appl. Pharmacol., 1978: 169–174) reports the following data on survival times for rats exposed to nitrogen dioxide (70 ppm) via different injection regimens. There were J = 14 rats in each group.

Regimen                       x̄i. (min)    si
1. Control                         166     32
2. 3-Methylcholanthrene            303     53
3. Allylisopropylacetamide         266     54
4. Phenobarbital                   212     35
5. Chlorpromazine                  202     34
6. p-Aminobenzoic Acid             184     31

a. Test the null hypothesis that true average survival time does not depend on injection regimen against the alternative that there is some dependence on injection regimen, using α = .01.
b. Suppose that 100(1 − α)% CIs for k different parametric functions are computed from the same ANOVA data set. Then it is easily verified that the simultaneous confidence level is at least 100(1 − kα)%. Compute CIs with simultaneous confidence level at least 98% for μ1 − (1/5)(μ2 + μ3 + μ4 + μ5 + μ6) and (1/4)(μ2 + μ3 + μ4 + μ5) − μ6.

11.3 More on Single-Factor ANOVA

In this section, we briefly consider some additional issues relating to single-factor ANOVA. These include an alternative description of the model parameters, β for the F test, the relationship of the test to procedures previously considered, data transformation, a random effects model, and formulas for the case of unequal sample sizes.

An Alternative Description of the ANOVA Model

The assumptions of single-factor ANOVA can be described succinctly by means of the "model equation"

Xij = μi + εij

where εij represents a random deviation from the population or true treatment mean μi. The εij's are assumed to be independent, normally distributed rv's (implying that the Xij's are also) with E(εij) = 0 [so that E(Xij) = μi] and V(εij) = σ² [from which V(Xij) = σ² for every i and j].
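As a numerical illustration of the model equation, and assuming NumPy is available, one can simulate data Xij = μi + εij with hypothetical treatment means (the values below are illustrative only) and verify that the sample means track the μi's and the within-sample variances track σ²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true treatment means and common error SD (illustrative only)
mu = np.array([10.0, 12.0, 15.0])   # mu_1, mu_2, mu_3
sigma = 2.0
J = 1000                            # large J so sample means settle near the mu_i

# X_ij = mu_i + eps_ij with eps_ij ~ iid N(0, sigma^2)
eps = rng.normal(0.0, sigma, size=(len(mu), J))
X = mu[:, None] + eps

sample_means = X.mean(axis=1)               # estimates of mu_1, mu_2, mu_3
pooled_var = X.var(axis=1, ddof=1).mean()   # estimate of sigma^2
print("sample means:", sample_means.round(2))
print("pooled variance estimate:", round(pooled_var, 2))
```

With a large common sample size the simulated sample means land close to the chosen μi's, reflecting E(Xij) = μi, and the pooled variance estimate lands close to σ² = 4.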
An alternative description of single-factor ANOVA will give added insight and suggest appropriate generalizations to models involving more than one factor. Define a parameter μ by

μ = (1/I)·Σ_{i=1..I} μi

and the parameters α1, ..., αI by

αi = μi − μ   (i = 1, ..., I)

Then the treatment mean μi can be written as μ + αi, where μ represents the true average overall response in the experiment, and αi is the effect, measured as a departure from μ, due to the ith treatment. Whereas we initially had I parameters, we now have I + 1 (μ, α1, ..., αI). However, because Σ αi = 0 (the average departure from the overall mean response is zero), only I of these new parameters are independently determined, so there are as many independent parameters as there were before. In terms of μ and the αi's, the model becomes

Xij = μ + αi + εij   (i = 1, ..., I; j = 1, ..., J)

In the next two sections, we will develop analogous models for two-factor ANOVA. The claim that the μi's are identical is equivalent to the equality of the αi's, and because Σ αi = 0, the null hypothesis becomes

H0: α1 = α2 = ··· = αI = 0

In Section 11.1, it was stated that MSTr is an unbiased estimator of σ² when H0 is true but otherwise tends to overestimate σ². More precisely,

E(MSTr) = σ² + [J/(I − 1)]·Σ αi²

When H0 is true, Σ αi² = 0, so E(MSTr) = σ² (MSE is unbiased whether or not H0 is true). If Σ αi² is used as a measure of the extent to which H0 is false, then a larger value of Σ αi² will result in a greater tendency for MSTr to overestimate σ². More generally, formulas for expected mean squares for multifactor models are used to suggest how to form F ratios to test various hypotheses.

Proof of the Formula for E(MSTr)

For any rv Y, E(Y²) = V(Y) + [E(Y)]², so

E(SSTr) = E[(1/J)·Σi Xi.² − (1/(IJ))·X..²]
        = (1/J)·Σi E(Xi.²) − (1/(IJ))·E(X..²)
        = (1/J)·Σi {V(Xi.) + [E(Xi.)]²} − (1/(IJ))·{V(X..) + [E(X..)]²}
        = (1/J)·Σi {Jσ² + [J(μ + αi)]²} − (1/(IJ))·{IJσ² + (IJμ)²}
        = Iσ² + J·Σi (μ + αi)² − σ² − IJμ²
        = Iσ² + IJμ² + 2Jμ·Σi αi + J·Σi αi² − σ² − IJμ²
        = (I − 1)σ² + J·Σi αi²   (since Σ αi = 0)

The result then follows from the relationship MSTr = SSTr/(I − 1).
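As a numerical illustration of this reparameterization and of the E(MSTr) formula, the following sketch (with hypothetical treatment means chosen purely for illustration) computes μ, the αi's, and the resulting value of E(MSTr):

```python
# Hypothetical treatment means, common sample size, and error variance
# (illustrative values only, not from a particular example in the text)
mu_i = [79.0, 62.0, 48.0, 33.0]   # true treatment means mu_1..mu_4
J = 5                              # observations per treatment
sigma_sq = 93.0                    # error variance sigma^2

I = len(mu_i)
mu = sum(mu_i) / I                 # overall mean: mu = (1/I) * sum(mu_i)
alpha = [m - mu for m in mu_i]     # effects: alpha_i = mu_i - mu

# The effects always sum to zero, so only I of the I + 1 parameters are free
print("sum of alphas:", round(sum(alpha), 10))

# E(MSTr) = sigma^2 + J/(I-1) * sum(alpha_i^2): equals sigma^2 iff all alpha_i = 0
e_mstr = sigma_sq + (J / (I - 1)) * sum(a * a for a in alpha)
print("E(MSTr) =", e_mstr)
```

Because the αi's here are not all zero, E(MSTr) exceeds σ², illustrating the tendency of MSTr to overestimate σ² when H0 is false.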
β for the F Test

Consider a set of parameter values α1, α2, ..., αI for which H0 is not true. The probability of a type II error, β, is the probability that H0 is not rejected when that set is the set of true values. One might think that β would have to be determined separately for each different configuration of αi's. Fortunately, since β for the F test depends on the αi's and σ² only through Σ αi²/σ², it can be simultaneously evaluated for many different alternatives. For example, Σ αi² = 4 for each of the following sets of αi's for which H0 is false, so β is identical for all three alternatives:

1. α1 = −1, α2 = −1, α3 = 1, α4 = 1
2. α1 = −√2, α2 = √2, α3 = 0, α4 = 0
3. α1 = −√3, α2 = √(1/3), α3 = √(1/3), α4 = √(1/3)

The quantity J·Σ αi²/σ² is called the noncentrality parameter for one-way ANOVA (because when H0 is false the test statistic has a noncentral F distribution with this as one of its parameters), and β is a decreasing function of the value of this parameter. Thus, for fixed values of σ² and J, the null hypothesis is more likely to be rejected for alternatives far from H0 (large Σ αi²) than for alternatives close to H0. For a fixed value of Σ αi², β decreases as the sample size J on each treatment increases, and it increases as the variance σ² increases (since greater underlying variability makes it more difficult to detect any given departure from H0).

Because hand computation of β and sample size determination for the F test are quite difficult (as in the case of t tests), statisticians have constructed sets of curves from which β can be obtained. Sets of curves for numerator df ν1 = 3 and ν1 = 4 are displayed in Figures 11.5 and 11.6, respectively. After the values of σ² and the αi's for which β is desired are specified, these are used to compute the value of φ, where φ² = (J/I)·Σ αi²/σ².
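The claim that the three alternatives just listed are equivalent for power purposes can be checked directly: each set of αi's sums to zero and has Σ αi² = 4, hence the same noncentrality parameter for any given J and σ². A quick sketch:

```python
import math

# The three alternative configurations of effects listed above
alternatives = [
    [-1.0, -1.0, 1.0, 1.0],
    [-math.sqrt(2), math.sqrt(2), 0.0, 0.0],
    [-math.sqrt(3), math.sqrt(1 / 3), math.sqrt(1 / 3), math.sqrt(1 / 3)],
]

for alpha in alternatives:
    sum_alpha = sum(alpha)              # must be 0: effects are departures from mu
    sum_sq = sum(a * a for a in alpha)  # beta depends on the alternative only via this
    print(f"sum = {sum_alpha:+.6f}, sum of squares = {sum_sq:.6f}")
```

All three configurations print a sum of 0 and a sum of squares of 4, so for fixed J and σ² they yield identical noncentrality parameters and hence identical β.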
We then enter the appropriate set of curves at the value of φ on the horizontal axis, move up to the curve associated with error df ν2, and move over to the value of power on the vertical axis. Finally, β = 1 − power.

Figure 11.5 Power curves for the ANOVA F test (ν1 = 3) (E. S. Pearson and H. O. Hartley, "Charts of the Power Function for Analysis of Variance Tests, Derived from the Non-central F Distribution," Biometrika, vol. 38, 1951: 112, by permission of Biometrika Trustees.)

Figure 11.6 Power curves for the ANOVA F test (ν1 = 4) (E. S. Pearson and H. O. Hartley, "Charts of the Power Function for Analysis of Variance Tests, Derived from the Non-central F Distribution," Biometrika, vol. 38, 1951: 112, by permission of Biometrika Trustees.)

Example 11.7  The effects of four different heat treatments on yield point (tons/in²) of steel ingots are to be investigated. A total of eight ingots will be cast using each treatment. Suppose the true standard deviation of yield point for any of the four treatments is σ = 1. How likely is it that H0 will not be rejected at level .05 if three of the treatments have the same expected yield point and the other treatment has an expected yield point that is 1 ton/in² greater than the common value of the other three (i.e., the fourth yield is on average 1 standard deviation above those for the first three treatments)? Suppose that μ1 = μ2 = μ3 and μ4 = μ1 + 1, so μ = (Σμi)/4 = μ1 + 1/4. Then

α1 = μ1 − μ = −1/4,  α2 = −1/4,  α3 = −1/4,  α4 = 3/4

so

φ² = (J/I)·Σ αi²/σ² = (8/4)·[(−1/4)² + (−1/4)² + (−1/4)² + (3/4)²]/1 = 3/2

and φ = 1.22. The degrees of freedom are ν1 = I − 1 = 3 and ν2 = I(J − 1) = 28, so interpolating visually between ν2 = 20 and ν2 = 30 gives power ≈ .47 and β ≈ .53. This β is rather large, so we might decide to increase the value of J. How many ingots of each type would be required to yield β ≈ .05 for the alternative under consideration? By trying different values of J, we can verify that J = 24 will meet the requirement, but any smaller J will not. ■

As an alternative to the use of power curves, many statistical packages have a function that calculates the cumulative area under a noncentral F curve (inputs Fα, numerator df, denominator df, and φ²), and this area is β. In addition, MINITAB 16 does something rather different. The user is asked to specify the maximum difference between μi's rather than the individual means. For example, we might wish to calculate the power of the test with α = .05, σ = 1, I = 4, J = 2, μ1 = 100, μ2 = 101, μ3 = 102, and μ4 = 106. Then the maximum difference is 106 − 100 = 6. However, the power depends not only on this maximum difference but on the values of all the μi's. In this situation MINITAB calculates the smallest possible value of power subject to μ1 = 100 and μ4 = 106, which occurs when the two other μi's are both halfway between 100 and 106. This power is .86, so we can say that the power is at least .86 and β is at most .14 when the two most extreme μi's are separated by 6. The software will also determine the necessary common sample size if maximum difference and minimum power are specified.
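Where a package with a noncentral F cdf is available, β can be computed directly instead of being read from the charts. The sketch below, assuming SciPy is installed, redoes the steel-ingot calculation; note that scipy.stats.ncf is parameterized by the noncentrality parameter λ = J·Σ αi²/σ² (which equals I·φ²) rather than by φ itself:

```python
from scipy.stats import f, ncf

# Steel-ingot example: I = 4 treatments, J = 8 ingots each, sigma = 1,
# three means equal and the fourth larger by 1 (alpha = -1/4, -1/4, -1/4, 3/4)
I, J, sigma_sq = 4, 8, 1.0
alpha_effects = [-0.25, -0.25, -0.25, 0.75]

lam = J * sum(a * a for a in alpha_effects) / sigma_sq   # noncentrality = 6.0
phi = (lam / I) ** 0.5                                   # phi = sqrt(3/2) = 1.22

dfn, dfd = I - 1, I * (J - 1)                            # 3 and 28
crit = f.ppf(0.95, dfn, dfd)                             # level-.05 rejection cutoff
power = 1 - ncf.cdf(crit, dfn, dfd, lam)                 # beta = 1 - power
print(f"phi = {phi:.2f}, power = {power:.3f}, beta = {1 - power:.3f}")
```

The chart-based value in the example (power ≈ .47) was obtained by visual interpolation, so the exact noncentral F computation can be expected to differ slightly from it.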
The R package has a function that allows specification of all I of the means, along with the other parameters. The function calculates whichever parameter is omitted. For example, in the above scenario with α = .05, σ = 1, I = 4, J = 2, μ1 = 100, μ2 = 101, μ3 = 102, and μ4 = 106, the function calculates power = .89.

Relationship of the F Test to the t Test

When the number of populations is just I = 2, the ANOVA F is testing H0: μ1 = μ2 versus Ha: μ1 ≠ μ2. In this case, a two-tailed, two-sample t test can also be used. In Section 10.2, we mentioned the pooled t test, which requires equal variances, as an alternative to the two-sample t procedure. With a little algebra, it can be shown that the single-factor ANOVA F test and the two-tailed pooled t test are equivalent: for any given data set, the P-values for the two tests will be identical, so the same conclusion will be reached by either test.

The two-sample t test is more flexible than the F test when I = 2 for two reasons. First, it is not based on the assumption that σ1 = σ2; second, it can be used to test Ha: μ1 > μ2 (an upper-tailed test) or Ha: μ1 < μ2 as well as Ha: μ1 ≠ μ2. As mentioned at the end of Section 11.1, there is a generalization of the two-sample t test for I ≥ 3 samples with population variances not necessarily the same.

Single-Factor ANOVA When Sample Sizes Are Unequal

When the sample sizes from each population or treatment are not equal, let J1, J2, ..., JI denote the I sample sizes and let n = Σi Ji denote the total number of observations. The accompanying box gives ANOVA formulas and the test procedure.

SST = Σi Σj (Xij − X̄..)² = Σi Σj Xij² − (1/n)·X..²          df = n − 1

SSTr = Σi Ji(X̄i. − X̄..)² = Σi (1/Ji)·Xi.² − (1/n)·X..²      df = I − 1

SSE = Σi Σj (Xij − X̄i.)² = SST − SSTr                        df = Σi (Ji − 1) = n − I

Test statistic value: f = MSTr/MSE, where MSTr = SSTr/(I − 1) and MSE = SSE/(n − I)

Rejection region: f ≥ Fα,I−1,n−I

The correction factor (CF) X..²/n is subtracted when computing both SST and SSTr.
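A small sketch, using made-up unequal-size samples (illustrative values only, not data from the text) and assuming SciPy is installed, confirms that the boxed computing formulas reproduce the F ratio from a standard routine such as scipy.stats.f_oneway:

```python
from scipy.stats import f_oneway

# Hypothetical unequal-size samples (illustrative values only)
samples = [
    [3.1, 2.9, 3.4, 3.0, 3.2],
    [2.5, 2.8, 2.6, 2.7],
    [3.6, 3.5, 3.8, 3.4, 3.7, 3.6],
]

I = len(samples)
J = [len(s) for s in samples]               # J_1, ..., J_I
n = sum(J)                                  # total number of observations
grand_total = sum(sum(s) for s in samples)  # X..
cf = grand_total ** 2 / n                   # correction factor X..^2 / n

sst = sum(x * x for s in samples for x in s) - cf
sstr = sum(sum(s) ** 2 / len(s) for s in samples) - cf
sse = sst - sstr

f_stat = (sstr / (I - 1)) / (sse / (n - I))
print(f"hand formulas: f = {f_stat:.4f}")
print(f"scipy f_oneway: f = {f_oneway(*samples).statistic:.4f}")
```

The two computations agree exactly (up to floating-point rounding), since the correction-factor formulas are an algebraic rearrangement of the defining sums of squares.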
These formulas are derived in the same way (see Exercise 28) as the similar formulas in Section 11.1, except that it is harder here to show that MSTr/MSE has the F distribution under H0.

Example 11.8  The article "On the Development of a New Approach for the Determination of Yield Strength in Mg-Based Alloys" (Light Metal Age, Oct. 1998: 51–53) presented the following data on elastic modulus (GPa) obtained by a new ultrasonic method for specimens of an alloy produced using three different casting processes.

Process             Observations                                Ji     xi.     x̄i.
Permanent molding   45.5 45.3 45.4 44.4 44.6 43.9 44.6 44.0      8   357.7   44.71
Die casting         44.2 43.9 44.7 44.2 44.0 43.8 44.6 43.1      8   352.5   44.06
Plaster molding     46.0 45.9 44.8 46.2 45.1 45.5                6   273.5   45.58
                                                                22   983.7

Let μ1, μ2, and μ3 denote the true average elastic moduli for the three different processes under the given circumstances. The relevant hypotheses are H0: μ1 = μ2 = μ3 versus Ha: at least two of the μi's are different. The test statistic is, of course, F = MSTr/MSE, based on I − 1 = 2 numerator df and n − I = 22 − 3 = 19 denominator df. Relevant quantities include

Σ Σ xij² = 43,998.73    CF = 983.7²/22 = 43,984.80
SST = 43,998.73 − 43,984.80 = 13.93
SSTr = 357.7²/8 + 352.5²/8 + 273.5²/6 − 43,984.80 = 7.93
SSE = 13.93 − 7.93 = 6.00

The remaining computations are displayed in the accompanying ANOVA table. Since F.001,2,19 = 10.16 < 12.56 = f, the P-value is smaller than .001. Thus the null hypothesis should be rejected at any reasonable significance level; there is compelling evidence for concluding that true average elastic modulus somehow depends on which casting process is used.
Source of Variation   df   Sum of Squares   Mean Square       f
Treatments             2             7.93         3.965   12.56
Error                 19             6.00         .3158
Total                 21            13.93

Multiple Comparisons When Sample Sizes Are Unequal

There is more controversy among statisticians regarding which multiple comparisons procedure to use when sample sizes are unequal than there is in the case of equal sample sizes. The procedure that we present here is recommended in the excellent book Beyond ANOVA: Basics of Applied Statistics (see the chapter bibliography) for use when the I sample sizes J1, J2, ..., JI are reasonably close to each other ("mild imbalance"). It modifies Tukey's method by using averages of pairs of 1/Ji's in place of 1/J. Let

wij = Qα,I,n−I·√[(MSE/2)(1/Ji + 1/Jj)]

Then the probability is approximately 1 − α that

X̄i. − X̄j. − wij ≤ μi − μj ≤ X̄i. − X̄j. + wij

for every i and j (i = 1, ..., I and j = 1, ..., I) with i ≠ j. The simultaneous confidence level 100(1 − α)% is only approximate rather than exact as it is with equal sample sizes. The underscoring method can still be used, but now the wij factor used to decide whether x̄i. and x̄j. can be connected will depend on Ji and Jj.

Example 11.8 (continued)  The sample sizes for the elastic modulus data were J1 = 8, J2 = 8, J3 = 6, and I = 3, n − I = 19, MSE = .316. A simultaneous confidence level of approximately 95% requires Q.05,3,19 = 3.59, from which

w12 = 3.59·√[(.316/2)(1/8 + 1/8)] = .713    w13 = w23 = .771

Since x̄1. − x̄2. = 44.71 − 44.06 = .65 < w12, μ1 and μ2 are judged not significantly different. The accompanying underscoring scheme shows that μ1 and μ3 differ significantly, as do μ2 and μ3.

2. Die      1. Permanent      3. Plaster
44.06       44.71             45.58
_________________

Data Transformation

The use of ANOVA methods can be invalidated by substantial differences in the variances σ1², ..., σI² (which until now have been assumed equal with common value σ²). It sometimes happens that V(Xij) = σi² = g(μi), a known function of μi (so that when H0 is false, the variances are not equal). For example, if Xij has a Poisson distribution with parameter λi (approximately normal if λi ≥ 10), then μi = λi and σi² = λi, so g(μi) = μi is the known function. In such cases, one can often transform the Xij's to h(Xij)'s so that they will have approximately equal variances (while hopefully leaving the transformed variables approximately normal), and then the F test can be used on the transformed observations. The basic idea is that, if h(·) is a smooth function, then we can express it approximately using the first terms of a Taylor series, h(Xij) ≈ h(μi) + h′(μi)(Xij − μi). Then V[h(Xij)] ≈ V(Xij)·[h′(μi)]² = g(μi)·[h′(μi)]². We now wish to find the function h(·) for which g(μi)·[h′(μi)]² = c (a constant) for every i. Solving this for h′(μi) and integrating gives the following result:

PROPOSITION  If V(Xij) = g(μi), a known function of μi, then a transformation h(Xij) that "stabilizes the variance" so that V[h(Xij)] is approximately the same for each i is given by h(x) ∝ ∫[g(x)]^(−1/2) dx.

In the Poisson case, g(x) = x, so h(x) should be proportional to ∫x^(−1/2) dx = 2√x. Thus Poisson data should be transformed to h(xij) = √xij before the analysis.

A Random Effects Model

The single-factor problems considered so far have all been assumed to be examples of a fixed effects ANOVA model. By this we mean that the chosen levels of the factor under study are the only ones considered relevant by the experimenter. The single-factor fixed effects model is

Xij = μ + αi + εij    Σ αi = 0    (11.6)

where the εij's are random and both μ and the αi's are fixed parameters whose values are unknown.

In some single-factor problems, the particular levels studied by the experimenter are chosen, either by design or through sampling, from a large population of levels.
For example, to study the effects on task performance time of using different operators on a particular machine, a sample of five operators might be chosen from a large pool of operators. Similarly, the effect of soil pH on the yield of maize plants might be studied by using soils with four specific pH values chosen from among the many possible pH levels. When the levels used are selected at random from a larger population of possible levels, the factor is said to be random rather than fixed, and the fixed effects model (11.6) is no longer appropriate. An analogous random effects model is obtained by replacing the fixed αi's in (11.6) by random variables. The resulting model description is

Xij = μ + Ai + εij    with E(Ai) = E(εij) = 0
V(εij) = σ²    V(Ai) = σA²                         (11.7)

with all Ai's and εij's normally distributed and independent of each other.

The condition E(Ai) = 0 in (11.7) is similar to the condition Σ αi = 0 in (11.6); it states that the expected or average effect of the ith level measured as a departure from μ is zero. For the random effects model (11.7), the hypothesis of no effects due to different levels is H0: σA² = 0, which says that different levels of the factor contribute nothing to variability of the response. Although the hypotheses in the single-factor fixed and random effects models are different, they are tested in exactly the same way, by forming F = MSTr/MSE and rejecting H0 if f ≥ Fα,I−1,n−I. This can be justified intuitively by noting that E(MSE) = σ² (as for fixed effects), whereas

E(MSTr) = σ² + [σA²/(I − 1)]·(n − Σ Ji²/n)         (11.8)

where J1, J2, ..., JI are the sample sizes and n = Σ Ji. The factor in parentheses on the right side of (11.8) is nonnegative, so once again E(MSTr) = σ² if H0 is true and E(MSTr) > σ² if H0 is false.

The study of nondestructive forces and stresses in materials furnishes important information for efficient design.
The article "Zero-Force Travel-Time Parameters for Ultrasonic Head-Waves in Railroad Rail" (Mater. Eval., 1985: 854–858) reports on a study of travel time for a type of wave that results from longitudinal stress of rails used for railroad track. Three measurements were made on each of six rails randomly selected from a population of rails. The investigators used random effects ANOVA to decide whether some variation in travel time could be attributed to "between-rail variability." The data is given in the accompanying table (each value, in nanoseconds, resulted from subtracting 36.1 μs from the original observation) along with the derived ANOVA table. The value of the F ratio is highly significant, so H0: σA² = 0 is rejected in favor of the conclusion that differences between rails are a source of travel-time variability.

Rail   Travel time          xi.
1      55    53    54       162
2      26    37    32        95
3      78    91    85       254
4      92   100    96       288
5      49    51    50       150
6      80    85    83       248
                     x.. = 1197

Source of Variation   df   Sum of Squares   Mean Square       f
Treatments             5           9310.5        1862.1   115.2
Error                 12            194.0         16.17
Total                 17           9504.5

Exercises | Section 11.3 (22–34)

22. The following data refers to yield of tomatoes (kg/plot) for four different levels of salinity; salinity level here refers to electrical conductivity (EC), where the chosen levels were EC = 1.6, 3.8, 6.0, and 10.2:

1.6:   59.5  53.3  56.8  63.1  58.7
3.8:   55.2  59.1  52.8  54.5
6.0:   51.7  48.8  53.9  49.0
10.2:  44.6  48.5  41.0  47.3  46.1

Use the F test at level α = .05 to test for any differences in true average yield due to the different salinity levels.

23. Apply the modified Tukey's method to the data in Exercise 22 to identify significant differences among the μi's.

24. The following partial ANOVA table is taken from the article "Perception of Spatial Incongruity" (J. Nerv. Ment. Dis., 1961: 222) in which the abilities of three different groups to identify a perceptual incongruity were assessed and compared. All individuals in the experiment had been hospitalized to undergo psychiatric treatment. There were 21 individuals in the depressive group, 32 individuals in the functional "other" group, and 21 individuals in the brain-damaged group. Complete the ANOVA table and carry out the F test at level α = .01.

Source   df   Sum of Squares   Mean Square   f
Groups                 76.09
Error
Total                1123.14

25. Lipids provide much of the dietary energy in the bodies of infants and young children. There is a growing interest in the quality of the dietary lipid supply during infancy as a major determinant of growth, visual and neural development, and long-term health. The article "Essential Fat Requirements of Preterm Infants" (Amer. J. Clin. Nutrit., 2000: 245S–250S) reported the following data on total polyunsaturated fats (%) for infants who were randomized to four different feeding regimens: breast milk, corn-oil-based formula, soy-oil-based formula, or soy-and-marine-oil-based formula:

Regimen       Sample Size   Sample Mean   Sample SD
Breast milk             8             …           …
CO                     13          42.4         1.3
SO                      …             …           …
SMO                     …             …           …

a. What assumptions must be made about the four total polyunsaturated fat distributions before carrying out a single-factor ANOVA to decide whether there are any differences in true average fat content?
b. Carry out the test suggested in part (a). What can be said about the P-value?

26. Samples of six different brands of diet/imitation margarine were analyzed to determine the level of physiologically active polyunsaturated fatty acids (PAPFUA, in percentages), resulting in the following data:

Imperial        14.1  13.6  14.4  14.3
Parkay          12.8  12.5  13.4  13.0  12.3
Blue Bonnet     13.5  13.4  14.1  14.3
Chiffon         13.2  12.7  12.6  13.9
Mazola          16.8  17.2  16.4  17.3  18.0
Fleischmann's   18.1  17.2  18.7  18.4

(The preceding numbers are fictitious, but the sample means agree with data reported in the January 1975 issue of Consumer Reports.)
a. Use ANOVA to test for differences among the true average PAPFUA percentages for the different brands.
b. Compute CIs for all (μi − μj)'s.
c. Mazola and Fleischmann's are corn-based, whereas the others are soybean-based. Compute a CI for

(μ1 + μ2 + μ3 + μ4)/4 − (μ5 + μ6)/2

[Hint: Modify the expression for V(θ̂) that led to (11.5).]

27. Although tea is the world's most widely consumed beverage after water, little is known about its nutritional value. Folacin is the only B vitamin present in any significant amount in tea, and recent advances in assay methods have made accurate determination of folacin content feasible. Consider the accompanying data on folacin content for randomly selected specimens of the four leading brands of green tea.

Brand   Observations
1       7.9  6.2  6.6  8.6  8.9  10.1  9.6
2         …    …    …    …    …
3       6.8  7.5  5.0    …  5.3   6.1
4       6.4  7.1  7.9  4.5  5.0   4.0

(Data is based on "Folacin Content of Tea," J. Amer. Dietetic Assoc., 1983: 627–632.) Does this data suggest that true average folacin content is the same for all brands?
a. Carry out a test using α = .05 via the P-value method.
b. Assess the plausibility of any assumptions required for your analysis in part (a).
c. Perform a multiple comparisons analysis to identify significant differences among brands.

28. In single-factor ANOVA with sample sizes Ji (i = 1, ..., I), show that SSTr = Σ Ji(X̄i. − X̄..)² = Σ JiX̄i.² − nX̄..², where n = Σ Ji.

29. When sample sizes are equal (Ji = J), the parameters α1, α2, ..., αI of the alternative parameterization are restricted by Σ αi = 0. For unequal sample sizes, the most natural restriction is Σ Jiαi = 0. Use this to show that

E(MSTr) = σ² + [1/(I − 1)]·Σ Jiαi²

What is E(MSTr) when H0 is true? [This expectation is correct if Σ Jiαi = 0 is replaced by the restriction Σ αi = 0 (or any other single linear restriction on the αi's used to reduce the model to I independent parameters), but Σ Jiαi = 0 simplifies the algebra and yields natural estimates for the model parameters (in particular, α̂i = x̄i. − x̄..).]

30. Reconsider Example 11.7 involving an investigation of the effects of different heat treatments on the yield point of steel ingots.
a. If J = 8 and σ = 1, what is β for a level .05 F test when μ1 = μ2, μ3 = μ1 − 1, and μ4 = μ1 + 1?
b. For the alternative of part (a), what value of J is necessary to obtain β = .05?
c. If there are I = 5 heat treatments, J = 10, and σ = 1, what is β for the level .05 F test when four of the μi's are equal and the fifth differs by 1 from the other four?

31. For unequal sample sizes, the noncentrality parameter is Σ Jiαi²/σ² and φ² = (1/I)·Σ Jiαi²/σ². Referring to Exercise 22, what is the power of the test when μ2 = μ3, μ1 = μ2 − σ, and μ4 = μ2 + σ?

32. In an experiment to compare the quality of four different brands of reel-to-reel recording tape, five 2400-ft reels of each brand (A–D) were selected and the number of flaws in each reel was determined.

A   10    …    …    …    …
B    …    …    …    9    8
C    …   18   10   15   18
D    …    …    …   12   22

It is believed that the number of flaws has approximately a Poisson distribution for each brand. Analyze the data at level .01 to see whether the expected number of flaws per reel is the same for each brand.

33. Suppose that Xij is a binomial variable with parameters n and pi (so it is approximately normal when npi ≥ 10 and nqi ≥ 10). Then since μi = npi, V(Xij) = σi² = npi(1 − pi) = μi(1 − μi/n). How should the Xij's be transformed so as to stabilize the variance? [Hint: g(μi) = μi(1 − μi/n).]

34. Simplify E(MSTr) for the random effects model when J1 = J2 = ··· = JI = J.

11.4 Two-Factor ANOVA with Kij = 1

In many experimental situations there are two factors of simultaneous interest. For example, suppose an investigator wishes to study permeability of woven material used to construct automobile air bags (related to the ability to absorb energy).
An experiment might be carried out using I = 4 temperature levels (10°C, 15°C, 20°C, 25°C) and J = 3 levels of fabric denier (420-D, 630-D, 840-D). When factor A consists of I levels and factor B consists of J levels, there are IJ different combinations (pairs) of levels of the two factors, each called a treatment. With K_ij = the number of observations on the treatment consisting of factor A at level i and factor B at level j, we focus in this section on the case K_ij = 1, so that the data consists of IJ observations. We will first discuss the fixed effects model, in which the only levels of interest for the two factors are those actually represented in the experiment. The case in which one or both factors are random is discussed briefly at the end of the section.

Example 11.11  Is it really as easy to remove marks on fabrics from erasable pens as the word erasable might imply? Consider the following data from an experiment to compare three different brands of pens and four different wash treatments with respect to their ability to remove marks on a particular type of fabric (based on "An Assessment of the Effects of Treatment, Time, and Heat on the Removal of Erasable Pen Marks from Cotton and Cotton/Polyester Blend Fabrics," J. Test. Eval., 1991: 394–397). The response variable is a quantitative indicator of overall specimen color change; the lower this value, the more marks were removed.

                              Washing treatment
                        1      2      3      4     Total
                 1     .97    .48    .48    .46    2.39
  Brand of pen   2     .77    .14    .22    .25    1.38
                 3     .67    .39    .57    .19    1.82
       Total          2.41   1.01   1.27    .90    5.59

Is there any difference in the true average amount of color change due either to the different brands of pen or to the different washing treatments?

As in single-factor ANOVA, double subscripts are used to identify random variables and observed values.
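As a quick arithmetic check, the row totals, column totals, and grand total in the table above can be reproduced in a few lines. This is a minimal sketch in Python (not part of the text's analysis); the nested-list layout mirrors the two-way table, with x[i][j] the observation for brand i+1 under wash treatment j+1:

```python
# Erasable-pen data of Example 11.11: rows = pen brands, columns = wash treatments
x = [[.97, .48, .48, .46],
     [.77, .14, .22, .25],
     [.67, .39, .57, .19]]
I, J = len(x), len(x[0])

row_totals = [round(sum(row), 2) for row in x]                             # x_i.
col_totals = [round(sum(x[i][j] for i in range(I)), 2) for j in range(J)]  # x_.j
grand_total = round(sum(map(sum, x)), 2)                                   # x_..

print(row_totals)    # [2.39, 1.38, 1.82]
print(col_totals)    # [2.41, 1.01, 1.27, 0.9]
print(grand_total)   # 5.59
```

These are exactly the margins printed along the right and bottom of the table.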
Let

X_ij = the random variable (rv) denoting the measurement when factor A is held at level i and factor B is held at level j
x_ij = the observed value of X_ij

The x_ij's are usually presented in a two-way table in which the ith row contains the observed values when factor A is held at level i and the jth column contains the observed values when factor B is held at level j. In the erasable-pen experiment of Example 11.11, the number of levels of factor A is I = 3, the number of levels of factor B is J = 4, x_13 = .48, x_22 = .14, and so on.

Whereas in single-factor ANOVA we were interested only in row means and the grand mean, here we are interested also in column means. Let

$\bar X_{i.}$ = the average of data obtained when factor A is held at level i = $\frac{\sum_{j=1}^{J} X_{ij}}{J}$

$\bar X_{.j}$ = the average of data obtained when factor B is held at level j = $\frac{\sum_{i=1}^{I} X_{ij}}{I}$

$\bar X_{..}$ = the grand mean = $\frac{\sum_{i=1}^{I}\sum_{j=1}^{J} X_{ij}}{IJ}$

with observed values x̄_i., x̄_.j, and x̄... Totals rather than averages are denoted by omitting the horizontal bar (so $x_{i.} = \sum_j x_{ij}$, etc.). Intuitively, to see whether there is any effect due to the levels of factor A, we should compare the observed x̄_i.'s with each other, and information about the different levels of factor B should come from the x̄_.j's.

The Model

Proceeding by analogy to single-factor ANOVA, one's first inclination in specifying a model is to let μ_ij = the true average response when factor A is at level i and factor B at level j, giving IJ mean parameters. Then let

X_ij = μ_ij + ε_ij

where ε_ij is the random amount by which the observed value differs from its expectation and the ε_ij's are assumed normal and independent with common variance σ². Unfortunately, there is no valid test procedure for this choice of parameters. The reason is that under the alternative hypothesis of interest, the μ_ij's are free to take on any values whatsoever, whereas σ² can be any value greater than zero, so that there are IJ + 1 freely varying parameters.
But there are only IJ observations, so after using each x_ij as an estimate of μ_ij, there is no way to estimate σ².

To rectify this problem of a model having more parameters than observed values, we must specify a model that is realistic yet involves relatively few parameters. Assume the existence of I parameters α₁, α₂, …, α_I and J parameters β₁, β₂, …, β_J such that

X_ij = α_i + β_j + ε_ij   (i = 1, …, I; j = 1, …, J)   (11.9)

so that

μ_ij = α_i + β_j   (11.10)

Including σ², there are now I + J + 1 model parameters, so if I ≥ 3 and J ≥ 3, there will be fewer parameters than observations [in fact, we will shortly modify (11.10) so that even I = 2 and/or J = 2 will be accommodated].

The model specified in (11.9) and (11.10) is called an additive model because each mean response μ_ij is the sum of an effect due to factor A at level i (α_i) and an effect due to factor B at level j (β_j). The difference between mean responses for factor A at level i and level i′ when B is held at level j is μ_ij − μ_i′j. When the model is additive,

μ_ij − μ_i′j = (α_i + β_j) − (α_i′ + β_j) = α_i − α_i′

which is independent of the level j of the second factor. A similar result holds for μ_ij − μ_ij′. Thus additivity means that the difference in mean responses for two levels of one of the factors is the same for all levels of the other factor. Figure 11.7(a) shows a set of mean responses that satisfy the condition of additivity (which implies parallel lines), and Figure 11.7(b) shows a nonadditive configuration of mean responses.

Figure 11.7 Mean responses for two types of model: (a) additive; (b) nonadditive (mean response plotted against levels of A for each level of B)

Example 11.12 (Example 11.11 continued)  When we plot the observed x_ij's in a manner analogous to that of Figure 11.7, we get the result shown in Figure 11.8.
Although there is some "crossing over" in the observed x_ij's, the configuration is reasonably representative of what would be expected under additivity with just one observation per treatment.

Figure 11.8 Plot of data from Example 11.11 (color change versus washing treatment for brands 1–3)

Expression (11.10) is not quite the final model description because the α_i's and β_j's are not uniquely determined. For instance, with I = J = 2, the two configurations

α₁ = 1, α₂ = 2, β₁ = 1, β₂ = 4   and   α₁ = 2, α₂ = 3, β₁ = 0, β₂ = 3

yield the same additive μ_ij's (μ₁₁ = 2, μ₁₂ = 5, μ₂₁ = 3, μ₂₂ = 6). By subtracting any constant c from all α_i's and adding c to all β_j's, other configurations corresponding to the same additive model are obtained. This nonuniqueness is eliminated by use of the following model:

X_ij = μ + α_i + β_j + ε_ij   (11.11)

where $\sum_{i=1}^{I}\alpha_i = 0$, $\sum_{j=1}^{J}\beta_j = 0$, and the ε_ij's are assumed independent, normally distributed, with mean 0 and common variance σ². This is analogous to the alternative choice of parameters for single-factor ANOVA discussed in Section 11.3.

It is not difficult to verify that (11.11) is an additive model in which the parameters are uniquely determined (e.g., for the μ_ij's mentioned previously, μ = 4, α₁ = −.5, α₂ = .5, β₁ = −1.5, and β₂ = 1.5). Notice that there are only I − 1 independently determined α_i's and J − 1 independently determined β_j's, so (including μ) (11.11) specifies I + J − 1 mean parameters.

The interpretation of the parameters of (11.11) is straightforward: μ is the true grand mean (mean response averaged over all levels of both factors), α_i is the effect of factor A at level i (measured as a deviation from μ), and β_j is the effect of factor B at level j. Unbiased (and maximum likelihood) estimators for these parameters are

μ̂ = X̄..    α̂_i = X̄_i. − X̄..    β̂_j = X̄_.j − X̄..

There are two different hypotheses of interest in a two-factor experiment with K_ij = 1.
The first, denoted by H_0A, states that the different levels of factor A have no effect on true average response. The second, denoted by H_0B, asserts that there is no factor B effect.

H_0A: α₁ = α₂ = ⋯ = α_I = 0 versus H_aA: at least one α_i ≠ 0   (11.12)
H_0B: β₁ = β₂ = ⋯ = β_J = 0 versus H_aB: at least one β_j ≠ 0

(No factor A effect implies that all α_i's are equal, so they must all be 0 since they sum to 0, and similarly for the β_j's.)

Test Procedures

The description and analysis now follow closely that for single-factor ANOVA. The relevant sums of squares and their computing forms are given by

$SST = \sum_{i=1}^{I}\sum_{j=1}^{J}(X_{ij}-\bar X_{..})^2 = \sum_i\sum_j X_{ij}^2 - \frac{X_{..}^2}{IJ}$,  df = IJ − 1

$SSA = J\sum_{i=1}^{I}(\bar X_{i.}-\bar X_{..})^2 = \frac{1}{J}\sum_i X_{i.}^2 - \frac{X_{..}^2}{IJ}$,  df = I − 1

$SSB = I\sum_{j=1}^{J}(\bar X_{.j}-\bar X_{..})^2 = \frac{1}{I}\sum_j X_{.j}^2 - \frac{X_{..}^2}{IJ}$,  df = J − 1   (11.13)

$SSE = \sum_{i=1}^{I}\sum_{j=1}^{J}(X_{ij}-\bar X_{i.}-\bar X_{.j}+\bar X_{..})^2$,  df = (I − 1)(J − 1)

and the fundamental identity

SST = SSA + SSB + SSE   (11.14)

allows SSE to be determined by subtraction.

The expression for SSE results from replacing μ, α_i, and β_j in $\sum\sum[X_{ij} - (\mu + \alpha_i + \beta_j)]^2$ by their respective estimators. Error df is IJ − number of mean parameters estimated = IJ − [1 + (I − 1) + (J − 1)] = (I − 1)(J − 1). As in single-factor ANOVA, total variation is split into a part (SSE) that is not explained by either the truth or the falsity of H_0A or H_0B and two parts that can be explained by possible falsity of the two null hypotheses.

Forming F ratios as in single-factor ANOVA, we can show as in Section 11.1 that if H_0A is true, the corresponding F ratio has an F distribution with numerator df = I − 1 and denominator df = (I − 1)(J − 1); an analogous result applies when testing H_0B.

Hypotheses           Test Statistic Value    Rejection Region
H_0A versus H_aA     f_A = MSA/MSE           f_A ≥ F_{α, I−1, (I−1)(J−1)}
H_0B versus H_aB     f_B = MSB/MSE           f_B ≥ F_{α, J−1, (I−1)(J−1)}

Example 11.13 (Example 11.12 continued)  The x_i.'s (row totals) and x_.j's (column totals) for the color change data are displayed along the right and bottom margins of the data table in Example 11.11.
In addition, $\sum\sum x_{ij}^2 = 3.2987$ and the correction factor is $x_{..}^2/(IJ) = (5.59)^2/12 = 2.6040$. The sums of squares are then

SST = 3.2987 − 2.6040 = .6947
SSA = (1/4)[2.39² + 1.38² + 1.82²] − 2.6040 = .1282
SSB = (1/3)[2.41² + 1.01² + 1.27² + .90²] − 2.6040 = .4797
SSE = .6947 − (.1282 + .4797) = .0868

The accompanying ANOVA table (Table 11.5) summarizes further calculations.

Table 11.5 ANOVA table for Example 11.13

Source of Variation        df                Sum of Squares   Mean Square     f
Factor A (pen brand)       I − 1 = 2         SSA = .1282      MSA = .0641     f_A = 4.43
Factor B (wash treatment)  J − 1 = 3         SSB = .4797      MSB = .1599     f_B = 11.05
Error                      (I−1)(J−1) = 6    SSE = .0868      MSE = .01447
Total                      IJ − 1 = 11       SST = .6947

The critical value for testing H_0A at level of significance .05 is F_.05,2,6 = 5.14. Since 4.43 < 5.14, H_0A cannot be rejected at significance level .05. Based on this (small) data set, we cannot conclude that true average color change depends on brand of pen. Because F_.05,3,6 = 4.76 and 11.05 ≥ 4.76, H_0B is rejected at significance level .05 in favor of the assertion that color change varies with washing treatment. A statistical computer package gives P-values of .066 and .007 for these two tests.

How can plausibility of the normality and constant variance assumptions be investigated graphically? Define the predicted values (also called fitted values)

x̂_ij = μ̂ + α̂_i + β̂_j = x̄.. + (x̄_i. − x̄..) + (x̄_.j − x̄..) = x̄_i. + x̄_.j − x̄..

and the residuals (the differences between the observations and predicted values)

x_ij − x̂_ij = x_ij − x̄_i. − x̄_.j + x̄..

We can check the normality assumption with a normal plot of the residuals, Figure 11.9(a), and we can check the constant variance assumption with a plot of the residuals against the fitted values, Figure 11.9(b).
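The entries of Table 11.5, the residuals just defined, and the Tukey yardstick w used later for the wash treatments all follow directly from the defining formulas (11.13). A minimal pure-Python sketch (this is not the text's statistical package; the value Q_.05,4,6 = 4.90 is read from a Studentized range table):

```python
import math

# Erasable-pen data of Example 11.11 (rows = brands, columns = wash treatments)
x = [[.97, .48, .48, .46],
     [.77, .14, .22, .25],
     [.67, .39, .57, .19]]
I, J = len(x), len(x[0])

xbar = sum(map(sum, x)) / (I * J)                                   # grand mean
rbar = [sum(row) / J for row in x]                                  # xbar_i.
cbar = [sum(x[i][j] for i in range(I)) / I for j in range(J)]       # xbar_.j

sst = sum((x[i][j] - xbar) ** 2 for i in range(I) for j in range(J))
ssa = J * sum((m - xbar) ** 2 for m in rbar)
ssb = I * sum((m - xbar) ** 2 for m in cbar)
sse = sum((x[i][j] - rbar[i] - cbar[j] + xbar) ** 2
          for i in range(I) for j in range(J))
assert math.isclose(sst, ssa + ssb + sse)        # fundamental identity (11.14)

msa, msb, mse = ssa / (I - 1), ssb / (J - 1), sse / ((I - 1) * (J - 1))
fA, fB = msa / mse, msb / mse

# Residuals x_ij - (xbar_i. + xbar_.j - xbar_..), the inputs to the diagnostic plots
resid = [[x[i][j] - rbar[i] - cbar[j] + xbar for j in range(J)] for i in range(I)]

# Tukey yardstick for comparing the J wash treatments: Q_.05,4,6 = 4.90
w = 4.90 * math.sqrt(mse / I)

print(round(ssa, 4), round(ssb, 4), round(sse, 4))   # 0.1282 0.4797 0.0868
print(round(fA, 2), round(fB, 2), round(w, 2))       # 4.43 11.05 0.34
```

Up to rounding, this reproduces Table 11.5; the same few lines applied to the power-consumption data of Table 11.6 reproduce Table 11.7.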
Figure 11.9 Plots from MINITAB for Example 11.13: (a) normal probability plot of the residuals; (b) residuals versus the fitted values

The normal plot is reasonably straight, so there is no reason to question normality for this data set. On the plot of the residuals against the fitted values, we are looking for differences in vertical spread as we move horizontally across the graph. For example, if there were a narrow range for small fitted values and a wide range for high fitted values, this would suggest that the variance is higher for larger responses (this happens often, and it can sometimes be cured by replacing each observation by its logarithm). No such problem occurs here, so there is no evidence against the constant variance assumption.

Expected Mean Squares

The plausibility of using the F tests just described is demonstrated by determining the expected mean squares. After some tedious algebra,

E(MSE) = σ² (when the model is additive)

$E(MSA) = \sigma^2 + \frac{J}{I-1}\sum_{i=1}^{I}\alpha_i^2$

$E(MSB) = \sigma^2 + \frac{I}{J-1}\sum_{j=1}^{J}\beta_j^2$

When H_0A is true, MSA is an unbiased estimator of σ², so F_A is a ratio of two unbiased estimators of σ². When H_0A is false, MSA tends to overestimate σ², so H_0A should be rejected when the ratio f_A is too large. Similar comments apply to MSB and H_0B.

Multiple Comparisons

When either H_0A or H_0B has been rejected, Tukey's procedure can be used to identify significant differences between the levels of the factor under investigation. The steps in the analysis are identical to those for a single-factor ANOVA:

1. For comparing levels of factor A, obtain Q_{α, I, (I−1)(J−1)}. For comparing levels of factor B, obtain Q_{α, J, (I−1)(J−1)}.
2.
Compute

w = Q · (estimated standard deviation of the sample means being compared)
  = Q_{α, I, (I−1)(J−1)} · √(MSE/J) for factor A comparisons
  = Q_{α, J, (I−1)(J−1)} · √(MSE/I) for factor B comparisons

(because, e.g., the standard deviation of X̄_i. is σ/√J).

3. Arrange the sample means in increasing order, underscore those pairs differing by less than w, and identify pairs not underscored by the same line as corresponding to significantly different levels of the given factor.

Example 11.14 (Example 11.13 continued)  Identification of significant differences among the four washing treatments requires Q_.05,4,6 = 4.90 and w = 4.90√(.01447/3) = .340. The four factor B sample means (column averages) are now listed in increasing order, and any pair differing by less than .340 is underscored by a line segment:

x̄_.4    x̄_.2    x̄_.3    x̄_.1
.300    .337    .423    .803
――――――――――――――――――――

Washing treatment 1 is significantly worse than the other three treatments, but no other significant differences are identified. In particular, it is not apparent which among treatments 2, 3, and 4 is best at removing marks.

Randomized Block Experiments

In using single-factor ANOVA to test for the presence of effects due to the I different treatments under study, once the IJ subjects or experimental units have been chosen, treatments should be allocated in a completely random fashion. That is, J subjects should be chosen at random for the first treatment, then another sample of J chosen at random from the remaining IJ − J subjects for the second treatment, and so on. It frequently happens, though, that subjects or experimental units exhibit differences with respect to other characteristics that may affect the observed responses. For example, some patients might be healthier than others. When this is the case, the presence or absence of a significant F value may be due to these differences rather than to the presence or absence of factor effects. This was the reason for introducing a paired experiment in Chapter 10.
The generalization of the paired experiment to I > 2 is called a randomized block experiment. An extraneous factor, "blocks," is constructed by dividing the IJ units into J groups with I units in each group. This grouping or blocking is done in such a way that within each block, the I units are homogeneous with respect to other factors thought to affect the responses. Then within each homogeneous block, the I treatments are randomly assigned to the I units or subjects in the block.

Example 11.15  A consumer product-testing organization wished to compare the annual power consumption for five different brands of dehumidifier. Because power consumption depends on the prevailing humidity level, it was decided to monitor each brand at four different levels ranging from moderate to heavy humidity (thus blocking on humidity level). Within each level, brands were randomly assigned to the five selected locations. The resulting amount of power consumption (annual kWh) appears in Table 11.6.

Table 11.6 Power consumption data for Example 11.15

                          Blocks (humidity level)
Treatments (brands)      1      2      3      4      x_i.     x̄_i.
         1             685    792    838    875     3190    797.50
         2             722    806    893    953     3374    843.50
         3             733    802    880    941     3356    839.00
         4             811    888    952   1005     3656    914.00
         5             828    920    978   1023     3749    937.25
       x_.j           3779   4208   4541   4797   17,325

Since $\sum\sum x_{ij}^2 = 15{,}178{,}901.00$ and $x_{..}^2/(IJ) = (17{,}325)^2/20 = 15{,}007{,}781.25$,

SST = 15,178,901.00 − 15,007,781.25 = 171,119.75
SSA = (1/4)[60,244,049] − 15,007,781.25 = 53,231.00
SSB = (1/5)[75,619,995] − 15,007,781.25 = 116,217.75
SSE = 171,119.75 − 53,231.00 − 116,217.75 = 1671.00

The ANOVA calculations are summarized in Table 11.7.

Table 11.7 ANOVA table for Example 11.15

Source of Variation     df    Sum of Squares    Mean Square     f
Treatments (brands)      4      53,231.00       13,307.75      f_A = 95.57
Blocks                   3     116,217.75       38,739.25      f_B = 278.20
Error                   12        1671.00          139.25
Total                   19     171,119.75

Since F_.05,4,12 = 3.26 and f_A = 95.57 ≥ 3.26, H_0A is rejected in favor of H_aA, and we conclude that power consumption does
depend on the brand of dehumidifier. To identify significantly different brands, we use Tukey's procedure: Q_.05,5,12 = 4.51 and w = 4.51√(139.25/4) = 26.6.

x̄_1.     x̄_3.     x̄_2.     x̄_4.     x̄_5.
797.50   839.00   843.50   914.00   937.25
         ―――――――――――――――   ―――――――――――――――

The underscoring indicates that the brands can be divided into three groups with respect to power consumption. Because the block factor is of secondary interest, F_.05,3,12 is not needed, though the computed value of f_B is clearly highly significant.

Figure 11.10 shows SAS output for this data. Notice that in the first part of the ANOVA table, the sums of squares (SS's) for treatments (brands) and blocks (humidity levels) are combined into a single "model" SS.

In many experimental situations in which treatments are to be applied to subjects, a single subject can receive all I of the treatments. Blocking is then often done on the subjects themselves to control for variability between subjects; each subject is then said to act as its own control. Social scientists sometimes refer to such experiments as repeated-measures designs. The "units" within a block are then the different "instances" of treatment application. Similarly, blocks are often taken as different time periods, locations, or observers.

Analysis of Variance Procedure
Dependent Variable: POWERUSE

Source            DF    Sum of Squares    Mean Square    F Value    Pr > F
Model              7      169448.750       24206.964      173.84    0.0001
Error             12        1671.000         139.250
Corrected Total   19      171119.750

R-Square    C.V.        Root MSE    POWERUSE Mean
0.990235    1.362242    11.8004     866.25000

Source      DF    Anova SS       Mean Square    F Value    Pr > F
BRAND        4    53231.000      13307.750       95.57     0.0001
HUMIDITY     3    116217.750     38739.250      278.20     0.0001

Alpha = 0.05   df = 12   MSE = 139.25
Critical Value of Studentized Range = 4.508
Minimum Significant Difference = 26.597

Means with the same letter are not significantly different.
Tukey Grouping    Mean       N    BRAND
      A          937.250     4      5
      A          914.000     4      4
      B          843.500     4      2
      B          839.000     4      3
      C          797.500     4      1

Figure 11.10 SAS output for power consumption data

In most randomized block experiments in which subjects serve as blocks, the subjects actually participating in the experiment are selected from a large population. The subjects then contribute random rather than fixed effects. This does not affect the procedure for comparing treatments when K_ij = 1 (one observation per "cell," as in this section), but the procedure is altered if K_ij = K > 1. We will shortly consider two-factor models in which effects are random.

More on Blocking

When I = 2, either the F test or the paired differences t test can be used to analyze the data. The resulting conclusion will not depend on which procedure is used, since T² = F and $t^2_{\alpha/2,\,J-1} = F_{\alpha,\,1,\,J-1}$.

Just as with pairing, blocking entails both a potential gain and a potential loss in precision. If there is a great deal of heterogeneity in experimental units, the value of the variance parameter σ² in the one-way model will be large. The effect of blocking is to filter out the variation represented by σ² in the two-way model appropriate for a randomized block experiment. Other things being equal, a smaller value of σ² results in a test that is more likely to detect departures from H₀ (i.e., a test with greater power).

However, other things are not equal here, since the single-factor F test is based on I(J − 1) degrees of freedom (df) for error, whereas the two-factor F test is based on (I − 1)(J − 1) df for error. Fewer degrees of freedom for error results in a decrease in power, essentially because the denominator estimator of σ² is not as precise. This loss in degrees of freedom can be especially serious if the experimenter can afford only a small number of observations.
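The claim above that for I = 2 the randomized block F test and the paired differences t test always agree (T² = F) is easy to check numerically. A small sketch with made-up data (the numbers below are illustrative, not from the text):

```python
import math

# Two treatments (rows) observed in each of four blocks (columns) -- illustrative data
x = [[5.0, 7.0, 9.0, 4.0],
     [6.0, 9.0, 10.0, 6.0]]
I, J = len(x), len(x[0])   # I = 2

# Paired t statistic based on the within-block differences
d = [x[0][j] - x[1][j] for j in range(J)]
dbar = sum(d) / J
sd2 = sum((di - dbar) ** 2 for di in d) / (J - 1)   # sample variance of the d's
t = dbar / math.sqrt(sd2 / J)

# Randomized block F ratio f_A = MSA/MSE for the same data
xbar = sum(map(sum, x)) / (I * J)
rbar = [sum(row) / J for row in x]
cbar = [sum(x[i][j] for i in range(I)) / I for j in range(J)]
msa = J * sum((m - xbar) ** 2 for m in rbar) / (I - 1)
mse = sum((x[i][j] - rbar[i] - cbar[j] + xbar) ** 2
          for i in range(I) for j in range(J)) / ((I - 1) * (J - 1))
f = msa / mse

print(round(t ** 2, 6), round(f, 6))   # both 27.0: the two procedures agree
```

Since the critical values also correspond (t²_{α/2,J−1} = F_{α,1,J−1}), the two analyses reach identical conclusions.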
Nevertheless, if it appears that blocking will significantly reduce variability, it is probably worth the loss in degrees of freedom.

Models for Random Effects

In many experiments, the actual levels of a factor used in the experiment, rather than being the only ones of interest to the experimenter, have been selected from a much larger population of possible levels of the factor. In a two-factor situation, when this is the case for both factors, a random effects model is appropriate. The case in which the levels of one factor are the only ones of interest and the levels of the other factor are selected from a population of levels leads to a mixed effects model. The two-factor random effects model when K_ij = 1 is

X_ij = μ + A_i + B_j + ε_ij   (i = 1, …, I; j = 1, …, J)

where the A_i's, B_j's, and ε_ij's are all independent, normally distributed rv's with mean 0 and variances σ_A², σ_B², and σ², respectively. The hypotheses of interest are then H_0A: σ_A² = 0 (level of factor A does not contribute to variation in the response) versus H_aA: σ_A² > 0 and H_0B: σ_B² = 0 versus H_aB: σ_B² > 0. Whereas E(MSE) = σ² as before, the expected mean squares for factors A and B are now

E(MSA) = σ² + Jσ_A²    E(MSB) = σ² + Iσ_B²

Thus when H_0A (H_0B) is true, F_A (F_B) is still a ratio of two unbiased estimators of σ². It can be shown that a test with significance level α for H_0A versus H_aA still rejects H_0A if f_A ≥ F_{α, I−1, (I−1)(J−1)}, and, similarly, the same procedure as before is used to decide between H_0B and H_aB.

For the case in which factor A is fixed and factor B is random, the mixed model is

X_ij = μ + α_i + B_j + ε_ij   (i = 1, …, I; j = 1, …, J)

where Σα_i = 0, and the B_j's and ε_ij's are all independent, normally distributed rv's with mean 0 and variances σ_B² and σ², respectively. Now the two null hypotheses are

H_0A: α₁ = ⋯ = α_I = 0   and   H_0B: σ_B² = 0

with expected mean squares

E(MSE) = σ²    $E(MSA) = \sigma^2 + \frac{J}{I-1}\sum\alpha_i^2$    E(MSB) = σ² + Iσ_B²

The test procedures for H_0A versus H_aA and H_0B versus H_aB are exactly as before. For example, in the analysis of the color change data in Example 11.11, if the four wash treatments were randomly selected, then because f_B = 11.05 and F_.05,3,6 = 4.76, H_0B: σ_B² = 0 is rejected in favor of H_aB: σ_B² > 0. An estimate of the "variance component" σ_B² is then given by (MSB − MSE)/I = .0485.

Summarizing, when K_ij = 1, although the hypotheses and expected mean squares differ from the case of both effects fixed, the test procedures are identical.

Exercises Section 11.4 (35–48)

35. The number of miles of useful tread wear (in 1000's) was determined for tires of each of five different makes of subcompact car (factor A, with I = 5) in combination with each of four different brands of radial tires (factor B, with J = 4), resulting in IJ = 20 observations. The values SSA = 30.6, SSB = 44.1, and SSE = 59.2 were then computed. Assume that an additive model is appropriate.
a. Test H₀: α₁ = α₂ = α₃ = α₄ = α₅ = 0 (no differences in true average tire lifetime due to makes of cars) versus H_a: at least one α_i ≠ 0 using a level .05 test.
b. Test H₀: β₁ = β₂ = β₃ = β₄ = 0 (no differences in true average tire lifetime due to brands of tires) versus H_a: at least one β_j ≠ 0 using a level .05 test.

36. Four different coatings are being considered for corrosion protection of metal pipe. The pipe will be buried in three different types of soil. To investigate whether the amount of corrosion depends either on the coating or on the type of soil, 12 pieces of pipe are selected. Each piece is coated with one of the four coatings and buried in one of the three types of soil for a fixed time, after which the amount of corrosion (depth of maximum pits, in .0001 in.) is determined. The depths are shown in this table:

                     Soil Type (B)
                    1      2      3
               1    …      …      …
  Coating (A)  2    53     51     48
               3    47     45     50
               4    51     …      52

a. Assuming the validity of the additive model, carry out the ANOVA analysis using an ANOVA table to see whether the amount of corrosion depends on either the type of coating used or the type of soil. Use α = .05.
b. Compute μ̂, α̂₁, α̂₂, α̂₃, α̂₄, β̂₁, β̂₂, and β̂₃.

37. The data set shown below is from the article "Compounding of Discriminative Stimuli from the Same and Different Sensory Modalities" (J. Exp. Anal. Behav., 1971: 337–342). Rat response was maintained by fixed interval schedules of reinforcement in the presence of a tone or two separate lights. The lights were of either moderate (L1) or low (L2) intensity. Observations are given as the mean number of responses emitted by each subject during single and compound stimuli presentations over a 4-day period. Carry out an appropriate analysis.

                       Subject
Stimulus       1      2      3      4      x_i.    x̄_i.
L1            8.0   17.3   52.0   22.0    99.3    24.8
L2            6.9   19.3   63.7   21.6   111.5    27.9
Tone (T)      9.3   18.8   60.0   28.3   116.4    29.1
L1 + L2       9.2   24.9   82.4   44.9   161.4    40.3
L1 + T       12.0   31.7   83.8   37.4   164.9    41.2
L2 + T        9.4   33.6   96.6   40.6   180.2    45.1
x_.j         54.8  145.6  438.5  194.8   833.7

38. In an experiment to see whether the amount of coverage of light-blue interior latex paint depends either on the brand of paint or on the brand of roller used, 1 gallon of each of four brands of paint was applied using each of three brands of roller, resulting in the following data (number of square feet covered).

                 Roller Brand
                1      2      3
          1    454    446    451
  Paint   2    446    444    447
  Brand   3    439    442    444
          4    444    437    443

a. Construct the ANOVA table. [Hint: The computations can be expedited by subtracting 400 (or any other convenient number) from each observation. This does not affect the final results.]
b. State and test hypotheses appropriate for deciding whether paint brand has any effect on coverage. Use α = .05.
c. Repeat part (b) for brand of roller.
d. Use Tukey's method to identify significant differences among brands. Is there one brand that seems clearly preferable to the others?
e. Check the normality and constant variance assumptions graphically.

39. In an experiment to assess the effect of the angle of pull on the force required to cause separation in electrical connectors, four different angles (factor A) were used and each of a sample of five connectors (factor B) was pulled once at each angle ("A Mixed Model Factorial Experiment in Testing Electrical Connectors," Indust. Qual. Control, 1960: 12–16). The data appears in the accompanying table.

           B
  A      1      2      3      4      5
  0°   45.3   42.2   39.6   36.8   45.8
  2°   44.1   44.1   38.4   38.0   47.2
  4°   42.7   42.7   42.6   42.2   48.9
  6°   43.5   45.8   47.9   37.9   56.4

Does the data suggest that true average separation force is affected by the angle of pull? State and test the appropriate hypotheses at level .01 by first constructing an ANOVA table (SST = 396.13, SSA = 58.16, and SSB = 246.97).

40. A particular county employs three assessors who are responsible for determining the value of residential property in the county. To see whether these assessors differ systematically in their assessments, 5 houses are selected, and each assessor is asked to determine the market value of each house. With factor A denoting assessors (I = 3) and factor B denoting houses (J = 5), suppose SSA = 11.7, SSB = 113.5, and SSE = 25.6.
a. Test H₀: α₁ = α₂ = α₃ = 0 at level .05. (H₀ states that there are no systematic differences among assessors.)
b. Explain why a randomized block experiment with only 5 houses was used rather than a one-way ANOVA experiment involving a total of 15 different houses, with each assessor asked to assess 5 different houses (a different group of 5 for each assessor).

41. The article "Rate of Stuttering Adaptation Under Two Electro-Shock Conditions" (Behav. Res. Therapy, 1967: 49–54) gives adaptation scores for three different treatments: (1) no shock, (2) shock following each stuttered word, and (3) shock during each moment of stuttering. These treatments were used on each of 18 stutterers.
a. Summary statistics include x₁. = 905, x₂. = 913, x₃. = 936, x.. = 2754, $\sum_j x_{.j}^2 = 430{,}295$, and $\sum_i\sum_j x_{ij}^2 = 143{,}930$. Construct the ANOVA table and test at level .05 to see whether true average adaptation score depends on the treatment given.
b. Judging from the F ratio for subjects (factor B), do you think that blocking on subjects was effective in this experiment? Explain.

42. The article "The Effects of a Pneumatic Stool and a One-Legged Stool on Lower Limb Joint Load and Muscular Activity During Sitting and Rising" (Ergonomics, 1993: 519–535) gives the accompanying data on the effort required of a subject to arise from four different types of stools (Borg scale). Perform an analysis of variance using α = .05, and follow this with a multiple comparisons analysis if appropriate.

                            Subject
            1    2    3    4    5    6    7    8    9    x̄_i.
  Type  1  12   10    7    7    8    9    8    7    9    8.56
  of    2  15   14   14   11   11   11   12   11   13   12.44
  Stool 3  12   13   13   10    8   11   12    8   10   10.78
        4  10   12    9    9    7   10   11    7    8    9.22

43. The strength of concrete used in commercial construction tends to vary from one batch to another. Consequently, small test cylinders of concrete sampled from a batch are "cured" for periods up to about 28 days in temperature- and moisture-controlled environments before strength measurements are made. Concrete is then "bought and sold on the basis of strength test cylinders" (ASTM C 31 Standard Test Method for Making and Curing Concrete Test Specimens in the Field). The accompanying data resulted from an experiment carried out to compare three different curing methods with respect to compressive strength (MPa). Analyze this data.

  Batch    Method A    Method B    Method C
    1        30.7        33.7        30.5
    2        29.1        30.6        32.6
    3        30.0        32.2        30.5
    4        31.9        34.6        33.5
    5        30.5        33.0        32.4
    6        26.9        29.3        27.8
    7        28.2        28.4        30.7
    8        32.4        32.4        33.6
    9        26.6        29.5        29.2
   10         …           …           …

44. Check the normality and constant variance assumptions graphically for the data of Example 11.15.

45. Suppose that in the experiment described in Exercise 40 the five houses had actually been selected at random from among those of a certain age and size, so that factor B is random rather than fixed. Test H₀: σ_B² = 0 versus H_a: σ_B² > 0 using a level .01 test.

46. a. Show that a constant d can be added to (or subtracted from) each x_ij without affecting any of the ANOVA sums of squares.
b. Suppose that each x_ij is multiplied by a nonzero constant c. How does this affect the ANOVA sums of squares? How does this affect the values of the F statistics F_A and F_B? What effect does "coding" the data by y_ij = cx_ij + d have on the conclusions resulting from the ANOVA procedures?

47. Use the fact that E(X_ij) = μ + α_i + β_j with Σα_i = Σβ_j = 0 to show that E(X̄_i. − X̄..) = α_i, so that α̂_i = X̄_i. − X̄.. is an unbiased estimator for α_i.

48. The power curves of Figures 11.5 and 11.6 can be used to obtain β = P(type II error) for the F test in two-factor ANOVA. For fixed values of α₁, α₂, …, α_I, the quantity $\phi^2 = (J/I)\sum\alpha_i^2/\sigma^2$ is computed. Then the figure corresponding to ν₁ = I − 1 is entered on the horizontal axis at the value φ, the power is read on the vertical axis from the curve labeled ν₂ = (I − 1)(J − 1), and β = 1 − power.
a. For the corrosion experiment described in Exercise 36, find β when α₁ = 4, α₂ = 0, α₃ = α₄ = −2, and σ = 4. Repeat for α₁ = 6, α₂ = 0, α₃ = α₄ = −3, and σ = 4.
b. By symmetry, what is β for the test of H_0B versus H_aB in Example 11.11 when β₁ = .3, β₂ = β₃ = β₄ = −.1, and σ = .3?

11.5 Two-Factor ANOVA with K_ij > 1

In Section 11.4, we analyzed data from a two-factor experiment in which there was one observation for each of the IJ combinations of levels of the two factors.
To obtain valid test procedures, the μ_ij's were assumed to have an additive structure with μ_ij = μ + α_i + β_j, Σα_i = Σβ_j = 0. Additivity means that the difference in true average responses for any two levels of one factor is the same for each level of the other factor. For example, μ_ij − μ_i'j = (μ + α_i + β_j) − (μ + α_i' + β_j) = α_i − α_i', independent of the level j of the second factor. This is shown in Figure 11.7(a), in which the lines connecting true average responses are parallel. Figure 11.7(b) depicts a set of true average responses that does not have additive structure. The lines connecting these μ_ij's are not parallel, which means that the difference in true average responses for different levels of one factor does depend on the level of the other factor. When additivity does not hold, we say that there is interaction between the different levels of the factors. The assumption of additivity allowed us in Section 11.4 to obtain an estimator of the random error variance σ² (MSE) that was unbiased whether or not either null hypothesis of interest was true. When K_ij > 1 for at least one (i, j) pair, a valid estimator of σ² can be obtained without assuming additivity. In specifying the appropriate model and deriving test procedures, we will focus on the case K_ij = K > 1, so the number of observations per "cell" (for each combination of levels) is constant.

Parameters for the Fixed Effects Model with Interaction

Rather than use the μ_ij's themselves as model parameters, it is usual to use an equivalent set that reveals more clearly the role of interaction. Let

μ = (1/IJ) Σ_i Σ_j μ_ij,   μ_i· = (1/J) Σ_j μ_ij,   μ_·j = (1/I) Σ_i μ_ij      (11.15)

Thus μ is the expected response averaged over all levels of both factors (the true grand mean), μ_i· is the expected response averaged over levels of the second factor when the first factor A is held at level i, and similarly for μ_·j.
Now define

α_i = μ_i· − μ = the effect of factor A at level i
β_j = μ_·j − μ = the effect of factor B at level j      (11.16)
γ_ij = μ_ij − (μ + α_i + β_j)

from which

μ_ij = μ + α_i + β_j + γ_ij      (11.17)

The model is additive if and only if all γ_ij's = 0. The γ_ij's are referred to as the interaction parameters. The α_i's are called the main effects for factor A, and the β_j's are the main effects for factor B. Although there are I α_i's, J β_j's, and IJ γ_ij's in addition to μ, the conditions Σα_i = 0, Σβ_j = 0, Σ_j γ_ij = 0 for any i, and Σ_i γ_ij = 0 for any j [all by virtue of (11.15) and (11.16)] imply that only IJ of these new parameters are independently determined: μ, I − 1 of the α_i's, J − 1 of the β_j's, and (I − 1)(J − 1) of the γ_ij's.

There are now three sets of hypotheses that will be considered:

H0AB: γ_ij = 0 for all i, j   versus   HaAB: at least one γ_ij ≠ 0
H0A: α_1 = α_2 = ··· = α_I = 0   versus   HaA: at least one α_i ≠ 0
H0B: β_1 = β_2 = ··· = β_J = 0   versus   HaB: at least one β_j ≠ 0

The no-interaction hypothesis H0AB is usually tested first. If H0AB is not rejected, then the other two hypotheses can be tested to see whether the main effects are significant. But once H0AB is rejected, we believe that the effect of factor A at any particular level depends on the level of B (and vice versa). It then does not make sense to test H0A or H0B. In this context a picture similar to that of Figure 11.7(b) is helpful in visualizing the way the factors interact. Here the cell means are used instead of the μ_ij's; this type of graph is sometimes called an interaction plot. In case of interaction, it may be appropriate to do a one-way ANOVA to compare levels of A separately for each level of B. For example, suppose factor A involves four kinds of glue, factor B involves three types of material, the response is strength of the glue joint, and the strength rankings of the glues clearly depend on which material is being glued.
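To make the decomposition (11.15)–(11.17) concrete, the short sketch below (plain Python; the 2 × 3 table of true cell means is made up purely for illustration, not data from the text) computes μ, the α_i's, the β_j's, and the γ_ij's, and confirms the side conditions that only IJ of the parameters are free.

```python
# Decompose a hypothetical table of true cell means mu[i][j] into the grand
# mean, main effects, and interaction parameters of (11.15)-(11.16).
mu = [[10.0, 12.0, 14.0],
      [11.0, 15.0, 16.0]]          # illustrative 2 x 3 table, I = 2, J = 3
I, J = len(mu), len(mu[0])

grand = sum(sum(row) for row in mu) / (I * J)                  # mu
alpha = [sum(mu[i]) / J - grand for i in range(I)]             # alpha_i = mu_i. - mu
beta = [sum(mu[i][j] for i in range(I)) / I - grand
        for j in range(J)]                                     # beta_j = mu_.j - mu
gamma = [[mu[i][j] - (grand + alpha[i] + beta[j]) for j in range(J)]
         for i in range(I)]                                    # interaction parameters

# Side conditions hold automatically: the alphas sum to 0, the betas sum to 0,
# and every row and column of the gamma_ij's sums to 0.
print(grand, alpha, beta, gamma)
```

Running the sketch shows, for instance, that each μ_ij is recovered exactly as μ + α_i + β_j + γ_ij, which is just identity (11.17).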
In this situation with interaction, it makes sense to do three separate one-way ANOVA analyses, one for each material.

Notation, Model, and Analysis

We now use triple subscripts for both random variables and observed values, with X_ijk and x_ijk referring to the kth observation (replication) when factor A is at level i and factor B is at level j. The model is then

X_ijk = μ + α_i + β_j + γ_ij + ε_ijk      i = 1, …, I; j = 1, …, J; k = 1, …, K      (11.18)

where the ε_ijk's are independent and normally distributed, each with mean 0 and variance σ². Again a dot in place of a subscript means that we have summed over all values of that subscript, whereas a horizontal bar denotes averaging. Thus X_ij· is the total of all K observations made for factor A at level i and factor B at level j [all observations in the (i, j)th cell], and X̄_ij· is the average of these K observations.

Example 11.16  Three different varieties of tomato (Harvester, Ife No. 1, and Pusa Early Dwarf) and four different plant densities (10, 20, 30, and 40 thousand plants per hectare) are being considered for planting in a particular region. To see whether either variety or plant density affects yield, each combination of variety and plant density is used in three different plots, resulting in the data on yields in Table 11.8 (based on the article "Effects of Plant Density on Tomato Yields in Western Nigeria," Exper. Agric., 1976: 43–47).

Table 11.8 Yield data for Example 11.16

                              Planting Density
Variety   10,000           20,000           30,000           40,000           x_i··    x̄_i··
H         10.5  9.2  7.9   12.8 11.2 13.3   12.1 12.6 14.0   10.8  9.1 12.5   136.0    11.33
Ife        8.1  8.6 10.1   12.7 13.7 11.5   14.4 15.4 13.7   11.3 12.5 14.5   146.5    12.21
P         16.1 15.3 17.5   16.6 19.2 18.5   20.8 18.0 21.0   18.4 18.9 17.2   217.5    18.13
x_·j·     103.3            129.5            142.0            125.2            500.0
x̄_·j·     11.48            14.39            15.78            13.91                     13.89

Here, I = 3, J = 4, and K = 3, for a total of IJK = 36 observations.

To test the hypotheses of interest, we again define sums of squares and present computing formulas:
SST = Σ_i Σ_j Σ_k (X_ijk − X̄_···)² = Σ_i Σ_j Σ_k X²_ijk − (1/IJK) X²_···      df = IJK − 1

SSE = Σ_i Σ_j Σ_k (X_ijk − X̄_ij·)² = Σ_i Σ_j Σ_k X²_ijk − (1/K) Σ_i Σ_j X²_ij·      df = IJ(K − 1)

SSA = JK Σ_i (X̄_i·· − X̄_···)² = (1/JK) Σ_i X²_i·· − (1/IJK) X²_···      df = I − 1

SSB = IK Σ_j (X̄_·j· − X̄_···)² = (1/IK) Σ_j X²_·j· − (1/IJK) X²_···      df = J − 1

SSAB = K Σ_i Σ_j (X̄_ij· − X̄_i·· − X̄_·j· + X̄_···)²      df = (I − 1)(J − 1)

The fundamental identity

SST = SSA + SSB + SSAB + SSE

implies that the interaction sum of squares SSAB can be obtained by subtraction. The computing formulas are all obtained by expanding the squared expressions and summing. The fundamental identity is obtained by squaring and summing an expression similar to Equation (11.2). Total variation is thus partitioned into four pieces: unexplained (SSE, which would be present whether or not any of the three null hypotheses was true) and three pieces that may be explained by the truth or falsity of the three H0's. Each of four mean squares is defined by MS = SS/df. The expected mean squares suggest that each set of hypotheses should be tested using the appropriate ratio of mean squares with MSE in the denominator:

E(MSE) = σ²
E(MSA) = σ² + (JK/(I − 1)) Σ_i α²_i
E(MSB) = σ² + (IK/(J − 1)) Σ_j β²_j
E(MSAB) = σ² + (K/((I − 1)(J − 1))) Σ_i Σ_j γ²_ij

Each of the three mean square ratios can be shown to have an F distribution when the associated H0 is true, which yields the following level α test procedures.

Hypotheses          Test Statistic Value     Rejection Region
H0A versus HaA      f_A = MSA/MSE            f_A ≥ F_{α, I−1, IJ(K−1)}
H0B versus HaB      f_B = MSB/MSE            f_B ≥ F_{α, J−1, IJ(K−1)}
H0AB versus HaAB    f_AB = MSAB/MSE          f_AB ≥ F_{α, (I−1)(J−1), IJ(K−1)}

As before, the results of the analysis are summarized in an ANOVA table.

Example 11.17 (Example 11.16 continued)  From the given data, x²_··· = 500² = 250,000,

Σ_i Σ_j Σ_k x²_ijk = 10.5² + 9.2² + ··· + 18.9² + 17.2² = 7404.80

and

Σ_i x²_i·· = 136.0² + 146.5² + 217.5²
= 87,264.50

with Σ_j x²_·j· = 63,280.18. The cell totals (x_ij·'s) are

       10,000   20,000   30,000   40,000
H      27.6     37.3     38.7     32.4
Ife    26.8     37.9     43.5     38.3
P      48.9     54.3     59.8     54.5

from which Σ_i Σ_j x²_ij· = 27.6² + ··· + 54.5² = 22,100.28. Then

SST = 7404.80 − (1/36)(250,000) = 7404.80 − 6944.44 = 460.36
SSA = (1/12)(87,264.50) − 6944.44 = 327.60
SSB = (1/9)(63,280.18) − 6944.44 = 86.69
SSE = 7404.80 − (1/3)(22,100.28) = 38.04

and

SSAB = 460.36 − 327.60 − 86.69 − 38.04 = 8.03

Table 11.9 summarizes the computations.

Table 11.9 ANOVA table for Example 11.17

Source of Variation   df   Sum of Squares   Mean Square   f
Varieties             2    327.60           163.80        f_A = 103.02
Density               3    86.69            28.90         f_B = 18.18
Interaction           6    8.03             1.34          f_AB = .84
Error                 24   38.04            1.59
Total                 35   460.36

Since F_.01,6,24 = 3.67 and f_AB = .84 is not ≥ 3.67, H0AB cannot be rejected at level .01, so we conclude that the interaction effects are not significant. Now the presence or absence of main effects can be investigated. Since F_.01,2,24 = 5.61 and f_A = 103.02 ≥ 5.61, H0A is rejected at level .01 in favor of the conclusion that different varieties do affect the true average yields. Similarly, f_B = 18.18 ≥ 4.72 = F_.01,3,24, so we conclude that true average yield also depends on plant density.

Figure 11.11 shows the interaction plot. Notice the nearly parallel lines for the three tomato varieties, in agreement with the F test showing no significant interaction. The yield for Pusa Early Dwarf appears to be significantly above the yields for the other two varieties, and this is in accord with the highly significant F for varieties. Furthermore, all three varieties show the same pattern in which yield increases as the density goes up, but decreases beyond 30,000 per hectare. This suggests that planting more seed will increase the yield, but eventually overcrowding causes the yield to drop.
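The computations of Example 11.17 are easy to check by machine. The sketch below (standard-library Python; the data are the raw yields of Table 11.8) reproduces the sums of squares of Table 11.9. Its F ratios differ slightly from the printed ones only because the text rounds MSE to 1.59 before dividing.

```python
# Two-factor ANOVA with replication (fixed effects) for the tomato yields of
# Table 11.8; cells[i][j] holds the K = 3 yields for variety i at density j.
cells = [
    [[10.5, 9.2, 7.9], [12.8, 11.2, 13.3], [12.1, 12.6, 14.0], [10.8, 9.1, 12.5]],   # Harvester
    [[8.1, 8.6, 10.1], [12.7, 13.7, 11.5], [14.4, 15.4, 13.7], [11.3, 12.5, 14.5]],  # Ife No. 1
    [[16.1, 15.3, 17.5], [16.6, 19.2, 18.5], [20.8, 18.0, 21.0], [18.4, 18.9, 17.2]],# Pusa
]
I, J, K = 3, 4, 3

grand_total = sum(x for row in cells for cell in row for x in cell)
ss_raw = sum(x * x for row in cells for cell in row for x in cell)
cf = grand_total ** 2 / (I * J * K)                    # correction factor x.../IJK

sst = ss_raw - cf
ssa = sum(sum(sum(cell) for cell in row) ** 2 for row in cells) / (J * K) - cf
ssb = sum(sum(sum(cells[i][j]) for i in range(I)) ** 2 for j in range(J)) / (I * K) - cf
sse = ss_raw - sum(sum(cell) ** 2 for row in cells for cell in row) / K
ssab = sst - ssa - ssb - sse                           # by subtraction

mse = sse / (I * J * (K - 1))
f_a = (ssa / (I - 1)) / mse
f_b = (ssb / (J - 1)) / mse
f_ab = (ssab / ((I - 1) * (J - 1))) / mse
print(round(sst, 2), round(ssa, 2), round(ssb, 2), round(ssab, 2), round(sse, 2))
```

The printed sums of squares agree with Table 11.9 to two decimals, and f_AB again rounds to .84.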
In this example one of the two factors is quantitative, and this is naturally the factor used for the horizontal axis in the interaction plot. In case both of the factors are quantitative, the choice for the horizontal axis would be arbitrary, but a case can be made for trying it both ways in two plots. Indeed, MINITAB has an option to allow both plots to be included in the same graph.

Figure 11.11 Interaction plot for the tomato yield data (mean yield versus density, one line per variety)

To check the normality and constant variance assumptions we can make plots similar to those of Section 11.4. Define the predicted values (fitted values) to be the cell means, x̂_ijk = x̄_ij·, so the residuals, the differences between the observations and predicted values, are x_ijk − x̄_ij·. The normal plot of the residuals is Figure 11.12(a), and the plot of the residuals against the fitted values is Figure 11.12(b). The normal plot is sufficiently straight that there should be no concern about the normality assumption. The plot of residuals against predicted values has a fairly uniform vertical spread, so there is no cause for concern about the constant variance assumption.

Figure 11.12 Plots from MINITAB to verify assumptions for Example 11.17: (a) normal probability plot of the residuals, (b) residuals versus the fitted values

Multiple Comparisons

When the no-interaction hypothesis H0AB is not rejected and at least one of the two main-effect null hypotheses is rejected, Tukey's method can be used to identify significant differences in levels. To identify differences among the α_i's when H0A is rejected:
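As a minimal illustration of how these residuals are formed, the fragment below (Python; it uses one cell of Table 11.8) computes the fitted value and residuals for that cell. By construction the residuals within each cell sum to zero, and pooling the squared residuals over all cells gives SSE.

```python
# Fitted value and residuals for one cell of the tomato data: the fitted value
# for every observation in cell (i, j) is the cell mean x-bar_ij.
cell = [10.5, 9.2, 7.9]                   # Harvester at density 10,000
fitted = sum(cell) / len(cell)            # cell mean = 27.6 / 3 = 9.2
residuals = [x - fitted for x in cell]    # x_ijk - x-bar_ij.
print(fitted, residuals)
```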
1. Obtain Q_{α, I, IJ(K−1)}, where the second subscript I identifies the number of levels being compared and the third subscript refers to the number of degrees of freedom for error.
2. Compute w = Q·√(MSE/(JK)), where JK is the number of observations averaged to obtain each of the x̄_i··'s compared in step 3.
3. Order the x̄_i··'s from smallest to largest and, as before, underscore all pairs that differ by less than w. Pairs not underscored correspond to significantly different levels of factor A.

To identify different levels of factor B when H0B is rejected, replace the second subscript in Q by J, replace JK by IK in w, and replace x̄_i·· by x̄_·j·.

Example 11.18 (Example 11.17 continued)  For factor A (varieties), I = 3, so with α = .01 and IJ(K − 1) = 24, Q_.01,3,24 = 4.55. Then w = 4.55√(1.59/12) = 1.66, so ordering and underscoring gives

x̄_1·· = 11.33   x̄_2·· = 12.21   x̄_3·· = 18.13
______________________________

The Harvester and Ife varieties do not differ significantly from each other in effect on true average yield, but both differ from the Pusa variety. For factor B (density), J = 4, so Q_.01,4,24 = 4.91 and w = 4.91√(1.59/9) = 2.06:

x̄_·1· = 11.48   x̄_·4· = 13.91   x̄_·2· = 14.39   x̄_·3· = 15.78
                ______________________________________________

Thus with experimentwise error rate .01, which is quite conservative, only the lowest density differs significantly from all others. Even with α = .05 (so that w = 1.64), densities 2 and 3 cannot be judged significantly different from each other in their effect on yield.

Models with Mixed and Random Effects

In some situations, the levels of either factor may have been chosen from a large population of possible levels, so that the effects contributed by the factor are random rather than fixed. As in Section 11.4, if both factors contribute random effects, the model is referred to as a random effects model, whereas if one factor is fixed and the other is random, a mixed effects model results. We will now consider the analysis for a mixed effects model in which factor A (rows) is the fixed factor and factor B (columns) is the random factor.
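The yardstick w in step 2 is simple to compute. The sketch below (Python; the Q values 4.55 and 4.91 are the Studentized range critical values quoted in the example, α = .01, error df = 24) reproduces w for both factors and flags which pairs of variety means differ.

```python
# Tukey's yardstick for Example 11.18: w = Q * sqrt(MSE / m), where m is the
# number of observations behind each compared mean.
import math

def tukey_w(q, mse, m):
    """Minimum significant difference for Tukey's procedure."""
    return q * math.sqrt(mse / m)

mse = 1.59
w_var = tukey_w(4.55, mse, 12)   # factor A: each variety mean averages JK = 12 obs
w_den = tukey_w(4.91, mse, 9)    # factor B: each density mean averages IK = 9 obs

variety_means = {"Harvester": 11.33, "Ife": 12.21, "Pusa": 18.13}
for a in variety_means:
    for b in variety_means:
        if a < b:   # each unordered pair once
            diff = abs(variety_means[a] - variety_means[b])
            print(a, b, "differ" if diff >= w_var else "do not differ")
```

The computed w values round to 1.66 and 2.06, matching the example, and only the Harvester–Ife pair fails to reach significance.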
When either factor is random, interaction effects will also be random. The case in which both factors are random is dealt with in Exercise 57. The mixed effects model is

X_ijk = μ + α_i + B_j + G_ij + ε_ijk      i = 1, …, I; j = 1, …, J; k = 1, …, K

Here μ and the α_i's are constants with Σα_i = 0, and the B_j's, G_ij's, and ε_ijk's are independent, normally distributed random variables with expected value 0 and variances σ²_B, σ²_G, and σ², respectively.¹ The relevant hypotheses are

H0A: α_1 = ··· = α_I = 0   versus   HaA: at least one α_i ≠ 0
H0B: σ²_B = 0   versus   HaB: σ²_B > 0
H0AB: σ²_G = 0   versus   HaAB: σ²_G > 0

It is customary to test H0A and H0B only if the no-interaction hypothesis H0AB cannot be rejected. The relevant sums of squares and mean squares needed for the test procedures are defined and computed exactly as in the fixed effects case. The expected mean squares for the mixed model are

E(MSE) = σ²
E(MSA) = σ² + Kσ²_G + (JK/(I − 1)) Σα²_i
E(MSB) = σ² + Kσ²_G + IKσ²_B
E(MSAB) = σ² + Kσ²_G

Thus, to test the no-interaction hypothesis, the ratio f_AB = MSAB/MSE is again appropriate, with H0AB rejected if f_AB ≥ F_{α, (I−1)(J−1), IJ(K−1)}. However, for testing H0A versus HaA, the expected mean squares suggest that although the numerator of the F ratio should still be MSA, the denominator should be MSAB rather than MSE. MSAB is also the denominator of the F ratio for testing H0B.

¹ This is referred to as an "unrestricted" model. An alternative "restricted" model requires that Σ_i G_ij = 0 for each j (so the G_ij's are no longer independent). Expected mean squares and F ratios appropriate for testing certain hypotheses depend on the choice of model. MINITAB's default option gives output for the unrestricted model.
For testing H0A versus HaA (factor A fixed, B random), the test statistic value is f_A = MSA/MSAB, and the rejection region is f_A ≥ F_{α, I−1, (I−1)(J−1)}. The test of H0B versus HaB utilizes f_B = MSB/MSAB, and the rejection region is f_B ≥ F_{α, J−1, (I−1)(J−1)}.

Example 11.19  A process engineer has identified two potential causes of electric motor vibration, the material used for the motor casing (factor A) and the supply source of bearings used in the motor (factor B). The accompanying data on the amount of vibration (microns) resulted from an experiment in which motors with casings made of steel, aluminum, and plastic were constructed using bearings supplied by five randomly selected sources.

                                    Supply source
Material    1           2           3           4           5
Steel       13.1 13.2   16.3 15.8   13.7 14.3   15.7 15.8   13.5 12.5
Aluminum    15.0 14.8   15.7 16.4   13.9 14.3   13.7 14.2   13.4 13.8
Plastic     14.0 14.3   17.2 16.7   12.4 12.3   14.4 13.9   13.2 13.1

Only the three casing materials used in the experiment are under consideration for use in production, so factor A is fixed. However, the five supply sources were randomly selected from a much larger population, so factor B is random. The relevant null hypotheses are

H0A: α_1 = α_2 = α_3 = 0      H0B: σ²_B = 0      H0AB: σ²_G = 0

MINITAB output appears in Figure 11.13.
Factor    Type    Levels  Values
casmater  fixed   3       1 2 3
source    random  5       1 2 3 4 5

Source            DF   SS        MS       F       P
casmater          2    0.7047    0.3523   0.24    0.790
source            4    36.6747   9.1687   6.32    0.013
casmater*source   8    11.6053   1.4507   13.03   0.000
Error             15   1.6700    0.1113
Total             29   50.6547

Source              Variance    Error   Expected Mean Square for Each Term
                    component   term    (using unrestricted model)
1 casmater                      3       (4) + 2(3) + Q[1]
2 source            1.2863      3       (4) + 2(3) + 6(2)
3 casmater*source   0.6697      4       (4) + 2(3)
4 Error             0.1113              (4)

Figure 11.13 Output from MINITAB's balanced ANOVA option for the data of Example 11.19

The printed 0.000 P-value for interaction means that it is less than .0005 (the actual value is .000018). To interpret the significant interaction we use the interaction plot, Figure 11.14, which has both versions, one with source on the x-axis and one with material on the x-axis. Interaction is evident, because the best material (the one with the least vibration) depends strongly on source. For source 1 the best material is steel, for source 3 the best material is plastic, and for source 4 the best material is aluminum. Because of this interaction, we ordinarily would not interpret the main effects, but one cannot help noticing that there is strong dependence of vibration on source. Source 2 is bad for all three materials and source 3 is pretty good for all three materials. When one-way ANOVA analyses are done to compare the five sources for each of the three materials, all three show highly significant differences. This is consistent with the P-value of 0.013 for source in Figure 11.13. We can conclude that, although the interaction causes the best material to depend on the source, the source also makes a difference of its own.
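The F ratios and variance components in Figure 11.13 can be recovered from the mean squares alone. The sketch below (plain Python, with the mean squares copied from the output) applies the mixed-model rules f_A = MSA/MSAB, f_B = MSB/MSAB, f_AB = MSAB/MSE, together with the method-of-moments estimates σ̂²_B = (MSB − MSAB)/(IK) and σ̂²_G = (MSAB − MSE)/K implied by equating observed mean squares to their expectations.

```python
# Mixed-model F ratios and variance-component estimates for Example 11.19,
# computed from the mean squares in the MINITAB output.
I, J, K = 3, 5, 2                  # 3 materials, 5 sources, 2 replications
msa, msb, msab, mse = 0.3523, 9.1687, 1.4507, 0.1113

f_a = msa / msab    # factor A (fixed): denominator is MSAB, not MSE
f_b = msb / msab    # factor B (random): denominator is also MSAB
f_ab = msab / mse   # interaction: denominator is MSE

var_b = (msb - msab) / (I * K)     # estimate of sigma_B^2
var_g = (msab - mse) / K           # estimate of sigma_G^2
print(round(f_a, 2), round(f_b, 2), round(f_ab, 2))
print(round(var_b, 4), round(var_g, 4))
```

The results match the output: F values 0.24, 6.32, and 13.03, and variance components 1.2863 for source and 0.6697 for the interaction.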
Figure 11.14 MINITAB interaction plot for the data of Example 11.19 (one panel with supply source on the horizontal axis, the other with casing material)

When at least two of the K_ij's are unequal, the ANOVA computations are much more complex than for the case K_ij = K, and there are no nice formulas for the appropriate test statistics. One of the chapter references can be consulted for more information.

Exercises  Section 11.5 (49–57)

49. In an experiment to assess the effects of curing time (factor A) and type of mix (factor B) on the compressive strength of hardened cement cubes, three different curing times were used in combination with four different mixes, with three observations obtained for each of the 12 curing time–mix combinations. The resulting sums of squares were computed to be SSA = 30,763.0, SSB = 34,185.6, SSE = 97,436.8, and SST = 205,966.6.
a. Construct an ANOVA table.
b. Test at level .05 the null hypothesis H0AB: all γ_ij's = 0 (no interaction of factors) against HaAB: at least one γ_ij ≠ 0.
c. Test at level .05 the null hypothesis H0A: α_1 = α_2 = α_3 = 0 (factor A main effects are absent) against HaA: at least one α_i ≠ 0.
d. Test H0B: β_1 = β_2 = β_3 = β_4 = 0 versus HaB: at least one β_j ≠ 0 using a level .05 test.
e. The values of the x̄_i··'s were x̄_1·· = 4010.88, x̄_2·· = 4029.10, and x̄_3·· = 3960.02. Use Tukey's procedure to investigate significant differences among the three curing times.

50. The article "Towards Improving the Properties of Plaster Moulds and Castings" (J. Engrg. Manuf., 1991: 265–269) describes several ANOVAs carried out to study how the amount of carbon fiber and sand additions affect various characteristics of the molding process. Here we give data on casting hardness and on wet-mold strength.

Sand Addition (%)   Carbon Fiber Addition (%)   Casting Hardness   Wet-Mold Strength
0                   0                           61.0               34.0
0                   0                           63.0               16.0
15                  0                           67.0               36.0
15                  0                           69.0               19.0
30                  0                           65.0               28.0
30                  0                           74.0               17.0
0                   25                          69.0               49.0
0                   25                          69.0               48.0
15                  25                          69.0               43.0
15                  25                          74.0               29.0
30                  25                          74.0               31.0
30                  25                          72.0               24.0
0                   50                          67.0               55.0
0                   50                          69.0               60.0
15                  50                          69.0               45.0
15                  50                          74.0               43.0
30                  50                          74.0               22.0
30                  50                          74.0               48.0

a. An ANOVA for wet-mold strength gives SSSand = 705, SSFiber = 1278, SSE = 843, and SST = 3105. Test for the presence of any effects using α = .05.
b. Carry out an ANOVA on the casting hardness observations using α = .05.
c. Make an interaction plot with sand percentage on the horizontal axis, and discuss the results of part (a) in terms of what the plot shows.

51. The accompanying data resulted from an experiment to investigate whether yield from a certain chemical process depended either on the formulation of a particular input or on mixer speed.

                                 Speed
               60                  70                  80
Formulation 1  189.7 188.6 190.1   185.1 179.4 177.3   189.0 193.0 191.1
Formulation 2  165.1 165.9 167.6   161.7 159.8 161.6   163.3 166.6 170.3

A statistical computer package gave SS(Form) = 2253.44, SS(Speed) = 230.81, SS(Form*Speed) = 18.58, and SSE = 71.87.
a. Does there appear to be interaction between the factors?
b. Does yield appear to depend on either formulation or speed?
c. Calculate estimates of the main effects.
d. Verify that the residuals are .23, −.87, .63, 4.50, −1.20, −3.30, −2.03, 1.97, .07, −1.10, −.30, 1.40, .67, −1.23, .57, −3.43, −.13, 3.57.
e. Construct a normal plot from the residuals given in part (d). Do the ε_ijk's appear to be normally distributed?
f. Plot the residuals against the predicted values (cell means) to see if the population variance appears reasonably constant.

52. In an experiment to investigate the effect of "cement factor" (number of sacks of cement per cubic yard) on flexural strength of the resulting concrete ("Studies of Flexural Strength of Concrete. Part 3: Effects of Variation in Testing Procedure," Proceedings ASTM, 1957: 1127–1139), I = 3 different factor values were used, J = 5 different batches of cement were selected, and K = 2 beams were cast from each cement factor/batch combination. Summary values include ΣΣΣ x²_ijk = 12,280,108, ΣΣ x²_ij· = 24,529,699, Σ x²_i·· = 122,380,901, Σ x²_·j· = 73,427,483, and x_··· = 19,143.
a. Construct the ANOVA table.
b. Assuming a mixed model with cement factor (A) fixed and batches (B) random, test the three pairs of hypotheses of interest at level .05.

53. A study was carried out to compare the writing lifetimes of four premium brands of pens. It was thought that the writing surface might affect lifetime, so three different surfaces were randomly selected. A writing machine was used to ensure that conditions were otherwise homogeneous (e.g., constant pressure and a fixed angle). The accompanying table shows the two lifetimes (min) obtained for each brand–surface combination. In addition, ΣΣΣ x²_ijk = 11,499,492 and ΣΣ x²_ij· = 22,982,552.

                    Writing Surface
Brand of Pen   1          2          3          x_i··
1              709, 659   713, 726   660, 645   4112
2              668, 685   722, 740   692, 720   4227
3              659, 685   666, 684   678, 750   4122
4              698, 650   704, 666   686, 733   4137
x_·j·          5413       5621       5564       16,598

Carry out an appropriate ANOVA, and state your conclusions.

54. The accompanying data was obtained in an experiment to investigate whether compressive strength of concrete cylinders depends on the type of capping material used or variability in different batches ("The Effect of Type of Capping Material on the Compressive Strength of Concrete Cylinders," Proceedings ASTM, 1958: 1166–1186). Each number is a cell total (x_ij·) based on K = 3 observations. In addition, ΣΣΣ x²_ijk = 16,815,853 and ΣΣ x²_ij· = 50,443,409. Obtain the ANOVA table and then test at level .01 the hypotheses H0AB versus HaAB, H0A versus HaA, and H0B versus HaB, assuming that capping is a fixed effect and batches is a random effect.

                             Batch
Capping Material   1      2      3      4      5
1                  1847   1942   1935   1891   1795
2                  1779   1850   1795   1785   1626
3                  1806   1892   1889   1891   1756

55. a. Show that E(X̄_i·· − X̄_···) = α_i, so that X̄_i·· − X̄_··· is an unbiased estimator for α_i (in the fixed effects model).
b. With γ̂_ij = X̄_ij· − X̄_i·· − X̄_·j· + X̄_···, show that γ̂_ij is an unbiased estimator for γ_ij (in the fixed effects model).

56. Show how a 100(1 − α)% t CI for α_i − α_j can be obtained. Then compute a 95% interval for α_2 − α_3 using the data from Example 11.16. [Hint: With θ = α_2 − α_3, the result of Exercise 55(a) indicates how to obtain θ̂. Then compute V(θ̂) and σ_θ̂, and obtain an estimate of σ_θ̂ by using √MSE to estimate σ (which identifies the appropriate number of df).]

57. When both factors are random in a two-way ANOVA experiment with K replications per combination of factor levels, the expected mean squares are E(MSE) = σ², E(MSA) = σ² + Kσ²_G + JKσ²_A, E(MSB) = σ² + Kσ²_G + IKσ²_B, and E(MSAB) = σ² + Kσ²_G.
a. What F ratio is appropriate for testing H0AB: σ²_G = 0 versus HaAB: σ²_G > 0?
b. Answer part (a) for testing H0A: σ²_A = 0 versus HaA: σ²_A > 0 and for testing H0B: σ²_B = 0 versus HaB: σ²_B > 0.

Supplementary Exercises (58–70)

58. An experiment was carried out to compare flow rates for four different types of nozzle.
a. Sample sizes were 5, 6, 7, and 6, respectively, and calculations gave f = 3.68. State and test the relevant hypotheses using α = .01.
b. Analysis of the data using a statistical computer package yielded P-value = .029. At level .01, what would you conclude, and why?

59. The article "Computer-Assisted Instruction Augmented with Planned Teacher/Student Contacts" (J. Exper. Ed., Winter 1980–1981: 120–126) compared five different methods for teaching descriptive statistics. The five methods were traditional lecture and discussion (L/D), programmed textbook instruction (R), programmed text with lectures (R/L), computer instruction (C), and computer instruction with lectures (C/L).
Forty-five students were randomly assigned, 9 to each method. After completing the course, the students took a 1-h exam. In addition, a 10-min retention test was administered 6 weeks later. Summary quantities are given.

           Exam              Retention Test
Method     x̄_i     s_i      x̄_i      s_i
L/D        29.3    4.99     30.20    3.82
R          28.0    5.33     28.80    5.26
R/L        30.2    3.33     26.20    4.66
C          32.4    2.94     31.10    4.91
C/L        34.2    2.74     30.20    3.53

The grand mean for the exam was 30.82, and the grand mean for the retention test was 29.30.
a. Does the data suggest that there is a difference among the five teaching methods with respect to true mean exam score? Use α = .05.
b. Using a .05 significance level, test the null hypothesis of no difference among the true mean retention test scores for the five different teaching methods.

60. Numerous factors contribute to the smooth running of an electric motor ("Increasing Market Share Through Improved Product and Process Design: An Experimental Approach," Qual. Engrg., 1991: 361–369). In particular, it is desirable to keep motor noise and vibration to a minimum. To study the effect that the brand of bearing has on motor vibration, five different motor bearing brands were examined by installing each type of bearing on different random samples of six motors. The amount of motor vibration (measured in microns) was recorded when each of the 30 motors was running. The data for this study follows. State and test the relevant hypotheses at significance level .05, and then carry out a multiple comparisons analysis if appropriate.

                                             Mean
Brand 1:  13.1  15.0  14.0  14.4  14.0  11.6  13.68
Brand 2:  16.3  15.7  17.2  14.9  14.4  17.2  15.95
Brand 3:  13.7  13.9  12.4  13.8  14.9  13.3  13.67
Brand 4:  15.7  13.7  …     16.0  13.9  …     14.73
Brand 5:  13.5  13.4  13.2  12.7  13.4  12.3  13.08

61. An article in the British scientific journal Nature ("Sucrose Induction of Hepatic Hyperplasia in the Rat," August 25, 1972: 461) reports on an experiment in which each of five groups consisting of six rats was put on a diet with a different carbohydrate. At the conclusion of the experiment, the DNA content of the liver of each rat was determined (mg/g liver), with the following results:

Carbohydrate   x̄_i
Starch         2.58
Sucrose        2.63
Fructose       2.13
Glucose        2.41
Maltose        …

a. Assuming also that ΣΣ x²_ij = 183.4, is the true average DNA content affected by the type of carbohydrate in the diet? Construct an ANOVA table and use a .05 level of significance.
b. Construct a t CI for the contrast θ = μ_1 − (μ_2 + μ_3 + μ_4 + μ_5)/4, which measures the difference between the average DNA content for the starch diet and the combined average for the four other diets. Does the resulting interval include zero?
c. What is β for the test when true average DNA content is identical for three of the diets and falls below this common value by 1 standard deviation (σ) for the other two diets?

62. Four laboratories (1–4) are randomly selected from a large population, and each is asked to make three determinations of the percentage of methyl alcohol in specimens of a compound taken from a single batch. Based on the accompanying data, are differences among laboratories a source of variation in the percentage of methyl alcohol? State and test the relevant hypotheses using significance level .05.

1:  85.06  85.25  84.87
2:  84.99  84.28  84.88
3:  84.48  84.72  85.10
4:  84.10  84.55  84.05

63. The critical flicker frequency (cff) is the highest frequency (in cycles/sec) at which a person can detect the flicker in a flickering light source. At frequencies above the cff, the light source appears to be continuous even though it is actually flickering. An investigation carried out to see whether true average cff depends on iris color yielded the following data (based on the article "The Effects of Iris Color on Critical Flicker Frequency," J. Gen. Psych., 1973: 91–95):

              Iris Color
        1. Brown   2. Green   3. Blue
        26.8       26.4       25.7
        27.9       24.2       27.2
        23.7       28.0       29.9
        25.0       26.9       28.5
        26.3       29.1       29.4
        24.8                  28.3
        25.7
        24.5
J_i     8          5          6
x_i·    204.7      134.6      169.0
x̄_i·    25.59      26.92      28.17

n = 19, x_·· = 508.3

a. State and test the relevant hypotheses at significance level .05 by using the F table to obtain an upper and/or lower bound on the P-value. [Hint: ΣΣ x²_ij = 13,659.67 and CF = 13,598.36.]
b. Investigate differences between the iris colors with respect to mean cff.

64. Recall from Section 11.2 that if c_1, c_2, …, c_I are numbers satisfying Σc_i = 0, then Σc_iμ_i = c_1μ_1 + ··· + c_Iμ_I is called a contrast in the μ_i's. Notice that with c_1 = 1, c_2 = −1, c_3 = ··· = c_I = 0, Σc_iμ_i = μ_1 − μ_2, which implies that every pairwise difference between μ_i's is a contrast (so is, e.g., μ_1 − .5μ_2 − .5μ_3). A method attributed to Scheffé gives simultaneous CIs with simultaneous confidence level 100(1 − α)% for all possible contrasts (an infinite number of them!). The interval for Σc_iμ_i is

Σ c_i x̄_i· ± [Σ(c²_i /J_i)]^½ · [(I − 1)·MSE·F_{α, I−1, n−I}]^½

Using the critical flicker frequency data of Exercise 63, calculate the Scheffé intervals for the contrasts μ_1 − μ_2, μ_1 − μ_3, μ_2 − μ_3, and .5μ_1 + .5μ_2 − μ_3 (the last contrast compares blue to the average of brown and green). Which contrasts differ significantly from 0, and why?

65. Four types of mortars, ordinary cement mortar (OCM), polymer impregnated mortar (PIM), resin mortar (RM), and polymer cement mortar (PCM), were subjected to a compression test to measure strength (MPa). Three strength observations for each mortar type are given in the article "Polymer Mortar Composite Matrices for Maintenance-Free Highly Durable Ferrocement" (J. Ferrocement, 1984: 337–345) and are reproduced here. Construct an ANOVA table. Using a .05 significance level, determine whether the data suggests that the true mean strength is not the same for all four mortar types. If you determine that the true mean strengths are not all equal, use Tukey's method to identify the significant differences.

OCM:  32.15   35.53   34.20
PIM:  126.32  126.80  134.79
RM:   117.91  115.02  114.58
PCM:  29.09   30.87   29.80

66. In single-factor ANOVA, suppose the x_ij's are "coded" by y_ij = c·x_ij + d. How does the value of the F statistic computed from the y_ij's compare to the value computed from the x_ij's? Justify your assertion.

67. In Example 11.10, subtract x̄_i· from each observation in the ith sample (i = 1, …, 6) to obtain a set of 18 residuals. Then construct a normal probability plot and comment on the plausibility of the normality assumption.

68. The results of a study on the effectiveness of line drying on the smoothness of fabric were summarized in the article "Line-Dried vs. Machine-Dried Fabrics: Comparison of Appearance, Hand, and Consumer Acceptance" (Home Econ. Res. J., 1984: 27–35). Smoothness scores were given for nine different types of fabric and five different drying methods: (1) machine dry, (2) line dry, (3) line dry followed by tumbling, (4) line dry with softener, and (5) line dry with air movement. Regarding the different types of fabric as blocks, construct an ANOVA table.
a. Using a .05 significance level, test to see whether there is a difference in true mean smoothness score for the drying methods.
Smoothness scores (drying method 1-5 by fabric):

   Fabric        1    2    3    4    5
   Crepe        3.3  2.5  2.8  2.5  1.9
   Doubleknit   3.6  2.0  3.6  2.4  2.3
   Twill mix    3.4  2.4  2.9  1.6  1.7
   Terry        3.8  1.3  2.8  2.0  1.6
   Broadcloth   2.2  1.5  2.7  1.5  1.9
   Sheeting     3.5  2.1  2.8  2.1  2.2
   Corduroy     3.6  1.3  2.8  1.7  1.8
   Denim        2.6  1.4  2.4  1.3  1.6

69. The water absorption of two types of mortar used to repair damaged cement was discussed in the article "Polymer Mortar Composite Matrices for Maintenance-Free, Highly Durable Ferrocement" (J. Ferrocement, 1984: 337-345). Specimens of ordinary cement mortar (OCM) and polymer cement mortar (PCM) were submerged for varying lengths of time (5, 9, 24, or 48 h), and water absorption (% by weight) was recorded. With mortar type as factor A (with two levels) and submersion period as factor B (with four levels), three observations were made for each factor level combination. Data included in the article was used to compute the sums of squares, which were SSA = 322.667, SSB = 35.623, SSAB = 8.557, and SST = 372.113. Use this information to construct an ANOVA table and test the appropriate hypotheses at a .05 significance level.

70. Four plots were available for an experiment to compare clover accumulation for four different sowing rates ("Performance of Overdrilled Red Clover with Different Sowing Rates and Initial Grazing Managements," New Zeal. J. Exper. Agric., 1984: 71-81). Since the four plots had been grazed differently prior to the experiment and it was thought that this might affect clover accumulation, a randomized block experiment was used with all four sowing rates tried on a section of each plot. Use the given data to test the null hypothesis of no difference in true mean clover accumulation (kg DM/ha) for the different sowing rates.

          Sowing Rate (kg/ha)
   Plot   3.6    6.6    10.2   14.5
   1      1155   2255   3505   4632
   2       123    406    564    416
   3        68    416    662    379
   4        62      5    362    564

   a. Carry out the test to decide whether sowing rate makes a difference in true mean clover accumulation.
   b. Make appropriate plots to go with your analysis in (a): Make a plot like the one in Figure 11.8, make a normal plot of the residuals, and plot the residuals against the predicted values. Explain why, based on the plots, the assumptions do not appear to be satisfied for this data set.
   c. Repeat part (a) replacing the observations with their natural logarithms.
   d. Repeat the plots of (b) for the analysis in (c). Do the logged observations appear to satisfy the assumptions better?
   e. Summarize your conclusions for this experiment. Does mean clover accumulation increase with increasing sowing rate?
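Looking back at Exercise 69: once the sums of squares are in hand, assembling the two-factor ANOVA table is mechanical. The sketch below is illustrative only (the table layout is not the book's), assuming SciPy for the F tail areas.

```python
# Sketch: two-factor ANOVA table for Exercise 69 from the given sums of squares
# (a = 2 mortar types, b = 4 submersion periods, m = 3 observations per cell).
from scipy.stats import f as f_dist

a, b, m = 2, 4, 3
SSA, SSB, SSAB, SST = 322.667, 35.623, 8.557, 372.113
SSE = SST - SSA - SSB - SSAB          # error SS obtained by subtraction
df_err = a * b * (m - 1)              # 16
MSE = SSE / df_err

for name, ss, df in [("A (mortar type)", SSA, a - 1),
                     ("B (period)", SSB, b - 1),
                     ("AB interaction", SSAB, (a - 1) * (b - 1))]:
    F = (ss / df) / MSE
    p = f_dist.sf(F, df, df_err)
    print(f"{name:16s} SS = {ss:8.3f}  df = {df}  F = {F:7.2f}  p = {p:.4f}")
print(f"{'Error':16s} SS = {SSE:8.3f}  df = {df_err}  MSE = {MSE:.4f}")
```

With these numbers all three F ratios exceed their .05 critical values, so each null hypothesis would be rejected.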
Bibliography

Kutner, Michael, Christopher Nachtsheim, John Neter, and William Li, Applied Linear Statistical Models (5th ed.), McGraw-Hill, New York, 2005. The second half of this book contains a well-presented survey of ANOVA; the level is comparable to that of the present text, but the discussion is more comprehensive, making the book an excellent reference.

Miller, Rupert, Beyond ANOVA: The Basics of Applied Statistics, Wiley, New York, 1986. An excellent source of information about assumption checking and alternative methods of analysis.

Montgomery, Douglas, Design and Analysis of Experiments (7th ed.), Wiley, New York, 2009. An up-to-date presentation of ANOVA models and methodology.

Ott, R. Lyman, and Michael Longnecker, An Introduction to Statistical Methods and Data Analysis (6th ed.), Cengage, Belmont, CA, 2010. Includes several chapters on ANOVA methodology that can profitably be read by students desiring a nonmathematical exposition; there is a good chapter on various multiple comparison methods.

12 Regression and Correlation

Introduction

The general objective of a regression analysis is to determine the relationship between two (or more) variables so that we can gain information about one of them through knowing values of the other(s). Much of mathematics is devoted to studying variables that are deterministically related. Saying that x and y are related in this manner means that once we are told the value of x, the value of y is completely specified. For example, suppose we decide to rent a van for a day and that the rental cost is $25.00 plus $.30 per mile driven. If we let x = the number of miles driven and y = the rental charge, then y = 25 + .3x.
If we drive the van 100 miles (x = 100), then y = 25 + .3(100) = 55. As another example, if the initial velocity of a particle is v0 and it undergoes constant acceleration a, then

   distance traveled = y = v0x + ½ax²

where x = time. There are many variables x and y that would appear to be related to each other, but not in a deterministic fashion. A familiar example to many students is given by variables x = high school grade point average (GPA) and y = college GPA. The value of y cannot be determined just from knowledge of x, and two different students could have the same x value but have very different y values. Yet there is a tendency for those students who have high (low) high school GPAs also to have high (low) college GPAs. Knowledge of a student's high school GPA should be quite helpful in enabling us to predict how that person will do in college.

Other examples of variables related in a nondeterministic fashion include x = age of a child and y = size of that child's vocabulary, x = size of an engine in cubic centimeters and y = fuel efficiency for an automobile equipped with that engine, and x = applied tensile force and y = amount of elongation in a metal strip. Regression analysis is the part of statistics that deals with investigation of the relationship between two or more variables related in a nondeterministic fashion.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_12, © Springer Science+Business Media, LLC 2012

In this chapter, we generalize a deterministic linear relation to obtain a linear probabilistic model for relating two variables x and y. We then develop procedures for making inferences based on data obtained from the model, and obtain a quantitative measure (the correlation coefficient) of the extent to which the two variables are related.
Techniques for assessing the adequacy of any particular regression model are then considered. We next introduce multiple regression analysis as a way of relating y to two or more variables; for example, relating fuel efficiency of an automobile to weight, engine size, number of cylinders, and transmission type. The last section of the chapter shows how matrix algebra techniques can be used to facilitate a concise and elegant development of regression procedures.

12.1 The Simple Linear and Logistic Regression Models

The key idea in developing a probabilistic relationship between a dependent or response variable y and an independent, explanatory, or predictor variable x is to realize that once the value of x has been fixed, there is still uncertainty in what the resulting y value will be. That is, for a fixed value of x, we now think of the dependent variable as being random. This random variable will be denoted by Y and its observed value by y.

For example, suppose an investigator plans a study to relate y = yearly energy usage of an industrial building (1000's of BTUs) to x = the shell area of the building (ft²). If one of the buildings selected for the study has a shell area of 25,000 ft², the resulting energy usage might be 2,215,000 or it might be 2,348,000 or any one of a number of other possibilities. Since we don't know a priori what the value of energy usage will be (because usage is determined partly by factors other than shell area), usage is regarded as a random variable Y. We now relate the independent and dependent variables by an additive model equation:

   Y = some particular deterministic function of x + a random deviation
     = f(x) + ε                                                    (12.1)

The symbol ε represents a random deviation or random "error" (a random variable) which is assumed to have mean value 0. This rv incorporates all variation in the dependent variable due to factors other than x. Figure 12.1 shows the graph of a particular f(x).
Without the random deviation ε, whenever x is fixed prior to making an observation on the dependent variable, the resulting (x, y) point would fall exactly on the graph. That is, y would be entirely determined by x. The role of the random deviation ε is to allow a nondeterministic relationship. Now if the value of ε is positive, the resulting (x, y) point falls above the graph of f(x), whereas when ε is negative, the resulting point falls below the graph. The assumption that ε has mean value 0 implies that we expect the point (x, y) to fall right on the graph, but we virtually never see what we literally expect; the observed point will almost always deviate upward or downward from the graph.

[Figure 12.1 Observations resulting from the model equation (12.1)]

How should the deterministic part of the model equation be selected? Occasionally some sort of theoretical argument will suggest an appropriate choice of f(x). However, in practice the specification of f(x) is almost always made by obtaining sample data consisting of n (x, y) pairs. A picture of the resulting observations (x1, y1), (x2, y2), ..., (xn, yn), called a scatter plot, is then constructed. In this scatter plot each (xi, yi) is represented as a point in a two-dimensional coordinate system. The pattern of points in the plot should suggest an appropriate f(x).

Example 12.1 Visual and musculoskeletal problems associated with the use of visual display terminals (VDTs) have become rather common in recent years. Some researchers have focused on vertical gaze direction as a source of eye strain and irritation. This direction is known to be closely related to ocular surface area (OSA), so a method of measuring OSA is needed. The accompanying representative data on y = OSA (cm²)
and x = width of the palpebral fissure (i.e., the horizontal width of the eye opening, in cm) is from the article "Analysis of Ocular Surface Area for Comfortable VDT Workstation Layout" (Ergonomics, 1996: 877-884). The order in which observations were obtained was not given, so for convenience they are listed in increasing order of x values.

   i  |  1    2    3    4    5    6    7    8    9    10   11   12   13   14   15
   xi | .40  .42  .48  .51  .57  .60  .70  .75  .75  .78  .84  .95  .99  1.03 1.12
   yi | 1.02 1.21 .88  .98  1.52 1.83 1.50 1.80 1.74 1.63 2.00 2.80 2.48 2.47 3.05

   i  | 16   17   18   19   20   21   22   23   24   25   26   27   28   29   30
   xi | 1.15 1.20 1.25 1.25 1.28 1.30 1.34 1.37 1.40 1.43 1.46 1.49 1.55 1.58 1.60
   yi | 3.18 3.76 3.68 3.82 3.21 4.27 3.12 3.99 3.75 4.10 4.18 3.77 4.34 4.21 4.92

Thus (x1, y1) = (.40, 1.02), (x5, y5) = (.57, 1.52), and so on. A MINITAB scatter plot is shown in Figure 12.2; we used an option that produced a dotplot of both the x values and y values individually along the right and top margins of the plot, which makes it easier to visualize the distributions of the individual variables (histograms or boxplots are alternative options). Here are some things to notice about the data and plot:

• Several observations have identical x values yet different y values (e.g., x8 = x9 = .75, but y8 = 1.80 and y9 = 1.74). Thus the value of y is not determined solely by x but also by various other factors.
• There is a strong tendency for y to increase as x increases. That is, larger values of OSA tend to be associated with larger values of fissure width, a positive relationship between the variables.
• It appears that the value of y could be predicted from x by finding a line that is reasonably close to the points in the plot (the authors of the cited article superimposed such a line on their plot). In other words, there is evidence of a substantial (though not perfect) linear relationship between the two variables.
[Figure 12.2 Scatter plot from MINITAB for the data from Example 12.1, along with dotplots of x and y values] ■

The horizontal and vertical axes in the scatter plot of Figure 12.2 intersect at the point (0, 0). In many data sets, the values of x or y or the values of both variables differ considerably from zero relative to the range(s) of the values. For example, a study of how air conditioner efficiency is related to maximum daily outdoor temperature might involve observations for temperatures ranging from 80°F to 100°F. When this is the case, a more informative plot would show the appropriately labeled axes intersecting at some point other than (0, 0).

Example 12.2 Forest growth and decline phenomena throughout the world have attracted considerable public and scientific interest. The article "Relationships Among Crown Condition, Growth, and Stand Nutrition in Seven Northern Vermont Sugarbushes" (Canad. J. Forest Res., 1995: 386-397) included a scatter plot of y = mean crown dieback (%), one indicator of growth retardation, and x = soil pH (lower pH corresponds to more acidic soil), from which the following observations were taken:

   x | 3.3  3.4  3.4  3.5  3.6  3.6  3.7  3.7  3.8  3.8
   y | 13   10.8 13.1 10.4 5.8  9.3  12.4 14.9 11.2 8.0

   x | 3.9  4.0  4.1  4.2  4.3  4.4  4.5  5.0  5.1
   y | 6.6  10.0 9.2  12.4 2.3  4.3  3.0  1.6  1.0

Figure 12.3 shows two MINITAB scatter plots of this data. In Figure 12.3a, MINITAB selected the scale for both axes. We obtained Figure 12.3b by specifying minimum and maximum values for x and y so that the axes would intersect roughly at the point (0, 0). The second plot is more crowded than the first one; such crowding can make it more difficult to ascertain the general nature of any relationship. For example, it can be more difficult to spot curvature in a crowded plot.
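As a numerical companion to such plots, the direction and strength of the association can be checked directly. This is a sketch only, not part of the original text; NumPy is assumed, with the data entered as transcribed above.

```python
# Sketch: sample correlation for the soil pH / crown dieback data of Example 12.2.
import numpy as np

ph = np.array([3.3, 3.4, 3.4, 3.5, 3.6, 3.6, 3.7, 3.7, 3.8, 3.8,
               3.9, 4.0, 4.1, 4.2, 4.3, 4.4, 4.5, 5.0, 5.1])
dieback = np.array([13.0, 10.8, 13.1, 10.4, 5.8, 9.3, 12.4, 14.9, 11.2, 8.0,
                    6.6, 10.0, 9.2, 12.4, 2.3, 4.3, 3.0, 1.6, 1.0])

r = np.corrcoef(ph, dieback)[0, 1]  # sample correlation coefficient
print(f"r = {r:.2f}")               # negative: dieback tends to fall as pH rises
```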
[Figure 12.3 MINITAB scatter plots of data in Example 12.2]

Large values of percentage dieback tend to be associated with low soil pH, a negative or inverse relationship. Furthermore, the two variables appear to be at least approximately linearly related, although the points would be spread out about any straight line drawn through the plot. ■

A Linear Probabilistic Model

For a deterministic linear relationship y = β0 + β1x, the slope coefficient β1 is the guaranteed increase in y when x increases by one unit, and the intercept coefficient β0 is the value of y when x = 0. A graph of y = β0 + β1x is of course a straight line. The slope gives the amount by which the line rises or falls when we move one unit to the right, and the intercept is the height at which the line crosses the vertical axis. For example, the line y = 100 − 5x specifies an increase of −5 (i.e., a decrease of 5) for each one-unit increase in x, and the vertical intercept of the line is 100.

When a scatter plot of bivariate data consisting of n (x, y) pairs shows a reasonably substantial linear pattern, it is natural to specify f(x) in the model equation (12.1) to be a linear function. Rather than assuming that the dependent variable itself is a linear function of x, the model assumes that the expected value of Y is a linear function of x. For any fixed x value, the observed value of Y will deviate by a random amount from its expected value.

THE SIMPLE LINEAR REGRESSION MODEL
There are parameters β0, β1, and σ² such that for any fixed value of the independent variable x, the dependent variable is related to x through the model equation

   Y = β0 + β1x + ε

The random deviation (random variable) ε is assumed to be normally distributed with mean value 0 and variance σ², and this mean value and variance are the same regardless of the fixed x value. The n observed pairs (x1, y1), (x2,
y2), ..., (xn, yn) are regarded as having been generated independently of each other from the model equation (first fix x = x1 and observe Y1 = β0 + β1x1 + ε1, then fix x = x2 and observe Y2 = β0 + β1x2 + ε2, and so on; assuming that the ε's are independent of each other implies that the Y's are also). Figure 12.4 gives an illustration of data resulting from the simple linear regression model.

[Figure 12.4 Points corresponding to observations from the simple linear regression model, with true regression line β0 + β1x]

The first two model parameters β0 and β1 are the coefficients of the population or true regression line β0 + β1x. The slope parameter β1 is now interpreted as the expected or true average increase in Y associated with a 1-unit increase in x. The variance parameter σ² (or equivalently the standard deviation σ) controls the inherent amount of variability in the data. When σ² is very close to 0, virtually all of the (xi, yi) pairs in the sample should correspond to points quite close to the population regression line. But if σ² greatly exceeds 0, a number of points in the scatter plot should fall far from the line. So the larger the value of σ, the greater will be the tendency for observed points to deviate from the population line by substantial amounts. Roughly speaking, the magnitude of σ is the size of a "typical" deviation from the population line.

The following notation will help clarify implications of the model relationship. Let x* denote a particular value of the independent variable x, and

   μY·x* = the expected (i.e., mean) value of Y when x = x*
   σ²Y·x* = the variance of Y when x = x*

Alternative notation for these quantities is E(Y | x*) and V(Y | x*). For example, if x = applied stress (kg/mm²) and y = time to fracture (h), then μY·20 denotes the expected time to fracture when applied stress is 20 kg/mm².
If we conceptualize an entire population of (x, y) pairs resulting from applying stress to specimens, then μY·20 is the average of all values of the dependent variable for which x = 20. The variance σ²Y·20 describes the spread in the distribution of all y values for which applied stress is 20.

Now consider replacing x in the model equation by the fixed value x*. Then the only randomness on the right-hand side is from the random deviation ε. Recalling that the mean value of a numerical constant is the numerical constant and the variance of a constant is zero, we have that

   μY·x* = E(β0 + β1x* + ε) = β0 + β1x* + E(ε) = β0 + β1x*
   σ²Y·x* = V(β0 + β1x* + ε) = V(β0 + β1x*) + V(ε) = 0 + σ² = σ²

The first sequence of equalities says that the mean value of Y when x = x* is the height of the population regression line above the value x*. That is, the population regression line is the line of mean Y values; the mean Y value is a linear function of the independent variable. The second sequence of equalities tells us that the amount of variability in the distribution of Y is the same at any particular x value as it is at any other x value; this is the property of homogeneous variation about the population regression line. If the independent variable is age of a preschool child and the dependent variable is the child's vocabulary size, data suggests that the mean vocabulary size increases linearly with age. However, there is more variability in vocabulary size for 2-year-old children than for 4-year-old children, so there is not constant variation in Y about the population line and the simple linear regression model is therefore not appropriate. The constant variance property implies that points should spread out about the population regression line to the same extent throughout the range of x values in the sample, rather than fanning out more as x increases or as x decreases.
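The two properties just derived (mean on the line, constant spread) are easy to see in a simulation; the following is a minimal sketch with made-up parameter values, not anything from the text, and assumes NumPy.

```python
# Sketch: simulate Y = b0 + b1*x + eps at several fixed x values; the sample
# means track the line b0 + b1*x while the spread stays near sigma throughout.
import numpy as np

rng = np.random.default_rng(0)
b0, b1, sigma = 10.0, 2.0, 3.0           # illustrative values only
for x_star in (1.0, 5.0, 10.0):
    y = b0 + b1 * x_star + rng.normal(0.0, sigma, size=200_000)
    print(f"x* = {x_star:4.1f}   mean = {y.mean():6.2f} "
          f"(line height {b0 + b1 * x_star:5.1f})   sd = {y.std():.2f}")
```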
Also, the sum of a constant and a normally distributed variable is itself normally distributed, and the addition of the constant affects only the mean value and not the variance. So for any fixed value x*, Y (= β0 + β1x* + ε) has a normal distribution. The foregoing properties are summarized in Figure 12.5.

[Figure 12.5 (a) Distribution of ε (normal, mean 0, standard deviation σ); (b) distribution of Y for different values of x]

Example 12.3 Suppose the relationship between applied stress x and time-to-failure y is described by the simple linear regression model with true regression line y = 65 − 1.2x and σ = 8. Then on average there is a 1.2-h decrease in time to rupture associated with an increase of 1 kg/mm² in applied stress. For any fixed value x* of stress, time to rupture is normally distributed with mean value 65 − 1.2x* and standard deviation 8. Roughly speaking, in the population consisting of all (x, y) points, the magnitude of a typical deviation from the true regression line is about 8. For x = 20, Y has mean value μY·20 = 65 − 1.2(20) = 41, so

   P(Y > 50 when x = 20) = P(Z > (50 − 41)/8) = 1 − Φ(1.13) = .1292

When applied stress is 25, μY·25 = 35, so the probability that time-to-failure exceeds 50 is

   P(Y > 50 when x = 25) = P(Z > (50 − 35)/8) = 1 − Φ(1.88) = .0301

These probabilities are illustrated as the shaded areas in Figure 12.6.

[Figure 12.6 Probabilities based on the simple linear regression model: P(Y > 50 when x = 20) = .1292 and P(Y > 50 when x = 25) = .0301, about the true regression line y = 65 − 1.2x]

Suppose that Y1 denotes an observation on time-to-failure made with x = 25 and Y2 denotes an independent observation made with x = 24. Then the difference Y1 − Y2 is normally distributed with mean value E(Y1 − Y2) = β1 = −1.2, variance V(Y1 − Y2) = σ² + σ² = 128, and standard deviation √128 = 11.314.
The probability that Y1 exceeds Y2 is

   P(Y1 − Y2 > 0) = P(Z > (0 − (−1.2))/11.314) = P(Z > .11) = .4562

That is, even though we expect Y to decrease when x increases by 1 unit, the probability is fairly high (but less than .5) that the observed Y at x + 1 will be larger than the observed Y at x. ■

The Logistic Regression Model

The simple linear regression model is appropriate for relating a quantitative response variable y to a quantitative predictor x. Suppose that y is a dichotomous variable with possible values 1 and 0 corresponding to success and failure. Let p = P(S) = P(y = 1). Frequently, the value of p will depend on the value of some quantitative variable x. For example, the probability that a car needs warranty service of a certain kind might well depend on the car's mileage, or the probability of avoiding an infection of a certain type might depend on the dosage in an inoculation. Instead of using just the symbol p for the success probability, we now use p(x) to emphasize the dependence of this probability on the value of x. The simple linear regression equation Y = β0 + β1x + ε is no longer appropriate, for taking the mean value on each side of the equation gives

   μY = 1 · p(x) + 0 · [1 − p(x)] = p(x) = β0 + β1x

Whereas p(x) is a probability and therefore must be between 0 and 1, β0 + β1x need not be in this range. Instead of letting the mean value of y be a linear function of x, we now consider a model in which some function of the mean value of y is a linear function of x. In other words, we allow p(x) to be a function of β0 + β1x rather than β0 + β1x itself. A function that has been found quite useful in many applications is the logit function

   p(x) = e^(β0 + β1x) / (1 + e^(β0 + β1x))

Figure 12.7 shows a graph of p(x) for particular values of β0 and β1 with β1 > 0. As x increases, the probability of success increases.
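This increasing S-shaped behavior can be sketched numerically; the coefficient values below are illustrative only (not from the text), and NumPy is assumed.

```python
# Sketch: evaluate the logit function p(x) = e^(b0 + b1*x) / (1 + e^(b0 + b1*x)).
import numpy as np

def logit_p(x, b0, b1):
    """Success probability under the logistic regression model."""
    z = b0 + b1 * np.asarray(x, dtype=float)
    return np.exp(z) / (1.0 + np.exp(z))

x = np.linspace(0, 80, 9)
p = logit_p(x, b0=-5.0, b1=0.15)   # b1 > 0, so p(x) increases with x
print(np.round(p, 3))              # climbs from near 0 toward 1

# Every value is a legitimate probability, and the curve is increasing:
assert np.all((0 < p) & (p < 1)) and np.all(np.diff(p) > 0)
```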
For β1 negative, the success probability would be a decreasing function of x.

[Figure 12.7 A graph of a logit function]

Logistic regression means assuming that p(x) is related to x by the logit function. Straightforward algebra shows that

   p(x) / [1 − p(x)] = e^(β0 + β1x)

The expression on the left-hand side is called the odds ratio. If, for example, p(60) = 3/4, then p(60)/[1 − p(60)] = (3/4)/(1/4) = 3, and when x = 60 a success is three times as likely as a failure. This is described by saying that the odds are 3 to 1, because the success probability is three times the failure probability. Taking natural logs of both sides, we see that the logarithm of the odds ratio is a linear function of the predictor,

   ln{p(x) / [1 − p(x)]} = β0 + β1x

In particular, the slope parameter β1 is the change in the log odds associated with a 1-unit increase in x. This implies that the odds ratio itself changes by the multiplicative factor e^(β1) when x increases by 1 unit.

Example 12.4 It seems reasonable that the size of a cancerous tumor should be related to the likelihood that the cancer will spread (metastasize) to another site. The article "Molecular Detection of p16 Promoter Methylation in the Serum of Patients with Esophageal Squamous Cell Carcinoma" (Cancer Res., 2001: 3135-3138) investigated the spread of esophageal cancer to the lymph nodes. With x = size of a tumor (cm) and Y = 1 if the cancer does spread, consider the logistic regression model with β1 = .5 and β0 = −2 (values suggested by data in the article). Then

   p(x) = e^(−2 + .5x) / (1 + e^(−2 + .5x))

from which p(2) = .27 and p(8) = .88 (tumor sizes for patients in the study ranged from 1.7 to 9.0 cm). Because e^(−2 + .5(6.77)) = 4, the odds for a 6.77-cm tumor are 4, so that it is four times as likely as not that a tumor of this size will spread to the lymph nodes. ■

Exercises Section 12.1 (1-12)

1. The efficiency ratio for a steel specimen immersed
in a phosphating tank is the weight of the phosphate coating divided by the metal loss (both in mg/ft²). The article "Statistical Process Control of a Phosphate Coating Line" (Wire J. Internat., May 1997: 78-81) gave the accompanying data on tank temperature (x) and efficiency ratio (y):

   Temp.  170   172   173   174   174   175   176
   Ratio   .84  1.31  1.42  1.03  1.07  1.08  1.04
   Temp.  177   180   180   180   180   180   181
   Ratio  1.80  1.45  1.60  1.61  2.13  2.15   .84
   Temp.  181   182   182   182   182   184   184
   Ratio  1.43   .90  1.81  1.94  2.68  1.49  2.52

   a. Construct stem-and-leaf displays of both temperature and efficiency ratio, and comment on interesting features.
   b. Is the value of efficiency ratio completely and uniquely determined by tank temperature? Explain your reasoning.
   c. Construct a scatter plot of the data. Does it appear that efficiency ratio could be very well predicted by the value of temperature? Explain your reasoning.

2. The article "Exhaust Emissions from Four-Stroke Lawn Mower Engines" (J. Air Water Manage. Assoc., 1997: 945-952) reported data from a study in which both a baseline gasoline mixture and a reformulated gasoline were used. Consider the following observations on age (yr) and NOx emissions (g/kWh):

   Engine         1     2     3     4     5
   Age            0     0     2    11     7
   Baseline      1.72  4.38  4.06  1.26  5.31
   Reformulated  1.88  5.93  5.54  2.67  6.53
   Engine         6     7     8     9    10
   Age           16     9     0    12     4
   Baseline       .57  3.37  3.44   .74  1.24
   Reformulated   .74  4.94  4.89   .69  1.42

   Construct scatter plots of NOx emissions versus age. What appears to be the nature of the relationship between these two variables? [Note: The authors of the cited article commented on the relationship.]

3. Bivariate data often arises from the use of two different techniques to measure the same quantity.
As an example, the accompanying observations on x = hydrogen concentration (ppm) using a gas chromatography method and y = concentration using a new sensor method were read from a graph in the article "A New Method to Measure the Diffusible Hydrogen Content in Steel Weldments Using a Polymer Electrolyte-Based Hydrogen Sensor" (Welding Res., July 1997: 251s-256s).

   x | 47   62   65   70   70   78   95   100  114  118
   y | 38   62   53   67   84   79   93   106  117  116
   x | 124  127  140  140  140  150  152  164  198  221
   y | 127  114  134  139  142  170  149  154  200  215

   Construct a scatter plot. Does there appear to be a very strong relationship between the two types of concentration measurements? Do the two methods appear to be measuring roughly the same quantity? Explain your reasoning.

4. To assess the capability of subsurface flow wetland systems to remove biochemical oxygen demand (BOD) and various other chemical constituents, data was collected that resulted in the accompanying observations on x = BOD mass loading (kg/ha/d) and y = BOD mass removal (kg/ha/d) ("Subsurface Flow Wetlands - A Performance Evaluation," Water Environ. Res., 1995: 244-247).

   x | 3   8   10  11  13  16  27  30  35  37  38  44  103  142
   y | 4   7    8   8  10  11  16  26  21   9  31  30   75   90

   a. Construct boxplots of both mass loading and mass removal, and comment on any interesting features.
   b. Construct a scatter plot of the data, and comment on any interesting features.

5. The article "Objective Measurement of the Stretchability of Mozzarella Cheese" (J. Texture Stud., 1992: 185-194) reported on an experiment to investigate how the behavior of mozzarella cheese varied with temperature. Consider the accompanying data on x = temperature and y = elongation (%) at failure of the cheese. [Note: The researchers were Italian and used real mozzarella cheese, not the poor cousin widely available in the United States.]

   x | 59   63   68   72   74   78   83
   y | 118  182  247  208  197  135  132

   a. Construct a scatter plot in which the axes intersect at (0, 0). Mark 0, 20, 40, 60, 80, and 100 on the horizontal axis and 0, 50, 100, 150, 200, and 250 on the vertical axis.
   b. Construct a scatter plot in which the axes intersect at (55, 100), as was done in the cited article. Does this plot seem preferable to the one in part (a)? Explain your reasoning.
   c. What do the plots of parts (a) and (b) suggest about the nature of the relationship between the two variables?

6. One factor in the development of tennis elbow, a malady that strikes fear in the hearts of all serious tennis players, is the impact-induced vibration of the racket-and-arm system at ball contact. It is well known that the likelihood of getting tennis elbow depends on various properties of the racket used. Consider the scatter plot of x = racket resonance frequency (Hz) and y = sum of peak-to-peak acceleration (a characteristic of arm vibration, in m/s/s) for n = 23 different rackets ("Transfer of Tennis Racket Vibrations into the Human Forearm," Med. Sci.
Sports Exercise, 1992: 1134-1140). Discuss interesting features of the data and scatter plot.

   [Scatter plot of y (m/s/s, roughly 22 to 38) versus x (Hz, 100 to 190) for the 23 rackets]

7. The article "Some Field Experience in the Use of an Accelerated Method in Estimating 28-Day Strength of Concrete" (J. Amer. Concrete Institut., 1969: 895) considered regressing y = 28-day standard-cured strength (psi) against x = accelerated strength (psi). Suppose the equation of the true regression line is y = 1800 + 1.3x.
   a. What is the expected value of 28-day strength when accelerated strength = 2500?
   b. By how much can we expect 28-day strength to change when accelerated strength increases by 1 psi?
   c. Answer part (b) for an increase of 100 psi.
   d. Answer part (b) for a decrease of 100 psi.
8. Referring to Exercise 7, suppose that the standard deviation of the random deviation ε is 350 psi.
   a. What is the probability that the observed value of 28-day strength will exceed 5000 psi when the value of accelerated strength is 2000?
   b. Repeat part (a) with 2500 in place of 2000.
   c. Consider making two independent observations on 28-day strength, the first for an accelerated strength of 2000 and the second for x = 2500. What is the probability that the second observation will exceed the first by more than 1000 psi?
   d. Let Y1 and Y2 denote observations on 28-day strength when x = x1 and x = x2, respectively. By how much would x2 have to exceed x1 in order that P(Y2 > Y1) = .95?

9. The flow rate y (m³/min) in a device used for air-quality measurement depends on the pressure drop x (in. of water) across the device's filter. Suppose that for x values between 5 and 20, the two variables are related according to the simple linear regression model with true regression line y = −.12 + .095x.
   a. What is the expected change in flow rate associated with a 1-in. increase in pressure drop? Explain.
   b. What change in flow rate can be expected when pressure drop decreases by 5 in.?
   c. What is the expected flow rate for a pressure drop of 10 in.? A drop of 15 in.?
   d. Suppose σ = .025 and consider a pressure drop of 10 in. What is the probability that the observed value of flow rate will exceed .835? That observed flow rate will exceed .840?
   e. What is the probability that an observation on flow rate when pressure drop is 10 in. will exceed an observation on flow rate made when pressure drop is 11 in.?

10. Suppose the expected cost of a production run is related to the size of the run by the equation y = 4000 + 10x. Let Y denote an observation on the cost of a run. If the variables size and cost are related according to the simple linear regression model, could it be the case that P(Y > 5500 when x = 100) = .05 and P(Y > 6500 when x = 200) = .10? Explain.

11. Suppose that in a certain chemical process the reaction time y (hr) is related to the temperature x (°F) in the chamber in which the reaction takes place according to the simple linear regression model with equation y = 5.00 − .01x and σ = .075.
   a. What is the expected change in reaction time for a 1°F increase in temperature? For a 10°F increase in temperature?
   b. What is the expected reaction time when temperature is 200°F? When temperature is 250°F?
   c. Suppose five observations are made independently on reaction time, each one for a temperature of 250°F. What is the probability that all five times are between 2.4 and 2.6 h?
   d. What is the probability that two independently
observed reaction times for temperatures 1° apart are such that the time at the higher temperature exceeds the time at the lower temperature?

9. The flow rate y (m³/min) in a device used for air quality measurement depends on the pressure drop x (in. of water) across the device's filter. Suppose that for x values between 5 and 20, the two variables are related according to the simple linear regression model with true regression line y = −.12 + .095x.
a. What is the expected change in flow rate associated with a 1-in. increase in pressure drop? Explain.
b. What change in flow rate can be expected when pressure drop decreases by 5 in.?
c. What is the expected flow rate for a pressure drop of 10 in.? A drop of 15 in.?
d. Suppose σ = .025 and consider a pressure drop of 10 in. What is the probability that the observed value of flow rate will exceed .835? That observed flow rate will exceed .840?
e. What is the probability that an observation on flow rate when pressure drop is 10 in. will exceed an observation on flow rate made when pressure drop is 11 in.?

12. In Example 12.4 the probability of cancer metastasizing was p(x) = e^(−2.5+.5x)/(1 + e^(−2.5+.5x)).
a. Tabulate values of x, p(x), the odds p(x)/[1 − p(x)], and the log odds for x = 0, 1, 2, 3, ..., 10.
b. Explain what happens to the odds when x is increased by 1. Your explanation should involve the .5 that appears in the formula for p(x).
c. Support your answer to (b) algebraically, starting from the formula for p(x).
d. For what value of x are the odds 1? 5? 10?

12.2 Estimating Model Parameters

We will assume in this and the next several sections that the variables x and y are related according to the simple linear regression model. The values of β0, β1, and σ² will almost never be known to an investigator. Instead, sample data consisting of n observed pairs (x1, y1), ..., (xn, yn) will be available, from which the model parameters and the true regression line itself can be estimated. These observations are assumed to have been obtained independently of each other.
That is, yi is the observed value of an rv Yi, where Yi = β0 + β1xi + εi, and the n deviations ε1, ε2, ..., εn are independent rv's. Independence of Y1, Y2, ..., Yn follows from the independence of the εi's.

According to the model, the observed points will be distributed about the true regression line in a random manner. Figure 12.8 shows a typical plot of observed pairs along with two candidates for the estimated regression line, y = a0 + a1x and y = b0 + b1x. Intuitively, the line y = a0 + a1x is not a reasonable estimate of the true line y = β0 + β1x because, if y = a0 + a1x were the true line, the observed points would almost surely have been closer to this line. The line y = b0 + b1x is a more plausible estimate because the observed points are scattered rather closely about this line.

Figure 12.8 Two different estimates of the true regression line

Figure 12.8 and the foregoing discussion suggest that our estimate of y = β0 + β1x should be a line that provides in some sense a best fit to the observed data points. This is what motivates the principle of least squares, which can be traced back to the mathematicians Gauss and Legendre around the year 1800. According to this principle, a line provides a good fit to the data if the vertical distances (deviations) from the observed points to the line are small (see Figure 12.9). The measure of the goodness of fit is the sum of the squares of these deviations. The best-fit line is then the one having the smallest possible sum of squared deviations.

[Figure 12.9: a scatter plot of elongation (%) versus applied stress (kg/mm²) showing the line y = b0 + b1x and the vertical deviations from the observed points to the line.]
Figure 12.9 Deviations of observed data from line y = b0 + b1x

PRINCIPLE OF LEAST SQUARES

The vertical deviation of the point (xi, yi) from the line y = b0 + b1x is

  height of point − height of line = yi − (b0 + b1xi)

The sum of squared vertical deviations from the points (x1, y1), ..., (xn, yn) to the line is then

  f(b0, b1) = Σ(i=1 to n) [yi − (b0 + b1xi)]²

The point estimates of β0 and β1, denoted by β̂0 and β̂1 and called the least squares estimates, are those values that minimize f(b0, b1). That is, β̂0 and β̂1 are such that f(β̂0, β̂1) ≤ f(b0, b1) for any b0 and b1. The estimated regression line or least squares line is then the line whose equation is y = β̂0 + β̂1x.

The minimizing values of b0 and b1 are found by taking partial derivatives of f(b0, b1) with respect to both b0 and b1, equating them both to zero [analogously to f′(b) = 0 in univariate calculus], and solving the equations

  ∂f(b0, b1)/∂b0 = Σ 2(yi − b0 − b1xi)(−1) = 0
  ∂f(b0, b1)/∂b1 = Σ 2(yi − b0 − b1xi)(−xi) = 0

Cancellation of the factor 2 and rearrangement gives the following system of equations, called the normal equations:

  nb0 + (Σxi)b1 = Σyi
  (Σxi)b0 + (Σxi²)b1 = Σxiyi

The normal equations are linear in the two unknowns b0 and b1. Provided that at least two of the xi's are different, the least squares estimates are the unique solution to this system.

The least squares estimate of the slope coefficient β1 of the true regression line is

  b1 = β̂1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)² = Sxy/Sxx    (12.2)

Computing formulas for the numerator and denominator of β̂1 are

  Sxy = Σxiyi − (Σxi)(Σyi)/n    Sxx = Σxi² − (Σxi)²/n

(the Sxx formula was derived in Chapter 1 in connection with the sample variance, and the derivation of the Sxy formula is similar).
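For readers who like to check formulas numerically, the computing formulas above can be sketched in a few lines of Python. (This code is our illustration, not part of the original text, which works with R and MINITAB; the function name and the toy data set are ours.)

```python
def least_squares(x, y):
    """Solve the normal equations via the computing formulas:
    Sxy = sum(x_i*y_i) - (sum x_i)(sum y_i)/n,
    Sxx = sum(x_i^2) - (sum x_i)^2/n,
    slope b1 = Sxy/Sxx (Eq. 12.2), intercept b0 = ybar - b1*xbar."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y)) - sx * sy / n
    sxx = sum(a * a for a in x) - sx * sx / n
    b1 = sxy / sxx             # least squares slope
    b0 = sy / n - b1 * sx / n  # intercept, from the first normal equation
    return b0, b1

# Toy data (ours, not from the text): points scattered about the line y = 2x,
# so the fitted slope should come out near 2 and the intercept near 0.
b0, b1 = least_squares([1, 2, 3, 4], [2.0, 4.0, 5.0, 8.0])
print(b0, b1)
```

Because the normal equations have a unique solution whenever at least two xi differ, no iterative optimization is needed; two sums in one pass over the data suffice.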
The least squares estimate of the intercept β0 of the true regression line is

  b0 = β̂0 = ȳ − β̂1x̄    (12.3)

Because of the normality assumption, β̂0 and β̂1 are also the maximum likelihood estimates (see Exercise 23).

The computational formulas for Sxy and Sxx require only the summary statistics Σxi, Σyi, Σxi², Σxiyi (Σyi² will be needed shortly); the x and y deviations are then not needed. In computing β̂0, use extra digits in β̂1 because, if x̄ is large in magnitude, rounding may affect the final answer. We emphasize that before β̂1 and β̂0 are computed, a scatter plot should be examined to see whether a linear probabilistic model is plausible. If the points do not tend to cluster about a straight line with roughly the same degree of spread for all x, other models should be investigated. In practice, plots and regression calculations are usually done by using a statistical computer package.

Example 12.5  Global warming is a major issue, and CO2 emissions are an important part of the discussion. What is the effect of increased CO2 levels on the environment? In particular, what is the effect of these higher levels on the growth of plants and trees? The article "Effects of Atmospheric CO2 Enrichment on Biomass Accumulation and Distribution in Eldarica Pine Trees" (J. Exp. Bot., 1994: 345–349) describes the results of growing pine trees with increasing levels of CO2 in the air. There were two trees at each of four levels of CO2 concentration, and the mass of each tree was measured after 11 months of the experiment. Here are the observations with x = atmospheric concentration of CO2 in microliters per liter (parts per million) and y = mass in kilograms, along with x², xy, and y². The mass measurements were read from a graph in the article.
Obs     x      y       x²        xy        y²
1      408    1.1    166,464     448.8     1.21
2      408    1.3    166,464     530.4     1.69
3      554    1.6    306,916     886.4     2.56
4      554    2.5    306,916    1385.0     6.25
5      680    3.0    462,400    2040.0     9.00
6      680    4.3    462,400    2924.0    18.49
7      812    4.2    659,344    3410.4    17.64
8      812    4.7    659,344    3816.4    22.09
Sum   4908   22.7  3,190,248  15,441.4    78.93

Thus x̄ = 4908/8 = 613.5, ȳ = 22.7/8 = 2.838, and

  β̂1 = [15,441.4 − (4908)(22.7)/8] / [3,190,248 − (4908)²/8] = 1514.95/179,190 = .00845443 ≈ .00845
  β̂0 = 2.838 − (.00845443)(613.5) = −2.349

We estimate that the expected change in tree mass associated with a 1-part-per-million increase in CO2 concentration is .00845 kg. The equation of the estimated regression line (least squares line) is then y = −2.35 + .00845x. Figure 12.10, generated by the statistical computer package R, shows that the least squares line provides an excellent summary of the relationship between the two variables.

Figure 12.10 A scatter plot of the data in Example 12.5 with the least squares line superimposed, from R

The estimated regression line can immediately be used for two different purposes. For a fixed x value x*, β̂0 + β̂1x* (the height of the line above x*) gives either (1) a point estimate of the expected value of Y when x = x* or (2) a point prediction of the Y value that will result from a single new observation made at x = x*. The least squares line should not be used to make a prediction for an x value much beyond the range of the data, such as x = 250 or x = 1000 in Example 12.5. The danger of extrapolation is that the fitted relationship (a line here) may not be valid for such x values. (In the foregoing example, x = 250 gives ŷ = −.235, a patently ridiculous value of mass, but extrapolation will not always result in such inconsistencies.)

Example 12.6  Refer to the tree-mass–CO2 data in the previous example.
With a little extrapolation, a point estimate for true average mass for all specimens with CO2 concentration 365 is

  μ̂Y·365 = β̂0 + β̂1(365) = −2.35 + .00845(365) = .73

With a little more extrapolation, a point estimate for true average mass for all specimens with CO2 concentration 315 is

  μ̂Y·315 = β̂0 + β̂1(315) = −2.35 + .00845(315) = .31

The values 315 and 365 are chosen based on actual values: the average world atmospheric CO2 concentration rose from 315 to 365 parts per million between 1960 and 2000. Even if the prediction equation is somewhat inaccurate when extrapolated to the left, it is clear that changes in carbon dioxide are making a big difference in the growth of trees. Notice that in Figure 12.10 the tree mass increases by a factor of more than 4 while the CO2 concentration increases by just a factor of 2. ■

Estimating σ² and σ

The parameter σ² determines the amount of variability inherent in the regression model. A large value of σ² will lead to observed (xi, yi)'s that are quite spread out about the true regression line, whereas when σ² is small the observed points will tend to fall very close to the true line (see Figure 12.11). An estimate of σ² will be used in confidence interval (CI) formulas and hypothesis-testing procedures presented in the next two sections. Because the equation of the true line is unknown, the estimate is based on the extent to which the sample observations deviate from the estimated line. Many large deviations (residuals) suggest a large value of σ², whereas if all deviations are small in magnitude it indicates that σ² is small.

[Figure 12.11: two scatter plots, one of elongation versus tensile force and one of product sales versus advertising expenditure, illustrating small and large σ².]
Figure 12.11 Typical sample for σ²: (a) small; (b) large

DEFINITION

The fitted (or predicted) values ŷ1, ŷ2, ..., ŷn are obtained by successively substituting the x values x1, ..., xn into the equation of the estimated regression line: ŷ1 = β̂0 + β̂1x1, ŷ2 = β̂0 + β̂1x2, ..., ŷn = β̂0 + β̂1xn. The residuals are the vertical deviations y1 − ŷ1, y2 − ŷ2, ..., yn − ŷn from the estimated line.

In words, the predicted value ŷi is the value of y that we would predict or expect when using the estimated regression line with x = xi; ŷi is the height of the estimated regression line above the value xi for which the ith observation was made. The residual yi − ŷi is the difference between the observed yi and the predicted ŷi. If the residuals are all small in magnitude, then much of the variability in observed y values appears to be due to the linear relationship between x and y, whereas many large residuals suggest quite a bit of inherent variability in y relative to the amount due to the linear relation. Assuming that the line in Figure 12.9 is the least squares line, the residuals are identified by the vertical line segments from the observed points to the line. When the estimated regression line is obtained via the principle of least squares, the sum of the residuals should in theory be zero (an immediate consequence of the first normal equation; see Exercise 24). In practice, the sum may deviate a bit from zero due to rounding.

Example 12.7  Japan's high population density has resulted in a multitude of resource usage problems. One especially serious difficulty concerns waste removal. The article "Innovative Sludge Handling Through Pelletization Thickening" (Water Res., 1999: 3245–3252) reported the development of a new compression machine for processing sewage sludge.
An important part of the investigation involved relating the moisture content of compressed pellets (y, in %) to the machine's filtration rate (x, in kg-DS/m/h). The following data was read from a graph in the paper:

x  125.3   98.2  201.4  147.3  145.9  124.7  112.2  120.2  161.2  178.9
y   77.9   76.8   81.5   79.8   78.2   78.3   77.5   77.0   80.1   80.2

x  159.5  145.8   75.1  151.4  144.2  125.0  198.8  132.5  159.6  110.7
y   79.9   79.0   76.7   78.2   79.5   78.1   81.5   77.0   79.0   78.6

Relevant summary quantities (summary statistics) are Σxi = 2817.9, Σyi = 1574.8, Σxi² = 415,949.85, Σxiyi = 222,657.88, and Σyi² = 124,039.58, from which x̄ = 140.895, ȳ = 78.74, Sxx = 18,921.8295, and Sxy = 776.434. Thus

  β̂1 = 776.434/18,921.8295 = .04103377 ≈ .041
  β̂0 = 78.74 − (.04103377)(140.895) = 72.958547 ≈ 72.96

from which the equation of the least squares line is ŷ = 72.96 + .041x. For numerical accuracy, the fitted values are calculated from ŷi = 72.958547 + .04103377xi:

  ŷ1 = 72.958547 + .04103377(125.3) ≈ 78.100    y1 − ŷ1 ≈ −.200, etc.

A positive residual corresponds to a point in the scatter plot that lies above the graph of the least squares line, whereas a negative residual results from a point lying below the line. All predicted values (fits) and residuals appear in the accompanying table.
Obs  Filtrate  Moistcon     Fit    Residual
 1     125.3     77.9     78.100    −0.200
 2      98.2     76.8     76.988    −0.188
 3     201.4     81.5     81.223     0.277
 4     147.3     79.8     79.003     0.797
 5     145.9     78.2     78.945    −0.745
 6     124.7     78.3     78.075     0.225
 7     112.2     77.5     77.563    −0.063
 8     120.2     77.0     77.891    −0.891
 9     161.2     80.1     79.573     0.527
10     178.9     80.2     80.299    −0.099
11     159.5     79.9     79.503     0.397
12     145.8     79.0     78.941     0.059
13      75.1     76.7     76.040     0.660
14     151.4     78.2     79.171    −0.971
15     144.2     79.5     78.876     0.624
16     125.0     78.1     78.088     0.012
17     198.8     81.5     81.116     0.384
18     132.5     77.0     78.396    −1.396
19     159.6     79.0     79.508    −0.508
20     110.7     78.6     77.501     1.099
                                          ■

In much the same way that the deviations from the mean in a one-sample situation were combined to obtain the estimate s² = Σ(xi − x̄)²/(n − 1), the estimate of σ² in regression analysis is based on squaring and summing the residuals. We will continue to use the symbol s² for this estimated variance, so don't confuse it with our previous s².

DEFINITION

The error sum of squares (equivalently, residual sum of squares), denoted by SSE, is

  SSE = Σ(yi − ŷi)² = Σ[yi − (β̂0 + β̂1xi)]²

and the least squares estimate of σ² is

  σ̂² = s² = SSE/(n − 2) = Σ(yi − ŷi)²/(n − 2)

The divisor n − 2 in s² is the number of degrees of freedom (df) associated with the estimate (or, equivalently, with the error sum of squares). This is because to obtain s², the two parameters β0 and β1 must first be estimated, which results in a loss of 2 df (just as μ had to be estimated in one-sample problems, resulting in an estimated variance based on n − 1 df). Replacing each yi in the formula for s² by the rv Yi gives the estimator S². It can be shown that S² is an unbiased estimator for σ² (although the estimator S is biased for σ). The mle of σ² has divisor n rather than n − 2, so it is biased.

Example 12.8 (Example 12.7 continued)  The residuals for the filtration rate–moisture content data were calculated previously. The corresponding error sum of squares is

  SSE = (−.200)² + (−.188)² + ⋯ + (1.099)²
      = 7.968

The estimate of σ² is then σ̂² = s² = 7.968/(20 − 2) = .4427, and the estimated standard deviation is σ̂ = s = √.4427 = .665. Roughly speaking, .665 is the magnitude of a typical deviation from the estimated regression line. ■

Computation of SSE from the defining formula involves much tedious arithmetic, because both the predicted values and residuals must first be calculated. Use of the following computational formula does not require these quantities.

  SSE = Σyi² − β̂0Σyi − β̂1Σxiyi

This expression results from substituting ŷi = β̂0 + β̂1xi into Σ(yi − ŷi)², squaring the summand, carrying the sum through to the resulting three terms, and simplifying (see Exercise 24). This computational formula is especially sensitive to the effects of rounding in β̂0 and β̂1, so use as many digits as your calculator will provide.

Example 12.9  The article "Promising Quantitative Nondestructive Evaluation Techniques for Composite Materials" (Mater. Eval., 1985: 561–565) reports on a study to investigate how the propagation of an ultrasonic stress wave through a substance depends on the properties of the substance. The accompanying data on fracture strength (x, as a percentage of ultimate tensile strength) and attenuation (y, in neper/cm, the decrease in amplitude of the stress wave) in fiberglass-reinforced polyester composites was read from a graph that appeared in the article. The simple linear regression model is suggested by the substantial linear pattern in the scatter plot.

x  12   30   36   40   45   57   62   67   71   78   93   94   100  105
y  3.3  3.2  3.4  3.0  2.8  2.9  2.7  2.6  2.5  2.6  2.2  2.0  2.3  2.1

The necessary summary quantities are n = 14, Σxi = 890, Σxi² = 67,182, Σyi = 37.6, Σyi² = 103.54, Σxiyi = 2234.30, from which Sxx = 10,603.4285714, Sxy = −155.98571429, β̂1 = −.0147109, and β̂0 = 3.6209072. The computational formula for SSE gives

  SSE = 103.54 − (3.6209072)(37.6) − (−.0147109)(2234.30) = .2624532

so s² = .2624532/12 = .0218711 and s = .1479.
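The agreement between the defining formula and the computational formula for SSE, when full precision is carried, can be checked with a short Python sketch using the fracture-strength/attenuation data above. (The code itself is our illustration, not part of the original text.)

```python
# Fracture strength x (% of ultimate tensile strength), attenuation y (neper/cm)
x = [12, 30, 36, 40, 45, 57, 62, 67, 71, 78, 93, 94, 100, 105]
y = [3.3, 3.2, 3.4, 3.0, 2.8, 2.9, 2.7, 2.6, 2.5, 2.6, 2.2, 2.0, 2.3, 2.1]
n = len(x)

sx, sy = sum(x), sum(y)                     # 890 and 37.6
sxy_raw = sum(a * b for a, b in zip(x, y))  # sum of x_i y_i = 2234.30
syy_raw = sum(b * b for b in y)             # sum of y_i^2  = 103.54
b1 = (sxy_raw - sx * sy / n) / (sum(a * a for a in x) - sx ** 2 / n)
b0 = sy / n - b1 * sx / n

# SSE from the definition (sum of squared residuals) and from the
# computational formula; at full precision the two agree, and s is near .148.
sse_def = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
sse_comp = syy_raw - b0 * sy - b1 * sxy_raw
s = (sse_comp / (n - 2)) ** 0.5
```

Rounding b0 and b1 before evaluating the computational formula reproduces the cancellation problem discussed next: three large terms nearly cancel, so small coefficient errors are magnified in the difference.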
With rounding to three decimal digits in the computational formula for SSE, the result is

  SSE = 103.54 − (3.62)(37.6) − (−.0147)(2234.30) = 103.54 − 136.112 + 32.844 = .272

which is wrong from the second digit on (the full-precision value is .2624532). The problem is that, even though each of the three terms may be correct in its first three nonzero digits, those leading digits can be subtracted away, leaving few or no correct digits in the result. ■

The Coefficient of Determination

Figure 12.12 shows three different scatter plots of bivariate data. In all three plots, the heights of the different points vary substantially, indicating that there is much variability in observed y values. The points in the first plot all fall exactly on a straight line. In this case, all (100%) of the sample variation in y can be attributed to the fact that x and y are linearly related in combination with variation in x. The points in Figure 12.12b do not fall exactly on a line, but compared to overall y variability, the deviations from the least squares line are small. It is reasonable to conclude in this case that much of the observed y variation can be attributed to the approximate linear relationship between the variables postulated by the simple linear regression model. When the scatter plot looks like that of Figure 12.12c, there is substantial variation about the least squares line relative to overall y variation, so the simple linear regression model fails to explain variation in y by relating y to x.

Figure 12.12 Explaining y variation: (a) all variation explained; (b) most variation explained; (c) little variation explained

The error sum of squares SSE can be interpreted as a measure of how much variation in y is left unexplained by the model, that is, how much cannot be attributed to a linear relationship.
In Figure 12.12a, SSE = 0, and there is no unexplained variation, whereas unexplained variation is small for the data of Figure 12.12b and much larger in Figure 12.12c. A quantitative measure of the total amount of variation in observed y values is given by the total sum of squares

  SST = Syy = Σ(yi − ȳ)² = Σyi² − (Σyi)²/n

The total sum of squares is the sum of squared deviations about the sample mean of the observed y values. Thus the same number ȳ is subtracted from each yi in SST, whereas SSE involves subtracting each different predicted value ŷi from the corresponding observed yi. Just as SSE is the sum of squared deviations about the least squares line y = β̂0 + β̂1x, SST is the sum of squared deviations about the horizontal line at height ȳ (since then the vertical deviations are yi − ȳ), as pictured in Figure 12.13. Furthermore, because the sum of squared deviations about the least squares line is smaller than the sum of squared deviations about any other line, SSE < SST unless the horizontal line is itself the least squares line. The ratio SSE/SST is the proportion of total variation that cannot be explained by the simple linear regression model, and 1 − SSE/SST (a number between 0 and 1) is the proportion of observed y variation explained by the model.

Figure 12.13 Sums of squares illustrated: (a) SSE = sum of squared deviations about the least squares line; (b) SST = sum of squared deviations about the horizontal line

DEFINITION

The coefficient of determination, denoted by r², is given by

  r² = 1 − SSE/SST

It is interpreted as the proportion of observed y variation that can be explained by the simple linear regression model (attributed to an approximate linear relationship between y and x). In equivalent words, r² is the proportion by which the error sum of squares is reduced by the regression line compared to the horizontal line.
For example, if SST = 20 and SSE = 2, then r² = 1 − .10 = .90, so the regression reduces the error sum of squares by 90%. The higher the value of r², the more successful is the simple linear regression model in explaining y variation. When regression analysis is done by a statistical computer package, either r² or 100r² (the percentage of variation explained by the model) is a prominent part of the output. If r² is small, an analyst may want to search for an alternative model (either a nonlinear model or a multiple regression model that involves more than a single independent variable) that can more effectively explain y variation.

Example 12.10 (Example 12.5 continued)  The scatter plot of the CO2 concentration data in Figure 12.10 indicates a fairly high r² value. With

  β̂0 = −2.349293   β̂1 = .00845443   Σyi = 22.7   Σxiyi = 15,441.4   Σyi² = 78.93

we have

  SST = 78.93 − (22.7)²/8 = 14.519
  SSE = 78.93 − (−2.349293)(22.7) − (.00845443)(15,441.4) = 1.711

The coefficient of determination is then

  r² = 1 − 1.711/14.519 = 1 − .118 = .882

That is, 88.2% of the observed variation in mass is attributable to (can be explained by) the approximate linear relationship between mass and CO2 concentration, a fairly impressive result. The r² can also be interpreted by saying that the error sum of squares using the regression line is 88.2% less than the error sum of squares using a horizontal line. By the way, although it is common to have r² values of .88 or more in engineering, the physical sciences, and the biological sciences, r² is likely to be much smaller in social sciences such as psychology and sociology. An r² as big as .5 would be unusual in predicting one test score from another. In particular, when third-grade verbal IQ score is used to predict third-grade written IQ score for the 33 students of Example 1.2, r² is only .28.
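The r² computation above is easy to replay. The following Python sketch (ours, not the authors') recomputes SST, SSE, and r² directly from the eight CO2/tree-mass observations:

```python
x = [408, 408, 554, 554, 680, 680, 812, 812]  # CO2 concentration (ppm)
y = [1.1, 1.3, 1.6, 2.5, 3.0, 4.3, 4.2, 4.7]  # tree mass (kg)
n = len(x)

sx, sy = sum(x), sum(y)
b1 = (sum(a * b for a, b in zip(x, y)) - sx * sy / n) / \
     (sum(a * a for a in x) - sx ** 2 / n)
b0 = sy / n - b1 * sx / n
ybar = sy / n

sst = sum((yi - ybar) ** 2 for yi in y)                        # total variation
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))  # unexplained
r2 = 1 - sse / sst  # proportion of y variation explained; about .882 here
```

Working from the raw observations rather than rounded summary statistics sidesteps the cancellation issue noted earlier for the computational formula.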
Figure 12.14 shows partial MINITAB output for the CO2 concentration data of Examples 12.5 and 12.10; the package will also provide the predicted values and residuals upon request, as well as other information. The formats used by other packages differ slightly from that of MINITAB, but the information content is very similar. Quantities such as the standard deviations, t-ratios, and the details of the ANOVA table are discussed in Section 12.3.

The regression equation is
kg = -2.35 + 0.00845 co2

Predictor    Coef            SE Coef    T       P
Constant    -2.3493  ←β̂0    0.7966    -2.95    0.026
co2          0.008454 ←β̂1   0.001261   6.70    0.001

S = 0.533964   R-Sq = 88.2% ←100r²   R-Sq(adj) = 86.3%

Analysis of Variance
Source           DF    SS            MS       F       P
Regression        1    12.808        12.808   44.92   0.001
Residual Error    6     1.711 ←SSE    0.285
Total             7    14.519 ←SST

Figure 12.14 MINITAB output for the regression of Examples 12.5 and 12.10 ■

For regression there is an analysis of variance identity like the fundamental identity (11.1) in Section 11.1. Add and subtract ŷi in the total sum of squares:

  SST = Σ(yi − ȳ)² = Σ[(yi − ŷi) + (ŷi − ȳ)]² = Σ(yi − ŷi)² + Σ(ŷi − ȳ)²

Notice that the middle (cross-product) term is missing on the right, but see Exercise 24 for the justification. Of the two sums on the right, the first is SSE = Σ(yi − ŷi)² and the second is something new, the regression sum of squares, SSR = Σ(ŷi − ȳ)². Interpret the regression sum of squares as the amount of total variation that is explained by the model. The analysis of variance identity for regression is

  SST = SSE + SSR    (12.4)

The coefficient of determination in Example 12.10 can now be written in a slightly different way:

  r² = 1 − SSE/SST = (SST − SSE)/SST = SSR/SST

the ratio of explained variation to total variation. The ANOVA table in Figure 12.14 shows that SSR = 12.808, from which r² = 12.808/14.519 = .882.
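The identity (12.4) can be verified numerically on any data set. Here is a Python sketch (our illustration, not from the text) using the twenty filtration-rate/moisture-content pairs from the sludge example earlier in this section:

```python
x = [125.3, 98.2, 201.4, 147.3, 145.9, 124.7, 112.2, 120.2, 161.2, 178.9,
     159.5, 145.8, 75.1, 151.4, 144.2, 125.0, 198.8, 132.5, 159.6, 110.7]
y = [77.9, 76.8, 81.5, 79.8, 78.2, 78.3, 77.5, 77.0, 80.1, 80.2,
     79.9, 79.0, 76.7, 78.2, 79.5, 78.1, 81.5, 77.0, 79.0, 78.6]
n = len(x)

sx, sy = sum(x), sum(y)
b1 = (sum(a * b for a, b in zip(x, y)) - sx * sy / n) / \
     (sum(a * a for a in x) - sx ** 2 / n)
b0 = sy / n - b1 * sx / n
ybar = sy / n
fits = [b0 + b1 * xi for xi in x]

sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fits))  # unexplained variation
ssr = sum((fi - ybar) ** 2 for fi in fits)            # explained variation
sst = sum((yi - ybar) ** 2 for yi in y)               # total variation
# SST = SSE + SSR holds (up to floating-point roundoff), and SSE matches
# the 7.968 obtained from the residual table for this data.
```

The cross-product term vanishes because the residuals are orthogonal to the fitted values (a consequence of the normal equations; see Exercise 24), which is exactly why the decomposition works.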
Terminology and Scope of Regression Analysis

The term regression analysis was first used by Francis Galton in the late nineteenth century in connection with his work on the relationship between father's height x and son's height y. After collecting a number of pairs (xi, yi), Galton used the principle of least squares to obtain the equation of the estimated regression line with the objective of using it to predict son's height from father's height. In using the derived line, Galton found that if a father was above average in height, the son would also be expected to be above average in height, but not by as much as the father was. Similarly, the son of a shorter-than-average father would also be expected to be shorter than average, but not by as much as the father. Thus the predicted height of a son was "pulled back in" toward the mean; because regression can be defined as moving backward, Galton adopted the terminology regression line. This phenomenon of being pulled back in toward the mean has been observed in many other situations (e.g., batting averages from year to year in baseball) and is called the regression effect or regression to the mean. See also Section 5.3 for a discussion of this topic in the context of the bivariate normal distribution.

Because of the regression effect, care must be exercised in experiments that involve selecting individuals based on below-average scores. For example, if students are selected because of below-average performance on a test, and they are then given special instruction, then the regression effect predicts improvement even if the instruction is useless. A similar warning applies in studies of underperforming businesses or hospital patients.

Our discussion thus far has presumed that the independent variable is under the control of the investigator, so that only the dependent variable Y is random.
This was not, however, the case with Galton's experiment; fathers' heights were not preselected, but instead both X and Y were random. Methods and conclusions of regression analysis can be applied both when the values of the independent variable are fixed in advance and when they are random, but because the derivations and interpretations are more straightforward in the former case, we will continue to work explicitly with it. For more commentary, see the excellent book by Michael Kutner et al. listed in the chapter bibliography.

Exercises  Section 12.2 (13–30)

13. Exercise 4 gave data on x = BOD mass loading and y = BOD mass removal. Values of relevant summary quantities are

  n = 14   Σxi = 517   Σyi = 346   Σxi² = 39,095   Σyi² = 17,454   Σxiyi = 25,825

a. Obtain the equation of the least squares line.
b. Predict the value of BOD mass removal for a single observation made when BOD mass loading is 35, and calculate the value of the corresponding residual.
c. Calculate SSE and then a point estimate of σ.
d. What proportion of observed variation in removal can be explained by the approximate linear relationship between the two variables?
e. The last two x values, 103 and 142, are much larger than the others. How are the equation of the least squares line and the value of r² affected by deletion of the two corresponding observations from the sample? Adjust the given values of the summary quantities, and use the fact that the new value of SSE is 311.79.

16. As an alternative to the use of father's height to predict son's height, Galton also used the midparent height, the average of the father's and mother's heights. Here are the heights of 11 female students along with their midparent heights in inches:

Midparent  66.0  65.5  71.5  68.0  70.0  65.5  67.0  70.5  69.5  64.5  67.5
Daughter   64.0  63.0  69.0  69.0  69.0  65.0  63.0  68.5  69.0  64.0  67.0

a. Make a scatter plot of daughter's height against the midparent height and comment on the strength of the relationship.
b. Is the daughter's height completely and uniquely determined by the midparent height? Explain.
c. Use the accompanying MINITAB output to obtain the equation of the least squares line for predicting daughter height from midparent height, and then predict the height of a daughter whose midparent height is 70 in. Would you feel comfortable using the least squares line to predict daughter height when midparent height is 74 in.? Explain.

Predictor    Coef     SE Coef   T      P
Constant     1.65     13.36     0.12   0.904
midparent    0.9555    0.1971   4.85   0.001

S = 1.45061   R-Sq = 72.3%   R-Sq(adj) = 69.2%

Analysis of Variance
Source           DF   SS       MS       F       P
Regression        1   49.471   49.471   23.51   0.001
Residual Error    9   18.938    2.104
Total            10   68.409

d. What are the values of SSE, SST, and the coefficient of determination? How well does the midparent height account for the variation in daughter height?
e. Notice that for most of the families, the midparent height exceeds the daughter height. Is this what is meant by regression to the mean? Explain.

14. The accompanying data on x = current density (mA/cm²) and y = rate of deposition (μm/min) appeared in the article "Plating of 60/40 Tin/Lead Solder for Head Termination Metallurgy" (Plating and Surface Finishing, Jan. 1997: 38–40). Do you agree with the claim by the article's author that "a linear relationship was obtained from the tin-lead rate of deposition as a function of current density"? Explain your reasoning.

x  20    40    60    80
y   .24  1.20  1.71  2.22

15. Refer to the data given in Exercise 1 on tank temperature and efficiency ratio.
a. Determine the equation of the estimated regression line.
b. Calculate a point estimate for true average efficiency ratio when tank temperature is 182.
c. Calculate the values of the residuals from the least squares line for the four observations for which temperature is 182.
Why do they not all have the same sign?
d. What proportion of the observed variation in efficiency ratio can be attributed to the simple linear regression relationship between the two variables?

17. The article "Characterization of Highway Runoff in Austin, Texas, Area" (J. Environ. Engrg., 1998: 131–137) gave a scatter plot, along with the least squares line, of x = rainfall volume (m³) and y = runoff volume (m³) for a particular location. The accompanying values were read from the plot.

x   5  12  14  17  23  30  40  47  55  67  72  81  96  112  127
y   4  10  13  15  15  25  27  46  38  46  53  70  82   99  100

a. Does a scatter plot of the data support the use of the simple linear regression model?
b. Calculate point estimates of the slope and intercept of the population regression line.
c. Calculate a point estimate of the true average runoff volume when rainfall volume is 50.
d. Calculate a point estimate of the standard deviation σ.
e. What proportion of the observed variation in runoff volume can be attributed to the simple linear regression relationship between runoff and rainfall?

18. A regression of y = calcium content (g/L) on x = dissolved material (mg/cm²) was reported in the article "Use of Fly Ash…" [the remainder of the title is illegible in the scan]. The equation of the estimated regression line was ŷ = 3.678 + .144x, with r² = .860, based on n = 23.
a. Interpret the estimated slope .144 and the coefficient of determination .860.
b. Calculate a point estimate of the true average calcium content when the amount of dissolved material is 50 mg/cm².
c. The value of total sum of squares was SST = 320.398. Calculate an estimate of the error standard deviation σ in the simple linear regression model.

19. The cetane number is a critical property in specifying the ignition quality of a fuel used in a diesel engine. Determination of this number for a biodiesel fuel is expensive and time-consuming. The article "Relating the Cetane Number of Biodiesel Fuels to Their Fatty Acid Composition: A Critical Study" (J. Automobile Engr., 2009: 565–583) included the following data on x = iodine value (g) and y = cetane number for a sample of 14 biofuels. The iodine value is the amount of iodine necessary to saturate a sample of 100 g of oil. The article's authors fit the simple linear regression model to this data, so let's follow their lead.

x  132.0  129.0  120.0  113.2  105.0  92.0  84.0  83.2  88.4  59.0  80.0  81.5  71.0  69.2
y   46.0   48.0   51.0   52.1   54.0  52.0  59.0  58.7  61.6  64.0  61.4  54.6  58.8  58.0

Relevant summary quantities are Σxi = 1307.5, Σyi = 779.2, Σxi² = 128,913.93, Σxiyi = 71,347.30, and Σyi² = 43,745.22.
a. Obtain the equation of the least squares line, and then calculate a point prediction of the cetane number that would result from a single observation with an iodine value of 100.
b. Calculate and interpret the coefficient of determination.
c. Calculate and interpret a point estimate of the model standard deviation σ.

20. A number of studies have shown lichens (certain plants composed of an alga and a fungus) to be excellent bioindicators of air pollution.
The article "The Epiphytic Lichen Hypogymnia physodes as a Biomonitor of Atmospheric Nitrogen and Sulphur Deposition in Norway" (Environ. Monitoring Assessment, 1993: 27-47) gives the following data (read from a graph) on x = NO3⁻ wet deposition (g N/m²) and y = lichen N (% dry weight):

x   .05   .10   .11   .12   .31    …    .42   .58   .68    …    .85   .92
y   .48    …     …     …     …   1.02    …     …     …     …     …     …

For Exercise 18, a simple linear regression analysis based on n = 23 observations produced an estimated regression line with slope .144.
a. Interpret the estimated slope .144 and the estimated intercept.
b. Calculate a point estimate of the true average calcium content when the amount of dissolved material is 50 mg/cm².
c. The value of total sum of squares was SST = 320.398. Calculate an estimate of the error standard deviation σ in the simple linear regression model.

19. The cetane number is a critical property in specifying the ignition quality of a fuel used in a diesel engine. Determination of this number for a biodiesel fuel is expensive and time-consuming. The article "Relating the Cetane Number of Biodiesel Fuels to Their Fatty Acid Composition: A Critical Study" (J. Automobile Engr., 2009: 565-583) included the data given above on x = iodine value (g) and y = cetane number for a sample of 14 biofuels. The iodine value is the amount of iodine necessary to saturate a sample of 100 g of oil.

The authors of the lichen study (Exercise 20) used simple linear regression to analyze the data. Use the accompanying MINITAB output to answer the following questions:
a. What are the least squares estimates of β0 and β1?
b. Predict lichen N for an NO3⁻ deposition value of …
c. What is the estimate of σ?
d. What is the value of total variation, and how much of it can be explained by the model relationship?
The regression equation is
lichen N = 0.365 + 0.967 no3 depo

Predictor    Coef      Stdev     t-ratio      P
Constant     0.36510   0.09904     3.69     0.004
no3 depo     0.9668    0.1829      5.29     0.000

S = 0.1932   R-sq = 71.7%   R-sq(adj) = 69.2%

Analysis of Variance
Source        DF      SS       MS        F       P
Regression     1    1.0427   1.0427    27.94   0.000
Error         11    0.4106   0.0373
Total         12    1.4533

21. The article "Effects of Bike Lanes on Driver and Bicyclist Behavior" (ASCE Transportation Engrg. J., 1977: 243-256) reports the results of a regression analysis with x = available travel space in feet (a convenient measure of roadway width, defined as the distance between a cyclist and the roadway center line) and separation distance y between a bike and a passing car (determined by photography). The data, for ten streets with bike lanes, follows:

x   12.8  12.9  12.9  13.6  14.5  14.6  15.1  17.5  19.5  20.8
y    5.5   6.2   6.3   7.0   7.8   8.3   7.1  10.0  10.8  11.0

23. Show that the mle's of β0 and β1 are indeed the least squares estimates. [Hint: The pdf of Yᵢ is normal with mean μᵢ = β0 + β1xᵢ and variance σ²; the likelihood is the product of the n pdf's.]

24. Denote the residuals by e1, ..., en (eᵢ = yᵢ − ŷᵢ).
a. Show that Σeᵢ = 0 and Σxᵢeᵢ = 0. [Hint: Examine the two normal equations.]
b. Show that ŷᵢ − ȳ = β̂1(xᵢ − x̄).
c. Use (a) and (b) to derive the analysis of variance identity for regression, Equation (12.4), by showing that the cross-product term is 0.
d.
Use (b) and Equation (12.4) to verify the computational formula for SSE.

For the data of Exercise 21:
a. Verify that Σxᵢ = 154.20, Σyᵢ = 80.0, Σxᵢ² = 2452.18, Σxᵢyᵢ = 1282.74, and Σyᵢ² = 675.16.
b. Derive the equation of the estimated regression line.
c. What separation distance would you predict for another street that has 15.0 as its available travel space value?
d. What would be the estimate of expected separation distance for all streets having available travel space value 15.0?

25. A regression analysis is carried out with y = temperature, expressed in °C. How do the resulting values of β̂0 and β̂1 relate to those obtained if y is reexpressed in °F? Justify your assertion. [Hint: new yᵢ = yᵢ′ = 1.8yᵢ + 32.]

26. Show that b1 and b0 of Expressions (12.2) and (12.3) satisfy the normal equations.

27. Show that the "point of averages" (x̄, ȳ) lies on the estimated regression line.

28. Suppose an investigator has data on the amount of shelf space x devoted to display of a particular product and sales revenue y for that product. The investigator may wish to fit a model for which the true regression line passes through (0, 0). The appropriate model is Y = β1x + ε. Assume that (x1, y1), ..., (xn, yn) are observed pairs generated from this model, and derive the least squares estimator of β1. [Hint: Write the sum of squared deviations as a function of b1, a trial value, and use calculus to find the minimizing value of b1.]

22. For the past decade rubber powder has been used in asphalt cement to improve performance. The article "Experimental Study of Recycled Rubber-Filled High-Strength Concrete" (Mag. Concrete Res., 2009: 549-556) reported on a regression of y = axial strength (MPa) on x = cube strength (MPa) based on the following sample data:

x   112.3   97.0   92.7   86.0   102.0   99.2   95.8   103.5   89.0   86.7
y    75.0   71.0   57.7   48.7    74.3   73.3   68.0    59.3   57.8   48.5

a. Verify that a scatter plot supports the assumption that the two variables are related via the simple linear regression model.
b. Obtain the equation of the least squares line, and interpret its slope.
c. Calculate and interpret the coefficient of determination.
d. Calculate and interpret an estimate of the error standard deviation σ in the simple linear regression model.
e. The largest x value in the sample considerably exceeds the other x values. What is the effect on the equation of the least squares line of deleting the corresponding observation?

29. a. Consider the data in Exercise 20.
Suppose that instead of the least squares line passing through the points (x1, y1), ..., (xn, yn), we wish the least squares line passing through (x1 − x̄, y1), ..., (xn − x̄, yn). Construct a scatter plot of the (xᵢ, yᵢ) points and then of the (xᵢ − x̄, yᵢ) points. Use the plots to explain intuitively how the two least squares lines are related to each other.
b. Suppose that instead of the model Yᵢ = β0 + β1xᵢ + εᵢ (i = 1, ..., n), we wish to fit a model of the form Yᵢ = β0* + β1*(xᵢ − x̄) + εᵢ (i = 1, ..., n). What are the least squares estimators of β0* and β1*, and how do they relate to ȳ and β̂1?

30. Consider the following three data sets, in which the variables of interest are x = commuting distance and y = commuting time. Based on a scatter plot and the values of s and r², in which situation would simple linear regression be most (least) effective, and why?

         1             2             3
      x     y       x     y       x     y
     15    42       5    16       5     8
     16    35      10    32      10    16
     17    45      15    44      15    22
     18    42      20    45      20    23
     19    49      25    63      25    31
     20    46      50   115      50    60

Sxx     17.50      1270.8333    1270.8333
Sxy     29.50      2722.5       1431.6667
β̂1      1.685714     2.142295     1.126557
β̂0     13.666672     7.868852     3.196729
SST    114.83      5897.5       1627.33
SSE     65.10        65.10         14.48

12.3 Inferences About the Regression Coefficient β1

In virtually all of our inferential work thus far, the notion of sampling variability has been pervasive. In particular, properties of sampling distributions of various statistics have been the basis for developing confidence interval formulas and hypothesis-testing methods. The key idea here is that the value of virtually any quantity calculated from sample data (the value of virtually any statistic) is going to vary from one sample to another.

Example 12.11  Reconsider the global warming data on x = CO2 and y = tree growth mass from Example 12.5 in the previous section. There are 8 observations, 2 at each of the x values 408, 554, 680, and 812.
Suppose that the slope and intercept of the true regression line are β1 = .0085 and β0 = −2.35, with σ = .5 (consistent with the values β̂1 = .00845, β̂0 = −2.349, s = .534 computed in Example 12.10). Using R, we generated a sample of random deviations ε̃1, ..., ε̃8 from a normal distribution with mean 0 and standard deviation .5 and then added ε̃ᵢ to β0 + β1xᵢ to obtain 8 corresponding y values. Regression calculations were then carried out to obtain the estimated slope, intercept, and standard deviation. This process was repeated a total of 20 times, resulting in the values given in Table 12.1. There is clearly variation in values of the estimated slope and estimated intercept, as well as the estimated standard deviation. The equation of the least squares line thus varies from one sample to the next. Figure 12.15 shows graphs of the true regression line and the 20 sample regression lines.

Table 12.1 Simulation results for Example 12.11

        β̂0       β̂1        s
 1    −2.606   0.0086   0.312
 2    −3.639   0.0104   0.345
 3    −3.316   0.0100   0.530
 4    −3.042   0.0093   0.475
 5    −3.400   0.0103   0.441
 6    −3.932   0.0107   0.328
 7    −2.533   0.0090   0.423
 8    −2.862   0.0100   0.676
 9    −2.152   0.0081   0.401
10    −2.975   0.0093   0.409
11    −2.255   0.0084   0.639
12    −3.003   0.0095   0.437
13    −3.187   0.0093   0.587
14    −2.424   0.0087   0.598
15    −1.490   0.0073   0.735
16    −1.812   0.0074   0.332
17    −1.845   0.0079   0.552
18    −4.080   0.0107   0.520
19    −2.958   0.0090   0.718
20    −1.670   0.0072   0.574

Figure 12.15 Simulation results from Example 12.11: graphs of the true regression line and 20 least squares lines (from R)

The slope β1 of the population regression line is the true average change in the dependent variable y associated with a 1-unit increase in the independent variable x. The slope of the least squares line, β̂1, gives a point estimate of β1.
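The simulation just described is easy to replicate. The sketch below uses Python's standard library in place of the R session used in the text; its seed is arbitrary, so individual rows will not reproduce Table 12.1, but the sample-to-sample variation in the estimated slope is the same phenomenon.

```python
import random

# True model from Example 12.11 and the 8 fixed x values (2 at each level)
beta0, beta1, sigma = -2.35, 0.0085, 0.5
xs = [408, 408, 554, 554, 680, 680, 812, 812]

def least_squares(x, y):
    """Return (b0, b1) minimizing the sum of squared deviations."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    return ybar - b1 * xbar, b1

random.seed(1)  # arbitrary seed, for reproducibility of this sketch only
slopes = []
for _ in range(20):
    # Add N(0, sigma) deviations to the true line to get one simulated sample
    ys = [beta0 + beta1 * xi + random.gauss(0, sigma) for xi in xs]
    slopes.append(least_squares(xs, ys)[1])

print(min(slopes), max(slopes))  # the estimated slope varies from sample to sample
```

The 20 estimated slopes scatter around the true value .0085, just as the β̂1 column of Table 12.1 does.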
In the same way that a confidence interval for μ and procedures for testing hypotheses about μ were based on properties of the sampling distribution of X̄, further inferences about β1 are based on thinking of β̂1 as a statistic and investigating its sampling distribution. The values of the xᵢ's are assumed to be chosen before the experiment is performed, so only the Yᵢ's are random. The estimators (statistics, and thus random variables) for β0 and β1 are obtained by replacing yᵢ by Yᵢ in (12.2) and (12.3):

    β̂1 = Σ(xᵢ − x̄)(Yᵢ − Ȳ) / Σ(xᵢ − x̄)²,    β̂0 = (ΣYᵢ − β̂1Σxᵢ)/n

Similarly, the estimator for σ² results from replacing each yᵢ in the formula for s² by the rv Yᵢ:

    S² = SSE/(n − 2) = Σ(Yᵢ − β̂0 − β̂1xᵢ)² / (n − 2)

The denominator of β̂1, Sxx = Σ(xᵢ − x̄)², depends only on the xᵢ's and not on the Yᵢ's, so it is a constant. Then because Σ(xᵢ − x̄)Ȳ = Ȳ·Σ(xᵢ − x̄) = Ȳ·0 = 0, the slope estimator can be written as

    β̂1 = Σ(xᵢ − x̄)Yᵢ / Sxx = ΣcᵢYᵢ    where cᵢ = (xᵢ − x̄)/Sxx

That is, β̂1 is a linear function of the independent rv's Y1, Y2, ..., Yn, each of which is normally distributed. Invoking properties of a linear function of random variables discussed in Section 6.3 leads to the following results (Exercise 40).

1. The mean value of β̂1 is E(β̂1) = μ_β̂1 = β1, so β̂1 is an unbiased estimator of β1 (the distribution of β̂1 is always centered at the value of β1).

2. The variance and standard deviation of β̂1 are

    V(β̂1) = σ²_β̂1 = σ²/Sxx,    σ_β̂1 = σ/√Sxx    (12.5)

where Sxx = Σ(xᵢ − x̄)² = Σxᵢ² − (Σxᵢ)²/n. Replacing σ by its estimate s gives an estimate for σ_β̂1 (the estimated standard deviation, i.e., estimated standard error, of β̂1):

    s_β̂1 = s/√Sxx

(This estimate can also be denoted by σ̂_β̂1.)

3. The estimator β̂1 has a normal distribution (because it is a linear function of independent normal rv's).

According to (12.5), the variance of β̂1 equals the variance σ² of the random error term (or, equivalently, of any Yᵢ) divided by Σ(xᵢ − x̄)².
Because Σ(xᵢ − x̄)² is a measure of how spread out the xᵢ's are about x̄, we conclude that making observations at xᵢ values that are quite spread out results in a more precise estimator of the slope parameter (smaller variance of β̂1), whereas values of xᵢ all close to each other imply a highly variable estimator. Of course, if the xᵢ's are spread out too far, a linear model may not be appropriate throughout the range of observation.

Many inferential procedures discussed previously were based on standardizing an estimator by first subtracting its mean value and then dividing by its estimated standard deviation. In particular, test procedures and a CI for the mean μ of a normal population utilized the fact that the standardized variable (X̄ − μ)/(S/√n), that is, (X̄ − μ)/S_X̄, had a t distribution with n − 1 df. A similar result here provides the key to further inferences concerning β1.

THEOREM  The assumptions of the simple linear regression model imply that the standardized variable

    T = (β̂1 − β1)/(S/√Sxx) = (β̂1 − β1)/S_β̂1

has a t distribution with n − 2 df.

The T ratio can be written as

    T = [ (β̂1 − β1)/(σ/√Sxx) ] / √[ ((n − 2)S²/σ²) / (n − 2) ]

The theorem is a consequence of the following facts: (β̂1 − β1)/(σ/√Sxx) ~ N(0, 1), (n − 2)S²/σ² ~ χ²(n − 2), and β̂1 is independent of S². That is, T is a standard normal rv divided by the square root of an independent chi-squared rv over its df, so T has the specified t distribution.

A Confidence Interval for β1

As in the derivation of previous CIs, we begin with a probability statement:

    P( −t(α/2, n−2) < (β̂1 − β1)/S_β̂1 < t(α/2, n−2) ) = 1 − α

Manipulation of the inequalities inside the parentheses to isolate β1 and substitution of estimates in place of the estimators gives the CI formula.

A 100(1 − α)% CI for the slope β1 of the true regression line is

    β̂1 ± t(α/2, n−2) · s_β̂1

This interval has the same general form as did many of our previous intervals.
It is centered at the point estimate of the parameter, and the amount it extends out to each side of the estimate depends on the desired confidence level (through the t critical value) and on the amount of variability in the estimator (through s_β̂1, which will tend to be small when there is little variability in the distribution of β̂1 and large otherwise).

Example 12.12  Is it possible to predict graduation rates from freshman test scores? Based on the average SAT score of entering freshmen at a university, can we predict the percentage of those freshmen who will get a degree there within 6 years? We use a random sample of 20 universities from the 248 national universities listed in the 2005 edition of America's Best Colleges, published by U.S. News & World Report.

      Rank   University                 Grad rate     SAT      Private or State
 1      2    Princeton                     98       1465.00         P
 2     13    Brown                         96       1395.00         P
 3     15    Johns Hopkins                 88       1380.00         P
 4     69    Pittsburgh                    65       1215.00         S
 5     77    SUNY-Binghamton               80       1235.00         S
 6     94    Kansas                        58       1011.10         S
 7    102    Dayton                        76       1055.54         P
 8    107    Illinois Inst Tech            67       1166.65         P
 9    125    Arkansas                      48       1055.54         S
10    139    Florida Inst Tech             54       1155.00         P
11    147    New Mexico Inst Mining        42       1099.99         S
12    158    Temple                        54       1080.00         S
13    172    Montana                       45        944.43         S
14    174    New Mexico                    42        899.99         S
15    178    South Dakota                  51        944.43         S
16    183    Virginia Commonwealth         42       1060.00         S
17    186    Widener                       70       1005.00         P
18    187    Alabama A&M                   38        722.21         S
19    243    Toledo                        44        877.77         S
20    245    Wayne State                   31        833.32         S

The SAT scores were actually given in the form of first and third quartiles, so the average of those two numbers is used here. Notice that some of the SAT scores are not integers. Those values were computed from ACT scores using the NCAA formula SAT = −55.556 + 44.444·ACT, which is equivalent to saying that there is a linear relationship with 17 on the ACT corresponding to 700 on the SAT and 26 on the ACT corresponding to 1100 on the SAT.
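The least squares and confidence interval calculations developed in the following text can be reproduced directly from this table. A minimal sketch in Python (standard library only; the critical value 2.101 = t.025,18 is taken from the t table rather than computed):

```python
import math

# (SAT, graduation rate) pairs for the 20 universities in Example 12.12
data = [(1465.00, 98), (1395.00, 96), (1380.00, 88), (1215.00, 65),
        (1235.00, 80), (1011.10, 58), (1055.54, 76), (1166.65, 67),
        (1055.54, 48), (1155.00, 54), (1099.99, 42), (1080.00, 54),
        (944.43, 45), (899.99, 42), (944.43, 51), (1060.00, 42),
        (1005.00, 70), (722.21, 38), (877.77, 44), (833.32, 31)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data) - sx ** 2 / n      # Sxx
sxy = sum(x * y for x, y in data) - sx * sy / n      # Sxy
sst = sum(y * y for _, y in data) - sy ** 2 / n      # SST
b1 = sxy / sxx                                       # slope estimate
b0 = sy / n - b1 * sx / n                            # intercept estimate
sse = sst - b1 * sxy                                 # computational formula for SSE
s = math.sqrt(sse / (n - 2))                         # s, based on n - 2 = 18 df
se_b1 = s / math.sqrt(sxx)                           # estimated standard error of slope
t = 2.101                                            # t_{.025,18} from the t table
ci = (b1 - t * se_b1, b1 + t * se_b1)                # 95% CI for beta_1
```

Running this reproduces β̂1 = .0885, β̂0 = −36.183, s = 10.29, s_β̂1 = .01226, and the interval (.063, .114) obtained in the example.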
The scatter plot of the data in Figure 12.16 suggests the appropriateness of the simple linear regression model; graduation rate increases approximately linearly with SAT.

Figure 12.16 Scatter plot of the data from Example 12.12

The values of the summary statistics required for calculation of the least squares estimates are

    Σxᵢ = 21,600.97    Σyᵢ = 1189    Σxᵢ² = 24,034,220.545
    Σxᵢyᵢ = 1,346,524.53    Σyᵢ² = 78,113

from which Sxy = 62,346.86, Sxx = 704,125.298, β̂1 = .08854513, β̂0 = −36.1830309, SST = 7426.95, SSE = 1906.439, and r² = 1 − 1906.439/7426.95 = .7433. Roughly 74% of the observed variation in graduation rate can be attributed to the simple linear regression relationship between graduation rate and SAT. Error df is 20 − 2 = 18, giving s² = 1906.439/18 = 105.9 and s = 10.29. The estimated standard deviation of β̂1 is

    s_β̂1 = s/√Sxx = 10.29/√704,125.298 = .01226

The t critical value for a confidence level of 95% is t.025,18 = 2.101. The confidence interval is

    .0885 ± (2.101)(.01226) = .0885 ± .0258 = (.063, .114)

With a high degree of confidence, we estimate that an average increase in percentage graduation rate of between .063 and .114 is associated with a 1-point increase in SAT. Multiplying by 100 gives the change in graduation percentage corresponding to a 100-point increase in SAT: 8.85 ± 2.58, that is, between 6.3 and 11.4. This shows that a substantial increase in graduation rate accompanies an increase of 100 SAT points. Is this a causal relationship, so that a university president can count on an increased graduation rate if the admissions process becomes more selective in terms of entrance exam scores?
One can imagine contrary scenarios, such as that more serious students attend more prestigious colleges, with higher entrance requirements and higher graduation rates, and that prestige would not be affected by an increase in entrance requirements. However, it seems more likely that prestige would benefit from higher test scores, so this scenario is not a very good argument against causality. In any case, there is at least one university president who claimed that increasing test scores resulted in a higher graduation rate.

Looking at the SAS output of Figure 12.17, we find the value of s_β̂1 under Parameter Estimates as the second number in the Standard Error column. All of the widely used statistical packages include this estimated standard error in output. There is also an estimated standard error for the statistic β̂0. Confidence intervals for β1 and β0 appear on the output. For all of the statistics, compare the values on the SAS output with the values that we calculated. The output shows the values of graduation rate, predicted values, and residuals. Matching the rows in Figure 12.17 with the corresponding rows in the original listing of the data, it is possible to see that the residuals for the private universities are mostly positive.
However, it is much easier to see this in Figure 12.18, where the private universities are labeled "P" and the public universities are labeled "S." Of the seven private universities, five are above their predictions (positive residual) and one is barely below. Private universities mostly seem to achieve a higher graduation rate for a given entrance exam score (for more on this issue, see the rest of the story in Sections 12.6 and 12.7).

The REG Procedure
Model: Linear_Regression_Model
Dependent Variable: GradRate

Analysis of Variance
                          Sum of        Mean
Source           DF      Squares      Square    F Value    Pr > F
Model             1   5520.51091  5520.51091     52.12     <.0001
Error            18   1906.43909   105.91328
Corrected Total  19   7426.95000

Root MSE        10.29142    R-Square   0.7433
Dependent Mean  59.45000    Adj R-Sq   0.7290
Coeff Var       17.31105

Parameter Estimates
               Parameter    Standard
Variable   DF   Estimate       Error   t Value   Pr > |t|    95% Confidence Limits
Intercept   1  -36.18303    13.44867    -2.69     0.0149    -64.42924    -7.93682
SAT         1    0.08855     0.01226     7.22     <.0001      0.06278     0.11431

Output Statistics
         Dep Var    Predicted
Obs     GradRate        Value    Residual
  1      98.0000      93.5356      4.4644
  2      96.0000      87.3374      8.6626
  3      88.0000      86.0092      1.9908
  4      65.0000      71.3993     -6.3993
  5      80.0000      73.1702      6.8298
  6      58.0000      53.3449      4.6551
  7      76.0000      57.2799     18.7201
  8      67.0000      67.1181     -0.1181
  9      48.0000      57.2799     -9.2799
 10      54.0000      66.0866    -12.0866
 11      42.0000      61.2157    -19.2157
 12      54.0000      59.4457     -5.4457
 13      45.0000      47.4416     -2.4416
 14      42.0000      43.5067     -1.5067
 15      51.0000      47.4416      3.5584
 16      42.0000      57.6748    -15.6748
 17      70.0000      52.8048     17.1952
 18      38.0000      27.7651     10.2349
 19      44.0000      41.5392      2.4608
 20      31.0000      37.6034     -6.6034

Figure 12.17 SAS output for the data of Example 12.12

Figure 12.18 Comparing private and state universities
It is interesting to speculate about why this might occur. Is there a more nurturing atmosphere with more individual attention at private schools? On the other hand, private universities might attract students who are more likely to graduate regardless of the campus atmosphere.

Hypothesis-Testing Procedures

As before, the null hypothesis in a test about β1 will be an equality statement. The null value (value of β1 claimed true by the null hypothesis) will be denoted by β1,0 (read "beta one nought," not "beta ten"). The test statistic results from replacing β1 in the standardized variable T by the null value β1,0, that is, from standardizing the estimator of β1 under the assumption that H0 is true. The test statistic thus has a t distribution with n − 2 df when H0 is true, so the type I error probability is controlled at the desired level α by using an appropriate t critical value.

The most commonly encountered pair of hypotheses about β1 is H0: β1 = 0 versus Ha: β1 ≠ 0. When this null hypothesis is true, μ_Y·x = β0, independent of x, so knowledge of x gives no information about the value of the dependent variable. A test of these two hypotheses is often referred to as the model utility test in simple linear regression. Unless n is quite small, H0 will be rejected and the utility of the model confirmed precisely when r² is reasonably large. The simple linear regression model should not be used for further inferences (estimates of mean value or predictions of future values) unless the model utility test results in rejection of H0 for a suitably small α.

Null hypothesis: H0: β1 = β1,0

Test statistic value: t = (β̂1 − β1,0)/s_β̂1

Alternative Hypothesis       Rejection Region for Level α Test
Ha: β1 > β1,0                t ≥ t(α, n−2)
Ha: β1 < β1,0                t ≤ −t(α, n−2)
Ha: β1 ≠ β1,0                either t ≥ t(α/2, n−2) or t ≤ −t(α/2, n−2)

A P-value based on n − 2 df can be calculated just as was done previously for t tests in Chapters 9 and 10.
The model utility test is the test of H0: β1 = 0 versus Ha: β1 ≠ 0, in which case the test statistic value is the t ratio t = β̂1/s_β̂1.

Example 12.13  Let's carry out the model utility test at significance level α = .05 for the data of Example 12.12. We use the MINITAB regression output in Figure 12.19, which can be compared with the SAS output of Figure 12.17.

The regression equation is
Grad Rate = -36.2 + 0.0885 SAT

Predictor      Coef     SE Coef       T        P
Constant     -36.18       13.44    -2.69    0.015
SAT         0.08855     0.01226     7.22    0.000

S = 10.2918   R-Sq = 74.3%   R-Sq(adj) = 72.9%

Analysis of Variance
Source            DF      SS       MS       F       P
Regression         1   5520.5   5520.5   52.12   0.000
Residual Error    18   1906.4    105.9
Total             19   7427.0

Figure 12.19 MINITAB output for Example 12.13 (the value 7.22 in the SAT row is the t ratio β̂1/s_β̂1, and the adjacent 0.000 is the P-value for the model utility test)

The parameter of interest is β1, the expected change in graduation rate associated with an increase of 1 in SAT score. The null hypothesis H0: β1 = 0 will be rejected in favor of the alternative Ha: β1 ≠ 0 if the t ratio t = β̂1/s_β̂1 satisfies either t ≥ t(α/2, n−2) = t.025,18 = 2.101 or t ≤ −2.101. From Figure 12.19, β̂1 = .08855, s_β̂1 = .01226, and

    t = .08855/.01226 = 7.22    (also on output)

Clearly 7.22 > 2.101, so H0 is resoundingly rejected. Alternatively, the P-value is twice the area captured under the 18 df t curve to the right of 7.22. MINITAB gives P-value = .000, so H0 should be rejected at any reasonable α. This confirmation of the utility of the simple linear regression model gives us license to calculate various estimates and predictions as described in Section 12.4. Notice that, in contrast, SAS in Figure 12.17 gives a P-value of <.0001. This is better than the MINITAB P-value of .000 because the MINITAB value could be incorrectly read as 0. Of course the actual value is positive, approximately .0000010. When rounded to three decimals this gives the value .000 printed by MINITAB.
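The test-statistic arithmetic, and the equality f = t² discussed in the next subsection, can be checked numerically from the quantities in Figure 12.19. A minimal sketch:

```python
b1, se_b1 = 0.08855, 0.01226      # slope estimate and its standard error (Figure 12.19)
ssr, sse, n = 5520.5, 1906.4, 20  # regression SS, error SS, and sample size

t = b1 / se_b1                    # model utility t ratio
f = ssr / (sse / (n - 2))         # ANOVA F ratio = MSR / MSE

print(round(t, 2), round(f, 2))   # t is approximately 7.22; f is approximately 52.12
```

Because the inputs are rounded to the digits shown in the output, t² and f agree only to about two decimal places here; computed from full-precision quantities they agree exactly.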
Given the confidence interval of Example 12.12, the result of the hypothesis test should be no surprise. It should be clear, in the two-tailed test of H0: β1 = 0 at level α, that H0 is rejected if and only if the 100(1 − α)% confidence interval fails to include 0. In the present instance, the 95% confidence interval did not include 0, so we should have known that the two-tailed test at level .05 would reject H0: β1 = 0.

Regression and ANOVA

The splitting of the total sum of squares Σ(yᵢ − ȳ)² into a part SSE, which measures unexplained variation, and a part SSR, which measures variation explained by the linear relationship, is strongly reminiscent of one-way ANOVA. In fact, the null hypothesis H0: β1 = 0 can be tested against Ha: β1 ≠ 0 by constructing an ANOVA table (Table 12.2) and rejecting H0 if f ≥ F(α, 1, n−2).

Table 12.2 ANOVA table for simple linear regression

Source of Variation    df      Sum of Squares    Mean Square           f
Regression              1           SSR              SSR         SSR/(SSE/(n − 2))
Error                 n − 2         SSE         s² = SSE/(n − 2)
Total                 n − 1         SST

The F test gives exactly the same result as the model utility t test because t² = f and [t(α/2, n−2)]² = F(α, 1, n−2). Virtually all computer packages that have regression options include such an ANOVA table in the output. For example, Figure 12.17 shows SAS output for the university data of Example 12.12. The ANOVA table at the top of the output has f = 52.12 with a P-value of <.0001 (the actual value is about .0000010) for the model utility test. The table of parameter estimates gives t = 7.22, again with P < .0001, and t² = (7.22)² = 52.12 = f.

Fitting the Logistic Regression Model

Recall from Section 12.1 that in the logistic regression model, the dependent variable Y is 1 if the observation is a success and 0 otherwise.
The probability of success is related to a quantitative predictor x by the logit function

    p(x) = e^(β0+β1x) / (1 + e^(β0+β1x))

Fitting the model to sample data requires that the parameters β0 and β1 be estimated. The standard way of doing this is by the method of maximum likelihood. Suppose, for example, that n = 5 and that the observations made at x2, x4, and x5 are successes, whereas the other two observations are failures. Then the likelihood function is

    [1 − p(x1)][p(x2)][1 − p(x3)][p(x4)][p(x5)]
      = [1/(1 + e^(β0+β1x1))] · [e^(β0+β1x2)/(1 + e^(β0+β1x2))] · [1/(1 + e^(β0+β1x3))]
          · [e^(β0+β1x4)/(1 + e^(β0+β1x4))] · [e^(β0+β1x5)/(1 + e^(β0+β1x5))]

Unfortunately it is not at all straightforward to maximize this likelihood, and there are no nice formulas for the mle's β̂0 and β̂1. The maximization process must be carried out using iterative numerical methods. The details are involved, but fortunately the most popular statistical software packages will do this on request and provide quantitative and graphical indications of how well the model fits.

In particular, the mle β̂1 is provided along with its estimated standard deviation s_β̂1. For large n, the estimator has approximately a normal distribution, and the standardized variable (β̂1 − β1)/s_β̂1 has approximately a standard normal distribution. This allows for calculation of a confidence interval for β1 as well as for testing H0: β1 = 0, according to which the value of x has no impact on the likelihood of success. Some software packages report the value of the chi-squared statistic z² rather than z itself, along with the corresponding P-value for a two-tailed test.

Here is data on launch temperature and the incidence of failure for O-rings in 23 space shuttle launches prior to the Challenger disaster of January 1986.

Temperature  Failure | Temperature  Failure | Temperature  Failure
    53          Y    |     68          N    |     75          N
    57          Y    |     69          N    |     75          Y
    58          Y    |     70          N    |     76          N
    63          Y    |     70          N    |     76          N
    66          N    |     70          Y    |     78          N
    67          N    |     70          Y    |     79          N
    67          N    |     72          N    |     81          N
    67          N    |     73          N    |

Figure 12.20 shows JMP output for a logistic regression analysis.
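The iterative maximization that packages such as JMP perform can also be sketched directly. The following Python sketch uses a hand-rolled Newton-Raphson loop (one of several numerical methods a package might use, not necessarily JMP's own) on the 23 (temperature, failure) pairs above; it recovers estimates close to those reported in Figure 12.20.

```python
import math

# O-ring data: launch temperature (deg F) and failure indicator (1 = failure)
temps = [53, 57, 58, 63, 66, 67, 67, 67, 68, 69, 70, 70, 70, 70,
         72, 73, 75, 75, 76, 76, 78, 79, 81]
fails = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
         0, 0, 0, 1, 0, 0, 0, 0, 0]

def fit_logistic(x, y, iters=30):
    """Maximize the logistic-regression likelihood by Newton-Raphson on (b0, b1)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            eta = max(-30.0, min(30.0, b0 + b1 * xi))  # clamp to avoid overflow
            p = 1.0 / (1.0 + math.exp(-eta))           # p(x) = e^eta / (1 + e^eta)
            g0 += yi - p                               # gradient of the log-likelihood
            g1 += (yi - p) * xi
            w = p * (1.0 - p)                          # weights for the Hessian
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det              # Newton step: solve H d = g
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_logistic(temps, fails)                    # approx. 15.04 and -0.2322
p31 = 1.0 / (1.0 + math.exp(-(b0 + b1 * 31)))          # estimated failure prob. at 31 deg F
```

The fitted probability at 31°F reproduces the extrapolation discussed below, near .9996.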
We have chosen to let p denote the probability of failure. Failures tended to occur at lower temperatures and successes at higher temperatures, so the graph of p̂ decreases as temperature increases. The estimate of β1 is β̂1 = −.2322, and the estimated standard deviation of β̂1 is s_β̂1 = .1082. The value of z for testing H0: β1 = 0, which asserts that temperature does not affect the likelihood of O-ring failure, is z = β̂1/s_β̂1 = −.2322/.1082 = −2.15. The P-value is .032 (twice the area under the z curve to the left of −2.15). JMP reports the value of a chi-squared statistic, which is just z², and the chi-squared P-value differs from that for z only because of rounding.

For each 1-degree increase in temperature, we estimate that the odds of failure decrease by a factor of e^β̂1 = e^(−.2322) = .79. The launch temperature for the Challenger mission was only 31°F. Because this value is much smaller than any temperature in our sample, it is dangerous to extrapolate the estimated relationship. Nevertheless, it appears that for a temperature this small, O-ring failure is almost a sure thing. The logistic regression gives the estimated probability at x = 31 as

    p̂(31) = e^(β̂0+β̂1(31)) / (1 + e^(β̂0+β̂1(31))) = e^(15.0423−.23215(31)) / (1 + e^(15.0423−.23215(31))) = .99961

and the odds associated with this probability are .99961/(1 − .99961) = 2563. Thus, if the logistic regression can be extrapolated down to 31, the probability of failure is .99961, the probability of success is .00039, and the predicted odds are 2563 to 1 against success. Too bad this calculation was not done before launch!

Parameter Estimates
Term         Estimate      Std Error    ChiSquare    Prob>ChiSq
Intercept    15.0422911    7.378391       4.16         0.0415
temp         -0.2321537    0.108229       4.60         0.0320

Figure 12.20 Logistic regression output from JMP

Exercises  Section 12.3 (31-44)

31. Reconsider the situation described in Example 12.5, in which x = CO2 concentration and y = mass of 11-month-old pine trees. Suppose the simple linear regression model is valid for x between 450 and 750, and that β1 = .008 and σ = .5. Consider an experiment in which n = 7, and the x values at which observations are made are x1 = 450, x2 = 500, x3 = 550, x4 = 600, x5 = 650, x6 = 700, and x7 = 750.
a. Calculate σ_β̂1, the standard deviation of β̂1.
b. What is the probability that the estimated slope based on such observations will be between .006 and .010?
c. Suppose it is also possible to make a single observation at each of the n = 11 values 525, 540, 555, 570, ..., 675. If a major objective is to estimate β1 as accurately as possible, would the experiment with n = 11 be preferable to the one with n = 7?

For the diatom data of Exercise 34 (below):
b. Carry out a test of hypotheses to determine whether there is a useful linear relationship between density and rock area.
c. The second observation has a very extreme y value (in the full data set consisting of 72 observations, there were two of these). This observation may have had a substantial impact on the fit of the model and subsequent conclusions. Eliminate it and redo parts (a) and (b). What do you conclude?

35. How does lateral acceleration (side forces experienced in turns that are largely under driver control) affect nausea as perceived by bus passengers?

32. Exercise 17 of Section 12.2 gave data on x = rainfall volume and y = runoff volume (both in m³). Use the accompanying MINITAB output to decide whether there is a useful linear relationship between rainfall and runoff, and then calculate a confidence interval for the true average change in runoff volume associated with a 1-m³ increase in rainfall volume.

The regression equation is
runoff = -1.13 + 0.827 rainfall

Predictor      Coef     SE Coef       T        P
Constant     -1.128      2.368     -0.48    0.642
Rainfall    0.82697    0.03652    22.64    0.000
S = 5.240   R-sq = 97.5%   R-sq(adj) = 97.3%

The article "Motion Sickness in Public Road Transport: The Effect of Driver, Route, and Vehicle" (Ergonomics, 1999: 1646-1664) reported data on x = motion sickness dose (calculated in accordance with a British standard for evaluating similar motion at sea) and y = reported nausea (%). Relevant summary quantities are

    n = 17,  Σxᵢ = 222.1,  Σyᵢ = 193,  Σxᵢ² = 3056.69,  Σxᵢyᵢ = 2759.6,  Σyᵢ² = 2975

Values of dose in the sample ranged from 6.0 to 17.6.
a. Assuming that the simple linear regression model is valid for relating these two variables (this is supported by the raw data), calculate and interpret an estimate of the slope parameter that conveys information about the precision and reliability of estimation.
b. Does it appear that there is a useful linear relationship between these two variables? Answer the question by employing the P-value approach.
c. Would it be sensible to use the simple linear regression model as a basis for predicting % nausea when dose = 5.0? Explain your reasoning.
d. When MINITAB was used to fit the simple linear regression model to the raw data, the observation (6.0, 2.50) was flagged as possibly having a substantial impact on the fit. Eliminate this observation from the sample and recalculate the estimate of part (a). Based on this, does the observation appear to be exerting an undue influence?

33. Exercise 16 of Section 12.2 included MINITAB output for a regression of daughter's height on midparent height.
a. Use the output to calculate a confidence interval with a confidence level of 95% for the slope β1 of the population regression line, and interpret the resulting interval.
b. Suppose it had previously been believed that when midparent height increased by 1 in., the associated true average change in the daughter's height would be at least 1 in. Does the sample data contradict this belief? State and test the relevant hypotheses.

34. The invasive diatom species Didymosphenia geminata has the potential to inflict substantial ecological and economic damage in rivers. The article "Substrate Characteristics Affect Colonization by the Bloom-Forming Diatom Didymosphenia Geminata" (Aquatic Ecology, 2010: 33-40) described an investigation of colonization behavior. One aspect of particular interest was whether y = colony density was related to x = rock surface area. The article contained a scatter plot and summary of a regression analysis. Here is representative data:

36. Mist (airborne droplets or aerosols) is generated when metal-removing fluids are used in machining operations to cool and
Would it be sensible to use the simple linear sis, Here is representative data: regression model as a basis for predicting % nausea when dose = 5.0? Explain your reasoning. x 50 TM 55) 50-33) 5879 d. When MINITAB was used to fit the simple 35 152 1929 48 22 2 5 35 linear regression model to the raw data, the , observation (6.0, 2.50) was flagged as possi- bly having a substantial impact on the fit. me | 26) 68 a. Br 70 20 As 8 Blliniiate hig SbUSEVAn ROH INE sample y 7 269 38 171 13 43 185 25 and recalculate the estimate of part (a). Based on this, does the observation appear a. Fit the simple linear regression model to this to Derexertingian unduesinfluence? data, and then calculate and interpret the coef- 36. Mist (airborne droplets or aerosols) is gen- ficient of determination. erated when metal-removing fluids are used in machining operations to cool and --- Trang 666 --- 12.3 Inferences About the Regression Coefficient 6, 653 lubricate the tool and workpiece. Mist gen- b. Use the rules of variance from Chapter 6 to eration is a concern to OSHA, which has verify the expression for V(,) given in this substantially lowered the workplace stan- section. dard, The article “Variables Affecting Mist 44 yerity that if each x; is multiplied by a positive Generation from Metal Removal Fluids ! ene bas constant ¢ and each y; is multiplied by another (Lubricat. Engrg., 2002: 10-17) gave the a : : ¥ . positive constant d, the f statistic for testing Ho: accompanying data on.x = fluid flow veloc- = 0 versus Ha: By # 0 is unchanged in value ity for a 5% soluble oil (cm/s) and y = the a Py eee th due Roth extent of mist droplets having diameters ibe alls of fy wal chage watch shows Vintitie 3 magnitude off, is not by itself indicative of model smaller than 10 jm (mg/m’*): Wes utility). 42. 
The probability of a type II error for the f test x | 89177 (189354362 442965 for Ho: Bi = Bio can be computed in the same y 40 60 48 66 61 69 .99 manner as it was computed for the f tests of Chapter 9. If the alternative value of f; is denoted a. The investigators performed a simple linear by i, the value of regression analysis to relate the two variables. Bo —Bl Does a scatter plot of the data support this d= tt. strategy? o ja=t b. What proportion of observed variation in mist V Su can be attributed to the simple linear regression is first calculated, then the appropriate set of curves relationship between velocity and mist? in Appendix Table A. 16 is entered on the horizontal ¢. The investigators were particularly interested axis at the value of d, and fi is read from the curve in the impact on mist of increasing velocity for n — 2 df. An article in the Journal of Public from 100 to 1000 (a factor of 10 corresponding Health Engineering. reports the results of to the difference between the smallest and larg-_a regression analysis based on n = 15 observations est x values in the sample). When x increases in in which x = filter application temperature (°C) this way, is there substantial evidence that the and y = % efficiency of BOD removal. Here true average increase in y is less than .6? BOD stands for biochemical oxygen demand, d. Estimate the true average change in mist asso- and it is a measure of organic matter in sewage. ciated with a 1 cm/s increase in velocity, and Calculated quantities include 7.x; = 402, do. so ina way that conveys information about Yox? = 11,098, s = 3.725, and fy = 1.7035. precision and reliability. Consider testing at significance level .01 Ho: 37. Refer to the data on x = iodine value and y = B, = 1, which states that the expected increase in cetane number given in Exercise 19. ° % BOD removal is 1 when filter application tem- a. 
Does the simple linear regression model spec perature increases by 1°C, against the alternative ify a useful relationship between the two vari- H,: Bb > 1. Determine P(type IL error) when ables? Use the appropriate test procedure to B, =2, 8=4. obtain information about the P-value and then 43, Kyphosis, or severe forward flexion of the spine, each a conclusion at significance level .01. aay persist despite ‘coméctive ‘spinal surgery: by: Compute:a.95% Cl for the expected change in A study carried out to determine risk factors for eetahe number associated withra'10 g increase kyphosis reported the following ages (months) for in iodlitie walle, 40 subjects at the time of the operation; the first 18 38. Carry out the model utility test using the subjects did have kyphosis and the remaining 22 ANOVA approach for the filtration rate~mois- did not. ture content data of Example 12.7. Verify that it Kyphosis 2 15 42 52 50 «73 gives a result equivalent to that of the ¢ test. 82 91 96 105 114 120 39. Use the rules of expected value to show that is 121 128 130 139 139 157 af re ial for Bo (assuming that, is No kyphosis 1 1 2 8 18 22 31 37 «61 72 «81 40. a. Verify that £(@,) = B, by using the rules of 97 112 118 127 131 140 expected value from Chapter 6. 151 159 177 206 --- Trang 667 --- 654 = cuaprer 12 Regression and Correlation Predictor coet stDev 2 Pe odds ratio 958 lower ci upper constant -0.5727 0.6024 0.95 0.342 age 0.004296 0.005849 0.73 0.463 1.00 0.99 1.02 Use the accompanying MINITAB logistic regres- sion output to decide whether age appears tohave Suecess 8 13 14 1820 2121 22-25 26 28 a significant impact on the presence of kyphosis. 20 20° Be: 44. The following data resulted from a study Fale 4 5 6 6 7 9 10 Hl Mt 13 15 commissioned by a large management con- 18 19 20 23. 
27 sulting company to investigate the relationship Interpret the accompanying MINITAB logistic between amount of job experience (months) for regression output, and sketch a graph of the esti- a junior consultant and the likelihood of the mated probability of task performance as a func- consultant being able to perform a certain tion of experience. complex task. Predictor coet seDev 2 P odds ratio 95% lower cL upper constant —3.211 1.238 -2.60 0.009 Age 0.17772 0.06573 2.70 0.007 a.ag 1.08 1.36 Inferences Concerning j1y..* and the Prediction of Future Y Values Let x* denote a specified value of the independent variable x. Once the estimates Bo and Bi have been calculated, By +Bix" can be regarded either as a point estimate of Hy.» (the expected or true average value of Y when x = x*) or as a prediction of the Y value that will result from a single observation made when x = x*. The point estimate or prediction by itself gives no information concerning how precisely jy,» has been estimated or Y has been predicted. This can be remedied by developing a CI for jy... and a prediction interval (PI) for a single Y value. Before we obtain sample data, both Bo and B, are subject to sampling variability—that is, they are both statistics whose values will vary from sample to sample. This variability was shown in Example 12.11 at the beginning of Section 12.3. Suppose, for example, that Bp = 50 and f, = 2. Then a first sample of (x, y) pairs might give Bo = 52.35, By = 1.895, a second sample might result in o = 46.52, B; = 2.056, and so on. It follows that ¥ =fy +f,x* itself varies in value from sample to sample, so it is a statistic. If the intercept and slope of the population line are the aforementioned values 50 and 2, respectively, and x* = 10, then this statistic is trying to estimate the value 50 + 2(10) = 70. The estimate from a first sample might be 52.35 + 1.895(10) = 71.30, from a second sample might be 46.52 + 2.056(10) = 67.08, and so on. 
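The sample-to-sample variability of Ŷ = β̂₀ + β̂₁x* described above is easy to see by simulation. The sketch below uses the text's values β₀ = 50, β₁ = 2, and x* = 10; the error standard deviation σ = 4 and the seven design points are hypothetical choices made only for illustration:

```python
import random
import statistics

# Simulate the sampling distribution of Y-hat = b0 + b1*x* under the true line
# beta0 = 50, beta1 = 2 (as in the text).  sigma and the design points below
# are assumptions for illustration only.
random.seed(1)
beta0, beta1, sigma, xstar = 50.0, 2.0, 4.0, 10.0
xs = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]   # hypothetical fixed x values

def fitted_value_at_xstar():
    """Draw one sample of (x, y) pairs, fit by least squares, return Y-hat."""
    ys = [beta0 + beta1 * x + random.gauss(0, sigma) for x in xs]
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - xbar) ** 2 for x in xs)
    b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b0 = ybar - b1 * xbar
    return b0 + b1 * xstar

preds = [fitted_value_at_xstar() for _ in range(10_000)]
# The fitted values vary from sample to sample but center on 50 + 2(10) = 70.
print(round(statistics.mean(preds), 2), round(statistics.stdev(preds), 2))
```

Each call plays the role of one "sample" in the discussion above; the spread of the 10,000 fitted values is exactly the sampling variability that the CI and PI formulas of this section quantify.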
In the same way that a confidence interval for β₁ was based on properties of the sampling distribution of β̂₁, a confidence interval for a mean y value in regression is based on properties of the sampling distribution of the statistic β̂₀ + β̂₁x*. Substitution of the expressions for β̂₀ and β̂₁ into β̂₀ + β̂₁x*, followed by some algebraic manipulation, leads to the representation of β̂₀ + β̂₁x* as a linear function of the Yᵢ's:

β̂₀ + β̂₁x* = Σᵢ₌₁ⁿ [ 1/n + (x* − x̄)(xᵢ − x̄)/S_xx ] Yᵢ = Σᵢ₌₁ⁿ dᵢYᵢ

The coefficients d₁, d₂, …, dₙ in this linear function involve the xᵢ's and x*, all of which are fixed. Application of the rules of Section 6.3 to this linear function gives the following properties. (Exercise 55 requests a derivation of Property 2.)

Let Ŷ = β̂₀ + β̂₁x*, where x* is some fixed value of x. Then
1. The mean value of Ŷ is
   E(Ŷ) = E(β̂₀ + β̂₁x*) = μ_{β̂₀+β̂₁x*} = β₀ + β₁x*
   Thus β̂₀ + β̂₁x* is an unbiased estimator for β₀ + β₁x* (i.e., for μ_{Y·x*}).
2. The variance of Ŷ is
   V(Ŷ) = σ_Ŷ² = σ² [ 1/n + (x* − x̄)²/S_xx ]
   and the standard deviation σ_Ŷ is the square root of this expression. The estimated standard deviation of β̂₀ + β̂₁x*, denoted by s_Ŷ or s_{β̂₀+β̂₁x*}, results from replacing σ by its estimate s:
   s_Ŷ = s_{β̂₀+β̂₁x*} = s √[ 1/n + (x* − x̄)²/S_xx ]
3. Ŷ has a normal distribution (because the Yᵢ's are normally distributed and independent).

The variance of β̂₀ + β̂₁x* is smallest when x* = x̄ and increases as x* moves away from x̄ in either direction. Thus the estimator of μ_{Y·x*} is more precise when x* is near the center of the xᵢ's than when it is far from the x values where observations have been made. This implies that both the CI and the PI are narrower for an x* near x̄ than for an x* far from x̄. Most statistical computer packages provide both β̂₀ + β̂₁x* and s_{β̂₀+β̂₁x*} for any specified x* upon request.
Inferences Concerning μ_{Y·x*}

Just as inferential procedures for β₁ were based on the t variable obtained by standardizing β̂₁, a t variable obtained by standardizing β̂₀ + β̂₁x* leads to a CI and test procedures here.

THEOREM  The variable

T = [β̂₀ + β̂₁x* − (β₀ + β₁x*)] / S_{β̂₀+β̂₁x*} = [Ŷ − (β₀ + β₁x*)] / S_Ŷ    (12.6)

has a t distribution with n − 2 df.

As for β₁ in the previous section, a probability statement involving this standardized variable can be manipulated to yield a confidence interval for μ_{Y·x*}:

A 100(1 − α)% CI for μ_{Y·x*}, the expected value of Y when x = x*, is

β̂₀ + β̂₁x* ± t_{α/2,n−2} · s_{β̂₀+β̂₁x*} = ŷ ± t_{α/2,n−2} · s_Ŷ    (12.7)

This CI is centered at the point estimate for μ_{Y·x*} and extends out to each side by an amount that depends on the confidence level and on the extent of variability in the estimator on which the point estimate is based.

Example 12.15  Recall the university data of Example 12.12, where the dependent variable was graduation rate and the predictor was the average SAT for entering freshmen. Results from Example 12.12 include Σxᵢ = 21,600.97, S_xx = 704,125.298, β̂₁ = .088545, β̂₀ = −36.18, s = 10.29, and therefore x̄ = 21,600.97/20 = 1080. Let's now calculate a confidence interval, using a 95% confidence level, for the mean graduation rate for all universities having an average freshman SAT of 1200; that is, a confidence interval for β₀ + β₁(1200). The interval is centered at

ŷ = β̂₀ + β̂₁(1200) = −36.18 + .0885(1200) = 70.07

The estimated standard deviation of the statistic Ŷ is

s_Ŷ = s √[ 1/n + (1200 − x̄)²/S_xx ] = 10.29 √[ 1/20 + (1200 − 1080)²/704,125.298 ] = 2.731

The 18 df t critical value for a 95% confidence level is 2.101, from which we determine the desired interval to be

70.07 ± (2.101)(2.731) = 70.07 ± 5.74 = (64.33, 75.81)

This rather wide CI suggests that we don't have terribly precise information about the mean value being estimated.
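As an arithmetic check, the interval just computed can be reproduced in a few lines (a sketch only; all summary quantities and the critical value t_{.025,18} = 2.101 are taken directly from the text):

```python
import math

# 95% CI (12.7) for the mean graduation rate at x* = 1200 (Example 12.15 data).
n, s = 20, 10.29
b0, b1 = -36.18, 0.088545
xbar = 21600.97 / n            # about 1080
sxx = 704125.298
xstar, t_crit = 1200.0, 2.101  # t_{.025,18}

yhat = b0 + b1 * xstar                                  # point estimate of mu_{Y.1200}
se = s * math.sqrt(1 / n + (xstar - xbar) ** 2 / sxx)   # s_Yhat
ci = (yhat - t_crit * se, yhat + t_crit * se)
print(round(yhat, 2), round(se, 3), tuple(round(v, 2) for v in ci))
```

The printed values match the hand computation above: point estimate about 70.07, estimated standard deviation about 2.731, and interval about (64.33, 75.81).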
Remember that if we recalculated this interval for sample after sample, in the long run about 95% of the calculated intervals would include β₀ + β₁(1200). We can only hope that this mean value lies in the single interval that we have calculated.

Figure 12.21 shows MINITAB output resulting from a request to calculate confidence intervals for the mean graduation rate when the average SAT is 1100 and 1200. Because this optional output was requested, the confidence intervals (Figure 12.21) were appended to the bottom of the regression output given in Figure 12.19. Note that the first interval is narrower than the second, because 1100 is much closer to x̄ than is 1200. Figure 12.22 shows curves corresponding to the confidence limits for each different x value. Notice how the curves get farther and farther apart as x moves away from x̄. The output labeled PI in Figure 12.21 and the curves labeled PI in Figure 12.22 refer to prediction intervals, to be discussed shortly.

Predicted Values for New Observations
New Obs     Fit   SE Fit          95% CI          95% PI
      1   61.22     2.31  (56.35, 66.08)  (39.06, 83.38)
      2   70.07     2.73  (64.33, 75.81)  (47.70, 92.44)

Figure 12.21 MINITAB regression output for the data of Example 12.15

[Figure 12.22 MINITAB scatter plot with confidence intervals and prediction intervals for the data of Example 12.15]

In some situations, a CI is desired not just for a single x value but for two or more x values. Suppose an investigator wishes a CI both for μ_{Y·v} and for μ_{Y·w}, where v and w are two different values of the independent variable. It is tempting to compute the interval (12.7) first for x = v and then for x = w. Suppose we use α = .05 in each computation to get two 95% intervals.
Then if the variables involved in computing the two intervals were independent of each other, the joint confidence coefficient would be (.95)(.95) ≈ .90. Unfortunately, the intervals are not independent, because the same β̂₀, β̂₁, and S are used in each. We therefore cannot assert that the joint confidence level for the two intervals is exactly 90%. However, Exercise 79 of Chapter 8 derives the Bonferroni inequality, which shows that if the 100(1 − α)% CI (12.7) is computed both for x = v and for x = w to obtain joint CIs for μ_{Y·v} and μ_{Y·w}, then the joint confidence level on the resulting pair of intervals is at least 100(1 − 2α)%. In particular, using α = .05 results in a joint confidence level of at least 90%, whereas using α = .01 results in at least 98% confidence. For example, in Example 12.15 a 95% CI for μ_{Y·1100} was (56.35, 66.08) and a 95% CI for μ_{Y·1200} was (64.33, 75.81). The simultaneous or joint confidence level for the two statements 56.35 < μ_{Y·1100} < 66.08 and 64.33 < μ_{Y·1200} < 75.81 is at least 90%.

The joint CIs are referred to as Bonferroni intervals. The method is easily generalized to yield joint intervals for k different μ_{Y·x*}'s. Using the interval (12.7) separately first for x = x₁*, then for x = x₂*, …, and finally for x = x_k* yields a set of k CIs for which the joint or simultaneous confidence level is guaranteed to be at least 100(1 − kα)%.

Tests of hypotheses about β₀ + β₁x* are based on the test statistic T obtained by replacing β₀ + β₁x* in the numerator of (12.6) by the null value μ₀. For example, the assertion H₀: β₀ + β₁(1200) = 75 in Example 12.15 says that when the average SAT is 1200, the expected (i.e., true average) graduation rate is 75%. The test statistic value is then t = [β̂₀ + β̂₁(1200) − 75]/s_{β̂₀+β̂₁(1200)}, and the test is upper-, lower-, or two-tailed according to the inequality in Hₐ.
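The Bonferroni calculation above can be sketched directly; the helper name mean_ci below is ours, and the summary quantities again come from Example 12.15 (each interval uses t_{.025,18} = 2.101, so the pair has joint confidence level at least 1 − 2(.05) = 90%):

```python
import math

# Joint (Bonferroni) CIs for the mean response at x = 1100 and x = 1200,
# using the Example 12.15 summary quantities.  mean_ci is a hypothetical
# helper name introduced only for this sketch.
n, s, b0, b1 = 20, 10.29, -36.18, 0.088545
xbar, sxx, t_crit = 1080.0, 704125.298, 2.101

def mean_ci(xstar):
    """Individual 95% CI (12.7) for mu_{Y.x*}."""
    yhat = b0 + b1 * xstar
    se = s * math.sqrt(1 / n + (xstar - xbar) ** 2 / sxx)
    return (yhat - t_crit * se, yhat + t_crit * se)

for xstar in (1100, 1200):
    lo, hi = mean_ci(xstar)
    print(xstar, (round(lo, 2), round(hi, 2)))
# Individually each interval has 95% confidence; jointly, at least 90%.
```

For k intervals at individual level 100(1 − α)%, the same reasoning guarantees a joint level of at least 100(1 − kα)%, which is why part (g)-style exercises ask for individual levels chosen so that k·α stays small.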
A Prediction Interval for a Future Value of Y

Analogous to the CI (12.7) for μ_{Y·x*}, one frequently wishes to obtain an interval of plausible values for the value of Y associated with some future observation when the independent variable has value x*. In the scenario of Example 12.5, the CI (12.7) can be used to provide an interval estimate of true average tree mass for all trees exposed to CO₂ concentration x = 600. Alternatively, we might wish an interval of plausible values for the mass of a single such tree.

A CI refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this reason we refer to an interval of plausible values for a future Y as a prediction interval rather than a confidence interval. For the confidence interval we use the error of estimation, β₀ + β₁x* − (β̂₀ + β̂₁x*), a difference between a fixed (but unknown) quantity and a random variable. The error of prediction is Y − (β̂₀ + β̂₁x*) = β₀ + β₁x* + ε − (β̂₀ + β̂₁x*), a difference between two random variables. With the additional random ε term, there is more uncertainty in prediction than in estimation, so a PI will be wider than a CI. Because the future value Y is independent of the observed Yᵢ's,

V[Y − (β̂₀ + β̂₁x*)] = variance of prediction error
  = V(Y) + V(β̂₀ + β̂₁x*)
  = σ² + σ² [ 1/n + (x* − x̄)²/S_xx ]
  = σ² [ 1 + 1/n + (x* − x̄)²/S_xx ]

Furthermore, because E(Y) = β₀ + β₁x* and E(β̂₀ + β̂₁x*) = β₀ + β₁x*, the expected value of the prediction error is E[Y − (β̂₀ + β̂₁x*)] = 0. It can then be shown that the standardized variable

T = [Y − (β̂₀ + β̂₁x*)] / ( S √[ 1 + 1/n + (x* − x̄)²/S_xx ] )

has a t distribution with n − 2 df. Substituting this T into the probability statement P(−t_{α/2,n−2} < T < t_{α/2,n−2}) = 1 − α and manipulating to isolate Y between the two inequalities yields the following interval.
A 100(1 − α)% PI for a future Y observation to be made when x = x* is

β̂₀ + β̂₁x* ± t_{α/2,n−2} · s √[ 1 + 1/n + (x* − x̄)²/S_xx ]
  = β̂₀ + β̂₁x* ± t_{α/2,n−2} √( s² + s²_{β̂₀+β̂₁x*} )
  = ŷ ± t_{α/2,n−2} √( s² + s_Ŷ² )    (12.8)

The interpretation of the prediction level 100(1 − α)% is identical to that of previous confidence levels: if (12.8) is used repeatedly, in the long run the resulting intervals will actually contain the observed y values 100(1 − α)% of the time. Notice that the 1 underneath the initial square root symbol makes the PI (12.8) wider than the CI (12.7), although the intervals are both centered at β̂₀ + β̂₁x*. Also, as n → ∞ the width of the CI approaches 0, whereas the width of the PI approaches 2z_{α/2}σ (because even with perfect knowledge of β₀ and β₁, there will still be uncertainty in prediction).

Let's return to the university data of Example 12.15 and calculate a 95% prediction interval for a graduation rate that would result from selecting a single university whose average SAT is 1200. Relevant quantities from that example are

ŷ = 70.07   s_Ŷ = 2.731   s = 10.29

For a prediction level of 95% based on n − 2 = 18 df, the t critical value is 2.101, exactly what we previously used for a 95% confidence level. The prediction interval is then

70.07 ± (2.101)√(10.29² + 2.731²) = 70.07 ± (2.101)(10.646) = 70.07 ± 22.37 = (47.70, 92.44)

Plausible values for a single observation on graduation rate when the average SAT is 1200 are (at the 95% prediction level) between 47.70% and 92.44%. The 95% confidence interval for the mean graduation rate when the SAT is 1200 was (64.33, 75.81). The prediction interval is much wider than this because of the extra 10.29² under the square root. Figure 12.22, the MINITAB output for Example 12.15, shows this interval as well as the confidence interval.

The Bonferroni technique can be employed as in the case of confidence intervals.
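A quick check of the PI arithmetic (a sketch only; the values ŷ, s, s_Ŷ, and the critical value are taken from the example above):

```python
import math

# 95% PI (12.8) for a single graduation rate at SAT = 1200, Example 12.15.
yhat, s, s_yhat, t_crit = 70.07, 10.29, 2.731, 2.101
se_pred = math.sqrt(s ** 2 + s_yhat ** 2)   # the extra s^2 reflects the new epsilon
pi = (yhat - t_crit * se_pred, yhat + t_crit * se_pred)
print(round(se_pred, 3), tuple(round(v, 2) for v in pi))
```

Compare the prediction half-width of about 22.37 with the CI half-width of 5.74: the PI is dominated by s itself, the part of the uncertainty that no increase in sample size can remove.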
If a PI with prediction level 100(1 − α)% is calculated for each of k different values of x, the simultaneous or joint prediction level for all k intervals is at least 100(1 − kα)%.

Exercises  Section 12.4 (45-55)

45. Recall Example 12.5 and Example 12.6 of Section 12.2, where the simple linear regression model was applied to n = 8 observations on x = CO₂ concentration and y = mass in kilograms of pine trees at age 11 months. Further calculations give s = .534 and ŷ = 2.723, s_Ŷ = .190 when x = 600; ŷ = 3.992, s_Ŷ = .256 when x = 750.
a. Explain why s_Ŷ is larger when x = 750 than when x = 600.
b. Calculate a confidence interval with a confidence level of 95% for the true average mass of all trees grown with a CO₂ concentration of 600 parts per million.
d. If a 95% CI is calculated for the true average mass when CO₂ concentration is 750, what will be the simultaneous confidence level for both this interval and the interval calculated in part (b)?

46. Reconsider the filtration rate-moisture content data introduced in Example 12.7 (see also Example 12.8).
a. Compute a 90% CI for β₀ + 125β₁, true average moisture content when the filtration rate is 125.
b. Predict the value of moisture content for a single experimental run in which the filtration rate is 125, using a 90% prediction level. How does this interval compare to the interval of part (a)? Why is this the case?
c. How would the intervals of parts (a) and (b) compare to a CI and PI when the filtration rate is 115? Answer without actually calculating these new intervals.
d. Interpret both H₀: β₀ + 125β₁ = 80 and Hₐ: β₀ + 125β₁ < 80, and then carry out a test at significance level .01.

47. Astringency is the quality in a wine that makes the wine drinker's mouth feel slightly rough, dry, and puckery. The paper "Analysis of Tannins in Red Wine Using Multiple Methods: Correlation with Perceived Astringency" (Amer. J. Enol. Vitic., 2006: 481-485) reported on an investigation to assess the relationship between perceived astringency and tannin concentration using various analytic methods. Here is data provided by the authors on x = tannin concentration by protein precipitation and y = perceived astringency as determined by a panel of tasters.

[Table of the n = 32 (x, y) observations]

Σxᵢ = 19.404, Σyᵢ = −.549, Σxᵢ² = 13.248032, Σyᵢ² = 11.835795, Σxᵢyᵢ = 3.497811, so that

S_xx = 13.248032 − (19.404)²/32 = 1.48193150
S_yy = 11.82637622
S_xy = 3.497811 − (19.404)(−.549)/32 = 3.83071088

a. Fit the simple linear regression model to this data. Then determine the proportion of observed variation in astringency that can be attributed to the model relationship between astringency and tannin concentration.
b. Calculate and interpret a confidence interval for the slope of the true regression line.
c. Estimate true average astringency when tannin concentration is .6, and do so in a way that conveys information about reliability and precision.
d. Predict astringency for a single wine sample whose tannin concentration is .6, and do so in a way that conveys information about reliability and precision.
e. Is there compelling evidence for concluding that true average astringency is positive when tannin concentration is .7? State and test the appropriate hypotheses.

48. The simple linear regression model provides a very good fit to the data on rainfall and runoff volume given in Exercise 17 of Section 12.2. The equation of the least squares line is ŷ = −1.128 + .82697x, r² = .975, and s = 5.24.
a. Use the fact that s_Ŷ = 1.44 when rainfall volume is 40 m³ to predict runoff in a way that conveys information about reliability and precision. Does the resulting interval suggest that precise information about the value of runoff for this future observation is available? Explain your reasoning.
b. Calculate a PI for runoff when rainfall is 50, using the same prediction level as in part (a). What can be said about the simultaneous prediction level for the two intervals you have calculated?

49. You are told that a 95% CI for expected lead content when traffic flow is 15, based on a sample of n = 10 observations, is (462.1, 597.7). Calculate a CI with confidence level 99% for expected lead content when traffic flow is 15.

50. Refer to Exercise 21, in which x = available travel space in feet and y = separation distance in feet between a bicycle and a passing car.
a. MINITAB gives s_{β̂₀+β̂₁(15)} = .186 and s_{β̂₀+β̂₁(20)} = .360. Explain why one is much larger than the other.
b. Calculate a 95% CI for expected separation distance when available travel space is 15 ft. (Use s_{β̂₀+β̂₁(15)} = .186.)
c. Calculate a 95% PI for a single instance of separation distance when available travel space is 20 ft. (Use s_{β̂₀+β̂₁(20)} = .360.)

51. Plasma etching is essential to the fine-line pattern transfer in current semiconductor processes. The article "Ion Beam-Assisted Etching of Aluminum with Chlorine" (J. Electrochem. Soc., 1985: 2010-2012) gives the accompanying data (read from a graph) on x = chlorine flow (SCCM) through a nozzle used in the etching mechanism and y = etch rate (100 Å/min):

x |  1.5   1.5   2.0   2.5   2.5   3.0   3.5   3.5   4.0
y | 23.0  24.5  25.0  30.0  33.5  40.0  40.5  47.0  49.0

The summary statistics are Σxᵢ = 24.0, Σyᵢ = 312.5, Σxᵢ² = 70.50, Σxᵢyᵢ = 902.25, Σyᵢ² = 11,626.75, β̂₀ = 6.448718, β̂₁ = 10.602564.
a. Does the simple linear regression model specify a useful relationship between chlorine flow and etch rate?
b. Estimate the true average change in etch rate associated with a 1-SCCM increase in flow rate, using a 95% confidence interval, and interpret the interval.
c. Calculate a 95% CI for μ_{Y·3.0}, the true average etch rate when flow = 3.0. Has this average been precisely estimated?
d. Calculate a 95% PI for a single future observation on etch rate to be made when flow = 3.0. Is the prediction likely to be accurate?
e. Would the 95% CI and PI when flow = 2.5 be wider or narrower than the corresponding intervals of parts (c) and (d)? Answer without actually computing the intervals.
f. Would you recommend calculating a 95% PI for a flow of 6.0? Explain.
g. Calculate simultaneous CIs for true average etch rate when chlorine flow is 2.0, 2.5, and 3.0, respectively. Your simultaneous confidence level should be at least 97%.

52. Consider the following four intervals based on the data of Exercise 20 (Section 12.2):
a. A 95% CI for lichen nitrogen when NO₃⁻ is .5
b. A 95% PI for lichen nitrogen when NO₃⁻ is .5
c. A 95% CI for lichen nitrogen when NO₃⁻ is .8
d. A 95% PI for lichen nitrogen when NO₃⁻ is .8
e. Without computing any of these intervals, what can be said about their widths relative to each other?

53. The decline of water supplies in certain areas of the United States has created the need for increased understanding of relationships between economic factors such as crop yield and hydrologic and soil factors. The article "Variability of Soil Water Properties and Crop Yield in a Sloped Watershed" (Water Resources Bull., 1988: 281-288) gives data on grain sorghum yield (y, in g/m-row) and distance upslope (x, in m) on a sloping watershed. Selected observations are given in the accompanying table.

x |   0   10   20   30   45   50   70
y | 500  590  410  470  450  480  510

x |  80  100  120  140  160  170  190
y | 450  360  400  300  410  280  350

a. Construct a scatter plot. Does the simple linear regression model appear to be plausible?
b. Carry out a test of model utility.
c. Estimate true average yield when distance upslope is 75 by giving an interval of plausible values.

54. Infestation of crops by insects has long been of great concern to farmers and agricultural scientists. The article "Cotton Square Damage by the Plant Bug, Lygus hesperus, and Abscission Rates" (J. Econ. Entomol., 1988: 1328-1337) reports data on x = age of a cotton plant (days) and y = % damaged squares. Consider the accompanying n = 12 observations (read from a scatter plot in the article).

x |  9  12  12  15  18  18
y | 11  12  23  30  29  52

x | 21  21  27  30  30  33
y | 41  65  60  72  84  93

a. Why is the relationship between x and y not deterministic?
b. Does a scatter plot suggest that the simple linear regression model will describe the relationship between the two variables?
c. The summary statistics are Σxᵢ = 246, Σxᵢ² = 5742, Σyᵢ = 572, Σyᵢ² = 35,634, and Σxᵢyᵢ = 14,022. Determine the equation of the least squares line.
d. Predict the percentage of damaged squares when the age is 20 days by giving an interval of plausible values.

55. Verify that V(β̂₀ + β̂₁x) is indeed given by the expression in the text. (Hint: V(Σ dᵢYᵢ) = Σ dᵢ² · V(Yᵢ).)

12.5 Correlation

In many situations the objective in studying the joint behavior of two variables is to see whether they are related, rather than to use one to predict the value of the other. In this section, we first develop the sample correlation coefficient r as a measure of how strongly related two variables x and y are in a sample and then relate r to the correlation coefficient ρ defined in Chapter 5.

The Sample Correlation Coefficient r

Given n pairs of observations (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ), it is natural to speak of x and y as having a positive relationship if large x's are paired with large y's and small x's with small y's. Similarly, if large x's are paired with small y's and small x's with large y's, then a negative relationship between the variables is implied.
Consider the quantity

S_xy = Σᵢ₌₁ⁿ (xᵢ − x̄)(yᵢ − ȳ) = Σᵢ₌₁ⁿ xᵢyᵢ − (Σxᵢ)(Σyᵢ)/n

Then if the relationship is strongly positive, an xᵢ above the mean x̄ will tend to be paired with a yᵢ above the mean ȳ, so that (xᵢ − x̄)(yᵢ − ȳ) > 0, and this product will also be positive whenever both xᵢ and yᵢ are below their respective means. Thus a positive relationship implies that S_xy will be positive. An analogous argument shows that when the relationship is negative, S_xy will be negative, since most of the products (xᵢ − x̄)(yᵢ − ȳ) will be negative. This is illustrated in Figure 12.23.

[Figure 12.23 (a) Scatter plot with S_xy positive; (b) scatter plot with S_xy negative (+ means (xᵢ − x̄)(yᵢ − ȳ) > 0, and − means (xᵢ − x̄)(yᵢ − ȳ) < 0)]

Although S_xy seems a plausible measure of the strength of a relationship, we do not yet have any idea of how positive or negative it can be. Unfortunately, S_xy has a serious defect: By changing the unit of measurement for either x or y, S_xy can be made either arbitrarily large in magnitude or arbitrarily close to zero. For example, if S_xy = 25 when x is measured in meters, then S_xy = 25,000 when x is measured in millimeters and .025 when x is expressed in kilometers. A reasonable condition to impose on any measure of how strongly x and y are related is that the calculated measure should not depend on the particular unit used to measure them. This condition is achieved by modifying S_xy to obtain the sample correlation coefficient.

DEFINITION  The sample correlation coefficient for the n pairs (x₁, y₁), …, (xₙ, yₙ) is

r = S_xy / ( √[Σ(xᵢ − x̄)²] · √[Σ(yᵢ − ȳ)²] ) = S_xy / ( √S_xx · √S_yy )    (12.9)

An accurate assessment of soil productivity is critical to rational land-use planning. Unfortunately, as the author of the article "Productivity Ratings Based on Soil Series" (Prof. Geographer, 1980: 158-163) argues, an acceptable soil productivity index is not so easy to come by.
One difficulty is that productivity is determined partly by which crop is planted, and the relationship between the yields of two different crops planted in the same soil may not be very strong. To illustrate, the article presents the accompanying data on corn yield x and peanut yield y (mT/ha) for eight different types of soil.

x |  2.4   3.4   4.6   3.7   2.2   3.3   4.0   2.1
y | 1.33  2.12  1.80  1.65  2.00  1.76  2.11  1.63

With Σxᵢ = 25.7, Σyᵢ = 14.40, Σxᵢ² = 88.31, Σxᵢyᵢ = 46.856, Σyᵢ² = 26.4324,

S_xx = 88.31 − (25.7)²/8 = 88.31 − 82.56 = 5.75
S_yy = 26.4324 − (14.40)²/8 = .5124
S_xy = 46.856 − (25.7)(14.40)/8 = .5960

from which

r = .5960 / (√5.75 · √.5124) = .347

Properties of r

The most important properties of r are as follows:
1. The value of r does not depend on which of the two variables is labeled x and which is labeled y.
2. The value of r is independent of the units in which x and y are measured.
3. −1 ≤ r ≤ 1

…

…Σyᵢ² = 2253.56, from which

r = [20.0397 − (1.656)(170.6)/16] / ( √[.196912 − (1.656)²/16] · √[2253.56 − (170.6)²/16] )
  = 2.3826 / ((.1597)(20.8456))
  = .716

The point estimate of the population correlation coefficient ρ between ozone concentration and secondary carbon concentration is ρ̂ = r = .716.

The small-sample intervals and test procedures presented in Chapters 8-10 were based on an assumption of population normality. To test hypotheses about ρ, we must make an analogous assumption about the distribution of pairs of (x, y) values in the population. We are now assuming that both X and Y are random, with joint distribution given by the bivariate normal pdf introduced in Section 5.3. If X = x, recall that the (conditional) distribution of Y is normal with mean μ_{Y·x} = μ₂ + (ρσ₂/σ₁)(x − μ₁) and variance (1 − ρ²)σ₂². This is exactly the model used in simple linear regression, with β₀ = μ₂ − ρμ₁σ₂/σ₁, β₁ = ρσ₂/σ₁, and σ² = (1 − ρ²)σ₂² independent of x.
The implication is that if the observed pairs (x, y) are actually drawn from a bivariate normal distribution, then the simple linear regression model is an appropriate way of studying the behavior of Y for fixed x. If ρ = 0, then μ_{Y·x} = μ₂ independent of x; in fact, when ρ = 0 the joint probability density function f(x, y) can be factored into a part involving x only and a part involving y only, which implies that X and Y are independent variables.

Example 12.19  As discussed in Section 5.3, contours of the bivariate normal distribution are elliptical, and this suggests that a scatter plot of observed (x, y) pairs from such a joint distribution should have a roughly elliptical shape. The accompanying scatter plot of y = visceral fat (cm²) by the CT method versus x = visceral fat (cm²) by the US method for a sample of n = 100 obese women appeared in the paper "Methods of Estimation of Visceral Fat: Advantages of Ultrasonography" (Obes. Res., 2003: 1488–1494). Computerized tomography is considered the most accurate technique for body fat measurement, but it is costly, time consuming, and involves exposure to ionizing radiation; the US method is noninvasive and less expensive.

Figure 12.25  Scatter plot (fat by CT versus fat by US) for Example 12.19

The pattern in the scatter plot seems consistent with an assumption of bivariate normality. Here r = .71, which is not all that impressive (r² = .50), but the investigators reported that a test of H₀: ρ = 0 (to be introduced shortly) gives P-value < .001. Of course, we would want values from the two methods to be very highly correlated before regarding one as an adequate substitute for the other.
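The connection between the bivariate normal model and the regression slope β₁ = ρσ₂/σ₁ can be illustrated by simulation. The parameter values below (μ₁ = μ₂ = 0, σ₁ = 2, σ₂ = 3, ρ = .6) are arbitrary choices for illustration, not from the text; with a large simulated sample, the least squares slope should land close to ρσ₂/σ₁ = .9.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary bivariate normal parameters (illustrative only, not from the text)
mu1, mu2, sigma1, sigma2, rho = 0.0, 0.0, 2.0, 3.0, 0.6
cov = [[sigma1**2, rho * sigma1 * sigma2],
       [rho * sigma1 * sigma2, sigma2**2]]

xy = rng.multivariate_normal([mu1, mu2], cov, size=200_000)
x, y = xy[:, 0], xy[:, 1]

# Least squares slope computed from the simulated pairs
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
print(b1)   # should be close to rho * sigma2 / sigma1 = 0.9
```

The sample correlation of the simulated pairs likewise estimates ρ itself, consistent with the factorization argument above: when ρ = 0 the fitted slope and the sample correlation both hover near zero.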
Assuming that the pairs are drawn from a bivariate normal distribution allows us to test hypotheses about ρ and to construct a CI. There is no completely satisfactory way to check the plausibility of the bivariate normality assumption. A partial check involves constructing two separate normal probability plots, one for the sample x_i's and another for the sample y_i's, since bivariate normality implies that the marginal distributions of both X and Y are normal. If either plot deviates substantially from a straight-line pattern, the following inferential procedures should not be used when the sample size n is small. Also, as discussed in Example 12.19, the scatter plot should show a roughly elliptical shape.

TESTING FOR THE ABSENCE OF CORRELATION  When H₀: ρ = 0 is true, the test statistic

T = R√(n − 2) / √(1 − R²)

has a t distribution with n − 2 df (see Exercise 65).

Alternative Hypothesis | Rejection Region for Level α Test
H_a: ρ > 0 | t ≥ t_{α,n−2}
H_a: ρ < 0 | t ≤ −t_{α,n−2}
H_a: ρ ≠ 0 | either t ≥ t_{α/2,n−2} or t ≤ −t_{α/2,n−2}

A P-value based on n − 2 df can be calculated as described previously.

Example 12.20  Neurotoxic effects of manganese are well known and are usually caused by high occupational exposure over long periods of time. In the fields of occupational hygiene and environmental hygiene, the relationship between lipid peroxidation, which is responsible for deterioration of foods and damage to live tissue, and occupational exposure had not been previously reported. The article "Lipid Peroxidation in Workers Exposed to Manganese" (Scand. J. Work Environ. Health, 1996: 381–386) gave data on x = manganese concentration in blood (ppb) and y = concentration (μmol/L) of malondialdehyde, which is a stable product of lipid peroxidation, both for a sample of 22 workers exposed to manganese and for a control sample of 45 individuals.
The value of r for the control sample was .29, from which

t = (.29)√(45 − 2) / √(1 − (.29)²) = 1.99

The corresponding P-value for a two-tailed t test based on 43 df is roughly .052 (the cited article reported only that the P-value > .05). We would not want to reject the assertion that ρ = 0 at either significance level .01 or .05. For the sample of exposed workers, r = .83 and t = 6.7, clear evidence that there is a positive relationship in the entire population of exposed workers from which the sample was selected. Although in general correlation does not necessarily imply causation, it is plausible here that higher levels of manganese cause higher levels of peroxidation.

Because ρ measures the extent to which there is a linear relationship between the two variables in the population, the null hypothesis H₀: ρ = 0 states that there is no such population relationship. In Section 12.3, we used the t ratio β̂₁/s_{β̂₁} to test for a linear relationship between the two variables in the context of regression analysis. It turns out that the two test procedures are completely equivalent because r√(n − 2)/√(1 − r²) = β̂₁/s_{β̂₁} (Exercise 65). When interest lies only in assessing the strength of any linear relationship rather than in fitting a model and using it to estimate or predict, the test statistic formula just presented requires fewer computations than does the t ratio.

Other Inferences Concerning ρ
The procedure for testing H₀: ρ = ρ₀ when ρ₀ ≠ 0 is not equivalent to any procedure from regression analysis. The test statistic is based on a transformation of R called the Fisher transformation.
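Before turning to that transformation, the t test above can be reproduced directly. This is a sketch that uses SciPy's t distribution for the P-value, with the control-sample values r = .29 and n = 45 from the example.

```python
import math
from scipy import stats

r, n = 0.29, 45                            # control-sample values from the example
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
p = 2 * stats.t.sf(abs(t), df=n - 2)       # two-tailed P-value on n - 2 = 43 df
print(round(t, 2))                         # t ≈ 1.99
print(p)                                   # just above .05, in line with the text
```

Since the P-value exceeds .05, H₀: ρ = 0 is not rejected at the .05 level, matching the conclusion drawn in the example.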
PROPOSITION  When (X₁, Y₁), ..., (X_n, Y_n) is a sample from a bivariate normal distribution, the rv

V = (1/2) ln[(1 + R)/(1 − R)]    (12.10)

has approximately a normal distribution with mean and variance

μ_V = (1/2) ln[(1 + ρ)/(1 − ρ)]    σ_V² = 1/(n − 3)

The rationale for the transformation is to obtain a function of R that has a variance independent of ρ; this would not be the case with R itself. Also, the approximation will not be valid if n is quite small. The test statistic for testing H₀: ρ = ρ₀ is

Z = [V − (1/2) ln[(1 + ρ₀)/(1 − ρ₀)]] / (1/√(n − 3))

Alternative Hypothesis | Rejection Region for Level α Test
H_a: ρ > ρ₀ | z ≥ z_α
H_a: ρ < ρ₀ | z ≤ −z_α
H_a: ρ ≠ ρ₀ | either z ≥ z_{α/2} or z ≤ −z_{α/2}

A P-value can be calculated in the same manner as for previous z tests.

Example 12.21  As far back as Leonardo da Vinci, it was known that height and wingspan (measured fingertip to fingertip between outstretched hands) are closely related. For these measurements (in inches) from 16 students in a statistics class, notice how close the two values are.

Student  |  1    2    3    4    5    6    7    8
Height   | 63.0 63.0 65.0 64.0 68.0 69.0 71.0 68.0
Wingspan | 62.0 62.0 64.0 64.5 67.0 69.0 70.0 72.0

Student  |  9   10   11   12   13   14   15   16
Height   | 68.0 72.0 73.0 73.5 70.0 70.0 72.0 74.0
Wingspan | 70.0 72.0 73.0 75.0 71.0 70.0 76.0 76.5

The scatter plot in Figure 12.26 shows an approximately linear shape, and the point cloud is roughly elliptical. Also, the normal plots for the individual variables are roughly linear, so the bivariate normal distribution can reasonably be assumed.

Figure 12.26  Wingspan plotted against height

The correlation is computed to be .9422. Can it be concluded that wingspan and height are highly correlated in the sense that ρ > .8?
To carry out a test of H₀: ρ = .8 versus H_a: ρ > .8, we Fisher transform .9422 and .8:

(1/2) ln[(1 + .9422)/(1 − .9422)] = 1.757    (1/2) ln[(1 + .8)/(1 − .8)] = 1.099

The calculation is easily done on a calculator with hyperbolic functions, because the inverse hyperbolic tangent is equivalent to the Fisher transformation. That is, tanh⁻¹(.9422) = 1.757 and tanh⁻¹(.8) = 1.099. Compute

z = (1.757 − 1.099) / (1/√(16 − 3)) = 2.37

Since 2.37 ≥ 1.645, at level .05 we can reject H₀: ρ = .8 in favor of H_a: ρ > .8. Indeed, because 2.37 ≥ 2.33, it is also true that we can reject H₀ in this one-tailed test at the .01 level, and conclude that wingspan is highly correlated with height.

To obtain a CI for ρ, we first derive an interval for μ_V = (1/2) ln[(1 + ρ)/(1 − ρ)]. Standardizing V, writing a probability statement, and manipulating the resulting inequalities yields

(v − z_{α/2}/√(n − 3), v + z_{α/2}/√(n − 3))    (12.11)

as a 100(1 − α)% interval for μ_V, where v = (1/2) ln[(1 + r)/(1 − r)]. This interval can then be manipulated to yield a CI for ρ.

A 100(1 − α)% confidence interval for ρ is

((e^{2c₁} − 1)/(e^{2c₁} + 1), (e^{2c₂} − 1)/(e^{2c₂} + 1))

where c₁ and c₂ are the left and right endpoints, respectively, of the interval (12.11).

Example 12.21 (continued)  The sample correlation coefficient between wingspan and height was r = .9422, giving v = 1.757. With n = 16, a 95% confidence interval for μ_V is 1.757 ± 1.96/√(16 − 3) = (1.213, 2.301) = (c₁, c₂). The 95% interval for ρ is

((e^{2(1.213)} − 1)/(e^{2(1.213)} + 1), (e^{2(2.301)} − 1)/(e^{2(2.301)} + 1)) = (.838, .980)

As before, this calculation can be done more easily using the hyperbolic tangent, which is the inverse of the Fisher transformation. This gives (tanh(1.213), tanh(2.301)) = (.838, .980). Notice that this interval excludes .8, and that our hypothesis test in Example 12.21 would have rejected H₀: ρ = .8 in favor of the alternative H_a: ρ > .8 at the .025 level.
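Both the z test and the confidence interval for the wingspan example can be reproduced with the hyperbolic-function shortcut noted above; this sketch uses only the Python standard library, with r = .9422 and n = 16 from the example.

```python
import math

r, n = 0.9422, 16
v = math.atanh(r)                  # Fisher transformation of r
v0 = math.atanh(0.8)               # transformed null value rho_0 = .8

# Test of H0: rho = .8 versus Ha: rho > .8
z = (v - v0) * math.sqrt(n - 3)
print(round(z, 2))                 # ≈ 2.37, as in the example

# 95% CI: transform r, add the normal margin of error, transform back
half = 1.96 / math.sqrt(n - 3)
lo, hi = math.tanh(v - half), math.tanh(v + half)
print(round(lo, 3), round(hi, 3))  # ≈ (.838, .980)
```

Note that the back-transformed interval is not symmetric about r = .9422; the Fisher transformation stretches the scale near ±1, which is what makes the normal approximation work.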
Absent the assumption of bivariate normality, a bootstrap procedure can be used to obtain a CI for ρ or to test hypotheses.

In Chapter 5, we cautioned that a large value of the correlation coefficient (near 1 or −1) implies only association and not causation. This applies to both ρ and r. It is easy to find strong but weird correlations in which neither variable is causally related to the other. For example, since Prohibition ended in the 1930s, beer consumption and church attendance have correlated very highly. Of course, the reason is that both variables have increased in accord with population growth.

Exercises  Section 12.5 (56–67)

56. The article "Behavioural Effects of Mobile Telephone Use During Simulated Driving" (Ergonomics, 1995: 2536–2562) reported that for a sample of 20 experimental subjects, the sample correlation coefficient for x = age and y = time since the subject had acquired a driving license (yr) was .97. Why do you think the value of r is so close to 1? (The article's authors gave an explanation.)

57. The Turbine Oil Oxidation Test (TOST) and the Rotating Bomb Oxidation Test (RBOT) are two different procedures for evaluating the oxidation stability of steam turbine oils. The article "Dependence of Oxidation Stability of Steam Turbine Oil on Base Oil Composition" (J. Soc. Tribologists Lubricat. Engrs., Oct. 1997: 19–24) reported the accompanying observations on x = TOST time (hr) and y = RBOT time (min) for 12 oil specimens.

TOST | 4200 3600 3750 3675 4050 2770
RBOT |  370  340  375  310  350  200
TOST | 4870 4500 3450 2700 3750 3300
RBOT |  400  375  285  225  345  285

a. Calculate and interpret the value of the sample correlation coefficient (as did the article's authors).
b. How would the value of r be affected if we had let x = RBOT time and y = TOST time?
c. How would the value of r be affected if RBOT time were expressed in hours?
d. Construct a scatter plot and normal probability plots and comment.
e. Carry out a test of hypotheses to decide whether RBOT time and TOST time are linearly related.

58. Torsion during hip external rotation and extension may explain why acetabular labral tears occur in professional athletes. The article "Hip Rotational Velocities During the Full Golf Swing" (J. Sport Sci. Med., 2009: 296–299) reported on an investigation in which lead hip internal peak rotational velocity (x) and trailing hip peak external rotational velocity (y) were determined for a sample of 15 golfers. Data provided by the article's authors was used to calculate the following summary quantities:

S_xx = 64,732.83    S_yy = 130,566.96    S_xy = 44,185.87

Separate normal probability plots showed very substantial linear patterns.
a. Calculate a point estimate for the population correlation coefficient.
b. If the simple linear regression model were fit to the data, what proportion of variation in external velocity could be attributed to the model relationship? What would happen to this proportion if the roles of x and y were reversed?
c. At significance level .01, decide whether there is a linear relationship between the two velocities in the sampled population; your conclusion should be based on a P-value.
d. Would the conclusion of (c) have changed if you had tested appropriate hypotheses to decide whether there is a positive linear association in the population? What if a significance level of .05 rather than .01 had been used?

59. The authors of the paper "Objective Effects of a Six Months' Endurance and Strength Training Program in Outpatients with Congestive Heart Failure" (Med. Sci. Sports Exercise, 1999: 1102–1107) presented a correlation analysis to investigate the relationship between maximal lactate level x and muscular endurance y. The accompanying data was read from a plot in the paper.

x | 3.80 4.00 4.90 5.20 4.00 3.50 6.20
y |  400  750  770  800  850 1025 1200
x | 6.88 7.55 4.95 7.80 4.45 6.60 8.90
y | 1250 1300 1400 1475 1480 1505 2200

S_xx = 36.9839    S_yy = 2,628,930.357    S_xy = 7377.704

A scatter plot shows a linear pattern.
a. Test to see whether there is a positive correlation between maximal lactate level and muscular endurance in the population from which this data was selected.
b. If a regression analysis were to be carried out to predict endurance from lactate level, what proportion of observed variation in endurance could be attributed to the approximate linear relationship? Answer the analogous question if regression is used to predict lactate level from endurance—and answer both questions without doing any regression calculations.

60. Hydrogen content is conjectured to be an important factor in porosity of aluminum alloy castings. The article "The Reduced Pressure Test as a Measuring Tool in the Evaluation of Porosity/Hydrogen Content in Al–7 Wt Pct Si–10 Vol Pct SiC(p) Metal Matrix Composite" (Metallurg. Trans., 1993: 1857–1868) gives the accompanying data on x = hydrogen content and y = gas porosity for one particular measurement technique.

x | .18 .20 .21 .21 .21 .22 .23 | …
y | .46 .70 .41 .45 .55 .44 .24 | .47 .22 .80 .88 .20 .72 .75

MINITAB gives the following output in response to a CORRELATION command:

Correlation of Hydrcon and Porosity = 0.449

a. Test at level .05 to see whether the population correlation coefficient differs from 0.
b. If a simple linear regression analysis had been carried out, what percentage of observed variation in porosity could be attributed to the model relationship?

61. Physical properties of six flame-retardant fabric samples were investigated in the article "Sensory and Physical Properties of Inherently Flame-Retardant Fabrics" (Textile Res., 1984: 61–68). Use the accompanying data and a .05 significance level to determine whether there is a significant correlation between stiffness x (mg-cm) and thickness y (mm). Is the result of the test surprising in light of the value of r?

x | 7.98 24.52 12.47 6.92 24.11 35.71
y |  .28   .65   .32  .27   .81 …

62. The article "Increases in Steroid Binding Globulins Induced by Tamoxifen in Patients with Carcinoma of the Breast" (J. Endocrinol., 1978: 219–226) reports data on the effects of the drug tamoxifen on change in the level of cortisol-binding globulin (CBG) of patients during treatment. With age = x and ΔCBG = y, summary values are n = 26, Σx_i = 1613, Σ(x_i − x̄)² = 3756.96, Σ(y_i − ȳ)² = 465.34, and …
a. Compute a 90% CI for the true correlation coefficient ρ.
b. Test H₀: ρ = −.5 versus H_a: ρ < −.5 at level .05.
c. In a regression analysis of y on x, what proportion of variation in change of cortisol-binding globulin level could be explained by variation in patient age within the sample?
d. If you decide to perform a regression analysis with age as the dependent variable, what proportion of variation in age is explainable by variation in ΔCBG?

63. A sample of n = 500 (x, y) pairs was collected and a test of H₀: ρ = 0 versus H_a: ρ ≠ 0 was carried out. The resulting P-value was computed to be .00032.
a. What conclusion would be appropriate at level of significance .001?
b. Does this small P-value indicate that there is a very strong relationship between x and y (a value of ρ that differs considerably from 0)? Explain.
c. Now suppose a sample of n = 10,000 (x, y) pairs resulted in r = .022. Test H₀: ρ = 0 versus H_a: ρ ≠ 0 at level .05. Is the result statistically significant? Comment on the practical significance of your analysis.

64. Let x be number of hours per week of studying and y be grade point average. Suppose we have one sample of (x, y) pairs for females and another for males. Then we might like to test the hypothesis H₀: ρ₁ − ρ₂ = 0 against the alternative that the two population correlation coefficients are different.
a. Use properties of the transformed variable V = (1/2) ln[(1 + R)/(1 − R)] to propose an appropriate test statistic and rejection region (let R₁ and R₂ denote the two sample correlation coefficients).
b. The paper "Relational Bonds and Customer's Trust and Commitment: A Study on the Moderating Effects of Web Site Usage" (Serv. Ind. J., 2003: 103–124) reported that n₁ = 261, r₁ = .59, n₂ = 557, r₂ = .50, where the first sample consisted of corporate website users and the second of nonusers. Here r is the correlation between the assessment of the strength of economic bonds and performance. Carry out the test for this data (as did the authors of the cited paper).

65. Verify that the t ratio for testing H₀: β₁ = 0 in Section 12.3 is identical to t for testing H₀: ρ = 0.

66. Verify Property 2 of the correlation coefficient: the value of r is independent of the units in which x and y are measured; that is, if x_i′ = ax_i + c and y_i′ = by_i + d, a > 0, b > 0, then r for the (x_i′, y_i′) pairs is the same as r for the (x_i, y_i) pairs.

67. Consider a time series—that is, a sequence of observations X₁, X₂, ... on some response variable (e.g., concentration of a pollutant) over time—with observed values x₁, x₂, ..., x_n over n time periods. Then the lag 1 autocorrelation coefficient is defined as

r₁ = Σ_{i=1}^{n−1} (x_i − x̄)(x_{i+1} − x̄) / Σ_{i=1}^{n} (x_i − x̄)²

Autocorrelation coefficients r₂, r₃, ... for lags 2, 3, ... are defined analogously.
a. Calculate the values of r₁, r₂, and r₃ for the temperature data from Exercise 79 of Chapter 1.
b. Consider the n − 1 pairs (x₁, x₂), (x₂, x₃), ..., (x_{n−1}, x_n). What is the difference between the formula for the sample correlation coefficient r applied to these pairs and the formula for r₁? What about r₂ compared to r for the n − 2 pairs (x₁, x₃), ..., (x_{n−2}, x_n)? What if n, the length of the series, is large?
c. Analogous to the population correlation coefficient ρ, let ρ₁, ρ₂, ρ₃, ... denote the theoretical or long-run autocorrelation coefficients at the various lags. If all these ρ's are zero, there is no (linear) relationship between observations in the series at any lag. In this case, if n is large, each R_i has approximately a normal distribution with mean 0 and standard deviation 1/√n, and different R_i's are almost independent. Thus H₀: ρ_i = 0 can be rejected at a significance level of approximately .05 if either r_i ≥ 2/√n or r_i ≤ −2/√n. If n = 100 and r₁ = .16, r₂ = −.09, r₃ = −.15, is there evidence of theoretical autocorrelation at any of the first three lags?
d. If you are testing the null hypothesis in (c) for more than one lag, why might you want to increase the cutoff constant 2 in the rejection region? [Hint: What about the probability of committing at least one type I error?]

12.6 Assessing Model Adequacy

A plot of the observed pairs (x_i, y_i) is a necessary first step in deciding on the form of a mathematical relationship between x and y. It is possible to fit many functions other than a linear one (y = b₀ + b₁x) to the data, using either the principle of least squares or another fitting method. Once a function of the chosen form has been fitted, it is important to check the fit of the model to see whether it is in fact appropriate. One way to study the fit is to superimpose a graph of the best-fit function on the scatter plot of the data. However, any tilt or curvature of the best-fit function may obscure some aspects of the fit that should be investigated. Furthermore, the scale on the vertical axis may make it difficult to assess the extent to which observed values deviate from the best-fit function.
Residuals and Standardized Residuals
A more effective approach to assessment of model adequacy is to compute the fitted or predicted values ŷ_i and the residuals e_i = y_i − ŷ_i and then plot various functions of these computed quantities. We then examine the plots either to confirm our choice of model or for indications that the model is not appropriate. Suppose the simple linear regression model is correct, and let y = β̂₀ + β̂₁x be the equation of the estimated regression line. Then the ith residual is e_i = y_i − (β̂₀ + β̂₁x_i). To derive properties of the residuals, let the rv Y_i − Ŷ_i represent the ith residual (before observations are actually made). Then

E(Y_i − Ŷ_i) = E(Y_i) − E(β̂₀ + β̂₁x_i) = β₀ + β₁x_i − (β₀ + β₁x_i) = 0    (12.12)

Because Ŷ_i (= β̂₀ + β̂₁x_i) is a linear function of the Y_j's, so is Y_i − Ŷ_i (the coefficients depend on the x_j's). Thus the normality of the Y_j's implies that each residual is normally distributed. It can also be shown (Exercise 76) that

V(Y_i − Ŷ_i) = σ² [1 − 1/n − (x_i − x̄)²/S_xx]    (12.13)

Replacing σ² by s² and taking the square root of Equation (12.13) gives the estimated standard deviation of a residual. Let's now standardize each residual by subtracting the mean value (zero) and then dividing by the estimated standard deviation. The standardized residuals are given by

e_i* = (y_i − ŷ_i) / (s √(1 − 1/n − (x_i − x̄)²/S_xx)),  i = 1, ..., n    (12.14)

Notice that the variances of the residuals differ from one another. If n is reasonably large, though, the bracketed term in (12.13) will be approximately 1, so some sources use e_i/s as the standardized residual. Computation of the e_i*'s can be tedious, but the most widely used statistical computer packages automatically provide these values and (upon request) can construct various plots involving them.

Example 12.23  Example 12.12 presented data on x = average SAT score for entering freshmen and y = six-year percentage graduation rate.
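The quantities in formulas (12.12)–(12.14) can be computed directly. The sketch below uses made-up (x, y) data, not the SAT data of the example, and checks two algebraic properties of least squares residuals: they sum to zero, and their weighted sum against x is zero (the normal equations).

```python
import numpy as np

# Made-up illustrative data (not from the text)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.7])
n = len(x)

# Least squares fit
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()

yhat = b0 + b1 * x
e = y - yhat                                # residuals e_i = y_i - yhat_i
s = np.sqrt(np.sum(e ** 2) / (n - 2))       # estimate of sigma

# Standardized residuals, formula (12.14)
e_star = e / (s * np.sqrt(1 - 1/n - (x - x.mean()) ** 2 / sxx))
print(e_star)
```

Because the factor under the square root in (12.14) is less than 1, each |e_i*| is at least |e_i|/s, and the discrepancy is largest for x values far from x̄, which is why e_i/s is only a rough substitute in small samples.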
Here we reproduce the data along with the fitted values and their estimated standard deviations, the residuals and their estimated standard deviations, and the standardized residuals. The estimated regression line is ŷ = −36.18 + .08855x, and r² = .729. Notice that the estimated standard deviations of the residuals (in the s_e column) differ somewhat, so e* ≠ e/s. The standard deviations of the residuals are higher near x̄, in contrast to the standard deviations of the predicted values, which are lower near x̄.

   x      y     ŷ       s_Ŷ      e        s_e     e*
 722.21  38  27.7651  4.9554  10.2349   9.020   1.135
 833.32  31  37.6034  3.8016  −6.6034   9.564  −0.690
 877.77  44  41.5392  3.3838   2.4608   9.719   0.253
 899.99  42  43.5067  3.1894  −1.5067   9.785  −0.154
 944.43  45  47.4416  2.8394  −2.4416   9.892  −0.247
 944.43  51  47.4416  2.8394   3.5584   9.892   0.360
1005.00  70  52.8048  2.4785  17.1952   9.989   1.721
1011.10  58  53.3449  2.4517   4.6551   9.995   0.466
1055.54  48  57.2799  2.3208  −9.2799  10.026  −0.926
1055.54  76  57.2799  2.3208  18.7201  10.026   1.867
1060.00  42  57.6748  2.3143 −15.6748  10.028  −1.563
1080.00  54  59.4457  2.3012  −5.4457  10.031  −0.543
1099.99  42  61.2157  2.3142 −19.2157  10.028  −1.916
1155.00  54  66.0866  2.4780 −12.0866   9.989  −1.210
1166.65  67  67.1181  2.5345  −0.1181   9.974  −0.012
1215.00  65  71.3993  2.8346  −6.3993   9.893  −0.647
1235.00  80  73.1702  2.9845   6.8298   9.849   0.693
1380.00  88  86.0092  4.3392   1.9908   9.332   0.213
1395.00  96  87.3374  4.4963   8.6626   9.257   0.936
1465.00  98  93.5356  5.2522   4.4644   8.850   0.504

Diagnostic Plots
The basic plots that many statisticians recommend for an assessment of model validity and usefulness are the following:
1. y_i on the vertical axis versus x_i on the horizontal axis
2. y_i on the vertical axis versus ŷ_i on the horizontal axis
3. e_i* (or e_i) on the vertical axis versus x_i on the horizontal axis
4. e_i* (or e_i) on the vertical axis versus ŷ_i on the horizontal axis
5.
A normal probability plot of the standardized residuals (or residuals)

Plots 3 and 4 are called residual plots against the independent variable and against the fitted (predicted) values, respectively. If Plot 2 yields points close to the 45° line [slope +1 through (0, 0)], then the estimated regression function gives accurate predictions of the values actually observed. Thus Plot 2 provides a visual assessment of model effectiveness in making predictions. Provided that the model is correct, neither residual plot should exhibit distinct patterns. The residuals should be randomly distributed about 0 according to a normal distribution, so all but a very few standardized residuals should lie between −2 and +2 (i.e., all but a few residuals within 2 standard deviations of their expected value 0). The plot of standardized residuals versus ŷ is really a combination of the two other plots, showing implicitly both how residuals vary with x and how fitted values compare with observed values. This latter plot is the single one most often recommended for multiple regression analysis. Plot 5 allows the analyst to assess the plausibility of the assumption that ε has a normal distribution.

Example 12.23 (continued)  Figure 12.27 presents the five plots just recommended along with a sixth plot. The plot of y versus ŷ confirms the impression given by r² that x is fairly effective in predicting y. The residual plots show no unusual pattern or discrepant values. The normal probability plot of the standardized residuals is quite straight. In summary, the first five plots leave us with no qualms about either the appropriateness of a simple linear relationship or the fit to the given data.

Notice that plotting against x yields the same shape as a plot against the predicted values. Is this surprising? The predicted value is a linear function of x, so the plots will have the same appearance. Given that the plots look the same, why include both?
This is preparation for the next section, where more than one predictor is allowed and plotting against x is not the same as plotting against the predicted values.

The sixth plot in Figure 12.27 is in accord with what was found graphically in Example 12.12. In that example, Figure 12.18 showed that private universities might tend to have better graduation rates than state universities. For another graphical view of this, we show in the last plot of Figure 12.27 the standardized residuals plotted against a variable that is 0 for state universities and 1 for private universities. In this graph the private universities do seem to have an advantage, but we will need to wait until the next section for a hypothesis test, which requires including this new variable as a second predictor in the model.

Figure 12.27  Plots for the data from Example 12.24: standardized residuals versus predicted value, y versus predicted value, standardized residuals versus SAT, a normal probability plot of the standardized residuals, and standardized residuals versus another variable (State = 0, Private = 1)

Difficulties and Remedies
Although we hope that our analysis will yield plots like the first five of Figure 12.27, quite frequently the plots will suggest one or more of the following difficulties:
1. A nonlinear probabilistic relationship between x and y is appropriate.
2. The variance of ε (and of Y) is not a constant σ² but depends on x.
3. The selected model fits the data well except for a very few discrepant or outlying data values, which may have greatly influenced the choice of the best-fit function.
4.
The error term ε does not have a normal distribution (this is related to item 3).
5. When the subscript i indicates the time order of the observations, the ε_i's exhibit dependence over time.
6. One or more relevant independent variables have been omitted from the model.

Figure 12.28 presents residual plots corresponding to items 1–3, 5, and 6. In Chapter 4, we discussed patterns in normal probability plots that cast doubt on the assumption of an underlying normal distribution. Notice that the residuals from the data in Figure 12.28d with the circled point included would not by themselves necessarily suggest further analysis, yet when a new line is fit with that point deleted, the new line differs considerably from the original line. This type of behavior is more difficult to identify in multiple regression. It is most likely to arise when there is a single (or very few) data point(s) with independent variable value(s) far removed from the remainder of the data.

Figure 12.28  Plots that indicate abnormality in data: (a) nonlinear relationship; (b) nonconstant variance; (c) discrepant observation; (d) observation with large influence; (e) dependence in errors; (f) variable omitted

We now indicate briefly what remedies are available for these types of difficulties. For a more comprehensive discussion, one or more of the references on regression analysis should be consulted. If the residual plot looks something like that of Figure 12.28a, exhibiting a curved pattern, then a nonlinear function of x may be fit. The residual plot of Figure 12.28b suggests that, although a straight-line relationship may be reasonable, the assumption that V(Y_i) = σ² for each i is of doubtful validity.
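Difficulty 2 (variance growing with x) can be seen numerically as well as graphically: when V(Y) increases with x, the absolute residuals tend to be positively correlated with x. A sketch with simulated heteroscedastic data (all parameter values here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data in which the error standard deviation is proportional to x
x = np.linspace(1, 10, 200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x)   # sd of the error grows with x

# Ordinary least squares fit and residuals
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
e = y - (b0 + b1 * x)

# Correlation between |e| and x: clearly positive under nonconstant variance
r_abs = np.corrcoef(np.abs(e), x)[0, 1]
print(round(r_abs, 2))
```

A residual plot of e against x for this data would show the characteristic fan shape of Figure 12.28b; the positive correlation between |e| and x is a simple numerical companion to that visual check.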
When the error term ε satisfies the independence and constant variance assumptions (normality is not needed) for the simple linear regression model of Section 12.1, it can be shown that among all linear unbiased estimators of β₀ and β₁, the ordinary least squares estimators have minimum variance. These estimators give equal weight to each (x_i, Y_i). If the variance of Y increases with x, then Y_i's for large x_i should be given less weight than those with small x_i. This suggests that β₀ and β₁ should be estimated by minimizing

f_w(b₀, b₁) = Σ w_i [y_i − (b₀ + b₁x_i)]²    (12.15)

where the w_i's are weights that decrease with increasing x_i. Minimization of Expression (12.15) yields weighted least squares estimates. For example, if the standard deviation of Y is proportional to x (for x > 0)—that is, V(Y) = kx²—then it can be shown that the weights w_i = 1/x_i² yield minimum variance estimators of β₀ and β₁. The books by Michael Kutner et al. and by S. Chatterjee et al. contain more detail (see the chapter bibliography). Weighted least squares is used quite frequently by econometricians (economists who use statistical methods) to estimate parameters.

When plots or other evidence suggest that the data set contains outliers or points having large influence on the resulting fit, one possible approach is to omit these outlying points and recompute the estimated regression equation. This would certainly be correct if it were found that the outliers resulted from errors in recording data values or from experimental errors. If no assignable cause can be found for the outliers, it is still desirable to report the estimated equation both with and without the outliers. Yet another approach is to retain possible outliers but to use an estimation principle that puts relatively less weight on outlying values than does the principle of least squares. One such principle is MAD (minimize absolute deviations), which selects β̂₀ and β̂₁ to minimize Σ|y_i − (b₀ + b₁x_i)|.
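Both alternative fitting criteria can be sketched numerically. In the sketch below the data is made up for illustration: the weighted least squares fit solves the normal equations of criterion (12.15) in closed form with w_i = 1/x_i², and the MAD fit uses a generic numerical minimizer from SciPy (one of the iterative procedures alluded to in the text, not the book's own algorithm).

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data: roughly y = 2 + 3x, with one gross outlier at the largest x
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([5.1, 7.9, 11.3, 13.7, 17.5, 19.0, 23.2, 40.0])

def wls(x, y, w):
    """Weighted least squares intercept/slope from the normal equations of (12.15)."""
    sw, swx, swy = w.sum(), (w * x).sum(), (w * y).sum()
    swxx, swxy = (w * x * x).sum(), (w * x * y).sum()
    b1 = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    b0 = (swy - b1 * swx) / sw
    return b0, b1

# Ordinary LS is the equal-weights special case; w = 1/x^2 downweights large x
b0_ols, b1_ols = wls(x, y, np.ones_like(x))
b0_wls, b1_wls = wls(x, y, 1 / x ** 2)

# MAD fit: minimize the sum of absolute deviations numerically
res = minimize(lambda b: np.sum(np.abs(y - b[0] - b[1] * x)),
               x0=[b0_ols, b1_ols], method="Nelder-Mead")
b0_mad, b1_mad = res.x
print(b1_ols, b1_wls, b1_mad)
```

Because the weights 1/x_i² downweight the outlier at the largest x, and the MAD criterion caps its influence, both alternative slopes end up closer to the slope of the first seven points than the ordinary least squares slope does.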
|y; — (b) + 514;)| . Unlike the estimates of least squares, there are no nice formulas for the MAD estimates; their values must be found by using an iterative computational procedure. Such procedures are also used when it is suspected that the ¢;’s have a distribution that is not normal but instead has “heavy tails” (making it much more likely than for the normal distribution that discrepant values will enter the sample); robust regression procedures are those that produce reliable estimates for a wide variety of underly- ing error distributions. Least squares estimators are not robust in the same way that the sample mean X is not a robust estimator for y. When a plot suggests time dependence in the error terms, an appropriate analysis may involve a transformation of the y’s or else a model explicitly including atime variable. Lastly, a plot such as that of Figure 12.28f, which shows a pattern in the residuals when plotted against an omitted variable, suggests considering a model that includes the omitted variable. We have already seen an illustration of this in Example 12.24. a Exercises | Section 12.6 (68-77) 68. Suppose the variables x = commuting distance xs = 25, calculate the standard deviations of and y = commuting time are related according the five corresponding residuals. to the simple linear regression model with b. Repeat part (a) for.x; = 5,7 = 10,43 = 15, o=10. x4 = 20, and xs = 50. a. Ifn = 5 observations are made at the x values ¢. What do the results of parts (a) and (b) imply x =5, %=10, x3=15, x4 =20, and about the deviation of the estimated line from --- Trang 693 --- 680 charter 12 Regression and Correlation the observation made at the largest sampled x b. Use s = .7921 to calculate the standardized value? residuals from a simple linear regression. 69. 
The x values and standardized residuals for the Copsteict'a standardize residual: plot anid chlorine flow/etch rate data of Exercise 51 (Sec- comment: Disp constmet anormal probability tion 12.4) are displayed in the accompanying plot.and:comment, table. Construct a standardized residual plot and 72. As the air temperature drops, river water comment on its appearance. becomes supercooled and ice crystals form. Such ice can significantly affect the hydraulics x 150 150 2.00 = 2.50 2.50 of a river. The article “Laboratory Study of ” = Gor ws ter 3 Anchor Ice Growth” (J. Cold Regions Engrg. 2001: 60-66) described an experiment in which ice thickness (mm) was studied as a function of | S00 550) 350! 4.00) elapsed time (hr) under specified conditions, The | 7 136 153 07 following data was read from a graph in the : : article: n = 33; x = .17, .33, 50, .67, ..., 5.50; Hy tinatoile TT presented the sesi@iads ‘Boss y = 50, 1.25, 1.50, 2.75, 3.50, 4.75, 5.75, 5.60, simple: linear ‘regression. Of-moistire content: y 7.00, 8.00, 8.25, 9.50, 10.50, 11.00, 10.75, 12.50, on filtration rate x. 12.25, 13.25, 15.50, 15.00, 15.25, 16.25, 17.25, a, Plot the residuals against x. Does the resulting 18.00, 18.25, 18.15, 20.25, 19.50, 20.00, 20.50, plot suggest that a straight-line regression 20.60, 20.50, 19.80. function is a reasonable choice of model? a. The /? value resulting from a least squares fit Explain yourreasoning: is 977. Given the high r*, does it seem appro- b. Using s = .665, compute the values of the priate to assume an approximate linear rela- standardized residuals. Is ef © e/s for tionship? i= 1, ... n, or are the e's not close to b. The residuals, listed in the same order as the x being proportional to the ¢;’s? valieseaie c. Plot the standardized residuals against x. Dees the plot differ significantly in general —1.03 —0.92 —1.35 —0.78 —0.68 -0.11 0.21 appearance from the plot of part (a)? -0.59 0.13 0.45 0.06 0.62 0.94 0.80 71. 
Wear resistance of certain nuclear reactor compo- ~O-!4 0.93 0.04 0.36 1.92 0.78 0.35 Deere : 0.67 1.02 1.09 0.66 -0.09 1.33 —0.10 nents made of Zircaloy-2 is partly determined by properties of the oxide layer. The following data 9-24 —0-43 —1.01 —1.75 —3.14 appears in an article that proposed a new nonde- Plot the residuals against x, and reconsider the structive testing method to monitor thickness of question in (a). What does the plot suggest? the layer (“Monitoring of Oxide Layer Thickness 73, The accompanying data on x = true density on Zircaloy-2 by the Eddy Current Test Method,” (kg/mm*) and y = moisture content (% 4.b.) J. Test. Eval., 1987: 333-336). The variables are was read from a plot in the article “Physical x = oxide-layer thickness (wm) and y = eddy- Properties of Cumin Seed” (J. Agric. Engrg. current response (arbitrary units). Res., 1996: 93-98). x] 0 7 17 14133 x| 7.0 93 132 163 19.1 22.0 y | 203 198 195 159 15.1 y | 1046 1065 1094 1117 1130 1135 The equation of the least squares line is y ep Me 0218237285 = 1008.14 + 6.19268x (this differs very slightly y 14.7 11.9 115 83 6.6 from the equation given in the article); s = 7.265 : and 7? = .968. a. The auithors summarized. the relationship by a. Carry out a test of model utility and comment. giving the equation of the least squares line as b. Compute the values of the residuals and y = 20.6 — .047x. Calculate and plot the resi- plot the residuals against x. Does the plot duals against se) and then comment ‘ca the suggest that a linear regression function is appropriateness of the simple linear regres- inappropriate? sion model. --- Trang 694 --- 12.6 Assessing Model Adequacy 681 c. Compute the values of the standardized computed from these five will be essentially residuals and plot them against x. Are there identical for the four sets—the least squares any unusually large (positive or negative) line (vy = 3 + .5x), SSE, s*, 7°, ¢ intervals, ¢ sta- standardized residuals? 
Does this plot give the same message as the plot of part (b) regarding the appropriateness of a linear regression function?

74. Continuous recording of heart rate can be used to obtain information about the level of exercise intensity or physical strain during sports participation, work, or other daily activities. The article "The Relationship Between Heart Rate and Oxygen Uptake During Non-Steady State Exercise" (Ergonomics, 2000: 1578–1592) reported on a study to investigate using heart rate response (x, as a percentage of the maximum rate) to predict oxygen uptake (y, as a percentage of maximum uptake) during exercise. The accompanying data was read from a graph in the paper.

HR    43.5  44.0  44.0  44.5  44.0  45.0  48.0  49.0
VO₂   22.0  21.0  22.0  21.5  25.5  24.5  30.0  28.0
HR    49.5  51.0  54.5  57.5  57.7  61.0  63.0  72.0
VO₂   32.0  29.0  38.5  30.5  57.0  40.0  58.0  72.0

Use a statistical software package to perform a simple linear regression analysis. Considering the list of potential difficulties in this section, see which of them apply to this data set.

75. Consider the following four (x, y) data sets; the first three have the same x values, so these values are listed only once (Frank Anscombe, "Graphs in Statistical Analysis," Amer. Statist., 1973: 17–21):

x(1–3)  y₁      y₂     y₃      x₄     y₄
10.0    8.04    9.14   7.46    8.0    6.58
 8.0    6.95    8.14   6.77    8.0    5.76
13.0    7.58    8.74   12.74   8.0    7.71
 9.0    8.81    8.77   7.11    8.0    8.84
11.0    8.33    9.26   7.81    8.0    8.47
14.0    9.96    8.10   8.84    8.0    7.04
 6.0    7.24    6.13   6.08    8.0    5.25
 4.0    4.26    3.10   5.39   19.0   12.50
12.0   10.84    9.13   8.15    8.0    5.56
 7.0    4.82    7.26   6.42    8.0    7.91
 5.0    5.68    4.74   5.73    8.0    6.89

For each of these four data sets, the values of the summary statistics Σxᵢ, Σxᵢ², Σyᵢ, Σyᵢ², and Σxᵢyᵢ are virtually identical, so all quantities computed from these five will be essentially identical for the four sets: the least squares line (ŷ = 3 + .5x), SSE, s², r², t intervals, t statistics, and so on. The summary statistics provide no way of distinguishing among the four data sets. Based on a scatter plot and a residual plot for each set, comment on the appropriateness or inappropriateness of fitting a straight-line model; include in your comments any specific suggestions for how a "straight-line analysis" might be modified or qualified.

76. a. Express the ith residual Yᵢ − Ŷᵢ (where Ŷᵢ = β̂₀ + β̂₁xᵢ) in the form ΣcⱼYⱼ, a linear function of the Yⱼ's. Then use rules of variance to verify that V(Yᵢ − Ŷᵢ) is given by Expression (12.13).
b. As xᵢ moves farther away from x̄, what happens to V(Ŷᵢ) and to V(Yᵢ − Ŷᵢ)?

77. If there is at least one x value at which more than one observation has been made, there is a formal test procedure for testing

H₀: μ_Y·x = β₀ + β₁x for some values β₀, β₁ (the true regression function is linear)
versus
Hₐ: H₀ is not true (the true regression function is not linear)

Suppose observations are made at x₁, x₂, …, x_c. Let Y₁₁, Y₁₂, …, Y₁ₙ₁ denote the n₁ observations when x = x₁; …; Y_c1, Y_c2, …, Y_cn_c denote the n_c observations when x = x_c. With n = Σnᵢ (the total number of observations), SSE has n − 2 df. We break SSE into two pieces, SSPE (pure error) and SSLF (lack of fit), as follows:

SSPE = Σᵢ Σⱼ (yᵢⱼ − ȳᵢ·)²
SSLF = SSE − SSPE

The nᵢ observations at xᵢ contribute nᵢ − 1 df to SSPE, so the number of degrees of freedom for SSPE is Σᵢ(nᵢ − 1) = n − c, and the degrees of freedom for SSLF is n − 2 − (n − c) = c − 2. Let MSPE = SSPE/(n − c) and MSLF = SSLF/(c − 2). Then it can be shown that whereas E(MSPE) = σ² whether or not H₀ is true, E(MSLF) = σ² if H₀ is true and E(MSLF) > σ² if H₀ is false.

Test statistic: F = MSLF/MSPE
Rejection region: f ≥ F_α,c−2,n−c

The following data comes from the article "Changes in Growth Hormone Status Related to Body Weight of Growing Cattle" (Growth, 1977: 241–247), with x = body weight and y = metabolic clearance rate/body weight. (So c = 4, n₁ = n₂ = 3, n₃ = n₄ = 4.)

x   110  110  110  230  230  230  360
y   235  198  173  174  149  124  115
x   360  360  360  505  505  505  505
y   130  102   95  122  112   98   96

a. Test H₀ versus Hₐ at level .05 using the lack-of-fit test just described.
b. Does a scatter plot of the data suggest that the relationship between x and y is linear? How does this compare with the result of part (a)? (A nonlinear regression function was used in the article.)

12.7 Multiple Regression Analysis

In multiple regression, the objective is to build a probabilistic model that relates a dependent variable y to more than one independent or predictor variable. Let k represent the number of predictor variables (k ≥ 2) and denote these predictors by x₁, x₂, …, x_k. For example, in attempting to predict the selling price of a house, we might have k = 3 with x₁ = size (ft²), x₂ = age (years), and x₃ = number of rooms.

DEFINITION  The general additive multiple regression model equation is

Y = β₀ + β₁x₁ + β₂x₂ + ⋯ + β_k x_k + ε    (12.16)

where E(ε) = 0 and V(ε) = σ². In addition, for purposes of testing hypotheses and calculating CIs or PIs, it is assumed that ε is normally distributed and also that the ε's associated with various observations, and thus the Yᵢ's themselves, are independent of one another.

Let x₁*, x₂*, …, x_k* be particular values of x₁, …, x_k. Then (12.16) implies that

μ_Y·x₁*,…,x_k* = β₀ + β₁x₁* + ⋯ + β_k x_k*    (12.17)

Thus, just as β₀ + β₁x describes the mean Y value as a function of x in simple linear regression, the true (or population) regression function β₀ + β₁x₁ + ⋯ + β_k x_k gives the expected value of Y as a function of x₁, …, x_k. The βᵢ's are the true (or population) regression coefficients. The regression coefficient β₁ is interpreted as the expected change in Y associated with a 1-unit increase in x₁ while x₂, …, x_k are held fixed. Analogous interpretations hold for β₂, …, β_k.

Estimating Parameters

The data in simple linear regression consists of n pairs (x₁, y₁), …, (xₙ, yₙ). Suppose that a multiple regression model contains two predictor variables, x₁ and x₂.
Then each observation will consist of three numbers (a triple): a value of x₁, a value of x₂, and a value of y. More generally, with k independent or predictor variables, each observation will consist of k + 1 numbers (a "(k + 1)-tuple"). The values of the predictors in the individual observations are denoted using double subscripting:

x_ij = the value of the jth predictor x_j in the ith observation (i = 1, …, n; j = 1, …, k)

Thus the first subscript is the observation number and the second subscript is the predictor number. For example, x₈₃ is the value of the 3rd predictor in the 8th observation (to avoid confusion, a comma can be inserted between the two subscripts, e.g., x₁₂,₃). The first observation in our data set is then (x₁₁, x₁₂, …, x₁ₖ, y₁), the second is (x₂₁, x₂₂, …, x₂ₖ, y₂), and so on.

Consider candidates b₀, b₁, …, b_k for estimates of the βᵢ's and the corresponding candidate regression function b₀ + b₁x₁ + ⋯ + b_k x_k. Substituting the predictor values for any individual observation into this candidate function gives a prediction for the y value that would be observed, and subtracting this prediction from the actual observed y value gives the prediction error. The principle of least squares says we should square these prediction errors, sum them, and then take as the least squares estimates β̂₀, β̂₁, …, β̂_k the values of the bⱼ's that minimize the sum of squared prediction errors. To carry out this program, form the criterion function (sum of squared prediction errors)

g(b₀, b₁, …, b_k) = Σᵢ [yᵢ − (b₀ + b₁xᵢ₁ + ⋯ + b_k xᵢₖ)]²

then take the partial derivative of g(·) with respect to each bⱼ (j = 0, 1, …, k), and equate these k + 1 partial derivatives to 0. The result is a system of k + 1 equations, the normal equations, in the k + 1 unknowns (the bⱼ's). It is very important here that the normal equations are linear in the unknowns because the criterion function is quadratic.
nb₀ + (Σxᵢ₁)b₁ + (Σxᵢ₂)b₂ + ⋯ + (Σxᵢₖ)b_k = Σyᵢ
(Σxᵢ₁)b₀ + (Σxᵢ₁²)b₁ + (Σxᵢ₁xᵢ₂)b₂ + ⋯ + (Σxᵢ₁xᵢₖ)b_k = Σxᵢ₁yᵢ
⋮
(Σxᵢₖ)b₀ + (Σxᵢₖxᵢ₁)b₁ + ⋯ + (Σxᵢₖxᵢ,ₖ₋₁)b_k₋₁ + (Σxᵢₖ²)b_k = Σxᵢₖyᵢ

We will assume that the system has a unique solution, the least squares estimates β̂₀, β̂₁, …, β̂_k. The next section uses matrix algebra to deal with the system of equations and to develop inferential procedures for multiple regression. For the moment, though, we shall take advantage of the fact that all of the commonly used statistical software packages are programmed to solve the equations and provide the results needed for inference.

Sometimes interest in the individual regression coefficients is the main reason for doing the regression. The article "Autoregressive Modeling of Baseball Performance and Salary Data" (Proceedings of the Statistical Graphics Section, American Statistical Association, 1988: 132–137) describes a multiple regression of runs scored as a function of singles, doubles, triples, home runs, and walks (combined with hit-by-pitcher). The estimated regression equation is

runs = −2.49 + .47 singles + .76 doubles + 1.14 triples + 1.54 home runs + .39 walks

This is very similar to the popular slugging percentage statistic, which gives weight 1 to singles, 2 to doubles, 3 to triples, and 4 to home runs. However, the slugging percentage gives no weight to walks, whereas the regression puts weight .39 on walks, more than 80% of the weight it assigns to singles. The importance of walks is well known among statisticians who follow baseball, and it is interesting that there are now some statistically savvy people in major league baseball management who are emphasizing walks in choosing players.

The article "Factors Affecting Achievement in the First Course in Calculus" (J. Exper. Educ., 1984: 136–140) discussed the ability of several variables to predict y = freshman calculus grade (on a scale of 0–100).
The variables included x₁ = score on an algebra placement test given in the first week of class, x₂ = ACT math score, x₃ = ACT natural science score, and x₄ = high school percentile rank. Here are the scores for the first five and the last five of the 80 students (the data set is available from the website for this book):

Observation  Algebra  ACTM  ACTNS  HS Rank  Grade
 1           21       27    23     68       62
 2           16       29    32     99       75
 3           22       30    32     98       95
 4           25       34    28     90       78
 5           22       29    23     99       95
76           22       29    26     88       85
77           17       29    33     92       75
78           26       27    29     95       88
79           26       28    30     99       95
80           21       28    30     99       85

The JMP statistical computer package gave the following least squares estimates:

β̂₀ = 36.12   β̂₁ = .9610   β̂₂ = .2718   β̂₃ = .2161   β̂₄ = .1353

Thus we estimate that .9610 is the average increase in final grade associated with a 1-point increase in the algebra placement score when the other three predictors are held fixed. Another way to interpret this is to say that a 10-point increase in the algebra pretest score, with the other scores held fixed, corresponds to a 9.6-point increase in the final grade, an increase of approximately one letter grade if A = 90s, B = 80s, etc. The other estimated coefficients are interpreted in a similar manner. The estimated regression equation is

ŷ = 36.12 + .9610x₁ + .2718x₂ + .2161x₃ + .1353x₄

A point prediction of final grade for a single student with an algebra test score of 25, ACTM score of 28, ACTNS score of 26, and a high school percentile rank of 90 is

ŷ = 36.12 + .9610(25) + .2718(28) + .2161(26) + .1353(90) = 85.55

a middle B. This is also a point estimate of the mean for the population of all students with an algebra test score of 25, ACTM score of 28, ACTNS score of 26, and a high school percentile rank of 90. ■
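The point prediction is just the estimated regression function evaluated at the student's predictor values; a minimal sketch using the coefficients reported above (the helper name is ours, not from the text):

```python
def predict(coeffs, x):
    # coeffs = [b0, b1, ..., bk]; x = [x1*, ..., xk*].
    return coeffs[0] + sum(b * xi for b, xi in zip(coeffs[1:], x))

# JMP least squares estimates for the calculus-grade regression.
coeffs = [36.12, 0.9610, 0.2718, 0.2161, 0.1353]

# algebra = 25, ACTM = 28, ACTNS = 26, HS rank = 90
yhat = predict(coeffs, [25, 28, 26, 90])  # about 85.55, a middle B
```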
σ̂² and the Coefficient of Multiple Determination

Substituting the values of the predictors from the successive observations into the equation for the estimated regression function gives the predicted or fitted values ŷ₁, ŷ₂, …, ŷₙ. For example, since the values of the four predictors for the last observation in Example 12.25 are 21, 28, 30, and 99, respectively, the corresponding predicted value is ŷ₈₀ = 83.79. The residuals are the differences y₁ − ŷ₁, …, yₙ − ŷₙ. In simple linear regression, they were the vertical deviations from the least squares line, but in general there is no geometric interpretation in multiple regression (the exception is the case k = 2, where the estimated regression function specifies a plane in three dimensions and the residuals are the vertical deviations from the plane). The last residual in Example 12.25 is 85 − 83.79 = 1.21. The closer the residuals are to 0, the better the job our estimated equation is doing in predicting the y values actually observed.

The residuals are sometimes important not just for judging the quality of a regression. Several enterprising students developed a multiple regression model using age, size in square feet, etc., to predict the price of four-unit apartment buildings. They found that one building had a strongly negative residual, meaning that the price was much lower than predicted. As it turned out, the reason was that the owner had cash-flow problems and needed to sell quickly, so the students got an unusually good deal.

As in simple linear regression, the estimate of the variance parameter σ² is based on the sum of squared residuals (or sum of squared errors) SSE = Σ(yᵢ − ŷᵢ)². Previously, we divided SSE by n − 2 to obtain the estimate. The explanation was that the two parameters β₀ and β₁ had to be estimated, entailing a loss of two degrees of freedom. For each parameter there is a normal equation that can be expressed as a constraint on the residuals, with a loss of 1 df.
In multiple regression with k predictors, k + 1 df are lost in estimating the βⱼ's (don't forget the constant term β₀). Here are the normal equations rewritten as constraints on the residuals:

Σᵢ [yᵢ − (β̂₀ + β̂₁xᵢ₁ + β̂₂xᵢ₂ + ⋯ + β̂_k xᵢₖ)] = 0
Σᵢ xᵢ₁[yᵢ − (β̂₀ + β̂₁xᵢ₁ + β̂₂xᵢ₂ + ⋯ + β̂_k xᵢₖ)] = 0
⋮
Σᵢ xᵢₖ[yᵢ − (β̂₀ + β̂₁xᵢ₁ + β̂₂xᵢ₂ + ⋯ + β̂_k xᵢₖ)] = 0

The first equation says that the sum of the residuals is 0, the second says that the residuals weighted by the first predictor sum to 0, and so on. These k + 1 constraints allow any k + 1 residuals to be determined from the others. This implies that SSE is based on n − (k + 1) df, and this is the divisor in the estimate of σ²:

σ̂² = s² = SSE/[n − (k + 1)] = MSE,   s = √s²

SSE can once again be regarded as a measure of unexplained variation in the data, the extent to which observed variation in y cannot be attributed to the model relationship. The total sum of squares SST, defined as Σ(yᵢ − ȳ)² as in simple linear regression, is a measure of total variation in the observed y values. Taking the ratio of these sums of squares and subtracting it from 1 gives the coefficient of multiple determination

R² = 1 − SSE/SST

Sometimes called just the coefficient of determination or the squared multiple correlation, R² is interpreted as the proportion of observed variation that can be attributed to, or equivalently explained by, the model relationship. Thinking of SST as the error sum of squares for the constant model (with β₀ as the only term in the model, so that ȳ is the predicted value), R² is the proportion by which the model reduces the error sum of squares. For example, if SST = 20 and SSE = 5, then the model reduces the error sum of squares by 75%, so R² = .75. The closer R² is to 1, the greater the proportion of observed variation that can be explained by the fitted model.
Unfortunately, there is a potential problem with R²: its value can be inflated by including predictors in the model that are relatively unimportant or even frivolous. For example, suppose we plan to obtain a sample of 20 recently sold houses in order to relate sale price to various characteristics of a house. Natural predictors include interior size, lot size, age, number of bedrooms, and distance to the nearest school. Suppose we also include in the model the diameter of the doorknob on the door of the master bedroom, the height of the toilet bowl in the master bath, and so on until we have 19 predictors. Then, unless we are extremely unlucky in our choice of predictors, the value of R² will be 1 (because 20 coefficients are estimated from 20 observations)! Rather than seeking a model that has the highest possible R² value, which can be achieved just by "packing" our model with predictors, what is desired is a relatively simple model based on just a few important predictors whose R² value is high. It is therefore desirable to adjust R² to take account of the fact that its value may be quite high just because many predictors were used relative to the amount of data. The adjusted coefficient of multiple determination is defined by

R²ₐ = 1 − MSE/MST = 1 − [SSE/(n − (k + 1))]/[SST/(n − 1)] = 1 − [(n − 1)/(n − (k + 1))] · SSE/SST

The ratio multiplying SSE/SST in adjusted R² exceeds 1 (its denominator is smaller than its numerator), so adjusted R² is smaller than R² itself, and in fact will be much smaller when k is large relative to n. A value of R²ₐ much smaller than R² is a warning flag that the chosen model has too many predictors relative to the amount of data.

Continuing with the previous example, in which a model with four predictors was fit to the calculus data consisting of 80 observations, the JMP software package gave SSE = 7346.05 and SST = 10,332.20, from which s = 9.90, R² = .289, and R²ₐ = .251.
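These three summary quantities follow directly from SSE and SST; a quick sketch reproducing the reported values for the calculus data (JMP's output in Figure 12.29 carries more decimal places):

```python
import math

SSE, SST = 7346.05, 10332.20   # calculus data: n = 80 observations, k = 4 predictors
n, k = 80, 4

s = math.sqrt(SSE / (n - (k + 1)))                  # root mean square error, about 9.90
R2 = 1 - SSE / SST                                  # coefficient of multiple determination
R2_adj = 1 - (n - 1) / (n - (k + 1)) * SSE / SST    # adjusted R^2
```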
The estimated standard deviation s is very close to 10, which corresponds to one letter grade on the usual A = 90s, B = 80s, …, scale. About 29% of the observed variation in grade can be attributed to the chosen model. The difference between R² and R²ₐ is not very dramatic, a reflection of the fact that k = 4 is much smaller than n = 80. ■

A Model Utility Test

In multiple regression, is there a single indicator that can be used to judge whether a particular model will be useful? The value of R² certainly communicates a preliminary message, but this value is sometimes deceptive because it can be greatly inflated by using a large number of predictors (large k) relative to the sample size n (this is the rationale behind adjusting R²). The model utility test in simple linear regression involved the null hypothesis H₀: β₁ = 0, according to which there is no useful relation between y and the single predictor x. Here we consider the assertion that β₁ = 0, β₂ = 0, …, β_k = 0, which says that there is no useful relationship between y and any of the k predictors. If at least one of these β's is not 0, the corresponding predictor(s) is (are) useful. The test is based on a statistic that has a particular F distribution when H₀ is true (see Sections 10.5 and 11.1 for more about F tests).

Null hypothesis: H₀: β₁ = β₂ = ⋯ = β_k = 0
Alternative hypothesis: Hₐ: at least one βᵢ ≠ 0 (i = 1, …, k)
Test statistic value:

f = (R²/k) / {(1 − R²)/[n − (k + 1)]} = (SSR/k) / {SSE/[n − (k + 1)]} = MSR/MSE    (12.18)

where SSR = regression sum of squares = SST − SSE.
Rejection region for a level α test: f ≥ F_α,k,n−(k+1)

See the next section for an explanation of why the ratio MSR/MSE has an F distribution under the null hypothesis. Except for a constant multiple, the test statistic here is R²/(1 − R²), the ratio of explained to unexplained variation.
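The two forms of the test statistic in (12.18) necessarily give the same value; a quick numerical check with the calculus-data sums of squares (a sketch, not book code):

```python
SSE, SST = 7346.05, 10332.20   # calculus data: n = 80, k = 4
n, k = 80, 4

SSR = SST - SSE                 # regression sum of squares
MSR = SSR / k
MSE = SSE / (n - (k + 1))
f_anova = MSR / MSE             # ANOVA-table form of the statistic

R2 = 1 - SSE / SST
f_r2 = (R2 / k) / ((1 - R2) / (n - (k + 1)))   # R^2 form; same value
```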
If the proportion of explained variation is high relative to unexplained, we would naturally want to reject H₀ and confirm the utility of the model. However, the factor [n − (k + 1)]/k decreases as k increases, and if k is large relative to n, it will reduce f considerably.

Returning to the calculus data of Example 12.25, a model with k = 4 predictors was fitted, so the relevant hypotheses are

H₀: β₁ = β₂ = β₃ = β₄ = 0
Hₐ: at least one of these four β's is not 0

Figure 12.29 shows output from the JMP statistical package. The values of s (Root Mean Square Error), R², and adjusted R² certainly suggest a useful model. The value of the model utility F ratio is

f = (R²/k) / {(1 − R²)/[n − (k + 1)]} = (.289/4) / [.711/(80 − 5)] = 7.62

This value also appears in the F Ratio column of the ANOVA table in Figure 12.29. Since f = 7.62 > F.01,4,75 ≈ 3.6, H₀ should be rejected at significance level .01. In fact, the ANOVA table in the JMP output shows that P-value < .0001. The null hypothesis should therefore be rejected at any reasonable significance level. We conclude that there is a useful linear relationship between y and at least one of the four predictors in the model. This does not mean that all four predictors are useful; we will say more about this subsequently.

Summary of Fit
RSquare                       0.289014
RSquare Adj                   0.251095
Root Mean Square Error        9.896834
Mean of Response              80.15
Observations (or Sum Wgts)    80

Analysis of Variance
Source    DF    Sum of Squares    Mean Square    F Ratio
Model      4          2986.150        746.538     7.6218
Error     75          7346.050         97.947     Prob > F
C. Total  79         10332.200                     <.0001

Parameter Estimates
Term        Estimate    Std Error   t Ratio   Prob>|t|   Lower 95%    Upper 95%
Intercept   36.121531   10.7519     3.36      0.0012     14.702651    57.540411
Alg Place   0.960992    0.26404     3.64      0.0005     0.4349957    1.4869868
ACTM        0.2718147   0.453505    0.60      0.5507     −0.631614    1.1752438
ACTNS       0.2161047   0.313215    0.69      0.4924     −0.407852    0.8400606
HS Rank     0.1353155   0.103642    1.31      0.1957     −0.071150    0.3417815

Figure 12.29 Multiple regression output from JMP for the data of Example 12.27 ■

Inferences in Multiple Regression

Before testing hypotheses, constructing CIs, and making predictions, one should first examine diagnostic plots to see whether the model needs modification or whether there are outliers in the data. The recommended plots are (standardized) residuals versus each independent variable, residuals versus ŷ, y versus ŷ, and a normal probability plot of the standardized residuals. Potential problems are suggested by the same patterns discussed in Section 12.6. Of particular importance is the identification of observations that have a large influence on the fit.

Because each β̂ᵢ is a linear function of the yᵢ's, the standard deviation of each β̂ᵢ is the product of σ and a function of the xᵢⱼ's, so an estimate s_β̂ᵢ is obtained by substituting s for σ. A formula for s_β̂ᵢ is given in the next section, and the result is part of the output from all standard regression computer packages. Inferences concerning a single βᵢ are based on the standardized variable

T = (β̂ᵢ − βᵢ)/S_β̂ᵢ

which, assuming the model is correct, has a t distribution with n − (k + 1) df. The point estimate of μ_Y·x₁*,…,x_k*, the expected value of Y when x₁ = x₁*, …, x_k = x_k*, is μ̂_Y·x₁*,…,x_k* = β̂₀ + β̂₁x₁* + ⋯ + β̂_k x_k*. The estimated standard deviation of the corresponding estimator is a complicated expression involving the sample xᵢⱼ's, but a simple matrix formula is given in the next section. The better statistical computer packages will calculate it on request. Inferences about μ_Y·x₁*,…,x_k*
are based on standardizing its estimator to obtain a t variable having n − (k + 1) df.

1. A 100(1 − α)% CI for βᵢ, the coefficient of xᵢ in the regression function, is

   β̂ᵢ ± t_α/2,n−(k+1) · s_β̂ᵢ

2. A test for H₀: βᵢ = βᵢ₀ uses the test statistic value t = (β̂ᵢ − βᵢ₀)/s_β̂ᵢ based on n − (k + 1) df. The test is upper-, lower-, or two-tailed according to whether Hₐ contains the inequality >, <, or ≠.

3. A 100(1 − α)% CI for μ_Y·x₁*,…,x_k* is

   μ̂_Y·x₁*,…,x_k* ± t_α/2,n−(k+1) · (estimated SD of μ̂_Y·x₁*,…,x_k*) = ŷ ± t_α/2,n−(k+1) · s_ŷ

   where Ŷ is the statistic β̂₀ + β̂₁x₁* + ⋯ + β̂_k x_k* and ŷ is the calculated value of Ŷ.

4. A 100(1 − α)% PI for a future y value is

   μ̂_Y·x₁*,…,x_k* ± t_α/2,n−(k+1) · [s² + (estimated SD of μ̂_Y·x₁*,…,x_k*)²]^½ = ŷ ± t_α/2,n−(k+1) · √(s² + s_ŷ²)

Simultaneous intervals for which the simultaneous confidence or prediction level is controlled can be obtained by applying the Bonferroni technique.

(Example 12.27 continued) The JMP output for the calculus data includes 95% confidence intervals for the coefficients. Let's verify the interval for β₁, the coefficient for algebra placement score:

β̂₁ ± t.025,80−5 · s_β̂₁ = .961 ± 1.992(.264) = .961 ± .526 = (.435, 1.487)

which agrees with the interval given in Figure 12.29. Thus, if ACTM score, ACTNS score, and percentile rank are fixed, we estimate that an increase of between .435 and 1.487 in grade is associated with a one-point increase in algebra score.

We found in Example 12.25 that, if a student has an algebra test score of 25, ACTM score of 28, ACTNS score of 26, and high school percentile rank of 90, then the predicted value is 85.55. The estimated standard deviation for this predicted value can be obtained from JMP, with the result s_ŷ = 1.882, so a 95% confidence interval for the expected grade is

μ̂_Y·25,28,26,90 ± t.025,80−5 · s_ŷ = 85.55 ± 1.992(1.882) = 85.55 ± 3.75 = (81.8, 89.3)

which can also be obtained from JMP. This interval is for the mean score of all students with the predictor values 25, 28, 26, and 90.
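The interval arithmetic in items 1 and 3 is easy to verify directly; a sketch using the values reported for the calculus data (the helper name is ours; t.025,75 = 1.992 from the text):

```python
def t_ci(estimate, t_crit, sd):
    # 100(1 - alpha)% interval: estimate +/- t_crit * (estimated SD).
    margin = t_crit * sd
    return estimate - margin, estimate + margin

t75 = 1.992                           # t_.025 critical value with 75 df

# CI for beta_1 (algebra placement coefficient): estimate .961, SD .264
lo1, hi1 = t_ci(0.961, t75, 0.264)    # about (.435, 1.487)

# CI for the mean grade at predictor values (25, 28, 26, 90): s_yhat = 1.882
lo2, hi2 = t_ci(85.55, t75, 1.882)    # about (81.8, 89.3)
```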
Regarding scores in the 80s as B's, we can say with 95% confidence that the expected grade is a B. Now consider the estimated standard deviation for the error in predicting the final grade of a single student with the predictor values 25, 28, 26, and 90. This is

√(s² + s_ŷ²) = √(9.897² + 1.882²) = 10.074

Therefore, a 95% prediction interval for the final grade of a single student with predictor scores 25, 28, 26, and 90 is

μ̂_Y·25,28,26,90 ± t.025,80−5(10.074) = 85.55 ± 1.992(10.074) = 85.55 ± 20.07 = (65.5, 105.6)

Of course, this PI is much wider than the corresponding CI. Although we are highly confident that the expected score is a B, the score for a single student could be as low as a D or as high as an A. Notice that the upper end of the interval exceeds the maximum possible score of 100, so it would be appropriate to truncate the interval to (65.5, 100). ■

Frequently, the hypothesis of interest has the form H₀: βᵢ = 0 for a particular i. For example, after fitting the four-predictor model in Example 12.25, the investigator might wish to test H₀: β₂ = 0. According to H₀, as long as the predictors x₁, x₃, and x₄ remain in the model, x₂ contains no useful information about y. The test statistic value is the t-ratio β̂ᵢ/s_β̂ᵢ. Many statistical computer packages report the t-ratio and the corresponding P-value for each predictor included in the model. For example, Figure 12.29 shows that as long as algebra pretest score, ACT natural science score, and high school percentile rank are retained in the model, the predictor x₂ = ACT math score can be deleted. The P-value for x₂ is .55, much too large to allow rejection of the null hypothesis.

It is interesting to look at the correlations between the predictors and the response variable in Example 12.25.
Here are the correlations and the corresponding P-values (in parentheses):

             algplace   ACTmath   ACTns    rank
  calcgrade    0.491     0.353    0.259    0.324
              (0.000)   (0.0013)  (0.020)  (0.003)

Do these values seem inconsistent with the multiple regression results? There is a highly significant correlation between calculus grade and ACT math score, but in the multiple regression the ACT math score is redundant, not needed in the model. The idea is that ACT math score also has highly significant correlations with the other predictors, so much of its predictive ability is retained in the model when this variable is deleted. In order to be a statistically significant predictor in the multiple regression model, a variable must provide additional predictive ability beyond what is offered by the other predictors.

The R² value for the calculus data is disappointing. Given the importance placed on predictors such as ACT scores and high school rank in college admissions and NCAA eligibility, we might expect that these scores would give better predictions.

Assessing Model Adequacy

The standardized residuals in multiple regression result from dividing each residual by its estimated standard deviation; a simple matrix formula for the standard deviation is given in the next section. We recommend a normal probability plot of the standardized residuals as a basis for validating the normality assumption. Plots of the standardized residuals versus each predictor and versus ŷ should show no discernible pattern. The book by Kutner et al. discusses other diagnostic plots.

Figure 12.30 from JMP shows a histogram and normal probability plot of the standardized residuals for the calculus data discussed in the preceding examples. The plot is sufficiently straight that there is no reason to doubt the assumption of normally distributed errors.
Figure 12.30  A normal probability plot and histogram of the standardized residuals for the calculus data

Figure 12.31 shows plots of the standardized residuals versus the predictors for the calculus data. There is not much evidence of a pattern in plots (b), (c), and (d), other than randomness. However, the first plot does show some indication that the variance might be lower at the high end. The graphs in Figure 12.32 show the calculus grade and the standardized residuals plotted against the predicted values, and these also show narrowing on the right. Looking at Figure 12.32a, it is apparent that this would have to occur, because no score can be above 100. ■

Figure 12.31  Standardized residuals versus the predictors (algebra placement, ACTM, ACTNS, HS rank) for the calculus data

Figure 12.32  Diagnostic plots for the calculus data: (a) y versus ŷ; (b) standardized residual versus ŷ

Multiple Regression Models

We now consider various ways of creating predictors to specify informative models.

Polynomial Regression  Let's return for a moment to the case of bivariate data consisting of n (x, y) pairs. Suppose that a scatter plot shows a parabolic rather than linear shape. Then it is natural to specify a quadratic regression model:

Y = β0 + β1x + β2x² + ε

The corresponding population regression function β0 + β1x + β2x² is quadratic rather than linear, and gives the mean or expected value of Y for any particular x. So what does this have to do with multiple regression? Let's rewrite the quadratic model equation as follows:

Y = β0 + β1x1 + β2x2 + ε    where x1 = x and x2 = x²

Now this looks exactly like a multiple regression equation with two predictors. You may object on the grounds that one of the predictors is a mathematical function of the other one. Appeal denied! It is not only legitimate for a predictor in a multiple regression model to be a function of one or more other predictors, but often desirable, in the sense that a model with such a predictor may be judged much more useful than the model without it. The message at the moment is that quadratic regression is a special case of multiple regression, so any software package capable of carrying out a multiple regression analysis can fit the quadratic regression model. The same is true of cubic regression and even higher-order polynomial models, although in practice such higher-order predictors are very rarely needed.

The interpretation of βi given previously for the general multiple regression model is not legitimate in quadratic regression. This is because x2 = x², so the value of x2 cannot be increased while x1 = x is held fixed. More generally, the interpretation of regression coefficients requires extra care when some predictor variables are mathematical functions of others.
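The point that quadratic regression is just multiple regression with columns x and x² can be sketched in a few lines. This is an illustration with made-up data (the text's own analyses use JMP/SAS/MINITAB); any multiple regression routine would give the same estimates:

```python
import numpy as np

# Quadratic regression fit as a multiple regression: the design matrix
# carries x1 = x and x2 = x^2 as two ordinary "predictors".
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
# synthetic data from the true function 2 + 3x + 0.5x^2 plus small noise
y = 2.0 + 3.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)

X = np.column_stack([np.ones_like(x), x, x**2])   # columns: 1, x1, x2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares estimates

print(beta_hat)  # close to (2.0, 3.0, 0.5)
```

The same design-matrix trick handles cubic terms, interactions, and dummy variables: each derived quantity is simply appended as another column.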
Models with Interaction  Suppose that an industrial chemist is interested in the relationship between product yield (y) from a certain reaction and two independent variables, x1 = reaction temperature and x2 = pressure at which the reaction is carried out. The chemist initially proposes the relationship

Y = 1200 + 15x1 − 35x2 + ε

for temperature values between 80 and 100 in combination with pressure values ranging from 50 to 70. The population regression function 1200 + 15x1 − 35x2 gives the mean y value for any particular values of the predictors. Consider this mean y value for three different particular temperature values:

x1 = 90:   mean y value = 1200 + 15(90) − 35x2 = 2550 − 35x2
x1 = 95:   mean y value = 2625 − 35x2
x1 = 100:  mean y value = 2700 − 35x2

Graphs of these three mean y value functions are shown in Figure 12.33a. Each graph is a straight line, and the three lines are parallel, each with a slope of −35. Thus irrespective of the fixed value of temperature, the average change in yield associated with a 1-unit increase in pressure is −35. Because the chemist expects that when pressure x2 increases, the decline in average yield should be more rapid for a high temperature than for a low temperature, there is reason to doubt the appropriateness of the proposed model. Rather than the lines being parallel, the line for a temperature of 100 should be steeper than the line for a temperature of 95, and that line in turn should be steeper than the line for x1 = 90.

Figure 12.33  Graphs of the mean y value for two different models: (a) 1200 + 15x1 − 35x2; (b) −4500 + 75x1 + 60x2 − x1x2

A model that has this property includes, in addition to the predictors x1 and x2, a third predictor variable x3 = x1x2. One such model is

Y = −4500 + 75x1 + 60x2 − x1x2 + ε

for which the population regression function is −4500 + 75x1 + 60x2 − x1x2. This gives

(mean y value when temperature is 100) = −4500 + 75(100) + 60x2 − 100x2 = 3000 − 40x2
(mean y value when temperature is 95)  = 2625 − 35x2
(mean y value when temperature is 90)  = 2250 − 30x2

These are graphed in Figure 12.33b, where it is clear that the three slopes are different. Now each different value of x1 yields a line with a different slope, so the average change in yield associated with a 1-unit increase in x2 depends on the value of x1. When this is the case, the two variables are said to interact.

DEFINITION  If the change in the mean y value associated with a 1-unit increase in one independent variable depends on the value of a second independent variable, there is interaction between these two variables. Denoting the two independent variables by x1 and x2, we can model this interaction by including as an additional predictor x3 = x1x2, the product of the two independent variables.

The general equation for a multiple regression model based on two independent variables x1 and x2 and also including an interaction predictor is

Y = β0 + β1x1 + β2x2 + β3x3 + ε    where x3 = x1x2

When x1 and x2 do interact, this model will usually give a much better fit to resulting data than would the no-interaction model. Failure to consider a model with interaction all too often leads an investigator to conclude incorrectly that the relationship between y and a set of independent variables is not very substantial. In applied work, quadratic predictors x1² and x2² are often included along with the interaction predictor to model a curved relationship. This leads to the full quadratic or complete second-order model

Y = β0 + β1x1 + β2x2 + β3x1x2 + β4x1² + β5x2² + ε

This model replaces the straight lines of Figure 12.33 with parabolas (each one is the graph of the population regression function as x2 varies when x1 has a particular value).
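The slope arithmetic behind Figure 12.33b is worth making explicit: collecting the x2 terms of −4500 + 75x1 + 60x2 − x1x2 shows that the coefficient on pressure is 60 − x1, so each fixed temperature gives its own pressure slope. A plain-Python sketch using the chemist's numbers from the text:

```python
# Population regression function with interaction (the chemist's second model)
def mean_yield(x1, x2):
    return -4500 + 75 * x1 + 60 * x2 - x1 * x2

# For fixed temperature x1, mean yield is linear in pressure x2 with
# intercept -4500 + 75*x1 and slope 60 - x1: the interaction at work.
def pressure_line(x1):
    intercept = -4500 + 75 * x1
    slope = 60 - x1
    return intercept, slope

for temp in (90, 95, 100):
    print(temp, pressure_line(temp))
# temperatures 90, 95, 100 give slopes -30, -35, -40, matching the text
```

In the no-interaction model the x2 coefficient is a constant −35 whatever the temperature, which is exactly the parallel-lines picture of Figure 12.33a.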
Investigators carried out a study to see how various characteristics of concrete are influenced by x1 = % limestone powder and x2 = water–cement ratio, resulting in the accompanying data ("Durability of Concrete with Addition of Limestone Powder," Mag. Concrete Res., 1996: 131–137).

  x1    x2    x1x2   28-day comp. str. (MPa)   Adsorbability (%)
  21   .65   13.65          33.55                   8.42
  21   .55   11.55          47.55                   6.26
   7   .65    4.55          35.00                   6.74
   7   .55    3.85          35.90                   6.59
  28   .60   16.80          40.90                   7.28
   0   .60    0.00          39.10                   6.90
  14   .70    9.80          31.55                  10.80
  14   .50    7.00          48.00                   5.63
  14   .60    8.40          42.30                   7.43

                     ȳ = 39.317, SST = 278.52      ȳ = 7.339, SST = 18.356

Consider first compressive strength as the dependent variable y. Fitting the first-order model results in

ŷ = 84.82 + .1643x1 − 79.67x2    SSE = 72.25 (df = 6)    R² = .741    adjusted R² = .654

whereas including an interaction predictor gives

ŷ = 6.22 + 5.779x1 + 51.33x2 − 9.357x1x2    SSE = 29.35 (df = 5)    R² = .895    adjusted R² = .831

Based on this latter fit, a prediction for compressive strength when % limestone = 14 and water–cement ratio = .60 is

ŷ = 6.22 + 5.779(14) + 51.33(.60) − 9.357(8.4) = 39.32

Fitting the full quadratic relationship results in virtually no change in the R² value. However, when the dependent variable is adsorbability, the following results are obtained: R² = .747 when just two predictors are used, .802 when the interaction predictor is added, and .889 when the five predictors for the full quadratic relationship are used. ■

Models with Predictors for Categorical Variables  Thus far we have explicitly considered the inclusion of only quantitative (numerical) predictor variables in a multiple regression model. Using simple numerical coding, qualitative (categorical) variables, such as type of college (private or state) or type of wood (pine, oak, or walnut), can also be incorporated into a model. Let's first focus on the case of a dichotomous variable, one with just two possible categories: male or female, U.S. or foreign manufacture, and so on. With any such variable we associate a dummy or indicator variable x whose possible values 0 and 1 indicate which category is relevant for any particular observation.

Recall the graduation rate data introduced in Example 12.12 and plotted in Example 12.24. There it appeared that private universities might do better for a given SAT score. To test this we will use a model with y = graduation rate, x2 = average freshman SAT score, and x1 = a variable defined to indicate private or public status:

x1 = 1 if the university is private, 0 if the university is public

and consider the multiple regression model Y = β0 + β1x1 + β2x2 + ε. The mean graduation rate depends on whether the university is public or private:

mean graduation rate = β0 + β2x2         when x1 = 0 (public)
mean graduation rate = β0 + β1 + β2x2    when x1 = 1 (private)

Thus there are two parallel lines with vertical separation β1, as shown in Figure 12.34a. The coefficient β1 is the difference in mean graduation rates between private and public universities with SAT held fixed. If β1 > 0, then on average, for a given SAT, private universities will have a higher graduation rate.

Figure 12.34  Regression functions for models with one dummy variable (x1) and one quantitative variable (x2): (a) no interaction; (b) interaction

A second possibility is a model with a product (interaction) term:

Y = β0 + β1x1 + β2x2 + β3x1x2 + ε

Now the mean graduation rates for the two types of university are

mean graduation rate = β0 + β2x2                   when x1 = 0 (public)
mean graduation rate = β0 + β1 + (β2 + β3)x2       when x1 = 1 (private)

Thus we have two lines where β1 is the difference in intercepts and β3 is the difference in slopes, as shown in Figure 12.34b.
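The dummy-variable device is nothing more than a 0/1 numeric column built from the category labels, after which the two mean-response lines fall out of the coefficients. A minimal Python sketch; the coefficient values here are hypothetical round numbers chosen for illustration, not the fitted values from the SAS output discussed below:

```python
# Build the indicator x1 = 1 for private, 0 for public
schools = ["private", "public", "private", "public"]
x1 = [1 if s == "private" else 0 for s in schools]

# Mean response under the no-interaction model: b0 + b1*x1 + b2*x2
def mean_grad_rate(x1, x2, b0, b1, b2):
    return b0 + b1 * x1 + b2 * x2

# Hypothetical coefficients (illustration only): the two lines are
# parallel, separated vertically by b1 at every SAT value x2.
b0, b1, b2 = -10.0, 13.0, 0.06
gap_at_1000 = mean_grad_rate(1, 1000, b0, b1, b2) - mean_grad_rate(0, 1000, b0, b1, b2)
gap_at_1300 = mean_grad_rate(1, 1300, b0, b1, b2) - mean_grad_rate(0, 1300, b0, b1, b2)
print(x1)                        # [1, 0, 1, 0]
print(gap_at_1000, gap_at_1300)  # both equal b1 (up to floating-point rounding)
```

Under the interaction model the gap would instead be b1 + b3·x2, which is why the interaction term must be tested before β1 can be interpreted as "the" private-versus-public difference.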
Unless β3 = 0, the lines will not be parallel and there will be interaction, which means that the separation between public and private universities depends on SAT. The usual procedure is to test the interaction hypothesis H0: β3 = 0 versus Ha: β3 ≠ 0 first. If we do not reject H0 (no interaction), then we can use the parallel model to see if there is a separation (β1) between lines. Of course, it does not make sense to estimate the difference between lines if the difference depends on x2, which is the case when there is interaction. Figure 12.35 shows SAS output for these two tests. The coefficient for interaction has a P-value of .4047, so there is no reason to reject the null hypothesis H0: β3 = 0.

Test Interaction

Analysis of Variance
Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              3      6343.01499    2114.33833      31.21   <.0001
Error             16      1083.93501      67.74594
Corrected Total   19      7426.95000

Root MSE          8.23079    R-Square   0.8541
Dependent Mean   59.45000    Adj R-Sq   0.8267
Coeff Var        13.84490

Parameter Estimates
Variable    DF   Parameter Estimate   Standard Error   t Value   Pr > |t|
Intercept    1        -0.52145           18.16644        -0.03     0.9775
SAT          1         0.04822            0.01840         2.62     0.0186
Priv1_St0    1        -7.86223           29.39747        -0.27     0.7925
Inter        1         0.02240            0.02617         0.86     0.4047

Test Private versus State

Analysis of Variance
Source            DF   Sum of Squares   Mean Square   F Value   Pr > F
Model              2      6293.39873    3146.69936      47.19   <.0001
Error             17      1133.55127      66.67949
Corrected Total   19      7426.95000

Root MSE          8.16575    R-Square   0.8474
Dependent Mean   59.45000    Adj R-Sq   0.8294
Coeff Var        13.7355

Parameter Estimates
Variable    DF   Parameter Estimate   Standard Error   t Value   Pr > |t|   95% Confidence Limits
Intercept    1       -11.35960           12.92137        -0.88     0.3916    -38.62131   15.90210
SAT          1         0.05929            0.01298         4.57     0.0003      0.03190    0.08668
Priv1_St0    1        16.92772            4.97206         3.40     0.0034      6.43759   27.41785

Figure 12.35  SAS output for interaction model and parallel model
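Each t value in the SAS output is simply the parameter estimate divided by its standard error, so the interaction test can be checked by hand. A quick Python verification using the rows of the interaction-model output in Figure 12.35:

```python
# t ratio = estimate / standard error, as reported in SAS Parameter Estimates
def t_ratio(estimate, std_error):
    return estimate / std_error

# rows from the interaction-model output in Figure 12.35
print(round(t_ratio(0.02240, 0.02617), 2))    # interaction (Inter):  0.86
print(round(t_ratio(0.04822, 0.01840), 2))    # SAT:                  2.62
print(round(t_ratio(-7.86223, 29.39747), 2))  # Priv1_St0:           -0.27
```

With 16 error df, |t| = 0.86 corresponds to the two-tailed P-value of about .40 shown in the output, confirming that the interaction term can be dropped.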
Since we do not reject the hypothesis of no interaction, let's look at the results for the difference β1 in the model with two parallel lines. The variable Priv1_St0 is x1, the dummy variable with value 1 for private and 0 for state universities. The P-value for its coefficient is .0231, so we can reject the hypothesis that it is 0 at the .05 level. The value of the coefficient is 13.17, which means that a private university is estimated to have a graduation rate about 13 percentage points higher than a state university with the same freshman SAT. This is pretty large, especially in comparison with the coefficient for SAT, which is .06869. Dividing .06869 into β̂1 = 13.17 gives 192, which means that it takes 192 points of SAT to make up the difference between private and public universities. To put it another way, a private university with freshman SAT of 1000 is estimated to have the same graduation rate as a state university with SAT of 1192. ■

You might think that the way to handle a three-category situation is to define a single numerical variable with coded values such as 0, 1, and 2 corresponding to the three categories. This is incorrect, because it imposes an ordering on the categories that is not necessarily implied by the problem context. The correct way to incorporate three categories is to define two different dummy variables. Suppose, for example, that y is a score on a posttest taken after instruction, x1 is the score on an ability pretest taken before instruction, and that there are three methods of instruction in a mathematics unit: (1) with symbols, (2) without symbols, and (3) a mixture with and without symbols. Then let

x2 = 1 if instruction method 1, 0 otherwise
x3 = 1 if instruction method 2, 0 otherwise

For an individual taught with method 1, x2 = 1 and x3 = 0, whereas for an individual taught with method 2, x2 = 0 and x3 = 1. For an individual taught with method 3, x2 = x3 = 0; it is not possible that x2 = x3 = 1, because an individual cannot be taught simultaneously by both methods 1 and 2. The no-interaction model would have only the predictors x1, x2, and x3. The following interaction model allows the mean change in posttest score associated with a 1-unit increase in pretest score to depend on the method of instruction:

Y = β0 + β1x1 + β2x2 + β3x3 + β4x1x2 + β5x1x3 + ε

Construction of a picture like Figure 12.34 with a graph for each of the three possible (x2, x3) pairs gives three nonparallel lines (unless β4 = β5 = 0). How would we interpret statistically significant interaction? Suppose that it occurs to the extent that the lines for methods 1 and 2 cross. In particular, if the line for method 1 is higher on the right and lower on the left, it means that symbols work well for high-ability students but not as well for low-ability students.

More generally, incorporating a categorical variable with c possible categories into a multiple regression model requires the use of c − 1 indicator variables (e.g., five methods of instruction would necessitate using four indicator variables). Thus even one categorical variable can add many predictors to a model.

Indicator variables can be used for categorical variables without any other variables in the model. For example, consider Example 11.3, which compared three different compounds in their ability to prevent fabric soiling. Using a regression with two dummy variables gives the following regression ANOVA table, just like the one in Example 11.2:

Source           DF      SS        MS       F      P
Regression        2    0.06085   0.03043   0.99   0.401
Residual error   12    0.37008   0.03084
Total            14    0.43093

Analysis that involves both quantitative and categorical predictors, as in Example 12.31, is called analysis of covariance, and the quantitative variable is called a covariate. Sometimes more than one covariate is used.
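The c − 1 indicator coding for a three-category variable can be written as a tiny helper. A sketch following the instruction-methods example in the text (method 3 serves as the baseline category):

```python
# Encode a 3-category variable with c - 1 = 2 indicators:
# method 1 -> (1, 0), method 2 -> (0, 1), method 3 (baseline) -> (0, 0)
def encode_method(method):
    x2 = 1 if method == 1 else 0
    x3 = 1 if method == 2 else 0
    return x2, x3

print([encode_method(m) for m in (1, 2, 3)])
# note that (1, 1) can never occur: no one is taught by two methods at once
```

Coding the three methods as a single column 0/1/2 would instead force the fitted means to be equally spaced and ordered, which is exactly the artificial constraint the text warns against.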
Other Models The logistic regression model introduced in Section 12.1 can be extended to incorporate more than one predictor. Various nonlinear models are also used frequently in applied work. An example is the multiple exponential model Y = ebot bist thie , 9 Taking logs on both sides shows that In(Y) = Bo + Bix + +++ + Bux, + e!, where e’ = In(e). This is the usual multiple regression model with In(Y) as the response variable. Exercises | Section 12.7 (78-90) 78. Cardiorespiratory fitness is widely recognized as a ¢. What is the probability that VO.max will be major component of overall physical well-being. between 1.00 and 2.60 for a single observation Direct measurement of maximal oxygen uptake made when the values of the predictors are as (VO max) is the single best measure of such fit- stated in part (b)? ness, but direct measurement is time-consuming 79. Let y = sales at a fast-food outlet ($1000’s), x, = and expensive. It is therefore desirable to have a : ate 4 é 5 number of competing outlets within a 1-mile prediction equation for VOzmax in terms of easily " _ Se te aaa bt d quaritities: Consider the. variabl radius, x. = population within a 1-mile radius Obtained quart Hes Onsiden newanayi (1000's of people), and.x; be an indicator variable y= VOomax(L/min) x; = weight(kg) that equals 1 if the outlet has a drive-up window xy = age(yr) and 0 otherwise. Suppose that the true regression i y model is x3 = time necessary to walk 1 mile(min) ‘X4 = heart rate at the end of the walk(beats/min) Y = 10.0 — 1.2x, + 6.8%) + 15.3x3 +6 Here is one ean Hide totale bai a, What is the mean value of sales when the Sie uth fGen gen inter atiele umber of competing outlets is 2, there are Validation of the Rockport Fitness Walking Test 8000 people within a I-mile radius, and the in College Males and Females” (Res. Q. Exercise sane re Sport, 1994: 152-158): outlet has a drive-up window? Spars 1998 192158): b. 
What is the mean value of sales for an outlet without a drive-up window that has three com- Y=5.0+ Oly — 05x — -13x3 — Oley +8 peting outlets and 5000 people within a I-mile o=4 radius? c. Interpret f3. a. Interpret f, and B3. pret fs b. What is the expected value of VO2max when 80. The article “Analysis of the Modeling Methodol- weight is. 76 ke, axe is 20 sean, walle tine is ogies for Predicting the Strength of Air-Jet Spun 12 min, and heart rate is 140 beats/min? Yarns” (Textile Res. J., 1997: 39-44) reported ona --- Trang 713 --- 700 = charter 12 Regression and Correlation study carried out to relate yarn tenacity (y, in g/ Objective Decision-Making Approach for Asses- tex) to yarn count (x), in tex), percentage polyester sing Simultaneous Improvement in Die Life and (x9), first nozzle pressure (x3, in kg/em?), and Casting Quality in a Die Casting Process,” Qual. second nozzle pressure (x4, in kg/cm’). The esti- Engrg., 1994: 371-383). mate of the constant term in the corresponding multiple regression equation was 6.121. The esti- x | 1250 1300 1350 1250-1300 mated coefficients for the four predictors were —.082, .113, .256, and —.219, respectively, and %2 6 7 6 7 6 the coefficient of multiple determination was .946. 9 30 05. ~«4101.~=«S 02 Assume that n = 25, a. State and test the appropriate hypotheses to decide whether the fitted model specifies a x 1250 1300 1350 1350 useful linear relationship between the depen- y 3 z = q dent variable and at least one of the four model = predictors. y 87 96 106 108 b. Calculate the value of adjusted R? and comment. . ¢. Calculate a 99% confidence interval for true MINITAB output from fitting the multiple regres- sion model with predictors x, and x, is given here. 
mean yarn tenacity when yarn count is 16.5, yarn contains 50% polyester, first nozzle pres- ‘The regression equation is sure is 3, and second nozzle pressure is 5 ifthe tempai ff - -200 + 0.210 furntemp estimated standard deviation of predicted +3.00clostime tenacity under these circumstances is .350. 81. The article “Selling Prices/Sq. Ft. of Office Build- P°¢ictox Goer = Bhdev’ fonatto e ae Constant 199.56 11.64 -17.14 0.000 ings in Downtown Chicago — How Much Is neemp 0.210000 0.008642 24.30 0.000 dt Worth to Be an Old But Class A Building?” oy ook ime 3.0000 0.4321 6.94 0.000 (J. Real Estate Res., 2010: 1-22) considered a regression model to relate y = In($/ft?) to 16 pre-. $= 1-058 R-sq= 99.18 Rosq(adj) = 98.88 dictors, including age, age squared, number of 4...1ysis of variance stories, occupancy rate, and indicator variables for whether a building has a restaurant and weupee By a8 8 2 8 whether it has conference rooms. The model Was Regression 2 715.50 387.75 319.31 0.000 fit to data resulting from 203 sales. Error 6 6.72 1.12 a. The coefficient of multiple determination was Total 8 722.22 -711. What is the value of the adjusted coeffi- cient of multiple determination? Does it sug- a. Carry out the model utility test. gest that the relatively high R? value was the b. Calculate and interpret a 95% confidence inter- result of including too many predictors in the val for >, the population regression coefficient model relative to the amount of data available? of x5. b. Using the R? value from (a), carry out a test of ¢. When x, = 1300 and x2 = 7, the estimated hypotheses to see whether there is a useful standard deviation of Y is Sy = .353. Calculate linear relationship between the dependent var- a 95% confidence interval for true average iable and at least one of the predictors. temperature difference when furnace tempera- c. The estimated coefficient of the indicator vari- ture is 1300 and die close time is 7. able for whether or not a building was class A d. 
Calculate a 95% prediction interval for the was .364. Interpret this estimated coefficient, temperature difference resulting from a single first in terms of y and then in terms of $/f. experimental run with a furnace temperature of d. The ¢ ratio for the estimated coefficient of (c) 1300 and a die close time of 7. was 5.49. What does this tell you? e. Use appropriate diagnostic plots to see if there 82. An investigation of a die casting process resulted is any reason to question the regression model : B i me assumptions. in the accompanying data on x; = furnace tem- perature, x» = die close time, and y = tempera- 83. An experiment carried out to study the effect of ture difference on the die surface (“A Multiple- the mole contents of cobalt (x)) and the calcination --- Trang 714 --- 12.7 Multiple Regression Analysis 701 temperature (x2) on the surface area of an iron— a. Predict the value of surface area when cobalt cobalt hydroxide catalyst (y) resulted in the content is 2.6 and temperature is 250, and accompanying data (“Structural Changes and Sur- calculate the value of the corresponding resid- face Properties of Co,Fe3_,Oy Spinels,” J. Chem. ual. Tech. Biotech., 1994: 161-170). b. Since, = —46.0, is it legitimate to conclude that if cobalt content increases by | unit while x 6 6 6 6 6 1.0 1.0 the values of the other predictors remain fixed, |} 200 250 400 500 600 200 250 surface area oe be expected to decrease by roughly 46 units? Explain your reasoning. y | 90.6 82.7 58.7 43.2 25.0 127.1 112.3 c. Does there appear to be a useful relationship between y and the predictors? x 10 10 10 26 26 26 26 d. Given that mole contents and calcination tem- perature remain in the model, does the interac- | 400 500 600 200 250400 500 tion predictor x3 provide useful information y | 196 178 9.1 53.1 520 434 42.4 about y? State and test the appropriate hypoth- eses using a significance level of .01. x 2.6 2.8 2.8 2.8 28 2.8 e. 
The estimated standard deviation of Y when mole contents is 2.0 and calcination tempera- Bx |_ BOO) 200 _ #2501 400) _500_600 ture is 500 is sy = 4.69. Calculate a 05% von. y | 316 40.9 37.9 27.5 27.3 19.0 fidence interval for the mean value of surface area under these circumstances, A request to the SAS package to fit the regression f. Based on appropriate diagnostic plots, is there function Bo + Bix, + fox2 + B3x3, where x3 = any reason to question the regression model X,X2 (an interaction predictor) yielded the accom- assumptions? panying output. SAS output for Exercise 83 Dependent Variable: SURFAREA Analysis of Variance Source DE Sum of Squares Mean Square F value Prob>F Model 3 15223.52829 5074.50943 18.924 0.0001 Error 16 4290.53971 268.15873 c Total 19 19514.06800 Root MSE 16.37555 R-square 0.7801 Dep Mean 48.06000 Adj R-sq 0.7389 cv. 34.07314 Parameter Estimates Parameter Standard T for HO: Prob variable DE Estimate Error Parameter -0 >IT INTERCEP 1 185.485740 21.19747682 8.750 0.0001 COBCON 1 ~45.969466 10.61201173 4.332 0.0005 TEMP 1 -0.301503 0.05074421 -5.942 0.0001 84. A regression analysis carried out to relate y = cal and 0 if mechanical) yielded the following repair time for a water filtration system (hr) to model based on n= 12 observations: y x, = elapsed time since the previous service = .950 + .400x, + 1.250xo. In addition, SST (months) and x3 = type of repair (1 if electri- = 12.72, SSE = 2.09, and , = 312. --- Trang 715 --- 702 == cuarrer 12 Regression and Correlation a. Does there appear to be a useful linear a. Do plots of e* versus x1, e* versus x2, and relationship between repair time and the e* versus y suggest that the full quadratic two model predictors? Carry out a test of model should be modified? Explain your the appropriate hypotheses using a signifi- answer. cance level of .05. b. The value of R? for the full quadratic b. Given that elapsed time since the last ser- model is .759. 
Test at level .05 the null vice remains in the model, does type of hypothesis stating that there is no linear repair provide useful information about relationship between the dependent vari- repair time? State and test the appropriate able and any of the five predictors. hypotheses using a significance level of ¢. Each of the null hypotheses Ho: f; = 0 Ol. versus Hy: B; # 0,1 = 1, 2,3, 4, 5, is not ¢. Calculate and interpret a 95% CI for >. rejected at the 5% level. Does this make d. The estimated standard deviation of a pre- sense in view of the result in (b)? Explain. diction for repair time when elapsed time d. It is shown in Section 12.8 that is 6 months and the repair is electrical is V(Y)=8=V(¥)+ vy — Y). The esti- .192. Predict repair time under these cir- mate of o is 6 = s = 6.99 (from the full cumstances by calculating a 99% predic- quadratic model). First obtain the esti- tion interval. Does the interval suggest that mated standard deviation of Y —Y, and the estimated model will give an accurate then estimate the standard deviation of Y prediction? Why or why not? (ie, By +Bimi thax +8332 +Bo3 + 85. The article “The Undrained Strength of Some Bsxixq when x, = 8.0 and x, = 33.1. Thawed Permafrost Soils” (Canad. Geotech. Finally, compute a 95% CI for mean J., 1979: 420-427) contains the following strength. [Hint: What is (y — y)/e*?] data on undrained shear strength of sandy e, Sometimes an investigator wishes to soil (y, in kPa), depth (x1, in m), and water decide whether a group of m predictors content (x3, in-%). (m > 1) can simultaneously be eliminated from the model. 
The null hypothesis says # P ‘ that all f’s associated with these m predic- Os y 8 Be Fee tors are 0, which is interpreted to mean that 1 147 89 315 2335 8.65 —1.50 as long as the other k — m predictors are 2 48.0 36.6 27.0 46.38 1.62 54 retained in the model, the m predictors 3 256 368 259 2713 -153 —53 under consideration collectively provide 4 10.0 6.1 39.1 10.99 —.99 —17 no useful information about y. The test is 5 16.0 69 392 14.10 1.90 33 carried out by first fitting the “full” model 6 168 69 383 16.54 26 04 with all k predictors to obtain SSE(full) 7 20.7 73 33.9 23.34 —2.64 _42 and then fitting the “reduced” model con- 8 38.8 84 338 25.43 13.37 2.17 sisting just of the k — m predictors not 9 169 65 279 156 127 23 being considered for deletion to obtain 1027.0 8.0 33.1 24.29 2.71 44 SSE(red). The test statistic is 11 16.0 45 26.3 15.36 64 20 " 12 249 99 37.8 29.61 —4.71 = 91 F= [SSE (ced) — SSE(tul) ae 13-73-29 346 15.38 -8.08 1.53 SSE(full/fin — (k+ 1)] i a The test is upper-tailed and based on m numerator df and n — (k + 1) denominator ‘The predicted values and residuals were com- df. Fitting the first-order model with just the puted by fitting a full quadratic model, which predictors, and x, results in SSH = 894.95: resulted in the estimated regression function State and test at significance level .05 the null hypothesis that none of the three second-order predictors (one interaction and two quadratic y =— 151.36 — 16.22x + 13.48x9 + .094x predictors) provides useful information about = 253x2 + 492x1x9 y provided that the two first-order predictors = are retained in the model. --- Trang 716 --- 12.7 Multiple Regression Analysis 703 86. The following data on y = glucose concen- b. For x; = x3 = 30, x3 = 10, a statistical tration (g/L) and x = fermentation time software package reported that (days) for a particular blend of malt liquor § = 66573, sy = .01785 based on the was read from a scatter plot in the article complete second-order model. 
"Improving Fermentation Productivity with Reverse Osmosis" (Food Tech., 1984: 92–96):

    x    1    2    3    4    5    6    7    8
    y   74   54   52   51   52   53   58   71

a. Verify that a scatter plot of the data is consistent with the choice of a quadratic regression model.
b. The estimated quadratic regression equation is ŷ = 84.482 − 15.875x + 1.7679x². Predict the value of glucose concentration for a fermentation time of 6 days, and compute the corresponding residual.
c. Using SSE = 61.77, what proportion of observed variation can be attributed to the quadratic regression relationship?
d. The n = 8 standardized residuals based on the quadratic model are 1.91, −1.95, −.25, .58, .90, .04, −.66, and .20. Construct a plot of the standardized residuals versus x and a normal probability plot. Do the plots exhibit any troublesome features?
e. The estimated standard deviation of μ̂_Y·6, that is, of β̂0 + β̂1(6) + β̂2(36), is 1.69. Compute a 95% CI for μ_Y·6.
f. Compute a 95% PI for a glucose concentration observation made after 6 days of fermentation time.

87. Utilization of sucrose as a carbon source for the production of chemicals is uneconomical. Beet molasses is a readily available and low-priced substitute. The article "Optimization of the Production of β-Carotene from Molasses by Blakeslea trispora" (J. Chem. Tech. Biotech., 2002: 933–943) carried out a multiple regression analysis to relate the dependent variable y = amount of β-carotene (g/dm³) to the three predictors: amount of linoleic acid, amount of kerosene, and amount of antioxidant (all g/dm³).

    Obs   Linoleic   Kerosene   Antiox   Betacaro
      1      30.00      30.00    10.00     0.7000
      2      30.00      30.00    10.00     0.6300
      3      30.00      30.00    18.41     0.0130
      4      40.00      40.00     5.00     0.0490
      5      30.00      30.00    10.00     0.7000
      6      13.18      30.00    10.00     0.1000
      7      20.00      40.00     5.00     0.0400
      8          –          –        –          –
      9      40.00      20.00     5.00     0.2020
     10      30.00      30.00    10.00     0.6300
     11      30.00      30.00     1.59     0.0400
     12      40.00      20.00    15.00     0.1320
     13      40.00      40.00    15.00     0.1500
     14      30.00      30.00    10.00     0.7000
     15      30.00      46.82    10.00     0.3460
     16      30.00      30.00    10.00     0.6300
     17      30.00      13.18    10.00     0.3970
     18      20.00      20.00     5.00     0.2690
     19      20.00      20.00    15.00     0.0054
     20      46.82      30.00    10.00     0.0640

a. Fitting the complete second-order model in the three predictors resulted in R² = .987 and adjusted R² = .974, whereas fitting the first-order model gave R² = .016. What would you conclude about the two models?
Predict the amount of β-carotene that would result from a single experimental run with the designated values of the independent variables, and do so in a way that conveys information about precision and reliability.

88. Polycyclic aromatic hydrocarbons represent environmental hazards. The article "Atmospheric PAH Deposition: Deposition Velocities and Washout Ratios" (J. Environ. Engrg., 2002: 186–195) focused on the deposition of polyaromatic hydrocarbons. The authors proposed a multiple regression model for relating deposition over a specified time period (y, in μg/m²) to two rather complicated predictors x1 (μg·s/m³) and x2 (μg/m²) defined in terms of PAH air concentrations for various species, total time, and total amount of precipitation. Here is data on the species fluoranthene and corresponding MINITAB output:

    Obs      x1         x2      flthdep
      1   92017   .0026900      278.78
      2   51830   .0030000      124.53
      3   17236   .0000196       22.65
      ⋮       ⋮          ⋮           ⋮
          13048   .0004850       20.64
     15   49793   .0000896       58.97
      ⋮       ⋮          ⋮           ⋮

CHAPTER 12 Regression and Correlation

    The regression equation is
    flthdep = −33.5 + 0.00205 x1 + 29836 x2

    Predictor       Coef        SE Coef       T        P
    Constant      −33.46          14.90     −2.25    0.041
    x1          0.0020548      0.0002945    6.98     0.000
    x2             29836              –      2.19    0.046

    S = 44.28    R-Sq = 92.3%    R-Sq(adj) = 91.2%

    Analysis of Variance
    Source            DF        SS        MS        F        P
    Regression         2     330989    165495     84.39    0.000
    Residual Error    14      27482      1963
    Total             16     358471

Formulate questions and perform appropriate analyses. Construct the appropriate residual plots, including plots against the predictors. Based on these plots, justify adding a quadratic term, and fit the model with this additional term. Is this term statistically significant, and does it help the appearance of the diagnostic plots?

89. The following data set has ratings from ratebeer.com along with values of IBU (international bittering units, a measure of bitterness) and ABV (alcohol by volume) for 25 beers.

    Beer                                       IBU     ABV    Rating
    Amstel Light                                 –       –         –
    Anchor Liberty Ale                           –       –         –
    Anchor Steam                                33     4.9      3.31
    Bud Light                                    –       –         –
    Budweiser                                    –       –      1.38
    Coors                                        –       –      1.63
    DAB Dark                                    32       5      2.73
    Dogfish 60 Minute IPA                       60       6      3.76
    Great Divide Titan IPA                      65     6.8      3.81
    Great Divide Hercules Double IPA            85     9.1      4.05
    Guinness Extra Stout                        60       5      3.38
    Harp Lager                                   –     4.3      2.85
    Heineken                                     –       5      2.43
    Heineken Premium Light                       –       –      1.62
    Michelob Ultra                               –     4.2      1.01
    Newcastle Brown Ale                         18     4.7      3.05
    Pilsner Urquell                             35     4.4      3.28
    Redhook ESB                                 29    5.77      3.06
    Rogue Imperial Stout                        88    11.6      3.98
    Samuel Adams Boston Lager                   31     4.9      3.19
    Shiner Light                                13    4.03      2.57
    Sierra Nevada Pale Ale                      37     5.6      3.61
    Sierra Nevada Porter                        40     5.6      3.60
    Terrapin All-American Imperial Pilsner      75     7.5      3.46
    Three Floyds Alpha King                      –       –      4.04

a. Notice which beers have the lowest ratings and which are highest. Find the correlations (and corresponding P-values) among Rating, IBU, and ABV.
b. Regress Rating on IBU and ABV. Notice that although both predictors have strongly significant correlations with Rating, they do not both have significant regression coefficients. How do you explain this?
c. Plot the residuals from the regression of (b) to check the assumptions. Also plot rating against each of the two predictors. Which of the assumptions is clearly not satisfied?
d. Repeat the multiple regression in (b) with the square of IBU as a third predictor. (Again check assumptions.)
e. How effective is the regression in (d)? Interpret the coefficients with regard to statistical significance and sign. In particular, discuss the relationship to IBU.
f. Summarize your conclusions.

90. The article "Promoting Healthy Choices: Information versus Convenience" (Amer. Econ. J.: Applied Econ., 2010: 164–178) reported on a field experiment at a fast-food sandwich chain to see whether calorie information provided to patrons would affect calorie intake. One aspect of the study involved fitting a multiple regression model with 7 predictors to data consisting of 342 observations. Predictors in the model included age and dummy variables for gender, whether or not a daily calorie recommendation was provided, and whether or not calorie information about choices was provided. The reported value of the F ratio for testing model utility was 3.64.
a. At significance level .01, does the model appear to specify a useful linear relationship between calorie intake and at least one of the predictors?
b. What can be said about the P-value for the model utility F test?
c. What proportion of the observed variation in calorie intake can be attributed to the model relationship? Does this seem very impressive? Why is the P-value as small as it is?
d. The estimated coefficient for the indicator variable "calorie information provided" was −71.73, with an estimated standard error of 25.29. Interpret the coefficient. After adjusting for the effects of other predictors, does it appear that true average calorie intake depends on whether or not calorie information is provided? Carry out a test of appropriate hypotheses.

12.8 Regression with Matrices

In Section 12.7 we used an additive model equation to relate a dependent variable y to independent variables x1, ..., xk.
That is, we used the model

    Y = β0 + β1x1 + β2x2 + ··· + βkxk + ε

where ε is a random deviation or error term that is normally distributed with mean 0 and variance σ², and the various ε's are independent of one another. Simple linear regression is the special case in which k = 1.

The Normal Equations

Suppose that we have n observations, each consisting of a y value and values of the k predictors (so each observation consists of k + 1 numbers). We have then

    y1 = β0 + β1x11 + β2x12 + ··· + βkx1k + ε1
     ⋮
    yn = β0 + β1xn1 + β2xn2 + ··· + βkxnk + εn

For example, if there are n = 6 cars, where y is horsepower, x1 is engine size (liters), and x2 indicates fuel type (regular or premium), then we are trying to predict horsepower as a linear function of the k = 2 predictors engine size and fuel type.

The equations can be written much more compactly using vectors and matrices. To do this, form a column vector of observations on y, a column vector of regression coefficients, and a column vector of random deviations:

    y = [y1 ⋯ yn]′,   β = [β0 β1 ⋯ βk]′,   ε = [ε1 ⋯ εn]′

Also form an n × (k + 1) matrix X in which the first column consists of 1's (corresponding to the constant term in the model), the second column consists of the values of the first predictor x1 (i.e., of x11, x21, ..., xn1), the third column has the values of x2, and so on:

         1  x11  ⋯  x1k
    X =  ⋮   ⋮        ⋮
         1  xn1  ⋯  xnk

The X matrix has a row for each observation, consisting of 1 and then the values of the k predictors. The equations relating the observed y's to the xij's can then be written very concisely as

    y = Xβ + ε

We now estimate β0, β1, β2, ..., βk using the principle of least squares: find b0, b1, ..., bk to minimize

    Σi=1..n [yi − (b0 + b1xi1 + b2xi2 + ··· + bkxik)]² = (y − Xb)′(y − Xb) = ‖y − Xb‖²

where b is the column vector with entries b0, b1, ..., bk, and ‖u‖ is the length of u.
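The minimization can be sketched numerically. The data here are hypothetical (three observations, one predictor), chosen only to illustrate that the vector returned by a standard linear algebra routine minimizes the criterion ‖y − Xb‖²:

```python
import numpy as np

# Hypothetical data (not from the book): three observations, one predictor,
# so b = (b0, b1).
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])      # first column of 1's, second column is the predictor
y = np.array([2.0, 3.0, 5.0])

def sse(b):
    """The least squares criterion ||y - Xb||^2."""
    r = y - X @ b
    return r @ r

# np.linalg.lstsq returns the b minimizing ||y - Xb||^2; here b_hat = (1/3, 3/2)
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Perturbing b_hat in any direction can only increase the criterion:
assert sse(b_hat) <= sse(b_hat + np.array([0.1, -0.05]))
```

For this toy data the minimizing intercept and slope are 1/3 and 3/2, which can be checked against the simple linear regression formulas of Section 12.2.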
If we equate to zero the partial derivative with respect to each of the coefficients, we obtain the normal equations:

    b0·n     + b1 Σxi1     + ··· + bk Σxik     = Σyi
    b0 Σxi1  + b1 Σxi1xi1  + ··· + bk Σxikxi1  = Σxi1yi
     ⋮
    b0 Σxik  + b1 Σxi1xik  + ··· + bk Σxikxik  = Σxikyi

where each sum runs over i = 1, ..., n. In matrix form this is

     n       Σxi1      ⋯   Σxik        b0       Σyi
     Σxi1    Σxi1²     ⋯   Σxi1xik     b1       Σxi1yi
     ⋮                                  ⋮    =    ⋮
     Σxik    Σxikxi1   ⋯   Σxik²       bk       Σxikyi

The matrix on the left is just X′X and the vector on the right is X′y, so the normal equations become X′Xb = X′y. We will assume throughout this section that X′X has an inverse, so the vector of estimated coefficients is

    β̂ = b = [X′X]⁻¹X′y

Example 12.32  Based on six cars, we try to predict horsepower (hp) using engine size (liters) and fuel type. Here is the data set:

    Make          hp    Eng Size   Fuel
    Ford         132       2.0     Regular
    Mazda        167       2.0     Premium
    Subaru       170       2.5     Regular
    Lexus        204       2.5     Premium
    Mitsubishi   230       3.0     Regular
    BMW          260       3.0     Premium

The hp column will be used for y, and engine size values are placed in the second column of X, but numbers must be used instead of words in the third column. We use 0 for "regular" and 1 for "premium." Any two numbers could be used instead of 0 and 1, but this choice is convenient in terms of the interpretation of the coefficients. This gives

         1  2.0  0            132
         1  2.0  1            167
    X =  1  2.5  0       y =  170
         1  2.5  1            204
         1  3.0  0            230
         1  3.0  1            260

            6   15    3              1163
    X′X =  15   38.5  7.5     X′y =  3003
            3    7.5  3               631

Therefore,

                          79/12  −5/2  −1/3     1163     −61.417
    β̂ = [X′X]⁻¹X′y =      −5/2    1     0    ·  3003  =    95.5
                          −1/3    0    2/3       631       33

The coefficient 95.5 for engine size means that, if the fuel type is held constant, then we estimate that horsepower will increase on average by 95.5 when the engine size increases by one liter.
Similarly, the coefficient 33 for fuel means that, if the engine size is held constant, then we estimate that horsepower will increase on average by 33 when the fuel type increases by 1. However, increasing fuel type by 1 unit means switching from regular fuel to premium fuel, so the difference in horsepower corresponding to the difference in fuels is 33. Notice that this is the difference between the average for the three premium-fuel cars and the average for the three regular-fuel cars.

Residuals, ANOVA, F, and R-Squared

The estimated regression coefficients can be used to obtain the predicted values. Recall that ŷi = β̂0 + β̂1xi1 + β̂2xi2 + ··· + β̂kxik. The expression for ŷi is the product of the ith row of X and the β̂ vector. The vector of predicted values is then

    ŷ = Xβ̂ = X[X′X]⁻¹X′y

Because ŷ is the product of H = X[X′X]⁻¹X′ and y, the matrix H is called the hat matrix. A residual is yi − ŷi, so the vector of n residuals is

    y − ŷ = y − Hy = (I − H)y

The error sum of squares SSE is the sum of the n squared residuals,

    SSE = (y − ŷ)′(y − ŷ) = ‖y − ŷ‖²

An unbiased estimator of σ² is MSE = S² = SSE/[n − (k + 1)]. Notice that the estimated variance is the average [with n − (k + 1) in place of n] squared residual. The divisor n − (k + 1) is used because SSE is proportional to a chi-squared rv with n − (k + 1) degrees of freedom under the assumptions given at the beginning of this section, including the assumption that X′X be invertible.

We can rewrite the normal equations in the form

    0 = X′y − X′Xβ̂ = X′(y − Xβ̂) = X′(y − ŷ)    (12.19)

Because the transpose of X times the residual vector is zero, each of the columns of X, including the column of 1's, is perpendicular to the residual vector y − ŷ. In particular, because the dot product of the column of 1's with the residual vector is zero, the sum of the residuals is zero.
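These computations can be sketched in a few lines of code (a NumPy sketch, not from the book, using the six-car data of Example 12.32; the final check is Equation (12.19)):

```python
import numpy as np

# Six-car example: hp regressed on engine size and a fuel-type dummy.
y = np.array([132.0, 167, 170, 204, 230, 260])
X = np.column_stack([np.ones(6),
                     [2.0, 2.0, 2.5, 2.5, 3.0, 3.0],   # engine size (liters)
                     [0.0, 1, 0, 1, 0, 1]])            # fuel type: 0 regular, 1 premium

# Solve the normal equations X'Xb = X'y; gives (-61.417, 95.5, 33), matching the text.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

resid = y - X @ beta_hat
# Every column of X, including the column of 1's, is orthogonal to the residual
# vector, so in particular the residuals sum to zero.
assert np.allclose(X.T @ resid, 0)
```

Solving the normal equations directly (rather than forming the inverse) is the standard numerical practice; the result is the same β̂ as [X′X]⁻¹X′y.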
There are k + 1 columns of X, and the dot product of each column with the residual vector is zero, so there are k + 1 conditions satisfied by the residual vector. This helps to explain intuitively why there are only n − (k + 1) degrees of freedom for SSE.

Letting ȳ be the vector with n identical components ȳ, the total sum of squares SST is the sum of the squared deviations from ȳ, SST = ‖y − ȳ‖². Similarly, the regression sum of squares SSR is defined to be the sum of the squared deviations of the predicted values from ȳ, SSR = ‖ŷ − ȳ‖². As before, the ANOVA relationship is

    SST = SSE + SSR    (12.20)

This can be obtained by subtracting and adding ŷ:

    SST = ‖y − ȳ‖² = [(y − ŷ) + (ŷ − ȳ)]′[(y − ŷ) + (ŷ − ȳ)] = ‖y − ŷ‖² + ‖ŷ − ȳ‖² = SSE + SSR

The cross-terms in the matrix product are zero because of Equation (12.19) (see Exercise 102).

Recall that the null hypothesis in the model utility test is H0: β1 = ··· = βk = 0, in which case the model consists of just β0. That is, under H0 the observations all have the same mean μ = β0. For a normal random sample with mean μ and standard deviation σ, a proposition in Section 6.4 shows that SST/σ² has the chi-squared distribution with n − 1 df. Dividing Equation (12.20) by σ² gives

    SST/σ² = SSE/σ² + SSR/σ²

It can be shown that SSE and SSR are independent of each other. We know that SST/σ² ~ χ²_{n−1} under the null hypothesis and SSE/σ² ~ χ²_{n−(k+1)}. Then, by a proposition in Section 6.4, SSR/σ² is distributed as chi-squared with degrees of freedom [n − 1] − [n − (k + 1)] = k. Recall from Section 6.4 that the F distribution is the ratio of two independent chi-squared variables that have been divided by their degrees of freedom. Applying this to SSR/σ² and SSE/σ² leads to the F ratio

    F = [ (SSR/σ²)/k ] / [ (SSE/σ²)/(n − (k + 1)) ] = (SSR/k) / ( SSE/[n − (k + 1)] ) = MSR/MSE ~ F_{k, n−(k+1)}    (12.21)

Here MSR = SSR/k and MSE was previously defined as SSE/[n − (k + 1)].

The F ratio MSR/MSE is a standard part of regression output for statistical computer packages. It tests the null hypothesis H0: β1 = ··· = βk = 0, the hypothesis of a constant mean model. This is the model utility test, and it tests the hypothesis that the explanatory variables are useless for predicting y. Rejection of H0 occurs for large values of the F ratio. This should be intuitively reasonable, because if the prediction quality is good, then SSE should be small and SSR should be large, and therefore the F ratio should be large. The dividing line between large and small is set using the upper tail of the F distribution. In particular, H0 is typically rejected if the F ratio exceeds F.05,k,n−(k+1).

Another measure of the relationship between y and the predictors is the R² statistic, the coefficient of multiple determination, which is the fraction SSR/SST:

    R² = SSR/SST = (SST − SSE)/SST = 1 − SSE/SST    (12.22)

By the analysis of variance, Equation (12.20), this is always between 0 and 1. The R² statistic is also called the squared multiple correlation. For example, suppose SST = 200 and SSR = 120, and therefore SSE = 80. Then R² = 1 − SSE/SST = 1 − 80/200 = .60, so the error sum of squares is 60% less than the total sum of squares. This is sometimes interpreted by saying that the regression explains 60% of the variability of y, which means that the regression has reduced the error sum of squares by 60% from what it would be (SST) with just a constant model and no predictors.

The F ratio and R² are equivalent statistics in the sense that one can be obtained from the other.
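A quick numeric illustration of this equivalence, using the SST = 200, SSR = 120 figures above together with hypothetical values k = 2 and n = 20 (the text does not fix k and n for that example):

```python
# Sketch of the F-R^2 equivalence. SST and SSR are from the text's example;
# k = 2 predictors and n = 20 observations are assumed values for illustration.
SST, SSR = 200.0, 120.0
SSE = SST - SSR
k, n = 2, 20

R2 = SSR / SST                                      # 0.60
F_from_sums = (SSR / k) / (SSE / (n - (k + 1)))     # MSR/MSE, Equation (12.21)
F_from_R2 = (R2 / k) / ((1 - R2) / (n - (k + 1)))   # the same F written via R^2
assert abs(F_from_sums - F_from_R2) < 1e-12         # both equal 12.75 here
```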
For example, dividing numerator and denominator through by SST in Equation (12.21) and using Equation (12.22), we find that the F ratio is [see Equation (12.18)]

    F = (R²/k) / { (1 − R²)/[n − (k + 1)] }

In the special case of just one predictor, k = 1, F = (n − 2)R²/(1 − R²), and the multiple correlation is just the absolute value of the ordinary correlation coefficient. This F is the square of the statistic T = √(n − 2)·R/√(1 − R²) given in Section 12.5.

(Example 12.32 continued)  The predicted values and residuals are easily obtained:

                  1  2.0  0                   129.583
                  1  2.0  1     −61.417       162.583
    ŷ = Xβ̂ =     1  2.5  0  ·    95.5    =   177.333
                  1  2.5  1       33          210.333
                  1  3.0  0                   225.083
                  1  3.0  1                   258.083

               132     129.583       2.417
               167     162.583       4.417
    y − ŷ =   170  −  177.333  =   −7.333
               204     210.333      −6.333
               230     225.083       4.917
               260     258.083       1.917

Therefore, the error sum of squares is SSE = ‖y − ŷ‖² = 2.417² + ··· + 1.917² = 147.083 and MSE = s² = SSE/[n − (k + 1)] = 147.083/[6 − (2 + 1)] = 49.028. The square root of this yields the estimated standard deviation s = 7.002, which is a form of average for the magnitude of the residuals. However, notice that only one of the six residuals exceeds s in magnitude.

The total sum of squares is SST = ‖y − ȳ‖² = Σ(yi − 193.83)² = 10,900.83. The regression sum of squares can be obtained by subtraction using the analysis of variance, SSR = SST − SSE = 10,900.83 − 147.083 = 10,753.75. The sums of squares and the computation of the F test and R² are often done through an analysis of variance table, as copied in Figure 12.36 from SAS output.

    Analysis of Variance

    Source             DF    Sum of Squares    Mean Square    F Value    Pr > F
    Model               2      10753.75        5376.87500      109.67    0.0016
    Error               3        147.08333       49.02778
    Corrected Total     5      10900.83

Figure 12.36 Analysis of variance table from SAS

The regression sum of squares is called the model sum of squares here. The mean square is the sum of squares divided by the degrees of freedom, and the F value is the ratio of mean squares.
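The quantities in Figure 12.36 can be reproduced directly from the matrix formulas; a NumPy sketch (not from the book) for the six-car data:

```python
import numpy as np

# Reproduce the ANOVA quantities of Figure 12.36 for the six-car example.
y = np.array([132.0, 167, 170, 204, 230, 260])
X = np.column_stack([np.ones(6), [2.0, 2, 2.5, 2.5, 3, 3], [0.0, 1, 0, 1, 0, 1]])
n, p = X.shape                        # n = 6 observations, p = k + 1 = 3 columns

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat
SSE = np.sum((y - y_hat) ** 2)        # 147.083
SST = np.sum((y - y.mean()) ** 2)     # 10,900.83
SSR = SST - SSE                       # 10,753.75
MSE = SSE / (n - p)                   # 49.028
F = (SSR / (p - 1)) / MSE             # 109.67
R2 = SSR / SST                        # .9865
```

The computed F of about 109.67 and R² of about .9865 match the SAS table and the discussion that follows.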
Because the P-value is less than .05, we reject the null hypothesis (that both the engine size and fuel population coefficients are 0) at the .05 level. The coefficient of multiple determination is R² = SSR/SST = 10,753.75/10,900.83 = .9865. We say that the two predictors account for 98.65% of the variance of horsepower because the error sum of squares is reduced by 98.65% compared to the total sum of squares.

Covariance Matrices

In order to develop hypothesis tests and confidence intervals for the regression coefficients, the standard deviations of the estimated coefficients are needed. These can be obtained from a certain covariance matrix, a matrix with the variances on the diagonal and the covariances in the off-diagonal elements. If U is a column vector of random variables U1, ..., Un with means μ1 = E(U1), ..., μn = E(Un), let μ be the vector of these n means and define

               Cov(U1, U1)   ⋯   Cov(U1, Un)
    Cov(U) =        ⋮                 ⋮
               Cov(Un, U1)   ⋯   Cov(Un, Un)

               E[(U1 − μ1)(U1 − μ1)]   ⋯   E[(U1 − μ1)(Un − μn)]
            =           ⋮                           ⋮
               E[(Un − μn)(U1 − μ1)]   ⋯   E[(Un − μn)(Un − μn)]

            = E{(U − μ)(U − μ)′}    (12.23)

When n = 1 this reduces to just the ordinary variance. The key to finding the needed covariance matrix is this proposition:

PROPOSITION  If A is a matrix with constant entries and V = AU, then Cov(V) = A Cov(U) A′.

Proof  By the linearity of the expectation operator, E(V) = E(AU) = AE(U). Then

    Cov(V) = E{[AU − E(AU)][AU − E(AU)]′}
           = E{A[U − E(U)][U − E(U)]′A′}
           = A E{[U − E(U)][U − E(U)]′} A′ = A Cov(U) A′

Let's apply the proposition to find the covariance matrix of β̂. Because β̂ = [X′X]⁻¹X′Y, we use A = [X′X]⁻¹X′ and U = Y. The transpose of A is A′ = {[X′X]⁻¹X′}′ = X[X′X]⁻¹. The covariance matrix of Y is just the variance σ² times the n-dimensional identity matrix, that is, σ²I, because the observations are independent and all have the same variance σ². Then the proposition says

    Cov(β̂) = A Cov(Y) A′ = [X′X]⁻¹X′[σ²I]X[X′X]⁻¹ = σ²[X′X]⁻¹    (12.24)

We also need to find the expected value of β̂:

    E(β̂) = E([X′X]⁻¹X′Y) = [X′X]⁻¹X′E(Y) = [X′X]⁻¹X′E(Xβ + ε) = [X′X]⁻¹X′Xβ = β

That is, β̂ is an unbiased estimator of β (for each j, β̂j is unbiased for estimating βj).

Write the inverse matrix as [X′X]⁻¹ = C = {cij}. In particular, let c00, c11, ..., ckk be the diagonal elements of this inverse matrix. Then V(β̂j) = σ²cjj. Also, β̂j is a linear combination of Y1, ..., Yn, which are independent normal, so

    (β̂j − βj)/(σ√cjj) ~ N(0, 1)

It follows that (this requires the independence of S and the estimated regression coefficients, which we will not prove)

    (β̂j − βj)/(S√cjj) ~ t_{n−(k+1)}

This leads to the confidence interval and hypothesis test for coefficients of Section 12.7. The 95% confidence interval for βj is

    β̂j ± t.025,n−(k+1) · s√cjj    (12.25)

We can test the hypothesis H0: βj = βj0 using the t ratio

    T = (β̂j − βj0)/(S√cjj) ~ t_{n−(k+1)}

Statistical software packages usually provide output for testing H0: βj = 0 against the two-sided alternative Ha: βj ≠ 0. In particular, we would reject H0 in favor of Ha at the 5% level if |t| exceeds t.025,n−(k+1). Usually, with computer output there is no need to use statistical tables for hypothesis tests because P-values for these tests are included.

(Example 12.33 continued)  For the engine horsepower scenario we found that s = 7.002, β̂0 = −61.417, β̂1 = 95.5, β̂2 = 33, and [X′X]⁻¹ has diagonal elements c00 = 79/12, c11 = 1, c22 = 2/3. Therefore, we get these 95% confidence intervals:

    β̂1 ± t.025,6−(2+1) · s√c11 = 95.5 ± 3.182(7.002)√1 = 95.5 ± 22.28 = (73.22, 117.78)
    β̂2 ± t.025,6−(2+1) · s√c22 = 33 ± 3.182(7.002)√(2/3) = 33 ± 18.19 = (14.81, 51.19)

We can also do the individual t tests for the coefficients:

    t = (β̂1 − 0)/(s√c11) = 95.5/[7.002(1)] = 13.64,   two-tailed P-value = .0009
    t = (β̂2 − 0)/(s√c22) = 33/[7.002√(2/3)] = 5.77,   two-tailed P-value = .0103

Both of these exceed t.025,6−(2+1) = 3.182 in absolute value (and their P-values are less than .05), so for both of them we reject at the 5% level the null hypothesis that the coefficient is 0, in favor of the two-sided alternative. These conclusions are consistent with the fact that the corresponding confidence intervals do not include zero. Also, recall that the F test rejected at the 5% level the null hypothesis that both coefficients are zero. As our intuition suggests, horsepower increases with engine size and horsepower is higher when the engine requires premium fuel.

The Hat Matrix

The foregoing proposition can be used to find estimated standard deviations for predicted values and residuals. Recall that the vector of predicted values can be obtained by multiplying the hat matrix H times the Y vector, Ŷ = HY. First, in order to apply the proposition, let's obtain the transpose of H. With the help of the rules (AB)′ = B′A′ and (A⁻¹)′ = (A′)⁻¹, we find that H is symmetric, H′ = H:

    H′ = {X[X′X]⁻¹X′}′ = X([X′X]⁻¹)′X′ = X([X′X]′)⁻¹X′ = X[X′X]⁻¹X′ = H

Therefore,

    Cov(Ŷ) = H Cov(Y) H′ = X[X′X]⁻¹X′[σ²I]X[X′X]⁻¹X′ = σ²X[X′X]⁻¹X′ = σ²H    (12.26)

A similar calculation shows that the covariance matrix of the residuals is

    Cov(Y − Ŷ) = σ²(I − H)    (12.27)

Of course, the true variance σ² is generally unknown, so the estimate s² = MSE is used instead.

(Example 12.34 continued)  Continue again with the horsepower example. If residuals and predicted values are requested from SAS, then the output includes the information in Figure 12.37.

          Dep Var    Predicted    Std Error                  Std Error    Student
    Obs              Value        Mean Predict    Residual   Residual     Residual
     1   132.0000    129.5833       5.3479         2.4167      4.520        0.535
     2   167.0000    162.5833       5.3479         4.4167      4.520        0.977
     3   170.0000    177.3333       4.0426        −7.3333      5.717       −1.283
     4   204.0000    210.3333       4.0426        −6.3333      5.717       −1.108
     5   230.0000    225.0833       5.3479         4.9167      4.520        1.088
     6   260.0000    258.0833       5.3479         1.9167      4.520        0.424

Figure 12.37 Predicted values and residuals from SAS

The column labeled "Std Error Mean Predict" has the estimated standard deviations for the predicted values, and it contains the square roots of the s²H matrix diagonal elements. The column labeled "Std Error Residual" has the estimated standard deviations for the residuals, and it contains the square roots of the diagonal elements of s²(I − H). The column labeled "Student Residual" is what we defined as the standardized residual in Section 12.6. It is the ratio of the previous two columns.

The hat matrix is also important as a measure of the influence of individual observations. Because ŷ = Hy, ŷi = hi1y1 + hi2y2 + ··· + hinyn, and therefore ∂ŷi/∂yi = hii. That is, the partial derivative of ŷi with respect to yi is the ith diagonal element of the hat matrix. In other words, the ith diagonal element of H measures the influence of the ith observation on its predicted value. The diagonal elements of H are sometimes called the leverages to indicate their influence over the regression. An observation with very high leverage will tend to pull the regression toward it, and its residual will tend to be small. Of course, H depends only on the values of the predictors, so the leverage measures only one aspect of influence. If the influence of an observation is defined in terms of the effect on the predicted values when the observation is omitted, then an influential observation is one that has both large leverage and a large (in absolute value) residual.

Students in a statistics class measured their height, foot length, and wingspan (measured fingertip to fingertip with hands outstretched) in inches. Leonardo da Vinci was aware that the wingspan tends to be very nearly the same as height. Here in Table 12.3 are the measurements for 16 students. The last column has the leverages for the regression of wingspan on height and foot length.

Table 12.3 Height, foot length, and wingspan

    Obs   Height   Foot   Wingspan   Leverage
      1    63.0     9.0     62.0     0.239860
      2    63.0     9.0     62.0     0.239860
      3    65.0     9.0     64.0     0.228236
      4    64.0     9.5     64.5     0.223625
      5    68.0     9.5     67.0     0.196418
      6    69.0    10.0     69.0     0.083676
      7    71.0    10.0     70.0     0.262182
      8    68.0    10.0     72.0     0.067207
      9    68.0    10.5     70.0     0.187088
     10    72.0    10.5     72.0     0.151959
     11    73.0    11.0     73.0     0.143279
     12    73.5    11.0     75.0     0.168719
     13    70.0    11.0     71.0     0.245380
     14    70.0    11.0     70.0     0.245380
     15    72.0    11.0     76.0     0.128790
     16    74.0    11.2     76.5     0.188340

In Figure 12.38 we show the plot of height against foot length, along with the leverage for each point. Notice that the points at the extreme right and left of the plot have high leverage, and the points near the center have low leverage. However, it is interesting that the point with highest leverage is not at the extremes of height or foot length. This is student number 7, with a 10-in. foot and height of 71 in., and the high leverage comes from the height being extreme relative to foot length. Indeed, when there are several predictors, high leverage often occurs when values of one predictor are extreme relative to the values of other predictors. For example, if height and weight are predictors, then an overweight or underweight subject would likely have high leverage.

Figure 12.38 Plot of height and foot length showing leverage

In Figure 12.39 there is some useful output from MINITAB, including the model utility test, the regression coefficients, and the correlations among the variables. The correlation table shows all three correlations among the three variables along with their P-values. Clearly, the three variables are very strongly related.
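The leverage computation can be sketched with hypothetical one-predictor data (not the wingspan data). For k = 1 the hat-matrix diagonal reduces to the familiar form hii = 1/n + (xi − x̄)²/Sxx, so an x far from x̄ gets high leverage:

```python
import numpy as np

# Hypothetical data for illustration: the value 10 is an extreme point.
x = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
X = np.column_stack([np.ones_like(x), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
lev = np.diag(H)                             # leverages

# For simple linear regression the leverages reduce to a closed form:
Sxx = np.sum((x - x.mean()) ** 2)
assert np.allclose(lev, 1 / len(x) + (x - x.mean()) ** 2 / Sxx)

assert lev.argmax() == 4                     # the extreme x has the largest leverage
assert abs(lev.sum() - 2) < 1e-9             # trace(H) = k + 1
```

Here the extreme point has leverage .92 while the central points are near .2, illustrating why observations with predictor values far from the rest pull the fitted line toward themselves.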
However, when wingspan is regressed on height and foot length, the P-value for foot length is greater than .05, so we can consider eliminating foot length from the regression equation. Does it make sense for foot length to be very strongly related to wingspan, as measured by correlation, but for the foot length term to be not statistically significant in the regression equation? The difference is that the regression test is asking whether foot length is needed in addition to height. Because the two predictors are themselves highly correlated, foot length is redundant in the sense that it offers little prediction ability beyond what is contributed by height.

    Analysis of Variance
    Source            DF       SS        MS        F        P
    Regression         2    294.79    147.40    67.33    0.000
    Residual Error    13     28.46      2.19
    Total             15    323.25

    Predictor      Coef    SE Coef       T        P
    Constant     −6.085      8.018    −0.76    0.461
    height       0.8060     0.2305     3.50    0.004
    foot          1.973      1.044     1.89    0.081

    S = 1.47956   R-Sq = 91.2%   R-Sq(adj) = 89.8%

    Correlations: height, foot, wingspan

               height    foot
    foot        0.892
                0.000
    wingspan    0.942   0.911
                0.000   0.000

Figure 12.39 Regression output for height, foot length, and wingspan

Exercises  Section 12.8 (91–104)

91. Fit the model Y = β0 + β1x1 + β2x2 + ε to the data

    x1   x2    y
     …    …    …

a. Determine X and y and express the normal equations in terms of matrices.
b. Determine the β̂ vector, which contains the estimates for the three coefficients in the model.
c. Determine ŷ, the predictions for the four observations, and also the four residuals. Find SSE by summing the four squared residuals. Use this to get the estimated variance MSE.
d. Use the MSE and c11 to get a 95% confidence interval for β1.
e. Carry out a t test for the hypothesis H0: β1 = 0 against a two-tailed alternative, and interpret the result.
f. Form the analysis of variance table and carry out the F test for the hypothesis H0: β1 = β2 = 0. Find R² and interpret.

92. Consider the model Y = β0 + β1x1 + ε for the data

     x    y
    −5    1
    −5    8
    −5    2
     5    3
     5    8
     5    9
     5    7
     5    8

a. Determine the X and y matrices and express the normal equations in terms of matrices.
b. Determine the β̂ vector, which contains the estimates for the two coefficients in the model.
c. Determine ŷ, the predictions for the eight observations, and also obtain the eight residuals.
d. Find SSE by summing the eight squared residuals. Use this to get the estimated variance MSE.
e. Use the MSE and c11 to get a 95% confidence interval for β1.
f. Carry out a t test for the hypothesis H0: β1 = 0 against a two-tailed alternative.
g. Carry out the F test for the hypothesis H0: β1 = 0. How is this related to part (f)?

93. Suppose that the model consists of just Y = β0 + ε, so k = 0. Estimate β0 from [X′X]⁻¹X′y. Find simple expressions for s and c00, and use them along with Equation (12.25) to express explicitly the 95% confidence interval for β0. Your result should be equivalent to the one-sample t confidence interval in Section 8.3.

94. Suppose we have (x1, y1), ..., (xn, yn). Let k = 1 and let xi1 = xi − x̄, i = 1, ..., n, so our model is yi = β0 + β1(xi − x̄) + εi, i = 1, ..., n.
a. Obtain β̂0 and β̂1 from [X′X]⁻¹X′y.
b. Find c00 and c11, and use them to simplify the confidence intervals [Equation (12.25)] for β0 and β1.
c. In terms of computing [X′X]⁻¹, why is it better to have xi1 = xi − x̄ rather than xi1 = xi?

95. Suppose that we have Y1, ..., Ym ~ N(μ1, σ²), Ym+1, ..., Ym+n ~ N(μ2, σ²), and all m + n observations are independent. These are the assumptions of the pooled t procedure in Section 10.2. Let k = 1 and x11 = ··· = xm1 = .5, x(m+1)1 = ··· = x(m+n)1 = −.5. For convenience in inverting X′X assume m = n.
a. Obtain β̂0 and β̂1 from [X′X]⁻¹X′y.
b. Find simple expressions for ŷ, SSE, s, and c11.
c. Use parts (a) and (b) to find a simple expression for the 95% CI [Equation (12.25)] for β1. Letting ȳ1 be the mean of the first m observations and ȳ2 be the mean of the next n observations, your result should be

    β̂1 ± t.025,m+n−2 · s√(1/m + 1/n)

where β̂1 = ȳ1 − ȳ2 and

    s² = [Σi=1..m (yi − ȳ1)² + Σi=m+1..m+n (yi − ȳ2)²] / (m + n − 2)

which is the pooled variance confidence interval discussed in Section 9.2.
d. Let m = 3 and n = 3, with y1 = 117, y2 = 119, y3 = 127, y4 = 129, y5 = 138, y6 = 139. These are the prices in thousands for three houses in Brookwood and then three houses in Pleasant Hills. Apply parts (a), (b), and (c) to this data set.

96. The constant term is not always needed in the regression equation. For example, many physical principles involve proportions, where no constant term is needed. In general, if the dependent variable should be 0 when the independent variables are 0, then the constant term is not needed. Then it is preferable to omit β0 and use the model Y = β1x1 + β2x2 + ··· + βkxk + ε. Here we focus on the special case k = 1.
a. Differentiate the appropriate sum of squares to derive the one normal equation for estimating β1.
b. Express your normal equation in matrix terms, X′Xβ̂ = X′y, where X consists of one column with the values of the predictor variable.
c. Apply part (b) to the data of Example 12.32, using hp for y and just engine size in X.
d. Explain why deletion of the constant term might be appropriate for the data set in part (c).
e. By fitting a regression model with a constant term, …

97. Assuming that the analysis of variance table is available, show how the last three columns of Figure 12.37 (the columns related to residuals) can be obtained from the previous columns.

98. Given that the residuals are y − ŷ = (I − H)y, show that Cov(Y − Ŷ) = (I − H)σ².

99. Use Equations (12.26) and (12.27) to show that each of the leverages is between 0 and 1, and therefore the variances of the predicted values and residuals are between 0 and σ².

100. Consider the special case y = β0 + β1x + ε, so k = 1 and X consists of a column of 1's and a column of the values x1, ..., xn of x.
a. Write the normal equations in matrix form, and solve by inverting X′X. [Hint: if ad ≠ bc, then (a b; c d)⁻¹ = [1/(ad − bc)]·(d −b; −c a). Check your answers against those in Section 12.2.]
b. Use the inverse of X′X to obtain expressions for the variances of the coefficients, and check your answers against the results given in Sections 12.3 and 12.4 (β̂0 is the predicted value corresponding to x* = 0).
c. Compare the predictions from this model with the predictions from the model of Exercise 94. Comparing other aspects of the two models, discuss similarities and differences. Mention, in particular, the hat matrix, the predicted values, and the residuals.

101. Continue Exercise 94.
a. Find the elements of the hat matrix and use them to obtain the variance of the predicted values. Noting the result of Exercise 100(c), compare your result with the expression for V(Ŷ) given in Section 12.4.
b. Using the diagonal elements of H, obtain the variances of the residuals and compare with the expression given in Section 12.6.
c. Compare the variances of predicted values for an x that is close to x̄ and an x that is far from x̄.
d. Compare the variances of residuals for an x that is close to x̄ and an x that is far from x̄.
e. Give intuitive explanations for the results of parts (c) and (d).

102. Carry out the details of the derivation for the analysis of variance, Equation (12.20).

103. The measurements here are similar to those in Example 12.36, except that here the students did the measurements at home, and the scales differed in accuracy. These are measurements from a sample of ten students:

    Wingspan   Foot   Height
       74      13.0     75
       56       8.5     66
       65      10.0     69
       66       9.5     66
       62       9.0     54
       69      11.0     72
       75      12.0     75
       66       9.0     63
       66       9.0     66
        …        …       …

a. Regress wingspan on the other two variables. Carry out the test of model utility and the tests for the two individual regression coefficients of the predictors.
b. Obtain the diagonal elements of the hat …

104. …
Here is a method for obtaining the variance of the matrix (leverages). Identify the point with residuals in simple (one predictor) linear regres- the highest leverage. What is unusual about sion, as given by Equation (12.13). the point? Given the instructor’s assertion that a. We have shown in Equations (12.26) and there were no students in the class less than (12.27) that Cov(¥Y)=o?H and five feet tall, would you say that there was an Cov(Y —¥) = o2(1—H). Show therefore error? Give another reason that this student’s that V(Y; — ¥;) =o? — V(¥)). measurements seem wrong. b. Use part (a) and v(¥i) from Section 12.4 to ¢. For the other points with high leverages, what show that for simple linear regression, distinguishes them from the points with ordi- nary leverage values? d. Examining the residuals, find another student , 5 Z lL (@-3x)? whose data might be wrong. Vigpleiae a e. Discuss the elimination of questionable points in order to obtain valid regression results. 105. The presence of hard alloy carbides in high ships in High Chromium White Iron Alloys” chromium white iron alloys results in excellent (Internat. Mater. Rev., 1996: 59-82). abrasion resistance, making them suitable for materials handling in the mining and materials. *_| 4° 17-0 174 18.0 185 224 26.5 30.0 34.0 processing industries. The accompanying data. y | .66 .92 145 1.03 .70 .73 120 80 91 on x = retained austenite content (%) and y = abrasive wear loss (mm*) in pin wear tests with x | 388 482 635 658 73.9 77.2 798 84.0 garnet as the abrasive was read from a plot in the article “Microstructure-Property Relation- y | 119 115 112 137 145 150 1.36 1.29 SAS output for Exercise 105 Analysis of Variance Source DE Sum of Squares Mean Square F value Prob >F Model 1. 
0.63690 0.63690 15.444 0.0013 Error 15 0.61860 0.04124 c Total 16 1.25551 Root MSE 0.20308 R-square 0.5073 Dep Mean 1.10765 Adj R-sq 0.4744 Cie 18.33410 Parameter Estimates Parameter Standard T for HO: Prob variable DE Estimate Error Parameter -0 >IT INTERCEP 1 0.787218 0.09525879 8.264 0.0001 AUSTCONT 1 0.007570 0.00192626 3.930 0.0013 --- Trang 732 --- Supplementary Exercises. 719 a. What proportion of observed variation in j-340.5¢—% wear loss can be attributed to the simple lin- we 5 ear regression model relationship? b. What is the value of the sample correlation b. This expression for the regression line can coefficient? be interpreted as follows. Suppose r = .5. c. Test the utility of the simple linear regression What then is the predicted y for an x that model using « = .01. lies 1 SD (sy units) above the mean of the d. Estimate the true average wear loss when 4's? If r were 1, the prediction would be for content is 50% and do so in a way that con- y to lie 1 SD above its mean J, but since veys information about reliability and preci- r= .5, we predict a y that is only .5 SD (5s, sion. unit) above y. Using the data in Exercise 62 e, What value of wear loss would you predict for a patient whose age is 1 SD below the when content is 30%, and what is the value of average age in the sample, by how many the corresponding residual? standard deviations is the patient's predicted 106. An investigation was carried out to study the (ACB G. abbyEI pr DELO WithENaverage ACG relationship between speed (ft/s) and stride rate forthe gamble? (number of steps taken/s) among female mara- 111. In biofiltration of wastewater, air discharged thon runners. Resulting summary quantities from a treatment facility is passed through a included n = 11, X(speed) = 205.4, X(speed)* damp porous membrane that causes contami- = 3880.08, X(rate) = 35.16, X(rate)” nants to dissolve in water and be transformed = 112.681, and Z(speed)(rate) = 660.130. into harmless products. 
The accompanying data a. Calculate the equation of the least squares on x = inlet temperature (°C) and y = removal line that you would use to predict stride rate efficiency (%) was the basis for a scatter plot from speed. that appeared in the article “Treatment of b. Calculate the equation of the least squares Mixed Hydrogen Sulfide and Organic Vapors line that you would use to predict speed from in a Rock Medium Biofilter"(Water Environ. stride rate. Res., 2001: 426-435). ce. Calculate the coefficient of determination for the regression of stride rate on speed of OO part (a) and for the regression of speed on Obs Temp Removal Obs Temp Removal stride rate of part (b). How are these related? % % d. How is the product of the two slope esti- TO mates related to the value calculated in (c)? 1 7.68 98.09 178.55 98.27 2 651 98.25 «187.57 ——-98.00 107. In Section 12.4, we presented a formula for 3. 643 9782 19 694 98.09 the variance V(fy+,x") and a CI for 4 548 9782 20 8.32 98.25 Bo + Bix". Taking x = 0 gives of and a CI 5 657 97.82 2110.50 98.41 for By. Use the data of Example 12.12 to cal- 6 10.22 97.93 22 16.02 98.51 culate the estimated standard deviation of By 7 15.69 98.38 2317.83 98.71 and a 95% CI for the y-intercept of the true 8 16.77, 98.8924 17.03 98.79 regression line. 9 17.13 98.96 25 16.18 98.87 . 5 . 10 17.63 98.90 26 16.26 98.76 108. Show that SSE = S,, —fyS,y, which gives an li 1672 9868 27 1444 9858 alternative computational formula for SSE. 12 15.45 98.69 28 12.78 98.73 109. Suppose that and y are positive variables and 13° 12.06 98.51 29 «12.25 98.45 that a sample of 7 pairs results in r= 1. If the 14 1144 98.09 30«11.69 98.37 sample correlation coefficient is computed for 15 10.17 98.25 3111.34 98.36 the (x, y’) pairs, will the resulting value also be 16 9.64 98.36 32 10.97 98.45 approximately 1? Explain. TT 110. 
Let s, and s, denote the sample standard devia- Calculated summary quantities are tions of the observed x's and y’s, respectively [so Sx; = 384.26, Sy, = 3149.04, a= 82 =D (4 —3)°/(m = 1) and similarly for s?]. 5099.2412, Lxiy; = 37,850.7762, and oy? = a. Show that an alternative expression for the 309,892.6548. estimated regression line By + Byx is --- Trang 733 --- 720 cuarrer 12 Regression and Correlation a. Does a scatter plot of the data suggest appro- Players” (Med. Sci. Sports Exercise, 1999: priateness of the simple linear regression 1350-1356) reports on a new air displacement model? device for measuring body fat. The customary b. Fit the simple linear regression model, obtain procedure utilizes the hydrostatic weighing a point prediction of removal efficiency when device, which measures the percentage of temperature = 10.50, and calculate the value body fat by means of water displacement. of the corresponding residual. Here is representative data read from a graph ¢. Roughly what is the size of a typical deviation in the paper. of points in the scatter plot from the least squares line? BOD] 2.5 4.0 4.1 62 7.1 7.0 83 9.2 9.3 120 122 d. What proportion of observed variation in EWLED-G2-92 64-86 1221212014513 removal efficiency can be attributed to the BOD| 126 142 144 151 152 163 171 179 179 model relationship? Re EEE SRS e. Estimate the slope coefficient in a way that conveys information about reliability and pre- cision, and interpret your estimate. a. Use various methods to decide whether it is f. Personal communication with the authors of plausible that the two techniques measure on the article revealed that one additional observa- average the same amount of fat. tion was not included in their scatter plot: (6.53, b. Use the data to develop a way of predicting 96.55). 
What impact does this additional obser- an HW measurement from a BOD POD vation have on the equation of the least squares measurement, and investigate the effective- line and the values of s and 17? ness of such predictions. 112. Normal hatchery processes in aquaculture inev- 114. Reconsider the situation of Exercise 105, in itably produce stress in fish, which may nega- which x = retained austenite content using a tively impact growth, reproduction, flesh garnet abrasive and y = abrasive wear loss quality, and susceptibility to disease. Such were related via the simple linear regression stress manifests itself in elevated and sustained model Y = fo + Bix + &. Suppose that for a sec- corticosteroid levels. The article “Evaluation of ond type of abrasive, these variables are also Simple Instruments for the Measurement of related via the simple linear regression model Blood Glucose and Lactate, and Plasma Protein Y= 9+ 7x +e and that V(2) =o for both as Stress Indicators in Fish"(/. World Aquacult. types of abrasive. If the data set consists of 1 Soc., 1999: 276-284) described an experiment observations on the first abrasive and 1) on the in which fish were subjected to a stress protocol second and if SSE; and SSE, denote the two and then removed and tested at various times error sums of squares, then a pooled estimate of after the protocol had been applied. The accom- o is 6? = (SSE, + SSEy)/(m +m —4). Let panying data on x = time (min) and y = blood SSq and S$, denote S> (x; —¥)° for the data glucose level (mmol/L) was read from a plot. on the first and second abrasives, respectively. A test of Ho: 61 — 71 = 0 (equal slopes) is based on the statistic x 2 2 5 7 12 13 17 18 23 24 26 28 y | 40 36 3.7 40 38 40 5.1 39 44 43 43 44 Fe—_fch a 29 30 34 36 40 41 44 56 56 57 60 60 “V SS SSxz y | 58 43 55 56 51 5.7 61 5.1 59 68 4.9 5.7 When Hp is true, T has a ¢ distribution with n, + ny — 4 df. 
Suppose the 15 observations Use the methods developed in this chapter to using the alternative abrasive give SS,2 analyze the data, and write a brief report sum- = 71525578, }, =.006845, and SSE, marizing your conclusions (assume that the = .51350. Using this along with the data of investigators are particularly interested in glu- Exercise 105, carry out a test at level .05 to see cose level 30 min after stress). whether expected change in wear loss associated 113. The article “Evaluating the BOD POD for with a 1% increase in austenite content is identi- FS e Re cal for the two types of abrasive. Assessing Body Fat in Collegiate Football --- Trang 734 --- Supplementary Exercises 721 115. Show that the ANOVA version of the model 118, No tortilla chip afficionado likes soggy chips, so utility test discussed in Section 12.3 (with test it is important to identify characteristics of the statistic F = MSR/MSE) is in fact a likelihood production process that produce chips with an ratio test for Ho: By = 0 versus Hy: By £0. [Hint: appealing texture. The following data on x = We have already pointed out that the least frying time (sec) and y = moisture content (%) squares estimates of Bo and f; are the mle’s. appeared in the article “Thermal and Physical What is the mle of fo when Ho is true? Now Properties of Tortilla Chips as a Function of determine the mle of a7 both in Q (when f; is Frying Time” (J. Food Process. Preserv., 1995: not necessarily 0) and in Qo (when Ho is true).] 175-189). 116. Show that the ¢ ratio version of the model utility test is equivalent to the ANOVA F statistic ver- x 5 10 15 20 25 30 45 60 sion of the test. Equivalent here means that rejecting Ho: By = 0 when either ¢ > ty,,-2 or y | 163° 9.7 81 42 34 29 19 13 t < —typ, n—2 is the same as rejecting Hy when f > Fe,tn—2- a. Construct a scatter plot of the data and com- sae ment. 117. When a scatter plot of Diente data shows a b. 
Construct a scatter plot of the [In(x), In(y)] pattern resembling an exponentially increasing pairs (.e. transform both x and y by logs) and or decreasing curve, the following multiplicative Comient exponential model is often used: Y = we** - e, cape _ ; c. Consider the multiplicative power model ¥ = a. What does this multiplicative model imply B : ° reine m axe, What does this model imply about the about the relationship between Y’= In(Y) and 1AGOHRHIDS jem aid = 1 .) (Hint: take logs on both sides of the model Tolar abhi berweea ye INO) and. Yen Re 8 a (x) (assuming that ¢ has a lognormal distribu- equation and let By = In(x), B; = Be = In tion)? oe suppose tial elas toghonnal digit d. Obtain a prediction interval for moisture con- in . tent when frying time is 25 s. (Hint: first carry b. The accompanying data resulted from an ij : Hens ees out a simple linear regression of y’ on x! and investigation of how ethylene content of let- " : oi seamen y i i calculate an appropriate prediction interval.| tuce seeds (y, in nL/g dry wt) varied with exposure time (x, in min) to an ethylene 119. The article “Determination of Biological Matu- absorbent (“Ethylene Synthesis in Lettuce rity and Effect of Harvesting and Drying Condi- Seeds: Its Physiological Significance,” Plant tions on Milling Quality of Paddy” (J. Agric. Physiol., 1972: 719-722). Engr. Res., 1975: 353-361) reported the follow- ing data on date of harvesting (x, the number of x | 2 2 2 30 40 50 60 70 80 90 100 days after flowering) and yield of paddy, a grain farmed in India (y, in kg/ha). y 408 274 196 137 90 78 51 40 30 22 15 2 Fit the simple linear regression model to this “| '© 18 20 22 24 26 28 30 data, and check model adequacy using the y | 2508 2518 3304 3423 3057 3190 3500 3883 residuals. ¢. Isa scatter plot of the data consistent with the + 2M 36 88 exponential regression model? 
Fit this model y | 3823 3646 3708 3333 3517 3241 3103 2776 by first carrying out a simple linear regression analysis using In(y) as the dependent variable a. Construct a scatter plot of the data. What and vas the independent variable, How good a model 48 suggedted by the plot? at isthe Sieaple hingat te pression model #5 the b. Use a statistical software package to fit the transformed” data [the (x, In(y)) pairs]? What model suggested in (a) and test its utility. are point estimates of the parameters « and f? ¢. Use the software package to obtain a predic- d. Obtain a 95% prediction interval for ethylene tion interval for yield when the crop is har- content when exposure time is 50 min. [Hint: vested 25 days after flowering, and also a first obtaii'a'PL-for.in()) based:on'the:simple, confidence interval for expected yield in lincatre gression: Camried outa: (¢),] situations where the crop is harvested --- Trang 735 --- 722 carrer 12 Regression and Correlation 25 days after flowering. How do these two 11 min, and heart rate was 140 beats/min intervals compare to each other? Is this result resulted in VO max = 3.15. What would consistent with what you learned in simple you have predicted for VO.max in this situa- linear regression? Explain. tion, and what is the value of the d. Use the software package to obtain a PI and corresponding residual? CI when x = 40. How do these intervals com- d. Using SSE = 30.1033 and SST = 102.3922, pare to the corresponding intervals obtained what proportion of observed variation in in (c)? Is this result consistent with what you VO.max can be attributed to the model rela- learned in simple linear regression? Explain. tionship? e. Carry out a test of hypotheses to decide e. 
Assuming a sample size of n = 20, carry out a whether the quadratic predictor in the model test of hypotheses to decide whether the cho- fit in (b) provides useful information about sen model specifies a useful relationship yield (presuming that the linear predictor between VO>max and at least one of the pre- remains in the model). dictors. 120. The article “Validation of the Rockport Fitness 121. A sample of n = 20 companies was selected, and Walking Test in College Males and Females” the values of y = stock price and k = 15 predic- (Res. Q. Exercise Sport, 1994: 152-158) recom- tor variables (such as quarterly dividend, previ- mended the following estimated regression equa- ous year’s earnings, and debt ratio) were tion for relating y = VO2max (L/min, a measure determined. When the multiple regression of cardiorespiratory fitness) to the predictors x model using these 15 predictors was fit to the = gender (female = 0, male = 1), x» = weight data, R? = .90 resulted. (lb), x3 = I-mile walk time (min), and x4 = a. Does the model appear to specify a useful heart rate at the end of the walk (beats/min): relationship between y and the predictor vari- ables? Carry out a test using significance level y = 3.5959 + .6566x, + .0096x2 .05. [Hint: The F critical value for 15 numer- — .0996x3 — .0080x4 ator and 4 denominator df is 5.86.] b. Based on the result of part (a), does a high R? a. How would you interpret the estimated coef- value by itself imply that a model is useful? ficient —.09967 Under what circumstances might you be sus- b. How would you interpret the estimated coef- picious of a model with a high R? value? ficient .6566? c. With n and k as given previously, how large ¢. Suppose that an observation made on a male would R? have to be for the model to be whose weight was 170 Ib, walk time was judged useful at the .05 level of significance? 
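Several of the exercises above (91, 92, 100, 101, and 103) revolve around the same computations: β̂ = (X′X)⁻¹X′y, the fitted values, SSE, and the leverages. The hand algebra can be cross-checked numerically. The following sketch is illustrative only: the (x, y) values are made up, and it uses the closed-form 2 × 2 inverse from the hint in Exercise 100 rather than a linear algebra library.

```python
# Simple linear regression via the normal equations (the Exercise 100 setup):
# X has a column of 1's and a column of x-values, and beta-hat = (X'X)^{-1} X'y.
# The data below is illustrative, not from the text.

def fit_matrix_form(x, y):
    n = len(x)
    # X'X is the 2x2 matrix [[n, Sx], [Sx, Sxx]] and X'y is [Sy, Sxy].
    Sx, Sy = sum(x), sum(y)
    Sxx = sum(xi * xi for xi in x)
    Sxy = sum(xi * yi for xi, yi in zip(x, y))
    det = n * Sxx - Sx * Sx              # the "ad - bc" of the Exercise 100 hint
    # Closed-form 2x2 inverse applied to X'y gives the coefficient estimates.
    b0 = (Sxx * Sy - Sx * Sxy) / det
    b1 = (n * Sxy - Sx * Sy) / det
    fitted = [b0 + b1 * xi for xi in x]
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    # Leverages h_ii = 1/n + (x_i - xbar)^2 / S_xx (the Exercise 101 quantities).
    xbar = Sx / n
    sxx_centered = Sxx - n * xbar ** 2
    lev = [1 / n + (xi - xbar) ** 2 / sxx_centered for xi in x]
    return b0, b1, sse, lev

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
b0, b1, sse, lev = fit_matrix_form(x, y)
print(b0, b1)       # intercept near 0, slope near 2 for this data
print(sum(lev))     # leverages sum to the number of coefficients, here 2
```

Note how the computed leverages also illustrate Exercise 99: each lies strictly between 0 and 1, and the x-values farthest from x̄ get the largest leverages.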
Bibliography

Chatterjee, Samprit, Ali Hadi, and Bertram Price, Regression Analysis by Example (4th ed.), Wiley, New York, 2006. A brief but informative discussion of selected topics.

Daniel, Cuthbert, and Fred Wood, Fitting Equations to Data (2nd ed.), Wiley, New York, 1980. Contains many insights and methods that evolved from the authors' extensive consulting experience.

Draper, Norman, and Harry Smith, Applied Regression Analysis (3rd ed.), Wiley, New York, 1998. A comprehensive and authoritative book on regression.

Hoaglin, David, and Roy Welsch, "The Hat Matrix in Regression and ANOVA," American Statistician, 1978: 17-23. Describes methods for detecting influential observations in a regression data set.

Kutner, Michael, Christopher Nachtsheim, John Neter, and William Li, Applied Linear Statistical Models (5th ed.), McGraw-Hill, New York, 2005. The first 14 chapters constitute an extremely readable and informative survey of regression analysis.

13   Goodness-of-Fit Tests and Categorical Data Analysis

Introduction

In the simplest type of situation considered in this chapter, each observation in a sample is classified as belonging to one of a finite number of categories (for example, blood type could be one of the four categories O, A, B, or AB). With pᵢ denoting the probability that any particular observation belongs in category i (or the proportion of the population belonging to category i), we wish to test a null hypothesis that completely specifies the values of all the pᵢ's (such as H₀: p₁ = .45, p₂ = .35, p₃ = .15, p₄ = .05, when there are four categories). The test statistic will be a measure of the discrepancy between the observed numbers in the categories and the expected numbers when H₀ is true. Because a decision will be reached by comparing the computed value of the test statistic to a critical value of the chi-squared distribution, the procedure is called a chi-squared goodness-of-fit test.
Sometimes the null hypothesis specifies that the pᵢ's depend on some smaller number of parameters without specifying the values of these parameters. For example, with three categories the null hypothesis might state that p₁ = θ², p₂ = 2θ(1 − θ), and p₃ = (1 − θ)². For a chi-squared test to be performed, the values of any unspecified parameters must be estimated from the sample data. These problems are discussed in Section 13.2. The methods are then applied to test a null hypothesis that states that the sample comes from a particular family of distributions, such as the Poisson family (with λ estimated from the sample) or the normal family (with μ and σ estimated).

Chi-squared tests for two different situations are presented in Section 13.3. In the first, the null hypothesis states that the pᵢ's are the same for several different populations. The second type of situation involves taking a sample from a single population and classifying each individual with respect to two different categorical factors (such as religious preference and political party registration). The null hypothesis in this situation is that the two factors are independent within the population.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_13, © Springer Science+Business Media, LLC 2012

13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

A binomial experiment consists of a sequence of independent trials in which each trial can result in one of two possible outcomes, S (for success) and F (for failure). The probability of success, denoted by p, is assumed to be constant from trial to trial, and the number n of trials is fixed at the outset of the experiment. In Chapter 9, we presented a large-sample z test for testing H₀: p = p₀.
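The large-sample z test recalled here rejects H₀: p = p₀ when |z| = |(p̂ − p₀)/√(p₀q₀/n)| is too large. A minimal sketch of that computation, with made-up counts (not data from the text), using only the standard library:

```python
# Two-tailed large-sample z test for H0: p = p0, as reviewed from Chapter 9.
# The counts below are illustrative only.
from math import sqrt, erf

def z_test_proportion(successes, n, p0):
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    # Standard normal cdf: Phi(t) = (1 + erf(t / sqrt(2))) / 2
    phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
    p_value = 2 * (1 - phi(abs(z)))     # two-tailed
    return z, p_value

z, p = z_test_proportion(52, 100, 0.5)
print(round(z, 3), round(p, 3))         # z = 0.4; no evidence against H0
```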
Notice that this null hypothesis specifies both P(S) and P(F), since if P(S) = p₀, then P(F) = 1 − p₀. Denoting P(F) by q and 1 − p₀ by q₀, the null hypothesis can alternatively be written as H₀: p = p₀, q = q₀. The z test is two-tailed when the alternative of interest is p ≠ p₀.

A multinomial experiment generalizes a binomial experiment by allowing each trial to result in one of k possible outcomes, where k > 2. For example, suppose a store accepts three different types of credit cards. A multinomial experiment would result from observing the type of credit card used (type 1, type 2, or type 3) by each of the next n customers who pay with a credit card. In general, we will refer to the k possible outcomes on any given trial as categories, and pᵢ will denote the probability that a trial results in category i. If the experiment consists of selecting n individuals or objects from a population and categorizing each one, then pᵢ is the proportion of the population falling in the ith category (such an experiment will be approximately multinomial provided that n is much smaller than the population size). The null hypothesis of interest will specify the value of each pᵢ. For example, in the case k = 3, we might have H₀: p₁ = .5, p₂ = .3, p₃ = .2. The alternative hypothesis will state that H₀ is not true, that is, that at least one of the pᵢ's has a value different from that asserted by H₀ (in which case at least two must be different, since they sum to 1). The symbol pᵢ₀ will represent the value of pᵢ claimed by the null hypothesis. In the example just given, p₁₀ = .5, p₂₀ = .3, and p₃₀ = .2.

Before the multinomial experiment is performed, the number of trials that will result in category i (i = 1, 2, ..., k) is a random variable, just as the number of successes and the number of failures in a binomial experiment are random variables. This random variable will be denoted by Nᵢ and its observed value by nᵢ.
Since each trial results in exactly one of the k categories, ΣNᵢ = n, and the same is true of the nᵢ's. As an example, an experiment with n = 100 and k = 3 might yield N₁ = 46, N₂ = 35, and N₃ = 19.

The expected number of successes and expected number of failures in a binomial experiment are np and nq, respectively. When H₀: p = p₀, q = q₀ is true, the expected numbers of successes and failures are np₀ and nq₀, respectively. Similarly, in a multinomial experiment the expected number of trials resulting in category i is E(Nᵢ) = npᵢ (i = 1, ..., k). When H₀: p₁ = p₁₀, ..., pₖ = pₖ₀ is true, these expected values become E(N₁) = np₁₀, E(N₂) = np₂₀, ..., E(Nₖ) = npₖ₀. For the case k = 3, H₀: p₁ = .5, p₂ = .3, p₃ = .2, and n = 100, we have E(N₁) = 100(.5) = 50, E(N₂) = 30, and E(N₃) = 20 when H₀ is true. The nᵢ's are often displayed in a tabular format consisting of a row of k cells, one for each category, as illustrated in Table 13.1. The expected values when H₀ is true are displayed just below the observed values. The Nᵢ's and nᵢ's are usually referred to as observed cell counts (or observed cell frequencies), and np₁₀, np₂₀, ..., npₖ₀ are the corresponding expected cell counts under H₀.

Table 13.1 Observed and expected cell counts

Category:   i = 1    i = 2    ...    i = k    Row Total
Observed:    n₁       n₂      ...     nₖ         n
Expected:   np₁₀     np₂₀     ...    npₖ₀        n

The nᵢ's should all be reasonably close to the corresponding npᵢ₀'s when H₀ is true. On the other hand, several of the observed counts should differ substantially from these expected counts when the actual values of the pᵢ's differ markedly from what the null hypothesis asserts. The test procedure involves assessing the discrepancy between the nᵢ's and the npᵢ₀'s, with H₀ being rejected when the discrepancy is sufficiently large.
It is natural to base a measure of discrepancy on the squared deviations (n₁ − np₁₀)², (n₂ − np₂₀)², ..., (nₖ − npₖ₀)². An obvious way to combine these into an overall measure is to add them together to obtain Σ(nᵢ − npᵢ₀)². However, suppose np₁₀ = 100 and np₂₀ = 10. Then if n₁ = 95 and n₂ = 5, the two categories contribute the same squared deviations to the proposed measure. Yet n₁ is only 5% less than what would be expected when H₀ is true, whereas n₂ is 50% less. To take relative magnitudes of the deviations into account, we will divide each squared deviation by the corresponding expected count and then combine.

Before giving a more detailed description, we must discuss the chi-squared distribution. This distribution was introduced in Section 4.4, discussed in Section 6.4, and used in Chapter 8 to obtain a confidence interval for the variance σ² of a normal population. The chi-squared distribution has a single parameter, called the number of degrees of freedom (df) of the distribution, with possible values 1, 2, 3, .... Analogous to the critical value tα,ν for the t distribution, χ²α,ν is the value such that α of the area under the χ² curve with ν df lies to the right of χ²α,ν (see Figure 13.1). Selected values of χ²α,ν are given in Appendix Table A.6.

Figure 13.1 A critical value for a chi-squared distribution (the shaded area α lies to the right of χ²α,ν under the χ² curve)

THEOREM   Provided that npᵢ ≥ 5 for every i (i = 1, 2, ..., k), the variable

    χ² = Σᵢ₌₁ᵏ (Nᵢ − npᵢ)²/(npᵢ) = Σ_all cells (observed − expected)²/expected

has approximately a chi-squared distribution with k − 1 df.

The fact that df = k − 1 is a consequence of the restriction ΣNᵢ = n. Although there are k observed cell counts, once any k − 1 are known, the remaining one is uniquely determined. That is, there are only k − 1 "freely determined" cell counts, and thus k − 1 df.
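The theorem can be checked empirically: simulating multinomial counts under H₀ and computing the statistic repeatedly should give a sampling distribution close to chi-squared with k − 1 df, whose mean is k − 1. A small simulation sketch (the cell probabilities and sample sizes are illustrative, not from the text):

```python
# Empirical check of the theorem: under H0 the statistic
# sum_i (N_i - n p_i)^2 / (n p_i) behaves like chi-squared with k - 1 df,
# and a chi-squared(k - 1) variable has mean k - 1.
import random

random.seed(1)
p = [0.5, 0.3, 0.2]          # k = 3 categories, so k - 1 = 2 df
n, reps = 200, 2000
k = len(p)

def chi_sq_stat(counts):
    return sum((counts[i] - n * p[i]) ** 2 / (n * p[i]) for i in range(k))

stats = []
for _ in range(reps):
    counts = [0] * k
    for _ in range(n):                   # one multinomial trial at a time
        u, i, acc = random.random(), 0, p[0]
        while u > acc:                   # pick the category containing u
            i += 1
            acc += p[i]
        counts[i] += 1
    stats.append(chi_sq_stat(counts))

mean_stat = sum(stats) / reps
print(mean_stat)                         # close to k - 1 = 2
```

Note that n·pᵢ is at least 40 for every category here, comfortably above the npᵢ ≥ 5 condition required by the theorem.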
If npᵢ₀ is substituted for npᵢ in χ², the resulting test statistic has approximately a chi-squared distribution when H₀ is true. Rejection of H₀ is appropriate when χ² ≥ c (because large discrepancies between observed and expected counts lead to a large value of χ²), and the choice c = χ²α,k−1 yields a test with significance level α.

Null hypothesis: H₀: p₁ = p₁₀, p₂ = p₂₀, ..., pₖ = pₖ₀
Alternative hypothesis: Hₐ: at least one pᵢ does not equal pᵢ₀
Test statistic value: χ² = Σ_all cells (observed − expected)²/expected = Σᵢ₌₁ᵏ (nᵢ − npᵢ₀)²/(npᵢ₀)
Rejection region: χ² ≥ χ²α,k−1

Example 13.1   If we focus on two different characteristics of an organism, each controlled by a single gene, and cross a pure strain having genotype AABB with a pure strain having genotype aabb (capital letters denoting dominant alleles and small letters recessive alleles), the resulting genotype will be AaBb. If these first-generation organisms are then crossed among themselves (a dihybrid cross), there will be four phenotypes depending on whether a dominant allele of either type is present. Mendel's laws of inheritance imply that these four phenotypes should have probabilities 9/16, 3/16, 3/16, and 1/16 of arising in any given dihybrid cross. The article "Linkage Studies of the Tomato" (Trans. Royal Canad. Institut., 1931: 1-19) reports the following data on phenotypes from a dihybrid cross of tall cut-leaf tomatoes with dwarf potato-leaf tomatoes. There are k = 4 categories corresponding to the four possible phenotypes, with the null hypothesis being

    H₀: p₁ = 9/16, p₂ = 3/16, p₃ = 3/16, p₄ = 1/16

The expected cell counts are 9n/16, 3n/16, 3n/16, and n/16, and the test is based on k − 1 = 3 df. The total sample size was n = 1611. Observed and expected counts are given in Table 13.2.
Table 13.2 Observed and expected cell counts for Example 13.1

           i = 1       i = 2         i = 3       i = 4
           Tall,       Tall,         Dwarf,      Dwarf,
           Cut-Leaf    Potato-Leaf   Cut-Leaf    Potato-Leaf
Observed   926         288           293         104
Expected   906.2       302.1         302.1       100.7

13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified

The contribution to χ² from the first cell is

(n_1 − np_10)² / (np_10) = (926 − 906.2)² / 906.2 = .433

Cells 2, 3, and 4 contribute .658, .274, and .108, respectively, so χ² = .433 + .658 + .274 + .108 = 1.473. A test with significance level .10 requires χ²_{.10,3}, the number in the 3 df row and .10 column of Appendix Table A.6. This critical value is 6.251. Since 1.473 is not at least 6.251, H_0 cannot be rejected even at this rather large level of significance. The data is quite consistent with Mendel's laws. ■

Consider the special case of just two categories, k = 2. The null hypothesis in this case can be stated as H_0: p_1 = p_10, because the relations p_2 = 1 − p_1 and p_20 = 1 − p_10 make the inclusion of p_2 = p_20 in H_0 redundant. The alternative hypothesis is H_a: p_1 ≠ p_10. These hypotheses can also be tested using a two-tailed z test with test statistic

z = (n_1/n − p_10) / sqrt(p_10(1 − p_10)/n) = (p̂_1 − p_10) / sqrt(p_10 p_20 / n)

Surprisingly, the two test procedures are completely equivalent. This is because it can be shown that Z² = χ² and (z_{α/2})² = χ²_{α,1}, so that χ² ≥ χ²_{α,1} if and only if (iff) |Z| ≥ z_{α/2}. If the alternative hypothesis is either H_a: p_1 > p_10 or H_a: p_1 < p_10, the chi-squared test cannot be used. One must then revert to an upper- or lower-tailed z test.

As is the case with all test procedures, one must be careful not to confuse statistical significance with practical significance. A computed χ² that exceeds χ²_{α,k−1} may be a result of a very large sample size rather than any practical differences between the hypothesized p_i0's and the true p_i's. Thus if p_10 = p_20 = p_30 = 1/3, but the true p_i's have values .330, .340, and .330, a large value of χ² is sure to arise with a sufficiently large n.
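Example 13.1's computation can be reproduced with SciPy's built-in goodness-of-fit routine, a sketch:

```python
import numpy as np
from scipy.stats import chisquare

# Example 13.1 phenotype counts: tall cut-leaf, tall potato-leaf,
# dwarf cut-leaf, dwarf potato-leaf (n = 1611).
observed = np.array([926, 288, 293, 104])
p0 = np.array([9, 3, 3, 1]) / 16            # Mendelian null probabilities

result = chisquare(observed, f_exp=observed.sum() * p0)
# result.statistic ~ 1.47, well below chi^2_{.10,3} = 6.251,
# so H0 is not rejected even at level .10
```

The slight difference from the text's 1.473 comes only from rounding the individual cell contributions.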
Before rejecting H_0, the p̂_i's should be examined to see whether they suggest a model different from that of H_0 from a practical point of view.

P-Values for Chi-Squared Tests

The chi-squared tests in this chapter are all upper-tailed, so we focus on this case. Just as the P-value for an upper-tailed t test is the area under the t_ν curve to the right of the calculated t, the P-value for an upper-tailed chi-squared test is the area under the χ² curve to the right of the calculated χ². Appendix Table A.6 provides limited P-value information because only five upper-tail critical values are tabulated for each different ν. We have therefore included Appendix Table A.10, analogous to Table A.7, that facilitates making more precise P-value statements.

The fact that (z_{α/2})² = χ²_{α,1} is a consequence of the relationship between the standard normal distribution and the chi-squared distribution with 1 df: if Z ~ N(0, 1), then Z² has a chi-squared distribution with ν = 1. See the first proposition in Section 6.4.

The fact that t curves were all centered at zero allowed us to tabulate t-curve tail areas in a relatively compact way, with the left margin giving values ranging from 0.0 to 4.0 on the horizontal t scale and various columns displaying corresponding upper-tail areas for various df's. The rightward movement of chi-squared curves as df increases necessitates a somewhat different type of tabulation. The left margin of Appendix Table A.10 displays various upper-tail areas: .100, .095, .090, …, .005, and .001. Each column of the table is for a different value of df, and the entries are values on the horizontal chi-squared axis that capture these corresponding tail areas. For example, moving down to tail area .085 and across to the 4 df column, we see that the area to the right of 8.18 under the 4 df chi-squared curve is .085 (see Figure 13.2).
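Entries of Appendix Table A.10 correspond to the chi-squared upper-tail (survival) function, which can be evaluated directly; a SciPy sketch:

```python
from scipy.stats import chi2

# Area to the right of 8.18 under the 4 df chi-squared curve,
# matching the Table A.10 entry discussed above.
p4 = chi2.sf(8.18, df=4)       # ~ .085

# The same tail area under the 10 df curve requires going out to 16.54.
p10 = chi2.sf(16.54, df=10)    # ~ .085
```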
Figure 13.2 A P-value for an upper-tailed chi-squared test: the shaded area .085 lies to the right of the calculated χ² = 8.18 under the chi-squared curve for 4 df

To capture this same upper-tail area under the 10 df curve, we must go out to 16.54. In the 2 df column, the top row shows that if the calculated value of the chi-squared variable is smaller than 4.60, the captured tail area (the P-value) exceeds .10. Similarly, the bottom row in this column indicates that if the calculated value exceeds 13.81, the tail area is smaller than .001 (P-value < .001).

χ² When the p_i's Are Functions of Other Parameters

Frequently the p_i's are hypothesized to depend on a smaller number of parameters θ_1, …, θ_m (m < k). Then a specific hypothesis involving the θ_i's yields specific p_i0's, which are then used in the χ² test.

Example 13.2 In a well-known genetics article ("The Progeny, in Generations F_1 to F_17, of a Cross Between a Yellow-Wrinkled and a Green-Round Seeded Pea," J. Genet., 1923: 255–331), the early statistician G. U. Yule analyzed data resulting from crossing garden peas. The dominant alleles in the experiment were Y = yellow color and R = round shape, resulting in the double dominant YR. Yule examined 269 four-seed pods resulting from a dihybrid cross and counted the number of YR seeds in each pod. Letting X denote the number of YR's in a randomly selected pod, possible X values are 0, 1, 2, 3, 4, which we identify with cells 1, 2, 3, 4, and 5 of a rectangular table (so, for example, a pod with X = 4 yields an observed count in cell 5).
The hypothesis that the Mendelian laws are operative and that genotypes of individual seeds within a pod are independent of one another implies that X has a binomial distribution with n = 4 and θ = 9/16. We thus wish to test H_0: p_1 = p_10, …, p_5 = p_50, where

p_i0 = P(i − 1 YR's among 4 seeds when H_0 is true) = C(4, i−1) θ^{i−1} (1 − θ)^{4−(i−1)},  i = 1, 2, 3, 4, 5;  θ = 9/16

Yule's data and the computations are in Table 13.3, with expected cell counts np_i0 = 269p_i0.

Table 13.3 Observed and expected cell counts for Example 13.2

Cell i:                            1       2      3      4      5
YR peas/pod:                       0       1      2      3      4
Observed:                          16      45     100    82     26
(observed − expected)²/expected:   3.823   .637   .052   .038   .032

Thus χ² = 3.823 + ⋯ + .032 = 4.582. Since χ²_{.01,4} = 13.277, H_0 is not rejected at level .01. Appendix Table A.10 shows that because 4.582 < 7.779, the P-value for the test exceeds .10. H_0 should not be rejected at any reasonable significance level. ■

χ² When the Underlying Distribution Is Continuous

We have so far assumed that the k categories are naturally defined in the context of the experiment under consideration. The χ² test can also be used to test whether a sample comes from a specific underlying continuous distribution. Let X denote the variable being sampled and suppose the hypothesized pdf of X is f_0(x). As in the construction of a frequency distribution in Chapter 1, subdivide the measurement scale of X into k intervals [a_0, a_1), [a_1, a_2), …, [a_{k−1}, a_k), where the interval [a_{i−1}, a_i) includes the value a_{i−1} but not a_i. The cell probabilities specified by H_0 are then

p_i0 = P(a_{i−1} ≤ X < a_i) = ∫ from a_{i−1} to a_i of f_0(x) dx,  for i = 1, …, k

Often they are selected so that the np_i0's are equal.

Example 13.3 To see whether the time of onset of labor among expectant mothers is uniformly distributed throughout a 24-hour day, we can divide a day into k periods, each of length 24/k. The null hypothesis states that f(x) is the uniform pdf on the interval [0, 24], so that p_i0 = 1/k.
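Returning to Example 13.2, the binomial cell probabilities and the resulting statistic can be checked numerically; a SciPy sketch:

```python
import numpy as np
from scipy.stats import binom, chisquare

# Example 13.2: cell i corresponds to X = i - 1 YR seeds per pod,
# where X ~ Bin(4, 9/16) under H0.
observed = np.array([16, 45, 100, 82, 26])       # n = 269 pods
p0 = binom.pmf(np.arange(5), n=4, p=9 / 16)      # the five p_i0's

result = chisquare(observed, f_exp=observed.sum() * p0)
# statistic ~ 4.59 < chi^2_{.10,4} = 7.779, so the P-value exceeds .10
```

The small discrepancy from the text's 4.582 reflects only the rounding of the tabulated cell contributions.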
The article "The Hour of Birth" (Brit. J. Prevent. Social Med., 1953: 43–59) reports on 1186 onset times, which were categorized into k = 24 one-hour intervals beginning at midnight, resulting in cell counts of 52, 73, 89, 88, 68, 47, 58, 47, 48, 53, 47, 34, 21, 31, 40, 24, 37, 31, 47, 34, 36, 44, 78, and 59. Each expected cell count is 1186 · (1/24) = 49.42, and the resulting value of χ² is 162.77. Since χ²_{.01,23} = 41.637, the computed value is highly significant, and the null hypothesis is resoundingly rejected. Generally speaking, it appears that labor is much more likely to commence very late at night than during normal waking hours. ■

For testing whether a sample comes from a specific normal distribution, the fundamental parameters are θ_1 = μ and θ_2 = σ, and each p_i0 will be a function of these parameters.

Example 13.4 The developers of a new standardized exam want it to satisfy the following criteria: (1) actual time taken to complete the test is normally distributed, (2) μ = 100 min, and (3) exactly 90% of all students will finish within a 2-hour period. In the pilot testing of the standardized test, 120 students are given the test, and their completion times are recorded. For a chi-squared test of normally distributed completion time it is decided that k = 8 intervals should be used. The criteria imply that the 90th percentile of the completion time distribution is μ + 1.28σ = 2 h = 120 min. Since μ = 100, this implies that σ = 15.63. The eight intervals that divide the standard normal scale into eight equally likely segments are [0, .32), [.32, .675), [.675, 1.15), and [1.15, ∞), together with their four counterparts on the other side of 0. For μ = 100 and σ = 15.63, the four upper intervals become [100, 105), [105, 110.55), [110.55, 117.97), and [117.97, ∞). Thus p_i0 = 1/8 = .125 (i = 1, …, 8), from which each expected cell count is np_i0 = 120(.125) = 15.
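The construction of the eight equiprobable intervals just described can be sketched with SciPy's normal quantile function:

```python
import numpy as np
from scipy.stats import norm

# Eight equiprobable cells for completion time ~ N(100, 15.63):
# boundaries at the j/8 quantiles, j = 1, ..., 7.
mu, sigma = 100, 15.63
z = norm.ppf(np.arange(1, 8) / 8)     # -1.15, -.675, -.32, 0, .32, .675, 1.15
bounds = mu + sigma * z               # upper boundaries: 105.0, 110.55, 117.97

p0 = np.full(8, 0.125)                # each cell has null probability 1/8
expected = 120 * p0                   # np_i0 = 15 for every cell
```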
The observed cell counts were 21, 17, 12, 16, 10, 15, 19, and 10, resulting in χ² = 7.73. Since χ²_{.10,7} = 12.017 and 7.73 is not ≥ 12.017, there is no evidence for concluding that the criteria have not been met. ■

Exercises Section 13.1 (1–11)

1. What conclusion would be appropriate for an upper-tailed chi-squared test in each of the following situations?
a. α = .05, df = 4, χ² = 12.25
b. α = .01, df = 3, χ² = 8.54
c. α = .10, df = 2, χ² = 4.36
d. α = .01, k = 6, χ² = 10.20

2. Say as much as you can about the P-value for an upper-tailed chi-squared test in each of the following situations:
a. χ² = 7.5, df = 2
b. χ² = 13.0, df = 6
c. χ² = 18.0, df = 9

3. A statistics department at a large university maintains a tutoring center for students in its introductory service courses. The center has been staffed with the expectation that 40% of its clients would be from the business statistics course, 30% from engineering statistics, 20% from the statistics course for social science students, and the other 10% from the course for agriculture students. A random sample of n = 120 clients revealed 52, 38, 21, and 9 from the four courses. Does this data suggest that the percentages on which staffing was based are not correct? State and test the relevant hypotheses using α = .05.

4. It is hypothesized that when homing pigeons are disoriented in a certain manner, they will exhibit no preference for any direction of flight after takeoff (so that the direction X should be uniformly distributed on the interval from 0° to 360°). To test this, 120 pigeons are disoriented, let loose, and the direction of flight of each is recorded; the resulting data follows. Use the chi-squared test at level .10 to see whether the data supports the hypothesis.

Direction:  0–<45°  45–<90°  90–<135°  135–<180°  180–<225°  225–<270°  270–<315°  315–<360°
Frequency:  12      16       17        15         13         20         17         10

5. An information retrieval system has ten storage locations. Information has been stored with the expectation that the long-run proportion of requests for location i is given by p_i = (5.5 − |i − 5.5|)/30. A sample of 200 retrieval requests gave the following frequencies for locations 1–10, respectively: 4, 15, 23, 25, 38, 31, 32, 14, 10, and 8. Use a chi-squared test at significance level .10 to decide whether the data is consistent with the a priori proportions (use the P-value approach).

6. Sorghum is an important cereal crop whose quality and appearance could be affected by the presence of pigments in the pericarp (the walls of the plant ovary). The article "A Genetic and Biochemical Study on Pericarp Pigments in a Cross Between Two Cultivars of Grain Sorghum, Sorghum Bicolor" (Heredity, 1976: 413–416) reports on an experiment that involved an initial cross between CK60 sorghum (an American variety with white seeds) and Abu Taima (an Ethiopian variety with yellow seeds) to produce plants with red seeds and then a self-cross of the red-seeded plants. According to genetic theory, this F2 cross should produce plants with red, yellow, or white seeds in the ratio 9:3:4. The data from the experiment follows; does the data confirm or contradict the genetic theory? Test at level .05 using the P-value approach.

Seed Color:          Red   Yellow   White
Observed Frequency:  195   73       100

7. Criminologists have long debated whether there is a relationship between weather conditions and the incidence of violent crime. The author of the article "Is There a Season for Homicide?" (Criminology, 1988: 287–296) classified 1361 homicides according to season, resulting in the accompanying data. Test the null hypothesis of equal proportions in the four seasons.

Season:     Winter   Spring   Summer   Fall
Frequency:  328      334      372      327

9. a. If you had observed X_1, X_2, …, X_n and wanted to use the chi-squared test with five class intervals having equal probability under H_0, what would be the resulting class intervals?
b. Carry out the chi-squared test using the following data resulting from a random sample of 40 response times:

.10  .99  1.14  1.26  3.24  .12  .26  .80  …

10. a. Show that another expression for the chi-squared statistic is

χ² = Σ_{i=1}^{k} N_i² / (np_i0) − n

Why is it more efficient to compute χ² using this formula?
b. When the null hypothesis is H_0: p_1 = p_2 = ⋯ = p_k = 1/k, how does the formula of part (a) simplify?

13.2 Goodness-of-Fit Tests for Composite Hypotheses

In the previous section, we presented a goodness-of-fit test based on a χ² statistic for deciding between H_0: p_1 = p_10, …, p_k = p_k0 and the alternative H_a stating that H_0 is not true. The null hypothesis was a simple hypothesis in the sense that each p_i0 was a specified number, so that the expected cell counts when H_0 was true were uniquely determined numbers.

In many situations, there are k naturally occurring categories, but H_0 states only that the p_i's are functions of other parameters θ_1, …, θ_m without specifying the values of these θ's. For example, a population may be in equilibrium with respect to proportions of the three genotypes AA, Aa, and aa. With p_1, p_2, and p_3 denoting these proportions (probabilities), one may wish to test

H_0: p_1 = θ², p_2 = 2θ(1 − θ), p_3 = (1 − θ)²   (13.1)

where θ represents the proportion of gene A in the population. This hypothesis is composite because knowing that H_0 is true does not uniquely determine the cell probabilities and expected cell counts but only their general form. To carry out a χ² test, the unknown θ_i's must first be estimated.

Similarly, we may be interested in testing to see whether a sample came from a particular family of distributions without specifying any particular member of the family. To use the χ² test to see whether the distribution is Poisson, for example, the parameter λ must be estimated. In addition, because there are actually an infinite number of possible values of a Poisson variable, these values must be grouped so that there are a finite number of cells.
If H_0 states that the underlying distribution is normal, use of a χ² test must be preceded by a choice of cells and estimation of μ and σ.

χ² When Parameters Are Estimated

As before, k will denote the number of categories or cells and p_i will denote the probability of an observation falling in the ith cell. The null hypothesis now states that each p_i is a function of a small number of parameters θ_1, …, θ_m, with the θ_i's otherwise unspecified:

H_0: p_1 = π_1(θ), …, p_k = π_k(θ)  where θ = (θ_1, …, θ_m)   (13.2)
H_a: the hypothesis H_0 is not true

For example, for H_0 of (13.1), m = 1 (there is only one θ), π_1(θ) = θ², π_2(θ) = 2θ(1 − θ), and π_3(θ) = (1 − θ)².

In the case k = 2, there is really only a single rv, N_1 (since N_1 + N_2 = n), which has a binomial distribution. The joint probability that N_1 = n_1 and N_2 = n_2 is then

P(N_1 = n_1, N_2 = n_2) = C(n, n_1) p_1^{n_1} p_2^{n_2} ∝ p_1^{n_1} p_2^{n_2}

where p_1 + p_2 = 1 and n_1 + n_2 = n. For general k, the joint distribution of N_1, …, N_k is the multinomial distribution (Section 5.1) with

P(N_1 = n_1, …, N_k = n_k) ∝ p_1^{n_1} · p_2^{n_2} · ⋯ · p_k^{n_k}   (13.3)

When H_0 is true, (13.3) becomes

P(N_1 = n_1, …, N_k = n_k) ∝ [π_1(θ)]^{n_1} · ⋯ · [π_k(θ)]^{n_k}   (13.4)

To apply a chi-squared test, θ = (θ_1, …, θ_m) must be estimated.

METHOD OF ESTIMATION Let n_1, n_2, …, n_k denote the observed values of N_1, …, N_k. Then θ̂_1, …, θ̂_m are those values of the θ_i's that maximize (13.4), that is, the maximum likelihood estimators (Section 7.2).

Example 13.5 In humans there is a blood group, the MN group, that is composed of individuals having one of the three blood types M, MN, and N. Type is determined by two alleles, and there is no dominance, so the three possible genotypes give rise to three phenotypes. A population consisting of individuals in the MN group is in equilibrium if

P(M) = p_1 = θ²
P(MN) = p_2 = 2θ(1 − θ)
P(N) = p_3 = (1 − θ)²

for some θ. Suppose a sample from such a population yielded the results shown in Table 13.4.
Table 13.4 Observed counts for Example 13.5

Type:      M     MN    N
Observed:  125   225   150

Then

[π_1(θ)]^{n_1} [π_2(θ)]^{n_2} [π_3(θ)]^{n_3} = [θ²]^{n_1} [2θ(1 − θ)]^{n_2} [(1 − θ)²]^{n_3} = 2^{n_2} · θ^{2n_1 + n_2} · (1 − θ)^{n_2 + 2n_3}

Maximizing this with respect to θ (or, equivalently, maximizing the natural logarithm of this quantity, which is easier to differentiate) yields

θ̂ = (2n_1 + n_2) / [(2n_1 + n_2) + (n_2 + 2n_3)] = (2n_1 + n_2) / (2n)

With n_1 = 125 and n_2 = 225, θ̂ = 475/1000 = .475. ■

Once θ = (θ_1, …, θ_m) has been estimated by θ̂ = (θ̂_1, …, θ̂_m), the estimated expected cell counts are the nπ_i(θ̂)'s. These are now used in place of the np_i0's of Section 13.1 to specify a χ² statistic.

THEOREM Under general "regularity" conditions on θ_1, …, θ_m and the π_i(θ)'s, if θ_1, …, θ_m are estimated by the method of maximum likelihood as described previously and n is large,

χ² = Σ_{all cells} (observed − estimated expected)² / (estimated expected) = Σ_{i=1}^{k} [N_i − nπ_i(θ̂)]² / (nπ_i(θ̂))

has approximately a chi-squared distribution with k − 1 − m df when H_0 of (13.2) is true. An approximately level α test of H_0 versus H_a is then to reject H_0 if χ² ≥ χ²_{α,k−1−m}. In practice, the test can be used if nπ_i(θ̂) ≥ 5 for every i. Notice that the number of degrees of freedom is reduced by the number of θ_i's estimated.

Example 13.6 (Example 13.5 continued) With θ̂ = .475 and n = 500, the estimated expected cell counts are nπ_1(θ̂) = 500(.475)² = 112.81, nπ_2(θ̂) = (500)(2)(.475)(1 − .475) = 249.38, and nπ_3(θ̂) = 500 − 112.81 − 249.38 = 137.81. Then

χ² = (125 − 112.81)²/112.81 + (225 − 249.38)²/249.38 + (150 − 137.81)²/137.81 = 4.78

Since χ²_{.05,k−1−m} = χ²_{.05,3−1−1} = χ²_{.05,1} = 3.843 and 4.78 ≥ 3.843, H_0 is rejected. Appendix Table A.10 shows that P-value ≈ .029. ■

Example 13.7 Consider a series of games between two teams, I and II, that terminates as soon as one team has won four games (with no possibility of a tie).
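Returning to the MN blood-group analysis (Examples 13.5–13.6), the closed-form estimate and the test can be reproduced numerically; a SciPy sketch:

```python
import numpy as np
from scipy.stats import chi2

# MN blood-group data: observed counts for types M, MN, N.
obs = np.array([125, 225, 150])
n = obs.sum()

theta = (2 * obs[0] + obs[1]) / (2 * n)              # closed-form mle, .475
probs = np.array([theta**2, 2 * theta * (1 - theta), (1 - theta)**2])
expected = n * probs                                 # 112.81, 249.38, 137.81

stat = np.sum((obs - expected) ** 2 / expected)      # ~ 4.78
pval = chi2.sf(stat, df=3 - 1 - 1)                   # k - 1 - m = 1 df, ~ .029
```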
A simple probability model for such a series assumes that outcomes of successive games are independent and that the probability of team I winning any particular game is a constant θ. We arbitrarily designate I the better team, so that θ > .5. Any particular series can then terminate after 4, 5, 6, or 7 games. Let π_1(θ), π_2(θ), π_3(θ), π_4(θ) denote the probability of termination in 4, 5, 6, and 7 games, respectively. Then

π_1(θ) = P(I wins in 4 games) + P(II wins in 4 games) = θ⁴ + (1 − θ)⁴

π_2(θ) = P(I wins 3 of the first 4 and the fifth) + P(I loses 3 of the first 4 and the fifth)
       = C(4,3) θ³(1 − θ) · θ + C(4,3) θ(1 − θ)³ · (1 − θ) = 4θ(1 − θ)[θ³ + (1 − θ)³]

π_3(θ) = 10θ²(1 − θ)²[θ² + (1 − θ)²]

π_4(θ) = 20θ³(1 − θ)³

The article "Seven-Game Series in Sports" by Groeneveld and Meeden (Math. Mag., 1975: 187–192) tested the fit of this model to results of National Hockey League playoffs during the period 1943–1967 (when league membership was stable). The data appears in Table 13.5.

Table 13.5 Observed and expected counts for the simple model

Cell:                          1      2      3      4
Number of games played:        4      5      6      7
Observed frequency:            15     26     24     18
Estimated expected frequency:  16.4   24.1   23.3   19.2

The estimated expected cell counts are 83π_i(θ̂), where θ̂ is the value of θ that maximizes

{θ⁴ + (1 − θ)⁴}^{15} · {4θ(1 − θ)[θ³ + (1 − θ)³]}^{26} · {10θ²(1 − θ)²[θ² + (1 − θ)²]}^{24} · {20θ³(1 − θ)³}^{18}   (13.5)

Standard calculus methods fail to yield a nice formula for the maximizing value θ̂, so it must be computed using numerical methods. The result is θ̂ = .654, from which π_i(θ̂) and the estimated expected cell counts are computed. The computed value of χ² is .360, and (since k − 1 − m = 4 − 1 − 1 = 2) χ²_{.10,2} = 4.605. There is thus no reason to reject the simple model as applied to NHL playoff series.

The cited article also considered World Series data for the period 1903–1973. For the simple model, χ² = 5.97, so the model does not seem appropriate.
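The numerical maximization of (13.5) for the NHL data can be sketched with scipy.optimize; the search bounds are an assumption reflecting θ > .5:

```python
import numpy as np
from scipy.optimize import minimize_scalar

obs = np.array([15, 26, 24, 18])     # NHL series lasting 4, 5, 6, 7 games

def cell_probs(t):
    q = 1 - t
    return np.array([t**4 + q**4,
                     4 * t * q * (t**3 + q**3),
                     10 * t**2 * q**2 * (t**2 + q**2),
                     20 * t**3 * q**3])

def neg_loglik(t):
    # Minimizing the negative log of (13.5) maximizes the likelihood.
    return -np.sum(obs * np.log(cell_probs(t)))

res = minimize_scalar(neg_loglik, bounds=(0.501, 0.999), method="bounded")
theta = res.x                                        # ~ .654
expected = obs.sum() * cell_probs(theta)
stat = np.sum((obs - expected) ** 2 / expected)      # ~ .36, df = 4 - 1 - 1 = 2
```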
The suggested reason for this is that for the simple model

P(series lasts exactly six games | series lasts at least six games) > .5   (13.6)

whereas of the 38 series that actually lasted at least six games, only 13 lasted exactly six. The following alternative model is then introduced:

π_1(θ_1, θ_2) = θ_1⁴ + (1 − θ_1)⁴
π_2(θ_1, θ_2) = 4θ_1(1 − θ_1)[θ_1³ + (1 − θ_1)³]
π_3(θ_1, θ_2) = 10θ_1²(1 − θ_1)² θ_2
π_4(θ_1, θ_2) = 10θ_1²(1 − θ_1)² (1 − θ_2)

The first two π_i's are identical to the simple model, whereas θ_2 is the conditional probability in (13.6) (which can now be any number between zero and one). The values of θ̂_1 and θ̂_2 that maximize the expression analogous to expression (13.5) are determined numerically as θ̂_1 = .614, θ̂_2 = .342. A summary appears in Table 13.6, and χ² = .384. Two parameters are estimated, so df = k − 1 − m = 1 with χ²_{.10,1} = 2.706, indicating a good fit of the data to this new model.

Table 13.6 Observed and expected counts for the more complex model

Number of games played:  4   5   6   7

One of the regularity conditions on the θ_i's in the theorem is that they be functionally independent of one another. That is, no single θ_i can be determined from the values of the other θ_i's, so that m is the number of functionally independent parameters estimated. A general rule of thumb for degrees of freedom in a chi-squared test is the following:

df = (number of freely determined cell counts) − (number of independent parameters estimated)

This rule will be used in connection with several different chi-squared tests in the next section.

Goodness of Fit for Discrete Distributions

Many experiments involve observing a random sample X_1, X_2, …, X_n from some discrete distribution. One may then wish to investigate whether the underlying distribution is a member of a particular family, such as the Poisson or negative binomial family.
In the case of both a Poisson and a negative binomial distribution, the set of possible values is infinite, so the values must be grouped into k subsets before a chi-squared test can be used. The groupings should be done so that the expected frequency in each cell (group) is at least 5. The last cell will then correspond to X values of c, c + 1, c + 2, … for some value c.

This grouping can considerably complicate the computation of the θ̂_i's and estimated expected cell counts. This is because the theorem requires that the θ̂_i's be obtained from the cell counts N_1, …, N_k rather than the sample values X_1, …, X_n.

Example 13.8 Table 13.7 presents count data on the number of Larrea divaricata plants found in each of 48 sampling quadrats, as reported in the article "Some Sampling Characteristics of Plants and Arthropods of the Arizona Desert" (Ecology, 1962: 567–571).

Table 13.7 Observed counts for Example 13.8

Cell:              1   2   3    4    5
Number of plants:  0   1   2    3    ≥4
Frequency:         9   9   10   14   6

The author fit a Poisson distribution to the data. Let λ denote the Poisson parameter and suppose for the moment that the six counts in cell 5 were actually 4, 4, 5, 5, 6, 6. Then denoting sample values by x_1, …, x_48, nine of the x_i's were 0, nine were 1, and so on. The likelihood of the observed sample is

L(λ) = ∏ e^{−λ} λ^{x_i} / x_i! = e^{−48λ} λ^{101} / (x_1! ⋯ x_48!)

The value of λ for which this is maximized is λ̂ = Σx_i/n = 101/48 = 2.10 (the value reported in the article).

However, the λ̂ required for χ² is obtained by maximizing expression (13.4) rather than the likelihood of the full sample. The cell probabilities are

π_i(λ) = e^{−λ} λ^{i−1} / (i − 1)!,  i = 1, 2, 3, 4
π_5(λ) = 1 − Σ_{i=1}^{4} π_i(λ)

so the right-hand side of (13.4) becomes

[e^{−λ}]⁹ · [e^{−λ}λ]⁹ · [e^{−λ}λ²/2]^{10} · [e^{−λ}λ³/6]^{14} · [1 − Σ_{i=1}^{4} π_i(λ)]⁶   (13.7)

There is no nice formula for λ̂, the maximizing value of λ in this latter expression, so it must be obtained numerically.
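The numerical maximization of the grouped likelihood (13.7) can be sketched with scipy.optimize; the search bounds are an assumption:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

counts = np.array([9, 9, 10, 14, 6])    # cells X = 0, 1, 2, 3, and X >= 4

def neg_loglik(lam):
    p = poisson.pmf(np.arange(4), lam)
    p = np.append(p, 1 - p.sum())        # lumped upper cell P(X >= 4)
    return -np.sum(counts * np.log(p))

res = minimize_scalar(neg_loglik, bounds=(0.1, 10.0), method="bounded")
lam_grouped = res.x                      # ~ 2.047, vs. full-sample 101/48 = 2.10
```

The grouped estimate is slightly smaller than the full-sample estimate because the six values in the lumped cell enter the likelihood only through the event X ≥ 4.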
Because the parameter estimates are usually much more difficult to compute from the grouped data than from the full sample, they are often computed using this latter method. When these "full" estimators are used in the chi-squared statistic, the distribution of the statistic is altered, and a level α test is no longer specified by the critical value χ²_{α,k−1−m}.

THEOREM Let θ̂_1, …, θ̂_m be the maximum likelihood estimators of θ_1, …, θ_m based on the full sample X_1, …, X_n, and let χ² denote the statistic based on these estimators. Then the critical value c_α that specifies a level α upper-tailed test satisfies

χ²_{α,k−1−m} ≤ c_α ≤ χ²_{α,k−1}   (13.8)

The test procedure implied by this theorem is the following:

If χ² ≥ χ²_{α,k−1}, reject H_0.
If χ² ≤ χ²_{α,k−1−m}, do not reject H_0.
If χ²_{α,k−1−m} < χ² < χ²_{α,k−1}, withhold judgment.   (13.9)

Example 13.9 (Example 13.8 continued) Using the full-sample estimate λ̂ = 2.10, the estimated expected cell counts are 5.88, 12.34, 12.96, 9.07, and 7.75, so

χ² = (9 − 5.88)²/5.88 + ⋯ + (6 − 7.75)²/7.75 = 6.31

Since m = 1 and k = 5, at level .05 we need χ²_{.05,3} = 7.815 and χ²_{.05,4} = 9.488. Because 6.31 < 7.815, we do not reject H_0; at the 5% level, the Poisson distribution provides a reasonable fit to the data. Notice that χ²_{.10,3} = 6.251 and χ²_{.10,4} = 7.779, so at level .10 we would have to withhold judgment on whether the Poisson distribution was appropriate.

For comparison, we can with a little additional effort maximize expression (13.7). Use of a graphing calculator gives λ̂ = 2.047. Because this differs very little from 2.10, there is little change in the results. Using 2.047, we get the estimated expected cell counts 6.197, 12.687, 12.985, 8.860, and 7.271, and the resulting value of χ² is 6.230. Comparing this with χ²_{.05,3} = 7.815, we do not reject the Poisson null hypothesis at the .05 level. Because 6.230 does not quite exceed χ²_{.10,3} = 6.251, we also do not reject the null hypothesis at the 10% level. ■

Sometimes even the maximum likelihood estimates based on the full sample are quite difficult to compute. This is the case, for example, for the two-parameter (generalized) negative binomial distribution.
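Example 13.9's computation with the full-sample estimate, together with the decision rule (13.9), can be sketched as:

```python
import numpy as np
from scipy.stats import chi2, poisson

counts = np.array([9, 9, 10, 14, 6])
lam = 2.10                                           # full-sample mle 101/48,
                                                     # rounded as in the text
p = poisson.pmf(np.arange(4), lam)
p = np.append(p, 1 - p.sum())
expected = counts.sum() * p                          # 5.88, 12.34, 12.96, 9.07, 7.75
stat = np.sum((counts - expected) ** 2 / expected)   # ~ 6.31

lower = chi2.ppf(0.95, df=3)                         # chi^2_{.05, k-1-m} = 7.815
upper = chi2.ppf(0.95, df=4)                         # chi^2_{.05, k-1}   = 9.488
# stat < lower, so per (13.9) H0 is not rejected at the .05 level
```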
In such situations, method-of-moments estimates are often used and the resulting χ² compared to χ²_{α,k−1−m}, although it is not known to what extent the use of moments estimators affects the true critical value.

Goodness of Fit for Continuous Distributions

The chi-squared test can also be used to test whether the sample comes from a specified family of continuous distributions, such as the exponential family or the normal family. The choice of cells (class intervals) is even more arbitrary in the continuous case than in the discrete case. To ensure that the chi-squared test is valid, the cells should be chosen independently of the sample observations. Once the cells are chosen, it is almost always quite difficult to estimate unspecified parameters (such as μ and σ in the normal case) from the observed cell counts, so instead mle's based on the full sample are computed. The critical value c_α again satisfies (13.8), and the test procedure is given by (13.9).

Example 13.10 The Institute of Nutrition of Central America and Panama (INCAP) has carried out extensive dietary studies and research projects in Central America. In one study reported in the November 1964 issue of the American Journal of Clinical Nutrition ("The Blood Viscosity of Various Socioeconomic Groups in Guatemala"), serum total cholesterol measurements for a sample of 49 low-income rural Indians were reported as follows (in mg/L):

204  108  140  152  158  129  175  146  157  174  192  194
144  152  135  223  145  231  115  131  129  142  114  173
226  155  166  220  180  172  143  148  171  143  124  158
144  108  189  136  136  197  131  95   139  181  165  142  162

Is it plausible that serum cholesterol level is normally distributed for this population? Suppose that prior to sampling, it was believed that plausible values for μ and σ were 150 and 30, respectively.
The seven equiprobable class intervals for the standard normal distribution are (−∞, −1.07), (−1.07, −.57), (−.57, −.18), (−.18, .18), (.18, .57), (.57, 1.07), and (1.07, ∞), with each endpoint also giving the distance in standard deviations from the mean for any other normal distribution. For μ = 150 and σ = 30, these intervals become (−∞, 117.9), (117.9, 132.9), (132.9, 144.6), (144.6, 155.4), (155.4, 167.1), (167.1, 182.1), and (182.1, ∞).

To obtain the estimated cell probabilities π_1(μ̂, σ̂), …, π_7(μ̂, σ̂), we use the mle's μ̂ and σ̂. In Chapter 7, the mle of σ was shown to be σ̂ = [Σ(x_i − x̄)²/n]^{1/2}, so with s = 31.75,

μ̂ = x̄ = 157.02   σ̂ = sqrt((n − 1)/n) · s = sqrt(48/49) · (31.75) = 31.42

Each π_i(μ̂, σ̂) is then the probability that a normal rv X with mean 157.02 and standard deviation 31.42 falls in the ith class interval. For example,

π_2(μ̂, σ̂) = P(117.9 ≤ X < 132.9) = P(−1.25 ≤ Z < −.77) = .1150

so nπ_2(μ̂, σ̂) = 49(.1150) = 5.64. Observed and estimated expected cell counts are shown in Table 13.8.

Table 13.8 Observed and expected counts for Example 13.10

Cell:                 (−∞, 117.9)  (117.9, 132.9)  (132.9, 144.6)  (144.6, 155.4)  (155.4, 167.1)  (167.1, 182.1)  (182.1, ∞)
Observed:             5            5               11              6               6               7               9
Estimated expected:   5.17         5.64            6.08            6.64            7.12            7.97            10.38

The computed χ² is 4.60. With k = 7 cells and m = 2 parameters estimated, χ²_{.05,k−1} = χ²_{.05,6} = 12.592 and χ²_{.05,k−1−m} = χ²_{.05,4} = 9.488. Since 4.60 < 9.488, a normal distribution provides quite a good fit to the data. ■

Example 13.11 The article "Some Studies on Tuft Weight Distribution in the Opening Room" (Textile Res. J., 1976: 567–573) reports the accompanying data on the distribution of output tuft weight X (mg) of cotton fibers for the input weight x_0 = 70.
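Example 13.10 can be reproduced numerically. The observed counts below were tallied from the 49 listed measurements, and exact normal probabilities are used rather than the text's z-values rounded to two decimals, which is why the statistic comes out near 4.5 instead of the reported 4.60; a SciPy sketch:

```python
import numpy as np
from scipy.stats import chi2, norm

# Cell boundaries were fixed in advance from mu = 150, sigma = 30;
# cell probabilities are then estimated from the mle's mu-hat, sigma-hat.
bounds = np.array([117.9, 132.9, 144.6, 155.4, 167.1, 182.1])
observed = np.array([5, 5, 11, 6, 6, 7, 9])          # tallied from the data
mu_hat, sigma_hat = 157.02, 31.42

cdf = norm.cdf(bounds, mu_hat, sigma_hat)
probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))  # seven cell probabilities
expected = 49 * probs
stat = np.sum((observed - expected) ** 2 / expected)  # ~ 4.5

upper = chi2.ppf(0.95, df=7 - 1 - 2)                  # chi^2_{.05,4} = 9.488
# stat < upper: by (13.9), do not reject normality
```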
Interval: 0-8 8-16 16-24 24-32 32-40 40-48 48-56 56-64 64-70 bP PEEL Frequency seems] [ee fel | requency --- Trang 753 --- 740 = charter 13 Goodness-of-Fit Tests and Categorical Data Analysis The authors postulated a truncated exponential distribution: Ho: f(x) = — O" -q", Show that the mle of p is given by 12 to establish a single category “ > 7.”] p=(Xxi—n)/Tx;, and compute p for the Stablisia sing gory eh given data. --- Trang 756 --- 13.2 Goodness-of-Fit Tests for Composite Hypotheses 743 [Hint: Write the likelihood as a function of 0; and Number GO 12 34S 67 89 10 M1 12 0, take the natural log, then compute 0/00; and ohBorend 8/002, equate them to 0, and solve for 01,03.) Frequency | 24 16 16 18 1596534 3 0 1 20. The article “Compatibility of Outer and Fusible 18. The article “A Probabilistic Analysis of Dissolved Interlining Fabrics in Tailored Garments (Textile Oxygen—Biochemical Oxygen Demand Relation- Res. J., 1997: 137-142) gave the following ship in Steams” (J. Water Resources Control observations on bending rigidity (WN - m) for Fed., 1969: 73-90) reports data on the rate of medium-quality fabric specimens, from which oxygenation in streams at 20°C ina certain region. the accompanying MINITAB output was The sample mean and standard deviation were obtained: computed as. = .173 and s = .066, respectively. ate by 4 Be tex OF FS HHS Based on the accompanying frequency idistnibu- 46.9 68.3 308 116.7 39.5 73.8 80.6 203 tion, can it be concluded that oxygenation rate is a 25.8 30.9 39.2 368 46.6 15.6 323 normally distributed variable? Use the chi- squared test with 2 = .05. Normal Probability Plot Rate (per day) Frequency 2? Below .100 12 96 ° 2 280 ® -100-below .150 20 5 é 150-below .200 23 & a -200-below .250 IS a” .250 or more 13 08 * oo 01 19, Each headlight on an automobile undergoing an sa annual vehicle inspection can be focused either too 20 70 120 high (H), too low (L), or properly (WV). 
Checking the two headlights simultaneously (and not distinguishing between left and right) results in the six possible outcomes HH, LL, NN, HL, HN, and LN. If the probabilities (population proportions) for the single headlight focus direction are P(H) = θ₁, P(L) = θ₂, and P(N) = 1 − θ₁ − θ₂, and the two headlights are focused independently of each other, the probabilities of the six outcomes for a randomly selected car are the following:

    p₁ = θ₁²      p₂ = θ₂²      p₃ = (1 − θ₁ − θ₂)²
    p₄ = 2θ₁θ₂    p₅ = 2θ₁(1 − θ₁ − θ₂)    p₆ = 2θ₂(1 − θ₁ − θ₂)

Use the accompanying data to test the null hypothesis

    H₀: p₁ = π₁(θ₁, θ₂), ..., p₆ = π₆(θ₁, θ₂)

where the πᵢ(θ₁, θ₂)'s are given previously.

    Outcome:    HH   LL   NN   HL   HN   LN
    Frequency:  49   26   14   20   53   38

MINITAB output for the bending rigidity data of Exercise 20:

    Normal Probability Plot
    Average: 97.4217     Test for Normality
    Std Dev: 25.8101     R: 0.9116
    N of data: 23        p-value (approx): <0.0100

Would you use a one-sample t confidence interval to estimate true average bending rigidity? Explain your reasoning.

21. The article from which the data in Exercise 20 was obtained also gave the accompanying data on the composite mass/outer fabric mass ratio for high-quality fabric specimens.

    1.15  1.40  1.34  1.29  1.36  1.26  1.22
    1.40  1.29  1.41  1.32  1.34  1.26  1.36
    1.36  1.30  1.28  1.45  1.29  1.28  1.38
    1.55  1.46  1.32

MINITAB gave r = .9852 as the value of the Ryan–Joiner test statistic and reported that P-value > .10. Would you use the one-sample t test to test hypotheses about the value of the true average ratio? Why or why not?

22. The article "Nonbloated Burned Clay Aggregate Concrete" (J. Mater., 1972: 555–563) reports the following data on 7-day flexural strength of nonbloated burned clay aggregate concrete samples (psi). Test at level .10 to decide whether flexural strength is a normally distributed variable.

    257  327  317  330  340  340  343  374  377  386  383  393
    407  407  434  427  440  407  450  440  456  460
    456  476  480  490  497  526  546  700

13.3 Two-Way Contingency Tables

In the previous two sections, we discussed inferential problems in which the count data was displayed in a rectangular table of cells. Each table consisted of one row and a specified number of columns, where the columns corresponded to categories into which the population had been divided. We now study problems in which the data also consists of counts or frequencies, but the data table will now have I rows (I ≥ 2) and J columns, so IJ cells. There are two commonly encountered situations in which such data arises:

1. There are I populations of interest, each corresponding to a different row of the table, and each population is divided into the same J categories. A sample is taken from the ith population (i = 1, ..., I), and the counts are entered in the cells in the ith row of the table. For example, customers of each of I = 3 department store chains might have available the same J = 5 payment categories: cash, check, store credit card, Visa, and MasterCard.

2. There is a single population of interest, with each individual in the population categorized with respect to two different factors. There are I categories associated with the first factor and J categories associated with the second factor. A single sample is taken, and the number of individuals belonging in both category i of factor 1 and category j of factor 2 is entered in the cell in row i, column j (i = 1, ..., I; j = 1, ..., J). As an example, customers making a purchase might be classified according to the department in which the purchase was made, with I = 6 departments, and according to method of payment, with J = 5 as in (1) above.

Let n_ij denote the number of individuals in the sample(s) falling in the (i, j)th cell (row i, column j) of the table, that is, the (i, j)th cell count. The table displaying the n_ij's is called a two-way contingency table; a prototype is shown in Table 13.9.
Table 13.9 A two-way contingency table

              Col. 1    Col. 2    ...    Col. J    Row total
    Row 1      n_11      n_12     ...     n_1J       n_1.
    Row 2      n_21      n_22     ...     n_2J       n_2.
    ...
    Row I      n_I1      n_I2     ...     n_IJ       n_I.
    Col total  n_.1      n_.2     ...     n_.J        n

In situations of type 1, we want to investigate whether the proportions in the different categories are the same for all populations. The null hypothesis states that the populations are homogeneous with respect to these categories. In type 2 situations, we investigate whether the categories of the two factors occur independently of each other in the population.

Testing for Homogeneity

We assume that each individual in every one of the I populations belongs in exactly one of J categories. A sample of n_i individuals is taken from the ith population; let n = Σn_i and

    n_ij = the number of individuals in the ith sample who fall into category j
    n_.j = the total number of individuals among the n sampled who fall into category j

The n_ij's are recorded in a two-way contingency table with I rows and J columns. The sum of the n_ij's in the ith row is n_i, whereas the sum of entries in the jth column is n_.j. Let

    p_ij = the proportion of the individuals in population i who fall into category j

Thus, for population 1, the J proportions are p_11, p_12, ..., p_1J (which sum to 1), and similarly for the other populations. The null hypothesis of homogeneity states that the proportion of individuals in category j is the same for each population and that this is true for every category; that is, for every j, p_1j = p_2j = ··· = p_Ij.

When H₀ is true, we can use p_1, p_2, ..., p_J to denote the population proportions in the J different categories; these proportions are common to all I populations. The expected number of individuals in the ith sample who fall in the jth category when H₀ is true is then E(N_ij) = n_i · p_j. To estimate E(N_ij), we must first estimate p_j, the proportion in category j.
Among the total sample of n individuals, N_.j fall into category j, so we use p̂_j = n_.j/n as the estimator (this can be shown to be the maximum likelihood estimator of p_j). Substitution of the estimate p̂_j for p_j in n_i p_j yields a simple formula for estimated expected counts under H₀:

    ê_ij = estimated expected count in cell (i, j) = n_i · (n_.j/n)
         = (ith row total)(jth column total)/n                          (13.10)

The test statistic also has the same form as in previous problem situations. The number of degrees of freedom comes from the general rule of thumb. In each row of Table 13.9 there are J − 1 freely determined cell counts (each sample size n_i is fixed), so there are a total of I(J − 1) freely determined cells. Parameters p_1, ..., p_J are estimated, but because Σp_j = 1, only J − 1 of these are independent. Thus df = I(J − 1) − (J − 1) = (J − 1)(I − 1).

    Null hypothesis: H₀: p_1j = p_2j = ··· = p_Ij, j = 1, 2, ..., J
    Alternative hypothesis: Hₐ: H₀ is not true
    Test statistic value:
        χ² = Σ_all cells (observed − estimated expected)²/(estimated expected)
           = Σᵢ Σⱼ (n_ij − ê_ij)²/ê_ij
    Rejection region: χ² ≥ χ²_α,(I−1)(J−1)

P-value information can be obtained as described in Section 13.1. The test can safely be applied as long as ê_ij ≥ 5 for all cells.

Example 13.13  A company packages a particular product in cans of three different sizes, each one using a different production line. Most cans conform to specifications, but a quality control engineer has identified the following reasons for nonconformance: (1) blemish on can; (2) crack in can; (3) improper pull tab location; (4) pull tab missing; (5) other.
A sample of nonconforming units is selected from each of the three lines, and each unit is categorized according to reason for nonconformity, resulting in the following contingency table data:

                          Reason for Nonconformity
                   Blemish   Crack   Location   Missing   Other   Sample Size
    Production  1     34       65       17         21       13        150
    Line        2     23       52       25         19        6        125
                3     32       28       16         14       10        100
    Total             89      145       58         54       29        375

Does the data suggest that the proportions falling in the various nonconformance categories are not the same for the three lines? The parameters of interest are the various proportions, and the relevant hypotheses are

    H₀: the production lines are homogeneous with respect to the five nonconformance categories; that is, p_1j = p_2j = p_3j for j = 1, ..., 5
    Hₐ: the production lines are not homogeneous with respect to the categories

The estimated expected frequencies (assuming homogeneity) must now be calculated. Consider the first nonconformance category for the first production line. When the lines are homogeneous,

    estimated expected number among the 150 selected units that are blemished
      = (first row total)(first column total)/(total of sample sizes) = (150)(89)/375 = 35.60

The contribution of the cell in the upper-left corner to χ² is then

    (observed − estimated expected)²/(estimated expected) = (34 − 35.60)²/35.60 = .072

The other contributions are calculated in a similar manner. Figure 13.4 shows MINITAB output for the chi-squared test. The observed count is the top number in each cell, and directly below it is the estimated expected count. The contribution of each cell to χ² appears below the counts, and the test statistic value is χ² = 14.159. All estimated expected counts are at least 5, so combining categories is unnecessary. The test is based on (3 − 1)(5 − 1) = 8 df. Appendix Table A.10 shows that the values that capture upper-tail areas of .08 and .075 under the 8 df curve are 14.06 and 14.26, respectively.
Thus the P-value is between .075 and .08; MINITAB gives P-value = .079. The null hypothesis of homogeneity should not be rejected at the usual significance levels of .05 or .01, but it would be rejected for the higher α of .10.

    Expected counts are printed below observed counts

            blem     crack    loc      missing  other    Total
      1     34       65       17       21       13       150
            35.60    58.00    23.20    21.60    11.60
      2     23       52       25       19        6       125
            29.67    48.33    19.33    18.00     9.67
      3     32       28       16       14       10       100
            23.73    38.67    15.47    14.40     7.73
    Total   89       145      58       54       29       375

    ChiSq = 0.072 + 0.845 + 1.657 + 0.017 + 0.169 +
            1.498 + 0.278 + 1.661 + 0.056 + 1.391 +
            2.879 + 2.943 + 0.018 + 0.011 + 0.664 = 14.159
    df = 8, p = 0.079

Figure 13.4 MINITAB output for the chi-squared test of Example 13.13

Testing for Independence

We focus now on the relationship between two different factors in a single population. The number of categories of the first factor will be denoted by I and the number of categories of the second factor by J. Each individual in the population is assumed to belong in exactly one of the I categories associated with the first factor and exactly one of the J categories associated with the second factor. For example, the population of interest might consist of all individuals who regularly watch the national news on television, with the first factor being preferred network (ABC, CBS, NBC, PBS, CNN, or FOX, so I = 6) and the second factor political philosophy (liberal, moderate, conservative, giving J = 3). For a sample of n individuals taken from the population, let n_ij denote the number among the n who fall both in category i of the first factor and category j of the second factor. The n_ij's can be displayed in a two-way contingency table with I rows and J columns. In the case of homogeneity for I populations, the row totals were fixed in advance, and only the J column totals were random. Now only the total sample size is fixed, and both the n_i.'s and n_.j's are observed values of random variables.
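The homogeneity computation of Example 13.13 can be reproduced in a few lines of code. The sketch below is not from the text; it recomputes the estimated expected counts via formula (13.10), checks that they agree with the equivalent form n(n_i./n)(n_.j/n), and recovers χ² = 14.159 on 8 df:

```python
# Chi-squared test of homogeneity for the production-line data of
# Example 13.13, computed from first principles.

counts = [
    [34, 65, 17, 21, 13],   # line 1
    [23, 52, 25, 19, 6],    # line 2
    [32, 28, 16, 14, 10],   # line 3
]

row_tot = [sum(r) for r in counts]            # fixed sample sizes n_i
col_tot = [sum(c) for c in zip(*counts)]      # category totals n_.j
n = sum(row_tot)                              # 375

# Estimated expected counts, formula (13.10): (row total)(col total)/n.
expected = [[row_tot[i] * col_tot[j] / n for j in range(5)] for i in range(3)]

# The same counts written as n * (n_i./n) * (n_.j/n).
expected2 = [[n * (row_tot[i] / n) * (col_tot[j] / n) for j in range(5)]
             for i in range(3)]

chi2 = sum((counts[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(3) for j in range(5))
df = (3 - 1) * (5 - 1)                        # 8
```

The upper-left expected count comes out 35.60 and the statistic matches the MINITAB value to rounding.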
To state the hypotheses of interest, let

    p_ij = the proportion of individuals in the population who belong in category i of factor 1 and category j of factor 2
         = P(a randomly selected individual falls in both category i of factor 1 and category j of factor 2)

Then

    p_i. = Σⱼ p_ij = P(a randomly selected individual falls in category i of factor 1)
    p_.j = Σᵢ p_ij = P(a randomly selected individual falls in category j of factor 2)

Recall that two events A and B are independent if P(A ∩ B) = P(A) · P(B). The null hypothesis here says that an individual's category with respect to factor 1 is independent of the category with respect to factor 2. In symbols, this becomes p_ij = p_i. · p_.j for every pair (i, j).

The expected count in cell (i, j) is n · p_ij, so when H₀ is true, E(N_ij) = n · p_i. · p_.j. To obtain a chi-squared statistic, we must therefore estimate the p_i.'s (i = 1, ..., I) and p_.j's (j = 1, ..., J). The (maximum likelihood) estimates are

    p̂_i. = n_i./n = sample proportion for category i of factor 1
    p̂_.j = n_.j/n = sample proportion for category j of factor 2

This gives estimated expected cell counts identical to those in the case of homogeneity:

    ê_ij = n · p̂_i. · p̂_.j = n · (n_i./n)(n_.j/n) = n_i. n_.j / n
         = (ith row total)(jth column total)/n

The test statistic is also identical to that used in testing for homogeneity, as is the number of degrees of freedom. This is because the number of freely determined cell counts is IJ − 1, since only the total n is fixed in advance. There are I estimated p_i.'s, but only I − 1 are independently estimated since Σp_i. = 1, and similarly J − 1 p_.j's are independently estimated, so I + J − 2 parameters are independently estimated. The rule of thumb now yields df = IJ − 1 − (I + J − 2) = IJ − I − J + 1 = (I − 1) · (J − 1).

    Null hypothesis: H₀: p_ij = p_i. · p_.j, i = 1, ..., I; j = 1, ..., J
    Alternative hypothesis: Hₐ: H₀ is not true
    Test statistic value:
        χ² = Σ_all cells (observed − estimated expected)²/(estimated expected)
           = Σᵢ Σⱼ (n_ij − ê_ij)²/ê_ij
    Rejection region: χ² ≥ χ²_α,(I−1)(J−1)

Again, P-value information can be obtained as described in Section 13.1. The test can safely be applied as long as ê_ij ≥ 5 for all cells.

Example 13.14  A study of the relationship between facility conditions at gasoline stations and aggressiveness in the pricing of gasoline ("An Analysis of Price Aggressiveness in Gasoline Marketing," J. Market. Res., 1970: 36–42) reports the accompanying data based on a sample of n = 441 stations. At level .01, does the data suggest that facility conditions and pricing policy are independent of one another? Observed and estimated expected counts are given in Table 13.10.

Table 13.10 Observed and estimated expected counts for Example 13.14
    Pricing policy: Aggressive, Neutral, Nonaggressive (column totals 134, 174, 133; n = 441)
    [individual cell counts illegible]

Thus

    χ² = (24 − 17.02)²/17.02 + ··· + (36 − 54.29)²/54.29

and because the computed value exceeds χ²_.01,4 = 13.277, the hypothesis of independence is rejected. We conclude that knowledge of a station's pricing policy does give information about the condition of facilities at the station. In particular, stations with an aggressive pricing policy appear more likely to have substandard facilities than stations with a neutral or nonaggressive policy.

Ordinal Factors and Logistic Regression

Sometimes a factor has ordinal categories, meaning that there is a natural ordering. For example, there is a natural ordering to freshman, sophomore, junior, senior. In such situations we can use a method that often has greater power to detect relationships. Consider the case in which the first factor is ordinal and the other has two categories. Denote by X the level of the first (ordinal) factor, the rows, which will be the predictor in the model.
Then Y designates the column, either one or two, and Y will be the dependent variable in the model. It is convenient for purposes of logistic regression to label column 1 as Y = 0 (failure) and column 2 as Y = 1 (success), corresponding to the usual notation for binomial trials. In terms of logistic regression, p(x) is the probability of success given that X = x:

    p(x) = P(Y = 1 | X = x) = P(j = 2 | i = x) = p_x2/(p_x1 + p_x2)

Then the logistic model of Chapter 12 says that

    p(x)/(1 − p(x)) = e^(β₀ + β₁x)

In terms of the odds of success in a row (estimated by the ratio of the two counts), the model says that the odds change proportionally (by the fixed multiple e^β₁) from row to row. For example, suppose a test is given in grades 1, 2, 3, and 4 with successes and failures as follows:

    Grade   Failed   Passed   Estimated Odds
      1       45       45           1
      2       30       60           2
      3       18       72           4
      4       10       80           8

Here the model fits perfectly, with odds ratio e^β₁ = 2, so β₁ = ln(2) and β₀ = −ln(2). In general, it should be clear that β₁ is the natural log of the odds ratio between successive rows. If a table with I rows and 2 columns has roughly a common odds ratio from row to row, then the logistic model should be a good fit if the rows are labeled with consecutive integers. We focus on the slope β₁ because the relationship between the two factors hinges on this parameter. The hypothesis of no relationship is equivalent to H₀: β₁ = 0, which is usually tested against a two-tailed alternative.

Is there a relationship between TV watching and physical fitness? For an answer we refer to the article "Television Viewing and Physical Fitness in Adults" (Res. Quart. Exercise Sport, 1990: 315–320). Subjects were asked about their television-viewing habits and were classified as physically fit if they scored in the excellent or very good category on a step test. Table 13.11 shows the results in the form of a 4 × 2 table.
The TV column gives the hours of viewing per day.

Table 13.11 TV versus fitness results
    TV Time    Unfit    Fit
    [cell entries illegible]

The rows need to be given specific numeric values for computational purposes, and it is convenient to make these just 1, 2, 3, 4, because consecutive integers correspond to the assumption of a common odds ratio from row to row. The columns may need to be labeled as 0 and 1 for input to a program. The logistic regression results from MINITAB are shown in Figure 13.5, where the estimated coefficient β̂₁ for TV is given as −.29 and the odds ratio is given as .75 = e^(−.29). This means that, for each increase in TV watching category, the odds of being fit decline to about 3/4 of the previous value. There is a loss of 25% for each increment in TV. The output shows two tests for β₁: a z based on the ratio of the coefficient to its estimated standard error, and G, which is based on a likelihood ratio test and gives the chi-squared approximation for the difference of log likelihoods. The two tests usually give very similar results, with G being approximately the square of z. In this case they agree that the P-value is around .02, which means that we should reject at the .05 level the hypothesis that β₁ = 0, and we can conclude that there is a relationship between TV watching and fitness. Of course, the existence of a relationship does not imply anything about one causing the other. By the way, a chi-squared test yields χ² = 6.161 with 3 df, P = .104, so with this test we would not conclude that there is a relationship, even at the 10% level. There is an advantage in using logistic regression for this kind of data.
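The perfect-fit grade example given earlier in this section can be checked numerically. This sketch is not part of the text; it recovers β₁ = ln 2 and β₀ = −ln 2 directly from the cell counts rather than by running a logistic regression routine:

```python
import math

# Grade-by-outcome counts from the text's perfect-fit example.
failed = [45, 30, 18, 10]
passed = [45, 60, 72, 80]
grades = [1, 2, 3, 4]

odds = [p / f for p, f in zip(passed, failed)]   # 1, 2, 4, 8
log_odds = [math.log(o) for o in odds]

# Successive differences of the log odds all equal beta1 = ln 2 ...
b1 = log_odds[1] - log_odds[0]
# ... and the intercept follows from log-odds = b0 + b1 * x at x = 1.
b0 = log_odds[0] - b1 * grades[0]
```

Because the empirical odds double exactly from row to row, the fitted slope is the common log odds ratio, just as the text asserts.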
    Logistic Regression Table
                                                   Odds     95% CI
    Predictor    Coef         SE Coef      Z       P       Ratio   Lower   Upper
    Constant    -1.21316     0.267486    -4.54   0.000
    TV          -0.290693    0.125588    -2.31   0.021     0.75    0.58    0.96

    Log-Likelihood = -483.205
    Test that all slopes are zero: G = 5.501, DF = 1, P-Value = 0.019

Figure 13.5 Logistic regression for TV versus fitness

Suppose there are two ordinal factors, each with more than two levels. This too can be handled with logistic regression, but it requires a procedure called ordinal logistic regression, which allows an ordinal dependent variable. When one factor is ordinal and the other is not, the analysis can be done with multinomial (also called nominal or polytomous) logistic regression, which allows a non-ordinal dependent variable. Models and methods for analyzing data in which each individual is categorized with respect to three or more factors (multidimensional contingency tables) are discussed in several of the references in the chapter bibliography.

Exercises  Section 13.3 (23–35)

23. Reconsider the Cubs data of Exercise 56 in Chapter 10. Form a 2 × 2 table for the data and use a χ² statistic to test the hypothesis of equal population proportions. The χ² statistic should be the square of the z statistic in Exercise 56 of Chapter 10. How are the P-values related?

For samples of men and women, the number of individuals whose feet were the same size, had a bigger left than right foot (a difference of half a shoe size or more), or had a bigger right than left foot:

    Sample      L > R    L = R    L < R

(Res. Quart. Exercise Sport, 1990: 259–267). The authors state, "Late-game leader is defined as the team that is ahead after three quarters in basketball and football, two periods in hockey, and seven innings in baseball. The chi-square

Each person in random samples of 2225 male coaches and 1141 female coaches was classified according to number of years of coaching experience to obtain the accompanying two-way table.
Is there enough …

Supplementary Exercises

value on three degrees of freedom is 10.52 (P < .015)."
a. State the relevant hypotheses and reach a conclusion using α = .05.
b. Do you think that your conclusion in part (a) can be attributed to a single sport being an anomaly?

from each of three different areas near industrial facilities. Each individual was asked whether he or she noticed odors (1) every day, (2) at least once/week, (3) at least once/month, (4) less often than once/month, or (5) not at all, resulting in the output from SPSS on the next page. State and test the appropriate hypotheses.

40. The accompanying two-way frequency table appears in the article "Marijuana Use in College" (Youth and Society, 1979: 323–334). Each of 445 college students was classified according to both frequency of marijuana use and parental use of alcohol and psychoactive drugs. Does the data suggest that parental usage and student usage are independent in the population from which the sample was drawn? Use the P-value method to reach a conclusion.

                                    Standard Level of Marijuana Use
                                    Never    Occasional    Regular
    Parental Use of    Neither      141         54            40
    Alcohol and        One           68         44            51
    Drugs              Both          17         11            19

43. Many shoppers have expressed unhappiness because grocery stores have stopped putting prices on individual grocery items. The article "The Impact of Item Price Removal on Grocery Shopping Behavior" (J. Market., 1980: 73–93) reports on a study in which each shopper in a sample was classified by age and by whether he or she felt the need for item pricing. Based on the accompanying data, does the need for item pricing appear to be independent of age?

    Age                       < 30    30–39    40–49    50–59    ≥ 60
    Number in sample           150      141       82       63      49
    Number who want
      item pricing             127      118       77       61      41

41. In a study of 2989 cancer deaths, the location of death (home, acute-care hospital, or chronic-care

44. Let p₁ denote the proportion of successes in a particular population.
The test statistic value in Chapter 9 for testing H₀: p₁ = p₁₀ was z = (p̂₁ − p₁₀)/√(p₁₀p₂₀/n), where p₂₀ = 1 − p₁₀. Show that for the case k = 2, the chi-squared statistic value of Section 13.1 satisfies χ² = z². [Hint: First show that (n₁ − np₁₀)² = (n₂ − np₂₀)².]

facility) and age at death were recorded, resulting in the given two-way frequency table ("Where Cancer Patients Die," Public Health Rep., 1983: 173). Using a .01 significance level, test the null hypothesis that age at death and location of death are independent.

                          Location
    Age         Home    Acute-Care    Chronic-Care
    15–54         94        418             23
    55–64        116        524             34
    65–74        156        581            109
    Over 74      138        558            238

45. The NCAA basketball tournament begins with 64 teams that are apportioned into four regional tournaments, each involving 16 teams. The 16 teams in each region are then ranked (seeded) from 1 to 16. During the 12-year period from 1991 to 2002, the top-ranked team won its regional tournament 22 times, the second-ranked team won 10 times, the third-ranked team won 5 times, and the remaining 11 regional tournaments were won by teams ranked lower than 3. Let P_ij denote the probability that the team ranked i in its region is victorious in its game against the team ranked j. Once the P_ij's are available, it is possible to compute the probability that any particular seed

42. In a study to investigate the extent to which individuals are aware of industrial odors in a certain region ("Annoyance and Health Reactions to Odor from Refineries and Other Industries in Carson, California," Environ.
Res., 1978: 119–132), a sample of individuals was obtained

SPSS output for Exercise 42:

    Crosstabulation: AREA BY CATEGORY
    (count / expected value)
                                     CATEGORY
    AREA         1.00       2.00       3.00       4.00       5.00      Row Total
    1.00       20/12.7    28/24.7    23/18.0    14/16.0    12/25.7        97
    2.00       14/12.4    34/24.2    21/17.6    14/15.7    12/25.1        95
    3.00        4/12.9    12/25.2    10/18.4    20/16.3    53/26.2        99
    Column       38         74         54         48         77          291
    Total      13.1%      25.4%      18.6%      16.5%      26.5%       100.0%

    Chi-Square = 70.64156    D.F. = 8    Significance = .0000
    Min E.F. = 12.405        Cells with E.F. < 5: None

wins its regional tournament (a complicated calculation because the number of outcomes in the sample space is quite large). The paper "Probability Models for the NCAA Regional Basketball Tournaments" (Amer. Statist., 1991: 35–38) proposed several different models for the P_ij's.
a. One model postulated P_ij = .5 − λ(i − j) with λ = … (from which P₁₆ = …, etc.). Based on this, P(seed #1 wins) = .27477, P(seed #2 wins) = .20834, and P(seed #3 wins) = .15429. Does this model appear to provide a good fit to the data?
b. A more sophisticated model has P_ij = .5 +

… Impaired Neurocognitive Performance in Collegiate Soccer Players" (Amer. J. Sports Med., 2002: 157–162) investigated this issue from several perspectives.
a. The paper reported that 45 of the 91 soccer players in their sample had suffered at least one concussion, 28 of 96 nonsoccer athletes had suffered at least one concussion, and only 8 of 53 student controls had suffered at least one concussion. Analyze this data and draw appropriate conclusions.
b.
For the soccer players, the sample correlation coefficient calculated from the values of x = soccer exposure (total number of competitive seasons played prior to enrollment in the study) and y = score on an immediate memory recall test was r = −.220. Interpret this result.
c. Here is summary information on scores on a controlled oral word-association test for the soccer and nonsoccer athletes:

    n₁ = 26, x̄₁ = 37.50, s₁ = 9.13;    n₂ = 56, x̄₂ = 39.63, s₂ = 10.19

Analyze this data and draw appropriate conclusions.

.2813625(z_i − z_j), where the z's are measures of relative strengths related to standard normal percentiles [percentiles for successive highly seeded teams are closer together than is the case for teams seeded lower], and .2813625 ensures that the range of probabilities is the same as for the model in part (a). The resulting probabilities of seeds 1, 2, or 3 winning their regional tournaments are .45883, .18813, and …, respectively. Assess the fit of this model.

46. Have you ever wondered whether soccer players suffer adverse effects from hitting "headers"? The authors of the article "No Evidence of

d. Considering the number of prior nonsoccer concussions, the values of mean ± SD for the three groups were soccer players, .30 ± .67; nonsoccer athletes, .49 ± .87; and student controls, .19 ± .48. Analyze this data and draw appropriate conclusions.

Consider nonoverlapping groups of two digits, and let p_ij denote the long-run proportion of groups for which the first digit is i and the second digit is j. What hypotheses about these proportions should be tested, and what is df for the chi-squared test?
c. Consider nonoverlapping groups of 5 digits. Could a chi-squared test of appropriate hypotheses about the p_ijklm's be based on the first 100,000 digits? Explain.

47. Do the successive digits in the decimal expansion of π behave as though they were selected from a random number table (or came from a computer's
random number generator)?
a. Let p₀ denote the long-run proportion of digits in the expansion that equal 0, and define p₁, ..., p₉ analogously. What hypotheses about these proportions should be tested, and what is df for the chi-squared test?
b. H₀ of part (a) would not be rejected for the nonrandom sequence 012…901…901….
d. The paper "Are the Digits of π an Independent and Identically Distributed Sequence?" (Amer. Statist., 2000: 12–16) considered the first 1,254,540 digits of π, and reported the following P-values for group sizes of 1, ..., 5 digits: .572, .078, .529, .691, .298. What would you conclude?

Bibliography

Agresti, Alan, An Introduction to Categorical Data Analysis (2nd ed.), Wiley, New York, 2007. An excellent treatment of various aspects of categorical data analysis by one of the most prominent researchers in this area.
Everitt, B. S., The Analysis of Contingency Tables (2nd ed.), Halsted Press, New York, 1992. A compact but informative survey of methods for analyzing categorical data, exposited with a minimum of mathematics.
Mosteller, Frederick, and Richard Rourke, Sturdy Statistics, Addison-Wesley, Reading, MA, 1973. Contains several very readable chapters on the varied uses of chi-square.

CHAPTER 14  Alternative Approaches to Inference

Introduction

In this final chapter we consider some inferential methods that are different in important ways from those considered earlier. Recall that many of the confidence intervals and test procedures developed in Chapters 9–12 were based on some sort of a normality assumption. As long as such an assumption is at least approximately satisfied, the actual confidence and significance levels will be at least approximately equal to the "nominal" levels, those prescribed by the experimenter through the choice of particular t or F critical values.
However, if there is a substantial violation of the normality assumption, the actual levels may differ considerably from the nominal levels (e.g., the use of t_.025 in a confidence interval formula may actually result in a confidence level of only 88% rather than the nominal 95%). In the first three sections of this chapter, we develop distribution-free or nonparametric procedures that are valid for a wide variety of underlying distributions rather than being tied to normality. We have actually already introduced several such methods: the bootstrap intervals and permutation tests are valid without restrictive assumptions on the underlying distribution(s).

Section 14.4 introduces the Bayesian approach to inference. The standard frequentist view of inference is that the parameter of interest, θ, has a fixed but unknown value. Bayesians, however, regard θ as a random variable having a prior probability distribution that incorporates whatever is known about its value. Then to learn more about θ, a sample from the conditional distribution f(x|θ) is obtained, and Bayes' theorem is used to produce the posterior distribution of θ given the data x₁, ..., xₙ. All Bayesian methods are based on this posterior distribution.

J.L. Devore and K.N. Berk, Modern Mathematical Statistics with Applications, Springer Texts in Statistics, DOI 10.1007/978-1-4614-0391-3_14, © Springer Science+Business Media, LLC 2012

14.1 The Wilcoxon Signed-Rank Test

A research chemist replicated a particular experiment a total of 10 times and obtained the following values of reaction temperature, ordered from smallest to largest:

    −.57   −.19   −.05   .76   1.30   2.02   2.17   2.46   2.68   3.02

The distribution of reaction temperature is of course continuous.
Suppose the investigator is willing to assume that this distribution is symmetric, so that the pdf satisfies f(μ̃ + t) = f(μ̃ − t) for any t > 0, where μ̃ is the median of the distribution (and also the mean μ provided that the mean exists). This condition on f(x) simply says that the height of the density curve above a value any particular distance to the right of the median is the same as the height that same distance to the left of the median. The assumption of symmetry may at first thought seem quite bold, but remember that we have frequently assumed a normal distribution. Since a normal distribution is symmetric, the assumption of symmetry without any additional distributional specification is actually a weaker assumption than normality.

Let's now consider testing the null hypothesis that μ̃ = 0. This amounts to saying that a temperature of any particular magnitude, say 1.50, is no more likely to be positive (+1.50) than to be negative (−1.50). A glance at the data casts doubt on this hypothesis; for example, the sample median is 1.66, which is far larger in magnitude than any of the three negative observations. Figure 14.1 shows graphs of two symmetric pdf's, one for which H₀ is true and the other for which the median of the distribution considerably exceeds 0. In the first case we expect the magnitudes of the negative observations in the sample to be comparable to those of the positive sample observations. However, in the second case observations of large absolute magnitude will tend to be positive rather than negative.

Figure 14.1 Distributions for which (a) μ̃ = 0; (b) μ̃ > 0

For the sample of ten reaction temperatures, let's for the moment disregard the signs of the observations and rank the absolute magnitudes from 1 to 10, with the smallest getting rank 1, the second smallest rank 2, and so on. Then apply the sign of each observation to the corresponding rank (so some signed ranks will be negative, e.g.
−3, whereas others will be positive, e.g. 8). The test statistic will be S+ = the sum of the positively signed ranks.

Absolute Magnitude   .05  .19  .57  .76  1.30  2.02  2.17  2.46  2.68  3.02
Rank                  1    2    3    4    5     6     7     8     9    10
Signed Rank          −1   −2   −3   +4   +5    +6    +7    +8    +9   +10

s+ = 4 + 5 + 6 + 7 + 8 + 9 + 10 = 49

When the median of the distribution is much greater than 0, most of the observations with large absolute magnitudes should be positive, resulting in positively signed ranks and a large value of s+. On the other hand, if the median is 0, magnitudes of positively signed observations should be intermingled with those of negatively signed observations, in which case s+ will not be very large. Thus we should reject H0: μ̃ = 0 when s+ is "quite large"; the rejection region should have the form s+ ≥ c.

The critical value c should be chosen so that the test has a desired significance level (type I error probability), such as .05 or .01. This necessitates finding the distribution of the test statistic S+ when the null hypothesis is true. Let's consider n = 5, in which case there are 2^5 = 32 ways of applying signs to the five ranks 1, 2, 3, 4, and 5 (each rank could have a − sign or a + sign). The key point is that when H0 is true, any collection of five signed ranks has the same chance as does any other collection. That is, the smallest observation in absolute magnitude is equally likely to be positive or negative, the same is true of the second smallest observation in absolute magnitude, and so on. Thus the collection −1, 2, 3, −4, 5 of signed ranks is just as likely as the collection 1, 2, 3, 4, −5, and just as likely as any one of the other 30 possibilities. Table 14.1 lists the 32 possible signed-rank sequences when n = 5 along with the value s+ for each sequence. This immediately gives the "null distribution" of S+ displayed in Table 14.2.
For example, Table 14.1 shows that three of the 32 possible sequences have s+ = 8, so P(S+ = 8 when H0 is true) = 1/32 + 1/32 + 1/32 = 3/32. This null distribution appears in Table 14.2.

Table 14.1  Possible signed-rank sequences for n = 5

Sequence               s+      Sequence               s+
−1  −2  −3  −4  −5      0      −1  −2  +3  −4  +5      8
+1  −2  −3  −4  −5      1      +1  +2  −3  −4  +5      8
−1  +2  −3  −4  −5      2      +1  −2  +3  +4  −5      8
−1  −2  +3  −4  −5      3      −1  −2  −3  +4  +5      9
+1  +2  −3  −4  −5      3      +1  −2  +3  −4  +5      9
−1  −2  −3  +4  −5      4      −1  +2  +3  +4  −5      9
+1  −2  +3  −4  −5      4      +1  −2  −3  +4  +5     10
−1  −2  −3  −4  +5      5      −1  +2  +3  −4  +5     10
+1  −2  −3  +4  −5      5      +1  +2  +3  +4  −5     10
−1  +2  +3  −4  −5      5      −1  +2  −3  +4  +5     11
+1  −2  −3  −4  +5      6      +1  +2  +3  −4  +5     11
−1  +2  −3  +4  −5      6      −1  −2  +3  +4  +5     12
+1  +2  +3  −4  −5      6      +1  +2  −3  +4  +5     12
−1  +2  −3  −4  +5      7      +1  −2  +3  +4  +5     13
−1  −2  +3  +4  −5      7      −1  +2  +3  +4  +5     14
+1  +2  −3  +4  −5      7      +1  +2  +3  +4  +5     15

Table 14.2  Null distribution of S+ when n = 5

s+        0     1     2     3     4     5     6     7
p(s+)   1/32  1/32  1/32  2/32  2/32  3/32  3/32  3/32

s+        8     9    10    11    12    13    14    15
p(s+)   3/32  3/32  3/32  2/32  2/32  1/32  1/32  1/32

Notice that it is symmetric about 7.5 [more generally, S+ is symmetrically distributed over the possible values 0, 1, 2, ..., n(n + 1)/2]. This symmetry is important in relating the rejection region of lower-tailed and two-tailed tests to that of an upper-tailed test. For n = 10 there are 2^10 = 1024 possible signed-rank sequences, so a listing would involve much effort. Each sequence, though, would have probability 1/1024 when H0 is true, from which the distribution of S+ when H0 is true can be easily obtained.

We are now in a position to determine a rejection region for testing H0: μ̃ = 0 versus Ha: μ̃ > 0 that has a suitably small significance level α. Consider the rejection region R = {s+ : s+ ≥ 13} = {13, 14, 15}. Then

α = P(reject H0 when H0 is true) = P(S+ = 13, 14, or 15 when H0 is true) = 1/32 + 1/32 + 1/32 = 3/32 = .094

so that R = {13, 14, 15} specifies a test with approximate level .1. For the rejection region {14, 15}, α = 2/32 = .063.
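The enumeration behind Tables 14.1 and 14.2 is easy to automate. The sketch below is an illustration we add here, not part of the text; it lists all 2^5 equally likely sign assignments and recovers the null distribution of S+:

```python
# Enumerate all 2^n sign assignments for n = 5 and tabulate S+.
from itertools import product
from fractions import Fraction

n = 5
counts = {}
for signs in product([-1, +1], repeat=n):
    s = sum(rank for rank, sgn in zip(range(1, n + 1), signs) if sgn > 0)
    counts[s] = counts.get(s, 0) + 1

# Each sequence has probability 1/32 under H0.
null_dist = {s: Fraction(c, 2 ** n) for s, c in sorted(counts.items())}
print(null_dist[8])                                   # 3/32, as in the text
print(null_dist[13] + null_dist[14] + null_dist[15])  # 3/32 = .094 for R = {13, 14, 15}
```

The same brute-force approach works for any small n; for n = 10 it enumerates the 1024 sequences mentioned above.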
For the sample x1 = .58, x2 = 2.50, x3 = −.21, x4 = 1.23, x5 = .97, the signed-rank sequence is −1, +2, +3, +4, +5, so s+ = 14, and at level .063 H0 would be rejected.

A General Description of the Wilcoxon Signed-Rank Test

Because the underlying distribution is assumed symmetric, μ = μ̃, so we will state the hypotheses of interest in terms of μ rather than μ̃.¹

ASSUMPTION  X1, X2, ..., Xn is a random sample from a continuous and symmetric probability distribution with mean (and median) μ.

When the hypothesized value of μ is μ0, the absolute differences |x1 − μ0|, ..., |xn − μ0| must be ranked from smallest to largest.

Null hypothesis: H0: μ = μ0
Test statistic value: s+ = the sum of the ranks associated with positive (xi − μ0)'s

Alternative Hypothesis      Rejection Region for Level α Test
Ha: μ > μ0                  s+ ≥ c1
Ha: μ < μ0                  s+ ≤ n(n + 1)/2 − c1
Ha: μ ≠ μ0                  either s+ ≥ c or s+ ≤ n(n + 1)/2 − c

where the critical values c1 and c satisfy P(S+ ≥ c1) ≈ α and P(S+ ≥ c) ≈ α/2 when H0 is true.

¹ If the tails of the distribution are "too heavy," as was the case with the Cauchy distribution of Chapter 7, then μ will not exist. In such cases, the Wilcoxon test will still be valid for tests concerning μ̃.

Example 14.1  A producer of breakfast cereals wants to verify that a filler machine is operating correctly. The machine is supposed to fill one-pound boxes with 460 g, on average. This is a little above the 453.6 g needed for one pound. When the contents are weighed, it is found that 15 boxes yield the following measurements:

454.4  470.8  447.5  453.2  462.6  445.0  455.9  458.2  461.6  457.3  452.0  464.3  459.2  453.5  465.8

It is believed that deviations of any magnitude from 460 g are just as likely to be positive as negative (in accord with the symmetry assumption), but the distribution may not be normal. Therefore, the Wilcoxon signed-rank test will be used to see if the filler machine is calibrated correctly. The hypotheses are H0: μ = 460 versus Ha: μ ≠ 460, where μ is the true average weight.
Subtracting 460 from each measurement gives

−5.6  10.8  −12.5  −6.8  2.6  −15.0  −4.1  −1.8  1.6  −2.7  −8.0  4.3  −.8  −6.5  5.8

The ranks are obtained by ordering these from smallest to largest without regard to sign.

Absolute Magnitude   .8  1.6  1.8  2.6  2.7  4.1  4.3  5.6  5.8  6.5  6.8  8.0  10.8  12.5  15.0
Rank                  1   2    3    4    5    6    7    8    9   10   11   12   13    14    15
Sign                  −   +    −    +    −    −    +    −    +    −    −    −    +     −     −

Thus s+ = 2 + 4 + 7 + 9 + 13 = 35. From Appendix Table A.12, P(S+ ≥ 95) = P(S+ ≤ 25) = .024 when H0 is true, so the two-tailed test with approximate level .05 rejects H0 when either s+ ≥ 95 or s+ ≤ 25 [the exact α is 2(.024) = .048]. Since s+ = 35 is not in the rejection region, it cannot be concluded at level .05 that μ differs from 460. Even at level .094 (approximately .1), H0 cannot be rejected, since P(S+ ≤ 30) = P(S+ ≥ 90) = .047 implies that s+ values between 30 and 90 are not significant at that level. The P-value of the data is thus > .1.

Although a theoretical implication of the continuity of the underlying distribution is that ties will not occur, in practice they often do because of the discreteness of measuring instruments. If there are several data values with the same absolute magnitude, then they would be assigned the average of the ranks they would receive if they differed very slightly from one another. For example, if in Example 14.1 x8 = 458.2 is changed to 458.4, then two different values of (xi − 460) would have absolute magnitude 1.6. The ranks to be averaged would be 2 and 3, so each would be assigned rank 2.5.

Paired Observations

When the data consisted of pairs (X1, Y1), ..., (Xn, Yn) and the differences D1 = X1 − Y1, ..., Dn = Xn − Yn were normally distributed, in Chapter 10 we used a paired t test for hypotheses about the expected difference μD. If normality is not assumed, hypotheses about μD can be tested by using the Wilcoxon signed-rank test on the Di's, provided that the distribution of the differences is continuous and symmetric.
If Xi and Yi both have continuous distributions that differ only with respect to their means (so the Y distribution is the X distribution shifted by μ1 − μ2 = μD), then Di will have a continuous symmetric distribution (it is not necessary for the X and Y distributions to be symmetric individually). The null hypothesis is H0: μD = Δ0, and the test statistic S+ is the sum of the ranks associated with the positive (Di − Δ0)'s.

About 100 years ago an experiment was done to see if drugs could help people with severe insomnia ("The Action of Optical Isomers, II: Hyoscines," J. Physiol., 1905: 501–510). There were 10 patients who had trouble sleeping, and each patient tried several medications. Here we compare just the control (no medication) and levo-hyoscine. Does the drug offer an improvement in average sleep time? The relevant hypotheses are H0: μD = 0 versus Ha: μD < 0. Here are the sleep times, differences, and signed ranks.

Patient        1     2     3     4     5     6     7     8     9    10
Control       0.6   1.1   2.5   2.8   2.9   3.0   3.2   4.7   5.5   6.2
Drug          2.5   5.7   8.0   4.4   6.3   3.8   7.6   5.8   5.6   6.1
Difference   −1.9  −4.6  −5.5  −1.6  −3.4   −.8  −4.4  −1.1   −.1    .1
Signed rank   −6    −9   −10    −5    −7    −3    −8    −4   −1.5   1.5

Notice that there is a tie for the lowest rank, so the two lowest ranks are split between observations 9 and 10, and each receives rank 1.5. Appendix Table A.12 shows that for a test with significance level approximately .05, the null hypothesis should be rejected if s+ ≤ (10)(11)/2 − 44 = 11. The test statistic value is s+ = 1.5, which falls in the rejection region. We therefore reject H0 at significance level .05 in favor of the conclusion that the drug gives greater mean sleep time. The accompanying MINITAB output shows the test statistic value and also the corresponding P-value, which is P(S+ ≤ 1.5 when H0 is true).
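The signed ranks for the sleep data, including the averaged rank 1.5 for the tied absolute differences, can be checked with a short computation. This sketch is our illustration, not part of the text, and the helper midrank is a name we introduce here:

```python
# Paired sleep data; differences rounded to one decimal place to avoid
# floating-point noise when detecting the tie |-0.1| = |0.1|.
control = [0.6, 1.1, 2.5, 2.8, 2.9, 3.0, 3.2, 4.7, 5.5, 6.2]
drug    = [2.5, 5.7, 8.0, 4.4, 6.3, 3.8, 7.6, 5.8, 5.6, 6.1]
diffs   = [round(c - d, 1) for c, d in zip(control, drug)]

abs_sorted = sorted(abs(d) for d in diffs)

def midrank(a):
    # Average position of the value a among the sorted absolute differences,
    # so tied magnitudes share the mean of the ranks they occupy.
    positions = [i + 1 for i, v in enumerate(abs_sorted) if v == a]
    return sum(positions) / len(positions)

s_plus = sum(midrank(abs(d)) for d in diffs if d > 0)
print(s_plus)  # 1.5: only the difference +0.1 is positive
```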
Test of median = 0.000000 versus median < 0.000000

            N for   Wilcoxon             Estimated
        N   Test    Statistic       P    Median
diff   10     10          1.5   0.005    −2.250

Efficiency of the Wilcoxon Signed-Rank Test

When the underlying distribution being sampled is normal, either the t test or the signed-rank test can be used to test a hypothesis about μ. The t test is the best test in such a situation because among all level α tests it is the one having minimum β. It is generally agreed that there are many experimental situations in which normality can be reasonably assumed, as well as some in which it should not be. These two questions must be addressed in an attempt to compare the tests:

1. When the underlying distribution is normal (the "home ground" of the t test), how much is lost by using the signed-rank test?
2. When the underlying distribution is not normal, can a significant improvement be achieved by using the signed-rank test?

If the Wilcoxon test does not suffer much with respect to the t test on the "home ground" of the latter, and performs significantly better than the t test for a large number of other distributions, then there will be a strong case for using the Wilcoxon test.

Unfortunately, there are no simple answers to the two questions. Upon reflection, it is not surprising that the t test can perform poorly when the underlying distribution has "heavy tails" (i.e., when observed values lying far from μ are relatively more likely than they are when the distribution is normal). This is because the behavior of the t test depends on the sample mean and variance, which are both unstable in the presence of heavy tails. The difficulty in producing answers to the two questions is that β for the Wilcoxon test is very difficult to obtain and study for any underlying distribution, and the same can be said for the t test when the distribution is not normal.
Even if β were easily obtained, any measure of efficiency would clearly depend on which underlying distribution was assumed. A number of different efficiency measures have been proposed by statisticians; one that many statisticians regard as credible is called asymptotic relative efficiency (ARE). The ARE of one test with respect to another is essentially the limiting ratio of sample sizes necessary to obtain identical error probabilities for the two tests. Thus if the ARE of one test with respect to a second equals .5, then when sample sizes are large, twice as large a sample size will be required of the first test to perform as well as the second test. Although the ARE does not characterize test performance for small sample sizes, the following results can be shown to hold:

1. When the underlying distribution is normal, the ARE of the Wilcoxon test with respect to the t test is approximately .95.
2. For any distribution, the ARE will be at least .86 and for many distributions will be much greater than 1.

We can summarize these results by saying that, in large-sample problems, the Wilcoxon test is never very much less efficient than the t test and may be much more efficient if the underlying distribution is far from normal. Although the issue is far from resolved in the case of sample sizes obtained in most practical problems, studies have shown that the Wilcoxon test performs reasonably and is thus a viable alternative to the t test.

Exercises  Section 14.1 (1–8)

1. Reconsider the situation described in Exercise 32 of Section 9.2, and use the Wilcoxon test with α = .05 to test the relevant hypotheses.

2. Use the Wilcoxon test to analyze the data given in Example 9.9.

3. The accompanying data is a subset of the data reported in the article "Synovial Fluid pH, Lactate, Oxygen and Carbon Dioxide Partial Pressure in Various Joint Diseases" (Arthritis Rheum., 1971: 476–477). The observations are pH values of synovial fluid (which lubricates joints and tendons) taken from the knees of individuals suffering from arthritis. Assuming that true average pH for non-arthritic individuals is 7.39, test at level .05 to see whether the data indicates a difference between average pH values for arthritic and nonarthritic individuals.

7.02  7.35  7.34  7.17  7.28  7.77  7.09  7.22  7.45  6.95  7.40  7.10  7.32  7.14

4. A random sample of 15 automobile mechanics certified to work on a certain type of car was selected, and the time (in minutes) necessary for each one to diagnose a particular problem was determined, resulting in the following data:

aoe  s94  1536  267  201.  25K  350  308  319  532  125  232  88  249  302

Use the Wilcoxon test at significance level .10 to decide whether the data suggests that true average diagnostic time is less than 30 minutes.

5. Both a gravimetric and a spectrophotometric method are under consideration for determining the phosphate content of a particular material. Twelve samples of the material are obtained, each is split in half, and a determination is made on each half using one of the two methods, resulting in the following data:

Sample               1     2     3     4
Gravimetric         54.7  58.5  66.8  46.1
Spectrophotometric  55.0  55.7  62.9  45.5

Sample               5     6     7     8
Gravimetric         52.3  74.3  92.5  40.2
Spectrophotometric  51.1  75.4  89.6  38.4

Sample               9    10    11    12
Gravimetric         87.3  74.8  63.2  68.5
Spectrophotometric  86.8  72.5  62.3  66.0

Use the Wilcoxon test to decide whether one technique gives on average a different value than the other technique for this type of material.

6. The signed-rank statistic can be represented as S+ = W1 + W2 + ... + Wn, where Wi = i if the sign of the (xi − μ0) with the ith smallest absolute magnitude is positive (in which case i is included in S+) and Wi = 0 if this value is negative (i = 1, 2, 3, ..., n). Furthermore, when H0 is true, the Wi's are independent and P(Wi = i) = P(Wi = 0) = .5.
a. Use these facts to obtain the mean and variance of S+ when H0 is true. [Hint: The sum of the first n positive integers is n(n + 1)/2, and the sum of the squares of the first n positive integers is n(n + 1)(2n + 1)/6.]
b. The Wi's are not identically distributed (e.g., possible values of W2 are 2 and 0, whereas possible values of W5 are 5 and 0), so our Central Limit Theorem for identically distributed and independent variables cannot be used here when n is large. However, a more general CLT can be used to assert that when H0 is true and n > 20, S+ has approximately a normal distribution with mean and variance obtained in (a). Use this to propose a large-sample standardized signed-rank test statistic and then an appropriate rejection region with level α for each of the three commonly encountered alternative hypotheses. [Note: When there are ties in the absolute magnitudes, it is still correct to standardize S+ by subtracting the mean from (a), but there is a correction for the variance which can be found in books on nonparametric statistics.]
c. A particular type of steel beam has been designed to have a compressive strength (lb/in²) of at least 50,000. An experimenter obtained a random sample of 25 beams and determined the strength of each one, resulting in the following data (expressed as deviations from 50,000):

-10  -27  36  -55  73  -77  -81  50,  SS  99  NG  027  18  1RG  -150  -155  -159  165  -178  -183  -192  -199  -212  -217  -229

Carry out a test using a significance level of approximately .01 to see if there is strong evidence that the design condition has been violated.

7. The accompanying 25 observations on fracture toughness of base plate of 18% nickel maraging steel were reported in the article "Fracture Testing of Weldments" (ASTM Special Publ. No. 381, 1965: 328–356). Suppose a company will agree to purchase this steel for a particular application only if it can be strongly demonstrated from experimental evidence that true average toughness exceeds 75. Assuming that the fracture toughness distribution is symmetric, state and test the appropriate hypotheses at level .05, and compute a P-value. [Hint: Use Exercise 6(b).]

69.5  71.9  72.6  73.1  73.3  73.5  74.1  74.2  75.3
75.5  75.7  75.8  76.1  76.2  76.2  76.9  77.0  77.9
78.1  79.6  79.7  80.1  82.2  83.7  93.7

8. Suppose that observations X1, X2, ..., Xn are made on a process at times 1, 2, ..., n. On the basis of this data, we wish to test

H0: the Xi's constitute an independent and identically distributed sequence

versus

Ha: Xi+1 tends to be larger than Xi for i = 1, ..., n − 1 (an increasing trend)

Suppose the Xi's are ranked from 1 to n. Then when Ha is true, larger ranks tend to occur later in the sequence, whereas if H0 is true, large and small ranks tend to be mixed together. Let Ri be the rank of Xi and consider the test statistic D = Σ (Ri − i)², the sum running from i = 1 to n. Then small values of D give support to Ha (e.g., the smallest value is 0 for R1 = 1, R2 = 2, ..., Rn = n), so H0 should be rejected in favor of Ha if d ≤ c. When H0 is true, any sequence of ranks has probability 1/n!. Use this to find c for which the test has a level as close to .10 as possible in the case n = 4. [Hint: List the 4! rank sequences, compute d for each one, and then obtain the null distribution of D. See the Lehmann book (in the chapter bibliography) for more information.]

14.2 The Wilcoxon Rank-Sum Test

When at least one of the sample sizes in a two-sample problem is small, the t test requires the assumption of normality (at least approximately). There are situations, though, in which an investigator would want to use a test that is valid even if the underlying distributions are quite nonnormal. We now describe such a test, called the Wilcoxon rank-sum test. An alternative name for the procedure is the Mann–Whitney test, although the Mann–Whitney test statistic is sometimes expressed in a slightly different form from that of the Wilcoxon test. The Wilcoxon test procedure is distribution-free because it will have the desired level of significance for a very large class of underlying distributions.

ASSUMPTIONS  X1, ..., Xm and Y1, ..., Yn are two independent random samples from continuous distributions with means μ1 and μ2, respectively. The X and Y distributions have the same shape and spread, the only possible difference between the two being in the values of μ1 and μ2.

When H0: μ1 − μ2 = Δ0 is true, the X distribution is shifted by the amount Δ0 to the right of the Y distribution; whereas when H0 is false, the shift is by an amount other than Δ0.

Development of the Test When m = 3, n = 4

Let's first test H0: μ1 − μ2 = 0. If μ1 is actually much larger than μ2, then most of the observed x's will fall to the right of the observed y's. However, if H0 is true, then the observed values from the two samples should be intermingled. The test statistic will provide a quantification of how much intermingling there is in the two samples. Consider the case m = 3, n = 4.
Then if all three observed x's were to the right of all four observed y's, this would provide strong evidence for rejecting H0 in favor of Ha: μ1 − μ2 ≠ 0, with a similar conclusion being appropriate if all three x's fall below all four of the y's. Suppose we pool the x's and y's into a combined sample of size m + n = 7 and rank these observations from smallest to largest, with the smallest receiving rank 1 and the largest, rank 7. If either most of the largest ranks or most of the smallest ranks were associated with X observations, we would begin to doubt H0. This suggests the test statistic

W = the sum of the ranks in the combined sample associated with X observations    (14.1)

For the values of m and n under consideration, the smallest possible value of W is w = 1 + 2 + 3 = 6 (if all three x's are smaller than all four y's), and the largest possible value is w = 5 + 6 + 7 = 18 (if all three x's are larger than all four y's).

As an example, suppose x1 = −3.10, x2 = 1.67, x3 = 2.01, y1 = 5.27, y2 = 1.89, y3 = 3.86, and y4 = .19. Then the pooled ordered sample is −3.10, .19, 1.67, 1.89, 2.01, 3.86, and 5.27. The X ranks for this sample are 1 (for −3.10), 3 (for 1.67), and 5 (for 2.01), so the computed value of W is w = 1 + 3 + 5 = 9.

The test procedure based on the statistic (14.1) is to reject H0 if the computed value w is "too extreme"; that is, w ≥ c for an upper-tailed test, w ≤ c for a lower-tailed test, and either w ≥ c1 or w ≤ c2 for a two-tailed test. The critical constant(s) c (c1, c2) should be chosen so that the test has the desired level of significance α. To see how this should be done, recall that when H0 is true, all seven observations come from the same population. This means that under H0, any possible triple of ranks associated with the three x's, such as (1, 4, 5), (3, 5, 6), or (5, 6, 7), has the same probability as any other possible rank triple.
Since there are (7 choose 3) = 35 possible rank triples, under H0 each rank triple has probability 1/35. From a list of all 35 rank triples and the w value associated with each, the probability distribution of W can immediately be determined. For example, there are four rank triples that have w value 11, namely (1, 3, 7), (1, 4, 6), (2, 3, 6), and (2, 4, 5), so P(W = 11) = 4/35. The summary of the listing and computations appears in Table 14.3.

Table 14.3  Probability distribution of W (m = 3, n = 4) when H0 is true

w          6     7     8     9    10    11    12    13    14    15    16    17    18
P(W = w)  1/35  1/35  2/35  3/35  4/35  4/35  5/35  4/35  4/35  3/35  2/35  1/35  1/35

The distribution of Table 14.3 is symmetric about w = (6 + 18)/2 = 12, which is the middle value in the ordered list of possible W values. This is because the two rank triples (r, s, t) (with r < s < t) and (8 − t, 8 − s, 8 − r) have values of w symmetric about 12, so for each triple with w value below 12, there is a triple with w value above 12 by the same amount.

If the alternative hypothesis is Ha: μ1 − μ2 > 0, then H0 should be rejected in favor of Ha for large W values. Choosing as the rejection region the set of W values {17, 18},

α = P(type I error) = P(reject H0 when H0 is true) = P(W = 17 or 18 when H0 is true) = 1/35 + 1/35 = 2/35 = .057

so the region {17, 18} specifies a test with level of significance approximately .05. Similarly, the region {6, 7}, which is appropriate for Ha: μ1 − μ2 < 0, has α = .057 ≈ .05. The region {6, 7, 17, 18}, which is appropriate for the two-sided alternative, has α = 4/35 = .114. The W value for the data given several paragraphs previously was w = 9, which is rather close to the middle value 12, so H0 would not be rejected at any reasonable level α for any one of the three Ha's.

General Description of the Rank-Sum Test

The null hypothesis H0: μ1 − μ2 = Δ0 is handled by subtracting Δ0 from each Xi and using the (Xi − Δ0)'s as the Xi's were previously used.
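Table 14.3 can likewise be generated by brute force. The following sketch, added here for illustration and not part of the text, enumerates the 35 rank triples and reproduces the probabilities used above:

```python
# Enumerate the C(7,3) = 35 equally likely X rank triples for m = 3, n = 4.
from itertools import combinations
from fractions import Fraction

triples = list(combinations(range(1, 8), 3))
dist = {}
for t in triples:
    dist[sum(t)] = dist.get(sum(t), 0) + 1   # tally each possible w value

print(len(triples))                       # 35
print(Fraction(dist[11], 35))             # 4/35, as computed in the text
print(Fraction(dist[17] + dist[18], 35))  # 2/35 = .057 for the region {17, 18}
print(1 + 3 + 5)                          # w = 9 for the example data's X ranks
```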
Recalling that for any positive integer K, the sum of the first K integers is K(K + 1)/2, the smallest possible value of the statistic W is m(m + 1)/2, which occurs when the (Xi − Δ0)'s are all to the left of the Y sample. The largest possible value of W occurs when the (Xi − Δ0)'s lie entirely to the right of the Y's; in this case, W = (n + 1) + ... + (m + n) = (sum of first m + n integers) − (sum of first n integers), which gives m(m + 2n + 1)/2. As with the special case m = 3, n = 4, the distribution of W is symmetric about the value that is halfway between the smallest and largest values; this middle value is m(m + n + 1)/2. Because of this symmetry, probabilities involving lower-tail critical values can be obtained from corresponding upper-tail values.

Null hypothesis: H0: μ1 − μ2 = Δ0
Test statistic value: w = Σ ri (i = 1, ..., m), where ri = rank of (xi − Δ0) in the combined sample of m + n (x − Δ0)'s and y's

Alternative Hypothesis      Rejection Region
Ha: μ1 − μ2 > Δ0            w ≥ c1
Ha: μ1 − μ2 < Δ0            w ≤ m(m + n + 1) − c1
Ha: μ1 − μ2 ≠ Δ0            either w ≥ c or w ≤ m(m + n + 1) − c

where P(W ≥ c1 when H0 is true) ≈ α and P(W ≥ c when H0 is true) ≈ α/2.

Because W has a discrete probability distribution, there will not always exist a critical value corresponding exactly to one of the usual levels of significance. Appendix Table A.13 gives upper-tail critical values for probabilities closest to .05, .025, .01, and .005, from which level .05 or .01 one- and two-tailed tests can be obtained. The table gives information only for m = 3, 4, ..., 8 and n = m, m + 1, ..., 8 (i.e., 3 ≤ m ≤ n ≤ 8).

For a lower-tailed test with m = 5 and n = 7, Appendix Table A.13 gives the upper-tail critical value 47, with P(W ≥ 47 when H0 is true) ≈ .01. The critical value for the lower-tailed test is therefore m(m + n + 1) − 47 = 5(13) − 47 = 18; H0 will now be rejected if w ≤ 18. The pooled ordered sample follows; the computed W is w = r1 + r2 + ... + r5 (where ri is the rank of xi) = 1 + 5 + 4 + 6 + 9 = 25. Since 25 is not ≤ 18, H0 is not rejected at (approximately) level .01.
x     y     y     x     x     x     y     y     x     y     y     y
14.2  16.8  17.1  17.2  18.3  18.4  18.7  19.7  20.0  20.9  21.3  23.0
 1     2     3     4     5     6     7     8     9    10    11    12

Ties are handled as suggested for the signed-rank test in the previous section.

Efficiency of the Wilcoxon Rank-Sum Test

When the distributions being sampled are both normal with σ1 = σ2, and therefore have the same shapes and spreads, either the pooled t test or the Wilcoxon test can be used (the two-sample t test assumes normality but not equal variances, so assumptions underlying its use are more restrictive in one sense and less in another than those for Wilcoxon's test). In this situation, the pooled t test is best among all possible tests in the sense of minimizing β for any fixed α. However, an investigator can never be absolutely certain that underlying assumptions are satisfied. It is therefore relevant to ask (1) how much is lost by using Wilcoxon's test rather than the pooled t test when the distributions are normal with equal variances and (2) how W compares to T in nonnormal situations.

The notion of test efficiency was discussed in the previous section in connection with the one-sample t test and Wilcoxon signed-rank test. The results for the two-sample tests are the same as those for the one-sample tests. When normality and equal variances both hold, the rank-sum test is approximately 95% as efficient as the pooled t test in large samples. That is, the t test will give the same error probabilities as the Wilcoxon test using slightly smaller sample sizes. On the other hand, the Wilcoxon test will always be at least 86% as efficient as the pooled t test and may be much more efficient if the underlying distributions are very nonnormal, especially with heavy tails. The comparison of the Wilcoxon test with the two-sample (unpooled) t test is less clear-cut.
The t test is not known to be the best test in any sense, so it seems safe to conclude that as long as the population distributions have similar shapes and spreads, the behavior of the Wilcoxon test should compare quite favorably to the two-sample t test.

Lastly, we note that β calculations for the Wilcoxon test are quite difficult. This is because the distribution of W when H0 is false depends not only on μ1 − μ2 but also on the shapes of the two distributions. For most underlying distributions, the nonnull distribution of W is virtually intractable. This is why statisticians have developed large-sample (asymptotic relative) efficiency as a means of comparing tests. With the capabilities of modern-day computer software, another approach to calculation of β is to carry out a simulation experiment.

Exercises  Section 14.2 (9–16)

9. In an experiment to compare the bond strength of two different adhesives, each adhesive was used in five bondings of two surfaces, and the force necessary to separate the surfaces was determined for each bonding. For adhesive 1, the resulting values were 229, 286, 245, 299, and 250, whereas the adhesive 2 observations were 213, 179, 163, 247, and 225. Let μi denote the true average bond strength of adhesive type i. Use the Wilcoxon rank-sum test at level .05 to test H0: μ1 = μ2 versus Ha: μ1 > μ2.

10. The article "A Study of Wood Stove Particulate Emissions" (J. Air Pollut. Contr. Assoc., 1979: 724–728) reports the following data on burn time (hours) for samples of oak and pine. Test at level .05 to see whether there is any difference in true average burn time for the two types of wood.

Oak   1.72  .67  1.55  1.56  1.42  1.23  1.77  .48
Pine   .98  1.40  1.33  1.52  .73  1.20

11. A modification has been made to the process for producing a certain type of "time-zero" film (film that begins to develop as soon as a picture is taken). Because the modification involves extra cost, it will be incorporated only if sample data strongly indicates that the modification has decreased true average developing time by more than 1 s. Assuming that the developing-time distributions differ only with respect to location if at all, use the Wilcoxon rank-sum test at level .05 on the accompanying data to test the appropriate hypotheses.

Original Process   8.6  5.1  4.5  5.4  6.3  6.6  5.7  8.5
Modified Process   5.5  4.0  3.8  6.0  5.8  4.9  7.0  5.7

12. The article "Measuring the Exposure of Infants to Tobacco Smoke" (New Engl. J. Med., 1984: 1075–1078) reports on a study in which various measurements were taken both from a random sample of infants who had been exposed to household smoke and from a sample of unexposed infants. The accompanying data consists of observations on urinary concentration of cotinine, a major metabolite of nicotine (the values constitute a subset of the original data and were read from a plot that appeared in the article). Does the data suggest that true average cotinine level is higher in exposed infants than in unexposed infants by more than 25? Carry out a test at significance level .05.

Unexposed   8  11  12  14  20  43  111
Exposed    35  56  83  92  128  150  176  208

13. Reconsider the situation described in Exercise 100 of Chapter 10 and the accompanying MINITAB output (the Greek letter eta is used to denote a median).

Mann-Whitney Confidence Interval and Test
good   N = 8   Median = 0.540
poor   N = 8   Median = 2.400
Point estimate for ETA1 - ETA2 is -1.155
95.9% CI for ETA1 - ETA2 is (-3.160, -0.409)
W = 41.0
Test of ETA1 = ETA2 vs ETA1 < ETA2 is significant at 0.0027

a. Verify that the value of MINITAB's test statistic is correct.
b. Carry out an appropriate test of hypotheses using a significance level of .01.

14. The Wilcoxon rank-sum statistic can be represented as W = R1 + R2 + ... + Rm, where Ri is the rank of Xi − Δ0 among all m + n such differences. When H0 is true, each Ri is equally likely to be one of the first m + n positive integers; that is, Ri has a discrete uniform distribution on the values 1, 2, 3, ..., m + n.
a. Determine the mean value of each Ri when H0 is true, and then show that the mean value of W is m(m + n + 1)/2. [Hint: Use the hint given in Exercise 6(a).]
b. The variance of each Ri is easily determined. However, the Ri's are not independent random variables because, for example, if m = n = 10 and we are told that R1 = 5, then R2 must be one of the other 19 integers between 1 and 20. However, if a and b are any two distinct positive integers between 1 and m + n inclusive, it follows that P(Ri = a and Rj = b) = 1/[(m + n)(m + n − 1)], since two integers are being sampled without replacement from among 1, 2, ..., m + n. Use this fact to show that Cov(Ri, Rj) = −(m + n + 1)/12, and then show that the variance of W is mn(m + n + 1)/12.
c. A central limit theorem for a sum of non-independent variables can be used to show that when m > 8 and n > 8, W has approximately a normal distribution with mean and variance given by the results of (a) and (b). Use this to propose a large-sample standardized rank-sum test statistic, and then describe the rejection region that has approximate significance level α for testing H0 against each of the three commonly encountered alternative hypotheses. [Note: When there are ties in the observed values, a correction for the variance derived in (b) should be used in standardizing W; please consult a book on nonparametric statistics for the result.]

15. The accompanying data resulted from an experiment to compare the effects of vitamin C in orange juice and in synthetic ascorbic acid on the length of odontoblasts in guinea pigs over a 6-week period ("The Growth of the Odontoblasts of the Incisor Tooth as a Criterion of the Vitamin C Intake of the Guinea Pig," J. Nutrit., 1947: 491–504). Use the Wilcoxon rank-sum test at level .01 to decide whether true average length differs for the two types of vitamin C intake. Compute also an approximate P-value. [Hint: See Exercise 14.]

Orange Juice    8.2  9.4  9.6  9.7  10.0  14.5  15.2  16.1  17.6  21.5
Ascorbic Acid   4.2  5.2  5.8  6.4  7.0  7.3  10.1  11.2  11.3  11.5

16. Test the hypotheses suggested in Exercise 15 using the following data:

Orange Juice    8.2  9.5  9.5  9.7  10.0  14.5  15.2  16.1  17.6  21.5
Ascorbic Acid   4.2  5.2  5.8  6.4  7.0  7.3  10.1  11.2  11.3  11.5

[Hint: See Exercise 14.]

14.3 Distribution-Free Confidence Intervals

The method we have used so far to construct a confidence interval (CI) can be described as follows: Start with a random variable (Z, T, χ², F, or the like) that depends on the parameter of interest and a probability statement involving the variable, manipulate the inequalities of the statement to isolate the parameter between random endpoints, and finally substitute computed values for random variables. Another general method for obtaining CIs takes advantage of a relationship between test procedures and CIs. A 100(1 − α)% CI for a parameter θ can be obtained from a level α test for H0: θ = θ0 versus Ha: θ ≠ θ0. This method will be used to derive intervals associated with the Wilcoxon signed-rank test and the Wilcoxon rank-sum test.

Before using the method to derive new intervals, reconsider the t test and the t interval. Suppose a random sample of n = 25 observations from a normal population yields summary statistics x̄ = 100, s = 20. Then a 90% CI for μ is

(x̄ − t.05,24 · s/√25, x̄ + t.05,24 · s/√25) = (93.16, 106.84)    (14.2)

Suppose that instead of a CI, we had wished to test a hypothesis about μ. For H0: μ = μ0 versus Ha: μ ≠ μ0, the t test at level .10 specifies that H0 should be rejected if t is either ≥ 1.711 or ≤ −1.711, where

t = (x̄ − μ0)/(s/√25) = (100 − μ0)/(20/√25) = (100 − μ0)/4    (14.3)

Consider now the null value μ0 = 95. Then t = 1.25, so H0 is not rejected.
Similarly, if μ0 = 104, then t = −1, so again H0 is not rejected. However, if μ0 = 90, then t = 2.5, so H0 is rejected, and if μ0 = 108, then t = −2, so H0 is again rejected. By considering other values of μ0 and the decision resulting from each one, the following general fact emerges: Every number inside the interval (14.2) specifies a value of μ0 for which the t of (14.3) leads to nonrejection of H0, whereas every number outside interval (14.2) corresponds to a t for which H0 is rejected. That is, for the fixed values of n, x̄, and s, the interval (14.2) is precisely the set of all μ0 values for which testing H0: μ = μ0 versus Ha: μ ≠ μ0 results in not rejecting H0.

PROPOSITION  Suppose we have a level α test procedure for testing H0: θ = θ0 versus Ha: θ ≠ θ0. For fixed sample values, let A denote the set of all values θ0 for which H0 is not rejected. Then A is a 100(1 − α)% CI for θ.

There are actually pathological examples in which the set A defined in the proposition is not an interval of θ values, but instead the complement of an interval or something even stranger. To be more precise, we should really replace the notion of a CI with that of a confidence set. In the cases of interest here, the set A does turn out to be an interval.

The Wilcoxon Signed-Rank Interval

To test H0: μ = μ0 versus Ha: μ ≠ μ0 using the Wilcoxon signed-rank test, where μ is the mean of a continuous symmetric distribution, the absolute values |x1 − μ0|, …, |xn − μ0| are ordered from smallest to largest, with the smallest receiving rank 1 and the largest rank n. Each rank is then given the sign of its associated xi − μ0, and the test statistic is the sum of the positively signed ranks. The two-tailed test rejects H0 if s+ is either ≥ c or ≤ n(n + 1)/2 − c, where c is obtained from Appendix Table A.12 once the desired level of significance α is specified.
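The duality between the level-.10 t test and the 90% t interval can be checked numerically. A minimal Python sketch using only the summary values from the text (n = 25, x̄ = 100, s = 20, t.05,24 = 1.711):

```python
import math

# Duality of the two-tailed t test and the t interval for the text's summary data.
n, xbar, s = 25, 100.0, 20.0
t_crit = 1.711                      # t_{.05,24}, from the text

def t_stat(mu0):
    """Test statistic t = (xbar - mu0) / (s / sqrt(n)) of (14.3)."""
    return (xbar - mu0) / (s / math.sqrt(n))

# 90% CI from (14.2): xbar -+ t_crit * s / sqrt(n)
lo = xbar - t_crit * s / math.sqrt(n)
hi = xbar + t_crit * s / math.sqrt(n)
print(f"90% t interval: ({lo:.2f}, {hi:.2f})")    # (93.16, 106.84)

# Scan null values: mu0 is not rejected exactly when it lies inside the interval.
for mu0 in [90, 95, 104, 108]:
    t = t_stat(mu0)
    verdict = "reject" if abs(t) >= t_crit else "do not reject"
    print(f"mu0 = {mu0}: t = {t:+.2f}, {verdict} H0")
```

The two null values inside (93.16, 106.84) survive the test; the two outside it are rejected, as in the text.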
For fixed x1, …, xn, the 100(1 − α)% signed-rank interval will consist of all μ0 for which H0: μ = μ0 is not rejected at level α. To identify this interval, it is convenient to express the test statistic S+ in another form:

  S+ = the number of pairwise averages (Xi + Xj)/2 with i ≤ j that are ≥ μ0   (14.4)

That is, if we average each xj in the list with each xi to its left, including (xj + xj)/2 (which is just xj), and count the number of these averages that are ≥ μ0, s+ results. In moving from left to right in the list of sample values, we are simply averaging every pair of observations in the sample [again including (xj + xj)/2] exactly once, so the order in which the observations are listed before averaging is not important. The equivalence of the two methods for computing s+ is not difficult to verify. The number of pairwise averages is C(n, 2) + n (the first term due to averaging of different observations and the second due to averaging each xi with itself), which equals n(n + 1)/2. If either too many or too few of these pairwise averages are ≥ μ0, H0 is rejected.

Example 14.4  The following observations are values of cerebral metabolic rate for rhesus monkeys: x1 = 4.51, x2 = 4.59, x3 = 4.90, x4 = 4.93, x5 = 6.80, x6 = 5.08, x7 = 5.67. The 28 pairwise averages are, in increasing order,

  4.51  4.55  4.59  4.705  4.72  4.745  4.76  4.795  4.835  4.90
  4.915  4.93  4.99  5.005  5.08  5.09  5.13  5.285  5.30  5.375
  5.655  5.67  5.695  5.85  5.865  5.94  6.235  6.80

The first few and the last few of these are pictured on a measurement axis in Figure 14.2.

Figure 14.2  Plot of the data for Example 14.4 (the smallest and largest pairwise averages on a measurement axis, with the value of s+ in each region and the range of μ0 values for which H0 is not rejected at level .0469)

Because of the discreteness of the distribution of S+, α = .05 cannot be obtained exactly.
The rejection region {0, 1, 2, 26, 27, 28} has α = .046, which is as close as possible to .05, so the level is approximately .05. Thus if the number of pairwise averages ≥ μ0 is between 3 and 25, inclusive, H0 is not rejected. From Figure 14.2 the (approximate) 95% CI for μ is (4.59, 5.94).

In general, once the pairwise averages are ordered from smallest to largest, the endpoints of the Wilcoxon interval are two of the "extreme" averages. To express this precisely, let the smallest pairwise average be denoted by x̄(1), the next smallest by x̄(2), …, and the largest by x̄(n(n+1)/2).

PROPOSITION  If the level α Wilcoxon signed-rank test for H0: μ = μ0 versus Ha: μ ≠ μ0 is to reject H0 if either s+ ≥ c or s+ ≤ n(n + 1)/2 − c, then a 100(1 − α)% CI for μ is

  ( x̄(n(n+1)/2 − c + 1), x̄(c) )   (14.5)

In words, the interval extends from the dth smallest pairwise average to the dth largest average, where d = n(n + 1)/2 − c + 1. Appendix Table A.14 gives the values of c that correspond to the usual confidence levels for n = 5, 6, …, 25.

Example 14.4 (continued)  For n = 7, an 89.1% interval (approximately 90%) is obtained by using c = 24 (since the rejection region {0, 1, 2, 3, 4, 24, 25, 26, 27, 28} has α = .109). The interval is ( x̄(28 − 24 + 1), x̄(24) ) = ( x̄(5), x̄(24) ) = (4.72, 5.85), which extends from the fifth smallest to the fifth largest pairwise average.

The derivation of the interval depended on having a single sample from a continuous symmetric distribution with mean (median) μ. When the data is paired, the interval constructed from the differences d1, d2, …, dn is a CI for the mean (median) difference μD. In this case, the symmetry of the X and Y distributions need not be assumed; as long as the X and Y distributions have the same shape, the X − Y distribution will be symmetric, so only continuity is required.
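The interval of Example 14.4 can be reproduced by brute force. A minimal Python sketch (the data and the critical constant c = 26 are taken from the example; the only computation is enumerating the pairwise averages):

```python
from itertools import combinations_with_replacement

# Wilcoxon signed-rank interval for the rhesus-monkey data of Example 14.4.
x = [4.51, 4.59, 4.90, 4.93, 6.80, 5.08, 5.67]
n = len(x)

# All n(n+1)/2 = 28 pairwise averages (x_i + x_j)/2 with i <= j, sorted.
avgs = sorted((xi + xj) / 2 for xi, xj in combinations_with_replacement(x, 2))
assert len(avgs) == n * (n + 1) // 2

# For c = 26 (rejection region {0,1,2,26,27,28}, alpha ~ .046), the interval
# runs from the d-th smallest to the d-th largest average, d = n(n+1)/2 - c + 1.
c = 26
d = n * (n + 1) // 2 - c + 1        # d = 3
print((avgs[d - 1], avgs[-d]))      # approximate 95% interval (4.59, 5.94)
```

Replacing c = 26 with c = 24 reproduces the 89.1% interval (4.72, 5.85) of the continued example.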
For n > 20, the large-sample approximation (Exercise 6) to the Wilcoxon test based on standardizing S+ gives an approximation to c in (14.5). The result [for a 100(1 − α)% interval] is

  c ≈ n(n + 1)/4 + z_(α/2) · √( n(n + 1)(2n + 1)/24 )

The efficiency of the Wilcoxon interval relative to the t interval is roughly the same as that for the Wilcoxon test relative to the t test. In particular, for large samples when the underlying population is normal, the Wilcoxon interval will tend to be slightly longer than the t interval, but if the population is quite nonnormal (symmetric but with heavy tails), then the Wilcoxon interval will tend to be much shorter than the t interval. And as we emphasized earlier in our discussion of bootstrapping, in the presence of nonnormality the actual confidence level of the t interval may differ considerably from the nominal (e.g., 95%) level.

The Wilcoxon Rank-Sum Interval

The Wilcoxon rank-sum test for testing H0: μ1 − μ2 = Δ0 is carried out by first combining the (Xi − Δ0)'s and Yj's into one sample of size m + n and ranking them from smallest (rank 1) to largest (rank m + n). The test statistic W is then the sum of the ranks of the (Xi − Δ0)'s. For the two-sided alternative, H0 is rejected if w is either too small or too large. To obtain the associated CI for fixed xi's and yj's, we must determine the set of all Δ0 values for which H0 is not rejected. This is easiest to do if we first express the test statistic in a slightly different form. The smallest possible value of W is m(m + 1)/2, corresponding to every (Xi − Δ0) less than every Yj, and there are mn differences of the form (Xi − Δ0) − Yj. A bit of manipulation gives

  W = [number of (Xi − Yj − Δ0)'s ≥ 0] + m(m + 1)/2
    = [number of (Xi − Yj)'s ≥ Δ0] + m(m + 1)/2   (14.6)

Thus rejecting H0 if the number of (xi − yj)'s ≥ Δ0 is either too small or too large is equivalent to rejecting H0 for small or large w.
Expression (14.6) suggests that we compute xi − yj for each i and j and order these mn differences from smallest to largest. Then if the null value Δ0 is neither smaller than most of the differences nor larger than most, H0: μ1 − μ2 = Δ0 is not rejected. Varying Δ0 now shows that a CI for μ1 − μ2 will have as its lower endpoint one of the ordered (xi − yj)'s, and similarly for the upper endpoint.

PROPOSITION  Let x1, …, xm and y1, …, yn be the observed values in two independent samples from continuous distributions that differ only in location (and not in shape). With dij = xi − yj and the ordered differences denoted by dij(1), dij(2), …, dij(mn), the general form of a 100(1 − α)% CI for μ1 − μ2 is

  ( dij(mn − c + 1), dij(c) )   (14.7)

where c is the critical constant for the two-tailed level α Wilcoxon rank-sum test.

Notice that the form of the Wilcoxon rank-sum interval (14.7) is very similar to that of the Wilcoxon signed-rank interval (14.5); (14.5) uses pairwise averages from a single sample, whereas (14.7) uses pairwise differences from two samples. Appendix Table A.15 gives values of c for selected values of m and n.

Example 14.6  The article "Some Mechanical Properties of Impregnated Bark Board" (Forest Products J., 1977: 31–38) reports the following data on maximum crushing strength (psi) for a sample of epoxy-impregnated bark board and for a sample of bark board impregnated with another polymer:

  Epoxy (x's)  10,860  11,120  11,340  12,130  14,380  13,070
  Other (y's)   4,590   4,850   6,510   5,640   6,390

Obtain a 95% CI for the true average difference in crushing strength between the epoxy-impregnated board and the other type of board. From Appendix Table A.15, since the smaller sample size is 5 and the larger sample size is 6, c = 26 for a confidence level of approximately 95%. The dij's appear in Table 14.4.
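The mn = 30 differences and the resulting interval can be computed directly. A minimal Python sketch using the data above and c = 26:

```python
# Wilcoxon rank-sum interval for the bark-board data of Example 14.6.
x = [10860, 11120, 11340, 12130, 14380, 13070]   # epoxy-impregnated
y = [4590, 4850, 6510, 5640, 6390]               # other polymer

# All mn = 30 pairwise differences x_i - y_j, in sorted order.
diffs = sorted(xi - yj for xi in x for yj in y)
mn = len(x) * len(y)

# c = 26 (Appendix Table A.15, sample sizes 5 and 6, confidence level ~95%);
# the interval runs from the (mn - c + 1)th smallest difference to the cth.
c = 26
lo, hi = diffs[(mn - c + 1) - 1], diffs[c - 1]
print((lo, hi))    # (4830, 8220)
```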
The five smallest dij's [dij(1), …, dij(5)] are 4350, 4470, 4610, 4730, and 4830; and the five largest dij's are (in descending order) 9790, 9530, 8740, 8480, and 8220. Thus the CI is ( dij(5), dij(26) ) = (4830, 8220).

Table 14.4  Differences (dij) for the rank-sum interval in Example 14.6

       y:      4590   4850   5640   6390   6510
x: 10,860     6270   6010   5220   4470   4350
   11,120     6530   6270   5480   4730   4610
   11,340     6750   6490   5700   4950   4830
   12,130     7540   7280   6490   5740   5620
   13,070     8480   8220   7430   6680   6560
   14,380     9790   9530   8740   7990   7870

When m and n are both large, the Wilcoxon test statistic has approximately a normal distribution (Exercise 14). This can be used to derive a large-sample approximation for the value c in interval (14.7). The result is

  c ≈ mn/2 + z_(α/2) · √( mn(m + n + 1)/12 )   (14.8)

As with the signed-rank interval, the rank-sum interval (14.7) is quite efficient with respect to the t interval; in large samples, (14.7) will tend to be only a bit longer than the t interval when the underlying populations are normal and may be considerably shorter than the t interval if the underlying populations have heavier tails than do normal populations. And once again, the actual confidence level for the t interval may be quite different from the nominal level in the presence of substantial nonnormality.

Exercises  Section 14.3 (17–22)

17. The article "The Lead Content and Acidity of Christchurch Precipitation" (New Zeal. J. Sci., 1980: 311–312) reports the accompanying data on lead concentration (μg/L) in samples gathered during eight different summer rainfalls: 17.0, 21.4, 30.6, 5.0, 12.2, 11.8, 17.3, and 18.8. Assuming that the lead-content distribution is symmetric, use the Wilcoxon signed-rank interval to obtain a 95% CI for μ.

18. Compute the 99% signed-rank interval for true average pH μ (assuming symmetry) using the data in Exercise 3. [Hint: Try to compute only those pairwise averages having relatively small or large values (rather than all 105 averages).]

19. Compute a CI for μD of Example 14.2 using the data given there; your confidence level should be roughly 95%.

20. The following observations are amounts of hydrocarbon emissions resulting from road wear of bias-belted tires under a 522-kg load inflated at 228 kPa and driven at 64 km/h for 6 h ("Characterization of Tire Emissions Using an Indoor Test Facility," Rubber Chem. Tech., 1978: 7–25): .045, .117, .062, and .072. What confidence levels are achievable for this sample size using the signed-rank interval? Select an appropriate confidence level and compute the interval.

21. Compute the 90% rank-sum CI for μ1 − μ2 using the data in Exercise 9.

22. Compute a 99% CI for μ1 − μ2 using the data in Exercise 10.

14.4 Bayesian Methods

Consider making an inference about some parameter θ. The "frequentist" or "classical" approach, which we have followed until now in this book, is to regard the value of θ as fixed but unknown, observe data from a joint pmf or pdf f(x1, …, xn; θ), and use the observations to draw appropriate conclusions. The Bayesian or "subjective" paradigm is different. Again the value of θ is unknown, but Bayesians say that all available information about it (intuition, data from past experiments, expert opinions, etc.) can be incorporated into a prior distribution, usually a prior pdf g(θ), since there will typically be a continuum of possible values of the parameter rather than just a discrete set. If there is substantial knowledge about θ, the prior will be quite peaked and highly concentrated about some central value, whereas a lack of information is shown by a relatively flat "uninformative" prior. These possibilities are illustrated in Figure 14.3. In essence we are now thinking of the actual value of θ as the observed value of a random variable Θ, although unfortunately we ourselves don't get to observe the value.
The (prior) distribution of this random variable is g(θ).

Figure 14.3  A narrow concentrated prior and a wider, less informative prior

Now, just as in the frequentist scenario, an experiment is performed to obtain data. The joint pmf or pdf of the data given the value of θ is p(x1, …, xn | θ) or f(x1, …, xn | θ). We use a vertical line segment here rather than the earlier semicolon to emphasize that we are conditioning on the value of a random variable. At this point, an appropriate version of Bayes' theorem is used to obtain h(θ | x1, …, xn), the posterior distribution of the parameter. In the Bayesian world, this posterior distribution contains all current information about θ. In particular, the mean of this posterior distribution gives a point estimate of the parameter. An interval [a, b] having posterior probability .95 gives a 95% credibility interval, the Bayesian analogue of a 95% confidence interval (but the interpretation is different). After presenting the necessary version of Bayes' theorem, we illustrate the Bayesian approach with two examples.

Bayes' theorem here needs to be a bit more general than in Section 2.4 to allow for the possibility of continuous distributions. This version gives the posterior distribution h(θ | x1, x2, …, xn) as a product of the prior pdf times the conditional pdf, with a denominator to assure that the total posterior probability is 1:

  h(θ | x1, x2, …, xn) = f(x1, x2, …, xn | θ) g(θ) / ∫ f(x1, x2, …, xn | θ) g(θ) dθ

Example 14.7  Suppose we want to make an inference about a population proportion p. Since the value of this parameter must be between 0 and 1, and the family of standard beta distributions is concentrated on the interval [0, 1], a particular beta distribution is a natural choice for a prior on p.
In particular, consider data from a survey of 1574 American adults reported by the National Science Foundation in May 2002. Of those responding, 803 (51%) incorrectly said that antibiotics kill viruses. In accord with the discussion in Section 3.5, the data can be considered either a random sample of 1574 from the Bernoulli distribution (binomial with number of trials = 1) or a single observation from the binomial distribution with n = 1574. We use the latter approach here, but Exercise 23 involves showing that the Bernoulli approach is equivalent.

Assuming a beta prior for p on [0, 1] with parameters a and b and the binomial distribution Bin(n = 1574, p) for the data, we get for the posterior distribution [writing C(n, x) for the binomial coefficient]

  h(p | x) = f(x | p) g(p) / ∫0^1 f(x | p) g(p) dp
           = C(n, x) p^x (1 − p)^(n−x) · [Γ(a + b)/(Γ(a)Γ(b))] p^(a−1) (1 − p)^(b−1)
             / ∫0^1 C(n, x) p^x (1 − p)^(n−x) · [Γ(a + b)/(Γ(a)Γ(b))] p^(a−1) (1 − p)^(b−1) dp

The numerator can be written as

  C(n, x) · [Γ(a + b)/(Γ(a)Γ(b))] · [Γ(x + a)Γ(n − x + b)/Γ(n + a + b)]
    · [ Γ(n + a + b)/(Γ(x + a)Γ(n − x + b)) · p^(x+a−1) (1 − p)^(n−x+b−1) ]

Given that the part in square brackets at the end is of the form of a beta pdf on [0, 1], its integral over this interval is 1. The part in front of those brackets is shared by the numerator and denominator, and will therefore cancel. Thus

  h(p | x) = [Γ(n + a + b)/(Γ(x + a)Γ(n − x + b))] · p^(x+a−1) (1 − p)^(n−x+b−1)

That is, the posterior distribution of p is itself a beta distribution with parameters x + a and n − x + b.

If we were using the traditional non-Bayesian frequentist approach to statistics, and we wanted to give an estimate of p for this example, we would give the usual estimate from Section 8.2, x/n = 803/1574 = .51. The usual Bayesian estimate is the posterior mean, the expected value of p given the data. Recalling that the mean of the beta distribution on [0, 1] with parameters α and β is α/(α + β), we obtain

  E(p | x) = (x + a)/(n + a + b) = (803 + a)/(1574 + a + b)

for the posterior mean.
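The closed-form beta result can be cross-checked by numerically integrating Bayes' theorem. A Python sketch (a midpoint-rule grid of 10,000 points is an arbitrary choice; the uniform prior a = b = 1 anticipates the case discussed next):

```python
import math

# Numerical cross-check of the beta-posterior derivation:
# h(p|x) = f(x|p) g(p) / integral of f(x|p) g(p) dp.
# Binomial data x = 803, n = 1574 as in the NSF survey; uniform prior.
x, n = 803, 1574

def likelihood(p):
    """Binomial pmf f(x | p), computed on the log scale to avoid overflow."""
    logc = math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
    return math.exp(logc + x * math.log(p) + (n - x) * math.log(1 - p))

def prior(p):
    return 1.0                       # uniform prior g(p) = 1 on (0, 1)

# Midpoint-rule approximation of the normalizing integral.
dx = 1e-4
grid = [(i + 0.5) * dx for i in range(10000)]
norm = sum(likelihood(p) * prior(p) for p in grid) * dx

# Posterior mean; the closed form gives (x + a)/(n + a + b) = 804/1576.
post_mean = sum(p * likelihood(p) * prior(p) for p in grid) * dx / norm
print(round(post_mean, 4))
```

The grid estimate agrees with the exact posterior mean 804/1576 ≈ .5102.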
Suppose that a = b = 1, so the beta prior distribution reduces to the uniform distribution on [0, 1]. Then E(p | x) = (803 + 1)/(1574 + 2) = .51, and in this case the Bayesian and frequentist results are essentially the same. It should be apparent that, if a and b are small compared to n, then the prior distribution will not matter much. Indeed, if a and b are close to 0 and positive, then E(p | x) ≈ x/n. We should hesitate to set a and b equal to 0, because this would make the beta prior pdf not integrable, but it does nevertheless give a reasonable posterior distribution if x and n − x are positive. A prior distribution that is not integrable is said to be improper.

In Bayesian inference, is there an interval corresponding to the confidence interval for p given in Section 8.2? We have the posterior distribution for p, so we can take the central 95% of this distribution and call it a 95% credibility interval, as mentioned at the beginning of this section. In the case with a beta prior and a = 1, b = 1, we have a beta posterior with α = 804, β = 772. Using the inverse cumulative beta distribution function from MINITAB (or almost any major statistical package) evaluated at .025 and .975, we obtain the interval [.4855, .5348]. For comparison, the 95% confidence interval from Equation (8.10) of Section 8.2 is [.4855, .5348]. The intervals are not exactly the same, although they do agree to four decimals. The simpler formula, Equation (8.11), gives the answer [.4855, .5349], which is very close because of the large sample size.

It is interesting that, although the frequentist and Bayesian intervals agree to four decimals, they have very different interpretations. For the Bayesian interval we can say that the probability is 95% that p is in the interval, given the data.
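These numbers are easy to reproduce. A Python sketch of the Beta(804, 772) posterior; since exact beta quantiles require a statistics package, the sketch uses the normal approximation to the beta (the idea of Exercise 30), which here matches the exact interval to four decimals:

```python
import math

# Beta posterior for the survey data of Example 14.7: x = 803 of n = 1574
# with a uniform Beta(1, 1) prior, giving a Beta(804, 772) posterior.
x, n, a, b = 803, 1574, 1, 1
a_post, b_post = x + a, n - x + b            # 804, 772

post_mean = a_post / (a_post + b_post)       # (x + a)/(n + a + b) ~ .5102
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
post_sd = math.sqrt(post_var)

# With parameters this large the beta posterior is nearly normal, so a 95%
# credibility interval is approximately mean -+ 1.96 sd; the exact interval
# from the inverse beta cdf, per the text, is [.4855, .5348].
lo = post_mean - 1.96 * post_sd
hi = post_mean + 1.96 * post_sd
print(f"mean {post_mean:.4f}, approx 95% credibility ({lo:.4f}, {hi:.4f})")
```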
However, this is not correct for the frequentist interval, because p is not random and the endpoints are not random after they have been computed, and therefore no probability statement is appropriate. Here the 95% applies to the aggregate of confidence intervals, of which in the long run 95% should include the true p.

The confidence intervals and credibility interval all include .5, so they allow the possibility that p = .5. Another way to view this possibility in Bayesian terms is to see whether the posterior distribution is consistent with p = .5. We actually consider the related hypothesis p ≤ .5. Using a = 1 and b = 1 again, we find from MINITAB that the beta distribution with α = 804 and β = 772 has probability .2100 of being less than or equal to .5. The corresponding one-tailed frequentist P-value is the probability, assuming p = .5, of at least 803 successes in 1574 trials, which is .2173. Both the Bayesian and frequentist values are much greater than .05, and there is no reason to reject .5 as a possible value for p.

To clarify the relationship between E(p | x) and x/n, we can write E(p | x) as a weighted average of the prior mean a/(a + b) and x/n:

  E(p | x) = [ (a + b)/(a + b + n) ] · a/(a + b) + [ n/(a + b + n) ] · x/n

The weights can be interpreted in terms of the sum of the two parameters of the beta distribution, which is often called the concentration parameter. The weights are proportional to the concentration parameter a + b of the prior distribution and the number n of observations. The weight of the prior depends on the size of a + b in relation to n, and the concentration parameter of the posterior distribution is the total a + b + n.

It is also useful to interpret the posterior pdf in terms of the concentration parameter. Because the first parameter is the sum x + a and the second parameter is the sum (n − x) + b, the effect of a is to add to the number of successes and the effect of b is to add to the number of failures.
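The weighted-average identity can be verified numerically. A short Python check using the survey counts and, for illustration, a Beta(6, 4) prior (a prior worth a + b = 10 observations):

```python
# Check that the beta posterior mean equals the weighted average of the
# prior mean a/(a + b) and the sample proportion x/n.
def posterior_mean(x, n, a, b):
    return (x + a) / (n + a + b)

def weighted_form(x, n, a, b):
    w_prior = (a + b) / (a + b + n)          # weight on the prior mean
    w_data = n / (a + b + n)                 # weight on x/n
    return w_prior * a / (a + b) + w_data * x / n

# NSF survey counts with an illustrative Beta(6, 4) prior.
x, n, a, b = 803, 1574, 6, 4
assert abs(posterior_mean(x, n, a, b) - weighted_form(x, n, a, b)) < 1e-12
print(round(posterior_mean(x, n, a, b), 4))   # 0.5107
```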
In particular, setting a to 1 and b to 1 resulted in a posterior with the equivalent of 803 + 1 successes and (1574 − 803) + 1 failures, for a total of 1574 + 2 observations. From this viewpoint, the total observations are the a + b provided by the prior plus the n provided by the data, and this addition also gives the concentration parameter of the posterior in terms of the concentration parameter of the prior.

How should we specify the prior distribution? The beta distribution is convenient, because it is easy with this specification to find the posterior distribution, but what about a and b? Suppose we have asked 10 adults about the effect of antibiotics on viruses, and it is reasonable to assume that the 10 are a random sample. If 6 of the 10 say that antibiotics kill viruses, then we set a = 6 and b = 10 − 6 = 4. That is, we have a beta-distributed prior with parameters 6 and 4. Then the posterior distribution is beta with parameters 803 + 6 = 809 and (1574 − 803) + 4 = 775. The posterior is the same as if we had started with a = 0 and b = 0 and observed 809 who said that antibiotics kill viruses and 775 who said no. In other words, observations can be incorporated into the prior and count just as if they were part of the NSF survey.

Life in the Bayesian world is sometimes more complicated. Perhaps the prior observations are not of a quality equivalent to that of the survey, but we would still like to use them to form a prior distribution. If we regard them as being only half as good, then we could use the same proportions but cut a and b in half, using 3 and 2 instead of 6 and 4. There is certainly a subjective element to this, and it suggests why some statisticians are hesitant about using Bayesian methods.
When everyone can agree about the prior distribution, there is little controversy about the Bayesian procedure, but when the prior is very much a matter of opinion, people tend to disagree about its value.

Example 14.8  Assume a random sample X1, X2, …, Xn from the normal distribution with known variance, and assume a normal prior distribution for μ. In particular, consider the IQ scores of 18 first-grade boys,

  113 108 140 113 115 146 136 107 108 119 132 127 118 108 103 103 122 111

from the private speech data introduced in Example 1.2. Because IQ has a standard deviation of 15 nationwide, we can assume σ = 15 is valid here. For the prior distribution it is reasonable to use a mean of μ0 = 110, a ballpark figure for previous years in this school. It is harder to prescribe a standard deviation for the prior, but we will use σ0 = 7.5. This is the standard deviation for the average of four independent observations if the individual standard deviation is 15. As a result, the effect on the posterior mean will turn out to be the same as if there were four additional observations with average 110.

To compute the posterior distribution of the mean μ, we use Bayes' theorem:

  h(μ | x1, x2, …, xn) = f(x1, x2, …, xn | μ) g(μ) / ∫ f(x1, x2, …, xn | μ) g(μ) dμ

The numerator is

  f(x1, …, xn | μ) g(μ) = [1/(√(2π)σ)]^n e^( −Σ(xi − μ)²/(2σ²) ) · [1/(√(2π)σ0)] e^( −(μ − μ0)²/(2σ0²) )

The trick here is to complete the square in the exponent, which yields

  −(1/(2σ1²))(μ − μ1)² + C

where C does not involve μ and

  1/σ1² = 1/σ0² + n/σ²    μ1 = ( μ0/σ0² + n x̄/σ² ) / ( 1/σ0² + n/σ² )

The posterior is then

  h(μ | x1, x2, …, xn) = K e^( −(μ − μ1)²/(2σ1²) ) / [ K ∫ e^( −(μ − μ1)²/(2σ1²) ) dμ ]

where K collects every factor not involving μ. The integral equals √(2π)σ1 because the integrand is √(2π)σ1 times a normal pdf, and the K's cancel, leaving a posterior distribution that is normal with mean μ1 and standard deviation σ1:

  h(μ | x1, x2, …, xn) = [1/(√(2π)σ1)] e^( −(1/(2σ1²))(μ − μ1)² )

Notice that the posterior mean μ1 is a weighted average of the prior mean μ0 and the data mean x̄, with weights that are the reciprocals of the prior variance and the variance of X̄. It makes sense to define the precision as the reciprocal of the variance, because a lower variance implies a more precise measurement, and the weights then are the corresponding precisions. Furthermore, the posterior variance is the reciprocal of the sum of the reciprocals of the two variances, but this can be described much more simply by saying that the posterior precision is the sum of the prior precision and the precision of x̄.

Numerically, we have

  1/σ1² = 1/σ0² + n/σ² = 1/7.5² + 18/15² = .0978    so    σ1 = 3.198
  μ1 = ( 110/7.5² + 18(118.28)/15² ) / .0978 = 116.77

The posterior distribution is normal with mean μ1 = 116.77 and standard deviation σ1 = 3.198. The mean μ1 is a weighted average of x̄ = 118.28 and μ0 = 110, so μ1 is necessarily between them. As n becomes large, the weight given to μ0 declines, and μ1 will be closer to x̄.

Knowing the mean and standard deviation, we can use the normal distribution to find an interval with 95% probability for μ. This 95% credibility interval is [110.502, 123.038]. For comparison, the 95% confidence interval using x̄ = 118.28 and σ = 15 is x̄ ± 1.96σ/√n = [111.35, 125.21]. Notice that this interval must be wider. Because the precisions add to give the posterior precision, the posterior precision is greater than the prior precision and it is greater than the data precision. Therefore, it is guaranteed that the posterior standard deviation σ1 will be less than σ0 and less than the data standard deviation σ/√n.

Both the credibility interval and the confidence interval exclude 110, so we can be pretty sure that μ exceeds 110. Another way of looking at this is to calculate the posterior probability of μ being less than or equal to 110. Using μ1 = 116.77 and σ1 = 3.198, we obtain the probability .0171, so this too supports the idea that μ exceeds 110.
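The posterior calculation in Example 14.8 can be scripted directly. A minimal Python sketch using the data and prior from the example:

```python
import math

# Normal prior + normal data (known sigma): posterior precision is the sum of
# the precisions, and the posterior mean is precision-weighted (Example 14.8).
iq = [113, 108, 140, 113, 115, 146, 136, 107, 108, 119,
      132, 127, 118, 108, 103, 103, 122, 111]
n, sigma = len(iq), 15.0
mu0, sigma0 = 110.0, 7.5
xbar = sum(iq) / n                          # 118.28

prec = 1 / sigma0**2 + n / sigma**2         # posterior precision 1/sigma1^2
sigma1 = math.sqrt(1 / prec)                # ~3.198
mu1 = (mu0 / sigma0**2 + n * xbar / sigma**2) / prec   # ~116.77

# 95% credibility interval from the normal posterior.
lo, hi = mu1 - 1.96 * sigma1, mu1 + 1.96 * sigma1
print(f"posterior N({mu1:.2f}, {sigma1:.3f}^2), "
      f"95% credibility ({lo:.2f}, {hi:.2f})")
```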
How should we go about choosing μ0 and σ0 for the prior distribution? Suppose we have four prior observations for which the mean is 110. The standard deviation of their mean is 15/√4 = 7.5. We therefore choose μ0 = 110 and σ0 = 7.5, the same values used for this example. If the four values are combined with the 18 values from the data set, then the mean of all 22 is 116.77 = μ1, and the standard deviation of the mean is 15/√22 = 3.198 = σ1. The 95% confidence interval for the mean, based on the average of all 22 observations, is the same as the Bayesian 95% credibility interval. This says that if you have some preliminary data values that are just as good as the regular data values that will be obtained, then base the prior distribution on the preliminary data. The posterior mean and its standard deviation will be the same as if the preliminary data were combined with the regular data, and the 95% credibility interval will be the same as the 95% confidence interval.

It should be emphasized that, even if the confidence interval is the same as the credibility interval, they have different interpretations. To interpret the Bayesian credibility interval, we can say that the probability is 95% that μ is in the interval [110.502, 123.038]. However, for the frequentist confidence interval such a probability statement does not make sense, because μ and the endpoints of the interval are all constants after the interval has been calculated. Instead we have the more complicated interpretation that, in repeated realizations of the confidence interval, 95% of the intervals will include the true μ in the long run.

What should be done if there are no prior observations and there are no strong opinions about the prior mean μ0? In this case the prior standard deviation σ0 can be taken as some large number much bigger than σ, such as σ0 = 1000 in our example.
The result is that the prior will have essentially no effect, and the posterior distribution will be based on the data: μ_B = x̄ = 118.28 and σ_B = σ/√n = 15/√18 = 3.5355. The 95% credibility interval will be the same as the 95% confidence interval based on the 18 observations, [111.35, 125.21], but of course the interpretation is different.

In both examples it turned out that the posterior distribution has the same form as the prior distribution. When this happens we say that the prior distribution is conjugate to the data distribution. Exercises 31 and 32 offer additional examples of conjugate distributions.

Exercises Section 14.4 (23–32)

23. For the data of Example 14.7 assume a beta prior distribution and assume that the 1574 observations are a random sample from the Bernoulli distribution. Use Bayes' theorem to derive the posterior distribution, and compare your answer with the result of Example 14.7.

24. Here are the IQ scores for the 15 first-grade girls from the study mentioned in Example 14.8.

102 96 106 118 108 122 115 113 109 113 82 110 121 110 99

Assume the same prior distribution used in Example 14.8, and assume that the data is a random sample from a normal distribution with mean μ and σ = 15.
a. Find the posterior distribution of μ.
b. Find a 95% credibility interval for μ.
c. Add four observations with average 110 to the data and find a 95% confidence interval for μ using the 19 observations. Compare with the result of (b).
d. Change the prior so the prior precision is very small but positive, and then recompute (a) and (b).
e. Find a 95% confidence interval for μ using the 15 observations and compare with the credibility interval of (d).

25. Laplace's rule of succession says that if there have been n Bernoulli trials and they have all been successes, then the probability of a success on the next trial is (n + 1)/(n + 2). For the derivation Laplace used a beta prior with a = 1 and b = 1 for binomial data, as in Example 14.7.
a. Show that, if a = 1 and b = 1 and there are n successes in n trials, then the posterior mean of p is (n + 1)/(n + 2).
b. Explain (a) in terms of total successes and failures; that is, explain the result in terms of two prior trials plus n later trials.
c. Laplace applied his rule of succession to compute the probability that the sun will rise tomorrow, using 5000 years, or n = 1,826,214 days, of history in which the sun rose every day. Is Laplace's method equivalent to including two prior days when the sun rose once and failed to rise once? Criticize the answer in terms of total successes and failures.

26. For the scenario of Example 14.8 assume the same normal prior distribution, but assume that the data set is just one observation x̄ = 118.28 with standard deviation σ/√n = 15/√18 = 3.5355. Use Bayes' theorem to derive the posterior distribution, and compare your answer with the result of Example 14.8.

27. Let X have the beta distribution on [0, 1] with parameters α = ν₁/2 and β = ν₂/2, where ν₁/2 and ν₂/2 are positive integers. Define Y = (X/α)/[(1 − X)/β]. Show that Y has the F distribution with degrees of freedom ν₁, ν₂.

28. In a study by Erich Brandt of 70 restaurant bills, 40 of the 70 were paid using cash. We assume a random sample and estimate the posterior distribution of the binomial parameter p, the population proportion paying cash.
a. Use a beta prior distribution with a = 2 and b = 2.
b. Use a beta prior distribution with a = 1 and b = 1.
c. Use a beta prior distribution with a and b very small and positive.
d. Calculate a 95% credibility interval for p using (c). Is your interval compatible with p = .5?
e. Calculate a 95% confidence interval for p using Equation (8.10) of Section 8.2, and compare with the result of (d).
f. Calculate a 95% confidence interval for p using Equation (8.11) of Section 8.2, and compare with the results of (d) and (e).
g. Compare the interpretations of the credibility interval and the confidence intervals.
h. Based on the prior in (c), test the hypothesis p ≤ .5 by using the posterior distribution to find P(p ≤ .5).

29. Exercise 27 gives an alternative way of finding beta probabilities when software for the beta distribution is unavailable.
a. Use Exercise 27 together with the F table to obtain a 90% credibility interval for Exercise 28(c). [Hint: To find c such that .05 is the probability that F is to the left of c, reverse the degrees of freedom and take the reciprocal of the value for α = .05.]
b. Repeat (a) using software for the beta distribution, and compare with the result of (a).

30. If α and β are large, then the beta distribution can be approximated by the normal distribution using the beta mean and variance given in Section 4.5. This is useful in case beta distribution software is unavailable. Use the approximation to compute the credibility interval in Example 14.7.

31. Assume a random sample X₁, X₂, …, Xₙ from the Poisson distribution with mean λ. If the prior distribution for λ has a gamma distribution with parameters α and β, show that the posterior distribution is also gamma distributed. What are its parameters?

32. Consider a random sample X₁, X₂, …, Xₙ from the normal distribution with mean 0 and precision τ (use τ as a parameter instead of σ² = 1/τ). Assume a gamma-distributed prior for τ and show that the posterior distribution of τ is also gamma. What are its parameters?

Supplementary Exercises (33–42)

33. The article "Effects of a Rice-Rich Versus Potato-Rich Diet on Glucose, Lipoprotein, and Cholesterol Metabolism in Noninsulin-Dependent Diabetics" (Amer. J. Clin. Nutrit., 1984: 598–606) gives the accompanying data on cholesterol-synthesis rate for eight diabetic subjects. Subjects were fed a standardized diet with potato or rice as the major carbohydrate source. Participants received both diets for specified periods of time, with cholesterol-synthesis rate (mmol/day) measured at the end of each dietary period. The analysis presented in this article used a distribution-free test. Use such a test with significance level .05 to determine whether the true mean cholesterol-synthesis rate differs significantly for the two sources of carbohydrates.

Subject   1    2    3    4    5    6    7    8
Potato  1.88 2.60 1.38 4.41 1.87 2.89 3.96 2.31
Rice    1.70 3.84 1.13 4.97  .86 1.93 3.36 2.15

34. The study reported in "Gait Patterns During Free Choice Ladder Ascents" (Hum. Movement Sci., 1983: 187–195) was motivated by publicity concerning the increased accident rate for individuals climbing ladders. A number of different gait patterns were used by subjects climbing a portable straight ladder according to specified instructions. The ascent times for seven subjects who used a lateral gait and six subjects who used a four-beat diagonal gait are given.

Lateral   .86 1.31 1.64 1.51 1.53 1.39 1.09
Diagonal 1.27 1.82 1.66  .85 1.45 1.24

a. Carry out a test using α = .05 to see whether the data suggests any difference in the true average ascent times for the two gaits.
b. Compute a 95% CI for the difference between the true average gait times.

35. The sign test is a very simple procedure for testing hypotheses about a population median, assuming only that the underlying distribution is continuous. To illustrate, consider the following sample of 20 observations on component lifetime (hr):

 1.7   3.3   5.1   6.9  12.6  14.4  16.4  24.6  26.0  26.5
32.1  37.4  40.1  40.5  41.5  72.4  80.1  86.4  87.5 100.2

We wish to test H₀: μ̃ = 25.0 versus Hₐ: μ̃ > 25.0. The test statistic is Y = the number of observations that exceed 25.
a. Consider rejecting H₀ if Y ≥ 15. What is the value of α (the probability of a type I error) for this test? [Hint: Think of a "success" as a lifetime that exceeds 25.0. Then Y is the number of successes in the sample. What kind of a distribution does Y have when μ̃ = 25.0?]
b. What rejection region of the form Y ≥ c specifies a test with a significance level as close to .05 as possible? Use this region to carry out the test for the given data. [Note: The test statistic is the number of differences Xᵢ − 25.0 that have positive signs, hence the name sign test.]

36. Refer to Exercise 35, and consider a confidence interval associated with the sign test, the sign interval. The relevant hypotheses are now H₀: μ̃ = μ̃₀ versus Hₐ: μ̃ ≠ μ̃₀. Let's use the following rejection region: either Y ≥ 15 or Y ≤ 5.
a. What is the significance level for this test?
b. The confidence interval will consist of all values μ̃₀ for which H₀ is not rejected. Determine the CI for the given data, and state the confidence level.

37. The single-factor ANOVA model considered in Chapter 11 assumed the observations in the ith sample were selected from a normal distribution with mean μᵢ and variance σ², that is, Xᵢⱼ = μᵢ + εᵢⱼ, where the ε's are normal with mean 0 and variance σ². The normality assumption implies that the F test is not distribution-free. We now assume that the ε's all come from the same continuous, but not necessarily normal, distribution, and develop a distribution-free test of the null hypothesis that all I μᵢ's are identical. Let N = ΣJᵢ, the total number of observations in the data set (there are Jᵢ observations in the ith sample). Rank these N observations from 1 (the smallest) to N, and let R̄ᵢ be the average of the ranks for the observations in the ith sample. When H₀ is true, we expect the rank of any particular observation, and therefore also R̄ᵢ, to be (N + 1)/2. The data argues against H₀ when some of the R̄ᵢ's differ considerably from (N + 1)/2. The Kruskal-Wallis test statistic is

K = [12 / (N(N + 1))] Σᵢ Jᵢ (R̄ᵢ − (N + 1)/2)²

When H₀ is true and either (1) I = 3, all Jᵢ ≥ 6 or (2) I > 3, all Jᵢ ≥ 5, the test statistic has approximately a chi-squared distribution with I − 1 df.

The accompanying observations on axial stiffness index resulted from a study of metal-plate connected trusses in which five different plate lengths (4 in., 6 in., 8 in., 10 in., and 12 in.) were used ("Modeling Joints Made with Light-Gauge Metal Connector Plates," Forest Products J., 1979: 39–44). Use the K-W test to decide at significance level .01 whether the true average axial stiffness index depends somehow on plate length.

i = 1 (4 in.):  309.2 309.7 311.0 316.8 326.5 349.8 409.5
i = 2 (6 in.):  331.0 347.2 348.9 361.0 381.7 402.1 404.5
i = 3 (8 in.):  351.0 357.1 366.2 367.3 382.0 392.4 409.9
i = 4 (10 in.): 346.7 362.6 384.2 410.6 433.1 452.9 461.4
i = 5 (12 in.): 407.4 410.7 419.9 441.2 441.8 465.8 473.4

38. The article "Production of Gaseous Nitrogen in Human Steady-State Conditions" (J. Appl. Physiol., 1972: 155–159) reports the following observations on the amount of nitrogen expired (in liters) under four dietary regimens: (1) fasting, (2) 23% protein, (3) 32% protein, and (4) 67% protein. Use the Kruskal-Wallis test (Exercise 37) at level .05 to test equality of the corresponding μᵢ's.

1. 4.079 4.859 3.540 5.047 3.298 4.679 2.870 4.648 3.847
2. 4.368 5.668 3.752 5.848 3.802 4.844 3.578 5.393 4.374
3. 4.169 5.709 4.416 5.666 4.123 5.059 4.403 4.496 4.688
4. 4.928 5.608 4.940 5.291 4.674 5.038 4.905 5.208 4.806

39. The model for the data from a randomized block experiment for comparing I treatments was Xᵢⱼ = μ + αᵢ + βⱼ + εᵢⱼ, where the α's are treatment effects, the β's are block effects, and the ε's were assumed normal with mean 0 and variance σ². We now replace normality by the assumption that the ε's have the same continuous distribution. A distribution-free test of the null hypothesis of no treatment effects, called Friedman's test, involves first ranking the observations in each block separately from 1 to I. The rank average R̄ᵢ is then calculated for each of the I treatments. If H₀ is true, the expected value of each rank average is (I + 1)/2. The test statistic is

Fr = [12J / (I(I + 1))] Σᵢ (R̄ᵢ − (I + 1)/2)²

For even moderate values of J, the test statistic has approximately a chi-squared distribution with I − 1 df. The article "Physiological Effects During Hypnotically Requested Emotions" (Psychosomatic Med., 1963: 334–343) reports the following data on skin potential (millivolts) when the emotions of fear, happiness, depression, and calmness were requested from each of eight subjects. Use Friedman's test to decide whether emotion has an effect on skin potential.

Blocks (Subjects)
              1     2     3     4
Fear        23.1  57.6  10.5  23.6
Happiness   22.7  53.2   9.7  19.6
Depression  22.5  53.7  10.8  21.1
Calmness    22.6  53.1   8.3  21.6
              5     6     7     8
Fear        11.9  54.6  21.0  20.3
Happiness   13.8  47.1  13.6  23.6
Depression   8.3  37.0  14.8  21.6
Calmness    13.3  47.0  11.7  16.3

40. In an experiment to study the way in which different anesthetics affect plasma epinephrine concentration, ten dogs were selected and concentration was measured while they were under the influence of the anesthetics isoflurane, halothane, and cyclopropane ("Sympathoadrenal and Hemodynamic Effects of Isoflurane, Halothane, and Cyclopropane in Dogs," Anesthesiology, 1974: 465–470). Test at level .05 to see whether there is an anesthetic effect on concentration. [Hint: See Exercise 39.]

Dog             1     2     3     4     5
Isoflurane    .28   .51  1.00   .39   .29
Halothane     .30   .39   .63   .38   .21
Cyclopropane 1.07  1.35   .69   .28  1.24
Dog             6     7     8     9    10
Isoflurane    .36   .32   .69   .17   .33
Halothane     .88   .39   .51   .32   .42
Cyclopropane 1.53   .49   .56  1.02   .30
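Friedman's statistic is simple to compute directly. As a minimal sketch (plain Python, no statistics library; the variable names are my own), here it is applied to the plasma epinephrine data of Exercise 40, ranking the three anesthetics within each dog and evaluating Fr as defined in Exercise 39:

```python
# Friedman's test (Exercise 39) on the data of Exercise 40:
# I = 3 treatments (anesthetics), J = 10 blocks (dogs).
iso = [.28, .51, 1.00, .39, .29, .36, .32, .69, .17, .33]
hal = [.30, .39, .63, .38, .21, .88, .39, .51, .32, .42]
cyc = [1.07, 1.35, .69, .28, 1.24, 1.53, .49, .56, 1.02, .30]

I, J = 3, len(iso)
rank_sums = [0] * I
for block in zip(iso, hal, cyc):     # rank within each dog (no ties in these data)
    order = sorted(range(I), key=lambda t: block[t])
    for rank, t in enumerate(order, start=1):
        rank_sums[t] += rank

rbar = [s / J for s in rank_sums]    # rank averages; each is (I+1)/2 = 2 under H0
fr = 12 * J / (I * (I + 1)) * sum((r - (I + 1) / 2) ** 2 for r in rbar)
print(rbar, round(fr, 2))            # rank averages 1.9, 1.7, 2.4 and Fr = 2.6
```

Since Fr = 2.6 falls short of χ²₀.₀₅,₂ = 5.992 (Table A.6), this statistic alone would not reject the hypothesis of no anesthetic effect at level .05.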
41. Suppose we wish to test

H₀: the X and Y distributions are identical

versus

Hₐ: the X distribution is less spread out than the Y distribution

The accompanying figure pictures X and Y distributions for which Hₐ is true: the X density is concentrated in the middle of the more spread-out Y density. The Wilcoxon rank-sum test is not appropriate in this situation, because when Hₐ is true as pictured, the Y's will tend to be at the extreme ends of the combined sample (resulting in small and large Y ranks), so the sum of the X ranks will result in a W value that is neither large nor small.

Consider modifying the procedure for assigning ranks as follows: After the combined sample of m + n observations is ordered, the smallest observation is given rank 1, the largest observation is given rank 2, the second smallest is given rank 3, the second largest is given rank 4, and so on. Then if Hₐ is true as pictured, the X values will tend to be in the middle of the sample and thus receive large ranks. Let W′ denote the sum of the X ranks and consider rejecting H₀ in favor of Hₐ when W′ ≥ c. When H₀ is true, every possible set of X ranks has the same probability, so W′ has the same distribution as does W when H₀ is true. Thus c can be chosen from Appendix Table A.13 to yield a level α test. The accompanying data refers to medial muscle thickness for arterioles from the lungs of children who died from sudden infant death syndrome (x's) and a control group of children (y's). Carry out the test of H₀ versus Hₐ at level .05.

SIDS     4.0  4.4  4.8  4.9
Control  3.7  4.1  4.3  5.1  5.6

Consult the Lehmann book (in the chapter bibliography) for more information on this test, called the Siegel-Tukey test.

42. The ranking procedure described in Exercise 41 is somewhat asymmetric, because the smallest observation receives rank 1 whereas the largest receives rank 2, and so on. Suppose both the smallest and the largest receive rank 1, the second smallest and second largest receive rank 2, and so on, and let W″ be the sum of the X ranks. The null distribution of W″ is not identical to the null distribution of W, so different tables are needed. Consider the case m = 3, n = 4. List all 35 possible orderings of the three X values among the seven observations (e.g., 1, 3, 7 or 4, 5, 6), assign ranks in the manner described, compute the value of W″ for each possibility, and then tabulate the null distribution of W″. For the test that rejects if W″ ≥ c, what value of c prescribes approximately a level .10 test? This is the Ansari-Bradley test; for additional information, see the book by Hollander and Wolfe in the chapter bibliography.

Bibliography

Berry, Donald A., Statistics: A Bayesian Perspective, Brooks/Cole-Cengage Learning, Belmont, CA, 1996. An elementary introduction to Bayesian ideas and methodology.

Gelman, Andrew, John B. Carlin, Hal S. Stern, and Donald B. Rubin, Bayesian Data Analysis (2nd ed.), Chapman and Hall, London, 2003. An up-to-date survey of theoretical, practical, and computational issues in Bayesian inference.

Hollander, Myles, and Douglas Wolfe, Nonparametric Statistical Methods (2nd ed.), Wiley, New York, 1999. A very good reference on distribution-free methods with an excellent collection of tables.

Lehmann, Erich, Nonparametrics: Statistical Methods Based on Ranks (revised ed.), Springer, New York, 2006. An excellent discussion of the most important distribution-free methods, presented with a great deal of insightful commentary.

Appendix Tables

Table A.1 Cumulative Binomial Probabilities: B(x; n, p) = Σ_{y=0}^{x} b(y; n, p), tabulated for n = 5, 10, 15, 20, and 25 and for p = .01, .05, .10, .20, .25, .30, .40, .50, .60, .70, .75, .80, .90, .95, .99.
528.337.188.087 031.016.007.000 .000 .000 x 2 1.000 999 991 .942 896 .837 683 .500 .317 .163 .104 .058 .009 .001 .000 3 1.000 1.000 1.000 .993 984 969 913 812 663 472 367 .263 081 .023 001 4 1.000 1.000 1.000 1.000 .999 .998 .990 .969 .922 .832 .763 .672 410 .226 .049 bo n=10 P 0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99 0 904 599 349 107.056 028.006.001.000 .000 .000 .000 .000 .000 .000 1 996 914.736 376 244.149.046.011 -.002 000.000 .000 .000 .000 .000 2 1.000 988 930 678 526.383.167.055 012.002 .000 .000 .000 .000 .000 3 1.000.999 987.879.776.650 382-172-055 O11 004 ~.001 000 .000 .000 4 1.000 1.000 998 .967 .922 .850 .633 .377 .166 .047 .020 .006 .000 .000 .000 eS 5 1.000 1.000 1.000 .994 980 .953. 834.623. 367.150.078.033 .002 000 .000 6 1.000 1,000 1.000 .999 996 989 945 828 618 .350 .224 .121 .013 .001 .000 7 1.000 1.000 1.000 1.000 1.000 .998 .988 945 833.617 474 322.070.012.000 8 1.000 1.000 1.000 1.000 1.000 1.000 .998 .989 .954 851 .756 .624 .264 .086 .004 9 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .994 .972 .944 .893 .651 .401 .096 en=15 Pp 0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99 0 860 463.206 §.035 013, 005.000.000.000 000.000.000.000 .000 .000 1 990 829 549.167.080.035. 005.000 = .000 000 .000 .000 .000 .000 .000 2 1.000 .964 816 .398 .236 «=.127 027.004.000.000 .000 .000 .000 .000 .000 3 1.000 995 944 648 461 .297 091 .018 .002 .000 .000 .000 .000 .000 .000 4 1.000 .999 987 836 686 SIS .217 059 009.001.000.000 .000 .000 .000 5 1.000 1.000 .998 .939 852 .722 402 .151 .034 004 001 .000 .000 .000 .000 6 1.000 1.000 1.000 .982 .943 869 610 .304 .095 .015 .004 .001 .000 .000 .000 x 7 1.000 1.000 1.000 .996 .983 950.787.500.213. 
050 017.004.000.000 .000 8 1.000 1.000 1.000 999 .996 985 905 .696 .390 .131 .057 .018 .000 .000 .000 9 1.000 1.000 1.000 1.000 .999 996 966 .849 .597 .278 .148 .061 .002 .000 .000 10 1.000 1.000 1.000 1.000 1.000 .999 .991 .941 .783 485 314 .164 013 .001 .000 11 1,000 1.000 1.000 1,000 1.000 1,000 .998 982 .909 .703 .539 .352 .056 .005 .000 12 1,000 1.000 1,000 1,000 1.000 1,000 1,000 .996 .973 .873 .764 .602 .184 .036 .000 13 1,000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .995 .965 920 833 .451 .171 .010 14 1,000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1,000 .995 987 .965 .794 .537 .140 (continued) --- Trang 802 --- Appendix Tables 789 Table A.1 Cumulative Binomial Probabilities (cont.) < B(x; n, p) = > bys n. p) = d.n = 20 Pp 0.01 0.05 0.10 0.20 06.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99 O 818 358 122.012.003.001 000.000.000.000 .000 000 .000 .000 .000 1 983.736 «392, 069.024. .008)—.001_— 000.000.000.000 .000 .000 .000 .000 2 999 925 677 206 =.091 035.004.000.000 000.000.000.000 .000 .000 31,000 984 867 411.225, 107.016.001.000 000.000.000.000 .000 .000 4 1.000 .997 957 630 415 .238 051 = =.006 §=.000 .000 .000 000 .000 .000 .000 5 1,000 1.000.989 804 617 416.126.021.002 000.000.000.000 .000 .000 6 1.000 1.000 998 .913 .786 608 .250 058.006 .000 .000 .000 .000 .000 .000 7 1.000 1.000 1.000 968 898 .772 416 .132 .021 .001 .000 .000 .000 .000 .000 8 1.000 1.000 1.000 990 .959 887 596.252.057.005 .001 .000 .000 .000 .000 9 1.000 1.000 1.000 .997 986 .952 .755 412 .128 .017 .004 .001 .000 .000 .000 x 10 1.000 1.000 1.000 .999 996 .983 872 588.245.048.014 003.000.000.000 11 1.000 1.000 1.000 1.000 .999 995 943 .748 404 .113 041 .010 .000 .000 .000 12 1.000 1.000 1.000 1.000 1.000 .999 979 .868 584.228.102.032 000.000 .000 13° 1,000 1.000 1.000 1.000 1.000 1.000 .994 942 .750 .392 .214 .087 .002 .000 .000 14 1,000 1.000 1.000 1.000 1.000 1.000 998 .979 874 584 383.196 .011 .000 .000 15 1,000 1.000 1.000 1.000 1.000 1,000 1.000 .994 .949 .762 585.370 
.043 .003 .000 16 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 984 893 .775 .589 .133 016 .000 17 1,000 1,000 1,000 1,000 1.000 1.000 1.000 1.000 .996 .965 .909 .794 .323 .075 .001 18 1,000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 992 .976 .931 .608 .264 .0I7 19 1,000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1,000 .999 997 988 .878 642 .182 (continued) --- Trang 803 --- 790 Appendix Tables Table A.1 Cumulative Binomial Probabilities (cont.) B(x; n, p) = > b(n. p) = en = 25 P 0.01 0.05 0.10 0.20 0.25 0.30 0.40 0.50 0.60 0.70 0.75 0.80 0.90 0.95 0.99 0 778 277 072 .004 001 000.000.000.000 000.000.000.000 .000 000 1 974 642 271 027 007.002.000.000 000.000 .000 .000 .000 000 .000 2 998 873.537 098 = 032.009.000.000 = .000 = .000 .000 000.000 .000 .000 3 1.000 966.764 234.096.033.002, 000.000.000.000 000.000.000.000 4 1.000 .993 902 421 .214 .090 .009 000 .000 .000 .000 .000 .000 .000 .000 5 1.000 .999 967 617 .378 193 029.002.000.000 .000 .000 .000 .000 .000 6 1.000 1.000 991 .780 561 341 074.007.000.000 .000 .000 .000 .000 .000 7 1.000 1.000 .998 891 .727 512.154.022.001 = .000_ .000 000 .000 .000 .000 8 1.000 1.000 1.000 .953 851.677 .274 =—.054 004.000.000.000 .000 .000 .000 9 1.000 1.000 1.000 .983 .929 811 .425 .115 .013 .000 .000 .000 .000 .000 .000 10 1.000 1.000 1.000 .994 .970 902 .586 .212 .034 002 .000 .000 .000 .000 .000 11 1.000 1.000 1.000 .998 .980 .956 .732 345.078 = .006 001.000 .000 .000 .000 x 12 1,000 1.000 1.000 1.000 .997 .983 846 500.154 017.003.000.000 .000 .000 13 1.000 1.000 1.000 1.000 .999 .994 .922 .655 .268 044.020 002.000.000.000 14 1,000 1.000 1.000 1.000 1.000 .998 .966 .788 414 .098 .030 .006 .000 .000 .000 15 1.000 1.000 1.000 1.000 1.000 1.000 .987 .885 .575 .189 071 017.000 .000 .000 16 1.000 1.000 1.000 1.000 1.000 1.000 .996 946 .726 .323 .149 .047 .000 .000 .000 17 1.000 1.000 1.000 1.000 1.000 1.000 .999 .978 846 488 .273 .109 .002 .000 .000 18 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .993 .926 .659 .439 .220 .009 .000 
.000 19 1,000 1.000 1.000 1,000 1.000 1.000 1.000 .998 .971 .807 .622 .383 .033 .001 .000 20 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .991 910 .786 .579 .098 .007 .000 21 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 967 .904 .766 .236 .034 .000 22 1,000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 991 .968 902 .463 .127 .002 23 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .998 .993 .973 .729 .358 .026 24 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 .999 .996 .928 .723 .222 Table A.2 Cumulative Poisson Probabilities Se N F(x; A) = > ™ y! a Jl 2 3 4 ES) 6 7 8 39 1.0 0 905 819 -7Al .670 607 549 497 449 407 368 1 995 982 963 938 910 878 844 809 72 .736 ES 1.000 999 996 992 986 977 966 953 937 .920 x 3 1.000 1.000 .999 998 997 994 991 987 981 4 1,000 1,000 1,000 999 .999 998 996 * 1,000 1,000 1.000 999 6 1.000 (continued) --- Trang 804 --- Appendix Tables 7914 Table A.2 Cumulative Poisson Probabilities (cont.) xg Ayr Fa:d)= > a yt! a 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 15.0 20.0 0 135 050.018 .007,-—S002.—««001.— 000-000-000 «000 S00 1 406 199 .092 .040 O17 .007 003 .001 .000 .000 .000 2 677 423 238 125 062 .030 014 006 .003 .000 000 3 857 647 433 265 ASI 082 042 021 010 .000 .000 4 947 815 629 440 285 173 100 055 029 001 000 5 983 916 785 616 446 301 191 116 = 067, S003. — 000 6 99 966 889 762 606 450 313 207 130 008 —.000 7 .999 988 .949 867 744 599 453 324 220 O18 .001 8 1.000 .996 979 932 847 ‘729 593 456 333 037 .002 9 999 992 968 916 -830 ‘NT 587 A58 070 .005 10 1.000 997 986 957 901 816 .706 583 ALS O11 i .999 995, 980 947 888 803 697 185 021 12 1.000 998 991 973 936 876 792 268 039 + .999 996 .987 966 .926 864 363 .066 14 1.000 999 994 983 589: 917 466 105 15 999 998 992 978 951 568 ST 16 1.000 999 996 .989 973 664 221 17 1.000 998 995 986 749 297 x 18 999 998 993 819 381 19 1.000 a. 
.997 875 470 20 1.000.998 917559 21 999 947644 22 1.000 967 721 23 981 787 24 989 843 25 994 888 26 997 .922 27 998 948 28 999 .966 29 1.000 978 30 987 31 992 32 995 33 997 34 999 35 .999 36 1.000 --- Trang 805 --- 792 Appendix Tables Table A.3 Standard Normal Curve Areas @) = PZ =2) Standard normal density curve DY i Shaded area = (2) 1 1 ' ’ lS 0 z 3 00 OL 02 03 04 05 06 07 08 09 -3.4 | 0003 0003 0003 = .0003.— 0003. = .0003.—S 0003. = .0003. = 0003S .0002 ~3.3 | 0005 .0005 0005-0004. .0004. = 0004 = .0004_— 0004 0004.00 —3.2 | 0007 .0007 0006 = 0006 = 0006 = 0006 »=— 0006 = 0005 0005.05 —3.1 | 0010 0009 = 0009-0009 0008S .0008=S 0008 —S0008=S.0007-—S «0007 —3.0 .0013 .0013 0013 0012 .0012 0011 0011 0011 0010 .0010 -29 .0019 .0018 .0017 0017 .0016 .0016 0015 0015 0014 .0014 —2.8 .0026 0025 0024 0023 0023 .0022 0021 0021 0020 .0019 -2.7 .0035 .0034 .0033 .0032 .0031 .0030 .0029 0028 .0027 .0026 -2.6 .0047 0045 0044 0043 .0041 .0040 .0039 .0038 0037 .0036 —25 .0062 .0060 .0059 0057 0055 .0054 0052 0051 0049 0048 -24 0082 .0080 .0078 0075, 0073 .0071 0069 0068 0066 0064 2.3 | 0107 0104 = 0102, 0099-0096» .0094.—S.0091_~— 0089-0087 .0084 2.2 | 0139 0136 = 0132, 012901250122, 119.0116 OL —2.1 | 0179 0174 0170-0166. 0162, OSB ISA ISD 01460143, -2.0 4.0228 0222 0217 0212 .0207 0202 .0197 0192 0188 .0183 1.9 | 0287 0281 0274 += 0268S 02620256 = 0250S .0244.S 0239S 0233 —1.8 | 0359 0352 0344 = 0336 = 0329-0322, 0314. = .0307- 0301-0294 -1.7 | 0446 0436 = 0427, 0418» .0409 0401-0392 0384. .0375 0367 -1.6 | 0548 0537 0526-0516 = 050504950485 047504650455 -15 | 0668 0655 0643 06300618 060605940582. 
0571_— 0559 -14 .0808 .0793 .0778 0764 0749 .0735 0722 0708 0694 .0681 -13 .0968 .0951 0934 0918 .0901 0885 .0869 0853 0838 .0823 1.2 LSI 1131 1112 -1093 «1075 1056 1038 -1020 -1003 0985 =—11 1357 1335 1314 1292 1271 1251 1230 1210 1190 1170 -1.0 1587 1562 .1539 ISIS 1492 1469 1446 1423 -1401 1379 -0.9 | 1841 181417881762, 1736 TIL 1685 «166016356 0.8 | 2119 2090 = 2061-2033, 2005 1977S «194919221894. 1867 -0.7 | 2420 2389 = 2358S .2327, 2296 = 2266S 2236) 22062177 —S 2148, 0.6 | 2743 2709 «= 2676S 2643S 2611S 2578S w2546 251424832451 —0.5 | 3085 3050-3015. .2981 2946 2912, 2877S 2843. 28102776 -0.4 3446 3409 3372 3336 .3300 3264 3228 3192 3156 3121 0.3 3821 3783 3745 3707 3669 3632 3594 3557 3520 3482 0.2 4207 4168 4129 .4090 4052 4013 3974 3936 3897 3859 -0.1 4602 4562 4522 4483 4443 4404 4364 4325 4286 4247 0.0 5000 4960 .4920 4880 4840 4801 4761 4721 4681 4641 (continued) --- Trang 806 --- Appendix Tables 793 Table A.3 Standard Normal Curve Areas (cont.) D2 = P(Z=D) 3 00 01 02 03 04 05 06 07 08 09 0.0 | .5000 =.5040 5080, S120, -5160- 51995239. 527953195359 0.1 | 5398 54385478 5517 555755965636 $675 S714. S753 0.2 5793 5832 5871 5910 5948 5987 .6026 6064 6103 6141 03 6179 6217 6255 6293 6331 6368 6406 6443 6480 6517 04 6554 6591 6628 6664 6700 .6736 6772 6808 6844 6879 0.5 6915 .6950 6985 .7019 .7054 .7088 7123 TST .7190 7224 0.6 7257 7291 .7324 7357 .7389 7422 7454 .7486 7517 7549 0.7 .7580 -7611 -7642 .7673 .7704 1734 7764 7794 .7823 7852 08 7881 -7910 -1939 .7967 .7995 8023 8051 8078 8106 8133 09 8159 8186 8212 8238 8264 8289 8315 8340 8365 8389 1.0 | 8413 8438 = 846184858508 = 853185548577 85998621 1.1 | 8643 8665 8686-8708 )=— 872987498770 = 879088108830. 1.2 | 8849 8869-8888 890789258944. = 8962 898089979015 13 9032 9049 .9066 9082 -9099 9115 9131 9147 9162 9177 14 | 9192 9207 9222, 9236-9251 9265. 9278 = 929293069319 15 | 9332 9345 9357-9370 .9382, 9394. 
9406 941894299441 1.6 9452 9463 9474 9484 9495 9505 9515 9525 9535 9545 17 9554 9564 9573 9582 9591 .9599 .9608 .9616 9625 9633 18 9641 9649 .9656 .9664 9671 .9678 .9686 .9693 9699 9706 19 9713 719 .9726 9732 9738 9744 .9750 9756 9761 9767 2.0 9772 9778 9783 9788 .9793 9798 .9803 .9808 9812 9817 21 9821 .9826 -9830 9834 9838 9842 .9846 .9850 9854 9857 2.2. 9861 9864 9868 9871 9875 9878 9881 9884 9887 9890 23 -9893 .9896 9898 9901 9904 .9906 -9909 PE. 9913 9916 24 9918 .9920 9922 .9925 9927 .9929 .9931 .9932 9934 9936 25 | 9938 9940 9941 9943 9945. .9946 = 994899499951 9952 2.6 9953 9955 9956 9957 .9959 .9960 9961 9962 9963 9964 27 9965 .9966 .9967 9968 9969 .9970 9971 9972 9973 9974 28 9974 9975 9976 9977 9977 9978 .9979 .9979 9980 9981 29 9981 9982 9982 9983 9984 9984 9985 9985 9986 9986 3.0 9987 9987 9987 9988 9988 .9989 .9989 .9989 9990 9990 3.1 -9990 .9991 .9991 9991 9992 982: 9992 9992 9993 9993 3.2 9993 9993 9994 9994 9994, 9994 9994 9995 9995 9995 3.3 9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 9996 9997 3.4 9997 9997 9997 .9997 .9997 9997 9997 9997 9997 9998 --- Trang 807 --- 794 Appendix Tables Table A.4 The Incomplete Gamma Function al GH F(x a) = | ——y* te dy 0 T(a) a x 1 2 3 4 5 6 7 8 9 10 1 632 264 080 019 004 001 .000 000 .000 000 2 865 594 323 -143 053 O17 005 001 -000 000 3 950 801 S77 B53 185 084 034 012 004 001 4 982 908 762 567 3n 215 All 051 021 008 5 993 960 875 -735 560 384 238 133 .068 .032 6 998 983 938 849 7S 554 394 .256 153 084 7 999 993 970 918 827 699 550 401 an 170 8 1.000 997 986 958 900 809 687 SAT 407 283 9 999 994 979 945 884 .793 676 544 413 10 1.000 997 990 gn 933 870 780 667 542 1 .999 995 985 962 921 857 -768 659 12 1.000 998 992 .980 954 911 845 758 13 999 996 989 974 946 -900 834 4 1.000 998 994 986 968 938 891 15 999 997 992 982 963 930 --- Trang 808 --- Appendix Tables 795 Table A.5 Critical Values for t Distributions 1, density curve x 0 few @ v \ -10 05 025 01 005 001 0005 1 3.078 6.314 12.706 31.821 63.657 318.31 
636.62 2 1.886 2.920 4.303 6.965 9.925 22.326 31.598 3 1.638 2.353 3.182 4.541 5.841 10.213 12.924 4 1.533 2.132 2.776 3.747 4.604 7.173 8.610 5 1.476 2.015 2.571 3.365 4.032 5.893 6.869 6 1.440 1.943 2.447 3.143 3.707 5.208 5.959 % 1.415 1.895 2.365 2.998 3.499 4.785 5.408 8 1.397 1.860 2.306 2.896 3.355 4.501 5.041 9 1.383 1.833 2.262 2.821 3.250 4.297 4.781 10 1.372 1.812 2.228 2.764 3.169 4.144 4.587 ist 1.363 1.796 2.201 2.718 3.106 4.025 4.437 12 1.356 1.782 2.179 2.681 3.055 3.930 4.318 13 1.350 1.771 2.160 2.650 3.012 3.852 4.221 14 1.345 1.761 2.145 2.624 2.977 3.787 4.140 15 1.341 1.753 2.131 2.602 2.947 3.733 4.073 16 1.337 1.746 2.120 2.583 2.921 3.686 4.015 17 1.333 1.740 2.110 2.567 2.898 3.646 3.965 18 1.330 1.734 2.101 2.552 2.878 3.610 3.922 19 1.328 1.729 2.093 2.539 2.861 3.579 3.883 20 1.325 1.725 2.086 2.528 2.845 3.552 3.850 21 1.323 1.721 2.080 2518 2.831 3.527 3.819 22 1.321 1717 2.074 2.508 2.819 3.505 3.792 23 1.319 1.714 2.069 2.500 2.807 3.485 3.767 24 1.318 1711 2.064 2.492 2.197 3.467 3.745 25 1.316 1.708 2.060 2.485 2.787 3.450 3.725 26 1.315 1.706 2.056 2.479 2.779 3.435 3.707 27 1.314 1.703 2.052 2.473 2.771 3.421 3.690 28 1.313 1.701 2.048 2.467 2.763 3.408 3.674 29 1.311 1.699 2.045 2.462 2.756 3.396 3.659 30 1.310 1.697 2.042 2.457 2.750 3.385 3.646 32 1.309 1,694 2.037 2.449 2.738 3.365 3.622 34 1.307 1.691 2.032 2.441 2.728 3.348 3.601 36 1.306 1.688 2.028 2.434 2.719 3.333 3.582 38 1.304 1.686 2.024 2.429 2712 3.319 3.566 40 1.303 1.684 2.021 2.423 2.704 3.307 3.551 50 1.299 1.676 2.009 2.403 2.678 3.262 3.496 60 1.296 1.671 2.000 2.390 2.660 3.232 3.460 120 1.289 1.658 1.980 2.358 2.617 3.160 3.373 00 1.282 1.645 1.960 2.326 2.576 3.090 3.291 --- Trang 809 --- 796 Appendix Tables Table A.6 Critical Values for Chi-Squared Distributions x7 density curve Shaded area = @ i 0 z fe Rey v 995 99 975 95 90 10 05 025 01 005 1 | 0.000 0.000 0.001 0.004 0.016 2706 3.843 5.025 6637 7.882 2 0.010 0.020 0.051 0.103 0.211 4.605 5.992 7.378 
9.210 10.597 3 0.072 0.115 0.216 0.352 0.584 6.251 7.815 9.348 11.344 12.837 4 0.207 0.297 0.484 0711 1.064 7.179 9.488 11.143 13.277 14.860 5 0.412 0.554 0.831 1.145 1.610 9.236 11.070 12.832 15.085 16.748 6 0.676 0.872 1.237 1.635 2.204 10.645 12.592 14.440 16.812 18.548 7 0.989 1.239 1.690 2.167 2.833 12.017 14.067 16.012 18.474 20.276 8 1.344 1.646 2.180 2.733 3.490 13.362 15.507 17.534 20.090 21.954 9] 1.735 2088 2700 3.325 4.168 14.684 16.919 19.022 21.665 23.587 10 | 2156 2558 3.247 3.940 4865 15.987 18.307 20.483 23.209 25.188 11 | 2.603 3.053 3.816 © 4.575 5.578—«:17.275 19.675 21.920 24.724 26.755 12 | 3.074 3571 4404 5.226 = 6.304. 18.549 21.026 23.337 26.217 28.300 13 3.565 4.107 5.009 5.892 7.041 19.812 22.362 24.735. .27.687 29.817 14 4.075 4.660 5.629 6.571 7.790 21.064 =. 23.685 26.119 29.141 31.319 15 4.600 5.229 6.262 7.261 8.547 22.307. = 24.996 27.488 = 30.577 32.799 16 5.142 5.812 6.908 7.962 9.312 23.542 26.296 28.845 32.000 34.267 17 5.697 6.407 7.564 8.682 10.085 24.769. 27.587 30.190 33.408 = 35.716 18 6.265 7.015 8.231 9.390 10.865 25.989 28.869 = 31.526 = 34.805 37.156 19 6.843 7.632 8.906 10.117 11.651 27.203 = 30.143, 32.852 36.190 38.580 20 7.434 8.260 9.591 10.851 12443 28412 31.410 =. 34.170 337.566 = 39.997 21 | 8033 8897 10.283 11.591 13.240 29.615 32.670 35.478 38.930 41.399 22 | 8643 9.542 10.982 12338 14.042 30.813 33.924 36.781 40.289 42.796 23 | 9.260 10.195 11.688 13.090 14.848 32.007 35.172 38.075 41.637 44.179 24 9.886 10.856 12.401 13.848 15.659 33.196 36.415 39.364 = 42.980 45.558 25 10.519 11.523 13.120 14.611 16.473 34.381 37.652 40.646 = 44.313 46.925 26 11.160 12.198 13.844 15.379 17.292 35.563 38.885 41.923 45.642 48.290 27 11.807 12.878 14.573 16.151 18.114 36.741 40.113 43.194 46.962 49.642 28 12.461 13.565 15.308 16.928 18.939 37.916 41.337. 44.461 48.278 50.993 29 13.120 14.256 16.047 17.708 19.768 39.087 42.557 45.772, 49.586 = 52.333 30 13.787 14.954 16.791 18.493 20.599 40.256 §=— 43.773 46.979» 550.892. 
 31  14.457  15.655  17.538  19.280  21.433   41.422   44.985   48.231   52.190   55.000
 32  15.134  16.362  18.291  20.072  22.271   42.585   46.194   49.480   53.486   56.328
 33  15.814  17.073  19.046  20.866  23.110   43.745   47.400   50.724   54.774   57.646
 34  16.501  17.789  19.806  21.664  23.952   44.903   48.602   51.966   56.061   58.964
 35  17.191  18.508  20.569  22.465  24.796   46.059   49.802   53.203   57.340   60.272
 36  17.887  19.233  21.336  23.269  25.643   47.212   50.998   54.437   58.619   61.581
 37  18.584  19.960  22.105  24.075  26.492   48.363   52.192   55.667   59.891   62.880
 38  19.289  20.691  22.878  24.884  27.343   49.513   53.384   56.896   61.162   64.181
 39  19.994  21.425  23.654  25.695  28.196   50.660   54.572   58.119   62.426   65.473
 40  20.706  22.164  24.433  26.509  29.050   51.805   55.758   59.342   63.691   66.766

For ν > 40, χ²α,ν ≈ ν(1 − 2/(9ν) + zα√(2/(9ν)))³.

Table A.7 t Curve Tail Areas
[t density curve: the shaded area is the area under the curve to the right of t]
Each entry is the area under the tν curve to the right of the value t, for t = 0.0, 0.1, …, 4.0 (rows) and ν = 1, 2, …, 18 (columns).
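The large-ν approximation quoted beneath Table A.6 is easy to check against the tabled values. A minimal sketch, assuming the Wilson–Hilferty form of the approximation (the function name is ours):

```python
from statistics import NormalDist

def chi2_crit_approx(alpha: float, nu: int) -> float:
    """Approximate chi-squared critical value chi2_{alpha, nu} using
    nu * (1 - 2/(9 nu) + z_alpha * sqrt(2/(9 nu)))**3, quoted for nu > 40."""
    z = NormalDist().inv_cdf(1 - alpha)  # upper-tail z critical value
    c = 2.0 / (9.0 * nu)
    return nu * (1.0 - c + z * c ** 0.5) ** 3
```

Even at ν = 40 the approximation reproduces the tabled value χ².05,40 = 55.758 to within about 0.01.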
[Tabled tail areas for ν = 1–18 are illegible in this scan.]

Table A.7 t Curve Tail Areas (cont.)
Columns: ν = 19, 20, …, 30, 35, 40, 60, 120, ∞; rows: t = 0.0, 0.1, …, 4.0.
[Tabled tail areas are illegible in this scan.]
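Tail areas like those in Table A.7 can be reproduced by integrating the t density directly. A stdlib-only sketch using composite Simpson's rule (the function name and step count are ours):

```python
import math

def t_upper_tail(t: float, nu: int, n: int = 2000) -> float:
    """P(T > t) for a t distribution with nu df, via Simpson's rule on the
    density over [0, t]; by symmetry the tail area is 1/2 minus that integral."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    f = lambda x: c * (1 + x * x / nu) ** (-(nu + 1) / 2)
    h = t / n  # n must be even for Simpson's rule
    s = f(0) + f(t)
    s += 4 * sum(f((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(2 * k * h) for k in range(1, n // 2))
    return 0.5 - s * h / 3
```

With the defaults this reproduces, for example, the ν = 10 entry .170 at t = 1.0, and gives area .025 to the right of the critical value t.025,2 = 4.303.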
Table A.8 Critical Values for F Distributions
Each entry is Fα,ν1,ν2 for α = .100, .050, .010, .001, with ν1 = numerator df (columns) and ν2 = denominator df (rows). Left-hand pages give ν1 = 1–9; facing pages give ν1 = 10, 12, 15, 20, 25, 30, 40, 50, 60, 120, 1000.
[Tabled F critical values, for denominator df ν2 = 1–12, 13–24, 25–30, 40, 50, 60, 100, 200, 1000, are illegible in this scan.]

Table A.9 Critical Values for Studentized Range Distributions
Rows: ν = 5, 6, …, 20, 24, 30, 40, 60, 120, ∞, each at α = .05 and .01; columns: m = 2, 3, …, 12.
[Tabled studentized range values are illegible in this scan.]

Table A.10 Chi-Squared Curve Tail Areas
Each entry is the χ² value whose upper-tail area equals the row label; rows run from > .100 down to < .001, and columns cover ν = 1–20 in blocks of five.
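For even ν, upper-tail areas of the sort tabulated in Table A.10 have an exact finite closed form: the chi-squared survival function reduces to a Poisson sum when ν/2 is an integer. A sketch (function name ours):

```python
import math

def chi2_upper_tail_even(x: float, nu: int) -> float:
    """Exact P(chi^2_nu > x) for even nu:
    exp(-x/2) * sum_{k=0}^{nu/2 - 1} (x/2)^k / k!."""
    if nu % 2:
        raise ValueError("closed form requires even nu")
    lam = x / 2
    return math.exp(-lam) * sum(lam ** k / math.factorial(k) for k in range(nu // 2))
```

For ν = 2 this is just e^(−x/2), so the tabled .050 cutoff 5.99 checks out: e^(−2.995) ≈ .050.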
[Tabled χ² values are illegible in this scan.]

Table A.11 Critical Values for the Ryan–Joiner Test of Normality
  n     α = .10   α = .05   α = .01
  5     .9033     .8804     .8320
 10     .9347     .9180     .8804
 15     .9506     .9383     .9110
 20     .9600     .9503     .9290
 25     .9662     .9582     .9408
 30     .9707     .9639     .9490
 40     .9767     .9715     .9597
 50     .9807     .9764     .9664
 60     .9835     .9799     .9710
 75     .9865     .9835     .9787

Table A.12 Critical Values for the Wilcoxon Signed-Rank Test
P₀(S₊ ≥ c) = P(S₊ ≥ c when H₀ is true). For each n = 4, 5, …, 20, entries give values c with the corresponding probability P₀(S₊ ≥ c).
[Tabled (c, probability) pairs are illegible in this scan.]

Table A.13 Critical Values for the Wilcoxon Rank-Sum Test
P₀(W ≥ c) = P(W ≥ c when H₀ is true). For sample sizes 3 ≤ m ≤ n ≤ 8, entries give values c with the corresponding probability P₀(W ≥ c).
[Tabled (c, probability) pairs are illegible in this scan.]

Table A.14 Critical Values for the Wilcoxon Signed-Rank Interval
The interval is (x̄(n(n+1)/2−c+1), x̄(c)), where x̄(1) ≤ ⋯ ≤ x̄(n(n+1)/2) are the ordered pairwise averages (xi + xj)/2.
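The Wilcoxon entries in Tables A.12–A.15 all derive from exact enumeration under H₀. For the signed-rank statistic S₊ with small n, a brute-force sketch (exponential in n, so only practical for roughly n ≤ 20; function name ours):

```python
from itertools import combinations

def signed_rank_upper_tail(n: int, c: int) -> float:
    """P0(S+ >= c): under H0 each of the 2^n sign assignments to ranks
    1..n is equally likely, and S+ is the sum of the positively signed ranks."""
    count = 0
    for r in range(n + 1):
        for subset in combinations(range(1, n + 1), r):
            if sum(subset) >= c:
                count += 1
    return count / 2 ** n
```

For n = 6, only the full set {1, …, 6} reaches S₊ = 21, so P₀(S₊ ≥ 21) = 1/64 ≈ .016, matching Table A.12.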
n = 5:  93.8 15;  87.5 14
n = 6:  96.9 21;  93.7 20;  90.6 19
n = 7:  98.4 28;  95.3 26;  89.1 24
n = 8:  99.2 36;  94.5 32;  89.1 30
n = 9:  99.2 44;  94.5 39;  90.2 37
n = 10: 99.0 52;  95.1 47;  89.5 44
n = 11: 99.0 61;  94.6 55;  89.8 52
n = 12: 99.1 71;  94.8 64;  90.8 61
n = 13: 99.0 81;  95.2 74;  90.6 70
n = 14: 99.1 93;  95.1 84;  89.6 79
n = 15: 99.0 104;  95.2 95;  90.5 90
n = 16: 99.1 117;  94.9 106;  89.5 100
n = 17: 99.1 130;  94.9 118;  90.2 112
n = 18: 99.0 143;  95.2 131;  90.1 124
n = 19: 99.1 158;  95.1 144;  90.4 137
n = 20: 99.1 173;  95.2 158;  90.3 150
n = 21: 99.0 188;  95.0 172;  89.7 163
n = 22: 99.0 204;  95.0 187;  90.2 178
n = 23: 99.0 221;  95.2 203;  90.2 193
n = 24: 99.0 239;  95.1 219;  89.9 208
n = 25: 99.0 257;  95.2 236;  89.9 224

Table A.15 Critical Values for the Wilcoxon Rank-Sum Interval
(dij(mn − c + 1), dij(c)); entries are the larger sample size, then (Confidence Level (%), c) pairs for each smaller sample size:

(smaller sample sizes 5–8)
Larger = 5:  (5) 99.2 25, 94.4 22, 90.5 21
Larger = 6:  (5) 99.1 29, 94.8 26, 91.8 25;  (6) 99.1 34, 95.9 31, 90.7 29
Larger = 7:  (5) 99.0 33, 95.2 30, 89.4 28;  (6) 99.2 39, 94.9 35, 89.9 33;  (7) 98.9 44, 94.7 40, 90.3 38
Larger = 8:  (5) 98.9 37, 95.5 34, 90.7 32;  (6) 99.2 44, 95.7 40, 89.2 37;  (7) 99.1 50, 94.6 45, 90.6 43;  (8) 99.0 56, 95.0 51 [?], 89.5 48
Larger = 9:  (5) 98.8 41, 95.8 38, 88.8 35;  (6) 99.2 49, 95.0 44, 91.2 42;  (7) 99.2 56, 94.5 50, 90.9 48;  (8) 98.9 62, 95.4 57 [?], 90.7 54
Larger = 10: (5) 99.2 46, 94.5 41, 90.1 39;  (6) 98.9 53, 94.4 48, 90.7 46;  (7) 99.0 61, 94.5 55, 89.1 52;  (8) 99.1 69, 94.5 62, 89.9 59
Larger = 11: (5) 99.1 50, 94.8 45, 91.0 43;  (6) 99.0 58, 95.2 53, 90.2 50;  (7) 98.9 66, 95.6 61, 89.6 57;  (8) 99.1 75, 94.9 68, 90.9 65
Larger = 12: (5) 99.1 54, 95.2 49, 89.6 46;  (6) 99.0 63, 94.7 57, 89.8 54;  (7) 99.0 72, 95.5 66, 90.0 62;  (8) 99.0 81, 95.3 74, 90.2 70

(smaller sample sizes 9–12)
Larger = 9:  (9) 98.9 69, 95.0 63, 90.6 60
Larger = 10: (9) 99.0 76, 94.7 69, 90.5 66;  (10) 99.1 84, 94.8 76, 89.5 72
Larger = 11: (9) 99.0 83, 95.4 76, 90.5 72;  (10) 99.0 91, 94.9 83, 90.1 79;  (11) 98.9 99, 95.3 91, 89.9 86
Larger = 12: (9) 99.1 90, 95.1 82, 90.5 78;  (10) 99.1 99, 95.0 90, 90.7 86;  (11) 99.1 108, 94.9 98, 89.6 93;  (12) 99.0 116, 94.8 106, 89.9 101

[The final appendix page contains graphical charts that could not be recovered from the scan.]

Odd-Numbered Exercises

Chapter 1

1. a. Houston Chronicle, Des Moines Register, Chicago Tribune, Washington Post
   b. Capital One, Campbell Soup, Merrill Lynch, Prudential
   c. Bill Jasper, Kay Reinke, Helen Ford, David Menendez
   d. 1.78, 2.44, 3.50, 3.04
3. a. In a sample of 100 DVD players, what are the chances that more than 20 need service while under warranty? What are the chances that none need service while still under warranty?
11. (conclusion) This display brings out the gap in the data: there are no scores in the high 70s.
13. a. Stem-and-leaf display (stem units: 1.0, leaf units: .10):
   2 | 23
   3 | 2344567789
   4 | 01356889
   5 | 00001114455666789
   6 | 00012222344456678889 [partly illegible]
   7 | 45555
   8 | 02255488
   9 | 012233335666788
   10 | 2344455688
   11 | 2335999
   12 | 7
   13 |
   14 | 36
   15 | 0035
   16 |
   17 |
   18 | 9
3. b. What proportion of all DVD players of this brand and model will need service within the warranty period?
5. a. No, the relevant conceptual population is all scores of all students who participate in the SI in conjunction with this particular statistics course.
   b. The advantage of randomly allocating students to the two groups is that the two groups should then be fairly comparable before the study. If the two groups perform differently in the class, we might attribute this to the treatments (SI and control). If it were left to students to choose, stronger or more dedicated students might gravitate toward SI, confounding the results.
   c. If all students were put in the treatment group, there would be no results with which to compare the treatments.
7. One could generate a simple random sample of all single-family homes in the city, or a stratified random sample by taking a simple random sample from each of the ten district neighborhoods. From each of the homes in the sample the necessary data would be collected. This would be an enumerative study because there exists a finite, identifiable population of objects from which to sample.
13. b. A representative value could be the median, 7.0.
   c. The data appear to be highly concentrated, except for a few values on the positive side.
   d. No, there is skewness to the right, or positive skewness.
   e. The value 18.9 appears to be an outlier, being more than two stem units from the previous value.
15. a. Number nonconforming   Frequency   Relative frequency (Freq/60)
      0                       7           0.117
      1                       12          0.200
      2                       13          0.217
      3                       14          0.233
      4                       6           0.100
      5                       3           0.050
9. a. There could be several explanations for the variability of the measurements. Among them could be measuring error (due to mechanical or technical changes across measurements), recording error, differences in weather conditions at time of measurements, etc.
   b. This study involves a conceptual population.
There is no sampling frame.

(Exercise 15, table continued)
      6                       3           0.050
      7                       [?]         0.034
      8                       1           0.017
      Total                               1.001
   Doesn't add exactly to 1 because relative frequencies have been rounded.

17. a. Stem-and-leaf display (stem: tens, leaf: ones):
   6h | 667899
   7l | 00122244
   7h | [illegible]
   8l | 001111122344
   8h | 5557899
   9l | 03
   9h | 58
   b. .917, .867, 1 − .867 = .133
   c. The center of the histogram is somewhere around 2 or 3 and it shows that there is some positive skewness in the data.
   d. The histogram is very positively skewed.

— Class      Freq   Rel. freq   Density
  0–<50      8      0.08        .0016
  50–<100    13     0.13        .0026
  100–<150   11     0.11        .0022
  150–<200   21     0.21        .0042
  200–<300   26     0.26        .0026
  300–<400   12     0.12        .0012
  400–<500   4      0.04        .0004
  500–<600   3      0.03        .0003
  600–<900   2      [?]         [?]
  Total      100    1.00

19. a. The number of subdivisions having no cul-de-sacs is 17/47 = .362, or 36.2%. The proportion having at least one cul-de-sac is 30/47 = .638, or 63.8%.
   y   Count   Percent
   0   17      36.17
   1   22      46.81
   2   6       12.77
   [?] 1       2.13
   N = 47;  .362, .638
   b. z   Count   Percent
   0   13      27.66
   1   11      23.40
   2   3       6.38
   3   7       14.89
   4   5       10.64
   5   3       6.38
   6   3       6.38
   8   2       4.26
   N = 47

— Class      Freq      Class        Freq
  10–<20     8         1.1–<1.2     2
  20–<30     14        1.2–<1.3     6
  30–<40     8         1.3–<1.4     7
  40–<50     4         1.4–<1.5     9
  50–<60     3         1.5–<1.6     6
  60–<70     2         1.6–<1.7     4
  70–<80     1         1.7–<1.8     5
                       1.8–<1.9     1
  The original distribution is positively skewed. The transformation creates a much more symmetric, mound-shaped histogram.

25. a. Class interval   Freq   Rel. freq
   0–<50       9    0.18
   50–<100     19   0.38
   100–<150    11   0.22
   150–<200    4    0.08
   200–<250    2    0.04
   250–<300    2    0.04
   300–<350    1    0.02
   350–<400    1    0.02
   [.894, .830]

— Class      Freq   Rel. freq
  0–<100     21     0.21
  100–<200   32     0.32
  200–<300   26     0.26
  300–<400   12     0.12
  400–<500   4      0.04
  500–<600   3      0.03
  600–<700   1      0.01
  700–<800   0      0.00
  800–<900   1      0.01
  Total      100    1.00
  The distribution is skewed to the right, or positively skewed. There is a gap in the histogram, and what appears to be an outlier in the "500–550" interval.
The histogram is skewed right, with a majority of observations between 0 and 300 cycles. The class holding the most observations is between 100 and 200 cycles.

33. (continued) c. It could be decreased to any value at least 370 without changing the median.
   d. 6.18 min; 6.16 min

25. b. Class interval   Freq   Rel. freq
   2.25–<2.75   2    0.04
   2.75–<3.25   2    0.04
   [rows partly illegible]
   4.25–<4.75   18   0.36
   4.75–<5.25   10   0.20
   5.25–<5.75   4    0.08
   5.75–<6.25   3    0.06
   The distribution of the natural logs of the original data is much more symmetric than the original.

35. b. If 127.6 is reported as 130, then the median is 130, a substantial change. When there is rounding or grouping, the median can be highly sensitive to a small change.
37. x̃ = 92, x̄tr(25) = 95.07, x̄tr(10) = 102.23, x̄ = 119.3. Positive skewness causes the mean to be larger than the median. Trimming moves the mean closer to the median.
41. a. 25.8 [?] b. 49.31 c. 7.02 d. 49.31 e. 56.14
43. a. 2887.6, 2888 b. 7060.3
45. 24.36
47. $1,961,160
49. 3.5, 13, 19, 9.0, 23, 25 [partly illegible]
51. a. 1.6, .5 [?]
   b. The box plot shows positive skewness. The two longest runs are extreme outliers.
   c. Outlier: greater than 13.5 or less than −6.5; extreme outlier: greater than 21 or less than −14.

29. d. The frequency distribution:
   Class       Rel. freq    Class         Rel. freq
   0–<150      .193         900–<1050     .019
   150–<300    .183         1050–<1200    .029
   300–<450    .251         1200–<1350    .005
   450–<600    .148         1350–<1500    .004
   600–<750    .097         1500–<1650    .001
   750–<900    .066         1650–<1800    .002
                            1800–<1950    .002
   The relative frequency distribution is almost unimodal and exhibits a large positive skew.
   e. .775, .014
   f. .211

31. a. 5.24
   b. The median, 2, is much lower because of positive skewness.
   c. Trimming the largest and smallest observations yields the trimmed mean, 4.4, which is between the mean and median.

53. a. The mean is 27.82, the median is 26, and the 5% trimmed mean is 27.38. The mean exceeds the median, in accord with positive skewness.
29. (continued) The typical middle value is somewhere between 400 and 450, although the skewness makes it difficult to pinpoint more exactly than this.

53. b. There are two outliers at the high end and one at the low end, but there are no extreme outliers. Because the median is in the lower half of the box, the upper whisker is longer than the lower whisker, and there are two high outliers compared to just one low outlier, the plot suggests positive skewness.
55. The two distributions are centered in about the same place, but one machine is much more variable than the other. The more precise machine produced one outlier, but this part would not be an outlier if judged by the distribution of the other machine.
57. All of the Indian salaries are below the first quartile of Yankee salaries. There is much more variability in the Yankee salaries. Neither team has any outliers.
61. The three flow rates yield similar uniformities, but the values for the 160 flow rate are a little higher.
63. a. 9.59, 59.41. The standard deviations are large, so it is certainly not true that repeated measurements are identical.
   b. .396, .323. In terms of the coefficient of variation, the HC emissions are more variable.
65. 10.65
67. a. [illegible] b. 100.78, .572
33. a. A stem-and-leaf display (stem: ones, leaf: tenths):
   32 | 55
   33 | 49
   34 |
   35 | 6699
   36 | 34469 [partly illegible]
   37 | 03345
   38 | 9
   39 | 2347
   40 | 23
   41 |
   42 | 4
   The display is reasonably symmetric, so the mean and median will be close.
   b. 370.7, 369.50
   c. The largest value (currently 424) could be increased by any amount without changing the median.
69. The mean is .93 and the standard deviation is .081.
(Chapter 2)
7. a. {111, 112, 113, 121, 122, 123, 131, 132, 133, 211, 212, 213, 221, 222, 223, 231, 232, 233, 311, 312, 313, 321, 322, 323, 331, 332, 333}
   b. {111, 222, 333}
   c. {123, 132, 213, 231, 312, 321}
   d. {111, 113, 131, 133, 311, 313, 331, 333}
9. a. S = {BBBAAAA, BBABAAA, BBAABAA, BBAAABA, BBAAAAB, BABBAAA, BABABAA, BABAABA, BABAAAB, BAABBAA, BAABABA, BAABAAB, BAAABBA, BAAABAB, BAAAABB, ABBBAAA, ABBABAA, ABBAABA, ABBAAAB, ABABBAA, ABABABA, ABABAAB, ABAABBA, ABAABAB, ABAAABB, AABBBAA, AABBABA, AABBAAB, AABABBA, AABABAB, AABAABB, AAABBBA, AAABBAB, AAABABB, AAAABBB}
   b. {AAAABBB, AAABABB, AAABBAB, AABAABB, AABABAB}
13. a. .07 b. .30 c. .57
15. a. They are awarded at least one of the first two projects, .36.
   b. They are awarded neither of the first two projects, .64.
   c. They are awarded at least one of the projects, .53.
   d. They are awarded none of the projects, .47.
   e. They are awarded only the third project, .17.
   f. Either they fail to get the first two or they are awarded the third, .75.
17. [partly illegible: .579, .879]
19. a. SAS and SPSS are not the only packages.

(Chapter 1, continued)
69. (continued) The distribution is fairly symmetric with a central peak, as shown by the stem-and-leaf display (leaf unit = .010) [display partly illegible].
71. a. Mode = .93. It occurs four times in the data set.
   b. The modal category is the one in which the most observations occur.
73. The measures that are sensitive to outliers are the mean and the midrange. The mean is sensitive because all values are used in computing it. The midrange is the most sensitive because it uses only the most extreme values in its computation. The median, the trimmed mean, and the midfourth are less sensitive to outliers. The median is the most resistant to outliers because it uses only the middle value (or values) in its computation. The midfourth is also quite resistant because it uses the fourths. The resistance of the trimmed mean increases with the trimming percentage.
75. [illegible]
77. b. .552, .102 [additional fragments illegible]
79. a. There may be a tendency to a repeating pattern.
   b. The value .1 gives a much smoother series.
   c. The smoothed value depends on all previous values of the time series, but the coefficient decreases with k.
   d. As t gets large, the coefficient (1 − α)^(t−1) decreases to zero, so there is decreasing sensitivity to the initial value.

(Chapter 2, continued)
1. a. A ∩ B′ b. A ∪ B c. (A ∩ B′) ∪ (B ∩ A′)
3. a. S = {1324, 1342, 1423, 1432, 2314, 2341, 2413, 2431, 3124, 3142, 4123, 4132, 3214, 3241, 4213, 4231}
   b. A = {1324, 1342, 1423, 1432}
   c. B = {2314, 2341, 2413, 2431, 3214, 3241, 4213, 4231}
   d. A ∪ B = {1324, 1342, 1423, 1432, 2314, 2341, 2413, 2431, 3214, 3241, 4213, 4231}; A ∩ B = ∅; A′ = {2314, 2341, 2413, 2431, 3124, 3142, 4123, 4132, 3214, 3241, 4213, 4231}
5. a. A = {SSF, SFS, FSS}
   b. B = {SSS, SSF, SFS, FSS}
   c. C = {SSS, SSF, SFS}
   d. C′ = {SFF, FSF, FFS, FSS, FFF}; A ∪ C = {SSS, SSF, SFS, FSS}; A ∩ C = {SSF, SFS}; B ∪ C = {SSS, SSF, SFS, FSS}; B ∩ C = {SSS, SSF, SFS}
21. a. .8841 b. .0435
23. a. .10 b. .18, .19 c. .41 d. .59 e. .31 f. .69 [?]
25. a. 1/15 b. 6/15 c. 14/15 d. 8/15
27. a. .98 b. .02 c. .03 d. .24
29. a. 1/9 b. 8/9 c. 2/9
31. a. 20 b. 60 c. 10
33. a. 243 b. 3645, 10
35. .0679
37. .2 [?]
39. .0456
41. a. .089 [?] b. [illegible] c. .21998
43. a. 1/15 b. [?] c. [?]
45. a. .447, .5, .2 [?]
   b. P(A|C) = .4, the fraction of ethnic group C that has blood type A. P(C|A) = .447, the fraction of those with blood group A that are of ethnic group C.
   c. .211
47. a. Of those with a Visa card, .5 is the proportion who also have a MasterCard.
   b. Of those with a Visa card, .5 is the proportion who do not have a MasterCard.
   c. Of those with a MasterCard, .625 is the proportion who also have a Visa card.
   d. Of those with a MasterCard, .375 is the proportion who do not have a Visa card.
   e. Of those with at least one of the two cards, .769 is the proportion who have a Visa card.

Chapter 3

1. Outcome: FFF SFF FSF FFS FSS SFS SSF SSS
   x:       0   1   1   1   2   2   2   3
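The smoothing recursion behind Exercise 79 — x̄t = αxt + (1 − α)x̄(t−1) — can be sketched directly (a hypothetical helper of ours, not from the text); unrolling it shows why the weight on the initial value is (1 − α)^(t−1):

```python
def smooth(series, alpha):
    """Exponentially smoothed series: out[0] = x1, out[t] = alpha*x_t + (1 - alpha)*out[t-1]."""
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out
```

With alpha = .5, `smooth([1, 0, 0], 0.5)` gives [1, 0.5, 0.25]: the initial value's weight shrinks geometrically at each step, which is why a small alpha such as .1 yields a much smoother series.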
3. M = the absolute value of the difference between the two outcomes, with possible values 0, 1, 2, 3, 4, 5, or 6; W = 1 if the sum of the two resulting numbers is even and W = 0 otherwise, a Bernoulli random variable.
5. No, X can be a Bernoulli random variable where a success is an outcome in B, with B a particular subset of the sample space.
7. a. Possible values are 0, 1, 2, ..., 12; discrete
   b. With N = # on the list, possible values are 0, 1, 2, ..., N; discrete
   c. Possible values are 1, 2, 3, 4, ...; discrete
   d. {x: 0 < x} [partly illegible]; continuous

(Chapter 2, continued)
51. .436, .582
53. .0833
59. a. .067 b. .509
61. [illegible]
65. .466, .288, .247
67. a. Because of independence, the conditional probability … 14/17 [partly illegible]

19. c. F(x) = 0, x < 1; F(x) = log10([x] + 1), 1 ≤ x < 9; F(x) = 1, x ≥ 9
   d. .602, .301
21. F(x) = 0, x < 0; .10, 0 ≤ x < 1; .25, 1 ≤ x < 2; .45, 2 ≤ x < 3; .70, 3 ≤ x < 4; .90, 4 ≤ x < 5; 1, x ≥ 5 [?]
— 1/3.5 = .286, so you expect to win more if you gamble.

(Chapter 3, continued)
93. .9932, .065, .068; .491, .251 [?]
95. a. .011 b. .441
— .554, .459, .944 [?]
97. a. .491 b. .133
99. a. .122, .808, .283 b. 12, 3.464 c. .530, .011 [?]
101. a. 1/24 b. 3/8
103. [?]
107. a. P(B0 | survive) = b0/[1 − (b1 + b2)d]
   b. P(B1 | survive) = b1(1 − d)/[1 − (b1 + b2)d] [?]
39. V(−X) = V(X)
41. a. .325 [?] b. 7.5
   c. V(X) = E[X(X − 1)] + E(X) − (E(X))²
43. a. 1/4, 1/9, 1/16, 1/25, 1/100
   b. μ = 2.64, σ = 1.54; P(|X − μ| ≥ 2σ) = .04 ≤ .25, P(|X − μ| ≥ 3σ) = 0 ≤ 1/9. The actual probability can be far below the Chebyshev bound, so the bound is conservative.
   c. 1/9, equal to the Chebyshev bound
   d. P(−1) = .02, P(0) = .96, P(1) = .02
101. a. .099 b. .135
103. a. [?] b. .215 c. 1.15 years
105. 9,221; 6,800,000; c. p(x; 1608.5) [?]
111. b. 3.114, .405, .636
113. a. b(x; 15, .75) b. .6865 c. .313 d. 45/4, 45/16 e. .309
45. MX(t) = .5e^t/(1 − .5e^t), E(X) = 2, V(X) = 2
47. pY(y) = .75(.25)^(y−1), y = 1, 2, 3, ...
49. E(X) = 5, V(X) = 4
115. .9914
117. a. p(x; 2.5) b. .067 c. .109
119. 1.813, 3.05
51. MY(t) = e^(t²/2), E(X) = 0, V(X) = 1
53. E(X) = 0, V(X) = 2
121. p(2) = p², p(3) = (1 − p)p², p(4) = (1 − p)p², p(x) = [1 − p(2) − ··· − p(x − 3)](1 − p)p², x = 5, 6, 7, ...
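The comparison in Exercise 43 — the actual tail probability versus the Chebyshev bound 1/k² — can be checked for any finite pmf with a small sketch (our own illustration, not from the text):

```python
def chebyshev_vs_actual(pmf, k):
    """Return (P(|X - mu| >= k*sigma), 1/k**2) for a finite pmf given as {value: prob}."""
    mu = sum(x * p for x, p in pmf.items())
    sigma = sum((x - mu) ** 2 * p for x, p in pmf.items()) ** 0.5
    actual = sum(p for x, p in pmf.items() if abs(x - mu) >= k * sigma)
    return actual, 1 / k ** 2

# Distribution from part (d): P(-1) = .02, P(0) = .96, P(1) = .02
```

For this pmf and k = 2, the actual probability is .04 against the bound .25, illustrating how conservative the bound can be.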
— a. .850 b. .200 c. .200 d. .701 e. .851 f. .000 g. .570
121. (continued) Alternatively, p(x) = (1 − p)p(x − 1) + p(1 − p)p(x − 2), x = 5, 6, 7, ...
61. a. .354 b. .114 c. .919
63. [partly illegible: .9995, .0841]
123. a. .0029 b. .0767, .9702
65. .1478
125. a. .135 b. .00144
67. .4068, assuming independence
127. 3.590
69. a. .0173 b. .8106, .4246, .0056, .9022, .5858
129. [partly illegible]
71. For p = .9 the probability is higher for B (.9963 versus .99 for A). For p = .5 the probability is higher for A (.75 versus .6875 for B).
131. b. [illegible]
133. [illegible]
73. The tabulation for p > .5 is not needed.
137. X ~ b(x; 25, p), E(h(X)) = 500p + 750, σh(X) = 100√(p(1 − p)). Independence and constant probability might not be valid because of the effect that customers can have on each other. Also, store employees might affect customer decisions.
75. a. 20, 16 (binomial, n = 100, p = .2) b. 70, 21
77. When p = .5, the true probability for k = 2 is .0414, compared to the bound of .25. When p = .5, the true probability for k = 3 is .0026, compared to the bound of .1111.
139. When p = .75, the true probability for k = 2 is .0652, compared to the bound of .25. When p = .75, the true probability for k = 3 is .0039, compared to the bound of .1111.
— x     0       1       2       3       4       5       6       7       8
  p(x)  .07776  .10368  .19008  .20736  .17280  .13824  .06912  .03072  .01024
79. Mn−X(t) = [p + (1 − p)e^t]^n, E(n − X) = n(1 − p), V(n − X) = np(1 − p). Intuitively, the means of X and n − X should add to n and their variances should be the same.
81. a. .114 b. .879 c. .121 d. Use the binomial distribution with n = 15 and p = .1.
83. a. h(x; 15, 10, 20) b. .0325 c. .6966

Chapter 4

55. a. .794 b. .588 c. 7.94, 2.65 [?]
57. No, because of symmetry.
1. a. .5 b. .5 c. 7/16 [?]
59. a. approximate, .0391; binomial, .0437
   b. approximate, .99993; binomial, .99976
3. b. .5 c. 11/16 d. .6328
5. a. 3/8 b. 1/8 c. .2969 d. .5781
61. a. .7287 b. .8643, .8159
7. a. f(x) = 1/10 for 25 < x < 35 and = 0 otherwise
63. a.
approximate, .9933; binomial, .9905
   b. approximate, .9874; binomial, .9837
   c. approximate, .8051; binomial, .8066
67. a. .15866 b. .0013499 c. .999936658
11. a. 1/4 b. 3/16 c. 15/16
— [table comparing approximate and actual values; partly illegible] .999936658
13. a. k = 3 [?] b. 1/64 c. .0137, .0137 d. 1.817 [?]
75. a. 449, 699, 148 [?] b. .050, .018
17. b. 90th percentile of Y = 1.8(90th percentile of X) + 32
   c. 100pth percentile of Y = a(100pth percentile of X) + b
77. a. λ [?] b. Exponential with λ = .05 c. Exponential with parameter nλ
83. a. .8257, .8257, .0636 b. .6637 c. 172.73
19. [partly illegible]
87. a. .9296 b. .2975 c. 98.18
89. a. 68.03, 122.09 b. .3196 c. .7257, skewness
23. a. A + (B − A)p
   b. (A + B)/2, (B − A)²/12, (B − A)/√12
   c. (B^(n+1) − A^(n+1))/[(n + 1)(B − A)]
25. 314.79
27. .248, 3.6 [?]
95. b. Γ(α + β)Γ(α + m)/[Γ(α)Γ(α + β + m)]; α/(α + β) [?]
29. 1/(1 − t/4), 1/4, 1/16 [?]
97. Yes, since the pattern in the plot is quite linear.
99. Yes
31. .1007, .307 [?]
33. f(x) = 1/10 for −5 < x < 5 [partly illegible]
43. a. .9772 b. .9104 c. .8413 d. [?] e. .2417 f. .6826
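The "approximate versus binomial" answers above compare the exact binomial cdf with the normal approximation using the continuity correction. A stdlib sketch (our own illustration; function names are ours):

```python
import math

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Bin(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def normal_approx_cdf(k, n, p):
    """Normal approximation with continuity correction: Phi((k + .5 - np)/sqrt(np(1-p)))."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k + 0.5 - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

For n = 25, p = .5, k = 10 the two values agree to about three decimal places, in line with the pattern of the answers above.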
— a. −.1058 [?] b. −.0128
149. a. [illegible] b. F(t) [partly illegible]. This gives total probability less than 1, so some probability is located at infinity (for items that last forever).

Chapter 5

39. a. f(x) = 2e^(−2x), x ≥ 0; f(x) = 0, x < 0 [?]
— With positive correlation, the deviations from their means of X and Y are likely to have the same sign.
— F(x, y) = 0, x < 0; F(x, y) = 0, y < 0; F(x, y) = 3x⁴ − 8x³ + 6x², 0 ≤ x ≤ 1 [partly illegible]
59. a. If U = X1 + X2, fU(u) = u for 0 ≤ u < 1; 2 − u for 1 ≤ u ≤ 2 [partly illegible]
— p(x | n) [table largely illegible]: 0.0000 0.0000 0.0001 0.0008 0.0055 0.0264 0.0881 0.2013 0.3020 0.2684 0.1074
81. a. 3/81,250
   b. fX(x) = ∫ k·xy dy [partly illegible], fY(y) similarly; dependent
   c. .3548 d. 25.969 e. −32.19, −.894
— x     1    1.5   2    2.5   3    3.5   4
  p(x)  .16  .24   .25  .20   .10  .04   .01
  b. P(X̄ ≤ 2.5) = .85
83. 7/6
— x     0    1    2    3
  p(x)  .30  .40  .22  .08
87. c. If p(0) = .3, p(1) = .5, p(2) = .2, then 1 is the smaller of the two roots, so extinction is certain in this case with μ < 1. If p(0) = .2, p(1) = .5, p(2) = .3, then 2/3 is the smaller of the two roots, so extinction is not certain with μ > 1.
89. a. P((X, Y) ∈ A) = F(b, d) − F(b, c) − F(a, d) + F(a, c)
   b. P((X, Y) ∈ A) = F(10, 6) − F(1, 1) − F(3, 6) + F(4, 1) [partly illegible]
      P((X, Y) ∈ A) = F(b, d) − F(b, c − 1) − F(a − 1, d) + F(a − 1, c − 1) [?]
   c. At each (x*, y*), F(x*, y*) is the sum of the probabilities at points (x, y) such that x ≤ x* and y ≤ y*.
   F(x, y) [table largely illegible]

(Chapter 6)
11. a. 12, .01 b. 12, .005
   c. With less variability, the second sample is more closely concentrated near 12.
75. .8340
77. a. ρ = σW²/(σW² + σE²) [?] b. ρ = .9999
13. a. No, the distribution is clearly not symmetric.
— 1.64 [?]
13. (continued) A positively skewed distribution — perhaps Weibull, lognormal, or gamma.
   c. .00000092. No, 82 is not a reasonable value for μ.
15. a. .8366 b. no
17. 43.29
19. a. .9802, .4802 [?] b. 32
21. a. .9839 b. .8932
27. a. 87,850; 19,100,116
   b. In case of dependence, the mean calculation is still valid, but not the variance calculation.
   c. .9973
29. a. .2871 b. .3695
31. .0317; because each piece is played by the same musicians, there could easily be some dependence. If they perform the first piece slowly, then they might perform the second piece slowly, too.
33. a. 45 b. 68.33 c. −1, 13.67 d. 5, 68.33
35. a. 50, 10.308 b. .0076 c. 50 d. 111.56 e. 131.25
37. a. .9615 b. .0617
39. a. n(n + 1)/4 b. .25 [?], n(n + 1)(2n + 1)/24
41. 10; 52.74 [?]
43. [illegible]
45. b. [illegible]
47. Because Tν is the sum of ν independent random variables, each distributed as T1, the Central Limit Theorem applies.
53. a. 3.2 [?] b. 10.04, the square of the answer to (a)
57. a. [illegible] b. 2ν2²(ν1 + ν2 − 2)/[ν1(ν2 − 2)²(ν2 − 4)], ν2 > 4
61. 0.432
— (Chapter 5) 81. If Z1 and Z2 are independent standard normal observations, then let X = 5Z1 + 100 and Y = 2(.5Z1 + (√3/2)Z2) + 50.

Chapter 7

1. a. X̄, 113 [?] b. X̃, 119 [?]
   c. 12.74; S, an estimator for the population standard deviation σ
   d. The sample proportion of students exceeding 100 in IQ is 30/33 ≈ .91
   e. 112, s/√n [?]
3. a. 1.3481, X̄ b. 1.3481, X̄ [?] c. 1.78 [partly illegible]
5. a. .67, .0846 [?] b. 1,793,000; 1,599,730; 1,601,438 [partly illegible]
7. a. 120.6 b. 1,206,000; 10,000 [?] d. 120, X̄
9. [illegible]
11. b. √(p̂1(1 − p̂1)/n1 + p̂2(1 − p̂2)/n2)
   c. In part (b) replace p1 with X1/n1 and replace p2 with X2/n2.
13. [partly illegible: .9876]
15. a. θ̂ = ΣXi²/(2n) b. 74.505
17. b. 4/9
19. a. p̂ = .20 [partly illegible] b. [partly illegible]
23. a. θ̂ = (2x̄ − 1)/(1 − x̄) = 3 b. θ̂ = [−n/Σ ln(xi)] − 1 = 3.12
25. p̂ = r/(r + x) = .15. This is the number of successes over the number of trials, the same as the result in Exercise 21. It is not the same as the estimate of Exercise 17.

(Chapter 6) 65. a. The approximate value, .0228, is smaller because of
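Exercise 15's estimator θ̂ = ΣXi²/(2n) is the MLE for a one-parameter Rayleigh-type density of the form f(x; θ) = (x/θ)e^(−x²/(2θ)), x > 0 — our reading of the answer's formula, offered as an illustrative sketch rather than the book's own code:

```python
def rayleigh_mle(xs):
    """MLE of theta for f(x; theta) = (x/theta) * exp(-x**2 / (2*theta)), x > 0."""
    return sum(x * x for x in xs) / (2 * len(xs))
```

For example, `rayleigh_mle([2.0, 2.0])` returns 2.0; applied to the exercise's data, the same formula gives the 74.505 reported in part (b).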
skewness in the chi-squared distribution.
   b. This approximation gives the answer .03237, agreeing with the software answer to this number of decimals.
67. No, the sum of the percentiles is not the same as the percentile of the sum, except that they are the same for the 50th percentile. For all other percentiles, the percentile of the sum is closer to the 50th percentile than is the sum of the percentiles.
69. a. 2360, 73.70 b. .9713
71. .9685
73. .9093. Independence is questionable because consumption one day might be related to consumption the next day.

(Chapter 7, continued)
29. [partly illegible] b. θ̂ = Σxi²/(2n) = 74.505, the same as in Exercise 15; 20 ln(2) = 10.16 [?]
31. λ̂ = −ln(p̂)/24 = .0120
33. No, statistician A does not have more information.
37. [partly illegible: max(x1, x2, ..., xn); min(x1, x2, ..., xn)]
39. a. 2X(n − X)/[n(n − 1)]
41. a. X̄ b. (X̄ − c)/√(1 − 1/n) [?]
43. a. V(θ̂) = θ²/[n(n + 2)] b. θ²/n
   c. The variance in (a) is below the bound of (b), but the theorem does not apply because the domain is a function of the parameter.
45. a. X̄ b. N(μ, σ²/n)
   c. Yes, the variance is equal to the Cramér–Rao bound.
   d. The answer in (b) shows that the asymptotic distribution of the theorem is actually exact here.
47. a. 2σ⁴ [?] b. The answer in (a) is different from the answer, 1/(2σ⁴), to 46(a), so the information does depend on the parameterization.
49. [partly illegible] = .0436

(Chapter 8)
33. a. (38.081, 38.439) b. (100.55, 101.19), yes
35. a. Assuming normality, a 95% lower confidence bound is 8.11. When the bound is calculated from repeated independent samples, roughly 95% of such bounds should be below the population mean.
   b. A 95% lower prediction bound is 7.03. When the bound is calculated from repeated independent samples, roughly 95% of such bounds should be below the value of an independent observation.
37. a. 378.85 b. 413.09 c. (333.88, 407.50)
39. 95% prediction interval: (.0498, .0772)
41. a. (169.36, 179.37)
   b. (134.30, 214.43), which includes 152
   c. The second interval is much wider, because it allows for the variability of a single observation.
   d. The normal probability plot gives no reason to doubt normality. This is especially important for part (b), but the large sample size implies that normality is not so critical for (a).
45. a. 2.228 [?] b. 3.940 c. .95 d. .10
49. a. (7.91, 12.00)
   b. Because of an outlier, normality is questionable for this data set.
   c. In MINITAB, put the data in C1 and execute the following macro 999 times:
      let k3 = N(c1)
      sample k3 c1 c3;
      replace.
      let k1 = mean(c3)
      stack k1 c5 c5
      end
51. a. (26.61, 32.94)
   b. Because of outliers, the weight gains do not seem normally distributed.
   c. In MINITAB, see Exercise 49(c).
53. a. (38.46, 38.84)
   b. Although the normal probability plot is not perfectly straight, there is not enough deviation to reject normality.
   c. In MINITAB, see Exercise 49(c).
55. a. (.1075, .2534) [?]
   b. Because of an outlier, normality is questionable for this data set.
   c. In MINITAB, see Exercise 49(c).
57. a. In MINITAB, put the data in C1 and execute the following macro 999 times:

(Chapter 7, continued)
53. 1.275, s = 1.462
55. b. No, E(σ̂²) = σ²/2, so 2σ̂² is unbiased. [?]
59. 416, 448
61. d(X) = (−1)^X, d(200) = 1, d(199) = −1
63. b. β̂ = Σxiyi/Σxi² = 30.040, the estimated minutes per item; σ̂² = Σ(yi − β̂xi)² [partly illegible] = 16.912

Chapter 8

1. a. 99.5% b. 85% c. 2.97 [?] d. 1.15 [?]
3. a. A narrower interval has a lower probability.
   b. No, μ is not random. [?]
   c. No, the interval refers to μ, not individual observations.
   d. No, a probability of .95 does not guarantee 95 successes in 100 trials.
5. [partly illegible]
7. Increase n by a factor of 4. Decrease the width by a factor of 5.
9. a. (x̄ − 1.645σ/√n, ∞); (4.57, ∞)
   b. (x̄ − zα·σ/√n, ∞)
   c. (−∞, x̄ + zα·σ/√n); (−∞, 59.7)
11. .950; .8724 (normal approximation), .8731 (binomial)
13. a. (.99, 1.07) b. 158
15. a. 80% b. 98% c. 75%
17. .06, which is positive, suggesting that the population mean change is positive.
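The contrast in Exercise 41 — a t confidence interval for μ versus a much wider prediction interval for a single future observation — comes from the extra √(1 + 1/n) factor. A stdlib sketch (our own illustration; the t critical value must be supplied from a t table):

```python
import math
import statistics

def t_intervals(data, t_crit):
    """Return (CI for the mean, PI for one new observation), given the t critical value."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)
    half_ci = t_crit * s / math.sqrt(n)             # xbar +/- t * s/sqrt(n)
    half_pi = t_crit * s * math.sqrt(1 + 1 / n)     # xbar +/- t * s*sqrt(1 + 1/n)
    return (xbar - half_ci, xbar + half_ci), (xbar - half_pi, xbar + half_pi)
```

The PI half-width exceeds the CI half-width by a factor of √(n + 1), which is why the interval in (b) is so much wider than the one in (a).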
19. (.513, .615)
21. .218
23. (.439, .814)
25. a. .381 b. .339
29. a. 1.341 b. 1.753 c. 1.708 d. 1.684 e. 2.704
31. a. 2.228 b. 2.131 c. 2.947 d. 4.604 e. 2.492 f. 2.715
57. a. (continued)
      let k3 = N(c1)
      sample k3 c1 c3;
      replace.
      let k1 = stdev(c3)
      stack k1 c5 c5
      end
   b. Assuming normality, a 95% confidence interval for σ is (3.541, 6.578), but the interval is inappropriate because the normality assumption is clearly not satisfied.
59. a. (.198, .230) b. .048
   c. A 90% prediction interval is (.149, .279).
61. .246
63. a. A 95% confidence interval for the mean is (.163, .174). Yes, this interval is below the interval for 59(a).
   b. (.089, .326)
65. (0.1263, 0.3018)
67. a. yes b. (196.88, 222.62)
69. a. V(β̂) = σ²/Σxi² [?]; σβ̂ = σ/√(Σxi²) [?]
   c. β̂ ± tα/2,n−1·s/√(Σxi²); (29.93, 30.15) [?]
   d. Put the xi's far from 0 to minimize σβ̂.
73. a. .00985 b. .0578
75. a. [partly illegible]
77. [partly illegible]; (29.9, 39.3) with confidence level .9785
79. a. P(A1 ∩ A2) = .95² [?] b. P(A1 ∩ A2) ≥ .90
   c. P(A1 ∩ ··· ∩ Ak) ≥ 1 − α1 − ··· − αk [?]

(Chapter 9, interleaved on this page)
21. Test H0: μ = 5 vs. Ha: μ ≠ 5.
   a. Do not reject H0 because t.025,12 = 2.179 > |1.6|
   b. Do not reject H0 because t.025,12 = 2.179 > |−1.6|
   c. Do not reject H0 because t.005,24 = 2.797 > |−2.6|
   d. Reject H0 because t.005,24 = 2.797 < |−3.9|
23. Because t = 2.24 > 1.708 = t.05,25, reject H0: μ = 360. Yes, this suggests contradiction of prior belief.
25. Because |z| = 3.37 > 1.96, reject the null hypothesis. It appears that this population exceeds the national average in IQ.
27. a. no, t = −.02 b. .58 [?] c. n = 20 total observations [?]
29. [partly illegible]
31. Because t = −1.24 > −1.397 = −t, we do not have evidence to question the prior belief.
35. a. The distribution is fairly symmetric, without outliers.
   b. Because t = 4.25 > 3.499 = t.005,7, there is strong evidence to say that the amount poured differs from the industry standard, and indeed bartenders tend to exceed the standard.

Chapter 9
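The MINITAB macro above resamples the data with replacement 999 times and records a statistic from each resample. An equivalent stdlib Python sketch (names are ours, offered only as an illustration of the same bootstrap idea):

```python
import random
import statistics

def bootstrap_stats(data, stat=statistics.mean, reps=999, seed=1):
    """Resample with replacement `reps` times and record the statistic, as the macro does."""
    rng = random.Random(seed)
    n = len(data)
    return [stat(rng.choices(data, k=n)) for _ in range(reps)]
```

Sorting the 999 recorded values and taking the 25th and 975th gives a 95% bootstrap percentile interval; substituting `stat=statistics.stdev` mirrors the Exercise 57 macro.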
35. c. Yes, the test in (b) depends on normality, and a normal probability plot gives no reason to doubt the assumption.
   d. .643, .185, .016
1. a. yes b. no c. no d. yes e. no f. yes
5. H0: σ = .05 vs. Ha: σ < .05. Type I error: Conclude that the standard deviation is less than .05 mm when it is really equal to .05 mm. Type II error: Conclude that the standard deviation is .05 mm when it is really less than .05.
37. a. Do not reject H0: p = .10 in favor of Ha: p > .10 because z = 1.33 < 1.645 [?]. Because the null hypothesis is not rejected, there could be a type II error.
   b. .49, .27 [?] c. .362
7. A type I error here involves saying that the plant is not in compliance when in fact it is. A type II error occurs when we conclude that the plant is in compliance when in fact it isn't. A government regulator might regard the type II error as being more serious.
39. a. Do not reject H0: p = .02 in favor of Ha: p < .02 because z = −1.1 > −1.645. There is no strong evidence suggesting that the inventory be postponed.
   b. .195 c. < .0000001
41. a. Reject H0 because z = 3.08 > 2.58. b. .03
9. a. R1 [?]
   b. A type I error involves saying that the two companies are not equally favored when they are. A type II error involves saying that the two companies are equally favored when they are not.
   c. binomial, n = 25, p = .5; .0433
   d. .3, .4881; .4, .8452; .6, .8452; .7, .4881
   e. If only 6 favor the first company, then reject the null hypothesis and conclude that the first company is not preferred.
43. Using n = 25, the probability of 5 or more leaky faucets is .0980 if p = .10, and the probability of 4 or fewer leaky faucets is .0905 if p = .3. Thus, the rejection region is 5 or more, α = .0980, and β = .0905.
45. a. reject b. reject c. do not reject d. reject e. do not reject
47. a. .0778 b. .1841 c. .0250 d. .0066 e. .5438
49. a. P = .0403 b. P = .0176 c. P = .1304 d. P = .6532 e. P = .0021 f. P = .000022
11. a. H0: μ = 10 vs. Ha: μ ≠ 10 b. .0009 [?]
51. Based on the given data, there is no reason to believe
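Exercise 43's rejection region can be verified by direct binomial computation. A stdlib sketch (our own illustration, not the book's code):

```python
import math

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Bin(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

alpha = binom_tail_ge(5, 25, 0.10)     # P(reject | p = .10), about .0980
beta = 1 - binom_tail_ge(5, 25, 0.30)  # P(4 or fewer | p = .30), about .0905
```

Both values match the answer's α = .0980 and β = .0905 for the "5 or more" rejection region.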
5319..0076 dic =2.58 ec = 1.96 that pregnant women differ from others in terms of true £, ¥= 10.02, so do not reject Hy average serum receptor concentration. g. Recalibrate if < —2.58 or 2 > 2.58 53. a. Because the P-value is .17, no modification is 13. b. 00043, 0000075, less than .01 indicated b. 957 15, a. .0301, b: 0030 ¢:.0040 55. Because ¢ = ~1.759 and the P-value = .089, which is 17. a. Because z = 2.56 > 2.33, reject Hy b..84 less than .10, reject Ho: = 3.0 against a two-tailed alternative at the 10% level. However, the P-value e, 142d. .0052 exceeds .05, so do not reject Hp at the 5% level. There 19. a, Because z = ~2.27 > —2.58, do not reject Hy b. 22 ©. 22 --- Trang 839 --- 826 Chapter 10 is just a weak indication that the percentage is not equal to. 91. a. For the test of Hy: {t= flo vs. Hy: > flo at level 2, 3% (lower than 3%). reject Ho if 2Exi/pto > Loy 87 a Test He: ji = 10 va Fig WS 10. For the test of Hy: 11 = lo vs. Hy: ft < fup at level 2, b. Because the P-value is .017 <.05, reject Ho, reject Ho if 2Exi/po < Zia.2n suggesting that the pens do not meet specifications. For the test of Ho: = lo vs. Ha: # Ho at level 2, ¢, Because the P-value is .045 > .01, do not reject Ho, reject Hy if 2ExJ/ pl > 72 94 OF suggesting there is no reason to say the lifetime is if 2x0 < 2/220 inadequate. b. Because Xx; = 737, the test statistic is 2Exi/jt0 d. Because the P-value is .0011, reject Ho. There is good = 19.65, which gives a P-value of .52. There is no evidence showing that the pens do not meet reason to reject the null hypothesis. specifications, 93. a. yes 61. a. 98, .85, 43, .004, 0000002 b. 40, .11, .0062, 0000003 ¢. Because the null hypothesis willbe rejected with high Chapter 10 probability, even with only slight departure from the null hypothesis, it is not very useful to do.aOl level 4. g, —4:it doesn’t b..0724, .269 est ¢. Although the CLT implies that the distribution will be 63. b. 36.61 ¢. 
yes approximately normal when the sample sizes are each 100, the distribution will not necessarily be normal 65. a. Dx > cb yes when the sample sizes are each 10. 67. Yes, the test is UMP for the alternative Hy : 0 >.5 3. Do not reject Hp because z = 1.76 < 2.33 because the tests for Hy : 0 =.5 vs. Hy : 0 = py all have the same form for any po > .5. 5. a. H, says that the average calorie output for sufferers is more than | cal/em?/min below that for non-sufferers. 69. b. .05 Reject Hy in favor of H, because z = —2.90 < 2.33 ¢. .04345, .05826; Because .04345 < .05, the test is not b. 0019 819 d..66 unbiased. 4. 05114; not most powerful 7. Yes, because z = 1.83 > 1.645. 71. b. The value of the test statistic is 3.041, so the P-value is 9%» & —¥=6.2 081, compared to 089 for Exercise 55. b. 2 = 1.14, two-tailed P-value = .25, so do not reject the null hypothesis that the population means are 73. A sample size of 32 should suffice. equal. ¢. No, the values are positive and the standard deviation 78. a. Test Ho: = 2150 vs. Hy: > 2150 exceedsthe ean. b. = (¥=2150)/(s//n)_¢. 1.33 d. 101 d. 95% CI: (10.0, 29.8) e, Do not reject Ho at the .05 level. IL. a. A.95% Cl for the true difference, fast food mean — not 77. Because t= .77 and the P-value is .23, there is no fast food mean is (219.6, 538.4) evidence suggesting that coal increases the mean heat b. The one-tailed P-value is .014, so reject the null flux. hypothesis of a 200-calorie difference at the .05 79. Conclude that activation time is too slow at the .05 level, Jevel, and conclude that- yes, there 1sstrong evidence. but not at the .01 level. 13. 22. No. 81. A normal probability plot gives no reason to doubt the 45 p, It increases. normality assumption. Because the sample mean is 9.815, giving = 4.75 and a (upper tail) P-value of 00007, 17. Because z= 1.36, there is no reason to reject the reject the null hypothesis at any reasonable level. The hypothesis of equal population means (p = .17). 
true average flame time is too high. 19. Because z = .59, there is no reason to conclude that the 83. Assuming normality, calculate ¢ = 1.70, which gives a population mean is higher for the no-involvement group two tailed P-value of .102. Do not reject the null (p = 28). hypothesis Ho: ft = 1.75. 21. Because = —3.35 < -3.30=fooia2 yes, there is 85. The P-value for a lower tail test is .0014 (normal evidence that experts do hit harder. approximation, .0005), so it is reasonable to reject the idea that p = .75 and conclude that fewer than 75% of 23+ b-No _¢. Because |¢] = |~.38] < 2.228 = 1025.10, no, mnschanios can identify the’ problem, there is no evidence of a difference. 87. Because 1 = 6.43, giving an upper tail P-value of 25+ Because the one-tailed P-value is 005 < .01, conclude at 0000002, conclude that the population mean time the O1-Ievelithat the difference ts.as tated: exceeds 15 minutes, This could result in a type I error. 89. Because the P-value is .013 > .01, do not reject the null 27+ Yes, because ¢ = 2.08 with P-value = .046. hypothesis at'the .01 level. 29 b. (127.6, 202.0) ¢. 131.8 --- Trang 840 --- Chapter 10 827 31. Because ¢ = 1.82 with P-value .046 < .05, conclude at #start withX inC1, Y inc2 the .05 level that the difference exceeds 1. let k3 =N(c1) let k4 =N(c2) 33. a. (F—¥) £ bayrmin-2 “Spat sample k3clc3; b. (~.24, 3.64) replace: ¢. (~.34, 3.74), which is wider because of the loss of a sample k4c2c4; degree of freedom replace: 35. a. The slender distribution appears to have a lower mean Lehi = mesh (Co) aean lea) and lower variance. Spec eleoe? b. With ¢= 1.88 and a P-value of .097, there is no eng significant difference at the .05 level 71. a. Here is a macro that can be executed 999 times in 37. With ¢ = 2.19 and a two-tailed P-value of .031, there is a MINIEAB: , significant difference at the .05 level but not the .01 level. detartwithX incl; vin c2 2 let k3 =N(c1) 39. 
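Many of the Chapter 9 and 10 answers above reduce to a large-sample z statistic and its P-value. A minimal Python sketch of that computation — the sample summaries below are invented for illustration, not taken from any exercise:

```python
import math

def two_sample_z(xbar, ybar, s1, s2, m, n, delta0=0.0):
    """Two-sample z statistic for H0: mu1 - mu2 = delta0 (large samples)."""
    se = math.sqrt(s1**2 / m + s2**2 / n)
    return (xbar - ybar - delta0) / se

def two_sided_p(z):
    """Two-tailed P-value from the standard normal cdf, via math.erf."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative summaries only (m = n = 50, sample SDs 25 and 30):
z = two_sample_z(xbar=106.2, ybar=100.0, s1=25, s2=30, m=50, n=50)
p = two_sided_p(z)
```

With these made-up numbers z is about 1.12 with a two-tailed P-value near .26, so the null hypothesis would not be rejected — the same reject/do-not-reject logic used throughout the answers above.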
With ¢ = 3.89 and one-tailed P-value = .006, conclude let k4 =N(c2) at the 1% level that true average movement is less for the sample k3clc3; TightRope treatment. Normality is important, but the replace. normal probability plot does not indicate a problem. sample k4c2c4; replace. 41. a. The 95% confidence interval for the difference of let k2 = medi(c3)-medi(c4) means is (,000046, .000446), which has only positive shack koce'ce values. This omits 0 as a possibility, and says that the end conventional mean is higher. b. With 1 = 2.68 and P-value = .010, reject at the 05 73. a. (.593, 1.246) level the hypothesis of equal means in favor of the b. Here is a macro that can be executed 999 times in conventional mean being higher. MINITAB: # start withX inC1, Y inC2 43. With = 1.87 and a P-value of .049, the difference is let k3 -N(c1) (barely) significantly greater than 5 at the .05 level. let k4 =N(c2) 45.a.No b.—49.1 ©4911 SAMPLE ICS 1.82 replace. 47. 1 2 3 4 sample k4c2c4; x 10 20 30 40 replace. 5 im 1 31 4 let k5 = stdev(c3) /stdev(c4) stackk5c12¢12 end 49, a. Because Il = |-4.841 > 1.96, conclude that there is a difference. Rural residents are more favorable to the 75. a. Because ¢ = —2.62 with a P-value of .018, conclude increase. that the population means differ. At the 5% level, b. 9967 blueberries are significantly better. b. Here is a macro that can be executed repeatedly in 51. (016, 171) MIRTTAB: 53. Because 2 = 4.27 with P-value .000010, conclude that PStare with dats ine) Beoub yar ine? the radiation is beneficial. detkt = N(eL) Sample k3clc3. 55. a. Ho: ps = pa He: ps > Pr unstack c3 ¢4c5; b. (X — Xo)in subs c2. c. (Xy — X2)/VXa FX let k9 =mean(c4)-mean(c5) d. With z = 2.67, P = .004, reject Hy at the .01 level. stack k9 c6 c6é end 57. 769 71. a. Because f = 4.46 with a two-tailed P-value of .122, 22; Because 2 21d with B— OZ orelect Ha atthe (01 there is no evidence of unequal population variances. level. Conclude that lefties are more accident-prone. 
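The MINITAB macros in these answers bootstrap a statistic by resampling each column with replacement 999 times and stacking the result. The same scheme can be sketched in Python; the data vectors here are invented placeholders, and the comments map each step to the corresponding macro command:

```python
import random
import statistics

def boot_median_diff(x, y, reps=999, seed=1):
    """Bootstrap distribution of median(x*) - median(y*), resampling
    each sample with replacement, as in the MINITAB macros above."""
    rng = random.Random(seed)
    dist = []
    for _ in range(reps):
        xs = rng.choices(x, k=len(x))   # like: sample k3 c1 c3; replace.
        ys = rng.choices(y, k=len(y))   # like: sample k4 c2 c4; replace.
        # like: let k2 = medi(c3) - medi(c4); stack k2 c6 c6
        dist.append(statistics.median(xs) - statistics.median(ys))
    return dist

x = [4.2, 5.1, 5.9, 6.3, 7.0]   # placeholder data, not the exercise data
y = [3.8, 4.4, 4.9, 5.5, 6.1]
dist = boot_median_diff(x, y)
observed = statistics.median(x) - statistics.median(y)
```

A bootstrap confidence interval for the difference of medians then comes from the percentiles of `dist`, mirroring what the stacked column in MINITAB is used for.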
by Hece: ie we mmacte: thavvan be es ected sepemedly 61. a..0175 hb. .1642 0200 d. 0448, MINITAB: 0035 let kl =n(C1) Sample Klclc3. 63. No, because f = 1.814 < 6.72 = Foio2. unstack c3 4 c5; subs c2. 65. Because f = 1.2219 with P = .505, there is no reason to letike= evdev(cdii/eedavicsi question the equality of population variances. Been ke ce ce 67. 8.10 end 69. a. (158,735) 79, a. A MINITAB macro is given in #75(b). b. Here is a macro that can be executed 999 times in gy. a. (11.85, -6.40) MINITAB? b. See Exercise 57(a) in Chapter 8. --- Trang 841 --- 828 Chapter 11 85. The difference is significant at the .05, 01, and 001 7. a. The Levene test gives f = 1.47, P-value .236, so there levels. is no reason to doubt equal variances. 5 : b. Because f= 10.48 > 4.02 = Foi, there are 89. b. No, given that the 95% CI includes 0, the test at the uignificany ainverencevathong Wik mean .05 level does not reject equality of means. 91. (—299.2, 1517.8) Source DE Ss) MS FP Plate 4 43993 10998 10.48 0.000 93. (1020.2, 1339.9). Because 0 is not in the CI, we would length reject equality of means at the .01 level. Error 30 31475 1049 95. Because = 2.61 and the one-tailed P-value is .007, the Ba tal 5a, SBS difference is significant at the .05 level using either a a one-tailed or a two-tailed test. wae At kh Bs 97. a. Because 1 = 3.04 and the two-tailed P-value is .008, Splitting the paints into two groups, (3, 1, 4), (2, 5), the difference is significant at the .05 level. there are no significant differences within groups but the b. No, the mean of the concentration distribution paints in the first group differ significantly (they are depends on both the mean and standard deviation lower) from those in the second group. of the log concentration distribution. 13.3 1 4 2 5 99, Because ¢ = 7.50 and the one-tailed P-value is 0000001, 4275 462.0 469.3 502.8 532.1 the difference is highly significant, assuming normality. SSS 101. 
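Answers such as 63 and 65 above compare the ratio of sample variances, f = s1²/s2², against an F critical value. A minimal sketch with invented data (values chosen so the sample variances come out exactly):

```python
import statistics

def variance_ratio_f(x, y):
    """F statistic s1^2 / s2^2 for comparing two population variances.
    statistics.variance uses the n - 1 denominator (sample variance)."""
    return statistics.variance(x) / statistics.variance(y)

# Illustrative data only: sample variances are 4.0 and 1.0
f = variance_ratio_f([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```

The computed f would then be compared with F critical values for (m − 1, n − 1) degrees of freedom, exactly as in the answers above.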
The two-sample / is inappropriate for paired data, The 15. w = 5.92; At the 1% level the only significant paired 1 gives a mean difference .3, 1 = 2.67, and the _differences are between formation 4 and the first two two-tailed P-value is .045, so the means are significantly formations. different at the .05 level. We are concluding tentatively 2 1 3 4 that the label understates the aleohol percentage. 24.69 26.08 29.95 33.84 103. Because paired ¢ = 3.88 and the two-tailed P-value is .008, the difference is significant at the .05 and .01 17. (~.029, 379) levels, but not at the .001 level. 19.436 105. Because z = 2.63 and the two-tailed P-value is .009. a (00% 21. a. Because f = 22.60 > 3.26 = Forse, there are there is s, sionificane ifteconce E the 01 level, sivnificant differences among the wean suggesting better survival at the higher temperature. b. (299.1, 35.7), 29.4, 99.1) 107. .902, .826, .029, 00000003 23. The nonsignificant differences are indicated by the 109. Because z = 4.25 and the one-tailed P-value is .00001, lnderscares. the difference is highly significant and companies 10 6 3 1 appear to discriminate. 45.5 50.85 55.40 58.28 UL. With Z=(X—Y)/V/X/n+Y¥/m, the result is | z= —5.33, two-tailed P-value = 0000001, so one 25 @ Assume normality and equal variances. should conclude that there is a significant difference in Baseball = Le Ls 2th F rosaaf velit — 18, parameters, there are no significant differences among the means. 113. (i) not bioequivalent (ii) not bioequivalent (iii) 27+ @ Because f= 3.75, P-value = .028, there are bioequivalent significant differences among the means. b. Because the normal plot looks fairly straight and the P-value for the Levene test is .68, there is no reason to doubt the assumptions of normality and constant Chapter 11 variance. ¢. The only significant pairwise difference is between 1. 
a, Reject Ho: fli = M2 = Hs = fa = Hs in favor of Hy: braids and Hi Ha, Hs, Ha Ms not all the same, because 4 3 2 1 f= 5.57 2 2.69 = Foos.4.s0. 5.82 6.35 7.50 8.27 b. Using Table A.9, 001 < P-value < .01. (The P-value od : is .0018) 31. 63 3. Because f= 643>2.95=Foss2, there are significant differences among the means. 33. aresin( y/x/n) 5. Because f= 10.85 >4.38=Foi335, there are 35. a Because f= 1.55 < 3.26 = Fos4i2, there are no significant differences among the means. significant differences among the means. ip, Because f = 2.98 < 3.49 = Fos.3,12, there are no Source DE ss MS F P significant differences among the means. Formation 3 509.1 169.7 10.85 0.000 37, with = 5.49 > 4.56 = Foisis, there are significant Error 36 563.1 15.6 differences among the stimulus means. Although not all Total 39 1072.3 differences are significant in the multiple comparisons analysis, the means for combined stimuli were higher. --- Trang 842 --- Chapter11 829 Differences among the subject means are not very SL. a. With f= 155 <281=F,o212, there is no important here. The normal plot of residuals shows no significant interaction at the .10 level. reason to doubt normality. However, the plot of residuals b. With f = 376.27 > 18.64 = Foo1212 there is a against the fitted values shows some dependence of the significant difference between the formulation variance on the mean. If logged response is used in place means at the .001 level. of response, the plots look good and the F test result is With f= 19.27 > 12.97 = Foor there is a similar but stronger. Furthermore, the logged response significant difference among the speed means at the gives more significant differences in the multiple .001 level comparisons analysis. ¢. Main effects Formulation: (1) 11.19, (2) -11.19 Speed: (60) 1.99, (70) ~5.03, (80) 3.04 Means: _ DW oT we wer war 5S Beeiste ANOVA table 24.825 27.875 29.1 40.35 41.22 45.05 Source DE ss MSE P Pen 3 1387.5 462.50 0.68 0.583 39. 
With f = 2.56 < 2.61 = F 103,12; there are no significant surface 2 2888.1 1444.04 2.11 0.164 differences among the angle means. Interaction 6 8100.3 1350.04 1.97 0.149 41. a. With f= 1.04 < 3.28 = Fos234, there are no Error 12 8216.0 684.67 significant differences among the treatment means. Total 23 20591.8 Source DE aS us F With f = 1.97 < 2.33 = F 106,12, there is no significant Dp interaction at the .10 level. Treatment 2 28.78 14.39 1.04 With f = .68 < 2.61 = F yo, there is no significant Block 17 2977.67 175.16 12.68 difference among the pen means at the .10 level. Error 34 469.56 13.81 With f = 2.11 < 2.81 = F102,12, there is no significant Total 53 3476.00 difference among the surface means at the .10 level. b. The very significant f for blocks, which shows that 57: @- F = MSAB/MSE . blocks differ strongly, implies that blocking was b. A: F = MSA/MSAB__B: F = MSB/MSAB suocesstul: 59. a, Because f= 343 > 2.61 =Fos.an there is a 43, With f= 8696.01 = Forze, there are significant significant difference among the exam means at the differences among the three treatment means. 205 level: - ’ The normal plot of residuals shows no reason to doubt b. Because f= 1.65 < 2.61 = Fos.449, there is no normality,.and'the plot’bf residuals against the fitted significant difference among the retention means at values shows no reason to doubt constant variance. the 208 level There is no significant difference between treatments B64, a, and C, but Treatment A differs (it is lower) significantly from the others at the .01 level. ieee of Sf USF Means: SS A29.49 B3131 31.40 Piet i e229: 282 a) Error 25 2.690 108 45. Because f = 8.87 > 7.01 = Foi.as, reject the hypothesis Total 29 36 that the variance for B is 0. Because f= 2.15 < 2.76 =Fosaas, there is no 49. a. significant difference among the diet means at the .0S ee level. Source at 8s Ms F b. (~.59, 92) Yes, the interval includes 0. 
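The f values quoted throughout these Chapter 11 answers are mean-square ratios from an ANOVA table, f = MSTr/MSE. A minimal one-way ANOVA sketch in Python, using tiny invented groups so the arithmetic is exact:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F = MSTr/MSE for a list of samples."""
    n = sum(len(g) for g in groups)          # total observations
    k = len(groups)                          # number of treatments
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # treatment sum of squares: n_i * (group mean - grand mean)^2
    ss_tr = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # error sum of squares: within-group squared deviations
    ss_e = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_tr = ss_tr / (k - 1)
    ms_e = ss_e / (n - k)
    return ms_tr / ms_e

# Illustrative groups with means 2, 3, 4 (not from any exercise)
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

Here f = 3.0 with (2, 6) degrees of freedom; comparing it with F.05,2,6 gives the reject/do-not-reject conclusions stated in the answers.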
A 2 30763.0 15381.5 3.79 es) 5 3 34185.6 11395.2 2.81 63, a, Test Mo: ti = 2 = fs versus Hy: the three means Interaction 6 43581.2 7263.5 1.79 are not ail the same. With f = 4.80 and Fo52.16 = Error 24 97436.8 4059.9 3.63 < 4.80 < 623 =Foi2is it follows that Total 35_205966.6 01 < P-value < .05 (more’ precisely, P = .023). an Reject Ho in favor of H, at the 5% level but not at b. Because 1.79 <2.04=Fige2% there is no the 1% level. significant interaction. b. Only the first and third means differ significantly at ¢. Because 3.79 > 3.40 = F os2.24, there is a significant the 5% level. difference among the A means at the .05 level. A 3 % d. Because 2.81 < 3.01 = F.os2s, there is no 5 significant difference among the B means at the .05 Ga5F 2692) 28.07 level. e. Using w = 64.93, 65. Because f = 1123 > 4.07 = Fs.,s. there are significant differences among the means at the .05 level. 3 1 2 For Tukey multiple comparisons, w = 7.12: 3960.2 4010.88 4029.10 --- Trang 843 --- 830 Chapter 12 ¢. No, there is a wide range of y values for a given x; for ieee eM RM EID example when temperature is 18.2 the ratio ranges 29.92 33.96 125.84 129.30 from 9 102.68. 3. Yes. Yes. ‘The means split into two groups of two. The means within each group do not differ significantly, but the means in 5. b. Yes the top group differ strongly from the means in the ¢, The relationship of y to x is roughly quadratic. bottom group. 7. 0.5050 psi be L3 psi. 130 psi d. —130 psi 67. The normal plot is reasonably straight, so there is no * m ‘ reason to doubt the normality assumption. Sean WS mimi bea smiiain 8S mufminy 1,305 m/min d. 4207, .3446 e. 0036 01m —10n—b.3.08, 2.5 pounce: 20 Ss MS zs €..3653 4624 A 1 322.667 322.667 980.5 13. a. y = 63 + .652x B 3 35.623 11.874 36.1 b. 23.46, -2.46 AB 3 8.557 2.852 8.7 ©. 392, 5.72 Error 16 5.266 +329 d. 956 Total 23 372.113 e. y = 2.29 + .564x, 17 = 688 15. a. y = —15.2 + .0942x With f = 8.7 > 3.24 =Fossis there is significant b. 
1.906 interaction at the .05 level. ¢. —1.006 , ~0.096, 0.034, 0.774 In the presence of significant interaction, main effects are d. 451 not very useful. 17. a. Yes b. slope, .827; intercept, —1.13 ¢. 40.22 Chapter 12 Gaon e975 1. a, Temperature aa 19. a. y= 75.2 — .209x 54.274 a b. The coefficient of determination is .791, meaning that th 3 the predictor accounts for 79.1% of the variation in y. 17) 445 ¢. The value of s is 2.56, so typical deviations from the 17| 67 regression line will be of this size. 17 Stem: hundreds and tens . 18] 0000011 Leaf: ones Aly be a 00d 18) 2222 7.72 18) 445 | . 7 . 1s] 6 25. fly = 1.8fy +32, Bi, = 1.8, iy 8 29. a. Subtracting ¥ from each x; shifts the plot ¥ units to ‘The distribution is fairly symmetric and bell-shaped with the left. The slope is left unchanged, but the new a center around 180. y intercept is y, the height of the old line at x = x. Ratio b. fy =¥ = By + BX and Bi = By o | e890 31. a. 00189 b. 7101 ; gone ¢. No, because here E(x; ~ ¥)°is 24,750, smaller than the value 70,000 in part (a), so V(B,) = o7/Z(x; — x)” is A | “dees higher here. A 66 1| 8889 Stem: ones; 33. a. (51, 1.40) 3 | a4 Peske Leathe b. To test Ho: By =1 vs. Hy: fy <1, we compute 5 1 = —2258 > ~1.383 = 1109, so there is no 3 | % reason to reject the null hypothesis, even at the 10% level. There is no conflict between the data and the : & assertion that the slope is at least 1. 3! 00 35. a. By = 1.536, and a 95% CT is (.632, 2.440) b. Yes, for the test of Ho: B, = O-vs. Hy: fy # 0, we find The distribution is concentrated between 1 and 2, with 1 = 3.62, with P-value .0025. At the .01 level some positive skewness. conclude that there is a useful linear relationship. b. No, x does not determine y: for a given x there may be ¢. Because 5 is beyond the range of the data, predicting more than one y. at a dose of 5 might involve too much extrapolation. --- Trang 844 --- Chapter 12 831 d. p, = 1.683, and a 95% CI is (531, 2.835). 59 a. 
For the test of Ho: p =0 vs. Hy: p > 0, we find Eliminating the point causes only moderate change, r = .760, t = 4.05, with P-value < .001. At the .001 so the point is not extremely influential. level conclude that there is positive correlation. _ b. Because? = .578 we say that the regression accounts 37, a, Yes; forthe test of Ha: By= 0 vs: Ha: B. # 0, we find for 57.8 % of the variation in endurance. This also = —6.73, with P-value .00002. At the .01 level applies to prediction of lactate level from endurance. conclude that there is a useful linear relationship. b. (—2.77,-1.42) 61. For the test of Ho: p = Ovs. Hy: p #0, we find r = .773, 1 = 2.44, with P-value .072. At the .05 level conclude M3: Nosece=: 7S -and the F-value'is 46; so there:isno evidence that there is not a significant correlation, With such a for a significant impact of age on kyphosis. small sample size, a high r is needed for significance. 45. a. sy increases as the distance of x from x increases 63. a. Reject the null hypothesis in favor of the alternative. b. (2.26, 3.19) b. No, with a large sample size a small r can be eet 34 ATT) significant. d. At least 90% ¢. Because t = 2.200 > 1.96 = to25,900g the correlation 47. a, The regression equation is y= ~1.58 +2.59x and ig ‘Statistically ((but. not inecessanly practically) R= 838. significant at the .05 level. b. A 95% confidence interval for the slope is (2.16, 67, a, 184,238, 426 3.01). In repetitions of the whole process of data b. The mean that is subtracted is not the mean ¥,,_; of collection and calculation of the interval, roughly js Sy ans a, esthesmectens OF 26a, no he 95% of the intervals will contain the true slope. ‘Also the” denominaior” Of ry Ge Hoe ¢. When tannin = .6 the estimated mean astringency is (S50, nt, Sea —0.0335 and the 95% confidence interval is (-0.125, VOT (i — Bn) 03 (41 — Fan)”. However, if 0.058) nis large then r is approximately the same as the d. When tannin =.6 the predicted astringency is correlation. 
A similar relationship applies to r2. —0,0335 and the 95% prediction interval is c. No (0.5582, 0.4912) d. After performing one test at the .05 level, doing more e, Our null hypothesis is that true average astringency tests raises the probability of at least one type I error to is 0 when tannin is .7, and the alternative is that the more than .05. true average is positive. The f for this test is 4.61, with P-value = 000035, so yes there is compelling 69. The plot shows no reasons for concern about using the evidences, simple linear regression model. 49. (431.2, 628.6) 71. a. The simple linear regression model may not be a perfect fit because the plot shows some curvature, 51. a. Yes, for the test of Ho: B; =0 vs. Ha: Bi #0, b. The plot of standardized residuals is very similar to we find ¢= 10.62, with P-value .000014. At the the residual plot. The normal probability plot gives .001 level conclude that there is a useful linear no reason to doubt normality. relationship. b. (8.24, 12.96) With 95% confidence, when the flow 73: a. For the test of Ho: f, = 0 vs. Hy: Bi: #0, we find rate is increased by 1 SCCM, the associated expected 1 = 10.97, with P-value .0004. At the .001 level change invetch rate is in'the interval, conclude that there is a useful linear relationship. €. (36.10, 40.41) This is fairly precise. b. The residual plot shows curvature, so the linear d. (31.86, 44.65) This is much less precise than the relationship of part (a) is questionable. imervalin (6) ¢. There are no extreme standardized residuals , and the e. Because 2.5 is closer to the mean, the intervals will be plot of standardized residuals is similar to the plot of narrower. ordinary residuals. f Because 6 is outside the range of the data, it is 75. The first data set seems appropriate for a straight-line unknown whether the regression will apply there. snodsl,, “The secoed. dath det shows i quadiatis g. 
Use a 99% CI at each value: (23.88, 31.43), (29.93, gelationihip, 50 the sttuighidine relationship. is 35.98), O20 A145) inappropriate. The third data set is linear except for an 53, a. Yes outlier, and removal of the outlier will allow a line to be b. Yes, for the test of Ho: By = Ovs. Ha By # 0, we find fit. The fourth data set has only two values of x, so there is 1 = —4.39, with P-value < .001. At the .001 level no way to tell if the relationship is linear. conclude that there is a useful linear relationship. 77. a. To test for lack of fit, we find f= 3.30, with 3 &.(403.6, 468.2) numerator df and 10 denominator df, so the P-value 87. a. r= .923, sox and y are strongly correlated. is .079. At the .05 level we cannot conclude that the by uadetedted relationship is poor. c. unaffected b. The scatter plot shows that the relationship is not d. The normal plots seem consistent with normality, but linear, in spite of (a). In this case, the plot is more the scatter plot shows a slight curvature. sensitive than the test. e. For the test of Ho: p =0 vs. Ha: p #0, we find 79 77.3 1= 7.59, with P-value .00002. At the .001 level b404 conclude that there is a useful linear relationship. ©. ‘The cvetictenty 8 thataleneretice ft aalas eaudéa by the window, all other things being equal. --- Trang 845 --- 832 Chapter 12 81. a, .686, no i Sues ee Ze Si DF ss MS F b. We find f = 28.6 > 2.62 = Foo1,16.196, 80 there is a Sonuce PE) SS SCE significant relationship at the .001 level. Regression 2 5 2.5 0.625 ¢. With all other predictors held constant, the estimated Error 1 4 4.0 difference in y between class A and not is 364. In Total 3.009 terms of $/ft?, the effect is multiplicative. Class A tt buildings are estimated to be worth 44% more With f =.625 < 1995 = Fos91, there is no dollars per square foot, with all other predictors held significant relationship at the .05 level. constant. . ; d. The difference in (c) is highly significant because the 93. 
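The Chapter 12 answers repeatedly test H0: β1 = 0 with t = β̂1/(s/√Sxx). A minimal least-squares sketch in Python — the data are invented, scattered roughly around y = 2x + 1:

```python
import math

def slope_t(x, y):
    """Least-squares slope, intercept, and t statistic for H0: beta1 = 0."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx                      # slope estimate
    b0 = ybar - b1 * xbar               # intercept estimate
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    s = math.sqrt(sse / (n - 2))        # residual standard deviation
    t = b1 / (s / math.sqrt(sxx))       # t with n - 2 df
    return b1, b0, t

# Illustrative data only, not from any exercise
b1, b0, t = slope_t([1, 2, 3, 4, 5], [3.1, 4.9, 7.2, 8.8, 11.0])
```

The resulting t is compared with a t critical value on n − 2 degrees of freedom, which is how the "useful linear relationship" conclusions above are reached.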
By = J, s= VO (y SY /(n— 1), two-tailed P-value is .00000013 coo = 1m, FE torsn1s/Vn 83. a. 48.31, 3.69 95. a. ee yy =F, b. No, because the interaction term will change. aa pie ee c. Yes, f = 18.92, P-value < .0001. =o Ha d. Yes, 1 = 3.496, P-value = .003 < .01 we nae ae b. §, =I f= les =Jai = m+ lyemtn ©. (21.6, 41.6) aE AC Ca le a al f. There appear to be no problems with normality or SLOW) + Un Oi- hy = curvature, but the variance may depend on x, VSSE/(m+n—2) er, = 4/(m +n) 5) wane d. By = 128.17, B, = 14.33 f= 121, 1=1,...,3; b. With = 5.03 > 3.69 = Foss, there is a significant J = 135.33, i=4,...,6 5 relationship at the .05 level SSE OG kms: Bn 28 c. Yes, the individual hypotheses deal with the issue of 95% CI for B; (2.09, 26.58) whether an individual predictor can be deleted, not the 97, Residual = Dep Var ~ Predicted Value effectiveness of the whole model. Std Error Residual = [MSE — (Std Error Predict)*]* de 6.25330 CG aL) Student Residual = Residual/Std Error Residual e. With f = 3.44 < 4.07 = Fos... there is no reason to reject the null hypothesis, so the quadratic terms can 101. a. Hy = 1/n+ (x) — X)(x) —¥)/E(ae — 8)? be deleted. VF.) = 97[L/n + (i — 3) /BQ% — 8°] 5 87. a. The quadratic terms are important in providing a good Bev ¥) = oh — Wn (i) (eG 3))| Ele heddie ¢. The variance of a predicted value is greater for an x a) that is farther from x Bi DROS ELS (S90, 77), d. The variance of a residual is lower for an x that is 89. a. rey = 843 (.000), rea = 621 (.001), ma = 843 farther from © (.000) Here the P-values are given in parentheses to €. Itis intuitive that the variance of prediction should be three decimals: higher with increasing distance. However, points that b. Rating = 2.24 + 0.0419 IBU — 0.166 ABV. Because are farther away tend to draw the line toward them, the two predictors are highly correlated, one is so the residual naturally has lower variance. fedundant, 103. a. 
With f= 12.04 >9.55=Foi27, there is a Stearn este: significant relationship at the .O1 level e. The regression is quite effective, with R? = .872. The ae anensine ee To test Ho: fi=0 vs. Hy fi #0, = ABV coefficient is not significant, so ABV is not g u 2.96 > to25.7 = 2.36, So reject Hy at the .05 level. needed. The highly significant positive coefficient & : The foot term is needed. for IBU and negative coefficient for its square show : . To test Ho: Po=0 vs. Hy fo #0, I= that Rating increases with IBU, but the rate of increase howertenithes tac: 0.02 < f25.7 = 2.36, so do not reject Hy at the .0S TEER SMES abet level. The height term is not needed. i A et i b. The highest leverage is .88 for the fifth point. The a height for this student is given as 54 inches, too low stiaxwe|i — 1] y= 141, to be correct for this group of students. Also this Port 0 value differs by 8” from the wingspan, an extreme root 4 difference. 400 6 15 ¢. Point 1 has leverage .55, and this student has height 0 4 olp=|2 bp=| 5 75, foot length 13, both quite high. bd 4 Fi Point 2 has leverage .31, and this student has height 66 and foot length 8.5, at the low end. 0 1 Point 7 has leverage .31 and this student has both a_|| 2 ~_|-1 height and foot length at the high end. Gm ~¥i SSE = 4, MSE = 4 fe ced te d. Point 2 has the most extreme residual. This student 3 1 has a height of 66” and a wingspan of 56" differing a. (122, 13.2) by 10”, so the extremely low wingspan is probably r sf . wrong. e Fo ste set of Hit Bis 9 a He he, x“ e. For this data set it would make sense to eliminate = 5 < tors = 12.7, s 0 Doe a y sect ’ the .05 level. The x, term does not play a significant pointsc2eand:2/hecauserthey seemstobexwrong: aie However, outliers are not always mistakes and one needs to be careful about eliminating them, --- Trang 846 --- Chapter 13 833 105. a. 507% —_b..7122 3. Do not reject Hy because 7? = 1.57 < 7.815 = 7455 ¢. To test Ho: fy = Ovs. Hy: B; #0, wehavet = 3.93, | A with P-value .0013. 
Answers to Odd-Numbered Exercises (continued)

Chapter 12 (continued)

105. (partial; continued from the previous page) At the .01 level, conclude that there is a useful linear relationship. d. (1.056, 1.275) e. ŷ = 101.4, y − ŷ = −21.4
107. −36.18, (−64.43, −7.94)
109. No; if the relationship of y to x is linear, then the relationship of y² to x is quadratic.
111. a. Yes b. ŷ = 98.293, y − ŷ = [illegible] c. sₑ = .155 d. r² = .794 e. 95% CI for β₁: (.0613, .0901) f. The new observation is an outlier and has a major impact: the equation of the line changes from ŷ = 97.50 + .0757x to ŷ = 97.28 + .1603x, sₑ changes from .155 to .291, and r² changes from .794 to .616.
113. a. The paired t procedure gives t = 3.54 with a two-tailed P-value of .002, so at the .01 level we reject the hypothesis of equal means. b. The regression line is ŷ = 4.79 + .743x, and the test of H₀: β₁ = 0 vs. Hₐ: β₁ ≠ 0 gives t = 7.41 with a P-value < .000001, so there is a significant relationship. However, prediction is not perfect: with r² = .753, one variable accounts for only 75% of the variability in the other.
117. a. Linear. b. After fitting a line to the data, the residuals show a lot of curvature. c. Yes. The residuals from the logged model show some departure from linearity, but the fit is good in terms of R² = .988. We find α̂ = 411.98, β̂ = −.03333. d. (58.15, 104.18)
119. a. The plot suggests a quadratic model. b. With f = 25.08 and a P-value < .0001, there is a significant relationship at the .0001 level. c. CI: (3282.3, 3581.3); PI: (2966.6, 3897.0). Of course, the PI is wider, as in simple linear regression, because it needs to include the variability of a new observation in addition to the variability of the mean. d. CI: (3257.6, 3565.6); PI: (2945.0, 3878.2). These are slightly wider than the intervals in (c), which is appropriate, given where 25 falls relative to the mean and the vertex. e. With t = −6.73 and a two-tailed P-value < .0001, the quadratic term is significant at the .0001 level, so this term is definitely needed.
121. a. With f = 2.4 < 5.86, there is no significant relationship at the .05 level.

Chapter 13

1. a. Reject H₀. b. Do not reject H₀. c. Do not reject H₀. d. Do not reject H₀.
5. Because χ² = 6.61 with P-value .68, do not reject H₀.
7. Because χ² = 4.03 with P-value > .10, do not reject H₀.
9. a. [0, .223), [.223, .510), [.510, .916), [.916, 1.609), [1.609, ∞) b. Because χ² = 1.25 with P-value > .10, do not reject H₀.
11. a. (−∞, −.967), [−.967, −.431), [−.431, 0), [0, .431), [.431, .967), [.967, ∞) b. (−∞, .49806), [.49806, .49914), [.49914, .50), [.50, .50086), [.50086, .50194), [.50194, ∞) c. Because χ² = 5.53 with P-value > .10, do not reject H₀.
13. Using p̂ = .0843, χ² = 280.3 with P-value < .001, so reject the independence model.
15. The likelihood is proportional to a power of θ times a power of (1 − θ) [exponents illegible], from which θ̂ = .3883. This gives estimated probabilities .1400, .3555, .3385, .1433, .0227 and expected counts 21.00, 53.32, 50.78, 21.49, 3.41. Because 3.41 < 5, combine the last two categories, giving χ² = 1.62 with P-value > .10. Do not reject the binomial model.
17. λ̂ = 3.167, which gives χ² = 103.9 with P-value < .001, so reject the assumption of a Poisson model.
19. θ̂₁ = .4275, θ̂₂ = .2750, which gives χ² = 29.3 with P-value < .001, so reject the model.
21. Yes; the test gives no reason to reject the null hypothesis of a normal distribution.
23. The P-values are both .243.
25. Let pᵢ₁ = the probability that a fruit given treatment i matures and pᵢ₂ = the probability that a fruit given treatment i aborts, so H₀: p₁₁ = p₂₁ = p₃₁ = p₄₁ = p₅₁. We find χ² = 24.82 with P-value < .001, so reject the null hypothesis and conclude that maturation is affected by treatment.
27. If pᵢⱼ denotes the probability of a type j response when treatment i is applied, then H₀: p₁ⱼ = p₂ⱼ = p₃ⱼ = p₄ⱼ for j = 1, 2, 3, 4. With χ² = 27.66 > 23.587 = χ²_.005,9, reject H₀ at the .005 level. The treatment does affect the response.
29. With χ² = 64.65 > 13.277, reject H₀ at the .001 level. Political views are related to marijuana usage; in particular, liberals are more likely to be users.
31. a. Compute the expected counts by êᵢⱼ = [formula illegible]; for the χ² statistic, df = 20. b. No, especially when k is large compared to n. c. .9565
33. a. With χ² = .681 < 4.605 = χ²_.10,2, do not reject independence at the .10 level. b. With χ² = 6.81 > 4.605 = χ²_.10,2, reject independence at the .10 level.
35. a. With χ² = 6.45 and P-value .040, reject independence at the .05 level. b. With z = −2.29 and P-value .022, reject independence at the .05 level. c. Because the logistic regression takes into account the order in the professorial ranks, it should be more sensitive, so it should give a lower P-value. d. There are few female professors but many assistant professors, and the assistant professors will be the professors of the future.
37. With χ² = 13.005 > 9.210 = χ²_.01,2, reject the null hypothesis of no effect at the .01 level. Oil does make a difference (more parasites).
39. a. H₀: the population proportion of Late Game Leader Wins is the same for all four sports; Hₐ: the proportion of Late Game Leader Wins is not the same for all four sports. With χ² = 10.518 > 7.815 = χ²_.05,3, reject the null hypothesis at level .05. Sports differ in terms of coming from behind late in the game. b. Yes (baseball).
41. With χ² = 197.6 > 16.812 = χ²_.01,6, reject the null hypothesis at the .01 level. The aged are more likely to die in a chronic-care facility.
43. With χ² = 7.63 < 7.779 = χ²_.10,4, do not reject the hypothesis of independence at the .10 level. There is no evidence that age influences the need for item pricing.
45. a. No; χ² = 9.02 > 7.815 = χ²_.05,3. b. With χ² = .157 < 6.251 = χ²_.10,3, there is no reason to say the model does not fit.
47. a. H₀: p₁ = p₂ = ⋯ = p₁₀ = .10 vs. Hₐ: at least one pᵢ ≠ .10, with df = 9. b. H₀: pᵢⱼ = .01 for i and j = 0, 1, 2, …, 9 vs. Hₐ: at least one pᵢⱼ ≠ .01, with df = 99. c. No; there must be more observations than cells to do a valid chi-squared test. d. The results give no reason to reject randomness.

Chapter 14

1. For a two-tailed test of H₀: μ = 100 at level .05, we find that s₊ = 27, and because 14 < s₊ < 64, we do not reject H₀.
3. For a two-tailed test of H₀: μ = 7.39 at level .05, we find that s₊ = 18, and because s₊ does not satisfy 21 < s₊ < 84, we reject H₀.
5. We form the differences and perform a two-tailed test of H₀: μ_D = 0 at level .05. This gives s₊ = 72, and because s₊ does not satisfy 14 < s₊ < 64, we reject H₀ at the .05 level.
7. Because s₊ = 162.5 with P-value .044, reject H₀: μ = 75 in favor of Hₐ: μ > 75 at the .05 level.
9. With w = 38, reject H₀ at the .05 level because the rejection region is {w ≥ 36}.
11. Test H₀: μ₁ − μ₂ = 1 vs. Hₐ: μ₁ − μ₂ > 1. After subtracting 1 from the original process measurements, we get w = 65. Do not reject H₀ because w < 84.
13. Test H₀: μ₁ − μ₂ = 0 vs. Hₐ: μ₁ − μ₂ < 0. With a P-value of .002, we reject H₀ at the .01 level.
15. With w = 135 and z = 2.223, the approximate P-value is .026, so we would not reject the null hypothesis at the .01 level.
17. (11.15, 23.80)
19. (−.585, .025)
21. (.16, .87)
29. a. (.4736, .6669) b. (.4736, .6669)
33. For a two-tailed test at level .05, we find that s₊ = 24, and because 4 < s₊ < 32, we do not reject the hypothesis of equal means.
35. a. y = [illegible]; Bin(20, .5) b. c = 14; because y = 12, do not reject H₀.
37. With K = 20.12 > 13.277 = χ²_.01,4, reject the null hypothesis of equal means at the 1% level. Axial strength does seem to depend (as an increasing function) on plate length.
39. Because Fᵣ = 6.45 < 7.815 = χ²_.05,3, do not reject the null hypothesis of equal emotion means at the 5% level.
41. Because w′ = 26 < 27, do not reject the null hypothesis at the 5% level.
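Several of the Chapter 14 answers above report the Wilcoxon signed-rank statistic s₊, the sum of the ranks of the positive differences when the absolute differences are ranked. A minimal sketch of that computation, using average ranks for tied absolute values; the function name and the toy data are illustrative, not taken from the text:

```python
def signed_rank_s_plus(diffs):
    """Wilcoxon signed-rank statistic s+: rank the nonzero |d_i|
    (tied values share the average rank) and sum the ranks that
    belong to positive differences."""
    nonzero = [d for d in diffs if d != 0]  # zero differences are conventionally dropped
    abs_sorted = sorted(abs(d) for d in nonzero)

    def avg_rank(v):
        # 1-based rank of v among the sorted |d_i|; ties get the average position
        first = abs_sorted.index(v) + 1
        count = abs_sorted.count(v)
        return first + (count - 1) / 2

    return sum(avg_rank(abs(d)) for d in nonzero if d > 0)

# Example: differences 3, -1, 2, -4 have |d| ranks 3, 1, 2, 4,
# so s+ = 3 + 2 = 5.0
print(signed_rank_s_plus([3, -1, 2, -4]))
```

For an exercise like Chapter 14, Exercise 1 (two-tailed test of H₀: μ = 100 at level .05), s₊ would be computed from the differences xᵢ − 100 and compared with the tabled region (there, not rejecting when 14 < s₊ < 64).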
Index

A
Additive model: for ANOVA, 584–586, 589; for linear regression analysis, 624; for multiple regression analysis, 682
Alternative hypothesis, 426
Analysis of covariance, 699
Analysis of variance (ANOVA): additive model for, 584–586, 597; data transformation for, 579; definition of, 552; expected value in, 556, 573, 589, 597; fixed vs. random effects, 579; Friedman test, 785; fundamental identity of, 560, 564, 587, 599, 600, 635; interaction model for, 597–606; Kruskal–Wallis test, 784; Levene test, 562–563; linear regression and, 636, 639, 664, 708, 717; mean in, 553, 555, 557; mixed effects model for, 593, 603; multiple comparisons in, 564–571, 578, 589–590, 603; noncentrality parameter for, 574, 582; notation for, 555, 559, 598; power curves for, 574–575; randomized block experiments and, 590–593; regression identity of, 635–636; sample sizes in, 574–576; single-factor, 553–582; two-factor, 582–608; type I error in, 558–559; type II error in, 574
Ansari–Bradley test, 786
Association, causation and, 251, 671
Asymptotic normal distribution, 298, 371, 375, 377, 671
Asymptotic relative efficiency, 764, 769
Autocorrelation coefficient, 674
Average: definition of, 25; deviation, 33; pairwise, 379, 772–773, 775; rank, 785; weighted (see Weighted average)

B
Bar graph, 9, 19
Bartlett's test, 562
Bayesian approach to inference, 758, 776–782
Bayes' Theorem, 79–81, 777, 780
Bernoulli distribution, 104, 122, 134, 302, 373, 375, 377, 777
Bernoulli random variable: binomial random variable and, 134, 302; Cramér–Rao inequality for, 375; definition of, 98; expected value, 113; Fisher information on, 372–373, 377; Laplace's rule of succession and, 782; mean of, 113; mle for, 377; moment generating function for, 122, 123, 127; pmf of, 103; score function for, 372; in Wilcoxon's signed-rank statistic, 314
Beta distribution, 206–208, 777
Beta functions, incomplete, 207
Bias-corrected and accelerated interval, 415, 417, 538
Bimodal histogram, 18, 19
Binomial distribution: basics of, 128–135; Bayesian approach to, 777–780; multinomial distribution and, 240; normal distribution and, 189–190, 302; Poisson distribution and, 147–149
Binomial experiment, 130–131, 134, 147, 240, 302, 724
Binomial random variable: Bernoulli random variables and, 134, 302; cdf for, 132; definition of, 130; distribution of, 132; expected value of, 134, 135; in hypergeometric experiment, 141; in hypothesis testing, 428–431, 450–454; mean of, 134–135; moment generating function for, 135; multinomial distribution of, 240; in negative binomial experiment, 142; normal approximation of, 189–190, 302; pmf for, 132; and Poisson distribution, 147–149; standard deviation of, 134; unbiased estimation, 335, 337; variance of, 134, 135
Binomial theorem, 135, 142–144
Bioequivalence tests, 551
Birth process, pure, 378
Bivariate data, 3, 617, 623, 632, 691, 721
Bivariate normal distribution, 258–260, 310, 318, 477, 667–671
Bonferroni confidence intervals, 424, 657–659, 689
Bootstrap procedure: for confidence intervals, 411–418, 532–534; for paired data, 538–540; for point estimates, 345–346
Bound on the error of estimation, 388
Box–Muller transformation, 271
Boxplot, 37–41; comparative, 40–41
Branching process, 281

C
Categorical data: classification of, 30; graphs for, 19; in multiple regression analysis, 696–699; Pareto diagram, 24; sample proportion in, 30
Cauchy distribution: mean of, 322, 342; median of, 342; minimal sufficiency for, 367; reciprocals and, 231; standard normal distribution and, 271; uniform distribution and, 226; variance of sample mean for, 349
Causation, association and, 251, 671
cdf. See Cumulative distribution function
Cell counts/frequencies, 725–727, 729–730, 732–740, 744–750
Cell probabilities, 729, 732, 737, 739
Censored experiments, 32, 343–344
Census, 2
Central Limit Theorem: basics of, 298–303; Law of Large Numbers and, 305; proof of, 329–330; sample proportion distribution and, 190; Wilcoxon rank-sum test and, 770; Wilcoxon signed-rank test and, 765
Central t distribution, 320–323, 423
Chebyshev's inequality, 120, 138, 156, 194, 303, 345
Chi-squared distribution: censored experiment and, 421; in confidence intervals, 389–390, 410; critical values for, 317, 389, 409–410, 477, 725, 727, 737–738; definition of, 200; degrees of freedom for, 200, 315; exponential distribution and, 317; F distribution and, 323–325; gamma distribution and, 200, 315; in goodness-of-fit tests, 720–751; Rayleigh distribution and, 226; standard normal distribution and, 224, 316–317, 325; of sum of squares, 317, 557; t distribution and, 320, 325; in transformation, 224; Weibull distribution and, 231
Chi-squared random variable: in ANOVA, 557; cdf for, 316; expected value of, 315; in hypothesis testing, 482; in likelihood ratio tests, 477, 480; mean of, 315; moment generating function of, 315; pdf of, 200, 315; standard normal random variables and, 224, 316–317, 325; in Tukey's procedure, 565; variance of, 315
Chi-squared test: degrees of freedom in, 726, 734, 736, 745, 748; for goodness of fit, 724–730; for homogeneity, 745–747; for independence, 747–749; P-value for, 727–728; for specified distribution, 729–730; z test and, 752
Class intervals, 15–17, 278, 293, 738–739
Coefficient of determination: definition of, 632–634, 686; F ratio and, 687; in multiple regression, 686; sample correlation coefficient and, 664
Coefficient of skewness, 121, 128, 178
Coefficient of variation, 45, 229, 357
Cohort, 281
Combination, 70–72
Comparative boxplot, 40–41, 502, 503, 554
Complement of an event, 53, 60
Compound event, 52, 62
Concentration parameter, 779
Conceptual population, 6, 113, 287, 487
Conditional density, 253
Conditional distribution, 253–263, 361, 369, 667, 735, 758, 777
Conditional mean, 255–262
Conditional probability, 74–81, 84–85, 200, 253–255, 362, 365–366
Conditional probability density function, 253
Conditional probability mass function, 253, 255
Conditional variance, 255–262, 367
Confidence bound, 398–399, 403, 440, 494, 500, 513
Confidence interval: adjustment of, 400; in ANOVA, 565, 570–571, 578, 589, 591, 603; based on t distribution, 401–404, 499–501, 505, 513–515, 570–571, 643–646; Bonferroni, 424, 657–659; bootstrap procedure for, 411–418, 532–534, 538, 540; for a contrast, 571; for a correlation coefficient, 671; vs. credibility interval, 777–781; definition of, 382; derivation of, 389; for difference of means, 493–495, 500–501, 505, 513–515, 532–534, 539–540, 565–569, 578, 589, 591, 603; for difference of proportions, 524; distribution-free, 771–776; for exponential distribution parameter, 389; in linear regression, 643–646, 656–658; for mean, 383–387, 392, 403–404, 411–415; for median, 415–417; in multiple regression, 689, 712; one-sided, 398, 500, 513; for paired data, 513–515, 539; for ratio of variances, 530–531, 537; sample size and, 388; Scheffé method for, 610; sign, 784; for slope coefficient, 643; for standard deviation, 409–410; for variance, 409–410; width of, 385, 387–388, 394, 397, 404, 417, 495; Wilcoxon rank-sum, 774–776; Wilcoxon signed-rank, 772–774
Confidence level: definition of, 382, 385–388; simultaneous, 565–570, 578, 589, 591, 658; in Tukey's procedure, 565–570, 578, 589, 591
Confidence set, 772
Consistency, 304, 357, 375–377
Consistent estimator, 304, 357, 375–377
Contingency tables, two-way, 744–751
Continuity correction, 189–190
Continuous random variable(s): conditional pdf for, 254, 789; cumulative distribution function of, 163–168; definition of, 99, 159; vs. discrete random variable, 162; expected value of, 171–172; joint pdf of (see Joint probability density functions); marginal pdf of, 236–238; mean of, 171, 172; moment generating function of, 175–177; pdf of (see Probability density function); percentiles of, 166–168; standard deviation of, 173–175; transformation of, 220–225, 265–270; variance of, 173–175
Contrast of means, 570–571
Convenience samples, 7
Convergence: in distribution, 153, 329; in mean square, 303; in probability, 304
Convex function, 231
Correction factor, 141, 560, 568, 577, 582
Correction for the mean, 560
Correlation coefficient: autocorrelation coefficient and, 674; in bivariate normal distribution, 258–260, 310, 667; confidence interval for, 671; covariance and, 249; Cramér–Rao inequality and, 374–375; definition of, 249, 663; estimator for, 666; Fisher transformation, 669; for independent random variables, 250; in linear regression, 664, 667, 669; measurement error and, 328; paired data and, 515–516; sample (see Sample correlation coefficient)
Covariance: correlation coefficient and, 249; Cramér–Rao inequality and, 374–375; definition of, 247; of independent random variables, 250–251; of linear functions, 249; matrix format for, 711
Covariate, 699
Cramér–Rao inequality, 374–375
Credibility interval, 777–782
Critical values: chi-squared, 317; F, 324; standard normal (z), 184; Studentized range, 565; t, 322, 409; tolerance, 406
Cumulative distribution function: for a continuous random variable, 163–168; for a discrete random variable, 104–108; inverse function of, 223–224; joint, 282; of order statistics, 272–273; pdf and, 163; percentiles and, 167; pmf and, 105–108; transformation and, 220–225
Cumulative frequency, 24
Cumulative relative frequency, 24

D
Data: bivariate, 3, 617, 632, 691; categorical (see Categorical data); censoring of, 32, 343–344; characteristics of, 3; collection of, 7–8; definition of, 2; multivariate, 3, 220; qualitative, 19; univariate, 3
Deductive reasoning, 6
Degrees of freedom (df): in ANOVA, 557–559, 587, 599; for chi-squared distribution, 200, 315–320; in chi-squared tests, 726, 734, 737, 746; for F distribution, 323; in regression, 631, 685; sample variance and, 35; for Studentized range distribution, 565; for t distribution, 320, 390, 500, 504; type II error and, 574
Delta method, 174
De Morgan's laws, 56
Density: conditional, 253–257; curve, 160; function (pdf), 160; joint, 235; marginal, 236; scale, 17
Dependence, 84–88, 238–242, 250, 257, 747
Dependent events, 84–88
Dependent variable, 614
Descriptive statistics, 1–41
Deviation: definition of, 33; minimize absolute deviations principle, 33, 679
Dichotomous trials, 128
Difference statistic, 347
Discrete random variable(s): conditional pmf for, 253; cumulative distribution function of, 104–108; definition of, 99; expected value of, 112; joint pmf of (see Joint probability mass function); marginal pmf of, 234; mean of, 112; moment generating function of, 122; pmf of (see Probability mass function); standard deviation of, 117; transformation of, 225; variance of, 117
Disjoint events, 54
Dotplots, 12
Dummy variable, 696
Dunnett's method, 571

E
Efficiency, asymptotic relative, 764, 769
Empirical rule, 187
Erlang distribution, 202, 229
Error(s): estimated standard, 344, 646, 713; estimation, 334; family vs. individual, 570; measurement, 179, 211, 337, 477; prediction, 405, 658, 683; rounding, 36; standard, 344, 713; type I, 429; type II, 429
Estimated regression function, 676, 685
Estimated regression line, 625
Estimated standard error, 344, 646, 713
Estimator, 332. See also Point estimate/estimator
Event(s): complement of, 53; compound, 52, 62; definition of, 52; dependent, 84–88; disjoint, 54; exhaustive, 79; independent, 84–88; indicator function for, 364; intersection of, 53; mutually exclusive, 54; mutually independent, 87; simple, 52; union of, 53; Venn diagrams for, 55
Expected mean squares: in ANOVA, 573, 577, 600, 614; F test and, 589, 593, 600, 604; in mixed effects model, 593, 604; in random effects model, 580, 593–594; in regression, 681
Expected value: conditional, 255; of a continuous random variable, 171; covariance and, 247; of a discrete random variable, 112; of a function, 115, 245–246; heavy-tailed distribution and, 114–115, 120; of jointly distributed random variables, 245; Law of Large Numbers and, 303; of a linear combination, 306; of mean squares (see Expected mean squares); moment generating function and, 122, 175; moments and, 121; in order statistics, 272–273, 277; of sample mean, 277, 296; of sample standard deviation, 340, 379; of sample total, 296; of sample variance, 339
Experiment: binomial, 128, 240, 724; definition of, 52; double-blind, 523; observational studies in, 488; paired data, 515; paired vs. independent samples, 520–521; randomized block, 590–593; randomized controlled, 489; repeated measures designs in, 591; with replacement, 69, 141, 287; retrospective, 488; simulation, 291–294
Explanatory variable, 614
Exponential distribution: censored experiments and, 343; chi-squared distribution and, 317; confidence interval for parameter, 389; double, 477; estimators for parameter, 343, 351; goodness-of-fit test for, 739; mixed, 229; in pure birth process, 378; shifted, 360, 479; skew in, 277; standard gamma distribution and, 198; Weibull distribution and, 203
Exponential random variable(s): Box–Muller transformation and, 271; cdf of, 199; expected value of, 198; independence of, 242; mean of, 198; in order statistics, 272, 275; pdf of, 198; transformation of, 220, 267, 270; variance of, 198
Exponential regression model, 721
Exponential smoothing, 48
Extreme outliers, 39–41
Extreme value distribution, 217

F
Factorial notation, 69
Factorization theorem, 363
Factors, 552
Failure rate function, 230
Family of probability distributions, 104, 213
F distribution: chi-squared distribution and, 323; definition of, 323; expected value of, 325; for model utility test, 649, 687, 709; noncentral, 574–575; pdf of, 324
Finite population correction factor, 141
Fisher information, 371
Fisher–Irwin test, 525
Fisher transformation, 669
Fitted values, 588, 629, 674
Fixed effects model, 579, 592, 597
Fourth spread, 37, 41, 285
Frequency, 13
Frequency distribution, 13
Friedman's test, 785
F test: in ANOVA, 558, 580, 587, 593, 600; Bartlett's test and, 562; coefficient of determination and, 687; critical values for, 324, 528, 558; for equality of variances, 527, 537; expected mean squares and, 573, 589, 593, 600, 604; F distribution and, 323, 527, 558; Levene test and, 562; power curves and, 574–575; P-value for, 529, 537, 559; in regression, 687, 709; sample sizes for, 574; single-factor, 558, 580; vs. t test, 576; two-factor, 587, 593, 600; type I error in, 574
Full quadratic model, 695

G
Galton–Watson branching process, 281
Gamma distribution: chi-squared distribution and, 200; definition of, 195; density function for, 195; Erlang distribution and, 201; estimators of parameters, 351, 355, 358; exponential distribution and, 198–200; Poisson distribution and, 783; standard, 195; Weibull distribution and, 203
Gamma function: incomplete, 196, 217; properties of, 195
Gamma random variables, 195
Geometric distribution, 143, 225
Geometric random variables, 143
Goodness-of-fit test: for composite hypotheses, 732, 741; definition of, 723; for homogeneity, 745–747; for independence, 747–749; simple, 724–730
Grand mean, 555, 584

H
Half-normal plot, 220
Histogram: bimodal, 18; class intervals in, 15–17; construction of, 12–20; density, 17–18; multimodal, 19; Pareto diagram, 24; for pmf, 103; symmetric, 19; unimodal, 18
Hodges–Lehmann estimator, 379
Homogeneity, 745–747
Hyperexponential distribution, 229
Hypergeometric distribution, 138–141; and binomial distribution, 141
Hypergeometric random variable, 138–141
Hypothesis: alternative, 426; composite, 732–741, 744; definition of, 426; errors in testing of, 428–434; notation for, 426; null, 426; research, 427; simple, 469
Hypothetical population, 6

I
Inclusive inequalities, 136
Incomplete beta function, 207
Incomplete gamma function, 196–197, 217
Independence: chi-squared test for, 749; conditional distribution and, 257–258; correlation coefficient and, 250; covariance and, 250, 252; of events, 84–88; of jointly distributed random variables, 238–239, 241; in linear combinations, 306–307; mutual, 87; pairwise, 90, 94; in simple random sample, 287
Independent variable, 614
Indicator variables, 696
Inductive reasoning, 6
Inferential statistics, 5–6
Inflection point, 180
Intensity function, 156
Interaction, 597–602, 603–606, 693–698
Intercept, 214, 617, 627
Intersection of events: definition of, 53; multiplication rule for probability of, 77–79
Invariance principle, 357
Inverse matrix, 712

J
Jacobian, 267
Jensen's inequality, 231
Joint cumulative distribution function, 282
Jointly distributed random variables: bivariate normal distribution of, 258–260; conditional distribution of, 253–263; correlation coefficients for, 249; covariance between, 248; expected value of function of, 245–246; independence of, 238–239; linear combination of, 306–312; in order statistics, 274–276; pdf of (see Joint probability density functions); pmf of (see Joint probability mass functions); transformation of, 265–270; variance of function of, 252, 307
Joint marginal density function, 245
Joint probability mass function, 233–234
Joint probability table, 233

K
K-out-of-n system, 153
Kruskal–Wallis test, 784–785
K-tuple, 68–69

L
Lag 1 autocorrelation coefficient, 674
Laplace distribution, 478
Laplace's rule of succession, 782
Largest extreme value distribution, 228
Law of Large Numbers, 303–304, 322–323, 376
Law of total probability, 79
Least squares estimates, 626, 645, 679, 683–684
Level α test, 433
Level of a factor, 552, 583, 593
Levene test, 562–563
Leverages, 714–715
Likelihood function, 354, 470, 475
Likelihood ratio: chi-squared statistic for, 477; definition of, 470; mle and, 475; model utility test and, 721; in Neyman–Pearson theorem, 470; significance level and, 470, 471; sufficiency and, 380; tests, 475
Limiting relative frequency, 58, 59
Linear combination: distribution of, 309; expected value of, 306; independence in, 306; variance of, 307
Linear probabilistic model, 617, 627
Linear regression: additive model for, 614, 682, 705; ANOVA in, 649, 699, 768; confidence intervals in, 643, 656; correlation coefficient in, 662–671; definition of, 617; degrees of freedom in, 631, 685, 708; least squares estimates in, 625–636, 679; likelihood ratio test in, 721; mles in, 631, 639; model utility test in, 648, 687, 708; parameters in, 617, 624–636, 682; percentage of explained variation in, 633–634; prediction interval in, 654, 658, 689; residuals in, 629, 674, 685; summary statistics in, 627; sums of squares in, 631–636, 686; t ratio in, 648, 669, 690
Line graph, 102–103
Location parameter, 217, 367
Logistic distribution, 279
Logistic regression model: contingency tables for, 749–751; definition of, 620–622; fit of, 650–651; mles in, 650; in multiple regression analysis, 699
Logit function, 621, 650
Lognormal distribution, 205–206, 233
Lognormal random variables, 205–206

M
Mann–Whitney test, 766–770
Marginal distribution, 234, 236, 253
Marginal probability density functions, 236
Marginal probability mass functions, 234
Matrices in regression analysis, 705–715
Maximum likelihood estimator: for Bernoulli parameter, 377; for binomial parameter, 377; Cramér–Rao inequality and, 375; data sufficiency for, 369; Fisher information and, 371, 375; for geometric distribution parameter, 742; in goodness-of-fit testing, 733; in homogeneity test, 745; in independence test, 748; in likelihood ratio tests, 475; in linear regression, 631, 639; in logistic regression, 650; sample size and, 357; score function and, 377
McNemar's test, 526, 550
Mean: of Cauchy distribution, 322, 342, 761; conditional, 255–257; correction for the, 560; deviations from the, 33, 206, 563, 631, 739; of a function, 115, 245–246; vs. median, 28; moments about, 121; outliers and, 27, 28; population, 26; regression to the, 260, 636; sample, 25; of sample total, 296. See also Average
Mean square: expected, 573, 589, 593, 594, 600, 604; lack of fit, 681; pure error, 681
Mean square error: definition of, 335; of an estimator, 335; MVUE and, 341; sample size and, 337
Measurement error, 337
Median: in boxplot, 37–38; of a distribution, 27, 28; as estimator, 378, 478; vs. mean, 28; outliers and, 26, 28, 29; population, 28; sample, 27, 271; statistic, 378
Mendel's law of inheritance, 726–728
M-estimator, 359, 381
Midfourth, 46
Midrange, 333
Mild outlier, 39, 393
Minimal sufficient statistic, 366–367, 369
Minimize absolute deviations principle, 477, 679
Minimum variance unbiased estimator, 341–343, 358, 369, 375
Mixed effects model, 593–603
Mixed exponential distribution, 229
mle. See Maximum likelihood estimator
Mode: of a continuous distribution, 228, 229; of a data set, 46; of a discrete distribution, 156
Model utility test, 647–649
Moment generating function: of a Bernoulli rv, 122, 127; of a binomial rv, 135; of a chi-squared rv, 315; CLT and, 329–330; of a continuous rv, 175–177; definition of, 122, 175; of a discrete rv, 122–127; of an exponential rv, 221; of a gamma rv, 195; of a linear combination, 311; and moments, 124, 176; of a negative binomial rv, 143; of a normal rv, 191; of a Poisson rv, 149; of a sample mean, 329–330; uniqueness property of, 123, 176
Moments: definition of, 121; method of, 350–352, 358, 740; and moment generating function, 124, 176
Monotonic, 221, 353
Multimodal histogram, 19
Multinomial distribution, 240, 725
Multinomial experiment, 240, 724
Multiple regression: additive model, 682, 705; categorical variables in, 696–699; coefficient of multiple determination, 686, 709; confidence intervals in, 712; covariance matrices in, 711–713; degrees of freedom in, 685, 696, 708; diagnostic plots, 691; fitted values in, 685; F ratio in, 687, 709; interaction in models for, 693–698; leverages in, 714–715; logistic regression model, 699; in matrix/vector format, 705–715; model utility test in, 687, 708–709; normal equations in, 683, 685, 705–708; parameters for, 682; and polynomial regression, 691–693; prediction interval in, 689; principle of least squares in, 683–706; residuals in, 685, 688, 691, 708, 713; squared multiple correlation in, 686, 709; sum of squares in, 686, 708–710; t ratios in, 690, 712
Multiplication rule, 77–88
Multiplicative exponential regression model, 721
Multiplicative power regression model, 721
Multivariate data, 3, 20
Multivariate hypergeometric distribution, 244
Mutually exclusive events, 54, 79
MVUE. See Minimum variance unbiased estimator

N
Negative binomial distribution, 141–144; definition of, 141; estimation of parameters, 352, 738
Negative binomial random variable, 141
Newton's binomial theorem, 143
Neyman factorization theorem, 363
Neyman–Pearson theorem, 470–475
Noncentrality parameter, 423, 574, 582
Noncentral t distribution, 423
Nonhomogeneous Poisson process, 156
Nonstandard normal distribution, 185–188
Normal distribution: asymptotic, 298, 371, 375, 377; binomial distribution and, 189–190, 302; bivariate, 258–260, 310, 318, 477, 667–671; confidence interval for mean of, 383–388, 392, 398, 403; continuity correction and, 189–190; density curves for, 180; and discrete random variables, 188–190; goodness-of-fit test for, 730, 740; of linear combination, 309; lognormal distribution and, 205, 303; nonstandard, 185–188; pdf for, 179; percentiles for, 182–188, 210; probability plot, 210, 740; Ryan–Joiner test for, 741; standard, 181; t distribution and, 320–322, 325, 402; z table, 181–183
Normal equations, 626, 683, 705
Normal probability plot, 210, 740
Normal random variable, 181
Null distribution, 443–444, 760, 780
Null hypothesis, 426
Null set, 54, 57
Null value, 427, 436

O
Observational study, 488
Odds ratio, 621–622, 750–751
One-sided confidence interval, 398–399
Operating characteristic curve, 137
Ordered categories, 749–751
Ordered pairs, 66–67
Order statistics, 271–278, 338, 365–367, 478; sufficiency and, 365–367
Outliers: in a boxplot, 37–41; definition of, 11; extreme, 39–41; leverage and, 714; mean and, 29, 415–417; median and, 29, 37, 415, 417; mild, 39; in regression analysis, 679, 688

P
Paired data: in before/after experiments, 511, 526; bootstrap procedure for, 538–540; confidence interval for, 513–515; definition of, 509; vs. independent samples, 515; in McNemar's test, 550; permutation test for, 540–541; t test for, 511–513; in Wilcoxon signed-rank test, 762–763
Pairwise average, 772, 773, 775
Pairwise independence, 94
Parallel connection, 55, 88, 89, 90, 272, 273
Parameter(s): Bayesian approach to, 776–782; concentration, 779; confidence interval for, 389, 394; estimator for a, 332–346; Fisher information on, 371–377; goodness-of-fit tests for, 728–729, 732–736; hypothesis testing for, 427, 450; location, 217, 367; maximum likelihood estimate of, 354–359, 369; moment estimators for, 350–352; MVUE of, 341–343, 358, 369, 375; noncentrality, 574; null value of, 427; of a probability distribution, 103–104; in regression, 617–618, 622, 624–636, 658, 666, 682; scale, 195, 203, 217–218, 365; shape, 217–218, 365; sufficient estimation of, 361–369
Pareto diagram, 24
Pareto distribution, 170, 178, 226
pdf. See Probability density function
Percentiles: for continuous random variables, 166–168; in hypothesis testing, 458, 740; in probability plots, 211–216, 740; sample, 29, 210–211, 216; of standard normal distribution, 182–184, 211–216
Permutation, 68, 69
Permutation test, 535–541
PERT analysis, 207
Plot: probability, 210–218, 369, 499, 668, 676, 688, 691, 740; scatter, 615–617, 632–633, 663, 667
pmf. See Probability mass function
Point estimate/estimator: biased, 337–342; bias of, 335–340; bootstrap techniques for, 345–346, 411–418; bound on the error of estimation of, 388; censoring and, 343–344; consistency, 304, 357, 375–377; for correlation coefficient, 665–666; and Cramér–Rao inequality, 373–377; definition of, 26, 287, 332; efficiency of, 375; Fisher information on, 371–377; least squares, 626–631; maximum likelihood (mle), 352–359; of a mean, 26, 287, 332–333, 366; mean squared error of, 335; moments method, 350–352, 358; MVUE, 340–342, 358, 369, 375; notation for, 332, 334; of a standard deviation, 286, 340; standard error of, 344–346; of a variance, 334, 339
Point prediction, 405, 628, 684
Poisson distribution: Erlang distribution and, 202; expected value, 149, 152; exponential distribution and, 199; gamma distribution and, 783; goodness-of-fit tests for, 736–738; in hypothesis testing, 470–472, 474, 482, 550; mode of, 156; moment generating function for, 149; nonhomogeneous, 156; parameter of, 149; and Poisson process, 149–151, 199; variance, 149, 152
Poisson process, 149–151, 194
Polynomial regression model, 691–693
Pooled t procedures: and ANOVA, 477, 504–505, 576; vs. Wilcoxon rank-sum procedures, 769
Posterior probability, 79–81, 777, 781
Power curves, 574–575
Power function of a test, 473–475, 574–575
Power model for regression, 721
Power of a test: Neyman–Pearson theorem and, 473–475; type II error and, 446–447, 472–476, 505, 593, 749
Precision, 315, 344, 371, 382, 387–388, 397, 405, 417, 514, 516, 592, 781
Prediction interval: Bonferroni, 659; vs. confidence interval, 406, 658–659, 690; in linear regression, 654, 658–659; in multiple regression, 690; for normal distribution, 404–406
Prediction level, 405, 659, 689
Predictor variable, 614, 682, 693–696
Principle of least squares, 625–636, 674, 679, 683
Prior probability, 79, 758
Probability: conditional, 74–81, 84–85, 200, 253–255, 362, 365–366; continuous random variables and, 99, 158–225, 235–242, 253–255; counting techniques for, 66–72; definition of, 50; density function (see Probability density function); of equally likely outcomes, 62–63; histogram, 103, 159–160, 188–190, 289–290; inferential statistics and, 6, 9, 284; Law of Large Numbers and, 303–304, 322–323; law of total, 79; mass function (see Probability mass function); of null event, 57; plots, 210–218, 369, 499, 668, 676, 688, 691, 740; posterior/prior, 79–81, 758, 777, 781; properties of, 56–63; relative frequency and, 58–59, 291–292; sample space and, 51–55, 56–57, 63, 66, 95; and Venn diagrams, 54–55, 62, 75–76
Probability density function (pdf): conditional, 254–255, 777; definition of, 161; joint, 232–278, 310, 354, 363–365, 368, 470, 475; marginal, 236–238, 268–269; vs. pmf, 162
Probability distribution: Bernoulli, 98, 102–104, 113, 122–123, 127, 134, 302, 304, 308, 360, 373, 375, 377, 777; beta, 206–208; binomial, 128–135, 147–149, 189–190, 302, 352–353, 395–396, 428–431; bivariate normal, 258–260, 477, 669; Cauchy, 226, 231, 271, 342; chi-squared, 200, 224, 315–320; conditional, 253–263; continuous, 99, 158–231; discrete, 96–157; exponential, 198–200, 203, 343; extreme value, 217–218; F, 323–325; family, 104, 213, 216–218, 558; gamma, 194–200, 217–218; geometric, 106–107, 114, 143, 225; hyperexponential, 229; hypergeometric, 138–141, 307–308; joint, 232–283, 665–667, 732; Laplace, 315, 477–478; of a linear combination, 259, 306–312; logistic, 279; lognormal, 205–206, 303; multinomial, 240, 724; negative binomial, 141–144; normal, 179–191, 205, 210–216, 258–260, 297–303, 309, 730; parameter of a, 103–104; Pareto, 170, 178, 226; Poisson, 146–151, 199; Rayleigh, 169, 226, 349, 360; of a sample mean, 285–294, 296–304; standard normal, 181–184; of a statistic, 285–304; Studentized range, 565; symmetric, 19, 28, 121, 168, 174, 180; t, 320–323, 325, 401–403, 443, 462, 511; uniform, 161–162, 164; Weibull, 202–205
Probability mass function: conditional, 253–254; definition of, 101–109; joint, 233–236; marginal, 234
Product rules, 66–68
Proportion: population, 30, 395, 450–454, 519–525; sample, 30, 190, 302, 338, 519, 748; trimming, 29, 333, 340, 342–343
P-value: for chi-squared test, 727–728; definition of, 456; for F tests, 529–530; for t tests, 462–465; type I error and, 457–459; for z tests, 459–461

Q
Quadratic regression model, 691–693
Qualitative data, 19
Quartiles, 28–29

R
Random effects model, 579–580, 593–594, 603–606
Random interval, 384–386
Randomized block experiment, 590–593
Randomized controlled experiment, 489
Randomized response technique, 349
Random variable: continuous, 158–231; definition of, 97; discrete, 96–157; jointly distributed, 232, 233–283; standardizing of, 185; types of, 99
Range: definition of, 33; in order statistics, 271–274; population, 394; sample, 33, 271–274; Studentized, 565–566
Rank average, 785
Ratio statistic, 478
Rayleigh distribution, 226, 349, 360
Regression: coefficient, 640–651, 682–685, 705–707, 711–712; effect, 260, 636; function, 614, 676, 682, 685, 693, 696; line, 618–620, 624–636, 640–647, 674–677; linear, 617–620, 624–636, 640–649, 654–659; logistic, 620–622, 650–651; matrices for, 705–715; to the mean, 260; multiple, 682–689; multiplicative exponential model, 721; multiplicative power model for, 721; plots for, 676–678; polynomial, 691–693; quadratic, 691–693; through the origin, 381, 421
Rejection method, 281
Rejection region: cutoff value for, 428–433; definition of, 428; lower-tailed, 431, 437–438; in Neyman–Pearson theorem, 470–474; two-tailed, 438; type I error and, 429; in union–intersection test, 551; upper-tailed, 429, 437–438
Relative frequency, 13–19, 30, 58–59
Repeated measures designs, 591
Replications, 58, 291–293, 386
Research hypothesis, 427
Residual plots, 588, 602, 676–678
Residuals: in ANOVA, 588, 602; definition of, 556; leverages and, 714–715; in linear regression, 629, 674–678; in multiple regression, 685, 688; standard error, 674; standardizing of, 675, 691; variance of, 675, 713
Response variable, 8, 614, 620
Retrospective study, 488
Ryan–Joiner test, 741

S
Sample: convenience, 7; definition of, 2; outliers in, 38–40; simple random, 7, 287; size of (see Sample size); stratified, 7
Sample coefficient of variation, 45
Sample correlation coefficient: in linear regression, 662–664,
Sample size: Poisson distribution and, 147; for population proportion, 396–398; power and, 433, 440–441, 445, 452–454, 489, 505, 523; probability plots and, 216; in simple random sample, 287; t distribution and, 445, 505; type I error and, 433, 440–441, 445, 489, 523; type II error and, 433, 440–441, 445, 452–454, 489, 505, 523; variance and, 303; z test and, 440–441, 452–453
Sample space: definition of, 51; probability of, 56–63; Venn diagrams for, 54–55
Sample standard deviation: in bootstrap procedure, 413, 537; confidence bounds and, 398; confidence intervals and, 392, 403; definition of, 33
Series connection, 272–273
Set theory, 53–55
Shape parameters, 217–218, 366
Siegel–Tukey test, 786
Significance: practical, 468–469, 727; statistical, 469, 489, 727
Significance level: definition of, 433; joint distribution and, 479; likelihood ratio and, 475; observed, 458
Sign interval, 784
Sign test, 784
Simple events, 52, 62, 66
Simple hypothesis, 469, 732
Simple random sample: definition of, 7, 287; independence in, 287; sample size in, 287
Simulation experiment, 288, 291–294, 417, 463
Skewed data
coefficient of skewness, 121, 178 669, 719 as estimator, 340, 379 definition of, 19 vs. population correlation expected value of, 340, 379 in histograms, 19, 413 coefficient, 666, 669-671 independence of, 318-319 mean vs. median in, 28 properties of, 664-665 mle and, 357 measure of, 121 strength of relationship, 665 population standard deviation probability plot of, 216, 411-413 Sample mean and, 286, 340, 379 Slope, 617-618, 622, 626, 642, 644 definition of, 25 sample mean and, 34, 318-319 Slope coefficient population mean and, 296-304 sampling distribution of, confidence interval for, 644 sampling distribution of, 296-304 288-289, 320, 340, 379, 482 definition of, 617-618 Sample median variance of, 482 hypothesis tests for, 648 definition of, 27 Sample total, 296, 306, 560 least squares estimate of, 626 in order statistics, 271-272 Sample variance in logistic regression model, 622 vs. population median, 417 in ANOVA, 555-556 Standard deviation Sample moments, 350-351 calculation of, 35 normal distribution and, 179 Sample percentiles, 210-211 definition of, 33 of point estimator, 344-346 Sample proportion, 30, 335-336, distribution of, 287-289, 320 population, 117, 173 338, 391-400, 450-455, expected value of, 339 of a random variable, 117, 173 519-526 population variance and, 35, 317, sample, 33 Sample size 322-323, 339 z table and, 186 in ANOVA, 574-576 Sampling distribution Standard error, 344-346 asymptotic relative efficiency bootstrap procedure and, 413, Standardized variable, 185 and, 764, 769 532, 758 Standard normal distribution bound on the error of estimation definition of, 284, 287 Cauchy distribution and, 271 and, 388 derivation of, 288-291 chi-squared distribution and, Central Limit Theorem and, 302 of intercept coefficient, 719 316, 325 confidence intervals and, of mean, 288-290, 297-299 critical values of, 184 387-388, 394, 396, 403, 495 permutation tests and, 758 definition of, 181 definition of, 9 simulation experiments for, density curve 
properties for, in finite population correction 291-294 181-184 factor, 140 of slope coefficient, 640-649 F distribution and, 323, 325 for F test, 574-576 Scale parameter, 195, 203-204, percentiles of, 182-184 for Levene test, 562-563 217-218, 365 1 distribution and, 320, 325 mle and, 357-358, 375 Scatter plot, 615-617 Standard normal random variable, noncentrality parameter and, Scheffé method, 610 181, 325 574-576, 582 Score function, 373-377 Statistic, 286 --- Trang 857 --- 844 index Statistical hypothesis, 426 Trimming proportion, 29, 343 U Stem-and-leaf display, 10-12 True regression function, 615 Unbiased estimator, 337-344 Step function, 106 True regression line, 618-620, 625, minimum variance, 340-343 Stratified samples, 7 640-641 Uncorrelated random variables, Studentized range distribution, 565 test 251, 307 Student r distribution, 320-323 vs. F test, 576 Uniform distribution Summary statistics, 627, 630, heavy tails and, 764, 769, 774 beta distribution and, 778 645, 671 likelihood ratio and, 475, 476 Box-Muller transformation Sum of squares in linear regression, 648 and, 271 error, 557, 631, 708 in multiple regression, definition of, 161 interaction, 599 688-690, 712 discrete, 120 lack of fit, 681 one-sample, 443-445, 461, transformation and, 223-224 pure error, 681 474-476, 511, 769 Uniformly most powerful test, regression, 636, 699, 708 paired, 511 4T3-AT4 total, 559-560, 587, 591, 645, pooled, 504-505, 576 Unimodal histogram, 18-19 686 P-value for, 461-462 Union-intersection test, 551 treatment, 557-560 two-sample, 499-504, 576,515 Union of events, 53 Symmetric distribution, 19, type I error and, 443-445, 501 Univariate data, 3 121, 168 type Il error and, 445~447, 505 vs. Wilcoxon rank-sum v T test, 769 Variable(s) Taylor series, 174, 579 vs. 
Wilcoxon signed-rank test, covariate, 699 t confidence interval 763-764 in a data set, 10 heavy tails and, 764, 769,774 Tukey's procedure, 565-570, 578, definition of, 3 in linear regression, 643, 656 589-590, 603 dependent, 614 in multiple regression, 689,712 Two one-sided tests, 551 dummy, 696-699 one-sample, 403-404 Type I error explanatory, 614 paired, 513-515 definition of, 429 independent, 614 pooled, 505 Neyman-Pearson theorem indicator, 696-699 two-sample, 500, 515 and, 470 predictor, 614 1 distribution power function of the test random, 96-231 central, 423 and, 473 response, 614 chi-squared distribution and, P-value and, 457-458 Variance 320, 325, 500, 504 sample size and, 441 conditional, 255-257 critical values of, 322, 402, significance level and, 433 of a function, 118-119, 444, 461 vs. type IT error, 433 174-175, 328 definition of, 320 Type I error ofa linear function, 118-120, 307 degrees of freedom in, definition of, 429 population, 34-35, 117, 173 320-321, 401-402 vs. type I error, 433 precision and, 781 density curve properties for, Type II error probability of a random variable, 117, 173 322, 402 in ANOVA, 574-576, 596 sample, 33-37 F distribution and, 325, 576 degrees of freedom and, 516 Venn diagram, 54-55, 62, 75, 76 noncentral, 423 for F test, 574-576, 596 standard normal distribution in linear regression, 653 w and, 320, 322, 403 Neyman-Pearson theorem Weibull distribution Student, 320-323 and, 469-472 basics of, 202-205 Test statistic, 428 power of the test and, 446, 473 chi-squared distribution and, 231 Time series, 48, 674 sample size and, 440, 505, estimation of parameters, 356, Tolerance interval, 406 477-478, 468, 495 359-360 Treatment, 553, 555-556, 583 in tests concerning means, extreme value distribution Tree diagram, 67-68, 78, 81, 87 440, 445, 468, 489, 505 and, 217 Trial, 128-131 in tests concerning proportions, probability plot, 217-218 ‘Trimmed mean 452-453, 522-524 Weighted average, 112, 171, 261, definition of, 28-29 1 test 
and, 445, 505 504, 779, 781 in order statistics, 271-272 vs. type I error probability, 433 Weighted least squares outliers and, 29 in Wilcoxon rank-sum estimates, 679 as point estimator, 333, test, 769 Wilcoxon rank-sum test, 766-769 340, 343 in Wilcoxon signed-rank test, Wilcoxon signed-rank test, population mean and, 340, 343 763-764 759-164 --- Trang 858 --- Index 845 Z z curve for a difference between z confidence interval area under, maximizing means, 485-493 for a correlation coefficient, 671 of, 479 for a difference between for a difference between rejection region and, 438 proportions, 521 means, 493 t curve and, 322, 402 for a mean, 438, 442 for a difference between z test for a Poisson parameter, proportions, 524 chi-squared test and, 752 400, 482 for a mean, 387, 392 for a correlation for a proportion, 451 for a proportion, 395 coefficient, 669 P-value for, 459-461