Cover
Title Page
Copyright Page
ISBN-13: 9780321693945
Table of Contents
Preface
Acknowledgments
1 INTRODUCTION
1.1 An Overview
1.2 Some Examples
1.3 A Brief History
Probability: The Early Years
Statistics: From Aristotle to Quetelet
Staatenkunde: The Comparative Description of States
Political Arithmetic
Quetelet: The Catalyst
1.4 A Chapter Summary
2 PROBABILITY
2.1 Introduction
The Evolution of the Definition of Probability
2.2 Sample Spaces and the Algebra of Sets
Unions, Intersections, and Complements
Expressing Events Graphically: Venn Diagrams
2.3 The Probability Function
Some Basic Properties of P
2.4 Conditional Probability
Applying Conditional Probability to Higher-Order Intersections
Calculating “Unconditional” and “Inverse” Probabilities
Bayes’ Theorem
2.5 Independence
Deducing Independence
Defining the Independence of More Than Two Events
2.6 Combinatorics
Counting Ordered Sequences: The Multiplication Rule
Counting Permutations (when the objects are all distinct)
Counting Permutations (when the objects are not all distinct)
Counting Combinations
2.7 Combinatorial Probability
2.8 Taking a Second Look at Statistics (Monte Carlo Techniques)
3 RANDOM VARIABLES
3.1 Introduction
3.2 Binomial and Hypergeometric Probabilities
The Binomial Probability Distribution
3.3 Discrete Random Variables
Assigning Probabilities: The Discrete Case
Defining “New” Sample Spaces
The Probability Density Function
The Cumulative Distribution Function
3.4 Continuous Random Variables
Choosing the Function f(t)
Fitting f(t) to Data: The Density-Scaled Histogram
Continuous Probability Density Functions
Continuous Cumulative Distribution Functions
3.5 Expected Values
A Second Measure of Central Tendency: The Median
The Expected Value of a Function of a Random Variable
3.6 The Variance
Higher Moments
3.7 Joint Densities
Discrete Joint Pdfs
Continuous Joint Pdfs
Geometric Probability
Marginal Pdfs for Continuous Random Variables
Joint Cdfs
Multivariate Densities
Independence of Two Random Variables
Independence of n (>2) Random Variables
Random Samples
3.8 Transforming and Combining Random Variables
Transformations
Finding the Pdf of a Sum
Finding the Pdfs of Quotients and Products
3.9 Further Properties of the Mean and Variance
Calculating the Variance of a Sum of Random Variables
3.10 Order Statistics
The Distribution of Extreme Order Statistics
A General Formula for fYi(y)
Joint Pdfs of Order Statistics
3.11 Conditional Densities
Finding Conditional Pdfs for Discrete Random Variables
3.12 Moment-Generating Functions
Calculating a Random Variable’s Moment-Generating Function
Using Moment-Generating Functions to Find Moments
Using Moment-Generating Functions to Find Variances
Using Moment-Generating Functions to Identify Pdfs
3.13 Taking a Second Look at Statistics (Interpreting Means)
Appendix 3.A.1 Minitab Applications
4 SPECIAL DISTRIBUTIONS
4.1 Introduction
4.2 The Poisson Distribution
The Poisson Limit
The Poisson Distribution
Fitting the Poisson Distribution to Data
The Poisson Model: The Law of Small Numbers
Calculating Poisson Probabilities
Intervals Between Events: The Poisson/Exponential Relationship
4.3 The Normal Distribution
Finding Areas Under the Standard Normal Curve
The Continuity Correction
Central Limit Theorem
The Normal Curve as a Model for Individual Measurements
4.4 The Geometric Distribution
4.5 The Negative Binomial Distribution
4.6 The Gamma Distribution
Generalizing the Waiting Time Distribution
Sums of Gamma Random Variables
4.7 Taking a Second Look at Statistics (Monte Carlo Simulations)
Appendix 4.A.1 Minitab Applications
Appendix 4.A.2 A Proof of the Central Limit Theorem
5 ESTIMATION
5.1 Introduction
5.2 Estimating Parameters: The Method of Maximum Likelihood and the Method of Moments
The Method of Maximum Likelihood
Applying the Method of Maximum Likelihood
Using Order Statistics as Maximum Likelihood Estimates
Finding Maximum Likelihood Estimates When More Than One Parameter Is Unknown
The Method of Moments
5.3 Interval Estimation
Confidence Intervals for the Binomial Parameter, p
Margin of Error
Choosing Sample Sizes
5.4 Properties of Estimators
Unbiasedness
Efficiency
5.5 Minimum-Variance Estimators: The Cramér-Rao Lower Bound
5.6 Sufficient Estimators
An Estimator That Is Sufficient
An Estimator That Is Not Sufficient
A Formal Definition
A Second Factorization Criterion
Sufficiency as It Relates to Other Properties of Estimators
5.7 Consistency
5.8 Bayesian Estimation
Prior Distributions and Posterior Distributions
Bayesian Estimation
Using the Risk Function to Find θ
5.9 Taking a Second Look at Statistics (Beyond Classical Estimation)
Appendix 5.A.1 Minitab Applications
6 HYPOTHESIS TESTING
6.1 Introduction
6.2 The Decision Rule
Expressing Decision Rules in Terms of Z Ratios
One-Sided Versus Two-Sided Alternatives
Testing H0: μ = μo (σ Known)
The P-Value
6.3 Testing Binomial Data—H0: p = po
A Large-Sample Test for the Binomial Parameter p
A Small-Sample Test for the Binomial Parameter p
6.4 Type I and Type II Errors
Computing the Probability of Committing a Type I Error
Computing the Probability of Committing a Type II Error
Power Curves
Factors That Influence the Power of a Test
The Effect of α on 1−β
The Effects of σ and n on 1−β
Decision Rules for Nonnormal Data
6.5 A Notion of Optimality: The Generalized Likelihood Ratio
6.6 Taking a Second Look at Statistics (Statistical Significance versus “Practical” Significance)
7 INFERENCES BASED ON THE NORMAL DISTRIBUTION
7.1 Introduction
7.2 Comparing (Ȳ − μ)/(σ/√n) and (Ȳ − μ)/(S/√n)
7.3 Deriving the Distribution of (Ȳ − μ)/(S/√n)
Using the F Distribution to Derive the pdf for t Ratios
fTn(t) and fZ(z): How the Two Pdfs Are Related
7.4 Drawing Inferences About μ
t Tables
Constructing a Confidence Interval for μ
Testing H0: μ = μo (The One-Sample t Test)
Testing H0: μ = μo When the Normality Assumption Is Not Met
7.5 Drawing Inferences About σ²
Chi Square Tables
Constructing Confidence Intervals for σ²
Testing H0: σ² = σo²
7.6 Taking a Second Look at Statistics (Type II Error)
Simulations
Appendix 7.A.1 Minitab Applications
Appendix 7.A.2 Some Distribution Results for Ȳ and S²
Appendix 7.A.3 A Proof that the One-Sample t Test is a GLRT
Appendix 7.A.4 A Proof of Theorem 7.5.2
8 TYPES OF DATA: A BRIEF OVERVIEW
8.1 Introduction
Definitions
Possible Designs
8.2 Classifying Data
One-Sample Data
Two-Sample Data
k-Sample Data
Paired Data
Randomized Block Data
Regression Data
Categorical Data
A Flowchart for Classifying Data
8.3 Taking a Second Look at Statistics (Samples Are Not “Valid”!)
9 TWO-SAMPLE INFERENCES
9.1 Introduction
9.2 Testing H0: μX = μY
The Behrens-Fisher Problem
9.3 Testing H0: σX² = σY²—The F Test
9.4 Binomial Data: Testing H0: pX = pY
Applying the Generalized Likelihood Ratio Criterion
9.5 Confidence Intervals for the Two-Sample Problem
9.6 Taking a Second Look at Statistics (Choosing Samples)
Appendix 9.A.1 A Derivation of the Two-Sample t Test (A Proof of Theorem 9.2.2)
Appendix 9.A.2 Minitab Applications
10 GOODNESS-OF-FIT TESTS
10.1 Introduction
10.2 The Multinomial Distribution
A Multinomial/Binomial Relationship
10.3 Goodness-of-Fit Tests: All Parameters Known
The Goodness-of-Fit Decision Rule—An Exception
10.4 Goodness-of-Fit Tests: Parameters Unknown
10.5 Contingency Tables
Testing for Independence: A Special Case
Testing for Independence: The General Case
“Reducing” Continuous Data to Contingency Tables
10.6 Taking a Second Look at Statistics (Outliers)
Appendix 10.A.1 Minitab Applications
11 REGRESSION
11.1 Introduction
11.2 The Method of Least Squares
Residuals
Interpreting Residual Plots
Nonlinear Models
11.3 The Linear Model
A Special Case
Estimating the Linear Model Parameters
Properties of Linear Model Estimators
Estimating σ²
Drawing Inferences about β1
Drawing Inferences about β0
Drawing Inferences about σ²
Drawing Inferences about E(Y | x)
Drawing Inferences about Future Observations
Testing the Equality of Two Slopes
11.4 Covariance and Correlation
Measuring the Dependence Between Two Random Variables
The Correlation Coefficient
Estimating ρ(X, Y): The Sample Correlation Coefficient
Interpreting R
11.5 The Bivariate Normal Distribution
Generalizing the Univariate Normal pdf
Properties of the Bivariate Normal Distribution
Estimating Parameters in the Bivariate Normal pdf
Testing H0: ρ =0
11.6 Taking a Second Look at Statistics (How Not to Interpret the Sample Correlation Coefficient)
Appendix 11.A.1 Minitab Applications
Appendix 11.A.2 A Proof of Theorem 11.3.3
12 THE ANALYSIS OF VARIANCE
12.1 Introduction
12.2 The F Test
Sums of Squares
Testing H0: μ1 = μ2 = ... = μk When σ² Is Known
Testing H0: μ1 = μ2 = ... = μk When σ² Is Unknown
ANOVA Tables
Computing Formulas
Comparing the Two-Sample t Test with the Analysis of Variance
12.3 Multiple Comparisons: Tukey’s Method
A Background Result: The Studentized Range Distribution
12.4 Testing Subhypotheses with Contrasts
12.5 Data Transformations
12.6 Taking a Second Look at Statistics (Putting the Subject of Statistics Together—The Contributions of Ronald A. Fisher)
Appendix 12.A.1 Minitab Applications
Appendix 12.A.2 A Proof of Theorem 12.2.2
Appendix 12.A.3 The Distribution of [SSTR/(k−1)]/[SSE/(n−k)] When H1 is True
13 RANDOMIZED BLOCK DESIGNS
13.1 Introduction
13.2 The F Test for a Randomized Block Design
Computing Formulas
Tukey Comparisons for Randomized Block Data
Contrasts for Randomized Block Data
13.3 The Paired t Test
Criteria for Pairing
The Equivalence of the Paired t Test and the Randomized Block ANOVA When k = 2
13.4 Taking a Second Look at Statistics (Choosing between a Two-Sample t Test and a Paired t Test)
Appendix 13.A.1 Minitab Applications
14 NONPARAMETRIC STATISTICS
14.1 Introduction
14.2 The Sign Test
A Small-Sample Sign Test
Using the Sign Test for Paired Data
14.3 Wilcoxon Tests
Testing H0: μ = μo
Calculating pW(w)
Tables of the cdf, FW(w)
A Large-Sample Wilcoxon Signed Rank Test
Testing H0: μD = 0 (Paired Data)
Testing H0: μX = μY (The Wilcoxon Rank Sum Test)
14.4 The Kruskal-Wallis Test
14.5 The Friedman Test
14.6 Testing for Randomness
14.7 Taking a Second Look at Statistics (Comparing Parametric and Nonparametric Procedures)
Appendix 14.A.1 Minitab Applications
Appendix: Statistical Tables
Answers to Selected Odd-Numbered Questions
Bibliography
Index
AN INTRODUCTION TO MATHEMATICAL STATISTICS AND ITS APPLICATIONS
Fifth Edition

Richard J. Larsen, Vanderbilt University
Morris L. Marx, University of West Florida

Prentice Hall
Boston Columbus Indianapolis New York San Francisco Upper Saddle River
Amsterdam Cape Town Dubai Paris Montréal London Madrid Milan Munich Toronto
Sydney Hong Kong Seoul Singapore Taipei Tokyo Delhi Mexico City São Paulo
Editor in Chief: Deirdre Lynch
Acquisitions Editor: Christopher Cummings
Associate Editor: Christina Lepre
Assistant Editor: Dana Jones
Senior Managing Editor: Karen Wernholm
Associate Managing Editor: Tamela Ambush
Senior Production Project Manager: Peggy McMahon
Senior Design Supervisor: Andrea Nix
Cover Design: Beth Paquin
Interior Design: Tamara Newnam
Marketing Manager: Alex Gay
Marketing Assistant: Kathleen DeChavez
Senior Author Support/Technology Specialist: Joe Vetere
Manufacturing Manager: Evelyn Beaton
Senior Manufacturing Buyer: Carol Melville
Production Coordination, Technical Illustrations, and Composition: Integra Software Services, Inc.
Cover Photo: © Jason Reed/Getty Images

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Pearson was aware of a trademark claim, the designations have been printed in initial caps or all caps.

Library of Congress Cataloging-in-Publication Data
Larsen, Richard J.
An introduction to mathematical statistics and its applications / Richard J. Larsen, Morris L. Marx.—5th ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-321-69394-5
1. Mathematical statistics—Textbooks. I. Marx, Morris L. II. Title.
QA276.L314 2012
519.5—dc22
2010001387

Copyright © 2012, 2006, 2001, 1986, and 1981 by Pearson Education, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America. For information on obtaining permission for use of material in this work, please submit a written request to Pearson Education, Inc., Rights and Contracts Department, 501 Boylston Street, Suite 900, Boston, MA 02116, fax your request to 617-671-3447, or e-mail at http://www.pearsoned.com/legal/permissions.htm.

1 2 3 4 5 6 7 8 9 10—EB—14 13 12 11 10

ISBN-13: 978-0-321-69394-5
ISBN-10: 0-321-69394-9
Table of Contents

Preface viii

1 Introduction 1
  1.1 An Overview 1
  1.2 Some Examples 2
  1.3 A Brief History 7
  1.4 A Chapter Summary 14

2 Probability 16
  2.1 Introduction 16
  2.2 Sample Spaces and the Algebra of Sets 18
  2.3 The Probability Function 27
  2.4 Conditional Probability 32
  2.5 Independence 53
  2.6 Combinatorics 67
  2.7 Combinatorial Probability 90
  2.8 Taking a Second Look at Statistics (Monte Carlo Techniques) 99

3 Random Variables 102
  3.1 Introduction 102
  3.2 Binomial and Hypergeometric Probabilities 103
  3.3 Discrete Random Variables 118
  3.4 Continuous Random Variables 129
  3.5 Expected Values 139
  3.6 The Variance 155
  3.7 Joint Densities 162
  3.8 Transforming and Combining Random Variables 176
  3.9 Further Properties of the Mean and Variance 183
  3.10 Order Statistics 193
  3.11 Conditional Densities 200
  3.12 Moment-Generating Functions 207
  3.13 Taking a Second Look at Statistics (Interpreting Means) 216
  Appendix 3.A.1 Minitab Applications 218

4 Special Distributions 221
  4.1 Introduction 221
  4.2 The Poisson Distribution 222
  4.3 The Normal Distribution 239
  4.4 The Geometric Distribution 260
  4.5 The Negative Binomial Distribution 262
  4.6 The Gamma Distribution 270
  4.7 Taking a Second Look at Statistics (Monte Carlo Simulations) 274
  Appendix 4.A.1 Minitab Applications 278
  Appendix 4.A.2 A Proof of the Central Limit Theorem 280

5 Estimation 281
  5.1 Introduction 281
  5.2 Estimating Parameters: The Method of Maximum Likelihood and the Method of Moments 284
  5.3 Interval Estimation 297
  5.4 Properties of Estimators 312
  5.5 Minimum-Variance Estimators: The Cramér-Rao Lower Bound 320
  5.6 Sufficient Estimators 323
  5.7 Consistency 330
  5.8 Bayesian Estimation 333
  5.9 Taking a Second Look at Statistics (Beyond Classical Estimation) 345
  Appendix 5.A.1 Minitab Applications 346

6 Hypothesis Testing 350
  6.1 Introduction 350
  6.2 The Decision Rule 351
  6.3 Testing Binomial Data—H0: p = po 361
  6.4 Type I and Type II Errors 366
  6.5 A Notion of Optimality: The Generalized Likelihood Ratio 379
  6.6 Taking a Second Look at Statistics (Statistical Significance versus “Practical” Significance) 382

7 Inferences Based on the Normal Distribution 385
  7.1 Introduction 385
  7.2 Comparing (Ȳ − μ)/(σ/√n) and (Ȳ − μ)/(S/√n) 386
  7.3 Deriving the Distribution of (Ȳ − μ)/(S/√n) 388
  7.4 Drawing Inferences About μ 394
  7.5 Drawing Inferences About σ² 410
  7.6 Taking a Second Look at Statistics (Type II Error) 418
  Appendix 7.A.1 Minitab Applications 421
  Appendix 7.A.2 Some Distribution Results for Ȳ and S² 423
  Appendix 7.A.3 A Proof that the One-Sample t Test is a GLRT 425
  Appendix 7.A.4 A Proof of Theorem 7.5.2 427

8 Types of Data: A Brief Overview 430
  8.1 Introduction 430
  8.2 Classifying Data 435
  8.3 Taking a Second Look at Statistics (Samples Are Not “Valid”!) 455

9 Two-Sample Inferences 457
  9.1 Introduction 457
  9.2 Testing H0: μX = μY 458
  9.3 Testing H0: σX² = σY²—The F Test 471
  9.4 Binomial Data: Testing H0: pX = pY 476
  9.5 Confidence Intervals for the Two-Sample Problem 481
  9.6 Taking a Second Look at Statistics (Choosing Samples) 487
  Appendix 9.A.1 A Derivation of the Two-Sample t Test (A Proof of Theorem 9.2.2) 488
  Appendix 9.A.2 Minitab Applications 491

10 Goodness-of-Fit Tests 493
  10.1 Introduction 493
  10.2 The Multinomial Distribution 494
  10.3 Goodness-of-Fit Tests: All Parameters Known 499
  10.4 Goodness-of-Fit Tests: Parameters Unknown 509
  10.5 Contingency Tables 519
  10.6 Taking a Second Look at Statistics (Outliers) 529
  Appendix 10.A.1 Minitab Applications 531

11 Regression 532
  11.1 Introduction 532
  11.2 The Method of Least Squares 533
  11.3 The Linear Model 555
  11.4 Covariance and Correlation 575
  11.5 The Bivariate Normal Distribution 582
  11.6 Taking a Second Look at Statistics (How Not to Interpret the Sample Correlation Coefficient) 589
  Appendix 11.A.1 Minitab Applications 590
  Appendix 11.A.2 A Proof of Theorem 11.3.3 592

12 The Analysis of Variance 595
  12.1 Introduction 595
  12.2 The F Test 597
  12.3 Multiple Comparisons: Tukey’s Method 608
  12.4 Testing Subhypotheses with Contrasts 611
  12.5 Data Transformations 617
  12.6 Taking a Second Look at Statistics (Putting the Subject of Statistics Together—The Contributions of Ronald A. Fisher) 619
  Appendix 12.A.1 Minitab Applications 621
  Appendix 12.A.2 A Proof of Theorem 12.2.2 624
  Appendix 12.A.3 The Distribution of [SSTR/(k−1)]/[SSE/(n−k)] When H1 is True 624

13 Randomized Block Designs 629
  13.1 Introduction 629
  13.2 The F Test for a Randomized Block Design 630
  13.3 The Paired t Test 642
  13.4 Taking a Second Look at Statistics (Choosing between a Two-Sample t Test and a Paired t Test) 649
  Appendix 13.A.1 Minitab Applications 653

14 Nonparametric Statistics 655
  14.1 Introduction 656
  14.2 The Sign Test 657
  14.3 Wilcoxon Tests 662
  14.4 The Kruskal-Wallis Test 677
  14.5 The Friedman Test 682
  14.6 Testing for Randomness 684
  14.7 Taking a Second Look at Statistics (Comparing Parametric and Nonparametric Procedures) 689
  Appendix 14.A.1 Minitab Applications 693

Appendix: Statistical Tables 696
Answers to Selected Odd-Numbered Questions 723
Bibliography 745
Index 753