Title:
Statistical theory : a concise introduction
Author:
Abramovich, Felix.
ISBN:
9781439851845
Physical Description:
pages cm.
Series:
Chapman & Hall/CRC texts in statistical science
Contents:
1. Introduction
   1.1. Preamble
   1.2. Likelihood
   1.3. Sufficiency
   1.4. Minimal sufficiency
   1.5. Completeness
   1.6. Exponential family of distributions
   1.7. Exercises
2. Point Estimation
   2.1. Introduction
   2.2. Maximum likelihood estimation
   2.3. Method of moments
   2.4. Method of least squares
   2.5. Goodness-of-estimation. Mean squared error
   2.6. Unbiased estimation
        2.6.1. Definition and main properties
        2.6.2. Uniformly minimum variance unbiased estimators. The Cramer-Rao lower bound
        2.6.3. The Cramer-Rao lower bound for multivariate parameters
        2.6.4. Rao-Blackwell theorem
        2.6.5. Lehmann-Scheffe theorem
   2.7. Exercises
3. Confidence Intervals, Bounds, and Regions
   3.1. Introduction
   3.2. Quoting the estimation error
   3.3. Confidence intervals
   3.4. Confidence bounds
   3.5. Confidence regions
   3.6. Exercises
4. Hypothesis Testing
   4.1. Introduction
   4.2. Simple hypotheses
        4.2.1. Type I and Type II errors
        4.2.2. Choice of a critical value
        4.2.3. The p-value
        4.2.4. Maximal power tests. Neyman-Pearson lemma
   4.3. Composite hypotheses
        4.3.1. Power function
        4.3.2. Uniformly most powerful tests
        4.3.3. Generalized likelihood ratio tests
   4.4. Hypothesis testing and confidence intervals
   4.5. Sequential testing
   4.6. Exercises
5. Asymptotic Analysis
   5.1. Introduction
   5.2. Convergence and consistency in MSE
   5.3. Convergence and consistency in probability
   5.4. Convergence in distribution
   5.5. The central limit theorem
   5.6. Asymptotically normal consistency
   5.7. Asymptotic confidence intervals
   5.8. Asymptotically normal consistency of the MLE, Wald's confidence intervals, and tests
   5.9. Multiparameter case
   5.10. Asymptotic distribution of the GLRT, Wilks' theorem
   5.11. Exercises
6. Bayesian Inference
   6.1. Introduction
   6.2. Choice of priors
        6.2.1. Conjugate priors
        6.2.2. Noninformative (objective) priors
   6.3. Point estimation
   6.4. Interval estimation. Credible sets
   6.5. Hypothesis testing
        6.5.1. Simple hypotheses
        6.5.2. Composite hypotheses
        6.5.3. Testing a point null hypothesis
   6.6. Exercises
7. Elements of Statistical Decision Theory
   7.1. Introduction and notations
   7.2. Risk function and admissibility
   7.3. Minimax risk and minimax rules
   7.4. Bayes risk and Bayes rules
   7.5. Posterior expected loss and Bayes actions
   7.6. Admissibility and minimaxity of Bayes rules
   7.7. Exercises
8. Linear Models
   8.1. Introduction
   8.2. Definition and examples
   8.3. Estimation of regression coefficients
   8.4. Residuals. Estimation of the variance
   8.5. Examples
        8.5.1. Estimation of a normal mean
        8.5.2. Comparison between the means of two independent normal samples with a common variance
        8.5.3. Simple linear regression
   8.6. Goodness-of-fit. Multiple correlation coefficient
   8.7. Confidence intervals and regions for the coefficients
   8.8. Hypothesis testing in linear models
        8.8.1. Testing significance of a single predictor
        8.8.2. Testing significance of a group of predictors
        8.8.3. Testing a general linear hypothesis
   8.9. Predictions
   8.10. Analysis of variance
        8.10.1. One-way ANOVA
        8.10.2. Two-way ANOVA and beyond
A. Probabilistic Review
   A.1. Introduction
   A.2. Basic probabilistic laws
   A.3. Random variables
        A.3.1. Expected value and the variance
        A.3.2. Chebyshev's and Markov's inequalities
        A.3.3. Expectation of functions and Jensen's inequality
        A.3.4. Joint distribution
        A.3.5. Covariance, correlation, and the Cauchy-Schwarz inequality
        A.3.6. Expectation and variance of a sum of random variables
        A.3.7. Conditional distribution and Bayes' theorem
        A.3.8. Distributions of functions of random variables
        A.3.9. Random vectors
   A.4. Special families of distributions
        A.4.1. Bernoulli and binomial distributions
        A.4.2. Geometric and negative binomial distributions
        A.4.3. Hypergeometric distribution
        A.4.4. Poisson distribution
        A.4.5. Uniform distribution
        A.4.6. Exponential distribution
        A.4.7. Weibull distribution
        A.4.8. Gamma distribution
        A.4.9. Beta distribution
        A.4.10. Cauchy distribution
        A.4.11. Normal distribution
        A.4.12. Log-normal distribution
        A.4.13. Chi-square distribution
        A.4.14. t-distribution
        A.4.15. F-distribution
        A.4.16. Multinormal distribution
             A.4.16.1. Definition and main properties
             A.4.16.2. Projections of normal vectors
B. Solutions of Selected Exercises
   B.1. Chapter 1
   B.2. Chapter 2
   B.3. Chapter 3
   B.4. Chapter 4
   B.5. Chapter 5
   B.6. Chapter 6
   B.7. Chapter 7
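As a small illustration of the book's central topic of maximum likelihood estimation (Section 2.2), the sketch below computes the closed-form MLEs for a normal sample and checks that they attain a higher log-likelihood than a nearby parameter value. This example is not from the book itself; the function names and simulated data are illustrative only.

```python
import math
import random

def normal_log_likelihood(data, mu, sigma2):
    """Log-likelihood of an i.i.d. N(mu, sigma2) sample."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum((x - mu) ** 2 for x in data) / (2 * sigma2))

def normal_mle(data):
    """Closed-form MLEs for a normal sample: the sample mean and
    the (biased, divide-by-n) sample variance."""
    n = len(data)
    mu_hat = sum(data) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n
    return mu_hat, sigma2_hat

random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(1000)]  # true mu=5, sigma2=4

mu_hat, sigma2_hat = normal_mle(sample)
# The MLE maximizes the likelihood, so shifting mu away can only lower it.
ll_hat = normal_log_likelihood(sample, mu_hat, sigma2_hat)
ll_off = normal_log_likelihood(sample, mu_hat + 0.5, sigma2_hat)
print(mu_hat, sigma2_hat, ll_hat > ll_off)
```

The estimates land close to the true parameters (5 and 4), and the likelihood comparison illustrates the defining property of the MLE discussed in Chapter 2.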
Abstract:
"Designed for a one-semester advanced undergraduate or graduate course, Statistical Theory: A Concise Introduction clearly explains the underlying ideas and principles of major statistical concepts, including parameter estimation, confidence intervals, hypothesis testing, asymptotic analysis, Bayesian inference, and elements of decision theory. It introduces these topics on a clear intuitive level using illustrative examples in addition to the formal definitions, theorems, and proofs. Based on the authors' lecture notes, this student-oriented, self-contained book maintains a proper balance between the clarity and rigor of exposition. In a few cases, the authors present a 'sketched' version of a proof, explaining its main ideas rather than giving detailed technical mathematical and probabilistic arguments. Chapters and sections marked by asterisks contain more advanced topics and may be omitted. A special chapter on linear models shows how the main theoretical concepts can be applied to the well-known and frequently used statistical tool of linear regression. Requiring no heavy calculus, simple questions throughout the text help students check their understanding of the material. Each chapter also includes a set of exercises that range in level of difficulty"--
"Preface This book is intended as a textbook for a one-term course in statistical theory for advanced undergraduates in statistics, mathematics, or other related fields, although at least parts of it may be useful for graduates as well. Although there exist many good books on the topic, having taught a one-term Statistical Theory course over the years, we felt that it is somewhat hard to recommend a particular one as a proper textbook for undergraduate students in statistics. Some of the existing textbooks, with a primary focus on rigorous formalism, in our view do not explain sufficiently clearly the underlying ideas and principles of the main statistical concepts, and are more suitable for graduates. Others are "all-inclusive" textbooks covering such a variety of topics in statistics that they become "too heavy" for a one-term course in statistical theory. Our main motivation was to propose a more "student-oriented," self-contained textbook designed for a one-term course on statistical theory that would introduce basic statistical concepts first on a clear intuitive level, with illustrative examples in addition to the (necessary!) formal definitions, theorems, and proofs. It is based on our lecture notes. We tried to keep a proper balance between the clarity and rigorousness of exposition. In a few cases we preferred to present a "sketched" version of a proof explaining its main ideas, or even to omit it altogether, rather than to follow detailed technical mathematical and probabilistic arguments. The interested reader can complete those proofs from other existing books on mathematical statistics (see the bibliography)"--
Electronic Access:
Cover image http://images.tandf.co.uk/common/jackets/websmall/978143985/9781439851845.jpg