Title:
Chi-squared goodness of fit tests with applications
Author:
Voinov, Vassiliy.
ISBN:
9780123971944
Edition:
First edition.
Publication Information:
Amsterdam : Academic Press, 2013.
Physical Description:
xii, 229 pages ; 24 cm.
Contents:
Machine generated contents note: 1.A Historical Account -- 2.Pearson's Sum and Pearson-Fisher Test -- 2.1.Pearson's chi-squared sum -- 2.2.Decompositions of Pearson's chi-squared sum -- 2.3.Neyman-Pearson classes and applications of decompositions of Pearson's Sum -- 2.4.Pearson-Fisher and Dzhaparidze-Nikulin tests -- 2.5.Chernoff-Lehmann Theorem -- 2.6.Pearson-Fisher test for random class end points -- 3.Wald's Method and Nikulin-Rao-Robson Test -- 3.1.Wald's Method -- 3.2.Modifications of Nikulin-Rao-Robson test -- 3.3.Optimality of Nikulin-Rao-Robson test -- 3.4.Decomposition of Nikulin-Rao-Robson Test -- 3.5.Chi-squared tests for multivariate normality -- 3.5.1.Introduction -- 3.5.2.Modified chi-squared tests -- 3.5.3.Testing for bivariate circular normality -- 3.5.4.Comparison of different tests -- 3.5.5.Conclusions -- 3.6.Modified chi-squared tests for the exponential distribution -- 3.6.1.Two-parameter exponential distribution -- 3.6.2.Scale-exponential distribution --

Contents note continued: 3.7.Power generalized Weibull distribution -- 3.7.1.Estimation of parameters -- 3.7.2.Modified chi-squared test -- 3.7.3.Evaluation of power -- 3.8.Modified chi-squared goodness of fit test for randomly right censored data -- 3.8.1.Introduction -- 3.8.2.Maximum likelihood estimation for right censored data -- 3.8.3.Chi-squared goodness of fit test -- 3.8.4.Examples -- 3.9.Testing normality for some classical data on physical constants -- 3.9.1.Cavendish's measurements -- 3.9.2.Millikan's measurements -- 3.9.3.Michelson's measurements -- 3.9.4.Newcomb's measurements -- 3.10.Tests based on data on stock returns of two Kazakhstani companies -- 3.10.1.Analysis of daily returns -- 3.10.2.Analysis of weekly returns -- 4.Wald's Method and Hsuan-Robson-Mirvaliev Test -- 4.1.Wald's method and moment-type estimators -- 4.2.Decomposition of Hsuan-Robson-Mirvaliev test --

Contents note continued: 4.3.Equivalence of Nikulin-Rao-Robson and Hsuan-Robson-Mirvaliev tests for exponential family -- 4.4.Comparisons of some modified chi-squared tests -- 4.4.1.Maximum likelihood estimates -- 4.4.2.Moment-type estimators -- 4.5.Neyman-Pearson classes -- 4.5.1.Maximum likelihood estimators -- 4.5.2.Moment-type estimators -- 4.6.Modified chi-squared test for three-parameter Weibull distribution -- 4.6.1.Parameter estimation and modified chi-squared tests -- 4.6.2.Power evaluation -- 4.6.3.Neyman-Pearson classes -- 4.6.4.Discussion -- 4.6.5.Concluding remarks -- 5.Modifications Based on UMVUEs -- 5.1.Test for Poisson, binomial, and negative binomial distributions -- 5.2.Chi-squared test for one-parameter exponential family -- 5.3.Revisiting Clarke's data on flying bombs -- 6.Vector-Valued Tests -- 6.1.Introduction -- 6.2.Vector-valued tests: An artificial example -- 6.3.Example of Section 2.3 revisited -- 6.4.Combining nonparametric and parametric tests --

Contents note continued: 6.5.Combining nonparametric tests -- 6.6.Concluding comments -- 7.Applications of Modified Chi-Squared Tests -- 7.1.Poisson versus binomial: Appointment of judges to the US Supreme Court -- 7.1.1.Introduction -- 7.1.2.Data to be analyzed -- 7.1.3.Statistical analysis of the data -- 7.1.4.Revisiting the analyses of Wallis and Ulmer -- 7.1.5.Comments about King's exponential Poisson regression model -- 7.1.6.Concluding remarks -- 7.2.Revisiting Rutherford's data -- 7.2.1.Analysis of the data -- 7.2.2.Concluding remarks -- 7.3.Modified tests for the logistic distribution -- 7.4.Modified chi-squared -- 7.4.1.Introduction -- 7.4.2.The NRR, DN and McCulloch tests -- 8.Probability Distributions of Interest -- 8.1.Discrete probability distributions -- 8.1.1.Binomial, geometric, and negative binomial distributions -- 8.1.2.Multinomial distribution -- 8.1.3.Poisson distribution -- 8.2.Continuous probability distributions -- 8.2.1.Exponential distribution --

Contents note continued: 8.2.2.Uniform distribution -- 8.2.3.Triangular distribution -- 8.2.4.Pareto model -- 8.2.5.Normal distribution -- 8.2.6.Multivariate normal distribution -- 8.2.7.Chi-square distribution -- 8.2.8.Non-central chi-square distribution -- 8.2.9.Weibull distribution -- 8.2.10.Generalized power Weibull distribution -- 8.2.11.Birnbaum-Saunders distribution -- 8.2.12.Logistic distribution -- 9.Chi-Squared Tests for Specific Distributions -- 9.1.Test for Poisson, binomial, and "binomial" approximation of Feller's distribution -- 9.2.Elements of matrices K, B, C, and V for the three-parameter Weibull distribution -- 9.3.Elements of matrices J and B for the generalized power Weibull distribution -- 9.4.Elements of matrices J and B for the two-parameter exponential distribution -- 9.5.Elements of matrices B, C, K, and V to test the logistic distribution -- 9.6.Testing for normality -- 9.7.Testing for exponentiality --

Contents note continued: 9.7.1.Test of Greenwood and Nikulin (see Section 3.6.1) -- 9.7.2.Nikulin-Rao-Robson test (see Eq. (3.8) and Section 9.4) -- 9.8.Testing for the logistic -- 9.9.Testing for the three-parameter Weibull -- 9.10.Testing for the power generalized Weibull -- 9.11.Testing for two-dimensional circular normality.
Abstract:
"If the number of sample observations n ! 1, the statistic in (1.1) will follow the chi-squared probability distribution with r-1 degrees of freedom. We know that this remarkable result is true only for a simple null hypothesis when a hypothetical distribution is specified uniquely (i.e., the parameter is considered to be known). Until 1934, Pearson believed that the limiting distribution of the statistic in (1.1) will be the same if the unknown parameters of the null hypothesis are replaced by their estimates based on a sample; see, for example, Baird (1983), Plackett (1983, p. 63), Lindley (1996), Rao (2002), and Stigler (2008, p. 266). In this regard, it is important to reproduce the words of Plackett (1983, p. 69) concerning E. S. Pearson's opinion: "I knew long ago that KP (meaning Karl Pearson) used the 'correct' degrees of freedom for (a) difference between two samples and (b) multiple contingency tables. But he could not see that.