
Talk:Goodness of fit



Anderson-Darling


The Anderson-Darling test should probably be mentioned on this page, as it tests the goodness of fit of a distribution. 128.42.159.192 (talk) 19:49, 30 July 2009 (UTC)[reply]

In this equation: \chi^2 = \sum \frac{(O - E)^2}{E}, should it be \chi^2 = \sum \frac{(O - E)^2}{E^2}? Otherwise chi wouldn't be unitless if the observed/expected values have units. I'm going to make the change, but I'd like confirmation that this is the case --Keflavich 15:19, 15 April 2006 (UTC)[reply]

I don't think this is the case; I'm changing it back.


I'm no stats expert, but my textbook says otherwise —The preceding unsigned comment was added by 67.126.236.193 (talk · contribs) 01:02, 21 April 2006.

It's because frequencies, not actual quantities, are used.


So both O and E are dimensionless.
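
For anyone following this thread later, here is a minimal sketch of the statistic as discussed above, with O and E as dimensionless counts; the data are invented purely for illustration:

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

# Observed and expected *counts* (frequencies), both dimensionless.
observed = np.array([18, 22, 15, 25, 20])
expected = np.array([20, 20, 20, 20, 20])

# Pearson's statistic: chi^2 = sum (O - E)^2 / E
chi_sq = np.sum((observed - expected) ** 2 / expected)

# p-value against the chi-squared distribution with k - 1 degrees of freedom
dof = len(observed) - 1
p_value = chi2.sf(chi_sq, dof)

print(f"chi^2 = {chi_sq:.3f}, dof = {dof}, p = {p_value:.3f}")
</syntaxhighlight>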


Reduced Chi-Squared


It seems to me that the reference given for the reduced chi-squared gives a different formula than the one included in this article. The reference indicates the reduced chi-squared is Chi^2/DOF where DOF = #obs - #params - 1. Privong 13:34, 30 July 2007 (UTC)[reply]

Never mind. I see where the formula comes from. Perhaps it might be wise, though, to also put it in terms of the degrees of freedom? Privong 14:34, 30 July 2007 (UTC)[reply]
The formula is still wrong. Every source I can find (including the reference currently in the article) explicitly states that summing the squared residuals divided by the variance gives Chi-squared, and that the reduced Chi-squared is obtained when you further divide the total by the number of degrees of freedom. I'm going to update the article accordingly, since the consensus in all sources I can find is that the reduced version is divided by degrees of freedom. --129.6.154.43 (talk) 17:21, 1 April 2009 (UTC)[reply]
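
For later readers, a small sketch of the convention settled on above, i.e. dividing the chi-squared sum by the degrees of freedom. The function and data layout are my own; the reference discussed above counts the degrees of freedom as #obs - #params - 1:

<syntaxhighlight lang="python">
import numpy as np

def reduced_chi_squared(y_obs, y_model, sigma, n_params):
    """Chi-squared per degree of freedom (a sketch; names are illustrative)."""
    chi_sq = np.sum(((y_obs - y_model) / sigma) ** 2)
    dof = len(y_obs) - n_params   # the reference cited above uses #obs - #params - 1
    return chi_sq / dof
</syntaxhighlight>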

More work?


I think this page needs more theoretical work than just an example. . . . Just my thoughts —Preceding unsigned comment added by 161.31.73.160 (talk) 19:15, 28 May 2008 (UTC)[reply]

Pictures/diagrams would help. Charles Edwin Shipp (talk) 21:59, 19 October 2011 (UTC)[reply]

Very confusing as currently written


The first two comments both stem from confusion caused by defining O and E to be frequencies. I think it would be much clearer if the formulae were rewritten in terms of quantities and degrees of freedom, as in the cited article. Unfortunately, I don't have time to do this right now.

What does 'lack of fit' mean? A 'Lack of Fit F-value' of 0.57 implies that the lack of fit is not significant relative to the pure error, and the 'P > F' value of 0.7246 means there is a 72.46% chance that a 'Lack of Fit F-value' this large could occur due to noise. So over what range of F-values is the lack of fit not significant relative to the pure error? And what does it mean for the lack of fit to be not significant relative to the pure error? Does it mean the model is not a good fit? —Preceding unsigned comment added by 124.16.145.39 (talk) 00:29, 19 March 2009 (UTC)[reply]
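
For anyone puzzling over the same output: as I understand it, numbers like those quoted above come from the standard lack-of-fit F-test, which compares the scatter the model fails to explain with the 'pure error' scatter among replicate measurements, and a large p-value (like 0.7246) means there is no evidence that the model form is inadequate. A rough sketch of the computation (the function name and data layout are my own):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import f as f_dist

def lack_of_fit_F(x, y, y_pred, n_params):
    """F statistic comparing lack-of-fit variance to pure-error variance.

    Requires replicate observations at some of the x settings.
    """
    levels = np.unique(x)
    # Pure error: scatter of the replicates about their own level means.
    ss_pe = sum(np.sum((y[x == lv] - y[x == lv].mean()) ** 2) for lv in levels)
    # Lack of fit: whatever the model misses beyond that pure-error scatter.
    ss_lof = np.sum((y - y_pred) ** 2) - ss_pe
    df_lof = len(levels) - n_params
    df_pe = len(y) - len(levels)
    F = (ss_lof / df_lof) / (ss_pe / df_pe)
    p = f_dist.sf(F, df_lof, df_pe)   # large p (e.g. 0.72) => no evidence of lack of fit
    return F, p
</syntaxhighlight>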


Reduced chi-squared only for linear models


The article presents reduced chi-squared as if it were applicable to all kinds of models. However, this is not true. Comparing reduced chi-squared to unity is informative about the goodness of fit if and only if the model is purely linear! For nonlinear models it doesn't make sense. For a reference, look into Barlow 1993, I think, where he derives the chi-squared distribution. Unfortunately, he just states that as a matter of fact - as I am doing here - but does not give an explanation. Maybe somebody else can explain why reduced chi-squared doesn't make sense for nonlinear models?
Regards, Rene —Preceding unsigned comment added by 149.217.40.222 (talk) 14:15, 3 August 2010 (UTC)[reply]
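
Not an explanation, but the linear case of Rene's statement is easy to check numerically: with Gaussian errors of known sigma and a straight-line model, the reduced chi-squared averages close to 1, because the expected chi-squared really is N minus the number of fitted parameters. A quick simulation of my own construction:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
sigma = 0.5
true_y = 2.0 * x + 1.0

chi_red_values = []
for _ in range(2000):
    y = true_y + rng.normal(0, sigma, x.size)
    coeffs = np.polyfit(x, y, 1)                   # straight line: 2 fitted parameters
    resid = y - np.polyval(coeffs, x)
    chi_sq = np.sum((resid / sigma) ** 2)
    chi_red_values.append(chi_sq / (x.size - 2))   # nu = N - 2

print(f"mean reduced chi^2 over simulations: {np.mean(chi_red_values):.3f}")  # close to 1
</syntaxhighlight>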

Notation and connection to regression, OLS, ANOVA and econometrics


When I studied this, goodness of fit was called R^2 and was defined in terms of SSR (sum of squared residuals), SSE (sum of squared errors), and SST (sum of squares total), which in turn were defined in terms of yhat, xhat, ybar, xbar, and the residuals, which in turn were defined in terms of the regression line based on the estimated values of the coefficient on the independent variable and the y-intercept. I think that these terms should at least be mentioned in the article. 146.163.167.230 (talk) 00:31, 1 September 2010 (UTC)[reply]
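
For reference, a minimal sketch of the R^2 bookkeeping the IP describes (note that texts differ on whether SSR denotes the residual or the regression sum of squares; the data here are invented):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, 1)   # OLS line: yhat = slope*x + intercept
y_hat = slope * x + intercept
y_bar = y.mean()

ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y_bar) ** 2)        # total sum of squares
r_squared = 1.0 - ss_res / ss_tot

print(f"R^2 = {r_squared:.4f}")
</syntaxhighlight>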


Prior content in this article duplicated one or more previously published sources. The material was copied from: http://itl.nist.gov/div898/handbook/eda/section3/eda35f.htm. Infringing material has been rewritten or removed and must not be restored, unless it is duly released under a compatible license. (For more information, please see "using copyrighted works from others" if you are not the copyright holder of this material, or "donating copyrighted materials" if you are.) For legal reasons, we cannot accept copyrighted text or images borrowed from other web sites or published material; such additions will be deleted. Contributors may use copyrighted publications as a source of information, but not as a source of sentences or phrases. Accordingly, the material may be rewritten, but only if it does not infringe on the copyright of the original or plagiarize from that source. Please see our guideline on non-free text for how to properly implement limited quotations of copyrighted text. Wikipedia takes copyright violations very seriously, and persistent violators will be blocked from editing. While we appreciate contributions, we must require all contributors to understand and comply with these policies. Thank you. Danger (talk) 11:43, 9 October 2011 (UTC)[reply]

Plagiarism


The opening appears to be widely plagiarized as far back as 2005 on the Psychology Wiki

http://psychology.wikia.com/wiki/Goodness_of_fit — Preceding unsigned comment added by 129.246.254.118 (talk) 22:53, 11 December 2014 (UTC)[reply]

Multivariate GoF tests?


It is not mentioned whether there are multivariate tests to test the fit of a multivariate distribution. If there are no such tests, shouldn't it at least be suggested to apply separate tests to the marginal distributions? -134.106.106.150 (talk) 20:45, 9 April 2015 (UTC)[reply]
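
Along the lines the IP suggests, one stopgap is to apply a univariate test to each marginal separately. A sketch (the distributions and parameters are invented, and of course this says nothing about the dependence structure):

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(1)
data = rng.multivariate_normal(mean=[0, 0], cov=[[1, 0.5], [0.5, 1]], size=500)

# Kolmogorov-Smirnov test of each marginal against its hypothesised distribution.
for j in range(data.shape[1]):
    stat, p = kstest(data[:, j], norm(loc=0, scale=1).cdf)
    print(f"marginal {j}: KS statistic = {stat:.3f}, p = {p:.3f}")
</syntaxhighlight>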

Error in example?


This sentence can't be correct: "A \chi_\mathrm{red}^2 < 1 indicates that the model is 'over-fitting' the data: either the model is improperly fitting noise, or the error variance has been overestimated."

Such an assertion would imply that linear relations (like the ideal gas law) are invalid.

The reference given earlier is to a paper that does not exist.

Createangelos (talk) 12:02, 21 April 2015 (UTC)[reply]

The claim about \chi_\mathrm{red}^2 < 1 makes sense; your implication is confused; an exact linear law does not imply the measurement error disappears.
What ref?
Glrx (talk) 00:34, 24 April 2015 (UTC)[reply]


Hi, sorry for the delay in replying. I can't find any problem with the references anymore (perhaps fixed?), but I still have a problem with the statement that reduced chi squared less than one means that a model is over-fit. You correctly say that if, for example, someone made a statistical model of gas volume, pressure and temperature, which is the ideal gas law, there would indeed be measurement errors due to Brownian motion of the gas. That would mean that the chi squared value would be nonzero, but it would be very near zero. Does that mean that the ideal gas law should be abandoned as 'overfitting'? Clearly not. A paper online by Andrae et al gives a really good example of over-fitting: given any real-valued function f with finite support in the reals, it is always possible to choose A, B, C so that A cos(Bx + C) is within an arbitrarily small distance from f in the least-squares sense. In fact there are infinitely many such choices of A, B, C that give an arbitrarily good approximation of f at the chosen points but are very different when extended to R. However, the same paper repeats the false statement that reduced chi squared detects such problems. More precisely, in the introduction of their paper they say that it does, and then with this example they show that actually reduced chi squared doesn't really make sense.

I'm sort of OK with the Wikipedia article saying that as a rule of thumb one should aim for reduced chi squared equal to 1, but there are assumptions that need to be specified, otherwise it is just the completely wrong thing to do; Wikipedia editors ought to get involved with sorting this out by separating the wheat from the chaff in these references. Statistics writing doesn't always include the necessary hypotheses, but readers of Wikipedia articles won't be applying standard statistical hypotheses, and are going to be misled.

Obviously, the notion that if a statistical model is 'too good' then something must be wrong is plain superstition. It is along the lines of saying 'the exception proves the rule', etc. Just nonsense superstition, actually, without correct hypotheses. Createangelos (talk) 09:23, 28 June 2016 (UTC)[reply]

The rule of thumb statement is appropriately sourced to Bevington, a reliable source published by McGraw Hill. The statement is completely appropriate; the requisite assumptions are stated. You don't have a reliable source that says otherwise (who is Andrae et al? -- BTW, many online sources are not reliable).
You do not understand measurement error. If \chi_\mathrm{red}^2 is less than one, then the fit exceeds the measurement variance. A fit cannot be better than the measurements, so something is wrong. That is not superstition. Maybe the measurements are more accurate than believed, or maybe the model has enough degrees of freedom to eliminate some of the measurement error.
Measurement errors do not include just Brownian motion. You're in left field here.
Glrx (talk) 19:55, 28 June 2016 (UTC)[reply]

Hmm, I think you might be right. I perhaps don't know the def'n of measurement error (as you said the first time). Createangelos (talk) 22:02, 28 June 2016 (UTC)[reply]
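
Following up on the exchange above: to make the two failure modes in the sentence quoted at the start of this section concrete ('either the model is improperly fitting noise, or the error variance has been overestimated'), here is a small simulation of my own, not taken from the article or from Bevington:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1, 1, 15)
sigma = 1.0                                        # true measurement error
y = 3.0 * x + 2.0 + rng.normal(0, sigma, x.size)   # the truth is a straight line

# Over-fitting: a needlessly flexible model absorbs some of the noise,
# so the residual scatter drops below the known measurement error.
for degree in (1, 10):
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    print(f"degree {degree}: residual RMS = {np.sqrt(np.mean(resid ** 2)):.2f}  (sigma = {sigma})")

# Overestimated error variance: the same straight-line fit, but with sigma
# assumed twice as large, pushes the reduced chi-squared well below 1.
resid = y - np.polyval(np.polyfit(x, y, 1), x)
for assumed_sigma in (1.0, 2.0):
    chi_red = np.sum((resid / assumed_sigma) ** 2) / (x.size - 2)
    print(f"assumed sigma = {assumed_sigma}: reduced chi^2 = {chi_red:.2f}")
</syntaxhighlight>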

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Goodness of fit. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot II (Talk to my owner: Online) 13:45, 12 January 2016 (UTC)[reply]

India Education Program course assignment


This article was the subject of an educational assignment at College of Engineering, Pune supported by Wikipedia Ambassadors through the India Education Program. Further details are available on the course page.

The above message was substituted from {{IEP assignment}} by PrimeBOT (talk) on 20:02, 1 February 2023 (UTC)[reply]