This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
The following Wikipedia contributor may be personally or professionally connected to the subject of this article. Relevant policies and guidelines may include conflict of interest, autobiography, and neutral point of view.
The mathematical problem is extremely overcomplicated. It is very simple and needs only about 10 lines of code to solve. I have shown how to do that multiple times, but those who promote their obsolete and ineffective computations keep removing it. Watch this video https://www.youtube.com/watch?v=w9x-omEIML0. If one looks for a solution in the form of quantized (piecewise constant) functions, then the code for building the model is about 10 lines; here is an example:
// U is a 2-D table of piecewise-constant values: one row per quantization bin of x
// (bin width deltaX, origin xmin), one column per lag j; T is the number of lags, N the number of samples.
for (int i = T - 1; i < N; ++i) {
    // Predict y[i] as the sum of the table entries selected by the last T inputs.
    double predicted = 0.0;
    for (int j = 0; j < T; ++j) {
        predicted += U[(int)((x[i - j] - xmin) / deltaX)][j];
    }
    // Spread the prediction error equally over the T entries that produced it.
    double error = (y[i] - predicted) / T * learning_rate;
    for (int j = 0; j < T; ++j) {
        U[(int)((x[i - j] - xmin) / deltaX)][j] += error;
    }
}
That is all; this code builds the model. An explanation is here: http://ezcodesample.com/NAF/index.html. You can't stop technical progress by removing more effective solutions from the internet. It has been published in many other places and in highly rated journals anyway; just accept the fact that a better solution exists. That is the normal way of technical progress. — Preceding unsigned comment added by 208.127.242.253 (talk) 14:20, 4 October 2021 (UTC)
The area of GAMs is critically important in modern data mining, and this article could use a lot of additional work. I'll put in some more sections over the coming days to try to bring it more in line with the current state of the art. Also, because the GAM is a highly practical subject, I think it is worthwhile to discuss some practical matters related to this model. In particular, there is a lot of work out there related to using the GAM approach to perform functional decomposition of large datasets in order to discover the functional form of the phenomena that drive observed results. This is touched on in a few of the references, but is not discussed in the article itself, which is a shame.
Also, Tibshirani's original paper talks only about nonparametric methods, but semi-parametric methods have also become fairly common in recent years. The main reason is that semi-parametric methods often allow for explanation of the causes behind the estimated effects, and they also give much more control over the complexity class of the model sought. — Preceding unsigned comment added by Vertigre (talk • contribs) 20:01, 29 December 2015 (UTC)
Hmm, is the article in error? I've been taught that GAMs are extensions of Generalized Linear Models, not multiple regressions. Specifically, instead of the mean being a sum of component functions, it need only be related by a link function. (This construction contains the one given in the article.) --Fangz (talk) 15:03, 15 May 2008 (UTC)
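For reference, the construction being described here is the standard GAM form (standard notation, not quoted from the article):

g(\operatorname{E}[Y]) = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_m(x_m),

where g is the link function and the f_j are smooth component functions; taking g to be the identity link recovers the plain additive, multiple-regression-like form.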
Further development of this article might be needed
I think this article only superficially covers this increasingly important area of applied statistics. To understand the importance of GAMs, one should ask how realistic it is that a particular variable has a purely linear effect, which is the restriction imposed by GLMs and linear models. Non-linear least squares, on the other hand, can be very time-consuming and does not provide a clear inferential framework, not to mention the convergence problems that can arise.
Of course over-fitting as well as under-fitting can be a problem, but there are many methods for tuning the model: ordinary cross-validation (OCV), GCV, AIC, BIC, or even effect plots (especially in the R package mgcv). Simulation studies show that when GAMs are used appropriately they almost always outperform other methods in a wide variety of applications. Stats30 (talk) 00:03, 9 March 2009 (UTC)
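For readers unfamiliar with the criteria listed above: the generalized cross-validation score that is typically minimized to choose the smoothing parameters (stated here from standard references, not from the article) is

\mathrm{GCV}(\lambda) = \frac{n\,\lVert y - A_\lambda y\rVert^2}{\bigl(n - \operatorname{tr}(A_\lambda)\bigr)^2},

where A_\lambda is the influence (hat) matrix of the penalized fit and \operatorname{tr}(A_\lambda) plays the role of the effective degrees of freedom.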
Mostly it will sum to Gaussian noise, except for specific inputs that induce some correlation in the function outputs.
Then is it not some form of associative memory? As a simple example, you have a locality-sensitive hash whose output bits you view as +1/-1.
Weight each bit and sum to get a recalled value. To train, recall and calculate the error, divide it by the number of bits, and then add or subtract that amount as appropriate to each weight to make the error zero (see the sketch below this comment). Spreading out the error term that way de-correlates it: when there is non-similar input, the error-term fragments sum to mean-zero, low-level Gaussian noise.
You can use a predetermined random pattern of sign flips applied to the elements of a 1-D vector, followed by the fast Walsh–Hadamard transform, to get a random projection (RP). Repeat for better quality. Then you can binarize the output of the RP to get a fast locality-sensitive hash. Anyway, if you understand these things you can see that associative memory = extreme learning machines = reservoir computing, etc. — Preceding unsigned comment added by 113.190.221.54 (talk) 11:33, 24 February 2019 (UTC)
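To make the above concrete, here is a minimal C++ sketch of the recall-and-update scheme described in that comment. It is only an illustration under stated assumptions: the input dimension is a power of two, a single sign-flip + Walsh–Hadamard round is used for the random projection, and the names (fwht, AssociativeMemory, recall, train) are invented for the example rather than taken from any library.

#include <random>
#include <vector>

// In-place fast Walsh-Hadamard transform; v.size() must be a power of two.
void fwht(std::vector<double>& v) {
    for (std::size_t len = 1; len < v.size(); len <<= 1) {
        for (std::size_t i = 0; i < v.size(); i += len << 1) {
            for (std::size_t j = i; j < i + len; ++j) {
                double a = v[j], b = v[j + len];
                v[j] = a + b;
                v[j + len] = a - b;
            }
        }
    }
}

struct AssociativeMemory {
    std::vector<double> signs;   // predetermined random +/-1 sign-flip pattern
    std::vector<double> weights; // one weight per hash bit

    AssociativeMemory(std::size_t dim, unsigned seed) : signs(dim), weights(dim, 0.0) {
        std::mt19937 rng(seed);
        std::bernoulli_distribution coin(0.5);
        for (double& s : signs) s = coin(rng) ? 1.0 : -1.0;
    }

    // Locality-sensitive hash: sign flip, Walsh-Hadamard transform (random projection), binarize to +/-1.
    std::vector<double> hash(const std::vector<double>& x) const {
        std::vector<double> v(x.size());
        for (std::size_t i = 0; i < x.size(); ++i) v[i] = x[i] * signs[i];
        fwht(v);
        for (double& e : v) e = (e >= 0.0) ? 1.0 : -1.0;
        return v;
    }

    // Recall: weighted sum of the hash bits.
    double recall(const std::vector<double>& x) const {
        std::vector<double> bits = hash(x);
        double out = 0.0;
        for (std::size_t i = 0; i < bits.size(); ++i) out += bits[i] * weights[i];
        return out;
    }

    // Train: spread the recall error equally over all bits so this example is stored exactly.
    void train(const std::vector<double>& x, double target) {
        std::vector<double> bits = hash(x);
        double predicted = 0.0;
        for (std::size_t i = 0; i < bits.size(); ++i) predicted += bits[i] * weights[i];
        double err = (target - predicted) / static_cast<double>(bits.size());
        for (std::size_t i = 0; i < bits.size(); ++i) weights[i] += bits[i] * err;
    }
};

Repeating the sign-flip plus transform step with different sign patterns, as the comment suggests, would give a better-quality random projection; a single round is kept here for brevity.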
:Does this have anything at all to do with generalized additive models? --Qwfp (talk) 17:35, 24 February 2019 (UTC)
Generalized additive models with pairwise interactions (GA2Ms)