Function approximation

Several progressively more accurate approximations of the step function.
An asymmetrical Gaussian function fit to a noisy curve using regression.

In general, a function approximation problem asks us to select a function among a well-defined class that closely matches ("approximates") a target function in a task-specific way.[1] The need for function approximations arises in many branches of applied mathematics, and in computer science in particular, such as predicting the growth of microbes in microbiology.[2] Function approximations are used where theoretical models are unavailable or hard to compute.[2]

One can distinguish two major classes of function approximation problems:

First, for known target functions, approximation theory is the branch of numerical analysis that investigates how certain known functions (for example, special functions) can be approximated by a specific class of functions (for example, polynomials or rational functions) that often have desirable properties (inexpensive computation, continuity, integral and limit values, etc.).[3]
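
For instance, one might approximate the exponential function on the interval [−1, 1] by a low-degree polynomial. The following Python sketch (a minimal illustration assuming NumPy's polynomial module, not a method prescribed by the sources) fits a degree-5 Chebyshev interpolant and reports its maximum error:

    import numpy as np
    from numpy.polynomial import Chebyshev

    # Known target function: the exponential, given by an explicit formula.
    f = np.exp

    # Fit a degree-5 Chebyshev interpolant on [-1, 1]; NumPy samples f at
    # the Chebyshev points of the interval.
    p = Chebyshev.interpolate(f, deg=5, domain=[-1, 1])

    # Measure how closely the cheap-to-evaluate polynomial matches the target.
    x = np.linspace(-1, 1, 1001)
    max_error = np.max(np.abs(f(x) - p(x)))
    print(f"maximum absolute error on [-1, 1]: {max_error:.2e}")

Such polynomial approximations are attractive because they can be evaluated using only additions and multiplications.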

Second, the target function, call it g, may be unknown; instead of an explicit formula, only a set of points of the form (x, g(x)) is provided. Depending on the structure of the domain and codomain of g, several techniques for approximating g may be applicable. For example, if g is an operation on the real numbers, techniques of interpolation, extrapolation, regression analysis, and curve fitting can be used. If the codomain (range or target set) of g is a finite set, one is dealing with a classification problem instead.[4]
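
As an illustration of the regression and curve-fitting case, the Python sketch below (assuming SciPy; the Gaussian model, its parameters, and the synthetic data are chosen only for illustration) selects, by least squares, the member of a parametric family of Gaussian curves that best matches noisy samples of an unknown target:

    import numpy as np
    from scipy.optimize import curve_fit

    # The target g is unknown; only noisy samples (x, g(x)) are available.
    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200)
    y = 2.0 * np.exp(-(x - 0.5) ** 2 / 0.8) + rng.normal(0.0, 0.1, x.size)

    # Choose a parametric class of candidate functions: here, Gaussian curves.
    def gaussian(x, amplitude, center, width):
        return amplitude * np.exp(-(x - center) ** 2 / width)

    # Least-squares regression selects the member of the class closest to the data.
    params, _ = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
    print("fitted amplitude, center, width:", params)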

To some extent, the different problems (regression, classification, fitness approximation) have received a unified treatment in statistical learning theory, where they are viewed as supervised learning problems.

References

  1. Lakemeyer, Gerhard; Sklar, Elizabeth; Sorrenti, Domenico G.; Takahashi, Tomoichi (2007-09-04). RoboCup 2006: Robot Soccer World Cup X. Springer. ISBN 978-3-540-74024-7.
  2. Basheer, I.A.; Hajmeer, M. (2000). "Artificial neural networks: fundamentals, computing, design, and application" (PDF). Journal of Microbiological Methods. 43 (1): 3–31. doi:10.1016/S0167-7012(00)00201-3. PMID 11084225. S2CID 18267806.
  3. Mhaskar, Hrushikesh Narhar; Pai, Devidas V. (2000). Fundamentals of Approximation Theory. CRC Press. ISBN 978-0-8493-0939-7.
  4. Charte, David; Charte, Francisco; García, Salvador; Herrera, Francisco (2019-04-01). "A snapshot on nonstandard supervised learning problems: taxonomy, relationships, problem transformations and algorithm adaptations". Progress in Artificial Intelligence. 8 (1): 1–14. arXiv:1811.12044. doi:10.1007/s13748-018-00167-7. ISSN 2192-6360. S2CID 53715158.
