October 20, 2004

Parameter Estimation

There is a progression of sophistication in parameter estimation, which goes as follows, from least to most sophisticated:

  • Maximum Likelihood:
    This is a point estimate of a parameter that defines an unobservable distribution. It is obtained by choosing the parameter setting that maximizes the likelihood of the model generating the observed data. It is a poor estimate when the data are sparse.
    In practice the maximum likelihood parameter is found by taking the first derivative of the likelihood (or log-likelihood) function and solving analytically for the parameter setting that makes the derivative zero, assuming the likelihood is differentiable and the stationary point is in fact a maximum. (See the first sketch after this list.)
  • Maximum a Posteriori Estimate:
    This is also a point estimate of the parameter. But this technique behaves better in the presence of sparse data because it assumes a prior distribution over the parameters, which is known and taken into account when estimating the parameterization of the likelihood function. (See the second sketch after this list.)
  • Bayesian Estimation:
    This technique estimates a distribution over the parameter rather than a point estimate of it. It assumes a prior distribution and produces a posterior distribution over the parameter of the likelihood function. (See the third sketch after this list.)
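To make the derivative-to-zero procedure concrete, here is a minimal sketch in Python, assuming a coin-flip (Bernoulli) model; the function names and the observed counts are hypothetical, chosen only for illustration.

    import math

    def bernoulli_log_likelihood(theta, heads, tails):
        """Log-likelihood of the data under P(heads) = theta."""
        return heads * math.log(theta) + tails * math.log(1 - theta)

    def mle_bernoulli(heads, tails):
        """Setting the derivative of the log-likelihood to zero,
        heads/theta - tails/(1 - theta) = 0, gives the closed form below."""
        return heads / (heads + tails)

    # With sparse data the estimate degenerates: after 3 heads and 0 tails
    # the model claims tails are impossible.
    print(mle_bernoulli(3, 0))    # 1.0
    print(mle_bernoulli(70, 30))  # 0.7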
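For the MAP version, the same model can be given a prior. This sketch assumes a Beta(2, 2) prior on theta, which is my illustrative choice rather than anything from the post; maximizing prior times likelihood again has a closed form.

    def map_bernoulli(heads, tails, alpha=2.0, beta=2.0):
        """MAP estimate of theta under a Beta(alpha, beta) prior.
        Maximizing the posterior (prior * likelihood) analytically gives:
        theta_hat = (heads + alpha - 1) / (heads + tails + alpha + beta - 2)."""
        return (heads + alpha - 1) / (heads + tails + alpha + beta - 2)

    # The same sparse data no longer produces a degenerate estimate:
    print(map_bernoulli(3, 0))  # 0.8 rather than the maximum likelihood 1.0

The prior acts like pseudo-counts, pulling the estimate away from the extremes when there is little data.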
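Finally, a sketch of Bayesian estimation for the same model: the Beta prior is conjugate to the Bernoulli likelihood, so the posterior is again a Beta distribution, and the answer is that whole distribution rather than one number. The Beta(2, 2) prior is again an assumption for illustration.

    import math

    def beta_pdf(theta, a, b):
        """Density of Beta(a, b) evaluated at theta."""
        norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
        return norm * theta ** (a - 1) * (1 - theta) ** (b - 1)

    def posterior_params(heads, tails, alpha=2.0, beta=2.0):
        """Beta(alpha, beta) prior + Bernoulli likelihood
        -> Beta(alpha + heads, beta + tails) posterior."""
        return alpha + heads, beta + tails

    a, b = posterior_params(3, 0)
    # The result is the entire posterior curve over theta:
    for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"p(theta = {theta} | data) = {beta_pdf(theta, a, b):.3f}")
    # A point summary can still be read off, e.g. the posterior mean:
    print("posterior mean:", a / (a + b))  # 5/7, about 0.714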
Posted by djp3 at October 20, 2004 10:29 AM