Include Experimental Error in fit

Hello,

I have some data that should fit a straight line. However, there is known experimental error in acquiring this data, in both X and Y. Is it possible to include this experimental error in the CurveFit routine? I ask because some of the experiments have rather large experimental error, and including it would give a much better fit result.

For simplicity, let's say the X error is 0.01 and the Y error is 0.03 for every data point (in reality the values vary from point to point due to the efficiency of the equipment).
John,

Thank you very much for this helpful link.

While this helps greatly, it is not quite what I was asking about, so I'll try to elaborate. Say I have an optical measurement at various temperatures. The experimental error of the system is temperature dependent. As such, I am wondering if it is possible to supply known values for the experimental error to be used in the fit routine. I think of this like providing a confidence interval.

Let's say I'm collecting data using a silicon photodiode. Due to the spectral response of the diode, weak signals near its detection limit are more difficult to resolve from the background noise than those near its peak sensitivity. As such, I would like to assign an error value to each point across the spectrum based on its signal-to-noise ratio. This would be the error for the Y coordinate, while the X coordinate error would come from another source (the temperature sensor).
John,
I am interested in the ODR. Presumably the horizontal error bars are assumed to be 1 SD? Is this method of fitting identical to taking an unsmeared model curve and convolving it with a Gaussian kernel that has the same SD as the horizontal error bar for that point?
andyfaff wrote:
Presumably the horizontal error bars are assumed to be 1 SD?

Um, yes, I guess. You can apply a weighting wave to the X and the Y values just like you can for Y values in an ordinary fit. The same flags apply to both, so you can have either 1/SD or SD in the weighting wave. I'm not sure why I supported that obsolete 1/SD thing...
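For readers who want to experiment outside Igor: the ODRPACK library cited below is also wrapped by SciPy, and its `RealData` class takes standard deviations for both coordinates directly, much like an SD-convention weighting wave. A sketch with per-point errors (the data, error model, and starting guesses here are made up for illustration):

```python
import numpy as np
from scipy import odr

# Synthetic straight-line data; slope 2 and intercept 1 chosen arbitrarily.
rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 50)
y_true = 2.0 * x_true + 1.0

# Per-point standard deviations, analogous to weighting waves holding SDs:
sx = np.full_like(x_true, 0.01)            # X error, e.g. temperature sensor
sy = 0.03 + 0.02 * np.abs(np.sin(x_true))  # Y error varying across the sweep

x = x_true + rng.normal(0.0, sx)
y = y_true + rng.normal(0.0, sy)

def line(beta, x):
    """Straight-line model: beta[0]*x + beta[1]."""
    return beta[0] * x + beta[1]

# RealData takes SDs directly (the SD convention, not 1/SD);
# ODRPACK converts them to weights internally.
data = odr.RealData(x, y, sx=sx, sy=sy)
fit = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0])
out = fit.run()

print(out.beta)     # fitted [slope, intercept]
print(out.sd_beta)  # standard errors of the coefficients
```

The fitted coefficients should come out close to the slope of 2 and intercept of 1 used to generate the data.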

Quote:
Is this method of fitting identical to taking an unsmeared model curve and convolving it with a Gaussian kernel that has the same SD as the horizontal error bar for that point?

Hm. I doubt it. That would make a smoothed model.

The method minimizes the squared perpendicular distance between each data point and the model rather than the squared vertical distance. This is better defined for a straight-line model than for a general nonlinear model, but that's what it does.
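For a straight line, this perpendicular-distance picture can be checked numerically. In the sketch below (again SciPy's ODRPACK wrapper rather than Igor, with made-up data), an unweighted ODR objective should reduce to the sum of squared perpendicular distances to the fitted line:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 30)
y = 3.0 * x - 2.0 + rng.normal(0.0, 0.2, x.size)

def line(beta, x):
    return beta[0] * x + beta[1]

# Unit standard deviations in both coordinates -> unweighted ODR.
data = odr.RealData(x, y, sx=1.0, sy=1.0)
out = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()

a, b = out.beta
# Squared perpendicular distance from each (x, y) to the line y = a*x + b.
perp_sq = (y - a * x - b) ** 2 / (1.0 + a ** 2)

# out.sum_square is the ODR objective, sum(delta^2 + eps^2) with unit
# weights, where delta are the fitted X perturbations (out.delta).
print(out.sum_square, perp_sq.sum())
```

The two printed numbers should agree to within solver tolerance, since minimizing over the X perturbation at each point collapses delta² + eps² into the perpendicular form.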

In practice it solves for perturbations to the X values in addition to solving for the fit coefficients. You can read the papers:

Boggs, P.T., R.H. Byrd, and R.B. Schnabel, A Stable and Efficient Algorithm for Nonlinear Orthogonal Distance Regression, SIAM Journal on Scientific and Statistical Computing, 8, 1052-1078, 1987.

Boggs, P.T., R.H. Byrd, J.R. Donaldson, and R.B. Schnabel, Algorithm 676: ODRPACK: Software for Weighted Orthogonal Distance Regression, ACM Transactions on Mathematical Software, 15, 348-364, 1989.

Boggs, P.T., J.R. Donaldson, R.B. Schnabel and C.H. Spiegelman, A Computational Examination of Orthogonal Distance Regression, Journal of Econometrics, 38, 169-201, 1988.

John Weeks
WaveMetrics, Inc.
support@wavemetrics.com