Convolution of experimental resolution function with analytical function for fitting experimental data

Hello,

I need to fit experimentally measured data (quasi-elastic neutron scattering spectra) to the convolution of an experimentally measured resolution function with an analytical function (a Lorentzian function). Is there any way I can do this in Igor? If it is possible, could anybody explain to me how to do it? I would like to include it in my general data analysis procedure.

Thanks!

Irene
Hello irenecalvo,

Is your data 1D? 2D?

If your resolution function is Gaussian, then the convolution of this with a Lorentzian is called a Voigt. There is a fitting function for the Voigt lineshape in the "Multipeak Fitting 2" package (Analysis > Packages > Multipeak Fitting > Multipeak Fitting 2).

You may also investigate "all-at-once" fitting. In the command line type:
DisplayHelpTopic "All-At-Once Fitting Functions"


Here's an all-at-once example with a Fermi-Dirac function convolved with a Gaussian. Apologies if this is a bit complicated.
Function FermiLinearDosWithRes(pw, yw, xw) : FitFunc    // linear DOS * Fermi, convolved with Gaussian energy resolution
    Wave pw, yw, xw
   
    // pw[0] = offset
    // pw[1] = Fermi level
    // pw[2] = T
    // pw[3] = DOS offset
    // pw[4] = DOS slope
    // pw[5] = energy resolution FWHM
   
    Variable kB = 8.6173e-5 // Boltzmann k in eV/K
   
    // Make the resolution function wave W_res.
    Variable x0 = xw[0]
    Variable dx = (xw[inf]-xw[0])/(numpnts(xw)-1)                           // assumes even data spacing, which is necessary for the convolution anyway
    Make/FREE/D/N=(min(max(abs(3*pw[5]/dx), 5), numpnts(yw))) W_res // make the Gaussian resolution wave
    Redimension/N=(numpnts(W_res)+!mod(numpnts(W_res), 2)) W_res    // force W_res to have odd length
    SetScale/P x, -dx*(numpnts(W_res)-1)/2, dx, W_res
    W_res = gauss( x, 0, pw[5]/(2*sqrt(2*ln(2))) )
    Variable a = sum(W_res)
    W_res /= a
   
    // To eliminate edge effects due to the convolution, add points to yw, make the spectrum,
    // do the convolution, and then delete the extra points.
    Redimension/N=(numpnts(yw)+2*numpnts(W_res)) xw, yw
    xw = (x0-numpnts(W_res)*dx) + p*dx
    yw = pw[0] + ((pw[3] + pw[4]*xw[p])/(1 + exp((xw[p] - pw[1])/(kB*pw[2]))))
    Convolve/A W_res, yw
    DeletePoints 0, numpnts(W_res), xw, yw
    DeletePoints numpnts(yw)-numpnts(W_res), numpnts(W_res), xw, yw
End
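
For reference, a hypothetical call might look like the following; the data-wave and x-wave names and the initial guesses are placeholders for your own:

Make/D/O coefs = {0, 0.02, 300, 1, 0.1, 0.01}   // offset, Fermi level, T, DOS offset, DOS slope, resolution FWHM (placeholder guesses)
FuncFit FermiLinearDosWithRes, coefs, mySpectrum /X=myEnergies /D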


Finally, since it sounds like your resolution function has already been determined by other means, you could first deconvolve the data and then fit a plain Lorentzian. For example, if your data is 2D, you might use ImageRestore. This is appropriate because (I believe) you have Poisson (counting) statistics. From the command line:
DisplayHelpTopic "ImageRestore"


You might also refer to, e.g.,
DisplayHelpTopic "Deconvolution"

but note that, in general, I believe you will get better results from fitting a convolved spectral function (using all-at-once fitting) than from fitting deconvolved data.

Good luck,
Nick
Dear Nick,

Thanks for your code. I modified it slightly, following another paper:

M. G. Helander, M. T. Greiner, Z. B. Wang, and Z. H. Lu, Review of Scientific Instruments 82, 096107 (2011).

But I have a more general question concerning home-made fit functions. The chi-square value and the fit parameter errors (the W_sigma wave) are very high, even though the fit curve without any doubt fits the data very well. Do you have any idea how these W_sigma values are calculated and why home-made functions are so prone to such large errors?

Enclosed is the modified code, maybe the problem lies there:

Function Fermi_Edge(pw, yw, xw) : FitFunc   // linear DOS * Fermi, convolved with Gaussian energy resolution
    Wave pw, yw, xw

    // pw[0] = offset
    // pw[1] = Fermi level
    // pw[2] = T
    // pw[3] = DOS offset
    // pw[4] = DOS slope
    // pw[5] = energy resolution FWHM

    Variable kB = 8.6173e-5 // Boltzmann k in eV/K

    // Make the resolution function wave W_res.
    Variable x0 = xw[0]
    Variable dx = (xw[inf]-xw[0])/(numpnts(xw)-1)                   // assumes even data spacing, which is necessary for the convolution anyway
    Make/FREE/D/N=(min(max(abs(3*pw[5]/dx), 5), numpnts(yw))) W_res // make the Gaussian resolution wave
    Redimension/N=(numpnts(W_res)+!mod(numpnts(W_res), 2)) W_res    // force W_res to have odd length
    SetScale/P x, -dx*(numpnts(W_res)-1)/2, dx, W_res
    W_res = gauss( x, 0, pw[5]/(2*sqrt(2*ln(2))) )
    Variable a = sum(W_res)
    W_res /= a

    // To eliminate edge effects due to the convolution, add points to yw, make the spectrum,
    // do the convolution, and then delete the extra points.
    Redimension/N=(numpnts(yw)+2*numpnts(W_res)) xw, yw
    xw = (x0-numpnts(W_res)*dx) + p*dx
    // Original: yw = pw[0] + ((pw[3] + pw[4]*xw[p])/(1 + exp((xw[p] - pw[1])/(kB*pw[2]))))
    yw = pw[0]*(1/(1 + exp((xw[p] - pw[1])/(kB*pw[2])))+Heavyside(xw[p], pw[1])*(pw[3] + pw[4]*xw[p]))
    Convolve/A W_res, yw
    DeletePoints 0, numpnts(W_res), xw, yw
    DeletePoints numpnts(yw)-numpnts(W_res), numpnts(W_res), xw, yw
End

// Additional function needed for the fit.
// Returns 1 when value <= ref and 0 when value > ref, i.e. the step is "on" below the reference.
Function Heavyside(value, ref)
    Variable value, ref

    If(value > ref)
        Return 0
    Else
        Return 1
    EndIf
End
If your coefficient errors are very large for a fit that appears to the eye to be very good, it is likely that you are suffering from "identifiability" problems. This is usually accompanied by the fit needing many iterations; normally a fit converges in just a few iterations (fewer than 10).

Here is my stock description of identifiability problems:

---------------------------------------
The large error range, combined with needing a large number of iterations for convergence, is strongly suggestive of "identifiability" problems. That is, two or more of the fit coefficients trade off in a way that makes it nearly impossible to solve for the values of both at once. They are correlated in such a way that if you adjust one, you can find a value of the other that makes a fit that is nearly as good. Identifiability problems are generally accompanied by large estimated errors on the fit coefficients.

When the correlation is too strong, the fit doesn't know where to go; it will wander around in a coefficient space where a broad range of points all seem about equally good. The usual result is apparent convergence but with large estimated values in W_sigma, or a singular matrix error. The error estimates are based on the curvature of the chi-square surface around the solution point. A flat-bottomed chi-square surface, such as results from having many solutions that are nearly as good, will result in large errors.

You can diagnose identifiability problems after a successful fit by looking at the correlation matrix. To learn how to get it from an Igor fit, execute this command on the command line:

DisplayHelpTopic "Correlation matrix"

It requires that you use the /M=2 flag with FuncFit (a sketch follows just after this description). You can set that flag on the Output Options tab of the Curve Fitting dialog by checking the Covariance Matrix checkbox.
---------------------------------------
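
For illustration, here is a minimal sketch of computing the correlation matrix from M_Covar after such a fit; the fit-function and wave names are placeholders:

Function ShowCorrelationMatrix()
    Wave coefs, mySpectrum, myEnergies                          // placeholder names for your coefficient, data, and x waves
    FuncFit/M=2 Fermi_Edge, coefs, mySpectrum /X=myEnergies     // /M=2 makes FuncFit create the covariance matrix M_Covar
    Wave M_Covar
    Duplicate/O M_Covar, CorrelationMatrix
    CorrelationMatrix = M_Covar[p][q]/sqrt(M_Covar[p][p]*M_Covar[q][q])
    // Off-diagonal entries near +1 or -1 indicate strongly correlated (poorly identifiable) coefficients.
End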

You ask why this problem seems to affect "home-made" fit functions particularly. That is probably because for the built-in fit functions we have taken pains to avoid the situations that lead to this problem. Note that this is a mathematical problem, not a software issue. It is a problem with your fit function, and possibly with your data. Sometimes a fit coefficient has a strong effect on the shape of the model curve only over a limited range of the independent variable. If you don't sample that range adequately, you can have identifiability problems even with a well-defined fit function.

John Weeks
WaveMetrics, Inc.
support@wavemetrics.com
I might make a few suggestions:

* Use Static Constant kB = … at the top of your procedure file rather than kB = … inside the function
* You probably know both the temperature and the resolution of your system, so remove them as fit parameters
-- Static Constant kTE = (kB)*(T) // put the number here
-- Static Constant res = …
* Since you probably know the instrument resolution and it's probably fixed, create the transmission function wave outside the fit function and reference it via a global wave statement (see the sketch after this list)
* To minimize Gibbs oscillations, instead of adding and removing points inside the function, mirror yw and set its number of points externally before the function call to do the multi-fit. See the code snippets referenced at http://www.igorexchange.com/node/5463
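
For example, a minimal sketch of the constants and global-resolution-wave suggestions (the numeric values and the wave name root:W_res are placeholders, not your actual instrument values):

Static Constant kB = 8.6173e-5      // Boltzmann k in eV/K
Static Constant kTE = 0.02585       // kB*T with T fixed at your measurement temperature (placeholder)
Static Constant kResFWHM = 0.020    // fixed instrument resolution FWHM in eV (placeholder)

Function Fermi_Edge_Fixed(pw, yw, xw) : FitFunc
    Wave pw, yw, xw
    Wave W_res = root:W_res         // resolution wave created once, outside the fit function
    // ... build the spectrum and Convolve/A W_res, yw as in the functions above ...
End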

--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAHuntsville
@johnweeks:

Thank you for your answer, I'm really trying to figure out how to do proper fittings and your answer will certainly help me a lot.

Indeed, it appears that the pw[0] coefficient is strongly correlated with the pw[3] and pw[4] coefficients. The fitting process can change pw[0] by a huge amount and compensate by changing pw[3] and pw[4] accordingly.

This is an intrinsic problem of the model I use for fitting the Fermi edge. I'll try to find a better one in the literature, or at least one whose parameters are less strongly correlated.

@jjweimer:

Thank you very much for the DoMirror functions. They improved my error bars on the pw[3] and pw[4] coefficients a lot.

Indeed, the temperature is more or less known and can be a constant. However, the instrumental resolution is not known exactly. Instead, we can use the Fermi edge measured at low temperature to determine the experimental broadening due to the instrument + photon source in my case.

In any case, thank you both for your help, I'll post the final code when I have a satisfactory fit function.

Best,

Julien

Hi all,

Well, I think my function is not well suited to this kind of fitting; the reference I cited is not very relevant.

I'm now using a slightly modified version of Nick's function, which is way better. It works fine with genetic curve fitting too. Error bars are back to normal.

Thank you for your help on these matters anyway, it's always a pleasure sharing stuff over igorexchange.

Best,

Julien
jer wrote:
Thank you very much for the DoMirror functions. They improved my error bars on the pw[3] and pw[4] coefficients a lot.


Glad they are useful.

jer wrote:
… However, the instrumental resolution is not known exactly. Instead, we can use the Fermi edge measured at low temperature to determine the experimental broadening due to the instrument + photon source in my case.


The resolution does not change from experiment to experiment? So, once you have a reasonable guess for it over a statistically large enough sample size, you might consider fixing that value for all subsequent analysis. In essence, you might want to think about how you could do a large number of "calibration" experiments just to get the line-broadening parameter. Then, assign that work as an undergraduate student research project. :-)

Alternatively, do you have any way to measure it for your instrument? Use, for example, a point source with a defined line width that shines directly into the analyzer system. Or measure it by deconvolving the spectrum of a signal that has a known intrinsic line width.

--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAHuntsville
That's true: we should take some time to carefully characterize our experimental broadening for several experimental conditions. I just have to find the time, or the undergrad, to do that.

Thank you for all your help,

Julien
Hi,
I have to convolve demagnetization curves with a Gaussian that describes the time resolution of our X-rays. For this I used your code above. After the convolution, the convolved wave is shifted to the right. The amount of shift depends on the Gaussian width.

Do you have any ideas about this? Also, how do you make your xw and yw waves in advance?

My modified code:

Function DeMagWithRes(pw, yw0, xw0)    
    Wave pw, yw0, xw0
 
    Duplicate/O xw0, xw
    Duplicate/O yw0, yw
   
    // pw[0] = offset
    // pw[1] = A                // magnetization for t <0
    // pw[2] = B               // demagnetization rate
    // pw[3] = tD             // demagnetzization time, ~0.1 ps
    // pw[4] = tR              // remagnetization (or relaxation) time ~ 1 ps
    // pw[5] = time resolution FWHM   // time resolution of the x-rays: ~ 70 ps
    // pw[6] = t0             // time zero = 0
   
 
   
    // Make the resolution function wave W_res.
    Variable x0 = xw[0]
    Variable dx = (xw[inf]-xw[0])/(numpnts(xw)-1)                       // assumes even data spacing, which is necessary for the convolution anyway
    Make/O/D/N=(min(max(abs(3*pw[5]/dx), 5), numpnts(yw))) W_res        // make the Gaussian resolution wave
    Redimension/N=(numpnts(W_res)+!mod(numpnts(W_res), 2)) W_res    // force W_res to have odd length
    SetScale/P x, -dx*(numpnts(W_res)-1)/2, dx, W_res
    W_res = gauss( x, pw[6], pw[5] )
    Variable a = sum(W_res)
    W_res /= a
 
    // To eliminate edge effects due to the convolution, add points to yw, make the spectrum,
    // do the convolution, and then delete the extra points.
    Redimension/N=(numpnts(yw)+2*numpnts(W_res)) xw, yw
    xw = (x0-numpnts(W_res)*dx) + p*dx

    yw = xw[p] > 0 ? (pw[1]- pw[2]*(1-Exp(-(xw[p] - pw[6])/pw[3] ))*Exp(-(xw[p] - pw[6])/pw[4]) ) :  pw[1]
    Duplicate/O yw, yw_conv
   
    Convolve W_res, yw_conv
    DeletePoints 0, numpnts(W_res), xw, yw
    DeletePoints numpnts(yw)-numpnts(W_res), numpnts(W_res), xw, yw
End


Thanks
Quote:

    Duplicate/O yw, yw_conv
 
    Convolve W_res, yw_conv
    DeletePoints 0, numpnts(W_res), xw, yw
    DeletePoints numpnts(yw)-numpnts(W_res), numpnts(W_res), xw, yw


Here you first duplicate the output wave yw into yw_conv. Then you convolve yw_conv with the Gaussian broadening kernel. Then you delete points from yw, not yw_conv, twice. At no point are you putting the result of the convolution into yw. You need to delete the points from yw_conv, then assign the result to yw:
    yw = yw_conv
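
For example, the tail of the function could read as follows (a sketch only; it uses Convolve/A, as in the original example, so the number of points is preserved, and it shrinks yw back to the trimmed length before the assignment):

    Duplicate/O yw, yw_conv
    Convolve/A W_res, yw_conv
    DeletePoints 0, numpnts(W_res), xw, yw_conv
    DeletePoints numpnts(yw_conv)-numpnts(W_res), numpnts(W_res), xw, yw_conv
    Redimension/N=(numpnts(yw_conv)) yw     // bring yw back to the same length as the trimmed yw_conv
    yw = yw_conv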

But read the documentation on all-at-once fit functions to learn about the gotchas involved, including the pitfalls of doing a simple assignment like that:
DisplayHelpTopic "All-At-Once Fitting Functions"
It looks to me like your function probably takes into account the number of points in yw, but it may assume that the input data are evenly spaced in X.

John Weeks
WaveMetrics, Inc.
support@wavemetrics.com