Cross-correlation term in the 2D Gaussian fitting function

Hi all,

I have a question: what is the meaning of the cross-correlation term in the 2D Gaussian fitting function? At first I assumed it indicates the angle of the principal axes relative to the x and y axes, but after working through the math, it seems not.

Here is the reasoning:
In Igor, the 2D Gaussian has the following form: z0 + A*exp{-1/(2(1-cor^2)) * [((x-x0)/xWidth)^2 + ((y-y0)/yWidth)^2 - 2*cor*(x-x0)*(y-y0)/(xWidth*yWidth)]}. Compare this to the general 2D Gaussian form f = A*exp{-[a(x-x0)^2 + 2b(x-x0)(y-y0) + c(y-y0)^2]}, in which a = cos^2(theta)/(2*xWidth^2) + sin^2(theta)/(2*yWidth^2), c = sin^2(theta)/(2*xWidth^2) + cos^2(theta)/(2*yWidth^2), and theta is the rotation angle (the angle between the Gaussian's principal axis and the x axis). (You can see the definition of the general 2D Gaussian function here: http://en.wikipedia.org/wiki/Gaussian_function)

Comparing these two definitions, I found cor = 0, so it seems that cor is not related to the angle theta. Does anyone know the meaning of cor in the 2D Gaussian fitting function?

Thanks!
'cor' in the Gauss2D() function (the same as used in the Gauss2D curve fit) is the correlation coefficient between two jointly Gaussian random variables (GRVs).

In simplest terms, if x and y are jointly Gaussian zero-mean random variables, then

cor*sigmax*sigmay = E[x*y], where E[ ] is the expectation (or statistically averaged value) of the argument.

The angle you refer to is just a convenient representation of the axis orientations for the uncorrelated GRVs found by the appropriate orthogonal transformation of the correlated GRVs. In a circularly symmetric distribution (with equal variances and cor=0) the angle is undefined.
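As a quick numerical illustration of the definition above (a sketch; the values of cor, sigmax and sigmay are my own example choices), you can draw correlated samples and check that the sample average of x*y approaches cor*sigmax*sigmay:

```python
import numpy as np

# Sketch: draw jointly Gaussian zero-mean x, y with a chosen correlation
# 'cor' and check that E[x*y] = cor*sigmax*sigmay.  Example values are mine.
rng = np.random.default_rng(0)
cor, sigmax, sigmay = 0.6, 2.0, 0.5

cov = np.array([[sigmax**2,             cor * sigmax * sigmay],
                [cor * sigmax * sigmay, sigmay**2            ]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

print(np.mean(x * y))          # sample estimate of E[x*y], close to 0.6
print(cor * sigmax * sigmay)   # the exact value it should approach (0.6)
```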
s.r.chinn wrote:
'cor' in the Gauss2D() function (the same as used in the Gauss2D curve fit) is the correlation coefficient between two jointly Gaussian random variables (GRVs).

In simplest terms, if x and y are jointly Gaussian zero-mean random variables, then

cor*sigmax*sigmay = E[x*y], where E[ ] is the expectation (or statistically averaged value) of the argument.

The angle you refer to is just a convenient representation of the axis orientations for the uncorrelated GRVs found by the appropriate orthogonal transformation of the correlated GRVs. In a circularly symmetric distribution (with equal variances and cor=0) the angle is undefined.


Thanks Chinn! Following your information, I searched online and found some material on this kind of bivariate normal density. But I still have some questions I'd like to clear up.
http://users.ece.gatech.edu/~juang/3075/ECE3075A-17.pdf
http://www.site.uottawa.ca/~yymao/courses/elg3121/summer2006/scripts/st…
http://math.tntech.edu/ISR/Introduction_to_Probability/Joint_Distributi…

I understand that if cor != 0, the semi-axes of the ellipse are not parallel to the x and y axes. But what I want is to calculate the FWHM along the 2D Gaussian's principal axes.
So if cor != 0, what is the meaning of sigmax and sigmay? Is 2*sqrt(2*ln(2))*sigmax(y) the FWHM along a principal axis, or is it the length of the cut through the 2D Gaussian at half maximum height parallel to the x (y) axis? If 2*sqrt(2*ln(2))*sigmax is not the FWHM along a principal axis, how can I calculate that FWHM from cor, sigmax and sigmay?

Thanks!
I will give you the simplest example of zero-mean GRVs, with unit sigma (standard deviation). The variance along the semi-major axis is (1+cor), and the variance along the semi-minor axis is (1-cor). All definitions of variance and standard deviation (square root of variance) of the transformed variables retain their usual meaning and definitions.
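A quick numerical check of that unit-sigma example (cor = 0.4 is my own example value): the covariance matrix is [[1, cor], [cor, 1]], and its eigenvalues, the variances along the principal axes, come out to 1 - cor and 1 + cor.

```python
import numpy as np

# Unit-sigma example: sigmax = sigmay = 1, so the covariance matrix is
# [[1, cor], [cor, 1]].  Its eigenvalues are the variances along the
# semi-minor and semi-major axes, i.e. 1 - cor and 1 + cor.
cor = 0.4                                   # example correlation, my choice
Sigma = np.array([[1.0, cor], [cor, 1.0]])
lams, vecs = np.linalg.eigh(Sigma)          # eigh returns ascending eigenvalues
print(lams)                                 # [1 - cor, 1 + cor], up to rounding
```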

I suggest you find some good references on random variable analysis and application. "Random Signals and Noise" (Davenport and Root) or "Detection, Estimation, and Modulation Theory" (Van Trees) treat this problem in detail.
s.r.chinn wrote:
I will give you the simplest example of zero-mean GRVs, with unit sigma (standard deviation). The variance along the semi-major axis is (1+cor), and the variance along the semi-minor axis is (1-cor). All definitions of variance and standard deviation (square root of variance) of the transformed variables retain their usual meaning and definitions.

I suggest you find some good references on random variable analysis and application. "Random Signals and Noise" (Davenport and Root) or "Detection, Estimation, and Modulation Theory" (Van Trees) treat this problem in detail.


Thanks Chinn! I'm going to have a look at those books.

But two more questions:

1. In the 2D Gaussian case we have both sigmax and sigmay; which sigma should we use as the unit for (1+cor) and (1-cor)?
2. By zero-mean, do you mean that the fitted function's center is at (0, 0)? (That's what I saw online before: a zero-mean function is symmetric around 0. If zero-mean means the function's peak is at (0, 0), I can simply shift the function right 1 unit and up 1 unit so it is centered at (1, 1), which looks non-zero-mean. But the shape of the function should not change, and the variance along the semi-axes should not change either.)
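Here is my own quick numerical check of point 2 (example covariance and shift values are mine): shifting the data moves the mean but leaves the covariance, and hence the variances along the semi-axes, unchanged.

```python
import numpy as np

# Quick check of the translation argument: shifting a distribution moves
# its mean but leaves the covariance (and hence the principal-axis
# variances) unchanged.  Example values are my own.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
pts = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)

cov0 = np.cov(pts, rowvar=False)               # covariance, centered at (0, 0)
cov1 = np.cov(pts + [1.0, 1.0], rowvar=False)  # same data, centered at (1, 1)
print(np.allclose(cov0, cov1))                 # True: the shape is unchanged
```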

Thanks!
Quote:
1. In the 2D Gaussian case we have both sigmax and sigmay; which sigma should we use as the unit for (1+cor) and (1-cor)?

I specifically said that for simplicity in my reply the standard deviations were the same and normalized to unity.
Quote:

2. By zero-mean, do you mean that the fitted function's center is at (0, 0)? (That's what I saw online before: a zero-mean function is symmetric around 0. If zero-mean means the function's peak is at (0, 0), I can simply shift the function right 1 unit and up 1 unit so it is centered at (1, 1), which looks non-zero-mean. But the shape of the function should not change, and the variance along the semi-axes should not change either.)

You need to learn the basics of handling multivariate Gaussians. Our simply providing you with formulas will not help you in the long run.
See Multivariate_normal_distribution for an introductory overview, or look at the texts I suggested. If you need to, brush up on your linear algebra first.

Look at the upper-right box at the above link. What you have to do is examine the exponential argument. By finding the eigenvalues and eigenvectors of Sigma^-1 (the inverse covariance matrix), you find the orthogonal variances and the coordinate transformation that gives the new axes for those variances in the system of uncorrelated GRVs. Subtracting the means first just represents a translation: the coordinate origin moves to the transformed mean values.
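Here is a sketch of that recipe in Python (the fit values xWidth, yWidth, cor below are hypothetical, not from a real fit). It eigendecomposes the covariance matrix Sigma itself rather than Sigma^-1; the two share eigenvectors, and the eigenvalues of Sigma are the principal-axis variances directly. The FWHM along each principal axis is then 2*sqrt(2*ln(2)) times the principal-axis standard deviation.

```python
import numpy as np

# Sketch: build the covariance matrix from hypothetical Gauss2D fit
# parameters, eigendecompose it, and convert the principal-axis standard
# deviations to FWHMs via 2*sqrt(2*ln 2)*sigma.
xWidth, yWidth, cor = 3.0, 1.5, 0.4         # hypothetical fit output

Sigma = np.array([[xWidth**2,             cor * xWidth * yWidth],
                  [cor * xWidth * yWidth, yWidth**2            ]])

lams, vecs = np.linalg.eigh(Sigma)          # eigenvalues = principal variances
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(lams)
theta = np.arctan2(vecs[1, 1], vecs[0, 1])  # orientation of the major axis;
                                            # eigh sorts ascending, so column 1
                                            # goes with the larger eigenvalue

print(fwhm)    # FWHM along the minor and major principal axes
print(theta)   # angle of the major axis from the x axis
```

Note that 2*sqrt(2*ln(2))*sigmax is the FWHM along a principal axis only when cor = 0; otherwise you need the eigenvalues as above.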