Uncorrelatedness + Joint Normality = Independence. Why? Intuition and mechanics
Two variables that are uncorrelated are not necessarily independent, as is simply exemplified by the fact that $X$ and $X^2$ are uncorrelated but not independent. However, two variables that are uncorrelated AND jointly normally distributed are guaranteed to be independent. Can someone explain intuitively why this is true? What exactly does joint normality of two variables add to the knowledge of zero correlation between two variables, which leads us to conclude that these two variables MUST be independent?
correlation normal-distribution independence joint-distribution
It is not generally the case that $X$ and $X^2$ are uncorrelated (unless you put particular conditions on the $X$ that would make them uncorrelated, but you mention none).
– Glen_b♦
2 hours ago
First, going back to the fact that correlation refers to linear relationships, please explain how X^2 is linearly related to X. Second, you seem to be stating that not only can X^2 and X be linearly related, but that they are linearly related more often than not, given the use of the word "generally". Please explain. Thank you.
– ColorStatistics
2 hours ago
@Glen_b is spot on: $X$ and $X^2$ are only uncorrelated if you specifically stipulate the range of $X$. For example, Pearson's $r \approx 0.98$ for $X$ and $X^2$ when restricting a sample of $X \sim \mathcal{N}(0,1)$ to values of $X$ greater than 1. Check it out yourself (R): X <- rnorm(n=10000); X2 <- X*X; cor(X[X>1], X2[X>1])
– Alexis
2 hours ago
@Alexis It's not just the range, but how the probabilities distribute over the values within the range. If you change the distribution, you change the correlation.
– Glen_b♦
1 hour ago
@ColorStatistics correlation is the degree of linear relationship, yes. However, the linear projection of $x^2$ onto $x$ may involve a substantial linear component. If you want to see an example with high linear correlation between a variable and its square, let $X$ take the values 0 and 1 with equal probability (e.g. record the number of heads in the toss of a single fair coin). Then corr$(X,X^2)=1$ (!). If you're free to specify the distribution of $X$, you can make the correlation between $X$ and $X^2$ take any value between $-1$ and $1$. ... ctd
– Glen_b♦
1 hour ago
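A quick numerical check of these comments, as a minimal R sketch (the seed and sample size are arbitrary): a Bernoulli variable is perfectly correlated with its square, while a standard normal variable is uncorrelated with its square because $E[X^3]=0$.

set.seed(1)
x_coin <- rbinom(1e5, size = 1, prob = 0.5)   # X takes values 0 and 1 with equal probability
cor(x_coin, x_coin^2)                         # exactly 1, since X^2 = X here
x_norm <- rnorm(1e5)                          # X ~ N(0,1), symmetric about 0
cor(x_norm, x_norm^2)                         # near 0: cov(X, X^2) = E[X^3] = 0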
2 Answers
Accepted answer (by a_statistician):
The joint probability density function (pdf) of the bivariate normal distribution is
$$f(x_1,x_2)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left[-\frac{z}{2(1-\rho^2)}\right],$$
where
$$z=\frac{(x_1-\mu_1)^2}{\sigma_1^2}-\frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}.$$
When $\rho = 0$,
$$\begin{align}f(x_1,x_2) &=\frac{1}{2\pi\sigma_1\sigma_2}\exp\left[-\frac{1}{2}\left\{\frac{(x_1-\mu_1)^2}{\sigma_1^2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}\right\}\right]\\
& = \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\left[-\frac{1}{2}\frac{(x_1-\mu_1)^2}{\sigma_1^2}\right]\cdot\frac{1}{\sqrt{2\pi}\,\sigma_2}\exp\left[-\frac{1}{2}\frac{(x_2-\mu_2)^2}{\sigma_2^2}\right]\\
&= f(x_1)\,f(x_2).\end{align}$$
So they are independent.
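As a numerical sanity check of this factorization, here is a minimal R sketch that codes the pdf above directly (the means, standard deviations, and evaluation point are arbitrary choices for illustration):

# Bivariate normal pdf, written exactly as in the answer above
dbvnorm <- function(x1, x2, mu1, mu2, s1, s2, rho) {
  z <- (x1 - mu1)^2 / s1^2 -
       2 * rho * (x1 - mu1) * (x2 - mu2) / (s1 * s2) +
       (x2 - mu2)^2 / s2^2
  1 / (2 * pi * s1 * s2 * sqrt(1 - rho^2)) * exp(-z / (2 * (1 - rho^2)))
}
x1 <- 0.7; x2 <- -1.3                             # arbitrary evaluation point
joint    <- dbvnorm(x1, x2, mu1 = 1, mu2 = 2, s1 = 0.5, s2 = 3, rho = 0)
marginal <- dnorm(x1, mean = 1, sd = 0.5) * dnorm(x2, mean = 2, sd = 3)
all.equal(joint, marginal)                        # TRUE: the density factorizes when rho = 0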
I was two lines slower than you! (+1)
– jbowman
5 hours ago
Thank you all. Elegant proof. It's clear now. It seems to me that given the flow of the proof, I should have asked what knowledge of zero correlation adds to knowledge of joint normality and not the other way around.
– ColorStatistics
5 hours ago
Answer (by Michael Hardy):
Joint normality of two random variables $X,Y$ can be characterized in either of two simple ways:
For every pair $a,b$ of (non-random) real numbers, $aX+bY$ has a univariate normal distribution.
There are random variables $Z_1,Z_2 \sim \text{i.i.d. } \operatorname{N}(0,1)$ and real numbers $a,b,c,d$ such that $$\begin{align} X & = aZ_1+bZ_2 \\ \text{and } Y & = cZ_1 + dZ_2. \end{align}$$
That the first of these follows from the second is easy to show. That the second follows from the first takes more work, and maybe I'll post on it soon . . .
If the second one is true, then $\operatorname{cov}(X,Y) = ac + bd.$
If this covariance is $0,$ then the vectors $(a,b)$ and $(c,d)$ are orthogonal to each other. Then $X$ is a scalar multiple of the orthogonal projection of $(Z_1,Z_2)$ onto $(a,b)$, and $Y$ of the projection onto $(c,d).$
Now conjoin this orthogonality with the circular symmetry of the joint density of $(Z_1,Z_2)$ to see that the distribution of $(X,Y)$ is the same as the distribution of two random variables, one of which is a scalar multiple of the orthogonal projection of $(Z_1,Z_2)$ onto the $x$-axis, i.e. a scalar multiple of $Z_1$, and the other of which is similarly a scalar multiple of $Z_2.$
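To make this construction concrete, here is a minimal R sketch (the coefficients $a=3$, $b=1$, $c=-1$, $d=3$ are an arbitrary orthogonal choice, not part of the answer itself):

set.seed(2)
n  <- 1e5
z1 <- rnorm(n); z2 <- rnorm(n)   # Z1, Z2 i.i.d. N(0,1)
a <- 3; b <- 1                   # direction defining X
cc <- -1; d <- 3                 # orthogonal direction defining Y: a*cc + b*d = 0
x <- a * z1 + b * z2
y <- cc * z1 + d * z2
cov(x, y)                        # near a*cc + b*d = 0
cor(x^2, y^2)                    # also near 0: (X, Y) behave like independent normals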