Convergence in probability and convergence in distribution















I'm a little confused about the difference between these two concepts, especially convergence in probability. I understand that $X_n \overset{p}{\to} Z$ if $\lim_{n \to \infty}\Pr(|X_n - Z|>\epsilon)=0$ for any $\epsilon >0$.



I just need some clarification on what the subscript $n$ means and what $Z$ means. Is $n$ the sample size? Is $Z$ a specific value, or another random variable? If it is another random variable, then wouldn't that mean that convergence in probability implies convergence in distribution? Also, could you please give me some examples of things that converge in distribution but not in probability?


























Tags: econometrics, statistics
















asked Mar 16 at 14:42 by Martin











  • See: quora.com/… – afreelunch, Mar 16 at 15:45
















1 Answer
I will attempt to explain the distinction using the simplest example: the sample mean. Suppose we have an iid sample of random variables $\{X_i\}_{i=1}^n$ and define the sample mean as $\bar{X}_n$. As the sample size grows, the value of the sample mean changes, hence the subscript $n$ to emphasize that the sample mean depends on the sample size.



Noting that $\bar{X}_n$ is itself a random variable, we can define a sequence of random variables indexed by the (growing) sample size, i.e. $\{\bar{X}_n\}_{n=1}^\infty$. The weak law of large numbers (WLLN) tells us that, so long as $E(X_1^2)<\infty$,
$$\operatorname{plim}\, \bar{X}_n = \mu,$$
or equivalently
$$\bar{X}_n \rightarrow_P \mu,$$

where $\mu=E(X_1)$. Formally, convergence in probability is defined as
$$\forall \epsilon>0, \quad \lim_{n \rightarrow \infty} P(|\bar{X}_n - \mu| <\epsilon)=1.$$
In other words, the probability that our estimate is within $\epsilon$ of the true value tends to 1 as $n \rightarrow \infty$. Convergence in probability gives us confidence that our estimators perform well with large samples.
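As a rough illustration, here is a minimal simulation sketch (assuming NumPy is available and using an arbitrary Exponential(1) population, so $\mu = 1$) that estimates $P(|\bar{X}_n - \mu| > \epsilon)$ for growing $n$; the estimated probability should shrink toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 1.0      # true mean of the assumed Exponential(1) population
eps = 0.05    # the tolerance epsilon in the definition
reps = 5_000  # Monte Carlo replications used to estimate the probability

for n in [10, 100, 1_000, 10_000]:
    # Draw `reps` independent samples of size n and compute each sample mean.
    samples = rng.exponential(scale=1.0, size=(reps, n))
    xbar = samples.mean(axis=1)
    # Fraction of sample means farther than eps from mu:
    # this estimates P(|X̄_n - mu| > eps), which should tend to 0 as n grows.
    print(n, np.mean(np.abs(xbar - mu) > eps))
```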



Convergence in distribution tells us something very different and is primarily used for hypothesis testing. Under the same assumptions as above, the central limit theorem (CLT) gives us
$$\sqrt{n}(\bar{X}_n-\mu) \rightarrow_D N(0,\sigma^2), \qquad \sigma^2 = \operatorname{Var}(X_1).$$
Convergence in distribution means that the cdf of the left-hand side converges to the cdf of the right-hand side at every continuity point, i.e.
$$\lim_{n \rightarrow \infty} F_n(x) = F(x),$$
where $F_n(x)$ is the cdf of $\sqrt{n}(\bar{X}_n-\mu)$ and $F(x)$ is the cdf of the $N(0,\sigma^2)$ distribution. Knowing the limiting distribution allows us to test hypotheses about the sample mean (or whatever estimator we are using).
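Again, a small simulation sketch can make this concrete (assuming NumPy/SciPy and the same Exponential(1) population as before, so $\mu = \sigma^2 = 1$): compare the empirical cdf of $\sqrt{n}(\bar{X}_n - \mu)$ with the $N(0,\sigma^2)$ cdf at a few points, and the maximum discrepancy should shrink as $n$ grows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

mu, sigma = 1.0, 1.0  # assumed Exponential(1) population: mean 1, variance 1
reps = 20_000         # Monte Carlo replications
grid = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])  # points at which to compare cdfs

for n in [5, 50, 500]:
    samples = rng.exponential(scale=1.0, size=(reps, n))
    z = np.sqrt(n) * (samples.mean(axis=1) - mu)  # the CLT-scaled statistic
    # Empirical cdf of z at each grid point vs. the limiting normal cdf.
    F_n = (z[:, None] <= grid).mean(axis=0)
    F = norm.cdf(grid, loc=0.0, scale=sigma)
    print(n, np.max(np.abs(F_n - F)))  # max discrepancy should shrink with n
```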






answered Mar 16 at 15:09 by dlnB (edited Mar 17 at 18:09)







  • Your definition of convergence in probability is more demanding than the standard definition. For example, suppose $X_n = 1$ with probability $1/n$ and $X_n = 0$ otherwise. It is clear that $X_n$ must converge in probability to $0$. However, $X_n$ does not converge to $0$ according to your definition, because we always have $P(|X_n| < \varepsilon) \neq 1$ for $\varepsilon < 1$ and any $n$. – Theoretical Economist, Mar 17 at 16:59










  • Yes, you are right. I posted my answer too quickly and made an error in writing the definition of weak convergence. I have corrected my post. – dlnB, Mar 17 at 18:09











