Calculating Fisher Information for a Bernoulli rv

Let $X_1,...,X_n$ be Bernoulli distributed with unknown parameter $p$.



My objective is to calculate the information contained in the first observation of the sample.



I know that the pmf of $X$ is given by $$f(x\mid p)=p^x(1-p)^{1-x},$$ and my book defines the Fisher information about $p$ as



$$I_X(p)=E_p\left[\left(\frac{d}{dp}\log\left(p^x(1-p)^{1-x}\right)\right)^2\right]$$



After some calculations, I arrive at



$$I_X(p)=E_p\left[\frac{x^2}{p^2}\right]-2E_p\left[\frac{x(1-x)}{p(1-p)}\right]+E_p\left[\frac{(1-x)^2}{(1-p)^2}\right]$$



I know that the Fisher information about $p$ of a Bernoulli RV is $\frac{1}{p(1-p)}$, but I don't know how to get rid of the $X$-values, since I'm calculating an expectation with respect to $p$, not $X$. Any clues?

statistics probability-distributions expected-value

asked Sep 16 at 15:07 by DavidS; edited Sep 16 at 16:12 by StubbornAtom

  • 1




    $+1$ for showing your work to derive the correct $I_X(p)$
    – Ahmad Bazzi
    Sep 16 at 15:56
2 Answers

accepted answer (4 votes); answered Sep 16 at 15:54 by Ahmad Bazzi; edited Sep 17 at 23:13 by Alejandro Nasif Salum

\begin{equation}
I_X(p)=E_p\left[\frac{X^2}{p^2}\right]-2E_p\left[\frac{X-X^2}{p(1-p)}\right]+E_p\left[\frac{X^2-2X+1}{(1-p)^2}\right].\tag{1}
\end{equation}
For a Bernoulli RV, we know
\begin{align}
E(X) &= 0\cdot\Pr(X=0) + 1\cdot\Pr(X=1) = p,\\
E(X^2) &= 0^2\cdot\Pr(X=0) + 1^2\cdot\Pr(X=1) = p.
\end{align}
Now, substituting into $(1)$, we get
\begin{equation}
I_X(p)=\frac{p}{p^2}-2\,\frac{0-0}{p(1-p)}+\frac{p-2p+1}{(1-p)^2}
=\frac{1}{p}-\frac{p-1}{(1-p)^2}
=\frac{1}{p}-\frac{1}{p-1}
=\frac{1}{p(1-p)}.
\end{equation}
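
As a quick numerical sanity check, the expectation of the squared score can be estimated by simulation. A minimal sketch (the parameter value and sample size below are illustrative choices):

    # Monte Carlo estimate of I_X(p) = E_p[(d/dp log f(X|p))^2] for Bernoulli(p).
    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.3                                    # illustrative parameter value
    x = rng.binomial(1, p, size=1_000_000)     # Bernoulli(p) samples

    # Score: d/dp log f(X|p) = X/p - (1-X)/(1-p)
    score = x / p - (1 - x) / (1 - p)

    print(np.mean(score**2))    # Monte Carlo estimate, close to 4.7619
    print(1 / (p * (1 - p)))    # exact value 1/(p(1-p)) = 4.7619...
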
  • thanks @MichaelHardy
    – Ahmad Bazzi
    Sep 16 at 18:00










  • I always thought that when an expectation is written as $E_p$, for example, it means we treat the other variables (not $p$) as constants. Apparently not?
    – DavidS
    Sep 16 at 18:23






  • 1




    Yes, I understand what you mean. I usually do not write $E_p(\cdot)$. I write $E_{X \mid p}(\cdot)$, where the expectation is taken with respect to the samples given the parameters you want to estimate, i.e.
    – Ahmad Bazzi
    Sep 16 at 18:24







  • 1




    The expectation is taken with respect to whatever random variable is involved ($X$, $Y$ or any other r.v.). $E_p$ only intends to remark the fact that your model for the distribution of $X$ is not fully specified, but is uncertain up to a parameter $p$, and thus the corresponding expectations may depend on that value $p$.
    – Alejandro Nasif Salum
    Sep 16 at 18:38






  • 1




    Thanks @AlejandroNasifSalum for the edit
    – Ahmad Bazzi
    Sep 17 at 23:28

2 votes; answered Sep 16 at 15:46 by Alejandro Nasif Salum; edited Sep 17 at 14:43 by Michael Hardy

Actually, the Fisher information of $X$ about $p$ is
$$I_X(p)=E_p\left[\left(\frac{d}{dp}\log f(X\mid p)\right)^2\right],$$
that is
$$I_X(p)=E_p\left[\left(\frac{d}{dp}\log\left(p^X(1-p)^{1-X}\right)\right)^2\right].$$

I've only changed every $x$ to $X$, which may seem like a subtlety, but then you get
$$I_X(p)=E_p\left(\frac{X^2}{p^2}\right)-2E_p\left(\frac{X(1-X)}{p(1-p)}\right)+E_p\left(\frac{(1-X)^2}{(1-p)^2}\right).$$

The expectation is there because $X$ is a random variable. So, for instance:
$$E_p\left(\frac{X^2}{p^2}\right)=\frac{E_p\left(X^2\right)}{p^2}=\frac{p}{p^2}=\frac{1}{p}.$$

Here I used the fact that $E_p(X^2)=p$, which can easily be seen as
$$E_p(X^2)=0^2\cdot p_X(0)+1^2\cdot p_X(1)=0^2(1-p)+1^2p=p,$$
or from the observation that $X\sim \operatorname{Be}(p) \implies X^n\sim \operatorname{Be}(p)$ as well. Then you can go on with the remaining terms.
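
Carrying out the remaining terms the same way (a quick sketch, using $E_p(X)=E_p(X^2)=p$):
$$-2E_p\left(\frac{X(1-X)}{p(1-p)}\right)=-2\cdot\frac{E_p(X)-E_p(X^2)}{p(1-p)}=-2\cdot\frac{p-p}{p(1-p)}=0,$$
$$E_p\left(\frac{(1-X)^2}{(1-p)^2}\right)=\frac{1-2E_p(X)+E_p(X^2)}{(1-p)^2}=\frac{1-2p+p}{(1-p)^2}=\frac{1}{1-p},$$
so that $I_X(p)=\frac{1}{p}+0+\frac{1}{1-p}=\frac{1}{p(1-p)}$.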




Additionally, an equivalent formula can be proved for $I_X(p)$, provided the second derivative of $\log f$ is well defined. This is
$$I_X(p)=-E_p\left(\frac{d^2}{dp^2}\log f(X\mid p)\right),$$
and many times you'll get simpler expressions. In this case, for instance, you get
$$I_X(p)=-E_p\left(\frac{d^2}{dp^2}\log\left(p^X(1-p)^{1-X}\right)\right)=$$
$$=-E_p\left(-\frac{X}{p^2}-\frac{1-X}{(1-p)^2}\right)=\frac{E_p(X)}{p^2}+\frac{E_p(1-X)}{(1-p)^2}=$$
$$=\frac{p}{p^2}+\frac{1-p}{(1-p)^2}=\frac{1}{p}+\frac{1}{1-p}=\frac{1}{p(1-p)},$$
as desired.
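
The same identity can be checked symbolically, say with sympy; a minimal sketch (substituting $E_p(X)=p$ for $x$ is valid here because the second derivative of $\log f$ is linear in $x$):

    # Symbolic check of I_X(p) = -E_p[d^2/dp^2 log f(X|p)] for Bernoulli(p).
    import sympy as sp

    p, x = sp.symbols('p x', positive=True)
    log_f = x * sp.log(p) + (1 - x) * sp.log(1 - p)   # log f(x | p)

    d2 = sp.diff(log_f, p, 2)            # second derivative w.r.t. p
    info = sp.simplify(-d2.subs(x, p))   # E_p via x -> E_p(X) = p (linearity)

    print(info)   # 1/(p*(1 - p)), up to sympy's preferred form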