Difference between Cholesky decomposition and log-Cholesky decomposition
Is there any difference between a Cholesky decomposition and a log-Cholesky decomposition? If yes, what is the difference?
In the paper "An R package for dynamic linear models", Giovanni Petris (who refers to the paper "Unconstrained Parametrizations of Variance-Covariance Matrices" by Pinheiro and Bates, 1996) writes:
"In this case, since the model is not a standard one, we use the general creator dlm to define a build function, which we subsequently use to find the MLEs of the model parameters. In order to avoid an optimization problem with complicated constraints, we parametrize V in terms of the elements of its log-Cholesky decomposition."
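As far as I understand, the mechanism being described is something like the sketch below. This is not the model from the paper; the local-level model and the parameter names are my own placeholders, just to show a build function with an unconstrained parametrization:

```r
library(dlm)

## Sketch only: a univariate local-level model, not the model in the paper.
## The point is the mechanism: the build function maps an *unconstrained*
## parameter vector to a valid dlm object, here by exponentiating log-variances.
buildFun <- function(theta) {
  dlmModPoly(order = 1, dV = exp(theta[1]), dW = exp(theta[2]))
}

fit <- dlmMLE(Nile, parm = c(0, 0), build = buildFun)   # Nile is a built-in dataset
mod <- buildFun(fit$par)                                # model evaluated at the MLEs
```

For a multivariate V, the quoted passage says to do the analogous thing with the elements of V's log-Cholesky decomposition instead of simple log-variances, which is where my question comes in.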
I know "Cholesky decomposition" $L L^T$
$A = beginbmatrixa_11 & a_21 & a_31 \a_21 & a_22 & a_23\ a_31 & a_32 & a_33endbmatrix = beginbmatrixl_11 & 0 & 0 \l_21 & l_22 & 0\ l_31 & l_32 & l_33endbmatrix beginbmatrixl_11 & l_21 & l_31 \0 & l_22 & l_23\ 0 & 0 & l_33endbmatrix qquad , $
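For example, in R (a small numerical check with a made-up matrix):

```r
## Small numerical check that A = L L' (the example matrix is made up)
A <- matrix(c(4, 2, 1,
              2, 3, 1,
              1, 1, 2), nrow = 3, byrow = TRUE)   # symmetric positive definite
L <- t(chol(A))          # chol() returns the upper-triangular factor, so transpose
all.equal(L %*% t(L), A)                          # TRUE
```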
but I do not know the "log-Cholesky decomposition".
maximum-likelihood matrix matrix-decomposition dlm cholesky
asked Aug 31 at 11:11 by Ferdi, edited Aug 31 at 13:09 by Ben Bolker
1 Answer
I think it's less confusing to call it the log-Cholesky paramet(e)rization rather than the log-Cholesky decomposition (i.e., the "decomposition" part doesn't change ...)
From Pinheiro's thesis (1994, UW Madison) - I think it has the same information as the paper you cite:
6.1.2 Log-Cholesky Parametrization. If one requires the diagonal elements of $\boldsymbol{L}$ in the Cholesky factorization to be positive, then $\boldsymbol{L}$ is unique. In order to avoid constrained estimation, one can use the logarithms of the diagonal elements of $\boldsymbol{L}$. We call this parametrization the log-Cholesky parametrization. It inherits the good computational properties of the Cholesky parametrization, but has the advantage of being uniquely defined.
In other words, in your notation it would be:
$\begin{bmatrix} \log(l_{11}) & 0 & 0 \\ l_{21} & \log(l_{22}) & 0 \\ l_{31} & l_{32} & \log(l_{33}) \end{bmatrix}$
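As a sketch (the matrix here is just an example I made up, not from the thesis), packing a covariance matrix into this parametrization in R looks like:

```r
## Sketch: map a covariance matrix to its log-Cholesky parametrization,
## i.e. the lower-triangular factor with a log-transformed diagonal
A <- matrix(c(4, 2, 1,
              2, 3, 1,
              1, 1, 2), nrow = 3, byrow = TRUE)   # made-up SPD matrix
L <- t(chol(A))          # lower-triangular Cholesky factor
M <- L
diag(M) <- log(diag(L))  # replace the diagonal with its logs
M                        # the matrix displayed above
```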
For what it's worth, when defining a parameter vector for a model you also need to define an order in which the matrix is unpacked; for example, in lme4 the log-Cholesky lower triangle is unpacked in column-first order, i.e. $\theta_1 = \log(l_{11})$, $\theta_2 = l_{21}$, $\theta_3 = l_{31}$, $\theta_4 = \log(l_{22})$, ...
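Here is a sketch of the reverse operation using that column-first order (the helper name and example values are mine, not lme4's internal code):

```r
## Sketch: unpack a column-first log-Cholesky parameter vector theta into
## the covariance matrix it represents
logchol_to_cov <- function(theta, n) {
  L <- matrix(0, n, n)
  L[lower.tri(L, diag = TRUE)] <- theta   # fills the lower triangle column by column
  diag(L) <- exp(diag(L))                 # undo the log on the diagonal
  L %*% t(L)
}

## theta = (log l11, l21, l31, log l22, l32, log l33) for a 3x3 matrix
theta <- c(log(2), 1, 0.5, log(1.5), 0.3, log(1))
logchol_to_cov(theta, 3)
```

Any real-valued theta maps to a positive definite matrix this way, which is exactly why the parametrization is convenient for unconstrained optimizers.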
answered Aug 31 at 13:03 by Ben Bolker, edited Aug 31 at 13:33