Profile Likelihood: why optimize all other parameters while tracing a profile for a particular one?
Profile likelihood is sometimes used to estimate confidence limits for parameters obtained from an n-dimensional fit of a model; it can be used, for example, instead of Monte Carlo estimation. I don't understand the intuition behind the algorithm itself. See section 4.4 of the paper "Parameter uncertainty in biochemical models described by ordinary differential equations", Vanlier et al. (2013), Math Biosci.
Assume the model has been optimized and a minimum located. According to the algorithm, a parameter is selected and slowly stepped away from its optimum. After each step, all the other parameters are re-optimized while the selected parameter is held at its new value, and the chi-square at this re-optimized point is recorded. Repeating this yields a chi-square profile. The process can be applied to each parameter in turn, and the change in chi-square along the profile can be used to define a confidence region for that parameter.
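For concreteness, here is a minimal sketch of that procedure (my own illustration, not from the paper), assuming an objective chi2_fn that maps a full parameter vector to its chi-square, and using scipy.optimize.minimize as the inner optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def profile_chi2(chi2_fn, theta_hat, index, grid):
    """Trace the chi-square profile for the parameter at position `index`.

    chi2_fn   : callable mapping a full parameter vector to its chi-square
    theta_hat : parameter vector at the global optimum
    grid      : fixed values to step the profiled parameter through
    """
    free = [i for i in range(len(theta_hat)) if i != index]
    theta = np.asarray(theta_hat, dtype=float).copy()
    values = []
    for fixed_value in grid:
        # Hold theta[index] at fixed_value and re-optimize everything else.
        def restricted(p):
            full = theta.copy()
            full[free] = p
            full[index] = fixed_value
            return chi2_fn(full)
        res = minimize(restricted, theta[free])
        theta[free] = res.x        # warm-start the next grid point
        values.append(res.fun)     # profiled chi-square at fixed_value
    return np.array(values)
```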
I'd like to understand the intuition for why the other parameters must be re-optimized as we profile the selected one. Why, for example, couldn't we just change the parameter (leaving the other parameters fixed at their optimal values) and observe how the chi-square changes as we move away from the optimum? Wouldn't that tell us how the curvature changes and therefore give us information about how confident we are in the parameter?
Tags: profile-likelihood
asked 2 hours ago by rhody
1 Answer
You can think of the profile confidence interval as an inversion of the likelihood ratio test: you are comparing a model in which your parameter of interest is allowed to vary against a set of nested models in which that parameter is fixed. Your confidence interval is the set of fixed values for which the likelihood ratio test fails to reject. In the likelihood ratio test, you compare the likelihood at the MLE of each model, so you must optimize all free parameters of both the full model and each nested model. If you fix one parameter away from its MLE, it is very likely that the optimal values of the other parameters will change, so you can't just "recycle" them from the full model.
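In symbols, a 95% profile interval collects exactly the fixed values whose likelihood ratio statistic stays below the chi-square cutoff with one degree of freedom. A minimal sketch, assuming a precomputed profiled log-likelihood function (the names here are illustrative):

```python
from scipy.stats import chi2

# Compare 2 * (loglik at the joint MLE - profiled loglik at theta0)
# against the chi-square quantile with 1 degree of freedom.
cutoff = chi2.ppf(0.95, df=1)  # about 3.84

def in_interval(theta0, loglik_hat, profile_loglik):
    """True if theta0 survives the likelihood ratio test at the 5% level."""
    return 2.0 * (loglik_hat - profile_loglik(theta0)) <= cutoff
```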
To explore this further, consider a regression model in which two covariates are highly collinear. From what we know from studying linear regression, the confidence interval for either individual coefficient should be very wide, since given the collinearity it is hard to separate the effect of the first variable from that of the second. Now, if we tried to build a profile confidence interval for the first predictor while holding the second fixed, we would (mistakenly) get a very narrow confidence interval; this would be equivalent to fixing the second coefficient, subtracting out its effect, and then computing the confidence interval for the first coefficient without including the second covariate in the model.
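To see this numerically, here is a small simulation (my own illustration, with the error variance taken as known and equal to 1) comparing the correct standard error for the first coefficient with the one you would get by holding the second coefficient fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
y = x1 + x2 + rng.normal(size=n)

X = np.column_stack([x1, x2])
XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)  # joint least-squares fit

# Profiling beta2 out: Var(beta1_hat) is [(X'X)^-1]_11 with sigma = 1.
se_profiled = np.sqrt(np.linalg.inv(XtX)[0, 0])

# Holding beta2 fixed at its fitted value: variance collapses to 1 / (x1'x1).
se_fixed = np.sqrt(1.0 / XtX[0, 0])

print(f"profiled SE: {se_profiled:.3f}, fixed SE: {se_fixed:.3f}")
# The profiled SE is far larger: fixing the collinear covariate hides
# most of the real uncertainty in the first coefficient.
```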
In a nutshell, you need to allow the other parameters to vary to account for the fact that the uncertainty in your parameter of interest may be tied to the uncertainty in other parameters in your model.
answered 2 hours ago by Cliff AB, edited 2 hours ago