SQL Server: Covering indexes including all columns?
Our team has inherited an application and its associated database. The previous developers appear to have enforced a rule that every index, on every table, uses an INCLUDE clause to add every column that isn't already part of the key. These tables average anywhere from two to five indexes or unique constraints, plus foreign keys.
The intent appears to be to improve SELECT performance regardless of what query is thrown at the database, since access is via an ORM that by default (but not always) retrieves all columns. We expect the side effects to be increased storage requirements (possibly significantly so) and additional overhead for INSERT/UPDATE/DELETE.
The question is: is this a sensible strategy? Our team has history with SQL Server, but no members who would consider themselves experts on its internal behaviour (though the question has been raised that if this strategy were optimal, wouldn't it be the default by now?). What other side effects (database server CPU, memory, TempDB usage, etc.) should we expect, or are some of our assumptions above incorrect?
Additionally, the application can be installed against both on-premises SQL Server (versions since 2012) and Azure SQL Database. Should we be prepared for any differences between the two, or additional side effects on Azure, as a result of this approach?
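For reference, an index following the inherited pattern might look like the following sketch (the table and column names are hypothetical, not from our schema):

```sql
-- Hypothetical example of the inherited pattern: every non-key column
-- is pushed into the INCLUDE list, making the index fully covering.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Status, ShippingAddress, Total, Notes);
```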
sql-server sql-server-2012 sql-server-2014 sql-server-2016 azure-sql-database
asked Mar 8 at 8:48
T2PS
2 Answers
I've done this on specific indexes before, to aid frequently-run heavy queries. Effectively, what they have done is create multiple clustered indexes: when any of those indexes is used to find rows, no extra work is needed to look up the rest of the data in the real clustered index (or in the heap, if there is no clustered index).
is this a sensible strategy?
For specific indexes, where needed to support certain query patterns, certainly yes.
But doing this with all indexes, I would just as certainly say no.
It is going to waste space where not actually needed, and will slow inserts and updates significantly. It may also slow down as many read queries as it helps, because each index page holds fewer rows, so any query that needs to read a chunk of the index for filtering, but doesn't use all the other columns, has to access more pages. This makes your database more memory-hungry: those pages need to be loaded into the buffer pool, potentially evicting other useful pages when memory is low. If compression is used on those indexes to try to mitigate the effect on storage and memory requirements, it pushes extra load onto the CPUs instead.
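If you want to quantify that space cost before changing anything, the partition-stats DMV can report per-index size. A minimal sketch (dbo.BigTable is a placeholder; substitute your own table name):

```sql
-- Approximate size of each index on a given table, in MB (8 KB pages).
-- Expect the all-columns indexes to each approach the size of the table itself.
SELECT i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024.0 AS used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID(N'dbo.BigTable')
GROUP BY i.name
ORDER BY used_mb DESC;
```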
as access is via an ORM that by default (but not always) retrieves all columns
This is a common pattern with poorly optimised use of an ORM (or just naive ORMs), and in those cases I've seen SQL Server's index adviser (and similar third-party tools) suggest indexes with many INCLUDEd columns, so I would agree with your suggestion that this is why the indexes were created this way.
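Those adviser suggestions are driven by the missing-index DMVs, which you can query directly to see what is being recommended and how wide the suggested INCLUDE lists are:

```sql
-- Inspect SQL Server's missing-index suggestions; long included_columns
-- lists here often explain why indexes like these were created.
SELECT mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact DESC;
```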
But while it may make all such queries slightly faster, and some of them significantly faster, I suspect that in many cases the benefit is too small to be worth the extra memory footprint of your common working set, the space on disk, and the IO between disk and memory.
Also remember that the ORM might not select all columns of every table a query touches, so the benefit may only hold for the main target of the current request, and the larger indexes may penalise the query when other objects are used for filtering but not for returning data (SELECT * FROM table1 WHERE id IN (SELECT someID FROM table2 WHERE someColumn = 'DesiredValue'), perhaps).
Another consideration for the excess space used, particularly if the data is large, is the impact on your backup strategy: storage and transfer costs for those backups, potential restore times, and so forth.
should we be prepared for any differences between the two [on-prem & AzureSQL]
Generally, I think the considerations are going to be the same in each case, though any excess memory/IO cost imposed by the large indexes may be more directly visible in Azure, where you can tweak the service tier (and therefore infrastructure cost) more easily than with a relatively fixed set of on-premises hardware. If you use the standard/premium tiers rather than vCore-based pricing, you will be affected more by the IO cost on standard, as premium includes significantly more IO per DTU. And if you use multi-region backups, redundancy, or other non-local features in Azure, then there may be a bandwidth cost associated with the extra space taken by unnecessarily wide indexes.
edited Mar 13 at 11:15
answered Mar 8 at 9:46
David Spillett
The question is, is this a sensible strategy? ... (though the question has been raised that if this strategy was optimal, wouldn't it be the default by now?)
In most cases this is not a sensible strategy. The reason is that in a typical OLTP database, the number of rows returned to the end user is not going to be very large. (A generalization.)
The question you should ask yourself is: if you are seeking on the key columns, how many rows will that seek operation return? And repeat that for each query that seeks on that column.
Consider the following query, returning a whole lot of columns, filtered on SelectiveIDField:
SELECT columnA, columnC, ... columnZ
FROM dbo.BigTable
WHERE SelectiveIDField = '225122141';
If only one row is returned by the seek on SelectiveIDField, is the additional key lookup such a bad thing? (This assumes you have a clustered index; otherwise it would be a RID lookup.)
It will just do one extra key lookup: one extra execution, plus the join operator.
Even if it is 10 or even 100 rows, would that be such a huge impact? This also depends on how often the query is executed and how important its execution time is.
If the impact is negligible, just create the index on SelectiveIDField alone and call it a day: the covering version's read gains should not be worth the write losses.
So, in short: creating covering indexes over the entire table should not, in my opinion, be a default approach, unless you really see an issue with a query and can improve it drastically by adding a full covering index.
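To make the comparison concrete, here is a minimal sketch (table and column names are hypothetical): the narrow index supports the seek and accepts the key lookup, while the wide variant in the inherited style repeats nearly the whole row.

```sql
-- Narrow index: supports the seek; other columns come from a key lookup.
CREATE NONCLUSTERED INDEX IX_BigTable_SelectiveIDField
    ON dbo.BigTable (SelectiveIDField);

-- Wide variant in the inherited style: covering, but it duplicates almost
-- the entire table and must be maintained on every INSERT/UPDATE/DELETE.
CREATE NONCLUSTERED INDEX IX_BigTable_SelectiveIDField_Covering
    ON dbo.BigTable (SelectiveIDField)
    INCLUDE (columnA, columnB, columnC, columnZ);
```

With a selective predicate, the narrow index plus a handful of key lookups usually performs within a whisker of the covering version, at a fraction of the write and storage cost.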
answered Mar 8 at 9:48
Randi Vertongen