Is it acceptable to use deadlocks as a strategy?

We recently upgraded our SQL Server production environment from 2008 R2 to 2017, and I am finally at a point where I want to try optimistic concurrency in the form of snapshot isolation. I read Brent Ozar's article suggesting that we SET ALLOW_SNAPSHOT_ISOLATION ON at the database level and then alter queries to take advantage of this feature, which is new to us. To my surprise, my manager, who is very knowledgeable in SQL, did not want to move in this direction. I understood him to say that we depend on deadlocks to prevent us from reading dirty data, and that he saw no reason to solve a problem that did not exist. I did not know how to respond. I have never considered deadlocks as anything other than something to try to eliminate, certainly not something to depend on as a way of life. I understand that they serve an important function, but they are a sign of something that can be improved.



My question is: how many others rely on deadlocks to such an extent that you would not consider trying to eliminate them?
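For context, the change in question is roughly the following sketch (the database and table names are placeholders):

```sql
-- Enable snapshot isolation at the database level (MyDb is a placeholder name).
-- ALLOW_SNAPSHOT_ISOLATION only permits sessions to request SNAPSHOT;
-- existing queries keep their current behavior until they opt in.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session then opts in explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT Col1 FROM dbo.SomeTable WHERE Id = 42;  -- reads committed row versions
COMMIT;
```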










  • Are you sure your manager said "deadlocks" and not "locks"?
    – mustaccio
    Oct 3 at 19:34










  • We were quite specifically talking deadlocks, not blocking. We have one particular database that generates 90% of the deadlocks in our environment, with a small handful of tables involved. I did not design it, but it is essentially many write operations deadlocking with read operations that cannot use NOLOCK.
    – JohnH
    Oct 3 at 19:41










  • No, it's not. If anything, the deadlocks are too random to rely on. It seems your manager is not that knowledgeable after all.
    – dean
    Oct 3 at 21:27
















Tags: deadlock, snapshot-isolation






asked Oct 3 at 18:51 by JohnH

1 Answer
The Snapshot and Read Committed Snapshot isolation levels do not prevent deadlocks from happening. They make the engine use row versioning, so readers do not block writers and writers do not block readers, as they do under SQL Server's default behavior. That improves concurrency, but it requires changes to any application that relies on "readers block writers and writers block readers". I suspect you misunderstood your manager, and that your system relies on SQL Server's default blocking behavior, not on deadlocks, which in most cases can be avoided by application design.
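A minimal sketch of that behavioral difference (table and column names are hypothetical):

```sql
-- Session 1: an open write transaction holds an exclusive lock on the row.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;
-- (not yet committed)

-- Session 2, default READ COMMITTED: this SELECT blocks until session 1
-- commits, and under the right interleaving the two sessions can deadlock.
SELECT Balance FROM dbo.Accounts WHERE Id = 1;

-- Session 2, SNAPSHOT: the SELECT returns immediately with the last
-- committed version of the row; it neither blocks nor deadlocks here.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Balance FROM dbo.Accounts WHERE Id = 1;
COMMIT;
```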



If I were working with SQL Server on a new system, I would certainly switch it to use row versioning instead of locking.



However, for an existing system, it is usually safer and cheaper not to fix what is not broken.






  • I read docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/…, and definitely get the idea that this would prevent a deadlock from killing a reader while a write is happening. This is what I was trying to solve. Using SentryOne I can see a nice graphical rendition of the deadlock, and it seems to be the read process that gets killed.
    – JohnH
    Oct 3 at 20:08










  • In that case, yes, the deadlock won't happen under snapshot isolation. But even without snapshot, a deadlock is not guaranteed to happen: the order in which objects (rows, pages, tables) are locked is determined by an internal process controlled by the optimizer, so during one execution the write transaction will wait until the read completes, while during another execution you can get a deadlock. An application therefore cannot rely on a deadlock always happening; rather, it should expect that one may happen and react, possibly by re-issuing the transaction.
    – a1ex07
    Oct 3 at 21:08
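The retry reaction described above can be sketched with TRY/CATCH on deadlock error 1205 (table, column, and variable names are hypothetical):

```sql
DECLARE @OrderId int = 1;   -- hypothetical key
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;
        COMMIT;
        BREAK;                     -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205   -- chosen as the deadlock victim
            SET @retries -= 1;     -- re-issue the transaction
        ELSE
            THROW;                 -- any other error propagates
    END CATCH
END
```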










answered Oct 3 at 19:41 by a1ex07










