Understanding how the read uncommitted isolation level relates to clustered indexes and page splits



  • Missing one or more rows that were not the target of update

    When you are using READ UNCOMMITTED, if your query reads rows using an allocation order scan (using IAM pages), you might miss rows if another transaction is causing a page split. This cannot occur when you are using read committed because a table lock is held during a page split and does not happen if the table does not have a clustered index, because updates do not cause page splits. ( https://docs.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide )

    I want to get a better understanding of the above.

    1. Missing one or more rows that were not the target of update - why does it matter if rows that were not the target of the update are missed?

    2. you might miss rows if another transaction is causing a page split - am I right in assuming that the other transaction causing the page split is running either an INSERT or an UPDATE statement?

    3. This cannot occur when you are using read committed because a table lock is held during a page split and does not happen if the table does not have a clustered index, because updates do not cause page splits. - can you please explain the marked line in simpler words?

    4. Although not mentioned in the above paragraph, does page splits not result in reading the same row multiple times? For example - say a row is read and then a page split resulted in the row to go into the next page.



    1. Awkward wording. It means you might miss rows even though they were not the target of a change by another concurrent process.

    2. Yes, though deletes can also cause page splits in some circumstances (see https://www.sql.kiwi/2012/08/deletes-that-split-pages-and-forwarded-ghosts.html ). MERGE can too, since it performs inserts and updates.

    3. Heap tables do not experience page splits. If a row becomes too big to fit on the current page, it is moved elsewhere and replaced in the original position by a forwarding pointer.
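      To make the forwarding-pointer idea concrete, here is a minimal Python sketch of that heap behaviour. The page layout, capacity, and target page are all invented for illustration; this is a toy model, not SQL Server's actual record format:

      ```python
      # Toy heap: each page maps slot -> record. A record that no longer fits
      # on its page is moved elsewhere, and its original slot is replaced by a
      # forwarding pointer, so the row's "address" (page, slot) never changes.
      FORWARD = "FORWARD"

      heap = {
          1: {0: ("DATA", "small row"), 1: ("DATA", "row that will grow")},
          2: {},  # page with free space (hard-coded target, a simplification)
      }

      def update_row(heap, page, slot, new_value, page_capacity=20):
          """Grow a row in place; if it no longer fits, forward it."""
          if len(new_value) <= page_capacity:
              heap[page][slot] = ("DATA", new_value)
          else:
              # Move the record to page 2 and leave a forwarding pointer behind.
              new_slot = len(heap[2])
              heap[2][new_slot] = ("DATA", new_value)
              heap[page][slot] = (FORWARD, (2, new_slot))

      def read_row(heap, page, slot):
          """Read a row via its original address, following any forwarding pointer."""
          kind, payload = heap[page][slot]
          if kind == FORWARD:
              fwd_page, fwd_slot = payload
              kind, payload = heap[fwd_page][fwd_slot]
          return payload

      # The value exceeds the toy 20-character capacity, so it is forwarded:
      update_row(heap, 1, 1, "a much bigger row value now")
      ```

      The key point the sketch shows: no rows change pages en masse (no split); only the one oversized record moves, and readers still find it through its original (page, slot) address.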

    4. Yes. I covered this in my answer to your previous question https://dba.stackexchange.com/q/307758/1192

      As a final example, imagine we have read page 300, and are now processing page 500. At this point, another process causes page 300 to split, creating new page 600. We will encounter the rows on page 600 twice: Once when we read page 300 before the split, and once when we encounter page 600 after the split.
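      Both anomalies (double reads and missed rows) can be sketched with a toy model of an allocation-order scan. This is a simplified Python simulation, not SQL Server internals; the page numbers, split points, and "half the rows move" rule are invented for illustration:

      ```python
      # Toy model of an allocation-order scan: rows live on numbered "pages",
      # and the scan visits pages strictly in ascending page-number order,
      # tracking only its current position (loosely like following IAM pages).

      def allocation_order_scan(pages, split=None, split_after=None):
          """Scan `pages` in allocation order. If `split` = (old, new) is
          given, a concurrent "transaction" splits page `old`, moving half of
          its rows to new page `new`, right after the scan finishes reading
          page `split_after`."""
          seen, pos = [], 0
          while True:
              ahead = sorted(p for p in pages if p > pos)
              if not ahead:
                  return seen
              page = ahead[0]
              seen += pages[page]          # read every row on this page
              pos = page
              if split and page == split_after:
                  old, new = split
                  rows = pages[old]
                  pages[old], pages[new] = rows[:2], rows[2:]
                  split = None

      # Double read: page 300 splits into new page 600 after we read it,
      # so rows "c" and "d" are returned twice.
      dup = allocation_order_scan(
          {300: ["a", "b", "c", "d"], 500: ["e", "f"]},
          split=(300, 600), split_after=300)

      # Missed rows: page 500 splits into page 200, behind the scan position,
      # so rows "g" and "h" are never returned.
      miss = allocation_order_scan(
          {300: ["a", "b"], 500: ["e", "f", "g", "h"]},
          split=(500, 200), split_after=300)
      ```

      The same mechanism produces both anomalies: the scan trusts page order, so a split that allocates the new page ahead of the scan position causes duplicates, while one that allocates it behind causes missed rows.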



