Is there a better way to restore transaction logs to an MSSQL db?



  • We're using Redgate's SQL Backup to restore external transaction logs from a source database to a local MSSQL db every hour. Currently the local db is permanently in a read-only state with a standby file. I'm not married to this arrangement; it currently seems to me like the simplest way to keep a database up to date when we ship logs every hour.

    After each restore, we copy some tables to a separate live db during the hour, before the next log restore window, so those tables remain accessible at any time. During the restore itself, we cannot access this read-only database for a minute or two.

    Is there a way to keep the db accessible while the log restore is running? Please tell me there's a better way.

    Thanks



  • You can consider having two (or more) copies of the database, and use log shipping to roll through them in a round-robin fashion, making each new restore target the place where the next user reads from. This lets users get more current data as soon as it becomes available, without the usual disruption log shipping creates (where you have to kick everyone out of the database in order to restore the next log). I describe this process in detail here:

    • https://sqlperformance.com/2014/10/sql-performance/readable-secondaries-on-a-budget
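    A minimal sketch of that round-robin idea, assuming two restore targets (`MyDB_A` / `MyDB_B` are hypothetical names) and a synonym used as the redirect mechanism. Restore the newest log into the copy users are *not* currently reading from, then repoint readers in one motion:

    ```sql
    -- Restore the latest log into the idle copy, keeping it readable via a standby file
    -- (paths and database names here are placeholders, not from the original post):
    RESTORE LOG MyDB_B
      FROM DISK = N'C:\Logs\MyDB_latest.trn'
      WITH STANDBY = N'C:\Logs\MyDB_B_undo.dat';

    -- Repoint readers at the freshly restored copy; synonyms can't be altered,
    -- so drop and recreate inside a transaction:
    BEGIN TRANSACTION;
      DROP SYNONYM dbo.CurrentOrders;
      CREATE SYNONYM dbo.CurrentOrders FOR MyDB_B.dbo.Orders;
    COMMIT;
    ```

    Readers always query `dbo.CurrentOrders`, so they never get kicked out while the other copy is being restored; the next hour, the roles of the two databases swap.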

    Another idea, if it's just a subset of tables (or you aren't in full recovery, or don't have a version / edition combo that supports Availability Groups adequately), is to continuously pump new data to sets of destination tables, using the same kind of round-robin approach as above to point readers to the latest copy. I talk about this method in the following posts, the last one being the preferred approach I use today:

    • https://sqlperformance.com/2012/08/t-sql-queries/t-sql-tuesday-schema-switch-a-roo
    • https://sqlperformance.com/2013/04/t-sql-queries/schema-switch-a-roo-part-2
    • https://sqlperformance.com/2021/09/sql-performance/refreshing-tables-partition-switching

    This can be quite effective if you don't want the new version of the data in any table to be visible to the reader until all the tables are synced, because you can point the reader to all copies of the tables in one motion at the end, instead of the current situation, where the tables are refreshed in order and joins between them can become inconsistent.
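    A hedged sketch of that "switch everything in one motion" idea using `ALTER SCHEMA ... TRANSFER`, one of the techniques the posts above cover. The schema and table names (`shadow`, `hold`, `Orders`, `SourceDB`) are placeholders, assuming a live `dbo` schema, a `shadow` schema you load into, and an empty `hold` schema used for parking during the swap:

    ```sql
    -- Refresh the staging copy of each table while readers keep using dbo:
    TRUNCATE TABLE shadow.Orders;
    INSERT shadow.Orders SELECT * FROM SourceDB.dbo.Orders;
    -- ...repeat for every table in the set...

    -- Then swap the whole set atomically, so joins never mix old and new data:
    BEGIN TRANSACTION;
      ALTER SCHEMA hold   TRANSFER dbo.Orders;
      ALTER SCHEMA dbo    TRANSFER shadow.Orders;
      ALTER SCHEMA shadow TRANSFER hold.Orders;
      -- ...same three transfers for each remaining table...
    COMMIT;
    ```

    Schema transfers are metadata-only operations, so the swap itself is near-instant; the trade-off is the brief schema-modification locks it takes on each table.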

    With most non-AG solutions, though, the data can never be fresher than the time it takes to copy it.

