    Posts made by courtlanda

    • SQL Server Login Access getting revoked by a daily database refresh

      Scenario: I have granted db_datareader access on a database to the dev team. They have a daily database refresh in place. Because of that refresh job, the permissions for the login, which is a SQL login, are getting revoked daily.

      Since the dev team cannot add a step to re-apply permissions automatically after the database refresh, how should I, as sysadmin, deal with this case? I get tickets daily to grant read-only access for that database. I cannot grant them db_accessadmin due to security concerns.

      Please advise.

      posted in SQL
      courtlanda
    • RE: when update large table it take too much time to update

      Your updated actual execution plan also indicates that you have no indexes on your extractreports.dbo.alldistSuppliersourceid table, because it is currently using a Table Scan operation against it.

      It should have a clustered index at a minimum. If it makes sense for that clustered index to be on the field your query is currently joining on, SourcingNotMappedPartsID (depending on how often you query by that column), then you could create it like so:

      USE extractreports;
      

      CREATE CLUSTERED INDEX IX_alldistSuppliersourceid_ ON dbo.alldistSuppliersourceid (SourcingNotMappedPartsID);

      If SourcingNotMappedPartsID is unique in extractreports.dbo.alldistSuppliersourceid, then you can also specify the UNIQUE keyword after the CREATE keyword.
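
      For example, a minimal sketch of the unique variant (only valid if the column truly contains no duplicate values):

      USE extractreports;

      CREATE UNIQUE CLUSTERED INDEX IX_alldistSuppliersourceid_ ON dbo.alldistSuppliersourceid (SourcingNotMappedPartsID);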

      Otherwise, if another field makes more sense as the clustered index on extractreports.dbo.alldistSuppliersourceid, then use the above script with that column substituted in to create the clustered index, and create an additional nonclustered index on SourcingNotMappedPartsID instead, like so:

      USE extractreports;
      

      CREATE NONCLUSTERED INDEX IX_alldistSuppliersourceid_ ON dbo.alldistSuppliersourceid (SourcingNotMappedPartsID);

      Again, using the UNIQUE keyword if it's applicable as well.

      That one index should at least help your performance to a degree.

      posted in SQL
      courtlanda
    • RE: Is there a way to add database encryption on certain columns without changing code base?

      The best thing you could try is the Always Encrypted feature. There are pros and cons to it, but you can effectively "turn it on" for the column and then use a certificate to decrypt it. So it can be done "outside" of code, requiring only some changes to connection strings and the use of certificates.
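
      Purely as an illustration, this is roughly what the column-level syntax looks like when a table is created with Always Encrypted. The table and key names here are hypothetical, the column master key and column encryption key must already exist, and encrypting an existing column is normally done through the SSMS wizard or PowerShell because the data has to be re-encrypted:

      CREATE TABLE dbo.Customers
      (
          CustomerId int IDENTITY(1,1) PRIMARY KEY,
          SSN char(11) COLLATE Latin1_General_BIN2
              ENCRYPTED WITH (
                  COLUMN_ENCRYPTION_KEY = CEK_Auto1,  -- placeholder name for an existing column encryption key
                  ENCRYPTION_TYPE = DETERMINISTIC,
                  ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
              ) NOT NULL
      );

      On the application side, the main change is adding Column Encryption Setting=Enabled to the connection string and making sure the client can reach the certificate that protects the column master key.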

      You can read more about it at https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15 . It's the closest you can get without buying a third-party product or changing code.

      posted in SQL
      courtlanda
    • RE: Is there dbo for shrinking the Transaction Log?

      DBCC SHRINKFILE ( https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-ver15 ) does what you are looking for, but if you know the log will grow back again in a relatively short time then that probably isn't what you want. If it is going to grow back anyway you can't reuse the space (you need to leave it ready for that growth), and you are reducing performance in the circumstances where the log does need to grow rather than having the space already allocated.
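
      If you do decide to shrink, a minimal sketch looks like this (the database and logical log file names are placeholders; look yours up in sys.database_files, and the target size here is 1024 MB):

      USE YourDatabase;
      GO
      -- Find the logical name of the log file
      SELECT name, type_desc FROM sys.database_files;
      GO
      -- Shrink the log file down to roughly 1 GB
      DBCC SHRINKFILE (N'YourDatabase_log', 1024);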

      If you think the growth is unnecessary ("out of control") then you should look into what is happening during the periods of growth to see if those operations can be improved. Improving those activities will likely make them more efficient and improve your application's processing efficiency as well as the database's storage space efficiency. A common issue that is easy to improve is large updates that include a lot of no-op updates; for instance, UPDATE SomeTable SET AnInt = 123 will result in every row being updated even if 99% of them already have AnInt equalling 123, with all the log activity that involves. UPDATE SomeTable SET AnInt = 123 WHERE AnInt <> 123 would fix that simple example (be a little more careful if the columns can be NULL).
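
      As a sketch of that last caveat, if AnInt is nullable the predicate also needs to account for NULLs, otherwise those rows are silently skipped:

      -- Only touch rows that actually change, including rows where AnInt is currently NULL
      UPDATE SomeTable
      SET AnInt = 123
      WHERE AnInt <> 123 OR AnInt IS NULL;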

      posted in SQL
      courtlanda
    • How does creating an index help with NOLOCK problems?

      Link: https://www.brentozar.com/archive/2018/10/using-nolock-heres-how-youll-get-the-wrong-query-results/

      The article mentions some problems with using NOLOCK:

      • You can see rows twice
      • You can skip rows altogether
      • You can see data that was never committed
      • Your query can outright fail with an error, “could not continue scan with nolock due to data movement”

      Then a fix is mentioned as:

      Create an index on the table (any single-field index would have worked fine in this particular example, giving SQL Server a narrower copy of the table to scan)

      How does creating an index help with NOLOCK problems?

      posted in SQL
      courtlanda
    • RE: What constraint can't be written as a table constraint?

      That's just a question of syntax.

      A column constraint is defined like

      CREATE TABLE ... (
         col type [CONSTRAINT name] constraint_definition,
         ...
      );
      

      while a table constraint looks like

      CREATE TABLE ... (
         col type,
         [CONSTRAINT name] constraint_definition,
         ...
      );
      

      The resulting constraint is the same in both cases.

      So the documentation is saying that you need to write NOT NULL in the column definition and you cannot write it anywhere else.
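
      To make that concrete, here is a small sketch (table and column names are made up): the same CHECK constraint can be written either way, but NOT NULL only works in the column definition:

      CREATE TABLE products (
         price numeric NOT NULL CHECK (price > 0),  -- column constraints
         discounted_price numeric,
         CHECK (discounted_price < price)           -- table constraint
      );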

      posted in SQL
      courtlanda
    • RE: how to perform inner join on two table where i'am able to retrieve all data?

      I want to join the tables on item_id

      You're successfully doing this. You're just using the wrong type of join.

      I also want data from bar_recvd_details, even if recvd_value IS NULL

      You can't do this with an INNER JOIN. Instead, you need to use a LEFT JOIN to be able to return NULL from a table on the RIGHT side of the JOIN, where no data is present. For example, LEFT JOIN bar_recvd_details b instead of INNER JOIN bar_recvd_details b

      I want all data from each table, where the date = '2019-06-18' from the respective table

      You're successfully doing this already. However, when moving to a LEFT JOIN, to meet the above criteria, you need to move that WHERE condition into the JOIN condition, i.e. ON a.item_id = b.item_id AND b.recvd_date = '2019-06-18'.

      SELECT a.item_cid,
          a.item_id,
          a.op_value,
          a.close_val,
          a.op_date,
          a.close_date,
          b.recvd_value
      FROM bar_opening_details AS a
          LEFT JOIN bar_recvd_details AS b
          ON a.item_id = b.item_id AND b.recvd_date = '2019-06-18'
      WHERE a.close_date = '2019-06-18';
      
      item_cid  item_id  op_value  close_val  op_date     close_date  recvd_value
      2         78       2         88         2019-06-18  2019-06-18  null
      2         79       28        103        2019-06-18  2019-06-18  null
      2         97       0         96         2019-06-18  2019-06-18  null
      6         456      30        48         2019-06-18  2019-06-18  24
      6         457      130       155        2019-06-18  2019-06-18  48
      6         484      1         46         2019-06-18  2019-06-18  48
      6         495      15        61         2019-06-18  2019-06-18  null
      6         523      1         12         2019-06-18  2019-06-18  12
      6         529      9         32         2019-06-18  2019-06-18  24
      6         530      54        98         2019-06-18  2019-06-18  null
      6         533      0         24         2019-06-18  2019-06-18  24
      6         630      35        77         2019-06-18  2019-06-18  null
      6         631      31        122        2019-06-18  2019-06-18  null
      6         635      5         47         2019-06-18  2019-06-18  48
      6         636      34        71         2019-06-18  2019-06-18  48

      https://dbfiddle.uk/?rdbms=mysql_5.6&fiddle=abede993a01a9b0493c370a2aa33be85

      posted in SQL
      courtlanda
    • RE: Allow create table (dynamic name) and insert but nothing else

      Unless I am missing a business requirement, or a detail in your code, I think you're making your procedure a whole lot more complicated than necessary.

      There's not a need to create the table, then look up the name of the table, then rename the table, then add the primary key constraint all as separate steps, wrapped in a transaction to ensure consistency. Instead you can do it all in one step.

      A few other code-review type notes on your code:

      • You are using nvarchar variables to support Unicode, but doing the assignment using "regular" single quotes. To support Unicode strings, you'll need to use the N prefix when quoting string literals.
      • Inline constraint creation is possible in CREATE TABLE, so the primary key can be declared as part of the table definition.
      • Always use the schema prefix ( https://sqlblog.org/2019/09/12/bad-habits-to-kick-avoiding-the-schema-prefix ), including using the sys schema prefix on sp_executesql.
      • Never rely on the ordinal position of a stored procedure's parameters. Explicitly name the parameters as you pass them. Most developers (including Microsoft for system stored procedures) avoid changing the position of parameters, but if they do, it will break your code if you assume the position. Named parameters never have that problem.

      Here's my version of your procedure:

      CREATE OR ALTER PROCEDURE [lake].[create_terminal_raw_log_table]
          (
              @terminal_name nvarchar(100)
          )
      WITH EXECUTE AS 'dbo'
      AS
          DECLARE @table_name nvarchar(128);
          DECLARE @dynamic_sql nvarchar(1000);
      
      -- We want the table name to be the input value with `_raw_log` appended:
      -- I could skip even using this variable, 
        -- and just use `@terminal_name + N'_raw_log'` 
        -- in the two spots I reference @table_name
        -- but if you use the table name a bunch of times, 
        -- this is easier.
      SET @table_name = @terminal_name + N'_raw_log';
      
      --Use dynamic SQL to create the table
          --With the PK Constraint created in-line.
      SET @dynamic_sql = N'create table [lake].' + QUOTENAME(@table_name) + N'
                      (
                          id bigint not null,
                          [timestamp] datetime2(3) not null,
                          cmd varbinary(max) not null,
                          CONSTRAINT ' + QUOTENAME(N'PK_' + @table_name) + N' 
                              PRIMARY KEY CLUSTERED (id)
                      );';
      EXEC sys.sp_executesql @stmt = @dynamic_sql;
      

      GO

      Make sure you test!

      You'll want to do some quick sanity tests to make sure that your procedure actually works. I like to make sure that I test with unicode characters (I always use emojis), and any other specific concerns (like SQL injection, white space in object names, min or max length, etc).

      For example:

      EXEC [lake].[create_terminal_raw_log_table] @terminal_name = N'nds';
      EXEC [lake].[create_terminal_raw_log_table] @terminal_name = N'amtwo';
      EXEC [lake].[create_terminal_raw_log_table] @terminal_name = N'; DROP PROCEDURE [lake].[create_terminal_raw_log_table];';
      EXEC [lake].[create_terminal_raw_log_table] @terminal_name = N'It Works!! ';
      

      SELECT
      TableName = o.[name]
      FROM sys.objects AS o
      JOIN sys.schemas AS s ON s.schema_id = o.schema_id
      WHERE s.name = N'lake'
      AND o.type = 'U';

      Returns these results:

      TableName
      -------------------------------------------------------------------
      nds_raw_log
      amtwo_raw_log
      ; DROP PROCEDURE [lake].[create_terminal_raw_log_table];_raw_log
      It Works!! _raw_log
      

      (4 rows affected)

      posted in SQL
      courtlanda
    • RE: Optimizing performance when searching on multiple columns with a single search string

      I would try to avoid OR operators in this scenario. I actually discuss this in a recent blog post: https://brendan-mccaffrey.com/2021/12/20/stop-using-the-or-operator/ . While that post was about updates, the concept should translate well here. Basically, using the OR operator will make indexing challenging. You'll need to know which column needs to go first in any index you create, and hope it's selective and will always be supplied.

      Also, you'll want to avoid the leading '%' in your search string, as no index will be able to help in that case. The easiest way to visualize why no index will help here is to think of a phone book. I know, many people probably don't know what these are any more. Now, picture trying to look up a phone number when you're missing the first letter of someone's last name. If you knew the name started with an 'M', you might just flip to the Ms. Then maybe flip a few more pages if you land on 'Ma' but need 'Mc'. In either case, you'll get there fast. But without the first letter, you'll have to look at every record, front to back, of the entire phone book to find a name like '%cCaffrey', for example.
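
      As a minimal sketch against your Waiver table (assuming an index whose leading key is LastName):

      -- Can seek: the index is ordered by LastName, so 'McC%' is a range
      SELECT * FROM dbo.Waiver WHERE LastName LIKE 'McC%';

      -- Cannot seek: a leading wildcard forces a scan of the whole index or table
      SELECT * FROM dbo.Waiver WHERE LastName LIKE '%cCaffrey';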

      Since you're concatenating first name and last name, you'll probably want to consider an index on a computed column, or an indexed view. You may need to experiment to see which one works best in your scenario. For example, here is a computed column with an index.

      ALTER TABLE Waiver ADD FullName AS CONCAT(FirstName, ' ', LastName) PERSISTED
      GO
      CREATE NONCLUSTERED INDEX IX_FullName ON dbo.Waiver (FullName)
      GO
      

      If you were to continue concatenating those columns in your WHERE clause, you'd get a full table scan every time: SQL Server can't know whether the concatenated columns match your condition until it concatenates them for every row in the table. These issues might not be a problem at 30k rows now, but could become big problems as the row counts grow.

      If you must use a generic search box, I would try writing your query somewhat like this. In this case, you could create a separate index for each column. You're using SELECT *, so you'll always be doing key lookups anyway.

      DECLARE @search VarChar(100)
      

      SET @search = 'Smith'

      SELECT *
      FROM Waiver
      WHERE LastName LIKE @search + '%'

      UNION

      SELECT *
      FROM Waiver
      WHERE FullName LIKE @search + '%'

      UNION

      SELECT *
      FROM Waiver
      WHERE Email LIKE @search + '%'

      UNION

      SELECT *
      FROM Waiver
      WHERE DriversLicense LIKE @search + '%'

      I removed FirstName from the search in the example, because it would already be met in the FullName search.

      You would still need to evaluate additional indexes on common search terms.

      With that said, I would advise you to see if you can create several search boxes on the webpage, one for each search condition.

      posted in SQL
      courtlanda
    • Is concept of fill factor useful only when index is rebuilt?

      Fill factor is a concept that applies only when creating a new index or when the index is rebuilt.

      Subsequently, there will be fragmentation due to inserts/updates, which will create new pages (page splits) without respecting the fill factor.

      Is it correct to say that the concept of fill factor is useful only when the index is rebuilt?

      posted in SQL
      courtlanda
    • Log only drop queries

      I'm having an issue where my tables are being dropped, and I don't know how or where.

      But enabling the general log creates 2 million records in less than 10 minutes, and that is during one of my least busy hours.

      I want to log drop queries for at least 24 hours... what's the best way to handle this?

      posted in SQL
      courtlanda
    • RE: UPDATE incredibly slow on small PostgreSQL query - EXPLAIN ANALYZE causes it to hang

      1+GB size for a table with 2k rows means either it has huge text columns or it is highly bloated.

      Why don't you try running VACUUM (ANALYZE, FULL) against the database in question (if you haven't already done that) and then re-check the size of your table?
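
      Something along these lines, as a sketch (the table name is a placeholder, and note that VACUUM FULL takes an exclusive lock on the table while it rewrites it):

      VACUUM (ANALYZE, FULL) public.your_table;

      -- Re-check the table size afterwards
      SELECT pg_size_pretty(pg_total_relation_size('public.your_table'));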

      VACUUM reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables.

      Reference: https://www.postgresql.org/docs/current/sql-vacuum.html (PostgreSQL | Docs)

      posted in SQL
      courtlanda
    • Why am I seeing key lookups for all rows read, instead of all rows matching the where clause?

      I have a table such as the following:

      create table [Thing]
      (
          [Id] int constraint [PK_Thing_Id] primary key,
          [Status] nvarchar(20),
          [Timestamp] datetime2,
          [Foo] nvarchar(100)
      )
      

      with a non-clustered, non-covering index on the Status and Timestamp fields:

      create nonclustered index [IX_Status_Timestamp] on [Thing] ([Status], [Timestamp] desc)
      

      If I query for a 'page' of these rows, using offset/fetch as follows,

      select * from [Thing]
      where Status = 'Pending'
      order by [Timestamp] desc
      offset 2000 rows
      fetch next 1000 rows only
      

      I understand that the query will need to read a total of 3000 rows to find the 1000 that I'm interested in. I would then expect it to perform key lookups for each of those 1000 rows to fetch the fields not included in the index.

      However, the execution plan indicates that it is doing key lookups for all 3000 rows. I don't understand why, when the only criteria (filter by [Status] and order by [Timestamp]) are both in the index.

      [screenshot: execution plan for the original query]

      If I rephrase the query with a cte, as follows, I get more or less what I expected the first query to do:

      with ids as
      (
          select Id from [Thing]
          where Status = 'Pending'
          order by [Timestamp] desc
          offset 2000 rows
          fetch next 1000 rows only
      )
      

      select t.* from [Thing] t
      join ids on ids.Id = t.Id
      order by [Timestamp] desc

      [screenshot: execution plan for the CTE version]

      Some statistics from SSMS to compare the 2 queries:

                       Original    With CTE
      Logical reads    12265       4140
      Subtree cost     9.79        3.33
      Memory grant     0           3584 KB

      The CTE version seems 'better' at first glance, although I don't know how much weight to place on the fact that it incurs a memory grant for a worktable. (The messages from set statistics io on indicate that there were zero reads of any kind on the worktable)

      Am I wrong in saying that the first query should be able to isolate the relevant 1000 rows first (even though that requires reading past 2000 other rows first), and then only do key lookups on those 1000? It seems a bit odd to have to try and 'force' that behaviour with the CTE query.

      (As a minor second question: I'm assuming that the last part of the CTE approach needs to do its own order by on the results of the join, even though the CTE itself had an order by, as the ordering might be lost during the join. Is this correct?)

      posted in SQL
      courtlanda
    • How to automatically update table, whenever data added to another table?

      I have created two tables in MySQL. One (Todo_tbl) holds the data, while the other (Statistics_tbl) holds the sum of one of the columns from Todo_tbl.

      The definitions of these tables follow:

      create table Todo_tbl (
      id INT auto_increment,
      person VARCHAR(45) ,
      task VARCHAR(45) ,
      duration INT(4),
      deadline_day VARCHAR(2),
      deadline_month VARCHAR(2),
      PRIMARY KEY(id)
      )ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;
      

      insert into Todo_tbl values(1,'John', 'dust the floors', 40,04,03);
      insert into Todo_tbl values(2,'Matt', 'do the dishes', 15,02,02);
      insert into Todo_tbl values(3,'Mary', 'dusting', 40,03,02);
      insert into Todo_tbl values(4,'Chloe', 'cleaning the windows', 75,04,05);
      insert into Todo_tbl values(5,'John', 'wash the floors', 60,03,03);
      insert into Todo_tbl values(6,'Bridget', 'take out the trash', 15,03,03);
      insert into Todo_tbl values(7,'Matt', 'do the laundry', 18,02,02);
      insert into Todo_tbl values(8,'Bridget', 'water the plants', 15,03,03);

      select * from Todo_tbl;

      create table Statistics_tbl (
      SELECT person, SUM(duration) as total_duration FROM Todo_tbl GROUP BY person
      );

      select * from Statistics_tbl;

      The problem is that whenever I add new data to the Todo_tbl, it is not considered in the Statistics_tbl.

      Is there any way to connect two tables, so that when there is a change in one, the other will respond as well?

      posted in SQL
      courtlanda
    • RE: Query Performance for Reports in PostgreSQL

      What you are currently doing is key-set pagination (except you don't show the necessary ORDER BY to make it work correctly--I'm assuming it is really there). With many queries and with the appropriate index this can work very well. You mention some concerns about it, but without any details on those concerns it is hard to know how valid and/or addressable they are.

      Both cursor and key-set pagination should be much more efficient than OFFSET/LIMIT when there are a large number of pages, but note that in your example there is no OFFSET. There is just a LIMIT and a key-set for where to start. In other words, OFFSET/LIMIT might be bad, but that is not what you are doing.

      The nice thing about key-set pagination over server-side cursor pagination is that the state is entirely and efficiently managed by the client. If the client doesn't come back for the next page for two weeks (or ever), that is of no concern to the server. With a server-side cursor however, you need some mechanism to hold the database connection open between page visits, and reattach to it. You also need some mechanism to declare a cursor abandoned, lest they accumulate and consume an unbounded amount of resources.
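
      For reference, a minimal key-set pagination sketch (table and column names are hypothetical); the client sends back the last (created_at, id) pair it received and the server continues from there:

      SELECT id, created_at, payload
      FROM events
      WHERE (created_at, id) < ($1, $2)  -- values taken from the last row of the previous page
      ORDER BY created_at DESC, id DESC
      LIMIT 50;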

      auto_explain will still record plans for cursors, as long as the cursor is closed cleanly, either by an explicit CLOSE or by COMMIT.

      posted in SQL
      courtlanda
    • RE: Why does UPDATE FROM fail and MERGE works on a View with an INSTEAD OF Trigger?

      My understanding is this is disallowed because UPDATE...FROM has some quirky behaviours that are maintained due to backward compatibility. Making these work the same when the view has instead-of triggers would be difficult, perhaps impossible.

      MERGE has several weaknesses, but it does have well-defined and sane update semantics.

      The primary example of this is MERGE preventing ambiguous updates, where a target row would be logically changed more than once, something the SQL specification disallows. If nothing in the schema absolutely guarantees a target row cannot be modified more than once, a MERGE plan introduces operators to check this condition at runtime and raise an error if it is encountered.

      This is part of what makes implementing joined-table MERGE on a target with instead-of triggers feasible at all.

      The UPDATE...FROM syntax also has curious binding behaviours (again maintained to avoid breaking old code) when the target table is referenced more than once, sometimes by alias and sometimes not.

      Not directly related to your question, but there is an example of an edge case with CTEs described at https://docs.microsoft.com/en-us/sql/t-sql/queries/update-transact-sql#limitations-and-restrictions :

      When a common table expression (CTE) is the target of an UPDATE statement, all references to the CTE in the statement must match. For example, if the CTE is assigned an alias in the FROM clause, the alias must be used for all other references to the CTE. Unambiguous CTE references are required because a CTE does not have an object ID, which SQL Server uses to recognize the implicit relationship between an object and its alias. Without this relationship, the query plan may produce unexpected join behavior and unintended query results.

      All that said, just because it works with MERGE doesn't mean there aren't bugs lurking. For a semi-related example see https://dba.stackexchange.com/questions/305089/merge-into-a-view-with-instead-of-triggers .

      Pushing hard at the edges of combinations of complex features is a great way to find edge-case bugs.

      posted in SQL
      courtlanda
    • RMAN archive logs not deleted

      I am working on an Oracle 11gR2 database. The database is configured for Data Guard, although the Data Guard destination is set to DEFER.

      I want to delete all archive logs that have been backed up at least 2 times to tape.

      rman> list archivelog all backed up 2 times to device type sbt_tape;
      

      shows a list of archive logs that match the criteria

      However, when I later try to run

      rman> delete force noprompt archivelog all backed up 2 times to device type sbt_tape;
      

      no files are deleted.

      All parameters are set to default values, apart from the retention policy, which is set to a recovery window of 370 days.

      A look into the trace I generated shows, among other things, this message for each of the files that were reported by the list command:

      DBGMISC: Matching archived log has 0 backups
      

      which I do not know how to interpret.

      Any ideas on how to further investigate the issue would be greatly appreciated.

      The RMAN configuration is as follows:

      using target database control file instead of recovery catalog
      RMAN configuration parameters for database with db_unique_name ORACLE are:
      CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 370 DAYS;
      CONFIGURE BACKUP OPTIMIZATION OFF; # default
      CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
      CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
      CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
      CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE SBT_TAPE TO '%F'; # default
      CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
      CONFIGURE DEVICE TYPE SBT_TAPE PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
      CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
      CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
      CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
      CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE SBT_TAPE TO 1; # default
      CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=/opt/tivoli/tsm/client/oracle/bin64/libobk.so';
      CONFIGURE MAXSETSIZE TO UNLIMITED; # default
      CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
      CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
      CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
      CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
      CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ASMARCHIVES/oracle/snap_orcld.f';
      
      posted in SQL
      courtlanda
    • RE: Missing rows in table after restore database from MDF and LDF with SQL Server 2008

      Any rows you're missing would have been inserted into the original instance of the database after you copied the MDF / LDF files (or after whichever copy of the MDF / LDF files you've located, since you say you're missing months of data and this happened after a crash). So it depends on when that version of the MDF / LDF was last modified / copied.

      If you have Full Backups and Transaction Log Backups set up on the original database, you can use a backup restore to get as close as the frequency at which your Transaction Log Backups run (e.g. 5-minute granularity if that's what you schedule them for). And then, if for some reason you don't restore right away and the next Transaction Log Backup(s) run, you can always add the new Transaction Log Backups to your list of backups to restore to keep you current.

      Or you can even restore all the backups you have with the NORECOVERY option ( https://docs.microsoft.com/en-us/sql/t-sql/statements/restore-statements-transact-sql?view=sql-server-ver15#comparison-of-recovery-and-norecovery ), which will leave the database in the restoring state, if you don't plan to bring it online right away but do want to keep it current with the original database. Then, as more Transaction Log Backups are created on the original database, you can use those backup files to roll the copy of the database forward until it's at a point where you want to bring it online. Once brought online, no more Transaction Log Backups can be applied without doing another full restore from all the previous backups back to the nearest Full Backup.
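
      A rough sketch of that roll-forward pattern (the database name and file paths are hypothetical):

      -- Initial restore, deliberately left in the restoring state
      RESTORE DATABASE MyDb_Copy
          FROM DISK = N'X:\Backups\MyDb_Full.bak'
          WITH NORECOVERY;

      -- Apply each transaction log backup as it is produced, still WITH NORECOVERY
      RESTORE LOG MyDb_Copy FROM DISK = N'X:\Backups\MyDb_log_0800.trn' WITH NORECOVERY;
      RESTORE LOG MyDb_Copy FROM DISK = N'X:\Backups\MyDb_log_0815.trn' WITH NORECOVERY;

      -- Only when you want the copy usable (no further log backups can be applied afterwards)
      RESTORE DATABASE MyDb_Copy WITH RECOVERY;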

      posted in SQL
      courtlanda
    • RE: Temporal Table: Multiple inserts on each update?

      No, there is nothing in SQL Server that will generate multiple history table rows from a single row modification to a temporal table.

      The fact that you are seeing history rows with zero validity duration means something is performing multiple data changes to that row within a transaction, as the documentation explains ( https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?#how-do-i-query-temporal-data , emphasis added):

      ℹ Note

      FOR SYSTEM_TIME filters out rows that have period of validity with zero duration (SysStartTime = SysEndTime).

      Those rows will be generated if you perform multiple updates on the same primary key within the same transaction.

      In that case, temporal querying surfaces only row versions before the transactions and ones that became actual after the transactions. If you need to include those rows in the analysis, query the history table directly.

      In the absence of a trigger, external code is submitting multiple data changes to the SQL Server.
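
      As a sketch of the documentation's suggestion to query the history table directly (table names are hypothetical; the history table is whatever HISTORY_TABLE was set to when system versioning was enabled):

      -- FOR SYSTEM_TIME filters out zero-duration versions
      SELECT *
      FROM dbo.MyTemporalTable FOR SYSTEM_TIME ALL
      WHERE Id = 42;

      -- The history table itself still contains them (rows where SysStartTime = SysEndTime)
      SELECT *
      FROM dbo.MyTemporalTable_History
      WHERE Id = 42
      ORDER BY SysStartTime;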

      posted in SQL
      courtlanda
    • ERROR: invalid byte sequence for encoding "UTF8": 0x00 in pg_stat_statements

      I'm attempting to use pg_stat_statements to optimise my queries but ran into an unexpected roadblock.

      SELECT total_plan_time + total_exec_time as total_time, query
      FROM pg_stat_statements
      ORDER BY total_time
      DESC LIMIT 10;
      

      results in ERROR: invalid byte sequence for encoding "UTF8": 0x00

      I looked into this and it seems Postgres doesn't like the NULL character \00 in text fields.

      Most of the existing advice online on this error are for people seeing errors inserting data into Postgres. In which case, the fix seems to be to filter the null character prior to the insert.

      In this case, it seems the data is already IN postgres but it makes the view impossible to query.

      A \d+ pg_stat_statements tells me that the pg_stat_statements view is built from running a function. I've tried to get rid of the character using translate and replace, but no luck.

      Any idea how I can find the offending query with the NULL character? I'm assuming the long term fix is to trace the query with the NULL character, figure out how it's getting in.

      What I've tried so far:

      • I verified that it's definitely bad data by doing a SELECT pg_stat_statements_reset();. The above query immediately works after that for a short while.

      • I did a \ef pg_stat_statements and it seems the first argument is a boolean called showtext. Passing false lets me query the view, which lets me do this:

      SELECT total_plan_time + total_exec_time as total_time, queryid
      FROM pg_stat_statements(false)
      ORDER BY total_time
      DESC LIMIT 10;
      

      Unfortunately, the queryid isn't very useful unless there's some way for me to retrieve the mapped query text without this error.

      Suggestions on how to proceed are appreciated! I'm on PostgreSQL 13.4.

      posted in SQL
      courtlanda