    SOFTWARE TESTING

    Posts made by tahishae

    • Find the youngest customer grouped by province
      SELECT a.province, c.birth_date, c.name 
      FROM customer c
      JOIN address a ON (c.cust_id = a.cust_id) 
      GROUP BY a.province 
      ORDER BY birth_date DESC;
      

      I want to find the youngest customer in each province. The query above doesn't work.

      posted in SQL
      tahishae
    • RE: ODA X8-2M crossrestore with TDE wallet fails with: DCS-10001:Internal error encountered: failed to open the tde password based wallet

      A workaround could be:

      alter system set wallet_root='/stage/backup/wallet/orabackups/etaxxxxx/database/3205039394/CDBET015/tdewallet/' scope=spfile;
      

      SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE open IDENTIFIED BY "xxxxx";

      keystore altered.

      SQL> administer key management set key identified by "xxxxx" with backup;

      keystore altered.

      SQL> administer key management create AUTO_LOGIN keystore from keystore '/stage/backup/wallet/orabackups/etaxxxxxc/database/3205039394/CDBET015/tdewallet/tde' identified by xxxxxxx;

      keystore altered.

      SQL> startup force

      SQL> select * from v$encryption_wallet;

      WRL_TYPE  WRL_PARAMETER                                      STATUS  WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
      --------  -------------------------------------------------  ------  -----------  ---------  --------  ---------  ------
      FILE      /stage/backup/wallet/orabackups/etabonxadbs00-c/   OPEN    AUTOLOGIN    SINGLE     NONE      NO         1
                database/3205039394/CDBET015/tdewallet//tde/
      FILE                                                         OPEN    AUTOLOGIN    SINGLE     UNITED    NO         2
      FILE                                                         OPEN    AUTOLOGIN    SINGLE     UNITED    NO         3

      odacli register-database -c OLTP -s odb1 -sn CDBET015 -t SI -tp
      Enter SYS, SYSTEM and PDB Admin user password:
      Retype SYS, SYSTEM and PDB Admin user password:
      Enter TDE wallet password:
      Retype TDE wallet password:

      odacli describe-job "110a5246-f239-4fab-aad2-c001ae68c2e7"

      Job details

                       ID:  110a5246-f239-4fab-aad2-c001ae68c2e7
              Description:  Database service registration with db service name: CDBET015
                   Status:  Success
                  Created:  March 2, 2022 12:01:50 PM CET
                  Message:
      

      Task Name                              Start Time                     End Time                       Status
      -------------------------------------  -----------------------------  -----------------------------  -------
      TDE parameter validate at destination  March 2, 2022 12:01:55 PM CET  March 2, 2022 12:01:55 PM CET  Success
      Enable OMF parameters                  March 2, 2022 12:01:56 PM CET  March 2, 2022 12:01:56 PM CET  Success
      Setting db character set               March 2, 2022 12:01:56 PM CET  March 2, 2022 12:01:56 PM CET  Success
      Move Spfile to right location          March 2, 2022 12:01:56 PM CET  March 2, 2022 12:02:05 PM CET  Success
      Enable DbSizing Template               March 2, 2022 12:02:05 PM CET  March 2, 2022 12:02:08 PM CET  Success
      Copy Pwfile to Shared Storage          March 2, 2022 12:02:08 PM CET  March 2, 2022 12:02:09 PM CET  Success
      Add Startup Trigger to Open all PDBS   March 2, 2022 12:02:09 PM CET  March 2, 2022 12:02:10 PM CET  Success
      Running DataPatch                      March 2, 2022 12:02:10 PM CET  March 2, 2022 12:02:23 PM CET  Success
      configuring TDE                        March 2, 2022 12:02:23 PM CET  March 2, 2022 12:02:25 PM CET  Success
      Reset Associated Networks              March 2, 2022 12:02:26 PM CET  March 2, 2022 12:02:28 PM CET  Success

      However, this causes the TDE Wallet Management to be "EXTERNAL".

      odacli describe-database -in CDBET015
      …
        TDE Wallet Management: EXTERNAL
                  TDE Enabled: true
      …
      
    • RE: Should JSON columns be separate from a wide table in MySQL?

      The advice you quoted from the documentation is old, and probably only applies to MyISAM tables.

      For InnoDB tables, you can make the query skip reading long datatypes (TEXT/BLOB/VARCHAR/JSON) by just omitting them from your select-list.

      That is, don't use SELECT *, but instead select only the columns you want to read, by name. InnoDB will skip reading extra pages for long columns that are omitted from the select-list. That will probably be a sufficient optimization for you, and does not require you to split the table.
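      As a minimal sketch (the `article` table and its columns are hypothetical, not from the question):

      ```sql
      -- Hypothetical wide InnoDB table with one potentially long JSON column
      CREATE TABLE article (
          id      INT PRIMARY KEY,
          title   VARCHAR(200),
          payload JSON                -- long values are stored off-page
      );

      -- Forces InnoDB to fetch the off-page pages for `payload`:
      SELECT * FROM article WHERE id = 42;

      -- Off-page pages for `payload` are never read:
      SELECT id, title FROM article WHERE id = 42;
      ```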

      Admittedly, InnoDB may store short strings on the same page with the rest of the row, if they fit. That is, if you have a JSON column, but on a given row it happens to be short enough to fit in the same page with the other columns for the respective row, then InnoDB stores them together.

      So the scenario does exist in which one might need to separate the JSON column to its own table, to get that last 0.0001% optimization. But you haven't described that you are operating at the scale that would require this.

      You are optimizing prematurely. This is pretty much by definition, if you haven't actually measured performance to show that you have a problem related to storing the columns together, and that the alternative design fixes that problem.

      There's a reason that Computer Science is a scientific field. You should think like a scientist, and make an experiment to measure performance with both table designs. Then you'll know that you aren't optimizing prematurely.

    • RE: Server slave, replication

      Replication requires several things to be set.

      log_bin -- to say that replication is turned on
      log_slave_updates  -- turn it on
      innodb_flush_log_at_trx_commit -- not directly relevant to replication
      SQL_LOG_BIN  -- 0 says not to replicate subsequent writes in this connection
      plus all that is set up by CHANGE MASTER
      
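      A minimal CHANGE MASTER sketch (the host, credentials, and binlog coordinates below are placeholders, not values from the question):

      ```sql
      -- Run on the replica; afterwards, verify with SHOW SLAVE STATUS
      CHANGE MASTER TO
          MASTER_HOST = 'primary.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = 'secret',
          MASTER_LOG_FILE = 'mysql-bin.000001',
          MASTER_LOG_POS = 4;
      START SLAVE;
      ```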

      The intent of a Replica is to always have a full copy of the Primary. Is that your goal? Or maybe "take a snapshot of the Primary, but don't keep it updated"?

      Snapshot

      If you don't need the Replica to be kept up to date, then here are some ideas for capturing an occasional snapshot:

      • LVM -- best for least impact.
      • mysqldump and use mysql to load on the target server -- straightforward, but invasive.
      • Use replication even if you don't need the rest of the repl benefits.
      • stop both servers; copy the disk over
    • RE: Where to install SQL on EC2

      You’ll want to follow best practices for your data and log files whether you’re running on EC2 or on-premises. Think of EC2 as just another VM; it just happens to be running in AWS.

      The big difference is how you provision storage. You’ll want to provision a persistent volume in the web console and mount it to the server where you have SQL installed. Once the new drive letter is available in the OS, you can create your data files there instead of on the system drive.

      Make sure to choose the proper storage type for SQL Server. While you’re at it, make sure to provision storage for TempDB.

    • RE: catch and understand transaction rollback

      Do you have an idea why an autocommit transaction has a transaction_state of "Rollback" with no errors?

      The client can abort a running batch by sending an "Attention" ( https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-tds/dc28579f-49b1-4a78-9c5f-63fbda002d2e ). This is how you cancel a running query in SSMS, and this is how timeouts work.

      You can add the sqlserver.attention event to your trace, eg

      CREATE EVENT SESSION [attentions] ON SERVER
      ADD EVENT sqlserver.attention(
          ACTION(sqlserver.sql_text))
      ADD TARGET package0.event_file(SET filename = N'attentions');

      ALTER EVENT SESSION [attentions] ON SERVER STATE = START;
      
    • Strange behaviour of force plan in query store

      I forced a plan in Query Store. The plan belongs to a procedure called from a job which runs once a day. One of the steps of this job is just:

      EXEC [schema].[LoadData]
      

      Procedure [schema].[LoadData] looks like

      TRUNCATE TABLE [schema].[Data];
      
      INSERT INTO [schema].[Data]
             ([A1],
             [A2],
            .
            .
            .,
            [A49]
      )                
      SELECT *
          ,CURRENT_TIMESTAMP AS [Insert TimeStamp] 
      FROM [schema].[View]   
      

      where [schema].[View] is a view which contains some CTEs and uses synonyms (connecting to tables in different databases).

      In Query Store, the execution history looks like this: [screenshot]

      To test if plan forcing is working, I followed the steps below:

      1. Ran the query in SSMS -> EXEC [schema].[LoadData]
      2. The above execution was treated as a different query, so I did not see anything new in Query Store for query = 7
      3. A DBA created a new job with just one step running the query -> EXEC [schema].[LoadData]
      4. A run of the above newly created job produced Plan Id = 29800

      Question: why was the execution plan not forced? The "forced plan failure count" column shows 0.

    • Postgres function for NVL

      I'm trying to create an NVL function in Postgres.

      create or replace function nvl (anyelement, anyelement)
      returns anyelement language sql as $$
          select coalesce(cast( $1 as decimal), cast( $2 as decimal))
      $$;
      

      However, this fails for the following examples:

      testdb=> select nvl(1,2);
      ERROR:  return type mismatch in function declared to return integer
      DETAIL:  Actual return type is numeric.
      CONTEXT:  SQL function "nvl" during inlining
      

      testdb=> SELECT nvl( sum(balance), 0 ) as b FROM db.bank WHERE user = 123;
      ERROR: function nvl(numeric, integer) does not exist
      LINE 1: SELECT nvl( sum(balance), 0 ) as b FROM db.bank...
      ^
      HINT: No function matches the given name and argument types. You might need to add explicit type casts.

      When I change it to:

      create or replace function nvl (anyelement, anyelement)
      returns anyelement language sql as $$
          select case when $1 is null then $2 else $1 END 
      $$;
      

      The first example works. But I still have failures with:

      testdb=> SELECT nvl( sum(balance), 0 ) as b FROM db.bank WHERE user = 123;
      ERROR:  function nvl(numeric, integer) does not exist
      LINE 1: SELECT nvl( sum(balance), 0 ) as b FROM db.bank...
      

      Would love some help fixing this.

    • RE: Where to submit bugs for SQL Management Studio 18.x?

      Unfortunately, the link within SSMS is currently broken:

      [screenshot: SSMS Help menu]

      This should be fixed for the next release.

      In the meantime, it is possible to submit feedback at https://aka.ms/sqlfeedback . This shortlink should always point to the correct page.

      It covers all SQL Server items, including Azure Data Studio... if anyone actually uses it.

    • How to avoid replication lag in case all writes on master and reads on replica?

      I am stuck on a replication lag problem. I am trying to refactor my DB infrastructure. The first step was a read-write split, which I used ProxySQL to implement. Now I have a master DB where all INSERTs and UPDATEs execute, and 2 replicas to which I route all SELECTs.

      But the main problem I face is replication lag. Usually when you change something in the DB you immediately read the data back and expect to see the new data in the query result, but with replication lag I receive outdated data. After googling I found this article https://www.percona.com/blog/2018/11/29/mysql-high-availability-stale-reads-and-how-to-fix-them/ and according to it the best option in 2018 was "ProxySQL 2.0 GTID consistent reads".

      Any updates since 2018? Maybe you know a better solution? Please share!

    • RE: RDS slow queries after few weeks

      Suggestions and observations to consider for reducing the query's completion time.

      CREATE INDEX bicycles_ndx_status_number ON bicycles (status, number);  
      

      Test your query to see if the same results are provided, but faster.

      Run EXPLAIN SELECT (your query) and observe the ROWS column value. The posted EXPLAIN had ROWS at 2,082; the index should reduce that number significantly, saving time.
      

      Observation, AND Bicycle.number

      Observation 2: singular vs. plural table name qualifiers. It is confusing to me when both are used, but it may be best for you.

      Observation 3: the first line of defense when any query is slow is ANALYZE TABLE table_name;

      Run it for EACH table to ensure the indexes are CURRENT (they could have somehow become corrupted). This should be the first attempt at correcting any SLOW query.

      It would not help you much here, though, because you NEED the additional BICYCLES multi-column index for high performance.

    • RE: How to query for Amazon RDS instance configuration?

      Unlike EC2 instances ( https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html ) and ECS tasks ( https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint.html / https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-metadata-endpoint-fargate.html ), there is no metadata service you can query for those details. The best you can get is from system-level views that might provide the hostname. However, I would be cautious about relying on that information: in a fully managed environment, failovers and instance replacements happen automatically 99% of the time, so the values you query could be different the next time, and may or may not follow a standard nomenclature.

      The only way to get that information is to make a call to the RDS API endpoint for the cluster and/or instance that you are looking for.

      If you are running the SQL via an external process that connects to the instance, you could put those checks as a prerequisite to connecting and running the script. However, if you are manually running the script by connecting directly to the instance and hoping to prevent an "oops" action, it's going to be up to the person taking the action to check their work.

    • RE: Fastest way to get total block time?
      SELECT ISNULL(SUM(wait_duration_ms) / 1000, 0) AS TotalBlockTime_s
      FROM sys.dm_os_waiting_tasks
      WHERE wait_type IN (
          SELECT wait_type
          FROM sys.dm_os_wait_stats
          WHERE wait_type LIKE '%LCK%'
      );
      

      Not perfect, but the above should suffice.

    • RE: How to change default compression mode in mariaDB Columnstore?

      I don't know the details of INSERT in Columnstore, but I would suspect the following:

      • Inserting one row is quite costly.
      • The code will consider the tradeoffs when adding more data row-by-row.
      • Probably: Batch inserting (especially of 64K or more rows) is better optimized.

      So... How fast are you inserting rows? Are they being inserted in clumps? What I might do is to gather rows until

      • I have collected X rows, OR
      • Y minutes have passed

      After whichever of those happens, then I would do a batch insert.

      X might be 10K; Y might be 20 minutes. But those depend on the desired performance characteristics of the application.

      (For InnoDB, I recommend batches of 100-1000 or 1MB of gathered Insert text. This runs about 10 times as fast as one-row-at-a time.)
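      The batched pattern might look like this sketch (the table and values are hypothetical, not from the question):

      ```sql
      -- One multi-row INSERT instead of three single-row round trips
      INSERT INTO readings (sensor_id, reading, taken_at) VALUES
          (17, 20.5, '2022-03-01 12:00:00'),
          (17, 20.7, '2022-03-01 12:01:00'),
          (18, 19.9, '2022-03-01 12:01:00');
      ```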

    • PostgreSQL - How to recover a DB after a system re-install

      I had to reinstall the system on my laptop and change the SSD (C:). My Postgres was installed on another disk (D:); this drive was fine and I still use it, but the Postgres client was deleted with the system reinstallation.

      I don't have any backup of the DB.

      Now I wonder if there is any option to copy/recover the data from the old Postgres (I have all the folders)?

      Can I install Postgres in the same place it used to be on the hard drive (D:)? Or won't that work, or would it overwrite the existing data?

    • RE: Migrating from old AlwaysOn Availability Group to new AG cluster but keep ip addresses

      Since it's all in the same subnet, when you cut over just remove the availability group listener ( https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/remove-an-availability-group-listener-sql-server?view=sql-server-ver15 ) from the old cluster and create a listener ( https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/create-or-configure-an-availability-group-listener-sql-server?view=sql-server-ver15 ) with the same name and IP on the new cluster.

      There's no need to change the IP addresses of the new nodes to match the IP addresses of the old servers. The App should be connecting to the AG Listener IP only.
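      In T-SQL, the cutover might look like this sketch (the AG names [AG_OLD] and [AG_NEW] are hypothetical; the listener name and IP follow the example below):

      ```sql
      -- On the old cluster's primary replica: drop the listener
      ALTER AVAILABILITY GROUP [AG_OLD] REMOVE LISTENER N'SQLLSN';

      -- On the new cluster's primary replica: recreate it with the same name and IP
      ALTER AVAILABILITY GROUP [AG_NEW]
          ADD LISTENER N'SQLLSN' (WITH IP ((N'10.10.12.12', N'255.255.255.0')), PORT = 1433);
      ```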

      So you end up with:

      LISTENER : SQLLSN : 10.10.12.12
      

      SQL_NEW1 - 10.10.12.5
      SQL_NEW2 - 10.10.12.6
      SQL_NEW3 - 10.10.12.7

    • RE: Environment variables in Postgres C extension

      That should work just fine. Note, however, that it will take the variable from the environment of the PostgreSQL server process, not your current client process.

    • Is the data exchange on connection from client to server sent as plain text over the wire?

      I have a new SQL Server 2019 on-premises instance. Assuming the client is the SQL Server Management Studio or any other application:

      1. When the client connects to the server using SQL Server authentication, does the connection request go from the client to the server as plain text? In other words, can an attacker see the username/password on the wire?

      2. Subsequent to the authentication, and assuming TLS is not configured, then is the query (example: SELECT) and its output visible as plain text over the wire?

    • Why does this TVF throw error 9820 with GETDATE() as an input parameter?

      I am testing on SQL Server 2019 CU14. Consider the following table-valued function created against a SQL Server database with compatibility level 130 or 140:

      -- requires database compat 130 or 140 to see the issue
      CREATE OR ALTER FUNCTION [dbo].[TVF_BUG_REPRO_2] (@O DateTime, @Z varchar(50))
      RETURNS TABLE
      WITH SCHEMABINDING
      AS
      RETURN
      SELECT
      CAST(
             CASE
                    WHEN SZ.Zone1 > '' THEN (@O at time zone SZ.Zone1) at time zone 'Pacific Standard Time' 
                    WHEN LEN(@Z) > 3 THEN (@O at time zone @Z) at time zone 'Pacific Standard Time'
                    ELSE @O
             END AS DATETIME
      ) ConvertedDate
      FROM (SELECT CASE @Z WHEN 'ET' THEN 'Eastern Standard Time' ELSE NULL END Zone1) SZ;
      

      The following query executes without error and returns the expected results:

      SELECT * FROM [dbo].[TVF_BUG_REPRO_2] ('2022-01-10 16:16:51.327', 'ET');
      

      The following query unexpectedly throws an error:

      SELECT * FROM [dbo].[TVF_BUG_REPRO_2] (GETDATE(), 'ET');
      

      The error message is:

      Msg 9820, Level 16, State 1, Line 27

      The time zone parameter 'ET' provided to AT TIME ZONE clause is invalid.

      I don't understand why I'm getting an error here. That code should never execute. Running the code without a function as a simple SELECT also does not throw any errors.

      Why does that query fail? Could this be a bug in SQL Server?

    • SQL Server RAM usage - how to find out where it is going?

      Short Version

      SQL Server is using 34 GB of RAM. But when I query the Memory Consumption Report, the buffer pool size, and the ad-hoc query size, they only add up to around 2 GB. What is the other 32 GB of RAM doing?

      Pre-emptive: "You should cap the amount of RAM SQL Server can use." Let's say it is capped to x. That simply changes my question to: "What is the other x GB of RAM doing?"

      Long Version

      I have an instance of SQL Server that is consuming 32 GB of RAM:

      [screenshot: Task Manager memory usage]

      That's not 32 GB of virtual memory; it's actually consuming 32 GB of physical memory (on the RAM chips) - known as "working set".

      And it's not as if it is shared with some other process. Essentially all of that is private to SQL Server:

      [screenshot: private working set]

      • Private Working Set: 33,896,700 K

      What is it doing with all that RAM?!

      Bufferpool memory usage by database

      So we query memory usage by database - as the buffer pool caches pages from the databases:

      --Memory usage server wide
      ;WITH src AS
      (
          SELECT
              database_id,
              COUNT_BIG(*) AS db_buffer_pages
          FROM sys.dm_os_buffer_descriptors
          --WHERE database_id BETWEEN 5 AND 32766
          GROUP BY database_id
      )
      SELECT
          CASE [database_id] WHEN 32767 THEN 'Resource DB' ELSE DB_NAME([database_id]) END AS [Database Name],
          db_buffer_pages AS BufferPages,
          db_buffer_pages /128.0 AS BufferMB
      FROM src
      ORDER BY db_buffer_pages DESC
      OPTION(RECOMPILE, MAXDOP 1);
      

      For a grand total of 4.5 MB of the 32 GB.

      TempDB uses the most (1.4 MB), and the rest go down from there:

      [screenshot: buffer pool usage by database]

      5 MB of 32 GB - doesn't account for much

      Yes, that might seem low - but that's likely because I called DBCC DROPCLEANBUFFERS first.

      Query plan cache

      Next we query the query plan cache. All those T-SQL statements have to be compiled into plans, and those plans are cached in RAM.

      --Server-wide memory usage of plan cache
      SELECT
          [cacheobjtype], ObjType,
          COUNT(1) AS Plans,
          SUM(UseCounts) AS UseCounts,
          SUM(CAST(size_in_bytes AS real)) / 1024.0 / 1024 AS [SizeMB]
      FROM sys.dm_exec_cached_plans
      --where [cacheobjtype] = 'Compiled Plan' and [objtype] in ('Adhoc', 'Prepared')
      GROUP BY CacheObjType, ObjType
      ORDER BY SizeMB DESC
      OPTION(RECOMPILE, MAXDOP 1)
      

      Now we can see how much memory is used to store various query plans:

      cacheobjtype   ObjType  Plans  UseCounts  SizeMB
      Compiled Plan  Proc     3      4          0.21875
      Parse Tree     UsrTab   1      1          0.03125
      Parse Tree     View     1      6          0.0234375

      For a grand total of 250 KB - far short of the missing 32 GB.

      Note: Yes, that might seem low - but that's likely because I called DBCC FREEPROCCACHE first.

      Memory Consumption Report

      The above queries show me RAM used by:

      • the buffer pool (to cache in memory database pages from disk)
      • the query plan cache

      And that's really all there is. But SQL Server does provide a Memory Consumption Report:

      This report provides detailed data on the memory consumption of components within the Instance

      Narrator: "It doesn't"

      The report is a little hard to read:

      [screenshot: Memory Consumption report]

      But in the end the breakdown is:

      • MEMORYCLERK_SOSNODE: 131,832 KB
      • MEMORYCLERK_SOSMEMMANAGER: 71,464 KB
      • USERSTORE_DBMETADATA: 67,432 KB
      • USERSTORE_SCHEMAMGR: 55,784 KB
      • MEMORYCLERK_SQLSTORENG: 54,280 KB
      • MEMORYCLERK_SQLBUFFERPOOL: 30,576 KB
      • Others: 145,056 KB

      That gives a grand total of: 556,424 KB → 544 MB

      Even if we round that up to 1 GB: it's still a far cry from 32 GB.
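      For reference, the same per-clerk breakdown can be pulled straight from the DMVs (a sketch; the `pages_kb` column exists on SQL Server 2012 and later):

      ```sql
      -- Top memory clerks by allocated memory, server-wide
      SELECT TOP (10)
          [type],
          SUM(pages_kb) / 1024.0 AS SizeMB
      FROM sys.dm_os_memory_clerks
      GROUP BY [type]
      ORDER BY SizeMB DESC;
      ```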

      So where is the memory going?

      Yes, I can cap SQL Server to 25 GB of RAM. But that would just change my question to:

      What is SQL Server using 25 GB of RAM for; where is the memory going?

      Because this sounds an awful lot like a memory leak to me.

      • Server: SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64)

      Server up-time

      We can query for server up-time (creation of tempdb):

      --Use creation date of tempdb as server start time
      SELECT SERVERPROPERTY('SERVERNAME') AS ServerName,
             create_date AS ServerStartedDate
      FROM sys.databases
      WHERE name = 'tempdb';

      • Server Started Date: 2021-12-21 15:46:26.730

      CLR Assemblies

      SELECT * FROM sys.assemblies
      
      name:                Microsoft.SqlServer.Types
      principal_id:        4
      assembly_id:         1
      clr_name:            microsoft.sqlserver.types, version=11.0.0.0, culture=neutral, publickeytoken=89845dcd8080cc91, processorarchitecture=msil
      permission_set:      3
      permission_set_desc: UNSAFE_ACCESS
      is_visible:          1
      create_date:         2012-02-10 20:15:58.843
      modify_date:         2012-02-10 20:15:59.427
      is_user_defined:     0

      Linked servers

      select provider, provider_string from sys.servers

      provider provider_string
      SQLNCLI NULL
      MSIDXS NULL
      search.collatordso NULL
      DB2OLEDB Package Collection=▒▒▒▒▒▒▒▒;Network Address=▒▒▒▒▒;Network Port=50000;Connection Timeout=0;