    Posts made by emmalee

    • RE: SSIS Duplicate Entry for key PRIMARY

      I have something similar running in our environment. Here is how we did it. You only need to track what you have already sent from your SQL Server database. Create an SSIS package to load the data into MySQL, but filter it using a control table that you create; that table stores the primary key of the last record sent.

      The table should look like this.

      Tablename | Key
      Customer  | 101
      Accounts  | 521

      In the last step of the package, write the maximum primary key ID of the table you just sent into this control table.

      Then the next time the package runs, just filter the data being sent using that table, for example: SELECT * FROM Customer WHERE PK_ID > 101. Just make sure your package has the correct error handling. Hopefully this makes sense.
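
      A minimal sketch of that watermark pattern (table and column names here are placeholders, not from the original setup):

      -- Control table: one row per source table, holding the last key already sent
      CREATE TABLE dbo.ExtractWatermark
      (
          TableName sysname NOT NULL PRIMARY KEY,
          LastKey   int     NOT NULL
      );

      -- Source query for the SSIS data flow: only rows newer than the watermark
      SELECT *
      FROM dbo.Customer
      WHERE PK_ID > (SELECT LastKey FROM dbo.ExtractWatermark WHERE TableName = N'Customer');

      -- Final step of the package: move the watermark forward
      UPDATE dbo.ExtractWatermark
      SET LastKey = (SELECT MAX(PK_ID) FROM dbo.Customer)
      WHERE TableName = N'Customer';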

      posted in SQL
    • RE: Whether allocation order scan is with respect to clustered or non clustered index?

      SQL Server allocation order scans are driven by the allocation information contained in IAM pages. Each rowstore index or heap will have an IAM page for each 4 GB range of the database data files in which it has any database pages.

      IAM-driven scans are the only available scan choice for a rowstore heap, because a heap has no index order in which to organize a scan.

      But rowstore indexes - whether non-clustered or clustered - have both IAM pages and a b-tree structure for their index order. Both non-clustered and clustered indexes have the option of selecting an allocation order scan or an index order scan.

      More detailed information can be found in this Paul White blog post:

      Allocation Order Scans (23 January 2015), https://sqlperformance.com/2015/01/t-sql-queries/allocation-order-scans
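
      As a rough illustration (my own example, not from the post above): one situation in which the engine may choose an allocation order scan is an unordered scan of a large table under a table-level lock, since the results no longer have to come back in index order:

      -- Hypothetical table; with TABLOCK held and no ORDER BY required,
      -- the storage engine is free to read pages in allocation (IAM) order
      SELECT COUNT_BIG(*)
      FROM dbo.SomeLargeTable WITH (TABLOCK);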

      posted in SQL
    • Postgresql Mirror a table schema in another table

      I have the following table, asset_historical_data, in a PostgreSQL database with historical minute-by-minute asset data.

      timestamp            asset_ticker  asset_price  asset_market_cap_bn
      02/28/2022 10:00:00  ABC           7.77         1.01342
      02/28/2022 10:00:00  XYZ           10.03        2.12233
      02/28/2022 10:01:00  ABC           8.77         1.71342
      02/28/2022 10:01:00  XYZ           10.05        2.13233

      However, this table is quite large, and in production we only need hourly asset data. I don't want to drop any data that we collect, so I am thinking of creating a new table called asset_historical_data_filtered which will only have hourly asset data, as follows:

      timestamp            asset_ticker  asset_price  asset_market_cap_bn
      02/28/2022 10:00:00  ABC           7.77         1.01342
      02/28/2022 10:00:00  XYZ           10.03        2.12233

      I want this new table to mirror the schema of the asset_historical_data table. In particular, I want any schema changes in the asset_historical_data table (such as the addition of a column) to be reflected in the asset_historical_data_filtered table. This is necessary because both tables essentially contain the same data, just at different granularity. Is that possible in PostgreSQL?

      I use alembic to manage schema revisions, so a solution with alembic will be preferred.
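
      For what it's worth, the one-time version of this is easy (a sketch; it copies the schema as it exists at creation time, but it does not track later changes, which is exactly the part I am asking about):

      -- Copies columns, defaults, constraints and indexes as they exist right now
      CREATE TABLE asset_historical_data_filtered
        (LIKE asset_historical_data INCLUDING ALL);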

      posted in SQL
    • MySql suddenly started to choose different index for query, which causes it to be slow

      The database is from a Magento 2 site. The issue suddenly started to occur on our live site; the dev site has the same database, just with older data, and the same query runs completely fine there. I've also tried dumping those three tables from dev and importing them into the live database, but the issue persists.

      I found out with EXPLAIN that the query suddenly changed which index it uses for the catalog_product_website table; there are two possible keys and it no longer picks the PRIMARY key it used before. This is from the live database, which is the problematic one:

      +------+-------------+-----------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------+---------+----------------------------------+-------+--------------------------------------------------------------+
      | id   | select_type | table                 | type   | possible_keys                                                                                                                                                                                 | key                                                        | key_len | ref                              | rows  | Extra                                                        |
      +------+-------------+-----------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------+---------+----------------------------------+-------+--------------------------------------------------------------+
      |    1 | SIMPLE      | e                     | ALL    | PRIMARY                                                                                                                                                                                       | NULL                                                       | NULL    | NULL                             | 97344 | Using where; Using temporary; Using filesort                 |
      |    1 | SIMPLE      | at_status             | eq_ref | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID | 8       | magento2.e.entity_id,const,const | 1     |                                                              |
      |    1 | SIMPLE      | at_visibility         | ref    | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE     | 4       | const,const                      | 1     | Using where                                                  |
      |    1 | SIMPLE      | product_website       | range  | PRIMARY,CATALOG_PRODUCT_WEBSITE_WEBSITE_ID                                                                                                                                                    | CATALOG_PRODUCT_WEBSITE_WEBSITE_ID                         | 2       | NULL                             | 97960 | Using where; Using index; Using join buffer (flat, BNL join) |
      |    1 | SIMPLE      | at_status_default     | eq_ref | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID | 8       | magento2.e.entity_id,const,const | 1     |                                                              |
      |    1 | SIMPLE      | at_visibility_default | eq_ref | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID | 8       | magento2.e.entity_id,const,const | 1     |                                                              |
      +------+-------------+-----------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------+---------+----------------------------------+-------+--------------------------------------------------------------+
      

      And this is from the dev database, which runs the same query just fine, like the live one did before:

      +------+-------------+-----------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------+---------+---------------------------------+-------+-----------------------------+
      | id   | select_type | table                 | type   | possible_keys                                                                                                                                                                                 | key                                                        | key_len | ref                             | rows  | Extra                       |
      +------+-------------+-----------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------+---------+---------------------------------+-------+-----------------------------+
      |    1 | SIMPLE      | e                     | ALL    | PRIMARY                                                                                                                                                                                       | NULL                                                       | NULL    | NULL                            | 94528 | Using where; Using filesort |
      |    1 | SIMPLE      | at_status_default     | eq_ref | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID | 8       | magento2.e.entity_id,const,const | 1     |                             |
      |    1 | SIMPLE      | at_status             | eq_ref | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID | 8       | magento2.e.entity_id,const,const | 1     |                             |
      |    1 | SIMPLE      | at_visibility         | ref    | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE     | 4       | const,const                     | 1     | Using where                 |
      |    1 | SIMPLE      | at_visibility_default | eq_ref | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID,CATALOG_PRODUCT_ENTITY_INT_STORE_ID,CATALOG_PRODUCT_ENTITY_INT_ATTRIBUTE_ID_STORE_ID_VALUE | CATALOG_PRODUCT_ENTITY_INT_ENTITY_ID_ATTRIBUTE_ID_STORE_ID | 8       | magento2.e.entity_id,const,const | 1     |                             |
      |    1 | SIMPLE      | product_website       | eq_ref | PRIMARY,CATALOG_PRODUCT_WEBSITE_WEBSITE_ID                                                                                                                                                    | PRIMARY                                                    | 6       | magento2.e.entity_id,const       | 1     | Using index                 |
      +------+-------------+-----------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------+---------+---------------------------------+-------+-----------------------------+
      

      I've tried ANALYZE and OPTIMIZE on all of those tables, but unfortunately that didn't help. I haven't found any structural changes to those tables either; everything seems to be the same.
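
      As a temporary workaround I am considering pinning the index choice on that join with a hint (a sketch only; the real query is generated by Magento, so the hint would have to be injected into the generated SQL):

      ...
      INNER JOIN `catalog_product_website` AS `product_website` FORCE INDEX (PRIMARY)
          ON product_website.product_id = e.entity_id
          AND product_website.website_id IN (2)
      ...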

      What could be causing this issue?

      The query is automatically generated by Magento. This particular query is not the only problematic one; other queries that use those three tables and the same logic seem to suffer from the same issue. Sample query:

      SELECT
         `e`.*,
         IF(at_status.value_id > 0, at_status.value, at_status_default.value) AS `status`,
         IF(at_visibility.value_id > 0, at_visibility.value, at_visibility_default.value) AS `visibility` 
      FROM
         `catalog_product_entity` AS `e` 
         INNER JOIN
            `catalog_product_entity_int` AS `at_status_default` 
            ON (`at_status_default`.`entity_id` = `e`.`entity_id`) 
            AND 
            (
               `at_status_default`.`attribute_id` = '97'
            )
            AND `at_status_default`.`store_id` = 0 
         LEFT JOIN
            `catalog_product_entity_int` AS `at_status` 
            ON (`at_status`.`entity_id` = `e`.`entity_id`) 
            AND 
            (
               `at_status`.`attribute_id` = '97'
            )
            AND 
            (
               `at_status`.`store_id` = 2
            )
         INNER JOIN
            `catalog_product_entity_int` AS `at_visibility_default` 
            ON (`at_visibility_default`.`entity_id` = `e`.`entity_id`) 
            AND 
            (
               `at_visibility_default`.`attribute_id` = '99'
            )
            AND `at_visibility_default`.`store_id` = 0 
         LEFT JOIN
            `catalog_product_entity_int` AS `at_visibility` 
            ON (`at_visibility`.`entity_id` = `e`.`entity_id`) 
            AND 
            (
               `at_visibility`.`attribute_id` = '99'
            )
            AND 
            (
               `at_visibility`.`store_id` = 2
            )
         INNER JOIN
            `catalog_product_website` AS `product_website` 
            ON product_website.product_id = e.entity_id 
            AND product_website.website_id IN
            (
               2
            )
      WHERE
         (
      (`e`.`created_at` > '2021-01-01 00:00:00')
         )
      ORDER BY
         `e`.`created_at` ASC LIMIT 50
      
      posted in SQL
    • RE: Why does OPTION RECOMPILE cause a predicate pushdown?

      Now, my question is why would the OPTION RECOMPILE plan behave differently than when just generating a new plan for a new query, with the same parameters / values passed to it. What does the OPTION RECOMPILE do?

      The main thing OPTION (RECOMPILE) does is to compile a plan for the current values of any parameters, rather than reusing any cached plan. The newly-generated plan is not cached for reuse.

      The second thing it enables is the parameter embedding optimization (https://sqlperformance.com/2013/08/t-sql-queries/parameter-sniffing-embedding-and-the-recompile-options#the-parameter-embedding-optimization). SQL Server replaces any parameters with their literal values before optimization.

      This might sound trivial, but it can enable important simplifications. For example, any type conversions or complex expressions the value is part of can be evaluated early (constant folding). Note the CONVERT_IMPLICIT in your screenshot to convert the supplied varchar value to nvarchar. I note your database has forced parameterization enabled.

      Your plans are large and anonymized, but I would suggest parameter embedding and the resulting major simplifications it can enable are responsible for the dramatic performance improvement.

      When you run the same query without OPTION (RECOMPILE) the same simplifications are not possible because SQL Server cannot safely embed the parameter values, since the plan might be reused for different values.

      More information and background is in my article linked above. It contains a worked example showing step by step how parameter embedding and constant folding can improve an execution plan.
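
      As a tiny illustration (table and parameter names are made up, not taken from your plans):

      -- Without RECOMPILE: @d stays a parameter, so the cached plan must be safe for any value
      SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= @d;

      -- With RECOMPILE: @d is replaced by its runtime literal before optimization,
      -- so implicit conversions and expressions involving it can be constant-folded
      SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= @d
      OPTION (RECOMPILE);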

      posted in SQL
    • BACKUP DATABASE permission denied when using AWS RDS

      Our team is migrating to the cloud and we have chosen RDS for MSSQL to host our current database server. As part of that, we have a lot of stored procedures that need to be modified. Some of them back up certain databases to disk. As this is RDS and we do not have direct access to the server, I have modified a backup proc to point to the D:\S3\ path, since according to the S3 integration documentation (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/User.SQLServer.Options.S3-integration.html#Appendix.SQLServer.Options.S3-integration.enabling) that location should be accessible.

      That said, when I run the below stored procedure I get an error:

      CREATE PROCEDURE TestStoredProcGeorgi
      AS
      BEGIN
        BACKUP DATABASE [Status] TO DISK = 'D:\S3\status.bak'
          WITH RETAINDAYS = 0,
              INIT;
      END
      GO
      

      When I execute it with exec TestStoredProcGeorgi, this is the error I get:

      Msg 262, Level 14, State 1, Procedure TestStoredProcGeorgi, Line 4 [Batch Start Line 9]
      BACKUP DATABASE permission denied in database 'Status'.
      Msg 3013, Level 16, State 1, Procedure TestStoredProcGeorgi, Line 4 [Batch Start Line 9]
      BACKUP DATABASE is terminating abnormally.
      

      Note: I have enabled S3_Integration on the RDS Instance

      What am I missing here?
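
      For reference, RDS also documents a native backup procedure that writes directly to S3 (it needs its own option group / IAM setup, separate from S3 integration, if I understand correctly); a rough sketch of that call, with parameter names as given in the AWS docs and a hypothetical bucket name:

      EXEC msdb.dbo.rds_backup_database
           @source_db_name = N'Status',
           @s3_arn_to_backup_to = N'arn:aws:s3:::my-bucket/status.bak';  -- bucket is a placeholder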

      posted in SQL
    • Importing data into Mysql slowly

      The file is 23 GB in size; it was created with a dump from MySQL Workbench.

      It starts well, but when it reaches about a third of the data, the import becomes extremely slow.

      What do you suggest to improve the performance of this import?

      The result is very slow. Can someone help me?

      I import using:

      mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS = 0; \
          SET UNIQUE_CHECKS = 0" -v  \
          2>./logdeimport.txt \
          -u usrimport -p \
          --default-character-set=utf8 Products_in \
          < Prd_001.sql  
      

      variables:

      key_buffer_size                    8 MB |
      query_cache_size                   0 MB |
      innodb_buffer_pool_size            49152  MB |
      innodb_additional_mem_pool_size    0  MB |
      innodb_log_buffer_size             6144  MB |
      BASE MEMORY                        55304  MB |
      sort_buffer_size                   0.250 MB |
      read_buffer_size                   0.125 MB |
      read_rnd_buffer_size               0.250 MB |
      join_buffer_size                   0.250 MB |
      thread_stack                       0.273 MB |
      binlog_cache_size                  0.031 MB |
      tmp_table_size                     16  MB |
      MEMORY PER CONNECTION              17.180 MB |
      Max_used_connections               5 |
      max_connections                    151 |
      TOTAL (MIN)                        55389.898 MB |
      TOTAL (MAX)                        57898.133 MB |
      
      SHOW ENGINE INNODB status \G;
      *************************** 1. row ***************************
        Type: InnoDB
        Name:
      Status:
      =====================================
      2022-02-23 11:59:21 0x7f5571487700 INNODB MONITOR OUTPUT
      =====================================
      Per second averages calculated from the last 11 seconds
      -----------------
      BACKGROUND THREAD
      -----------------
      srv_master_thread loops: 13799 srv_active, 0 srv_shutdown, 243 srv_idle
      srv_master_thread log flush and writes: 0
      ----------
      SEMAPHORES
      ----------
      OS WAIT ARRAY INFO: reservation count 115564
      OS WAIT ARRAY INFO: signal count 113111
      RW-shared spins 21, rounds 22, OS waits 1
      RW-excl spins 17036, rounds 510461, OS waits 17030
      RW-sx spins 537, rounds 16063, OS waits 534
      Spin rounds per wait: 1.05 RW-shared, 29.96 RW-excl, 29.91 RW-sx
      ------------
      TRANSACTIONS
      ------------
      Trx id counter 675545
      Purge done for trx's n:o < 674572 undo n:o < 0 state: running but idle
      History list length 29
      LIST OF TRANSACTIONS FOR EACH SESSION:
      ---TRANSACTION 421501552888496, not started
      0 lock struct(s), heap size 1136, 0 row lock(s)
      ---TRANSACTION 675544, ACTIVE 44 sec inserting
      mysql tables in use 1, locked 1
      1 lock struct(s), heap size 1136, 0 row lock(s), undo log entries 6835
      MySQL thread id 76, OS thread handle 140004949817088, query id 7577 localhost root update
      INSERT INTO `saldoclientecca` VALUES ('2021-08-06 00:00:00',369,23,12701133,2,1,6,'93222',840.4300,'2021-08-07 08:37:44','2021-08-07 00:00:00'),('2021-08-06 00:00:00',369,3,7760333,2,1,13,'93470',1445.3300,'2021-08-07 08:37:44','2021-08-07 00:00:00'),('2021-08-06 00:00:00',369,3,7760333,2,1,6,'93470',1445.3300,'2021-08-07 08:37:44','2021-08-07 00:00:00'),('2021-08-06 00:00:00',369,51,3719333,2,1,13,'102634',61.3400,'2021-08-07 08:37:44','2021-08-07 00:00:00'),('2021-08-06 00:00:00',369,46,12700588,2,1,13,'93182',833.3300,'2021-08-07 08:37:44','2021-08-07 00:00:00'),('2021-08-06 00:00:00',369,5
      --------
      FILE I/O
      --------
      I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
      I/O thread 1 state: waiting for completed aio requests (log thread)
      I/O thread 2 state: waiting for completed aio requests (read thread)
      I/O thread 3 state: waiting for completed aio requests (read thread)
      I/O thread 4 state: waiting for completed aio requests (read thread)
      I/O thread 5 state: waiting for completed aio requests (read thread)
      I/O thread 6 state: waiting for completed aio requests (write thread)
      I/O thread 7 state: waiting for completed aio requests (write thread)
      I/O thread 8 state: waiting for completed aio requests (write thread)
      I/O thread 9 state: waiting for completed aio requests (write thread)
      Pending normal aio reads: [0, 0, 0, 0] , aio writes: [35, 0, 0, 0] ,
       ibuf aio reads:, log i/o's:, sync i/o's:
      Pending flushes (fsync) log: 0; buffer pool: 3
      13922 OS file reads, 33040400 OS file writes, 786283 OS fsyncs
      0.00 reads/s, 0 avg bytes/read, 254.89 writes/s, 8.91 fsyncs/s
      -------------------------------------
      INSERT BUFFER AND ADAPTIVE HASH INDEX
      -------------------------------------
      Ibuf: size 1, free list len 3093, seg size 3095, 0 merges
      merged operations:
       insert 0, delete mark 0, delete 0
      discarded operations:
       insert 0, delete mark 0, delete 0
      Hash table size 12750011, node heap has 13 buffer(s)
      Hash table size 12750011, node heap has 11 buffer(s)
      Hash table size 12750011, node heap has 17 buffer(s)
      Hash table size 12750011, node heap has 7 buffer(s)
      Hash table size 12750011, node heap has 37779 buffer(s)
      Hash table size 12750011, node heap has 1 buffer(s)
      Hash table size 12750011, node heap has 13 buffer(s)
      Hash table size 12750011, node heap has 163 buffer(s)
      0.00 hash searches/s, 0.00 non-hash searches/s
      ---
      LOG
      ---
      Log sequence number          372567137685
      Log buffer assigned up to    372567137685
      Log buffer completed up to   372567137685
      Log written up to            372567137685
      Log flushed up to            372567137685
      Added dirty pages up to      372567137685
      Pages flushed up to          372486561542
      Last checkpoint at           372486561542
      12690136 log i/o's done, 0.00 log i/o's/second
      ----------------------
      BUFFER POOL AND MEMORY
      ----------------------
      Total large memory allocated 52747567104
      Dictionary memory allocated 60770569
      Buffer pool size   3145728
      Free buffers       2426262
      Database pages     681462
      Old database pages 250574
      Modified db pages  117734
      Pending reads      0
      Pending writes: LRU 0, flush list 37, single page 0
      Pages made young 56588, not young 0
      0.00 youngs/s, 0.00 non-youngs/s
      Pages read 13897, created 667568, written 19834618
      0.00 reads/s, 0.00 creates/s, 254.98 writes/s
      No buffer pool page gets since the last printout
      Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
      LRU len: 681462, unzip_LRU len: 0
      I/O sum[577104]:cur[5472], unzip sum[0]:cur[0]
      ----------------------
      INDIVIDUAL BUFFER POOL INFO
      ----------------------
      ---BUFFER POOL 0
      Buffer pool size   65536
      Free buffers       50583
      Database pages     14161
      Old database pages 5207
      Modified db pages  1822
      Pending reads      0
      Pending writes: LRU 0, flush list 0, single page 0
      Pages made young 1848, not young 0
      0.00 youngs/s, 0.00 non-youngs/s
      Pages read 279, created 13882, written 439188
      0.00 reads/s, 0.00 creates/s, 0.00 writes/s
      No buffer pool page gets since the last printout
      Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
      LRU len: 14161, unzip_LRU len: 0
      I/O sum[12023]:cur[114], unzip sum[0]:cur[0]
      ---BUFFER POOL 1
      Buffer pool size   65536
      Free buffers       50694
      Database pages     14049
      Old database pages 5166
      Modified db pages  2463
      Pending reads      0
      Pending writes: LRU 0, flush list 0, single page 0
      Pages made young 544, not young 0
      0.00 youngs/s, 0.00 non-youngs/s
      Pages read 339, created 13710, written 387091
      0.00 reads/s, 0.00 creates/s, 0.00 writes/s
      No buffer pool page gets since the last printout
      Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
      LRU len: 14049, unzip_LRU len: 0
      I/O sum[12023]:cur[114], unzip sum[0]:cur[0]
      ---BUFFER POOL 2 through BUFFER POOL 47 show the same output (omitted due to the character limit)
      --------------
      ROW OPERATIONS
      --------------
      0 queries inside InnoDB, 0 queries in queue
      0 read views open inside InnoDB
      Process ID=4782, Main thread ID=139972405339904 , state=checking free log space
      Number of rows inserted 64106963, updated 1578, deleted 52, read 197358
      0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
      ----------------------------
      END OF INNODB MONITOR OUTPUT
      ============================
      
      CREATE TABLE `Prd_001` (
        `DTSLD` datetime NOT NULL,
        `INSNAME` int(10) NOT NULL,
        `IDUNIDADEINST` int(10) NOT NULL,
        `IDCLI` int(10) NOT NULL,
        `IDPRO` smallint(5) NOT NULL,
        `IDMODAl` smallint(5) NOT NULL,
        `CODTIPOSALDO` smallint(5) NOT NULL,
        `IDENTNEG` varchar(50) NOT NULL,
        `TOTALSLO` decimal(19,4) NOT NULL,
        `DTHRR` datetime DEFAULT NULL,
        `DTCRA` datetime DEFAULT NULL,
        KEY `idx_IDCLIENTE` (`IDCLI`),
        KEY `idx_DTSLD` (`DTSLD`),
        KEY `idx_CODTIPOSALDO` (`CODTIPOSALDO`),
        KEY `idx_TEMP` (`IDCLI`,`DTSLD`,`CODTIPOSALDO`)
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
      
      posted in SQL
    • Reorg SYSIBM tables to reduce extents (XT) - change priqty & secQTY

      I have a new DB2 V12 database on z/OS 2.4 (z13), with hundreds of programs to rebind; that will grow catalog tables such as sysibm.syspackage, sys.columns, sys.tables, etc., hence the need to reorganize the tablespace for performance.

      The maintenance window is one hour, so just one tablespace is being done at a time.

      1. Is reorg run on system tables the same as for user tables?
      2. Is there a special IBEGENR (sp?) VSAM redefine needed for system tables?

      Sample JCL:

         //REORGS1 EXEC PGM=DSNUTILB,PARM='MT03,REORG1DB' 
         //STEPLIB  DD  DSN=DSNC10.SDSNLOAD,DISP=SHR 
         //SYSREC   DD  DSN=&&SYSREC,DISP=(NEW,DELETE,DELETE),       
         //             SPACE=(TRK,(9000,5000),,,ROUND),UNIT=SYSDA 
       LISTDEF TBLSLIST                                     
           INCLUDE TABLESPACES DATABASE TR032             
       REORG                                                
          TABLESPACE LIST TBLSLIST   COPYDDN(CDD1) LOG NO  
          SHRLEVEL REFERENCE                               
          SORTKEYS SORTDATA SORTDEVT SYSDA                 
          STATISTICS TABLE(ALL) INDEX(ALL)
      
      posted in SQL
    • How get old versions from MS SQL Developer Edition?

      I'm trying to get SQL Server 2016 Developer Edition, but when I access https://my.visualstudio.com/Downloads/Featured?mkt=es-es I can only see Express Edition versions.


      And when I visit https://www.microsoft.com/en-us/sql-server/sql-server-downloads I can only download the 2019 version.

      I would like to get versions of SQL Server Developer Edition prior to 2019.

      Thanks!

      posted in SQL
    • RE: How to replace IN clause with JOIN in Postgres?

      I assume you mean:

      select * 
      from table0_ 
      where (table0_.col1, table0_.col2, table0_.col3) in (($1, $2, $3) 
                                                         , ($4, $5, $6) 
                                                         , ($7, $8, $9) 
                                                         , ($10, $11, $12) 
                                                         , ($13, $14, $15))
      

      Which would correspond with:

      select * 
      from table0_ 
      join ( values ($1, $2, $3) 
                  , ($4, $5, $6) 
                  , ($7, $8, $9) 
                  , ($10, $11, $12) 
                  , ($13, $14, $15) ) t (col1, col2, col3)
          using (col1, col2, col3);
      

      If the derived table is large, you may be better off creating a temporary table and inserting your parameters into it:

      CREATE TEMP TABLE t 
      ( col1 ...
      , col2 ...
      , col3 ... );
      

      INSERT INTO T (col1, col2, col3) VALUES ($1, $2, $3), ...

      select *
      from table0_
      join t
      using (col1, col2, col3);

      posted in SQL
    • Is there a way to make Azure SQL Database report a newer version number?

      We are trying to get Moodle installed in Azure, and the most cost-effective way to do this for our purposes is to use Azure SQL database.

      Azure SQL Database always runs the latest stable build of the SQL Server engine, but it reports a version number beginning with 12.*. This causes Moodle to complain about the database version, as it wants something 2017 or newer.

      Is there any way to have Azure SQL Database fudge the version number it reports so that the application thinks it's more modern?
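
      For reference, this is roughly what an application sees when it asks the engine for its version (standard properties, nothing Moodle-specific):

      -- On Azure SQL Database this reports a 12.0.x product version,
      -- even though the engine itself is newer
      SELECT SERVERPROPERTY('ProductVersion') AS product_version,
             @@VERSION                        AS version_string;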

      posted in SQL
    • Is there some good way to "diff" a table's data with a fresh copy of it, automatically?

      I have a table representing all countries on this planet, with their country codes and full titles/common names, etc.

      I have not run my update_country_table function since 2020, so it's due for another run now.

      If I run it as-is, it will fetch the new data, truncate the current table (and reset identity), then INSERT all the new rows. In other words, I will have no idea how many new countries were added, or removed, or possibly had one of their columns edited. Well, I can count the rows before and after, but that will only tell me if it has changed in numbers -- not what changed. (Maybe two countries were removed and two others added, or many of the existing ones had typos fixed for their common names.)

      I could of course make this function much more advanced and have it painstakingly check each existing record instead of truncating it and then inserting all the new data, but this is anything but simple. It would have to check if any of the columns have changed, etc. It would just be an annoying mess, especially since this is a common thing that I want to do in many other places as well.

      Is there some way to do something like this? (This is made-up fantasy code from my dreams.)

      MAKE TEMPORARY COPY OF countries AS countries2;
      TRUNCATE TABLE countries;
      INSERT ...
      ...
      SELECT (COMPARE TABLES countries, countries2);
      

      I doubt this exists, but it sure would be useful for the many times when I have data of this kind which I regularly pull down. I don't like being "blind" to what it actually changes.
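
      The closest I can think of is keeping the old copy around and diffing the two with set operators, e.g. (a rough sketch, assuming both copies have identical column lists):

      -- Rows that existed before but are gone or changed in the new data
      SELECT * FROM countries2
      EXCEPT
      SELECT * FROM countries;

      -- Rows that are new or changed in the new data
      SELECT * FROM countries
      EXCEPT
      SELECT * FROM countries2;

      ...but that still leaves me writing this by hand for every table of this kind.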

      For my source code files, I use the neat WinMerge program to see exactly what has changed since last time. I sure wish this could be easily done for my database tables as well.

      posted in SQL
    • RE: Is there a way to use standard CLR functions on Azure SQL Edge for Ubuntu Docker on an M1 Mac?

      No.

      From https://docs.microsoft.com/en-us/azure/azure-sql-edge/features in the documentation:

      Azure SQL Edge is built on the latest version of the SQL Database Engine. It supports a subset of the features supported in SQL Server 2019 on Linux, in addition to some features that are currently not supported or available in SQL Server 2019 on Linux (or in SQL Server on Windows).

      https://docs.microsoft.com/en-us/azure/azure-sql-edge/features#operating-system

      Azure SQL Edge containers are based on Ubuntu 18.04, and as such are only supported to run on Docker hosts running either Ubuntu 18.04 LTS (recommended) or Ubuntu 20.04 LTS. It's possible to run Azure SQL Edge containers on other operating system hosts, for example, it can run on other distributions of Linux or on Windows (using Docker CE or Docker EE), however Microsoft does not recommend that you do this, as this configuration may not be extensively tested.

      https://docs.microsoft.com/en-us/azure/azure-sql-edge/features#hardware-support

      Azure SQL Edge requires a 64-bit processor (either x64 or ARM64), with a minimum of one processor and one GB RAM on the host.

      https://docs.microsoft.com/en-us/azure/azure-sql-edge/features#unsupported-features

      Area: Database Engine
      Unsupported features or services:

      • CLR assemblies, and related DDL commands and Transact-SQL functions, catalog views, and dynamic management views.
      • CLR-dependent T-SQL functions, such as ASSEMBLYPROPERTY, FORMAT, PARSE, and TRY_PARSE.
      • CLR-dependent date and time catalog views, functions, and query clauses.
      posted in SQL
    • Remote db connection limits and MySQL Workbench

      I have an app using a remote MySQL service, which has max_user_connections set per plan. For my pricing level, it's 30 connections. My app functions fine; all operations work.

      However, while MySQL Workbench is open (for manual DB imports), if I attempt to reboot my remote app I get an error:

      [2022-02-04T22:24:05.313Z] error Error: ER_USER_LIMIT_REACHED: User '************' has exceeded the 'max_user_connections' resource (current value: 30)
      

      Upon closing MySQL Workbench, my remote app reboots fine.

      My app has max connections set to 30, although just to be safe I made it 29 in an attempt to reserve 1 connection for MWB.

      Do I need to reserve even more for MWB? How many connections would it need? Can I limit MWB on my local machine (Ubuntu) so it uses fewer connections (how about just one)?

      To avoid this in the future, what would be best? I suppose I could shut my app down for maintenance at 3 am and run the DB job then.
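
      For what it's worth, this is the kind of check I can run to see what is actually holding connections (standard MySQL, so it should work against the hosted service too):

      -- Every open connection for the account, including MySQL Workbench's own
      SHOW PROCESSLIST;

      -- Or aggregated per user
      SELECT user, COUNT(*) AS open_connections
      FROM information_schema.processlist
      GROUP BY user;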

      posted in SQL
    • RE: Is it possible to use Read Uncommitted isolation level on read-only Availability Group secondary?

      bzzt

      Read Uncommitted (https://sqlperformance.com/2015/04/t-sql-queries/the-read-uncommitted-isolation-level) doesn't mean your queries don't take locks; it means your queries ignore locks taken by other queries. The hints really should have been called NORESPECT instead; the current naming is quite misleading. They leave you open to returning dirty reads.

      Queries under this isolation level will still take schema stability locks, like any other read query.

      A much cleaner implementation of this is RCSI, which gives you the last known good version of rows that are currently locked by modification queries, rather than potentially returning in-flight data from them.

      To answer your question: usually, local query hints will override database-level settings like Read Committed Snapshot Isolation (https://sqlperformance.com/2014/05/t-sql-queries/read-committed-snapshot-isolation) or Snapshot Isolation (https://sqlperformance.com/2014/06/sql-performance/the-snapshot-isolation-level). But even query-level locking hints are ignored by queries against readable secondaries:

      https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/active-secondaries-readable-secondary-replicas-always-on-availability-groups?view=sql-server-ver15#bkmk_Benefits

      All queries that run against the secondary databases are automatically mapped to snapshot isolation transaction level, even when other transaction isolation levels are explicitly set. Also, all locking hints are ignored. This eliminates reader/writer contention.

      If you're already using one or both of those row-versioning isolation levels, there shouldn't be any reason (https://www.erikdarlingdata.com/execution-plans/help-my-query-got-slower-without-nolock/) to use Read Uncommitted/NOLOCK.
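
      For completeness, enabling RCSI on the primary is a single statement; a sketch (WITH ROLLBACK IMMEDIATE kicks out in-flight transactions so the change doesn't wait on them):

      ALTER DATABASE [YourDatabase]
      SET READ_COMMITTED_SNAPSHOT ON
      WITH ROLLBACK IMMEDIATE;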

      posted in SQL
    • RE: configure size tempdb

      In theory the tempdb transaction log shouldn't grow too much, but in the real world I would take your available space and divide it by 9.
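
      If that works out to, say, 90 GB of available space, a sketch of carving it up could look like this, assuming the common layout of eight equally sized data files plus the log (the logical file names below are the setup defaults and may differ on your instance):

      ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10GB, FILEGROWTH = 0);
      ALTER DATABASE tempdb MODIFY FILE (NAME = temp2,   SIZE = 10GB, FILEGROWTH = 0);
      -- ... repeat for the remaining data files ...
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10GB, FILEGROWTH = 0);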

      posted in SQL
    • RE: How to start MySQL after failure?

      I faced the same problem today. The source of the issue might be different, but read /var/log/mysql/error.log carefully; it should point to the error.

      I had deleted the .MYD and .MYI files of a table that I didn't need. It was a mistake, I should have dropped the table instead, but my server was at 100% memory usage.

      What I did was read the error with:

      sudo tail -f /var/log/mysql/error.log
      

      Keep two terminals open at the same time. First run

      sudo tail -f /var/log/mysql/error.log 
      

      in the first terminal, then run in the second terminal:

      sudo systemctl stop mysql.service    # shut it down
      sudo systemctl start mysql.service   # try to start it up and see the reason why it is not starting
      

      For me it was:

      2022-01-26T16:30:36.213788Z 0 [Note] InnoDB: Database was not shutdown normally!
      2022-01-26T16:30:36.213802Z 0 [Note] InnoDB: Starting crash recovery.
      2022-01-26T16:30:36.213830Z 0 [ERROR] InnoDB: Tablespace 605 was not found at ./contratti/ip_log2_bak.ibd.
      2022-01-26T16:30:36.213840Z 0 [ERROR] InnoDB: Set innodb_force_recovery=1 to ignore this and to permanently lose all changes to the tablespace.
      2022-01-26T16:30:36.644359Z 0 [ERROR] InnoDB: Cannot continue operation.
      2022-01-26T16:30:38.045091Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
      2022-01-26T16:30:38.062994Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.34-log) starting as process 2843 ...
      2022-01-26T16:30:38.416074Z 0 [Note] InnoDB: PUNCH HOLE support available
      

      I was able to fix it by adding the following to /etc/mysql/mysql.conf.d/mysqld.cnf:

      innodb_force_recovery = 2
      

      From https://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html

      If you are able to dump your tables with an innodb_force_recovery value of 3 or less, then you are relatively safe that only some data on corrupt individual pages is lost.

      After this change I stopped and started MySQL again, and it is working great.

      posted in SQL
    • RE: How to count number of occurrences of each key in a MongoDB collection efficiently (one pass?)?

      One of the shortest solutions could be this one:

      const counts = {}
      db.collection.aggregate([
         { $project: { data: { $objectToArray: "$$ROOT" } } }
      ]).forEach(doc => {
         doc.data.forEach(val => {
            // count one occurrence of this key per document (rather than summing the field values)
            counts[val.k] = (counts[val.k] || 0) + 1;
         })
      })
      

      I also looked at doing this purely with $objectToArray and $group, but most likely that will not work, because a single document is limited to 16 MiB. With "millions of documents" that limit would certainly be exceeded.

      posted in SQL
    • RE: Dynamic table DB design opinions

      It sounds like each of your Tools wants a different "view" of [the same] data.
      If that's the case, then why not create a View on the MetaData table for each tool and only return the columns that the Tool requires?

      -- Base Table 
      select c1, c2, c3, c4, c5, ..., c35 
      from MetaData ; 
      

      -- Tool1 column subset
      create view Tool1View as
      select c1, c2, c5
      from MetaData ;

      -- Tool2 column (and row) subset
      create view Tool2View as
      select c1, c2, c3, c4
      from MetaData
      where c5 = 'T2Subset' ;

      posted in SQL
    • Filter by field, and if zero results then filter by other field

      How should a WHERE clause be constructed, such that:

      • It filters by a certain field
      • If there are zero results for filtering by the aforementioned field, then it filters by another field?

      For example, if we have this schema:

      CREATE TABLE Items
      (
        [Id] INT PRIMARY KEY,
        [FirstDate] DATETIME,
        [SecondDate] DATETIME
      );
      

      ... with these records:

      INSERT INTO Items ([Id], [FirstDate], [SecondDate])
      VALUES 
      (1, '2021-1-1', '2022-1-1'),
      (2, '2022-1-1', '2023-1-1'), 
      (3, '2022-1-1', '2024-1-1'), 
      (4, '2024-1-1', '2025-1-1'),
      (5, '2024-1-1', '2026-6-1')
      

      ... then I want to filter by FirstDate and SecondDate, such that:

      • If we specify FirstDate and SecondDate to filter for 2022-1-1, records 2 & 3 will be returned (but not 1!).
      • If we specify FirstDate and SecondDate to filter for 2024-1-1, records 4 & 5 will be returned (but not 3!).
      • If we specify FirstDate and SecondDate to filter for 2025-1-1, record 4 will be returned.
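
      One shape I can imagine for this, sketched with a single hypothetical @FilterDate parameter (I'm not sure it's the right approach, hence the question):

      -- Prefer FirstDate matches; fall back to SecondDate only when FirstDate finds nothing
      SELECT * FROM Items WHERE [FirstDate] = @FilterDate
      UNION ALL
      SELECT * FROM Items WHERE [SecondDate] = @FilterDate
        AND NOT EXISTS (SELECT 1 FROM Items WHERE [FirstDate] = @FilterDate);
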
      posted in SQL