    Posts made by Kadir

    • RE: How to get more connections for mariaDB or how to reduce RAM usage per thread?

      You can increase max_connections without increasing RAM. But first, let's discuss whether 175 might actually be too high.

      If Max_used_connections has not reached 175, then max_connections is not the real problem.
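
      A quick way to compare the two (standard MySQL/MariaDB statements, nothing version-specific assumed):

      SHOW GLOBAL STATUS LIKE 'Max_used_connections';
      SHOW GLOBAL VARIABLES LIKE 'max_connections';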

      If you have hit that limit, then let's start by investigating what the clients are.

      • If connections come mostly from a web server, then the web server's own connection limit is probably configured too high.
      • If it is coming from applications, are they failing to close their connections?
      • Is there some form of "connection pooling"? If so, what limits does it have?

      Turn off the query_cache; it is (usually) more burden than benefit.
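
      For reference, it can be turned off at runtime on MySQL 5.6/5.7 and MariaDB (MySQL 8.0 removed the query cache entirely); persist the change in my.cnf afterwards:

      SET GLOBAL query_cache_type = OFF;  -- stop checking/storing results
      SET GLOBAL query_cache_size = 0;    -- release the cache's memory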

      Set these below 1% of RAM: tmp_table_size, max_heap_table_size. They are not just per query, but possibly per subquery. Anyway, 1G is very much into "diminishing returns". Disk-based temp tables happen for a variety of reasons; changing those two settings cannot get rid of all disk temp tables.
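
      As a sketch, on a 16GB server 1% is roughly 160M (the sizes here are illustrative, not a recommendation for your box):

      SET GLOBAL tmp_table_size      = 160 * 1024 * 1024;
      SET GLOBAL max_heap_table_size = 160 * 1024 * 1024;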

      Based on the setting of innodb_buffer_pool_size, I suggest that a lot of RAM is going unused.
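
      You can check the current setting, and (on MySQL 5.7.5+ / MariaDB 10.2.2+) resize it online; the usual rule of thumb for a dedicated InnoDB-only server is about 70% of RAM:

      SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
      SET GLOBAL innodb_buffer_pool_size = 11 * 1024 * 1024 * 1024;  -- ~70% of a 16GB box (illustrative)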

      I agree that 0.21% is quite low. If you would like some more metrics like that, see http://mysql.rjweb.org/doc.php/mysql_analysis#tuning . It will also provide metrics saying whether the Query cache is being useful.

      "high load scenario" -- What does that mean? If it is "high 'Load Average'", that that is equivalent to "high CPU". This issue is best tackled by looking for the slowest and/or most common queries and trying to speed them up. Use the http://mysql.rjweb.org/doc.php/mysql_analysis#slow_queries_and_slowlog to help with that.

      posted in SQL
      Kadir
    • RE: Reproduce Heroku's Postgres WAL metric with pg_ls_waldir()

      We asked Heroku support again and they're saying we should be able to use SELECT sum(size) FROM pg_ls_waldir();

      To address your follow-up question: internally we use a symlink from the pg_wal directory to a directory on a separate WAL drive. From there it gets pushed to an S3 bucket.

      We also wanted to get back to you about the tests you were running with select sum(size) from pg_ls_waldir(). There can be discrepancies when comparing the results of pg_ls_waldir() against our server metrics (sample#wal-percentage-used), due to how each is measured internally, but ultimately pg_ls_waldir() should be a good way to monitor WAL usage (considering the WAL disk is 68GB).
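
      For reference, a sketch of the calculation our script uses to mirror sample#wal-percentage-used, assuming the 68GB WAL disk mentioned above:

      SELECT pg_size_pretty(sum(size)) AS wal_used
           , round(100 * sum(size) / (68.0 * 1024 * 1024 * 1024), 2) AS wal_pct_used
      FROM   pg_ls_waldir();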

      We're going to test our script again and see if this measurement is comparable to the Heroku-provided one.

      posted in SQL
      Kadir
    • RE: Microsoft SQL server storage best practice on Windows server vm with hybrid storage

      50 GB is not a lot of space, but if you have a tiny database you could configure the whole database to live on just the SSD. I couldn't say for sure without knowing your database size and projected growth rate.

      Assuming you do need to use the HDD for holding your user database, I would then assess whether 50 GB is big enough for the tempdb database. tempdb is a system database where certain operations occur during query processing, so it's generally recommended to provision it on its own fast disk. Also, when your query runs, if the data spills to tempdb (https://www.brentozar.com/blitzcache/tempdb-spills/) because of a cardinality misestimation or otherwise, tempdb is where that spill to disk occurs. Having that spill occur on an SSD as opposed to an HDD will make a meaningful difference for performance.

      Generally you should also configure tempdb (https://www.brentozar.com/archive/2016/01/cheat-sheet-how-to-configure-tempdb-for-microsoft-sql-server/) with 9 equal-sized files (8 data files and 1 log file) to maximize its performance.
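
      As a sketch, adding one more data file looks like this (the file name, path, and sizes are hypothetical; repeat until you have 8 equally sized data files):

      ALTER DATABASE tempdb
      ADD FILE (NAME = tempdev2, FILENAME = 'S:\tempdb\tempdb2.ndf',
                SIZE = 1GB, FILEGROWTH = 256MB);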

      In a similar vein, you should pre-grow the data and log files (https://www.sparkhound.com/blog/pre-grow-your-sql-server-database-data-and-log-files) of your user databases as well, especially with them being on the HDD. Database growth operations can be costly for performance, so by fully growing out the files proactively once, you avoid incurring that cost repeatedly later as the amount of data increases.
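
      A sketch of pre-growing, with hypothetical database and logical file names (check sys.master_files for yours):

      ALTER DATABASE MyUserDb MODIFY FILE (NAME = MyUserDb,     SIZE = 40GB);
      ALTER DATABASE MyUserDb MODIFY FILE (NAME = MyUserDb_log, SIZE = 10GB);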

      The only other advice I'd recommend is if you have the luxury of using two different physical HDDs for your database, then you should put the database's data file on one HDD, and the log file on the other HDD since maximizing I/O will be helpful.

      Does it need to install SQL server engine in the SSD drive?

      It's usually recommended (more so for management reasons) to install the SQL Server Engine on its own drive, separate from any databases. So based on my recommendations above, in addition to 1x SSD and 2x HDD, you would need a 3rd HDD to follow this advice.

      If that's not possible, then I suppose you should install it to the SSD so it's at least isolated from your user database files; since the SSD is going to have the highest disk performance, it shouldn't have trouble supporting both the Engine files and tempdb.

      I have worked with SQL Server instances that co-mingled the installation path of the Engine and their user database files and never ran into issues myself, but it's been a long time since I worked with an instance dependent on HDDs, so sticking it on the SSD will probably be the best choice if you don't have an additional dedicated HDD you could use instead.

      Finally, make sure you're following a pre-install checklist such as https://www.brentozar.com/archive/2008/03/sql-server-2005-setup-checklist-part-1-before-the-install/ , which will help you maximize performance too. (The resource is slightly dated but still very valid.) You may also find more up-to-date information in https://www.brentozar.com/archive/2019/09/dba-training-plan-22-building-a-new-sql-server/ .

      posted in SQL
      Kadir
    • How do I set up row level security based upon relationship in another table in PostgreSQL?

      I'm working on a social media app with PostgreSQL where I need the following features:

      • Users must be able to search for other users based upon their display names (display names are public)
      • Users should not be able to see the email addresses of users they are not connected with
      • Users with confirmed connections must be able to view the email address of any connected users.

      Right now, the "protected" data (not sure if that's the right word) is just limited to email addresses, but I can see wanting to do something similar in the future with other columns that might be added.

      Here is how the tables are currently set up. I'm really not great at database administration, so I could have this all totally wrong.

      Users

      table_schema   table_name   column_name              udt_name
      public         users        id                       uuid
      public         users        email                    text
      public         users        display_name             text
      public         users        last_post_published_at   timestamptz

      User connections

      table_schema   table_name         column_name        udt_name
      public         user_connections   id                 int8
      public         user_connections   created_at         timestamptz
      public         user_connections   user_a_id          uuid
      public         user_connections   user_b_id          uuid
      public         user_connections   user_a_confirmed   bool
      public         user_connections   user_b_confirmed   bool

      What's the best way to achieve the features and security set that I'm going for? Essentially, what I thought I needed was row-level security based upon some sort of relationship established through a foreign key, but now I'm suspecting I need to look at it from a completely different angle.

      posted in SQL
      Kadir
    • RE: Postgres : Indexing switches to Parallel Seq Scan from index scan when queried for date range exceeding 150 days

      PostgreSQL switches to a sequential scan because it thinks that the execution time will be shorter that way. If it isn't, it may be a configuration problem:

      • Perhaps you told PostgreSQL that index scans are expensive by leaving random_page_cost at its default value 4, which may be appropriate for spinning disks, but not for anything else.

      • Perhaps index scans are fast because the index is cached. If you increase effective_cache_size, PostgreSQL will know that the index is probably cached and will lower the cost of index scans.

      • Perhaps you specified a degree of parallelism that exceeds the capacities of your hardware, so the parallel table scan is slower than PostgreSQL thinks. Consider lowering max_parallel_workers_per_gather.
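
      As an illustrative sketch covering all three points (the values are common starting points for SSD-backed hardware, not measurements of your system):

      ALTER SYSTEM SET random_page_cost = 1.1;               -- SSD/NVMe-appropriate
      ALTER SYSTEM SET effective_cache_size = '8GB';         -- roughly shared_buffers + OS file cache
      ALTER SYSTEM SET max_parallel_workers_per_gather = 2;  -- keep within your core count
      SELECT pg_reload_conf();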

      posted in SQL
      Kadir
    • RE: Index converted from Seek to Scan in CTE

      Problems

      The optimizer logic that allows a predicate to be pushed past a window function is described in https://stackoverflow.com/a/15304023 . The predicate must reference a constant at runtime, and the partitioning clause of the window function must contain the same column as the predicate.

      Your database is using forced parameterization, so the constant value you supply in your query is replaced with a parameter marker @0. This prevents the pushdown as described in the linked Q & A just above.


      You also need to ensure the column you are filtering on is the same one used in the partitioning clause of the ROW_NUMBER. It is not enough that the columns have the same name or alias, they must be the same reference.

      In your query:

      with tt as 
      ( 
          select ca.inst_bs as Base, 
              dateadd(mi,ROW_NUMBER() over (partition by i2.idcse,i2.idpln order by i2.dtdue,i2.id),i2.dtdue) as d1,
              i2.id,
              ca.id as idcse,
              i2.dtdue 
          from tbl_cse ca with(nolock) 
          join tbl_Inst i2 with(nolock) on i2.idcse = ca.id
      )
      select * from tt where idcse = 3169
      
      • The ROW_NUMBER is partitioned by i2.idcse
      • The outer test on idcse refers to ca.id (aliased as idcse)

      Those are not the same reference, they just have the same exposed name.

      They do have the same value due to the join on i2.idcse = ca.id, but that extra step of reasoning is enough to prevent the optimizer applying the transformation.

      Solution

      You can fix this in multiple ways. The easiest might be to return i2.idcse from the CTE rather than ca.id:

      with tt as 
      ( 
          select ca.inst_bs as Base, 
              dateadd(mi,ROW_NUMBER() over (partition by i2.idcse,i2.idpln order by i2.dtdue,i2.id),i2.dtdue) as d1,
              i2.id,
              i2.idcse, -- CHANGED from ca.id as idcse
              i2.dtdue 
          from tbl_cse ca with(nolock) 
          join tbl_Inst i2 with(nolock) on i2.idcse = ca.id
      )
      select * from tt where idcse = 3169
      

      To work around the forced parameterization issue, add OPTION (RECOMPILE) to the query.
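
      With that hint, the final select above becomes:

      select * from tt where idcse = 3169
      option (recompile)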

      As I said in my Stack Overflow answer though, I'd probably stick with the table-valued function replacement. That gives you much more control over where the predicate appears.

      Related reading:

      • https://dba.stackexchange.com/questions/215120/query-is-fast-but-becomes-sluggish-when-created-as-a-view
      • https://stackoverflow.com/questions/67975467/sql-server-views-inline-view-expansion-guidelines on Stack Overflow

      User https://dba.stackexchange.com/users/220697/charlieface provided a https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=5ab80738eb5ea12eab38fd8be365beac you might like to play around with.

      posted in SQL
      Kadir
    • RE: How to revoke SA permissions that seem to be "stuck" to a login?

      There are a few different ways that a user gets permissions in SQL Server. In addition to role membership (like the sysadmin fixed server role), you can also do explicit GRANTs for individual permissions. Since you're using a Windows Service account, permissions can also be granted to domain groups, and those permissions inherited, in addition to anything granted directly to the login.

      The login might be part of a domain group.

      On the Active Directory/Domain side of things, your service account might be included in an AD group, then that AD group granted sysadmin in your SQL Server instance. Then, even though the login isn't directly granted sysadmin, it would still inherit membership from the group.

      You can check which groups are granting which membership from the SQL Server side of things via the xp_logininfo system stored procedure:

      EXEC xp_logininfo @acctname = 'MyDomain\amtwo', @option = 'ALL';
      

      The output will look something like this. Notice the first line, which shows my amtwo login gets admin privileges via the "permission path" (read: group membership) in the SQLSERVER_SYSADMIN domain group.

      account name     type     privilege mapped login name   permission path
      ---------------- -------- --------- ------------------- ----------------------------
      MyDomain\amtwo   user     admin     MyDomain\amtwo      MyDomain\SQLSERVER_SYSADMIN
      MyDomain\amtwo   user     user      MyDomain\amtwo      MyDomain\SomeOtherGroup
      

      In this case, if I want to remove sysadmin permissions from the amtwo login, I would need to remove it from the SQLSERVER_SYSADMIN group on the Active Directory side. (i.e., there's nothing I can do about it via the SQL Server configuration & permissions.)

      You might just have CONTROL SERVER

      There is a permission called CONTROL SERVER which is almost the same as sysadmin, but with some differences. CONTROL SERVER will obey having a DENY enforced for more granular permissions (sysadmin overrides everything, including DENY permissions). Additionally, every once in a while you might run into an older command or function that only works with sysadmin--but that is increasingly rare.
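
      To illustrate with a hypothetical login: CONTROL SERVER leaves room for targeted DENYs, which sysadmin membership would simply bypass:

      GRANT CONTROL SERVER TO [MyDomain\SomeLogin];
      DENY ALTER ANY LOGIN TO [MyDomain\SomeLogin]; -- honored under CONTROL SERVER,
                                                    -- ignored if the login were in sysadmin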

      It might be that whoever set up your service account granted both sysadmin membership, and the CONTROL SERVER permission.

      The best way to check this is with a script that queries all the permissions. This query is from https://github.com/amtwo/dba-database/blob/production/views/dbo.ServerLoginPermissions.sql in https://am2.co/dbadb , but I'm including it in full here as well.

      As written, this will give all server-level permissions for all logins. You can simply add a line to the WHERE clause to filter further. Because this is a Windows login, you should check permissions granted both directly to the login and to any groups it is a member of (that will be the list of groups returned in the "permission path" column from xp_logininfo).

      -- From http://am2.co/dbadb - Licensed under BSD 2-clause
          SELECT 
                  LoginSid                = p.sid, 
                  LoginName               = p.name, 
                  LoginType               = p.type_desc,
                  DefaultDatabase         = p.default_database_name,
                  LoginIsEnabled          = IIF(p.is_disabled = 0,1,0),
                  CanLogIn                = COALESCE((SELECT TOP 1 1 FROM sys.server_permissions AS cosq
                                                      WHERE cosq.grantee_principal_id = p.principal_id
                                                      AND cosq.type = 'COSQ' 
                                                      AND cosq.state IN ('G','W')
                                                      AND p.is_disabled = 0
                                                      ),
                                              0),
                  PermissionType          = perm.type,
                  PermissionState         = perm.state,
                  PermissionSql           = CONCAT(perm.state_desc, N' ',
                                                      perm.permission_name, N' TO ',
                                                      QUOTENAME(p.name) COLLATE Latin1_General_CI_AS_KS_WS, 
                                                      N';'
                                                      ),
                  DateLoginCreated        = p.create_date,
                  DateLoginModified       = p.modify_date
          FROM sys.server_principals AS p
          JOIN sys.server_permissions AS perm 
              ON perm.grantee_principal_id = p.principal_id
          WHERE p.type IN ('S','U','G')
          AND p.name <> N'sa'
          AND p.name NOT LIKE N'##%##';
      

      If you see any permissions, such as CONTROL SERVER, that you want to remove, you would do so using syntax like REVOKE CONTROL SERVER FROM [MyDomain\Login];. Note that the PermissionSql column will return a GRANT or DENY statement, and you could simply modify that SQL text to change the GRANT to REVOKE.

      posted in SQL
      Kadir
    • RE: Postgres psql command intermittently 'stalls' in powershell

      Ok - found a solution.

      You need to alter the properties of the PowerShell console: untick both the "QuickEdit Mode" and "Insert Mode" boxes.

      posted in SQL
      Kadir
    • RE: Proper roles for a backup-only user on PostgreSQL

      For a logical backup (dump), you need CONNECT on the database, USAGE on all schemas and SELECT on tables and sequences. The role must be defined as LOGIN.

      For a file system backup with pg_basebackup, you need a role with the REPLICATION attribute.
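
      A minimal sketch of both roles (the database, schema, role names, and passwords are placeholders):

      -- logical backups (pg_dump)
      CREATE ROLE dump_user LOGIN PASSWORD 'secret';
      GRANT CONNECT ON DATABASE mydb TO dump_user;
      GRANT USAGE ON SCHEMA public TO dump_user;
      GRANT SELECT ON ALL TABLES IN SCHEMA public TO dump_user;
      GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO dump_user;

      -- file system backups (pg_basebackup)
      CREATE ROLE basebackup_user LOGIN REPLICATION PASSWORD 'secret';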

      posted in SQL
      Kadir
    • RE: UPSERT with UPDATE on variable column gives error "command cannot affect row a second time"

      Problem

      Your CTE result_query produces two rows with the same values for (arrive_date, arrive_location). The first one is inserted, the second one raises a conflict in the INSERT and tries to UPDATE the same row that has just been inserted - which is not possible, as the error message tells you. If the row already existed, you'd get the same error trying to update the same row twice.
      For more explanation, see:

      • https://dba.stackexchange.com/questions/206185/multiple-on-conflict-targets/225886#225886

      • https://stackoverflow.com/a/35953488/939860

      Solution

      Fold duplicates in the SELECT, either in the CTE directly, or in the SELECT attached to the INSERT. You didn't disclose how to deal with duplicates exactly, but since you want (per https://dba.stackexchange.com/a/307608/3684) to UPDATE with preprocess_things.data || EXCLUDED.data, I suppose you want to overwrite same keys (same thing_type) for the same (arrive_date, arrive_location), but merge all distinct keys (distinct thing_type). I do that with jsonb_object_agg() (https://www.postgresql.org/docs/current/functions-aggregate.html#FUNCTIONS-AGGREGATE-TABLE) in the CTE. Concatenation in the UPDATE overwrites same keys.

      WITH result_query AS (
         SELECT arrive_date, arrive_location
              , jsonb_object_agg(thing_type, total_things)  -- !
         FROM  (
            SELECT date_trunc('day', arrive_date) AS arrive_date
                   -- additional column "arrive_date" had to go
                 , arrive_location
                 , thing_type
                 , sum(quantity) AS total_things
            FROM   inventory
            GROUP  BY date_trunc('day', arrive_date), arrive_location, thing_type
            ) sub
         GROUP  BY arrive_date, arrive_location
         )
      INSERT INTO preprocess_things AS p (arrive_date, arrive_location, data)
      TABLE  result_query
      ON CONFLICT (arrive_date, arrive_location) DO
      UPDATE
      SET    data = p.data || EXCLUDED.data
      WHERE  p.data IS DISTINCT FROM p.data || EXCLUDED.data;  -- exclude empty updates
      

      db<>fiddle https://dbfiddle.uk/?rdbms=postgres_14&fiddle=2fc81795067b7d7ae01fa1d688bbd229

      Everything is prepared in the CTE, so I used the short syntax TABLE result_query in the INSERT. About that:

      • https://stackoverflow.com/a/30276023/939860

      About the added WHERE clause to suppress empty updates:

      • https://stackoverflow.com/a/12632129/939860
      posted in SQL
      Kadir
    • Triggers: counting rows in a bridge table after INSERT

      I'm using a trigger in order to maintain a statistics table. I have a many-to-many relationship where an Order can have multiple Products and vice versa. So after an INSERT on Order, using JPA, new rows are automatically inserted in the bridge table Order_Products.

      When in the Trigger I use

      SET @numOfOPs = (SELECT COUNT(IDOrder) FROM Order_Product WHERE IDOrder = NEW.ID)
      

      on new entries, the count on Order_Product seems to return 0 (or the value from before the insert on Order).

      Here is the signature of the trigger:

      CREATE TRIGGER `Order_AFTER_INSERT` AFTER INSERT ON `Order` FOR EACH ROW
      

      What could be the problem? I will add additional information if needed.

      posted in SQL
      Kadir
    • RE: How does blocking increase sos_scheduler_yield wait?

      There is no direct relationship. sos_scheduler_yield generally indicates that CPU is a bottleneck: sessions that yield the scheduler (CPU) have to wait to get scheduled on a worker again.

      Blocking on LCK_M_SCH_S means that a session is performing DML and causing other sessions to wait.

      The sessions performing DML may also be driving high CPU utilization. For example if you're rebuilding all your indexes offline you would hold an Sch-M lock while performing a long-running CPU-intensive operation.

      posted in SQL
      Kadir
    • RE: Can EXPLAIN be used to get some insights about Common Table Expression (CTE)?

      (not an Answer; won't fit in a Comment)

      Works for me:

      mysql> EXPLAIN
          -> WITH RECURSIVE page_revisions_path (id, page_id, parent_id) AS
          -> (
          ->  SELECT id, page_id, parent_id
          ->  FROM page_revisions
          ->  WHERE parent_id = 'some_id'
          ->  UNION ALL
          ->  SELECT c.id, c.page_id, c.parent_id 
          ->  FROM page_revisions_path AS cp JOIN page_revisions AS c ON cp.id = c.parent_id
          -> )
          -> 
          -> SELECT * FROM page_revisions_path;
      +----+-------------+----------------+------------+------+---------------+------+---------+-------+------+----------+--------------------------------------------+
      | id | select_type | table          | partitions | type | possible_keys | key  | key_len | ref   | rows | filtered | Extra                                      |
      +----+-------------+----------------+------------+------+---------------+------+---------+-------+------+----------+--------------------------------------------+
      |  1 | PRIMARY     | <derived2>     | NULL       | ALL  | NULL          | NULL | NULL    | NULL  |    4 |   100.00 | NULL                                       |
      |  2 | DERIVED     | page_revisions | NULL       | ref  | IDX2          | IDX2 | 107     | const |    1 |   100.00 | NULL                                       |
      |  3 | UNION       | cp             | NULL       | ALL  | NULL          | NULL | NULL    | NULL  |    2 |   100.00 | Recursive                                  |
      |  3 | UNION       | c              | NULL       | ALL  | IDX2          | NULL | NULL    | NULL  |    1 |   100.00 | Using where; Using join buffer (hash join) |
      +----+-------------+----------------+------------+------+---------------+------+---------+-------+------+----------+--------------------------------------------+
      4 rows in set, 1 warning (0.00 sec)
      

      It does require MySQL 8.0 or MariaDB 10.2.

      VARCHAR(26) -- Eh? Too short for full names or uuids, etc.

      Are there a million rows in the table? How often do you need to run the query?

      posted in SQL
      Kadir
    • RE: I cannot create a table in mysql

      After re-reading the answers and comments in https://dba.stackexchange.com/questions/307079/how-can-i-join-two-tables-in-mysql-so-that-whenever-i-add-data-to-table-1-tabl/307085#307085 , I think you are looking for a Materialized Query Table (MQT). AFAIK these do not exist in MySQL, but you can create your own Incremental Evaluation System (IES) via triggers; see for example: https://fromdual.com/mysql-materialized-views

      I added a small example of an insert trigger at: https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=cb04640727e19a430a1df4269051a423 . You will have to add corresponding ones for update and delete. Needless to say, you also need to add keys and NOT NULL constraints to your tables so that rows can be safely identified. Disclaimer: I don't know MySQL that well, so I can't tell what concurrency problems (such as lost updates) you may have to take into consideration.

      What is the problem with the view suggested in the previous answer?

      posted in SQL
      Kadir
    • Loading Extended Events ring buffer to SQL table using powershell

      I am running an RPC_Completed extended event which outputs to a ring buffer. While trying to read the data from the ring buffer target, the XQuery approach is taking too much time.

      The faster route was to parse the XML with PowerShell into a dataset and then use SqlBulkCopy to load it into a SQL table.

      I am not able to match the schema of my XML file to that of the table.

      Basically I am looking for an XSD file that represents this XML, so I can load the data into a SQL table.

      I only want timestamp, CPU time and Object name from the file.

         Create table RPCCompleted(
         time_stamp datetime2,
         obj_name varchar(50),
         cpu_time int
         );
      

      This is the table I'm trying to load the data into. Thank you for your time! Cheers

      https://www.brentozar.com/archive/2015/01/query-extended-events-target-xml/ https://www.itprotoday.com/sql-server/use-powershell-load-xml-data-sql-server
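
      In case a scoped-down XQuery turns out to be fast enough after all, here is a sketch that pulls only those three columns from the ring buffer (it assumes the session is named Sp_Mon, as in the sample below):

      ;WITH rb AS (
          SELECT CAST(t.target_data AS xml) AS td
          FROM sys.dm_xe_session_targets AS t
          JOIN sys.dm_xe_sessions AS s ON s.address = t.event_session_address
          WHERE s.name = 'Sp_Mon' AND t.target_name = 'ring_buffer'
      )
      INSERT INTO RPCCompleted (time_stamp, obj_name, cpu_time)
      SELECT e.value('(@timestamp)[1]', 'datetime2')
           , e.value('(data[@name="object_name"]/value)[1]', 'varchar(50)')
           , e.value('(data[@name="cpu_time"]/value)[1]', 'int')
      FROM rb
      CROSS APPLY rb.td.nodes('/RingBufferTarget/event[@name="rpc_completed"]') AS n(e);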

      [The sample event XML lost its markup when pasted here; only the field values survived. Recognizable among them: object_name = sp_executesql, the NT username WIN-9SOQOAAQ7AK\Administrator, and the statement text below.]

      exec sp_executesql N'SELECT
      session.name AS [Name],
      ''XEStore[@Name='' + quotename(CAST(SERVERPROPERTY(N''ServerName'') AS sysname),'''''''') + '']'' + ''/Session[@Name='' + quotename(session.name,'''''''') + '']'' AS [Urn],
      session.startup_state AS [AutoStart],
      running.create_time AS [StartTime],
      (CASE WHEN (running.create_time IS NULL) THEN 0 ELSE 1 END) AS [IsRunning]
      FROM
      sys.server_event_sessions AS session
      LEFT OUTER JOIN sys.dm_xe_sessions AS running ON running.name = session.name
      WHERE
      (session.name=@_msparam_0)and((CAST(SERVERPROPERTY(N''ServerName'') AS sysname)=@_msparam_1))
      ORDER BY
      [Name] ASC',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000)',@_msparam_0=N'Sp_Mon',@_msparam_1=N'WIN-9SOQOAAQ7AK'

      posted in SQL
      Kadir
    • RE: Column name or number of supplied values does not match table definition

      You have to specify the column names for the insert statement.

      insert into #info_rep([file number], nume)
      
      posted in SQL
      Kadir
    • Which TLS version is database mail using?

      How can we find out which TLS version our database mail is using for sending e-mails?

      We have issues with database mail (mails are sometimes not sent, even after 10 repeats), and it seems that TLS 1.1 and TLS 1.0 are not supported anymore. But how can I find out which TLS version our database server is using?

      SQL Server version is 12.0.5000.0 (which is SQL 2014 SP2, it seems)

      posted in SQL
      Kadir
    • Add an auto increment column by an order condition

      I have a big table without an auto-increment id column. I want to add this column, but I want the values assigned in the order of an existing date column.

      What would be the most practical method for this? For example, could I transfer the data to a temp table in the order I want, add an auto-increment id column to that table so rows are numbered in that order, and then match it back with the original table? (I don't think this is an efficient method.) Is there a special function, command block, or method in MSSQL for this?

      posted in SQL
      Kadir
    • RE: Best practice on index usage with timestamp

      create an index on ( cron_hour * 3600 + end_time )

      then write check in the where clause as

      AND UNIX_TIMESTAMP(NOW()) > cron_hour * 3600 + end_time 
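
      On MySQL 8.0.13+ that expression can be indexed directly as a functional index (the table name here is hypothetical; on MariaDB, use an indexed generated column instead):

      ALTER TABLE jobs ADD INDEX idx_due ((cron_hour * 3600 + end_time));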
      

      but you might find an index on status to be more useful

      posted in SQL
      Kadir
    • Upgrade to PostgreSQL 13 fails

      I'm trying to upgrade a PostgreSQL 12 cluster to version 13 with the following script:

      /usr/lib/postgresql/13/bin/pg_upgrade --check \
       --old-datadir=/var/lib/postgresql/12/main \
       --new-datadir=/var/lib/postgresql/13/main \
       --old-bindir=/usr/lib/postgresql/12/bin \
       --new-bindir=/usr/lib/postgresql/13/bin \
       --old-options=' -c config_file=/etc/postgresql/12/main/postgresql.conf' \
       --new-options=' -c config_file=/etc/postgresql/13/main/postgresql.conf' \
       --old-port=5432 \
       --new-port=5433
      

      The check returns:

      *Clusters are compatible*
      

      However, during the actual upgrade the process fails miserably due to the pg_catalog.pg_pltemplate table:

      pg_restore: creating ACL "pg_catalog.TABLE "pg_pltemplate""
      pg_restore: while PROCESSING TOC:
      pg_restore: from TOC entry 17728; 0 0 ACL TABLE "pg_pltemplate" postgres
      pg_restore: error: could not execute query: ERROR:  relation "pg_catalog.pg_pltemplate" does not exist
      

      It appears to be https://postgrespro.com/list/thread-id/2521269 , however the pg_upgrade --check run does not catch these template privileges.

      So far it appears that this query should return an empty result, otherwise you're in trouble:

      $ psql -c "SELECT * FROM information_schema.role_table_grants WHERE table_name='pg_pltemplate';"
       grantor  | grantee  | table_catalog | table_schema |  table_name   | privilege_type | is_grantable | with_hierarchy 
      ----------+----------+---------------+--------------+---------------+----------------+--------------+----------------
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | TRIGGER        | YES          | NO
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | REFERENCES     | YES          | NO
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | TRUNCATE       | YES          | NO
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | DELETE         | YES          | NO
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | UPDATE         | YES          | NO
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | SELECT         | YES          | YES
       postgres | postgres | postgres      | pg_catalog   | pg_pltemplate | INSERT         | YES          | NO
       postgres | PUBLIC   | postgres      | pg_catalog   | pg_pltemplate | SELECT         | NO           | YES
      

      Revoking these privileges:

      REVOKE SELECT ON "pg_catalog"."pg_pltemplate" FROM PUBLIC;
      REVOKE ALL ON "pg_catalog"."pg_pltemplate" FROM postgres;
      

      doesn't really help, as the REVOKE statements themselves get saved into the dumped schema:

      pg_restore: error: could not execute query: ERROR:  relation "pg_catalog.pg_pltemplate" does not exist
      Command was: REVOKE ALL ON TABLE "pg_catalog"."pg_pltemplate" FROM "postgres";
      REVOKE SELECT ON TABLE "pg_catalog"."pg_pltemplate" FROM PUBLIC;
      

      This can be checked before performing the upgrade (the result should likewise be empty) using:

      pg_dump --port 5432 --schema-only --quote-all-identifiers | grep pg_pltemplate
      


      Any ideas how to get rid of the pg_catalog.pg_pltemplate table altogether?

      posted in SQL
      Kadir