    Posts made by yajahira

    • How do you reset a serial type back to 0 after deleting all rows in a table?

      I previously asked https://dba.stackexchange.com/questions/294727/how-to-auto-increment-a-serial-column-on-update , and somebody posted a very helpful answer on db fiddle: https://dbfiddle.uk/?rdbms=postgres_10&fiddle=871d2b1be9627095a15f07841dcb90b6 . This has worked really well for me. However, now I would like to ask a new (related) question:

      Suppose I wipe out all the rows in the table (i.e. in the db fiddle example, the apple and orange rows are deleted by doing "delete from fruits where id > 0"). If I insert another fruit like watermelon, the id will start at 4, not 1. How do I force the serial to reset, so that the next insert will have id=1?
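      For reference, a sketch of two common PostgreSQL approaches (assuming the table is named fruits as in the fiddle, so the serial's backing sequence gets the default name fruits_id_seq):

      -- Option 1: empty the table and reset its identity in one step
      TRUNCATE fruits RESTART IDENTITY;

      -- Option 2: the table is already empty, so reset only the sequence
      ALTER SEQUENCE fruits_id_seq RESTART WITH 1;

      Either way, the next inserted row should get id = 1.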

      posted in SQL
      Y
      yajahira
    • RE: How do I align the SELECT list with the rules of VIEW creation in this query?

      My best guess is that the 2nd column does not have a name, thus breaking the rules of view creation.
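      If that is the cause, giving the expression an explicit alias should satisfy the rule. A minimal sketch with hypothetical names:

      -- Every expression in a view's SELECT list needs a column name
      CREATE VIEW order_totals AS
      SELECT customer_id,
             SUM(amount) AS total_amount   -- the alias names the 2nd column
      FROM   orders
      GROUP  BY customer_id;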

      posted in SQL
    • MySQL Router can't connect to MySQL Server

      The mysqlrouter service is running and the ports are listening:

      tcp        0      0 0.0.0.0:6446            0.0.0.0:*               LISTEN      6150/mysqlrouter
      tcp        0      0 0.0.0.0:6447            0.0.0.0:*               LISTEN      6150/mysqlrouter
      tcp        0      0 0.0.0.0:6448            0.0.0.0:*               LISTEN      6150/mysqlrouter
      tcp        0      0 0.0.0.0:6449            0.0.0.0:*               LISTEN      6150/mysqlrouter
      tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      6150/mysqlrouter
      

      but when I try to connect using the mysql client from a different computer

      mysql -u innodbcluster -h 192.168.11.124 -P 6446 -p
      

      it says

      ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.11.124:6446' (10060)

      What am I missing?

      PS: I installed MySQL Router on the primary MySQL server. Is that correct, or should it be installed on the application host that's going to connect to the InnoDB Cluster?

      posted in SQL
    • RE: Schema required to find a record in a table with null values

      One way I can think of is to convert the row to a JSON value, remove the NULLs, and then check whether your comparison value contains that "rule JSON":

      Given your sample table and data:

      select *
      from rules r
      where '{"Country": "Brazil","City": "Rio","Street": "Aven","No": 5}'::jsonb 
                @> jsonb_strip_nulls(to_jsonb(r) - 'id') 
      

      Returns the rule with id = 7 (assuming that No is defined as an integer column).

      https://dbfiddle.uk/?rdbms=postgres_14&fiddle=af549ddb6db147f8879fc3fb1cd38917

      posted in SQL
    • RE: How to revert Postgres to some time earlier?

      There is no way to undo a committed transaction. You have to restore a physical (file system level) backup and use point-in-time-recovery to recover (forward) to a point before the problem happened.
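      As a sketch of what that looks like on PostgreSQL 12 or later (hypothetical paths and target time; after restoring the base backup, create an empty recovery.signal file in the data directory):

      # postgresql.conf
      restore_command = 'cp /backup/wal/%f %p'        # hypothetical WAL archive location
      recovery_target_time = '2022-03-01 11:59:00'    # a point before the problem
      recovery_target_action = 'promote'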

      posted in SQL
    • RE: Azure Data Studio - IntelliSense does not see tables in schemas other than dbo

      You may need to refresh the IntelliSense cache. Previously in SSMS, this was easily achievable via the shortcut CTRL+SHIFT+R but it seems that's no longer the case in Azure Data Studio.

      Rather, in Azure Data Studio, you need to open the Command Palette, type in intellisense, and then you should be able to click on Refresh IntelliSense Cache as per https://dba.stackexchange.com/a/298418/150011 on a similar question:

      To do this in ADS, press CTRL+SHIFT+P to bring up the Command Palette, and type intellisense. The drop down list will filter so you can select Refresh IntelliSense Cache from the listed options.

      Perhaps there's a way to assign a keyboard shortcut ( https://docs.microsoft.com/en-us/sql/azure-data-studio/keyboard-shortcuts?view=sql-server-ver15 ) to this specific command too.

      posted in SQL
    • Getting how many days users have logged in during a month (same day counts as 1 log in)

      Let's see if you can help me with a query. I want to know on how many days users have logged in during a month.

      If I have this data:

      id_log  id_user  timestamp
      1       1        2022-02-22 00:00:00
      2       1        2022-02-22 15:00:00
      3       1        2022-02-23 00:00:00
      4       2        2022-02-22 00:00:00
      5       3        2022-02-22 00:00:00
      6       3        2022-02-24 00:00:00
      7       3        2022-02-25 00:00:00
      8       1        2022-03-01 00:00:00
      9       2        2022-03-01 00:00:00

      Result:

      year  month  times  number_of_users_logged_in
      2022  2      1      2 (id_user 2 and 3)
      2022  2      2      1 (id_user 1)
      2022  2      3      1 (id_user 3)
      2022  3      1      2 (id_user 1 and 2)

      Notice that user 1 has logged in twice on the 22nd but only counts as one.
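      A possible shape for that query (a sketch assuming MySQL and a hypothetical table name logs; in PostgreSQL, EXTRACT and a ::date cast would replace YEAR(), MONTH() and DATE()): first count distinct days per user and month, then count users per number of days:

      SELECT year, month, times, COUNT(*) AS number_of_users_logged_in
      FROM (
          SELECT id_user,
                 YEAR(timestamp)  AS year,
                 MONTH(timestamp) AS month,
                 COUNT(DISTINCT DATE(timestamp)) AS times
          FROM   logs
          GROUP  BY id_user, YEAR(timestamp), MONTH(timestamp)
      ) per_user
      GROUP BY year, month, times
      ORDER BY year, month, times;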

      Thanks in advance,

      Regards

      posted in SQL
    • attribute 51 of type users has wrong type

      I'm trying to import data to a pg table from a csv file. The error I get is:

      $ psql -U postgres -c "COPY users (first_name) FROM '/users.csv' (FORMAT csv)"
      ERROR:  attribute 51 of type users has wrong type
      DETAIL:  Table has type integer, but query expects character varying.
      CONTEXT:  COPY users, line 1
      

      How do I know which attribute it is about? For that matter, an INSERT also fails:

      $ psql -U postgres -c "INSERT INTO users (first_name) VALUES ('first_name')"
      ERROR:  attribute 51 of type record has wrong type
      DETAIL:  Table has type integer, but query expects character varying.
      

      I tried to count according to the \d users output, and to the attnum value:

      SELECT a.*
      FROM pg_attribute a
          JOIN pg_class c on a.attrelid = c.oid
          JOIN pg_namespace n on c.relnamespace = n.oid
      WHERE n.nspname = 'public'
          AND c.relname = 'users'
          AND attnum >= 1
      ORDER BY attnum
      

      I'm running PostgreSQL 12.7.
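      Since the error already names attribute 51, the catalog lookup can be narrowed to that one attribute. A sketch:

      -- Name and declared type of attribute 51 of table users
      SELECT attname, format_type(atttypid, atttypmod)
      FROM   pg_attribute
      WHERE  attrelid = 'public.users'::regclass
        AND  attnum = 51;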

      posted in SQL
    • RE: Postgres CREATE DATABASE Not Working

      The culprit was pgbouncer and the .ini file, which needs individual entries for each listed database. Creating the database was ok, but then the .ini file didn't recognize the new database name.

      posted in SQL
    • catch and understand transaction rollback

      Because of a data consistency issue in the application (sporadic data loss), I have to perform a deep dive into transaction handling. I've created an extended event session:

      CREATE EVENT SESSION [system_rollback_error] ON SERVER 
      ADD EVENT sqlserver.error_reported(
          ACTION(
              package0.event_sequence,
              sqlserver.client_app_name,
              sqlserver.client_hostname,
              sqlserver.database_id,
              sqlserver.database_name,
              sqlserver.server_principal_name,
              sqlserver.session_id,
              sqlserver.session_nt_username,
              sqlserver.sql_text,
              sqlserver.transaction_id,
              sqlserver.transaction_sequence
          )
          WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id],(4))
              AND [severity]>(10))),
      ADD EVENT sqlserver.sql_transaction(
          ACTION(
              package0.event_sequence,
              sqlserver.client_app_name,
              sqlserver.client_hostname,
              sqlserver.database_id,
              sqlserver.database_name,
              sqlserver.server_principal_name,
              sqlserver.session_id,
              sqlserver.session_nt_username,
              sqlserver.sql_text,
              sqlserver.transaction_id,
              sqlserver.transaction_sequence
          )
          WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id],(4))
              AND [transaction_state]='Rollback'
              AND [transaction_type]='System'))
      ADD TARGET package0.event_file(
      SET filename=N'system_rollback_error',
      max_file_size=(100)
      )
      WITH (
      MAX_MEMORY=4096 KB,
      EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY=30 SECONDS,
      MAX_EVENT_SIZE=0 KB,
      MEMORY_PARTITION_MODE=NONE,
      TRACK_CAUSALITY=OFF,
      STARTUP_STATE=OFF
      )
      GO

      Now I get a lot of DUI statements shown in this extended-event session. All of them are in autocommit mode, but there is not a single error message: no key violation, no wrong syntax, nothing giving me an idea why the transaction was rolled back. I've checked the transaction log using

      select * from sys.fn_dblog(NULL,NULL) where operation in ('LOP_ABORT_XACT') 
      

      but there is also nothing.

      Do you have an idea why an autocommit transaction has a transaction_state of "Rollback" with no errors? (It's not a ROLLBACK at the end of the statement, and it is also not a broken session in implicit_transaction mode, because this is a transaction_type of "user" and not "system".)

      posted in SQL
    • How to check that a user is "db_owner" (like in SQL Server) but in PostgreSQL?

      I'm new to Postgres and would like to understand how to check if a user is db_owner. I'd like to know the same for SQL Server, as well.

      How can I check that a user is the owner of the database (db_owner)?

      Maybe another option is to check that the user can CREATE, ALTER, and DROP tables in the database?
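      For reference, hedged sketches of both checks (adjust the names as needed):

      -- PostgreSQL: who owns the current database?
      SELECT datdba::regrole AS owner
      FROM   pg_database
      WHERE  datname = current_database();

      -- SQL Server: is the current user a member of db_owner?
      SELECT IS_ROLEMEMBER('db_owner');   -- 1 = yes, 0 = no, NULL = no such role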

      posted in SQL
    • Trigger uses a procedure or a function?

      PostgreSQL version 10 and before use the term "Trigger Procedure" ( https://www.postgresql.org/docs/10/plpgsql-trigger.html ). But later versions use the term "Trigger Function".

      What is the reason for changing that term?
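      Whatever the term, the object itself is an ordinary function declared as RETURNS trigger. A minimal sketch with hypothetical names (note that before v11 the CREATE TRIGGER syntax was EXECUTE PROCEDURE rather than EXECUTE FUNCTION, which may be related to the change in terminology):

      CREATE FUNCTION touch_updated_at() RETURNS trigger AS $$
      BEGIN
          NEW.updated_at := now();
          RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER trg_touch
      BEFORE UPDATE ON some_table
      FOR EACH ROW EXECUTE FUNCTION touch_updated_at();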

      posted in SQL
    • Is it okay to have this kind of relationship in an ERD?

      https://imgur.com/a/FO26YA9

      The Staff connects to the Manager and the Branch,
      and the Branch connects to the Staff and the Manager.

      Note that one branch can only contain one manager.

      Questions:

      1. If it is wrong, how can I design it?

      2. Is it okay if I add two relationships between the tables?

      3. Is it possible just to add an attribute such as isManager?

      posted in SQL
    • SQL error when trying to create a table. I modeled it after the example that my professor showed us and I can't seem to find the error

      The error I'm getting:

      Table level constraint or index does not specify column list, table 'Tag'.

      Below is my code:

      --Creating the Tag Table
      CREATE TABLE Tag (
      --Columns for the Follower List table
      vc_TagID int identity,
      TagText varchar(20) not null,
      TagDescription varchar(100),
      --Contraints on the Follower List Table
      CONSTRAINT PK_vc_TagID PRIMARY KEY (Tag),
      CONSTRAINT U1_TagText UNIQUE 
      )
      --End Creating the Tag table
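      For comparison, a corrected sketch: the PRIMARY KEY must reference the column (vc_TagID), not the table name, and a table-level UNIQUE constraint must specify its column list:

      CREATE TABLE Tag (
      vc_TagID int identity,
      TagText varchar(20) not null,
      TagDescription varchar(100),
      CONSTRAINT PK_vc_TagID PRIMARY KEY (vc_TagID),
      CONSTRAINT U1_TagText UNIQUE (TagText)
      )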
      
      posted in SQL
    • RE: How to restore automatically the latest of 10 .bak files from a blob container?

      You can use dbatools' Restore-DbaDatabase PowerShell cmdlet ( https://docs.dbatools.io/Restore-DbaDatabase ) to do the automation that you want. This post will get you started: https://www.nocentino.com/posts/2019-09-15-restoring-backups-from-azure-blob-with-dbatools/

      Alternatively, you can also use a T-SQL-based solution: https://paulbrewer.wordpress.com/sp_restoregene/

      posted in SQL
    • RE: Aggregate multiple columns and rows into a json array
      select json_agg(u.a) from (
        select
          bname, bfrom, json_build_array(bfrom,bval) a
          FROM   stepchart 
        union all
        select
          bname, bfrom, json_build_array(bto,bval)
          FROM   stepchart 
        order by bname, bfrom
      ) u;
      

      see https://dbfiddle.uk/?rdbms=postgres_14&fiddle=1991623d96797ad5738a7f18481c9410

      [[63115200, 213.4], [139769280, 213.4], [139769280, 213.39], [160911360, 213.39], [160911360, -9999.9], [163853812, -9999.9]]
      
      posted in SQL
    • Creating sample multi-TB databases with Postgresql

      For reasons I won't go into, I need to do some tests with sample terabyte-sized databases in Postgres. These tests are not your typical SQL-level benchmarks; they are more like tests of how quickly I can back the database up, restore it, etc. So while the test DBs should have some structure, it's not really important what that structure is, as long as there is minimal complexity, like a few tables.

      The requirement is that the tested databases are at least 10TB in size, preferably 20TB, size being understood as the disk space occupied by the database files (yes, it should be the size after vacuuming etc.).

      So I'm trying to use the pgbench tool to generate such sample databases. I started pgbench with the intention of creating a 15TB DB like this:

      pgbench -i -s 1600000 pgbench15t

      This run generated the tuples in about 3 days, but pgbench has now been hanging on vacuuming for about 2 days. What's worse, there does not seem to be much activity from either the postmaster process or the disk (I checked with iotop).

      Note: the PG version is 11 (I'm not at liberty to choose the version) and I've used default settings. Can that (i.e. the default settings) be a problem?

      OK so I have two main questions:

      1. Is there some other quick way of generating such sample TB-sized databases that would not involve me just writing DB generation script / program?
      2. Can pgbench be used in some specific way that would result in successful generation of say 15TB DB?
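      On question 2: if I recall correctly, pgbench in PG11 supports -I to choose the initialization steps, so dropping the v step (e.g. pgbench -i -I dtgp -s 1600000 pgbench15t) should skip the vacuum entirely. On question 1, a plain-SQL sketch (hypothetical table; scale the row count and padding to the target size):

      -- Bulk-generate filler rows without any benchmark tooling
      CREATE TABLE filler (id bigint, pad text);
      INSERT INTO filler
      SELECT g, repeat('x', 100)
      FROM   generate_series(1, 100000000) AS g;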
      posted in SQL
    • PostgreSQL minimize data dir for Docker image

      I am preparing a Postgres Docker image based on the official image. I have an application that fills the database with initial data (this takes about an hour for ~300MB of data). I want to make an image out of that Postgres database so that the data is already initialized. This works fine, but due to the nature of the WAL, the PG_DATA directory is larger than necessary. For Docker image distribution, minifying PG_DATA is important. This will add up if I later think about adding a couple more layers of data, which all add their own set of WAL files that are not strictly necessary.

      What options do I have to minimize PG_DATA? My main question is about minimizing the WAL, at the end I talk about minimizing other areas of PG_DATA.

      Delete the WAL

      When is it safe for me to delete the WAL?

      To be clear, this is a throw-away container filled with some initial data. It’s not intended to be a productive system. archive_mode = off, wal_level = minimal, no replication.

      I am trying to keep the WAL small by the following config:

      wal_keep_size = 0
      wal_recycle = off
      min_wal_size = 32MB
      max_wal_senders = 0 # necessary due to wal_level = minimal
      

      However, even after issuing CHECKPOINT;, the pg_wal directory is still ~200MB big. Why is Postgres not deleting the files? Based on the settings, I would assume that only 32MB should be kept. After CHECKPOINT; and stopping the server, is it safe to delete all files from pg_wal?

      From what I can tell from other questions and answers, Postgres should automatically delete the WAL files. Some Q&As I am already aware of, but they do not address my specific problem and use case:

      • https://dba.stackexchange.com/questions/80317/how-can-i-solve-postgresql-problem-after-deleting-wal-files
      • https://stackoverflow.com/questions/49650016/how-to-reduce-wal-file-count-in-edb-postgresql-9-6-instance
      • https://stackoverflow.com/questions/35144403/which-postgresql-wal-files-can-i-safely-remove-from-the-wal-archive-folder
      • https://stackoverflow.com/questions/49539938/postgres-wal-file-not-getting-deleted
      • https://stackoverflow.com/questions/68858808/postgresql-how-to-safely-remove-files-inside-pg-wal-directory

      These questions are mostly concerned with archiving, backup, replication, and PITR, but none of that applies to my use case.

      Minimizing PG_DATA

      My idea is that after executing CHECKPOINT; and clearing the WAL, I can also do a VACUUM FULL; to release any space to the file system. This seems to work as intended; I see the base directory getting a bit smaller. Of course, I expect this to have the most effect after deleting/updating data, not after mostly inserting rows during initialization.

      Are there even more options to minimize PG_DATA?
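      One hedged observation on the remaining ~200MB: WAL segments only become removable at a checkpoint after the insert position has moved past them, so forcing a segment switch followed by checkpoints may shrink pg_wal further. A sketch (only reasonable here because archive_mode = off and there is no replication, as described above):

      CHECKPOINT;
      SELECT pg_switch_wal();   -- close the current segment
      CHECKPOINT;               -- allow the old segments to be removed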

      posted in SQL
    • Update all rows with a one to many relationship count in the same table in MYSQL

      I am trying to update all rows of a MySQL table with the total number of referred users for each user. One user can refer multiple users, and I need to run a MySQL query to update the invite count (the total number of users they have referred) on all rows.

      Columns:

      • id
      • referred_by_user_id
      • invite_count

      This is what I came up with, but it is not working (it returns 0 on every row). What is the best way to accomplish this?

      SET @code=0;
      UPDATE users
          SET invite_count =
             (SELECT count(*)
                 where referred_by_user_id = @code:=@code+1);
      

      Table Schema

      CREATE TABLE `users` (
        `id` bigint unsigned NOT NULL AUTO_INCREMENT,
        `name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
        `referred_by_user_id` bigint DEFAULT NULL,
        `invite_count` int NOT NULL DEFAULT '0',
        `created_at` timestamp NULL DEFAULT NULL,
        `updated_at` timestamp NULL DEFAULT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
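      For what it's worth, a sketch of a form that should work in MySQL: since the target table cannot appear in a plain subquery of its own UPDATE, the counts are materialized in a derived table and joined back:

      UPDATE users u
      LEFT JOIN (
          SELECT referred_by_user_id AS referrer_id, COUNT(*) AS cnt
          FROM   users
          WHERE  referred_by_user_id IS NOT NULL
          GROUP  BY referred_by_user_id
      ) r ON r.referrer_id = u.id
      SET u.invite_count = COALESCE(r.cnt, 0);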
      

      posted in SQL
    • Slow query with multiple INNER JOIN (via table)

      This is my first question here.

      I have a project written in PHP, running on Windows Server with Apache and MySQL 8. There is functionality to show news by tags (many-to-many).

      So I have three InnoDB tables:

      • news (id, title ...)
      • news_tag_assn (news_id, tag_id)
      • tag (id, name)

      I use a query to filter news containing exactly two tags:

          SELECT news.* FROM news 
          INNER JOIN news_tag_assn ON news.id = news_tag_assn.news_id 
          INNER JOIN tag ON news_tag_assn.tag_id = tag.id 
          WHERE  (tag.name IN ('Corona', 'Egypt')) AND (news.category_id <> 14) AND (news.visible=1)
          GROUP BY news.id, news.category_id, news.visible 
          HAVING COUNT(*) = 2 
          ORDER BY news.created DESC, news.updated DESC LIMIT 5
      

      I have created some indexes:

      For news_tag_assn table:

      • idx_news_tag_assn (news_id, tag_id)
      • idx_tag_news_assn (tag_id, news_id)

      For tag table:

      • tag_id_name (id, name)

      For news table:

      • idx_news_visible_tags (id, category_id, visible)
      • idx_cat_visible (category_id, visible)

      After creating the indexes, the query became faster (0.144 ms vs. > 1 sec).

      Another issue is counting news for pagination:

          SELECT COUNT(*) FROM (SELECT news.* FROM news 
          INNER JOIN news_tag_assn ON news.id = news_tag_assn.news_id 
          INNER JOIN tag ON news_tag_assn.tag_id = tag.id 
          WHERE  (tag.name IN ('Corona', 'Egypt')) AND (news.category_id <> 14) AND (news.visible=1)
          GROUP BY news.id, news.category_id, news.visible 
          HAVING COUNT(*) = 2 ) c
      

      This query is about twice as slow as the first.

      But I have a doubt that I have done everything correctly. Maybe someone can advise how I can make this query faster.
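      One hedged idea for the count query: the outer COUNT(*) only needs one row per group, so selecting just news.id in the subquery (instead of news.*) may let MySQL satisfy it from the narrower indexes:

      SELECT COUNT(*) FROM (
          SELECT news.id
          FROM news
          INNER JOIN news_tag_assn ON news.id = news_tag_assn.news_id
          INNER JOIN tag ON news_tag_assn.tag_id = tag.id
          WHERE tag.name IN ('Corona', 'Egypt')
            AND news.category_id <> 14
            AND news.visible = 1
          GROUP BY news.id
          HAVING COUNT(*) = 2
      ) c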

      UPDATE

      SHOW CREATE TABLE news:

      CREATE TABLE `news` (
        `id` int NOT NULL AUTO_INCREMENT,
        `title` text NOT NULL,
        `alias` text,
        `short_description` varchar(200) DEFAULT NULL,
        `description` varchar(765) DEFAULT NULL,
        `content` longtext COMMENT,
        `label` varchar(255) DEFAULT NULL,
        `category_id` int NOT NULL COMMENT,
        `thumb` varchar(255) DEFAULT NULL,
        `type` int DEFAULT '1',
        `link` varchar(255) DEFAULT NULL,
        `priority` int DEFAULT '0',
        `creator_id` int DEFAULT NULL,
        `layout` varchar(255) DEFAULT NULL,
        `internal_layout` varchar(255) DEFAULT NULL,
        `created` int DEFAULT NULL,
        `updated` int DEFAULT NULL,
        `visible` int NOT NULL DEFAULT '1',
        `custom_bundles` int DEFAULT '0',
        PRIMARY KEY (`id`,`category_id`,`visible`) USING BTREE,
        KEY `idx_news_visible_for_tags` (`id`,`category_id`,`visible`),
        KEY `idx_cat_visible` (`category_id`,`visible`),
        KEY `idx_date` (`created`,`updated`)
      ) ENGINE=InnoDB AUTO_INCREMENT=22269 DEFAULT CHARSET=utf8mb3
      

      SHOW CREATE TABLE news_tag_assn:

      CREATE TABLE `news_tag_assn` (
        `news_id` int NOT NULL,
        `tag_id` int NOT NULL,
        PRIMARY KEY (`tag_id`,`news_id`),
        KEY `idx_news_tag_assn` (`news_id`,`tag_id`) USING BTREE,
        KEY `idx_tag_news_assn` (`tag_id`,`news_id`) USING BTREE
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3
      

      SHOW CREATE TABLE tag:

      CREATE TABLE `tag` (
        `id` int NOT NULL AUTO_INCREMENT,
        `name` varchar(255) NOT NULL,
        `alias` varchar(255) NOT NULL,
        `group_id` int DEFAULT NULL,
        `frequency` int DEFAULT '0',
        `visible` int DEFAULT '0',
        PRIMARY KEY (`id`,`name`) USING BTREE,
        KEY `tag_id` (`id`) USING BTREE,
        KEY `tag_id_name` (`id`,`name`)
      ) ENGINE=InnoDB AUTO_INCREMENT=156 DEFAULT CHARSET=utf8mb3
      
      posted in SQL