    Posts made by guilherme

    • RE: How do you reset a serial type back to 0 after deleting all rows in a table?

      Simple (assuming you know the sequence name):

      SELECT setval('fruits_id_seq', 1, false);
      

      Or:

      ALTER SEQUENCE fruits_id_seq RESTART WITH 1;  -- RESTART WITH 1, not 0: a default serial sequence has MINVALUE 1
      

      Safe (you don't know the sequence name or are not certain whether the table is empty):

      SELECT setval(pg_get_serial_sequence('fruits', 'id')
                  , COALESCE(max(id) + 1, 1)
                  , false)
      FROM   fruits;
      

      db<>fiddle https://dbfiddle.uk/?rdbms=postgres_10&fiddle=402899ba92090d12b8d57279abed3c59

      See:

      • https://stackoverflow.com/a/23390399/939860
      • https://stackoverflow.com/a/8750984/939860
      • https://dba.stackexchange.com/questions/193568/how-to-fix-all-duplicate-key-value-violates-unique-constraint/193577#193577
      posted in SQL
    • Placing ORDER BY in the right place

      Should the ORDER BY be placed at the very bottom of the query, under the FROM departments?

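
      For reference, the conventional clause order puts ORDER BY last. A generic skeleton (hypothetical columns, not the query from the screenshot):

      SELECT   department_name
      FROM     departments
      WHERE    location_id IS NOT NULL
      GROUP BY department_name
      HAVING   COUNT(*) > 1
      ORDER BY department_name;  -- ORDER BY comes after FROM/WHERE/GROUP BY/HAVING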

      posted in SQL
    • Design a Postgres database schema for arbitrary JSON attribute queries

      In my Postgres database, I have got lots of data (a million rows) and the schema looks like this:

      create table test_tb (
        id int,
        tags jsonb
      );
      

      A few queries to be performed on this dataset are:

      -- 1.
      select id, tags ->> 'tag_1' as tag_1
      from test_tb
      where tags ->> 'tag_1' = 'val_1';

      -- 2.
      select id, tags ->> 'tag_2' as tag_2
      from test_tb;

      -- 3.
      select id, tags ->> 'tag_1'
      from test_tb
      where (tags ->> 'tag_1' = 'dv_0'
        and tags ->> 'tag_3' = 'dv_15')
        and (tags ->> 'tag_15' = 'dv_22' or tags ->> 'tag_5' = 'dv_6')
         or (tags ->> 'tag_12' = 'dv_9');
      

      The tags in each tuple are completely arbitrary: one tag might appear in only one tuple while another appears in hundreds of them, and each tuple has around 20 - 30 tags.

      I tried storing the tags in a jsonb column with a GIN index on that column, but it didn't optimize my queries. Please suggest an alternative schema.
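
      For what it's worth, a default GIN index (jsonb_ops) cannot serve ->> equality predicates; it supports containment (@>). A minimal sketch of that rewrite, reusing the hypothetical tag names above:

      CREATE INDEX test_tb_tags_gin ON test_tb USING gin (tags);

      -- Same logical filter as: WHERE tags ->> 'tag_1' = 'val_1',
      -- but expressed with @> so the GIN index can be used.
      SELECT id, tags ->> 'tag_1' AS tag_1
      FROM   test_tb
      WHERE  tags @> '{"tag_1": "val_1"}';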

      posted in SQL
    • Empty MYSQL table columns

      We have a legacy system where we believe some columns have never been used. This is because a setup was repeated for new clients.

      Is there any simple way in MySQL to check for empty columns in a table, other than checking one column at a time?
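
      One common approach (a sketch, assuming a hypothetical table t in schema mydb) is to let information_schema generate the per-column checks; COUNT(col) counts non-NULL values, so any column that comes back 0 has never been populated:

      SET SESSION group_concat_max_len = 1000000;  -- the generated SELECT can be long

      SELECT CONCAT(
               'SELECT ',
               GROUP_CONCAT(
                 CONCAT('COUNT(`', COLUMN_NAME, '`) AS `', COLUMN_NAME, '`')
                 SEPARATOR ', '
               ),
               ' FROM `mydb`.`t`;'
             ) AS check_stmt
      FROM information_schema.COLUMNS
      WHERE TABLE_SCHEMA = 'mydb'
        AND TABLE_NAME   = 't';

      Running the generated statement returns one row of counts; columns showing 0 contain only NULLs. (Empty strings would still be counted, so wrap the column in NULLIF(col, '') if "empty" should include ''.)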

      posted in SQL
    • How to resolve "The instance could not be upgraded because the 'reg*' data type is used in user tables"?

      I tried to upgrade a Postgres DB engine from 11.12 to 13.3 on AWS via the admin console in the browser. But the upgrade could not be performed; the error/pg_upgrade_precheck.log file shows this message:

      The instance could not be upgraded from 11.12.R1 to 13.3.R1 because of following reasons. Please take appropriate action on databases that have usages incompatible with requested major engine version upgrade and try again.

      - Following usages in database '[mydbname]' need to be corrected before upgrade:

      -- The instance could not be upgraded because the 'reg*' data type is used in user tables. Please remove all usages of 'reg*' data types and try again.

      I checked the AWS RDS upgrade guide, and it covers this check specifically: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html#USER_UpgradeDBInstance.PostgreSQL.MajorVersion.Process . It says to run this query to check that there are no uses of unsupported reg* data types. Problem is, when I run it, it returns 0, meaning that there should be no uses of unsupported reg* data types.

      SELECT count(*) FROM pg_catalog.pg_class c, pg_catalog.pg_namespace n, pg_catalog.pg_attribute a 
        WHERE c.oid = a.attrelid 
            AND NOT a.attisdropped 
            AND a.atttypid IN ('pg_catalog.regproc'::pg_catalog.regtype, 
                               'pg_catalog.regprocedure'::pg_catalog.regtype, 
                               'pg_catalog.regoper'::pg_catalog.regtype, 
                               'pg_catalog.regoperator'::pg_catalog.regtype, 
                               'pg_catalog.regconfig'::pg_catalog.regtype, 
                               'pg_catalog.regdictionary'::pg_catalog.regtype) 
            AND c.relnamespace = n.oid 
            AND n.nspname NOT IN ('pg_catalog', 'information_schema');
      

      I also found https://dba.stackexchange.com/questions/271672/cannot-upgrade-to-v12-due-to-sql-identifier-data-type . That gave me 1 user table with 2 columns. I removed that table, but the upgrade still fails with the same error messages as before.

      How can I otherwise find uses of unsupported reg* data types?
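
      One broader net (a sketch, not the AWS-documented query) is to match every column whose type name, array element type, or domain base type starts with reg, and to run it in each database on the instance. It deliberately over-matches (regclass, regtype and regrole are normally allowed by pg_upgrade) so everything can be eyeballed:

      SELECT n.nspname, c.relname, a.attname,
             format_type(a.atttypid, a.atttypmod) AS column_type
      FROM   pg_catalog.pg_attribute a
      JOIN   pg_catalog.pg_class     c ON c.oid = a.attrelid
      JOIN   pg_catalog.pg_namespace n ON n.oid = c.relnamespace
      JOIN   pg_catalog.pg_type      t ON t.oid = a.atttypid
      WHERE  NOT a.attisdropped
      AND    n.nspname NOT IN ('pg_catalog', 'information_schema')
      AND   (t.typname LIKE 'reg%'      -- plain reg* columns
          OR t.typname LIKE '\_reg%'    -- arrays of reg* (typname is e.g. _regclass)
          OR EXISTS (SELECT 1           -- domains whose base type is reg*
                     FROM   pg_catalog.pg_type bt
                     WHERE  bt.oid = t.typbasetype
                     AND    bt.typname LIKE 'reg%'));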

      posted in SQL
    • RE: Postgresql error syntax error at or near "END;"

      The solution to this was to change ELSE IF to ELSEIF, without the space in between: PL/pgSQL expects ELSIF (or its alias ELSEIF), while ELSE IF starts a nested IF that would need its own END IF.

      CREATE OR REPLACE FUNCTION find_equal(object objects[]) RETURNS text[] AS $$
      DECLARE
          uuids text[];
      BEGIN
          IF object.object_type = 'flat' THEN
              SELECT array_agg(uuid)
              FROM   objects
              WHERE  open = true
              AND    object_type   = object.object_type
              AND    square_meters = object.square_meters
              AND    floor         = object.floor
              AND    total_floors  = object.total_floors
              AND    latitude      = object.latitude
              AND    longitude     = object.longitude
              INTO   uuids;

          ELSEIF object.object_type = 'house' THEN
              SELECT array_agg(uuid)
              FROM   objects
              WHERE  open = true
              AND    object_type   = object.object_type
              AND    square_meters = object.square_meters
              AND    floor         = object.floor
              AND    total_floors  = object.total_floors
              AND    latitude      = object.latitude
              AND    longitude     = object.longitude
              INTO   uuids;

          ELSEIF object.object_type = 'commercial_space' THEN
              SELECT array_agg(uuid)
              FROM   objects
              WHERE  open = true
              AND    object_type   = object.object_type
              AND    square_meters = object.square_meters
              AND    latitude      = object.latitude
              AND    longitude     = object.longitude
              INTO   uuids;

          ELSE
              SELECT array_agg(uuid)
              FROM   objects
              WHERE  open = true
              AND    object_type = object.object_type
              AND    lot_size    = object.lot_size
              AND    latitude    = object.latitude
              AND    longitude   = object.longitude
              INTO   uuids;
          END IF;

          RETURN uuids;
      END;
      $$ LANGUAGE plpgsql;

      posted in SQL
    • RE: MariaDB Read-Only user can't see database, why?

      You aren't actually connected as the read-only user. If you select current_user(), you'll see an anonymous user that has no privileges on the database.
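
      A quick way to confirm (generic MariaDB/MySQL, not specific to this setup):

      -- USER() is the account you asked to log in as;
      -- CURRENT_USER() is the account the server actually matched.
      -- An empty user name (''@'%' or ''@'localhost') means the anonymous user.
      SELECT USER(), CURRENT_USER();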

      posted in SQL
    • RE: What is the easiest way to migrate a postgresql database to another server?

      If the architecture on both systems is the same (say, 64-bit Linux on the same hardware architecture), you can simply copy the data directory. There are a few things to observe:

      • you have to use the same PostgreSQL versions on both systems

      • unless the C library version is the same or you are using the C collation, you had better reindex all indexes on string expressions, because the collation might have changed
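
      As a sketch of that second point (assuming a database named mydb): rebuilding only the indexes on string expressions would be more targeted, but rebuilding everything is the blunt, safe option.

      -- Run while connected to mydb; rebuilds all of its indexes.
      -- Harmless if the collations did match, required if they did not.
      REINDEX DATABASE mydb;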

      posted in SQL
    • RE: View to insert records into history table and then find changes with Except query

      Based on your response in the comments, it sounds like you were trying to use an INSERT statement inside of your view's definition. That is not allowed and therefore not proper T-SQL syntax, which explains the error you received:

      incorrect syntax near insert expecting ( select or with

      Views are essentially just a placeholder object for a DQL (SELECT) statement. Instead, if you want this code to be re-usable, you can create a stored procedure ( https://docs.microsoft.com/en-us/sql/relational-databases/stored-procedures/create-a-stored-procedure?view=sql-server-ver15 ) like so:

      CREATE PROCEDURE [dbo].[Acc-Faculty-Changes] AS

      INSERT INTO [dbo].[Acc-Faculty-History]
      (
          [Run Date],
          [ID],
          [Name],
          [First],
          [Last],
          [Email],
          [Department],
          [Phone Number],
          [Mobile Number]
      )
      SELECT DISTINCT
          CURRENT_TIMESTAMP AS [Run Date],
          RTRIM(ISNULL(FL.INSTRCTR_ID_NUM, SPACE(0)))                  AS ID,
          RTRIM(ISNULL(NM.FIRST_NAME + ' ' + NM.LAST_NAME, SPACE(0))) AS Name,
          RTRIM(ISNULL(NM.FIRST_NAME, SPACE(0)))                      AS First,
          RTRIM(ISNULL(NM.LAST_NAME, SPACE(0)))                       AS Last,
          RTRIM(ISNULL(NM.EMAIL_ADDRESS, SPACE(0)))                   AS Email,
          RTRIM(ISNULL(SM.DIVISION_CDE, SPACE(0)))                    AS Department,
          AM.PHONE        AS [Phone Number],
          NM.MOBILE_PHONE AS [Mobile Number]
      FROM DevPrd.dbo.FACULTY_LOAD_TABLE FL
      INNER JOIN DevPrd.dbo.YEAR_TERM_TABLE YT
          ON  FL.YR_CDE  = YT.YR_CDE
          AND FL.TRM_CDE = YT.TRM_CDE
      INNER JOIN DevPrd.dbo.NAME_MASTER NM
          ON  NM.ID_NUM = FL.INSTRCTR_ID_NUM
      --INNER JOIN DevPrd.dbo.FACULTY_MASTER FM
      --    ON NM.ID_NUM = FM.ID_NUM
      INNER JOIN DevPrd.dbo.ADDRESS_MASTER AM
          ON  NM.ID_NUM = AM.ID_NUM
      --WHERE FM.ACTIVE = 'Y'
      --  AND NM.CURRENT_ADDRESS <> '*LHP'
      INNER JOIN DevPrd.dbo.SECTION_MASTER SM
          ON  FL.YR_CDE  = SM.YR_CDE
          AND FL.TRM_CDE = SM.TRM_CDE
      WHERE YT.TRM_BEGIN_DTE <= dateadd(mi, 30, FORMAT(CURRENT_TIMESTAMP, 'd')) --set to day of course
        AND YT.TRM_END_DTE   >= dateadd(mi, 30, FORMAT(CURRENT_TIMESTAMP, 'd'))

      posted in SQL
    • Date not being imported from csv in proper format in MySQL

      I am new to MySQL, so kindly explain your answers in a simple way!

      I have a csv file which has 2 date columns. I want to import the csv into MySQL and I am using the following method:

      1. Create an empty table in the database with proper column types. So both date columns were assigned a DATE datatype and so on.

      2. Load the csv using following query:

         LOAD DATA LOCAL INFILE 'C:\Users\Shrey\OneDrive\Desktop\assignments\Samyak\Comorbidity Covid-19.csv'  
         INTO TABLE `comorbidity` 
         FIELDS TERMINATED BY ',' 
         LINES TERMINATED BY '\r\n'
         IGNORE 1 ROWS;
        

      The issue is that both date columns are being imported but with all the entries like 0000-00-00.

      I tried the following query as suggested by @Ergest Basha:

      LOAD DATA LOCAL INFILE 'C:/Users/Shrey/OneDrive/Desktop/assignments/Samyak/Comorbidity Covid-19.csv'
      INTO TABLE `comorbidity`
      FIELDS TERMINATED BY ','
      LINES TERMINATED BY '\n'
      IGNORE 1 ROWS
      (
        @Current_date_var, @Start_Date_var, State,
        Condition_group, Condition,
        Age_group, Covid19_deaths, Number_of_mentions
      )
      SET Date_as_of = STR_TO_DATE(@Current_date_var, '%m-%d-%Y'),
          Start_date = STR_TO_DATE(@Start_Date_var, '%m-%d-%Y');

      But I still get the same output!

      This is a sample of the data I am working with:

      Date as of | Start date | State         | Condition group      | Condition | Age group | Covid-19 deaths | Number of mentions
      02-06-2022 | 01-01-2020 | United States | Respiratory diseases | Influenza | 0-24      | 1167            | 1127
      02-06-2022 | 01-01-2020 | United States | Respiratory disease  | pneumonia | 25-51     | 2254            | 5458
      02-06-2022 | 02-02-2020 | United States | Respiratory disease  | pneumonia | 52-76     | 5545            | 2222

      Any help is much appreciated!
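
      For reference, the format string can be sanity-checked in isolation (a standalone check, not from the original import); a NULL result would mean '%m-%d-%Y' does not match the raw CSV values:

      -- Should return DATE '2022-02-06' for the sample data above.
      SELECT STR_TO_DATE('02-06-2022', '%m-%d-%Y');

      Running SHOW WARNINGS; immediately after the LOAD DATA also reports per-row conversion problems; with a Windows-generated file and LINES TERMINATED BY '\n', a stray trailing \r polluting the last column is a common culprit.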

      posted in SQL
    • How to perform an inner join on two tables where I am able to retrieve all data?

      I need help with a MySQL query.

      • bar_opening_details holds closing & opening stock
      • bar_recvd_details holds received stock

      When I receive purchased stock, I insert it into bar_recvd_details. Next, I update close_val in bar_opening_details, which holds the closing stock values of the items.

      When I join these two tables together, the number of rows returned from bar_opening_details is equal to the number of rows returned from bar_recvd_details, i.e. the unmatched rows of bar_opening_details are no longer returned.

      The fiddle is available at https://www.db-fiddle.com/f/9SMHPJsPPvkw2WMaQiJAx2/0

      Below is my desired output.

      • I want to join the tables on item_id
      • I want all data from each table where the date column (close_date, resp. recvd_date) = '2019-06-18'
      • I also want the bar_opening_details rows even if recvd_value IS NULL

      bar_opening_details

      SELECT *  FROM bar_opening_details 
      
      item_cid item_id op_value close_val op_date close_date
      2 78 2 88 2019-06-18 2019-06-18
      2 79 28 103 2019-06-18 2019-06-18
      2 97 0 96 2019-06-18 2019-06-18
      6 456 30 48 2019-06-18 2019-06-18
      6 457 130 155 2019-06-18 2019-06-18
      6 484 1 46 2019-06-18 2019-06-18
      6 495 15 61 2019-06-18 2019-06-18
      6 523 1 12 2019-06-18 2019-06-18
      6 529 9 32 2019-06-18 2019-06-18
      6 530 54 98 2019-06-18 2019-06-18
      6 533 0 24 2019-06-18 2019-06-18
      6 630 35 77 2019-06-18 2019-06-18
      6 631 31 122 2019-06-18 2019-06-18
      6 635 5 47 2019-06-18 2019-06-18
      6 636 34 71 2019-06-18 2019-06-18

      bar_recvd_details

      SELECT *  FROM bar_recvd_details 
      
      item_cid item_id recvd_date recvd_value
      6 637 2019-06-19 96
      6 635 2019-06-19 48
      6 637 2019-06-18 96
      6 635 2019-06-18 48
      6 636 2019-06-18 48
      6 457 2019-06-18 48
      6 456 2019-06-18 24
      6 1432 2019-06-18 96
      6 484 2019-06-18 48
      6 523 2019-06-18 12
      6 533 2019-06-18 24
      4 79 2019-06-19 24
      2 78 2019-06-19 24
      6 529 2019-06-18 24

      Two tables Joined

             SELECT 
                  bar_opening_details.item_cid,
                  bar_opening_details.item_id,
                  bar_opening_details.op_value,
                  bar_opening_details.close_val,
                  bar_opening_details.close_date,
                  bar_recvd_details.recvd_value,
                  bar_recvd_details.recvd_date,
                  bar_recvd_details.item_id
              FROM
                  bar_opening_details
                      LEFT JOIN bar_recvd_details 
                      ON bar_recvd_details.item_id = bar_opening_details.item_id 
              where bar_opening_details.close_date = '2019-06-18' 
      
      item_cid item_id op_value close_val close_date recvd_value recvd_date item_id
      2 79 28 103 2019-06-18 24 2019-06-19 79
      2 78 2 88 2019-06-18 24 2019-06-19 78
      2 97 0 96 2019-06-18 (null) (null) (null)
      6 457 130 155 2019-06-18 48 2019-06-18 457
      6 456 30 48 2019-06-18 24 2019-06-18 456
      6 484 1 46 2019-06-18 48 2019-06-18 484
      6 523 1 12 2019-06-18 12 2019-06-18 523
      6 529 9 32 2019-06-18 24 2019-06-18 529
      6 495 15 61 2019-06-18 (null) (null) (null)
      6 533 0 24 2019-06-18 24 2019-06-18 533
      6 530 54 98 2019-06-18 (null) (null) (null)
      6 630 35 77 2019-06-18 (null) (null) (null)
      6 635 5 47 2019-06-18 48 2019-06-19 635
      6 635 5 47 2019-06-18 48 2019-06-18 635
      6 636 34 71 2019-06-18 48 2019-06-18 636
      6 631 31 122 2019-06-18 (null) (null) (null)

      Below is the output I would like

      item_cid item_id op_value close_val close_date recvd_value recvd_date item_id
      2 97 0 96 2019-06-18 (null) (null) (null)
      6 457 130 155 2019-06-18 48 2019-06-18 457
      6 456 30 48 2019-06-18 24 2019-06-18 456
      6 484 1 46 2019-06-18 48 2019-06-18 484
      6 523 1 12 2019-06-18 12 2019-06-18 523
      6 529 9 32 2019-06-18 24 2019-06-18 529
      6 495 15 61 2019-06-18 (null) (null) (null)
      6 533 0 24 2019-06-18 24 2019-06-18 533
      6 530 54 98 2019-06-18 (null) (null) (null)
      6 630 35 77 2019-06-18 (null) (null) (null)
      6 635 5 47 2019-06-18 48 2019-06-18 635
      6 636 34 71 2019-06-18 48 2019-06-18 636
      6 631 31 122 2019-06-18 (null) (null) (null)
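
      A sketch of one way to get that output (an assumption on my part, not a confirmed answer): move the date test on bar_recvd_details into the ON clause, so that non-matching rows survive the LEFT JOIN instead of being filtered out afterwards.

      SELECT  o.item_cid,
              o.item_id,
              o.op_value,
              o.close_val,
              o.close_date,
              r.recvd_value,
              r.recvd_date,
              r.item_id
      FROM    bar_opening_details o
      LEFT JOIN bar_recvd_details r
             ON  r.item_id    = o.item_id
             AND r.item_cid   = o.item_cid        -- also match item_cid, so item 79 (cid 4 vs 2) stays unmatched
             AND r.recvd_date = '2019-06-18'      -- outer-side filters belong in ON, not WHERE
      WHERE   o.close_date = '2019-06-18';

      Note this still returns items 78 and 79 with NULLs (which the prose "I want all data from each table" asks for), even though the sample output omits them.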
      posted in SQL
    • RE: How to account for accumulating values which "start over" from zero for the same day?

      The task is not solvable in general with a scheduled task. For example, suppose you have 3 adjacent rows like ('01:00', 123456789), ('01:10', 123456), ('01:11', 234567). You cannot guarantee that a second reboot did not occur between the 2nd and 3rd rows.

      You must use a permanently running task (a service) which saves both the current counters and some sufficiently unique UID (for example, the task's starting timestamp). You may say that such a UID could be stored in the registry by another task... maybe. But you must guarantee that it is saved before the very first scheduled execution, and you must have a static base timestamp: when you have uptime only, you cannot determine what the base is, so you must store a created_at timestamp too.

      If you can retrieve both the current system uptime and the current network statistics, then save them together. This allows you to define the groups of rows within which you must select the one row with the maximum traffic amounts. But such a scheme needs to compute the base timestamp for each row while retrieving the statistics, so it is more reasonable to calculate and store the system starting timestamp rather than the system uptime.

      Moreover, I'd recommend comparing the calculated timestamp with the previous row's timestamp and, if they are close enough, using the value from the previous row.

      Check this solution:

      WITH dates AS (
          SELECT DISTINCT "collected at"::DATE "date"
          FROM "network utilization"
      ),
      raw AS (
          SELECT t1."date",
                 t2."system starting timestamp" sst,
                 MIN("number of byte sent since startup") min_snt,
                 MAX("number of byte sent since startup") max_snt,
                 MIN("number of byte received since startup") min_rcv,
                 MAX("number of byte received since startup") max_rcv
          FROM dates t1
          JOIN "network utilization" t2 ON t1."date" = t2."collected at"::DATE
          GROUP BY 1,2
      ),
      per_session AS (
          SELECT "date",
                 sst,
                 max_snt - COALESCE(LAG(max_snt) OVER (PARTITION BY sst ORDER BY "date"), 0) sent,
                 max_rcv - COALESCE(LAG(max_rcv) OVER (PARTITION BY sst ORDER BY "date"), 0) received
          FROM raw
      )
      SELECT "date",
             SUM(sent) sent,
             SUM(received) received
      FROM per_session
      GROUP BY "date"
      ORDER BY 1;
      

      db<>fiddle with some comments/explanations: https://dbfiddle.uk/?rdbms=postgres_12&fiddle=0453fe758a160ccbc617cce65830f40a

      posted in SQL
    • RE: To inquire about the query complexity of the following query in SQL

      taking the AND of rows matching the filters for two columns

      Can it be done in time sublinear in the number of rows?

      By sublinear I take it you mean faster than O(n) time. The answer is yes, but under one condition: you must have a covering index for your query ( https://blog.toadworld.com/2017/04/06/speed-up-your-queries-using-the-covering-index-in-mysql ). Because an index is normally stored in a B-tree data structure, the data is already sorted in such a way that seeking against that index for specific values takes on the order of O(log(n)) time, which is much faster than linear O(n), since not all of the rows need to be searched.

      For the predicate where Country = "India" AND Size = "Large" that is easy to accomplish. Your index definition would essentially just need to include both columns (Country, Size).

      But things get a little more complicated when your query also uses SELECT * (which is an anti-pattern). If there are other columns besides Country and Size in the table, now the previously mentioned index definition is no longer covering. I'm not an expert on all of the operations MySQL does under the hood to serve a query, but there is a possibility you'll end up with an index scan now which would then be back to O(n) linear search time.

      If your table had a third column (and only 3 columns total for this example) called Column3, and you only needed to return the two columns Country and Size, then you could do two things to fix the aforementioned anti-pattern that will fix the performance to be an index seek again:

      1. Change the SELECT * part of your query to SELECT Country, Size.
      2. Change the index definition to also include Column3 on the end so that it becomes a covering index again: (Country, Size, Column3)

      Either of those options should fix the issue and make your query able to run in O(log(n)) time again.

      Now of course if your table has many more columns in it, then the second option becomes less and less viable (because indexes require additional storage space and affect write performance dependent on the amount of data needed to be written to them), which goes back to the reasoning SELECT * is an anti-pattern and you should only select the columns you need to actually return.
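
      A concrete sketch of both options (hypothetical table name t, matching the three-column example above):

      -- Option 1: stop using SELECT * and index the two filtered columns.
      CREATE INDEX idx_country_size ON t (Country, Size);

      EXPLAIN SELECT Country, Size
      FROM    t
      WHERE   Country = 'India' AND Size = 'Large';
      -- should report the index with "Using index" (a covering index seek)

      -- Option 2: keep SELECT * but widen the index so it still covers all columns.
      CREATE INDEX idx_country_size_c3 ON t (Country, Size, Column3);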

      posted in SQL
    • Is there a way to install PostgreSQL using apt-get without it running initdb automatically?

      I am trying to set up replication for PostgreSQL on Ubuntu 20.04, and for the standby node I just need to install the PostgreSQL server and not run initdb. Cluster creation on the standby will be taken care of when I set up replication from the master.

      But whenever I run

      sudo apt-get -y install postgresql
      

      initdb is called and a cluster is created. On other distributions such as RHEL, it is possible to install the postgresql server packages without running initdb ( https://www.postgresql.org/download/linux/redhat/ ).

      Is there any way to get around this, or is this just how installation works on Ubuntu when installing using apt-get?

      posted in SQL
    • RE: Log only drop queries

      You can install the MariaDB Audit Plugin ( https://mariadb.com/kb/en/mariadb-audit-plugin-installation/ ), configure it ( https://mariadb.com/kb/en/mariadb-audit-plugin-log-settings/ ) to log only DDL statements (CREATE, ALTER, DROP, RENAME and TRUNCATE), and enable it, by running:

      INSTALL SONAME 'server_audit';
      SET GLOBAL server_audit_events = 'QUERY_DDL';
      SET GLOBAL server_audit_logging = ON;
      

      Alternatively, this can all be done in the appropriate option file, although that requires a restart to take effect:

      [mysqld]
      plugin_load_add = server_audit
      server_audit_events = QUERY_DDL
      server_audit_logging = ON
      

      While that's not logging exclusively DROP TABLE statements, it should be a smaller log file than what you get with the general log.

      The logs produced will have the format described on https://mariadb.com/kb/en/mariadb-audit-plugin-log-format/ , e.g.:

      [timestamp],[serverhost],[username],[host],[connectionid],[queryid],DROP,[database],[object],
      
      posted in SQL
    • RE: Regexp Oracle SQL

      regexp_like might help.

      SQL> with test (col) as
        2    (select '123456789gm9876' from dual union all
        3     select '372737373ddd123' from dual
        4    )
        5  select
        6    col,
        7    case when regexp_like(col, '^[[:digit:]]{9}[[:alpha:]]{2}[[:digit:]]{4}$') then 'good'
        8         else 'bad'
        9    end result
       10  from test;
      

      COL             RESULT
      --------------- ------
      123456789gm9876 good
      372737373ddd123 bad

      SQL>

      posted in SQL
    • RE: Optimize Mysql & My.cnf to 32g Ram & 8 Core CPU?

      No to these:

      join_buffer_size = 1G
      max_heap_table_size = 1G
      #max_allowed_packet = 1600M
      tmp_table_size = 1G
      

      Don't set any of them to more than 1% of RAM. Otherwise, you threaten to consume so much RAM as to cause swapping. Swapping hurts performance a lot. Anyway, big values for those are mostly in the realm of "diminishing returns".

      The most important setting is fine as you have it:

      innodb_buffer_pool_size = 24G
      

      You will probably never use all 8 cores unless you have some really inefficient queries. And they can probably be fixed with a better INDEX or query reformulation.

      posted in SQL
    • RE: REINDEX takes significantly more time on production system than on local snapshot

      we have never deleted or updated rows in the table

      Then there is typically no reason to run VACUUM FULL at all. Maybe VACUUM (without FULL) is useful. But, typically, that's covered by autovacuum: https://www.postgresql.org/docs/current/routine-vacuuming.html

      REINDEX may be useful. Consider the scenarios listed in the manual ( https://www.postgresql.org/docs/current/sql-reindex.html#id-1.9.3.162.5 and https://www.postgresql.org/docs/current/routine-reindex.html ). If you create the index after filling the table, REINDEX is certainly wasted.

      Not exactly sure where the observed difference comes from. (A hosted service is always a bit of a blackbox.)

      Either way, VACUUM FULL rebuilds table and indexes from scratch. An additional REINDEX adds nothing useful over just VACUUM FULL. See:

      • https://stackoverflow.com/a/31012925/939860
      posted in SQL
    • Help with Database Design for a CRM

      I am creating a CRM for a call center and I want to store the input field data in a database. My problem is that I am quite new to this ordeal and have little to no idea how to structure my database.

      Should I create multiple tables for each part of the form and submit the data there or should everything be submitted to one large table?

      Note 1: the form has around 80 fields with the data I need. Note 2: I also have another form with approximately 50 fields, and I plan to add another form which will be much smaller, with approx. 15 fields.

      I appreciate any help and thank you for your time in advance!
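
      Purely as an illustrative sketch of the "multiple tables" direction (hypothetical names; the right split depends on which groups of fields describe the same real-world thing):

      -- One row per customer; each repeating or independently-updated
      -- group of form fields moves to its own table keyed by customer_id.
      CREATE TABLE customer (
          customer_id INT PRIMARY KEY AUTO_INCREMENT,
          first_name  VARCHAR(100) NOT NULL,
          last_name   VARCHAR(100) NOT NULL,
          email       VARCHAR(255)
      );

      CREATE TABLE call_record (
          call_id     INT PRIMARY KEY AUTO_INCREMENT,
          customer_id INT NOT NULL,
          called_at   DATETIME NOT NULL,
          outcome     VARCHAR(50),
          FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
      );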

      posted in SQL
    • RE: InnoDB: Cannot close file ./tablefile.ibd because of 1 pending operations

      The signal 11 (SIGSEGV) looks like a bug. Can you please file it in the MariaDB bug tracker, against the 10.5.13 release, with as many details as possible? If possible, try to produce a full stack trace ( https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ ) by attaching a debugger to the server before it crashes, or by enabling core dumps and invoking the debugger on the core dump.

      We have not seen any crash like that in our internal testing, but then again, we typically run stress tests with small amounts of data, and on RAM disk.

      The scenario resembles that of https://jira.mariadb.org/browse/MDEV-25215 and that originally reported bug (excessive logging of "Cannot close", several times per second) was fixed.

      The scenario is that some files need to be closed so that the maximum number of open files is not exceeded. We are not allowed to close a file handle on which writes or fdatasync() are pending. I think that, at the same time, a log checkpoint must be in progress. A larger innodb_log_file_size should make log checkpoints less frequent.

      posted in SQL