    SOFTWARE TESTING

    Posts made by terrea

    • RE: Why are records from my data-modifying CTE not appearing in the subsequent SELECT from the insertion table?

      https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING

      The sub-statements in WITH are executed concurrently with each other and with the main query. Therefore, when using data-modifying statements in WITH, the order in which the specified updates actually happen is unpredictable. All the statements are executed with the same snapshot (see Chapter 13), so they cannot “see” one another's effects on the target tables. (... ) and means that RETURNING data is the only way to communicate changes between different WITH sub-statements and the main query.

      (emphasis mine)

      So you need a RETURNING clause in your merge_op CTE to see the inserted rows.

      -- merge query
      WITH incoming (name, value) AS (
        VALUES 
          ('asdf','null'::JSONB), 
          ('asdf', '18446744073709551615'::JSONB), 
          ('asdaf', '3'::JSONB)
      )
      , merge_op AS (
          INSERT INTO target_table(name, value)
          SELECT i.name, i.value
          FROM incoming i
          LEFT JOIN target_table er
              ON er.name = i.name
              AND er.value = i.value
          WHERE er.name IS NULL
          ON CONFLICT DO NOTHING
          RETURNING *
      )
      SELECT er.id, er.name, er.value
      FROM target_table er
        JOIN incoming i ON i.name = er.name
      AND i.value = er.value;
      

      If you also want to see the rows from incoming that were not inserted (e.g. due to a conflict) you need an outer join:

      SELECT er.id, i.name, i.value
      FROM incoming i 
        LEFT JOIN target_table er ON i.name = er.name
      AND i.value = er.value;
      
      posted in SQL
      T
      terrea
    • RE: Mysql query that is very quick (but sometimes very slow) versus an alternative version of the same query that is consistently a *bit* slow

      I (seem to have) fixed this by adding a condition that checks the creation date of the account.

      and created_at >= (select created_at from customers where username = '__username__' )
      

      (username is string-replaced in PHP with the actual username. I did try username = c.username first, but that didn't achieve the performance improvement, so I resorted to the PHP string replace. Note: this is not user-supplied input, so there is no SQL injection risk.)

      select created_at from transactions where 
      created_at < date_add('2022-02-27 06:00:00',interval -60 day)
      

      /* Added this line... */
      and created_at >= (select created_at from customers where username = 'username' )

      and customer_username = 'username' and status = 'SUCCESS'
      and category in('TICKET','DEPOSIT') and subcategory = 'RESPONSE'
      order by created_at desc limit 1

      It was performing very slowly for accounts that hadn't actually been created in the date range being queried, so the very slow query was coming back NULL.

      posted in SQL
      T
      terrea
    • RE: What is the impact on primary (in terms of locking) and restore process when secondary database is being read by large query?

      Will this stop the query or delay the log shipping restore? Clearly this will have no impact on the primary db.

      You mean the transaction log backup is triggered; that is the correct term to use. No, this will have no impact on transactions running on the primary DB. Transaction log backups do not hamper already-running transactions. But in a few cases, assuming you have a big transaction log and your storage is slow (basically something outside SQL Server), this can hamper transactions on the primary DB.

      AG - when a large query is running and meanwhile the mirroring process sends data that needs to be written into the db

      AG is not mirroring but an advanced form of it.

      Or maybe the mode is async so the primary is waiting for the commit on secondary.

      In async mode the primary does not wait for the commit on the secondary.

      I am trying to understand what is the impact on the primary database (in terms of locking) and on the restore process when the secondary db is being read by a large query?

      Well, the question is a bit vague, but the restore on the secondary in the case of log shipping is not impacted by queries or anything else on the primary. For an AG, a large read query on the secondary can be affected by queries running on the primary, but that is a whole different scenario.

      posted in SQL
      T
      terrea
    • Problem seeding Heroku database

      I'm trying to seed my Heroku production database and got the following error message:

      ERROR: insert or update on table "Posts" violates foreign key constraint "Posts_userId_fkey"

      The post migration file:

      module.exports = {
        up: (queryInterface, Sequelize) => {
          return queryInterface.createTable('Posts', {
            id: {
              allowNull: false,
              autoIncrement: true,
              primaryKey: true,
              type: Sequelize.INTEGER
            },
            userId: {
              type: Sequelize.INTEGER,
              allowNull: false,
              references: {
                model: 'Users'
              }
            },
            content: {
              type: Sequelize.TEXT,
              allowNull: false
            },
            title: {
              type: Sequelize.STRING(50),
              allowNull: false
            },
            photo: {
              type: Sequelize.TEXT,
              allowNull: false
            },
            categoryId: {
              type: Sequelize.INTEGER,
              allowNull: false,
              references: {
                model: 'Categories'
              }
            },
            createdAt: {
              allowNull: false,
              type: Sequelize.DATE,
              defaultValue: Sequelize.fn('now')
            },
            updatedAt: {
              allowNull: false,
              type: Sequelize.DATE,
              defaultValue: Sequelize.fn('now')
            }
          });
        },
        down: (queryInterface, Sequelize) => {
          return queryInterface.dropTable('Posts');
        }
      };
      

      user model file:

       User.associate = function(models) {
          // associations can be defined here
          User.hasMany(models.Post, { foreignKey: 'userId' })
        };
      

      my config file:

      production: {
          use_env_variable: 'DATABASE_URL',
          dialect: 'postgres',
          seederStorage: 'sequelize',
          dialectOptions: {
            ssl: {
              require: true,
              rejectUnauthorized: false
            }
          }
        }
      

      my post seeders:

      return queryInterface.bulkInsert(
            "Posts",
            [
              {
                userId: 4,
                content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
                title: 'First blog',
                photo: 'https://i.pinimg.com/originals/e8/67/8c/e8678c5f0b32c1a4bffae9e9c9ca41a5.jpg',
                categoryId: 1
              },
              {
                userId: 4,
                content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
                title: 'Cat Ipsum',
                photo: 'https://i.pinimg.com/564x/52/d7/49/52d749ce51a1341d30ac00eb2ef712a9.jpg',
                categoryId: 2
              },
              {
                userId: 4,
                content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
                title: 'Cheesecake Party',
                photo: 'https://media.istockphoto.com/photos/cheesecake-slice-with-strawberries-picture-id1205169550?k=20&m=1205169550&s=612x612&w=0&h=QqJDIpCEpGEXBFU2c-aoZKEgtU5tfFGxKxrBu1bHYww=',
                categoryId: 2
              },
              {
                userId: 5,
                content: "Cream cheese halloumi camembert de normandie. Queso emmental melted cheese cream cheese cheese triangles the big cheese emmental blue castello. When the cheese comes out everybody's happy gouda queso fromage camembert de normandie stinking bishop rubber cheese rubber cheese. Edam cheese triangles pecorino babybel stilton. Parmesan hard cheese smelly cheese. Cheese triangles fondue macaroni cheese port-salut taleggio chalk and cheese brie cheesecake. Bavarian bergkase emmental taleggio dolcelatte fondue roquefort cheeseburger cheese slices. Fromage frais.Cheesecake cheese and wine fromage frais. Roquefort cheese triangles fromage frais stilton paneer mascarpone chalk and cheese dolcelatte. Dolcelatte who moved my cheese croque monsieur manchego taleggio cheese on toast hard cheese bocconcini. Cow everyone loves monterey jack cheesy grin smelly cheese cauliflower cheese bocconcini the big cheese. Hard cheese hard cheese.Cheese slices smelly cheese cheese on toast. Cheese strings chalk and cheese camembert de normandie cheese and biscuits red leicester cow brie cut the cheese. Cheese on toast melted cheese stilton pecorino brie st. agur blue cheese manchego cheese strings. Boursin who moved my cheese stilton paneer cheese triangles lancashire cow who moved my cheese. Who moved my cheese cauliflower cheese mascarpone say cheese.",
                title: 'Tell Me A Joke',
                photo: 'https://st2.depositphotos.com/1010146/9831/i/950/depositphotos_98319240-stock-photo-laughing-man-closeup.jpg',
                categoryId: 1
              },
            ],
            {}
          );
        },
      

      Why is it producing that message?

      posted in SQL
      terrea
    • UPDATE statement on UPSERT not incrementing value

      I have the following QUERY:

      INSERT INTO ReservationCounter (student_id, reservation_count) VALUES (1, 1)
      ON CONFLICT (student_id) DO
      UPDATE SET reservation_count=excluded.reservation_count+1;
      

      When there is not a conflict it INSERTS successfully.

      When there is a conflict, however, the UPDATE statement doesn't actually update the value with the increment.

      When I run it on conflict the first time it returns 2 as expected, but when I run it again I expect it to increment 2 to 3 (because of reservation_count=excluded.reservation_count+1). It does not do that; it returns 2 again, leading me to believe that excluded.reservation_count is always 1.

      What am I doing wrong?
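      The behaviour can be reproduced outside PostgreSQL, since SQLite (3.24+) uses the same ON CONFLICT ... DO UPDATE / excluded syntax. A minimal sketch (the two-column table layout is assumed from the statement above): excluded carries the row proposed for insertion, whose reservation_count is always the literal 1 from VALUES, so the counter sticks at 2.

```python
import sqlite3

# Minimal reproduction; SQLite (3.24+) shares PostgreSQL's upsert syntax.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ReservationCounter (student_id INTEGER UNIQUE, reservation_count INTEGER)"
)

# "excluded" is the row proposed for insertion, so excluded.reservation_count
# is always the literal 1 from VALUES -- the counter can never pass 2.
upsert = """
    INSERT INTO ReservationCounter (student_id, reservation_count) VALUES (1, 1)
    ON CONFLICT (student_id) DO
    UPDATE SET reservation_count = excluded.reservation_count + 1
"""
for _ in range(3):
    conn.execute(upsert)
print(conn.execute("SELECT reservation_count FROM ReservationCounter").fetchone()[0])
```

      Writing UPDATE SET reservation_count = reservation_count + 1 instead (the stored value, not the proposed one) makes each conflicting run increment the counter.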

      posted in SQL
      terrea
    • RE: Getting how many days users have logged in during a month (same day counts as 1 log in)

      I think the expected results are not correct, because user 1 logged in twice in month 2, not once: 2022-02-22 00:00:00 and 2022-02-22 15:00:00 count as one log-in day, and 2022-02-23 00:00:00 is the second.

      Maybe this would help:

      select t1.Year, month, t1.user as times, group_concat(t1.id_user) as users_logged_in
      from ( select month(`timestamp`) as month,
                    count(distinct id_user, date(`timestamp`)) as user,
                    id_user,
                    year(`timestamp`) as Year
             from test_tbl
             group by month(`timestamp`), id_user, year(`timestamp`)
           ) as t1
      group by t1.Year, month, times
      order by month asc, times asc;
      

      Results

      Year month times users_logged_in
      2022 2 1 2
      2022 2 2 1
      2022 2 3 3
      2022 3 1 1,2

      https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=6bf296a6002517c9482746aee754ac57
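      The key step, counting distinct calendar days per user and month, can be sanity-checked with a quick sketch in SQLite (table and column names are assumptions mirroring the fiddle; COUNT(DISTINCT date(ts)) stands in for MySQL's two-argument COUNT(DISTINCT ...)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_tbl (id_user INTEGER, ts TEXT)")
conn.executemany("INSERT INTO test_tbl VALUES (?, ?)", [
    (1, "2022-02-22 00:00:00"),
    (1, "2022-02-22 15:00:00"),  # same calendar day: counted once
    (1, "2022-02-23 00:00:00"),
])
# One row per user and month, counting distinct log-in days.
rows = conn.execute("""
    SELECT id_user, strftime('%Y', ts) AS yr, strftime('%m', ts) AS mon,
           COUNT(DISTINCT date(ts)) AS days_logged_in
    FROM test_tbl
    GROUP BY id_user, yr, mon
""").fetchall()
print(rows)
```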

      posted in SQL
      terrea
    • Are the limits of varchar(8000) and nvarchar(4000) because of the sql server page size (8KB=8192 Bytes)?

      I know that varchar(max) and nvarchar(max) allow up to 2 GB of data to be held, but that is not relevant to this question.

      A page is 8KB = 8192 Bytes

      In varchar, 1 character is 1 byte, whereas in nvarchar, 1 character is 2 bytes.

      varchar is allowed to have 8000 characters (8000 bytes), whereas nvarchar is allowed to have 4000 characters (8000 bytes).

      Is this because the page size is 8192 Bytes?

      posted in SQL
      terrea
    • GROUP_CONCAT Extremely slow

      I'm trying to use group_concat and join tables using this query. However, it is extremely slow, to the extent that it is not possible to use it in production.

      SELECT
          a.id, 
          a.`name`, 
          GROUP_CONCAT(c.name ORDER BY c.name) AS occasion, 
          GROUP_CONCAT(DISTINCT (x.name) ORDER BY x.name) AS katalog, 
          GROUP_CONCAT(DISTINCT(z.city_name) ORDER BY z.city_name) AS city_name, 
          a.featured, 
          a.price, 
          a.saleprice, 
          a.view, 
          a.favorite, 
          a.favnumber, 
          a.bestseller, 
          a.stock, 
          a.product_created, 
          a.product_update
      FROM product AS a
          LEFT JOIN product_occasion AS b ON a.id = b.product_id
          LEFT JOIN occasion AS c ON b.occasion_id = c.id
          LEFT JOIN product_catalog AS k ON a.id = k.product_id
          LEFT JOIN catalog AS x ON k.catalog_id = x.id
          LEFT JOIN product_city AS y ON a.id = y.product_id
          LEFT JOIN city AS z ON y.city_id = z.id
      GROUP BY a.id,
          a.name
      

      I'd appreciate assistance with improving the query.

      Thanks

      posted in SQL
      terrea
    • What happens during auto vacuum / vacuum in PostgreSQL?

      The official documentation seemed quite cryptic to me: https://www.postgresql.org/docs/13/routine-vacuuming.html#AUTOVACUUM

      Can someone offer an explanation of what happens under the hood during autovacuum or a manually triggered vacuum?

      posted in SQL
      terrea
    • How can I mimic full outer join in mysql with multiple columns

      I have two tables: A and B.

      Table A has the following set-up:

      ID date location sales
      1 2022-01-01 1 10000
      2 2022-01-02 1 10000
      3 2022-01-04 1 10000
      ... .... 2 ....

      So there is no data for location 1 on 2022-01-03.

      Table B has the following set-up:

      ID date location budget
      1 2022-01-01 1 10000
      2 2022-01-03 1 10000
      3 2022-01-04 1 10000
      ... .... 2 ....

      So there is no record for location 1 for the date 2022-01-02.

      I am trying to join the tables together to get the following output

      location sales budget
      1 30000 30000
      2 ... ...

      So I can group it on location and get | location ABC | sales 123 | budget 123 |, i.e. a sum over all the dates in the group, with the two tables joined on date and location.

      The query I currently have now is as follows:

      SELECT SUM(A.sales) AS sales, A.location
      FROM A
      LEFT OUTER JOIN B ON A.date = B.date AND A.location = B.location
      WHERE A.date between ? AND ?
      GROUP BY A.location
      UNION
      SELECT SUM(B.budget) AS budget, B.location
      FROM A
      RIGHT OUTER JOIN B ON A.date = B.date AND A.location = B.location
      WHERE B.date between ? AND ?
      GROUP BY B.location
      

      I've tried different types of joins and unions and ended up with the query above, as suggested in https://stackoverflow.com/questions/4796872/how-can-i-do-a-full-outer-join-in-mysql , to mimic a full outer join. However, with this query I get the following output:

      location column
      1 30000
      2 ...
      3 ...
      1 30000
      2 ...
      3 ...

      These sums are correct, but are not in 2 separate columns 'sales' and 'budget'.

      Is there a way to achieve this?
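      One way to get sales and budget as two separate columns is to aggregate each table by location first and only then join the two one-row-per-location results. A sketch in SQLite with made-up data matching the tables above (a LEFT JOIN suffices while every location has sales; a location present only in B would still need the full-outer-join emulation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (id INTEGER, date TEXT, location INTEGER, sales INTEGER);
CREATE TABLE B (id INTEGER, date TEXT, location INTEGER, budget INTEGER);
INSERT INTO A VALUES (1,'2022-01-01',1,10000),(2,'2022-01-02',1,10000),(3,'2022-01-04',1,10000);
INSERT INTO B VALUES (1,'2022-01-01',1,10000),(2,'2022-01-03',1,10000),(3,'2022-01-04',1,10000);
""")
# Aggregate each table down to one row per location, then join the aggregates,
# so sales and budget land in separate columns instead of a UNION'ed single column.
rows = conn.execute("""
    SELECT s.location, s.sales, b.budget
    FROM (SELECT location, SUM(sales)  AS sales  FROM A GROUP BY location) s
    LEFT JOIN (SELECT location, SUM(budget) AS budget FROM B GROUP BY location) b
      ON b.location = s.location
""").fetchall()
print(rows)
```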

      posted in SQL
      terrea
    • RE: How to easily convert a comma seperated list of ids into a table of rows

      You can use string_to_array() together with unnest()

      select * 
      from unnest(string_to_array('2201322723,2201318904,2201299380,2164969641', ',')) as t(object_id)
      

      Or if you are using Postgres 14 or later this can be simplified to

      select * 
      from string_to_table('2201322723,2201318904,2201299380,2164969641', ',') as t(object_id)
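      For comparison, the same comma-separated-list-to-rows step in application code is a plain split (illustrative only; the SQL above keeps the work in the database):

```python
ids = "2201322723,2201318904,2201299380,2164969641"
# One single-column tuple per id, mirroring the rows unnest()/string_to_table() produce.
rows = [(object_id,) for object_id in ids.split(",")]
print(rows)
```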
      
      posted in SQL
      terrea
    • RE: Table names when migrating MySQL to Microsoft SQL Server

      This is because the two database engines are designed this way:

      • Microsoft SQL Server: There is a SQL Server Instance --> which contains multiple databases --> each database contains schemas
    • MySQL: There is a MySQL server --> which contains multiple schemas

      In MySQL the schemas are the databases.

      Apparently, as suggested by Brendan McCaffrey (https://dba.stackexchange.com/users/240213/brendan-mccaffrey), there might be a way to map MySQL databases to SQL Server schemas (https://docs.microsoft.com/en-us/sql/ssma/mysql/mapping-mysql-databases-to-sql-server-schemas-mysqltosql?view=sql-server-ver15):

      1. In MySQL Metadata Explorer, select Schemas. The Schema Mapping tab is also available when you select individual schemas. The list in the Schema Mapping tab is customized for the selected object.
      2. In the right pane, click the Schema Mapping tab. You will see a list of all MySQL schemas, followed by a target value. This target is denoted in a two part notation (database.schema) in SQL Server or SQL Azure where your objects and data will be migrated.
      3. Select the row that contains the mapping that you want to change, and then click Modify. In the Choose Target Schema dialog box, you may browse for available target database and schema or type the database and schema name in the textbox in a two part notation (database.schema) and then click OK.
      4. The target changes on the Schema Mapping tab.
      posted in SQL
      terrea
    • Stuck at inserting records into MariaDB conditionally

      I know about insert ignore. That's of no use, because MariaDB does not consider NULL to be part of the unique index.

      Therefore I have to use if ... else.

      But this code does not run:

      select *
      from Nodes 
      where `Key` = 'FAQPage';
      

      if not found_row() then
      insert into Nodes (Key, ParentId)
      values ('FAQPage', null)
      end if;

      How can I use if ... else to conditionally insert data in MariaDB?
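      A set-based alternative needs no IF at all: INSERT ... SELECT with a NOT EXISTS guard inserts the row only when it is missing. Sketched here in SQLite (MariaDB accepts the same shape with backticks around Key; older MySQL versions may want FROM DUAL after the SELECT list):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE Nodes ("Key" TEXT, ParentId INTEGER)')

# Insert 'FAQPage' only if no such row exists yet; safe to re-run.
stmt = """
    INSERT INTO Nodes ("Key", ParentId)
    SELECT 'FAQPage', NULL
    WHERE NOT EXISTS (SELECT 1 FROM Nodes WHERE "Key" = 'FAQPage')
"""
conn.execute(stmt)
conn.execute(stmt)  # second run is a no-op
print(conn.execute("SELECT COUNT(*) FROM Nodes").fetchone()[0])
```

      Note that NOT EXISTS alone is not safe under concurrent writers; a unique index over a NULL-free expression of the key remains the robust guard.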

      posted in SQL
      terrea
    • RE: Is it possible to add a column to a partition in a partitioned table?

      To answer the question of "how do I update the records in batches", with a brief comment:

      DECLARE @i INT = 1;
      /*here you say how many records would you like to update at once*/
      DECLARE @batchsize INT = 100000;
      WHILE EXISTS(SELECT TOP(1) * FROM [table] WHERE col_X IS NOT NULL)
      BEGIN
          print @i
          BEGIN TRAN
              UPDATE TOP(@batchsize) [table] SET col_X = NULL WHERE col_X IS NOT NULL;
          COMMIT
          SET @i = @i+1
      END
      

      This is meant to demonstrate the concept rather than give you a solution tailored to your needs, because:

      • no error handling is in the code
      • as J.D. said, we'll need to get deeper insight to your data and schema
      • the fewer rows meet the condition of not being NULL, the slower the update will be (it will be more difficult to find them)

      Another thing that could be relevant is to check the indexes on the table: in how many indexes is the updated column used? Not only the table but each index has to be updated too, and that happens single-threaded; it influences your performance a lot.

      Regarding the rename: it is a metadata-only operation (but for the whole table), and you could use the following snippet:

      ALTER TABLE [table] ADD colX_new int;
      EXEC sp_rename '[table].colX' , 'colX_old';
      EXEC sp_rename '[table].colX_new' , 'colX';
      

      I don't think you can add a column to an individual partition only.

      posted in SQL
      terrea
    • RE: Error while declaring procedure

      You need a semi-colon after the query. I.e.:

      DELIMITER $$
      

      CREATE PROCEDURE get_null_flags (col_name VARCHAR(40))
      BEGIN
        (select
           sum(case
                 when ',[' + col_name + ']' IS NULL then 1
                 else 0 END) as 'null_flag'
         from city);

      END $$

      DELIMITER ;

      By the way, you don't need to put the query inside parentheses.

      posted in SQL
      terrea
    • Chat schema for PostgreSQL and MongoDB

      I'm designing a schema for a very large chat app. I had problems with NoSQL databases such as MongoDB and Cassandra. Since some parts of the schema are relational, I decided to use both PostgreSQL and MongoDB together.

      Chats and members are stored in PostgreSQL and messages are stored in MongoDB.
      There is a table called events in both databases: I just keep the chat_ids and creation times of the events in PostgreSQL and the data of the events in MongoDB.
      These events include remove_message, block_user, etc.

      Note that longtext means JSON.

      Questions

      What problems do I face if I use both together?
      Is there anything I should change?

      Updated

      Message schema

      {
        _id: ObjectId,
        user_id: Number,
        chat_id: Number,
        random_id: String,
        peer: {
          type: [Number],
          default: []
        },
        message: String,
        type: {
          type: String,
          enum: ["service-message", "text", "image", "video", "voice"]
        },
        reply_to: {
          message_id: ObjectId,
          user_id: Number
        },
        forwarded_from: {
          message_id: ObjectId,
          user_id: Number,
          chat_id: Number,
        },
        deleted_for: {
          type: [Number],
          default: []
        },
        author_signature: String,
        restriction_reason: String,
        number_of_forwards: {
          type: Number,
          default: 0
        },
        is_read: {
          type: Boolean,
          default: false,
        },
        is_deleted: {
          type: Boolean,
          default: false,
        },
        is_private: {
          type: Boolean,
          required: true
        },
        is_edited: {
          type: Boolean,
          default: false,
        },
        created_at: {
          type: Number,
          default: Date.now
        },
        updated_at: {
          type: Number,
          default: Date.now
        }
      }
      
      posted in SQL
      terrea
    • RE: Is there any way to let Postgres know it can filter results and then calculate aggregations?

      Be explicit about what you want and put your filter into a subquery in FROM:

      SELECT ...
      FROM (SELECT ...
            FROM a
            WHERE /* your filter */) AS a_filtered
         JOIN b ON ...
      GROUP BY ...
      
      posted in SQL
      terrea
    • RE: Retrieve gaps in dates in SQL Server

      Here is one straightforward way. First you generate all dates between min and max in MYTABLE:

      with min_max(min_dt, max_dt) as ( 
          select min([DATE]), max([DATE]) from MYTABLE
      ), all_dt(d) as (
          select min_dt from min_max
          union all 
          select DATEADD(month, 1, d) from all_dt
          where d < (select max_dt from min_max)
      )
      select y.d 
      from all_dt y
      

      Now you can take the cartesian product between that and the set of users in MYTABLE

      with min_max(min_dt, max_dt) as ( 
          select min([DATE]), max([DATE]) from MYTABLE
      ), all_dt(d) as (
          select min_dt from min_max
          union all 
          select DATEADD(month, 1, d) from all_dt
          where d < (select max_dt from min_max)
      )
      select x.id, y.d 
      from all_dt y, (select distinct [ID] as id from MYTABLE) as x
      

      I used the "," join since I was not sure whether SQL Server supports explicit CROSS JOIN. Finally you can subtract all existing rows from that set:

      with min_max(min_dt, max_dt) as ( 
          select min([DATE]), max([DATE]) from MYTABLE
      ), all_dt(d) as (
          select min_dt from min_max
          union all 
          select DATEADD(month, 1, d) from all_dt
          where d < (select max_dt from min_max)
      )
      select x.id, y.d 
      from all_dt y, (select distinct [ID] as id from MYTABLE) as x
      except
      select [ID], [DATE] from MYTABLE
      

      I would advise you to avoid identifiers like DATE and ID, but I assume this is just an example.

      EDIT: CROSS JOIN apparently works according to my fiddle, so you can rephrase that as:

      ...
      SELECT x.id, y.d 
      FROM all_dt y 
      CROSS JOIN (SELECT distinct [ID] AS id FROM MYTABLE) AS x
      EXCEPT
      SELECT [ID], [DATE] FROM MYTABLE
      

      https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=03c144940a57fe33a9dafd44666e74d5
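      The three steps translate almost verbatim to other engines. An end-to-end sketch in SQLite (WITH RECURSIVE spelled out, date(d, '+1 month') in place of DATEADD; the ID/DT names and sample rows are made up for the sketch, following the advice to avoid DATE as an identifier):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MYTABLE (ID INTEGER, DT TEXT)")
conn.executemany("INSERT INTO MYTABLE VALUES (?, ?)", [
    (1, "2022-01-01"), (1, "2022-03-01"), (2, "2022-01-01"),
])
# 1) generate every month between min and max, 2) cross join with the users,
# 3) subtract the rows that already exist -- what remains are the gaps.
rows = conn.execute("""
WITH RECURSIVE min_max(min_dt, max_dt) AS (
    SELECT MIN(DT), MAX(DT) FROM MYTABLE
), all_dt(d) AS (
    SELECT min_dt FROM min_max
    UNION ALL
    SELECT date(d, '+1 month') FROM all_dt
    WHERE d < (SELECT max_dt FROM min_max)
)
SELECT x.id, y.d
FROM all_dt y
CROSS JOIN (SELECT DISTINCT ID AS id FROM MYTABLE) AS x
EXCEPT
SELECT ID, DT FROM MYTABLE
ORDER BY id, d
""").fetchall()
print(rows)
```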

      posted in SQL
      terrea
    • I want to select many columns from many tables without foreign key

      I have three tables, and I want to fetch data from all three without a foreign key.

      Say I have a table product_1 which contains (id, name) and a table product_2 which contains (id, name, price).

      I want to select the columns of the two tables in one query.

      Is there any MySQL query to select many columns from many tables?

      This is my code:

      SELECT name FROM product1 AND price FROM product2;
      
      posted in SQL
      terrea
    • RE: DBCC CHECKDB on a user database: Time-out occurred while waiting for buffer latch type 2 for page (X:XXX), database ID 2

      However, the buffer latch timeout reported was in TempDB (database ID 2).

      TempDB is used by DBCC CHECKDB (https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql?view=sql-server-ver15). Most likely there was a set of items waiting on the latch (or a single item; can't say with the data provided), and either there was a convoy occurrence due to something being slow (for example, IO or memory allocations) or this was a one-off occurrence.

      A time-out occurred while waiting for buffer latch type 2, bp 0000016D7A1DE340, page 6:222, stat 0x40d, database id: 2, allocation unit Id: 536870912/281475513581568, task 0x0000016CD624E4E8 : 2, waittime 300 seconds, flags 0x1a, owning task 0x0000016CD624E4E8. Not continuing to wait. (Bolding mine, for emphasis)

      • page 6:222 - Specifically tempdb fileid 6
      • stat 0x40d (buffer) - In IO
      • Waittime 300 seconds - Timeout after waiting a long time
      • flags 0x1a (latch) - In IO, Exclusive Access

      I did notice that there was very high latency on TempDB data files around 6 PM and 9:40 PM. I'm unsure whether it's related, though, because the latency spike at 6 PM is higher and yet the job failed only around 9:44 PM.

      According to your screenshot and the output of the job:

      • Date and time: 2022-01-22 09:04:15
      • Disk IO taking 300,000+ ms (300+ seconds)

      That correlates pretty well; it looks to be slow disk at the time, based on the available data.

      posted in SQL
      terrea