    Posts made by Trenton

    • How do I align the SELECT list with the rules of VIEW creation in this query?

      I'm trying to understand exactly where I went wrong in this query.


      posted in SQL
      Trenton
    • Unique identifier per database "copy" on Azure?

      Every few weeks I delete our Test database and make a new copy from Production.

      I do want to run some scripts (from our application code) the first time one of these "freshly" created test copies is detected on startup of our application.

      Say that every "copy" had a real unique identifier (a GUID or whatever) that is NOT copied along but generated fresh with the new copy; then I could do something like this:

      • save this value in a normal table (SavedValue)
      • this value WILL be copied along from Production when I create a new Test database
      • if ( SavedValue != GUID ) { SavedValue = GUID; RunCleanupScripts(); }

      So - is there any value like that I can extract from an Azure database? One that stays consistent for the lifetime of an "Azure SQL Database"?
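      A minimal T-SQL sketch of that startup check, only to make the idea concrete (the SavedValue table, the RunCleanupScripts procedure and @CurrentCopyId are all hypothetical names; @CurrentCopyId stands for exactly the per-copy value this question is asking for):

      DECLARE @CurrentCopyId uniqueidentifier;   -- hypothetical: the per-copy value being asked about
      -- SET @CurrentCopyId = ... ;              -- however Azure would expose it

      IF NOT EXISTS (SELECT 1 FROM dbo.SavedValue WHERE CopyId = @CurrentCopyId)
      BEGIN
          DELETE FROM dbo.SavedValue;                                   -- forget the value copied from Production
          INSERT INTO dbo.SavedValue (CopyId) VALUES (@CurrentCopyId);
          EXEC dbo.RunCleanupScripts;                                   -- hypothetical cleanup procedure
      END;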

      Some notes:

      • I want to keep the name of the Test database the same when I make a fresh copy. The format in our architecture is always XXX.ProductionDb and XXX.TestDb
      • I am sure there are a lot of alternative ways of doing this, but because of a lot of complicated issues outside the scope of this question, I am right now looking for a solution where I can "detect" a fresh Test database copy from our application code.
      • This script will only run in our Test environments (no risk for bugs on Production environments)
      posted in SQL
      Trenton
    • RE: What is the performance overhead of IFNULL in the select statement of mysql 5.7+ query

      Let's look at this list of overheads:

      • Overhead for a statement
      • Overhead for a row
      • Overhead for a function

      Those are in order starting with the most overhead. That is, the function overhead is insignificant in the big picture; don't worry about it.

      See also COALESCE().

      Caching

      With Engine=InnoDB, the main cache is the "buffer_pool"; innodb_buffer_pool_size should be about 70% of available RAM. All actions on rows happen in the buffer_pool:

      1. Read block(s) from disk (if not already in cache)
      2. Find the row(s) in the block(s)
      3. Fetch (SELECT) or modify row(s)
      4. Eventually write the block back to disk (if modified)

      Since you are asking about SELECT, only steps 1,2,3 are relevant. Furthermore, if you keep Selecting the same row (or nearby rows), step 1 is not repeated.

      Note: This does not mention any impact of function calls.

      Note: On a cold system (nothing in the cache), the first SELECT will be slower due to step 1. After that the same or 'similar' queries will be faster (sometimes 10x faster).
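      As a quick sanity check of that setting, this standard MySQL query (added here only for convenience) reports the current value:

      -- current buffer pool size in GB; compare it against roughly 70% of available RAM
      SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;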

      Comment: ON t1.id = t2.id Do you have two tables with identical PRIMARY KEYS? Why?

      The main way to have good performance in a JOIN is to have suitable indexes. Let's see the real query and SHOW CREATE TABLE for the tables being Joined.

      The BNL mentioned in a comment is an internal "caching" to speed up certain Joins. You have little control over its use. It refers to fetching all the stuff needed from a secondary table and storing it in an in-memory hash. It only applies when Joining. Function calls are not involved (in your example).

      If a function (eg, IFNULL) were used in ON or WHERE, it is likely to prevent the use of an otherwise suitable Index. (cf not "sargable")
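      A small illustrative pair of queries (hypothetical table t1 with an index on status) showing that point:

      -- not sargable: the IFNULL() wrapper hides the column, so an index on status is unlikely to be used
      SELECT * FROM t1 WHERE IFNULL(status, 0) = 0;

      -- equivalent sargable rewrite that can use the index
      SELECT * FROM t1 WHERE status = 0 OR status IS NULL;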

      posted in SQL
      Trenton
    • RE: Expanding existing column results in 'minimum row size would be XXXX' error, but creating a new table with the desired data length does not

      The scenario that you described is totally normal.

      The maximum in-row size of a single row on a page is 8060 bytes. myTable1 is below that limit, so it creates successfully. But when you try to make the column larger, the total declared size exceeds the maximum and the mentioned error occurs.

      But that does not mean a table row can never exceed 8060 bytes when creating a new table, because in that case SQL Server can arrange the columns into multiple allocation units.

      When a combination of varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns exceeds this limit, the SQL Server Database Engine moves the record column with the largest width to another page in the ROW_OVERFLOW_DATA allocation unit, while maintaining a 24-byte pointer on the original page

      Read the full documentation here for more information: https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms186981(v=sql.105)?redirectedfrom=MSDN
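      For illustration, a sketch of that second case (hypothetical table name): the declared widths add up to more than 8060 bytes, yet the CREATE succeeds because SQL Server can push the widest variable-length column off-row at insert time:

      CREATE TABLE dbo.myTable2 (
          col1 varchar(5000),
          col2 varchar(5000)   -- 10000 bytes declared in total, above the 8060-byte in-row limit
      );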

      posted in SQL
      Trenton
    • RE: Change SQL Server 2019 instance Collation After installation, and new problem

      The problem is a collation mismatch between the instance-level collation (i.e. "server" collation) and the database-level collation of the database you are using. Your instance-level collation is Persian_100_CI_AI_SC_UTF8, and the database-level collation for your system databases is SQL_Latin1_General_CP1_CI_AS, at least according to the error message. According to the screenshot of SSMS, your system databases are using SQL_Latin1_General_CP1_CI_AI (the only difference is the _AS vs _AI), which suggests that the initial collation was SQL_Latin1_General_CP1_CI_AS, that multiple runs of sqlservr.exe -q were executed, or possibly that the database generating the error is both not in the screenshot list and was restored after the execution of sqlservr -q.
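      To see both collations for yourself, a quick check (plain T-SQL, shown only for verification):

      SELECT SERVERPROPERTY('Collation') AS InstanceCollation;   -- instance-level ("server") collation

      SELECT name, collation_name                                 -- database-level collations,
      FROM sys.databases;                                         -- including the system databases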

      The cause of this problem can be one of the following:

      1. Restoring a database onto a server that has a different instance-level collation than the database being restored, or
      2. Running the undocumented sqlservr.exe -q to change all collations, where it failed while converting a database. You need to check the log messages that were generated when you ran sqlservr.exe -q to see which one it was, or you might need to simply run sqlservr.exe -q again to see. The final step of the process is to change the instance-level collation, so if the instance-level collation does not match the collation specified in the -q option of sqlservr.exe, then it did not complete successfully.

      Please see my blog post about the sqlservr.exe -q approach for more details on various things that can go wrong when using it:

      https://sqlquantumleap.com/2018/06/11/changing-the-collation-of-the-instance-and-all-columns-across-all-user-databases-what-could-possibly-go-wrong/

      posted in SQL
      Trenton
    • Is there a way to create a new instance on localhost on a different port?

      I would like to create a new instance of MySQL and use a different port on localhost. Is that possible, or does MySQL only run locally on 3306? Why?

      I could not find any articles related to that.

      I am using MySql Workbench on Ubuntu 20.04.

      posted in SQL
      Trenton
    • RE: MariaDb performance issue
      • Once you have twice as much RAM as the dataset size, adding more RAM will not help performance. Most of the 128GB is unused now.
      • A slow query can, in many cases, be sped up by having a better index or a better formulation. We need to look at such a query.
      • MyISAM is, in most cases, slower than InnoDB. So switch Engines.
      • Opening 4K tables in 17 days is rather trivial; I would not worry about that metric.
      • HDD or SSD? I ask because 3K qps is rather high. We may need to look at ways to cut back on the number of queries and/or combine queries, possibly into transactions.
      • Next steps: http://mysql.rjweb.org/doc.php/mysql_analysis
      posted in SQL
      Trenton
    • RE: Importing data into Mysql slowly
      1. CREATE TABLE without those indexes.
      2. LOAD DATA
      3. ALTER TABLE .. ADD INDEX ...
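      A minimal sketch of those three steps (the table, column and file names are made up):

      CREATE TABLE t (                                 -- 1. create without the secondary indexes
          id      INT NOT NULL,
          payload VARCHAR(255)
      );

      LOAD DATA INFILE '/tmp/data.csv'                 -- 2. bulk load the raw data
      INTO TABLE t
      FIELDS TERMINATED BY ',';

      ALTER TABLE t ADD INDEX idx_payload (payload);   -- 3. add the indexes afterwards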
      posted in SQL
      Trenton
    • Postgres CREATE DATABASE Not Working

      I'm on a Mac using Monterey. It's a Mac M1 which had a bit of an issue using homebrew to install postgres when I first started working on this machine. I moved to the GUI postgres.app which just seemed like a front for the underlying postgres, so I didn't think much had changed.

      In any case the database (Postgres 12) is now running and I am seeing an issue with trying to create a new database. Same issue in a third party client as well as psql.

      rich=# create database sresearch
      rich-# \l
                                         List of databases
           Name     |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
      --------------+----------+----------+-------------+-------------+-----------------------
       postgres     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
       rich         | rich     | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
       sresearch    | rich     | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
       template0    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                    |          |          |             |             | postgres=CTc/postgres
       weather      | rich     | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
      (5 rows)
      

      rich-# \c sresearch
      FATAL: no such database: sresearch

      The database cannot be made for some reason, even though it's reporting having done so. I have no clue why this is happening, but I want to fix it.

      I'm thinking I could make a backup and try to fix this by deleting both versions, then going back to homebrew to start again with Postgres 12. I'm not sure how to find out what the difference is between these two slightly different versions, which are essentially the same thing.

      How can I fix this? Obviously I need to be able to make new databases.

      Update: I just reinstalled the homebrew version and tried to import my alternative postgres.app export. It hung up on \connect for some reason but that was easy to manually get around. This installation as well gave me the same error on trying to create a new database. So both versions are showing the same, even from an export.

      I have no clue what to try.

      rich=# \c sres
      FATAL:  no such database: sres             # Indeed there is though
      Previous connection kept
      rich=#
      

      Updated: More info ==>

      rich=# \l
                                      List of databases
           Name     | Owner | Encoding |   Collate   |    Ctype    | Access privileges
      --------------+-------+----------+-------------+-------------+-------------------
       postgres     | rich  | UTF8     | C           | C           |
       rich         | rich  | UTF8     | C           | C           |
       sres         | rich  | UTF8     | C           | C           |
       sresearch    | rich  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
       template0    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
       template1    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
      (6 rows)
      

      rich=# \c sres;
      FATAL: no such database: sres
      Previous connection kept
      rich=# \s sresearch;
      Wrote history to file "sresearch".
      rich=#

      Here is the feedback from Sequel:

      Commands from Ruby

      Here are the results from Ruby


      Updated. Even more of the same. ==>

      rich=# CREATE DATABASE alpha;
      CREATE DATABASE
      rich=# \c alpha;
      FATAL:  no such database: alpha
      Previous connection kept
      rich=# \l
                                      List of databases
           Name     | Owner | Encoding |   Collate   |    Ctype    | Access privileges
      --------------+-------+----------+-------------+-------------+-------------------
       alpha        | rich  | UTF8     | C           | C           |
       postgres     | rich  | UTF8     | C           | C           |
       rich         | rich  | UTF8     | C           | C           |
       template0    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
       template1    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
      (5 rows)
      

      rich=# DROP DATABASE alpha;
      DROP DATABASE
      rich=# \l
                                      List of databases
           Name     | Owner | Encoding |   Collate   |    Ctype    | Access privileges
      --------------+-------+----------+-------------+-------------+-------------------
       postgres     | rich  | UTF8     | C           | C           |
       rich         | rich  | UTF8     | C           | C           |
       template0    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
       template1    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
      (4 rows)

      rich=# CREATE DATABASE alpha;
      CREATE DATABASE
      rich=# \l
                                      List of databases
           Name     | Owner | Encoding |   Collate   |    Ctype    | Access privileges
      --------------+-------+----------+-------------+-------------+-------------------
       alpha        | rich  | UTF8     | C           | C           |
       postgres     | rich  | UTF8     | C           | C           |
       rich         | rich  | UTF8     | C           | C           |
       template0    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
       template1    | rich  | UTF8     | C           | C           | =c/rich          +
                    |       |          |             |             | rich=CTc/rich
      (5 rows)

      rich=# \c alpha;
      FATAL: no such database: alpha
      Previous connection kept
      rich=#

      posted in SQL
      Trenton
    • Constantly querying system views [Oracle, MSSQL, PostgreSQL]

      I'm a kind-of-experienced database developer who just got into the DB administration world.

      I've been told that in both Oracle and MSSQL it is definitely a terrible idea to run queries such as the example below constantly, every (let's say) 5 seconds, since it might cause memory fragmentation.

      SELECT  *
      FROM    V_$SQL_MONITOR; --oracle
      

      I've done some research, but I couldn't find any official (or unofficial, for what it's worth) documentation discouraging such a thing.

      Could someone experienced in the DBA world explain to me whether this is indeed a bad idea, what memory fragmentation is, and whether this concern also applies to PostgreSQL?

      Thank you in advance!

      posted in SQL
      Trenton
    • Create an index for fields within an array of composite type

      My question is a follow-up to the question answered here:
      https://stackoverflow.com/a/15041094/994263

      Considering a table with an array column of a composite type, is it possible with PostgreSQL to index the column to be able to search for rows containing an array entry matching some arbitrary predicate?

      Here is a fiddle to start off the problem: https://dbfiddle.uk/?rdbms=postgres_14&fiddle=cbed38d77f5fb7e2bc3d14605e74d464

      We have the following schema:

      CREATE TYPE complex AS (
          r       double precision,
          i       double precision
      );
      

      CREATE TABLE tbl2 (tbl2_id serial, co complex[]);

      INSERT INTO tbl2(co)
      select array_agg((random()* 100, random()*100)::complex)
      from generate_series(1, 50000) i
      group by i % 10000;

      -- how to create an index on co[*].r basically?
      CREATE INDEX tbl2_co1_idx ON tbl2 (((co[1]).r)); -- note the parentheses!
      -- * this is only a single array entry's r values

      Is there a mechanism to do an indexed lookup for queries such as this:

      SELECT * FROM
         (SELECT *,
                 generate_subscripts(co, 1) AS s
          FROM tbl2) AS foo
      WHERE (co[s].r) BETWEEN 9.65 and 9.67;
      

      The rationale behind this could be to have items such as polygons, for example, with a small number of points (x,y), and to then easily look up which polygons are out of bounds. It is a more NoSQL-like approach, which would be great if it were doable without resorting to jsonb.

      posted in SQL
      Trenton
    • RE: Add default rows to the query result if restrictions are not met

      When I think about what you're trying to accomplish, I would describe it in this way, using "plain English":

      • Return the results of some query
      • But if no results exist, then return some default values.

      My thought process took the immediate leap of "What if I always include the defaults, but then somehow filter them out when real results exist?"

      After pondering over my cup of morning coffee, I realized this is actually pretty easy to do with CTEs. No recursion needed, but I will use two CTEs.

      That real query

      Let's start by throwing your real query into a CTE. This makes it easy to reference the results from the query multiple times. In this example, I'm just going to query sys.objects, and put the whole darn thing into a CTE:

      DECLARE @ObjectName nvarchar(128) = N'sysschobjs';
      

      WITH RealQuery AS (
          SELECT object_id, name
          FROM sys.objects
          WHERE name = @ObjectName
      )
      SELECT *
      FROM RealQuery;

      Now for the defaults

      I'm doing the same treatment here. Just making up some default placeholders, and abstracting them away into a CTE that I can reference easily. Maybe your default values are stored in some table somewhere, or maybe you prefer to stuff them into a #temp table or table @variable, in which case you wouldn't need to use a CTE here.

      WITH Defaults AS (
          SELECT *
          FROM (VALUES (1,N'One'),(2,N'Two'),(3,N'Three')) AS x(Id,Name)
          )
      SELECT *
      FROM Defaults;
      

      Time for a mashup

      Now, with our real query in a CTE, and our "default" values in another, I simply UNION ALL the "real" results and the default placeholders. The "magic" is to use WHERE NOT EXISTS (SELECT 1 FROM RealQuery) to control whether those defaults are included.

      This returns a single row, matching the object name sysschobjs, and does not return the default placeholders:

      DECLARE @ObjectName nvarchar(128) = N'sysschobjs';
      

      WITH Defaults AS (
      SELECT *
      FROM (VALUES (1,N'One'),(2,N'Two'),(3,N'Three')) AS x(Id,Name)
      ),
      RealQuery AS (
      SELECT object_id, name
      FROM sys.objects
      WHERE name = @ObjectName
      )
      SELECT *
      FROM RealQuery
      UNION ALL
      SELECT *
      FROM Defaults
      WHERE NOT EXISTS (SELECT 1 FROM RealQuery);

      [SSMS results: a single row matching sysschobjs]

      And if you change that first line to a value that doesn't exist in sys.objects then you'll get the placeholder default results instead:

      DECLARE @ObjectName nvarchar(128) = N'AMtwo';
      

      [Image: the three placeholder rows, with the numbers One, Two, Three spelled out in words]

      There are other ways, too.

      My solution works, but it may not be ideal. For example, if you look at https://www.brentozar.com/pastetheplan/?id=SJOnxjMkc , you'll see it's running the "real" query twice. That's totally fine for this trivial query, but for other cases that might not work as well.

      You might be better off simply running your "real" query to insert into a #Results temp table, then checking how many rows are in the temp table.

      IF NOT EXISTS (SELECT 1 FROM #Results)
      BEGIN
          INSERT INTO #Results(Id,Name)
          SELECT *
          FROM (VALUES (1,N'One'),(2,N'Two'),(3,N'Three')) AS x(Id,Name);
      END;
      

      SELECT *
      FROM #Results;

      posted in SQL
      Trenton
    • Where is the Installation Media needed to perform a Repair on a SQL Server 2016 installation?

      I have a SQL Server 2016 Express installation that I recently performed an edition upgrade on to make it a Standard edition so that I could have access to SQL Agent. After performing the edition upgrade I noticed that SQL Agent would not start as the Log On account was not properly configured. I found a recommendation online to run the Repair action from the SQL Server Installation Center. When I try to do this it asks me to browse for the folder that contains the 2016 Installation Media. What folder is it looking for? I don't want to re-download and re-install on a working, production instance, I just want to run the repair.

      NOTE: I have since gotten SQL Agent to work by specifying a domain admin service account for the Log On, I just would like to know how to be able to run Repair on the instance in case I need it in the future.

      [Screenshot: Repair Installation]

      posted in SQL
      Trenton
    • Database design: Combining good schema design with good user experience

      Context

      I am designing a table schema for a solution that will track "punishments" issued by multiple clients. Each client represents a group of people of various sizes. We will be using PostgreSQL and according to our current numbers I do not expect this table to exceed 500k-1mil rows (of course, it would be nice to be able to scale this up if we suddenly take off and get a lot more clients).

      The Problem

      Normally, I would just store the punishments with an auto-generated sequence as the primary key. However, we would like to share the punishment IDs with the clients. In the ideal case, each client ID has its own sequential punishment ID, so that when they open the dashboard they are greeted with a monotonically increasing sequence of punishment IDs corresponding to punishments that they issued. If I were to use an auto-incremented sequence for every row, a client could see IDs such as 1, 4, 8, 12, 13, 32. This is not human-friendly, and our clients would prefer that, if we use a number as an ID, it increases without appearing to have "gaps".

      For further context, this is how a client would interact with punishments:

      • They can query all punishments they have issued.
      • For a specific punishment ID, they can query all the details, some of which are omitted in the general query above.
      • Given a specific person, they must be able to see all punishments issued to that person (within the client's group, not globally).
      • They may be able to issue a new punishment, which would ideally generate a coherent punishment ID safely (i.e. no race conditions). Additionally, the client-friendly ID must be returned at the end of the query so it can be displayed to them (think "Created punishment #53").

      Example table of an ideal solution:

      | id | group_id (client) | punishment | foo   | bar   |
      |----|-------------------|------------|-------|-------|
      | 1  | 1                 | 1          | foo1  | bar1  |
      | 2  | 1                 | 2          | foo2  | bar2  |
      | 3  | 2                 | 1          | fizz1 | buzz1 |
      | 4  | 3                 | 1          | p     | np    |

      Here, querying for client 1's punishments would give the ones with punishment 1 and 2. Client 2 looking up foo for punishment 1 would return fizz. The next punishment ID for clients 1, 2, 3 would be 3, 2, 2 respectively.

      Solution 1: Tables/Views

      A very naive solution would be to make a table per client, and then to just use the correct table. However, there are thousands of clients, and I feel like this may not be a particularly great solution (mostly since I have not heard of it either in university or in the field).

      Another solution I had was to create views. Perhaps a view per client? I admit I'm not particularly familiar with views; are there any sort of restrictions you should put on creating views? https://www.postgresqltutorial.com/postgresql-updatable-views/ looks like something that could be useful. Here, I would then make the ID displayed to the client the row number when querying the particular view. A concern I have, though, is that when inserting a new punishment, determining the row number could be quite tricky to do efficiently, especially given https://wiki.postgresql.org/wiki/Count_estimate .

      Solution 2: Auxiliary Table

      I would essentially keep a table of the last known punishment IDs per client and then do something like:

      start transaction
      upsert punishment ID for client, if fail, abort transaction
      insert into punishments with newly created ID, if fail, abort transaction
      commit transaction
      

      Is this a good practice though?
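      A minimal PostgreSQL sketch of that flow, just to make it concrete (client_punishment_seq and the column names are invented; the row lock taken by the UPDATE serializes concurrent inserts for the same client):

      BEGIN;

      -- bump and fetch this client's next punishment id (locks the counter row)
      UPDATE client_punishment_seq
      SET    last_id = last_id + 1
      WHERE  client_id = 42
      RETURNING last_id;        -- e.g. returns 3  ->  "Created punishment #3"

      -- insert the punishment using the value returned above
      INSERT INTO punishments (client_id, punishment_id, foo, bar)
      VALUES (42, 3, 'foo3', 'bar3');

      COMMIT;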

      Solution 3: Non-integer IDs

      Something else I had in mind was to use non-integer IDs (think Git commit hashes). An automatic alphanumeric string would work for this. However, I would want to keep these IDs short (preferably not exceed 6 characters) and for my RDBMS to manage collisions for me (if this is possible, if not, I can also do it myself). In this case, the clients would just see some random letter/number spam and not panic because of gaps in the sequence. I know that UUIDs are a thing and I have used them before, but in this context they are not client friendly enough to be used.

      Question

      What is the "best" solution here? Or even better, what are the pros/cons of each? How can I decide on the best tradeoff between me, the developer & DBA, who wants the database to perform well, and the client, who wants their output to be idiot-friendly?

      posted in SQL
      Trenton
    • Combine tables with different foreign key values for same data (firefox)

      I am combining my history in firefox from two computers with the following commands:

      ATTACH 'filename' AS toMerge
      INSERT or IGNORE INTO moz_origins SELECT * FROM toMerge.moz_origins;
      INSERT or IGNORE INTO moz_places SELECT * FROM toMerge.moz_places;
      INSERT or IGNORE INTO moz_inputhistory SELECT * FROM toMerge.moz_inputhistory;
      INSERT or IGNORE INTO moz_historyvisits SELECT * FROM toMerge.moz_historyvisits;
      

      Below are the schemas for those four tables (from DB Browser). The databases are in SQLite.

      CREATE TABLE moz_origins ( id INTEGER PRIMARY KEY, prefix TEXT NOT NULL, host TEXT NOT NULL, frecency INTEGER NOT NULL, UNIQUE (prefix, host) )
      CREATE TABLE moz_places ( id INTEGER PRIMARY KEY, url LONGVARCHAR, title LONGVARCHAR, rev_host LONGVARCHAR, visit_count INTEGER DEFAULT 0, hidden INTEGER DEFAULT 0 NOT NULL, typed INTEGER DEFAULT 0 NOT NULL, favicon_id INTEGER, frecency INTEGER DEFAULT -1 NOT NULL, last_visit_date INTEGER , guid TEXT, foreign_count INTEGER DEFAULT 0 NOT NULL, url_hash INTEGER DEFAULT 0 NOT NULL, description TEXT, preview_image_url TEXT, origin_id INTEGER REFERENCES moz_origins(id))
      CREATE TABLE moz_inputhistory ( place_id INTEGER NOT NULL, input LONGVARCHAR NOT NULL, use_count INTEGER, PRIMARY KEY (place_id, input))
      CREATE TABLE moz_historyvisits ( id INTEGER PRIMARY KEY, from_visit INTEGER, place_id INTEGER, visit_date INTEGER, visit_type INTEGER, session INTEGER)
      

      My problem is a Foreign Key constraint issue where moz_origins of the main table and the table to merge contain the same host but have different ids. See here:

      moz_origins:
      id    prefix         host
      -------------------------------
      13    https://   www.youtube.com
      756   https://   youtube.com
      

      toMerge.moz_origins:
      id     prefix         host
      -------------------------------
      1512   https://   www.youtube.com
      5854   https://   youtube.com

      Is there a way where I can merge toMerge.moz_places into moz_places while rewriting any origin_id where the host is shared between both moz_origins tables?

      The furthest I've gotten in understanding how to get there is these sets of commands:

      SELECT url,origin_id FROM toMerge.moz_places WHERE origin_id NOT IN ( SELECT id FROM moz_origins);
      SELECT host,id FROM toMerge.moz_origins WHERE host IN ( SELECT host FROM moz_origins);
      SELECT A.[prefix],B.[host],A.[id] FROM moz_origins A INNER JOIN toMerge.moz_origins B ON A.host == B.host;
      
      posted in SQL
      Trenton
    • Postgres notify if connection timeout is occured

      I want to set up a separate connection for every user of my app. I have some data structure for this to store connections. Obviously, I should delete the expired connections somehow. There are multiple ways to do it.

      The best one is to make a callback between Postgres and my app when a connection timeout has occurred. So my app will have some function like on_connection_timeout to remove old connections.

      But does Postgres have such a feature at all to tell about a connection timeout?

      P.S. If not, I will implement a FIFO.

      posted in SQL
      Trenton
    • Retrieve gaps in dates in SQL Server

      I'm trying to figure out how to retrieve gaps in time-series data. Below is my example.

      CREATE TABLE MYTABLE
          ([ID] int, [DATE] DATE)
      ;
      

      INSERT INTO MYTABLE
      ([ID], [DATE])
      VALUES
      (1, '2022-01-01'),
      (1, '2022-02-01'),
      (1, '2022-03-01'),
      (1, '2022-04-01'),
      (1, '2022-05-01'),
      (1, '2022-06-01'),
      (1, '2022-10-01'),
      (1, '2022-11-01'),
      (1, '2022-12-01'),
      (2, '2022-01-01'),
      (2, '2022-02-01'),
      (2, '2022-03-01'),
      (2, '2022-04-01'),
      (2, '2022-05-01'),
      (2, '2022-06-01'),
      (2, '2022-07-01'),
      (2, '2022-08-01'),
      (2, '2022-10-01'),
      (2, '2022-11-01'),
      (2, '2022-12-01')
      ;

      The date field follows a convention where the day is always 1st of the month. For the above example, the gaps would be the following.

      | ID |       DATE |
      |----|------------|
      |  1 | 2022-07-01 |
      |  1 | 2022-08-01 |
      |  1 | 2022-09-01 |
      |  2 | 2022-09-01 |
      

      How would I write a query to return the above results?

      posted in SQL
      Trenton
    • Find what queries are reports using on SSRS

      EDIT: It seems they're using "semanticQuery", and this is why I can't see useful information?



      We will be decommissioning one of our functions, and we noticed that some reports are still using it.

      Is there a way to grab the "query" of a report, and then filter it?

      for example:

      SELECT REPORT_NAME, QUERY from Magic_DMV_report_queries where QUERY like '%dbo.badFunction%'
      

      I tried to find some DMVs, etc., but with no success, even when trying to get some XML information.

      With this query, the "command" is not as "understandable" as I thought it would be:

          DECLARE @details AS TABLE
          (
              id INT IDENTITY(1,1) NOT NULL,
              Report              NVARCHAR(100), 
              ReportPath          NVARCHAR(500),
              DataSetName         NVARCHAR(50), 
              DataSourceName      NVARCHAR(100), 
              ComandType          NVARCHAR(50), 
              CommandText         NVARCHAR(MAX)
          )
      

      DECLARE @report xml
      DECLARE @reportname NVARCHAR(100), @reportpath NVARCHAR(500)
      DECLARE @query NVARCHAR(MAX)
      DECLARE db_cursor CURSOR FOR
      SELECT Name, Path ,CAST(cast(Content AS varbinary(max)) as XML)
      FROM Catalog
      WHERE Type = 2
      ORDER BY Name
      OPEN db_cursor
      FETCH NEXT FROM db_cursor INTO @reportname, @reportpath, @report
      WHILE @@FETCH_STATUS = 0
      BEGIN
      INSERT INTO @details
      SELECT
      @reportname AS Report,
      @reportpath,
      t.c.value('@Name[1]', 'nvarchar(50)') AS DataSetName,
      t.c.value('(./*:Query/*:DataSourceName)[1]', 'nvarchar(50)') AS DataSourceName,
      t.c.value('(./*:Query/*:CommandType)[1]', 'nvarchar(100)') AS CommandType,
      ISNULL(t.c.value('(./*:Query/*:CommandText)[1]', 'nvarchar(MAX)'), 'Shared - ' + t.c.value('(./*:SharedDataSet/*:SharedDataSetReference)[1]', 'nvarchar(MAX)')) AS CommandText
      FROM
      @report.nodes('/*:Report/*:DataSets/*:DataSet') AS t(c)
      FETCH NEXT FROM db_cursor INTO @reportname, @reportpath, @report
      END
      CLOSE db_cursor
      DEALLOCATE db_cursor

      SELECT * FROM @details ORDER BY Report, DataSetName

      EDIT 2:

      I think I did it:

      WITH CTE AS (
          SELECT Name,
                 CAST(CAST(CAST(Content AS VARBINARY(MAX)) AS XML) AS NVARCHAR(MAX)) AS ReportXML
          FROM Catalog
          WHERE Type = 2
      )
      SELECT * FROM CTE
      WHERE ReportXML LIKE '%My_Cool_Procedure_or_function%'
      
      posted in SQL
      Trenton
    • Environment variables in Postgres C extension

      I cannot get an environment variable in my PostgreSQL C extension code.

      For example, this function always returns 111:

      #include "postgres.h"
      #include "fmgr.h"
      #include <stdlib.h>
      

      PG_MODULE_MAGIC;

      PG_FUNCTION_INFO_V1(myinc);
      Datum
      myinc(PG_FUNCTION_ARGS)
      {
          int32 arg = PG_GETARG_INT32(0);
          char *envar = getenv("MYINC");

          if (envar) {
              PG_RETURN_INT32(arg + atoi(envar));
          } else {
              PG_RETURN_INT32(111);
          }
      }

      By contrast, this C program works as expected; it prints whatever MYINC is:

      #include <stdio.h>
      #include <stdlib.h>
      

      int main()
      {
          printf("MYINC: %s", getenv("MYINC"));
          return 0;
      }

      posted in SQL
      Trenton
    • RE: Does only a full backup (and no tlog backup) of a db in full recovery model allow for point in time recovery?

      No, it's not possible. You can find that info on the https://docs.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql?view=sql-server-ver15#arguments doc:

      When you restore a backup created by BACKUP DATABASE (a data backup), the entire backup is restored. Only a log backup can be restored to a specific time or transaction within the backup.

      This limitation becomes clear from https://www.sqlshack.com/understanding-sql-server-backup-types/ :

      A full backup, as the name implies, backs up everything. [...] This is a complete copy, which stores all the objects of the database: Tables, procedures, functions, views, indexes etc. Having a full backup, you will be able to easily restore a database in exactly the same form as it was at the time of the backup.
      [...]

      The log backup, as its name implies, backs up the transaction logs. [...] A transaction log file stores a series of the logs that provide the history of every modification of data, in a database.

      As you can see, the full backup is just a copy of the whole database as it was at that moment whereas a log backup contains each transaction (individually logged like a step by step recipe) that happened since the last log backup.

      The fact that you took log backups doesn't "upgrade" a full backup to be capable of doing a point in time recovery by itself if the log backup files were discarded afterwards. The log backups are still needed to perform a point in time recovery of the 40th day as you described.

      Therefore, if you want to be able to restore the 40th day using point-in-time recovery from now on, you should change your backup plan to keep enough log backups to cover that period.
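      For reference, the shape of a point-in-time restore (the database name, paths and timestamp are invented); note that STOPAT belongs to the log restore, not to the full restore:

      RESTORE DATABASE MyDb
          FROM DISK = N'X:\backup\MyDb_full.bak'
          WITH NORECOVERY;                                 -- the full backup restores as-is

      RESTORE LOG MyDb
          FROM DISK = N'X:\backup\MyDb_log_040.trn'
          WITH STOPAT = '2022-02-09 14:30:00', RECOVERY;   -- point in time is only possible here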


      To have a better understanding of the content of a log backup see https://dba.stackexchange.com/questions/252191/during-a-log-backup-is-the-data-backed-up-to-the-start-or-end-of-the-operation/252242#252242

      posted in SQL
      Trenton