    Topics created by Trenton

    • How do I align the SELECT LIST with the rules of VIEW creation in this query?
      SQL, Database Testing • oracle sql developer • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from Y:

      My best guess is that the 2nd column does not have a name, thus breaking the rules of view creation.
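
      A minimal sketch of that failure mode, assuming a hypothetical orders table (in Oracle, an unnamed expression in a view's select list raises ORA-00998):

      -- Fails: the second select-list item is an expression with no name
      CREATE VIEW order_totals AS
      SELECT order_id, price * qty
      FROM   orders;
      -- ORA-00998: must name this expression with a column alias

      -- Works once the expression gets a column alias
      CREATE VIEW order_totals AS
      SELECT order_id, price * qty AS total
      FROM   orders;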
    • Unique identifier per database "copy" on Azure?
      SQL, Database Testing • sql server azure sql database • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from Mystic:

      Azure SQL Database does not support Service Broker (see https://docs.microsoft.com/en-us/azure/azure-sql/database/features-comparison#features-of-sql-database-and-sql-managed-instance); however, databases are Broker Enabled when deployed to an Azure SQL Server. This means there is a unique Service Broker GUID assigned to each database, including copied, restored or manually created databases. You can fetch it with this query:

      SELECT name, service_broker_guid FROM sys.databases;

      The query can be run from master for all databases, or from the database itself for only that database's guid value.
    • Is there a way to create a new instance on localhost on a different port?
      SQL, Database Testing • mysql mysql workbench • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from K:

      How to run multiple instances of MySQL Server on one machine is covered in the MySQL manual: https://dev.mysql.com/doc/refman/8.0/en/multiple-servers.html
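
      As a rough sketch of what the manual describes (the paths and group names here are illustrative, not taken from the manual), mysqld_multi reads numbered [mysqldN] option groups, each with its own port, data directory, and socket:

      # hypothetical my.cnf fragment for two instances on one host
      [mysqld1]
      port    = 3306
      datadir = /var/lib/mysql1
      socket  = /var/lib/mysql1/mysql.sock

      [mysqld2]
      port    = 3307
      datadir = /var/lib/mysql2
      socket  = /var/lib/mysql2/mysql.sock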
    • Postgres CREATE DATABASE Not Working
      SQL, Database Testing • macos postgresql 12 homebrew • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from Y:

      The culprit was pgbouncer and its .ini file, which needs an individual entry for each listed database. Creating the database worked fine, but pgbouncer didn't recognize the new database name because the .ini file had no entry for it.
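
      For illustration, a sketch of the relevant [databases] section of pgbouncer.ini (the database names are hypothetical); each database pgbouncer should route needs its own line, unless a wildcard fallback entry is configured:

      [databases]
      ; existing entry
      olddb = host=127.0.0.1 port=5432 dbname=olddb
      ; the line that was missing for the newly created database
      newdb = host=127.0.0.1 port=5432 dbname=newdb
      ; alternatively, a catch-all entry: * = host=127.0.0.1 port=5432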
    • Constantly querying system views [Oracle, MSSQL, PostgreSQL]
      SQL, Database Testing • sql server postgresql oracle system tables • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from J:

      I've primarily been working with Microsoft SQL Server for about a decade and I've never heard the term "memory fragmentation". The first result on Google defines it as "Fragmentation of memory is a memory disorder where an individual is unable to associate the context of their memories...". Kidding aside, I'm not aware of any specific memory issues with querying the system entities in SQL Server, but the following are potential issues that can occur:

      Querying them frequently consumes server resources like any other entity (typically CPU cycles, memory, and disk I/O), and that can theoretically add up, depending on the system entity and how efficiently it was designed / coded.

      Querying certain system entities too frequently could theoretically result in them being blocked, and / or your query on them being blocked and waiting a while to complete. The latter isn't necessarily a problem so much as an annoyance, unless querying those system entities is part of a larger workflow.

      Personally, I'm querying certain dynamic management views (https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views?view=sql-server-ver15) and logging them to a table every 10 seconds, 1 minute, or 10 minutes, depending on the system entity and the information I want to log, so that I can retroactively troubleshoot performance issues. 10 seconds is probably excessive for most people, and even every 1 minute might be unneeded in a lot of cases, but I like having that level of granularity and it works well in the system I currently support. I have yet to encounter either of the aforementioned issues. Your mileage may vary on other database systems, depending on the system entities you query, your frequency, and the busyness of your server.
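
      A minimal sketch of that logging pattern, assuming a hypothetical dbo.WaitStatsLog table (the DMV and its columns are real; the scheduling mechanism, e.g. a SQL Agent job, is left out):

      -- one-time setup: a table to collect snapshots
      CREATE TABLE dbo.WaitStatsLog
      (
          LogTime             datetime2    NOT NULL DEFAULT SYSUTCDATETIME(),
          wait_type           nvarchar(60) NOT NULL,
          waiting_tasks_count bigint       NOT NULL,
          wait_time_ms        bigint       NOT NULL
      );

      -- run on a schedule: snapshot the DMV into the log
      INSERT INTO dbo.WaitStatsLog (wait_type, waiting_tasks_count, wait_time_ms)
      SELECT wait_type, waiting_tasks_count, wait_time_ms
      FROM   sys.dm_os_wait_stats;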
    • Create an index for fields within an array of composite type
      SQL, Database Testing • postgresql database design index normalization nosql • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from D:

      It is possible to create an index for fields within an array of composite type, but the applicable index type for arrays is GIN (https://www.postgresql.org/docs/current/indexes-types.html#INDEXES-TYPES-GIN). And no built-in GIN operator class (https://www.postgresql.org/docs/current/gin-builtin-opclasses.html) would work for your query predicate WHERE (co[*].r) BETWEEN 9.65 AND 9.67. So not possible after all - with current Postgres 14, and probably future versions as well.

      The problem is merely academic, though, because you wouldn't do that to begin with. The solution is a proper relational design. Whenever you find yourself creating a table with an array of composite type, cease and desist. With a probability bordering on certainty, you should create a separate table instead, normalizing (https://en.wikipedia.org/wiki/Database_normalization) your design at least that much. Then everything becomes rather simple. Could look like this:

      -- parent table optional, not needed for demo
      CREATE TABLE tbl2_parent (tbl2_id serial PRIMARY KEY);  -- more attributes?

      -- random test data
      INSERT INTO tbl2_parent
      SELECT FROM generate_series(1, 1000) t;

      -- main table
      CREATE TABLE tbl2 (
        tbl2_id int NOT NULL  -- REFERENCES tbl2_parent
      , idx     int NOT NULL
      , l       float8 NOT NULL
      , r       float8 NOT NULL
      , PRIMARY KEY (tbl2_id, idx)
      );

      -- random test data
      INSERT INTO tbl2 (tbl2_id, idx, l, r)
      SELECT t, i, random() * 100, random() * 100
      FROM   generate_series(1, 1000) t, generate_series(1, 5) i
      ORDER  BY 1, 2;

      Now, the index is simple:

      CREATE INDEX tbl2_r_idx ON tbl2 (r, tbl2_id);

      Any B-tree index with leading r does the job. Minor additional optimizations depend on the complete use case. And the query is something like:

      SELECT DISTINCT tbl2_id
      FROM   tbl2
      WHERE  r BETWEEN 9.65 AND 9.67;

      Or, if you need more than distinct IDs:

      EXPLAIN ANALYZE
      SELECT *
      FROM   tbl2_parent p
      WHERE  EXISTS (
         SELECT FROM tbl2 t
         WHERE  t.r BETWEEN 9.65 AND 9.67
         AND    t.tbl2_id = p.tbl2_id
         );

      db<>fiddle: https://dbfiddle.uk/?rdbms=postgres_14&fiddle=fd7ebbc39240674113d2475252cc7758

      Either query can use very fast index-only scans with tables that are vacuumed enough, or at least index scans.

      Asides: For your original scenario, you could just use the built-in type point. It consists of two float8 quantities, exactly like your custom row type. Reference the first and second number with a subscript like:

      SELECT ('(7,8)'::point)[0];  -- 7

      See: https://stackoverflow.com/a/8150944/939860

      The query in your original test case can be rewritten as:

      SELECT *
      FROM   tbl2 t
      WHERE  EXISTS (
         SELECT FROM unnest(t.co) elem
         WHERE  (elem.r) BETWEEN 9.65 AND 9.67
         );

      Reading your rationale, though, you might consider PostGIS (https://postgis.net/) first, or at least the mainline geometric types (https://www.postgresql.org/docs/current/datatype-geometric.html).
    • Where is the Installation Media needed to perform a Repair on a SQL Server 2016 installation?
      SQL, Database Testing • sql server sql server 2016 • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from Alberto:

      Repairing is an option of the original installer, and that installer is the installation media being referred to: typically an *.iso file. You should be able to use the same installer you originally installed with to repair the installation as well.
    • Database design: Combining good schema design with good user experience
      SQL, Database Testing • postgresql database design • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from E:

      Let's tackle some of the pros and cons of each of these proposed solutions, in reverse:

      Solution 3 (non-integer IDs): Not a bad idea, except the human readability / idiot-friendliness goes out the window pretty quickly, especially once you care about collisions. And if you have a limit of 6 characters (to improve readability), you're going to have a bad time managing those collisions. The pro of this solution, e.g. using a pseudo-random type such as GUID, is a level of improved privacy / security, precisely because it decreases the idiot-friendliness of your IDs.

      Solution 2 (auxiliary table): Not a terrible idea, but if I understand you correctly, you'd use a separate table specifically to store the last PunishmentId per Client? I think that would be unnecessarily redundant data, more liable to data-integrity problems. Rather, you could just use your Punishments table to tell you the max PunishmentId for the Client you're creating a new Punishment for. With proper indexing, and with the number of Punishments per Client likely being small relative to the entire table, it should be fast enough to look up the max PunishmentId per Client directly from the Punishments table.

      Solution 1 (one table/view per client): Yes, that is going to be unnecessarily difficult to manage for the number of clients you have, for what sounds like a fairly simple system. Sometimes a database-per-client pattern is used to decouple large amounts of data, especially where the client owns the data and / or wants it to live on a separate server from the other databases. But your system doesn't sound like it needs that additional complication for the problem you're trying to solve. It also complicates querying from the application layer, and reporting across the data if you ever want that.

      Solution 0 (what you could do): You could put a view on top of your Punishments table with a window function (https://www.postgresql.org/docs/14/functions-window.html) such as ROW_NUMBER() (https://www.postgresqltutorial.com/postgresql-row_number/) that generates a user-friendly fake ID for display purposes only, backed by the actual primary key of the row. In this case you'd keep your globally increasing auto-increment identity column as the PunishmentId, while the window function partitions by group_id and orders by PunishmentId. This ensures the fake user-friendly ID is deterministic and sequential for each individual client. Here's some example code to create such a view:

      CREATE VIEW PunishmentsDisplay AS
      SELECT id, group_id, punishment, foo, bar,
             ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY id) AS FakePunishmentId
      FROM Punishments;

      Then you can just select from the view for display purposes, like so:

      SELECT id, FakePunishmentId, group_id, punishment, foo, bar
      FROM PunishmentsDisplay
      WHERE group_id = 123;
    • Combine tables with different foreign key values for same data (Firefox)
      SQL, Database Testing • sqlite foreign key • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from R:

      I ended up using https://github.com/crazy-max/firefox-history-merger (2.12.0 at the time of this post), and it merged my slightly older Firefox database with my current one. I don't remember what my old Firefox version was, but I am on Firefox v96.0.3 at the time of this post.
    • Postgres notify if connection timeout occurs
      SQL, Database Testing • postgresql • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from S:

      PostgreSQL v14 introduced the parameter idle_session_timeout (https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-SESSION-TIMEOUT), which is exactly what you are looking for. Note that the client will get an error if it tries to use a connection that has been closed by the server.
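
      A quick sketch of how the parameter can be set (the '10min' value is just an example, and batch_user is a hypothetical role; 0, the default, disables the timeout):

      -- server-wide, persisted across restarts
      ALTER SYSTEM SET idle_session_timeout = '10min';
      SELECT pg_reload_conf();

      -- or per session / per role
      SET idle_session_timeout = '10min';
      ALTER ROLE batch_user SET idle_session_timeout = '10min';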
    • Retrieve gaps in dates in SQL Server
      SQL, Database Testing • sql server gaps and islands • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from T:

      Here is one straightforward way. First you generate all dates between min and max in MYTABLE:

      with min_max(min_dt, max_dt) as (
          select min([DATE]), max([DATE]) from MYTABLE
      ), all_dt(d) as (
          select min_dt from min_max
          union all
          select DATEADD(month, 1, d) from all_dt
          where d < (select max_dt from min_max)
      )
      select y.d
      from all_dt y

      Now you can take the cartesian product between that and the set of users in MYTABLE:

      with min_max(min_dt, max_dt) as (
          select min([DATE]), max([DATE]) from MYTABLE
      ), all_dt(d) as (
          select min_dt from min_max
          union all
          select DATEADD(month, 1, d) from all_dt
          where d < (select max_dt from min_max)
      )
      select x.id, y.d
      from all_dt y, (select distinct [ID] as id from MYTABLE) as x

      I used the "," join since I was not sure whether SQL Server supports explicit CROSS JOIN. Finally you can subtract all existing rows from that set:

      with min_max(min_dt, max_dt) as (
          select min([DATE]), max([DATE]) from MYTABLE
      ), all_dt(d) as (
          select min_dt from min_max
          union all
          select DATEADD(month, 1, d) from all_dt
          where d < (select max_dt from min_max)
      )
      select x.id, y.d
      from all_dt y, (select distinct [ID] as id from MYTABLE) as x
      except
      select [ID], [DATE]
      from MYTABLE

      I would advise you to avoid identifiers like DATE and ID, but I assume this is just an example.

      EDIT: CROSS JOIN apparently works according to my fiddle, so you can rephrase that as:

      ...
      SELECT x.id, y.d
      FROM all_dt y
      CROSS JOIN (SELECT DISTINCT [ID] AS id FROM MYTABLE) AS x
      EXCEPT
      SELECT [ID], [DATE] FROM MYTABLE

      https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=03c144940a57fe33a9dafd44666e74d5
    • Find what queries reports are using on SSRS
      SQL, Database Testing • sql server 2014 ssrs • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from morde:

      As J.D. says, you could search your SSRS solution or source control. Alternatively, if you have multiple SSRS solutions or projects, you could use the "Find in Files" search option in Notepad++ (or a similar tool) to search the contents of all .rdl files in a folder or set of folders for the name of the function. If you do not have an SSRS project (i.e. if your SSRS reports only live on the report server), then you can find information on how to download them all here: https://stackoverflow.com/questions/46783093/download-all-ssrs-reports
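
      A complementary approach, not mentioned in the answer above, is to search the report definitions directly in the ReportServer catalog database. A sketch, assuming the default ReportServer database name and a placeholder search string; this is not an officially supported interface:

      -- report definitions are stored as binary XML in dbo.Catalog.Content
      SELECT c.Name, c.Path
      FROM   ReportServer.dbo.[Catalog] AS c
      WHERE  c.[Type] = 2  -- 2 = report
      AND    CONVERT(varchar(max), CONVERT(varbinary(max), c.Content))
             LIKE '%MySearchString%';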
    • Environment variables in Postgres C extension
      SQL, Database Testing • postgresql c language environment variables postgresql extensions • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from T:

      That should work just fine. Note, however, that it will take the variable from the environment of the PostgreSQL server process, not your current client process.
    • Recover 3-node Galera cluster from crash
      SQL, Database Testing • mariadb galera • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from J:

      Looks like this is the issue:

      xbstream: Can't create/write to file '././backup-my.cnf' (Errcode: 17 - File exists)

      Check whether that file exists anywhere in your datadir (/var/lib/mysql/ or a sub-directory like /var/lib/mysql/.sst/) and either delete it or set the permissions so that your mysql user can overwrite it the next time it tries to write there.
    • Three "Nested loop join" in the Execution plan despite using two inner joins in the main query
      SQL, Database Testing • oracle query performance plsql oracle 11g r2 execution plan • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from Laycee:

      You seem to be assigning too much meaning to the operation ID. It is there simply to uniquely identify each operation and has nothing to do with the order of operation execution. You need to read the plan "inside out", from the most nested operation to the least nested one. With that in mind, let's see what is happening:

      Op. 6: IDX1_SEGMENT is scanned to retrieve ROWIDs matching the segment.
      Op. 5: The rows from MI_SEGMENT_CUSTOMER_RELATION are read by ROWID to filter those that match the EFFECTIVE_DATE condition and get CUSTOMER_NUM.
      Op. 3: Loop over the retrieved CUSTOMER_NUM values.
      Op. 9: In the loop, IDX1_F is scanned to find entries matching CUSTOMER_NUM from Op. 5, and the corresponding ROWIDs are retrieved.
      Op. 8: The rows from MI_FACTCUSTOMER are read by ROWID to filter those that match the EFFECTIVE_DATE condition and get CUSTOMER_NUM.
      Op. 2: Loop over the CUSTOMER_NUM values from Op. 8.
      Op. 10: In the loop, access IDX_CUSTOMER to fetch ROWIDs matching the customer number.
      Op. 1: Loop over ROWIDs from Op. 10.
      Op. 11: In the loop, access MI_DIMCUSTOMER to fetch the required columns.

      Obviously, those loops are executed simultaneously: as soon as a new value from Op. 5 is available, the next iteration of Ops. 9 and 8 can proceed, and as soon as that is complete, the next iteration of Ops. 10 and 11 can fetch the required data. Given the small number of rows and the presence of useful indexes, the choice of the nested loop join seems appropriate.
    • Unable to grant membership to public role
      SQL, Database Testing • sql server security permissions restore • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from R:

      I was able to get it working by removing the user from the database first and then recreating it. In my case, this login was used in other databases, so I couldn't drop the login. And dropping the user didn't work immediately because it also owned some schemas. First, I had to find which schemas it owned:

      use [development]
      go

      --- See which schemas the user owns:
      SELECT S.*, [SchemaOwner] = dp.name, dp.type_desc
      FROM sys.schemas S
      INNER JOIN sys.database_principals dp ON S.principal_id = dp.principal_id

      It showed that the user owned 2 schemas, so next I re-assigned those to dbo:

      -- change them so that dbo owns them:
      alter authorization on schema::HangFire to dbo;
      go
      alter authorization on schema::db_owner to dbo;
      go

      I also had to remove the user from a few roles:

      exec sp_droprolemember 'db_owner', 'mywebuser';
      go
      exec sp_droprolemember 'db_datareader', 'mywebuser';
      go
      exec sp_droprolemember 'db_datawriter', 'mywebuser';
      go

      Finally, I could drop the user:

      drop user [mywebuser]

      Now, at last, I could add it back with proper settings:

      create user [mywebuser] for login [mywebuser];
      go
      exec sp_addrolemember 'db_owner', [mywebuser];
      go
      exec sp_addrolemember 'db_datareader', [mywebuser];
      go
      exec sp_addrolemember 'db_datawriter', [mywebuser];
      go
      ALTER USER [mywebuser] WITH DEFAULT_SCHEMA=[dbo];
      go
      ALTER ROLE [db_owner] ADD MEMBER [mywebuser];
      go
      grant connect to [mywebuser];
      go
    • Massive DROP on a production database with the shortest downtime
      SQL, Database Testing • postgresql foreign key locking drop table • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from A:

      "these tables have foreign keys on a highly referenced table and lock is required by drop"

      If FK constraints point to a table to be deleted, add CASCADE to also drop any such FK constraint (not the referencing tables). See the manual on DROP TABLE: https://www.postgresql.org/docs/current/sql-droptable.html (CASCADE will remove a dependent view entirely, but in the foreign-key case it will only remove the foreign-key constraint, not the other table entirely.)

      If FK constraints point from a table to be deleted (your case, as clarified in an update), then the constraint dies with the table. Unfortunately, dropping an FK constraint also requires a brief ACCESS EXCLUSIVE lock on the referenced table. The manual on ALTER TABLE (https://www.postgresql.org/docs/current/sql-altertable.html#id-1.9.3.35.5) says:

      "ALTER TABLE changes the definition of an existing table. [...] An ACCESS EXCLUSIVE lock is acquired unless explicitly noted."

      This leaves room for interpretation. Indeed, an ACCESS EXCLUSIVE lock is taken on both tables, referenced and referencing. (I verified in a quick test on Postgres 14.) The lock on the referenced table is very brief and shouldn't be a problem unless there is heavy concurrent access on that table - especially if there are long-running transactions.

      To avoid locking the referenced table longer than absolutely necessary, COMMIT after every DROP. Like this (requires Postgres 11 or later):

      DO
      $do$
      DECLARE
         _schema name;
         _tbl    name;
         _sql    text;
      BEGIN
         FOR _schema, _tbl IN
            SELECT schemaname, tablename
            FROM   pg_catalog.pg_tables
            WHERE  schemaname = 'public'        -- or what you need
            AND    tablename = ANY('{t2, t3}')  -- array of tables to delete
         LOOP
            _sql := format('DROP TABLE %I.%I CASCADE', _schema, _tbl);  -- CASCADE needed?
            RAISE NOTICE '%', _sql;
            -- EXECUTE _sql;  -- un-comment once you are sure
            COMMIT;  -- !!!
         END LOOP;
      END
      $do$;

      I put in RAISE NOTICE and commented out the actual DROP as a child safety device. Un-comment the EXECUTE line (and optionally comment out the RAISE) to prime the bomb. Since this commits after every DROP, you don't collect locks along the way. In particular, the highly contested target table of your FK constraints is only blocked for a brief moment per DROP command.

      Dropping the FK constraint with ALTER TABLE ... DROP CONSTRAINT ... separately will hardly help, as that runs into the same problem. It can be an option, though, for multiple FK constraints from the same table (in separate transactions like above) - so you only have to wait for a single target table at a time.

      If you still get stuck, just execute the same command again: only tables that still exist are dropped. You could add IF EXISTS to the DROP, but that only makes sense if you expect concurrent transactions to DROP tables, which doesn't seem to apply.

      If you still get stuck, take an exclusive lock on the target table in question and DROP all tables in the same transaction at once. Concurrent access on that table will be halted during this, obviously, so this is best done at low-activity times or during a maintenance window. (OTOH, if there is no concurrent access, you can simply DROP without contention anyway.)

      Maybe you can identify and fix long-running transactions that don't need to stay open for that long? Those are a general burden for DBs with concurrent access in any case.
    • UPDATE a table by self JOIN and GROUP BY
      SQL, Database Testing • mysql join mariadb update group by • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from A:

      Test this:

      UPDATE TeamPlayers t1
      JOIN (
        SELECT LEAST(t1.Name, t2.Name) Name1,
               MAX(CASE WHEN LEAST(t1.Name, t2.Name) = t1.Name
                        THEN t1.PlayerID ELSE t2.PlayerID END) OVER () ID1,
               GREATEST(t1.Name, t2.Name) Name2,
               MAX(CASE WHEN GREATEST(t1.Name, t2.Name) = t1.Name
                        THEN t1.PlayerID ELSE t2.PlayerID END) OVER () ID2,
               TeamID
        FROM TeamPlayers t1
        JOIN TeamPlayers t2 USING (TeamID)
        WHERE t1.Name <> t2.Name
        GROUP BY Name1, Name2, TeamID
      ) t2 USING (TeamID)
      SET t1.PlayerID = CASE WHEN t1.Name = t2.Name1 THEN t2.ID1 ELSE t2.ID2 END;

      https://dbfiddle.uk/?rdbms=mariadb_10.5&fiddle=16f5b70672ce37a20ce120f29012a646

      PS. Maybe the window definition needs a correct PARTITION BY clause (PARTITION BY Name1, Name2, TeamID?) - but the sample data array is too tiny and does not contain enough data to tell.
    • data structure for grouping children without parents
      SQL, Database Testing • mysql database design mariadb schema • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from C:

      "In step 2, I find ID 2 and 4 are also the same person (making ID 1-4 the same person). How should I update the parent?"

      For example, it can be:

      SET @dup1 := 2;
      SET @dup2 := 4;

      UPDATE test t0
      JOIN test t1
      JOIN test t2
      SET t0.PersonID = t1.PersonID
      WHERE t1.id = @dup1
        AND t2.id = @dup2
        AND t0.PersonID IN (t1.PersonID, t2.PersonID);

      https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=825e1c58b5e7fae31eca9485fa4124ce
    • UNION SQL with LEFT JOIN
      SQL, Database Testing • postgresql join union • Trenton
      0 Votes • 2 Posts • 0 Views

      Answer from K:

      You're missing a comma in the GROUP BY clause after the im.zona_sismica column. Your GROUP BY clause should be:

      GROUP BY Date_part('year', pr.data_chiusura),
               Date_part('month', pr.data_chiusura),
               regione.descrizione,
               provincia.descrizione,
               im.comune,
               im.zona_sismica,
               dati_ante.classe_rischio,
               dati_post.classe_rischio