    Topics created by Jolied

    • For each day, get query row count for that day
      SQL, Database Testing • postgresql • Jolied
      0 Votes • 2 Posts • 0 Views

      O

      Would this be what you're looking for?

          SELECT date_trunc('day', ts) AS "day", COUNT(*)
          FROM events
          WHERE success IS TRUE
          GROUP BY date_trunc('day', ts)
          ORDER BY "day" DESC;

      https://dbfiddle.uk/?rdbms=postgres_13&fiddle=e09367af5bb4a1e4fef623df0cc5b35c
    • Why SQL inserts are slower in AWS Aurora Serverless than in AWS RDS
      SQL, Database Testing • mysql performance innodb aws aws aurora • Jolied
      0 Votes • 2 Posts • 0 Views

      E

      Is it possible your Aurora Serverless instance isn't autoscaling appropriately because your cron job is a continuously long-running query and/or transaction? According to https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.how-it-works.html , autoscaling can time out because of exactly these events:

          When it does need to perform a scaling operation, Aurora Serverless v1 first tries to identify a scaling point, a moment when no queries are being processed. Aurora Serverless might not be able to find a scaling point for the following reasons:
          • Long-running queries
          • In-progress transactions
          • Temporary tables or table locks
          To increase your Aurora Serverless DB cluster's success rate when finding a scaling point, we recommend that you avoid long-running queries and long-running transactions.

      This is also mentioned in https://aws.amazon.com/blogs/database/best-practices-for-working-with-amazon-aurora-serverless/ :

          Scale-blocking operations
          The capacity allocated to your Aurora Serverless DB cluster seamlessly scales. But using long-running queries or transactions and temporary tables or table locks can delay finding a scaling point.

      And:

          Long-running queries or transactions
          For transactions, you should follow standard best practices. For example, keep your transactions simple, short and use a suitable isolation level. The most important practice is to avoid long-running transactions. In general, for any relational database, long-running transactions can cause performance degradation. Specifically for Aurora Serverless, long-running transactions are blocking operations for scaling unless you use the force scaling parameter. Even in this scenario, you must complete a proper rollback first, which can take significant time. This can have a very negative impact on your application.

      If your cron job is a single query or transaction, then rewriting it to insert data in multiple smaller batches, with breaks, in a more iterative manner might prove a better workflow for you on Aurora Serverless (a sketch of that idea follows below). If your instance's autoscaling is timing out, then your cron job may be running on an under-provisioned instance the entire time.
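      For illustration, a minimal sketch of that batched approach, assuming the cron job copies rows from a staging table into a target table; the table and column names here are placeholders, not from the original question:

          -- Each statement is its own short transaction, leaving gaps in which
          -- Aurora Serverless can look for a scaling point.
          INSERT INTO target_table (id, payload)
          SELECT id, payload FROM staging_table
          WHERE id BETWEEN 1 AND 10000;

          -- pause briefly in the cron script, then continue with the next range
          INSERT INTO target_table (id, payload)
          SELECT id, payload FROM staging_table
          WHERE id BETWEEN 10001 AND 20000;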
    • How to get error code of an exception type without raising it?
      SQL, Database Testing • oracle exception • Jolied
      0 Votes • 2 Posts • 0 Views

      B

      I don't know whether this is possible. In my application I define all errors in a generic PL/SQL package like this:

          CREATE OR REPLACE PACKAGE Errors IS
              E_ItemNotFound CONSTANT INTEGER := -20010;
              ItemNotFound   EXCEPTION;
              PRAGMA EXCEPTION_INIT(ItemNotFound, -20010);  -- EXCEPTION_INIT requires a numeric literal
          END Errors;

      Then in exception handlers I use either the exception or the constant error number, depending on my needs (a usage sketch follows below). The error messages are defined in a table:

          CREATE TABLE t_error_messages AS
          SELECT -20010 AS ERR_CODE, 'Item not found' AS ERR_MESSAGE FROM dual;
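      A short usage sketch (my own illustration, assuming the Errors package above is installed): the constant exposes the error code without anything being raised, while the named exception still works in a handler.

          BEGIN
              -- the code is available directly, no exception needs to be raised
              DBMS_OUTPUT.PUT_LINE('ItemNotFound maps to ' || Errors.E_ItemNotFound);

              RAISE_APPLICATION_ERROR(Errors.E_ItemNotFound, 'Item not found');
          EXCEPTION
              WHEN Errors.ItemNotFound THEN
                  DBMS_OUTPUT.PUT_LINE('Caught error ' || Errors.E_ItemNotFound);
          END;
          /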
    • Very slow update statement taking up to 11min to update 20 or 0 rows even with indexes
      SQL, Database Testing • sql server t sql sql server 2019 execution plan • Jolied
      0 Votes • 2 Posts • 0 Views

      briley

      These two points from your post are going to be your biggest contributing factors:

          have 14 million rows
          where m.MaximumReflowTemperatureID <> r.z2valueid

      For starters, any time you compare two columns in a table, whether with an equality or an inequality comparison, SQL Server has to compare every row in the table to see if the two columns meet the condition. It's not like a scenario where you're doing an index seek to find a row with ID = 5; in your case, the value you would be comparing against changes row by row, so there is no way to SEEK this data. Add to that the fact that you have 14 million rows in one table and 15 million rows in the other.

      Picture your query as a SELECT, but without the <> condition:

          SELECT m.MaximumReflowTemperatureID, r.z2valueid
          FROM [OperationsTablesDB].[dbo].[ManufactureMaximumReflowTemperatures] r
          INNER JOIN z2datacore.parts.manufacturingdata m WITH (NOLOCK)
              ON m.partid = r.zpartid

      Run that SELECT and see how long it takes and how many rows it returns; maybe grab an execution plan while you're at it. Now picture SQL Server having to pull all those rows every time you run your update, even if there are no unequal values. That is because it has to look at both columns, side by side, before it knows whether it needs to update a row.

      Comparing two columns will almost always be a painful query. However, the following indexes could at least make it more tolerable:

          CREATE NONCLUSTERED INDEX [zpartid_idx]
              ON [dbo].[ManufactureMaximumReflowTemperatures] ([zpartid])
              INCLUDE (z2valueid)

          CREATE NONCLUSTERED INDEX [manufacturingdata_partid]
              ON [dbo].[manufacturingdata] ([partid])
              INCLUDE (MaximumReflowTemperatureID)

      With these indexes you'll hopefully be pulling smaller indexes into memory, i.e. fewer reads, because each index has only two columns instead of all the columns of the base tables. This should help your query run faster; without them, you're most likely doing clustered index scans. Also, make sure to take out the NOLOCK hint.
    • No speedup using parallel restore with pg_restore
      SQL, Database Testing • postgresql postgresql 11 postgresql performance centos 7 • Jolied
      0 Votes • 2 Posts • 0 Views

      A

      Parallel restore does different operations in parallel, but doesn't parallelize individual operations. Since pgbench is dominated by one table, there isn't much that can be done in parallel for it at the client level. With some shell scripting, you can run pg_restore multiple times in parallel, pointing each one at a different database name. (But you could do the same scripting using pgbench -i -s 7000 $DBNAME & instead, skipping the dump and reload steps.)
    • How to start MySQL after failure?
      SQL, Database Testing • mysql authentication • Jolied
      0 Votes • 2 Posts • 0 Views

      emmalee

      I faced the same problem today. The source of your issue might be different, but read /var/log/mysql/error.log carefully; it should point to the error.

      In my case I had deleted the .MYD and .MYI files of a table I didn't need. That was a mistake (I should have dropped the table instead), but my server was at 100% memory usage. What I did was read the error with:

          sudo tail -f /var/log/mysql/error.log

      Keep two terminals open at the same time. In the first one run:

          sudo tail -f /var/log/mysql/error.log

      Then in the second terminal run:

          sudo systemctl stop mysql.service    # shut it down
          sudo systemctl start mysql.service   # try to start it up and see why it is not starting

      For me it was:

          2022-01-26T16:30:36.213788Z 0 [Note] InnoDB: Database was not shutdown normally!
          2022-01-26T16:30:36.213802Z 0 [Note] InnoDB: Starting crash recovery.
          2022-01-26T16:30:36.213830Z 0 [ERROR] InnoDB: Tablespace 605 was not found at ./contratti/ip_log2_bak.ibd.
          2022-01-26T16:30:36.213840Z 0 [ERROR] InnoDB: Set innodb_force_recovery=1 to ignore this and to permanently lose all changes to the tablespace.
          2022-01-26T16:30:36.644359Z 0 [ERROR] InnoDB: Cannot continue operation.
          2022-01-26T16:30:38.045091Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
          2022-01-26T16:30:38.062994Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.34-log) starting as process 2843 ...
          2022-01-26T16:30:38.416074Z 0 [Note] InnoDB: PUNCH HOLE support available

      I was able to fix it by adding the following to /etc/mysql/mysql.conf.d/mysqld.cnf:

          innodb_force_recovery = 2

      From https://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html :

          If you are able to dump your tables with an innodb_force_recovery value of 3 or less, then you are relatively safe that only some data on corrupt individual pages is lost.

      After this change I stopped and started MySQL again, and it is working great.
    • How to count number of occurrences of each key in a MongoDB collection efficiently (one pass?)?
      SQL, Database Testing • mongodb count aggregate • Jolied
      0 Votes • 2 Posts • 0 Views

      emmalee

      One of the shortest solutions could be this one:

          const counts = {}
          db.collection.aggregate([
              { $project: { data: { $objectToArray: "$$ROOT" } } }
          ]).forEach(doc => {
              doc.data.forEach(val => {
                  counts[val.k] = typeof counts[val.k] == "undefined" ? val.v : counts[val.k] + val.v;
              })
          })

      I tried to find a solution with $objectToArray and $group, but most likely it will not work, because the size of a document is limited to 16 MiB. With "millions of documents" this limit would certainly be exceeded.
    • ola hallengren on Azure SQL
      SQL, Database Testing • azure ola hallengren • Jolied
      0 Votes • 2 Posts • 0 Views

      J

      There are two options that filter the indexes to be rebuilt/reorganized:

          MinNumberOfPages
          Set a size, in pages; indexes with a smaller number of pages are skipped for index maintenance. The default is 1000 pages. This is based on Microsoft's recommendation. IndexOptimize checks page_count in sys.dm_db_index_physical_stats to determine the size of the index.

          MaxNumberOfPages
          Set a size, in pages; indexes with a greater number of pages are skipped for index maintenance. The default is no limitation. IndexOptimize checks page_count in sys.dm_db_index_physical_stats to determine the size of the index.

      That said, your index is quite big, and probably you did not set MaxNumberOfPages. Try looking at the message output, using @Execute = 'N' and @Indexes = 'Db1.Schema1.Tbl1.Idx1' with the index you want to defragment (a sample call follows below). It should give you the index status and the action chosen. Compare that output with what comes out of select * from sys.dm_db_index_physical_stats for the same index. Are you using the latest version of the Ola scripts? Try downloading the latest version.
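      For reference, a call along those lines might look like the sketch below; the database, schema, table and index names are the placeholders used above.

          -- Print the maintenance commands IndexOptimize would run for one index,
          -- without executing them.
          EXECUTE dbo.IndexOptimize
              @Databases = 'Db1',
              @Indexes = 'Db1.Schema1.Tbl1.Idx1',
              @Execute = 'N';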
    • Can you run PL/Python or PL/v8 code from the database itself?
      SQL, Database Testing • javascript postgresql plpython • Jolied
      0 Votes • 2 Posts • 0 Views

      Laycee

      From https://stackoverflow.com/a/45131867/129805 : you can use eval, e.g.

          create or replace function my_function() returns text language plv8 as $$
              // load module 'test' from the table js_modules
              var res = plv8.execute("select source from js_modules where name = 'test'");
              eval(res[0].source);
              // now the function test() is defined
              return test();
          $$;
          CREATE FUNCTION

          select my_function();
            my_function
          ----------------
           this is a test
          (1 row)

      https://rymc.io/blog/2016/a-deep-dive-into-plv8/
    • Query to find data where value of data changed to 0
      SQL, Database Testing • sql server t sql syntax • Jolied
      0 Votes • 2 Posts • 0 Views

      T

      WITH cte AS (
          SELECT user_id, event_time, value,
                 LAG(value)  OVER (PARTITION BY user_id ORDER BY event_time) AS lag_value,
                 LEAD(value) OVER (PARTITION BY user_id ORDER BY event_time) AS lead_value
          FROM source_table
      )
      SELECT user_id, event_time, value
      FROM cte
      WHERE (value = 0 AND lag_value <> 0)
         OR (value <> 0 AND lead_value = 0)
      ORDER BY 1, 2
    • How return dynamic SQL value in procedure
      SQL, Database Testing • db2 db2 luw dynamic sql • Jolied
      0 Votes • 2 Posts • 0 Views

      K

      Try:

          DECLARE VAR1_INPUT INT;
          DECLARE SQLSTRING VARCHAR(5000);
          DECLARE OUTPUTVAR INT;

          SET VAR1_INPUT = 5;
          SET SQLSTRING = 'select MYCOL INTO :OUTPUTVAR FROM MYSCHEMA.MYTABLE where MYVAL = '
                          || CAST(VAR1_INPUT AS VARCHAR(25));
          EXECUTE IMMEDIATE SQLSTRING;

      https://www.ibm.com/docs/en/db2/11.1?topic=statements-select-into

      P.S. The SELECT must return one row only. If not, then limit the number of rows returned (with some definite ordering).
    • AD group added to SQL as sysadmin not allowing users to login who are part of the group
      SQL, Database Testing • sql server active directory • Jolied
      0 Votes • 2 Posts • 0 Views

      E

      A very basic thing, but when you say "AD", do you mean Azure AD (AAD) or on-prem AD? Azure SQL Database supports Azure AD authentication (https://docs.microsoft.com/en-us/azure/azure-sql/database/security-overview#authentication), so if your group isn't available in AAD (e.g. it hasn't been synchronised from on-prem AD), that could be your problem.
    • Sequence not overflowing
      SQL, Database Testing • postgresql sequence • Jolied
      0 Votes • 2 Posts • 0 Views

      Mystic

      Your INSERT isn't using the sequence, but the values that are returned by generate_series(). So the sequence is never advanced and thus it doesn't throw an error during the insert. If you don't provide a value for the id column, or use DEFAULT, then you get the expected error:

          CREATE TABLE t1(id SERIAL);
          ALTER SEQUENCE t1_id_seq MAXVALUE 5;

          -- this advances the sequence by 5 values
          INSERT INTO t1 VALUES (default),(default),(default),(default),(default);

          -- this will fail
          INSERT INTO t1 VALUES (default);

      The fact that you can bypass the sequence is precisely the reason why the serial pseudo-type is discouraged (https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_serial) in favor of the standard-compliant identity columns (see the sketch below).
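      For illustration, a minimal identity-column sketch (my own addition, not part of the original answer):

          CREATE TABLE t2 (
              id integer GENERATED ALWAYS AS IDENTITY
          );

          -- the underlying sequence is always used ...
          INSERT INTO t2 VALUES (DEFAULT);

          -- ... and explicit values are rejected unless you ask for it
          INSERT INTO t2 (id) VALUES (42);                           -- error
          INSERT INTO t2 (id) OVERRIDING SYSTEM VALUE VALUES (42);   -- allowed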
    • psql function: append one dimensional array to a two dimensional array using array_append
      SQL, Database Testing • postgresql array • Jolied
      0 Votes • 2 Posts • 0 Views

      Demir

      See https://dbfiddle.uk/?rdbms=postgres_13&fiddle=db83ca30180e59e684bdd8336db2e59a

          create function f() returns integer[][] as $$
          declare
              count_value  integer[]   = '{5,6}';
              count_values integer[][] = '{{1,2},{3,4}}';
          begin
              return array_cat(count_values, count_value);
          end;
          $$ language plpgsql;

      or, starting with an empty array,

          create function f() returns integer[][] as $$
          declare
              count_value  integer[]   = '{5,6}';
              count_values integer[][];
              new_cvs      integer[][];
          begin
              if count_values is null then
                  new_cvs = array_cat(ARRAY[]::integer[][], ARRAY[count_value]);
              else
                  new_cvs = array_cat(count_values, count_value);
              end if;
              return new_cvs;
          end;
          $$ language plpgsql;

      See https://dbfiddle.uk/?rdbms=postgres_13&fiddle=f82ef52ec2c836a58400cd99c30d3675
    • How to reset mySql root password
      SQL, Database Testing • mysql • Jolied
      0 Votes • 2 Posts • 0 Views

      T

      For Windows 10:

      1. Delete the Data folder inside C:\Program Files\MySQL\MySQL Server 8.0.
      2. Create a file named mysql-init.txt at C:\ containing the line below:

             ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';

      3. Open CMD as admin, cd to C:\Program Files\MySQL\MySQL Server 8.0\bin, and enter the following commands one by one:

             mysqld -install
             mysqld --initialize
             mysqld --init-file=C:\\mysql-init.txt --console
    • Query Store takes a very long time to load duration
      SQL, Database Testing • sql server performance query sql server 2017 query store • Jolied
      0 Votes • 2 Posts • 0 Views

      E

      The core problem is that the T-SQL code that drives the SSMS GUI reports is not efficient. Unfortunately, Microsoft has a long history of including inefficient T-SQL in parts of SSMS. The problem is further compounded by the query store data model and the code that makes up the DMVs; there are unusual performance gotchas in many places. For example, join elimination doesn't work with some query store DMVs (https://dba.stackexchange.com/questions/216485/why-doesnt-join-elimination-work-with-sys-query-store-plan).

      I'll back up my criticism by doing a minimal-effort rewrite of the query text that you captured. As is, the original code took 6 seconds to execute against one of my production databases. Note that the following rewrite only works against the database that I developed it against, due to the hardcoded filter values for runtime_stats_interval_id:

          DECLARE @results_row_count INT = 100,
                  @interval_end_time datetimeoffset = '2021-12-08 03:00:00',
                  @interval_start_time datetimeoffset = '2021-12-07 03:00:00';

          SELECT q1.*, qt.query_sql_text query_sql_text
          FROM
          (
              SELECT TOP (@results_row_count)
                  p.query_id query_id,
                  q.object_id object_id,
                  ISNULL(OBJECT_NAME(q.object_id), '') object_name,
                  q.query_text_id,
                  ROUND(CONVERT(float, SUM(rs.avg_duration * rs.count_executions)) * 0.001, 2) total_duration,
                  SUM(rs.count_executions) count_executions,
                  COUNT(DISTINCT p.plan_id) num_plans
              FROM sys.query_store_runtime_stats rs
              JOIN sys.query_store_plan p ON p.plan_id = rs.plan_id
              JOIN sys.query_store_query q ON q.query_id = p.query_id
              WHERE NOT (rs.first_execution_time > @interval_end_time OR rs.last_execution_time < @interval_start_time)
              AND rs.runtime_stats_interval_id BETWEEN 11534 AND 11558 -- code omitted to do this mapping but it isn't hard
              GROUP BY p.query_id, q.query_text_id, q.object_id
              --HAVING COUNT(DISTINCT p.plan_id) >= 1
              ORDER BY total_duration DESC
          ) q1
          INNER JOIN sys.query_store_query_text qt ON q1.query_text_id = qt.query_text_id;

      The new code returns the same results but finishes in about 600 milliseconds. Of course, the above isn't terribly helpful to you; it's just a sign that there is a defect with SSMS that Microsoft could address with code changes on their end. I can think of the following options for resolving your problem:

      • Give up on using some of the SSMS query store GUI reports and run custom T-SQL instead. Some members of the community have written code to do that and make it freely available; the Quickie Store scripts (https://github.com/erikdarlingdata/DarlingData/tree/master#quickie-store) are one such example.
      • It is somewhat unlikely, but your performance issue may be caused by parameter sniffing. When you open the query store GUI, it immediately runs a report over a one-hour date range. It may be that SQL Server cached a query plan for that one-hour range that doesn't perform well for a 24-hour date range. I've seen this happen before on systems that were significantly busier at certain times of the day. You can test for a parameter sniffing problem by removing the query plan that you identified from the cache (see the sketch after this list).
      • There is an answer on this site (https://dba.stackexchange.com/a/230893/) that solves SSMS query store GUI performance issues by using plan guides.
      • You could try opening a support ticket with Microsoft if you have a support contract with them. You could also try leaving feedback about slow performance on the Azure feedback forum (https://feedback.azure.com/d365community/forum/04fe6ee0-3b25-ec11-b6e6-000d3a4f0da0).

      On an unrelated note, I noticed in your question that you don't have wait stats collection enabled. I encourage you to enable that if it's possible to do so; I find it to be a powerful tool for troubleshooting why queries timed out. Unfortunately, it does come with a caveat (https://www.erikdarlingdata.com/sql-server/t-sql-protip-watch-those-tops-without-order-by/).
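      For illustration, a sketch of that parameter-sniffing test (my own addition, not from the original answer; the plan handle below is a placeholder):

          -- Locate the cached plan behind the SSMS report query ...
          SELECT cp.plan_handle, st.text
          FROM sys.dm_exec_cached_plans AS cp
          CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
          WHERE st.text LIKE '%query_store_runtime_stats%';

          -- ... then evict only that plan from the cache and re-open the GUI report
          DBCC FREEPROCCACHE (0x060005001FCA0A50B8A0A90500000000000000000000000000000000000000);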
    • Best Practice for Product Filtering Menus
      Usability Testing • filter drop down list • Jolied
      0 Votes • 2 Posts • 0 Views

      P

      It's always a good idea to reduce the amount of content users see at once, because it reduces cognitive load (https://www.nngroup.com/articles/minimize-cognitive-load/). So your approach would be better in this case.

      For the client's approach, ask:

      • Is it scalable? How many filtering options will it have two years from now?
      • Would users know what "by type" means? Maybe yes, maybe not; in that case the client's approach would be better.
      • Do users multi-filter, for example by country AND by milk AND by type? In that case, the client's approach would give a better overview of the filtering and reduce the number of tasks and clicks users have to do.

      A third approach could be to put the filter options in a left-hand column, which is common practice for most e-commerce sites.
    • AB Test Dilemma
      Usability Testing • ab testing • Jolied
      0 Votes • 2 Posts • 0 Views

      briley

      It depends on how the new CTA and the other section are related to each other. Does the new CTA detract from the other section because it adds clarity? In other words, were people previously clicking around trying to find something, and now the new CTA is exactly what they were looking for, causing fewer people to click on the other section because what they want is easier to find?
    • Using forms inside modals
      Usability Testing • forms modal windows • Jolied
      0 Votes • 2 Posts • 0 Views

      R

      There isn't a real limit on the number of inputs for a modal window form, but there are other considerations:

      • Should the user be prevented from working on any other tasks? If so, that's a modal.
      • Does the user need to reference anything else on the page while working on the form? If so, you might consider a sheet.
      • If the user accidentally closes the modal by clicking outside of the window, will they lose a lot of work? If so, consider a sheet or a page.
      • Does your form combine a lot of different tasks, rather than providing one thing for the user to do? If so, consider making that a page.
    • How does a new network and subnet have FQDNs without DNS Nameservers?
      Continuous Integration and Delivery (CI, CD) • terraform openstack • Jolied
      0 Votes • 2 Posts • 0 Views

      D

      This is something that can really only be answered by your cloud administrators. The openstacklocal domain is a premade domain that OpenStack uses for internal DNS names; it does not mean that it will be usable out of the box.

      To test, you can try to ping 192.168.199.2, .3 and .4 from your VM to see whether the DHCP agent responds, and then try to query the DHCP agent with dig.

      There also seems to be deeper DNS support with ML2: the ML2 driver will need to include the dns extension driver (https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html). You will need to use the DHCP agent IPs as DNS servers; usually these are the first two or three IPs of your subnet. It will create DNS names for your subnet/ports.