    SOFTWARE TESTING

    Topics created by terrea

    • Problem seeding Heroku database
      SQL, Database Testing • heroku sequelize • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by Z:

      Presumably, every Post must be linked to a corresponding (no pun intended) User. This is what the Foreign Key is enforcing. It will not allow you to create a Post that contains a userId that does not match an existing User. You don't show a "seeder" for the Users. Is it as simple as you're trying to "seed" the Posts before "seeding" the Users?
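
      For illustration, a minimal sketch of the ordering issue described above; the table and column names below are invented, not taken from the question's seeders:

          -- Hypothetical schema: Posts.userId must reference an existing Users.id
          CREATE TABLE Users (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
          );
          CREATE TABLE Posts (
            id     INTEGER PRIMARY KEY,
            userId INTEGER NOT NULL REFERENCES Users (id),
            body   TEXT
          );

          -- Fails: no User with id 1 exists yet, so the foreign key rejects the row
          INSERT INTO Posts (id, userId, body) VALUES (1, 1, 'hello');

          -- Seed the parent rows first, then the child rows succeed
          INSERT INTO Users (id, name)         VALUES (1, 'alice');
          INSERT INTO Posts (id, userId, body) VALUES (1, 1, 'hello');

      The same ordering applies to seeders: run the Users seeder before the Posts seeder.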
    • UPDATE statement on UPSERT not incrementing value
      SQL, Database Testing • postgresql upsert • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by Laycee:

      You are setting reservation_count to the inserted value + 1, not incrementing the current value. You probably want to add the provided count to the existing count:

          SET reservation_count = reservationcounter.reservation_count + excluded.reservation_count;
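
      For context, a sketch of what the full UPSERT could look like with that SET clause; the room_id key column and the literal values are invented for illustration:

          INSERT INTO reservationcounter (room_id, reservation_count)
          VALUES (42, 3)
          ON CONFLICT (room_id) DO UPDATE
              -- add the incoming count to the stored one instead of overwriting it
              SET reservation_count = reservationcounter.reservation_count
                                      + excluded.reservation_count;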
    • Are the limits of varchar(8000) and nvarchar(4000) because of the sql server page size (8KB=8192 Bytes)?
      SQL, Database Testing • sql server • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by Alberto:

      Is this because the page size is 8192 Bytes? Yes. In SQL Server version 7 the 8K page was introduced and varchar(n) max length went from 255 to 8000. The page size before version 7 was 2K, so allowing a varchar to take up most of a page was new too.
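
      A quick illustration of where those limits sit (hypothetical table, SQL Server syntax):

          CREATE TABLE demo_limits (
              ok  varchar(8000),   -- the largest declared in-row length
              big varchar(max)     -- anything larger than 8000 bytes must use max
          );
          -- varchar(8001) would be rejected, and nvarchar tops out at 4000
          -- because each character takes 2 bytes: 4000 * 2 = 8000 bytes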
    • GROUP_CONCAT Extremely slow
      SQL, Database Testing • php mysql • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by G:

      I have 3 significant performance improvements to recommend:

      Indexing: It looks like you have 3 many-to-many mapping tables. The typical implementation of such has inefficient indexes. See http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table for how to improve their performance.

      Overkill: It may not be worth having a separate table for cities. Consider simply having city_name in product_city (and change the indexing).

      Explode-implode: This is an SQL design anti-pattern. It involves joining lots of stuff together, producing lots of intermediate rows, only to then spend a lot of time collapsing down (via GROUP BY) to roughly what you started with. Your example is a very clean one. Instead of

          SELECT GROUP_CONCAT(c...) LEFT JOIN c

      do

          SELECT ( SELECT GROUP_CONCAT(...) FROM c WHERE ... )

      Ditto for the other two. Then, the GROUP BY a.id, a.name can be eliminated!
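
      A sketch of that rewrite end to end, using invented products/cities names around the product_city table mentioned above:

          -- Explode-implode: join everything, then collapse back down with GROUP BY
          SELECT p.id, p.name, GROUP_CONCAT(ci.city_name) AS cities
          FROM   products p
          LEFT JOIN product_city pc ON pc.product_id = p.id
          LEFT JOIN cities ci       ON ci.id = pc.city_id
          GROUP BY p.id, p.name;

          -- Rewritten: one correlated subquery per list, so the outer GROUP BY disappears
          SELECT p.id, p.name,
                 ( SELECT GROUP_CONCAT(ci.city_name)
                   FROM   product_city pc
                   JOIN   cities ci ON ci.id = pc.city_id
                   WHERE  pc.product_id = p.id ) AS cities
          FROM   products p;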
    • What happens during auto vacuum / vacuum in PostgreSQL?
      SQL, Database Testing • postgresql vacuum autovacuum • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by E:

      The autovacuum launcher process wakes up regularly and determines – based on the statistical data in pg_stat_all_tables and pg_class and certain parameter settings – if a table needs to be VACUUMed or ANALYZEd. If yes, it starts an autovacuum worker process that performs the required operation.

      VACUUM does a lot of things. The basic procedure is:

      • scan the table for dead row versions
      • scan all indexes and remove references to these dead row versions
      • scan the table again to remove the dead row versions found in the first step
      • if we are not done with the table yet, repeat the whole process

      On the side lines, it also does these things:

      • update table statistics in pg_class
      • bring the “free space map” up to date, so that INSERTs and UPDATEs know where to find a block with enough free space
      • bring the “visibility map” up to date, so that PostgreSQL can perform efficient index-only scans
      • “freeze” all live rows older than a certain age to allow the transaction ID counter to wrap around without data loss
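
      If you want to watch this in practice, a simple sketch using the standard statistics views shows when each table was last vacuumed and how many dead rows remain:

          SELECT relname, last_vacuum, last_autovacuum,
                 last_analyze, last_autoanalyze,
                 n_live_tup, n_dead_tup
          FROM   pg_stat_user_tables
          ORDER  BY n_dead_tup DESC;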
    • How can I mimic full outer join in mysql with multiple columns
      SQL, Database Testing • mysql • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by R:

      In a UNION clause the alignment of the columns in each unioned dataset defines the order of those columns in the final result set. Since you don't want the sales and budget data points to be in the same column, you can just un-align them and add placeholder default values in the other dataset like so:

          SELECT 0 AS budget, SUM(A.sales) AS sales, A.restaurant
          FROM A
          LEFT OUTER JOIN B ON A.date = B.date AND A.location = B.location
          WHERE A.date BETWEEN ? AND ?
          GROUP BY A.location
          UNION
          SELECT SUM(B.budget) AS budget, 0 AS sales, B.restaurant
          FROM A
          RIGHT OUTER JOIN B ON A.date = B.date AND A.restaurant = B.restaurant
          WHERE B.date BETWEEN ? AND ?
          GROUP BY B.restaurant

      Then to get the final results you want, you'll want to do one more grouping again on location in a subquery or CTE to aggregate the different sides of the UNION into a single row like so:

          SELECT location, SUM(sales) AS sales, SUM(budget) AS budget
          FROM (
              SELECT 0 AS budget, SUM(A.sales) AS sales, A.location
              FROM A
              LEFT OUTER JOIN B ON A.date = B.date AND A.location = B.location
              WHERE A.date BETWEEN ? AND ?
              GROUP BY A.location
              UNION
              SELECT SUM(B.budget) AS budget, 0 AS sales, B.restaurant
              FROM A
              RIGHT OUTER JOIN B ON A.date = B.date AND A.restaurant = B.restaurant
              WHERE B.date BETWEEN ? AND ?
              GROUP BY B.restaurant
          ) Results
          GROUP BY location

      Note I think I fixed a typo in your first dataset of the UNION clause by changing A.restaurant to A.location.
    • Stuck at inserting records into MariaDB conditionally
      SQL, Database Testing • mariadb insert condition • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by R:

      INSERT INTO Nodes (`Key`, ParentId)
      SELECT @key, @parentid
      FROM DUAL
      WHERE NOT EXISTS (
          SELECT NULL FROM Nodes WHERE `Key` = @key AND ParentId = @parentid
      )

      @key and @parentid are placeholders for the values to be inserted. So if a row with the values to be inserted already exists, the subquery detects this and no row is inserted.
    • Chat schema for PostgreSQL and MongoDB
      SQL, Database Testing • postgresql mongodb schema • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by P:

      This hopefully indirectly answers your question and points you in the right direction; it's too much to discuss in comments.

      MongoDB is not any more performant at writing data than any other mainstream relational database (PostgreSQL, SQL Server, etc). The same is true for reading data as well. NoSQL was not invented to be more performant at reading, writing, or querying data; rather it was originally created as a tool to improve developer speed, especially with a highly changing or variable schema. By not having to maintain an object structure in the data layer in addition to the application layer, NoSQL databases allowed developers with specific needs to release software changes faster when managing a highly changing schema.

      From what I can see, your messages entity is structured, but perhaps the content of the message text is not (an example of the text would be helpful). Regardless, you can either parse the text into a relational model or store it as a single text field, just as your entity is currently modeled, if you wanted to use a relational database system for all of your data.

      Sharding may be easier, but at the general tradeoff of eventual consistency (two instances of the same table being out of sync at a given point in time). When to use sharding depends on a couple of factors, generally a function of what one is willing to pay to provision servers vs how much data needs to be stored, based on the specific use cases. But in general, modern relational database systems handle big data well when architected properly, and I've yet to find sharding any more useful than vertical scaling.

      All of that being said, I've worked with messages data (financial trading messages) in a large and heavily transactional relational database as well. Some of the tables were into the tens of billions of rows and multiple terabytes for a single table. We stored our raw unparsed messages in one table as they came into our system, but wrote an application that also parsed the relevant information from the messages into a normalized table.

      The main two challenges you'll find by using two different database systems are:

      • Management of the data in two different systems, especially as your data grows: the management of a NoSQL database system requires different thought, planning, and attention than a relational database system.
      • Relating the data across systems in a meaningful way: consuming the data by your apps or for reporting purposes becomes more complicated and dependent on two unrelated systems. Associating the correlating messages data to your other data such as users becomes more complicated, and probably less efficient, since you need two different methodologies for retrieving data, which results in more steps to associate the data; for example, if you wanted to get the users.full_name for the messages they created.

      Most modern relational database systems can store, parse, and query "unstructured" data too, since that's really a subset of structured data. For example, the JSON capabilities of PostgreSQL that Vérace mentions make it a powerful data store for relational data plus any additional metadata that's unstructured as well.

      Alternatively, it seems like your messages structure is in your application's control. If that's true, then you don't need to store it all as a single denormalized object; rather you can store it in the database as a normalized series of tables that make up all of the data from the messages. It appears all of your data could be stored in a relational database.
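
      To make the PostgreSQL route concrete, here is a minimal sketch of a normalized messages table that keeps any unstructured extras in a jsonb column; all of the names below are hypothetical:

          CREATE TABLE users (
              id        bigserial PRIMARY KEY,
              full_name text NOT NULL
          );

          CREATE TABLE messages (
              id        bigserial   PRIMARY KEY,
              sender_id bigint      NOT NULL REFERENCES users (id),
              sent_at   timestamptz NOT NULL DEFAULT now(),
              body      text        NOT NULL,
              metadata  jsonb                 -- unstructured per-message extras
          );

          -- relational joins and JSON querying work side by side,
          -- e.g. users.full_name for each message plus a field from the metadata
          SELECT u.full_name, m.body, m.metadata ->> 'client' AS client
          FROM   messages m
          JOIN   users    u ON u.id = m.sender_id;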
    • I want to select many columns from many tables without foreign key
      SQL, Database Testing • mysql select join code • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by J:

      I have three tables, and I want to fetch the three table data without a foreign key. I think you mean "without a join condition". That's a Bad Idea. If you don't tell the database how to join two tables, it will perform a Cartesian Join, i.e. it will match every row in the first table against every row in the second table. If you have ten rows in both tables, you will get back a resultset of one hundred (10 x 10) rows. If you have a million rows in both tables, you will get back a resultset of one trillion (10^6 x 10^6) rows - assuming your database survives long enough to deliver it and you're prepared to wait that long! And the more tables you add, the more this multiplies out!
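
      A small sketch of the difference, with made-up orders/customers tables:

          -- Cartesian join: every order paired with every customer
          -- (10 orders x 10 customers = 100 rows, regardless of who ordered what)
          SELECT o.id, c.name
          FROM   orders o, customers c;

          -- With a join condition: only the matching pairs come back
          SELECT o.id, c.name
          FROM   orders o
          JOIN   customers c ON c.id = o.customer_id;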
    • SQL Server 2019 pod K8S in CrashLoopBackOff. Could not open file .../master.mdf and Access is denied. Any ideas why?
      SQL, Database Testing • sql server • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by A:

      The problem was with the sa login and enabling SQL Server Authentication. If the sa login works in SSMS, it will work in the pod. This helped: https://www.liquidweb.com/kb/troubleshooting-microsoft-sql-server-error-18456-login-failed-user/
    • Best practice for SQL table names when using sub namespaces that exceed maximum table name length
      SQL, Database Testing • mysql table limits naming convention • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by carriann:

      There is essentially no reason to prefix tables with the database they are in. Ditto for prefixing column names with the table names. Keep in mind that these syntaxes can give you an equal amount of clutter, without threatening the max name lengths:

          dbname.tablename
          tablename.columnname

      "Everybody" recognizes genus_species; I suggest that is sufficient for that table.
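
      For example (the database and column names here are hypothetical):

          SELECT genus_species.latin_name        -- tablename.columnname
          FROM   taxonomy.genus_species;         -- dbname.tablename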
    • Can I convert a local user to a common user in Oracle?
      SQL, Database Testing • oracle oracle 19c • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by Alberto:

      Short answer - No. You can't convert between common and local user accounts. Even if you could, where would the database store the account's "data and objects"? You should not have any user/application data in the Container Database.
    • PostgreSQL: Suitable ID to uniquely identify client socket connection
      SQL, Database Testing • postgresql • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by C:

      Every PostgreSQL database connection has a backend process on the database server, which processes the SQL statements for that connection. The process ID of that backend process can therefore uniquely identify a certain database session, and that is what is used as identifier for a client connection in PostgreSQL. You can get the process ID for the current session with the pg_backend_pid() function.

      Note that over the course of time, process IDs can be reused. So if you are looking for a globally unique identifier for a database session, you may want to add the timestamp at which the connection was established. You can use the following query to get both for the current database session:

          SELECT pid, backend_start
          FROM pg_stat_activity
          WHERE pid = pg_backend_pid();
    • SQL Server table query with pagination performance tuning, understand the current solution
      SQL, Database Testing • sql server linq pagination performance tuning • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by Y:

      das spool

      The main difference between the two queries is the presence of an eager index spool (https://www.erikdarlingdata.com/starting-sql/starting-sql-eager-index-spools/). From the article:

          Eager index spools can occur on the inner side of Nested Loops joins to reduce the amount of work that needs to be done there, by creating a more opportune index for the loop to work off of.

      That all sounds very good, but there are some problems:

      • The index gets created single-threaded
      • The way data is loaded into the spool is very inefficient
      • The spool is disposed of when the query is finished, so it'll get built over and over again on later executions
      • There's no missing index request for the spool anywhere

      But in your case, since the spool is quite small, it works in your favor.

      slowpoke

      In the slow query, you have a nested loops join against tbl_src_ArticlesCategories which executes ~7k times, but doesn't have a useful index, so the entire table is scanned for each execution (execution plan screenshots of the scan and its details not shown). You end up reading ~53 million rows in total when all is said and done, because you scan ~736k rows ~7k times.

      fastpoke

      In the fast plan, you get a scan plus a spool with a seek instead (execution plan screenshots not shown). The optimizer for this plan decided to create a good index for you, so it has a more suitable structure to use to locate matching rows on ID_ARTICLE. You do ~7k seeks, which is much more efficient given the circumstances.

      equalizer

      You could potentially get equal performance from both queries by adding this index:

          CREATE /*UNIQUE?*/ INDEX spool_b_gone
          ON [dbo].[tbl_src_ArticlesCategories] ( [ID_ARTICLE] )
          INCLUDE ( [ID_LINE], [ID_GROUP], [ID_CLASS], [ID_FAM] );

      Though sometimes the optimizer is foolish and builds a spool anyway (https://www.erikdarlingdata.com/sql-server/index-spools-when-you-have-an-index/) even when you have the right index in place.

      differences?

      The immediate difference that I see is that in the slower execution, you generate the row number over a ton more columns across different tables (plan screenshot not shown). But I'm a bit short on time, so there may be other things contributing to the choice between spool/no spool.
    • Possible to GROUP BY across different columns in mariadb/mysql
      SQL, Database Testing • mysql mariadb group by gaps and islands • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by B:

      If you are using MySQL 8 or MariaDB 10.2, you can use LEAD() or LAG() to see the next or previous row (given that you have an ORDER BY), then you can do the test to see if they are contiguous. (This is not a "grouping" function.)
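
      A sketch of the idea, using an invented readings(seq) table: compare each row with the previous one and flag where a new contiguous run starts:

          SELECT seq,
                 LAG(seq) OVER (ORDER BY seq) AS prev_seq,
                 CASE WHEN seq = LAG(seq) OVER (ORDER BY seq) + 1
                      THEN 0 ELSE 1 END       AS starts_new_run
          FROM   readings
          ORDER  BY seq;

      Summing starts_new_run in an outer query gives each row a run number, which can then be used in a GROUP BY.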
    • SQL Cannot restore database on YOUNGER server version
      SQL, Database Testing • sql server • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by G:

      I made a DB backup on 15.00.2255 and tried to restore it on 15.00.4188 but it fails, any ideas? The error:

          The database was backed up .... version 15.00.2255. That version is incompatible ... running version 15.00.4188 ...

      The general understanding is that there should be no problem restoring a backup from an older version onto a newer version, and that is generally true. There are caveats which rarely, if ever, see the light of day. The common mistake is assuming that only the SQL instance version number must be larger/newer, which isn't the case (which is why the error is a little misleading, though technically still true). What is compared is the database physical version, not the server version, and that is what makes this error occur: the on-disk version of the 15.00.2255 backup is larger than that of the 15.00.4188 instance.

      This data can be gleaned from the RESTORE HEADERONLY command against the backup from the 15.00.2255 version; the result you want is in the DatabaseVersion column. Compare that with the value of "database version" in the output of:

          DBCC TRACEON(3604)
          DBCC DBTABLE(master)

      Ex: database version = 904. It should show that the version in the backup is larger, hence the error. There was no released boxed (on-prem) SQL Server minor version 2255. I did not check any managed version (MI, SQL DB, etc.) numbers, but most likely the backup is from one of those, and it is known that you cannot restore from them back to box/on-prem.
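
      For reference, a sketch of reading the backup header (the backup path is hypothetical); DatabaseVersion is the physical version to compare with the value reported by DBCC DBTABLE above:

          RESTORE HEADERONLY
          FROM DISK = N'C:\Backups\MyDatabase.bak';
          -- inspect the DatabaseVersion column in the result set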
    • MariaDB Understanding "Event" Interval
      SQL, Database Testing • mariadb log event • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by irl:

      I thought "6 Hour" means 6H00 12H00 18H00 and 24H00. No. This means "6 hours from previous start". You have no specidfied any timepoint as a base - and server uses previous event starting time for this purposes. If you want the event to be fired at listed hours you must specify STARTS schedule option - in this case the specified value will be used as a base. Like ALTER EVENT flush_logs_event ON SCHEDULE EVERY 6 HOUR STARTS '2021-12-09 00:00:00' COMMENT 'Rotate /var/log/mysql/master-bin*' DO FLUSH LOGS;
    • Pentaho does not read sql script (csv)
      SQL, Database Testing • mysql database design parameter pentaho scripting • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by R:

      According to the error message it seems that the program does not understand the definition of the file ${Internal.Entry.Current.Directory}\config.csv:

          Couldn't get filed info from [$(sql_script)]
          ERROR: syntax error at or near "$"
          Posizione: 1

      Try omitting the $ in the file definition (first screenshot): {Internal.Entry.Current.Directory}\config.csv. I'm not a Pentaho expert, but unless your file contains a $ in the first line, the error must lie in the definition of the parameter for the CSV file.
    • Insert select from ignoring function errors
      SQL, Database Testing • mysql insert • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by A:

      That is not the normal way to use IODKU. Instead of

          ... UPDATE foo = i1.stuff ...

      do

          ... UPDATE foo = VALUES(foo), ...

      What is the Unique key in the table?
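
      A sketch of the suggested form, assuming a hypothetical target table t with a unique key on id and a source aliased i1 as in the snippet above:

          INSERT INTO t (id, foo, bar)
          SELECT i1.id, i1.stuff, i1.other
          FROM   incoming AS i1
          ON DUPLICATE KEY UPDATE
                 foo = VALUES(foo),   -- reuse the value this row would have inserted
                 bar = VALUES(bar);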
    • How to build an events application with private events sharing?
      Usability Testing • user behavior password • terrea
      0 Votes • 2 Posts • 0 Views

      Answered by C:

      There should be an organizer, who will invite the people they want. To make it easier to send multiple invitations at once you can introduce groups. Then the organizer can send an invitation to a whole group, let's say "Elm Street neighbours" or "My closest friends". There can be any number of groups, and everyone can be in any number of different groups. Anyone can see only matches that are either open to everyone or to which they have an invitation. Neither a password nor a link is needed. And then you can introduce other features, like "can other group members invite new participants" and so on.
    Page 1 of 736