    SOFTWARE TESTING

    Topics created by elysha

      Primary File Organization in DBMS - Files of ordered records (sorted files)
      SQL, Database Testing • files, storage, storage-engine, database-engine, physical-design • elysha

      0 Votes • 2 Posts • 0 Views

      I think it's just a sloppy turn of phrase by the book authors. Clearly, if you don't do raw I/O (which, as you say, few modern databases do), you have limited control over how your records are placed on disk. DBMSes minimise the effect of potential fragmentation of database files by the operating system using these methods:

      • Space preallocation. Many DBMSes can be configured to request space allocation in chunks (terminology differs, but they are often referred to as "extents") much larger than the atomic I/O units ("pages" or "blocks"), thus reducing the scale of database file fragmentation.

      • Data file placement. Using the mechanism often called "tablespaces", or simply data file directories, one can place tables whose space allocation characteristics are deemed critical onto separate file systems, where they can grow without interference from other processes.

      • I/O buffers. This is by far the most useful and important method. A well-tuned database rarely waits for a synchronous I/O operation to read data from disk; all the data pages it needs are available in memory, often in less fragmented form, by means of the file system cache and/or the database buffer pool. DBMSes often employ separate threads that "prefetch" or "read ahead" data file blocks before they are actually needed for query processing.

      See also https://en.wikipedia.org/wiki/File_system_fragmentation#Mitigation on Wikipedia.
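A rough Python illustration of the space-preallocation idea (a minimal sketch under made-up numbers — PAGE_SIZE, EXTENT_PAGES and the ensure_capacity helper are hypothetical, not taken from any real DBMS):

```python
import os
import tempfile

PAGE_SIZE = 8192       # the atomic I/O unit ("page")
EXTENT_PAGES = 128     # pages per allocation request ("extent"), i.e. 1 MiB chunks

def ensure_capacity(path, pages_needed):
    """Grow the data file in whole extents, so the file system can hand out
    large contiguous runs instead of allocating one page at a time."""
    extent = EXTENT_PAGES * PAGE_SIZE
    needed = pages_needed * PAGE_SIZE
    if os.path.getsize(path) < needed:
        # round the requested size up to a whole number of extents
        new_size = -(-needed // extent) * extent
        with open(path, "r+b") as f:
            f.truncate(new_size)   # extending truncate fills with zeros

with tempfile.TemporaryDirectory() as d:
    datafile = os.path.join(d, "table.dat")
    open(datafile, "wb").close()
    ensure_capacity(datafile, pages_needed=5)  # needs 5 pages, gets a whole extent
    size_after = os.path.getsize(datafile)

print(size_after)  # 1048576, i.e. one 1 MiB extent rather than 5 * 8192 bytes
```

Real engines differ in the details, but rounding allocation requests up to large extents is the common core of the technique.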
      SQL Server - can a select query be executed to specify column names for the insert into command?
      SQL, Database Testing • sql-server, primary-key • elysha

      0 Votes • 2 Posts • 0 Views

      Something like this:

          declare @sql nvarchar(max) =
          (
              select concat(
                         'insert into drugs (',
                         STRING_AGG(cast(name as nvarchar(max)), ', ') within group (order by column_id),
                         ') select ',
                         STRING_AGG(cast(name as nvarchar(max)), ', ') within group (order by column_id),
                         ' from drugs')
              from sys.columns c
              where c.object_id = object_id('drugs')
                and c.is_identity = 0
          )
          print (@sql)
          exec (@sql)
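The same metadata-driven approach can be sketched in Python against SQLite, where pragma table_info stands in for sys.columns and its pk flag stands in for the is_identity check (the drugs table here is a toy example, not the schema from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table drugs (id integer primary key, name text, dose int)")
con.execute("insert into drugs (name, dose) values ('aspirin', 100)")

# Build the column list from catalog metadata, skipping the auto-generated key.
# Rows of pragma table_info are (cid, name, type, notnull, dflt_value, pk).
cols = [row[1] for row in con.execute("pragma table_info(drugs)") if row[5] == 0]
collist = ", ".join(cols)
sql = f"insert into drugs ({collist}) select {collist} from drugs"
print(sql)  # insert into drugs (name, dose) select name, dose from drugs

con.execute(sql)  # duplicates every row, letting the key re-generate itself
print(con.execute("select count(*) from drugs").fetchone()[0])  # 2
```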
      PostgreSQL Recursive Update resulting in Sequential Scan
      SQL, Database Testing • postgresql, cte, postgresql-13, recursive • elysha

      0 Votes • 2 Posts • 0 Views

      This seems to always use an index scan (at least on Postgres 14):

          with recursive foo as (
              select id, manager_id, department
              from emp
              where id = 1000
              union all
              select emp.id, emp.manager_id, emp.department
              from emp
              join foo on emp.manager_id = foo.id
          )
          update emp
          set department = 'IT'
          where emp.id = any (array(select id from foo));

      If you have fast (SSD) disks, you might want to consider lowering random_page_cost to make Postgres favor index scans in general.
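The shape of that recursive update can be tried out in Python with SQLite (a sketch over a made-up emp table; SQLite has no arrays, so a plain IN replaces = any (array(...))):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table emp (id integer primary key, manager_id int, department text)")
con.executemany("insert into emp values (?, ?, ?)", [
    (1000, None, "Sales"),  # root of the subtree being updated
    (1001, 1000, "Sales"),
    (1002, 1001, "Sales"),
    (2000, None, "HR"),     # unrelated subtree, must stay untouched
])

# Walk the management chain down from id = 1000, then update every id found
con.execute("""
    with recursive foo as (
        select id from emp where id = 1000
        union all
        select emp.id from emp join foo on emp.manager_id = foo.id
    )
    update emp set department = 'IT'
    where id in (select id from foo)
""")

it_ids = [r[0] for r in con.execute(
    "select id from emp where department = 'IT' order by id")]
print(it_ids)  # [1000, 1001, 1002] -- the HR row is untouched
```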
      Can you backup a replicated database?
      SQL, Database Testing • sql-server, backup, sql-server-2016, transactional-replication • elysha

      0 Votes • 2 Posts • 0 Views

      Yes, you can and should be backing up your replicated databases (including the system-generated database called distribution) just the same as any other databases, to the tolerance of data loss you can afford. Please follow the guidelines in https://docs.microsoft.com/en-us/sql/relational-databases/replication/administration/back-up-and-restore-replicated-databases?view=sql-server-ver15 . The procedure for backing them up is pretty much the same as for any of your other databases (Full, Differential, and Transaction Log backups all being available for use, depending on your database's Recovery Model).

      There are more specific recommendations in the aforementioned documentation to pay attention to when it comes time to restore those backups, such as:

          Replication supports restoring replicated databases to the same server and database from which the backup was created. If you restore a backup of a replicated database to another server or database, replication settings cannot be preserved. In this case, you must re-create all publications and subscriptions after backups are restored.

      There are also specific steps and guidance in https://docs.microsoft.com/en-us/sql/relational-databases/replication/administration/strategies-for-backing-up-and-restoring-snapshot-and-transactional-replication?view=sql-server-ver15#publication-database-read-only-transactional-replication which you should carefully read through prior to your first restore.
      How to shorten UUIDs with Postgres?
      SQL, Database Testing • postgresql, encoding, uuid • elysha

      0 Votes • 2 Posts • 0 Views

      UUIDs are in fact 16-byte binary strings, and you normally want to store and manipulate them as such. Only when passing UUIDs via a protocol that cannot deal with binary strings (e.g. HTTP queries) do you need to serialize them as character strings. The two common methods you are referring to in your question are the hexadecimal representation (a44521d0-0fb8-4ade-8002-3385545c3318) and base-64 encoding (mhvXdrZT4jP5T8vBxuvm75). Pick the one that you like and that is compatible with upstream and downstream applications.
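The length difference between the two serializations is easy to verify in Python (the 22-character example above comes from a non-standard alphabet, such as the one the shortuuid library uses; plain URL-safe base64 gives the same length):

```python
import base64
import uuid

u = uuid.UUID("a44521d0-0fb8-4ade-8002-3385545c3318")

# Canonical hexadecimal form: 32 hex digits + 4 dashes = 36 characters
hex_form = str(u)

# URL-safe base64 over the 16 raw bytes: 22 characters once padding is stripped
b64_form = base64.urlsafe_b64encode(u.bytes).rstrip(b"=").decode()

print(len(hex_form))  # 36
print(len(b64_form))  # 22

# Both serializations round-trip losslessly back to the same 16 bytes
padded = b64_form + "=" * (-len(b64_form) % 4)
restored = uuid.UUID(bytes=base64.urlsafe_b64decode(padded))
print(restored == u)  # True
```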
      SQL cumulative sum
      SQL, Database Testing • mysql • elysha

      0 Votes • 2 Posts • 0 Views

      Ranking your TotalSales by Market and then selecting back rows where rank is 1 should help you get what you’re looking for.

          select MarketId, ProductName, TotalSales
          from (
              select Contacts.MarketID,
                     Y.ProductName,
                     sum(Y.Price) as TotalSales,
                     rank() over (
                         partition by Contacts.MarketID
                         order by sum(Y.Price) desc
                     ) as "sale_rank"
              from (
                  select T.ProductName as ProductName,
                         T.Price as Price,
                         Order_.CustomerID as CustomerID
                  from (
                      select Product.ProductName as ProductName,
                             OrderedItems.OrderID as OrderID,
                             OrderedItems.InvoicePrice as Price
                      from Product
                      join OrderedItems on OrderedItems.ProductCode = Product.ProductCode
                  ) as T
                  join Order_ on Order_.OrderID = T.OrderID
              ) as Y
              join Contacts on Contacts.CustomerID = Y.CustomerID
              group by Contacts.MarketID, Y.ProductName
          ) as s
          where s.sale_rank = 1

          MarketId   ProductName      TotalSales
          MarketA    Hard drive 1TB   2000
          MarketB    Hard drive 1TB   1000

      https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=d9efb15c2a16ac2fdc2617dcb930f7b7
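The rank-then-filter pattern itself can be demonstrated in Python with SQLite's window functions (using a deliberately simplified, made-up sales table instead of the Product/OrderedItems/Order_/Contacts schema from the answer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table sales (MarketID text, ProductName text, Price int)")
con.executemany("insert into sales values (?, ?, ?)", [
    ("MarketA", "Hard drive 1TB", 1200), ("MarketA", "Hard drive 1TB", 800),
    ("MarketA", "Mouse", 50),
    ("MarketB", "Hard drive 1TB", 1000), ("MarketB", "Keyboard", 300),
])

# Rank total sales per market, then keep only the top-ranked product(s)
rows = con.execute("""
    select MarketID, ProductName, TotalSales
    from (
        select MarketID, ProductName,
               sum(Price) as TotalSales,
               rank() over (partition by MarketID
                            order by sum(Price) desc) as sale_rank
        from sales
        group by MarketID, ProductName
    )
    where sale_rank = 1
    order by MarketID
""").fetchall()
print(rows)  # [('MarketA', 'Hard drive 1TB', 2000), ('MarketB', 'Hard drive 1TB', 1000)]
```

Using rank() rather than a plain max keeps ties: two products sharing a market's top total would both come back.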
      Query that returns no records hangs on lock
      SQL, Database Testing • sql-server, locking, blocking • elysha

      0 Votes • 2 Posts • 0 Views

      carriann

      There's no direct way to do what you want: read rows at read uncommitted isolation, only taking U locks on rows that match a given predicate.

      How UPDLOCK works

      When you specify UPDLOCK, SQL Server takes U locks at the row or page level. If a table lock is needed, SQL Server takes an X lock instead. Table and page locks are held to the end of the transaction regardless of whether any matching rows were found.

      When row-level locks are used, SQL Server always takes a U lock on a row before testing whether it matches the conditions. In the special case where a row is tested and found not to match the predicate in the same data access operator (e.g. a scan or seek), the U lock on the current row is released just before acquiring a U lock on the next row. Otherwise, the U lock is held to the end of the transaction as usual.

      That is a description of how SQL Server implements the UPDLOCK hint. I do understand that this is not clear from the documentation ( https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table#arguments ), and that it is not how you would like it to behave. Nevertheless, that is how it is.

      Workarounds

      There is no perfect way to implement what you want, but there are a couple of partial solutions. The first one is WITH (UPDLOCK, READPAST, ROWLOCK):

          SELECT ST.PrimaryKey
          FROM dbo.SomeTable AS ST WITH (UPDLOCK, READPAST, ROWLOCK)
          WHERE ST.ColumnA = 'A'
            AND ST.ColumnB BETWEEN 6 AND 8
            AND ST.ColumnC != 'C';

      This will skip blocking if the row to be tested has an incompatible row-level lock. The query will still block if the incompatible lock is held at the page or table level.

      The second workaround is to read at the session isolation level, then take U locks in a separate data access by joining back on the primary key:

          SELECT CA.PrimaryKey
          FROM dbo.SomeTable AS ST -- No hints here
          CROSS APPLY
          (
              SELECT TOP (1)
                  ST2.PrimaryKey
              FROM dbo.SomeTable AS ST2 WITH (UPDLOCK, FORCESEEK)
              WHERE ST2.PrimaryKey = ST.PrimaryKey
          ) AS CA
          WHERE ST.ColumnA = 'A'
            AND ST.ColumnB BETWEEN 6 AND 8
            AND ST.ColumnC != 'C';

      It is important to follow the pattern exactly there, including the TOP. When implemented correctly, this method should be pretty reliable in practice.

      Upserts

      I have to mention that the question looks like an upsert pattern. Taking UPDLOCK alone is not sufficient: you also need a transaction around every part of the upsert, and a SERIALIZABLE hint to ensure that any row that does not exist continues not to exist until the insert is performed. Update locks can only be taken on rows that exist. See https://michaeljswart.com/2017/07/sql-server-upsert-patterns-and-antipatterns/ by Michael J Swart. Of course, when you use SERIALIZABLE semantics, your blocking chances might increase substantially.
      PostgreSQL partitions consuming huge amounts of memory
      SQL, Database Testing • postgresql, partitioning, postgresql-13 • elysha

      0 Votes • 2 Posts • 0 Views

      Analeea

      Yes, more partitions mean more overhead in memory. Refer to the text below, taken from the docs:

          It is also important to consider the overhead of partitioning during query planning and execution. The query planner is generally able to handle partition hierarchies with up to a few hundred partitions. Planning times become longer and memory consumption becomes higher as more partitions are added. This is particularly true for the UPDATE and DELETE commands. Another reason to be concerned about having a large number of partitions is that the server's memory consumption may grow significantly over a period of time, especially if many sessions touch large numbers of partitions. That's because each partition requires its metadata to be loaded into the local memory of each session that touches it.

          With data warehouse type workloads, it can make sense to use a larger number of partitions than with an OLTP type workload. Generally, in data warehouses, query planning time is less of a concern as the majority of processing time is spent during query execution.

          With either of these two types of workload, it is important to make the right decisions early, as re-partitioning large quantities of data can be painfully slow. Simulations of the intended workload are often beneficial for optimizing the partitioning strategy. Never assume that more partitions are better than fewer partitions and vice-versa.

      Reference: https://www.postgresql.org/docs/14/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES (PostgreSQL 14 Documentation)
      After installing a new SSL certificate, my MSSQL server can no longer be restarted
      SQL, Database Testing • sql-server, ssl • elysha

      0 Votes • 2 Posts • 0 Views

      In addition to the permissions mentioned in another answer: if the service account that the SQL Engine runs under is not a local administrator, you need to grant it Read access to the certificate's private key.

      1. Open the Certificates management console, or add it as a snap-in to a fresh MMC session.
      2. Select Local Computer for the set of certificates to manage.
      3. Find the certificate in question. It is probably under Personal --> Certificates.
      4. Right-click and select All Tasks --> Manage Private Keys....
      5. Add the SQL Engine service account, giving it Read access. (It'll default to Full Control when you add it, but Read is sufficient.)

      (In a cluster, you have to do this on each node.)
      Which sql statement is more efficient (Oracle SQL)?
      SQL, Database Testing • oracle, oracle-12c • elysha

      0 Votes • 2 Posts • 0 Views

      The question is: is Oracle smart enough to delay the join on customer_order in the first query? If it has good statistics to work with, I think the CBO is going to come through for you. If that's the case, then it's doing the same work in each, and the cost should be roughly the same.

      Of course, this is speculation. If you have test data, I would use something like autotrace to see how the queries were actually executed. Here's a quick reference to using autotrace if you're not familiar with it: https://www.youtube.com/watch?v=fJSRYCC2nX4

      Best of luck!
      Data corruption - errors found in off-row data - am I losing data?
      SQL, Database Testing • sql-server, corruption • elysha

      0 Votes • 2 Posts • 0 Views

      Personally, I would not trust data returned from a table where CHECKDB found problems. Assuming backups (full and log) are available, I would:

      1. Restore a copy of the database from the last full backup (or full + diff) taken prior to the first time CHECKDB found the error.
      2. Restore all transaction log backups, up to the latest.
      3. Recover the database in STANDBY mode, so we can still restore more logs.
      4. Run CHECKDB on the restored database.
      5. If CHECKDB completed without error, compare the data against the healthy database.

      At this point, I would not bother fixing the existing database at the risk of facing more issues later. I would just schedule a maintenance window to restore the tail log and swap the databases (the nature and configuration of the database may influence my plans, of course).

      In a situation where this option is not possible (i.e. no log backups available, or not even a reliable full backup), copying data to a new table like you tried is probably your best bet, but there is no way to ensure the data you get is really what you should get.
      Why does mysqldump not backup procedures?
      SQL, Database Testing • mysql, backup, mysqldump, stored-procedures • elysha

      0 Votes • 2 Posts • 0 Views

      There is a very strong likelihood you need one more grant:

          GRANT SELECT ON mysql.proc TO `dump`@`localhost`;

      Reason: dumping stored procedures requires reading from mysql.proc.

      UPDATE 2021-12-25 12:20

      According to https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_routines :

          --routines, -R

          Include stored routines (procedures and functions) for the dumped databases in the output. This option requires the global SELECT privilege. The output generated by using --routines contains CREATE PROCEDURE and CREATE FUNCTION statements to create the routines.

      You may need to adjust grants as follows:

          GRANT PROCESS, SELECT ON *.* TO `dump`@`localhost`;

      or create a separate user for the sole purpose of dumping stored procedures:

          GRANT PROCESS, SELECT ON *.* TO `dump_storedprocs`@`localhost`;
      Calling a powershell script using xp_cmdshell with parameter in T-SQL
      SQL, Database Testing • sql-server, powershell, stored-procedures • elysha

      0 Votes • 2 Posts • 0 Views

      If you look at the line that your print command returns, it looks like this:

          powershell -command "C:\SQL_Powershell\General\MoveFile.ps1" ""C:\BPA Exports\Change Point 1\Processing\"" "ExportCP1Data" ".csv"

      Note the "" before and after the path... I guess those should be simple " (not doubled). Take them out of your @sqlFilePath variable and try again.
      AG failover successful, but old Primary node thinks it's still primary after bringing it back online
      SQL, Database Testing • availability-groups, sql-server-2017, failover • elysha

      0 Votes • 2 Posts • 0 Views

          However, when I asked my colleague to bring the clustered nodes back online, when they came up, they think they're the Primary replica too.

      That's expected if the nodes coming online are able to achieve quorum and thus form the cluster, and can't talk to the partitioned nodes currently online. The partitioned node (in this case your DC2 instance) is primary because quorum was forced on it. If the two sides were able to talk, the forced-quorum side would win and stay primary, with the non-forced side (your DC1) obtaining the latest updates from the previously partitioned side (DC2). This is all at the cluster level.

      The other part of this is the SQL level. Since the AG had to be forced online (due to the forced quorum), when the instances connect back to each other, the replicas on the non-forced side (your DC1 side) should be paused due to the forced failover at the AG level. Also, due to the FCI in the environment, there will be no automatic failover of the AG.

          Therefore, my question is - is there something I can do to make DC1 aware it's now the secondary [...]

      If connectivity is correctly restored, this will all be automatic. I want you to be aware that by "testing" this way, the cluster and SQL were running in a split-brain scenario, which is not something that should be done.
      MySQL: de-identify names of users
      SQL, Database Testing • mysql • elysha

      0 Votes • 2 Posts • 0 Views

      If you want to turn a string of letters (or letters + digits), say a vehicle license plate, into another string with these attributes:

      • Given the same input string, you get the same encrypted string. (This lets you see that the same plate is showing up twice.)
      • There is no practical way to convert the encrypted string back into the original.

      then use:

          UPPER(LEFT(MD5(plate), 8))

      (where 8 is the number of letters and digits desired). Example:

          mysql> SELECT UPPER(LEFT(MD5('ABCDE'), 8));
          +------------------------------+
          | UPPER(LEFT(MD5('ABCDE'), 8)) |
          +------------------------------+
          | 2ECDDE39                     |
          +------------------------------+
          1 row in set (0.00 sec)

          mysql> SELECT UPPER(LEFT(MD5('ABCDE'), 8));
          +------------------------------+
          | UPPER(LEFT(MD5('ABCDE'), 8)) |
          +------------------------------+
          | 2ECDDE39                     |  -- reproducible
          +------------------------------+

      If you don't care about reproducibility, RAND() without a seed may be useful. If you need to retrieve the original, see AES_ENCRYPT, but there may be length constraints. With most techniques, there is a chance that two different plates will be encrypted into the same string.

      More in my fresh answer to that other Question: https://stackoverflow.com/a/70251271/1766831
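The same one-way pseudonym can be computed outside MySQL; a small Python equivalent (MD5 here is the standard algorithm MySQL's MD5() implements, so the values match; the deidentify helper is a made-up name):

```python
import hashlib

def deidentify(plate: str, length: int = 8) -> str:
    """Deterministic one-way pseudonym: the first `length` hex digits of MD5,
    uppercased -- the equivalent of MySQL's UPPER(LEFT(MD5(plate), length))."""
    return hashlib.md5(plate.encode()).hexdigest()[:length].upper()

print(deidentify("ABCDE"))                         # 2ECDDE39 -- matches the MySQL output above
print(deidentify("ABCDE") == deidentify("ABCDE"))  # True: reproducible
```

Truncating the hash keeps the output short at the cost of a higher collision chance, as noted above; eight hex digits give 2^32 possible values.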
      Need help with sub form fields
      Usability Testing • forms, website-design • elysha

      0 Votes • 2 Posts • 0 Views

      What you are trying to do is a kind of accordion form, similar to https://wrabit.github.io/accordion-form/ . I think you're too immersed in the form rather than the action. I would try to adjust the global layout to make the action clearer. Some tips:

      • What the user is adding is a new Extra Data, so I would put the add (+) button next to the general form title.
      • Put a name or title on each section. I would differentiate between a fixed section title and the editable name. A title always helps to clarify the difference between one element and another, and when the user deletes it, it's clear that they're deleting the entire section, not just the edited text.
      • Visibly differentiate the title area, whether by a typographical change, a background, etc. This helps establish hierarchy, differentiating the element from its editable area, and creates the partition between each of the new Extra Data sections added. In the image in the question, the centered line is confusing because it doesn't define a total division of each piece; it seems the second field is part of the first.
      • The x is closely related to closing a window; I would not hesitate to put a trash can in the title area of each section to remove it permanently.

      A basic draft:

      Try not to make the user dizzy with trips from one place to another. A form already implies the complication of "forcing" the user to insert data. Defining editing zones and action zones helps the clarity and understanding of the whole element.
      What is the best approach in structuring user testing for a task list design?
      Usability Testing • usability-testing, testing, ux-designer, prototype • elysha

      0 Votes • 2 Posts • 0 Views

      Map it to the real-world task the user would need to do; think about the business scenario and only that. Would it be 5 separate tasks or only one? If it is one, stick to one scenario. You could detail the tasks a bit, but also think in business concepts and parts of the business process. Your task list UI has to match real-world patterns, and if you have to add some system-related tasks or steps that carry no business value, you need to take it back to the drawing board before you run the test. Good luck!
      Best way to display and content manage a forfeit on a sports site?
      Usability Testing • data-display • elysha

      0 Votes • 2 Posts • 0 Views

      How to score a forfeit

      Rules vary on how to score a forfeit; I would look up the rules for your specific case. E.g. for FIFA it is 3-0 (unless the winning team's actual score was higher) (see https://en.wikipedia.org/wiki/Forfeit_(sport) ).

      How to display a forfeit

      I would display both the score and a "forfeit" label. I think the forfeit label should be more prominent, because the score is artificial in that case. Something like the example below.
      Placement of Sign-Up Forms on a Landing Page
      Usability Testing • forms, interaction-design, user-behavior • elysha

      0 Votes • 2 Posts • 0 Views

      Mystic

      A/B test it to make sure. It depends on what you are offering, how people reach the website, and the type of traffic you are getting. If people already know your product, they might fill in the form immediately when they visit your website. Others would need to find out more first, and usually this is the case: understanding your offer and gaining some trust before submitting their data.
      What is the best route to select a time?
      Usability Testing • interaction-design, time, datetimepicker • elysha

      0 Votes • 2 Posts • 0 Views

      morde

      I think there's room on mobile to have hour rows with 15-minute buttons. Hour rows are readable and scannable, so users should be able to skip down to their hour.