    SOFTWARE TESTING

    Topics created by Avante

    • A

      SQL Server on Linux, an issue with filesystem privileges for a backup creation
      SQL, Database Testing • sql server linux backup permissions • Avante

      0 Votes • 2 Posts • 0 Views

      Z

      The mssql user will need to log in again before it picks up the new group; the easiest way is to restart that server. Alternate solution: add the users that need to read the backup to sample_group, then:

          chown mssql:sample_group /tmp/backup_dir
          chmod 2750 /tmp/backup_dir
          sqlcmd -U sa -P ********* -Q "backup database AdventureWorks TO disk='/tmp/backup_dir/1.bak'"

      Test whether the users can read the backup (this depends on mssql's umask); if not:

          chmod a+r /tmp/backup_dir/1.bak
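      As a quick sanity check after changing the permissions, a minimal T-SQL sketch (reusing the directory and database name from the commands above) might be:

          -- run the backup again and let SQL Server verify the file it just wrote
          BACKUP DATABASE AdventureWorks TO DISK = '/tmp/backup_dir/1.bak';
          RESTORE VERIFYONLY FROM DISK = '/tmp/backup_dir/1.bak';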
    • A

      Silent Installation of Oracle 11gR2 Fails During NETCA setup in Redhat Linux 8
      SQL, Database Testing • linux oracle oracle 11g r2 redhat • Avante

      0 Votes • 2 Posts • 0 Views

      Mystic

      Rather than continue to attempt a hack to get an out-of-support version of the database to run on an operating system version it was never designed to run on, I would advise using a current version of the DB like 19c or 21c. I believe they should run on RHEL 8 with little or no problem. Either that, or if you must have Oracle 11gR2, then use the version of the OS that the database was designed for (RHEL 7).
    • A

      interpreting an explain statement for a slow query on MySQL 5.7.36
      SQL, Database Testing • mysql greatest n per group • Avante

      0 Votes • 2 Posts • 0 Views

      Mystic

      Add this index:

          INDEX(obj_type, obj_id, id)

      This looks like a groupwise-max query (see http://mysql.rjweb.org/doc.php/groupwise_max); it may have a better formulation. If you need to discuss it further, please provide SHOW CREATE TABLE post.
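      For illustration, one common groupwise-max formulation that such an index supports could look like the sketch below; the post table is from the question, but the column roles (latest id per obj_type/obj_id) are an assumption:

          -- the derived table finds the latest id per group and is covered by the index
          SELECT p.*
          FROM post AS p
          JOIN (SELECT obj_type, obj_id, MAX(id) AS id
                FROM post
                GROUP BY obj_type, obj_id) AS latest
            USING (obj_type, obj_id, id);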
    • A

      REINDEX takes significantly more time on production system than on local snapshot
      SQL, Database Testing • postgresql index postgresql performance • Avante

      0 Votes • 2 Posts • 0 Views

      G

      "we have never deleted or updated rows in the table"

      Then there is typically no reason to run VACUUM FULL at all. Maybe VACUUM (without FULL) is useful, but typically that is covered by routine autovacuum (https://www.postgresql.org/docs/current/routine-vacuuming.html).

      REINDEX may be useful. Consider the scenarios listed in the manual: https://www.postgresql.org/docs/current/sql-reindex.html#id-1.9.3.162.5 and https://www.postgresql.org/docs/current/routine-reindex.html. If you create the index after filling the table, REINDEX is certainly wasted.

      Not exactly sure where the observed difference comes from. (A hosted service is always a bit of a black box.) Either way, VACUUM FULL rebuilds the table and its indexes from scratch, so an additional REINDEX adds nothing useful over just VACUUM FULL. See https://stackoverflow.com/a/31012925/939860
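      For reference, the commands being weighed against each other, shown here on a hypothetical table big_table, are:

          VACUUM FULL big_table;        -- rewrites the table and rebuilds all of its indexes
          REINDEX TABLE big_table;      -- rebuilds indexes only; redundant right after VACUUM FULL
          VACUUM (VERBOSE) big_table;   -- plain VACUUM, normally handled by autovacuum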
    • A

      Is there any way to let Postgres know it can filter results and then calculate aggregations?
      SQL, Database Testing • postgresql performance aggregate postgresql performance running totals • Avante

      0 Votes • 2 Posts • 0 Views

      T

      Be explicit about what you want and put your filter into a subquery in FROM:

          SELECT ...
          FROM (SELECT ... FROM a WHERE /* your filter */) AS a_filtered
          JOIN b ON ...
          GROUP BY ...
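      A worked version of that skeleton, with hypothetical tables orders (the filtered side) and order_items, might look like:

          -- filter orders first, then join and aggregate only the surviving rows
          SELECT o.id, sum(i.amount) AS total
          FROM (SELECT id FROM orders WHERE created_at >= '2021-01-01') AS o
          JOIN order_items AS i ON i.order_id = o.id
          GROUP BY o.id;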
    • A

      Wall clock hourly buckets out of time intervals with postgres or timescale
      SQL, Database Testing • postgresql timescaledb • Avante

      0 Votes • 2 Posts • 0 Views

      Y

      Generate a series of timestamp ranges, join with your data, calculate the overlap, and aggregate:

          SELECT user_id,
                 bucket,
                 coalesce(sum(upper(inters) - lower(inters)), interval '0') AS duration
          FROM (SELECT tab.user_id,
                       lower(ranges.r) AS bucket,
                       tsrange(tab.start, tab.end, '[)') * ranges.r AS inters
                FROM (SELECT tsrange(st, st + '1 hour'::interval, '[)') AS r
                      FROM generate_series('2021-01-01 00:00:00'::timestamp,
                                           '2021-12-31 23:00:00'::timestamp,
                                           '1 hour'::interval) AS g(st)
                     ) AS ranges
                LEFT JOIN tab
                  ON tsrange(tab.start, tab.end, '[)') * ranges.r <> 'empty'
               ) AS intersections
          GROUP BY user_id, bucket;
    • A

      Potential pitfalls for upgrading PostgreSQL from 9.x/10.x to 13.x?
      SQL, Database Testing • postgresql debian upgrade • Avante

      0 Votes • 2 Posts • 0 Views

      M

      There was never a PostgreSQL version 9, but that is beside the point here. Most of the time, you won't have a problem with such an upgrade. But each major version of PostgreSQL introduced some incompatibilities, which are documented in the release notes of the base release (9.6, 10, 11, ...), so you cannot be certain. You should read through all these release notes to get an idea of what problems you may encounter, but the important part is that you test your application thoroughly. A test is the only way to make sure you won't run into problems.
    • A

      SQL Server Access Database in Filesystem
      SQL, Database Testing • sql server sql server 2008 sql server 2008 r2 backup windows server • Avante

      0 Votes • 2 Posts • 0 Views

      F

      If you can find the .mdf and .ldf files, you can try to attach them to the new SQL instance using the statement below:

          CREATE DATABASE ON (FILENAME = ''), (FILENAME = '') FOR ATTACH;

      You can also check this solution: https://stackoverflow.com/questions/61954107/how-to-attach-a-database-by-mdf-file-that-has-not-been-detached
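      The placeholders above lost their values in this excerpt; a filled-in sketch with a hypothetical database name and file paths would be:

          -- hypothetical name and paths; point FILENAME at the copied .mdf and .ldf files
          CREATE DATABASE RecoveredDB
              ON (FILENAME = 'C:\Data\RecoveredDB.mdf'),
                 (FILENAME = 'C:\Data\RecoveredDB_log.ldf')
              FOR ATTACH;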
    • A

      Does a read-write-split on your galera cluster still make sense when running the servers on the same (private) cloud environment?
      SQL, Database Testing • mariadb clustering galera • Avante

      0 Votes • 2 Posts • 0 Views

      J

      How big is the dataset? How big is the disk? How much RAM? How many queries per second? I am fishing for whether there is actually a problem you are trying to solve. Meanwhile...

      Having one "Primary" and two read-only "Replicas" is possible with Galera. It may simplify establishing the connections, or it may make it more complex. It may also lead to the Primary having a different load than the Replicas.

      Most of today's hardware (whether in the Cloud or private) has very few parameters:

      • SSD is much better than HDD, especially if I/O bound. The cloud may be able to provide higher bandwidth, but this is unlikely to be the limiting component.
      • RAM comes in many sizes. If it is practical to have twice as much as the dataset size, then most operations will need only a little I/O. If the dataset is growing rapidly, the Cloud probably has a very easy way to add RAM; doing it yourself is a hassle. Galera makes this "easy" by letting you take a node out of the cluster, add RAM (or replace the machine), and put it back in. If private, you are left with old hardware; if Cloud, they repurpose it for someone else.
      • With 3 read-write nodes, each has a similar load. Access (read or write) needs to be redirected to some node; a "proxy" or something similar is useful here.
      • High I/O or high CPU often means "inefficient queries" -- this should be investigated before throwing hardware at the problem.
      • For a very high number of read-only connections, you can hang regular Replicas off each cluster node. This can be scaled "infinitely".
    • A

      SQL Server SSMS - Edit 200 rows functionality - is it possible for two or more people to alter each other's work using this functionality?
      SQL, Database Testing • sql server ssms • Avante

      0 Votes • 2 Posts • 0 Views

      E

      There's no such thing as "commit whole table". A modification is performed that affects some number of rows based on the WHERE clause. At the end of the transaction (end of statement, unless you specified BEGIN TRAN and you now say COMMIT or ROLLBACK), the rows that were modified are committed or rolled back.

      As for the possibility of a conflict, I can see two ways to seek an answer: run a trace and see what SQL is submitted, or just try it. I ran a trace and didn't see anything at the T-SQL level that protects you from the conflict. When I tried it, however, doing the modification as the "second person", I got a message in SSMS.

      What SSMS did was to submit an UPDATE with a WHERE clause for each column (comparing against the values read when the row was fetched). If it got 0 rows back, it assumes that somebody else modified the row while we were looking at it, and asks us what to do (I don't have the source code for SSMS, so this sentence is my assumption of how it does it). If we say "Yes", it submits one more UPDATE with a WHERE clause on just the primary key column. Here are those two UPDATEs from my trace, the second one submitted after I answered "Yes" in the above dialog:

          exec sp_executesql N'UPDATE TOP (200) Sales.Customer SET TerritoryID = @TerritoryID WHERE (CustomerID = @Param1) AND (PersonID IS NULL) AND (StoreID = @Param2) AND (TerritoryID = @Param3) AND (AccountNumber = @Param4) AND (rowguid = @Param5) AND (ModifiedDate = @Param6)',N'@TerritoryID int,@Param1 int,@Param2 int,@Param3 int,@Param4 nvarchar(10),@Param5 uniqueidentifier,@Param6 datetime',@TerritoryID=5,@Param1=2,@Param2=1028,@Param3=1,@Param4=N'AW00000002',@Param5='E552F657-A9AF-4A7D-A645-C429D6E02491',@Param6='2014-09-12 11:15:07.263'

          exec sp_executesql N'UPDATE TOP (200) Sales.Customer SET TerritoryID = @TerritoryID WHERE (CustomerID = @Param7)',N'@TerritoryID int,@Param7 int',@TerritoryID=5,@Param7=2
    • A

      Recursive CTE in PostgreSQL to generate data with different frequencies
      SQL, Database Testing • postgresql cte recursive • Avante

      0 Votes • 2 Posts • 0 Views

      J

      I think you don't need a recursive solution:

          select row_number() over (order by r.fk, s.p) as pk,
                 r.fk
          from generate_series(1, 10) as r (fk),
               generate_series(1, sc.getfreq(r.fk)) as s (p);
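      The query assumes a helper sc.getfreq(int); a hypothetical definition that makes it runnable (each key repeated between one and three times) could be:

          -- throwaway schema and frequency function, only for trying the query out
          create schema if not exists sc;
          create function sc.getfreq(fk int) returns int
            language sql immutable
            as 'select 1 + fk % 3';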
    • A

      Does the timestamp type have (hidden) milliseconds?
      SQL, Database Testing • mysql 5.7 timestamp • Avante

      0 Votes • 2 Posts • 0 Views

      T

      "Is there any way to extract more accuracy from updated?"

      No. It is not possible to select data which is absent from the table and cannot be calculated from the data present in the tables.

      "Does MySQL store more precision than the second, even though it shows only seconds?"

      No. It stores exactly what you asked for. It can store up to microsecond precision, but you have not asked for that accuracy.

      "Well, it really is déjà vu all over again with this question! There appears to be a bug in MySQL's implementation of this... (quelle surprise - a bug in MySQL...) at least on dbfiddle.uk and dbfiddle.com." – Vérace - get VACCINATED NOW

      You call NOW() and you get the datetime without milliseconds. Call NOW(3) or NOW(6) and you get a more accurate value: https://dbfiddle.uk/?rdbms=mysql_5.7&fiddle=ccaf9b2c25e372c7bf0ca82c66ded787

      "Well, good news from the MySQL camp - you don't need to worry about fractional values of a second at all - 5.7 doesn't support them" – Vérace - get VACCINATED NOW

      Never consult the person from the MySQL camp who said this nonsense again.
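      A small MySQL sketch of the point being made (the table and column names are hypothetical):

          -- TIMESTAMP defaults to 0 fractional digits; TIMESTAMP(6) keeps microseconds
          CREATE TABLE t (
              updated    TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
              updated_us TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6)
          );
          SELECT NOW(), NOW(3), NOW(6);   -- only NOW(3) and NOW(6) return fractional seconds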
    • A

      The performance_schema database of MySQL has been deleted by mistake. Is there any way to recover it?
      SQL, Database Testing • mysql 8.0 • Avante

      0 Votes • 2 Posts • 0 Views

      D

      Go to the command prompt and run the following:

          mysql_upgrade --user=root --password=password --force

      Restart MySQL. You should be good to go.
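      To confirm the schema is back after the restart, a quick check (not specific to any particular setup) would be:

          -- performance_schema and its tables should be listed again
          SHOW DATABASES LIKE 'performance_schema';
          SELECT COUNT(*) FROM information_schema.tables
          WHERE table_schema = 'performance_schema';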
    • A

      Full text search can not find incomplete words
      SQL, Database Testing • sql server full text search • Avante

      0 Votes • 2 Posts • 0 Views

      O

      Try this one:

          SELECT * FROM flags WHERE CONTAINS(FlagColors, '"Blu*"');

      See: https://docs.microsoft.com/en-gb/archive/blogs/sqlforum/faq-how-can-i-perform-wildcard-searches-in-full-text-search
    • A

      SQL Update Table Based on Join and Multiple Where Clauses
      SQL, Database Testing • mysql • Avante

      0 Votes • 2 Posts • 0 Views

      J

      You don't want LEFT. Forget about the version with the syntax error.

      If you are going to do both the SELECT and the UPDATE, add FOR UPDATE to the SELECT and put them between START TRANSACTION and COMMIT.

      If the tables are large, these indexes should help:

          tbl_client:       INDEX(mfl_code, id)
          tbl_appointment:  INDEX(app_status, active_app, client_id)
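      A sketch of the SELECT-then-UPDATE pattern described above; the join and filter columns are guesses based on the index suggestions, not the asker's actual query:

          START TRANSACTION;

          -- lock the rows you are about to change
          SELECT a.id
          FROM tbl_appointment AS a
          JOIN tbl_client      AS c ON c.id = a.client_id
          WHERE c.mfl_code = 12345 AND a.app_status = 'pending' AND a.active_app = 1
          FOR UPDATE;

          UPDATE tbl_appointment AS a
          JOIN tbl_client        AS c ON c.id = a.client_id
          SET a.app_status = 'closed'
          WHERE c.mfl_code = 12345 AND a.active_app = 1;

          COMMIT;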
    • A

      Can you use PREPARE statement or plpgsql function to make first query on new connection fast?
      SQL, Database Testing • postgresql postgresql performance prepared statement • Avante

      0 Votes • 2 Posts • 0 Views

      C

      There is no way to cache execution plans across database connections in PostgreSQL. However, I doubt that planning time is really your problem. Convince yourself by running EXPLAIN (ANALYZE) /* your query */ and looking at the planning time; that will not vary much. It is only this planning time that you can save by using prepared statements.

      The more likely cause for varying execution times is caching. It makes a lot of difference whether you read 1000 8kB pages from disk or find them cached in RAM. To diagnose that, run EXPLAIN (ANALYZE, BUFFERS) /* your query */ and look at the buffer count. The higher it is, the more sensitive your performance will be to caching effects. Try to tune your queries to have a smaller footprint, or get more RAM to cache the database.
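      A minimal sketch of both diagnostics, on a purely hypothetical query:

          -- compare planning time and execution time, and check the buffer counts
          EXPLAIN (ANALYZE, BUFFERS)
          SELECT * FROM orders WHERE customer_id = 42;

          -- a prepared statement only skips the planning step on later executions
          PREPARE by_customer (int) AS SELECT * FROM orders WHERE customer_id = $1;
          EXECUTE by_customer(42);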
    • A

      Need some help with text color suggestion
      Usability Testing • color color scheme • Avante

      0 Votes • 2 Posts • 0 Views

      O

      • Font-weight: bold or extra-bold
      • Color: any color at maximum saturation, or white
      • A black outline around the text, as explained in https://stackoverflow.com/a/4919231/5765312
    • A

      Drawing on Mobile with fingers: magnifier or offset from finger location?
      Usability Testing • user behavior usability testing user expectation mobile application user research • Avante

      0 Votes • 2 Posts • 0 Views

      K

      There are several possibilities. I would try making a separate working zone on the same screen:

      • Horizontal split-screen to get a working area in the lower half
      • Color difference between the image in the working area and the real one
      • A button to move the partition boundary vertically
      • A button with the option to see a working grid in both partitions
      • The possibility of zooming in the work area, with or without repercussion on the real image. For example: zoom both with the usual two-finger gesture, and zoom just one image with three fingers (or vice versa)

      The user paints in the work area (the lower part) and visualizes the result in the real image (the upper part).
    • A

      What is the idiomatic way to reload EKS/Kube configs when a new image is pushed to ECR
      Continuous Integration and Delivery (CI, CD) • kubernetes continuous deployment continuous delivery eks aws ecr • Avante

      0 Votes • 2 Posts • 0 Views

      C

      The "best approach" in theory (coming from the perspective of k8s) would be GitOps: cloud-native continuous deployment. Your question lacks some important information: how are you doing your deployments right now? Do you have limitations in terms of security and auditability?

      Sharing my personal 2 cents: using GitOps with ArgoCD outsources and guarantees the continuous running of all of your workloads (pods, for example). Argo will take care of them; even if somebody deletes a pod, it will restore the previous number of pods, essentially reconciling the desired state with the real state. Another benefit is that you can very easily track your apps by version, commit, tag, and so on; you can even roll back (this works only under specific circumstances).

      As an example, simply changing the image inside a pod spec in a directory that Argo keeps track of is enough for the new version to be rolled out as safely as possible. In our case we have a separate service (it's just a repo with tags) that acts as a wrapper for all the services; once we have a new release we deploy under a pattern of a directory layout, Argo catches it, and everything is re/deployed automatically. The UI is great and they even expose a REST API.

      Once you have Argo (or some other tool), you take on additional effort in tracking the tool's releases, versions, backward compatibility, Helm metadata support, security, access, password rotations, the best way to drop Argo into the cluster, and so on; but that is another story that comes naturally once you decide to go down this path.
    • A

      Are multiline lists valid in Terraform?
      Continuous Integration and Delivery (CI, CD) • terraform • Avante

      0 Votes • 2 Posts • 0 Views

      R

      Yes. According to https://www.terraform.io/docs/language/expressions/types.html#lists-tuples:

      "List literals can be split into multiple lines for readability, but always require a comma between values. A comma after the final value is allowed, but not required. Values in a list can be arbitrary expressions."

      Source: Terraform by HashiCorp. (n.d.). Types and Values - Configuration Language. [online] Available at: https://www.terraform.io/docs/language/expressions/types.html#lists-tuples [Accessed 28 Sep. 2021].