    Topics created by Kadyn

    • SQLCMD fails if -c; is specified in parameters on SQL Server 2019
      SQL, Database Testing • tags: sql-server, sql-server-2019, sqlcmd • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by J:

      The https://docs.microsoft.com/en-us/sql/tools/sqlcmd-utility?view=sql-server-ver15 doc says:

        -c batch_terminator
        Specifies the batch terminator. By default, commands are terminated and sent to SQL Server by typing the word "GO" on a line by itself. When you reset the batch terminator, do not use Transact-SQL reserved keywords or characters that have special meaning to the operating system, even if they are preceded by a backslash. [emphasis mine]

      And if you look at the https://docs.microsoft.com/en-us/sql/t-sql/language-elements/transact-sql-syntax-conventions-transact-sql?view=sql-server-ver15 doc, you'll find that:

        ;  Transact-SQL statement terminator. Although the semicolon isn't required for most statements in this version of SQL Server, it will be required in a future version.

      Add to that what Stewart Gordon pointed out on https://www.quora.com/Why-should-I-use-a-semicolon-in-SQL :

        The ANSI/ISO standards for SQL require statements to be separated by semicolons. As such, the majority of SQL implementations will require them. I think Microsoft SQL Server 2012 and earlier is one of the few that deviates from this standard.

      From that I'd say the semicolon is now effectively a reserved keyword, even though it isn't required yet in T-SQL.

      How to surface the error message: from sqlcmd's Error Reporting Options:

        -b
        Specifies that sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs. The value that is returned to the DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity level greater than 10; otherwise, the value returned is 0. If the -V option has been set in addition to -b, sqlcmd will not report an error if the severity level is lower than the values set using -V. Command prompt batch files can test the value of ERRORLEVEL and handle the error appropriately. sqlcmd does not report errors for severity level 10 (informational messages). If the sqlcmd script contains an incorrect comment, syntax error, or is missing a scripting variable, the ERRORLEVEL returned is 1.

        -m error_level
        Controls which error messages are sent to stdout. Messages that have a severity level greater than or equal to this level are sent. When this value is set to -1, all messages, including informational messages, are sent. Spaces are not allowed between the -m and -1. For example, -m-1 is valid, and -m -1 is not. This option also sets the sqlcmd scripting variable SQLCMDERRORLEVEL. This variable has a default of 0.
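      To see the severity rule from the -b description in action, here is a minimal sketch (the message text is made up): save it as a script, run it with sqlcmd -b -i script.sql, and ERRORLEVEL should come back as 1.

        -- Severity 16 (> 10) makes sqlcmd -b exit with ERRORLEVEL 1.
        -- Severity 10 or lower is informational and would leave ERRORLEVEL at 0.
        RAISERROR (N'Simulated failure for ERRORLEVEL testing', 16, 1);
        GO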
    • Automatically enable trace flag after server restart
      SQL, Database Testing • tags: sql-server, trace-flags • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by D:

        We would like to permanently enable a trace flag on the server.

      In SQL Server there are three scopes at which trace flags can be set: global, session, and query, as outlined in the following documentation:

        In SQL Server, there are three scopes at which trace flags can work: query, session, and global. Query trace flags are active for the context of a specific query. Session trace flags are active for a connection and are visible only to that connection. Global trace flags are set at the server level and are visible to every connection on the server. Some flags can only be enabled as global, and some can be enabled at either global or session scope.

      Reference: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15#:%7E:text=In%20SQL%20Server%2C%20there,global%20or%20session%20scope. (Microsoft | SQL Docs)

      Then you have to abide by a couple of rules:

        A global trace flag must be enabled globally. Otherwise, the trace flag has no effect. We recommend that you enable global trace flags at startup, by using the -T command line option. This ensures the trace flag remains active after a server restart. Restart SQL Server for the trace flag to take effect.

        If a trace flag has either global, session or query scope, it can be enabled with the appropriate scope. A trace flag that is enabled at the session level never affects another session, and the effect of the trace flag is lost when the SPID that opened the session logs out.

      Trace flags are turned on via DBCC TRACEON(traceflag, [-1]) and turned off via DBCC TRACEOFF(traceflag, [-1]), or in a query using QUERYTRACEON and QUERYTRACEOFF. Startup trace flags are added to the SQL Server startup via the -T traceflag setting in SQL Server Configuration Manager (recommended).

      Using the DBCC TRACEON and DBCC TRACEOFF commands: for example, to enable the 2528 trace flag globally, use DBCC TRACEON with the -1 argument: DBCC TRACEON (2528, -1). The effect of enabling a global trace flag with DBCC TRACEON is lost on server restart. To turn off a global trace flag, use DBCC TRACEOFF with the -1 argument.

      Using the -T startup option to specify that the trace flag be set on during startup: the -T startup option enables a trace flag globally. You cannot enable a session-level trace flag by using a startup option. This ensures the trace flag remains active after a server restart. For more information about startup options, see Database Engine Service Startup Options.

      At the query level, by using the QUERYTRACEON query hint: the QUERYTRACEON option is only supported for the Query Optimizer trace flags documented in the linked reference.

      Answering your questions:

        We would like to give visibility of trace flags (and options to enable/disable them) to a team which will not have access to SQL Server Configuration Manager.

      You can't set a global trace flag without using the recommended SQL Server Configuration Manager, or without informing the resident DBA and requesting a change (Request for Change -> Change Request) according to the company's processes. You also can't set a trace flag at the global level if it is not supported at that level, or if you don't have adequate permissions. (See the list in the linked reference for whether a given trace flag is to be set at the global, session or query level.)

      Probably most important:

        ...as with any configuration change in SQL Server, it is always best to thoroughly test the flag in a non-production environment before deploying.

      Additional permissions required: turning trace flags on or off with the DBCC TRACEON() command requires the sysadmin fixed server role assigned to your account:

        Requires membership in the sysadmin fixed server role.

      So, without the permission of the resident DBA, there will be no configuration changes on the SQL Server, which is a good thing, as setting trace flags can have a negative impact on the performance of the SQL Server instance.

      Reference: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql?view=sql-server-ver15#:%7E:text=Requires%20membership%20in%20the%20sysadmin%20fixed%20server%20role. (Microsoft | SQL Docs)
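      As a small sketch of the DBCC mechanics described above (2528 is just the flag number the docs use as an example; sysadmin membership is assumed):

        -- Enable trace flag 2528 globally (-1). This does not survive a restart;
        -- for that, add -T2528 as a startup parameter in Configuration Manager.
        DBCC TRACEON (2528, -1);

        -- List the trace flags that are currently enabled globally.
        DBCC TRACESTATUS (-1);

        -- Turn the flag back off globally.
        DBCC TRACEOFF (2528, -1);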
    • Get rows grouped by a foreign key with count of consecutive values
      SQL, Database Testing • tags: sql-server, gaps-and-islands, window-functions • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by A:

      This is a type of gaps-and-islands problem. The key to most solutions of this type is to count the changes, so you want an is_different column, not is_same. Then you conditionally count that column (easier if you use NULL instead of 0) to create an ID for each group of rows.

      It's unclear exactly what final results you want, but by grouping up that result, you can get the maximum and minimum number of consecutive rows, as well as the count of actual row-groups, per client_id:

        WITH PrevValues AS (
            SELECT client_id,
                   LAG(is_cancelled) OVER (PARTITION BY client_id ORDER BY date) AS previous_cancelled,
                   CASE WHEN is_cancelled = LAG(is_cancelled) OVER (PARTITION BY client_id ORDER BY date)
                        THEN NULL ELSE 1 END AS is_different,
                   date,
                   is_cancelled
            FROM transactions
        ),
        Grouped AS (
            SELECT client_id,
                   date,
                   is_different,
                   COUNT(is_different) OVER (PARTITION BY client_id ORDER BY date ROWS UNBOUNDED PRECEDING) AS group_id
            FROM PrevValues
        ),
        ByGroups AS (
            SELECT client_id,
                   COUNT(*) AS in_a_row
            FROM Grouped
            GROUP BY client_id, group_id
            HAVING COUNT(*) >= 3
        )
        SELECT client_id,
               MAX(in_a_row) AS max_in_a_row,
               MIN(in_a_row) AS min_in_a_row,
               COUNT(*) AS num_groups
        FROM ByGroups
        GROUP BY client_id;

      https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=ca7d37a3a4d51a5fc9f9a27f941b739a

      Note that your sample data has rows with identical dates, which is one reason you should always use ROWS UNBOUNDED PRECEDING (the default for ordered window functions is RANGE UNBOUNDED PRECEDING, which is subtly different). In any event, you should always try to have deterministic ordering.
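      If you want to run the query outside the fiddle, here is a hypothetical shape for the transactions table implied by the column names above (the data values are made up):

        -- Assumed schema; the real table may differ.
        CREATE TABLE transactions (
            client_id    int  NOT NULL,
            date         date NOT NULL,
            is_cancelled bit  NOT NULL
        );

        INSERT INTO transactions (client_id, date, is_cancelled)
        VALUES (1, '2021-01-01', 1),
               (1, '2021-01-02', 1),
               (1, '2021-01-03', 1),
               (1, '2021-01-04', 0),
               (2, '2021-01-01', 0),
               (2, '2021-01-02', 0);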
    • Can I set up WAL archiving on a three-year-old Postgres database?
      SQL, Database Testing • tags: postgresql, backup, replication, transaction-log • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by S:

      You need to perform a base backup ( https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-BASE-BACKUP ) regularly, since you need to restore that backup before you can do point-in-time recovery ( https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-PITR-RECOVERY ).
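      The archiving side itself is just configuration; a minimal sketch, assuming a filesystem archive directory at /mnt/wal_archive (the path and command are placeholders, and changing archive_mode requires a server restart):

        -- Postgres accepts these as SQL; the values persist in postgresql.auto.conf.
        ALTER SYSTEM SET wal_level = 'replica';
        ALTER SYSTEM SET archive_mode = 'on';
        ALTER SYSTEM SET archive_command =
            'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f';

      After the restart, take the base backup so the archived WAL has something to be replayed onto.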
    • SQL Server :: How many 8K pages are taken by this record?
      SQL, Database Testing • tags: sql-server, t-sql, database-size, size, data-pages • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by Laycee:

      I think you have something backwards here. A page is 8 KB (8,192 bytes). The rows in that table have a maximum size equal to the sum of the data type sizes of all of the columns. I'm not actually going to add all of those up, but let's imagine the sum is 500 bytes. You can look up the size of each data type by going through this page in the docs and clicking through to each type you want size information on: https://docs.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql?view=sql-server-ver15

      Because of a fixed-size header on each page, and other per-row overhead, the amount of space available for rows on an 8 KB page is 8,096 bytes (see here for details: https://dba.stackexchange.com/questions/211762/what-other-information-is-stored-in-the-page-header/211771#211771 ). That means that, in this table, you could have up to 8096 / 500 = 16 rows per page. It might be possible to fit slightly more rows than that, depending on the size of the variable-length (nvarchar) columns.

      You can see how many pages are in a specific index by using a query like this: https://stackoverflow.com/questions/11946957/sql-server-number-of-8k-pages-used-by-a-table-and-or-database From there you could derive an average of how many rows per page you're getting (total rows in the table / total pages in the table).

      You can also use DBCC PAGE (as described in the "page header" link above) to view detailed information about each page, including how many rows there are on the page.

      Note that all of the above applies to "in row data." If the table has LOB data types, then those will be stored off-row, potentially on multiple pages, if their size causes the record to exceed the maximum record size of 8,060 bytes.
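      For the page counts themselves, a sketch against the DMVs (the table name is a placeholder):

        -- Pages and rows per index for one table; in-row vs. LOB pages split out.
        SELECT i.name AS index_name,
               ps.row_count,
               ps.in_row_used_page_count,
               ps.lob_used_page_count,
               ps.used_page_count
        FROM sys.dm_db_partition_stats AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id
         AND i.index_id  = ps.index_id
        WHERE ps.object_id = OBJECT_ID(N'dbo.YourTable');

      Dividing row_count by in_row_used_page_count gives the rows-per-page average discussed above.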
    • How to avoid returning partial data from SQL Server in .NET application?
      SQL, Database Testing • tags: sql-server, view, odbc • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by M:

      You left off the most important part of that second error (emphasis mine):

        A severe error occurred on the current command. The results, if any, should be discarded.

      Whatever the "application import node" is, it needs to deal with this error by discarding the resultset, and potentially undoing any work it did on the results that had been streamed in so far. Any other solution would be a workaround, and likely an unsafe one.

      It's important to note that, while KILLing the spid can cause this error, so can reading corrupt pages from disk. If you see this error unexpectedly, you should run DBCC CHECKDB to make sure your data is safe.

        ...are you sure that connection via ODBC can catch this kind of error?

      I wrote a small program in C# that creates an ODBC connection (using the System.Data.OdbcConnection class) and starts streaming results from a long-running query:

        using System.Data.Odbc;

        try
        {
            var connection = new OdbcConnection("DSN=SQL2019");
            var command = new OdbcCommand(@"SELECT * FROM dbo.Posts", connection);

            connection.Open();
            var reader = command.ExecuteReader();

            while (reader.Read())
            {
                var id = reader.GetInt32(reader.GetOrdinal("Id"));
                Console.WriteLine(id);
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
            throw;
        }

      Each time I ran the program, I waited a few seconds and then issued a KILL command on the spid that was running my query. I tried this with four different ODBC drivers on my computer, and encountered errors from each one:

        ODBC Driver 17 for SQL Server
        System.Data.Odbc.OdbcException (0x80131937): ERROR [08S01] [Microsoft][ODBC Driver 17 for SQL Server]TCP Provider: An existing connection was forcibly closed by the remote host.
        ERROR [08S01] [Microsoft][ODBC Driver 17 for SQL Server]Communication link failure

        SQL Server
        System.InvalidOperationException: The connection has been disabled. ---> System.Data.Odbc.OdbcException (0x80131937): ERROR [01000] [Microsoft][ODBC SQL Server Driver][DBMSLPCN]ConnectionRead (recv()).
        ERROR [08S01] [Microsoft][ODBC SQL Server Driver][DBMSLPCN]General network error. Check your network documentation.

        SQL Server Native Client 11.0
        System.Data.Odbc.OdbcException (0x80131937): ERROR [08S01] [Microsoft][SQL Server Native Client 11.0]TCP Provider: An existing connection was forcibly closed by the remote host.
        ERROR [08S01] [Microsoft][SQL Server Native Client 11.0]Communication link failure

        SQL Server Native Client RDA 11.0
        Same as previous

      All of that to say: the application should get an error on the connection that got killed. And it should have logic for dealing with the fact that the results might not have been complete when the query was killed.
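      If the error shows up without anyone killing the spid, the consistency check mentioned above is cheap insurance (the database name is a placeholder):

        -- Corruption, not a KILL, can also produce "a severe error occurred";
        -- CHECKDB will surface any damaged pages.
        DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;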
    • How to trim the last two digits of a varchar in SQL SSMS?
      SQL, Database Testing • tags: sql-server, sql-server-2008 • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by E:

      Considering that the column always has six chars, you can use LEFT and RIGHT to separate the year part and the month part ( https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=f6e87741999ab46a6ffa7efaa6d837c6 ):

        DECLARE @year_month VARCHAR(6) = '202003';

        DECLARE @year  INT = LEFT(@year_month, 4),
                @month INT = RIGHT(@year_month, 2);

        SELECT @year_month AS input, @year AS year, @month AS month;
    • How to performantly query using a function applied to a MySQL auto-increment column?
      SQL, Database Testing • tags: mysql, performance, index, query-performance • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by Demir:

        where id mod 1000

      is equivalent to applying a function to the column id. Such an expression is not "sargable", hence the execution will simply scan the entire table. Changing the "1000" will not change the speed of the scan, but it will change the speed of the reply, simply because of the number of rows delivered.

      If your goal is efficient fetching of several "random" rows, see http://mysql.rjweb.org/doc.php/random

      If there is a "group_number" that is derived from id, then SELECT ... WHERE group_number = 123 may be very efficient if you also have INDEX(group_number). Either add another column for that and populate it, or consider a "generated" column.

      Avoid the REPLACE statement; it is essentially DELETE + INSERT, and the 'mod' result will change (the row gets a new auto-increment id).
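      A hedged sketch of the generated-column route (the table name t and the bucket size 1000 are from the question; everything else is an assumption):

        -- Materialize id MOD 1000 once, then index it so equality lookups can seek.
        ALTER TABLE t
            ADD COLUMN group_number INT AS (id MOD 1000) STORED,
            ADD INDEX idx_group_number (group_number);

        -- Sargable equivalent of WHERE id MOD 1000 = 123:
        SELECT * FROM t WHERE group_number = 123;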
    • Role-playing Dimensions
      SQL, Database Testing • tags: sql-server, date, schema • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by K:

      Role-playing dimensions are always about the join between a fact and a dimension. If you want to examine the login dates, you have to place that field in a fact, not in a dimension. So, for example, you could have a fact_login table with:

        login_id
        datestart_id
        datestop_id

      Here you have two IDs to link to the same dim_date; the first plays the role of login start date, the second one the role of login end date...
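      In SQL terms, a minimal sketch (dim_date with a date_id key and a calendar_date column is assumed, as are all names here):

        -- Both keys reference the same date dimension.
        CREATE TABLE fact_login (
            login_id     int NOT NULL PRIMARY KEY,
            datestart_id int NOT NULL REFERENCES dim_date (date_id),
            datestop_id  int NOT NULL REFERENCES dim_date (date_id)
        );

        -- The dimension plays two roles by being aliased twice.
        SELECT f.login_id,
               d_start.calendar_date AS login_start,
               d_stop.calendar_date  AS login_stop
        FROM fact_login AS f
        JOIN dim_date AS d_start ON d_start.date_id = f.datestart_id
        JOIN dim_date AS d_stop  ON d_stop.date_id  = f.datestop_id;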
    • Export columns containing (geo)JSON elements to CSV while keeping the JSON double quotes properly
      SQL, Database Testing • tags: postgresql, csv, copy • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by morde:

        COPY t_jsonb TO STDOUT (FORMAT CSV, QUOTE ' ', HEADER true);

      Setting QUOTE to a character that never occurs in the data (here a space) keeps COPY from doubling the JSON double quotes in the output.
    • How can I prompt SQL Server to use a more suitable index in a query?
      SQL, Database Testing • tags: azure-sql-database, execution-plan • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by Alberto:

      I would suggest you try this:

      First, identify why SQL Server is using the wrong index; maybe there is something to fix instead of forcing a plan.

      If that is not possible, you can force the query plan through the Query Store with sp_query_store_force_plan: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-query-store-force-plan-transact-sql?view=sql-server-ver15
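      A sketch of the forcing step, assuming Query Store is enabled (the search fragment and the two IDs are placeholders you would replace with real values):

        -- Locate the query and the plan you want to pin.
        SELECT q.query_id, p.plan_id, qt.query_sql_text
        FROM sys.query_store_query AS q
        JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
        JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
        WHERE qt.query_sql_text LIKE N'%YourQueryFragment%';

        -- Force the chosen plan for that query.
        EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;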
    • RMAN Duplicate with new SID / Path
      SQL, Database Testing • tags: oracle, restore, rman • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by Mystic:

      Option 1, example:

        run {
            set newname for datafile 1 to '/u01/NEW_PATH/system_tbs';
            set newname for datafile 2 to '/u01/NEW_PATH/undo_tbs';
            set newname for datafile 3 to '/u01/NEW_PATH/sysaux_tbs';
            set newname for datafile 4 to '/u01/NEW_PATH/users_tbs';
            ...
            set newname for tempfile 1 to '/u01/NEW_PATH/temp_tbs';
            ...
            duplicate target database to NEW_DB
                logfile
                    group 1 ('/u01/NEW_PATH/redo01.log') size 1G,
                    group 2 ('/u01/NEW_PATH/redo02.log') size 1G,
                    group 3 ('/u01/NEW_PATH/redo03.log') size 1G;
        }

      Option 2:

      https://docs.oracle.com/database/121/REFRN/GUID-E8B4E0EA-B073-4349-9EA9-E053F499FB9E.htm#REFRN10038
      https://docs.oracle.com/database/121/REFRN/GUID-3D5894EF-C33D-4687-978F-F640174F6FCC.htm#REFRN10098

      On the new instance, set the parameters as required before starting the duplicate:

        alter system set db_file_name_convert='/u01/OLD_PATH/', '/u01/NEW_PATH/' scope=spfile;
        alter system set log_file_name_convert='/u01/OLD_PATH/', '/u01/NEW_PATH/' scope=spfile;

      And restart the instance. You can then skip all the SET NEWNAME parts of the duplicate command:

        duplicate target database to NEW_DB;

      This needs to be done only once; the parameters can be set in advance in the SPFILE if you plan to do additional duplicates in the future.
    • Order of Key Columns in Index
      SQL, Database Testing • tags: sql-server, index, index-tuning, clustered-index • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by S:

      E_ID is not part of the WHERE clause, while index IX_EMAIL_3 is sorted by E_ID, so SQL Server can't seek to any of the values from the WHERE clause and would need to scan the index to find them. That is why it is taking IX_EMAIL_2. Try this:

        CREATE NONCLUSTERED INDEX IX_EMAIL_4
        ON EMAIL (E_CUS_ID ASC, E_TYPE ASC, E_ID ASC)
        INCLUDE (E_PER_ID)
        WITH (FILLFACTOR = 95);

      I think it will be used.
    • Reasons for Column as Included
      SQL, Database Testing • tags: sql-server, query-performance, index-tuning • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by Analeea:

      First of all: the missing-index recommendations ( https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms345485(v=sql.105) ) are not to be trusted. At all. To put it mildly. I like to think of it as the engine not having a lot of spare CPU cycles to produce the recommendations.

      However, in this case there is a logic to the recommendations. Consider this predicate:

        E_DATE_OPENED < DATEADD(DAY, 7, E_DATE_SENT)

      An index cannot be seeked for it, since you basically compare the values between two different columns. The reason that E_DATE_OPENED is in the key for the first example is that it includes the predicate below:

        E_DATE_OPENED IS NOT NULL

      For that one, an index can definitely be seeked; hence it makes sense to have that column in the key.
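      To make the distinction concrete, a hedged sketch of the index shape this reasoning suggests (the index name is invented; the columns are from the question):

        -- E_DATE_OPENED IS NOT NULL can seek, so that column earns a key position.
        CREATE NONCLUSTERED INDEX IX_EMAIL_OPENED
        ON EMAIL (E_DATE_OPENED)
        INCLUDE (E_DATE_SENT);

        -- The column-vs-column comparison below can never seek on any index,
        -- so E_DATE_SENT only needs to ride along as an included column:
        --   E_DATE_OPENED < DATEADD(DAY, 7, E_DATE_SENT)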
    • Understanding a double entry accounting DB Schema
      SQL, Database Testing • tags: database-design, schema • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by K:

      My first job in IT was assisting the development of an accounting system respecting European rules; afterwards, I developed one all by myself. When I first came into contact with an American system (won't name any big name, but it was sold in Europe), my reaction was "Seriously??".

      2. False; those are in the move_line table.
      5. False; those can only fit in the move_line table.
      6. See 5.
      7. False; debit is receiving, credit is spending. In Europe it would be the opposite of what is on the bank statement.
      8. In a balance sheet, a ledger could appear in a different location (account) depending on the balance being negative or positive.
      9. The balance field is a joke, I think.
      12. The fiscal year can have more or fewer than 12 months, and there is no need for a description of periods (as long as they are indeed months), since you can calculate each period's date from the start date of the fiscal year.

      The schema you've shown is a mash-up of poor accounting and invoicing. There is no way a customer would pay for a single product line of an invoice; he may pay part of an invoice, or multiple invoices at once. I don't think there is room for invoices and products in an accounting schema.
    • Can Cassandra split a table?
      SQL, Database Testing • tags: cassandra • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by jeanid:

      There is no built-in command for that, so you need to use some external tool, such as:

      DSBulk ( https://docs.datastax.com/en/dsbulk/doc/index.html ), with which you can unload data from a table to disk and then load the data selectively; see example 3 in https://www.datastax.com/blog/datastax-bulk-loader-introduction-and-loading on how to use the -m option.

      Spark with the Spark Cassandra Connector ( https://github.com/datastax/spark-cassandra-connector/blob/b2.5/doc/14_data_frames.md ): read the data, select only the columns that you need, and write them into the new table(s). The main advantage of Spark is that it can do everything in memory; you don't need much disk space.

      Something like this (not tested; I did use spark-shell with Scala; follow the documentation to set the necessary connection properties):

        import org.apache.spark.sql.cassandra._

        val df = spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(Map("table" -> "tbl_name", "keyspace" -> "ks_name"))
          .load()

        // write locations
        df.select("rowid", "city", "state")
          .createCassandraTableEx("ks_name", "location_table",
            partitionKeyColumns = Some(Seq("rowid")))

        // write age
        df.select("rowid", "age")
          .createCassandraTableEx("ks_name", "age_table",
            partitionKeyColumns = Some(Seq("rowid")))
    • Efficient update with join
      SQL, Database Testing • tags: sql-server, update • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by B:

      The data sent to the script by the third party contains a modified_date column. I have decided to store this column in my database and update only those rows where this date is different:

        UPDATE main
        SET main.int_col       = temp.int_col,
            main.varchar_col   = temp.varchar_col,
            main.date_col      = temp.date_col,
            main.text_col      = temp.text_col,
            main.modified_date = temp.modified_date
        FROM main
        INNER JOIN temp
            ON main.external_id = temp.external_id
        WHERE main.modified_date IS NULL
           OR main.modified_date <> temp.modified_date;
    • Determine Ideal Collation Set for correct data storage
      SQL, Database Testing • tags: mariadb, storage, collation, string-manipulation, character-set • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by E:

      Under no conditions will MySQL generate these 6 characters: \u00e9 from a single character.

        &#39;, also stores é as \u00e9

      I think that happened in your client, not MySQL.

        So that column A and column B now use the exact same character set + collation

      There is no problem with different columns having different charsets and/or collations. A performance problem can occur when you compare columns with different collations (especially in JOIN ... ON).

        l'\u00e9l\u00e9phant, you don't get l'éléphant, but l'éléphant

      That is inconclusive. Note that display layers, especially HTML, will "clean up" things for you. To really see what is in the table, use SELECT HEX(col) ...

        &#39;   -- an "HTML entity"
        \u00e9  -- a "unicode" encoding

      l'éléphant encoded in UTF-8 and displayed in hex (with spaces added to separate the characters):

        Double encoding: 6C 27 C383C2A9 6C C383C2A9 70 68 61 6E 74
        UTF-8 encoding:  6C 27 C3A9     6C C3A9     70 68 61 6E 74
        latin1 encoding: 6C 27 E9       6C E9       70 68 61 6E 74
        text:            l  '  é        l  é        p  h  a  n  t

        I'm getting the data in PHP, and ...

      But where is the data coming from? mysqli_set_charset is stating that it is utf8mb4-encoded, but is it really?

        code a search and replace

      If you rush into this, you could be making things worse. First let's find out what is really there, where it came from, etc.

        I\'m

      That is proper in either of these string literals: 'I\'m' and "I\'m". The language (PHP/MySQL/etc.) will remove the backslash as it parses the string. But it is 'wrong' in other contexts.

        it escapes the single quote

      What escapes it?? prepare+execute? real_escape? addslashes? Something else? As implied above, you do need to escape it. But we need to know what did the escaping, to avoid messing things up further.

      Even if the data arrives as Hello, I\'m James ... you should not let it be stored that way. That just adds to the confusion later. Ditto for &#39; and \u00e9. The MySQL table should contain l'éléphant. I repeat: the only way to see if that is what is stored is via SELECT HEX(col) ..., expecting "6C 27 C3A9 6C C3A9 70 68 61 6E 74" (minus the spaces).

      A test:

        mysql> INSERT INTO try_json (j) VALUES ('["I\'m"]');
        mysql> INSERT INTO try_json (j) VALUES ('["l\'éléphant"]');
        mysql> SELECT j, HEX(j), JSON_EXTRACT(j, '$[0]'), HEX(JSON_EXTRACT(j, '$[0]')) FROM try_json;
        +------------------+----------------------------------+-------------------------+------------------------------+
        | j                | HEX(j)                           | JSON_EXTRACT(j, '$[0]') | HEX(JSON_EXTRACT(j, '$[0]')) |
        +------------------+----------------------------------+-------------------------+------------------------------+
        | ["I'm"]          | 5B2249276D225D                   | "I'm"                   | 2249276D22                   |
        | ["l'éléphant"]   | 5B226C27C3A96CC3A97068616E74225D | "l'éléphant"            | 226C27C3A96CC3A97068616E7422 |
        +------------------+----------------------------------+-------------------------+------------------------------+

      Usually you want this; without it, you get the \unnnn codes:

        json_encode($a, JSON_UNESCAPED_UNICODE)

      Use urlencode() when you are going to put the string in a URL; that may be where %7C comes from. PHP's htmlentities() can generate things like &lt;, &eacute;, and &#39;. That last one is equivalent to '.

      In MySQL 8.0, you may need this technique:

        select cast(unhex('224D6173746572262333393B7322') as char);

      which yields "Master&#39;s" (including the quotes).

      PHP and its output:

        $s = '"Master&#39;s"';   // with html entity
        echo strlen($s), ' ', $s, ' ', bin2hex($s), " s - with html entity \n";
        $t = '"Master\'s"';      // backslash and apostrophe
        echo strlen($t), ' ', $t, ' ', bin2hex($t), " t - with backslash and apostrophe \n";

        14 "Master&#39;s" 224d6173746572262333393b7322 s - with html entity
        10 "Master's" 224d6173746572277322 t - with backslash and apostrophe
    • How to get the complete SQL text of the query running in a given SPID
      SQL, Database Testing • tags: sql-server • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by inna:

      There are a few options available to monitor actively running queries that will provide the full query text:

      https://docs.microsoft.com/en-us/sql/tools/sql-server-profiler/sql-server-profiler?view=sql-server-ver15 - This is the oldest (and now deprecated) methodology for tracing running queries. The tool is a bit clunky, and it can sometimes be difficult to search for the exact query you're looking for if you have a lot running on your SQL instance at one time. But it is also pretty simple to use, and it even provides the values of parameters used in stored procedure queries that it captures in its trace.

      https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/extended-events?view=sql-server-ver15 - This is Microsoft's recommended replacement for the aforementioned Profiler. While the learning curve is a little steeper, Extended Events provide more in-depth information that is better searchable.

      https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-ver15 - This is the most recent query monitoring feature released by Microsoft. Though usually intended to monitor query performance, it does provide the full text of the query as well, which can even be accessed in https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-query-store-query-text-transact-sql?view=sql-server-ver15 . Note this feature is only available in SQL Server 2016 and later.

      Each of these features needs to already be running, though, for you to be able to capture the query. I don't believe there's a reliable way to retroactively find the full query text if you weren't already using a feature or tool that was monitoring what was running.
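      For a query that is still executing right now, a DMV lookup may be enough; a minimal sketch (session id 53 is a placeholder), with the caveat that sys.dm_exec_sql_text returns the currently executing batch, which may still be less than everything the client originally sent:

        -- Text of the batch currently running on a given session.
        SELECT r.session_id,
               t.text AS running_batch_text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id = 53;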
    • How to detect tempdb usage?
      SQL, Database Testing • tags: sql-server, sql-server-2019, tempdb • by Kadyn
      0 Votes • 2 Posts • 0 Views

      Answer by T:

      lemonheads: Some alternative methods, if you don't want to log absolutely everything about your server to tables. Since you want to capture the version store specifically, these might be more useful in general.

        SELECT SUM(user_object_reserved_page_count) * 8 / 1024.0 / 1024.0 AS user_objects_gb,
               SUM(internal_object_reserved_page_count) * 8 / 1024.0 / 1024.0 AS internal_objects_gb,
               SUM(version_store_reserved_page_count) * 8 / 1024.0 / 1024.0 AS version_store_gb,
               SUM(unallocated_extent_page_count) * 8 / 1024.0 / 1024.0 AS free_space_gb,
               SUM(mixed_extent_page_count) * 8 / 1024.0 / 1024.0 AS mixed_extent_gb
        FROM tempdb.sys.dm_db_file_space_usage;

        SELECT DB_NAME(database_id) AS database_name,
               reserved_page_count,
               reserved_space_kb
        FROM sys.dm_tran_version_store_space_usage;

        SELECT dopc.object_name,
               dopc.counter_name,
               dopc.instance_name,
               dopc.cntr_value
        FROM sys.dm_os_performance_counters AS dopc
        WHERE dopc.counter_name IN (
              'Longest Transaction Running Time',
              'Version Store Size (KB)',
              'Version Cleanup rate (KB/s)',
              'Version Generation rate (KB/s)'
        );

      You can also use sp_WhoIsActive ( http://whoisactive.com/docs/25_capturing/ ) to capture tempdb usage at the query level, though this won't differentiate between version store and temporary objects, etc.