- It produces 1 row because 'Ninety' is not a number, so the first insert failed.
- Even though the first insert failed, it still consumed a number from car_counter (i.e., 1); Oracle then incremented the sequence by 10 to 11 for the next value (a sketch of this follows below).
- You can change the sequence to INCREMENT BY 9 so that the first value is 1 and the second is 10, but the third would then be 19. You may START WITH 10 or even 100 if you want values to end with 0.
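A minimal sketch of that behavior, assuming a cars table and the car_counter sequence from the question (the table and column definitions here are made up):

-- Assumed setup: the sequence starts at 1 and increments by 10, as described above.
CREATE SEQUENCE car_counter START WITH 1 INCREMENT BY 10;
CREATE TABLE cars (id NUMBER, weight NUMBER);

INSERT INTO cars VALUES (car_counter.NEXTVAL, 'Ninety'); -- fails (invalid number), but NEXTVAL 1 is already consumed
INSERT INTO cars VALUES (car_counter.NEXTVAL, 90);       -- succeeds and gets the next value, 11

SELECT * FROM cars;                                      -- one row, with id 11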
Posts made by baileigh
-
RE: Why will this query only produce 1 row and not 2? And why does it have a value of 11 and not 1?
-
What can malicious postgres db user do to a linux server?
Suppose I created a db user with
create role myuser login password 'xyz';
and allowed typical read and write db permissions.
If a malicious user finds these login credentials (and has access to db), what is the worst he can do to the Ubuntu server where the db lives? (e.g. can he get access to the OS shell? can he install something? can he remove non-db files? ...)
-
RE: How to await the execution of the rds_backup_database stored procedure?
Per https://aws.amazon.com/premiumsupport/knowledge-center/native-backup-rds-sql-server/ :
When the rds_backup_database or rds_restore_database stored procedure is called, the task starts and outputs the information about the task.
...
When the lifecycle status of the task is SUCCESS, the task is complete.
...
You can get the Task ID after you perform the backup or restore statement. Or, you can use the following script to identify all the completed and pending tasks for a particular database:
exec msdb.dbo.rds_task_status @db_name='database_name'
To track the status of the job, use this SQL statement:
exec msdb..rds_task_status @task_id= 5
So, after initiating the backup with rds_backup_database, you would just need to enter a loop where you use rds_task_status to find the current status; if it is not "SUCCESS", you could use WAITFOR DELAY '00:00:30' to sleep for 30 seconds before continuing your loop to check the status again & repeat.
There is also https://stackoverflow.com/a/47640253/1948808, which implements a similar solution from PowerShell. Whether you implement your code in PowerShell or T-SQL, the logic for checking & looping would be similar.
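As a rough illustration, here is a minimal T-SQL polling sketch. It assumes the task id is 5 (as in the quoted example) and that #task_status is a temp table created beforehand whose columns match the result set of rds_task_status (not shown here); lifecycle is the status column per the AWS documentation.

DECLARE @lifecycle NVARCHAR(50) = '';

WHILE @lifecycle NOT IN ('SUCCESS', 'ERROR')
BEGIN
    WAITFOR DELAY '00:00:30';                        -- sleep 30 seconds between checks

    DELETE FROM #task_status;                        -- refresh the captured task row
    INSERT INTO #task_status
        EXEC msdb..rds_task_status @task_id = 5;

    SELECT @lifecycle = lifecycle FROM #task_status; -- read the current lifecycle status
END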
-
Sql server complex select query row as column
I have two tables in SQL Server:
Table1:
Table2:
I need a select query to give me output like this:
I am attaching here the sample data of two tables in text format since I am unable to add the Excel sheets here
Table1
SCHOOL_ID TEACHER_ID IS_HEAD_MASTER TAGGED_CLASS IMAGE_NAME LATITUDE LONGITUDE ACCURACY CREATED_DATE CREATED_BY AI_IMAGE_NAME AI_STATUS IS_DOWNLOADED DOWNLOADED_DATE IS_PROCESSEED
204 341 Y 10.jpeg 1.1 2.1 80 49:33.0 28110100204 NULL NULL NULL 16:54.0 NULL
204 341 Y 20.jpeg 1.1 2.1 80 49:33.0 28110100204 NULL NULL NULL 16:54.0 NULL
204 341 Y 30.jpeg 1.1 2.1 80 49:33.0 28110100204 NULL NULL NULL 16:54.0 NULL
204 734 N 40.jpeg 1.2 2.2 90 55:59.3 28110100204 NULL NULL NULL 16:54.0 NULL
204 734 N 50.jpeg 1.2 2.2 90 55:59.3 28110100204 NULL NULL NULL 16:54.0 NULL
204 734 N 60.jpeg 1.2 2.2 90 55:59.3 28110100204 NULL NULL NULL 16:54.0 NULL
Table2
TEACHER_ID TAGGED_SCHOOL_ID ATTENDANCE_DT ATTENDANCE_TYPE IMAGE_NAME LATITUDE LONGITUDE ACCURACY CAPTURED_TIME AI_IMAGE_NAME AI_STATUS IS_DOWNLOADED DOWNLOADED_DATE IS_PROCESSEED
341 204 21/02/2022 IN 1.jpeg 1.1 2.1 80 37:16.0 NULL NULL NULL NULL NULL
341 204 21/02/2022 OUT 2.jpeg 1.2 2.2 80 55:45.0 NULL NULL NULL NULL NULL
734 204 21/02/2022 IN 3.jpeg 1.3 2.3 80 24:24.0 NULL NULL NULL NULL NULL
734 204 21/02/2022 OUT 4.jpeg 1.4 2.4 80 31:47.0 NULL NULL NULL NULL NULL
I have written the below query, but it is not giving the output in the required format:
select t1.TEACHER_ID as "t1.TEACHER_ID",
       t2.TEACHER_ID as "t2.TEACHER_ID",
       STRING_AGG(t1.IMAGE_NAME, ', ') as "t1.IMAGE_NAME",
       STRING_AGG(t2.IMAGE_NAME, ', ') as "t2.IMAGE_NAME"
from table1 t1, table2 t2
where t1.TEACHER_ID = t2.TEACHER_ID
GROUP BY t1.TEACHER_ID=t2.TEACHER_ID;
Please help me to solve my query issue.
It's part of my ML work. Once I get the records in the required format, I will use the images in further processing. Currently I am struggling with the select query part.
Table1 and Table2 are just dummy names. Obviously, the actual table names and data are different.
-
Does a killed session appear in Query Store?
Is there any option to see killed sessions in Query Store?
I am asking because we have an additional tool that terminates sessions if they take more than 30 minutes to run (KILL command). I would like to check the execution plans in the Query Store for queries that have been terminated, but I cannot find in the Query Store the sessions/queries which were killed by this additional application.
-
RE: Why can't Many to Many relationships exist in relational database management systems?
I think it's a matter of terminology. As Justin mentioned, there can be many-to-many relationships in the real world. However, that requires an intermediary table in the database to bridge the gap at the relationship level. This intermediary table would be presented as a one-to-many relationship. When looked at from a higher level, the tables on the left and right of the intermediary table would represent a many-to-many relationship in the real world.
To elaborate on Justin's merchant example: a merchant can have many customers, just as they can have many products. Any number of customers can buy any number of products, just as any number of products may have been purchased by any number of customers. But you would never try to relate the two directly in the customer or product tables.
Instead, you would create an orders table. For each individual customer, you'd have a one-to-many relation for the number of orders they placed. While from the product side, you'd have a one-to-many relation from product to the number of orders.
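A minimal DDL sketch of that pattern (the table and column names here are invented for illustration):

CREATE TABLE customer (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

CREATE TABLE product (
    product_id  INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

-- The bridge table: one customer has many orders, and one product appears in many orders,
-- so customer and product end up many-to-many through it.
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customer (customer_id),
    product_id  INT NOT NULL REFERENCES product (product_id)
);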
If a relationship was not handled in this manner, picture a customer table needing to have a column for each product a user purchased, or a product table needing a column for each customer that purchased the product.
J.D. brought up a good point in the comments. Another way someone might make a mess of a many-to-many relationship would be...
a single generic column for the purchased product and a row for every product purchased per customer in the customer table. Such a denormalized design would result in the cardinality of the customer table not equaling the number of customers there are.
See also: Bridge tables for many-to-many relationships
So, to those points, the author is correct.
-
Allow create table (dynamic name) and insert but nothing else
I'm trying to find a way to allow an application to create tables and insert data into them on a SQL Server 2019 while protecting from injection attacks in case the app credentials would leak. My experience is limited when it comes to writing code that can run in parallel and writing dynamic sql that is protected from sql injection attacks.
The table name is based on input from the application, i.e. if the input is 'nds' the table name should be lake.nds_raw_log.
It is my understanding that there is no way to do this by directly granting permissions to the role for this application, since creating tables is not separated from deleting or altering them. What I've come up with is executing a stored procedure as dbo. Sure, it's not long, but I have two issues with it:
- it feels contrived, which in my experience means there is an easier way.
- I believe that I need to run it as serializable to avoid orphan tables if I retrieve the wrong table when I query for my newly created table. This shouldn't actually be that big of an issue since it won't happen that often after the first start in production so maybe I shouldn't care about it.
create procedure [lake].[create_terminal_raw_log_table]
(
    @terminal_name nvarchar(100)
)
with execute as 'dbo'
as
begin try
    set transaction isolation level serializable
    begin transaction

    --create table
    declare @dynamic_sql nvarchar(1000) =
        'create table [lake].' + quotename(@terminal_name) + '
        (
            id bigint not null,
            [timestamp] datetime2(3) not null,
            cmd varbinary(max) not null
        );'
    exec sp_executesql @dynamic_sql
    /*get name of new table, this is why I believe that I need serializable isolation
      since other tables can be created in parallel*/
    declare @table_name nvarchar(100) =
    (
        select top 1 [name] as table_name
        from sys.tables
        order by create_date desc
    )

    --rename table
    declare @old_name nvarchar(100) = '[lake].' + @table_name,
            @new_name nvarchar(100) = @table_name + '_raw_log'

    begin try
        exec sp_rename @objname = @old_name, @newname = @new_name
    end try
    begin catch
        set @dynamic_sql = 'drop table ' + @old_name
        exec sp_executesql @dynamic_sql
        ;throw
    end catch

    --create primary key
    set @dynamic_sql = 'alter table [lake].' + @new_name + ' add constraint pk__' + @new_name + ' primary key(id)'
    exec sp_executesql @dynamic_sql

    commit transaction
end try
begin catch
    rollback --I thought a rollback would occur when I throw after dropping the table but that doesn't seem to be the case
    ;throw
end catch
So I guess this boils down to 3 questions:
- Is this stored procedure actually safe from SQL injection attacks?
- Is there an easier way to do it?
- Is it correct that setting the transaction level as serializable will protect the code from selecting the wrong table when selecting from sys.tables?
-
RE: How to simulate error with MSG = 0 on Sql Server?
This error is returned by the client API, not SQL Server, when something bad happens like an access violation or a killed connection that closes the connection on the server side. That's why the message id is zero instead of one in sys.messages. Erland Sommarskog calls this a false error message in https://www.sommarskog.se/error_handling/Part2.html#falseerrmsg .
RAISERROR with a severity of 20 (which requires WITH LOG and sysadmin or alter trace permissions) may be able to simulate the error, depending on the client API you are using. I see the error in SSMS (which uses the .NET Framework SqlClient API) after running the command below, along with other error messages. Note that a SQL exception may generate multiple error messages, including this one, so you will need to examine the errors returned for the one(s) of interest.
RAISERROR('This is a severe error test',20,0) WITH LOG;
-
Find closest record by datetime and state
I'm working on a query that will be able to return records from the following tables in a specific order.
Descriptions table:
DescriptionId AssessmentId YearId CreatedAt State
56 55 2018 2022-02-06 15:09:49.500 0
68 55 2019 2022-02-06 15:11:14.167 0
71 55 2020 2022-02-06 15:12:03.780 1
Threads table:
ThreadId AssessmentId ImpactId CreatedAt State
60 55 1 2022-02-06 15:09:49.5 0
65 55 2 2022-02-06 15:10:10.97 0
67 55 3 2022-02-06 15:10:18.657 0
69 55 4 2022-02-06 15:11:20.483 0
70 55 5 2022-02-06 15:11:27.263 1
The idea is to connect 1 Description to 1 Thread depending on
the CreatedAt and State columns. The last records in both tables are always already connected by state, so there is no need to check the date.
Past records are required to be checked by CreatedAt. As far as I can tell, it has to go from the newest record to the oldest: the last record is checked by state, and the next record is checked by date.
The DescriptionId record with ID 68 is connected to ThreadId 70 because the CreatedAt of the thread is older than the CreatedAt of the Description. The Description with ID 56 is connected to ThreadId 67 because the thread date is older than the description date and younger than the date of the next Description.
Data and schema are available on http://sqlfiddle.com/#!18/30878/2
Result should look like following:
DescriptionId CreatedAt (Description) ThreadId CreatedAt (Thread)
56 2022-02-06 15:09:49.500 67 2022-02-06 15:10:18.657
68 2022-02-06 15:11:14.167 70 2022-02-06 15:11:27.263
71 2022-02-06 15:12:03.780 70 2022-02-06 15:11:27.263 (because of the same state)
-
Upgrade all databases using one upgraded database in one local SQL instance
Situation: Microsoft SQL Server 2019; for testing I use the Express edition.
An app is linked to one local SQL instance; let's name this instance 'Appname'.
In this instance there is a database, let's call it the 'root database', and the instance contains more databases (can be 100, even more).
So each of these databases contains data; each of them gets data from the app the user works with.
The data does not change if an upgrade is needed.
The 'root database' needs to be upgraded once in a while (the data stays the same, but the schemas, etc., need to be upgraded).
How can I use SQL Server Management Studio to upgrade all the other databases using the upgraded 'root database', so that they all have the same changes (schemas, ...)?
How can I use CMD to get the same results? Maybe this method will upgrade all databases faster?
-
RE: Autovacuum on tables with high-volume updates
If you update all rows of one table in a single statement, the table will be bloated to twice its minimal size (it will contain a live tuple and a dead tuple for each row), and that bloat cannot be removed by autovacuum. If autovacuum doesn't finish fast enough, the next update will only increase the problem.
Ideas for a solution:
- Run the update in batches, where each batch updates only a fraction of the rows. Between the updates, run an explicit VACUUM. That will keep the bloat at bay (see the sketch after this list).
- Accept the bloat of 50%+, but configure autovacuum to run as fast as possible by setting autovacuum_vacuum_cost_delay = 0 and maintenance_work_mem = '1GB'. That way, you have a chance that the table won't be bloated even more.
- Create the table with a fillfactor of 45, so that it is deliberately bloated from the beginning, but make sure that the update only modifies columns that are not indexed. Then you will get HOT updates (https://www.cybertec-postgresql.com/en/hot-updates-in-postgresql-for-better-performance/), which will at least reduce the need for VACUUM.
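A rough sketch of the first and third ideas (the table, columns, and batch ranges are invented for illustration; adjust to your schema):

-- Idea 3: create the table with free space on every page so that updates can be HOT
-- (this only helps if the updates touch no indexed columns).
CREATE TABLE big_table (
    id      bigint PRIMARY KEY,
    payload text,
    flag    boolean
) WITH (fillfactor = 45);

-- Idea 1: update in batches and run an explicit VACUUM between batches to keep the bloat at bay.
UPDATE big_table SET flag = true WHERE id BETWEEN 1 AND 100000;
VACUUM big_table;
UPDATE big_table SET flag = true WHERE id BETWEEN 100001 AND 200000;
VACUUM big_table;
-- ...repeat for the remaining id ranges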
-
RE: How to avoid replication lag in case all writes on master and reads on replica?
Replication always has lag, if only a very small amount, because your data changes are not even written to the binary log until you commit a transaction. Then the replica has to download those events in the binary log and apply them on the replica instance. This is normally very quick, but by definition, it is > 0 lag for every event.
It is also possible for your application to attempt to read data it just wrote, before it commits its own transaction. In that case, there's no way the replica can have the same change, since it hasn't even been committed on the source instance yet.
If your app needs absolutely current data, with no possibility of reading stale data, then the queries must read from the source instance, not a replica.
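To illustrate the read-after-write point, a small sketch (the orders table and values here are made up):

START TRANSACTION;
INSERT INTO orders (customer_id, total) VALUES (42, 99.90);
-- A SELECT against a replica at this point cannot see the new row: it has not even been
-- committed on the source yet, so it is not in the binary log.
SELECT * FROM orders WHERE customer_id = 42;  -- must run on the source to see the uncommitted write
COMMIT;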
But not every read needs to read such strictly current data. Each query in your application has potentially different sensitivity to reading data that is lagging a little bit behind the source.
I wrote a presentation https://www.percona.com/resources/technical-presentations/readwrite-splitting-mysql-and-php-percona-mysql-webinars for Percona, which describes several different solutions for different levels of replication lag tolerance.
-
RE: Create a select query to get employee name, total salary of employee, hobby name(comma-separated - you need to use subquery for hobby name)
The lack of ddl and sample data makes it difficult to guess where the problem might be, but the following looks very suspicious:
INNER JOIN hobby ON h.id = eh.fk_hobby_id
It means that you will multiply the number of hobbies for each employee with the cardinality of the hobby table. You may want to try:
SELECT CONCAT(e.first_name, ' ', e.last_name) AS full_name
     , SUM(es.salary) AS total_salary
     , (SELECT GROUP_CONCAT(h.name)
        FROM hobby h
        WHERE h.id = eh.fk_hobby_id) AS hobby_name
FROM employee_hobby eh
INNER JOIN employee e
    ON e.id = eh.fk_employee_id
INNER JOIN employee_salary es
    ON es.fk_employee_id = eh.fk_employee_id
GROUP BY eh.fk_employee_id;
Group by eh.fk_employee_id may or may not be correct, but it is hard to tell without ddl for the tables.
-
ALTER TABLE ADD COLUMN in batches but with NEWSEQUENTIALID()
I have a large table, but not huge (less than 2 million rows on aging hardware), and when adding a non-nullable column to an existing table, I usually follow the structure set out at https://dba.stackexchange.com/a/188410/45757 to avoid problems with the script timing out when our database migrations run on deployment (FYI, this isn't a full-text index problem).
So, in summary, I:
- Alter the table and add the column as NULL and do not add a default constraint
- Backfill the column in batches
- Alter the table and change the column to be NOT NULL and add the default constraint
However, in the following case I want to add a new UNIQUEIDENTIFIER column and fill it with NEWSEQUENTIALID() rather than NEWID() values.
Without running in batches, my script would look like this:
IF NOT EXISTS (
    SELECT NULL FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Invoice'
    AND COLUMN_NAME = 'InternalId')
BEGIN
    ALTER TABLE Invoice
    ADD InternalId UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT [DF_Invoice_InternalId] DEFAULT (NEWSEQUENTIALID())
END
GO
However, if I split this up into batches and attempt to fill the nullable InternalId with the following:
IF NOT EXISTS (
    SELECT NULL FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Invoice'
    AND COLUMN_NAME = 'InternalId')
BEGIN
    ALTER TABLE Invoice
    ADD InternalId UNIQUEIDENTIFIER NULL
END
DECLARE @MaxId INT, @LoopStart INT, @LoopEnd INT, @LoopSize INT = 50000
SELECT @MaxId = MAX(InvoiceId) FROM Invoice
SELECT @LoopStart = MIN(InvoiceId) FROM Invoice
SET @LoopEnd = @LoopStart + @LoopSize

PRINT 'Updating InternalIds to a new GUID'
WHILE @LoopStart <= @MaxId
BEGIN
-- update internal id
UPDATE I
SET InternalId = NEWSEQUENTIALID()
FROM Invoice I
WHERE I.InvoiceId BETWEEN @LoopStart AND @LoopEnd

    SET @LoopStart = @LoopEnd + 1
    SET @LoopEnd = @LoopEnd + @LoopSize
END
IF EXISTS (
SELECT NULL FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Invoice'
AND COLUMN_NAME = 'InternalId'
AND IS_NULLABLE = 'YES')
BEGIN
ALTER TABLE Invoice
ALTER COLUMN InternalId UNIQUEIDENTIFIER NOT NULL
END

IF NOT EXISTS (SELECT NULL FROM sys.objects WHERE name = 'DF_Invoice_InternalId')
BEGIN
ALTER TABLE Invoice
ADD CONSTRAINT [DF_Invoice_InternalId]
DEFAULT ((NEWSEQUENTIALID())) FOR [InternalId]
END
I get the following error:
Msg 302, Level 16, State 0, Line 40 The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression.
Any tips on how to work around this? Or am I over-thinking this?
The reason for this change is to expose the sequential IDs (InternalId, or it could be called PublicId) externally in an API, as a replacement for the current sequential numeric IDs (InvoiceId). The numeric ID (the primary key) should have been kept internal, as it exposes a sequential and guessable internal value. The sequential GUID is still sequential, but not so easily guessable. To illustrate the point, I'm doing something like this, but @First is being provided via an API call. It is used for polling and processing new invoices using a watermarking process.
CREATE TABLE #Test (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
    DateCreated DATETIME NOT NULL DEFAULT(GETDATE()),
    Code NVARCHAR(50) NOT NULL
)
DECLARE @Id INT
DECLARE @NO_OF_CHARS INT = 10
SET @Id = 1

WHILE @Id <= 12000
BEGIN
    INSERT INTO #Test (Code) VALUES (SUBSTRING(REPLACE(CONVERT(VARCHAR(40), NEWID()), '-', ''), 1, @NO_OF_CHARS))
SET @Id = @Id + 1
END

ALTER TABLE #Test
    ADD InternalId UNIQUEIDENTIFIER NOT NULL DEFAULT(NEWSEQUENTIALID())

DECLARE @First UNIQUEIDENTIFIER
SELECT * FROM #Test
SELECT @First = InternalId FROM #Test WHERE Id = 1
SELECT * FROM #Test WHERE InternalID > @First

DROP TABLE #Test
-
SSMS v18.9.2 Display Estimated Execution Plan returning XML error
From SSMS v18.9.2, when I highlight a simple SELECT query like
select * from sys.tables
in the master database (or any database) and display the estimated execution plan, I get the following message:
Anyone know what is causing this and how to fix it? Thanks in advance.
-
RE: sqldeveloper export tool ignores filter by schema
You don't have to click "More" or "Lookup". On the right side, "All" is shown by default, which means all objects will be exported, as illustrated in the image below.
-
Making an index a unique index in very large MySQL table within one transaction - is the following approach safe?
For the purpose of optimizing SELECT statements I am trying to make an index UNIQUE with the following SQL statement in MySQL:
ALTER TABLE credentials DROP INDEX special_credential_id, ADD UNIQUE KEY special_credential_id(special_credential_id)
My question is: is this one transaction? That is, if creating the unique index fails, will the old special_credential_id index still be there? Usually it would be easy to simply create a new index, but we are talking about a table containing 100 million entries.
-
How to prevent a user from using the \du command in PostgreSQL?
Are there any possibilities to prevent users from invoking the \du command in Postgres?
-
vacuum full shrinks newly created and ordered table size
I have created an empty table and populated it with the generate_series function. I looked at the table size, and it is oddly 64 kB: each page keeps 226 integers, so there should be 5 blocks, and thus its size should be 40 kB.
Then I did a full vacuum and its size dropped to 40 KB.
The questions:
- why is the table size bigger than the number of blocks?
- what does a full vacuum do to a newly created table?
create table tbl (data int);
postgres=# insert into tbl (data) SELECT * from generate_series(1,1000);
INSERT 0 1000
postgres=# \d+
List of relations
Schema | Name | Type | Owner | Persistence | Access method | Size | Description
--------+------+-------+----------+-------------+---------------+-------+-------------
public | tbl | table | postgres | permanent | heap | 64 kB |
(1 row)

postgres=# vacuum FULL tbl ;
VACUUM
postgres=# \d+
List of relations
Schema | Name | Type | Owner | Persistence | Access method | Size | Description
--------+------+-------+----------+-------------+---------------+-------+-------------
public | tbl | table | postgres | permanent | heap | 40 kB |
(1 row)
-
What do these weird errors logged mean?
Today, I got these errors logged for the first time:
stats_timestamp 2022-01-16 10:58:25.625142+01 is later than collector's time 2022-01-16 10:58:23.356175+01 for database 0
statistics collector's time 2022-01-16 10:58:25.625142+01 is later than backend local time 2022-01-16 10:58:23.355776+01
It worries me. What causes it to happen? What does it mean? I found nothing whatsoever online.
Windows 10 Pro 2009. PostgreSQL 14.1.