    SOFTWARE TESTING
    Posts made by charlenef

    • RE: SQL Server Login Access getting revoked by a daily database refresh

      You could try creating a SQL Agent job that runs once an hour and checks if the permission is present. If not, add the permission back. With it being a dev region, I wouldn't be super worried about this approach.

      Something like...

      IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE [name] = '<dev_user>')
          CREATE USER [<dev_user>] FROM LOGIN [<dev_user>];

      ALTER ROLE db_datareader ADD MEMBER [<dev_user>];

      posted in SQL
    • How to get part numbers that have at least one source type 'Website' and at least one stockid equal to 1, based on PartNumber?

      I work on SQL Server 2017 and I'm facing an issue: I can't get the part numbers that have at least one row with source type 'Website' and at least one row with stockid equal to 1, per PartNumber.

      In other words, I need a SELECT query that returns the part numbers that have both a 'Website' source type and a stockid of 1, grouped by PartNumber.

      sample data

      create table #stockyes
      (
          PartNumber varchar(100),
          sourcetype varchar(100),
          stockid tinyint
      )

      insert into #stockyes (PartNumber, sourcetype, stockid)
      values
      (1233, 'Website',   1),
      (1233, 'Website',   0),
      (1233, 'Datasheet', 1),
      (8901, 'Website',   1),
      (8901, 'Website',   0),
      (9015, 'Website',   1),
      (7801, 'Website',   1),
      (7801, 'Website',   1),
      (9401, 'Datasheet', 0),
      (5688, 'Datasheet', 0),
      (5688, 'Datasheet', 0),
      (5688, 'Lifecycle', 2),
      (5688, 'Lifecycle', 2),
      (8409, 'Lifecycle', 1),
      (6703, 'Website',   0),
      (6703, 'Website',   0),
      (6703, 'Website',   0),
      (3099, 'Website',   0),
      (3099, 'Website',   0),
      (8541, 'Website',   0)

      expected result

      The expected result is the part numbers that have at least one 'Website' row and at least one stockid = 1 row.
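      For what it's worth, one common way to express "at least one Website row AND at least one stockid = 1 row per PartNumber" is conditional aggregation in a HAVING clause. Below is a runnable sketch using Python's sqlite3 with the sample data from the post; in T-SQL the boolean sums would become CASE expressions, e.g. SUM(CASE WHEN sourcetype = 'Website' THEN 1 ELSE 0 END).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stockyes (PartNumber TEXT, sourcetype TEXT, stockid INTEGER);
INSERT INTO stockyes VALUES
('1233','Website',1), ('1233','Website',0), ('1233','Datasheet',1),
('8901','Website',1), ('8901','Website',0),
('9015','Website',1),
('7801','Website',1), ('7801','Website',1),
('9401','Datasheet',0),
('5688','Datasheet',0), ('5688','Datasheet',0),
('5688','Lifecycle',2), ('5688','Lifecycle',2),
('8409','Lifecycle',1),
('6703','Website',0), ('6703','Website',0), ('6703','Website',0),
('3099','Website',0), ('3099','Website',0),
('8541','Website',0);
""")
rows = conn.execute("""
    SELECT PartNumber
    FROM   stockyes
    GROUP  BY PartNumber
    HAVING SUM(sourcetype = 'Website') > 0   -- at least one Website row
       AND SUM(stockid = 1) > 0              -- at least one stockid = 1 row
    ORDER  BY PartNumber
""").fetchall()
```

      Against the sample data this returns 1233, 7801, 8901, and 9015.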

      posted in SQL
    • foreign key table update impact

      Let's say there is an Orders table with a few foreign keys. One of them is addressId.

      Orders
      -- Id
      -- orderNum
      -- customerId
      -- addressId (foreign)
      

      CustomerAddress
      -- Id
      -- customerId
      -- street
      -- cityId
      -- areaId
      -- postal
      -- geo

      Customer
      -- Id
      -- Name
      -- more columns

      Now, when the customer updates the address in their profile, the updated address will show up even for past orders that are linked to that record, which is not good.

      My proposed solution is:

      1. create a new table (OrderAddress).
      2. when an order is placed, insert from CustomerAddress to OrderAddress.
      3. get addressId of the newly inserted record from OrderAddress and save it in the Order record.

      This way I make sure the order address does not change when the original address is modified. But it's kind of expensive.

      Is this good practice?
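      The three numbered steps above can be sketched end to end. This is a minimal illustration using Python's sqlite3, with table and column names simplified from the post; in SQL Server you would typically capture the new id with SCOPE_IDENTITY() or an OUTPUT clause instead of last_insert_rowid().

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CustomerAddress (id INTEGER PRIMARY KEY, customerId INTEGER, street TEXT);
CREATE TABLE OrderAddress    (id INTEGER PRIMARY KEY, customerId INTEGER, street TEXT);
CREATE TABLE Orders          (id INTEGER PRIMARY KEY, customerId INTEGER, addressId INTEGER);
INSERT INTO CustomerAddress VALUES (1, 7, 'Old Street 1');
""")
# step 2: snapshot the current address into OrderAddress at order time
conn.execute("""
    INSERT INTO OrderAddress (customerId, street)
    SELECT customerId, street FROM CustomerAddress WHERE customerId = 7
""")
# step 3: capture the new addressId and store it on the order
addr_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
conn.execute("INSERT INTO Orders (customerId, addressId) VALUES (7, ?)", (addr_id,))
# later, the customer edits their profile address...
conn.execute("UPDATE CustomerAddress SET street = 'New Street 9' WHERE customerId = 7")
# ...but the order still shows the address as it was when the order was placed
order_street = conn.execute("""
    SELECT oa.street FROM Orders o JOIN OrderAddress oa ON oa.id = o.addressId
""").fetchone()[0]
```

      After the profile update, order_street is still 'Old Street 1', which is exactly the isolation the OrderAddress copy buys you.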

      posted in SQL
    • RE: GROUP BY date column and then group by custom daytime

      I don't understand how preprocess_id was produced or how an actual hour was mapped to arrive_timeday in your desired result. Anyway, you can use a CASE expression or a mapping table to map to the number.

      Using CASE expression

      SELECT arrive_date, arrive_timeday, arrive_location
           , jsonb_object_agg(thing_type, total_things)
      FROM  (
         SELECT arrive_date
              , arrive_timeday
              , arrive_location
              , thing_type
              , sum(quantity) AS total_things
         FROM   (
             SELECT date_trunc('day', arrive_date) AS arrive_date
                  , case 
                    when extract(hour from arrive_date) in (5, 6, 7 , 8, 9) then 0
                    when extract(hour from arrive_date) in (10, 11) then 1
                    when extract(hour from arrive_date) in (12, 13, 14) then 2
                    when extract(hour from arrive_date) in (15, 16, 17 ,18 ,19, 20) then 4
                    when extract(hour from arrive_date) in (21, 22, 23, 0, 1, 2, 3, 4) then 8
                    end arrive_timeday
                  , arrive_location
                  , thing_type
                  , quantity
             FROM   inventory
         ) inv
         GROUP  BY arrive_date, arrive_timeday, arrive_location, thing_type
         ) sub
      GROUP  BY arrive_date, arrive_timeday, arrive_location
      ORDER  BY arrive_date, arrive_timeday, arrive_location;
      

      Using mapping table:

      Table creation:

      CREATE TABLE hour_mapping (
        hour_from integer NOT NULL,
        hour_to integer NOT NULL,
        timeday integer NOT NULL,
        descpt character varying NOT NULL
      );
      

      INSERT INTO hour_mapping (hour_from, hour_to, timeday, descpt) VALUES
      (5, 9, 0, 'morning')
      , (10, 11, 1, 'mid_morning')
      , (12, 14, 2, 'midday')
      , (15, 20, 4, 'evening')
      , (21, 23, 8, 'night')
      , (0, 4, 8, 'night')
      ;

      Query

      SELECT arrive_date, arrive_timeday, arrive_location
           , jsonb_object_agg(thing_type, total_things)
      FROM  (
         SELECT date_trunc('day', arrive_date) AS arrive_date
              , m.timeday as arrive_timeday
              , arrive_location
              , thing_type
              , sum(quantity) AS total_things
         FROM  inventory inv, hour_mapping m
         WHERE extract(hour from inv.arrive_date) between m.hour_from and hour_to
         GROUP  BY arrive_date, arrive_timeday, arrive_location, thing_type
         ) sub
      GROUP  BY arrive_date, arrive_timeday, arrive_location
      ORDER  BY arrive_date, arrive_timeday, arrive_location;
      

      https://dbfiddle.uk/?rdbms=postgres_13&fiddle=5e878c9fea252a97e3ea14f45d016c50

      posted in SQL
    • RE: Looking for visualization to understand how an allocation order scan results in missing/duplicate rows?

      In read uncommitted isolation mode, SQL server engine may decide to use the index-order scan or index order allocation scan.

      There is no such thing as an "index order allocation scan". There are index-order scans and allocation-order scans.

      I am looking for visualization of this.

      I'm not going to draw a picture, but imagine an index with pages like this:

      page   keys      next page
       500   000-099   300
       300   100-199   700
       700   200-299   -

      The page numbers are their physical position in the file. So page 500 is the 500th 8KB page in the physical file.

      Notice physical page IDs do not match index key order.

      For example, physical page 500 contains keys (000-099), which are lower than those on page 300 (100-199). This is perfectly normal and allowed.

      In this (very) simplified example, a single IAM page covers this allocation range. The IAM page tells us that pages 300, 500, and 700 belong to this index (and that's all it tells us).

      Now an allocation-order scan of the index starts at page 300 (the lowest physical page number that belongs to this index according to the IAM).

      Meanwhile, another process causes page 500 to split:

      page   keys      next page
       500   000-049   100
       300   100-199   700
       700   200-299   -
       100   050-099   300

      Half the rows have moved from page 500 to (new) page 100. Since we are using an allocation-order scan, we do not follow the next-page pointers. Instead, we continue with the next higher-numbered page after page 300, which is 500, then 700.

      We never encounter the rows on page 100. These are 'missing' due to the page split and allocation-order scan.


      If the split of page 500 had resulted in a new page numbered higher than 300, we would have encountered it during the allocation-order scan. No missing rows, and no duplicates.


      As a final example, imagine we have read page 300, and are now processing page 500. At this point, another process causes page 300 to split, creating new page 600. We will encounter the rows on page 600 twice: Once when we read page 300 before the split, and once when we encounter page 600 after the split.
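      The whole scenario is easy to simulate. Here is a toy Python sketch of the first example, with the page numbers and key ranges taken from the tables above:

```python
# Physical page id -> keys stored on that page (simplified: one key per "row")
pages = {500: list(range(0, 100)),    # keys 000-099
         300: list(range(100, 200)),  # keys 100-199
         700: list(range(200, 300))}  # keys 200-299

seen = []
current = min(pages)                  # allocation-order scan starts at page 300
seen += pages[current]

# Concurrent page split: half of page 500's rows move to new page 100,
# which is numbered LOWER than our current scan position.
pages[100] = pages[500][50:]          # keys 050-099 move to page 100
pages[500] = pages[500][:50]          # keys 000-049 stay on page 500

# The scan continues with pages numbered higher than the current position,
# never revisiting lower-numbered pages.
for page_id in sorted(pages):
    if page_id > current:
        seen += pages[page_id]

missing = set(range(300)) - set(seen)  # keys 050-099 are never read
```

      If the new page had instead been numbered higher than 300, the loop would have picked it up and nothing would be missed.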

      posted in SQL
    • RE: Backfilling data using generate time series

      Assuming you want one row per day and group name:

      WITH incomplete_table(date, group_name, value) AS (
         VALUES 
           ('2022-01-01'::date, 'a', 1)
         , ('2022-01-15'::date, 'a', 0.5)
         , ('2022-01-31'::date, 'a', 0.2)
         , ('2022-01-02'::date, 'b', 0.1)
         , ('2022-01-10'::date, 'b', 0.15)
         , ('2022-01-20'::date, 'b', 0.15)
         )
      SELECT d.date, g.group_name, i.value
      FROM  (
         SELECT date::date
         FROM   generate_series (timestamp '2022-01-01'
                               , timestamp '2022-01-31'
                               , interval '1 day') date
         ) d
      CROSS  JOIN (SELECT DISTINCT group_name FROM incomplete_table) g  -- ①
      LEFT   JOIN LATERAL (
         SELECT i.group_name, i.value
         FROM   incomplete_table i
         WHERE  i.group_name = g.group_name
      AND    i.date <= d.date  -- ②
         ORDER  BY i.date DESC
         LIMIT  1
         ) i ON true
      ORDER  BY g.group_name, d.date DESC;
      

      db<>fiddle https://dbfiddle.uk/?rdbms=postgres_14&fiddle=6a683b65cd480b1c9c934626ed098b77

      ① If available, use a table "groups" providing distinct group names. Faster. Else, if the table is big, consider an emulated index-skip scan. See:

      • https://stackoverflow.com/a/66894500/939860

      ② The search for the latest value is not limited to the given time range unless you spell that out in the LATERAL subquery. About LATERAL:

      • https://stackoverflow.com/a/28557803/939860

      You still get value IS NULL where no earlier value is found.

      If incomplete_table is big, an index on (group_name, date) will help performance (a lot). Possibly even a "covering" index, adding column value. See:

      • https://dba.stackexchange.com/questions/81456/slow-index-scans-in-large-table/81554#81554

      Very similar case with more explanation:

      • https://dba.stackexchange.com/questions/266009/return-previous-running-total-when-value-is-null-in-a-time-series/266013#266013
      posted in SQL
    • RE: Deadlock and locks on multitenant database

      Without specific details on individual examples of locking and / or deadlocking issues you're facing, there's not much of a dedicated answer that can be provided. If you're aware of a specific case or two that is particularly troublesome, especially with the deadlocks, providing the queries, DDL of the entities involved, the deadlock graphs, and the execution plans, in a targeted question, would all be very useful to trying to improve those particular cases.

      That being said, a general potential fix you could look into is changing the isolation level on the database to something more conducive to your locking tolerance. By default SQL Server uses the Read Committed isolation level, which employs pessimistic concurrency - essentially writers blocking readers and other writers during data changes (see https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/optimistic-concurrency for the contrast with optimistic concurrency). One particular isolation level you might be interested in is Read Committed Snapshot Isolation (https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server), aka RCSI.

      RCSI helps reduce locks during concurrency of DELETES and UPDATES with SELECT statements by using row versioning to preserve the data at different points in time. This comes at the tradeoff of additional resource overhead on the server, generally with tempdb. It is considered optimistic concurrency by nature.

      You may find these additional readings on RCSI informative:

      1. https://dba.stackexchange.com/a/188034/150011
      2. https://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
      3. https://www.littlekendra.com/2016/02/18/how-to-choose-rcsi-snapshot-isolation-levels/
      posted in SQL
    • Why is psql not asking for db password when using pg_service.conf

      I've opened an SSH tunnel to connect to a remote server as follows:

      $ ssh -f -N -L 5433:127.0.0.1:5432 username@servername
      

      This tunnel has been opened precisely as follows (as shown by ps aux | grep ssh):

      ssh -f -N -L 5433:127.0.0.1:5432 username@servername
      

      I do have a ~/.pg_service.conf with:

      [my-pg-service]
          host=127.0.0.1
          port=5433
          dbname=mydatabase
          user=pguser
      

      I also defined a second service entry that just adds the passfile:

      [my-pg-service-2]
      host=127.0.0.1
      port=5433
      dbname=mydatabase
      user=pguser
      passfile=~/.pgpass

      When running:

      $ psql service=my-pg-service
      

      it currently (and surprisingly) connects without asking for the database password! (Could it be that it's stored in some sort of cache because I already used it before running that command?)

      But when using psql service=my-pg-service-2 it actually asks for the database password.
      I hoped it would behave the opposite way!

      my ~/.pgpass (chmod 0600) file looks like:

      #hostname:port:database:username:password
      # Remote pg database on server servername when using an SSH tunnel (5433)
      127.0.0.1:5433:mydatabase:pguser:8+k3&4d2ihs1=&gp!*y)62xoh+^^z$&*ino!66jj()(yw@o36

      Please note that this command is also asking for the database password:

      $ psql -d postgres://pguser@localhost:5433/mydatabase
      

      But this one is not:

      $ psql -d postgres://pguser@127.0.0.1:5433/mydatabase
      

      (I only changed localhost to 127.0.0.1).

      And this is precisely because there is no such line starting with localhost in the .pgpass file:

      localhost:5433:mydatabase:pguser:8+k3&4d2ihs1=&gp!*y)62xoh+^^z$&*ino!66jj()(yw@o36
      

      If I add this line, psql connects without prompting for a password in both previous cases, but I still have the issue with the pg service, e.g. when specifying passfile=~/.pgpass it asks for a password, and when not, it doesn't.

      Can someone please explain this behaviour and what I did wrong?
      Could this be linked to a server-side configuration that is too permissive?

      Just in case: the database is dockerized on the remote server. It uses the https://registry.hub.docker.com/r/postgis/postgis/ image (13-3.2), which presents these default characteristics:

      $ docker run \
        --rm \
        --name postgis \
        -e POSTGRES_DB=postgres \
        -e POSTGRES_USER=postgres \
        -e POSTGRES_PASSWORD=changeme \
        -d postgis/postgis:13-3.2
      

      $ docker exec -it postgis bash -c "tail -n 21 /var/lib/postgresql/data/pg_hba.conf"

      # CAUTION: Configuring the system for local "trust" authentication
      # allows any local user to connect as any PostgreSQL user, including
      # the database superuser.  If you do not trust all your local users,
      # use another authentication method.

      # TYPE  DATABASE        USER            ADDRESS                 METHOD

      # "local" is for Unix domain socket connections only
      local   all             all                                     trust
      # IPv4 local connections:
      host    all             all             127.0.0.1/32            trust
      # IPv6 local connections:
      host    all             all             ::1/128                 trust
      # Allow replication connections from localhost, by a user with the
      # replication privilege.
      local   replication     all                                     trust
      host    replication     all             127.0.0.1/32            trust
      host    replication     all             ::1/128                 trust

      host all all all md5

      OS: Ubuntu 21.10
      PG: 14

      Doc:
      https://www.postgresql.org/docs/14/libpq-pgservice.html
      https://www.postgresql.org/docs/14/libpq-connect.html#LIBPQ-PARAMKEYWORDS
      https://www.postgresql.org/docs/14/libpq-pgpass.html

      posted in SQL
    • RE: Update all rows with a one to many relationship count in the same table in MYSQL

      my production server is running 10.3.31-MariaDB however I can switch it to MYSQL 8 if needed. – webmaster8800

      This solution is applicable for both MySQL 8+ and MariaDB 10.3+

      UPDATE users t1
      JOIN ( SELECT referred_by_user_id, COUNT(*) cnt
             FROM users t2
             GROUP BY 1 ) t3 ON t1.id = t3.referred_by_user_id
      SET t1.invite_count = t3.cnt;
      

      https://dbfiddle.uk/?rdbms=mysql_8.0&rdbms2=mariadb_10.3&fiddle=beae1375004f1b0361ba31d54e3feeac

      posted in SQL
    • RE: What kind of object is made when we run CREATE VIEW, and what sort of commands can be run on it?

      You can think of a view as a non-materialized query definition with a referenceable name. It doesn't store any data itself; rather, when you query a view the data comes from the underlying entities that compose the query the view represents - effectively the tables that actually store the data at the lowest level. The columns of a view take on the data types of the source entities they ultimately come from.

      DML statements can be executed against a view, such as deleting and updating data, which will cause the view to act as a passthrough of such statements to the underlying tables. But generally there are restrictions on when and how a view can be updated, for example usually only columns from the same underlying table can be updated in the same statement against a view. While I couldn't find a direct reference to these restrictions in MS Access, you should be able to find somewhat helpful information on the same restrictions in https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-ver15#updatable-views in SQL Server which parallels MS Access to a degree.
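      A quick illustration of the "non-materialized" point, sketched with Python's sqlite3 (any engine behaves the same way for plain views): the view stores no rows of its own, so data inserted into the base table after the view was created is immediately visible through it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE VIEW big_orders AS SELECT id, amount FROM orders WHERE amount > 100;
""")
conn.execute("INSERT INTO orders VALUES (1, 50), (2, 150)")
# the view reflects the insert immediately -- no data is stored in the view itself
rows = conn.execute("SELECT id, amount FROM big_orders").fetchall()
```

      Querying big_orders here returns only the 150 row, straight from the base table at query time.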

      posted in SQL
    • SQL to handle multiple AND conditions

      I have the following data:

      Institution_no   Cred_type
      1                1
      1                2
      2                1

      I have a list box so the user can select multiple Cred_type, they want to be able to select either OR or AND conditions.

      For the OR case I have something like: AND CRED_TYPE IN (1, 2)

      For the AND, I am really scratching my head. What they are saying is they want a list of institutions that have cred type 1 and cred_type 2. Perhaps I'm not thinking clearly but this is row by row, so doing this would lead to no results.

      AND cred_type = 1 AND cred_type = 2 -- you can't have a single row have two different values, this would return no results.

      They require that the user can select 10, 20, or more, so writing out a bunch of code for each and combining them all would be really tough - but this is the only idea I've had so far. It would be like this:

      Select institution_no from table where cred_type = 1
      UNION 
      Select institution_no from table where cred_type = 2
      

      -- this would combine both and get me what I want, but you could imagine all the code for 10 or 20 of these.
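      For reference, the usual way to avoid the UNION-per-value explosion is relational division: filter to the selected types, group by institution, and demand that the count of distinct matching types equals the number of types selected. A sketch using Python's sqlite3 with the sample data from the post; the placeholder-building shown is illustrative, and parameter handling would depend on your application language.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE creds (institution_no INTEGER, cred_type INTEGER);
INSERT INTO creds VALUES (1, 1), (1, 2), (2, 1);
""")
wanted = (1, 2)                           # the list-box selection
placeholders = ",".join("?" * len(wanted))
rows = conn.execute(f"""
    SELECT institution_no
    FROM   creds
    WHERE  cred_type IN ({placeholders})
    GROUP  BY institution_no
    HAVING COUNT(DISTINCT cred_type) = ?  -- must match ALL selected types
""", (*wanted, len(wanted))).fetchall()
```

      Only institution 1 comes back, because it is the only one holding both cred types; the same HAVING trick scales to 10 or 20 selections without any extra code.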

      posted in SQL
    • configure tempdb size

      I have a disk of 743 GB which contains the tempdb database.

      There was a request to set the tempdb max size to 669 GB, keeping 10% of the disk as free space.

      I have 8 tempdb data files plus a log file.

      How can I distribute these values between the mdf/ndf and ldf files?

      Thanks

      posted in SQL
    • RE: SSMS v18.9.2 Display Estimated Execution Plan returning XML error

      It looks like the issue was my SET TEXTSIZE option was set to 0. I changed it to 2147483647 and the issue went away.

      Go to Tools -> Options -> Query Execution -> SQL Server -> General. Then, in the SET TEXTSIZE box, enter 2147483647, and click OK.

      posted in SQL
    • how to update id as row number in MySQL 5.7

      I have a table that did not contain an id column. I've now added the id column and want to populate it with the row number. What should I do? This is my table DDL:

      CREATE TABLE `spark_car_brand` (
        `alpha` varchar(255) DEFAULT NULL,
        `id` int(11) DEFAULT NULL,
        `brand_id` varchar(255) DEFAULT NULL,
        `brand_name` varchar(255) DEFAULT NULL,
        `brand_logo` varchar(255) DEFAULT NULL
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
      

      I tried this SQL, but it doesn't seem to work:

      update spark_car_brand
      set id = ROW_NUMBER();
      
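      For context, MySQL 5.7 has no ROW_NUMBER() window function (it arrived in 8.0); the usual 5.7 workaround is a session variable, along the lines of SET @rn := 0; UPDATE spark_car_brand SET id = (@rn := @rn + 1); - with the caveat that without an ORDER BY the assignment order is not guaranteed. Below is a runnable sketch of the same idea using Python's sqlite3, where the table's implicit rowid can serve directly:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE spark_car_brand (alpha TEXT, id INTEGER, brand_name TEXT);
INSERT INTO spark_car_brand (alpha, brand_name) VALUES
('A', 'Audi'), ('B', 'BMW'), ('C', 'Chevrolet');
""")
# Backfill id from the table's implicit rowid (insertion order)
conn.execute("UPDATE spark_car_brand SET id = rowid")
rows = conn.execute("SELECT id, brand_name FROM spark_car_brand ORDER BY id").fetchall()
```

      The brand names here are made-up placeholders; the point is only that id ends up as a 1-based sequence.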
      posted in SQL
    • RE: Unable to connect to Database Engine after trying to move tempdb

      I believe you should be able to startup the SQL Server instance in minimal configuration mode and connect to it via the command prompt with SQLCMD, as discussed in this article: https://www.xtivia.com/blog/start-sql-server-lost-tempdb-data-files/ . Note this will put the server in single user mode.

      After you connect to it, you should be able to execute ALTER DATABASE tempdb MODIFY FILE and modify the file locations to an appropriate location. Then shutdown and restart the instance.

      Steps from the aforementioned article for reference:

      1. Open command prompt window #1 as ADMIN and go to the BINN directory where SQL Server is installed.
      2. Then execute the command Sqlservr.exe /f /c
      3. Then open one more command window #2 and, if this is a default instance, open SQLCMD using the following command: SQLCMD -S localhost -E
      4. Execute the ALTER DATABASE command: USE MASTER GO ALTER DATABASE tempdb MODIFY FILE (NAME = tempdb, FILENAME = 'C:\NEWPATH\tempdb.mdf') GO
      5. Now go back to command window #1 and hit CTRL+C. It will ask if you want to stop the instance (Y/N). Enter Y.
      6. Now start the SQL Server instance from Configuration Manager. You should see the new data file created and SQL Server started.
      posted in SQL
    • Constant rebuilding indexes to fix performance - why?

      We have a database server (2016 SQL Server), that we have added a step of 'rebuilding indexes' to the deployment process.

      In decades of working with MS SQL Server at many companies, I've never ONCE had to rebuild the indexes in order to fix a performance problem.

      Yet we do it at least every 2 weeks, and often more frequently than that. And I'm told "yep! that fixed the problem!" It seems to me it is much more likely to have fixed a symptom. I know I've had an issue with a database where a query would run for > 10 minutes the first time, and in a few milliseconds after it had completed once. (In this case, it had built a temp index on Right(field,8) that someone was using in the query, and adding an index on that fixed it.)

      I'm thinking maybe a rebuild is causing everything to be loaded into memory, so the server has the entire index right there and ready to use?

      Has anyone else seen this, and is this an indication of another issue that maybe we can fix?
      (More RAM, better disks, or something?)

      posted in SQL
    • Pick specific column value in a group of rows based on condition

      I am working with MariaDB server version 10.6.4.

      This is my table definition:

      CREATE TABLE `tmp_dba` (
        `ID` bigint(20) NOT NULL AUTO_INCREMENT,
        `case_id` bigint(20) DEFAULT NULL,
        `client_id` bigint(20) NOT NULL,
        `arrival` date DEFAULT NULL,
        `departure` date DEFAULT NULL,
        PRIMARY KEY (`ID`)
      )
      

      Here is some example data:

      INSERT INTO `tmp_dba` VALUES
      (1,10,1000,'2018-10-02','2019-04-25'),
      (2,10,1000,'2019-04-26','2019-05-01'),
      (3,10,1000,'2019-05-02',NULL),
      (4,20,2000,'2018-11-21',NULL),
      (5,20,2001,'2018-11-21',NULL),
      (6,20,2002,'2018-11-21',NULL),
      (7,30,3000,'2019-03-04','2022-01-01'),
      (8,30,3001,'2019-03-04','2022-01-01'),
      (9,30,3002,'2019-03-04','2022-01-01'),
      (10,30,3003,'2019-03-04','2022-01-01'),
      (11,30,3004,'2019-03-04','2022-01-01');
      

      What I would like to achieve is to determine the MIN(arrival) of each group of case_id and client_id; MAX(departure) should be displayed only when no departure in the group is null, otherwise null.

      I want to end up with just one row per client, providing the data mentioned above.

      For example for case_id = 10 I want to see 1 row like that: 10;1000;2018-10-02;NULL.

      For case_id = 20 the result should be 4 rows, because of 4 different combinations of case_id and client_id.

      For case_id = 30 there should be 5 rows shown, because of 5 different combinations of case_id and client_id.

      I must be doing something wrong when using group by.

      Further Information:

      • Data is consecutive in the meaning that a new record entered will have a higher ID than the old one. Also a new record will be - relative to the one before - in the future, always. This does not mean that the next record always would have an arrival the day after last departure.
      • A new record in the table for the same customer will always set the departure date of the previous record. The previous record departure in that case can not be null.
      • When there is no departure and the client is "in" still, departure will be null always.
      • Departure must be greater than arrival, unless departure is null.
      • Arrival can not be null.

      Is this doable?
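      One way to get there, assuming the requirement is "show MAX(departure) only when no departure in the group is null": compare COUNT(*) with COUNT(departure), which skips nulls. A runnable sketch with Python's sqlite3 using the post's data; the same query should work unchanged on MariaDB 10.6.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tmp_dba (ID INTEGER PRIMARY KEY, case_id INTEGER,
                      client_id INTEGER, arrival TEXT, departure TEXT);
INSERT INTO tmp_dba VALUES
(1,10,1000,'2018-10-02','2019-04-25'),
(2,10,1000,'2019-04-26','2019-05-01'),
(3,10,1000,'2019-05-02',NULL),
(4,20,2000,'2018-11-21',NULL),
(5,20,2001,'2018-11-21',NULL),
(6,20,2002,'2018-11-21',NULL),
(7,30,3000,'2019-03-04','2022-01-01'),
(8,30,3001,'2019-03-04','2022-01-01'),
(9,30,3002,'2019-03-04','2022-01-01'),
(10,30,3003,'2019-03-04','2022-01-01'),
(11,30,3004,'2019-03-04','2022-01-01');
""")
rows = conn.execute("""
    SELECT case_id, client_id,
           MIN(arrival) AS arrival,
           -- show MAX(departure) only when NO departure in the group is null
           CASE WHEN COUNT(*) = COUNT(departure) THEN MAX(departure) END AS departure
    FROM   tmp_dba
    GROUP  BY case_id, client_id
    ORDER  BY case_id, client_id
""").fetchall()
```

      This yields one row per case_id/client_id combination, e.g. (10, 1000, '2018-10-02', NULL) for case 10.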

      Solution

      SELECT 
      -- First solution, but wrong, GROUP_CONCAT(ID) avoids error below
      -- ID, case_id, client_id, 

      Thank you very much for your help.

      Steffi

      posted in SQL
    • How to identify primary/secondary AOAG when running jobs in certain databases?

      We have 3 servers. Server 1 hosts a primary and a secondary AG replica; server 2 also hosts a primary (which is secondary on node 1) and a secondary (which is primary on node 1); server 3 is just DR. All of them have the same jobs.

      I'm trying to understand how we can run jobs against certain databases and determine whether the database is primary on that server or not. I would like to achieve this without hardcoding anything.

      I have these 2 queries, but they only work when an entire server is primary or secondary:

      if (select
              ars.role_desc
          from sys.dm_hadr_availability_replica_states ars
          inner join sys.availability_groups ag
          on ars.group_id = ag.group_id
          where ag.name = 'YourAvailabilityGroupName'
          and ars.is_local = 1) = 'PRIMARY'
      begin
          -- this server is the primary replica, do something here
      end
      else
      begin
          -- this server is not the primary replica, (optional) do something here
      end
      

      -- or
      DECLARE @ServerName NVARCHAR(256) = @@SERVERNAME
      DECLARE @RoleDesc NVARCHAR(60)

      SELECT @RoleDesc = a.role_desc
      FROM sys.dm_hadr_availability_replica_states AS a
      JOIN sys.availability_replicas AS b
      ON b.replica_id = a.replica_id
      WHERE b.replica_server_name = @ServerName

      IF @RoleDesc = 'PRIMARY'
      BEGIN
      PRINT 'OK. NEXT...'
      END
      else
      RAISERROR ('ERROR. JOB IS NOT GOING TO RUN BECAUSE THIS IS NOT THE PRIMARY INSTANCE.', -- Message text.
      16, -- Severity.
      1 -- State.
      );

      In this case now, I have multiple databases being primary and secondary (not readable).

      So when running a job in database X, I would like to run a function, for example, to determine that this database is not primary on that server, so the job stops:

      for example:

      IF cool_function_primary_or_secondary() <> 'PRIMARY'
      BEGIN
          -- do my stuff
          EXEC msdb..sp_stop_job @job_name = 'my job'
      END
      

      this will not fail the job.

      Also, I would need to set the database context to the database that will run the job in order to execute this, right? And if that database is not accessible, is it going to fail?

      posted in SQL
    • Why does SELECTing sum() return null instead of 0 when there are no matching records?

      This is but one of countless little details that make me frustrated all the time, forces me to memorize all kinds of special code and causes my queries to become uglier than they should have to be.

      Try this query on for size:

      SELECT sum(amount) FROM table WHERE conditions;
      

      If it finds no records to sum() the amount of, it returns empty/null/undefined instead of 0, causing the output to be unexpectedly empty in my application, instead of the "sum" of zero. But sum() means "the sum", so why not just return 0?

      I am aware of the solution. You "simply" do:

      SELECT COALESCE(sum(amount), 0) FROM table WHERE conditions;
      

      Now it will return 0 even if there are no records. But it's ugly and it no longer feels "fun" to use. Not that databases are supposed to be a "fun game", but you know what I mean: if a query becomes too convoluted/"ugly", it no longer feels satisfying to use it, especially if you know that this will have to be repeated in all kinds of places and it's not just some obscure, one-off edge case.
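      Both behaviours are easy to demonstrate; this minimal sketch uses Python's sqlite3, but the SQL standard mandates the same NULL result from sum() over an empty set in every engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (amount INTEGER)")   # empty table: no matching records
bare = conn.execute("SELECT sum(amount) FROM t").fetchone()[0]
wrapped = conn.execute("SELECT COALESCE(sum(amount), 0) FROM t").fetchone()[0]
```

      Here bare comes back as NULL (None in Python) while wrapped comes back as 0.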

      What was the thought process behind making it behave like this? I have many other issues related to null, but I'll focus on this one thing for this question.

      posted in SQL
    • How to find the last value in MS SQL

      I have an example table like this:

      ID   UserId   Name
      1    00009    Jomy
      2    00012    Tomy
      3    00012    Tomy
      4    00012    Tomy
      5    00009    Jomy

      I want to get the last value for UserId = 00012 (the UserId from the row with ID 4). I have tried several queries like this:

      SELECT TOP 1 * 
      FROM TABLE 
      WHERE UserId = 00012 
      ORDER BY UserId DESC
      

      But it didn't work. Can you guys help me to find the correct query?
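      The query above orders by the very column it filters on, so every qualifying row ties; ordering by ID instead returns the last row. A runnable sketch with Python's sqlite3 (in SQL Server the equivalent would be SELECT TOP 1 ... ORDER BY ID DESC; note that in this sketch the UserId value must be quoted because the column holds strings):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (ID INTEGER, UserId TEXT, Name TEXT);
INSERT INTO users VALUES
(1,'00009','Jomy'), (2,'00012','Tomy'), (3,'00012','Tomy'),
(4,'00012','Tomy'), (5,'00009','Jomy');
""")
row = conn.execute("""
    SELECT * FROM users
    WHERE  UserId = '00012'      -- filter on the user
    ORDER  BY ID DESC            -- order by ID, not by the filtered column
    LIMIT  1
""").fetchone()
```

      This returns the row with ID 4, the last one for that user.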

      posted in SQL