You can use the following in your IF Controller to check if the file exists:
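For example, with JMeter's built-in __groovy function (the file path here is just a placeholder):

```groovy
${__groovy(new File("/path/to/file.csv").exists())}
```

Put this expression in the IF Controller's condition field and check "Interpret Condition as Variable Expression?" so the returned true/false is evaluated directly.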
I have a series of queries which depend on a where clause which looks like the following:
where a = <value1> and b = <value2> and c = <value3> and d = <value4>
The angle brackets represent conditions that are passed in as string parameters from a script containing my queries. I need to modify these queries so that, in some instances (which the script decides), the results are independent of one of those conditions. Is there some clever way to construct the parameter to accomplish that, so that my script can simply adjust the parameter when necessary and have it flow through all of the queries, rather than modifying each query or making larger modifications to my script?
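One trick that may work, depending on how the parameters are substituted (a sketch with made-up placeholder values): pass the column's own name as the parameter, so the condition degenerates into a tautology.

```sql
-- Parameter passed normally:
where a = 'x' and b = 'y' and c = 'z' and d = 'w'

-- Parameter for b set to the column's own name to neutralize the condition;
-- b = b is true for every row where b is not NULL, so the result no longer
-- depends on b (watch out for NULLs, where b = b is not true):
where a = 'x' and b = b and c = 'z' and d = 'w'
```

This only works if the script substitutes the parameter as raw text rather than as a bound (quoted) value.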
I have the following table, and I would like to get these values:
+──────────────────+────────+──────────+
| TS               | Value  | ValueA   |
+──────────────────+────────+──────────+
| 2022-06-03 05:00 | 1      | 1        |
| 2022-06-03 06:00 | 2      | 2        |
| 2022-06-03 07:00 | 3      | 3        |
| 2022-06-03 08:00 | 4      | 4        |
| 2022-06-03 09:00 | 5      | 5        |
| 2022-06-03 10:00 | 6      | 6        |
+──────────────────+────────+──────────+
I can get the maximum value for each day like this:
select DATE_FORMAT(DATE_ADD(ts, INTERVAL 30 MINUTE), '%Y-%m-%d') as time,
       max(Value)
from MyTable
group by time
and I can compute the running total across rows like this:
select DATE_FORMAT(DATE_ADD(ts, INTERVAL 30 MINUTE), '%Y-%m-%d %H:00') as time,
       ROUND(sum(Value) over (order by time), 2) as 'ValueA'
from MyTable
order by time
I just have no idea how to get the effect where the first row of each day contains the maximum sum of the previous day (exactly as in the example I described above). Is this feasible with a regular SQL query?
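One building block that might get there (a sketch only, assuming MySQL 8+ and the table/column names above): compute the per-day maxima in a CTE and pull the previous day's value forward with LAG().

```sql
-- Sketch: per-day maximum, shifted to the next day with LAG().
-- The result could then be joined back to the detail rows on date(ts).
with daily as (
  select date(ts) as d, max(Value) as day_max
  from MyTable
  group by date(ts)
)
select d,
       day_max,
       lag(day_max) over (order by d) as prev_day_max
from daily;
```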
An alternative to Lennart's window function answer is to just use a GROUP BY and HAVING clause against the peoples table to filter out the ones with the same postcode, like so:
SELECT towns.id, towns.town, peoples.names
FROM towns
LEFT JOIN (
    SELECT MAX(names) AS names, postcode
    FROM peoples
    GROUP BY postcode
    HAVING COUNT(name) = 1
) peoples ON towns.postcode = peoples.postcode
Here's the problem you're facing: an SQL query cannot return "dynamic columns." The columns of a query are fixed at the time it is parsed, i.e. before it begins reading any data. The query can't add more columns to its own select-list depending on the data it reads during execution.
So you have two choices:
Figure out which distinct values you want to become columns, and build an SQL query with those columns. This could be done by running another query first to SELECT DISTINCT Name ... and then using application code to format the pivot query. Some folks use creative solutions with GROUP_CONCAT() to format the query. I'm sure you've seen these solutions.
The other strategy is to forget about pivoting in SQL. Just fetch the data as it is in your database, and then write application code to present it in a pivoted format.
That's it. Those are your choices.
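A sketch of the first strategy in MySQL, using GROUP_CONCAT() to build the select-list and a prepared statement to run it (assumes the same table layout as the static solution that follows; shown for the 2019 column only):

```sql
-- Build the pivot expressions from the distinct names.
-- Note: group_concat_max_len may need to be raised for many names.
select group_concat(distinct
         concat('max(case name when ''', name,
                ''' then `2019` end) as `', name, '`'))
into @cols
from mytable;

set @sql = concat('select ''2019'' as `Year`, ', @cols, ' from mytable');
prepare stmt from @sql;
execute stmt;
deallocate prepare stmt;
```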
Here's a solution:
select '2019' as `Year`,
       max(case name when 'Name1' then `2019` end) as `Name1`,
       max(case name when 'Name2' then `2019` end) as `Name2`,
       max(case name when 'Name3' then `2019` end) as `Name3`,
       max(case name when 'Name4' then `2019` end) as `Name4`
from mytable
union
select '2020',
       max(case name when 'Name1' then `2020` end),
       max(case name when 'Name2' then `2020` end),
       max(case name when 'Name3' then `2020` end),
       max(case name when 'Name4' then `2020` end)
from mytable
union
select '2021',
       max(case name when 'Name1' then `2021` end),
       max(case name when 'Name2' then `2021` end),
       max(case name when 'Name3' then `2021` end),
       max(case name when 'Name4' then `2021` end)
from mytable
union
select '2022',
       max(case name when 'Name1' then `2022` end),
       max(case name when 'Name2' then `2022` end),
       max(case name when 'Name3' then `2022` end),
       max(case name when 'Name4' then `2022` end)
from mytable;
Output, tested on MySQL 8.0.29:
+------+-------+-------+-------+-------+
| Year | Name1 | Name2 | Name3 | Name4 |
+------+-------+-------+-------+-------+
| 2019 |   124 |   102 |    34 |  NULL |
| 2020 |    98 |  NULL |    56 |  NULL |
| 2021 |    35 |    34 |    97 |    35 |
| 2022 |  NULL |  NULL |   123 |  NULL |
+------+-------+-------+-------+-------+
Invoke the MySQL client with the --quick (-q) option. The manual describes it:
Do not cache each query result, print each row as it is received. This may slow down the server if the output is suspended. With this option, mysql does not use the history file.
The reason the default is to store the result in the client is implied by this documentation. If the interactive client is suspended (for example using Ctrl-Z job control), the MySQL Server must use resources to keep the result set active.
You should consider not fetching 100 million rows in a single query to the interactive client.
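For example (a command sketch; the host, credentials, database, and query are all placeholders):

```sh
mysql --quick -h db.example.com -u appuser -p \
      -e "SELECT * FROM big_table" mydb > big_table.tsv
```

With --quick, each row is written out as it arrives instead of being buffered in client memory first.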
PostgreSQL is able to optimize WHERE EXISTS (/* correlated subquery */) into a join or semi-join, but it is not smart enough to detect that the = TRUE in EXISTS (...) = TRUE can be removed, so it does not apply the optimization here.
Since the optimization is not used, it is unsurprising that the second plan is slower. Although, to be honest, with a tiny query like that the difference could just be noise.
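To illustrate (a sketch with made-up table names): the first form can be planned as a semi-join, while the second keeps the subquery as a SubPlan.

```sql
-- Can be transformed into a semi-join:
select *
from orders o
where exists (select 1 from items i where i.order_id = o.id);

-- The "= TRUE" wrapper defeats the transformation, so the subquery
-- stays a (possibly repeatedly executed) SubPlan:
select *
from orders o
where (exists (select 1 from items i where i.order_id = o.id)) = true;
```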
Some background for the second execution plan:
The second plan, with its alternatives: node, shows that you are using an older version of PostgreSQL, which still had AlternativeSubPlans. The idea behind that was that PostgreSQL could potentially decide to start using a different subplan during query execution if the row count estimates proved to be off. This capability was removed in v14 with commit https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=41efb8340877e8ffd0023bb6b2ef22ffd1ca014d. You may want to refer to Tom Lane's commit message for details:
Move resolution of AlternativeSubPlan choices to the planner.
When commit bd3daddaf introduced AlternativeSubPlans, I had some
ambitions towards allowing the choice of subplan to change during
execution. That has not happened, or even been thought about, in the
ensuing twelve years; so it seems like a failed experiment. So let's
rip that out and resolve the choice of subplan at the end of planning
(in setrefs.c) rather than during executor startup. This has a number
of positive benefits:
Removal of a few hundred lines of executor code, since
AlternativeSubPlans need no longer be supported there.
Removal of executor-startup overhead (particularly, initialization
of subplans that won't be used).
Removal of incidental costs of having a larger plan tree, such as
tree-scanning and copying costs in the plancache; not to mention
setrefs.c's own costs of processing the discarded subplans.
EXPLAIN no longer has to print a weird (and undocumented)
representation of an AlternativeSubPlan choice; it sees only the
subplan actually used. This should mean less confusion for users.
Since setrefs.c knows which subexpression of a plan node it's
working on at any instant, it's possible to adjust the estimated
number of executions of the subplan based on that. For example,
we should usually estimate more executions of a qual expression
than a targetlist expression. The implementation used here is
pretty simplistic, because we don't want to expend a lot of cycles
on the issue; but it's better than ignoring the point entirely,
as the executor had to.
That last point might possibly result in shifting the choice
between hashed and non-hashed EXISTS subplans in a few cases,
but in general this patch isn't meant to change planner choices.
Since we're doing the resolution so late, it's really impossible
to change any plan choices outside the AlternativeSubPlan itself.
Patch by me; thanks to David Rowley for review.
The title doesn't help and hopefully I can explain this right. I need to exclude the PartNum from a query if it is in the KARDEX bin. These parts have multiple bins. If a part has the Kardex bin I want to exclude the part. Here's sample data.
So I want to exclude 100217 and 101104-003 but keep the others.
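One common way to write this kind of exclusion (a sketch only; the PartBin table and column names here are guesses, since the sample data didn't come through):

```sql
-- Keep only parts that have no row with the KARDEX bin,
-- even if they have several other bins.
select pb.PartNum
from PartBin pb
group by pb.PartNum
having sum(case when pb.BinNum = 'KARDEX' then 1 else 0 end) = 0;
```

An equivalent NOT EXISTS against the bin table would work as well.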
I have a problem inserting into a nested table in Oracle.
These are the relevant types and tables;
create type movies_type as Table of ref movie_type;
create type actor_type under person_type
create table actor of actor_type
NESTED TABLE starring STORE AS starring_nt;
This is how I tried to insert:
insert into actor values (actor_type(29,'Carrie','Fisher',TO_DATE('21/10/1956', 'DD/MM/YY'),TO_DATE('27/12/2016', 'DD/MM/YY'),'USA', movies_type(select ref(m) from movie m where movie_id in (7, 8, 9))));
This doesn't work; it gives:
SQL Error: ORA-00936: missing expression
which isn't very helpful.
I also tried nesting the select statement in parentheses because I thought it might have been a syntax error:
insert into actor values (actor_type(29,'Carrie','Fisher',TO_DATE('21/10/1956', 'DD/MM/YY'),TO_DATE('27/12/2016', 'DD/MM/YY'),'USA', movies_type((select ref(m) from movie m where movie_id in (7, 8, 9)))));
but it said
SQL ERROR ORA-01427: single-row subquery returns more than one row
so I changed it to this:
insert into actor values (actor_type(29,'Carrie','Fisher',TO_DATE('21/10/1956', 'DD/MM/YY'),TO_DATE('27/12/2016', 'DD/MM/YY'),'USA', movies_type((select ref(m) from movie m where movie_id=7))));
which worked, but it isn't what I want, since it doesn't allow me to have multiple values in the nested table.
I don't understand what the problem is exactly, and the error messages aren't helpful.
Why does it say missing expression?
And why, in the second case, does it give single-row subquery returns more than one row?
Thank you very much.
Update: here is the type movie_type and table movie:
create type movie_type as Object ( MOVIE_ID NUMBER(15), TITLE VARCHAR(50) , GENRE VARCHAR(30), RELEASE_DATE DATE, RUNNING_TIME NUMBER, BUDGET NUMBER ) Final;
create table MOVIE of movie_type;
ALTER TABLE MOVIE
ADD CONSTRAINT PK_MOVIE_ID PRIMARY KEY (MOVIE_ID);
ALTER TABLE MOVIE modify TITLE not null;
relevant insertions in movie:
INSERT INTO MOVIE (MOVIE_ID, TITLE, GENRE, RELEASE_DATE, RUNNING_TIME, BUDGET) VALUES (7,'Star Wars','epic space opera',TO_DATE('25/05/1977', 'DD/MM/YY'),121,11000000);
INSERT INTO MOVIE (MOVIE_ID, TITLE, GENRE, RELEASE_DATE, RUNNING_TIME, BUDGET) VALUES (8,'The Empire Strikes Back','epic space opera',TO_DATE('17/05/1980', 'DD/MM/YY'),124,18000000);
INSERT INTO MOVIE (MOVIE_ID, TITLE, GENRE, RELEASE_DATE, RUNNING_TIME, BUDGET) VALUES (9,'Return of the Jedi','epic space opera',TO_DATE('25/05/1983', 'DD/MM/YY'),132,32500000);
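For what it's worth, the usual way to build a collection from a subquery in Oracle is CAST(MULTISET(...) AS type) rather than passing the subquery to the type constructor, which only accepts scalar expressions (hence ORA-00936, and ORA-01427 once the extra parentheses turned it into a scalar subquery). A sketch based on the statement above (with the date format masks also widened to DD/MM/YYYY):

```sql
insert into actor values (
  actor_type(29, 'Carrie', 'Fisher',
             TO_DATE('21/10/1956', 'DD/MM/YYYY'),
             TO_DATE('27/12/2016', 'DD/MM/YYYY'),
             'USA',
             cast(multiset(select ref(m)
                           from movie m
                           where movie_id in (7, 8, 9)) as movies_type)));
```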
Just a sketch; no idea whether it works in Access:
SELECT year(dt), month(dt), sum(amnt)
FROM (
    SELECT OrigSaleDt AS dt, OrigSaleAmmnt AS amnt FROM T
    UNION ALL
    SELECT RevSaleDt AS dt, RevSaleAmmnt AS amnt FROM T
) AS X
GROUP BY year(dt), month(dt)
Performing a VACUUM FULL produces a lot of WAL, and we need downtime to perform it. What if we manually created a new table and copied over all the data, then simply dropped the original table and renamed the new one? Would this create far fewer WAL entries? Could it potentially be faster?
Our use case: we're starting to migrate large jsonb columns out of tables, so the result is a significant reduction in size.
We're not able to use pg_repack because we're using Heroku and they don't support this extension.
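A minimal sketch of the copy-and-swap approach we have in mind (table names are placeholders; note that the swap still needs an exclusive lock, and that the INSERT ... SELECT is itself fully WAL-logged unless wal_level is minimal and the target table was created in the same transaction):

```sql
begin;
lock table mytable in access exclusive mode;
create table mytable_new (like mytable including all);
insert into mytable_new select * from mytable;
drop table mytable;
alter table mytable_new rename to mytable;
commit;
```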