nielsm
Jun 1, 2009



Is that handwritten as that, or generated from some ORM or such?


Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

kiwid posted:

Does this entire mess just reduce down to v.contract_nbr >= 0?

yes

RandomBlue
Dec 30, 2012

hay guys!


Biscuit Hider

kiwid posted:

Can someone help me understand what is happening in this WHERE clause?

SQL code:
WHERE (
	(
		NULL IS NULL AND v.contract_nbr = 0
	) 
	OR
	(
		v.contract_nbr > 0 AND (
			(
				NULL IS NULL AND 'Y' = 'Y'
			) OR (
				NULL IS NOT NULL AND offsite_storage_id IS NULL
			)
		)
	)
)
When would NULL IS NOT NULL ever be true?

Does this entire mess just reduce down to v.contract_nbr >= 0?

Yes, though my guess is that the left side of all those NULL comparisons is supposed to be a column name, or maybe some parameter that was supposed to be passed in; it's being passed in as null instead of a string, and you get this. Either that or it's ORM-generated nonsense, though there aren't any of the mangled aliases you usually get with ORM-generated queries.
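Something like this is what I'm picturing, where a parameter never got filled in (the @-names and the table are invented, obviously):

SQL code:
DECLARE @offsite_filter VARCHAR(20) = NULL,  -- imagine these come from the app, and @offsite_filter never got set
        @include_flag   CHAR(1)     = 'Y';

SELECT v.*
FROM some_view v  -- made-up name, the real source isn't in the trace
WHERE (
	(@offsite_filter IS NULL AND v.contract_nbr = 0)
	OR
	(
		v.contract_nbr > 0 AND (
			(@offsite_filter IS NULL AND @include_flag = 'Y')
			OR
			(@offsite_filter IS NOT NULL AND v.offsite_storage_id IS NULL)
		)
	)
);
If the literal values get substituted into the query text instead of being left as parameters, you end up with exactly the trace above.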

RandomBlue fucked around with this message at 22:41 on Jan 31, 2024

kiwid
Sep 30, 2013

nielsm posted:

Is that handwritten as that, or generated from some ORM or such?

It's from an SQL trace generated by software. So, probably an ORM.


:thumbsup:

RandomBlue posted:

Yes, though my guess is that the left side of all those NULL comparisons is supposed to be a column name, or maybe some parameter that was supposed to be passed in; it's being passed in as null instead of a string, and you get this. Either that or it's ORM-generated nonsense, though there aren't any of the mangled aliases you usually get with ORM-generated queries.

You might be right, perhaps those nulls should have been column names. This is some really lovely legacy software we're talking about.

double nine
Aug 8, 2013

this is really silly but I'm stumped with a null join clause

I have 2 tables. Table 1 has the client# and their subscriptions (1 line per subscription), with the relevant start and end dates. Table 2 has the products bought.

I need an SQL query that returns basically everything from table2 (so a left join), but ONLY for people who are subscribed to subscription 24 AND do NOT have subscription 01, both conditions true on the day of purchase.

I'm not even at the filter of the date yet, just that conditional filtering is stumping me.
eg

code:
table1
client   subs     start                 end
01234   01        01012023          31122999
01234   24        01012023          30042023                     
45235   24        01012000          31102015
98412   07        01072015          31122999
[...]

 

table2

client   prod    purcdate          amnt    invoice
01234   456312 27042022          266.95  98765423
45981   456999 14122023          012.99  89654351
39812   753942 13072023          140.44  89132186
45235   546832 25092023          045.99  218935931
[...]

redleader
Aug 18, 2005

Engage according to operational parameters

double nine posted:

this is really silly but I'm stumped with a null join clause

I have 2 tables. Table 1 has the client# and their subscriptions (1 line per subscription), with the relevant start and end dates. Table 2 has the products bought.

I need an SQL query that returns basically everything from table2 (so a left join), but ONLY for people who are subscribed to subscription 24 AND do NOT have subscription 01, both conditions true on the day of purchase.

I'm not even at the filter of the date yet, just that conditional filtering is stumping me.
eg

code:
table1
client   subs     start                 end
01234   01        01012023          31122999
01234   24        01012023          30042023                     
45235   24        01012000          31102015
98412   07        01072015          31122999
[...]

 

table2

client   prod    purcdate          amnt    invoice
01234   456312 27042022          266.95  98765423
45981   456999 14122023          012.99  89654351 
39812   753942 13072023          140.44  89132186
45235   546832 25092023          045.99  218935931
[...]

as a starting point, exists might be useful?
code:

select *
from table2 t2
where
    -- the subscription row must cover the purchase date, for the same client
    exists (select * from table1 t1 where t1.client = t2.client and t1.subs = 24 and t1.start <= t2.purcdate and t1.end >= t2.purcdate)
    and not exists (select * from table1 t1 where t1.client = t2.client and t1.subs = 1 and t1.start <= t2.purcdate and t1.end >= t2.purcdate)

Heavy_D
Feb 16, 2002

"rararararara" contains the meaning of everything, kept in simple rectangular structures

kiwid posted:

It's from an SQL trace generated by software. So, probably an ORM.

:thumbsup:

You might be right, perhaps those nulls should have been column names. This is some really lovely legacy software we're talking about.

If it turned out to be handwritten, I might have guessed it was to make it easier to comment out any combination of filters without leaving an invalid WHERE clause
pre:
		NULL IS NULL --AND v.contract_nbr = 0
But since it's autogenerated, I'd guess that where you've ended up with "v.contract_nbr = 0", the system has a variable length list of filters which can be turned on or off. Starting with NULL IS NULL means that you don't have to restructure the WHERE clause when all the filters are off/list is length zero. It also means that you can prepend all of the clauses with " AND " when you're building the query, rather than remembering to special case the first clause off the list. I mean, it all smells like an expedient thing done in some decidedly hacky code, but in that context it makes sense.
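Roughly the kind of generator I have in mind, sketched as T-SQL dynamic SQL (the flag variables and the table name are made up for illustration):

SQL code:
-- two imaginary on/off filters; in the real system these would come from the application
DECLARE @filter_contract BIT = 1,
        @filter_offsite  BIT = 0;

-- start with an always-true predicate so every active filter can simply be appended with " AND "
DECLARE @sql NVARCHAR(MAX) = N'SELECT * FROM v WHERE NULL IS NULL';

IF @filter_contract = 1 SET @sql += N' AND v.contract_nbr = 0';
IF @filter_offsite  = 1 SET @sql += N' AND offsite_storage_id IS NULL';

EXEC sys.sp_executesql @sql;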

kiwid
Sep 30, 2013

Heavy_D posted:

the system has a variable length list of filters which can be turned on or off

I mostly use MySQL so I see this often with
code:
WHERE TRUE
but didn't realize that doesn't work in TSQL. This makes sense.
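For reference, the always-true stand-ins that do parse in T-SQL (v is just a placeholder table here):

SQL code:
SELECT * FROM v WHERE 1 = 1;          -- works in both MySQL and T-SQL
SELECT * FROM v WHERE NULL IS NULL;   -- the variant from the trace above
-- SELECT * FROM v WHERE TRUE;        -- MySQL only; T-SQL has no boolean literal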

double nine
Aug 8, 2013

out of curiosity, is there a way to have an excel sql query refer to the content of a sheet?

for example, i have the following query:

code:
select clientid, tarifyear, this, that, and_the_other 
from database1.subdb2 t1
where t1.clientid = 123456789
and would like to have the query be dynamic so that the where statement becomes t1.clientid = table1.cellC3 or something to that effect

is this possible

acidx
Sep 24, 2019

right clicking is stealing

double nine posted:

out of curiosity, is there a way to have an excel sql query refer to the content of a sheet?

for example, i have the following query:

code:
select clientid, tarifyear, this, that, and_the_other 
from database1.subdb2 t1
where t1.clientid = 123456789
and would like to have the query be dynamic so that the where statement becomes t1.clientid = table1.cellC3 or something to that effect

is this possible

Edit: misunderstood the question

acidx fucked around with this message at 07:09 on Feb 7, 2024

abelwingnut
Dec 23, 2002


so is there a quick and easy way to produce this kind of pivot?



i have all of these columns that will be the same for a subgroup of rows. then, the rest of the columns for that subgroup will be different and need to be pivoted. and yes, this'll need to happen 50 times. plus, each subgroup will have different questions, so q1 for one subgroup will be different than another.

i see a very hellish way of doing this with cases, and i also see a very hellish way of doing this with joins, but i'm reeeeeeeeeeeeally hoping there's a way to pivot this.

and this is in redshift.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
If I have a table containing an ordered list:
code:
item	data1	data2
5	sdf	fgh	
10	dfg	dhh
15	fgjk	dfhj
17	fhj	dghh
25	fhj	jdh
35	kjh	jhgf
41	jhgf	jhg
How can I write a select statement that returns the 2 rows on either side of a target row?

So if the item originally selected was the row with item=17, the query would return the rows for 10, 15, 17, 25, and 35.


edit: It's MySQL 8 fwiw

Agrikk fucked around with this message at 18:18 on Feb 28, 2024

nielsm
Jun 1, 2009



I'm not sure if this works, but possibly a triple UNION?

SQL code:
SELECT item, data1, data2 FROM table WHERE item = 17
UNION
SELECT item, data1, data2 FROM table WHERE item < 17 ORDER BY item DESC LIMIT 2
UNION
SELECT item, data1, data2 FROM table WHERE item > 17 ORDER BY item ASC LIMIT 2
The syntax for getting only a limited number of rows depends on the DBMS.
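Since you mention it's MySQL 8, a window-function version might also work, though I haven't tested it (items is a placeholder table name):

SQL code:
-- number the rows in item order, find the target row's position, and keep anything within 2 of it
WITH numbered AS (
    SELECT item, data1, data2,
           ROW_NUMBER() OVER (ORDER BY item) AS rn
    FROM items
)
SELECT n.item, n.data1, n.data2
FROM numbered n
JOIN numbered target ON target.item = 17
WHERE n.rn BETWEEN target.rn - 2 AND target.rn + 2
ORDER BY n.item;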

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
SQL code:
(SELECT item, data1, data2 FROM table WHERE item = 17)
UNION
(SELECT item, data1, data2 FROM table WHERE item < 17 ORDER BY item DESC LIMIT 2)
UNION
(SELECT item, data1, data2 FROM table WHERE item > 17 ORDER BY item ASC LIMIT 2)
Your code worked. I just had to wrap each select statement in parentheses. Thanks!

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Can somebody give me a crash course in effective MySQL bulk loading? I could also use a recommendation for a logical backup tool that'll work well with tables in the 50GB-300GB range; are people happier with mydumper or mysqlpump?

I've never touched MySQL before, but I've put in time in the trenches with Postgres and Spark SQL, and I have some big tables that are desperately in need of optimization. Right now they're entirely denormalized and have no indexes of any kind; I'm hoping to both normalize to save some space and add indexes to make querying them sane. This is genomics research data, and the researchers have been suffering through the fact that any query takes 3 minutes to come back, so an application that makes a lot of queries takes 5 days to run.

I've got a dozen questions and nobody to talk to, so if anybody sees obvious problems with any aspect of this I'd appreciate advice. This is an example of what I'm dealing with:

SQL code:
CREATE TABLE `BIG_TABLE_1` (
    `tissue` text,
    `gene` text,
    `phecode` text,
    `ensID` text,
    `beta` double DEFAULT NULL,
    `pval` double DEFAULT NULL,
    `zscore` double DEFAULT NULL,
    `description` text,
    `phe_group` text
) ENGINE = InnoDB DEFAULT CHARSET = latin1;
We've got lots of these; some have a few more columns, but they're all in the 50GB-300GB range with really low cardinality in the text fields, so there's tons of duplication.

My plan is to create new, normalized tables from this on a replica of the DB that I have exclusively to myself, so I can do full table locks, turn off indexing while inserting, and do any number of deferred processing tricks while I'm loading the data. I've been reading and threw a bunch of different stuff at it in hopes of faster inserts, but I'm not sure which of these are useful and which are just cargo cult bullshit:

SQL code:
USE pxcan_normalized;
BEGIN;

-- Configure for bulk inserts
SET autocommit = 0; -- I don't need this because I'm trying to do it all in one transaction, right? Should I batch it into smaller transactions?
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; -- I had to set this because apparently InnoDB was creating row locks even though I have whole table locks?

-- Does this even do anything for InnoDB?
LOCK TABLES tbl1 WRITE,
tbl2 WRITE; -- and so on
Anyway, my general plan was to pull out columns with very low cardinality into their own tables and then create normalized copies of the big tables using those new child tables. For example, we've got less than 50 tissue, sample_group, phe_group and less than 50,000 genes and 10,000 phecodes and associated descriptions, so there's room to deduplicate a good amount of data.
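For example, roughly what I have in mind for one of the child tables (column sizes and id types are guesses at this point):

SQL code:
CREATE TABLE tissue (
    tissue_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    tissue    VARCHAR(255)      NOT NULL,
    UNIQUE KEY uq_tissue (tissue)
) ENGINE = InnoDB;

-- populate it from the existing denormalized table
INSERT INTO tissue (tissue)
SELECT DISTINCT tissue FROM BIG_TABLE_1;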

In the past these huge tables have been made somewhat usable by adding indexes (to the big duplicated text fields) and also by making more copies of the table that have been somewhat filtered, for example:

SQL code:
CREATE TABLE bigtbl_pvalue_05 AS
SELECT
    *
FROM
    bigtbl
WHERE
    pval <.05;
This sucks. I'd much rather have indexes work for us. If I were in Postgres I'd partition by pval at the common cutoffs people use, but in MySQL I guess I'd have to make pval the first column in the primary key to affect how InnoDB lays it out on the disk? The natural key here is (tissue, gene, phecode) right now, which will become (tissue_id, gene_id, phecode_id) in my normalized schema. Would it be horrible to throw pval on the front of the primary key, and will that cause MySQL to cluster by pval such that queries with a specific pval range will be almost as fast as the smaller copy of the table? Once I've got this working well I've got dozens of these DBs to fix up.
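Concretely, this is the layout I'm wondering about (column types are guesses):

SQL code:
CREATE TABLE big_tbl1_norm (
    pval         DOUBLE            NOT NULL,
    tissue_id    SMALLINT UNSIGNED NOT NULL,
    gene_id      INT UNSIGNED      NOT NULL,
    phecode_id   INT UNSIGNED      NOT NULL,
    phe_group_id SMALLINT UNSIGNED NOT NULL,
    beta         FLOAT             DEFAULT NULL,
    zscore       FLOAT             DEFAULT NULL,
    -- pval first so InnoDB clusters the rows by pval on disk;
    -- (tissue_id, gene_id, phecode_id) is the natural key and keeps the PK unique
    PRIMARY KEY (pval, tissue_id, gene_id, phecode_id)
) ENGINE = InnoDB;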

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
instead of spending brainpower worrying about clustering and how to optimally set up a primary key, i would just do the normalization and then spec out a server that fits the resulting dataset in ram.

it's likely to be both cheaper and give better outcomes than spending a bunch of your expensive time on optimization.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Twerk from Home posted:

This sucks. I'd much rather have indexes work for us. If I were in Postgres I'd partition by pval at the common cutoffs people use, but in MySQL I guess I'd have to make pval the first column in the primary key to affect how InnoDB lays it out on the disk? The natural key here is (tissue, gene, phecode) right now, which will become (tissue_id, gene_id, phecode_id) in my normalized schema. Would it be horrible to throw pval on the front of the primary key, and will that cause MySQL to cluster by pval such that queries with a specific pval range will be almost as fast as the smaller copy of the table?

It's been a while since I did anything with MySQL but I believe this is true.

What is a pval exactly? Could you just leave the primary key as it is and create an index on pval? (which would then actually be (pval, tissue, gene, phecode) because iirc indexes implicitly get filled out with any primary key columns you didn't declare to be in them, in order)
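i.e. something like this (big_tbl1_norm standing in for whatever your normalized table ends up being called):

SQL code:
-- secondary index on pval; InnoDB implicitly appends the primary key columns to it
CREATE INDEX idx_pval ON big_tbl1_norm (pval);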

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I'm very familiar with Python, and I know the "basics" of SQL enough that I can look at the posts here ITT and generally understand, but I want to learn SQL more. How would you guys recommend learning SQL in 2024?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Boris Galerkin posted:

I'm very familiar with Python, and I know the "basics" of SQL enough that I can look at the posts here ITT and generally understand, but I want to learn SQL more. How would you guys recommend learning SQL in 2024?

I think you might get better advice if you can specify if you're more interested in SQL as an application developer wanting fast transactions or for data analysis and transformation, because those are pretty different worlds with different skillsets once you get past the basics.

I'm clueless in the OLAP world and doing my best to learn quickly. There are also so many kinds of tools at this point; everything uses SQL.

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I track a lot of stuff (finances, mostly, but not limited to) in Google Sheets and I just found out that you can run SQL queries and it just seems so much easier/better than not using SQL. I guess I mostly just wanted some guidance on how to work with spreadsheets/datasets using SQL, not developing applications.

nielsm
Jun 1, 2009



One book I have gotten a lot of use out of is SQL Antipatterns, which is unfortunately out of print and was never updated to reflect newer DBMS functionality either. But I really like its approach of showing common problems, describing bad solutions, and then going into better solutions and why they are better.

TasogareNoKagi
Jul 11, 2013

nielsm posted:

One book I have gotten a lot of use out of is SQL Antipatterns, which is unfortunately out of print and was never updated to reflect newer DBMS functionality either. But I really like its approach of showing common problems, describing bad solutions, and then going into better solutions and why they are better.

Appears to be available as an ebook at https://pragprog.com/titles/bksap1/sql-antipatterns-volume-1/. Not sure if it's been updated or if it's the same text.

nielsm
Jun 1, 2009



It sounds like the ebook version is actually updated, so I would recommend that.

Symbolic Butt
Mar 22, 2009

(_!_)
Buglord
I'm seconding SQL Antipatterns; it surprisingly became one of my favorite programming books ever, and I keep going back to it.

Bone Crimes
Mar 7, 2007

Boris Galerkin posted:

I'm very familiar with Python, and I know the "basics" of SQL enough that I can look at the posts here ITT and generally understand, but I want to learn SQL more. How would you guys recommend learning SQL in 2024?

I am also interested in this. I got the book "SQL in 10 Minutes" as it was highly rated, but it doesn't teach the way I think or the way most SQL is written. I'd prefer something with just a bit more theory.

Seventh Arrow
Jan 26, 2005

I like Codecademy a lot for learning. They have an interactive query window and a course that will take you from basic select queries to calculating churn. Of course, once you're past the free period you have to fork over to continue. I think it's worth it though, given the amount of stuff you can learn on their platform. YMMV though.

If you need free stuff, then on youtube there's Ankit Bansal and Alex Freberg. There's also learnsql.com, and sql-practice.com for when you have enough under your belt to start practicing with some challenges.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Jabor posted:

instead of spending brainpower worrying about clustering and how to optimally set up a primary key, i would just do the normalization and then spec out a server that fits the resulting dataset in ram.

it's likely to be both cheaper and give better outcomes than spending a bunch of your expensive time on optimization.

This is the advice that I'm here for, but I am still seeking a faster way to do the normalization / transformation in the first place.

MySQL is pretty slow chewing on 1.5 billion rows in a single transaction. Any suggestions on good ways to batch this down (or parallelize it, or anything)? I've got lots of databases to do this to, and I've run into some edge-case failures in the data that I didn't encounter when I tested with a tiny subset of rows to make sure everything worked.

SQL code:
INSERT INTO
    big_tbl1_norm (
        beta,
        pval,
        zscore,
        tissue_id,
        gene_id,
        phecode_id,
        phe_group_id
    )
SELECT
    CAST(orig.beta AS FLOAT),
    CAST(orig.pval AS FLOAT),
    CAST(orig.zscore AS FLOAT),
    t.tissue_id,
    g.gene_id,
    p.phecode_id,
    pg.phe_group_id
FROM
    src_schema.big_tbl1 orig
    JOIN src_schema.tissue t ON orig.tissue = t.tissue
    JOIN src_schema.gene g ON orig.gene = g.gene
    JOIN src_schema.phecode p ON orig.phecode = p.phecode
    JOIN src_schema.phe_group pg ON orig.phe_group = pg.phe_group;
So far the MySQL-isms I've tripped on are that text indexes are case insensitive by default, meaning you can't store somestring and SomeString in a column with a unique index, and that InnoDB will run out of row locks and die with "The total number of locks exceeds the lock table size" if you're doing big operations like this. I thought that locking the whole table would stop it from taking row locks, but I had to set READ UNCOMMITTED.
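For the case-sensitivity part, a binary collation on the offending column looks like the way out, assuming the latin1 charset from the original tables (untested sketch, made-up column):

SQL code:
-- latin1_bin compares raw bytes, so 'somestring' and 'SomeString' are distinct to the unique index
ALTER TABLE gene MODIFY gene VARCHAR(255) CHARACTER SET latin1 COLLATE latin1_bin NOT NULL;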

nielsm
Jun 1, 2009



Consider whether the database needs to be "online" and ready for multi-user during that operation. If you can disable or remove the indexes and constraints while loading data, and then only reinstate them afterwards, that might be significantly faster.
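E.g. something like this (index and table names are just placeholders):

SQL code:
-- drop the secondary indexes before the big INSERT ... SELECT, rebuild them afterwards
ALTER TABLE big_tbl1_norm DROP INDEX idx_pval;
-- ... run the bulk load here ...
ALTER TABLE big_tbl1_norm ADD INDEX idx_pval (pval);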

kiwid
Sep 30, 2013

I'm building a data warehouse and running into deadlocks with SQL Server.

One suggestion to me was to use the WITH (NOLOCK) table hint but it seems like this is frowned upon? I don't really want to copy uncommitted data into the data warehouse. Does anyone have any suggestions for me?

kiwid
Sep 30, 2013

Another question,

I'm trying to use the MERGE statement (TSQL) to sync a table to a data warehouse.

SQL code:
MERGE INTO [dw].[staging].[accounts] AS [dst]
USING [production].[dba].[s1_name_and_address] AS [src]
ON [dst].[name_and_address_id] = [src].[name_and_address_id]
WHEN MATCHED AND CAST([src].[change_date] AS DATETIME) >= CAST([dst].[last_synced_at] AS DATETIME) THEN
...
The problem I'm having is that the destination database stores datetimes in UTC but the source database stores as EST. I can't seem to add for example:

SQL code:
WHEN MATCHED AND CAST([src].[change_date] AS DATETIME) AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC' >= CAST([dst].[last_synced_at] AS DATETIME) THEN
as this seems to be invalid. What are my options here so it also works with daylight savings time?

redleader
Aug 18, 2005

Engage according to operational parameters
what data type is src.change_date?

kiwid
Sep 30, 2013

redleader posted:

what data type is src.change_date?

datetime

edit: just to clarify my problem: if the source table row is updated and its change date is updated, the merge won't update that row in the destination until at least 4-5 hours later, because last_synced_at is in UTC.

edit2: One option I can do is to do this:

SQL code:
WHEN MATCHED AND CAST([src].[change_date] AS DATETIME) >= CAST([dst].[last_synced_at] AS DATETIME) THEN
    UPDATE SET
        ...
        -- [dst].[last_synced_at]      = GETUTCDATE(),
        [dst].[last_synced_at]      = GETUTCDATE() AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time',
But I despise the fact that one timestamp in my entire database/table is going to be in another timezone.

edit3: Never mind! I figured it out.

Apparently I should be putting the AT TIME ZONE inside the cast brackets instead.

This works:

SQL code:
WHEN MATCHED AND CAST([src].[change_date] AT TIME ZONE 'Eastern Standard Time' AT TIME ZONE 'UTC' AS DATETIME) >= CAST([dst].[last_synced_at] AS DATETIME) THEN

edit4:

OMG I feel stupid. This first query is actually correct and works, it was just my IDE that was giving me a false error:



Wrapping it in parentheses caused the error to go away.

kiwid fucked around with this message at 01:51 on Mar 8, 2024

redleader
Aug 18, 2005

Engage according to operational parameters
whoops, should have asked about last_synced_at too. assuming last_synced_at is also a datetime, does this work? datetimeoffsets are stored and manipulated internally as their UTC values (per docs), so this should compare the two in the same TZ

SQL code:
WHEN MATCHED AND ([src].[change_date] AT TIME ZONE 'Eastern Standard Time') >= ([dst].[last_synced_at] AT TIME ZONE 'UTC') THEN
if last_synced_at is a datetimeoffset, then you can drop the AT TIME ZONE 'UTC'

kiwid
Sep 30, 2013

redleader posted:

whoops, should have asked about last_synced_at too. assuming last_synced_at is also a datetime, does this work? datetimeoffsets are stored and manipulated internally as their UTC values (per docs), so this should compare the two in the same TZ

SQL code:
WHEN MATCHED AND ([src].[change_date] AT TIME ZONE 'Eastern Standard Time') >= ([dst].[last_synced_at] AT TIME ZONE 'UTC') THEN
if last_synced_at is a datetimeoffset, then you can drop the AT TIME ZONE 'UTC'

It's datetime too but yes you fixed my issue by adding parentheses. Thanks so much :D

kiwid
Sep 30, 2013

redleader posted:

SQL code:
WHEN MATCHED AND ([src].[change_date] AT TIME ZONE 'Eastern Standard Time') >= ([dst].[last_synced_at] AT TIME ZONE 'UTC') THEN

Just looking at this again this morning. Do I not have to convert to the same time zone before comparing? Your example shows you compare two different time zones.

edit: nvm, looks like you can compare datetimeoffset but not datetime.

kiwid fucked around with this message at 14:39 on Mar 8, 2024

redleader
Aug 18, 2005

Engage according to operational parameters

kiwid posted:

Just looking at this again this morning. Do I not have to convert to the same time zone before comparing? Your example shows you compare two different time zones.

edit: nvm, looks like you can compare datetimeoffset but not datetime.

yeah, when comparing datetimeoffsets, sql server normalises them all into utc before comparing them. in fact, casting [dst].[last_synced_at] to datetimeoffset with AT TIME ZONE 'UTC' might actually not be needed at all. i think that implicitly (or explicitly) casting a datetime to datetimeoffset will set the offset to +00:00, so just converting the EST datetime to datetimeoffset should work. but i'd definitely do some googling and comment the heck out of that if i went that route
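a quick way to sanity-check that behaviour (untested sketch):

SQL code:
DECLARE @dt DATETIME = '2024-03-08 12:00:00';
SELECT
    CAST(@dt AS DATETIMEOFFSET)              AS implicit_cast,  -- offset comes out as +00:00
    @dt AT TIME ZONE 'Eastern Standard Time' AS est_version;    -- offset is -05:00 (or -04:00 in DST)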

kiwid
Sep 30, 2013

How come a MERGE is so fast?

I'm not working with a lot of data, but copying 5000 rows is about 300ms.

mortarr
Apr 28, 2005

frozen meat at high speed

redleader posted:

yeah, when comparing datetimeoffsets, sql server normalises them all into utc before comparing them. in fact, casting [dst].[last_synced_at] to datetimeoffset with AT TIME ZONE 'UTC' might actually not be needed at all. i think that implicitly (or explicitly) casting a datetime to datetimeoffset will set the offset to +00:00, so just converting the EST datetime to datetimeoffset should work. but i'd definitely do some googling and comment the heck out of that if i went that route

I just want to say I have done all this converting from datetime to datetimeoffset and feel your pain. I was the timezone "expert" for a couple of companies in the past, and I hope I never have to dick about with timezones ever again, they are by far the most complex and error-prone kind of calculations I've ever had to work with.

RandomBlue
Dec 30, 2012

hay guys!


Biscuit Hider

mortarr posted:

I just want to say I have done all this converting from datetime to datetimeoffset and feel your pain. I was the timezone "expert" for a couple of companies in the past, and I hope I never have to dick about with timezones ever again, they are by far the most complex and error-prone kind of calculations I've ever had to work with.

I loving hate working with dates and times because there's always some new way poo poo gets hosed up. There are a shitload of formats even if timezones aren't an issue, but they always are. Storing it in UTC is the easiest part.

If you control the input and can limit things there it's not so bad, but I work on a product that works with user-provided CSVs, spreadsheets, DB sources, CRMs, etc., and of course we work with international customers, so all that poo poo has to work together seamlessly.


Moonwolf
Jun 29, 2004

Flee from th' terrifyin' evil of "NHS"!


kiwid posted:

How come a MERGE is so fast?

I'm not working with a lot of data, but copying 5000 rows is about 300ms.

Merge just handles inserts, updates, and deletes in one statement. If you make it do all of them, how fast can you get your system to do a 5k insert or update?
