|
I'd be a little shocked if that was the case, since that sounds like the Halloween problem - and I wouldn't expect that in Postgres. Does EXPLAIN shed any light on it?
|
# ¿ Jun 27, 2019 21:39 |
|
That almost sounds like a bug, particularly when the same query gives different results based on... something.
|
# ¿ Jun 28, 2019 00:40 |
|
What RDBMS?
|
# ¿ Jul 25, 2019 08:37 |
|
Tibalt posted:Edit: Thanks for the reply Ruggan as well. I'll take a look at the query plan as well.

If you're using SQL Server and want a second opinion from the dubious, anonymous internet people here, you can upload the execution plan to Paste The Plan. Plan Explorer can anonymize the plan if you're worried about sharing table/index names etc. Make sure it's the actual execution plan if at all possible!
|
# ¿ Jul 31, 2019 03:14 |
|
Pollyanna posted:Is there a term for when you have to dive into a database full of different tables trying to find the right keys you can join on to get your data? SQL archaeology?

"Backend development"
|
# ¿ Aug 6, 2019 02:12 |
|
AzureSkys posted:I'm very new to SQL and have been working on a thing using MS Access for work to compare a master part list to a specific inventory list. The master is frequently updated with many parts and I import the latest one into the table "MasterList". The inventory list table only needs to compare to a fraction of the parts from the master so I set up deletion queries to remove the unneeded parts from the master based on what I've gone through and specifically added to the filtering list. Then my comparison works well to find changes.

doing the delete in one query using DeletionList is easy, since you can use OR conditions in join statements: code:

not sure why you had the doubled-up conditions in your queries, and i'm assuming the distinctrow thing is an access thing

doing it with a wildcard is very similar. insert your rows including the wildcard characters into DeletionList, then code:

at least, assuming access sql behaves vaguely similarly to other sqls

e: entirely untested; do your own testing and validation before running on a production db, etc etc
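a rough sketch of what those two queries could look like - table and column names are guesses since the original schema wasn't shown, and this is untested access sql:

```sql
-- guessed names (PartNumber, AltPartNumber, Pattern are made up);
-- access delete-with-join queries often need DISTINCTROW to be updatable
DELETE DISTINCTROW MasterList.*
FROM MasterList
INNER JOIN DeletionList
    ON (MasterList.PartNumber = DeletionList.PartNumber
        OR MasterList.AltPartNumber = DeletionList.PartNumber);

-- wildcard version: DeletionList.Pattern holds patterns like 'ABC*'
-- (access LIKE uses * and ? by default; ansi mode uses % and _)
DELETE DISTINCTROW MasterList.*
FROM MasterList
INNER JOIN DeletionList
    ON MasterList.PartNumber LIKE DeletionList.Pattern;
```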
|
# ¿ Aug 12, 2019 11:52 |
|
Der Shovel posted:I'm doing something wrong here but I can't for the life of me figure out what.

what happens if you try an always true condition like WHERE 1 = 1?
|
# ¿ Aug 13, 2019 07:45 |
|
NihilCredo posted:- we're back to relying on the query optimizer to not gently caress up

sql_server.txt
|
# ¿ Aug 25, 2019 11:30 |
|
depends entirely on the query + query optimiser + data distribution. trying to give any rules of thumb is basically irresponsible tbh
|
# ¿ Aug 27, 2019 09:59 |
|
cursed_schema.sql
|
# ¿ Oct 15, 2019 07:05 |
|
Hammerite posted:They say that one of their goals is "avoid the need for joins as much as possible". Why do you suppose they have that as a goal? Given that they are working from seemingly bizarre premises, it is not that surprising that they have created something bizarre.

a lot of devs are really, really scared of joins for some reason. they often say "performance!" but i think they're really just nervous and intimidated by sql
|
# ¿ Oct 16, 2019 02:23 |
|
nielsm posted:
let this be your first lesson on the true meaning of "ansi sql"
|
# ¿ Oct 16, 2019 12:55 |
|
can mysql do hash joins yet?
|
# ¿ Oct 17, 2019 01:13 |
|
they are a pain in the rear end though
|
# ¿ Jan 28, 2020 06:11 |
|
Ruggan posted:Business logic belongs in the database alone

i was with you until this point because i've seen what people do when the database is the business layer
|
# ¿ Feb 21, 2020 11:45 |
|
D34THROW posted:I wish I could get my company to host a SQL server. Our ERP is SQL-based (I think) but it sucks rear end (Dynamics NAV 2013), and it doesn't do half of what we really need it to do.

microsoft dynamics crm? that's backed by a sql server db, almost certainly on-prem. i happen to know this because i used ms dynamics for task tracking and found it vastly easier to query the db (once i'd reverse-engineered the schema and how we used it) directly rather than using the front-end

i wouldn't suggest updating that db directly though, because there's a fair chance doing that could lead to your support contract or w/e being voided
|
# ¿ Feb 26, 2020 06:14 |
|
Paper With Lines posted:The loving Navision schema set up is LOL poo poo. How did you reverse engineer it? We basically have 5 general queries that we adapt depending on what we want that were written like six years ago because no one ever wants to ever interact with it on the db side but there is a high business demand to get data out.

nothing sophisticated - i opened up a thing in the app that showed some of the data i wanted, then went poking around in tables looking for that data. once i found where it was stored, i started joining on tables that seemed relevant using fk-sounding columns until i had enough to understand what i needed to
|
# ¿ Feb 27, 2020 00:22 |
|
give TABLESAMPLE a shot. you should be able to use the postgres equivalent of SELECT TOP 1 to get a single random row
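something like this, assuming a recent-ish postgres (table name made up):

```sql
-- sample roughly 1% of the table's pages, keep one row
-- (SYSTEM is fast but page-based; BERNOULLI samples individual rows)
-- note TABLESAMPLE can legitimately return zero rows on small tables
SELECT *
FROM big_table
TABLESAMPLE SYSTEM (1)
LIMIT 1;
```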
|
# ¿ Mar 20, 2020 10:11 |
|
NihilCredo posted:Your specific SQL dialect may offer some extra help, for example Postgres has a "crosstab" function that will perform a pivot on every value in the selected rows.

oh man, it's so unfair that something like this hasn't been standardised
|
# ¿ Mar 27, 2020 02:01 |
|
check out How to Share Data between Stored Procedures, by notable sql server greybeard erland sommarskog. you should check out his other articles too, just so you know what topics he's covered in enormous detail
|
# ¿ May 21, 2020 00:49 |
|
ChickenWing posted:Riddle me this, SQL fans:

which rdbms? were those columns used in the query and/or view, and if so, how? i assume the result set is equivalent between staging and prod
|
# ¿ May 30, 2020 02:39 |
|
Secx posted:Using SSMS, is there a shortcut/add-in that allows you to enclose values in single quotes for use in an IN clause?

i jam 'em into notepad++ and do a regex replace. something like find ^(.+)$, replace with '$1'. i've also done similar things to split up text into multiple columns for inserting into temp tables or w/e
|
# ¿ Sep 25, 2020 23:47 |
|
i didn't know that "left join" was a hot topic in the sql community
|
# ¿ Dec 13, 2020 23:17 |
|
Moonwolf posted:On top of what the others have said, MySQL can only do nested loop joins, which are simple, and the fastest way to do small results sets, but will always suck no matter what for 10k+ results sets, and be terrible for 100k+. Trimming unused joins will certainly help though.

i knew this, and i'm still shocked any time someone reminds me of it
|
# ¿ Jan 11, 2021 21:53 |
|
in mssql you could do something like the following. not sure if postgres or whatever supports this syntax though, but iirc this is standard sql syntax. code:
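a sketch of the shape of it (made-up column names; works in sql server, and as far as i know postgres too):

```sql
-- the VALUES list becomes a derived table v(id, value),
-- which can then be filtered like any other table
SELECT v.id, v.value
FROM (VALUES (1, 'foo'),
             (2, NULL),
             (3, 'bar')) AS v(id, value)
WHERE v.value IS NOT NULL;
```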
basically just putting the VALUES clause into a subquery and filtering that

i agree with the previous posters that value should be set as non-nullable (in addition to any code/query changes), if having nulls in value doesn't make logical sense in your schema
|
# ¿ Feb 10, 2021 22:42 |
|
nitpick: uuids aren't "cryptographic" in any sense of the word

even if you know your uuidv4 is generated with a csprng or true random source, you cannot guarantee that this will always be the case. it would be somewhere between easy and trivial to gently caress up id generation accidentally, and end up using regular uuids
|
# ¿ Oct 7, 2021 01:47 |
|
the only people who take offense at the idea of surrogate keys seem to be those who have crawled extremely up their own rear end using relational algebra
|
# ¿ Oct 7, 2021 03:19 |
|
Hammerite posted:Everyone else is talking about surrogate keys as the alternative to externally-meaningful keys, but I suspect you are talking about surrogate keys as the alternative to composite keys, which is a different dichotomy.

nope, applies to both cases
|
# ¿ Oct 8, 2021 11:35 |
|
don't forget that recipes can use the same ingredient multiple times - e.g. you use 1 cup water in step 1, then 2 cups in step 3. personally i wouldn't fold those into one entry, since they're used for different purposes and it's convenient to see at a glance

if you want to make it more complicated, ingredients are often (but not always!) grouped into sections - one for the cake, one for the icing, etc
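one possible shape for the linking table, covering both points above (all names are made up, and it assumes recipe and ingredient tables already exist):

```sql
-- one row per *use* of an ingredient, so the same ingredient can
-- appear in several steps and/or sections of one recipe
CREATE TABLE recipe_ingredient (
    recipe_id     INT NOT NULL REFERENCES recipe(id),
    ingredient_id INT NOT NULL REFERENCES ingredient(id),
    section       VARCHAR(50) NULL,  -- e.g. 'cake', 'icing'; null if ungrouped
    step_number   INT NOT NULL,
    quantity      DECIMAL(8,2) NOT NULL,
    unit          VARCHAR(20) NOT NULL,
    PRIMARY KEY (recipe_id, ingredient_id, step_number)
);
```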
|
# ¿ Dec 16, 2021 22:49 |
|
i'm probably the horror, but i've never seen the point in having special guids or whatever so that ids can't be guessed on the front end. you should have authorization checks in place to prevent users accessing resources they're not allowed to regardless, so even if they changed id=69 to id=420 they should just get a 401 or 404 or whatever you like

maybe knowing the current id and growth rate could leak a bit of information? seems a bit of a niche problem

i guess it could be defence in depth if your authn/z checks are buggy?

plus copying/typing numeric ids is just so dang convenient
|
# ¿ Mar 9, 2022 05:55 |
|
half my job is just mapping ids around
|
# ¿ Mar 10, 2022 10:09 |
|
TheFluff posted:Beginner pitfalls, or I guess just a bunch of random Big Thinks from a number of years of experience:

good stuff, i agree with it all

re: query builders: does anyone know of any good ones (or indeed any at all) for .net?
|
# ¿ Aug 18, 2022 01:32 |
|
DELETE CASCADE posted:your post on orms is really good btw. but i wanted to ask a question in response to this one. what do you think about the placement of join conditions? i'm sure we all agree that joins should be written using the join clause, not by specifying the relationships in the where clause. constant conditions seem like they belong in the where clause, but for an inner join it's equivalent to write "a join b on a.id = b.id and b.col = 1". until you switch to a left join. and what about a relationship between tables that isn't a foreign key, but is relevant for the specific query? are there like, style guidelines for this? i try to do what feels right

the way i do it is: JOIN conditions only include columns involved in the relationship between the tables. all other conditions go in the WHERE clause. this fits in with how i think of the different operations: a JOIN introduces a related table, a WHERE filters the rows. i also find it's easier to scan a query when this is followed: if you know that filter clauses are in the WHERE, then you can just skim over the table names in the FROM and JOIN and ignore the boring parts of the ON clause. so in your example the b.col = 1 would go in the WHERE

left joins might be different, of course, but i try to follow the above (and usually can, far more often than not)

PhantomOfTheCopier posted:By example, suppose you have selected FROM book and you're going to join on author. Do you say "JOIN author ON author.id=book.author" or do you say "JOIN author ON book.author=author.id"? Or do you say, this is just an inner join, why confuse matters when this is simply "FROM book,author WHERE author.id=book.author"?

whether you write book.author = author.id or author.id = book.author is entirely equivalent; which comes first comes down purely to your coding standards (i prefer the former). i shudder to think of the dbms that treats these differently

the old-style join syntax should be entirely avoided unless you have a drat good reason to (the drat good reason is that you've profiled it and found that it's substantially better than writing out the joins explicitly, and you've commented it to that effect). again, any halfassed db should treat this the same as the other two, but it wouldn't surprise me to learn that some dbs are dumber about this construct

old style joins should be a syntax error tbh
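the convention described above, using the book/author example (column names guessed beyond those mentioned):

```sql
-- relationship columns go in the ON, everything else in the WHERE
SELECT b.title, a.name
FROM book AS b
JOIN author AS a
    ON b.author = a.id          -- just the relationship
WHERE b.published_year >= 2000  -- filters live here, not in the ON
  AND a.country = 'NZ';
```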
|
# ¿ Aug 18, 2022 13:07 |
|
did you update inserttime, just add that new column RecTime_UTC, or is your second where clause wrong? the two queries are identical except for a column in the select

which rdbms? i think this might be somewhat db-dependent, although i can hazard a guess...

assuming you updated inserttime as well as creating RecTime_UTC, my guess is that inserttime was inserted with a data type that had a time zone offset. using the first row from your example, i'm guessing inserttime would have been inserted as 2018-02-19 20:42:54+02:00 (instead of a time with no offset like 2018-02-19 20:42:54). this time is equivalent to the utc time 2018-02-19 18:42:54+00:00. when comparing the date columns in the where clause, the db compares the dates in utc. these are the same, so you see the same rows appearing in both queries

for example, here's how sql server's datetimeoffset type works: quote:The data is stored in the database and processed, compared, sorted, and indexed in the server as in UTC. The time zone offset will be preserved in the database for retrieval.

this makes some assumptions about your schema - specifically that something in your schema is offset-aware. probably. i think. idk, times are a nightmare even with good libraries for working with them. and dbs do not have good datetime libraries
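you can see the utc-comparison behaviour directly in sql server:

```sql
-- datetimeoffset compares on the utc instant, so these two values
-- (different offsets, same instant) are considered equal
DECLARE @local datetimeoffset = '2018-02-19 20:42:54 +02:00';
DECLARE @utc   datetimeoffset = '2018-02-19 18:42:54 +00:00';

SELECT CASE WHEN @local = @utc THEN 'equal' ELSE 'not equal' END AS cmp;
```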
|
# ¿ Aug 19, 2022 12:03 |
|
kiwid posted:Thanks. Is there any performance differences between a CTE vs subquery?

nah, optimiser treats 'em the same. think of a cte as being a way to copy/paste a subquery

primary reasons to use one over the other: defining a subquery once and using it multiple times (cte), readability. some people prefer ctes over subqueries even for single-use things. personally i only use ctes in that case if i'd end up with multiple nested subqueries (which might happen with complicated logic and/or window functions)

note that sql server doesn't evaluate a cte only once and reuse the result set. every time you refer to a cte in a query, you're creating a different subquery. i've seen this trip people up - people expect defining a thing once means it will execute once, but alas that isn't so. in postgres you can add a hint to the cte to get it to materialize the cte results, but sql server has nothing like this
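to illustrate the "defined once, evaluated more than once" point (table and column names are made up):

```sql
-- the cte is written once but referenced twice; sql server expands it
-- into both references, so "recent" is effectively evaluated twice
WITH recent AS (
    SELECT customer_id, total
    FROM orders
    WHERE order_date >= '2023-01-01'
)
SELECT r1.customer_id
FROM recent AS r1
JOIN recent AS r2
    ON r1.customer_id = r2.customer_id
   AND r1.total > r2.total;

-- the postgres (12+) hint to evaluate it once instead:
-- WITH recent AS MATERIALIZED ( ... )
```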
|
# ¿ Oct 20, 2022 21:00 |
|
the query store might help - it's theoretically good for recording and finding query regressions. you might need to turn it on, if it's not enabled already. i've always found it a pain to query, but that's probably because i don't use it much at all (our workload isn't well suited)
|
# ¿ Jan 21, 2023 21:04 |
|
i know it's been solved, but if you're wondering, the reason you're getting a bunch more rows than you expect is because this statement doesn't have a where clause: code:
|
# ¿ Jan 26, 2023 08:00 |
|
credburn posted:Hey, gang!

i don't speak postgres, but from a quick google i'd expect there to be a CREATE TRIGGER statement somewhere. that is, a statement to wire up that function to an actual trigger on the table
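the wiring-up statement looks roughly like this (trigger, table and function names are all made up):

```sql
-- EXECUTE FUNCTION is postgres 11+; older versions spell it
-- EXECUTE PROCEDURE
CREATE TRIGGER set_updated_at
    BEFORE UPDATE ON my_table
    FOR EACH ROW
    EXECUTE FUNCTION my_trigger_fn();
```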
|
# ¿ Aug 26, 2023 01:01 |
|
does your sqlite wrapper expose one of the sqlite3_changes functions? looks like another option might be the count_changes pragma, although this is deprecated - by what, i'm not sure. it definitely seems like a nicer way to get the affected row count than sqlite3_changes
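if the wrapper can run arbitrary sql, sqlite also exposes this as a plain function, which might be easier than either of the above (table names made up):

```sql
-- changes() is the sql-level counterpart of sqlite3_changes(),
-- returning the row count of the most recent insert/update/delete
-- on this connection
UPDATE parts SET price = price * 1.1 WHERE category = 'widget';
SELECT changes();
```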
|
# ¿ Oct 7, 2023 04:37 |
|
oooh, yeah, i get it now. have you tried the solution in the accepted answer on that question? i think that could work for you, if sqlite allows it. the rowset will be empty if the id doesn't exist, and you'd get the old value of is_deleted to see if it had been previously deleted or not. code:
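a sketch going off that description (table/column names are made up; needs sqlite 3.35+ for RETURNING). one caveat: sqlite's RETURNING evaluates against the row *after* the update, so if you need the previous flag rather than the new one, you'd either select it first or guard with AND is_deleted = 0 and check changes():

```sql
-- no row comes back at all if the id doesn't exist
UPDATE items
SET is_deleted = 1
WHERE id = :id
RETURNING id, is_deleted;
```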
you'll no doubt need to tweak the syntax, because sql is a vibe and not a standard
|
# ¿ Oct 7, 2023 05:14 |