|
Jethro posted:Another reason to use a (non-correlated) subquery instead of a join is when the join would return duplicate records.
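To illustrate the point (table and column names are hypothetical): a customer with three matching orders comes back three times from the join, but only once from the subquery form:

```sql
-- Join form: one output row per matching order, so customers duplicate
SELECT c.customer_id, c.name
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id;

-- Subquery form: each customer appears at most once
SELECT c.customer_id, c.name
FROM customers c
WHERE c.customer_id IN (SELECT customer_id FROM orders);
```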
|
# ? Aug 27, 2019 03:08 |
|
|
# ? Jun 5, 2024 00:56 |
|
Vegetable posted:Couldn’t you just select distinct customer_id out of a filtered-for-30-days orders table and left join it to the customers table to add customer details? Inner, but yeah. That’s how I usually see it done. I think their argument is that it’s better suited for the WHERE clause because it’s essentially filter criteria. But I personally would be happy to tolerate either approach.
|
# ? Aug 27, 2019 06:39 |
|
How efficient is the WHERE example above? My understanding was that stuffing subqueries into WHERE and SELECT conditions is a bad idea due to inefficiency. But then again I was taught that joins are always the golden standard and should be preferred where possible, so I don’t have any depth of knowledge on this.
|
# ? Aug 27, 2019 08:36 |
|
depends entirely on the query + query optimiser + data distribution. trying to give any rules of thumb is basically irresponsible tbh
|
# ? Aug 27, 2019 09:59 |
|
Vegetable posted:How efficient is the WHERE example above? My understanding was that stuffing subqueries in WHERE and SELECT conditions are a bad idea due to inefficiency. But then again I was taught that joins are always the golden standard and should be preferred where possible, so I don’t have any depth of knowledge on this. What redleader said, but also note that there’s a difference between a correlated and non-correlated query in terms of whether the engine may need to run it more than once.
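A sketch of that distinction, with invented tables: a non-correlated subquery never references the outer query, so the engine can evaluate it once and reuse the result; a correlated one references the outer row, so logically it runs once per row:

```sql
-- Non-correlated: the inner query stands alone and can be evaluated once
SELECT *
FROM customers
WHERE customer_id IN (SELECT customer_id FROM orders);

-- Correlated: the inner query references c from the outer query,
-- so it is logically re-evaluated for every outer row
SELECT *
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);
```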
|
# ? Aug 27, 2019 12:50 |
|
Ruggan posted:There are a ton of ways and no “best”. Do you need drill downs or detail or do you just need the aggregate stats? Just coming back to this, but here's a sample table structure: code:
code:
e: I think I'm making some progress having just discovered that column groups are a thing? But it's still not working the way I expect it to. Sir Bobert Fishbone fucked around with this message at 16:08 on Aug 27, 2019 |
# ? Aug 27, 2019 15:22 |
|
Sir Bobert Fishbone posted:Just coming back to this, but here's a sample table structure: I started writing up a bunch of stuff for this, but ended up scrapping it. Your best bet is to probably calculate the values in SQL and possibly aggregate if necessary. So, create some parameters in your report for StartDate and EndDate and pass them. Assuming a data structure like this: code:
code:
If you don't need the individual records and can aggregate, you might save yourself some performance just pre-calculating in SQL by taking care of NULL aggregate elimination like so: code:
If you really want to do the calculations in SSRS, you're going to be writing much nastier IIF logic and probably doing some dataset filters which honestly is a big waste of time IMO. Usually the finesse with SSRS is crafting your dataset into one that closely fits what you want to display in the report and minimizes the number of rows returned (only returning the level of granularity you actually need). For example, if for some reason you need drilldowns, return the non-aggregated equivalent dataset (careful to only return rows that qualify for one of your groups so you don't data dump the full table): code:
Does that do what you need?
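The original code blocks didn't survive the archive, but the "NULL aggregate elimination" trick described above reads like conditional aggregation. A hedged sketch — parameter names come from the post, everything else is invented:

```sql
-- @StartDate/@EndDate are the report parameters mentioned above.
-- Rows outside the window yield NULL inside the aggregate, and
-- SUM/COUNT silently skip NULLs, so each row only counts toward
-- the window it qualifies for.
SELECT
    GroupingColumn,
    SUM(CASE WHEN EventDate >= @StartDate AND EventDate < @EndDate THEN Amount END) AS WindowTotal,
    COUNT(CASE WHEN EventDate >= @StartDate AND EventDate < @EndDate THEN 1 END)    AS WindowCount
FROM dbo.SomeFactTable
GROUP BY GroupingColumn;
```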
|
# ? Aug 27, 2019 20:27 |
|
Sir Bobert Fishbone posted:e: I think I'm making some progress having just discovered that column groups are a thing? But it's still not working the way I expect it to. Don't do matrix column groups for this. I've learned the hard way that column group filters don't work if you aren't grouping on your filter criteria. In general avoid matrixes like the plague, they are rarely useful (although sometimes they truly make sense for when you want a true matrix that expands horizontally and not just column-based calculations).
|
# ? Aug 27, 2019 20:28 |
|
Ruggan posted:... I think this just laid out exactly what I've been missing--I've been dancing so close to the calculated column values idea, but wasn't able to get there. This was a super helpful post; thanks a ton! e: yeah dang this is precisely what I need. It's so much easier than the union-filled monstrosities I've been working up. Sir Bobert Fishbone fucked around with this message at 20:55 on Aug 27, 2019 |
# ? Aug 27, 2019 20:40 |
|
Sir Bobert Fishbone posted:I think this just laid out exactly what I've been missing--I've been dancing so close to the calculated column values idea, but wasn't able to get there. This was a super helpful post; thanks a ton! Glad it helped. Let me know if you need any other guidance - unfortunately I've received far beyond whatever the LD50 dose is of SSRS exposure.
|
# ? Aug 28, 2019 14:49 |
Is there an easy way to locate all the names of relations in a SQL db? I'm getting an error message that references a relation name, but I can't find it anywhere manually.
|
|
# ? Aug 29, 2019 04:43 |
|
NPR Journalizard posted:Is there an easy way to locate all the names of relations in a SQL db? You might have luck with INFORMATION_SCHEMA.
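For example, the standard metadata views list every table and view by name, which is usually enough to track down a mystery relation:

```sql
-- INFORMATION_SCHEMA is part of the SQL standard and works on
-- SQL Server, MySQL, Postgres, and most other engines
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
FROM INFORMATION_SCHEMA.TABLES
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```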
|
# ? Aug 29, 2019 08:28 |
McGlockenshire posted:You might have luck with INFORMATION_SCHEMA. I did some more googling and found code:
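The snippet didn't survive the quote, but a common version of this search uses the sys catalog views — a guess at roughly what was found:

```sql
-- Find any user object whose name matches a pattern
SELECT s.name AS schema_name, o.name, o.type_desc
FROM sys.objects o
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.name LIKE '%some_relation%';  -- hypothetical search term
```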
|
|
# ? Aug 29, 2019 09:00 |
|
NPR Journalizard posted:I did some more googling and found
|
# ? Aug 29, 2019 12:32 |
|
You can prefix the sys schema with the database you want to specifically look at (tempdb being one of those, as Nth Doctor mentioned):code:
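The code block was lost here, but the three-part naming being described presumably looked something like this (column filter is a guess):

```sql
-- Query another database's catalog by prefixing sys with the database name
SELECT name, type_desc, create_date
FROM tempdb.sys.objects
WHERE is_ms_shipped = 0;  -- skip built-in system objects
```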
|
# ? Aug 29, 2019 14:11 |
|
Anyone know of a T-SQL linter that actually works? I’m addicted to eslint for JS and its integration in Visual Studio Code, and I wish there was something similar for SSMS. Our DW team of 10 or so folks struggles with formatting consistency and I think it would be a big win if we could stop having formatting discussions and just apply a standard. Poor Man's is the closest thing I know of... Ruggan fucked around with this message at 00:06 on Sep 1, 2019 |
# ? Aug 31, 2019 18:47 |
|
Have you tried Devart's SQL Complete? A few years ago when I compared it, Redgate and a bunch of other products I found it to be the most consistent. It certainly isn't perfect though. I think they have a free, time-limited trial version.
|
# ? Aug 31, 2019 18:53 |
|
Ruggan posted:Anyone know of a T-SQL linter that actually works? I’m addicted to eslint for JS and its integration in Visual Studio code, and I wish there was something similar for SSMS. Our DW team of 10 or so folks struggles with formatting consistency and I think it would be a big win if we could stop having formatting discussions and just apply a standard. I have a colleague rolling his own, around gudusoft sql parser. I haven't tried it, though.
|
# ? Sep 1, 2019 02:21 |
|
Suppose I want to create a table with denormalised data in Postgres which would only be updated infrequently. As the database contains millions of rows which would need to be joined and aggregated in windows, trying to do all this in a single query will easily blow out the memory. I'm using a query builder (SQLAlchemy) so would it be bad practice to build the table by repeating the query in an external loop?
|
# ? Sep 6, 2019 09:12 |
|
Company policy is to start moving to MySQL for all new databases going forward. We’ve got a Mongo database we therefore want to replace with MySQL, in document store mode. Has anyone worked with/migrated to MySQL document store mode before and have experience or feedback to give? How feasible is it? Was it a difficult migration? Can you still index on nested documents’ attributes in array fields? Edit: bonus points if you have also tried to use ActiveRecord with MySQL document stores!! Pollyanna fucked around with this message at 16:17 on Sep 6, 2019 |
# ? Sep 6, 2019 16:09 |
|
Just wondering. If you have several hundred rows of data to insert, what will you do to make it go faster? I've built up a habit of pasting everything into an Excel doc and concatenating it into an insert statement. No idea if it's actually faster, but it works for me.
|
# ? Sep 6, 2019 18:49 |
|
Bulk insert https://docs.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017 But if you can't, I'd recommend testing multiple sizes of insert statements to see what your setup can take. I found our lovely MySQL servers had linear performance up to ~2500 records at the job where I was unfortunate enough to use that product. After 2500 it started slowing down, so I just chunked inserts into 2000-record statements. Munkeymon fucked around with this message at 19:15 on Sep 6, 2019 |
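For reference, a minimal BULK INSERT along the lines of the linked docs — file path and table are placeholders:

```sql
BULK INSERT dbo.TargetTable
FROM 'C:\loads\rows.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK  -- take a table lock for minimal logging, if that's acceptable
);
```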
# ? Sep 6, 2019 19:10 |
|
Protocol7 posted:Just wondering. If you have several hundred rows of data to insert, what will you do to make it go faster? Open a transaction, do your inserts, then commit
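That is, batch the statements so you pay for one commit instead of one per row (hypothetical table):

```sql
BEGIN TRANSACTION;

INSERT INTO dbo.TargetTable (id, val) VALUES (1, 'a');
INSERT INTO dbo.TargetTable (id, val) VALUES (2, 'b');
-- ...the rest of the batch...

COMMIT TRANSACTION;
```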
|
# ? Sep 6, 2019 21:05 |
|
Seconding bulk insert, that's always worked best for me. Some other things you can do, but usually shouldn't:
* Table lock hint
* Change the recovery model to simple or bulk logged (temporarily!)
* Set fill factor on the table to 100
* Turn off constraints (temporarily!)
Don't do any of these without reading up on the implications of each!
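A sketch of the recovery-model item from that list — database and file names are invented, and this genuinely does affect point-in-time recovery, so read up first:

```sql
-- Minimally logged load: switch to BULK_LOGGED, load with a table lock, switch back.
-- Take a log backup afterwards to restore the point-in-time recovery chain.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

BULK INSERT dbo.TargetTable
FROM 'C:\loads\rows.csv'
WITH (TABLOCK);

ALTER DATABASE MyDb SET RECOVERY FULL;
```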
|
# ? Sep 6, 2019 21:18 |
|
This is all TSQL. Currently when we make notes on customer accounts the notes are sequential. I'd like to make it so users can respond to a particular note, and respond to that note, etc. I think what I'm looking for is a CTE and recursion to get these all in the right order when queried, but I'm not sure, or if I need another table to handle this. For this example my table looks something like this for simplicity's sake: code:
In the end, I'd like my output to look something like this code:
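Since the sample-table and desired-output blocks didn't survive, here's a hedged reconstruction of the recursive-CTE approach being described — the schema is invented, and the zero-padded path is one way to keep the sort numeric-safe:

```sql
-- Assumed schema: each reply stores its parent's NoteID (NULL for top-level notes)
-- CustomerNotes(NoteID int, ParentNoteID int NULL, NoteText varchar(max), CreatedAt datetime)
WITH NoteTree AS (
    SELECT NoteID, ParentNoteID, NoteText, 0 AS Depth,
           CAST(RIGHT('0000000000' + CAST(NoteID AS varchar(10)), 10) AS varchar(900)) AS SortPath
    FROM dbo.CustomerNotes
    WHERE ParentNoteID IS NULL
    UNION ALL
    SELECT n.NoteID, n.ParentNoteID, n.NoteText, t.Depth + 1,
           CAST(t.SortPath + '/' + RIGHT('0000000000' + CAST(n.NoteID AS varchar(10)), 10) AS varchar(900))
    FROM dbo.CustomerNotes n
    JOIN NoteTree t ON n.ParentNoteID = t.NoteID
)
SELECT REPLICATE('    ', Depth) + NoteText AS ThreadedNote
FROM NoteTree
ORDER BY SortPath;  -- replies sort directly under their parent note
```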
|
# ? Sep 9, 2019 21:19 |
|
Just-In-Timeberlake posted:This is all TSQL I'm in a car hundreds of miles from home and a pc. I banged out a recursive CTE that sorta worked but used a string of delimited noteids to help sort the data that likely had problems when sorting lexicographically vs. numerically, so I scrapped it and instead found this: HIERARCHYID data type I bet this will get you very far along in solving your problem.
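In outline (table and column names invented): hierarchyid encodes the full ancestry path in one column, so a plain ORDER BY yields depth-first thread order with no recursion in the query:

```sql
CREATE TABLE dbo.CustomerNotes (
    NoteNode hierarchyid NOT NULL PRIMARY KEY,  -- e.g. /1/, /1/1/, /1/2/
    NoteText varchar(max) NOT NULL
);

SELECT NoteNode.ToString() AS NotePath,
       NoteNode.GetLevel() AS Depth,
       NoteText
FROM dbo.CustomerNotes
ORDER BY NoteNode;  -- depth-first: replies grouped under their parents
```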
|
# ? Sep 10, 2019 00:59 |
|
You could use a bridge table as well, i.e. a separate table with AncestorId, DescendantId, and (optionally) depth and other attributes of the relationship if you want. I usually just use a ParentID + a CTE though.
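For reference, the bridge (closure) table shape being described, with invented names — one row per ancestor/descendant pair, including self-references:

```sql
CREATE TABLE dbo.NoteClosure (
    AncestorID   int NOT NULL,
    DescendantID int NOT NULL,
    Depth        int NOT NULL,  -- 0 = self, 1 = direct reply, 2 = reply-to-reply, ...
    PRIMARY KEY (AncestorID, DescendantID)
);

-- Every reply anywhere under note 42, with no recursion at query time
SELECT DescendantID
FROM dbo.NoteClosure
WHERE AncestorID = 42 AND Depth > 0;
```

The trade-off versus ParentID + CTE is write cost: every insert has to add one closure row per ancestor.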
|
# ? Sep 10, 2019 17:31 |
|
Nth Doctor posted:I'm in a car hundreds of miles from home and a pc. drat, that looks slick as hell, thanks
|
# ? Sep 10, 2019 20:12 |
|
Cold on a Cob posted:You could also use a bridge table as well ie a separate table with AncestorId, DescendentId, and (optionally) depth and other attributes of the relationship if you want. ParentID+CTE was one of my paths, but I still had lots of trouble having all of the related posts sort together in a way that made consistent sense and still respected chronology.
|
# ? Sep 10, 2019 21:19 |
|
We're currently running SQL 2014 and SSRS 2014 on our main db server. I would like to upgrade SSRS to the newest and greatest. I'm finding a bunch of conflicting info online, so I'm hoping to get a sanity check by asking some dumb questions:
- Can I upgrade SSRS without upgrading SQL itself?
- Assuming the answer is yes, can I just do an in-place upgrade to SSRS, or do I have to install it side by side and migrate everything over and break all of our report links?
|
# ? Sep 19, 2019 20:07 |
|
The guys at our work have done in-place upgrades for the last few SSRS upgrades. The only issue has been that the config file seems to have gotten reset (or they failed to port custom configuration forward). I have no idea whether SSRS is dependent on an equivalent version of SQL, but I would guess the answer is yes.
|
# ? Sep 21, 2019 17:31 |
|
SSRS is not dependent on an equivalent version of SQL Server. In fact, it's not even a part of the SQL Server installer anymore as of 2017, it's a separate download! Although it probably has a minimum SQL Server version requirement. Edit: I can't actually find anything about which version of SQL Server SSRS requires. It's possible that it's only forward-compatible, not backward-compatible. Tax Oddity fucked around with this message at 20:30 on Sep 21, 2019 |
# ? Sep 21, 2019 18:25 |
|
okay, getting an error in my code: code:
Mycroft Holmes fucked around with this message at 22:08 on Sep 23, 2019 |
# ? Sep 23, 2019 21:40 |
|
there is no reason to have the 'sal' on line 2. there's also a chance the percents could gently caress with your code, but that depends on which flavor of sql. and for completeness, you want an ELSE in the CASE abelwingnut fucked around with this message at 22:13 on Sep 23, 2019 |
# ? Sep 23, 2019 22:11 |
|
yes, CASE has two variants. One works like a switch statement in C-style languages, the other works like a sequence of if-else-if-...-else. You have provided a column name (sal) which suggests that it's the switch variant, but your "cases" are booleans which presumably is inconsistent with that.
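Concretely — `sal` comes from the discussion above, the rest is invented — the two variants look like this, and mixing them (an expression after CASE plus boolean WHENs) is the inconsistency being pointed out:

```sql
-- Simple ("switch") CASE: one expression after CASE, compared against each WHEN value
SELECT CASE sal WHEN 1000 THEN 'low' WHEN 9000 THEN 'high' ELSE 'other' END FROM emp;

-- Searched CASE: nothing after CASE, each WHEN is its own boolean condition
SELECT CASE WHEN sal < 1000 THEN 'low' WHEN sal >= 9000 THEN 'high' ELSE 'mid' END FROM emp;
```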
|
# ? Sep 23, 2019 23:56 |
|
Cross posting from AWS thread: I have a client that wants to migrate two MSSQL database servers with 200+ db objects between them to AWS Cloud. Now, up until this point we've been fine using Data Migration Service to move the table data from their on-prem servers into AWS. The problem is that DMS doesn't migrate indexes, users, privileges, stored procedures, and other database changes not directly related to table data. So now we have to generate scripts by hand for these 200+, at minimum, objects. What I'm asking is, is there some hacky way to automate this migration or are we just stuck having to do it all by hand over and over again?
|
# ? Sep 24, 2019 21:08 |
|
SnatchRabbit posted:Cross posting from AWS thread: I know nothing about aws cloud hosting of MSSQL but I'd try importing the database schema into a visual studio database project, then generating a script from that targeting the instance you're migrating to. There are diff tools you could use to generate the script but I like to import so I can review what's being moved in a more structured way. Edit: SQL Server Data Tools installs support for database projects in visual studio: https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-2017
|
# ? Sep 24, 2019 21:16 |
|
SnatchRabbit posted:Cross posting from AWS thread: a quick Google search uncovered this that might be of help https://www.mssqltips.com/sqlservertip/4606/generate-tsql-scripts-for-all-sql-server-databases-and-all-objects-using-powershell/
|
# ? Sep 24, 2019 22:11 |
|
Just-In-Timeberlake posted:a quick Google search uncovered this that might be of help Thanks, I'm going to have the dba give this a shot.
|
# ? Sep 25, 2019 15:44 |
|
|
|
I'm like a month late to join vs "where foo in (subquery)" chat, but, uh. Why would you ever use the subquery form when exists(), well, exists? Like all the examples that got brought up as use cases for in(subquery) are pretty much textbook examples of when to use a semi-join (well, I guess the in(subquery) is actually a semi-join in practice, huh). Is this some old received wisdom about working around a wonky query planner or something? Everything I've read indicates exists() is basically the fastest way of getting the filtering effects of an inner join but without selecting columns from the right hand table or duplicating rows in the left hand table, because it's not necessary to find every matching row in the right hand table - early termination after finding the first row is possible. Am I missing something obvious here? I use exists() a lot because it turns out it's pretty common for me to want to filter by the existence of a relation without duplicating by it. e: I guess in many cases the id in(subquery) ends up with the same query plan as where exists huh, so it doesn't really matter does it TheFluff fucked around with this message at 23:03 on Sep 30, 2019 |
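For anyone skimming, the semi-join shape in question, with hypothetical tables and T-SQL date functions:

```sql
-- Customers with at least one order in the last 30 days: no row duplication,
-- no columns pulled from orders, and the engine can stop probing orders
-- at the first match for each customer
SELECT c.*
FROM customers c
WHERE EXISTS (
    SELECT 1
    FROM orders o
    WHERE o.customer_id = c.customer_id
      AND o.created_at >= DATEADD(day, -30, GETDATE())
);
```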
# ? Sep 30, 2019 20:59 |