Vegetable
Oct 22, 2010

Jethro posted:

Another reason to use a (non-correlated) subquery instead of a join is when the join would return duplicate records.

For example: Return all customers that had an order in the last month:

SQL code:
select c.*
  from customers c
 where c.id in (select customer_id from orders where OrderDate > today()-30)
vs
SQL code:
select distinct c.*
  from customers c
        inner join orders o on c.id = o.customerId
 where o.orderDate > today()-30
Couldn’t you just select distinct customer_id out of a filtered-for-30-days orders table and left join it to the customers table to add customer details?


Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Vegetable posted:

Couldn’t you just select distinct customer_id out of a filtered-for-30-days orders table and left join it to the customers table to add customer details?

Inner but yeah. That’s how I usually see it done. I think their argument is that it’s better suited for the where clause because it’s essentially filter criteria. But I personally would be happy to tolerate either approach.
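For what it's worth, that derived-table version would look something like this (a sketch using the column names from the earlier examples; today() is pseudocode as before):

SQL code:
select c.*
  from (select distinct customer_id
          from orders
         where OrderDate > today()-30) o
       inner join customers c on c.id = o.customer_id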

Vegetable
Oct 22, 2010

How efficient is the WHERE example above? My understanding was that stuffing subqueries into WHERE and SELECT clauses is a bad idea due to inefficiency. But then again I was taught that joins are always the gold standard and should be preferred where possible, so I don't have any depth of knowledge on this.

redleader
Aug 18, 2005

Engage according to operational parameters
depends entirely on the query + query optimiser + data distribution. trying to give any rules of thumb is basically irresponsible tbh

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Vegetable posted:

How efficient is the WHERE example above? My understanding was that stuffing subqueries into WHERE and SELECT clauses is a bad idea due to inefficiency. But then again I was taught that joins are always the gold standard and should be preferred where possible, so I don't have any depth of knowledge on this.

What redleader said, but also note that there's a difference between a correlated and a non-correlated subquery in terms of whether the engine may need to run it more than once.
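To illustrate the distinction (a sketch against the customers/orders tables from earlier; today() is pseudocode as before):

SQL code:
-- Non-correlated: independent of the outer row, so it can be evaluated once
select *
  from customers
 where id in (select customer_id from orders where OrderDate > today()-30)

-- Correlated: references the outer row (c.id), so the engine may evaluate it
-- once per customer unless the optimizer rewrites it as a join
select c.*,
       (select count(*) from orders o where o.customer_id = c.id) as order_count
  from customers c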

Sir Bobert Fishbone
Jan 16, 2006

Beebort

Ruggan posted:

There are a ton of ways and no “best”. Do you need drill downs or detail or do you just need the aggregate stats?

Do you have a sample query using fake data? I could prob mock something up for you p quick if so.

Matrix with filters, or tablix. If you don’t need drilldowns and can form your SQL data into what you want it to look like in output (row per type with columns for periods) the tablix option is super easy.

Just coming back to this, but here's a sample table structure:
code:
+----+---------+-----------+-----------+
| ID | DocType |  DocDate  | DocAmount |
+----+---------+-----------+-----------+
| 24 | Type A  | 8/1/2019  |    123.45 |
| 25 | Type B  | 8/15/2019 |   4000.00 |
| 26 | Type A  | 8/4/2018  |    133.09 |
+----+---------+-----------+-----------+
Basically I need a report that does something like the following:
code:
+--------+--------------+-----------+------------------------+--------------+-----------+------------------------+
|  Type  | Period Count | YTD Count | Last Year Period Count | Period Value | YTD Value | Last Year Period Value |
+--------+--------------+-----------+------------------------+--------------+-----------+------------------------+
| Type A |            2 |        12 |                      4 |      1000.00 |  10000.00 |                 900.00 |
| Type B |            4 |        11 |                      9 |      2000.00 |  20000.00 |                 500.00 |
+--------+--------------+-----------+------------------------+--------------+-----------+------------------------+
and I'm too dumb to figure it out without making a hell of a mess with my datasets. I'm sure it's super simple, I just need help figuring out how the YTD/last year period counts would work in an SSRS table.

e: I think I'm making some progress having just discovered that column groups are a thing? But it's still not working the way I expect it to.

Sir Bobert Fishbone fucked around with this message at 16:08 on Aug 27, 2019

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Sir Bobert Fishbone posted:

Just coming back to this, but here's a sample table structure:
code:
+----+---------+-----------+-----------+
| ID | DocType |  DocDate  | DocAmount |
+----+---------+-----------+-----------+
| 24 | Type A  | 8/1/2019  |    123.45 |
| 25 | Type B  | 8/15/2019 |   4000.00 |
| 26 | Type A  | 8/4/2018  |    133.09 |
+----+---------+-----------+-----------+
Basically I need a report that does something like the following:
code:
+--------+--------------+-----------+------------------------+--------------+-----------+------------------------+
|  Type  | Period Count | YTD Count | Last Year Period Count | Period Value | YTD Value | Last Year Period Value |
+--------+--------------+-----------+------------------------+--------------+-----------+------------------------+
| Type A |            2 |        12 |                      4 |      1000.00 |  10000.00 |                 900.00 |
| Type B |            4 |        11 |                      9 |      2000.00 |  20000.00 |                 500.00 |
+--------+--------------+-----------+------------------------+--------------+-----------+------------------------+
and I'm too dumb to figure it out without making a hell of a mess with my datasets. I'm sure it's super simple, I just need help figuring out how the YTD/last year period counts would work in an SSRS table.

e: I think I'm making some progress having just discovered that column groups are a thing? But it's still not working the way I expect it to.


I started writing up a bunch of stuff for this, but ended up scrapping it. Your best bet is to probably calculate the values in SQL and possibly aggregate if necessary. So, create some parameters in your report for StartDate and EndDate and pass them.

Assuming a data structure like this:

code:
CREATE TABLE #Test (ID int, DocType varchar(10), DocDate datetime, DocAmount numeric(18,2))

DECLARE @Id int = 1
WHILE (@Id < 1000)
BEGIN
    INSERT INTO #Test (ID, DocType, DocDate, DocAmount)
    VALUES (@Id, CONCAT('Type ', char(65 + round(rand(), 0))), DATEADD(DAY, ROUND(rand() * 365 * 5, 0), '1/1/2015'), 5000 * rand())
    SET @Id = @Id + 1
END

SELECT * FROM #Test

DROP TABLE #Test
You can calculate column values for filtering like so:

code:
select  *,
        IsPER = CASE WHEN DocDate between @StartDate and @EndDate THEN 1 END,
        IsYTD = CASE WHEN DocDate between CONCAT('1/1/', YEAR(GETDATE())) and GETDATE() THEN 1 END,
        IsLYP = CASE WHEN DocDate between DATEADD(YEAR, -1, @StartDate) and DATEADD(YEAR, -1, @EndDate) THEN 1 END
from    #Test
This allows you to just sum the calculated columns in the SSRS report in a tablix with a single row group to get your column values.

If you don't need the individual records and can aggregate, you can save some performance by pre-calculating in SQL, relying on the fact that COUNT and SUM ignore NULLs, like so:

code:
select      DocType,
            CountPER = COUNT(CASE WHEN DocDate between @StartDate and @EndDate                                       THEN 1 END),
            CountYTD = COUNT(CASE WHEN DocDate between CONCAT('1/1/', YEAR(GETDATE())) and GETDATE()                 THEN 1 END),
            CountLYP = COUNT(CASE WHEN DocDate between DATEADD(YEAR, -1, @StartDate) and DATEADD(YEAR, -1, @EndDate) THEN 1 END),
            SumPER   =   SUM(CASE WHEN DocDate between @StartDate and @EndDate                                       THEN DocAmount END),
            SumYTD   =   SUM(CASE WHEN DocDate between CONCAT('1/1/', YEAR(GETDATE())) and GETDATE()                 THEN DocAmount END),
            SumLYP   =   SUM(CASE WHEN DocDate between DATEADD(YEAR, -1, @StartDate) and DATEADD(YEAR, -1, @EndDate) THEN DocAmount END)
from        #Test
GROUP BY    DocType
This is super simple to implement into SSRS:



If you really want to do the calculations in SSRS, you're going to be writing much nastier IIF logic and probably doing some dataset filters which honestly is a big waste of time IMO. Usually the finesse with SSRS is crafting your dataset into one that closely fits what you want to display in the report and minimizes the number of rows returned (only returning the level of granularity you actually need).

For example, if for some reason you need drilldowns, return the non-aggregated equivalent dataset (careful to only return rows that qualify for one of your groups so you don't data dump the full table):

code:
select      *
from        (
            select      *,
                        CountPER = CASE WHEN DocDate between @StartDate and @EndDate                                       THEN 1 END,
                        CountYTD = CASE WHEN DocDate between CONCAT('1/1/', YEAR(GETDATE())) and GETDATE()                 THEN 1 END,
                        CountLYP = CASE WHEN DocDate between DATEADD(YEAR, -1, @StartDate) and DATEADD(YEAR, -1, @EndDate) THEN 1 END,
                        SumPER   = CASE WHEN DocDate between @StartDate and @EndDate                                       THEN DocAmount END,
                        SumYTD   = CASE WHEN DocDate between CONCAT('1/1/', YEAR(GETDATE())) and GETDATE()                 THEN DocAmount END,
                        SumLYP   = CASE WHEN DocDate between DATEADD(YEAR, -1, @StartDate) and DATEADD(YEAR, -1, @EndDate) THEN DocAmount END
            from        #Test
            ) x
where       CountPER is not null
or          CountYTD is not null
or          CountLYP is not null
And now add a child group that displays detail level data, with the row visibility hidden / toggleable by the [DocType] cell.



Does that do what you need?

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Sir Bobert Fishbone posted:

e: I think I'm making some progress having just discovered that column groups are a thing? But it's still not working the way I expect it to.

Don't do matrix column groups for this. I've learned the hard way that column group filters don't work if you aren't grouping on your filter criteria.

In general avoid matrixes like the plague, they are rarely useful (although sometimes they truly make sense for when you want a true matrix that expands horizontally and not just column-based calculations).

Sir Bobert Fishbone
Jan 16, 2006

Beebort

Ruggan posted:

...

Does that do what you need?

I think this just laid out exactly what I've been missing--I've been dancing so close to the calculated column values idea, but wasn't able to get there. This was a super helpful post; thanks a ton!

e: yeah dang this is precisely what I need. It's so much easier than the union-filled monstrosities I've been working up.

Sir Bobert Fishbone fucked around with this message at 20:55 on Aug 27, 2019

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Sir Bobert Fishbone posted:

I think this just laid out exactly what I've been missing--I've been dancing so close to the calculated column values idea, but wasn't able to get there. This was a super helpful post; thanks a ton!

e: yeah dang this is precisely what I need. It's so much easier than the union-filled monstrosities I've been working up.

Glad it helped. Let me know if you need any other guidance - unfortunately I've received far beyond whatever the LD50 dose is of SSRS exposure.

NPR Journalizard
Feb 14, 2008

Is there an easy way to locate all the names of relations in a SQL db?

I'm getting an error message that references a relation name, but I can't find it anywhere manually.

McGlockenshire
Dec 16, 2005

GOLLOCKS!

NPR Journalizard posted:

Is there an easy way to locate all the names of relations in a SQL db?

You might have luck with INFORMATION_SCHEMA.
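For example, the standard views cover table and constraint names (a minimal sketch):

SQL code:
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
FROM   INFORMATION_SCHEMA.TABLES;

SELECT CONSTRAINT_NAME, TABLE_NAME, CONSTRAINT_TYPE
FROM   INFORMATION_SCHEMA.TABLE_CONSTRAINTS;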

NPR Journalizard
Feb 14, 2008

McGlockenshire posted:

You might have luck with INFORMATION_SCHEMA.

I did some more googling and found

code:
SELECT
    fk.name 'FK Name',
    tp.name 'Parent table',
    cp.name, cp.column_id,
    tr.name 'Referenced table',
    cr.name, cr.column_id
FROM 
    sys.foreign_keys fk
INNER JOIN 
    sys.tables tp ON fk.parent_object_id = tp.object_id
INNER JOIN 
    sys.tables tr ON fk.referenced_object_id = tr.object_id
INNER JOIN 
    sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
INNER JOIN 
    sys.columns cp ON fkc.parent_column_id = cp.column_id AND fkc.parent_object_id = cp.object_id
INNER JOIN 
    sys.columns cr ON fkc.referenced_column_id = cr.column_id AND fkc.referenced_object_id = cr.object_id
ORDER BY
    tp.name, cp.column_id
but that still didn't contain the relation named in my db. I have a sneaking suspicion it's a relation on a temporary table, so I have NFI how I'm going to track that down.

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


NPR Journalizard posted:

I did some more googling and found

code:
SELECT
    fk.name 'FK Name',
    tp.name 'Parent table',
    cp.name, cp.column_id,
    tr.name 'Referenced table',
    cr.name, cr.column_id
FROM 
    sys.foreign_keys fk
INNER JOIN 
    sys.tables tp ON fk.parent_object_id = tp.object_id
INNER JOIN 
    sys.tables tr ON fk.referenced_object_id = tr.object_id
INNER JOIN 
    sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
INNER JOIN 
    sys.columns cp ON fkc.parent_column_id = cp.column_id AND fkc.parent_object_id = cp.object_id
INNER JOIN 
    sys.columns cr ON fkc.referenced_column_id = cr.column_id AND fkc.referenced_object_id = cr.object_id
ORDER BY
    tp.name, cp.column_id
but that still didn't contain the relation named in my db. I have a sneaking suspicion it's a relation on a temporary table, so I have NFI how I'm going to track that down.
Could you run this query in tempdb?

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


You can prefix the sys schema with the database you want to specifically look at (tempdb being one of those, as Nth Doctor mentioned):

code:
select * from tempdb.sys.foreign_keys

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


Anyone know of a T-SQL linter that actually works? I’m addicted to eslint for JS and its integration in Visual Studio code, and I wish there was something similar for SSMS. Our DW team of 10 or so folks struggles with formatting consistency and I think it would be a big win if we could stop having formatting discussions and just apply a standard.

Poor Man's T-SQL Formatter is the closest thing I know of...

Ruggan fucked around with this message at 00:06 on Sep 1, 2019

Tax Oddity
Apr 8, 2007

The love cannon has a very short range, about 2 feet, so you're inevitably out of range. I have to close the distance.
Have you tried Devart's SQL Complete? A few years ago when I compared it, Redgate and a bunch of other products I found it to be the most consistent. It certainly isn't perfect though.

I think they have a free, time-limited trial version.

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Ruggan posted:

Anyone know of a T-SQL linter that actually works? I’m addicted to eslint for JS and its integration in Visual Studio code, and I wish there was something similar for SSMS. Our DW team of 10 or so folks struggles with formatting consistency and I think it would be a big win if we could stop having formatting discussions and just apply a standard.

Poor Man's T-SQL Formatter is the closest thing I know of...

I have a colleague rolling his own, around gudusoft sql parser. I haven't tried it, though.

Mr. Angry
Jan 20, 2012
Suppose I want to create a table with denormalised data in Postgres which would only be updated infrequently. As the database contains millions of rows which would need to be joined and aggregated in windows, trying to do all this in a single query will easily blow out the memory. I'm using a query builder (SQLAlchemy) so would it be bad practice to build the table by repeating the query in an external loop?

Pollyanna
Mar 5, 2005

Milk's on them.


Company policy is to start moving to MySQL for all new databases going forward. We’ve got a Mongo database we therefore want to replace with MySQL, in document store mode. Has anyone worked with/migrated to MySQL document store mode before and have experience or feedback to give? How feasible is it? Was it a difficult migration? Can you still index on nested documents’ attributes in array fields?

Edit: bonus points if you have also tried to use ActiveRecord with MySQL document stores!!

Pollyanna fucked around with this message at 16:17 on Sep 6, 2019

Macichne Leainig
Jul 26, 2012

by VG
Just wondering. If you have several hundred rows of data to insert, what will you do to make it go faster?

I've built up a habit of pasting everything into an Excel doc and concatenating it into an insert statement. No idea if it's actually faster, but it works for me.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Bulk insert https://docs.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017 :whatup:

But if you can't, I'd recommend testing multiple sizes of insert statements to see what your setup can take. I found our lovely MySQL servers had linear performance up to ~2500 records at the job where I was unfortunate enough to use that product. After 2500 it started slowing down, so I just chunked inserts into 2000-record statements.
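Chunking like that usually means multi-row VALUES statements rather than one INSERT per row - a sketch with a made-up table:

code:
-- One statement per chunk of ~2000 rows (table/columns are placeholders)
INSERT INTO my_table (id, name)
VALUES (1, 'a'),
       (2, 'b'),
       (3, 'c');  -- ...continue up to your chunk size, then start a new statement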

Munkeymon fucked around with this message at 19:15 on Sep 6, 2019

spiritual bypass
Feb 19, 2008

Grimey Drawer

Protocol7 posted:

Just wondering. If you have several hundred rows of data to insert, what will you do to make it go faster?

Open a transaction, do your inserts, then commit
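i.e. something like this sketch (my_table is a placeholder), so each row isn't paying for its own commit:

code:
BEGIN TRANSACTION;

INSERT INTO my_table (id, name) VALUES (1, 'a');
INSERT INTO my_table (id, name) VALUES (2, 'b');
-- ...remaining rows...

COMMIT;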

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
Seconding bulk insert, that's always worked best for me.

Some other things you can do, but usually shouldn't:
* Table lock hint
* Change the recovery model to simple or bulk logged (temporarily!)
* Set fill factor on the table to 100
* Turn off constraints (temporarily!)

Don't do any of these without reading up on the implications of each!
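For reference, a minimal BULK INSERT sketch (path, table, and delimiters are placeholders):

code:
-- T-SQL; loads a CSV in one minimally-logged operation
BULK INSERT dbo.MyTable
FROM 'C:\data\rows.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);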

Just-In-Timeberlake
Aug 18, 2003
This is all TSQL

Currently when we make notes on customer accounts the notes are sequential. I'd like to make it so users can respond to a particular note, and respond to that note, etc. I think what I'm looking for is a CTE and recursion to get these all in the right order when queried, but I'm not sure, or if I need another table to handle this.

For this example, my table looks something like this for simplicity's sake:

code:
      NoteId | Note | RespondingToNoteId
So if someone were to respond to NoteId 1, the value 1 would be in RespondingToNoteId

In the end, I'd like my output to look something like this

code:
     Note 1
     |   |---Response to Note 1
     |   |    |---Response to first response
     |   |         |---Response to second response
     |   |---Second response to Note 1
     |   |---Third response to Note 1
     Note 2
If someone could point me in the right direction, or let me know if what I want is possible, many thanks.

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Just-In-Timeberlake posted:

This is all TSQL

Currently when we make notes on customer accounts the notes are sequential. I'd like to make it so users can respond to a particular note, and respond to that note, etc. I think what I'm looking for is a CTE and recursion to get these all in the right order when queried, but I'm not sure, or if I need another table to handle this.

For this example, my table looks something like this for simplicity's sake:

code:
      NoteId | Note | RespondingToNoteId
So if someone were to respond to NoteId 1, the value 1 would be in RespondingToNoteId

In the end, I'd like my output to look something like this

code:
     Note 1
     |   |---Response to Note 1
     |   |    |---Response to first response
     |   |         |---Response to second response
     |   |---Second response to Note 1
     |   |---Third response to Note 1
     Note 2
If someone could point me in the right direction, or let me know if what I want is possible, many thanks.

I'm in a car hundreds of miles from home and a pc.
I banged out a recursive CTE that sorta worked but used a string of delimited noteids to help sort the data, which likely had problems when sorting lexicographically vs. numerically, so I scrapped it and instead found this:
HIERARCHYID data type

I bet this will get you very far along in solving your problem.
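A rough sketch of what that could look like for the notes table (T-SQL; schema and names assumed):

code:
-- Each note stores its position in the tree as a hierarchyid
CREATE TABLE dbo.Notes (
    NoteId   int IDENTITY PRIMARY KEY,
    Note     nvarchar(max),
    NotePath hierarchyid NOT NULL
);

-- Depth-first, thread-respecting order falls out of sorting by the path
SELECT NoteId, Note, NotePath.GetLevel() AS Depth
FROM   dbo.Notes
ORDER BY NotePath;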

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
You could also use a bridge table, i.e. a separate table with AncestorId, DescendantId, and (optionally) depth and other attributes of the relationship if you want.

I usually just use a ParentID + a CTE though.
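A sketch of the ParentID + CTE approach against the NoteId/Note/RespondingToNoteId table from the question (T-SQL; table name assumed; the zero-padded SortPath sidesteps the lexicographic-vs-numeric sorting problem Nth Doctor ran into with plain delimited id strings):

code:
WITH thread AS (
    SELECT NoteId, Note, RespondingToNoteId,
           CAST(RIGHT('0000000000' + CAST(NoteId AS varchar(10)), 10) AS varchar(max)) AS SortPath,
           0 AS Depth
    FROM   Notes
    WHERE  RespondingToNoteId IS NULL
    UNION ALL
    SELECT n.NoteId, n.Note, n.RespondingToNoteId,
           CAST(t.SortPath + '/' + RIGHT('0000000000' + CAST(n.NoteId AS varchar(10)), 10) AS varchar(max)),
           t.Depth + 1
    FROM   Notes n
           INNER JOIN thread t ON n.RespondingToNoteId = t.NoteId
)
SELECT REPLICATE('    ', Depth) + Note AS Display
FROM   thread
ORDER BY SortPath;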

Just-In-Timeberlake
Aug 18, 2003

Nth Doctor posted:

I'm in a car hundreds of miles from home and a pc.
I banged out a recursive CTE that sorta worked but used a string of delimited noteids to help sort the data, which likely had problems when sorting lexicographically vs. numerically, so I scrapped it and instead found this:
HIERARCHYID data type

I bet this will get you very far along in solving your problem.

drat, that looks slick as hell, thanks

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Cold on a Cob posted:

You could also use a bridge table, i.e. a separate table with AncestorId, DescendantId, and (optionally) depth and other attributes of the relationship if you want.

I usually just use a ParentID + a CTE though.

ParentID+CTE was one of my paths, but I still had lots of trouble having all of the related posts sort together in a way that made consistent sense and still respected chronology.

Sir Bobert Fishbone
Jan 16, 2006

Beebort
We're currently running SQL 2014 and SSRS 2014 on our main db server. I would like to upgrade SSRS to the newest and greatest. I'm finding a bunch of conflicting info online, so I'm hoping to get a sanity check by asking some dumb questions:

- Can I upgrade SSRS without upgrading SQL itself?
- Assuming the answer is yes, can i just do an in-place upgrade to SSRS, or do I have to install it side by side and migrate everything over and break all of our report links?

Ruggan
Feb 20, 2007
WHAT THAT SMELL LIKE?!


The guys at our work have done in place upgrades for the last few SSRS upgrades. The only issue has been that the config file seems to have gotten reset (or they failed to port custom configuration forward).

I have no idea whether SSRS is dependent on an equivalent version of SQL, but I would guess the answer is yes.

Tax Oddity
Apr 8, 2007

The love cannon has a very short range, about 2 feet, so you're inevitably out of range. I have to close the distance.
SSRS is not dependent on an equivalent version of SQL Server. In fact, it's not even a part of the SQL Server installer anymore as of 2017, it's a separate download! Although it probably has a minimum SQL Server version requirement.

Edit: I can't actually find anything about which version of SQL Server SSRS requires. It's possible that it's only forward-compatible, not backward-compatible.

Tax Oddity fucked around with this message at 20:30 on Sep 21, 2019

Mycroft Holmes
Mar 26, 2010

by Azathoth
okay, getting an error in my code
code:
 select empno, ename,
  2  case sal
  3  when sal < 1500 then '0%'
  4  when sal > 1499 and sal < 2000 then '3%'
  5  when sal > 1999 and sal < 2500 then '6%'
  6  when sal > 2499 and sal < 3000 then '10%'
  7  when sal > 2999 and sal < 5000 then '15%'
  8  when sal > 4999 and sal < 10000 then '25%'
  9  end tax
 10  from emp;
it's saying i'm missing a keyword in line 3

Mycroft Holmes fucked around with this message at 22:08 on Sep 23, 2019

abelwingnut
Dec 23, 2002


there is no reason to have the 'sal' on line 2. there's also a chance the percents could gently caress with your code, but that depends on which flavor of sql.

and for completeness in the case, you want an ELSE

abelwingnut fucked around with this message at 22:13 on Sep 23, 2019

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
yes, CASE has two variants. One works like a switch statement in C-style languages, the other works like a sequence of if-else-if-...-else. You have provided a column name (sal), which suggests the switch variant, but your "cases" are boolean expressions, which is inconsistent with that.
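Concretely, the searched variant drops the column name after CASE - a sketch of the query above rewritten that way (same conditions, with an ELSE added for completeness):

code:
select empno, ename,
       case
         when sal < 1500 then '0%'
         when sal > 1499 and sal < 2000 then '3%'
         when sal > 1999 and sal < 2500 then '6%'
         when sal > 2499 and sal < 3000 then '10%'
         when sal > 2999 and sal < 5000 then '15%'
         when sal > 4999 and sal < 10000 then '25%'
         else null
       end tax
  from emp;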

SnatchRabbit
Feb 23, 2006

by sebmojo
Cross posting from AWS thread:

I have a client that wants to migrate two MSSQL database servers with 200+ db objects between them to AWS Cloud. Now, up until this point we've been fine using Data Migration Service to move the table data from their on-prem servers into AWS. The problem is that DMS doesn't migrate indexes, users, privileges, stored procedures, and other database changes not directly related to table data. So now we have to generate scripts by hand for these 200+, at minimum, objects. What I'm asking is, is there some hacky way to automate this migration or are we just stuck having to do it all by hand over and over again?

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice

SnatchRabbit posted:

Cross posting from AWS thread:

I have a client that wants to migrate two MSSQL database servers with 200+ db objects between them to AWS Cloud. Now, up until this point we've been fine using Data Migration Service to move the table data from their on-prem servers into AWS. The problem is that DMS doesn't migrate indexes, users, privileges, stored procedures, and other database changes not directly related to table data. So now we have to generate scripts by hand for these 200+, at minimum, objects. What I'm asking is, is there some hacky way to automate this migration or are we just stuck having to do it all by hand over and over again?

I know nothing about aws cloud hosting of MSSQL but I'd try importing the database schema into a visual studio database project, then generating a script from that targeting the instance you're migrating to. There are diff tools you could use to generate the script but I like to import so I can review what's being moved in a more structured way.

Edit: SQL Server Data Tools installs support for database projects in visual studio: https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-2017

Just-In-Timeberlake
Aug 18, 2003

SnatchRabbit posted:

Cross posting from AWS thread:

I have a client that wants to migrate two MSSQL database servers with 200+ db objects between them to AWS Cloud. Now, up until this point we've been fine using Data Migration Service to move the table data from their on-prem servers into AWS. The problem is that DMS doesn't migrate indexes, users, privileges, stored procedures, and other database changes not directly related to table data. So now we have to generate scripts by hand for these 200+, at minimum, objects. What I'm asking is, is there some hacky way to automate this migration or are we just stuck having to do it all by hand over and over again?

a quick Google search uncovered this that might be of help

https://www.mssqltips.com/sqlservertip/4606/generate-tsql-scripts-for-all-sql-server-databases-and-all-objects-using-powershell/

SnatchRabbit
Feb 23, 2006

by sebmojo

Thanks, I'm going to have the dba give this a shot.


TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
I'm like a month late to the join vs "where foo in (subquery)" chat, but, uh. Why would you ever use the subquery form when exists(), well, exists? All the examples that got brought up as use cases for in(subquery) are pretty much textbook examples of when to use a semi-join (well, I guess the in(subquery) is actually a semi-join in practice, huh). Is this some old received wisdom about working around a wonky query planner or something? Everything I've read indicates exists() is basically the fastest way of getting the filtering effect of an inner join without selecting columns from the right-hand table or duplicating rows in the left-hand table, because it's not necessary to find every matching row in the right-hand table - early termination after finding the first row is possible. Am I missing something obvious here? I use exists() a lot because it turns out it's pretty common for me to want to filter by the existence of a relation without duplicating by it.

e: I guess in many cases the id in(subquery) ends up with the same query plan as where exists huh, so it doesn't really matter does it
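For reference, the exists() form of the earlier customers/orders example (today() is pseudocode as before):

SQL code:
-- Semi-join: can stop at the first matching order per customer
select c.*
  from customers c
 where exists (select 1
                 from orders o
                where o.customer_id = c.id
                  and o.OrderDate > today()-30)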

TheFluff fucked around with this message at 23:03 on Sep 30, 2019
