Tax Oddity
Apr 8, 2007

The love cannon has a very short range, about 2 feet, so you're inevitably out of range. I have to close the distance.

Jose posted:

Anyone know why when I right click on something in Object Explorer I have to wait like 10 seconds before the menu appears? This is for SQL Server 2016 and it's really annoying, since Object Explorer freezes while I wait and I can't find anything helpful while googling. If I remote desktop onto the server and use Management Studio there, I have no problems.

I have been googling the same issue today, I thought I was the only one. Unfortunately, I don't know how to help.

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
In SSMS, I'm trying to diagnose an issue where a maintenance plan full backup is failing on one database. When I copy out the T-SQL and run it manually, the backup completes fine.

Any ideas on what I can check?

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Are the maintenance plans running with a service account with proper permissions?

Minus Pants
Jul 18, 2004
I've been unsuccessfully trying to figure out how to do a recursive query the last couple days and could use some help. This is with Postgres 9.5.

I have two tables: categories and items. Categories is a tree structure, with parent_id pointing to another category in the table, or null if it's top-level:
code:
-- Categories
id    parent_id    name      
1     null         Cat1
2     1            SubCat1.1
3     null         Cat2
4     2            SubCat1.1.1
The items table ties items to the categories. Each category can have both category and item children.
code:
-- Items
id    category_id    name               value
1     1              Cat1-Item1         1234
2     1              Cat1-Item2         1234
3     2              SubCat1.1-Item     1234
4     4              SubCat1.1.1-Item   1234
5     3              Cat2-Item          1234
My goal is to get a query to return a list of categories and items in a JSON structure. Something like:
code:
[ 
	"Cat1": [ 
		"SubCat1.1": [ 
			"SubCat1.1-Item":1234,
			"SubCat1.1.1": [
				"SubCat1.1.1-Item":1234
			]
		], 
		"Cat1-Item1":1234, 
		"Cat1-Item2":1234
	], 

	"Cat2": [
		"Cat2-Item":1234
	] 
]
I've been playing with this snippet from Stackoverflow to at least build the category tree, but it seems to only work on the deepest leaves, and I'm not sure why. Any suggestions? Should I just script this rather than try pure SQL?

code:
WITH RECURSIVE c AS (
    SELECT *, 0 as lvl
    FROM categories
    WHERE parent_id IS NULL
  UNION ALL
    SELECT categories.*, c.lvl + 1 as lvl
    FROM categories
    JOIN c ON categories.parent_id = c.id
),
-- select max level
maxlvl AS (
  SELECT max(lvl) maxlvl FROM c
),
-- accumulate children
j AS (
    SELECT c.*, json '[]' children -- at max level, there are only leaves
    FROM c, maxlvl
    WHERE lvl = maxlvl
  UNION ALL
    -- a little hack, because PostgreSQL doesn't like aggregated recursive terms
    SELECT (c).*, array_to_json(array_agg(j)) children
    FROM (
      SELECT c, j
      FROM j
      JOIN c ON j.parent_id = c.id
    ) v
    GROUP BY v.c
)
-- select only root
SELECT row_to_json(j) json_tree
FROM j
WHERE lvl = 0;

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Minus Pants posted:

I've been unsuccessfully trying to figure out how to do a recursive query the last couple days and could use some help. This is with Postgres 9.5.

I have two tables: categories and items. Categories is a tree structure, with parent_id pointing to another category in the table, or null if it's top-level:

Something like:
code:
[ 
	"Cat1": [ 
		"SubCat1.1": [ 
			"SubCat1.1-Item":1234,
			"SubCat1.1.1": [
				"SubCat1.1.1-Item":1234
			]
		], 
		"Cat1-Item1":1234, 
		"Cat1-Item2":1234
	], 

	"Cat2": [
		"Cat2-Item":1234
	] 
]

The trouble isn't so much the recursion as much as it is the structure. What actual rows do you expect to produce that JSON? In a standard row, the field names would be turned into the keys, suggesting that you need field names generated from the data; not so easy. With static nested subqueries you'd just use "AS Wuteveh" to define the names of your child structures, but that won't work here.

I'm not saying it's impossible, but you probably don't want to build the rows this way. With simple recursion, it would likely be best to stringify the child so you aren't generating arbitrary-width rows. A function might help you deal with the AS clause, but it still seems more likely that you'll want to consider alternatives.

edit: I have built a query to return a report for nested accounts, but the return structure contains the depth, a concatenated full-path which is used to ensure proper output sorting, and other things; most are used only by the stored procedure, which uses a FOR LOOP to build the final formatted rows. Once you jump to that type of function, however, it's presumably indistinguishable from building the basic output rows and then constructing the desired structure with a scripting language.

edit two: The first problem you need to solve is: How will you generate {"SubCat1.1.1-Item":1234} from the existing tables? If you can't do that, the recursion is immaterial, and I'm fairly certain that cannot be done with standard SQL (turn values into field names), whence a stored procedure / helper function of some sort is mandated.

PhantomOfTheCopier fucked around with this message at 06:29 on Sep 7, 2016
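For what it's worth, Postgres 9.4+ (so including the 9.5 in the original post) does ship aggregates that turn column *values* into JSON *keys*: json_object_agg() and json_build_object(). They don't remove the "no aggregates in the recursive term" limitation, so a plpgsql function is still the likely endgame for arbitrary depth, but a minimal one-level sketch against the tables from the post would look like this:

```sql
-- Sketch only, using the categories/items names from the post:
-- json_object_agg(key_value, payload) emits {"key_value": payload, ...},
-- which is exactly the "values into field names" step. This covers one
-- level of nesting; arbitrary depth still wants a recursive function.
SELECT json_object_agg(c.name, kids.j) AS tree
FROM categories c
JOIN LATERAL (
    SELECT json_object_agg(i.name, i.value) AS j
    FROM items i
    WHERE i.category_id = c.id
) kids ON true
WHERE c.parent_id IS NULL;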

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


PhantomOfTheCopier posted:

edit two: The first problem you need to solve is: How will you generate {"SubCat1.1.1-Item":1234} from the existing tables? If you can't do that, the recursion is immaterial, and I'm fairly certain that cannot be done with standard SQL (turn values into field names), whence a stored procedure / helper function of some sort is mandated.

You may be able to do this with some abusive dynamic SQL, but I really don't recommend it. Is there a reason you're trying to create JSON entirely in the database?

Jose
Jul 24, 2007

Adrian Chiles is a broadcaster and writer

Nev posted:

I have been googling the same issue today, I thought I was the only one. Unfortunately, I don't know how to help.

Good to know I'm not the only one, but googling it has been fruitless for me. Any success?

Minus Pants
Jul 18, 2004

PhantomOfTheCopier posted:

The trouble isn't so much the recursion as much as it is the structure. What actual rows do you expect to produce that JSON? In a standard row, the field names would be turned into the keys, suggesting that you need field names generated from the data; not so easy. With static nested subqueries you'd just use "AS Wuteveh" to define the names of your child structures, but that won't work here.

Thanks, I'm not used to thinking in SQL and this makes a lot of sense. I'll just write a function.

Nth Doctor posted:

You may be able to do this with some abusive dynamic SQL, but I really don't recommend it. Is there a reason you're trying to create JSON entirely in the database?

The app is spitting out JSON and it's been very convenient to just let Postgres do all the work. No real requirement to keep that logic in the db.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Interestingly enough, in this scenario it might be easier to build YAML from the query results, simply because YAML's indentation works practically like outline levels.

Polio Vax Scene
Apr 5, 2009



I'm having a conundrum with a phone number query. Need to find matching numbers in the database from user input, but the numbers in the system have formatting in them.

So if a user saves a record with "1-800-867-5309 ext. 545", that's exactly what is saved in the database.
But if a user types "8675309" in the search, that record would be a valid result.

Currently we take the input phone number, strip out non-digits, and insert %% between each number and at the start and end.
So the LIKE condition becomes '%8%%6%%7%%5%%3%%0%%9'.
The problem with that is, if a user types in just "8675309", it will return numbers like "555-826-7153 ext 109" as a match.
We can't change the LIKE to be '%8675309%', because that would invalidate the expected result.

It's looking like our only option is to take the number without formatting and save that in a separate column, and do the search against that, but maybe someone else has a better idea.

Sir Bobert Fishbone
Jan 16, 2006

Beebort

Manslaughter posted:

I'm having a conundrum with a phone number query. Need to find matching numbers in the database from user input, but the numbers in the system have formatting in them.

So if a user saves a record with "1-800-867-5309 ext. 545", that's exactly what is saved in the database.
But if a user types "8675309" in the search, that record would be a valid result.

Currently we take the input phone number, strip out non-digits, and insert %% between each number and at the start and end.
So the LIKE condition becomes '%8%%6%%7%%5%%3%%0%%9'.
The problem with that is, if a user types in just "8675309", it will return numbers like "555-826-7153 ext 109" as a match.
We can't change the LIKE to be '%8675309%', because that would invalidate the expected result.

It's looking like our only option is to take the number without formatting and save that in a separate column, and do the search against that, but maybe someone else has a better idea.

Would something like this help?

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Manslaughter posted:

It's looking like our only option is to take the number without formatting and save that in a separate column, and do the search against that, but maybe someone else has a better idea.

This would be a good idea anyway. Regardless of how exactly you perform the operation of "take this string and skip all non-numeric characters in it", it's better to perform said operation only once (during insert/update operations), rather than on every search. If you want to have the database take charge of the number extraction (rather than the application), you can just use a persisted computed column.
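As a rough sketch of the computed-column idea (T-SQL, against a hypothetical Contacts table with a Phone column — names are illustrative, not from the thread): a chain of REPLACEs is deterministic, so it can be persisted, but it only covers the formatting characters you expect; a schema-bound scalar UDF would be the general-purpose version.

```sql
-- Persist a digits-only shadow of the phone column and search against it.
-- Note: this only strips the characters listed; text like "ext." would
-- need its own handling (or a UDF that keeps only [0-9]).
ALTER TABLE Contacts ADD PhoneDigits AS
    REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
        Phone, '-', ''), ' ', ''), '(', ''), ')', ''), '.', '')
    PERSISTED;

CREATE INDEX IX_Contacts_PhoneDigits ON Contacts (PhoneDigits);

-- The search then becomes a plain substring match:
-- SELECT * FROM Contacts WHERE PhoneDigits LIKE '%8675309%';
```

A leading-wildcard LIKE still can't seek the index, but the match itself is now exact on the digit sequence, which fixes the false positives from the '%8%%6%%7%...' pattern.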

Polio Vax Scene
Apr 5, 2009



Thanks for the suggestions, I've told our team that it's probably best to store the digits in a different column.
I can't do any fancy manipulation directly in SQL - I am limited to just what can be done in a LIKE.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

I've never thought about it like this (I've always stored the number without formatting, then done an equals search, LIKE '%xxxxxxxx', LIKE '%%%xxxxx', etc.), but for the phone number thing, what if you stored the quads and the triad in separate columns? The weird LIKE is trying to "solve" partial matches regardless of separators, but I'd imagine trying to do the multi-column approach efficiently would be a nightmare.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Consider more cases than you've described here and a possible restriction on behavior. If a user searches for "1234567" do you want it to match "1800-999-1234 ext 567"? And "...999-9123x4567"? Or will you assume a search string of length seven necessarily requires an adjacent "123" and an adjacent "4567"?

Others have mentioned some manner of data input or preprocessed storage. I'm questioning your search input validation.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Scaramouche posted:

I've never thought about it like this (I've always stored the number without formatting, then done an equals search, LIKE '%xxxxxxxx', LIKE '%%%xxxxx', etc.), but for the phone number thing, what if you stored the quads and the triad in separate columns? The weird LIKE is trying to "solve" partial matches regardless of separators, but I'd imagine trying to do the multi-column approach efficiently would be a nightmare.

Did you just make a suggestion, then immediately write it off as "a nightmare"? Anyway this "what if you stored the quads and triad in separate columns" is a terrible idea because it is only going to work with North American phone numbers. The moment someone tries to use the software who is in a country with different conventions for how phone numbers are structured and formatted, it'll fall on its face.

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
This question may be super simple, and I'm just not thinking about it the right way. I have two queries that pull from separate linked servers, using completely different schemas, but the queries return the same-formatted data. So:

code:
SELECT
<logic> AS WorkType,
<aggregate logic> AS WorkMonth,
<count logic> AS NumberOfWorkOrders
FROM LinkedServerA
WHERE <logic>
GROUP BY <aggregate month logic>

UNION


SELECT
<logic> AS WorkType,
<aggregate logic> AS WorkMonth,
<count logic> AS NumberOfWorkOrders
FROM LinkedServerB
WHERE <logic>
GROUP BY <aggregate month logic>
Would return something like

code:
Work Type      | WorkMonth | NumberOfWorkOrders
---------------+-----------+-------------------
Hydrant Repair | 2016-07   | 20
Hydrant Repair | 2016-07   | 30
But what I WANT is to return something that combines the results of the two servers, like:

code:
Work Type      | WorkMonth | NumberOfWorkOrders
---------------+-----------+-------------------
Hydrant Repair | 2016-07   | 50
I can't join the base data together because the schemas are so different, but is there a way to join the results? Do I need to use a temp table and query/group on that?

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy

You should be able to just use that query as an inline view and group by it:

code:
SELECT   x.worktype, 
         x.workmonth, 
         Sum(x.numberofworkorders) 
FROM     ( 
                  SELECT   <logic> AS worktype, <aggregate logic> AS workmonth, <count logic> AS numberofworkorders
                  FROM     linkedservera 
                  WHERE    <logic> 
                  GROUP BY <aggregate month logic> 
                  UNION 
                  SELECT   <logic> AS worktype, <aggregate logic> AS workmonth, <count logic> AS numberofworkorders
                  FROM     linkedserverb 
                  WHERE    <logic> 
                  GROUP BY <aggregate month logic> ) x 
GROUP BY x.worktype, 
         x.workmonth

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
I knew it would be something simple I wasn't thinking about. Thanks!

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Postgres 9.6 is adding indexed searches of the cube datatype. Any idea of the space complexity or how slow inserts are going to be on that?

I want to put an index of features in a cube, stored as zeros and ones in a volumeless cube where each bit is a dimension (let's say like 16 bits or 64 bits per string). Then I want to be able to query and return k-nearest-neighbor rows from a given point.

By using taxicab distance I should be able to do a KNN search on Hamming distance. Is this going to be remotely scalable, is there a better way to do this, etc.? I'd prefer Postgres if possible.

Paul MaudDib fucked around with this message at 04:25 on Sep 14, 2016

McGlockenshire
Dec 16, 2005

GOLLOCKS!

Paul MaudDib posted:

Postgres 9.6 is adding indexed searches of the cube datatype. Any idea of the space complexity or how slow inserts are going to be on that?

According to Waiting for 9.6 – cube extension KNN support, merely adding the index cut a sample worst-case select against one million three-point rows from over five minutes to under two seconds. There's no mention of write performance, though. I can't imagine it'd be that much worse than a normal index.

9.6 RC1 is out now, you may as well just test it and find out yourself.
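To make the testing concrete: in 9.6 the cube extension grows GiST-backed distance operators (<-> Euclidean, <#> taxicab, <=> Chebyshev), and on 0/1 coordinates taxicab distance equals Hamming distance. A minimal sketch, assuming a hypothetical features table:

```sql
-- Index-assisted KNN over bit-vector cubes (Postgres 9.6, cube extension).
CREATE EXTENSION IF NOT EXISTS cube;
CREATE TABLE features (id serial PRIMARY KEY, bits cube);
CREATE INDEX features_bits_idx ON features USING gist (bits);

SELECT id
FROM features
ORDER BY bits <#> cube(array[0,1,1,0])  -- taxicab = Hamming for 0/1 points
LIMIT 10;                               -- k nearest neighbors

-- Caveat: cube is compiled with a dimension cap (100 by default), so 16- or
-- 64-bit feature strings fit, but not much beyond that.
```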

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I posted this in the enterprise windows thread, but this is probably a better spot:

I'm not a SQL guy.

I got asked to troubleshoot some blocked processes on a server. It triggered an alarm when it reported 30 blocked processes a week ago. For the most part, it's usually zero with occasionally 2-3. If I stare at the process monitor I'll occasionally see a few pop up.

My understanding is that blocking is just a normal consequence of locking.

I get the impression that blocks aren't a problem unless they increase wait times.

Is there a way to check logs or set up a trigger to create a log entry when there's a block?

I'm mostly looking for a way to say "Hey, since this parameter wasn't affected, this is just a thing that happens sometimes, don't worry about it."

Or, if it's actually a problem, fix it.

I've spent some time staring at the process monitor, but when blocks show up, they disappear so quickly I can only get a screenshot, and that's no good.

Mr. Swoon
Nov 25, 2003

ALL WE DO IS PARTY

Assuming you're using SQL Server, you could run SQL Server Profiler (under Tools in Management Studio) and leave it up for a day. It keeps an ongoing log of everything going on, and there's a template for locking. You can even set it to just log blocked processes.

But yeah, as long as it's not making any queries last hours, or killing transactions because of a deadlock, it shouldn't be a big deal.
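One knob worth knowing here: rather than staring at the monitor, you can have the server itself emit a report whenever a block lasts longer than N seconds, and capture that event in Profiler. A sketch:

```sql
-- Emit a "blocked process report" for any block lasting >= 5 seconds.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 5;  -- seconds; 0 disables
RECONFIGURE;

-- Then in Profiler, trace the Errors and Warnings > Blocked process report
-- event: each hit includes the blocked and blocking statements as XML.
```

That gives you a persistent log of only the blocks that actually exceeded your wait-time tolerance, which is exactly the "was this ever a real problem" evidence being asked for.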

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin

Mr. Swoon posted:

Assuming you're using SQL Server, you could run SQL Server Profiler (under Tools in Management Studio) and leave it up for a day. It keeps an ongoing log of everything going on, and there's a template for locking. You can even set it to just log blocked processes.

But yeah, as long as it's not making any queries last hours, or killing transactions because of a deadlock, it shouldn't be a big deal.

THIS! This is exactly what I was looking for!
Thank you so much!

Hopefully I set it up correctly and don't log a bunch of garbage

saint gerald
Apr 17, 2003
This is a really stupid SQL question, but it arose in a class I was taking the other day and got me thinking. (Not a homework assignment, just a bit of fun.)

Is it possible to calculate a median from just a column of numbers without using window functions (like ROW_NUMBER) or non-standard result limiting features (like SELECT TOP 50 PERCENT)? I've only been able to do it by assuming the table contains a primary key (or some other field(s) that can uniquely identify a particular row); otherwise the self-join blows up when a value is repeated. Efficiency and scalability are not important.

Here's my best effort.

code:
with
ranks as (select nums1.table_PK, nums1.Quantity, count(*) as rnk
				from nums as nums1, nums as nums2
				where (nums2.Quantity < nums1.Quantity) or (nums2.Quantity = nums1.Quantity and nums2.table_PK <= nums1.table_PK)
				group by nums1.table_PK, nums1.Quantity
			),
themiddle as (select ((count(*)*1.0)/2 + 0.5) as rn from ranks)
select avg(Quantity) as Median from ranks
inner join themiddle on rnk = floor(rn)
             or rnk = ceiling(rn);

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy
Am I allowed to use OFFSET-FETCH in an ORDER BY, or is that considered a non-standard result limiting feature? If I can:
code:
declare @test table
(
ID int
)
/* fill this with whatever garbage */
insert into @test (ID)
select 1
union
select 2
union
select 3
union 
select 50
union 
select 51
union
select 52
union
select 53
union
select 54

declare @numberOfRows int = (select count(*) from @test)
declare @numberToSkip int

if cast(@numberOfRows as decimal(18,2))%2 = 1 -- odd number of rows
	begin
	set @numberToSkip = @numberOfRows / 2
	select ID from @test order by ID offset @numberToSkip rows fetch first 1 rows only
	end
else -- even number of rows
	begin
	declare @IDsToAverage table
	(
	ID int
	)
	set @numberToSkip = @numberOfRows / 2 - 1
	insert into @IDsToAverage (ID)	
	select ID from @test order by ID offset @numberToSkip rows fetch first 2 rows only
	select avg(cast(ID as decimal(18,2))) from @IDsToAverage
	end
I'm sure there's a better way to do that, but it works, I think.

lol internet.
Sep 4, 2007
the internet makes you stupid
Probably going to crosspost this in the Enterprise Windows thread as it is semi sysadmin related but here goes.

I have MS SQL Server in a AOAG setup. I added SSRS report server database to the AOAG setup and configured a SSRS scale out deployment.

Everything is working as expected with the exception of accessing the SSRS URL through the SQL listener DNS name. (http://sql-listener/reportserver instead of using http://sql1/reportserver and http://sql2/reportserver)

I assume this probably isn't supported, but are there any workarounds or recommendations?

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Hint: Consider an ordered list with an odd number of elements, the same list reversed, and at which indices the elements will match.

Edit: Fairly certain a similar trick with >= and LIMIT can be used for even cardinality.

Edit2: Yeah it works in a test script with random inputs, and it's just easier to use one algorithm for both cardinalities. (Still could be some weird edge case I haven't seen). Surely someone already invented this?

PhantomOfTheCopier fucked around with this message at 23:27 on Sep 20, 2016
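The hint above (an ordered list, its reversal, and where the indices meet) can be written as a pure self-join with no window functions, no result limiting, and no primary key at all: a value is a median element exactly when at least half the rows are <= it and at least half are >= it. A sketch, assuming the nums(Quantity) table from the earlier post:

```sql
-- Counting trick: doubling both sides avoids fractional comparisons, and
-- averaging all qualifying rows handles both parities and repeated values.
SELECT AVG(n.Quantity * 1.0) AS Median
FROM nums n
WHERE 2 * (SELECT COUNT(*) FROM nums WHERE Quantity <= n.Quantity)
        >= (SELECT COUNT(*) FROM nums)
  AND 2 * (SELECT COUNT(*) FROM nums WHERE Quantity >= n.Quantity)
        >= (SELECT COUNT(*) FROM nums);
```

For an odd count exactly the middle value(s) qualify; for an even count the two middle values qualify and the AVG splits the difference, duplicates included.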

kloa
Feb 14, 2007


For SQL Server 2012:

If I have URL parameters stored in a column, such as below

code:

&w=540, 1020&f=660, 1125

Can I do arithmetic on just the numbers directly somehow, or do I need to parse out the numbers, perform the calcs, then rebuild the string?

I'm dealing with time, so every number needs to be divided by 60 to get an hour from the parameters (i.e. from above: 540/60 = 9 am on a 24 hour cycle).

This isn't my application, so I have to work with the string stored within the database to convert it to something legible to end users.

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy
You'll need to parse and rebuild :(

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
At work I've had to create a new table with the same structure as a preexisting one in SQL Server. I did it using the export wizard in SQL Server Management Studio and using a new table in the same database as destination. However, I'm running into snags now because the new table doesn't have the same indexes, identity column, and so on as the original table, in other words the structure hasn't been fully duplicated.

For my future reference, what's the equivalent in SQL Server of MySQL's "CREATE TABLE newTable LIKE oldTable"?

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

Hammerite posted:

At work I've had to create a new table with the same structure as a preexisting one in SQL Server. I did it using the export wizard in SQL Server Management Studio and using a new table in the same database as destination. However, I'm running into snags now because the new table doesn't have the same indexes, identity column, and so on as the original table, in other words the structure hasn't been fully duplicated.

For my future reference, what's the equivalent in SQL Server of MySQL's "CREATE TABLE newTable LIKE oldTable"?
It's been a while since I've used MSSQL, but I think in Management Studio you can right click on the table, do "Create Script..." or something like that, and in the options you can have it include indexes and whatnot. You edit the script to create the table and indexes with new names. Then you can use the export wizard to populate the data.

Mr. Swoon
Nov 25, 2003

ALL WE DO IS PARTY

Jethro posted:

It's been a while since I've used MSSQL, but I think in Management Studio you can right click on the table, do "Create Script..." or something like that, and in the options you can have it include indexes and whatnot. You edit the script to create the table and indexes with new names. Then you can use the export wizard to populate the data.

It's "Generate Scripts" under Tasks when you right click a DB. Click on whatever tables you want, hit Next, then hit Advanced to select scripting out indexes, FKs, etc. If the table doesn't have too many rows, you can also set "Types of data to script" to "Schema and data" to include an insert for all the data.

Edit: You can right click the table to script just that part, same with each of the other objects. I was just thinking of the easiest way to do it all at once.

Mr. Swoon fucked around with this message at 17:06 on Sep 21, 2016

lol internet.
Sep 4, 2007
the internet makes you stupid
Hoping to get some advice/help! Not a DBA, more of a sysadmin, but this stuff gets pushed my way regardless, so forgive me if I have incorrect wording/terminology.

Anyways, we have an SSRS report which takes about 50 seconds to load; if we reload the same report, it is pretty much instant. If we wait an hour or so and load the report again, the 50-second load time happens again, with subsequent loads pretty much instant.

I looked at the SSRS execution log and I am seeing it takes 40 seconds for the TimeDataRetrieval field.

I assume the root cause is the underlying database query: the results get stored in some sort of SQL buffer/cache, which is why subsequent queries are instant if you reload the same report shortly afterwards.

Is there any way to force SQL to hold these SSRS queries in memory longer? Do you have any recommendations to get around this issue?

Also, is there any simple way to find out exactly what query the report is running, so I can execute it in a query window? I want to find out how many rows/tables/records the query touches so I can have some proof for management. Because as far as they're concerned, it's a pretty "simple report."

Thanks!

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy
There's a Snapshot setting in the Manage menu in SSRS that will let you run the report at whatever interval you want and cache the results for however long you want.

To get the query, you should just have to download the .rdl from the hosted report and open it in Visual Studio, or at the very least, use Notepad++ or another text editor to rip the query out of it.

lol internet.
Sep 4, 2007
the internet makes you stupid

Moundgarden posted:

There's Snapshot settings in the Manage menu in SSRS that will let run the report at whatever interval you want and cache the results for however long you want.

To get the query, you should just have to download the .rdl from the hosted report and open it in visual studio, or at the very least, you can use notepad++ or whatever text editor to rip out the query from it

The thing is, the report is called from Dynamics CRM. Would it automatically use the snapshot?

Also, the report has parameters (i.e. user and inventory item number); would it snapshot all the possible results for those parameters?

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy
I've never worked with Dynamics, so I can't comment on that, sorry :(

Regarding the parameters, the snapshot mechanism I'm familiar with just uses whatever defaults the report has. If it has no defaults, I don't think it will run.

lol internet.
Sep 4, 2007
the internet makes you stupid
Any idea which SQL Profiler filters I should be using to try to trace SSRS data retrieval?

I tried a couple, including RPC:Completed, RPC:Starting, SP:Starting, SP:Completed, SQL:BatchStarting, and SQL:BatchCompleted, and I am not seeing anything with a long duration.

Would it be a bad idea to just add all the filters, then do a trace? I can repeat the issue, which takes literally maybe 3 minutes.

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord
MSSQL question:
We have a series of triggers we use to add records to tracking tables so that we can keep track of what's been inserted/updated/deleted, since we deal with a lot of lab data. So, for instance...

If TableA has ColumnID, ColumnA, ColumnB, ColumnC
trackTableA would have TrackingID, ColumnID, ColumnA, ColumnB, ColumnC, Action
trgTableAInsert would insert the Inserted values from TableA into trackTableA, along with a value of "Insert" for the Action column, each time TableA is inserted into.

Now, here's my question. We have a bunch of these triggers for a lot of tables, all basically with the same code, but for different table schemas. I'm looking to script out some generic, unified stored procedure that can be called, but I'm not sure how I can handle that. Is there a way to script it out so that I can change it to be something like

trgTableAInsert calls uspGenericTracking([Inserted]), which takes the Inserted values and inserts them into the appropriate tracking table? I'd think that'd be fairly straightforward, except that the tables are all different, so there's no easy way to script out the "INSERT INTO (xxx, xxx, xxx, xxx) VALUES (xxx, xxx, xxx, xxx)" part. Maybe with dynamic SQL? Is this a job for SSIS? I admittedly don't know much about SSIS.

Count Thrashula fucked around with this message at 15:57 on Oct 7, 2016

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


COOL CORN posted:

MSSQL question:
We have a series of triggers we use to add records to tracking tables so that we can keep track of what's been inserted/updated/deleted, since we deal with a lot of lab data. So, for instance...

If TableA has ColumnID, ColumnA, ColumnB, ColumnC
trackTableA would have TrackingID, ColumnID, ColumnA, ColumnB, ColumnC, Action
trgTableAInsert would insert the Inserted values from TableA into trackTableA, along with a value of "Insert" for the Action column, each time TableA is inserted into.

Now, here's my question. We have a bunch of these triggers for a lot of tables, all basically with the same code, but for different table schemas. I'm looking to script out some generic, unified stored procedure that can be called, but I'm not sure how I can handle that. Is there a way to script it out so that I can change it to be something like

trgTableAinsert calls uspGenericTracking([Inserted]), which takes the Inserted values and inserts them into the appropriate tracking table? I'd think that'd be fairly straightforward, except that the tables are all different, so there's no way to easily script out the "INSERT INTO (xxx, xxx, xxx, xxx) VALUES (xxx, xxx, xxx, xxx)" part. Maybe with dynamic SQL? Is this a job for SSIS? I admittedly don't know much about SSIS.

This looks like a job for Change Data Capture
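To sketch what that looks like (Change Data Capture requires SQL Server 2008+, and Enterprise edition before 2016 SP1): CDC reads the transaction log, so you get one generic mechanism per table instead of hand-maintained trigger code per table.

```sql
-- Enable CDC on the database, then on each table you want tracked.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableA',
    @role_name     = NULL;          -- no gating role

-- Changes then appear in cdc.dbo_TableA_CT, with __$operation flagging
-- delete (1), insert (2), and update (3 = before image, 4 = after image).
SELECT * FROM cdc.dbo_TableA_CT;
```

That __$operation column plays the role of the hand-rolled "Action" column, without any per-schema INSERT scripting.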
