|
Jose posted:Anyone know why when I right click on something in Object Explorer I have to wait like 10 seconds before the menu appears? This is for SQL Server 2016 and it's really annoying, since Object Explorer freezes while I wait, and I can't find anything helpful while googling. If I remote desktop onto the server and use Management Studio there, I have no problems. I have been googling the same issue today; I thought I was the only one. Unfortunately, I don't know how to help.
|
# ? Sep 2, 2016 14:03 |
|
|
|
In SSMS, I'm trying to diagnose an issue where I have a maintenance plan full backup that's failing on one database. When I copy out the T-SQL and run it manually, it backs up fine. Any ideas on what I can check?
|
# ? Sep 2, 2016 14:16 |
|
Are the maintenance plans running with a service account with proper permissions?
|
# ? Sep 5, 2016 16:30 |
|
I've been unsuccessfully trying to figure out how to do a recursive query for the last couple of days and could use some help. This is with Postgres 9.5. I have two tables: categories and items. Categories is a tree structure, with parent_id pointing to another category in the table, or null if it's top-level: code:
code:
code:
code:
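In case it helps to see the shape, here is a minimal sketch of the flat recursive walk, assuming a schema along the lines of categories(id, parent_id, name) and items(id, category_id, name, value) — those names are guesses, not from the post:

```sql
-- Walk the category tree top-down, carrying a full path per row,
-- then attach each category's items. Schema names are assumptions.
WITH RECURSIVE tree AS (
    SELECT id, name, name AS path
    FROM categories
    WHERE parent_id IS NULL
  UNION ALL
    SELECT c.id, c.name, tree.path || '/' || c.name
    FROM categories c
    JOIN tree ON c.parent_id = tree.id
)
SELECT tree.path, i.name, i.value
FROM tree
JOIN items i ON i.category_id = tree.id
ORDER BY tree.path;
```

As the replies below discuss, the walk itself is the easy half; reshaping those flat rows into nested JSON is where it gets awkward.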
|
# ? Sep 6, 2016 16:58 |
|
Minus Pants posted:I've been unsuccessfully trying to figure out how to do a recursive query the last couple days and could use some help. This is with Postgres 9.5. The trouble isn't so much the recursion as it is the structure. What actual rows do you expect to produce that JSON? In a standard row, the field names would be turned into the keys, suggesting that you need field names generated from the data; not so easy. With static nested subqueries you'd just use "AS Wuteveh" to define the names of your child structures, but that won't work here. I'm not saying it's impossible, but you probably don't want to build the rows this way. With simple recursion, it would likely be best to stringify the child so you aren't generating arbitrary-width rows. A function might help you deal with the AS clause, but it still seems more likely that you'll want to consider alternatives. edit: I have built a query to return a report for nested accounts, but the return structure contains the depth, a concatenated full path which is used to ensure proper output sorting, and other things; most are used only by the stored procedure, which uses a FOR loop to build the final formatted rows. Once you jump to that type of function, however, it's presumably indistinguishable from building the basic output rows and then constructing the desired structure with a scripting language. edit two: The first problem you need to solve is: how will you generate {"SubCat1.1.1-Item":1234} from the existing tables? If you can't do that, the recursion is immaterial, and I'm fairly certain that cannot be done with standard SQL (turning values into field names), whence a stored procedure / helper function of some sort is mandated. PhantomOfTheCopier fucked around with this message at 06:29 on Sep 7, 2016
# ? Sep 7, 2016 05:42 |
|
PhantomOfTheCopier posted:edit two: The first problem you need to solve is: How will you generate {"SubCat1.1.1-Item":1234} from the existing tables? If you can't do that, the recursion is immaterial, and I'm fairly certain that cannot be done with standard SQL (turn values into field names), whence a stored procedure / helper function of some sort is mandated. You may be able to do this with some abusive dynamic SQL, but I really don't recommend it. Is there a reason you're trying to create JSON entirely in the database?
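For what it's worth, a single level of "values become keys" is possible in stock Postgres (9.4+) with the json_object_agg aggregate; it's the arbitrary nesting that remains the hard part, as discussed above. Table and column names here are assumptions:

```sql
-- One object per category, with keys derived from item data rather than
-- from column names. Schema names are made up for illustration.
SELECT c.name,
       json_object_agg(i.name || '-Item', i.value) AS items
FROM categories c
JOIN items i ON i.category_id = c.id
GROUP BY c.name;
```

So the dynamic-key half has a built-in answer; stitching those objects into a recursive tree is what pushes you toward a function or the application layer.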
|
# ? Sep 7, 2016 15:08 |
|
Nev posted:I have been googling the same issue today, I thought I was the only one. Unfortunately, I don't know how to help. Good to know I'm not the only one, but googling it has been fruitless for me. Any success?
|
# ? Sep 8, 2016 12:48 |
|
PhantomOfTheCopier posted:The trouble isn't so much the recursion as much as it is the structure. What actual rows do you expect to produce that JSON? In a standard row, the field names would be turned into the keys, suggesting that you need field names generated from the data; not so easy. With static nested subqueries you'd just use "AS Wuteveh" to define the names of your child structures, but that won't work here. Thanks, I'm not used to thinking in SQL and this makes a lot of sense. I'll just write a function. Nth Doctor posted:You may be able to do this with some abusive dynamic SQL, but I really don't recommend it. Is there a reason you're trying to create JSON entirely in the database? The app is spitting out JSON and it's been very convenient to just let Postgres do all the work. No real requirement to keep that logic in the db.
|
# ? Sep 8, 2016 19:09 |
|
Interestingly enough, in this scenario it might be easier to build YAML from the query results, simply because its significant indentation works practically like an outline.
|
# ? Sep 8, 2016 23:27 |
I'm having a conundrum with a phone number query. I need to find matching numbers in the database from user input, but the numbers in the system have formatting in them. So if a user saves a record with "1-800-867-5309 ext. 545", that's exactly what is saved in the database. But if a user types "8675309" in the search, that record would be a valid result. Currently we take the input phone number, strip out non-digits, and insert %% between each number and at the start and end. So the LIKE condition becomes '%8%%6%%7%%5%%3%%0%%9'. The problem with that is, if a user types in just "8675309", it will return numbers like "555-826-7153 ext 109" as a match. We can't change the LIKE to be '%8675309%', because that would invalidate the expected result. It's looking like our only option is to take the number without formatting and save that in a separate column, and do the search against that, but maybe someone else has a better idea.
|
|
# ? Sep 9, 2016 16:05 |
|
Manslaughter posted:I'm having a conundrum with a phone number query. Need to find matching numbers in the database from user input, but the numbers in the system have formatting in them. Would something like this help?
|
# ? Sep 9, 2016 16:27 |
|
Manslaughter posted:It's looking like our only option is to take the number without formatting and save that in a separate column, and do the search against that, but maybe someone else has a better idea. This would be a good idea anyway. Regardless of how exactly you perform the operation of "take this string and skip all non-numeric characters in it", it's better to perform said operation only once (during insert/update operations), rather than on every search. If you want to have the database take charge of the number extraction (rather than the application), you can just use a persisted computed column.
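A sketch of the computed-column idea for SQL Server, with made-up table and column names. Note the REPLACE chain only strips the characters it lists; text like "ext." would need a scalar function, and UDF-backed computed columns can only be persisted if the function is schemabound and deterministic:

```sql
-- Hypothetical table "Contacts" with a formatted "Phone" column.
-- Each REPLACE strips one formatting character; the chain is deterministic,
-- so the column can be PERSISTED.
ALTER TABLE Contacts ADD PhoneDigits AS
    REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(Phone,
        '-', ''), ' ', ''), '(', ''), ')', ''), '.', ''), '+', '')
    PERSISTED;

-- The search then becomes a single clean pattern against one column:
SELECT * FROM Contacts WHERE PhoneDigits LIKE '%8675309%';
```

The leading wildcard still means a scan, but at least the pattern no longer produces the interleaved-digit false positives from the '%8%%6%%7%...' approach.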
|
# ? Sep 9, 2016 17:17 |
Thanks for the suggestions, I've told our team that it's probably best to store the digits in a different column. I can't do any fancy manipulation directly in SQL - I am limited to just what can be done in a LIKE.
|
|
# ? Sep 9, 2016 18:33 |
|
I've never thought about it like this (I've always done: store the number without formatting, then an equals search, LIKE %xxxxxxxx, LIKE %%%xxxxx, etc.), but for the phone number thing, what if you stored the quads and triad in separate columns? The weird LIKE is trying to "solve" partial matches regardless of separators, but I'd imagine trying to do the multi-column approach efficiently would be a nightmare.
|
# ? Sep 10, 2016 00:16 |
|
Consider more cases than you've described here and a possible restriction on behavior. If a user searches for "1234567" do you want it to match "1800-999-1234 ext 567"? And "...999-9123x4567"? Or will you assume a search string of length seven necessarily requires an adjacent "123" and an adjacent "4567"? Others have mentioned some manner of data input or preprocessed storage. I'm questioning your search input validation.
|
# ? Sep 10, 2016 00:17 |
|
Scaramouche posted:I've never thought about it like this (I've always done store number without formatting, then equals search, LIKE %xxxxxxxx, LIKE %%%xxxxx, etc), but for the phone number thing what if you stored the quads and triad in separate columns? The weird LIKE is trying to "solve" partial matches regardless of separators, but I'd imagine trying to do the multi-column approach efficiently would be a nightmare. Did you just make a suggestion, then immediately write it off as "a nightmare"? Anyway this "what if you stored the quads and triad in separate columns" is a terrible idea because it is only going to work with North American phone numbers. The moment someone tries to use the software who is in a country with different conventions for how phone numbers are structured and formatted, it'll fall on its face.
|
# ? Sep 10, 2016 00:28 |
|
This question may be super simple, and I'm just not thinking about it the right way. I have two queries that pull from separate linked servers, using completely different schemas, but the queries return the same-formatted data. So: code:
code:
code:
|
# ? Sep 12, 2016 17:06 |
|
COOL CORN posted:stuff You should be able to just use that query as an inline view and group by it: code:
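A sketch of that shape, with placeholder linked-server, table, and column names: UNION ALL the two linked-server queries inside a derived table, then aggregate the combined rows once.

```sql
-- All four-part names and columns here are invented for illustration.
SELECT t.ItemCode, SUM(t.Qty) AS TotalQty
FROM (
    SELECT ItemCode, Qty
    FROM [LinkedServerA].[DbA].dbo.Inventory
    UNION ALL
    -- Different schema on the second server; aliases line the columns up.
    SELECT item_code AS ItemCode, quantity AS Qty
    FROM [LinkedServerB].[DbB].dbo.stock
) AS t
GROUP BY t.ItemCode;
```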
|
# ? Sep 12, 2016 17:16 |
|
I knew it would be something simple I wasn't thinking about. Thanks!
|
# ? Sep 12, 2016 17:25 |
|
Postgres 9.6 is adding indexed searches of the cube datatype. Any idea of the space complexity or how slow inserts are going to be on that? I want to put an index of features in a cube, stored as zeros and ones in a volumeless cube where each bit is a dimension (let's say like 16 bits or 64 bits per string). Then I want to be able to query and return k-nearest-neighbor rows from a given point. By using taxicab distance I should be able to search for KNN for the Hamming distance. Is this going to be remotely scalable, is there a better way to do this, etc? I'd prefer Postgres if possible. Paul MaudDib fucked around with this message at 04:25 on Sep 14, 2016
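For reference, a sketch of how that query could look on 9.6, where <#> is the cube extension's taxicab (L1) distance operator — which does equal Hamming distance when every coordinate is 0 or 1. Table and column names are made up, and the example uses 4-bit vectors for brevity:

```sql
CREATE EXTENSION IF NOT EXISTS cube;

CREATE TABLE feature_vectors (
    id   serial PRIMARY KEY,
    bits cube NOT NULL          -- a zero-volume point: one 0/1 per dimension
);
CREATE INDEX ON feature_vectors USING gist (bits);

INSERT INTO feature_vectors (bits)
VALUES (cube(ARRAY[1,0,1,1])), (cube(ARRAY[0,0,1,0]));

-- k nearest neighbors of a query point by Hamming distance,
-- served by the GiST index's KNN ordering:
SELECT id, bits <#> cube(ARRAY[1,0,0,1]) AS hamming
FROM feature_vectors
ORDER BY bits <#> cube(ARRAY[1,0,0,1])
LIMIT 10;
```

Worth noting that the cube extension caps dimensions (100 by default), so 16- or 64-bit vectors are comfortably within range.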
# ? Sep 14, 2016 04:18 |
|
Paul MaudDib posted:Postgres 9.6 is adding indexed searches of the cube datatype. Any idea of the space complexity or how slow inserts are going to be on that? According to Waiting for 9.6 – cube extension KNN support, merely adding the index cut a sample worst-case select against one million three-point rows from over five minutes to under two seconds. There's no mention of write performance, though. I can't imagine it'd be that much worse than a normal index. 9.6 RC1 is out now, you may as well just test it and find out yourself.
|
# ? Sep 14, 2016 05:39 |
|
I posted this in the enterprise windows thread, but this is probably a better spot: I'm not a SQL guy. I got asked to troubleshoot some blocked processes on a server. It triggered an alarm when it reported 30 blocked processes a week ago. For the most part, it's usually zero with occasionally 2-3. If I stare at the process monitor I'll occasionally see a few pop up. My understanding is that blocking is just a normal consequence of locking. I get the impression that blocks aren't a problem unless they increase wait times. Is there a way to check logs or set up a trigger to create a log entry when there's a block? I'm mostly looking for a way to say "Hey, since this parameter wasn't affected, this is just a thing that happens sometimes, don't worry about it." Or, if it's actually a problem, fix it. I've spent some time staring at the process monitor, but when blocks show up, they disappear so quickly I can only get a screenshot, and that's no good.
|
# ? Sep 16, 2016 00:57 |
|
Assuming you're using SQL Server, you could run SQL Server Profiler (under Tools in Management Studio) and leave it up for a day. It keeps an ongoing log of everything going on, and there's a template for locking. You can even set it to just log blocked processes. But yeah, as long as it's not making any queries last hours, or killing transactions because of a deadlock, it shouldn't be a big deal.
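If you'd rather not babysit a full trace, there's also the server-level blocked process threshold setting (in seconds). Once it's set, SQL Server emits a "Blocked process report" event for any block that outlasts the threshold, which Profiler (under Errors and Warnings) or an Extended Events session can capture:

```sql
-- Report any block lasting longer than 5 seconds (0 = disabled, the default):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;
```

That way the short, transient blocks never generate noise, and anything logged is by definition over your chosen wait-time bar.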
|
# ? Sep 16, 2016 19:31 |
|
Mr. Swoon posted:Assuming you're using SQL Server, you could run SQL Server Profiler (under Tools in Management Studio) and leave it up for a day. It keeps an ongoing log of everything going on, and there's a template for locking. You can even set it to just log blocked processes. THIS! This is exactly what I was looking for! Thank you so much! Hopefully I set it up correctly and don't log a bunch of garbage
|
# ? Sep 16, 2016 19:56 |
|
This is a really stupid SQL question, but it arose in a class I was taking the other day and got me thinking. (Not a homework assignment, just a bit of fun.) Is it possible to calculate a median from just a column of numbers without using window functions (like ROW_NUMBER) or non-standard result-limiting features (like SELECT TOP 50 PERCENT)? I've only been able to do it by assuming the table contains a primary key (or some other field(s) that can uniquely identify a particular row); otherwise that self-join blows up when a value is repeated. Efficiency and scalability are not important. Here's my best effort. code:
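For comparison, here is one standard-SQL shape that tolerates repeated values and needs no key: a value is a median candidate when at least half the rows are <= it and at least half are >= it, and averaging the candidates handles even row counts. The table and column names (t, val) are placeholders:

```sql
-- Cross join the column against itself, group by candidate value, and keep
-- candidates where both "half the rows" conditions hold. Duplicates cancel
-- out because each group's counts scale by the same multiplicity.
SELECT AVG(1.0 * v) AS median
FROM (
    SELECT a.val AS v
    FROM t AS a CROSS JOIN t AS b
    GROUP BY a.val
    HAVING SUM(CASE WHEN b.val <= a.val THEN 1 ELSE 0 END) >= COUNT(*) / 2.0
       AND SUM(CASE WHEN b.val >= a.val THEN 1 ELSE 0 END) >= COUNT(*) / 2.0
) AS m;
```

For {1,2,3,4} the candidates are 2 and 3 (median 2.5); for {1,2,2,3} the only candidate is 2. Quadratic, but the post already waived efficiency.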
|
# ? Sep 16, 2016 20:17 |
|
Am I allowed to use OFFSET-FETCH with an ORDER BY, or is that considered a non-standard result-limiting feature? If I can: code:
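Assuming OFFSET/FETCH is allowed (it is in the SQL:2011 standard, though support varies), the shape would be: skip to the middle, fetch one row for an odd count or two for an even count, then average. Table and column names are placeholders, and the scalar subqueries in OFFSET/FETCH rely on SQL Server 2012+ behavior:

```sql
SELECT AVG(1.0 * val) AS median
FROM (
    SELECT val
    FROM t
    ORDER BY val
    OFFSET (SELECT (COUNT(*) - 1) / 2 FROM t) ROWS       -- skip to the middle
    FETCH NEXT (SELECT 2 - COUNT(*) % 2 FROM t) ROWS ONLY -- 1 row if odd, 2 if even
) AS m;
```

With five rows that's OFFSET 2 / FETCH 1 (the third value); with four rows it's OFFSET 1 / FETCH 2 (average of the middle pair).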
|
# ? Sep 16, 2016 20:42 |
|
Probably going to crosspost this in the Enterprise Windows thread as it is semi-sysadmin-related, but here goes. I have MS SQL Server in an AOAG setup. I added the SSRS report server database to the AOAG and configured an SSRS scale-out deployment. Everything is working as expected with the exception of accessing the SSRS URL through the SQL listener DNS name (http://sql-listener/reportserver instead of using http://sql1/reportserver and http://sql2/reportserver). I assume this probably isn't supported, but are there any workarounds or recommendations?
|
# ? Sep 18, 2016 22:52 |
|
Hint: Consider an ordered list with an odd number of elements, the same list reversed, and at which indices the elements will match. Edit: Fairly certain a similar trick with >= and LIMIT can be used for even cardinality. Edit2: Yeah it works in a test script with random inputs, and it's just easier to use one algorithm for both cardinalities. (Still could be some weird edge case I haven't seen). Surely someone already invented this? PhantomOfTheCopier fucked around with this message at 23:27 on Sep 20, 2016 |
# ? Sep 19, 2016 06:31 |
|
For SQL Server 2012: If I have URL parameters stored in a column, such as below code:
Can I do arithmetic on just the numbers directly somehow, or do I need to parse out the numbers, perform calcs, then rebuild the string? I'm dealing with time, so every number needs to be divided by 60 to get an hour from the parameters (i.e. from above: 540/60 = 9 am on a 24-hour cycle). This isn't my application, so I have to work with the string stored within the database to convert it to something legible to end users.
|
# ? Sep 19, 2016 20:01 |
|
You'll need to parse and rebuild
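A hedged sketch of the parse-and-rebuild in T-SQL. The parameter layout used here ('start=540') is invented, so the extraction step will differ for the real format; the minutes-to-clock conversion is the reusable part:

```sql
-- Hypothetical parameter string; real extraction depends on the stored format.
DECLARE @params varchar(100) = 'start=540';
DECLARE @mins int = CAST(
    SUBSTRING(@params, CHARINDEX('=', @params) + 1, 10) AS int);

-- Minutes since midnight -> 'hh:mm' on a 24-hour clock:
SELECT RIGHT('0' + CAST(@mins / 60 AS varchar(2)), 2) + ':'
     + RIGHT('0' + CAST(@mins % 60 AS varchar(2)), 2) AS clock_time;
```

So 540 comes out as '09:00', and you'd splice results like that back into the display string after doing the arithmetic on the extracted integers.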
|
# ? Sep 19, 2016 20:06 |
|
At work I've had to create a new table with the same structure as a preexisting one in SQL Server. I did it using the export wizard in SQL Server Management Studio and using a new table in the same database as destination. However, I'm running into snags now because the new table doesn't have the same indexes, identity column, and so on as the original table, in other words the structure hasn't been fully duplicated. For my future reference, what's the equivalent in SQL Server of MySQL's "CREATE TABLE newTable LIKE oldTable"?
|
# ? Sep 21, 2016 12:30 |
|
Hammerite posted:At work I've had to create a new table with the same structure as a preexisting one in SQL Server. I did it using the export wizard in SQL Server Management Studio and using a new table in the same database as destination. However, I'm running into snags now because the new table doesn't have the same indexes, identity column, and so on as the original table, in other words the structure hasn't been fully duplicated. It's been a while since I've used MSSQL, but I think in Management Studio you can right click on the table, do "Create Script..." or something like that, and in the options you can have it include indexes and whatnot. You edit the script to create the table and indexes with new names. Then you can use the export wizard to populate the data.
|
# ? Sep 21, 2016 15:03 |
|
Jethro posted:It's been a while since I've used MSSQL, but I think in Management Studio you can right click on the table, do "Create Script..." or something like that, and in the options you can have it include indexes and whatnot. You edit the script to create the table and indexes with new names. Then you can use the export wizard to populate the data. It's "Generate Scripts" under Tasks when you right click a DB. Click on whatever tables you want, hit Next, then hit Advanced to select scripting out indexes, FKs, etc. If the table doesn't have too many rows, you can also set "Types of data to script" to "Schema and data" to include an insert for all the data. Edit: You can right click the table to script just that part, same with each of the other objects. I was just thinking of the easiest way to do it all at once. Mr. Swoon fucked around with this message at 17:06 on Sep 21, 2016 |
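If only the column layout matters, there's also a one-line scripted shortcut worth knowing: SELECT INTO copies column definitions — including the IDENTITY property and nullability — but not indexes, constraints, defaults, or triggers, which is exactly the gap the Generate Scripts route closes. Table names here are placeholders:

```sql
-- Creates an empty dbo.NewTable with dbo.OldTable's columns and IDENTITY,
-- but none of its indexes, keys, or constraints:
SELECT TOP 0 *
INTO dbo.NewTable
FROM dbo.OldTable;
```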
# ? Sep 21, 2016 16:56 |
|
Hoping to get some advice! Not a DBA, more of a sysadmin, but this stuff gets pushed my way regardless, so forgive me if I have incorrect wording/terminology. Anyway, we have an SSRS report which takes about 50 seconds to load; if we then reload the same report, it is pretty much instant. If we wait an hour or so and load the report again, the 50-second load time happens again, with subsequent loads pretty much instant. I looked at the SSRS execution log and I am seeing it takes 40 seconds for the TimeDataRetrieval field. I assume that's the root cause, as it queries the database and the query results are held in some sort of SQL buffer/cache, which is why subsequent queries are instant if you reload the same report shortly afterwards. Is there any way to force SQL to hold these SSRS queries in memory longer? Do you have any recommendations to get around this issue? Also, is there any simple way to find out exactly what query the report is making, so I can execute it in a query window? I want to find out how many rows/tables/records the query touches so I can have some proof for management, because as far as they're concerned, it's a pretty "simple report." Thanks!
|
# ? Sep 29, 2016 01:45 |
|
There are Snapshot settings in the Manage menu in SSRS that will let you run the report at whatever interval you want and cache the results for however long you want. To get the query, you should just have to download the .rdl from the hosted report and open it in Visual Studio, or at the very least, you can use Notepad++ or whatever text editor to rip the query out of it
|
# ? Sep 29, 2016 03:54 |
|
Moundgarden posted:There's Snapshot settings in the Manage menu in SSRS that will let you run the report at whatever interval you want and cache the results for however long you want. The thing is, the report is called from Dynamics CRM. Would it automatically call the snapshot report? Also, the report has parameters (i.e. user and inventory item number); would it snapshot all the possible results from those parameters?
|
# ? Sep 29, 2016 04:49 |
|
I've never worked with Dynamics so I can't comment on that sorry Regarding the parameters, the snapshot mechanism I'm familiar with just uses whatever defaults the report has. If it has no defaults I don't think it will run
|
# ? Sep 29, 2016 14:47 |
|
Any idea which SQL Profiler events I should be using to try to trace SSRS data retrieval? I tried a couple, including RPC:Completed, RPC:Starting, SP:Starting, SP:Completed, SQL:BatchStarting, and SQL:BatchCompleted, and I am not seeing anything with a long duration. Would it be a bad idea to just add all the events, then do a trace? I can repeat the issue, which takes literally maybe 3 minutes.
|
# ? Oct 6, 2016 04:20 |
|
MSSQL question: We have a series of triggers we use to add records to tracking tables so that we can keep track of what's been inserted/updated/deleted, since we deal with a lot of lab data. So, for instance... If TableA has ColumnID, ColumnA, ColumnB, ColumnC trackTableA would have TrackingID, ColumnID, ColumnA, ColumnB, ColumnC, Action trgTableAInsert would insert the Inserted values from TableA into trackTableA, along with a value of "Insert" for the Action column, each time TableA is inserted into. Now, here's my question. We have a bunch of these triggers for a lot of tables, all basically with the same code, but for different table schemas. I'm looking to script out some generic, unified stored procedure that can be called, but I'm not sure how I can handle that. Is there a way to script it out so that I can change it to be something like trgTableAinsert calls uspGenericTracking([Inserted]), which takes the Inserted values and inserts them into the appropriate tracking table? I'd think that'd be fairly straightforward, except that the tables are all different, so there's no way to easily script out the "INSERT INTO (xxx, xxx, xxx, xxx) VALUES (xxx, xxx, xxx, xxx)" part. Maybe with dynamic SQL? Is this a job for SSIS? I admittedly don't know much about SSIS. Count Thrashula fucked around with this message at 15:57 on Oct 7, 2016 |
# ? Oct 7, 2016 15:55 |
|
|
|
COOL CORN posted:MSSQL question: This looks like a job for Change Data Capture
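For reference, enabling CDC is a pair of system procedures (SQL Agent must be running, and on 2016 RTM and earlier this is an Enterprise/Developer-edition feature). The table name here is taken from the post's example:

```sql
-- Turn on Change Data Capture for the database, then per table:
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableA',
    @role_name     = NULL;   -- NULL = no gating role on the change data
```

Inserts, updates, and deletes then land automatically in the change table (cdc.dbo_TableA_CT by default), with the __$operation column playing the role of the hand-rolled Action column — no per-table triggers needed.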
|
# ? Oct 7, 2016 17:03 |