|
Mashi posted:My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way? The way I'm doing it right now is: I can think of ways to do that with dynamic SQL but what's the point? It will cost just as much and probably make your code longer. With the way your table is set up (which I'd hate) you're pretty much doing 3 scans any way you slice it. This type of data should have been pre-determined and stored horizontally... or in XML... but you probably already know that...
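A conditional-aggregation pivot is one way to read vertically stored data back horizontally in a single scan. This is only a sketch with made-up table and column names, using SQLite through Python rather than the poster's actual database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attrs (item_id INT, attr_name TEXT, attr_value TEXT);
INSERT INTO attrs VALUES
  (1, 'color', 'red'), (1, 'size', 'L'), (1, 'weight', '2kg');
""")

# Pivot the vertical rows into one horizontal row with conditional aggregation
row = conn.execute("""
    SELECT item_id,
           MAX(CASE WHEN attr_name = 'color'  THEN attr_value END) AS color,
           MAX(CASE WHEN attr_name = 'size'   THEN attr_value END) AS size,
           MAX(CASE WHEN attr_name = 'weight' THEN attr_value END) AS weight
    FROM attrs
    GROUP BY item_id
""").fetchone()
print(row)  # (1, 'red', 'L', '2kg')
```

Each CASE picks out one attribute, so the whole thing is one pass over the table instead of one scan per attribute.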
|
# ¿ Dec 15, 2007 10:08 |
|
|
Triple Tech posted:Maybe I'm misunderstanding something. Can INSERT insert multiple rows? So are you INSERTing the SELECTion of the temporary table into the main table? I have no idea what you're TRYing to ask. I really don't think you're asking if INSERT can do this? code:
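For reference, INSERT can take the result of a SELECT and insert every row it returns in one statement. A minimal sketch with hypothetical staging/main tables, using SQLite through Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staging (id INT, name TEXT);
CREATE TABLE main    (id INT, name TEXT);
INSERT INTO staging VALUES (1, 'a'), (2, 'b'), (3, 'c');

-- one statement moves every row the SELECT returns
INSERT INTO main SELECT id, name FROM staging;
""")

count = conn.execute("SELECT COUNT(*) FROM main").fetchone()[0]
print(count)  # 3
```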
benisntfunny fucked around with this message at 05:17 on Dec 20, 2007 |
# ¿ Dec 20, 2007 05:14 |
|
Xae posted:For fuck's sake people, learn to use non-full/inner/equal joins. I think my favorite was something I had found that was written to generate some kind of drop down menu. It was something to the effect of code:
|
# ¿ Jan 1, 2008 01:44 |
|
bloodsoup posted:So, I would like to find out: If that's the case, and you have all these tables this page might answer some questions: http://www.devx.com/dbzone/Article/29585/1954?pf=true Do you have descriptions of the fields you use? I doubt that it's in the database itself, given there are 10,000 tables as you claim, but does your company use a data modeling program? If your database is in there it might provide descriptions in the event that the database had to integrate with a new one.
|
# ¿ Jan 5, 2008 02:00 |
|
Golbez posted:The following search: I don't work with MySQL but if I were writing this in SQL Server it would be code:
EDIT: oh and depending on my data and the word I was searching I might consider... code:
code:
I think you have a lot of different options to choose from depending on exactly what kind of data you're looking at, and how this query will be executed. benisntfunny fucked around with this message at 15:00 on Jan 7, 2008 |
# ¿ Jan 7, 2008 14:45 |
|
chocojosh posted:I think you want the between clause. Call me crazy but I don't think that's at all what he's looking for. He's asking how to stem words, not look for the items that fall in the middle of them.
|
# ¿ Jan 7, 2008 14:47 |
|
cletus42o posted:I got around it by just getting a list of league IDs, and looping over that and only summing up the results a league at a time, but I'd prefer to have it all in one query. Anyone have any ideas? Well, I would have written a loop myself with dynamic SQL just doing a UNION over and over. Probably a horrible approach. I did a quick search and found: http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/ By the way, I don't think your inner join works at all. Your inner join selects 15 points... unless I'm reading something wrong. I probably am.
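The xaprb link above covers the max-row-per-group pattern; joining back against a grouped subquery is the classic shape. A sketch with invented league/score names, using SQLite through Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores (league_id INT, player TEXT, points INT);
INSERT INTO scores VALUES
  (1, 'ann', 10), (1, 'bob', 15), (2, 'cho', 7), (2, 'dee', 12);
""")

# join each league's max back to the detail rows to get the top row per league
rows = conn.execute("""
    SELECT s.league_id, s.player, s.points
    FROM scores s
    JOIN (SELECT league_id, MAX(points) AS max_points
          FROM scores GROUP BY league_id) m
      ON m.league_id = s.league_id AND m.max_points = s.points
    ORDER BY s.league_id
""").fetchall()
print(rows)  # [(1, 'bob', 15), (2, 'dee', 12)]
```

One query, no loop, and ties would simply return multiple rows per league.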
|
# ¿ Jan 10, 2008 01:58 |
|
Ardhanari posted:Replace the last two WHERE conditions with a BETWEEN? That really shouldn't make much of a difference at all... that query is nothing... sounds like your machine must just really be a piece of poo poo. Sort of like when I had Dreamhost hosting my MySQL databases...
|
# ¿ Jan 12, 2008 18:24 |
|
deimos posted:Or it's misconfigured (using my-small.cnf or something similar). http://www.forbiddenweb.org/topic/12493/index.html or.. Also to be sure it's actually using the index you could force it on there. http://dev.mysql.com/doc/refman/5.1/en/index-hints.html I've run into trouble with some of my SQL Server queries being "optimized" to run without utilizing an index. I'm sure this problem isn't exclusive to Microsoft.
|
# ¿ Jan 12, 2008 20:44 |
|
minato posted:Option 2) Use a single table: Option 2 is basically how my company does logging enterprise wide. Not that everything we do is right... lots of times it's not... but it seems to work pretty well overall.
|
# ¿ Jan 16, 2008 05:52 |
|
SQL Server 2005... I have a query that is far too long to paste here. Basically we have a large content database and I wrote a query to audit on all the potential issues we have with content on a daily basis... The Title equals the Description... odd characters finding their way into the content etc... One of the items I have in there is a hits count based on a summary table with about 3,000,000 rows give or take a million. The hits table has an index on the date. As a query this all completes in about 20 minutes. Using the exact same query, a stored procedure version takes over 5 hours before I stopped it because it was annoying me. When the stored procedure executes it chooses NOT to use the index on my hits table. I've tried recompiling and giving the procedure table hints but it doesn't seem to matter. Any ideas? This sucks. benisntfunny fucked around with this message at 20:28 on Sep 23, 2008 |
# ¿ Sep 23, 2008 20:26 |
|
Victor posted:Do you request data from this 3mil record table via subquery or derived table/CTE expression? Here's what I run. code:
|
# ¿ Sep 26, 2008 17:34 |
|
Begby posted:You are joining two tables on a key that is stored in a text field that you are converting to an integer. I can't see how that could ever be fast, I can't imagine it using any index at all since you are using convert. The key is not a text field. It's a date field. I left something out of this example. It should actually say

[code]
INSERT INTO @Hits (TMP_ContentId, TMP_Hits) --temp table
SELECT CNT1.CNT_CONTENT_ID, SUM(ISNULL(EVENT_TYPE_CNT, 0))
FROM dbo.EVENT_TABLE
INNER JOIN dbo.BUT_CONTENT AS CNT1
    ON CONVERT(INT, ISNULL(TEXT, 0)) = CNT1.CNT_CONTENT_ID
WHERE TIME_STAMP BETWEEN @StartDate AND @EndDate -- INDEX COLUMN
  AND EVENT_TYPE_ID = @Hit --code value for hits
  AND TEXT = CNT1.CNT_CONTENT_ID
  AND USER_ROLE_ID IN (SELECT item FROM parse_comma_delimited_list_in(@Roles, ','))
  AND CNT1.ACTIVE_DATE > GETDATE()
[/code]
|
# ¿ Sep 28, 2008 18:18 |
|
Begby posted:If you only have one index then that is a big reason why it is running slow, among other things. 3 million records really isn't all that much, SQL server should be able to handle this kind of query. The issue here though is your field types and indexes. You will probably want to build a composite index with all the items in your where clause and play with the order. The query above to calculate hits for 9000 items on a three month span only takes 40 seconds or so to complete as a query. Over 2 hours as a stored procedure with the exact same code. benisntfunny fucked around with this message at 14:03 on Sep 29, 2008 |
# ¿ Sep 29, 2008 14:00 |
|
Aredna posted:I apologize if this was covered as I may have missed it, but are you also doing the insert when you run it as a query or only when you are running it as a stored procedure? If you are only using the insert in the stored procedure perhaps it's related to that. The code is exactly the same between the stored procedure and the query. Again this is just a snippet of the stored procedure. I'm only including this because everything ran fine as a stored proc before adding this additional code... also the analyzer tells me this is where it's getting stuck.
|
# ¿ Sep 29, 2008 16:43 |
|
Is there a better way to do this? tsql I have entries being made into the database something like this... code:
So my result is code:
code:
EDIT: This is dumb, can't I just do the exact same thing I'm doing in the delete but the opposite and handle it in the select? EDIT2: No. benisntfunny fucked around with this message at 23:54 on Oct 14, 2009 |
# ¿ Oct 14, 2009 23:32 |
|
ShoulderDaemon posted:
it's an example... albeit, since I gave a number as a varchar, you might wonder how good. MAX(DATE) didn't work for me. I also have quite a few more columns being returned.
|
# ¿ Oct 14, 2009 23:43 |
|
Sprawl posted:Max(Date) should do exactly what you want unless its not stored in the database as a date. code:
|
# ¿ Oct 15, 2009 00:00 |
|
ShoulderDaemon posted:You probably don't want to group by the column you are trying to take the maximum of. Probably shouldn't. You're right. Thanks. I knew it was dumb. EDIT: well actually it didn't work on my main select, but it works enough for me to do this a cleaner way. benisntfunny fucked around with this message at 00:10 on Oct 15, 2009 |
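The point about grouping: if the date column appears in the GROUP BY, every distinct date gets its own row, so group only by the key and take MAX of the date. A small sketch with made-up data, SQLite via Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entries (item TEXT, updated TEXT);
INSERT INTO entries VALUES
  ('a', '2009-10-01'), ('a', '2009-10-14'), ('b', '2009-10-05');
""")

# group by the key only; MAX collapses each item's dates to the latest one
rows = conn.execute(
    "SELECT item, MAX(updated) FROM entries GROUP BY item ORDER BY item"
).fetchall()
print(rows)  # [('a', '2009-10-14'), ('b', '2009-10-05')]
```

Adding `updated` to the GROUP BY would instead return all three rows, which is the mistake being discussed.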
# ¿ Oct 15, 2009 00:06 |
|
Is there some type of linkedserver or OPENROWSET type thing for Oracle 10g so I can query a SQL Server within the editor window?
|
# ¿ Jan 27, 2010 02:46 |
|
Corsair Jr posted:Does anyone know of a means for SQL Server 2008 to optimize the datatypes for a static set of data? If you plan on leaving them all as varchars you can measure each column: SELECT MAX(LEN(Column1)), MAX(LEN(Column2)), MAX(LEN(Column3)), etc. If you think some are INT only, try changing it to INT and see if it works... Same for datetime etc. You'll likely want to create an index or two depending on what your data actually is. There might be some other ways but doing it this way will let you get a better idea of what is in there and help you know what you should be indexing. Unless the data was computer generated I often find this approach helps me find bad data to clean when I try to change columns from one datatype to another and it doesn't work.
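To make the measuring step concrete: MAX(LEN(...)) gives the widest value per column, and a pattern check finds rows that would break an INT conversion. A sketch with an invented table, using SQLite through Python (LENGTH and GLOB here stand in for SQL Server's LEN and LIKE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE imported (col1 TEXT, col2 TEXT);
INSERT INTO imported VALUES ('12345', 'hello'), ('42', 'a much longer value');
""")

# widest value per column tells you how small the varchar can be
widths = conn.execute(
    "SELECT MAX(LENGTH(col1)), MAX(LENGTH(col2)) FROM imported"
).fetchone()
print(widths)  # (5, 19)

# rows that would break a conversion of col1 to INT (any non-digit character)
bad = conn.execute(
    "SELECT COUNT(*) FROM imported WHERE col1 GLOB '*[^0-9]*'"
).fetchone()[0]
print(bad)  # 0
```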
|
# ¿ Jan 27, 2010 16:39 |
|
Agrikk posted:Another SQL Server 2008 question: Select userid, activedate, count(*) from sometable Where activedate between somedate and dateadd(dd,42,somedate) And eventx = 'whatever' Group by userid, activedate At least I think that's what you're asking. If your dates include time then that's not going to be very fun. You might want to do something like convert(varchar,activedate,101) This sucks typing on iPad. --- looking at your example in your edit.... It doesn't match up with what you've said. You don't need that first item in your where clause since it's handled by an ON statement and you don't have to use BETWEEN like I have but you might want to do something like <= >= benisntfunny fucked around with this message at 06:09 on May 9, 2011 |
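Roughly what that query does, sketched with invented event data in SQLite via Python; DATE() plays the role of convert(varchar,activedate,101) to strip the time portion:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (userid INT, activedate TEXT, eventx TEXT);
INSERT INTO events VALUES
  (1, '2011-05-01 08:00', 'whatever'),
  (1, '2011-05-01 17:30', 'whatever'),
  (2, '2011-05-02 09:15', 'whatever'),
  (1, '2011-07-01 09:00', 'whatever');  -- outside the 42-day window
""")

# count events per user per day inside a 42-day window, ignoring the time part
rows = conn.execute("""
    SELECT userid, DATE(activedate) AS d, COUNT(*)
    FROM events
    WHERE eventx = 'whatever'
      AND activedate BETWEEN '2011-05-01' AND DATE('2011-05-01', '+42 days')
    GROUP BY userid, DATE(activedate)
    ORDER BY userid, d
""").fetchall()
print(rows)  # [(1, '2011-05-01', 2), (2, '2011-05-02', 1)]
```

Without stripping the time, each distinct timestamp would get its own group, which is the "not going to be very fun" part.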
# ¿ May 9, 2011 06:02 |
|
Aredna posted:This is on SQL Server 2008. I'm optimizing a query and want to test the performance of several different indexes. The issue is that the table is loaded into cache after the first run increasing the speed by 50x. local server? Obviously it won't be exactly the same but you might be able to get a general idea.
|
# ¿ May 11, 2011 01:42 |
|
Agrikk posted:I actually said fuckit and am now looking at a product called SQL Data Compare Pro. It does data comparisons between two databases and can be called via command line (helpful for scheduled tasks...) Visual Studio Team Data Edition does it. I mean if you guys are interested spending money... But yeah - from the last guy's point. It sounds like you could probably just use MERGE and an Agent Job. Probably doesn't even need SSIS.
|
# ¿ May 13, 2011 17:10 |
|
Internet Explorer posted:Okay SQL megathread. I have not read the entire thread and the first post isn't much of a Megathread info post, so I am sorry if this question has been asked before. We have some front-office / back-office software that is used for a lot of financial reporting and that sort of thing to the higher ups who run the company. The problem is, my boss is the only "DBA" and he does everything by hand. Every time they need some numbers he types of an SQL query or uses an old query and edits it. There has to be some piece of software out there that will help him build reports that are essentially templates, or are easy enough for the end-user to find the data they need with dropdown boxes, etc. Something like Crystal Reports, I guess? I do not have much experience in this department. Please recommend me something because it sucks having a boss who is supposed to be managing things and spends his entire day reinventing the wheel. Thank you! I don't really see anything wrong with what your boss is doing in theory. Sounds like he just sucks at SQL.
|
# ¿ May 20, 2011 21:56 |
|
Aredna posted:This is on SQL Server 2008. I received a performance tip to change my queries from the form of "select ... into #temp" to "insert into #temp select ..." because the latter uses fewer locking resources. On SQL Server I've always used INSERT INTO #PandaBear SELECT... on Oracle I do the opposite. Reasoning? None. I also prefer using variables for my temp tables in SQL Server* (i.e. @PandaBear instead). *Unless it's a metric gently caress ton of data then I'd probably stick with a temp table. benisntfunny fucked around with this message at 00:19 on May 24, 2011 |
# ¿ May 24, 2011 00:17 |
|
bewilderment posted:I have that basic fundamental knowledge - it's the course I did about a year ago. I know that the code I posted is bad, it's practically pseudocode and not intended to be the actual thing I'd type, I was just typing quickly. I wouldn't call the database every time someone selects a menu item. You should build out the structure of your drop down menu in the database and have your application take in the whole structure and cache it for a given amount of time. You can do the structure a few ways... a table like SuperSimpleWay code:
code:
edit: to the other guy's point though... and I hear your reasoning... but even your pseudo code is pretty bad. You might want to really review some of your coursework prior to coding. The question of "Can you use an IF statement?" alone is enough for me to think you need to brush up on your SQL skills. edit more: I know I said "I wouldn't call the database every time", but what I meant to say was "DO NOT WRITE YOUR STORED PROC LIKE THIS. DO NOT CALL THE DATABASE EVERY TIME." If you can't figure out how to do the caching method where you build a menu and store it then find an alternative outside the DB like storing your menu in an XML config file somewhere on your server... hard code it into your page... do something where you're not hitting the database repeatedly like that. If your web site was large you would be opening tons of connections to perform that simple operation. It would get especially bad with people like me who will sometimes find menu items I want using arrow keys... type in the first letter of the menu item then press the down arrow till I get where I'm going. You'd hit the DB every single time I skip over an entry and run that rather lovely SQL just to find the next menu item. benisntfunny fucked around with this message at 13:37 on May 24, 2011 |
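The cache-it-once approach from above, sketched: pull the whole self-referencing menu table in one query, then build the tree in application memory. Table and column names are made up; SQLite via Python stands in for the real database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Menu (MenuId INTEGER PRIMARY KEY, ParentId INTEGER, Label TEXT);
INSERT INTO Menu VALUES
  (1, NULL, 'File'), (2, NULL, 'Edit'),
  (3, 1, 'Open'), (4, 1, 'Save'), (5, 2, 'Undo');
""")

# one query, then the whole structure lives in application memory and gets cached
tree = {}
for menu_id, parent_id, label in conn.execute(
        "SELECT MenuId, ParentId, Label FROM Menu ORDER BY MenuId"):
    tree.setdefault(parent_id, []).append(label)

print(tree[None])  # ['File', 'Edit']  (top-level items)
print(tree[1])     # ['Open', 'Save']  (children of 'File')
```

Every keystroke and arrow press then reads from the cached dict, never from the database.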
# ¿ May 24, 2011 11:33 |
|
epswing posted:I'm starting to write tests for this database-backed application, and now realize the need for a fresh database every time the test suite is run. You need a brand new database every time? You don't just need a clean database with default data? Why not create two databases that are identical? One is the default state and one is your test. Upon starting a new test run dump the data in your test DB and move your default state data over. Also you could do it all with one database and a script that inserts the necessary data depending on how much you have. Creating a new database EVERY TIME a test suite is run seems like a bit much.
|
# ¿ May 24, 2011 18:13 |
|
epswing posted:By dump I guess you mean DELETE FROM or TRUNCATE. This isn't so easy with a normalized database. The tables need to be emptied in a particular order so as to not break referential integrity. Also, when the schema changes, the test database needs to stay in sync. I'm very familiar with normalized databases and doing exactly what I mentioned. In fact I do just that activity in many test environments daily. I use DELETE just in case the copying screws up halfway and needs to roll back. I'm not sure what you mean by "I don't have to do anything"; it sounds to me like you have to create a database and copy a schema over. Anyway - people will do things in whatever way they like best. You might want to look into this: http://msdn.microsoft.com/en-us/library/aa258257(v=sql.80).aspx edit: Additionally you can write a script that drops all of the FKs and reinserts them after everything is done -- so yeah. This is essentially what Microsoft's VS Data Edition does for schema/data compares. benisntfunny fucked around with this message at 19:07 on May 24, 2011 |
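On the emptying-in-order point: deleting child rows before parent rows keeps referential integrity intact without touching the FKs at all. A minimal sketch with invented tables, SQLite through Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child  (id INTEGER PRIMARY KEY,
                     parent_id INTEGER REFERENCES parent(id));
INSERT INTO parent VALUES (1);
INSERT INTO child VALUES (1, 1);
""")

# children first, then parents: no FK violation at any point
conn.execute("DELETE FROM child")
conn.execute("DELETE FROM parent")

counts = conn.execute(
    "SELECT (SELECT COUNT(*) FROM parent), (SELECT COUNT(*) FROM child)"
).fetchone()
print(counts)  # (0, 0)
```

Reversing the two DELETEs would fail while the child row still references the parent, which is exactly the ordering problem being discussed.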
# ¿ May 24, 2011 19:04 |
|
epswing posted:I'm 99% sure prikey implies index. You'd be 100% correct. quote:A PRIMARY KEY is a unique index where all key columns must be defined as NOT NULL. If they are not explicitly declared as NOT NULL, MySQL declares them so implicitly (and silently). A table can have only one PRIMARY KEY. If you do not have a PRIMARY KEY and an application asks for the PRIMARY KEY in your tables, MySQL returns the first UNIQUE index that has no NULL columns as the PRIMARY KEY.
|
# ¿ Jun 1, 2011 17:29 |
|
Agrikk posted:Import/Export wizard was being stupid. I did just that and it worked just fine. Import/Export only reads a portion of the initial rows to guess at the datatype when reading flat files which always seems to lead to fuckage.
|
# ¿ Jun 9, 2011 05:16 |
|
Thel posted:- SQL (derp) -- Depending on what version the company is using you'll want to know a little of what's in/not in 2000 vs 2005 vs 2008 - what it takes to upgrade and why they'd want to aside from support from MS. -- Server health. Identifying/solving problems going on in the server. (locks, stopped replication, applications opening too many connections, failed agent jobs)
|
# ¿ Jun 13, 2011 14:03 |
|
Lyon posted:Random question, I work for a software company and we sell an information system focused on laboratories. Obviously we use the crap out of databases and SQL for the back end of our product. Generic database stuff could be useful but it would be best to focus on the specific type of database you're using. MySQL, Oracle, MS SQL Server? ... DB2? Even though relational DB concepts are pretty universal there are topics specific to the type of DB you're on.
|
# ¿ Jun 20, 2011 14:04 |
|
Thel posted:MS SQL Server 2005. If we're only talking about 1000 rows it would probably be easiest and safest to store it in a variable before populating into the table code:
EDIT: Also interesting that you said the table is only 1000 rows, because AD queries only support 1000 results at a time. Are you sure the table isn't actually much larger than 1000 and you're just not getting all of the results? You may have to change your query to grab data in chunks. benisntfunny fucked around with this message at 15:25 on Jun 22, 2011 |
# ¿ Jun 22, 2011 15:17 |
|
Bob Morales posted:What can I do about slow MySQL importing? Could you setup replication to the other database? http://dev.mysql.com/doc/refman/5.1/en/replication.html Even if you didn't always want data in server B you could setup a different schema then just take locally from B to B to populate data when you wanted it.
|
# ¿ Jun 23, 2011 14:29 |
|
Bob Morales posted:I am under the assumption that if we used replication, that once we wrote to the database on server B, we'd have to delete it and start over from a new dump in order to get back in sync. Here's what I'm suggesting... and I'm not sure how many places you can replicate to in MySQL so it may not work. Keep your replication to Server A Replicate another copy to Server B in a new schema Limit the ID in Server B to read only on that new schema Delete your current test tables and repopulate with data from new schema that is a fully replicated copy of prod as needed. No reason you would have to delete anything from those tables. benisntfunny fucked around with this message at 02:24 on Jun 24, 2011 |
# ¿ Jun 23, 2011 23:27 |
|
Bob Morales posted:I got it down to 8 minutes, but apparently that's not fast enough. That's one of our biggest databases, our other ones take 20 seconds to dump and a minute and 20 seconds to import. So you're going to rule out the option I gave you which would allow you to locally copy the data over? - which is without a doubt going to be your fastest option. We typically copy about 4-5GB of data using a similar method in under a minute. How are you defining the DB as big? Also if the issue is that 8 minutes is too long because you're locking the tables during that time and you don't want to do the replication method I suggested, I would at least dump the data into an alternative schema pre-load then load from those tables locally to reduce locking time. edit: also - if you have any DB auditing type columns in your data you could easily do a compare to only grab the data you need and update existing information where the modified date is greater than the current modified date. benisntfunny fucked around with this message at 02:28 on Jun 24, 2011 |
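The modified-date compare in that edit can be sketched as an upsert that only overwrites rows whose incoming modified date is newer. The tables below are invented, and SQLite's ON CONFLICT syntax via Python stands in for whatever equivalent the actual database offers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT, modified TEXT);
INSERT INTO target VALUES (1, 'old', '2011-06-01'), (2, 'keep', '2011-06-20');
CREATE TABLE source (id INTEGER PRIMARY KEY, val TEXT, modified TEXT);
INSERT INTO source VALUES (1, 'new', '2011-06-23'), (2, 'stale', '2011-06-10');
""")

# upsert: only rows whose source modified date is newer get overwritten
conn.execute("""
    INSERT INTO target (id, val, modified)
    SELECT id, val, modified FROM source WHERE true
    ON CONFLICT(id) DO UPDATE SET
      val = excluded.val, modified = excluded.modified
    WHERE excluded.modified > target.modified
""")

rows = conn.execute("SELECT id, val FROM target ORDER BY id").fetchall()
print(rows)  # [(1, 'new'), (2, 'keep')]
```

Row 1 is replaced because its source copy is newer; row 2 is left alone, so only the changed slice of data moves.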
# ¿ Jun 24, 2011 02:22 |
|
Bob Morales posted:I really don't understand what you were saying to do. 8 minutes is only too long because someone else said so. I hope someone else can chime in and tell me I'm not making sense. Here's a picture of what I'm trying to explain. I fully understand what you want to do. Stop getting loving snippy it's not our fault your testing is super time sensitive and can't be executed through standard load processes.
|
# ¿ Jun 24, 2011 19:32 |
|
MrBishop posted:
edit: Copy and paste this to see what I'm saying - also. I updated what I originally said to do. code:
http://www.kodyaz.com/articles/sql-server-dates-table-using-tsql-cte-calendar-table.aspx benisntfunny fucked around with this message at 21:12 on Jun 24, 2011 |
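The linked article builds a dates table with a recursive CTE; the same trick generates one row per week, so a LEFT JOIN onto it produces the missing zero weeks. A sketch in SQLite via Python with made-up dates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# a recursive CTE generating one row per week over a fixed range
rows = conn.execute("""
    WITH RECURSIVE dates(d) AS (
      SELECT '2011-06-01'
      UNION ALL
      SELECT DATE(d, '+7 days') FROM dates
      WHERE d < '2011-06-22'
    )
    SELECT d FROM dates
""").fetchall()
print(rows)
# [('2011-06-01',), ('2011-06-08',), ('2011-06-15',), ('2011-06-22',)]
```

Joining weekly totals against this spine gives every week a row, so the running total can accumulate through the empty ones.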
# ¿ Jun 24, 2011 19:49 |
|
|
MrBishop posted:Thanks for following up, but maybe I didn't explain well enough where the problem lies, and that's in having a running total, that continues to accumulate (0) through the missing weeks. What I started with is: ah. sorry. I actually think you've probably done exactly what I would do too. I did just have trouble understanding you at first.
|
# ¿ Jun 24, 2011 22:19 |