|
To update a bunch of data in our database, I'm working on a pretty simple DTS package for SQL Server 2000. That's working all well and good, but why oh why does the file size of the .dts file increase by 300-600K every time I go to Package -> Save, even when I haven't made any modifications to the package logic at all? Is there some sort of versioning history the .dts file is saving to itself? It's almost up to 7 megs and all I've done for the last few hours is tweak the SQL in some 'Execute SQL' Tasks, save, and execute the whole package.
|
# ? Sep 27, 2008 05:25 |
|
|
I know nothing about DTS or SQL Server but most databases have some sort of 'VACUUM' command that will reclaim unused space.
|
# ? Sep 27, 2008 16:40 |
|
MoNsTeR posted:You can't do this in Oracle, or rather, you can only do it with some Clever Tricks. Does it work in other implementations? This would be very handy for a personal project. You can do this in MySQL, and it's very handy.
|
# ? Sep 27, 2008 18:39 |
|
Hi guys, this is going to be slightly long, but I've been trying hard to solve this issue all morning. I seem to have hit a brick wall, and hopefully you guys can see this more clearly than me. I'm trying to run this insert statement against a table called 'accounts', but I seem to be hitting some sort of duplicate key error when there shouldn't be any... code:
SHOW INDEX from account: code:
code:
code:
|
# ? Sep 28, 2008 12:09 |
|
Ahh, I just fixed it. Turns out my other tables don't have auto_inc flagged.
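For anyone hitting the same thing: the difference is easy to see by comparing a table that hands out its own keys with one where you supply them by hand. This is a minimal sketch using Python's built-in sqlite3 standing in for MySQL's AUTO_INCREMENT; the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# With an auto-incrementing key, omitting the id lets the engine assign
# the next value, so inserts can never collide on the key.
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
cur.execute("INSERT INTO accounts (name) VALUES ('alice')")
cur.execute("INSERT INTO accounts (name) VALUES ('bob')")
print(cur.execute("SELECT id, name FROM accounts").fetchall())
# [(1, 'alice'), (2, 'bob')]

# Explicitly reusing an id that is already taken raises the duplicate-key
# error seen above; without AUTO_INCREMENT on the source tables, an insert
# that copies their ids can end up doing exactly that.
cur.execute("CREATE TABLE accounts2 (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO accounts2 (id, name) VALUES (1, 'alice')")
try:
    cur.execute("INSERT INTO accounts2 (id, name) VALUES (1, 'bob')")
except sqlite3.IntegrityError as e:
    print("duplicate key:", e)
```

In MySQL the fix is presumably something like `ALTER TABLE accounts MODIFY id INT NOT NULL AUTO_INCREMENT;` on the tables that were missing the flag.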
|
# ? Sep 28, 2008 12:29 |
|
Begby posted:You are joining two tables on a key that is stored in a text field that you are converting to an integer. I can't see how that could ever be fast, I can't imagine it using any index at all since you are using convert. The key is not a text field. It's a date field. I left something out of this example. It should actually say
[code]
INSERT INTO @Hits (TMP_ContentId, TMP_Hits) --temp table
SELECT CNT1.CNT_CONTENT_ID, SUM(ISNULL(EVENT_TYPE_CNT, 0))
FROM dbo.EVENT_TABLE
INNER JOIN dbo.BUT_CONTENT AS CNT1
    ON CONVERT(INT, ISNULL(TEXT, 0)) = CNT1.CNT_CONTENT_ID
WHERE TIME_STAMP BETWEEN @StartDate AND @EndDate -- INDEX COLUMN
    AND EVENT_TYPE_ID = @Hit --code value for hits
    AND TEXT = CNT1.CNT_CONTENT_ID
    AND USER_ROLE_ID IN (SELECT item FROM parse_comma_delimited_list_in(@Roles, ','))
    AND CNT1.ACTIVE_DATE > GETDATE()
[/code]
|
# ? Sep 28, 2008 18:18 |
|
imBen posted:The key is not a text field. It's a date field. I left something out of this example. It should actually say

If you only have one index then that is a big reason why it is running slow, among other things. 3 million records really isn't all that much; SQL Server should be able to handle this kind of query. The issue here though is your field types and indexes. You will probably want to build a composite index with all the items in your where clause and play with the order.

You are inner joining BUT_CONTENT and EVENT_TABLE on a field. If that is not indexed in BUT_CONTENT it is going to cause a slowdown. I am not all that familiar with optimization on SQL Server though, so perhaps someone else here can assist.

Also, like I said, you are converting the field called TEXT to an integer in the join, and then you are also doing a comparison in the where clause, which is redundant (and not doing a conversion). You should be able to take it out of the where clause. Then you need to sort out whether you really need to do that conversion at all.

One good way to troubleshoot this kind of stuff is to add a large limit, then eliminate all but one filter from your where clause, then add them back in one at a time until the query gets slow. Build your index as necessary.
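Begby's composite-index suggestion is easy to sanity-check with the query planner. A sketch using Python's sqlite3 (the thread's query is SQL Server, and these table and column names are simplified, invented stand-ins) showing the planner picking up an index that covers both filters in the WHERE clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE event_table (
    time_stamp TEXT, event_type_id INTEGER, content_id INTEGER)""")

# Composite index over the two columns the WHERE clause filters on.
# Order matters: the equality column first, the range column second.
cur.execute("CREATE INDEX ix_event ON event_table (event_type_id, time_stamp)")

plan = cur.execute("""EXPLAIN QUERY PLAN
    SELECT content_id FROM event_table
    WHERE event_type_id = 5
      AND time_stamp BETWEEN '2008-06-01' AND '2008-09-01'""").fetchall()
print(plan)  # the plan row should report a SEARCH USING INDEX ix_event
```

On SQL Server 2000 the analogous check is the estimated execution plan in Query Analyzer: if the join or WHERE columns show table scans instead of index seeks, that's where the index is missing.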
|
# ? Sep 29, 2008 13:46 |
|
Begby posted:If you only have one index then that is a big reason why it is running slow, among other things. 3 million records really isn't all that much, SQL server should be able to handle this kind of query. The issue here though is your field types and indexes. You will probably want to build a composite index with all the items in your where clause and play with the order. The query above to calculate hits for 9000 items over a three-month span only takes 40 seconds or so to complete as a query. Over 2 hours as a stored procedure with the exact same code. benisntfunny fucked around with this message at 14:03 on Sep 29, 2008 |
# ? Sep 29, 2008 14:00 |
|
I apologize if this was covered, as I may have missed it, but are you also doing the insert when you run it as a query, or only when you are running it as a stored procedure? If you are only using the insert in the stored procedure, perhaps it's related to that.
|
# ? Sep 29, 2008 15:05 |
|
Aredna posted:I apologize if this was covered as I may have missed it, but are you also doing the insert when you run it as a query or only when you are running it as a stored procedure? If you are only using the insert in the stored procedure perhaps it's related to that. The code is the exact same between stored procedure and query. Again, this is just a snippet of the stored procedure. I'm only including this because everything ran fine as a stored proc before adding this additional code... also the analyzer tells me this is where it's getting stuck.
|
# ? Sep 29, 2008 16:43 |
|
Ok, this is going to be really stupid, so just make fun of me and move on if you like, but to preface this I know NOTHING about SQL. I've got an Openfire Jabber server... I was running SQL Express but I'm moving to MySQL because I hit the 4GB cap on a SQL Express DB. A straight conversion of data isn't really going to work because I'm also renaming the server, and there are some other issues between the two DBs from what I've seen (varchar limits etc.). I do want to grab some data from one and move it to the other as well, though. Basically I have a table called ofGroupProp on MSSQL that has data I want, and I need to get it into MySQL with the same table name. What I'm running into is how to export it from MS SQL and import it into MySQL. Can anyone point me to something that'll help?
|
# ? Sep 30, 2008 23:20 |
|
skipdogg posted:What I'm running into is how to export it from MS SQL and import it into MySQL. http://dev.mysql.com/downloads/gui-tools/5.0.html http://dev.mysql.com/doc/migration-toolkit/en/index.html
|
# ? Oct 2, 2008 05:04 |
|
I'm having a bit of trouble trying to work a new AVG() kink into an existing query that I was finally able to put together (I know next to nothing about SQL, but this task has been given to me since I do other programming-related duties). The ultimate goal is to find the average number of days, per item, from the time a purchase order is sent out to the time we get it in stock. Here's my query (MSSQL, I think): code:
code:
What I need now is this: see the DATEDIFF (the "diff" column) that gives me the difference in days from the PO sent date to the item receive date? I need to calculate the average number of days it takes a product to get in on a per-product basis. I've tried AVG and GROUP BY in a bunch of different combinations to work that out, but I only get errors about columns being or not being in "aggregate functions," and the documentation on AVG only gives me very simple examples that I can't apply to my query. Can anyone help me figure out how to get the average? **EDIT: typo Forkboy fucked around with this message at 21:54 on Oct 3, 2008 |
# ? Oct 3, 2008 16:41 |
|
Pento McGreno posted:Indeed it is. Thanks.
|
# ? Oct 3, 2008 18:12 |
|
I assume you meant "per-product" and "per-project basis" was just a typo...code:
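The shape of MoNsTeR's (elided) answer is an AVG of the date difference with a GROUP BY per product. A sketch in Python's sqlite3, where `julianday()` stands in for T-SQL's DATEDIFF and the table and column names are invented; the key rule is that every selected column must appear either in the GROUP BY or inside an aggregate, which is what the "aggregate functions" errors were complaining about:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE po (product TEXT, sent TEXT, received TEXT)")
cur.executemany("INSERT INTO po VALUES (?,?,?)", [
    ("widget", "2008-09-01", "2008-09-08"),   # 7 days
    ("widget", "2008-09-10", "2008-09-13"),   # 3 days
    ("gadget", "2008-09-01", "2008-09-21"),   # 20 days
])

# product is in the GROUP BY; the date diff is inside the AVG aggregate.
rows = cur.execute("""
    SELECT product, AVG(julianday(received) - julianday(sent)) AS avg_days
    FROM po
    GROUP BY product
    ORDER BY product
""").fetchall()
print(rows)  # [('gadget', 20.0), ('widget', 5.0)]
```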
|
# ? Oct 3, 2008 21:48 |
|
MoNsTeR posted:I assume you meant "per-product" and "per-project basis" was just a typo... Ah... it was indeed. Fixed. MoNsTeR posted:
It didn't occur to me until you posted that the person who needs the data doesn't need the details of every PO, just the averages. Taking your model, I got: code:
n00b question: There's a quirk... the average numbers all seem to be rounded down. Any way to get a decimal to, say, two places?
|
# ? Oct 3, 2008 22:29 |
|
Forkboy posted:n00b question: There's a quirk... the average numbers all seem to be rounded down. Any way to get a decimal to, say, two places? That is because datediff is returning an integer value, so the result of the average function will also be an integer. You can cast the datediff to another type, such as decimal, inside of the average. Since you want exactly two decimal places, you can do it like this and then not have to worry about rounding the value back to two decimal places like you would if you used a full floating point number: code:
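The truncation Aredna describes is just integer arithmetic, and it can be reproduced in plain Python (the DATEDIFF values here are made up):

```python
# In T-SQL, AVG over INT values yields an INT, silently dropping the
# fraction; casting to decimal inside the aggregate keeps it.
diffs = [7, 3, 4]  # DATEDIFF(day, ...) results are always integers

int_avg = sum(diffs) // len(diffs)           # what AVG(int_col) gives: 4
dec_avg = round(sum(diffs) / len(diffs), 2)  # what the decimal cast gives: 4.67
print(int_avg, dec_avg)  # 4 4.67
```

The elided T-SQL presumably looks something like `AVG(CAST(DATEDIFF(day, sent_date, received_date) AS DECIMAL(10,2)))` (column names are guesses here).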
|
# ? Oct 3, 2008 22:53 |
|
Aredna posted:
Sweeeeeeeeeeet. Works beautifully... thanks very much.
|
# ? Oct 3, 2008 23:21 |
|
I'm trying to generate an SQL query for a Trac ticket with custom fields (the context isn't relevant, but it's here in case you're wondering why the table format is so messed up). I have no prior experience with SQL. I spent almost 2 hours reading SQL tutorials, but could not succeed in creating the required SQL command. I realize this is inappropriate, but I was hoping someone here with extensive knowledge of SQL would spare a couple of minutes and give me the solution, because I need this information for tomorrow. Essentially, I have two tables like so: code:
code:
code:
I found Trac custom field SQL examples here but failed in my attempts to adapt them to my needs. Again, sorry for just asking for the answer like this.
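For the record, Trac keeps custom fields in a `ticket_custom` table keyed by (ticket, name), one row per ticket per field, so pulling a custom field alongside each ticket is one LEFT JOIN per field. A sketch in Python's sqlite3 with a made-up 'severity' custom field (Trac's real tables have more columns than shown here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-ins for Trac's ticket and ticket_custom tables.
cur.execute("CREATE TABLE ticket (id INTEGER PRIMARY KEY, summary TEXT)")
cur.execute("CREATE TABLE ticket_custom (ticket INTEGER, name TEXT, value TEXT)")
cur.execute("INSERT INTO ticket VALUES (1, 'first'), (2, 'second')")
cur.execute("INSERT INTO ticket_custom VALUES (1, 'severity', 'high')")

# LEFT JOIN so tickets without a value for the field still show up.
rows = cur.execute("""
    SELECT t.id, t.summary, c.value AS severity
    FROM ticket t
    LEFT JOIN ticket_custom c ON c.ticket = t.id AND c.name = 'severity'
    ORDER BY t.id
""").fetchall()
print(rows)  # [(1, 'first', 'high'), (2, 'second', None)]
```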
|
# ? Oct 6, 2008 03:32 |
|
I need a way to export huge (20+GB) data sets from Oracle as fast as possible. However, there are some problems: 1) It needs to be by query, not by table. 2) It must be to my local machine. The network admins have refused to give me a couple dozen gigs on the network drives to dump these. 3) Variable formatting. I have to pull in the format the requester wants. 4) Preferably some way to break on file size or records retrieved. I need to be able to fit these on a DVD or CD. 5) I can't actually run code on the system, so no PL/SQL. This is probably impossible, but it is the situation I find myself in. I'll charitably describe the situation as a "clusterfuck", but it is the assignment I have. Right now I just run queries through SQL*Plus and spool them, but this is stupidly slow and I generally dislike spending 20-30 hours on a weekend babysitting my laptop at work. I have looked at SQLDeveloper, but it only seems to support export by table, not by query. Any ideas?
|
# ? Oct 6, 2008 16:57 |
|
Xae posted:I need a way to export huge(20+gb) data sets from Oracle as fast as possible. However there are some problems:
|
# ? Oct 6, 2008 17:27 |
|
Jethro posted:It seems to me that if you are retrieving specific queries with specific formatting, you aren't "dumping data" you are generating reports. So maybe look into some sort of reporting tool? Most of the formatting is simply switching between delimiters in columns. Reports are generally human-readable; these I consider "dumping data" because they are thousand-character-long '|'- or ','-delimited files. The files are to be loaded into external (to the company) databases, not to be read by humans.
|
# ? Oct 6, 2008 18:09 |
|
Different query, more dumb questions where The Google fails: how can I use the result of an aggregate function call in another column? What I need is a list of our products and how many are sold per month, sorted by top sellers. Here's a simplified version of what I have so far, in MSSQL: code:
How can I add another line that gives me total sold / months? If I copy those functions to another line (something like "sum again / datediff again"), the web portal I have to use just craps itself.
|
# ? Oct 7, 2008 19:10 |
|
Forkboy posted:How can I add another line that gives me total sold / months? If I copy those functions to another line (something like "sum again / datediff again"), the web portal I have to use just craps itself. What error do you get - division by zero or something else?
|
# ? Oct 7, 2008 20:15 |
|
Unicephalon 40-D posted:What error do you get - division by zero or something else? No... it's a mini SQL portal that's on the admin backend of our webstore. It gives me syntax errors if there are any, but none here. It just runs for a split second then gives me "an error has occurred." It really doesn't give me a whole lot of other help. Here's my whole query that gives me the non-helpful error: code:
Forkboy fucked around with this message at 20:41 on Oct 7, 2008 |
# ? Oct 7, 2008 20:38 |
|
Try this instead; it will give you a NULL for Sold Per Month if the number of months happens to be zero: code:
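The standard way to get that behavior is NULLIF, which turns a zero divisor into NULL so the division yields NULL instead of an error. A sketch in Python's sqlite3 (the table here is invented; the thread's query is MSSQL, where NULLIF behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (product TEXT, total_sold INTEGER, months INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?,?,?)", [
    ("widget", 120, 12),
    ("new_item", 5, 0),   # would otherwise trigger division by zero
])

# NULLIF(months, 0) is NULL when months = 0, and anything / NULL is NULL.
rows = cur.execute("""
    SELECT product, total_sold * 1.0 / NULLIF(months, 0) AS sold_per_month
    FROM sales
""").fetchall()
print(rows)  # [('widget', 10.0), ('new_item', None)]
```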
|
# ? Oct 7, 2008 21:19 |
|
Unicephalon 40-D posted:Try this instead Ah, thank you! I hadn't considered that it might try to divide by zero somewhere in there. Follow-up question: how do I get the table to sort by the sold-per-month number?
|
# ? Oct 7, 2008 21:46 |
|
Forkboy posted:Ah, thank you! I hadn't considered that it might try to divide by zero somewhere in there. You could just duplicate the column definition: code:
code:
Unicephalon 40-D fucked around with this message at 23:02 on Oct 7, 2008 |
# ? Oct 7, 2008 22:58 |
|
Unicephalon 40-D posted:You could just duplicate the column definition: Hey, thanks very much. You've saved us a world of headaches... also, I didn't know about the [string name of a previous column] trick. That will save me a ton of time in the future.
|
# ? Oct 7, 2008 23:40 |
|
I have just started out with SQL, and I'm building a small Oracle database at uni. It holds information about recordings, artists, and tracks. My problem is listing out only the recordings with the highest number of tracks (antall_verk) from a query that uses the COUNT operator. Attribute names are all in Norwegian. A quick translation: Katalognummer - record_number; Tittel - title; Plate - recording; Antall_verk - number_of_tracks. I currently have the following query: code:
code:
Anyone got any suggestions?
|
# ? Oct 9, 2008 19:22 |
|
I think the easiest solution involves working out of a view based on your joined tables: code:
code:
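A sketch of the view-plus-MAX approach in Python's sqlite3, with simplified, invented stand-ins for the plate/verk tables (note that ties for the top track count all come back, which is usually what you want):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE plate (katalognummer INTEGER PRIMARY KEY, tittel TEXT)")
cur.execute("CREATE TABLE verk (id INTEGER PRIMARY KEY, katalognummer INTEGER)")
cur.execute("INSERT INTO plate VALUES (1, 'A'), (2, 'B'), (3, 'C')")
cur.executemany("INSERT INTO verk (katalognummer) VALUES (?)",
                [(1,), (1,), (2,), (2,), (3,)])

# The view does the join + COUNT once...
cur.execute("""
    CREATE VIEW track_counts AS
    SELECT p.katalognummer, p.tittel, COUNT(v.id) AS antall_verk
    FROM plate p JOIN verk v ON v.katalognummer = p.katalognummer
    GROUP BY p.katalognummer, p.tittel
""")
# ...and the outer query just filters it against its own MAX.
rows = cur.execute("""
    SELECT katalognummer, tittel, antall_verk
    FROM track_counts
    WHERE antall_verk = (SELECT MAX(antall_verk) FROM track_counts)
    ORDER BY katalognummer
""").fetchall()
print(rows)  # [(1, 'A', 2), (2, 'B', 2)]
```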
|
# ? Oct 9, 2008 19:58 |
|
A question of my own: If I've got tables People (PersonID, other fields), Organizations (OrganizationID, other fields) and People_Organizations (PersonID, OrganizationID) which lists many-to-many relationships between people and organizations, should People_Organizations use a composite index on PersonID and OrganizationID, or two separate indexes?
|
# ? Oct 9, 2008 20:03 |
|
Awesome,thanks a lot! I had completely forgotten about views.
|
# ? Oct 9, 2008 20:04 |
|
Zoracle Zed posted:A question of my own: If you are going to get all the organizations that a person belongs to, then an index on PersonID makes sense. If you are going to get all the people that belong to an organization, then an index on OrganizationID makes sense. If you are going to be selecting based on PersonID and OrganizationID then a composite index would work; however, I think you would only need to do that on a delete. So in this case two indexes sounds right, and a third if necessary.
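One common layout for a junction table like this makes the composite key the primary key (it also enforces uniqueness of each pairing and serves person-first lookups), with a second index for organization-first lookups. A sketch in Python's sqlite3 (names invented; the same idea applies on any engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Composite primary key: enforces one row per (person, organization)
# pair and acts as the index for person-first lookups.
cur.execute("""CREATE TABLE people_organizations (
    person_id INTEGER, organization_id INTEGER,
    PRIMARY KEY (person_id, organization_id))""")
# Second index for organization-first lookups.
cur.execute("CREATE INDEX ix_po_org ON people_organizations (organization_id)")

plan_person = cur.execute("""EXPLAIN QUERY PLAN
    SELECT organization_id FROM people_organizations
    WHERE person_id = 1""").fetchall()
plan_org = cur.execute("""EXPLAIN QUERY PLAN
    SELECT person_id FROM people_organizations
    WHERE organization_id = 1""").fetchall()
print(plan_person)  # searched via the primary-key index
print(plan_org)     # searched via ix_po_org
```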
|
# ? Oct 9, 2008 20:58 |
|
Kind of a generic question, but what's better: joins or sub-queries?
|
# ? Oct 10, 2008 02:20 |
|
Sergeant Hobo posted:Kind of a generic question, but what's better: joins or sub-queries? I think sub-queries, because I feel like if you were partying with one then it would get belligerently drunk and make out with someone's girlfriend while her boyfriend was on a smoke break. So I guess it really just depends on how slutty your girlfriend is.
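Joking aside, for simple filters the two forms return the same rows and modern optimizers often produce the same plan, so which is faster is engine- and query-specific; measure before assuming. A sketch in Python's sqlite3 showing an IN subquery and the equivalent join (tables invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, active INTEGER)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
cur.execute("INSERT INTO customers VALUES (1, 1), (2, 0)")
cur.execute("INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2)")

# Subquery form: filter orders by a set of customer ids.
sub = cur.execute("""
    SELECT id FROM orders
    WHERE customer_id IN (SELECT id FROM customers WHERE active = 1)
    ORDER BY id""").fetchall()

# Join form: same rows, since customers.id is unique (no duplicate fan-out).
joined = cur.execute("""
    SELECT o.id FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.active = 1
    ORDER BY o.id""").fetchall()
print(sub == joined, sub)  # True [(10,), (11,)]
```

The join only stays equivalent because the joined key is unique; joining on a non-unique key can duplicate rows where the subquery would not.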
|
# ? Oct 10, 2008 17:26 |
|
This is another prose-y question... What's the best way I can update multiple columns/rows of data? I'm only familiar with using the UPDATE statement when it's a single column, limited by a WHERE clause using an id or pseudo-id. My scenario: there's a table we have with columns ABCDE. In the first phase of loading data, I can only fill in columns ABC, and I'll null out DE. In a second phase of loading, I'll have access to the same ids and whatnot, and I can fill in the DE sections. So, what SQL statement or design can I use to fill out those columns in an existing table? Edit: Talking with my coworker, this seems like something a normal UPDATE statement can handle, with joins and whatnot... I guess I'll look into it. Triple Tech fucked around with this message at 14:51 on Oct 14, 2008 |
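A sketch of that second-phase UPDATE in Python's sqlite3, using correlated subqueries keyed on the shared id (table names invented; in SQL Server the same thing can be written as UPDATE ... FROM with a JOIN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE target (
    id INTEGER PRIMARY KEY, a TEXT, b TEXT, c TEXT, d TEXT, e TEXT)""")
# Phase 1: load A-C, leave D and E NULL.
cur.execute("INSERT INTO target VALUES (1, 'a1', 'b1', 'c1', NULL, NULL)")
cur.execute("INSERT INTO target VALUES (2, 'a2', 'b2', 'c2', NULL, NULL)")

# Phase 2 data arrives in a staging table keyed by the same ids.
cur.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, d TEXT, e TEXT)")
cur.execute("INSERT INTO staging VALUES (1, 'd1', 'e1')")
cur.execute("INSERT INTO staging VALUES (2, 'd2', 'e2')")

# One UPDATE fills both columns for every row with a staging match.
cur.execute("""
    UPDATE target
    SET d = (SELECT d FROM staging WHERE staging.id = target.id),
        e = (SELECT e FROM staging WHERE staging.id = target.id)
    WHERE id IN (SELECT id FROM staging)
""")
print(cur.execute("SELECT * FROM target ORDER BY id").fetchall())
# [(1, 'a1', 'b1', 'c1', 'd1', 'e1'), (2, 'a2', 'b2', 'c2', 'd2', 'e2')]
```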
# ? Oct 14, 2008 14:01 |
|
Is there a way to create an index of unique, variable length numeric ranges? For example, I would like my tables to be able to treat (1, 1000) and (1001, 1500) as two separate keys and be able to quickly determine if a new incoming range falls into one that is already in the DB. In an ideal world, the range would be represented as a composite index of two table columns (start, end). I'm not up poo poo creek if there isn't an easy way to do this because the size of this table is never expected to be more than 20 rows so manually pulling all of the data in this table to do this check in a manual fashion won't be costly. It won't be regularly queried in this manner and inserts only happen during specific failure scenarios (which is when this check would need to happen).
|
# ? Oct 14, 2008 22:12 |
|
var1ety posted:
|
# ? Oct 14, 2008 22:18 |
|
|
crazyfish posted:Is there a way to create an index of unique, variable length numeric ranges? I just tested it by creating a function which took in two numbers, start and end, and returned the number of rows in the existing table where the range overlapped that described by start and end. Then I put a check on the table which said CheckTable(start,end) = 0
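The overlap test itself is the classic two-comparison check: ranges (s1, e1) and (s2, e2) overlap iff s1 <= e2 and s2 <= e1. A sketch along the lines of what Jethro describes, in Python's sqlite3 (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ranges (range_start INTEGER, range_end INTEGER)")
cur.executemany("INSERT INTO ranges VALUES (?,?)", [(1, 1000), (1001, 1500)])

def overlaps(cur, new_start, new_end):
    # Two ranges overlap iff each one starts at or before the other's end.
    (n,) = cur.execute(
        "SELECT COUNT(*) FROM ranges WHERE range_start <= ? AND range_end >= ?",
        (new_end, new_start)).fetchone()
    return n > 0

print(overlaps(cur, 500, 1200))   # True: collides with both existing rows
print(overlaps(cur, 1501, 2000))  # False: range is free
```

With a table this small (~20 rows) the query is cheap enough to run on every insert; a check constraint calling a function, as described above, just moves the same test into the database.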
|
# ? Oct 14, 2008 22:40 |