|
Sub Par posted:Tested in SQL Server 2008. Not sure if you can do this in MySQL or not, but I know it will also work in Oracle. I'll test this later tonight, but I'm 99% sure it will count print records multiple times (each print record will be included for each class in the same subject the student is in). i.e. if you have a student taking ACCT101 and ACCT102 classes concurrently, and the student prints four pages for the 'ACCT' subject, how are you going to know which class it's for? (Also, I have a suspicion that the 'subject' in the printlog is actually the student's major subject, not the subject for the current printlog. This is only based on experience with terrible schemas pulling wrong-rear end data. )
|
# ? Jun 23, 2011 05:21 |
|
|
Sub Par posted:Tested in SQL Server 2008. Not sure if you can do this in MySQL or not, but I know it will also work in Oracle. code:
Thel posted:(Also, I have a suspicion that the 'subject' in the printlog is actually the student's major subject, not the subject for the current printlog. This is only based on experience with terrible schemas pulling wrong-rear end data. ) DoctorScurvy fucked around with this message at 09:04 on Jun 23, 2011 |
# ? Jun 23, 2011 09:01 |
|
What can I do about slow MySQL importing? I dump from server A with mysqldump. It takes about 2 minutes to give me a 2.1GB file. Then I send it over to server B and import it into that server (starting fresh with a new DB) and it takes 15 minutes! I just doubled the RAM in the VM from 3GB to 6GB because it was using it all. It didn't make the import any faster. Well, it made it about 30 seconds faster, but we're talking 15 minutes total, so it wasn't much of an improvement. I was hoping the solution would be 'add more cheap RAM'. Are inserts/imports just that much more expensive to run? I'm using the stock configuration for MySQL (CentOS 5.6) and it's 64-bit on a Core i7 950. Here's what the CPU/disk/memory usage graphs look like: Bob Morales fucked around with this message at 13:57 on Jun 23, 2011 |
# ? Jun 23, 2011 13:49 |
|
Bob Morales posted:Are inserts/imports just that much more expensive to run? Without knowing much about MySQL internals, reads are usually going to be faster than writes, so I'd expect importing to take longer than exporting. (Maybe not that much longer though...)
|
# ? Jun 23, 2011 14:20 |
|
epswing posted:Without knowing much about MySQL internals, reads are usually going to be faster than writes, so I'd expect importing to take longer than exporting. I could understand double or triple, but 8x is a big jump. I would be happy with getting it down to 5 minutes.
|
# ? Jun 23, 2011 14:26 |
|
Bob Morales posted:What can I do about slow MySQL importing?

Could you set up replication to the other database? http://dev.mysql.com/doc/refman/5.1/en/replication.html Even if you didn't always want data in server B, you could set up a different schema and then just copy locally from B to B to populate data when you wanted it.
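For anyone curious, the 5.1-era replica setup is only a couple of statements run on server B; the hostname, credentials, and binlog coordinates below are all placeholders:

```sql
-- Point server B at server A as a replication master (MySQL 5.1 syntax).
-- Host, user, password, and log coordinates are placeholder values.
CHANGE MASTER TO
    MASTER_HOST     = 'server-a.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;

START SLAVE;
```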
|
# ? Jun 23, 2011 14:29 |
|
benisntfunny posted:Could you setup replication to the other database? I am under the assumption that if we used replication, that once we wrote to the database on server B, we'd have to delete it and start over from a new dump in order to get back in sync. The idea is to use this as a test with real production data instead of the data in our staging/development servers. Server A is actually a local replication clone of our production DB.
|
# ? Jun 23, 2011 14:46 |
|
Bob Morales posted:I could understand double or triple, but 8x is a big jump. I would be happy with getting it down to 5 minutes. Are these tables Inno or MyISAM? Inno's always going to be slower, and inserts are much more expensive than reads. Index recreation is also really an expensive task. Some of this thread may be helpful - http://stackoverflow.com/questions/67117/speeding-up-mysql-dumps-and-imports
|
# ? Jun 23, 2011 14:58 |
|
stuph posted:Are these tables Inno or MyISAM? Inno's always going to be slower, and inserts are much more expensive than reads. Index recreation is also really an expensive task.

Inno. I am looking through High Performance MySQL now.
|
# ? Jun 23, 2011 15:02 |
|
You might consider disabling all of the constraints and indexes on the tables, then enabling them after the data has been imported. Another option might be to disable the fsync while you import the data and then enable it after.
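A sketch of what that looks like on the MySQL side for a one-off bulk import (the dump path is a placeholder; the fsync part is the `innodb_flush_log_at_trx_commit` setting in my.cnf rather than a session switch):

```sql
-- Relax integrity checking for the duration of the import, then restore it.
-- The dump path is a placeholder; run this from the mysql shell.
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;

SOURCE /path/to/dump.sql;

COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
```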
|
# ? Jun 23, 2011 15:09 |
|
We're trying to set up Bucardo to do some Postgres replication. It seems to work fine for tables, but we can't get it to synchronize sequences at all, even though sequences have been added to the sync and everything seems ok. This is a problem. Does the idea of replicating sequences not mean what we think it means, or are there other things we need to set up/run to make this work?
|
# ? Jun 23, 2011 20:08 |
|
Bob Morales posted:I am under the assumption that if we used replication, that once we wrote to the database on server B, we'd have to delete it and start over from a new dump in order to get back in sync.

Here's what I'm suggesting... and I'm not sure how many places you can replicate to in MySQL, so it may not work.
- Keep your replication to Server A
- Replicate another copy to Server B in a new schema
- Limit the ID in Server B to read only on that new schema
- Delete your current test tables and repopulate with data from the new schema (a fully replicated copy of prod) as needed
No reason you would have to delete anything from those tables. benisntfunny fucked around with this message at 02:24 on Jun 24, 2011 |
# ? Jun 23, 2011 23:27 |
|
I got it down to 8 minutes, but apparently that's not fast enough. That's one of our biggest databases, our other ones take 20 seconds to dump and a minute and 20 seconds to import. I'm going to try using this: http://code.google.com/p/sqlyog/ or maatkit to see if it's faster to re-sync our data with the production master after we've made changes, or if it's faster to just dump/reload.
|
# ? Jun 24, 2011 01:38 |
|
Bob Morales posted:http://code.google.com/p/sqlyog/ I've used sqlyog for the last couple years. It's just a mysql windows/gui client. Not sure why you'd think it would do anything faster than the mysql shell program, unless you're aiming for some feature I'm not aware of...
|
# ? Jun 24, 2011 01:48 |
|
Bob Morales posted:I got it down to 8 minutes, but apparently that's not fast enough. That's one of our biggest databases, our other ones take 20 seconds to dump and a minute and 20 seconds to import.

So you're going to rule out the option I gave you which would allow you to locally copy the data over? - which is without a doubt going to be your fastest option. We typically copy about 4-5GB of data using a similar method in under a minute. How are you defining the DB as big? Also, if the issue is that 8 minutes is too long because you're locking the tables during that time, and you don't want to do the replication method I suggested, I would at least dump the data into an alternative schema pre-load, then load from those tables locally to reduce locking time.

edit: also - If you have any DB auditing type columns in your data you could easily do a compare to only grab the data you need, and update existing information where the modified date is greater than the current modified date. benisntfunny fucked around with this message at 02:28 on Jun 24, 2011 |
# ? Jun 24, 2011 02:22 |
|
The mysqldump and import is as fast as you're gonna get. http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html
|
# ? Jun 24, 2011 04:25 |
|
You have tuned my.cnf for your server specs, right?
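For anyone following along, the handful of settings that usually matter for InnoDB import speed live in my.cnf; the numbers below are guesses for a 6GB box to experiment with, not recommendations:

```ini
[mysqld]
# Biggest single win for InnoDB: let it cache most of the working set.
innodb_buffer_pool_size = 4G
# Larger redo logs mean fewer checkpoints during bulk writes.
# (On 5.x, changing this requires a clean shutdown and removing the old ib_logfile* files.)
innodb_log_file_size = 256M
# Relax the per-commit fsync; can lose ~1s of transactions on a crash, which is fine for a test box.
innodb_flush_log_at_trx_commit = 2
```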
|
# ? Jun 24, 2011 06:42 |
|
benisntfunny posted:So you're going to rule out the option I gave you which would allow you to locally copy the data over? - which is without a doubt going to be your fastest option. We typically copy about 4-5GB of data using a similar method in under a minute.

I really don't understand what you were saying to do. 8 minutes is only too long because someone else said so. Basically we have a production server, and now we have a local replication mirror. What I am trying to do is set up a test server, which will allow you to take a snapshot of a database from the mirror and actually do stuff with it. We have a clone of our webserver set up as well, basically so we can see changes with real data instead of on our staging or development servers. But once we start testing the app, the data is going to change of course. So when we are done, we'll drop the DB, and when we are going to test out a new feature, we just grab a new dump from the mirror so we have current data.

McGlockenshire posted:You have tuned my.cnf for your server specs, right?

Yes, they are no longer the default values. Are they the best? Maybe not.

epswing posted:I've used sqlyog for the last couple years. It's just a mysql windows/gui client. Not sure why you'd think it would do anything faster than the mysql shell program, unless you're aiming for some feature I'm not aware of...

They have a sync tool for Linux, where I am going to see if it's faster to re-sync my modified DB to the current copy on server A, rather than doing a dump/import.
|
# ? Jun 24, 2011 13:55 |
|
I'm trying to solve what I thought would be a simple problem, but feel like I've gone way out of my way to achieve my goal. I have a table for tracking death loss in farm systems, and want to report it by week. No problem so far, that's pretty easy. The problem is that when there are zero deads to report, nothing goes into the database, so I end up with weeks missing data. This is easy to fix for the counts, by just doing an outer join with a list of numbers, so I get all my weeks displayed even when there's no report. The hangup comes when I want to show a cumulative total for the entire period, and have the cumulative show even on weeks where there's nothing reported. The cumulative is already calculated when it goes into the table, I just need to carry it over into the "empty" weeks. My solution was to join the table with itself, group by the appropriate columns, and select the MAX of the cumulative total column. I feel like I may be making it harder than need be, so would appreciate any opinions. Here's what I ended up with (MS SQL, btw): code:
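Roughly, the pattern being described looks like this; table and column names below are invented for illustration, not the poster's actual schema:

```sql
-- Outer-join a week-number list so empty weeks still appear, then self-join
-- the fact table to every earlier-or-equal week and take MAX(cumulative) to
-- carry the running total across the gaps. All names here are invented.
SELECT w.WeekNum,
       COALESCE(d.Deads, 0)                    AS Deads,
       COALESCE(MAX(prior.CumulativeDeads), 0) AS RunningTotal
FROM   WeekNumbers w
       LEFT OUTER JOIN DeadLoss d     ON d.WeekNum = w.WeekNum
       LEFT OUTER JOIN DeadLoss prior ON prior.WeekNum <= w.WeekNum
GROUP BY w.WeekNum, COALESCE(d.Deads, 0)
ORDER BY w.WeekNum;
```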
|
# ? Jun 24, 2011 14:13 |
|
Bob Morales posted:I really don't understand what you were saying to do. 8 minutes is only too long because someone else said so. I hope someone else can chime in and tell me I'm not making sense. Here's a picture of what I'm trying to explain. I fully understand what you want to do. Stop getting loving snippy it's not our fault your testing is super time sensitive and can't be executed through standard load processes.
|
# ? Jun 24, 2011 19:32 |
|
MrBishop posted:
edit: Copy and paste this to see what I'm saying - also. I updated what I originally said to do. code:
http://www.kodyaz.com/articles/sql-server-dates-table-using-tsql-cte-calendar-table.aspx benisntfunny fucked around with this message at 21:12 on Jun 24, 2011 |
# ? Jun 24, 2011 19:49 |
|
benisntfunny posted:Create a temp table with all the dates - right join them into your query

Thanks for following up, but maybe I didn't explain well enough where the problem lies, and that's in having a running total that continues to accumulate (0) through the missing weeks. If I'm reading your example correctly, it looks like it provides a way to have contiguous week numbers when there's missing data, but I just handled that with a FULL OUTER join between the DeadShit table and a WeekNumbers table. Like I said, I finally got a solution on my own, just wondering if there was a more elegant/efficient solution. What I started with is: code:
code:
|
# ? Jun 24, 2011 21:40 |
|
MrBishop posted:Thanks for following up, but maybe I didn't explain well enough where the problem lies, and that's in having a running total, that continues to accumulate (0) through the missing weeks. What I started with is: ah. sorry. I actually think you've probably done exactly what I would do too. I did just have trouble understanding you at first.
|
# ? Jun 24, 2011 22:19 |
|
You can also generate a list of weeks with a recursive CTE if the version of MS SQL you are using is new enough (2005 or newer I think).code:
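The recursive-CTE trick being referred to looks something like this; the anchor dates and names are illustrative:

```sql
-- Generate one row per week between two anchor dates (T-SQL, 2005+).
-- OPTION (MAXRECURSION 0) lifts the default 100-level recursion cap.
WITH Weeks (WeekStart) AS (
    SELECT CAST('2011-01-03' AS DATETIME)
    UNION ALL
    SELECT DATEADD(WEEK, 1, WeekStart)
    FROM   Weeks
    WHERE  WeekStart < '2011-06-20'
)
SELECT WeekStart,
       DATEPART(WEEK, WeekStart) AS WeekNum
FROM   Weeks
OPTION (MAXRECURSION 0);
```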
If you prefer week numbers, it's easy enough to modify the script above.
|
# ? Jun 24, 2011 22:49 |
|
benisntfunny posted:I hope someone else can chime in and tell me I'm not making sense. Are you saying to have two databases on the test server? If I was going to do that, I could just keep two databases on the replication slave.
|
# ? Jun 25, 2011 01:45 |
|
Bob Morales posted:Snippy?

The point of having it replicate to two places is that you keep your prod replica intact, but can locally access the data in test, which is read-only, so you can't really do any harm to it.
|
# ? Jun 25, 2011 13:01 |
|
benisntfunny posted:The point of having it replicate to two places is you keep your prod replicated but can locally access the data in test that is read only so you can't really do any harm to it. Here's what I have now: Production DB ---Internet--- Slave A ----- Test B We're going to do all our work with B, we just want to be able to re-sync it back up to match A the fastest way possible. A is read-only.
|
# ? Jun 25, 2011 16:21 |
|
This is most likely an absolutely retarded question, but I'm trying to set up a query I can modify to pull info from our accounting software to Excel. I'm dealing with about three years of customer data, so I want to be able to select the item codes and customer before I initiate the query rather than wait 10 years for the system to pull up all my info.

SELECT AR_InvoiceHistoryDetail.InvoiceNo,
AR_InvoiceHistoryHeader.ARDivisionNo,
AR_InvoiceHistoryHeader.BillToName,
AR_InvoiceHistoryHeader.CustomerNo,
AR_InvoiceHistoryHeader.InvoiceDate,
AR_InvoiceHistoryHeader.TransactionDate,
AR_InvoiceHistoryHeader.ShipToState,
AR_InvoiceHistoryHeader.ShipToCountryCode,
AR_InvoiceHistoryDetail.HeaderSeqNo,
AR_InvoiceHistoryDetail.DetailSeqNo,
AR_InvoiceHistoryDetail.ItemCode,
AR_InvoiceHistoryDetail.ItemType,
AR_InvoiceHistoryDetail.ItemCodeDesc,
AR_InvoiceHistoryDetail.ExtendedDescriptionKey,
AR_InvoiceHistoryDetail.SalesAcctKey,
AR_InvoiceHistoryDetail.CostOfGoodsSoldAcctKey,
AR_InvoiceHistoryDetail.InventoryAcctKey,
AR_InvoiceHistoryDetail.UnitOfMeasure,
AR_InvoiceHistoryDetail.SubjectToExemption,
AR_InvoiceHistoryDetail.Commissionable,
AR_InvoiceHistoryDetail.TaxClass,
AR_InvoiceHistoryDetail.Discount,
AR_InvoiceHistoryDetail.DropShip,
AR_InvoiceHistoryDetail.WarehouseCode,
AR_InvoiceHistoryDetail.PriceLevel,
AR_InvoiceHistoryDetail.ProductLine,
FROM AR_InvoiceHistoryDetail AR_InvoiceHistoryDetail, AR_InvoiceHistoryHeader AR_InvoiceHistoryHeader
WHERE AR_InvoiceHistoryHeader.HeaderSeqNo = AR_InvoiceHistoryDetail.HeaderSeqNo
AND AR_InvoiceHistoryHeader.InvoiceNo = AR_InvoiceHistoryDetail.InvoiceNo
AND ((AR_InvoiceHistoryHeader.TransactionDate Between {d '2009-01-01'} And {d '2010-12-31'}))

I want to add a AND AR_InvoiceHistoryHeader.BillToName = and a AR_InvoiceHistoryDetail.ItemCode Between to this query. I tried following their format and I tried googling a bit, but I haven't had any luck.
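As the replies further down spell out, the extra filters just chain onto the end of the existing WHERE clause; a sketch with placeholder literal values:

```sql
-- Appended to the end of the existing WHERE clause.
-- 'ACME CORP' and the item-code range are placeholder values.
AND AR_InvoiceHistoryHeader.BillToName = 'ACME CORP'
AND AR_InvoiceHistoryDetail.ItemCode BETWEEN '1000' AND '1999'
```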
|
# ? Jun 28, 2011 23:08 |
|
This is what, a Crystal Report? Or what kind of SQL?
|
# ? Jun 28, 2011 23:44 |
|
That seems very possible based on my googling. It is an Excel Query to MAS90 (from Sage) which is our finance system.
|
# ? Jun 28, 2011 23:54 |
|
okay well adding to a crystal report, the report selection formula is just like sql, so what has you tripped up exactly? i did see an errant comma at the end of the fields there.code:
If they are number type fields you don't use any kind of quotes, just a number
|
# ? Jun 29, 2011 00:28 |
|
Sweet thanks, I guess I was just making it more complicated than I thought. All the extra (( and { were screwing with me. Edit: What if I can't do between for my last item, but rather need specific item numbers? Can I do some kind of (1, 2, 3, 4) or do I need a separate line for each one? Also, what if I want to do a LIKE operator? Is it just LIKE "%word%" ? I'm getting errors for that, stupid SQL. I should probably try to learn it though... Anyone got a recommendation for an intro to SQL/Database Admin ? Lyon fucked around with this message at 01:09 on Jun 29, 2011 |
# ? Jun 29, 2011 00:51 |
|
You should be able to use any WHERE selection operator you'd like there. I'm actually surprised BETWEEN would work in that instance, as I thought it was restricted to num/date types. So for your specific questions (choosing the last field for convenience, may not actually work depending on values/data types):

AND AR_InvoiceHistoryDetail.ItemCode IN(1,2,3,4)

If the values are strings:

AND AR_InvoiceHistoryDetail.ItemCode IN("1","2","3","4")

Using LIKE:

AND AR_InvoiceHistoryDetail.ItemCode LIKE "%thinger%"

These should work, I'm assuming MSSQL (if so, use single quote '' instead of double quote "" for the strings). I'm leaving the double quotes in my examples because that's what Sprawl used. The main reason LIKE comparisons fail is because the field being searched (e.g. ItemCode) is of type text or ntext. This means it's 'too big' to do LIKE comparisons on because the field could contain a URL or an entire novel. If performance isn't a concern, and you're certain the size of the field is below a certain threshold, you can just convert to VARCHAR(MAX), e.g.

AND CONVERT(varchar(MAX), AR_InvoiceHistoryDetail.ItemCode) LIKE "%thinger%"

I'm not even sure that's the source of your error though.
|
# ? Jun 29, 2011 01:47 |
|
This is super frustrating, won't let me use like and won't "compile" when I try to use your convert code, frustrating as hell. I was hoping to make this flexible but it seems like I'll just have to look up the exact customer # for each company and just an = instead of letting me type in names/parts of names.
|
# ? Jun 29, 2011 02:23 |
|
Lyon posted:This is super frustrating, won't let me use like and won't "compile" when I try to use your convert code, frustrating as hell.

Oh, you can actually use that kind of list, but in Crystal the syntax is a little different. I use and AR_InvoiceHistoryDetail.ItemCode in [1,2,3,4], so try that first.
|
# ? Jun 29, 2011 03:01 |
|
Aw crap. I didn't realize it was crystal reports sub-SQL; you can probably ignore most of what I said and just listen to Sprawl instead. I have no idea what's legal in Crystal or not.
|
# ? Jun 29, 2011 07:58 |
|
Hey guys. I know this is only tangentially related to SQL, but I didn't think starting a new thread just for my stupid question would be appreciated. So one of our pages went bad at work a couple weeks ago. I finally got some time to work on it and figured out what was wrong today; after it retrieved an ADO Recordset from SQL, it was trying to reference a field with adoRS("Policy No") when it should have been adoRS("Policy_No"). The field name is most definitely Policy_No in SQL Server. I fixed it, it works now... but I didn't write this code and it had been in place for years before I touched it. And apparently it worked fine as it was, defying all logic as I know it. Now so long as the problem was clear and I fixed it I don't care, but my boss is going to demand an explanation tomorrow for why it used to work and then just kind of stopped. He'll be pissed if I can't give him one he likes. Anybody know any reason off the top of their head some clearly erroneous code like that would identify as the correct field, and then just... stop?
|
# ? Jun 30, 2011 01:53 |
|
Backhand posted:Hey guys. I know this is only tangentially related to SQL, but I didn't think starting a new thread just for my stupid question would be appreciated. Did you upgrade the server it was running on any time recently? (including any service packs/hotfixes for IIS/your web server?) That's truly bizarre, incidentally, I don't know of any languages that would try 'Policy_No' if 'Policy No' failed. (That said ... is your server side code ASP, perchance?)
|
# ? Jun 30, 2011 02:09 |
|
Yeah I'd have to agree; has to be some server change, either programmatic (hotfix, patch, upgrade, etc.) or configuration (namespace, character set, some obscure rule).
|
# ? Jun 30, 2011 02:38 |
|
|
It's very possible the ADO recordset was using an ODBC driver at some point that was updated to no longer replace blank spaces with underscores.
|
# ? Jun 30, 2011 04:08 |