|
I use Toad for Oracle stuff and SQL Server Management Studio for SQL Server 05 stuff
|
# ? Dec 18, 2008 20:41 |
|
|
# ? May 14, 2024 00:13 |
|
tayl0r posted:Do you guys use any tools like PL/SQL Developer, Oracle SQL Developer, Visual Studio, etc? Echoing Jethro, I used Toad for Oracle. It isn't very polished and it is very expensive, but it is the best tool I've used yet.
|
# ? Dec 18, 2008 20:48 |
|
You should check out PL/SQL Developer if you're on Oracle. There is a free trial version. It's totally worth it. http://www.allroundautomations.nl/plsqldev.html The real product is only $180 to buy as well, which any company should gladly pay for a good tool.
|
# ? Dec 18, 2008 21:14 |
|
I could not work without TOAD, period. It is worth 10 times whatever ridiculous price they charge for it.
|
# ? Dec 18, 2008 21:51 |
|
Personally I wouldn't pay a cent more than what Toad costs.
|
# ? Dec 18, 2008 22:10 |
|
I use SQL Server Management Studio. It's kind of sad, but I really like the indenting, excellent undo support, and syntax highlighting. That's about all in terms of benefits -- emacs vi lol. SSMS 2008 has syntax highlighting, but from trivial testing, I don't like it. I wish I could find someone interested in working on SqlWrite with me, but so far, no go.
|
# ? Dec 19, 2008 00:42 |
|
I found the thread after posting in the programming questions megathread. I figured that wasn't the place for this stuff, but I didn't see anything else come up with 'SQL.' Silly me. I have two questions, with one being more bound to SQL as a language, and the other more a database architecture question:

1. Say I have a table made of stock data like 'symbol, closing date, closing price.' Any one column has redundant values if taken by itself, though each row in the table is unique. How can I list out only the unique values in the symbol column?

2. I wrote a Perl script to import a ton of old stock data; it's the current NYSE listings back to 1985, wherever possible. I think it'll be a few million lines, and it inserts each one one-at-a-time using the mysql command-line interface. I think that's a big problem, and I wonder how I can speed up the upload, given it's in a CSV file format.

2b. After starting around 12:00 midnight, I got 1.5% through the set by 7:30AM. The system hadn't locked, and I saw the script blip through the data set of one more symbol before I yanked it. My system clock was set to around 12:30AM. I'm using the MySQL bundled with Ubuntu 8.10 without any configuration changes. What are some basic steps I can take to make this upload work? I can run it again overnight to determine if it's a scaling issue or what.
|
# ? Dec 19, 2008 02:41 |
|
If someone is willing to give me some links, I'll start some documentation on different ways to bulk load data with different RDBMSes. I have done one thing so far, and that is document the sometimes-esoteric error messages from bcp.exe, the Bulk Copy Tool for MS SQL Server. I would love to have a nice resource for helping people avoid the one-row-at-a-time approach that they so often resort to, due to poor/hard to find documentation.
|
# ? Dec 19, 2008 02:50 |
|
rocko: The easiest and least configurable way is with the distinct keyword. code:
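(The snippet didn't survive the archive; it was presumably something along these lines, assuming a hypothetical stocks(symbol, close_date, close_price) table:)

```sql
select distinct symbol
from stocks;
```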
Another way is with aggregate functions and the group by clause: code:
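(This snippet was also lost; presumably something like the following, against the same hypothetical stocks table:)

```sql
select symbol
from stocks
group by symbol;
```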
One thing to keep in mind with count() is that it will only increment the counter if the thing passed into count() is NOT NULL. So, if you say "count(symbol)", and in 5 rows the symbol is NULL and in 10 rows the symbol is NOT NULL, you will get a count back of 10. If you were to do a count(1) or count(*) (both are exactly the same) you would get a count back of 15.
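That NULL behavior is easy to check yourself; a quick sketch using SQLite through Python (the stocks table is hypothetical, but the counting rules are standard SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table stocks (symbol text)")
# 10 rows with a symbol, 5 rows where symbol is NULL
conn.executemany("insert into stocks values (?)", [("IBM",)] * 10 + [(None,)] * 5)

# count(symbol) skips the NULL rows; count(*) counts every row
count_symbol = conn.execute("select count(symbol) from stocks").fetchone()[0]
count_star = conn.execute("select count(*) from stocks").fetchone()[0]
print(count_symbol, count_star)  # 10 15
```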
|
# ? Dec 19, 2008 03:23 |
|
rocko: for bulk loading, just search google for "mysql bulk load" or "mysql bulk insert". It looks like this is what you want: http://dev.mysql.com/doc/refman/5.0/en/load-data.html If you want to try just using normal inserts, one thing you can do is turn off auto-commit and then only commit every 10,000 rows or something like that. Committing takes a lot of time. Also, remove or disable any indexes, primary keys, foreign keys, etc. on the table. Another thing, and this will speed up ANY sql statement from a script or a program, is using bind variables. You might already be doing this though. Here is what I mean: code:
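tayl0r's original snippet was lost in the archive; here is the same idea (one prepared statement with bind variables, committing in batches instead of per row) sketched with Python's sqlite3. The table and row data are made up, and the real script would use Perl DBI's prepare/execute against MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the real MySQL connection
conn.execute("create table quotes (symbol text, close_date text, close_price real)")
conn.isolation_level = None          # take manual control of transactions
cur = conn.cursor()

rows = [("IBM", "1985-01-02", 123.4567)] * 25000   # pretend this came from the CSV

cur.execute("begin")
for i, row in enumerate(rows, start=1):
    # The ? placeholders are the bind variables: the statement is parsed
    # once and reused, instead of re-parsing a new SQL string for every row.
    cur.execute("insert into quotes values (?, ?, ?)", row)
    if i % 10000 == 0:               # commit every 10k rows, not every row
        cur.execute("commit")
        cur.execute("begin")
cur.execute("commit")

count = conn.execute("select count(*) from quotes").fetchone()[0]
print(count)  # 25000
```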
tayl0r fucked around with this message at 04:17 on Dec 19, 2008 |
# ? Dec 19, 2008 04:07 |
|
tayl0r posted:rocko: for bulk loading just search on google for "mysql bulk load" or "mysql bulk insert" Edit: I found an example of importing CSV data. It almost worked. It looks like I defined some floating point quantities as DECIMAL(10,2), but the CSV data I'm importing has four places of precision after the decimal point. It got the symbol and date correct, but everything else was just strange. I will look into what I might do to massage that, though I think I should redefine the table to handle the extra precision. After all, if I've got it, I might as well use it. Bulk loading was much faster for the one symbol I loaded. Should I just commit when I'm finally done with my load? At that point I'd have gone from 0 rows to many millions without committing once. Would that introduce different problems? Rocko Bonaparte fucked around with this message at 07:38 on Dec 19, 2008 |
# ? Dec 19, 2008 07:02 |
|
You probably don't want to keep a multi-million row transaction going. Honestly I'm not sure what the drawbacks are, but I'm sure there are some. It will probably depend on the RDBMS, but a worst case would be that the entire transaction history is kept in memory until the commit happens. At best, the commit will just take a super long time, and if you lose power or something during the commit you will have to start all over again. I would commit every 5-10k rows. Oh, and tayl0r was using Perl's DBI (http://www.perl.com/pub/a/1999/10/DBI.html). edit: as for precompilation, I believe it just means that the SQL engine won't have to parse the SQL string again (I don't know this for fact, just going off of tayl0r's post). edit2: oh, and out of curiosity, where did you get the nyse records? is that publicly available somewhere? supster fucked around with this message at 11:24 on Dec 19, 2008 |
# ? Dec 19, 2008 11:18 |
|
supster posted:edit: and I don't know what you mean by precompilation, but it just means that the SQL engine won't have to parse the SQL string again (I don't know this for fact, just going off of tayl0r's post). I was using precompiling loosely, and parsing would be part of that. I figured those commands ensured the engine knew the string was good and could use it like a template without worrying about it again. I get daily stock data using Finance::QuoteHist, which pulls from online sources. Though I read around that sites like QuoteMedia or Yahoo don't care if you comb them every night, I'm starting to think that's not true. But if you wanted to do a big pull, you'll be fine. You'll have to get a list of stock symbols and feed them into Finance::QuoteHist one at a time. You should be able to go back to... about 1983 I think. Getting the symbols is a bit of a pain. It looks like you have to do some parsing, no matter how you cut it. And asking around gets you a bunch of "RTFM n00b" for some reason. I have the URLs I used at home, which I can forward along. From there you can filter out the stock symbols for each wget'd page.
|
# ? Dec 19, 2008 16:47 |
|
supster posted:I would commit every 5-10k rows. Maybe for a DB like MySQL.
|
# ? Dec 19, 2008 18:58 |
|
Victor posted:Maybe for a DB like MySQL. For SQL Server on a fairly beefy machine, I'll transfer rows from one table to another in ~5 million row chunks. For bulk loading data, Microsoft's bcp tool bypasses the sort of transaction log problems noted above by using special bulk load semantics. I just break the bcp files up in 5mil chunks to make it easier to debug certain kinds of errors whose difficulty of debugging is O(n), where n is the size of the file. Although, this brings me to a good point... why aren't you using MySQL's bulk loading tool? It will do this MUCH faster than you can do it in Perl or Java.
|
# ? Dec 20, 2008 00:34 |
|
Do you even have to commit using the MySQL bulk loader? You do in Oracle.
|
# ? Dec 20, 2008 00:41 |
|
No. MySQL's bulk load tool is pretty primitive... LOAD DATA. It's pretty simple, you just need to have your data in a format it can read (basically a csv). Even if your data isn't already in a format LOAD DATA can read, it will be easier and faster to parse your data into that format and then use LOAD DATA to insert it into the database.
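For reference, a LOAD DATA statement for the stock-quotes case might look roughly like this (file path, table, and column names are hypothetical; see the manual page linked earlier for the full set of options):

```sql
load data local infile '/tmp/quotes.csv'
into table quotes
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n'
(symbol, close_date, close_price);
```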
|
# ? Dec 20, 2008 01:08 |
|
supster posted:No. MySQL's bulk load tool is pretty primitive... LOAD DATA. It's pretty simple, you just need to have your data in a format it can read (basically a csv). Yeah, I would never suggest using normal inserts if a bulk loading tool is an option. Bulk loaders are hundreds of times faster; it is pretty incredible. It makes you realize how much overhead there is in normal inserts. Another thing you can do is bulk load into a fresh table, then merge / insert that data into your pre-existing table using insert + select. That also lets you do aggregations, filters, joins, other stuff like that.
|
# ? Dec 20, 2008 01:52 |
|
supster posted:He is using MySQL though and without a bulk loading tool. Although 5 million rows sounds crazy, good to know that bcp can handle that, I will have to remember that. Right now my problem with the bulk insert is that it's messing up my decimal values. I asked about it before, but it was amongst a bunch of other rambling. <bunch of stupidity truncated. This is all stupidity. They did match up. God knows what I was comparing last night> Things aren't quite ordered the same way, but the values are just wrong no matter what. I am thinking it has something to do with having two significant digits in the table, but 4 in the import. I am looking into how I can change those representations--let me look up the command first. But I wondered why it accepted the difference in precision and distorted the numbers like that. I think that's really scary. Rocko Bonaparte fucked around with this message at 05:06 on Dec 20, 2008 |
# ? Dec 20, 2008 04:02 |
|
Ummmmm... the columns just look out-of-order.
|
# ? Dec 20, 2008 04:16 |
|
Victor posted:Ummmmm... the columns just look out-of-order. On the bright side, I figured out how to use the alter command to redefine the data type for a column.
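For anyone following along, the MySQL syntax for that is roughly this (table and column names hypothetical):

```sql
-- MODIFY redefines the column's type in place (MySQL syntax)
alter table quotes modify close_price decimal(10,4);
```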
|
# ? Dec 20, 2008 05:05 |
|
MoNsTeR posted:I could not work without TOAD, period. It is worth 10 times whatever ridiculous price they charge for it. Unless you are a DBA TOAD is a waste. PL/SQL Developer is much better for development work. It is cheaper and far superior. Xae fucked around with this message at 20:12 on Dec 20, 2008 |
# ? Dec 20, 2008 20:06 |
|
Okay, I'm still a rookie here, but this is driving me nuts. I'm displaying a list of records as shown. I want to show only the "topmost" row for each unique filename. I'm using this GROUP BY statement, but I'm having issues. Records like 04-4A5204.pdf are dropping off just fine: I want serial #8 to drop off and just show serial #10. But with records 21 and 9, I want 9 to drop off and just show 21; instead, I'm getting the opposite. Should I be using something other than GROUP BY for more control? It doesn't look like I can expect the top sorted result to be the one they're grouped by. Oh, and I'm using mysql.
|
# ? Dec 22, 2008 20:15 |
|
Civil posted:Okay, I'm still a rookie here, but this is driving me nuts. In this case, if you can know for a fact that a higher serial also means a higher (or equal) bodate, then I'd say do max(serial) as serial, max(bodate) as bodate and also put ext in the GROUP BY. Also, please put your queries in [code] blocks. It's kinda impossible to copy and paste from a screen shot.
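A sketch of that suggestion, using SQLite through Python (the docs table and its columns are guesses at what's in the screenshot):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table docs (filename text, ext text, serial integer, bodate text)")
conn.executemany("insert into docs values (?, ?, ?, ?)", [
    ("04-4A5204.pdf", "pdf", 8,  "2008-12-01"),   # older revision, should drop off
    ("04-4A5204.pdf", "pdf", 10, "2008-12-05"),   # topmost row we want to keep
])

# Group on the identifying columns and take max() of the rest, which works
# as long as a higher serial always goes with a higher (or equal) bodate.
row = conn.execute("""
    select filename, ext, max(serial) as serial, max(bodate) as bodate
    from docs
    group by filename, ext
""").fetchone()
print(row)  # ('04-4A5204.pdf', 'pdf', 10, '2008-12-05')
```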
|
# ? Dec 22, 2008 20:27 |
|
Xae posted:Unless you are a DBA TOAD is a waste. PL/SQL Developer is much better for development work. It is cheaper and far superior. I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot? Vanadium Dame fucked around with this message at 02:48 on Dec 23, 2008 |
# ? Dec 23, 2008 02:43 |
|
Markoff Chaney posted:I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot? Just download the free trial and see for yourself. Or, look at their website. I'm sure they list out all the features their software has. You will seriously poo poo yourself after you see how much better PL/SQL Developer is.
|
# ? Dec 23, 2008 03:26 |
|
Markoff Chaney posted:I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot? Not crash, multi-threaded so one query won't lock the whole thing, not crash, superior interface, vastly better debugger, built in testing, doesn't crash. The interface as a whole is miles better. It is the only development tool I have ever purchased out of my own pocket. I ended up having a client buy an unlimited license to support the people who wanted to ditch TOAD after seeing me use PL/SQL Developer. The Unlimited license was less than the cost of 6 TOAD licenses. Xae fucked around with this message at 05:04 on Dec 23, 2008 |
# ? Dec 23, 2008 05:00 |
|
Is it better/faster to use Distinct or just compute unique entries using php?
|
# ? Dec 23, 2008 05:37 |
|
hexadecimal posted:Is it better/faster to use Distinct or just compute unique entries using php? Ultimately it will depend on the data set (among many other things), but just use distinct - it is almost guaranteed to perform better.
|
# ? Dec 23, 2008 05:42 |
|
supster posted:Ultimately it will depend on the data set (among many other things), but just use distinct - it is almost guaranteed to perform better. agreed. plus, typing "distinct" has got to be easier than any solution outside of the database.
|
# ? Dec 23, 2008 09:52 |
|
Xae posted:Not crash, multi-threaded so one query won't lock the whole thing, not crash, superior interface, vastly better debugger, built in testing, doesn't crash. The earliest TOAD I've used is 7.5 and that's multi-threaded, so I'm not sure what ancient version you've tried. Since upgrading to 9.5 I've had maybe 10 crashes total; that's nearly a solid year of using it 8-10 hours a day. To be fair though, I'll download the PL/SQL Developer trial. Edit: Hoooooo boy, where to start.
1. Can't have more than one statement in a SQL window
2. No Describe/View/whatever hotkey
3. View menu command is woefully underpowered (where's my Used By tab?!) and schizophrenic to boot (why is there a separate View and View Spec & Body?)
4. Explain Plan pops up a separate window and is less readable
These may seem small, but they are so integral to my minute-by-minute productivity that they are dealbreakers. I do like how it uses line-printer style white/green in the query results window, how the editor itself behaves, that it's lightweight and fast, and how it manages to make a bit more efficient use of screen real estate. If you really couldn't afford TOAD and didn't like Oracle SQL Developer this could work, but by no means could it replace TOAD for a seasoned user. Now, a lot of that is familiarity. Someone who's never used TOAD isn't going to miss their ultra-powerful F4 key, isn't going to expect Ctrl+Enter to run queries and Ctrl+E to explain plan. And another big chunk is what you're working on and how you personally organize it. Maybe you have a printed ERD and all your packages open in editors so you don't need that describe hotkey like someone with thousands of tables and hundreds of procedures. Maybe you don't organize your objects by schema, so the behavior of the object browser makes sense. It's all personal preference ultimately, so if this thing floats your boat, hey, great, you get to save a kajillion dollars on TOAD licenses as gravy.
But no one who actually knows how to leverage TOAD's functionality would ever switch to it. I would bet a hundred bucks easy that no DB developer in my office would prefer it. MoNsTeR fucked around with this message at 17:52 on Dec 23, 2008 |
# ? Dec 23, 2008 16:52 |
|
Is Toad for MYSQL (which appears to be far less expensive, that is to say, free) as good as the TOAD y'all are using for, I'm assuming, Oracle?
|
# ? Dec 23, 2008 20:14 |
|
Golbez posted:Is Toad for MYSQL (which appears to be far less expensive, that is to say, free) as good as the TOAD y'all are using for, I'm assuming, Oracle? An Oracle-like TOAD for MySQL would be overkill. There's a reason that phpMyAdmin is still so popular for MySQL. I really don't think any MySQL GUI is better than any other, as long as it doesn't crash it'll do what I need it to. (Which, with MySQL, is "not much")
|
# ? Dec 23, 2008 20:33 |
|
MoNsTeR posted:The earliest TOAD I've used is 7.5 and that's multi-threaded, so I'm not sure what ancient version you've tried. Since upgrading to 9.5 I've had maybe 10 crashes total, that's nearly a solid year of using it 8-10 hours a day. 1) Yes, you can. 2) I think you can control-click a table name, or any object name, to get the description. 3) Because you don't always need to see the body? It comes in handy when you don't want to grab a 20k SLOC package just to see the note in the spec. 4) You can alter the explain plan window quite a bit. You can set up custom object filters that let you just peek at any objects you want. It is also fairly easy to develop your own plug-ins to add any TOAD-like features you just can't do without. Almost the entire program can be customized.
|
# ? Dec 25, 2008 00:10 |
|
Just about anything the program has in a menu somewhere can be assigned a hotkey. I bind the following as a starting point with PL/SQL Developer. code:
* Move the input cursor over an object in a query (such as a table) and Ctrl+D to display or Ctrl+Shift+E to edit (view structure and indexes), or Ctrl+Q to launch a "select * from TABLE t".
* SQL Beautifier.
* Easy export of query results to Excel, with one tab for the SQL statement and one for the results.
* Ability to add new tools (I have one to launch SQL*Plus for the current session and one to run dbms_xplan.display_cursor for the previous statement).
* Can press Alt+S to show one record with columns and values vertical instead of horizontal (for really wide tables).
|
# ? Dec 25, 2008 00:36 |
|
What is the most efficient way to check for the existence of a record in a table matching certain criteria? Is it COUNT, or using IF EXISTS, or something else?
hexadecimal fucked around with this message at 04:24 on Dec 25, 2008 |
# ? Dec 25, 2008 04:01 |
|
Ugh. My brain hurts from searching and searching, so I'm resorting to just asking for help. I've got a table tblFoo that's something like this: code:
My first thought was to create a temporary table populated with dates that incremented from the smallest date to the largest date in the table, and RIGHT JOINed that with the above table, and have my script leave a blank spot on the graph when the ID is NULL (as a result of the RIGHT JOIN). This will at least provide proper spacing on my x-axis, and just leave blank spots on my graph. Is this a good solution? Also, is creating a temporary table much different than doing something like the following: code:
|
# ? Dec 25, 2008 06:52 |
|
hexadecimal posted:What is the most efficient way to check for existance of some record in a table matching certain criteria? Is it COUNT or using IF EXISTS, or something else?
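The reply didn't survive in this archive, but the usual answer is EXISTS: it can stop at the first matching row, while count(*) has to tally every match. A sketch using SQLite through Python (hypothetical quotes table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table quotes (symbol text)")
conn.executemany("insert into quotes values (?)", [("IBM",), ("MSFT",)])

# exists(...) returns 0 or 1 and can bail out at the first matching row;
# count(*) would keep scanning to total up every match.
found = conn.execute(
    "select exists(select 1 from quotes where symbol = ?)", ("IBM",)
).fetchone()[0]
print(found)  # 1
```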
|
# ? Dec 25, 2008 06:59 |
|
jovial_cynic posted:What's the syntax for adding additional rows like this? Instead of creating a table, could I just make a "virtual table" in this fashion, building the multi-row select statement via perl? select UNION select UNION select UNION etc
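A sketch of that chained-UNION approach, using SQLite through Python (table and dates hypothetical; SQLite lacks RIGHT JOIN, so the date "spine" goes on the left of a LEFT JOIN, which is equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table tblFoo (id integer, d text)")
conn.executemany("insert into tblFoo values (?, ?)",
                 [(1, "2008-12-01"), (2, "2008-12-03")])  # note the missing day

# The chained UNIONs form the "virtual table" of dates (a Perl script would
# generate them); outer-joining against it leaves NULL ids on missing days.
rows = conn.execute("""
    select spine.d, tblFoo.id
    from (select '2008-12-01' as d
          union select '2008-12-02'
          union select '2008-12-03') as spine
    left join tblFoo on tblFoo.d = spine.d
    order by spine.d
""").fetchall()
print(rows)  # [('2008-12-01', 1), ('2008-12-02', None), ('2008-12-03', 2)]
```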
|
# ? Dec 25, 2008 07:08 |
|
Triple Tech posted:select UNION select UNION select UNION etc YES! Thanks!
|
# ? Dec 25, 2008 07:34 |