Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!
I use Toad for Oracle stuff and SQL Server Management Studio for SQL Server 05 stuff


Operation Atlas
Dec 17, 2003

Bliss Can Be Bought

tayl0r posted:

Do you guys use any tools like PL/SQL Developer, Oracle SQL Developer, Visual Studio, etc?

PL/SQL Developer is by far the best tool that I've ever used for writing SQL and working with the database. It's specifically a PL/SQL development IDE but it also has a ton of things just for browsing database objects, looking at explain plans, writing SQL, importing & exporting data from tables, comparing table DDL in two different schemas or databases, etc.

It's a shame it only works with Oracle.

Echoing Jethro, I use Toad for Oracle. It isn't very polished and it's very expensive, but it's the best tool I've used yet.

tayl0r
Oct 3, 2002
This post brought to you by SOE
You should check out PL/SQL Developer if you're on Oracle. There is a free trial version. It's totally worth it. http://www.allroundautomations.nl/plsqldev.html

The real product is only $180 to buy as well, which any company should gladly pay for a good tool.

MoNsTeR
Jun 29, 2002

I could not work without TOAD, period. It is worth 10 times whatever ridiculous price they charge for it.

tayl0r
Oct 3, 2002
This post brought to you by SOE
Personally I wouldn't pay a cent more than what Toad costs.

Victor
Jun 18, 2004
I use SQL Server Management Studio. It's kind of sad, but I really like the indenting, excellent undo support, and syntax highlighting. That's about all in terms of benefits -- emacs vi lol. SSMS 2008 has syntax highlighting, but from trivial testing, I don't like it. I wish I could find someone interested in working on SqlWrite with me, but so far, no go.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I found the thread after posting in the programming questions megathread. I figured that wasn't the place for this stuff, but I didn't see anything else come up with 'SQL.' Silly me.

I have two questions, with one being more bound to SQL as a language, and the other more a database architectural question:

1. Say I have a table made of stock data like 'symbol, closing date, closing price.' The symbol column has redundant values if taken by itself, though each row in the table is unique. How can I list out only the unique values in the symbol column?

2. I wrote a Perl script to import a ton of old stock data; it's the current NYSE listings back to 1985, wherever possible. I think it'll be a few million lines, and it inserts each one one-at-a-time using the mysql command-line interface. I think that's a big problem, and I wonder how I can bulk-load the upload, given it's in a CSV file format.
2b. After starting around 12:00 midnight, I got 1.5% through the set by 7:30AM. The system hadn't locked, and I saw the script blip through the data set of one more symbol before I yanked it. My system clock was set to around 12:30AM. I'm using the MySQL bundled with Ubuntu 8.10 without any configuration changes. What are some basic steps I can take to make this upload work? I can try to run again overnight to determine if it's a scaling issue or what.

Victor
Jun 18, 2004
If someone is willing to give me some links, I'll start some documentation on different ways to bulk load data with different RDBMSes. I have done one thing so far, and that is document the sometimes-esoteric error messages from bcp.exe, the Bulk Copy Tool for MS SQL Server. I would love to have a nice resource for helping people avoid the one-row-at-a-time approach that they so often resort to, due to poor/hard to find documentation.

tayl0r
Oct 3, 2002
This post brought to you by SOE
rocko:
The easiest and least configurable way is with the distinct keyword.

code:
select distinct columna, columnb from table where blah = blah;
It basically takes your whole output and removes all duplicate rows, similar to piping sorted text through `uniq` on Linux. So, if you just "select distinct symbol from table" you'll get a list of all the unique symbols.
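A runnable sketch of the idea (using Python's sqlite3 as a stand-in for MySQL since it needs no server; the table and data are made up):

```python
import sqlite3

# In-memory database with a toy version of the stock table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (symbol TEXT, close_date TEXT, close_price REAL)")
conn.executemany(
    "INSERT INTO quotes VALUES (?, ?, ?)",
    [("MSFT", "2008-12-01", 20.22),
     ("MSFT", "2008-12-02", 19.87),
     ("IBM",  "2008-12-01", 80.10)],
)

# DISTINCT collapses duplicate values in the selected column(s)
symbols = [row[0] for row in
           conn.execute("SELECT DISTINCT symbol FROM quotes ORDER BY symbol")]
print(symbols)  # ['IBM', 'MSFT']
```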

Another way is with aggregate functions and the group by clause:
code:
select count(1) cnt, symbol
from table
group by symbol
The group by clause tells the database to group all the rows with the same value of "symbol" together. Then, it applies the aggregate functions. In this case, we're only doing a count, so it just counts the rows. You can also use sum(), min(), max(), and a few others.

One thing to keep in mind with count() is that it will only increment the counter if the thing passed into count() is NOT NULL. So, if you say "count(symbol)", and in 5 rows the symbol is NULL and in 10 rows the symbol is NOT NULL, you will get a count back of 10. If you were to do a count(1) or count(*) (both are exactly the same) you would get a count back of 15.
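A quick way to see the NULL behavior for yourself (again sqlite3 as a stand-in; the 10/5 split mirrors the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (symbol TEXT)")
# 10 non-NULL symbols and 5 NULLs
conn.executemany("INSERT INTO t VALUES (?)", [("MSFT",)] * 10 + [(None,)] * 5)

count_col = conn.execute("SELECT count(symbol) FROM t").fetchone()[0]  # skips NULLs
count_star = conn.execute("SELECT count(*) FROM t").fetchone()[0]      # counts every row
print(count_col, count_star)  # 10 15
```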

tayl0r
Oct 3, 2002
This post brought to you by SOE
rocko: for bulk loading just search on google for "mysql bulk load" or "mysql bulk insert"
it looks like this is what you want: http://dev.mysql.com/doc/refman/5.0/en/load-data.html

If you want to try just using normal inserts, one thing you can do is turn off auto-commit and then only commit every 10,000 rows or something like that. Committing takes a lot of time. Also, remove or disable any indexes, primary keys, foreign keys, etc. from the table.

Another thing, and this will speed up ANY sql statement from a script or a program, is by using bind variables. You might already be doing this though. Here is what I mean:
code:
# traditional SQL: builds and parses a new statement string for every row
foreach my $row (@data) {
   my ($symbol, $price, $date) = @$row;
   my $query = $DBH->prepare("insert into x values ('$symbol', $price, STR_TO_DATE('$date', '%m/%d/%Y'))");
   $query->execute();
}

# using bind variables: prepared once, executed many times
my $query = $DBH->prepare("insert into x values (?, ?, STR_TO_DATE(?, '%m/%d/%Y'))");
foreach my $row (@data) {
  my ($symbol, $price, $date) = @$row;
  $query->execute($symbol, $price, $date);
}
See what I mean? This removes the database overhead of having to parse the SQL statement every time.
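The same two tricks (statement reuse and batched commits) sketched in Python's DB-API, with sqlite3 standing in for MySQL; the table, data, and batch size are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (symbol TEXT, price REAL, d TEXT)")

rows = [("MSFT", 100.05, "2008-01-01")] * 25000
BATCH = 10_000  # commit every 10k rows instead of after every insert

for start in range(0, len(rows), BATCH):
    # executemany reuses one prepared statement for the whole batch
    conn.executemany("INSERT INTO x VALUES (?, ?, ?)", rows[start:start + BATCH])
    conn.commit()

total = conn.execute("SELECT count(*) FROM x").fetchone()[0]
print(total)  # 25000
```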

tayl0r fucked around with this message at 04:17 on Dec 19, 2008

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

tayl0r posted:

rocko: for bulk loading just search on google for "mysql bulk load" or "mysql bulk insert"
it looks like this is what you want: http://dev.mysql.com/doc/refman/5.0/en/load-data.html

If you want try just using normal inserts, one thing you can do is turn off auto-commit and then only commit every 10,000 rows or something like that. Commiting takes a lot of time. Also, remove or disable any indexes, primary keys, foreign keys, etc from the table.

Another thing, and this will speed up ANY sql statement from a script or a program, is by using bind variables. You might already be doing this though. Here is what I mean:

See what I mean? This removes the database overhead of having to parse the SQL statement every time.
I'll go take a peek around. Do bind variables involve precompilation in some way? It looks like you wrote your example in Perl, so I assume you used some module or another. Do you recommend any particular Perl module for this stuff?

Edit: I found an example of importing CSV data. It almost worked. It looks like I defined some floating point quantities as DECIMAL(10,2), but the CSV data I'm importing has four digits of precision after the decimal point. It got the symbol and date correct, but everything else was just strange. I will look into what I might do to massage that, though I think I should redefine the table to handle the extra precision. After all, if I've got it, I might as well use it.

Bulk loading was much faster for the one symbol I loaded. Should I just commit when I'm finally done with my load? At that point I'd have gone from 0 rows to many millions without committing once. Would that introduce different problems?

Rocko Bonaparte fucked around with this message at 07:38 on Dec 19, 2008

supster
Sep 26, 2003

I'M TOO FUCKING STUPID
TO READ A SIMPLE GRAPH
You probably don't want to keep a multi-million row transaction going. Honestly I'm not sure what the drawbacks are, but I'm sure there are some. It will probably depend on the RDBMS, but a worst-case would be that the entire transaction history is kept in memory until the commit happens. At best, the commit will just take a super long time and if you lose power or something during the commit you will have to start all over again.

I would commit every 5-10k rows.


Oh, and tayl0r was using Perl's DBI (http://www.perl.com/pub/a/1999/10/DBI.html).


edit: and I don't know what you mean by precompilation, but it just means that the SQL engine won't have to parse the SQL string again (I don't know this for fact, just going off of tayl0r's post).

edit2: oh, and out of curiosity where did you get the nyse records? is that publicly available somewhere?

supster fucked around with this message at 11:24 on Dec 19, 2008

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

supster posted:

edit: and I don't know what you mean by precompilation, but it just means that the SQL engine won't have to parse the SQL string again (I don't know this for fact, just going off of tayl0r's post).

edit2: oh, and out of curiosity where did you get the nyse records? is that publicly available somewhere?
Tonight I'll try to import everything, and commit after every symbol is uploaded. I have to determine why all the pricing data got garbled on the test run first. I will look up that module as well since my quote fetching code is in Perl, although I do all my calculations in Java. Haven't gotten there yet with a database though.

I was using precompiling loosely, and parsing would be part of that. I figured those commands ensured the engine knew the string was good and could use it like a template without worrying about it again.

I get daily stock data using Finance::QuoteHist, which pulls from online sources. Though I've read around that sites like QuoteMedia or Yahoo don't care if you comb them every night, I'm starting to think that's not true. But if you just want to do one big pull, you'll be fine. You'll have to get a list of stock symbols and feed them into Finance::QuoteHist one at a time. You should be able to go back to... about 1983, I think.

Getting the symbols is a bit of a pain. It looks like you have to do some parsing, no matter how you cut it. And asking around gets you a bunch of "RTFM n00b" for some reason. I have the URLs I used at home, which I can forward along. From there you can filter out the stock symbols for each wget'd page.

Victor
Jun 18, 2004

supster posted:

I would commit every 5-10k rows.
Maybe for a DB like MySQL. For SQL Server on a fairly beefy machine, I'll transfer rows from one table to another in ~5 million row chunks. For bulk loading data, Microsoft's bcp tool bypasses the sort of transaction log problems noted above by using special bulk load semantics. I just break the bcp files up into 5mil chunks to make it easier to debug certain kinds of errors whose difficulty of debugging is O(n), where n is the size of the file.

supster
Sep 26, 2003

I'M TOO FUCKING STUPID
TO READ A SIMPLE GRAPH

Victor posted:

Maybe for a DB like MySQL. For SQL Server on a fairly beefy machine, I'll transfer rows from one table to another in ~5 million row chunks. For bulk loading data, Microsoft's bcp tool bypasses the sort of transaction log problems noted above by using special bulk load semantics. I just break the bcp files up into 5mil chunks to make it easier to debug certain kinds of errors whose difficulty of debugging is O(n), where n is the size of the file.
He's using MySQL, though, and without a bulk loading tool. 5 million rows sounds crazy, but good to know that bcp can handle it; I'll have to remember that.

Although, this brings me to a good point... why aren't you using MySQL's bulk loading tool? It will do this MUCH faster than you can do it in Perl or Java.

tayl0r
Oct 3, 2002
This post brought to you by SOE
Do you even have to commit using the MySQL bulk loader?

You do in Oracle.

supster
Sep 26, 2003

I'M TOO FUCKING STUPID
TO READ A SIMPLE GRAPH
No. MySQL's bulk load tool is pretty primitive... LOAD DATA. It's pretty simple, you just need to have your data in a format it can read (basically a csv).

Even if your data isn't already in a format LOAD DATA can read, it will be easier and faster to parse your data into that format and then use LOAD DATA to insert it into the database.

tayl0r
Oct 3, 2002
This post brought to you by SOE

supster posted:

No. MySQL's bulk load tool is pretty primitive... LOAD DATA. It's pretty simple, you just need to have your data in a format it can read (basically a csv).

Even if your data isn't already in a format LOAD DATA can read, it will be easier and faster to parse your data into that format and then use LOAD DATA to insert it into the database.

Yeah, I would never suggest using normal inserts if a bulk loading tool is an option. They are hundreds of times faster, it is pretty incredible. It makes you realize how much overhead there is in normal inserts.

Other things you can do are just to do a bulk load into a fresh table, then merge / insert that data into your pre-existing table using insert + select. That also lets you do aggregations, filters, joins, other stuff like that.
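A minimal sketch of that staging-table pattern (sqlite3 standing in for a real server; the table names and the filter condition are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (symbol TEXT, close_price REAL)")
conn.execute("CREATE TABLE staging (symbol TEXT, close_price REAL)")

# Pretend this came from a bulk load into the fresh staging table
conn.executemany("INSERT INTO staging VALUES (?, ?)",
                 [("MSFT", 20.22), ("MSFT", -1.0), ("IBM", 80.10)])

# Merge into the real table, filtering bad rows on the way in
conn.execute("INSERT INTO prices "
             "SELECT symbol, close_price FROM staging WHERE close_price > 0")
conn.commit()

kept = conn.execute("SELECT count(*) FROM prices").fetchone()[0]
print(kept)  # 2
```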

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

supster posted:

He is using MySQL though and without a bulk loading tool. Although 5 million rows sounds crazy, good to know that bcp can handle that, I will have to remember that.

Although, this brings me to a good point... why aren't you using MySQL's bulk loading tool? It will do this MUCH faster than you can do it in Perl or Java.
Perl would be for later when I'm doing more modest operations.

Right now my problem with the bulk insert is that it's messing up my decimal values. I asked about it before, but it was amongst a bunch of other rambling.

<bunch of stupidity truncated. This is all stupidity. They did match up. God knows what I was comparing last night>

Things aren't quite ordered the same way, but the values are just wrong no matter what. I am thinking it has something to do with having two decimal places in the table, but four in the import. I am looking into how I can change those representations--let me look up the command first. But I wondered why it accepted the difference in precision and distorted the numbers like that. I think that's really scary.

Rocko Bonaparte fucked around with this message at 05:06 on Dec 20, 2008

Victor
Jun 18, 2004
Ummmmm... the columns just look out-of-order.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Victor posted:

Ummmmm... the columns just look out-of-order.
Woah. When I first thought I saw a difference I must have been way up past my bedtime. Yeah you're right. Just to check, I turned it back to decimal(10,2) just to load up the data from yesterday. In that case I thought I was seeing prices in the $40's at the end of the CSV, but the most recent values in the database were in the $80's. But they line up now. I have no drat clue. Methinks I'll whack most of that post with a brief note of the silliness.

On the bright side, I figured out how to use the alter command to redefine the data type for a column.

Xae
Jan 19, 2005

MoNsTeR posted:

I could not work without TOAD, period. It is worth 10 times whatever ridiculous price they charge for it.

Unless you are a DBA, TOAD is a waste. PL/SQL Developer is much better for development work. It is cheaper and far superior.

Xae fucked around with this message at 20:12 on Dec 20, 2008

Civil
Apr 21, 2003

Do you see this? This means "Have a nice day".
Okay, I'm still a rookie here, but this is driving me nuts.

I'm displaying a list of records as shown. I want to only show the "topmost" of each instance of each unique filename.

I'm using this GROUP BY statement, but I'm having issues.

Records like 04-4A5204.pdf are dropping off just fine, I want serial #8 to drop off, and just show serial #10. But with records 21 and 9, I want 9 to drop off and just show 21. Instead, I'm getting the opposite.

Should I be using something other than GROUP BY for more control? Because it doesn't look like I can expect the top sorted result to be the one that is the one they're grouped by. Oh, and using mysql.

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

Civil posted:

Okay, I'm still a rookie here, but this is driving me nuts.

I'm displaying a list of records as shown. I want to only show the "topmost" of each instance of each unique filename.

I'm using this GROUP BY statement, but I'm having issues.

Records like 04-4A5204.pdf are dropping off just fine, I want serial #8 to drop off, and just show serial #10. But with records 21 and 9, I want 9 to drop off and just show 21. Instead, I'm getting the opposite.

Should I be using something other than GROUP BY for more control? Because it doesn't look like I can expect the top sorted result to be the one that is the one they're grouped by. Oh, and using mysql.
If you use a GROUP BY in MySQL, anything that is neither in the GROUP BY nor in an aggregate function (like max(serial)) will be returned more or less at random. This is why most "real" RDBMSs won't even let you run a query without putting everything in a GROUP BY or an aggregate.

In this case, if you can know for a fact that a higher serial means also a higher (or equal) bodate, then I'd say do max(serial) as serial, max(bodate) as bodate and also put ext in the GROUP BY.
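A runnable sketch of that advice (sqlite3 for illustration; the column names are guessed from the screenshot description):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (serial INTEGER, filename TEXT, bodate TEXT)")
conn.executemany("INSERT INTO files VALUES (?, ?, ?)", [
    (8,  "04-4A5204.pdf", "2008-12-01"),
    (10, "04-4A5204.pdf", "2008-12-05"),
    (9,  "foo.pdf",       "2008-12-02"),
    (21, "foo.pdf",       "2008-12-10"),
])

# One row per filename: the max serial (and, assuming serial and bodate
# always move together, the matching bodate)
rows = conn.execute("""
    SELECT filename, max(serial) AS serial, max(bodate) AS bodate
    FROM files
    GROUP BY filename
    ORDER BY filename
""").fetchall()
print(rows)  # [('04-4A5204.pdf', 10, '2008-12-05'), ('foo.pdf', 21, '2008-12-10')]
```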

Also, please put your queries in [code] blocks. It's kinda impossible to copy and paste from a screenshot.

Vanadium Dame
May 29, 2002

HELLO I WOULD LIKE TO TALK TO YOU ABOUT MY VERY STRONG OPINIONS

Xae posted:

Unless you are a DBA TOAD is a waste. PL/SQL Developer is much better for development work. It is cheaper and far superior.

I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot?

Vanadium Dame fucked around with this message at 02:48 on Dec 23, 2008

tayl0r
Oct 3, 2002
This post brought to you by SOE

Markoff Chaney posted:

I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot?

Just download the free trial and see for yourself. Or, look at their website. I'm sure they list out all the features their software has.

You will seriously poo poo yourself after you see how much better PL/SQL Developer is.

Xae
Jan 19, 2005

Markoff Chaney posted:

I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot?

Not crash, multi-threaded so one query won't lock the whole thing, not crash, superior interface, vastly better debugger, built in testing, doesn't crash.

The interface as a whole is miles better. It is the only development tool I have ever purchased out of my own pocket. I ended up having a client buy an unlimited license to support the people who wanted to ditch TOAD after seeing my use PL/SQL Developer. The Unlimited license was less than the cost of 6 TOAD licenses.

Xae fucked around with this message at 05:04 on Dec 23, 2008

hexadecimal
Nov 23, 2008

by Fragmaster
Is it better/faster to use Distinct or just compute unique entries using php?

supster
Sep 26, 2003

I'M TOO FUCKING STUPID
TO READ A SIMPLE GRAPH

hexadecimal posted:

Is it better/faster to use Distinct or just compute unique entries using php?
Ultimately it will depend on the data set (among many other things), but just use distinct - it is almost guaranteed to perform better.

tayl0r
Oct 3, 2002
This post brought to you by SOE

supster posted:

Ultimately it will depend on the data set (among many other things), but just use distinct - it is almost guaranteed to perform better.

agreed. plus, typing "distinct" has got to be easier than any solution outside of the database.

MoNsTeR
Jun 29, 2002

Xae posted:

Not crash, multi-threaded so one query won't lock the whole thing, not crash, superior interface, vastly better debugger, built in testing, doesn't crash.

The interface as a whole is miles better. It is the only development tool I have ever purchased out of my own pocket. I ended up having a client buy an unlimited license to support the people who wanted to ditch TOAD after seeing my use PL/SQL Developer. The Unlimited license was less than the cost of 6 TOAD licenses.

The earliest TOAD I've used is 7.5 and that's multi-threaded, so I'm not sure what ancient version you've tried. Since upgrading to 9.5 I've had maybe 10 crashes total, that's nearly a solid year of using it 8-10 hours a day.

To be fair though I'll download the PL/SQL Developer trial.

Edit:
Hoooooo boy, where to start.
1. Can't have more than one statement in a SQL window
2. no Describe/View/whatever hotkey
3. View menu command is woefully underpowered (where's my Used By tab?!) and schizophrenic to boot (why is there a separate View and View Spec & Body?)
4. Explain Plan pops up a separate window and is less readable

These may seem small but they are so integral to my minute-by-minute productivity that they are dealbreakers.

I do like how it uses line-printer style white/green in the query results window, how the editor itself behaves, that it's lightweight and fast, and how it manages to make a bit more efficient use of screen real estate. If you really couldn't afford TOAD and didn't like Oracle SQL Developer this could work, but by no means could it replace TOAD for a seasoned user.

Now, a lot of that is familiarity. Someone who's never used TOAD isn't going to miss their ultra-powerful F4 key, isn't going to expect Ctrl+Enter to run queries and Ctrl+E to explain plan. And another big chunk is what you're working on and how you personally organize it. Maybe you have a printed ERD and all your packages open in editors so you don't need that describe hotkey like someone with thousands of tables and hundreds of procedures. Maybe you don't organize your objects by schema so the behavior of the object browser makes sense.

It's all personal preference ultimately so if this thing floats your boat, hey great you get to save kajillion dollars on TOAD licenses as gravy. But no one who actually knows how to leverage TOAD's functionality would ever switch to it. I would bet a hundred bucks easy that no DB developer in my office would prefer it.

MoNsTeR fucked around with this message at 17:52 on Dec 23, 2008

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
Is Toad for MYSQL (which appears to be far less expensive, that is to say, free) as good as the TOAD y'all are using for, I'm assuming, Oracle?

Operation Atlas
Dec 17, 2003

Bliss Can Be Bought

Golbez posted:

Is Toad for MYSQL (which appears to be far less expensive, that is to say, free) as good as the TOAD y'all are using for, I'm assuming, Oracle?

An Oracle-like TOAD for MySQL would be overkill. There's a reason that phpMyAdmin is still so popular for MySQL.

I really don't think any MySQL GUI is better than any other, as long as it doesn't crash it'll do what I need it to. (Which, with MySQL, is "not much")

Xae
Jan 19, 2005

MoNsTeR posted:

The earliest TOAD I've used is 7.5 and that's multi-threaded, so I'm not sure what ancient version you've tried. Since upgrading to 9.5 I've had maybe 10 crashes total, that's nearly a solid year of using it 8-10 hours a day.

To be fair though I'll download the PL/SQL Developer trial.

Edit:
Hoooooo boy, where to start.
1. Can't have more than one statement in a SQL window
2. no Describe/View/whatever hotkey
3. View menu command is woefully underpowered (where's my Used By tab?!) and schizophrenic to boot (why is there a separate View and View Spec & Body?)
4. Explain Plan pops up a separate window and is less readable

These may seem small but they are so integral to my minute-by-minute productivity that they are dealbreakers.

I do like how it uses line-printer style white/green in the query results window, how the editor itself behaves, that it's lightweight and fast, and how it manages to make a bit more efficient use of screen real estate. If you really couldn't afford TOAD and didn't like Oracle SQL Developer this could work, but by no means could it replace TOAD for a seasoned user.

Now, a lot of that is familiarity. Someone who's never used TOAD isn't going to miss their ultra-powerful F4 key, isn't going to expect Ctrl+Enter to run queries and Ctrl+E to explain plan. And another big chunk is what you're working on and how you personally organize it. Maybe you have a printed ERD and all your packages open in editors so you don't need that describe hotkey like someone with thousands of tables and hundreds of procedures. Maybe you don't organize your objects by schema so the behavior of the object browser makes sense.

It's all personal preference ultimately so if this thing floats your boat, hey great you get to save kajillion dollars on TOAD licenses as gravy. But no one who actually knows how to leverage TOAD's functionality would ever switch to it. I would bet a hundred bucks easy that no DB developer in my office would prefer it.

1) Yes, you can.
2) I think you can control-click a table name, or any object name, to get the description
3) Because you don't always need to see the body? It comes in handy when you don't want to grab a 20k SLOC package to see the note in the spec
4) You can alter the explain plan window quite a bit.

You can set up custom object filters that let you peek at just the objects you want.

It is also fairly easy to develop your own plug-ins to add any TOAD-like features you just can't do without. Almost the entire program can be customized.

var1ety
Jul 26, 2004
About anything the program has in a menu somewhere can be assigned a hotkey. I bind the following as a starting point with PL/SQL Developer.

code:
File / New / SQL Window [Ctrl+S]
File / Close [Ctrl+W]
Edit / PL/SQL Beautifier [Shift+Ctrl+X]
Edit / Recall Statement [Ctrl+E]
Edit / Select All [Ctrl+A]
Session / Execute [Alt+Enter]
Session / Break [Shift+Esc]
Object: Describe [Ctrl+D]
Object: Edit [Shift+Ctrl+E]
Object: Query data [Ctrl+Q]
SQL Window: Single Record View [Alt+S]
I have not used TOAD, but I find the following things invaluable in my day-to-day:

* Move the input cursor over an object in a query (such as a table) and Ctrl+D to display or Ctrl+Shift+E to edit (view structure, and indexes) or Ctrl+Q to launch a "select * from TABLE t".
* SQL Beautifier.
* Easy export of query results to Excel, with one tab for the SQL statement and one for the results.
* Ability to add new tools (I have one to launch SQL*Plus for the current session and one to run dbms_xplan.display_cursor for the previous statement).
* Can press Alt+S to show one record with columns and values vertical instead of horizontal (for really wide tables).

hexadecimal
Nov 23, 2008

by Fragmaster
What is the most efficient way to check for the existence of some record in a table matching certain criteria? Is it COUNT or using IF EXISTS, or something else?

hexadecimal fucked around with this message at 04:24 on Dec 25, 2008

jovial_cynic
Aug 19, 2005

Ugh. My brain hurts from searching and searching, so I'm resorting to just asking for help.

I've got a table tblFoo that's something like this:

code:
ID | itemValue | itemDate
--------------------------
 1 |   345     | 2008-12-01
 2 |   643     | 2008-12-02
 3 |   456     | 2008-12-03
 4 |   457     | 2008-12-19
 5 |   194     | 2008-12-20
 6 |   645     | 2008-12-21

etc.
I have a perl script that does a good job of graphing a list of values. I want to graph tblFoo.itemValue, using the date along the x-axis. However, there's a clear gap between ID 3 and ID 4, and the perl script will happily ignore the gap (showing 1,2,3,19,20,21), making my x-axis useless.

My first thought was to create a temporary table populated with dates that incremented from the smallest date to the largest date in the table, and RIGHT JOINed that with the above table, and have my script leave a blank spot on the graph when the ID is NULL (as a result of the RIGHT JOIN). This will at least provide proper spacing on my x-axis, and just leave blank spots on my graph.

Is this a good solution?

Also, is creating a temporary table much different than doing something like the following:

code:
select '2008-12-24' as arbitraryDate;
This returns a single row for arbitraryDate. What's the syntax for adding additional rows like this? Instead of creating a table, could I just make a "virtual table" in this fashion, building the multi-row select statement via perl?

Victor
Jun 18, 2004

hexadecimal posted:

What is the most efficient way to check for the existence of some record in a table matching certain criteria? Is it COUNT or using IF EXISTS, or something else?
Think about it for a second. What if there are two records that match your criteria? Does the DB engine need to check both for IF EXISTS? Does it have to check both for COUNT? Now, some DB engines may or may not be smart enough to skip the extra work.
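For a concrete comparison of the two (sqlite3 for illustration; whether the engine actually short-circuits the EXISTS is engine-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (symbol TEXT)")
# Two rows match the criteria
conn.executemany("INSERT INTO t VALUES (?)", [("MSFT",), ("MSFT",)])

# EXISTS can stop at the first matching row; COUNT must visit them all
exists = conn.execute(
    "SELECT EXISTS(SELECT 1 FROM t WHERE symbol = 'MSFT')").fetchone()[0]
cnt = conn.execute(
    "SELECT count(*) FROM t WHERE symbol = 'MSFT'").fetchone()[0]
print(exists, cnt)  # 1 2
```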

Triple Tech
Jul 28, 2006

So what, are you quitting to join Homo Explosion?

jovial_cynic posted:

What's the syntax for adding additional rows like this? Instead of creating a table, could I just make a "virtual table" in this fashion, building the multi-row select statement via perl?

select UNION select UNION select UNION etc
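Spelled out, with the statement assembled programmatically the way a Perl script might build it (Python/sqlite3 used here for a runnable sketch; note UNION ALL keeps duplicate rows, while plain UNION removes them):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A "virtual table" of dates built from chained selects, no real table needed
sql = " UNION ALL ".join(
    f"SELECT '{d}' AS arbitraryDate"
    for d in ("2008-12-24", "2008-12-25", "2008-12-26")
) + " ORDER BY arbitraryDate"

dates = [row[0] for row in conn.execute(sql)]
print(dates)  # ['2008-12-24', '2008-12-25', '2008-12-26']
```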


jovial_cynic
Aug 19, 2005

Triple Tech posted:

select UNION select UNION select UNION etc

YES! Thanks!
