|
Victor posted:This technique can be extremely performant (yes, that's becoming a word, deal with it, grammar Nazis). I can bulk insert tens of thousands of rows per second into a single-socket, quad-core SQL Server. If we had a gigabit network, I wouldn't be surprised if I could get it up to hundreds of thousands of rows per second. In contrast, inserting rows one by one is much slower. Well sure, once you already have the data inserted into the temp table this is true. Most commonly, however, you are trying to insert new data, in which case you probably get no benefit, because you still have to get the data into the temp table somehow, either through row-by-row inserts or some bulk loading process (which could be done against the actual table anyway). Overall, inserting the data directly will be faster if you don't have to transform it first. The exception I know of offhand is something Oracle calls an external table: you drop a text file on the local disk of the database server, then create a table using special syntax so that the file is read directly as a table. You can then INSERT ... SELECT from this table, which essentially avoids the original insert operations (since it's a direct read from the file). However, this approach has its own problems, not the least of which is figuring out a good method for dropping off files securely on your database server.
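For reference, an Oracle external table definition looks roughly like this (a sketch; the directory, file name, and columns are invented for illustration):

```sql
-- A DBA first maps a directory object to a path on the database host.
CREATE DIRECTORY load_dir AS '/data/incoming';

-- The external table reads the flat file in place; no rows are stored in Oracle.
CREATE TABLE staged_orders_ext (
    order_id    NUMBER,
    customer_id NUMBER,
    amount      NUMBER(10,2)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY load_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('orders.csv')
);

-- Loading is then a single set-based statement straight off the file.
INSERT INTO orders (order_id, customer_id, amount)
SELECT order_id, customer_id, amount FROM staged_orders_ext;
```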
|
# ? Dec 19, 2007 19:09 |
|

|
Doing a bulk insert into SQL Server with the utility bcp, inserting into a temporary table and then transforming it and inserting it into a permanent table, is still much faster than doing one-by-one inserts into the permanent table. At least, that's the way it is with SQL Server. If you don't need to do any keying (transforming a string such as "Mammography" to the ID of the row in some "dictionary" table that contains that code), then yes, you can insert directly into the permanent table. However, if you need to do transforms, and the permanent table has indexes on columns needing transforms, it will most definitely be faster to use a temporary table. Sometimes one must use a temporary table: what if the permanent table has a column for the ID for the code but not the code itself? (E.g. procedure_fk, which points to a Procedure record (id: 1, description: "Mammography"). Yes, I've been looking at a lot of hospital data lately.) SQL Server has an analog to your Oracle example: OPENROWSET.
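A hedged sketch of that keying step in T-SQL (table and column names are invented for illustration):

```sql
-- Stage the raw file rows, e.g. bulk loaded via bcp into #staging.
CREATE TABLE #staging (
    patient_id     INT,
    procedure_desc VARCHAR(100)
);

-- Transform-and-insert in one set-based statement: the description
-- string is keyed to its dictionary row's ID via a join.
INSERT INTO exam (patient_id, procedure_fk)
SELECT s.patient_id, p.id
FROM #staging s
JOIN [procedure] p ON p.description = s.procedure_desc;
```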
|
# ? Dec 19, 2007 19:18 |
|
Victor posted:If you don't need to do any keying (transforming a string such as "Mammography" to the ID of the row in some "dictionary" table that contains that code), then yes, you can insert directly into the permanent table. However, if you need to do transforms, and the permanent table has indexes on columns needing transforms, it will most definitely be faster to use a temporary table. Sometimes one must use a temporary table: what if the permanent table has a column for the ID for the code but not the code itself? (E.g. procedure_fk, which points to a Procedure record (id: 1, description: "Mammography"). Yes, I've been looking at a lot of hospital data lately.) I think we're saying exactly the same thing: it's not faster if you just want to load the data as is, but if you need to transform the data first, it can help.
|
# ? Dec 19, 2007 20:47 |
|
There's also the issue of needing to verify data integrity/quality before you shove it in a production table.
|
# ? Dec 19, 2007 21:03 |
|
Triple Tech posted:Maybe I'm misunderstanding something. Can INSERT insert multiple rows? So are you INSERTing the SELECTion of the temporary table into the main table? I have no idea what you're TRYing to ask. I really don't think you're asking if INSERT can do this? code:
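The snippet the post refers to didn't survive, but it was presumably a plain INSERT ... SELECT along these lines (a reconstruction, not the original):

```sql
-- Copy every row from the temp table into the main table in one statement.
INSERT INTO main_table (id, name)
SELECT id, name
FROM #temp_table;

-- A VALUES list can also insert several literal rows at once
-- (supported by MySQL, and by SQL Server from 2008 on).
INSERT INTO main_table (id, name)
VALUES (1, 'alpha'), (2, 'beta');
```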
benisntfunny fucked around with this message at 05:17 on Dec 20, 2007 |
# ? Dec 20, 2007 05:14 |
|
Victor posted:There's also the issue of needing to verify data integrity/quality before you shove it in a production table. Very, very true. Keep an eye out for fields being cropped and row separators not behaving correctly. I use the BCP tool a fair amount in ASE and SQL 2005; I would recommend always using a format file rather than the -c switch. It's slightly more laborious, but it's helpful when you are trying to import a CSV (or what have you) that was supplied rather than generated personally. Also, when working in Sybase IQ, the LOAD TABLE command is a total bitch; try to avoid it where at all possible.
|
# ? Dec 20, 2007 12:30 |
PDO/MySQL question: If I'm executing the following query: SELECT id, name, detail FROM user WHERE detail = :filter LIMIT 0, 25 is there an easy way to get the total number of results or know if there are more than 25 for pagination without running the query twice, one time without the LIMIT portion and just doing a SELECT COUNT(id) FROM user WHERE detail = :filter?
|
|
# ? Dec 21, 2007 09:48 |
|
fletcher posted:PDO/MySQL question: Meet your new friend SQL_CALC_FOUND_ROWS !!! http://www.arraystudio.com/as-workshop/mysql-get-total-number-of-rows-when-using-limit.html
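Applied to the query above, the MySQL-specific pattern is two statements on the same connection:

```sql
-- SQL_CALC_FOUND_ROWS makes MySQL compute the un-LIMITed row count
-- as a side effect of running the paged query.
SELECT SQL_CALC_FOUND_ROWS id, name, detail
FROM user
WHERE detail = :filter
LIMIT 0, 25;

-- FOUND_ROWS() then returns that total; it must run on the same connection,
-- immediately after the query above.
SELECT FOUND_ROWS();
```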
|
# ? Dec 21, 2007 17:05 |
|
Anyone familiar with bulk imports in Sybase? Can I create a table, have bcp generate the format file for it, and then use that format file to import more data? Totally new to how all this stuff works...
|
# ? Dec 21, 2007 17:07 |
Alex007 posted:Meet your new friend SQL_CALC_FOUND_ROWS !!! Is there an easy way to use this with PDO?
|
|
# ? Dec 21, 2007 21:10 |
|
Alex007 posted:Meet your new friend SQL_CALC_FOUND_ROWS !!! Will you be my best friend? This is news to me and makes my life a happier one.
|
# ? Dec 21, 2007 22:42 |
|
Is there a way to programmatically detect the existence of a table, to, let's say, conditionalize running a CREATE (create if not exists)? This is Sybase I'm working on. Apparently there are stored procedures that list out all the tables, and I guess I could parse those... Are those stored procedures made by my coworkers, or are they Sybase built-ins?
|
# ? Dec 21, 2007 23:37 |
|
Triple Tech posted:Is there a way to programmatically detect the existence of a table, to, let's say, conditionalize running a CREATE (create if not exists)? This is Sybase I'm working on. Apparently there are stored procedures that list out all the tables, and I guess I could parse those... Are those stored procedures made by my coworkers, or are they Sybase built-ins? Isn't there a "sysobjects" table, or something like that? That's what we do in SQL Server.
|
# ? Dec 22, 2007 00:29 |
|
INFORMATION_SCHEMA.TABLES might exist.
|
# ? Dec 22, 2007 00:48 |
|
Is there an SQL way to do this programmatically, or will I just use Perl to parse the output of some lookup command to see if the table is there, and then conditionalize running that CREATE SQL?
|
# ? Dec 22, 2007 01:36 |
|
Did you try code:
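The snippet itself was lost, but in Sybase/Transact-SQL the existence check presumably looked something like this (table definition invented):

```sql
-- sysobjects holds one row per database object; type 'U' means a user table.
IF NOT EXISTS (SELECT 1 FROM sysobjects WHERE name = 'my_table' AND type = 'U')
BEGIN
    CREATE TABLE my_table (
        id   INT NOT NULL,
        name VARCHAR(50) NULL
    )
END
```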
|
# ? Dec 22, 2007 01:56 |
|
Sybase is very similar to SQL Server (SQL Server descends from Sybase, if I remember right). Querying against sysobjects would be your best bet.
|
# ? Dec 22, 2007 06:22 |
|
imBen posted:I have no idea what you're TRYing to ask. I really don't think you're asking if INSERT can do this? ... Because of course it can. I've never done a lot of INSERT work so I guess I never learned... m5 posted:Isn't there a "sysobjects" table, or something like that? That's what we do in SQL Server. Yes! This worked. I can just select the name from that table and parse that in Perl space...
|
# ? Dec 27, 2007 18:20 |
|
Counting from joined tables can't be this retarded, I must be doing something wrong. What I'm trying to do here is count the rows in the activity table that have a created_on greater than the updated_at in the reports table (all activity after the report was updated, basically). Table structure should be pretty obvious; all you need to know is that reports has: id, created_on, updated_at and activity has: id, created_on, report_id code:
|
# ? Dec 27, 2007 22:04 |
Alex007 posted:Meet your new friend SQL_CALC_FOUND_ROWS !!! Hmm, I tried this, but every time I do a SELECT FOUND_ROWS() it always comes back with 1, even though the query I put SQL_CALC_FOUND_ROWS into returns > 1 rows
|
|
# ? Dec 27, 2007 23:10 |
|
fletcher posted:Hmm, I tried this, but every time I do a SELECT FOUND_ROWS() it always comes back with 1, even though the query where I put SQL_CALC_FOUND_ROWS into returns > 1 rows If you're not already, try using the command-line client instead of something like MySQL Query Browser.
|
# ? Dec 28, 2007 00:01 |
Land Mime posted:If your not already, try using the command line client instead of something like mysql query browser. Works fine from the command line. Returns 1 when trying to do it with PDO and PHP.
|
|
# ? Dec 28, 2007 00:33 |
|
Scarboy, I'd suggest doing a subquery, unless you found the performance prohibitive. Are you using MySQL? If so, you can use the ability to select non-aggregated fields that weren't grouped by to your advantage (note the use of coalesce):code:
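Victor's snippet didn't survive the archive; from the surrounding posts, the suggestion was presumably along these lines, with the date filter moved into the join condition and coalesce guarding a NULL updated_at (a reconstruction, not the original):

```sql
-- The date condition lives in the ON clause, so reports with no
-- qualifying activity still appear, with a count of 0.
SELECT r.id,
       COUNT(a.id) AS activity_since_update
FROM reports r
LEFT JOIN activity a
       ON a.report_id = r.id
      AND a.created_on > COALESCE(r.updated_at, r.created_on)
GROUP BY r.id;
```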
|
# ? Dec 28, 2007 01:42 |
|
Oh gently caress I'm an idiot. I completely forgot that I could put conditions in the on clause. That makes my life so much easier, thanks for reminding me about that!
|
# ? Dec 28, 2007 04:24 |
|
Hi everyone, I have a database that is dumped every day and sent via FTP to another server. Sometimes I would like to open this SQL dump and run a few queries on it, but I don't want it to overwrite the current database. I saw that you can load the dump back into the same database, but I was unsure because I didn't want to overwrite the data. Is there a way I can open an SQL dump file and perform some basic queries on it without overwriting the main db?
|
# ? Dec 28, 2007 18:37 |
|
Twlight posted:Hi everyone, Can you send it to a server that has its own instance of sql running (ie doesn't have the operational db you're using)?
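Another option, assuming a mysqldump-style file, is to restore it into a separate scratch database on the same server (database and file names invented; this works provided the dump doesn't contain its own CREATE DATABASE/USE statements):

```sql
-- Create an empty database that won't collide with the live one...
CREATE DATABASE scratch_copy;

-- ...then restore the dump into it from the shell:
--     mysql -u someuser -p scratch_copy < daily_dump.sql
-- Queries against scratch_copy never touch the main database.
```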
|
# ? Dec 28, 2007 21:43 |
|
I'll probably make a thread on it one of these days, but for now: Using LINQ in SQLCLR.
|
# ? Dec 28, 2007 22:02 |
code:
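fletcher's query vanished from the archive; from the replies it was evidently a per-picture comment count, something of this shape (reconstructed, not the original):

```sql
-- Count the comments attached to one picture while still returning
-- the picture row itself.
SELECT picture.id,
       COUNT(comment.id) AS num_comments
FROM picture
LEFT JOIN comment ON comment.picture_id = picture.id
WHERE picture.id = 1
GROUP BY picture.id;
```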
|
|
# ? Dec 28, 2007 22:23 |
|
fletcher posted:
What happens when you group on "picture.id"? (What DBMS is that?)
|
# ? Dec 28, 2007 22:28 |
|
It would seem that, based on your query, you want multiple rows for one picture, each with a different comment, and to get a count from a grouping of comments by the picture ID. If that makes any sense and I'm right, a LEFT JOIN should return one picture with many comments.
|
# ? Dec 28, 2007 22:29 |
m5 posted:What happens when you group on "picture.id"? (What DBMS is that?) Returns the same thing. I'm using MySQL 5. noonches posted:It would seem that based on your query, you want multiple rows for one picture, each with a different comment, and get a count on a grouping of comments by the picture ID, if that makes any sense and I'm right, a LEFT JOIN should return one picture with many comments. Nah, I don't want any of the actual comments, just how many there are for that picture id.
|
|
# ? Dec 28, 2007 22:37 |
|
fletcher posted:Nah I don't want any of the actual comments, just how many there are for that picture id. But that's what I meant: just one item in the pictures table, multiple times, matched to different rows from the comments table each time, then grouped by the item from the pictures table to get a count of items from the comments table. So basically your same query with a LEFT JOIN.
|
# ? Dec 28, 2007 22:41 |
|
fletcher, unless I'm blind, that code should work. Perhaps you could provide code for creating the picture and comment tables with enough sample data to demonstrate the problem?
|
# ? Dec 28, 2007 22:50 |
Victor posted:fletcher, unless I'm blind, that code should work. Perhaps you could provide code for creating the picture and comment tables with enough sample data to demonstrate the problem? Ahhh I'm retarded, my data got hosed up when I loaded it. The query works fine. I suck.
|
|
# ? Dec 28, 2007 23:22 |
|
This week at work, reviewing some code, I saw the worst SQL statement in history. It was 350 lines long and had 7 SELECTs in it. It UNION ALL'ed them, then SELECT DISTINCT'ed off the union. I took the first query, the main one, changed two joins to left outer joins, and deleted the next 300 lines of code. For fuck's sake, people, learn to use non-full/inner/equal joins. See this or this for more information. Xae fucked around with this message at 02:03 on Dec 31, 2007 |
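The anti-pattern and its fix, sketched with invented tables: a pile of near-identical SELECTs glued together and de-duplicated afterwards usually collapses into one query with outer joins.

```sql
-- Before (the shape of the anti-pattern): one SELECT per optional
-- relationship, UNION ALL'ed and then wrapped in SELECT DISTINCT
-- to remove the duplicates the UNION ALL itself created.
SELECT o.id, o.total FROM orders o JOIN invoices  i ON i.order_id = o.id
UNION ALL
SELECT o.id, o.total FROM orders o JOIN shipments s ON s.order_id = o.id;
-- ...imagine five more variants of the same query...

-- After: the optional relationships become left outer joins in one statement.
SELECT DISTINCT o.id, o.total
FROM orders o
LEFT JOIN invoices  i ON i.order_id = o.id
LEFT JOIN shipments s ON s.order_id = o.id;
```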
# ? Dec 31, 2007 02:00 |
|
Xae posted:For fucks sake people, learn to use non-full/inner/equal joins. I think my favorite was something I found that was written to generate some kind of drop-down menu. It was something to the effect of code:
|
# ? Jan 1, 2008 01:44 |
|
I have a table called "types" and a table called "items". Each item row is linked to a type row via a type ID number. I want to select every row in "types" and add on a column that returns the number of items that use that type. How can I do this? I've been experimenting with subqueries and joins but I can't get anything working. Thanks! (I'm using MySQL 4.1)
|
# ? Jan 1, 2008 22:57 |
|
Bitruder posted:I have a table called "types" and a table called "items". Each item row is linked to a type row via a type ID number. I want to select every row in "types" and add on a column that returns the number of items that use that type. How can I do this? I've been experimenting with subqueries and joins but I can't get anything working. You're looking for the GROUP BY and COUNT keywords. Your query would probably look something like code:
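Reconstructing the lost snippet, the suggested query was presumably of this shape (column names guessed; the pattern works on MySQL 4.1):

```sql
-- LEFT JOIN keeps types with zero items; COUNT(items.id) skips the NULLs
-- from unmatched rows, so those types correctly get a count of 0.
SELECT types.id, types.name, COUNT(items.id) AS item_count
FROM types
LEFT JOIN items ON items.type_id = types.id
GROUP BY types.id, types.name;
```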
|
# ? Jan 1, 2008 23:30 |
|
Thanks for the previous reply! Another question: The following works fine when the subqueries only return one row each, but when a subquery returns more than one, I'd like all the results to be ORed together; instead it errors out. Is there a way to treat multiple subquery rows as ORs? code:
SOLVED! Instead of =, I want "IN". Bitruder fucked around with this message at 01:37 on Jan 2, 2008 |
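The fix in miniature (names invented): comparing against a multi-row subquery with = fails, while IN accepts any number of rows.

```sql
-- Errors out as soon as the subquery returns more than one row:
SELECT * FROM items
WHERE type_id = (SELECT id FROM types WHERE name LIKE 'A%');

-- Works for any number of rows, effectively ORing the matches together:
SELECT * FROM items
WHERE type_id IN (SELECT id FROM types WHERE name LIKE 'A%');
```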
# ? Jan 2, 2008 01:32 |
|
|
code:
code:
|
# ? Jan 2, 2008 06:04 |