|
This is sort of a dumb problem but I'm looking for some way in Oracle to generate a table, cursor, collection or whatever that simply has the numbers 900,000 to 999,999. I've basically got an existing PL/SQL procedure that's looking for the first value in that range which isn't already in use (a pretty involved query to run potentially 100,000 times). This value is used as an ID for a product record. That procedure is unsurprisingly taking forever, especially when it's used in a batch importing hundreds or thousands of products. For testing an alternative I created a table with just one column and all 100,000 numbers. It's hideous but when I rewrote it as a join against that table the performance went from taking over a minute to find one value to 2 seconds to find every free ID in that range. So is there some way to do this or am I just out of luck?
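For reference, the shape of the approach being described, generating the range on the fly and anti-joining it against the existing IDs, might look something like this in Oracle (a sketch only; the table and column names `products`/`product_id` are invented, and `CONNECT BY LEVEL` against `dual` is one common row-generation trick):

```sql
-- Generate 900000..999999 without a physical table, then keep only
-- the values not already used as a product ID.
SELECT n.id
FROM (SELECT 899999 + LEVEL AS id
      FROM dual
      CONNECT BY LEVEL <= 100000) n
LEFT JOIN products p
       ON p.product_id = n.id
WHERE p.product_id IS NULL
ORDER BY n.id;
```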
|
# ? Aug 17, 2011 22:54 |
|
|

1337JiveTurkey posted:This is sort of a dumb problem but I'm looking for some way in Oracle to generate a table, cursor, collection or whatever that simply has the numbers 900,000 to 999,999. code:
|
# ? Aug 18, 2011 00:03 |
|
var1ety posted:
Thanks! Since that's recursive, do I need to be aware of any call stack limitations or the like?
|
# ? Aug 18, 2011 00:39 |
|
1337JiveTurkey posted:Thanks! Since that's recursive, do I need to be aware of any call stack limitations or the like? It depends how you're using it, but I wouldn't worry about it as a one-off solution. Here is a note from Tanel Poder about resource consumption in the UGA (user global area) with the "connect by" method; he also offers a resource friendly alternative using a cartesian merge. http://blog.tanelpoder.com/2008/06/08/generating-lots-of-rows-using-connect-by-safely/
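A sketch of the cartesian-merge idea from that link: instead of one `CONNECT BY` that runs 100,000 levels deep, cross-join two much smaller row sources so neither one has to be large (the arithmetic below reconstructs 900,000 to 999,999; this is an illustration of the technique, not Tanel Poder's exact code):

```sql
-- 100 rows x 1000 rows = 100,000 rows, with far less UGA pressure
-- than a single CONNECT BY LEVEL <= 100000.
SELECT 899999 + (h.r - 1) * 1000 + l.r AS id
FROM (SELECT LEVEL AS r FROM dual CONNECT BY LEVEL <= 100)  h,
     (SELECT LEVEL AS r FROM dual CONNECT BY LEVEL <= 1000) l;
```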
|
# ? Aug 18, 2011 00:48 |
|
Is there a preferable join syntax from a performance standpoint? Are there general rules of thumb as to more effective syntax like how code:
code:
code:
code:
code:
code:
|
# ? Aug 18, 2011 02:02 |
|
ElMudo posted:It doesn't.
|
# ? Aug 18, 2011 10:21 |
|
So I just issued a query to alter a column in a table with 8.5 million rows and 8.1gb of data. Load on our server just passed 10. Surely there's a way to tell MySQL to calm down a bit and not kill the machine? Maybe I have a bad setting that's causing it to go out of control?
|
# ? Aug 18, 2011 17:06 |
|
KennyG posted:Does anyone have any good online resources for learning about performance tuning Oracle queries? Here's a good post by Jonathan Lewis linking to a presentation about using dbms_xplan to tune with cardinality feedback, which is a very effective way of getting to the root of many performance problems and figuring out why the optimizer is doing what it is doing. http://jonathanlewis.wordpress.com/2009/05/11/cardinality-feedback/

For blogs, here are the ones I find the most relevant (in no particular order):
- Richard Foote
- Tanel Poder
- Jonathan Lewis
- Inside the Oracle Optimizer (Maria Colgan)

And last, but certainly not least: AskTom (Tom Kyte)

For books, depending on your level of Oracle experience I recommend the following two, in this order. Tom's book is a must-read for all Oracle developers.
- Effective Oracle by Design (Tom Kyte)
- Troubleshooting Oracle Performance (Christian Antognini)
|
# ? Aug 18, 2011 17:21 |
|
Golbez posted:So I just issued a query to alter a column in a table with 8.5 million rows and 8.1gb of data. Load on our server just passed 10. Surely there's a way to tell MySQL to calm down a bit and not kill the machine? Maybe I have a bad setting that's causing it to go out of control? I think you're just going to have to wait it out. If your table is InnoDB this is going to be especially painful. If I were going to do something like this, I wouldn't just alter the table directly: code:
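The original code block here was lost; the usual shape of the "don't alter it in place" approach is a shadow table plus a rename swap, something like the sketch below (all table and column names are made up, and note that any rows written between the INSERT and the RENAME are lost unless you lock the table first):

```sql
-- Build the altered structure on the side, backfill, then swap names.
CREATE TABLE products_new LIKE products;
ALTER TABLE products_new MODIFY description TEXT;  -- the column change
INSERT INTO products_new SELECT * FROM products;   -- slow part, but off to the side
RENAME TABLE products TO products_old,
             products_new TO products;             -- atomic swap
-- DROP TABLE products_old;                        -- once you're satisfied
```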
|
# ? Aug 18, 2011 17:43 |
|
Doctor rear end in a top hat posted:I think you're just going to have to wait it out. If your table is InnoDB this is going to be especially painful. If I were going to do something like this, I wouldn't just alter the table directly: MyISAM. I just don't understand why it can't slow itself down.
|
# ? Aug 18, 2011 17:46 |
|
KennyG posted:Is there a preferable join syntax from a performance standpoint? The third one of these does something different to the other three. Outer joins (LEFT JOIN or RIGHT JOIN) are different to inner joins (JOIN). A left outer join fetches at least one row for each row in the left-hand table, even if there are no matching rows in the right-hand table.

Out of the other three, the second one is better than the first one because it's more explicit about the join criteria [see edit]; but there will be no difference in performance (since that's what you asked about), because the query optimiser should arrive at the same plan for both of them.

The last one is bizarre, both because it's totally unnecessary to make table 2 a subselect like that and because it's inconsistent (why are the columns from table 2 treated in that way but not the columns from table 1?). Also, the query optimiser should hopefully optimise that subquery away, but why gratuitously give it an extra hurdle to trip over? It can only risk hindering the query's performance.

EDIT: I noticed after posting that in the first one, the join criteria are moved to the WHERE clause. But this is still bad, because logic related to the join should be in the ON clause. In particular, for some things you can do with outer joins you need join-related logic to be in the ON clause. Hammerite fucked around with this message at 17:57 on Aug 18, 2011 |
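KennyG's code blocks didn't survive formatting; judging from the reply, the four variants being compared presumably looked something like this (tables `t1`/`t2` and the join column are invented for illustration):

```sql
-- 1) join criteria buried in the WHERE clause (implicit join)
SELECT t1.a, t2.b
FROM t1, t2
WHERE t2.t1_id = t1.id;

-- 2) same inner join, criteria in the ON clause (preferred)
SELECT t1.a, t2.b
FROM t1
JOIN t2 ON t2.t1_id = t1.id;

-- 3) outer join: keeps every t1 row even when no t2 row matches
SELECT t1.a, t2.b
FROM t1
LEFT JOIN t2 ON t2.t1_id = t1.id;

-- 4) needlessly wrapping table 2 in a subselect
SELECT t1.a, sub.b
FROM t1
JOIN (SELECT t1_id, b FROM t2) sub ON sub.t1_id = t1.id;
```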
# ? Aug 18, 2011 17:55 |
|
Golbez posted:MyISAM. I just don't understand why it can't slow itself down. Because you want your tables write-locked for as little time as possible. Maybe you can have the OS give the MySQL process a lower priority?
|
# ? Aug 18, 2011 18:18 |
|
Bob Morales posted:I moved the log file size up to 128MB and now they go at 15 minutes instead of 5 hours. What the hell. Did you also change the log buffer size? The defaults are rather small if you're doing large transactions like that, and if it was at the default 5 MB, where it has to clobber it constantly, yeah, I can see how that hosed it up.
|
# ? Aug 18, 2011 18:45 |
|
Doctor rear end in a top hat posted:Because you want your tables write-locked for as little time as possible. Maybe you can have the OS give the MySQL process a lower priority? Not really, it's our production database. But this table isn't high traffic, I don't mind if it's write locked for a while if the payoff is load not spiking.
|
# ? Aug 18, 2011 19:02 |
|
Golbez posted:So I just issued a query to alter a column in a table with 8.5 million rows and 8.1gb of data. Load on our server just passed 10. Surely there's a way to tell MySQL to calm down a bit and not kill the machine? Maybe I have a bad setting that's causing it to go out of control? Well, you could use an external program to limit the CPU usage, depending on the server type: cpulimit for Debian-likes on Linux, for example. Another thing you could try is to limit the thread concurrency value to fewer than 2 threads per CPU, but that would be active 100% of the time. I know Oracle has the ability to limit % use, but I don't believe MySQL or Postgres can.
|
# ? Aug 18, 2011 20:42 |
|
MySQL database, MyISAM tables. We have to download 30gb or so of data a night and replace one of our databases with it. Not refresh - replace. The replication to the slaves takes a while: the download finished 4 hours ago, and replication is still going on, and looks like it's still two hours behind. Would it make sense to disable replication for this database, and when the import is done, simply dump the database to a file, rsync it over to the slave, and load infile? Looking at the documentation, "Each slave receives a copy of the entire contents of the binary log. It is the responsibility of the slave to decide which statements in the binary log should be executed; you cannot configure the master to log only certain events." Looks like I can use --replicate-wild-ignore-table on this one, but would this method seem to make sense? Even if it doesn't necessarily speed up "replication", it would allow the actual replication to proceed and not get caught up on the giant import, right?
|
# ? Aug 19, 2011 15:52 |
|
If not for it being MyISAM and other shenanigans, I'd recommend XtraBackup, which can take and restore entire snapshots, and even has a mode designed to create a new slave from a master. While it works with MyISAM in addition to InnoDB, it's really designed to copy entire *instances*, not just specific databases within the instance. The various replicate ignore bits do work, however the data still has to be transmitted from master to slave, even if the slave will end up ignoring it. This shouldn't be a problem as long as master and slave are local to each other. Due to the way that the binary log works, though, that amount of data is going to cause some non-trivial replication lag while the slave works through the transmitted log. McGlockenshire fucked around with this message at 04:31 on Aug 20, 2011 |
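For reference, the slave-side option mentioned a couple of posts up is a line in the slave's configuration file; the database name below is a made-up placeholder:

```ini
# my.cnf on the slave: skip replicating every table in the nightly-import DB
replicate-wild-ignore-table = nightly_import.%
```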
# ? Aug 20, 2011 04:27 |
|
I've put together an SQL statement that extracts my SMS messages from the SMS.db sqlite file on my iPhone, also joining the required tables from the AddressBook.db to get the contact name. code:
Sample message attached showing a normal message, then a gibberish code message. Rick Rickshaw fucked around with this message at 19:03 on Aug 22, 2011 |
# ? Aug 22, 2011 19:00 |
|
I would really change all those replaces to a single regex that could also deal with \r\n, \r, or \n being the three types of line endings. http://www.sqlite.org/lang_expr.html
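One caveat: stock sqlite3 doesn't actually ship a regex-replace function (the REGEXP operator only works if you register a user-defined function), so in practice the nested-REPLACE approach can at least be written compactly with char() codes. A sketch, assuming a reasonably recent sqlite3 with the char() function, and a hypothetical `text` column:

```sql
-- Normalize all three line-ending styles to a space. Collapse the
-- two-character \r\n first so nothing gets replaced twice.
SELECT REPLACE(REPLACE(REPLACE(text,
         char(13) || char(10), ' '),   -- \r\n
         char(13), ' '),               -- bare \r
         char(10), ' ') AS flattened   -- bare \n
FROM message;
```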
|
# ? Aug 22, 2011 19:36 |
|
Is there a difference between this:quote:SELECT type, AVG(price) FROM books and this? quote:SELECT type, AVG(price) FROM books I'm a peer tutor at my school, although SQL frankly isn't my strongest point. Had to help some guy with beginner SQL homework, and one of the problems was to select the various types of books and their average price, and to filter it by only showing types with an average higher than 10 (as you can clearly tell from the statements above). I had the guy type out the first set of code there, but it wasn't executing properly. Maybe he just had a typo that I wasn't seeing (it happens A LOT), or is there an actual difference in the way SQL handles the syntax? We're using MySQL Workbench, for reference.
|
# ? Aug 22, 2011 22:40 |
|
The first one would error out because your GROUP BY clause can't come after a HAVING. The HAVING has to be the last one.
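Concretely, going by the homework description above (average price per type, keeping only types averaging over 10), the working clause order is GROUP BY first, HAVING after:

```sql
-- HAVING filters the groups that GROUP BY produced, so it must follow it.
SELECT type, AVG(price)
FROM books
GROUP BY type
HAVING AVG(price) > 10;
```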
|
# ? Aug 22, 2011 22:49 |
|
Okay, yea, it was throwing an error but I didn't realize it had to be in that exact order. We did fix the problem by swapping it, but I just wasn't sure if he typed stuff wrong the first time and I didn't catch it.
|
# ? Aug 22, 2011 22:53 |
|
I'm working on a simple ajax search for a personal database. I'm using the query below, but it fails on some cases. Say a `comment` contains the word "love". If I search "lo" it will return the `comment`, but if I search "love" I get no results. It works for other terms, although I haven't tested them all. code:
|
# ? Aug 23, 2011 17:11 |
|
PresidentCamacho posted:I'm working on a simple ajax search for a personal database. I'm using the query below, but it fails on some cases. Say a `comment` contains the word "love". If I search "lo" it will return the `comment`, but if I search "love" I get no results. It works for other terms, although I haven't tested them all. You'll have to show us the exact query being generated. Like, when it doesn't bring up "love" when searched for "love", show us what the (I assume PHP) is generating. There might be weirdness.
|
# ? Aug 23, 2011 17:28 |
|
Did you get a compiled SQL statement, or did you run one you manually typed in instead of using your PHP variable? Because that looks like correct syntax, but it's likely not going to the SQL server correctly. efb;
|
# ? Aug 23, 2011 17:32 |
|
Sorry about that. This is my first project so I'm learning as I go. Here is the code that looks up the search term (I'll put it in a class later): code:
Also, PHP only returns <div id="search-comments"></div> with the full term "love", so it's either not looking or nothing is returned. PresidentCamacho fucked around with this message at 18:02 on Aug 23, 2011 |
# ? Aug 23, 2011 17:58 |
|
I think I'm right in saying the if(mysql_fetch_array($data)) { line actually gets the first result row? So it may be lost from the loop. I could very well be wrong, though. Also, what does $word echo before you get to the first query?
|
# ? Aug 23, 2011 18:53 |
|
I already see at least one problem: $user_rating = '<div id=rating><div id=professor>$professor</div><div id=comment>$comment</div></div>'; That won't evaluate $professor; you need these surrounded in " for that to work, not '. That said, you didn't show us what we wanted to see: The SQL string that PHP generates. Echo $search_lookup for us. Edit: And excellent point from above. When you run that mysql_fetch_array(), it internally moves the pointer, meaning that the next call (like in the while loop) will miss the first result. Instead, use mysql_num_rows().
|
# ? Aug 23, 2011 18:55 |
|
PresidentCamacho fucked around with this message at 02:01 on Feb 27, 2012 |
# ? Aug 23, 2011 19:16 |
|
PresidentCamacho posted:you can test it out here. It's not echoing any thing. I hope you're doing something to sanitize $word.
|
# ? Aug 23, 2011 19:23 |
|
Doctor rear end in a top hat posted:I hope you're doing something to sanitize $word. I made sure to sanitize all the inputs. Since I know what most of the values should be - i.e. rating is an int < 5, etc. - I check that first, then pass it along. addslashes() and htmlencode() are also used. I'm pretty much just building it as ideas come along; you are more than welcome to poke at it, though.
|
# ? Aug 23, 2011 19:34 |
|
PresidentCamacho posted:I made sure to sanitize all the inputs. Since I know what most of the values should be - i.e. rating is an int < 5, etc. - I check that first, then pass it along. addslashes() and htmlencode() are also used. You should probably switch to PDO, but if you're just doing simple queries and want to stick with mysql_query(), you can use mysql_real_escape_string() to let your database connection escape the string instead of just letting PHP fire in the dark with addslashes(). I would only htmlencode on the output, but I guess you could store values that way if you're just using the data for the web.
|
# ? Aug 23, 2011 19:48 |
|
I've got this query (I changed column and table names for here) and it takes 3 seconds to run and I have no idea why. It gives me all animals that share their parent with at least one other animal. code:
It takes .002 seconds to query: code:
code:
If I EXPLAIN the query, it shows this: code:
|
# ? Aug 23, 2011 21:52 |
|
That is likely because of the HAVING COUNT thing, which is basically another pass over the data, so it delays a bunch of things, especially in a sub-query. Also, doing an IN search for 1300 items is not fast either. I would need to see how/what kind of data you're working with to make a suggestion.
|
# ? Aug 23, 2011 22:07 |
|
I just changed it to code:
|
# ? Aug 23, 2011 22:13 |
|
Sprawl posted:That is likely because of the having count thing which is basically another go at the data. So it delays a bunch of things especially in a sub-query. Also doing a In search for 1300 items is not fast either. Perhaps could use a join instead? code:
|
# ? Aug 23, 2011 22:19 |
|
Hammerite posted:Perhaps could use a join instead? I rewrote the whole thing using a join. I realized I needed one record with both animals that share a parent. There will only ever be two names to a parent, and there is another column that has a unique index with parent, so I just joined the table on itself. code:
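The rewritten query itself was lost in formatting; based on the description (at most two animals per parent, plus a second column that's unique per parent), the self-join probably had roughly this shape (all names below are invented):

```sql
-- Pair each animal with its sibling via their shared parent. The
-- inequality on the unique-per-parent column keeps each pair from
-- showing up twice, once as (x, y) and once as (y, x).
SELECT a.name, b.name AS sibling
FROM animal a
JOIN animal b
  ON b.parent = a.parent
 AND b.seq    > a.seq;
```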
|
# ? Aug 23, 2011 22:45 |
|
Okay so I have a table with some data and timestamps. I want to get a count of the table at 15 minute increments for a daterange (i.e 30 days ago). So at 07-23-2011:00:00:00 the table had 12345 rows, at 07-23-2011:00:15:00 it had 13456 rows, etc. I've actually managed to get it by a count(*) of each time slice with: SELECT date AS d, extract(hour from date) as h, floor(extract(minute from date)/15) * 15 AS m, count(*) as count FROM table WHERE date >= '07/23/2011' GROUP BY d, h, m but I want the total table rows at each time slice, not the total rows in each slice. Help me forum sql thread, you're my only hope.
|
# ? Aug 24, 2011 07:48 |
|
Ah, well, I believe this is MySQL, and you can't use created/self-named fields in GROUP BY, so there is your problem.

SELECT date AS d, extract(hour from date) AS h, floor(extract(minute from date)/15) * 15 AS m, count(*) AS count
FROM table
WHERE date >= '07/23/2011'
GROUP BY table.date, extract(hour from date), floor(extract(minute from date)/15) * 15
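The question was actually after the running total at each slice, not the per-slice count. One way to get that is to sum the per-slice counts cumulatively; on an engine with window functions (MySQL 8+, Postgres) it's a single query. A sketch, with the table renamed `t` since `table` is a reserved word, and with the day truncated via DATE() so the date column's time part doesn't make every row its own group; note the total only counts rows inside the WHERE range, so seed it with the count of older rows if you need the absolute table size:

```sql
-- Cumulative row count as of each 15-minute slice.
SELECT DATE(date) AS d,
       EXTRACT(HOUR FROM date) AS h,
       FLOOR(EXTRACT(MINUTE FROM date) / 15) * 15 AS m,
       SUM(COUNT(*)) OVER (
         ORDER BY DATE(date),
                  EXTRACT(HOUR FROM date),
                  FLOOR(EXTRACT(MINUTE FROM date) / 15) * 15
       ) AS rows_so_far
FROM t
WHERE date >= '2011-07-23'
GROUP BY DATE(date),
         EXTRACT(HOUR FROM date),
         FLOOR(EXTRACT(MINUTE FROM date) / 15) * 15;
```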
|
# ? Aug 24, 2011 08:29 |
|
|
|
E: NM. I figured it out.
stray fucked around with this message at 19:55 on Sep 1, 2011 |
# ? Sep 1, 2011 16:06 |