1337JiveTurkey
Feb 17, 2005

This is sort of a dumb problem but I'm looking for some way in Oracle to generate a table, cursor, collection or whatever that simply has the numbers 900,000 to 999,999.

I've basically got an existing PL/SQL procedure that's looking for the first value in that range which isn't already in use (a pretty involved query to run potentially 100,000 times). This value is used as an ID for a product record. That procedure is unsurprisingly taking forever, especially when it's used in a batch importing hundreds or thousands of products.

For testing an alternative I created a table with just one column and all 100,000 numbers. It's hideous but when I rewrote it as a join against that table the performance went from taking over a minute to find one value to 2 seconds to find every free ID in that range. So is there some way to do this or am I just out of luck?

var1ety
Jul 26, 2004

1337JiveTurkey posted:

This is sort of a dumb problem but I'm looking for some way in Oracle to generate a table, cursor, collection or whatever that simply has the numbers 900,000 to 999,999.

I've basically got an existing PL/SQL procedure that's looking for the first value in that range which isn't already in use (a pretty involved query to run potentially 100,000 times). This value is used as an ID for a product record. That procedure is unsurprisingly taking forever, especially when it's used in a batch importing hundreds or thousands of products.

For testing an alternative I created a table with just one column and all 100,000 numbers. It's hideous but when I rewrote it as a join against that table the performance went from taking over a minute to find one value to 2 seconds to find every free ID in that range. So is there some way to do this or am I just out of luck?

code:
select 900000+(level-1)
  from dual
connect by level <= 100000;  -- levels 1..100,000 cover 900,000 through 999,999
That should do the trick, if all you need is a list of numbers.
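
And for your actual problem you could anti-join the generated numbers against the product table to pull every free ID in one shot. A sketch, with products / product_id standing in for your real names:
code:
select n
  from (select 899999 + level as n
          from dual
         connect by level <= 100000) gen
 where not exists (select 1
                     from products p           -- your product table (assumed name)
                    where p.product_id = gen.n)
 order by n;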

1337JiveTurkey
Feb 17, 2005

var1ety posted:

code:
select 900000+(level-1)
  from dual
connect by level <= 100000;  -- levels 1..100,000 cover 900,000 through 999,999
That should do the trick, if all you need is a list of numbers.

Thanks! Since that's recursive, do I need to be aware of any call stack limitations or the like?

var1ety
Jul 26, 2004

1337JiveTurkey posted:

Thanks! Since that's recursive, do I need to be aware of any call stack limitations or the like?

It depends on how you're using it, but I wouldn't worry about it as a one-off solution. Here is a note from Tanel Poder about resource consumption in the UGA (user global area) with the "connect by" method; he also offers a resource-friendly alternative using a Cartesian merge join.

http://blog.tanelpoder.com/2008/06/08/generating-lots-of-rows-using-connect-by-safely/
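
The Cartesian trick, roughly (my paraphrase of the idea, not his exact code), is to cross-join two smaller generators so neither CONNECT BY has to go deep:
code:
select rownum
  from (select level from dual connect by level <= 1000),
       (select level from dual connect by level <= 1000)
 where rownum <= 100000;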

KennyG
Oct 22, 2002
Here to blow my own horn.
Is there a preferable join syntax from a performance standpoint?


Are there general rules of thumb about which syntax is more effective, like how
code:
Select T.C1, T.C2...
is better than
code:
Select *



code:
Select ....
  From Table1, Table2
 Where Table1.PK = Table2.FK
vs
code:
Select ...
  From Table1 Join Table2 on Table1.PK = Table2.FK
vs
code:
Select ...
  From Table1 Left Join Table2 on Table1.Pk = Table2.FK
vs
code:
Select ...
  From Table1 Join (Select ... From Table2) as T2 on Table1.PK = T2.FK
Etc.

Question Mark Mound
Jun 14, 2006

Tokyo Crystal Mew
Dancing Godzilla

ElMudo posted:

It doesn't.

Use something like below. Let me know if you need an explanation of what it does.
code:
UPDATE pt1
SET PartPrefix = 'BB',
PartNum = 'BB-' + substring(pt1.PartNum,4,200)
FROM Parts_Table AS pt1
LEFT JOIN Parts_Table AS pt2 ON substring(pt1.PartNum, 4, 200) = substring(pt2.PartNum, 4, 200) AND pt2.PartPrefix = 'BB'
WHERE pt1.PartPrefix = 'AA' AND pt2.PartNum IS NULL
Then delete the 'AA' records remaining.
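The cleanup step would presumably just be:
code:
DELETE FROM Parts_Table
WHERE PartPrefix = 'AA';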
That worked a charm, thanks! :D

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
So I just issued a query to alter a column in a table with 8.5 million rows and 8.1gb of data. Load on our server just passed 10. Surely there's a way to tell MySQL to calm down a bit and not kill the machine? Maybe I have a bad setting that's causing it to go out of control?

var1ety
Jul 26, 2004

KennyG posted:

Does anyone have any good online resources for learning about performance tuning Oracle queries?

Here's a good post by Jonathan Lewis linking to a presentation on using dbms_xplan to tune with cardinality feedback, which is a very effective way of getting to the root of many performance problems and understanding why the optimizer is doing what it's doing.

http://jonathanlewis.wordpress.com/2009/05/11/cardinality-feedback/
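
The basic mechanics look like this (standard dbms_xplan usage rather than code from the post; the tables are made up):
code:
-- run the problem statement with rowsource statistics collected
select /*+ gather_plan_statistics */ count(*)
  from orders o
  join customers c on c.id = o.customer_id;

-- then compare E-Rows (estimated) against A-Rows (actual), step by step
select *
  from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));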

For blogs, here are the ones I find the most relevant (in no particular order) -

Richard Foote
Tanel Poder
Jonathan Lewis
Inside the Oracle Optimizer (Maria Colgan)

And last, but certainly not least -

AskTom (Tom Kyte)

For books, depending on your level of Oracle experience I recommend the following two in this order. Tom's book is a must-read for all Oracle developers.

Effective Oracle by Design (Tom Kyte)
Troubleshooting Oracle Performance (Christian Antognini)

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players

Golbez posted:

So I just issued a query to alter a column in a table with 8.5 million rows and 8.1gb of data. Load on our server just passed 10. Surely there's a way to tell MySQL to calm down a bit and not kill the machine? Maybe I have a bad setting that's causing it to go out of control?

I think you're just going to have to wait it out. If your table is InnoDB this is going to be especially painful. If I were going to do something like this, I wouldn't just alter the table directly:
code:
CREATE TABLE newtable LIKE oldtable;
ALTER TABLE newtable...;
INSERT INTO newtable SELECT * FROM oldtable;
Then you just rename the tables and delete the old one if it's good.
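
The swap itself is atomic in MySQL, something like this (oldtable_bak is just an illustrative name):
code:
RENAME TABLE oldtable TO oldtable_bak, newtable TO oldtable;
-- and once you're satisfied nothing is missing:
DROP TABLE oldtable_bak;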

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine

Doctor rear end in a top hat posted:

I think you're just going to have to wait it out. If your table is InnoDB this is going to be especially painful. If I were going to do something like this, I wouldn't just alter the table directly:
code:
CREATE TABLE newtable LIKE oldtable;
ALTER TABLE newtable...;
INSERT INTO newtable SELECT * FROM oldtable;
Then you just rename the tables and delete the old one if it's good.

MyISAM. I just don't understand why it can't slow itself down. :(

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

KennyG posted:

Is there a preferable join syntax from a performance standpoint?

code:
Select ....
  From Table1, Table 2
 Where Table1.PK = Table2.FK
vs
code:
Select ...
  From Table1 Join Table2 on Table1.PK = Table2.FK
vs
code:
Select ...
  From Table1 Left Join Table2 on Table1.Pk = Table2.FK
vs
code:
Select ...
  From Table 1 Join (Select ... From Table2) as T2 on Table1.PK = T2.FK
Etc.

The third one of these does something different to the other three. Outer joins (LEFT JOIN or RIGHT JOIN) are different to inner joins (JOIN). A left outer join fetches at least one row for each row in the left-hand table, even if there are no matching rows in the right-hand table.
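
To illustrate with made-up rows: say Table1 has PK values 1 and 2, and Table2 has a single row with FK = 1.
code:
-- inner join: only the matching row comes back
Select Table1.PK, Table2.FK
  From Table1 Join Table2 on Table1.PK = Table2.FK;
-- one row: (1, 1)

-- left outer join: every Table1 row comes back
Select Table1.PK, Table2.FK
  From Table1 Left Join Table2 on Table1.PK = Table2.FK;
-- two rows: (1, 1) and (2, NULL)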

Out of the other three, the second one is better than the first one because it's more explicit about the join criteria [see edit]; but there will be no difference in performance (since that's what you asked about), because the query optimiser should arrive at the same plan for both of them.

The last one is bizarre, both because it's totally unnecessary to make table 2 a subselect like that and because it's inconsistent (why are the columns from table 2 treated in that way but not the columns from table 1?). Also, the query optimiser should hopefully optimise that subquery away, but why gratuitously give it an extra hurdle to trip over? It can only risk hindering the query's performance.

EDIT: I noticed after posting that in the first one, the join criteria are moved to the WHERE clause. But this is still bad, because logic related to the join should be in the ON clause. In particular, for some things you can do with outer joins you need join-related logic to be in the ON clause.

Hammerite fucked around with this message at 17:57 on Aug 18, 2011

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players

Golbez posted:

MyISAM. I just don't understand why it can't slow itself down. :(

Because you want your tables write-locked for as little time as possible. Maybe you can have the OS give the MySQL process a lower priority?

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!

Bob Morales posted:

I moved the log file size up to 128MB and now they go at 15minutes instead of 5 hours. What the hell.

Did you also change the log buffer size? The defaults are rather small if you're doing large transactions like that, and if it was at the default 5MB where it has to clobber it constantly, yeah, I can see how that hosed it up.

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine

Doctor rear end in a top hat posted:

Because you want your tables write-locked for as little time as possible. Maybe you can have the OS give the MySQL process a lower priority?

Not really, it's our production database. But this table isn't high traffic; I don't mind if it's write-locked for a while if the payoff is the load not spiking.

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!

Golbez posted:

So I just issued a query to alter a column in a table with 8.5 million rows and 8.1gb of data. Load on our server just passed 10. Surely there's a way to tell MySQL to calm down a bit and not kill the machine? Maybe I have a bad setting that's causing it to go out of control?

Well, you could use an external program to limit the CPU usage, depending on the server type and so on: cpulimit on Debian-like Linux distros, for example.

Another thing you could try is to limit the thread concurrency value to fewer than 2 threads per CPU, but that would be active 100% of the time.

I know Oracle has the ability to limit CPU %, but I don't believe MySQL or Postgres can.

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
MySQL database, MyISAM tables. We have to download 30gb or so of data a night and replace one of our databases with it. Not refresh - replace. The replication to the slaves takes a while: the download finished 4 hours ago, replication is still going on, and it looks like it's still two hours behind.

Would it make sense to disable replication for this database, and when the import is done, simply dump the database to a file, rsync it over to the slave, and load infile?

Looking at the documentation, "Each slave receives a copy of the entire contents of the binary log. It is the responsibility of the slave to decide which statements in the binary log should be executed; you cannot configure the master to log only certain events." Looks like I can use --replicate-wild-ignore-table on this one, but does this method seem to make sense? Even if it doesn't necessarily speed up "replication", it would allow the actual replication to proceed and not get caught up on the giant import, right?

McGlockenshire
Dec 16, 2005

GOLLOCKS!
If not for it being MyISAM and other shenanigans, I'd recommend XtraBackup, which can take and restore entire snapshots, and even has a mode designed to create a new slave from a master. While it works with MyISAM in addition to InnoDB, it's really designed to copy entire *instances*, not just specific databases within the instance.

The various replicate-ignore bits do work; however, the data still has to be transmitted from master to slave, even if the slave will end up ignoring it. This shouldn't be a problem as long as master and slave are local to each other. Due to the way the binary log works, though, that amount of data is going to cause some non-trivial replication lag while the slave works through the transmitted log.

McGlockenshire fucked around with this message at 04:31 on Aug 20, 2011

Rick Rickshaw
Feb 21, 2007

I am not disappointed I lost the PGA Championship. Nope, I am not.
I've put together an SQL statement that extracts my SMS messages from the SMS.db SQLite file on my iPhone, also joining the required tables from AddressBook.db to get the contact name.

code:
SELECT 
	msg.rowid AS msg_rowid, 
	CASE WHEN msg.flags = 2 THEN 'You' ELSE con.first || ' ' || con.last END AS "To", 
	CASE WHEN msg.flags = 3 THEN 'You' ELSE con.first || ' ' || con.last END AS "From", 
	con.first || ' ' || con.last AS Contact,
	msg.text AS "Message", 
	msg.address AS "Number", 
	datetime(msg.date, 'unixepoch', 'localtime') AS "Date", 
	msg.date AS date_epoch, 
	msg.flags, 
	conx.value AS conx_number,
	conx.record_id AS con_record_id
FROM message AS msg
LEFT JOIN ABMultiValue AS conx ON CASE WHEN
LENGTH(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(conx.value,"(",""),")",""),"+","")," ",""),"-","")) = 10 
THEN "1" || REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(conx.value,"(",""),")",""),"+","")," ",""),"-","") 
ELSE REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(conx.value,"(",""),")",""),"+","")," ",""),"-","") END = 
CASE WHEN LENGTH(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(msg.address,"(",""),")",""),"+","")," ",""),"-","")) = 10 
THEN "1" || REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(msg.address,"(",""),")",""),"+","")," ",""),"-","") 
ELSE REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(msg.address,"(",""),")",""),"+","")," ",""),"-","") END
LEFT JOIN ABPerson AS con ON con.rowid = conx.record_id
The purpose of this is to extract the messages to an .xls for searching/archiving. It's NEARLY perfect, except it breaks on messages that have carriage returns: in Excel those come out as gibberish. I'm wondering if there's any way to remove carriage returns in SQLite? The first part of this solution only returned one record (I have a hundred or so of these messages), and when attempting to use the codes (X'0D'), nothing was being returned.
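
What I mean by stripping them inline would be something like this, replacing the msg.text line in the select list (untested):
code:
-- swap CR and LF bytes for spaces before the text reaches Excel
REPLACE(REPLACE(msg.text, X'0D', ' '), X'0A', ' ') AS "Message",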

Sample message attached showing a normal message, then a gibberish code message.

Rick Rickshaw fucked around with this message at 19:03 on Aug 22, 2011

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!
I would really change all those replaces to a single regex that could also deal with \r\n, \r, or \n, the three kinds of line endings.

http://www.sqlite.org/lang_expr.html

Sab669
Sep 24, 2009

Is there a difference between this:

quote:

SELECT type, AVG(price) FROM books
WHERE paperback='Y'
HAVING AVG(price) > 10
GROUP BY type;

and this?

quote:

SELECT type, AVG(price) FROM books
WHERE paperback='Y'
GROUP BY type
HAVING AVG(price) > 10;

I'm a peer tutor at my school, although SQL frankly isn't my strongest point. Had to help some guy with beginner SQL homework and one of the problems was to select the various types of books and their average price, and to filter it by only showing types with an average higher than 10 (as you can clearly tell from the statements above).

I had the guy type out the first set of code there, but it wasn't executing properly. Maybe he just had a typo that I wasn't seeing (it happens A LOT), or is there an actual difference in the way SQL handles the syntax? We're using MySQL Workbench, for reference.

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!
The first one errors out because your GROUP BY can't come after a HAVING; the HAVING has to come after the GROUP BY.
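
For reference, the clause order SQL wants (which also mirrors the logical evaluation order) is:
code:
SELECT type, AVG(price) FROM books
WHERE paperback='Y'       -- filters rows before grouping
GROUP BY type             -- then the rows are grouped
HAVING AVG(price) > 10    -- then the groups are filtered
ORDER BY type;            -- ORDER BY, if any, comes last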

Sab669
Sep 24, 2009

Okay, yea, it was throwing an error but I didn't realize it had to be in that exact order. We did fix the problem by swapping it, but I just wasn't sure if he typed stuff wrong the first time and I didn't catch it.

PresidentCamacho
Sep 9, 2009
I'm working on a simple ajax search for a personal database. I'm using the query below, but it fails in some cases. Say a `comment` contains the word "love": if I search "lo" it will return the `comment`, but if I search "love" I get no results. It works for other terms, although I haven't tested them all.

code:
SELECT * FROM comments WHERE comment LIKE '%$word%' ORDER BY id DESC;
I'm thinking it has something to do with the term showing up twice in the same comment, but I don't know.

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine

PresidentCamacho posted:

I'm working on a simple ajax search for a personal database. I'm using the query below, but it fails in some cases. Say a `comment` contains the word "love": if I search "lo" it will return the `comment`, but if I search "love" I get no results. It works for other terms, although I haven't tested them all.

code:
SELECT * FROM comments WHERE comment LIKE '%$word%' ORDER BY id DESC;
I'm thinking it has something to do with the term showing up twice in the same comment, but I don't know.



You'll have to show us the exact query being generated. Like, when it doesn't bring up "love" when searched for "love", show us what the (I assume PHP) is generating. There might be weirdness.

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!
Did you get a compiled SQL statement, or run one you manually typed in instead of using your PHP variable? Because that looks like correct syntax, but it's likely not going to the SQL server correctly.

efb;

PresidentCamacho
Sep 9, 2009
Sorry about that. This is my first project so I'm learning as I go. Here is the code that looks up the search term (I'll put it in a class later):

code:
$search_lookup = "SELECT * FROM comments WHERE comment LIKE '%$word%' ORDER BY id DESC;";
$data = mysql_query($search_lookup)or die(mysql_error());  
if(mysql_fetch_array( $data ))
  {
    echo '<div id="search-comments">';
    while ($info = mysql_fetch_array( $data ))
      {
      $professor = $info['professor'];
      $comment = $info['comment'];
      $user_rating = '<div id=rating><div id=professor>$professor</div><div id=comment>$comment</div></div>';
    echo $user_rating;
      }
    echo "</div>";
  } else {
    $search_lookup = "SELECT * FROM comments WHERE professor LIKE '%$word%' ORDER BY id DESC;";
    $data = mysql_query($search_lookup)or die(mysql_error());  
    if(mysql_fetch_array( $data ))
The else is supposed to search the professors column if the word isn't found in the comments column, so it's just reused code.

Also, PHP only returns <div id="search-comments"></div> with the full term "love", so it's either not looking or nothing is returned.

PresidentCamacho fucked around with this message at 18:02 on Aug 23, 2011

Fruit Smoothies
Mar 28, 2004

The bat with a ZING
I think I'm right in saying the

if(mysql_fetch_array($data)) {

line actually gets the first result row? So it may be lost from the loop. I could very well be wrong, though.

Also, what does $word echo before you get to the first query?

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
I already see at least one problem:
$user_rating = '<div id=rating><div id=professor>$professor</div><div id=comment>$comment</div></div>';

That won't evaluate $professor; you need these surrounded in " for that to work, not '.

That said, you didn't show us what we wanted to see: The SQL string that PHP generates. Echo $search_lookup for us.

Edit: And excellent point from above. When you run that mysql_fetch_array(), it internally moves the pointer, meaning that the next call (like in the while loop) will miss the first result. Instead, use mysql_num_rows().

PresidentCamacho
Sep 9, 2009
:feelsgood:

PresidentCamacho fucked around with this message at 02:01 on Feb 27, 2012

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players

PresidentCamacho posted:

you can test it out here. It's not echoing anything.

http://www.bragcollege.com/review

It just skips over the whole while loop.



BOOM! You guys got it.

code:
 if(mysql_num_rows( $data ))

I hope you're doing something to sanitize $word.

PresidentCamacho
Sep 9, 2009

Doctor rear end in a top hat posted:

I hope you're doing something to sanitize $word.

I made sure to sanitize all the inputs. Since I know what most of the values should be (e.g. rating is an int < 5, etc.), I check that first and then pass it along. addslashes() and htmlencode() are also used.

I'm pretty much just building it as ideas come along, you are more than welcome to poke at it though.

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players

PresidentCamacho posted:

I made sure to sanitize all the inputs. Since I know what most of the values should be (e.g. rating is an int < 5, etc.), I check that first and then pass it along. addslashes() and htmlencode() are also used.

I'm pretty much just building it as ideas come along, you are more than welcome to poke at it though.

You should probably switch to PDO, but if you're just doing simple queries and want to stick with mysql_query(), you can use mysql_real_escape_string() to let your database connection escape the string instead of just letting PHP fire in the dark with addslashes(). I would only htmlencode on the output, but I guess you could store values that way if you're just using the data for the web.

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players
I've got this query (I changed column and table names for here) and it takes 3 seconds to run and I have no idea why. It gives me all animals that share their parent with at least one other animal.
code:
SELECT name FROM animals WHERE parent IN 
(SELECT parent FROM animals WHERE parent IS NOT NULL GROUP BY parent HAVING count(*) > 1)
name and parent are both indexed and animals only has 2500 rows. name's index is unique.
It takes .002 seconds to query:
code:
SELECT name FROM animals
and it takes .00008 seconds to query:
code:
SELECT parent FROM animals WHERE parent IS NOT NULL GROUP BY parent HAVING count(*) > 1
which returns 1 row.
If I EXPLAIN the query, it shows this:
code:
select_type	    table    type   possible_keys  key	   key_len  ref	  rows  Extra
PRIMARY	            animals  ALL    NULL	   NULL	   NULL	    NULL  2541	Using where
DEPENDENT SUBQUERY  animals  range  parent	   parent  13	    NULL  1270	Using where; Using index
How in the hell?

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!
That is likely because of the HAVING COUNT, which is basically another pass over the data, so it delays a bunch of things, especially in a subquery. Also, doing an IN search over 1,300 items is not fast either.

I would need to see how/what kind of data you're working with to make a suggestion.

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players
I just changed it to
code:
SELECT name
FROM   animals
WHERE  parent IN (SELECT parent
                  FROM   (SELECT parent,
                                 COUNT(*) parentcount
                          FROM   animals
                          WHERE  parent IS NOT NULL
                          GROUP  BY parent) parentcounts
                  WHERE  parentcount > 1)  
and it finishes in a much more reasonable .37 seconds.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Sprawl posted:

That is likely because of the having count thing which is basically another go at the data. So it delays a bunch of things especially in a sub-query. Also doing a In search for 1300 items is not fast either.

Perhaps you could use a join instead?

code:
select
    animals.name
from
    animals
    join (
        select parent
        from animals
        where parent is not null
        group by parent
        having count(*) > 1
    ) as MySubquery
        on animals.parent = MySubquery.parent

butt dickus
Jul 7, 2007

top ten juiced up coaches
and the top ten juiced up players

Hammerite posted:

Perhaps you could use a join instead?

I rewrote the whole thing using a join. I realized I needed one record with both animals that share a parent. There will only ever be two names to a parent, and there is another column that has a unique index with parent, so I just joined the table on itself.

code:
SELECT animalm,
       animalf
FROM   (SELECT name   AS animalm,
               parent AS parentm
        FROM   animals
        WHERE  gender = 'M') m,
       (SELECT name   AS animalf,
               parent AS parentf
        FROM   animals
        WHERE  gender = 'F') f
WHERE  parentm = parentf  

SiliconCow
Jul 14, 2001
Okay, so I have a table with some data and timestamps. I want to get a count of the table at 15-minute increments over a date range (e.g. the last 30 days). So at 07-23-2011:00:00:00 the table had 12345 rows, at 07-23-2011:00:15:00 it had 13456 rows, etc.


I've actually managed to get it by a count(*) of each time slice with:

code:
SELECT date AS d,
extract(hour from date) as h,
floor(extract(minute from date)/15) * 15 AS m,
count(*) as count
FROM table WHERE date >= '07/23/2011'
GROUP BY d, h, m

but I want the total table rows at each time slice, not the total rows in each slice.

Help me forum sql thread, you're my only hope.

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!
Ah, well, I believe this is MySQL, and you can't use created/self-named fields in GROUP BY, so there is your problem.



code:
SELECT date AS d,
extract(hour from date) as h,
floor(extract(minute from date)/15) * 15 AS m,
count(*) as count
FROM table WHERE date >= '07/23/2011'
GROUP BY table.date, extract(hour from date), floor(extract(minute from date)/15) * 15
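
For the running total you actually want (how big the table was at each slice, rather than rows per slice), one sketch is a correlated count against each slice's end time. Untested, assuming MySQL, with `table` and `date` as the placeholder names from your post:
code:
SELECT s.slice,
       (SELECT COUNT(*)                        -- every row that existed by the end of this slice
          FROM `table` t
         WHERE t.date < s.slice + INTERVAL 15 MINUTE) AS total_rows
  FROM (SELECT DISTINCT
               FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(date) / 900) * 900) AS slice
          FROM `table`
         WHERE date >= '2011-07-23') s
 ORDER BY s.slice;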

stray
Jun 28, 2005

"It's a jet pack, Michael. What could possibly go wrong?"
E: NM. I figured it out.

stray fucked around with this message at 19:55 on Sep 1, 2011
