|
With the select union select union way of going about my aforementioned problem, I could end up passing (from a web page) a hundred lines of: code:
Is there a more practical way of doing this internally? Can mysql build a table on the fly that is populated with an initial start date and incremented through an arbitrary number of iterations, and have that be usable in a join? Or is my external loop + lots of union select iterations the only way to go about this?
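For illustration, the verbose approach being described would look something like this (dates hypothetical); every extra day in the range is another line generated by the web page:

```sql
-- One SELECT per date, stitched together with UNION.
-- A hundred days means a hundred of these lines.
SELECT DATE('2008-12-01') AS d
UNION SELECT DATE('2008-12-02')
UNION SELECT DATE('2008-12-03')
-- ...and so on, up to the final date in the range...
```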
|
# ? Dec 25, 2008 13:00 |
|
|
Have you considered creating a temp table and then using your script space to INSERT dates to it? From there it's a traditional join, and then the lifetime of the table is handled by the connection.
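A sketch of that suggestion, with hypothetical table and column names; the INSERT rows would come from the script's loop:

```sql
-- TEMPORARY tables are private to the connection and dropped when it closes.
CREATE TEMPORARY TABLE date_list (d DATE NOT NULL);
INSERT INTO date_list (d) VALUES ('2008-12-01'), ('2008-12-02'), ('2008-12-03');

-- From here it's a traditional join, e.g.:
-- SELECT dl.d, s.total
-- FROM date_list dl
-- LEFT JOIN stats s ON s.stat_date = dl.d;
```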
|
# ? Dec 25, 2008 13:55 |
|
Triple Tech posted:Have you considered creating a temp table and then using your script space to INSERT dates to it? From there it's a traditional join, and then the lifetime of the table is handled by the connection. I thought about that, but I think that solution (after 100 iterations) isn't much different than what I'm doing with the select union. On top of that, I'd have to apply some perl date math prior to the inserts, which would be a little clunky; sql's date_add() function is rather handy. I was hoping that there was an auto-population of incremental dates in sql...
|
# ? Dec 25, 2008 17:03 |
|
jovial_cynic posted:With the select union select union way of going about my aforementioned problem, I could end up passing (from a web page) a hundred lines of: The best solution is getting a date/time aware graphing package and offloading the task. That said, a convenient way I've found to address this problem in the past with Oracle is to build my date range table from a large reference table, or with an internal function that does the equivalent. An example follows. code:
It doesn't look like MySQL has a rownum function, but I found the following suggestion on a blog (http://blog.gomilko.com/2007/04/28/mysql-rownum-imitation/). code:
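The imitation from that blog post uses a user variable as a counter; something along these lines (table name hypothetical):

```sql
-- @rownum starts at 0 via the derived table and increments once per row,
-- imitating Oracle's ROWNUM. Any sufficiently large table works as fodder.
SELECT @rownum := @rownum + 1 AS rownum, t.*
FROM some_large_table t, (SELECT @rownum := 0) init
LIMIT 100;
```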
|
# ? Dec 25, 2008 18:56 |
|
var1ety posted:It doesn't look like MySQL has a rownum function, but I found the following suggestion on a blog (http://blog.gomilko.com/2007/04/28/mysql-rownum-imitation/). Oh, holy smokes, this is perfect. The faked rownum works well with the date_add function: code:
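Presumably the combination looks something like this sketch (start date and row-source table hypothetical): the faked rownum supplies the offsets 0..99, and DATE_ADD turns each offset into a date:

```sql
-- Generates 100 consecutive dates starting at 2008-12-01.
SELECT DATE_ADD('2008-12-01', INTERVAL t.rn DAY) AS d
FROM (SELECT @rn := @rn + 1 AS rn
      FROM some_large_table, (SELECT @rn := -1) init
      LIMIT 100) t;
```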
|
# ? Dec 25, 2008 20:07 |
|
A cool thing with SQL2005 (and probably the versions of Oracle that let you run Java) is that it's trivial to write a table-returning function, and simply pass in the range of numbers you'd like. Same for dates.
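In T-SQL that might be sketched as a multi-statement table-valued function (the names here are illustrative, not from the thread):

```sql
-- SQL Server 2005: a table-valued function returning a run of dates,
-- usable directly in joins.
CREATE FUNCTION dbo.DateRange (@start DATETIME, @days INT)
RETURNS @out TABLE (d DATETIME)
AS
BEGIN
    DECLARE @i INT;
    SET @i = 0;
    WHILE @i < @days
    BEGIN
        INSERT INTO @out (d) VALUES (DATEADD(day, @i, @start));
        SET @i = @i + 1;
    END
    RETURN;
END;
```

Usage would then be as simple as `SELECT d FROM dbo.DateRange('2008-12-01', 100)`.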
|
# ? Dec 25, 2008 20:35 |
|
I have a table setup for handling sessions like the following: code:
Right now I have this but don't know how effective this would be: code:
|
# ? Dec 26, 2008 23:45 |
|
For particularly long queries, are there any tricks to roughly measure progress? I want to be able to, say, update a progress bar as a query chugs along in my applications. That way, I know it's thinking about it, and not just blowing up on me.
|
# ? Dec 27, 2008 21:55 |
|
Rocko Bonaparte posted:For particularly long queries, are there any tricks to roughly measure progress? I want to be able to, say, update a progress bar as a query chugs along in my applications. That way, I know it's thinking about it, and not just blowing up on me. You should be trapping errors in your application so you know the integrity of the currently executing query is intact. To show progress I generally just add a ticker that increments every N rows during my fetch loop. If I wanted something more robust I would keep track of the run time for the previous M runs and use this to give a more realistic ETA.
|
# ? Dec 27, 2008 22:39 |
|
Rocko Bonaparte posted:For particularly long queries, are there any tricks to roughly measure progress? I want to be able to, say, update a progress bar as a query chugs along in my applications. That way, I know it's thinking about it, and not just blowing up on me.
|
# ? Dec 28, 2008 04:16 |
|
I'm just getting started so apologies for the very basic question, still wrapping my head around everything.

I have one table called Users; each user has an ID number, and each user can be associated with up to 3 items. So there's a column for UserID, and then three columns -- Item1, Item2, and Item3. Those columns each contain either an ItemID or 0.

Then there's a table called Items; each item has an ID number, and each item can be associated with up to 6 users. So there's a column for ItemID, and then six columns for UserIDs, once again with either a UserID or 0. The Users table also contains some more descriptive stuff like reg date, email, etc, and the Items table has item descriptions etc.

How would I use JOIN (I think this is the right command) to, for example, query based on a UserID and get back the item descriptions of all the items they're associated with? And vice versa, query based on an ItemID and get back the email addresses of all of the associated users?
|
# ? Dec 29, 2008 05:00 |
|
Is there a reason you're not using a table to cross reference items to users? The way you're doing it is called "denormalized" and is probably bad for what you're doing. Have you looked up the syntax for doing joins for your RDBMS?
|
# ? Dec 29, 2008 05:10 |
|
Victor posted:Is there a reason you're not using a table to cross reference items to users? The way you're doing it is called "denormalized" and is probably bad for what you're doing. Have you looked up the syntax for doing joins for your RDBMS? I just started working in SQL like 24 hours ago and have been teaching myself from the W3Schools tutorial... so yeah, if I am doing this in a completely stupid way please let me know what the standard method is.
|
# ? Dec 29, 2008 05:16 |
|
Emo.fm posted:I'm just getting started so apologies for the very basic question, still wrapping my head around everything. If I understand correctly: What you're doing is called a many-to-many association. You need a join table to connect the Users and Items. Users shouldn't have any "Item" columns, and "Items" shouldn't have any "User" columns. Instead, have a table that's called "items_users" that has two columns: "user_id" and "item_id". And then use sub-selects and joins to get the information you want. Emo.fm posted:How would I use JOIN (I think this is the right command) to, for example, query based on a UserID and get back the item descriptions of all the items they're associated with? A subquery is easiest to understand: code:
code:
Hope that helps.
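The suggested schema and queries would look roughly like this; the column names (description, email) are guesses based on what the posts describe:

```sql
-- The join table: one row per user/item association.
CREATE TABLE items_users (
    user_id INT NOT NULL,
    item_id INT NOT NULL
);

-- Subquery: descriptions of all items associated with user 42.
SELECT description
FROM Items
WHERE ItemID IN (SELECT item_id FROM items_users WHERE user_id = 42);

-- Join: email addresses of all users associated with item 7.
SELECT u.email
FROM Users u
JOIN items_users iu ON iu.user_id = u.UserID
WHERE iu.item_id = 7;
```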
|
# ? Dec 29, 2008 17:15 |
|
^^^ Beaten You should look up some information on database normalization.
|
# ? Dec 29, 2008 17:31 |
|
Thanks everyone, I feel so stupid for not realizing this way of doing things from the beginning... now that I've fixed it, all of my code is like 1/4 the length. I will go do some reading about normalization.
|
# ? Dec 29, 2008 20:22 |
|
I have a table that logs user_name, sensor_name, and in and out times for a certain user visiting a certain area with a given sensor_name. I am trying to compile some simple statistics from this and am wondering what the fastest way is. For a given user_name, there may be many entries with the same user_name and sensor_name, and/or many entries with different user_name, sensor_name. I compute total visiting time for each user_name by summing up out_time - in_time for all entries containing that user_name and a sensor_name that is part of a sensor_name(s) list (I do this so I can combine stats for multiple sensors). I compute other values such as the total number of visits of some user_name in a given set of sensor_name(s). I also sort by these computed values. I came up with 3 ways to accomplish this via php/MySQL
What I am wondering about is whether I am missing something major about SQL that would make my first method faster? If I can do most of the work in MySQL while computing these stats (without terrible performance) it makes my job as programmer considerably easier. EDIT: I just realized that I could modify the query in #1 so that it returns user_name, total_time for each returned tuple. However it still does not solve the speed problem. hexadecimal fucked around with this message at 21:25 on Dec 29, 2008 |
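Assuming a log table and columns named after the post, doing the work in MySQL might be a single grouped query along these lines:

```sql
-- Total visiting time (in seconds) and visit count per user,
-- restricted to a chosen set of sensors.
SELECT user_name,
       SUM(TIMESTAMPDIFF(SECOND, in_time, out_time)) AS total_time,
       COUNT(*) AS total_visits
FROM visit_log
WHERE sensor_name IN ('sensor_a', 'sensor_b')
GROUP BY user_name
ORDER BY total_time DESC;
```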
# ? Dec 29, 2008 21:19 |
|
Hey guys I have an index on a DATETIME column, and I'm trying to group by DATE(idx_col) but of course, as DATE() is an expression, it's forced to do a filesort for the GROUP BY. My question is, isn't it *theoretically* possible to avoid the filesort since it theoretically can retrieve from the index ordered by DATE(idx_col) by using the index on the column? Is this simply an optimization that was never implemented or is it entirely impossible? Someone with more knowledge on how MySQL uses indexes for GROUP BY should give me some advice here.
|
# ? Dec 30, 2008 01:25 |
|
Atom posted:Hey guys I have an index on a DATETIME column, and I'm trying to group by DATE(idx_col) but of course, as DATE() is an expression, it's forced to do a filesort for the GROUP BY. My question is, isn't it *theoretically* possible to avoid the filesort since it theoretically can retrieve from the index ordered by DATE(idx_col) by using the index on the column? Oracle has function based indexes, where you can build an index from the output of a function. I don't think MySQL has that. What you can do instead is to add a column to your table (preferably integer or number) and then update that column with the function you want to be indexed. Then, just index that column. You'll need to update your insert / update statements to keep this other column updated as well. Then, instead of grouping by the DATE() function just group by that new indexed column that already has the data in the DATE() function format you want. I'm not really sure if that will speed up your query but you should be able to do this and try it out fairly quickly.
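A sketch of that workaround, with hypothetical table and column names: materialize DATE(created_at) into its own column, index it, and group by it instead of the expression:

```sql
ALTER TABLE events ADD COLUMN created_date DATE;
UPDATE events SET created_date = DATE(created_at);
CREATE INDEX idx_events_created_date ON events (created_date);

-- Inserts/updates must now keep created_date in sync, but the
-- GROUP BY can use the index:
SELECT created_date, COUNT(*)
FROM events
GROUP BY created_date;
```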
|
# ? Dec 30, 2008 03:52 |
|
tayl0r posted:Oracle has function based indexes, where you can build an index from the output of a function. I don't think MySQL has that. Oh I know I could just change it into a DATE column, or store a separate column for the date and get the index for a group by... my question was more of a theoretical one. That is to say, if it's ordered by DATETIME, it is by necessity also ordered by DATE, so why *couldn't* MySQL use the data as-is for the group by? Wouldn't it also be possible to group by FLOOR(x) using an index in this manner? Atom fucked around with this message at 19:07 on Dec 30, 2008 |
# ? Dec 30, 2008 19:05 |
|
Why, oh why, am I not able to use a "not equal to" operator in my select statement?

SELECT * FROM table WHERE Display = 1
2 records shown

SELECT * FROM table WHERE Display != 1 (also tried <> and variations of NOT)
0 records found (should return several records)

The Display field is simply not populated; it's an approval flag, and I'm just updating it with a 1 when I want a record to display. However, I'm having a bear of a time coming up with anything that doesn't return 0 values. It's currently set to a tinyint type, because VARCHAR wouldn't work. I suppose I can set the field to a 0 value when I create the record, but I just wanted the not equal to operator to work like it should. Using mysql.
|
# ? Dec 31, 2008 21:27 |
|
Civil posted:Why, oh why, am I not able to use a "not equal to" operator in my select statement? Use "where display is null".
|
# ? Dec 31, 2008 21:40 |
|
Or try putting the 1 in single quotes maybe <> '1'
|
# ? Dec 31, 2008 21:50 |
|
Read up on ANSI SQL nulls.
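The short version: comparing NULL with anything yields UNKNOWN, and WHERE only keeps rows where the condition is TRUE. With the Display column unpopulated (NULL), the behavior looks like this (table name hypothetical):

```sql
SELECT * FROM t WHERE Display = 1;    -- NULL = 1 is UNKNOWN: row excluded
SELECT * FROM t WHERE Display != 1;   -- NULL != 1 is also UNKNOWN: row excluded

-- These do match the unpopulated rows:
SELECT * FROM t WHERE Display IS NULL;
SELECT * FROM t WHERE Display IS NULL OR Display != 1;
SELECT * FROM t WHERE COALESCE(Display, 0) != 1;
```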
|
# ? Dec 31, 2008 22:07 |
|
var1ety posted:Use "where display is null".
|
# ? Dec 31, 2008 22:22 |
How do you decide when it's appropriate to use a trigger vs. doing it from the application code?
|
|
# ? Jan 2, 2009 07:58 |
|
fletcher posted:How do you decide when it's appropriate to use a trigger vs. doing it from the application code? Triggers are useful because you write one once, and then it fires in every place it *should*. If you rely on the application, there could be places where the trigger code should execute but doesn't. If you're controlling both the database code and the application code, there probably isn't that much value in using a trigger, since you can probably do a pretty good job of making sure the application is behaving itself. If you cannot control the application code, or your application code is getting complex enough that you're not sure you can find every place where the trigger code should be executed, then using a trigger is probably wise.
|
# ? Jan 2, 2009 11:14 |
|
Triggers operate entirely within a transaction, which can cause locking issues. I actually need to look into whether INSTEAD OF triggers would solve this, unless maybe tayl0r or someone else knows about this...
|
# ? Jan 2, 2009 16:55 |
|
Victor posted:Triggers operate entirely within a transaction, which can cause locking issues. I actually need to look into whether INSTEAD OF triggers would solve this, unless maybe tayl0r or someone else knows about this... Sure don't, sorry. I've only used triggers to do basic stuff like inserting the old & new values of a changing column into another table, to keep track of how it changes over time. So locking was never a problem there.
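That kind of audit trigger might look like this in MySQL (table and column names invented for the example):

```sql
-- Log the old and new values of a price column each time it changes.
CREATE TRIGGER trg_price_audit
AFTER UPDATE ON products
FOR EACH ROW
INSERT INTO price_history (product_id, old_price, new_price, changed_at)
VALUES (OLD.id, OLD.price, NEW.price, NOW());
```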
|
# ? Jan 3, 2009 07:13 |
|
My shop has run into problems with tons of business logic being embedded in triggers, so it is perhaps an esoteric issue.
|
# ? Jan 3, 2009 09:10 |
|
I've been running into some odd performance problems with my web application, which links up to a mysql database. My script is in perl, and my connection string depends on "use DBI." I know that "use MySql" is available also -- anybody know the relevant differences between the two? I tested the speed of my queries via phpMyAdmin, and my most complex queries are only taking about 0.012 seconds to run, so it's not my queries that are causing a problem. I think my application is hanging on the initial connection to the database, so I want to choose whichever is actually faster or better suited for my web app. Sorry if this question isn't SQL related enough... but I figured that people have to connect to their SQL databases somehow.
|
# ? Jan 4, 2009 07:21 |
|
Victor posted:My shop has run into problems with tons of business logic being embedded in triggers, so it is perhaps an esoteric issue. The danger of triggers is that people forget about them, or that the business case changes and you have to delve through layer after layer of triggers to change the code. If you can count on good documentation they are perfectly fine to use, but the reality is that the documentation will inevitably suck. Cascading triggers (triggers inserting/updating other tables, whose triggers update other tables, etc) have been responsible for some of my biggest headaches. I spent 4 weeks mapping out triggers for a small claims system. Most people had forgotten which triggers did what, and even that some existed. The system ended up doing an asinine amount of duplicated work. I dislike the whole "automagic" system that triggers create. People should know what the system is doing without diving through a couple dozen triggers. Xae fucked around with this message at 01:47 on Jan 5, 2009 |
# ? Jan 5, 2009 01:45 |
|
Curious. I'm not aware of any such issues with our triggers, but I have not personally worked with the more complicated scenarios. What I will say is that it is nice to be able to write normal insert/update statements to do things instead of executing sprocs that necessarily have to operate one-at-a-time, at least until we get table-valued parameters in SQL2008. I don't know what the TVP support situation is for other databases. Having to operate through a web service seems even more gross. Perhaps you have comments on Kenneth Downs advocating triggers? He and I appear to agree, although I have not investigated this thoroughly.
|
# ? Jan 5, 2009 06:35 |
|
How do I preserve a new line character from a select statement? Something like code:
code:
l1 l2
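One way to get that behavior in T-SQL is to build the line break explicitly (a sketch, not the poster's actual code):

```sql
-- CHAR(13) + CHAR(10) is a carriage return + line feed.
DECLARE @crlf CHAR(2);
SET @crlf = CHAR(13) + CHAR(10);
SELECT 'l1' + @crlf + 'l2';
```

In Management Studio the breaks survive with "Results to Text" (Ctrl+T); "Results to Grid" collapses them.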
|
# ? Jan 5, 2009 19:47 |
|
Atimo posted:How do I preserve a new line character from a select statement? What is this being used for?
|
# ? Jan 5, 2009 20:13 |
|
surrealcatalyst posted:What is this being used for? I have a script that uses the table schema to create a new table identical to any one specified, copies the data from the original into it, drops the original, recreates it, copies the data back, then drops the temp table. When it recreates it I'm adding a number of new columns and altering some column types (mainly to expand the varchar limit). It does it all with PRINT statements for output. I need to run this against 100 or so tables, so I was trying to create a script that would create the 100 or so other scripts for me to edit in the new columns, but when I changed it all to put it into a string and return that, I lost all the line breaks, and it all became next to impossible to edit by hand. So I'd like to retain those line breaks. I know this is a stupid way of doing this, I already created a much better way, but my boss INSISTS that I do it this way.
|
# ? Jan 5, 2009 20:22 |
|
convince your boss his way is loving retarded
|
# ? Jan 5, 2009 20:42 |
|
tayl0r posted:convince your boss his way is loving retarded I have tried, but both he and a senior developer seem to think this is better. So now I get to waste days I don't have doing all this poo poo by hand, over about 35,000 lines of sql. So am I SOL on the line break thing?
|
# ? Jan 5, 2009 20:58 |
|
Are you using "results to grid"? Also, you were using select, not print, above...
|
# ? Jan 5, 2009 20:59 |
|
|
Victor posted:Are you using "results to grid"? Also, you were using select, not print, above... What I'd like to do is set up the T-SQL that's building the string as a sproc, populate a table of table names to run it against, get it all back as one big lump table, then have a bit of .net code or something write each row back out to a file, but without the line breaks it's worthless. Guess I'll just go back to my nightmare now.
|
# ? Jan 5, 2009 21:04 |