|
Thanks for the help with the voting query, but I have another question regarding grouping DATETIME fields. I've got a table called 'views' that's laid out like so: code:
code:
Ideally, I'd like to be able to plot the results out like how Steam does. My question is: is there any way to group results in sets of 10 minutes, rather than single minutes, in the query? I've had a brief look at the INTERVAL function, but it's hard to find examples of how to use it in a GROUP BY query.
|
# ? Dec 10, 2007 01:32 |
|
|
|
Opius posted:Thanks for the help with the voting query, but I have another question regarding grouping DATETIME fields. There are probably more elegant ways to do this, but I've done something similar by taking the Unix time in seconds and dividing it by your unit of time in seconds. If the division doesn't give a whole integer, truncate it to an integer. Multiply it back by your unit of time, and you should have everything grouped up into 10-minute buckets.
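The divide-truncate-multiply trick above can be sketched like this. A minimal SQLite demo (the 'views' table here is a guess at the schema with a single assumed viewed_at column, not the poster's actual layout):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE views (viewed_at TEXT);  -- hypothetical schema: one DATETIME per page view
INSERT INTO views VALUES
  ('2007-12-10 01:02:00'), ('2007-12-10 01:08:30'),
  ('2007-12-10 01:11:00'), ('2007-12-10 01:25:00');
""")

# Truncate each timestamp to a 10-minute (600-second) bucket:
# floor(unix_time / 600) * 600, then count rows per bucket.
rows = conn.execute("""
    SELECT datetime((CAST(strftime('%s', viewed_at) AS INTEGER) / 600) * 600,
                    'unixepoch') AS bucket,
           COUNT(*) AS hits
    FROM views
    GROUP BY bucket
    ORDER BY bucket
""").fetchall()

for bucket, hits in rows:
    print(bucket, hits)
```

The same idea works in MySQL with UNIX_TIMESTAMP() and FROM_UNIXTIME() in place of strftime/datetime.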
|
# ? Dec 10, 2007 01:47 |
|
You could also fill a table with the hour and minute intervals and do a simple join. code:
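The lookup-table approach could look something like this sketch (the intervals table and all column names are assumptions, since the posted code isn't shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE views (viewed_at TEXT);
INSERT INTO views VALUES
  ('2007-12-10 01:02:00'), ('2007-12-10 01:08:30'), ('2007-12-10 01:11:00');

-- Pre-filled lookup table of 10-minute interval boundaries (hypothetical layout)
CREATE TABLE intervals (t_start TEXT, t_end TEXT);
INSERT INTO intervals VALUES
  ('2007-12-10 01:00:00', '2007-12-10 01:10:00'),
  ('2007-12-10 01:10:00', '2007-12-10 01:20:00');
""")

# Each view lands in the interval whose range contains it; a LEFT JOIN
# keeps empty intervals in the output (handy for plotting).
rows = conn.execute("""
    SELECT i.t_start, COUNT(v.viewed_at) AS hits
    FROM intervals i
    LEFT JOIN views v ON v.viewed_at >= i.t_start AND v.viewed_at < i.t_end
    GROUP BY i.t_start
    ORDER BY i.t_start
""").fetchall()
```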
|
# ? Dec 10, 2007 15:00 |
|
In SQL Server you can do this: code:
|
# ? Dec 10, 2007 15:53 |
|
Opius posted:My question is: is there any way to group results in sets of 10 minutes, rather than single minutes, in the query? Give this a shot... tried it against a similar database I have here and it seems to work fine. code:
|
# ? Dec 10, 2007 19:58 |
|
I've been banging my head against the wall over this one all night. The following throws a T_ENCAPSED_STRING error (assume the variables are passed in properly from the HTML form): code:
|
# ? Dec 11, 2007 08:57 |
|
foghorn posted:I've been banging my head against the wall over this one all night. You missed a period after $name: code:
|
# ? Dec 11, 2007 12:51 |
|
Alex007 posted:You missed a period after $name : Thanks. Guess I just needed a second pair of eyes!
|
# ? Dec 11, 2007 16:47 |
|
I have a properties table where I store data relating to a particular row in key/value pairs. It looks like this: Table: person_properties code:
My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way? The way I'm doing it right now is: code:
Mashi fucked around with this message at 00:45 on Dec 12, 2007 |
# ? Dec 11, 2007 22:57 |
|
That's what you get for choosing the Entity-Attribute-Value model, which many people consider to be a design anti-pattern because of its many drawbacks. You're either going to have to do a JOIN per column, or a sub-select like you're doing now.
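The "JOIN per column" workaround mentioned above can be sketched like so. This assumes a person_properties layout of (person_id, key, value), which may not match Mashi's exact columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Assumed EAV layout; the real column names may differ.
CREATE TABLE person_properties (person_id INTEGER, key TEXT, value TEXT);
INSERT INTO person_properties VALUES
  (1, 'name', 'Mashi'), (1, 'age', '25'), (1, 'city', 'Tokyo');
""")

# One self-join per attribute pivots the key/value rows back into columns.
row = conn.execute("""
    SELECT n.value AS name, a.value AS age, c.value AS city
    FROM person_properties n
    JOIN person_properties a ON a.person_id = n.person_id AND a.key = 'age'
    JOIN person_properties c ON c.person_id = n.person_id AND c.key = 'city'
    WHERE n.key = 'name' AND n.person_id = 1
""").fetchone()
```

Note how every additional attribute costs another join, which is exactly the drawback being complained about.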
|
# ? Dec 12, 2007 02:38 |
|
You really need to ask yourself if the EAV model is necessary. I work professionally on a gigantic EAV-modeled database, and I'll tell you that while it is sometimes necessary due to application requirements, it's never easy to deal with. The headache you're dealing with now is only the beginning of your pain.
|
# ? Dec 12, 2007 02:47 |
|
Wow, and there I was really proud of myself for thinking of such a great table structure. Thanks for the heads up, I'll reconsider it.
|
# ? Dec 12, 2007 03:09 |
If you have a MySQL database of hundreds of thousands of articles, what is the best way to pull up the top 100 articles ordered by number of views? I'm guessing something like this is pretty taxing on the db server since it does a full table scan? SELECT id, title FROM articles ORDER BY views DESC LIMIT 100 How do you get around doing this? A temporary table that gets updated via a cron job or something?
|
|
# ? Dec 12, 2007 04:56 |
|
Mashi posted:My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way?
|
# ? Dec 12, 2007 05:23 |
|
fletcher posted:If you have a mysql database of hundreds of thousands of articles what is the best way to pull up the top 100 articles ordered by number of views?
|
# ? Dec 12, 2007 05:24 |
Victor posted:Not if you index the views column. Now, depending on how often the number of views is updated, this index might be a performance problem. Try it out! So does that mean the views column shouldn't be updated every time a unique user views the article? I guess this also means that I don't really understand what an indexed column is. Is there an article that will help me out?
|
|
# ? Dec 12, 2007 09:34 |
|
fletcher posted:So does that mean the views column shouldn't be updated every time a unique user views the article? So there's a trade-off. If your "views" column is being updated very very frequently (like, you're Slashdot or something), then adding an index to that column will slow down your DB a little bit, but the upside is that your report to get the top 100 articles sorted by views will be very quick. You'd have to have a pretty heavy-traffic site to notice any slowdown though. In the event that you did get unacceptable slowdown, the solution would be to sacrifice accuracy for speed. Either:
- Have two "views" columns, one indexed and one unindexed. Keep the unindexed one up-to-date and accurate, and periodically (via a cronjob) copy this value over to the indexed column.
- Have one unindexed "views" column, but cache the results of the top 100 articles so that you don't see the performance hit every time.
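Victor's "index the views column" suggestion comes down to one CREATE INDEX statement. A small SQLite sketch (the articles layout is taken from fletcher's query; the index name is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, views INTEGER);
INSERT INTO articles (title, views) VALUES ('a', 10), ('b', 500), ('c', 42);

-- With this index, ORDER BY views DESC can walk the index in reverse
-- instead of scanning and sorting the whole table.
CREATE INDEX idx_articles_views ON articles (views);
""")

top = conn.execute(
    "SELECT id, title FROM articles ORDER BY views DESC LIMIT 100"
).fetchall()

# EXPLAIN QUERY PLAN will typically show the index scan rather than a sort step.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT id, title FROM articles ORDER BY views DESC LIMIT 100"
).fetchall())
```

The flip side is that every UPDATE to views now also has to maintain the index, which is the write-cost trade-off discussed above.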
|
# ? Dec 12, 2007 10:16 |
|
This is probably really easy to fix, but I'm trying to select a list of material numbers and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the German description unless it doesn't exist, in which case I want the English description. I'm currently using: SELECT material_number, material_text FROM materials INNER JOIN mat_desc ON materials.material_number = mat_desc.material_number GROUP BY material_number, material_text; This only works because language = 'DE' is ordered first by default and 'EN' is ordered second. This seems pretty dodgy though. Is there a proper way of doing this?
|
# ? Dec 12, 2007 16:42 |
|
CoC SQL gods, I have a question: is there a way to apply an EAV-like schema without the SQL heresy that is EAV? I have a client asking for what is basically a product database that can be compared. The problem is that there are many different products that I want to be able to define dynamically (and eventually offload this to their administrators). It's going to be used to track price and feature changes through telecommunication providers (this is for pricing and mostly marketing/decision-making reasons), and it's not going to be outside-facing, so performance is not so critical (few clients, some patience). Ed: The comparisons are going to be only between the same products from different companies. So basically I need to define my data models dynamically. I was thinking of using this sort of modeling. An example of my data definition (only the products themselves need to be dynamically defined):

Product1 (Voice lines) (edit: there was a rogue paste here, gently caress my sensitive touchpad)
- Company - this is really metadata
- Contract Length - same
- Actual Price
- Extra Line Price
- Facility MRC
- SLC Price
- Regulatory fees
- Total Fees
- Total Facility MRC plus Fees
- Notes

Product2 (Long Distance Plans)
- Company - this is really metadata
- Contract Length - same
- Local Calls
- LD US
- LD Int
- TF Services
- CCs
- Timeframe
- Per Minute Price
- Notes

An example of atoms:

Company 1's Product1:
- Company: Company1
- Contract Length: 0yr contract
- Actual Price: $350
- ELP: $310
- SLCP: $660
- RF: $30
- TF: $440
- TFPF: $900
- Notes: Does not include feature X or Y

Company 1's Product2:
- Company: Company1
- Contract Length: 0yr
- Local Calls: True
- LD US: True
- LD Int: True
- TF Services: True
- CCs: True
- Timeframe: 24hr (fractioned 30/6)
- Per Minute Price: $.05/min
- Notes: <blank>

My guess is that SQL is just not designed for this and it's better to use something like an RDF datastore, and if this is the case, does anyone have any resources for learning to use RDF?

deimos fucked around with this message at 20:14 on Dec 12, 2007 |
# ? Dec 12, 2007 17:34 |
|
deimos posted:CoC SQL gods, I have a question, is there a way to apply an EAV-like schema without the SQL heresy that is EAV. I have a client asking for what is basically a product database that can be compared, the problem is that there's many different products that I want to be able to define dynamically (eventually offload this to their administrators). It's going to be used to track price and feature changes through telecommunication providers (this is for pricing and mostly marketing/decision making reasons, and it's not going to be outside facing, so performance is not so critical (few clients, some patience). Come up with a generalized model. If you're going to be able to implement some sort of comparison, you'll be doing that anyway.
|
# ? Dec 12, 2007 17:42 |
|
m5 posted:Come up with a generalized model. If you're going to be able to implement some sort of comparison, you'll be doing that anyway. Gah, forgot to mention: the comparison is only between companies on the same products. The problem with a generalized model is that the products are very different; some might have 35 different attributes (this is pretty much the case with wireless plans) and they might grow dynamically (wireless plans are constantly adding features, think of alltel's favorites and other stuff).
|
# ? Dec 12, 2007 17:59 |
|
deimos posted:Gah forgot to mention, the comparison is only between companies on the same products, the problem with a generalized model is that the products are very different, some might have 35 different attributes (this is pretty much the case with wireless plans) and might grow dynamically (wireless plans are constantly adding features, think of alltel's favorites and other stuff). Well, then in that case I'd pack the most "stable" attributes into a primary table. For some of the others, instead of a single lump of name/value pairs, maybe there could be separate tables for different sorts of optional attributes. You then left-join to those tables from the main one. Maybe.
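m5's "stable attributes in a primary table, optional attributes in per-family tables joined in" idea could be sketched like this. All table and column names here are invented for illustration, loosely based on deimos's product lists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Stable attributes every product shares live in the primary table.
CREATE TABLE products (id INTEGER PRIMARY KEY, company TEXT, contract_length TEXT);

-- Attributes specific to one product family get their own typed table.
CREATE TABLE voice_line_attrs (
    product_id INTEGER REFERENCES products(id),
    actual_price REAL, extra_line_price REAL);
CREATE TABLE ld_plan_attrs (
    product_id INTEGER REFERENCES products(id),
    per_minute_price REAL, local_calls INTEGER);

INSERT INTO products VALUES (1, 'Company1', '0yr');
INSERT INTO voice_line_attrs VALUES (1, 350.0, 310.0);
""")

# LEFT JOINs bring in whichever family table applies; the other comes back NULL.
row = conn.execute("""
    SELECT p.company, v.actual_price, l.per_minute_price
    FROM products p
    LEFT JOIN voice_line_attrs v ON v.product_id = p.id
    LEFT JOIN ld_plan_attrs   l ON l.product_id = p.id
    WHERE p.id = 1
""").fetchone()
```

This keeps each attribute strongly typed and queryable, at the cost of a schema change whenever a product family grows a new attribute, which is exactly the tension in this exchange.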
|
# ? Dec 12, 2007 18:48 |
|
On EAV: if you have SQL2005, it becomes a lot prettier with PIVOT functionality. A lot prettier. Dunno about how badly it performs... Dakha posted:This is probably really easy to fix, but I'm trying select a list of material numbers, and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the german description, unless it doesn't exist, in which case I want the english descriptions. code:
|
# ? Dec 12, 2007 19:42 |
|
Dakha posted:This is probably really easy to fix, but I'm trying select a list of material numbers, and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the german description, unless it doesn't exist, in which case I want the english descriptions. code:
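The replies' code isn't preserved above, but one common shape for the DE-with-EN-fallback query is two LEFT JOINs plus COALESCE, sketched here in SQLite (table layout taken from Dakha's query, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE materials (material_number INTEGER);
CREATE TABLE mat_desc (material_number INTEGER, language TEXT, material_text TEXT);
INSERT INTO materials VALUES (1), (2);
INSERT INTO mat_desc VALUES
  (1, 'DE', 'Schraube'), (1, 'EN', 'Screw'),
  (2, 'EN', 'Washer');                         -- no German row for material 2
""")

# Join the DE and EN rows separately; COALESCE prefers DE and falls
# back to EN, so the result never depends on implicit row ordering.
rows = conn.execute("""
    SELECT m.material_number,
           COALESCE(de.material_text, en.material_text) AS material_text
    FROM materials m
    LEFT JOIN mat_desc de
           ON de.material_number = m.material_number AND de.language = 'DE'
    LEFT JOIN mat_desc en
           ON en.material_number = m.material_number AND en.language = 'EN'
    ORDER BY m.material_number
""").fetchall()
```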
|
# ? Dec 12, 2007 19:45 |
|
Victor posted:On EAV: if you have SQL2005, it becomes a lot prettier with PIVOT functionality. A lot prettier. Dunno about how badly it performs... Yeah postgresql has the tablefuncs contrib for this, I was just wondering if anyone had a better idea. I am guessing I will make a product table and just foreign key an EAV-based table to it. Another question, do I separate the "design/definition" data and the actual data or do I put them in the same table with some sort of flag? Since I don't think I'll ever have to do anything with the data other than load and save (in terms of RDBMS interaction, because I won't ever have to search it) I could even make it a blob and just use XML/JSON to store it (yes, this is relational heresy, I know, but I don't care).
|
# ? Dec 12, 2007 20:29 |
|
deimos posted:Yeah postgresql has the tablefuncs contrib for this, I was just wondering if anyone had a better idea. I am guessing I will make a product table and just foreign key an EAV-based table to it. quote:Another question, do I separate the "design/definition" data and the actual data or do I put them in the same table with some sort of flag? quote:Since I don't think I'll ever have to do anything with the data other than load and save (in terms of RDBMS interaction, because I won't ever have to search it) I could even make it a blob and just use XML/JSON to store it (yes, this is relational heresy, I know, but I don't care).
|
# ? Dec 12, 2007 20:39 |
|
Victor posted:It really depends on your software. If you go the table inheritance approach, then it's possible that running DDL is part of standard application operation -- that's always viewed with extreme amounts of suspicion. However, just like any powerful functionality, there are ways to use it that will probably be OK. Woop, I was looking up on postgres' tablefuncs and found: http://developer.postgresql.org/pgdocs/postgres/hstore.html This is sweetness, you can even create indexes on the data, as perverse as that is. They also have an XML contrib to run xpath through the data (the same thing DB2 has) if I decide to go the XML way. edit: django + postgreSQL + hstore it is, onward with the heresy (it's actually fairly easy to integrate into django because hstore's functions return something that can be eval()ed into a python datastructure )! deimos fucked around with this message at 22:19 on Dec 12, 2007 |
# ? Dec 12, 2007 21:22 |
|
Come join my insanity. code:
code:
Edit: Just realized, more than 50% of the results probably contain "services". In that case, is there any way to skip the 50% limit? I suppose I could change it to boolean... Golbez fucked around with this message at 01:25 on Dec 15, 2007 |
# ? Dec 15, 2007 01:17 |
|
Questions moved to own thread.
Mashi fucked around with this message at 18:48 on Dec 16, 2007 |
# ? Dec 15, 2007 02:25 |
|
Mashi posted:My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way? The way I'm doing it right now is: I can think of ways to do that dynamically using SQL injection, but what's the point? It will cost just as much and probably make your code longer. With the way your table is set up (which I'd hate) you're pretty much doing 3 scans any way you slice it. This type of data should have been pre-determined and stored horizontally... or in XML... but you probably already know that...
|
# ? Dec 15, 2007 10:08 |
|
I'm wondering if there's a better way to use the MySQL console in Windows. The command prompt window consistently screws up my output of tables because I can't widen the window (unless I'm a dumbass and don't realize there's a way to widen it). Edit: Nevermind, I am a dumbass. Right click -> Properties -> Layout tab -> Window Size Tap fucked around with this message at 07:30 on Dec 16, 2007 |
# ? Dec 16, 2007 07:27 |
Tap posted:I'm wondering if there's a better way to use mysql console in windows. The command prompt window consistently screws up my output of tables because I can't widen the window (unless I'm a dumbass and don't realize there's a way to widen the window). The mysql admin client for windows on their website is pretty decent. There's also a replacement console for windows (called console2 I think, I don't have it on the machine I am on right now) which is pretty cool, and allows for tabbed console windows and stuff. I am pretty sure it is also resizeable. edit: added links http://dev.mysql.com/downloads/gui-tools/5.0.html http://sourceforge.net/projects/console/ I do prefer phpMyAdmin over the other choices though fletcher fucked around with this message at 10:08 on Dec 16, 2007 |
|
# ? Dec 16, 2007 08:04 |
|
fletcher posted:The mysql admin client for windows on their website is pretty decent. There's also a replacement console for windows (called console2 I think, I don't have it on the machine I am on right now) which is pretty cool, and allows for tabbed console windows and stuff. I am pretty sure it is also resizeable. This doesn't get installed with wampserver does it? Edit: Nevermind. Tap fucked around with this message at 09:30 on Dec 16, 2007 |
# ? Dec 16, 2007 09:12 |
|
code:
In other words, I want this to be returned: code:
SELECT DISTINCT `instructor`, `date` FROM `flights` WHERE `pilot_id` = '50' ORDER BY `date` DESC LIMIT 12 but that's not quite there. Any help?
|
# ? Dec 17, 2007 06:00 |
|
SELECT instructor, max(date) FROM flights WHERE pilot_id = '50' GROUP BY instructor
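That GROUP BY answer in action, as a runnable sketch (the flights layout is inferred from the question's query; the sample rows and the added ORDER BY for a most-recent-first list are my own assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE flights (pilot_id INTEGER, instructor TEXT, date TEXT);
INSERT INTO flights VALUES
  (50, 'Smith', '2007-12-01'), (50, 'Smith', '2007-12-10'),
  (50, 'Jones', '2007-12-05'), (99, 'Smith', '2007-12-16');
""")

# One row per instructor, carrying that instructor's most recent flight date.
rows = conn.execute("""
    SELECT instructor, MAX(date)
    FROM flights
    WHERE pilot_id = '50'
    GROUP BY instructor
    ORDER BY MAX(date) DESC
""").fetchall()
```

Unlike SELECT DISTINCT instructor, date, the aggregate collapses each instructor's many dates down to the latest one.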
|
# ? Dec 17, 2007 06:08 |
|
1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"? 2) My coworker just told me of this technique I've never heard of before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?
|
# ? Dec 19, 2007 16:28 |
|
Triple Tech posted:1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"? This is a major pissing match everywhere you go. There are two camps; I like to call them "innies" and "outies". People who love ORM layers (in particular the RoR people) are big-time outies, and go as far as insisting you don't need things like foreign key constraints. Personally I think it depends on the app. The database we have at work is an application unto itself, so embedding the relationships of the data via constraints and triggers makes sense, because there is no guarantee that any particular application code will be involved when changes are made. quote:2) My coworker just told me of this technique I've never heard of before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item? INSERT ... SELECT ... is the normal way to do this. However, in general it's slower because you are doubling the number of inserts overall (which doubles the number of transactions that have to be recorded in the log). The reason you would want to do this is if you want to do some transformation of the data in bulk before the final insert. Oracle provides "global temporary tables" for this very thing, which are sort of like session-based table data structures.
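The staging-table pattern described above, end to end, as a SQLite sketch (table and column names invented; executemany stands in for whatever bulk loader your database provides, e.g. MySQL's LOAD DATA):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (name TEXT, email TEXT);

-- Staging table mirrors the target; the raw import lands here first.
CREATE TEMP TABLE people_staging (name TEXT, email TEXT);
""")

# Bulk-load the raw rows into the staging table.
raw = [('alice', 'ALICE@EXAMPLE.COM'), ('bob', 'BOB@EXAMPLE.COM')]
conn.executemany("INSERT INTO people_staging VALUES (?, ?)", raw)

# One set-based INSERT ... SELECT moves (and transforms) everything at once,
# instead of looping single-row INSERTs from application code.
conn.execute("""
    INSERT INTO people (name, email)
    SELECT name, LOWER(email) FROM people_staging
""")
conn.execute("DROP TABLE people_staging")

rows = conn.execute("SELECT name, email FROM people ORDER BY name").fetchall()
```

The LOWER() call is a stand-in for the "transformation in bulk" step the reply mentions as the main reason to bother with staging.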
|
# ? Dec 19, 2007 16:43 |
|
Maybe I'm misunderstanding something. Can INSERT insert multiple rows? So are you INSERTing the SELECTion of the temporary table into the main table?
|
# ? Dec 19, 2007 16:51 |
|
This is Oracle, so your database of choice may differ... but:code:
|
# ? Dec 19, 2007 17:00 |
|
|
|
Triple Tech posted:1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"? quote:2) My coworker just told me of this technique I've never heard before, so I wanted to know how common place it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?
|
# ? Dec 19, 2007 18:11 |