Opius
Dec 11, 2004

Go away
Thanks for the help with the voting query, but I have another question regarding grouping DATETIME fields.

I've got a table called 'views' that's laid out like so:
code:
INT content_id, DATETIME added
If I wanted to generate a report showing the most popular time of day for views, I can do something like this:
code:
SELECT
	HOUR(added) AS hour,MINUTE(added) AS minute,COUNT(content_id) AS count
FROM
	`views`
GROUP BY
	`hour`,`minute`
My question is: is there any way to group results in sets of 10 minutes, rather than single minutes, in the query?

Ideally, I'd like to be able to plot the results out like Steam does.

I've had a brief look at the INTERVAL function, but it's hard to find examples of how to use it in a GROUP BY query.


Heffer
May 1, 2003

Opius posted:

Thanks for the help with the voting query, but I have another question regarding grouping DATETIME fields.

I've got a table called 'views' that's laid out like so:
code:
INT content_id, DATETIME added
If I wanted to generate a report showing the most popular time of day for views, I can do something like this:
code:
SELECT
	HOUR(added) AS hour,MINUTE(added) AS minute,COUNT(content_id) AS count
FROM
	`views`
GROUP BY
	`hour`,`minute`
My question is: is there any way to group results in sets of 10 minutes, rather than single minutes, in the query?

Ideally, I'd like to be able to plot the results out like Steam does.

I've had a brief look at the INTERVAL function, but it's hard to find examples of how to use it in a GROUP BY query.

There are probably more elegant ways to do this, but I've done something similar by taking the Unix timestamp and dividing it by the unit of time in seconds. If the division doesn't give a whole integer, truncate it to an integer. Multiply it back by your unit of time, and you should have everything grouped up into 10-minute units.
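In MySQL that works out to something like this rough sketch (untested; it assumes the `views` table from the question and 600 seconds as the bucket size, and note it buckets absolute timestamps, so a pure time-of-day report would still need the dates folded together):
code:
SELECT
	-- divide by 600 s, truncate, multiply back: the start of each 10-minute bucket
	FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(added) / 600) * 600) AS bucket_start,
	COUNT(content_id) AS count
FROM
	`views`
GROUP BY
	bucket_start
ORDER BY
	bucket_start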

Turambar
Feb 20, 2001

A Túrin Turambar turun ambartanen
Grimey Drawer
You could also fill a table with the hour and minute intervals and do a simple join.


code:
Create table dbo.ReportTimeInterval
(
	HourStart int not null,
	MinuteStart int not null,
	HourEnd int not null,
	MinuteEnd int not null
)
GO
insert into dbo.ReportTimeInterval (HourStart, MinuteStart, HourEnd, MinuteEnd)
select 0,0,0,9 union all
select 0,10,0,19 union all
...
select 23,50,23,59
GO
SELECT
	dbo.ReportTimeInterval.HourStart,
	dbo.ReportTimeInterval.MinuteStart,
	dbo.ReportTimeInterval.HourEnd,
	dbo.ReportTimeInterval.MinuteEnd,
	COUNT(content_id) AS count
FROM
	[views]
	inner join dbo.ReportTimeInterval
		on dbo.ReportTimeInterval.HourStart <= datepart(hour, added)
		and dbo.ReportTimeInterval.MinuteStart <= datepart(minute, added)
		and dbo.ReportTimeInterval.HourEnd >= datepart(hour, added)
		and dbo.ReportTimeInterval.MinuteEnd >= datepart(minute, added)
GROUP BY
	dbo.ReportTimeInterval.HourStart,
	dbo.ReportTimeInterval.MinuteStart,
	dbo.ReportTimeInterval.HourEnd,
	dbo.ReportTimeInterval.MinuteEnd
This is untested code, BTW. But it should give you the general idea.

m5
Oct 1, 2001
In SQL Server you can do this:
code:
select foo, datepart(minute, timestamp) / 10
from whatever
group by datepart(minute, timestamp) / 10

Boogeyman
Sep 29, 2004

Boo, motherfucker.

Opius posted:

My question is: is there any way to group results in sets of 10 minutes, rather than single minutes, in the query?

Give this a shot...tried it against a similar database I have here and it seems to work fine.

code:
SELECT
  DATEPART(year, timestamp) AS [Year],
  DATEPART(month, timestamp) AS [Month],
  DATEPART(day, timestamp) AS [Day],
  DATEPART(hour, timestamp) AS Hour,
  CAST(DATEPART(minute, timestamp) / 10 AS INT) * 10 AS RoundMinute,
  COUNT(*) AS RecordCount
FROM
  SomeTable
WHERE
  timestamp >= '10/1/2007' AND
  timestamp < '11/1/2007'
GROUP BY
  DATEPART(year, timestamp),
  DATEPART(month, timestamp),
  DATEPART(day, timestamp),
  DATEPART(hour, timestamp),
  CAST(DATEPART(minute, timestamp) / 10 AS INT) * 10
ORDER BY
  DATEPART(year, timestamp),
  DATEPART(month, timestamp),
  DATEPART(day, timestamp),
  DATEPART(hour, timestamp),
  CAST(DATEPART(minute, timestamp) / 10 AS INT) * 10
You'll have to join the date parts back up for display purposes, but that shouldn't be a big deal.

foghorn
Oct 9, 2006

Haters gunna hate.
I've been banging my head against the wall over this one all night.

The following throws a T_ENCAPSED_STRING error; assume the variables are properly passed from the HTML form:

code:
$description = addslashes($description);

$query = "INSERT INTO nations (name, description, population, flag, active, language, mod, kind, password) VALUES ('".$name"', '".$description."', '".$population."', '".$flag."', ".$active.", '".$language."', '0', '".$type."', '".$password."')";
mysql_query($query) or die(mysql_error());
Everything else on that page is used elsewhere in the app somewhere, so I know it's not the problem.

Alex007
Jul 8, 2004

foghorn posted:

I've been banging my head against the wall over this one all night.

The following throws a T_ENCAPSED_STRING error; assume the variables are properly passed from the HTML form:

code:
$description = addslashes($description);

$query = "INSERT INTO nations (name, description, population, flag, active, language, mod, kind, password) VALUES ('".$name"', '".$description."', '".$population."', '".$flag."', ".$active.", '".$language."', '0', '".$type."', '".$password."')";
mysql_query($query) or die(mysql_error());
Everything else on that page is used elsewhere in the app somewhere, so I know it's not the problem.

You missed a period after $name:
code:
(...) VALUES ('".$name."', (...)

foghorn
Oct 9, 2006

Haters gunna hate.

Alex007 posted:

You missed a period after $name:
code:
(...) VALUES ('".$name."', (...)

Thanks. Guess I just needed a second pair of eyes!

Mashi
Aug 15, 2005

Just wanted you to know your dinner's cold and the children all agree you're a shitheel!
I have a properties table where I store data relating to a particular row in key/value pairs. It looks like this:

Table: person_properties
code:
  id, person_id, property_name, property_value
  1   45         name           joe
  2   45         hair_color     brown
  3   45         likes_tuna     1
  4   46         name           sally
  5   46         hair_color     blonde
  6   46         likes_tuna     0
So as you can see I'm just storing data pertaining to persons in a vertical rather than horizontal format, so that I can hold very different data without altering any tables.

My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way? The way I'm doing it right now is:

code:
SELECT p.*,
       (SELECT property_value
        FROM   person_properties pp
        WHERE  pp.person_id = p.id
               AND property_name = "name") AS name,
       (SELECT property_value
        FROM   person_properties pp
        WHERE  pp.person_id = p.id
               AND property_name = "hair_color") AS hair_color,
       (SELECT property_value
        FROM   person_properties pp
        WHERE  pp.person_id = p.id
               AND property_name = "likes_tuna") AS likes_tuna
FROM   people p
WHERE  id = 45
I'd like to do this using a single join but I can't find a legal way to dynamically set column names. So is there a way, or is this the best I am going to get?

Mashi fucked around with this message at 00:45 on Dec 12, 2007

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
That's what you get for choosing the Entity-Attribute-Value model, which many people consider to be a design anti-pattern because of its many drawbacks. You're either going to have to do a JOIN per column, or a sub-select like you're doing now.
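For reference, the join-per-column version against the tables above would look roughly like this (a sketch, one aliased join per property):
code:
SELECT p.*,
       n.property_value AS name,
       h.property_value AS hair_color,
       t.property_value AS likes_tuna
FROM   people p
LEFT JOIN person_properties n ON n.person_id = p.id AND n.property_name = "name"
LEFT JOIN person_properties h ON h.person_id = p.id AND h.property_name = "hair_color"
LEFT JOIN person_properties t ON t.person_id = p.id AND t.property_name = "likes_tuna"
WHERE  p.id = 45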

npe
Oct 15, 2004
You really need to ask yourself if the EAV model is necessary. I work professionally on a gigantic EAV-modeled database, and I'll tell you that while it is sometimes necessary due to application requirements, it's never easy to deal with. The headache you're dealing with now is only the beginning of your pain.

Mashi
Aug 15, 2005

Just wanted you to know your dinner's cold and the children all agree you're a shitheel!
Wow and there I was really proud of myself for thinking of such a great table structure :saddowns:.

Thanks for the heads up, I'll reconsider it.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
If you have a mysql database of hundreds of thousands of articles what is the best way to pull up the top 100 articles ordered by number of views?

I'm guessing something like this is pretty taxing on the db server since it does a full table scan?

SELECT id, title FROM articles ORDER BY views DESC LIMIT 100

How do you get around doing this? A temporary table that gets updated via a cron job or something?

Victor
Jun 18, 2004

Mashi posted:

My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way?
I just learned that MS SQL Server does a single execute per subquery, per row returned, in some cases. In my example, converting the subquery to a join (which might require a derived table) brought the run time down from 1m27s to 2s. Who knows what other RDBMSes do; you did not specify one. (*cough* rules *cough*)

Victor
Jun 18, 2004

fletcher posted:

If you have a mysql database of hundreds of thousands of articles what is the best way to pull up the top 100 articles ordered by number of views?

I'm guessing something like this is pretty taxing on the db server since it does a full table scan?
Not if you index the views column. Now, depending on how often the number of views is updated, this index might be a performance problem. Try it out!
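A minimal sketch of that, assuming MySQL and the articles/views names from your question:
code:
CREATE INDEX idx_articles_views ON articles (views);

-- with the index in place, the ORDER BY can walk the index instead of sorting the whole table
SELECT id, title FROM articles ORDER BY views DESC LIMIT 100;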

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Victor posted:

Not if you index the views column. Now, depending on how often the number of views is updated, this index might be a performance problem. Try it out!

So does that mean the views column shouldn't be updated every time a unique user views the article?

I guess this also means that I don't really understand what an indexed column is. Is there an article that will help me out?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

fletcher posted:

So does that mean the views column shouldn't be updated every time a unique user views the article?

I guess this also means that I don't really understand what an indexed column is. Is there an article that will help me out?
An indexed column is simply one that has an index attached to it. An index is a data structure internal to the database that's automatically read and updated by the database (and not by you) to help the database quickly locate rows when you search that column. The advantage is better performance when you're searching that column, and it's especially useful when there are thousands of rows involved. It can bring report times down from minutes to mere seconds. The downside is that there's a performance overhead involved in updating the index whenever you insert, update or delete values in that column.

So there's a trade-off. If your "views" column is being updated very very frequently (like, you're slashdot or something), then adding an index to that column will slow down your DB a little bit, but the upside is that your report to get the top 100 articles sorted by view will be very quick. You'd have to have a pretty heavy-traffic site to notice any slowdown though.

In the event that you did get unacceptable slowdown, the solution would be to sacrifice accuracy for speed. Either:
- Have two "views" columns, one indexed and one unindexed. Keep the unindexed one up-to-date and accurate, and periodically (via a cronjob) copy over this value to the indexed column (see the sketch below).
- Have one unindexed "views" column, but cache the results of the top 100 articles so that you don't see the performance hit every time.
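A rough sketch of that first option in MySQL (the views_for_report column and index names are made up):
code:
-- keep the accurate counter unindexed; maintain an indexed copy purely for reporting
ALTER TABLE articles ADD COLUMN views_for_report INT NOT NULL DEFAULT 0;
CREATE INDEX idx_articles_views_for_report ON articles (views_for_report);

-- run this periodically from the cron job
UPDATE articles SET views_for_report = views;

-- reports then sort on the indexed copy
SELECT id, title FROM articles ORDER BY views_for_report DESC LIMIT 100;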

Dakha
Feb 18, 2002

Fun Shoe
This is probably really easy to fix, but I'm trying to select a list of material numbers and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the German description, unless it doesn't exist, in which case I want the English description.

I'm currently using:

SELECT material_number, material_text
FROM materials
INNER JOIN mat_desc
ON materials.material_number = mat_desc.material_number
GROUP BY material_number, material_text;

This only works because language = 'DE' is ordered first by default, and 'EN' is ordered second. This seems pretty dodgy though. Is there a proper way of doing this?

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!
CoC SQL gods, I have a question: is there a way to apply an EAV-like schema without the SQL heresy that is EAV? I have a client asking for what is basically a product database that can be compared; the problem is that there are many different products that I want to be able to define dynamically (and eventually offload this to their administrators). It's going to be used to track price and feature changes across telecommunication providers (this is for pricing and mostly marketing/decision-making reasons, and it's not going to be outside facing, so performance is not so critical (few clients, some patience)). Edit: The comparisons are going to be only between the same products from different companies.

So basically I need to define my data models dynamically. I was thinking of using this sort of modeling.

An example of my Data Definition (only products themselves need to be dynamically defined):
Product1 (Voice lines) (edit: there was a rogue paste here, gently caress my sensitive touchpad)
- Company - this is really metadata
- Contract Length - same
- Actual Price
- Extra Line Price
- Facility MRC
- SLC Price
- Regulatory fees
- Total Fees
- Total Facility MRC plus Fees
- Notes

Product2 (Long Distance Plans)
- Company - this is really metadata
- Contract Length - same
- Local Calls
- LD US
- LD Int
- TF Services
- CCs
- Timeframe
- Per Minute Price
- Notes

An example of atoms:
Company 1's Product1:
- Company: Company1
- Contract Length: 0yr contract
- Actual Price: $350
- ELP: $310
- SLCP: $660
- RF: $30
- TF: $440
- TFPF: $900
- Notes: Does not include feature X or Y

Company 1's Product2:
- Company: Company1
- Contract Length: 0yr
- Local Calls: True
- LD US: True
- LD Int: True
- TF Services: True
- CCs: True
- Timeframe: 24hr (fractioned 30/6)
- Per Minute Price: $.05/min
- Notes: <blank>


My guess is that SQL is just not designed for this and it's better to use something like an RDF datastore for this, and if this is the case, does anyone have any resources for learning to use RDF?

deimos fucked around with this message at 20:14 on Dec 12, 2007

m5
Oct 1, 2001

deimos posted:

CoC SQL gods, I have a question: is there a way to apply an EAV-like schema without the SQL heresy that is EAV? I have a client asking for what is basically a product database that can be compared; the problem is that there are many different products that I want to be able to define dynamically (and eventually offload this to their administrators). It's going to be used to track price and feature changes across telecommunication providers (this is for pricing and mostly marketing/decision-making reasons, and it's not going to be outside facing, so performance is not so critical (few clients, some patience)).

So basically I need to define my data models dynamically. I was thinking of using this sort of modeling.

An example of my Data Definition (only products themselves need to be dynamically defined):
Product1 (Voice lines)
- Company - this is really metadata
- Contract Length - same
- Actual Price
- Extra Line Price
- Facility MRC
- SLC Price
- Regulatory fees
- Total Fees
- Total Facility MRC plus Fees
- Notes

Product2 (Long Distance Plans)
- Company - this is really metadata
- Contract Length - same
- Local Calls
- LD US
- LD Int
- TF Services
- CCs
- Timeframe
- Per Minute Price
- Notes

An example of atoms:
Company 1's Product1:
- Company: Company1
- Contract Length: 0yr contract
- Actual Price: $350
- ELP: $310
- SLCP: $660
- RF: $30
- TF: $440
- TFPF: $900
- Notes: Does not include feature X or Y

Company 1's Product2:
- Company: Company1
- Contract Length: 0yr
- Local Calls: True
- LD US: True
- LD Int: True
- TF Services: True
- CCs: True
- Timeframe: 24hr (fractioned 30/6)
- Per Minute Price: $.05/min
- Notes: <blank>


My guess is that SQL is just not designed for this and it's better to use something like an RDF datastore for this, and if this is the case, does anyone have any resources for learning to use RDF?

Come up with a generalized model. If you're going to be able to implement some sort of comparison, you'll be doing that anyway.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

m5 posted:

Come up with a generalized model. If you're going to be able to implement some sort of comparison, you'll be doing that anyway.

Gah, forgot to mention: the comparison is only between companies on the same products. The problem with a generalized model is that the products are very different; some might have 35 different attributes (this is pretty much the case with wireless plans) and might grow dynamically (wireless plans are constantly adding features, think of Alltel's favorites and other stuff).

m5
Oct 1, 2001

deimos posted:

Gah, forgot to mention: the comparison is only between companies on the same products. The problem with a generalized model is that the products are very different; some might have 35 different attributes (this is pretty much the case with wireless plans) and might grow dynamically (wireless plans are constantly adding features, think of Alltel's favorites and other stuff).

Well, then in that case I'd pack the most "stable" attributes into a primary table. For some of the others, instead of a single lump of name/value pairs, maybe there could be separate tables for different sorts of optional attributes. You then left-join to those tables from the main one. Maybe.
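Something like this rough sketch, just to show the shape of it (PostgreSQL-flavored; all table and column names are made up):
code:
-- stable attributes every product shares
create table product (
    product_id      int primary key,
    company         varchar(64) not null,
    product_type    varchar(32) not null,  -- 'voice_line', 'long_distance', ...
    contract_length varchar(16),
    notes           text
);

-- one optional table per family of attributes
create table product_voice_line (
    product_id       int primary key references product (product_id),
    actual_price     decimal(10,2),
    extra_line_price decimal(10,2),
    total_fees       decimal(10,2)
);

create table product_long_distance (
    product_id       int primary key references product (product_id),
    local_calls      boolean,
    ld_us            boolean,
    per_minute_price decimal(10,4)
);

-- reports left-join from the main table to whichever optional tables apply
select p.*, v.actual_price, ld.per_minute_price
from product p
left join product_voice_line    v  on v.product_id  = p.product_id
left join product_long_distance ld on ld.product_id = p.product_id;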

Victor
Jun 18, 2004
On EAV: if you have SQL2005, it becomes a lot prettier with PIVOT functionality. A lot prettier. Dunno about how badly it performs...
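For instance, against the person_properties table from earlier in the thread, a SQL Server 2005 PIVOT would look roughly like this untested sketch:
code:
SELECT person_id, [name], [hair_color], [likes_tuna]
FROM   (SELECT person_id, property_name, property_value
        FROM   person_properties) AS src
PIVOT  (MAX(property_value)
        FOR property_name IN ([name], [hair_color], [likes_tuna])) AS pvt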

Dakha posted:

This is probably really easy to fix, but I'm trying to select a list of material numbers and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the German description, unless it doesn't exist, in which case I want the English description.

I'm currently using:

SELECT material_number, material_text
FROM materials
INNER JOIN mat_desc
ON materials.material_number = mat_desc.material_number
GROUP BY material_number, material_text;

This only works because language = 'DE' is ordered first by default, and 'EN' is ordered second. This seems pretty dodgy though. Is there a proper way of doing this?
code:
select  m.material_number,
        coalesce(de.material_text, en.material_text)
from    materials m
left join mat_desc de on de.material_number = m.material_number and de.language = 'DE'
inner join mat_desc en on en.material_number = m.material_number and en.language = 'EN'
This is if you can ensure the EN version exists. Otherwise, you need two left joins and some business logic for what to do when both joins fail.
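The two-left-join variant would be roughly this (sketch; what to return when both languages are missing is left as business logic):
code:
select  m.material_number,
        coalesce(de.material_text, en.material_text) as material_text
from    materials m
left join mat_desc de on de.material_number = m.material_number and de.language = 'DE'
left join mat_desc en on en.material_number = m.material_number and en.language = 'EN'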

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Dakha posted:

This is probably really easy to fix, but I'm trying to select a list of material numbers and their corresponding text descriptions. However, the table with the text descriptions has several entries for each material number, for different languages. I want to get the German description, unless it doesn't exist, in which case I want the English description.

code:
SELECT materials.material_number
    , (SELECT material_text
       FROM mat_desc
       WHERE mat_desc.material_number = materials.material_number
       ORDER BY (CASE WHEN mat_desc.language = "DE" THEN "00000"
                      ELSE mat_desc.language
                 END)
       LIMIT 1
       )
FROM materials
ORDER BY material_number;
This will select all materials, and for each material do a sub-select on the language. The CASE statement will order it so that the DE language comes first (assuming "00000" is considered to be before any existing language name).

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

Victor posted:

On EAV: if you have SQL2005, it becomes a lot prettier with PIVOT functionality. A lot prettier. Dunno about how badly it performs...

Yeah postgresql has the tablefuncs contrib for this, I was just wondering if anyone had a better idea. I am guessing I will make a product table and just foreign key an EAV-based table to it.

Another question, do I separate the "design/definition" data and the actual data or do I put them in the same table with some sort of flag?

Since I don't think I'll ever have to do anything with the data other than load and save (in terms of RDBMS interaction, because I won't ever have to search it) I could even make it a blob and just use XML/JSON to store it (yes, this is relational heresy, I know, but I don't care).

Victor
Jun 18, 2004

deimos posted:

Yeah postgresql has the tablefuncs contrib for this, I was just wondering if anyone had a better idea. I am guessing I will make a product table and just foreign key an EAV-based table to it.
It really depends on your software. If you go the table inheritance approach, then it's possible that running DDL is part of standard application operation -- that's always viewed with extreme amounts of suspicion. However, just like any powerful functionality, there are ways to use it that will probably be OK.

quote:

Another question, do I separate the "design/definition" data and the actual data or do I put them in the same table with some sort of flag?
I would separate them. Otherwise, whenever you query for one type of data you have to filter out the other. That's just ugly. Not to mention that if you want to add a column to one set of data but not the other, you'd be screwed.

quote:

Since I don't think I'll ever have to do anything with the data other than load and save (in terms of RDBMS interaction, because I won't ever have to search it) I could even make it a blob and just use XML/JSON to store it (yes, this is relational heresy, I know, but I don't care).
If you never need the database to know more than that it's a blob of data, it's perfectly acceptable to store that blob of data in the DB as a blob. C.J. Date writes in Databases in Depth that one could have columns (he calls them attributes) that store tables (he calls them relations). Blobs become a problem when you start running, say, XPath, in the DB, against XML data. That's when things start getting retarded. Then again, SQL Server appears to be heading towards making that easier; I wouldn't be surprised if they were to allow creating indexes on certain XPath expressions or something. The important thing is to document what's in the blob/xml. That way, if anyone ever has to screw with this data, they will fall in love with you instead of wanting to murder you in the coldest of blood.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

Victor posted:

It really depends on your software. If you go the table inheritance approach, then it's possible that running DDL is part of standard application operation -- that's always viewed with extreme amounts of suspicion. However, just like any powerful functionality, there are ways to use it that will probably be OK.

Woop, I was looking up on postgres' tablefuncs and found: http://developer.postgresql.org/pgdocs/postgres/hstore.html

This is sweetness, you can even create indexes on the data, as perverse as that is.

They also have an XML contrib to run xpath through the data (the same thing DB2 has) if I decide to go the XML way.

edit: django + postgreSQL + hstore it is, onward with the heresy (it's actually fairly easy to integrate into django because hstore's functions return something that can be eval()ed into a python datastructure :black101: )! :can:
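For anyone curious, a minimal hstore sketch (table, column, and key names are made up; on these older releases hstore is installed from contrib):
code:
-- one row per company/product, with the variable attributes in an hstore column
create table product_atom (
    id      serial primary key,
    company text not null,
    product text not null,
    attrs   hstore
);

-- this is the "indexes on the data" part
create index product_atom_attrs_idx on product_atom using gist (attrs);

insert into product_atom (company, product, attrs)
values ('Company1', 'Voice lines',
        'contract_length => 0yr, actual_price => 350, notes => "Does not include feature X or Y"');

-- the -> operator pulls a single value back out
select company, attrs -> 'actual_price' as actual_price
from   product_atom
where  attrs -> 'contract_length' = '0yr';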

deimos fucked around with this message at 22:19 on Dec 12, 2007

Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
Come join my insanity.

code:
mysql> SELECT t.Name FROM tblstuff t WHERE MATCH (t.Name,t.BriefDescription,t.Background,t.Summary,t.Status,t.Evaluation) AGAINST ("service") AND t.ID = "55";
+-----------------------------------------------------+
| Name                                                |
+-----------------------------------------------------+
| Operation and Maintenance of Video Services II      |
+-----------------------------------------------------+
1 row in set (0.00 sec)
code:
mysql> SELECT t.Name FROM tblstuff t WHERE MATCH (t.Name,t.BriefDescription,t.Background,t.Summary,t.Status,t.Evaluation) AGAINST ("services") AND t.ID = "55";
Empty set (0.00 sec)
why? why? I just chose a single result to focus on since it obviously has "Services" RIGHT THERE. When I search for 'service' I get ~4000 results. When I search for 'services', I get zero. Any ideas?

Edit: Just realized, more than 50% of the results probably contain "services". In that case, is there any way to skip the 50% limit? I suppose I could change it to boolean...
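If it is the 50% thing, a boolean-mode version of the same query (boolean mode ignores that threshold) would be roughly:
code:
SELECT t.Name FROM tblstuff t WHERE MATCH (t.Name,t.BriefDescription,t.Background,t.Summary,t.Status,t.Evaluation) AGAINST ('+services' IN BOOLEAN MODE) AND t.ID = "55";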

Golbez fucked around with this message at 01:25 on Dec 15, 2007

Mashi
Aug 15, 2005

Just wanted you to know your dinner's cold and the children all agree you're a shitheel!
Questions moved to own thread.

Mashi fucked around with this message at 18:48 on Dec 16, 2007

benisntfunny
Dec 2, 2004
I'm Perfect.

Mashi posted:

My question is: What is the cleanest / most efficient way to retrieve this data as if it were stored the regular horizontal way? The way I'm doing it right now is:

code:
SELECT p.*,
       (SELECT property_value
        FROM   person_properties pp
        WHERE  pp.person_id = p.id
               AND property_name = "name") AS name,
       (SELECT property_value
        FROM   person_properties pp
        WHERE  pp.person_id = p.id
               AND property_name = "hair_color") AS hair_color,
       (SELECT property_value
        FROM   person_properties pp
        WHERE  pp.person_id = p.id
               AND property_name = "likes_tuna") AS likes_tuna
FROM   people p
WHERE  id = 45
I'd like to do this using a single join but I can't find a legal way to dynamically set column names. So is there a way, or is this the best I am going to get?

I can think of ways to do that with dynamic SQL, but what's the point? It will cost just as much and probably make your code longer. With the way your table is set up (which I'd hate) you're pretty much doing 3 scans any way you slice it. This type of data should have been pre-determined and stored horizontally... or in XML... but you probably already know that...

Tap
Apr 12, 2003

Note to self: Do not shove foreign objects in mouth.
I'm wondering if there's a better way to use mysql console in windows. The command prompt window consistently screws up my output of tables because I can't widen the window (unless I'm a dumbass and don't realize there's a way to widen the window).

Edit: Nevermind, I am a dumbass.

Right click -> properties -> layout tab -> Window Size

Tap fucked around with this message at 07:30 on Dec 16, 2007

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Tap posted:

I'm wondering if there's a better way to use mysql console in windows. The command prompt window consistently screws up my output of tables because I can't widen the window (unless I'm a dumbass and don't realize there's a way to widen the window).

Edit: Nevermind, I am a dumbass.

Right click -> properties -> layout tab -> Window Size

The mysql admin client for windows on their website is pretty decent. There's also a replacement console for windows (called console2 I think, I don't have it on the machine I am on right now) which is pretty cool, and allows for tabbed console windows and stuff. I am pretty sure it is also resizeable.

edit: added links

http://dev.mysql.com/downloads/gui-tools/5.0.html

http://sourceforge.net/projects/console/

I do prefer phpMyAdmin over the other choices though

fletcher fucked around with this message at 10:08 on Dec 16, 2007

Tap
Apr 12, 2003

Note to self: Do not shove foreign objects in mouth.

fletcher posted:

The mysql admin client for windows on their website is pretty decent. There's also a replacement console for windows (called console2 I think, I don't have it on the machine I am on right now) which is pretty cool, and allows for tabbed console windows and stuff. I am pretty sure it is also resizeable.

This doesn't get installed with wampserver does it?

Edit: Nevermind.

Tap fucked around with this message at 09:30 on Dec 16, 2007

nbv4
Aug 21, 2002

by Duchess Gummybuns
code:
id   date    instructor  
----------------------------
1    12/7            Bob      
2    12/7            Bob
3    12/8            Tom
4    12/8            Harry
5    12/9            Bob
6    12/10           Tom
7    12/10           William
8    12/11           Tom
9    12/12           Fred
10   12/13           Bob
How can I form a query that returns a list of each instructor and the latest date associated with that instructor?

In other words, I want this to be returned:
code:
   date    instructor  
----------------------------
    12/8            Harry
    12/10           William
    12/11           Tom
    12/12           Fred
    12/13           Bob
each instructor is returned ONCE, along with the latest date associated with that instructor. The closest I can come up with is:

SELECT DISTINCT `instructor`, `date`
FROM `flights`
WHERE `pilot_id` = '50'
ORDER BY `date` DESC
LIMIT 12

but that's not quite there. Any help?

noonches
Dec 5, 2005

It represents my lifestyle and stratus as a street savy irreverent youth, who lives large, and yet hungers for the next level in life.
SELECT instructor, max(date) FROM flights WHERE pilot_id = '50' GROUP BY instructor

Triple Tech
Jul 28, 2006

So what, are you quitting to join Homo Explosion?
1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"?

2) My coworker just told me of this technique I've never heard of before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?

npe
Oct 15, 2004

Triple Tech posted:

1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"?

This is a major pissing match everywhere you go. There are two camps; I like to call them "innies" and "outies". People who love ORM layers (in particular the RoR people) are big-time outies, and go as far as insisting you don't need things like foreign key constraints.

Personally I think it depends on the app. The database we have at work is an application into itself, so embedding the relationships of data via constraints and triggers makes sense, because there is no guarantee that any particular application code will be involved when changes are made.

quote:

2) My coworker just told me of this technique I've never heard of before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?

INSERT ... SELECT ... is the normal way to do this. However, in general it's slower because you are doubling the number of inserts overall (and doubling the amount of work that has to be recorded in the transaction log). The reason you would want to do this is if you want to do some transformation of the data in bulk before you do the final insert. Oracle provides "global temporary tables" for this very thing, which are sort of like session-scoped table data structures.

Triple Tech
Jul 28, 2006

So what, are you quitting to join Homo Explosion?
Maybe I'm misunderstanding something. Can INSERT insert multiple rows? So are you INSERTing the SELECTion of the temporary table into the main table?

npe
Oct 15, 2004
This is Oracle, so your database of choice may differ... but:

code:
SQL> describe temp_table;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FOO                                                NUMBER(5)
 BAR                                                VARCHAR2(16)

SQL> describe real_table;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 FOO                                                NUMBER(5)
 BAR                                                VARCHAR2(16)

SQL> select * from temp_table;

       FOO BAR
---------- ----------------
         1 row one
         2 row two

SQL> insert into real_table (foo, bar) select foo, bar from temp_table;

2 rows created.


Victor
Jun 18, 2004

Triple Tech posted:

1) When should business logic be written at the database level versus the programming language level? Or is it something like "always in the DB unless you can't"?
If you need to be able to run insert/update queries directly against the database that run through your business logic, you pretty much have to include the business logic in the DB. However, many people make web services so that you write to the DB exclusively through the web service. The problem is one of scalability: computing all the business rules in the DB can get expensive. Scaling databases that aren't read-only is hard. Moreover, it can be better to express business logic with non-SQL syntax. My company puts almost all of the business logic in the DB because we can and don't run into scalability issues. That way, our application that talks to the databases of different clients (with vastly different schemas) doesn't need to have a clue about the business rules.

quote:

2) My coworker just told me of this technique I've never heard of before, so I wanted to know how commonplace it is. When importing a set of data, like many rows, you create a temporary table that's the same as the table you want to insert the stuff into, then you use some crazy SQL command to import the stuff en masse, and then delete the temporary table. True? What crazy command is this? And you wouldn't just loop over the stuff and INSERT each row/item?
This technique can be extremely performant (yes, that's becoming a word, deal with it grammar Nazis). I can bulk insert tens of thousands of rows per second into a single-socket, quad-core SQL Server. If we had a gigabit network, I wouldn't be surprised if I could get it up to hundreds of thousands of rows per second. In contrast, inserting rows one by one is much slower.
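For the curious, a rough SQL Server sketch of that staging-table pattern (file path, table, and column names are all made up):
code:
-- load the raw file into a staging table in one bulk operation
BULK INSERT dbo.staging_rows
FROM 'C:\imports\rows.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- clean up in bulk, then push into the real table as a single set-based insert
INSERT INTO dbo.real_rows (foo, bar)
SELECT foo, LTRIM(RTRIM(bar))
FROM dbo.staging_rows;

-- throw the staging table away when done
DROP TABLE dbo.staging_rows;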
