|
fletcher posted:What does the INNER do? What happens if you take it out? The inner join is a type of join, and as far as I know it's the default one used when you omit the INNER. It matches rows in the first table against rows in the second wherever the join condition holds. You would use other kinds of joins for the cases inner joins don't cover--for instance, if you had a table of books whose foreign key to the authors table for some reason points to a nonexistent row, you could use an outer join to list each book next to either its author's name or NULL if the author doesn't exist. An inner join would miss this case; there's an example in that link I gave you.
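A sketch of the difference, using a hypothetical books/authors schema:

```sql
-- INNER JOIN: only books whose author_id matches an existing author row
SELECT b.title, a.name
FROM books b
INNER JOIN authors a ON a.id = b.author_id;

-- LEFT OUTER JOIN: every book; name comes back NULL when the author row is missing
SELECT b.title, a.name
FROM books b
LEFT JOIN authors a ON a.id = b.author_id;
```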
|
# ? Nov 21, 2007 13:08 |
|
|
|
fletcher posted:What does the INNER do? What happens if you take it out? The INNER JOIN will select all rows from both tables as long as there is a match between the columns you are matching on. If you take it out and just say JOIN, it defaults to an INNER JOIN. I may have misread what you're trying to do, but maybe you should try a RIGHT JOIN. noonches fucked around with this message at 14:51 on Nov 21, 2007 |
# ? Nov 21, 2007 14:48 |
|
Join illustrations.
|
# ? Nov 21, 2007 15:47 |
|
My company's been running MySQL 4 and PHP 4, mainly because, well, it worked. Now we're testing a new server, and think it's time to upgrade both to version 5. However, when I migrated the MySQL 4 databases to the MySQL 5 test server, while some straight lookups work fine, some queries are killing the server. A search that took 1.2 seconds on the old server, I have to kill after it's run for 600 seconds on the new one. I'm guessing it has to do with it having LIKEs or JOINs or some crap that 5 doesn't like, or at least in an order that 5 doesn't like, but this brings up my question: Is there a list of changes from MySQL 4 to MySQL 5 that a programmer should know about? Most of what I've seen is about the features, but what about coding quality? Obviously not all of the commands are handled identically to how they were before. I've googled all I can but I have yet to find anything about performance hits with certain queries or whatnot.
|
# ? Nov 23, 2007 09:29 |
|
I don't know about the queries not running; it's best to just see what the SQL error messages tell you and act on that. But as for the slow queries, have you created indices everywhere they used to be? Because if, say, your foreign keys aren't indexed, the DBMS will be doing full table scans on joins.
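For instance (table and column names hypothetical), indexing a foreign key that gets joined on:

```sql
-- Without this, joining on orders.customer_id forces a full scan of orders
CREATE INDEX idx_orders_customer ON orders (customer_id);
```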
|
# ? Nov 23, 2007 10:56 |
How taxing is something like a SELECT COUNT(id) FROM messages WHERE owner = me if there are hundreds of thousands of entries in the messages table? Is it better to just keep a count in the users table and update that?
fletcher fucked around with this message at 23:19 on Nov 23, 2007 |
|
# ? Nov 23, 2007 23:11 |
|
If you have an index on owner, it's not very expensive. However, that makes it more expensive to insert/remove rows from your messages table. Whether this actually matters depends on details you have not provided.
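A sketch, assuming the messages table from the question:

```sql
-- The index lets COUNT walk just the matching index range
-- instead of scanning the whole table
CREATE INDEX idx_messages_owner ON messages (owner);

SELECT COUNT(id) FROM messages WHERE owner = 42;  -- 42 is a hypothetical user id
```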
|
# ? Nov 24, 2007 01:10 |
|
I'm extracting data from an Oracle database using SQL*Plus. One of the tables I need to get has several columns such as '/AB/CDE' and '/AB/CDF' etc. However, when I attempt to execute: SELECT KONP./AB/CDE, KONP./AB/CDF FROM KONP WHERE.... it doesn't recognise the columns. Nor does: SELECT /AB/CDE, /AB/CDF FROM KONP WHERE.... Is this an escape character thing? I thought the escape character was backslash, not slash. What should I do so that SQL*Plus can get the columns I need?
|
# ? Nov 26, 2007 12:32 |
|
Dakha posted:I'm extracting data from an oracle database using SQL+. One of the tables I need to get has several columns such as '/AB/CDE' and '/AB/CDF' etc. Square brackets maybe? I'm guessing because I'm not sure what you're saying - are those slashes supposed to mean something or are they part of the column names and not otherwise interesting? (If so then wtf) So maybe code:
|
# ? Nov 26, 2007 13:15 |
|
m5 posted:Square brackets maybe? I'm guessing because I'm not sure what you're saying - are those slashes supposed to mean something or are they part of the column names and not otherwise interesting? (If so then wtf) Yeah, they are just really stupidly named columns. The columns are in several tables I'm joining, so do I use: KONP.[/AB/CDE] or [KONP./AB/CDE] ?
|
# ? Nov 26, 2007 13:28 |
|
Use double quotes, just beware that when doing this Oracle suddenly gets case sensitive: code:
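Something like this, using the column names from the question (the quoted names must match the case stored in Oracle's data dictionary, usually upper case):

```sql
SELECT KONP."/AB/CDE", KONP."/AB/CDF"
FROM KONP
WHERE ...  -- your conditions here
```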
|
# ? Nov 26, 2007 14:04 |
|
I'm trying to use Microsoft Excel to build a script to insert spreadsheet data (and not the headers) into an MS SQL database, unfortunately I don't know jack about (read: forgot the first time I was told) the syntax involved with creating a string with Excel. Help?
|
# ? Nov 27, 2007 01:29 |
|
Dakha posted:Yeah, they are just really stupidly named columns. The columns are in several tables im joining, so do I use:
|
# ? Nov 27, 2007 03:41 |
|
uh zip zoom posted:I'm trying to use Microsoft Excel to build a script to insert spreadsheet data (and not the headers) into an MS SQL database, unfortunately I don't know jack about (read: forgot the first time I was told) the syntax involved with creating a string with Excel. Help? There's an import utility for SQL 2005 that you can have generate the SQL syntax. It includes the ability to skip column headers and has a GUI for things like mapping columns. You could easily make it a SQL Agent job as long as your Excel file is in a predictable place with a predictable name. Alternatively, you could make a stored procedure that takes the file name/location as a parameter.
|
# ? Nov 27, 2007 07:08 |
|
I might make a thread about sweet SQL2005 features at some point, but I just wanted to get this out there: using cross apply/outer apply allows you to join a table-valued function to a table, using values from the table as input to the UDF. That's useful for, among other things, normalizing data with a CLR Regex Split function. I learned about this tonight, after getting pissed that there was no way to link a table-valued UDF to a table, since relations can't refer to each other outside of the join condition. SQL2005 to the rescue!
|
# ? Nov 27, 2007 07:14 |
|
Victor posted:I might make a thread about sweet SQL2005 features at some point, but I just wanted to get this out there: using cross apply/outer apply allows you to join a table-valued function to a table, using values from the table as input to the UDF. That's useful for, among other things, normalizing data with a CLR Regex Split function. I learned about this tonight, after getting pissed that there was no way to link a table-valued UDF to a table, since relations can't refer to each other outside of the join condition. SQL2005 to the rescue! Can you elaborate on this or maybe give an example? I work with SQL 2005 a lot and I have no clue what you're talking about.
|
# ? Nov 27, 2007 07:17 |
|
Let's say I have deployed a table-valued function called dbo.RegexSplit that returns a table containing the column value. Let's say I have a table that is (foo int not null, list_of_bars varchar(100) not null). I want to normalize that to (foo int not null, bar int not null). First, I tried code:
code:
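A sketch of the working CROSS APPLY version (the source table name and the RegexSplit parameters are assumptions):

```sql
-- One output row per delimited item in list_of_bars.
-- dbo.RegexSplit is the CLR table-valued UDF from the post, assumed to
-- take (input, pattern) and return a table with a single column "value".
SELECT t.foo, s.value AS bar
FROM dbo.foo_lists AS t
CROSS APPLY dbo.RegexSplit(t.list_of_bars, ',') AS s;
```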
|
# ? Nov 27, 2007 07:37 |
|
I figured out my earlier problem with MySQL 5 being slow as hell - my lovely query that I designed three years ago has more parentheses and joins in it than anything should have. But while redesigning it, I had this join question: Right now, it is (simplified) like this: code:
Again, this is vastly simplified, but my point is, would moving to commas give me the exact same results? Also, should the above be reordered, putting the Dept/Agency join at the end, or using parentheses to separate them out from the rest of the joins? This is what I would see the above being in comma form: code:
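For reference, the two forms side by side (hypothetical names); they produce the same inner-join result:

```sql
-- Explicit JOIN syntax
SELECT e.*, d.DepartmentName
FROM Entries e
INNER JOIN Departments d ON d.DepartmentID = e.DepartmentID;

-- Comma form: same rows, with the join condition moved into WHERE
SELECT e.*, d.DepartmentName
FROM Entries e, Departments d
WHERE d.DepartmentID = e.DepartmentID;
```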
|
# ? Nov 27, 2007 13:14 |
|
Commas are syntactically equivalent to inner join syntax, so don't get hung up on it. Have you tried running EXPLAIN on your query to see where the cost is? Complicated looking queries are not necessarily problematic, you'll want to learn to trust EXPLAIN.
|
# ? Nov 27, 2007 13:22 |
|
Satan's Scallion posted:Commas are syntactically equivalent to inner join syntax, so don't get hung up on it. Have you tried running EXPLAIN on your query to see where the cost is? Using indexes (one of the joined tables had somehow never been indexed before) cut the execution time from interminable (10+ minutes) down to 12 seconds. Getting rid of some of the joins, however, cut it down to instantaneous, so I was trying to figure out how best to clean up the web of joins. EXPLAIN is helping a lot, but I've done all I can with indexing, now I need to clean up the query itself. If they're equivalent to an inner join, they should only be used when there's positively going to be a DepartmentID and PhaseID in Entries, right? (I apologize if some of this sounds amateur; this system has let me coast by on rather simple queries for a while, so now it's hitting me with stuff I haven't really had to think about in years) And I actually just discovered EXPLAIN last week - it's the best thing in my life since sex.
|
# ? Nov 27, 2007 15:05 |
|
With left joins like that, what helps is making sure you get your qualifying (filtering, in other words) criteria into the JOIN expressions. Join in an order such that your most restrictive conditions are evaluated as early as possible.
|
# ? Nov 27, 2007 15:11 |
|
m5 posted:With left joins like that, what helps is making sure you get your qualifying (filtering, in other words) criteria into the JOIN expressions. Join in an order such that your most restrictive conditions are evaluated as early as possible. I'm not sure what you mean by getting the qualifying criteria into the join expressions... In this case, this join is simply to fill out the info on getting the actual name of the Department instead of the Department ID, etc... Now, sometimes the WHERE will be only on Entries, but sometimes it will be on the DepartmentID, but even then, it's still searching for the DepartmentID in Entries, using the one in Departments only to join the DepartmentName.
|
# ? Nov 27, 2007 15:20 |
|
Golbez posted:I'm not sure what you mean by getting the qualifying criteria into the join expressions... In this case, this join is simply to fill out the info on getting the actual name of the Department instead of the Department ID, etc... Now, sometimes the WHERE will be only on Entries, but sometimes it will be on the DepartmentID, but even then, it's still searching for the DepartmentID in Entries, using the one in Departments only to join the DepartmentName. I mean that if you're filtering on code:
code:
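A sketch of the distinction (hypothetical names): with a LEFT JOIN, a filter in the ON clause and the same filter in the WHERE clause do different things:

```sql
-- Filter inside the JOIN: every entry is kept; the department columns
-- are simply NULL when the condition doesn't match
SELECT e.*, d.DepartmentName
FROM Entries e
LEFT JOIN Departments d
       ON d.DepartmentID = e.DepartmentID
      AND d.DepartmentName = 'Finance';

-- Filter in the WHERE: the NULLed rows are thrown away, silently turning
-- the LEFT JOIN back into an inner join
SELECT e.*, d.DepartmentName
FROM Entries e
LEFT JOIN Departments d ON d.DepartmentID = e.DepartmentID
WHERE d.DepartmentName = 'Finance';
```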
m5 fucked around with this message at 16:16 on Nov 27, 2007 |
# ? Nov 27, 2007 16:14 |
|
Something tells me we should write up a tutorial on joins or create a FAQ entry. I've made the point m5 notes above before -- sometimes you just want a left/right join to fail but don't want the entire row eliminated from the result set.
|
# ? Nov 27, 2007 16:48 |
|
Glad to see this thread, and I hope I can contribute something useful here and there. And now, my plea for sympathy: I support a vertical market software package that is based on multiple Microsoft Access databases accessed via Jet 3.0 and programmed in Visual Basic 6. The amount of bullshit I have to put up with on a daily basis keeping up the slices of swiss cheese that are my clients' databases makes me want to On the plus side, I have learned a great deal about SQL in a short period of time and within the next year we will be migrating to a SQL Server 2005 Express platform. Edit: Golbez, I have a great deal of problems due to Jet 3 being ridiculously inefficient with anything beyond a simple SELECT...especially over a network. Can I assume that using indexed fields in my queries will speed that up? I do notice that JOINs are almost always faster than a nested query... Powdered Toast Man fucked around with this message at 19:21 on Nov 27, 2007 |
# ? Nov 27, 2007 19:19 |
|
Powdered Toast Man posted:Edit: Golbez, I have a great deal of problems due to Jet 3 being ridiculously inefficient with anything beyond a simple SELECT...especially over a network. Can I assume that using indexed fields in my queries will speed that up? I do notice that JOINs are almost always faster than a nested query... I would say using indexed fields can help things greatly, though I don't know much outside of MySQL these days.
|
# ? Nov 28, 2007 02:05 |
|
Powdered Toast Man posted:Edit: Golbez, I have a great deal of problems due to Jet 3 being ridiculously inefficient with anything beyond a simple SELECT...especially over a network. Can I assume that using indexed fields in my queries will speed that up? I do notice that JOINs are almost always faster than a nested query... Indexes definitely help. The other big thing that helps with JET is to not open and close connections like one should do with SQL Server (or most other server RDBMSs). Seems like there is no connection pool. Depending on how much you are reading/writing, it can speed things up greatly. I should note my principal use for Access DBs these days is as data stores/logging tools for some command line utilities we use to import things into the CRM system here.
|
# ? Nov 28, 2007 21:42 |
|
How reliable is using phpMyAdmin for dumping a ~600mb database into a mysqldump style output file? Normally I'd just ssh in and use mysqldump itself, but unfortunately this time around I don't have remote access for ssh (nor can I use a local mysqldump connecting to the remote db server, which is how I previously did what I'm trying to accomplish tonight.) Would this pose any risk of corrupting the existing data and/or abruptly stopping mid-process via a php time-out, corrupting the dump? I am, for all intents and purposes, a phpMyAdmin newbie and need to take the utmost care of this data.
|
# ? Dec 1, 2007 00:08 |
|
I doubt it would work. It won't corrupt anything, but it will probably fail. In my (small) experience with large database backups produced by PHP interfaces, it tends to overrun memory limits very quickly.
|
# ? Dec 1, 2007 04:02 |
|
Yeah, that'll most likely fail. If you don't have SSH, take a look at your cpanel or what-have-you. Most hosts provide database backups through that as well.
|
# ? Dec 1, 2007 22:39 |
|
Kylratix posted:Yeah, that'll most likely fail. If you don't have SSH, take a look at your cpanel or what-have-you. Most hosts provide database backups through that as well. Yeah, I found the link inside cPanel for me to download the mysqldump output of the database in question, which worked like gangbusters. phpMyAdmin really is horrible; even after I got the database and did what I needed to do with it (namely trimming it down from 600 megs to 20, hooray for old apps that store images in the db), the re-import still didn't work all that well. I had to split the 15 meg .sql text file into 2 or 3 separate chunks to make it work properly. Ah well, it's done at any rate, thanks for the help.
|
# ? Dec 1, 2007 23:41 |
|
I've got a table set up for allowing people to vote on content items, with the structure like so:code:
The problem is trying to find the most liked content item from this table. The query I'm currently using is: code:
Is there a particular way to assign ENUM values to 'fake' columns, so I could do '(yay/nay) AS ratio'? Or is there a better way to achieve the same result? Any help is appreciated.
|
# ? Dec 3, 2007 01:00 |
|
Opius posted:I've got a table set up for allowing people to vote on content items, with the structure like so: code:
/count(content_id) AS ratio

minato posted:Edit: Jethro's solution is better, replace my count(vote) with "sum(case when vote="yay" then 1 else 0 end)".

Jethro fucked around with this message at 02:09 on Dec 3, 2007 |
# ? Dec 3, 2007 01:18 |
|
Opius posted:However, instead of just finding the items with the most 'yay' votes, I'd like to find the ones with the highest ratio of 'yay' votes to 'nay' votes. code:
Edit: Jethro's solution is better, replace my count(vote) with sum(case when vote='yay' then 1 else 0 end).
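Combining the two suggestions, the ratio query would look something like this (table name assumed):

```sql
SELECT content_id,
       SUM(CASE WHEN vote = 'yay' THEN 1 ELSE 0 END) / COUNT(vote) AS ratio
FROM votes
GROUP BY content_id
ORDER BY ratio DESC;
```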
|
# ? Dec 3, 2007 01:33 |
Similar to the yay/nay voting, if you are trying to track who has voted on what, where each user is only allowed to vote once on each item being voted on, is it a bad move to just make a one-column table whose primary key is a concat of userId_contentId? Or should you split it up into three columns like id, userId, contentId? What yields better performance?
|
|
# ? Dec 3, 2007 22:45 |
|
Two columns: userId and contentId. The PK is both of those columns.
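A minimal sketch (table name hypothetical):

```sql
CREATE TABLE content_votes (
    userId    INT NOT NULL,
    contentId INT NOT NULL,
    PRIMARY KEY (userId, contentId)  -- one vote per user per item, enforced by the key
);
```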
|
# ? Dec 3, 2007 23:15 |
|
I'm using MySQL and I've read a lot about the different possible engines, but it's still not clear which one is best since all of them usually claim to be. Specifically, I'm interested in the differences and pros/cons between MyISAM and InnoDB. I think InnoDB is better when you have a lot of queries since it does row-level locking, but MyISAM seems to have a lot of optimization too. EDIT: VVVV Yes I did. I was looking for actual experience and opinions from people here, rather than posts in other forums or technical whitepapers and benchmarks. Senso fucked around with this message at 15:28 on Dec 4, 2007 |
# ? Dec 4, 2007 15:12 |
|
Have you googled your question?
|
# ? Dec 4, 2007 15:16 |
How do I use the mysql EXPLAIN to my advantage?
|
|
# ? Dec 4, 2007 20:06 |
|
|
|
You can see how efficient your query is and how many rows needed to be traversed to get the results. For instance, you almost never want to do full table scans. It's good to check your queries with EXPLAIN to see whether full table scans are happening in certain queries, so you can introduce indices or rework the queries.
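For example (hypothetical query):

```sql
EXPLAIN
SELECT e.*, d.DepartmentName
FROM Entries e
JOIN Departments d ON d.DepartmentID = e.DepartmentID
WHERE e.PhaseID = 3;
-- In the output, type = ALL means a full table scan; the rows column
-- estimates how many rows get examined. Adding an index typically
-- changes type to ref/eq_ref and shrinks rows dramatically.
```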
|
# ? Dec 4, 2007 20:10 |