|
This should be really simple, but I'm just not getting it. Pretty new to mysql. My first query was: code:
SELECT * FROM tablea LEFT JOIN tableb ON tablea.field1 = tableb.field1
Which join do I want, or do I need another operator?
|
# ? May 21, 2008 00:00 |
|
|
Civil posted:This should be really simple, but I'm just not getting it. Pretty new to mysql. The answer is not in the JOIN, it's in the GROUP BY: code:
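A sketch of what that grouped query could look like. The tableb.id column is a guess (any unique column on tableb would do), and this leans on MySQL's permissive GROUP BY:

```sql
-- One row per tablea record, keeping only the first tableb match.
SELECT tablea.*, MIN(tableb.id) AS first_match_id
FROM tablea
LEFT JOIN tableb ON tablea.field1 = tableb.field1
GROUP BY tablea.field1;
```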
|
# ? May 21, 2008 00:55 |
|
I do want records from TableA, even if they don't have a database match in TableB. My problem is that LEFT JOIN is giving me multiple returns on TableA when there are multiple matches on TableB. I only want the first match on TableB.
|
# ? May 21, 2008 01:00 |
|
Civil posted:I do want records from TableA, even if they don't have a database match in TableB. My problem is that LEFT JOIN is giving me multiple returns on TableA when there are multiple matches on TableB. I only want the first match on TableB. This is why you need to GROUP BY, because JOIN *WILL* return all rows from table B when the join condition is met. Either you GROUP BY, just like I posted, or you join on a subquery to "prepare" table B, which would be slower and stupid. Here's what it would look like anyway: code:
code:
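The "prepare table B with a subquery" shape might look something like this (assuming tableb has an id primary key):

```sql
-- Collapse tableb to one row per field1 first, then join back
-- to pick up the rest of that first row's columns.
SELECT a.*, b.*
FROM tablea a
LEFT JOIN (
    SELECT field1, MIN(id) AS first_id
    FROM tableb
    GROUP BY field1
) fb ON fb.field1 = a.field1
LEFT JOIN tableb b ON b.id = fb.first_id;
```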
|
# ? May 21, 2008 01:09 |
|
Here's a problem I've been working on for longer than I'd like. I know most of the SQL basics but this seems to be a bit over my head. Let's say I have this simplified table of messages sent to and from a server. All outgoing messages get an ACK from the receiver if he got the message. Outgoing messages are given an ACK number (that loops from 0 to 999), the same number given to the corresponding ACK. Messages that are sent to the server don't have any ACKs, so they have an ACK number of 0. So in this example, the first four rows are the server sending two messages of the type ABC, then getting an ACK to the first one, then the second one. code:
code:
I've been trying various forms of self join on the table to get the desired result, but it either excludes what I want, includes what I don't want or is way too slow. Or all of the above. This might be way too specific for anyone to bother reading and understanding, but I figured I might as well try asking.
|
# ? May 21, 2008 11:33 |
|
When I get crazy queries that are hard to figure out, I ask myself: what is the primary concept I want vs what is joined?
Primary concept: rows where type != ACK
Join concept: time
Join condition: ACK num = ACK num (selecting from ACKs only)
So, in pseudo code... code:
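Filled out as real SQL with guessed table and column names (a messages table with id, time, type and ACK_number), that pseudo code is roughly:

```sql
-- Pair each non-ACK message with an ACK via the ACK number.
SELECT m.id, m.time, m.type, a.time AS ack_time
FROM messages m
LEFT JOIN messages a
       ON a.ACK_number = m.ACK_number
      AND a.type = 'ACK'
WHERE m.type != 'ACK';
```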
|
# ? May 21, 2008 13:47 |
|
Triple Tech posted:When I get crazy queries that are hard to figure out, I ask myself, what is the primary concept I want vs what is joined. Thanks for this. My basic idea was to do something like this, but on a table that will contain hundreds of thousands of rows and grows by about 5,000 rows a day, so this query will get pretty expensive. The main problem is that the ACK_number isn't unique (it loops from 0 to 999), so joining on it will return a lot of unwanted rows. My fix was to only join on ACK_numbers that are less than 10 minutes apart, but that's a pretty cheap fix that made the query even more needlessly complicated and slow. And as a side effect this approach pairs every incoming message with every other incoming message as well, since they all have an ACK_number of 0.
|
# ? May 21, 2008 16:37 |
|
Your table is going to be terrible for performance. Can you add a column, something like "ack_of" that is a foreign key to the ID, referencing the sender's id? This would make querying much simpler and faster.
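As a sketch (the messages table name and integer id primary key are assumptions), the schema change would be something like:

```sql
-- Link each ACK row directly to the message it acknowledges.
ALTER TABLE messages
    ADD COLUMN ack_of INT NULL,
    ADD CONSTRAINT fk_messages_ack_of
        FOREIGN KEY (ack_of) REFERENCES messages (id);
```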
|
# ? May 21, 2008 19:07 |
|
Zedlic posted:Thanks for this. If an UP/DN ACK pair is *always* present (the other end always replies) and they are logged reasonably in order and there are no duplicates and you're using Oracle or SQL Server you could use an analytic function like FIRST_VALUE to compute the final table efficiently. Otherwise, the query is going to be necessarily slow unless you help stage the join condition. One way you could do this is translating this rolling 0-999 value into a real value in the range 0-999,999,999 (a big number you won't ever reach) and storing this in another table, or computing it on the fly with a function. You could then index this function/column, and joins would become very fast.
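A rough Oracle-style sketch of the FIRST_VALUE idea, again assuming a messages table with id, time, type and ack_number columns:

```sql
-- For each row, grab the time of the next row sharing its
-- ack_number: its ACK, if the pairing assumptions above hold.
SELECT *
FROM (
    SELECT id, time, type, ack_number,
           FIRST_VALUE(time) OVER (
               PARTITION BY ack_number
               ORDER BY time
               ROWS BETWEEN 1 FOLLOWING AND UNBOUNDED FOLLOWING
           ) AS ack_time
    FROM messages
) t
WHERE type <> 'ACK';
```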
|
# ? May 21, 2008 19:42 |
|
var1ety posted:If an UP/DN ACK pair is *always* present (the other end always replies) and they are logged reasonably in order and there are no duplicates and you're using Oracle or SQL Server you could use an analytic function like FIRST_VALUE to compute the final table efficiently. I'll preface this by saying that I'm not very familiar with analytic functions. Would either of these work logically and efficiently in the case that an ACK is not always present? code:
- Calculate FIRST_VALUE to fill in the [Time received]
- HAVING clause to filter out rows with more than a 10 minute lag before the ACK
|
# ? May 21, 2008 20:16 |
|
I have two tables:
autos
- id
- other columns that don't matter
and votes
- id
- auto_id
- vote_value
- ip_address
- date_added
I wrote a query to select all autos that do not have any votes for the current IP address and day: code:
|
# ? May 21, 2008 21:08 |
|
Stephen posted:... Untested, but this is what you need I'm pretty sure code:
|
# ? May 21, 2008 21:36 |
|
Stephen posted:... Is it possible that your votes table does not have a matching id at all? If so, you will need to also check if v.id is null. Try code:
|
# ? May 21, 2008 21:37 |
|
Alex007 posted:Untested, but this is what you need I'm pretty sure Hah, this is perfect, thanks. I just assumed that since the IP address and Date values did not equal null, it should be returning the results anyways. Thanks again.
|
# ? May 21, 2008 21:39 |
|
Stephen posted:Hah, this is perfect, thanks. Just doing my job Stephen posted:I just assumed that since the IP address and Date values did not equal null, it should be returning the results anyways. You can't test NULL with = or != because NULL is unknown. If I show you two boxes and you do not know what's in them, you can't say the contents are the same, because they are BOTH unknown. = and != would BOTH come back unknown rather than TRUE, so the row gets filtered out either way. You have to use IS NULL / IS NOT NULL. code:
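Putting that together with the autos/votes tables from above, the NULL test goes in the WHERE clause and the per-user conditions go in the join (the IP literal here is just an example value):

```sql
-- Autos with no vote today from this IP: the left join leaves
-- v.id NULL for those rows, and IS NULL is the only way to test it.
SELECT a.*
FROM autos a
LEFT JOIN votes v
       ON v.auto_id = a.id
      AND v.ip_address = '1.2.3.4'
      AND v.date_added = CURDATE()
WHERE v.id IS NULL;
```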
|
# ? May 21, 2008 21:47 |
|
yaoi prophet posted:Your table is going to be terrible for performance. Can you add a column, something like "ack_of" that is a foreign key to the ID, referencing the sender's id? This would make querying much simpler and faster. That's something I didn't really look into. Thanks for the tip, I'll check tomorrow. It's going to be hard though, since the ID is just an auto-incremented index and is not related to the message itself in any way. Within the message data, the only connection between an ACK message and the original message is the ACK_number, which loops from 0-999.
var1ety posted:If an UP/DN ACK pair is *always* present (the other end always replies) and they are logged reasonably in order and there are no duplicates and you're using Oracle or SQL Server you could use an analytic function like FIRST_VALUE to compute the final table efficiently. There is always an ACK to an UP message, either an ACK/Failed or ACK/Success. (The ACK isn't coming from the recipient himself, but that's not really important to this problem.) However, I only have MySQL. Don't know if something like that is available there. I've thought about mapping the ACK_number to something bigger that won't realistically loop ever, but the problem is finding a mapping function that maps each pair of ACK_numbers to a unique number. Any ideas on that are appreciated.
|
# ? May 21, 2008 23:27 |
|
Zedlic posted:I've thought about mapping the ACK_number to something bigger that won't realistically loop ever, but the problem is finding a mapping function that maps each pair of ACK_numbers to a unique number. Any ideas on that are appreciated. If you created a new column ack_number_noloop then you could count the number of ACKs with that sequence number with a smaller timestamp, multiply that count by 1000, and add the ack_number. This is a way to link the two records together as yaoi prophet recommended. It will work as long as when you get message 500 then the ACK for the previous message 500 has already been committed to the database.
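As a sketch of that computation (the messages table and its column names are still assumptions):

```sql
-- ack_number_noloop = 1000 * (ACKs with this number seen earlier)
--                     + ack_number
SELECT m.id,
       1000 * (SELECT COUNT(*)
               FROM messages p
               WHERE p.type = 'ACK'
                 AND p.ack_number = m.ack_number
                 AND p.time < m.time)
         + m.ack_number AS ack_number_noloop
FROM messages m;
```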
|
# ? May 22, 2008 02:54 |
|
I keep getting this error when trying to import a database: quote:
ERROR 1064 (42000) at line 35657: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''1' at line 1
I'm assuming there is something up with the very first chunk of code. Does anyone see something off about this? code:
|
# ? May 25, 2008 06:45 |
|
Ashex posted:I keep getting this error when trying to import a database: I think the error is near line 35657 -- it's just line 1 of that clause or section.
|
# ? May 25, 2008 08:08 |
|
Ok, I've got sizes and persons. Persons have a foreign key (sid) that references size (id), a standard one-to-many. I want a listing of how many people are wearing which sizes, so if I have sizes S, M and L, I want a result telling me S=10, M=40, L=15. This is all well and good, except using an inner join leaves out the sizes with totals of 0. I've put together a left join query which actually does do exactly what I want, but I get the feeling that there's a better way to do it. code:
code:
code:
|
# ? May 25, 2008 21:13 |
|
I'd do this: code:
|
# ? May 25, 2008 22:02 |
|
Can you tell me why this is the same/better/worse?
|
# ? May 25, 2008 22:39 |
|
epswing posted:Can you tell me why this is the same/better/worse? I think it's easier to understand what the query is doing when you break the query up. Additionally, while using the case function in that way is convenient, it is hard on the database because Oracle has to evaluate it for every row. I benchmarked the two original queries and found that my query ran in 50% of the time. I then benchmarked my original query against a modified version that uses the case method, and found that my query ran in 70% of the time. code:
|
# ? May 26, 2008 02:27 |
|
epswing posted:-- does exactly what i want, but i get the feeling there's a better way Hmm, I haven't tested this but shouldn't you just be able to: select s.name as size, count(*) as total from size s left join person p on s.id=p.sid group by s.id order by s.id; http://www.java2s.com/Code/SQL/Select-Clause/COUNTandGROUPBY.htm
|
# ? May 26, 2008 18:07 |
|
I'm almost positive I've tried exactly that, no dice. code:
|
# ? May 26, 2008 19:28 |
|
epswing posted:I'm almost positive I've tried exactly that, no dice. http://www.w3schools.com/sql/func_count.asp This says that it will return the count of NON-NULL values. As there is ONE record containing the value L after the left join, the COUNT(*) for L is 1. However, the p.id field should be NULL (you can confirm this by doing just the left join and taking out the where conditions). Thus, replacing the count(*) with count(p.id) should fix it... in theory. This is untested. chocojosh fucked around with this message at 19:52 on May 26, 2008 |
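Spelled out against the size/person tables described earlier, that fix would look roughly like:

```sql
-- COUNT(p.id) skips the NULLs produced by the left join,
-- so sizes with no people report 0 instead of 1.
SELECT s.name AS size, COUNT(p.id) AS total
FROM size s
LEFT JOIN person p ON p.sid = s.id
GROUP BY s.id, s.name
ORDER BY s.id;
```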
# ? May 26, 2008 19:43 |
|
You win! code:
code:
Muchas gracias! epswing fucked around with this message at 20:36 on May 26, 2008 |
# ? May 26, 2008 20:32 |
|
in MS SQL, I'm trying to alter a nullable integer column so that it's not null with a default value of zero. How could this best be done using a constraint (since I believe that's the only way to go about it, since I don't believe you can add default values when altering a column)?
|
# ? May 27, 2008 22:28 |
|
uh zip zoom posted:in MS SQL, I'm trying to alter a nullable integer column so that it's not null with a default value of zero. Create a Before Insert Row Trigger
Add Constraint
|
# ? May 27, 2008 23:19 |
|
Xae posted:Update the Column, set =0 where col is null Suppose a user adds a record and doesn't include a value for the column that has the default value: will that field still get said default value?
|
# ? May 27, 2008 23:34 |
|
Xae posted:Update the Column, set =0 where col is null Triggers ? For Default values ? God dammit this is a stupid idea. This is what you're looking for: code:
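That is, something along these lines in T-SQL (the table, column and constraint names here are placeholders):

```sql
-- Backfill the NULLs, tighten the column, then attach a default.
UPDATE mytable SET mycol = 0 WHERE mycol IS NULL;

ALTER TABLE mytable ALTER COLUMN mycol INT NOT NULL;

ALTER TABLE mytable
    ADD CONSTRAINT DF_mytable_mycol DEFAULT 0 FOR mycol;
```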
|
# ? May 27, 2008 23:58 |
|
Alex007 posted:
yes, yes it is. Thank you very much.
|
# ? May 28, 2008 00:11 |
|
Alex007 posted:Triggers ? For Default values ? God dammit this is a stupid idea. What the gently caress do you think a default value is?
|
# ? May 28, 2008 00:58 |
|
Xae posted:What the gently caress do you think a default value is? Default values are better implemented using, well, default values, not triggers.
|
# ? May 28, 2008 01:11 |
|
Alex007 posted:Default values are better implemented using, well, default values, not triggers. It's a chunk of code that goes "If blah is null, then set to x". Oddly similar to what a trigger would be. And for almost any database the performance hit would be negligible. Regardless of the terminology, the code running is going to be shockingly similar.
|
# ? May 28, 2008 01:21 |
|
Help with indices and using mysql's explain! I'm generating a so-called square table which joins 3 tables, but repeats some rows as columns so that my guy's analysis software can plow through it. The end result will be spit out into a csv. These are InnoDB tables. Here's the query: code:
And so I ran explain on that bad boy. I'm very unfamiliar with indices but I know they can be handy. Could someone point me in the right direction? (sorry for the table breakage) code:
|
# ? May 28, 2008 01:36 |
|
Start with the Postgres docs. Even if you don't use Postgres, their docs are some of the best out there for most core DB things like indexes, constraints, etc.
|
# ? May 28, 2008 02:46 |
|
Those are great docs. So I gave them a read and now I'm trying things out but I can't get it to go fast yet. Even when I'm limiting the result to 100 results it goes dog slow. Here is what I used to create the indices... code:
code:
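The general shape, with made-up table and column names standing in for the real three-way join, is to index the columns the join conditions and filters actually hit, then re-run EXPLAIN and check that the joined tables show an index lookup instead of a full scan:

```sql
-- Composite/single-column indexes matching the join columns
-- let InnoDB avoid scanning the joined tables per row.
CREATE INDEX idx_results_subject ON results (subject_id);
CREATE INDEX idx_results_test ON results (test_id);

EXPLAIN
SELECT s.name, t.name, r.score
FROM subjects s
JOIN results r ON r.subject_id = s.id
JOIN tests t ON t.id = r.test_id;
```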
|
# ? May 28, 2008 09:22 |
|
I've got a pretty simple problem: code:
|
# ? May 28, 2008 17:54 |
|
|
nbv4 posted:I've got a pretty simple problem: What RDBMS? MySQL? Try using single quotes around 'Turbine'. If a plane has a 'Turbine' tag and a different tag (say, 'Butts'), do you want the flights associated with that plane? If not, then what you are trying to do will not work. I believe for that purpose, you would need to use a sub-select like code:
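With guessed table names (planes with an id, tags with plane_id and tag, flights with plane_id), that sub-select approach would be roughly:

```sql
-- Flights for planes that have a 'Turbine' tag, regardless of
-- whatever other tags the same plane carries.
SELECT f.*
FROM flights f
WHERE f.plane_id IN (
    SELECT plane_id
    FROM tags
    WHERE tag = 'Turbine'
);
```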
|
# ? May 28, 2008 22:35 |