|
Agrikk posted:How do I do the check on either KillerName or VictimName? I would try something like code:
The one problem I see is if someone played a game but didn't kill/wasn't killed in the match they wouldn't have a record in here.
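Lacking the original query, here's a minimal sketch of the either-column check, using SQLite via Python for illustration; only the KillerName/VictimName column names come from the post, the kills table and data are invented:

```python
import sqlite3

# Hypothetical kills table; only the KillerName/VictimName
# column names come from the post.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kills (KillerName TEXT, VictimName TEXT)")
con.executemany("INSERT INTO kills VALUES (?, ?)",
                [("alice", "bob"), ("bob", "carol")])

# Match a player on either side of a kill with a single OR.
name = "bob"
cnt = con.execute(
    "SELECT COUNT(*) FROM kills WHERE KillerName = ? OR VictimName = ?",
    (name, name)).fetchone()[0]
print(cnt)  # bob appears once as killer and once as victim
```

As the post notes, a player with no kills or deaths has no row here at all; catching those would mean starting from a player roster table and LEFT JOINing to this one.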
|
# ? Jun 17, 2009 16:38 |
|
Stupid and probably easy question, I just don't know the proper terminology to google it. In MySQL, I can do a query on two "copies" of the same table: code:
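The technique being asked about is a self-join: the same table listed twice under different aliases. A sketch (SQLite via Python; the parent/child table is invented just to show the syntax):

```python
import sqlite3

# Invented parent/child table, just to demonstrate aliasing.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, parent INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, None, "root"), (2, 1, "child")])

# The same table appears twice, under the aliases A and B.
row = con.execute("""
    SELECT A.name, B.name
    FROM t AS A
    JOIN t AS B ON B.parent = A.id
""").fetchone()
print(row)  # ('root', 'child')
```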
|
# ? Jun 17, 2009 18:42 |
|
That works for me in PostgreSQL. Are you sure you've spelt the columnname correctly? If the column name is the same as a Postgres keyword, try putting it in quotes (e.g. "A.columnname")
|
# ? Jun 17, 2009 19:06 |
|
That was it, thanks! Not the first time I've had quotes mess me up today.
|
# ? Jun 17, 2009 19:20 |
|
Okay so I was wondering if there was a way to put a 'where' statement in a 'sum' statement. I need to sum up some numbers in a table multiple times, based on a condition. Here is a simplified version of the table: code:
code:
Ninja Edit: Actually, I looked on the internets and found some useful info (whoud'a thunk). Does this look viable (don't have access to the DB at the moment): code:
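The conditional-inside-SUM idea can be sketched like this (SQLite via Python for illustration; the sales table and data are invented):

```python
import sqlite3

# Invented example table; the real one tallies per-province numbers.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (province TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("ON", 10), ("QC", 20), ("ON", 5)])

# A conditional inside SUM acts like a per-row WHERE: rows failing
# the test contribute 0 to that particular sum.
row = con.execute("""
    SELECT SUM(CASE WHEN province = 'ON' THEN amount ELSE 0 END),
           SUM(CASE WHEN province = 'QC' THEN amount ELSE 0 END)
    FROM sales
""").fetchone()
print(row)  # (15, 20)
```

CASE..WHEN is standard SQL, so it's more portable than Oracle's DECODE for this.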
|
# ? Jun 18, 2009 01:19 |
|
I don't know anything about "decode", but this is how I think I would do it code:
|
# ? Jun 18, 2009 01:32 |
|
If you are using Oracle 11g then you can use the new PIVOT feature.
|
# ? Jun 18, 2009 02:15 |
|
Hammerite posted:I don't know anything about "decode", but this is how I think I would do it With the exception of Nunavut's creation in the previous century, I don't really think this is going to be too much of a problem. (We're talking about provinces of Canada, incidentally) But what you're saying is exactly what I needed to know, the ability to use conditionals in the SUM statement. Thanks!
|
# ? Jun 18, 2009 02:16 |
|
I have a web program in php/mysql that I use to keep track of raids for a wow guild. Currently attendance is calculated by tallying the raids attended per member and dividing it by the total raids overall (mysql): code:
I have been trying to figure out how to do this, and the DISTINCT keyword doesn't seem to be helping me (or I am using it wrong). Thanks! PS, why the hell is the code tag freaking out?
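One way DISTINCT can help here is inside the aggregate itself, counting each raid night once no matter how many rows it produced. A sketch (SQLite via Python; the raids table and columns are guesses based on the posts):

```python
import sqlite3

# Guessed schema: one row per member per raid event logged.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raids (member TEXT, raid_date TEXT)")
con.executemany("INSERT INTO raids VALUES (?, ?)", [
    ("ana", "2009-06-01 19:00"),
    ("ana", "2009-06-01 21:00"),   # same night, counted once below
    ("ana", "2009-06-02 19:00"),
    ("bob", "2009-06-01 19:00"),
])

# COUNT(DISTINCT DATE(...)) collapses multiple rows from one night.
rows = con.execute("""
    SELECT member, COUNT(DISTINCT DATE(raid_date))
    FROM raids
    GROUP BY member
    ORDER BY member
""").fetchall()
print(rows)  # [('ana', 2), ('bob', 1)]
```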
|
# ? Jun 19, 2009 22:21 |
|
I'm looking to begin learning some database theory, and I've been trying to find some good introductory books on the subject. I've never really had much interaction with databases, so something that's a fairly gentle introduction would be nice. Do any of you guys have any suggestions? Is there a quintessential introductory book?
|
# ? Jun 21, 2009 01:51 |
|
Meganiuma posted:I'm looking to begin learning some database theory, and I've been trying to find some good introductory books on the subject. I've never really had much interaction with databases, so something that's a fairly gentle introduction would be nice. Do any of you guys have any suggestions? Is there a quintessential introductory book? I think that the Manga Guide to Databases is a good gentle introduction to the concepts. SQL and Relational Basics by Fabian Pascal is also good.
|
# ? Jun 21, 2009 06:29 |
|
Cheesemaster200 posted:
I don't know MySQL, so I can't comment on the code specifically, but the basic way to SUM different periods is to GROUP BY your period. Since you want daily, that's a piece of cake; just GROUP BY your date field (cut off the time if it's a timestamp). I'm not sure if I understand your query, but you'll probably need to group by user too. camels posted:I think that the Manga Guide to Databases is a good gentle introduction to the concepts. Awesome! My knowledge of DB management is basically non-existent. I will buy both.
|
# ? Jun 21, 2009 15:27 |
|
If you're counting total raids as something like code:
code:
|
# ? Jun 22, 2009 15:48 |
|
MoNsTeR posted:Trunc() is an oracle function, I'm not sure what the MySQL equivalent is. All it does is take a date field with a value like '22-JUN-2009 08:48' and cut it off to '22-JUN-2009'. http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date
|
# ? Jun 22, 2009 17:36 |
|
Dr.Khron posted:I don't know MySQL, so I can't comment on the code specifically, but the basic way to SUM different periods is to GROUP BY your period. Since you want daily, that's a piece of cake; just GROUP BY your date field (cut off the time if it's a timestamp). I'm not sure if I understand your query, but you'll probably need to group by user too. The problem is it is part of a much larger query. If I use DISTINCT right after the SUM or COUNT, I will get the count of distinct outputs of the IF statement (i.e. 1). What I need to somehow do is add a conditional in the IF statement that will not count the same date twice.
|
# ? Jun 22, 2009 18:19 |
|
Are there any inherent long-term problems with constantly creating and deleting a set of tables in MySQL? I am basically taking an existing application and trying to give each user session their own sandbox by creating their own copy of the database (~15 tables, couple dozen rows each) each time a session is started (and performing gc on expired sessions to delete the unused tables). I understand this is not the best approach, but I am trying to minimize development time and this is the quickest implementation I could come up with. Support for no more than a dozen concurrent users is necessary. I know there are applications that create and delete thousands of tables (Wordpress MU comes to mind), so I am not too wary, but are there any potential issues I am not considering? supster fucked around with this message at 00:11 on Jun 25, 2009 |
# ? Jun 25, 2009 00:08 |
|
supster posted:Are there any inherit long-term problems with constantly creating and deleting a set of tables in MySQL? Hell, if it's small enough, just create temporary tables in memory for the users and let MySQL destroy them when the session ends. No issues that I can really think of, but I'd probably go the create temporary table ... engine = memory route here.
|
# ? Jun 25, 2009 03:44 |
|
Cheesemaster200 posted:I have a web program in php/mysql that I use to keep track of raids for a wow guild. Here's one possible approach: 1. use DATE() to convert r.raid_date from a datetime to a date value. 2. use a subquery with GROUP BY to get the distinct raid_dates. 3. join or select from the distinct dates subquery. I'm not very familiar with MySQL, but if it supports CASE..WHEN and COALESCE I would look at using those instead of IF(), if it makes the query simpler and easier to understand.
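The three steps above can be sketched as a derived table (SQLite via Python for illustration; the raids table, columns, and data are guesses based on the posts):

```python
import sqlite3

# Guessed schema matching the raid-attendance discussion above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raids (member TEXT, raid_date TEXT)")
con.executemany("INSERT INTO raids VALUES (?, ?)", [
    ("ana", "2009-06-01 19:00"),   # two rows logged on the same night
    ("bob", "2009-06-01 21:00"),
    ("ana", "2009-06-02 19:00"),
])

# Steps 1-2: DATE() strips the time; GROUP BY collapses duplicates.
# Step 3: select from the derived table of distinct dates.
total = con.execute("""
    SELECT COUNT(*) FROM (
        SELECT DATE(raid_date)
        FROM raids
        GROUP BY DATE(raid_date)
    ) AS distinct_dates
""").fetchone()[0]
print(total)  # 2 distinct raid nights
```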
|
# ? Jun 25, 2009 05:30 |
|
camels posted:Here's one possible approach: I have my dataset acquired by the following sub-query: code:
code:
|
# ? Jun 25, 2009 17:24 |
|
Cheesemaster200 posted:I have my dataset acquired by the following sub-query: Something like this...? code:
|
# ? Jun 25, 2009 21:01 |
|
I am trying to use a sqlite database to search for records that are similar but not necessarily exactly the term given. Does anyone know of a good way to fuzzy search in sqlite, similar to a Levenshtein distance algorithm? E.g. if I searched for the term 'England' with a margin of error = 1, it would return both 'Angland' and 'Englend' as well as the original term.
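SQLite has no built-in Levenshtein function, but if the database is accessed from Python, sqlite3's create_function lets you register one and call it from SQL. A sketch with an invented places table:

```python
import sqlite3

def lev(a, b):
    """Plain dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

con = sqlite3.connect(":memory:")
con.create_function("lev", 2, lev)  # expose lev() as a SQL function

con.execute("CREATE TABLE places (name TEXT)")  # invented table
con.executemany("INSERT INTO places VALUES (?)",
                [("England",), ("Angland",), ("Englend",), ("France",)])

matched = [r[0] for r in con.execute(
    "SELECT name FROM places WHERE lev(name, 'England') <= 1")]
print(sorted(matched))  # ['Angland', 'England', 'Englend']
```

Note this forces a full scan with the distance computed per row, so it won't be fast on a large table.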
|
# ? Jun 27, 2009 03:21 |
|
scrabbleship posted:I am trying to use a sqlite database to search for records that are similar but not necessarily exactly the term given. Does anyone know of a good way to fuzzy search in sqlite, similar to a Levenshtein distance algorithm? E.g. if I searched for the term 'England' with a margin of error = 1, it would return both 'Angland' and 'Englend' as well as the original term. Is it possible to use a spell-checker to scrub the existing data? Bad Titty Puker fucked around with this message at 09:11 on Jun 28, 2009 |
# ? Jun 28, 2009 08:13 |
|
camels posted:Is it possible to use a spell-checker to scrub the existing data? Well, I am trying to build something that matches ID3 tags to a database generated by ID3 tags. A lot of the data can't really be spell-checked because it mostly consists of proper nouns, and I can't really ensure the ID3 tags being matched to the database are spelled correctly either. Is there really nothing out there like this? I figured it would be somewhat common in using databases to search for results.
|
# ? Jun 29, 2009 01:39 |
|
scrabbleship posted:Well, I am trying to build something that matches ID3 tags to a database generated by ID3 tags. A lot of the data can't really be spell checked because it mostly consists of proper nouns and I can't really ensure the ID3 tags being matched to the database are spelled correctly either. Is there really nothing out there like this? I figured it would be somewhat common in using databases to search for results. A lovely and non-scalable way of doing this would be to use the underscore (_) wildcard character in a LIKE statement, permuting over all possible combinations relative to your margin of error. So, with England and a margin of error of 1: _ngland e_gland en_land eng_and engl_nd engla_d englan_ Maybe a sqlite plugin exists to do fuzzy matching? I saw on the fulltext sqlite page that it does not support this natively.
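The wildcard-permutation idea can be sketched like this (SQLite via Python; the places table is invented). Note it only catches single-character substitutions, not insertions or deletions, which is part of why it doesn't scale:

```python
import sqlite3

def one_off_patterns(word):
    # Replace each position in turn with the single-character wildcard '_'.
    return [word[:i] + "_" + word[i + 1:] for i in range(len(word))]

patterns = one_off_patterns("England")
print(patterns[0], patterns[-1])  # _ngland Englan_

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE places (name TEXT)")  # invented table
con.executemany("INSERT INTO places VALUES (?)",
                [("England",), ("Angland",), ("Englend",), ("Scotland",)])

# OR the LIKE patterns together; a row is returned once even if it
# matches several patterns. 'Scotland' is the wrong length, so no match.
where = " OR ".join("name LIKE ?" for _ in patterns)
matched = [r[0] for r in
           con.execute(f"SELECT name FROM places WHERE {where}", patterns)]
print(sorted(matched))  # ['Angland', 'England', 'Englend']
```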
|
# ? Jun 29, 2009 02:03 |
|
scrabbleship posted:Well, I am trying to build something that matches ID3 tags to a database generated by ID3 tags. A lot of the data can't really be spell check because it mostly consists of proper nouns and I can't really ensure the ID3 tags being matched to the database are spelled correctly either. Is there really nothing out there like this? I figured it would be somewhat common in using databases to search for results. Maybe build a lookup table if the number of distinct tags is small, and the margin of error is small? Select all the distinct tags into a dictionary table, maybe use compression and/or pruning to reduce the table size. Then populate a lookup table, something like: search_keys(word1_id INT, word2_id INT, lev_distance INT) unique index on (word1_id, word2_id) index on (word1_id, lev_distance) I have never used SQLite but I'm thinking that a search_keys table could easily hold hundreds of millions of rows and perform fast.
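A sketch of that lookup-table idea (SQLite via Python; the search_keys schema comes from the post, the words table, distance function, and data are invented for illustration):

```python
import sqlite3
from itertools import combinations

def lev(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE words (word_id INTEGER PRIMARY KEY, word TEXT UNIQUE);
    CREATE TABLE search_keys (word1_id INT, word2_id INT, lev_distance INT,
                              UNIQUE (word1_id, word2_id));
    CREATE INDEX sk_lookup ON search_keys (word1_id, lev_distance);
""")
tags = ["England", "Angland", "Englend", "France"]
con.executemany("INSERT INTO words (word) VALUES (?)", [(t,) for t in tags])
ids = dict(con.execute("SELECT word, word_id FROM words"))

# Pay the O(n^2) distance cost once, up front; later searches are
# cheap indexed lookups.
con.executemany("INSERT INTO search_keys VALUES (?, ?, ?)",
                [(ids[a], ids[b], lev(a, b)) for a, b in combinations(tags, 2)])

matched = [r[0] for r in con.execute("""
    SELECT w.word FROM search_keys sk
    JOIN words w ON w.word_id = sk.word2_id
    WHERE sk.word1_id = ? AND sk.lev_distance <= 1
""", (ids["England"],))]
print(sorted(matched))  # ['Angland', 'Englend']
```

Only one direction of each pair is stored here; a real table would insert both orderings (or query both sides with a UNION).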
|
# ? Jun 29, 2009 03:20 |
|
I've got two tables which are joined by a third to avoid a many-to-many relationship. Specifically, a User can belong to many Facilities. I need to display various things about users, including the facilities they belong to, on one line of a gridview. Is there any way to build a view or something in which all of the facilities to which a user belongs are concatenated into one column?
|
# ? Jun 29, 2009 20:52 |
|
What database are you using? If you're using Postgres you can use something like: code:
|
# ? Jun 29, 2009 22:26 |
|
Neat that Postgres provides a built-in to do that. If you happen to be on Oracle I can share my solution, which is a kludgy User-Defined Aggregate Function.
|
# ? Jun 29, 2009 23:29 |
|
Wonderbread 2000 posted:I've got two tables which are joined by a third to avoid a many-to-many relationship. Specifically, a User can belong to many Facilities. I need to display various things about users, including the facilities they belong to, on one line of a gridview. Is there any way to build a view or something in which all of the facilities to which a user belongs are concatenated into one column? If you are using MySQL look at GROUP_CONCAT
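The same string-aggregation pattern exists in SQLite too, so a runnable sketch is easy (Python here; the users/facilities/join-table schema is a guess based on the question):

```python
import sqlite3

# Guessed many-to-many schema: users <- user_facilities -> facilities.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (user_id INTEGER, name TEXT);
    CREATE TABLE facilities (fac_id INTEGER, fac_name TEXT);
    CREATE TABLE user_facilities (user_id INTEGER, fac_id INTEGER);
""")
con.execute("INSERT INTO users VALUES (1, 'kim')")
con.executemany("INSERT INTO facilities VALUES (?, ?)",
                [(1, "North"), (2, "South")])
con.executemany("INSERT INTO user_facilities VALUES (?, ?)", [(1, 1), (1, 2)])

# One row per user, with all facilities collapsed into one column.
row = con.execute("""
    SELECT u.name, GROUP_CONCAT(f.fac_name, ', ')
    FROM users u
    JOIN user_facilities uf ON uf.user_id = u.user_id
    JOIN facilities f ON f.fac_id = uf.fac_id
    GROUP BY u.user_id, u.name
""").fetchone()
print(row)  # e.g. ('kim', 'North, South')
```

Beware that GROUP_CONCAT's element order is not guaranteed unless the dialect supports an ORDER BY inside the aggregate (MySQL's does).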
|
# ? Jun 30, 2009 15:27 |
Is there a way of directly importing an Excel file into a database, or copying over the rows? Say I have a SQL table set up like this: code:
|
|
# ? Jun 30, 2009 18:47 |
|
DBs usually have bulk import tools that can read from CSV files... But I wouldn't say they're easy like "accomplishable under 3 minutes" easy. You set up the schema, export to CSV, align all the columns, set it to load, and then pray. Really if this is a one shot job you're better off exporting to CSV, writing a Perl script to generate insert statements, and then calling them. I would consider bulk import to be more of a productionalized solution.
|
# ? Jun 30, 2009 18:50 |
I just want to get used to the import tool from now. Is there a guide to doing it through SQL Server Management Studio?
|
|
# ? Jun 30, 2009 19:00 |
I just found out about the SQL command 'BULK INSERT' which is almost perfect, however with a comma delimited file, I give it the FIELDTERMINATOR of ',' and it dies trying to handle "this text has, a comma". Any idea how to get around that? Do I need to use another type of delimiter? Edit: BULK INSERT EDIT_LONG_CODE FROM 'C:\ImportTest.txt' WITH ( FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n' ) This is just about the fastest thing I could have hoped for, if anyone else ends up needing this. Just pop it onto a tab delimited file and you're good to go. Atoramos fucked around with this message at 19:40 on Jun 30, 2009 |
|
# ? Jun 30, 2009 19:20 |
|
Atoramos posted:Do I need to use another type of delimiter? No, I think you need to encapsulate your text data properly. http://en.wikipedia.org/wiki/Comma-separated_values#Specification code:
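To illustrate that encapsulation rule: Python's csv module applies exactly this quoting when writing and undoes it when reading, so embedded commas survive the round trip (the table and data below are invented):

```python
import csv
import io
import sqlite3

# A field containing a comma gets quoted on the way out...
buf = io.StringIO()
csv.writer(buf).writerow(["id1", "this text has, a comma"])
line = buf.getvalue().strip()
print(line)  # id1,"this text has, a comma"

# ...and parses back into two fields, not three.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows)  # [['id1', 'this text has, a comma']]

# Loading the parsed rows is then a plain parameterised INSERT.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE imported (id TEXT, note TEXT)")  # invented table
con.executemany("INSERT INTO imported VALUES (?, ?)", rows)
note = con.execute("SELECT note FROM imported").fetchone()[0]
print(note)
```

Switching to a tab delimiter, as in the edit above, just sidesteps the problem for data that never contains tabs.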
Squashy Nipples fucked around with this message at 21:27 on Jun 30, 2009 |
# ? Jun 30, 2009 21:06 |
|
I'm sure I'm being retarded, but can someone point out to me what's wrong with this query? code:
MySQL posted:MySQL error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SUM( CASE Game.NumberOfPlayers WHEN 2 THEN ' at line 7
|
# ? Jul 2, 2009 23:27 |
|
What's that FROM clause doing on line 6?
|
# ? Jul 2, 2009 23:30 |
|
Ah poo poo, I've forgotten to put the word SELECT at the start of my subquery. I knew it would be something retarded. Thanks.
|
# ? Jul 2, 2009 23:32 |
|
I was going to post again puzzling over the fact that my query didn't seem to be working in cases where there are no rows over which to sum. It didn't seem to be giving me back an answer. I worked out after a bit of puzzling that it was giving back an answer, but the method I was using to view the output was fooling me into thinking that it wasn't. It was just giving back NULL. It comes down to the fact that SUMming over zero rows doesn't give you zero in SQL (well, MySQL anyway); it gives you NULL. I find this rather annoying; it seems ridiculous to me (as a mathematician) that an empty sum should be asserted to be something other than zero. This rant isn't addressed to anyone in particular; carry on.
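The standard workaround for that NULL-from-an-empty-SUM behaviour is COALESCE, which works in MySQL as well; demonstrated here with SQLite via Python and an invented empty table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (amount INTEGER)")  # deliberately empty

# SUM over zero rows yields NULL, not 0...
empty_sum = con.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(empty_sum)  # None

# ...so wrap it in COALESCE to get the mathematician's empty sum.
coalesced = con.execute(
    "SELECT COALESCE(SUM(amount), 0) FROM payments").fetchone()[0]
print(coalesced)  # 0
```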
|
# ? Jul 3, 2009 00:14 |
|
Dude, don't dis NULL, its super useful. Sometimes the difference between a zero value and no data is an important distinction. I ran afoul of NULL a few weeks ago, when I needed the SUM of a set of values to return NULL if ANY of the individual values were NULL. Check back a few pages.
|
# ? Jul 3, 2009 00:55 |
|
Dr.Khron posted:Dude, don't dis NULL, it's super useful. Sometimes the difference between a zero value and no data is an important distinction. NULL can be useful, just remember that it's strange, at least in Oracle. 1 = NULL and 1 != NULL both evaluate to NULL, which a WHERE clause treats as false, for one thing, and for another you can't index on null: Oracle's B-tree indexes don't store entries whose indexed columns are all NULL. If you've got a column with an index on it and there are null values in there you can use that index if you specify <column value> = <whatever>, but not <column value> IS NULL; EDIT: and I just used a ; instead of a . at the end of a sentence, I've been doing this poo poo for too long
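That three-valued behaviour is easy to see for yourself; SQLite (via Python here) behaves the same way as Oracle on the comparisons:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Comparing anything to NULL yields NULL (unknown), not true or false.
cmp = con.execute("SELECT 1 = NULL, 1 != NULL, NULL = NULL").fetchone()
print(cmp)  # (None, None, None)

con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (NULL)")

# WHERE keeps only rows where the condition is true, so unknown
# acts like false...
eq = con.execute("SELECT COUNT(*) FROM t WHERE x = 1").fetchone()[0]
# ...and IS NULL is the only predicate that matches the row.
isnull = con.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]
print(eq, isnull)  # 0 1
```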
|
# ? Jul 3, 2009 01:00 |