|
I'm not sure that's efficient either: for the average person there have to be more stories than votes, since they can only vote once per story, so it would be a smaller result set to search all the user's votes than to search all the stories. In the end all I need to know is whether a vote was found, not what it was, so I think the binary search is still the best bet. Another good thing is that the API is simpler with the binary search, since I only need a single method isVoteFound(user, id), which when called once will automatically speed up any subsequent calls since the list is cached. If I went with the OR clause, I would need to change the API to isVoteFound(user, List ids).
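For comparison, the OR-clause/batched version being rejected here would look something like this in SQL (table and column names are my invention, not from the actual schema; an IN list is just shorthand for chained ORs):

```sql
-- One round trip: which of the displayed stories has this user voted on?
-- Only the presence of a row matters, not its contents.
SELECT story_id
FROM votes
WHERE user_id = ?
  AND story_id IN (?, ?, ?, ?);  -- ids of the stories currently on screen
```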
|
# ? Jun 29, 2010 12:53 |
|
But you already search for all the user's stories? The 15 stories you display. So adding a left join to show which stories the user has voted on should, if I understand what you are trying to do, be much simpler and more efficient. One query instead of 16 (the original query plus 15 vote lookups).
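A sketch of the single query being described, with invented table and column names:

```sql
-- The story list and the current user's vote status in one pass.
SELECT s.id,
       s.title,
       v.story_id IS NOT NULL AS user_has_voted
FROM stories AS s
LEFT JOIN votes AS v
       ON v.story_id = s.id
      AND v.user_id = ?          -- the viewing user, bound at runtime
ORDER BY s.created_at DESC
LIMIT 15;
```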
|
# ? Jun 29, 2010 14:32 |
|
Yeah, but your query assumes only one user is hitting the site. The original stories query has to be independent of any user since it will be shown to many different users with different vote statuses. edit - I guess you are taking the approach of running the query for every user, where I was caching the story query for all users and running the votes independently.
|
# ? Jun 29, 2010 15:05 |
|
Well that explains it.
|
# ? Jun 29, 2010 15:11 |
|
Well no, his query does not assume that, since the user_id is plugged in at runtime. More generally, you basically have to decide whether you can afford to leverage the database or if you have to keep everything in memory. The problem you've proposed is undoubtedly easier to solve using SQL, but it will almost certainly be slower. The question is whether that matters at your level of scale.
|
# ? Jun 29, 2010 20:55 |
|
MySQL engines: Right now I'm using the default MyISAM engines and I'm thinking about switching to InnoDB for foreign keys. I'm currently using triggers to perform the same cascading deletes/updates that foreign keys would provide. Is there any reason I should switch? I don't have need right now for row-level locking or transactions. If I do switch, is there an easier way to do it than copying my data to a new table with the same schema (but different engine), deleting the old one and then renaming the new one?
|
# ? Jul 2, 2010 18:30 |
|
I have a hopefully pretty easy question on a JOIN. I have two tables, a Quote table (PK is QuoteID) and a History table (PK is HistoryID). The linking key is QuoteID. The History table can contain multiple instances of the same QuoteID. What I need to do is grab the QuoteID based on other row information in my History table, taken from the highest HistoryID for a given QuoteID. So let's say I have a column in my History table named Status; I need to get the QuoteID where Status = "Test" from the row with the highest HistoryID. Does any of that make sense? Here's what I have so far that works: SELECT Quote.QuoteID, History.HistoryID FROM Quote JOIN History ON Quote.QuoteID=History.QuoteID Which returns some results like this: code:
Any help would be greatly appreciated!
|
# ? Jul 2, 2010 19:05 |
|
If you're using MySQL you'd need to do something like: code:
If you're using MSSQL 2005 or greater (or some not ancient release of Oracle) you could use analytic functions to do something like: code:
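Neither of the code blocks above survived the archive; the two approaches being described would look roughly like this, using the column names from the question:

```sql
-- MySQL: join each quote to its highest HistoryID, then filter on Status.
SELECT q.QuoteID
FROM Quote q
JOIN History h
  ON h.QuoteID = q.QuoteID
JOIN (SELECT QuoteID, MAX(HistoryID) AS MaxHistoryID
      FROM History
      GROUP BY QuoteID) latest
  ON latest.QuoteID = h.QuoteID
 AND latest.MaxHistoryID = h.HistoryID
WHERE h.Status = 'Test';

-- MSSQL 2005+ / Oracle: the same thing with an analytic function.
SELECT QuoteID
FROM (SELECT QuoteID, Status,
             ROW_NUMBER() OVER (PARTITION BY QuoteID
                                ORDER BY HistoryID DESC) AS rn
      FROM History) numbered
WHERE rn = 1
  AND Status = 'Test';
```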
|
# ? Jul 2, 2010 19:21 |
|
Awesome, the top query worked perfectly. Thanks for the help Jethro!
|
# ? Jul 2, 2010 19:36 |
|
Doctor rear end in a top hat posted:MySQL engines: Right now I'm using the default MyISAM engines and I'm thinking about switching to InnoDB for foreign keys. I'm currently using triggers to perform the same cascading deletes/updates that foreign keys would provide. Is there any reason I should switch? I don't have need right now for row-level locking or transactions. If I do switch, is there an easier way to do it than copying my data to a new table with the same schema (but different engine), deleting the old one and then renaming the new one? There is no reason not to use InnoDB, especially given that you're already doing something insanely silly to work around not using it. Converting is easy: code:
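The conversion block is missing from the archive, but it amounts to a one-liner per table:

```sql
ALTER TABLE my_table ENGINE = InnoDB;

-- To generate the statement for every remaining MyISAM table at once:
SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;')
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND engine = 'MyISAM';
```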
|
# ? Jul 2, 2010 21:42 |
|
Got them switched, but having trouble setting up a couple of the foreign keys (out of like 20). The following two statements: ALTER TABLE sites ADD FOREIGN KEY (owner) REFERENCES owners (id) ON UPDATE CASCADE ALTER TABLE territories ADD FOREIGN KEY (owner) REFERENCES owners (id) ON DELETE CASCADE ON UPDATE CASCADE give the same error: Can't create table './ctasc/#sql-1625_b1d012.frm' (errno: 150) (ctasc is the database name). And here's the relevant schema of the tables: Sites: code:
code:
code:
I'm retarded and didn't have an index on the id column in my owners table. Fixed! butt dickus fucked around with this message at 22:07 on Jul 2, 2010 |
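For anyone else hitting errno 150: InnoDB requires the referenced column to be indexed (and both columns to have exactly matching types, including signedness). The fix here was along these lines:

```sql
-- owners.id had no index, so the foreign keys could not be created.
ALTER TABLE owners ADD PRIMARY KEY (id);  -- or at minimum: ADD INDEX (id)

ALTER TABLE sites
  ADD FOREIGN KEY (owner) REFERENCES owners (id)
  ON UPDATE CASCADE;
```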
# ? Jul 2, 2010 22:05 |
|
Yeah, the errors given by InnoDB when there is something preventing a foreign key from being created/altered are not helpful at all.
|
# ? Jul 4, 2010 14:45 |
|
I have an (embarrassingly simple) SQL question. I have the following table: id (primary key, int), name (varchar2), container (int, around 30 possible unique values), active (boolean). I need a list of all the names that have more than one active row for a given container. Here's the query I've come up with: code:
KillaB227 fucked around with this message at 20:59 on Jul 4, 2010 |
# ? Jul 4, 2010 20:54 |
|
KillaB227 posted:I have an (embarrassingly simple) SQL question. hrmm, I think this should work if it's unique by container: code:
|
# ? Jul 5, 2010 01:33 |
|
KillaB227 posted:I have a (embarrassingly simple) SQL question. code:
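Neither reply's code block survived; the GROUP BY / HAVING approach both are pointing at looks like this (table name invented, columns from the question):

```sql
-- Names with more than one active row in the same container.
SELECT name, container, COUNT(*) AS active_rows
FROM widgets                -- hypothetical table name
WHERE active = 1
GROUP BY name, container
HAVING COUNT(*) > 1;
```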
|
# ? Jul 5, 2010 01:40 |
|
Sprawl posted:hrmm i think should work if its unique by container camels posted:
Awesome! Thank you both.
|
# ? Jul 5, 2010 02:51 |
|
Apparently the Sybase database I'm working on is either really old or I don't know what. It wouldn't let me create a foreign key with an "on update" clause; it gave me a syntax error. alter table XYZ add foreign key (fieldXYZ) references ABC(fieldABC) on update cascade Take out the "on update" and it works. I ended up axing that constraint entirely and enforcing it manually with triggers, but I'd like to be able to cascade... What might I want to look at to enable this option if it's been disabled somehow? I can get full DBA access if needed.
|
# ? Jul 7, 2010 08:15 |
|
baquerd posted:Apparently the Sybase database I'm working on is either really old or I don't know what. What version of Sybase? quote:It wouldn't let me create a foreign key with an "on update" clause, gave me syntax error. What error message are you getting?
|
# ? Jul 7, 2010 09:07 |
|
camels posted:What version of Sybase? It should be 12.5 I believe. We had to set a new environment variable SYBASE_OCS with the latest update we've published. I'm a Java/PHP guy who does SQL on the side, so I'm not very knowledgeable about DBA myself. "Incorrect syntax near the keyword 'ON'."
|
# ? Jul 7, 2010 09:11 |
|
baquerd posted:It should be 12.5 I believe. We had to set a new environment variable SYBASE_OCS with the latest update we've published. I'm a Java/PHP guy who does SQL on the side, so I'm not very knowledgeable about DBA myself. I dunno. Looks like version 12.5 does not support cascading updates: http://infocenter.sybase.com/help/topic/com.sybase.dc32300_1250/pdf/sqlug.pdf quote:create table [database.[owner].]table_name quote:Using triggers vs. integrity constraints
|
# ? Jul 7, 2010 09:58 |
|
camels posted:I dunno. Looks like version 12.5 does not support cascading updates: Ah, good find, bleh. A trigger on each side of the foreign key constraints isn't too bad, only needed one primary key able to be updated anyway.
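A rough shape for one side of that, in Sybase T-SQL (table and column names from the earlier post; this simplest form is only safe for single-row updates, since pairing old and new values across the inserted/deleted tables needs a stable key, which is exactly what's changing here):

```sql
CREATE TRIGGER abc_cascade_update
ON ABC
FOR UPDATE
AS
IF UPDATE(fieldABC)
BEGIN
    DECLARE @old int, @new int
    SELECT @old = fieldABC FROM deleted
    SELECT @new = fieldABC FROM inserted
    -- Push the changed key value down to the referencing table.
    UPDATE XYZ SET fieldXYZ = @new WHERE fieldXYZ = @old
END
```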
|
# ? Jul 7, 2010 10:25 |
|
Okay, the bottom line is that I don't know much about JDBC or SQLExpress, but it is what it is. I'm trying to run the FogBugz importer for Jira (which looks to be at least partially broken for FB7.1), but I can't even get far enough to see if it's going to work at all because I don't know what I need the JDBC string to look like. Jira is expecting something like jdbc:jtds:sqlserver://localhost:1433/FogBUGZ. That is the correct host and port (the DB is local to the Jira install), but this is a named instance of SQLEXPRESS. I can get in through OSQL with "localhost\sqlexpress", but that's not letting Jira connect (generic failure error with no explanation). If anyone has tried to solve a similar problem I'd be much obliged. Google led me to try sqlserver://localhost/db_name;instanceName=sqlexpress but that has not worked either. The SQL instance is using Windows auth, but my impression is that this means I can omit the username entirely because Jira is running as the logged-in user (not in service mode), which is, again, on the SQL box as admin.
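If it helps: jTDS spells the named-instance property `instance`, not `instanceName` (that's the Microsoft driver's name for it), and with the SQL Browser service running the port can usually be omitted for a named instance. So something along these lines:

```
jdbc:jtds:sqlserver://localhost/FogBUGZ;instance=sqlexpress
```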
|
# ? Jul 9, 2010 22:02 |
|
A local government agency provides us with a spreadsheet full of names and addresses that our users need to search for a particular business reason. The spreadsheet contains 5 name fields and 5 address fields and some other miscellaneous information. I have been tasked with developing a small ASP.NET web app that allows them to search the list, displays the results in a datagrid and then allows them to email the rendered datagrid somewhere. With this in mind, I imported the spreadsheet into a single table in SQL Server and enabled Full Text Indexing, thinking that I would just create a single stored procedure that looked something like: SELECT * FROM TableName WHERE CONTAINS (columns,@searchstring) The problem I have is that names and addresses are sometimes stored like this: Name1: Jacqueline Name2: Kennedy Onassis Name3: null and sometimes like this: Name1: Jacqueline Name2: Kennedy Name3: Onassis So users are not getting the results they should be, because they are searching for 'martin luther king boulevard' and 'martin luther' is stored in column address2 and 'king boulevard' is stored in column address3. What I thought would work is to pass a search string that looks something like this: SELECT * FROM TableName WHERE CONTAINS (*,'"Kennedy" AND "Onassis" OR "Kennedy Onassis"') In order to cover all the bases of a particular string either being in a single column or split across multiple columns. However this does not seem to bring back the expected results. Using CONTAINS with an AND in the middle doesn't really seem to work at all. Is there a better way to do this or do I just have the wrong syntax? Have I approached this completely wrong? Would it have been easier to just concatenate all the name and address fields before I put the data into SQL Server?
|
# ? Jul 12, 2010 13:33 |
|
Is there a reason to have the columns separate? If not, you could concatenate the data and store it in one column.
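If the separate columns do need to stay, one way (a sketch, assuming SQL Server and made-up column names) is a persisted computed column that concatenates them, which the full-text index can then be built over:

```sql
ALTER TABLE TableName
  ADD SearchText AS (ISNULL(Name1, '') + ' ' + ISNULL(Name2, '') + ' '
                   + ISNULL(Name3, '') + ' ' + ISNULL(Address1, '') + ' '
                   + ISNULL(Address2, ''))   -- ...and so on through the rest
  PERSISTED;
```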
|
# ? Jul 12, 2010 14:14 |
|
I would personally extract the data and programmatically concatenate and search, especially if you want to provide more to the search feature than a plaintext search (e.g. suggestions and autocompletion).
|
# ? Jul 12, 2010 14:16 |
|
gallop w/a boner posted:A local government agency provides us with a spreadsheet full of names and addresses that our users need to search for a particular business reason. The spreadsheet contains 5 name fields and 5 address fields and some other miscellaneous information. SELECT * FROM TableName WHERE CONTAINS (*, '"Kennedy Onassis"') OR CONTAINS (*, '"Kennedy" AND "Onassis"') should work.
|
# ? Jul 12, 2010 18:44 |
|
I'm working with queries in Java and I'm having problems I cannot quite explain. What I want to do is as follows: I have a sizable database of URLs. Many of them are duplicates and I would like to query the database and have it return only unique URLs from the last 24 hours, so we have this code (with an arbitrary date at the moment): code:
Now here is the kicker. I also want to then count up the number of errors a given URL has returned in the last 24 hours. All this is running on Java so the block of code in question looks like this: code:
Now, the first query works and "error_text" is most definitely a column in the table. I'm not sure where to go from here, as I can run the exact same query in GUI Tools and it works. Perhaps this is a Java question, but the problem is within the SQL query. Why on earth is it returning this error and how do I fix it? Also, perhaps there is an easier way to do what I want to do. I kinda hacked this together with a pretty limited knowledge of SQL, so if there is a better way to count the number of a specific instance of text in a column, I'm all ears! Canine Blues Arooo fucked around with this message at 16:33 on Jul 13, 2010 |
# ? Jul 13, 2010 15:40 |
|
I'm not sure if this is what's causing it, but I think you need to group records if you're going to get a count. code:
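The missing block here was presumably a grouped query; a sketch with guessed table and column names:

```sql
-- Each distinct URL from the last 24 hours, with its error count.
-- COUNT(error_text) only counts rows where error_text is non-null.
SELECT url, COUNT(error_text) AS error_count
FROM crawl_log
WHERE crawl_time >= NOW() - INTERVAL 1 DAY
GROUP BY url;
```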
|
# ? Jul 13, 2010 16:04 |
|
You're reassigning rs in the middle of your while loop and then you try to get the error_text column from your second query which does not have that column.
|
# ? Jul 13, 2010 17:00 |
|
Doctor rear end in a top hat posted:I'm not sure if this is what's causing it, but I think you need to group records if you're going to get a count. That seems kinda wasteful with the error_text in the WHERE. code:
|
# ? Jul 13, 2010 19:44 |
|
Doctor rear end in a top hat posted:I'm not sure if this is what's causing it, but I think you need to group records if you're going to get a count. This actually did end up working. Many thanks~
|
# ? Jul 13, 2010 19:56 |
|
I've got an issue here that's being a pain that maybe someone can help me figure out here. I've got a bunch of really old SQL Server 2000 databases that need to be moved to a SQL Server 2008 server. The databases themselves are largely recreated, but the schema, users, and other miscellaneous objects differ on the 2008 server so a flat "detach database and reattach it" won't work--what I really need to do is transfer just the tables from the older server to the new one. The problem is the standard "quick and stupid" method of just importing the tables doesn't work, because the identity specification for all the columns is not preserved, even when selecting to enable identity insert. Manually going in and changing each of those identity columns after transferring the tables is certainly possible, but having to do that for several hundred tables is going to be a nightmare. I'm sure I'm just missing something stupid, because there HAS to be a way to transfer tables from one database to another and still preserve whether or not columns are identity columns. SQL is really a secondary thing I do occasionally and my google-fu is not working up to par to find the answer to this one. Any ideas?
|
# ? Jul 14, 2010 19:42 |
|
baquerd posted:I would personally extract the data and programmatically concatenate and search, especially if you want to provide more to the search feature than a plaintext search (e.g. suggestions and autocompletion). Ended up concatenating and it works pretty well now. Thanks guys.
|
# ? Jul 14, 2010 21:24 |
|
Genpei Turtle posted:I've got an issue here that's being a pain that maybe someone can help me figure out here. It's probably a lot easier to take care of the non-table objects than to try to "just copy the tables." I would look into backing up the SQL 2000 databases and restoring them onto the SQL 2008 server (definitely *not* detaching and attaching database files!). Run scripts to make changes to the non-table objects.
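In outline (file names and paths invented), the backup/restore route is:

```sql
-- On the SQL 2000 server:
BACKUP DATABASE OldDb TO DISK = 'C:\backups\OldDb.bak';

-- On the SQL 2008 server; logical file names must match the backup's:
RESTORE DATABASE OldDb
FROM DISK = 'C:\backups\OldDb.bak'
WITH MOVE 'OldDb_Data' TO 'D:\data\OldDb.mdf',
     MOVE 'OldDb_Log'  TO 'D:\data\OldDb.ldf';

-- Then script the user/schema changes and bump the compatibility level:
ALTER DATABASE OldDb SET COMPATIBILITY_LEVEL = 100;
```

Identity columns survive this route intact because the whole table definition comes over, which is exactly what the import wizard loses.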
|
# ? Jul 14, 2010 22:27 |
|
For some reason this is a lot harder to figure out than I think it should be. I have an XML file that I want to import into a database table. Specifically it is the eBay CatTree.XML if anyone is familiar with it. Basically it looks like this: code:
|
# ? Jul 15, 2010 22:00 |
|
excidium posted:For some reason this is a lot harder to figure out than I think it should be. I have an XML file that I want to import into a database table. Specifically it is the eBay CatTree.XML if anyone is familiar with it. Google "XML parser <language of choice>". You could get by with a regular expression hack, but better to use something like Java's SAX parser, format your nodes into an insert query and run the query (or output to flat file for BCP if you really have a million records).
|
# ? Jul 15, 2010 22:06 |
|
excidium posted:I can not figure out HOW to do this though. Any hints/links I can read up more on? Whoops, I read this as a PHP question (I have that thread open in another window). You haven't told us which database you're working with. If you decide this needs to be done with some programming language, here's an example of how you might do it in PHP: php:
<?
$input = <<<END
<data>
    <Category>
        <CategoryID>20081</CategoryID>
        <CategoryName>Antiques</CategoryName>
    </Category>
    <Category>
        <CategoryID>20082</CategoryID>
        <CategoryName>Baseball Cards</CategoryName>
    </Category>
</data>
END;

$xml = new SimpleXMLElement($input);
foreach ($xml->Category as $cat) {
    echo "insert into blah ... $cat->CategoryID $cat->CategoryName \n";
}
?>
code:
epswing fucked around with this message at 14:37 on Jul 16, 2010 |
# ? Jul 15, 2010 22:13 |
|
Having an issue with the non-standard SQL "ON DUPLICATE KEY UPDATE" command in MySQL 5.0: code:
cynic fucked around with this message at 12:59 on Jul 16, 2010 |
# ? Jul 16, 2010 12:52 |
|
cynic posted:Having an issue with the non-standard SQL "ON DUPLICATE KEY UPDATE" command in MySQL 5.0; I suggest not using ON DUPLICATE KEY UPDATE here. Instead, check whether a clashing row exists before performing the insert. To avoid a race condition, lock the table before checking and release the lock after inserting/updating as appropriate.
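A sketch of that lock-check-act pattern (table and column names invented):

```sql
LOCK TABLES my_table WRITE;

-- Does a clashing row already exist?
SELECT id FROM my_table WHERE unique_col = 'value';

-- Depending on the answer, either:
INSERT INTO my_table (unique_col, counter) VALUES ('value', 1);
-- or:
UPDATE my_table SET counter = counter + 1 WHERE unique_col = 'value';

UNLOCK TABLES;
```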
|
# ? Jul 16, 2010 13:35 |
|
Hammerite posted:I think this is happening because your subquery is returning an empty set... Makes sense... I had a think about that, realised I was massively overcomplicating things, and made something that worked: code:
|
# ? Jul 16, 2010 13:47 |