|
Rakshas posted:I have two tables - the first one has a bunch of company information and user#s. The second table is strange, as it has a bunch of rows I need formatted as columns in the first table. I need one query to rule them all. I'd like to run the query and get something that resembles the Result Table. I'm being kind of lazy tonight and don't have the book handy that I wanted to look this up in (I know it covers this topic, since I've used it for this before). The book is SQL Cookbook by Anthony Molinaro. It is kind of nice because it lays out solutions to problems like this in SQL Server/MySQL/DB2 & Postgres side by side. I'll try to remember to read that section again tomorrow to refresh my memory, and I'll post something then. Anyway, if the columns are always going to be fixed, something like this might work: http://saveabend.blogspot.com/2006/10/mysql-crosstabs.html
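For the curious, the crosstab trick behind that link boils down to conditional aggregation: one MAX(CASE ...) per fixed column. A minimal sketch using Python's sqlite3, with invented table and column names since the original tables aren't shown:

```python
import sqlite3

# Hypothetical schema: one row per (company, attribute) pair that we
# want pivoted into one column per attribute.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE company_attrs (company TEXT, attr TEXT, value TEXT);
    INSERT INTO company_attrs VALUES
        ('Acme',  'phone', '555-1234'),
        ('Acme',  'fax',   '555-5678'),
        ('Bravo', 'phone', '555-9999');
""")

# Conditional aggregation: each fixed output column gets its own
# MAX(CASE ...); rows that don't match yield NULL and drop out of MAX.
rows = conn.execute("""
    SELECT company,
           MAX(CASE WHEN attr = 'phone' THEN value END) AS phone,
           MAX(CASE WHEN attr = 'fax'   THEN value END) AS fax
    FROM company_attrs
    GROUP BY company
    ORDER BY company
""").fetchall()
print(rows)
```

The same pattern works in MySQL as long as the set of columns is known up front.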
|
# ? Apr 9, 2008 02:49 |
|
|
da keebsta knicca posted:I have deleted a bunch of indexes and cleared everything out still no go with this simplified SQL. And how many rows of data are in the table you are running the query against? Make sure you have enough data in the table that MySQL isn't "optimizing away" your results because of the 50% rule. From here: quote:A word that matches half of the rows in a table is less likely to locate relevant documents. In fact, it most likely finds plenty of irrelevant documents. We all know this happens far too often when we are trying to find something on the Internet with a search engine. It is with this reasoning that rows containing the word are assigned a low semantic value for the particular data set in which they occur. A given word may reach the 50% threshold in one data set but not another. If you're sure you have varied data inserted, run "REPAIR TABLE `job_postings`". Then run a query with EXPLAIN and a suitably-unique word for the AGAINST clause: code:
Here's what I put in my table for the test: code:
code:
|
# ? Apr 9, 2008 05:08 |
I wanna add the ability to search my site and I'm wondering if I should even bother with MySQL's fulltext after hearing so many bad things about it. What are some alternatives I should look at that I can use with PHP?
|
|
# ? Apr 9, 2008 05:28 |
|
I've got two questions, one SQL related, and the other kinda SQL related. First: Does anyone have experience with Greenplum? I haven't been able to find any good resources for using it efficiently. Amazon has no books and I haven't been able to find any sites on the topic. We're starting to transition to it at work for reporting (from SQL Server) and some things are much faster, but the design philosophy behind some of the queries seems different from how I would do it on SQL Server. Any help at all would be very useful. Second: Does anyone have experience automating SQL query parameters from an e-mail? The e-mail is in a set format every day and is just plain text, so a number of tools could be used to parse out the data. The problem is that I have no idea how to go about taking the e-mail from when I receive it in Outlook and getting it to the SQL server to run.
|
# ? Apr 9, 2008 19:53 |
|
Alright, I have a design issue. Suppose I want to record locations like "123 East Street" or "685 Candy Cane Lane". Those are simple enough to store in a table called "Addresses". But each address could have zero or more different objects like "house", "mailbox", "garage", "pond", "barn", "shed", etc., and each object has its own independent attributes like "size", "color", "number of cars", "livestock", etc. The attributes of one object are not necessarily the same as another's. My problem is: how do I record all of these objects and their attributes in SQL tables without having a million tables or a million columns? Thanks!
|
# ? Apr 9, 2008 20:39 |
|
You could make a linked table that is an EAV model. Your description sounds pretty much like it falls in the one scenario where you'd want to implement that.
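For illustration, a minimal EAV layout along those lines might look like the following sketch (sqlite3, all names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE addresses (
        address_id INTEGER PRIMARY KEY,
        street     TEXT
    );
    CREATE TABLE objects (
        object_id  INTEGER PRIMARY KEY,
        address_id INTEGER REFERENCES addresses(address_id),
        kind       TEXT              -- 'house', 'mailbox', 'barn', ...
    );
    -- The EAV part: one row per (object, attribute) pair, so each
    -- object can carry whatever attributes it likes.
    CREATE TABLE object_attrs (
        object_id INTEGER REFERENCES objects(object_id),
        attr      TEXT,
        value     TEXT
    );
    INSERT INTO addresses VALUES (1, '123 East Street');
    INSERT INTO objects   VALUES (10, 1, 'barn');
    INSERT INTO object_attrs VALUES
        (10, 'color',     'red'),
        (10, 'livestock', 'cows');
""")

# All attributes of every object at an address:
rows = conn.execute("""
    SELECT o.kind, a.attr, a.value
    FROM objects o JOIN object_attrs a ON a.object_id = o.object_id
    WHERE o.address_id = 1
    ORDER BY a.attr
""").fetchall()
print(rows)
```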
|
# ? Apr 9, 2008 20:47 |
|
I've been on the defensive end of a few EAV fights, but I'll always say that this is not the case to use it. If you can model your relationships as tables, then do it. There's no reason to fear having a lot of tables, that's exactly what the database is there for. EAV gains traction in cases where the application has to be designed to allow dynamic run-time modification of the relationships by the users on a constant basis. I don't want to get dragged into this argument again but that's where at least some people will justify using it. If you can model your relationships statically, it's always the way to go. Edited to add: the reason to hate on this so much is that if you're just going with a "dynamic" model to avoid having to manage all of these relationships, it's just self-delusion. These relationships need to be defined and managed somewhere, and if you don't do it via your schema design, you'll end up doing it either in the table data or in your application code. It will end up being more work, more prone to problems, harder to version control, and it will piss off everyone who has to work on it later. npe fucked around with this message at 21:04 on Apr 9, 2008 |
# ? Apr 9, 2008 20:59 |
|
For what it is worth, this will run on a PDA and the user will supply their own list of objects and attributes.
|
# ? Apr 9, 2008 21:24 |
|
genericadmin posted:Helping me.. EDIT: Creating a new table worked.. thanks everyone da keebsta knicca fucked around with this message at 23:47 on Apr 9, 2008 |
# ? Apr 9, 2008 22:05 |
|
yaoi prophet posted:I've been on the defensive end of a few EAV fights, but I'll always say that this is not the case to use it. If you can model your relationships as tables, then do it. There's no reason to fear having a lot of tables, that's exactly what the database is there for. I also hate EAV. We have a system that uses it at work and it is a hosed up piece of poo poo. Constraints end up being an afterthought you have to enforce in code -- think of a house that has 4 address attributes that are all active -- this happens in EAV if the coders aren't careful, and the coders will never be careful enough. Which loving address is correct? We made an arbitrary rule at work to use the minimum active attribute of a given type.. but this is just loving bullshit. Data access is also slow, because usually you end up with functions that take a class id, entity id, and type id. Most of those functions blow away indexes. Stay away unless it is the only option.
|
# ? Apr 10, 2008 05:33 |
|
I am trying to query a table with almost a million rows and the results are running slow as hell. I created an index on the datetime column 'view_date', however my query now returns no rows at all! code:
1 SIMPLE NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const table ...is the result I get when I EXPLAIN the query. Can anyone tell me why my datetime index is failing?
|
# ? Apr 10, 2008 15:52 |
|
How can I make several tables keep their timestamps in sync with a central watch table? code:
Can this be done with foreign keys or do I need to use a trigger?
|
# ? Apr 10, 2008 16:18 |
|
stack posted:How can I make several tables keep their timestamps in sync with a central watch table? EDIT: But yes, if you insist on doing it the other way (or are unable to change how things work) I'm pretty sure the only way to do it automagically would be to use a trigger.
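A sketch of the trigger approach, shown here in SQLite syntax via Python (MySQL's CREATE TRIGGER looks much the same); the table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE watch  (tbl TEXT PRIMARY KEY, last_update TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    INSERT INTO watch VALUES ('orders', NULL);

    -- Whenever a row lands in orders, stamp the central watch table.
    -- One such trigger per watched table (and per UPDATE/DELETE too,
    -- if those should also bump the timestamp).
    CREATE TRIGGER orders_touch AFTER INSERT ON orders
    BEGIN
        UPDATE watch SET last_update = datetime('now') WHERE tbl = 'orders';
    END;
""")

conn.execute("INSERT INTO orders (item) VALUES ('widget')")
stamp = conn.execute(
    "SELECT last_update FROM watch WHERE tbl = 'orders'").fetchone()[0]
print(stamp)
```

Foreign keys can't do this; they only enforce that a referenced row exists, they never write data for you.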
|
# ? Apr 10, 2008 17:21 |
|
Okay, so instead of an EAV design, I'm thinking about this: code:
Is this a really lovely idea? I don't have a lot of database training/education. I work mostly with applications, but every now and then I have to worry about database stuff, so I appreciate the help. Reminder that this will run on a PDA. Also, there really isn't a whole lot of "reading" going on. It is mostly writing (data collection).
|
# ? Apr 10, 2008 18:42 |
|
Maybe I'm lost in the definition somewhere, but isn't that essentially an EAV model?
|
# ? Apr 10, 2008 18:47 |
|
Yes, that's EAV. If you need dynamic modification of your model at run time, you essentially have two options: (1) modify your literal schema dynamically using the data definition language (create/alter tables, basically) called from the application code (2) use an EAV approach and basically implement a database on top of your database as you are doing with your example They are both lovely approaches. Which one is worse is going to depend on the needs of your application, the platform you are on, and probably your own biases. Actually, the third option that is advocated by Oracle guru Tom Kyte is to create as many columns as a table will allow, and name them something like FIELD_001 through FIELD_999 or what have you. Then you just deal with your relationships that way, somehow. I'm a huge fan of his but I have to admit this is pretty npe fucked around with this message at 18:59 on Apr 10, 2008 |
# ? Apr 10, 2008 18:54 |
|
Aredna posted:Second: Does anyone have experience automating SQL query parameters from an e-mail? The e-mail is in a set format every day and is just plain text so a number of tools could be used to parse out the data. The problem is that I have no idea how to go taking the e-mail from when I receive it in Outlook and getting it to the SQL server to run. I'd recommend making a C# console application. The C# application would query your mail server, get the e-mail body text, parse it, and then make the appropriate queries in the database. I have never actually accessed e-mail from C#, but I am pretty sure that there must be some way to do it. You could always ask on the .NET megaquestion thread. Accessing sql server from C# is fairly trivial (where I work we use the wonderful SqlHelper class to wrap low-level details -- see the article here: http://aspnet.4guysfromrolla.com/articles/062503-1.aspx). Since the application will compile and result in an exe, it can be set to run every day through windows scheduled task manager.
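For what it's worth, the parse-and-query half can be sketched in a few lines of Python (sqlite3 standing in for SQL Server here, and the e-mail field names are invented); the same shape works from a C# console app:

```python
import re
import sqlite3

# A made-up fixed-format daily e-mail body.
body = """Report date: 2008-04-10
Region: East
"""

# Pull "Label: value" pairs out of the plain-text body.
fields = dict(re.findall(r"^(\w[\w ]*):\s*(.+)$", body, re.MULTILINE))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report_params (report_date TEXT, region TEXT)")
# Parameterized query -- never paste e-mail text straight into SQL.
conn.execute("INSERT INTO report_params VALUES (?, ?)",
             (fields["Report date"], fields["Region"]))
row = conn.execute("SELECT * FROM report_params").fetchone()
print(row)
```

Scheduled via Windows Task Scheduler (or cron), this runs hands-off once the mailbox fetch is wired up.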
|
# ? Apr 10, 2008 20:12 |
|
Assume the following table: code:
code:
3 5 4
that is, every "child" with 2 as "parent", directly or indirectly. Now the question(s):
- What is the simplest way to do this in MySQL and not use code?
- What is the simplest way to do this with code (PHP)?
- What is fastest? Doing this in SQL or recursive code?
My googling only resulted in stuff like "MySQL will have this function soon" and "you should use some kind of hierarchic structure in your table with references to left and right nodes".
|
# ? Apr 11, 2008 14:29 |
|
zapateria posted:- What is the simplest way to do this in MySQL and not use code? Your naming convention is bad ('parent' and 'child'). The way you have it the first element should be called 'node' and the second should be called 'child'. Actually, this is usually a bad way to store trees, since you can have multiple children but not multiple parents, so probably what you want to do is use a NODE|PARENT format, with a PARENT of 0 indicating a root level node. This is how it's usually done. It's referred to as the adjacency list method. You should not attempt to reconstruct a tree using an adjacency list entirely in MySQL. It's slow and the code is silly. It's much faster to do it in PHP. Here's an overview of how. Select all the nodes in the tree. Create two node lists: one sorted by node and the other by parent. For each node construct a CHILDREN array. Then you can move a pointer/index through each of the two lists. At each *node, increment through the parents and add matching entries. Whenever *parent is greater than *node, increment node and continue. Stop whenever you hit the bottom of either list. This method is linear in the number of nodes in the tree. Then you have a traditional tree with a CHILDREN array, and you can perform traditional tree functions on it. Alternatively, if you don't need all this tree functionality, and you just need a list of children, you can treat it like a connectivity problem in a directed graph. Select everything out and use the arrays to trace back and color everything connected to '2'.
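If all you need is the list of children, the graph-walk version is short. A sketch in Python, assuming the rows come back as (parent, child) pairs matching the example data:

```python
from collections import defaultdict, deque

# Hypothetical (parent, child) rows: 2 -> {3, 5} and 3 -> {4},
# matching the "3 5 4" result in the question.
rows = [(2, 3), (2, 5), (3, 4)]

children = defaultdict(list)
for parent, child in rows:
    children[parent].append(child)

def descendants(node):
    """Breadth-first walk collecting every node reachable from `node`."""
    found, queue = [], deque(children[node])
    while queue:
        n = queue.popleft()
        found.append(n)
        queue.extend(children[n])
    return found

print(descendants(2))   # BFS order for this data: 3, 5, 4
```

One SELECT pulls the whole edge list, and the walk is linear in the number of rows; no per-level round trips to MySQL.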
|
# ? Apr 11, 2008 18:58 |
|
zapateria posted:My googling only resulted in stuff like "MySQL will have this function soon" and "you should use some kind of hierarchic structure in your table with references to left and right nodes". The left and right nodes refer to the nested set model, which has slow inserts but really fast reads. With that model you will also be able to get your entire tree with a single SQL statement and no recursive loops. To go that route you will need a set of queries for reading the tree or a subset of the tree, and also for inserting records. A good book on that, which will have all the SQL you need, is by a guy named Joe Celko. http://www.amazon.com/Hierarchies-Smarties-Kaufmann-Management-Systems/dp/1558609202/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1207945269&sr=8-3 That book also covers the adjacency model, and probably however you have it set up now. To solve the problem you are currently facing, you can either do a recursive solution, or if the data is of a fixed depth you can do multiple left joins. Like if the deepest child is 4 deep, you would do 3 left joins back to the same table (or something like that, it's been a while since I have done it).
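To make the left/right numbering concrete: in the nested set model each node gets a left and right number from a depth-first walk, and a node's subtree is everything whose numbers fall strictly inside its interval. A sketch with hand-assigned numbers:

```python
# (name, lft, rgt) assigned by a depth-first walk of this tree:
#   root(1,8) -> a(2,5) -> a1(3,4)
#             -> b(6,7)
nodes = [
    ("root", 1, 8),
    ("a",    2, 5),
    ("a1",   3, 4),
    ("b",    6, 7),
]

def subtree(name):
    """Every descendant of `name`: lft/rgt strictly inside its interval.
    In SQL this is WHERE child.lft > p.lft AND child.rgt < p.rgt."""
    lft, rgt = next((l, r) for n, l, r in nodes if n == name)
    return [n for n, l, r in nodes if lft < l and r < rgt]

print(subtree("a"))     # ['a1']
print(subtree("root"))  # ['a', 'a1', 'b']
```

Reads are one range comparison; the cost is that inserting a node means renumbering everything to its right, hence the slow writes.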
|
# ? Apr 11, 2008 21:23 |
|
I'm using DBI with Perl against MySQL. I know Perl inside and out, but have very little database experience. I want to iterate all rows in a very large table (millions of rows). Hence: code:
I thought that the "prepare, execute, fetch, fetch, fetch, ..." paradigm meant that the database would get ready to start streaming results at execute(), then walk a cursor through the results with each fetch(). Instead, it seems like execute() wants to collect all the results, and fetch() will just iterate them locally. Am I doing something wrong? Do I need to use LIMIT to get a subset, and wrap it in a larger loop to walk the LIMIT through the table? That would seem to defeat the purpose of having a database...
|
# ? Apr 11, 2008 22:03 |
|
Stabby McDamage posted:I'm using DBI with Perl against MySQL. I know Perl inside and out, but have very little database experience. The result set is being buffered by DBD::MySQL, like you said. By default, the driver uses "mysql_store_result()" to buffer the result locally. MySQL also provides "mysql_use_result()", which works more like you described, but at a possibly serious performance hit at the server (see here). Unless you're doing this on an isolated box where you're the only client, I would probably go with using LIMIT as you said. Just remember that for "LIMIT 5000, 20", the server actually reads rows 1-5000 before sending you the next twenty you actually want, so you might want a fully-indexed subquery involved that pulls the keys you need first and passes those to an outer query that gets the full row data. Server-side prepare (which is probably not on by default, depending on your DBD::MySQL version) will prepare the query on the server, so that the client needs only send the parameters for the query on each execute, rather than sending the entire block of SQL + parameters and doing the query parsing step each time. It's mostly useful for bulk write operations. To use it, you need the "mysql_server_prepare=1" option included in the DSN given to connect(), but read the docs first. Check out DBD::MySQL docs and source, and the MySQL C API docs and it demystifies a lot of things. MySQL sucks. Enjoy.
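The fully-indexed-subquery idea generalizes to what's now usually called keyset pagination: rather than LIMIT offset, count, remember the last key you fetched and seek past it, so the server never re-reads skipped rows. A sketch with Python's sqlite3 (a million-row MySQL table behaves the same way in principle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO big VALUES (?, ?)",
                 [(i, f"row {i}") for i in range(1, 101)])

def scan(batch=30):
    """Walk the whole table in key order, one indexed batch at a time.
    'WHERE id > last' is an index seek; 'LIMIT 5000, 20' is not."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM big WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch)).fetchall()
        if not rows:
            return
        yield from rows
        last_id = rows[-1][0]

total = sum(1 for _ in scan())
print(total)
```

Each batch is a bounded query, so neither the client driver nor the server has to hold the full result set at once.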
|
# ? Apr 12, 2008 03:35 |
|
genericadmin posted:(The answer) genericadmin posted:MySQL sucks. Enjoy. I'm rapidly finding that out. It took weeks to get one table loaded because of a particular index field having massive overhead. Is Postgres any better? This is single-user data analysis -- I'm doing machine learning on a dump of Wikipedia.
|
# ? Apr 12, 2008 20:27 |
|
Stabby McDamage posted:Thank you thank you thank you. I was rapidly losing hope. I'll try that out when I get back to work. Postgres tends to be faster when using constraints or with multiple users, to be more reliable, and to have more features. MySQL tends to be a bit easier to use and set up.
|
# ? Apr 12, 2008 21:34 |
|
I have two DBs; for the time being one is Oracle and one is MySQL, but I might move everything to Oracle. OraDB has one table: hour_vl (key, value). MySQLDB has one table: maaler (key). The maaler table has a small (< 1000) subset of the keys used in the hour_vl table, and hour_vl is huge. Right now (don't beat me up) I do the following to get the metering values for the metering equipment in maaler: code:
I don't really want to replicate any data between the two DB's, and I have limited experience with database-links. Is there any better way to do this? zapateria fucked around with this message at 13:26 on Apr 13, 2008 |
# ? Apr 13, 2008 13:24 |
|
zapateria posted:Is there any better way to do this? Yeah, you use one database.
|
# ? Apr 13, 2008 17:55 |
|
Begby posted:Yeah, you use one database. That's unfortunately not an option, since I only have read-access to the OraDB.
|
# ? Apr 13, 2008 18:57 |
|
zapateria posted:I have two DBS, for the time being one is Oracle and one is MySQL but I might move everything to Oracle. You could write a quick little program in Python to do it -- you would need a module to connect to MySQL, and cx_Oracle to connect to Oracle. Create a list to hold the maaler values, and then set up a loop to grab 200(?) values from Oracle at a time, and output to a CSV file. But I think I'm giving you a Glass Bottle or Dirty Shoe answer: http://weblogs.asp.net/alex_papadimoulis/archive/2005/05/25/408925.aspx
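The grab-200-at-a-time loop is just chunking the maaler keys into IN-lists. A sketch of that batching half in Python, with sqlite3 standing in for the Oracle side (table names from the post, data invented):

```python
import sqlite3

# Stand-in for the Oracle hour_vl table.
ora = sqlite3.connect(":memory:")
ora.execute("CREATE TABLE hour_vl (key INTEGER, value REAL)")
ora.executemany("INSERT INTO hour_vl VALUES (?, ?)",
                [(k, k * 1.5) for k in range(1, 501)])

# Keys that came out of the MySQL maaler table.
maaler_keys = list(range(1, 451))

def fetch_values(keys, batch=200):
    """Query hour_vl in IN-list chunks so no single statement gets huge
    (Oracle caps IN-lists at 1000 entries anyway)."""
    out = []
    for i in range(0, len(keys), batch):
        chunk = keys[i:i + batch]
        placeholders = ",".join("?" * len(chunk))
        out += ora.execute(
            f"SELECT key, value FROM hour_vl WHERE key IN ({placeholders})",
            chunk).fetchall()
    return out

rows = fetch_values(maaler_keys)
print(len(rows))
```

With read-only access on the Oracle side, this client-side join (or a CSV dump as suggested) is about the best available short of a proper database link.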
|
# ? Apr 13, 2008 19:51 |
|
Full text search question. I am doing this job search still, but I am having trouble with MySQL returning results based on searches with terms like "C++" and "C#". I think it only searches against "C". Is there a way to make sure it searches against the "++" characters also? The problem is that I will have to make the index minimum 1 character instead of 2 characters. I didn't even want it this small, but needed to search "PHP". I guess my project is a little different from what full text would normally be used for, but I need a quick search against an already pretty complex, badly designed database.
|
# ? Apr 14, 2008 03:59 |
|
da keebsta knicca posted:I am doing this job search still, but I am having trouble with mysql returning results based on searches with terms like "C++" and "C#". I think it only searches against "C". Is there a way to have it to make sure it searches against the "++" characters also?
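If changing the server's word-character set turns out to be off the table, another common dodge is to normalize the troublesome terms to plain-word tokens both at index time and at query time, so the two sides always agree. A sketch (the substitution table is invented):

```python
# Hypothetical substitution table applied to text before it is indexed
# AND to the user's query before searching. Order matters: "c++" must
# be handled before a bare "c" rule would ever see it.
SUBS = {"c++": "cplusplus", "c#": "csharp", ".net": "dotnet"}

def normalize(text):
    out = text.lower()
    for raw, token in SUBS.items():
        out = out.replace(raw, token)
    return out

print(normalize("Senior C++ / C# developer"))
```

The stored text is untouched; only the column feeding the fulltext index (and the search string) goes through normalize(), so display is unaffected.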
|
# ? Apr 14, 2008 05:13 |
|
kalleboo posted:Try adding + as a word character: http://dev.mysql.com/doc/refman/5.0/en/fulltext-fine-tuning.html Ack... thanks. Requires me to have meetings with 2 different people to discuss possible implications site wide and all this junk. Changing a character XML file shouldn't affect the display of any of these special characters, should it? And only affect searches? Right now this will be the first thing we actually use full text search with, so it shouldn't be a problem, right? Yet we display a lot of special characters and stuff from our databases, because of the nature of us having a lot of data from international people. da keebsta knicca fucked around with this message at 05:45 on Apr 14, 2008 |
# ? Apr 14, 2008 05:42 |
|
da keebsta knicca posted:Ack... thanks. Requires me to have meetings with 2 different people to discuss possible implications site wide and all this junk. It won't affect the data at all, just what characters are used to build the fulltext index. Remember that '+' is a boolean operator also, and that increasing the indexed character set will increase the index size and lower performance.
|
# ? Apr 14, 2008 13:49 |
|
Can I do this? I know I can just make it two update statements with a where clause, but I wonder if a single update is any better. This is SQL Server 2005. And Victor already showed me how to exclude a record from a sum using case else 0, but how do you exclude something from a count? The documentation says count all will exclude nulls, but Victor says he's never seen all used that way. How else would you use it? code:
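On the COUNT half: COUNT ignores NULLs, so the usual trick is a CASE with no ELSE, so that rows failing the condition yield NULL and drop out of the count. Demonstrated with Python's sqlite3 (invented table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount INTEGER, voided INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(100, 0), (250, 0), (75, 1)])

# SUM excludes with CASE ... ELSE 0; COUNT excludes with CASE and *no*
# ELSE, because COUNT(expr) skips rows where expr is NULL.
total, live = conn.execute("""
    SELECT SUM(CASE WHEN voided = 0 THEN amount ELSE 0 END),
           COUNT(CASE WHEN voided = 0 THEN 1 END)
    FROM orders
""").fetchone()
print(total, live)
```

The same expressions work verbatim in SQL Server 2005. (COUNT(ALL expr) is just the default form of COUNT(expr); the ALL keyword only exists to contrast with DISTINCT, which is why you rarely see it written out.)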
|
# ? Apr 14, 2008 20:51 |
|
I think this will work: code:
vvv My apologies... on occasion I try to use the equivalent of an expression from another language in SQL >< Aredna fucked around with this message at 21:27 on Apr 14, 2008 |
# ? Apr 14, 2008 21:13 |
|
Aredna posted:
What does the comma 1 part do? I understand what max does, but not with multiple arguments in the expression.
|
# ? Apr 14, 2008 21:20 |
|
I have a text column in a table that has random commas in it. How can I strip the commas out of each row in the column? I want the commas removed, but they can be replaced with a different character if that makes it easier. This is in a MS SQL 2000 database/server.
|
# ? Apr 14, 2008 22:28 |
|
StonedogJones posted:I have a text column in a table that has random comma's in it. I think Replace() should do the trick http://doc.ddart.net/mssql/sql70/ra-rz_6.htm
|
# ? Apr 15, 2008 02:41 |
|
OK, I read about 4 pages of this thread (first 2 and last 2) and nobody has really answered this kind of question, so here it goes. I have a CD, SQL Server 2005, and it has several versions on it (Enterprise, Standard). I'm sort of new at this and I've been trying to find help on installing from the CD. When I put it in and open it, it opens in Internet Explorer, and it has a list of links. So I hit the one that looked best (Standard 32-bit) and found the setup; then this is where I get confused. When setting it up, it says it's installing Office 2003 web components or something along those lines, and then that's all. I don't know what to do from here. Sorry for this being so long for such an easy question; basically I want to install it and I am too stupid to figure out how! Please help!
|
# ? Apr 15, 2008 03:26 |
|
Xae posted:I think Replace() should do the trick No, I don't think REPLACE will do it. The column has different text in each row, so I can't search for a string and then replace something in that string, because the string is not constant for each row.
|
# ? Apr 15, 2008 05:06 |
|
|
StonedogJones posted:no i dont think replace will do it. the column has different text in each row, so i cant search for a string, and then replace something in that string, because the string is not constant for each row. Xae is right, replace should do fine. Assuming the table has a unique key you could do something like the following: select 'update my_table set text_column = ''' + replace(replace(text_column, ',', ''), '''', '''''') + ''' where key = ' + cast(key as varchar(100)) as cmd from my_table The outer replace is to escape any single quotes in the text, you can skip it if you know it's clean. Spool the output of that to a file and execute it and you should be all set.
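For what it's worth, if the column is varchar rather than the old text datatype (SQL Server 2000's REPLACE won't accept text directly, which may be why the generated-script detour is needed), a single set-based UPDATE does the whole job. Demonstrated with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO notes (body) VALUES (?)",
                 [("one, two, three",), ("no commas here",)])

# REPLACE rewrites every occurrence in every row; the surrounding text
# doesn't need to be constant, which is the point being missed above.
conn.execute("UPDATE notes SET body = REPLACE(body, ',', '')")
bodies = [r[0] for r in conn.execute("SELECT body FROM notes ORDER BY id")]
print(bodies)
```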
|
# ? Apr 15, 2008 06:13 |