|
I ended up using CONVERT(varchar(64), blahdata) Yeah, Configuration Manager gathers all kinds of goodies into an MS SQL database. It has nice reporting tools built-in, but the bigwigs wanted to see it in a pretty interface and be able to add additional information. So I made a FileMaker database (more expertise around here for it than anything else similar) that essentially runs a few SQL calls every morning and populates the pretty fields.
|
# ? Aug 11, 2010 22:31 |
|
I have a database that can have anywhere from a few hundred to a few thousand tables in it and they all have a 'Name' column. What would the best way be to give me a list of the most common names? Right now I am doing it in code and I get an output but it takes a long time. Is there a way that you can do this with SQL?
|
# ? Aug 16, 2010 04:17 |
|
IratelyBlank posted:I have a database that can have anywhere from a few hundred to a few thousand tables in it and they all have a 'Name' column. What would the best way be to give me a list of the most common names? Right now I am doing it in code and I get an output but it takes a long time. Is there a way that you can do this with SQL? Most (all?) database vendors expose database object metadata through a set of views. In Oracle for example you could get a list of all the tables you care about with something like code:
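The elided query was presumably something along these lines (a sketch against Oracle's `all_tab_columns` data dictionary view; the exact column filter is an assumption):

```sql
-- List every table that has a column named NAME
SELECT owner, table_name
FROM   all_tab_columns
WHERE  column_name = 'NAME';
```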
You could then take the table names from this and build up some kind of massive query with a bunch of UNIONs to tell you what you want to know. It sounds like it would still be pretty slow though.
|
# ? Aug 16, 2010 08:12 |
|
IratelyBlank posted:I have a database that can have anywhere from a few hundred to a few thousand tables in it and they all have a 'Name' column. What would the best way be to give me a list of the most common names? Right now I am doing it in code and I get an output but it takes a long time. Is there a way that you can do this with SQL? How on earth did you end up with a database structure with a few hundred to a few thousand tables? Are you doing some sort of dynamic thing where you are adding in tables when data gets added? If so then that is the real problem that needs to get solved as that is going to give you more and more headaches down the road.
|
# ? Aug 16, 2010 14:46 |
|
Goat Bastard posted:Most (all?) database vendors expose database object metadata through a set of views. I'm using MySQL and that is how I have been doing it through code. I did a "show tables;" to get my list of table names and then I created a new table and did a: code:
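The elided code was presumably a generated batch of statements like this (table and column names here are placeholders; one INSERT per table returned by SHOW TABLES):

```sql
-- Scratch table collecting the Name column from every table
CREATE TABLE all_names (Name VARCHAR(255));

-- Generated in application code, once per table name:
INSERT INTO all_names (Name) SELECT Name FROM table_0001;
INSERT INTO all_names (Name) SELECT Name FROM table_0002;
-- ... and so on for every table in the schema
```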
|
# ? Aug 16, 2010 15:16 |
|
Begby posted:How on earth did you end up with a database structure with a few hundred to a few thousand tables? Are you doing some sort of dynamic thing where you are adding in tables when data gets added? If so then that is the real problem that needs to get solved as that is going to give you more and more headaches down the road. The program I wrote imports from a number of sources but the data setup is basically the same. Each table is a unique name of an item, person, street, whatever and each one of these unique items has hundreds of names that belong to it. Each contained name can also be one of the unique table names. I thought the best method would be to just create a table for each unique item and then add 1 column to the table and have each row be the sub-names. I don't know if this makes sense, I will draw a picture: code:
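A hypothetical reconstruction of the layout being described (the item names are invented for illustration): one table per unique item, each holding nothing but a single column of sub-names.

```sql
-- One table per unique item; the lone column holds its sub-names
CREATE TABLE smith      (Name VARCHAR(255));  -- rows: sub-names of "smith"
CREATE TABLE mainstreet (Name VARCHAR(255));  -- rows: sub-names of "mainstreet"
-- ... hundreds or thousands more, one table per item
```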
|
# ? Aug 16, 2010 15:29 |
|
IratelyBlank posted:Does this make sense? If there is a better way to do it I am willing to change things around. This isn't a good design - RDBMSs aren't designed to treat data like this. What you should probably do instead of this is have two tables, TopName and SubName (say). The rows in TopName correspond to your tables in your present setup. The rows in SubName correspond to the rows in the many tables you currently have. SubName needs (at minimum) two columns, one of which is your "sub-name" and one of which is a foreign key linking back to TopName. Consider (obviously change the datatypes of the string column as appropriate): code:
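The elided DDL was presumably along these lines (a MySQL-flavoured sketch; the TopName/SubName names and the foreign key come from the post, while the ID columns and datatypes are assumptions):

```sql
CREATE TABLE TopName (
    ID   INT          NOT NULL AUTO_INCREMENT,
    Name VARCHAR(255) NOT NULL,   -- one row per former table
    PRIMARY KEY (ID)
);

CREATE TABLE SubName (
    ID        INT          NOT NULL AUTO_INCREMENT,
    TopNameID INT          NOT NULL,   -- foreign key back to TopName
    Name      VARCHAR(255) NOT NULL,   -- the "sub-name"
    PRIMARY KEY (ID),
    CONSTRAINT FK_SubName_TopName
        FOREIGN KEY (TopNameID) REFERENCES TopName (ID)
);
```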
|
# ? Aug 16, 2010 15:43 |
|
Yeah, that's not a database, you're just using tables as textfile placeholders. Your initial question, and the fact that you have to write a script to create gigantic queries that take forever just to do the simplest count(*)-sorted-desc listing, is a hint at how generally unusable the design is in the first place. Now I'm still curious whether there's a valid real-world db design that justifiably needs 1000+ genuinely distinct tables.
|
# ? Aug 16, 2010 15:51 |
|
Hammerite posted:This isn't a good design - RDBMSs aren't designed to treat data like this. Wow this is great, thanks a lot for this. I tried inserting a few random values and the results look good, I will rewrite my code to insert my data into this table and see how it goes. Thanks again.
|
# ? Aug 16, 2010 16:12 |
|
Begby posted:How on earth did you end up with a database structure with a few hundred to a few thousand tables? Are you doing some sort of dynamic thing where you are adding in tables when data gets added? If so then that is the real problem that needs to get solved as that is going to give you more and more headaches down the road. While it sounds like he's got bigger problems, hundreds or even thousands of tables is not terribly unusual. code:
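One way to see how many tables a schema actually holds, for the MySQL case under discussion (a sketch against `information_schema`):

```sql
-- Table count per schema, largest first
SELECT   table_schema, COUNT(*) AS table_count
FROM     information_schema.tables
GROUP BY table_schema
ORDER BY table_count DESC;
```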
|
# ? Aug 16, 2010 19:46 |
|
That brings up a relational question I've had. If there were a table where there were 1000 possibilities for a book, and 50,000 entries, you would rightly have a table with a book_id and a book_name (and any other info that the book might have), and use the book_id in the main table. But what if there is no other information, and there's only three books? Personally I was ready to make a lookup table with 3 rows and 2 columns, but they said just put the book title straight into the main table. Is this good design, bad design, neutral design, ?
|
# ? Aug 16, 2010 21:25 |
|
It's bad design and if you ever need to expand the system to cover a substantial number of books later, it's going to bite you in the rear end. On the other hand if you know it's *never* going to expand, it's e: Tables are not resource hogs, loving use them aaaaargh.
|
# ? Aug 16, 2010 22:18 |
|
Golbez posted:That brings up a relational question I've had. If there were a table where there were 1000 possibilities for a book, and 50,000 entries, you would rightly have a table with a book_id and a book_name (and any other info that the book might have), and use the book_id in the main table. Volumes being small is almost never a good reason not to use a table anyway.
|
# ? Aug 16, 2010 22:25 |
|
Hammerite posted:... Now that I have these tables populated how do I go about querying the data to find out which 'Name' appears the most often? I have tried: code:
code:
fake e: after that last query I thought to try: code:
|
# ? Aug 17, 2010 02:33 |
|
IratelyBlank posted:Now that I have these tables populated how do I go about querying the data to find out which 'Name' appears the most often? I have tried: That's not quite how that table works. code:
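The elided answer was presumably a plain GROUP BY against the SubName table from the two-table design upthread (the LIMIT is an assumption):

```sql
-- Most common names, most frequent first
SELECT   Name, COUNT(*) AS Occurrences
FROM     SubName
GROUP BY Name
ORDER BY Occurrences DESC
LIMIT    10;
```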
|
# ? Aug 17, 2010 06:14 |
|
Thel posted:It's bad design and if you ever need to expand the system to cover a substantial number of books later, it's going to bite you in the rear end. No Safe Word posted:Volumes being small is almost never a good reason not to use a table anyway. Thanks for the affirmation. They wanted to keep the list of tables as clean as possible, I suppose. Thankfully it doesn't appear to be climbing beyond 3 anytime soon, nor will we need more than the title, ... though I just remembered we're also recording the chapters. Yeah. Hm. Oh well. Someday I'll convince them to rejigger it, around the same time I convince them to move on from mysql to mysqli/PDO. Yeah. Edit: Oh, and thanks for giving me a good line when proposing things like this, "Tables are not resource hogs, use them."
|
# ? Aug 17, 2010 14:17 |
|
Really quick question that I'm sure someone can help me with. I'm having some SQL issues with a really simple DB (using MySQL). The DB is pretty simple, basically just: | id# | fname | lname | startday | startmonth | startyear | endday | endmonth | endyear | I need to get the fnames, lnames and IDs from the table that are after a chosen start date and before a chosen end date (mm/dd/yyyy). Is there a way to have MySQL treat those startday/startmonth/startyear columns as just a DATE datatype? So I can use >= and <= to select them? So DATE(CONVERT(....something..)) > DATE('2009-01-01'); I tried just ANDing everything, but the problem is that if you pick a start date of like 07/15/2010 you wouldn't get a record that has a startdate of 08/01/2010. I'm also not really keen on SQL beyond the absolute basics. Thanks in advance.
|
# ? Aug 17, 2010 23:41 |
|
Plinkey posted:Really quick question that I'm sure someone can help me with. I'm having some SQL issues with a really simple DB (using MySQL). So they are currently stored as strings? Are you using MySQL 4.1.2x or newer? STR_TO_DATE(CONCAT(startyear,'-',startmonth,'-',startday),'%Y-%m-%d')
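Dropped into the full query, that conversion would look something like this (table and column names are taken from the posts above; the literal dates are placeholders — MySQL's CONCAT coerces the int columns to strings automatically):

```sql
SELECT id, fname, lname
FROM   booking
WHERE  STR_TO_DATE(CONCAT(startyear, '-', startmonth, '-', startday), '%Y-%m-%d')
           >= '2010-07-15'
  AND  STR_TO_DATE(CONCAT(endyear,   '-', endmonth,   '-', endday),   '%Y-%m-%d')
           <= '2010-08-01';
```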
|
# ? Aug 18, 2010 00:05 |
|
Sprawl posted:So they are currently stored as strings? Sorry, ints. I have no idea what SQL version... what command can I run to check that?
|
# ? Aug 18, 2010 00:48 |
|
Plinkey posted:Sorry, ints. select version(); That should still work for ints.
|
# ? Aug 18, 2010 01:03 |
|
Sprawl posted:select version(); 5.1.34. E: Ok, that works to return a date when I use constant strings. But for some reason this isn't working. SELECT STR_TO_DATE( CONCAT(startday,',',startmonth,',',startyear), '%d,%m,%Y) AS date FROM booking; All I get is a '> prompt and the cmd prompt seems to be doing nothing. E2: I'm dumb, forgot a ' Plinkey fucked around with this message at 02:01 on Aug 18, 2010 |
# ? Aug 18, 2010 01:19 |
|
I am trying to modify the table that someone gave me earlier to add a "MidNameID" column. This is the create table query I am trying to use:code:
code:
code:
|
# ? Aug 18, 2010 02:03 |
|
It's not just CONSTRAINT you're missing, but also a comma after the previous constraint.code:
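The corrected DDL was presumably along these lines (column names and datatypes are assumptions carried over from the earlier TopName/SubName sketch; the point is the comma between the two named constraints):

```sql
CREATE TABLE SubName (
    ID        INT          NOT NULL AUTO_INCREMENT,
    TopNameID INT          NOT NULL,
    MidNameID INT          NULL,
    Name      VARCHAR(255) NOT NULL,
    PRIMARY KEY (ID),
    CONSTRAINT FK_SubName_TopName
        FOREIGN KEY (TopNameID) REFERENCES TopName (ID),  -- comma required here
    CONSTRAINT FK_SubName_MidName
        FOREIGN KEY (MidNameID) REFERENCES MidName (ID)
);
```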
|
# ? Aug 18, 2010 04:56 |
|
MoNsTeR posted:While it sounds like he's got bigger problems, hundreds or even thousands of tables is not terribly unusual. Yeah, I am not saying thousands of tables is bad in itself, nor did I say that in my post. It just raised suspicions that he was dynamically creating tables based on data content instead of creating tables based on data structure. My suspicions were correct.
|
# ? Aug 18, 2010 13:52 |
|
I'm looking to buy a dedicated server to act primarily as a single-user MySQL server. We collect about 12m lines of data per day (weekdays) locally and plan on piping it to the database at the end of each day. We estimate we need about 200gb of space for a year's worth of data. With this data, we use it to do various calculations so queries can be either only a few hundred lines long, or thousands of lines long. Our scripts also vary in the frequency of querying so the server has to be robust to lots of small read/writes and also longer read/writes. Finally, the server needs to be able to run some algorithms on this data (mostly PHP, maybe some C++). We know with our current system, the database is really the bottleneck. Our algorithms aren't really all that complex, just lengthy and repetitive. Our budget is about $200/month and for that we can get one of the following configurations (all running CentOS or RedHat):

A: Xeon 3210 2.1GHz quad core, 4GB RAM, 300GB 10K RPM SAS
B: Xeon 3360 2.8GHz quad core, 2GB RAM, 300GB 10K RPM SAS
C: Xeon 3360 2.8GHz quad core, 4GB RAM, 250GB 7200 RPM SATA

Really my question is, for the type of application I've described above, which of the three main components are most important for a database server and in what relative weights? I'm guessing the 700MHz isn't a huge deal compared to the doubled RAM and the faster HDD so we're leaning towards option A, but any input would be greatly appreciated. I hope this is the right thread to ask, as the specs of the server really are limited by the database. edit: Assume the table structure will be set up in an appropriate manner for that much data, it's not really my area of expertise. mik fucked around with this message at 17:05 on Aug 18, 2010 |
# ? Aug 18, 2010 17:03 |
|
So all it's going to do is only ever run MySQL and CGI pools or C++ code? The 10,000 RPM SAS drive is excellent for database-related uses. You definitely want the 4 gigs of RAM for the database. Depending on how long the algorithms take to execute it's hard to say if that 0.7GHz will make that much of a difference. Out of those 3 options I would say A.
|
# ? Aug 18, 2010 17:51 |
|
I'm sure this has been asked before, but is there a free mysql to postgresql migrator?
|
# ? Aug 18, 2010 19:10 |
|
Echo_ posted:I'm sure this has been asked before, but is there a free mysql to postgresql migrator? Well if you just want the tables and stuff made, this Ruby script looks like it should work. http://github.com/maxlapshin/mysql2postgres
|
# ? Aug 18, 2010 20:25 |
|
mik posted:Really my question is, for the type of application I've described above which of the three main components are most important for a database server and in what relative weights? 1. Storage Speed 2. Storage Speed 3. Storage Speed Oh and I guess RAM matters too. But not as much as storage speed. RAM would matter more if you had hundreds of users.
|
# ? Aug 19, 2010 16:49 |
|
MoNsTeR posted:1. Storage Speed It also matters for the size of the table, and the number of queries/sorts/amount of work being done at the same time. But yes, a 10,000 RPM SAS drive is very good for database stuff.
|
# ? Aug 19, 2010 18:21 |
|
This is more of a MySQL configuration question than anything else, but I have a web environment running the following software: Mac OSX Server 10.6, MySQL 5.1.49, PHP 5.3.2. My script works fine using the web browser on the server when it's pointed at 'localhost' but if I set it to open '192.168.1.100' (the server's ip), the script returns errors related to the MySQL connection... any thoughts? http://i.imgur.com/s1Mvb.png edit. Also, ALL other hosts return the script errors, not just the server. The only way the connection works is via 'localhost' when browsing on the server i am not zach fucked around with this message at 21:10 on Aug 19, 2010 |
# ? Aug 19, 2010 21:08 |
|
What is line 19 exactly?
|
# ? Aug 19, 2010 21:14 |
|
Sprawl posted:What is line 19 exactly? mysql_select_db($this->strDBName,mysql_connect($this->strDBServer,$this->strDBUser,$this->strDBPass));
|
# ? Aug 19, 2010 21:26 |
|
i am not zach posted:mysql_select_db($this->strDBName,mysql_connect($this->strDBServer,$this->strDBUser,$this->strDBPass)); Check the values of strDBUser, strDBPass, and strDBServer to make sure they are all what you think they should be. This sounds like a case of you trying to log in to the DB server from a host that isn't allowed.
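Checking and fixing host-based permissions would look something like this (the user, password, database and subnet are placeholders; MySQL treats 'root'@'localhost' and 'root'@'192.168.1.100' as entirely separate accounts):

```sql
-- See which user/host combinations actually exist
SELECT user, host FROM mysql.user;

-- Allow connections from machines on the local subnet
CREATE USER 'webuser'@'192.168.1.%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON mydb.* TO 'webuser'@'192.168.1.%';
FLUSH PRIVILEGES;
```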
|
# ? Aug 19, 2010 22:18 |
|
Sprawl posted:Check the values of strDBUser, strDBPass, and strDBServer to make sure they are all what you think they should be. That's what I thought at first... But the server hosts both PHP and MySQL, and PHP is simply connecting to 'localhost'. 'root'@'localhost' definitely has the correct permissions and definitely has the correct password. Is there a different host I need to add that PHP uses? Even if that's the case, why would it work if I typed in 'localhost' to the address bar and not when I type '192.168.1.100'? What is the user I need to allow in order for it to connect correctly?
|
# ? Aug 19, 2010 22:24 |
|
i am not zach posted:Thats what I thought at first... Does your MYSQL have the /tmp/mysql.sock thing setup?
|
# ? Aug 19, 2010 22:41 |
|
Sprawl posted:Does your MYSQL have the /tmp/mysql.sock thing setup? Just figured it out. I had set up that mysql.sock thing but (I know this is my fault and I'm dumb) I didn't realize Zend Community Server was packaged with its own version of MySQL and the PHP script hadn't been pointed at the correct mysql.sock. Long story short, I needed to add another symlink to /tmp/mysql.sock (already had to do it once for the administration console). That was definitely the issue. Thanks for your help, looks like we got to the answer at about the same time, funny how that works.
|
# ? Aug 19, 2010 23:08 |
|
Best practices question: For a long time I've not done SELECT *, thinking it better to always specify which columns I wanted. Is this common, is there any use for SELECT *, am I making a big deal out of nothing, ?
|
# ? Aug 20, 2010 17:00 |
|
Golbez posted:Best practices question: For a long time I've not done SELECT *, thinking it better to always specify which columns I wanted. Is this common, is there any use for SELECT *, am I making a big deal out of nothing, ? Strictly speaking, enumerating all the columns you need is probably better. Uses for SELECT *: - Testing - When you know you actually do want all the columns from a table (typically it will be a table with very few columns, that is a joining table or has a role similar to that) - When you're lazy and gently caress it, it doesn't matter (if your query isn't one that's going to be run frequently I think you can be forgiven for taking this attitude) (Bear in mind that you can use tablename.*)
|
# ? Aug 20, 2010 17:06 |
|
|
Another use I just remembered for the tablename.* syntax - selecting all columns from a subquery (a subquery you're joining to a table, say).
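For instance (table and column names invented for illustration):

```sql
-- d.* pulls every column out of the subquery; only the one extra
-- column from the joined table is listed by hand
SELECT d.*, a.author_name
FROM   (SELECT * FROM book WHERE pub_year > 2000) AS d
JOIN   author AS a ON a.id = d.author_id;
```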
|
# ? Aug 20, 2010 17:08 |