|
OK, so you have databases which exist within a MySQL installation. Within each database, you have tables. In each table you have fields. Is all that correct? For a website project, should I have ALL of it in one database, each user in their own table, and then user details as fields in the tables, or do I have all this messed up? I'm having trouble wrapping my brain around how data is represented in MySQL. Most beginner tutorials focus on specific commands (which I can learn just fine by reading code examples) and have nothing on the kind of stuff I'm having trouble with. Basically I want to make a site that has a ton of users, and each user has a ton of data that is stored in the database (with no data shared between any other users). I've got phpMyAdmin installed, and when I create a new table, it wants me to give a name for the table, and I don't know if I should put "bob", "user_name", "users", or what-the-gently caress. nbv4 fucked around with this message at 10:07 on Nov 12, 2007 |
# ¿ Nov 12, 2007 10:04 |
|
|
|
minato posted:Edit: To elaborate, a table consists of a list of 0-many rows. Each row represents all the information about some "thing". A row consists of various fields that describe that single thing. So examples of tables would be "users", "projects", "products", etc. Each row in those tables describes a single user, project or product. The fields depend on the thing you're trying to describe. A user might have an ID, name, DOB, etc. A product might have an ID, name, quantity available, etc. OK, I think I'm starting to get it now. One more question: is there a guide somewhere that helps you determine which data type is best for certain kinds of data? For instance, what's the best type to use to store a user's email address? And what about a column that can only take a small number of values? Say car manufacturer, and let's say there are only 5 possible manufacturers. Would it be best to store that data as "1,2,3,4,5", with each number corresponding to a manufacturer, or would it be best to just store it as a string? Or what about setting up 5 different boolean columns, one per manufacturer? Would it matter which way you did it?
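For the small-value-set case, here are two common layouts, sketched with hypothetical names (an email address is usually just a VARCHAR(255)):

```sql
-- Option A: a MySQL ENUM column; the value list here is a made-up example
CREATE TABLE cars (
    id INT AUTO_INCREMENT PRIMARY KEY,
    manufacturer ENUM('ford', 'honda', 'toyota', 'bmw', 'fiat')
);

-- Option B: a lookup table plus an ID column, easier to extend later
CREATE TABLE manufacturers (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL
);
-- cars would then carry a `manufacturer_id INT` referencing manufacturers.id
```

Five boolean columns is generally the worst of the three options: nothing stops a row from having two manufacturers set at once.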
|
# ¿ Nov 12, 2007 11:19 |
|
code:
code:
|
# ¿ Nov 14, 2007 07:06 |
|
minato posted:Not closing the </form>? That was what it was, I feel stupid. Now I've got another problem that is stumping me. Here is my query: $result = mysql_query("SELECT * FROM pilots WHERE email=$email and password=" . MD5($password)); then below that, I have: $row = mysql_fetch_row($result); print $row[0]; which should print out something, but it's giving me this error: Warning: mysql_fetch_row(): supplied argument is not a valid MySQL result resource in /var/www/logbook/login-script.php on line 20 which makes me believe the problem is with my query string. I copied the string exactly as it comes out of PHP: SELECT * FROM users WHERE email=nbvfour@gmail.com and password=a4f9767401ed74edc2321b7b1bc7790e into phpMyAdmin, and it says it's an invalid query string. I'm thinking I'm supposed to escape the '@' but I can't seem to find anything on Google.
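For what it's worth, the '@' doesn't need escaping; the likely syntax error is that the string values are unquoted. A quoted version of that generated query would look like:

```sql
SELECT * FROM users
WHERE email = 'nbvfour@gmail.com'
  AND password = 'a4f9767401ed74edc2321b7b1bc7790e';
```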
|
# ¿ Nov 19, 2007 05:52 |
|
code:
In other words, I want this to be returned: code:
SELECT DISTINCT `instructor`, `date` FROM `flights` WHERE `pilot_id` = '50' ORDER BY `date` DESC LIMIT 12 but that's not quite there. Any help?
|
# ¿ Dec 17, 2007 06:00 |
|
I have a PHP script that, at the very top of the script, makes a MySQL query that grabs a bit of data from the database, and makes that data global. So it looks like this:php:<? $query = "some query"; global $data; $data = mysql_query($query); ?> When the functions use the data, they do it with mysql_fetch_array() and a while() loop. At the end of each of those loops, I have "mysql_data_seek($data, 0);", but it still thinks each one, except for the first, is empty. What am I missing?
|
# ¿ Jan 15, 2008 18:37 |
|
Quick question, this is my query: code:
|
# ¿ Feb 1, 2008 06:39 |
|
Victor posted:
Cool, that worked. I couldn't get that CHAR() thing to work, so I just used ORDER BY COALESCE(manufacturer, 'ZZZZZZZZZZZ') and it worked fine.
|
# ¿ Feb 1, 2008 21:34 |
|
I have a PHP script that builds a few hundred MySQL queries, then executes them one by one. All of these queries are INSERT queries. The way I have it now, if one of those queries fails, it's a pain in the rear end to go through and fix whatever problem occurred. I want the system to do a dry run through each query, and only if every single query is valid, execute them all. Is there any way to test a query against the database? It's probably very simple, but Googling gives me nothing.
|
# ¿ Feb 7, 2008 10:46 |
|
minato posted:Why would a query fail? If a query fails, that's a sign that either your data or code is broken. The user uploads a file that is transformed into about 600 queries. If one line of that data is wrong for whatever reason, only 599 lines will get entered, and it'll be absolute hell to get things right. I need some way to ensure 100% of the queries are correct, or else tell the user his data needs looking over.
|
# ¿ Feb 7, 2008 11:31 |
|
minato posted:But really, you should be validating all data before it gets anywhere near the database before beginning any inserts. I am, I'd just like one last line of defence so entry is all-or-nothing, for peace of mind against anything unforeseen... I tried running through each query once with "LIMIT 0" attached, but that didn't work. I'll look into the transaction thing.
|
# ¿ Feb 7, 2008 11:55 |
|
kalleboo posted:This is the whole point of transactions - to be able to roll back changes if anything goes wrong (well, along with making things atomic and the side effect of certain things being optimizable). With MySQL you need to be using InnoDB and a fairly recent version (5.0 IIRC) for support. I just spent the last hour looking up transactions, and have now got them working with my site. The only problem is that my webhost is running an old version of MySQL, so it may not work there. edit: basically I have this: code:
edit2: nevermind, it works nbv4 fucked around with this message at 13:49 on Feb 7, 2008 |
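The transaction pattern being described is presumably something along these lines (table and values are made up); note it only works on InnoDB tables, since MyISAM ignores transactions:

```sql
START TRANSACTION;
INSERT INTO flights (pilot_id, `date`) VALUES (50, '2008-02-07');
INSERT INTO flights (pilot_id, `date`) VALUES (50, '2008-02-08');
-- if any INSERT fails, issue ROLLBACK instead of COMMIT
COMMIT;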
# ¿ Feb 7, 2008 13:20 |
|
I have a TEXT field that contains comma separated tags. Is it even possible to create a query that returns each unique tag used?
|
# ¿ Feb 11, 2008 15:02 |
|
No Safe Word posted:If changing your schema to actually have a table of separate tags and setting up a many to many relationship is an option, then do so. Using another table for tags seems like too much work for the simple tagging system my project requires. Basically each user owns a bunch of items, and the tags are used to organize and create statistics within that user's items. There will never be a full-table tag search. I'm pretty sure a non-normalized solution is the way to go. I guess if worst comes to worst, I could just do something like: SELECT DISTINCT tags FROM `planes` WHERE user_id='7' which would probably return 5 or 6 rows, normally. Then use PHP to separate out the fat and create the list that way...
|
# ¿ Feb 11, 2008 15:40 |
|
m5 posted:Well, that's the way you do things in SQL. Then how would it be done? The user types in their tags, PHP splits the string at the commas, and a function goes through each tag and checks whether it's already in the database. If it's not, it adds it, then gets the auto_incremented value and sticks it into the item's 'tags' row. That's like 5 or 6 queries. Why do it that way when you can just stuff it all into a TEXT column and be done with it? I'm not writing an enterprise-level project here where the database has to fulltext through millions of rows. Each tag exists within each user. None of these tags need to exist "globally".
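For reference, the normalized layout being suggested would look roughly like this (all names are made up); with the unique key in place, INSERT IGNORE removes the need for a check-then-insert round trip per tag:

```sql
CREATE TABLE tags (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    name VARCHAR(50) NOT NULL,
    UNIQUE KEY user_tag (user_id, name)   -- one row per tag per user
);

CREATE TABLE item_tags (
    item_id INT NOT NULL,
    tag_id INT NOT NULL,
    PRIMARY KEY (item_id, tag_id)         -- no duplicate taggings
);
```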
|
# ¿ Feb 11, 2008 16:25 |
|
What's the easiest way to get a total of a column by calendar month? For instance, since today is 3-14-2008, I need the totals for the last 6 calendar months, so that would be all items from 9-01-2007 to 3-14-2008.
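One way to phrase this in MySQL, assuming a hypothetical table `items` with `date` and `amount` columns:

```sql
SELECT DATE_FORMAT(`date`, '%Y-%m') AS month,
       SUM(amount) AS total
FROM items
-- 6 months back from the 1st of the current month, e.g. 2007-09-01
WHERE `date` >= DATE_SUB(DATE_FORMAT(CURDATE(), '%Y-%m-01'), INTERVAL 6 MONTH)
GROUP BY month
ORDER BY month;
```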
|
# ¿ Mar 14, 2008 20:45 |
|
IsaacNewton posted:What DBA are you using? Yeah, I'm using MySQL, and that query you provided works wonderfully. Is there any way to get it to return it all on one row, though? edit: nevermind, I figured it out: remove the GROUP BY part... nbv4 fucked around with this message at 00:37 on Mar 15, 2008 |
# ¿ Mar 14, 2008 22:58 |
|
I have a SQL problem that I can't even figure out where to begin. I need to get the date of the third-from-last occurrence of a record. Say this is the table: code:
|
# ¿ Mar 17, 2008 07:34 |
|
minato posted:Bingo. So if "item" is "approaches", and the name of the table is "flights", would the query look like this: code:
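If only the single date is needed, a hypothetical version of that query can also be written with an offset (assuming `approaches` is the column that marks an occurrence):

```sql
SELECT `date`
FROM flights
WHERE approaches > 0   -- assumed condition for "an occurrence"
ORDER BY `date` DESC
LIMIT 2, 1;            -- skip the two most recent, return the third
```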
|
# ¿ Mar 17, 2008 08:30 |
|
How can I sort a table like this: code:
code:
nbv4 fucked around with this message at 06:37 on Mar 24, 2008 |
# ¿ Mar 22, 2008 13:05 |
|
Boogeyman posted:If you're using MSSQL, this will work (just tested it). That actually doesn't work, because value1 is actually "SUM(another_value) AS value1", so it gives me this error: "#1247 - Reference 'value1' not supported (reference to group function)". I think adding a third column would be best, because in some instances there may be a value in both value1 and value2.
|
# ¿ Mar 24, 2008 23:03 |
|
Alex007 posted:You can't sort on grouped fields, but you can do it (sort) in an outer select: Hmm, that's not working either. Here is my exact query: code:
code:
nbv4 fucked around with this message at 00:40 on Mar 26, 2008 |
# ¿ Mar 26, 2008 00:26 |
|
Could someone please help me come up with a WHERE clause that returns a row if it's NOT in the form "XXX-XXX"? For instance, "abc-abc" is not a hit, but "abc-xyx" is. The column can contain something like "abc-abc-asd-fdffsdfrekl kjfdg", but I'm only interested in the first seven characters. I imagine regular expressions could play a role in this, but I don't know how to do those. edit: changed requirements nbv4 fucked around with this message at 10:32 on Apr 2, 2008 |
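Reading "XXX-XXX" as the same three characters repeated around a dash, a substring-based sketch (table and column names are made up) would be:

```sql
-- keep rows whose first seven characters are NOT of the form "abc-abc"
SELECT *
FROM items
WHERE NOT (SUBSTRING(col, 4, 1) = '-'
           AND SUBSTRING(col, 1, 3) = SUBSTRING(col, 5, 3));
```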
# ¿ Apr 2, 2008 10:22 |
|
Zombywuf posted:
Ah, good job. I didn't realize SQL had substring functions. I was looking around for a fulltext search solution...
|
# ¿ Apr 2, 2008 11:37 |
|
I have a table that's about to be populated with a few hundred thousand records. The table has about 34 columns. Some of these are boolean columns that could be combined with one another; for instance, one column called "is_car" and another called "is_motorcycle" could be combined into "vehicle_type" with a value of NULL, "m", or "c". The way it is now works fine; I'm just wondering if it's worth the trouble of restructuring the table to get a little more performance (if any at all). Will it matter in the long run?
|
# ¿ May 6, 2008 11:21 |
|
I'm working on a tagging scheme for my website. Basically I have one table full of blog entries, and another table of tags. Each row in the tag table has a blog_id that connects it to the blog it is tagging. Everything is working fine except for one thing: I can't figure out how I should go about editing/deleting/adding new tags for a blog. Let's say I have one blog that is tagged with three tags. That's three rows in the tag table, each with the corresponding blog_id. If I were to add an additional tag, I could just add a new row to the tags table, but what if I want to delete one? Do I just delete the row? Wouldn't that hurt performance? Also, how do I even determine whether a tag is new or not? I'm thinking of, whenever the user edits tags, just deleting all existing tags for that blog and then adding the new ones, ensuring no duplicates exist. Is this a dumb idea? I can't think of any other way to do it. nbv4 fucked around with this message at 07:47 on May 8, 2008 |
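Delete-then-reinsert is a perfectly normal pattern for small tag sets, and wrapping it in a transaction keeps the swap atomic. A sketch with made-up names and values:

```sql
-- replace all tags for blog 42 in one shot
START TRANSACTION;
DELETE FROM tags WHERE blog_id = 42;
INSERT INTO tags (blog_id, name)
VALUES (42, 'mysql'), (42, 'php'), (42, 'tagging');
COMMIT;
```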
# ¿ May 8, 2008 01:06 |
|
I've got a pretty simple problem: code:
|
# ¿ May 28, 2008 17:54 |
|
Jethro posted:
this is exactly what I needed, thanks
|
# ¿ May 30, 2008 19:35 |
|
I have a table that has a primary key that is auto incremented. Over time many entries have been deleted, but as you know, the auto_increment value just keeps going. There are like 2200 total rows, yet the auto_increment value is currently at about 3800. I want to reset the auto_increment value so it will begin "filling in the holes". How do you reset it? Googling brings up this query: ALTER TABLE [table name] AUTO_INCREMENT = 1 But whenever I run it, I get the "this query was successfully executed!" message in phpMyAdmin, but nothing gets reset; new records still get ~3800 values. Even if I go to the "Operations" tab and enter "1" into the text box, it gives me the same success message, but when the page refreshes, the old value of ~3800 is displayed. If I put in a larger value, like 9000, it works fine. First off, will setting the value to 1 actually do what I want, or will it just overwrite the old values and screw up all my data? And secondly, how do I actually set the darn value?
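For context: MySQL will not set AUTO_INCREMENT below the current maximum key value plus one; anything lower is silently clamped, which is why 1 appears to do nothing while 9000 sticks. And the counter never re-fills holes left by deletes in any case. The statement itself is the one already quoted:

```sql
-- hypothetical table name; silently clamped up to MAX(id) + 1
-- when the requested value is below the current maximum
ALTER TABLE flights AUTO_INCREMENT = 1;
```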
|
# ¿ Aug 29, 2008 20:56 |
|
SQL nub here. I have a varchar field in my table that users enter in many different formats, such as "abc def", "ABD-DEF", "ABC; DEF", "ABC - DEF", etc. So far I've used PHP to do all the "cleaning up" of this data to get it into a uniform format to work with, but now I realize the best way is probably to do it with a MySQL function: something like "SELECT DISTINCT clean_up(`column`) FROM ..." instead of a bunch of PHP loops and crap. A Google search for "user defined functions mysql" tells me there are about 3 or 4 ways to create functions, and as far as I can tell they all require compiling C++ code. Is there an easy way to define a simple function without going balls out? All I need is a few REPLACE()s and a SUBSTRING() or two...
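MySQL 5.0+ also supports plain SQL stored functions via CREATE FUNCTION, with no C/C++ compilation involved (the compiled-UDF mechanism is a separate feature). A sketch, with made-up cleanup rules:

```sql
DELIMITER //
CREATE FUNCTION clean_up(s VARCHAR(255)) RETURNS VARCHAR(255)
DETERMINISTIC
BEGIN
    -- placeholder normalization: strip/replace separators, uppercase
    RETURN UPPER(TRIM(REPLACE(REPLACE(s, ';', '-'), ' - ', '-')));
END //
DELIMITER ;

-- then: SELECT DISTINCT clean_up(`column`) FROM some_table;
```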
|
# ¿ Sep 18, 2008 06:38 |
|
In my database, I have a table called "flights", which has information about the flight as well as a plane_id that connects it to a row in a "planes" table with information about the plane. I have a user who has like 900 planes in his database which are basically "orphan" planes. I want to remove these duplicate planes, which do not have a single flight connected to them. How do I do this? I came up with a query which pretty much does what I want, using LEFT JOINs and discarding all entries whose flight_id value is not NULL, but for some stupid reason this query takes over 40 seconds to run on normal user_ids, and for this particular user I guess it takes too long, because it always returns a "1053" error. What's the best way to do this? edit: here is the query that is taking too long to run: code:
nbv4 fucked around with this message at 20:04 on Dec 15, 2008 |
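For reference, the orphan cleanup can be expressed as a single multi-table DELETE (column names assumed), and an index on flights.plane_id is what keeps the join fast:

```sql
DELETE planes
FROM planes
LEFT JOIN flights ON flights.plane_id = planes.id
WHERE flights.id IS NULL      -- no matching flight: an orphan plane
  AND planes.user_id = 123;   -- hypothetical user id
```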
# ¿ Dec 15, 2008 19:49 |
|
^^ actually I'm stupid. There were no indexes... I added the indexes and now it runs in under a second.
|
# ¿ Dec 15, 2008 20:29 |
|
Triple Tech posted:What database system is this that something like that takes over 40 seconds? MySQL 5 with about 14,000 rows total and no indexes lollers.
|
# ¿ Dec 15, 2008 21:04 |
|
nevermind...
nbv4 fucked around with this message at 02:49 on Sep 5, 2009 |
# ¿ Sep 4, 2009 22:21 |
|
How do you copy a PostGIS database from one server to another? On the old server, I'm using Postgres 8.3 and PostGIS 1.3.x. On the new server I'm using 8.4 and PostGIS 1.4.0. Is there a way to do pg_dump without including the PostGIS stuff?
|
# ¿ Nov 7, 2009 19:50 |
|
nevermind
nbv4 fucked around with this message at 02:43 on Nov 20, 2009 |
# ¿ Nov 20, 2009 02:35 |
|
I have a backup script that looks like this: code:
code:
code:
|
# ¿ Mar 5, 2010 20:48 |
|
ShoulderDaemon posted:You can pass multiple --table arguments to pg_dump, so your backups won't be inconsistent. Ok, now I have this: code:
code:
|
# ¿ Mar 6, 2010 02:26 |
|
|
|
ShoulderDaemon posted:If you pass --disable-triggers to pg_restore then it will prevent foreign key checks from being applied to the data as it restores Ah, thanks, that did it.
|
# ¿ Mar 7, 2010 02:10 |