|
Hammerite posted:As an additional thought, googling tells me that in order to use foreign keys in MySQL I need to switch the tables that will use them from the MyISAM engine to InnoDB. Idle reading suggests that MyISAM is a generally somewhat higher-performance engine than InnoDB but that since in particular some of my tables are UPDATE'd quite often in comparison with how often they are read, InnoDB with foreign keys may or may not be superior. It is tough for me to know what is optimal.

The only legitimate reason to use MyISAM over InnoDB is fulltext indexing, and there are plenty of other ways to go about searching (e.g. Sphinx, though that's another can of worms).
|
# ¿ Jul 5, 2009 03:13 |
|
|
You might have more luck attacking INFORMATION_SCHEMA. The rest of the special info MySQL provides can't be treated like a table, which is very silly.
|
# ¿ Jul 11, 2009 01:04 |
|
awdio posted:Eventually I want to ignore spaces, case and characters that aren't 0-9 and a-z. code:
|
# ¿ Aug 3, 2009 18:44 |
|
MySQL uses collations when thinking about case sensitivity. The default collation is case insensitive. Can we have a sample of a few real product names, and a sample of a real search string?
|
# ¿ Aug 3, 2009 20:38 |
|
Who or what is providing that search term? I'm gonna fall back to my previous suggestion of just adding a column explicitly for the condensed search terms and populating it programmatically. There's no way in SQL to replace ranges of characters, regex-style. Not without a REPLACE() chain a few dozen layers deep, and that's just stupid.
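A sketch of that approach, with all table and column names invented for illustration:

```sql
-- Hypothetical schema: an extra column holds the condensed form of the name.
ALTER TABLE products ADD COLUMN search_name VARCHAR(255) NOT NULL DEFAULT '';

-- Populate it from application code after stripping everything but [0-9a-z],
-- e.g. in PHP: strtolower(preg_replace('/[^0-9a-z]/i', '', $name))
UPDATE products SET search_name = 'acmewidget2000' WHERE id = 1;

-- Searches then do the same condensing on the input and hit the column directly.
SELECT * FROM products WHERE search_name = 'acmewidget2000';
```

Throw an index on the condensed column and you're done; no REPLACE() chains required.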
|
# ¿ Aug 3, 2009 21:17 |
|
Doctor rear end in a top hat posted:MySQL engines: Right now I'm using the default MyISAM engines and I'm thinking about switching to InnoDB for foreign keys. I'm currently using triggers to perform the same cascading deletes/updates that foreign keys would provide. Is there any reason I should switch? I don't have need right now for row-level locking or transactions. If I do switch, is there an easier way to do it than copying my data to a new table with the same schema (but different engine), deleting the old one and then renaming the new one?

There is no reason not to use InnoDB, especially given that you're already doing something insanely silly to work around not using it. Converting is easy: code:
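The command that presumably followed that "code:" marker is a one-liner per table (names here are placeholders):

```sql
-- MySQL rebuilds the table in the new engine for you; no manual copying.
ALTER TABLE some_table ENGINE=InnoDB;

-- To find the remaining MyISAM tables to convert:
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'your_db' AND engine = 'MyISAM';
```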
|
# ¿ Jul 2, 2010 21:42 |
|
If you are running entirely InnoDB, you probably should look at Percona's "XtraBackup". It works without locking the entire database, and hey, it's OSS. And if you aren't running InnoDB, fix that.
|
# ¿ Jul 28, 2010 21:18 |
|
It honestly might be easier to separate the two pieces of data into two separate tables, one to store *only* first-time logins, and one to store *only* activity markers. You can then combine this with INSERT IGNORE and a unique key on the id/timestamp column to blindly fire updates without worrying about the state of the other column. It's a workaround and not an elegant solution. It reduces one area of complexity (entering data) while increasing another area of complexity (reporting on both sets).
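A sketch of the two-table idea, with all names invented:

```sql
-- first_logins records only the first time a user is ever seen;
-- the primary key makes repeat inserts no-ops under INSERT IGNORE.
CREATE TABLE first_logins (
    user_id    INT      NOT NULL PRIMARY KEY,
    first_seen DATETIME NOT NULL
) ENGINE=InnoDB;

-- activity records every activity marker, deduplicated per timestamp.
CREATE TABLE activity (
    user_id INT      NOT NULL,
    seen_at DATETIME NOT NULL,
    UNIQUE KEY (user_id, seen_at)
) ENGINE=InnoDB;

-- Fire both blindly; IGNORE swallows the duplicate-key error on repeats.
INSERT IGNORE INTO first_logins VALUES (42, NOW());
INSERT IGNORE INTO activity VALUES (42, NOW());
```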
|
# ¿ Sep 1, 2010 17:54 |
|
The pHo posted:I see on edit you've put square brackets around some of the variables... what is that for? (Everything is working as I hoped so I presume it's just a 'good measures' thing?)

MySQL's default setting is to use the backtick to escape identifiers. MSSQL's default setting is to use square brackets. The ANSI standard dictates double quotes, FWIW. Really though, you don't need to escape most identifiers, just ones that could be mistaken for SQL keywords, like DESC and DATE, though most SQL parsers are smart enough to not screw it up.
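Side by side, using a hypothetical table with keyword-ish column names:

```sql
SELECT `desc`, `date` FROM `orders`;    -- MySQL default: backticks
SELECT "desc", "date" FROM "orders";    -- ANSI standard: double quotes
                                        -- (MySQL honors these under sql_mode=ANSI_QUOTES)
-- SELECT [desc], [date] FROM [orders]; -- MSSQL default: square brackets
```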
|
# ¿ Sep 7, 2010 21:48 |
|
Out of morbid curiosity, are you using that UUID as a primary key, and are you also using InnoDB? If so, you should be aware that InnoDB stores rows on disk in the order of the primary key. Other databases call this a "clustered index." UUIDs are not guaranteed to be anything near sequential, so you may encounter a performance hit when entering new rows if things need to be shuffled around.
|
# ¿ Oct 4, 2010 03:12 |
|
Hammerite posted:Also, you described it as "basically a Varchar(36)..." Looking up UUIDs it looks like they're unvarying in length, so a CHAR(36) would presumably be better.

I was about to say "except when MySQL decides that your CHAR should be a VARCHAR because it's loving retarded and knows better than you", but I couldn't make it do that in 5.1. When did they fix this, it makes me happy. e: They fixed that in 5.0. Amazing.
|
# ¿ Oct 4, 2010 18:05 |
|
That won't help much if it's limited server resources that are the problem. One instance could easily consume all available I/O. But if the other database is small enough to be kept in RAM, and both instances are tuned properly, it could work out.
|
# ¿ Oct 18, 2010 17:27 |
|
Golbez posted:Let's say I have a status history table...

Totally doable. I think. This will use MySQL's don't-have-to-GROUP BY-all-the-columns logic, which may not work in not-MySQL databases. You did not specify which SQL server you're using, but you're using PHP, and PHP and MySQL go together like Dumb and Dumber. code:
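The query lost after that "code:" marker was presumably something in this spirit. The schema is a guess (status_history(item_id, status, changed_at)), and the bare-column GROUP BY is MySQL-specific; the standard calls those values indeterminate:

```sql
-- Latest status row per item, leaning on MySQL letting you SELECT columns
-- that aren't in the GROUP BY. The subquery's ORDER BY stacks the deck so
-- the "first" row MySQL keeps for each group is the newest one.
SELECT item_id, status, changed_at
FROM (
    SELECT item_id, status, changed_at
    FROM status_history
    ORDER BY changed_at DESC
) AS newest_first
GROUP BY item_id;
```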
|
# ¿ Oct 20, 2010 05:40 |
|
Golbez posted:Thanks, though I'm not really getting it to work, but... immediate last status? What if there's three in a row?

Uh, yeah, dunno about that one. Time to break out the PHP.
|
# ¿ Oct 20, 2010 17:50 |
|
Keep in mind that different cultures have different naming rules. Some have tribal names, some place the family name first, then the person's given name. Some have family names that are derived from their parents. Some have family names that have multiple words. Some people have multiple middle names. Creating rules for western pattern names is going to end up excluding someone, and knowing how these things tend to work, that someone will proceed to raise a royal stink. For these reasons, it's always going to be a workable idea to provide the user with one and only one name field, and then both store that verbatim and try to parse it.
|
# ¿ Oct 28, 2010 18:51 |
|
Sharrow posted:Does Postgres have anything that approaches high availability? There seems to be like 20 different third-party solutions and I can't always tell what's semi-official and what's a hacked up Perl script.

Here's the official word on the matter. As of 9.0, it supports hot-standby / read-only slaves without third party solutions. It does this using async WAL shipping, akin to MySQL's binary replication log. They're working on synchronous replication in 9.1. For failover and reader load balancing, pgpool-II seems like one of the better choices. If you need multiple writers (multi-master), you'll need one of the many trigger-based third-party solutions. Bucardo seems to be a popular and relatively sane one.

I wish I had something to recommend. We're still deciding how we're going to replace our MySQL multi-master setup, and are debating either Bucardo or making our app perform the replication itself through a message queue. The only reason we're considering doing it ourselves is because we don't use replication for load balancing, but as a communication mechanism between the front-facing website and the in-house bits.

McGlockenshire fucked around with this message at 21:57 on Nov 15, 2010 |
# ¿ Nov 15, 2010 21:53 |
|
e: n/m, contradictory data.
McGlockenshire fucked around with this message at 07:51 on Dec 30, 2010 |
# ¿ Dec 30, 2010 07:47 |
|
They could also use "shouldn't use SQL" to mean "don't need ACID," referring to one of the new shard-based/distributed "NoSQL" key-value stores that sacrifice one or more of the ACID principles in the name of performance.
|
# ¿ Jan 7, 2011 00:49 |
|
Have you considered adding triggers on the inventory table instead? You can use this to capture before-and-after data into another table, which may prove more useful than a daily snapshot. You can even build the daily snapshot from that data by effectively re-playing the log. (Any sane inventory system should already be doing this, either at the table level or at the application level...)
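A minimal sketch of the trigger approach, assuming a hypothetical inventory(item_id, qty) table:

```sql
-- Audit table capturing before-and-after values for every quantity change.
CREATE TABLE inventory_log (
    item_id    INT      NOT NULL,
    old_qty    INT,
    new_qty    INT,
    changed_at DATETIME NOT NULL
) ENGINE=InnoDB;

DELIMITER //
CREATE TRIGGER inventory_audit
AFTER UPDATE ON inventory
FOR EACH ROW
BEGIN
    -- OLD and NEW expose the row as it was and as it now is.
    INSERT INTO inventory_log VALUES (OLD.item_id, OLD.qty, NEW.qty, NOW());
END//
DELIMITER ;
```

Add matching AFTER INSERT / AFTER DELETE triggers if you want the log to be a complete replayable history.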
|
# ¿ Jan 12, 2011 07:25 |
|
I think that PhotoRealAuthorisationRequired is premature optimization. You've only allocated a 16-bit integer for your photo identifier. You'll run out of photo ids way, way before the join or union you'd use to calculate viewer permissions from the other tables becomes a performance concern. You can easily make that table a view and replace it with a table+triggers later on if you identify it as an actual performance hog. The information can also be cached at the application level instead. Otherwise, the rest looks fine.

Related but unimportant if you're the only dev: your naming scheme is funny. CategoryAuthorisationRequired is a link table between Category and Authorisation. Consider naming it "CategoryAuthorisations" instead. Same for PhotoAuthorisationRequired => PhotoAuthorisations. CategoryMembership is a link table between Category and Photo. Consider "PhotoCategories" instead. The shorter names will make it very clear for other coders that these are link tables. PhotoRealAuthorisationRequired could become, say, PhotoCalculatedAuthorisation.

McGlockenshire fucked around with this message at 19:02 on Jan 24, 2011 |
# ¿ Jan 24, 2011 18:59 |
|
Someone somewhere recently brought up a technique to represent a parent/child/tree relationship in a relational database without happy things like WITH RECURSIVE. If you have that in your database engine of choice, use it, it's a no-brainer. I'm still trying to figure out why I didn't bookmark the article I'm thinking about. Going to have to go diving...
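For the engines that do have it, the recursive CTE version is short. Table and column names are invented here, and this is PostgreSQL-flavored (8.4+); MySQL has no equivalent:

```sql
-- Walk a plain adjacency list: nodes(id, parent_id, name).
WITH RECURSIVE tree AS (
    SELECT id, parent_id, name
    FROM nodes
    WHERE parent_id IS NULL            -- anchor: start at the roots
    UNION ALL
    SELECT n.id, n.parent_id, n.name
    FROM nodes n
    JOIN tree t ON n.parent_id = t.id  -- recurse: descend one level per pass
)
SELECT * FROM tree;
```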
|
# ¿ Feb 17, 2011 01:13 |
|
Yeah, that looks similar to if not identical to what I was thinking of. Bookmarking it this time around, thanks!
|
# ¿ Feb 17, 2011 04:49 |
|
With Percona's XtraBackup providing full support for both InnoDB and MyISAM, there's really no excuse to have a MyISAM-specific backup process in place. To be frank, this sounds more like a "we're afraid of change" excuse than anything else. Your code shouldn't give a drat about the underlying storage engine (and unless you're using MySQLisms, your code shouldn't even care that it's running on MySQL to begin with...)

Really, if "the backups will break" is all they've got, it's time for either some hardcore re-education or time to find another job. InnoDB has been superior to MyISAM in almost every way since 5.0, and with the sole exception of full text search, there is zero reason to keep MyISAM in use.

McGlockenshire fucked around with this message at 21:46 on Mar 4, 2011 |
# ¿ Mar 4, 2011 21:41 |
|
Aredna posted:As someone who has to create reports off of databases with little to no documentation: Trying to reverse engineer enums is a pain in the rear end and any with a table set up have made my life 10x easier.

Related: make use of COMMENT fields, available both at the column and table levels. One of the little things about MySQL that doesn't suck.
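For instance (table and comment text invented):

```sql
-- Comments live in the schema itself and surface in SHOW CREATE TABLE and
-- information_schema, exactly where the next poor soul reverse engineering
-- your database will be looking.
CREATE TABLE order_status (
    code  CHAR(1) NOT NULL PRIMARY KEY
        COMMENT 'P = pending, S = shipped, C = cancelled',
    label VARCHAR(32) NOT NULL
) COMMENT = 'Lookup table replacing a bare status enum';
```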
|
# ¿ Mar 10, 2011 00:38 |
|
quote:Similarly, do people have experience with TRUNCATE performance for InnoDB tables, or at least those without foreign-key relationships?

We found it faster to do a CREATE TABLE new LIKE old to copy the structure of the table, then do a two way rename -- old => old_old, new => old. It should only take a split second, and then you can deal with the data in the old table as you'd like. I'd imagine this would completely break poo poo if you have foreign keys.
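In sketch form, with placeholder table names:

```sql
-- Build an empty clone of the structure, then swap it in.
-- The multi-table RENAME is atomic, so readers never see a missing table.
CREATE TABLE big_table_new LIKE big_table;
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;
-- Archive or DROP big_table_old at your leisure.
```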
|
# ¿ Mar 11, 2011 18:37 |
|
WRT bizarreness when doing a table rename, I've never seen any sort of copying or *any* view reference updating... but then again, our data is stupid and we don't use views. The mere fact that you can turn off foreign key refs and drop a table full of keys referenced elsewhere pretty much says it all when the conversation turns to MySQL and data integrity.
|
# ¿ Mar 11, 2011 22:54 |
|
As long as the MySQL version involved doesn't silently chomp CHAR columns over four characters long into VARCHARs, the identity table sounds perfectly fine. (Keep in mind that you really only need to worry horribly about enums-as-data when the enum is used in multiple tables. A status column in a single table with a restricted set of possibilities that won't change frequently is ok to keep enumized, though it's still not a super solution. Oh if only MySQL supported CHECK clauses...) McGlockenshire fucked around with this message at 18:36 on Mar 14, 2011 |
# ¿ Mar 14, 2011 18:34 |
|
Be careful with REPLACE INTO when the table you're operating on is referred to by other tables with ON DELETE CASCADE. It performs a delete and re-insert, and the delete cascades immediately.
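If upsert behavior is what you were actually after, INSERT ... ON DUPLICATE KEY UPDATE avoids the delete entirely. A sketch against a hypothetical users table:

```sql
-- REPLACE INTO is DELETE + INSERT under the hood, so child rows with
-- ON DELETE CASCADE vanish. This updates the row in place instead,
-- leaving children untouched.
INSERT INTO users (id, name)
VALUES (1, 'alice')
ON DUPLICATE KEY UPDATE name = VALUES(name);
```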
|
# ¿ Mar 15, 2011 06:47 |
|
Possibly-stupid drowsy-before-bed idea. Are there ever conditions under which word A is synonymous with word B, but word B is not synonymous with A? I'm pretty sure the answer is no. Therefore, you can effectively group all of the related words together into synonym families.

Table Families ( family integer primary key )
Table Words ( word text, family integer references Families(family), unique(word, family) )

To get synonyms: SELECT word, family FROM Words WHERE family = ( SELECT family FROM Words WHERE word = ? )

Keep in mind that words can be multiple parts of speech and have a different meaning based on that context. This implies that each word can be a member of multiple families, which instantly breaks the query I just wrote. You'd need to add part-of-speech or context or something else to the Words table to differentiate different word meanings to tie them together with the appropriate family of synonyms, then find a way to query that one specific word+pos+context. Given that there could be multiple entries for one word, you could end up returning words from the synonym query that belong to other families, which you could then crawl out to a certain extent to get related-but-not-synonymous words.

Anyway, if this seems stupid, pick option #1. Option #2, the creation of multiple tables for specific data, is an anti-pattern and you should forget you ever proposed it.

McGlockenshire fucked around with this message at 07:32 on Mar 28, 2011 |
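Spelled out as real DDL, the family idea looks something like this (names as above; the part-of-speech wrinkle left out):

```sql
CREATE TABLE Families (
    family INTEGER NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

CREATE TABLE Words (
    word   VARCHAR(64) NOT NULL,
    family INTEGER     NOT NULL,
    UNIQUE (word, family),
    -- MySQL silently ignores inline REFERENCES, so spell the FK out.
    FOREIGN KEY (family) REFERENCES Families (family)
) ENGINE=InnoDB;

-- All synonyms of one word (breaks once a word joins multiple families):
SELECT word, family
FROM Words
WHERE family = (SELECT family FROM Words WHERE word = 'car');
```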
# ¿ Mar 28, 2011 07:29 |
|
You have tuned my.cnf for your server specs, right?
|
# ¿ Jun 24, 2011 06:42 |
|
Because ALTER has to basically rewrite the table, yes, it has to get a lock, and no, there's no sane way to bypass this restriction. What are you trying to accomplish?
|
# ¿ Jul 7, 2011 00:12 |
|
Ah, you people and your modern, sane databases.

Traditional MySQL replication works by just sending the actual SQL commands to all listening slaves. If you run the ALTER on the master, the slaves will also receive the command (once completed) unless you halt replication, disconnect every client, record the log position, run the command, record the new log position, then update every slave to pick up replication at the new log position. Modern MySQL replication can just send over changed data, but DDL statements are still replicated as actual SQL text.

quote:Adding a field to a large table without write/read locking, or write/read locking for as short time as possible.

Depending on whether or not you can just disable connections to the database, this may or may not be filled with peril and the chance of subtle data loss if the original table is changed between the time you INSERT INTO ... SELECT and do the rename. MySQL does not support DDL statements inside a transaction, so an exclusive lock is your only choice here. Running the ALTER directly is the thing least likely to be trouble for you.
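For reference, the shadow-table approach being warned about looks like this (table and column names invented). The peril: any write to big_table between the INSERT ... SELECT and the RENAME is silently lost.

```sql
-- Build a copy with the new column, backfill it, then swap names.
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ADD COLUMN new_col INT NULL;  -- new column lands last

-- Backfill; the trailing NULL fills the new column.
INSERT INTO big_table_new SELECT *, NULL FROM big_table;

-- Atomic swap; writes arriving since the backfill started are gone.
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;
```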
|
# ¿ Jul 7, 2011 07:23 |
|
quote:Does anyone know how Postgres handles it?

The ALTER TABLE docs say that DROP COLUMN doesn't require a table rewrite, but changing a column data type does require one. DDL is also supported in transactions, thank goodness.
|
# ¿ Jul 7, 2011 17:42 |
|
RoadCrewWorker posted:I really love postgres and couldn't go back to mysql, but man, there's some severely headache inducing flaws that just make me think i must be missing something incredibly obvious.
|
# ¿ Jul 7, 2011 22:21 |
|
Looks like you're hitting a deadlock. Increase it again and wait for it to be near a timeout, then run SHOW FULL PROCESSLIST as root on both instances to reveal the queries that are deadlocking.
|
# ¿ Jul 13, 2011 20:41 |
|
Bob Morales posted:I'm dyslexic.

Lines 36-40, connection 5 (second row in the process list) is holding open a transaction that, for some reason, connection 4 (first row) wants to lock upon. Unfortunately there's no clue there on why it's blocking, as nothing interesting is listed in those four lines. Connection 4 is just trying to get a shared lock ("LOCK mode S locks rec but NOT gap waiting"), no idea why it'd be denied here. Does the tool make two connections?

There's lots of noise in Google about that particular lock state, but nothing looks immediately relevant. You may find Percona's SHOW INNODB STATUS walkthrough helpful.

McGlockenshire fucked around with this message at 06:48 on Jul 14, 2011 |
# ¿ Jul 14, 2011 06:45 |
|
No, SQL uses double quotes. MySQL uses backticks, but also accepts double quotes. Oh, and MSSQL uses loving square brackets, I am not making that up. If you want to think cross-platform, think double quotes.
|
# ¿ Jul 15, 2011 08:01 |
|
If not for it being MyISAM and other shenanigans, I'd recommend XtraBackup, which can take and restore entire snapshots, and even has a mode designed to create a new slave from a master. While it works with MyISAM in addition to InnoDB, it's really designed to copy entire *instances*, not just specific databases within the instance.

The various replicate ignore bits do work; however, the data still has to be transmitted from master to slave, even if the slave will end up ignoring it. This shouldn't be a problem as long as master and slave are local to each other. Due to the way the binary log works, though, that amount of data is going to cause some non-trivial replication lag while the slave works through the transmitted log.

McGlockenshire fucked around with this message at 04:31 on Aug 20, 2011 |
# ¿ Aug 20, 2011 04:27 |
|
Golbez posted:We're slightly paranoid about Oracle's moves on MySQL; does anyone have any experience or comment on MariaDB?

Also consider Percona's not-a-fork. I've used it as a drop-in replacement a few times, and it's worked well. The only problems I've had are the same problems you usually have when upgrading, like forgetting about old_passwords and client library versions.
|
# ¿ Sep 28, 2011 22:58 |
|
|
mister_gosh posted:I'm looking to create hot (and cold) backups of a production MySQL database. Are there any tools out there that are proven/standard?

Percona's XtraBackup is pretty much the gold standard here and works with both MyISAM and InnoDB tables.
|
# ¿ Nov 16, 2011 05:35 |