|
Sprawl posted:It's very possible the ADO recordset was using an ODBC driver at some point that was updated to no longer replace blank spaces with underscores. That actually happens?
|
# ? Jun 30, 2011 07:29 |
|
|
|
Apparently so. Yeah, I'm gonna write this one off as an ODBC driver issue. I didn't update the sucker, and if someone else did, they didn't tell me... but honestly, I don't care. It was wrong, it's fixed, end of story as far as I'm concerned. The boss is just obsessed with accountability is all. Not entirely without reason, I must admit... I didn't update IIS or any drivers recently, and if someone else did they REALLY should have let me know. And yeah Thel, it's an ASP classic environment. I know, I know... I'd love to upgrade to .NET, but every single time the DBA and I bring the matter up we get shot down on the grounds that it would take too much time and energy away from other things.
|
# ? Jun 30, 2011 17:03 |
|
I'm using MSSQL 2005 so I'm stuck with the datetime datatype. I have a table with a series of dated events indexed by the id of the company where each event occurred. In these events time does not and never will matter, but I'm going to have to run a lot of queries based on the date events occurred. For example, SELECT DISTINCT DatePartOfTheDateField WHERE CompanyID = 123 will be a particularly common one.

Looking around on Google, there are a lot of different methods for querying only the date part of a datetime field, but most of them involve some form of conversion, and it seems to me that the sort of query I'm talking about above would be really, really inefficient in a large (millions of records) database, and no amount of indexing is going to help that.

My first thought (well, other than upgrading to 2008) was to see if it's possible to create an index on a function of the date column, but I'm not sure if/how that works. My next was some kind of trigger that would set the datetime column (or a second column) to midnight automatically whenever a record was updated, and index that. My final thought was to do the above with a trigger, except converting to some kind of integer representation of the date (number of days since 1/1/1900 or something, a timestamp I guess) and indexing that, on the assumption that indexed integers are a lot more efficient than indexed dates and times. This data is only ever going to be accessed by a single .NET class, so I can do all the conversions before the query and on the results in that class.

I have no idea if any of this makes sense or if I'm on the right track. My main concerns are that there's some degree of reinventing the wheel here, and that if we do eventually upgrade to 2008 (which has a date datatype) the whole thing would become an ugly, pointless legacy.

Edit: Just ran a few tests with 100,000 records. I was surprised to find that querying the integer timestamp and the datetime column where all the dates were set to midnight took literally the same amount of time. Where I was converting the dates as part of the query it took about ten times as long, which was not surprising. Gravy Jones fucked around with this message at 15:08 on Jul 4, 2011 |
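For what it's worth, the "index on a function of the date column" idea does exist in SQL Server 2005 as an indexed computed column. A minimal sketch, assuming hypothetical Events/EventDateTime names (the DATEADD/DATEDIFF pair is the usual deterministic way to strip the time portion):

```sql
-- Hypothetical table/column names for illustration.
-- SQL Server 2005 can index a deterministic computed column,
-- which avoids both the trigger and the manual integer conversion.
ALTER TABLE Events
    ADD EventDate AS DATEADD(day, DATEDIFF(day, 0, EventDateTime), 0) PERSISTED;

CREATE INDEX IX_Events_CompanyID_EventDate
    ON Events (CompanyID, EventDate);

-- The common query can then use the index directly:
SELECT DISTINCT EventDate
FROM Events
WHERE CompanyID = 123;
```

This keeps the original datetime column untouched, so nothing would need to change if the table later moved to 2008's date type.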
# ? Jul 4, 2011 12:57 |
|
Gravy Jones posted:
Short of switching to 2k8 I think you're on the right track. I would serialize the dates into number strings and index those strings in the database. The date would be converted going in and out of the table either by your application or by the database itself. If anyone ever did want to look at the database easily, you could just create a view of the table.
|
# ? Jul 4, 2011 15:08 |
|
Gravy Jones posted:I'm using MSSQL 2005 so I'm stuck with the datetime datatype. You should also be able to create a view that does the same thing.
|
# ? Jul 4, 2011 15:15 |
|
Gravy Jones posted:Date stuff in SQL Server 2005 What I would do is either add a field to the table (and update it via triggers), or, as Sprawl suggested, use a view. It depends on your data and usage as to which is better (if the data changes often but you only need to query on the dates occasionally, go with the view; if your data is mostly static but you're hitting it with datepart queries every second, the extra field might be better*). As far as actually converting the datetime, instead of doing the Excel-style "days since epoch start", just convert the dateparts to YYYYMMDD (e.g. 20110706). Less mess, less fuss, more usability. (Unless you're running queries like "give me from now to six weeks ago", in which case augh. Either way it's going to be a mess.) * I haven't actually tested this, and I don't know if it's true. Now that I think about it, if you make the view an indexed view the performance should be about the same.
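The YYYYMMDD conversion can be sketched like this, with hypothetical table/column names (style 112 is CONVERT's ISO yyyymmdd format):

```sql
-- Illustrative only: turn a datetime into an integer YYYYMMDD key.
-- Style 112 formats the date as 'yyyymmdd' with no separators.
SELECT CAST(CONVERT(char(8), EventDateTime, 112) AS int) AS DateKey
FROM Events;  -- hypothetical names
```

The resulting integers sort and compare the same way the dates do, which is what makes the format usable for range queries as well as equality.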
|
# ? Jul 5, 2011 00:03 |
|
So is there any way to do an ALTER TABLE on a 5.1 MySQL system without write locking? I know just regularly there's not, but surely there's some strange voodoo way of creating another instance and keeping track of checkpoints and that, or copying to temp tables blah blah.
|
# ? Jul 6, 2011 22:39 |
|
Because ALTER has to basically rewrite the table, yes, it has to get a lock, and no, there's no sane way to bypass this restriction. What are you trying to accomplish?
|
# ? Jul 7, 2011 00:12 |
|
McGlockenshire posted:Because ALTER has to basically rewrite the table, yes, it has to get a lock, and no, there's no sane way to bypass this restriction. What are you trying to accomplish? Adding a field to a large table without write/read locking, or write/read locking for as short time as possible.
|
# ? Jul 7, 2011 01:37 |
|
J. Elliot Razorledgeball posted:Adding a field to a large table without write/read locking, or write/read locking for as short time as possible. Dunno if this works in MySQL, but in Oracle you can add a column to a giant table as nullable with no default, and it will lock the table for only a very short period of time. Then, if you need to populate the column and set a default/NOT NULL, you can update the column values via a loop that commits in batches to keep any locks from lasting too long, and then alter the column to have a default/be NOT NULL once the update loop is done. That last step should also be quite fast, since the column should be fully populated with values (or close to it). Vanadium Dame fucked around with this message at 02:18 on Jul 7, 2011 |
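The Oracle approach described above might look roughly like this (hypothetical table/column names, batch size picked arbitrarily):

```sql
-- Step 1: fast, catalog-only change -- nullable, no default.
ALTER TABLE big_table ADD (new_col NUMBER NULL);

-- Step 2: backfill in batches so no single transaction holds locks long.
DECLARE
  rows_done PLS_INTEGER;
BEGIN
  LOOP
    UPDATE big_table
       SET new_col = 0
     WHERE new_col IS NULL
       AND ROWNUM <= 10000;
    rows_done := SQL%ROWCOUNT;
    COMMIT;
    EXIT WHEN rows_done = 0;
  END LOOP;
END;
/

-- Step 3: enforce the default/constraint once the column is populated.
ALTER TABLE big_table MODIFY (new_col DEFAULT 0 NOT NULL);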
# ? Jul 7, 2011 02:16 |
|
Markoff Chaney posted:Dunno if this works in MySQL, but in Oracle you can add a column to a giant table as not having a default/and is null and it will lock the table for a only very short period of time. Then if you need to populate the column and set a default/not null you can update the column values via a loop that batch commits to keep any locks from lasting too long, and then alter the column to have a default/be not null once the update loop is done which should also be quite fast since the column should be fully populated with values (or close to it). This works in MSSQL too; watch out if you're replicating though. Actually altering a replicated table in the first place means you'll probably have to resubscribe/publish anyway.
|
# ? Jul 7, 2011 02:30 |
|
Ah, you people and your modern, sane databases. Traditional MySQL replication works by just sending the actual SQL commands to all listening slaves. If you run the ALTER on the master, the slaves will also receive the command (once completed) unless you halt replication, disconnect every client, record the log position, run the command, record the new log position, then update every slave to pick up replication at the new log position. Modern MySQL replication can just send over changed data, but DDL statements are still replicated as actual SQL text.

quote:Adding a field to a large table without write/read locking, or write/read locking for as short time as possible.

Depending on whether or not you can just disable connections to the database, this may or may not be filled with peril and the chance of subtle data loss if the original table is changed between the time you INSERT INTO ... SELECT and do the rename. MySQL does not support DDL statements inside a transaction, so an exclusive lock is your only choice here. Running the ALTER directly is the thing least likely to be trouble for you.
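For the curious, the copy-and-rename approach alluded to above looks something like this (hypothetical names); the data-loss window it warns about is the gap between the INSERT ... SELECT finishing and the rename running:

```sql
-- Sketch only: build an altered copy, then atomically swap names.
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ADD COLUMN new_col INT NULL;

-- Copy everything; writes to big_table after this point are lost.
INSERT INTO big_table_new SELECT *, NULL FROM big_table;

-- RENAME TABLE swaps both names in one atomic operation.
RENAME TABLE big_table TO big_table_old,
             big_table_new TO big_table;
```

Which is exactly why it's only safe if you can block writes to the original table for the duration.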
|
# ? Jul 7, 2011 07:23 |
|
McGlockenshire posted:I'm going to give this a shot
|
# ? Jul 7, 2011 07:44 |
|
McGlockenshire posted:Remind me never to use MySQL for ... anything. Jesus.
|
# ? Jul 7, 2011 08:03 |
|
Thel posted:
I think that's a bit overreaching; MySQL obviously has its uses. Online schema changes just happen to not be a strong point.
|
# ? Jul 7, 2011 08:07 |
|
Does anyone know how Postgres handles it?
|
# ? Jul 7, 2011 15:19 |
|
quote:Does anyone know how Postgres handles it? The ALTER TABLE docs say that DROP COLUMN doesn't require a table rewrite, but changing a column data type does require one. DDL is also supported in transactions, thank goodness.
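A quick illustration of the transactional DDL bit, with hypothetical names (adding a nullable column with no default is a catalog-only change in Postgres, so no table rewrite):

```sql
-- Postgres runs DDL inside transactions, so a schema change can be
-- tried and backed out atomically.
BEGIN;
ALTER TABLE big_table ADD COLUMN new_col integer;  -- catalog-only, no rewrite
-- sanity-check the result, then either COMMIT; or:
ROLLBACK;
```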
|
# ? Jul 7, 2011 17:42 |
|
I have a table of data that needs to be updated using the following rules, and I don't have the foggiest notion of how to go about solving it:

1. When a MemberID is established for a GUID, update all of the prior MemberIDs that equal zero with that GUID (for all sessions).
2. If a different MemberID is then established for that GUID, use that MemberID for all subsequent sessions.
3. If a session is started with no MemberID, but a MemberID is subsequently established, update the entire session with the MemberID.
4. If a session has an existing MemberID (from rule 2) and a MemberID is established during the session, update the whole session with the new MemberID.

Agrikk fucked around with this message at 19:29 on Jul 7, 2011 |
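As a starting point for rule 1 only, assuming a hypothetical Sessions(GUID, SessionID, MemberID) layout, something like this T-SQL might work (it picks an arbitrary nonzero MemberID per GUID; rules 2-4 need ordering logic on top of this):

```sql
-- Hypothetical schema: Sessions(GUID, SessionID, MemberID).
-- Rule 1: once a nonzero MemberID is known for a GUID, backfill the
-- zero-MemberID rows for that GUID across all sessions.
UPDATE s
   SET s.MemberID = m.MemberID
  FROM Sessions s
  JOIN (SELECT GUID, MAX(MemberID) AS MemberID
          FROM Sessions
         WHERE MemberID <> 0
         GROUP BY GUID) m
    ON m.GUID = s.GUID
 WHERE s.MemberID = 0;
```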
# ? Jul 7, 2011 19:27 |
|
McGlockenshire posted:Unfortunately I don't have a large enough data set in PG to properly test this -- only a few thousand rows, only a few hundred megs. Adding a column was instantaneous in my largest tables, as was dropping. I've got about 50 or so 15-20 GB tables (not counting indices) with >200 million rows each, so that'd come in terribly handy. Queries I used with MySQL are now completely impossible. I really love Postgres and couldn't go back to MySQL, but man, there are some severely headache-inducing flaws that just make me think I must be missing something incredibly obvious.
|
# ? Jul 7, 2011 19:33 |
|
RoadCrewWorker posted:I really love postgres and couldn't go back to mysql, but man, there's some severely headache inducing flaws that just make me think i must be missing something incredibly obvious.
|
# ? Jul 7, 2011 22:21 |
|
Agrikk posted:I have a table of data that needs to be updated using the following rules and I don't have the foggiest notion of how to go about solving it: From your previous posts I'm assuming you're using SQL Server and therefore have access to the row_number() function. I think these three beasts of update statements should do what you want (they need to be run in order): code:
The second two could also be achieved in a much clearer way by writing procedural code to iterate over a cursor and update the rows that way, but I wanted to see if I could solve it using only SQL. Goat Bastard fucked around with this message at 10:16 on Jul 8, 2011 |
# ? Jul 8, 2011 10:08 |
|
I've started messing around with postgreSQL, and I've reached a point where I need to actually crack open a book. Are there any that are strongly recommended for beginners?
|
# ? Jul 9, 2011 16:24 |
|
angrytech posted:I've started messing around with postgreSQL, and I've reached a point where I need to actually crack open a book. Are there any that are strongly recommended for beginners? Do you need a Postgres book or just a plain SQL book? The documentation online for Postgres is pretty solid/complete.
|
# ? Jul 9, 2011 20:10 |
|
No Safe Word posted:Do you need a Postgres book or just a plain SQL book? The documentation online for Postgres is pretty solid/complete. I just want to kill trees when I learn Postgres, although if there's a really awesome SQL book, recommend away.
|
# ? Jul 9, 2011 22:47 |
|
I'm very interested in an introduction to SQL book. Sounds like angrytech is far more advanced than I am, but what would be a good first book for learning SQL?
|
# ? Jul 11, 2011 18:49 |
|
Lyon posted:...angrytech ... far more advanced than I am Oh god no, but thanks. An intro to SQL book would be awesome. I just don't see myself using MSSQL, Oracle or MySQL anytime soon, so I feel like I can afford to specialize in Postgres.
|
# ? Jul 11, 2011 20:11 |
|
Quick MSSQL memory question. SQL 2008 R2 Standard supports up to 64GB of RAM. Does anyone know if that limit is per server or per instance? I'd really like to get one beefy server with 128GB of RAM and run two instances (64GB each) instead of having to buy two separate servers.
|
# ? Jul 12, 2011 19:29 |
|
Some quick googling says that it's 64GB per instance.
|
# ? Jul 12, 2011 19:37 |
|
Jethro posted:Some quick googling says that it's 64GB per instance. That's the same thing I was seeing, but I was worried since that info wasn't coming directly from Microsoft. I ended up having my boss talk to our CDW rep today and he confirmed that it is 64GB per instance, which makes things easier.
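For reference, the per-instance cap is set with sp_configure on each instance (values are in MB):

```sql
-- Run on each instance: limit the buffer pool so two Standard
-- instances can share a 128 GB box at 64 GB apiece.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;
```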
|
# ? Jul 13, 2011 17:03 |
|
This is a longshot, but does anyone use Webyog sja to synchronize MySQL databases? I'm trying to use it, but I get this error every time: code:
These are InnoDB (I think) tables. I'm trying to see if using this tool is faster than dropping/re-dumping the databases (we might only need to sync up the last week's worth of data). I increased innodb_lock_wait_timeout to 100 and it does the same thing; it just takes longer. I also tried adding innodb_table_lock=0, no difference. Both servers are CentOS x86_64 5.6 and mysql is ver 14.12 Distrib 5.0.77.
|
# ? Jul 13, 2011 19:40 |
|
Looks like you're hitting a deadlock. Increase it again and wait for it to be near a timeout, then run SHOW FULL PROCESSLIST as root on both instances to reveal the queries that are deadlocking.
|
# ? Jul 13, 2011 20:41 |
|
McGlockenshire posted:Looks like you're hitting a deadlock. http://pastebin.us/463 That's the target server, the other one didn't have anything interesting
|
# ? Jul 13, 2011 22:14 |
|
Bob Morales posted:http://pastebin.us/463 Looks like that site has been hijacked or something, paste it here.
|
# ? Jul 14, 2011 00:11 |
|
I'm dyslexic. http://pastebin.us/643
|
# ? Jul 14, 2011 00:32 |
|
I'm having a complete brain fart here, so perhaps someone can lend a hand. I have three tables: Routes, Flights, and Fares.

* Routes contains rows that indicate you can fly between A and B, and all A-B pairs are unique.
* Flights contains rows that describe each individual flight between A and B (AF 443 on July 14, 2011).
* Fares contains a history of every price we've seen reported for a given flight.

I'm looking to run a single query that finds the most recent fares for each flight, then picks the cheapest flight to every destination from a given origin. Here's the best stab I've made so far: code:
It would seem that I need to apply a sort to my GROUP, but that doesn't change the results. Any thoughts?
|
# ? Jul 14, 2011 03:37 |
|
isnoop posted:I'm having a complete brain fart here, so perhaps someone can lend a hand. What's with fare2? What is it doing? Also, when you're doing a composite query like this, it's probably easier to build it up in stages and test each stage. Just to make sure, you have a given origin (i.e. JFK), and want one result for each single outbound route, with the cheapest price in your date range?
|
# ? Jul 14, 2011 05:29 |
|
Bob Morales posted:I'm dyslexic. Lines 36-40, connection 5 (second row in the process list) is holding open a transaction that, for some reason, connection 4 (first row) wants to lock upon. Unfortunately there's no clue there on why it's blocking, as nothing interesting is listed in those four lines. Connection 4 is just trying to get a shared lock ("LOCK mode S locks rec but NOT gap waiting"), no idea why it'd be denied here. Does the tool make two connections? There's lots of noise in Google about that particular lock state, but nothing looks immediately relevant. You may find Percona's SHOW INNODB STATUS walkthrough helpful. McGlockenshire fucked around with this message at 06:48 on Jul 14, 2011 |
# ? Jul 14, 2011 06:45 |
|
isnoop posted:... If your database supports analytic functions then you probably want something like code:
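A hedged sketch of the analytic-function version, with hypothetical table/column names (route_id, reported_at, etc. are assumptions about the schema): ROW_NUMBER() ranks the latest fare for each flight by price within each destination, then the outer query keeps only the cheapest one.

```sql
-- Hypothetical schema; adjust names to match the real tables.
SELECT origin, destination, flight_id, price
FROM (
    SELECT r.origin, r.destination, f.flight_id, fa.price,
           ROW_NUMBER() OVER (
               PARTITION BY r.destination
               ORDER BY fa.price
           ) AS rn
    FROM Routes r
    JOIN Flights f ON f.route_id = r.route_id
    JOIN Fares fa  ON fa.flight_id = f.flight_id
    WHERE r.origin = 'JFK'
      -- only the most recent fare recorded for each flight:
      AND fa.reported_at = (SELECT MAX(reported_at)
                              FROM Fares
                             WHERE flight_id = fa.flight_id)
) ranked
WHERE rn = 1;
```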
|
# ? Jul 14, 2011 09:22 |
|
I've just started working with MySQL Workbench, and if I'm logged into a SQL server and I've selected my schema and I type: DROP DATABASE 'nameofdb'; and hit run, it says there is an error in my syntax. Now I'm sure that syntax is correct. What is the problem here?
|
# ? Jul 14, 2011 14:24 |
|
|
|
revmoo posted:I've just started working with mysql workbench, and if I'm logged into a sql server and I've selected my schema and I type: '' is for strings, try using backticks ``
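That is, in MySQL single quotes make a string literal while identifiers take backticks (or no quoting at all if the name is plain):

```sql
DROP DATABASE `nameofdb`;    -- backticks quote an identifier
-- DROP DATABASE 'nameofdb'; -- syntax error: 'nameofdb' is a string literal
```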
|
# ? Jul 14, 2011 15:46 |