Golbez
Oct 9, 2002

1 2 3!
If you want to take a shot at me get in line, line
1 2 3!
Baby, I've had all my shots and I'm fine
How to run a search engine?

How I've been doing it: Query selects every column the results table needs.

How I'm toying with doing it: Query only gets the unique IDs, then passes each ID to my Record object in PHP, which then loads each ID's data.

We go from one large query, to potentially hundreds or even thousands of small ones - yet I see zero speed degradation so far (it's nice to have a dedicated database server), and things are kept inside the pretty solid class, meaning one less place where I have to deal with code. Is this a good idea to continue with, for instances where I do have a data object available? Or is this something that's going to bite me in the rear end in the future?

I almost asked this in the PHP thread but I'm guessing this is more of a MySQL question, since it's the database that's being hammered, not PHP.
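For concreteness, a minimal sketch of the two approaches (table and column names here are illustrative, not an actual schema):

code:
-- one big query: grab everything the results page needs at once
SELECT id, title, summary
FROM records
WHERE title LIKE '%search term%';

-- ID-only query, then one small primary-key lookup per hit
-- (the kind of query the Record object would run for each ID)
SELECT id
FROM records
WHERE title LIKE '%search term%';

SELECT id, title, summary
FROM records
WHERE id = 42;
The per-ID lookups are primary-key hits, so each one is cheap on its own; the cost is mostly in the extra round trips.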

LP0 ON FIRE
Jan 25, 2006

beep boop

Begby posted:

What about writing a script that will go through each record, get rid of the special characters, then search for the filename in the file system? If it exists it will update the record; if not it will write to a log. After it's all done, assuming most of your names match as you say, 99% of them will be done. Then all you have to do is go through the log and take care of the missing ones.

Looping through your records will mean only one query to grab all the records and one update for each found file (vs. doing a cumbersome sql search for every file, then an update). Then you can use regular expressions or whatever in a language instead of dicking around with MySQL replace.

This is exactly what I decided to do! Way better idea than abusing the MySQL replace.

Sub Par
Jul 18, 2001


Dinosaur Gum

awdio posted:

The search term needs to be added to the same table in the correct row as the product name. It's problem solving for a company that needs to have their image filenames go with the correct products. Luckily enough, the image names are very similar aside from not having spaces or special characters. It's going to be a giant query that will just add the filenames to the correct product.

I believe there are some UDFs for MySQL that will work like regex_replace() in Oracle. Google finds this, not sure how helpful that is, but it seems easier than nesting replace() ten times.
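For comparison, the nested-replace version looks something like this (hypothetical table and column names, and only three characters deep instead of ten):

code:
UPDATE products
SET image_filename = REPLACE(REPLACE(REPLACE(product_name, ' ', ''), '-', ''), '''', '');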

LP0 ON FIRE
Jan 25, 2006

beep boop

Sub Par posted:

I believe there are some UDFs for MySQL that will work like regex_replace() in Oracle. Google finds this, not sure how helpful that is, but it seems easier than nesting replace() ten times.

Wow, this is amazing, and I'll probably use this someday. Thanks.

baquerd
Jul 2, 2007

by FactsAreUseless
This is really basic, but I'm not sure of the syntax. Using Sybase, I want to select a numerical value, but do an in-place swap to a string.

For example, with this table:

0
1
2
1
1

I want to be able to select out:
zero
one
two
one
one

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

quadreb posted:

This is really basic, but I'm not sure of the syntax. Using Sybase, I want to select a numerical value, but do an in-place swap to a string.

For example, with this table:

0
1
2
1
1

I want to be able to select out:
zero
one
two
one
one

I am not sure about Sybase, but in other databases you can use a case or switch statement to do this, or you can create a new table with the integers/text and do a join.
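A sketch of the CASE version — this is standard SQL and should work in Sybase too, though it's untested there (table and column names are made up):

code:
SELECT CASE num_col
           WHEN 0 THEN 'zero'
           WHEN 1 THEN 'one'
           WHEN 2 THEN 'two'
           ELSE 'other'
       END AS num_in_words
FROM some_table;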

Sub Par
Jul 18, 2001


Dinosaur Gum

quadreb posted:

This is really basic, but I'm not sure of the syntax. Using Sybase, I want to select a numerical value, but do an in-place swap to a string.

For example, with this table:

0
1
2
1
1

I want to be able to select out:
zero
one
two
one
one
In Oracle, there are a couple of cheat ways to do this using to_char() and some of the date formats. For example:

code:
 select to_char(to_timestamp(lpad(integer_here,9,'0'),'FF9'),'FFSP') number_in_words from dual
With an integer between 0 and 9,999,999 in the integer_here part, this will return the number in words. I'm sure there's a similar way to do this in Sybase if you play around with it some; I'm not familiar with the Sybase implementation of to_char().

baquerd
Jul 2, 2007

by FactsAreUseless

Sub Par posted:

code:
 select to_char(to_timestamp(lpad(integer_here,9,'0'),'FF9'),'FFSP') number_in_words from dual

That's an imaginative implementation. I ended up just doing it after I extracted the query data.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



In Postgres, is there a way to do the following, or will I have to do it outside of the database & update the values like that?

code:
UPDATE 
	parents p
SET
	average = AVG(pp.value)
FROM
	relations pr,
	children pp
WHERE
	p.id = pr.parent_fk AND pp.id = pr.child_fk AND p.value IS NULL;
It tells me:

ERROR: cannot use aggregate function in UPDATE

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Use a sub-select, like
code:
UPDATE parents p
SET average = (SELECT AVG(pp.value)
               FROM relations pr
                   JOIN children pp ON (pp.id = pr.child_fk)
               WHERE p.id = pr.parent_fk)
WHERE p.value IS NULL;
(Not tested)

If you're using Postgres 8.4 you can use Common Table Expressions, but that might be overkill.
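Another way that avoids re-running the sub-select once per parent row is to aggregate first and join the result in via FROM — also not tested:

code:
UPDATE parents p
SET average = s.avg_value
FROM (SELECT pr.parent_fk, AVG(pp.value) AS avg_value
      FROM relations pr
          JOIN children pp ON (pp.id = pr.child_fk)
      GROUP BY pr.parent_fk) s
WHERE p.id = s.parent_fk
  AND p.value IS NULL;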

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



minato posted:

Use a sub-select, like
code:
UPDATE parents p
SET average = (SELECT AVG(pp.value)
               FROM relations pr
                   JOIN children pp ON (pp.id = pr.child_fk)
               WHERE p.id = pr.parent_fk)
WHERE p.value IS NULL;
(Not tested)

If you're using Postgres 8.4 you can use Common Table Expressions, but that might be overkill.

Ah thanks. I had tried with AVG(SELECT ...) but yours actually works :)

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm having a problem with database corruption. I am going through millions of rows with a Java program communicating with MySQL using JDBC. The JVM often crashes gracelessly while executing these queries, and sometimes afterwards one or more tables get marked as crashed.

I wrote a script to restore the database from a pile of CSV files. It drops the indexes and the tables, recreates the tables, fills them from the CSV files, then rebuilds the indexes. My idea was to use this whenever a table was marked as crashed, because I do lose data when I run the repair command, and I seem to lose data arbitrarily.

This most recent time, I got the exception that a table had been marked as crashed, so I went through the motions. I start up my program again afterwards, and it's still marked as crashed, after all this dropping and rebuilding! Is that a record maintained someplace else that had gone stale? When I recreate the database, I use the same names for my tables.

The JDBC problem does look like a JVM bug. I've seen it on two different Linux distributions across different minor releases of the Sun 64-bit JVM. I was toying around with my code on a laptop and used a 32-bit Linux with a 32-bit JVM and got better results. I'm trying to switch back to a 32-bit JVM on my main system, and I honestly think it'll help. So I'm working on the underlying problem so I can get a bug report in.

Until then, I hate all the coddling I've been having to do to execute these queries.

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

Rocko Bonaparte posted:

I'm having a problem with database corruption. I am going through millions of rows with a Java program communicating with MySQL using JDBC. The JVM often crashes gracelessly while executing these queries, and sometimes afterwards one or more tables get marked as crashed.

I wrote a script to restore the database from a pile of CSV files. It drops the indexes and the tables, recreates the tables, fills them from the CSV files, then rebuilds the indexes. My idea was to use this whenever a table was marked as crashed, because I do lose data when I run the repair command, and I seem to lose data arbitrarily.

This most recent time, I got the exception that a table had been marked as crashed, so I went through the motions. I start up my program again afterwards, and it's still marked as crashed, after all this dropping and rebuilding! Is that a record maintained someplace else that had gone stale? When I recreate the database, I use the same names for my tables.

The JDBC problem does look like a JVM bug. I've seen it on two different Linux distributions across different minor releases of the Sun 64-bit JVM. I was toying around with my code on a laptop and used a 32-bit Linux with a 32-bit JVM and got better results. I'm trying to switch back to a 32-bit JVM on my main system, and I honestly think it'll help. So I'm working on the underlying problem so I can get a bug report in.

Until then, I hate all the coddling I've been having to do to execute these queries.

This isn't pretty. Have you stopped the server and messed around with the command line mysql tools to try and repair the tables? That would be my next step if the SQL repair commands didn't work. There are some pretty good MySQL dev docs for the steps to go through to do a repair.

Also, have you tried replication? MySQL replication works by logging SQL commands and then executing those same commands on the slave server. Assuming that the client is causing the corruption by a bad connection or faulty API use, and not by sending some crazy rear end SQL command, then if your master took a poo poo, the replicated slave would still be intact and you wouldn't have to go through all the bullshit to get your master up and running again.
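The rough shape of it, if you want to try: turn on binary logging and set a server-id on the master in my.cnf, then point the slave at it. The host name and credentials here are placeholders:

code:
-- run on the slave
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
SHOW SLAVE STATUS;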

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
It got interesting after I did the last rebuild. I rebuilt, had the crash flag, so I repaired. I then ran my program and it complained it was still marked as crashed. So it was like I was in a perpetual repair loop.

Begby posted:

This isn't pretty. Have you stopped the server and messed around with the command line mysql tools to try and repair the tables? That would be my next step if the SQL repair commands didn't work. There are some pretty good MySQL dev docs for the steps to go through to do a repair.

Also, have you tried replication? MySQL replication works by logging SQL commands and then executing those same commands on the slave server. Assuming that the client is causing the corruption by a bad connection or faulty API use, and not by sending some crazy rear end SQL command, then if your master took a poo poo, the replicated slave would still be intact and you wouldn't have to go through all the bullshit to get your master up and running again.
All I knew how to do was the "repair table" command. The repair commands did find some stuff. It seems I got a lot of errors regarding misaligned data. I can't quote directly since I'm not near it right now, but I am sure I can dump out an example later to show you, given the rate these happen.

I don't know much about replication, and that looks like another thing to look up. I would be interested if the JDBC connection is creating a lot of bullshit commands, but I am generally using it to execute direct SQL statements. I'm not much of a J2EE coder, but I know some of those queries can get pretty complex. In my case, I'm executing something like "select * from table where date >= 'something' & date <= 'something else'". It's hard-coded in my source to execute that. The queries are pretty dumb in that regard.

The particular table that crashes seems to have many rows in it. I'm basically stowing a bunch of stock data, and it's the NASDAQ one that usually shits itself. The NYSE data is the first I load in, and that often succeeds, but then NASDAQ dies. Interestingly enough, the NYSE table won't crash, but the NASDAQ table will, and the AMEX and ETF tables that are supposedly next on the list will sometimes also have their crash flags set.

Later I'll get the number of rows out of the NASDAQ table to get some idea of how big it is. Overall, across all the tables, I think I have something like 20 million rows.

I don't like the query times, and I wonder if the raw size of the tables has anything to do with my trouble. So I wondered if I should start dividing up my tables a bit more. For example, I could split a table of stock quotes up by the first letter of the symbol. I usually select based on dates, and I have indexes for that, so I just march from table to table getting all symbols in that table in the date span. The tables should be shorter, and I also should be able to better report my progress. So instead of having one big "NASDAQ_quotes" I could have "NASDAQ_quotes_0," "NASDAQ_quotes_A," "NASDAQ_quotes_B"... and so on. This should create a bunch of tables of no more than half a million rows each. Would this add a lot of overhead, or could this eventually help my crashing issues?
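A rough sketch of that split, with an illustrative symbol column (CREATE TABLE ... LIKE copies the column definitions and indexes):

code:
CREATE TABLE NASDAQ_quotes_A LIKE NASDAQ_quotes;

INSERT INTO NASDAQ_quotes_A
SELECT * FROM NASDAQ_quotes WHERE symbol LIKE 'A%';

SELECT * FROM NASDAQ_quotes_A
WHERE close_date >= '2007-01-01' AND close_date <= '2008-01-01';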

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.
20 million isn't too many rows for even one table. A lot of tables in one database I work with have more than that in a single table; it gets a lot of hits and I have never had it crash. You should be able to get it to go faster without splitting it up.

I wonder if it could be a hardware issue too? Is everything patched and up to date?

If it's going slow it could be your indexes. If you are selecting a date range and don't have the indexes set up properly then it will probably be creating temp tables or doing a file sort. That could be where the corruption is happening as well, if something pukes while a temp table is only half created.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Begby posted:

20 million isn't too many rows for even one table. A lot of tables in one database I work with have more than that in a single table; it gets a lot of hits and I have never had it crash. You should be able to get it to go faster without splitting it up.

I wonder if it could be a hardware issue too? Is everything patched and up to date?

If it's going slow it could be your indexes. If you are selecting a date range and don't have the indexes set up properly then it will probably be creating temp tables or doing a file sort. That could be where the corruption is happening as well, if something pukes while a temp table is only half created.
I was seeing crashes before I had the indexes prepared, but that could be another problem. This is more chronic. What I learned about creating indexes I learned from this thread after complaining about long load times. From what little I know, I should be able to create it and then let it be. Are there some commands I need to do to update indexes? I only have an index on one field--the date. I add some rows every day, but it's my understanding that index should update as I insert this new data. Regardless, the indexes most definitely will be up-to-date after I rebuild the table and recreate the index table afterwards.

It's been a few months since I installed mysql so I wouldn't claim it's completely patched. I can try to update it here in gentoo tonight and see what happens.

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

Rocko Bonaparte posted:

I was seeing crashes before I had the indexes prepared, but that could be another problem. This is more chronic. What I learned about creating indexes I learned from this thread after complaining about long load times. From what little I know, I should be able to create it and then let it be. Are there some commands I need to do to update indexes? I only have an index on one field--the date. I add some rows every day, but it's my understanding that index should update as I insert this new data. Regardless, the indexes most definitely will be up-to-date after I rebuild the table and recreate the index table afterwards.

It's been a few months since I installed mysql so I wouldn't claim it's completely patched. I can try to update it here in gentoo tonight and see what happens.

You don't need to update the index.

However, typically only one index gets used per table in a query. By default the primary key always gets an index made too. If something is in your where clause that is not in an index, then it can cause it to use a temporary table.

The best way to test this is to use EXPLAIN in a mysql client.

You just write your select SQL statement, then put EXPLAIN in front of it and run it. The results show you how many rows are hit in each table in the query and also what index was used for each table. It will also tell you possible indexes that would work, and whether or not a file sort or temp table is being used.

I am guessing you will need to create an index that uses both the stock symbol and the dates. The order of the fields in the index makes a difference as well. You can use EXPLAIN to figure all that out; it's a very handy tool.
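A sketch of both, using illustrative table and column names:

code:
CREATE INDEX symbol_date_index ON NASDAQ_closing (symbol, close_date);

EXPLAIN SELECT * FROM NASDAQ_closing
WHERE symbol = 'AAPL'
  AND close_date >= '2007-01-01' AND close_date <= '2008-01-01';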

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Begby posted:

The best way to test this is to use EXPLAIN in a mysql client.

You just write your select SQL statement, then put EXPLAIN in front of it and run it. The results show you how many rows are hit in each table in the query and also what index was used for each table. It will also tell you possible indexes that would work, and whether or not a file sort or temp table is being used.

I am guessing you will need to create an index that uses both the stock symbol and the dates. The order of the fields in the index makes a difference as well. You can use EXPLAIN to figure all that out; it's a very handy tool.
Generally I don't match by name at all; everything that fits within a date span is something I want to capture. Would a name index still be useful?

I guess I do an original query to find all the unique names across all the tables, but that query actually doesn't take very long. Could it still be potentially disruptive?

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

Rocko Bonaparte posted:

Generally I don't match by name at all; everything that fits within a date span is something I want to capture. Would a name index still be useful?

I guess I do an original query to find all the unique names across all the tables, but that query actually doesn't take very long. Could it still be potentially disruptive?

Do like I said, take your big slow queries and run explain on them one at a time. Then post the results back here.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I'm back in front of it so I can run it into the ground.

Begby posted:

Do like I said, take your big slow queries and run explain on them one at a time. Then post the results back here.
Well since you've brought it up a few times already:

code:
mysql> explain SELECT * FROM NASDAQ_closing WHERE (close_date >= '2007-01-01' && close_date <= '2008-01-01');
+----+-------------+----------------+-------+-------------------+-------------------+---------+------+--------+-------------+
| id | select_type | table          | type  | possible_keys     | key               | key_len | ref  | rows   | Extra       |
+----+-------------+----------------+-------+-------------------+-------------------+---------+------+--------+-------------+
|  1 | SIMPLE      | NASDAQ_closing | range | NASDAQ_date_index | NASDAQ_date_index | 4       | NULL | 967204 | Using where | 
+----+-------------+----------------+-------+-------------------+-------------------+---------+------+--------+-------------+
1 row in set (0.00 sec)
TBH it didn't take that long to run, so I'm wondering if instead the JVM is making GBS threads itself when it's reading the result sets. I don't see why that would crash the database. So my problem might be that my code is biting off more than it can chew at once.

I just ran the check command on all of them and they come out ok. I don't know what to think now.

I'm having the oddest problem running eclipse with a 32-bit JVM on my system. I set up the paths fine and run some java stuff with the 32-bit JVM, but eclipse craps out before showing a splash screen, and doesn't emit any kind of errors.

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

Rocko Bonaparte posted:

code:
mysql> explain SELECT * FROM NASDAQ_closing WHERE (close_date >= '2007-01-01' && close_date <= '2008-01-01');
+----+-------------+----------------+-------+-------------------+-------------------+---------+------+--------+-------------+
| id | select_type | table          | type  | possible_keys     | key               | key_len | ref  | rows   | Extra       |
+----+-------------+----------------+-------+-------------------+-------------------+---------+------+--------+-------------+
|  1 | SIMPLE      | NASDAQ_closing | range | NASDAQ_date_index | NASDAQ_date_index | 4       | NULL | 967204 | Using where | 
+----+-------------+----------------+-------+-------------------+-------------------+---------+------+--------+-------------+
1 row in set (0.00 sec)
Well, it's using the full key and it's as efficient as it gets. However, it's hitting almost 1 million records. That is going to be slow no matter what, even if you split your tables up further. Are you really using all 1 million of those records? Can you narrow your query down? I am surprised it's not running slower than it is. You are probably running out of RAM on the server, and maybe even on the client side if this query gets run often. Just sending those records across the wire is going to be slower than poo poo.

Begby fucked around with this message at 02:37 on Aug 8, 2009

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Begby posted:

Well, it's using the full key and it's as efficient as it gets. However, it's hitting almost 1 million records. That is going to be slow no matter what, even if you split your tables up further. Are you really using all 1 million of those records? Can you narrow your query down? I am surprised it's not running slower than it is. You are probably running out of RAM on the server, and maybe even on the client side if this query gets run often. Just sending those records across the wire is going to be slower than poo poo.
Yes I chew through every single one of them. I could try to break up my algorithm a bit to be able to work in smaller chunks, although that would take a lot more effort.

I'm not so sure about eating through all my RAM, but I do at one point have the data structure I intend to store the results in residing in RAM at the same time as the database result set. I'd have to find a way to measure the memory profile, given it's a Java app, but I can't imagine each record taking more than 128 bytes, which means maybe two 128MB structures. My JVM has 2.5GB allocated to it. I think the Sun JVM has some way to spit out GC statistics so I can see if I'm whacking against my upper bound constantly or something.

The database and the program using it both reside on the same computer, so are there any other little things I could try to do to speed it up? Local socket buffer or somesuch? Subdividing the queries was meant not to speed things up, but to better ensure the work actually completes. If it takes me five tries to get the one big query through, when splitting it up would let it finish reliably in maybe 1.5 times the normal time, then I'd split it up.

I found that putting NASDAQ first seems to work a bit better. I think it's the biggest one of the bunch.

There might be a hardware issue too. I actually had a longer edit to this post going as I was repairing the tables after the next time they came back crashed, and my system reset. A nice Gentoo box had completely gone bonkers. It did this twice! Memtest never brings up anything, and I wonder if it isn't my motherboard. I had the damndest time with Windows Vista for stuff like this, so this computer might very well just be crap.

Squashy Nipples
Aug 18, 2007

Is MySQL the only DB that supports "EXPLAIN"?

EDIT:
Never mind, I found out that DB2 supports it, I just didn't know the syntax.

Squashy Nipples fucked around with this message at 20:52 on Aug 11, 2009

Secx
Mar 1, 2003


Hippopotamus retardus
Nevermind, Google saved the day.

Secx fucked around with this message at 20:56 on Aug 11, 2009

onionradish
Jul 6, 2006

That's spicy.
I have an SQLite database of "books read."

While each book has a unique ID, I also want it to have a sequence number field -- the "nth" book that was read, based on the date completed. (Books may be entered out of sequence, so I can't use BOOK_ID directly.)

Is there an SQLite query I can run to update/generate this sequence number as a row value, or a way to calculate it as part of query? Maybe access the RowNumber through a query sorted by date?

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



Not sure if SQLite does LIMIT/OFFSET but if it does the following would work:

SELECT * FROM books_read ORDER BY date_completed LIMIT 1 OFFSET n;

dancavallaro
Sep 10, 2006
My title sucks
Or if that doesn't work:

code:
SELECT * FROM books_read ORDER BY date_completed LIMIT n, 1;
edit: Never mind, just looked at the SQLite docs and OFFSET should work fine. But this will work too.
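If the goal is the sequence number for every row at once, rather than pulling out the nth book, a correlated subquery also works in SQLite (not tested; column names taken from the question and the post above):

code:
SELECT b.*,
       (SELECT COUNT(*)
        FROM books_read b2
        WHERE b2.date_completed <= b.date_completed) AS seq
FROM books_read b
ORDER BY b.date_completed;
Books that share the same completion date will get the same number.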

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum
Does anyone know if a .dbf file is easily used with MySQL? Either easily converted or whatever. I have one and I need to be able to access it with MySQL.

Thanks

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Sweeper posted:

Does anyone know if a .dbf file is easily used with MySQL? Either easily converted or whatever. I have one and I need to be able to access it with MySQL.

Thanks

Well, I did a Google search for "dbf mysql" and got quite a few results promising ways to convert DBF files to something MySQL can read. So I guess so.

Not being snarky.

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum

Hammerite posted:

Well, I did a Google search for "dbf mysql" and got quite a few results promising ways to convert DBF files to something MySQL can read. So I guess so.

Not being snarky.
When I googled it, all I got were kinda iffy-looking sites, links to dead sites, and one paid app. I'll give it another go I guess.

oversteer
Jun 6, 2005

Sweeper posted:

Does anyone know if a .dbf file is easily used with MySQL? Either easily converted or whatever. I have one and I need to be able to access it with MySQL.

Thanks


Is that a Paradox DB file? pxtools works well for us, converts into a text file MySQL can import

http://jan.kneschke.de/projects/pxtools

Sweeper
Nov 29, 2007
The Joe Buck of Posting
Dinosaur Gum

oversteer posted:

Is that a Paradox DB file? pxtools works well for us, converts into a text file MySQL can import

http://jan.kneschke.de/projects/pxtools
I was told it was a FoxPro db.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
The ongoing saga of my database constantly crashing:

I noticed yesterday during a check command that one of the databases had 3 clients dangling from it. Given that my single program here is the only client, that wasn't right. I started to explicitly close the JDBC connection in my Java program after it had done the query and swallowed up the ResultSet. I also added System.gc() calls, but I think modern JVMs pretty much just ignore those. Between the two, I haven't had any problems since, but it'll take more time to be sure.

I wondered if it was a general recommendation to explicitly close the JDBC connection when done making queries. I was running on the tiny damaged notion that it would close as part of the destructor, but I guess the destructor might decide to do it in the least elegant manner possible. That might have been a source of headaches.

Lamb-Blaster 4000
Sep 20, 2007

My mysql.proc table is corrupt; when I try to do anything with it I get the error

code:
ERROR 1034 (HY000): Incorrect key file for table 'proc'; try to repair it
I tried running myisamchk -r and got this

code:
C:\xampp\mysql\bin>myisamchk.exe -r "../data/mysql/proc.MYI"
- recovering (with sort) MyISAM-table '../data/mysql/proc.MYI'
Data records: 7
- Fixing index 1
I tried running repair table proc and got this
code:
+------------+--------+----------+-------------------------------------------------------+
| Table      | Op     | Msg_type | Msg_text                                              |
+------------+--------+----------+-------------------------------------------------------+
| mysql.proc | repair | Error    | Incorrect key file for table 'proc'; try to repair it |
| mysql.proc | repair | error    | Corrupt                                               |
+------------+--------+----------+-------------------------------------------------------+
what else can I do?

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.
That looks like a really neat problem!

I think you are going to have to google the poo poo out of it, as a lot of others are having this problem where myisamchk says it's fixed but it's just kidding, and it appears to have several causes.

http://bugs.mysql.com/bug.php?id=11199

Googling for 'ERROR 1034 (HY000)' seemed to have promising results. Please post back here if you manage to get it fixed. Hopefully you have a dump you can restore from if it comes to that.

Lamb-Blaster 4000
Sep 20, 2007

Begby posted:

That looks like a really neat problem!

I think you are going to have to google the poo poo out of it, as a lot of others are having this problem where myisamchk says it's fixed but it's just kidding, and it appears to have several causes.

http://bugs.mysql.com/bug.php?id=11199

Googling for 'ERROR 1034 (HY000)' seemed to have promising results. Please post back here if you manage to get it fixed. Hopefully you have a dump you can restore from if it comes to that.

Thing is, I don't care about my current procedures. I've only recently started dabbling in procedures, so I have nothing important in there, and even if I did, I originally wrote them all in text files first, so they're fine.

The best solution for me would be to just replace the table as is.

I ran through that link; someone said they were able to dump the table into a new one using CREATE LIKE and INSERT INTO. I'll probably try that, and if it works I won't be checking further.

I was only googling for "Incorrect key file for table 'proc'; try to repair it"

I'll try the error code and let you know if I find a solution.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Why does

code:
select column from table where something in ("asdf", "fdsa")
return 0 rows, but

code:
select sum(column) from table where something in ("asdf", "fdsa")
return 1 row of just a NULL? Shouldn't it return 0 rows as well?

This is on MySQL 5.1, running on Windows.

Vanadium Dame
May 29, 2002

HELLO I WOULD LIKE TO TALK TO YOU ABOUT MY VERY STRONG OPINIONS

fletcher posted:

Why does

code:
select column from table where something in ("asdf", "fdsa")
return 0 rows, but

code:
select sum(column) from table where something in ("asdf", "fdsa")
return 1 row of just a NULL? Shouldn't it return 0 rows as well?

This is on MySQL 5.1, running on Windows.

sum() is an aggregate function so you'll always get a row back - this can be very handy for avoiding a no-rows-returned error if you are selecting into a variable (for PL/SQL at least; not sure if MySQL does a different thing).
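If you'd rather get 0 back than NULL when nothing matches, you can wrap the aggregate:

code:
select coalesce(sum(column), 0) from table where something in ("asdf", "fdsa")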

ether
May 20, 2001

Lamb-Blaster 4000 posted:

I tried running myisamchk -r and got this

code:
C:\xampp\mysql\bin>myisamchk.exe -r "../data/mysql/proc.MYI"
- recovering (with sort) MyISAM-table '../data/mysql/proc.MYI'
Data records: 7
- Fixing index 1
I tried running repair table proc and got this
code:
+------------+--------+----------+-------------------------------------------------------+
| Table      | Op     | Msg_type | Msg_text                                              |
+------------+--------+----------+-------------------------------------------------------+
| mysql.proc | repair | Error    | Incorrect key file for table 'proc'; try to repair it |
| mysql.proc | repair | error    | Corrupt                                               |
+------------+--------+----------+-------------------------------------------------------+
what else can I do?

REPAIR TABLE proc USE_FRM might help you (from within a MySQL shell, obviously). It recreates the MYI based upon the table definition, very handy for situations where you don't have a MYI or have a corrupt one. (CREATE TABLE .. LIKE does not create the MYI, to the best of my knowledge.)

Lamb-Blaster 4000
Sep 20, 2007

gonna try that right now. I tried CREATE TABLE LIKE, but got the same error. Will report back shortly.

sweet, looks like it works! it returned:

code:
mysql> repair table proc use_frm;
+------------+--------+----------+---------------------------------------------------+
| Table      | Op     | Msg_type | Msg_text                                          |
+------------+--------+----------+---------------------------------------------------+
| mysql.proc | repair | info     | Key 1 - Found wrong stored record at 0            |
| mysql.proc | repair | info     | Found block that points outside data file at 36   |
| mysql.proc | repair | info     | Found block that points outside data file at 376  |
| mysql.proc | repair | info     | Found block that points outside data file at 852  |
| mysql.proc | repair | info     | Found block that points outside data file at 1928 |
| mysql.proc | repair | info     | Found block that points outside data file at 2620 |
| mysql.proc | repair | info     | Found block that points outside data file at 3020 |
| mysql.proc | repair | status   | OK                                                |
+------------+--------+----------+---------------------------------------------------+
8 rows in set (0.13 sec)
Thanks so much! This was gonna kill me. I had a huge SQL block that needed to be in a procedure, but until I could fix this I had to put it directly in my PHP :(

Lamb-Blaster 4000 fucked around with this message at 15:12 on Aug 24, 2009
