|
I have two nearly identical tables (jobs and archived_jobs) whose only difference is two additional columns (archived_on and archived_by) in the archived_jobs table. I want to be able to move a single record from the jobs table to the archived_jobs table and remove the record from jobs. I tried to do various versions of the following type of query code:
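The elided code aside, one workable shape for this is an INSERT ... SELECT followed by a DELETE, wrapped in a transaction so a failure can't strand the row in both tables (or neither). Table and column names below are illustrative assumptions:

```sql
-- Sketch assuming MySQL/InnoDB and made-up column names.
START TRANSACTION;

INSERT INTO archived_jobs (id, title, created, archived_on, archived_by)
SELECT id, title, created, NOW(), 'some_user'
FROM jobs
WHERE id = 42;

DELETE FROM jobs
WHERE id = 42;

COMMIT;
```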
|
# ? Jan 13, 2010 20:11 |
|
grumm3t posted:code:
|
# ? Jan 13, 2010 20:15 |
|
The input is cleaned elsewhere... and that worked. Thanks!
|
# ? Jan 13, 2010 20:33 |
|
Jethro posted:Ignoring your sql injection problem: Why do you advocates of prepared statements (I'm making an assumption here that that's your preferred way of sanitising input) always go to the extreme of assuming that a failure to use prepared statements amounts to a full-on failure to sanitise input at all? I mean it's not just you, plenty of people around here seem to do it for whatever reason, but loving hell there's more than one way to skin a cat.
|
# ? Jan 13, 2010 20:56 |
|
Of course there is more than one way to skin a cat, but what are the odds a random person on the Internet is a professional cat skinner? Low. Way easier to assume they're just going to shoot themselves in the foot. By the way, prepared statements are not a method of sanitization, in the same way RAID is not a form of backup. Interfaces provided by libraries that support prepared statements simply reduce the attack surface for injection-based attacks.
|
# ? Jan 13, 2010 21:20 |
|
stack posted:I could use some help coming up with an index for a table on Mysql 5.1. Test an index on other_id, and if it's good enough then I would stop there. Every index you add has the possibility of affecting other statements, and the more you index the greater the overhead on writes. I don't think a 1:10 ratio of writes to reads is low.
|
# ? Jan 13, 2010 23:02 |
|
How often should I run ANALYZE TABLE on InnoDB tables that are updated maybe a few hundred times a day? Is once a day overkill? Is ANALYZE TABLE demanding, or is it not something to worry about?
|
# ? Jan 14, 2010 00:16 |
|
Hammerite posted:How often should I run ANALYZE TABLE on InnoDB tables that are updated maybe a few hundred times a day? Is once a day overkill? Is ANALYZE TABLE demanding, or is it not something to worry about? I'm no big city InnoDB expert but wouldn't it matter more how many rows are updated each day, not how many update statements are executed? We (Oracle shop) analyze pretty much all of our tables on at least a weekly basis I think. MoNsTeR would probably know our schedule better.
|
# ? Jan 14, 2010 03:07 |
|
It should be weekly but I think some Oracle bug is loving it up. At any rate, yes, it depends on the rate of change in your data as well as the type of change. By that I mean a table that steadily grows at X rows per day is different from one that might have all of its rows deleted one day, then a million rows inserted the next, then half of them deleted, or other crazy random crap. Rate of update is less important unless you are regularly updating indexed values.
|
# ? Jan 14, 2010 16:45 |
|
yatagan posted:I'm trying to understand why Oracle even bothered implementing a GROUP BY feature if all non-aggregation variables must be put in the statement?
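For what it's worth, the restriction isn't an Oracle quirk: each group collapses to one output row, so any selected column that isn't aggregated must appear in the GROUP BY, otherwise the database wouldn't know which row's value to show. A sketch with made-up names:

```sql
-- Ambiguous: after grouping by region alone, which customer's
-- name would the 'West' row display?
--   SELECT region, customer, SUM(amount) FROM sales GROUP BY region;

-- Unambiguous: every non-aggregated column is in the GROUP BY.
SELECT region, customer, SUM(amount) AS total
FROM sales
GROUP BY region, customer;
```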
|
# ? Jan 14, 2010 16:48 |
|
So I have two tables like this Programs: code:
code:
Basketball  Ted  ted@whatever.com  Ted   ted@whatever.com
Football    Bob  bob@whatever.com  Ted   ted@whatever.com
Baseball    Ted  ted@whatever.com  John  john@whatever.com

Also, the program manager and backup might not be assigned (null). It's easy to get one or the other, but I can't figure out both.
|
# ? Jan 14, 2010 22:14 |
|
You want to join to the table twice and use a different alias for each join. To account for scenarios where the program manager or backup program manager is null, you should use a left join instead of an inner join.
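A sketch of that shape, with assumed table and column names (a programs table holding two foreign keys into a people table):

```sql
-- LEFT JOINs keep programs whose manager or backup is NULL.
SELECT p.name     AS program,
       mgr.name   AS manager,
       mgr.email  AS manager_email,
       bkp.name   AS backup,
       bkp.email  AS backup_email
FROM programs p
LEFT JOIN people mgr ON mgr.id = p.manager_id
LEFT JOIN people bkp ON bkp.id = p.backup_id;
```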
|
# ? Jan 14, 2010 23:01 |
|
Aredna posted:You want to join to the table twice and use a different alias for each join. Thanks that worked, didn't know you could do that.
|
# ? Jan 14, 2010 23:43 |
|
Thanks Markoff Chaney and MoNsTeR, I elected to have it run twice a week instead of daily.
|
# ? Jan 15, 2010 01:04 |
|
Hammerite posted:Thanks Markoff Chaney and MoNsTeR, I elected to have it run twice a week instead of daily. Also, how big is the table? If it's less than <arbitrary number, I usually go with 10-20k rows> you should probably be full scanning it regardless and not bother with indexes.
|
# ? Jan 16, 2010 03:54 |
|
Having some problems with CAST in T-SQL here: I have a table events that contains events with a start and end date. I'm defining the total length of all events by using DATEDIFF(DAY, [earliest event startdate], [latest event enddate]). The values I need are the percentage of days an event encompasses out of the whole of all events (say event A lasts 10 days and event B lasts 8 days, so the whole is 18 days, of which event A uses 55%), and how far from the start of all events the event is removed (event B starts when event A ends, so it starts at 55%; the same value here, but of course events could overlap). This looks like a very bad description; the gist of it is I need to be able to cast DATEDIFF / DATEDIFF AS DOUBLE. DATEDIFF returns an integer by default, so when the total length is 100 days and I want to know how much of that time 33 days is, I get 0. Here's the SQL I had so far: code:
code:
e: whoops, DOUBLE != FLOAT, nevermind fellas wigga please fucked around with this message at 11:26 on Jan 16, 2010 |
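For the record, T-SQL's type is FLOAT rather than DOUBLE, and casting either operand is enough to avoid the integer division; a sketch with illustrative names:

```sql
-- CAST one DATEDIFF to FLOAT so INT / INT doesn't truncate to 0.
SELECT e.name,
       CAST(DATEDIFF(DAY, e.startdate, e.enddate) AS FLOAT)
         / DATEDIFF(DAY, t.min_start, t.max_end) AS share_of_total
FROM events e
CROSS JOIN (SELECT MIN(startdate) AS min_start,
                   MAX(enddate)   AS max_end
            FROM events) t;
```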
# ? Jan 16, 2010 11:22 |
|
I want to run a cronjob every 6 hours that will take any rows created more than 6 hours ago and move them out of the active table and into a separate storage table. Anyone know the best way to do this? I think I'll need two separate commands: one to copy the rows meeting the criteria into the other table and one to remove those rows from the original table.
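Roughly that, yes; here's a sketch assuming MySQL and a `created` datetime column. Capturing the cutoff once and reusing it in both statements, inside one transaction, guarantees the DELETE removes exactly the rows the INSERT copied:

```sql
START TRANSACTION;
SET @cutoff = NOW() - INTERVAL 6 HOUR;

-- Copy the old rows, then delete the same set.
INSERT INTO storage_table
SELECT * FROM active_table
WHERE created < @cutoff;

DELETE FROM active_table
WHERE created < @cutoff;

COMMIT;
```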
|
# ? Jan 18, 2010 08:34 |
I needed to remove a trailing space from a certain column and I tried to use this query to find out which rows I would be updating:code:
|
|
# ? Jan 18, 2010 19:46 |
|
fletcher posted:I needed to remove a trailing space from a certain column and I tried to use this query to find out which rows I would be updating: Collation issue? I just tried SELECT 'mystring' = 'mystring ' and SELECT 'MYSTRING' = 'mystring' on my database (MySQL, default collation is something case insensitive) and both returned 1.
|
# ? Jan 18, 2010 20:14 |
|
If you are using MySQL thencode:
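Presumably something along these lines: MySQL's nonbinary collations pad trailing spaces in `=` comparisons, which is why the equality test matched, but LIKE does not ignore them. Table and column names are illustrative:

```sql
-- Find rows with a trailing space (LIKE, unlike =, is not padded):
SELECT * FROM mytable WHERE mycolumn LIKE '% ';

-- Strip the trailing spaces:
UPDATE mytable SET mycolumn = TRIM(TRAILING ' ' FROM mycolumn);
```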
|
# ? Jan 18, 2010 20:19 |
|
Does anyone have a quick t-sql (MSSQL 2005) proc that will identify card numbers based on the first 4 digits? I've got a VB code version but want to do a one-time pass on the server before clearing the data for PCI compliance. I want to be able to say this row is Amex, Visa, etc. based on the number structure.
|
# ? Jan 18, 2010 21:28 |
|
fletcher posted:I needed to remove a trailing space from a certain column and I tried to use this query to find out which rows I would be updating... If you're using MS SQL, then you need to use RTRIM() or LTRIM().
|
# ? Jan 18, 2010 21:43 |
|
Scaramouche posted:Does anyone have a quick t-sql (MSSQL 2005) proc that will identify card numbers based on the first 4 digits? I've got a VB code version but want to do a one-time pass on the server before clearing the data for PCI compliance. I want to be able to say this row is Amex, Visa, etc. based on the number structure. Sorry guys, I kept googling Credit Card check, parser, identification etc. when I should have been googling Credit Card VALIDATION. Here's a C# version that's better than my VB one (though not much) http://jlcoady.net/c-sharp/credit-card-validation-in-c-sharp I'll post a t-sql translation later if anyone wants.
|
# ? Jan 18, 2010 22:14 |
|
Well this was way simpler than I thought it had to be. I thought I'd have to verify every card number, but of course since these are already charged/AVS'd I don't even have to do that, I just need to examine the first few digits to determine card type. Here's the underwhelming SQL which does a year/month rollup:code:
I based the card info on this: http://en.wikipedia.org/wiki/Bank_card_number since there was a lot of conflicting/incomplete information out there. Obviously if you're writing a 'real' verifier you'd want it to be more complete (more card types, luhn check, etc.). You'd also probably want to do more with the unknowns as well, but there was only 1 case anyway. So pretty simplistic but thought I'd put it here anyway in case anyone was curious.
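The elided query was presumably built around a prefix test like the following (ranges per the Wikipedia page cited above; names are illustrative, and a real validator would add more prefixes plus a Luhn check):

```sql
SELECT card_number,
       CASE
         WHEN card_number LIKE '34%'
           OR card_number LIKE '37%'     THEN 'Amex'
         WHEN card_number LIKE '4%'      THEN 'Visa'
         WHEN card_number LIKE '5[1-5]%' THEN 'MasterCard'
         WHEN card_number LIKE '6011%'   THEN 'Discover'
         ELSE 'Unknown'
       END AS card_type
FROM charges;
```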
|
# ? Jan 19, 2010 20:26 |
|
Is there some type of linkedserver or OPENROWSET type thing for Oracle 10g so I can query a SQL Server within the editor window?
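The closest Oracle equivalent is a database link through Heterogeneous Services / Database Gateway (e.g. dg4odbc pointed at SQL Server). Assuming the gateway and a TNS entry named MSSQL are already configured (names here are illustrative), the link itself is just:

```sql
CREATE DATABASE LINK mssql_link
  CONNECT TO "sqluser" IDENTIFIED BY "password"
  USING 'MSSQL';

SELECT * FROM "SomeTable"@mssql_link;
```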
|
# ? Jan 27, 2010 02:46 |
|
Does anyone know of a means for SQL Server 2008 to optimize the datatypes for a static set of data? I've got a table with roughly 9 million records (there won't be any records added for years) and right now every column is set to varchar(100). It would be awesome if there were some way for SQL Server to analyze the entire data set and select the appropriate data type for each column.
|
# ? Jan 27, 2010 15:51 |
|
Corsair Jr posted:Does anyone know of a means for SQL Server 2008 to optimize the datatypes for a static set of data? If you plan on leaving them all as varchars you can measure each column: SELECT MAX(Column1), MAX(Column2), MAX(Column3), etc. If you think some are INT only, try changing the column to INT and see if it works; same for datetime and so on. You'll likely want to create an index or two depending on what your data actually is. There might be some other ways, but doing it this way will let you get a better idea of what is in there and help you know what you should be indexing. Unless the data was computer generated, I often find this approach helps me find bad data to clean, when I try to change columns from one datatype to another and it doesn't work.
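One caveat: MAX() on a varchar compares alphabetically, so for sizing it's usually MAX(LEN(...)) you want; a sketch with illustrative names:

```sql
SELECT MAX(LEN(Column1)) AS c1_max_len,
       MAX(LEN(Column2)) AS c2_max_len,
       MAX(LEN(Column3)) AS c3_max_len
FROM BigTable;
```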
|
# ? Jan 27, 2010 16:39 |
|
You definitely want to get those data types as small as possible. In my experience, taking "all varchar" tables and optimizing the types down makes a WORLD of difference. It really shouldn't take you more than an afternoon's work to run through each column. Just do a DISTINCT on the values and see what the smallest numeric/text type you can fit them in is.
|
# ? Jan 27, 2010 16:57 |
|
benisntfunny posted:If you plan on leaving them all as varchars you can measure each column. It seems like it should be possible, since the import package function includes a "suggest datatype" feature, but that maxes out at 1000 rows, so it's no good for 9 million records.
|
# ? Jan 27, 2010 19:50 |
|
You could write this yourself in a script. If any of the ISNUMERIC/ISDATE checks fail, you know the column definitely has to stay a varchar.
|
# ? Jan 27, 2010 20:00 |
|
Triple Tech posted:You could write this yourself in a script. If any of the ISNUMERIC/ISDATE checks fail, you know the column definitely has to stay a varchar. I found a script online that will find the max length. I might play around with it tomorrow or try to write a new one that can identify datetimes etc. code:
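A sketch of that kind of per-column check (ISNUMERIC and ISDATE are famously lenient, so treat a nonzero fail count as proof a column must stay varchar, not the converse; names are illustrative):

```sql
SELECT COUNT(*) AS total_rows,
       SUM(CASE WHEN ISNUMERIC(Column1) = 0 THEN 1 ELSE 0 END) AS c1_not_numeric,
       SUM(CASE WHEN ISDATE(Column1) = 0 THEN 1 ELSE 0 END)    AS c1_not_date,
       MAX(LEN(Column1)) AS c1_max_len
FROM BigTable;
```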
|
# ? Jan 28, 2010 01:19 |
|
I'm having some issues inserting into a table in Access 2007. I am in SQL View and trying to write a query like this: code:
But it doesn't work at all. I got a pop up from Access asking for the parameter value of accessTable.Index or some useless crap like that. Also, the RIGHT function is there because I am only concerned with the rightmost 3 digits. I'd prefer to do something like code:
I am used to MySQL and a little frustrated by Access. Any pointers here? I have also tried: code:
code:
|
# ? Jan 29, 2010 01:57 |
|
cannibustacap posted:I'm having some issues inserting into a table in Access 2007. You don't want an insert, you want an update. INSERT is for inserting rows, UPDATE is for updating columns in existing rows. Also index is a reserved word or whatever it's called so you'll need to access it with brackets I think, depending on your flavor of sql. Also WHRERE is misspelled but I'm guessing you just made a mistake copying by hand. I also didn't realize you could use RIGHT on numeric types, but if it's not complaining about that I guess it's ok. Try something like... code:
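Judging from the follow-up quote, the suggestion was roughly this shape (names are illustrative; note [Index] is bracketed because it's a reserved word, and the RIGHT() result is compared as text):

```sql
UPDATE accessTable
SET Price = 1.35
WHERE RIGHT([Index], 3) = '123';

-- If [Index] is numeric, arithmetic may be less fragile than RIGHT():
-- WHERE [Index] Mod 1000 = 123
```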
Kekekela fucked around with this message at 06:03 on Jan 29, 2010 |
# ? Jan 29, 2010 05:57 |
|
If I could run an approach by some of the more SQL-experienced folks here. I currently have 24 Excel spreadsheets, each with 14 columns and around 700 lines or so (monthly maintenance reports, so there's future growth involved), and I was toying around with python 2.6's sqlite3 module to try and simplify some of the parsing tasks I was already doing. I see my options as:

1. One database file per month, with a table-of-contents database file holding the names of each of those files.
2. One database file overall, with a table per month and another table for any sort of global stats, etc. as they come up.

The normal use case would be to import all the spreadsheet files into the database at one time, then generate any output by selecting relevant information from the database and formatting/acting on it/etc. So there wouldn't be any deletes or joins, but in the future it may be useful to select rows from more than one month's report at a time, even if it's not necessary now. Thanks for any suggestions or feedback.
|
# ? Jan 29, 2010 16:04 |
|
If every month's data has the same structure I would use one table.
|
# ? Jan 29, 2010 17:06 |
|
Yeah, that just sounds like one SQLite instance with one table for any common entities across different months. No need to go all database crazy. SQLite has seen worse.
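A minimal sketch of the single-table layout using Python's sqlite3, with a month column so one query can span several months (all names and figures here are made up):

```python
import sqlite3

# One table for all months; a `month` column distinguishes reports
# and lets a single query span several months.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE maintenance (
        month TEXT,      -- e.g. '2010-01'
        task  TEXT,
        hours REAL
    )
""")
conn.executemany(
    "INSERT INTO maintenance VALUES (?, ?, ?)",
    [("2010-01", "pump check", 2.5),
     ("2010-01", "belt swap", 1.0),
     ("2010-02", "pump check", 3.0)],
)
# An index on month keeps per-month selects fast as the data grows.
conn.execute("CREATE INDEX idx_month ON maintenance (month)")

# Select across more than one month at a time:
rows = conn.execute(
    "SELECT month, SUM(hours) FROM maintenance "
    "WHERE month IN ('2010-01', '2010-02') "
    "GROUP BY month ORDER BY month"
).fetchall()
print(rows)  # [('2010-01', 3.5), ('2010-02', 3.0)]
```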
|
# ? Jan 29, 2010 17:16 |
|
MoNsTeR posted:If every month's data has the same structure I would use one table. One table with 24 x ~700 = 16,800 rows; is that not too much to start seeing a performance hit on selects? I don't have enough experience with databases to know what a "reasonable" size would be. edit: Thanks to you both. Sounds doable.
|
# ? Jan 29, 2010 17:20 |
|
Lurchington posted:1 table with 24 x ~700 = 16800 rows, is that not too much to start seeing a performance hit on selects? I don't think so, not if you put appropriate indexes on.
|
# ? Jan 29, 2010 17:53 |
|
Hammerite posted:I don't think so, not if you put appropriate indexes on. Unless SQLite is retarded that's still small enough that I'd guess a full table scan would be faster than using indexes anyway.
|
# ? Jan 30, 2010 03:20 |
|
Kekekela posted:You don't want an insert, you want an update. INSERT is for inserting rows, UPDATE is for updating columns in existing rows. Also index is a reserved word or whatever it's called so you'll need to access it with brackets I think, depending on your flavor of sql. Also WHRERE is misspelled but I'm guessing you just made a mistake copying by hand. I also didn't realize you could use RIGHT on numeric types, but if it's not complaining about that I guess it's ok. Try something like... Ahhh, that makes a lot of sense. Much appreciated! Unfortunately, the RIGHT keyword doesn't help in this situation. I guess you can't compare the output of RIGHT with an integer. What a shame. Any ideas on that though?
|
# ? Feb 1, 2010 03:06 |