|
That'd be a RHEL 5.x kernel. RHEL almost never updates packages to newer releases. Instead, they tend to backport security fixes. That version of MySQL is probably fine, as long as they are indeed keeping it up to date with the upstream packages.
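For what it's worth, on a box where you do have shell access, backported fixes show up as CVE references in the package changelog (`rpm -q --changelog mysql-server`). A minimal sketch of scanning that output - the changelog text below is an invented sample, not real RHEL data:

```python
import re

# Stand-in for the output of `rpm -q --changelog mysql-server`
# (hypothetical sample text, not a real changelog):
changelog = """\
* Tue May 08 2012 Packager <packager@example.com> 5.0.95-1
- Update to MySQL 5.0.95, fixes CVE-2012-0583
* Mon Jan 09 2012 Packager <packager@example.com> 5.0.77-4
- Backport upstream fix for CVE-2011-2262
"""

# Every CVE identifier mentioned in the changelog
cves = re.findall(r"CVE-\d{4}-\d{4,}", changelog)
print(cves)
```

If the changelog is full of recent CVE numbers, the package is being maintained even though the version string looks ancient.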
|
# ? Jul 10, 2012 21:24 |
|
Scaramouche posted:I'm not really a Lunix guy but someone asked me to look over their host's install for security reasons. The one thing that sticks out to me is that they're running MySQL 5.0.95-community on Red Hat 2.6.18-194.11.3.el5. The thing is from what I can tell the latest version of MySQL is 5.5.25a, and that anything <5.1 is actually pretty shoddy from a security standpoint. Is this an opinion you guys share, or am I just being too picky?
|
# ? Jul 11, 2012 00:24 |
|
McGlockenshire posted:That'd be a RHEL 5.x kernel. RHEL almost never updates packages to newer releases. Instead, they tend to backport security fixes. That version of MySQL is probably fine, as long as they are indeed keeping it up to date with the upstream packages. Do you know if there's any way to tell? The only info I have (don't have shell access natch) is from the 'cpanel' attributes which read: cPanel Version 11.32.3 (build 21) Apache version 2.0.63 PHP version 5.2.14 MySQL version 5.0.95-community Architecture x86_64 Operating system linux Perl version 5.8.8 Kernel version 2.6.18-194.11.3.el5 cPanel Pro 1.0 (RC1) I've googled around the Kernel version but that number brings up results for CentOS 5 and OEL 5 as well as RHEL 5.
|
# ? Jul 11, 2012 01:35 |
|
Scaramouche posted:cPanel Version 11.32.3 (build 21) You're hosed; those web hosting automation suites are notorious for wanting things done in a particular way. The chances of successfully updating to a newer release just dropped significantly.
|
# ? Jul 11, 2012 08:10 |
|
Scaramouche posted:I'm not really a Lunix guy but someone asked me to look over their host's install for security reasons. The one thing that sticks out to me is that they're running MySQL 5.0.95-community on Red Hat 2.6.18-194.11.3.el5. The thing is from what I can tell the latest version of MySQL is 5.5.25a, and that anything <5.1 is actually pretty shoddy from a security standpoint. Is this an opinion you guys share, or am I just being too picky? Get permission first, then hit it with Metasploit or OpenVAS
|
# ? Jul 11, 2012 20:51 |
|
Bob Morales posted:Get permission first, then hit it with Metasploit or OpenVAS Are you stalking me http://forums.somethingawful.com/showthread.php?threadid=3455244&pagenumber=4#post405471224
|
# ? Jul 11, 2012 21:14 |
|
man, I work with SQL so infrequently now that my brain just starts freezing whenever I have to do anything even remotely complicated. And this isn't even that complicated. So, I have two tables, wagedata and employees. An employee will have multiple records in the employees table for the various positions he's worked at. I want to select all wagedata entries for a given employee's SSN, but only when that employee has a 'wotckey' entry. That part is easy, got it. code:
So I rewrote the WHERE clause as follows: code:
mindphlux fucked around with this message at 23:26 on Jul 18, 2012 |
# ? Jul 18, 2012 23:24 |
|
mindphlux posted:
Won't execute how? You get an exception or it just returns no results? Regardless, I've found it to be a good policy to use brackets when there's both ANDs and ORs in a WHERE clause. The order of operations without them can be hard to get right. So I'd suggest trying code:
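To make the precedence point concrete: AND binds tighter than OR, so an unbracketed mix can silently include rows you meant to exclude. A self-contained sketch (SQLite via Python; the table and values are invented, loosely modelled on the wagedata description):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE wagedata (ssn TEXT, wotckey TEXT)")
con.executemany("INSERT INTO wagedata VALUES (?, ?)",
                [("111", "A"), ("111", None), ("222", "A"), ("222", None)])

# Unbracketed: AND binds tighter, so this is really
#   ssn = '111' OR (ssn = '222' AND wotckey IS NOT NULL)
loose = con.execute(
    "SELECT COUNT(*) FROM wagedata"
    " WHERE ssn = '111' OR ssn = '222' AND wotckey IS NOT NULL"
).fetchone()[0]

# Bracketed: (ssn = '111' OR ssn = '222') AND wotckey IS NOT NULL
strict = con.execute(
    "SELECT COUNT(*) FROM wagedata"
    " WHERE (ssn = '111' OR ssn = '222') AND wotckey IS NOT NULL"
).fetchone()[0]

print(loose, strict)  # 3 2 -- the NULL-wotckey row for '111' sneaks in
```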
|
# ? Jul 19, 2012 01:31 |
|
I've got an Oracle question. code:
It doesn't work as expected though: sometimes it returns no rows, sometimes it returns 1 row, sometimes 2 or even all 3. Not sequential rows either: it sometimes returns X and Z, and this happens regardless of whether tbl is an inline view or a real table. Testing further using code:
code:
I solved my actual problem another way but I'm interested to know what causes this behaviour if anyone knows. The original problem happened on 11g r1 and I reproduced it with the code here on 10g r2 XE. Goat Bastard fucked around with this message at 13:32 on Jul 19, 2012 |
# ? Jul 19, 2012 12:41 |
|
Goat Bastard posted:I've got an Oracle question. Your function call is executing once for each row in tbl, rather than once for the statement.
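That per-row evaluation is easy to demonstrate. A sketch using a user-defined function that counts its own invocations (SQLite via Python here; in Oracle the same thing happens with a non-deterministic call like DBMS_RANDOM.VALUE in the WHERE clause):

```python
import sqlite3

calls = {"n": 0}

def flaky():
    # Stands in for a non-deterministic function such as DBMS_RANDOM.VALUE
    calls["n"] += 1
    return calls["n"]

con = sqlite3.connect(":memory:")
con.create_function("flaky", 0, flaky)
con.execute("CREATE TABLE tbl (x TEXT)")
con.executemany("INSERT INTO tbl VALUES (?)", [("X",), ("Y",), ("Z",)])

rows = con.execute("SELECT x, flaky() FROM tbl").fetchall()
print(rows)        # each row gets a different value
print(calls["n"])  # 3 -- evaluated once per row, not once per statement
```

Comparing a column against such a function in WHERE means each row is compared against a different value, which is exactly why the original query returned a different number of rows each run.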
|
# ? Jul 19, 2012 13:51 |
|
var1ety posted:Your function call is executing once for each row in tbl, rather than once for the statement. Right, that makes sense. Thanks.
|
# ? Jul 19, 2012 14:14 |
|
Data integration question here. At my company, we have two servers running SQL Server 2008 R2. Server A, behind a firewall and in our local domain, executes hundreds of stored procedures overnight that update and/or drop and replace hundreds of tables, across about 20 different databases. The tables are data sources for an SSRS-powered web-based reporting tool. Server B is our web server, outside the DMZ. Each of the 20 databases on Server A has a counterpart on Server B holding copies of the tables that drive the reports. While these two databases are counterparts, they are not identical - they just share the tables in question.

The stored procedures run as part of an SSIS package whose last step is a Transfer SQL Server Objects task that copies the tables we need from Server A to Server B. Recently, this step has been failing for one or more databases with a TCP Communication Link Failure or a Socket Unavailable error. After many rounds of trying to get this fixed, it was suggested to us that we use replication to transfer the data rather than the SSIS step.

I have zero experience with replication and am self-taught at all of this, so my first step was to google comparisons between the SSIS task and replication to see if it meets our needs. I can't really tell. Given this short description, does this task seem best suited for replication, or should we continue to work on the SSIS step? Sub Par fucked around with this message at 19:32 on Jul 19, 2012 |
# ? Jul 19, 2012 19:30 |
|
I'm dealing with a heavily loaded mysqld process at work. It runs a lot of reports that are poorly written, and it's choking during the business day. One symptom I'm seeing is that static file loads are taking 1-10 seconds, I assume because Apache is blocking while waiting for data from the database. Is this a safe assumption? Also, I noticed it will only use 2% of system RAM no matter what I do, so I advised IT to increase the allotment. It's using like 16MB right now, and every query, even <1s queries, seems to be writing tmp data to disk all the time. I'm hoping this will fix it. I have a report I fired up today that locked like 4 tables and caused so much stuff to stack up in the queue that the server just poo poo itself and I had to restart the process.
|
# ? Jul 20, 2012 22:21 |
|
Goat Bastard posted:Won't execute how? You get an exception or it just returns no results? I was just getting a timeout on the query. The brackets helped immensely, and the query does what I wanted it to. Thanks! Glad I wasn't too far off track...
|
# ? Jul 20, 2012 23:13 |
|
revmoo posted:I'm dealing with a heavily loaded mysqld process at work. It runs a lot of reports that are poorly written and it's choking during the business day. One symptom I'm seeing is that static file loads are taking 1-10 seconds, I assume because Apache is blocking waiting for data from the database. Is this a safe assumption? MySQL ships with a default buffer cache allotment of 8MB. Why they defaulted the main memory pool to the size of RAM in a 386-era chip I'll never know (actually I do know - we need an emote for :mysql:). Anyway, google 'mysql memory tuning' for an overview of the options you need to set.
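For reference, the usual knobs live in my.cnf under [mysqld]; the sizes below are illustrative placeholders to show the shape, not recommendations for any particular box:

```ini
# my.cnf -- illustrative values only; size these to your RAM and workload
[mysqld]
key_buffer_size         = 256M  # MyISAM index cache (this is the 8MB default)
innodb_buffer_pool_size = 1G    # main InnoDB data/index cache
tmp_table_size          = 64M   # raise these two together so implicit
max_heap_table_size     = 64M   # temp tables stay in memory, not on disk
sort_buffer_size        = 2M    # allocated per connection; keep it modest
```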
|
# ? Jul 21, 2012 14:09 |
|
Sub Par posted:I have zero experience with replication and am self taught at all of this so my first step was to google for comparisons between the SSIS task and replication to see if it meets our needs. I can't really tell. Given this short description, does it seem like this task is best suited for replication, or should we continue to work on the SSIS step? We use replication pretty heavily to move data from server to server for reporting purposes and to keep our production hotswap backups in sync with the main production servers. The basic answer is that replication is hot poo poo. But SSIS is also hot poo poo, so which one suits your purposes is going to depend on what you're using the step for. Transfer SQL Server Objects can transfer procs, functions, views, etc.; replication is really about moving data. But it doesn't sound like you're actually using any of that, so why do you care? For moving pure data around, I prefer snapshot replication or SSIS. SSIS does have the advantage that you can put the SSIS package onto some bullshit middleman server that exists in a network zone that the DMZ won't cause issues with, which is nice. The replication agent needs to see the other server to work, and crossing a firewall may be more trouble than it's worth there. Are you really just copying data? If so, why are you not just using a Data Flow task?
|
# ? Jul 23, 2012 07:16 |
|
Thanks for that. We are just copying data, but we need to completely replace the contents of the tables on the destination server. And there are 20-40 tables per database. As far as I know, this would mean one mean-looking data flow task with one source/destination combo for each table. The transfer objects task lets us just check off the tables we want to move and it's done. We suspect that there may be something wrong with either the hardware on our web server or some of the switching/firewall gear between the servers that is causing the random crapouts we've been seeing. We've replaced the web server now so we'll see how that goes, but we'll stick with SSIS for now. Thanks.
|
# ? Jul 23, 2012 16:37 |
|
Sub Par posted:Thanks for that. We are just copying data, but we need to completely replace the contents of the tables on the destination server. And there are 20-40 tables per database. As far as I know, this would mean one mean-looking data flow task with one source/destination combo for each table. The transfer objects task lets us just check off the tables we want to move and it's done. You can always just use the "Import / Export Wizard" in SSMS to create your SSIS package for you. Just right-click a database and choose Tasks -> Import Data (or Export Data if it's the source DB) Then choose all the tables you want to copy and whatever options you want for each one (replace data instead of append, enable identity_insert to preserve primary keys). Then just choose save package instead of run immediately and save the .dtsx somewhere. Then you can always open it later if you need to make edits to it.
|
# ? Jul 23, 2012 20:59 |
|
JeansW posted:You can always just use the "Import / Export Wizard" in SSMS to create your SSIS package for you. This is good advice. I got SSMS to automatically create stuff that I couldn't for the life of me figure out how to do using the SSIS package manager. Dunno why that was, but I ended up using SSMS like 10 times in my last database project to kick-start what ended up as much more complicated SSIS packages.
|
# ? Jul 24, 2012 04:31 |
|
What's the best way to handle date conversions in SQL Server 2008? I have to convert datetimes that were persisted as ET to UTC, and my Google-Fu is failing me right now. The hardest part is taking daylight saving time into account, which seems to be dependent on the year. Argh! Anyway, any help is appreciated. Thanks!
|
# ? Jul 24, 2012 18:40 |
|
If you want to do it just for the current datetime, you can do something simple like this to convert from UTC to the timezone of your server: code:
code:
EDIT: I just realized you are going the other way. In that case you should know that there isn't a perfect mapping: in the fall, when the clocks changed, did a timestamp between 1:00:00.000 AM and 1:59:59.997 AM occur the first or the second time we had that time locally? For your situation I would just use a CASE statement to do a one-time conversion to UTC. If you have a way to disambiguate the 1-2 AM issue above with some other field, such as an ordering on a strictly increasing ID field, that's great; but if not, and it's not a huge issue, just assume they occur in the first pass through that hour and know that if someone looks at the data for that one hour of each year it's gonna be off. Aredna fucked around with this message at 22:27 on Jul 24, 2012 |
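If it's easier to do the conversion outside the database, timezone libraries already handle both the year-dependent DST rules and the ambiguous fall-back hour. A sketch in Python with zoneinfo (the fold flag picks the first or second occurrence of an ambiguous local time):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ET, UTC = ZoneInfo("America/New_York"), ZoneInfo("UTC")

def et_to_utc(naive, fold=0):
    # fold=0: first occurrence of an ambiguous local time; fold=1: second
    return naive.replace(tzinfo=ET, fold=fold).astimezone(UTC)

# DST is handled per-date: summer is EDT (UTC-4), winter is EST (UTC-5)
print(et_to_utc(datetime(2012, 7, 24, 12, 0)))  # 2012-07-24 16:00:00+00:00
print(et_to_utc(datetime(2012, 1, 24, 12, 0)))  # 2012-01-24 17:00:00+00:00

# The ambiguous hour: 1:30 AM on fall-back day (Nov 4, 2012) happened twice
print(et_to_utc(datetime(2012, 11, 4, 1, 30), fold=0))  # 05:30 UTC (was EDT)
print(et_to_utc(datetime(2012, 11, 4, 1, 30), fold=1))  # 06:30 UTC (was EST)
```

You'd export, convert, and re-import, or use this to sanity-check the one-time SQL CASE conversion.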
# ? Jul 24, 2012 22:22 |
|
How do I work with fields as part of a group? A file gets checked out; it has a null check-in time. When it gets checked in, the check-in time gets filled. Sometimes that doesn't happen, and a file can have more than one checkout with more than one null check-in time. Since only one checkin_datetime should be NULL (meaning the file is currently checked out), I want to fix the earlier NULL checkin_datetime by giving it the next checkout time. code:
code:
Instead of manually looking at them and writing something like UPDATE CHECKOUT_HISTORY SET checkin_datetime = (SELECT checkout_datetime FROM CHECKOUT_HISTORY WHERE checkout_history_id = 13317) WHERE checkout_history_id = 58120 for each one, how do I treat this in relation to a group of rows having the same file_id? How do I identify which row's checkin_datetime to update (min(checkout_datetime) where checkin_datetime is NULL) and which row to get the checkout_datetime from (the next checkout_datetime in the group)? Also, by moving the WHERE checkin_datetime IS NULL subquery clause into its own subquery, I have another query that returns over 100,000 rows in groups like this that have at least (and most of the time exactly) one NULL checkin_datetime per group. Most of these look OK (the single NULL value is for the row with the latest checkout_datetime): the file is currently checked out, and that's not a problem. How do I exclude these groups from the query results (no group where the checkin_datetime for the row with the max(checkout_datetime) per group is NULL), so I can see whether any of the remaining files have problems? Crazy Mike fucked around with this message at 08:45 on Jul 25, 2012 |
# ? Jul 25, 2012 01:33 |
|
Aredna posted:If you want to do it just for the current date time you can do something simple like this to convert from when UTC to the timezone of your server: Thanks again!
|
# ? Jul 25, 2012 19:59 |
|
Crazy Mike posted:How do I work with fields as part of a group? This code is for SQL Server, but you should be able to find a similar function for most flavors of SQL. You want to use the RANK function to get you a rank of 1, 2, 3, etc. to join on. Then you can join from a row with rank 1 to rank 2 and get the next row. If you add this to the query you have below it should give you what you need to identify the proper rows. code:
|
# ? Jul 25, 2012 22:53 |
|
Aredna posted:If you add this to the query you have below it should give you what you need to identify the proper rows. Good advice, just as an fyi using row_number() OVER (PARTITION BY [CHECKOUT_HISTORY].[file_id] ORDER BY checkout_datetime) instead of rank() will also eliminate that corner case you talked about, without needing to add another column to the order by.
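Putting the window-function advice together for the checkout problem - a runnable sketch of the whole fix (SQLite 3.25+ via Python; the table layout is guessed from the description):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE checkout_history (
    checkout_history_id INTEGER PRIMARY KEY, file_id INTEGER,
    checkout_datetime TEXT, checkin_datetime TEXT)""")
con.executemany("INSERT INTO checkout_history VALUES (?,?,?,?)", [
    (1, 7, "2012-07-01 09:00", None),               # stale open checkout
    (2, 7, "2012-07-02 10:00", "2012-07-02 11:00"),
    (3, 7, "2012-07-03 12:00", None),               # genuinely checked out
])

# Give each NULL-checkin row the next checkout time for the same file.
# The latest checkout per file has no "next" row, so the EXISTS clause
# leaves it alone (the file really is still checked out).
con.execute("""
WITH ordered AS (
    SELECT checkout_history_id, file_id, checkout_datetime,
           ROW_NUMBER() OVER (PARTITION BY file_id
                              ORDER BY checkout_datetime) AS rn
    FROM checkout_history)
UPDATE checkout_history SET checkin_datetime = (
    SELECT nxt.checkout_datetime FROM ordered cur
    JOIN ordered nxt ON nxt.file_id = cur.file_id AND nxt.rn = cur.rn + 1
    WHERE cur.checkout_history_id = checkout_history.checkout_history_id)
WHERE checkin_datetime IS NULL AND EXISTS (
    SELECT 1 FROM ordered cur
    JOIN ordered nxt ON nxt.file_id = cur.file_id AND nxt.rn = cur.rn + 1
    WHERE cur.checkout_history_id = checkout_history.checkout_history_id)
""")

result = con.execute("SELECT checkout_history_id, checkin_datetime "
                     "FROM checkout_history ORDER BY 1").fetchall()
print(result)  # row 1 backfilled, row 3 left NULL
```

The same EXISTS-on-next-row idea also answers the second question: groups whose latest checkout is the only NULL row are simply never touched or reported.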
|
# ? Jul 26, 2012 11:52 |
|
I don't know if this is the right place to ask about Crystal Reports, but here goes. I have a Crystal report that queries some tables by Division. In this report they want a section that shows all the Assignment values from one table and, for each one, displays a count of the matching Assignments from another table. This is confusing, so here is what the tables look like:

TableA: PKField (primary key), Division
TableB: PKField (foreign key to TableA), Assignment
TableC: CodeType (will be "E"), Code (each Code is a possible value for Assignment in TableB)

I will be looking for all records in TableA that have a certain Division. Then I will display all values for Code in TableC where CodeType="E". For each value of Code, it will display a total of how many matching Assignments were returned from TableB. Here is some sample data: code:
Demo - 1
Train - 1
Test - 0

I don't know how to do this within Crystal. I have the main report that shows all the simple stuff, and then I was going to insert a subreport, but within that I'm not sure what to do.
|
# ? Jul 27, 2012 20:17 |
|
I posted a few weeks back asking how I'd find all of the tables in my DB that contained a field named X... now I come back with another strange question: is there a way to query the DB for all column names of type decimal? If I could get their table name too, that'd be lovely. edit: Eh, balls, I'm bad at google. Managed to find it on Stack Overflow after searching google for a bit. If anyone else is curious, this did it: SQL code:
Sab669 fucked around with this message at 22:52 on Jul 27, 2012 |
# ? Jul 27, 2012 22:36 |
|
opie posted:I don't know if this is the right place to ask about Crystal Reports, but here goes. The SQL query would be something like [code]
SELECT tableb.assignment, COUNT(tableb.pkfield)
FROM tablea
INNER JOIN tableb ON tablea.pkfield = tableb.pkfield
GROUP BY tablea.division, tableb.assignment
[/code] So you create something like this as a command in the table select area, then join it to tablea on the division and put the output in a group footer or header for the group's division. At least without anything else being in the report; if you have an existing data set I would put it in a sub report, because poo poo gets fucky then.
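One wrinkle with an inner join here: it can never produce the "Test - 0" row from the sample data, because no TableB record matches that code. Driving the query from the code table with LEFT JOINs fixes that. A runnable sketch of the idea (SQLite via Python; the data is invented to match the sample):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tablea (pkfield INTEGER PRIMARY KEY, division TEXT);
CREATE TABLE tableb (pkfield INTEGER, assignment TEXT);
CREATE TABLE tablec (codetype TEXT, code TEXT);
INSERT INTO tablea VALUES (1,'East'),(2,'East'),(3,'West');
INSERT INTO tableb VALUES (1,'Demo'),(2,'Train');
INSERT INTO tablec VALUES ('E','Demo'),('E','Train'),('E','Test');
""")

# Every code of type 'E', with a count of matching assignments for the
# chosen division -- codes with zero matches are still listed.
rows = con.execute("""
SELECT c.code, COUNT(a.pkfield) AS n
FROM tablec c
LEFT JOIN tableb b ON b.assignment = c.code
LEFT JOIN tablea a ON a.pkfield = b.pkfield AND a.division = 'East'
WHERE c.codetype = 'E'
GROUP BY c.code
ORDER BY c.code
""").fetchall()
print(rows)  # [('Demo', 1), ('Test', 0), ('Train', 1)]
```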
|
# ? Jul 28, 2012 00:13 |
|
Goat Bastard posted:Good advice, just as an fyi using row_number() OVER (PARTITION BY [CHECKOUT_HISTORY].[file_id] ORDER BY checkout_datetime) instead of rank() will also eliminate that corner case you talked about, without needing to add another column to the order by. This is probably because I haven't looked into it, but I tend to use rank because I know it's deterministic. Is row_number going to always give you the same result on the same subset of data when corner cases like this pop up? I know rank will give me poo poo results, forcing me to review the script when I miss a corner case, but I worry that row_number would work and give me a different result on two runs if the data in my table changes.
|
# ? Jul 28, 2012 02:36 |
|
Aredna posted:This is probably because I haven't looked in to it, but I tend to use rank because I know its deterministic. Is row number going to always give you the same result on the same subset of data when corner cases like this pop up? I know rank will give me poo poo results forcing me to review the script when I miss a corner case, but I worry that row number would work and give me a different result on two runs if the data in my table changes. Good point. I don't actually know, but I suspect that it would be undefined.
|
# ? Jul 28, 2012 06:12 |
|
I started learning sql a while back by using this site http://sqlzoo.net/ (among other things). I had to stop for a bit, and I'm starting to pick it up again. Are there any similar/better sites out there or is this my best bet?
|
# ? Jul 28, 2012 08:18 |
|
First off, let me say I know dick about SQL programming and I'm in a major bind, so I don't have time to learn my way up to building this query. I'm planning on learning SQL so that I can handle tasks like this in the very near future, but I'm under the gun and the only one in my department this week, so I've got no time to work my way into this solution in the time I need it done. Any help is massively appreciated, and I don't normally just ask to be given an answer, but I'm hoping you guys will be my saviors.

I need to build a query that pulls data from two tables based on a list output by filtering a third table's data, joining the data from the first two tables into unique rows based on the same field data. Let me try and explain a bit better: I have tables 1, 2 and 3. All three tables have a field called login, with matching values between them - as in, all three tables have a record with the value bob for login. In table 1, there is a field called type. In table 2, there are fields called data and score. And in table 3, there are fields called demo and state.

I need to filter table 1 by type with a value of 0000 to 0009 (basically, 000%). I then need to apply the output of that filter against tables 2 and 3, thereby filtering those two tables, and get output of the form loginID|data|score|demo|state that I can export to a file. I already have the filter for table 1 built, but I need to know how to apply that output against the other two tables and then join both outputs into unique rows, based on login value, with the other four fields. Or should I join tables 2 and 3 and then filter based on login values present in table 1? Hopefully I've made that clear enough. Any help is greatly appreciated, as I've got no clue how to go about this. Thanks. Pvt. Public fucked around with this message at 04:35 on Aug 2, 2012 |
# ? Aug 2, 2012 04:33 |
|
Is there any equivalent for IN that uses LIKE? Specifically, I have two tables: a Company table with a Company_Name column, and a Security table with a foreign key Company_Id and a Security_Symbol column. I'm doing something like: code:
|
# ? Aug 2, 2012 05:34 |
|
Pvt. Public posted:First off, let me say I know dick about SQL programming and I'm in a major bind, so I don't have time to learn my way up to building this query. I'm planning on learning SQL so that I can handle tasks like this in the very near future, but I'm under the gun and the only one in my department this week so I've got no time to work my way into this solution in the time I need it done. Any help is massively appreciated and I don't normally just ask to be given an answer, but I'm hoping you guys will be my saviors. SQL code:
|
# ? Aug 2, 2012 06:11 |
|
Pvt. Public posted:I need to build a query to pull data from two tables based on a list output by filtering a 3rd table's data and joining the data from the first two tables into unique rows all based on the same field data. I'm not sure I fully understand your problem, but I'll take a stab at it anyway. Numbers are confusing, so let's rename your three tables Alpha, Beta and Gamma. They look something like: code:
code:
code:
Adding the third table is a logical extension: code:
Now, this is going to work perfectly as long as there's only one row per login in each table. If there are multiple rows for a login in any of the tables, you're going to have a problem. Let's say that there's one record in Alpha, two in Beta, and one in Gamma for a single login. You're going to end up with two rows in your result set. The first will have the data from the only row in Alpha, the first row in Beta, and the only row in Gamma. The second will have the only row in Alpha, the second row in Beta, and the only row in Gamma. That's not a disaster, but what if there are two rows in Gamma as well? Now you're going to get four. code:
Has this been helpful?
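The multiplication caveat is worth seeing with real rows; a quick self-contained demo (SQLite via Python, invented data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE alpha (login TEXT, type TEXT);
CREATE TABLE beta  (login TEXT, data TEXT, score INTEGER);
CREATE TABLE gamma (login TEXT, demo TEXT, state TEXT);
INSERT INTO alpha VALUES ('bob','0001');
INSERT INTO beta  VALUES ('bob','d1',10),('bob','d2',20);   -- two rows
INSERT INTO gamma VALUES ('bob','x','NY'),('bob','y','CA'); -- two rows
""")

rows = con.execute("""
SELECT a.login, b.data, b.score, g.demo, g.state
FROM alpha a
JOIN beta  b ON b.login = a.login
JOIN gamma g ON g.login = a.login
WHERE a.type LIKE '000%'
""").fetchall()

# 1 alpha row x 2 beta rows x 2 gamma rows = 4 result rows
print(len(rows))  # 4
```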
|
# ? Aug 2, 2012 06:20 |
|
Thank you both, Goat Bastard and McGlockenshire! I will give these a shot and see what I get. I truly appreciate it.
|
# ? Aug 2, 2012 14:08 |
|
TransatlanticFoe posted:Is there any equivalent for IN that uses LIKE? Specifically, I have two tables, a Company table with a Company_Name column, and a Security table with a foreign key using Company_Id and a Security_Symbol column. I'm doing something like: If your results are always three characters you could do code:
|
# ? Aug 2, 2012 14:16 |
|
TransatlanticFoe posted:Is there any equivalent for IN that uses LIKE? Specifically, I have two tables, a Company table with a Company_Name column, and a Security table with a foreign key using Company_Id and a Security_Symbol column. I'm doing something like: I've done something similar by putting the results in a temp table or CTE and, when I'm selecting SOMESTRINGHERE, appending '%' to the beginning and end. Then you just join on LIKE instead of =.
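A runnable sketch of that join-on-LIKE trick (SQLite via Python; the schema and rows are guesses at the Company/Security setup):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE company  (company_id INTEGER PRIMARY KEY, company_name TEXT);
CREATE TABLE security (company_id INTEGER, security_symbol TEXT);
INSERT INTO company  VALUES (1,'ACME Corp'),(2,'Globex Inc');
INSERT INTO security VALUES (1,'ACM'),(2,'GLO');
""")

# Append the wildcard to the symbol at join time, then join on LIKE
# instead of = to get a prefix match
rows = con.execute("""
SELECT c.company_name, s.security_symbol
FROM security s
JOIN company c ON c.company_name LIKE s.security_symbol || '%'
""").fetchall()
print(rows)  # [('ACME Corp', 'ACM'), ('Globex Inc', 'GLO')]
```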
|
# ? Aug 2, 2012 15:08 |
|
Goat Bastard posted:
Thanks Goat Bastard, this worked perfectly! I appreciate the help.
|
# ? Aug 2, 2012 15:58 |
|
Sub Par posted:If your results are always three characters you could do Couldn't you get around the character limit by using substring(security_symbol,1,length(security_symbol))?
|
# ? Aug 2, 2012 19:12 |