McGlockenshire
Dec 16, 2005

GOLLOCKS!
That'd be a RHEL 5.x kernel. RHEL almost never updates packages to newer releases. Instead, they tend to backport security fixes. That version of MySQL is probably fine, as long as they are indeed keeping it up to date with the upstream packages.
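If you can get any SQL session on that box (no shell needed, e.g. phpMyAdmin through cPanel), a couple of read-only statements will at least show exactly which build is running; a minimal sketch, assuming MySQL 5.x:

code:
-- full server version string, e.g. '5.0.95-community'
SELECT VERSION();

-- version_comment and friends show how the build was packaged
SHOW VARIABLES LIKE 'version%';
It won't tell you whether Red Hat is backporting security fixes into that build, though; for that you'd need someone with shell access to read the package changelog.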


Sprawl
Nov 21, 2005



Scaramouche posted:

I'm not really a Lunix guy, but someone asked me to look over their host's install for security reasons. The one thing that sticks out to me is that they're running MySQL 5.0.95-community on Red Hat 2.6.18-194.11.3.el5. The thing is, from what I can tell, the latest version of MySQL is 5.5.25a, and anything <5.1 is actually pretty shoddy from a security standpoint. Is this an opinion you guys share, or am I just being too picky?
From a security standpoint that is bad, but from a feature standpoint it's really bad; the differences between those versions are quite vast.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

McGlockenshire posted:

That'd be a RHEL 5.x kernel. RHEL almost never updates packages to newer releases. Instead, they tend to backport security fixes. That version of MySQL is probably fine, as long as they are indeed keeping it up to date with the upstream packages.

Do you know if there's any way to tell? The only info I have (don't have shell access natch) is from the 'cpanel' attributes which read:
cPanel Version 11.32.3 (build 21)
Apache version 2.0.63
PHP version 5.2.14
MySQL version 5.0.95-community
Architecture x86_64
Operating system linux
Perl version 5.8.8
Kernel version 2.6.18-194.11.3.el5
cPanel Pro 1.0 (RC1)

I've googled around the Kernel version but that number brings up results for CentOS 5 and OEL 5 as well as RHEL 5.

McGlockenshire
Dec 16, 2005

GOLLOCKS!

Scaramouche posted:

cPanel Version 11.32.3 (build 21)

You're hosed; those web hosting automation suites are notorious for wanting things done in a particular way. The chances of successfully updating to a newer release just dropped significantly.

Bob Morales
Aug 18, 2006



Scaramouche posted:

I'm not really a Lunix guy, but someone asked me to look over their host's install for security reasons. The one thing that sticks out to me is that they're running MySQL 5.0.95-community on Red Hat 2.6.18-194.11.3.el5. The thing is, from what I can tell, the latest version of MySQL is 5.5.25a, and anything <5.1 is actually pretty shoddy from a security standpoint. Is this an opinion you guys share, or am I just being too picky?

Get permission first, then hit it with Metasploit or OpenVAS

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Bob Morales posted:

Get permission first, then hit it with Metasploit or OpenVAS

Are you stalking me ;)
http://forums.somethingawful.com/showthread.php?threadid=3455244&pagenumber=4#post405471224

mindphlux
Jan 8, 2004

by R. Guyovich
man, I work with SQL so infrequently now that my brain just starts freezing whenever I have to do anything even remotely complicated. and this isn't even that complicated.

So, I have two tables, wagedata and employees. an employee will have multiple records in the employees table for various positions he's worked at. I want to select all wagedata entries for a given employee's SSN, only when that employee has a 'wotckey' entry.

that part is easy, got it.

code:
SELECT DISTINCT dbo.wagedata.id,
                dbo.wagedata.ein,
                dbo.wagedata.last,
                dbo.wagedata.first,
                dbo.wagedata.ssn,
                dbo.wagedata.storeid,
                dbo.wagedata.storealpha,
                dbo.wagedata.storestate,
                dbo.employees.wotckey,
                dbo.employees.certein,
                dbo.employees.storenumber,
                dbo.employees.eeid
FROM   dbo.employees
       INNER JOIN dbo.wagedata
               ON dbo.employees.ssn = dbo.wagedata.ssn
WHERE  ( dbo.employees.wotckey IS NOT NULL )
Now what I want to do is restrict the wagedata results to only records where the dbo.wagedata.storeid OR dbo.wagedata.ein match up with the employee records that have a wotckey entry. Essentially, I only want wagedata from the positions that have wotckeys. (which you can tell by either storeid or EIN)

So I rewrote the WHERE clause as follows:

code:
WHERE  ( dbo.employees.wotckey IS NOT NULL )
       AND ( dbo.wagedata.ein = dbo.employees.fein )
        OR ( dbo.wagedata.storeid = dbo.employees.storenumber )
this part is where I'm being an idiot and basically don't know how to go about this. Is this completely wrong? The query won't execute so I'm probably royally screwing up.

mindphlux fucked around with this message at 23:26 on Jul 18, 2012

Goat Bastard
Oct 20, 2004

mindphlux posted:


...

Now what I want to do is restrict the wagedata results to only records where the dbo.wagedata.storeid OR dbo.wagedata.ein match up with the employee records that have a wotckey entry. Essentially, I only want wagedata from the positions that have wotckeys. (which you can tell by either storeid or EIN)

So I rewrote the WHERE clause as follows:

code:
WHERE  ( dbo.employees.wotckey IS NOT NULL )
       AND ( dbo.wagedata.ein = dbo.employees.fein )
        OR ( dbo.wagedata.storeid = dbo.employees.storenumber )
this part is where I'm being an idiot and basically don't know how to go about this. Is this completely wrong? The query won't execute so I'm probably royally screwing up.

Won't execute how? You get an exception or it just returns no results?

Regardless, I've found it to be a good policy to use brackets when there's both ANDs and ORs in a WHERE clause. The order of operations without them can be hard to get right.
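For example, AND binds tighter than OR, so without brackets your clause is actually evaluated as:

code:
WHERE  ( dbo.employees.wotckey IS NOT NULL
         AND dbo.wagedata.ein = dbo.employees.fein )
    OR ( dbo.wagedata.storeid = dbo.employees.storenumber )
which returns every row where the storeids match, wotckey or not.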

So I'd suggest trying

code:
WHERE  ( dbo.employees.wotckey IS NOT NULL )
       AND (
                ( dbo.wagedata.ein = dbo.employees.fein )
             OR ( dbo.wagedata.storeid = dbo.employees.storenumber )
           )

Goat Bastard
Oct 20, 2004

I've got an Oracle question.

code:
WITH  tbl AS (SELECT 1 AS id, 'X' AS data FROM dual UNION ALL 
              SELECT 2, 'Y' FROM dual UNION ALL
              SELECT 3, 'Z' FROM dual
             )
SELECT data
FROM   tbl
WHERE  id = trunc(dbms_random.value(1, 4))
;
The idea was to select a random row from tbl (since dbms_random.value(low, high) returns a floating point number greater than or equal to low and strictly less than high, and trunc(number) rounds down always to the nearest whole number)

It doesn't work as expected though: sometimes it returns no rows, sometimes it returns 1 row, sometimes 2 or even all 3. They aren't sequential rows either: it sometimes returns X,Z. This happens regardless of whether tbl is an inline view or a real table.

Testing further using

code:
DECLARE
  TYPE result_tbl IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  
  l_results result_tbl;
  l_result NUMBER;
  
  FUNCTION f RETURN NUMBER
  AS
    l_count NUMBER;
  BEGIN
    WITH  tbl AS (SELECT 1 AS id, 'A' AS data FROM dual UNION ALL 
                  SELECT 2, 'B' FROM dual UNION ALL
                  SELECT 3, 'C' FROM dual UNION ALL
                  SELECT 4, 'D' FROM dual UNION ALL
                  SELECT 5, 'E' FROM dual UNION ALL
                  SELECT 6, 'F' FROM dual UNION ALL
                  SELECT 7, 'G' FROM dual UNION ALL
                  SELECT 8, 'H' FROM dual UNION ALL
                  SELECT 9, 'I' FROM dual UNION ALL
                  SELECT 10, 'J' FROM dual 
                )
    SELECT count(data)
    INTO   l_count
    FROM   tbl
    WHERE  id = trunc(dbms_random.value(1, 11))
    ;
    RETURN l_count;
  END;
BEGIN
  FOR i IN 1..100000 LOOP
    l_result := f();
    BEGIN
      l_results(l_result) := l_results(l_result) + 1;
    EXCEPTION WHEN no_data_found THEN
      l_results(l_result) := 1;
    END;
  END LOOP;
  
  FOR i IN l_results.FIRST..l_results.LAST LOOP
    dbms_output.put_line('i: ' || i || ', count: ' || l_results(i));
  END LOOP;
END;
/
gave me

code:
i: 0, count: 34641
i: 1, count: 39201
i: 2, count: 19140
i: 3, count: 5726
i: 4, count: 1113
i: 5, count: 153
i: 6, count: 23
i: 7, count: 3
and similar numbers when run repeatedly (never more than 7 in total for 100,000 or 1,000,000 iterations and I gave up waiting for 10,000,000 to return on my desktop XE install).

I solved my actual problem another way but I'm interested to know what causes this behaviour if anyone knows. The original problem happened on 11g r1 and I reproduced it with the code here on 10g r2 XE.

Goat Bastard fucked around with this message at 13:32 on Jul 19, 2012

var1ety
Jul 26, 2004

Goat Bastard posted:

I've got an Oracle question.

code:
WITH  tbl AS (SELECT 1 AS id, 'X' AS data FROM dual UNION ALL 
              SELECT 2, 'Y' FROM dual UNION ALL
              SELECT 3, 'Z' FROM dual
             )
SELECT data
FROM   tbl
WHERE  id = trunc(dbms_random.value(1, 4))
;

Your function call is executing once for each row in tbl, rather than once for the statement.
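If the goal is exactly one random row, a common workaround is to sort by the random value instead, so it's only evaluated as part of the ordering rather than compared against every row; a quick sketch:

code:
SELECT data
FROM   (SELECT data
        FROM   tbl
        ORDER BY dbms_random.value
       )
WHERE  rownum = 1
;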

Goat Bastard
Oct 20, 2004

var1ety posted:

Your function call is executing once for each row in tbl, rather than once for the statement.

Right, that makes sense. Thanks.

Sub Par
Jul 18, 2001


Dinosaur Gum
Data integration question here. At my company, we have two servers running SQL Server 2008R2. We have a process whereby server A, behind a firewall and in our local domain, executes hundreds of stored procedures overnight that update and/or drop and replace hundreds of tables. This occurs across about 20 different databases on the server. The tables are data sources for an SSRS-powered web-based reporting tool.

Server B is our web server, outside the DMZ. Each of the 20 databases on Server A has a counterpart on Server B where there are copies of the tables that drive the reports. While these two databases are counterparts, they are not identical - they just share the tables in question.

The stored procedures are run as part of an SSIS package that has as its last step a Transfer SQL Server Objects task which copies the tables we need copied from Server A to Server B. Recently, we have been experiencing a problem whereby this step fails for one or more databases due to a TCP Communication Link Failure or a Socket Unavailable error. After many rounds of trying to get this fixed, it was suggested to us that we use Replication to transfer the data rather than the SSIS step.

I have zero experience with replication and am self taught at all of this so my first step was to google for comparisons between the SSIS task and replication to see if it meets our needs. I can't really tell. Given this short description, does it seem like this task is best suited for replication, or should we continue to work on the SSIS step?

Sub Par fucked around with this message at 19:32 on Jul 19, 2012

revmoo
May 25, 2006

#basta
I'm dealing with a heavily loaded mysqld process at work. It runs a lot of reports that are poorly written and it's choking during the business day. One symptom I'm seeing is that static file loads are taking 1-10 seconds, I assume because Apache is blocking waiting for data from the database. Is this a safe assumption?

Also I noticed it will only use 2% of system RAM no matter what I do, so I advised IT to increase the allotment. It's using like 16MB right now, and every query, even <1s queries, seems to be writing tmp data to disk all the time. I'm hoping this will fix it. I had a report I fired up today that locked like 4 tables and caused so much stuff to stack up in the queue that the server just poo poo itself and I had to restart the process.

mindphlux
Jan 8, 2004

by R. Guyovich

Goat Bastard posted:

Won't execute how? You get an exception or it just returns no results?

Regardless, I've found it to be a good policy to use brackets when there's both ANDs and ORs in a WHERE clause. The order of operations without them can be hard to get right.

So I'd suggest trying

code:
WHERE  ( dbo.employees.wotckey IS NOT NULL )
       AND (
                ( dbo.wagedata.ein = dbo.employees.fein )
             OR ( dbo.wagedata.storeid = dbo.employees.storenumber )
           )

I was just getting a timeout on the query. The brackets helped immensely, and the query does what I wanted it to. Thanks! Glad I wasn't too far off track...

Thel
Apr 28, 2010

revmoo posted:

I'm dealing with a heavily loaded mysqld process at work. It runs a lot of reports that are poorly written and it's choking during the business day. One symptom I'm seeing is that static file loads are taking 1-10 seconds, I assume because Apache is blocking waiting for data from the database. Is this a safe assumption?

Also I noticed it will only use 2% of system RAM no matter what I do, so I advised IT to increase the allotment. It's using like 16MB right now, and every query, even <1s queries, seems to be writing tmp data to disk all the time. I'm hoping this will fix it. I had a report I fired up today that locked like 4 tables and caused so much stuff to stack up in the queue that the server just poo poo itself and I had to restart the process.

MySQL ships with a default buffer cache allotment of 8MB. Why they defaulted the main memory pool to the size of RAM in a 386-era chip I'll never know (actually I do know - we need an emote for :mysql:). Anyway, google 'mysql memory tuning' for an overview of the options you need to set.
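As a starting point, you can check what the server is actually configured with (and how often temp tables spill to disk) from any client session; a sketch using MySQL 5.x variable names, noting that the sizes themselves live in my.cnf and need a restart to change on 5.x:

code:
-- main InnoDB pool; the 8MB-era default is the usual culprit
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

-- MyISAM index cache, if you still have MyISAM tables
SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';

-- how often temporary tables are being written to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%';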

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

Sub Par posted:

I have zero experience with replication and am self taught at all of this so my first step was to google for comparisons between the SSIS task and replication to see if it meets our needs. I can't really tell. Given this short description, does it seem like this task is best suited for replication, or should we continue to work on the SSIS step?

We use replication pretty heavily to move data from server to server for reporting purposes and to keep our production hotswap backups in sync with the main production servers. The basic answer is that replication is hot poo poo. But SSIS is also hot poo poo, so which one suits your purposes is going to depend on what you're using the step for. Transfer SQL Server Objects can transfer procs, functions, views, etc.; replication really just keeps table data in sync.

But it doesn't sound like you're actually using any of that, so why do you care? For moving pure data around, I prefer snapshot replication or SSIS. SSIS does have the advantage that you can put the SSIS package onto some bullshit middleman server that exists in a network zone that the DMZ won't cause issues with, which is nice. The replication agent needs to see the other server to work. Crossing a firewall may be more trouble than it's worth there.

Are you really just copying data? If so, why are you not just using a Data Flow task?

Sub Par
Jul 18, 2001


Dinosaur Gum
Thanks for that. We are just copying data, but we need to completely replace the contents of the tables on the destination server. And there are 20-40 tables per database. As far as I know, this would mean one mean-looking data flow task with one source/destination combo for each table. The transfer objects task lets us just check off the tables we want to move and it's done.

We suspect that there may be something wrong with either the hardware on our web server or some of the switching/firewall gear between the servers that is causing the random crapouts we've been seeing. We've replaced the web server now so we'll see how that goes, but we'll stick with SSIS for now. Thanks.

JeansW
Aug 4, 2004
Yes. Like the jeans...

Sub Par posted:

Thanks for that. We are just copying data, but we need to completely replace the contents of the tables on the destination server. And there are 20-40 tables per database. As far as I know, this would mean one mean-looking data flow task with one source/destination combo for each table. The transfer objects task lets us just check off the tables we want to move and it's done.

We suspect that there may be something wrong with either the hardware on our web server or some of the switching/firewall gear between the servers that is causing the random crapouts we've been seeing. We've replaced the web server now so we'll see how that goes, but we'll stick with SSIS for now. Thanks.

You can always just use the "Import / Export Wizard" in SSMS to create your SSIS package for you.

Just right-click a database and choose Tasks -> Import Data (or Export Data if it's the source DB)

Then choose all the tables you want to copy and whatever options you want for each one (replace data instead of append, enable identity_insert to preserve primary keys).

Then just choose save package instead of run immediately and save the .dtsx somewhere. Then you can always open it later if you need to make edits to it.

mindphlux
Jan 8, 2004

by R. Guyovich

JeansW posted:

You can always just use the "Import / Export Wizard" in SSMS to create your SSIS package for you.

Just right-click a database and choose Tasks -> Import Data (or Export Data if it's the source DB)

Then choose all the tables you want to copy and whatever options you want for each one (replace data instead of append, enable identity_insert to preserve primary keys).

Then just choose save package instead of run immediately and save the .dtsx somewhere. Then you can always open it later if you need to make edits to it.

This is good advice. I got SSMS to automatically create stuff that I couldn't for the life of me figure out how to do using the SSIS package manager. Dunno why that was, but I ended up using SSMS like 10 times in my last database project to kick-start what ended up as much more complicated SSIS packages.

Incompl
Mar 23, 2008

What's the best way to handle date conversions in SQL Server 2008?

I have to convert datetimes which were persisted as ET to UTC, and my Google-Fu is failing me right now. The hardest part is taking into account the daylight savings, which seems to be dependent on the year. Argh! Anyway, any help is appreciated. Thanks!

Aredna
Mar 17, 2007
Nap Ghost
If you want to do it just for the current datetime, you can do something simple like this to convert from UTC to the timezone of your server:
code:
dateadd(hour,datediff(hour,getutcdate(),getdate()),FIELDNAME)
All of my data is in CST, so when I need to do something longer I use the following, which we just hardcoded as a short-term workaround until we find a better way to do it. I should note that we do not have data around the time of the actual changes each year, so I didn't put any thought into coding the exact crossover time correctly.
code:
dateadd(hour,
        case when cast(FIELDNAME as date) between '2007-03-11' and '2007-11-03' then -5
             when cast(FIELDNAME as date) between '2008-03-09' and '2008-11-01' then -5
             when cast(FIELDNAME as date) between '2009-03-08' and '2009-10-31' then -5
             when cast(FIELDNAME as date) between '2010-03-14' and '2010-11-06' then -5
             when cast(FIELDNAME as date) between '2011-03-13' and '2011-11-05' then -5
             when cast(FIELDNAME as date) between '2012-03-11' and '2012-11-03' then -5
             when cast(FIELDNAME as date) between '2013-03-10' and '2013-11-02' then -5
             when cast(FIELDNAME as date) between '2014-03-09' and '2014-11-01' then -5
             when cast(FIELDNAME as date) between '2015-03-08' and '2015-10-31' then -5
             else -6 end,
        FIELDNAME)
The ideal solution is to use the datetimeoffset data type to store time zone offsets, but that isn't an option for those of us just reading the data.


EDIT: I just realized you are going the other way. In that case you should know that there isn't a perfect mapping: in the fall, when the clocks changed, did a timestamp between 2:00:00.000 AM and 2:59:59.997 AM occur the first or second time we had that time locally? For your situation I would just use a case statement to do a one-time conversion to UTC. If you have a way to disambiguate the 2-3 AM issue with some other field, such as an ordering on a strictly increasing ID field, that's great; if not, and it's not a huge issue, just assume they occur in the first pass through that hour and know that if someone looks at the data for that one hour of each year it's gonna be off.
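For the ET-to-UTC direction the same hardcoded CASE trick works with the signs flipped (EDT is UTC-4, EST is UTC-5); a sketch for one year only, with FIELDNAME standing in for your column as above, and the same approximation at the crossover:

code:
dateadd(hour,
        case when cast(FIELDNAME as date) between '2012-03-11' and '2012-11-03' then 4
             else 5 end,
        FIELDNAME)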

Aredna fucked around with this message at 22:27 on Jul 24, 2012

Crazy Mike
Sep 16, 2005

Now with 25% more kimchee.
How do I work with fields as part of a group?

A file gets checked out; it has a NULL check-in time. When it gets checked back in, the check-in time gets filled in.
Sometimes that doesn't happen, and a file can end up with more than one checkout that has a NULL check-in time.
Since only one checkin_datetime should ever be NULL (meaning the file is currently checked out), I want to fix each earlier NULL checkin_datetime by giving it the next checkout time.



code:
   
    SELECT [checkout_history_id]
          ,[checkout_datetime]
          ,[checkin_datetime]
          ,[CHECKOUT_HISTORY].[file_id]
    FROM [CHECKOUT_HISTORY]
    INNER JOIN FILE ON [CHECKOUT_HISTORY].file_id = FILE.file_id
    WHERE file_year = '2012' 
    AND 
    [CHECKOUT_HISTORY].file_id IN (	SELECT [CHECKOUT_HISTORY].file_id 
  					FROM [CHECKOUT_HISTORY]
  					WHERE checkin_datetime is null
  					GROUP BY [CHECKOUT_HISTORY].file_id 
  					HAVING COUNT([CHECKOUT_HISTORY].file_id)>1 
  					)
       ORDER BY [CHECKOUT_HISTORY].file_id, checkout_datetime
       
returns 21 rows: 3 files with 3 rows each and 3 files with 4 rows each (spaces added for clarity)
code:
    checkout_history_id	checkout_datetime		checkin_datetime	file_id
    58120		2012-04-10 14:33:25.867		NULL			9847
    13317		2012-05-08 09:35:58.087		2012-06-18 08:46:35.520	9847
    10010		2012-06-18 08:46:35.520		NULL			9847
     
    58120		2012-04-10 14:33:25.867		NULL			1180
    13312		2012-05-08 09:35:58.130		2012-06-18 08:46:36.230	1180
    10470		2012-06-18 08:46:36.230		NULL			1180
    
    51207		2012-04-10 14:33:25.867		NULL			3843
    58170		2012-05-08 09:35:58.160		2012-06-18 08:46:36.587	3843
    36807		2012-06-18 08:46:36.587		NULL			3843
     
    78343		2012-04-05 14:37:15.327		2012-04-05 14:37:26.903	3737
    33763		2012-04-10 14:33:25.867		NULL			3737
    70687		2012-05-08 08:23:06.853		2012-06-18 08:51:49.017	3737
    18237		2012-06-18 08:51:49.017		NULL			3737
     
    42310		2012-04-05 16:07:42.297		2012-04-09 16:06:47.737	4150
    33780		2012-04-10 14:33:25.867		NULL			4150
    58203		2012-05-08 09:35:58.193		2012-06-18 08:46:34.790	4150
    35047		2012-06-18 08:46:34.790		NULL			4150
     
    07620		2012-04-05 14:35:07.603		2012-04-05 14:35:28.090	1137
    34055		2012-04-10 14:33:25.867		NULL			1137
    06767		2012-05-08 08:23:06.763		2012-06-18 08:49:35.930	1137
    36183		2012-06-18 08:49:35.930		NULL			1137
     
     
     

Instead of manually looking at them and writing something like
code:
UPDATE CHECKOUT_HISTORY
SET checkin_datetime = (SELECT checkout_datetime FROM CHECKOUT_HISTORY WHERE checkout_history_id = 13317)
WHERE checkout_history_id = 58120
for each one:

How do I treat this in relation to a group of rows having the same file_id? How do I identify which row needs its checkin_datetime updated (min(checkout_datetime) where checkin_datetime is NULL), and which row to get the checkout_datetime from (the next checkout_datetime in the group)?

Also, by moving the WHERE checkin_datetime IS NULL clause into its own subquery, I have another query that returns over 100,000 rows in groups like this that have at least (and most of the time exactly) one NULL checkin_datetime per group.
Most of these look OK: the single NULL value is for the row with the latest checkout_datetime, meaning the file is currently checked out, which isn't a problem.
How do I exclude those groups from the results (i.e. any group where the checkin_datetime for the row with the max(checkout_datetime) is NULL), so I can see which of these files actually have problems?

Crazy Mike fucked around with this message at 08:45 on Jul 25, 2012

Incompl
Mar 23, 2008

Aredna posted:

If you want to do it just for the current datetime, you can do something simple like this to convert from UTC to the timezone of your server:
code:
dateadd(hour,datediff(hour,getutcdate(),getdate()),FIELDNAME)
All of my data is in CST, so when I need to do something longer I use the following, which we just hardcoded as a short-term workaround until we find a better way to do it. I should note that we do not have data around the time of the actual changes each year, so I didn't put any thought into coding the exact crossover time correctly.
code:
dateadd(hour,
        case when cast(FIELDNAME as date) between '2007-03-11' and '2007-11-03' then -5
             when cast(FIELDNAME as date) between '2008-03-09' and '2008-11-01' then -5
             when cast(FIELDNAME as date) between '2009-03-08' and '2009-10-31' then -5
             when cast(FIELDNAME as date) between '2010-03-14' and '2010-11-06' then -5
             when cast(FIELDNAME as date) between '2011-03-13' and '2011-11-05' then -5
             when cast(FIELDNAME as date) between '2012-03-11' and '2012-11-03' then -5
             when cast(FIELDNAME as date) between '2013-03-10' and '2013-11-02' then -5
             when cast(FIELDNAME as date) between '2014-03-09' and '2014-11-01' then -5
             when cast(FIELDNAME as date) between '2015-03-08' and '2015-10-31' then -5
             else -6 end,
        FIELDNAME)
The ideal solution is to use the datetimeoffset data type to store time zone offsets, but that isn't an option for those of us just reading the data.


EDIT: I just realized you are going the other way. In that case you should know that there isn't a perfect mapping: in the fall, when the clocks changed, did a timestamp between 2:00:00.000 AM and 2:59:59.997 AM occur the first or second time we had that time locally? For your situation I would just use a case statement to do a one-time conversion to UTC. If you have a way to disambiguate the 2-3 AM issue with some other field, such as an ordering on a strictly increasing ID field, that's great; if not, and it's not a huge issue, just assume they occur in the first pass through that hour and know that if someone looks at the data for that one hour of each year it's gonna be off.
Thank you for the response; I figured there was no 'pretty' and easy way to do it. This whole experience is making me hate dates that aren't in UTC, haha. I actually didn't think of the issue you presented, but you're right... ugh! We decided to bypass SQL Server entirely; we'll probably run a script to read in the dates and let the Java JDK handle the time conversions.

Thanks again!

Aredna
Mar 17, 2007
Nap Ghost

Crazy Mike posted:

How do I work with fields as part of a group?
....
How do I treat this in relation to a group of rows having the same file_id? How do I identify which row needs its checkin_datetime updated (min(checkout_datetime) where checkin_datetime is NULL), and which row to get the checkout_datetime from (the next checkout_datetime in the group)?

This code is for SQL Server, but you should be able to find a similar function for most flavors of SQL. You want to use the RANK function to get you a rank of 1, 2, 3, etc. to join on. Then you can join from a row with rank 1 to rank 2 and get the next row.

If you add this to the query you have below it should give you what you need to identify the proper rows.
code:
rank() over (partition by [CHECKOUT_HISTORY].[file_id] order by checkout_datetime, [checkout_history_id]) as filerank
To account for the corner case of having one file_id with 2 identical values for checkout_datetime I also sorted by checkout_history_id under the assumption that it is a unique identifier. It's only used for this corner case as a tiebreaker, but will prevent future issues should this scenario occur.
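Putting that together, the whole fix-up could be a single statement; this is only a sketch, assuming checkout_history_id is unique as above, so run the ranked SELECT on its own first to eyeball what it would touch:

code:
WITH ranked AS (
    SELECT checkout_history_id,
           checkout_datetime,
           file_id,
           rank() OVER (PARTITION BY file_id
                        ORDER BY checkout_datetime, checkout_history_id) AS filerank
    FROM   CHECKOUT_HISTORY
)
UPDATE ch
SET    checkin_datetime = nxt.checkout_datetime
FROM   CHECKOUT_HISTORY ch
       INNER JOIN ranked cur ON cur.checkout_history_id = ch.checkout_history_id
       INNER JOIN ranked nxt ON nxt.file_id = cur.file_id
                            AND nxt.filerank = cur.filerank + 1
WHERE  ch.checkin_datetime IS NULL
-- the latest NULL (file still checked out) has no next row,
-- so the join naturally leaves it alone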

Goat Bastard
Oct 20, 2004

Aredna posted:

If you add this to the query you have below it should give you what you need to identify the proper rows.
code:
rank() over (partition by [CHECKOUT_HISTORY].[file_id] order by checkout_datetime, [checkout_history_id]) as filerank
To account for the corner case of having one file_id with 2 identical values for checkout_datetime I also sorted by checkout_history_id under the assumption that it is a unique identifier. It's only used for this corner case as a tiebreaker, but will prevent future issues should this scenario occur.

Good advice. Just as an FYI, using row_number() OVER (PARTITION BY [CHECKOUT_HISTORY].[file_id] ORDER BY checkout_datetime) instead of rank() will also eliminate that corner case you talked about, without needing to add another column to the ORDER BY.

opie
Nov 28, 2000
Check out my TFLC Excuse Log!
I don't know if this is the right place to ask about Crystal Reports, but here goes.

I have a Crystal report that queries some tables by Division. In this report they want a section that shows all the Assignment values from one table and for each one displays a sum of the matching Assignments from another table. This is confusing, so here is what the tables look like:
TableA
PKField - primary key
Division

TableB
PKField - foreign key to TableA
Assignment

TableC
CodeType - will be "E"
Code - each code is a possible value for Assignment in TableB

I will be looking for all records in TableA that have a certain Division. Then I will display all values for Code in TableC where CodeType="E". For each value of Code, it will display a total where that value matches an Assignment returned for TableB. Here is some sample data:
code:
TableA:  PKField    Division
         1234       Sales
         2345       Sales
         3456       Marketing

TableB:  PKField    Assignment
         1234       Demo
         2345       Train

TableC:  CodeType   Code
         E          Demo
         E          Train
         E          Test
For Division="Sales", the report would show:

Demo - 1
Train - 1
Test - 0

I don't know how to do this within Crystal. I have the main report that shows all the simple stuff, and then I was going to insert a subreport, but within that I'm not sure what to do.

Sab669
Sep 24, 2009

I posted a few weeks back asking how I'd find all of the tables in my DB that contained a field named X... now I come back with another strange question: is there a way to query the DB for all column names of type decimal? If I could get their table name too that'd be lovely.

edit: Eh, balls, I'm bad at google. Managed to find it on Stack Overflow after searching for a bit.

If anyone else is curious this did it:
SQL code:
select
    so.name table_name
   ,sc.name column_name
   ,st.name data_type
from sysobjects so
inner join syscolumns sc on (so.id = sc.id)
inner join systypes st on (st.type = sc.type)
where so.type = 'U'
and st.name IN ('DECIMAL')
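For what it's worth, on SQL Server 2005 and up the newer catalog views do the same job with less weirdness than sysobjects/systypes; a rough equivalent:

SQL code:
select t.name  as table_name,
       c.name  as column_name,
       ty.name as data_type
from sys.tables t
inner join sys.columns c on (c.object_id = t.object_id)
inner join sys.types ty on (ty.user_type_id = c.user_type_id)
where ty.name = 'decimal'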

Sab669 fucked around with this message at 22:52 on Jul 27, 2012

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!

opie posted:

I don't know if this is the right place to ask about Crystal Reports, but here goes.

I have a Crystal report that queries some tables by Division. In this report they want a section that shows all the Assignment values from one table and for each one displays a sum of the matching Assignments from another table. This is confusing, so here is what the tables look like:
TableA
PKField - primary key
Division

TableB
PKField - foreign key to TableA
Assignment

TableC
CodeType - will be "E"
Code - each code is a possible value for Assignment in TableB

I will be looking for all records in TableA that have a certain Division. Then I will display all values for Code in TableC where CodeType="E". For each value of Code, it will display a total where that value matches an Assignment returned for TableB. Here is some sample data:
code:

TableA:  PKField    Division
         1234       Sales
         2345       Sales
         3456       Marketing

TableB:  PKField    Assignment
         1234       Demo
         2345       Train

TableC:  CodeType   Code
         E          Demo
         E          Train
         E          Test
For Division="Sales", the report would show:

Demo - 1
Train - 1
Test - 0

I don't know how to do this within Crystal. I have the main report that shows all the simple stuff, and then I was going to insert a subreport, but within that I'm not sure what to do.
This is unfortunately not something that you can do in Crystal unless you write your own query and add it as a "command" in the table browser.

The SQL query would be something like
code:
SELECT tablea.division,
       tableb.assignment,
       count(tableb.pkfield) AS total
FROM   tablea
INNER JOIN tableb ON tablea.pkfield = tableb.pkfield
GROUP BY tablea.division,
         tableb.assignment
So you create something like this as a "command" in the table select area, then join it to tablea on the division and put the output in a group footer or header for the division group. At least if nothing else is in the report; if you have an existing data set I would put it in a subreport, because poo poo gets fucky then.
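If you also need the zero rows (the Test - 0 line in opie's example), driving the command from TableC with outer joins gets you there; a sketch, assuming TableB.Assignment holds the same values as TableC.Code, with 'Sales' as a placeholder for the division parameter:

code:
SELECT tablec.code,
       count(tablea.pkfield) AS total
FROM   tablec
       LEFT JOIN tableb ON tableb.assignment = tablec.code
       LEFT JOIN tablea ON tablea.pkfield = tableb.pkfield
                       AND tablea.division = 'Sales'
WHERE  tablec.codetype = 'E'
GROUP BY tablec.code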

Aredna
Mar 17, 2007
Nap Ghost

Goat Bastard posted:

Good advice. Just as an FYI, using row_number() OVER (PARTITION BY [CHECKOUT_HISTORY].[file_id] ORDER BY checkout_datetime) instead of rank() will also eliminate that corner case you talked about, without needing to add another column to the ORDER BY.

This is probably because I haven't looked into it, but I tend to use rank() because I know it's deterministic. Is row_number() always going to give you the same result on the same subset of data when corner cases like this pop up? I know rank() will give me poo poo results, forcing me to review the script when I miss a corner case, but I worry that row_number() would work and then give me a different result on two runs if the data in my table changes.

Goat Bastard
Oct 20, 2004

Aredna posted:

This is probably because I haven't looked into it, but I tend to use rank() because I know it's deterministic. Is row_number() always going to give you the same result on the same subset of data when corner cases like this pop up? I know rank() will give me poo poo results, forcing me to review the script when I miss a corner case, but I worry that row_number() would work and then give me a different result on two runs if the data in my table changes.

Good point. I don't actually know, but I suspect that it would be undefined.
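The safe habit either way is to make the ORDER BY a total order; with a unique tiebreaker both functions are deterministic and hand back the same numbering on every run:

code:
row_number() OVER (PARTITION BY file_id
                   ORDER BY checkout_datetime, checkout_history_id)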

The Collector
Aug 9, 2011

I started learning SQL a while back by using this site http://sqlzoo.net/ (among other things). I had to stop for a bit, and I'm starting to pick it up again. Are there any similar/better sites out there, or is this my best bet?

Pvt. Public
Sep 9, 2004

I am become Death, the Destroyer of Worlds.
First off, let me say I know dick about SQL programming and I'm in a major bind, so I don't have time to learn my way up to building this query. I'm planning on learning SQL so that I can handle tasks like this in the very near future, but I'm under the gun and the only one in my department this week so I've got no time to work my way into this solution in the time I need it done. Any help is massively appreciated and I don't normally just ask to be given an answer, but I'm hoping you guys will be my saviors.

I need to build a query to pull data from two tables based on a list output by filtering a 3rd table's data and joining the data from the first two tables into unique rows all based on the same field data. Let me try and explain a bit better:

I have tables 1, 2 and 3. In the tables, there is a field called login, and all tables have values that match this field between them. As in all three tables have a record that has the value of bob for login. In 1, there is a field called type. In 2, there are fields called data and score. And in 3, there are fields called demo and state. I need to filter table 1 by type with a value of 0000 to 0009 (basically, 000%). I then need to apply the output of that filter against tables 2 and 3, thereby filtering those two tables and get the output so that I see loginID|data|score|demo|state that I can export to a file.

I already have the filter for table 1 built, but I need to know how to apply that output against the other two tables and then join both outputs into unique rows, based on login value, that has the other 4 fields. Or should I join table 2 and 3 then filter based on login values present in table 1?

Hopefully, I've made that clear enough. Any help is greatly appreciated as I've got no clue how to go about this. Thanks.

Pvt. Public fucked around with this message at 04:35 on Aug 2, 2012

TransatlanticFoe
Mar 1, 2003

Hell Gem
Is there any equivalent for IN that uses LIKE? Specifically, I have two tables, a Company table with a Company_Name column, and a Security table with a foreign key using Company_Id and a Security_Symbol column. I'm doing something like:
code:
select concat(s.Security_Symbol, '%') from Company c left join Security s on c.Company_Id = s.Company_Id where c.Company_Name like '%SOME_STRING_HERE%'
Is there some way I can use the results of that (e.g. 'ABC%', 'DEF%', etc.) to use as a subquery so I can find all of the Security entries where the symbols start with those results?

Goat Bastard
Oct 20, 2004

Pvt. Public posted:

First off, let me say I know dick about SQL programming and I'm in a major bind, so I don't have time to learn my way up to building this query. I'm planning on learning SQL so that I can handle tasks like this in the very near future, but I'm under the gun and the only one in my department this week so I've got no time to work my way into this solution in the time I need it done. Any help is massively appreciated and I don't normally just ask to be given an answer, but I'm hoping you guys will be my saviors.

I need to build a query to pull data from two tables based on a list output by filtering a 3rd table's data and joining the data from the first two tables into unique rows all based on the same field data. Let me try and explain a bit better:

I have tables 1, 2 and 3. In the tables, there is a field called login, and all tables have values that match this field between them. As in all three tables have a record that has the value of bob for login. In 1, there is a field called type. In 2, there are fields called data and score. And in 3, there are fields called demo and state. I need to filter table 1 by type with a value of 0000 to 0009 (basically, 000%). I then need to apply the output of that filter against tables 2 and 3, thereby filtering those two tables and get the output so that I see loginID|data|score|demo|state that I can export to a file.

I already have the filter for table 1 built, but I need to know how to apply that output against the other two tables and then join both outputs into unique rows, based on login value, that has the other 4 fields. Or should I join table 2 and 3 then filter based on login values present in table 1?

Hopefully, I've made that clear enough. Any help is greatly appreciated as I've got no clue how to go about this. Thanks.

SQL code:
SELECT t1.login
      ,t2.data
      ,t2.score
      ,t3.demo
      ,t3.state
FROM   t1
JOIN   t2 ON t2.login = t1.login
JOIN   t3 ON t3.login = t1.login
WHERE  t1.type LIKE '000_' -- or whatever your t1 filter is
Change the JOINs to LEFT JOIN if there might not be a record for that login in t2 or t3 and you still want that login to be included in your results

McGlockenshire
Dec 16, 2005

GOLLOCKS!

Pvt. Public posted:

I need to build a query to pull data from two tables based on a list output by filtering a 3rd table's data and joining the data from the first two tables into unique rows all based on the same field data.

I'm not sure I fully understand your problem, but I'll take a stab at it anyway. Numbers are confusing, so let's rename your three tables Alpha, Beta and Gamma. They look something like:

code:
CREATE TABLE Alpha (
    login TEXT NOT NULL UNIQUE,
    type TEXT
);
CREATE TABLE Beta (
    login TEXT NOT NULL UNIQUE,
    data TEXT,
    score INTEGER
);
CREATE TABLE Gamma (
    login TEXT NOT NULL UNIQUE,
    demo BOOLEAN,
    state TEXT
);
Ignoring the column types, is this correct? In particular, your description implies that there may be only one row per login per table here, so I've applied a UNIQUE constraint. So, let's begin putting it together.

code:
SELECT Alpha.login, Alpha.type
  FROM Alpha
 WHERE 1 = 1 
   AND ...
This is the base query. Time to join the tables! Given that we only want results to appear when a matching row exists on both sides of the join, we'll pick an INNER JOIN:

code:
SELECT Alpha.login, Alpha.type, Beta.data, Beta.score
  FROM Alpha
 INNER JOIN Beta USING(login)
 WHERE 1 = 1
   AND ...
The USING clause here takes a single column name which must be shared between the tables being joined. You could also say ON(Alpha.login = Beta.login) instead, in case your database doesn't understand USING.

Adding the third table is a logical extension:

code:
SELECT Alpha.login, Alpha.type, Beta.data, Beta.score, Gamma.demo, Gamma.state
  FROM Alpha
 INNER JOIN Beta USING(login)
 INNER JOIN Gamma USING(login)
 WHERE 1 = 1
   AND ...
Again, you could say ON(Alpha.login = Gamma.login) here.

Now, this is going to work perfectly as long as there's only one row per login per table. If there are multiple rows for a login in any of the tables, you're going to have a problem. Let's say that there's one record in Alpha, two in Beta, and one in Gamma for a single login. You're going to end up with two rows in your result set. The first will have the data from the only row in Alpha, the first row in Beta, and the only row in Gamma. The second will have the only row in Alpha, the second row in Beta, and the only row in Gamma.

That's not a disaster, but what if there are two rows in Gamma as well? Now you're going to get four.

code:
1 = A1, B1, G1
2 = A1, B2, G1
3 = A1, B1, G2
4 = A1, B2, G2
Suddenly you're getting data repeating itself, in a way that isn't going to be clear or obvious. This is probably NOT what you want, and it's going to take more effort to solve this problem.

Has this been helpful?

Pvt. Public
Sep 9, 2004

I am become Death, the Destroyer of Worlds.
Thank you both, Goat Bastard and McGlockenshire! I will give these a shot and see what I get. I truly appreciate it.

Sub Par
Jul 18, 2001


Dinosaur Gum

TransatlanticFoe posted:

Is there any equivalent for IN that uses LIKE? Specifically, I have two tables, a Company table with a Company_Name column, and a Security table with a foreign key using Company_Id and a Security_Symbol column. I'm doing something like:
code:
select concat(s.Security_Symbol, '%') 
from Company c 
	left join Security s 
on 
	c.Company_Id = s.Company_Id 
where c.Company_Name like '%SOME_STRING_HERE%'
Is there some way I can use the results of that (e.g. 'ABC%', 'DEF%', etc.) to use as a subquery so I can find all of the Security entries where the symbols start with those results?

If your results are always three characters you could do
code:
select Junk
from FakeTable
where substring(security_symbol,1,3) in (
	select s.Security_Symbol 
	from Company c 
	left join 
		Security s 
	on 
		c.Company_Id = s.Company_Id 
	where c.Company_Name like '%SOME_STRING_HERE%')
Another option would be to define a function that takes the characters and spits back matches based on wildcard-appended like queries. Or if you're on Oracle you could use your select statement to build an ugly regex and use regexp_like.

Aredna
Mar 17, 2007
Nap Ghost

TransatlanticFoe posted:

Is there any equivalent for IN that uses LIKE? Specifically, I have two tables, a Company table with a Company_Name column, and a Security table with a foreign key using Company_Id and a Security_Symbol column. I'm doing something like:
code:
select concat(s.Security_Symbol, '%') from Company c left join Security s on c.Company_Id = s.Company_Id where c.Company_Name like '%SOME_STRING_HERE%'
Is there some way I can use the results of that (e.g. 'ABC%', 'DEF%', etc.) to use as a subquery so I can find all of the Security entries where the symbols start with those results?

I've done something similar by putting the results in a temp table or CTE and, when I'm selecting SOME_STRING_HERE, appending '%' to the beginning and end. Then you just join on LIKE instead of =.
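A sketch of that shape, assuming a dialect with CTEs and concat() (swap the CTE for a temp table otherwise), reusing the tables from TransatlanticFoe's post:

code:
with prefixes as (
    select concat(s.Security_Symbol, '%') as sym_prefix
    from   Company c
           inner join Security s on c.Company_Id = s.Company_Id
    where  c.Company_Name like '%SOME_STRING_HERE%'
)
select sec.*
from   Security sec
       inner join prefixes p on sec.Security_Symbol like p.sym_prefix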

Pvt. Public
Sep 9, 2004

I am become Death, the Destroyer of Worlds.

Goat Bastard posted:

SQL code:
SELECT t1.login
      ,t2.data
      ,t2.score
      ,t3.demo
      ,t3.state
FROM   t1
JOIN   t2 ON t2.login = t1.login
JOIN   t3 ON t3.login = t1.login
WHERE  t1.type LIKE '000_' -- or whatever your t1 filter is
Change the JOINs to LEFT JOIN if there might not be a record for that login in t2 or t3 and you still want that login to be included in your results

Thanks Goat Bastard, this worked perfectly! I appreciate the help. :)


shelper
Nov 10, 2005

Something's still wrong with this code

Sub Par posted:

If your results are always three characters you could do
code:
select Junk
from FakeTable
where substring(security_symbol,1,3) in (
	select s.Security_Symbol 
	from Company c 
	left join 
		Security s 
	on 
		c.Company_Id = s.Company_Id 
	where c.Company_Name like '%SOME_STRING_HERE%')
Another option would be to define a function that takes the characters and spits back matches based on wildcard-appended like queries. Or if you're on Oracle you could use your select statement to build an ugly regex and use regexp_like.

Couldn't you get around the character limit by using substring(security_symbol,1,length(security_symbol))?
