Thel
Apr 28, 2010

Sub Par posted:

Tested in SQL Server 2008. Not sure if you can do this in MySQL or not, but I know it will also work in Oracle.

This assumes that a student can only have one teacher per subject. On further review I think this would work out fine. It also assumes there aren't nulls anywhere. Hopefully it's helpful.

I'll test this later tonight, but I'm 99% sure it will count print records multiple times (each print record will be included for each class in the same subject the student is in). E.g. if you have a student taking ACCT101 and ACCT102 classes concurrently, and the student prints four pages for the 'ACCT' subject, how are you going to know which class it's for?

(Also, I have a suspicion that the 'subject' in the printlog is actually the student's major subject, not the subject for the current printlog. This is only based on experience with terrible schemas pulling wrong-rear end data. :v:)

DoctorScurvy
Nov 11, 2005
More of a passing curiosity, really

Sub Par posted:

Tested in SQL Server 2008. Not sure if you can do this in MySQL or not, but I know it will also work in Oracle.

This assumes that a student can only have one teacher per subject. On further review I think this would work out fine. It also assumes there aren't nulls anywhere. Hopefully it's helpful.
Thanks a lot for your attempt. I wasn't able to get it to work, however; somewhere along the line it was picking up "all students who have both this subject and this teacher" or something, and it was messy. I ended up with this abomination with a temporary table, which works perfectly:

code:
CREATE TABLE #StudentClasses (
		StudentCode char(10),
		ClassCode char(20),
		SubjectCode char(20),
		TeacherCode char(20)
	);

INSERT INTO #StudentClasses (StudentCode, ClassCode)
		SELECT StudentCode, Classcode FROM enrolments;

SELECT classcode, subjectcode, teachercode FROM classes;

-- (looping through the results of the SELECT above in PHP,
--  running this statement for each row, with values interpolated from $data:)
UPDATE #StudentClasses
		SET teachercode = '".$data['teachercode']."',
		    subjectcode = '".$data['subjectcode']."'
		WHERE classcode = '".$data['classcode']."';

SELECT teachercode, count(*) AS num FROM #StudentClasses
				INNER JOIN reportcycleprintlog
				ON #StudentClasses.subjectcode = reportcycleprintlog.subjectcode
				AND #StudentClasses.studentcode = reportcycleprintlog.studentcode
				GROUP BY teachercode
				ORDER BY num DESC
I'm well aware that some of that could be replaced by JOIN statements, but it's not a public server and I'm the only person who uses or accesses it, so I don't need to worry about efficiency. At any rate it only takes a few seconds to run.
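
In fact the whole PHP loop could probably be collapsed into a single set-based UPDATE. A minimal, untested sketch (SQL Server's UPDATE ... FROM syntax, same tables as above):
code:
-- untested sketch: one statement doing what the SELECT + PHP loop above does
UPDATE sc
SET sc.SubjectCode = c.subjectcode,
    sc.TeacherCode = c.teachercode
FROM #StudentClasses sc
INNER JOIN classes c
    ON c.classcode = sc.ClassCode;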

Thel posted:

(Also, I have a suspicion that the 'subject' in the printlog is actually the student's major subject, not the subject for the current printlog. This is only based on experience with terrible schemas pulling wrong-rear end data. :v:)
It's for a report printing system. Printing a student's report produces one printlog entry per subject the student is enrolled in. Each student is always enrolled in multiple subjects, none of which takes precedence over the others.

DoctorScurvy fucked around with this message at 09:04 on Jun 23, 2011

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

What can I do about slow MySQL importing?

I dump from server A with mysqldump. It takes about 2 minutes to give me a 2.1GB file. Then I send it over to server B and import it into that server (starting fresh with a new DB), and it takes 15 minutes!

I just doubled the RAM in the VM from 3GB to 6GB because it was using it all. It didn't make the import any faster. Well, it made it about 30 seconds faster, but we're talking 15 minutes total, so it wasn't much of an improvement. I was hoping the solution would be 'add more cheap RAM'.

Are inserts/imports just that much more expensive to run? I'm using the stock configuration for MySQL (CentOS 5.6) and it's 64-bit on a Core i7 950.

Here's what the CPU/disk/memory usage graphs look like:

Bob Morales fucked around with this message at 13:57 on Jun 23, 2011

epswing
Nov 4, 2003

Soiled Meat

Bob Morales posted:

Are inserts/imports just that much more expensive to run?

Without knowing much about MySQL internals, reads are usually going to be faster than writes, so I'd expect importing to take longer than exporting.

(Maybe not that much longer though...)

Bob Morales
Aug 18, 2006


epswing posted:

Without knowing much about MySQL internals, reads are usually going to be faster than writes, so I'd expect importing to take longer than exporting.

(Maybe not that much longer though...)

I could understand double or triple, but 8x is a big jump. I would be happy with getting it down to 5 minutes.

benisntfunny
Dec 2, 2004
I'm Perfect.

Bob Morales posted:

What can I do about slow MySQL importing?

I dump from server A with mysqldump. It takes about 2 minutes to give me a 2.1GB file. Then I send it over to server B and import it into that server (starting fresh with a new DB), and it takes 15 minutes!


Could you set up replication to the other database?
http://dev.mysql.com/doc/refman/5.1/en/replication.html

Even if you didn't always want data in server B, you could set up a different schema and then just load locally from B to B to populate data when you wanted it.

Bob Morales
Aug 18, 2006


benisntfunny posted:

Could you set up replication to the other database?
http://dev.mysql.com/doc/refman/5.1/en/replication.html

Even if you didn't always want data in server B, you could set up a different schema and then just load locally from B to B to populate data when you wanted it.

I am under the assumption that if we used replication, once we wrote to the database on server B, we'd have to delete it and start over from a new dump in order to get back in sync.

The idea is to use this as a test with real production data instead of the data in our staging/development servers.

Server A is actually a local replication clone of our production DB.

stuph
Aug 31, 2004

donkey punching my way to the top

Bob Morales posted:

I could understand double or triple, but 8x is a big jump. I would be happy with getting it down to 5 minutes.

Are these tables Inno or MyISAM? Inno's always going to be slower, and inserts are much more expensive than reads. Index recreation is also a really expensive task.

Some of this thread may be helpful - http://stackoverflow.com/questions/67117/speeding-up-mysql-dumps-and-imports

Bob Morales
Aug 18, 2006


stuph posted:

Are these tables Inno or MyISAM? Inno's always going to be slower, and inserts are much more expensive than reads. Index recreation is also a really expensive task.

Some of this thread may be helpful - http://stackoverflow.com/questions/67117/speeding-up-mysql-dumps-and-imports

Inno. I am looking through High Performance MySQL now.

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe
You might consider disabling all of the constraints and indexes on the tables, then enabling them after the data has been imported. Another option might be to disable fsync while you import the data and then enable it afterwards.
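
In MySQL terms that's roughly the following untested sketch (the fsync part maps to innodb_flush_log_at_trx_commit; dump.sql is a stand-in for your dump file):
code:
-- untested sketch: run in the mysql client around the import
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- relax fsync-per-commit

SOURCE dump.sql

-- put everything back afterwards
SET unique_checks = 1;
SET foreign_key_checks = 1;
SET GLOBAL innodb_flush_log_at_trx_commit = 1;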

Kilson
Jan 16, 2003

I EAT LITTLE CHILDREN FOR BREAKFAST !!11!!1!!!!111!
We're trying to set up Bucardo to do some Postgres replication. It seems to work fine for tables, but we can't get it to synchronize sequences at all, even though sequences have been added to the sync and everything seems OK. This is a problem, because

Does the idea of replicating sequences not mean what we think it means, or are there other things we need to set up/run to make this work?

benisntfunny
Dec 2, 2004
I'm Perfect.

Bob Morales posted:

I am under the assumption that if we used replication, once we wrote to the database on server B, we'd have to delete it and start over from a new dump in order to get back in sync.

The idea is to use this as a test with real production data instead of the data in our staging/development servers.

Server A is actually a local replication clone of our production DB.

Here's what I'm suggesting... and I'm not sure how many places you can replicate to in MySQL, so it may not work.

Keep your replication to Server A
Replicate another copy to Server B in a new schema
Limit the ID in Server B to read-only on that new schema

Delete your current test tables and repopulate them with data from the new schema (a fully replicated copy of prod) as needed. No reason you'd ever have to delete anything from the replicated tables themselves.
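
The read-only part is just grants. A rough, untested sketch (the schema, table, and user names here are made up):
code:
-- on Server B: the mirror schema arrives via replication
-- (my.cnf: replicate-rewrite-db = proddb->prod_mirror)
-- and the test login only ever gets SELECT on the mirror
GRANT SELECT ON prod_mirror.* TO 'testapp'@'localhost';

-- repopulating the writable test schema is then a local copy
INSERT INTO testdb.orders SELECT * FROM prod_mirror.orders;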

benisntfunny fucked around with this message at 02:24 on Jun 24, 2011

Bob Morales
Aug 18, 2006


I got it down to 8 minutes, but apparently that's not fast enough. That's one of our biggest databases; our other ones take 20 seconds to dump and a minute and 20 seconds to import.

I'm going to try using this:

http://code.google.com/p/sqlyog/

or maatkit to see if it's faster to re-sync our data with the production master after we've made changes, or if it's faster to just dump/reload.

epswing
Nov 4, 2003

Soiled Meat

I've used sqlyog for the last couple years. It's just a MySQL Windows GUI client. Not sure why you'd think it would do anything faster than the mysql shell program, unless you're aiming for some feature I'm not aware of...

benisntfunny
Dec 2, 2004
I'm Perfect.

Bob Morales posted:

I got it down to 8 minutes, but apparently that's not fast enough. That's one of our biggest databases; our other ones take 20 seconds to dump and a minute and 20 seconds to import.

I'm going to try using this:

http://code.google.com/p/sqlyog/

or maatkit to see if it's faster to re-sync our data with the production master after we've made changes, or if it's faster to just dump/reload.

So you're going to rule out the option I gave you, the one that would let you copy the data over locally? That's without a doubt going to be your fastest option. We typically copy about 4-5GB of data using a similar method in under a minute.

How are you defining the DB as big? Also, if the issue is that 8 minutes is too long because you're locking the tables during that time, and you don't want to do the replication method I suggested, I would at least dump the data into an alternative schema pre-load, then load from those tables locally to reduce locking time.

edit: also - if you have any DB auditing-type columns in your data, you could easily do a compare to only grab the data you need and update existing information where the modified date is greater than the current modified date.
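
Something like this untested sketch, assuming a hypothetical orders table with a primary key and an updated_at audit column:
code:
-- untested sketch: pull only rows changed since the last sync
SET @last_sync = (SELECT COALESCE(MAX(updated_at), '1970-01-01')
                  FROM testdb.orders);

INSERT INTO testdb.orders (id, status, updated_at)
SELECT id, status, updated_at
FROM prod_mirror.orders
WHERE updated_at > @last_sync
ON DUPLICATE KEY UPDATE
    status     = VALUES(status),
    updated_at = VALUES(updated_at);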

benisntfunny fucked around with this message at 02:28 on Jun 24, 2011

Sprawl
Nov 21, 2005


I'm a huge retarded sperglord who can't spell, but Starfleet Dental would still take me and I love them for it!
The mysqldump and import is as fast as you're gonna get.

http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html

McGlockenshire
Dec 16, 2005

GOLLOCKS!
You have tuned my.cnf for your server specs, right?
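
The usual suspects for an import-heavy box, as an illustrative sketch (values are made up; size them to your own hardware):
code:
# my.cnf sketch - illustrative values only
[mysqld]
innodb_buffer_pool_size        = 4G
innodb_log_file_size           = 256M  # bigger redo log = fewer checkpoint stalls
innodb_flush_log_at_trx_commit = 2     # relax fsync-per-commit; fine for a test box
key_buffer_size                = 256M  # only matters for MyISAM indexes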

Bob Morales
Aug 18, 2006


benisntfunny posted:

So you're going to rule out the option I gave you, the one that would let you copy the data over locally? That's without a doubt going to be your fastest option. We typically copy about 4-5GB of data using a similar method in under a minute.

How are you defining the DB as big? Also, if the issue is that 8 minutes is too long because you're locking the tables during that time, and you don't want to do the replication method I suggested, I would at least dump the data into an alternative schema pre-load, then load from those tables locally to reduce locking time.

I really don't understand what you were saying to do. 8 minutes is only too long because someone else said so.

Basically we have a production server, and now we have a local replication mirror. What I am trying to do is set up a test server, which will allow you to take a snapshot of a database from the mirror and actually do stuff with it. We have a clone of our webserver set up as well. Basically so we can see changes with real data instead of on our staging or development servers. But once we start testing the app, the data is going to change of course.

So when we're done, we'll drop the DB, and when we're going to test out a new feature, we just grab a new dump from the mirror so we have current data.

McGlockenshire posted:

You have tuned my.cnf for your server specs, right?

Yes, they are no longer the default values. Are they the best? Maybe not.

epswing posted:

I've used sqlyog for the last couple years. It's just a MySQL Windows GUI client. Not sure why you'd think it would do anything faster than the mysql shell program, unless you're aiming for some feature I'm not aware of...

They have a sync tool for Linux; I'm going to see if it's faster to re-sync my modified DB to the current copy on server A rather than doing a dump/import.

MrBishop
Sep 30, 2006

I see what you did there...

Soiled Meat
I'm trying to solve what I thought would be a simple problem, but feel like I've gone way out of my way to achieve my goal.

I have a table for tracking death loss in farm systems, and want to report it by week. No problem so far, that's pretty easy. The problem is that when there are zero deads to report, nothing goes into the database, so I end up with weeks missing data. This is easy to fix for the counts, by just doing an outer join with a list of numbers, so I get all my weeks displayed even when there's no report.

The hangup comes when I want to show a cumulative total for the entire period, and have the cumulative show even on weeks where there's nothing reported. The cumulative is already calculated when it goes into the table; I just need to carry it over into the "empty" weeks.

My solution was to join the table with itself, group by the appropriate columns, and select the MAX of the cumulative total column. I feel like I may be making it harder than it needs to be, so I'd appreciate any opinions. Here's what I ended up with (MS SQL, btw):

code:
SELECT t1.Center, t1.[Group Id], t1.[WeekNumber], t1.Deads, t2.[Move Ins],
    MAX(CAST(ISNULL(t2.[Total Dead], 0) AS Int)) AS [Cum. Deads],
    CASE t2.[Move Ins]
        WHEN '0' THEN 0.0
        ELSE CAST(ISNULL(t1.[Deads], 0) AS Float)/CAST(t2.[Move Ins] AS Float)
    END AS Mortality,
    CASE t2.[Move Ins]
        WHEN '0' THEN 0.0
        ELSE MAX(CAST(ISNULL(t2.[Total Dead], 0) AS Float))/CAST(t2.[Move Ins] AS Float)
    END AS [Cum. Mortality]
FROM t1 LEFT JOIN t2 ON (t1.[Group Id]=t2.[Group Id] AND t1.WeekNumber>=t2.WeekNumber)
GROUP BY t1.Center, t1.[Group Id], t1.[WeekNumber], t1.Deads,
    t2.[Move Ins]
ORDER BY 2, 3

benisntfunny
Dec 2, 2004
I'm Perfect.

Bob Morales posted:

I really don't understand what you were saying to do. 8 minutes is only too long because someone else said so.

Basically we have a production server, and now we have a local replication mirror. What I am trying to do is set up a test server, which will allow you to take a snapshot of a database from the mirror and actually do stuff with it. We have a clone of our webserver set up as well. Basically so we can see changes with real data instead of on our staging or development servers. But once we start testing the app, the data is going to change of course.


I hope someone else can chime in and tell me I'm not making sense.

Here's a picture of what I'm trying to explain. I fully understand what you want to do. Stop getting loving snippy; it's not our fault your testing is super time-sensitive and can't be executed through standard load processes.

benisntfunny
Dec 2, 2004
I'm Perfect.

MrBishop posted:

code:
SELECT t1.Center, t1.[Group Id], t1.[WeekNumber], t1.Deads, t2.[Move Ins],
    MAX(CAST(ISNULL(t2.[Total Dead], 0) AS Int)) AS [Cum. Deads],
    CASE t2.[Move Ins]
        WHEN '0' THEN 0.0
        ELSE CAST(ISNULL(t1.[Deads], 0) AS Float)/CAST(t2.[Move Ins] AS Float)
    END AS Mortality,
    CASE t2.[Move Ins]
        WHEN '0' THEN 0.0
        ELSE MAX(CAST(ISNULL(t2.[Total Dead], 0) AS Float))/CAST(t2.[Move Ins] AS Float)
    END AS [Cum. Mortality]
FROM t1 LEFT JOIN t2 ON (t1.[Group Id]=t2.[Group Id] AND t1.WeekNumber>=t2.WeekNumber)
GROUP BY t1.Center, t1.[Group Id], t1.[WeekNumber], t1.Deads,
    t2.[Move Ins]
ORDER BY 2, 3
Create a temp table with all the dates - right join them into your query
edit:
Copy and paste this to see what I'm saying. Also, I updated what I originally said to do.
code:
DECLARE @FakeResults TABLE
(
    WeekNumber INT
)
DECLARE @FakeResults2 TABLE
(
    WeekNumber INT,
    DeadShit   INT
)
INSERT INTO @FakeResults VALUES (1)
INSERT INTO @FakeResults VALUES (2)
INSERT INTO @FakeResults VALUES (3)
INSERT INTO @FakeResults VALUES (4)
INSERT INTO @FakeResults VALUES (5)
INSERT INTO @FakeResults VALUES (6)
INSERT INTO @FakeResults VALUES (7)

INSERT INTO @FakeResults2 VALUES (1,92031)
INSERT INTO @FakeResults2 VALUES (3,2)
INSERT INTO @FakeResults2 VALUES (5,41)
INSERT INTO @FakeResults2 VALUES (6,9)

SELECT ISNULL(fr2.WeekNumber,fr1.WeekNumber),
       ISNULL(fr2.DeadShit,0)
FROM @FakeResults2 fr2
RIGHT JOIN @FakeResults fr1
	ON fr2.WeekNumber = fr1.Weeknumber
	
Obviously you'll probably want to dynamically load all your week numbers into that table. There are ways you can do this:
http://www.kodyaz.com/articles/sql-server-dates-table-using-tsql-cte-calendar-table.aspx

benisntfunny fucked around with this message at 21:12 on Jun 24, 2011

MrBishop
Sep 30, 2006

I see what you did there...

Soiled Meat

benisntfunny posted:

Create a temp table with all the dates - right join them into your query
edit:
Copy and paste this to see what I'm saying. Also, I updated what I originally said to do.

<snip>


Thanks for following up, but maybe I didn't explain well enough where the problem lies: it's in having a running total that keeps carrying forward (accumulating 0) through the missing weeks. If I'm reading your example correctly, it provides a way to have contiguous week numbers when there's missing data, but I handled that with a FULL OUTER join between the DeadShit table and a WeekNumbers table. Like I said, I finally got a solution on my own; I was just wondering if there was a more elegant/efficient one.

What I started with is:
code:
WeekNumber|Deads|Cum. Deads
---------------------------
1         |NULL |NULL
2         |3    |3
3         |NULL |NULL
4         |2    |5
What I want is:
code:
WeekNumber|Deads|Cum. Deads
---------------------------
1         |0    |0
2         |3    |3
3         |0    |3
4         |2    |5

benisntfunny
Dec 2, 2004
I'm Perfect.

MrBishop posted:

Thanks for following up, but maybe I didn't explain well enough where the problem lies: it's in having a running total that keeps carrying forward (accumulating 0) through the missing weeks. What I started with is:
code:
WeekNumber|Deads|Cum. Deads
---------------------------
1         |NULL |NULL
2         |3    |3
3         |NULL |NULL
4         |2    |5

Ah, sorry. I actually think you've probably done exactly what I would do too. I just had trouble understanding you at first.

Aredna
Mar 17, 2007
Nap Ghost
You can also generate a list of weeks with a recursive CTE if the version of MS SQL you are using is new enough (2005 or newer I think).
code:
with dates as
(
   select cast('2000-01-01' as date) as startweek
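   -- note: the date type needs SQL Server 2008; on 2005, cast to datetime instead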
   union all
   select dateadd(d,7,startweek)
   from dates
   where startweek <= '2011-12-31'
)
select *
from dates
option (maxrecursion 1000)
The option at the end is because the default is only 100 levels deep, which will cause you to error out if you ever have more than 100 weeks.

If you prefer week numbers, it's easy enough to modify the script above.
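
For instance, a minimal untested sketch of that modification:
code:
with weeks as
(
   select 1 as weeknumber
   union all
   select weeknumber + 1
   from weeks
   where weeknumber < 52
)
select *
from weeks
option (maxrecursion 1000) -- as above, only needed past 100 weeks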

Bob Morales
Aug 18, 2006


benisntfunny posted:

I hope someone else can chime in and tell me I'm not making sense.

Here's a picture of what I'm trying to explain. I fully understand what you want to do. Stop getting loving snippy; it's not our fault your testing is super time-sensitive and can't be executed through standard load processes.


Snippy?

Are you saying to have two databases on the test server? If I was going to do that, I could just keep two databases on the replication slave.

benisntfunny
Dec 2, 2004
I'm Perfect.

Bob Morales posted:

Snippy?

Are you saying to have two databases on the test server? If I was going to do that, I could just keep two databases on the replication slave.

The point of having it replicate to two places is that you keep your prod replicated, but you can locally access the data in test, which is read-only, so you can't really do any harm to it.

Bob Morales
Aug 18, 2006


benisntfunny posted:

The point of having it replicate to two places is that you keep your prod replicated, but you can locally access the data in test, which is read-only, so you can't really do any harm to it.

Here's what I have now:

Production DB ---Internet--- Slave A ----- Test B

We're going to do all our work with B; we just want to be able to re-sync it back up to match A the fastest way possible. A is read-only.

Lyon
Apr 17, 2003
This is most likely an absolutely retarded question, but I'm trying to set up a query I can modify to pull info from our accounting software to Excel.

I'm dealing with about three years of customer data, so I want to be able to select the item codes and customer before I initiate the query, rather than wait 10 years for the system to pull up all my info.


SELECT AR_InvoiceHistoryDetail.InvoiceNo, AR_InvoiceHistoryHeader.ARDivisionNo, AR_InvoiceHistoryHeader.BillToName, AR_InvoiceHistoryHeader.CustomerNo, AR_InvoiceHistoryHeader.InvoiceDate, AR_InvoiceHistoryHeader.TransactionDate, AR_InvoiceHistoryHeader.ShipToState, AR_InvoiceHistoryHeader.ShipToCountryCode, AR_InvoiceHistoryDetail.HeaderSeqNo, AR_InvoiceHistoryDetail.DetailSeqNo, AR_InvoiceHistoryDetail.ItemCode, AR_InvoiceHistoryDetail.ItemType, AR_InvoiceHistoryDetail.ItemCodeDesc, AR_InvoiceHistoryDetail.ExtendedDescriptionKey, AR_InvoiceHistoryDetail.SalesAcctKey, AR_InvoiceHistoryDetail.CostOfGoodsSoldAcctKey, AR_InvoiceHistoryDetail.InventoryAcctKey, AR_InvoiceHistoryDetail.UnitOfMeasure, AR_InvoiceHistoryDetail.SubjectToExemption, AR_InvoiceHistoryDetail.Commissionable, AR_InvoiceHistoryDetail.TaxClass, AR_InvoiceHistoryDetail.Discount, AR_InvoiceHistoryDetail.DropShip, AR_InvoiceHistoryDetail.WarehouseCode, AR_InvoiceHistoryDetail.PriceLevel, AR_InvoiceHistoryDetail.ProductLine,

FROM AR_InvoiceHistoryDetail AR_InvoiceHistoryDetail, AR_InvoiceHistoryHeader AR_InvoiceHistoryHeader

WHERE AR_InvoiceHistoryHeader.HeaderSeqNo = AR_InvoiceHistoryDetail.HeaderSeqNo AND AR_InvoiceHistoryHeader.InvoiceNo = AR_InvoiceHistoryDetail.InvoiceNo AND ((AR_InvoiceHistoryHeader.TransactionDate Between {d '2009-01-01'} And {d '2010-12-31'}))

I want to add an AND AR_InvoiceHistoryHeader.BillToName = and an AR_InvoiceHistoryDetail.ItemCode BETWEEN to this query. I tried following their format and I tried googling a bit, but I haven't had any luck.

Sprawl
Nov 21, 2005


What is this, a Crystal Report? Or what kind of SQL?

Lyon
Apr 17, 2003
That seems very possible based on my googling. It is an Excel query to MAS90 (from Sage), which is our finance system.

Sprawl
Nov 21, 2005


Okay, well, a Crystal Report's record selection formula is just like SQL, so what has you tripped up exactly? I did see an errant comma at the end of the fields there.

code:
SELECT AR_InvoiceHistoryDetail.InvoiceNo,
AR_InvoiceHistoryHeader.ARDivisionNo,
AR_InvoiceHistoryHeader.BillToName, 
AR_InvoiceHistoryHeader.CustomerNo, 
AR_InvoiceHistoryHeader.InvoiceDate, 
AR_InvoiceHistoryHeader.TransactionDate, 
AR_InvoiceHistoryHeader.ShipToState, 
AR_InvoiceHistoryHeader.ShipToCountryCode, 
AR_InvoiceHistoryDetail.HeaderSeqNo, 
AR_InvoiceHistoryDetail.DetailSeqNo, 
AR_InvoiceHistoryDetail.ItemCode, 
AR_InvoiceHistoryDetail.ItemType, 
AR_InvoiceHistoryDetail.ItemCodeDesc, 
AR_InvoiceHistoryDetail.ExtendedDescriptionKey, 
AR_InvoiceHistoryDetail.SalesAcctKey, 
AR_InvoiceHistoryDetail.CostOfGoodsSoldAcctKey, 
AR_InvoiceHistoryDetail.InventoryAcctKey, 
AR_InvoiceHistoryDetail.UnitOfMeasure, 
AR_InvoiceHistoryDetail.SubjectToExemption, 
AR_InvoiceHistoryDetail.Commissionable, 
AR_InvoiceHistoryDetail.TaxClass, 
AR_InvoiceHistoryDetail.Discount, 
AR_InvoiceHistoryDetail.DropShip, 
AR_InvoiceHistoryDetail.WarehouseCode, 
AR_InvoiceHistoryDetail.PriceLevel, 
AR_InvoiceHistoryDetail.ProductLine

FROM AR_InvoiceHistoryDetail AR_InvoiceHistoryDetail, AR_InvoiceHistoryHeader AR_InvoiceHistoryHeader

WHERE 
AR_InvoiceHistoryHeader.HeaderSeqNo = AR_InvoiceHistoryDetail.HeaderSeqNo
AND AR_InvoiceHistoryHeader.InvoiceNo = AR_InvoiceHistoryDetail.InvoiceNo
AND ((AR_InvoiceHistoryHeader.TransactionDate Between {d '2009-01-01'} And {d '2010-12-31'}))
AND AR_InvoiceHistoryHeader.BillToName = "Thing"
AND AR_InvoiceHistoryDetail.ItemCode Between "first" AND "second"
Now, depending on whether those fields are number fields, string fields, or something else, that can change how you put the data into them.

If they are number-type fields you don't use any kind of quotes, just a number.

Lyon
Apr 17, 2003
Sweet, thanks. I guess I was just making it more complicated than it needed to be. All the extra (( and { were screwing with me.

Edit: What if I can't do a BETWEEN for my last item, but rather need specific item numbers? Can I do some kind of (1, 2, 3, 4), or do I need a separate line for each one?

Also, what if I want to do a LIKE operator? Is it just LIKE "%word%"? I'm getting errors for that, stupid SQL. I should probably try to learn it though...

Anyone got a recommendation for an intro to SQL/database admin?

Lyon fucked around with this message at 01:09 on Jun 29, 2011

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

You should be able to use any WHERE selection operator you'd like there. I'm actually surprised BETWEEN would work in that instance, as I thought it was restricted to numeric/date types.

So for your specific questions (choosing the last field for convenience, may not actually work depending on values/data types):
AND AR_InvoiceHistoryDetail.ItemCode IN(1,2,3,4)
If the values are strings:
AND AR_InvoiceHistoryDetail.ItemCode IN("1","2","3","4")
Using like:
AND AR_InvoiceHistoryDetail.ItemCode LIKE "%thinger%"

These should work, I'm assuming MSSQL (if so, use single quote '' instead of double quote "" for the strings). I'm leaving the double quotes in my examples because that's what Sprawl used.

The main reason LIKE comparisons fail is that the field being searched (e.g. ItemCode) is of type text or ntext. This means it's 'too big' to do LIKE comparisons on, because the field could contain a URL or an entire novel. If performance isn't a concern, and you're certain the size of the field is below a certain threshold, you can just convert to VARCHAR(MAX), e.g.

AND CONVERT(varchar(MAX),AR_InvoiceHistoryDetail.ItemCode) like "%thinger%"

I'm not even sure that's the source of your error though.

Lyon
Apr 17, 2003
This is super frustrating: it won't let me use LIKE, and it won't "compile" when I try to use your CONVERT code either.

I was hoping to make this flexible, but it seems like I'll just have to look up the exact customer # for each company and use an = instead of letting me type in names/parts of names.

Sprawl
Nov 21, 2005



Lyon posted:

This is super frustrating: it won't let me use LIKE, and it won't "compile" when I try to use your CONVERT code either.

I was hoping to make this flexible, but it seems like I'll just have to look up the exact customer # for each company and use an = instead of letting me type in names/parts of names.

Oh, you can actually use that kind of list, but in Crystal the syntax is a little different.

I use

and AR_InvoiceHistoryDetail.ItemCode in [1,2,3,4]

so try that first.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Aw crap. I didn't realize it was Crystal Reports sub-SQL; you can probably ignore most of what I said and just listen to Sprawl instead. I have no idea what's legal in Crystal or not.

Backhand
Sep 25, 2008
Hey guys. I know this is only tangentially related to SQL, but I didn't think starting a new thread just for my stupid question would be appreciated.

So one of our pages went bad at work a couple weeks ago. I finally got some time to work on it and figured out what was wrong today; after it retrieved an ADO Recordset from SQL, it was trying to reference a field with adoRS("Policy No") when it should have been adoRS("Policy_No"). The field name is most definitely Policy_No in SQL Server. I fixed it, it works now... but I didn't write this code and it had been in place for years before I touched it. And apparently it worked fine as it was, defying all logic as I know it.

Now, so long as the problem was clear and I fixed it, I don't care, but my boss is going to demand an explanation tomorrow for why it used to work and then just kind of stopped. He'll be pissed if I can't give him one he likes. Anybody know any reason off the top of their head why some clearly erroneous code like that would resolve to the correct field, and then just... stop?

Thel
Apr 28, 2010

Backhand posted:

Hey guys. I know this is only tangentially related to SQL, but I didn't think starting a new thread just for my stupid question would be appreciated.

So one of our pages went bad at work a couple weeks ago. I finally got some time to work on it and figured out what was wrong today; after it retrieved an ADO Recordset from SQL, it was trying to reference a field with adoRS("Policy No") when it should have been adoRS("Policy_No"). The field name is most definitely Policy_No in SQL Server. I fixed it, it works now... but I didn't write this code and it had been in place for years before I touched it. And apparently it worked fine as it was, defying all logic as I know it.

Now, so long as the problem was clear and I fixed it, I don't care, but my boss is going to demand an explanation tomorrow for why it used to work and then just kind of stopped. He'll be pissed if I can't give him one he likes. Anybody know any reason off the top of their head why some clearly erroneous code like that would resolve to the correct field, and then just... stop?

Did you upgrade the server it was running on any time recently? (including any service packs/hotfixes for IIS/your web server?)

That's truly bizarre, incidentally; I don't know of any languages that would try 'Policy_No' if 'Policy No' failed.

(That said ... is your server side code ASP, perchance?)

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Yeah I'd have to agree; has to be some server change, either programmatic (hotfix, patch, upgrade, etc.) or configuration (namespace, character set, some obscure rule).

Sprawl
Nov 21, 2005


It's very possible the ADO recordset was using an ODBC driver at some point that was updated to no longer replace blank spaces with underscores.
