NihilCredo
Jun 6, 2011

Suppress anger in every possible way:
one outburst of it will defame you more than many virtues will commend you

Nth Doctor posted:

Please don't break first normal form.

I'd like to, but I'd rather organise stuff with XML than add a new table to the schema every five weeks or so.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

NihilCredo posted:

I'd like to, but I'd rather organise stuff with XML than add a new table to the schema every five weeks or so.
If you're making the decision to store this arbitrary data as an XML blob, what speed loss are you estimating for queries that include that field? Likewise, what is the contrast between returning the entire XML string to the application versus using the in-database XML decoder? It seems possible that leaving the decoding in the application might serve your purpose better. If you expect such additional values to be a rather small percentage of your data, have you considered storing this stuff in an arbitrary attributes table (recordid,key,value)?
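For illustration, here's a minimal sketch of that (recordid, key, value) idea, using Python's sqlite3 as a stand-in engine; the table and column names are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    -- arbitrary per-customer extras live here instead of in new columns
    CREATE TABLE customer_attr (
        recordid INTEGER REFERENCES customer(id),
        key      TEXT NOT NULL,
        value    TEXT,
        PRIMARY KEY (recordid, key)
    );
    INSERT INTO customer VALUES (1, 'Acme');
    INSERT INTO customer_attr VALUES (1, 'SizeOfLeftToe', '27mm');
""")

# Reading one oddball attribute never requires touching customer's schema.
row = con.execute("""
    SELECT value FROM customer_attr
    WHERE recordid = 1 AND key = 'SizeOfLeftToe'
""").fetchone()
print(row[0])  # 27mm
```

New attributes are just new rows, so the schema never changes; the trade-off (as discussed below) is that everything is stringly-typed and reporting needs a pivot.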

NihilCredo
Jun 6, 2011

Suppress anger in every possible way:
one outburst of it will defame you more than many virtues will commend you

PhantomOfTheCopier posted:

If you're making the decision to store this arbitrary data as an XML blob, what speed loss are you estimating for queries that include that field?

Pithy, slightly bitter answer: still smaller speed losses than our food traceability data, which is all stored in a single nvarchar column and has to be parsed through String.Split() both by our application and by reports.

Also, unlike the above, the customer-specific data we'd store in an XML blob is highly unlikely to be queried as part of core basic operations ("find item by ID", "get all sales for this item") and fairly unlikely to be queried in the performance-sensitive parts of the application (point-of-sale operations, mostly). And if it were, it would only slow down the particular customer who makes use of it, who could then get his particular hardware upgraded or something.

quote:

Likewise, what is the contrast between returning the entire XML string to the application versus using the in-database XML decoder? It seems possible that leaving the decoding in the application might serve your purpose better.

I admit I kind of automatically assumed that, if SQL Server even implemented an XML data type in the first place (with the option of indexing it!), it's because it offers advantages over storing XML in an nvarchar column and grabbing+parsing the whole thing every time.

Nevertheless, most of our customers' setups involve a reasonably robust database server machine and a bunch of POS (both meanings of POS) terminals, so offloading as much work to the server as possible makes sense anyway. And if it turned out it didn't, it's a trivial, easily-reversed change for us developers to write ".value('/foo[1]')" in the query itself or ".SelectSingleNode('/foo[1]').InnerText" immediately afterwards.
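For what it's worth, the "return the whole blob and decode in the application" route is essentially one ElementTree call; a rough Python/sqlite3 sketch, with invented table and element names:

```python
import sqlite3
import xml.etree.ElementTree as ET

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, extra_xml TEXT)")
con.execute("INSERT INTO item VALUES (1, '<props><foo>bar</foo></props>')")

# Pull the whole XML string back and decode client-side, instead of
# having the server evaluate an XPath like /props/foo[1].
blob = con.execute("SELECT extra_xml FROM item WHERE id = 1").fetchone()[0]
foo = ET.fromstring(blob).findtext("foo")
print(foo)  # bar
```

The cost is shipping the whole document over the wire every time, which is exactly the contrast PhantomOfTheCopier is asking about.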

quote:

If you expect such additional values to be a rather small percentage of your data, have you considered storing this stuff in an arbitrary attributes table (recordid,key,value)?

I have:

quote:

e: SizeOfLeftToe was probably a bad example because that could be a simple one-to-many [CustomerID]-[PropertyID]-[Value] table. The reason I want to go with an XML column is because it offers the flexibility to handle more complex properties as well.

A fairly common example would be software interoperability: many of our customers have their own special snowflake accounting / e-commerce / document management / cow milking / whatever software, and they pay us to make our management solution work with it as seamlessly as possible, which often means storing a bunch of information that the other program cares about, which can be structured in all kinds of ways.

Now, I personally would be quite happy to fill the schema with a bunch of Interop_SpecialSnowflake_Foos and Interop_SpecialSnowflake_FooBars tables and keep everything lovely and tidy, but my boss strongly dislikes the idea and says he'd only OK it if I were confident it was the best option. And right now I'm not, because migrating a customer's schema is currently a significantly more onerous process than simply updating the application files, and the XML approach has the massive advantage of leaving the schema unchanged (except for the views, but those don't hold actual data and thus the concerns of updating actual rows don't apply to them).

KernelSlanders
May 27, 2013

Rogue operating systems on occasion spread lies and rumors about me.
A situation I seem to run into a lot is having a query that can produce a single data point given some parameter(s), and not being sure how to generate a table of that query's results over a set of parameters.

For example, let's say I have a table of items with categories and prices: id (INT), category (INT), price (FLOAT), and I want to know how many categories have at least x members with a price below some price p, for a range of values of x and p. Obviously, I can find this for some particular values, say 3 and 0.25, with something like:

SQL code:
WITH c AS (
  SELECT category, COUNT(category) AS count_per_category
  FROM my_table
  WHERE price < 0.25
  GROUP BY category
)

SELECT COUNT(category)
FROM c
WHERE count_per_category >= 3
What I'd like is to somehow get out a table of that result for several values of x and p. I know you can't pivot natively in SQL, but it seems like I should be able to get out a result set of the form value_x (INT), value_p (double), num_categories_meeting_cutoffs (INT). I'm just not seeing how to get there.
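One way to get that grid without a pivot is to put the (x, p) pairs into their own table and join it against the grouped counts. A sketch in Python/sqlite3 with toy data (my_table is from the post; the params table is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE my_table (id INT, category INT, price FLOAT);
    INSERT INTO my_table VALUES
        (1, 1, 0.10), (2, 1, 0.20), (3, 1, 0.15),  -- category 1: 3 under 0.25
        (4, 2, 0.30), (5, 2, 0.05),                -- category 2: 1 under 0.25
        (6, 3, 0.40);                              -- category 3: 0 under 0.25
    CREATE TABLE params (x INT, p FLOAT);
    INSERT INTO params VALUES (1, 0.25), (3, 0.25);
""")

rows = con.execute("""
    SELECT x, p, COUNT(*) AS num_categories_meeting_cutoffs
    FROM (
        -- for each (x, p) pair, keep the categories that qualify
        SELECT pr.x AS x, pr.p AS p, t.category
        FROM params pr
        JOIN my_table t ON t.price < pr.p
        GROUP BY pr.x, pr.p, t.category
        HAVING COUNT(*) >= pr.x
    )
    GROUP BY x, p
    ORDER BY x, p
""").fetchall()
print(rows)  # [(1, 0.25, 2), (3, 0.25, 1)]
```

One caveat: an (x, p) pair that no category satisfies simply drops out of the result; if you need explicit zeros, LEFT JOIN this result back onto params.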

McGlockenshire
Dec 16, 2005

GOLLOCKS!

NihilCredo posted:

I want to add an XML column to several of our tables so that in the future, when one dinky little customer asks to add "Size of Left Toe" to his customer records, we don't need to even consider touching the data structure.

It sounds like you're looking for an EAV table, not some crap involving XML. Or a document data store, not a relational database. Otherwise suck it up and alter the schema. Involving XML in this fiasco will only make future maintainers of your database and code hate you.

Remember, always assume the next person to take over your job is a murderous psychopath who knows where you live.

NihilCredo
Jun 6, 2011

Suppress anger in every possible way:
one outburst of it will defame you more than many virtues will commend you

McGlockenshire posted:

It sounds like you're looking for an EAV table, not some crap involving XML. Or a document data store, not a relational database. Otherwise suck it up and alter the schema. Involving XML in this fiasco will only make future maintainers of your database and code hate you.

Remember, always assume the next person to take over your job is a murderous psychopath who knows where you live.

Duly noted, and I will put more weight on the side of the attribute table option.

In your opinion, is there ever a legitimate use for XML manipulation in a SQL database (that is, using an XML column for anything more than simply storing and retrieving an XML document as a whole)?

FateFree
Nov 14, 2003

I'm working with a relational data model with 19 tables and very aggressive SLAs. Our current DB, Teradata (crap), has something called Multi Statement Queries. That lets us send all 19 insert queries, separated by semicolons, in one shot, and they are then executed in parallel on the db. For gets, it executes 19 selects and returns 19 result sets, which we then correlate together.

Is there any other database that supports this kind of feature? We are looking to move away from TD into something like Oracle or even an open source solution.

McGlockenshire
Dec 16, 2005

GOLLOCKS!

NihilCredo posted:

Duly noted, and I will put more weight on the side of the attribute table option.

In your opinion, is there ever a legitimate use for XML manipulation in a SQL database (that is, using an XML column for anything more than simply storing and retrieving an XML document as a whole)?

I have never encountered a use case for XML manipulation in a relational database where the whole intent wasn't dealing with externally-provided XML documents. That said, I don't have a lot of experience with this, and would never intentionally choose a relational database if I knew I was going to mainly deal with XML.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

FateFree posted:

I'm working with a relational data model with 19 tables and very aggressive SLAs. Our current DB, Teradata (crap), has something called Multi Statement Queries. That lets us send all 19 insert queries, separated by semicolons, in one shot, and they are then executed in parallel on the db. For gets, it executes 19 selects and returns 19 result sets, which we then correlate together.

Is there any other database that supports this kind of feature? We are looking to move away from TD into something like Oracle or even an open source solution.
Have you done comparisons against sequential operations on the same database with the same connection mechanisms? In the general case, nineteen inserts running in parallel may not produce the same results, so most databases (with which I am familiar) haven't spent time developing code to determine when such queries can be executed in parallel because the overhead of that process is also a bit undetermined. If you know you have nineteen inserts into nineteen distinct tables with no interrelational dependencies, then, yes, there are tools you can use to ensure parallel updates, at least from the client perspective. The typical approach is to use something like load balancing software, connection pooling software, etcetera. PGPool solves some cases, for example; MySQL documentation notes C library calls that handle parallelizing multiple selects.

Some open source databases restrict themselves to a single process per worker model, whence single queries may ultimately run on a single CPU. Most of those systems, however, support multiple client connections, so you can easily fork or thread your connector to handle custom inserts, updates, or selects. I'd expect, in fact, that most of this would have to be custom as you're building a contract with the DB that you're doing things to your data that are not likely to result in deadlocks or other forms of contention, but the benefit is that you have a lot more control over the tuning in such a custom design. Note that for some queries, sequential order can matter, so you'll even have to exercise caution there. With stored procedures, you can probably build a reasonable approach to efficiently loading the data.

I still question the results of sequential versus parallel test runs against your current system, though, because it's likely just going to move your bottleneck. Unless you're using a customized construction on your filesystem, or mapping to different underlying devices --- for example, in PostgreSQL, you could construct nineteen different table spaces on nineteen disks in a JBOD --- you'll pile up at the disk write and sync step in any case. If your row size is relatively small, it probably won't be very noticeable, suggesting you should start with a sequential benchmark to determine what, if anything, can meet your expected daily traffic rate when it's appropriately tuned normally, and before you devote considerable effort to ensuring a parallel-worker system gives you a tenfold improvement.

var1ety
Jul 26, 2004

NihilCredo posted:

Duly noted, and I will put more weight on the side of the attribute table option.

In your opinion, is there ever a legitimate use for XML manipulation in a SQL database (that is, using an XML column for anything more than simply storing and retrieving an XML document as a whole)?

This applies to Oracle, and perhaps not to SQL Server, but Tom Kyte weighs in on the side of generic columns or an XML column to solve the EAV problem. You can index these, so you get performance with varying levels of flexibility. Entity-attribute-value data models are a nightmare for reporting and data densification.

Here's a discussion about the name/value problem - https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2314483800346542969

A quick summary of the "generic column" solution (which is also in the above link) is that you add a bunch of generic string/date/number columns that aren't used at go-live, but provide an easy way for developers to expand the system in the future by allocating them and using them.

Sedro
Dec 31, 2008
I would also choose just about anything over an EAV model. XML is simpler to insert, simpler to query and can be made more performant with functional indices. Postgres' hstore is really great if it's available to you.
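As a concrete analog of the "functional index" point: SQLite's JSON1 functions plus an expression index give the same shape (jsonb in Postgres and typed XML indexes in SQL Server work along broadly similar lines). Table and attribute names here are invented, and this assumes a SQLite build with JSON1, which any recent one has:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, extra TEXT)")
con.execute("""INSERT INTO customer VALUES
    (1, '{"SizeOfLeftToe": 27}'), (2, '{"SizeOfLeftToe": 31}')""")

# A functional ("expression") index on the extracted attribute...
con.execute(
    "CREATE INDEX ix_toe ON customer (json_extract(extra, '$.SizeOfLeftToe'))")

# ...lets lookups on that attribute use the index instead of scanning blobs.
row = con.execute("""
    SELECT id FROM customer
    WHERE json_extract(extra, '$.SizeOfLeftToe') = 31
""").fetchone()
print(row)  # (2,)
```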

Pardot
Jul 25, 2001




Sedro posted:

I would also choose just about anything over an EAV model. XML is simpler to insert, simpler to query and can be made more performant with functional indices. Postgres' hstore is really great if it's available to you.

With 9.4, I'd recommend jsonb over hstore. It was originally hstore2, but with json as the external representation instead of that perl-like thing.

Sedro
Dec 31, 2008

Pardot posted:

With 9.4, I'd recommend jsonb over hstore. It was originally hstore2, but with json as the external representation instead of that perl-like thing.
Right, use that. I thought they were still calling it hstore

Aredna
Mar 17, 2007
Nap Ghost
I don't know what the right answer is for your solution, but EAV is going to make me hate you if I ever have to use your database.

It's not just that I have to pivot your data to get it into an easy-to-use format, but also that pivots are clunky. You either have to hard-code the column names or make it dynamic, which involves either a procedure or some nested CTEs, string manipulation, and executes. None of those are pretty or easy to optimize.
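The hard-coded flavor of that pivot is usually conditional aggregation, one MAX(CASE ...) per attribute name; a sketch in Python/sqlite3 with an invented attrs table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE attrs (recordid INT, key TEXT, value TEXT);
    INSERT INTO attrs VALUES
        (1, 'color', 'red'), (1, 'size', 'L'),
        (2, 'color', 'blue');
""")

# Hard-coded pivot: one MAX(CASE ...) column per attribute name.
rows = con.execute("""
    SELECT recordid,
           MAX(CASE WHEN key = 'color' THEN value END) AS color,
           MAX(CASE WHEN key = 'size'  THEN value END) AS size
    FROM attrs
    GROUP BY recordid
    ORDER BY recordid
""").fetchall()
print(rows)  # [(1, 'red', 'L'), (2, 'blue', None)]
```

Every new attribute means editing the query, which is exactly the maintenance burden being complained about; the "dynamic" version builds that CASE list from a string at runtime.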

Xae
Jan 19, 2005

FateFree posted:

I'm working with a relational data model with 19 tables and very aggressive SLAs. Our current DB, Teradata (crap), has something called Multi Statement Queries. That lets us send all 19 insert queries, separated by semicolons, in one shot, and they are then executed in parallel on the db. For gets, it executes 19 selects and returns 19 result sets, which we then correlate together.

Is there any other database that supports this kind of feature? We are looking to move away from TD into something like Oracle or even an open source solution.

Teradata was always the DB that impressed me the most. What issues are you having with it?

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k
Hey guys, I have a hard SQL query I'm having trouble figuring out how to do, wondering if someone could help. I'm grabbing 10 rows from one column in a table, and I need to compare 5 of the rows with 5 others; whichever is higher goes into the sum. This is the query, which just sums up all of the players' totals:

code:
Select SUM(fantasypoints) AS score FROM NBAplayers 
WHERE status = 'open' 
AND playerID = '$playerarray[Player1]' 
OR playerID = '$playerarray[Player2]' 
OR playerID = '$playerarray[Player3]' 
OR playerID = '$playerarray[Player4]' 
OR playerID = '$playerarray[Player5]' 
OR playerID = '$playerarray[Player6]' 
OR playerID = '$playerarray[Player7]' 
OR playerID = '$playerarray[Player8]' 
OR playerID = '$playerarray[Player9]' 
OR playerID = '$playerarray[Player10]'
I need to compare [Player1] with [Player6], [Player2] with [Player7], etc. From googling it seems like I can use CASE but where in this query would I CASE?

MrMoo
Sep 14, 2000

Well, for starters, you can clean up that query using the IN expression; secondly, it looks like you simply want a ranking function, but this is not standard SQL and thus depends upon your database.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
1. You know that AND has higher precedence than OR?
2. You should look into using prepared statements.
3. For the question you asked, difficult to say without understanding the table structure in more detail, but the task would probably be a good deal easier if you had an additional table that contained the details of which players are to be paired up. This would allow you to solve your problem by combining SUM with GREATEST.

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k

MrMoo posted:

Well, for starters, you can clean up that query using the IN expression; secondly, it looks like you simply want a ranking function, but this is not standard SQL and thus depends upon your database.

Not trying to rank. Here's the two columns I'm calling



Imagine fantasypoints have a random amount of points.

I'm selecting two of the rows on the left, comparing their numbers on the right, and the higher one is added to the sum.

This doesn't have to be perfect since it's only a proof-of-concept site, so I'm not overly concerned about optimization right now. I may just end up pulling each value into a PHP variable and comparing them there.

MrMoo
Sep 14, 2000

Can you update this SQL Fiddle with your intent?

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Sepist posted:

Not trying to rank. Here's the two columns I'm calling



Imagine fantasypoints have a random amount of points.

I'm selecting two of the rows on the left, comparing their numbers on the right, and the higher one is added to the sum.

This doesn't have to be perfect since it's only a proof-of-concept site, so I'm not overly concerned about optimization right now. I may just end up pulling each value into a PHP variable and comparing them there.

Suppose you have an additional table PlayerMatch (columns playerId_1 and playerId_2, guids with foreign key to the NBAplayers table). Each row of the table gives the IDs of two players you want to match up. There will be as many rows as there are terms in your sum. Then you will

code:
SELECT
    SUM(GREATEST(NBAP_1.fantasypoints, NBAP_2.fantasypoints))
FROM
    PlayerMatch
    INNER JOIN NBAplayers AS NBAP_1 ON PlayerMatch.playerId_1 = NBAP_1.playerID
    INNER JOIN NBAplayers AS NBAP_2 ON PlayerMatch.playerId_2 = NBAP_2.playerID
GREATEST isn't standard SQL but it is my understanding that most RDBMSs have it. If yours doesn't you can use a CASE instead.
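As it happens, SQLite is one engine without GREATEST, so it makes a convenient test bed for the CASE fallback; a sketch with toy data and a simplified two-column schema (TEXT IDs standing in for the guids):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE NBAplayers (playerID TEXT PRIMARY KEY, fantasypoints REAL);
    INSERT INTO NBAplayers VALUES ('a', 10), ('b', 25), ('c', 7), ('d', 3);
    CREATE TABLE PlayerMatch (playerId_1 TEXT, playerId_2 TEXT);
    INSERT INTO PlayerMatch VALUES ('a', 'b'), ('c', 'd');
""")

# CASE stands in for GREATEST: pick the higher score from each pairing.
total = con.execute("""
    SELECT SUM(CASE WHEN NBAP_1.fantasypoints >= NBAP_2.fantasypoints
                    THEN NBAP_1.fantasypoints
                    ELSE NBAP_2.fantasypoints END)
    FROM PlayerMatch
    INNER JOIN NBAplayers AS NBAP_1 ON PlayerMatch.playerId_1 = NBAP_1.playerID
    INNER JOIN NBAplayers AS NBAP_2 ON PlayerMatch.playerId_2 = NBAP_2.playerID
""").fetchone()[0]
print(total)  # 32.0  (25 from the a/b pair, 7 from the c/d pair)
```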

Snak
Oct 10, 2005

I myself will carry you to the Gates of Valhalla...
You will ride eternal,
shiny and chrome.
Grimey Drawer
I have the following assignment:

Homework posted:

Write a query that shows returns [sic] the name and city of the university that has no people in database that are associated with it. Your query must use EXISTS to achieve.

persons are associated with universities by a foreign key "uid" in the person table (uid is the university's primary key).
I can get the correct result simply by "SELECT university_name, city FROM university LEFT JOIN person USING(uid) WHERE person IS NULL"

I have no idea how to do it using EXISTS. The assignment has other parts that differentiate between using IN and NOT IN, so I don't think I am supposed to use NOT EXISTS here.

I'm NOT trying to cheat on my assignment, I'm genuinely confused about using EXISTS for this purpose, and I'm having trouble finding examples of something similar...

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
Probably something simple like this...

SELECT university_name, city
FROM university
WHERE NOT EXISTS (
SELECT 1
FROM person
WHERE person.uid = university.uid
)

The distinction between EXISTS and NOT EXISTS seems silly to me here. Maybe ask for clarification on the assignment?

Gatac fucked around with this message at 08:53 on Feb 22, 2015

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k

Hammerite posted:

Suppose you have an additional table PlayerMatch (columns playerId_1 and playerId_2, guids with foreign key to the NBAplayers table). Each row of the table gives the IDs of two players you want to match up. There will be as many rows as there are terms in your sum. Then you will

code:
SELECT
    SUM(GREATEST(NBAP_1.fantasypoints, NBAP_2.fantasypoints))
FROM
    PlayerMatch
    INNER JOIN NBAplayers AS NBAP_1 ON PlayerMatch.playerId_1 = NBAP_1.playerID
    INNER JOIN NBAplayers AS NBAP_2 ON PlayerMatch.playerId_2 = NBAP_2.playerID
GREATEST isn't standard SQL but it is my understanding that most RDBMSs have it. If yours doesn't you can use a CASE instead.

I think that makes it more complicated than it needs to be. I ultimately ended up using MAX and PHP queries for now; appreciate all your help.

Hammerite posted:

1. You know that AND has higher precedence than OR?

Had no idea, and quickly ran into this problem. Wrapping the AND in parentheses resolved the problem though, thanks!

Snak
Oct 10, 2005

I myself will carry you to the Gates of Valhalla...
You will ride eternal,
shiny and chrome.
Grimey Drawer

Gatac posted:

Probably something simple like this...

SELECT university_name, city
FROM university
WHERE NOT EXISTS (
SELECT 1
FROM person
WHERE person.uid = university.uid
)

The distinction between EXISTS and NOT EXISTS seems silly to me here. Maybe ask for clarification on the assignment?

Yeah it did turn out we were allowed to use NOT EXISTS.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Snak posted:

persons are associated with universities by a foreign key "uid" in the person table (uid is the university's primary key).
I can get the correct result simply by "SELECT university_name, city FROM university LEFT JOIN person USING(uid) WHERE person IS NULL"

I have no idea how to do it using EXISTS. The assignment has other parts that differentiate between using IN and NOT IN, so I don't think I am supposed to use NOT EXISTS here.
With the poorly written question in the first place, let's hope NOT EXISTS is permitted, because you really don't want to do the whole thing without the NOT keyword:
code:
SELECT name FROM test_univ u WHERE EXISTS (
  SELECT tu.id,tp.cnt
  FROM test_univ tu
  LEFT OUTER JOIN (SELECT count(t.uid) cnt,t.uid FROM test_people t GROUP BY t.uid) AS tp ON (tp.uid=tu.id)
  WHERE tp.cnt IS NULL AND tu.id=u.id
);
:buddy:


vvv Yes, I use CTEs, but there's this terribly over-used open source database(-like thing) that has no notion of them whatsoever, so I figured I'd stick with the universal "subsubsubqueries for all!" approach.

PhantomOfTheCopier fucked around with this message at 20:05 on Feb 22, 2015

Chill Callahan
Nov 14, 2012
Just a helpful tip (I don't know if you're using SQL Server or not): I like to refactor subqueries into common table expressions wherever possible.

Aredna
Mar 17, 2007
Nap Ghost

Chill Callahan posted:

Just a helpful tip (I don't know if you're using SQL Server or not): I like to refactor subqueries into common table expressions wherever possible.

I like to do this as well, but occasionally I've run into performance problems where, for some reason, it was rerunning the full CTE for every row rather than caching the results. Most of the time using a join would fix it, but occasionally I would have to resort to temp tables to get performance back where it should be.
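The temp-table fallback looks roughly like this; a sketch against SQLite with invented tables (in SQL Server it would be a #temp table rather than CREATE TEMP TABLE):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INT, customer INT, amount REAL);
    INSERT INTO orders VALUES (1, 1, 10), (2, 1, 20), (3, 2, 5);

    -- Materialize the would-be CTE once, so it cannot be re-evaluated per row...
    CREATE TEMP TABLE cust_totals AS
        SELECT customer, SUM(amount) AS total
        FROM orders GROUP BY customer;
""")

# ...then join against the materialized result.
rows = con.execute("""
    SELECT o.id, t.total
    FROM orders o JOIN cust_totals t ON t.customer = o.customer
    ORDER BY o.id
""").fetchall()
print(rows)  # [(1, 30.0), (2, 30.0), (3, 5.0)]
```

The materialization is guaranteed at the cost of writing the intermediate result out, which is why it's a fallback rather than the default.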

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

Aredna posted:

I like to do this as well, but occasionally I've run into performance problems where, for some reason, it was rerunning the full CTE for every row rather than caching the results. Most of the time using a join would fix it, but occasionally I would have to resort to temp tables to get performance back where it should be.

Temp tables or CTEs are preferred regardless, for performance reasons (table spools are bad) and for readability's sake. I'm sure we've all run across queries with five subselects, and they're infuriating to look at while trying to figure out what the hell the author intended and why it was better to do that than to insert into a #table and join that.

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k
Is there a way to sort a SQL result in the order of your WHERE clause? My SQL query looks like this:

code:
WHERE playerID = '005948cb-f744-4928-bf7f-d26076717c99' OR playerID = 'dbf62639-cf0d-43ea-b470-c116128b76ec' 
OR playerID = '5176c4c0-b451-4122-81e9-7d5dc9502de5' OR playerID = '2012f4fd-98e0-4080-98a7-0cfb9de87067' 
OR playerID = '34ff5b55-42ef-40c0-a2c4-9c28b5476a03' OR playerID = '1c1c0edb-84df-42a8-a74b-5899849cd41a' 
OR playerID = '1647115a-1607-4250-889e-8ae1af98fc92' OR playerID = '3e512d9e-ff9c-4f23-a4fd-a88128ee3af2' 
OR playerID = 'a696f51d-0329-4d74-a52a-1d5ef80a1bc8' OR playerID = '2fcb797b-0f24-4862-9c6f-7dea69720f7a'
and I would like 005948cb-f744-4928-bf7f-d26076717c99 to be the first row, dbf62639-cf0d-43ea-b470-c116128b76ec the second, etc.

Sedro
Dec 31, 2008
SQL code:
ORDER BY CASE playerID
  WHEN '005948cb-f744-4928-bf7f-d26076717c99' THEN 1
  WHEN 'dbf62639-cf0d-43ea-b470-c116128b76ec' THEN 2
  WHEN '5176c4c0-b451-4122-81e9-7d5dc9502de5' THEN 3
  ...
END
You could also use a temporary table or equivalent, which would simplify your WHERE clause and ORDER BY.

Sepist
Dec 26, 2005

FUCK BITCHES, ROUTE PACKETS

Gravy Boat 2k
Thanks man, worked perfectly

Aredna
Mar 17, 2007
Nap Ghost

Anaxandrides posted:

Temp tables or CTEs are preferred regardless, for performance reasons (table spools are bad) and for readability's sake. I'm sure we've all run across queries with five subselects, and they're infuriating to look at while trying to figure out what the hell the author intended and why it was better to do that than to insert into a #table and join that.

Sure, but you can use index hints to get around the table spool on a join sometimes. I've definitely had a case where that gave me the best performance.

I hate subselects in queries, though, because unless they are really short and perfectly formatted they are almost impossible to understand.

Peanut and the Gang
Aug 24, 2009

by exmarx

Pardot posted:

With 9.4, I'd recommend jsonb over hstore. It was originally hstore2, but with json as the external representation instead of that perl-like thing.

Ooooo

Anaxandrides
Jun 7, 2012

Every summer day in the desert is like a vacation in hell.

Aredna posted:

Sure, but you can use index hints to get around the table spool on a join sometimes. I've definitely had a case where that gave me the best performance.

I hate subselects in queries, though, because unless they are really short and perfectly formatted they are almost impossible to understand.

I hate them for formatting reasons, but sometimes the spool can't be avoided with an index. If you have, for example...

code:
INSERT INTO @MyTable
SELECT ID, Name, Value
FROM MyRealTable m
WHERE m.ID NOT IN (SELECT ID FROM @MyTable)
You're going to spool. Expensively. This kind of code was really common when I started at the company I work for, because it "works", but that query gets incrementally slower as @MyTable grows in a way that a LEFT JOIN does not.
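For comparison, a sketch of the LEFT JOIN version of that anti-join, using SQLite as a stand-in (the @MyTable table variable becomes a temp table, since table variables are SQL Server-specific; SQLite materializes the SELECT before inserting when the target table appears in it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE MyRealTable (ID INT, Name TEXT, Value REAL);
    INSERT INTO MyRealTable VALUES (1, 'a', 1.0), (2, 'b', 2.0), (3, 'c', 3.0);
    CREATE TEMP TABLE MyTable (ID INT, Name TEXT, Value REAL);
    INSERT INTO MyTable VALUES (1, 'a', 1.0);  -- row 1 already loaded

    -- Anti-join: keep only the rows with no match, instead of NOT IN.
    INSERT INTO MyTable
    SELECT m.ID, m.Name, m.Value
    FROM MyRealTable m
    LEFT JOIN MyTable t ON t.ID = m.ID
    WHERE t.ID IS NULL;
""")

ids = [r[0] for r in con.execute("SELECT ID FROM MyTable ORDER BY ID")]
print(ids)  # [1, 2, 3]
```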

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av
Can anyone recommend an ORM for sqlite? C or C++

(the platform APIs for database access on BlackBerry 10 are awful and useless)

Opulent Ceremony
Feb 22, 2012

Anaxandrides posted:

You're going to spool. Expensively. This kind of code was really common when I started at the company I work for, because it "works", but that query gets incrementally slower as @MyTable grows in a way that a LEFT JOIN does not.

Would you mind expanding on how these approaches differ internally in the context of why the sub-query is worse?

Peanut and the Gang
Aug 24, 2009

by exmarx
Is there a phpmyadmin-esque tool for DB2? I'm googling, and things like pma and adminer don't support it. :(

raej
Sep 25, 2003

"Being drunk is the worst feeling of all. Except for all those other feelings."
I need some help figuring out how to set a declared variable in Oracle. I thought it would be something like this:
code:
DECLARE    	
I_SP_NAME VARCHAR2(2000) := 'batch_format';   
I_PARM_BATCH_NO VARCHAR2(2000);
SET I_PARM_BATCH_NO = (SELECT max(ACTION_FORM_BATCH_NO) from tmp_action_table) 

BEGIN    
MY_USER.JOB_UPDATE_PARAMS(I_SP_NAME, I_PARM_BATCH_NO);
END;
I can set the value manually with I_PARM_BATCH_NO VARCHAR2(2000) := '12345'; but I need it to get the max from this table every time it's called.

How do I do this?

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Peanut and the Gang posted:

Is there a phpmyadmin-esque tool for DB2? I'm googling, and things like pma and adminer don't support it. :(

We fired our only db2 client years ago, and I didn't touch that database, but they had this installed:
http://theopensourcery.com/db2ui.htm

I have no idea how relevant it is in today's world of DB2 usage.
