TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
REFRESH MATERIALIZED VIEW requires an ACCESS EXCLUSIVE lock on the view so that's the culprit. Check if you can use CONCURRENTLY?
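Something like this, hypothetically (CONCURRENTLY needs at least one unique index covering all rows, so this assumes your matview is called my_matview and has an id column):

SQL code:
-- CONCURRENTLY requires a unique index on the matview
create unique index on my_matview (id);

-- no longer blocks readers while it runs
refresh materialized view concurrently my_matview;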


NihilCredo
Jun 6, 2011

suppress your anger in every possible way:
that one flaw will defame you more than many virtues will commend you

TheFluff posted:

REFRESH MATERIALIZED VIEW requires an ACCESS EXCLUSIVE lock on the view so that's the culprit. Check if you can use CONCURRENTLY?

Gah! Should have thought about it. CONCURRENTLY should be possible, and thankfully we're dropping those matviews in the next release anyway.

Withnail
Feb 11, 2004
Can anyone recommend a tool to compare two instances of the same oracle database in two disconnected environments?

NPR Journalizard
Feb 14, 2008

Withnail posted:

Can anyone recommend a tool to compare two instances of the same oracle database in two disconnected environments?

Haven't tested it against an Oracle database, but Visual Studio can compare SQL database schemas.

NihilCredo
Jun 6, 2011

suppress your anger in every possible way:
that one flaw will defame you more than many virtues will commend you

Yet another Postgres (12) trivia question.

I have a TEXT ARRAY column containing the flattened model of a tree structure. So like

code:
id | code
---------------------------
 1 | {A, A, A}
 2 | {A, A, A, B}
 3 | {A, A, B}
 4 | {A, A, C}
 5 | {A, B}
and so forth. Note that I had previously tried storing the tree 'properly', with self-referencing IDs and recursive queries, but keeping data sync'ed turned out to be a mountain of pain.

When I need to grab all the ancestors of a certain node, it's a simple self-join:

code:
select ancestor.code
from table as descendant
join table as ancestor
  on ancestor.code = descendant.code[1 : array_length(ancestor.code, 1)]
Problem is, it's slow as balls because I can't find any index type that will actually index that right-side expression (extract the first N elements of an array). A GIN index did not work, and I can't think of a way to index it as an expression. I tried creating N individual indexes for the expression descendant.code[1 : N] and it didn't work either.

Since the table doesn't change too often, I could create a materialized view and refresh it upon change, but I'd rather keep things cleaner if possible.

uguu
Mar 9, 2014

edit: missed a ) xD

uguu fucked around with this message at 20:43 on Jun 21, 2021

MrKatharsis
Nov 29, 2003

feel the bern

Withnail posted:

Can anyone recommend a tool to compare two instances of the same oracle database in two disconnected environments?

Toad or OEM.

Or a homegrown script from 3 DBAs ago.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

NihilCredo posted:

Yet another Postgres (12) trivia question.

I have a TEXT ARRAY column containing the flattened model of a tree structure. So like

code:
id | code
---------------------------
 1 | {A, A, A}
 2 | {A, A, A, B}
 3 | {A, A, B}
 4 | {A, A, C}
 5 | {A, B}
and so forth. Note that I had previously tried storing the tree 'properly', with self-referencing IDs and recursive queries, but keeping data sync'ed turned out to be a mountain of pain.

When I need to grab all the ancestors of a certain node, it's a simple self-join:

code:
select ancestor.code
from table as descendant
join table as ancestor
  on ancestor.code = descendant.code[1 : array_length(ancestor.code, 1)]
Problem is, it's slow as balls because I can't find any index type that will actually index that right-side expression (extract the first N elements of an array). A GIN index did not work, and I can't think of a way to index it as an expression. I tried creating N individual indexes for the expression descendant.code[1 : N] and it didn't work either.

Since the table doesn't change too often, I could create a materialized view and refresh it upon change, but I'd rather keep things cleaner if possible.

Does your code actually work? Are you storing the path with the root node last in the array? I'm really confused by your example. At first I thought your tree looked like this

code:
          A
        /   \
      A      B
    / | \
  A   B   C
  |
  B
and in that case if I want A.A.A.B and all of its ancestor nodes, I'd want A.A.A, A.A and A (that is, I'd want the rows with the same prefix as the descendant row), but now I'm not so sure. If you really do want an index on the suffix (all array elements except the first one), and the length of the array elements is not fixed, I guess you could try creating a normal B-tree index on a reversed string representation of the array and query it with LIKE? :v: Just throwing stupid ideas out there.

In any case, if the tree structure doesn't change often, I think the MPTT (modified preorder tree traversal) pattern can work for you, if changing the schema is acceptable. See e.g. here for an overview. Or I guess you could try the ltree module but I've never tried that myself.

TheFluff fucked around with this message at 14:22 on Jun 22, 2021

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

NihilCredo posted:

Yet another Postgres (12) trivia question.

I have a TEXT ARRAY column containing the flattened model of a tree structure. So like

code:
id | code
---------------------------
 1 | {A, A, A}
 2 | {A, A, A, B}
 3 | {A, A, B}
 4 | {A, A, C}
 5 | {A, B}
and so forth. Note that I had previously tried storing the tree 'properly', with self-referencing IDs and recursive queries, but keeping data sync'ed turned out to be a mountain of pain.

When I need to grab all the ancestors of a certain node, it's a simple self-join:

code:
select ancestor.code
from table as descendant
join table as ancestor
  on ancestor.code = descendant.code[1 : array_length(ancestor.code, 1)]
Problem is, it's slow as balls because I can't find any index type that will actually index that right-side expression (extract the first N elements of an array). A GIN index did not work, and I can't think of a way to index it as an expression. I tried creating N individual indexes for the expression descendant.code[1 : N] and it didn't work either.

Since the table doesn't change too often, I could create a materialized view and refresh it upon change, but I'd rather keep things cleaner if possible.

have you considered implementing this as an "ltree" type instead of a text array? I think that would be the canonical way to do it and supposedly you can do GIST indexes on these.

https://www.postgresql.org/docs/13/ltree.html

http://patshaughnessy.net/2017/12/13/saving-a-tree-in-postgres-using-ltree
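A minimal sketch of what that might look like (made-up names):

SQL code:
create extension if not exists ltree;

create table nodes (
    id   serial primary key,
    path ltree not null   -- e.g. 'A.A.A.B'
);

create index nodes_path_gist on nodes using gist (path);

-- ancestors of A.A.A.B (@> means "is an ancestor of, or equal to"):
select * from nodes where path @> 'A.A.A.B';

-- descendants of A.A:
select * from nodes where path <@ 'A.A';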

NihilCredo
Jun 6, 2011

suppress your anger in every possible way:
that one flaw will defame you more than many virtues will commend you

TheFluff posted:

Does your code actually work? Are you storing the path with the root node last in the array? I'm really confused by your example.

Yeah it works (it's in production), unless I bungled something when cleaning up the code for posting. Consider the AAAB example. The join condition is:

code:
ancestor.code = descendant.code[1 : array_length(ancestor.code, 1)]
and it returns true in the following cases:

code:
A = AAAB[1 : 1]
AA = AAAB[1 : 2]
AAA = AAAB[1 : 3]
AAAB = AAAB[1 : 4]
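Incidentally, for the ancestors-of-one-node case the prefixes can be enumerated on the fly and probed with plain equality, which an ordinary b-tree index on the array column can serve. A sketch, with a made-up table name nodes and assuming a create index on nodes (code):

SQL code:
select a.*
from nodes d
cross join lateral generate_subscripts(d.code, 1) as g(n)
join nodes a
  on a.code = d.code[1 : g.n]   -- each prefix is an indexable equality probe
where d.id = 2;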

quote:

I guess you could try creating a normal B-tree index on a reversed string representation of the array and query it with LIKE? Just throwing stupid ideas out there.

There's an index type specifically optimized for LIKE that we use already so it might not be such a stupid idea. Would need to make sure to use an absolutely unambiguous separator character.
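Something like this, maybe (made-up names; chr(31), the ASCII unit separator, as the delimiter; note that array_to_string() is only marked STABLE, so it needs an immutable wrapper before it can go in an index):

SQL code:
create function path_text(p text[]) returns text
    language sql immutable
    as $$ select array_to_string(p, chr(31)) $$;

create index nodes_path_txt on nodes (path_text(code) text_pattern_ops);

-- left-anchored LIKE can use the b-tree, e.g. all descendants of {A,A}:
select *
from nodes
where path_text(code) like path_text('{A,A}'::text[]) || chr(31) || '%';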

quote:

In any case, if the tree structure doesn't change often, I think the MPTT (modified preorder tree traversal) pattern can work for you, if changing the schema is acceptable. See e.g. here for an overview.

I need to sit down and read this carefully but it might be what I'm looking for, thanks. Certainly the issues it highlights at the start with the adjacency model match my experience.

Paul MaudDib posted:

have you considered implementing this as an "ltree" type instead of a text array?

I used ltree in the first prototype but it is not supported properly by the driver library we use :sigh: Although I suppose I could change the database column to an ltree and then somehow cast it back to an array or something else when querying.

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
I'm just an idiot and forgot postgres arrays are 1-indexed so never mind that part :frogdunce:

MPTT is pretty cool and has worked well for me in the past but since it can involve a bunch of calculations on the application side, it might be worth checking if there's a library available that does it for whatever application stack you might have. There's an implementation for Django for example that might be worth sneaking glances at even if you're not using it.

e: or look up a closure table as an alternative solution
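Rough shape of a closure table, for reference (made-up names; one row per ancestor/descendant pair, including self, so both directions become indexed lookups):

SQL code:
create table node_closure (
    ancestor_id   int not null references nodes (id),
    descendant_id int not null references nodes (id),
    depth         int not null,   -- 0 = the node itself
    primary key (ancestor_id, descendant_id)
);

create index on node_closure (descendant_id);

-- all (proper) ancestors of node 42:
select ancestor_id from node_closure where descendant_id = 42 and depth > 0;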

TheFluff fucked around with this message at 17:25 on Jun 22, 2021

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

NihilCredo posted:

I used ltree in the first prototype but it is not supported properly by the driver library we use :sigh: Although I suppose I could change the database column to an ltree and then somehow cast it back to an array or something else when querying.

yeah I've run into that in the past too, for example the JSON type column was not really supported in Hibernate (at the time? possibly improved since then?). The way I handled it was to write support for the type myself - not full support, but there was a Java-code-level way to write a shim that basically treated the column as a normal text column so it could be read and written. It sounds complex but it was basically a couple dozen lines of code iirc. That doesn't mean you could write arbitrary values to the column - postgres would still throw SQL errors if it couldn't parse what was going on - but you could serialize a true JSON object and just shove the text into that field, or maintain a "shadow" object that was a true JSON type, have the application work on that "true" object, feed the text version to the ORM layer, and only serialize it (via toString()) when the ORM's "get" method was called.

This is hacky, but another option is to do a view with an INSTEAD OF INSERT OR UPDATE trigger on it: the view exposes a text column, takes the writes, and passes them through while converting that view-column from text into an actual ltree/jsonb column on the underlying table.
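The shape of that, roughly (made-up names, and assuming the real column is an ltree called path):

SQL code:
create view nodes_text as
    select id, path::text as path
    from nodes;

create function nodes_text_write() returns trigger
    language plpgsql as $$
begin
    if tg_op = 'INSERT' then
        insert into nodes (path) values (new.path::ltree);
    else
        update nodes set path = new.path::ltree where id = new.id;
    end if;
    return new;
end $$;

create trigger nodes_text_ioiu
    instead of insert or update on nodes_text
    for each row execute function nodes_text_write();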

obviously this approach doesn't actually do anything about your ORM/driver not supporting the type for queries/etc; you'd have to write these particular queries in native SQL rather than (e.g.) HQL or similar high-level query languages, but it gets you most of the bang for the smallest buck. You could mostly treat it like text, and write those lookup queries natively.

if you end up not being able to do the index you want directly, one option is to build a structure that represents the index yourself and update it in parallel. Not as easy as a built-in index that just works automatically, obviously, but if the built-in support doesn't fit, whatcha gonna do. This is vaguely ringing a bell in the context of full-text or fuzzy-text searching (possibly before postgres had that support, or for people for whom the support wasn't suitable?) - not exactly what you want to do, but it might be a similar enough task to look at for implementation approaches and how to do it efficiently within your application / within postgres. It was something like word or character triplets that would go into some kind of tree structure that could be searched efficiently.

a foolish pianist
May 6, 2007

(bi)cyclic mutation

I've got a table with columns "date", "location", and "visitor_count", and I need to return just the rows with the date with the most visitors, e.g. from:
code:
date, location, visitor_count
7/1, A, 1
7/1, B, 10
7/5, B, 2
7/5, A, 100
7/10, A, 5
7/10, B, 12
I want to get:
code:
date, location, visitor_count
7/5, B, 2
7/5, A, 100
I'm really not sure how to go about this. Any suggestions?

EDIT:

hmm, something like
code:
SELECT table.date, location, visitor_count FROM table INNER JOIN
	( SELECT date FROM table
	  GROUP BY date
	  ORDER BY SUM(visitor_count) DESC
	  LIMIT 1) limit_table
ON table.date = limit_table.date

a foolish pianist fucked around with this message at 19:45 on Jun 23, 2021

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

a foolish pianist posted:

I've got a table with columns "date", "location", and "visitor_count", and I need to return just the rows with the date with the most visitors, e.g. from:
code:
date, location, visitor_count
7/1, A, 1
7/1, B, 10
7/5, B, 2
7/5, A, 100
7/10, A, 5
7/10, B, 12
I want to get:
code:
date, location, visitor_count
7/5, B, 2
7/5, A, 100
I'm really not sure how to go about this. Any suggestions?

you can use a CTE:

SQL code:
with date_with_most_visitors as (
  select
    date,
    sum(visitor_count) as total_visitors
  from some_table
  group by date
  order by total_visitors desc
  limit 1
)
select *
from some_table
where exists(
  select 1
  from date_with_most_visitors
  where some_table.date = date_with_most_visitors.date
)
Use an inner join instead of a where exists if you feel like it; I'm like 95% certain it'll perform exactly the same in this case. I just have a (possibly annoying) habit of always preferring exists() if I'm not selecting any columns from the joined table, but I suspect a lot of people would find an inner join more natural.

There are other ways to do it with nested subqueries if you're on a cursed database that doesn't have CTE's (e.g. Mysql before version 8 :smith:), but just say so in that case.
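For reference, a scalar-subquery version (same made-up table name) that should work on most of those:

SQL code:
select *
from some_table
where date = (
    select date
    from some_table
    group by date
    order by sum(visitor_count) desc
    limit 1
);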


e: saw your edit; you got it. Performance is probably not meaningfully different from my solution, use whichever you like. I personally tend to find CTE's more readable than subqueries in joins, but that's very subjective. CTE's can have different performance characteristics depending on what you're doing, but in this case I doubt it matters.

TheFluff fucked around with this message at 20:15 on Jun 23, 2021

DELETE CASCADE
Oct 25, 2017

i haven't washed my penis since i jerked it to a phtotograph of george w. bush in 2003

TheFluff posted:

I just have a (possibly annoying) habit of always preferring exists() if I'm not selecting any columns from the joined table

...

I personally tend to find CTE's more readable than subqueries in joins

can we be sql friends

TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE

DELETE CASCADE posted:

can we be sql friends

let's :kimchi:

mega dy
Dec 6, 2003

I used to be a subquery guy but after a lot of soul-searching I've realized that CTE's are the One True Way, especially with really complex queries. We all have our own individual path of enlightenment though.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
What are the low-level performance implications of tablespaces in postgres? Obviously the generic answer is “they let you split load across multiple IO devices” but I’m curious about the specifics.

Let’s say we take the concept to the extreme and put every single table on its own tablespace. Each table space is its own ZFS dataset. And let’s say you have one transaction that hits on three different tables (and thus three tablespaces).
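(Mechanically that extreme is just this, once per table, with made-up paths; each LOCATION being the mountpoint of its own dataset:)

SQL code:
-- after e.g.: zfs create tank/pg/ts_orders
create tablespace ts_orders location '/tank/pg/ts_orders';

create table orders (id bigint primary key) tablespace ts_orders;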

What are the implications for that in terms of transaction commit behavior, WAL writes, followbehind writes, autovacuum pauses, etc, vs if it all just lived in one tablespace? Let’s say that in both cases they live on one physical drive so both installations have identical disk IO capabilities available (we’re not giving the multi-dataset one more drives).

Does managing more but smaller tablespaces usually work better than one big one, and if so is there a reason you wouldn’t want to take it to the extreme like that, given how cheap a dataset can be, how it organizes similar data for optimal compression and caching on the dataset, etc? They can all go into a parent dataset so you can still do an atomic snapshot that captures all tablespaces at once, etc.

This also bears on declarative partitioned tables I think since those are basically tables mapped into one giant “view” (not really but) and can (and usually do) have their own tablespaces right? Especially if you can partition data that is frequently getting rototilled by update/delete, it should lead to smaller vacuums (or at least isolate this IO from the rest of the table if it’s on a separate disk)? So in this same sense, if you partition your data (and again to be fair let’s assume it’s on the same IO subsystem) is there a reason that would hurt performance by “over-partitioning” the data, or would that help by allowing more requests “in-flight” and increasing IO depth?

If you are using some aspect of your data to “scatter” data across a (reasonable, let’s say <8) number of partitions/tablespaces to spread IO (in this case we are not assuming a one-drive IO subsystem) it would probably be preferable to have one partition/tablespace per drive like this rather than trying to use all drives in a raid/vdev?

I know the ultimate answer here is probably “try it for yourself”, just curious if there’s any common wisdom here on what actually works, beyond the super basic “lol it lets you split IO” that everyone copies/pastes. Any success stories/horror stories?

Paul MaudDib fucked around with this message at 08:57 on Jun 29, 2021

BlankSystemDaemon
Mar 13, 2009




There's only one bit of general-purpose tuning that you should listen to.
The rest are very workload-specific; for example, you'll often see logbias=latency thrown around, but unless you're only doing a lot of big writes without a pair of SLC SSDs as log devices, it's going to have the opposite effect.

Personally, I favour an idea I've brought up to one of the people who work on both FreeBSD and Postgres: add a completely separate dataset type that exposes the dnode objects as key=value stores, rather than going through the ZFS POSIX Layer.
The ZPL is what's responsible for making ZFS act like a filesystem, but the internal structure of the dnodes means it's effectively just an abstraction layer that can be gotten rid of, while still having ZFS provide the couple of things Postgres needs to guarantee ACID compliance.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
that's interesting since key-value stores are basically their own type of database entirely, and one that a lot of people use.

what would that actually look like in terms of ZFS? would it be like a dataset with tons of (non-zfs-related) properties or what?

out of curiosity is there any historical precedent in sysv-unix or BSD for an in-kernel key-value store? edit: I am likely thinking of berkeleydb which doesn't seem to be in-kernel

Paul MaudDib fucked around with this message at 01:18 on Jul 6, 2021

BlankSystemDaemon
Mar 13, 2009




I've no clue what it would look like, to be honest.
One way would be to expose it as a character device through devfs (or whatever populates /dev on non-FreeBSD systems), but it could just as well be a message-passing system of some description - although I would probably favour the first if I had a vote.

I'm not entirely sure what you're asking; most Unix-like OSes include some database.
FreeBSD includes a private instance of sqlite, i.e. it's accessible to things shipped with FreeBSD but not exposed to the FreeBSD userland (similar to lua and a few other things).

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
Hoping somebody here has experience with Oracle Advanced Queueing.

I'm currently trying to propagate between queues over a database link. The destination queue is not in the schema targeted by the database link, but in another schema on that same remote database. Propagation seems to assume, however, that the destination queue is exactly in the schema specified by the database link. Documentation says AQ does not support synonyms, either.

Am I just SOL? I know I can get it done with stored procedures, but I'd like to try the built-in methods first.

MrKatharsis
Nov 29, 2003

feel the bern
Create a synonym?

If I were your DBA I would advise not to use database links at all if it can be avoided.

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
AQ does not like synonyms.

Sadly, the dblink is here to stay. Half the reason we want to queue stuff is to buffer messages in case the "remote" system isn't available for some reason. But it seems there's further configuration fuckery afoot and I've kicked it to our DBAs for the time being.

MrKatharsis
Nov 29, 2003

feel the bern
lol loving Oracle

NotNut
Feb 4, 2020
I'm doing some stuff in Unreal Engine and I need to figure out how these float values map to these bytes

0.01: 0A D7 23 3C
0.10: CD CC CC 3D
0.20: CD CC 4C 3E
0.30: 9A 99 99 3E
0.5: 00 00 00 3F
0.8: CD CC 4C 3F
0.9: 66 66 66 3F
0.960: 8F C2 75 3F
1.0: 00 00 80 3F
1.10: CD CC 8C 3F
1.120: 29 5C 8F 3F
1.490: 52 B8 BE 3f
2.0: 00 00 00 40

Can find more examples if necessary

Oh and a value of 0.0 is four bytes of zeroes

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Those look like regular-rear end single-precision floats.

Also this is the SQL thread, you might prefer the stickied general programming thread if you have other questions.
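Fun party trick: you can check that without leaving SQL, since float4send() spits out the raw bytes (in network/big-endian order, i.e. reversed relative to your dump):

SQL code:
select float4send(1.12::float4);  -- \x3f8f5c29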

NotNut
Feb 4, 2020

Jabor posted:

Those look like regular-rear end single-precision floats.

Also this is the SQL thread, you might prefer the stickied general programming thread if you have other questions.

Lol oops. Well, thank you

Moonwolf
Jun 29, 2004

Flee from th' terrifyin' evil of "NHS"!


Gatac posted:

AQ does not like synonyms.

Sadly, the dblink is here to stay. Half the reason we want to queue stuff is to buffer messages in case the "remote" system isn't available for some reason. But it seems there's further configuration fuckery afoot and I've kicked it to our DBAs for the time being.

I've used Oracle a fair bit, but never AQ. My guess as to why synonyms and other 'put the queue in a different schema' stuff don't work is because of permissions fuckery. A schema has every privilege over objects stored in it, but you have to grant everything to outside schemata. If they never bothered exposing the relevant privilege through a grant then you'd be stuck with this.

Is there any reason you can't just queue straight onto the destination schema?

Gatac
Apr 22, 2008

Fifty Cent's next biopic.

Moonwolf posted:

I've used Oracle a fair bit, but never AQ. My guess as to why synonyms and other 'put the queue in a different schema' stuff don't work is because of permissions fuckery. A schema has every privilege over objects stored in it, but you have to grant everything to outside schemata. If they never bothered exposing the relevant privilege through a grant then you'd be stuck with this.

Is there any reason you can't just queue straight onto the destination schema?

Actually, we realized that part of the problem is that subscriber notifications for the queues aren't working on the databases involved. As in, I can set them up, but the DBMS does not spawn a notification job to process them. The DBAs aren't into AQ so they can't help me with that problem. And I can't queue on the destination schema because there's concern that that database may not be 100% available. Part of the reason for AQ is to decouple the processing between databases, so the destination going bye-bye does not leave dangling transactions on the source database.

Solution as I see it now is to have a queue on the source schema, plus a job that dequeues from there in a loop with an infinite wait and executes a stored procedure on the destination schema that does the actual work. If the destination is unavailable, catch the exception and roll back the dequeue for a retry.

I don't like it, mind. I'd prefer to have the scheduler job wait, say, 10 seconds on the dequeue before exiting the loop, and then use subscriber notifications for a callback procedure that just enables that job as needed. That squares the circle of not having the job endlessly sitting around while making sure messages are dequeued in order, which you can't guarantee if you process directly in the callback procedure.
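The dequeue loop would look roughly like this (heavily simplified, with made-up queue/procedure/type names; the rollback works because dequeue is transactional by default):

code:
declare
  opts    dbms_aq.dequeue_options_t;
  props   dbms_aq.message_properties_t;
  msgid   raw(16);
  msg     outbound_payload_t;  -- hypothetical: whatever your queue's payload type is
  timeout exception;
  pragma exception_init(timeout, -25228);
begin
  opts.wait := 10;  -- block up to 10s on an empty queue
  loop
    begin
      dbms_aq.dequeue(queue_name         => 'outbound_q',
                      dequeue_options    => opts,
                      message_properties => props,
                      payload            => msg,
                      msgid              => msgid);
    exception
      when timeout then
        exit;  -- queue drained; the notification callback re-enables the job
    end;
    begin
      process_remote(msg);  -- hypothetical proc that does the work over the dblink
      commit;
    exception
      when others then
        rollback;  -- puts the message back on the queue for a retry
        exit;      -- and stop hammering an unavailable destination
    end;
  end loop;
end;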

Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


NotNut posted:

I'm doing some stuff in Unreal Engine and I need to figure out how these float values map to these bytes

0.01: 0A D7 23 3C
0.10: CD CC CC 3D
0.20: CD CC 4C 3E
0.30: 9A 99 99 3E
0.5: 00 00 00 3F
0.8: CD CC 4C 3F
0.9: 66 66 66 3F
0.960: 8F C2 75 3F
1.0: 00 00 80 3F
1.10: CD CC 8C 3F
1.120: 29 5C 8F 3F
1.490: 52 B8 BE 3f
2.0: 00 00 00 40

Can find more examples if necessary

Oh and a value of 0.0 is four bytes of zeroes

Fun fact: you can also represent 0.0 as 00 00 00 80 in your byte order there (but it's negative zero)

Away all Goats
Jul 5, 2005

Goose's rebellion

code:
CREATE OR REPLACE PROCEDURE member_ck_sp 
    (p_user IN bb_shopper.username%TYPE,
    p_password IN OUT VARCHAR2,
    p_cookie OUT VARCHAR2,
    ap_check OUT VARCHAR2)
    IS
BEGIN
    SELECT firstname||' '||lastname, cookie
    INTO p_password, p_cookie
    FROM bb_shopper
    WHERE username = p_user
    AND password = p_password;
    DBMS_OUTPUT.PUT_LINE('Member: '||p_password||' cookie '||p_cookie);
    
    EXCEPTION
    WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('Invalid');    
    ap_check := 'INVALID';
END member_ck_sp;
What am I doing wrong here? When I try to call the procedure with
code:
EXEC member_ck_sp(p_user => 'rat55', p_password => 'kile');
I get the error "PLS-00306: wrong number or types of arguments in call to 'MEMBER_CK_SP'"

For reference this is the exercise question I'm trying to answer

quote:

The home page of the Brewbean’s Web site has an option for members to log on with their IDs
and passwords. Develop a procedure named MEMBER_CK_SP that accepts the ID and password
as inputs, checks whether they make up a valid logon, and returns the member name and cookie
value. The name should be returned as a single text string containing the first and last name.
The head developer wants the number of parameters minimized so that the same
parameter is used to accept the password and return the name value. Also, if the user doesn’t
enter a valid username and password, return the value INVALID in a parameter named
p_check. Test the procedure using a valid logon first, with the username rat55 and password
kile. Then try it with an invalid logon by changing the username to rat.


(This is using oracle 11g)

Away all Goats fucked around with this message at 20:08 on Jul 7, 2021

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
You need to assign variables to the OUT parameters, not literals. Like this:

code:
declare
  v_password varchar2(200);
  v_cookie   varchar2(200);
  v_check    varchar2(200);
begin
  v_password := 'kile';
  
  member_ck_sp(p_user => 'rat55', p_password => v_password, p_cookie => v_cookie, p_check => v_check);
end;
Also, I'm sure you're already aware, but this specification is pretty much canonically How Not To Stored Procedure, so please take this as the exercise it is and not as an example of how you should be coding in production.

Gatac fucked around with this message at 20:33 on Jul 7, 2021

Away all Goats
Jul 5, 2005

Goose's rebellion

Thanks. I tried running that code snippet instead of the EXEC statement but I'm still getting the same PLS-00306 error.

quote:

Also, I'm sure you're already aware, but this specification is pretty much canonically How Not To Stored Procedure, so please take this as the exercise it is and not as an example of how you should be coding in production.
noted

Edit:
I see what you mean and got it with this
code:
DECLARE 
  lv_pass_txt VARCHAR2(30):='kile'; 
  lv_cook_num bb_shopper.cookie%TYPE;
  lv_chk_txt VARCHAR2(7); 
BEGIN 
  member_ck_sp('rat55',lv_pass_txt,lv_cook_num,lv_chk_txt);
END; 
Thanks!

Away all Goats fucked around with this message at 23:01 on Jul 7, 2021

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Jabor posted:

Those look like regular-rear end single-precision floats.

Also this is the SQL thread, you might prefer the stickied general programming thread if you have other questions.
I gotta say, you really missed out on an opportunity here.

code:
megaT=> WITH bigendian AS (SELECT x'3F8F5C29'::bit(32)::text x) SELECT (1+substr(x,10,23)::bit(23)::integer*pow(2,-23))*pow(2,substr(x,2,8)::bit(8)::integer-127) "werks?" FROM bigendian;

      werks?      
------------------
 1.12000000476837
I leave the byte-wise reversal as an exercise to the reader. :pseudo:

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
Trying to get to grips with PostgreSQL. We are aiming to migrate a database from Oracle to PostgreSQL and I'm doing some exploratory work on that, just trying to bash together a proof of concept.

ora2pg has generated a file containing statements like this:

ALTER FUNCTION getcurrentuserid OWNER TO MyName;

When I try to execute this script from the "psql" command line program, it says 'ERROR: role "MyName" does not exist'. But I am also looking at it in the admin app "pgAdmin 4" and it lists MyName under "Login/Group Roles". I can get it to show me the following SQL:

code:
-- Role: "MyName"
-- DROP ROLE "MyName";

CREATE ROLE "MyName" WITH
  LOGIN
  NOSUPERUSER
  INHERIT
  NOCREATEDB
  NOCREATEROLE
  NOREPLICATION
  ENCRYPTED PASSWORD '(redacted)';

GRANT "SOME_ROLE" TO "MyName";
I am running psql as the "postgres" super user, so it's not a permissions issue. Why does it say MyName doesn't exist when it clearly does? I tried variations on capitalization, it didn't make a difference.

code:
Server [localhost]:
Database [postgres]:
Port [5432]:
Username [postgres]:
Password for user postgres:
psql (13.3)
WARNING: Console code page (850) differs from Windows code page (1252)
         8-bit characters might not work correctly. See psql reference
         page "Notes for Windows users" for details.
Type "help" for help.

postgres=# \i C:/perl/perl/site/bin/grant_output.sql
SET
psql:C:/perl/perl/site/bin/grant_output.sql:23: ERROR:  role "myname" does not exist
postgres=#

DELETE CASCADE
Oct 25, 2017

i haven't washed my penis since i jerked it to a phtotograph of george w. bush in 2003
in postgres all identifiers are case-insensitive by default, which actually means they automatically fold to lowercase. if you don't want this behavior, you enclose the identifier in double quotes, and postgres will preserve the original case. but once you have done that, you need to keep using the double quotes, or it will fold to lowercase and then not match. so if you created the role MyName, you can refer to it as MyName, or MYNAME, or mYnAmE, and they will all fold to myname. but if you create the role "MyName", you must refer to it as "MyName" including the quotes, and the other case variants will not work.
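In your case, concretely:

SQL code:
CREATE ROLE "MyName" LOGIN;                          -- stored case-sensitively, as MyName
ALTER FUNCTION getcurrentuserid OWNER TO MyName;     -- unquoted folds to myname: "role does not exist"
ALTER FUNCTION getcurrentuserid OWNER TO "MyName";   -- quoted: matches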

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

DELETE CASCADE posted:

you must refer to it as "MyName" including the quotes

including the quotes!!!

this was it, I hadn't included the quotes, but what the gently caress

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
Same deal in Oracle, though. Double-quoted identifiers are case sensitive, unquoted are not.


Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Gatac posted:

Same deal in Oracle, though. Double-quoted identifiers are case sensitive, unquoted are not.

the issue here is not case sensitivity. I understand the concept of case sensitivity. This is about the double quotes being part of the name, inasmuch as they're required to be present or not (disregarding the case where the name is an SQL keyword). That's what my what the gently caress was about.

and I was sure that that wasn't the case in Oracle, but gently caress me if it isn't true. Well, I concede the point, but being on the same side as Oracle is nothing to boast about!

e: I'm going to drag this thread into a derail if I'm not careful, but this is one of the most WTF things I've ever encountered in databases. The double quotes are not meant to be part of the name, gently caress sakes.

Hammerite fucked around with this message at 21:21 on Jul 8, 2021
