|
REFRESH MATERIALIZED VIEW requires an ACCESS EXCLUSIVE lock on the view so that's the culprit. Check if you can use CONCURRENTLY?
|
# ? Jun 17, 2021 18:25 |
|
|
# ? Jun 8, 2024 09:55 |
|
TheFluff posted:REFRESH MATERIALIZED VIEW requires an ACCESS EXCLUSIVE lock on the view so that's the culprit. Check if you can use CONCURRENTLY? Gah! Should have thought about it. CONCURRENTLY should be possible, and thankfully we're dropping those matviews in the next release anyway.
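For reference, the CONCURRENTLY variant needs a unique index covering all rows of the matview before it will work; a sketch with hypothetical names:

```sql
-- CONCURRENTLY requires a unique index on the materialized view
-- (view and column names here are made up)
CREATE UNIQUE INDEX my_matview_uq ON my_matview (id);

-- refreshes without blocking concurrent SELECTs on the view
REFRESH MATERIALIZED VIEW CONCURRENTLY my_matview;
```

The tradeoff is that CONCURRENTLY does a diff-and-update rather than a wholesale swap, so it can be slower when most rows change.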
|
# ? Jun 18, 2021 11:34 |
|
Can anyone recommend a tool to compare two instances of the same oracle database in two disconnected environments?
|
# ? Jun 18, 2021 19:59 |
Withnail posted:Can anyone recommend a tool to compare two instances of the same oracle database in two disconnected environments? Haven't tested it against an Oracle database, but Visual Studio can compare SQL database schemas.
|
|
# ? Jun 21, 2021 06:41 |
|
Yet another Postgres (12) trivia question. I have a TEXT ARRAY column containing the flattened model of a tree structure. So like code:
When I need to grab all the ancestors of a certain node, it's a simple self-join: code:
Since the table doesn't change too often, I could create a materialized view and refresh it upon change, but I'd rather keep things cleaner if possible.
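The code: blocks above didn't survive, but a hypothetical schema along the lines described might look like this (table name, column names, and path convention are all assumptions):

```sql
-- hypothetical reconstruction: each row stores its full ancestry as a text array
CREATE TABLE node (
    id   text PRIMARY KEY,
    path text[] NOT NULL   -- e.g. '{A,AA,AAB}': root first, the node itself last
);

-- all ancestors of node 'AAB': any node whose id appears in its path
SELECT anc.*
FROM node n
JOIN node anc ON anc.id = ANY (n.path)
WHERE n.id = 'AAB'
  AND anc.id <> n.id;
```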
|
# ? Jun 21, 2021 12:06 |
|
edit: missed a ) xD
uguu fucked around with this message at 20:43 on Jun 21, 2021 |
# ? Jun 21, 2021 20:26 |
|
Withnail posted:Can anyone recommend a tool to compare two instances of the same oracle database in two disconnected environments? Toad or OEM. Or a homegrown script from 3 DBAs ago.
|
# ? Jun 22, 2021 00:08 |
|
NihilCredo posted:Yet another Postgres (12) trivia question. Does your code actually work? Are you storing the path with the root node last in the array? I'm really confused by your example. At first I thought your tree looked like this code:
In any case, if the tree structure doesn't change often, I think the MPTT (modified preorder tree traversal) pattern can work for you, if changing the schema is acceptable. See e.g. here for an overview. Or I guess you could try the ltree module but I've never tried that myself. TheFluff fucked around with this message at 14:22 on Jun 22, 2021 |
# ? Jun 22, 2021 14:17 |
|
NihilCredo posted:Yet another Postgres (12) trivia question. have you considered implementing this as an "ltree" type instead of a text array? I think that would be the canonical way to do it and supposedly you can do GIST indexes on these. https://www.postgresql.org/docs/13/ltree.html http://patshaughnessy.net/2017/12/13/saving-a-tree-in-postgres-using-ltree
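For reference, the ltree module from those links works roughly like this (table and label names invented):

```sql
CREATE EXTENSION IF NOT EXISTS ltree;

CREATE TABLE node (
    path ltree PRIMARY KEY
);

-- GiST index supports the ancestor/descendant operators
CREATE INDEX node_path_gist ON node USING GIST (path);

-- ancestors-or-self of a given path: @> means "is ancestor of"
SELECT * FROM node WHERE path @> 'Top.Science.Astronomy';

-- descendants-or-self: <@ means "is descendant of"
SELECT * FROM node WHERE path <@ 'Top.Science';
```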
|
# ? Jun 22, 2021 15:52 |
|
TheFluff posted:Does your code actually work? Are you storing the path with the root node last in the array? I'm really confused by your example. Yeah it works (it's in production), unless I bungled something when cleaning up the code for posting. Consider the AAAB example. The join condition is: code:
code:
quote:I guess you could try creating a normal B-tree index on a reversed string representation of the array and query it with LIKE? Just throwing stupid ideas out there. There's an index type specifically optimized for LIKE that we use already, so it might not be such a stupid idea. Would need to make sure to use an absolutely unambiguous separator character.

quote:In any case, if the tree structure doesn't change often, I think the MPTT (modified preorder tree traversal) pattern can work for you, if changing the schema is acceptable. See e.g. here for an overview. I need to sit down and read this carefully but it might be what I'm looking for, thanks. Certainly the issues it highlights at the start with the adjacency model match my experience.

Paul MaudDib posted:have you considered implementing this as an "ltree" type instead of a text array? I used ltree in the first prototype but it is not supported properly by the driver library we use. Although I suppose I could change the database column to an ltree and then somehow cast it back to an array or something else when querying.
|
# ? Jun 22, 2021 16:08 |
|
I'm just an idiot and forgot postgres arrays are 1-indexed, so never mind that part.

MPTT is pretty cool and has worked well for me in the past, but since it can involve a bunch of calculations on the application side, it might be worth checking if there's a library available that does it for whatever application stack you might have. There's an implementation for Django, for example, that might be worth sneaking glances at even if you're not using it.

e: or look up a closure table as an alternative solution TheFluff fucked around with this message at 17:25 on Jun 22, 2021 |
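The closure-table alternative mentioned in the edit stores one row per (ancestor, descendant) pair, which makes ancestor lookups a plain indexed query; a sketch with invented names:

```sql
CREATE TABLE node (id text PRIMARY KEY);

-- one row per ancestor/descendant pair, including depth for ordering
CREATE TABLE node_closure (
    ancestor   text NOT NULL REFERENCES node,
    descendant text NOT NULL REFERENCES node,
    depth      int  NOT NULL,   -- 0 = self, 1 = parent, ...
    PRIMARY KEY (ancestor, descendant)
);

-- all ancestors of 'AAB', nearest first
SELECT ancestor
FROM node_closure
WHERE descendant = 'AAB' AND depth > 0
ORDER BY depth;
```

The cost is that inserts and moves have to maintain all the pair rows, which is where an application-side library helps.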
# ? Jun 22, 2021 16:50 |
|
NihilCredo posted:I used ltree in the first prototype but it is not supported properly by the driver library we use. Although I suppose I could change the database column to an ltree and then somehow cast it back to an array or something else when querying.

yeah I've run into that in the past too, for example the JSON type column was not really supported in Hibernate (at the time? possibly improved since then?). The way I handled it at the time was to write support for the type myself - not full support, but there was a Java-code level way to write a shim that basically treated the column as just a normal text column so it could be read/written. It sounds complex but it was basically a couple dozen lines of code iirc. This doesn't mean you could write arbitrary values to the column, as theoretically that would cause SQL errors if postgres couldn't parse what was going on, but you could serialize a true json object and just shove the text into that field, or maintain a "shadow" object that was a true JSON type and have the application work on the "true" object but feed the text method to the ORM layer, and have the "get" method on the text method call toString() and serialize it only then.

This is hacky, but another option is you could do a view with an INSTEAD OF INSERT OR UPDATE trigger on it: the view provides a text type but takes writes and passes them through, converting that view-column from text into an actual ltree/jsonb column on the underlying table. Obviously this approach doesn't actually do anything about your ORM/driver not supporting the type for queries/etc - you would have to write these particular queries using native SQL and not (e.g.) HQL or similar high-level query languages - but this gets you most of the bang for the smallest buck. You could just mostly treat it like text, and write those lookup queries natively.
if you end up not being able to directly do an index the way you want, one option is you could build a structure which represents the index yourself and update it in parallel. Not as easy as using a built-in index that just works automatically, obviously, but if you don't want to use the built-in support, whatcha gonna do.

This is vaguely ringing a bell in the context of full-text or fuzzy-text searching (possibly before postgres had that support, or for people for whom the support is not suitable?) - obviously it's not exactly what you want to do, but that might be a similar task you could look at in the context of implementation approaches and the best way to implement it efficiently within your application / within postgres. It was something like implementing word triplets or character triplets that would go into some kind of a tree structure that could be efficiently searched.
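A minimal sketch of that INSTEAD OF trigger idea, assuming an ltree column and Postgres 11+ syntax (all names invented):

```sql
CREATE EXTENSION IF NOT EXISTS ltree;

CREATE TABLE node (id bigint PRIMARY KEY, path ltree NOT NULL);

-- the driver-facing view exposes the path as plain text
CREATE VIEW node_text AS
    SELECT id, path::text AS path FROM node;

CREATE FUNCTION node_text_write() RETURNS trigger AS $$
BEGIN
    -- cast the text back to ltree on the way in; bad input raises an error
    INSERT INTO node (id, path) VALUES (NEW.id, NEW.path::ltree);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER node_text_ins INSTEAD OF INSERT ON node_text
    FOR EACH ROW EXECUTE FUNCTION node_text_write();
```

The ORM reads and writes `node_text` as if it were a text column; index-assisted ltree queries still have to go through native SQL against `node`.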
|
# ? Jun 22, 2021 16:51 |
I've got a table with columns "date", "location", and "visitor_count", and I need to return just the rows with the date with the most visitors, e.g. from: code:
code:
EDIT: hmm, something like code:
a foolish pianist fucked around with this message at 19:45 on Jun 23, 2021 |
|
# ? Jun 23, 2021 19:30 |
|
a foolish pianist posted:I've got a table with columns "date", "location", and "visitor_count", and I need to return just the rows with the date with the most visitors, e.g. from: you can use a CTE: SQL code:
There are other ways to do it with nested subqueries if you're on a cursed database that doesn't have CTE's (e.g. Mysql before version 8 ), but just say so in that case. e: saw your edit; you got it. performance probably not meaningfully different from my solution, use whichever you like. I personally tend to find CTE's more readable than subqueries in joins, but that's very subjective. CTE's can have different performance characteristics though depending on what you're doing but in this case I doubt it matters. TheFluff fucked around with this message at 20:15 on Jun 23, 2021 |
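The original code: block is gone, but a CTE of the sort described might look like this (table name assumed to be `visits`):

```sql
-- total visitors per date, then keep only the rows on the busiest date
WITH totals AS (
    SELECT date, SUM(visitor_count) AS total
    FROM visits
    GROUP BY date
)
SELECT v.*
FROM visits v
JOIN totals t ON t.date = v.date
WHERE t.total = (SELECT MAX(total) FROM totals);
```

The window-function variant from the edit would rank dates by total and keep rank 1; both should plan similarly on a table this shape.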
# ? Jun 23, 2021 20:09 |
|
TheFluff posted:I just have a (possibly annoying) habit of always preferring exists() if I'm not selecting any columns from the joined table can we be sql friends
|
# ? Jun 24, 2021 03:22 |
|
DELETE CASCADE posted:can we be sql friends let's
|
# ? Jun 24, 2021 13:18 |
|
I used to be a subquery guy but after a lot of soul-searching I've realized that CTE's are the One True Way, especially with really complex queries. We all have our own individual path of enlightenment though.
|
# ? Jun 24, 2021 17:25 |
|
What are the low-level performance implications of tablespaces in postgres? Obviously the generic answer is “they let you split load across multiple IO devices” but I’m curious about the specifics.

Let’s say we take the concept to the extreme and put every single table on its own tablespace, and each tablespace is its own ZFS dataset. And let’s say you have one transaction that hits on three different tables (and thus three tablespaces). What are the implications for that in terms of transaction commit behavior, WAL writes, followbehind writes, autovacuum pauses, etc, vs if it all just lived in one tablespace? Let’s say that in both cases they live on one physical drive so both installations have identical disk IO capabilities available (we’re not giving the multi-dataset one more drives). Does managing more but smaller tablespaces usually work better than one big one, and if so, is there a reason you wouldn’t want to take it to the extreme like that, given how cheap a dataset can be, how it organizes similar data for optimal compression and caching on the dataset, etc? They can all go into a parent dataset so you can still do an atomic snapshot that captures all tablespaces at once, etc.

This also bears on declarative partitioned tables, I think, since those are basically tables mapped into one giant “view” (not really but) and can (and usually do) have their own tablespaces, right? Especially if you can partition data that is frequently getting rototilled by update/delete, it should lead to smaller vacuums (or at least isolate this IO from the rest of the table if it’s on a separate disk)?

So in this same sense, if you partition your data (and again to be fair let’s assume it’s on the same IO subsystem), is there a reason that would hurt performance by “over-partitioning” the data, or would that help by allowing more requests “in-flight” and increasing IO depth?
If you are using some aspect of your data to “scatter” data across a (reasonable, let’s say <8) number of partitions/tablespaces to spread IO (in this case we are not assuming a one-drive IO subsystem), would it be preferable to have one partition/tablespace per drive like this, rather than trying to use all drives in a raid/vdev?

I know the ultimate answer here is probably “try it for yourself”, just curious if there’s any common wisdom here on what actually works, beyond the super basic “lol it lets you split IO” that everyone copies/pastes. Any success stories/horror stories? Paul MaudDib fucked around with this message at 08:57 on Jun 29, 2021 |
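For concreteness, the per-tablespace/per-partition setup being described looks something like this (paths and names invented, syntax per Postgres 12):

```sql
-- one tablespace per ZFS dataset mountpoint
CREATE TABLESPACE ts_hot  LOCATION '/tank/pg/hot';
CREATE TABLESPACE ts_cold LOCATION '/tank/pg/cold';

-- declarative partitions can each live in their own tablespace
CREATE TABLE events (
    id      bigint,
    created date NOT NULL
) PARTITION BY RANGE (created);

CREATE TABLE events_2021 PARTITION OF events
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01')
    TABLESPACE ts_hot;

CREATE TABLE events_2020 PARTITION OF events
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01')
    TABLESPACE ts_cold;
```

Note the WAL stays in `pg_wal` regardless of where the tables live, so commit behavior is the same either way; tablespaces only relocate heap and index IO.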
# ? Jun 29, 2021 08:41 |
There's only one bit of general-purpose tuning that you should listen to. The rest are very workload-specific; for example, you'll often see logbias=latency be thrown around, but unless you're only doing a lot of big writes without a pair of SLC SSDs as log devices, it's going to have the opposite effect. Personally, I favour an idea I've brought up to one of the people who works on both FreeBSD and Postgres: add a completely separate dataset type that exposes the dnode objects as key=value stores, rather than going through the ZFS POSIX Layer. The ZPL is what's responsible for making ZFS act like a filesystem, but the internal structure of the dnodes means it's effectively just an abstraction layer that can be gotten rid of, while having ZFS provide a couple of things that Postgres needs to guarantee ACID compliance.
|
|
# ? Jun 30, 2021 09:50 |
|
that's interesting, since key-value stores are basically their own type of database entirely, and one that a lot of people use. what would that actually look like in terms of ZFS? would it be like a dataset with tons of (non-zfs-related) properties, or what? out of curiosity, is there any historical precedent in SysV Unix or BSD for an in-kernel key-value store? edit: I am likely thinking of berkeleydb, which doesn't seem to be in-kernel Paul MaudDib fucked around with this message at 01:18 on Jul 6, 2021 |
# ? Jun 30, 2021 10:01 |
I've no clue what it would look like, to be honest. One way would be to expose it as a character device through devfs (or the equivalent on non-FreeBSD that's responsible for populating /dev), but it could just as well be a message passing system of some description - although I would probably favour the first if I had a vote. I'm not entirely sure what you're asking; most Unix-like OSes include some database. FreeBSD includes a private instance of sqlite, i.e. it's accessible to things shipped with FreeBSD, but not accessible to the FreeBSD userland (similar to lua and a few other things).
|
|
# ? Jun 30, 2021 10:18 |
|
Hoping somebody here has experience with Oracle Advanced Queueing. I'm currently trying to propagate between queues over a database link. The destination queue is not in the schema targeted by the database link, but in another schema on that same remote database. Propagation seems to assume, however, that the destination queue is exactly in the schema specified by the database link. Documentation says AQ does not support synonyms, either. Am I just SOL? I know I can get it done with stored procedures, but I'd like to try the built-in methods first.
|
# ? Jul 2, 2021 17:15 |
|
Create a synonym? If I were your DBA I would advise not to use database links at all if it can be avoided.
|
# ? Jul 5, 2021 22:01 |
|
AQ does not like synonyms. Sadly, the dblink is here to stay. Half the reason we want to queue stuff is to buffer messages in case the "remote" system isn't available for some reason. But it seems there's further configuration fuckery afoot and I've kicked it to our DBAs for the time being.
|
# ? Jul 6, 2021 10:57 |
|
lol loving Oracle
|
# ? Jul 6, 2021 23:33 |
|
I'm doing some stuff in Unreal Engine and I need to figure out how these float values map to these bytes:

0.01: 0A D7 23 3C
0.10: CD CC CC 3D
0.20: CD CC 4C 3E
0.30: 9A 99 99 3E
0.5: 00 00 00 3F
0.8: CD CC 4C 3F
0.9: 66 66 66 3F
0.960: 8F C2 75 3F
1.0: 00 00 80 3F
1.10: CD CC 8C 3F
1.120: 29 5C 8F 3F
1.490: 52 B8 BE 3f
2.0: 00 00 00 40

Can find more examples if necessary. Oh, and a value of 0.0 is four bytes of zeroes.
|
# ? Jul 7, 2021 01:30 |
|
Those look like regular-rear end single-precision floats, stored little-endian (e.g. 00 00 00 3F read back-to-front is 0x3F000000, which is IEEE 754 for 0.5). Also this is the SQL thread, you might prefer the stickied general programming thread if you have other questions.
|
# ? Jul 7, 2021 02:11 |
|
Jabor posted:Those look like regular-rear end single-precision floats. Lol oops. Well, thank you
|
# ? Jul 7, 2021 02:14 |
|
Gatac posted:AQ does not like synonyms. I've used Oracle a fair bit, but never AQ. My guess as to why synonyms and other 'put the queue in a different schema' stuff don't work is because of permissions fuckery. A schema has every privilege over objects stored in it, but you have to grant everything to outside schemata. If they never bothered exposing the relevant privilege through a grant then you'd be stuck with this. Is there any reason you can't just queue straight onto the destination schema?
|
# ? Jul 7, 2021 13:00 |
|
Moonwolf posted:I've used Oracle a fair bit, but never AQ. My guess as to why synonyms and other 'put the queue in a different schema' stuff don't work is because of permissions fuckery. A schema has every privilege over objects stored in it, but you have to grant everything to outside schemata. If they never bothered exposing the relevant privilege through a grant then you'd be stuck with this.

Actually, we realized that part of the problem is that subscriber notifications for the queues aren't working on the databases involved. As in, we can set them up, but the DBMS does not spawn a notification job to process them. The DBAs aren't into AQ so they can't help me with that problem. And I can't queue on the destination schema because there's concern that database may not be 100% available. Part of the reason for AQ is to decouple the processing between databases, so the destination going bye-bye does not leave dangling transactions on the source database.

The solution as I see it now is to have a queue on the source schema and a job that dequeues from there in a loop with infinite wait and executes a stored procedure on the destination schema that does the actual work. If the destination is unavailable, catch the exception and roll back the dequeue for a retry. I don't like it, mind. I'd prefer to have the scheduler job wait, say, 10 seconds on the dequeue before exiting the loop, and then use subscriber notifications for a callback procedure that just enables that job as needed. That squares the circle of not having the job endlessly sit around while making sure messages are dequeued in order, which you can't guarantee if you're processing directly with the callback procedure.
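A rough, untested PL/SQL sketch of that dequeue-loop idea - queue name, payload type, dblink, and remote procedure are all made up:

```sql
DECLARE
    opts      dbms_aq.dequeue_options_t;
    props     dbms_aq.message_properties_t;
    msg       my_payload_t;   -- hypothetical payload object type
    msgid     RAW(16);
    e_timeout EXCEPTION;
    PRAGMA EXCEPTION_INIT(e_timeout, -25228);  -- dequeue wait timed out
BEGIN
    opts.wait := 10;   -- wait up to 10 seconds per dequeue, then let the job exit
    LOOP
        dbms_aq.dequeue(
            queue_name         => 'src_schema.my_queue',
            dequeue_options    => opts,
            message_properties => props,
            payload            => msg,
            msgid              => msgid);
        -- hypothetical stored procedure on the destination schema
        dest_schema.process_msg@remote_db(msg);
        COMMIT;   -- commits the dequeue only after the remote call succeeds
    END LOOP;
EXCEPTION
    WHEN e_timeout THEN
        NULL;      -- queue drained; the notification callback re-enables the job
    WHEN OTHERS THEN
        ROLLBACK;  -- destination unavailable: message stays queued for retry
END;
```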
|
# ? Jul 7, 2021 13:21 |
|
NotNut posted:I'm doing some stuff in Unreal Engine and I need to figure out how these float values map to these bytes Fun fact: you can also represent 0.0 as 80 00 00 00 (but it's negative zero)
|
# ? Jul 7, 2021 18:29 |
|
code:
code:
For reference, this is the exercise question I'm trying to answer: quote:The home page of the Brewbean's Web site has an option for members to log on with their IDs (This is using Oracle 11g) Away all Goats fucked around with this message at 20:08 on Jul 7, 2021 |
# ? Jul 7, 2021 19:31 |
|
You need to assign variables to the OUT parameters, not literals. Like this:code:
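Gatac's snippet didn't survive the code: block, but the general shape - declaring variables to receive the OUT parameters instead of passing literals - is something like this (procedure name and signature are hypothetical):

```sql
-- anonymous block: v_name and v_status receive the OUT parameters
DECLARE
    v_name   VARCHAR2(50);
    v_status VARCHAR2(20);
BEGIN
    member_login_sp(p_id => 25, p_name => v_name, p_status => v_status);
    DBMS_OUTPUT.PUT_LINE(v_name || ': ' || v_status);
END;
/
```

Passing a literal like `'Y'` where the procedure expects an OUT parameter is what triggers PLS-00306 (wrong number or types of arguments), since OUT arguments must be writable.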
Gatac fucked around with this message at 20:33 on Jul 7, 2021 |
# ? Jul 7, 2021 20:29 |
|
Thanks. I tried running that code snippet instead of the EXEC statement but I'm still getting the same PLS-00306 error. quote:Also, I'm sure you're already aware, but this specification is pretty much canonically How Not To Stored Procedure, so please take this as the exercise it is and not as an example of how you should be coding in production. Edit: I see what you mean and got it with this code:
Away all Goats fucked around with this message at 23:01 on Jul 7, 2021 |
# ? Jul 7, 2021 22:21 |
|
Jabor posted:Those look like regular-rear end single-precision floats. code:
|
# ? Jul 7, 2021 23:25 |
|
Trying to get to grips with PostgreSQL. We are aiming to migrate a database from Oracle to PostgreSQL and I'm doing some exploratory work on that, just trying to bash together a proof of concept. ora2pg has generated a file containing statements like this: ALTER FUNCTION getcurrentuserid OWNER TO MyName; When I try to execute this script from the "psql" command line program, it says 'ERROR: role "MyName" does not exist'. But I am also looking at it in the admin app "pgAdmin 4" and it lists MyName under "Login/Group Roles". I can get it to show me the following SQL: code:
code:
|
# ? Jul 8, 2021 17:33 |
|
in postgres all identifiers are case-insensitive by default, which actually means they automatically fold to lowercase. if you don't want this behavior, you enclose the identifier in double quotes, and postgres will preserve the original case. but once you have done that, you need to keep using the double quotes, or it will fold to lowercase and then not match. so if you created the role MyName, you can refer to it as MyName, or MYNAME, or mYnAmE, and they will all fold to myname. but if you create the role "MyName", you must refer to it as "MyName" including the quotes, and the other case variants will not work.
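To illustrate with the role from the ora2pg example (sketch, not run against a live cluster):

```sql
CREATE ROLE "MyName";   -- quoted: case preserved, stored exactly as MyName

-- unquoted identifier folds to myname, which doesn't exist
ALTER FUNCTION getcurrentuserid OWNER TO MyName;    -- ERROR: role "myname" does not exist
ALTER FUNCTION getcurrentuserid OWNER TO "MyName";  -- works

CREATE ROLE othername;  -- unquoted: folds to lowercase
ALTER FUNCTION getcurrentuserid OWNER TO OTHERNAME; -- works: also folds to othername
```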
|
# ? Jul 8, 2021 19:09 |
|
DELETE CASCADE posted:you must refer to it as "MyName" including the quotes including the quotes!!! this was it, I hadn't included the quotes, but what the gently caress
|
# ? Jul 8, 2021 20:25 |
|
Same deal in Oracle, though. Double-quoted identifiers are case sensitive, unquoted are not.
|
# ? Jul 8, 2021 20:31 |
|
|
|
Gatac posted:Same deal in Oracle, though. Double-quoted identifiers are case sensitive, unquoted are not. the issue here is not case sensitivity. I understand the concept of case sensitivity. This is about the double quotes being part of the name, inasmuch as they're required to be present or not (disregarding the case where the name is an SQL keyword). That's what my what the gently caress was about. and I was sure that that wasn't the case in Oracle, but gently caress me if it isn't true. Well, I concede the point, but being on the same side as Oracle is nothing to boast about! e: I'm going to drag this thread into a derail if I'm not careful, but this is one of the most WTF things I've ever encountered in databases. The double quotes are not meant to be part of the name, gently caress sakes. Hammerite fucked around with this message at 21:21 on Jul 8, 2021 |
# ? Jul 8, 2021 21:12 |