|
Munkeymon posted:
The possibility of the table becoming a (potentially materialized) view that redirects all operations via triggers unspooled in my mind the other day, and I took a little walk to the window to watch the snow fall to try to push the maintenance implications out of my head.

The amount of technical debt you can rack up with database bullshit is amazing. No responsible developer sees this and thinks it's anything other than the high-interest credit card of technical design. Talk your dbadmin into the hack first, and then tell him how long it'll be deployed (protip: you can't).

Paul MaudDib fucked around with this message at 10:05 on Jan 25, 2020
# ? Jan 25, 2020 09:34 |
|
|
Jethro posted:
I'm pretty sure (though it's been a few years since I've done anything outside of Oracle) that you can't do a FOR EACH ROW trigger in MS SQL. Unless you did something like create a cursor in the trig... Nope, can't finish that thought.

That is exactly what you have to do.
|
# ? Jan 25, 2020 16:57 |
|
That belongs in the coding horrors thread
|
# ? Jan 25, 2020 18:01 |
|
One team we have loves the pattern of: look up the id for something in one table, insert all the fields into another table, which has a trigger on insert that updates all the fields in the second table from the first one. A lovely example is keeping track of active user accounts or whatever: grab the user id on login or action or whatever from the users table (along with manager, email, role, etc.), insert all of that into the active_users table, and then a trigger updates the second table with the manager, email, role, whatever, just to be safe I guess? Except that there's a very real chance this is happening 100s of times a minute in multiple parts of the product. The whole database is just a bird's nest of various triggers.

They also love bytea fields for some reason, GUID primary keys, and hashing strings for comparison, so we actually have table(s) like:

    (
        id uniqueidentifier primary key,
        name varchar,
        namehash varbinary
    )

And all the lookups are done on either the id or namehash column depending on the day. I'm mixing up T-SQL and psql, but meh. SQL is very slow, and that's the reason many features I've asked for can't be implemented, I've been told.
|
# ? Jan 26, 2020 23:31 |
|
power botton posted:
SQL is very slow

Sure, when you abuse the gently caress out of your database like that. Jesus. I'm really sorry your coworkers are morons.
|
# ? Jan 26, 2020 23:40 |
|
Paul MaudDib posted:
talk your dbadmin into the hack first, and then tell him how long it'll be deployed (protip: you can't)

Haha, dbadmin? We don't even have one on the hiring roadmap. I also don't even have a working hack :|

Just-In-Timeberlake posted:
That is exactly what you have to do.

Another SQL feature I've mostly avoided. Cool. Great.

I'm leaning strongly towards making the application figure out the index and writing some automated tests to guarantee the historical data doesn't come out wrong at this point. At least that doesn't feel like building a layer cake of One Weird Tricks.
|
# ? Jan 27, 2020 15:35 |
|
Munkeymon posted:I'm leaning strongly towards making the application figure out the index and writing some automated tests to guarantee the historical data doesn't come out wrong at this point. At least that doesn't feel like building a layer cake of One Weird Tricks.
|
# ? Jan 27, 2020 16:20 |
|
call me crazy but i have always, always, always avoided triggers. they seem like way more trouble than they're worth, and it seems like they can get out of hand super, super quick. like how would you even start to reverse engineer a data problem stemming from one of them?
|
# ? Jan 27, 2020 17:57 |
|
abelwingnut posted:
call me crazy but i have always, always, always avoided triggers. they seem like way more trouble than they're worth, and it seems like they can get out of hand super, super quick. like how would you even start to reverse engineer a data problem stemming from one of them?

I'm sure they have their place and someone much smarter than me can elaborate, but I haven't yet found a situation that absolutely warranted a trigger.
|
# ? Jan 27, 2020 21:22 |
|
Triggers are there to prevent data problems. They are what's used to guarantee that logically related data maintains correctness despite being stored in normalized tables. Triggers don't suffer from connection-specific enforcement rules (i.e. if you put data logic in an application, it doesn't apply when someone connects directly to the database, but triggers still do). What do you think ON DELETE CASCADE is?
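A minimal sketch of that last point, using Python's sqlite3 as a stand-in (table and column names are invented; SQLite needs foreign keys switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this per-connection

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
conn.execute("INSERT INTO orders VALUES (10, 1)")

# Deleting the parent row fires the cascade inside the database itself,
# so it applies even when someone connects directly, bypassing the app.
conn.execute("DELETE FROM users WHERE id = 1")
orphans = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The cleanup happens no matter which client issued the DELETE, which is the whole argument for putting this kind of rule in the database.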
|
# ? Jan 28, 2020 01:52 |
|
they are a pain in the rear end though
|
# ? Jan 28, 2020 06:11 |
|
Microsoft's recommended schema for an auditable table involves triggers that automatically set the invalidation date on the old row when a new one is written, IIRC. Pretty neat, but also somewhat special-purpose.
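That pattern can be sketched with a plain trigger; here in Python's sqlite3 for the sake of something runnable (schema names invented, and MSSQL's actual recommended schema differs in its details):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE price_history (
        item       TEXT,
        price      REAL,
        valid_from TEXT DEFAULT (datetime('now')),
        valid_to   TEXT            -- NULL means "current row"
    );

    -- When a new version of an item is inserted, close out the old
    -- current row by stamping its invalidation date.
    CREATE TRIGGER close_old_row AFTER INSERT ON price_history
    BEGIN
        UPDATE price_history
        SET valid_to = NEW.valid_from
        WHERE item = NEW.item
          AND valid_to IS NULL
          AND rowid <> NEW.rowid;
    END;
""")

conn.execute("INSERT INTO price_history (item, price) VALUES ('widget', 1.00)")
conn.execute("INSERT INTO price_history (item, price) VALUES ('widget', 1.25)")

# Only the latest version is "current"; old versions keep their window.
current = conn.execute(
    "SELECT price FROM price_history WHERE item = 'widget' AND valid_to IS NULL"
).fetchall()
```

Queries for "the row as of date X" then become range checks on valid_from/valid_to, which is the part MSSQL's temporal tables give you as first-class syntax.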
|
# ? Jan 28, 2020 15:24 |
|
redleader posted:
they are a pain in the rear end though

Used sanely they are fine and can save you from yourself.
|
# ? Jan 28, 2020 17:35 |
|
Munkeymon posted:
Microsoft's recommended schema for an auditable table involves triggers that automatically set the invalidation date on the old row when a new one is written, IIRC. Pretty neat, but also somewhat special-purpose.

Speaking of which: if you have MSSQL, system-versioned temporal tables are great and I wish I had them in Postgres. You can implement something similar with triggers, but you can't get the query syntax, and that's half the fun.
|
# ? Jan 28, 2020 18:53 |
|
TheFluff posted:
Speaking of which, if you have MSSQL, system-versioned temporal tables are great and I wish I had them in Postgres. You can implement something similar with triggers, but you can't get the query syntax and that's half the fun.

That's slick as gently caress.
|
# ? Jan 28, 2020 20:30 |
|
TheFluff posted:
Speaking of which, if you have MSSQL, system-versioned temporal tables are great and I wish I had them in Postgres. You can implement something similar with triggers, but you can't get the query syntax and that's half the fun.

Oh, someone else other than Oracle does native audited tables? That's good; we might see it spread, since it's actually really useful. Oracle's is called Flashback Data Archive for proper long-term auditing, and they've let you use undo to wind back queries in the near term for a while now. I believe Postgres is moving to an undo-area system, at least under EDB's work on it, which would enable them to do this stuff much more reasonably than the orphaned-rows-and-vacuuming system they have now.
|
# ? Jan 29, 2020 01:26 |
|
The trigger I posted on the 22nd actually works just fine... if I disable the other trigger that tries to do the same-ish thing, which I didn't think to look for and which I guess nobody knew about enough to mention during standup? Welp, that was a week I'll never get back, but I guess I learned a couple things.
|
# ? Jan 29, 2020 15:50 |
|
The tech debt is real. Seen this a few times. People are "comfortable" and "familiar" with SQL and databases and try to make them do too much.
|
# ? Feb 20, 2020 02:36 |
|
amethystdragon posted:
The tech debt is real.

I think the proliferation of ORMs and NoSQL implies a general discomfort with SQL.
|
# ? Feb 20, 2020 03:18 |
|
Nth Doctor posted:
I think the proliferation of ORMs and NoSQL implies a general discomfort with SQL.

Alternate take: the proliferation of ORMs is due to a desire not to write what is (often mistakenly) thought of as boilerplate data access logic, and NoSQL means I "don't have to worry about object-relational impedance mismatch" or "think about my schema, because storage is cheap!"
|
# ? Feb 20, 2020 14:11 |
I'd go along with that. SQL is great when you're just writing bare queries. When you're wrapping it in some other language, unless you're using a good ORM you end up with a majority of your database code being taken up with (fragile, unreadable, repetitive) input sanitization and type-checking routines. I'm perfectly happy writing bare SQL queries and still do whenever I want something super customized to optimize performance, but I err on the side of the ORM because I care about maintainability and data safety more than I care about seeing the actual pistons going up and down.
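To be fair, the sanitization half of that is mostly solved by parameter binding even without an ORM. A sketch in Python's sqlite3 (table name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Placeholders let the driver handle quoting and typing: no string
# concatenation, no hand-rolled escaping routines.
evil = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))

# The hostile string is stored and retrieved as plain data.
row = conn.execute("SELECT name FROM users WHERE name = ?", (evil,)).fetchone()
```

What the ORM adds on top of this is mapping rows to objects and tracking changes; the escaping problem was already the driver's job.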
|
|
# ? Feb 20, 2020 14:39 |
|
What ORM do you use? It sucks to have an ORM write queries for you imo. It also sucks to have your server code directly accessing tables. I greatly prefer to build an interface layer of stored procedures that formalize the interactions a server is allowed to take against the database. You get the benefit of an interface with the readability of SQL. Nested complex objects? Serialize them to JSON in SQL then, or return normalized data. I dunno. Using EF makes me wanna blow my brains out. C# logic directly affecting the database does too.
|
# ? Feb 20, 2020 19:08 |
|
Ruggan posted:What ORM do you use?
|
# ? Feb 20, 2020 19:14 |
|
Writing select foo,bar from fart where id=55 gets old fast. And there is absolutely no benefit to anyone. Same thing for stored procedures. With that being said, there are times when you have to get down and dirty and manually molest that SQL statement or stored procedure. For everything else, there's the ORM to do the janitoring.
|
# ? Feb 20, 2020 22:01 |
|
Maybe I'm just old school in this sense, but I'm much more comfortable using a CQRS pattern and letting the ORM handle persistence, while the reads, where I really care about not having the ORM do dumb poo poo, get handled via something like Dapper with handwritten SQL.
|
# ? Feb 20, 2020 23:06 |
|
How'd we get on this topic? I didn't start it, did I? Having just had this conversation elsewhere: nope, you can keep your ORMs. Even I was surprised to find that such a small data set couldn't be handled effectively by the ORM, because instead of having a viable driver utilizing the basic features of the database, it decided to roll its own. When you can't write a measly 250kB over 1k rows in a single table with one foreign key in under 30 seconds (with no reads happening)... yeah, you gotta be kidding me.

It also reminded me of something from a database class very long ago: Yes, the quick forms directly tie fields (lname, fname) to the underlying row in the database. How convenient! Um, yeah, don't do that, because your form should use local variables and a 'save' button to permit undo operations, to send the updates atomically, improve efficiency, etc. To do it with multiple objects mapped across many rows in multiple tables isn't a matter of laziness; it's poor design. But hey, at least your WordPress site is up fast!
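For scale on why 30 seconds is absurd: a driver-level batched write of the same shape of data (1k rows, ~250kB, one foreign key) is one prepared statement in one transaction. A sketch with sqlite3 (schema invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents (id INTEGER PRIMARY KEY);
    CREATE TABLE rows (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parents(id),
        payload   TEXT
    );
""")
conn.execute("INSERT INTO parents VALUES (1)")

# One transaction, one prepared statement, 1000 parameter bindings,
# instead of the ORM issuing 1000 separate INSERTs with per-row overhead.
data = [(i, 1, "x" * 250) for i in range(1000)]  # ~250 kB of payload total
with conn:  # commits once at the end of the block
    conn.executemany(
        "INSERT INTO rows (id, parent_id, payload) VALUES (?, ?, ?)", data
    )

count = conn.execute("SELECT COUNT(*) FROM rows").fetchone()[0]
```

This completes in milliseconds on any database; an ORM that takes 30 seconds for the same work is doing a round trip (or worse) per row.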
|
# ? Feb 20, 2020 23:39 |
|
PhantomOfTheCopier posted:
To do it with multiple objects mapped across many rows in multiple tables isn't a matter of laziness; it's poor design.

Good design doesn't ship products and we have deadlines! We're agile!
|
# ? Feb 20, 2020 23:55 |
|
I'm on the Laravel Eloquent victim train for ORM. It's fine for simple queries but breaks down with complex queries. I like to rely on ORM for simple queries for readability, but fall back to raw for complex ones because it improves comprehension past a critical point. Those queries get parameterized into methods as vendor classes that spit out the right query based upon database/version.
|
# ? Feb 21, 2020 04:28 |
|
Volguus posted:
Writing select foo,bar from fart where id=55 gets old fast. And there is absolutely no benefit to anyone. Same thing for stored procedures. With that being said, there are times when you have to get down and dirty and manually molest that SQL statement or stored procedure. For everything else, there's the ORM to do the janitoring.

You know what gets old fast? Not being able to publish updates without downtime because your ORM is tightly coupled with your database tables and expects its internal model of your schema to be up to date. We have a bunch of applications at my company where you can't push SQL hotfixes because of tight coupling and use of ORMs. "Oh, just abstract it! Implement data access layer views and/or stored procedures!" you might say. Well gently caress, what's the point of the ORM then?

Hard pass on full-fledged ORMs. They exist to hold your hand because you aren't as comfortable in SQL as you are in C# or whatever your server-side language is. Validation belongs on the client in UI, on the server as type, null, and rule checking, and in the database as FKs and constraints. Security belongs on the client in UI and on the server in server-side checks against the session user, not in the db, because your server is running under a service account that has all the access it needs and no more. Business logic belongs in the database alone, because the next time I have to loving debug your goddamn server-side application and jump through a set of files you copied from an online tutorial, I'm gonna drop your database. Bob from accounting wants his reports, and I need to be able to replicate your stupid rear end logic using queries alone.
|
# ? Feb 21, 2020 06:56 |
|
Meanwhile I'll be over here with my version-controlled SQL database, DAL schema with specific access methods, and Dapper data access layer. Oh, you need data from my app that matches what the app shows? Sure, use this public SQL interface or API endpoint. Oh, you found a bug? Fixed. No, you don't need to wait until the next release, because we can independently push changes to client, server, or database whenever we want with absolutely no downtime.
|
# ? Feb 21, 2020 06:59 |
|
Ruggan posted:
Business logic belongs in the database alone

i was with you until this point because i've seen what people do when the database is the business layer
|
# ? Feb 21, 2020 11:45 |
|
Ruggan posted:
Business logic belongs in the database alone because the next time I have to loving debug your goddamn server side application and jump through a set of files you copied from an online tutorial I'm gonna drop your database.

redleader posted:
i was with you until this point because i've seen what people do when the database is the business layer

Yeah, this was where I left the trail as well. My DB is chock full of business rules in the sprocs, and it can really suck when you're trying to decompose complex processes into step-by-step ones for use in integration APIs. Maintainability can also really suffer as sprocs get special-snowflake changes for a small subset of tenants.

My current project is reimplementing a module that is mostly run in multi-thousand-line procedures, with business rules lifted out and sprocs used only for CRUD of simple table-level records. Before we started, only a handful of really experienced devs would work on it, leading to huge bottlenecks of work. My goal is to allow customers to be able to change their own processes eventually.
|
# ? Feb 21, 2020 13:12 |
|
Ruggan posted:
You know what gets old fast? Not being able to publish updates without a downtime because your ORM is tightly coupled with your database tables and expects its internal model of your schema to be up to date. We have a bunch of applications at my company where you can't push SQL hotfixes because of tight coupling and use of ORMs.

Well now, this is very much worthy of the coding horrors thread. Can you please get over there so that we can continue the charade? That thread is massively in need of a diversion after the docker-every-loving-where episode.
|
# ? Feb 21, 2020 13:32 |
|
I guess I should clarify that when I say business logic, I don't mean thousand-line sprocs with nested sprocs, triggers, cursors, all of that. I agree that some crazy rear end bullshit doesn't belong in a sproc. More what I'm referring to is the logic by which a value is calculated. Like, Billy wants to add a new property to a model object, so he just calculates it in C# or whatever instead of putting it in the database where it belongs. Now John needs to use that in a report, and he needs to recreate your server-side logic in SQL. I've seen far too many times where a report writer asks a dev where they can get a piece of data, and the answer is they can't, because it's calculated on the fly by the application and has a lot of complex logic behind it that would have been simple in SQL.
|
# ? Feb 21, 2020 15:14 |
|
Nth Doctor posted:
Yeah this was where I left the trail as well. My DB is chock full of business rules in the sprocs and it can really suck when you're trying to decompose complex processes into step by step ones for use in integration apis. Maintainability can also really suffer as sprocs get special snowflake changes for a small subset of tenants.

Yea, again, I probably misstated my intent. Composable, simple code, wherever possible. It takes a lot of time and iteration to get it right, but it's a fight worth fighting. Thousand-line blocks of unformatted SQL code also make me wanna barf, so I'm with you there.
|
# ? Feb 21, 2020 15:19 |
|
Ruggan posted:
complex logic behind it that would have been simple in SQL.

They hand-wrote joins in the server layer or something?
|
# ? Feb 21, 2020 15:20 |
Or are we talking like "price per container = unit price * container capacity" defined as a model property in the ORM, instead of having it redundantly stored in the DB as a separate column or a derived value where the math is done DB-side?
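The derived-value version of that example can be as small as a view, so every consumer sees the same math. A sketch in Python's sqlite3 (all names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (
        id                 INTEGER PRIMARY KEY,
        unit_price         REAL,
        container_capacity INTEGER
    );

    -- The math lives DB-side, so the app, the reports, and ad-hoc
    -- queries all compute price-per-container the same way.
    CREATE VIEW product_pricing AS
    SELECT id,
           unit_price * container_capacity AS price_per_container
    FROM products;
""")
conn.execute("INSERT INTO products VALUES (1, 2.50, 12)")

price = conn.execute(
    "SELECT price_per_container FROM product_pricing WHERE id = 1"
).fetchone()[0]
```

Defining it as an ORM model property instead means the report writer has to reimplement (and keep in sync) the same formula.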
|
|
# ? Feb 21, 2020 15:24 |
|
Munkeymon posted:
They hand-wrote joins in the server layer or something?

Yes, and/or aggregate and/or CASE statement equivalents. One time I saw some guy using enums they hand-defined to match lookup tables in the database, but the enums were out of date with the table. EF really encourages this, though. LINQ is powerful and makes it too easy to just hand-roll your own stuff that looks crazy a month later.
|
# ? Feb 21, 2020 15:25 |
|
Data Graham posted:
Or are we talking like "price per container = unit price * container capacity" defined as a model property in the ORM, instead of having it redundantly stored in the DB as a separate column or a derived value where the math is done DB-side?

This, but more complex. If you can just tell someone to multiply two columns, then fine. If you're defining a percent-complete for a row calculated based on the current date, start and due dates, and statuses, then maybe you should put that somewhere other people can get at it.
|
# ? Feb 21, 2020 15:27 |
|
|
An example I can think of is the status of a feature for a customer. The customer has a row for the feature with a status: On/Off. But if they aren't licensed for the feature, it should be Not Licensed. But if they're an international org, this is actually NA due to a lookup. But if they have been granted an exemption in a separate table, then it's Exempt, but only during the date range that is listed in that table. Let's create this as a model property of the feature. Or maybe just calculate it on the fly. Oh, you're saying that all the product teams want to report on the count of customers using features belonging to their product? Sorry, the logic is really complex and is only available in our server code.
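That precedence chain is one CASE expression in a view, queryable by anyone. A hypothetical sketch in Python's sqlite3 (every table and column name here is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_features (
        customer_id INTEGER, feature TEXT, enabled INTEGER,
        licensed INTEGER, international INTEGER
    );
    CREATE TABLE exemptions (
        customer_id INTEGER, feature TEXT,
        starts TEXT, ends TEXT
    );

    -- One view encodes the precedence: Exempt > NA > Not Licensed > On/Off.
    CREATE VIEW feature_status AS
    SELECT cf.customer_id, cf.feature,
           CASE
               WHEN EXISTS (SELECT 1 FROM exemptions e
                            WHERE e.customer_id = cf.customer_id
                              AND e.feature = cf.feature
                              AND date('now') BETWEEN e.starts AND e.ends)
                   THEN 'Exempt'
               WHEN cf.international THEN 'NA'
               WHEN NOT cf.licensed  THEN 'Not Licensed'
               WHEN cf.enabled       THEN 'On'
               ELSE 'Off'
           END AS status
    FROM customer_features cf;
""")
# Customer 1: enabled but unlicensed; customer 2: international org.
conn.execute("INSERT INTO customer_features VALUES (1, 'export', 1, 0, 0)")
conn.execute("INSERT INTO customer_features VALUES (2, 'export', 1, 1, 1)")

statuses = dict(conn.execute(
    "SELECT customer_id, status FROM feature_status"
).fetchall())
```

With this, the product teams' "count of customers using features" report is a GROUP BY on the view instead of a request to the server team.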
|
# ? Feb 21, 2020 15:34 |