redleader
Aug 18, 2005

Engage according to operational parameters

NihilCredo posted:

i also think this is usually the cleanest option. no query filters, just a separate connection string per tenant.

the downside is that you have to be really drat sure you will never need to perform queries across multiple organisations

if you do need that, using one schema per tenant can be a decent compromise. you can specify the schema in the connection string, but still override it in the query if needed. at least if your db is mssql or postgres

another thing to be wary of with one-db-per-tenant (or indeed anything that varies the db connection string per tenant) is that ef core support is sketchy, iirc. you'll need to figure that bit out on your own

obviously not a concern if you're not using ef
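
For the schema-override approach mentioned above, on Postgres the Npgsql driver lets you pick the default schema per tenant right in the connection string via its "Search Path" keyword. A sketch (the keyword is real; host, database, and schema names here are placeholders):

code:
"ConnectionStrings": {
  "TenantA": "Host=db;Database=app;Username=app;Password=<password>;Search Path=tenant_a",
  "TenantB": "Host=db;Database=app;Username=app;Password=<password>;Search Path=tenant_b"
}
Unqualified table names in queries then resolve to the tenant's schema, while an explicit schema prefix in a query still overrides it.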


ChocolatePancake
Feb 25, 2007
If you inject your ITenantProvider and use it in your OnConfiguring method for Entity Framework, you can use a scoped database service no problem. I have used this approach several times, and it works well. I can probably get you some sample code tomorrow if you like.

Supersonic
Mar 28, 2008

Tortured By Flan

ChocolatePancake posted:

If you inject your ITenantProvider and use it in your OnConfiguring method for Entity Framework, you can use a scoped database service no problem. I have used this approach several times, and it works well. I can probably get you some sample code tomorrow if you like.

This would be appreciated! I've also been looking at Finbuckle Multitenant tonight which seems to be a potential solution as well.

ChocolatePancake
Feb 25, 2007
I've not used Finbuckle before, but it looks intriguing. This is what I do:

code:
// put your connection strings in your appsettings.json like this:
"ConnectionStrings": {
  "TenantADataModel": "data source=<rest of connection string here>;",
  "TenantBDataModel": "data source=<rest of connection string here>;"
},

public partial class MyDbContext : DbContext
{
    private readonly ITenantIdentifier tenantIdentifier;
    private readonly IConfiguration configuration;

    public MyDbContext(IConfiguration configuration, ITenantIdentifier tenantIdentifier)
    {
        this.configuration = configuration;
        this.tenantIdentifier = tenantIdentifier;
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // e.g. tenant "TenantA" -> connection string "TenantADataModel"
        var connectionString = configuration
            .GetConnectionString(tenantIdentifier.GetCurrentTenantId() + "DataModel");
        optionsBuilder.UseSqlServer(connectionString);
    }
}

in Startup.cs:
 
services.AddSingleton<ITenantIdentifier, MyTenantIdentifier>();
services.AddDbContext<MyDbContext>();
From there you just inject your MyDbContext wherever you need it. Each request will get its own connection string based on the tenant ID.
This is with SQL Server, but the same approach should work with Postgres (UseNpgsql instead of UseSqlServer).

Hope that helps!

epswing
Nov 4, 2003

Soiled Meat
It'd be different for everyone, but what criteria are commonly used to set the TenantID in MyTenantIdentifier to "TenantA" or "TenantB" in an ASP.NET Core project (the currently logged-in User? the subdomain?), and where is a reasonable place to store that information?

ChocolatePancake
Feb 25, 2007
We like to use subdomains, storing the mapping in a config file; it makes things easier for us, but there's lots of ways to skin that cat.
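
For illustration, subdomain-based resolution might look like this, reusing the ITenantIdentifier from the earlier post. The "Tenants" config section and class name are made up for the sketch, not something from the posts above:

C# code:
// Hypothetical ITenantIdentifier that maps the request's subdomain to a
// tenant ID using a mapping kept in configuration, e.g.:
// "Tenants": { "acme": "TenantA", "contoso": "TenantB" }
public class MyTenantIdentifier : ITenantIdentifier
{
    private readonly IHttpContextAccessor httpContextAccessor;
    private readonly IConfiguration configuration;

    public MyTenantIdentifier(IHttpContextAccessor httpContextAccessor, IConfiguration configuration)
    {
        this.httpContextAccessor = httpContextAccessor;
        this.configuration = configuration;
    }

    public string GetCurrentTenantId()
    {
        // "acme.example.com" -> "acme"
        var host = httpContextAccessor.HttpContext.Request.Host.Host;
        var subdomain = host.Split('.')[0];
        return configuration[$"Tenants:{subdomain}"]
            ?? throw new InvalidOperationException($"Unknown tenant '{subdomain}'");
    }
}
Note you'd need services.AddHttpContextAccessor() in Startup for the IHttpContextAccessor injection to work.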

biznatchio
Mar 31, 2001


Buglord
You don't even need to bury it in the DbContext OnConfiguring like that, you can do it right in your application's startup. The AddDbContext extension method for IServiceCollection has an overload to provide a function that takes IServiceProvider as one of its arguments, so you can get your tenant-providing service there and keep all your configuration during service registration where it belongs.
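
A sketch of what that looks like, assuming the same ITenantIdentifier / GetCurrentTenantId names and connection-string convention from the earlier post:

C# code:
// In Startup.ConfigureServices: AddDbContext has an overload whose options
// callback receives the IServiceProvider, so tenant resolution can live here
// instead of in OnConfiguring.
services.AddHttpContextAccessor();
services.AddSingleton<ITenantIdentifier, MyTenantIdentifier>();
services.AddDbContext<MyDbContext>((serviceProvider, options) =>
{
    var tenantIdentifier = serviceProvider.GetRequiredService<ITenantIdentifier>();
    var configuration = serviceProvider.GetRequiredService<IConfiguration>();
    var connectionString = configuration
        .GetConnectionString(tenantIdentifier.GetCurrentTenantId() + "DataModel");
    options.UseSqlServer(connectionString);
});
The DbContext then only needs its plain DbContextOptions constructor, with no configuration logic of its own.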


ChocolatePancake posted:

We like to use subdomains, storing the mapping in a config file; it makes things easier for us, but there's lots of ways to skin that cat.

We do that for our IDP (mapping to tenant by hostname); then for all our application services we map based off the issuer of the passed OAuth access token.

But there are still some things that are difficult to make per-tenant, so we structure our web applications around an initial WebApplication host whose only job is to look at the supplied bearer token to identify the issuer. It then passes the request off to a separate tenant-specific host instance (one per tenant, created on demand when the first request for a particular tenant hits the process), which handles it exactly like a single-tenant application would, because it basically *is* just lots of separate single-tenant applications running in one process. You also don't get boned if you ever need to use a library that assumes a single-tenant application, and you never accidentally do things like mix up memory caches cross-tenant or hit any of the other pitfalls you can fall into if you're not hypervigilant about never assuming a single tenant.

biznatchio fucked around with this message at 04:37 on May 1, 2024

susan b buffering
Nov 14, 2016

epswing posted:

It'd be different for everyone, but what criteria is commonly used to set the TenantID in MyTenantIdentifier to "TenantA" or "TenantB" in an ASP.NET Core project (the currently logged in User? the subdomain?), and where is a reasonable place to store that information?

Our IDP just adds the tenant ID to the user's claims.
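
For illustration, reading that claim could plug into the ITenantIdentifier from the earlier posts. The claim type "tenant_id" is an assumption here; IDPs name this differently:

C# code:
// Sketch: resolving the tenant from the authenticated user's claims.
public class ClaimsTenantIdentifier : ITenantIdentifier
{
    private readonly IHttpContextAccessor httpContextAccessor;

    public ClaimsTenantIdentifier(IHttpContextAccessor httpContextAccessor)
        => this.httpContextAccessor = httpContextAccessor;

    public string GetCurrentTenantId() =>
        httpContextAccessor.HttpContext?.User.FindFirst("tenant_id")?.Value
            ?? throw new InvalidOperationException("No tenant claim on this request.");
}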

Calidus
Oct 31, 2011

Stand back I'm going to try science!

susan b buffering posted:

Our IDP just adds the tenant ID to the user's claims.

I really like this method.

Furism
Feb 21, 2006

Live long and headbang
I have a very newbish question, since I'm only a hobby developer and even then, I hadn't started Visual Studio for like 2 years before today so I'm a bit rusty. I'm throwing together a very simple and quick Blazor web app to centralize all the calculators and converters I need for sports (miles to km, pace to speed, pace + time to distance, etc). I'm stuck on the One-rep Max calculator trying to implement a bafflingly simple formula (Epley's), which is

1RM = Weight × (1 + r/30), assuming r > 1.

Weight is the weight you're doing the assessment at, r is the number of reps you could do before failure, and this comes out as a higher number because your 1RM is supposed to be heavier than your, say, 5RM. For instance a 5RM at 50kg should come out as 56.3kg (according to another calculator). Sorry for the long intro, I wanted to give context.

Anyway, my code is dumb and so far is:

code:
@code{
    double Weight = 0;
    int RepAmount = 1;
    double epley1Rm = 0;
    double brzycki1Rm = 0;

    private void EpleyFormula(){
        epley1Rm = Weight * (1 + RepAmount/30);
    }
}
The problem I have is that regardless of the amount of reps I input, the 1RM is always equal to the initial Weight. I broke down the formula, and the problem seems to be that RepAmount/30 always returns 0. I'm not sure how rounding works for doubles so it's probably that. Looking at the documentation, Microsoft says that "If the magnitude of the result of a floating-point operation is too small for the destination format, the result of the operation becomes positive zero or negative zero." but I have no idea if that's the case here.

Is there something very basic and dumb I'm missing here?

Geddy Krueger
Apr 24, 2008
You're doing integer division. Cast the int to a double first.

Furism
Feb 21, 2006

Live long and headbang
Ooh, because I'm using two different types the compiler needs to know which one I actually need, and since I don't specify it by casting the type then it just uses the simpler of the two?

Geddy Krueger
Apr 24, 2008

Furism posted:

Ooh, because I'm using two different types the compiler needs to know which one I actually need, and since I don't specify it by casting the type then it just uses the simpler of the two?

Pretty much. Since RepAmount is an int and .NET interprets the literal 30 as an int, the division is carried out as integer division, which truncates the result to 0 before anything is ever converted to a double. You can either cast RepAmount to double as part of the equation, or change the 30 to 30.0, and that should do it.

E: or just change the declared type of RepAmount to double if you want
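
To make that concrete, any of these variants avoids the truncating integer division:

C# code:
int RepAmount = 5;
double Weight = 50;

double wrong  = Weight * (1 + RepAmount / 30);          // int / int -> 0, result stays 50
double right1 = Weight * (1 + (double)RepAmount / 30);  // cast forces double division, ≈ 58.33
double right2 = Weight * (1 + RepAmount / 30.0);        // double literal does the same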

Furism
Feb 21, 2006

Live long and headbang
Thanks for the clarification, appreciate it. Working fine now!

CitizenKeen
Nov 13, 2003

easygoing pedant
Is there a forum or a Discord for SixLabors ImageSharp, or any way to get help with that library?

I've been using ImageSharp to load up .png files, clone -> rotate -> resize them, and save as webp files. And every once in a while, the files are exported as animated webps (which every browser renders indistinguishably from static webps, but which Discord can't parse).

Why? No idea. All the png files are sourced from various sources (sometimes a PNG or JPG, sometimes a WEBP, often a screen grab), and then cropped in Affinity and exported as png files before running through my C# code. So 1 in 30 or 40 images are coming out as animated webp, even though they always start as png files.

Any insight?

epswing
Nov 4, 2003

Soiled Meat

CitizenKeen posted:

Is there a forum or a Discord for SixLabors ImageSharp, or any way to get help with that library?

I've been using ImageSharp to load up .png files, clone -> rotate -> resize them, and save as webp files. And every once in a while, the files are exported as animated webps (which every browser renders indistinguishably from static webps, but which Discord can't parse).

Why? No idea. All the png files are sourced from various sources (sometimes a PNG or JPG, sometimes a WEBP, often a screen grab), and then cropped in Affinity and exported as png files before running through my C# code. So 1 in 30 or 40 images are coming out as animated webp, even though they always start as png files.

Any insight?

Is it consistent? If you re-process the same file that resulted in an animated webp, does it still/always result in an animated webp file?

EssOEss
Oct 23, 2006
128-bit approved
You described the symptom "does not work in Discord" but how did you get from that to "it is an animated webp"?

SirViver
Oct 22, 2008
From having a look at the source code, maybe some of the PNGs you have are actually APNGs? Those would be able to contain multiple frames while still being completely backwards compatible with regular PNG (displayed as the first frame only by programs that don't understand APNG). ImageSharp understands APNGs and would dutifully convert them into animated WEBPs.

If I understand the ImageSharp API correctly, I guess you could try loading all images with the DecoderOptions configured with MaxFrames = 1, which would get rid of any animated file content and convert everything to a normal WEBP.
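
A sketch of that idea, based on the ImageSharp 3.x API (worth verifying against the version actually in use):

C# code:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats;

// Only decode the first frame, so an APNG input can't carry extra
// frames through to the WEBP encoder.
var options = new DecoderOptions { MaxFrames = 1 };
using var image = Image.Load(options, "input.png");
image.SaveAsWebp("output.webp");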

CitizenKeen
Nov 13, 2003

easygoing pedant

epswing posted:

Is it consistent? If you re-process the same file that resulted in an animated webp, does it still/always result in an animated webp file?

Yes, the same files will result in the same problem.

EssOEss posted:

You described the symptom "does not work in Discord" but how did you get from that to "it is an animated webp"?

I posted two webp files - one that previews in Discord, one that does not - in the D Sharp Plus Discord (the Discord .NET bot library I'm using), and it was noted that the working one was a plain webp, the non-working one an animated webp, and that Discord does not render animated webp files.

SirViver posted:

From having a look at the source code, maybe some of the PNGs you have are actually APNGs? Those would be able to contain multiple frames while still being completely backwards compatible with regular PNG (displayed as the first frame only by programs that don't understand APNG). ImageSharp understands APNGs and would dutifully convert them into animated WEBPs.

If I understand the ImageSharp API correctly, I guess you could try loading all images with the DecoderOptions configured with MaxFrames = 1, which would get rid of any animated file content and convert everything to a normal WEBP.

I've never heard of an APNG file. That's a strong contender for the culprit: if I start with an animated WEBP, open and edit it in Affinity, and it exports as an APNG, then running my ImageSharp code against it would produce exactly this. I had no idea animated PNGs were a thing - this gives me an avenue for investigation. Thank you!

Kyte
Nov 19, 2013

Never quacked for this
APNG is a non-standard extension, hence why it's kinda rare but some people still use it because it's less stupid than webm/animated webp.
The Correct(TM) way of doing it would be to see if there's more than one frame in the png and if so convert to webm instead but that's, y'know, a whole lot of extra effort.

CitizenKeen
Nov 13, 2003

easygoing pedant

Kyte posted:

APNG is a non-standard extension, hence why it's kinda rare but some people still use it because it's less stupid than webm/animated webp.
The Correct(TM) way of doing it would be to see if there's more than one frame in the png and if so convert to webm instead but that's, y'know, a whole lot of extra effort.

Not to clutter the .NET thread but I don't want the extra frames (I suspect someone upstream is making them animated by accident - they're static images), so I'm just flattening them when I convert from WEBP to PNG. Seems to solve the problem just fine.

Blue Footed Booby
Oct 4, 2006

got those happy feet

Jen heir rick posted:

I hate writing web apps and wish I could write native apps, but it seems like the world is moving to hybrid/web type apps. The economics are just too good. You can have a single team write an app that works on Android, iOS, macOS, Linux, and Windows. You just can't beat that with native apps. You need a team or developer for each platform. The suits just aren't gonna pay for that. Even if the result is a shittier experience. People will get over it. poo poo sucks yo.

I work on winforms apps for a living. :smuggo:

Yes I'm a dinosaur, yes the last couple job hunts epically sucked, but I don't ever have to work on web poo poo. It's great.

Hughmoris
Apr 21, 2007
Let's go to the abyss!

Blue Footed Booby posted:

I work on winforms apps for a living. :smuggo:

Yes I'm a dinosaur, yes the last couple job hunts epically sucked, but I don't ever have to work on web poo poo. It's great.

I'd think that Winform talent would be harder to come by these days so you would be in more demand. Not so much?

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with



Grimey Drawer

Blue Footed Booby posted:

I work on winforms apps for a living. :smuggo:

Yes I'm a dinosaur, yes the last couple job hunts epically sucked, but I don't ever have to work on web poo poo. It's great.

If I have it my way, I will be writing WPF and (every now and then) WinForms apps for the rest of my life. MS Build has basically cemented WPF as a standard for Desktop going forward too, which is just great.

gently caress web. gently caress it with the largest, thorniest stick you can find. What a tremendous waste of everyone's loving time.

Calidus
Oct 31, 2011

Stand back I'm going to try science!
You find a legacy .NET app that is too big and makes too much money to port to something modern.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
We recently made a web app for our largest client to replace the WPF app we had previously maintained for them, and used Blazor. It's fine. There are things that are nicer about WPF, but there are things that are nicer in the browser, too. I'd use either going forward, based on the requirements of the project.

Magnetic North
Dec 15, 2008

Beware the Forest's Mushrooms

Calidus posted:

You find a legacy .NET app that is too big and makes too much money to port to something modern.

In my experience, the problem with this plan is that any org that is staying on a legacy app is probably doing so because they're too cheap to replace it or because the existing knowledge base ran out on them since they were too cheap to offer adequate raises.

Calidus
Oct 31, 2011

Stand back I'm going to try science!

Magnetic North posted:

In my experience, the problem with this plan is that any org that is staying on a legacy app is probably doing so because they're too cheap to replace it or because the existing knowledge base ran out on them since they were too cheap to offer adequate raises.

Welcome to Manufacturing!

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with



Grimey Drawer

Hammerite posted:

We recently made a web app for our largest client to replace the WPF app we had previously maintained for them, and used Blazor. It's fine. There are things that are nicer about WPF, but there are things that are nicer in the browser, too. I'd use either going forward, based on the requirements of the project.

Of the cancerous growths that are web frameworks, Blazor is probably the best of what we've got. It's still loving terrible.

We have a couple apps that have desktop and web-based components in Blazor, and there are a couple that have identical UIs and (as close as we reasonably can get) identical functionality. They all hit the same service endpoints and junk. The WPF one is just way, way better. It renders an order of magnitude faster. It supports a lot of common operations 'for free' without having to do cursed surgery on a browser (drag and drop, context menu, etc.), our desktop app has tear-able, dock-able tabs because that's a thing you can just do, interacting with resources on the local machine is actually a thing you can do. The list goes on.

No - The browser is where functionality, UX, and performance go to die. Platform considerations aside, literally the only thing you are getting out of deploying on a browser is distribution. You are sacrificing everything at the altar of not being forced to hand off an installer once per person - it's complete nonsense.

Now, obviously depending on what skillset your team might have, that's something that matters for an existing effort, but hiring native desktop programmers isn't hard. We get a pile of applications every time a job req goes up - there are way more of these people than Twitter etc. would let on.

Canine Blues Arooo fucked around with this message at 18:08 on May 24, 2024

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Canine Blues Arooo posted:

No - The browser is where functionality, UX, and performance go to die. Platform considerations aside, literally the only thing you are getting out of deploying on a browser is distribution. You are sacrificing everything at the altar of not being forced to hand off an installer once per person - it's complete nonsense.

I used the cloud version of photoshop the other day and the user experience was almost identical save that I didn't have to download and install gigabytes of crap in order to change the color of an image from yellow to red. Yeah it sure must have sucked to develop cloud photoshop but as an end user it's super convenient.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Canine Blues Arooo posted:

Of the cancerous growths that are web frameworks, Blazor is probably the best of what we've got. It's still loving terrible.

We have a couple apps that have desktop and web-based components in Blazor, and there are a couple that have identical UIs and (as close as we reasonably can get) identical functionality. They all hit the same service endpoints and junk. The WPF one is just way, way better. It renders an order of magnitude faster. It supports a lot of common operations 'for free' without having to do cursed surgery on a browser (drag and drop, context menu, etc.), our desktop app has tear-able, dock-able tabs because that's a thing you can just do, interacting with resources on the local machine is actually a thing you can do. The list goes on.

No - The browser is where functionality, UX, and performance go to die. Platform considerations aside, literally the only thing you are getting out of deploying on a browser is distribution. You are sacrificing everything at the altar of not being forced to hand off an installer once per person - it's complete nonsense.

Now, obviously depending on what skillset your team might have, that's something that matters for an existing effort, but hiring native desktop programmers isn't hard. We get a pile of applications every time a job req goes up - there are way more of these people than Twitter etc. would let on.

These criticisms are a mixture of complaints that I agree are genuine drawbacks of web apps over desktop apps, complaints where I think we have different priorities, and complaints that I just don't think matter much.

For an example of the first type, yes, it's much less convenient to work with context menus in the browser compared to a desktop app, and to address things at a very high level this comes originally from the fact that browser conventions are built on a history of the web as a place where documents are being shared rather than applications.

For an example of the latter type - having "tear-able, dock-able tabs" is something that we had in the desktop app I referred to, and we now don't have it in the Blazor application. But does this genuinely mean that something is wrong/missing, or was having rearrangeable panels just a way of throwing our hands up and abandoning our task as application developers of coming up with a UI layout for the app that "just works" for the workflows, the use cases we want to support? I think that for most apps, if you arrive at the correct layout (and gain confidence that it's correct by talking to the people who actually use the app, getting their feedback, and implementing the changes they ask for after understanding why they were asking for them) then you don't need all those rearrangeable panels and so on.

This becomes less true the more different ways there are to use the app. For example, an IDE like Visual Studio, which is basically a multi-document editor with a plethora of integrated specialised panels for different purposes - there's no way around it, that absolutely needs the user to be able to customise the layout, just because there are so many ways to use it. And I would not want to use a "web app" version of Visual Studio, at all. But most apps just aren't that complex.

epswing
Nov 4, 2003

Soiled Meat
ASP.NET project using .NET 6 and EF Core with a SQL Server database. Mostly a LOB app, where users can Create/Save orders, etc. In addition to that, a user can Send an order, which talks to another system on the internet, this operation could take a long time (seconds).

The Save and Send operations for the same order must run serially, but it's fine for them to run in parallel for different orders. The order entity currently has a LastModified timestamp property, which is checked and updated during the Save operation. While saving, if the LastModified property doesn't match LastModified of the order in the DB, someone else has modified it since the user opened it, and an error is thrown. The Send operation updates LastModified as well.

Some scenarios:

User A opens Order 1, User B opens Order 2, and both click Save (or Send) at the same time. This is fine, different orders, no conflicts.

User A and User B both open Order 1, and both click Save at the same time. One of them has to win, the other has to lose and will be shown an error (something like "Record has been modified by another user"). Checking LastModified is a race condition that needs to be solved, but because the Save operation is fast, it's rare.

User A and User B both open Order 1, and one clicks Send and the other clicks Save at roughly the same time. Same problem as above, but this race condition is terrible and happens frequently, because the Send operation could take several seconds, so the order's rear end is exposed the whole time.

To stop the bleeding I could create a static obj and wrap both Save and Send in a lock (obj) { ... }. I believe this would eliminate the race conditions but now nothing happens in parallel, i.e. User A runs Send which takes 10 seconds, and Users B, C, and D all innocently click Save on different orders, all of which have to wait.

Can I leverage EF's transactions here? Is this more of a SemaphoreSlim problem? What's the modern way to handle this?

I've read https://learn.microsoft.com/en-us/ef/core/saving/concurrency and concurrency tokens look interesting, but I'm confused about which isolation level I'd use here, and I'm not sure how to avoid deadlocks. Also, let's pretend I'm not allowed to make a database schema change at this point :3:

nielsm
Jun 1, 2009



How much can you change things up? Like, could you add a State field to the Order, with values of New, ToSend, Sent? So when a user clicks to send an order, it just changes that State field, and then a background job picks up the ToSend order and changes it to Sent when done. Meanwhile, clients will see that the order is being processed and can't modify it.
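
Hypothetically, the background job could claim a ToSend order atomically with a conditional update, so two workers never send the same order. A sketch against EF Core 6; the table/column names and SendToExternalSystemAsync are made up:

C# code:
// A conditional UPDATE acts as an atomic claim: an affected-row count of 1
// means this worker owns the order now, 0 means someone else got there first.
var claimed = await context.Database.ExecuteSqlInterpolatedAsync(
    $"UPDATE Orders SET State = 'Sending' WHERE Id = {orderId} AND State = 'ToSend'");

if (claimed == 1)
{
    await SendToExternalSystemAsync(orderId); // hypothetical long-running call
    await context.Database.ExecuteSqlInterpolatedAsync(
        $"UPDATE Orders SET State = 'Sent' WHERE Id = {orderId}");
}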

epswing
Nov 4, 2003

Soiled Meat

nielsm posted:

How much can you change things up? Like, could you add a State field to the Order, with values of New, ToSend, Sent? So when a user clicks to send an order, it just changes that State field, and then a background job picks up the ToSend order and changes it to Sent when done. Meanwhile, clients will see that the order is being processed and can't modify it.

I can't modify the database at the moment. Even if I could, I think I'd have the same race condition with the State field as I do now with the LastModified field.

Kyte
Nov 19, 2013

Never quacked for this
Your shared resource is the database row, so whatever you do has to be database-based. There's no guarantee in-memory solutions will work correctly in ASP.NET.
Normally I'd suggest taking a database lock but one of the operations is gonna run long so probably not ideal.
A concurrency check takes you halfway there, but it only protects against simultaneous writes. You want that, but you also want to lock within the whole period between read and write for the Send operation.
So you need a field that signals the application(s) that the row is in use.

Another issue you didn't mention but probably should be worth considering is what happens if a user Save()s 1 second before another user Send()s. It's technically correct but the sender probably would like to review the updated data before sending. Therefore, your lock should probably last a bit longer than the actual save operation.

Since you can't change the schema, a somewhat janky solution is to use the LastModified column.
Simply put, if LastModified >= DateTime.Now (+ some constant if you're worried about clock sync), then the row is in use and cannot be operated on.
Send() can use MaxValue to signal it's completing at an indeterminate point in the future. Save() can then identify MaxValue and tell "this row is currently being sent" to the user.
Save() can update LastModified to Now + some reasonable lockout period + constant. Send() can then tell the user that data changed between reviewing and sending.
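
Sketched out, that janky-but-schema-free scheme might look like this (class and method names are made up for illustration; the 30-second lockout is an arbitrary choice):

C# code:
public static class OrderLocking
{
    // LastModified doubles as the lock: a timestamp in the future means "in use".
    public static bool IsInUse(Order order) =>
        order.LastModified > DateTime.UtcNow;

    // Send() claims the row for an indeterminate time...
    public static void MarkSending(Order order) =>
        order.LastModified = DateTime.MaxValue;

    // ...and restores a real timestamp when it finishes.
    public static void MarkSent(Order order) =>
        order.LastModified = DateTime.UtcNow;

    // Save() bounces if a send is in flight, otherwise claims a short
    // lockout window so a sender notices the data just changed.
    public static void MarkSaved(Order order)
    {
        if (order.LastModified == DateTime.MaxValue)
            throw new InvalidOperationException("This order is currently being sent.");
        order.LastModified = DateTime.UtcNow.AddSeconds(30);
    }
}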

Kyte fucked around with this message at 17:25 on Jun 5, 2024

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with



Grimey Drawer
e: this was a wrong post.

biznatchio
Mar 31, 2001


Buglord
On a SQL Server database like you mentioned, you can use Application Locks to provide the longer locking you're looking for over the lifetime of the Send operation. Call sp_getapplock to acquire a named lock (so you can have a different lock per order) prior to initiating the Save/Send operation, and sp_releaseapplock after it's complete. The only restriction is that applocks are scoped to either a single transaction, or to a single database connection -- so you'll need to hold an open database connection across the Send operation. The applock will release automatically when the scope it's bound to ends (whether that's a single transaction or a database connection).

Depending on the arguments you pass to sp_getapplock, other users can either immediately bounce off the held lock and get a "record in use" error message instead; or they can queue up to wait to get it when it becomes available.
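
A sketch of that flow; the sp_getapplock/sp_releaseapplock parameter names and return-code convention (negative means failure) are real, while the resource-name format and error handling are illustrative:

C# code:
using Microsoft.Data.SqlClient;

// Hold one connection open across the whole Send; the applock is scoped to it.
using var conn = new SqlConnection(connectionString);
conn.Open();

using (var cmd = new SqlCommand("sp_getapplock", conn))
{
    cmd.CommandType = System.Data.CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Resource", $"order-{orderId}");
    cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
    cmd.Parameters.AddWithValue("@LockOwner", "Session");
    cmd.Parameters.AddWithValue("@LockTimeout", 0); // fail fast instead of queueing
    var result = cmd.Parameters.Add("@Result", System.Data.SqlDbType.Int);
    result.Direction = System.Data.ParameterDirection.ReturnValue;
    cmd.ExecuteNonQuery();
    if ((int)result.Value < 0)
        throw new InvalidOperationException("Record is in use by another user.");
}

// ... run the long Save/Send work while the lock is held ...

using (var cmd = new SqlCommand("sp_releaseapplock", conn))
{
    cmd.CommandType = System.Data.CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Resource", $"order-{orderId}");
    cmd.Parameters.AddWithValue("@LockOwner", "Session");
    cmd.ExecuteNonQuery();
}
Setting @LockTimeout to a positive number of milliseconds instead would make other callers queue up rather than bounce.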

zokie
Feb 13, 2006

Out of many, Sweden
Assuming you are just running a single instance of your application you could hack this in a number of ways, but easiest would probably be to use the concurrent collections. Get yourself a singleton and for each order put a little “lock” into the ConcurrentDictionary or whatever. Remove when done, check that it’s not busy before taking action.

It also feels like EF should be able to tell you if it has pending changes that haven’t been SaveChanges()-ed yet… I assume that it’s the EF concurrency errors that are being thrown?

bobua
Mar 23, 2003
I'd trade it all for just a little more.

epswing posted:

ASP.NET project using .NET 6 and EF Core with a SQL Server database. Mostly a LOB app, where users can Create/Save orders, etc. In addition to that, a user can Send an order, which talks to another system on the internet, this operation could take a long time (seconds).

The Save and Send operations for the same order must run serially, but it's fine for them to run in parallel for different orders. The order entity currently has a LastModified timestamp property, which is checked and updated during the Save operation. While saving, if the LastModified property doesn't match LastModified of the order in the DB, someone else has modified it since the user opened it, and an error is thrown. The Send operation updates LastModified as well.

Some scenarios:

User A opens Order 1, User B opens Order 2, and both click Save (or Send) at the same time. This is fine, different orders, no conflicts.

User A and User B both open Order 1, and both click Save at the same time. One of them has to win, the other has to lose and will be shown an error (something like "Record has been modified by another user"). Checking LastModified is a race condition that needs to be solved, but because the Save operation is fast, it's rare.

User A and User B both open Order 1, and one clicks Send and the other clicks Save at roughly the same time. Same problem as above, but this race condition is terrible and happens frequently, because the Send operation could take several seconds, so the order's rear end is exposed the whole time.

To stop the bleeding I could create a static obj and wrap both Save and Send in a lock (obj) { ... }. I believe this would eliminate the race conditions but now nothing happens in parallel, i.e. User A runs Send which takes 10 seconds, and Users B, C, and D all innocently click Save on different orders, all of which have to wait.

Can I leverage EF's transactions here? Is this more of a SemaphoreSlim problem? What's the modern way to handle this?

I've read https://learn.microsoft.com/en-us/ef/core/saving/concurrency and concurrency tokens look interesting, but I'm confused about which isolation level I'd use here, and I'm not sure how to avoid deadlocks. Also, let's pretend I'm not allowed to make a database schema change at this point :3:

I have these issues, funny enough on 'orders', myself. Made worse by it not being a single database row, but a view that incorporates other items (inventory count, inventory pulled status, etc.), so changes to other tables affect my orders, as do the employee's permissions (an order can't be sent for pulling inventory if there is no inventory on the order, inventory on the order is limited by the client on the order, etc.).

I used to use a lastupdated field, but I hated that method. I ended up doing a SignalR backend to orchestrate changes based on whether they actually caused a concurrency issue. This way users could see when another employee was viewing or editing orders, and see their changes in real time as they typed or made changes.

99% of my cases were an employee opening an order and moving on, then coming back to that order 5 minutes or 2 days later, so larger updates like inventory images and client or document generation didn't make sense to do in real time. If the page wasn't in focus/topmost, I'd simply place an overlay on the order notifying that the order was edited, with a click to dismiss it and refresh the order.

The biggest advantage was really just allowing them to see that another employee was actively interacting with an order, which either got them to communicate, or at least wait.


epswing
Nov 4, 2003

Soiled Meat

zokie posted:

Assuming you are just running a single instance of your application you could hack this in a number of ways, but easiest would probably be to use the concurrent collections. Get yourself a singleton and for each order put a little “lock” into the ConcurrentDictionary or whatever. Remove when done, check that it’s not busy before taking action.

It also feels like EF should be able to tell you if it has pending changes that haven’t been SaveChanges()-ed yet… I assume that it’s the EF concurrency errors that are being thrown?

Neat, ok, I didn't think of the thread-safe collections. For the "check that it’s not busy before taking action" part, if the record is busy, sounds like I can either reject the action, or wait until the record is not busy.

Reject would look something like this?

C# code:
private static ConcurrentDictionary<int, int> orderLock = new ConcurrentDictionary<int, int>();

public void Save(Order order)
{
    // Acquire before the try: if the lock is already held, bail out here,
    // otherwise the finally would remove another caller's lock.
    if (!orderLock.TryAdd(order.Id, 0))
    {
        throw new Exception("Record is being edited by another user, try again later");
    }

    try
    {
        // Save the order here, still error if LastModified is not what we expect
    }
    finally
    {
        orderLock.TryRemove(order.Id, out int _);
    }
}
Wait would look something like this? Maybe I'd add a retryCount and maxRetryCount to bail out of an infinite loop here, just in case. To me this looks like a spin lock that serializes access to saving orders on a per-order basis, which is what I wanted.

C# code:
private static ConcurrentDictionary<int, int> orderLock = new ConcurrentDictionary<int, int>();

public void Save(Order order)
{
    // Spin until we hold the per-order lock; acquire outside the try so
    // the finally only ever releases a lock we actually hold.
    while (!orderLock.TryAdd(order.Id, 0))
    {
        Thread.Sleep(100);
    }

    try
    {
        // Save the order here, error if LastModified is not what we expect
    }
    finally
    {
        orderLock.TryRemove(order.Id, out int _);
    }
}
Do these look insane or reasonable?

epswing fucked around with this message at 13:49 on Jun 6, 2024
