|
NihilCredo posted: i also think this is usually the cleanest option. no query filters, just a separate connection string per tenant.

another thing to be wary of with one-db-per-tenant (or indeed anything that varies the db connection string per tenant) is that ef core support is sketchy, iirc. you'll need to figure that bit out on your own. obviously not a concern if you're not using ef.
|
# ? May 1, 2024 00:31 |
|
|
If you inject your ITenantProvider into your DbContext and use it in the OnConfiguring method in Entity Framework, you can use a scoped database service no problem. I have used this approach several times, and it works well. I can probably get you some sample code tomorrow if you like.
|
# ? May 1, 2024 01:12 |
|
ChocolatePancake posted: If you inject your ITenantProvider with your OnConfiguring method for entity framework, you can use a scoped database service no problem. I have used this approach several times, and it works well. I can probably get you some sample code tomorrow if you like.

This would be appreciated! I've also been looking at Finbuckle.MultiTenant tonight, which seems to be a potential solution as well.
|
# ? May 1, 2024 02:13 |
|
I've not used Finbuckle before, but it looks intriguing. This is what I do:code:
This is with SqlServer, but should work the same with Postgres. Hope that helps!
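For anyone reading along without the snippet, a minimal sketch of the pattern being described might look like this. This is an assumption-filled reconstruction, not the poster's actual code: the ITenantProvider shape and the config-based connection-string lookup are invented for illustration.

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;

// Hypothetical tenant provider; how the tenant is resolved is up to you.
public interface ITenantProvider
{
    string GetTenantId();
}

public class AppDbContext : DbContext
{
    private readonly ITenantProvider _tenantProvider;
    private readonly IConfiguration _configuration;

    public AppDbContext(ITenantProvider tenantProvider, IConfiguration configuration)
    {
        _tenantProvider = tenantProvider;
        _configuration = configuration;
    }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // One connection string per tenant, e.g. "ConnectionStrings:TenantA".
        var tenantId = _tenantProvider.GetTenantId();
        var connectionString = _configuration.GetConnectionString(tenantId);
        optionsBuilder.UseSqlServer(connectionString);
    }
}
```

Register ITenantProvider as scoped and use AddDbContext as usual, so each request resolves the right database.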
|
# ? May 1, 2024 02:36 |
|
It'd be different for everyone, but what criteria is commonly used to set the TenantID in MyTenantIdentifier to "TenantA" or "TenantB" in an ASP.NET Core project (the currently logged in User? the subdomain?), and where is a reasonable place to store that information?
|
# ? May 1, 2024 04:17 |
|
We like to use subdomains, storing the mapping in a config file; it makes things easier for us, but there are lots of ways to skin that cat.
|
# ? May 1, 2024 04:22 |
|
You don't even need to bury it in the DbContext OnConfiguring like that; you can do it right in your application's startup. The AddDbContext extension method for IServiceCollection has an overload that takes a function with IServiceProvider as one of its arguments, so you can get your tenant-providing service there and keep all your configuration during service registration, where it belongs.

ChocolatePancake posted: We like to use subdomains, storing the mapping in a config file, makes it easier for us, but there's lots of ways to skin that cat.

We do that for our IDP (mapping to tenant by hostname); then for all our application services we map based on the issuer of the passed OAuth access token. But there are still some things that are difficult to make per-tenant, so we structure our web applications to have an initial WebApplication host whose only job is to look at the supplied bearer token to identify the issuer, then pass the request off to a separate tenant-specific host instance (one per tenant, created on demand when the first request for a particular tenant hits the process), which then handles it exactly like it would if it were a single-tenant application, because it basically *is* just lots of separate single-tenant applications running in one process.

You also don't get boned if you ever need to use a library that assumes a single-tenant application, and you never accidentally do things like mix up memory caches cross-tenant, or fall into any of the other pitfalls you might hit if you're not hypervigilant about never assuming a single tenant.

biznatchio fucked around with this message at 04:37 on May 1, 2024 |
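A sketch of the overload being described, done at service registration. The ITenantProvider service and the config lookup here are hypothetical placeholders, not code from this thread:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Sketch: the AddDbContext overload that receives the IServiceProvider,
// so the connection string can be chosen per-request from a tenant service.
builder.Services.AddDbContext<AppDbContext>((serviceProvider, options) =>
{
    var tenantProvider = serviceProvider.GetRequiredService<ITenantProvider>();
    var configuration = serviceProvider.GetRequiredService<IConfiguration>();

    var connectionString = configuration.GetConnectionString(tenantProvider.GetTenantId());
    options.UseSqlServer(connectionString);
});
```

This keeps the DbContext itself free of tenant-resolution logic; it just receives pre-built options.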
# ? May 1, 2024 04:34 |
|
epswing posted:It'd be different for everyone, but what criteria is commonly used to set the TenantID in MyTenantIdentifier to "TenantA" or "TenantB" in an ASP.NET Core project (the currently logged in User? the subdomain?), and where is a reasonable place to store that information? Our IDP just adds the tenant ID to the user's claims.
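As a sketch, reading such a claim in ASP.NET Core might look like the following. The claim type name "tenant_id" is an assumption; it depends entirely on how your IDP issues the token:

```csharp
using Microsoft.AspNetCore.Http;

// Sketch: resolve the tenant from the authenticated user's claims.
// The "tenant_id" claim type is a made-up example.
public class ClaimsTenantProvider
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public ClaimsTenantProvider(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string? GetTenantId() =>
        _httpContextAccessor.HttpContext?.User.FindFirst("tenant_id")?.Value;
}
```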
|
# ? May 1, 2024 20:51 |
|
susan b buffering posted:Our IDP just adds the tenant ID to the user's claims. I really like this method.
|
# ? May 1, 2024 23:31 |
|
I have a very newbish question, since I'm only a hobby developer and even then, I hadn't started Visual Studio for like 2 years before today, so I'm a bit rusty. I'm throwing together a very simple and quick Blazor web app to centralize all the calculators and converters I need for sports (miles to km, pace to speed, pace + time to distance, etc). I'm stuck on the one-rep max calculator, trying to implement a bafflingly simple formula (Epley's), which is 1RM = Weight × (1 + r/30), assuming r > 1. Weight is the weight you're doing the assessment at, r is the number of reps you could do before failure, and this comes out as a higher number because your 1RM is supposed to be heavier than your, say, 5RM. For instance, a 5RM at 50kg should come out as 56.3kg (according to another calculator). Sorry for the long intro, I wanted to give context. Anyway, my code is dumb and so far is: code:
Is there something very basic and dumb I'm missing here?
|
# ? May 5, 2024 14:28 |
|
You're doing integer division. Cast the int to a double first.
|
# ? May 5, 2024 14:35 |
|
Ooh, because I'm using two different types the compiler needs to know which one I actually need, and since I don't specify it by casting, it just uses the simpler of the two?
|
# ? May 5, 2024 14:38 |
|
Furism posted: Ooh, because I'm using two different types the compiler needs to know which one I actually need, and since I don't specify it by casting the type then it just uses the simpler of the two?

Pretty much. Since RepAmount is an int and .NET interprets the literal 30 as an int, it doesn't change it to a double until it needs to. You can either cast RepAmount as part of the equation, or change the 30 to 30.0 and that should do it. E: or just change the declared type of RepAmount to double if you want.
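To make the fix concrete (the variable names here are guesses at the original snippet, which wasn't preserved):

```csharp
int repAmount = 5;
double weight = 50;

// Integer division: 5 / 30 == 0, so the formula collapses to just the weight.
double wrong = weight * (1 + repAmount / 30);            // 50

// Either cast the int, or make the literal a double:
double fixed1 = weight * (1 + (double)repAmount / 30);   // ≈ 58.33
double fixed2 = weight * (1 + repAmount / 30.0);         // ≈ 58.33
```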
|
# ? May 5, 2024 14:41 |
|
Thanks for the clarification, appreciate it. Working fine now!
|
# ? May 5, 2024 14:46 |
|
Is there a forum or a Discord for SixLabors ImageSharp, or any way to get help with that library? I've been using ImageSharp to load up .png files, clone -> rotate -> resize them, and save as webp files. And every once in a while, the files are exported as animated webps (which are indistinguishable to every browser but which Discord can't parse). Why? No idea. All the png files are sourced from various sources (sometimes a PNG or JPG, sometimes a WEBP, often a screen grab), and then cropped in Affinity and exported as png files before running through my C# code. So 1 in 30 or 40 images are coming out as animated webp, even though they always start as png files. Any insight?
|
# ? May 8, 2024 19:02 |
|
CitizenKeen posted:Is there a forum or a Discord for SixLabors ImageSharp, or any way to get help with that library? Is it consistent? If you re-process the same file that resulted in an animated webp, does it still/always result in an animated webp file?
|
# ? May 8, 2024 19:16 |
|
You described the symptom "does not work in Discord" but how did you get from that to "it is an animated webp"?
|
# ? May 8, 2024 19:42 |
|
From having a look at the source code, maybe some of the PNGs you have are actually APNGs? Those would be able to contain multiple frames while still being completely backwards compatible with regular PNG (displayed as the first frame only by programs that don't understand APNG). ImageSharp understands APNGs and would dutifully convert them into animated WEBPs. If I understand the ImageSharp API correctly, I guess you could try loading all images with the DecoderOptions configured with MaxFrames = 1, which would get rid of any animated file content and convert everything to a normal WEBP.
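If I'm reading the ImageSharp API right, that suggestion would look something like this; a sketch, not tested against the actual pipeline, and the file names are placeholders:

```csharp
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats;
using SixLabors.ImageSharp.Formats.Webp;

// Sketch: decode only the first frame, so an APNG input
// can't turn into an animated webp on the way out.
var decoderOptions = new DecoderOptions { MaxFrames = 1 };

using var image = Image.Load(decoderOptions, "input.png");
image.Save("output.webp", new WebpEncoder());
```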
|
# ? May 8, 2024 20:51 |
|
epswing posted: Is it consistent? If you re-process the same file that resulted in an animated webp, does it still/always result in an animated webp file?

Yes, the same files will result in the same problem.

EssOEss posted: You described the symptom "does not work in Discord" but how did you get from that to "it is an animated webp"?

I posted two webp files, one that previews in Discord and one that does not, in the DSharpPlus Discord (the .NET Discord bot library I'm using), and it was noted that the working one was a webp, the not-working one was an animated webp, and that Discord does not render animated webp files.

SirViver posted: From having a look at the source code, maybe some of the PNGs you have are actually APNGs? Those would be able to contain multiple frames while still being completely backwards compatible with regular PNG (displayed as the first frame only by programs that don't understand APNG). ImageSharp understands APNGs and would dutifully convert them into animated WEBPs.

I've never heard of an APNG file. That's a strong contender for the culprit: if I start with an animated WEBP, open and edit it in Affinity, then export, and it exports as an APNG, then run my ImageSharp code against it, I could see that being the culprit. I had no idea animated PNGs were a thing. This gives me an avenue for investigation. Thank you!
|
# ? May 8, 2024 21:12 |
|
APNG is a non-standard extension, which is why it's kinda rare, but some people still use it because it's less stupid than webm/animated webp. The Correct(TM) way of doing it would be to see if there's more than one frame in the png and, if so, convert to webm instead, but that's, y'know, a whole lot of extra effort.
|
# ? May 8, 2024 22:51 |
|
Kyte posted:APNG is a non-standard extension, hence why it's kinda rare but some people still use it because it's less stupid than webm/animated webp. Not to clutter the .NET thread but I don't want the extra frames (I suspect someone upstream is making them animated by accident - they're static images), so I'm just flattening them when I convert from WEBP to PNG. Seems to solve the problem just fine.
|
# ? May 8, 2024 23:06 |
|
Jen heir rick posted: I hate writing web apps and wish I could write native apps, but it seems like the world is moving to hybrid/web type apps. The economics are just too good. You can have a single team write an app that works on Android, iOS, macOS, Linux, and Windows. You just can't beat that with native apps. You need a team or developer for each platform. The suits just aren't gonna pay for that. Even if the result is a shittier experience. People will get over it. poo poo sucks yo.

I work on winforms apps for a living. Yes I'm a dinosaur, yes the last couple job hunts epically sucked, but I don't ever have to work on web poo poo. It's great.
|
# ? May 23, 2024 21:05 |
|
Blue Footed Booby posted: I work on winforms apps for a living.

I'd think that WinForms talent would be harder to come by these days, so you would be in more demand. Not so much?
|
# ? May 23, 2024 21:35 |
|
Blue Footed Booby posted:I work on winforms apps for a living. If I have it my way, I will be writing WPF and (every now and then) WinForms apps for the rest of my life. MS Build has basically cemented WPF as a standard for Desktop going forward too, which is just great. gently caress web. gently caress it with the largest, thorniest stick you can find. What a tremendous waste of everyone's loving time.
|
# ? May 23, 2024 22:21 |
|
You find a legacy .NET app that is too big and makes too much money to port to something modern.
|
# ? May 24, 2024 01:53 |
|
We recently made a web app for our largest client to replace the WPF app we had previously maintained for them, and used Blazor. It's fine. There are things that are nicer about WPF, but there are things that are nicer in the browser, too. I'd use either going forward, based on the requirements of the project.
|
# ? May 24, 2024 10:23 |
|
Calidus posted:You find legacy .NET app that is too big and makes too much money to port to something modern. In my experience, the problem with this plan is that any org that is staying on a legacy app is probably doing so because they're too cheap to replace it or because the existing knowledge base ran out on them since they were too cheap to offer adequate raises.
|
# ? May 24, 2024 10:54 |
|
Magnetic North posted:In my experience, the problem with this plan is that any org that is staying on a legacy app is probably doing so because they're too cheap to replace it or because the existing knowledge base ran out on them since they were too cheap to offer adequate raises. Welcome to Manufacturing!
|
# ? May 24, 2024 14:24 |
|
Hammerite posted: We recently made a web app for our largest client to replace the WPF app we had previously maintained for them, and used Blazor. It's fine. There are things that are nicer about WPF, but there are things that are nicer in the browser, too. I'd use either going forward, based on the requirements of the project.

Of the cancerous growths that are web frameworks, Blazor is probably the best of what we got. It's still loving terrible.

We have a couple apps that have desktop and web-based components in Blazor, and there are a couple that have identical UIs and (as close as we reasonably can get) identical functionality. They all hit the same service end points and junk. The WPF one is just way, way better. It renders an order of magnitude faster. It supports a lot of common operations 'for free' without having to do cursed surgery on a browser (drag and drop, context menu, etc.), our desktop app has tear-able, dock-able tabs because that's a thing you can just do, and interacting with resources on the local machine is actually a thing you can do. The list goes on.

No - the browser is where functionality, UX, and performance go to die. Platform considerations aside, literally the only thing you are getting out of deploying on a browser is distribution. You are sacrificing everything at the altar of not being forced to hand off an installer once per person - it's complete nonsense. Now, obviously depending on what skillset your team might have, that's something that matters for an existing effort, but hiring native desktop programmers isn't hard. We get a pile of apps every time a job req goes up - there are way more of these people than twitter etc. would let on.

Canine Blues Arooo fucked around with this message at 18:08 on May 24, 2024 |
# ? May 24, 2024 17:46 |
|
Canine Blues Arooo posted: No - the browser is where functionality, UX, and performance go to die. Platform considerations aside, literally the only thing you are getting out of deploying on a browser is distribution. You are sacrificing everything at the altar of not being forced to hand off an installer once per person - it's complete nonsense.

I used the cloud version of Photoshop the other day and the user experience was almost identical, save that I didn't have to download and install gigabytes of crap in order to change the color of an image from yellow to red. Yeah, it sure must have sucked to develop cloud Photoshop, but as an end user it's super convenient.
|
# ? May 24, 2024 17:55 |
|
Canine Blues Arooo posted: Of the cancerous growths that are web frameworks, Blazor is probably the best of what we got. It's still loving terrible.

These criticisms are a mixture of complaints that I agree are genuine drawbacks of web apps over desktop apps, complaints where I think we have different priorities, and complaints that I just don't think matter much.

For an example of the first type: yes, it's much less convenient to work with context menus in the browser compared to a desktop app, and to address things at a very high level, this comes originally from the fact that browser conventions are built on a history of the web as a place where documents are shared rather than applications.

For an example of the last type: having "tear-able, dock-able tabs" is something that we had in the desktop app I referred to, and we now don't have it in the Blazor application. But does this genuinely mean that something is wrong/missing, or was having rearrangeable panels just a way of throwing our hands up and abandoning our task as application developers of coming up with a UI layout for the app that "just works" for the workflows, the use cases we want to support? I think that for most apps, if you arrive at the correct layout (and gain confidence that it's correct by talking to the people who actually use the app, getting their feedback, and implementing the changes they ask for after understanding why they were asking for them) then you don't need all those rearrangeable panels and so on.

This becomes less true the more different ways there are to use the app. For example, an IDE like Visual Studio, which is basically a multi-document editor with a plethora of integrated specialised panels for different purposes: there's no way around it, that absolutely needs the user to be able to customise the layout, just because there are so many ways to use it. And I would not want to use a "web app" version of Visual Studio, at all. 
But most apps just aren't that complex.
|
# ? May 25, 2024 15:41 |
|
ASP.NET project using .NET 6 and EF Core with a SQL Server database. Mostly a LOB app, where users can Create/Save orders, etc. In addition to that, a user can Send an order, which talks to another system on the internet; this operation could take a long time (seconds). The Save and Send operations for the same order must run serially, but it's fine for them to run in parallel for different orders.

The order entity currently has a LastModified timestamp property, which is checked and updated during the Save operation. While saving, if the LastModified property doesn't match the LastModified of the order in the DB, someone else has modified it since the user opened it, and an error is thrown. The Send operation updates LastModified as well. Some scenarios:

User A opens Order 1, User B opens Order 2, and both click Save (or Send) at the same time. This is fine: different orders, no conflicts.

User A and User B both open Order 1, and both click Save at the same time. One of them has to win; the other has to lose and will be shown an error (something like "Record has been modified by another user"). Checking LastModified is a race condition that needs to be solved, but because the Save operation is fast, it's rare.

User A and User B both open Order 1, and one clicks Send while the other clicks Save at roughly the same time. Same problem as above, but this race condition is terrible and happens frequently, because the Send operation could take several seconds, so the order's rear end is exposed the whole time.

To stop the bleeding I could create a static obj and wrap both Save and Send in a lock (obj) { ... }. I believe this would eliminate the race conditions, but now nothing happens in parallel; i.e., User A runs Send, which takes 10 seconds, and Users B, C, and D all innocently click Save on different orders, all of which have to wait.

Can I leverage EF's transactions here? Is this more of a SemaphoreSlim problem? What's the modern way to handle this? 
I've read https://learn.microsoft.com/en-us/ef/core/saving/concurrency and concurrency tokens look interesting, but I'm confused about which isolation level I'd use here, and I'm not sure how to avoid deadlocks. Also, let's pretend I'm not allowed to make a database schema change at this point.
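For what it's worth, marking the existing LastModified column as a concurrency token is configuration only and needs no schema change. A sketch, assuming an Order entity shaped like the one described:

```csharp
using Microsoft.EntityFrameworkCore;

// Sketch: treat the existing LastModified column as an EF Core concurrency token.
// EF then adds the original value to the UPDATE's WHERE clause and throws
// DbUpdateConcurrencyException if no row matched (i.e. someone else changed it).
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>()
        .Property(o => o.LastModified)
        .IsConcurrencyToken();
}
```

This covers the simultaneous-write race at the default isolation level, though not the long window of a Send on its own.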
|
# ? Jun 5, 2024 14:57 |
How much can you change things up? Like, could you add a State field to the Order, with values of New, ToSend, Sent? So when a user clicks to send an order, it just changes that State field, and then a background job picks up the ToSend order and changes it to Sent when done. Meanwhile, clients will see that the order is being processed and can't modify it.
|
|
# ? Jun 5, 2024 15:17 |
|
nielsm posted:How much can you change things up? Like, could you add a State field to the Order, with values of New, ToSend, Sent? So when a user clicks to send an order, it just changes that State field, and then a background job picks up the ToSend order and changes it to Sent when done. Meanwhile, clients will see that the order is being processed and can't modify it. I can't modify the database at the moment. Even if I could, I think I'd have the same race condition with the State field as I do now with the LastModified field.
|
# ? Jun 5, 2024 15:33 |
|
Your shared resource is the database row, so whatever you do has to be database-based. There's no guarantee in-memory solutions will work correctly in ASP.NET. Normally I'd suggest taking a database lock, but one of the operations is gonna run long, so that's probably not ideal.

A concurrency check takes you halfway there, but it only protects against simultaneous writes. You want that, but you also want to lock the whole period between read and write for the Send operation. So you need a field that signals the application(s) that the row is in use.

Another issue you didn't mention but is probably worth considering is what happens if a user Save()s 1 second before another user Send()s. It's technically correct, but the sender probably would like to review the updated data before sending. Therefore, your lock should probably last a bit longer than the actual save operation.

Since you can't change the schema, a somewhat janky solution is to use the LastModified column. Simply put, if LastModified >= DateTime.Now() (+ some constant if you're worried about clock sync), then the row is in use and cannot be operated on. Send() can use MaxValue to signal it's completing at an indeterminate point in the future. Save() can then identify MaxValue and tell the user "this row is currently being sent". Save() can update LastModified to Now + some reasonable lockout period + constant. Send() can then tell the user that data changed between reviewing and sending.

Kyte fucked around with this message at 17:25 on Jun 5, 2024 |
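A rough sketch of that sentinel convention, to make it concrete. The Order type is the poster's entity, the lockout period is arbitrary, and DateTime.Now is used to match the post (UtcNow would avoid timezone surprises):

```csharp
using System;

// Sketch of the LastModified-as-lock idea described above; values are illustrative.
static class OrderLockConvention
{
    static readonly TimeSpan Lockout = TimeSpan.FromSeconds(30);

    // A LastModified in the future (including the MaxValue sentinel) means "in use".
    public static bool IsRowInUse(DateTime lastModified) =>
        lastModified >= DateTime.Now;

    // Send marks the row busy until an indeterminate point in the future.
    public static void MarkSending(Order order) =>
        order.LastModified = DateTime.MaxValue;

    // Save locks the row out briefly so a would-be sender reviews fresh data first.
    public static void MarkSaved(Order order) =>
        order.LastModified = DateTime.Now + Lockout;
}
```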
# ? Jun 5, 2024 17:20 |
|
e: this was a wrong post.
|
# ? Jun 5, 2024 19:18 |
|
On a SQL Server database like you mentioned, you can use Application Locks to provide the longer locking you're looking for over the lifetime of the Send operation. Call sp_getapplock to acquire a named lock (so you can have a different lock per order) prior to initiating the Save/Send operation, and sp_releaseapplock after it's complete. The only restriction is that applocks are scoped to either a single transaction or a single database connection, so you'll need to hold an open database connection across the Send operation. The applock will release automatically when the scope it's bound to ends (whether that's a single transaction or a database connection). Depending on the arguments you pass to sp_getapplock, other users can either immediately bounce off the held lock and get a "record in use" error message instead, or they can queue up to wait to get it when it becomes available.
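A sketch of what that could look like from C# with plain ADO.NET; the resource name, timeout, and SendOrderAsync helper are placeholders:

```csharp
using System.Data;
using Microsoft.Data.SqlClient;

// Sketch: hold a per-order application lock for the duration of a Send.
// Using @LockOwner = 'Session' means the connection must stay open while held.
await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

await using var acquire = new SqlCommand(@"
    DECLARE @result int;
    EXEC @result = sp_getapplock @Resource = @res, @LockMode = 'Exclusive',
                                 @LockOwner = 'Session', @LockTimeout = 0;
    SELECT @result;", connection);
acquire.Parameters.AddWithValue("@res", $"order-{orderId}");

// sp_getapplock returns >= 0 on success, < 0 on failure/timeout.
var result = (int)await acquire.ExecuteScalarAsync();
if (result < 0)
    throw new InvalidOperationException("Order is in use by another operation.");

try
{
    await SendOrderAsync(orderId); // the long-running operation
}
finally
{
    await using var release = new SqlCommand(
        "EXEC sp_releaseapplock @Resource = @res, @LockOwner = 'Session';", connection);
    release.Parameters.AddWithValue("@res", $"order-{orderId}");
    await release.ExecuteNonQueryAsync();
}
```

With @LockTimeout = 0 other callers fail fast ("record in use"); a positive timeout makes them queue instead.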
|
# ? Jun 5, 2024 20:36 |
|
Assuming you are just running a single instance of your application you could hack this in a number of ways, but easiest would probably be to use the concurrent collections. Get yourself a singleton and for each order put a little "lock" into a ConcurrentDictionary or whatever. Remove when done, check that it's not busy before taking action. It also feels like EF should be able to tell you if it has pending changes that haven't been SaveChanges()-ed yet… I assume that it's the EF concurrency errors that are being thrown?
|
# ? Jun 5, 2024 20:43 |
|
epswing posted: ASP.NET project using .NET 6 and EF Core with a SQL Server database. Mostly a LOB app, where users can Create/Save orders, etc. In addition to that, a user can Send an order, which talks to another system on the internet, this operation could take a long time (seconds).

I have these issues, funny enough on 'orders' myself. Made worse by it not being a single database row, but a view that incorporates other items (inventory count, inventory pulled status, etc), so other table changes affect my orders and the employee's permissions (an order can't be sent for pulling inventory if there is no inventory on the order, inventory on the order is limited by the client on the order, etc). I used to use a lastupdated field, but I hated that method.

I ended up doing a SignalR backend to orchestrate changes based on whether they actually caused a concurrency issue. This way users could see when another employee was viewing or editing orders and see their changes in real time as they typed or made changes. 99% of my cases were an employee opening an order and moving on, then coming back to that order 5 minutes or 2 days later, so larger updates like inventory images and client or document generation didn't make sense to do in real time. If the page wasn't in focus/topmost I'd simply place an overlay on the order notifying that the order was edited and to click it to remove the overlay, which refreshed the order.

The biggest advantage was really just allowing them to see that another employee was actively interacting with an order, which either got them to communicate, or at least wait.
|
# ? Jun 5, 2024 22:24 |
|
|
zokie posted:Assuming you are just running a single instance of your application you could hack this in a number of ways, but easiest would probably be to use the concurrent collections. Get yourself a singleton and for each order put a little “lock” into the ConncurrentDictionary or whatever. Remove when done, check that it’s not busy before taking action. Neat, ok, I didn't think of the thread-safe collections. For the "check that it’s not busy before taking action" part, if the record is busy, sounds like I can either reject the action, or wait until the record is not busy. Reject would look something like this? C# code:
C# code:
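Since the snippets above weren't preserved, here are sketches of what the two variants might look like. The names (BusyOrders, DoSaveAsync, DoSendAsync) are invented, and as zokie noted this only works for a single application instance:

```csharp
using System.Collections.Concurrent;

// Sketch 1: reject if the order is already busy.
static readonly ConcurrentDictionary<int, byte> BusyOrders = new();

async Task SaveAsync(int orderId)
{
    if (!BusyOrders.TryAdd(orderId, 0))
        throw new InvalidOperationException("Order is busy, try again.");
    try
    {
        await DoSaveAsync(orderId); // hypothetical save
    }
    finally
    {
        BusyOrders.TryRemove(orderId, out _);
    }
}

// Sketch 2: wait instead of rejecting, with one SemaphoreSlim per order.
static readonly ConcurrentDictionary<int, SemaphoreSlim> OrderLocks = new();

async Task SendAsync(int orderId)
{
    var gate = OrderLocks.GetOrAdd(orderId, _ => new SemaphoreSlim(1, 1));
    await gate.WaitAsync();
    try
    {
        await DoSendAsync(orderId); // hypothetical long-running send
    }
    finally
    {
        gate.Release();
    }
}
```

Different orders proceed in parallel; Save and Send on the same order serialize against each other.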
epswing fucked around with this message at 13:49 on Jun 6, 2024 |
# ? Jun 6, 2024 13:42 |