Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
Our code is very database-centric and focused on performance so we don't touch EF for anything. I just use Dapper and it takes care of my "mapping" for me. If I have data in memory already I stick to manually mapping it to types using ctors or linq, or using linq to project to an anonymous type if I'm only using it locally.

I take a YAGNI approach to returning data. I try to avoid flattening hierarchical data as much as possible and only do it if there's a requirement. I only return columns that are asked for. For me this means I don't have as much code to write and support. I don't get very many follow up requests for more data. YMMV though, and this obviously works better for internal APIs used by a relatively small number of API clients (most of my APIs have one GUI and maybe 2 or 3 other APIs making calls to them).
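For illustration, the Dapper half of that looks roughly like this - a minimal sketch, assuming Microsoft.Data.SqlClient and made-up Invoice names; anything already in memory just gets projected with Select() to whatever shape the caller needs:

C# code:
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

public class InvoiceSummary
{
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

public class InvoiceRepository
{
    private readonly string _connectionString;

    public InvoiceRepository(string connectionString) => _connectionString = connectionString;

    public async Task<IEnumerable<InvoiceSummary>> GetUnpaidSummariesAsync()
    {
        using var conn = new SqlConnection(_connectionString);
        // Dapper matches result columns to InvoiceSummary properties by name - no mapping config to maintain.
        return await conn.QueryAsync<InvoiceSummary>(
            "SELECT Id, Customer, Total FROM Invoice WHERE Paid = 0");
    }
}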

MadFriarAvelyn
Sep 25, 2007

ThePeavstenator posted:

For real though, auto-mapping libraries give you 5 minutes of convenience and a lifetime of pain.

I want to print this on a t-shirt and wear it around the (virtual) office. Any enterprise project I've worked on in the past ~5 years that has used something like, say, AutoMapper, has been nothing but developers accidentally breaking the application because they didn't update a mapping and didn't catch the issue in dev/qa testing.

mortarr
Apr 28, 2005

frozen meat at high speed

MadFriarAvelyn posted:

I want to print this on a t-shirt and wear it around the (virtual) office. Any enterprise project I've worked on in the past ~5 years that has used something like, say, AutoMapper, has been nothing but developers accidentally breaking the application because they didn't update a mapping and didn't catch the issue in dev/qa testing.

I really used to love automagic, generic tricks, and general technical wankery, but now I find I'm just not interested in investing the mental energy to do that poo poo - I just want code that works and that I can be fairly sure I'll understand when I come back to fix some critical bug or add a vital feature after not touching the codebase for ages.

I kind of wish I'd just rolled my own mappings over the last few years, tbh, there'd be less technical debt and it'd be a lot clearer how and when the mapping/transformations are happening.

B-Nasty
May 25, 2005

I honestly tried to give AutoMapper a shot doing some basic DTO/domain object to API model class mappings, and the complexity quickly spirals out of control. Pretty soon, you have this massive, convoluted mapping configuration to handle nested objects and all sorts of scenarios that should be pretty basic in the grand scheme of things. It goes from 'neat' to feeling like you're fighting it.

Manual object mapping code is janitorial coding, but it's easy to understand and you (or your successor) won't want to murder you in a few years.
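For anyone who hasn't written one lately, the janitorial version is about as dull as it sounds - a sketch with invented Order/OrderModel types:

C# code:
using System.Collections.Generic;
using System.Linq;

public class OrderLine { public decimal Price { get; set; } public int Quantity { get; set; } }
public class Customer { public string Name { get; set; } }

public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
    public List<OrderLine> Lines { get; set; } = new();
}

public class OrderModel
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class OrderMappings
{
    // Boring janitorial code, but every assignment is explicit and it breaks at compile time
    // when a property is renamed - which is exactly the failure mode AutoMapper hides.
    public static OrderModel ToModel(this Order order) => new OrderModel
    {
        Id = order.Id,
        CustomerName = order.Customer?.Name,
        Total = order.Lines.Sum(l => l.Price * l.Quantity)
    };
}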

brap
Aug 23, 2004

Grimey Drawer
I haven’t used it in a project but I’m keen on the idea of using a code fix to generate the initial implementation of your mapping function, then manually updating or regenerating it as needed.

https://cezarypiatek.github.io/post/generate-mapping-code-with-roslyn/

insta
Jan 28, 2009
gently caress i am stealing all that. I literally just wrote an article on our corporate wiki about why we implement the IConverter<TIn, TOut> interface manually instead of AutoMapper.

Also gently caress any team that buries AutoMapper directly in code instead of behind an interface.
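The interface itself isn't shown above, but presumably it's something along these lines (shape and names assumed, not insta's actual code):

C# code:
// Callers depend on this abstraction, not on AutoMapper or any other mapping library.
public interface IConverter<in TIn, out TOut>
{
    TOut Convert(TIn input);
}

public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class CustomerDto { public int Id { get; set; } public string DisplayName { get; set; } }

public class CustomerDtoConverter : IConverter<Customer, CustomerDto>
{
    // Hand-written, so a renamed or removed property is a compile error, not a runtime surprise.
    public CustomerDto Convert(Customer input) => new CustomerDto
    {
        Id = input.Id,
        DisplayName = input.Name
    };
}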

distortion park
Apr 25, 2011


We do code generation based on the database schema (tables/views only) into some subfolder of the project, you can regenerate it at will. This is nice because:
* It is super easy to understand - the classes are right there! Your debugger works like normal! No reflection.
* You don't have to do the boring, error prone bit of manual mapping yourself.
* You can run it as part of your CI suite to make sure no one forgets to do it/it's always up to date (pick behaviour to taste)
* Works beautifully with dapper.

Anything more complex than POCOs of db tables you then do by hand as normal, but they generally suck with an ORM anyway. I think in theory you could extend it easily enough to stored procs but we sadly don't use many of them.
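Roughly the kind of file such a generator emits (names invented; the real thing obviously depends on the schema):

C# code:
// <auto-generated>
//     Generated from the database schema - regenerate rather than editing by hand.
// </auto-generated>
using System;

namespace MyApp.Data.Generated
{
    // Mirrors dbo.Customer (table/column names invented for the example).
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public DateTime CreatedAt { get; set; }
    }
}

// Dapper then maps straight onto it, e.g.:
// var customers = await conn.QueryAsync<Customer>("SELECT Id, Name, CreatedAt FROM dbo.Customer");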

Not nearly as good as the type generation available in F# though, that stuff is incredible.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



brap posted:

I haven’t used it in a project but I’m keen on the idea of using a code fix to generate the initial implementation of your mapping function, then manually updating or regenerating it as needed.

https://cezarypiatek.github.io/post/generate-mapping-code-with-roslyn/

This looks fantastic - thanks!

pointsofdata posted:

We do code generation based on the database schema (tables/views only) into some subfolder of the project

What do you use?

Calidus
Oct 31, 2011

Stand back I'm going to try science!
My work does it with EF Core and PowerShell; it works well for smaller databases (<50 tables). The EF Core tools make it much easier to do now than in the past.

raminasi
Jan 25, 2005

a last drink with no ice
Does anyone know anything about the intricacies of UseDefaultCredentials and Kerberos? I am in the unenviable position of needing to get some .NET Core code running in Linux to authenticate against a service running elsewhere that's expecting Windows authentication. (There are solutions to the broader problem that don't require this that we're exploring in parallel, but I'm hoping I can get this to work.) I discovered that if I install Kerberos and kinit before making the request, it Just Works, which was very exciting.

Until this week, when it stopped working. Two things happened since it was last verified: A domain controller got upgraded (I don't know what this entailed) and we moved the proof of concept over to a proper build pipeline, meaning I may have forgotten to bring some configuration setting over. Now, when I try to do something as simple as
C# code:
var client = new HttpClient(new HttpClientHandler { UseDefaultCredentials = true });
var response = await client.GetAsync("http://my.endpoint");
var content = await response.Content.ReadAsStringAsync();
Console.WriteLine(content);
I get Server not found in Kerberos database. Now, a raw curl will authenticate correctly, so something is working, just not what I want. This is double confusing because I was under the impression that both curl and the .NET stuff are eventually just calling out to the same underlying Kerberos library.

I turned tracing on, and I located the specific difference: .NET constructs the Kerberos SPN using the destination (non-standard) port, but curl doesn't, and apparently the one with the port isn't valid. But I don't know why they're different or how to affect it, and furthermore, the curl one doesn't seem to be correct as documented!

Why did this work before? Why doesn't it work now? Why does curl work? Is the destination host misconfigured somehow? If so, why does it work for actual AD authentication but not my jury-rigged attempt? Could the domain controller update have affected anything? The fact that this just stopped working is driving me nuts.

WorkerThread
Feb 15, 2012

raminasi posted:

Does anyone know anything about the intricacies of UseDefaultCredentials and Kerberos? I am in the unenviable position of needing to get some .NET Core code running in Linux to authenticate against a service running elsewhere that's expecting Windows authentication. (There are solutions to the broader problem that don't require this that we're exploring in parallel, but I'm hoping I can get this to work.) I discovered that if I install Kerberos and kinit before making the request, it Just Works, which was very exciting.

Until this week, when it stopped working. Two things happened since it was last verified: A domain controller got upgraded (I don't know what this entailed) and we moved the proof of concept over to a proper build pipeline, meaning I may have forgotten to bring some configuration setting over. Now, when I try to do something as simple as
C# code:
var client = new HttpClient(new HttpClientHandler { UseDefaultCredentials = true });
var response = await client.GetAsync("http://my.endpoint");
var content = await response.Content.ReadAsStringAsync();
Console.WriteLine(content);
I get Server not found in Kerberos database. Now, a raw curl will authenticate correctly, so something is working, just not what I want. This is double confusing because I was under the impression that both curl and the .NET stuff are eventually just calling out to the same underlying Kerberos library.

I turned tracing on, and I located the specific difference: .NET constructs the Kerberos SPN using the destination (non-standard) port, but curl doesn't, and apparently the one with the port isn't valid. But I don't know why they're different or how to affect it, and furthermore, the curl one doesn't seem to be correct as documented!

Why did this work before? Why doesn't it work now? Why does curl work? Is the destination host misconfigured somehow? If so, why does it work for actual AD authentication but not my jury-rigged attempt? Could the domain controller update have affected anything? The fact that this just stopped working is driving me nuts.

:yikes:

distortion park
Apr 25, 2011


Munkeymon posted:

This looks fantastic - thanks!


What do you use?

We use some abomination that was forked from a typescript code generator years ago. It works completely fine though - on the rare occasions it doesn't you just get a compile error (or in theory integration tests could fail but that's never happened).

It's way simpler than you would think. Loop over each schema -> make a file. Each table becomes a class, and each column a property. Done!
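A hypothetical bare-bones version of that loop, reading INFORMATION_SCHEMA with Dapper (assumed names throughout; a real generator also handles nullability, keys, naming conventions, etc.):

C# code:
using System.IO;
using System.Linq;
using System.Text;
using Dapper;
using Microsoft.Data.SqlClient;

public static class PocoGenerator
{
    public static void Run(string connectionString, string outputDir)
    {
        using var conn = new SqlConnection(connectionString);
        var columns = conn.Query<(string Schema, string Table, string Column, string SqlType)>(
            @"SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
              FROM INFORMATION_SCHEMA.COLUMNS
              ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION");

        // One file per schema, one class per table, one property per column.
        foreach (var schema in columns.GroupBy(c => c.Schema))
        {
            var sb = new StringBuilder();
            foreach (var table in schema.GroupBy(c => c.Table))
            {
                sb.AppendLine($"public class {table.Key}");
                sb.AppendLine("{");
                foreach (var col in table)
                    sb.AppendLine($"    public {MapType(col.SqlType)} {col.Column} {{ get; set; }}");
                sb.AppendLine("}");
                sb.AppendLine();
            }
            File.WriteAllText(Path.Combine(outputDir, $"{schema.Key}.cs"), sb.ToString());
        }
    }

    private static string MapType(string sqlType) => sqlType switch
    {
        "int" => "int",
        "bigint" => "long",
        "bit" => "bool",
        "datetime" or "datetime2" => "System.DateTime",
        "decimal" or "money" => "decimal",
        _ => "string"   // good-enough default for a sketch
    };
}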

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
Using .net 5 and System.Text.Json, what's the simplest way to exclude a property from serialization *on demand*?

I have an exception handling middleware in my services that just catches any unhandled exceptions and logs them, and I want to include the request data as well. But the request could have sensitive information in it which I don't want to log. Ideally I'd want to mask it out to indicate that it was present, but excluding it entirely would be fine.

I can't use [JsonIgnore] because that prevents the property from being deserialized into the request model when it comes in initially as well. So I'm thinking of decorating the sensitive fields with something like [NotLogged] and then having the logger in the exception handler use some custom logic to mask/exclude anything with that attribute. Most of the stuff I'm finding just says to use JsonIgnore or to write a custom converter that extends JsonConverter<T>, but the JsonConverter<T> option requires each converter to be registered via a JsonSerializerOptions object, which isn't practical here since everyone would have to remember to register them.

brap
Aug 23, 2004

Grimey Drawer

beuges posted:

Using .net 5 and System.Text.Json, what's the simplest way to exclude a property from serialization *on demand*?

Is this what you wanted? https://stackoverflow.com/questions/11564091/making-a-property-deserialize-but-not-serialize-with-json-net

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

beuges posted:

Using .net 5 and System.Text.Json, what's the simplest way to exclude a property from serialization *on demand*?

I have an exception handling middleware in my services that just catches any unhandled exceptions and logs them, and I want to include the request data as well. But the request could have sensitive information in it which I don't want to log. Ideally I'd want to mask it out to indicate that it was present, but excluding it entirely would be fine.

I can't use [JsonIgnore] because that prevents the property from being deserialized into the request model when it comes in initially as well. So I'm thinking of decorating the sensitive fields with something like [NotLogged] and then having the logger in the exception handler use some custom logic to mask/exclude anything with that attribute. Most of the stuff I'm finding just says to use JsonIgnore or to write a custom converter that extends JsonConverter<T>, but the JsonConverter<T> option requires each converter to be registered via a JsonSerializerOptions object, which isn't practical here since everyone would have to remember to register them.

https://docs.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonignorecondition?view=net-5.0

Pair with the logic of "ok, don't write this if I null it"
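Which in code would be roughly this (PaymentRequest/CardNumber invented for the example) - WhenWritingNull leaves deserialization alone, and the logging path just blanks the field before serializing:

C# code:
using System.Text.Json;
using System.Text.Json.Serialization;

public class PaymentRequest
{
    public decimal Amount { get; set; }

    // Deserializes normally on the way in; omitted from serialized output whenever it is null.
    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string CardNumber { get; set; }
}

public static class RequestLogging
{
    // In the exception middleware: blank the sensitive field, then serialize for the log.
    public static string ToSafeJson(PaymentRequest request)
    {
        request.CardNumber = null;
        return JsonSerializer.Serialize(request);   // CardNumber is not written at all
    }
}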

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Wrong angle of attack. Masking sensitive data is not your serialization library's concern, it's your logging library's concern.

https://nblumhardt.com/2014/07/using-attributes-to-control-destructuring-in-serilog/

https://docs.sentry.io/platforms/dotnet/guides/nlog/data-management/sensitive-data/
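From the first link, the attribute approach ends up looking roughly like this (the attributes live in the Destructurama.Attributed package these days; the request type here is invented):

C# code:
using Destructurama;
using Destructurama.Attributed;
using Serilog;

public class PaymentRequest
{
    public decimal Amount { get; set; }

    // Serilog skips this property when the object is destructured for logging;
    // JSON (de)serialization of the request itself is completely unaffected.
    [NotLogged]
    public string CardNumber { get; set; }
}

public static class LoggingSetup
{
    public static ILogger Create() =>
        new LoggerConfiguration()
            .Destructure.UsingAttributes()   // enables [NotLogged] and friends
            .WriteTo.Console()               // Serilog.Sinks.Console, or whatever sinks you already use
            .CreateLogger();
}

// Usage: Log.Error("Unhandled exception for request {@Request}", request);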

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

NihilCredo posted:

Wrong angle of attack. Masking sensitive data is not your serialization library's concern, it's your logging library's concern.

https://nblumhardt.com/2014/07/using-attributes-to-control-destructuring-in-serilog/

https://docs.sentry.io/platforms/dotnet/guides/nlog/data-management/sensitive-data/

Thanks for this, the Serilog [NotLogged] attribute is exactly the approach I was looking for. It does tie all the services to Serilog but that's not the biggest deal.

User0015
Nov 24, 2007

Please don't talk about your sexuality unless it serves the ~narrative~!

thank you. this fixed everything except my marriage and crippling gambling addiction.

In unrelated news, EF Core + .net 5 is amazing and I've managed to sweet talk the bossman into letting the upgrade happen Q2 instead of end of year. Dumping EF6 can't come fast enough.

GI_Clutch
Aug 22, 2000

by Fluffdaddy
Dinosaur Gum
Does anyone have any ideas on the best way to handle email alerts? I'm working on a project with very limited hours. I have to process files that are received, and generate output files to be ingested by our systems as well as a third party. There are a number of things that could go wrong throughout the process, but I need to process whatever can be done. For example, if we receive 1000 records and three are bad, the 997 still get processed and we log the errors. It's important, though, that those three records get addressed somewhat quickly. That's where the email alerts come in.

I am using Serilog to log errors to a custom event log. The app exits with an exit code relaying if any errors occurred. The low effort plan (due to lack of hours) is that the app will be called by a PowerShell script and if the exit code is not 0, send an email to the help desk to tell them to check the event log. The manager of the help desk would prefer that errors relating to the files we receive are sent to that vendor with his team CCed instead (so they don't have to be the middle man). I figure to do that I'd need to refactor things quite a bit so that I could aggregate the errors in some way (including which party the error is related to) and then send out one or more emails depending on the intended party with detailed information (we can't just say "hey we got a bad file from you", we'd need to tell them exactly what was wrong). Of course, if poo poo hit the bed and this process never got called, there would be no alert period.

Any thoughts? I know there are some really awesome tools out there that can probably aggregate the issues and handle the alerting, etc. but again, I have limited hours. If anything, I think I might just change what I have slightly to provide various exit codes based on the issues that occurred so the help desk gets a better idea from the "check the event log" email of what went wrong.

GI_Clutch fucked around with this message at 17:46 on Feb 12, 2021

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Since you mentioned Serilog, it supports logging to email - just set the email sink to minimum level Error, and every call to Log.Error() or Log.Fatal() will result in an email. You should of course set a filter so you don't just email them random exceptions.

You can leave it at that, and your vendor will get an email for every bad record, which may be acceptable. Otherwise you'll need to collect the error objects in a List<> and call Log.Error() once at the end of the ingestion loop, which shouldn't be too complicated.
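A rough sketch of both halves, assuming the Serilog.Sinks.Email package and its EmailConnectionInfo-style configuration; the server and addresses are obviously placeholders:

C# code:
using System.Collections.Generic;
using Serilog;
using Serilog.Events;
using Serilog.Sinks.Email;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Email(
        new EmailConnectionInfo
        {
            MailServer = "smtp.example.com",
            FromEmail = "importer@example.com",
            ToEmail = "vendor-support@example.com"
        },
        restrictedToMinimumLevel: LogEventLevel.Error)   // only Error/Fatal trigger mail
    .CreateLogger();

// Collect record-level problems during the run, then send one detailed email at the end
// instead of one email per bad record.
var badRecords = new List<string>();
// ... inside the ingestion loop: badRecords.Add($"Row {rowNumber}: {reason}");
if (badRecords.Count > 0)
    Log.Error("Import finished with {Count} bad records: {BadRecords}", badRecords.Count, badRecords);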

insta
Jan 28, 2009
That is exactly what a service bus is for. Like, all of it.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

GI_Clutch posted:

Does anyone have any ideas on the best way to handle email alerts? I'm working on a project with very limited hours. I have to process files that are received, and generate output files to be ingested by our systems as well as a third party. There are a number of things that could go wrong throughout the process, but I need to process whatever can be done. For example, if we receive 1000 records and three are bad, the 997 still get processed and we log the errors. It's important, though, that those three records get addressed somewhat quickly. That's where the email alerts come in.

I am using Serilog to log errors to a custom event log. The app exits with an exit code relaying if any errors occurred. The low effort plan (due to lack of hours) is that the app will be called by a PowerShell script and if the exit code is not 0, send an email to the help desk to tell them to check the event log. The manager of the help desk would prefer that errors relating to the files we receive are sent to that vendor with his team CCed instead (so they don't have to be the middle man). I figure to do that I'd need to refactor things quite a bit so that I could aggregate the errors in some way (including which party the error is related to) and then send out an email alert with detailed information (we can't just say "hey we got a bad file from you", we'd need to tell them exactly what was wrong). Of course, if poo poo hit the bed and this process never got called, there would be no alert period.

Any thoughts? I know there are some really awesome tools out there that can probably aggregate the issues and handle the alerting, etc. but again, I have limited hours. If anything, I think I might just change what I have slightly to provide various exit codes based on the issues that occurred so the help desk gets a better idea from the "check the event log" email of what went wrong.

You can actually configure serilog to send an email if an exception is thrown in a certain assembly - e.g. https://stackoverflow.com/questions/55138818/serilog-send-mail-if-exception-is-thrown-in-specific-assembly

Aggregation etc. could be done with more time, but that seems like the quickest way of meeting the requirement while keeping the existing logging.

GI_Clutch
Aug 22, 2000

by Fluffdaddy
Dinosaur Gum
Thanks for the replies. I had considered the email sink and don't know why I didn't include it in my writeup. I agree on the service bus reply as well, but since this is a customer project with limited hours that would likely need firewall requests, etc., it's probably a no-go.

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
Writing to the database might be an option too if you already have a report/alerting framework in place that uses the database. We do it this way because we aggregate errors along with other information we store in our database, so adding Serilog errors to the mix was really easy.

Boz0r
Sep 7, 2006
The Rocketship in action.
6 months ago I talked our CTO into giving me a Rider key and God drat, it's been great. It just does everything better than VS. Only problem I've run into is TFS integration, but that's poo poo in VS too. I really recommend trying Rider out.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I have a new WPF problem. Suppose I have a ListView of some buttons for various save games. The save list view also has a button up top to create a new save. Right now, these lists are generated independently for saving and loading. However, this means if I add/remove a game from one screen, the other won't notice it. This is because despite the save data coming from the same source, the lists bound in each view are separate instances. A normal idea would be to give each view the same displayable data, but the save view has that extra item at the top for creating a new game.

I thought I'd get cute and try a CompositeCollection. However, I hit a wall with sorting. The custom sorter I have for the save listview can't be attached to the composite collection because CompositeCollection doesn't have a CustomSort property in the first place. Sad trombone.

Would a CompositeCollection even be appropriate here? Or should I look into other schemes? Just loading all the poo poo up each time I bring up either view is fine for messing around for a while, but I feel like I'll be coming back around to it later if there's ever a bunch of saves to display. Or maybe that's where pagination would come in.

I think I'm sticking with just loading everything fresh when entering the view, but I figured I'd ask what would be kosher here while it's in front of me and I can dump notes about it in a TODO or something.

Mr Shiny Pants
Nov 12, 2012

Rocko Bonaparte posted:

I have a new WPF problem. Suppose I have a ListView of some buttons for various save games. The save list view also has a button up top to create a new save. Right now, these lists are generated independently for saving and loading. However, this means if I add/remove a game from one screen, the other won't notice it. This is because despite the save data coming from the same source, the lists bound in each view are separate instances. A normal idea would be to give each view the same displayable data, but the save view has that extra item at the top for creating a new game.

I thought I'd get cute and try a CompositeCollection. However, I hit a wall with sorting. The custom sorter I have for the save listview can't be attached to the composite collection because CompositeCollection doesn't have a CustomSort property in the first place. Sad trombone.

Would a CompositeCollection even be appropriate here? Or should I look into other schemes? Just loading all the poo poo up each time I bring up either view is fine for messing around for a while, but I feel like I'll be coming back around to it later if there's ever a bunch of saves to display. Or maybe that's where pagination would come in.

I think I'm sticking with just loading everything fresh when entering the view, but I figured I'd ask what would be kosher here while it's in front of me and I can dump notes about it in a TODO or something.

Give them the same list as a source and just have the savelist get an extra item that you create at runtime.
Or make a separate button for creating a new save.
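A minimal sketch of the shared-source idea, with the "new save" action as its own button rather than a fake list item (SaveGameStore and friends are invented names):

C# code:
using System;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows.Data;

public class SaveGameInfo
{
    public string Name { get; set; }
    public DateTime SavedAt { get; set; }
}

// Registered as a singleton so the load view and the save view bind to the *same* instance;
// adding or deleting a save in one screen raises collection-change events both views see.
public class SaveGameStore
{
    public ObservableCollection<SaveGameInfo> Saves { get; } = new();
}

public class SaveViewModel
{
    public ICollectionView SavesView { get; }

    public SaveViewModel(SaveGameStore store)
    {
        // Each view gets its own view (and sort) over the one shared collection.
        SavesView = new CollectionViewSource { Source = store.Saves }.View;
        SavesView.SortDescriptions.Add(
            new SortDescription(nameof(SaveGameInfo.SavedAt), ListSortDirection.Descending));
        // The "create new save" action lives on a separate button/command rather than as a
        // fake list item, so no CompositeCollection (or its missing CustomSort) is needed.
    }
}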

LongSack
Jan 17, 2003

Is there an elegant way to handle the desire to call async methods in a constructor?

I’m working on the Employee Portal for my demo app, using ASP.NET Core 5.0, and I have a controller (AdminController) that maintains a list of users and roles. It would be really convenient to load those lists in the constructor. Originally, I was doing
C# code:
GetUsers().GetAwaiter().GetResult();
GetRoles().GetAwaiter().GetResult();
But this caused sporadic EF exceptions.

So now, in each action method, the first thing I have to do is await GetUsers() or await GetRoles(), and it really feels clumsy. I mean, it works, and no more EF exceptions, but I’m wondering if there’s a better way.

raminasi
Jan 25, 2005

a last drink with no ice
I’d use a private constructor and a public static factory method. oops, missed that it’s a controller so you don’t control instantiation

raminasi fucked around with this message at 02:01 on Feb 15, 2021

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
Without getting into language purity arguments with anyone - not really, and it's by design. If it were supported, there would be a form of the constructor that returned a Task<YourClass> instead of just YourClass, and code written to invoke your constructor would know to await it.

One workaround is to use a lazy async factory to instantiate objects like that so the invoking code awaits creation properly.

Edit: sorry, mixing patterns in my brain. There are async lazy properties, and then there's just using an async factory. Here's the latter:

C# code:
public class Foo
{
    public FooData Data { get; set; }

    public static async Task<Foo> BuildFooAsync(DataContext context)
    {
        var data = await context.GetDataEtc();
        return new Foo(data);
    }

    private Foo(FooData data)
    {
        Data = data;
    }
}

// somewhere else:
var foo = await Foo.BuildFooAsync(context);

// now you've got a fully initialized foo instance

If it's valid for Foo to accept data from elsewhere you could make that ctor public. You could also make the empty constructor private assuming it's never valid.

Cold on a Cob fucked around with this message at 02:21 on Feb 15, 2021

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

LongSack posted:

Is there an elegant way to handle the desire to call async methods in a constructor?

I’m working on the Employee Portal for my demo app, using ASP.NET Core 5.0, and I have a controller (AdminController) that maintains a list of users and roles. It would be really convenient to load those lists in the constructor. Originally, I was doing
C# code:
GetUsers().GetAwaiter().GetResult();
GetRoles().GetAwaiter().GetResult();
But this caused sporadic EF exceptions.

So now, in each action method, the first thing I have to do is await GetUsers() or await GetRoles(), and it really feels clumsy. I mean, it works, and no more EF exceptions, but I’m wondering if there’s a better way.

The available users and roles should be treated as an external dependency. A class that knows how to retrieve those should be injected into the constructor of your controller and methods called as appropriate by the controller's methods. The constructor shouldn't be doing heavy lifting.

Cold on a Cob
Feb 6, 2006

i've seen so much, i'm going blind
and i'm brain dead virtually

College Slice
Re-reading the question: in your case I'd just take a dependency on the data context and then async lazy-load a private variable whenever it's needed, so you don't keep reloading Users and Roles - i.e. internally, always call a method instead of a property, and have that method load the private variable if it's not already loaded. There is a library that can make this easier; google "nito async". I'm assuming potentially stale data is ok since you wanted to load it in a ctor anyway.

Definitely agree with New Yorp New Yorp that no matter what you do, the constructor should not be doing heavy lifting.
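For the "nito async" route, a minimal sketch assuming the Nito.AsyncEx package (the identity-client abstraction is invented):

C# code:
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Nito.AsyncEx;

public class AdminController : Controller
{
    private readonly AsyncLazy<List<AppUser>> _users;

    public AdminController(IIdentityApiClient identityApi)
    {
        // Nothing is fetched here - the ctor just wires up the lazy factory.
        _users = new AsyncLazy<List<AppUser>>(() => identityApi.GetUsersAsync());
    }

    public async Task<IActionResult> Index()
    {
        // First await triggers the API call; later awaits reuse the cached result
        // for the lifetime of this controller instance.
        var users = await _users;
        return View(users);
    }
}

// Assumed abstractions, not real types from the thread:
public class AppUser { public string UserName { get; set; } }
public interface IIdentityApiClient { Task<List<AppUser>> GetUsersAsync(); }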

LongSack
Jan 17, 2003

New Yorp New Yorp posted:

The available users and roles should be treated as an external dependency. A class that knows how to retrieve those should be injected into the constructor of your controller and methods called as appropriate by the controller's methods. The constructor shouldn't be doing heavy lifting.

If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes would also have to be async, so I don’t see what that gains me. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods.

I didn’t mention in my original post that the actual identity stuff is done through an API, so every operation is done asynchronously. In a monolithic app, I would use _userManager.Users and _roleManager.Roles. Maybe that changes things.

Of course, it’s always possible (probable) that I’m misunderstanding

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

LongSack posted:

If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes would also have to be async, so I don’t see what that gains me. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods.

I didn’t mention in my original post that the actual identity stuff is done through an API, so every operation is done asynchronously. In a monolithic app, I would use _userManager.Users and _roleManager.Roles. Maybe that changes things.

Of course, it’s always possible (probable) that I’m misunderstanding

Async constructors don't really make sense. An async method returns Task<T>, but a constructor is supposed to return T. And even imagining that little detail is glossed over, what should happen to the object if an async method necessary for its creation fails in the middle?

Having a separate async initialization method (or doing the work in the action methods) is the way to go here.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

LongSack posted:

If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes would also have to be async, so I don’t see what that gains me. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods.

I didn’t mention in my original post that the actual identity stuff is done through an API, so every operation is done asynchronously. In a monolithic app, I would use _userManager.Users and _roleManager.Roles. Maybe that changes things.

Of course, it’s always possible (probable) that I’m misunderstanding

I'm saying "don't call those methods in the constructor". The constructor should just follow standard DI patterns to receive an instance of something that knows how to make those API calls. Then the API is called by whatever methods need them, other than the constructor.

Try to treat constructors as dumb things that just take references and assign them to local variables as needed and make no method calls of their own.

Cuntpunch
Oct 3, 2003

A monkey in a long line of kings

LongSack posted:

If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes would also have to be async, so I don’t see what that gains me. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods.

I didn’t mention in my original post that the actual identity stuff is done through an API, so every operation is done asynchronously. In a monolithic app, I would use _userManager.Users and _roleManager.Roles. Maybe that changes things.

Of course, it’s always possible (probable) that I’m misunderstanding

Those are conceptually Lazy properties.

So you have a UsersController that needs to get a list of Users. Great, that's an external dependency, so we create IUserManager and add it as a dependency to our UsersController. Then we add a UserManager as the implementation of it and register it with DI.

UserManager conceivably needs to fetch data from persistence somewhere. Let's say HTTP. So *it* needs an IHttpClientFactory or similar.

Now, our UserManager can offer a .Users getter-property, or a .GetUsers() method, to taste.

Internally it has a field storing our collection of users. Let's say we instantiate that as just *empty*.

Now, *at runtime* - we're going to be able to instantiate a UserManager (the Http stuff handled by ASP.NET), and that lets us instantiate a UsersController, and we're ready to go.

When our UsersController endpoint gets hit, it calls into the UserManager to get Users.

Our UserManager looks at its own internal state and branches. If its internal list is empty, make a call to go fetch that data and populate its list of known users. If it isn't empty, then we've already done that, and can just pass that data forward.
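Stitched together, that shape might look roughly like this - all names assumed, System.Net.Http.Json used for brevity:

C# code:
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class User { public string UserName { get; set; } }

public interface IUserManager
{
    Task<IReadOnlyList<User>> GetUsersAsync();
}

public class UserManager : IUserManager
{
    private readonly IHttpClientFactory _httpClientFactory;
    private List<User> _users = new();   // starts empty; filled on first use

    public UserManager(IHttpClientFactory httpClientFactory) => _httpClientFactory = httpClientFactory;

    public async Task<IReadOnlyList<User>> GetUsersAsync()
    {
        if (_users.Count == 0)
        {
            // "identity-api" is a named client assumed to have its BaseAddress configured in Startup.
            var client = _httpClientFactory.CreateClient("identity-api");
            _users = await client.GetFromJsonAsync<List<User>>("/api/users") ?? new List<User>();
        }
        return _users;
    }
}

public class UsersController : Controller
{
    private readonly IUserManager _userManager;

    // The ctor only stores the dependency - no awaiting, no heavy lifting.
    public UsersController(IUserManager userManager) => _userManager = userManager;

    public async Task<IActionResult> Index() => View(await _userManager.GetUsersAsync());
}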

LongSack
Jan 17, 2003

OK, thanks. That's already what I'm doing in each action method as needed.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
It makes me very happy that I can write "x is not null" in C# now.

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with



Grimey Drawer

Hammerite posted:

It makes me very happy that I can write "x is not null" in C# now.

Seriously though. Null checks in C# did not need to be as goofy as they were at times.

LongSack
Jan 17, 2003

Hammerite posted:

It makes me very happy that I can write "x is not null" in C# now.

Agreed. My favorite thing in C# 8 was switch expressions, and in 9 I really like the new pattern matching stuff like “is not null” and “case 11 or 12 or 13:”. It reads so much cleaner.
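For anyone who hasn't tried them yet, a quick illustration of both features:

C# code:
using System;

public static class PatternExamples
{
    // C# 9 negated pattern - reads far better than !(response is null).
    public static void Handle(string response)
    {
        if (response is not null)
            Console.WriteLine(response.Length);
    }

    // C# 8 switch expression combined with C# 9 "or" patterns.
    public static string Suffix(int day) => day switch
    {
        11 or 12 or 13 => "th",
        int d when d % 10 == 1 => "st",
        int d when d % 10 == 2 => "nd",
        int d when d % 10 == 3 => "rd",
        _ => "th"
    };
}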
