|
Our code is very database-centric and focused on performance so we don't touch EF for anything. I just use Dapper and it takes care of my "mapping" for me. If I have data in memory already I stick to manually mapping it to types using ctors or linq, or using linq to project to an anonymous type if I'm only using it locally. I take a YAGNI approach to returning data. I try to avoid flattening hierarchical data as much as possible and only do it if there's a requirement. I only return columns that are asked for. For me this means I don't have as much code to write and support. I don't get very many follow up requests for more data. YMMV though, and this obviously works better for internal APIs used by a relatively small number of API clients (most of my APIs have one GUI and maybe 2 or 3 other APIs making calls to them).
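For anyone curious, the Dapper side of that is about as simple as it sounds. A minimal sketch of the approach (the connection string, table, and DTO are all invented for illustration):

```csharp
// Dapper handles SQL-to-object mapping; only the columns actually asked for
// are selected (the YAGNI approach described above).
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

public record CustomerDto(int Id, string Name);

public class CustomerQueries
{
    private readonly string _connectionString;
    public CustomerQueries(string connectionString) => _connectionString = connectionString;

    public async Task<IReadOnlyList<CustomerDto>> GetActiveAsync()
    {
        using var conn = new SqlConnection(_connectionString);
        // Dapper maps the result columns onto the record by name - no config.
        var rows = await conn.QueryAsync<CustomerDto>(
            "SELECT Id, Name FROM Customers WHERE IsActive = 1");
        return rows.ToList();
    }
}
```

In-memory projection then stays plain LINQ (`customers.Select(c => new { c.Id, c.Name })`) with no mapping library involved.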
|
# ? Feb 1, 2021 14:19 |
|
|
ThePeavstenator posted:For real though, auto-mapping libraries give you 5 minutes of convenience and a lifetime of pain. I want to print this on a t-shirt and wear it around the (virtual) office. Any enterprise project I've worked on in the past ~5 years that has used something like, say, AutoMapper, has been nothing but developers accidentally breaking the application because they didn't update a mapping and didn't catch the issue in dev/qa testing.
|
# ? Feb 1, 2021 16:08 |
|
MadFriarAvelyn posted:I want to print this on a t-shirt and wear it around the (virtual) office. Any enterprise project I've worked on in the past ~5 years that has used something like, say, AutoMapper, has been nothing but developers accidentally breaking the application because they didn't update a mapping and didn't catch the issue in dev/qa testing. I really used to love automagic, generic tricks, and general technical wankery, but now I find I'm just not interested in investing the mental energy to do that poo poo - I just want code that works and that I can be fairly sure I'll understand when I come back to fix some critical bug or add a vital feature after not touching the codebase for ages. I kind of wish I'd just rolled my own mappings over the last few years, tbh, there'd be less technical debt and it'd be a lot clearer how and when the mapping/transformations are happening.
|
# ? Feb 2, 2021 01:41 |
|
I honestly tried to give AutoMapper a shot doing some basic DTO/domain object to API model class mappings, and the complexity quickly spirals out of control. Pretty soon you have this massive, convoluted mapping configuration to handle nested objects and all sorts of scenarios that should be pretty basic in the grand scheme of things. It goes from 'neat' to feeling like you're fighting it. Manual object mapping code is janitorial coding, but it's easy to understand, and future-you (or your successor) won't want to murder you in a few years.
|
# ? Feb 2, 2021 01:55 |
|
I haven’t used it in a project but I’m keen on the idea of using a code fix to generate the initial implementation of your mapping function, then manually updating or regenerating it as needed. https://cezarypiatek.github.io/post/generate-mapping-code-with-roslyn/
|
# ? Feb 2, 2021 03:11 |
|
gently caress i am stealing all that. I literally just wrote an article on our corporate wiki about why we implement the IConverter<TIn, TOut> interface manually instead of AutoMapper. Also gently caress any team that buries AutoMapper directly in code instead of behind an interface.
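For reference, the hand-rolled version of that pattern is tiny. A sketch using the IConverter<TIn, TOut> interface named above (the Order types are invented):

```csharp
public interface IConverter<in TIn, out TOut>
{
    TOut Convert(TIn input);
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public string? CustomerName { get; set; }
}

public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public string CustomerName { get; set; } = "";
}

public class OrderToOrderDtoConverter : IConverter<Order, OrderDto>
{
    // Every mapping is explicit. Forgetting to map a new Order property is
    // still possible, but at least the omission is visible and greppable here.
    public OrderDto Convert(Order input) => new OrderDto
    {
        Id = input.Id,
        Total = input.Total,
        CustomerName = input.CustomerName ?? "(unknown)",
    };
}
```

Consumers depend on IConverter<Order, OrderDto> via DI, so swapping the implementation (or hiding AutoMapper behind it, if a team insists) doesn't ripple through the codebase.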
|
# ? Feb 2, 2021 06:13 |
|
We do code generation based on the database schema (tables/views only) into some subfolder of the project; you can regenerate it at will. This is nice because:
* It is super easy to understand - the classes are right there! Your debugger works like normal! No reflection.
* You don't have to do the boring, error-prone bit of manual mapping yourself.
* You can run it as part of your CI suite to make sure no one forgets to do it / it's always up to date (pick behaviour to taste).
* Works beautifully with Dapper.
Anything more complex than POCOs of db tables you then do by hand as normal, but they generally suck with an ORM anyway. I think in theory you could extend it easily enough to stored procs but we sadly don't use many of them. Not nearly as good as the type generation available in F# though, that stuff is incredible.
|
# ? Feb 2, 2021 14:13 |
|
brap posted:I haven’t used it in a project but I’m keen on the idea of using a code fix to generate the initial implementation of your mapping function, then manually updating or regenerating it as needed. This looks fantastic - thanks! pointsofdata posted:We do code generation based on the database schema (tables/views only) into some subfolder of the project What do you use?
|
# ? Feb 2, 2021 16:57 |
|
My work does it with EF Core and PowerShell; it works well for smaller databases (<50 tables). The EF Core tools make it much easier to do now than in the past.
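For anyone who wants to try that route, the EF Core tools side of it is roughly one command. The connection string, provider, and output folder below are placeholders:

```shell
# Install the EF Core CLI tools once, then scaffold entity classes from an
# existing database schema.
dotnet tool install --global dotnet-ef
dotnet ef dbcontext scaffold "Server=.;Database=AppDb;Trusted_Connection=True" \
    Microsoft.EntityFrameworkCore.SqlServer --output-dir Models --force
```

`--force` overwrites the previously generated files, which is what makes "regenerate at will" work in practice.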
|
# ? Feb 2, 2021 20:18 |
|
Does anyone know anything about the intricacies of UseDefaultCredentials and Kerberos? I am in the unenviable position of needing to get some .NET Core code running in Linux to authenticate against a service running elsewhere that's expecting Windows authentication. (There are solutions to the broader problem that don't require this that we're exploring in parallel, but I'm hoping I can get this to work.) I discovered that if I install Kerberos and kinit before making the request, it Just Works, which was very exciting. Until this week, when it stopped working. Two things happened since it was last verified: A domain controller got upgraded (I don't know what this entailed) and we moved the proof of concept over to a proper build pipeline, meaning I may have forgotten to bring some configuration setting over. Now, when I try to do something as simple as C# code:
I turned tracing on, and I located the specific difference: .NET constructs the Kerberos SPN using the destination (non-standard) port, but curl doesn't, and apparently the one with the port isn't valid. But I don't know why they're different or how to affect it, and furthermore, the curl one doesn't seem to be correct as documented! Why did this work before? Why doesn't it work now? Why does curl work? Is the destination host misconfigured somehow? If so, why does it work for actual AD authentication but not my jury-rigged attempt? Could the domain controller update have affected anything? The fact that this just stopped working is driving me nuts.
|
# ? Feb 2, 2021 22:22 |
|
raminasi posted:Does anyone know anything about the intricacies of UseDefaultCredentials and Kerberos? I am in the unenviable position of needing to get some .NET Core code running in Linux to authenticate against a service running elsewhere that's expecting Windows authentication. (There are solutions to the broader problem that don't require this that we're exploring in parallel, but I'm hoping I can get this to work.) I discovered that if I install Kerberos and kinit before making the request, it Just Works, which was very exciting.
|
# ? Feb 2, 2021 23:02 |
|
Munkeymon posted:This looks fantastic - thanks! We use some abomination that was forked from a TypeScript code generator years ago. It works completely fine though - on the rare occasions it doesn't, you just get a compile error (or in theory integration tests could fail, but that's never happened). It's way simpler than you would think: loop over each schema -> make a file. Each table becomes a class, and each column a property. Done!
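That loop really is about all there is to it. A toy version of such a generator, SQL Server flavored, with the type map abbreviated and all names invented:

```csharp
// Reads INFORMATION_SCHEMA and emits one POCO class file per table.
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

public static class PocoGenerator
{
    // Abbreviated SQL-to-CLR type map; extend to taste.
    private static readonly Dictionary<string, string> SqlToClr = new()
    {
        ["int"] = "int", ["bigint"] = "long", ["nvarchar"] = "string",
        ["bit"] = "bool", ["datetime2"] = "DateTime",
    };

    public static async Task GenerateAsync(string connectionString, string outputDir)
    {
        using var conn = new SqlConnection(connectionString);
        var columns = await conn.QueryAsync<(string Table, string Column, string SqlType)>(
            @"SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
              FROM INFORMATION_SCHEMA.COLUMNS
              ORDER BY TABLE_NAME, ORDINAL_POSITION");

        foreach (var table in columns.GroupBy(c => c.Table))
        {
            var sb = new StringBuilder();
            sb.AppendLine($"public class {table.Key}");
            sb.AppendLine("{");
            foreach (var col in table)
                sb.AppendLine($"    public {SqlToClr.GetValueOrDefault(col.SqlType, "object")} {col.Column} {{ get; set; }}");
            sb.AppendLine("}");
            await File.WriteAllTextAsync(Path.Combine(outputDir, $"{table.Key}.cs"), sb.ToString());
        }
    }
}
```

Run it from a CI step and fail the build if the output differs from what's checked in, and you get the "nobody forgets" property mentioned above.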
|
# ? Feb 3, 2021 16:54 |
|
Using .NET 5 and System.Text.Json, what's the simplest way to exclude a property from serialization *on demand*? I have an exception handling middleware in my services that catches any unhandled exceptions and logs them, and I want to include the request data as well. But the request could have sensitive information in it which I don't want to log. Ideally I'd want to mask it out to indicate that it was present, but excluding it entirely would be fine. I can't use [JsonIgnore] because that prevents the property from being deserialized into the request model when it comes in initially as well. So I'm thinking of decorating the sensitive fields with something like [NotLogged] and then having the logger in the exception handler use some custom logic to mask/exclude anything with that attribute. Most of the stuff I'm finding just says to use JsonIgnore or to write a custom converter that extends JsonConverter<T>, but the JsonConverter<T> option requires each converter to be registered via a JsonSerializerOptions object, which isn't practical in this case since everyone would have to remember to register them.
|
# ? Feb 7, 2021 15:59 |
|
beuges posted:Using .net 5 and System.Text.Json, what's the simplest way to exclude a property from serialization *on demand*? Is this what you wanted? https://stackoverflow.com/questions/11564091/making-a-property-deserialize-but-not-serialize-with-json-net
|
# ? Feb 7, 2021 16:33 |
|
beuges posted:Using .net 5 and System.Text.Json, what's the simplest way to exclude a property from serialization *on demand*? https://docs.microsoft.com/en-us/dotnet/api/system.text.json.serialization.jsonignorecondition?view=net-5.0 Pair with the logic of "ok, don't write this if I null it"
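Concretely, that combination looks like this in .NET 5 (types invented): the property deserializes normally on the way in, and if you null it before logging, WhenWritingNull drops it from the output.

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

public class PaymentRequest
{
    public string? Reference { get; set; }

    // Round-trips normally on deserialization; omitted from output when null.
    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string? CardNumber { get; set; }  // sensitive
}

// Before logging the failed request, null out the sensitive field:
//   request.CardNumber = null;
// JsonSerializer.Serialize(request) then omits CardNumber entirely.
```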
|
# ? Feb 7, 2021 18:25 |
|
Wrong angle of attack. Masking sensitive data is not your serialization library's concern, it's your logging library's concern. https://nblumhardt.com/2014/07/using-attributes-to-control-destructuring-in-serilog/ https://docs.sentry.io/platforms/dotnet/guides/nlog/data-management/sensitive-data/
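The attribute-based version of this with Serilog comes from the Destructurama.Attributed package described in the first link. A sketch, with the request type invented:

```csharp
using Destructurama;            // Destructurama.Attributed NuGet package
using Destructurama.Attributed;
using Serilog;

public class PaymentRequest
{
    public string Reference { get; set; } = "";

    [NotLogged]                 // never appears in structured log output
    public string CardNumber { get; set; } = "";
}

public static class LoggingSetup
{
    public static ILogger Create() =>
        new LoggerConfiguration()
            .Destructure.UsingAttributes()  // opt in to attribute-based destructuring
            .WriteTo.Console()
            .CreateLogger();
}

// Usage: the {@Request} destructuring honors [NotLogged], so CardNumber is dropped.
// log.Information("Rejected request {@Request}", request);
```

The serialization attributes on the model stay untouched, so the API contract is unaffected.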
|
# ? Feb 7, 2021 23:43 |
|
NihilCredo posted:Wrong angle of attack. Masking sensitive data is not your serialization library's concern, it's your logging library's concern. Thanks for this, the Serilog [NotLogged] attribute is exactly the approach I was looking for. It does tie all the services to Serilog but that's not the biggest deal.
|
# ? Feb 8, 2021 09:12 |
|
ThePeavstenator posted:I use these for mapping POCOs thank you. this fixed everything except my marriage and crippling gambling addiction. In unrelated news, EF Core + .net 5 is amazing and I've managed to sweet talk the bossman into letting the upgrade happen Q2 instead of end of year. Dumping EF6 can't come fast enough.
|
# ? Feb 10, 2021 05:21 |
|
Does anyone have any ideas on the best way to handle email alerts? I'm working on a project with very limited hours. I have to process files that are received, and generate output files to be ingested by our systems as well as a third party. There are a number of things that could go wrong throughout the process, but I need to process whatever can be done. For example, if we receive 1000 records and three are bad, the 997 still get processed and we log the errors. It's important, though, that those three records get addressed somewhat quickly. That's where the email alerts come in. I am using Serilog to log errors to a custom event log. The app exits with an exit code relaying whether any errors occurred. The low effort plan (due to lack of hours) is that the app will be called by a PowerShell script and if the exit code is not 0, send an email to the help desk to tell them to check the event log. The manager of the help desk would prefer that errors relating to the files we receive are sent to that vendor with his team CCed instead (so they don't have to be the middle man). I figure to do that I'd need to refactor things quite a bit so that I could aggregate the errors in some way (including which party the error is related to) and then send out one or more emails depending on the intended party with detailed information (we can't just say "hey we got a bad file from you", we'd need to tell them exactly what was wrong). Of course, if poo poo hit the bed and this process never got called, there would be no alert period. Any thoughts? I know there are some really awesome tools out there that can probably aggregate the issues and handle the alerting, etc. but again, I have limited hours. If anything, I think I might just change what I have slightly to provide various exit codes based on the issues that occurred so the help desk gets a better idea from the "check the event log" email of what went wrong.
GI_Clutch fucked around with this message at 17:46 on Feb 12, 2021 |
# ? Feb 12, 2021 17:30 |
|
Since you mentioned Serilog, it supports logging to email - just set the email sink to minimum level Error, and every call to Log.Error() or Log.Fatal() will result in an email. You should of course set a filter so you don't just email them random exceptions. You can leave it at that and your vendor will get an email for every bad record, which may be acceptable. Otherwise you'll need to collect the error objects in a List<> and call Log.Error() once at the end of the ingestion loop, which shouldn't be too complicated.
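Roughly what that configuration looks like; the SMTP details are placeholders and the email sink's option names vary between Serilog.Sinks.Email versions, so treat this as a sketch:

```csharp
using Serilog;
using Serilog.Events;

var log = new LoggerConfiguration()
    .WriteTo.EventLog("MyImportApp")   // keep the existing event log behaviour
    .WriteTo.Email(
        fromEmail: "importer@example.com",
        toEmail: "vendor-support@example.com",
        mailServer: "smtp.example.com",
        // Only Error and Fatal events trigger an email.
        restrictedToMinimumLevel: LogEventLevel.Error)
    .CreateLogger();
```

Collecting the bad records into a list and making one Log.Error() call per intended recipient at the end of the run keeps it to one email per party per batch.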
|
# ? Feb 12, 2021 17:54 |
|
That is exactly what a service bus is for. Like, all of it.
|
# ? Feb 12, 2021 17:54 |
|
GI_Clutch posted:Does anyone have any ideas on the best way to handle email alerts? I'm working on a project with very limited hours. I have to process files that are received, and generate output files to be ingested by our systems as well as a third party. There are a number of things that could go wrong throughout the process, but I need to process whatever can be done. For example, if we receive 1000 records and three are bad, the 997 still get processed and we log the errors. It's important, though, that those three records get addressed somewhat quickly. That's where the email alerts come in. You can actually configure Serilog to send an email if an exception is thrown in a certain assembly - e.g. https://stackoverflow.com/questions/55138818/serilog-send-mail-if-exception-is-thrown-in-specific-assembly Aggregation etc. could be done with more time, but that seems like the quickest way of meeting the requirement while keeping the existing logging.
|
# ? Feb 12, 2021 17:54 |
|
Thanks for the replies. I had considered the email sink and don't know why I didn't include it in my writeup. I agree on the service bus reply as well, but being a customer project with limited hours and likely needing to open firewall requests, etc. it is probably a no go.
|
# ? Feb 12, 2021 18:47 |
|
Writing to database might be an option too if you already have a report/alerting framework in place that uses the database. We do it this way because we aggregate errors along with other information we store in our database so adding serilog errors to the mix was really easy.
|
# ? Feb 12, 2021 19:32 |
|
6 months ago I talked our CTO into giving me a Rider key and God drat, it's been great. It just does everything better than VS. Only problem I've run into is TFS integration, but that's poo poo in VS too. I really recommend trying Rider out.
|
# ? Feb 13, 2021 14:23 |
|
I have a new WPF problem. Suppose I have a ListView of some buttons for various save games. The save list view also has a button up top to create a new save. Right now, these lists are generated independently for saving and loading. However, this means if I add/remove a game from one screen, the other won't notice it. This is because despite the save data coming from the same source, the lists bound in each view are separate instances. A normal idea would be to give each view the same displayable data, but the save view has that extra item at the top for creating a new game. I thought I'd get cute and try a CompositeCollection. However, I hit a wall with sorting. The custom sorter I have for the save listview can't be attached to the composite collection because CompositeCollection doesn't have a CustomSort property in the first place. Sad trombone. Would a CompositeCollection even be appropriate here? Or should I look into other schemes? Just loading all the poo poo up each time I bring up either view is fine for messing around for a while, but I feel like I'll be coming back around to it later if there's ever a bunch of saves to display. Or maybe that's where pagination would come in. I think I'm sticking with just loading everything fresh when entering the view, but I figured I'd ask what would be kosher here while it's in front of me and I can dump notes about it in a TODO or something.
|
# ? Feb 14, 2021 09:00 |
|
Rocko Bonaparte posted:I have a new WPF problem. Suppose I have a ListView of some buttons for various save games. The save list view also has a button up top to create a new save. Right now, these lists are generated independently for saving and loading. However, this means if I add/remove a game from one screen, the other won't notice it. This is because despite the save data coming from the same source, the lists bound in each view are separate instances. A normal idea would be to give each view the same displayable data, but the save view has that extra item at the top for creating a new game. Give them the same list as a source and just have the savelist get an extra item that you create at runtime. Or make a separate button for creating a new save.
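One way to do "same list, extra item": both views bind to the same shared ObservableCollection, and the save view composes it with a sentinel entry. All the names here are invented for illustration:

```csharp
using System.Collections.ObjectModel;
using System.Windows.Data;

public class SaveGameViewModel { }
public class NewSaveSlotViewModel { }   // the "create new save" entry

public class SaveGameStore
{
    // Single shared instance: add/remove here and every bound view sees it.
    public ObservableCollection<SaveGameViewModel> Saves { get; } = new();
}

public class SaveViewModel
{
    public SaveViewModel(SaveGameStore store)
    {
        Items = new CompositeCollection
        {
            new NewSaveSlotViewModel(),
            new CollectionContainer { Collection = store.Saves },
        };
    }

    public CompositeCollection Items { get; }
}
```

The sorting limitation mentioned above still applies to the CompositeCollection itself, so any sort would have to be applied to the shared source collection (or its CollectionView) rather than the composite.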
|
# ? Feb 14, 2021 15:15 |
|
Is there an elegant way to handle the desire to call async methods in a constructor? I'm working on the Employee Portal for my demo app, using ASP.NET Core 5.0, and I have a controller (AdminController) that maintains a list of users and roles. It would be really convenient to load those lists in the constructor. Originally, I was doing C# code:
So now, in each action method, the first thing I have to do is await GetUsers() or await GetRoles(), and it really feels clumsy. I mean, it works, and no more EF exceptions, but I’m wondering if there’s a better way.
|
# ? Feb 15, 2021 01:29 |
|
raminasi fucked around with this message at 02:01 on Feb 15, 2021 |
# ? Feb 15, 2021 01:59 |
|
Without getting into language purity arguments with anyone - not really, and it's by design. If it were supported, there would be a form of the constructor that returned a Task<YourClass> instead of just YourClass, and code written to invoke your constructor would know to await it. One workaround is to use a... Edit: sorry, mixing patterns in my brain. There's async lazy properties, and then there's just using an async factory. Here's the latter: C# code:
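For anyone following along, the async factory shape is roughly this (a generic sketch, not the original snippet from the post; the service and dependency are invented):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IUserApi
{
    Task<IReadOnlyList<string>> GetUsersAsync();
}

public class ReportService
{
    private readonly IReadOnlyList<string> _users;

    // Private ctor: all async work has already finished by the time it runs.
    private ReportService(IReadOnlyList<string> users) => _users = users;

    // The "constructor" callers actually use.
    public static async Task<ReportService> CreateAsync(IUserApi api)
    {
        var users = await api.GetUsersAsync();
        return new ReportService(users);
    }

    public int UserCount => _users.Count;
}

// Usage: var svc = await ReportService.CreateAsync(api);
```

If the awaited call throws, no half-initialized object ever exists, which is the main thing an async constructor couldn't guarantee.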
Cold on a Cob fucked around with this message at 02:21 on Feb 15, 2021 |
# ? Feb 15, 2021 02:07 |
|
LongSack posted:Is there an elegant way to handle the desire to call async methods in a constructor? The available users and roles should be treated as an external dependency. A class that knows how to retrieve those should be injected into the constructor of your controller and methods called as appropriate by the controller's methods. The constructor shouldn't be doing heavy lifting.
|
# ? Feb 15, 2021 02:26 |
|
Re-reading the question and in your case I'd just take a dependency on data context and then async lazy load a private variable whenever it's needed so you don't keep reloading Users and Roles ie internally always call a method instead of a property and have that method load a private variable if it's not already loaded. There is a library that can make this easier, google "nito async". I'm assuming potentially stale data is ok since you wanted to load it in a ctor anyway. Definitely agree with New Yorp New YOrp that no matter what you do, constructor should not be doing heavy lifting.
|
# ? Feb 15, 2021 02:40 |
|
New Yorp New Yorp posted:The available users and roles should be treated as an external dependency. A class that knows how to retrieve those should be injected into the constructor of your controller and methods called as appropriate by the controller's methods. The constructor shouldn't be doing heavy lifting. If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes would also have to be async, so I don't see where that gains me anything. And if you're saying not to use those classes in the ctor but in the action methods, I've already got methods I can use in the action methods. I didn't mention in my original post that the actual identity stuff is done through an API, so every operation is done asynchronously. In a monolithic app, I would use _userManager.Users and _roleManager.Roles. Maybe that changes things. Of course, it's always possible (probable) that I'm misunderstanding.
|
# ? Feb 15, 2021 04:32 |
|
LongSack posted:If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes use would also have to be async, so I don’t see where that gains me anything. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods. async constructors don't really make sense. an async method returns task<T>, but a constructor is supposed to return T. imagining that little detail is glossed over, what should happen to the object if an async method necessary for the creation of the object fails in the middle? having a separate async initialization method / action methods is the way to go here.
|
# ? Feb 15, 2021 06:16 |
|
LongSack posted:If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes use would also have to be async, so I don’t see where that gains me anything. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods. I'm saying "don't call those methods in the constructor". The constructor should just follow standard DI patterns to receive an instance of something that knows how to make those API calls. Then the API is called by whatever methods need them, other than the constructor. Try to treat constructors as dumb things that just take references and assign them to local variables as needed and make no method calls of their own.
|
# ? Feb 15, 2021 09:29 |
|
LongSack posted:If I create classes that return the lists of users/roles, since the methods that actually retrieve the users/roles are async, then the methods in the retriever classes use would also have to be async, so I don’t see where that gains me anything. And if you’re saying not to use those classes in the ctor but in the action methods, I’ve already got methods I can use in the action methods. Those are conceptually Lazy properties. So you have a UsersController that needs to get a list of Users. Great, that's an external dependency, so we create IUserManager and add it as a dependency to our UsersController. Then we add a UserManager as the implementation of it and register it with DI. UserManager conceivably needs to fetch data from persistence somewhere. Let's say HTTP. So *it* needs an IHttpClientFactory or similar. Now, our UserManager can offer a .Users getter-property, or a .GetUsers() method, to taste. Internally it has a field storing our collection of users. Let's say we instantiate that as just *empty*. Now, *at runtime* - we're going to be able to instantiate a UserManager(the Http stuff handled by ASP.NET), and that lets us Instantiate a UsersController, and we're ready to go. When our UsersController endpoint gets hit, it calls into the UserManager to get Users. Our UserManager looks at its own internal state and branches. If its internal list is empty, make a call to go fetch that data and populate its list of known users. If it isn't empty, then we've already done that, and can just pass that data forward.
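Sketched out, that branch-on-empty manager might look like the following (all names invented; GetFromJsonAsync is from System.Net.Http.Json in .NET 5):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record User(string Id, string Name);

public interface IUserManager
{
    Task<IReadOnlyList<User>> GetUsersAsync();
}

public class UserManager : IUserManager
{
    private readonly HttpClient _http;
    private IReadOnlyList<User>? _users;

    public UserManager(IHttpClientFactory factory) => _http = factory.CreateClient("users");

    public async Task<IReadOnlyList<User>> GetUsersAsync()
    {
        // First call fetches and caches; subsequent calls reuse the cached list.
        _users ??= await _http.GetFromJsonAsync<List<User>>("/api/users") ?? new List<User>();
        return _users;
    }
}
```

The controller's constructor just stores the injected IUserManager; the await happens inside whichever action method actually needs the data.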
|
# ? Feb 15, 2021 09:40 |
|
OK, thanks. That's already what I'm doing in each action method as needed.
|
# ? Feb 15, 2021 21:42 |
|
It makes me very happy that I can write "x is not null" in C# now.
|
# ? Feb 17, 2021 11:41 |
|
Hammerite posted:It makes me very happy that I can write "x is not null" in C# now. Seriously though. Null checks in C# did not need to be as goofy as they were at times.
|
# ? Feb 17, 2021 22:15 |
|
|
Hammerite posted:It makes me very happy that I can write "x is not null" in C# now. Agreed. My favorite thing in C# 8 was switch expressions, and in 9 I really like the new pattern matching stuff like “is not null” and “case 11 or 12 or 13:”. It reads so much cleaner.
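Both features in one place, for anyone who hasn't played with C# 9 patterns yet:

```csharp
public static class Patterns
{
    // Relational and logical patterns in a switch expression; the
    // "11 or 12 or 13" arm is checked before the range arm.
    public static string Classify(int day) => day switch
    {
        11 or 12 or 13 => "special-cased",
        > 0 and <= 31 => "ordinary",
        _              => "invalid",
    };

    // "is not null" reads the way you'd say it out loud, and unlike
    // "!= null" it can't be hijacked by an overloaded != operator.
    public static bool HasValue(string? s) => s is not null;
}
```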
|
# ? Feb 18, 2021 01:12 |