ThePeavstenator
Dec 18, 2012

:burger::burger::burger::burger::burger:

Establish the Buns

:burger::burger::burger::burger::burger:

epalm posted:

Firstly, the status code. Sometimes, the user did something wrong and returning 400 is correct. Sometimes, I have done something wrong, and returning 500 is correct. What strategies do you use to return the right status code? I could wrap all user-generated errors in some top-level AppException, catch AppException in every controller action, and generate an appropriate 400 as Bar does above. Otherwise, the exception will skip that catch, and become a 500. Do people do this?

You don't need to throw an exception if you're returning a response right from the controller. The way I usually break down responsibilities in a typical web service is to give the controller two jobs: validate the incoming request parameters/headers/whatever, then respond with an appropriate status code based on the result it receives from the service layer. If you're validating input in the controller, you can just return BadRequest() without throwing an exception when the inputs aren't valid. In general, you shouldn't be using exceptions to control execution flow.
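A sketch of that controller shape (hypothetical names, assuming ASP.NET Core):

```csharp
using Microsoft.AspNetCore.Mvc;

// Hypothetical request/controller: validate first, return 400 directly,
// no exceptions involved in the bad-input path.
public record CreateWidgetRequest(string? Name, int Quantity);

[ApiController]
[Route("widgets")]
public class WidgetController : ControllerBase
{
    [HttpPost]
    public IActionResult Create(CreateWidgetRequest request)
    {
        // Bad input is a 400 response, not an exception.
        if (string.IsNullOrWhiteSpace(request.Name) || request.Quantity <= 0)
            return BadRequest("Name is required and Quantity must be positive.");

        // ...hand the validated request to the service layer...
        return Ok();
    }
}
```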

epalm posted:

Secondly, I want to log exceptions. Is there some way other than wrapping every controller action in the same boilerplate try/catch?

Use Application Insights if you're using .NET and especially if you're using .NET and hosting on Azure. Gives you a ton of metrics right out of the box with no configuration and is very customizable.

epalm posted:

Third question, is there a circumstance where the client won't receive a json object with Message, ExceptionMessage, ExceptionType, and StackTrace fields? Or can I count on this structure always being the same when an ApiController throws an exception?

You shouldn't be exposing these through your web API. It's a security risk because that stack trace could potentially include things like query strings or, worse, configuration information. Even if you're careful to only throw certain exceptions and it's internal use only, you've now coupled the consumers of your API to the C# stack traces in .NET Core's default HTTP error responses. Ideally you're writing web APIs so that if you wanted to, you could rewrite them in a completely different framework/language and not break compatibility with existing consumers. Older ASP.NET projects don't even throw the same errors; they return full HTML error pages or XML in the response.

What you should do is return a standard message with just enough information to tell the consumers what the problem is. It could be as simple as a static string, or as verbose as a JSON blob with a bunch of descriptive status fields.
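For instance, a minimal error payload might look like this (field names are made up; System.Text.Json is just one way to serialize it):

```csharp
using System.Text.Json;

// A hypothetical error shape: a stable machine-readable code plus a short
// human-readable message, and nothing about your internals.
public record ApiError(string Code, string Message);

public static class ApiErrorSerializer
{
    public static string ToJson(ApiError error) => JsonSerializer.Serialize(error);
}
```

Something like `ToJson(new ApiError("widget_not_found", "No widget with that id exists."))` gives the consumer enough to branch on without ever seeing a stack trace.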

e:

Mr Shiny Pants posted:

What a user does should not cause an exception on your server and the right handling should be in place.
I would not use exceptions for this, crappy input is not exceptional. :)

At least, that is how I try to write my code.

Yeah this is pretty much it. As far as exceptions go, here are the codes you're going to end up returning most often:

400 Bad Request - No exception thrown; you validate the incoming request and respond with this if it's not valid
401 Unauthorized - Usually baked into your web app framework
403 Forbidden - You'll usually have to write the code for this: the request is well-formed and the user is authenticated, but lacks the permissions for that request. No exception needed.
404 Not Found - Didn't find what was requested. Usually it'll be because your data access layer returned null/an empty result/an empty list when you queried your database or another web service. No exception needed.
405 Method Not Allowed - Usually baked into your web app framework
500 Internal Server Error - Some exceptional condition that isn't being handled happened while trying to process the request. An exception is thrown, but you still shouldn't expose a full stack trace to the consumer.
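The 404 case from that list, sketched with hypothetical types (assuming ASP.NET Core): a null from the data access layer just becomes a NotFound result, with no exception anywhere in the path.

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

// Hypothetical entity and repository names.
public record Widget(int Id, string Name);

public interface IWidgetRepository
{
    Widget? Find(int id);
}

// Simple in-memory implementation for illustration.
public class InMemoryWidgetRepository : IWidgetRepository
{
    private readonly Dictionary<int, Widget> widgets;
    public InMemoryWidgetRepository(params Widget[] items) =>
        widgets = items.ToDictionary(w => w.Id);
    public Widget? Find(int id) => widgets.TryGetValue(id, out var w) ? w : null;
}

[ApiController]
[Route("widgets")]
public class WidgetLookupController : ControllerBase
{
    private readonly IWidgetRepository repo;
    public WidgetLookupController(IWidgetRepository repo) => this.repo = repo;

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        var widget = repo.Find(id);
        return widget is null ? NotFound() : Ok(widget); // 404 or 200
    }
}
```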

ThePeavstenator fucked around with this message at 06:04 on Jan 6, 2019


ThePeavstenator
Dec 18, 2012


epalm posted:

I re-read and this struck me as odd. What's the point then of being able to catch specific exceptions?
C# code:
try { /* try something */ }
catch (ExceptionA) { /* do something */ }
catch (ExceptionB) { /* do something else */ }

Let's take this example you posted:

epalm posted:

C# code:
public class MyController : ApiController
{    
    [HttpPost]
    public void Bar()
    {
        try
        {
            throw new Exception();
        }
        catch (Exception ex)
        {
            log.Error(ex);
            throw ResponseException(Request, ex);
        }
    }
}

I assume this is a minimal example and there's more in that try block in your actual code, but basically what the "throw new Exception();" does in this example is act as an expensive goto statement.

If you're throwing an exception, it should be because the code you're writing has reached a condition that it doesn't know how to handle, or one that's beyond the scope of what it should handle and needs something up the call stack to deal with it. As an example from the .NET Framework source code:
C# code:
public class TcpClient : IDisposable {

    /* Snipped code */

    public void Connect(string hostname, int port) {
        /* Snipped code */

        if ( m_Active ) {
            throw new SocketException(SocketError.IsConnected);
        }

        /* Snipped code */
    }

    /* Snipped code */
}
TcpClient doesn't even bother to try and deal with the situation where it's trying to connect to a socket that's already been connected to. Instead it throws it to the caller. Maybe TcpClient could just silently continue and just use the already connected socket, but not every caller will want this to invisibly succeed.

If you're the caller (or a caller further up the stack), you can choose to handle this condition because how it should be handled is unambiguous for your use case. For example:
C# code:
public class TcpWrapperThatJustWantsAnOpenClient {
    private TcpClient client;
    public TcpWrapperThatJustWantsAnOpenClient (TcpClient client, string hostname, int port) {
        this.client = client;
        try {
            this.client.Connect(hostname, port);
        }
        catch (SocketException se) {
            Console.WriteLine("Got a SocketException when connecting the client in the constructor, but I know what I want to do in this situation.");
        }
    }
}
If you know what you want to do with an exception you're throwing, then don't throw it and just do the thing. That's what people mean when they say don't use exceptions as control flow. Exceptions are meant to be thrown up the call stack until they either crash the program, or reach a caller that knows how they want to handle it.

So to take your code and remove the exception performing control flow:
C# code:
public class MyController : ApiController
{    
    [HttpPost]
    public void Bar()
    {
        /* Rest of Your Code */

        if (badShitHappened) {
            var exceptionString = $"Here's some bad poo poo info: {badShitInfoString}";
            log.Error(exceptionString);
            throw ResponseException(Request, exceptionString);
        }

        /* More Code */
    }
}
Now you're checking whether an exceptional condition you don't want to handle occurred, and then you throw it to the caller (which in this case is the ASP.NET framework).

ThePeavstenator fucked around with this message at 08:51 on Jan 6, 2019

ThePeavstenator
Dec 18, 2012

If you're able to just do everything through version control, it has the added bonus of getting new programmers into the habit of using it as a part of their workflow right away. Visual Studio has a decent GUI for Git so they don't have to tackle the command line if that's something you want to avoid.

If you need the repos to be private, GitHub now has free private repos that allow up to 3 contributors, and Bitbucket allows private repos with up to 5.

ThePeavstenator
Dec 18, 2012


BIGFOOT EROTICA posted:

Right but I don't just have one object making these requests, i have 10 (or however many) that each have different sessions/cookies etc for scraping

code:
while(thing still has data)
{
	await process[0].doasync();
	await process[1].doasync();
	await process[2].doasync();
	etc
}
The second one won't run until the first await completes, and so on.

I could do
code:
List<Task> tasks;

while(thing still has data)
{
	tasks.Add(process[0].doasync());
	tasks.Add(process[1].doasync());
	tasks.Add(process[2].doasync());
	
	await Task.WhenAll(tasks);
}
but if they complete at different times I cant immediately requeue it, i have to wait until they all finish

Inside of the doasync() method, put the while loop around whatever async logic you want to continue.
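In other words, something like this (Scraper/FetchPageAsync are made-up stand-ins for the scraping objects): each worker loops independently, and you only await the whole set once.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Each scraper owns its loop, so a fast worker immediately starts its own
// next fetch instead of waiting on the slowest worker in the batch.
public class Scraper
{
    private int pagesRemaining;
    public Scraper(int pages) => pagesRemaining = pages;

    public async Task RunAsync()
    {
        while (pagesRemaining > 0)
        {
            await FetchPageAsync();
            pagesRemaining--;
        }
    }

    // Stand-in for the real async scraping call.
    private Task FetchPageAsync() => Task.Delay(1);
}

public static class ScraperRunner
{
    // One long-running task per scraper; a single await at the very end.
    public static Task RunAllAsync(IEnumerable<Scraper> scrapers) =>
        Task.WhenAll(scrapers.Select(s => s.RunAsync()));
}
```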

ThePeavstenator
Dec 18, 2012


User0015 posted:

Phone posting here, but what happens to events when a using statement completes? For example

using (var foo = new Foo()) {
...
foo.workCompleteZugZug += SomeHandler;
await foo.DoWork();
}

Once that completes, does that clean up properly on dispose? I'm wondering if that leaves behind a reference somewhere.

foo holds a reference to the SomeHandler delegate, not the other way around, so foo will still get GC'd
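A quick sketch of that reference direction (Foo is a hypothetical stand-in): the event field inside the publisher is what stores the delegate, and the handler holds nothing back.

```csharp
using System;

// The event field inside Foo holds the handler delegate. The handler never
// references Foo, so nothing keeps foo alive after the using block ends.
public class Foo
{
    public event EventHandler? WorkComplete;

    public int HandlerCount => WorkComplete?.GetInvocationList().Length ?? 0;

    public void FinishWork() => WorkComplete?.Invoke(this, EventArgs.Empty);
}
```

Subscribing adds to an invocation list owned by Foo; when foo becomes unreachable, the delegate goes with it.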

ThePeavstenator
Dec 18, 2012


LongSack posted:

The only time I’ve done anything “clever” (in the pejorative sense) is my Time class, which freely converts to/from double where, for instance, 12:30 pm can be represented as 12.5

even ignoring the comparison thing, you will heavily regret writing any time implementation yourself the second you have to deal with anything other than static time stamps

if you have any logic surrounding time don't even use the .NET DateTime types, just use NodaTime
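a tiny sketch of what that looks like, assuming the NodaTime NuGet package is referenced:

```csharp
using NodaTime;

// With NodaTime a time of day is a LocalTime, not a double: 12:30 pm is
// (12, 30), and arithmetic stays in time units instead of fractional hours.
public static class TimeExample
{
    public static LocalTime HalfPastNoon() => new LocalTime(12, 30);

    public static LocalTime AnHourLater(LocalTime t) => t.PlusHours(1);
}
```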

LongSack posted:

and (IMO) the code reads cleaner.

less text =/= cleaner code

ThePeavstenator
Dec 18, 2012

if all of your code is basically just scripts for your own productivity and you know all the implementation details in your head then it's probably fine, but don't mistake abstraction for making things cleaner

ThePeavstenator
Dec 18, 2012


EssOEss posted:

Oh huh what happened now?

You can host .NET Core 2.2+ apps in-process on IIS itself instead of the .NET Core app running on a Kestrel host with IIS acting as a proxy forwarding requests to it.

When I say "you can" I mean "you should" unless you've got multiple .NET Core apps running behind a single IIS proxy (you should not do this).
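For reference, the switch is a csproj property (AspNetCoreHostingModel is the documented ASP.NET Core setting; in-process became the template default in later versions):

```xml
<PropertyGroup>
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
</PropertyGroup>
```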

ThePeavstenator
Dec 18, 2012


LongSack posted:

Oh, and if you’re wondering why use Expression<Func<foo, bool>> rather than just Func<foo, bool>, it’s because Where with the latter returns an IEnumerable<foo>, so you can’t add, say, an .AsNoTracking(). Where with the former returns an IQueryable<foo>, so you can.

Expression<Func<foo, bool>> is an expression tree, which means it's a declarative set of instructions that gets interpreted at runtime. Your Linq code on an IQueryable is really just adding more instructions to the expression tree. Then at runtime something like EF can interpret the expression, turn it into something like a SQL query, and send it to a database. AsNoTracking() is just an extension method on IQueryable<T> that adds to the tree to be interpreted at runtime.

Func<foo, bool> is an imperative function that gets compiled and executed as part of your program. That means it can only operate on .NET types, which means you need actual data structures in memory (or IEnumerables emitting them). So going from Expression to Func (or IQueryable to IEnumerable) means that your expression tree gets interpreted and executed, and anything chained after that will be run in your program.
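The distinction is easy to see in isolation: an expression tree is data you can inspect, while a Func is code you can only run.

```csharp
using System;
using System.Linq.Expressions;

public static class ExpressionVsFunc
{
    public static void Demo()
    {
        Expression<Func<int, bool>> tree = x => x > 5; // data describing the test
        Func<int, bool> compiled = x => x > 5;         // compiled, executable code

        Console.WriteLine(tree.Body);         // prints "(x > 5)" - a provider like EF can read this
        Console.WriteLine(compiled(7));       // prints "True" - it just runs in-process
        Console.WriteLine(tree.Compile()(7)); // prints "True" - trees can also be compiled on demand
    }
}
```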

beuges posted:

Just so you know, there are .AsQueryable<T> and .AsEnumerable<T> methods that are available from entity framework that will transform from one to the other.

Converting from IEnumerable to IQueryable just means that your expression tree will get interpreted in-process by the CLR, invoking methods as if your IQueryable were a plain enumerable. It won't tack your Linq on to the upstream expression tree.

ThePeavstenator
Dec 18, 2012

Communicating over a network, communicating with a database using an agreed-upon protocol, deserializing data retrieved from a database, and serializing data that you want a database to write are all solved problems. Managing a database schema, managing data object lifecycles during application execution, scoping transactions, writing queries, and migrating data are problems that have solutions. These things are subtly different.

Micro-ORMs like Dapper only solve the former while traditional, thicker ORMs like EF claim to also be able to solve the latter. Often they're able to, until they can't. Complexity is inevitable if your application is going to do anything useful, and at some point you're going to have to make a change to your software that reveals someone made a bad assumption somewhere. That's how you get into poo poo like this:

a hot gujju bhabhi posted:

These days I'm working on a large repository of .NET Framework horseshit and I'm having some trouble with Entity Framework because the people who originally built this out clearly didn't know how to use it. I have a Windows service, with various unnecessary layers of complexity behind it, and something in there is causing the service to "randomly" crash.

Nothing is shown in the NLog logs, which means the application is obviously crashing before the exception can be written out. However, in the Event Viewer I found the crash exception:
code:
Application: MyAwfulWindowsService.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ObjectDisposedException
   at System.Data.Entity.Core.Objects.ObjectContext.ReleaseConnection()
   at System.Data.Entity.Core.Objects.ObjectContext+<ExecuteInTransactionAsync>d__3d`1[[System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   at System.Data.Entity.Core.Objects.ObjectContext+<SaveChangesToStoreAsync>d__39.MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy+<ExecuteAsyncImplementation>d__9`1[[System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   at System.Data.Entity.Core.Objects.ObjectContext+<SaveChangesInternalAsync>d__31.MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   at RIS.Racenet.Repository.HorseRepositoryContext+<CommitChangesAsync>d__4.MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   at MyAwfulWindowsService.UnitofWork+<CommitChangesAsync>d__0.MoveNext()
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(System.Threading.Tasks.Task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(System.Threading.Tasks.Task)
   at MyAwfulWindowsService.DataImportService+<AttachExternalCode>d__41.MoveNext()
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore+<>c.<ThrowAsync>b__6_1(System.Object)
   at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(System.Object)
   at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
   at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
   at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
From Googling around, I've found that this is usually caused by trying to do something to an attached entity after the context has been disposed, but honestly I have no idea. I'm really hoping that this is a somewhat frequent issue that someone here might see and go "oh that stupid thing, yeah that's because X". If not I won't be surprised, but I had to try, I'm getting desperate.

The people that wrote that could just suck, or maybe they did need to add complexity because the business requirements demanded it but EF didn't leave them any room to manage that complexity in a more sane manner.

EssOEss posted:

Well, it depends on the usage but I can certainly agree that horrible experiences with EF are possible. The main issues with EF (Core) that I have encountered are:

1. Migrations sometimes glitch out and EF thinks you need to apply some nonsense migration that is a no-op at best but can sometimes even be destructive.
2. LINQ->SQL transformations are extremely limited once you move on from basic "SELECT X FROM Y WHERE Z" types of queries.
3. No built-in bulk operation support.

These are the big pain points for me but life with EF is certainly better than without EF. However, I am careful to only use the "good parts" of EF. For example, I would never touch the inheritance hierarchy modeling functionality, I treat lazy loading as cancer to be rooted out, I keep any writable EF object lifetimes constrained to a single C# method (any EF objects returned are strictly read-only and never used for pushing data to the database).

It took me years to learn the right patterns and they are not really documented anywhere - if you read guides, they just tell you what features do, rarely when it is appropriate to use them (or even more importantly, to avoid them). Given that, I would say that you need to know how to use it and you'll be fine. Use it as a minimal C#<->SQL translation layer and you'll be fine. But if you just take all the documented features and apply them in a spray-and-pray pattern in your business logic, you will suffer greatly.

a hot gujju bhabhi posted:

Exactly, it's just about knowing what you're doing and designing your code appropriately. Lazy loading should never be a problem because you should really be projecting your queries to domain models at the outset. If you're gonna do an update, use a dto to represent the change, fetch the object and update and save it in the same method as you said.

Look I agree that getting to know EF well is hard work, but the benefits it brings are immense.

If EF brings all these footguns if it's used beyond performing basic queries or simple data writes, why not just write the SQL directly? I agree that manually mapping data between something like ADO.NET and POCOs is a huge pain, but Dapper solves that problem and seems like a better fit than what you're using EF for.

ThePeavstenator fucked around with this message at 22:47 on Jan 25, 2020

ThePeavstenator
Dec 18, 2012


Boz0r posted:

We use early bound entities for developing plugins for Dynamics 365. We upgraded our csproj files to the new 2017 format, and we've just discovered a the early bound types have stopped working. Usually, we have to add the following line to an AssemblyInfo.cs:

code:
[assembly: Microsoft.Xrm.Sdk.Client.ProxyTypesAssemblyAttribute()]
But after we've converted to the new format that file no longer exists. I've tried adding the tag somewhere else in our code, but it doesn't work. Any ideas?

If by "2017 format" you mean SDK-style csproj (what .NET Standard 2.0+ and .NET Core projects use), you can add assembly info stuff to the csproj file. Example: https://stackoverflow.com/a/44502158

ThePeavstenator
Dec 18, 2012

DO:
Not build microservices. Instead build monoliths with independent, configurable, well-tested components that are injected as services. Your app will scale and be perfectly performant as a monolith and will have a simple development process, simple CI/CD pipeline, and simple architecture.

Let's say you somehow get enough users that the IComponentThatDoesShit service starts to become the majority of the load on your monolith, and it's affecting overall performance, including the performance of other services. Or, alternatively, your app is performing fine but you're somehow making enough money to grow a dev team that's getting too large to work on a single monolith, and the IComponentThatDoesShit service is the most complex service in your system. In situations like these, you take the IComponentThatDoesShit implementation, turn it into its own web service, and then replace the existing implementation of IComponentThatDoesShit that has all the business logic with an implementation that makes HTTP calls to the new web service.

Keep repeating as needed.

DON'T:
Build monoliths and not follow IoC practices. Starting with a greenfield monolith and understanding that at some point some services will need to be broken into separate microservices is the way to go, but this becomes a huge PITA when there's no dependency mapping and all your services have a hellish web of dependencies where you can't break off a module without affecting anything else.

e:

Basically, don't think about writing microservices; think about writing code in modules arranged in a tree-like dependency structure. You make choices that certain chunks of the tree should run in a separate service, for either ease of development or performance reasons. Those chunks are microservices. Unless you are psychic, you will not know which chunks need to run in separate services until an obvious need arises, so don't break anything up until you actually need to. Prematurely creating separate services is how you get into resume jerk-off land where everything gets put in a queue and for some reason there's still a single "database access" microservice that can't be broken up because everything is coupled to it.
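The extraction step from the DO section, sketched with hypothetical names: callers only ever see the interface, so swapping the in-process implementation for an HTTP-backed one doesn't touch them.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public interface IComponentThatDoesShit
{
    Task<string> DoShitAsync(string input);
}

// Day one: the business logic runs inside the monolith.
public class LocalShitDoer : IComponentThatDoesShit
{
    public Task<string> DoShitAsync(string input) =>
        Task.FromResult($"processed:{input}");
}

// Later: the logic has moved to its own web service, and this implementation
// just forwards the call over HTTP. The URL scheme here is made up.
public class RemoteShitDoer : IComponentThatDoesShit
{
    private readonly HttpClient http;
    public RemoteShitDoer(HttpClient http) => this.http = http;

    public Task<string> DoShitAsync(string input) =>
        http.GetStringAsync($"doshit?input={input}");
}
```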

ThePeavstenator fucked around with this message at 23:53 on Jan 22, 2021

ThePeavstenator
Dec 18, 2012


EssOEss posted:

This code is fine and an exception being thrown does hit the catch. You must be doing something different in your real code than in your pasted example. Post a full sample project that demonstrates the issue you are facing.

As an aside, while the code is fine in the sense of it achieving what you asked about, I am unsure why you are starting an asynchronous operation only to run it synchronously. The whole point of asynchronous code is that you can go and do other stuff at the same time. More typical would be to either be async all through the stack or to just do result = client.GetAsync(url).Result.

I just want to clarify something for people who might be new to async/await and not fully understand what "do other stuff at the same time" means. Take this example:
C# code:
void DoSync() {
    DoLongRunningThing1Sync(); // Takes 5 seconds
    DoLongRunningThing2Sync(); // Takes 10 seconds
}
async Task DoAsync() {
    await DoLongRunningThing1Async(); // Takes 5 seconds
    await DoLongRunningThing2Async(); // Takes 10 seconds
}
If you're debugging in Visual Studio, both of these methods will take 15 seconds to run. If Thing 1 and Thing 2 are able to run in parallel, you can write parallel code with or without async/await:
C# code:
void DoSyncParallel() {
    var task1 = Task.Run(() => DoLongRunningThing1Sync());
    var task2 = Task.Run(() => DoLongRunningThing2Sync());

    Task.WaitAll(task1, task2); // Takes 10 seconds because Thing 2 is the longest operation, Thing 1 runs in parallel
}
async Task DoAsyncParallel() {
    var task1 = DoLongRunningThing1Async();
    var task2 = DoLongRunningThing2Async();

    await Task.WhenAll(task1, task2); // Takes 10 seconds because Thing 2 is the longest operation, Thing 1 runs in parallel
}
Async/await is not about writing parallel code.

Let's say that these can't be run in parallel: Thing 1 has to be called, and Thing 2 can only be called after Thing 1 completes. More often than not, I find my async/await code looks pretty much the same as if it were sync, which is where I think a lot of people who are new to async/await get tripped up, since they don't really see the point of using it.

The difference between the Sync and Async examples is that in the DoAsync code, .NET can do other stuff with that thread, even if you don't write any parallel code yourself.

Let's say this is some method used by an ASP.NET webservice. With async/await, in the 5 and 10 seconds it takes for Thing 1 and Thing 2 to run, the calling thread can execute other work rather than sit there and wait for the operation to finish. That single operation still takes 15 total seconds, but the thread isn't sitting there blocked and waiting.

In the DoSync example, the operation also takes 15 seconds, but 1 request calling the DoSync method also eats up the thread for 15 seconds.

In the DoAsync example, 1 request eats up the thread only for as long as it takes to invoke the task and register the callback (basically a few sync method calls and some object allocations); during the 5 and 10 seconds of waiting, the thread can do other work, such as servicing other requests. Your code still reads like sync code, but the .NET runtime can assign work to threads that would otherwise sit waiting on awaited long-running calls. This is also why using async/await for long-running calls on the UI thread is desirable: if you await an HTTP request on the UI thread, the UI thread isn't locked up waiting for the request to complete, and can keep updating the view and responding to user input.

I think this is part of what confuses a lot of people about async/await. It's not a parallelization syntax, it's a continuation syntax. Your series of awaited calls still take the same total amount of time and execute in sequence, but they don't make the actual thread that they're executing on sit and wait for them to finish.

ThePeavstenator
Dec 18, 2012

I use these for mapping POCOs

Alternatively these are also good (especially .NET 5 with init-only properties)

For real though, auto-mapping libraries give you 5 minutes of convenience and a lifetime of pain. They're great when they work with 0 additional code because all of your objects have identical fields, but you're presumably using different object types and mapping between them to decouple data at different layers. That implies that there's a good chance that the shape of the data at those different layers can diverge. Once that happens you now have to write mapping code, except the compiler isn't able to do as much work to help you anymore because now that's all being done at runtime via reflection. It's not a huge deal to write and maintain these:

C# code:
public YourPostApi Map(YourPostDb yourPostDb) =>
    new YourPostApi
    {
        AreBad = true,
        AreSigned = yourPostDb.AreSigned,
        Content = yourPostDb.Content
    };
...and then vice-versa if you need to map the other way. Now there's no risk of Automapper blowing up at runtime when it hits an edge case you didn't anticipate, and you can set breakpoints and debug your actual mapping code when needed (and it's really easy to write tests for these).

And with only a single object mapping method, you can still map collections via Linq:

C# code:
IEnumerable<YourPostDb> postsRetrievedFromDatabase = GetPostsFromDb();
List<YourPostApi> listOfPostsToReturnToCaller = postsRetrievedFromDatabase.Select(Map).ToList();
e:

Also re: performance - constructors and object initializers are pretty dang fast compared to reflection!

ThePeavstenator fucked around with this message at 02:52 on Feb 1, 2021

ThePeavstenator
Dec 18, 2012

Custom exception types can be good in situations where you can identify what’s wrong and want to be specific, but can’t or shouldn’t handle that situation in the layer of code you’re on.

For example, some method call reads Foo data from a DB and then does some processing on it and then returns the result. Foo data has FieldA and FieldB, and only A or B should have a value, not both. If you do a check in the method to make sure FieldA and FieldB aren’t both set before processing each piece of Foo data, you can define a more specific custom exception to throw in cases where FieldA and FieldB are both set. Then when your custom MutuallyExclusiveFooFieldException or whatever you call it shows up in logs it’s easy to diagnose the issue and also correlate/quantify how often that happens. This also has the benefit of upstream callers that might know how to resolve that specific issue being able to catch that specific exception.
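A sketch of that Foo example (all names hypothetical, matching the scenario above):

```csharp
using System;

public class MutuallyExclusiveFooFieldException : Exception
{
    public int FooId { get; }

    public MutuallyExclusiveFooFieldException(int fooId)
        : base($"Foo {fooId} has both FieldA and FieldB set; exactly one may have a value.")
        => FooId = fooId;
}

public record Foo(int Id, string? FieldA, string? FieldB);

public static class FooValidator
{
    // Throw the specific type so log entries are self-explanatory and
    // upstream callers can catch exactly this condition.
    public static void EnsureFieldsAreExclusive(Foo foo)
    {
        if (foo.FieldA is not null && foo.FieldB is not null)
            throw new MutuallyExclusiveFooFieldException(foo.Id);
    }
}
```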

This is kind of a subjective design decision though and there are other valid ways to get easy to debug telemetry and conditional error handling.

ThePeavstenator
Dec 18, 2012


epswing posted:

I've got an EF/SQL/LINQ question. Say I've got a table with Id (int not null sequence) and Message (string) columns. I want to produce the last 5 distinct messages in reverse order (most recent first).

For example, given the following data:
pre:
Id	Message
1	apple
2	apple
3	apple
4	apple
5	banana
6	banana
7	dusty
8	elephant
9	elephant
10	hello
The result should be: hello, elephant, dusty, banana, apple

First cut

C# code:
IEnumerable<string> RecentMessages()
{
    return messageTable
        .OrderByDescending(x => x.Id)
        .Select(x => x.Message)
        .Distinct()
        .Take(5);
}
Oops, Distinct scrambles the ordering. OK, maybe I'll Distinct before OrderByDescending, but since this is now on the whole record I need to specify how to compare with an IEqualityComparer.

C# code:
IEnumerable<string> RecentMessages()
{
    return messageTable
        .Distinct(new MessageComparer())
        .OrderByDescending(x => x.Id)
        .Select(x => x.Message)
        .Take(5);
}

class MessageComparer : IEqualityComparer<Message>
{
    public bool Equals(Message x, Message y) => x.Message == y.Message;

    public int GetHashCode(Message obj)
    {
        if (obj == null)
            return 0;
        return obj.Message.GetHashCode();
    }
}
Oops, this can't be converted into a store expression, silly. OK, getting into whack-a-mole territory, I guess I could OrderByDescending, read the whole table into C#, and include my own DistinctIterator (partially stolen from a SO answer):

C# code:
IEnumerable<string> RecentMessages()
{
    var messages = messageTable
        .OrderByDescending(x => x.Id)
        .Select(x => x.Message);
    
    return DistinctIterator(messages).Take(5);
}

IEnumerable<TSource> DistinctIterator<TSource>(IEnumerable<TSource> source, IEqualityComparer<TSource> comparer = null)
{
    HashSet<TSource> set = comparer != null
        ? new HashSet<TSource>(comparer)
        : new HashSet<TSource>();
    
    foreach (TSource element in source)
        if (set.Add(element))
            yield return element;
}
This works, but I really don't want to load the whole table into C#. I could do something hacky like Take(1000) in RecentMessages and only operate on at most 1000 records instead of the whole table, but that still sucks. How can I get this done in the DB via EF?

Edit: Actually, re-reading the final solution, because DistinctIterator uses yield, and everything is IEnumerable, and I'm not ToList'ing anywhere, isn't everything 'streaming' (i.e. not buffering the whole table somewhere) and this will actually just load 5 records via Take and stop there? Either way, I'm still interested in a way to do this entirely with EF if possible.

IEnumerable/yield just means that evaluation/enumeration will be done lazily, but the IEnumerable data source is almost certainly from an in-memory collection in this case (if it was still being done on the DB it would still be an IQueryable).

Ask yourself this - can you write a raw query to accomplish what you want solely on the DB? If the answer is no, then there’s nothing EF will be able to do to make that happen either.
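For what it's worth, the "last 5 distinct" query can be written in a shape SQL understands - group by message, take the max Id per group, order by that - which is the kind of query an IQueryable provider has a chance of translating. A sketch run against an in-memory IQueryable here; I haven't verified the translation against any particular EF provider:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sample data standing in for the Id/Message table from the question.
var messageTable = new[]
{
    (Id: 1, Message: "apple"), (Id: 2, Message: "apple"), (Id: 3, Message: "apple"),
    (Id: 4, Message: "apple"), (Id: 5, Message: "banana"), (Id: 6, Message: "banana"),
    (Id: 7, Message: "dusty"), (Id: 8, Message: "elephant"), (Id: 9, Message: "elephant"),
    (Id: 10, Message: "hello"),
}.AsQueryable();

// GROUP BY Message, MAX(Id) per group, ORDER BY that max descending, take 5.
List<string> recent = messageTable
    .GroupBy(x => x.Message)
    .Select(g => new { Message = g.Key, LastId = g.Max(x => x.Id) })
    .OrderByDescending(x => x.LastId)
    .Take(5)
    .Select(x => x.Message)
    .ToList();

Console.WriteLine(string.Join(", ", recent)); // hello, elephant, dusty, banana, apple
```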

ThePeavstenator

mistermojo posted:

yeah groupby should be what you want

code:
var tupleList = new List<(int id, string message)>
  {
    (1, "apple"),
    (2, "apple"),
    (3, "apple"),
    (4, "apple"),
    (5, "banana"),
    (6, "banana"),
    (7, "dusty"),
    (8, "elephant"),
    (9, "elephant"),
    (10, "hello")
 };

var last5distinct = tupleList.OrderByDescending(x => x.id).GroupBy(p => p.message).Select(y => y.First().message).Take(5);

Console.WriteLine(string.Join(Environment.NewLine, last5distinct));
gets

hello
elephant
dusty
banana
apple


.NET 6 finally adds DistinctBy so we won't even need to use another package soon!

e: if you know its always ordered by id you could simplify it with something like

code:
var last5distinct = tupleList.GroupBy(p => p.message).Select(y => y.First().message).Reverse().Take(5);

IEnumerable and IQueryable are different things with different behaviors even though it looks like you’re just writing the same Linq syntax. This is one of the reasons I hate EF. Linq is very powerful but also has some footguns that devs run into when they don’t understand the difference between when a Linq query is describing functions performed on a sequence (IEnumerable) and when a Linq query is actually building an expression tree (IQueryable) that can be evaluated and executed on a sequence or any arbitrary data source that has a way to translate the expression tree into instructions it understands. EF just papers over this and says “Look at all the poo poo Linq can do! No need to worry about all that complicated poo poo like knowing what your databases are actually capable of or how you should structure your data for your chosen DB(s). Oop your app is eating up all your memory and/or CPU and/or what look like simple queries are taking way longer than expected to evaluate? You should have read the docs and known how all this poo poo works under the hood before deploying it!”

The paradox of EF is that it demands you understand the various intricacies of Linq and how the data providers work internally in order to avoid firing one of the many footguns it hands to you, but once you understand what EF is doing for you and how it works internally, your best choice is usually “don’t use EF and write the queries/HTTP calls/etc myself”.
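The split is observable in plain C# without any database: the same lambda text produces either a compiled delegate or an inspectable expression tree depending on the declared type, and the Queryable operators accumulate trees for a provider to translate later. A small demo (in-memory only, so AsQueryable here just evaluates the tree itself, but the mechanism is the same one EF uses to build SQL):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

Func<int, bool> asDelegate = n => n > 2;          // compiled code: opaque, ready to run
Expression<Func<int, bool>> asTree = n => n > 2;  // data describing the code: inspectable, translatable

Console.WriteLine(asTree.Body); // the tree body can be printed/walked, e.g. (n > 2)

// Enumerable.Where takes the delegate and runs it in memory:
IEnumerable<int> inMemory = new[] { 1, 2, 3, 4 }.Where(asDelegate);

// Queryable.Where takes the expression tree instead - nothing executes yet:
IQueryable<int> query = new[] { 1, 2, 3, 4 }.AsQueryable().Where(asTree);

// Only enumeration forces evaluation (for EF, this is where SQL would be generated and sent).
Console.WriteLine(string.Join(",", query)); // 3,4
```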

ThePeavstenator fucked around with this message at 21:12 on Jun 21, 2021

ThePeavstenator

Polio Vax Scene posted:

There's a developer on our team that continues to use .NET Framework for all their work. I think they're intimidated by .NET Core or something. Any good resources you recommend that can be used to convince them to swap?

It’s been very clear for years now that .NET Core is where all the development effort is going - https://devblogs.microsoft.com/dotnet/net-core-is-the-future-of-net/

.NET Core was also rebranded to just .NET to further drive that point home - https://devblogs.microsoft.com/dotnet/introducing-net-5/

ThePeavstenator
I've had the most success in keeping code reasonably performant and readable with the advice of "use the currently recommended standard library API first". Everyone has their own definition of "easy" and "simple" (and often that definition is "stuff I already know") but it's hard to argue with the 1st party .NET docs saying "use async/await, not .Wait() or .Result" or "use DateTimeOffset instead of DateTime".

In the case of string concatenation, I think you can make the case that all of these are fine if you follow "use the currently recommended standard library API first":
C# code:
void LogHourlyBackgroundJobDone(long executionTimeMillis)
{
    // This is fine
    logger.Log($"Finished background job that executes once per hour, execution time was {executionTimeMillis} ms.");

    // I'm not going to do more than drop a NIT if I thought using a different string format/concatenation/interpolation method would be better
    var logString = string.Format("Finished background job that executes once per hour, execution time was {0} ms.", executionTimeMillis);
    logger.Log(logString);

    // This is fine too, the primary factor for how this code is written should be consistency with how the rest of your codebase does log string construction
    logger.Log("Finished background job that executes once per hour, execution time was " + executionTimeMillis + " ms.");

    // This seems a little verbose for this situation and is probably the worst-looking example out of all of these,
    // but if someone really wanted to use it I could live with approving most PRs that had this as long as it wasn't obnoxious
    var logStringBuilder = new StringBuilder();
    logStringBuilder.Append("Finished background job that executes once per hour, execution time was ");
    logStringBuilder.Append(executionTimeMillis);
    logStringBuilder.Append(" ms.");
    logger.Log(logStringBuilder.ToString());
}
While clearly this is not:
C# code:
void LogSomeMetricsInHotPath(List<long> giantListOfExecutionTimesToLogInHotPath)
{
    if (giantListOfExecutionTimesToLogInHotPath.Count < 1000000)
    {
        throw new ArgumentException("Please only use this method to log batches of at least 1 million execution times.", nameof(giantListOfExecutionTimesToLogInHotPath));
    }

    // var logStringBuilder = new StringBuilder().AppendLine("Building a log string"); // just use the recommended API
    var logString = "Building a log string\n";

    foreach (var executionTimeMillis in giantListOfExecutionTimesToLogInHotPath)
    {
        // logStringBuilder.Append("Here's an Execution Time: ").Append(executionTimeMillis).AppendLine(" ms"); // this is just about as readable as the concatenation
        logString += "Here's an Execution Time getting logged in a really, really hot path: ";
        logString += executionTimeMillis;
        logString += " ms\n";
    }

    // logger.Log(logStringBuilder.ToString()); // that's all it takes
    logger.Log(logString);
}

ThePeavstenator fucked around with this message at 04:06 on Mar 3, 2022

ThePeavstenator

epswing posted:

Question for you warriors that have migrated projects from .NET 4.8 to .NET 6.

I've got an ASP.NET project now targeting framework ".NET 6" and target OS "Windows" (i.e. the line in the csproj file is <TargetFramework>net6.0-windows</TargetFramework>), and did the work to also switch to EF Core. Builds fine locally. When I try to publish to Azure via DevOps pipeline, it fails during the NuGet Restore step, with the error:

I'm confused about what this is telling me. NuGet is pulling in an incompatible version of EF Core? Or a version of EF Core that targets .NET 6 but not Windows?

That's weird. The TFM "net6.0" basically means "any platform that can run .NET 6 can take a dependency on this", where "net6.0-windows" means ".NET 6.0 minimum, with Windows-specific APIs" - https://docs.microsoft.com/en-us/dotnet/standard/frameworks#net-5-os-specific-tfms

What happens if you just use "net6.0" instead of "net6.0-windows"?

ThePeavstenator
Are you invoking MSBuild in your ADO pipeline or are you using dotnet build? NuGet/MSBuild are integrated in dotnet, so you shouldn't need to deal with NuGet versions like that as a separate tool.

ThePeavstenator
This is 100% an XY problem. Setting the culture for the entire UI thread sounds like a bad plan - what happens if you set the UI thread to have a culture for "zh-CN" when the currency in question is the Yuan, and then the entire UI ends up translated into Chinese on a UK/US user's computer without Chinese language support installed?

This is a pretty non-negotiable software design pattern IMO - Money really needs to be represented as a (Currency, Amount) tuple, and the issues you're running into (someone in Great Britain needs to be shown US dollar amounts in the UI) illustrate why. Here's a NuGet package if you want a pre-baked implementation of a Money data structure - https://www.nuget.org/packages/nmoneys

If you're not going to update your legacy application to follow this pattern, you can always set the UI thread to a custom culture that matches the user's default UI culture in everything except the currency symbol, which you set yourself. That's not foolproof either - different cultures have different ways of expressing things like negative currency amounts, and the second this is no longer true:

epswing posted:

The software doesn't do multi-currency

...you'll be forced to make the conversion and wish you had done it in the first place.

e:

You don't necessarily have to use that NuGet package to be clear. If you know what the currency is in the UI (as in, the legacy code already has some way of identifying the currency that just isn't used when converting to a string), you can implement a GetCurrencyCulture() method that accepts the currency and then returns the appropriate .NET culture for the currency, and then you can provide that culture as a parameter in your string.Format call - https://learn.microsoft.com/en-us/d...(system-string)
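To be concrete, a hypothetical GetCurrencyCulture along those lines - the currency-to-culture mapping here is illustrative and deliberately tiny; a real one would be keyed off whatever currency identifier the legacy code already carries:

```csharp
using System;
using System.Globalization;

// Hypothetical mapping from a currency code to a culture whose formatting
// conventions fit that currency (illustrative, not exhaustive).
static CultureInfo GetCurrencyCulture(string currencyCode) => currencyCode switch
{
    "USD" => CultureInfo.GetCultureInfo("en-US"),
    "GBP" => CultureInfo.GetCultureInfo("en-GB"),
    "EUR" => CultureInfo.GetCultureInfo("de-DE"),
    _ => CultureInfo.InvariantCulture, // fall back rather than guess
};

// Format with the currency's culture instead of mutating the UI thread's culture:
decimal amount = 1234.56m;
Console.WriteLine(string.Format(GetCurrencyCulture("USD"), "{0:C}", amount)); // $1,234.56
Console.WriteLine(string.Format(GetCurrencyCulture("GBP"), "{0:C}", amount)); // £1,234.56
```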

ThePeavstenator fucked around with this message at 02:28 on Feb 26, 2024


ThePeavstenator
Thinking "oh I'll just use a decimal and make it work, making a Money type seems like gold plating" will lead you to the same land of pain as using the standard DateTime type and adding TimeSpan values to it because a library like NodaTime for calendar/time arithmetic seems too complex for a "simple" use case.
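A minimal sketch of what a Money type buys you (illustrative, not the nmoneys API): the currency travels with the amount, and cross-currency arithmetic fails loudly instead of silently producing nonsense:

```csharp
using System;

public readonly record struct Money(decimal Amount, string Currency)
{
    public static Money operator +(Money a, Money b)
    {
        // Refuse to mix currencies - the caller must convert explicitly first.
        if (a.Currency != b.Currency)
            throw new InvalidOperationException(
                $"Cannot add {b.Currency} to {a.Currency} without an explicit conversion.");
        return a with { Amount = a.Amount + b.Amount };
    }
}
```

Adding USD to USD works; adding GBP to USD throws, which is exactly the bug a bare decimal would have let through.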
