biznatchio
Mar 31, 2001


Buglord

ljw1004 posted:



Note: although the ?. operator is built into C#6 and VB14, the warning squiggle and codefix in this video are not built in. I wrote them myself via a simple "analyzer" - I put the source code here on github. For me the single most important feature of VS2015 is that you can write your own refactorings, analyzers and code-fixes.

Saw this from a few days back --- is this a safe refactoring? Does the ?. operator take a local copy to be consistent across the null check and the method invocation?
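Concretely, the concern is the classic check-then-call race. A sketch (the `_handler` field is hypothetical, standing in for anything another thread might null out):

```csharp
using System;

public class Publisher
{
    private EventHandler _handler;  // may be nulled out by another thread

    public void RaiseUnsafe()
    {
        // Classic race: _handler can become null between the check and the call.
        if (_handler != null)
            _handler(this, EventArgs.Empty);
    }

    public void RaiseSafe()
    {
        // ?. is specified to evaluate the receiver exactly once, into a
        // hidden temporary -- the null check and the invocation both see
        // the same snapshot, so this can't throw a NullReferenceException.
        _handler?.Invoke(this, EventArgs.Empty);
    }
}
```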


biznatchio
Mar 31, 2001


Buglord

crashdome posted:

Yeah, BindingSource is great. A list doesn't have any INotify ....blah blah... so you'd have to call Suspend, Resume, Refresh on the listbox control manually during the update. Thanks for reminding me how much I don't miss WinForms.

For Windows Forms data binding, you can use BindingList<T> (or the IBindingList interface), which supports all the list changed events you'd care about, and is natively supported by BindingSource and all the built-in controls.

It's basically the WinForms version of WPF's ObservableCollection.
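A minimal sketch of the idea (method and variable names are illustrative):

```csharp
using System.ComponentModel;
using System.Windows.Forms;

static class BindingDemo
{
    // BindingList<T> raises ListChanged on add/remove/reset, so the bound
    // control repaints itself -- no manual Suspend/Resume/Refresh needed.
    public static BindingList<string> BindPeople(ListBox listBox)
    {
        var people = new BindingList<string>();
        listBox.DataSource = new BindingSource { DataSource = people };
        return people;
    }
}
```

After wiring that up, a `people.Add("Alice")` shows up in the ListBox immediately.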

biznatchio
Mar 31, 2001


Buglord

Mr Shiny Pants posted:

It is what I've been doing now and boy filtering console output is a pain in the rear end.

But that's The Unix Way™! Everything's a stream of bytes intended for human consumption and if you like parsing boy howdy have I got good news for you!

Seriously though, depending on what you're trying to scrape and how, you might benefit from building some standardized translators that process the input lines in easily definable ways and compose together to achieve what you want in each case; or use Process.Start to launch /bin/sh and pipe through the standard Unix text-processing utilities, which amounts to the same thing. A shell like PowerShell makes a lot more sense for moving structured data between tasks, but for as inefficient as processing human-readable stdout is in Unix, at least they've built up a really good suite of utilities that makes the concept workable.
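A sketch of the Process.Start route (the `dmesg | grep` command line is just an example):

```csharp
using System.Diagnostics;

// Let the shell and the standard Unix utilities do the line-oriented
// filtering, then read the already-filtered output back.
var psi = new ProcessStartInfo
{
    FileName = "/bin/sh",
    Arguments = "-c \"dmesg | grep -i usb\"",
    RedirectStandardOutput = true,
    UseShellExecute = false,
};

using (var proc = Process.Start(psi))
{
    string line;
    while ((line = proc.StandardOutput.ReadLine()) != null)
    {
        // parse each pre-filtered line here
    }
    proc.WaitForExit();
}
```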

biznatchio
Mar 31, 2001


Buglord

Sab669 posted:

http://stackoverflow.com/q/31457372/1189566

Anyone by chance able to explain this? :psyduck:

You're not messing with the Controls collection on the TabControl directly anywhere else in your code are you? Looking at the reference source for TabControl, it seems getting an item by index directly pulls from an internal list of TabPages, whereas removing by index simply removes from the Controls collection by index -- directly poking at the Controls collection would get those two collections out of sync with each other.

Removing an item by TabPage reference (as opposed to by index) calls the Controls.Remove overload with the object reference as well, which would explain why that works to remove the right page when RemoveAt wouldn't. It really sounds like an extra item is in your Controls collection.

biznatchio fucked around with this message at 17:13 on Jul 16, 2015

biznatchio
Mar 31, 2001


Buglord

Sab669 posted:

The method that moves tabs from Active / Archived takes two TabControls and an index. So this is most likely it.

TabPage tp = source.TabPages[index];
destination.TabPages.Add(tp);

is the gist of it. Sorry for no formatting, on mobile.

That doesn't look like it's the culprit, it's accessing through the TabPages property, not the Controls property.

You could write some simple code just before your bad RemoveAt() call to loop over the Controls property and dump out everything that's in there. The RemoveAt() is going to remove based off those indexes, not the indexes you'd get by doing the same dump over the TabPages property. They should normally return the same set of objects, but I'm betting they don't in your case, and when you find what's in Controls but not in TabPages, you can work backward and find out how it got in there.

edit: Actually, I see the problem: it won't work right if you add a tab page to a tab control without removing it from the original tab control it was on first. The two lines of code you have above show the problem. Add a source.TabPages.Remove(tp) between those two lines.
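In other words (using the variable names from the snippet above):

```csharp
TabPage tp = source.TabPages[index];
source.TabPages.Remove(tp);       // detach from the old TabControl first...
destination.TabPages.Add(tp);     // ...then attach it to the new one
```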

biznatchio
Mar 31, 2001


Buglord

Manslaughter posted:

Trying to transition to visual studio 2015 and it's breaking on exceptions even if it's in a try/catch and even if I have all the exception types unchecked in the exception handler. What gives?

Turn off the option to break on first chance exceptions.

biznatchio
Mar 31, 2001


Buglord
The first thing I'd do to debug that sort of problem is to stick a proxy between the device and the service and see exactly what's happening on the wire.

biznatchio
Mar 31, 2001


Buglord

EssOEss posted:

Is it really possible that .NET Framework's implementation of Canonical XML 1.0, implemented since .NET 1.1, is incorrect? Or am I missing something crucial here?

See https://connect.microsoft.com/VisualStudio/feedback/details/3002812 and https://github.com/sandersaares/xml-c14n-whitespace-defect for a description. In short, .NET Framework appears to (incorrectly) strip whitespace.

I didn't look too deeply into it, but as a quick response, per the documentation from XmlDocument.Load(), emphasis mine:

quote:

The Load method always preserves significant white space. The PreserveWhitespace property determines whether or not insignificant white space, that is white space in element content, is preserved. The default is false; white space in element content is not preserved.

If you set PreserveWhitespace to true before loading your input XML, that may fix your whitespace problem.
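That is, assuming the input goes through XmlDocument somewhere (file name is hypothetical):

```csharp
using System.Xml;

var doc = new XmlDocument();
doc.PreserveWhitespace = true;   // must be set *before* loading
doc.Load("input.xml");
// doc now retains insignificant whitespace, which matters for c14n
```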

biznatchio
Mar 31, 2001


Buglord

LOOK I AM A TURTLE posted:

I assume you're working on a video game. This great article talks about (among other things) how the programmers behind Starcraft painted themselves into a corner due to overuse of inheritance: http://www.codeofhonor.com/blog/tough-times-on-the-road-to-starcraft

Inheritance isn't inherently bad; you just have to design your taxonomy very carefully, because if you let it grow naturally you end up doing absolutely awful, shortsighted things that hamstring you later on. The Starcraft example shows inheritance for the sake of it, without any apparent rhyme or reason. Why are game units inherited from the class created to show particles? In what world does that make sense?

I suspect the answer to that question would be something like "Well, you see, particles were the only sprites that could move on the map at the time, and game units need to move, so it was easiest to just inherit from the particle class..."

Any justification for inheritance that includes the words "...it was easiest to just..." is a big red flag because it implies there was a better solution but instead of sticking to proper design, corners are being cut and technical debt is being piled on.

biznatchio
Mar 31, 2001


Buglord

Baloogan posted:

I want to size the ElementHost dynamically, but I want to avoid this exception. Any ideas?

Assuming you want the UIElement to drive the show completely, you could put the UIElement you want to host inside a ScrollViewer to insulate it from the size changes of the ElementHost. That will allow the UIElement to dictate its size without going into an unfortunate loop of responding to the size changes, since being inside a ScrollViewer will result in the target UIElement always getting infinite as the layout available size.
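Roughly like this (variable names are placeholders):

```csharp
using System.Windows.Controls;

// Insulate the hosted element from the ElementHost's size changes: inside
// a ScrollViewer, layout offers the content unconstrained available size,
// so the element measures to its natural size instead of chasing the host.
var viewer = new ScrollViewer
{
    Content = hostedElement,   // hostedElement: the UIElement being hosted
    HorizontalScrollBarVisibility = ScrollBarVisibility.Hidden,  // scrollable,
    VerticalScrollBarVisibility = ScrollBarVisibility.Hidden,    // bars hidden
};
elementHost.Child = viewer;    // elementHost: your WinForms ElementHost
```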

biznatchio
Mar 31, 2001


Buglord
I thought the whole point of putting things as separate packages in NuGet was so they could fix issues without having to wait for a monolithic release?

biznatchio
Mar 31, 2001


Buglord

Munkeymon posted:

That's somewhat surprising (to me, at least!)

Why would you try to mock up a parse tree directly when you've already got a perfectly serviceable syntax for defining any valid parse tree you might want to test?

biznatchio
Mar 31, 2001


Buglord
I've always been a fan of putting credentials into the user's environment variables, perhaps encrypted with the user's DPAPI account key to prevent accidental disclosure; but as with anything you'll find just as many people arguing vehemently against it as arguing for it.
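A sketch of the DPAPI variant (the variable name and secret are illustrative; ProtectedData lives in System.Security.Cryptography):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Encrypt under the current user's DPAPI key, then park the result in a
// user-scoped environment variable.
byte[] plaintext = Encoding.UTF8.GetBytes("s3cret-api-key");
byte[] protectedBytes = ProtectedData.Protect(
    plaintext, optionalEntropy: null, DataProtectionScope.CurrentUser);
Environment.SetEnvironmentVariable(
    "MYAPP_TOKEN", Convert.ToBase64String(protectedBytes),
    EnvironmentVariableTarget.User);

// Reading it back later:
byte[] roundTrip = ProtectedData.Unprotect(
    Convert.FromBase64String(
        Environment.GetEnvironmentVariable("MYAPP_TOKEN",
            EnvironmentVariableTarget.User)),
    null, DataProtectionScope.CurrentUser);
string secret = Encoding.UTF8.GetString(roundTrip);
```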

biznatchio
Mar 31, 2001


Buglord

dougdrums posted:

Unnng yes. I'm a relative luddite and use wifi in my house and office to make calls, and I don't have regular cell service. There are tons of android apps that now assume you're hooked up to the magical cloud fairy 24/7 and will flip out otherwise, and then retry the operation over and over again, forever wasting my battery.

There really should be some option to allow the phone's network stack to let every network call basically "hang" and do nothing until its timeout instead of immediately returning a failure when the network isn't available. That way at least those misbehaving apps will spend a lot of time effectively sleeping instead of wasting battery in a busy retry loop.

biznatchio
Mar 31, 2001


Buglord

Sab669 posted:

This is purely speculation, but I assume it's probably because there are still a lot of old computers in use. So they just make it default to the behavior that will make your application run fine on x86 or x64.

Not setting "Prefer 32-bit" doesn't stop your application from running on x86; it just means that on an x64 system, the process will run as 64-bit.

My suspicion is the setting defaults that way because unless a developer is acutely aware of the situation, they're not going to be testing their code as both 32-bit and 64-bit, and since most applications don't need 64-bit address space, the default might as well be 32-bit just to avoid any bitness issues with unmanaged libraries that could arise because the developer never bothered to consider the case.

But do note that "AnyCPU, Prefer 32-bit" is not the same as setting the build settings to "x86", because "AnyCPU, Prefer 32-bit" allows the code to run on ARM devices, whereas "x86" doesn't.
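For reference, the checkbox corresponds to this project-file property (a fragment of a hypothetical csproj):

```xml
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Prefer32Bit>true</Prefer32Bit>
</PropertyGroup>
```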

biznatchio
Mar 31, 2001


Buglord

dougdrums posted:

Is there a simple way to get a .net core 2 console program to use the server GC? What magical file do I have to add to get it to be copied into the build runtime config? Right now I am literally overwriting the runtime config that the build generates and restarting the program ...

Add this property to the PropertyGroup in your csproj:

code:
<ServerGarbageCollection>true</ServerGarbageCollection>
Older versions (not sure what version it changed) required it to be set in a runtimeconfig.template.json file instead:

code:
{
  "configProperties": {
    "System.GC.Server": true,
    "System.GC.Concurrent": true
  }
}

biznatchio
Mar 31, 2001


Buglord
WinForms and WPF are dead ends. Don't let the fact they're still getting bug fixes fool you -- only getting bug fixes is the definition of being in "maintenance mode". All the dev love is behind the UWP platform now.

And to be honest, UWP isn't as god awful as it used to be. But even in the presence of roadmaps that only lead straight over a cliff I'd still say that, when it comes to making desktop apps for Windows, WinForms and WPF are still your best options.

biznatchio fucked around with this message at 22:05 on Jan 18, 2018

biznatchio
Mar 31, 2001


Buglord

beuges posted:

UWP apps can run on the desktop, they're not limited to phones only, but they're meant to be distributed via the store, not as standalone software. If your users are not likely to want to use the store, that's going to be a big factor.

That's not true anymore and hasn't been for over a year. UWP apps can be distributed as standalone software with no dependency on the store in the same way any arbitrary EXE can.

biznatchio
Mar 31, 2001


Buglord

amotea posted:

Haha just remembered VS had those capitalized menu items in some release a while ago. :discourse:

I just happened to need to open Visual Studio 2013 for some testing today and was also reminded of WHAT A GOOD IDEA IT IS TO HAVE EVERYTHING IN CAPS.

biznatchio
Mar 31, 2001


Buglord
Yeah, virtual is a very important part of a class's contract and shouldn't just blindly be applied to everything the way Java does by default. In a base class, making a method virtual means you can't rely on it doing what you expect in derived classes. Consider the following example:

code:
    public class BaseClass
    {
        public BaseClass(IMyDAL dal, UserToken currentUser)
        {
            _dal = dal;
            _currentUser = currentUser;
        }

        private readonly IMyDAL _dal;
        private readonly UserToken _currentUser;

        public bool CheckAccess(string resource)
        {
            return _dal.CheckAccess(_currentUser, resource);
        }

        public void KillEverything()
        {
            if (!CheckAccess("LaunchMissiles"))
                throw new AccessDeniedException();

            _dal.EndHumanity();
        }
    }
Non-virtual methods by default like in C#? No problem. But if you presume everything is virtual by default like in Java, then this happens:

code:
        public class DerivedClass: BaseClass
        {
            public DerivedClass(IMyDAL dal, UserToken currentUser)
                : base(dal, currentUser)
            {
            }

            // only compiles if CheckAccess is virtual -- which, in Java,
            // it implicitly would be
            public override bool CheckAccess(string resource)
            {
                return true;
            }
        }
Next thing you know your biggest concern is avoiding radscorpions and finding a new water chip; all because you left a critical method virtual and someone derived from your class and broke it, and thus violated your encapsulation. Basically, you have to defensively assume that every non-final/sealed method isn't going to do at all what you intended it to do.

Allowing people to break your assumptions is something you should always be opting into (by adding a keyword to allow it), not out of (by adding a keyword to disallow it).

biznatchio
Mar 31, 2001


Buglord
You can suppress that behavior by adding [System.ComponentModel.DesignerCategory("Code")] to the class.
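For example (the class and base type here are illustrative):

```csharp
using System.ComponentModel;

// Stops Visual Studio from opening this file in the component designer
// and forces the plain code editor instead.
[DesignerCategory("Code")]
public class PollerComponent : Component
{
}
```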

biznatchio
Mar 31, 2001


Buglord

a hot gujju bhabhi posted:

Yep, that's definitely what it is. I don't know how to fix it I'm afraid, but there'll definitely be a way to configure that.

Esposito posted:

It's not a delicate solution, but you can also right-click on the file -> open with -> csharp editor -> set as default, but this is a global setting and might be annoying if there are other classes of this type that you want to open in designer view.

There's a solution a few posts up, guys.

biznatchio
Mar 31, 2001


Buglord

B-Nasty posted:

They're far better off than if they used almost any other technology.

.NET Framework 4.8 is supported on Windows Server 2019, which has mainstream support until 2024. Their WebForms app should run fine, fully-supported with patches for security issues, for at least another 5 years. As an example, a WebForms app written in .NET 2.0 could have been used, unchanged, from 2005 to 2025. 20 years is an eternity in computing/software, and it's more than enough time to migrate critical apps.

.NET Framework 4.8 ships with the current version of Windows 10 LTSC, which guarantees it will be supported until 2029. Plus, it continues to ship with the current regular editions of Windows 10, and each new OS version bumps the support lifecycle out another 18 months; so it will effectively be supported in perpetuity, since I have a real hard time seeing Microsoft ever dropping it from a Windows 10 feature update in the foreseeable future (say, the next 15 years or so, assuming Windows 10 itself is still the 'current' Windows for that long).

.NET 4.8 will be with us forever as an everlasting monument to our hubris. Look upon my frameworks, ye mighty, and despair.

biznatchio
Mar 31, 2001


Buglord
My philosophy is that as much should go into the catch block as necessary (but no more) to accomplish three goals:

1) Record the failure with enough fidelity to enable postmortem analysis;
2) Perform any necessary cleanup and rollback for the failure, whether that's done immediately in line or scheduled for something else to handle; and
3) If for any reason #1 or #2 fail and you're going to surface a new error as a result, it must always be written such that the evidence of the original failure that led to you being in the catch block in the first place is preserved in a way that's discoverable (e.g. InnerException, logging).

For #2, it may very well be that whatever you're protecting has no logic that needs to be executed to compensate for the failure. If your task was to open a file, and you failed to open the file, then there's no cleanup you need to do as a result of that.

But if your task was to open a file, fill it with structured data, flush it, and close it, and you failed midway through writing, then your compensation logic might be complicated: deleting the half-written file, copying it somewhere, sending out a service bus message, or whatever else you need to do to keep that failure from persisting as broken application state. And since compensation logic can fail in its own right, you need to think about how to fail safe when cleanup isn't possible (up to and including stopping the whole service if things are severe enough).

My first preference is to treat a try/catch block like a database transaction. Either it succeeds fully, or it backs out and leaves no trace other than the failure being returned or rethrown. But in some cases 'backing out' isn't the right compensation, and 'gracefully transitioning into a known error state' is instead.
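To make the three goals concrete, a sketch (the report-writing task and log destination are hypothetical):

```csharp
using System;
using System.IO;

public static class ReportWriter
{
    public static void WriteReport(string path, string[] rows)
    {
        try
        {
            using (var writer = new StreamWriter(path))
                foreach (var row in rows)
                    writer.WriteLine(row);
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine($"Report write failed: {ex}");  // goal 1: record it

            try
            {
                File.Delete(path);  // goal 2: don't leave a half-written file behind
            }
            catch (Exception cleanupEx)
            {
                // goal 3: if cleanup fails too, surface both failures
                throw new AggregateException(ex, cleanupEx);
            }

            throw;  // back out cleanly, like a rolled-back transaction
        }
    }
}
```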

biznatchio fucked around with this message at 19:46 on May 7, 2020

biznatchio
Mar 31, 2001


Buglord

Rocko Bonaparte posted:

I recently found myself doing some C# copy-paste and wondered if I could avoid it. I was wrapping a lot of private fields with getters and setters so I could strobe an event when the setters were called. The event is different per field and the data type varies per field, but it's still pretty generic. I can't think of any way to represent that other than to still type it out without getting into ugly reflection poo poo that is far worse, but I thought I'd ask anyways.

I could probably compromise and have a more overall "this thing changed but I can't tell you what" kind of event and make that standard but I'm still stuck messing with the fields. I can only think of a generic helper that encapsulates the field but I'd rather have the class trying to do all this contain the field itself.

Well, you could do something like this, which reduces the amount of boilerplate you need to write, but still requires you to write a field, event, and one-line getter and setter for each property.

Or, you could use the .Net standard INotifyPropertyChanged interface and simplify things a little further into something like this; though you'll pay a little bit of runtime cost with this because it's adding a dictionary lookup and boxing/unboxing to your gets and sets.

You could combine the two approaches to remove that extra overhead, though, at the cost of having to define the backing field for each property like this.

biznatchio
Mar 31, 2001


Buglord
Turn on file access auditing for the repo folder, then do a build and a run. Turn auditing back off, use Event Viewer to search for and export the file-access audit entries to an EVTX file, then write a small C# program using System.Diagnostics.Eventing.Reader.EventLogReader to iterate through the entries and build a list of files.

biznatchio
Mar 31, 2001


Buglord
I don't know if I'd recommend it, but you can serialize an Expression to a JSON object using Aq.ExpressionJsonSerializer, and then deserialize it on the other side into something you can execute. But if you're going to expose something like that in a public API you better make sure you have your ducks in a row that you're not just allowing anyone to do arbitrary code execution on you.

biznatchio
Mar 31, 2001


Buglord

adaz posted:

Async/await: it's usually on another thread except when it isn't*

* isn't: the compiler says so; usually CPU-bound code, or you don't await something in the called method

* you call .Result

* you are writing a WPF or Windows Forms app that makes use of a synchronization context

Calling .Result doesn't affect which thread the task runs on. It just blocks the current thread and puts it to sleep until the task is done. The task itself will keep running wherever it needs to, which can really bite you in a single-threaded synchronization context like WPF or WinForms because it's a recipe for deadlock -- the task you're waiting on might require the same thread you just blocked to wait for it.

That's why library authors are encouraged to use ConfigureAwait; because it frees them from accidentally contributing to a deadlock in the event that a consumer of the library uses .Result.

(Don't ever use .Result. Not just because you break the whole idea of asynchronous processing if you're blocking threads, but because you should be calling .GetAwaiter().GetResult() instead; which does exactly the same thing as .Result except it doesn't wrap any exceptions the task throws inside an AggregateException, you just get the raw exception instead.)
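A sketch of the library-side ConfigureAwait pattern (the class, method, and URL are made up):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class WeatherClient
{
    private static readonly HttpClient _http = new HttpClient();

    // ConfigureAwait(false) means the continuation doesn't need to resume
    // on the caller's synchronization context -- so even a caller that
    // (unwisely) blocks with .Result on a WinForms/WPF thread won't
    // deadlock on *this* await.
    public static async Task<string> GetForecastAsync()
    {
        return await _http.GetStringAsync("https://example.invalid/forecast")
                          .ConfigureAwait(false);
    }
}
```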

biznatchio
Mar 31, 2001


Buglord

raminasi posted:

Yes please don’t un-idiomatically use compiler internals directly just to save three lines of error handling code.

MSDN Magazine suggested using GetAwaiter().GetResult() when you need to block a thread to wait -- but under the broader and much better advice that you shouldn't be blocking threads to wait for tasks in the first place, because it's a recipe for deadlocks no matter how you do it.

biznatchio
Mar 31, 2001


Buglord

brap posted:

What does white screen of death mean here? Is VS failing so badly it’s bringing down your whole system?

When a WPF application like Visual Studio has a rendering hang, it leaves the application's window unpainted, which is usually white.

biznatchio
Mar 31, 2001


Buglord
You should only be using dynamic in extremely specific situations; and foremost among those are some forms of COM interop, some forms of interop with objects given to you by a scripting engine, and some cases where you need to process arbitrarily-shaped JSON objects. And even in these cases there's often a better way you should be doing it instead.

You should never be in a situation where you're looking at a var keyword and thinking "boy I should change that to dynamic", because var and dynamic fill entirely different purposes. Unless you can explain in detail why the actual contract of an object won't be known until runtime and why using adapter classes and interfaces isn't suitable, then dynamic is the wrong solution.

At least, if your intention is to write performant, robust, and maintainable code, that is. If you're just banging out a quick utility that'll be used a few times and never seen again, then by all means go hog wild.

biznatchio
Mar 31, 2001


Buglord

LongSack posted:

Exceptions: I really try to not throw exceptions unless there is truly an exceptional condition. 401, 403 and 404 are entirely normal and I don’t feel they fall into the exceptional category.

The mindset that exceptions only need to be thrown in "exceptional" circumstances is very much the wrong one to hold when it comes to idiomatic C#.

Your method names should be verbs or verb phrases, and you should throw an exception whenever the method fails to perform that verb, whether the failure was "exceptional" or not. If you have a method named PerformWebRequest, then yes, if you get back a 401 you should be returning normally -- because you performed a web request, and the 401 was the response you got. But if you have a method named GetProductData and you get back a 401 from the service call that should have returned the product data, you should absolutely be throwing an exception, because you didn't successfully get the product data.

Follow this guidance and you'll be writing proper idiomatic C# that operates the same as the standard library and all the C# libraries you'll be pulling in and using and you won't have to remember and write code around two different ways of handling failures. (You also won't get totally screwed when you build a house of cards of methods that always return a failure object instead of throw an exception, and then you call a standard library method somewhere deep down the call stack that throws because it failed, and now all of a sudden your callstack unwinds without you handling it properly because you were expecting to be able to clean up without catch blocks by checking return values instead.)

And yes, if you dig around you'll probably find some old blog posts from 2003 that claim that exception performance in C# is bad. It was, but it was only bad to the point where you didn't want to be throwing tens or hundreds of thousands of them in a tight loop (which is why methods like Int32.TryParse exist as a non-exception throwing alternative to Parse, because parsing integers is something that happens in tight loops all the time). But they were never bad to the point that you should be compromising your control flow structure in general code to avoid throwing them when things go wrong. Indeed, it's worse for performance not to throw them, because you always pay the cost of callers having to check your return value for a failure that should have been an exception even when there was no failure at all, but you only pay the minor cost of throwing an exception when you actually have to throw one.
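The two styles side by side, as a sketch (the `_client`, `Product`, and response shape are hypothetical stand-ins for your service plumbing):

```csharp
public class ProductService
{
    // Tight loop: the Try* pattern avoids throwing thousands of exceptions.
    public static int SumParsable(string[] tokens)
    {
        int total = 0;
        foreach (var token in tokens)
            if (int.TryParse(token, out var n))   // no exception on bad input
                total += n;
        return total;
    }

    // Ordinary control flow: the method's verb is "get product data", so
    // anything short of product data should throw.
    public Product GetProductData(int id)
    {
        var response = _client.GetProduct(id);
        if (!response.Succeeded)
            throw new InvalidOperationException(
                $"Failed to get product {id}: {response.Status}");
        return response.Product;
    }
}
```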

biznatchio fucked around with this message at 13:10 on Mar 8, 2021

biznatchio
Mar 31, 2001


Buglord
If it helps, you can create Exception classes really easily in Visual Studio by just typing "exception" and hitting Tab twice. But that's just automating the boilerplate, not really eliminating it. I'll usually just throw an ApplicationException until I have a need to do catch filtering on that specific failure.

biznatchio
Mar 31, 2001


Buglord

Funking Giblet posted:

Improved pattern matching makes generic exceptions easier to work with.

Yes, 1000%. For inter-project exceptions they more or less entirely eliminate the need for custom exception types.

But if you're designing a reusable library, I'd still recommend creating and throwing your own exception types rather than saddling the library's consumers with 1) needing to know what pattern they should look for to catch the specific failure they care about catching; and 2) accidentally permanently tying yourself to having to maintain that your exception continues to match that arbitrary ad hoc pattern in future releases of your library. A (lowercase t) type is a type, regardless of whether it's a (uppercase T) Type or not; and if you make it a Type your life will be easier in the long run at the minor expense of 30 seconds of work. It's always better to be as explicit as possible.


edit: and I also agree that there's a special place in hell for anyone who's ever written throw e; to rethrow a caught exception. I can't believe the compiler doesn't at least flag it with a warning; if you ask me, it should be a compile error.
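For anyone unclear on why throw e; is so bad, a minimal illustration (DoWork and LogSomewhere are hypothetical):

```csharp
try
{
    DoWork();
}
catch (Exception e)
{
    LogSomewhere(e);

    // throw e;  // WRONG: restarts the stack trace at this line, destroying
    //           // the evidence of where the exception actually came from
    throw;       // rethrows the same exception with its stack trace intact
}
```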

biznatchio fucked around with this message at 16:53 on Mar 10, 2021

biznatchio
Mar 31, 2001


Buglord

nielsm posted:

Since you do have a service management system, consider if those things that need doing shouldn't go into it, as tasks with a deadline, as soon as it becomes known it needs doing. That should make a clearer audit trail.

Echoing this. Email is a cesspool, don't ever rely on it as a notification system for actionable tasks. If you have a system for tickets (and you mentioned you have ServiceNow), just skip the informalities and create the ticket there directly, because that's where people already look for work needing to be done. Even if it's a horrible dumpster fire of a ticket system -- as they tend to be -- you want your generated tasks to be where everyone else's are.

biznatchio
Mar 31, 2001


Buglord
I'd bet you could throw one together for .NET 4 easily by grabbing the .NET Core source code for ImmutableSortedDictionary and ImmutableQueue and building a wrapper that exposes a normal mutable PriorityQueue interface with an added GetImmutableEnumerator() method, leveraging the ImmutableSortedDictionary data field within to provide a stable iterator.

biznatchio
Mar 31, 2001


Buglord

Rocko Bonaparte posted:

I mean, yeah, but that "easily" there is doing a lot of work.

Not really; it took about 10 minutes. (But about twice as long to write the unit tests.)

biznatchio
Mar 31, 2001


Buglord
If you grabbed it, I just pushed a couple changes to brush it up, correct some minor issues, and add documentation.

biznatchio
Mar 31, 2001


Buglord

SirViver posted:

If you want to do "endless" background work it should IMO be done on a separate thread, optionally with a CancellationToken so your main application is able to "nicely" get the background thread to stop when the app wants to exit. Async/await doesn't really come into play for this.

Using async is perfectly acceptable for endless background work loops, as long as the work loop itself uses async/await internally rather than doing blocking work. But yes, pass a CancellationToken to the method so it can be stopped.

I use a pattern similar to this a lot to encapsulate such async background work within a class that can just simply be disposed to stop the background work:

code:
public class SomeClass : IDisposable, IAsyncDisposable
{
    private readonly CancellationTokenSource _disposedCts = new CancellationTokenSource();
    private readonly Task _processingTask;

    public SomeClass()
    {
        _processingTask = ProcessingLoopAsync(_disposedCts.Token);
    }

    public void Dispose()
    {
        DisposeAsync().AsTask().Wait();
    }

    public async ValueTask DisposeAsync()
    {
        _disposedCts.Cancel();
        try { await _processingTask; } catch { }
    }

    private async Task ProcessingLoopAsync(CancellationToken cancellationToken)
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            // do stuff here
            // with "await Task.Delay(timespan, cancellationToken)" for loops that do stuff on intervals
        }
    }
}

biznatchio fucked around with this message at 16:02 on Jun 1, 2023


biznatchio
Mar 31, 2001


Buglord
Unobserved faulted tasks will cause the process to terminate in .Net Framework 4 -- at an indeterminate time because it can only happen when the Task is finalized, and that's at the mercy of the GC. Starting with .Net Framework 4.5, and all versions of .Net Core, they do not (unless you explicitly configure that you want the legacy behavior); but you can still be notified of them via the TaskScheduler.UnobservedTaskException event.
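Hooking that event looks like this (the handler body is illustrative):

```csharp
using System;
using System.Threading.Tasks;

TaskScheduler.UnobservedTaskException += (sender, e) =>
{
    Console.Error.WriteLine($"Unobserved task failure: {e.Exception}");
    e.SetObserved();   // marks the exception observed; matters if the
                       // legacy throw-on-finalize policy is enabled
};
```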

biznatchio fucked around with this message at 06:55 on Jun 4, 2023
