Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

zokie posted:

We are having performance problems with a service we depend on, and while they acknowledge that things could be better, they don't really want to own that they are the problem. So I've been trying to investigate with perfmon.exe and perfview.exe, but I'm not that familiar with either of them.

The service is ASP.NET running on .NET Framework 4 (just 4 :() hosted on IIS, nothing special. I'm pretty sure the problems are 1) them abusing the GC by reading all the request and response bodies fully into memory (to log them), with no streaming, which puts a lot of pressure on the LOH and triggers Gen 2 GCs.
I've been able to measure how often that happens, but I have no idea how frequent is too much.

The bigger problem is probably them using async APIs synchronously e v e r y w h e r e! I've used PerfView to look at thread pool events, but again I have no idea of what is OK here and what isn't.

Then I looked at one of the Thread Time Stacks views and put BLOCKED_TIME in the find field. Pressed enter and it showed 94% inclusive! Does this mean what I think it means? That they are using the remaining 6% for CPU, network, &c., but that 94% of the time in the threads is spent waiting when we really don't need to? Because that's just so bad I can't believe it!
Maybe that includes idle threads in the thread pool?

Anyone got experience with this?

look into horizontal scalability solutions - e.g. can you mitigate the problem using multiple app servers / load balancing / reverse proxy caching / restarting processes once they hit a certain memory-use threshold? generally if performance is going to be an issue, you need to figure that stuff out anyway, even when things are working right, and it's usually way cheaper to come up with mitigations like that than to wait for a perf issue to be fixed. especially given that this is a third-party product: if you're looking for an immediate fix, a back and forth with the vendor isn't going to solve the immediate problem.

also, you don't ever need to speculate about the cause of performance problems if it's .net code - e.g. a statement like "I'm pretty sure the problems are" is not necessary if you have access to the dlls the service depends on (which I'm guessing you do if you have access to the server the service is running on). just run them through JetBrains dotPeek or similar and take a look at the decompiled code. i've had some success with this method before, but frankly there are just so many potential ways of screwing up performance in the single-app-server case that you may as well plan on the solution outlined in paragraph 1, unless it's not possible to mitigate the issue with horizontal scaling (e.g. extremely stateful apis).

raminasi
Jan 25, 2005

a last drink with no ice

Bruegels Fuckbooks posted:

also, you don't ever need to speculate about the cause of performance problems if it's .net code - e.g. a statement like "I'm pretty sure the problems are" is not necessary if you have access to the dlls the service depends on (which I'm guessing you do if you have access to the server the service is running on). just run them through JetBrains dotPeek or similar and take a look at the decompiled code. i've had some success with this method before, but frankly there are just so many potential ways of screwing up performance in the single-app-server case that you may as well plan on the solution outlined in paragraph 1, unless it's not possible to mitigate the issue with horizontal scaling (e.g. extremely stateful apis).

I've worked on a .NET web service that was big and complicated enough that we had to use perfview to figure out what the hell was going on even though it was our product and we had access to the complete source code.

zokie
Feb 13, 2006

Out of many, Sweden
It's maintained by another team, so I have access to the code and everything. I know the code is terrible: they use .Wait() and .Result everywhere and only very little async/await, mostly because their shared libraries target .NET 4, which I just learned was end-of-lifed in 2016, and now I'm so loving angry.

I did manage to fold and exclude stuff in PerfView to get better numbers; I think the 94% number included idle threads. But now I drilled down under only the requests and got 17% blocked, 9% network and 5% CPU.
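
(For anyone following along at home, the anti-pattern in question looks roughly like this - the class and method names are made-up placeholders, just a sketch of blocking vs. non-blocking:)
code:
using System.Net.Http;
using System.Threading.Tasks;

public class ExampleClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Sync-over-async: .Result blocks a thread-pool thread for the whole call and,
    // under an ASP.NET synchronization context, is a classic deadlock recipe.
    public string GetBlocking(string url)
    {
        return Http.GetStringAsync(url).Result;
    }

    // Async all the way: the thread goes back to the pool while the IO is in flight.
    public async Task<string> GetNonBlocking(string url)
    {
        return await Http.GetStringAsync(url).ConfigureAwait(false);
    }
}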

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".
If you're worried about garbage collection being one of the issues, there were a lot of performance improvements to it in 4.5 that they/you are missing out on. It would most likely look like sporadic performance numbers, as the old GC used to "stop the world" while it ran some of its routines.
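
(If it helps, a quick sketch of how you could check which GC flavor the process is actually running with - assuming you can drop a log line or diagnostic endpoint into the service somewhere:)
code:
using System;
using System.Runtime;

// Sketch: report which GC mode the runtime picked, plus collection counts so far.
// Background/server GC (improved around .NET 4.5) pauses much less than the old
// "stop the world" behavior mentioned above.
static class GcInfo
{
    public static string Describe() =>
        $"ServerGC={GCSettings.IsServerGC}, LatencyMode={GCSettings.LatencyMode}, " +
        $"Gen0={GC.CollectionCount(0)}, Gen1={GC.CollectionCount(1)}, Gen2={GC.CollectionCount(2)}";
}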

epswing
Nov 4, 2003

Soiled Meat
Y'know what really grinds my gears about async/await. All the garbage that now fills my log files, making what used to be a simple stacktrace harder to read.

The exception stacktrace used to look like this:
code:
Client.exe Error: 0 : [2021-05-31 10:49:50 SyncManager] HttpRequestException: An error occurred while sending the request.
WebException: Unable to connect to the remote server
SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:54644
   at ProgramAspClient.Api.Request.GetRequest() in C:\path\to\ProgramAspClient\Api\Request.cs:line 32
   at ProgramAspClient.Managers.SyncManager.GetAsync() in C:\path\to\ProgramAspClient\Managers\SyncManager.cs:line 24
   at ProgramAspClient.Managers.BaseManager.ExecuteRequest() in C:\path\to\ProgramAspClient\Managers\BaseManagers.cs:line 49
But now it looks like this:
code:
Client.exe Error: 0 : [2021-05-31 10:49:50 SyncManager] HttpRequestException: An error occurred while sending the request.
WebException: Unable to connect to the remote server
SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:54644
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
   at ProgramAspClient.Api.Request.<GetRequest>d__4`1.MoveNext() in C:\path\to\ProgramAspClient\Api\Request.cs:line 32
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at ProgramAspClient.Managers.SyncManager.<>c__DisplayClass3_0.<<GetAsync>b__0>d.MoveNext() in C:\path\to\ProgramAspClient\Managers\SyncManager.cs:line 24
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
   at ProgramAspClient.Managers.BaseManager.<ExecuteRequest>d__10`1.MoveNext() in C:\path\to\ProgramAspClient\Managers\BaseManagers.cs:line 49
I guess maybe I can't have my cake and eat it too but, drat, y'know?

distortion park
Apr 25, 2011


At least you actually get the full stack trace in there, unlike some other implementations *cough* nodejs *cough*

LOOK I AM A TURTLE
May 22, 2003

"I'm actually a tortoise."
Grimey Drawer

epswing posted:

Y'know what really grinds my gears about async/await. All the garbage that now fills my log files, making what used to be a simple stacktrace harder to read.

The exception stacktrace used to look like this:
code:
But now it looks like this:
code:
I guess maybe I can't have my cake and eat it too but, drat, y'know?

Get yourself a Ben.Demystifier.
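
(In case anyone hasn't seen it, a minimal sketch of what that looks like - assuming the Ben.Demystifier NuGet package, whose extension methods live in the System.Diagnostics namespace if I remember right:)
code:
using System;
using System.Diagnostics; // Ben.Demystifier extension methods
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        try
        {
            using (var client = new HttpClient())
                await client.GetStringAsync("http://127.0.0.1:54644/"); // nothing listening here
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());            // the TaskAwaiter/MoveNext soup
            Console.WriteLine(ex.ToStringDemystified()); // async frames rewritten into readable names
        }
    }
}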

zokie
Feb 13, 2006

Out of many, Sweden
Can anyone think of a good reason to target .NET Framework 4 in 2021? Or even in 2017?

Also, does anyone have any idea of how frequent generation 2 collections can be and still be "OK"?

I mean, the code here is terrible and ugh. But it has been "working" for a long time, so it would be nice to have more hard data about GC and thread starvation.

Ever since I learned React in 2015 it's all I've been asked to do, so I don't feel as confident in C# anymore.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt


Careful if you're using Serilog: for some reason v0.4 of this library causes a stack overflow when an unhandled exception is logged. If that happens to you, pinning to a version < 0.4 works fine.

raminasi
Jan 25, 2005

a last drink with no ice

zokie posted:

Can anyone think of a good reason to target .NET Framework 4 in 2021? Or even in 2017?

Also, does anyone have any idea of how frequent generation 2 collections can be and still be "OK"?

I mean, the code here is terrible and ugh. But it has been "working" for a long time, so it would be nice to have more hard data about GC and thread starvation.

Ever since I learned React in 2015 it's all I've been asked to do, so I don't feel as confident in C# anymore.

You aren’t running Datadog on the box, are you? .NET registers a bunch of useful performance counters for this stuff that the .NET Datadog integration picks up automatically so you can get some really helpful graphs basically for free.
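
(If Datadog isn't an option, you can also poke at the same counters by hand; a rough sketch using the standard .NET CLR Memory category - the "w3wp" instance name is an assumption, and it may show up as "w3wp#1" etc. if there are multiple app pools:)
code:
using System;
using System.Diagnostics;
using System.Threading;

// Rough sketch: sample the built-in .NET GC performance counters for a process.
class GcCounterSample
{
    static void Main()
    {
        const string instance = "w3wp"; // assumed IIS worker process instance name
        using (var gen2 = new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", instance))
        using (var timeInGc = new PerformanceCounter(".NET CLR Memory", "% Time in GC", instance))
        {
            gen2.NextValue(); timeInGc.NextValue(); // first read primes the counters
            Thread.Sleep(5000);
            Console.WriteLine($"Gen 2 collections: {gen2.NextValue()}, % time in GC: {timeInGc.NextValue():F1}");
        }
    }
}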

zokie
Feb 13, 2006

Out of many, Sweden
I've gotten good data with PerfView, it's just that I have no idea whether a Gen 2 GC every 5 minutes is OK or what.

I mean, gently caress them for reading request/response bodies into memory, and in a blocking manner at that. But things are mostly "working" for a given value of "working"…

Same with AppPool threads: is it OK to see an Adjustment because of starvation here and there? How about the Climb thingy? There are plenty of resources on how to get the data, but what about after that?

EssOEss
Oct 23, 2006
128-bit approved
Those are $50,000 questions you are asking there. Don't expect meaningful answers to be available offhand. Everything is relative, especially if we are talking about "fixing bad performance" rather than starting from scratch to design something awesome.

When investigating a pile of dung, you'll find dung everywhere. What you need is to find the changes that provide measurable benefits. Do you have the capability to actually make changes and try a modified version? Find some hotspots, make a change, measure the difference. If you lack that capability, try to get buy-in to perform 3-5 experiments you can think of that might improve matters. If you succeed with at least some of the initial batch, you might get the support to drive it further.

raminasi
Jan 25, 2005

a last drink with no ice

zokie posted:

I've gotten good data with PerfView, it's just that I have no idea whether a Gen 2 GC every 5 minutes is OK or what.

I mean, gently caress them for reading request/response bodies into memory, and in a blocking manner at that. But things are mostly "working" for a given value of "working"…

Same with AppPool threads: is it OK to see an Adjustment because of starvation here and there? How about the Climb thingy? There are plenty of resources on how to get the data, but what about after that?

It's this:

EssOEss posted:

Those are $50,000 questions you are asking there. Don't expect meaningful answers to be available offhand. Everything is relative, especially if we are talking about "fixing bad performance" rather than starting from scratch to design something awesome.

When investigating a pile of dung, you'll find dung everywhere. What you need is to find the changes that provide measurable benefits. Do you have the capability to actually make changes and try a modified version? Find some hotspots, make a change, measure the difference. If you lack that capability, try to get buy-in to perform 3-5 experiments you can think of that might improve matters. If you succeed with at least some of the initial batch, you might get the support to drive it further.

I know it's frustrating, but there aren't really objectively right answers here. My last job was doing multiple Gen 2 collections per second. Were we happy with it? Absolutely not. Was it so bad we couldn't run a business off it? Almost, but not quite. Did we have the same workload profile and performance requirements that you do? Almost definitely not.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

EssOEss posted:

Those are $50,000 questions you are asking there. Don't expect meaningful answers to be available offhand. Everything is relative, especially if we are talking about "fixing bad performance" rather than starting from scratch to design something awesome.

When investigating a pile of dung, you'll find dung everywhere. What you need is to find the changes that provide measurable benefits. Do you have the capability to actually make changes and try a modified version? Find some hotspots, make a change, measure the difference. If you lack that capability, try to get buy-in to perform 3-5 experiments you can think of that might improve matters. If you succeed with at least some of the initial batch, you might get the support to drive it further.

It also depends on performance requirements. Readily fixable problems tend to be on the order of "we see a seven-second pause more than once a minute due to garbage collection" - that's usually the result of doing something really bad, or an obvious bug. If the problem is a one-second pause every few minutes or so, that's when you start thinking about buffer pooling / making everything structs / avoiding gen 2 allocations - an expensive refactoring, but feasible. If the problem is that no GC delays are ever acceptable in any situation (but somehow were OK during the bring-up of the project), you're probably looking at rearchitecting whatever has the problem - you're only going to find a bunch of tweaks that might make it better, not a silver bullet.
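
(To illustrate the buffer-pooling idea: a rough sketch with System.Buffers' ArrayPool. The request-logging shape here is hypothetical, but the point is that renting and returning buffers keeps big, short-lived byte[] allocations off the LOH:)
code:
using System.Buffers;
using System.IO;
using System.Threading.Tasks;

// Rough sketch of buffer pooling: instead of allocating a fresh byte[] per request
// (which lands on the LOH once it's over ~85 KB and feeds Gen 2 pressure),
// rent a buffer from the shared pool and give it back when done.
static class BodyLogger
{
    public static async Task CopyForLoggingAsync(Stream body, Stream logSink)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(81920);
        try
        {
            int read;
            while ((read = await body.ReadAsync(buffer, 0, buffer.Length)) > 0)
                await logSink.WriteAsync(buffer, 0, read);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}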

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".
GC talk always reminds me of this article:

null garbage collector

It turns out that a null garbage collector implementation is valid. You just have to have a situation where your app only ever needs to hit the gas and never the brake, lol

zokie
Feb 13, 2006

Out of many, Sweden
Thanks for the replies everyone, I kinda realized after writing that last post that the answer is "it depends". Luckily I won't need to use that data, since in a single day I migrated their solution to .NET 4.8, when they had said they would spend the next sprint on a spike to see if it was even feasible...

So now they can spend that time removing blocking IO instead

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

namlosh posted:

GC talk always reminds me of this article:

null garbage collector

It turns out that a null garbage collector implementation is valid. You just have to have a situation where your app only ever needs to hit the gas and never the brake, lol

that approach isn't as ridiculous as it might sound - you can get deterministic performance just by starting a new process with null gc that does expensive operation x, and killing the process whenever it's done doing whatever it's doing (mitigating the memory use). but unfortunately, one of the reasons why you might have long running processes is to mitigate startup time...

namlosh
Feb 11, 2014

I name this haircut "The Sad Rhino".

Bruegels Fuckbooks posted:

that approach isn't as ridiculous as it might sound - you can get deterministic performance just by starting a new process with null gc that does expensive operation x, and killing the process whenever it's done doing whatever it's doing (mitigating the memory use). but unfortunately, one of the reasons why you might have long running processes is to mitigate startup time...

Or you’re writing software for a missile :)

I’ve done some tricks with the GC before, mostly because the app was latency sensitive during a specific timeframe… but for no GC, I agree that you’re probably looking at a short lived process
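
(Those tricks are roughly what GCLatencyMode and the no-GC-region APIs are for; a hedged sketch of the shape, assuming the latency-critical window is short and its allocations fit a known budget:)
code:
using System;
using System.Runtime;

// Sketch: suppress GC during a short latency-critical window. TryStartNoGCRegion asks
// the runtime to pre-commit enough memory so no collection happens until EndNoGCRegion
// (or until the allocation budget is blown, at which point the region ends on its own).
static class LatencyWindow
{
    public static void Run(Action criticalWork)
    {
        bool noGc = GC.TryStartNoGCRegion(64 * 1024 * 1024); // 64 MB budget - an assumption
        try
        {
            criticalWork();
        }
        finally
        {
            if (noGc && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();
        }
    }
}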

adaz
Mar 7, 2009

As most folks are saying, performance is pretty complicated, but tools like DataDog, New Relic, and Dynatrace can help you diagnose at least which methods are taking the most time. Diagnosing why those methods take so long is then your problem. My initial inclination is that GC likely isn't the problem; just based on what you've described, the use of all the .Results and blocking IO is 100% going to be worse. Even in extreme situations, like millions of objects being created and destroyed, the garbage collector is usually able to handle it like a champ. You also mentioned them reading the request/response bodies into memory, and those are mostly strings, so that does SOMEWHAT help relieve the pressure. I'd also suggest trying the JetBrains suite, like dotTrace and dotMemory, which should help you diagnose whether garbage collection is really an issue.

But realistically, if this is, as you say, a bog-standard (badly) programmed .NET 4 app running on IIS, the bottleneck is probably the async code running synchronously. Ultimately, that app can easily run out of threads that can service requests, because every sync-over-async request blocks a thread; eventually you're going to run out, or you're just going to deadlock all over the place, because it's really, really tricky not to deadlock when using .Result.
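
(A cheap way to watch for that starvation without firing up PerfView is to just sample the thread pool; a rough sketch - the one-second interval is arbitrary:)
code:
using System;
using System.Threading;

// Rough sketch: periodically sample thread pool headroom. If "available" worker
// threads keep trending toward zero while requests queue up, you're starving.
class ThreadPoolSampler
{
    static void Main()
    {
        while (true)
        {
            ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
            ThreadPool.GetAvailableThreads(out int freeWorkers, out int freeIo);
            Console.WriteLine(
                $"busy workers: {maxWorkers - freeWorkers}/{maxWorkers}, " +
                $"busy IO: {maxIo - freeIo}/{maxIo}");
            Thread.Sleep(1000);
        }
    }
}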

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



I've got a weird build issue (I think) that I'm not even sure where I'd send a bug report for. I'm writing an Azure Function targeting .NET Core 3.1 and using the NuGet package Microsoft.Extensions.Configuration in a project that's a dependency of a dependency of the main project, so Func -> Data Mangling -> DB Touching (used here). When I use the latest 3.x version of that package, the function runs fine and works, but when I use the 5.x version it throws a runtime error because it can't find Microsoft.Extensions.Configuration.Abstractions.

Using Procmon I determined that, when using the 3.x version, it loads the assembly from AppData\Local\AzureFunctionsTools\Releases\3.28.0\cli_x64\ and, like I said, everything works fine.

Using the 5.x versions, it tries to load it from <project>\bin\Debug\netcoreapp3.1\bin, where func.exe is, which makes sense; the DLL isn't there, so it also makes sense that it throws the error. What's not making sense is that the DLL is sitting in <project>\bin\Debug\netcoreapp3.1 but isn't copied into the \bin directory. Or maybe it's not supposed to be - I'm not sure, because this is my second Core project and the first one was too simple and limited in scope to break up into layers.

A couple other packages I'm using in the Function also depend on that one, namely Microsoft.Extensions.Diagnostics.HealthChecks and AspNetCore.HealthChecks.SqlServer which I also had to downgrade so that nothing would try to load the 5.x version of Microsoft.Extensions.Configuration. I'd prefer to stick with current versions of everything and I'm also just not sure at this point that whatever we package to deploy will work. I know I can just check that by deploying it, but it should still run locally.

I should note that adding the Microsoft.Extensions.Configuration.Abstractions package to the Function's project didn't fix the issue, which isn't too surprising because that didn't guarantee an assembly would be copied in regular Framework, either. I know one time I had to add a load-bearing using to make it copy a dependency to the bin directory years ago but the Function is actually using Microsoft.Extensions.Configuration and that's not copied to netcoreapp3.1/bin so I doubt that'll work.

So what do I do here? Is there some binding directive I have to just know to add to my project to get the build system to work?
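
(Not an answer, but when I've been stuck on "which copy of this DLL actually loaded" questions, a throwaway hook like this at startup has helped; purely a diagnostic sketch, assuming the AssemblyLoad event is available on your runtime:)
code:
using System;

// Diagnostic sketch: log every assembly as it loads, with the path it was resolved
// from, to see which copy of Microsoft.Extensions.Configuration.* actually wins.
static class AssemblyLoadLogger
{
    public static void Hook()
    {
        AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
            Console.WriteLine($"{args.LoadedAssembly.FullName} <- {args.LoadedAssembly.Location}");
    }
}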

Mr. Angry
Jan 20, 2012
I suspect that's because the 5.x version isn't compatible with .NET Core 3.1.

I ran into a similar problem last year with packages not working in a .NET Core project and found this open issue on GitHub. It looks like the 5.x and 3.1.x versions of Microsoft's extensions are compatible only with their respective frameworks but for some reason MS have set both versions to target .NET Standard, so there's no way for NuGet to distinguish between the two versions :psyduck:

You've already said the 3.1.x version of Microsoft.Extensions.Configuration works so stick with that I guess. It does seem to be updated regularly and I assume it's supported as part of .NET Core 3.1 but I can't find any confirmation to that effect.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



The 5.x version targets NetStandard 2 and I can load it in LINQPad running 3.1 🤷‍♂️

No Pants
Dec 10, 2000

You usually only find out that they're incompatible at runtime, when your poo poo breaks because a method is missing somewhere.

Small White Dragon
Nov 23, 2007

No relation.

zokie posted:

Can anyone think of a good reason to target .NET Framework 4 in 2021? Or even in 2017?

Do you mean just 4.0 or 4.x?

We're looking at .NET for various kinds of cross-platform projects and there are a number of places where .NET 5 has weird problems.

Example:
https://github.com/FNA-XNA/FNA/wiki/0:-FAQ#i-have-a-bug-when-running-on-net-core-and-

mystes
May 31, 2006

Small White Dragon posted:

Do you mean just 4.0 or 4.x?

We're looking at .NET for various kinds of cross-platform projects and there are a number of places where .NET 5 has weird problems.

Example:
https://github.com/FNA-XNA/FNA/wiki/0:-FAQ#i-have-a-bug-when-running-on-net-core-and-
The "bug" being that .net 5 doesn't support a mono specific extension that was never supported by .net framework or .net core?

Small White Dragon
Nov 23, 2007

No relation.

mystes posted:

The "bug" being that .net 5 doesn't support a mono specific extension that was never supported by .net framework or .net core?

I don't know all the details but from talking to folks involved in cross platform projects, not just games, there are apparently a number of cases where the answer is "use 4.x."

I assume most folks here only care about Windows, in which case none of that may apply.

WorkerThread
Feb 15, 2012

Small White Dragon posted:

Do you mean just 4.0 or 4.x?

We're looking at .NET for various kinds of cross-platform projects and there are a number of places where .NET 5 has weird problems.

Example:
https://github.com/FNA-XNA/FNA/wiki/0:-FAQ#i-have-a-bug-when-running-on-net-core-and-

I love confrontational open source devs.

Xik
Mar 10, 2011

Dinosaur Gum
FNA is catering for a pretty specific niche of fairly strict XNA compatibility with cleaner cross platform support.

WorkerThread posted:

I love confrontational open source devs.

Dude made a living porting XNA games to Linux and Mac and then FOSS'd his work to help other indie game devs when he could have easily just not.

mystes
May 31, 2006

Small White Dragon posted:

I don't know all the details but from talking to folks involved in cross platform projects, not just games, there are apparently a number of cases where the answer is "use 4.x."

I assume most folks here only care about Windows, in which case none of that may apply.
The way you're framing this is kind of weird.

.Net framework is windows only so the idea that .net framework is better for cross platform stuff is... not correct.

.net core/.net 5 is actually cross platform, and contrary to what you're implying for some reason, a lot of people in this thread specifically like it for this reason, although admittedly most of those people are using it more for server stuff.

The problem is that you're not distinguishing between .net framework and mono. Existing games aren't using .net framework for cross platform support, they're using mono for cross platform support. .Net 5 has really good compatibility with .net framework but it may be harder to upgrade to for software that is written specifically for mono.

I am not denying that this is an actual issue that may kind of suck, but software that relies on mono features like fna-xna is probably going to need to be updated to support .net 5, which will probably be a pain. However, that doesn't mean that .net 5 has bad cross platform support, which again is a silly thing to say because it actually is cross platform unlike .net framework which has zero cross platform support.

Drastic Actions
Apr 7, 2009

FUCK YOU!
GET PUMPED!
Nap Ghost
A point of .NET 5 and 6 has been to coalesce the runtimes into one set of tooling. Depending on the scenario, you may be using CoreCLR or Mono. That's how mobile works with .NET 6: the Xamarin SDKs got ported to workloads that target the new tooling, but it's still Mono running underneath as the runtime. Same with WASM support - Mono has it and I don't think CoreCLR does. And if you really want to target a specific runtime yourself, you can.

Those running on FNA could probably use .NET 5 or 6 and target the Mono runtime; it should probably work.

The point here, though, is that unless you already know of things you want to use in .NET Framework that haven't been ported to .NET Core / .NET 5+, you should just target .NET 5+ and not think about it.

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with

Grimey Drawer

Drastic Actions posted:

The point here, though, is that unless you already know of things you want to use in .NET Framework that haven't been ported to .NET Core / .NET 5+, you should just target .NET 5+ and not think about it.

My Take: If you are building desktop apps that you know are only going to be targeting Windows, probably use Framework.

Edit: I looked into some of the concerns I thought I had about functionality that would be gained or lost in .NET 5 vs Framework, and they seem to be either incorrect or unfounded. I retract the above and label it a 'bad take'.

Canine Blues Arooo fucked around with this message at 19:51 on Jun 4, 2021

mystes
May 31, 2006

Canine Blues Arooo posted:

My Take: If you are building desktop apps that you know are only going to be targeting Windows, probably use Framework.
It doesn't make sense to make new software that only works on Framework now. If you want to actually distribute a version that targets Framework because you feel like that's still easier, that's reasonable enough, but since Framework is dead you should probably write it in such a way that it also runs on .NET 5 and higher, so you can switch over whenever you want.

Small White Dragon
Nov 23, 2007

No relation.

mystes posted:

The way you're framing this is kind of weird.

Sorry if my terminology is wrong.

WorkerThread
Feb 15, 2012

I'd think pretty hard before doing greenfield development on the full Framework at this point. Unless you're using some extremely esoteric tech, I just don't see a good reason to.

brap
Aug 23, 2004

Grimey Drawer
Can't speak for cross-platform game dev, but for most Windows apps, cross-platform CLIs, or server apps, .NET 5+ is the way to go. The more time goes by, the more Framework will be left behind on language and runtime features, e.g. full support for nullable reference types, default interface methods, abstract static interface methods, etc. Not to mention innumerable performance improvements.
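
(A tiny sketch of two of those features, both of which want the newer toolchain - the names here are made up:)
code:
#nullable enable
using System;

// Default interface methods need runtime support that .NET Framework never got;
// nullable reference types are a C# 8+ feature that only gets full annotation
// support on the newer target frameworks.
interface IGreeter
{
    string Name { get; }
    string Greet() => $"Hello, {Name}!"; // implementation lives on the interface itself
}

class ConsoleGreeter : IGreeter
{
    public string Name { get; }

    public ConsoleGreeter(string? name) => Name = name ?? "stranger"; // compiler tracks the null
}

class Demo
{
    static void Main() => Console.WriteLine(((IGreeter)new ConsoleGreeter(null)).Greet());
}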

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm new to Azure and trying to figure out what kind of storage I should use. I've got an event-triggered function putting telemetry data in JSON format into storage, and another timer-triggered function that every minute grabs all that data and sends it to an external REST API. Blob storage seems like overkill, since I think I have to make a request for each entry, and if the first function fills up the storage faster than the second one drains it, I'll never get all of it packaged and sent.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Boz0r posted:

I'm new to Azure and trying to figure out what kind of storage I should use. I've got an event-triggered function putting telemetry data in JSON format into storage, and another timer-triggered function that every minute grabs all that data and sends it to an external REST API. Blob storage seems like overkill, since I think I have to make a request for each entry, and if the first function fills up the storage faster than the second one drains it, I'll never get all of it packaged and sent.

Why trigger on a timer? It seems like you'd want to use a queue and skip the blob storage bit entirely.
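
(Something like this, if I understand the suggestion; the names, the queue binding, and the per-message behavior are illustrative assumptions rather than a drop-in answer:)
code:
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Illustrative sketch: the producer writes each telemetry record to a storage queue,
// and this function fires per message instead of sweeping storage on a timer.
public static class TelemetryForwarder
{
    private static readonly HttpClient Http = new HttpClient();

    [FunctionName("TelemetryForwarder")]
    public static async Task Run(
        [QueueTrigger("telemetry")] string telemetryJson, // "telemetry" queue name is an assumption
        ILogger log)
    {
        var response = await Http.PostAsync(
            "https://example.invalid/ingest", // placeholder for the external REST API
            new StringContent(telemetryJson, Encoding.UTF8, "application/json"));

        log.LogInformation("Forwarded telemetry, status {Status}", response.StatusCode);
        response.EnsureSuccessStatusCode(); // let the Functions runtime retry/poison on failure
    }
}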

distortion park
Apr 25, 2011


The discussion about GC pauses above makes me wonder if anyone's ever made/publicised a GC aware load balancer. If you had a fairly predictable resource usage profile and relatively short lived requests, it might be possible for the lb to predict when an instance wants to do a GC, stop sending it requests, tell it to GC, then start again.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

pointsofdata posted:

The discussion about GC pauses above makes me wonder if anyone's ever made/publicised a GC aware load balancer. If you had a fairly predictable resource usage profile and relatively short lived requests, it might be possible for the lb to predict when an instance wants to do a GC, stop sending it requests, tell it to GC, then start again.

I saw an academic paper about predicting gc pauses in java, but I have never seen it as a feature in a load balancing product.

I've also seen a solution where the load balancer has a rest api. When a web service gets a notification of a pending gen2 gc, it can call that api to take itself out of the pool until the gen2 collection is completed, and then put itself back in the pool when it's done.
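
(The "notification of a pending gen2 gc" part does exist as a CLR API; a hedged sketch of the shape, with the load-balancer calls stubbed out since that REST API is hypothetical. Note the notification API requires concurrent GC to be disabled:)
code:
using System;
using System.Threading;

// Sketch: use the CLR's full-GC notification API to drain/rejoin a load balancer
// around Gen 2 collections. The NotifyLoadBalancer* methods are placeholders for
// whatever the LB actually exposes.
static class GcDrainLoop
{
    public static void Run(CancellationToken token)
    {
        GC.RegisterForFullGCNotification(10, 10); // generation / LOH thresholds (1-99)

        while (!token.IsCancellationRequested)
        {
            if (GC.WaitForFullGCApproach(1000) == GCNotificationStatus.Succeeded)
            {
                NotifyLoadBalancerOutOfPool();  // placeholder
                GC.WaitForFullGCComplete();
                NotifyLoadBalancerBackInPool(); // placeholder
            }
        }

        GC.CancelFullGCNotification();
    }

    static void NotifyLoadBalancerOutOfPool() { /* call the LB's REST API here */ }
    static void NotifyLoadBalancerBackInPool() { /* call the LB's REST API here */ }
}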

LongSack
Jan 17, 2003

Question about MSTest and test order.

I have a series of tests for a registration service that must run in sequence, as prior tests set up conditions in the service for following tests. According to this post, the tests should run in alphabetical order, so I named the tests Test01_.... through Test06_..... They are not, however, executing in alphabetical order.

I've added them to a playlist to work around this, and that works, but it seems like it should be unnecessary given the link above. Are other test frameworks (e.g., NUnit or xUnit) better at this? TIA
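
(For what it's worth, NUnit at least has an explicit ordering attribute, so you don't have to lean on naming conventions; a minimal sketch - the class and method names are made up, and whether ordered tests are a good idea at all is a separate argument:)
code:
using NUnit.Framework;

// Minimal sketch: NUnit runs [Order]-attributed tests within a fixture in ascending
// order. Shared state between the steps is the usual caveat.
[TestFixture]
public class RegistrationServiceTests
{
    [Test, Order(1)]
    public void RegisterUser_Succeeds() { /* ... */ }

    [Test, Order(2)]
    public void DuplicateRegistration_IsRejected() { /* ... */ }

    [Test, Order(3)]
    public void RegisteredUser_CanLogIn() { /* ... */ }
}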
