fankey
Aug 31, 2001

mystes posted:

On Windows can't you just intercept the url from the events on the webview when it tries to redirect so you don't actually need it to listen for it in some manner? I haven't actually tried this though, so I could be wrong.

If I used an embedded webview, yes, but according to the draft, embedding the webview is not recommended:

quote:

The best current practice for authorizing users in native apps is to perform the OAuth authorization request in an external user-agent (typically the browser), rather than an embedded user-agent (such as one implemented with web-views).
I don't think embedding the webview would allow use of browser password saving / password managers, which might be one of the reasons they don't recommend doing it that way.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Embedding a webview is bad because at that point the user is ultimately typing their third-party credentials into your application and just trusting you to not misuse them.

You shouldn't have any firewall issues if you make sure you're only listening on the loopback port.
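
A minimal sketch of the loopback half, assuming an HttpListener-based flow - the port and callback path are made up, you'd register the same redirect URI with the provider, and the authorization URL gets opened in the system browser while this is waiting:
code:
using System;
using System.Net;
using System.Text;

// Loopback-only prefix: nothing is exposed to the network, so no firewall prompt is expected.
const string redirectUri = "http://127.0.0.1:51772/callback/";

using var listener = new HttpListener();
listener.Prefixes.Add(redirectUri);
listener.Start();

// The provider redirects the browser back here with ?code=... in the query string.
var context = await listener.GetContextAsync();
var code = context.Request.QueryString["code"];

// Give the user something to look at, then shut the listener down.
var body = Encoding.UTF8.GetBytes("You can close this tab now.");
context.Response.ContentLength64 = body.Length;
await context.Response.OutputStream.WriteAsync(body, 0, body.Length);
context.Response.Close();
listener.Stop();

Console.WriteLine($"Authorization code: {code}");
// From here you exchange the code for tokens at the provider's token endpoint.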

NoDamage
Dec 2, 2000
I don't think it's particularly common for a software firewall to block localhost connections to non-privileged ports on the same machine. If that were an issue, I expect it would have been considered by the OAuth design team.

And yeah, modern best practice is to open the auth request in the user's default browser. For all the reasons already mentioned, plus if they're already cookied they won't need to login again.
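
(Launching the default browser from .NET is basically one call - sketch below, URL is obviously a placeholder.)
code:
using System.Diagnostics;

// UseShellExecute = true hands the URL to the OS shell, which opens it in the default browser.
// On .NET Core / .NET 5+ the flag is required; a bare Process.Start("https://...") only worked on Framework.
Process.Start(new ProcessStartInfo
{
    FileName = "https://example.com/oauth/authorize?client_id=...&redirect_uri=...",
    UseShellExecute = true
});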

Whybird
Aug 2, 2009

Phaiston have long avoided the tightly competitive defence sector, but the IRDA Act 2052 has given us the freedom we need to bring out something really special.

https://team-robostar.itch.io/robostar


Nap Ghost
Hello! I hope this is the right place to ask general C# programming questions, because I am going to ask one here. I'm developing for Unity, but the question I'm asking isn't anything to do with Unity itself, just how to be object-oriented in C#.

I'm in a situation which is kind of like the below (but not completely analogous): I have a WorldState which a lot of code will want to read properties of, and which a few select pieces of code will be able to change. I want to be confident that no code without permission edits the WorldState, so I've created an interface which only includes the read-only properties, like so:

code:
public class WorldState : IWorldStateReader {
   int numberOfTurns;
   public int NumberOfTurns { get { return numberOfTurns; } }
   List<Monster> monsters;
   public List<Monster> Monsters { get { return monsters; } }
}

public interface IWorldStateReader {
   int NumberOfTurns { get; }
   List<Monster> Monsters { get; }
}
This works great for the NumberOfTurns property, but my understanding is that the Monsters property will return a list of references to any Monsters I've instantiated, which a piece of code using the IWorldStateReader would still be able to freely alter.

The only solutions to this I can think of are
- ensure that anything IWorldStateReader can see is a struct or otherwise value-type variable
- create a "reader" interface for any class that IWorldStateReader can access, and have it access that instead

Neither of these feels like a great way to do things, which makes me think I've designed something wrong at some point. Are there any other, better options available to me?

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

Whybird posted:

- create a "reader" interface for any class that IWorldStateReader can access, and have it access that instead.

You can already do this for free with pretty much all of the built-in collections: List<T> implements IEnumerable<T>, so if you make that the return type, that's what the calling code will see.

The caller of course can still cast to a List<T> and modify things but the point of encapsulation isn’t really to make it impossible to access private data, more to encourage loose coupling.
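
To stick with the earlier example, a sketch of what I mean:
code:
using System.Collections.Generic;

public interface IWorldStateReader
{
    int NumberOfTurns { get; }
    IEnumerable<Monster> Monsters { get; }
}

public class WorldState : IWorldStateReader
{
    private readonly List<Monster> monsters = new List<Monster>();

    public int NumberOfTurns { get; private set; }

    // Through the interface, callers only see IEnumerable<Monster>: no Add/Remove/indexer
    // unless they deliberately cast it back to List<Monster>.
    public IEnumerable<Monster> Monsters { get { return monsters; } }
}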

Whybird
Aug 2, 2009

Phaiston have long avoided the tightly competitive defence sector, but the IRDA Act 2052 has given us the freedom we need to bring out something really special.

https://team-robostar.itch.io/robostar


Nap Ghost

Eggnogium posted:

You can already do this for free with pretty much all of the built-in collections: List<T> implements IEnumerable<T>, so if you make that the return type, that's what the calling code will see.

The caller of course can still cast to a List<T> and modify things but the point of encapsulation isn’t really to make it impossible to access private data, more to encourage loose coupling.

I think maybe I'm misunderstanding what you mean there? My talk of creating reader interfaces meant that I'd, for example, have my Monster class implement some sort of IMonsterReader interface which gives read-only access to the bits of the Monster class I want a class with an IWorldStateReader to be able to read, and have IWorldStateReader expose a List<IMonsterReader> instead of a List<Monster>.
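
To make that concrete, something like this is what I have in mind (the member names are just examples, and I've used IEnumerable<IMonsterReader> rather than List<IMonsterReader> since IEnumerable<T> is covariant, so the backing List<Monster> can be returned as-is):
code:
using System.Collections.Generic;

public interface IMonsterReader
{
    // Only the bits I'm happy for read-only code to see.
    int HitPoints { get; }
    string Name { get; }
}

public class Monster : IMonsterReader
{
    public int HitPoints { get; private set; }
    public string Name { get; private set; }
    // Mutators live here and are only reachable by code holding an actual Monster.
}

public interface IWorldStateReader
{
    int NumberOfTurns { get; }
    IEnumerable<IMonsterReader> Monsters { get; }
}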

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Your other options are:

- Defensive copying, where your read-only accessors make a deep copy of anything mutable and return that
- Immutability, where the objects you're storing in your state cannot ever change, and anything that wants to change the state does so by creating new objects and replacing the existing ones
- Annotating the interface return values to suggest that they shouldn't be changed, and writing an annotation processor that fails your build if it detects someone breaking the annotation contract

They're all some combination of "a lot of work" and "not so hot for performance". I'm not sure if anyone has already built an annotation system you could use, which would make that a pretty palatable option, but I suspect not.
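
To illustrate the first one, the defensive-copy version looks roughly like this (a sketch - it assumes a Monster.Clone method that you'd have to write yourself):
code:
using System.Collections.Generic;
using System.Linq;

public class WorldState : IWorldStateReader
{
    private readonly List<Monster> monsters = new List<Monster>();

    public int NumberOfTurns { get; private set; }

    // Hands out deep copies, so callers can do whatever they like to the result
    // without touching the real state. Correct, but every read allocates a fresh
    // list and a fresh set of Monsters.
    public List<Monster> Monsters
    {
        get { return monsters.Select(m => m.Clone()).ToList(); }
    }
}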

What you really want is something like C++'s const keyword, which unfortunately doesn't exist in C#. (C#'s const means something different.)

Having read-only interfaces for everything is honestly not the worst option.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

Whybird posted:

I think maybe I'm misunderstanding what you mean there? My talk of creating reader interfaces meant that I'd, for example, have my Monster class implement some sort of IMonsterReader interface which gives read-only access to the bits of the Monster class I want a class with an IWorldStateReader to be able to read, and have IWorldStateReader expose a List<IMonsterReader> instead of a List<Monster>.

Oh yes, to prevent callers setting properties on your Monster you’d have to make a read-only interface. My suggestion was to address the fact that unless your interface’s return type is IEnumerable<T>, the callers will be able to call Add, Remove, etc. on the List to change its contents.

You could also split things across assemblies and try using the internal keyword, though I don’t think you can make a property setter internal and the getter not, so you’d have to move to getter/setter methods instead of properties.

distortion park
Apr 25, 2011


Just make read only interfaces and don't tell people about the underlying class, that's good enough for almost all use cases.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Public get / private set is extremely common, I'd be very surprised if it didn't work with internal as well.
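
A quick sketch of what I mean:
code:
public class WorldState
{
    // Readable from anywhere, writable only by code in the same assembly as WorldState.
    public int NumberOfTurns { get; internal set; }
}
So if the game-state code lives in its own assembly, consumers elsewhere simply can't reach the setter.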

Failing that, Visual Studio has a refactor command that will automatically extract interfaces from any class for you; then you need only delete the setters and replace lists / arrays with IEnumerable (or better yet IReadOnlyList, but I don't think Unity supports that).

SirViver
Oct 22, 2008
You can use IReadOnlyList<Monster> and return (an ideally cached instance of) monsters.AsReadOnly() to prevent your list from being modified. Of course that would still leave you with the Monster instances being mutable, so you'd have to create an IReadOnlyMonster interface to expose instead, and repeat that for basically any other reference type you expose. However, as mentioned, that doesn't actually prevent anyone from modifying the state if they really want to.
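
Roughly like this for the list part (sketch - the reader interface would then declare the property as IReadOnlyList<Monster> to match):
code:
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class WorldState
{
    private readonly List<Monster> monsters = new List<Monster>();
    private ReadOnlyCollection<Monster> monstersReadOnly;

    // Cache the wrapper so repeated reads don't allocate. It's only a view over the
    // list, so it automatically reflects any later changes to the underlying data.
    public IReadOnlyList<Monster> Monsters
    {
        get { return monstersReadOnly ?? (monstersReadOnly = monsters.AsReadOnly()); }
    }
}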

Since this is a game I'd also be somewhat wary of any overly enterprisey solutions. In general you'd probably want to avoid anything that explicitly or implicitly creates new object instances (i.e. creates a lot of garbage), as that will eventually lead to GC pauses ruining your framerate, though, then again, maybe that doesn't overly matter for your type of game.


Another solution without any performance impact whatsoever that just sprang to mind would be to write a Roslyn analyzer that detects any modifications to a WorldState instance outside of classes marked with a special WorldStateEditor attribute and marks those edits as an error. Though you'd have to do flow analysis to find if any element read from the WorldState (e.g. a Monster) is modified, hmm. Or instead of that you enhance the analyzer scope to all of your class types, allowing modification only where an attribute like [StateEditor(typeof(Monster))] declares it as such. If the goal is to limit state editing to specific places - or rather, make places where state is edited easy to find - then this might help without the overhead of interfacing everything up.

brap
Aug 23, 2004

Grimey Drawer
Wrapper struct that exposes only the read members? No allocation or virtual dispatch; not as versatile as an interface, but that’s fine?

SirViver
Oct 22, 2008
But now you're copying potentially huge structs everywhere, so you need to pass that wrapper instance around via ref. And it does cause allocations, just on the stack instead of the managed heap (but yeah, GC perf would not be affected by that). I mean it might work, but that's basically the "defensive copy" solution including all the coding work required to create a deep copy of the entire state - which you should then of course cache for the update cycle/frame and not create on every access.

SirViver fucked around with this message at 18:51 on Jun 21, 2020

brap
Aug 23, 2004

Grimey Drawer
What? The wrapper would contain a single field
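
i.e. a sketch:
code:
public readonly struct WorldStateView
{
    // The single field - copying the struct just copies this one reference.
    private readonly WorldState state;

    public WorldStateView(WorldState state) { this.state = state; }

    public int NumberOfTurns { get { return state.NumberOfTurns; } }
    public int MonsterCount { get { return state.Monsters.Count; } }
}
No interface, no heap allocation, and the wrapped WorldState itself is never reachable from outside.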

Whybird
Aug 2, 2009

Phaiston have long avoided the tightly competitive defence sector, but the IRDA Act 2052 has given us the freedom we need to bring out something really special.

https://team-robostar.itch.io/robostar


Nap Ghost
Gotcha. Creating read-only interfaces for the subclasses sounds like the best solution, then: I'm more interested in stopping myself from accidentally changing the contents of the class than having a completely airtight solution, and I was kind of curious how a professional developer would have gone about doing it since part of why I'm doing this is to get better at doing C# the Right Way. I'd had it in my head that adding the read-only interfaces would be a pain but the more I think about it, the more I realise that it really doesn't add much work.

Also thanks Eggnogium, I hadn't gotten my head around the difference between an IEnumerable and an IList so that's a really useful thing to know!

Thank you all, that's been really helpful!

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

SirViver posted:

You can use IReadOnlyList<Monster> and return (an ideally cached instance of) monsters.AsReadOnly() to prevent your list from being modified.

I don't really understand the purpose of the AsReadOnly() method (in general, not just here). Surely you expose a property of type IReadOnlyList<Monster> and just return your list of monsters.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Hammerite posted:

I don't really understand the purpose of the AsReadOnly() method (in general, not just here). Surely you expose a property of type IReadOnlyList<Monster> and just return your list of monsters.

The method and the class it returns (ReadOnlyCollection<T>) simply predates the existence of IReadOnlyList<T>. If you can just cast to IReadOnlyList<T> there's no reason to use the wrapper class.

SirViver
Oct 22, 2008

Hammerite posted:

I don't really understand the purpose of the AsReadOnly() method (in general, not just here). Surely you expose a property of type IReadOnlyList<Monster> and just return your list of monsters.
No, because then you can just cast back to List and modify all you want. AsReadOnly wraps the list in an actual ReadOnlyCollection that doesn't expose the list instance and throws on any attempt to add or remove items. That way you also protect against accidental modification, e.g. by code that just takes an object (your cast-only IReadOnlyList) and does itself a typecheck for "if (obj is ICollection<T> col && !col.IsReadOnly)" or similar.
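
To spell out the difference (sketch, using strings instead of Monsters so it stands alone):
code:
using System;
using System.Collections.Generic;

var monsters = new List<string> { "goblin" };

// Just typing the property as IReadOnlyList<string>: the caller can cast it right back.
IReadOnlyList<string> castOnly = monsters;
((List<string>)castOnly).Add("dragon");   // compiles and runs; the real list is now modified

// AsReadOnly(): the caller only ever holds the ReadOnlyCollection wrapper.
IReadOnlyList<string> wrapped = monsters.AsReadOnly();
try
{
    // The wrapper still implements ICollection<string>, but every mutation throws.
    ((ICollection<string>)wrapped).Add("troll");
}
catch (NotSupportedException)
{
    Console.WriteLine("Wrapper refused the Add, as expected.");
}

Console.WriteLine(string.Join(", ", monsters));   // goblin, dragon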

SirViver
Oct 22, 2008

brap posted:

What? The wrapper would contain a single field

Ah OK, now I understand :downs:. Yeah, that would prevent any casting back to the original type and otherwise fulfill the same purpose as an interface. Though since this is apparently mostly about reminding the OP himself not to modify the objects, interfaces really are the best solution.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

SirViver posted:

No, because then you can just cast back to List and modify all you want.

I know you can. So what?

quote:

AsReadOnly wraps the list in an actual ReadOnlyCollection that doesn't expose the list instance and throws on any attempt to add or remove items. That way you also protect against accidental modification, e.g. by code that just takes an object (your cast-only IReadOnlyList) and does itself a typecheck for "if (obj is ICollection<T> col && !col.IsReadOnly)" or similar.

Anybody who does that deserves what they get

brap
Aug 23, 2004

Grimey Drawer
Absolutely agreed, they should be using 'if (obj is ICollection<T> { IsReadOnly: false } col)'

Boz0r
Sep 7, 2006
The Rocketship in action.
How do I get MSBuild to validate a JSON file that doesn't have a .json file extension at build time and throw an error if it's invalid? We use nswag to generate proxies at build time, but a merge conflict screwed with one of the lists in the .nswag file, and we spent a bunch of time finding a missing comma.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Boz0r posted:

How do I get MSBuild to validate a JSON file that doesn't have a .json file extension at build time and throw an error if it's invalid? We use nswag to generate proxies at build time, but a merge conflict screwed with one of the lists in the .nswag file, and we spent a bunch of time finding a missing comma.

Sounds like nswag *is* validating your JSON at build time; it just has poor error messages.

I'm not joking. What you'd be adding is some crude validation of pure format conformance immediately before nswag goes and does the same thing, but better (i.e. it also checks that the OpenAPI spec is respected, etc.).

It's like checking for folder permissions manually before writing a file, instead of just trying to write and letting the filesystem tell you if you don't have permission, or the path doesn't exist, or whatever you haven't thought of.

Let NSwag do the work. For example, make it pipe its errors to a file, and then check in the next step if that file is empty.

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION
This is a tricky one, not technically .NET but I didn't know where else to ask and it's Sitecore related so I figure some of the .NET devs here will have had exposure to this. Behind our Sitecore website we have a Solr service handling indexing of our news articles for searches and filtering and so on. We're migrating away from SearchStax onto a simple server that we've set up in order to reduce costs (and have better access to configuration, since SearchStax don't provide direct access to most of the config).

We have configured our new Solr service with the same heap size, the same request limits, the same buffer sizes, etc, etc and yet for some reason when we send a particular query that I know to have a large response size, we get an immediate 404*. The query itself searches for records that match a list of IDs, and that list is 100 GUIDs long. If we reduce the size of the list, and therefore the size of the response, we start to see successful responses. So it's definitely something size related. Here's what we've found and matched to our pre-existing Solr service:

* JVM memory
* Physical memory
* Swap space
* Jetty outputBufferSize
* Jetty outputAggregationSize
* Jetty requestHeaderSize
* Jetty responseHeaderSize
* Jetty headerCacheSize

These are all matched to the old service and yet the new service is still unable to handle this query. I've honestly reached the end of my list of ideas on what could possibly be wrong and I'm now just fiddling with random poo poo to see if I can fluke it. I'd love if someone far more familiar with Solr could shed some light on what might be misconfigured, or at least where I can look for error messages or something. I've looked at the Solr logs and there are no errors so I think it must be a Jetty error.

It may or may not be relevant, but this is running under Azure as an App Service.

Alternatively if anyone at least knows another thread I can post this in where Solr people might see it that would be great too.

* Note: 404 seems to be a red herring, it seems to be the generic response type for a host of different errors, frustratingly...

Mata
Dec 23, 2003
Trying to improve performance of a big ole enterprisey asp.net.core + angular2 app. The bottleneck is sending 50mb of JSON data to the client, no big surprise there - what IS surprising is how much faster everything is when serializing everything in the controller and returning a string, rather than returning some object and letting the middleware serialize the JSON behind the scenes.
What's even weirder is the deserialization on the client goes about 5 times faster when manually serializing the response on the server, and passing out the string. My only explanation for this is that asp.net by default does non-blocking serialization and streams out the results in chunks, which results in another bottleneck for the client.

I also tried sending out the data over websockets with messagepack compression, but this is about 10-20 times slower than sending JSON (this could obviously be improved by making the data more binary and less stringy, but a big part of it is also how brutally fast V8 seems to be at parsing JSON, whereas messagepack does most of its deserialization in javascript, it seems)

Anyone have advice on improving the perf. characteristics of an app like this? I feel like messagepack-over-websockets ought to be the more scalable solution, but the more binary i make the wire-type, the more crunching i have to do on the client-side to make the datatypes useable in angular. Competing against V8s JSON-parser seems difficult. Why manually serializing the responses on the server results in such a speedup both on the server and clientside is a mystery to me...

edit: I should mention we use Newtonsoft for the serializer. System.text.json appears to perform significantly worse, on average. No custom serializers for either - would be nice to avoid but I'm aware there's speedups to be found there.
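
For reference, by "manually serializing" I mean roughly this (a sketch, not our actual controller - the names are made up):
code:
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;

[ApiController]
[Route("[controller]")]
public class ReportController : ControllerBase
{
    // Framework-serialized: return the object and let the configured formatter write it out.
    [HttpGet("auto")]
    public ActionResult<BigPayload> GetAuto() => BuildPayload();

    // Manual: serialize to one big string inside the action and return it as-is.
    // This is the variant that measures faster for us on both server and client.
    [HttpGet("manual")]
    public ContentResult GetManual()
    {
        var json = JsonConvert.SerializeObject(BuildPayload());
        return Content(json, "application/json");
    }

    private static BigPayload BuildPayload() => new BigPayload();   // stand-in for the real query

    public class BigPayload { /* ~50mb worth of properties in the real app */ }
}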

Mata fucked around with this message at 14:02 on Jul 1, 2020

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Mata posted:

Trying to improve performance of a big ole enterprisey asp.net.core + angular2 app. The bottleneck is sending 50mb of JSON data to the client, no big surprise there - what IS surprising is how much faster everything is when serializing everything in the controller and returning a string, rather than returning some object and letting the middleware serialize the JSON behind the scenes.
What's even weirder is the deserialization on the client goes about 5 times faster when manually serializing the response on the server, and passing out the string. My only explanation for this is that asp.net by default does non-blocking serialization and streams out the results in chunks, which results in another bottleneck for the client.

I also tried sending out the data over websockets with messagepack compression, but this is about 10-20 times slower than sending JSON (this could obviously be improved by making the data more binary and less stringy, but a big part of it is also how brutally fast V8 seems to be at parsing JSON, whereas messagepack does most of its deserialization in javascript, it seems)

Anyone have advice on improving the perf. characteristics of an app like this? I feel like messagepack-over-websockets ought to be the more scalable solution, but the more binary i make the wire-type, the more crunching i have to do on the client-side to make the datatypes useable in angular. Competing against V8s JSON-parser seems difficult. Why manually serializing the responses on the server results in such a speedup both on the server and clientside is a mystery to me...

edit: I should mention we use Newtonsoft for the serializer. System.text.json appears to perform significantly worse, on average. No custom serializers for either - would be nice to avoid but I'm aware there's speedups to be found there.

50mb of JSON data to open a page is really high. How much information can you present on a single page? Have you looked into techniques like pagination of results, etc?

Also one thing to consider is checking to see if there's redundant data in the JSON - e.g. if there are a bunch of fields whose values contain startingIdentifier + data, you could come up with some kind of convention where starting identifier and data are stored in different fields and the client would stitch them together and pass them to the UI.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mata posted:

Anyone have advice on improving the perf. characteristics of an app like this?

Fix it so it doesn't send 50 mb of data to the client. That's the problem. Trying to optimize around the root cause of the problem is going to give you, at best, a modest improvement and will probably take you just as long as fixing the real problem.

Mata
Dec 23, 2003
I'm looking for some low hanging fruit that doesn't involve rewriting a fairly massive application. The initial idea back in the day was to suffer through a slightly longer initial load, to then enjoy a faster overall experience once all the data is clientside. It seems like the data volumes we're at now are approaching the limits where this approach is no longer feasible. I do think there's some counterintuitive performance characteristics in this tech stack, to illustrate what I mean:

Here's V8 slurping up 50mb of json in 2.5 seconds....


and here's messagepack taking 48 seconds to do the same.

Zooming in on the details reveals a billion string and splice operations, all done in JS. The heap size even grows twice as large.

It does seem like reducing the data volume is the way to go, but based off this, it doesn't look like 5mb of messagepacked data would outperform 50mb of json. Maybe if I can get the binary representation down to something like 0.5mb, but that means more data processing and stitching client-side... But yeah it does look like if any of this dataloading can be deferred, that would be better in the long term.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Mata posted:

I'm looking for some low hanging fruit that doesn't involve rewriting a fairly massive application. The initial idea back in the day was to suffer through a slightly longer initial load, to then enjoy a faster overall experience once all the data is clientside. It seems like the data volumes we're at now are approaching the limits where this approach is no longer feasible. I do think there's some counterintuitive performance characteristics in this tech stack, to illustrate what I mean:

Here's V8 slurping up 50mb of json in 2.5 seconds....


and here's messagepack taking 48 seconds to do the same.

Zooming in on the details reveals a billion string and splice operations, all done in JS. The heap size even grows twice as large.

It does seem like reducing the data volume is the way to go, but based off this, it doesn't look like 5mb of messagepacked data would outperform 50mb of json. Maybe if I can get the binary representation down to something like 0.5mb, but that means more data processing and stitching client-side... But yeah it does look like if any of this dataloading can be deferred, that would be better in the long term.

Messagepack etc. might be appropriate for service-to-service or native code, but it's nearly impossible to beat JSON deserialization performance in browser/JS consumers, especially with large datasets. It's not the transmission that's the problem; fundamentally, think about what it means to have a 50mb object...

Bruegels Fuckbooks fucked around with this message at 18:02 on Jul 1, 2020

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



I know Newtonsoft can stream-serialize and presumably deserialize. Are you doing that or is it somehow slower?
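
By stream-serialize I mean something along these lines (sketch):
code:
using System.IO;
using System.Text;
using Newtonsoft.Json;

public static class JsonStreaming
{
    // Writes the object graph straight to the output stream instead of building one giant string first.
    public static void WriteTo(Stream output, object payload)
    {
        var serializer = JsonSerializer.CreateDefault();
        using (var streamWriter = new StreamWriter(output, new UTF8Encoding(false), 1024, leaveOpen: true))
        using (var jsonWriter = new JsonTextWriter(streamWriter))
        {
            serializer.Serialize(jsonWriter, payload);
            jsonWriter.Flush();
        }
    }
}
e.g. called as JsonStreaming.WriteTo(Response.Body, bigThing) from an action, though Kestrel blocks synchronous writes by default these days, so you'd either enable AllowSynchronousIO or write to a buffer first.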

Boz0r
Sep 7, 2006
The Rocketship in action.
We're using xUnit, FluentAssertions and Moq. I normally wrap my assertions in an AssertionScope, but I don't think my Moq verifies get added to that scope. Is there a way to do that?

EDIT: Another question. We're mocking a bunch of external services, and some of our methods are doing a bunch of calls, so the arrange steps of our tests are getting very big. How do I make writing something like this test faster? I've read a bit about AutoFixture and AutoMock, but I'm not entirely sure how to use them.

code:
[Fact]
public async Task GetOrCreateWithSomeTypeUnitTest_NewErrorLog()
{
    // Arrange
    var someType = "someType";
    var someOtherType = "someOtherType";
    var mbtId = Guid.NewGuid();
    var agt = new TypeDTO();
    ExceptionDto exceptionDto = null;
    string errorOccuredIn = null;

    var mockMb = new Mock<IServiceManager>();
    mockMb.Setup(repo => repo.GetTypeId(someType, someOtherType)).ReturnsAsync((Guid?)null);
    mockMb.Setup(repo => repo.GetSomethingById(someOtherType)).ReturnsAsync(agt);
    mockMb.Setup(repo => repo.CreateSomeType(someType, agt))
        .ReturnsAsync(new SomeTypeDTO
        {
            SomeTypeId = mbtId
        });

    var mockUtil = new Mock<IUtilServiceManager>();
    mockUtil.Setup(repo => repo.GetErrorLogAsync(It.IsAny<string>())).ReturnsAsync((ErrorLogDto)null);
    mockUtil.Setup(repo =>
            repo.CreateErrorLogAsync(It.IsAny<string>(), It.IsAny<ExceptionDto>()))
        .Callback<string, ExceptionDto>((s, dto) =>
        {
            errorOccuredIn = s;
            exceptionDto = dto;
        });

    var mockOther = new Mock<IOtherServiceManager>();
    var manager = new BMManager(mockMb.Object, mockOther.Object, mockUtil.Object);

    // Act
    var id = await manager.GetOrCreateWithSomeType(someType, someOtherType);    

    // Assert
    id.Should().Be(mbtId);
    mockMb.Verify(x => x.GetTypeId(someType, someOtherType), Times.Once);
    mockMb.Verify(x => x.GetSomethingById(someOtherType), Times.Once);
    mockMb.Verify(x => x.CreateSomeType(someType, agt), Times.Once);
    mockUtil.Verify(x => x.GetErrorLogAsync(It.IsAny<string>()), Times.Once);
    mockUtil.Verify(x => x.CreateErrorLogAsync(It.IsAny<string>(), It.IsAny<ExceptionDto>()), Times.Once);
    exceptionDto.Should().NotBeNull();
    exceptionDto.XrmObjects.Should().NotBeNull().And.HaveCount(1);
    errorOccuredIn.Should().NotBeNullOrEmpty().And.BeEquivalentTo(Constants.Sync);
}

Boz0r fucked around with this message at 09:15 on Jul 3, 2020

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Mata posted:

Trying to improve performance of a big ole enterprisey asp.net.core + angular2 app. The bottleneck is the sending 50mb of JSON data to the client, no big surprise there - what IS surprising is how much faster everything is when serializing everything in the controller and returning a string, rather than returning some object and letting the middleware serialize the JSON behind the scenes.
What's even weirder is the deserialization on the client goes about 5 times faster when manually serializing the response on the server, and passing out the string. My only explanation for this is that asp.net by default does non-blocking serialization and streams out the results in chunks, which results in another bottleneck for the client.

edit: I should mention we use Newtonsoft for the serializer. System.text.json appears to perform significantly worse, on average. No custom serializers for either - would be nice to avoid but I'm aware there's speedups to be found there.

Interesting, this seems to be the exact opposite of what I encountered.

A couple weeks ago we were trying to speed up or at least make a decent progress bar for a mobile app's initial synchronization, which involved downloading roughly that same amount of json (as a one-time setup and occasional reset!). We are also using ASP.NET 3.1 with Newtonsoft.Json instead of System.Text.Json (due to NSwag not supporting the latter).

Since the json was an array, I was hoping that it would be naturally chunked in transfer. Turns out, it wasn't, no matter what sort of collection I cast the array to - IEnumerable, IAsyncEnumerable - the Newtonsoft middleware had no particular way to handle it, it would just convert it back to an array and serialize it as a single HTTP Response with a huge transfer size.

In order to make it into a chunked transfer I had to - you can guess - manually serialize each object in the array, something like this:

code:
type ControllerBase with
   member this.StreamJsonArray collection = 
       task {
           this.Response.StatusCode <- int HttpStatusCode.OK // StatusCode is an int, so convert the enum
           this.Response.ContentType <- "application/json;gzip"

           use sw = new StreamWriter(this.Response.Body)
           for index, item in Seq.indexed collection do 
               do! sw.WriteAsync (if index = 0 then '[' else ',')
               do! sw.WriteAsync (toJson item)
           do! sw.WriteAsync (']')

           return EmptyResult()
       }

Mr Shiny Pants
Nov 12, 2012

NihilCredo posted:

Interesting, this seems to be the exact opposite of what I encountered.

A couple weeks ago we were trying to speed up or at least make a decent progress bar for a mobile app's initial synchronization, which involved downloading roughly that same amount of json (as a one-time setup and occasional reset!). We are also using ASP.NET 3.1 with Newtonsoft.Json instead of System.Text.Json (due to NSwag not supporting the latter).

Since the json was an array, I was hoping that it would be naturally chunked in transfer. Turns out, it wasn't, no matter what sort of collection I cast the array to - IEnumerable, IAsyncEnumerable - the Newtonsoft middleware had no particular way to handle it, it would just convert it back to an array and serialize it as a single HTTP Response with a huge transfer size.

In order to make it into a chunked transfer I had to - you can guess - manually serialize each object in the array, something like this:

code:
type ControllerBase with
   member this.StreamJsonArray collection = 
       task {
           this.Response.StatusCode <- int HttpStatusCode.OK // StatusCode is an int, so convert the enum
           this.Response.ContentType <- "application/json;gzip"

           use sw = new StreamWriter(this.Response.Body)
           for index, item in Seq.indexed collection do 
               do! sw.WriteAsync (if index = 0 then '[' else ',')
               do! sw.WriteAsync (toJson item)
           do! sw.WriteAsync (']')

           return EmptyResult()
       }

Weird, you'd guess this is something the HTTP layer would do: "Here's a bunch of data, I'll chunk it for you."

Mata
Dec 23, 2003

NihilCredo posted:

Interesting, this seems to be the exact opposite of what I encountered.

Cool. Did you get any speedup from this or did you just want to get the response piecemeal to make it easier to work with?
I should mention I didn't see any evidence of Newtonsoft.Json chunking the response; this was just an assumption I made to explain why the serialization took so much longer on the server side when we let the middleware do the serialization vs. when we serialized it ourselves.

Here's what System.text.json looks like on the client: (the low TTFB combined with long content download indicates a streaming response to me)

And here's Newtonsoft middleware: (Sure it's faster than system.text.json... But a lot slower than manual serialization using the same lib)

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick
ASP.NET Core routing question.

I have a controller like so:
code:
    [Route("[controller]")]
    [ApiController]
    public class CategoriesController : ControllerBase
    {
        [HttpPost("{brandId}")]
        public async Task<ActionResult<CreateCategoryResponse>> CreateCategory(int brandId)
        {
            return Ok(await manager.CreateCategoryForBrandAsync(brandId));
        }

        [HttpGet("{brandId}")]
        public async Task<ActionResult<ListCategoriesResponse>> ListCategories(int brandId)
        {
            return Ok(await manager.ListCategoriesForBrandAsync(brandId));
        }
    }

which will expose GET and POST on the routes /categories/{brandId}. There are going to be a number of other operations to perform here, and I'm thinking that logically it would make more sense to map the routes as /{brandId}/categories/{id}/whatever/etc or, worst case, /brands/{brandId}/categories/{id}/whatever/etc instead.
Is there an easy way to specify that using attribute based routing? Offhand I know I can just create a BrandsController and put everything under that, but there are a couple of other controllers already and I don't want to have one giant BrandsController with every endpoint for the entire service sitting under there if I can help it. Every request will always need to specify the brandId that it applies to, there aren't any requests which won't have a brandId as part of the request.

e: solution doesn't have to use attribute-based routing, it's just the easiest to understand for me, but if this can be achieved in some other way as well then that's great too.

beuges fucked around with this message at 13:46 on Jul 5, 2020

SAVE-LISP-AND-DIE
Nov 4, 2010

beuges posted:

ASP.NET Core routing question.

I have a controller like so:
code:
    [Route("[controller]")]
    [ApiController]
    public class CategoriesController : ControllerBase
    {
        [HttpPost("{brandId}")]
        public async Task<ActionResult<CreateCategoryResponse>> CreateCategory(int brandId)
        {
            return Ok(await manager.CreateCategoryForBrandAsync(brandId));
        }

        [HttpGet("{brandId}")]
        public async Task<ActionResult<ListCategoriesResponse>> ListCategories(int brandId)
        {
            return Ok(await manager.ListCategoriesForBrandAsync(brandId));
        }
    }
which will expose GET and POST on the routes /categories/{brandId}. There are going to be a number of other operations to perform here, and I'm thinking that logically it would make more sense to map the routes as /{brandId}/categories/{id}/whatever/etc or, worst case, /brands/{brandId}/categories/{id}/whatever/etc instead.
Is there an easy way to specify that using attribute based routing? Offhand I know I can just create a BrandsController and put everything under that, but there are a couple of other controllers already and I don't want to have one giant BrandsController with every endpoint for the entire service sitting under there if I can help it. Every request will always need to specify the brandId that it applies to, there aren't any requests which won't have a brandId as part of the request.

e: solution doesn't have to use attribute-based routing, it's just the easiest to understand for me, but if this can be achieved in some other way as well then that's great too.
[Route("~/brands/{brandId}/categories/{id}/whatever/etc")] applied to your method should do what you want.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

SAVE-LISP-AND-DIE posted:

[Route("~/brands/{brandId}/categories/{id}/whatever/etc")] applied to your method should do what you want.

Brilliant, thanks!

Boz0r
Sep 7, 2006
The Rocketship in action.
I've been messing around with AutoFixture and AutoMoq and it's really cool, but I've hit a snag that I don't know how to solve. My code gets a bunch of proxies from a static factory class that I switch out with a mock factory in my test base class, and I use AutoDataAttribute to inject fixtures into my tests.

My mocking factory looks like this:
code:
public class ProxyFactoryMock : IProxyFactory
{
    private readonly IFixture _fixture;

    public ProxyFactoryMock(IFixture fixture)
    {
        _fixture = fixture;
    }

    public T GetProxy<T>(string url) where T : IProxyBase
    {
        return _fixture.Create<T>();
    }
}
I create an AutoMoq attribute like in their guide:
code:
public class AutoMoqDataAttribute : AutoDataAttribute
{
    public AutoMoqDataAttribute() : base(() => new Fixture().Customize(new AutoMoqCustomization()))
    {
    }
}
And I use it in my test definition like so.
code:
[Theory, AutoMoqData]
public void AutoProxyTest([Frozen] Mock<ISomethingProxy> somethingProxy)
{
	// Arrange
	somethingProxy.Setup(proxy => proxy.Method(It.IsAny<string>())).ReturnsAsync(() => new SwaggerResponse<string>(default, default, "test"));
	
	...
}
My problem here is that the attribute and the factory have two different instances of IFixture, so the factory doesn't use the proxy I just set up.

How do I fix this in the neatest way, without having to add extra code to each unit test, and also having the instance be unique per test?

Boz0r fucked around with this message at 07:35 on Jul 7, 2020

StoicFnord
Jul 27, 2012

"If you want to make enemies....try to change something."


College Slice
Hi all,

Sorry for asking this but I am at the end of my tether.

I am attempting to decrypt a JWE inside a JWT. We are using ECDH-ES and AES128GCM.

Should be pretty easy using Jose-JWT, except the ConcatKDF function is not implemented in .NET Core.

I am not a huge crypto guy - we use libraries for this - but I am having a great deal of difficulty in finding a library or algorithm to allow for JWE decryption using ECC.

Can anyone point me to some resources I can go through?

EssOEss
Oct 23, 2006
128-bit approved
Can you post an example token and decryption key (better yet, example code)? I find it hard to follow your description but have successfully used Jose-JWT in the past so I can give it a try.

Unless what you're asking is just about how to use some unsupported algorithm (is it?).

  • Reply