|
EssOEss posted:Can you post an example token and decryption key (better yet, example code)? I find it hard to follow your description but have successfully used Jose-JWT in the past so I can give it a try. I'll see what I can do (much of it is company confidential). Basically the issue is that Jose-JWT doesn't implement ConcatKDF in .NET Standard. Ref: https://github.com/dvsekhvalnov/jose-jwt/blob/e54de3bb706edf294053b4b86f0db47333d433ef/jose-jwt/crypto/ConcatKDF.cs
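For context, the missing piece is the NIST SP 800-56A Concatenation KDF. Here's a minimal sketch of what it computes, using only APIs available in .NET Standard 2.0 - this is my own illustration, not jose-jwt's code, and it hasn't been checked against their test vectors:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

// Sketch of the SP 800-56A Concatenation KDF: derive keyBits of key material
// from a shared secret Z and an OtherInfo blob by hashing
// (32-bit big-endian counter || Z || OtherInfo) for counter = 1, 2, ...
public static class ConcatKdfSketch
{
    public static byte[] DeriveKey(byte[] z, byte[] otherInfo, int keyBits)
    {
        int keyBytes = keyBits / 8;
        using (var sha = SHA256.Create())
        {
            int hashLen = sha.HashSize / 8;
            int reps = (keyBytes + hashLen - 1) / hashLen;
            var output = new byte[reps * hashLen];

            for (uint counter = 1; counter <= reps; counter++)
            {
                // 32-bit big-endian round counter, per the spec.
                byte[] counterBe = BitConverter.GetBytes(counter);
                if (BitConverter.IsLittleEndian) Array.Reverse(counterBe);

                byte[] block = sha.ComputeHash(
                    counterBe.Concat(z).Concat(otherInfo).ToArray());
                Buffer.BlockCopy(block, 0, output,
                    (int)(counter - 1) * hashLen, hashLen);
            }

            // Keep only the requested number of bytes.
            return output.Take(keyBytes).ToArray();
        }
    }
}
```

In JWE's ECDH-ES modes, OtherInfo is assembled from the algorithm ID, PartyUInfo/PartyVInfo, and key length fields; jose-jwt handles that part separately from the KDF core shown here.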
|
# ? Jul 7, 2020 11:34 |
|
Boz0r posted:I've been messing around with AutoFixture and AutoMoq and it's really cool, but I've hit a snag that I don't know how to solve. My code gets a bunch of proxies from a static factory class that I switch out with a mock factory in my test base class, and I use AutoDataAttribute to inject fixtures into my tests. I'm sorry, it looks like you are missing some code here, or the pseudocode didn't quite get translated. In your example test you are injecting a Moq mock of <ISomethingProxy>. Why would you expect that to use your factory? There's nothing telling AutoFixture to do that - it sees an object of type ISomethingProxy and will create you an auto-mocked object of that type. What are you wanting it to do here? Should that ISomethingProxy instead be an IProxyFactory?
|
# ? Jul 7, 2020 17:56 |
|
Boz0r posted:I've been messing around with AutoFixture and AutoMoq and it's really cool, but I've hit a snag that I don't know how to solve. My code gets a bunch of proxies from a static factory class that I switch out with a mock factory in my test base class, and I use AutoDataAttribute to inject fixtures into my tests. Make the factory class injectable instead of static and swap factory implementations?
|
# ? Jul 7, 2020 18:02 |
|
Not sure if this is suitable for this thread but not sure where else to post it. We have a .NET Framework API that is using Azure Cache for its caching layer, which is Azure's managed implementation of Redis. We occasionally get these errors where the connection to Redis has dropped and we get hundreds of messages in the logs like this:code:
What I've tried: I found a lot of talk from Googling around that the StackExchange.Redis library can have errors like this, but unfortunately this error seems to be some sort of 'catch-all' error that can arise from many, many causes. The two suggestions I've seen and tried to no avail:
Monitoring for Azure Cache shows no unusual data at all during the times the connections are failing. The CPU load for example is peaking at 20%, so I don't think it's an issue of underprovisioning or the like. This is creating a big problem for us because without Redis the load on our DB is ridiculous and it's creating cascading problems. I hope someone can help, I'm at the end of my ideas for how to even begin to debug this.
|
# ? Jul 7, 2020 23:32 |
|
adaz posted:I'm sorry, it looks like you are missing some code here, or the pseudocode didn't quite get translated. In your example test you are injecting a Moq mock of <ISomethingProxy>. Why would you expect that to use your factory? There's nothing telling AutoFixture to do that - it sees an object of type ISomethingProxy and will create you an auto-mocked object of that type. What are you wanting it to do here? Should that ISomethingProxy instead be an IProxyFactory? When I need a proxy somewhere in my code I get it like this: code:
code:
|
# ? Jul 8, 2020 10:47 |
|
Mata posted:Cool. Did you get any speedup from this or did you just want to get the response piecemeal to make it easier to work with? Speed-up wasn't a big concern for a one-time download, I just wanted to make it possible for the clients to display a progress bar as the collection was being downloaded. Actually, I thought about your problem again and it might in fact correspond to what I observed. When you're using Newtonsoft to serialize, the JSON is sent as a single huge HTTP response so the browser can't begin deserializing it until it has got the entire, uncorrupted response. But if you pre-serialize it as a string, then Newtonsoft is out of the picture and ASP.NET is free to chunk the string response. Then your browser can start deserializing the response as soon as it receives a single piece, instead of all together at the end. Can you inspect your application's raw HTTP requests/responses with Fiddler and see if the Transfer-Encoding is set to chunked? Also, since you were asking for low-hanging fruit: the response is already gzipped, right? It should be by default.
|
# ? Jul 8, 2020 14:02 |
|
The browser can begin reading as soon as headers are sent. If the backend is not sending anything until the whole blob is serialized then the client had to wait that long to even see headers. You don't even need chunked responses to stream/show progress, the backend just has to start sending data asap or you can only show a "waiting" (or 0% progress) until that TTFB (when the headers come in).
|
# ? Jul 8, 2020 18:37 |
|
Boz0r posted:When I need a proxy somewhere in my code I get it like this: Ahhh! I see. Yeah this isn't working quite right because you're relying on basically a service locator pattern for your ProxyFactory. AutoFixture can't really hook into that pipeline. In general for AutoFixture you want everything to be - as much as possible - constructor injected / hidden behind an interface or abstract class. You start violating that and it becomes harder and harder to test. Can I ask why you don't just inject an instance of IProxyFactory into your class constructors? If you did that then AutoFixture (and your DI framework for that matter) could control the entire creation pipeline and you wouldn't need to set the instance explicitly like you are doing in the AutoMoqData.
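For anyone following along, the attribute being discussed conventionally looks like this - AutoFixture plus the AutoMoq glue, so any interface-typed test parameter (an IProxyFactory, say) arrives as a Moq mock via constructor injection. This is the standard shape from the AutoFixture docs, not the code from the post:

```csharp
using AutoFixture;
using AutoFixture.AutoMoq;
using AutoFixture.Xunit2;

// [AutoMoqData] on an xUnit [Theory] makes AutoFixture supply every test
// parameter; AutoMoqCustomization fills interface/abstract parameters
// with Moq mocks instead of failing to construct them.
public class AutoMoqDataAttribute : AutoDataAttribute
{
    public AutoMoqDataAttribute()
        : base(() => new Fixture().Customize(new AutoMoqCustomization()))
    {
    }
}
```

With that in place, a test can declare `void MyTest([Frozen] Mock<IProxyFactory> factory, SystemUnderTest sut)` and the frozen mock is the same instance injected into the sut's constructor.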
|
# ? Jul 11, 2020 00:50 |
|
adaz posted:Ahhh! I see. Yeah this isn't working quite right because you're relying on basically a service locator pattern for your ProxyFactory. AutoFixture can't really hook into that pipeline. In general for AutoFixture you want everything to be - as much as possible - constructor injected / hidden behind an interface or abstract class. You start violating that and it becomes harder and harder to test. It's custom plugin code running on Dynamics 365, so I have no idea how to go about doing DI in that context, and none of our other consulting teams have done it.
|
# ? Jul 11, 2020 10:46 |
|
I recently found myself doing some C# copy-paste and wondered if I could avoid it. I was wrapping a lot of private fields with getters and setters so I could strobe an event when the setters were called. The event is different per field and the data type varies per field, but it's still pretty generic. I can't think of any way to represent that other than to still type it out without getting into ugly reflection poo poo that is far worse, but I thought I'd ask anyways. I could probably compromise and have a more overall "this thing changed but I can't tell you what" kind of event and make that standard but I'm still stuck messing with the fields. I can only think of a generic helper that encapsulates the field but I'd rather have the class trying to do all this contain the field itself.
|
# ? Jul 12, 2020 18:42 |
|
Rocko Bonaparte posted:I recently found myself doing some C# copy-paste and wondered if I could avoid it. I was wrapping a lot of private fields with getters and setters so I could strobe an event when the setters were called. The event is different per field and the data type varies per field, but it's still pretty generic. I can't think of any way to represent that other than to still type it out without getting into ugly reflection poo poo that is far worse, but I thought I'd ask anyways. Well, you could do something like this, which reduces the amount of boilerplate you need to write, but still requires you to write a field, event, and one-line getter and setter for each property. Or, you could use the .Net standard INotifyPropertyChanged interface and simplify things a little further into something like this; though you'll pay a little bit of runtime cost with this because it's adding a dictionary lookup and boxing/unboxing to your gets and sets. You could combine the two approaches to remove that extra overhead, though, at the cost of having to define the backing field for each property like this.
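The original post's code samples were inline links that didn't survive, but the INotifyPropertyChanged approach it describes generally looks like this (SetField is a common helper name; the exact shape in the original may have differed):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// The standard .NET change-notification pattern: one generic setter helper
// replaces the per-field event boilerplate.
public class ObservableThing : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get => _name;
        set => SetField(ref _name, value);
    }

    private int _count;
    public int Count
    {
        get => _count;
        set => SetField(ref _count, value);
    }

    // Updates the backing field and raises PropertyChanged only when the
    // value actually changed. [CallerMemberName] fills in the property name.
    protected bool SetField<T>(ref T field, T value,
        [CallerMemberName] string propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}
```

Subscribers get one event with the property name attached, which trades the per-field event granularity the original question asked about for much less typing.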
|
# ? Jul 12, 2020 22:25 |
|
Sounds like a use-case for T4
|
# ? Jul 13, 2020 00:22 |
|
biznatchio posted:Well, you could do something like this, which reduces the amount of boilerplate you need to write, but still requires you to write a field, event, and one-line getter and setter for each property. Taking the INotify approach a bit further, you can use Prism and simplify it a bit since some of the boilerplate is just handled. The 'simple' version here still has you signing up for private/public locals, but the class is simple to read and maintain and it's a pretty reasonable amount of boilerplate, all things considered.
|
# ? Jul 13, 2020 05:50 |
|
I suppose it's safe to say there isn't any particular magic built in for it but I did forget about using actions and refs. Thanks everybody! This is actually more than I expected. Also: insta posted:Sounds like a use-case for T4 Are there any other preprocessor-like things like this that are pretty common? I think I was suggested this a few months ago for some particular bit of Rocko insanity. If it's the same thing then it's probably about time I get into this particular brand of crazy.
|
# ? Jul 13, 2020 06:21 |
|
Rocko Bonaparte posted:Are there any other preprocessor-like things like this that are pretty common? I think I was suggested this a few months ago for some particular bit of Rocko insanity. If it's the same thing then it's probably about time I get into this particular brand of crazy. Source Generators are coming Real Soon Now
|
# ? Jul 13, 2020 06:42 |
|
redleader posted:Source Generators are coming Real Soon Now In addition to Source Generators, you can also use something like PostSharp: https://doc.postsharp.net/inotifypropertychanged-add (my take on all of these things is that generated code solves problems, but always creates more problems than it solves...)
|
# ? Jul 14, 2020 18:36 |
|
For something like templates or metaprogramming, I was kind of assuming something like macros, but that INotifyPropertyChanged thing was pretty neat. And yeah, I haven't used anything like that for anything due to all the issues I've had in the past with step debugging in particular.
|
# ? Jul 16, 2020 00:04 |
|
I'm trying to clean up a massive git folder from an old solution that is full of useless crap, like old versions of code, never-used assets, and so on. Is there a way to make MSBuild print out a clean list of every file it actually used during the build process, so I can delete the rest?
|
# ? Jul 27, 2020 11:58 |
|
NihilCredo posted:I'm trying to clean up a massive git folder from an old solution that is full of useless crap, like old versions of code, never-used assets, and so on. Shouldn't your csproj or fsproj list all the files it uses?
|
# ? Jul 27, 2020 14:05 |
|
Mr Shiny Pants posted:Shouldn't your csproj or fsproj list all the files it uses? I think so, but it's like 25 project files each with a slightly different mix of content include, none include, compile include, etc., plus it references a bunch of local .DLLs (ancient hardware vendor libraries, mainly), plus SOAP client support files that I don't fully understand (what's a *.datasource file?). Point is, walking through all of that XML to grab just the file paths would be a bore and I would be very worried about forgetting some rarely-used shim or some tiny icon file. So I was more looking for something clever like setting all of the folder's last accessed date back by a few years, running a full rebuild, and seeing what got accessed.
|
# ? Jul 27, 2020 15:08 |
|
NihilCredo posted:I'm trying to clean up a massive git folder from an old solution that is full of useless crap, like old versions of code, never-used assets, and so on. Have you done analysis to see where the massiveness comes from? In my experience, it's binaries 99% of the time. "Old versions of code" is suspect -- that's what source control is for? Unless you mean someone copied Foo/* to Foo.bak/* and committed that, but that's immediately obvious. I'd take an incremental, is-it-good-enough-yet? approach. Find something big and clearly bad. Remove it with BFG. Repeat until the repo meets your size/speed requirements. Worst case, just take the past X weeks or months of history and archive the old version of the repo.
|
# ? Jul 27, 2020 15:24 |
|
New Yorp New Yorp posted:"Old versions of code" is suspect -- that's what source control is for? Unless you mean someone copied Foo/* to Foo.bak/* and committed that, but that's immediately obvious. It's more like: the file ProductView.vb used to live in project A, but at some point in the aughties it got moved out to project B, but the old ProductView.vb still exists in project A's folder even though it's not actually being referenced by the project file anymore - it's dead code. Or just as often, some Feature.vb file was left unfinished and the incomplete file full of stubs never got added to the project, but it's still lying around the folder. Most of the time, in Visual Studio, it's quite invisible and harmless. But the dead code still shows up whenever you do a grep / find in folder, or when I use Everything to quickly open up a certain file, and it's rather annoying because there's nothing indicating it's dead code; the path looks legit (who remembers "wait, ObscureProductView.vb is supposed to be in Foo/Main/Views/, not in Bar/Support/Product/View" or whatever). There's hundreds of files like these, since this whole solution wasn't put under source control until... I think 2012 or something like that (apparently the programmers at the time treated the project file like a master branch). I would like to clean the repo up before I finally move it to Git from TFS, especially because these files would be much more annoying when people are going through the repo using GitLab's web UI and search. NihilCredo fucked around with this message at 15:53 on Jul 27, 2020 |
# ? Jul 27, 2020 15:51 |
|
Turn on file access auditing for the repo folder, do a build and a run; then turn the auditing back off and use Event Viewer to search for and extract the file access audit log entries to an EVTX file, then write a small C# program using System.Diagnostics.Eventing.Reader.EventLogReader to iterate through the entries and build a list of files.
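A sketch of that last step - reading the exported EVTX, filtering to event ID 4663 ("an attempt was made to access an object"), and collecting file paths. The file path, event ID choice, and message parsing are assumptions; adjust to what your export actually contains:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Eventing.Reader;
using System.Text.RegularExpressions;

class AuditedFileLister
{
    static void Main()
    {
        var files = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        // Query the exported audit log for file-access events only.
        var query = new EventLogQuery(@"C:\temp\audit.evtx", PathType.FilePath,
            "*[System[EventID=4663]]");

        using (var reader = new EventLogReader(query))
        {
            for (EventRecord rec = reader.ReadEvent(); rec != null;
                 rec = reader.ReadEvent())
            {
                using (rec)
                {
                    // The rendered message contains an "Object Name: <path>" line.
                    string message = rec.FormatDescription() ?? "";
                    Match m = Regex.Match(message, @"Object Name:\s*(\S.*)");
                    if (m.Success)
                        files.Add(m.Groups[1].Value.Trim());
                }
            }
        }

        foreach (string f in files)
            Console.WriteLine(f);
    }
}
```

Diffing that list against a directory listing of the repo gives the delete candidates.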
|
# ? Jul 27, 2020 18:44 |
|
biznatchio posted:Turn on file access auditing for the repo folder, do a build and a run; then turn the auditing back off and use Event Viewer to search for and extract the file access audit log entries to an EVTX file, then write a small C# program using System.Diagnostics.Eventing.Reader.EventLogReader to iterate through the entries and build a list of files. Good one, or use process monitor. https://docs.microsoft.com/en-us/sysinternals/downloads/procmon It will show you all file access.
|
# ? Jul 27, 2020 19:07 |
|
Surely the compiler will still compile/copy even the files that contain useless code. Or what is the rationale behind this attempt?
|
# ? Jul 28, 2020 19:57 |
|
NihilCredo posted:It's more like: the file ProductView.vb used to live in project A, but at some point in the aughties it got moved out to Project B, but the old ProductView.vb still exists in project A's folder even though it's not actually being referenced by the project file anymore - it's dead code. Or just as often, some Feature.vb file was left unfinished and the incomplete file full of stubs never got added to the project but it's still lying around the folder It'll take a little while, but you should be able to just clean-sweep through it with Solution Explorer in (full) Visual Studio: make sure you're in Solution View, then turn on Show All Files up at the top, and then you'll be able to see in any given place where there are files-on-disk that aren't referenced by the project/solution, like so:
|
# ? Jul 29, 2020 08:21 |
|
I'm trying to write a fairly simple piece of code but I have been working ridiculous hours and my brain is currently tapioca. Every time I go to even sketch out the logic of it my brain gives up. What I'm trying to do is take a List containing any number of Dictionary<string, uint> objects and then return a Dictionary<string, uint> containing every way that you can combine the keys and values in order. So, for example if I put something like this into the function: [{ "A1":1, "A2": 2},{"B1":1, "B2": 2}]; I'd get back something like this: {"A1B1": 2, "A1B2": 3, "A2B1": 3, "A2B2": 4} If I was going to do this for a set number of entries like 2 or 3, I'd probably just do some nested loops iterating over each collection and combining the values in the deepest loop, but I need to do it over an arbitrary number of entries. I know that a solution to this is to do it with recursion, but my brain is just hard locking every time I try to sketch out the basic logic. Would any of you please help me here? Just a rough pseudocode explanation of the steps to take would really help me right now. TIP fucked around with this message at 08:51 on Aug 3, 2020 |
# ? Aug 3, 2020 08:02 |
|
The key insight to coming up with a recursive solution is to look hard at your incremental step - suppose you have a solution for three different dictionaries. As in, you've run the code, and you've got your solution back out of it. Now someone comes along with a fourth dictionary, and asks you what the solution would be if you had had that fourth dictionary in there from the start. Can you do that? And can you generalize that so you could do exactly the same steps if someone then came along with a fifth dictionary, and a sixth, and so on? Once you know what your incremental step is, the base case is frequently pretty obvious. Then you can write your recursive solution: code:
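The pseudocode block above didn't survive, but the recursion it describes can be sketched like this in C# (names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Combiner
{
    public static Dictionary<string, uint> Combine(List<Dictionary<string, uint>> dicts)
    {
        // Base case: combining zero dictionaries yields the single
        // "empty" combination - empty key, zero value.
        if (dicts.Count == 0)
            return new Dictionary<string, uint> { [""] = 0 };

        // Incremental step: solve for all dictionaries after the first,
        // then pair the first dictionary with every combination of the rest.
        var rest = Combine(dicts.Skip(1).ToList());
        var result = new Dictionary<string, uint>();
        foreach (var head in dicts[0])
            foreach (var tail in rest)
                result[head.Key + tail.Key] = head.Value + tail.Value;
        return result;
    }
}
```

With the example input from the question - [{ "A1":1, "A2":2 }, { "B1":1, "B2":2 }] - this returns { "A1B1": 2, "A1B2": 3, "A2B1": 3, "A2B2": 4 }.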
|
# ? Aug 3, 2020 08:19 |
|
Jabor posted:The key insight to coming up with a recursive solution is to look hard at your incremental step - suppose you have a solution for three different dictionaries. As in, you've run the code, and you've got your solution back out of it. Thanks! This post was just what I needed to get my thoughts in order on it. Only took a minute to get working once I had the logic figured out.
|
# ? Aug 3, 2020 09:23 |
|
We use early bound entities for developing plugins for Dynamics 365. We upgraded our csproj files to the new 2017 format, and we've just discovered the early bound types have stopped working. Usually, we have to add the following line to an AssemblyInfo.cs:code:
|
# ? Aug 5, 2020 13:24 |
|
Boz0r posted:We use early bound entities for developing plugins for Dynamics 365. We upgraded our csproj files to the new 2017 format, and we've just discovered a the early bound types have stopped working. Usually, we have to add the following line to an AssemblyInfo.cs: If by "2017 format" you mean SDK-style csproj (what .NET Standard 2.0+ and .NET Core projects use), you can add assembly info stuff to the csproj file. Example: https://stackoverflow.com/a/44502158
|
# ? Aug 5, 2020 16:57 |
|
Boz0r posted:We use early bound entities for developing plugins for Dynamics 365. We upgraded our csproj files to the new 2017 format, and we've just discovered a the early bound types have stopped working. Usually, we have to add the following line to an AssemblyInfo.cs: And if adding directly to the .csproj doesn't work, you can always create AssemblyInfo.cs yourself. It's no longer automatically generated, but it's still used if it exists.
|
# ? Aug 5, 2020 18:56 |
|
There is nothing special about AssemblyInfo.cs - you can put that stuff into any .cs file and the result will be the same. Having AssemblyInfo.cs for it is just a convention, not a technical requirement. Whatever changed is not because AssemblyInfo.cs is missing - if adding the stuff elsewhere does not work, you've got some other gremlins, possibly specific to whatever Dynamics SDKs you are using. I find your mention of "2017 format" confusing, though. The file format does not change just because of VS version - are you now targeting a different .NET version or something, which brings a different project file format? Different .NET runtimes do use different file formats but this is a way bigger change than that of a file format and has many compatibility implications. Maybe explain in detail what you are trying to do. EssOEss fucked around with this message at 20:21 on Aug 6, 2020 |
# ? Aug 6, 2020 20:18 |
|
EssOEss posted:I find your mention of "2017 format" confusing, though. The file format does not change just because of VS version - are you now targeting a different .NET version or something, which brings a different project file format? Different .NET runtimes do use different file formats but this is a way bigger change than that of a file format and has many compatibility implications. Maybe explain in detail what you are trying to do. The SDK-style project file format is commonly colloquially referred to as “2017-style” because VS 2017 is the first version that supported it.
|
# ? Aug 6, 2020 21:08 |
|
I tried doing that, but our problem remains, so I think it's something else. For some reason, when we register our plugins in D365 we get unknown type errors when using early bound types. We tried using a clean project template that we usually use for new customers, and that works. So we probably screwed something else up.
|
# ? Aug 7, 2020 10:25 |
|
Question about APIs. When coding a desktop app, my ECL classes (this is the layer that produces observable DTO objects from entity objects) have a method similar to code:
code:
However, now I’m working on a demonstration app that is a management system for a restaurant. The lower layers look like MS Sql -> EF Core -> DAL layer -> API layer. There is a WPF based management desktop app as well as a web site. All access to data is through the API layer, since that’s where the authentication and authorization are done. I’m thinking about how I can compose those Expression<Func<TEntity, bool>> elements and send them to the API controllers. My first (naïve) thought was to serialize them and send them in the request body, but this failed spectacularly. So my other idea is to expose more endpoints in the API controllers. But I’m wondering if there’s another way to do this? Ideas?
|
# ? Aug 8, 2020 04:24 |
|
LongSack posted:Question about APIs. Read up on REST API design. If anything, you should be translating the Expressions to REST calls to get the appropriate data, not passing them to REST calls. If you want to do it right, you're going to have to re-examine your approach; trying to directly translate a set of patterns that worked fine in a monolith is not going to work in the REST world. [edit] Also, you're going to go crazy trying to maintain feature parity between two UIs. Consider using Atom or Electron or something similar and hosting the web front end within a desktop app. New Yorp New Yorp fucked around with this message at 18:35 on Aug 8, 2020 |
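Concretely, "translating the Expressions to REST calls" usually means each server-side filter becomes a named endpoint or query parameter, and the Expression<Func<TEntity, bool>> composition stays entirely behind the API. An invented sketch - route, parameter names, and the DAL call are placeholders, not from the posted app:

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

// The client never sends code; it sends bindable values. The controller
// composes the EF Core query server-side from whichever filters are present.
[ApiController]
[Route("api/reservations")]
public class ReservationsController : ControllerBase
{
    [HttpGet]
    public IActionResult Get([FromQuery] DateTime? date, [FromQuery] int? partySize)
    {
        // Hypothetical DAL call; each non-null parameter is applied as a
        // Where() clause inside the DAL, e.g.:
        //   query = query.Where(r => r.Date == date.Value);
        // return Ok(_dal.FindReservations(date, partySize));
        return Ok();
    }
}
```

A GET like /api/reservations?date=2020-08-08 then plays the role the lambda did in the monolith, with authentication and authorization still enforced at the API layer.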
# ? Aug 8, 2020 18:14 |
|
New Yorp New Yorp posted:Read up on REST API design. If anything, you should be translating the Expressions to REST calls to get the appropriate data, not passing them to REST calls. If you want to do it right, you're going to have to re-examine your approach; trying to directly translate a set of patterns that worked fine in a monolith is not going to work in the REST world. Yeah, that's what I meant by exposing more endpoints. I didn't explain myself properly. quote:[edit] Also, you're going to go crazy trying to maintain feature parity between two UIs. Consider using Atom or Electron or something similar and hosting the web front end within a desktop app. It's a demo app, so once it's written not much will change. Also, the two front ends have different purposes. The WPF app is for back-end management intended for use by the staff. The web front end is intended for customers of the restaurant, to view the menu, make reservations, etc.
|
# ? Aug 8, 2020 20:19 |
|
I don't know if I'd recommend it, but you can serialize an Expression to a JSON object using Aq.ExpressionJsonSerializer, and then deserialize it on the other side into something you can execute. But if you're going to expose something like that in a public API you better make sure you have your ducks in a row that you're not just allowing anyone to do arbitrary code execution on you.
|
# ? Aug 8, 2020 21:43 |
|
If you're comfortable with opening your API to arbitrary projection/predicate query, maybe OData could work for you. A cursory search says there is support for LINQ-enabled client with Microsoft.OData.Client.
|
# ? Aug 8, 2020 22:30 |