|
SirViver posted:LOL. I bet LinqPad just has a non-fixed width output font JFC you're probably right epswing fucked around with this message at 21:22 on Apr 8, 2021 |
# ? Apr 8, 2021 21:20 |
|
Yeah I'm pretty sure that's Tahoma.
|
# ? Apr 8, 2021 21:39 |
|
Also, keep in mind that LINQPad outputs HTML. I've had times where I built a quick script to generate some tab-delimited data; copy/paste the results and you'll find your tabs are now spaces. In those cases I just fall back to using a StreamWriter to write it out.
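The StreamWriter fallback can be sketched like this (a minimal sketch; the file path and row data are made up for illustration):

```csharp
using System;
using System.IO;

// Write tab-delimited rows straight to a file, bypassing LINQPad's
// HTML results pane entirely. Path and rows are placeholder values.
var path = Path.Combine(Path.GetTempPath(), "export.tsv");
var rows = new[]
{
    new[] { "Id", "Name", "Score" },
    new[] { "1", "Alice", "42" },
};

using (var writer = new StreamWriter(path))
{
    foreach (var row in rows)
        writer.WriteLine(string.Join("\t", row));
}

Console.WriteLine(File.ReadAllText(path).Contains('\t')); // True: tabs survive
```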
|
# ? Apr 8, 2021 22:14 |
|
Scott Hanselman is teaching C# to the kids on TikTok this month - a 1-minute video every day for 30 days. https://twitter.com/shanselman/status/1380047715472568324 I don't know how effective it will be, but I really admire the optimism.
|
# ? Apr 8, 2021 22:57 |
|
I have asked around and I haven't heard of or come up with a "good" way to use nullable reference types when it comes to serialization of POCOs/Records. ASP.NET is going to serialize some JSON into an object using System.Text.Json. It seems like your options are to put nullable reference types all the way down your stack, or to make a "DirtyNullableClass" and "CleanNotNullableClass" for all your POCOs.
|
# ? Apr 9, 2021 13:54 |
|
Calidus posted:I have asked around and I haven't heard of or come up with a "good" way to use nullable reference types when it comes to serialization of POCOs/Records. ASP.NET is going to serialize some JSON into an object using System.Text.Json. It seems like your options are to put nullable reference types all the way down your stack, or to make a "DirtyNullableClass" and "CleanNotNullableClass" for all your POCOs. Otherwise, in cases where you're not using reflection for serializing and you're just instantiating the object in your own code normally, you can use init-only properties for that now. It's weird to compare this to f# where records are immutable by default and the slightly bizarre/ugly but effective solution for serialization is to add a special [<CLIMutable>] attribute which makes the object mutable to non-f# code while still keeping it immutable to f#. However, this does remind me that someone in the python thread or somewhere was asking a similar question a while ago about automatically having mutable/immutable versions of the same classes at compile time, and it's interesting to note that this is something that can actually be handled by typescript's type system but can't be done in c# without source generators or something like that. mystes fucked around with this message at 16:16 on Apr 9, 2021 |
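For reference, the init-only pattern mentioned here looks like this (a sketch, C# 9+; the `Customer` type and its values are hypothetical - System.Text.Json can populate init-only properties during deserialization, but they can't be reassigned afterwards):

```csharp
using System;
using System.Text.Json;

// Deserialization works; later mutation does not compile.
var c = JsonSerializer.Deserialize<Customer>("{\"Name\":\"Ada\",\"Age\":36}")!;
Console.WriteLine(c.Name); // Ada
// c.Age = 37; // CS8852: init-only property can only be set in an initializer

public record Customer
{
    public string Name { get; init; } = "";
    public int Age { get; init; }
}
```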
# ? Apr 9, 2021 16:07 |
|
Having an issue with Blazor WASM and autofocus. I know that just adding autofocus to the input won't work because of how the browser processes it. I found a method that works for Blazor Server, and am trying to do the same thing on WASM with no luck. The input: code:
C# code:
JavaScript code:
(I know it's a minor thing, going to a web form and having the focus not be in the first input is a HUGE pet peeve of mine.)
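In case it helps a reader following along, here is a hedged sketch of the pattern that usually works for this (not the poster's actual code, which didn't survive above): grab an `ElementReference` and focus it in `OnAfterRenderAsync`. On .NET 5+ there is a built-in `ElementReference.FocusAsync`, so no custom JS is needed; `myFocusHelper` below is a hypothetical JS function for older versions.

```csharp
// Markup side (in the .razor file): <input @ref="firstInput" />
private ElementReference firstInput;

protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        // .NET 5+: built-in helper, no custom JS required.
        await firstInput.FocusAsync();

        // Pre-.NET 5 fallback via JS interop (myFocusHelper is hypothetical):
        // await JS.InvokeVoidAsync("myFocusHelper", firstInput);
    }
}
```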
|
# ? Apr 9, 2021 18:57 |
|
epswing posted:JFC you're probably right https://forum.linqpad.net/discussion/1150/how-do-i-get-monospaced-results I think that still works - haven't tried it in 6 yet
|
# ? Apr 9, 2021 23:03 |
|
Anyone have a good WinForms refresher course/video/blog post to recommend? It’s been over 4 years since I have looked at one.
|
# ? Apr 14, 2021 21:00 |
|
New Yorp New Yorp posted:FWIW We have an application that uses Serverside Blazor and it's a constant shitshow mess. I'd recommend WASM. What was the driver behind switching from WASM to server side Blazor? Are there WASM pitfalls that server solves?
|
# ? Apr 14, 2021 22:36 |
|
TheBlackVegetable posted:What was the driver behind switching from WASM to server side Blazor? Are there WASM pitfalls that server solves? Wasm was still in preview and someone didn't like that. The switch caused a ton of technical debt and ongoing problems we're still fighting a year later.
|
# ? Apr 14, 2021 22:44 |
|
Blazor seems like such an unproven and untested technology I'm surprised people are building production apps with it. Gives me webforms vibes, despite my dislike for javascript.
|
# ? Apr 14, 2021 22:54 |
|
Microsoft needs a product to run on Blazor for me to really buy in.
|
# ? Apr 15, 2021 01:03 |
|
I feel like the major selling point of Blazor is, 'It's not Javascript!'. Honestly, that's a pretty compelling sales pitch.
|
# ? Apr 15, 2021 01:12 |
|
It’s not JavaScript until you actually need JavaScript.
|
# ? Apr 15, 2021 02:01 |
|
Calidus posted:It’s not JavaScript until you actually need JavaScript. Yeah transpiling is great until you spend more time dealing with javascript interop than you would have spent just writing javascript.
|
# ? Apr 15, 2021 02:05 |
|
mystes posted:Yeah transpiling is great until you spend more time dealing with javascript interop then you would have spent just writing javascript. Blazor generally works really well and has required minimal JS interop thus far. Also, Blazor doesn't transpile to JS. It's WebAssembly. New Yorp New Yorp fucked around with this message at 04:12 on Apr 15, 2021 |
# ? Apr 15, 2021 04:09 |
|
Anyone using serilog? I’m currently logging from my 2 api services to azure blob storage. It works great, but I’d love a way to switch files at midnight. For example, my store API currently logs to “apilog-yyyyMMdd” which is cool when the service starts, but if it runs for a week without a restart, then the whole week’s logs will be in that file. I haven’t found a way to do it in serilog itself, so I suppose I could see if there’s a way to schedule regular restarts for my app services, which seems less than ideal. Alternatively, I could write a function that reads the current log files and splits them out into separate files based on the time stamps on the log entries, perhaps into a separate container. Is there a better way? TIA
|
# ? Apr 23, 2021 03:53 |
|
Both the file sink and the azure blob storage sink support rolling log files. https://github.com/serilog/serilog-sinks-file https://github.com/chriswill/serilog-sinks-azureblobstorage
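For example, with the file sink (sketch; `rollingInterval` is a real option on `WriteTo.File` - the sink inserts the date into the file name, e.g. apilog-20210423.txt, and switches files at midnight; the blob storage sink has a similar setting):

```csharp
using Serilog;

// Roll to a new log file daily instead of one ever-growing file.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("apilog-.txt", rollingInterval: RollingInterval.Day)
    .CreateLogger();

Log.Information("Service started");
Log.CloseAndFlush();
```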
|
# ? Apr 23, 2021 04:15 |
|
LongSack posted:Anyone using serilog? I’m currently logging from my 2 api services to azure blob storage. It works great, but I’d love a way to switch files at midnight. scheduling automatic restarts for long running services is way easier than getting woken up at 3 in the morning because someone across an ocean found a handle leak that takes about a month to become a real problem - i would just do it for anything long-running when i can get away with it.
|
# ? Apr 23, 2021 08:08 |
|
God I wish they hadn't used "{ Value }" as the default ToString representation for records, it so easily causes conflicts with string.Format which will throw FormatException if you pass in a string that contains a ToString'd record. Curly braces are like the only non-esoteric symbol you should be careful with in C# string handling.
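A minimal repro of the complaint (the `Point` type name is arbitrary):

```csharp
using System;

var p = new Point(1, 2);
Console.WriteLine(p); // Point { X = 1, Y = 2 }

// Feeding that ToString() output to string.Format as the *format* string
// throws, because "{ X" is not a valid format item.
try
{
    string.Format(p.ToString());
}
catch (FormatException)
{
    Console.WriteLine("FormatException"); // this is what gets hit
}

public record Point(int X, int Y);
```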
|
# ? Apr 23, 2021 12:24 |
|
Mata posted:God I wish they hadn't used "{ Value }" as the default ToString representation for records, it so easily causes conflicts with string.Format which will throw FormatException if you pass in a string that contains a ToString'd record. Curlybraces are like the only non-esoteric symbol you should be careful with in C# string handling. Is it normal to pass dynamically generated strings as the first argument to String.Format in c#? It might not be an actual vulnerability like printf but it still seems like asking for trouble?
|
# ? Apr 23, 2021 12:34 |
|
mystes posted:Is it normal to pass dynamically generated strings as the first argument to String.Format in c#? It might not be an actual vulnerability like printf but it still seems like asking for trouble? What can I say, I go looking for trouble, and I find it
|
# ? Apr 23, 2021 14:19 |
|
spaced ninja posted:Both the file sink and the azure blob storage sink support rolling log files. Thank you! I missed that somehow.
|
# ? Apr 23, 2021 22:37 |
|
How does the VS Test task (VSTest@2 in Azure Pipelines) locate and run DLLs once it has been invoked? I'm not talking about the test selection process, but about what happens once vstest.console.exe has actually been invoked.

We have a unit test that checks that types from a certain assembly can be loaded using reflection. This relies on identifying the assembly containing our code that's running, constructing the expected path to the other assembly (it has a known name and should sit alongside the reflection-using assembly in the same directory), and loading it. But when it runs on our build agent, the reflection-using assembly that gets loaded is apparently not the one that should be loaded based on the paths that were passed to vstest.console.exe, because we get a "file not found" exception that names a completely different folder. The folder it mentions is the build output folder of an unrelated project that is part of the build; there is a copy of the reflection-using assembly in there, but there's no apparent reason why that copy would be used when there's also a copy alongside the unit test DLL.

To clarify: the project containing the code that uses reflection is R. The DLL that is supposed to be loaded by reflection is L. The unit test project is T. The unrelated project is X. vstest.console.exe gets invoked like this:

vstest.console.exe "C:\agent\_work\1\s\SolutionName\Release\T.dll" <some other arguments>

SolutionName\Release\ contains T.dll, R.dll and L.dll. SolutionName\X\bin\Release\ contains R.dll, T.dll (I don't know why) and X.dll. The error message says that it was trying to load SolutionName\X\bin\Release\L.dll, which means that the copy of R.dll being used is the one in SolutionName\X\bin\Release\, whereas I would expect it to be the one in SolutionName\Release\ because that's where vstest.console.exe was told to look for T.dll and should have been able to find both T.dll and R.dll.

It doesn't make any difference whether the reflection code uses Assembly.GetExecutingAssembly(), Assembly.GetCallingAssembly(), Assembly.GetAssembly(), or the .Assembly property of a suitable Type object. The task is open source, but I have no idea what I'm looking for in order to understand this behaviour. And I don't know TypeScript.
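For readers following along, the path construction being described is roughly this sketch ("L.dll" is the placeholder sibling name from the post, not a real file). Assembly.Location reports where the assembly was actually loaded from, which is exactly why it exposes the wrong-folder problem here:

```csharp
using System;
using System.IO;
using System.Reflection;

// Take the directory of the currently executing assembly and build the
// path to a known sibling DLL that should live next to it.
var anchor = Assembly.GetExecutingAssembly().Location;
var dir = Path.GetDirectoryName(anchor)!;
var siblingPath = Path.Combine(dir, "L.dll");
Console.WriteLine(siblingPath);

// Loading it would throw FileNotFoundException if the sibling is absent:
// var sibling = Assembly.LoadFrom(siblingPath);
```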
|
# ? Apr 27, 2021 21:16 |
|
I have a web api sending a file to another web api via REST. It's Base64 encoding a byte array that gets decoded by the receiver. Is this necessary? Can't I just send a byte array?
|
# ? May 4, 2021 08:47 |
|
Boz0r posted:I have a web api sending a file to another web api via REST. It's Base64 encoding a byte array that gets decoded by the receiver. Is this necessary? Can't I just send a byte array? You can. You can send the data using the multipart/form-data MIME type, which has the added benefit of also supporting sending other data alongside the file. If the receiving end is using ASP.NET Core, it can easily implement this using the IFormFile interface. On the dispatching end you can basically write raw bytes to the request stream, with a small preamble before the payload in the case of multipart messages. Not sure how much of the work has to be done manually on the client side in .NET, but it shouldn't be hard. If it's possible you would ideally use streams instead of byte arrays through the whole operation, to avoid having to read the entire file into the process memory.
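On the dispatching side, building such a request with streams can be sketched like this (URL and field names are made up; a MemoryStream stands in for File.OpenRead so the snippet is self-contained):

```csharp
using System;
using System.IO;
using System.Net.Http;

// Build a multipart/form-data body from a stream plus extra fields,
// so the file bytes are never Base64-encoded or buffered as a string.
using var fileStream = new MemoryStream(new byte[] { 1, 2, 3 });
using var form = new MultipartFormDataContent();
form.Add(new StreamContent(fileStream), "file", "report.pdf");
form.Add(new StringContent("some metadata"), "comment");

Console.WriteLine(form.Headers.ContentType!.MediaType); // multipart/form-data

// Sending it (endpoint URL is hypothetical):
// using var http = new HttpClient();
// var response = await http.PostAsync("https://example.invalid/api/upload", form);
```

On an ASP.NET Core receiver, an action parameter of type IFormFile named "file" would bind to this automatically.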
|
# ? May 4, 2021 09:53 |
|
If all you want is to send this file as the request body then you can even just use StreamContent and on the receiving side use Request.BodyReader to read this data.
|
# ? May 5, 2021 01:41 |
|
Right now the file is a string in a DTO class. We use nswag to generate clients based on our controllers, and in this case the receiving end is .net 4.8.
|
# ? May 5, 2021 08:45 |
|
How do you protect secrets client-side?

Scenario: I’m rewriting my LLC’s web site. The current version uses pretty much straight HTML/CSS (Bootstrap)/JS with a little PHP back end for sending emails from the “Contact Us” page (it runs on a Digital Ocean droplet). The new site is using Blazor server, no JS, and from-scratch CSS (no Bootstrap). I’m replacing the PHP back end with an API running as an Azure web app. I’m protecting the API with a 504-bit API key (which, since all of the key characters are printable, has significantly fewer than 504 bits of entropy). Eventually, I will generalize the API so that different API keys result in different behavior (for example, different recipient, etc.).

My first thought for the rewrite was to do it in Blazor WASM, but I realized pretty early on that I had no idea how to protect the API key. The whole point of the key is to prevent randos from using my mail API (and my Mailgun account) as a spam relay. With Blazor server running in Azure, I can use Azure App Settings to store the keys, so there is no exposure to the client. With Blazor WASM, I couldn’t think of a way to keep the key (and thus access to my mail relay) out of the hands of potential bad actors.

I understand that for login scenarios it’s less of a problem, because if a user authenticates and gets a token back, then having access to that token by the end user doesn’t give them any access they don’t already have by virtue of being the authenticated user. However, in my case, the user is not authenticating at all, so I really do need to protect the API key.

So is there a good way to keep a secret like an API key when doing client-side apps?
|
# ? May 11, 2021 04:04 |
|
No
|
# ? May 11, 2021 04:14 |
|
OK, so I’m not missing something obvious. Good to know.
|
# ? May 11, 2021 05:27 |
|
You would want to not expose to the client a generic mail-sending API that can be abused.
|
# ? May 11, 2021 12:39 |
|
If the user of the website isn't authenticated, the backend is already a public API for all intents and purposes. Keep all your secrets server-side and just make your API endpoints open. You can add rate limiting to make it harder to abuse, either via middleware or via an external service like Cloudflare.
|
# ? May 11, 2021 13:09 |
|
LongSack posted:How do you protect secrets client-side? Just don't design it as a generic "send email" endpoint where the caller specifies To/CC/BCC/From/Subject/Body; make sure it's just a "Contact Us" API where the inputs are your name/how we can contact you/what you want to talk about, and put all the mail configuration/generation server-side. That way all you have to worry about from a malicious user is spamming the hell out of your customer service department (or sales, or whoever the recipient is), which is an easier problem to solve. If you want to make it able to route emails to multiple recipients later, go with something like a "destination" key value that gets mapped server-side to an email address.
|
# ? May 11, 2021 15:13 |
|
thumper57 posted:Just don't design it as a generic "send email" endpoint where the caller specifies To/CC/BCC/From/Subject/Body; make sure it's just a "Contact Us" API where the inputs are your name/how we can contact you/what you want to talk about, and put all the mail configuration/generation server-side. That way all you have to worry about from a malicious user is spamming the hell out of your customer service department (or sales, or whoever the recipient is), which is an easier problem to solve. Yep. That's what I did. The client packages up information entered on the "Contact Us" form and passes it to the email endpoint, which pulls the sender, recipient, subject from the app settings, and the stuff the user entered all goes into the body of the email. quote:If you want to make it able to route emails to multiple recipients later, go with something like a "destination" key value that gets mapped server-side to an email address. That's what I was referring to when I mentioned using multiple API keys later, which would allow the sender/subject/etc. to change depending on the API key sent. Could use a different mechanism, which would allow me to change the API key as needed without re-keying the recipients. NihilCredo posted:You can add rate limiting to make it harder to abuse, either via middleware or via an external service like Cloudflare. I'm Using the AspNetCoreRateLimit package for rate limiting. It seems to work pretty well.
|
# ? May 11, 2021 18:38 |
|
I'm working on writing a wrapper library for an undocumented public facing API, but am confused about the use of nullable. I have DTO classes to map the JSON deserialization, and I transfer these into models which are then sent to the user. While working on an endpoint, I received a null reference error for one of the endpoint properties, but only on a certain date. Using <Nullable>enable</Nullable> in my .csproj makes tons of warnings appear in my DTOs and elsewhere. I'm just wondering, since I don't know exactly what might be null and what isn't in the API, do I just treat everything as nullable? I had been doing this but I'm not sure if this is the correct route. It also adds a ton of ? all over my code. The alternative I had recommended to me was to make everything non-nullable, and only use nullable types for values which can be null (I'll need to do a lot of trial and error through this route). Supersonic fucked around with this message at 01:27 on May 23, 2021 |
# ? May 23, 2021 01:15 |
|
The purpose of nullable reference types is to help you more easily write code that does not throw NullReferenceException. Yes, if you work with data models coming from an external source, you can treat everything as nullable. This will mean many ? in many places. But that is correct - every place you have to add a ? and check for nullness in logic is a place that could otherwise throw NullReferenceException. Alternatively, you can skip marking the fields nullable. However, nullable reference type annotations are a feature for you - the code author - and even if you mark a reference field as non-nullable, this does not mean it is not null on the wire. If you want to avoid NullReferenceExceptions then you could, for example, validate all the deserialized DTOs to ensure that non-nullable fields do not actually contain null values. Alternatively, you could just accept NullReferenceExceptions if some data is unusual. Or you can add ? operators and/or null checks everywhere that the potentially nullable data is used. Personally, I would just validate the DTOs after deserialization and throw when some mystery null values appear, only marking as nullable the "I expect this to sometimes be null" fields.
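A sketch of the "validate after deserialization" option (type and field names are hypothetical): the field we believe is always present is declared non-nullable, and we fail fast if the wire data disagrees.

```csharp
using System;
using System.Text.Json;

// Name is annotated non-nullable, but the annotation alone can't stop
// null arriving from the wire - the payload below simply omits it.
var dto = JsonSerializer.Deserialize<UserDto>("{\"Age\":30}")!;
Console.WriteLine(dto.Name is null); // True - despite the annotation

// Validate immediately after deserialization, before anything relies
// on the annotations being truthful.
try
{
    Validate(dto);
}
catch (InvalidOperationException e)
{
    Console.WriteLine(e.Message);
}

static UserDto Validate(UserDto d) =>
    d.Name is null
        ? throw new InvalidOperationException("Name missing from payload")
        : d;

public record UserDto(string Name, int? Age);
```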
|
# ? May 23, 2021 11:13 |
|
EssOEss posted:The purpose of nullable reference types is to help you more easily write code that does not throw NullReferenceException. Thanks! For now I'm going to treat everything as nullable so I don't need to refactor all the DTOs right now, but I'm going to look into DTO validation in the longer term.
|
# ? May 24, 2021 01:38 |
|
We are having performance problems with a service we depend on, and while they acknowledge that things could be better, they don’t really want to own that they are the problem. So I’ve been trying to investigate with perfmon.exe and perfview.exe, but I’m not that familiar with any of it. The service is ASP.NET running on .NET Framework 4 (just 4) hosted on IIS, nothing special. I’m pretty sure the problems are 1) them abusing the GC by reading all the request and response bodies fully into memory (to log), no streaming, putting a lot of pressure on the LOH, which triggers Gen 2 GCs. I’ve been able to measure how often that happens, but I have no idea how frequent is too much. 2) The bigger problem is probably them using async APIs synchronously e v e r y w h e r e! I’ve used perfview to look at thread pool events, but again I have no idea of what is OK here or not. Then I looked at one of the Thread Time Stacks views, put blocked_time in the find field, pressed enter, and it showed 94% inclusive! Does this mean what I think it means? That they are using the remaining 6% for CPU and network &c, but that 94% of the time in the threads is spent waiting when we really don’t need to? Because that’s just so bad I can’t believe it! Maybe that includes idle threads in the thread pool? Anyone got experience with this?
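For anyone unfamiliar with why "async APIs used synchronously" shows up as blocked time: the sync-over-async call parks a thread-pool thread for the full duration of the wait, and that parked time is what perfview attributes as blocked time. A sketch (Task.Delay stands in for network or database I/O; method names are made up):

```csharp
using System;
using System.Threading.Tasks;

// Sync-over-async: .Result blocks the calling thread until SlowAsync
// completes - perfview sees that entire wait as blocked time.
static string BlockingCall() => SlowAsync().Result;

// Properly awaited: the thread returns to the pool during the delay.
static async Task<string> NonBlockingCall() => await SlowAsync();

static async Task<string> SlowAsync()
{
    await Task.Delay(100); // stand-in for network/database I/O
    return "done";
}

Console.WriteLine(BlockingCall());
Console.WriteLine(await NonBlockingCall());
```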
|
# ? May 31, 2021 16:11 |