|
So today I learned that HttpWebResponse throws an error on 404 and you have to wrap the whole thing in try/catch and you can use the exception's response code in the catch segment. CHECKING TO SEE IF THE WEBSITE IS THERE IS EXACTLY WHAT I'M USING YOU FOR, YOU STUPID CALL, WHY DO YOU NOT KNOW HOW TO HANDLE loving 404S I just needed to vent.
|
# ? Sep 8, 2015 22:20 |
|
CapnAndy posted:So today I learned that HttpWebResponse throws an error on 404 and you have to wrap the whole thing in try/catch and you can use the exception's response code in the catch segment. It also throws an exception on 304 Not Modified! I hated seeing "The remote server returned an error: 304 (Not Modified)" because the server wasn't in the wrong. If you have the option, using HttpClient in System.Net.Http (.NET 4.5, I think?) is much better.
|
# ? Sep 8, 2015 22:28 |
|
Lone_Strider posted:It also throws an exception on 304 Not Modified! I hated seeing "The remote server returned an error: 304 (Not Modified)" because the server wasn't in the wrong. If you have the option, using HttpClient in System.Net.Http (.NET 4.5, I think?) is much better. If the exception has a recognizable response code, why are you throwing exceptions? You know exactly what those responses are, you stupid call.
|
# ? Sep 8, 2015 22:29 |
|
Yeah it's an insanely stupid system. I think the ASP.NET 5 stuff doesn't do this. I also love returning a 503 by throwing a loving exception. Exceptions are maybe the most misused concept in high-level languages.
|
# ? Sep 9, 2015 00:59 |
|
ASP.NET 5 is much smarter about pretty much everything involving response codes.
|
# ? Sep 9, 2015 01:21 |
|
These are textbook examples of what Eric Lippert calls vexing exceptions. I must admit I've made the same mistake in my own code many times, and I continue to do it sometimes because I'm stupid and forget, but I also have yet to design APIs used by what might be literally millions of developers.
|
# ? Sep 9, 2015 11:26 |
|
I have some stream processing questions. I'm just muddling my way through using docs/examples. I don't have the code in front of me at the moment, so please forgive syntax errors. Say I have a method which writes from input to output: C# code:
C# code:
C# code:
Edit: now that I'm looking at all these using statements, I get the feeling that by saying using (var writer = new StreamWriter(output)), the StreamWriter will close the underlying Stream before the method returns, and when I use output later, it blows up. epswing fucked around with this message at 21:56 on Sep 9, 2015 |
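The original code blocks didn't survive the archive, but the suspicion in the edit is right and easy to demonstrate. A minimal sketch (hypothetical names, not the poster's actual code): disposing a StreamWriter closes the stream it wraps, so the caller's stream is dead afterwards.

```csharp
using System;
using System.IO;
using System.Text;

class StreamWriterDisposeDemo
{
    // Hypothetical reconstruction of the problem described above: the using
    // block disposes the StreamWriter, which by default also closes the
    // underlying stream, so later access to `output` throws.
    static void Process(Stream input, Stream output)
    {
        using (var reader = new StreamReader(input))
        using (var writer = new StreamWriter(output))
        {
            writer.Write(reader.ReadToEnd());
        } // <-- writer.Dispose() closes `output` here
    }

    static void Main()
    {
        var input = new MemoryStream(Encoding.UTF8.GetBytes("hello"));
        var output = new MemoryStream();
        Process(input, output);
        try
        {
            output.Seek(0, SeekOrigin.Begin); // throws: the stream is closed
        }
        catch (ObjectDisposedException)
        {
            Console.WriteLine("stream was closed");
        }
    }
}
```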
# ? Sep 9, 2015 21:54 |
|
Nope, the streams get disposed of when the code exits the using block (assuming you're not manually disposing of them in your methods). You're correct in that you need to start each Process method with a seek on the input stream back to its beginning, assuming the streams support seeking. Edit: whoops, you're right, you need to get rid of the using blocks in your Process methods. They'll kill the streams for sure. Potassium Problems fucked around with this message at 22:27 on Sep 9, 2015 |
# ? Sep 9, 2015 22:25 |
|
Right! Makes sense. Removing the using statements from Process methods did the trick.
epswing fucked around with this message at 22:46 on Sep 9, 2015 |
# ? Sep 9, 2015 22:29 |
|
epalm posted:Right! Makes sense. Removing the using statements from Process methods did the trick. While that will work, that's not the most appropriate solution. Using IDisposables without using blocks looks like a bad thing, even though it may not be. Future developers may be tempted to wrap them with a using, which will cause a non-obvious bug. My preferred solution is to use this StreamReader overload and pass true to the last parameter, leaveOpen. There is a similar StreamWriter overload.
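For concreteness, a sketch of the leaveOpen approach being recommended, using the four-argument StreamWriter constructor that .NET 4.5 added (greeting text and method names are made up for the example):

```csharp
using System;
using System.IO;
using System.Text;

class LeaveOpenDemo
{
    static void WriteGreeting(Stream output)
    {
        // Passing leaveOpen: true means Dispose() flushes the writer but
        // leaves the underlying stream open for the caller to keep using.
        using (var writer = new StreamWriter(output, Encoding.UTF8, bufferSize: 1024, leaveOpen: true))
        {
            writer.Write("hello");
        }
    }

    static void Main()
    {
        var stream = new MemoryStream();
        WriteGreeting(stream);
        stream.Position = 0; // still usable: the stream was left open
        Console.WriteLine(new StreamReader(stream).ReadToEnd()); // prints hello
    }
}
```

You keep the using block (so future developers aren't tempted to add one), and the stream's lifetime stays with its owner.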
|
# ? Sep 10, 2015 14:17 |
|
Can someone explain DateTime.Now to me? I thought it was just the number of ticks since the epoch stored in a long, but I'm getting key collisions when trying to use it as a key in a dictionary. This code is from an educational project at work, so I'm required to be able to return the collection items in the order they were added. Using an array or linked list isn't allowed because removal is too slow, so instead I have to use a dictionary. I'm aware that it's the kind of thing that doesn't matter in a typical use case, but that's the "discovery" they want you to make on this project (if you use a list, you fail the unit tests for efficiency). Basically, we have this item class and this inventory class, and the inventory needs to be able to efficiently loop over the items in both order added and alphabetical order by name. To do this we basically have two dictionaries: one with names as keys, and one with *something* as keys that keeps things in chronological order. Anyway, here's the code I eventually used that works: C# code:
C# code:
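The posted code is missing from the archive, but the collision and the counter fix described later in the thread can be sketched like this (hypothetical item names and dictionary choice, not the project's actual classes):

```csharp
using System;
using System.Collections.Generic;

class InsertionOrderDemo
{
    static void Main()
    {
        // The collision: DateTime.Now only advances every ~15 ms, so many
        // calls in a tight loop return the same tick value.
        var seen = new HashSet<long>();
        for (int i = 0; i < 1000; i++) seen.Add(DateTime.Now.Ticks);
        Console.WriteLine($"distinct tick values out of 1000: {seen.Count}");

        // The fix: a monotonically incrementing counter is a collision-free
        // insertion-order key, kept alongside the name-keyed dictionary.
        long nextKey = 0;
        var byOrder = new SortedDictionary<long, string>();
        var byName = new Dictionary<string, long>();
        foreach (var name in new[] { "sword", "apple", "map" })
        {
            byOrder[nextKey] = name;
            byName[name] = nextKey;
            nextKey++;
        }
        Console.WriteLine(string.Join(",", byOrder.Values)); // prints sword,apple,map
    }
}
```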
|
# ? Sep 10, 2015 18:18 |
|
Would https://msdn.microsoft.com/en-us/library/System.Collections.Specialized.OrderedDictionary(v=VS.110).aspx help or is the point that you're supposed to make something like it yourself?
|
# ? Sep 10, 2015 18:30 |
|
Munkeymon posted:Would https://msdn.microsoft.com/en-us/library/System.Collections.Specialized.OrderedDictionary(v=VS.110).aspx help or is the point that you're supposed to make something like it yourself? Yeah, we have to use the dictionary class provided. I'm more just confused about DateTime.Now. I already handed in a working solution with the incrementing long. It seems that ticks are less granular than I thought. I just wrote this code: C# code:
code:
|
# ? Sep 10, 2015 18:33 |
|
DateTime isn't intended for high-resolution timing. Don't use it for that. It's system-dependent. Per http://stackoverflow.com/questions/2143140/c-sharp-datetime-now-precision, quote:DateTime's precision is somewhat specific to the system it's being run on. The precision is related to the speed of a context switch, which tends to be around 15 or 16 ms. (On my system, it is actually about 14 ms from my testing, but I've seen some laptops where it's closer to 35-40 ms accuracy.) Also keep in mind that DateTime.Now represents a dependency that makes it impossible to effectively unit test your code.
|
# ? Sep 10, 2015 18:36 |
|
LeftistMuslimObama posted:What's the granularity of a tick? https://msdn.microsoft.com/en-us/library/system.datetime.ticks(v=vs.110).aspx posted:A single tick represents one hundred nanoseconds or one ten-millionth of a second. There are 10,000 ticks in a millisecond, or 10 million ticks in a second. But there's no guarantee that it's monotonically increasing, which is what you need because a modern CPU can do more than 10 million things in a second. I don't see anything like that available without dipping into the Windows API. https://msdn.microsoft.com/en-us/library/ms724408(VS.85).aspx https://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
|
# ? Sep 10, 2015 18:42 |
|
Cool. I think for this just maintaining a counter myself was the simplest/best choice then. I have this problem where I feel like my simple solutions are too obvious and think there has to be a more "elegant" way to do things. I run into walls that way much too often. I need to get more confident in my solutions.
|
# ? Sep 10, 2015 18:45 |
|
Yeah, I made a cache with a size limit for fun a while back and tried to do the same thing, and the tests failed for the same reason, but I didn't even think about using a simple incrementing number first. It happens.
|
# ? Sep 10, 2015 18:53 |
|
This reminds me of a funny story about a program written long ago that stupidly used DateTime.Now client-side to timestamp when a new job was added by a user (yes, I was the culprit who wrote it). The PCs were domain joined and faithfully got their NTP syncs from the domain controller. As a result, everything was fine for a very long time, and then one day a PC decided to go off the playground and run its time as 1970 (or something really old) for reasons I don't remember. Because the application also loaded jobs into a schedule by the scheduling assistant on another PC based on that timestamp, all of the jobs added by the first user that day showed up first and were marked as high priority and over 30 years overdue. It almost broke the system, but as humans we often question oddities like that: the scheduling assistant brought it up to the manager, who then informed me, and I had to manually adjust the timestamps. An update was made to use SQL's version of it during the insert instead. Since they only had one SQL server for that application and it had a pretty reliable time sync, it was felt "good enough for the time being". I try not to think about it anymore, but back then I must have spent days trying to come up with a simple but reliable way of coordinating time over an entire system/application and/or verifying it, to appease the managers that "Yes, we can guarantee something like this will never happen again" - which is ludicrous because you can't guarantee that at all. Storing a master timestamp in a database, checking offsets, etc... then I realized just how fragile the concept of time is and my brain almost melted. Now I don't think about it, just use the best case for the scenario, and hope nothing stupid ever happens. ...And if it does, I shift the blame to system mismanagement.
|
# ? Sep 10, 2015 19:12 |
|
crashdome posted:I realized just how fragile the concept of time is and my brain almost melted. On the stupid DAQ I've built up now, I'm using this snippet to get millis-since-epoch: string timestamp = (DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalMilliseconds.ToString(); In one sense sub-15ms precision doesn't matter: the sensor has its own rate, and I can detect which sample frame it's returning. I'm logging another thing, but it's on the same PC and events are several seconds long. But the discussion upthread gives me some doubt about the sensitivity to a context switch. Is it having to switch over to get the time, or can I assume that snippet, called in a 200ms timer, is relatively accurate?
|
# ? Sep 10, 2015 21:13 |
|
I found this extension a while back when I needed to get an epoch date. In my case I'm lucky in that I don't care about the time, just converting the date to epoch: code:
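The extension itself was lost from the archive; a hedged reconstruction of the idea described (method and class names are guesses): subtract the Unix epoch from the date portion only, ignoring the time of day.

```csharp
using System;

static class EpochExtensions
{
    // Hypothetical reconstruction of the extension described above: seconds
    // since the Unix epoch for the date alone (time of day discarded).
    public static long ToEpochDate(this DateTime value)
    {
        var epoch = new DateTime(1970, 1, 1);
        return (long)(value.Date - epoch).TotalSeconds;
    }
}

class Program
{
    static void Main()
    {
        // Midnight of 2015-09-10, regardless of the time passed in.
        Console.WriteLine(new DateTime(2015, 9, 10, 21, 26, 0).ToEpochDate()); // prints 1441843200
    }
}
```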
|
# ? Sep 10, 2015 21:26 |
|
crashdome posted:... then I realized just how fragile the concept of time is and my brain almost melted. Oh lord, I'm taking a course on this and it gets out of hand reeeal quick, if you're curious: http://plato.stanford.edu/entries/logic-temporal/#LinTimTemLogLTL
|
# ? Sep 10, 2015 21:46 |
|
Munkeymon posted:But there's no guarantee that it's monotonically increasing, which is what you need because a modern CPU can do more than 10 million things in a second. I don't see anything like that available without dipping into the Windows API. https://msdn.microsoft.com/en-us/library/ms724408(VS.85).aspx https://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx System.Diagnostics.Stopwatch uses QueryPerformanceFrequency etc. if it's available, and has static properties for whether it's a high resolution timer and how many stopwatch ticks are in a second. On this PC it claims to be high resolution with ~3.3 million stopwatch ticks per second. Even that isn't high enough resolution for the following loop to run forever (it's terminating after 2-3 iterations usually): code:
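The loop in question is missing from the archive; a plausible reconstruction (with a safety cap added so it can't actually run forever on platforms where Stopwatch resolution is higher): it keeps going only while each timestamp read differs from the previous one, so it stops as soon as two consecutive reads land in the same stopwatch tick.

```csharp
using System;
using System.Diagnostics;

class StopwatchResolutionDemo
{
    static void Main()
    {
        Console.WriteLine($"IsHighResolution: {Stopwatch.IsHighResolution}, " +
                          $"Frequency: {Stopwatch.Frequency} ticks/sec");

        long last = Stopwatch.GetTimestamp();
        int iterations = 0;
        while (iterations < 1_000_000) // cap: not in the original description
        {
            long now = Stopwatch.GetTimestamp();
            if (now == last) break; // two reads fell inside the same tick
            last = now;
            iterations++;
        }
        Console.WriteLine($"stopped after {iterations} iterations");
    }
}
```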
|
# ? Sep 10, 2015 23:30 |
|
Bognar posted:My preferred solution is to use this StreamReader overload and pass true to the last parameter, leaveOpen. There is a similar StreamWriter overload. Thanks for this nugget of wisdom.
|
# ? Sep 11, 2015 02:15 |
|
Azure question: After adding a 2nd Web App (which was later deleted), a new resource entered the picture called "Default1". It appears to be an "App Service plan", though I'm not sure what that means. It has CPU and Memory metrics. My lone Web App seems to point to Default1. My question is, what exactly is this App Service plan? If I create more Web Apps, will they reside under Default1? Should WebApp-to-AppServicePlan be 1:1?
|
# ? Sep 11, 2015 02:20 |
|
i'm very glad to see this issue being worked on: NuGet.exe install 3.2 makes multiple unnecessary network calls. When your latency to the nuget servers is 300-500 ms, the number of calls makes a very large difference.
|
# ? Sep 11, 2015 12:17 |
|
this has probably already been answered in this thread, but could someone give me a simple way of using functions from an F# library in any C# program? i've been getting into F# and love it, but i've yet to actually use it for anything practical.
|
# ? Sep 12, 2015 01:18 |
|
Have you tried just adding the F# library as a reference? I'm doing it at work and there isn't anything special. https://stackoverflow.com/questions/478531/call-f-code-from-c-sharp
|
# ? Sep 12, 2015 01:24 |
|
pepito sanchez posted:this has probably already been answered in this thread, but could someone give me a simple way of using functions from an F# library in any C# program? i've been getting into F# and love it, but i've yet to actually use it for anything practical. https://github.com/cmgross/NasgaMe/blob/master/NasgaMe/Utility/BusinessLayer.cs Check the last method in the file, it calls my F# web scraper. It was my first time using F#.
|
# ? Sep 12, 2015 01:54 |
|
Just to be sure I'm not missing the entire point: Is it true that there's no way that a generic class can be extended more than once? i.e. in C++ it's possible to do so with specialization to create terminal classes, but it looks like in .NET languages, there's no way to do that in a way that isn't circular, right? OneEightHundred fucked around with this message at 04:08 on Sep 13, 2015 |
# ? Sep 13, 2015 04:03 |
|
could you please give an example of the C++ idiom you mean?
|
# ? Sep 13, 2015 09:02 |
|
I just wanted to say that Xamarin is awesome and that it is a crying shame that they have to show Microsoft how you do cross platform development. WTF is up with Cordova and the other stuff? Seems like a waste of resources when Xamarin, a much smaller company, has ported XAML to about every device and you can use it to build pretty much awesome apps straight off. I just created a new project and loaded in my PCL from my WinRT app, hooked up the viewmodels from the PCL into an IOS Xamarin forms project and it works...... drag to refresh everything. Pretty amazing. Now I am rebuilding my WinRT app to a Xamarin forms store application and have the benefit of it running on pretty much any device I want. Mr Shiny Pants fucked around with this message at 15:11 on Sep 13, 2015 |
# ? Sep 13, 2015 15:03 |
|
OneEightHundred posted:Just to be sure I'm not missing the entire point: Not sure if I understood you correctly, but do you mean something like this? https://dotnetfiddle.net/YnLcHc
|
# ? Sep 13, 2015 15:50 |
|
OneEightHundred posted:Just to be sure I'm not missing the entire point: code:
.NET generics don't allow for multiple implementations of a generic class due to how the type system works. Your generic class has to have a unique name. That said, you might be able to mock up a similar working concept with interfaces and a factory that's able to figure out which implementation to pick for which specific case. The implementations would have to be named differently, though. Keep in mind that templates and generics are similar but very different at the same time.
|
# ? Sep 13, 2015 15:58 |
|
Gul Banana posted:could you please give an example of the C++ idiom you mean? code:
As far as I can tell, doing this or anything like it isn't possible without dependent type names, specializations, or deriving from a type parameter, none of which .NET generics allow, which I think rules out any way that a generic could ever inherit from another instance of the same generic. It looks like that's deliberate, but I'm just trying to confirm that's actually the case and there's not some crazy way of doing it that I'm not thinking of. (To be clear: The point of this isn't to implement generic covariance in .NET and I know that .NET 4 supports those, the point is that I'm trying to do something that will fail miserably if this assumption isn't accurate.)
|
# ? Sep 13, 2015 16:43 |
|
i see. probably the closest .NET idiom is its equivalent of curiously recurring templates:

class Base<TSelf> where TSelf : Base<TSelf> { }
class Derived : Base<Derived> { }

i.e. a generic can be *constrained* by itself, but no, they can never inherit from themselves, and there are no associated types/type members. so you should be safe to do whatever odd thing is depending on that.
|
# ? Sep 13, 2015 20:24 |
|
So, let's say I have a SQL table of a few thousand rows that I want to store in memory, as it will be frequently and randomly accessed. The table has only 6-7 relevant columns, all short strings or numbers. In the business logic, each row translates to an instance of a class which isn't quite a POCO, since it features a Parent property that lazily returns the instance identified by the ParentID column, plus a few static methods to find common ancestors and the like. (The fact that the class is self-referential is important, since otherwise I could have made it a struct.) Am I better off: A) loading the table into memory as a DataTable and, when necessary, grabbing the row by the provided Id and instantiating the class from the row (and then throwing it away), or B) immediately loading the table into memory as a big Dictionary of string -> class?
|
# ? Sep 14, 2015 12:35 |
|
I'd say B), because a dictionary of class instances is not going to be any bigger/worse than a DataTable full of DataRows. As a benefit, the Parent property can then also do a lazy but fast lookup in the same dictionary. Just make sure the class properties are read only, so you can't gently caress up your cache by accident.
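A minimal sketch of option B with the lazy Parent lookup described here (class and column names are hypothetical stand-ins for the actual schema):

```csharp
using System;
using System.Collections.Generic;

// Each row becomes an instance that holds a reference to the shared cache,
// so Parent can resolve lazily against the same dictionary. Properties are
// read-only, per the advice above.
class Node
{
    public string Id { get; }
    public string ParentId { get; }
    public string Name { get; }

    private readonly Dictionary<string, Node> _cache;

    public Node(string id, string parentId, string name, Dictionary<string, Node> cache)
    {
        Id = id; ParentId = parentId; Name = name; _cache = cache;
    }

    // Lazy and fast: just a dictionary lookup on first (and every) access.
    public Node Parent =>
        ParentId != null && _cache.TryGetValue(ParentId, out var p) ? p : null;
}

class Program
{
    static void Main()
    {
        var cache = new Dictionary<string, Node>();
        cache["1"] = new Node("1", null, "root", cache);
        cache["2"] = new Node("2", "1", "child", cache);
        Console.WriteLine(cache["2"].Parent.Name); // prints root
    }
}
```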
|
# ? Sep 14, 2015 13:43 |
|
If your goal is to just cache a few thousand objects, then yeah pay the (minor) upfront cost of instantiating everything and loading it into a dictionary.
|
# ? Sep 14, 2015 14:10 |
|
So, a conceptual question about viewmodels in apps. Let's say I have a viewmodel that has a collection, and this collection contains other viewmodels. If you click an item in the collection, a new view is loaded and the selected viewmodel in the collection is the datacontext for the newly opened view. How do you handle resuming of the application in a hierarchy like this? Worst case, the complete viewmodel is unloaded and needs to be re-instantiated, but if the app is paused on a particular view, how do I figure out what I need to load again? Should I just forget about doing this and just reload: saving the state in the event of a pause or memory contention, and restarting the app as it were? Or should I re-architect my models so that I can pick and know which model I need to instantiate in this situation? One thing I can imagine is having IDs in my models, storing them in the event of a pause, and on resume instantiating everything again, getting the ID of the last loaded model, and opening the right view again. This will take quite a lot of re-architecting. Anyone done this before?
|
# ? Sep 15, 2015 05:23 |
|
I have a few cases where I think async/await might improve user experience, but my coworker on the project doesn't think so. The project is legacy Web Forms. When the user clicks a button to save their work, we call a "Save" method on a business layer object and then notify the user that their save went through. code:
code:
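The posted code blocks didn't survive the archive, but one point worth sketching (as a console stand-in, with a hypothetical SaveAsync in place of the real business-layer call): awaiting the save doesn't make the postback return sooner. The user still waits the full duration; what async/await buys the server is a freed-up thread during the wait.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class AsyncSaveDemo
{
    // Hypothetical stand-in for the business layer's Save call.
    static Task SaveAsync() => Task.Delay(200);

    static void Main() => Run().GetAwaiter().GetResult();

    static async Task Run()
    {
        var sw = Stopwatch.StartNew();
        await SaveAsync(); // the thread is free during the wait...
        sw.Stop();
        // ...but the caller still waits the full duration of the save.
        Console.WriteLine(sw.ElapsedMilliseconds >= 150 ? "waited for save" : "??");
    }
}
```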
|
# ? Sep 15, 2015 13:57 |