SirViver
Oct 22, 2008
Were you attempting to post this to the coding horrors thread, or did I just fail to get the joke?

SirViver
Oct 22, 2008

gently caress them posted:

It Just Worked because it ate the exception and instead of crashing just did what I wanted, return json of a blank record.

Just now I added some logic to check if startRow is less than one, and if so, make it 0, so .Skip(startRow) cannot have a negative value. Go figure the old if statement works again. Why the hell would .Skip() failing (or being given a negative value - and why is that allowed?) give me such a crappy exception? "I skipped negative rows so the count is negative." doesn't make any drat sense.
Skip() has a parameter named "count", which must not be negative - that's the exception you got. It's an unfortunate coincidence that you ran into it while reading the Count property, which certainly muddied the waters. The reason the exception only surfaced at the if (items.Count...) or if (items.Any()) call is that LINQ queries are lazily executed :)
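To illustrate the lazy execution part with a minimal LINQ to Objects sketch (not the original IQueryable code - the selector and values here are made up): defining a query runs nothing; only enumerating it does, so that's where any exception from the pipeline finally surfaces.

```csharp
using System;
using System.Linq;

class LazyLinqDemo
{
    static void Main()
    {
        // Defining the query executes nothing yet, even though
        // the selector will blow up for x == 5:
        var query = Enumerable.Range(1, 10)
            .Select(x =>
            {
                if (x == 5) throw new InvalidOperationException("boom");
                return x;
            });

        Console.WriteLine("query defined, nothing thrown yet");

        try
        {
            // Only now is the pipeline actually executed...
            query.Count();
        }
        catch (InvalidOperationException)
        {
            // ...so this is where the exception surfaces.
            Console.WriteLine("exception surfaced at Count()");
        }
    }
}
```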

SirViver
Oct 22, 2008

Ithaqua posted:

That code doesn't make any sense, anyway. Only the result of the last query is returned.
It's equivalent (as far as I can tell) to:
No, that's not what it does (assuming IQueryable behaves the same as LINQ to Objects). The loop "appends" the Where() filter for each keyword to the query, so you end up with an AND connection of the keywords.

However, incidentally it would've had the behavior you described if the sample hadn't used the temporary variable and you compiled the code on VS2010; you'd still get an AND linked chain of filters, but all filters would search for the last keyword only due to an implementation detail regarding foreach variables in older C# compilers.
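Since C# 5 changed foreach to give each iteration its own loop variable, the easiest way to still see that old behavior today is with a plain for loop, which to this day shares one variable across all iterations:

```csharp
using System;
using System.Collections.Generic;

class ClosureCaptureDemo
{
    static void Main()
    {
        var actions = new List<Func<int>>();

        // With a plain 'for' loop, all lambdas capture the single shared
        // loop variable (the same trap pre-C# 5 'foreach' had):
        for (int i = 0; i < 3; i++)
            actions.Add(() => i);

        foreach (var a in actions)
            Console.WriteLine(a()); // prints 3, 3, 3

        actions.Clear();

        // Copying to a temporary inside the loop body captures a fresh
        // variable on every iteration:
        for (int i = 0; i < 3; i++)
        {
            int copy = i;
            actions.Add(() => copy);
        }

        foreach (var a in actions)
            Console.WriteLine(a()); // prints 0, 1, 2
    }
}
```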

SirViver
Oct 22, 2008
Here's some more info about closures and loops.

SirViver
Oct 22, 2008

epalm posted:

If my test project does a pretty good job of testing my services and visiting most code paths, but uses a database instead of mocking out repositories, does that make me a bad person?
Nah, but it makes your tests integration tests instead of unit tests.

If you do intend those to be unit tests, mock your repository. Things unit tests should NOT include:
  • A test database, which makes the tests comparatively difficult to set up and maintain, and possibly slow to run. The same goes for anything the test persists outside of application memory.
  • Non-reproducible results, both from the test database containing previous test data (unless it is cleaned up properly or recreated from scratch in code on every run) and from the use of random data in your tests, which can introduce random failures by chance (e.g., generating a product name that already exists).
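To sketch what "mock your repository" can look like in practice - everything here (IProductRepository, ProductService) is invented for illustration, and a hand-rolled fake works fine if you don't want a mocking framework yet:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical repository interface and service, just to show the shape:
public interface IProductRepository
{
    bool NameExists(string name);
    void Add(string name);
}

public class ProductService
{
    private readonly IProductRepository _repo;
    public ProductService(IProductRepository repo) { _repo = repo; }

    public bool TryAdd(string name)
    {
        if (_repo.NameExists(name)) return false;
        _repo.Add(name);
        return true;
    }
}

// In-memory fake: deterministic, no database, nothing to clean up.
public class FakeProductRepository : IProductRepository
{
    private readonly HashSet<string> _names = new HashSet<string>();
    public bool NameExists(string name) { return _names.Contains(name); }
    public void Add(string name) { _names.Add(name); }
}

class Tests
{
    static void Main()
    {
        var service = new ProductService(new FakeProductRepository());

        Console.WriteLine(service.TryAdd("Widget")); // True
        Console.WriteLine(service.TryAdd("Widget")); // False - duplicate
    }
}
```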

SirViver
Oct 22, 2008
I'm slowly going bonkers here. Warning, :words: incoming.

Backstory: two months or so ago, after 9ish grueling years of ASP.NET WebForms development with a little WinForms sprinkled in between and constantly decreasing levels of caring about it all, I decided I finally want to expand my horizons to include "proper", modern coding techniques and patterns. I started with unit tests, as this is something I could immediately introduce to our codebase at work, at least for the parts that are easily unit-testable without requiring years of refactoring (e.g., static utility methods/helpers). I'm under no illusion of ever covering significant portions of our humongous application with these tests, but hey, at least it's something. This has been quite successful so far, thanks in part to the excellent book The Art of Unit Testing.

Recently I've decided to create a small Windows application to provide a GUI for managing our main application's DB connectivity config files. This is just a personal little side-project for learning purposes (so no harm if I just abandon it), but I figured, I might as well create something useful while I'm at it. I chose to implement this using WinForms and the MVP pattern, deciding against WPF/MVVM, because a) I'm limited to VS2010 for the time being and I'm not sure the tooling is up to handling this properly, and b) I want to keep this at baby steps - I know WinForms and its pitfalls of code-behind madness reasonably well, whereas I've simply given up at every previous attempt to learn yet another markup language (XAML) and pattern (MVVM) at the same time. I figured going this route would be a good stepping stone towards accomplishing the latter at some point. Maybe that was a mistake?

So far I've learned that while the principles behind MVP are relatively clear, everybody seems to have their own idea on how to actually implement it (even disregarding its different flavors Supervising Presenter and Passive View). While there are plenty of examples, these examples tend to be maddeningly shallow and don't even cover the most basic of functionalities (or maybe I just suck at using Google, who knows).

But let me backtrack a bit before I finally come to my question, so you can understand where I'm coming from and my thought process, and maybe spot where I went wrong, because clearly I'm thinking myself into a corner here somewhere. Why use MVP? To separate the UI/View from the logic of the application (via the Presenter, which is the glue between Model and View - the same thing databinding + ViewModel accomplish in MVVM). Why do that? So you can test it. Now, how do you make Presenters and the Model properly testable? By using IoC, most commonly implemented via DI. So far, so good, I think, but now I start running into problems. Who creates the Presenters? Who creates the Views? Actually, the main problem boils down to: how do I correctly create a child window/dialog in response to the user doing something, without the View knowing too much about its Presenter (or even child View Presenters) and Model?

At the moment I've settled on the Views (Forms) creating their Presenters and supplying all* of the Presenter's dependencies. This gives me the freedom to just new() a child View in the appropriate place (like a click event) and get on with it, which seems like the most practical thing to do. The understanding/feeling I got was that the Presenter belongs to the View (kinda, sorta) and that the Presenter is not really responsible for opening a child View - an action that is more or less an implementation detail of the parent View. The idea being that the View should generally be replaceable and might even be a console implementation for all I know. On the other hand, plenty of other sources mention that this is an unrealistic goal, and in most cases you're going to implement a new Presenter anyway if you make a new View. Fair enough.
*Actually, that's kind of a lie, I only pass in one dependency (an application context, which at the moment holds the relevant Model objects, but I kinda plan to replace this with a proper repository pattern, not that this matters here) with the rest of the dependencies' default implementations being created in the public ctor of the Presenter - the ctor taking all the DI parameters is internal and therefore only visible to the tests (which are purely academic at this point, since I'm not doing TDD). The reason for this is that I felt kind of icky having the View know about the Model at all, and this way the knowledge is reduced/hidden as much as possible.
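Roughly, the arrangement described above looks like this (all names invented and heavily simplified - just to show the shape, including why the Presenter stays testable via a stub view):

```csharp
using System;

public interface IMainView
{
    string StatusText { set; }
}

// Stand-in for the application context holding the model objects.
public class AppContext { }

public class MainPresenter
{
    private readonly IMainView _view;
    private readonly AppContext _context;

    public MainPresenter(IMainView view, AppContext context)
    {
        _view = view;
        _context = context;
    }

    public void Load()
    {
        // Presenter logic goes here; the View is only touched via its interface.
        _view.StatusText = "loaded";
    }
}

// A trivial stub view - this is what makes the Presenter unit-testable
// without any WinForms involvement.
public class StubView : IMainView
{
    public string LastStatus;
    public string StatusText { set { LastStatus = value; } }
}

// The WinForms code-behind would then look roughly like:
// public partial class MainForm : Form, IMainView
// {
//     private readonly MainPresenter _presenter;
//     public MainForm(AppContext context)
//     {
//         InitializeComponent();
//         _presenter = new MainPresenter(this, context);
//     }
//     public string StatusText { set { statusLabel.Text = value; } }
// }
```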

I've also tried using an IoC container (Simple Injector in this case) to achieve this separation, but this ends up being too limiting, as it either forces me to create the whole dependency graph at the composition root (limiting me to only a single instance of the child View, assuming I inject it as a dependency of the parent View - but what if I want multiple child views open at the same time?? Do I need a ChildViewFactory?), or forces me to reference the IoC container from within the parent View so I can Resolve() a child View reference, at which point I end up with the Service Locator pattern, which just makes matters worse.

At this point I'm kinda frustrated, as I could have implemented this whole thing three times over had I done it the "classic" way (testability be damned), but that's obviously not the point of the exercise. I feel like I'm just missing something obvious and once it clicks the whole thing will resolve in unicorns and butterflies, yet at the same time everything I can find on the net either doesn't deal with such exotic matters as creating child dialogs or falls back to using the Service Locator pattern in various states of disguise :argh:.

Any advice? Bite the bullet and go WPF/MVVM? Or do similar problems exist there? :ohdear:

SirViver
Oct 22, 2008

Jewel posted:

Question: Python lets you use ", ".Join(array), but why doesn't .NET?



It lets me call the function Join, which is strange because if you call a static function on any other instance it gives you an error saying "don't call a static on an instance", so why is string exempt?
String isn't exempt; "abc".Join(...) does not make a call to String.Join(string, string[]) but instead picks up the LINQ extension method Join(...) (used for joining elements in a SQL-like fashion), since a string is an IEnumerable<char>.

Jewel posted:

No, string.Join(string, int[]) works absolutely fine because it converts each int to a string.
Also no, an int[] is definitely not auto-converted to a string[]. Where did you get that idea from? :psyduck:

E: Disregard this - I didn't realize newer .NET versions added a different overload of String.Join.

SirViver fucked around with this message at 16:54 on Nov 25, 2014

SirViver
Oct 22, 2008

Bognar posted:

The quote was "it converts each int to a string", which to me doesn't imply array type conversion. It's also not incorrect, since string.Join(string, int[]) will use the string.Join(string, IEnumerable<T>) overload, which calls .ToString on every element.
Ahh sorry, my mistake. I'm still stuck in a VS2010/.NET 3.5 mindset, as that's what I have to use at work. I didn't realize they added a new overload.
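For reference, the overload in question - String.Join<T>(string, IEnumerable<T>) was added in .NET 4.0 and calls ToString() on each element:

```csharp
using System;

class JoinDemo
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3 };

        // Resolves to String.Join<T>(string, IEnumerable<T>),
        // which stringifies each element:
        string s = string.Join(", ", numbers);
        Console.WriteLine(s); // "1, 2, 3"
    }
}
```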

SirViver
Oct 22, 2008
Persist the Visual Tree when switching tabs in the WPF TabControl?
Not sure if you'd count this as hacky, but I guess that's WPF for ya v:v:v

SirViver
Oct 22, 2008

chippy posted:

Goons, help (Winforms).
Just checked it and this should do it:
C# code:
// Set via designer...
textBox1.AcceptsTab = true;
textBox1.Multiline = true;
textBox1.KeyPress += textBox1_KeyPress;

// In code behind
private void textBox1_KeyPress(object sender, KeyPressEventArgs e)
{
	if (e.KeyChar == (char)13) // Return
	{
		e.Handled = true; // Prevent newline being added to textbox
		HandleBarcodeEntry(textBox1.Text); // Move this from your form handler to here
	}
}
Not exactly watertight or pretty to look at, but it should get you going. Note that it has to be the KeyPress event and not KeyDown/Up, as the Handled property on their event object does exactly gently caress all (for preventing key input). If you don't like the (char)13 comparison, you can also create a bool returnKeyPressed or similar and set that in the KeyDown event by comparing e.KeyCode against the nicer Keys.Return enumeration value (and also maybe move the barcode handling code there and only use the KeyPress event to prevent the newline from being added).

SirViver
Oct 22, 2008

AuxPriest posted:

Alternatively, set a breakpoint at the beginning of the suspect block and step through with f10 until you find the offending line. When you do, step into it with f11 and start digging.
Alternatively, you open Debug > Exceptions and check "Thrown" for Common Language Runtime Exceptions (or, if necessary, just for the exact exception type if your code legitimately throws/handles other exceptions) and hey presto, VS will break right at the point where the exception is generated.

SirViver
Oct 22, 2008
I'm greatly confused by your code. What is your application supposed to do / what does it look like? The only thing I can think of is that it has a console/chat-like window (the RichTextBox) and a separate textbox messageTextBox which calls SendMessage() once you press enter or hit a send button?

Then on one special input "Brady" you want it to query for a password? If that is the case, I see two ways of doing that:
  1. Have a variable in your form class like bool waitingForPasswordInput = false;, which you set to true in the "Brady" input case and on the next SendMessage() call check if the input was the correct password, reset the variable and do whatever special logic you want to execute.

  2. Create a separate form with just a textbox for the password input and show that via ShowDialog(), which will block at this point. Once the dialog box is closed you can read what was in its password textbox and continue from there.
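Option 1, boiled down to a form-independent sketch (all names and the password here are made up), would be something like:

```csharp
using System;

// The chat-style input handler with a waitingForPasswordInput flag,
// separated from the form so the flow is easy to follow.
public class ChatInputHandler
{
    private bool waitingForPasswordInput = false;
    private const string Password = "secret"; // placeholder

    // Called from SendMessage(); returns the text the form should display.
    public string HandleInput(string input)
    {
        if (waitingForPasswordInput)
        {
            waitingForPasswordInput = false;
            return input == Password ? "access granted" : "access denied";
        }

        if (input == "Brady")
        {
            waitingForPasswordInput = true;
            return "enter password:";
        }

        return "echo: " + input;
    }
}
```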

SirViver
Oct 22, 2008

Drastic Actions posted:

VS 2015 is great. I don't have any breakages except with this UWP stuff, which I think is because of all of these insider build cruft. But beyond that, it's great. You should be fine.
Personally, I'm pretty disappointed so far. I've been evaluating it while working on our large ASP.NET WebForms app, and overall the experience has been worse than VS 2013.
  • Edit & Continue supporting LINQ/lambda expressions is great, but actually performing an edit is very sluggish. If you pause execution or hit a breakpoint and want to start typing, VS completely hangs for several seconds before your input is processed at all. Also, objects very frequently end up being nulled after an edit that would not have been with the old E&C.

  • When writing code, there is a small but noticeable delay in keyword highlighting and similar functions like brace matching. This can make writing nested calls kinda awkward, as you can't rely on the highlighting actually showing you the current level so you know when to stop mashing ')'. I know this is a small & stupid thing, but I notice it and it irks me.

  • Occasional unexplainable sluggishness/hangs/general performance problems after using VS for some time. I've experienced VS hangs when trying to do banal stuff like:
    • Opening the Extensions & Updates dialog and trying to scroll or click on anything. The dialog is completely unresponsive for a while, then completes one action, then becomes unresponsive again, repeat.
    • Opening the New Project dialog and trying to scroll or click on anything. Same as above.
    • Browsing the Tools > Options dialog and clicking on various option groups on the left. VS would hang about ~10 seconds before it would display the corresponding options on the right.

  • Stupid little bugs like Go to Definition (F12) on external assemblies producing a "One or more errors occurred." dialog instead of showing the metadata info. If the SO report is correct, it's caused by the "Keep tabs" C# text editor setting :psyduck:

  • Other "enhancements" that you can't turn off, such as collapsed XML doc comments showing the first line of the <summary> contents. When I collapse things, I want the visual clutter to be reduced - this is the opposite of that. No option to disable as far as I can tell. Worse, there's apparently no option to change the text color of collapsed XML doc regions, so I can't even make it ultra light gray.

  • Code Lens is nice and I want to use it, but it adds too much visual clutter during normal development. There is still no toggle shortcut for it.
There might be other things I've forgotten. Of course it's not all bad, but my overall first impression is that, in terms of user experience, it's a step backwards from VS 2013.

SirViver
Oct 22, 2008

ljw1004 posted:

Oh dear! If you have time,
Thanks for the reply!

I'll try to reproduce these issues and send the reports within the next week or so. Of course, with the whole thing only happening sometimes it might take a while. Especially the hanging dialogs could be tricky, as IIRC I had to force close devenv.exe to get anything done once it occurred. The EnC thing I can reproduce relatively easily though. Let's see.

I do have some extensions installed, of which the most performance hungry is probably the Productivity Power Tools 2015 one, though running in Safe Mode (all 3rd party Extensions disabled) didn't show any immediate performance improvement either. Resharper I only have on VS 2013 and perma-suspended except for when I really need specific functionality of it, as the performance impact of it is quite frankly ridiculous.

quote:

* For CodeLens, you say it's too much visual clutter. (I agree, and always turn it off myself, unless I'm particularly keen to see the GitHub commit history for a particular method). You write "There's still no toggle shortcut". Do I understand correct that you just want a keyboard shortcut to quickly turn it off and on again?
Yes, exactly. Just some way to enable/disable Code Lens via a command instead of having to go to Options and switching it from there. Whether it's done via a toolbar button or keyboard shortcut doesn't really matter to me, though I think the latter would be the generally preferred option. Implementing one probably comes with the other anyway, though. The reason I wrote "still" is actually because I found a VS 2013 Uservoice entry wishing for it when I did a Google search for the non-existing shortcut - I admittedly didn't even know Code Lens existed since VS 2013 already :shobon:

SirViver
Oct 22, 2008

ljw1004 posted:

* Occasional unexplainable sluggishness/hangs in opening Extensions&Updates, opening NewProject, ToolsOptions -- for these again, if you have time, it'd be great to "send a frown". We haven't had any other reports of these kinds of hangs yet.
Welp, sent two reports for this a few minutes ago after I had managed to reproduce it, but of course, once I start the recording tool and open the dialog it magically works :(

I'll see if I can properly reproduce it later, I hope you don't mind the duplicate reports - I tag all of them with the attention thing you mentioned.

E: Also sent a report regarding the EnC hang.

SirViver fucked around with this message at 08:59 on Jul 31, 2015

SirViver
Oct 22, 2008

ljw1004 posted:

Oh dear! If you have time,

SirViver posted:

Welp, sent two reports for this a few minutes ago after I had managed to reproduce it, but of course, once I start the recording tool and open the dialog it magically works :(

I'll see if I can properly reproduce it later, I hope you don't mind the duplicate reports - I tag all of them with the attention thing you mentioned.

E: Also sent a report regarding the EnC hang.
I've finally managed to reproduce and record the New Project dialog hang. It was unresponsive for nearly two minutes, at least with the recording tool running. Visual Studio even popped up the "VS is busy" tray notification twice during the wait. I hope this helps narrow down the possible cause for such hangs.

Oh and by the way, if anyone's wondering, you can actually recolor the collapsed text region blocks with the "Collapsed Text (Collapsed)" display item - something I had tried before - but unlike most other color options you have to for some reason restart Visual Studio for it to take effect.

SirViver
Oct 22, 2008
Is there any rough estimate as to when VS 2015 SP1 will come out yet? Or any way to join a beta program of sorts to get developer builds faster (like it was done for Windows 10)?

I think I'll go back to VS 2013 for the time being, as I feel it's just too buggy and slow to use for daily development :(. There seem to be a lot more situations where the UI just hangs completely for 2-3 seconds (also freezing other VS 2015 instances for some reason), which gets really irritating after a while. Also I have to say Edit & Continue has proven rather useless to me so far:
  • It's much slower to start editing. Hangs for a good while whereas VS 2013 editing started instantly.
  • Editing methods containing lambda expressions randomly triggers complaints about changing the type of the lambda parameters for no discernible reason, preventing EnC.
  • There seems to be a bug where class member references are broken somehow (??). Specifically I see this behavior in ASP.NET WebForms where I have some class object as Page member that I access in Page_Load(). After I EnC, the member reference will not only be null, but so broken that you can't even assign a value to it. The debugger doesn't show any value when you mouseover the variable. Assignments to it "succeed", but no value is shown afterwards either. Accessing the member causes a NullReferenceException.

SirViver
Oct 22, 2008
Thanks, I'll try to contact you in the next few days. I did send those "frown" reports with the tag you mentioned earlier - do you know if anything useful could be gleaned from those yet?


In the meanwhile, a short documentary about the EnC bug I mentioned.

Start with a bog-standard ASP.NET Web Form (derived from a base class that ultimately derives from Page), with some business object members:


Then EnC the Page_Load method, for example I added a dummy if (true) { } below there. The method is barely 20 lines long and contains nothing fancy; no LINQ or lambda expressions or anything.

Boom, all the member fields are broken now and you will get NullRef exceptions when trying to access them. Looking at them in Watch reveals a really strange error:


After some further digging it appears the fields have somehow been duplicated during the EnC recompile, quite rightfully confusing both debugger and runtime:

(duplicated configBPO member not shown).

Welp. :psyduck:

SirViver
Oct 22, 2008
HA, I found the bug!

I was trying to create a minimal repro case, and the error somehow seemed to be related to the specific assembly the referenced member's type is defined in - if I put the same dummy class I used for testing into a new class library, it worked. Having members referencing both the "broken" assembly and the newly created one resulted in only the broken-assembly members being screwed up after EnC.

On a hunch I decided to make sure the assembly properties are equal between the both and suddenly I could reproduce the error even with the new class library. Turns out if you use an auto-generated assembly version number EnC breaks member references to types defined in that assembly.


The repro code is literally this:

Main console project...

Program.cs
C# code:
using EnCClassLibrary;

namespace EditAndContinueConsoleTest
{
	class Program
	{
		static SomeClass someClass;

		static void Main(string[] args)
		{
			someClass = new SomeClass(); 
			someClass.SomeProperty = 1; // Step until here, EnC below, continue execution --> will throw NullReferenceException

			if (true) { } // EnC this to false (or whatever - specifics don't matter, just trigger EnC)
		}
	}
}
...referencing the "EnCClassLibrary" (doesn't matter if via project or assembly reference), in which you have

AssemblyInfo.cs
C# code:
...
[assembly: AssemblyVersion("1.0.*")]
...
SomeClass.cs
C# code:
namespace EnCClassLibrary
{
    public class SomeClass
    {
		public int SomeProperty { get; set; }

		public SomeClass()
		{

		}
    }
}

SirViver
Oct 22, 2008
I've now also managed to reproduce the EnC slowness independently from my work environment; simply put, VS2015 EnC can't deal with large assemblies.

I wrote a small tool that generates random classes with some properties, methods, and a member variable of the previously generated class, to have some class coupling (broken up every 10 classes, so the hierarchy doesn't get ridiculous). I let it churn out some 2000 classes that I put into a class library, which ended up being about 400K LOC that compiles to a 17.5MB assembly. That's pretty much exactly twice the size of our actual business logic assembly, but if anything it just demonstrates the issue even better.

I tested this on my home machine - which is a fair bit faster than my work machine, so the results would be even worse there - with a completely fresh install of VS2015 Enterprise. Here are the results when I try to EnC some code in one of the generated classes:
pre:
VS2013
Start editing code: immediate
Apply changes: 4 seconds

VS2015
Start editing code: 63 seconds on first edit, afterwards immediate
Apply changes: 1 second
During those 63 seconds VS is completely unresponsive and pegs one core on 100%. There doesn't seem to be any significant disk or memory activity. In a PerfView ETL trace it looked like VS is performing a Roslyn(?) parse of the entire assembly source code, but then again I'm far from competent at interpreting those traces; stuff like for example OTHER <<mscorlib.ni!System.Runtime.CompilerServices.AsyncMethodBuilderCore+ContinuationWrapper.Invoke()>> shows up, so I think it's a reasonable guess.

Note that removing the class coupling entirely significantly speeds this up (about 3 seconds to start editing), but then again a completely flat hierarchy wouldn't exactly be a realistic test. Reducing the project to "only" 1000 classes (half the LOC and size) also reduces the delay to a mere 10 seconds, which is closer to the hang times I'm seeing on my work machine with our actual project. It does suggest that some areas of the code parser have algorithmic complexity issues, seeing how doubling the number of classes produces six times longer hangs.

Incidentally, if I regenerate those 2000 classes while VS is open, VS2013 will immediately detect the changes and ask to reload, whereas VS2015 will reproducibly hang (presumably for 63ish seconds) before it does anything - though I didn't wait it out but force killed the process instead.

In general it seems VS2015 has a lot more operations tied to some background compilation/parsing process that, if it takes long to complete, will freeze the entire UI. I believe this may also be the cause of the hanging issues I have with the New Project and Extensions and Updates dialogs, as both work fine without a project loaded, but hang the first time I open them after loading our main project. Once the hang resolves, opening the dialog again works as it should, which is similar to the behavior seen with EnC.

In fact, let me just try this right now... YES, the New Project dialog also hangs on my home machine with the large generated project opened. So I think that one's confirmed too.

Now, I'm not saying it is desirable to have such large assemblies in the first place, but the reality of the matter is that it does happen and that VS2013 deals with it without complaint, whereas '15 chokes and dies. Though looking at the problem also seems to suggest a common source, so maybe a single fix is all that's needed to squash a wide array of performance issues. Doesn't mean that the fix will be easy, though.

@ljw1004: I'll send you the details and repro code Monday morning (CET).

SirViver
Oct 22, 2008

Brady posted:

I have a question about this if you don't mind, although it's not really thread relevant. However it's not letting me PM you, saying that you've opted not to receive PMs. Do you have an e-mail address I can reach you at?
That's because PMs are a Plat... ohh, I seem to have mysteriously acquired Platinum, which also explains the report buttons suddenly showing up. Assuming someone bought it for me, thanks! - whoever you are.

Anyway, PMs are now enabled.

SirViver
Oct 22, 2008

ljw1004 posted:

SirVivier, thanks for the clean and simple bug report. I took the liberty of filing it on github: https://github.com/dotnet/roslyn/issues/4575. (Edit: wow, more of the compiler devs are piling on. Looks like auto-generated assembly versions have wider implications.)
Nice to see this get some attention :). Though, truth be told, I'll probably simply revert our assemblies to fixed version numbers (or ones written during the actual release package build - duh), as the generated ones serve no real purpose anyway. A few years back I had enabled them with the intention that it's easier to verify the assembly installed at the customer actually being the one delivered in the release package (to rule out hosed up installs or amateur-level tampering), but that issue arose exactly never.

If you do end up deprecating this feature, as seems to be discussed in the linked github issues, just remember to also remove this comment from the generated AssemblyInfo.cs
pre:
// You can specify all the values or you can default the Build and Revision Numbers 
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
...and adjust the MSDN documentation, though I'm sure you already knew that.

SirViver fucked around with this message at 18:56 on Aug 17, 2015

SirViver
Oct 22, 2008

Kekekela posted:

Night Shade posted:

XY problem
This is my first time hearing this term but after googling it, I think I will be using it extensively going forward. :downs:
Wait, while I'm familiar with the term, is it intentional that when you pronounce X-Y it's sorta kinda like "ask why" :v:

SirViver
Oct 22, 2008

Ithaqua posted:

Must be an accent thing, because "ex" and "ask" sound nothing alike.
Yea, I was mostly referring to an American accent with sloppy pronunciation of "ask" as "aks" or "axe" :)

SirViver
Oct 22, 2008
You actually need to use the SqlCommand you created, as you do in AddPat. Currently you're calling ExecuteNonQuery on a string, which obviously can't work.

Also, your SQL update syntax is incorrect. It should be:

UPDATE PATIENT SET Locaton = @location WHERE Id = @id

E: Also, if you just rethrow exceptions, use throw; rather than throw ex;. The latter re-throws the exception as though it originated at that line, which overwrites the stack trace information you had on it. Granted, in this case it doesn't make much difference, as the stack trace would point to code inside the SqlConnection anyway, but it's good practice to learn early.
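A quick way to see the difference yourself (minimal sketch; the NoInlining attribute just keeps the JIT from muddying the stack traces):

```csharp
using System;
using System.Runtime.CompilerServices;

class RethrowDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void Inner()
    {
        throw new InvalidOperationException("original");
    }

    public static void BadRethrow()
    {
        try { Inner(); }
        catch (Exception ex) { throw ex; } // stack trace now starts HERE
    }

    public static void GoodRethrow()
    {
        try { Inner(); }
        catch { throw; } // original stack trace, including Inner(), preserved
    }

    static void Main()
    {
        try { BadRethrow(); }
        catch (Exception ex)
        {
            Console.WriteLine("throw ex: Inner in trace? " + ex.StackTrace.Contains("Inner"));
        }

        try { GoodRethrow(); }
        catch (Exception ex)
        {
            Console.WriteLine("throw:    Inner in trace? " + ex.StackTrace.Contains("Inner"));
        }
    }
}
```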

SirViver fucked around with this message at 16:57 on Aug 31, 2015

SirViver
Oct 22, 2008
I'd say B), because a dictionary of class instances is not going to be any bigger/worse than a DataTable full of DataRows. As a benefit, the Parent property can then also do a lazy but fast lookup in the same dictionary. Just make sure the class properties are read only, so you can't gently caress up your cache by accident.
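A sketch of what I mean (Node, Id, ParentId are placeholder names): the instances share the id-keyed dictionary they live in, and Parent resolves lazily against it.

```csharp
using System;
using System.Collections.Generic;

// Placeholder cache entry - read-only state so the cache can't be
// corrupted by accident, plus a lazy O(1) Parent lookup.
public class Node
{
    private readonly Dictionary<int, Node> _cache;

    public int Id { get; private set; }
    public int? ParentId { get; private set; }

    public Node(Dictionary<int, Node> cache, int id, int? parentId)
    {
        _cache = cache;
        Id = id;
        ParentId = parentId;
    }

    // Lazy but fast: resolved against the shared dictionary on access.
    public Node Parent
    {
        get
        {
            Node parent;
            return ParentId.HasValue && _cache.TryGetValue(ParentId.Value, out parent)
                ? parent
                : null;
        }
    }
}
```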

SirViver
Oct 22, 2008

epalm posted:

Confirm/deny?
Confirmed.

When I discovered this I was also quite surprised that the default rounding mode used by .NET (ToEven) is not the "standard" rounding (AwayFromZero) everyone first learns in school.

But I also see how it can make sense. I guess this route was chosen because it reduces rounding errors by default when accumulating rounded values - a scenario that is rather likely to happen in business applications written by inexperienced developers. By comparison, scientific applications will use full-precision values anyway (or, more realistically, not use C# in the first place), and in most cases any rounding "defects" will be limited to the UI display.
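The difference in a nutshell (defaults shown are for Math.Round on double):

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // Default is MidpointRounding.ToEven ("banker's rounding"):
        // midpoints round towards the nearest even integer.
        Console.WriteLine(Math.Round(0.5)); // 0
        Console.WriteLine(Math.Round(1.5)); // 2
        Console.WriteLine(Math.Round(2.5)); // 2

        // The "school" rounding has to be requested explicitly:
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3
    }
}
```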

SirViver
Oct 22, 2008
Does anyone else experience VS 2015 randomly but insistently resetting the C# "Keep tabs" formatting option back to "Insert spaces"? It's driving me nuts :mad:

SirViver
Oct 22, 2008

Finster Dexter posted:

Can't say I've had that happen. Is it a problem with the vssettings file being corrupted or recreated or something?
No idea, but all the other settings remain as they should, at least as far as I can tell. At first I thought it's a settings sync issue as I've seen exactly that option getting disabled in the sync logs, but even after disabling sync it still happens.

One maybe not so common thing I do have going on is that I regularly run multiple VS 2015 instances in parallel. My best guess would be the processes trying to synchronize the settings between them and screwing up somehow.

Oh and I also have the F12 Go To Definition fix extension, which temporarily flicks the Keep Tabs option off when you press F12 to circumvent a VS bug, though from looking at its code I don't see anything obviously wrong with it and when testing it does reset the option properly. I'll give disabling that extension a try - maybe the quick switching of the option confuses VS.

SirViver
Oct 22, 2008

epalm posted:

We have two C# solutions with a small but growing overlap of features. I want to pull the common functionality into a 3rd C# solution, referenced by the two products. Could I somehow do this using Nuget?

We're using VS 2013 Pro, and C# 4.5
In my honest opinion - and take it with a grain of salt, as I have by choice only minimal exposure to NuGet - I've so far found every interaction with NuGet, except for the initial installation of a package, to be an absolute pain in the rear end. And I haven't even installed packages with notable dependencies.

There's packages inexplicably failing to restore, NuGet barfing its files all over your projects, everybody being required to do the exact right thing w/r/t source control to not horribly gently caress things up, stuff "magically" working when you build with Visual Studio but not with MSBuild, etc. It's supposed to make dependency management a non-issue, but, at least for me, it has so far been the exact opposite. :shrug:

Personally I continue to use lib folders for seldom updated libraries and direct project references for stuff that is actively changing - just because code is shared doesn't mean you can't include the same projects in multiple solutions directly. Anything only available from NuGet is downloaded once and then stuffed into a lib folder under source control. These approaches might not be "sexy", but they are far more reliable and straightforward to troubleshoot should anything be amiss.

SirViver
Oct 22, 2008
Just installed Update 1 (without problems) and I'm very happy to report that Edit and Continue performance on large projects has been significantly improved :thumbsup:

Though I do find it kinda funny that there doesn't seem to be any kind of comprehensive list of fixed issues, or maybe I just can't find it. The official page only shows like "Fixed issues: (3 entries), Known issues: (19 entries)", which doesn't exactly tell the whole truth and would be very depressing if it did.

SirViver
Oct 22, 2008
That's weird. Maybe your VS install is borked? Or maybe it's a VS Express thing? On Enterprise edition it shows fine:

SirViver
Oct 22, 2008
Ok, so, if I want to use ASP.NET 5 but have a third party library that depends on custom configuration sections in the old app/web.config format... am I completely boned? Or is there a way to make these still work?

SirViver
Oct 22, 2008
I think a Binding Redirect is what you're looking for.

By default an assembly that was referencing a third party assembly will always ask for that exact version to be loaded (in your case BulletSharp was compiled against SharpDX 2.6.3.0, so it will ask for exactly that version). You can override this by providing an assembly redirect in the app.config file, telling the runtime that any assembly looking for version "2.6.3.0" of SharpDX should now look for "3.0.2.0" instead. I believe from .NET 4.5.1 onwards there's also some kind of automatic binding redirection mechanism, but I'm not sure on that.

Of course, that doesn't guarantee that the code will actually work with the newer assembly, but in most cases it should, as drastic changes to the public API aren't all that common.

It should probably look something like this:
XML code:
<configuration>
...
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="SharpDX ???" publicKeyToken="???" />
        <bindingRedirect oldVersion="0.0.0.0-3.0.1.9" newVersion="3.0.2.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
...
</configuration>

SirViver
Oct 22, 2008

22 Eargesplitten posted:

What's the point of a getter and setter if the syntax for calling them is the same as getting the variable if it were a public variable? I thought the point of getters and setters was to make sure that nothing outside that class could access the variable without going through a specific method call.
Yes, that is their point. The syntax looking like a variable access (which btw gets compiled down to classic get_MyProperty()/set_MyProperty() method calls under the hood) just makes things nicer to look at and condenses separate get/set methods into a single property. It still fulfills the same purpose as manually written getter and setter methods.

Of course, if you just use auto-implemented properties like
C# code:
public int MyProperty { get; set; }
then that by itself doesn't yet do anything different from just exposing a public variable. However, what it does do is future-proof your class for the case where you might add some sort of validation later, in which case you would replace the auto-implemented property with a manually implemented one like so
C# code:
private int _myProperty;
public int MyProperty 
{ 
    get { return _myProperty; } 
    set 
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException(nameof(value), value, nameof(MyProperty) + " must not be lower than zero.");
        _myProperty = value;
    }
}
This can be done completely transparently to the caller; your class interface has not changed and no calls need to be rewritten.
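You can actually observe that compile-time rewrite via reflection - the compiler emits hidden get_X()/set_X() accessor methods for every property (Demo is just a made-up class for this check):

```csharp
using System;
using System.Reflection;

// A property is compiled into a pair of accessor methods named
// get_<Name> and set_<Name>; property accesses call these under the hood.
class Demo
{
    public int MyProperty { get; set; }
}
```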

22 Eargesplitten posted:

Also, why can't I set a variable to be private and then make the getters and setters public? Coming from Java, this whole thing seems weird.
I'm not quite sure what you mean by that? A private backing variable with a public getter/setter property is pretty much the standard use case - see above. What you can also do is make private, protected, or internal properties, or give the getter and/or setter a more restrictive accessibility.
C# code:
// Visible to everyone, but the value can only be set by the class
public int MyProperty { get; private set; }
// Visible to classes within the same assembly, but the value can only be set by the class or by derived classes
internal int MyProperty { get; protected set; }
// Visible to this class and derived classes, but only this class can read the value - derived classes may only set the value
// (This is weird af)
protected int MyProperty { private get; set; }

SirViver
Oct 22, 2008
If you add an optional parameter you need to make sure all assemblies using this method are recompiled or they won't work, throwing the runtime error you've mentioned.

The reason for this is that optional parameters are evaluated at compile time. They are a pure C# compiler feature - the CLR does not have a concept of optional parameters at all. The whole magic happens during overload resolution; if no method with exactly matching parameters is found, the compiler looks for methods with optional parameters that fit the structure of the specified parameters. If it finds one, it extracts the default parameter value information from the method definition and rewrites your method call as if you had supplied all parameters, using the default values for those you haven't specified.

For example, if you have:
C# code:
// Util.dll
public static decimal Foo(decimal bar, bool baz = true) { ... }

// Application.dll
decimal val = ...;
decimal result = Util.Foo(val);
Then when compiling Application.dll the compiler won't find a Foo method with a single parameter during overload resolution, but it finds the one with the optional parameter. It rewrites your code to instead read:
C# code:
// Application.dll
decimal val = ...;
decimal result = Util.Foo(val, true);
It is very important to know that optional parameters are essentially syntactic sugar that is only evaluated during compilation of the calling assembly, and not a runtime feature. Otherwise you can easily run into the trap of extending methods with optional parameters instead of providing a new overload, which then results in runtime errors if you for example only supply a new Util.dll instead of also recompiling Application.dll. Without recompile, the latter still tries to call Util.Foo(decimal), which is an overload that doesn't actually exist anymore. Similarly, if you had already recompiled Application.dll previously and you then change the default value of baz to false, you also need to recompile Application.dll again, or it will still call your method with baz = true.

Optional parameters are very nice and can in some cases greatly reduce the amount of overloads you need to provide, but they are not a replacement for overloads. If you expose optional parameters as part of a public API, then any addition or modification of such parameters has to be treated as a breaking change, even if it doesn't obviously look like one. Supplying an overload is still the safer option in those scenarios.
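To sketch the overload approach with the same hypothetical Foo (the method body here is made up just so the example runs): the default value lives inside Util.dll itself, so changing it later doesn't require recompiling callers.

```csharp
using System;

// Hypothetical Util class from the example above. Instead of
// "bool baz = true" as an optional parameter (whose default gets baked
// into every caller at compile time), the one-argument overload forwards
// the default, so it is compiled into Util.dll only.
static class Util
{
    public static decimal Foo(decimal bar) => Foo(bar, baz: true);

    public static decimal Foo(decimal bar, bool baz) =>
        baz ? bar * 2 : bar; // made-up body, purely for illustration
}
```

Now changing the default only means recompiling Util.dll; an old Application.dll keeps calling Foo(decimal), which still exists.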

Why exactly your first attempt at recompiling failed I don't know, but I wouldn't be surprised if it was just a random Visual Studio screwup. Stuff like this happens from time to time - usually a VS restart resolves those problems.

SirViver
Oct 22, 2008
I realize this might be more of a Visual Studio/Roslyn(?) question than being directly C#/.NET related, but has anyone here dabbled with writing of custom debugger visualizers? Or more specifically, knows how VS (or the .NET runtime?) determines whether an object is "replaceable" or not?

The reason I'm asking is that I have two generally working visualizers, one for DataSets and one for strings, but ever since I switched to Visual Studio 2015 (which is why I'm suspecting Roslyn) the visualizer for strings does not allow me to in-place edit them anymore (i.e., IVisualizerObjectProvider.IsObjectReplaceable = false). Clearly, strings are generally editable during debugging or otherwise you couldn't edit them in-line (using the "hover-watch" or watch window), but for some reason the debugger visualizers don't get access to that functionality anymore.

I mean, it's not a huge deal, but sometimes it would be nice to be able to edit large strings containing newlines in a proper editor rather than in a single enormous line :)

SirViver
Oct 22, 2008

Ciaphas posted:

Anyone with experience in EF against an Oracle database who might know why a nullable number column coming into a decimal? field might cause an InvalidCastException for some rows? There's one row in particular that causes the crash reliably, where the number is 0.0000006924267etcetc. I can't change it to double? in the model because then the build complains that that's incompatible with Oracle number.
How are the number columns on your Oracle DB defined? Unless you limit both scale and precision appropriately, a .NET decimal field won't necessarily be able to hold the value Oracle provides. You can usually easily force such errors by doing something like SELECT 1/3 FROM dual - the Oracle data provider will in such cases (annoyingly, but quite rightfully) throw InvalidCastExceptions (or maybe actually OverflowExceptions that later get masked by the invalid cast ones - yeah, don't ask, the Oracle Managed Data Provider is a truly amazing piece of engineering :downs:), as the conversion would technically make you lose data, even if in most practical cases you probably don't actually care about that loss.

Now, I have no idea how EF handles this, or how much it allows overriding its default behavior, but from my experience working with our own DAL you have roughly three ways of dealing with this (assuming you generally don't actually care about keeping maximum possible numerical precision):
  1. Restrict the number columns to an appropriate scale+precision that fits in a .NET decimal. By default, Oracle numbers will allow storing values up to 38 digits total, while a .NET decimal can hold only up to 28. Note, however, that this won't save you if you ever do divisions in your queries. If you just do straight reads/writes this might be enough, though.
  2. Select the values ROUND()ed appropriately, assuming most troubles stem from having too many decimal places. Quick, extremely ugly and not 100% foolproof either.
  3. Intercept whatever mechanism tries to read values as decimal and read them as the Oracle data provider-native OracleNumber instead. Then perform some maths to trim the raw data down to a size that fits a .NET decimal before converting it. This has the advantage of failure-proofing your data access without requiring SQL changes, while still providing the maximum precision a decimal allows. The downside is having to deal with the Oracle-specific guts that EF is supposed to hide away from you in the first place. May not actually be possible to do with EF, though... no idea :shrug:
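This is EF/Oracle-agnostic, but the core size mismatch behind point 1 is easy to show in plain .NET (the 38-digit value below is arbitrary):

```csharp
using System;

// Oracle NUMBER can store up to 38 significant digits, while a .NET decimal
// tops out at roughly 28-29 - so a full-precision NUMBER simply cannot fit.
var thirtyEightDigits = "12345678901234567890123456789012345678"; // 38 digits
try
{
    var d = decimal.Parse(thirtyEightDigits);
    Console.WriteLine(d); // never reached
}
catch (OverflowException)
{
    Console.WriteLine("38 significant digits overflow a .NET decimal");
}
```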

SirViver
Oct 22, 2008

chippy posted:

I think this dude's making the common mistake of thinking of the model as just the data, rather than all the behaviour and business logic too.
:yeah:

SirViver
Oct 22, 2008

I'm by no means an expert on async, but as far as I can tell your problem (besides the lack of curly brace placement) is that you're still running your code synchronously. Specifically you're calling onTick.Invoke(), which is a synchronous call and therefore negates all the async/awaits you plastered around your code.

What I believe you need to do is the following:
  1. Generally kill all usages of async void unless the method is actually used as an event handler, which needs to have a void return type. The correct equivalent is async Task, so your method returns a Task (without a result type, i.e. "void") that can be awaited. Using async void is very bad form and only exists as a necessary concession to support asynchronous event handlers.
  2. Specifically, change private async void OnTick() to private async Task OnTick(). IIRC you don't need to change anything else, the compiler infers the Task being returned.
  3. Consequently your Action onTick parameter needs to be changed to a Func<Task> onTick.
  4. Finally your call to onTick.Invoke() should be changed to await onTick().
These changes should make your code finally run fully asynchronously.
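Put together, the four points might look roughly like this (Ticker and onTick are stand-ins for the actual code):

```csharp
using System;
using System.Threading.Tasks;

// Sketch of the changes above - the names are placeholders.
sealed class Ticker
{
    private readonly Func<Task> onTick; // point 3: Func<Task> instead of Action

    public Ticker(Func<Task> onTick) => this.onTick = onTick;

    public async Task RunAsync(int ticks, int delayMs) // points 1+2: async Task, not async void
    {
        for (var i = 0; i < ticks; i++)
        {
            await Task.Delay(delayMs);
            await onTick(); // point 4: awaited instead of onTick.Invoke()
        }
    }
}
```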


Personally I think what async can do is quite amazing, yet at the same time I'm not entirely a huge fan of it. It hides very complicated logic behind a seemingly simple syntax that ~~just works~~ except for when it doesn't. Not because it's actually faulty, but because it just doesn't quite succeed in hiding the complexity it tries to hide and inadvertently makes developers assume it does things that it doesn't actually do, which ends up in misunderstandings and perplexing gotchas.

For example, you're using async and at the same time automatically talk about multithreading, yet they have nothing to do with each other. You can use async to asynchronously await the completion of a Task that is executed on another thread (e.g. via Task.Run()), but the main point of async is that it allows for asynchronous code execution without the use of multithreading at its core. How exactly it actually does that I can't adequately explain (because I don't really know beyond a vague idea), but one of the consequences, as far as I understand, is that you can't actually write truly asynchronous methods yourself, unless your async method is just a wrapper that ultimately ends up awaiting a .NET framework method that implements "true" async.

If no such method exists for the things you need to accomplish, the best you can do is wrap the call in an awaited Task.Run() that makes it execute on a different thread - which will work - but will also kinda defeat the point of not using up and potentially starving the thread pool in scenarios where this matters (e.g. ASP.NET). That said, it's at least not going to be any worse than using raw multithreading.

I guess what irks me is that the difference between async/await and threading is not obvious enough unless you actually read and understand the documentation (the vast majority of developers will unfortunately not do that when copy/pasting from Stack Overflow is just so much easier), yet at the same time I don't really know how to make it more clear either. Maybe it'll just take time for this feature to sink in :shrug:

SirViver fucked around with this message at 17:44 on Jan 4, 2017
