|
GrumpyDoctor posted:I think calling Task.Run is easier than spinning up a BackgroundWorker, so is there something else I'm not thinking of? Marshalling progress back to the UI thread, unless I'm missing something about how Task.Run works.
|
# ? Apr 13, 2015 17:22 |
|
|
# ? Jun 6, 2024 16:50 |
|
Dapper performance question, if anyone can help: I'm using Dapper in a WinForms project. At some point, I have to shove a load of Foos (anything from 1 to probably around 100 at the top end) into the database. Dapper handles this pretty nicely: I can pass the SQL and the collection of Foos into the Execute method and it deals with it. Previously it was fine to just shove them in the db and forget about them, as that was the last time they were needed. Now, however, I need to populate their ID fields as they are inserted, because we need them for something afterwards. This is easy enough to do: I'll just add SELECT SCOPE_IDENTITY() on to the end of the SQL and use the Query() method instead. However, this does mean I can't just hand in the collection of Foos, because I need to individually update the ID property of each one, so I'll have to loop through the collection and hand each one into the Query method individually. Two questions here: is there a better way of doing this with Dapper, and is this going to perform horribly compared to the batch insert I was doing before? If it makes any difference, I'm doing all of this wrapped in a TransactionScope. chippy fucked around with this message at 18:05 on Apr 13, 2015 |
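A sketch of that per-row approach, with the caveat that the Foo class, the table name, and the column list are illustrative stand-ins rather than anything from the post. Each insert ends with SELECT SCOPE_IDENTITY() so Dapper's Query<int> can hand back the generated key:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using System.Transactions;
using Dapper;

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class FooInserter
{
    public static void InsertFoos(string connectionString, IEnumerable<Foo> foos)
    {
        const string sql =
            "INSERT INTO Foos (Name) VALUES (@Name); " +
            "SELECT CAST(SCOPE_IDENTITY() AS int);";

        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            foreach (var foo in foos)
            {
                // One round trip per Foo; the scalar result is the
                // identity value generated by this particular insert.
                foo.Id = connection.Query<int>(sql, foo).Single();
            }
            scope.Complete();
        }
    }
}
```

The extra cost over the old Execute batch is mostly per-row latency rather than anything Dapper does, so for ~100 rows it's usually fine locally, but it's worth measuring on the real network.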
# ? Apr 13, 2015 17:42 |
|
Ithaqua posted:Marshalling progress back to the UI thread, unless I'm missing something about how Task.Run works. I don't have a lot of personal experience with it, but IProgress<T> seems straightforward enough to me. I suppose it's a matter of taste.
|
# ? Apr 13, 2015 17:48 |
|
chippy posted:Dapper performance question, if anyone can help: p.s. I'm gonna test this myself, just hoping someone also happens to have a quick answer chippy fucked around with this message at 18:04 on Apr 13, 2015 |
# ? Apr 13, 2015 17:51 |
|
GrumpyDoctor posted:I don't have a lot of personal experience with it but IProgress<T> seems straightforward enough to me We in the VB/C#/.NET team at Microsoft are telling folks to use IProgress<T>, as the article mentions. And in Store/Phone apps, where there are WinRT APIs that return IAsyncOperationWithProgress, we wrap them up into a .NET IProgress<T>. So yes, use IProgress! Here's how your code would look if your DoStuff is IO-bound (e.g. reading a text file from disk line by line, or fetching a page from the internet). The nice thing here is that you don't even need a background thread, and don't need to worry about how to update the UI from it.
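A minimal sketch of the IO-bound async/await + IProgress<T> pattern described above, assuming the work reads a text file line by line inside a WinForms Form (the control names, method names, and file path are illustrative):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using System.Windows.Forms;

// Inside a Form class with a progressBar1 control on it.
public partial class MainForm : Form
{
    // async void is acceptable specifically for event handlers.
    private async void StartButton_Click(object sender, EventArgs e)
    {
        // Progress<T> captures the current SynchronizationContext, so the
        // callback below runs back on the UI thread automatically.
        var progress = new Progress<int>(lines => progressBar1.Value = lines);
        await DoStuffAsync("input.txt", progress);
    }

    private async Task DoStuffAsync(string path, IProgress<int> progress)
    {
        var lines = 0;
        using (var reader = new StreamReader(path))
        {
            string line;
            // ReadLineAsync yields the UI thread while waiting on the disk,
            // so no background thread is needed for IO-bound work.
            while ((line = await reader.ReadLineAsync()) != null)
            {
                lines++;
                if (lines % 100 == 0)
                    progress.Report(lines);  // marshalled to the UI thread
            }
        }
    }
}
```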
|
# ? Apr 13, 2015 19:07 |
|
chippy posted:p.s. I'm gonna test this myself, just hoping someone also happens to have a quick answer For 100 inserts? I would test it; I think you will be pleasantly surprised by the performance.
|
# ? Apr 14, 2015 05:38 |
|
Mr Shiny Pants posted:For 100 inserts? I would test it; I think you will be pleasantly surprised by the performance. Yeah, I'm not expecting it to take too long. I know that Dapper is pretty performant and this isn't a lot of data to be inserting. The problem is that this application is used at a factory where it's run on a lovely network, mostly on terminals with poor wireless connections, and all supported by a copy of SQL Server 2008 Express running on lovely old hardware. All of which they are very resistant to improving if it involves spending any money. There are hundreds of these inserts flying around all the time and they have performance issues already, so I have to be very careful with anything that might make it worse.
|
# ? Apr 14, 2015 09:21 |
|
Piss, it takes about twice as long. e: No it doesn't, I'm a twat. chippy fucked around with this message at 10:12 on Apr 14, 2015 |
# ? Apr 14, 2015 09:55 |
|
chippy posted:Yeah, I'm not expecting it to take too long. I know that Dapper is pretty performant and this isn't a lot of data to be inserting. The problem is that this application is used at a factory where it's run on a lovely network, mostly on terminals with poor wireless connections, and all supported by a copy of SQL Server 2008 Express running on lovely old hardware. All of which they are very resistant to improving if it involves spending any money. There are hundreds of these inserts flying around all the time and they have performance issues already, so I have to be very careful with anything that might make it worse. What you are describing is basically a microcosm of the public internet. I would think about pushing the communications layer to an HTTP-based API; that will do a lot better in spotty networking conditions than sending SQL down the line.
|
# ? Apr 14, 2015 13:28 |
|
Ithaqua posted:What is DoStuff doing? If it's an I/O bound operation (reading files, getting stuff from a web service, etc), use async/await. If it's a long-running CPU-bound operation, use Tasks or Background Workers. Background workers are conceptually simpler than Tasks. Yeah, I tried the multi-threading thing and didn't really know what I was doing, so I abandoned that route. I want to make a button that, when clicked, will pause the operation after the current block has been executed, and not pause it in the middle of the loop, so I don't think the multi-threaded thing would suit me anyway. Basically I'm just relocating files from one server to another and renaming them along the way, and I want the user to be able to pause at their leisure, but only after the current file rename is completed, to avoid any issues. I'm checking out await but I'm lost when it comes to attaching it to a button click, especially since the UI is locked during execution... in which case, is making the application double-threaded the only way to go then? I know this is kind of what async is for, but I can't find any example online that actually uses await to pause anything completely until the button is clicked again; all I can find are examples of a wait timer. LiterallyAnything fucked around with this message at 15:22 on Apr 14, 2015 |
# ? Apr 14, 2015 15:16 |
|
Brady posted:Yeah I tried the multi-threading thing and didn't really know what I was doing so I abandoned that route. I want to make a button that, when clicked, will pause the operation after the current block has been executed, and not pause it in the middle of the loop, so I don't think the multi-threaded thing would suit me anyway. Basically I'm just relocating files from one server to another and renaming them along the way and I want the user to be able to pause at their leisure, but only after the current file rename is completed to avoid any issues. All code in .NET is executed on a thread. When you are in a GUI, by default everything runs on the UI thread. The UI thread is responsible for listening to UI events (button click, window drag/drop) and coordinating repaints of the UI. Since it's handling button clicks, if you don't move long-running work to another thread, that work consumes all of the UI thread's CPU time and the UI thread can't listen to UI events or schedule repaints. If you want to do long-running work and have a responsive UI, you need to do the work on another thread. The best options right now are BackgroundWorker, the Task Parallel Library (using Task.Run), or async/await if the library exposes async (Task-returning) methods, like HttpClient does. Those options are listed from worst to best. Allowing the user to stop the action without killing the process is something the programmer has to build in. This is a more advanced scenario that requires use of a CancellationToken.
I found a Console application example. The same process will apply to the UI. I would have Start and Cancel buttons. When you Start, create a CancellationTokenSource and pass its token into your long-running job, which is probably spawned from Task.Run. Now the Cancel button is clickable and will cancel the CancellationToken. You can control when the CancellationToken is observed in your long-running process and stop when you finish a unit of work. Combine this with ljw1004's IProgress example and you have an awesome application! If you want to learn more about concurrent programming I suggest Concurrency in C# Cookbook. It skips Threads, which you should 99.99% never use if you are on .NET 4+, but goes into everything I mentioned above plus Reactive Extensions. gariig fucked around with this message at 15:41 on Apr 14, 2015 |
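A rough sketch of that Start/Cancel shape, checking the token only between files so a rename is never interrupted mid-operation. The file-moving helper and the field names are illustrative, not from the post:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

public partial class MoverForm : Form
{
    private CancellationTokenSource _cts;
    private IEnumerable<string> _files;  // populated elsewhere

    private void StartButton_Click(object sender, EventArgs e)
    {
        _cts = new CancellationTokenSource();
        // Long-running work moved off the UI thread.
        Task.Run(() => MoveFiles(_files, _cts.Token));
    }

    private void CancelButton_Click(object sender, EventArgs e)
    {
        // Signals the token; it is observed at the next loop
        // iteration, never in the middle of a rename.
        _cts.Cancel();
    }

    private void MoveFiles(IEnumerable<string> files, CancellationToken token)
    {
        foreach (var file in files)
        {
            // Only check between units of work: the current file
            // always finishes before we stop.
            if (token.IsCancellationRequested)
                return;
            MoveAndRenameFile(file);  // hypothetical helper
        }
    }

    private void MoveAndRenameFile(string file) { /* File.Move etc. */ }
}
```

As noted below, cancellation and pausing aren't quite the same thing; a resumable pause would additionally need to remember where in the file list it stopped.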
# ? Apr 14, 2015 15:39 |
|
wwb posted:What you are describing is basically a microcosm of the public internet. I would think about pushing the communications layer to an HTTP-based API; that will do a lot better in spotty networking conditions than sending SQL down the line. This also. For spotty connections, something that was built to handle them would be much better than raw SQL connections.
|
# ? Apr 14, 2015 16:54 |
|
I would absolutely love to be able to do something like that, but I would never be given the time to do it. The application is a years-old hulking mess cobbled together by a non-programmer, and most parts of it are too far gone for anything other than a major rewrite. There's no core application, just a load of WinForms clients throwing SQL at the database. I posted some code in the Coding Horrors thread once and got a lot of condolences and "get out as soon as you can" type responses. It's just me, bolting on new features on my own, to unreasonably short timescales.
|
# ? Apr 14, 2015 17:15 |
|
gariig posted:Allowing the user to stop the action requires the programmer to allow it without the user killing the Process. This is a more advanced scenario that requires use of a CancellationToken. I found a Console application example. The same process will apply to the UI. I would have Start and Cancel buttons. When you Start create a CancellationToken and pass it into your long running job that is probably spawned from Task.Run. Now the Cancel button is clickable and will cancel the CancellationToken. You can control when the CancellationToken is observed in your long running process and stop when you finish a unit of work. Combine this with ljw1004 IProgress example and you have an awesome application! Cancellation and pausing aren't really the same thing.
|
# ? Apr 15, 2015 02:37 |
I have a weird problem that is driving me (and my end users) a bit nuts. Using WPF, I have a ListBox with its ItemsSource bound to an ObservableCollection. I have a DataTemplate inside of that ListBox containing a few Buttons, TextBoxes, CheckBoxes, etc. What my users want is to be able to press up or down on the keyboard and end up on the same control in the previous or next item in the ListBox. So for example: we are in the 2nd textbox in the 4th item. The user presses the down arrow key. We are now in the 2nd textbox in the 5th item. How could I accomplish this? I would use a DataGrid, but that seems to introduce a whole host of other problems, as this ListBox is itself inside of a DataTemplate inside of a TabControl with its tabs bound to an ObservableCollection. DataGrids lack a lot of the dependency properties needed to persist information and hide/show columns without really muddying up the view or re-implementing my own version of DataGrid. Basically, with the tab control's visualization, DataGrids can be a nightmare.
|
|
# ? Apr 15, 2015 14:55 |
|
I think you'll want to handle the KeyDown event for those elements in your View. If you name all of your input controls that you care about, you can just attach the same event handler for each of them. The handler would detect whether you pressed Up or Down, grab the name of the sender control, traverse the parent controls to find the item above or below it, then find the control with the same name. That's how I might try it off the top of my head anyway, someone else might have a better idea.
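A rough sketch of that KeyDown idea, hedged heavily: the handler name, the assumption that the template controls share an x:Name, and the tree-walking helpers are all illustrative, and container virtualization can defeat the lookup:

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;

public partial class ItemsView
{
    // Attached as the KeyDown handler on each named control in the template.
    private void ItemControl_KeyDown(object sender, KeyEventArgs e)
    {
        if (e.Key != Key.Up && e.Key != Key.Down)
            return;

        var source = (FrameworkElement)sender;
        var item = FindAncestor<ListBoxItem>(source);
        var listBox = ItemsControl.ItemsControlFromItemContainer(item) as ListBox;
        if (item == null || listBox == null)
            return;

        int index = listBox.ItemContainerGenerator.IndexFromContainer(item);
        int target = index + (e.Key == Key.Down ? 1 : -1);
        if (target < 0 || target >= listBox.Items.Count)
            return;

        // Grab the neighbouring row's container and find the control
        // with the same name inside its template.
        var container =
            listBox.ItemContainerGenerator.ContainerFromIndex(target) as ListBoxItem;
        if (container == null)
            return;  // may be virtualized away

        var twin = FindNamedChild(container, source.Name);
        if (twin != null)
        {
            twin.Focus();
            e.Handled = true;
        }
    }

    private static T FindAncestor<T>(DependencyObject d) where T : DependencyObject
    {
        while (d != null && !(d is T))
            d = VisualTreeHelper.GetParent(d);
        return (T)d;
    }

    private static FrameworkElement FindNamedChild(DependencyObject parent, string name)
    {
        for (int i = 0; i < VisualTreeHelper.GetChildrenCount(parent); i++)
        {
            var child = VisualTreeHelper.GetChild(parent, i);
            var fe = child as FrameworkElement;
            if (fe != null && fe.Name == name)
                return fe;
            var deeper = FindNamedChild(child, name);
            if (deeper != null)
                return deeper;
        }
        return null;
    }
}
```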
|
# ? Apr 15, 2015 15:06 |
|
wilderthanmild posted:I have a weird problem that is driving me (and my end users) a bit nuts. Get the current tabindex from the other active control and set the tabindex on the control being navigated to? Seems like there is a keyboard focus property, maybe you can get this and set it to the element navigated to: https://msdn.microsoft.com/en-us/library/aa969768.aspx#Keyboard_Navigation Mr Shiny Pants fucked around with this message at 16:58 on Apr 15, 2015 |
# ? Apr 15, 2015 16:51 |
|
wilderthanmild posted:Basically, with the tab control's visualization, DataGrids can be a nightmare. In my experience, DataGrids are more trouble than they're worth in most cases, maybe to the point of them being an obsolete holdover from WinForms. The fact that the ListView control has a built-in GridView style that's much easier to work with and customize supports my theory. Maybe it's just there to support binding to a DataTable, but again, I think that's the old way of doing things. wilderthanmild posted:Using WPF I have a ListBox with its ItemSource bound to an ObservableCollection. I have a DataTemplate inside of that ListBox containing a few Buttons, TextBoxes, Checkboxes, etc. What my users want is to be able to press up or down on the keyboard and end up on the same control in the previous or next item in the listbox. Edit: Come to think of it, you may be able to accomplish what you want by using a ListView configured as a GridView, rather than using a ListBox. I THINK that the default behavior of navigating between records in a GridView is to keep the focus on the same column, but I haven't used one recently enough to be sure. Edit the second: http://www.wpf-tutorial.com/listview-control/listview-with-gridview/ <-- Quick tutorial about it Che Delilas fucked around with this message at 21:38 on Apr 15, 2015 |
# ? Apr 15, 2015 21:26 |
|
I set up a datagrid yesterday for a view to allow multiple entries and I'm so glad this conversation is happening because wow... F' datagrids.
|
# ? Apr 15, 2015 23:58 |
|
I would post this in the coding horrors thread but it's really only going to make sense to .NET developers. I'm authoring a plugin for some software and just asked on the developer forums why, if all their API objects implement IDisposable, Dispose is never called in any example code or mentioned in any of their documentation, anywhere. The response was a link to the MSDN page for IDisposable.
|
# ? Apr 16, 2015 20:51 |
|
How do you even respond to something like that?
|
# ? Apr 16, 2015 20:52 |
|
Do they override Finalize at least? Still stupid not to demonstrate the ability to call Dispose though.
|
# ? Apr 16, 2015 21:06 |
|
Most of the people making plugins for this thing are novice programmers; I'm hoping that the response I got assumed that my question was "What's this IDisposable thingy?" rather than "Why do you people not appear to understand what you're doing?" But it's real annoying when your reward for not calling someone an idiot is getting called one yourself.
|
# ? Apr 16, 2015 21:06 |
|
Their example code doesn't include a lot of using blocks, does it?
|
# ? Apr 16, 2015 22:18 |
|
Yesterday I was trying to speed up a fairly intensive procedure when I noticed that, no matter how much I cut down on the work it had to do, it wouldn't go under 5-6 seconds of execution time. Then I realised that most of that time was actually being spent on the initial setup: looping through the DataTable containing the query results and setting up the business logic objects based on its contents. But this DataTable was barely a thousand rows, and the "building" wasn't doing any real work either ("if [Type] = "A" then add ([Name], [Quantity]) to SomeList else add to SomeOtherList", stuff like that) - I couldn't fathom why it would be so slow. Almost on a whim, I tried changing the For Each loop to a For loop. I fully expected nothing to happen... instead, lo and behold, the loop became almost instant (still noticeable, but well under a second). Does that sort of behaviour ring a bell for anyone? Everything I can google says that the difference in iteration speed should be insignificant. Casting to IEnumerable couldn't possibly have triggered some schema validation or anything of the sort, I hope?
|
# ? Apr 16, 2015 23:00 |
|
NihilCredo posted:Yesterday I was trying to speed up a fairly intensive procedure when I noticed that, no matter how much I cut down on the work it had to do, it wouldn't go under 5-6 seconds execution time. This is all from memory: Probably an implicit cast is happening. I assume you're doing foreach (DataRow row in whatever.Rows), right? Data tables are pre-generics... Rows is a RowCollection or something similar, and getting stuff out of it requires casting back from object. You can validate if I'm insane or not by changing the "DataRow" in the foreach to "var".
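The difference Ithaqua describes, sketched in C# (DataRowCollection predates generics and only implements the non-generic IEnumerable, so a typed foreach variable compiles in a cast on every iteration; the table contents here are made up):

```csharp
using System.Data;

public static class RowLoops
{
    public static void Demo()
    {
        var table = new DataTable();
        table.Columns.Add("Name", typeof(string));
        for (int i = 0; i < 1000; i++)
            table.Rows.Add("row" + i);

        // Typed loop variable: the compiler inserts a cast from object
        // to DataRow on every iteration of the non-generic enumerator.
        foreach (DataRow row in table.Rows)
        {
            var name = (string)row["Name"];
        }

        // Indexed loop: the Rows[i] indexer is already typed as DataRow,
        // so there is no per-item cast from the enumerator.
        for (int i = 0; i < table.Rows.Count; i++)
        {
            var name = (string)table.Rows[i]["Name"];
        }
    }
}
```

That said, a reference-type cast is normally very cheap, so a profiler run is the only way to confirm whether the cast (rather than, say, column-name lookup inside the loop) is really the hot path.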
|
# ? Apr 16, 2015 23:07 |
|
Ithaqua posted:This is all from memory: Probably an implicit cast is happening. I assume you're doing foreach (DataRow row in whatever.Rows), right? Data tables are pre-generics... Rows is a RowCollection or something similar, and getting stuff out of it requires casting back from object. You can validate if I'm insane or not by changing the "DataRow" in the foreach to "var". That makes a worrying amount of sense. I'll try that tomorrow, thanks! It seems odd to me that casting would be that expensive, especially when I'm checking for the "original" type. You'd think that looking at the type token and seeing that yep, it matches, would be pretty fast; if nothing else, the fastest of all possible casts. If that's indeed the guilty party, then at least I can keep using the prettier For Each as long as I drop the As DataRow part - and since accessing cells by column name is already horribly unsafe anyway, that's quite fine. But I'll also have to quit using LINQ on any DataTable of non-negligible size.
|
# ? Apr 17, 2015 00:20 |
|
Che Delilas posted:Their example code doesn't include a lot of using blocks does it? It includes none. I only discovered that IDisposable was even involved when I tried to write something in F#, which emits warnings unless you instantiate IDisposables slightly differently from other objects. Another tip I just received: You can use using blocks to automatically call Dispose for you! But you have to make sure the object isn't needed anymore!
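What that tip amounts to, for anyone following along (MyGeometry stands in for whichever IDisposable API type is involved; the second form shows roughly what the compiler generates for the first):

```csharp
using System;

// Hypothetical disposable type, standing in for the plugin API's objects.
public sealed class MyGeometry : IDisposable
{
    public void DoSomething() { }
    public void Dispose() { /* release unmanaged resources */ }
}

public static class DisposeDemo
{
    public static void Run()
    {
        // 'using' guarantees Dispose is called when the block exits,
        // even if an exception is thrown inside it.
        using (var geometry = new MyGeometry())
        {
            geometry.DoSomething();
        }   // geometry.Dispose() runs here; the object must not be used after

        // Roughly equivalent expansion of what the compiler emits:
        var g = new MyGeometry();
        try
        {
            g.DoSomething();
        }
        finally
        {
            if (g != null)
                g.Dispose();
        }
    }
}
```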
|
# ? Apr 17, 2015 03:55 |
|
GrumpyDoctor posted:Another tip I just received: You can use using blocks to automatically call Dispose for you! But you have to make sure the object isn't needed anymore! "You mean I have to make sure the object isn't needed outside of the scope in which it was instantiated? Teach me more about this crazy language, Programmer-Sempai!"
|
# ? Apr 17, 2015 04:13 |
|
Che Delilas posted:"You mean I have to make sure the object isn't needed outside of the scope in which it was instantiated? Teach me more about this crazy language, Programmer-Sempai!" This has bitten me more than once with streams...
|
# ? Apr 17, 2015 06:31 |
|
gently caress me, ListViews in WinForms are the WORST thing.
|
# ? Apr 17, 2015 11:16 |
|
GrumpyDoctor posted:Most of the people making plugins for this thing are novice programmers; I'm hoping that the response I got assumed that my question was "What's this IDisposable thingy?" rather than "Why do you people not appear to understand what you're doing?" But it's real annoying when your reward for not calling someone an idiot is getting called one yourself. Could they be handling the instantiation / disposal of the plugin? If they are doing so they probably don't want userland, so to speak, disposing things . . .
|
# ? Apr 17, 2015 13:52 |
|
wwb posted:Could they be handling the instantiation / disposal of the plugin? If they are doing so they probably don't want userland, so to speak, disposing things . . . Even if they were, they shouldn't make their public API objects disposable if they don't want them to be disposed.
|
# ? Apr 17, 2015 14:23 |
|
wwb posted:Could they be handling the instantiation / disposal of the plugin? If they are doing so they probably don't want userland, so to speak, disposing things . . . Not the plugin objects. Everything. The thing's a 3D modeler, and every geometry object complicated enough to be a class is IDisposable.
|
# ? Apr 17, 2015 14:35 |
|
Bognar posted:Even if they were, they shouldn't make their public API objects disposable if they don't want them to be disposed. You'd probably have to make IPlugin disposable if your code just sees collections of IPlugins to manage. Not a good way to make that non public. @GrumpyDoctor -- OK they are just retards then.
|
# ? Apr 17, 2015 16:30 |
|
I'm interested in improving the performance of some CPU-bound functions. The functions are already async and my UI is responsive; it's just that the tasks can take several minutes. It's a procedure which optimizes the order and placement of materials. Running all possible combinations is complicated, as is the scoring to determine the best fit. Would using F# in any way improve performance on procedures that rely heavily on loops and lists as containers?
|
# ? Apr 17, 2015 19:34 |
|
GrumpyDoctor posted:Not the plugin objects. Everything. The thing's a 3D modeler, and every geometry object complicated enough to be a class is IDisposable.
|
# ? Apr 17, 2015 19:38 |
|
crashdome posted:I'm interested in improving performance of some CPU bound functions. The functions are already async and my UI is responsive. it's just the tasks can take several minutes. Its a procedure which optimizes order and placement of materials. Running all possible combinations is complicated as is the scoring to determine best fit. Would using F# in any way improve performance on procedures that rely heavily on loops and lists as containers? You say they're already "async", but async isn't about performance, it's about responsiveness. Are you actually parallelizing the tasks? Are the tasks actually parallelizable?
|
# ? Apr 17, 2015 19:38 |
|
I only mentioned async because I wanted to note it wasn't about UI performance but rather function performance. I tried parallelizing, but certain tasks are themselves several minutes long. The function is simple to describe in human terms but very complex behind the scenes due to a lot of checking and repositioning: 1) Take a group of master materials (about 6-7 items), each one being a unique material. 2) For every master material, take about 100+ items of various sizes, shapes, and even due dates and position them as a best fit in several "runs" based on some very complex settings. It's basically a brute-force best-fit optimization routine. I can parallelize each master material, as the sub-items are unique to each master item. However, I have to iterate all 100+ sub-items, find the best run, remove them from the list, and start again until I have a good schedule. I have a feeling I'm losing performance in object creation (most likely list creation/manipulation) and am starting to rewrite using smaller classes within the optimization routine itself. I just thought maybe F# had some benefits with this type of procedure.
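Since each master material's sub-items are independent, the outer loop is a natural fit for data parallelism. A sketch, with Material, Schedule, and OptimizeRuns all being hypothetical stand-ins for whatever the real routine uses:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class Material { public List<SubItem> SubItems = new List<SubItem>(); }
public class SubItem { }
public class Schedule { }

public static class Optimizer
{
    public static ICollection<Schedule> OptimizeAll(IEnumerable<Material> masters)
    {
        // One independent optimization per master material; results go
        // into a thread-safe bag since the materials share no state.
        var schedules = new ConcurrentBag<Schedule>();

        Parallel.ForEach(masters, material =>
        {
            // Hypothetical brute-force best-fit routine: repeatedly pick
            // the best run from the remaining ~100 sub-items.
            Schedule schedule = OptimizeRuns(material);
            schedules.Add(schedule);
        });

        return schedules;
    }

    private static Schedule OptimizeRuns(Material material)
    {
        // ... the actual best-fit loop lives here ...
        return new Schedule();
    }
}
```

With only 6-7 materials this caps the speedup at the number of masters, so profiling the inner loop (allocation-heavy list manipulation is a common culprit) is still worth doing.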
|
# ? Apr 17, 2015 19:58 |
|
|
|
It's my understanding that many of the constructs in F# (e.g. immutability, seq vs LINQ) actually decrease performance in favor of code clarity or concision. Ultimately, though, both languages compile down to IL - but I feel that C# gives you more opportunities to do low level performance tweaks since it's mutable by default. It's unlikely that you're running into issues with object allocation, but it is very believable that you could be doing too many operations that copy data (e.g. lots of .ToList calls). It's really hard to point to anything without actually seeing your code, though. As always with performance optimization, don't optimize until you know what's slow. Run a profiler and inspect the results before changing code.
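The ".ToList copies" point in concrete form; the Item shape and the query are made up, and `items` is assumed to be some IEnumerable<Item> already in scope:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Item
{
    public bool Fits { get; set; }
    public DateTime DueDate { get; set; }
}

public static class CopyDemo
{
    public static void Run(IEnumerable<Item> items)
    {
        // Copying pitfall: every ToList() allocates a new list and
        // copies the whole sequence before the next operator runs.
        List<Item> wasteful = items.Where(i => i.Fits).ToList()
                                   .OrderBy(i => i.DueDate).ToList()
                                   .Take(10).ToList();

        // Deferred execution: the operators compose lazily and the
        // sequence is materialized exactly once at the end.
        List<Item> better = items.Where(i => i.Fits)
                                 .OrderBy(i => i.DueDate)
                                 .Take(10)
                                 .ToList();
    }
}
```

Whether this matters in any given loop is exactly the kind of thing a profiler settles faster than guessing.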
|
# ? Apr 17, 2015 20:33 |