Sab669
Sep 24, 2009

Thanks guys <3
Downloading a copy from DreamSpark at an excruciatingly slow rate. Or maybe it's just my new apartment's lovely internet :sigh: Went from a 100 mbps connection down to 30.

Sab669 fucked around with this message at 03:55 on Oct 9, 2013


ManoliIsFat
Oct 4, 2002

You kids these days. 4 megabytes a second "ohmygod what am i in loving rural kansas?" In my day we would download stuff for weeks

epswing
Nov 4, 2003

Soiled Meat

ManoliIsFat posted:

You kids these days. 4 megabytes a second "ohmygod what am i in loving rural kansas?" In my day we would download stuff for weeks

And go apeshit if someone picked up the phone.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
In my day we wrote our ones and zeroes on stone tablets, and sometimes we didn't have any zeroes

Fuck them
Jan 21, 2011

and their bullshit
:yotj:
What's the scoop on HTTPContext, and session tracking?

I'm cleaning up some code and my controller is doing a lot of logic that should be wrapped up in a call from the repository. However, this little thing gave my inexperienced self pause:

code:
  user = HttpContext.Session.Item("user")
Something tells me this might be worth keeping in the Controller, instead of in the repository. I talked to the team lead, and he basically said my intuition was spot on, since you can get some weird, weird bugs with that.

Then I realized I really don't know poo poo about how session is tracked among users :downs:

So, what's behind all the magic?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

2banks1swap.avi posted:

What's the scoop on HTTPContext, and session tracking?

I'm cleaning up some code and my controller is doing a lot of logic that should be wrapped up in a call from the repository. However, this little thing gave my inexperienced self pause:

code:
  user = HttpContext.Session.Item("user")
Something tells me this might be worth keeping in the Controller, instead of in the repository. I talked to the team lead, and he basically said my intuition was spot on, since you can get some weird, weird bugs with that.

Then I realized I really don't know poo poo about how session is tracked among users :downs:

So, what's behind all the magic?

Session stores things in the user's current session. You can define any number of session storage mechanisms (e.g. cookies, or in-process in IIS, or a third-party tool like NCache). So the expiration is not necessarily something that's predictable, and can indeed lead to weird bugs. Don't store things in the session.
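
A minimal sketch of what that advice looks like in practice (ASP.NET MVC; IOrderRepository and GetOrdersFor are hypothetical names): the controller does the session read, and the repository only ever sees plain values, so it can't trip over session expiry or whatever storage mechanism happens to be configured.

C# code:
using System.Web.Mvc;

public class OrdersController : Controller
{
    private readonly IOrderRepository repository; // assumed to be injected

    public OrdersController(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        // Session access stays at the web boundary...
        var user = Session["user"] as string;
        if (user == null)
            return RedirectToAction("LogOn", "Account");

        // ...and the data layer gets an ordinary parameter.
        var orders = repository.GetOrdersFor(user);
        return View(orders);
    }
}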

Sab669
Sep 24, 2009

ManoliIsFat posted:

You kids these days. 4 megabytes a second "ohmygod what am i in loving rural kansas?" In my day we would download stuff for weeks

Hey I lived on dial up too, once :v:

Essential
Aug 14, 2003
What's the largest amount of data you guys feel comfortable with sending from a client (in this case desktop app) up to a wcf service (or any kind of REST/web service)?

For instance, if you had a 100mb json file would you pipe the whole file up? If not how small would you cut/block the data up?

There are numerous advantages to sending smaller blocks of data; I just don't really know how small to go. On the service side is an Entity Framework insert/update, so I have to keep the performance on that end in mind. From what I've read the sweet spot seems to be about 500 records per insert/update, so I'm kind of using that as a benchmark.

epswing
Nov 4, 2003

Soiled Meat
So you want to (A) upload a file and (B) save its contents to a database. I don't know anything more about what you're trying to do, but you might want to focus on A and B separately, if it helps your cause.

(A): WCF has limits. If it's really a json textfile you could consider compressing it on the client side before sending it over the wire, and that alone might keep you under your WCF limits.

(B): EF has limits. Once the file is safely aboard the server filesystem, you can uncompress it and break it up into EF-sized chunks at your leisure, without worrying about holding a WCF request open for the duration.

This way, your EF problem doesn't affect your WCF problem (and vice versa), and you can tackle them each with the best option at your disposal.

(Again, I don't know what you're doing, so this might not make any sense.)

epswing fucked around with this message at 06:25 on Oct 10, 2013
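
For point (A), a rough sketch of the client-side compression idea (a hypothetical helper built on the framework's GZipStream): gzip the JSON before it goes over the wire, and reverse it on the server.

C# code:
using System.IO;
using System.IO.Compression;
using System.Text;

public static class JsonWire
{
    public static byte[] Compress(string json)
    {
        var raw = Encoding.UTF8.GetBytes(json);
        using (var output = new MemoryStream())
        {
            // Dispose the gzip stream before reading, so its footer is flushed.
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(raw, 0, raw.Length);
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
            return reader.ReadToEnd();
    }
}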

Essential
Aug 14, 2003

epalm posted:

(A): WCF has limits. If it's really a json textfile you could consider compressing it on the client side before sending it over the wire, and that alone might keep you under your WCF limits.

You can open WCF up to receive messages/data of any size, so it's not a limit on WCF that I'm running into; it's more of a best-practices kind of thing.

epalm posted:

(B): EF has limits. Once the file is safely aboard the server filesystem, you can uncompress it and break it up into EF-sized chunks at your leisure, without worrying about holding a WCF request open for the duration.

I'm already doing this, as once the wcf service receives the data it calls SaveChanges() every 500 adds/updates.

By far the biggest bottleneck I'm running into is using entity framework for inserting the rows. Because it's raw json data I'm sending I may just dump entity framework and use sql adapters to insert/update the data. With entity framework it's taking around 15 minutes to insert 28k records into my test table. That's calling SaveChanges() every 500 records.

EDIT: You got me thinking though, another way to do this would be to upload the data to blob storage and then have an azure web/worker role grab the data and then insert into the database. I would still have the issue on inserting/updating the data, but it could be done this way.

EDIT2: I think this might be a good time to implement async/await on the client side. I could chunk the file on the client and then async/await each upload & insert. This process is going to take some time with the amount of data that needs to be sent, it would be nice to provide a UI that shows each step in the process and a realistically timed progress bar to the user.

Essential fucked around with this message at 07:05 on Oct 10, 2013

Fuck them
Jan 21, 2011

and their bullshit
:yotj:
At work we use VS2010, and we repeatedly run into an issue when trying to update service references that can easily spiral out and take up hours of our time. Using the update tool is problematic; it seems to come down to the [OperationContract] attribute not being applied to a method on an interface that Visual Studio looks at to set everything up.

Is the solution to just find your service contract annotated interface and manually do it, or is there a fix that could be applied so you can just right click and keep on trucking?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

2banks1swap.avi posted:

At work we use VS2010, and we repeatedly run into an issue when trying to update service references that can easily spiral out and take up hours of our time. Using the update tool is problematic; it seems to come down to the [OperationContract] attribute not being applied to a method on an interface that Visual Studio looks at to set everything up.

Is the solution to just find your service contract annotated interface and manually do it, or is there a fix that could be applied so you can just right click and keep on trucking?

Rather than dealing with service references, especially with WCF, I've always just had a Contracts assembly that contained all my service interfaces that was shared between projects. That way there's only one place to update. And of course, I always go out of my way to avoid changing my contract, especially if the service is something public-facing where I can't control who consumes it. In that case, changing your contract means you're potentially breaking any third-party consumers.
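
A minimal sketch of the shared-contracts approach (IOrderService, Order, and the address are hypothetical): both sides reference the same assembly, and the client builds its proxy with ChannelFactory, so there's no generated code to regenerate.

C# code:
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Order
{
    [DataMember] public int Id { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    Order GetOrder(int id);
}

// Client side: no "Update Service Reference", just the shared interface.
public class OrderClient
{
    public Order FetchOrder(int id)
    {
        var factory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/orders.svc"));
        var channel = factory.CreateChannel();
        try
        {
            return channel.GetOrder(id);
        }
        finally
        {
            ((IClientChannel)channel).Close();
        }
    }
}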

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Essential posted:

What's the largest amount of data you guys feel comfortable with sending from a client (in this case desktop app) up to a wcf service (or any kind of REST/web service)?

For instance, if you had a 100mb json file would you pipe the whole file up? If not how small would you cut/block the data up?

There are numerous advantages to sending smaller blocks of data; I just don't really know how small to go. On the service side is an Entity Framework insert/update, so I have to keep the performance on that end in mind. From what I've read the sweet spot seems to be about 500 records per insert/update, so I'm kind of using that as a benchmark.

Can you possibly go into greater detail about what you are trying to do? Generally when you are using JSON you are talking about inserting that data into the DOM of a client (Web API to Knockout/Angular/etc). In those cases small JSON of 50K or less is generally what you want to do.

If this is some sort of bulk retrieval of data I would probably switch technologies. I would probably stand up a secure FTP server and use SSIS or SQLBulkCopy to load the data. An ORM isn't really meant to handle large loads of data like 500MB of JSON.

Essential
Aug 14, 2003

gariig posted:

Can you possibly go into greater detail about what you are trying to do? Generally when you are using JSON you are talking about inserting that data into the DOM of a client (Web API to Knockout/Angular/etc). In those cases small JSON of 50K or less is generally what you want to do.

If this is some sort of bulk retrieval of data I would probably switch technologies. I would probably stand up a secure FTP server and use SSIS or SQLBulkCopy to load the data. An ORM isn't really meant to handle large loads of data like 500MB of JSON.

I'm taking data from a local database and the first time the data is retrieved I will be getting the last couple years worth. So the only really large data transfer is happening on that first run. Then I need to get the data into an Azure SQL database.

The data has to move across the wire, so JSON seems like the smallest format, and I can always compress/decompress to get it even smaller. So whether I use a WCF service, data to blob storage, or secure FTP isn't really what has me concerned. I'll always have a high-speed internet connection when moving the data. And I can deserialize the JSON right back into my entities.

I think you are right about SqlBulkCopy. I have been looking at that and the performance boost is massive over using Entity Framework. EF was very easy to setup though and I was hoping for better performance so I wouldn't have to do anything else.

Edit to add some EF related stuff:
I'm seeing posts on SO where people are able to get acceptable performance when doing large inserts with EF (20k records in 10 seconds). There are a couple of configuration options to set (turning off AutoDetectChangesEnabled and ValidateOnSaveEnabled). The first time I ran a test after setting those I inserted 5k records in about 3 seconds; however, subsequent test runs were much slower, in the 30-second to 1-minute range. The bottleneck may be that my Azure SQL db isn't on the same virtual machine as the web role that inserts the data.

Essential fucked around with this message at 16:54 on Oct 10, 2013
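
For reference, a sketch of those two settings plus the batching described above (MyDbContext and Record are hypothetical; EF 5 exposes both flags on DbContext.Configuration):

C# code:
public static void BulkAdd(IEnumerable<Record> records)
{
    using (var db = new MyDbContext())
    {
        // Skip per-entity change detection and validation during the bulk add.
        db.Configuration.AutoDetectChangesEnabled = false;
        db.Configuration.ValidateOnSaveEnabled = false;

        int count = 0;
        foreach (var record in records)
        {
            db.Records.Add(record);
            if (++count % 500 == 0)
                db.SaveChanges();   // flush every 500 adds, as above
        }
        db.SaveChanges();           // flush the remainder
    }
}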

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Essential posted:

I'm taking data from a local database and the first time the data is retrieved I will be getting the last couple years worth. So the only really large data transfer is happening on that first run. Then I need to get the data into an Azure SQL database.

Why are you moving the data piecemeal? Why not just ship the whole thing to Azure over a weekend? If you have a lot of bandwidth between your SQL Server and Azure, SSIS is probably the easiest way to do this.

I would poke through MSDN to see what Microsoft suggests.

EDIT: You really should be asking this question in the Database thread. If this is for your company you might want to contract out a DBA to do this. This should be a fairly simple thing to do for a DBA.

Polidoro
Jan 5, 2011


Huevo se dice argidia. Argidia!
My copy of C# in Depth finally arrived this morning. Just in time for me not having time to read it for months :smith:

Essential
Aug 14, 2003

gariig posted:

Why are you moving the data piecemeal? Why not just ship the whole thing to Azure over a weekend? If you have a lot of bandwidth between your SQL Server and Azure, SSIS is probably the easiest way to do this.

I would poke through MSDN to see what Microsoft suggests.

EDIT: You really should be asking this question in the Database thread. If this is for your company you might want to contract out a DBA to do this. This should be a fairly simple thing to do for a DBA.

At this point I'm not moving the data piecemeal, I'm uploading the whole file at once. The data is not coming from SQL Server; it's coming from up to 4 different database types (c-tree being one of them). And this has to go out to around 500 offices, so I can't just fire it off and let it run over the weekend.

I get your point about this being more of a database question though.

I just found an ORM to SQLBulkCopy piece that looks pretty good and should be fairly easy to implement.

EDIT: Just implemented the above SqlBulkCopy and I just inserted 100k rows in 30 seconds. gariig, you were definitely right about using that; it's vastly faster. To recap: the EF bulk update was taking 15 minutes, and SqlBulkCopy took 30 seconds.

Essential fucked around with this message at 21:55 on Oct 10, 2013
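
For anyone following along, a bare-bones SqlBulkCopy sketch (MyRecord and the table/column names are hypothetical): stage the deserialized records in a DataTable and write them in bulk, which is where the 15-minutes-to-30-seconds difference comes from.

C# code:
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static void BulkInsert(string connectionString, IEnumerable<MyRecord> records)
{
    var table = new DataTable();
    table.Columns.Add("Id", typeof(int));
    table.Columns.Add("Name", typeof(string));
    foreach (var r in records)
        table.Rows.Add(r.Id, r.Name);

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.MyRecords";
        bulk.BatchSize = 5000;          // rows per round trip to the server
        bulk.WriteToServer(table);
    }
}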

Rooster Brooster
Mar 30, 2001

Maybe it doesn't really matter anymore.

Polidoro posted:

My copy of C# in Depth finally arrived this morning. Just in time for me not having time to read it for months :smith:

I already have such a huge pile of books that I don't have time to get through that I didn't even bother to order the new C# in Depth at all :smith:

Opulent Ceremony
Feb 22, 2012
Architecture question for Automapper people out there. I'm using MVC 3 and EF 5 Code First. I think Automapper's Project().To is pretty cool, but it looks like it needs the IQueryable of the DbContext exposed (which makes sense, since you want it to only select relevant columns before the db is actually queried).

I'm more used to DAL methods taking and returning Domain objects, since we keep them lean and without methods. Over here we've got the Code First models as our Domain objects, so the projection would be into ViewModels, which I don't think I'd want that DAL assembly to be aware of.

A concrete example would be a view-all-people page where the ViewModel would take a set of Persons, but we would only need a fraction of the actual fields from that object (say just the Id and Name) for that view.

How do you set this up in an organized fashion? Do you just give in and expose the IQueryable to other layers? Do you make a new DTO set and have the DAL output that instead (seems weird since they would be view-specific)? Am I thinking about this wrong and you have a better system of organization? I'd appreciate a little input into how you made Project().To work for your project while maintaining organization.

Dietrich
Sep 11, 2001

Opulent Ceremony posted:

Architecture question for Automapper people out there. I'm using MVC 3 and EF 5 Code First. I think Automapper's Project().To is pretty cool, but it looks like it needs the IQueryable of the DbContext exposed (which makes sense, since you want it to only select relevant columns before the db is actually queried).

I'm more used to DAL methods taking and returning Domain objects, since we keep them lean and without methods. Over here we've got the Code First models as our Domain objects, so the projection would be into ViewModels, which I don't think I'd want that DAL assembly to be aware of.

A concrete example would be a view-all-people page where the ViewModel would take a set of Persons, but we would only need a fraction of the actual fields from that object (say just the Id and Name) for that view.

How do you set this up in an organized fashion? Do you just give in and expose the IQueryable to other layers? Do you make a new DTO set and have the DAL output that instead (seems weird since they would be view-specific)? Am I thinking about this wrong and you have a better system of organization? I'd appreciate a little input into how you made Project().To work for your project while maintaining organization.

If you've got a control that can use an IQueryable, then I give it an IQueryable. Generally it will handle pagination, data trimming (only getting the columns you want), and whatever else all on its own.

This is pretty much the only time I will ever expose an IQueryable past the data layer.

Mr. Crow
May 22, 2008

Snap City mayor for life
WPF Question.

Is there a technical reason (or otherwise) to not use Application.Current.Windows?

Backstory, we need to check if there are any modal views open before creating and showing another modal view from the 'backend', and then if there is one, don't create/show the 'backend' view.

Doing some research the common suggestion seems to be have some sort of view manager that's responsible for keeping track of open views, and in lieu of that you can use Application.Current.Windows. My concern is everyone seems to hint that using App.Current.Windows is a bad thing, but I'm having trouble finding out why.

We have a couple very specific view managers elsewhere, but don't really have a need for a general all-encompassing one beyond this specific instance and I'm getting pushback for creating an all-encompassing ViewManager at this time.

zokie
Feb 13, 2006

Out of many, Sweden

Mr. Crow posted:

WPF Question.

Is there a technical reason (or otherwise) to not use Application.Current.Windows?

Backstory, we need to check if there are any modal views open before creating and showing another modal view from the 'backend', and then if there is one, don't create/show the 'backend' view.

Doing some research the common suggestion seems to be have some sort of view manager that's responsible for keeping track of open views, and in lieu of that you can use Application.Current.Windows. My concern is everyone seems to hint that using App.Current.Windows is a bad thing, but I'm having trouble finding out why.

We have a couple very specific view managers elsewhere, but don't really have a need for a general all-encompassing one beyond this specific instance and I'm getting pushback for creating an all-encompassing ViewManager at this time.

Can the user still create new dialogs? To handle dialogs in an application where all windows were modal, I've previously used a Stack<Window> in a view manager; we needed to keep each window open until the viewmodel had finished with any remote calls and then close it. So when a new dialog needed to be opened we pushed it onto the stack, and when we were done and needed to close it we popped it off and closed it.

Still it seems to me that any solution for maintaining control-flow with views doing MVVM seems to involve some kind of horror...
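
A lighter-weight alternative to a full ViewManager, sketched with hypothetical names: wrap ShowDialog so the application always knows whether a modal view is open, rather than inferring it from Application.Current.Windows.

C# code:
using System.Windows;

public static class ModalGate
{
    private static int openModals; // assumes all dialogs are shown on the UI thread

    public static bool AnyModalOpen
    {
        get { return openModals > 0; }
    }

    public static bool? ShowModal(Window dialog)
    {
        openModals++;
        try { return dialog.ShowDialog(); }
        finally { openModals--; }
    }
}

// The backend-triggered path then checks the gate first:
// if (!ModalGate.AnyModalOpen) { new BackendView().Show(); }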

Mr. Crow
May 22, 2008

Snap City mayor for life

zokie posted:

Can the user still create new dialogs? To handle dialogs in an application where all windows were modal, I've previously used a Stack<Window> in a view manager; we needed to keep each window open until the viewmodel had finished with any remote calls and then close it. So when a new dialog needed to be opened we pushed it onto the stack, and when we were done and needed to close it we popped it off and closed it.

Still it seems to me that any solution for maintaining control-flow with views doing MVVM seems to involve some kind of horror...

It's not that complicated. The dumbed-down version: the application uses a separate 3rd-party GPS mapping application, and the problem is that we want to prevent views being built/opened when the user clicks on a point in the mapping application while we have a modal dialog open.

That is a clever idea though, not sure when I'd ever use it but if a situation ever comes up I'll try to keep it in mind :)

Polio Vax Scene
Apr 5, 2009



My next project involves integrating with a web service...but uh, all their methods require passing just XML.

Did I screw something up? Or am I going to need to make a stop at the bottle shop over lunch?



I tested some of them, by actually writing some good XML, and they work...but every method there that has just a string parameter requires you to pass a big chunk of XML. There are no classes or types or anything, just methods that take XML.

uXs
May 3, 2005

Mark it zero!
Architecture question:

How do I handle sudden loss of network connectivity? Or database or service timeouts?

I think I should try to catch the exceptions generated by that as low as possible, so I can avoid duplicating the exception-handling code. So, in my database/service client layer instead of the business logic or user interface layer.

But, I still need to warn the user that something is wrong: it would be nice to show some dialog that, for example, tells the user to plug their network cable back in or to enable their wireless thing.

So I was thinking about having an event in my database class that can bubble upwards toward the main form, show the dialog there, and then the answer can travel back down: either try again, or crash.

Good, bad, or horrible idea?

ljw1004
Jan 18, 2005

rum

Manslaughter posted:

My next project involves integrating with a web service...but uh, all their methods require passing just XML.
Did I screw something up? Or am I going to need to make a stop at the bottle shop over lunch?

Not sure what the problem is. Transmitting XML like that is functionally identical to transmitting JSON - you rely just on luck to make sure you've got all the data in the right place. And receiving+parsing XML is functionally identical to receiving+parsing JSON, if you skip the XML validation and merely dig into it through paths. You don't get any more, you don't get any less than JSON.

I actually prefer XML over JSON for .NET apps because you have ready-made syntax for it in VB, e.g.
code:
  ' it's easier to send XML than JSON

  send(<customer id=<%= c.Id %> >
          <name><%= c.Name %></name>
       </customer>)

  send(<robots>
          <% Iterator Function() As XElement
                For x = 0 To 9
                    Yield <robot id=<%= x %> />
                Next
             End Function()
          %>
       </robots>)


   ' it's easier to read XML than JSON

   Dim xml = XElement.Parse(request.Content)
   Dim c As New Customer With {.Id=xml.@id, .Name=xml.<name>.Value}

   Dim xml = XElement.Parse(request.Content)
   Dim robots = From r In xml...<robot> Select New Robot With {.Id = r.@id}
I also quite like receiving XML when I write JavaScript apps, because I can use all the jQuery selector magic to walk through xml. And the XML is right there already parsed in the XmlHttpRequest.responseXML field.



If I want to turn the XML into a strongly-typed class in .NET, that's easy as well. I like using DataContract for that...
code:
    <DataContract(Name:="user", Namespace:="")>
    Public Structure UserResult
        <DataMember(Name:="uuid", EmitDefaultValue:=False)> Public Uuid As String
        <DataMember(Name:="firstName", EmitDefaultValue:=False)> Public FirstName As String
        <DataMember(Name:="lastName", EmitDefaultValue:=False)> Public LastName As String
    End Structure

    Async Function GetUser(http As HttpClient, uri As Uri, cancel As CancellationToken) As Task(Of UserResult)
        Using r = Await http.GetAsync(uri, Net.Http.HttpCompletionOption.ResponseContentRead, cancel)
            Await EnsureSuccessStatusCodeAsync(r)
            Using s = Await r.Content.ReadAsStreamAsync()
                Dim d As New DataContractSerializer(GetType(UserResult))
                Return CType(d.ReadObject(s), UserResult)
            End Using
        End Using
    End Function

Polio Vax Scene
Apr 5, 2009



I was just hoping I wouldn't have to write all my DataContracts myself, as some of these methods require really large amounts of XML to be sent.
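
If the XML is regular enough, one possible shortcut (worth verifying against your real payloads) is letting the SDK's xsd.exe infer a schema from a sample document and then generate the classes from that:

code:
  rem infer sample.xsd from a sample payload, then generate C# classes from it
  xsd.exe sample.xml
  xsd.exe sample.xsd /classes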

ljw1004
Jan 18, 2005

rum

uXs posted:

How do I handle sudden loss of network connectivity? Or database or service timeouts?

I reckon that no one handles connectivity failures well.


You get advice like "just do retries" but that (1) makes it take longer before the user can see that something is wrong, (2) turns network failure from a 0.1% case to a 0.01% case that you still have to deal with properly via some other technique.

Also, imagine if each abstraction layer has its own policy for retries and timeouts. Imagine if the bottom-level component has a timeout of 5 seconds and retries twice. Imagine if one level up your software has a timeout of 2 seconds and retries four times. Those two don't compose well. And have pity on the poor user at the top of this stack.

Also, networks have wildly different failure characteristics. Cellular networks usually have huge latency but very low packet-loss rates. Cable has smaller latency and higher packet-loss rates. All platforms seem to have inexplicable timeouts/hangs when you try to do a network operation but it's still in the process of trying to acquire a network connection. I don't know what the network characteristics are like in a data-center.


So: I'd be inclined NOT to capture exceptions at all, except right up there at the user level. As soon as a failure happens, display it to the user "Network failure: unable to XYZ". Then the user can hit the Refresh button, just like they do in a web-browser, or sign into a network. Users understand that interaction model. It has no magic or mystery in it. If the user hits Refresh three times in a row, in my mobile app, then I offer them the chance to email me a failure report that includes a detailed exception stack trace.

Another idea I've wondered about (but never tried) is that the "timeout pie" gets cooked by the user when he hits a button, and then it gets passed down the stack all the way to the bottom-level components, and each component can eat bits of that pie how they want, but once it's exhausted then they have to abort themselves and throw a timeout exception back to the user.


What about correctness? Here there aren't any good answers. People just tell you "make sure your operation is IDEMPOTENT so you can issue it again." Well, that's necessary because the First Law Of Distributed Systems is that you can never avoid the failure mode where you believe the operation failed but it might actually have succeeded. That's a mathematical law, and retries don't get around it. Do you know how to make your operations idempotent? It's easy for Create/Read/Delete operations, but it's hard for Update operations! I don't think there's good guidance or good standard patterns out there.
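
The "timeout pie" maps fairly naturally onto what .NET 4.5 already ships. A sketch (SyncEverythingAsync and ShowError are hypothetical): the UI cooks one CancellationToken with the whole budget, and every layer below takes the same token instead of inventing its own timeouts and retries.

C# code:
using System;
using System.Threading;
using System.Threading.Tasks;

public async Task OnRefreshClicked()
{
    // One deadline for the entire user-visible operation.
    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
    {
        try
        {
            await SyncEverythingAsync(cts.Token); // passed all the way down the stack
        }
        catch (OperationCanceledException)
        {
            ShowError("Network failure: unable to sync. Hit Refresh to retry.");
        }
    }
}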

glompix
Jan 19, 2004

propane grill-pilled

Opulent Ceremony posted:

How do you set this up in an organized fashion? Do you just give in and expose the IQueryable to other layers? Do you make a new DTO set and have the DAL output that instead (seems weird since they would be view-specific)? Am I thinking about this wrong and you have a better system of organization? I'd appreciate a little input into how you made Project().To work for your project while maintaining organization.

I think it's okay to make "view-specific" types for your DAL to return. Automapper makes it easy to map to those, and it's really not much extra work or code, just a DTO. Many times these kinds of methods end up being reusable: a mobile app and a desktop web site that need special aggregates for typeahead or tooltip info will most likely want the same shape. The code also ends up being easier to understand, since your controllers or whatever are calling specifically-named methods that describe what they actually do, instead of just jamming a LINQ query in there.

Certain concerns I wouldn't sweat too much until a problem is actually apparent. For example, think about an application that lets users select which columns display on a grid. Ideally, the resulting query would only select the visible columns. I think it's okay to just select the entire set of possible columns as long as there aren't more than 20-30 and most of them are useful. You have to really be careful about knowing where the pressure on your app is going to come from, though. If you're serving dozens of requests a second, don't do that. If your application is constantly in flux but is low-scale, it's definitely okay.

glompix fucked around with this message at 19:29 on Oct 11, 2013
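
A sketch of what that looks like end to end (Person, PersonListItem, and MyDbContext are hypothetical, and it assumes Mapper.CreateMap<Person, PersonListItem>() ran at startup): the IQueryable never leaves the DAL, and Project().To does the column trimming inside the query.

C# code:
using System.Collections.Generic;
using System.Linq;
using AutoMapper.QueryableExtensions;

public class PersonListItem   // lean, view-shaped DTO
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class PersonRepository
{
    private readonly MyDbContext db;

    public PersonRepository(MyDbContext db) { this.db = db; }

    // Named for what it does; callers never see the IQueryable.
    public List<PersonListItem> GetPeopleForListing()
    {
        return db.People
                 .OrderBy(p => p.Name)
                 .Project().To<PersonListItem>()  // SELECT only Id, Name
                 .ToList();
    }
}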

wwb
Aug 17, 2004

Manslaughter posted:

My next project involves integrating with a web service...but uh, all their methods require passing just XML.

Did I screw something up? Or am I going to need to make a stop at the bottle shop over lunch?



I tested some of them, by actually writing some good XML, and they work...but every method there that has just a string parameter requires you to pass a big chunk of XML. There are no classes or types or anything, just methods that take XML.

You would see a lot of these in the earlier days of web services -- mainly because most cross-platform toolkits started falling down somewhere north of "send a string parameter".

Xml serialization is your friend here -- just build classes to deal with the XML. Not fun, but it usually doesn't take too long to handle. Well, unless the service is playing fast and loose with the XML it returns and not quite returning the same representations depending on different things.
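
A bare-bones example of that (the customer schema is hypothetical): attribute a class to mirror the XML, and let XmlSerializer handle both directions.

C# code:
using System.IO;
using System.Xml.Serialization;

[XmlRoot("customer")]
public class Customer
{
    [XmlAttribute("id")]
    public int Id { get; set; }

    [XmlElement("name")]
    public string Name { get; set; }
}

public static class CustomerXml
{
    private static readonly XmlSerializer Serializer = new XmlSerializer(typeof(Customer));

    public static string ToXml(Customer c)
    {
        using (var writer = new StringWriter())
        {
            Serializer.Serialize(writer, c);
            return writer.ToString();
        }
    }

    public static Customer FromXml(string xml)
    {
        using (var reader = new StringReader(xml))
            return (Customer)Serializer.Deserialize(reader);
    }
}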

uXs
May 3, 2005

Mark it zero!

ljw1004 posted:

So: I'd be inclined NOT to capture exceptions at all, except right up there at the user level. As soon as a failure happens, display it to the user "Network failure: unable to XYZ". Then the user can hit the Refresh button, just like they do in a web-browser, or sign into a network. Users understand that interaction model. It has no magic or mystery in it. If the user hits Refresh three times in a row, in my mobile app, then I offer them the chance to email me a failure report that includes a detailed exception stack trace.

Problem here is that the user level would then have to know what a network failure is, or at least the different types of exceptions that indicate a network failure. I could maybe be persuaded to let the user interface receive all the failures (like instead of 'void Save()' I'd have 'SaveResult Save()'), but there's no way in hell that I'm putting try {} catch(SqlException) {} catch (IOException) {} catch (NetworkException) {} ... or whatever multiple times in every single goddamn form.

Anyway, current ideas are:

a)
-database or service client or whatever classes all have a 'OperationFailed' event
-said event is subscribed to by whatever class creates (or rather, is injected) the above classes
-... and so all the way to the top, to the main form
-main form receives the events (so the individual sub-forms don't have to know anything yet), and displays a 'retry/crash?' dialog
-user answer goes back down, and the operation is retried or failed
-if it fails, exception is thrown and program crashes

This could work, but it means that some really, really, low-level code is asking for user interaction, which seems just wrong.

b)
-instead of void Save() and void Update(), have SaveResult Save() methods, and let the interface code handle it however the gently caress they want

This would mean that there's a lot more code in the interface code to handle these errors, which is annoying.
There's also the problem that any method that actually returns results would have to wrap that result in something else that has the error messages. That's pretty ugly.

or maybe:
c)
-catch all network and IO errors in low-level code, and wrap them in my own more general 'network timeout/failure' Exception. And then the interface code that calls the save methods can just catch that single Exception and handle it.
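
A sketch of option (c) with hypothetical names: the data layer funnels the zoo of transport exceptions into one domain exception, so every form only has to catch that.

C# code:
using System;
using System.Data.SqlClient;
using System.IO;

public class ConnectivityException : Exception
{
    public ConnectivityException(string message, Exception inner)
        : base(message, inner) { }
}

public class OrderRepository
{
    public void Save(Order order)
    {
        try
        {
            ExecuteSave(order); // the existing low-level DB/service call (not shown)
        }
        catch (SqlException ex)     { throw new ConnectivityException("Database unreachable.", ex); }
        catch (IOException ex)      { throw new ConnectivityException("Network I/O failed.", ex); }
        catch (TimeoutException ex) { throw new ConnectivityException("The operation timed out.", ex); }
    }
}

// UI code then needs exactly one catch:
// try { repository.Save(order); }
// catch (ConnectivityException ex) { ShowRetryDialog(ex.Message); }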

epswing
Nov 4, 2003

Soiled Meat
I've seen this pattern several times in code examples. What's going on here, and why?

C# code:
public class MyClass : IDisposable
{
    private readonly SomeResource resource;
    private bool disposed;
    
    public MyClass(SomeResource resource)
    {
        this.resource = resource;
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    public virtual void Dispose(bool disposing)
    {
        if (!this.disposed)
            if (disposing)
                resource.Dispose();

        this.disposed = true;
    }
}

Orzo
Sep 3, 2004

IT! IT is confusing! Say your goddamn pronouns!

epalm posted:

I've seen this pattern several times in code examples. What's going on here, and why?

http://stackoverflow.com/questions/151051/when-should-i-use-gc-suppressfinalize

Sedro
Dec 31, 2008

epalm posted:

I've seen this pattern several times in code examples. What's going on here, and why?
Read up on the dispose pattern.

The goal is to make sure unmanaged resources are disposed exactly once.

Things get complicated when you have subclassing in play and that's what the pattern addresses.

Your example doesn't have any unmanaged resources so you can just do this:
C# code:
public class MyClass : IDisposable
{
    private readonly SomeResource resource;
    
    public MyClass(SomeResource resource)
    {
        this.resource = resource;
    }

    public void Dispose()
    {
        resource.Dispose();
    }
}

brosmike
Jun 26, 2009
Sedro's example is right on the money for the simple case of "I have a standalone class that manages other Disposable classes." This should cover most Disposable objects. There are two common reasons you might find yourself wanting to implement complicated disposal logic like the pattern you posted:

  1. You're part of a disposable class hierarchy and need to make sure your child/parent dispose methods can be called appropriately.
  2. You're wrapping an unmanaged resource that you really want to make sure gets cleaned up, so you also want to implement a finalizer.

The sample you posted looks like an attempt to cover case 1, but doesn't quite do so in the recommended fashion; it's close, assuming MyClass is intended to be the root class of the hierarchy, except that the normal pattern recommends that the virtual version of Dispose() be protected, not public. The intention is to have callers of members of the hierarchy use the non-virtual Dispose() provided by the base class while subclasses only implement the protected virtual version to perform the actual cleanup.

The StackOverflow post Orzo mentioned is more focused around case 2. Note however that if you actually find yourself in case 2 (wanting to write a finalizer), it's usually more appropriate to implement a SafeHandle instead to wrap whatever resource you're hoping to guarantee cleanup of. "Guaranteeing" cleanup is a much harder problem than it looks like at first glance, and there are lots and lots of little gotchas involved in making a robust finalizer. Consider, for example, how you might protect against the JIT compiler hitting an out of memory condition when it attempts to run your cleanup code. Or don't bother considering it, because SafeHandle already did it for you.

It's very, very rarely appropriate for the same class to fall under cases 1 and 2 at the same time - usually it would be better to separate the concerns so the finalizable logic is a SafeHandle which the original class contains an instance of, and have the original class call the SafeHandle's Dispose() in its Dispose(bool).
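
For comparison, a sketch of case 1 done the recommended way (SomeResource is the managed resource from the sample above): the virtual overload is protected, subclasses override it instead of redefining Dispose(), and each level calls up the chain.

C# code:
using System;

public class BaseHolder : IDisposable
{
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources owned by this level here
        }
        disposed = true;
    }
}

public class DerivedHolder : BaseHolder
{
    private readonly SomeResource resource; // a managed disposable

    public DerivedHolder(SomeResource resource)
    {
        this.resource = resource;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
            resource.Dispose();
        base.Dispose(disposing); // always let the base clean up too
    }
}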

Dr Monkeysee
Oct 11, 2002

just a fox like a hundred thousand others
Nap Ghost

Sedro posted:

Your example doesn't have any unmanaged resources so you can just do this:
C# code:
public class MyClass : IDisposable
{
    private readonly SomeResource resource;
    
    public MyClass(SomeResource resource)
    {
        this.resource = resource;
    }

    public void Dispose()
    {
        resource.Dispose();
    }
}

I really really wish the MSDN documentation would mention this second case because 90% of the time if you're implementing IDisposable it's to wrap another IDisposable. The Finalize pattern is completely superfluous boilerplate unless you're working with an unmanaged resource directly but the MSDN guidance makes it sound like that is the *only* acceptable way to implement IDisposable.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

Monkeyseesaw posted:

I really really wish the MSDN documentation would mention this second case because 90% of the time if you're implementing IDisposable it's to wrap another IDisposable. The Finalize pattern is completely superfluous boilerplate unless you're working with an unmanaged resource directly but the MSDN guidance makes it sound like that is the *only* acceptable way to implement IDisposable.

By default, FxCop also insists on this pattern for any IDisposable implementation. So annoying.

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION
I'm currently learning .NET and struggling a little with EF Code First. I have a structure that should look a little like this:
code:
COMPETITION
----
Id
Name

COMP_USER
----
CompId
UserId

USER
----
Id
Name
I have a similar relationship between users and roles:
code:
USER
----
Id
Name

USER_ROLE
----
UserId
RoleId

ROLE
----
Id
Name
My models look roughly like this at the moment:
code:
public class User
{
    public User()
    {
        Competitions = new HashSet<Competition>();
        Roles = new HashSet<Role>();
    }

    // Primitive properties
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
 
    // Navigation properties
    public ICollection<Competition> Competitions { get; set; }
    public ICollection<Role> Roles { get; set; }
}

public class Competition
{
    public Competition()
    {
        Users = new HashSet<User>();
    }

    // Primitive properties
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }

    // Navigation properties
    [InverseProperty("Competitions")]
    public ICollection<User> Users { get; set; }
}

public class Role
{
    public Role()
    {
        Users = new HashSet<User>();
    }

    // Primitive properties
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }

    // Navigation properties
    [InverseProperty("Roles")]
    public ICollection<User> Users { get; set; }
}
For some reason, I get a joining table for the users and the roles, but for the competitions and the users EF generates a foreign key "UserId" in the Competition table, instead of creating a CompetitionsUsers table. Can anyone tell me what embarrassing blunder I've made here? The model setup for each looks identical to my untrained eye.


Edit: I'll leave this here for posterity, but I was right: it was an embarrassing blunder. I had tweaked the code and forgot to add the new migration before running the update-database command...

putin is a cunt fucked around with this message at 00:07 on Oct 15, 2013

Boz0r
Sep 7, 2006
The Rocketship in action.
I'm trying to write a program that makes a Netduino communicate with Azure, but I've run into a snag. All the other projects I've seen use HttpWebRequest, but when I use it, it tells me it cannot resolve the symbol. What gives?

I import all the same namespaces, so I assume it's something to do with Visual Studio. Any ideas?


Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Gnack posted:

code:
public class Competition
{
    ...

    // Navigation properties
    [InverseProperty("Competitions")]
    public ICollection<User> Users { get; set; }
}

public class Role
{
    ...

    // Navigation properties
    [InverseProperty("Roles")]
    public ICollection<User> Users { get; set; }
}

In most cases you shouldn't need the InverseProperty attribute. Entity Framework is smart enough to figure out that you want a many-to-many relationship there. You should only have to use InverseProperty if for some reason the convention isn't picking up your relationship, you want a self-referencing entity, or you want multiple relationships between the same two entities.
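
And if the convention ever does need a nudge, the fluent API states the relationship explicitly (a sketch against the models above, with a hypothetical join-table name):

C# code:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Inside the DbContext: spell out the many-to-many and its join table.
    modelBuilder.Entity<User>()
        .HasMany(u => u.Competitions)
        .WithMany(c => c.Users)
        .Map(m => m.ToTable("CompetitionsUsers"));
}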
