Factor Mystic
Mar 20, 2006

Baby's First Post-Apocalyptic Fiction

gently caress them posted:

Is there anything like 'source control' for stored procedures, besides "just use git and welp there you go?"

Also, I've found myself doing some DB-janitoring lately. I'm NOT the DBA - we actually have one, thank god - but I still find myself wanting to think of what the best way to back up tables is. Just dump to an excel?

I need to start scripting some of this poo poo don't I.

Backing up tables is a separate problem from tracking stored procedures in source control, which yes, you absolutely need to be doing.

ManoliIsFat
Oct 4, 2002

Ithaqua posted:

Or he could use an appropriate data storage mechanism instead of messing around with flat files.
I don't know, a simple "lockfile" existing on a share has saved me a lot of dumb coordination. On program startup, see if the file exists. If so (and it's less than 12 hours old or some timeout), exit out of the program. Delete the file on exit. I suppose the right way to do it would be a DB entry.
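A minimal sketch of that dance, assuming a hypothetical share path and the 12-hour timeout. (Note the check-then-create isn't atomic, so two instances racing can both get through - a DB row with a unique key wouldn't have that hole.)

C# code:
// Sketch of the lockfile coordination described above.
// The share path and 12-hour timeout are illustrative.
using System;
using System.IO;

public static class AppLock
{
    const string LockPath = @"\\share\app\app.lock"; // hypothetical
    static readonly TimeSpan Timeout = TimeSpan.FromHours(12);

    // Call on startup; exit the program if this returns false.
    public static bool TryAcquire()
    {
        if (File.Exists(LockPath) &&
            DateTime.UtcNow - File.GetCreationTimeUtc(LockPath) < Timeout)
            return false; // another instance is (probably) running

        File.WriteAllText(LockPath, Environment.MachineName);
        return true;
    }

    // Call on exit. File.Delete is a no-op if the file is already gone.
    public static void Release()
    {
        File.Delete(LockPath);
    }
}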

gently caress them posted:

Is there anything like 'source control' for stored procedures, besides "just use git and welp there you go?"

Also, I've found myself doing some DB-janitoring lately. I'm NOT the DBA - we actually have one, thank god - but I still find myself wanting to think of what the best way to back up tables is. Just dump to an excel?

I need to start scripting some of this poo poo don't I.
I've done it the manual way where you set up a repo for db changes and it's just a bunch of ALTER scripts with a version number in them (a sketch of the runner for that is below). This RedGate program for MSSQL is cool in theory: http://www.red-gate.com/products/sql-development/sql-source-control/ . I used their SQL Compare tool in the past.
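The runner for that manual approach can be tiny. A sketch, assuming a one-row SchemaVersion table, file names like 0001_add_column.sql, and scripts without GO batch separators:

C# code:
// Sketch of a minimal migration runner for the "numbered ALTER scripts" repo.
// Assumes a one-row SchemaVersion table and names like 0001_add_column.sql;
// scripts must not contain GO batch separators.
using System.Data.SqlClient;
using System.IO;
using System.Linq;

public static class MigrationRunner
{
    public static void Run(string connectionString, string scriptsDir)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            int current;
            using (var cmd = new SqlCommand("SELECT Version FROM SchemaVersion", conn))
                current = (int)cmd.ExecuteScalar();

            // apply everything newer than the recorded version, in order
            var pending = Directory.GetFiles(scriptsDir, "*.sql")
                .Select(f => new { File = f, Version = int.Parse(Path.GetFileName(f).Split('_')[0]) })
                .Where(s => s.Version > current)
                .OrderBy(s => s.Version);

            foreach (var script in pending)
            {
                using (var cmd = new SqlCommand(File.ReadAllText(script.File), conn))
                    cmd.ExecuteNonQuery();
                using (var cmd = new SqlCommand("UPDATE SchemaVersion SET Version = @v", conn))
                {
                    cmd.Parameters.AddWithValue("@v", script.Version);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}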

But backing up...I'm not quite sure what you mean. You should have DB native backups. Writing out to a CSV is fine if you want an "agnostic" copy or are trying to ferry the data around, but that's not a real backup strategy.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

gently caress them posted:

Is there anything like 'source control' for stored procedures, besides "just use git and welp there you go?"

Also, I've found myself doing some DB-janitoring lately. I'm NOT the DBA - we actually have one, thank god - but I still find myself wanting to think of what the best way to back up tables is. Just dump to an excel?

I need to start scripting some of this poo poo don't I.

Use SQL Server Data Tools. Import your database into an SSDT project, and then you have a source-controlled, canonical version of your database. Change things in your SSDT project, then publish it when you need to do a release. There you go, source controlled database objects.

For data, make database backups. Don't dump poo poo to Excel, that's dumb.

Fuck them
Jan 21, 2011

and their bullshit
:yotj:
What happened was basically a miscommunication. The actual DB is regularly backed up by the DBA. Some stuff I did to update a lookup and then change some rows pointing to it was wiped during an update because the DBA wasn't told what I'd done - I'm thinking if I make any small changes in the future, I should save the scripts for those changes in particular.

I think I just need to sit down with the DBA and work something out. Who says you can't network around managerial issues?

We DO have source control, I've just never done sprocs in it yet. It's also more of a side-thing when someone responsible for that walks over to ask me for a favor if Mr. DBA is busy doing DBA things. In this case, it was basically "hey so we're re-doing our docket codes and categories because the chief judge said so."

I'm kind of a bungee-dev right now anyway since the various layers of hierarchy and bureaucracy can't make up their drat minds (state vs my county vs a few counties all together that depend on my county all fussing about bullshit; Rick Scott's continued reign as Governor probably doesn't help either) and as such I'm leaving a ton of poo poo half finished. Which is why I need to be much more particular about tracking my own work.

:yotj: still beats my old job by a light-year though.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

gently caress them posted:

We DO have source control, I've just never done sprocs in it yet.

Seriously just use SSDT, this is a solved problem.

Fuck them
Jan 21, 2011

and their bullshit
:yotj:
It's still installing!

Knyteguy
Jul 6, 2005

YES to love
NO to shirts


Toilet Rascal

Ithaqua posted:

Use SQL Server Data Tools. Import your database into an SSDT project, and then you have a source-controlled, canonical version of your database. Change things in your SSDT project, then publish it when you need to do a release. There you go, source controlled database objects.

For data, make database backups. Don't dump poo poo to Excel, that's dumb.

Dangit, Ithaqua! Every time I want to answer one you beat me to the punch! (just kidding man we're all lucky to get your help).

Just wanted to chime in: this is what we use at work and it works perfectly.

ljw1004
Jan 18, 2005

rum
The VB/C# team is looking at improving Edit & Continue. We did a survey to see what features people want to see in it...

[survey results chart]

One of the common requests (omitted from the chart) was "please add EnC support for x64".

That was a surprise to us, since EnC support for x64 already shipped in VS2013! I guess people were burned by its absence in the past and didn't bother to try again in VS2013, or they did try again but it failed for one of the other reasons.

Anyway, I just want to make sure that everyone knows: VS2013 has EnC support for x64.

ManoliIsFat
Oct 4, 2002

Modifying LINQ queries would be HUGE.

I never understood: why can't lambdas work in quickwatch?

ljw1004
Jan 18, 2005

rum

ManoliIsFat posted:

Modifying LINQ queries would be HUGE.
I never understood: why can't lambdas work in quickwatch?

The way lambdas work is, under the hood, the compiler generates a class for them, with members & code.

The way quickwatch works is, under the hood, the IDE evaluates expressions. We never got around to making it so that quickwatch can also generate classes. (Which assemblies would these classes get generated into? Where would they go? How would we get rid of classes that were no longer needed once the watch was deleted? The CLR has no good way to get rid of code, short of unloading an entire assembly.)
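To make that concrete, here's roughly the transformation - a sketch, with a friendly name standing in for the compiler's mangled one:

C# code:
// What you write:
//     int threshold = 5;
//     Func<int, bool> isBig = x => x > threshold;
//
// Roughly what the compiler emits (a sketch; the real class gets a
// mangled name like <>c__DisplayClass0):
class DisplayClass
{
    public int threshold;  // the captured local becomes a field

    public bool IsBig(int x)
    {
        return x > this.threshold;
    }
}
// ...and 'isBig' becomes: new DisplayClass { threshold = 5 }.IsBig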

Funking Giblet
Jun 28, 2004

Jiglightful!
Surely allowing it in debug mode could work by adding some hooks for "ghost" assemblies at build time?

wwb
Aug 17, 2004

gently caress them posted:

What happened was basically a miscommunication. The actual DB is regularly backed up by the DBA. Some stuff I did to update a lookup and then change some rows pointing to it was wiped during an update because DBA wasn't told I did what I did - I'm thinking if I make any small changes in the future, I should save those small changes in particular.

I think I just need to sit down with the DBA and work something out. Who says you can't network around managerial issues?

We DO have source control, I just have never done sprocs in them yet. It's also more of a side-thing when someone responsible for that walks over to ask me for a favor if Mr. DBA is busy doing DBA things. In this case, it was basically "hey so we're re-doing our docket codes and categories because the chief judge said so."

I'm kind of a bungee-dev right now anyway since the various layers of hierarchy and bureaucracy can't make up their drat minds (state vs my county vs a few counties all together that depend on my county all fussing about bullshit; Rick Scott's continued reign as Governor probably doesn't help either) and as such I'm leaving a ton of poo poo half finished. Which is why I need to be much more particular about tracking my own work.

:yotj: still beats my old job by a light-year though.

In case the snarky old DBA guy doesn't go for SSDT and such, you could also use a migration framework. Roundhouse is a good option with snarky old DBAs, as it just runs scripts they can understand pretty well.

LOOK I AM A TURTLE
May 22, 2003

"I'm actually a tortoise."
Grimey Drawer

ljw1004 posted:

The way lambdas work is, under the hood, the compiler generates a class for them, with members & code.

The way quickwatch works is, under the hood, the IDE evaluates expressions. We never got around to making it so that quickwatch can also generate classes. (Which assemblies would these classes get generated into? Where would they go? How would we get rid of classes that were no longer needed once the watch was deleted? The CLR has no good way to get rid of code, short of unloading an entire assembly.)

Out of curiosity, since the blog post says you're working on it, roughly how are you planning to solve it? I'm assuming you can't change the way lambdas work or make changes to the CLR?

RICHUNCLEPENNYBAGS
Dec 21, 2010

Ithaqua posted:

C# in Depth is a good survey of the deeper features of C#. It's a good starting point.

I haven't read the Seemann book, but based on the title it's about ways to handle one method of achieving loose coupling, which is important for unit testing. Osherove's book is an awesome intro to unit testing. So:

1/2: C# in Depth / TAOUT
3: DI

Seemann's book is good, but often tedious. However, you can skim it in a day or two and get all the important bits.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

RICHUNCLEPENNYBAGS posted:

Seemann's book is good, but often tedious. However, you can skim it in a day or two and get all the important bits.

Also you only really need to look over the first half. The second half is all about different IoC containers and how to set up various .NET frameworks to use IoC. It's useful as a reference, but not really necessary to read front to back.

epswing
Nov 4, 2003

Soiled Meat
Say I want to fire and forget an action, but if the action fails (throws an exception), retry it after an appropriate delay, up to some number of times before giving up altogether.

Would something like this be adequate?

C# code:
public class RetryAction
{
    private int retryCount;
    private TimeSpan retryDelay;
    private Action action;

    public RetryAction(int retryCount, TimeSpan retryDelay, Action action)
    {
        this.retryCount = retryCount;
        this.retryDelay = retryDelay;
        this.action = action;
    }

    public void Start()
    {
        Debug.Assert(retryCount > 0);

        Task.Factory.StartNew(() =>
        {
            while (true)
            {
                try
                {
                    action();
                    break;
                }
                catch
                {
                    retryCount--;

                    if (retryCount > 0)
                        Thread.Sleep(retryDelay);
                    else
                        break;
                }
            }
        });
    }
}
The following code example would try up to 3 times, waiting 10 seconds between each attempt.

C# code:
new RetryAction(3, TimeSpan.FromSeconds(10), () =>
{
    using (var client = new WebClient())
    {
        client.Headers.Add("Content-Type", "text/xml");

        try
        {
            client.UploadString(new Uri(SEND_URL), "POST", data);
        }
        catch (Exception e)
        {
            log.Error("error sending xml to {0}: {1}", SEND_URL, e.Message);
            throw;
        }
    }
}).Start();
Anyone see anything hugely wrong with this?

1. I'm using try/catch as part of my workflow, maybe this is bad.

2. The while(true) smells a bit, but the alternative is testing retryCount > 0 in two places to avoid an extra and unnecessary call to Thread.Sleep.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
This is a perfect case for using async/await... there's no reason to be doing the action on another thread.

Also, consider implementing cancellation.

Also also, consider using a Timer.

Also also also, consider using something that's already solved the problem (note: no idea of the quality of this code): https://github.com/pbolduc/Retry

New Yorp New Yorp fucked around with this message at 19:17 on Jun 3, 2014

Sedro
Dec 31, 2008
I tried converting it to async
C# code:
public class RetryAction<T>
{
    readonly int retryCount;
    readonly TimeSpan retryDelay;
    readonly Func<Task<T>> action;

    public RetryAction(int retryCount, TimeSpan retryDelay, Func<Task<T>> action)
    {
        this.retryCount = retryCount;
        this.retryDelay = retryDelay;
        this.action = action;
    }

    public async Task<T> Start() // could take a CancellationToken
    {
        var remaining = retryCount;

        while (true)
        {
            try
            {
                return await action();
            }
            catch
            {
                if (remaining == 0) throw;
            }
            --remaining;
            await Task.Delay(retryDelay);
        }
    }
}

// usage is similar

new RetryAction<string>(3, TimeSpan.FromSeconds(10), async () =>
{
    using (var client = new WebClient())
    {
        client.Headers.Add("Content-Type", "text/xml");
        return await client.UploadStringTaskAsync(new Uri(SEND_URL), "POST", data);
    }
}).Start();

slovach
Oct 6, 2005
Lennie Fuckin' Briscoe
What am I doing that's so slow? I just want to draw some simple stuff for fun on my form background.

I create the Graphics context and bitmap on form load, and just tick the drawing in a timer. With a 20ms interval, it's already getting noticeably chuggy and heavy. It's like a 320x240 window.

code:
Rectangle rect = new Rectangle(0, 0, this.Width, this.Height);
formBitmapData = formBitmap.LockBits(rect, System.Drawing.Imaging.ImageLockMode.WriteOnly,
                                     System.Drawing.Imaging.PixelFormat.Format32bppArgb);
unsafe
{
    for (int y = 0; y < formBitmap.Height; y++)
    {
        for (int x = 0; x < formBitmap.Width; x++)
        {
            int index = ((y * this.Width) + x);

            if (Convert.ToBoolean(x & y))
            {
                uint* data  = (uint*)formBitmapData.Scan0;
                data[index] = 0xFFFF00FF;
            }
        }
    }
}

formBitmap.UnlockBits(formBitmapData);
gfxContext.DrawImage(formBitmap, rect);

IcedPee
Jan 11, 2008

Yarrrr! I be here to plunder the fun outta me workplace! Avast!

FREE DECAHEDRON!
I hope this is the right place to ask this since it's kind of an IIS question, but it's in relation to a WCF service I'm making, so whatever.


I have a service running in a console window. There was a contract
code:
[OperationContract]
void RunFileEngineWithDictionary(Dictionary<string, List<int>> FileDictionary);
This worked fine. Unfortunately, I needed to add a field to the datacontract.

code:
    [DataContract]
    public class MediaVolumeData
    {
        [DataMember]
        public string Robot { get; set; }
        [DataMember]
        public int RunID { get; set; }        
    }
So now my operation became

code:
[OperationContract]
void RunFileEngineWithDictionary(Dictionary<string, List<MediaVolumeData>> FileDictionary);
Now this dictionary only has about three entries at most, but each value in the dictionary could have a few hundred items in the list (I say that loosely because on the consuming side, the parameter must be MediaVolume[] and not List<MediaVolume> because that's how WCF wants it).

Well, this breaks the 48k size default. I'd very much like to change this default, but no matter what the hell I do in my configuration, my service ignores it. I tried adding all this to my web.config based on what I'd read on various forums like stackoverflow

code:
    <bindings>
      <basicHttpsBinding>
        <binding maxReceivedMessageSize="10485760" maxBufferPoolSize="10485760" maxBufferSize="10485760">
          <readerQuotas maxArrayLength="10485760" maxBytesPerRead="10485760" maxDepth="10485760"
                        maxStringContentLength="10485760" maxNameTableCharCount="10485760" />
        </binding>
      </basicHttpsBinding>
      <basicHttpBinding>
        <binding maxReceivedMessageSize="10485760" maxBufferPoolSize="10485760" maxBufferSize="10485760">
          <readerQuotas maxArrayLength="10485760" maxBytesPerRead="10485760" maxDepth="10485760"
                        maxStringContentLength="10485760" maxNameTableCharCount="10485760" />
        </binding>
      </basicHttpBinding>
      <wsHttpBinding>
        <binding maxReceivedMessageSize="10485760" maxBufferPoolSize="10485760">
          <readerQuotas maxArrayLength="10485760" maxBytesPerRead="10485760" maxDepth="10485760"
                        maxStringContentLength="10485760" maxNameTableCharCount="10485760" />
        </binding>
      </wsHttpBinding>
      <webHttpBinding>
        <binding maxReceivedMessageSize="10485760" maxBufferPoolSize="10485760" maxBufferSize="10485760" />
      </webHttpBinding>
      <customBinding>
        <binding>
          <binaryMessageEncoding maxReadPoolSize="10485760" maxSessionSize="10485760" maxWritePoolSize="10485760" />
          <httpsTransport maxBufferPoolSize="10485760" maxBufferSize="10485760" maxReceivedMessageSize="10485760" />
        </binding>
      </customBinding>
    </bindings>
under <system.serviceModel> and even adding this

code:
<serverRuntime uploadReadAheadSize="10485760"/>
under <system.webServer>

No luck. The last thing I tried was configuring IIS with appcmd

appcmd.exe set config http://localhost/DataServices -section:system.webServer/serverRuntime /uploadReadAheadSize:10485760 /commit:apphost

The output says it has applied the configuration changes, but the 48k limit is still there. I get a 413 error saying it's too big every single time unless I send a dictionary smaller than 48k (which is absurd).

Any ideas as to why my configurations are being ignored? I've completely run out of ways to try to increase the 48k limit.

ljw1004
Jan 18, 2005

rum

epalm posted:

Say I want to fire and forget an action, but if the action fails (throws an exception), retry it after an appropriate delay, up to some number of times before giving up altogether.

"Automatic retry" is a bad design smell. Here are some slides from a talk I've been giving internally at Microsoft...






Poor user experience
Let's say you retry after an appropriate delay. Well, some failures will be intermittent, and others will be permanent. What the user will experience is that without this code they'd observe failure 20% of the time in 2 seconds, but with your code they'll observe failure 15% of the time in 10 seconds. That's an eternity, and a worse overall user experience.

There is a tried-and-true best practice for handling failures. That is: give the user an error message as promptly as possible, and let them take action as they see fit (normally by hitting the Refresh button). This leads to happier users.


Incorrect engineering assumptions
More fundamentally, what is an "appropriate delay"? If someone codes a retry, they are making a statistical assertion that the likelihood of failure now is uncorrelated with the likelihood of failure after the delay. (If that weren't true, then there'd be no point delaying!)

This statistical assertion is not backed up by evidence. There are no generally accepted statistical rules of thumb here. Anything you write here is "coding blind" -- at best it's needless code that creates a worse user experience, and at worst it introduces bugs in subtle and rarely-tested codepaths.

If you're writing a mobile app, then failures are most commonly associated with poor connectivity - e.g. walking into a closed building, or wandering out of tower range, or wifi configuration issues. Nothing in your code will ever do the right action here. The right action is to give the user full information properly, and let them take remedial action.

If you're writing a backend batch-processing server, then maybe the right action upon failure is to push the item to the back of the queue so it runs later this night or the next night. That way, things like "404 not found" errors will likely be fixed up by an engineer because his pager rings and tells him to get his drat service back up and running within a couple of hours. And "temporary timeout" is just as likely caused by a DDOS attack or domino datacenter crash that will also take a couple of hours to fix.

For communication within a datacenter -- in 6 months of heavy duty web traffic within AWS, my brother (PhD in network theory, now working for a datamining startup) said he never once observed failure between the machines.

Bugs due to race conditions
The basic law of distributed systems is that there are three ways a network message can play out:

(1) It might succeed and you know it (200 OK)
(2) It might fail and you know it (500 Failure, ...)
(3) It might either succeed or fail but you don't know which (TimeoutException, or ConnectionClosed)

Any library which fails to expose these three possibilities is flawed, in the sense that apps can't use it to write reliable code.

I guess it's okay because your API is specifically designed solely for unimportant messages to the server, i.e. ones where it's entirely fine for the correct running of your app even if the POST never succeeded. (what are you using it for? just opportunistic telemetry? there are very few cases where fire-and-forget is ever acceptable...)

Let's spell it out. Imagine the first POST attempt succeeds in creating/updating data on the webservice, but nevertheless ends with a TimeoutException. Then you'll try again -- even if some other client has seen the data in the meantime and acted upon it or changed it! And even if there is no other client, well, will your webservice reliably handle two POSTs to the same URL?

Generally, the basic tools for distributed code are idempotency and at-least-once guarantees. "Idempotency" is when you make sure that, even if your operation is performed more than once, it still does the right thing. GET operations are always idempotent. As for PUT and POST, well, that depends on the exact semantics. For some updates like "add $1 into my bank account" you need to invent your own ways to ensure idempotency. Typically you do this using http "etags", which provide the distributed equivalent of Interlocked.CompareExchange.
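A sketch of that etag dance with HttpClient (the URL and payload are hypothetical):

C# code:
// Optimistic concurrency over HTTP: the If-Match etag plays the role of
// the comparand in Interlocked.CompareExchange. URL/payload are hypothetical.
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class EtagUpdate
{
    public static async Task AddDollarAsync(HttpClient client)
    {
        // GET the current state along with its version tag
        var response = await client.GetAsync("http://example.com/account/42");
        var etag = response.Headers.ETag;

        // PUT succeeds only if nobody has changed the resource since our GET
        var put = new HttpRequestMessage(HttpMethod.Put, "http://example.com/account/42")
        {
            Content = new StringContent("{ \"balance\": 101 }")
        };
        put.Headers.IfMatch.Add(etag);

        var result = await client.SendAsync(put);
        if (result.StatusCode == HttpStatusCode.PreconditionFailed)
        {
            // someone else won the race: re-GET, recompute, retry
        }
    }
}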

"At-least-once guarantees" are because, if you don't know if the operation succeeded or failed, then you'll likely run it again. In a mobile app, if it failed, you show the error to the user and let them hit the Retry button, so the user provides the guarantee. In a datacenter batch processor, you'd likely stick the item at the back of the queue so it can be retried in a couple of hours. You'd also increment a "poison pill count" so that, if there's something structurally wrong, it doesn't keep retrying from now to eternity but instead emails an operator to resolve the problem manually.

ljw1004 fucked around with this message at 15:05 on Jun 4, 2014

epswing
Nov 4, 2003

Soiled Meat
In this specific case, the action is not applied by a user. When an internal thing happens, I need to send some details to an external service (over which I have no control) whose web service is "sometimes" unavailable for a few seconds at a time.

Retrying the connection a few times, with an appropriate delay, still sounds to me like the right thing to do. It's actually not totally critical that the communication succeeds (hence fire-and-forget), but it would be nice to move from ~90% success to ~99% success by just retrying a few times. How else would I improve my situation, considering I don't have full control over all systems in the equation? The external service is known to be idempotent, by the way.

If the action was applied by a user, then I understand and agree with you on all points.

epswing fucked around with this message at 15:49 on Jun 4, 2014

IcedPee
Jan 11, 2008

Yarrrr! I be here to plunder the fun outta me workplace! Avast!

FREE DECAHEDRON!

IcedPee posted:

:words:

Never mind, I figured it out. Since I was just using this link as a basis for hosting my service in a console, I neglected to check how the binding was being created - since the host creates a new binding in code, it has no reason to consult the configuration files. Adding the desired properties to the new binding fixed the problem.
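In other words, something along these lines (the service/contract names are hypothetical and the sizes illustrative):

C# code:
// Console-hosted WCF: the binding built in code is what counts, not web.config.
// IDataService/DataService are hypothetical; sizes are illustrative.
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 10485760,
            MaxBufferSize = 10485760
        };
        binding.ReaderQuotas.MaxArrayLength = 10485760;

        var host = new ServiceHost(typeof(DataService),
                                   new Uri("http://localhost/DataServices"));
        host.AddServiceEndpoint(typeof(IDataService), binding, "");
        host.Open();

        Console.ReadLine(); // keep the console host alive
        host.Close();
    }
}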

Careful Drums
Oct 30, 2007

by FactsAreUseless
So here's my problem:

Our team has a handful of big ASP.NET/MVC web applications, and the static content (images, css, some js) is scattered about in either the projects or a dedicated static content project. Currently the static content project just contains images, and we want to move all of the static content into that project so we can manage it separately from our MVC apps. Moving files and changing the URL in header links/scripts is easy enough, but how can I handle bundling?

So far, I understand that System.Web.Optimization exposes a new token every time a bundle changes, so the app can really only access bundles that it bundled itself. Maybe I would need a way to expose that bundle token via an API, but that seems ridiculous. Can this be accomplished, or is there a better way to manage bundled static resources across projects?

Polio Vax Scene
Apr 5, 2009



In an MVC controller can I send a response through the context, close it out, then continue on with processing? I tried doing HttpContext.Response.End() and such right at the start but my request didn't get a response until the server's entire routine was done.

Careful Drums
Oct 30, 2007

by FactsAreUseless
I think to do that, you can return your ActionResult, then do your post-processing from the OnResultExecuted event.
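A sketch of that shape as an action filter (the attribute name is made up):

C# code:
// Sketch: run follow-up work after the ActionResult has executed,
// via an action filter hook. The attribute name is made up.
using System.Web.Mvc;

public class PostProcessAttribute : ActionFilterAttribute
{
    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        base.OnResultExecuted(filterContext);
        // the result has been executed by this point; kick off the
        // post-processing here
    }
}

// usage: decorate the controller action with [PostProcess]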

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy
You could spawn a thread from your controller action and continue work there. Obviously, you'll want to handle all exceptions, since an unhandled exception on that thread could take down your whole site.

Although, to me this is a code smell. What are you doing that takes a long time that the user doesn't need to know about the result of? If it's a significant amount of processing, you shouldn't be doing that on your webserver. Queue up the work and handle it in another process.

Careful Drums posted:

Can this be accomplished or is there a better way to manage bundled static resources across projects?

I've searched for a good solution to handling resources that don't exist in the project and I don't think I've found a good general purpose one yet. In my case, I was looking for something that would ease swapping between a CDN and some local repository for static content.

Bognar fucked around with this message at 21:26 on Jun 4, 2014

Polio Vax Scene
Apr 5, 2009



My MVC app communicates with services at client sites, performing a few queries based on the input sent by the 'user'. If the client's service takes too long to respond (15 seconds in this case) the user's original request will just time out, even though things are fine.

I store/save the input sent immediately, so I just want to say 'yep, got it' and then work with it.

I ended up spawning an additional thread, which works well. I'm doing a catch-all in the new thread and using ILogger to write the exception if it happens.

ManoliIsFat
Oct 4, 2002

You may want to think about a queuing mechanism to make life a little easier. If it's truly an asynchronous job, you just throw a message to a rabbitmq or something that describes the job you want done, and some service/program pops it off the queue, does the job, and writes the result. You could also retry failures this way. No more spawning threads in the web app.
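With the RabbitMQ .NET client, the web app's half is just a publish - a sketch, with the queue name and message shape made up:

C# code:
// Sketch: the web app hands the job to a queue instead of a thread.
// Queue name and message shape are made up.
using System.Text;
using RabbitMQ.Client;

public static class JobQueue
{
    public static void Publish(string jobJson)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("client-queries", durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);
            channel.BasicPublish(exchange: "", routingKey: "client-queries",
                                 basicProperties: null,
                                 body: Encoding.UTF8.GetBytes(jobJson));
        }
        // a separate worker service consumes "client-queries", does the work,
        // writes the result, and can re-queue failures for retry
    }
}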

ManoliIsFat fucked around with this message at 03:23 on Jun 5, 2014

ljw1004
Jan 18, 2005

rum

Bognar posted:

You could spawn a thread from your controller action and continue work there.

You sure about that? I thought that ASP.Net reserved the right to kill things that aren't part of an in-progress request?

In any case, recently announced and new in .NET 4.5.2 is HostingEnvironment.QueueBackgroundWorkItem for this kind of thing:
http://msdn.microsoft.com/en-us/library/ms171868(v=vs.110).aspx
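Usage is a one-liner: ASP.NET tracks the work item and gives it a grace period at AppDomain shutdown. A sketch in the context of the scenario above - the controller and both helper methods are hypothetical:

C# code:
// .NET 4.5.2+: ASP.NET tracks this work item and gives it a grace period
// at AppDomain shutdown. The controller, SaveInput, and SendToClientSiteAsync
// are hypothetical.
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Mvc;

public class IntakeController : Controller
{
    public ActionResult Receive(string input)
    {
        SaveInput(input); // persist immediately, as described above

        // fire-and-forget, but registered with the host
        HostingEnvironment.QueueBackgroundWorkItem(ct => SendToClientSiteAsync(input, ct));

        return Content("got it");
    }

    void SaveInput(string input) { /* ... */ }

    Task SendToClientSiteAsync(string input, CancellationToken ct)
    {
        // ... call the client site's slow service here ...
        return Task.FromResult(0);
    }
}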

RICHUNCLEPENNYBAGS
Dec 21, 2010

ljw1004 posted:

"Automatic retry" is a bad design smell. Here are some slides from a talk I've been giving internally at Microsoft...






Poor user experience
Let's say you retry after an appropriate delay. Well, some failures will be intermittent, and others will be permanent. What the user will experience is that without this code they'd observe failure 20% of the time in 2 seconds, but with your code they'll observe failure 15% of the time in 10 seconds. That's an eternity, and a worse overall user experience.

There is a tried-and-true best practice for handling failures. That is: give the user an error message as promptly as possible, and let them take action as they see fit (normally by hitting the Refresh button). This leads to happier users.


Incorrect engineering assumptions
More fundamentally, what is an "appropriate delay"? If someone codes a retry, they are making a statistical assertion that the likelihood of failure now is uncorrelated with the likelihood of failure after the delay. (If that weren't true, then there'd be no point delaying!)

This statistical assertion is not backed up by evidence. There are no generally accepted statistical rules of thumb here. Anything you write here is "coding blind" -- at best it's needless code that creates a worse user experience, and at worst it introduces bugs in subtle and rarely-tested codepaths.

If you're writing a mobile app, then failures are most commonly associated with poor connectivity - e.g. walking into a closed building, or wandering out of tower range, or wifi configuration issues. Nothing in your code will ever do the right action here. The right action is to give the user full information properly, and let them take remedial action.

If you're writing a backend batch-processing server, then maybe the right action upon failure is to push the item to the back of the queue so it runs later this night or the next night. That way, things like "404 not found" errors will likely be fixed up by an engineer because his pager rings and tells him to get his drat service back up and running within a couple of hours. And "temporary timeout" is just as likely caused by a DDOS attack or domino datacenter crash that will also take a couple of hours to fix.

For communication within a datacenter -- in 6 months of heavy duty web traffic within AWS, my brother (PhD in network theory, now working for a datamining startup) said he never once observed failure between the machines.

Bugs due to race conditions
The basic law of distributed systems is that there are three ways a network message can play out:

(1) It might succeed and you know it (200 OK)
(2) It might fail and you know it (500 Failure, ...)
(3) It might either succeed or fail but you don't know which (TimeoutException, or ConnectionClosed)

Any library which fails to expose these three possibilities is flawed, in the sense that apps can't use it to write reliable code.

I guess it's okay because your API is specifically designed solely for unimportant messages to the server, i.e. ones where it's entirely fine for the correct running of your app even if the POST never succeeded. (what are you using it for? just opportunistic telemetry? there are very few cases where fire-and-forget is ever acceptable...)

Let's spell out. Imagine the first POST attempt succeeds in creating/updating data on the webservice, but nevertheless ends with a TimeoutException. Then you'll try again -- even if some other client has seen the data in the meantime and acted upon it or changed it! And even if there is no other client, well, will your webservice reliably handle two POSTS to the same URL?

Generally, the basic tools for distributed code are idempotency and at-least-once guarantees. "Idempotency" is when you make sure that, even if your operation is performed more than once, it still does the right thing. GET operations are always idempotent. As for PUT and POST, well, that depends on the exact semantics. For some updates like "add $1 into my bank account" you need to invent your own ways to ensure idempotency. Typically you do this using http "etags", which provided the distributed equivalent of Interlocked.CompareExchange.

"At-least-once guarantees" are because, if you don't know if the operation succeeded or failed, then you'll likely run it again. In a mobile app, if it failed, you show the error to the user and let them hit the Retry button, so the user provides the guarantee. In a datacenter batch processor, you'd likely stick the item at the back of the queue so it can be retried in a couple of hours. You'd also increment a "poison pill count" so that, if there's something structurally wrong, it doesn't keep retrying from now to eternity but instead emails an operator to resolve the problem manually.

But I mean, if your application depends on an external API or something, how can you not have exponential backoff? Just giving up on the first retry seems pretty lovely since a lot of times retrying does work.

Gul Banana
Nov 28, 2003

idk what kind of internal networks you're using at Microsoft where HTTP or DNS or firewalls never just, go down for five seconds. it's been sadly common for me, and my clients would rather their batch processing tasks start up again 10 seconds later than wait until the next processing window.

idempotency is a valuable tool for this of course. it isn't mutually exclusive with back-off-and-retry, and in fact makes it far safer.

No Safe Word
Feb 26, 2005

RICHUNCLEPENNYBAGS posted:

But I mean, if your application depends on an external API or something, how can you not have exponential backoff? Just giving up on the first retry seems pretty lovely since a lot of times retrying does work.

Just because it's a smell doesn't mean it's categorically bad.

I mean, Microsoft itself has the whole Transient Fault Handling block for their Azure bits that has Exponential Back-off as one of the provided retry strategies: http://msdn.microsoft.com/en-us/library/hh680934(PandP.50).aspx

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

ljw1004 posted:

You sure about that? I thought that ASP.Net reserved the right to kill things that aren't part of an in-progress request?

That's a good point. ASP.NET can tear down AppDomains for multiple reasons, and if it doesn't know about the code that's running, it won't wait on it. You could be lazy about it and use ThreadPool.QueueUserWorkItem so it uses a thread from the ASP.NET thread pool, though the more correct way is to create a class representing your work, have it implement IRegisteredObject, and call HostingEnvironment.RegisterObject to let ASP.NET know you're doing work. That gives you ~30 seconds to complete your work before the AppDomain is torn down.
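Roughly this shape - a sketch, with the actual work elided:

C# code:
// Sketch of the IRegisteredObject approach; the work itself is elided.
using System.Web.Hosting;

public class BackgroundWork : IRegisteredObject
{
    public BackgroundWork()
    {
        HostingEnvironment.RegisterObject(this); // tell ASP.NET we exist
    }

    public void DoWork()
    {
        try
        {
            // ... the long-running work goes here ...
        }
        finally
        {
            HostingEnvironment.UnregisterObject(this);
        }
    }

    // ASP.NET calls this at shutdown: first with immediate == false (grace
    // period), then with immediate == true if we still haven't unregistered.
    public void Stop(bool immediate)
    {
        // signal the work to wrap up; UnregisterObject must get called
        HostingEnvironment.UnregisterObject(this);
    }
}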

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



ljw1004 posted:

You sure about that? I thought that ASP.Net reserved the right to kill things that aren't part of an in-progress request?

I can (unfortunately) confirm that it does (superficially) work. In light of this conversation, I'm going to try to get the guy who did it to change the way he handles our use case.

Opulent Ceremony
Feb 22, 2012

Manslaughter posted:

In an MVC controller can I send a response through the context, close it out, then continue on with processing? I tried doing HttpContext.Response.End() and such right at the start but my request didn't get a response until the server's entire routine was done.

I think there are libraries for this like HangFire, though I've never used them.

Uziel
Jun 28, 2004

Ask me about losing 200lbs, and becoming the Viking God of W&W.
I'm trying to convert my hobby project's web scraper to F#. I want to clean up my results so that I simply have a List of string arrays, the caveat being that I'm consuming the results from the F# library in C#.

This method returns a string array, and I'd like to send each string in the array to a function called resultsBody that takes a string and returns a sequence of string arrays, but the end result should be a single sequence of string[], not a sequence of sequences of string[]s.

code:
//takes a string, returns a sequence of string arrays
let resultsBody resultsPage = //snipped out, just html agility pack parsing of html

//returns a string array
 let asyncScrape url allParameters =
        allParameters
        |> Seq.map(fun v ->
            yearAndClassResultsAsync url v)
            |> Async.Parallel
            |> Async.RunSynchronously

//takes a string array, using the result of asyncScrape, but returns seq<string[]> []
let parseSite html =
        Array.mapi (fun s -> resultsBody) html
Any ideas? I simply want a single sequence of string arrays that is the result of resultsBody being applied to asyncScrape's resulting string[]!


Edit: I figured it out. It was this:
code:
let asyncScrape url allParameters =
        allParameters
        |> Seq.map(fun v ->
            yearAndClassResultsAsync url v)
            |> Async.Parallel
            |> Async.RunSynchronously
            |> Array.mapi (fun s -> resultsBody)
                |> Seq.map (fun m -> 
                    Array.concat m)

//c# code that gets me an ienumerable<string[]>
var resultsForYear = Scrape.asyncScrape(url, allParameters);

Uziel fucked around with this message at 02:20 on Jun 6, 2014

raminasi
Jan 25, 2005

a last drink with no ice
I think you could have just used Seq.collect. (Also check out the Array.Parallel module.)

e: Actually the solution you posted shouldn't be compiling for a couple of reasons so I'm not sure what you're doing.

raminasi fucked around with this message at 04:29 on Jun 6, 2014

spiderlemur
Nov 6, 2010
Hey guys, newbie here.

I have an ASP.NET MVC web application, and I want to present some data in a view that refreshes itself every so often. The data is coming from a WCF service, so I'd like to continuously call that service every few seconds and send the data back to the user without forcing them to refresh the page. How is this best achieved?

I tried spending some time with Google on this but I got a lot of different and unrelated results. I'd like to keep this as simple as possible since I'm very new to MVC. It's also possible that I'm just not searching for the right things.

Horn
Jun 18, 2004

Penetration is the key to success
College Slice

spiderlemur posted:

Hey guys, newbie here.

I have an ASP.NET MVC web application, and I want to present some data in a view that refreshes itself every so often. The data is coming from a WCF service, so I'd like to continuously call that service every few seconds and send the data back to the user without forcing them to refresh the page. How is this best achieved?

I tried spending some time with Google on this but I got a lot of different and unrelated results. I'd like to keep this as simple as possible since I'm very new to MVC. It's also possible that I'm just not searching for the right things.

What you're looking for is some kind of ajax call to pull new data down from the server. This post gives a pretty basic example.
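On the server side, that's just an MVC action returning JSON for the page to poll every few seconds (the controller and WCF proxy names are hypothetical):

C# code:
// Server half of the polling approach: an action the page hits on a timer
// (e.g. setInterval + $.getJSON). Controller and WCF proxy are hypothetical.
using System.Web.Mvc;

public class DashboardController : Controller
{
    public JsonResult LatestData()
    {
        var client = new StatusServiceClient(); // generated WCF client proxy
        var data = client.GetLatest();
        return Json(data, JsonRequestBehavior.AllowGet);
    }
}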
