mortarr
Apr 28, 2005

frozen meat at high speed

GrumpyDoctor posted:

What have you tried, and what's not working?

I was treating c# like javascript and it turns out if I do a Post instead of a Get I can JSON.stringify() my object and send it up as post data, and then it's all sweet from there. Can't believe I wasted like four hours on this poo poo.

Mr Shiny Pants
Nov 12, 2012

mortarr posted:

I was treating c# like javascript and it turns out if I do a Post instead of a Get I can JSON.stringify() my object and send it up as post data, and then it's all sweet from there. Can't believe I wasted like four hours on this poo poo.

Json.NET is an awesome library when working with objects that you need to serialize to JSON. You serialize the object, load it into the request stream and off you go.
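
Something along these lines, roughly (untested sketch; needs the Newtonsoft.Json package, and the URL and Thing type are just placeholders):

code:
// Rough sketch: serialize with Json.NET, then write the JSON into the request stream.
// Assumes System.IO, System.Net and Newtonsoft.Json are imported.
static string PostAsJson(string url, Thing thing)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentType = "application/json";

    var json = JsonConvert.SerializeObject(thing);
    using (var writer = new StreamWriter(request.GetRequestStream()))
        writer.Write(json);

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
        return reader.ReadToEnd();
}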

Che Delilas
Nov 23, 2009
FREE TIBET WEED

mortarr posted:

Can't believe I wasted like four hours on this poo poo.

Welcome to programming. Learn to love simple solutions to :bang: problems.

epswing
Nov 4, 2003

Soiled Meat

mortarr posted:

I was treating c# like javascript and it turns out if I do a Post instead of a Get I can JSON.stringify() my object and send it up as post data, and then it's all sweet from there. Can't believe I wasted like four hours on this poo poo.

I've run into this before. My request should definitely be a GET, but the data I'm sending is complex (an array of objects with properties) and just doesn't work with GET, but does work with POST. When I say "doesn't work" I mean the IEnumerable arg is null.

So at the time I just called it a day and used POST.

But...what up with that?

Mr Shiny Pants
Nov 12, 2012

epalm posted:

I've run into this before. My request should definitely be a GET, but the data I'm sending is complex (an array of objects with properties) and just doesn't work with GET, but does work with POST. When I say "doesn't work" I mean the IEnumerable arg is null.

So at the time I just called it a day and used POST.

But...what up with that?

Uhm GET is for retrieving data? Post is for sending? Maybe I am missing something?

EDIT: Or do you need to send a query or something? What I usually do is use the URL to create the query.

Mr Shiny Pants fucked around with this message at 11:24 on Sep 4, 2014

epswing
Nov 4, 2003

Soiled Meat
In both scenarios data is being "sent", whether I GET or POST /page.html?searchTerm=poop

The difference between GET and POST is that GET should have no server-side side effects (search for something, get back results), whereas POST may have server-side side effects (save this record).

My question is why can't I use GET and send a complex type? For example, say I'm running a search with a list of search terms.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

epalm posted:

In both scenarios data is being "sent", whether I GET or POST /page.html?searchTerm=poop

The difference between GET and POST is that GET should have no server-side side effects (search for something, get back results), whereas POST may have server-side side effects (save this record).

My question is why can't I use GET and send a complex type? For example, say I'm running a search with a list of search terms.

Well according to
http://stackoverflow.com/questions/2300871/how-to-take-an-array-of-parameters-as-get-post-in-asp-net-mvc
you can construct your HttpGet action method with a string[] as an input param, so each parameter included in the query string is an element of the array. You have to name each parameter the same thing (?searchterm=Vegetables&searchterm=Fruits&searchterm=Dairy).

As for complex objects, I mean we're still dealing with GET here which uses the query string. You have to encode your complex data into a string of some kind in such a way that you can decode it in the action method on the server side (at least, I'm pretty sure that's the case; I'm fairly new to web dev).
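
If I've got it right, the action side of that looks something like this (sketch, assuming ASP.NET MVC and that the parameter name matches the repeated query string key):

code:
// Sketch: binds /Search?searchterm=Vegetables&searchterm=Fruits&searchterm=Dairy
[HttpGet]
public ActionResult Search(string[] searchterm)
{
    // searchterm == { "Vegetables", "Fruits", "Dairy" } for the URL above
    return View(searchterm);
}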

Che Delilas fucked around with this message at 19:09 on Sep 4, 2014

ljw1004
Jan 18, 2005

rum

epalm posted:

My question is why can't I use GET and send a complex type? For example, say I'm running a search with a list of search terms.

I don't even understand if you're talking about client-side or server-side, and which language you're writing in...


Sure you can send a complex type in a GET url.

epswing
Nov 4, 2003

Soiled Meat
JavaScript code:
rows = [
    { Id: 42, Qty: 2 },
    { Id: 44, Qty: 2 },
    { Id: 48, Qty: 4 },
];

$.ajax({
    url: "/path/to/mymethod",
    type: 'POST',
    contentType: "application/json",
    dataType: "html",
    data: JSON.stringify({ model: rows }),
});
C# code:
[HttpPost]
public PartialViewResult MyMethod(int expId, IEnumerable<Row> model)
{
    return PartialView("...");
}

public class Row
{
    public int Id { get; set; }
    public int Qty { get; set; }
}
The above, using POST, works like a charm. Technically I "should" be using GET because I'm just requesting some data/visuals, and the call to MyMethod has no side effects.

If I change type: 'POST', to type: 'GET',
and [HttpPost] to [HttpGet]
then MyMethod's model argument will inexplicably be null.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

epalm posted:

JavaScript code:
rows = [
    { Id: 42, Qty: 2 },
    { Id: 44, Qty: 2 },
    { Id: 48, Qty: 4 },
];

$.ajax({
    url: "/path/to/mymethod",
    type: 'POST',
    contentType: "application/json",
    dataType: "html",
    data: JSON.stringify({ model: rows }),
});
C# code:
[HttpPost]
public PartialViewResult MyMethod(int expId, IEnumerable<Row> model)
{
    return PartialView("...");
}

public class Row
{
    public int Id { get; set; }
    public int Qty { get; set; }
}
The above, using POST, works like a charm. Technically I "should" be using GET because I'm just requesting some data/visuals, and the call to MyMethod has no side effects.

If I change type: 'POST', to type: 'GET',
and [HttpPost] to [HttpGet]
then MyMethod's model argument will inexplicably be null.

What query string do you get when you run that as HttpGet? What that's saying to me is that the output of JSON.stringify({model:rows}) is a string that doesn't work as a URI in a browser's address bar.

Edit: As far as what you 'should' be using, GET's primary advantage is that it generates that URI that can be copy/pasted, so if a user wants to bookmark the 4th page of a set of search results or send that as a link in email, they can. If you don't need that level of convenience, POST is perfectly acceptable.

Che Delilas fucked around with this message at 20:34 on Sep 4, 2014

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy
Generally, you shouldn't JSON.stringify if you're using jQuery AJAX to GET. GET commands are (de)serialized as application/x-www-form-urlencoded because they have to be part of the query string (GET request bodies are ignored per the HTTP/1.1 spec). POST requests, on the other hand, can have anything in the body and will usually work on most servers as long as the Content-Type header is correct (e.g. application/json).

Unfortunately, the default MVC model binder is rather terrible at deserializing complex objects from a query string. A lot of things have to fall into place to make it work, and it's generally not worth figuring it out over just POSTing.
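
To give an idea of what "falling into place" means: if I remember right, the default binder only copes when the collection is spelled out with indexed keys in the query string, something like this (sketch, reusing the names from the earlier post):

code:
// Sketch: the query string shape the default MVC binder understands for a
// collection of complex objects, e.g.
//   /path/to/mymethod?model[0].Id=42&model[0].Qty=2&model[1].Id=44&model[1].Qty=2
[HttpGet]
public PartialViewResult MyMethod(IEnumerable<Row> model)
{
    return PartialView("...", model);
}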

Mr. Crow
May 22, 2008

Snap City mayor for life

Bognar posted:

Generally, you shouldn't JSON.stringify if you're using jQuery AJAX to GET. GET commands are (de)serialized as application/x-www-form-urlencoded because they have to be part of the query string (GET request bodies are ignored per the HTTP/1.1 spec). POST requests, on the other hand, can have anything in the body and will usually work on most servers as long as the Content-Type header is correct (e.g. application/json).

Unfortunately, the default MVC model binder is rather terrible at deserializing complex objects from a query string. A lot of things have to fall into place to make it work, and it's generally not worth figuring it out over just POSTing.

Missed opportunity to :justpost:

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

ljw1004 posted:

It looks like TextFieldParser doesn't actually buy you much in this situation, and is more trouble than it's worth. I'd do it like this:

code:
Using afile As New IO.StreamReader("TextFile1.csv")
    afile.ReadLine() ' discard the first row, which contains column headings
    While Not afile.EndOfStream
        Dim line = afile.ReadLine()
        Dim csv = line.Split(","c)
        If csv.Count <> 3 Then Continue While
        Console.WriteLine("{0}...{1}...{2}", csv(0), csv(1), csv(2))
    End While
End Using
You'll need some heuristic for how to detect column headings, and blank lines, and subtitles. In your code your heuristic was to look for the exact content. I picked different heuristics in the hope that they'd be more general (i.e. wouldn't need you rewriting your code when the CSV starts looking slightly different).

Thanks for your speedy response; I was just heading out of the office last night when I wrote that (it was 9pm my time) and haven't had a chance to revisit til now.

That's... kind of the conclusion I came to myself after doing some more reading: if I'm dealing with unstructured data, I might as well not be using a structured parser like ReadFields. What makes this data even better is that the number of columns isn't actually fixed, which is why I had to go with the string-literal approach I was taking. I'm hoping it will still work moving forward, and luckily converting to ReadLine doesn't really affect my core logic for the columns, just the While Not loop that reads them in.

ljw1004
Jan 18, 2005

rum

Bognar posted:

Generally, you shouldn't JSON.stringify if you're using jQuery AJAX to GET. GET commands are (de)serialized as application/x-www-form-urlencoded because they have to be part of the query string

Should I feel bad that my strong instinct in this case is to make a request of the form

code:
"/path/to/mymethod?query=" + uriEncodeComponent(JSON.stringify(...))
And then write my ASP.Net WebAPI or MVC to retrieve it like this

code:
[HttpGet]
public PartialViewResult MyMethod(string query)
{
    var s = HttpUtility.UrlDecode(query.Replace("%20","+"));
    var json = JSON.Parse(s); // or whatever
What this loses is that the query string, although still mostly human-readable, is marginally less readable. What it gains is that it works and you're in precise control over the exact serialization/deserialization format.


I always get confused over the exact correct uri encoding/decoding stuff. I followed what I read here:
http://stackoverflow.com/questions/86477/does-c-sharp-have-an-equivalent-to-javascripts-encodeuricomponent
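
A sketch of the usual pairing, for reference (treat it as a sketch only):

code:
// Uri.EscapeDataString is the closest C# relative of JavaScript's encodeURIComponent;
// HttpUtility.UrlDecode (System.Web) reverses it. Note UrlDecode also turns '+' into
// a space, which is where the %20/+ confusion tends to come from.
var encoded = Uri.EscapeDataString("{\"a\":[1,2]}"); // %7B%22a%22%3A%5B1%2C2%5D%7D
var decoded = HttpUtility.UrlDecode(encoded);        // back to {"a":[1,2]}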

SirViver
Oct 22, 2008
Were you attempting to post this to the coding horrors thread, or did I just fail to get the joke?

Sedro
Dec 31, 2008
JSON in the URL is not just marginally less readable, it's incomprehensible

Before encoding, hmm doesn't look so bad
code:
/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}
After encoding
code:
/Search?term=pumas&filters=%7B%22productType%22%3A%5B%22Clothing%22%2C%22Bags%22%5D%2C%22color%22%3A%5B%22Black%22%2C%22Red%22%5D%7D
:justpost: it

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Mr. Crow posted:

Missed opportunity to :justpost:

:smith: I am ashamed.

Factor Mystic
Mar 20, 2006

Baby's First Post-Apocalyptic Fiction
What's a good technique to parallelize a recursive, async retrieval task? Bear with me as I haven't fully thought this out. I'm loading hierarchical items (folders of items & folders) from a remote resource, and there's wait time associated with getting a folder's children. My current solution works perfectly fine... it's basically naively recursing folders. It looks something like this (substantially simplified):

code:
var allItems = new ObservableCollection<Item>();

async Task LoadStuff(Item folder) {
    var items = await folder.GetChildItems();
    foreach (var item in items)
        this.allItems.Add(item); // ObservableCollection has no AddRange

    var folders = items.Where(item => item.IsFolder);
    foreach(var f in folders) {
        await LoadStuff(f);
    }
}

var rootFolder = ...
LoadStuff(rootFolder);
It's kinda slow. I'm trying to think of ways to make it faster. Doing something like "await Task.WhenAll(folders.Select(LoadStuff))" seems to balloon the number of threads in use, but I assume that's capped by thread pool availability anyway. I could potentially NOT modify allItems in the body and add an accumulator parameter to the method, then return the entire set at the end, but that's somewhat unfortunate because I am taking advantage of OC<T>'s events to do some progressive UI updates, but that may be manageable if there's a parallel technique here.

Maybe some kind of producer/consumer queue, capped with a particular degree of parallelism? Then push folders into it, get back children at some point in the future, evaluate them into more queue insertions? That's getting more complicated.
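
The direction I'm imagining is maybe something like this (untested sketch; a SemaphoreSlim just to cap how many requests are in flight, and the return type of GetChildItems is assumed):

code:
// Untested sketch: same recursion, but at most 3 GetChildItems calls in flight at once.
// Touching allItems like this is only safe if everything resumes on one context (e.g. UI thread).
private readonly SemaphoreSlim throttle = new SemaphoreSlim(3);

async Task LoadStuff(Item folder)
{
    var items = await GetChildItemsThrottled(folder);

    var folders = new List<Item>();
    foreach (var item in items)
    {
        allItems.Add(item);
        if (item.IsFolder)
            folders.Add(item);
    }

    await Task.WhenAll(folders.Select(LoadStuff));
}

async Task<IEnumerable<Item>> GetChildItemsThrottled(Item folder)
{
    await throttle.WaitAsync();
    try
    {
        return await folder.GetChildItems();
    }
    finally
    {
        throttle.Release();   // released before recursing, so the recursion can't deadlock on the semaphore
    }
}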

Factor Mystic fucked around with this message at 04:25 on Sep 5, 2014

sarehu
Apr 20, 2007

(call/cc call/cc)
Edit: I can't help you without more specifics.

sarehu fucked around with this message at 04:52 on Sep 5, 2014

ljw1004
Jan 18, 2005

rum

Factor Mystic posted:

What's a good technique to parallelize a recursive, async retrieval task? I'm loading hierarchical items (folders of items & folders) from a remote resource, and there's wait time associated with getting a folder's children.
...
Doing something like "await Task.WhenAll(folders.Select(LoadStuff))" seems to balloon the number of threads in use, but I assume that's capped by thread pool availability anyway.

"Balloon the number of threads" is a surprise to me. The await operator doesn't create new threads. Task.WhenAll doesn't create new threads. The general principle is that no keywords in C# create new threads, and only a few specific clearly-identified functions like "Task.Run" will allocate anything on the threadpool. So where did your mysterious ballooning threads come from? I don't know. There might have been other threads being used under the hood, due to misbehaved APIs, but it's hard to know without a debugger.

Stepping back, let's examine the problem from a theoretical angle. By the time your algorithm completes you will have issued "N" total calls to the remote resource and they will have completed. No amount of parallelization will ever change this total number "N".

You believe that your remote resource can typically handle a higher rate of concurrent requests than just simply doing those N requests one after the other. This is a reasonable belief and true of most servers. So what is the optimum number of parallel requests? Impossible to say. We don't know if the server will be fielding requests from other clients at the same time. We don't know if the server will reject requests if it's busy, or queue them up. It's likely not a good use of resources to implement our own rate-adjusting throttling mechanism to determine on-the-fly the optimum number of requests (like the throttler in TCP/IP does).

The best practical answer, one that works great in most situations, is just pick a number. Let's say "3" parallel requests. If the server takes time "t" for each request, then you'll finish in about N*t/3.


Here are two idioms for throttling async stuff. The first looks stupid, but it's clear and works and doesn't need new abstractions and is robust, and that in my book is a good pattern :) If you run it all on the UI thread then you don't even need to worry about using concurrency-safe data structures. For instance, you can use a normal (non-concurrent) queue, and your ProcessWorkItem method can happily add things to an ObservableCollection that's databound to the UI.

code:
async void Button1_Click() {
    var t1 = WorkerAsync();
    var t2 = WorkerAsync();
    var t3 = WorkerAsync();
    await Task.WhenAll(t1,t2,t3);
}

async Task WorkerAsync() {
   while (queue.Count>0) {
      var i = queue.Dequeue();
      await ProcessWorkItem(i); // assumes ProcessWorkItem is async (returns Task)
   }
}
In your case, you described your problem recursively, but I've implemented it non-recursively. Just make a queue of all outstanding folders that have yet to be processed. In your "ProcessWorkItem" function, you can retrieve all child folders, add them to the queue, then retrieve all child files.
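
For instance, ProcessWorkItem could look roughly like this (a sketch that borrows GetChildItems, IsFolder, allItems and queue from your earlier snippet, so adjust to taste). One wrinkle with the simple while-loop: a worker can momentarily see an empty queue while another worker is still fetching children that will be enqueued, so you may want to track outstanding work before letting the workers exit.

code:
// Sketch only -- the folder version of ProcessWorkItem for the three-worker idiom.
async Task ProcessWorkItem(Item folder)
{
    var items = await folder.GetChildItems();

    foreach (var item in items)
    {
        allItems.Add(item);        // safe when everything runs on the UI thread
        if (item.IsFolder)
            queue.Enqueue(item);   // more work for the workers to pick up
    }
}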



Here's another solution for throttling based on the "Dataflow" library from Microsoft. Dataflow is powerful and you can wire it up in more sophisticated ways. It runs the callbacks on the threadpool. But again, you're still turning the recursive thing into something queue-based.

code:
private static async Task LotsOfWorkAsync()
{
    ITargetBlock<Folder> throttle = null;
    throttle = Throttle<Folder>(
        async folder =>
        {
            // handle the folder by posting further folders to the throttle
        },
        maxParallelism: 3);
    throttle.Post(top_level_folder);

    // Signal that we're done enqueuing work.
    // Actually, I don't know where best to call this function.
    // You'd call it when there are no more items left to enqueue.
    // I don't know how to figure that out cleanly.
    //throttle.Complete();

    // Don't complete this async method until the queue is fully processed.
    await throttle.Completion;
}

private static ITargetBlock<T> Throttle<T>(Func<T, Task> worker, int maxParallelism)
{
    var block = new ActionBlock<T>(worker,
        new ExecutionDataflowBlockOptions {
            MaxDegreeOfParallelism = maxParallelism,
        });
    return block;
}

ljw1004 fucked around with this message at 05:41 on Sep 5, 2014

wwb
Aug 17, 2004

If I need a structure complex enough to need JSON to deal with it, I would start strongly considering POST. The big wall you can run into is max URL length -- which can be set by intermediaries and security software as well as by things you might be able to understand [browsers] and fix [server settings].

That said, for:

code:
/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}
I would prefer:

code:
/Search?term=pumas&productType=Clothing&productType=Bags&color=Black&color=Red
This makes the url much more readable and you don't waste precious characters encoding json for no reason.

From the controller side, this will work with

code:
public ActionResult Search(string term, string[] productType, string[] color)
If you are working with the web API it will serialize out to a properly constructed object if you'd prefer. Not sure if that works on the MVC side though.
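
e.g. on the Web API side, something like this should bind straight from the query string (sketch; SearchCriteria is just a made-up name):

code:
// Sketch (ASP.NET Web API 2): [FromUri] builds the complex type from the query string,
// e.g. /api/search?Term=pumas&ProductType=Clothing&ProductType=Bags&Color=Black&Color=Red
public class SearchCriteria
{
    public string Term { get; set; }
    public string[] ProductType { get; set; }
    public string[] Color { get; set; }
}

public class SearchController : ApiController
{
    public IHttpActionResult Get([FromUri] SearchCriteria criteria)
    {
        return Ok(criteria);
    }
}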

raminasi
Jan 25, 2005

a last drink with no ice
If I've got a spun-off async operation blocking on a call to TcpListener.AcceptTcpClient(), and something else calls TcpListener.Stop(), a SocketException is thrown with the message "A blocking operation was interrupted by a call to WSACancelBlockingCall". That seems like it's exactly what I want. Can I just swallow this exception? It feels like there should be a better way to do this.
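
Roughly the shape I mean (trimmed sketch, not my actual code; HandleClient is a placeholder):

code:
// Sketch: a flag set before Stop() lets the accept loop tell an intentional
// shutdown apart from a genuine failure.
private volatile bool stopping;

void AcceptLoop(TcpListener listener)
{
    while (!stopping)
    {
        try
        {
            var client = listener.AcceptTcpClient();   // blocks here
            HandleClient(client);                      // placeholder
        }
        catch (SocketException)
        {
            if (stopping)
                break;     // Stop() interrupted the accept; swallow and exit
            throw;
        }
    }
}

void Shutdown(TcpListener listener)
{
    stopping = true;
    listener.Stop();       // causes AcceptTcpClient to throw the SocketException
}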

edit: There had better be a better way to do this, as I am apparently unable to actually catch this exception. No matter where I put try/catch blocks, the exception slips through and brings my app down.

raminasi fucked around with this message at 19:16 on Sep 6, 2014

epswing
Nov 4, 2003

Soiled Meat
Why does ASP running off Visual Studio (which I guess is using IISExpress?) not allow connections from remote hosts? If I'm writing a service-based thing with WebAPI I absolutely must "deploy" it somewhere for my co-workers (who are, say, writing in a completely different language, and don't have VS or even Windows for that matter) to play with it?

Edit: before ASP I wrote a fair bit of django, and you can just start up the dev server with python manage.py runserver 0.0.0.0:8000 and blamo, publicly available web server to play/test with. They do put all sorts of warnings about not using the dev server in production. Is MS just making sure the ASP/VS/IISExpress dev server is never used in prod? If so, thanks for holding my hand, but drat that's annoying.

epswing fucked around with this message at 19:19 on Sep 6, 2014

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug
You can, but I'd set up a server that you push to for integration. That way you can bring your local IIS Express up and down and your co-workers don't need to worry about whether you left the API up or not. If you can't get another box, deploy to your local IIS instance and leave that running over port 80.

zerofunk
Apr 24, 2004
You can install IIS and have it use that instead. That's mostly what I use although I'm not sure what all the differences are.

EssOEss
Oct 23, 2006
128-bit approved
By ASP, do you mean ASP.NET? Just install the full version of IIS and you'll be fine. It should be a checkbox in the "Windows Features" dialog. The IIS you get with any modern Windows has all the same features as the real server version, except it is limited to 10 concurrent connections on a client OS. IIS Express is far more barebones, though likely still fine for local development of simple web API stuff.

But yeah, if you are working in any professional context, set up an integration server for other teams to work against. Nobody wants your ongoing development to keep breaking their apps whenever you change the interfaces or whatnot.

epswing
Nov 4, 2003

Soiled Meat

quote:

:words:

gariig: The steps in that apparently don't work (for me). The page just doesn't load.

EssOEss: Yep, ASP.NET.

I understand what you're all saying. And I'll get IIS installed if I truly must do so. But say I didn't have permission to install IIS. Or say I didn't want to because I'm working on a pretty tiny SSD and didn't want to give up the space. Or say I wanted VS to debug a request as it came in.

I'm stuck right where this guy is (used SPI Port Forward, getting Bad Request - Invalid Hostname HTTP Error 400): http://stackoverflow.com/questions/22561155/how-to-connect-to-visual-studio-server-remotely. Others seem to have used SPI Port Forward successfully, so I'm not sure what I'm doing wrong. Theoretically I should just be able to forward a port, and VS shouldn't be able to tell the difference between a local and a remote request, right?
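
(For reference, the usual IIS Express tweak people suggest for that Invalid Hostname error looks roughly like this -- sketch from memory; 8080 is just an example port:)

code:
<!-- %USERPROFILE%\Documents\IISExpress\config\applicationhost.config:
     the site's binding is restricted to localhost by default... -->
<binding protocol="http" bindingInformation="*:8080:localhost" />
<!-- ...and supposedly needs to allow any hostname: -->
<binding protocol="http" bindingInformation="*:8080:*" />
and then, run as admin:

code:
netsh http add urlacl url=http://*:8080/ user=Everyone
netsh advfirewall firewall add rule name="IIS Express 8080" dir=in action=allow protocol=TCP localport=8080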

epswing fucked around with this message at 01:57 on Sep 7, 2014

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

epalm posted:

I understand what you're all saying.

I don't think you do. Everyone is saying that there should be a box whose sole responsibility is hosting a dev version of your software. You can debug a remote process by attaching the debugger to it.

EssOEss
Oct 23, 2006
128-bit approved
Right. The reason for this separate box is not to make life easy for you, it is to make life easy for the colleagues who have to work with your stuff.

Installing real IIS on your PC is just an answer to your question, it is not the solution to your problem.

RICHUNCLEPENNYBAGS
Dec 21, 2010

EssOEss posted:

Right. The reason for this separate box is not to make life for you easy, it is to make life easy for the colleagues who have to work with your stuff.

Installing real IIS on your PC is just an answer to your question, it is not the solution to your problem.

So for people who do this, how do you handle different branches working with modified database schemas?

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

RICHUNCLEPENNYBAGS posted:

So for people who do this, how do you handle different branches working with modified database schemas?

I'd spin up a VM for each branch to deploy to. What problem are you trying to solve?

EDIT: VVVV That's another way to do it. Also, deploying to Azure/AWS/etc would work

gariig fucked around with this message at 17:56 on Sep 7, 2014

crashdome
Jun 28, 2011
Alternatively, you could simply deploy each branch as its own host header or port in IIS if you are limited.

Factor Mystic
Mar 20, 2006

Baby's First Post-Apocalyptic Fiction

ljw1004 posted:

"Balloon the number of threads" is a surprise to me. The await operator doesn't create new threads. Task.WhenAll doesn't create new threads. The general principle is that no keywords in C# create new threads, and only a few specific clearly-identified functions like "Task.Run" will allocate anything on the threadpool. So where did your mysterious ballooning threads come from? I don't know. There might have been other threads being used under the hood, due to misbehaved APIs, but it's hard to know without a debugger.

I don't know either. Perhaps I was misreading my debug print statements, which also print the current Environment.CurrentManagedThreadId. Normally the highest ids I see are 7-8. With the WhenAll approach, the ids climbed steadily up into the 80s before I killed the process. I suppose my report could be inaccurate if the managed thread id doesn't reliably indicate which native thread is running and could tick higher even when reusing the same native thread at a later point, but poking around the reference source sure seems to imply that it's referring to a native thread.

ljw1004 posted:

Stepping back, let's examine the problem from a theoretical angle. By the time your algorithm completes you will have issued "N" total calls to the remote resource and they will have completed. No amount of parallelization will ever change this total number "N".

You believe that your remote resource can typically handle a higher rate of concurrent requests than just simply doing those N requests one after the other. This is a reasonable belief and true of most servers. So what is the optimum number of parallel requests? Impossible to say. We don't know if the server will be fielding requests from other clients at the same time. We don't know if the server will reject requests if it's busy, or queue them up. It's likely not a good use of resources to implement our own rate-adjusting throttling mechanism to determine on-the-fly the optimum number of requests (like the throttler in TCP/IP does).

This detailing of my situation is pretty accurate. I also do not know the optimum number of parallel requests, nor the behavior of the service when overloaded. It is undocumented, as far as I can tell. I suspect that the number of allowable requests is greater than 1, so some form of parallelization seemed to be plausible to reduce the total overall time.

ljw1004 posted:

The best practical answer, one that works great in most situations, is just pick a number. Let's say "3" parallel requests. If the server takes time "t" for each request, then you'll finish in about N*t/3.


Here are two idioms for throttling async stuff. The first looks stupid, but it's clear and works and doesn't need new abstractions and is robust, and that in my book is a good pattern :) If you run it all on the UI thread then you don't even need to worry about using concurrency-safe data structures. For instance, you can use a normal (non-concurrent) queue, and your ProcessWorkItem method can happily add things to an ObservableCollection that's databound to the UI.

code:
async void Button1_Click() {
    var t1 = WorkerAsync();
    var t2 = WorkerAsync();
    var t3 = WorkerAsync();
    await Task.WhenAll(t1,t2,t3);
}

async Task WorkerAsync() {
   while (queue.Count>0) {
      var i = queue.Dequeue();
      ProcessWorkItem(i);
   }
}
In your case, you described your problem recursively, but I've implemented it non-recursively. Just make a queue of all outstanding folders that have yet to be processed. In your "ProcessWorkItem" function, you can retrieve all child folders, add them to the queue, then retrieve all child files.

Yes, this seems like the most obvious approach; however, I believe it'll be preferable to run queue consumption on another thread. (A detail I left out of my example is that this code is already running on a background Task thread, not on the UI thread; the reason is UI responsiveness. The OC<T> in my example isn't actually databound in the UI. Since this is really more of a patterns question, those details didn't seem super relevant.)

ljw1004 posted:

Here's another solution for throttling based on the "Dataflow" library from Microsoft. Dataflow is powerful and you can wire it up in more sophisticated ways. It runs the callbacks on the threadpool. But again, you're still turning the recursive thing into something queue-based.

code:
private static async Task LotsOfWorkAsync()
{
    ITargetBlock<Folder> throttle = null;
    throttle = Throttle<Folder>(
        async folder =>
        {
            // handle the folder by posting further folders to the throttle
        },
        maxParallelism: 3);
    throttle.Post(top_level_folder);

    // Signal that we're done enqueuing work.
    // Actually, I don't know where best to call this function.
    // You'd call it when there are no more items left to enqueue.
    // I don't know how to figure that out cleanly.
    //throttle.Complete();

    // Don't complete this async method until the queue is fully processed.
    await throttle.Completion;
}

private static ITargetBlock<T> Throttle<T>(Func<T, Task> worker, int maxParallelism)
{
    var block = new ActionBlock<T>(worker,
        new ExecutionDataflowBlockOptions {
            MaxDegreeOfParallelism = maxParallelism,
        });
    return block;
}

Thanks for the advice. It looks like a queue is the way to go in any case.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

RICHUNCLEPENNYBAGS posted:

So for people who do this, how do you handle different branches working with modified database schemas?

I do a lot of devops stuff these days, so here's my take:

Ideally, your work in branches is short-lived and is frequently merged into an integration branch. Every checkin to the integration branch should trigger a build, and (depending on other factors) the build is either immediately deployed to an integration server or deployed on demand/on a schedule. That's where the testing happens. By its very nature, something that you're actively developing isn't ready for testing.

For long-running feature branches, you can host different versions of the site under different URLs. If there's a database involved and the schema is evolving, there's no way around it: You'll need different copies of the database. That always sucks.

When you're working with two developers or teams collaborating on different aspects of an application that need to communicate, it's really important to define what the public API is going to look like up front. The API will obviously evolve during the development process, but it should be well defined enough that you're only tweaking an interface that's shared between the two developers/teams, not making huge breaking changes every day.

And, of course, all of this should be covered by suites of unit tests that run after every build, and integration tests that run after every deployment.

Luigi Thirty
Apr 30, 2006

Emergency confection port.

candy for breakfast posted:

Nthing everyone else here: WPF/MVVM is not easy. It took all of us some time to understand it. Once you hit that 'aha' moment then you realize it turns into something spectacular and never want to go back to winforms.

I helped someone convert their example program to proper MVVM in another thread. Granted the dealer hitting/staying logic doesn't work, but it's a simple example of INotifyPropertyChanged, RelayCommand, and datacontext implementations.

Confirmed that nothing I have ever done has made me feel stupider than trying to write this in WPF.

ljw1004
Jan 18, 2005

rum

Factor Mystic posted:

I don't know either. Perhaps I was misreading my debug print statements, which also print the current Environment.CurrentManagedThreadId. Normally the highest ids I see are 7-8. With the WhenAll approach, the ids climbed steadily up into the 80s before I killed the process.

I suppose my report could be inaccurate if the managed thread id doesn't reliably indicate which native thread is running and could tick higher even when reusing the same native thread at a later point, but poking around the reference source sure seems to imply that it's referring to a native thread.

I'm not sure which reference source you're looking at? The .NET reference source only says that the getter of CurrentManagedThreadId is "extern". In any case, when you look at the debugger window, you usually see higher CurrentManagedThreadId values than there are threads. This proves either that your way of testing isn't accurate or that the VS debugger fails to show all threads. I reckon the way of testing isn't accurate :)

quote:

Yes, this seems like the most obvious approach, however I believe it'll be preferable to run queue consumption on another thread. (A detail which I left out of my example case is that this code is already running on a background Task thread, not on the UI thread. Since this is really more of a patterns question, it didn't seem super relevant. The reason is UI responsiveness. The OC<T> in my example is not databound in the UI. Not relevant details for a pattern question).

I reckon there's almost never a good reason to run your work on a background thread, and lots of bad reasons. Here are slides from a recent talk I gave to the Windows XAML team:

FACT: doing asynchronous work on the UI thread, i.e. using the await operator, will NEVER harm UI responsiveness, not even on the lowest-power device you'll find. The only things that harm responsiveness are when you have "10s" or "100s" concurrency (e.g. if you throttle up to 100 concurrent requests). Or when you have code which blocks a thread.

The only code that blocks a thread is (1) calling blocking APIs - in which case rewrite your code to use async APIs; or (2) doing a CPU-bound computational kernel - in which case do this small computational kernel inside Task.Run.

What you can end up with is an architecture where the orchestration of the app is done entirely on a single thread (i.e. all the awaiting, data-binding, app-logic, ...). And only the small computational inner-loops are done on the threadpool using Task.Run. This architecture will have fewer concurrency bugs and easier-to-read code.
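
Concretely, the shape I'm describing is something like this (sketch; httpClient, ParseHugeDocument and Items are placeholders):

code:
// Sketch: orchestration stays on the UI thread; only the CPU-bound kernel
// goes to the threadpool via Task.Run.
async Task RefreshAsync()
{
    var raw = await httpClient.GetStringAsync(url);            // async I/O, no thread blocked

    var parsed = await Task.Run(() => ParseHugeDocument(raw)); // CPU-bound inner loop on the pool

    foreach (var item in parsed)
        Items.Add(item);   // back on the UI thread, so touching databound state is fine
}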

Factor Mystic
Mar 20, 2006

Baby's First Post-Apocalyptic Fiction

ljw1004 posted:

I'm not sure which reference source you're looking at? The .NET reference source only says that the getter of CurretManagedThreadId is "extern". In any case, when you look at the debugger window, you usually see higher CurrentManagedThreadId than there are threads. This proves either that your way of testing isn't accurate or that the VS debugger fails to show all threads. I reckon the way of testing isn't accurate :)

Ok, fair enough... I acknowledge that eyeballing thread id's is not a reliable way of reporting the number of threads in use by a program.

I restored the WhenAll code from before, turned on the Threads window, and set a conditional breakpoint to break if the managed thread id >= 80. Now we can get a more accurate picture of what was actually happening when I said I thought the number of threads was ballooning

[Threads window screenshot]

Ok I was a little off.

ljw1004 posted:

I reckon there's almost never a good reason to run your work on a background thread, and lots of bad reasons. Here are slides from a recent talk I gave to the Windows XAML team:


FACT: doing asynchronous work on the UI thread, i.e. using the await operator, will NEVER harm UI responsiveness, not even on the lowest-power device you'll find. The only things that harm responsiveness are when you have "10s" or "100s" concurrency (e.g. if you throttle up to 100 concurrent requests). Or when you have code which blocks a thread.

The only code that blocks a thread is (1) calling blocking APIs - in which case rewrite your code to use async APIs; or (2) doing a CPU-bound computational kernel - in which case do this small computational kernel inside Task.Run.

What you can end up with is an architecture where the orchestration of the app is done entirely on a single thread (i.e. all the awaiting, data-binding, app-logic, ...). And only the small computational inner-loops are done on the threadpool using Task.Run. This architecture will have fewer concurrency bugs and easier-to-read code.

I know who you are, and I appreciate your time replying. I also understand that what you're saying SHOULD be the case, and I SHOULDN'T need to run this op on a background thread to avoid UI glitchiness. And in fairness, the code has gone through several iterations and improvements from when I first noticed the issues, so to make sure I wasn't wasting everyone's time I went back and cloned the current background-thread method (accepts a TaskCompletionSource so the UI-caller can await it anyway) to a normal "async Task<T>" method, and awaited it like normal. Glitchy. It's kind of hard to put meaning behind that word... it's mostly related to touch latency, I suppose. As in, swiping pivot headers on a rhythm will be slower/unresponsive than when using the background-thread approach.

I feel like there's an obvious explanation for this, and that is that it's not about the awaitables, it's that there are unexpectedly long blocking methods acting up here. I can't really nail it down (and this particular aspect of the program has already been "solved", so it's not a showstopper), but I do have two more data points:

1- Dynamic objects are involved. They're the actual response objects from my slow remote resource API. I'm plucking properties out of them into a normal statically typed class for the layer of the app we've been talking about.

2- The ui/touch latency is MUCH higher when using the WhenAll approach. To me this implies some kind of resource starvation scenario.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

GrumpyDoctor posted:

edit: There had better be a better way to do this, as I am apparently unable to actually catch this exception. No matter where I put try/catch blocks, the exception slips through and brings my app down.

Can you post some code?

Are you explicitly catching SocketException, or catching any Exception? If you're just catching a SocketException and the TcpListener is on an async call, it's possible that an AggregateException is being thrown instead.
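
e.g. the difference in a sketch (acceptTask standing in for whatever Task wraps the accept call):

code:
// Sketch: blocking on the Task wraps the failure in an AggregateException,
// whereas awaiting it rethrows the SocketException directly.
try
{
    acceptTask.Wait();                     // or acceptTask.Result
}
catch (AggregateException ex)
{
    var inner = ex.Flatten().InnerException;
    if (!(inner is SocketException))
        throw;
    // swallow: the listener was stopped while accepting
}

// by contrast:  var client = await acceptTask;   // throws the SocketException itself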

EDIT: On second thought, this is probably unlikely. Back to line 1 - got any code?

Bognar fucked around with this message at 14:38 on Sep 8, 2014

Fuck them
Jan 21, 2011

and their bullshit
:yotj:
Starting to not feel the linq right now.

I kept getting "don't do this For Each because the list doesn't have any elements."

Ok! I'll throw in a drat if statement to make sure the list has a count greater than zero.

So on the line where I go "If (items.Count > 0) Then"

"ArgumentException not handled by user code
Count must have a non-negative value."

I have no clue what is going on. I'm going to just wrap it in a try catch, but honestly, what the hell is going on?

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug
Can you post a code sample of what you are doing?
