|
GrumpyDoctor posted:What have you tried, and what's not working? I was treating c# like javascript and it turns out if I do a Post instead of a Get I can JSON.stringify() my object and send it up as post data, and then it's all sweet from there. Can't believe I wasted like four hours on this poo poo.
|
# ? Sep 4, 2014 05:14 |
|
|
mortarr posted:I was treating c# like javascript and it turns out if I do a Post instead of a Get I can JSON.stringify() my object and send it up as post data, and then it's all sweet from there. Can't believe I wasted like four hours on this poo poo. Json.net is an awesome library when working with objects that you need to serialize to json. You serialize it, load it into the request stream, and off you go.
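A minimal sketch of the serialize-then-send shape being described. The payload type is hypothetical, and System.Text.Json stands in for Json.NET here just to keep the snippet self-contained (Json.NET's equivalent call would be JsonConvert.SerializeObject):

```csharp
using System;
using System.Text;
using System.Text.Json;

// Hypothetical payload, for illustration only.
var payload = new { Name = "widget", Quantity = 3 };

// Serialize the object to a JSON string...
string json = JsonSerializer.Serialize(payload);

// ...then the bytes are what you'd write to the request stream
// (e.g. HttpWebRequest.GetRequestStream()) on a POST.
byte[] body = Encoding.UTF8.GetBytes(json);

Console.WriteLine(json); // {"Name":"widget","Quantity":3}
```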
|
# ? Sep 4, 2014 05:42 |
|
mortarr posted:Can't believe I wasted like four hours on this poo poo. Welcome to programming. Learn to love simple solutions to problems.
|
# ? Sep 4, 2014 06:18 |
|
mortarr posted:I was treating c# like javascript and it turns out if I do a Post instead of a Get I can JSON.stringify() my object and send it up as post data, and then it's all sweet from there. Can't believe I wasted like four hours on this poo poo. I've run into this before. My request should definitely be a GET, but the data I'm sending is complex (an array of objects with properties) and just doesn't work with GET, but does work with POST. When I say "doesn't work" I mean the IEnumerable arg is null. So at the time I just called it a day and used POST. But...what up with that?
|
# ? Sep 4, 2014 06:46 |
|
epalm posted:I've run into this before. My request should definitely be a GET, but the data I'm sending is complex (an array of objects with properties) and just doesn't work with GET, but does work with POST. When I say "doesn't work" I mean the IEnumerable arg is null. Uhm, GET is for retrieving data? POST is for sending? Maybe I am missing something? EDIT: Or do you need to send a query or something? What I usually do is use the URL to create the query. Mr Shiny Pants fucked around with this message at 11:24 on Sep 4, 2014 |
# ? Sep 4, 2014 11:21 |
|
In both scenarios data is being "sent", whether I GET or POST /page.html?searchTerm=poop The idea behind GET and POST is that GET should have no server-side side effects (search for something, get back results), whereas POST may have server-side side effects (save this record). My question is why can't I use GET and send a complex type? For example say I'm running a search with a list of search terms.
|
# ? Sep 4, 2014 18:32 |
|
epalm posted:In both scenarios data is being "sent", whether I GET or POST /page.html?searchTerm=poop Well according to http://stackoverflow.com/questions/2300871/how-to-take-an-array-of-parameters-as-get-post-in-asp-net-mvc you can construct your HttpGet action method with a string[] as an input param, so each parameter included in the query string is an element of the array. You have to name each parameter the same thing (?searchterm=Vegetables&searchterm=Fruits&searchterm=Dairy). As for complex objects, I mean we're still dealing with GET here which uses the query string. You have to encode your complex data into a string of some kind in such a way that you can decode it in the action method on the server side (at least, I'm pretty sure that's the case; I'm fairly new to web dev). Che Delilas fucked around with this message at 19:09 on Sep 4, 2014 |
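The repeated-parameter convention can be sketched without spinning up ASP.NET. This is a hand-rolled parse of the query string from the post above, just to show how ?searchterm=A&searchterm=B naturally maps to an array (MVC's model binder does this for you when the action method takes a string[]):

```csharp
using System;
using System.Linq;

// The query string from the post, with each value named the same thing.
string query = "searchterm=Vegetables&searchterm=Fruits&searchterm=Dairy";

// Split into key/value pairs and collect every value whose key matches.
string[] searchTerms = query
    .Split('&')
    .Select(pair => pair.Split('='))
    .Where(kv => kv[0] == "searchterm")
    .Select(kv => Uri.UnescapeDataString(kv[1]))
    .ToArray();

Console.WriteLine(string.Join(", ", searchTerms)); // Vegetables, Fruits, Dairy
```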
# ? Sep 4, 2014 19:06 |
|
epalm posted:My question is why can't I use GET and send a complex type? For example say I'm running a search with a list search terms. I don't even understand if you're talking about client-side or server-side, and which language you're writing in... Sure you can send a complex type in a GET url.
|
# ? Sep 4, 2014 19:30 |
|
JavaScript code:
C# code:
If I change type: 'POST' to type: 'GET', and [HttpPost] to [HttpGet], then MyMethod's model argument will inexplicably be null.
|
# ? Sep 4, 2014 20:13 |
|
epalm posted:
What query string do you get when you run that as HttpGet? What that's saying to me is that the output of JSON.stringify({model:rows}) is a string that doesn't work as a URI in a browser's address bar. Edit: As far as what you 'should' be using, GET's primary advantage is that it generates that URI that can be copy/pasted, so if a user wants to bookmark the 4th page of a set of search results or send that as a link in email, they can. If you don't need that level of convenience, POST is perfectly acceptable. Che Delilas fucked around with this message at 20:34 on Sep 4, 2014 |
# ? Sep 4, 2014 20:30 |
|
Generally, you shouldn't JSON.stringify if you're using jQuery AJAX to GET. GET commands are (de)serialized as application/x-www-form-urlencoded because they have to be part of the query string (GET request bodies are ignored per the HTTP/1.1 spec). POST requests, on the other hand, can have anything in the body and will usually work on most servers as long as the Content-Type header is correct (e.g. application/json). Unfortunately, the default MVC model binder is rather terrible at deserializing complex objects from a query string. A lot of things have to fall into place to make it work, and it's generally not worth figuring it out over just POSTing.
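The difference Bognar describes can be shown in a few lines: the same data either flattened into a form-urlencoded query string (all a GET can carry) or serialized into a JSON body (what a POST can carry with Content-Type: application/json). A sketch, using System.Text.Json purely to keep it self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

var data = new Dictionary<string, string> { ["searchTerm"] = "poo poo", ["page"] = "4" };

// GET: data must be flattened into the query string, form-urlencoded.
string queryString = string.Join("&",
    data.Select(kv => $"{Uri.EscapeDataString(kv.Key)}={Uri.EscapeDataString(kv.Value)}"));

// POST: the same data can travel intact as a JSON body.
string jsonBody = JsonSerializer.Serialize(data);

Console.WriteLine(queryString); // searchTerm=poo%20poo&page=4
Console.WriteLine(jsonBody);    // {"searchTerm":"poo poo","page":"4"}
```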
|
# ? Sep 4, 2014 21:40 |
|
Bognar posted:Generally, you shouldn't JSON.stringify if you're using jQuery AJAX to GET. GET commands are (de)serialized as application/x-www-form-urlencoded because they have to be part of the query string (GET request bodies are ignored per the HTTP/1.1 spec). POST requests, on the other hand, can have anything in the body and will usually work on most servers as long as the Content-Type header is correct (e.g. application/json). Missed opportunity to
|
# ? Sep 4, 2014 22:29 |
|
ljw1004 posted:It looks like TextFieldParser doesn't actually buy you much in this situation, and is more trouble than it's worth. I'd do it like this: Thanks for your speedy response; I was just heading out of the office last night when I wrote that (it was 9pm my time) and haven't had a chance to revisit til now. That's... kind of the conclusion I came to myself after doing some more reading: if I'm dealing with unstructured data I might as well not be using a structured format like ReadFields. What makes this data even better is that the number of columns isn't actually fixed, which is why I had to go with the string literal approach I was taking. It should still work moving forward, I'm hoping, and luckily converting to ReadLine doesn't really affect my core logic of what I'm doing with the columns, just the While Not loop of reading it in.
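The actual code here didn't survive, but the ReadLine-instead-of-ReadFields approach being described might look something like this sketch (input format and names are hypothetical; a StreamReader over the real file takes the place of the StringReader):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical input: rows with a variable number of columns.
string raw = "id,name\n1,widget,extra\n2,gadget";

var rows = new List<string[]>();
using (var reader = new StringReader(raw)) // StreamReader for a real file
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // Each row simply keeps however many columns it happens to have,
        // which TextFieldParser.ReadFields makes awkward.
        rows.Add(line.Split(','));
    }
}

Console.WriteLine(rows.Count); // 3
```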
|
# ? Sep 4, 2014 23:15 |
|
Bognar posted:Generally, you shouldn't JSON.stringify if you're using jQuery AJAX to GET. GET commands are (de)serialized as application/x-www-form-urlencoded because they have to be part of the query string Should I feel bad that my strong instinct in this case is to make a request of the form code:
code:
I always get confused over the exact correct uri encoding/decoding stuff. I followed what I read here: http://stackoverflow.com/questions/86477/does-c-sharp-have-an-equivalent-to-javascripts-encodeuricomponent
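Per the Stack Overflow link above, the closest BCL analog of JavaScript's encodeURIComponent is Uri.EscapeDataString: it percent-encodes everything outside the RFC 3986 unreserved set, so the result is safe inside a query-string value. A quick sketch:

```csharp
using System;

// Spaces, ampersands, and equals signs all get percent-encoded,
// so the value can't be confused with query-string structure.
string encoded = Uri.EscapeDataString("a b&c=d");

Console.WriteLine(encoded); // a%20b%26c%3Dd
```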
|
# ? Sep 4, 2014 23:15 |
|
ljw1004 posted:...
|
# ? Sep 4, 2014 23:40 |
|
JSON in the URL is not just marginally less readable, it's incomprehensible. Before encoding, hmm, doesn't look so bad code:
code:
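The stripped code blocks presumably showed the before/after; the effect is easy to reproduce. A small hypothetical payload reads fine as raw JSON, then percent-encoding turns it into soup:

```csharp
using System;

// Readable before encoding...
string json = "{\"model\":[{\"id\":1,\"term\":\"fruit\"}]}";

// ...incomprehensible after, once every brace, quote, and colon is escaped.
string inUrl = "/search?q=" + Uri.EscapeDataString(json);

Console.WriteLine(inUrl);
// /search?q=%7B%22model%22%3A%5B%7B%22id%22%3A1%2C%22term%22%3A%22fruit%22%7D%5D%7D
```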
|
# ? Sep 4, 2014 23:43 |
|
Mr. Crow posted:Missed opportunity to I am ashamed.
|
# ? Sep 5, 2014 01:53 |
|
What's a good technique to parallelize a recursive, async retrieval task? Bear with me as I haven't fully thought this out. I'm loading hierarchical items (folders of items & folders) from a remote resource, and there's wait time associated with getting a folder's children. My current solution works perfectly fine... it's basically naively recursing folders. Looks something like this (substantially simplified):code:
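The simplified code didn't survive the page, but the "naively recurse folders" shape described probably looked roughly like this sketch, with a fake in-memory hierarchy standing in for the remote resource:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Fake hierarchy standing in for the remote resource.
var children = new Dictionary<string, string[]>
{
    ["root"] = new[] { "a", "b" },
    ["a"]    = new[] { "a1" },
    ["b"]    = Array.Empty<string>(),
    ["a1"]   = Array.Empty<string>(),
};

async Task<string[]> GetChildrenAsync(string folder)
{
    await Task.Delay(1); // simulated network wait per folder
    return children[folder];
}

var visited = new List<string>();

async Task LoadAsync(string folder)
{
    visited.Add(folder);
    foreach (var child in await GetChildrenAsync(folder))
        await LoadAsync(child); // one request at a time, depth-first
}

await LoadAsync("root");
Console.WriteLine(visited.Count); // 4
```

The pain point is visible in the inner loop: each await finishes completely before the next request starts, so total time is the sum of every round trip.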
Maybe some kind of producer/consumer queue, capped with a particular degree of parallelism? Then push folders into it, get back children at some point in the future, evaluate them into more queue insertions? That's getting more complicated. Factor Mystic fucked around with this message at 04:25 on Sep 5, 2014 |
# ? Sep 5, 2014 04:23 |
|
Edit: I can't help you without more specifics.
sarehu fucked around with this message at 04:52 on Sep 5, 2014 |
# ? Sep 5, 2014 04:45 |
|
Factor Mystic posted:What's a good technique to parallelize a recursive, async retrieval task? I'm loading hierarchical items (folders of items & folders) from a remote resource, and there's wait time associated with getting a folder's children.

"Balloon the number of threads" is a surprise to me. The await operator doesn't create new threads. Task.WhenAll doesn't create new threads. The general principle is that no keywords in C# create new threads, and only a few specific clearly-identified functions like "Task.Run" will allocate anything on the threadpool. So where did your mysterious ballooning threads come from? I don't know. There might have been other threads being used under the hood, due to misbehaved APIs, but it's hard to know without a debugger.

Stepping back, let's examine the problem from a theoretical angle. By the time your algorithm completes you will have issued "N" total calls to the remote resource and they will have completed. No amount of parallelization will ever change this total number "N". You believe that your remote resource can typically handle a higher rate of concurrent requests than just simply doing those N requests one after the other. This is a reasonable belief and true of most servers.

So what is the optimum number of parallel requests? Impossible to say. We don't know if the server will be fielding requests from other clients at the same time. We don't know if the server will reject requests if it's busy, or queue them up. It's likely not a good use of resources to implement our own rate-adjusting throttling mechanism to determine on-the-fly the optimum number of requests (like the throttler in TCP/IP does).

The best practical answer, one that works great in most situations, is just pick a number. Let's say "3" parallel requests. If the server takes time "t" for each request, then you'll finish in about N*t/3. Here are two idioms for throttling async stuff.
The first looks stupid, but it's clear and works and doesn't need new abstractions and is robust, and that in my book is a good pattern. If you run it all on the UI thread then you don't even need to worry about using concurrency-safe data structures. For instance, you can use a normal (non-concurrent) queue, and your ProcessWorkItem method can happily add things to an ObservableCollection that's databound to the UI. code:
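The post's code block didn't survive, so here is a sketch of that first idiom under stated assumptions: a shared queue of folders drained by a fixed number of worker loops. The fake fetch and the tree shape are made up; and since this sketch runs on the thread pool rather than a UI thread, it uses concurrent collections where the post notes a plain Queue would do on the UI thread:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var queue = new ConcurrentQueue<string>();
queue.Enqueue("root");

var results = new ConcurrentBag<string>();
int active = 0; // workers currently mid-fetch

// Fake remote call: folders with short names have two children each.
async Task<string[]> GetChildrenAsync(string folder)
{
    await Task.Delay(1);
    return folder.Length < 6
        ? new[] { folder + "x", folder + "y" }
        : Array.Empty<string>();
}

async Task WorkerAsync()
{
    while (true)
    {
        if (!queue.TryDequeue(out var folder))
        {
            // Queue drained and nobody mid-fetch: all done.
            if (Volatile.Read(ref active) == 0) return;
            await Task.Delay(1); // another worker may still enqueue more
            continue;
        }
        Interlocked.Increment(ref active);
        results.Add(folder);
        foreach (var child in await GetChildrenAsync(folder))
            queue.Enqueue(child);
        Interlocked.Decrement(ref active);
    }
}

// "Just pick a number": three parallel worker loops.
await Task.WhenAll(WorkerAsync(), WorkerAsync(), WorkerAsync());
Console.WriteLine(results.Count); // 7
```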
Here's another solution for throttling based on the "Dataflow" library from Microsoft. Dataflow is powerful and you can wire it up in more sophisticated ways. It runs the callbacks on the threadpool. But again, you're still turning the recursive thing into something queue-based. code:
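The Dataflow code block is also lost, and Dataflow ships as a separate package, so here is a third throttling idiom not shown in the post, plainly a different technique: SemaphoreSlim gating a flat batch of requests to three at a time. (The flat work list is a simplification; the recursive enqueueing above still applies.)

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Allow at most 3 concurrent "remote calls".
var throttle = new SemaphoreSlim(3);
int current = 0;
bool exceeded = false; // sanity flag: did we ever go over the cap?

async Task<int> FetchAsync(int id)
{
    await throttle.WaitAsync();
    try
    {
        if (Interlocked.Increment(ref current) > 3) exceeded = true;
        await Task.Delay(10); // simulated remote call
        return id * 2;
    }
    finally
    {
        Interlocked.Decrement(ref current);
        throttle.Release();
    }
}

int[] results = await Task.WhenAll(Enumerable.Range(0, 20).Select(FetchAsync));
Console.WriteLine(results.Length); // 20
```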
ljw1004 fucked around with this message at 05:41 on Sep 5, 2014 |
# ? Sep 5, 2014 05:19 |
|
If I need a structure complex enough to need json to deal with it I would start strongly considering POST. The big wall you can run into is max url length -- which can get set by intermediaries and security software as well as things you might be able to understand [browsers] and fix [server settings]. That said, for: code:
code:
From the controller side, this will work with code:
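The controller snippet itself is lost, but the interesting step it would perform can be sketched standalone: take the percent-encoded query value, un-escape it, and deserialize the JSON. System.Text.Json stands in for Json.NET's JsonConvert.DeserializeObject<T> to keep this self-contained, and in a real action method the framework has already un-escaped the value for you:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// The raw query value: [{"Id":1},{"Id":2}] after percent-encoding.
string rawQueryValue = "%5B%7B%22Id%22%3A1%7D%2C%7B%22Id%22%3A2%7D%5D";

string json = Uri.UnescapeDataString(rawQueryValue);
var rows = JsonSerializer.Deserialize<List<Dictionary<string, int>>>(json);

Console.WriteLine(rows.Count); // 2
```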
|
# ? Sep 6, 2014 09:39 |
|
If I've got a spun-off async operation blocking on a call to TcpListener.AcceptTcpClient(), and something else calls TcpListener.Stop(), a SocketException is thrown with the message "A blocking operation was interrupted by a call to WSACancelBlockingCall". That seems like it's exactly what I want. Can I just swallow this exception? It feels like there should be a better way to do this. edit: There had better be a better way to do this, as I am apparently unable to actually catch this exception. No matter where I put try/catch blocks, the exception slips through and brings my app down. raminasi fucked around with this message at 19:16 on Sep 6, 2014 |
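One sketch of a clean shutdown, under the assumption that the uncatchable exception comes from the accept running where no try/catch can see it: use the async accept and await it inside the try/catch, so the fault surfaces on the awaiting code. Which exception type you observe (SocketException vs ObjectDisposedException) varies by runtime version, so this catches both:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

var listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();

// Start the accept, then stop the listener to interrupt it.
var acceptTask = listener.AcceptTcpClientAsync();
listener.Stop();

bool shutDownCleanly = false;
try
{
    await acceptTask; // the fault surfaces here, where it can be caught
}
catch (SocketException) { shutDownCleanly = true; }
catch (ObjectDisposedException) { shutDownCleanly = true; }

Console.WriteLine(shutDownCleanly);
```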
# ? Sep 6, 2014 18:13 |
|
Why does ASP running off Visual Studio (which I guess is using IISExpress?) not allow connections from remote hosts? If I'm writing a service-based thing with WebAPI I absolutely must "deploy" it somewhere for my co-workers (who are, say, writing in a completely different language, and don't have VS or even Windows for that matter) to play with it? Edit: before ASP I wrote a fair bit of django, and you can just start up the dev server with python manage.py runserver 0.0.0.0:8000 and blamo, publicly available web server to play/test with. They do put all sorts of warnings about not using the dev server in production. Is MS just making sure the ASP/VS/IISExpress dev server is never used in prod? If so, thanks for holding my hand, but drat that's annoying. epswing fucked around with this message at 19:19 on Sep 6, 2014 |
# ? Sep 6, 2014 19:13 |
|
You can, but I'd set up a server that you push to for integration. That way you can bring your local IIS Express up and down and your co-workers don't need to worry if you left the API up or not. If you can't get another box, deploy to your local IIS instance and leave that running over port 80.
|
# ? Sep 6, 2014 19:26 |
|
You can install IIS and have it use that instead. That's mostly what I use although I'm not sure what all the differences are.
|
# ? Sep 6, 2014 19:28 |
|
By ASP, do you mean ASP.NET? Just install the full version of IIS and you'll be fine. It should be a checkbox in the "Windows Features" dialog. The IIS you get with any modern Windows has all the same features as the real server version, except it is limited to 10 concurrent connections on a client OS. IIS Express is far more barebones, though likely still fine for local development of simple web API stuff. But yeah, if you are working in any professional context, set up an integration server for other teams to work against. Nobody wants your ongoing development to keep breaking their apps whenever you change the interfaces or whatnot.
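For what it's worth, IIS Express itself can also be made to accept remote connections, though it's fiddly. The usual recipe (the port number here is just an example) is to widen the site binding in IIS Express's applicationhost.config (in your Documents\IISExpress\config folder, or the .vs\config folder for newer VS versions) from localhost to any host:

```xml
<!-- Before: only localhost is accepted -->
<binding protocol="http" bindingInformation="*:8080:localhost" />

<!-- After: accept any hostname on port 8080 -->
<binding protocol="http" bindingInformation="*:8080:*" />
```

You then typically also need a URL reservation (netsh http add urlacl url=http://*:8080/ user=Everyone from an elevated prompt) and a firewall exception. The "Bad Request - Invalid Hostname" 400 mentioned below is exactly what the localhost-only binding produces when a remote hostname comes in.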
|
# ? Sep 6, 2014 23:06 |
|
quote:gariig: The steps in that apparently don't work (for me). The page just doesn't load. EssOEss: Yep, ASP.NET. I understand what you're all saying. And I'll get IIS installed if I truly must do so. But say I didn't have permission to install IIS. Or say I didn't want to because I'm working on a pretty tiny SSD and didn't want to give up the space. Or say I wanted VS to debug a request as it came in. I'm stuck right where this guy is (used SPI Port Forward, getting Bad Request - Invalid Hostname HTTP Error 400): http://stackoverflow.com/questions/22561155/how-to-connect-to-visual-studio-server-remotely. Others seem to have used SPI Port Forward successfully, so I'm not sure what I'm doing wrong. Theoretically I should just be able to forward a port, and VS shouldn't be able to tell the difference between a local and a remote request, right? epswing fucked around with this message at 01:57 on Sep 7, 2014 |
# ? Sep 7, 2014 01:50 |
|
epalm posted:I understand what you're all saying. I don't think you do. Everyone is saying that there should be a box whose sole responsibility is hosting a dev version of your software. You can debug a remote process by attaching the debugger to it.
|
# ? Sep 7, 2014 02:54 |
|
Right. The reason for this separate box is not to make life easy for you, it is to make life easy for the colleagues who have to work with your stuff. Installing real IIS on your PC is just an answer to your question, it is not the solution to your problem.
|
# ? Sep 7, 2014 08:53 |
|
EssOEss posted:Right. The reason for this separate box is not to make life for you easy, it is to make life easy for the colleagues who have to work with your stuff. So for people who do this, how do you handle different branches working with modified database schemas?
|
# ? Sep 7, 2014 15:51 |
|
RICHUNCLEPENNYBAGS posted:So for people for do this how do you handle different branches working with modified database schemas? I'd spin up a VM for each branch to deploy to. What problem are you trying to solve? EDIT: VVVV That's another way to do it. Also, deploying to Azure/AWS/etc would work gariig fucked around with this message at 17:56 on Sep 7, 2014 |
# ? Sep 7, 2014 17:19 |
|
Alternatively, you could simply deploy each branch as its own host header or port in IIS if you are limited.
|
# ? Sep 7, 2014 17:23 |
|
ljw1004 posted:"Balloon the number of threads" is a surprise to me. The await operator doesn't create new threads. Task.WhenAll doesn't create new threads. The general principle is that no keywords in C# create new threads, and only a few specific clearly-identified functions like "Task.Run" will allocate anything on the threadpool. So where did your mysterious ballooning threads come from? I don't know. There might have been other threads being used under the hood, due to misbehaved APIs, but it's hard to know without a debugger. I don't know either. Perhaps I was misreading my debug print statements, which also print the current Environment.CurrentManagedThreadId. Normally the highest ids I see are 7-8. With the WhenAll approach, the ids climbed steadily up into the 80's before I killed the process. I suppose my report could be inaccurate if the managed thread id doesn't reliably indicate which native thread is running and could tick higher even when reusing the same native thread at a later point, but poking around the reference source sure seems to imply that it's referring to a native thread. ljw1004 posted:Stepping back, let's examine the problem from a theoretical angle. By the time your algorithm completes you will have issued "N" total calls to the remote resource and they will have completed. No amount of parallelization will ever change this total number "N". This detailing of my situation is pretty accurate. I also do not know the optimum number of parallel requests, nor the behavior of the service when overloaded. It is undocumented, as far as I can tell. I suspect that the number of allowable requests is greater than 1, so some form of parallelization seemed to be plausible to reduce the total overall time. ljw1004 posted:The best practical answer, one that works great in most situations, is just pick a number. Let's say "3" parallel requests. If the server takes time "t" for each request, then you'll finish in about N*t/3. 
Yes, this seems like the most obvious approach; however, I believe it'll be preferable to run queue consumption on another thread, for UI responsiveness. (A detail I left out of my example is that this code is already running on a background Task thread, not on the UI thread; since this is really more of a patterns question, it didn't seem super relevant. The OC<T> in my example is not databound in the UI.) ljw1004 posted:Here's another solution for throttling based on the "Dataflow" library from Microsoft. Dataflow is powerful and you can wire it up in more sophisticated ways. It runs the callbacks on the threadpool. But again, you're still turning the recursive thing into something queue-based. Thanks for the advice. It looks like a queue is the way to go in any case.
|
# ? Sep 7, 2014 18:08 |
|
RICHUNCLEPENNYBAGS posted:So for people for do this how do you handle different branches working with modified database schemas? I do a lot of devops stuff these days, so here's my take: Ideally, your work in branches is short-lived and is frequently merged into an integration branch. Every checkin to the integration branch should trigger a build and (depending on other factors) is either immediately deployed to an integration server, or deployed to an integration server on demand/on a schedule. That's where the testing happens. By its very nature, something that you're actively developing isn't ready for testing. For long-running feature branches, you can host different versions of the site under different URLs. If there's a database involved and the schema is evolving, there's no way around it: You'll need different copies of the database. That always sucks. When you're working with two developers or teams collaborating on different aspects of an application that need to communicate, it's really important to define what the public API is going to look like up front. The API will obviously evolve during the development process, but it should be well defined enough that you're only tweaking an interface that's shared between the two developers/teams, not making huge breaking changes every day. And, of course, all of this should be covered by suites of unit tests that run after every build, and integration tests that run after every deployment.
|
# ? Sep 7, 2014 18:46 |
|
candy for breakfast posted:Nthing everyone else here: WPF/MVVM is not easy. It took all of us some time to understand it. Once you hit that 'aha' moment then you realize it turns into something spectacular and never want to go back to winforms. Confirmed that nothing I have ever done has made me feel stupider than trying to write this in WPF.
|
# ? Sep 7, 2014 19:18 |
|
Factor Mystic posted:I don't know either. Perhaps I was misreading my debug print statements, which also print the current Environment.CurrentManagedThreadId. Normally the highest ids I see are 7-8. With the WhenAll approach, the ids climbed steadily up into the 80's before I killed the process.

I'm not sure which reference source you're looking at? The .NET reference source only says that the getter of CurrentManagedThreadId is "extern". In any case, when you look at the debugger window, you usually see higher CurrentManagedThreadId than there are threads. This proves either that your way of testing isn't accurate or that the VS debugger fails to show all threads. I reckon the way of testing isn't accurate.

quote:Yes, this seems like the most obvious approach, however I believe it'll be preferable to run queue consumption on another thread. (A detail which I left out of my example case is that this code is already running on a background Task thread, not on the UI thread. Since this is really more of a patterns question, it didn't seem super relevant. The reason is UI responsiveness. The OC<T> in my example is not databound in the UI. Not relevant details for a pattern question).

I reckon there's almost never a good reason to run your work on a background thread, and lots of bad reasons. Here are slides from a recent talk I gave to the Windows XAML team:

FACT: doing asynchronous work on the UI thread, i.e. using the await operator, will NEVER harm UI responsiveness, not even on the lowest-power device you'll find. The only things that harm responsiveness are when you have "10s" or "100s" concurrency (e.g. if you throttle up to 100 concurrent requests). Or when you have code which blocks a thread. The only code that blocks a thread is (1) calling blocking APIs - in which case rewrite your code to use async APIs; or (2) doing a CPU-bound computational kernel - in which case do this small computational kernel inside Task.Run.
What you can end up with is an architecture where the orchestration of the app is done entirely on a single thread (i.e. all the awaiting, data-binding, app-logic, ...). And only the small computational inner-loops are done on the threadpool using Task.Run. This architecture will have fewer concurrency bugs and easier-to-read code.
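That architecture fits in one method: orchestration (awaiting, app logic) stays on the calling thread, and only the CPU-bound inner loop gets pushed to the thread pool. A sketch with a simulated I/O wait and a made-up computational kernel:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

async Task<long> LoadAndCrunchAsync()
{
    // "I/O" part: await it, don't block a thread (simulated here).
    await Task.Delay(10);
    int[] data = Enumerable.Range(1, 1_000_000).ToArray();

    // CPU-bound inner loop: the one place Task.Run belongs.
    return await Task.Run(() => data.Sum(x => (long)x));
}

long total = await LoadAndCrunchAsync();
Console.WriteLine(total); // 500000500000
```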
|
# ? Sep 7, 2014 19:53 |
|
ljw1004 posted:I'm not sure which reference source you're looking at? The .NET reference source only says that the getter of CurrentManagedThreadId is "extern". In any case, when you look at the debugger window, you usually see higher CurrentManagedThreadId than there are threads. This proves either that your way of testing isn't accurate or that the VS debugger fails to show all threads. I reckon the way of testing isn't accurate.

Ok, fair enough... I acknowledge that eyeballing thread ids is not a reliable way of reporting the number of threads in use by a program. I restored the WhenAll code from before, turned on the Threads window, and set a conditional breakpoint to break if the managed thread id >= 80. Now we can get a more accurate picture of what was actually happening when I said I thought the number of threads was ballooning. Ok, I was a little off.

ljw1004 posted:I reckon there's almost never a good reason to run your work on a background thread, and lots of bad reasons. Here are slides from a recent talk I gave to the Windows XAML team:

I know who you are, and I appreciate your time replying. I also understand that what you're saying SHOULD be the case, and I SHOULDN'T need to run this op on a background thread to avoid UI glitchyness. And in fairness, the code has gone through several iterations and improvements from when I first noticed the issues, so to make sure I wasn't wasting everyone's time I went back and cloned the current background thread method (accepts a TaskCompletionSource so the UI-caller can await it anyway) to a normal "async Task<T>" method, and awaited it like normal. Glitchy. It's kind of hard to put meaning behind that word... it's mostly related to touch latency, I suppose. As in, swiping pivot headers on a rhythm will be slower/unresponsive than when using the background thread approach.
I feel like there's an obvious explanation for this, and that is that it's not about the awaitables, it's that there are unexpectedly long blocking methods acting up here. I can't really nail it down (and this particular aspect of the program has already been "solved", so it's not a showstopper), but I do have two more data points: 1- Dynamic objects are involved. They're the actual response objects from my slow remote resource API. I'm plucking properties out of them into a normal statically typed class for the layer of the app we've been talking about. 2- The ui/touch latency is MUCH higher when using the WhenAll approach. To me this implies some kind of resource starvation scenario.
|
# ? Sep 8, 2014 00:05 |
|
GrumpyDoctor posted:edit: There had better be a better way to do this, as I am apparently unable to actually catch this exception. No matter where I put try/catch blocks, the exception slips through and brings my app down. Can you post some code? Are you explicitly catching SocketException, or catching any Exception? If you're just catching a SocketException and the TcpListener is on an async call, it's possible that an AggregateException is being thrown instead. EDIT: On second thought, this is probably unlikely. Back to line 1 - got any code? Bognar fucked around with this message at 14:38 on Sep 8, 2014 |
# ? Sep 8, 2014 14:36 |
|
Starting to not feel the linq right now. I kept getting "don't do this For Each because the list doesn't have any elements." Ok! I'll throw in a drat if statement to make sure the list has a count greater than zero. So on the line where I go "If (items.Count > 0) Then" I get "ArgumentException not handled by user code: Count must have a non-negative value." I have no clue what is going on. I'm going to just wrap it in a try catch, but honestly, what the hell is going on?
|
# ? Sep 8, 2014 16:54 |
|
|
Can you post a code sample of what you are doing?
|
# ? Sep 8, 2014 16:58 |