|
gently caress them posted:Is there anything like 'source control' for stored procedures, besides "just use git and welp there you go?" Backing up tables is a separate problem from tracking stored procedures in source control, which yes you absolutely need to be doing.
|
# ? May 30, 2014 16:30 |
|
|
Ithaqua posted:Or he could use an appropriate data storage mechanism instead of messing around with flat files. gently caress them posted:Is there anything like 'source control' for stored procedures, besides "just use git and welp there you go?" But backing up...I'm not quite sure what you mean. You should have DB native backups. Writing out to a CSV is fine if you want an "agnostic" copy or are trying to ferry the data around, but that's not a real backup strategy.
|
# ? May 30, 2014 16:34 |
|
gently caress them posted:Is there anything like 'source control' for stored procedures, besides "just use git and welp there you go?" Use SQL Server Data Tools. Import your database into an SSDT project, and then you have a source-controlled, canonical version of your database. Change things in your SSDT project, then publish it when you need to do a release. There you go, source controlled database objects. For data, make database backups. Don't dump poo poo to Excel, that's dumb.
|
# ? May 30, 2014 16:43 |
|
What happened was basically a miscommunication. The actual DB is regularly backed up by the DBA. Some stuff I did to update a lookup and then change some rows pointing to it was wiped during an update because the DBA wasn't told I did what I did - I'm thinking if I make any small changes in the future, I should save those small changes in particular. I think I just need to sit down with the DBA and work something out. Who says you can't network around managerial issues?

We DO have source control, I've just never put sprocs in it yet. It's also more of a side-thing when someone responsible for that walks over to ask me for a favor if Mr. DBA is busy doing DBA things. In this case, it was basically "hey so we're re-doing our docket codes and categories because the chief judge said so."

I'm kind of a bungee-dev right now anyway since the various layers of hierarchy and bureaucracy can't make up their drat minds (state vs my county vs a few counties all together that depend on my county all fussing about bullshit; Rick Scott's continued reign as Governor probably doesn't help either) and as such I'm leaving a ton of poo poo half finished. Which is why I need to be much more particular about tracking my own work. Still beats my old job by a light-year though.
|
# ? May 30, 2014 17:27 |
|
gently caress them posted:We DO have source control, I just have never done sprocs in them yet. Seriously just use SSDT, this is a solved problem.
|
# ? May 30, 2014 17:33 |
|
It's still installing!
|
# ? May 30, 2014 17:46 |
|
Ithaqua posted:Use SQL Server Data Tools. Import your database into an SSDT project, and then you have a source-controlled, canonical version of your database. Change things in your SSDT project, then publish it when you need to do a release. There you go, source controlled database objects. Dangit, Ithaqua! Every time I want to answer one you beat me to the punch! (just kidding man we're all lucky to get your help). Just wanted to chime in this is what we use at work and it works perfectly.
|
# ? May 30, 2014 20:42 |
|
The VB/C# team is looking at improving Edit & Continue. We did a survey to see what features people want to see in it... One of the common requests (omitted from the chart) was "please add EnC support for x64". That was a surprise to us since EnC support for x64 already shipped in VS2013! I guess people were burned by its lack in the past and didn't bother to try again in VS2013, or they did try again but it failed for one of the other reasons. Anyway, I just want to make sure that everyone knows: VS2013 has EnC support for x64.
|
# ? May 31, 2014 06:51 |
|
Modifying LINQ queries would be HUGE. I never understood why lambdas can't work in quickwatch?
|
# ? May 31, 2014 08:20 |
|
ManoliIsFat posted:Modifying LINQ queries would be HUGE. The way lambdas work is, under the hood, the compiler generates a class for them, with members & code. The way quickwatch works is, under the hood, the IDE evaluates expressions. We never got around to making it so that quickwatch can also generate classes. (which assemblies would these classes get generated into? where would they go? how would we get rid of classes that were no longer needed once the watch was deleted? The CLR has no good way to get rid of code, short of unloading an entire assembly.)
|
# ? May 31, 2014 18:35 |
|
Surely allowing it in debug mode could work by adding some hooks to "ghost" assemblies at build time?
|
# ? May 31, 2014 20:31 |
|
gently caress them posted:What happened was basically a miscommunication. The actual DB is regularly backed up by the DBA. Some stuff I did to update a lookup and then change some rows pointing to it was wiped during an update because DBA wasn't told I did what I did - I'm thinking if I make any small changes in the future, I should save those small changes in particular. In case snarky old DBA guy doesn't go for SSDT and such you could also use a migration framework. Roundhouse is a good option with snarky old DBAs as that just runs scripts they can understand pretty well.
|
# ? May 31, 2014 20:59 |
|
ljw1004 posted:The way lambdas work is, under the hood, the compiler generates a class for them, with members & code. Out of curiosity, since the blog post says you're working on it, roughly how are you planning to solve it? I'm assuming you can't change the way lambdas work or make changes to the CLR?
|
# ? Jun 2, 2014 10:04 |
|
Ithaqua posted:C# in Depth is a good survey of the deeper features of C#. It's a good starting point. Seemann's book is good, but often tedious. However, you can skim over it in a day or two and get all the important bits.
|
# ? Jun 3, 2014 04:10 |
|
RICHUNCLEPENNYBAGS posted:Seemann's book is good, but often tedious. However, you can skim over it in a day or two and get all the important bits. Also you only really need to look over the first half. The second half is all about different IoC containers and how to set up various .NET frameworks to use IoC. It's useful as a reference, but not really necessary to read front to back.
|
# ? Jun 3, 2014 15:06 |
|
Say I wanted to fire and forget an action. However, if the action fails (throws an exception), retry the action after an appropriate delay, and do this some number of times before giving up altogether. Would something like this be adequate? C# code:
C# code:
1. I'm using try/catch as part of my workflow, maybe this is bad. 2. The while(true) smells a bit, but the alternative is testing retryCount > 0 in two places to avoid an extra and unnecessary call to Thread.Sleep.
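The code block didn't survive the scrape, but based on the description (fire-and-forget, try/catch, while(true), a retry count, Thread.Sleep), a sketch of the shape being asked about might look like this — all names here are hypothetical:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class FireAndForget
{
    // Run an action on a background thread; on exception, retry up to
    // maxRetries times with a fixed delay between attempts, then give
    // up silently (it's fire-and-forget, so nothing observes failure).
    public static void Run(Action action, int maxRetries, TimeSpan delay)
    {
        Task.Run(() =>
        {
            int attempt = 0;
            while (true)
            {
                try
                {
                    action();
                    return; // success
                }
                catch (Exception)
                {
                    if (++attempt > maxRetries)
                        return; // retries exhausted, give up
                    Thread.Sleep(delay);
                }
            }
        });
    }
}
```

The while(true) plus an early return avoids testing the retry count in two places, at the cost of the loop condition living inside the catch.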
|
# ? Jun 3, 2014 18:16 |
|
This is a perfect case for using async/await... there's no reason to be doing the action on another thread. Also, consider implementing cancellation. Also also, consider using a Timer. Also also also, consider using something that's already solved the problem (note: no idea of the quality of this code): https://github.com/pbolduc/Retry New Yorp New Yorp fucked around with this message at 19:17 on Jun 3, 2014 |
# ? Jun 3, 2014 19:11 |
|
I tried converting it to async. C# code:
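The converted code was also lost in the scrape; an async/await version of the same retry shape (hypothetical names) might look like the following — Task.Delay replaces Thread.Sleep, so no thread is blocked while waiting:

```csharp
using System;
using System.Threading.Tasks;

static class FireAndForgetAsync
{
    // async/await version: no dedicated thread sits blocked during the
    // delay; the continuation resumes on a pool thread after Task.Delay.
    public static async Task RunAsync(Action action, int maxRetries, TimeSpan delay)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                action();
                return; // success
            }
            catch (Exception)
            {
                if (attempt >= maxRetries)
                    return; // give up silently (fire-and-forget)
                await Task.Delay(delay);
            }
        }
    }
}
```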
|
# ? Jun 3, 2014 20:17 |
|
What am I doing that's so slow? I just want to draw some simple stuff for fun on my form background. I create the Graphics context and bitmap on form load, and just tick the drawing in a timer. With a 20ms interval, it's already getting noticeably chuggy and heavy. It's like a 320x240 window. code:
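The snippet is missing, but a frequent cause of this kind of chug in WinForms is creating a Graphics context once and drawing to it from the timer, instead of invalidating and painting in OnPaint with double buffering enabled. A minimal sketch of the latter approach (assuming WinForms; the drawing itself is a placeholder):

```csharp
using System;
using System.Drawing;
using System.Windows.Forms;

class DemoForm : Form
{
    readonly Timer timer = new Timer { Interval = 20 };
    int frame;

    public DemoForm()
    {
        ClientSize = new Size(320, 240);
        // Buffer the drawing off-screen so each repaint is one blit;
        // this alone removes most flicker and "heaviness".
        DoubleBuffered = true;
        timer.Tick += (s, e) => { frame++; Invalidate(); };
        timer.Start();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        // Draw against the event's Graphics; it's managed by the
        // framework. Don't cache a CreateGraphics() result.
        e.Graphics.FillEllipse(Brushes.CornflowerBlue, frame % 100, 50, 40, 40);
    }
}
```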
|
# ? Jun 4, 2014 05:31 |
|
I hope this is the right place to ask this since it's kind of an IIS question, but it's in relation to a WCF service I'm making, so whatever. I have a service running in a console window. There was a contract code:
code:
code:
Well, this breaks the 48k size default. I'd very much like to change this default, but no matter what the hell I do in my configuration, my service ignores it. I tried adding all this to my web.config based on what I'd read on various forums like stackoverflow code:
code:
No luck. The last thing I tried was configuring IIS with appcmd: appcmd.exe set config http://localhost/DataServices -section:system.webServer/serverRuntime /uploadReadAheadSize:10485760 /commit:apphost The output says it has applied the configuration changes, but the 48k limit is still there. I get a 413 error saying it's too big every single time unless I send a dictionary smaller than 48k (which is absurd). Any ideas as to why my configurations are being ignored? I've completely run out of ways to try to increase the 48k limit.
|
# ? Jun 4, 2014 14:44 |
|
epalm posted:Say I wanted to fire and forget an action. However, if the action fails (throws an exception), retry the action after an appropriate delay, and do this some number of times before giving up altogether. "Automatic retry" is a bad design smell. Here are some slides from a talk I've been giving internally at Microsoft...

Poor user experience

Let's say you retry after an appropriate delay. Well, some failures will be intermittent, and others will be permanent. What the user will experience is that without this code they'd observe failure 20% of the time in 2 seconds, but with your code they'll observe failure 15% of the time in 10 seconds. That's an eternity, and a worse overall user experience. There is a tried-and-true best practice for handling failures. That is: give the user an error message as promptly as possible, and let them take action as they see fit (normally by hitting the Refresh button). This leads to happier users.

Incorrect engineering assumptions

More fundamentally, what is an "appropriate delay"? If someone codes a retry, they are making a statistical assertion that the likelihood of failure now is uncorrelated with the likelihood of failure after the delay. (If that weren't true, then there'd be no point delaying!) This statistical assertion is not backed up by evidence. There are no generally accepted statistical rules of thumb here. Anything you write here is "coding blind" -- at best it's needless code that creates a worse user experience, and at worst it introduces bugs in subtle and rarely-tested codepaths. If you're writing a mobile app, then failures are most commonly associated with poor connectivity - e.g. walking into a closed building, or wandering out of tower range, or wifi configuration issues. Nothing in your code will ever do the right action here. The right action is to give the user full information properly, and let them take remedial action. 
If you're writing a backend batch-processing server, then maybe the right action upon failure is to push the item to the back of the queue so it runs later this night or the next night. That way, things like "404 not found" errors will likely be fixed up by an engineer because his pager rings and tells him to get his drat service back up and running within a couple of hours. And "temporary timeout" is just as likely caused by a DDoS attack or domino datacenter crash that will also take a couple of hours to fix. For communication within a datacenter -- in 6 months of heavy duty web traffic within AWS, my brother (PhD in network theory, now working for a datamining startup) said he never once observed failure between the machines.

Bugs due to race conditions

The basic law of distributed systems is that there are three ways a network message can play out:

(1) It might succeed and you know it (200 OK)
(2) It might fail and you know it (500 Failure, ...)
(3) It might either succeed or fail but you don't know which (TimeoutException, or ConnectionClosed)

Any library which fails to expose these three possibilities is flawed, in the sense that apps can't use it to write reliable code. I guess it's okay because your API is specifically designed solely for unimportant messages to the server, i.e. ones where it's entirely fine for the correct running of your app even if the POST never succeeded. (what are you using it for? just opportunistic telemetry? there are very few cases where fire-and-forget is ever acceptable...) Let's spell it out. Imagine the first POST attempt succeeds in creating/updating data on the webservice, but nevertheless ends with a TimeoutException. Then you'll try again -- even if some other client has seen the data in the meantime and acted upon it or changed it! And even if there is no other client, well, will your webservice reliably handle two POSTs to the same URL? 
Generally, the basic tools for distributed code are idempotency and at-least-once guarantees.

"Idempotency" is when you make sure that, even if your operation is performed more than once, it still does the right thing. GET operations are always idempotent. As for PUT and POST, well, that depends on the exact semantics. For some updates like "add $1 into my bank account" you need to invent your own ways to ensure idempotency. Typically you do this using http "etags", which provide the distributed equivalent of Interlocked.CompareExchange.

"At-least-once guarantees" are because, if you don't know if the operation succeeded or failed, then you'll likely run it again. In a mobile app, if it failed, you show the error to the user and let them hit the Retry button, so the user provides the guarantee. In a datacenter batch processor, you'd likely stick the item at the back of the queue so it can be retried in a couple of hours. You'd also increment a "poison pill count" so that, if there's something structurally wrong, it doesn't keep retrying from now to eternity but instead emails an operator to resolve the problem manually. ljw1004 fucked around with this message at 15:05 on Jun 4, 2014 |
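The etag technique described above can be sketched with HttpClient. The URL and payload here are hypothetical, and this only works against a service that actually honors If-Match (returning 412 Precondition Failed on a stale etag):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class EtagUpdate
{
    // Conditional update: only apply our PUT if the resource is still at
    // the version we read -- the distributed CompareExchange described above.
    public static async Task<bool> TryUpdateAsync(HttpClient client, string url, string newBody)
    {
        var get = await client.GetAsync(url);
        get.EnsureSuccessStatusCode();
        EntityTagHeaderValue etag = get.Headers.ETag; // may be null if the service sends none

        var put = new HttpRequestMessage(HttpMethod.Put, url)
        {
            Content = new StringContent(newBody)
        };
        if (etag != null)
            put.Headers.IfMatch.Add(etag); // server rejects with 412 if someone else wrote first

        var response = await client.SendAsync(put);
        return response.StatusCode != HttpStatusCode.PreconditionFailed;
    }
}
```

On a false return, the caller re-reads, re-applies its change to the fresh version, and tries again — that loop is what makes the update safe to repeat.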
# ? Jun 4, 2014 15:01 |
|
In this specific case, the action is not applied by a user. When an internal thing happens, I need to send some details to an external service (over which I have no control) whose web service is "sometimes" unavailable for a few seconds at a time. Retrying the connection a few times, with an appropriate delay, still sounds to me like the right thing to do. It's actually not totally critical that the communication succeeds (hence fire-and-forget), but it would be nice to move from ~90% success to ~99% success by just retrying a few times. How else would I improve my situation, considering I don't have full control over all systems in the equation? The external service is known to be idempotent, by the way. If the action was applied by a user, then I understand and agree with you on all points. epswing fucked around with this message at 15:49 on Jun 4, 2014 |
# ? Jun 4, 2014 15:38 |
|
IcedPee posted:Nevermind. I figured it out. Since I was just using this link as a basis for hosting my service in a console, I neglected to check on how it was being created - since it creates a new binding, it has no reason to rely on the configuration files for its binding. Adding the desired properties to the new binding fixed the problem.
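For anyone who lands on the same problem: when the host constructs its own binding in code, the web.config/app.config <bindings> section is never consulted, so the limits have to be raised on the binding object itself. A sketch of that fix (BasicHttpBinding assumed; the 10 MB values are illustrative):

```csharp
using System;
using System.ServiceModel;

static class HostSetup
{
    public static BasicHttpBinding CreateBinding()
    {
        // A binding newed up in code ignores configuration-file binding
        // sections entirely, so the message-size limits go here.
        var binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 10485760, // 10 MB (default is 64k)
            MaxBufferSize = 10485760
        };
        // Large serialized dictionaries also hit the reader quotas.
        binding.ReaderQuotas.MaxArrayLength = 10485760;
        binding.ReaderQuotas.MaxStringContentLength = 10485760;
        return binding;
    }
}
```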
|
# ? Jun 4, 2014 15:39 |
|
So here's my problem: Our team has a handful of big ASP.NET/MVC web applications, and the static content (images, css, some js) is scattered about in either the projects or a dedicated static content project. Currently the static content project just contains images, and we want to move all of the static content into that project so we can manage it separately from our MVC apps. Moving files and changing the URL in header links/scripts is easy enough, but how can I handle bundling? So far, I understand that System.Web.Optimization exposes a new token every time a bundle changes, so the app can really only access bundles that it bundled itself. Maybe I would need a way to expose that bundle token via an API, but that seems ridiculous. Can this be accomplished, or is there a better way to manage bundled static resources across projects?
|
# ? Jun 4, 2014 20:15 |
In an MVC controller can I send a response through the context, close it out, then continue on with processing? I tried doing HttpContext.Response.End() and such right at the start but my request didn't get a response until the server's entire routine was done.
|
|
# ? Jun 4, 2014 21:04 |
|
I think to do that, you can return your ActionResult, then do your post-processing from the OnResultExecuted event.
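A rough sketch of that approach in MVC 5 (the controller, action, and work are all hypothetical); note that the response still isn't flushed to the client until the request pipeline completes, so anything heavy here will still hold the connection open:

```csharp
using System.Web.Mvc;

public class ReceiveController : Controller
{
    [HttpPost]
    public ActionResult Submit(string payload)
    {
        TempData["payload"] = payload; // stash the input for the post-step
        return Content("got it");      // the result executes before OnResultExecuted fires
    }

    // Runs after the ActionResult has been executed, i.e. after the
    // response body has been written -- but still inside the request.
    protected override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        base.OnResultExecuted(filterContext);
        // DoFollowUpWork((string)TempData["payload"]); // hypothetical
    }
}
```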
|
# ? Jun 4, 2014 21:18 |
|
You could spawn a thread from your controller action and continue work there. Obviously, you'll want to handle all exceptions since that could take down your whole site if that thread failed. Although, to me this is a code smell. What are you doing that takes a long time that the user doesn't need to know about the result of? If it's a significant amount of processing, you shouldn't be doing that on your webserver. Queue up the work and handle it in another process. Careful Drums posted:Can this be accomplished or is there a better way to manage bundled static resources across projects? I've searched for a good solution to handling resources that don't exist in the project and I don't think I've found a good general purpose one yet. In my case, I was looking for something that would ease swapping between a CDN and some local repository for static content. Bognar fucked around with this message at 21:26 on Jun 4, 2014 |
# ? Jun 4, 2014 21:24 |
My MVC app communicates with services at client sites, performing a few queries based on the input sent by the 'user'. If the client's service takes too long to respond (15 seconds in this case) the user's original request will just time out, even though things are fine. I store/save the input sent immediately, so I just want to say 'yep, got it' and then work with it. I ended up spawning an additional thread, which works well. I'm doing a catch-all in the new thread and using ILogger to write the exception if it happens.
|
|
# ? Jun 4, 2014 21:51 |
|
You may want to think about a queuing mechanism to make life a little easier. If it's truly an asynchronous job, you just throw a message to a rabbitmq or something that describes the job you want it to do, and some service/program just pops outta the queue, does the job, writes the result. You could also retry failures this way. No more spawning threads in the web app.
ManoliIsFat fucked around with this message at 03:23 on Jun 5, 2014 |
# ? Jun 4, 2014 21:56 |
|
Bognar posted:You could spawn a thread from your controller action and continue work there. You sure about that? I thought that ASP.NET reserved the right to kill things that aren't part of an in-progress request? In any case, recently announced and new in .NET 4.5.2 is HostingEnvironment.QueueBackgroundWorkItem for this kind of thing... http://msdn.microsoft.com/en-us/library/ms171868(v=vs.110).aspx
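For reference, a sketch of using that API (assuming .NET 4.5.2 and an ASP.NET host; the work itself is a placeholder):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class Background
{
    // .NET 4.5.2+: ASP.NET tracks the queued item and delays AppDomain
    // shutdown (up to ~30s) so registered background work can finish.
    // The token fires when the host wants the work to stop.
    public static void Fire(string payload)
    {
        HostingEnvironment.QueueBackgroundWorkItem(async (CancellationToken ct) =>
        {
            // e.g. call the flaky external service here, honoring ct
            await Task.Delay(100, ct); // placeholder for real work
        });
    }
}
```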
|
# ? Jun 5, 2014 01:49 |
|
ljw1004 posted:"Automatic retry" is a bad design smell. Here are some slides from a talk I've been giving internally at Microsoft... But I mean, if your application depends on an external API or something, how can you not have exponential backoff? Just giving up on the first retry seems pretty lovely since a lot of times retrying does work.
|
# ? Jun 5, 2014 03:33 |
|
idk what kind of internal networks you're using at Microsoft where HTTP or DNS or firewalls never just go down for five seconds. It's been sadly common for me, and my clients would rather their batch processing tasks start up again 10 seconds later than wait until the next processing window. Idempotency is a valuable tool for this, of course. It isn't mutually exclusive with back-off-and-retry, and in fact makes it far safer.
|
# ? Jun 5, 2014 11:29 |
|
RICHUNCLEPENNYBAGS posted:But I mean, if your application depends on an external API or something, how can you not have exponential backoff? Just giving up on the first retry seems pretty lovely since a lot of times retrying does work. Just because it's a smell doesn't mean it's categorically bad. I mean, Microsoft itself has the whole Transient Fault Handling block for their Azure bits that has Exponential Back-off as one of the provided retry strategies: http://msdn.microsoft.com/en-us/library/hh680934(PandP.50).aspx
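For reference, the usual shape of an exponential back-off schedule — the constants here are illustrative, not a recommendation:

```csharp
using System;

public static class Backoff
{
    // delay = baseDelay * 2^attempt, capped at maxMs so a long outage
    // doesn't push waits out to minutes. Real implementations often add
    // random jitter so many clients don't retry in lockstep.
    public static TimeSpan Delay(int attempt, int baseMs = 500, int maxMs = 30000)
    {
        double ms = Math.Min(baseMs * Math.Pow(2, attempt), maxMs);
        return TimeSpan.FromMilliseconds(ms);
    }
}
```

So attempts wait 500ms, 1s, 2s, 4s, ... until the 30s cap — the same strategy the Transient Fault Handling block linked above ships as ExponentialBackoff.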
|
# ? Jun 5, 2014 14:31 |
|
ljw1004 posted:You sure about that? I thought that ASP.Net reserved the right to kill things that aren't part of an in-progress request? That's a good point. ASP.NET can try to tear down AppDomains for multiple reasons, and if it doesn't know about the code that's running then it won't try to wait on it. You could be lazy about it and use ThreadPool.QueueUserWorkItem so it uses a thread from the ASP.NET thread pool, though the more correct way is to create a class representing your work, have it implement IRegisteredObject, and use HostingEnvironment.RegisterObject to let ASP.NET know you're doing work. That gives you ~30 seconds to complete your work before the AppDomain is torn down.
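A sketch of that pattern (assuming an ASP.NET host; the work delegate is hypothetical):

```csharp
using System;
using System.Threading;
using System.Web.Hosting;

// Registering with the hosting environment tells ASP.NET about in-flight
// background work, so an AppDomain recycle waits (briefly) for it instead
// of silently killing the thread.
public class BackgroundJob : IRegisteredObject
{
    readonly Action work;

    public BackgroundJob(Action work)
    {
        this.work = work;
        HostingEnvironment.RegisterObject(this);
    }

    public void Run()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try { work(); }
            finally { HostingEnvironment.UnregisterObject(this); }
        });
    }

    // Called by ASP.NET when the AppDomain is shutting down; signal the
    // work to wrap up. UnregisterObject must eventually be called or
    // ASP.NET calls Stop again with immediate == true.
    public void Stop(bool immediate) { }
}
```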
|
# ? Jun 5, 2014 14:42 |
|
ljw1004 posted:You sure about that? I thought that ASP.Net reserved the right to kill things that aren't part of an in-progress request? I can (unfortunately) confirm that it does (superficially) work. In light of this conversation, I'm going to try to get the guy who did it to change the way he handles our use case.
|
# ? Jun 5, 2014 15:39 |
|
Manslaughter posted:In an MVC controller can I send a response through the context, close it out, then continue on with processing? I tried doing HttpContext.Response.End() and such right at the start but my request didn't get a response until the server's entire routine was done. I think there are libraries for this like HangFire, though I've never used them.
|
# ? Jun 5, 2014 22:54 |
|
I'm trying to convert my hobby project's web scraper to F#. I'm trying to clean up my results so that I simply have a List of string arrays, the caveat being that I'm consuming the results from the F# library in C#. This method returns a string array and I'd like to send each string in the array to a function called resultsBody that takes a string and returns a sequence of string arrays, but the final end result should be a single sequence of string[], not a sequence of sequences of string[]s. code:
Edit: I figured it out. It was this: code:
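Since the results end up consumed from C#, it may help that the LINQ counterpart of F#'s Seq.collect is SelectMany: map each element to a sequence, then concatenate the results into one flat sequence. A sketch with hypothetical names:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Flatten
{
    // Seq.collect f xs in F# corresponds to xs.SelectMany(f) in LINQ:
    // each input string yields a sequence of string[] rows, and the
    // per-input sequences are concatenated into one IEnumerable<string[]>.
    public static IEnumerable<string[]> CollectRows(
        IEnumerable<string> inputs,
        Func<string, IEnumerable<string[]>> resultsBody)
    {
        return inputs.SelectMany(resultsBody);
    }
}
```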
Uziel fucked around with this message at 02:20 on Jun 6, 2014 |
# ? Jun 5, 2014 23:54 |
|
I think you could have just used Seq.collect. (Also check out the Array.Parallel module.) e: Actually the solution you posted shouldn't be compiling for a couple of reasons so I'm not sure what you're doing. raminasi fucked around with this message at 04:29 on Jun 6, 2014 |
# ? Jun 6, 2014 04:24 |
|
Hey guys, newbie here. I have an ASP.NET MVC web application, and I want to present some data in a view that refreshes itself every so often. The data is coming from a WCF service, so I'd like to continuously call that service every few seconds and send the data back to the user without forcing them to refresh the page. How is this best achieved? I tried spending some time with Google on this but I got a lot of different and unrelated results. I'd like to keep this as simple as possible since I'm very new to MVC. It's also possible that I'm just not searching for the right things.
|
# ? Jun 6, 2014 07:01 |
|
|
spiderlemur posted:Hey guys, newbie here. What you're looking for is some kind of AJAX call to pull new data down from the server. This post gives a pretty basic example.
|
# ? Jun 6, 2014 13:08 |