|
Scaramouche posted:Now I'm getting OutOfMemory errors in a different place, when submitting my XML here: Streams give you a way to read and write bytes from/to some other location. That other location can be (but is not limited to) a file, memory, or a network socket. The Position property on the stream represents the position in the underlying data from which the stream will next read or write, and it's automatically updated whenever you read from or write to the stream. In order to actually calculate the MD5, MarketplaceWebServiceClient.CalculateContentMD5 needs to read all of the data out of the stream. It's not actually storing all that data in memory, it's just doing math on small chunks of it at a time. But in doing so it advances the stream's Position to the end of the underlying data, which is why you need to reset Position to 0 afterwards - otherwise there would be nothing left for the web service client to send. So if instead you were to do: code:
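The code block that followed is missing from the archive; here's a sketch of the idea being described. Property and type names like `FeedContent` and `SubmitFeedRequest` are assumptions about the Amazon library's shape, not verbatim API:

```csharp
// Sketch only - FeedContent/SubmitFeedRequest are assumed names, not
// necessarily the real Amazon MWS API surface.
using (var stream = File.OpenRead(feedPath))
{
    var request = new SubmitFeedRequest();
    request.FeedContent = stream;

    // CalculateContentMD5 reads the stream to the end to hash it...
    request.ContentMD5 = MarketplaceWebServiceClient.CalculateContentMD5(stream);

    // ...so rewind it, or the client will have nothing left to send.
    stream.Position = 0;

    client.SubmitFeed(request);
}
```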
I suspect the examples you're referring to describe the manual process of converting stream data to and from text, which almost nobody ever does because StreamReader and StreamWriter exist and which you don't have to do anyway because you're handing all of the work off to the Amazon library. e: vb may be bogus, I'm a C# guy e2: The using block is important so that you close the file when the Amazon lib is done with it, regardless of stuff otherwise breaking
|
# ¿ Jul 2, 2014 02:14 |
|
|
Scaramouche posted:The interesting thing is I had stumbled upon a similar solution earlier, with the only difference being mine didn't use the Using... block, but the error is identical, down to ScatterGatherBuffers. I'm not sure what's going on here, other than the stream is obviously being treated as the entire thing instead of streaming. This is an example where someone chunks something too big for webclient, but it looks so specific and so much goes on in the While... loop I can't see how I'd get it to work with the Amazon SubmitFeed(request) model: I found a copy of MarketplaceWebServiceClient posted on GitHub somewhere and I'm pretty sure the link you found is the problem: internally MarketplaceWebServiceClient creates an HttpWebRequest which buffers the stream anyway. Unfortunately, if you haven't got access to the source I don't think there's a way to fix it.
|
# ¿ Jul 2, 2014 07:24 |
|
kingcrimbud posted:I figured this out after creating a new solution. Be careful how you register DelegatingHandlers! Their singleton behavior means dependencies will not resolve per your expectations. If you want per request scoping to work right in a DelegatingHandler, you need to call request.GetDependencyScope() in SendAsync and use the resulting IDependencyScope as a service locator. Been there, done that.
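For anyone who hits this later, the service-locator workaround described above looks roughly like this. `IMyService` is a placeholder for whatever per-request dependency you need:

```csharp
// Sketch: resolving per-request dependencies inside a Web API 2
// DelegatingHandler. The handler itself is a singleton, so scoped
// services can't be constructor-injected - resolve them from the
// request's dependency scope instead.
public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // GetDependencyScope() returns the IDependencyScope for this request.
        var scope = request.GetDependencyScope();
        var service = (IMyService)scope.GetService(typeof(IMyService));

        // ... use service here ...

        return await base.SendAsync(request, cancellationToken);
    }
}
```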
|
# ¿ Jun 23, 2015 03:51 |
|
epalm posted:When I get together with friends, and we talk about work, a phrase like this is a solid conversation-ender But seriously, like three other people where I work have tripped over this same issue - it's fairly subtle behaviour if you start from "I need to wrap something around all my HTTP requests" and arrive at DelegatingHandler.
|
# ¿ Jun 23, 2015 04:25 |
|
The Wizard of Poz posted:I came across that but I can't find any documentation whatsoever to support the theory that it can be used to build an OAuth 2.0 server. This is the frustration I keep encountering, it seems like no one in the world is interested in building their own OAuth server, they're only interested in talking to existing OAuth servers like Google and Facebook etc. https://github.com/DotNetOpenAuth/DotNetOpenAuth/wiki/Security-scenarios#developing-a-user-specific-authorization-server and down
|
# ¿ Aug 17, 2015 05:10 |
|
The Wizard of Poz posted:Not at all, I'm struggling to wrap my head around all this and the myriad of terms that get bandied about far too often appear to be interchangeable or their definitions become muddled. As I read more about OAuth 2.0 I'm starting to think maybe OpenID is what I need after all. It's all very convoluted. I'm not 100% on this but my take on them both is that: OAuth allows a user to authorise an application to access their stuff without needing to give that application their password. OpenID allows users to authenticate themselves using a third party (Google etc.), and I think it can combine with OAuth in a single call to also ask for authorisation to that third party's data. To do a typical federated identity sign in, you use OpenID to ask Google to authenticate your user, and then use OAuth to get authorisation from the user to read more info about them than just their email address. Chances are someone is going to come in here and correct me on that. Anyway, this is starting to sound like an XY problem, so what is it that you're trying to do?
|
# ¿ Aug 17, 2015 06:14 |
|
The Wizard of Poz posted:I think you're right, let me take a step back. We have a customer database that includes, among other details, a username and password for each of our customers. We would like to use this database to provide authentication for potentially many websites. To achieve this, I'd like to make some kind of authentication service that can be called remotely to authenticate a user and they will then be logged in across all the related websites without having to re-enter their credentials for each one. I assume this would some kind of token system but I'm not sure how to plug it all together. The Wizard of Poz posted:As I read more about OAuth 2.0 I'm starting to think maybe OpenID is what I need after all. It's all very convoluted. I'm pretty sure you're right, and that what you want to be doing is building an OpenID provider over your customer database and making the related websites relying parties. I couldn't easily find any documentation on doing that with DotNetOpenAuth but I did find a sample implementation at https://github.com/DotNetOpenAuth/DotNetOpenAuth.Samples/tree/master/src/OpenID/OpenIdProviderMvc - good luck! Kekekela posted:This is my first time hearing this term but after googling it, I think I will be using it extensively going forward.
|
# ¿ Aug 18, 2015 07:23 |
|
Boz0r posted:Yeah, it's the Community Edition. It was top of the list in Dreamspark, and it said it was pretty much the same as Enterprise so I just picked that, while Enterprise was just underneath it. Trust no one. On the bright side given you have access to the Enterprise edition you should be able to just install that over the top of your current install and upgrade it. At least I did that to my copy of 2013 when I got bumped from Pro to Premium and it worked OK, didn't even break addins.
|
# ¿ Aug 18, 2015 23:46 |
|
Hey Poz one of the other guys at work just pointed me at Thinktecture IdentityServer as an alternative to DotNetOpenAuth, might be worth looking into as well.
|
# ¿ Aug 19, 2015 00:14 |
|
Boz0r posted:I'm using this one, it seems the most elegant. code:
|
# ¿ Aug 19, 2015 13:25 |
|
Dollas posted:New to MVC and activeX, trying to use code from a .NET activeX demo for a signature pad (SigPlus ActiveX: http://www.topazsystems.com/dotnet.html). This is well outside my usual area so I'm really just making an educated guess here but ASP.NET apparently defaults to multi-threaded apartments, which is my gut feeling on the source of your issue - I'm pretty sure ActiveX controls need single threaded apartments. The demo app probably has [STAThread] sitting over its main method - see if changing that to [MTAThread] causes similar issues. Making MVC use a single-threaded apartment looks like a pain in the dick. The other thing you could try is registering the server component into COM+ as an out-of-process server, which will hurt performance compared to an in-process load but should at least let you enforce single-threadedness.
|
# ¿ Aug 20, 2015 00:19 |
|
RICHUNCLEPENNYBAGS posted:I'm also kind of talking out of my rear end but I'd think forcing it to run single-threaded would cause serious performance issues. While we're talking out of our arses I think it sets up a new apartment for each request or does some apartment pooling magic or something. The Wizard of Poz posted:Having trouble wrapping my head around a problem, hoping I can explain it here for someone to help me: Why do you need the expression objects? I'm not sure why you can't just declare GetAsKeyValuePairs on ModelBase as abstract and return a Dictionary<string,string> directly from the subclasses.
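The suggestion at the end sketches out to something like this. All the type and property names here are illustrative, since the original models weren't posted:

```csharp
// Sketch: skip the expression-object plumbing and let each subclass
// return its pairs directly. PersonModel/Name/Email are made-up names.
public abstract class ModelBase
{
    public abstract Dictionary<string, string> GetAsKeyValuePairs();
}

public class PersonModel : ModelBase
{
    public string Name { get; set; }
    public string Email { get; set; }

    public override Dictionary<string, string> GetAsKeyValuePairs()
    {
        return new Dictionary<string, string>
        {
            { "Name", Name },
            { "Email", Email },
        };
    }
}
```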
|
# ¿ Aug 20, 2015 03:23 |
|
Munkeymon posted:But there's no guarantee that it's monotonically increasing, which is what you need because a modern CPU can do more than 10 million things in a second. I don't see anything like that available without dipping into the Windows API. https://msdn.microsoft.com/en-us/library/ms724408(VS.85).aspx https://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx System.Diagnostics.Stopwatch uses QueryPerformanceFrequency etc. if it's available, and has static properties for whether it's a high resolution timer and how many stopwatch ticks are in a second. On this PC it claims to be high resolution with ~3.3 million stopwatch ticks per second. Even that isn't high enough resolution for the following loop to run forever (it's terminating after 2-3 iterations usually): code:
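The original code block is gone, but based on the description ("terminating after 2-3 iterations") the loop was presumably something like this guess - it only keeps running while every consecutive timestamp differs, so it dies as soon as two reads land inside the same stopwatch tick:

```csharp
// Reconstruction/guess at the lost snippet, not the original code.
Console.WriteLine(Stopwatch.IsHighResolution); // static resolution info
Console.WriteLine(Stopwatch.Frequency);        // stopwatch ticks per second

long previous = Stopwatch.GetTimestamp();
int iterations = 0;
while (true)
{
    long current = Stopwatch.GetTimestamp();
    if (current == previous) break; // same tick twice: resolution exhausted
    previous = current;
    iterations++;
}
Console.WriteLine(iterations);
```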
|
# ¿ Sep 10, 2015 23:30 |
|
Bob Morales posted:Yes - at least I think so Check your 32 bit ODBC sources. Visual Studio isn't 64 bit (yet).
|
# ¿ Dec 2, 2015 23:01 |
|
xgalaxy posted:project.json is dead. xproj effectively becomes the de facto standard and renamed to csproj + gaining some features from project.json mixed in. I like project.json. I'm less disappointed in this if they're going to make csproj as easy to use as project.json is, but I'm still disappointed. I get why though, there's been a hell of a lot of investment in msbuild and this change probably made some big-spending enterprise customer Very Unhappy.
|
# ¿ May 11, 2016 23:51 |
|
Gul Banana posted:... to be able to edit the file without unloading the project. sounds alarmingly difficult to implement, though, given how VS works... VS already handles the project file changing out from underneath it remarkably well, it just doesn't let you edit the file from within the IDE while it's loaded. It might not be that big of a deal. I guess the biggest thing for me was how terse and readable project.json was compared to the piles of xml that .proj currently is. As long as you have the right schema loaded editing a proj file isn't a huge deal, it just takes a lot more than project.json did.
|
# ¿ May 12, 2016 23:54 |
|
Drastic Actions posted:https://twitter.com/mjhutchinson/status/562501156375900160 Yeah I'm not really mourning project.json itself per se, more that it made it dead easy to tell it "hey, when you install packages run npm install as well, and then after you build run webpack -p". Last time I tried to do anything with a .csproj in VS I don't even recall it giving me docs in tooltips, let alone autocompletion. I'd be fine with that completion combined with some shorthand for "do this extra step before/after package restore / build / publish".
|
# ¿ May 14, 2016 01:32 |
|
The Wizard of Poz posted:I'm trying to implement an IAuthenticationFilter (the Web Api 2 flavour, NOT the MVC flavour) and I'm struggling with the order the code is executed. I would have expected the Authentication filter to be run before any controller-based stuff, so that I could set the appropriate principal and then load the relevant user data from my DbContext in some kind of base ApiController. Honestly I'd just do something like: code:
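The snippet that followed was lost, so here's a guess at the general shape of a Web API 2 `IAuthenticationFilter`, with `LookupPrincipal` standing in for whatever token or credential validation actually happens:

```csharp
// Sketch of the Web API 2 IAuthenticationFilter shape; LookupPrincipal
// is a stand-in, not a real API.
public class MyAuthenticationFilter : IAuthenticationFilter
{
    public bool AllowMultiple { get { return false; } }

    public Task AuthenticateAsync(HttpAuthenticationContext context,
        CancellationToken cancellationToken)
    {
        IPrincipal principal = LookupPrincipal(context.Request); // stand-in
        if (principal != null)
        {
            // This runs before the action, so the controller sees it.
            context.Principal = principal;
        }
        else
        {
            context.ErrorResult = new UnauthorizedResult(
                new AuthenticationHeaderValue[0], context.Request);
        }
        return Task.FromResult(0);
    }

    public Task ChallengeAsync(HttpAuthenticationChallengeContext context,
        CancellationToken cancellationToken)
    {
        return Task.FromResult(0);
    }
}
```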
|
# ¿ May 18, 2016 12:18 |
|
Space Whale posted:I'm loving around with EF and find myself wanting to do a search by multiple columns per row. code:
Also since you appear to be matching on any column and not all provided columns, you could also build individual searches against each column and returning the union of the result: code:
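The union approach reads roughly like this - `Thing`, `Name` and `Description` are placeholder names since the original model wasn't posted. EF folds `Union` over two `IQueryable`s into a single SQL `UNION`:

```csharp
// Sketch of the per-column-queries-plus-Union idea with placeholder names.
var byName = context.Things.Where(t => t.Name.Contains(term));
var byDescription = context.Things.Where(t => t.Description.Contains(term));

// Union de-duplicates and EF translates it into one SQL UNION query.
var results = byName.Union(byDescription).ToList();
```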
|
# ¿ Jun 17, 2016 02:32 |
|
Bognar posted:This will not work. Multiple chained .Where clauses work as AND, not OR. I knew that. I even wrote code that took advantage of it like a week ago. Don't code straight into the post box while tired.
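For anyone else who trips over this, the distinction in question (placeholder names throughout):

```csharp
// Chained Where calls compose as AND...
var chained = things.Where(t => t.Name.Contains(a))
                    .Where(t => t.Description.Contains(b));
// ...which is equivalent to:
var anded = things.Where(t => t.Name.Contains(a) && t.Description.Contains(b));

// For OR, the conditions have to live in a single predicate:
var ored = things.Where(t => t.Name.Contains(a) || t.Description.Contains(b));
```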
|
# ¿ Jun 20, 2016 00:07 |
|
We borrowed this guy's code https://github.com/mrahhal/Migrator.EF6 for a command line utility we provide alongside the packages using the database. We added a couple extra bits and pieces that he didn't include out of the box, like the ability to list pending migrations as well as applied migrations, but it's otherwise pretty complete. e: we also provide some PowerShell DSC scripts that use it, but they're optional and the guys doing the installs have the ability to get update scripts out of it. This is necessary because some of our customer sites (hospitals) have the world's most anal DBAs and we're lucky if we get execute on our own stored procedures.
|
# ¿ Aug 3, 2016 23:28 |
|
RICHUNCLEPENNYBAGS posted:This thing is so hyped up and then I finally read it and was disappointed. It's well presented and all, but it's mostly things that will have occurred to you if you've been working in C# long enough. Or at least that's how I remember it. This should be true of any language that isn't horribly convoluted and esoteric, but that doesn't mean the book is a bad resource for people who haven't spent forever writing C#. You just aren't its target audience.
|
# ¿ Aug 24, 2016 05:38 |
|
raminasi posted:Is an indexer that creates a new object every time it's called as much as an API horror as I intuitively feel like it is? I was expecting Object.ReferenceEquals(customCollection[0], customCollection[0]) to always return true, but I just got burned by something because it actually always returns false. If customCollection is a collection of structs I think this makes sense. Object.ReferenceEquals doesn't have an overload for value types which means if you're passing in two structs like this they'll be boxed separately, which means you now have two references to different boxes containing the same value which means they aren't actually the same reference. If it's a collection of objects then what the gently caress.
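A tiny illustration of the boxing point, which is easy to verify:

```csharp
int x = 5;
// Each argument is boxed separately, so the two references point at two
// different boxes even though the value is identical.
Console.WriteLine(object.ReferenceEquals(x, x)); // False

string s = "hi";
// Reference types pass the same reference straight through.
Console.WriteLine(object.ReferenceEquals(s, s)); // True
```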
|
# ¿ Sep 18, 2016 01:14 |
|
raminasi posted:Nope, it's a reference type. So the indexer of a collection is counter-intuitively generating new instances of a class that wants timely cleanup, and even that part is an afterthought in the documentation. Part of me doesn't want anything to do with whatever it is you're using and part of me is morbidly curious about what it is. Night Shade fucked around with this message at 07:22 on Sep 18, 2016 |
# ¿ Sep 18, 2016 07:20 |
|
dougdrums posted:I have a question about some code in the MS references; I was converting ConcurrentQueue to C as an exercise, and I ran into this bit: https://msdn.microsoft.com/en-us/library/ms228973(v=vs.110).aspx posted:The CLR delays thread aborts for code that is executing in a CER. This stuff is also true of finalisers. e: So when the comment is talking about "prevent anything from happening between them", I read that as being about guarding against a thread being forcefully terminated by managed code or the runtime during TryAppend and corrupting state. Night Shade fucked around with this message at 05:00 on Oct 1, 2016 |
# ¿ Oct 1, 2016 04:57 |
|
Gul Banana posted:i didn't think finally blocks were automatically CERs, though. you need to inherit from criticalhandle or something? and use attributes also, i think Yeah I think you might be right actually, and ThreadAbortExceptions get delayed until after finally blocks and finalisers by the runtime anyway. I haven't spent a lot of time in this area of the framework.
|
# ¿ Oct 2, 2016 05:47 |
|
Gul Banana posted:i don't want to make assumptions, but it's possible that part of ConcurrentDictionary.cs is just a cargo cult technique.. Heh. Maybe. There is some stuff about it on the documentation for Thread.Abort https://msdn.microsoft.com/en-us/library/5b50fdsz(v=vs.110).aspx but I clearly made the leap to constrained execution based on some badly remembered stuff I read a while ago
|
# ¿ Oct 3, 2016 00:29 |
|
Bognar posted:As I understand, it's not expected for this to be released in C# 7. I'm not surprised by this, it's a pretty huge change, but good god it can't come soon enough.
|
# ¿ Oct 18, 2016 00:49 |
|
Baby Proof posted:Well, for one, Fiddler is pretty useful, and their decompiler seems nifty. Telerik bought Fiddler as a fully functional product and have basically just been maintaining it ever since. And I think their decompiler is based on a fork of .NET Reflector before RedGate bought it out and started charging for it.
|
# ¿ Nov 8, 2016 23:02 |
|
bobua posted:(entity framework 6) EF should be handling what's new vs updated without you needing to lift a finger in SaveChanges(), but context.ChangeTracker.Entries<Thing>() should have what you need. Having said that I'm not sure the change tracker works 100% with concrete List instances. You might want to swap to IList or ICollection so it can create change tracking collections instead.
|
# ¿ Nov 28, 2016 22:55 |
|
bobua posted:If I try to context.AddRange(myList); I'll get an exception for trying to add existing items(matching primary keys) when I savechanges. How would the context have any reference to the new items I've added to do that change tracking? Oh sorry I thought the list was part of the model. You should be able to iterate over the list and set context.Entry(thing).State to Added if it's Detached. Everything that the context already knows about should either be Unchanged or Modified.
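The iterate-and-set-state suggestion sketches out like this, with `myList`/`Thing` taken from the question as placeholder names:

```csharp
// EF6 sketch: mark items the context doesn't know about as Added and let
// the change tracker handle everything it's already tracking.
foreach (var thing in myList)
{
    if (context.Entry(thing).State == EntityState.Detached)
    {
        context.Entry(thing).State = EntityState.Added;
    }
    // Tracked items stay Unchanged or Modified; SaveChanges sorts them out.
}
context.SaveChanges();
```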
|
# ¿ Nov 29, 2016 04:47 |
|
The Wizard of Poz posted:I can smell an X/Y problem here. This is definitely something that's usually trivial to the point of not really being actively thought about, EF just handles it. Which means something must be wrong with your larger process. Can you step back from the EF layer and explain what you'd like to achieve overall? Sounds like the UI is bound to a plain list of Things fetched from the context, and isn't adding new Things to the context when the user creates one. If the UI layer has direct access to the context it can add new Things to both the relevant DbSet and the UI-bound list when the user creates one, and then the change tracker will just do its thing in the background for you.
|
# ¿ Nov 29, 2016 23:02 |
|
Three: you're writing an interop struct. Though those should probably be internal and wrapped with a more idiomatic API.
|
# ¿ Dec 3, 2016 00:30 |
|
Mr Shiny Pants posted:You would think that they would manage this for you, because that is the hard part and they know all the ins and outs of Azure. code:
|
# ¿ Jan 18, 2017 07:59 |
|
chippy posted:edit: Oh hang on, are we asking if the ContrivedExample sproc is idempotent, or the CREATE PROCEDURE statement? The EXEC side, not the CREATE side. CREATE isn't I think technically idempotent because it will fail if it is executed a second time but it is safe to automatically retry. I was (badly) alluding to the fact that it's trivial for devs to create code that is guaranteed unsafe to be automatically retried, and if Microsoft puts in some sort of infrastructure for doing so those same devs will blindly turn it on and then blame Microsoft when ThingValue winds up at 2734 instead of 5. Night Shade fucked around with this message at 22:48 on Jan 18, 2017 |
# ¿ Jan 18, 2017 22:45 |
|
ljw1004 posted:I'm not very good at testing. Could you spell out in a bit more detail how "more testable and replaceable" would apply to HttpClient? You can then inject an HttpClient using a custom HttpMessageHandler that returns canned responses/failures/timeouts instead of actually making web requests.
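A minimal version of that canned-response handler looks like this; `StubHandler` is a made-up name:

```csharp
// Test stub: an HttpMessageHandler that returns a fixed response instead
// of touching the network.
public class StubHandler : HttpMessageHandler
{
    private readonly HttpResponseMessage _response;

    public StubHandler(HttpResponseMessage response)
    {
        _response = response;
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Task.FromResult(_response);
    }
}

// Usage: hand the stub to HttpClient's constructor and the system under
// test never makes a real web request.
var client = new HttpClient(new StubHandler(
    new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent("{\"ok\":true}")
    }));
```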
|
# ¿ Jan 23, 2017 22:39 |
|
Warbird posted:When that day does come, I'm going to have to sit down and think about the life choices I've made to get here. Down this path lies alcoholism and liver failure.
|
# ¿ Feb 1, 2017 02:52 |
|
The type argument that it cannot resolve is TEntity, because QaCommandHandler is still an open generic and it doesn't know what to put there. Based on https://simpleinjector.readthedocs.io/en/latest/advanced.html#registration-of-open-generic-types it looks like you need to use RegisterConditional when registering QaCommandHandler as an open generic. Disclaimer: I have never used SimpleInjector, I got that doc by googling for "simple injector open generic". Tickling Google the right way is a skill unto itself. e: you might also want to read through the bit on mixing collections of open generic and non-generic components, it seems relevant also Night Shade fucked around with this message at 05:41 on Mar 8, 2017 |
# ¿ Mar 8, 2017 05:39 |
|
The Wizard of Poz posted:Yeah sorry I should have mentioned that I've read the documentation over and over, but I think I just keep getting stuck on the specific syntactic kung-fu that I need to use here. In the case of RegisterConditional, that is useful when you want to register a catch-all after registering the main implementations which is relevant here to an extent but the problem is the actual registration itself is failing. Nah, I don't think so - it reads like when you give it an open generic and an assembly, it scans the assembly for concrete implementations of that generic, not for type arguments that might satisfy the generic. On the assumption that you're doing something with TEntity in QaCommandHandler that needs typeof(TEntity) and not just typeof(QaModelBase), I threw something together that appears to do what you're after. The trick was that all of the Qa stuff - IQaCommand, QaCommandHandler and QaDecorator - needed to be generic over TEntity as well, then SimpleInjector was able to tie everything together. http://pastebin.com/FieZ7AR8 The first call to GetInstance() returns a TestHandler, the second returns a QaDecorator<TestQaCommand, QaModel>. Dump is a LinqPad extension, it's basically a deep Console.WriteLine() Obvious downside to this approach: all of your IQaCommands depend on their concrete QaModelBase implementation.
|
# ¿ Mar 9, 2017 01:34 |
|
|
fleshweasel posted:Typewriter is what you want. Oh that's neat. We're using Swashbuckle for Swagger generation and sw2dts to import that into the frontend, but our frontends are react/typescript and built with webpack, not VS.
|
# ¿ Mar 9, 2017 22:55 |