Night Shade
Jan 13, 2013

Old School

Scaramouche posted:

Now I'm getting OutOfMemory errors in a different place, when submitting my XML here:
code:
Dim buffer As Byte() = File.ReadAllBytes(xml) '<-- errors out here
Dim ms As New MemoryStream(buffer)

request.FeedContent = ms

request.ContentMD5 = MarketplaceWebServiceClient.CalculateContentMD5(request.FeedContent)
request.FeedContent.Position = 0

Dim response As SubmitFeedResponse = service.SubmitFeed(request)
This makes sense, since it is loading the entire file into memory. The MarketplaceWebServiceClient is provided by Amazon so I don't want to have to mess with it too too much. I see how you can instantiate the file.OpenRead, but all the examples I've seen chunk the read into bytes (usually 1024 or 2048) and apply an encoding, and then loop from there. I'm not sure how I could apply that to what the Amazon service is expecting, since the way it's constructed above it seems to be expecting the content all of a piece. Unless that FeedContent.Position var means more than I think it does...

Streams give you a way to read and write bytes from/to some other location. That other location can be (but is not limited to) a file, memory, or a network socket. The Position property on a stream represents the position in the underlying data at which the next read or write will happen, and it's automatically advanced when you read from or write to the stream.

In order to actually calculate the MD5, MarketplaceWebServiceClient.CalculateContentMD5 needs to read all of the data out of the stream. It's not storing all of that data in memory; it's just doing math on small chunks of it at a time. But in doing so it advances the stream's Position to the end of the underlying data, which is why you need to reset Position to 0; otherwise there would be nothing left for the web service client to send.
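You can see the same behaviour with nothing but a MemoryStream and the framework's own MD5 class (a minimal C# sketch, nothing Amazon-specific):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

class PositionDemo
{
    static void Main()
    {
        var ms = new MemoryStream(new byte[] { 1, 2, 3 });

        // ComputeHash reads the stream to the end, advancing Position as it goes.
        using (var md5 = MD5.Create())
            md5.ComputeHash(ms);

        Console.WriteLine(ms.Position); // now sitting at the end of the data (3)

        ms.Position = 0; // rewind so the next consumer sees all of the bytes again
    }
}
```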

So if instead you were to do:
code:
Dim response As SubmitFeedResponse
Using fs As Stream = File.OpenRead(xml)

    request.FeedContent = fs

    request.ContentMD5 = MarketplaceWebServiceClient.CalculateContentMD5(request.FeedContent)
    request.FeedContent.Position = 0

    response = service.SubmitFeed(request)

End Using
both the MD5 calculation and the web service client should read the data directly out of the file on disk.

I suspect the examples you're referring to describe the manual process of converting stream data to and from text. Almost nobody does that by hand because StreamReader and StreamWriter exist, and you don't have to do it at all here because you're handing all of the work off to the Amazon library.

e: vb may be bogus, I'm a C# guy
e2: The Using block is important so that the file gets closed when the Amazon lib is done with it, even if something else blows up along the way
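e3: Since I'm a C# guy, here's the same idea in C# (a sketch against the same Amazon types as above, so it obviously won't compile without their library; treat it as illustrative):

```csharp
SubmitFeedResponse response;
using (Stream fs = File.OpenRead(xml))
{
    request.FeedContent = fs;

    // CalculateContentMD5 reads the whole stream, so rewind it afterwards.
    request.ContentMD5 = MarketplaceWebServiceClient.CalculateContentMD5(request.FeedContent);
    request.FeedContent.Position = 0;

    response = service.SubmitFeed(request);
} // the file is closed here even if SubmitFeed throws
```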


Night Shade
Jan 13, 2013

Old School

Scaramouche posted:

The interesting thing is I had stumbled upon a similar solution earlier, with the only difference being mine didn't use the Using... block, but the error is identical, down to ScatterGatherBuffers. I'm not sure what's going on here, other than the stream is obviously being treated as the entire thing instead of streaming. This is an example where someone chunks something too big for webclient, but it looks so specific and so much goes on in the While... loop I can't see how I'd get it to work with the Amazon SubmitFeed(request) model:
http://blogs.msdn.com/b/johan/archive/2006/11/15/are-you-getting-outofmemoryexceptions-when-uploading-large-files.aspx

I found a copy of MarketplaceWebServiceClient posted on GitHub somewhere and I'm pretty sure the link you found is the problem: MarketplaceWebServiceClient internally creates an HttpWebRequest, which buffers the request stream anyway. Unfortunately, if you haven't got access to the source, I don't think there's a way to fix it. :(

Night Shade
Jan 13, 2013

Old School

kingcrimbud posted:

I figured this out after creating a new solution. Be careful how you register DelegatingHandlers! Their singleton behavior means dependencies will not resolve per your expectations.

If you want per-request scoping to work right in a DelegatingHandler, you need to call request.GetDependencyScope() in SendAsync and use the resulting IDependencyScope as a service locator.

Been there, done that.
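For anyone who hits this later, the shape of the fix looks roughly like this (a sketch against Web API 2; MyService is a hypothetical per-request dependency):

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Dependencies;

public class MyHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Handlers are effectively singletons, so resolve per-request
        // dependencies from the request's own scope, not the constructor.
        IDependencyScope scope = request.GetDependencyScope();
        var service = (MyService)scope.GetService(typeof(MyService));

        // ... use service here ...

        return await base.SendAsync(request, cancellationToken);
    }
}
```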

Night Shade
Jan 13, 2013

Old School

epalm posted:

When I get together with friends, and we talk about work, a phrase like this is a solid conversation-ender :v:

just kidding I don't have any friends



But seriously, like three other people where I work have tripped over this same issue - it's fairly subtle behaviour if you start from "I need to wrap something around all my HTTP requests" and arrive at DelegatingHandler.

Night Shade
Jan 13, 2013

Old School

The Wizard of Poz posted:

I came across that but I can't find any documentation whatsoever to support the theory that it can be used to build an OAuth 2.0 server. This is the frustration I keep encountering, it seems like no one in the world is interested in building their own OAuth server, they're only interested in talking to existing OAuth servers like Google and Facebook etc.

https://github.com/DotNetOpenAuth/DotNetOpenAuth/wiki/Security-scenarios#developing-a-user-specific-authorization-server and down

Night Shade
Jan 13, 2013

Old School

The Wizard of Poz posted:

Not at all, I'm struggling to wrap my head around all this and the myriad of terms that get bandied about far too often appear to be interchangeable or their definitions become muddled. As I read more about OAuth 2.0 I'm starting to think maybe OpenID is what I need after all. It's all very convoluted.

I'm not 100% on this but my take on them both is that:
OAuth allows a user to authorise an application to access their stuff without needing to give that application their password.
OpenID allows users to authenticate themselves using a third party (Google etc.), and I think it can be combined with OAuth in a single call to also ask for authorisation to access that third party's data.

To do a typical federated identity sign in, you use OpenID to ask Google to authenticate your user, and then use OAuth to get authorisation from the user to read more info about them than just their email address.

Chances are someone is going to come in here and correct me on that.

Anyway this is starting to sound like an XY problem, so what is it that you're trying to do?

Night Shade
Jan 13, 2013

Old School

The Wizard of Poz posted:

I think you're right, let me take a step back. We have a customer database that includes, among other details, a username and password for each of our customers. We would like to use this database to provide authentication for potentially many websites. To achieve this, I'd like to make some kind of authentication service that can be called remotely to authenticate a user and they will then be logged in across all the related websites without having to re-enter their credentials for each one. I assume this would involve some kind of token system but I'm not sure how to plug it all together.

The Wizard of Poz posted:

As I read more about OAuth 2.0 I'm starting to think maybe OpenID is what I need after all. It's all very convoluted.

I'm pretty sure you're right, and that what you want to be doing is building an OpenID provider over your customer database and making the related websites relying parties. I couldn't easily find any documentation on doing that with DotNetOpenAuth but I did find a sample implementation at https://github.com/DotNetOpenAuth/DotNetOpenAuth.Samples/tree/master/src/OpenID/OpenIdProviderMvc - good luck!

Kekekela posted:

This is my first time hearing this term but after googling it, I think I will be using it extensively going forward. :downs:

:pseudo:

Night Shade
Jan 13, 2013

Old School

Boz0r posted:

Yeah, it's the Community Edition. It was top of the list in Dreamspark, and it said it was pretty much the same as Enterprise so I just picked that, while Enterprise was just underneath it. Trust no one.

On the bright side, given you have access to the Enterprise edition, you should be able to just install that over the top of your current install and upgrade it. At least, that's what I did to my copy of 2013 when I got bumped from Pro to Premium, and it worked OK; it didn't even break add-ins.

Night Shade
Jan 13, 2013

Old School
Hey Poz one of the other guys at work just pointed me at Thinktecture IdentityServer as an alternative to DotNetOpenAuth, might be worth looking into as well.

Night Shade
Jan 13, 2013

Old School

Boz0r posted:

I'm using this one, it seems the most elegant.

The GameTiles array is of type IGameTile. Can I enforce type safety on the function in some way, instead of casting TTile to IGameTile all the time?

code:
public void ResetBoard<TTile>() where TTile : IGameTile, new(){
will constrain TTile to be any type that implements IGameTile and has a parameterless constructor.
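Fleshed out a touch (GameTiles and the reset logic are assumptions about your code):

```csharp
public void ResetBoard<TTile>() where TTile : IGameTile, new()
{
    for (int i = 0; i < GameTiles.Length; i++)
    {
        // TTile is statically known to implement IGameTile,
        // so no cast is needed when assigning into the array.
        GameTiles[i] = new TTile();
    }
}
```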

Night Shade
Jan 13, 2013

Old School

Dollas posted:

New to MVC and activeX, trying to use code from a .NET activeX demo for a signature pad (SigPlus ActiveX: http://www.topazsystems.com/dotnet.html).

Question:
sigObj.NumberOfTabletPoints() always ends up 0. If I run the standalone activeX .net demo, this is not the case. Do I have to handle that object differently in an MVC scenario?

This is well outside my usual area so I'm really just making an educated guess here but ASP.NET apparently defaults to multi-threaded apartments, which is my gut feeling on the source of your issue - I'm pretty sure ActiveX controls need single threaded apartments. The demo app probably has [STAThread] sitting over its main method - see if changing that to [MTAThread] causes similar issues.

Making MVC use a single-threaded apartment looks like a pain in the dick. The other thing you could try is registering the server component into COM+ as an out-of-process server, which will hurt performance compared to an in-process load but should at least let you enforce single-threadedness.

Night Shade
Jan 13, 2013

Old School

RICHUNCLEPENNYBAGS posted:

I'm also kind of talking out of my rear end but I'd think forcing it to run single-threaded would cause serious performance issues.

While we're talking out of our arses I think it sets up a new apartment for each request or does some apartment pooling magic or something.

The Wizard of Poz posted:

Having trouble wrapping my head around a problem, hoping I can explain it here for someone to help me:

Why do you need the expression objects? I'm not sure why you can't just declare GetAsKeyValuePairs on ModelBase as abstract and return a Dictionary<string,string> directly from the subclasses.
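i.e. something like this (ModelBase and the subclass's property names are assumptions on my part):

```csharp
using System.Collections.Generic;

public abstract class ModelBase
{
    // Each subclass just states its own pairs directly;
    // no expression trees required.
    public abstract Dictionary<string, string> GetAsKeyValuePairs();
}

public class PersonModel : ModelBase
{
    public string Name { get; set; }
    public string Email { get; set; }

    public override Dictionary<string, string> GetAsKeyValuePairs() =>
        new Dictionary<string, string>
        {
            ["Name"] = Name,
            ["Email"] = Email,
        };
}
```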

Night Shade
Jan 13, 2013

Old School

Munkeymon posted:

But there's no guarantee that it's monotonically increasing, which is what you need because a modern CPU can do more than 10 million things in a second. I don't see anything like that available without dipping into the Windows API. https://msdn.microsoft.com/en-us/library/ms724408(VS.85).aspx https://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx

System.Diagnostics.Stopwatch uses QueryPerformanceFrequency etc. if it's available, and has static properties for whether it's a high resolution timer and how many stopwatch ticks are in a second. On this PC it claims to be high resolution with ~3.3 million stopwatch ticks per second.

Even that isn't high enough resolution to keep the following loop running forever (it usually terminates after 2-3 iterations):
code:
var sw = new Stopwatch();
sw.Start();
HashSet<TimeSpan> hs = new HashSet<TimeSpan>();
while (hs.Add(sw.Elapsed)){}
Console.WriteLine(hs.Count);

Night Shade
Jan 13, 2013

Old School

Bob Morales posted:

Yes - at least I think so



I wasn't sure if I had to add the ODBC package as an add-on or something for VS.



Check your 32-bit ODBC sources. Visual Studio isn't 64-bit (yet).

Night Shade
Jan 13, 2013

Old School

xgalaxy posted:

project.json is dead. xproj effectively becomes the de facto standard, renamed to csproj and gaining some features from project.json mixed in.

I like project.json. I'm less disappointed in this if they're going to make csproj as easy to use as project.json is, but I'm still disappointed.

I get why though, there's been a hell of a lot of investment in msbuild and this change probably made some big spending enterprise customer Very Unhappy.

Night Shade
Jan 13, 2013

Old School

Gul Banana posted:

... to be able to edit the file without unloading the project. sounds alarmingly difficult to implement, though, given how VS works...

VS already handles the project file changing out from underneath it remarkably well, it just doesn't let you edit the file from within the IDE while it's loaded. It might not be that big of a deal.

I guess the biggest thing for me was how terse and readable project.json was compared to the piles of xml that .proj currently is. As long as you have the right schema loaded editing a proj file isn't a huge deal, it just takes a lot more :effort: than project.json did.

Night Shade
Jan 13, 2013

Old School

Drastic Actions posted:

https://twitter.com/mjhutchinson/status/562501156375900160

This is Xamarin Studio, but there is progress on this front.

And I'm not mourning project.json. Not that it was bad, but compatibility issues with it and XS drove me a bit nutty. Now we don't have to worry about it! :woop:

Yeah, I'm not really mourning project.json itself per se, more that it made it dead easy to tell it "hey, when you install packages run npm install as well, and then after you build run webpack -p". Last time I tried to do anything with a .csproj in VS I don't even recall it giving me docs in tooltips, let alone autocompletion.

I'd be fine with that completion combined with some shorthand for "do this extra step before/after package restore / build / publish".

Night Shade
Jan 13, 2013

Old School

The Wizard of Poz posted:

I'm trying to implement an IAuthenticationFilter (the Web Api 2 flavour, NOT the MVC flavour) and I'm struggling with the order the code is executed. I would have expected the Authentication filter to be run before any controller-based stuff, so that I could set the appropriate principal and then load the relevant user data from my DbContext in some kind of base ApiController.

This is the flow I'm after:


AuthenticationFilter                    ==> BaseController                                     ==> Controller/Action
----------------------------------------==>----------------------------------------------------==>----------------------------------------------
Test Authorization header and set the   ==> Use the principal to find the full User record in  ==> Complete action as normal, has access to the
principal if all is well.               ==> database and assign it to protected property.      ==> user record as set in the BaseController



I'm not sure where to put the code in a BaseController in order to have it execute AFTER the authentication filter, but BEFORE the routed controller/action. Generally I find when I'm hitting a brick wall like this, it's because I'm trying to go the wrong way about something, so if any of the above sounds wrong let me know.

Honestly I'd just do something like:

code:
[MyAuthenticationFilter]
public class AuthenticatedUserController : ApiController
{
  private readonly MyDbContext _context;
  private MyUser _user;
  public AuthenticatedUserController(MyDbContext context)
  {
    _context = context;
  }
  protected MyDbContext Context => _context;
  // 'new' because this hides ApiController's IPrincipal User property
  protected new MyUser User => _user ?? (_user = _context.MyUsers.First(/* search goes here */));
}

public class SpecialisedController : AuthenticatedUserController
{
  public SpecialisedController(MyDbContext context) : base(context){}

  [HttpGet]
  public IEnumerable<MyResult> GetMyResult()
  {
    return Context.MyResults.Where(myResult => myResult.User.Id == User.Id);
  }
}

Night Shade
Jan 13, 2013

Old School

Space Whale posted:

I'm loving around with EF and find myself wanting to do a search by multiple columns per row.

Is this ever a good idea? On a BIG table I could see this being troublesome, but for my piddly app, it's not.

Assuming this isn't a terrible idea, what's a good way to do it? Is there anything particularly wrong with this approach?

code:
using (var db = new RecipeContext())
{
    var query = from q in db.Recipes
                where q.Name == searchRecipe.Name ||
                      q.Hops == searchRecipe.Hops ||
                      q.Grain == searchRecipe.Grain ||
                      q.Yeast == searchRecipe.Yeast
                select q;
    //..... etc
}
How would I go about optimizing away empty properties in the searchRecipe object and not bothering to search on those columns?

code:
IQueryable<Recipe> query = db.Recipes;
if( searchRecipe.Name != null ) query = query.Where(q => q.Name == searchRecipe.Name);
if( searchRecipe.Hops != null ) query = query.Where(q => q.Hops == searchRecipe.Hops);
...
and so forth

Also, since you appear to be matching on any column and not all provided columns, you could instead build individual searches against each column and return the union of the results:
code:
IQueryable<Recipe> query = null;
if( searchRecipe.Name != null )
{
  var subquery = db.Recipes.Where(q => q.Name == searchRecipe.Name);
  query = query?.Union(subquery) ?? subquery;
}
if( searchRecipe.Hops != null )
{
  var subquery = db.Recipes.Where(q => q.Hops == searchRecipe.Hops);
  query = query?.Union(subquery) ?? subquery;
}
...
query = query ?? db.Recipes;
This allows each search column to be indexed individually. I'm not sure the query optimiser is smart enough to use individual column indexes when you OR where clauses together.

Night Shade
Jan 13, 2013

Old School

Bognar posted:

This will not work. Multiple chained .Where clauses work as AND, not OR.

:cripes: I knew that. I even wrote code that took advantage of it like a week ago. Don't code straight into the post box while tired.

Night Shade
Jan 13, 2013

Old School
We borrowed this guy's code https://github.com/mrahhal/Migrator.EF6 for a command line utility we provide alongside the packages using the database. We added a couple extra bits and pieces that he didn't include out of the box, like the ability to list pending migrations as well as applied migrations, but it's otherwise pretty complete.

e: we also provide some PowerShell DSC scripts that use it, but they're optional and the guys doing the installs have the ability to get update scripts out of it. This is necessary because some of our customer sites (hospitals) have the world's most anal DBAs and we're lucky if we get execute on our own stored procedures.

Night Shade
Jan 13, 2013

Old School

RICHUNCLEPENNYBAGS posted:

This thing is so hyped up and then I finally read it and was disappointed. It's well presented and all, but it's mostly things that will have occurred to you if you've been working in C# long enough. Or at least that's how I remember it.

This should be true of any language that isn't horribly convoluted and esoteric, but that doesn't mean the book is a bad resource for people who haven't spent forever writing C#. You just aren't its target audience.

Night Shade
Jan 13, 2013

Old School

raminasi posted:

Is an indexer that creates a new object every time it's called as much as an API horror as I intuitively feel like it is? I was expecting Object.ReferenceEquals(customCollection[0], customCollection[0]) to always return true, but I just got burned by something because it actually always returns false.

If customCollection is a collection of structs I think this makes sense. Object.ReferenceEquals doesn't have an overload for value types, so if you're passing in two structs like this they'll be boxed separately, and you end up with references to two different boxes containing the same value - which are never the same reference.

If it's a collection of objects then what the gently caress.
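A standalone sketch of the boxing behaviour, no custom collection required (Point is made up):

```csharp
using System;

struct Point
{
    public int X;
}

class BoxingDemo
{
    static void Main()
    {
        var p = new Point { X = 1 };

        // Each time a struct is passed as object it gets boxed into
        // a fresh heap object, so these are two distinct references.
        Console.WriteLine(object.ReferenceEquals(p, p)); // False

        // Box once and compare the same box, and they're reference-equal.
        object boxed = p;
        Console.WriteLine(object.ReferenceEquals(boxed, boxed)); // True
    }
}
```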

Night Shade
Jan 13, 2013

Old School

raminasi posted:

Nope, it's a reference type.

In verifying this by checking the API documentation, I also learned that they're IDisposable, despite not once being used as such in a single shred of example code :psyduck:

:stonk:

So the indexer of a collection is counter-intuitively generating new instances of a class that wants timely cleanup, and even that part is an afterthought in the documentation.

Part of me doesn't want anything to do with whatever it is you're using and part of me is morbidly curious about what it is.

Night Shade fucked around with this message at 07:22 on Sep 18, 2016

Night Shade
Jan 13, 2013

Old School

dougdrums posted:

I have a question about some code in the MS references; I was converting ConcurrentQueue to C as an exercise, and I ran into this bit:
code:
//We need do Interlocked.Increment and value/state update in a finally block to ensure that they run
//without interuption. This is to prevent anything from happening between them, and another dequeue
//thread maybe spinning forever to wait for m_state[] to be true;
try { }
finally {  /* has stuff */ }
http://referencesource.microsoft.com/#mscorlib/system/Collections/Concurrent/ConcurrentQueue.cs,789

I'm confused about how this works. I understand that other threads would deadlock if the piece in the finally block doesn't get executed, but how does using a finally block prevent that when used in this way?

https://msdn.microsoft.com/en-us/library/ms228973(v=vs.110).aspx posted:

The CLR delays thread aborts for code that is executing in a CER.
I'm pretty sure the runtime will delay ThreadAbortExceptions triggered by other threads until the finally block completes - normally they can appear between any two CPU instructions. I would expect this to also cover ThreadAbortExceptions raised at runtime shutdown, though then the usual "you've got a few seconds before I exit anyway" rule applies.

This stuff is also true of finalisers.

e: So when the comment is talking about "prevent anything from happening between them", I read that as being about guarding against a thread being forcefully terminated by managed code or the runtime during TryAppend and corrupting state.

Night Shade fucked around with this message at 05:00 on Oct 1, 2016

Night Shade
Jan 13, 2013

Old School

Gul Banana posted:

i didn't think finally blocks were automatically CERs, though. you need to inherit from criticalhandle or something? and use attributes also, i think

Yeah I think you might be right actually, and ThreadAbortExceptions get delayed until after finally blocks and finalisers by the runtime anyway. I haven't spent a lot of time in this area of the framework.

Night Shade
Jan 13, 2013

Old School

Gul Banana posted:

i don't want to make assumptions, but it's possible that part of ConcurrentDictionary.cs is just a cargo cult technique.. :laugh:

Heh. Maybe. There is some stuff about it on the documentation for Thread.Abort https://msdn.microsoft.com/en-us/library/5b50fdsz(v=vs.110).aspx but I clearly made the leap to constrained execution based on some badly remembered stuff I read a while ago :pseudo:

Night Shade
Jan 13, 2013

Old School

Bognar posted:

As I understand, it's not expected for this to be released in C# 7.

I'm not surprised by this, it's a pretty huge change, but good god it can't come soon enough.

Night Shade
Jan 13, 2013

Old School

Baby Proof posted:

Well, for one, Fiddler is pretty useful, and their decompiler seems nifty.

Telerik bought Fiddler as a fully functional product and have basically just been maintaining it ever since. And I think their decompiler is based on a fork of .NET Reflector before RedGate bought it out and started charging for it.

Night Shade
Jan 13, 2013

Old School

bobua posted:

(entity framework 6)

List<Thing> things; //Has some existing Thing's from the database, plus some new Thing's.

Later, I want to save changes, plus add my new Thing's. Right now, I'm just foreaching through the list and checking if the primary key == 0 to see if it's a new 'thing.' Not a problem if I make sure my primary key starts at 1, but it feels wrong... is there a better way to see if a thing is new vs existing without actually searching the database for it? SO has a lot of returns for 'update if exists else add' but they are all old and don't seem that great.

Unrelated question. Is there a contact management database bible? When I'm designing tables, I always find myself creating the different columns for titles and suffixes and writing code to format names based on whether a company name exists along with a first\last etc. Surely someone's written up a 'this is the right way' standard somewhere?

EF should be handling what's new vs updated without you needing to lift a finger in SaveChanges(), but context.ChangeTracker.Entries<Thing>() should have what you need.

Having said that I'm not sure the change tracker works 100% with concrete List instances. You might want to swap to IList or ICollection so it can create change tracking collections instead.

Night Shade
Jan 13, 2013

Old School

bobua posted:

If I try to context.AddRange(myList); I'll get an exception for trying to add existing items(matching primary keys) when I savechanges. How would the context have any reference to the new items I've added to do that change tracking?

Oh sorry I thought the list was part of the model. You should be able to iterate over the list and set context.Entry(thing).State to Added if it's Detached. Everything that the context already knows about should either be Unchanged or Modified.
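Roughly this (EF6; `things` being your mixed list of new and existing entities):

```csharp
using System.Data.Entity; // EF6

// ...

foreach (var thing in things)
{
    var entry = context.Entry(thing);

    // Anything the context has never seen is Detached;
    // setting its State to Added attaches it as a new row.
    if (entry.State == EntityState.Detached)
        entry.State = EntityState.Added;

    // Entities loaded from this context are already Unchanged or
    // Modified, so SaveChanges handles them without any help.
}

context.SaveChanges();
```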

Night Shade
Jan 13, 2013

Old School

The Wizard of Poz posted:

I can smell an X/Y problem here. This is definitely something that's usually trivial to the point of not really being actively thought about, EF just handles it. Which means something must be wrong with your larger process. Can you step back from the EF layer and explain what you'd like to achieve overall?

Sounds like the UI is bound to a plain list of Things fetched from the context, and isn't adding new Things to the context when the user creates one. If the UI layer has direct access to the context it can add new Things to both the relevant DbSet and the UI-bound list when the user creates one, and then the change tracker will just do its thing in the background for you.

Night Shade
Jan 13, 2013

Old School
Three: you're writing an interop struct. Though those should probably be internal and wrapped with a more idiomatic API.
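For example, something like this (a hypothetical native rectangle; the field names and layout are made up):

```csharp
using System.Runtime.InteropServices;

// Field layout mirrors the native struct, so it's a struct by necessity.
[StructLayout(LayoutKind.Sequential)]
internal struct NativeRect
{
    public int Left;
    public int Top;
    public int Right;
    public int Bottom;
}

// The idiomatic public surface wraps it rather than exposing it directly.
public sealed class Rect
{
    internal NativeRect Inner;

    public int Width => Inner.Right - Inner.Left;
    public int Height => Inner.Bottom - Inner.Top;
}
```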

Night Shade
Jan 13, 2013

Old School

Mr Shiny Pants posted:

You would think that they would manage this for you, because that is the hard part and they know all the ins and outs of Azure.

code:
CREATE PROCEDURE dbo.ContrivedExample
  @ThingID int
AS
UPDATE dbo.Thing SET ThingValue = ThingValue + 1 WHERE ThingID = @ThingID
Can this be safely automatically retried if we don't know if it succeeded or failed?

Night Shade
Jan 13, 2013

Old School

chippy posted:

edit: Oh hang on, are we asking if the ContrivedExample sproc is idempotent, or the CREATE PROCEDURE statement?

The EXEC side, not the CREATE side. CREATE technically isn't idempotent, I think, because it will fail if executed a second time, but it is safe to automatically retry.

I was (badly) alluding to the fact that it's trivial for devs to write code that is guaranteed unsafe to retry automatically, and if Microsoft puts in some sort of infrastructure for doing so, those same devs will blindly turn it on and then blame Microsoft when ThingValue winds up at 2734 instead of 5.

Night Shade fucked around with this message at 22:48 on Jan 18, 2017

Night Shade
Jan 13, 2013

Old School

ljw1004 posted:

I'm not very good at testing. Could you spell out in a bit more detail how "more testable and replaceable" would apply to HttpClient?

You can then inject an HttpClient using a custom HttpMessageHandler that returns canned responses/failures/timeouts instead of actually making web requests.
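A minimal sketch of what that looks like (the stub handler and its canned body are made up):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Returns a canned response for every request - no network involved.
class CannedHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{\"ok\":true}")
        };
        return Task.FromResult(response);
    }
}

// In a test, hand the code under test this client instead of a real one:
// var client = new HttpClient(new CannedHandler());
```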

Night Shade
Jan 13, 2013

Old School

Warbird posted:

When that day does come, I'm going to have to sit down and think about the life choices I've made to get here.

Down this path lies alcoholism and liver failure.

Night Shade
Jan 13, 2013

Old School
The type argument that it cannot resolve is TEntity, because QaCommandHandler is still an open generic and it doesn't know what to put there.
Based on https://simpleinjector.readthedocs.io/en/latest/advanced.html#registration-of-open-generic-types it looks like you need to use RegisterConditional when registering QaCommandHandler as an open generic.

Disclaimer: I have never used SimpleInjector, I got that doc by googling for "simple injector open generic". Tickling Google the right way is a skill unto itself.

e: you might also want to read through the bit on mixing collections of open generic and non-generic components; it seems relevant too

Night Shade fucked around with this message at 05:41 on Mar 8, 2017

Night Shade
Jan 13, 2013

Old School

The Wizard of Poz posted:

Yeah sorry I should have mentioned that I've read the documentation over and over, but I think I just keep getting stuck on the specific syntactic kung-fu that I need to use here. In the case of RegisterConditional, that is useful when you want to register a catch-all after registering the main implementations which is relevant here to an extent but the problem is the actual registration itself is failing.

Something has just occurred to me, could it be because TEntity is type-restricted to QaModelBase, and all classes deriving from QaModelBase are in a separate assembly to QaCommandHandler? (in other words, it's looking for types that it can register QaCommandHandler with and finding gently caress all?)

Nah, I don't think so; it reads like when you give it an open generic and an assembly it scans the assembly for concrete implementations of that generic, not for type arguments that might satisfy the generic.

On the assumption that you're doing something with TEntity in QaCommandHandler that needs typeof(TEntity) and not just typeof(QaModelBase), I threw something together that appears to do what you're after. The trick was that all of the Qa stuff - IQaCommand, QaCommandHandler and QaDecorator - needed to be generic over TEntity as well, then SimpleInjector was able to tie everything together.

http://pastebin.com/FieZ7AR8

The first call to GetInstance() returns a TestHandler, the second returns a QaDecorator<TestQaCommand, QaModel>. Dump is a LINQPad extension; it's basically a deep Console.WriteLine().

Obvious downside to this approach: all of your IQaCommands depend on their concrete QaModelBase implementation.


Night Shade
Jan 13, 2013

Old School

Oh that's neat. We're using Swashbuckle for Swagger generation and sw2dts to import that into the frontend, but our frontends are react/typescript and built with webpack, not VS.
