Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Asynchrony is confusing me a bit, was hoping I could get some help.

My program has a POD class RawRecord. My viewmodel has private List<RawRecord> RawRecords that is populated by a member function private void PopulateRawRecords(), natch, which is called when the viewmodel's FileName property changes. PopulateRawRecords() takes about a minute on the test file I'm using which, as is, blocks the UI. I figured, okay, make it async and call it normally, I don't need to wait on the resulting List<RawRecord> anyway (the UI will just show me the ones that are loaded already along with a progress bar), and this is a good way to learn about async/await. Problem is, PopulateRawRecords() doesn't have any particularly useful points for me to put an await in, so it just runs synchronously anyway.

Question is, what's the idiomatic way to handle this sort of background task when I can't await on any single asynchronous method? Rewrite it so I can?

(Also, if it makes a difference, I'd like to be able to cancel the thing mid-load, but if I understand Task and CancellationTokenSource correctly that doesn't look too hard anyway.)

(Also also, sorry again for no source code. Separate 'net computer again. :()

Ciaphas fucked around with this message at 17:54 on Apr 23, 2015


Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Ciaphas posted:

Question is, what's the idiomatic way to handle this sort of background task when I can't await on any single asynchronous method? Rewrite it so I can?

More or less, yes, assuming that the time spent is in IO-heavy work and not CPU heavy work. If it's CPU heavy, then going async isn't going to do anything terribly special for you. You said you're working with a file, so there's an obvious place to use async/await, but I assume you're doing more stuff than just that.

If it's CPU heavy, you can just use await Task.Run(() => PopulateRawRecords()) to get it off the UI thread. Adding a CancellationToken to that is pretty trivial - pass it as a parameter and check if it's cancelled in your processing loop.
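Roughly, the two pieces fit together like this. This is a sketch only, with made-up member names matching the post rather than real code:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class MyViewModel // hypothetical names to match the post
{
    private CancellationTokenSource _cts;

    public async void OnFileNameChanged()
    {
        _cts = new CancellationTokenSource();
        try
        {
            // Task.Run pushes the CPU-bound loop onto a thread-pool thread;
            // awaiting it keeps the UI thread free to repaint and handle input.
            await Task.Run(() => PopulateRawRecords(_cts.Token), _cts.Token);
        }
        catch (OperationCanceledException)
        {
            // user cancelled mid-load; whatever was already loaded stays put
        }
    }

    public void CancelLoad()
    {
        if (_cts != null) _cts.Cancel();
    }

    private void PopulateRawRecords(CancellationToken token)
    {
        while (FindNextRecord())
        {
            token.ThrowIfCancellationRequested(); // per-record cancellation check
            // ... read one record, add it to RawRecords ...
        }
    }

    private bool FindNextRecord()
    {
        return false; // placeholder for the real sync-byte seek
    }
}
```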

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Bognar posted:

More or less, yes, assuming that the time spent is in IO-heavy work and not CPU heavy work. If it's CPU heavy, then going async isn't going to do anything terribly special for you. You said you're working with a file, so there's an obvious place to use async/await, but I assume you're doing more stuff than just that.

If it's CPU heavy, you can just use await Task.Run(() => PopulateRawRecords()) to get it off the UI thread. Adding a CancellationToken to that is pretty trivial - pass it as a parameter and check if it's cancelled in your processing loop.

Ah, Task.Run looks like what I was looking for; thanks loads for that.

As for whether it's IO or CPU bound, does VS2012 have any useful profiling to work that out or is it kind of guess and check? I can't copy the code here, but here's the overall flow:

C# code:
using (FileStream s = File.OpenRead(FileName))
{
    BinaryReader r = new BinaryReader(s);
    
    Func<bool> findNextRecord = delegate { /* seek to just after sync bytes using r, returning true if found or false for EOF */ }; // necessary due to filler crap
    while (findNextRecord())
    {
        // todo: insert cancellation check here

        UInt16 messageId = r.ReadUInt16();
        byte category = r.ReadByte();
        Int32 param1 = r.ReadInt32BE(); // big endian extension method--yep, they mix LE and BE, just loving shoot me
        // etc etc ad nauseum, probably about 512B worth of crap

        RawRecords.Add(new RawRecord( /* constructor params I can't be assed to write out */));
        RaisePropertyChanged("RawRecords"); // I guess adding to RawRecords doesn't count as changing it, on its own
    }
}
In a typical case, that while loop would run about 300k times.

Inverness
Feb 4, 2009

Fully configurable personal assistant.
Just at a glance, reading records from a file like that would be IO bound.

A small, irrelevant recommendation. For a using statement you can do something like:
C# code:
using (var r = new BinaryReader(File.OpenRead(FileName))) { ... }
Every well-written reader/writer class will dispose the underlying stream too.

Also, no, adding to a collection does not count as changing the value of its property, because the value of the property is still the collection. Whenever you want to listen for collection changes you bind to the collection itself when it implements INotifyCollectionChanged like ObservableCollection<T> does.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy
Ah, yeah, BinaryReader doesn't expose any async methods so you won't get any benefit out of making your method async. Stick with Task.Run. On a side note, you probably don't need to call RaisePropertyChanged inside your loop, you could just call it once on the outside. Depending on your UI framework, that may or may not have a performance hit.

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Inverness posted:

Just at a glance, reading records from a file like that would be IO bound.

A small, irrelevant recommendation. For a using statement you can do something like:
C# code:
using (var r = new BinaryReader(File.OpenRead(FileName))) { ... }
Every well-written reader/writer class will dispose the underlying stream too.
Thanks for this, but in the actual code I need to reference the FileStream object for position and absolute seeking once in a while.

Inverness posted:


Also, no, adding to a collection does not count as changing the value of its property, because the value of the property is still the collection. Whenever you want to listen for collection changes you bind to the collection itself when it implements INotifyCollectionChanged like ObservableCollection<T> does.

I had tried ObservableCollection<RawRecord> yesterday for that very reason, in fact, but I couldn't then figure out how to, say, bind to RawRecords.Count (or is it size? I forget). Thought it'd be something like
XML code:
<Label Name="recordCount" Text="{Binding RawRecords.Count}"/>
<!-- or is it Content, not Text? I forget -->
It seemed to work fine as soon as I made it a List<RawRecord> and called my ObservableBase.RaisePropertyChanged() on it. :shrug:

(edit)

Bognar posted:

Ah, yeah, BinaryReader doesn't expose any async methods so you won't get any benefit out of making your method async. Stick with Task.Run. On a side note, you probably don't need to call RaisePropertyChanged inside your loop, you could just call it once on the outside. Depending on your UI framework, that may or may not have a performance hit.

I call it inside the loop so that the user can view already-loaded records while the rest are loading. If it's a performance problem I can have it called every 100 loops or something, I suppose.

Ciaphas fucked around with this message at 19:30 on Apr 23, 2015

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


New question, just to make sure I've got the point. Is there any difference between binding to an ObservableCollection<T>, and binding to a List<T> that has RaisePropertyChanged() called for it on every add/delete?

(Coming from another direction, there's no reason to call RaisePropertyChanged() for an ObservableCollection<T>, right?)

Ciaphas fucked around with this message at 00:14 on Apr 24, 2015

Inverness
Feb 4, 2009

Fully configurable personal assistant.

Ciaphas posted:

I had tried ObservableCollection<RawRecord> yesterday for that very reason, in fact, but I couldn't then figure out how to, say, bind to RawRecords.Count (or is it size? I forget). Thought it'd be something like
XML code:
<Label Name="recordCount" Text="{Binding RawRecords.Count}"/>
<!-- or is it Content, not Text? I forget -->
It seemed to work fine as soon as I made it a List<RawRecord> and called my ObservableBase.RaisePropertyChanged() on it. :shrug:
If I wanted to bind to count I'd make a property that updated whenever the collection changed.

Ciaphas posted:

New question, just to make sure I've got the point. Is there any difference between binding to an ObservableCollection<T>, and binding to a List<T> that has RaisePropertyChanged() called for it on every add/delete?

(Coming from another direction, there's no reason to call RaisePropertyChanged() for an ObservableCollection<T>, right?)
Yes. They're two fundamentally different things. You need to understand the difference between INotifyPropertyChanged and INotifyCollectionChanged.

INotifyPropertyChanged is for notifying you when a single property of an object changes in a simple way: by being set to a new value. When you do RaisePropertyChanged() with the name of the collection, you're telling the system that you replaced your entire collection with a new collection by doing myObject.MyCollection = newCollection. This isn't accurate in this case. You don't have a big problem here because you're only binding to count, but if you were doing something like binding to an ItemsControl you would murder your performance since every change would require rebuilding all of the items in the control.

INotifyCollectionChanged is implemented in the collection itself and notifies you of when the collection changes, along with what kind of change occurred on the collection, what items were added and removed, and where items were added and removed from. It's much more fine-grained than the previous, and allows your UI to only do the work it needs to update the view instead of making it think you replaced the entire collection. ObservableCollection<T> is a collection that implements both INotifyCollectionChanged and INotifyPropertyChanged for the Count property. List<T> does neither. Binding to the Count property of the observable collection should work fine. You don't need to RaisePropertyChanged() for the collection, only add and remove items.

I find it useful to look at the source: http://referencesource.microsoft.com/#System/compmod/system/collections/objectmodel/observablecollection.cs
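To see both interfaces fire without any XAML involved, a quick console sketch:

```csharp
using System;
using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.ComponentModel;

class Program
{
    static void Main()
    {
        var records = new ObservableCollection<int>();

        // INotifyCollectionChanged tells you what changed and where
        ((INotifyCollectionChanged)records).CollectionChanged +=
            (s, e) => Console.WriteLine("Collection: " + e.Action);

        // INotifyPropertyChanged is implemented explicitly, hence the cast;
        // it fires for "Count" and "Item[]" on every add/remove
        ((INotifyPropertyChanged)records).PropertyChanged +=
            (s, e) => Console.WriteLine("Property: " + e.PropertyName);

        records.Add(42); // both events fire -- no manual RaisePropertyChanged needed
    }
}
```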

ninjeff
Jan 19, 2004

Inverness posted:

A small, irrelevant recommendation. For a using statement you can do something like:
C# code:
using (var r = new BinaryReader(File.OpenRead(FileName))) { ... }
Every well-written reader/writer class will dispose the underlying stream too.

This feature is actually the opposite of well-written, IMO; it prevents you from keeping the stream around for a while and disposing of it later (say you want to read all the data in the file with a StreamReader, and then append some more data with a StreamWriter). It also makes it hard to verify that you're not leaking anything - how many of these reader/writer classes actually document what they dispose of? If the caller is savvy enough to using the reader/writer, they'll know they need to using the stream too.

The BCL designers appear to agree - a StreamReader constructor overload with a leaveOpen parameter was added in .NET 4.5. My thinking is that either rigorous ownership semantics weren't worked out in 1.0, or using was new enough that it wasn't clear how well developers were going to use it - see the 'dispose pattern' mess for another example of an attempt at safeguarding IDisposable that probably caused more harm than good.

There is one situation where I think readers/writers should act the way you describe: when created through a convenience constructor that takes e.g. a file path, and creates the stream itself. In this case the caller obviously has no way to dispose of the stream, so the reader/writer should do it.
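(For reference, the .NET 4.5 overload in question takes a leaveOpen flag, so you can do something like this - the file name is made up:)

```csharp
using System;
using System.IO;
using System.Text;

class Demo
{
    static void Main()
    {
        using (var stream = File.OpenRead("data.txt")) // hypothetical file
        {
            using (var reader = new StreamReader(stream, Encoding.UTF8,
                detectEncodingFromByteOrderMarks: true, bufferSize: 1024,
                leaveOpen: true))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
            // stream is still open here -- seek back, wrap it in a writer, etc.
            stream.Seek(0, SeekOrigin.Begin);
        }
    }
}
```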

Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer
Yeah, I definitely agree with that sentiment. Closing streams they didn't open isn't a great pattern and I've hit it a couple of times.

Inverness
Feb 4, 2009

Fully configurable personal assistant.

ninjeff posted:

This feature is actually the opposite of well-written, IMO; it prevents you from keeping the stream around for a while and disposing of it later (say you want to read all the data in the file with a StreamReader, and then append some more data with a StreamWriter). It also makes it hard to verify that you're not leaking anything - how many of these reader/writer classes actually document what they dispose of? If the caller is savvy enough to using the reader/writer, they'll know they need to using the stream too.

The BCL designers appear to agree - a StreamReader constructor overload with a leaveOpen parameter was added in .NET 4.5. My thinking is that either rigorous ownership semantics weren't worked out in 1.0, or using was new enough that it wasn't clear how well developers were going to use it - see the 'dispose pattern' mess for another example of an attempt at safeguarding IDisposable that probably caused more harm than good.

There is one situation where I think readers/writers should act the way you describe: when created through a convenience constructor that takes e.g. a file path, and creates the stream itself. In this case the caller obviously has no way to dispose of the stream, so the reader/writer should do it.
I'll agree with this. A using statement makes disposing resources simple enough that it's better to be explicit and dispose the stream yourself than to rely on the class doing it for you.

My first exposure to this kind of pattern came from Java, which does not have using statements or similar unless something changed after I abandoned it. In that case I prefer a constructor option that specifies how to handle the underlying stream.

Geisladisk
Sep 15, 2007

I'm developing a Windows service. One class in this service references an external DLL.

When I run the unit tests for the service, which are in a separate project, the class runs fine; the DLL is loaded, the code executes as intended, and all is well. When I run the service itself and the code executes, I get a BadImageFormatException on the DLL. The service itself is 64 bit, and I've tried using both 32 bit and 64 bit DLLs; it seems to make no difference.

Both the unit test project and the service project are using .NET version 4, and compiled using 64 bit architecture.

I've googled around and found nothing related - can a BadImageFormatException be caused by anything other than a bit architecture mismatch? :confused:

EssOEss
Oct 23, 2006
128-bit approved
Are you sure it is trying to use the correct DLL? I have never seen that error for any other reason, and anything else seems unlikely.

Your wording of "compiled using 64 bit architecture" makes me think perhaps the bitness of .NET services is not entirely clear to you. It is essentially just an enum field in the binary, set at compile time. There is no other difference, and it does not matter whether you use 32-bit or 64-bit .NET to build your app.

When you run the service itself, do you see it listed as 32-bit or 64-bit in Task Manager? As a next step, use Process Monitor to find out what exactly is the DLL it tries to load and then verify that the bitness of that DLL matches.

Geisladisk
Sep 15, 2007

EssOEss posted:

Are you sure it is trying to use the correct DLL? I have never seen that error for any other reason and it seems unlikely to be the reason.

Yup. As I said, I've tried it with both a 32 bit and 64 bit version of the DLL, with the same results.

quote:

Your wording of "compiled using 64 bit architecture" makes me think perhaps the bitness of .NET services is not entirely clear to you. It is essentially just an enum field in the binary, set at compile time. There is no other difference, and it does not matter whether you use 32-bit or 64-bit .NET to build your app.

I dunno. Possibly. I just meant that I built the service as a 64-bit executable, I wasn't referring to the bitness of the .NET version I used.

quote:

When you run the service itself, do you see it listed as 32-bit or 64-bit in Task Manager? As a next step, use Process Monitor to find out what exactly is the DLL it tries to load and then verify that the bitness of that DLL matches.

It's listed as 64 bit in the Task Manager. I loaded up Process Monitor just now, and confirmed that the service is in fact loading the correct DLL.

Spazz
Nov 17, 2005

This is a dumb question and I'm a dumb person for asking it (and for taking on this dumb problem): Is there a .NET library that will URL encode special or reserved characters, but not the whole string? Specifically, the ones in this list, but not JP language characters.

Edit: I'm dumb and string.Replace(",", "%2C").Replace("!","%21").etc.etc. will work
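If the Replace chain gets long, a generic sketch of the same idea (the reserved-character list is a placeholder - put whatever characters you actually need in it):

```csharp
using System;
using System.Text;

class UrlEncodeDemo
{
    // Percent-encode only the characters in `reserved`; everything else,
    // including Japanese text, passes through untouched.
    static string EncodeReserved(string input, string reserved)
    {
        var sb = new StringBuilder(input.Length);
        foreach (char c in input)
        {
            if (reserved.IndexOf(c) >= 0)
                sb.AppendFormat("%{0:X2}", (int)c);
            else
                sb.Append(c);
        }
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(EncodeReserved("hello, world!", ",!")); // hello%2C world%21
    }
}
```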

Spazz fucked around with this message at 18:55 on Apr 24, 2015

xgalaxy
Jan 27, 2004
i write code
A while ago I remember reading about a Microsoft tool that checked your source code to see if it was compatible with various .NET framework versions (e.g. Windows Phone vs Xbox vs Metro). My googlefu is failing me and I can't seem to find anything about this now. Does anybody have a link or know what I'm talking about?

xgalaxy fucked around with this message at 19:19 on Apr 24, 2015

bobua
Mar 23, 2003
I'd trade it all for just a little more.

In the context of a controller in an asp.net mvc5 project, what exactly is the difference between placing

DBContext db = new DBContext();

at the top of the controller vs instantiating a separate object within each controller action that I'm going to make a database call in?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

bobua posted:

In the context of a controller in an asp.net mvc5 project, what exactly is the difference between placing

DBContext db = new DBContext();

at the top of the controller vs instantiating a separate object within each controller action that I'm going to make a database call in?

This is a general OO thing. If it's not in a method, it's a field. A field is a single instance of an object that is available to all methods within the class.
If it's instantiated in the method, it's local to that method. Once the method finishes, the object goes out of scope and is eligible to be garbage collected.

bobua
Mar 23, 2003
I'd trade it all for just a little more.

Sorry, I meant more from a best practices\what are the pitfalls point of view.

I don't quite understand garbage collection and database resource handling outside the desktop application environment.

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Ithaqua posted:

This is a general OO thing. If it's not in a method, it's a field. A field is a single instance of an object that is available to all methods within the class.
If it's instantiated in the method, it's local to that method. Once the method finishes, the object goes out of scope and is eligible to be garbage collected.

In this particular case (and not knowing dick about ASP DBContext, mind) I'd guess that a new DBContext implies a new database connection, which you probably don't want for every call.

(At least if it's anything like our Oracle DBs. Connecting takes about five million years, argh argh argh.)

Inverness posted:

If I wanted to bind to count I'd make a property that updated whenever the collection changed.

Yes. They're two fundamentally different things. You need to understand the difference between INotifyPropertyChanged and INotifyCollectionChanged.

:words:

Thanks for all this. I went back to ObservableCollection<RawRecord> today, and strangely the RawRecords.Count bind worked this time. Don't ask me why, I must have typoed hardcore the first time. At any rate this allowed me to take out a bunch of plumbing and cruft, so thanks for that! :)

Milotic
Mar 4, 2009

9CL apologist
Slippery Tilde

Ciaphas posted:

In this particular case (and not knowing dick about ASP DBContext, mind) I'd guess that a new DBContext implies a new database connection, which you probably don't want for every call.

(At least if it's anything like our Oracle DBs. Connecting takes about five million years, argh argh argh.)

.NET does connection pooling under the covers for SQL Server by default, and it can be configured for other databases:

https://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/ms254502(v=vs.110).aspx

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

bobua posted:

Sorry, I meant more from a best practices\what are the pitfalls point of view.

I don't quite understand garbage collection and database resource handling outside the desktop application environment.

The pitfall is that you are munging your business logic and data access into one spot. There's no separation of concerns, so testing this without firing up a database is impossible. For a small application, creating your DbContext in the constructor or injecting it via IoC isn't that bad.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

gariig posted:

The pitfall is that you are munging your business logic and data access into one spot. There's no separation of concerns, so testing this without firing up a database is impossible. For a small application, creating your DbContext in the constructor or injecting it via IoC isn't that bad.

Pretty much this.

However, aside from poor architecture, instantiating an Entity Framework context in the controller is mostly no big deal. It normally doesn't touch the database until you make your first SQL call, and .NET ADO connection pooling is handled under the covers so it's not *that* big a deal that the context isn't disposed.

RICHUNCLEPENNYBAGS
Dec 21, 2010

Bognar posted:

Pretty much this.

However, aside from poor architecture, instantiating an Entity Framework context in the controller is mostly no big deal. It normally doesn't touch the database until you make your first SQL call, and .NET ADO connection pooling is handled under the covers so it's not *that* big a deal that the context isn't disposed.

Controllers are disposed at the end of the request so you can easily put a call to dispose your context in the controller's Dispose method.

Also, EF7 is going to introduce an in-memory provider which is going to make life much easier W/R/T testing.
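Sketched out, that looks something like this in MVC5 (DBContext and the Courses set are placeholder names borrowed from earlier in the thread):

```csharp
using System.Linq;
using System.Web.Mvc;

public class CoursesController : Controller
{
    private readonly DBContext db = new DBContext(); // placeholder context type

    public ActionResult Index()
    {
        return View(db.Courses.ToList()); // hypothetical DbSet
    }

    // MVC disposes the controller at the end of each request, so the
    // Dispose override is the natural place to release the context.
    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}
```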

Iverron
May 13, 2012

gariig posted:

The pitfall is that you are munging your business logic and data access into one spot. There's no separation of concerns, so testing this without firing up a database is impossible. For a small application, creating your DbContext in the constructor or injecting it via IoC isn't that bad.

If I'm understanding what you're suggesting correctly, this was discussed at length back around page 45:

Che Delilas posted:

The whole "testable" thing really bugs me as a reason to go through all these double-abstraction-layer gymnastics. I have an MVC project where I have a service (the generic, business-logic-goes-here form of the word, not a web service or something) that gets the DbContext passed to it through its constructor.

A good portion of our inherited MVC projects (my company bought another company's clients, etc.) are either questionably Repository patterned or worse. My preference thus far has been something close to the pattern above (injecting Context into Controllers per web request).

Simple Injector Initializer:
code:
container.RegisterPerWebRequest<DbContext>(() => new DbContext());
Controller:
code:
private ContentService _content;

public PageController(DbContext context)
{
	_content = new ContentService(context);
}

Uziel
Jun 28, 2004

Ask me about losing 200lbs, and becoming the Viking God of W&W.
This is kind of a weird/general question, but I have an opportunity to monetize an app I made for myself when people started asking me if they could purchase access.
I normally work on intranet apps, and one off personal projects, so I'm out of my element.

The basics are the app is WebAPI that screen scrapes a website (that does not offer a public API) and returns data either as JSON or a CSV (CSV is imported into Google sheets).

So I'd have users that subscribe to a model that is based on the number of sites they need to scrape. I'm rather lost here, but I guess I would need to have their username in the URL and then check to see if they are active, and that the user account I'm scraping from is associated to their account, and then go from there?

Does that make sense? Can anyone point me in the right direction?

RICHUNCLEPENNYBAGS
Dec 21, 2010

Uziel posted:

This is kind of a weird/general question, but I have an opportunity to monetize an app I made for myself when people started asking me if they could purchase access.
I normally work on intranet apps, and one off personal projects, so I'm out of my element.

The basics are the app is WebAPI that screen scrapes a website (that does not offer a public API) and returns data either as JSON or a CSV (CSV is imported into Google sheets).

So I'd have users that subscribe to a model that is based on the number of sites they need to scrape. I'm rather lost here, but I guess I would need to have their username in the URL and then check to see if they are active, and that the user account I'm scraping from is associated to their account, and then go from there?

Does that make sense? Can anyone point me in the right direction?

I've always used Identity and the login cookies but I guess you could make it part of the URL too.

crashdome
Jun 28, 2011
I'm a terrible businessman, but could you make the ability to scrape the sites free and then charge for access to the results? Or charge for advanced features? It seems that a lot of the places that blow up generally monetize free services by selling advertising.

ljw1004
Jan 18, 2005

rum

Uziel posted:

This is kind of a weird/general question, but I have an opportunity to monetize an app I made for myself when people started asking me if they could purchase access.
I normally work on intranet apps, and one off personal projects, so I'm out of my element.
The basics are the app is WebAPI that screen scrapes a website (that does not offer a public API) and returns data either as JSON or a CSV (CSV is imported into Google sheets).
So I'd have users that subscribe to a model that is based on the number of sites they need to scrape. I'm rather lost here, but I guess I would need to have their username in the URL and then check to see if they are active, and that the user account I'm scraping from is associated to their account, and then go from there?
Does that make sense? Can anyone point me in the right direction?

I consume services from the Azure Marketplace, such as Bing and IP-reverse-lookup. They're similar - I pay for credits (from the same account that I use to pay for all of Azure), and these credits let me make however many thousand queries a month.

I wonder if that would be a viable place for you to sell your stuff? There might be existing machinery that's easy to leverage. PS. I've never tried to offer a service in the marketplace, and have no idea how it works.

RICHUNCLEPENNYBAGS
Dec 21, 2010

ljw1004 posted:

I consume services from the Azure Marketplace, such as Bing and IP-reverse-lookup. They're similar - I pay for credits (from the same account that I use to pay for all of Azure), and these credits let me make however many thousand queries a month.

I wonder if that would be a viable place for you to sell your stuff? There might be existing machinery that's easy to leverage. PS. I've never tried to offer a service in the marketplace, and have no idea how it works.

Azure has an API access thing that supports caching results and handles all the account creation and all that and then only Azure talks directly to your API. So that might actually be a good option. It even supports service tiers.

http://azure.microsoft.com/en-us/services/api-management/

Funking Giblet
Jun 28, 2004

Jiglightful!

Iverron posted:

If I'm understanding what you're suggesting correctly, this was discussed at length back around page 45:


A good portion of our inherited MVC projects (my company bought another company's clients, etc.) are either questionably Repository patterned or worse. My preference thus far has been something close to the pattern above (injecting Context into Controllers per web request).

Simple Injector Initializer:
code:
container.RegisterPerWebRequest<DbContext>(() => new DbContext());
Controller:
code:
private ContentService _content;

public PageController(DbContext context)
{
	_content = new ContentService(context);
}

You just made the lives of everyone easier. I strongly recommend using the context directly, or through a light service layer to abstract common queries, all while avoiding repositories, which most people get wrong anyway! I also recommend handling the transactions in an action filter so each action begins a transaction and commits when finished.
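A rough sketch of the action-filter idea against EF6 (how the filter gets hold of the context is a placeholder - wire it up however your container allows):

```csharp
using System.Data.Entity;
using System.Web.Mvc;

public class TransactionPerActionAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext ctx)
    {
        // hypothetical accessor; assumes controllers expose their DbContext
        var db = ((BaseController)ctx.Controller).Db;
        // filter attribute instances can be reused across requests,
        // so stash per-request state in HttpContext.Items, not a field
        ctx.HttpContext.Items["tx"] = db.Database.BeginTransaction();
    }

    public override void OnActionExecuted(ActionExecutedContext ctx)
    {
        var tx = (DbContextTransaction)ctx.HttpContext.Items["tx"];
        if (ctx.Exception == null)
            tx.Commit();
        else
            tx.Rollback();
        tx.Dispose();
    }
}
```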

Funking Giblet fucked around with this message at 21:19 on Apr 26, 2015

Uziel
Jun 28, 2004

Ask me about losing 200lbs, and becoming the Viking God of W&W.

crashdome posted:

I'm a terrible business man but, could you make the ability to scrape the sites free and then charge for access to the results? Or charge for advanced features? It seems that a lot if the places that blow up generally monetize free services by selling advertising.
Hm, advertising wouldn't really work since most people would be consuming the data in Google sheets. It's a pretty niche market but I wouldn't mind some passive income from it considering it removes a huge hassle of manual data entry for people (copying the data from a site to a spreadsheet).

Thanks for the Azure suggestion, I'll check that out.

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION
To my knowledge, NuGet has no way to handle customising the path for installed JavaScript packages. So a structure like this, for example:

code:
/Scripts/dist/ (bundled, minified code goes here)
/Scripts/src/ (my code all goes here)
/Scripts/vendor/ (jquery, modernizr, etc all go here)
is incompatible with NuGet. Is there some other way people manage their JavaScript dependencies to get around this? Do people just manage this stuff through NPM instead, for example?

putin is a cunt
Apr 5, 2007

BOY DO I SURE ENJOY TRASH. THERE'S NOTHING MORE I LOVE THAN TO SIT DOWN IN FRONT OF THE BIG SCREEN AND EAT A BIIIIG STEAMY BOWL OF SHIT. WARNER BROS CAN COME OVER TO MY HOUSE AND ASSFUCK MY MOM WHILE I WATCH AND I WOULD CERTIFY IT FRESH, NO QUESTION
I also have a more general question around dependency injection. I have a controller action that looks like this:

code:
public class CoursesController : BaseController
{
    // GET: Course
    public ActionResult Details(int id, IOnlineEnrolmentManager manager)
    {
        Course course = manager.GetCourseById(id);

        return View("Details", course);
    }
}
In my actual project, the IOnlineEnrolmentManager will be implemented by a concrete class called OnlineEnrolmentManager. The interface is then re-implemented by a concrete class called MockManager in my test project. Because the accepted parameter is an interface rather than a concrete class, if I place my cursor on "GetCourseById(id)" and hit F12, I get taken to the signature for that method in the interface. If I weren't supporting dependency injection I could just make the parameter a concrete type, and then hitting F12 would take me to the actual implementation of the method. I know this is probably wishful thinking, but is there a way I can structure this so I can have the best of both worlds?

putin is a cunt fucked around with this message at 07:00 on Apr 27, 2015

brap
Aug 23, 2004

Grimey Drawer
Find References to your interface's type if you want to be able to jump to your implementation. Also, check your spelling.

putin is a cunt
Apr 5, 2007


fleshweasel posted:

Find References to your interface's type if you want to be able to jump to your implementation. Also, check your spelling.

Better than nothing, I guess.

Regarding the spelling, if you mean "dependancy", my bad (although some places list the 'a' version as an alternate spelling). If you're talking about "enrolment", we spell it that way here in Australia :)

brap
Aug 23, 2004

Grimey Drawer
Oh, shoot. Sorry about the smug.

Destroyenator
Dec 27, 2004

Don't ask me lady, I live in beer
With ReSharper you can use Alt+End for Go to Derived Symbols, which works on interfaces.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

Iverron posted:

A good portion of our inherited MVC projects (my company bought another company's clients, etc.) are either questionably Repository patterned or worse. My preference thus far has been something close to the pattern above (injecting Context into Controllers per web request).

Funking Giblet posted:

You just made the lives of everyone easier. I strongly recommend using the context directly, or through a light service layer that abstracts common queries, all while avoiding repositories, which most people get wrong anyway! I also recommend handling transactions in an ActionFilter, so each action begins a transaction and commits it when finished.

I don't have much time at the moment to elaborate, but I will say I strongly disagree with the general patterns described here. If you need a context, then instantiate one - don't keep the same one around for the lifetime of a request. Due to the way EF handles object caching, strange and unexpected things can happen. Similarly, you shouldn't just automatically set up a transaction per request. Even if you ignore the performance concerns, you still should reason about which requests actually require a transaction and only use one where it's necessary.

I'll elaborate more on this tonight when I have a bit more time.
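To illustrate the "if you need a context, instantiate one" point from the post above, a minimal sketch (the `SchoolContext` and `Courses` names are made up for the example):

```csharp
public ActionResult Details(int id)
{
    // Create the context for just this unit of work and dispose it
    // when done, rather than sharing one instance across the request.
    // Note: anything the view needs must be loaded before disposal,
    // since lazy loading won't work on a disposed context.
    using (var db = new SchoolContext())
    {
        var course = db.Courses.Find(id);
        return View("Details", course);
    }
}
```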


Funking Giblet
Jun 28, 2004

Jiglightful!

Bognar posted:

I don't have much time at the moment to elaborate, but I will say I strongly disagree with the general patterns described here. If you need a context, then instantiate one - don't keep the same one around for the lifetime of a request. Due to the way EF handles object caching, strange and unexpected things can happen. Similarly, you shouldn't just automatically set up a transaction per request. Even if you ignore the performance concerns, you still should reason about which requests actually require a transaction and only use one where it's necessary.

I'll elaborate more on this tonight when I have a bit more time.

I don't use EF, but NHibernate, which always creates a transaction no matter what, and it's bad for performance not to handle that yourself: otherwise it creates one per modification, so you might end up with several transactions per action. My current setup allows overriding the isolation level or skipping the transaction if required, but falls back to the default NHibernate behaviour. As for newing up a context, I tend to share the session within a request and let the IoC container do the work. The session cache is useful when shared, too; any weirdness should either be accounted for or is a bug. Having everything in one transaction per command or request means it behaves the same as SQL does anyway: any changes are accessible within the session while inside the transaction.
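For reference, the transaction-per-action idea described above is usually done with something like the following sketch. `SessionProvider` is a hypothetical stand-in for however your IoC container resolves the current NHibernate ISession; error handling is kept minimal:

```csharp
public class TransactionPerActionAttribute : ActionFilterAttribute
{
    private ITransaction _transaction;

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // SessionProvider is assumed; in practice the session comes
        // from your container (e.g. a per-request lifetime scope).
        var session = SessionProvider.GetCurrentSession();
        _transaction = session.BeginTransaction();
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Commit on success, roll back if the action threw.
        if (filterContext.Exception == null)
            _transaction.Commit();
        else
            _transaction.Rollback();
        _transaction.Dispose();
    }
}
```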
