Inverness
Feb 4, 2009

Fully configurable personal assistant.
One thing I see often in code is people creating the SqlCommand and SqlParameter objects each time they need to make a call.

Isn't this wasteful? You can reuse the existing command and parameter objects just fine as far as I know. I do it for SQLite at least.

It bothers me because it seems like more unnecessary GC pressure which isn't what you need for a webapp.

This is what I did with SQLite:
code:
public MyConstructor(...)
{
    Connection = new SQLiteConnection(...);
    Connection.Open();

    _delete = new SQLiteCommand("DELETE FROM Store WHERE Bucket = ? AND Key = ?", Connection);
    _delete.Parameters.Add(null, DbType.String);
    _delete.Parameters.Add(null, DbType.String);
    _delete.Prepare();
}

public bool Delete(string bucket, string key)
{
    ValidateArgs(bucket, key);

    _delete.Parameters[0].Value = bucket ?? DefaultBucketName;
    _delete.Parameters[1].Value = key;
    return _delete.ExecuteNonQuery() != 0;
}

Inverness fucked around with this message at 02:43 on Jan 16, 2015

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Inverness posted:

One thing I see often in code is people creating the SqlCommand and SqlParameter objects each time they need to make a call.

Isn't this wasteful? You can reuse the existing command and parameter objects just fine as far as I know. I do it for SQLite at least.

It bothers me because it seems like more unnecessary GC pressure which isn't what you need for a webapp.

This is what I did with SQLite:
code:
public MyConstructor(...)
{
    Connection = new SQLiteConnection(...);
    Connection.Open();

    _delete = new SQLiteCommand("DELETE FROM Store WHERE Bucket = ? AND Key = ?", Connection);
    _delete.Parameters.Add(null, DbType.String);
    _delete.Parameters.Add(null, DbType.String);
    _delete.Prepare();
}

public bool Delete(string bucket, string key)
{
    ValidateArgs(bucket, key);

    _delete.Parameters[0].Value = bucket ?? DefaultBucketName;
    _delete.Parameters[1].Value = key;
    return _delete.ExecuteNonQuery() != 0;
}

SqlCommands hold on to SqlConnections, which hold on to unmanaged resources. That's why they should be disposed and recreated as necessary. I don't know if you necessarily need to dispose SqlCommands, but it's easier to manage if you just instantiate both a command and a connection as needed in a using block.
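Something like this is what I mean - just a sketch (the query, table, and method names are made up; the ADO.NET calls are the standard ones):

code:
using System.Data.SqlClient;

public string GetUserName(string connectionString, int userId)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT Name FROM Users WHERE Id = @id", connection))
    {
        command.Parameters.AddWithValue("@id", userId);
        connection.Open();

        // Both objects get disposed even if ExecuteScalar throws;
        // disposing the connection just returns it to the pool.
        return (string)command.ExecuteScalar();
    }
}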

Che Delilas
Nov 23, 2009
FREE TIBET WEED

bpower posted:

Ok, there's nothing inherently wrong with the UoW pattern or the repository pattern, but a lot of people, you included, have pointed out it's a needless abstraction of EF.

Right, basically isn't the DbContext a Repository and a Unit of Work already?

bpower
Feb 19, 2011
Pretty much. If you're using your own Repository/UoW on top of EF just because you think they're good data access patterns then you've taken a wrong turn.

Implementing your own will make some things easier and 'nicer' I suppose.
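
To illustrate (the context and entity here are made up), the DbContext already gives you both halves:

code:
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public DateTime? ShippedOn { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

// ...

using (var db = new ShopContext())
{
    db.Orders.Add(new Order { Total = 42m });                          // DbSet<T> is the "repository"
    var pending = db.Orders.Where(o => o.ShippedOn == null).ToList();  // ...including queries
    db.SaveChanges();                                                  // SaveChanges() is the "unit of work" commit
}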

RICHUNCLEPENNYBAGS
Dec 21, 2010

Space Whale posted:

I'll give an example:

Our WebAPI controllers have in the constructor's parameters at least one IRepository<BizObjectType> foo and one IUnitOfWorkFactory uowFactory. I guess the architect really, really likes UoW and hand-written contexts?
code:
        public BarController(IUnitOfWorkFactory uowFactory,
                               IRepository<Bar> barRepository)
        {
            UnitOfWork = uowFactory.Create("Bar");
            BarRepository = barRepository;
        }
And the way we'd query the db is like this:

code:

        [Queryable]
        public IQueryable<FooModel> Get()
        {
            var result = FooRepository.GetAll()
                                             .Project<Foo>()
                                             .To<FooModel>();

            foreach (var foo in result)
            {
                ModelUrlFactory.Populate(foo);
            }

            return result;
        }


And actually saving anything to the db:

code:
public IHttpActionResult Post([FromBody]BarModel value)
        {
            if (!ModelState.IsValid)
                return BadRequest();

            if (value.Id != 0)
            {
                var existing = BarRepository.Get(m => m.Id == value.Id);

                if (existing != null)
                    return BadRequest(string.Format("User {0} already exists.", value.Id));
            }

            var bar= new Bar
            {
                BarAccessToken = value.AccessToken,
                InstallDate = value.InstallDate,
                baz = new Baz
                {
			//properties here 
                },
            };

            BarRepository.Add(bar);

            UnitOfWork.Commit();

            return Ok(bar.Id);
        }
It just seems totally unnecessary. Why not just use EF as EF? I've heard it's something to do with "Scaling" and "we did it wrong in the past with one big context," but I thought EF could handle that itself without wrapping it in another layer of UoW.

Yeah, that's pretty much the same as EF.

Malcolm XML posted:

Please use attribute based routing and have your controllers use reasonable method names.

Why is that better than configuring it to use MVC-style routing, exactly?

kingcrimbud
Mar 1, 2007
Oh, Great. Now what?

RICHUNCLEPENNYBAGS posted:

Yeah, that's pretty much the same as EF.


Why is that better than configuring it to use MVC-style routing, exactly?

There's no second-guessing what the URL of an action is when they're on top of each other.

RICHUNCLEPENNYBAGS
Dec 21, 2010

kingcrimbud posted:

There's no second-guessing what the URL of an action is when they're on top of each other.

Yeah, instead, just an extra opportunity to gently caress up naming it.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

RICHUNCLEPENNYBAGS posted:

Yeah, instead, just an extra opportunity to gently caress up naming it.

Hey, ~/Catalog/Porducts/105 is a perfectly legitimate route and won't make people think less of your business at all!

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

RICHUNCLEPENNYBAGS posted:

Yeah, instead, just an extra opportunity to gently caress up naming it.

as opposed to having it implicitly generated by some weird convention-based thing that is impossible to control?
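
for reference, attribute routing in Web API 2 puts the URL right next to the action, so there's nothing implicit to guess at. rough sketch, names made up:

code:
// In WebApiConfig.Register:
config.MapHttpAttributeRoutes();

// On the controller:
[RoutePrefix("api/products")]
public class ProductsController : ApiController
{
    [HttpGet, Route("")]          // GET api/products
    public IEnumerable<Product> GetAll() { ... }

    [HttpGet, Route("{id:int}")]  // GET api/products/5
    public Product GetById(int id) { ... }
}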

PhonyMcRingRing
Jun 6, 2002

Ithaqua posted:

SqlCommands hold on to SqlConnections, which hold on to unmanaged resources. That's why they should be disposed and recreated as necessary. I don't know if you necessarily need to dispose SqlCommands, but it's easier to manage if you just instantiate both a command and a connection as needed in a using block.

Also, the System.Data.SqlClient-related stuff isn't thread safe, which can lead to some bizarre errors with DataReaders trying to read data from the wrong command.

Inverness
Feb 4, 2009

Fully configurable personal assistant.

Ithaqua posted:

SqlCommands hold on to SqlConnections, which hold on to unmanaged resources. That's why they should be disposed and recreated as necessary. I don't know if you necessarily need to dispose SqlCommands, but it's easier to manage if you just instantiate both a command and a connection as needed in a using block.
If you're actively and consistently using a SqlConnection, then I don't understand why holding onto its resources throughout the lifetime of the program would be a bad thing.

PhonyMcRingRing posted:

Also, the System.Data.SqlClient-related stuff isn't thread safe, which can lead to some bizarre errors with DataReaders trying to read data from the wrong command.
This makes more sense as to why you would need to remake connections. Pooling means actual connections can be handed out to the threads that need them, yes?

Inverness fucked around with this message at 16:13 on Jan 16, 2015

PhonyMcRingRing
Jun 6, 2002

Inverness posted:

This makes more sense as to why you would need to remake connections. Pooling means actual connections can be handed out to the threads that need them, yes?

Right, and the framework handles that for you in the background.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

PhonyMcRingRing posted:

Right, and the framework handles that for you in the background.

Exactly. There is no reason to manually worry about your resources with a SqlConnection since the framework does all of the pooling for you. Spin up a new one as necessary and make sure it's in a using{} block.


bpower posted:

Pretty much. If you're using your own Repository/UoW on top of EF just because you think they're good data access patterns then you've taken a wrong turn.

Implementing your own will make some things easier and 'nicer' I suppose.

Using a repository pattern is good in a couple of cases. It isolates the rest of your code from Entity Framework, so just your data access layer has to know about it. This helps immensely, for example, when you run into situations where EF generates a terrible ill-performing query and you need to drop down to SQL. If your data access code is hidden behind a repository, you can change the code in one place and not worry about affecting callers. This inherently gives you a second benefit of testability, assuming you are referencing your repository by its interface, since you can provide a mock implementation for your repository to test the rest of your code without hitting the database.
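
A rough sketch of what I mean (the entity, context, and SQL here are made up):

code:
public interface IOrderRepository
{
    Order Get(int id);
    IReadOnlyList<Order> GetOverdue();
}

public class EfOrderRepository : IOrderRepository
{
    private readonly StoreContext _context;

    public EfOrderRepository(StoreContext context)
    {
        _context = context;
    }

    public Order Get(int id)
    {
        return _context.Orders.Find(id);
    }

    public IReadOnlyList<Order> GetOverdue()
    {
        // If EF generates a terrible plan for this query, only this one method
        // has to change to drop down to SQL - callers keep talking to IOrderRepository.
        return _context.Database
                       .SqlQuery<Order>("SELECT * FROM Orders WHERE DueDate < GETDATE()")
                       .ToList();
    }
}
Since callers depend on IOrderRepository, tests can hand in a fake implementation and never touch EF.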

However, implementing UoW on top of EF is just a pitfall in my opinion. A guy here at work has tried to do it a couple of times over the years and it's always turned into an over-complicated mess that doesn't buy you any actual benefit. Maybe I just haven't seen it done properly, but until I do I will remain unconvinced.

NihilCredo
Jun 6, 2011

suppress your anger in every possible way:
one outburst of it will defame you more than many virtues will commend you

Speaking of using{}, if a using (foo) {} block lasts until the end of the method, is it in any way different from just declaring foo as a local variable?

No Safe Word
Feb 26, 2005

NihilCredo posted:

Speaking of using{}, if a using (foo) {} block lasts until the end of the method, is it in any way different from just declaring foo as a local variable?

Yes: if foo has a Dispose defined, the using version will invoke the Dispose method when the block exits (even if an exception is thrown partway through), and the plain declaration will not.

Conceptually speaking (and maybe in actual compiler mojo), using is syntactic sugar for:

code:
var foo = ...;
try
{
    // body of the using block goes here
}
finally
{
    // (the compiler-generated version also null-checks foo before disposing)
    foo.Dispose();
}

RICHUNCLEPENNYBAGS
Dec 21, 2010

Malcolm XML posted:

as opposed to having it implicitly generated by some weird convention-based thing that is impossible to control?

If configuring it doesn't count as "controlling" it then I don't think we're in agreement about what "control" means.

Bognar posted:

Exactly. There is no reason to manually worry about your resources with a SqlConnection since the framework does all of the pooling for you. Spin up a new one as necessary and make sure it's in a using{} block.


Using a repository pattern is good in a couple of cases. It isolates the rest of your code from Entity Framework, so just your data access layer has to know about it. This helps immensely, for example, when you run into situations where EF generates a terrible ill-performing query and you need to drop down to SQL. If your data access code is hidden behind a repository, you can change the code in one place and not worry about affecting callers. This inherently gives you a second benefit of testability, assuming you are referencing your repository by its interface, since you can provide a mock implementation for your repository to test the rest of your code without hitting the database.

However, implementing UoW on top of EF is just a pitfall in my opinion. A guy here at work has tried to do it a couple of times over the years and it's always turned into an over-complicated mess that doesn't buy you any actual benefit. Maybe I just haven't seen it done properly, but until I do I will remain unconvinced.

The problem is that, inevitably, you're going to end up wanting to do more stuff that requires talking directly to EF (like, say, doing some Include statements) and eventually it's not really much of an abstraction at all (it's half talking to EF and half to the wrapper, and your wrapper gradually accumulates the same list of methods as DbContext). I kind of regret doing this even though it does make testing a bit easier.

RICHUNCLEPENNYBAGS fucked around with this message at 02:28 on Jan 17, 2015

Forgall
Oct 16, 2012

by Azathoth

bpower posted:

Ok there's nothing inherently wrong with the UoW pattern or the repository pattern, but a lot of people, you included, have pointed out its a needless abstraction of EF.

// this guy loving hates the repo pattern
http://ayende.com/blog/search?q=respository

// another
https://cockneycoder.wordpress.com/2013/04/07/why-entity-framework-renders-the-repository-pattern-obsolete/

//another
http://tech.pro/blog/1191/say-no-to-the-repository-pattern-in-your-dal


Well, why stop at repository :v:

http://java.dzone.com/articles/orm-offensive-anti-pattern

Che Delilas
Nov 23, 2009
FREE TIBET WEED

RICHUNCLEPENNYBAGS posted:

If configuring it doesn't count as "controlling" it then I don't think we're in agreement about what "control" means.

The problem is that, inevitably, you're going to end up wanting to do more stuff that requires talking directly to EF (like, say, doing some Include statements) and eventually it's not really much of an abstraction at all (it's half talking to EF and half to the wrapper, and your wrapper gradually accumulates the same list of methods as DbContext). I kind of regret doing this even though it does make testing a bit easier.

The whole "testable" thing really bugs me as a reason to go through all these double-abstraction-layer gymnastics. I have an MVC project where I have a service (the generic, business-logic-goes-here form of the word, not a web service or something) that gets the DbContext passed to it through its constructor. The service does a thing with the database. I want to unit test the functionality of the service methods without needing that database.

My DbContext looks like this:

C# code:
class MyDbContext : DbContext
{
    public virtual IDbSet<Boat> Boats {get; set;}
    public virtual IDbSet<Plane> Planes {get; set;}

    //...etc
}
To unit test my service (which again, takes a MyDbContext as a parameter in its constructor so it can talk to the database), I do this:

C# code:

[TestMethod]
public void Service_Does_A_Thing()
{
  //Keep backing Lists around so the Add/Remove callbacks have something mutable to work on
  var testboatList = new List<Boat> { /* create some dummy boats here */ };
  var testplaneList = new List<Plane> { /* create some dummy planes here */ };
  var testboats = testboatList.AsQueryable();
  var testplanes = testplaneList.AsQueryable();

  //Create the mock sets (using Moq)
  var boatSetMock = new Mock<IDbSet<Boat>>();
  boatSetMock.Setup(m => m.Provider).Returns(testboats.Provider);
  boatSetMock.Setup(m => m.Expression).Returns(testboats.Expression);
  boatSetMock.Setup(m => m.ElementType).Returns(testboats.ElementType);
  boatSetMock.Setup(m => m.GetEnumerator()).Returns(testboats.GetEnumerator());

  boatSetMock.Setup(m => m.Add(It.IsAny<Boat>())).Callback((Boat b) => testboatList.Add(b));
  boatSetMock.Setup(m => m.Remove(It.IsAny<Boat>())).Callback((Boat b) => testboatList.Remove(b));

  //Do the same thing for the Planes mock set
  // ...

  //Create the mock DbContext
  var dbContextMock = new Mock<MyDbContext>();
  dbContextMock.Setup(m => m.Boats).Returns(boatSetMock.Object);
  dbContextMock.Setup(m => m.Planes).Returns(planeSetMock.Object);

  //Mocks are done, test the actual service
  var svcUnderTest = new MyService(dbContextMock.Object);

  var result = svcUnderTest.DoAThing();

  Assert.IsTrue(result.ThingDidSuccessfully, "Thing not did successfully!");
}
So what I'm basically doing is mocking out the internals of MyDbContext (the IDbSets) instead of mocking out an abstraction of MyDbContext itself, which moves the ugliness into my unit testing code instead of spraying it all over my production code.

I'm really, really not an expert at any of this; if someone notices that this code will do something horrifying, please speak up. The tests appear to work doing things this way, and operate only on the test data I've given them, but they could be touching things they aren't supposed to (one specific concern I have is whether mocking a concrete DbContext class, as in new Mock<MyDbContext>(), still expects a database to be there, even if all of its IDbSets are mocked. This is just me not knowing enough about how EF works under the hood).

RICHUNCLEPENNYBAGS
Dec 21, 2010
Yeah, but at this point it's pretty difficult for me to undo that for a year-old project, and doing so has rather limited benefits. I've also inserted some other behaviors, like tenant filtering, that might be tricky to get by just overriding DbContext.

bpower
Feb 19, 2011

quote:

//Create the mock sets (using Moq)
var boatSetMock = new Mock<IDbSet<Boat>>();

boatSetMock.Setup(m => m.Provider).Returns(testboats.Provider);
boatSetMock.Setup(m => m.Expression).Returns(testboats.Expression);
boatSetMock.Setup(m => m.ElementType).Returns(testboats.ElementType);
boatSetMock.Setup(m => m.GetEnumerator()).Returns(testboats.GetEnumerator());

boatSetMock.Setup(m => m.Add(It.IsAny<Boat>())).Callback((Boat b) => testboatList.Add(b));
boatSetMock.Setup(m => m.Remove(It.IsAny<Boat>())).Callback((Boat b) => testboatList.Remove(b));

Is the bolded stuff some internal methods needed to call add and remove? If so, aren't you heavily coupling the test to the implementation? I'm learning about TDD at the moment. There seems to be no consensus on really fundamental issues.


Consider these videos:

Kent Beck, Martin Fowler, and David Heinemeier Hansson (DHH) discuss DHH's blog post "TDD is dead" in the video below. The whole thing is fascinating.
https://www.youtube.com/watch?v=z9quxZsLcfo

Here's another:
Ian Cooper: TDD, where did it all go wrong
http://vimeo.com/68375232


DHH thinks testing has gone wrong and needs a reboot. The idea that TDD inevitably leads to good design is a myth; in fact, sometimes it leads to bad design, and his prime example is "Hexagonal Architecture".

Beck thinks testing has not gone wrong, but that some people may need to go back to first principles. He has never seen "test-induced bad design", or whatever DHH is calling it. He fully believes that, done correctly, TDD leads to good design.

Ian Cooper thinks testing has gone wrong and needs a reboot. He has seen loads of "test-induced bad design". We need to go back to first principles. His prime example of TDD done correctly, in its purest form, as Beck intended, is "Hexagonal Architecture" - the very thing DHH held up as the worst consequence of TDD.

What do you guys think? Am I even understanding the points at issue? A lot of the discussion went over my head.


My solution is to create a local test version of my db. I fill it with test data in a similar way to the Seed method in EF migrations. All my tests assume the db is in exactly its starting state. They can add data and read it back if they want, but they must clean that data up (first, and again at the end) to ensure the tests are atomic. The test db is pretty empty: it has all the lookups, a few users of each type needed in the tests, and a few typical entities that can be reused across many tests.

It's working well so far. It's not perfect: spinning up the db takes about 2 seconds, and then the roughly 40 tests that hit it take another 2 seconds. That's probably a deal breaker for many, but like your solution it keeps the ugliness outside of production code. I have 8-year-old hardware, but I'm getting upgraded soon.
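
In case it helps to picture it, the reset/seed bit boils down to something like this (sketch only - the context, table, and seed data here are invented):

code:
[TestClass]
public class BookingServiceTests
{
    private AppDbContext _db;

    [TestInitialize]
    public void ResetTestDatabase()
    {
        // Points at the local test db via a connection string name in App.config.
        _db = new AppDbContext("TestDb");

        // Put the db back to its known baseline before every test,
        // much like the Seed method in an EF migrations configuration.
        _db.Database.ExecuteSqlCommand("DELETE FROM Bookings");
        _db.Bookings.Add(new Booking { CustomerName = "Baseline customer" });
        _db.SaveChanges();
    }

    [TestCleanup]
    public void Cleanup()
    {
        _db.Dispose();
    }
}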

Like Che Delilas I welcome all criticism.

bpower
Feb 19, 2011

I don't know anything about that guy, but the bolded part is utterly false. He picked Java and Hibernate, the two most boilerplate-heavy examples he could think of, to demonstrate his point. Why not Dapper? It seems intellectually dishonest.

that dudes blog posted:

First, let's see how ORM works, by example. Let's use Java, PostgreSQL, and Hibernate. Let's say we have a single table in the database, called post:

Before any operation with Hibernate, we have to create a session factory:

This factory will give us "sessions" every time we want to manipulate with Post objects. Every manipulation with the session should be wrapped in this code block:

When the session is ready, here is how we get a list of all posts from that database table:
I think it's clear what's going on here. Hibernate is a big, powerful engine that makes a connection to the database, executes necessary SQL SELECT requests, and retrieves the data. Then it makes instances of class Post and stuffs them with the data. When the object comes to us, it is filled with data, and we should use getters to take them out, like we're using getTitle() above.

When we want to do a reverse operation and send an object to the database, we do all of the same but in reverse order. We make an instance of class Post, stuff it with the data, and ask Hibernate to save it:

This is how almost every ORM works.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

bpower posted:

Is the bolded stuff some internal methods needed to call add and remove? If so, aren't you heavily coupling the test to the implementation? I'm learning about TDD at the moment. There seems to be no consensus on really fundamental issues.

I'm pretty sure that the code you bolded is what is necessary to perform LINQ queries on an IDbSet if you want the IDbSet to point to somewhere other than where the DbContext specifies (in this case I'm pointing to the in-memory testboats List that I created for this test method, instead of a database somewhere).

It still feels hinky, especially since I'm not mocking out every method in the interface. http://msdn.microsoft.com/en-us/library/gg679233%28v=vs.113%29.aspx I really wish there was an obvious approach to this whole repository/uow + unit testing thing, because it really makes me scratch my head.

quote:

My solution is to create a local test version of my db. I fill it with test data in a similar way to the Seed method in EF migrations. All my tests assume the db is in exactly its starting state. They can add data and read it back if they want, but they must clean that data up (first, and again at the end) to ensure the tests are atomic. The test db is pretty empty: it has all the lookups, a few users of each type needed in the tests, and a few typical entities that can be reused across many tests.

This idea just rubs me the wrong way. There just has to be a decent way to completely short-circuit the need for a database at all for the purposes of unit testing, but without having an otherwise completely redundant abstraction on top of your DbContext.

bpower
Feb 19, 2011

Che Delilas posted:

I'm pretty sure that the code you bolded is what is necessary to perform LINQ queries on an IDbSet if you want the IDbSet to point to somewhere other than where the DbContext specifies (in this case I'm pointing to the in-memory testboats List that I created for this test method, instead of a database somewhere).

It still feels hinky, especially since I'm not mocking out every method in the interface. http://msdn.microsoft.com/en-us/library/gg679233%28v=vs.113%29.aspx I really wish there was an obvious approach to this whole repository/uow + unit testing thing, because it really makes me scratch my head.


This idea just rubs me the wrong way. There just has to be a decent way to completely short-circuit the need for a database at all for the purposes of unit testing, but without having an otherwise completely redundant abstraction on top of your DbContext.


Me too! But I don't know why. If speed were no issue whatsoever, what's the difference between creating an in-memory mock and having a test db on disk, or more likely several dbs on disk that the relevant tests point at? We both have to maintain test data. EF can keep all the dbs structurally in sync.

What about tests involving data access to text files? They're just data on disk, right?

This seems like a widely accepted solution.
http://stackoverflow.com/questions/1805012/unit-testing-how-to-access-a-text-file

It points to this:
http://msdn.microsoft.com/en-us/library/ms182475.aspx


You end up with this:
code:
[TestClass]
[DeploymentItem("testdata.csv", "my_test_data_folder")]
public class TestClass1
{
  [TestMethod]
  public void Test_using_filedata_1()
  {
    string testData = System.IO.File.ReadAllText(@"my_test_data_folder\testdata.csv");
    // ...
  }

  [TestMethod]
  public void Test_using_filedata_2()
  {
    string testData = System.IO.File.ReadAllText(@"my_test_data_folder\testdata.csv");
    // ...
  }

  [TestMethod]
  public void Test_using_filedata_3()
  {
    string testData = System.IO.File.ReadAllText(@"my_test_data_folder\testdata.csv");
    // ...
  }
}

That's fine. Nobody bats an eyelid. Why is that? Isn't it exactly the same as saving data to a database on disk? Why aren't we mocking System.IO.File?

These are not rhetorical questions btw.

Wardende
Apr 27, 2013

Che Delilas posted:

This idea just rubs me the wrong way. There just has to be a decent way to completely short-circuit the need for a database at all for the purposes of unit testing, but without having an otherwise completely redundant abstraction on top of your DbContext.

Not until EF 7 there isn't!
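
The EF7 bits are still pre-release and the names keep moving, but the in-memory provider ends up looking roughly like this (sketch; treat the exact API as approximate, and it assumes your context grows a constructor that accepts DbContextOptions):

code:
// Requires the EF7 in-memory provider package.
var options = new DbContextOptionsBuilder<MyDbContext>()
    .UseInMemoryDatabase("UnitTestDb")   // no real database anywhere
    .Options;

using (var db = new MyDbContext(options))
{
    db.Boats.Add(new Boat { /* dummy boat */ });
    db.SaveChanges();
}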

Inverness
Feb 4, 2009

Fully configurable personal assistant.
How well supported will TypeConverter be in .NET Core and on other platforms going forward? There isn't any other generic way of specifying how types can be converted for things like serialization, yet it isn't supported on those other platforms. The IValueConverter you find on other platforms is much more limited and situational, aimed at things like data binding.

RICHUNCLEPENNYBAGS
Dec 21, 2010
Tests are good but TDD is stupid. That's my take.

Che Delilas
Nov 23, 2009
FREE TIBET WEED

bpower posted:

Me too! But I don't know why. If speed were no issue whatsoever, what's the difference between creating an in-memory mock and having a test db on disk, or more likely several dbs on disk that the relevant tests point at? We both have to maintain test data. EF can keep all the dbs structurally in sync.

I mean, the point of unit testing is to test something in isolation. I want to be able to test some complicated algorithm on its own whether I have access to a database or not; I care about the operation of the algorithm and nothing else.

But you make a good point; I mean in what situation, realistically, are we going to have access to our development machine, the source code we're testing, and our unit testing project and code, but not have access to at least some kind of database system that EF can connect to to get dummy data? Any dev box or CI server worth the name is going to have something available to serve that function. Maybe this is one of those cases of everyone (okay, me) getting caught up in the ~perfect theoretical form~ of a thing, when a little bit of concession to reality would make things a lot simpler for the vast majority of cases.

(As an aside, anyone else ever find themselves wishing that Microsoft had called their stupid Access program anything else at all? Every time I have a conversation that involves databases, my brain twitches just a little bit when the inevitable phrase "access to the database" comes up. They could have called it anything that doesn't imply "availability and permission," but nooOOOooo.)

quote:

Why aren't we mocking System.IO.File?

Well, I wouldn't mock this one specifically, because if I had a method that created a file as part of its operation, I would want to make sure that file actually got created. I've had enough minor trouble caused by file permissions issues, thanks very much :v:. But I know that's sort of beside the point you're making.

brap
Aug 23, 2004

Grimey Drawer
Ideally a unit test suite should create whatever environment it needs to operate and that includes a test database.

mortarr
Apr 28, 2005

frozen meat at high speed
Continuing the mvc / web api chat...

So right now I have controllers named after resources: DocumentController, DocumentsController, FolderController, FoldersController, etc. They have methods like the ones below; the comments are just for this example, since I've actually got an MVC help area in-app that displays the XML help for the web api controller methods. ApiControllerBase has some common stuff layered over the normal web api controller, like a logging object etc.

code:
// Other view models inherit from this...
public class ApiViewModelBase 
{
	public string Username { get; set; }
}

public class DocumentController: ApiControllerBase
{
	// Add or update, can be called multiple times safely
	[HttpPut] 
	public Document Put(DocumentPutViewModel vm) { ... }

	// Get the document by id
	[HttpGet] 
	public Document Get(DocumentGetViewModel vm) { ... }
} 

public class DocumentsController: ApiControllerBase
{
	// Get the documents related to contact id
	[HttpGet] 
	public List<Document> Get(DocumentSearchByContactIdViewModel vm) { ... }


	// Get the documents related to location
	[HttpGet] 
	public List<Document> Get(DocumentSearchByLocationViewModel vm) { ... }
}
The consensus is that the controller method names need to be better - maybe "SearchByLocation" and "SearchByContactId" in DocumentsController? What about the controller method parameter names? Is there anything else I'm doing that seems dumb?

EssOEss
Oct 23, 2006
128-bit approved
I tend to have one controller per entity type, regardless of plurality. Otherwise, it looks like a fairly standard layout to me.

Dromio
Oct 16, 2002
Sleeper

fleshweasel posted:

Ideally a unit test suite should create whatever environment it needs to operate and that includes a test database.

I disagree. Most unit tests should never hit the database. Mock the results of your query and test the logic of your method alone. No need for an actual db. I only hit a database (in-memory) when writing integration tests.

I use Highway.Data to put a decent abstraction layer on my queries. Plays well with EF and NHibernate.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

EssOEss posted:

I tend to have one controller per entity type, regardless of plurality. Otherwise, looks fairly standard layout to me.

Ditto on one controller per entity type. It would drive me nuts to have a [Type]Controller and [Type]sController for every entity.

brap
Aug 23, 2004

Grimey Drawer

Dromio posted:

I disagree. Most unit tests should never hit the database. Mock the results of your query and test the logic of your method alone. No need for an actual db. I only hit a database (in-memory) when writing integration tests.

I use Highway.Data to put a decent abstraction layer on my queries. Plays well with EF and NHibernate.

Yes and the stored procedures would benefit from tests too.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

fleshweasel posted:

Yes and the stored procedures would benefit from tests too.

There are tools for testing stored procedures. SSDT has database tests.

mortarr
Apr 28, 2005

frozen meat at high speed

Bognar posted:

Ditto on one controller per entity type. It would drive me nuts to have a [Type]Controller and [Type]sController for every entity.

True, I guess when you're using [Http*] attribs and naming the methods things like "SearchByContact" rather than "Get" and "Put", you're not bound to return the same datatype for both singular and plural. Thanks for clearing things up; I've been writing MVC apps for ages, but I'm still fairly new to the whole REST and web api thing.

RICHUNCLEPENNYBAGS
Dec 21, 2010

mortarr posted:

True, I guess when you're using [Http*] attribs and naming the methods things like "SearchByContact" rather than "Get" and "Put", you're not bound to return the same datatype for both singular and plural. Thanks for clearing things up; I've been writing MVC apps for ages, but I'm still fairly new to the whole REST and web api thing.

I think if you use HTTP verb routing, the convention is to return multiple/all entities if you don't provide an ID and a single one if you do.
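
So mortarr's pair of controllers could collapse into something like this (sketch, reusing his names):

code:
public class DocumentsController : ApiControllerBase
{
    // GET api/documents?contactId=7 -> documents matching the search
    [HttpGet]
    public List<Document> Get([FromUri] DocumentSearchByContactIdViewModel vm) { ... }

    // GET api/documents/5 -> a single document
    [HttpGet]
    public Document Get(int id) { ... }

    // PUT api/documents/5 -> add or update, safe to call repeatedly
    [HttpPut]
    public Document Put(int id, DocumentPutViewModel vm) { ... }
}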

raminasi
Jan 25, 2005

a last drink with no ice
If I call ToDictionary() on a ParallelQuery, do the key-generating projections happen in parallel? It would make sense for them to, but I can't find any documentation of which operations are actually parallelized and which aren't.

Bognar
Aug 4, 2011

I am the queen of France
Hot Rope Guy

GrumpyDoctor posted:

If I call ToDictionary() on a ParallelQuery, do the key-generating projections happen in parallel? It would make sense for them to, but I can't find any documentation of which operations are actually parallelized and which aren't.

The results in a ParallelQuery are iterated after the tasks are merged, so no, it doesn't happen in parallel. You can view the reference source here:

http://referencesource.microsoft.com/#System.Core/System/Linq/ParallelEnumerable.cs,dcdd1c9b4c10ea06
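
If the key computation is the expensive part, you can push it into the parallel section yourself and build the dictionary from the already-computed pairs (sketch; items and ComputeKey stand in for your source and key selector):

code:
var dict = items.AsParallel()
    .Select(x => new { Key = ComputeKey(x), Value = x })  // key computation runs in parallel
    .ToDictionary(p => p.Key, p => p.Value);              // sequential, but now it's just copying pairs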

ljw1004
Jan 18, 2005

rum
[plug] The WPF team wants to get the message out about work they're doing for WPF. In this case, they've brought the XAML performance analysis tools from WinRT XAML (VS2013) over so they now also work with desktop WPF XAML (VS2015 CTP5):

http://blogs.msdn.com/b/wpf/archive/2015/01/16/new-ui-performance-analysis-tool-for-wpf-applications.aspx


Gul Banana
Nov 28, 2003

we were passing around some of those screenshots excitedly at work today :)

actually, i spent some time installing VS2015 in a vm to try it out - unfortunately it won't build our main solution yet; something about the package-restore-less workflow is upsetting to it. these perf tools are a great carrot to keep trying!
