|
ljw1004 posted:One idea: It never occurred to me that those objects could be static. Huh! Inverness posted:Will Roslyn let me add code at compile-time like PostSharp does? Oh hell yeah it would.
|
# ? Nov 18, 2014 17:53 |
|
Inverness posted:Will Roslyn let me add code at compile-time like PostSharp does? That would be fantastic. Yes, it would be fantastic, but no, there's no support for that. Continue to use PostSharp...

Existing approaches - As an experiment I built a Roslyn-powered "single file generator" VS extension. Single-file generators are used a lot: e.g. when you have a XAML file, the file's "Custom Tool" property in Solution Explorer is set to "MSBuild:Compile". That causes a built-in single-file generator to run, and it produces your MainPage.xaml.g.vb. I wrote my custom tool to read the Roslyn syntax tree and spit out additional generated code. This let me do some of what PostSharp does, e.g. implement INotifyPropertyChanged.

RoslynDOM - http://blogs.msmvps.com/kathleen/2014/08/24/roslyndom-quick-start/ This is a community effort by MVP Kathleen Dollard. She wanted to do something a bit like T4 templates but much easier to write, again powered by Roslyn. PostSharp and Kathleen have philosophical differences on whether the generated code should be checked into your source control, whether you should be able to single-step-debug through the generated code, whether you should use attributes or comments to direct the markup, whether you should write your templates in something like T4 or C++ macros or just in code that manipulates IL or syntax trees directly, and whether that should be built into the language like Nemerle or done by a tool notionally outside the language like Java annotations. But they're both keen on helping people write code at a higher level. It's an area that still needs more research and invention.

Future - One small concrete idea we had for a future version of C# is a new keyword "supercedes"... code:
|
# ? Nov 18, 2014 20:43 |
|
Gul Banana posted:Those one-off analysers are pretty convincing ads for vs2015! Can they be shared with a team via e.g. extension galleries? You can share analyzers in a bunch of ways...

(1) You can install the analyzer as a VSIX, so it's used by your local copy of VS for all project types.
(2) You can add it as a DLL reference to your project under the References > Analyzers node, just like you do any other reference. This causes it to get passed to the compiler as a command-line switch like /r: (I don't remember the exact switch it uses). This way it will travel with your project, so anyone who loads the project will also get the analyzer.
(3) You can add it as a NuGet reference, which basically boils down to (2).
(4) You can alter your msbuild file (.csproj or .targets) so that msbuild ultimately invokes the C# compiler csc.exe with a reference to the analyzer.
(5) You can create a new project template which does any of (2), (3), or (4).

I suspect that NuGet will prove the most popular.
|
# ? Nov 18, 2014 20:47 |
|
Funking Giblet posted:You seem to be thinking of them wrong. That clears it up a lot, thanks! That's definitely a lot less code for one off calculations like that.
|
# ? Nov 18, 2014 21:28 |
|
I'm trying to union a couple of lists of objects on some subset of their properties and this is the thing I came up with:C# code:
The actual properties are ints, so the comparison function in the example is very close to the real one. Am I missing something less slow and convoluted? Should I just go ahead and post this in the coding horrors thread? Munkeymon fucked around with this message at 23:13 on Nov 18, 2014 |
# ? Nov 18, 2014 23:08 |
|
Munkeymon posted:I'm trying to union a couple of lists of objects on some subset of their properties and this is the thing I came up with: You're using tuples in your example, but if you make the data type a proper class and create a corresponding implementation of IEqualityComparer, you can just use a Linq Union: var resultList = List1.Union(List2, comparer); http://msdn.microsoft.com/en-us/library/bb358407.aspx http://stackoverflow.com/questions/5276169/linq-query-with-distinct-and-union
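For illustration, a minimal sketch of that approach; the type and property names here are invented, not Munkeymon's actual ones:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical data type, a stand-in for whatever the real objects are.
class Widget
{
    public int CategoryId { get; set; }
    public int SupplierId { get; set; }
    public string Name { get; set; }
}

// Compares only the subset of properties we care about.
class WidgetComparer : IEqualityComparer<Widget>
{
    public bool Equals(Widget x, Widget y)
    {
        return x.CategoryId == y.CategoryId && x.SupplierId == y.SupplierId;
    }

    // Must be consistent with Equals: objects that compare equal
    // have to return the same hash code, or Union will miss duplicates.
    public int GetHashCode(Widget w)
    {
        return unchecked(w.CategoryId * 31 + w.SupplierId);
    }
}

// Usage: var merged = list1.Union(list2, new WidgetComparer()).ToList();
```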
|
# ? Nov 19, 2014 00:34 |
|
Che Delilas posted:You're using tuples in your example, but if you make the data type a proper class and create a corresponding implementation of IEqualityComparer, you can just use a Linq Union: No, I tried that first and I can't because it'll only call the loving GetHashCode method for reasons probably having to do with performance but totally obviating my actual intent.
|
# ? Nov 19, 2014 01:44 |
|
Munkeymon posted:No, I tried that first and I can't because it'll only call the loving GetHashCode method for reasons probably having to do with performance but totally obviating my actual intent. GetHashCode is part of that interface. Implement it so that objects which are supposed to be equal produce the same hash code; don't just call base.GetHashCode. Edit: I mean, I suppose it's not simple to choose a good hash function for what you're working with. Might be worth looking into though. Edit the 2nd: http://stackoverflow.com/questions/263400/what-is-the-best-algorithm-for-an-overridden-system-object-gethashcode/263416#263416 Che Delilas fucked around with this message at 03:24 on Nov 19, 2014 |
# ? Nov 19, 2014 03:11 |
|
So just started work and could use a pointer to some good resources to pick up on C# and SSIS jazz. Mainly concerned with the SSIS, as I have not worked with SQL much at all.
|
# ? Nov 19, 2014 03:12 |
|
Munkeymon posted:No, I tried that first and I can't because it'll only call the loving GetHashCode method for reasons probably having to do with performance but totally obviating my actual intent. It calls GetHashCode because it can then use a hash set to detect duplicates. That makes the runtime O(n+m), where m and n are the size of the two lists. Your method ends up being something like O((n+m)log(n+m)) worst case. Check out these two links from the reference sources: http://referencesource.microsoft.com/#System.Core/System/Linq/Enumerable.cs,8c5b1699640b46dc http://referencesource.microsoft.com/#System.Core/System/Linq/Enumerable.cs,9c10b234c0932864 EDIT: If you really, really want to pass in an arbitrary comparator then you don't have much of a choice. However, if you have data that easily maps to a hash function, you should use it. Tuple<> has hash combining built in, so you could use it easily for getting hashes of your data. Or you could rip the code out of here: http://referencesource.microsoft.com/#mscorlib/system/tuple.cs,49b112811bc359fd Bognar fucked around with this message at 06:20 on Nov 19, 2014 |
# ? Nov 19, 2014 06:15 |
|
Piggybacking off the GetHashCode() chat: nthing that you really should implement it (and IEqualityComparer); they will make your life so much easier. This is a good explanation of why you need to implement GetHashCode() and the problems that can happen if you don't: http://stackoverflow.com/a/371348/961464 We can debate the best GetHashCode algorithms all day, but the most common one I've seen and personally use is the one Jon Skeet gives in the answer linked above. Basically: pick two primes, seed the hash with the first, then for each public property multiply the running hash by the second prime and add that property's hash. If you use ReSharper, it has a very nice 'Generate Equality Members' code generation template that will automatically generate an implementation of ==, Equals, IEqualityComparer, GetHashCode, etc. for you.
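A sketch of that Skeet-style pattern, with hypothetical properties Foo and Bar:

```csharp
public override int GetHashCode()
{
    // unchecked: integer overflow is expected and harmless in hash math.
    unchecked
    {
        int hash = 17;                          // first prime seeds the hash
        hash = hash * 31 + Foo.GetHashCode();   // second prime folds in each property
        hash = hash * 31 + (Bar == null ? 0 : Bar.GetHashCode());
        return hash;
    }
}
```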
|
# ? Nov 19, 2014 15:52 |
|
Munkeymon posted:I'm trying to union a couple of lists of objects on some subset of their properties and this is the thing I came up with: How about this C# code:
C# code:
C# code:
Sedro fucked around with this message at 17:10 on Nov 19, 2014 |
# ? Nov 19, 2014 17:07 |
|
This all is interesting because previously the only advice I've ever read about rolling my own GetHashCode or extending the existing one is "don't" but that SO answer looks like it'll work* so thanks. E: was messing with dotnetpad too long and missed Sedro's reply which is also a neat idea - thanks! I didn't realize IStructuralEquatable would make it so simple. *though that example may be me getting too clever for my own good Munkeymon fucked around with this message at 17:56 on Nov 19, 2014 |
# ? Nov 19, 2014 17:27 |
|
Gildiss posted:So just started work and could use a pointer to some good resources to pick up on C# and SSIS jazz. If you can do C# I would generally skip SSIS unless it is a db to db import -- SSIS is poo poo if you can code, especially for error handling and such.
|
# ? Nov 19, 2014 17:34 |
|
wwb posted:If you can do C# I would generally skip SSIS unless it is a db to db import -- SSIS is poo poo if you can code, especially for error handling and such. I wish that was the case. Because C# is an easy transition for what most of my experience is. SSIS is not because I have never really done anything in SQL and only a little with Visual Basic. So I am looking at this big rear end code base that I barely recognize.
|
# ? Nov 19, 2014 18:48 |
|
Munkeymon posted:This all is interesting because previously the only advice I've ever read about rolling my own GetHashCode or extending the existing one is "don't" but that SO answer looks like it'll work* so thanks. There's nothing special about IStructuralEquatable, I was just using it as a marker for tuples of any arity. You could also use ITuple (more specific) or simply object (less specific) since everything has an Equals method. You can compare UnionBy to the built-in Linq functions: C# code:
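Sedro's actual snippets didn't survive the quoting here, but a hash-set-based UnionBy in that spirit might look like this; this is a reconstruction under assumptions, not his code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SequenceExtensions
{
    // Union of two sequences where "duplicate" means "same key".
    // Using a Tuple<> as the key gets hash combining over several
    // properties for free, as Bognar pointed out above.
    public static IEnumerable<T> UnionBy<T, TKey>(
        this IEnumerable<T> first,
        IEnumerable<T> second,
        Func<T, TKey> keySelector)
    {
        var seen = new HashSet<TKey>();           // same O(n+m) trick Linq's Union uses
        foreach (var item in first.Concat(second))
            if (seen.Add(keySelector(item)))      // Add returns false for duplicates
                yield return item;
    }
}

// Usage: list1.UnionBy(list2, x => Tuple.Create(x.CategoryId, x.SupplierId))
```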
|
# ? Nov 19, 2014 18:58 |
|
Any of you guys work with AWS S3? I'm uploading about a million images using this pretty straightforward code (VB) and the AWS SDK (2.3.8.1):code:
1. Get an enormous list of files (enumerate *.jpg)
2. Check if the object exists in the S3 bucket >without downloading the entire object<
3. If it doesn't already exist, upload it

What I'm running into trouble with is the second step. There are a couple of solutions in this SO question, but I'm not comfortable using exception handling as program flow: http://stackoverflow.com/questions/8303011/aws-s3-how-to-check-if-specified-key-already-exists-in-given-bucket-using-java or https://forums.aws.amazon.com/message.jspa?messageID=219046 I think I can use S3FileInfo.Exists (and man was this a pain to hunt down): http://docs.aws.amazon.com/sdkfornet1/latest/apidocs/html/M_Amazon_S3_IO_S3FileInfo__ctor.htm But what's not clear to me is, when I'm instantiating it eg: code:
EDIT-Sorry just to clarify, I know my approach above will work, but I'm wondering if it's the most efficient when dealing with a million objects. For example, if it keeps failing and I end up doing more checking ifexists than uploading, should I maybe keep a local log of what uploaded successfully and check against that? Of course that list will get to about 90mb assuming a million lines of text. Scaramouche fucked around with this message at 22:46 on Nov 19, 2014 |
# ? Nov 19, 2014 22:19 |
|
Ithaqua posted:ReSharper is in trouble when VS2015 comes out. They're going to continue rolling their own static analysis tools and totally ignore Roslyn. If these new analysers are faster and easier to write than ReSharper extensions, and VS doesn't hang when I open a file that is bigger than 100 lines of code, ReSharper is hosed. Since a recent promotion to senior developer here I've been waging a one-man war against overuse of and reliance on null, so much so that I set "possible null reference" as an error in TeamCity; out of 3,597 analysis errors, 2,163 of them are possible null references. The new guideline going out here is that null now means "You have requested an entity/property and it does not exist", not "There was an exception, so here's nothing", and especially not "No error, null means no error". This should probably go into coding horrors, but in a recent review of commits I also spotted this; code:
Cancelbot fucked around with this message at 01:12 on Nov 20, 2014 |
# ? Nov 20, 2014 01:01 |
|
Gildiss posted:I wish that was the case. Because C# is an easy transition for what most of my experience is. SSIS is not because I have never really done anything in SQL and only a little with Visual Basic. So I am looking at this big rear end code base that I barely recognize. I've got a really good handle on SSIS -- but the previous comments are correct, it's mainly meant for Server1 => Server2 data movement. The Script Task component is a way for bitter DBAs like me to bar developers from deploying executables to our servers, or to call a webservice to derive a value in the data stream, but not much more than that. What are you trying to do with SSIS?
|
# ? Nov 20, 2014 01:02 |
|
Cancelbot posted:If these new analysers are faster, easier to write than ReSharper extensions and VS doesn't hang when I open a file that is bigger than 100 lines of code; ReSharper is hosed. Exactly. I tried to write a ReSharper extension once and gave up after I couldn't figure out how to determine the return type of a method after hours of trying.
|
# ? Nov 20, 2014 01:05 |
|
Anaxandrides posted:I've got a really good handle on SSIS -- but the previous comments are correct, it's mainly meant for Server1 => Server2 data movement. The Script Task component is a way for bitter DBAs like me to bar developers from deploying executables to our servers, or to call a webservice to derive a value in the data stream, but not much more than that. What are you trying to do with SSIS? It looks like server to server stuff with some packages being a sea of Execute SQL Tasks. Then again this is only day 3 so I really don't know what the gently caress it's all about. Also I miss using Ubuntu. Gildiss fucked around with this message at 03:11 on Nov 20, 2014 |
# ? Nov 20, 2014 02:56 |
|
Can I get some advice on ORMs? I wonder if I should be using one at all. Here are the facts:

I'm building a fairly small project, about 6 months of dev work for a senior developer. It might take me 8 months, but that's OK, time is not critical. It's a .NET C# MVC web site. A little bit of CRUD on 4 or 5 tables. Lots of reporting, graphs, data imports from text files, exports to same. Some of the reporting will be quite complex: many joins with grouping, subqueries, etc.

The requirements are fluid and I'll be working with an excellent domain expert throughout the project. We'll be using an evolutionary prototype model, meaning we build a core area to a very high standard and then add on and modify bits as we go, all the while reviewing/refactoring. I expect the database structure to change a lot in the first 2 months of the project. We have a small but complex db: lots of M:M relationships regarding permissions and roles on business entities.

I'm new to web dev. I have a background in client-server desktop dev, including C# and Java. I have very strong skills in SQL and DB design in general. We will not have to change db providers; we're happy to be tied to whatever db we start with. We expect to have about 1000 to 2000 users of the site in total.

I've looked at EF and NHibernate. The learning curve is really steep. I need to implement the expert's ideas very quickly, so I don't want to be fighting with an ORM for hours to do something that I'd normally do in 20 minutes with SQL and a simple DAL. I think I might be missing the point of ORMs. Any advice?
|
# ? Nov 20, 2014 05:38 |
|
bpower posted:I've looked at EF and NHibernate. The learning curve is really steep. I don't know that I would categorize the learning curve on ORMs as really steep. I'm actually more inclined to say that they're dangerously easy. In my opinion, if you know SQL and are strong in DB design, you can easily understand what an ORM is doing (if you also grasp LINQ). The real issue is that ORMs make the common cases much easier than SQL, but the complex cases are much harder than SQL. For example, forget inheritance with ORMs. Some support it, but it's really just a big headache. That may sound like a strike against ORMs, but I would say that most people don't actually need a lot of the complexities that SQL provides. Also, you can always drop down to SQL with an ORM and just use it as a statement mapper if you have a really complex query, and then just rely on it for easy create/update/delete elsewhere.
|
# ? Nov 20, 2014 06:05 |
|
bpower posted:Can I some advice on ORMs. I wonder should I be using one at all? Here's the facts. I've always preferred nHibernate over EF, but both are fairly heavyweight ORMs that can be a real bitch to work with. A microORM like Dapper might be right up your alley. I haven't played with it much, but it does the ORM part without the SQL generation part -- you provide the SQL, it provides the mapping. Regardless of your choice of data access, you'll still want to wrap it all up neatly behind a bunch of injected interfaces so you can mock your data access out for unit testing. For permissions/roles, it's worth taking a look at a membership provider -- permissions/roles are pretty much a solved problem at this point, so don't reinvent the wheel if you don't need to. ASP .NET has a built-in membership provider, and there are plenty of third-party ones out there. I can't give any recommendations here.
|
# ? Nov 20, 2014 06:12 |
|
bpower posted:Can I some advice on ORMs. I wonder should I be using one at all? Here's the facts. I've been using PetaPoco and like it a lot. It's like Dapper and it is pretty easy to get the hang of. Example: code:
Mr Shiny Pants fucked around with this message at 10:03 on Nov 20, 2014 |
# ? Nov 20, 2014 09:58 |
|
I have found that using ORMs brings great success if you simply treat them as a convenient way to access rows in tables. Once you go down the road of building complex models and inheritance hierarchies and lazy loading entity sets into your data model, you have stepped on the slippery slope towards thedailywtf.com. Keep your interactions with the ORM very low-level and do not expect much abstraction. It relieves you of the need to handcraft SQL but does not mean you can treat your database as a cloud of objects (even though ORMs tend to offer features that do just that). In all my projects, I use Entity Framework (and before that used LINQ to SQL) as follows:
In code, this would just be something like the following. code:
Works beautifully!
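That low-level, rows-in-tables style of EF usage looks roughly like this; the context and entity names are made up for illustration:

```csharp
using (var db = new ShopContext())   // ShopContext: a hypothetical DbContext
{
    // Treat the ORM as typed rows: load a row, mutate it, save it.
    // No lazy-loaded object graphs, no inheritance hierarchies.
    var order = db.Orders.Single(o => o.Id == orderId);
    order.Status = OrderStatus.Shipped;
    db.SaveChanges();
}
```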
|
# ? Nov 20, 2014 10:40 |
|
EssOEss posted:I have found that using ORMs brings great success if you simply treat them as a convenient way to access rows in tables. Once you go down the road of building complex models and inheritance hierarchies and lazy loading entity sets into your data model, you have stepped on the slippery slope towards thedailywtf.com. Thanks guys. EssOEss, so what do you do for reporting and complex queries? Can I mix your approach for the simple one-row transactions with PetaPoco or similar for the complex queries? That makes a lot of sense to me: use the ORM to do what it does very well and nothing more, then use PetaPoco or similar for the rest.
|
# ? Nov 20, 2014 17:19 |
|
I have so far not run into queries I could not simply model as LINQ to Entities queries. For convenience, I have modeled some as views (seen by Entity Framework just as read-only tables in my usage). What sort of queries do you have in mind? If LINQ to Entities does not allow you to do what you need, you can always just execute raw SQL and map that to some data type that represents the output. Mind you I do not know what PetaPoco is so perhaps we are talking about different things entirely.
|
# ? Nov 20, 2014 18:04 |
|
EssOEss posted:I have so far not run into queries I could not simply model as LINQ to Entities queries. For convenience, I have modeled some as views (seen by Entity Framework just as read-only tables in my usage). What sort of queries do you have in mind? If LINQ to Entities does not allow you to do what you need, you can always just execute raw SQL and map that to some data type that represents the output. PetaPoco is like Dapper; they're very simple ORMs. So let's say you had GetAverageAgeOfGrandChildrenGroupByCity(personId) and your SQL is something like code:
|
# ? Nov 20, 2014 18:15 |
|
bpower posted:So lets say you had GetAverageAgeOfGrandChildernGroupbyCity(PersonId) You could probably write that in linq. Here are some join-on examples, and some group-by examples. But yes, you can always drop down to raw SQL too, if it's more comfortable.
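For that average-age-by-city shape specifically, the Linq version would look roughly like this; the entity names and the way the grandchild relationship is keyed are invented:

```csharp
var averages =
    from gc in db.People
    where gc.GrandparentId == personId        // however the relationship is modeled
    group gc by gc.City into g
    select new { City = g.Key, AverageAge = g.Average(p => p.Age) };
```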
|
# ? Nov 20, 2014 18:29 |
|
EF is nice because you get navigational properties. It's a convenient abstraction, but it doesn't always produce the best query plans. Of course if you want to write highly-performant code, you should probably stick to indexed views, stored procs, etc. and just use EF as a mapper.
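As an illustration of what those navigational properties buy you (entity names invented):

```csharp
// With navigation properties, joins become property access, and EF
// generates the SQL, including the join to Customers, behind the scenes.
var dublinTotal = db.Orders
    .Where(o => o.Customer.City == "Dublin")
    .Sum(o => o.Lines.Sum(l => l.Price * l.Quantity));
```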
|
# ? Nov 20, 2014 18:37 |
|
epalm posted:You could probably write that in linq. Here are some join-on examples, and some group-by examples. Maybe it's because I'm so used to looking at SQL, but gently caress, that stuff looks ugly to me. There's only one comment below the second link, and it's some guy wondering why the sorting doesn't work. I can't spend x number of days figuring out stuff like that in this project. code:
Any objections to using EF for the simple CRUD stuff and something else (PetaPoco maybe) for reports/graphs/import/export etc.? Or should I just pick one and stick with it?
|
# ? Nov 20, 2014 18:57 |
|
Gildiss posted:It looks like server to server stuff with some packages being a sea of Execute SQL Tasks. This actually isn't a terrible use case for SSIS, if you're one of those people -- a sea of Execute SQL tasks running in parallel lets you run multiple SQL calls, well, in parallel.
|
# ? Nov 20, 2014 19:07 |
|
bpower posted:Any objections to using EF for the simple crud stuff, and use something else (PetaPoco maybe) for reports/graphs/import/export etc Seems like a horror to me, but whatever you (and your coworkers) are comfortable with I guess.
|
# ? Nov 20, 2014 19:17 |
|
Mr. Crow posted:Seems like a horror to me, but whatever you (and your coworkers) are comfortable with I guess. That's the point of my question: I'm not going to do something just because it's comfortable right now. My boss is awesome; I can stop my work on the project for a week or two so I can fully master the right way to do it.
|
# ? Nov 20, 2014 19:36 |
|
bpower posted:That's the point of my question: I'm not going to do something just because it's comfortable right now. I tried EF and NHibernate and getting stuff done in both was easy. But when I started inserting or updating existing objects, things became hairy very quickly. Take a look at some of the questions posted on Microsoft blogs about EF throwing a fit because you want to update an existing object. There is always a solution, like detaching the object from the DB context so it doesn't track it anymore, reattaching it later, saving your object, etc. In the end I went with PetaPoco and write the SQL queries by hand. I don't have the time nor the inclination to learn the intricacies of an ORM. Mind you, I was just trying stuff out and maybe I set it up wrong, but looking at all the other people running into the same errors I quickly dropped EF. NHibernate is more robust, but it was way overkill for what I wanted and also has some warts. Object-to-relational mapping is hard; better to know it is hard and use something that doesn't try to abstract it away, so you have full control. Mr Shiny Pants fucked around with this message at 21:16 on Nov 20, 2014 |
# ? Nov 20, 2014 21:13 |
|
Mr Shiny Pants posted:I tried EF and nHibernate and getting stuff done in both was easy. But when I started inserting or updating existing objects things became hairy very quickly. I expect my db to change a bit during the early stages of dev, or at least I must develop with that possibility in mind. Does that almost rule out EF? Edit: \/\/\/ Thanks guys. That proves to me I still don't know enough to make a decision. I'll continue the research as they say. bpower fucked around with this message at 00:50 on Nov 21, 2014 |
# ? Nov 21, 2014 00:31 |
|
bpower posted:I expect my db to change a bit during the early stages of dev, or at least I must develop with that possibility in mind. Does that almost rule out EF? Not at all. If you're willing to let go even more, EF migrations will take care of the SQL for creating and updating a database. If you don't want to go that route, you can easily update the EF model as necessary to match your database.
|
# ? Nov 21, 2014 00:42 |
|
bpower posted:I expect my db to change a bit during the early stages of dev, or at least I must develop with that possibility in mind. Does that almost rule out EF? Not at all.
|
# ? Nov 21, 2014 00:46 |
|
Scaramouche posted:Any of you guys work with AWS S3? I'm uploading about a million images using this pretty straightforward code (VB) and the AWS SDK (2.3.8.1): I know you guys had lots of info about this but were just pulling for me behind the scenes to figure it out on my own, and here's what I figured out: I'm expecting S3 to be like a database but it is not a database. So instead I'm making a database table of all the filenames with an uploaded bool and a modified date and using that to direct my upload/modify operations instead of using the minimal S3 tools to do it. You certainly >can< as I do above, but it starts to get unwieldy at around 300,000 objects or so. The best way I've found to do this is get a local copy of your list (either via ListObjectsRequest or your own database) and work from there; it's not really worth it with the limited hardware/bandwidth I'm currently using to do it all in S3. Luckily this builds in iterative updating in future as well.
|
# ? Nov 21, 2014 01:40 |