|
Alien Arcana posted:Hey thread, I have a question about generics.

This is very "smelly" code. What are you actually trying to do here?
|
# ? Oct 27, 2014 19:16 |
|
|
You can get the types of generic arguments via reflection, but I too wonder if there's a better way to do what you're doing.
|
# ? Oct 27, 2014 19:37 |
|
In more detail: if you get to the point where you need to know the types of your generic type parameters, you're suddenly fighting against the purpose of generics (not needing to know the types...). Down that path lies pain.
|
# ? Oct 27, 2014 19:46 |
|
Ithaqua posted:This is very "smelly" code. What are you actually trying to do here?

Hahaha It's from a small, personal project of mine. Let's see... basically, it's a poor-man's serialization. You pass an object of any type to the Save method, and it writes all that object's public instance fields to file using a BinaryWriter. Later, you invoke Load<T> with the type of that object, and it pulls the fields back out of the file and recreates the object. (I realize C# has its own serialization functions - I wrote this to explore C#'s capabilities more than anything else.)

LoadValue<T> is the method used to load a single, specific field of type T. Its function is to figure out which of a number of more specific methods should be called, depending on what T is. When T is non-generic, this is simple, because the method to call is also non-generic and I can just call it directly. However, when T is generic (List, Tuple, etc.), the method I want to call is also generic, and that's where I run into problems - I have to turn T into a Type object to get its generic arguments, and then there's no way to plug a Type object into the generic method I want to call.

Rooster Brooster posted:In more detail: if you get to the point where you need to know the types of your generic type parameters, you're suddenly fighting against the purpose of generics (not needing to know the types...). Down that path lies pain.

True. I might have to back up and see if there's a better way to do this. Again. Honestly, I've more or less abandoned the original project that led me to create this thing - I'm working on it for the mental exercise more than anything.

EDIT: Oh, I can use dynamic! Nothing could possibly go wrong with that!

Alien Arcana fucked around with this message at 20:27 on Oct 27, 2014 |
# ? Oct 27, 2014 19:49 |
|
Paul MaudDib posted:Parallel.ForEach has similar results. What's the explanation here?

As Ithaqua said, you're likely doing too little in each thread to get any benefit. However, if you want some serious performance improvements while searching for primes, I'd recommend using a BitArray to store whether a particular index is a prime or not. The downside of this is that you'll have to specify an endpoint and pre-allocate the BitArray to the length you need, rather than calculating to an arbitrary number of primes while maintaining a list of found primes. The upside is *significant* benefits to locality of reference and branch prediction.

The algorithm is a sieve, the same as you're using to an extent, but you calculate whether or not every number in the set is a prime. It looks like:

1. Pre-allocate a BitArray of length N equal to the size of the largest number to check for primes + 1 (e.g. for all primes under 1000, N = 1001)
2. Iterate the array (using a for loop, not foreach) starting at index 2
3. If the value at the index is 1 (true), the number is a prime.
4. If the number is a prime, iterate from index*2 to N incrementing by index and change all values at those indexes to 0 (not prime)

Since you're using a BitArray, the CPU can fit a much larger set of data in the cache. If you have a modern CPU, you can calculate all the prime numbers from 1 to 2 million without touching the RAM more than once (with a 2MB L3 cache). Also, since you're incrementing linearly through the BitArray, there are a ton more cache hits. I guarantee you'll see a more than 6x speed increase by switching to this when calculating large numbers of primes.
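The steps above, sketched in C# (a rough illustration, not Bognar's actual code; names are made up):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

static class PrimeSieve
{
    // Returns all primes <= max using a BitArray-backed sieve.
    public static List<int> Primes(int max)
    {
        var isPrime = new BitArray(max + 1, true);   // step 1: pre-allocate N = max + 1 bits
        isPrime[0] = false;
        isPrime[1] = false;

        for (int i = 2; i <= max; i++)               // step 2: plain for loop from index 2
        {
            if (!isPrime[i]) continue;               // step 3: bit still set -> i is prime
            for (long j = (long)i * 2; j <= max; j += i)
                isPrime[(int)j] = false;             // step 4: knock out the multiples
        }

        var primes = new List<int>();
        for (int i = 2; i <= max; i++)
            if (isPrime[i]) primes.Add(i);
        return primes;
    }
}
```

PrimeSieve.Primes(30) gives 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.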
|
# ? Oct 27, 2014 22:26 |
|
Bognar posted:2. Iterate the array (using a for loop, no foreach) starting at index 2 What's the reason behind using a for instead of foreach? Is it because of IEnumerable?
|
# ? Oct 28, 2014 12:24 |
|
Using foreach with IEnumerables causes an allocation for the enumerator, a stack frame for the .MoveNext call (for each element), an assignment to a local variable (for each element), a check for whether the enumerator implements IDisposable, and a call to .Dispose if it does. Granted, most of this is happening on the stack so it's still pretty fast, and the JITter is likely to inline a lot of it, but using for loops is still unambiguously faster.

Note that the compiler will treat arrays differently - it won't use GetEnumerator. It will actually create a for loop, but it also has two additional allocations behind the scenes. I'm pretty sure this optimization isn't made for BitArray, however.
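A quick illustration of the difference with BitArray specifically (a sketch; both loops just count the set bits):

```csharp
using System.Collections;

static class LoopDemo
{
    public static int CountForeach(BitArray bits)
    {
        int count = 0;
        // BitArray only exposes the non-generic IEnumerable, so this
        // allocates an enumerator and unboxes an object per element.
        foreach (bool b in bits)
            if (b) count++;
        return count;
    }

    public static int CountFor(BitArray bits)
    {
        int count = 0;
        // Direct indexing: no enumerator, no per-element unboxing.
        for (int i = 0; i < bits.Count; i++)
            if (bits[i]) count++;
        return count;
    }
}
```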
|
# ? Oct 28, 2014 15:12 |
|
I have a desktop application and I have a webapi site out on the internet. I would like the desktop application to be able to post/upload an object that has some properties and a file. I've been playing around with WebClient and HttpClient and haven't figured out a solution. Is this possible? Is it a bad idea?
|
# ? Oct 28, 2014 16:43 |
|
moctopus posted:I have a desktop application and I have a webapi site out on the internet. This will work. Depends on what you want to do with it, but I don't see why it should be a bad idea in itself.
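For what it's worth, the usual shape of this with HttpClient is a multipart POST - a sketch only, where the URL and field names are invented:

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

static class Uploader
{
    public static async Task UploadAsync(string path)
    {
        using (var client = new HttpClient())
        using (var form = new MultipartFormDataContent())
        {
            // plain properties go in as string parts
            form.Add(new StringContent("42"), "WidgetId");

            // the file goes in as a byte-array part with a file name
            form.Add(new ByteArrayContent(File.ReadAllBytes(path)),
                     "file", Path.GetFileName(path));

            // hypothetical endpoint - substitute your webapi route
            var response = await client.PostAsync("https://example.com/api/upload", form);
            response.EnsureSuccessStatusCode();
        }
    }
}
```

On the server side, Web API can read a request like this with MultipartFormDataStreamProvider.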
|
# ? Oct 28, 2014 16:48 |
|
Mr Shiny Pants posted:This will work. Depends on what you want to do with it, but I don't see why it should be a bad idea in itself.

Thank you. Funny, right after I posted my question I got it working. Cool.
|
# ? Oct 28, 2014 17:07 |
|
Alien Arcana posted:EDIT: Oh, I can use dynamic! Nothing could possibly go wrong with that! If you're doing dynamic stuff it'll come out way better to use dynamics than to gently caress around with scores of lines of reflection for each member call. That said, in your case you probably want to check typeof(IEnumerable).IsAssignableFrom on the properties and iterate over them if so. I mean, you're doing graph traversal to serialize this thing, right?
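The IsAssignableFrom check mentioned above looks roughly like this (a sketch; the helper name is made up):

```csharp
using System;
using System.Collections;

static class TypeChecks
{
    public static bool IsSequence(Type t)
    {
        // string implements IEnumerable<char>, but you almost never
        // want to serialize it element-by-element, so exclude it.
        return typeof(IEnumerable).IsAssignableFrom(t) && t != typeof(string);
    }
}
```

So List<int> and arrays come back true, while int and string come back false, and you branch into the iterate-and-recurse path accordingly.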
|
# ? Oct 29, 2014 00:09 |
|
So I am looking for a bit of advice because I seem to be running around in circles from over-thinking. This is only my third WPF app and I want it to be simple, yet I don't want forced/bastard code ruining it this time. This current app is like others I have to build... simple (6 data tables), mostly just transactional data access (assign, add, spit some output into a table), and will have about 4-5 Views/ViewModels. I'm using EF6 because it has a designer - it's a client thing for some reason - AND I like the change tracking and the transaction/rollback features.

In previous applications I used bastardized repositories. Now, I want to explore this concept: UoW w/ No Repo

WHERE I AM HAVING TROUBLE: I want to inject design-time data into the ViewModel. I also want to use Command/Query objects to run the transactions/queries by passing in the DbContext and manipulating Entities directly in the CQ object. I can't figure out what should be registered/injected to keep this as clean as possible.

I've tried registering all my CQ objects and injecting design-time objects at design time and normal objects at run-time into my VM, but then I run into the problem that I have to pass a DbContext from my VM to the CQ object, and the design-time version doesn't use a DbContext sooooo... ???? Maybe put the Context into a run-time service instead and inject that??

If I create a basic "service" per VM and register that, I have what looks to me like a repository, and I'm duplicating the CQ objects with service methods. I don't want that either.

I'm really struggling to wrap my head around a nice clean (and simple) way to get a DbContext/Entities into my VM at run-time, still use CQ objects to do all the work, AND keep my design-time data injectable (POCOs only). Someone slap me in the back of my head please. Why is this so hard?
|
# ? Oct 29, 2014 00:52 |
|
Entity Framework is Unit of Work and I'm not a WPF whiz but you probably actually shouldn't have your view models keep references to the context.
|
# ? Oct 29, 2014 03:08 |
|
crashdome posted:In previous applications I used bastardized repositories. Now, I want to explore this concept: UoW w/ No Repo

What do you mean by bastardized repositories? Also, why don't you just use repositories where you new up a DbContext on each method call? If it's because it's not testable, then I'd say that the repositories aren't what you should be heavily testing - you should be testing the code that calls the repositories. Though, if you really care about it, you can have your DbContext implement an interface (e.g. IContext) full of IDbSets, then inject it into your repositories with a factory class for IContext (a factory so you can instantiate the IContext implementation on each method call).

RICHUNCLEPENNYBAGS posted:Entity Framework is Unit of Work and I'm not a WPF whiz but you probably actually shouldn't have your view models keep references to the context.

Agreed.
|
# ? Oct 29, 2014 04:38 |
|
RICHUNCLEPENNYBAGS posted:Entity Framework is Unit of Work and I'm not a WPF whiz but you probably actually shouldn't have your view models keep references to the context.

I agree with the first part but, the article, unless I am reading it incorrectly, suggested otherwise about the second part. I'd like a suggestion if you have any thoughts.

Bognar posted:What do you mean by bastardized repositories? Also, why don't you just use repositories where you new up a DbContext on each method call? If it's because it's not testable, then I'd say that the repositories aren't what you should be heavily testing - you should be testing the code that calls the repositories. Though, if you really cared about it you can have your DbContext implement an interface (e.g. IContext) full of IDbSets, then inject into your repositories with a factory class for IContext (a factory so you can instantiate the IContext implementation on each method call).

Actually, I know what you are saying, but it has nothing to do with testing. In fact, if I didn't want design-time data I could probably punch out the non-UI stuff really quickly. It's more about what the article is suggesting - that a repository is pretty much an abstraction of EF and redundant. My bastardized repos were basically what you suggested, but since the operations were simple, I would bastardize it and make it look like a proper repository, stopping short at only what I really needed to get the job done. I would create a DbContext for every operation. Each operation would run various methods in my repository based on what my ViewModel commanded, and maybe some items leaked in from here or there. What I discovered in trying to create a proper repository pattern is that I am basically duplicating what EF can already do in another abstraction - *for no reason other than to keep stuff away from my ViewModel*.

The reason I am digging this article is that A) I am not going to use something other than EF on this project (and if I do in the far-far future I can easily address it because this is 6 tables we are talking about here), and B) the repository is still dependent on a DbContext anyway. The thought being... why not just remove the repository and put the Context in the VM at VM creation? While I could actually just do my operations in my RelayCommand delegates, I thought it would be beneficial to pull that out into objects to prevent VM bloat and convolutions. My VM is disposed, so my Context is disposed. It doesn't hang around doing various different operations like switching from major business concerns like working Customer tables and then switching to Inventory tables. Each ViewModel deals with one specific stage of operations (all of which are tightly related).

The drawback has been - and only been - that I want to inject design-time data without needing a DbContext. I appreciate the response and it has me thinking about a possible solution. I'm working this project late tonight so maybe I'll come up with an idea and post code to get everyone's thoughts tomorrow.

Edit: I should add, one of my previous possible solutions was to "new up" a DbContext in each query, but I want those Entities living for the lifetime of my ViewModel, to avoid building some sort of "wire me up to the query" layer for data in my ViewModel. Seems easier just to have the CQ object perform operations and not be concerned about creating the context environment.

crashdome fucked around with this message at 05:34 on Oct 29, 2014 |
# ? Oct 29, 2014 05:31 |
|
Bognar posted:What do you mean by bastardized repositories? Also, why don't you just use repositories where you new up a DbContext on each method call? If it's because it's not testable, then I'd say that the repositories aren't what you should be heavily testing - you should be testing the code that calls the repositories. Though, if you really cared about it you can have your DbContext implement an interface (e.g. IContext) full of IDbSets, then inject into your repositories with a factory class for IContext (a factory so you can instantiate the IContext implementation on each method call). Doesn't the new EF model represent a repository pattern? Making another repository abstraction over this would be redundant?
|
# ? Oct 29, 2014 06:37 |
|
A repository is simply an abstraction over your data access, so yes, in that sense EF is already a repository pattern since it's an abstraction over SQL. However, repositories should be used to isolate your data access logic from your business or view logic. EF doesn't really do that for you.

crashdome posted:The drawback has been - and only been - that I want to inject design-time data without needing a DBContext.

Give this a shot. Anywhere you would write a query or operation on a DbContext, make that an explicit method on a repository interface. For example, you might need to get some Foos to show on the screen:

C# code:
C# code:
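The snippets above were lost in the archive; a reconstruction of the general shape being described (my own sketch, not Bognar's original code - the ViewModel depends only on an interface, and the design-time version is pure POCOs):

```csharp
using System.Collections.Generic;

public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// The ViewModel only ever sees this interface.
public interface IFooRepository
{
    IList<Foo> GetActiveFoos();
}

// Design-time implementation: plain objects, no DbContext in sight.
public class DesignTimeFooRepository : IFooRepository
{
    public IList<Foo> GetActiveFoos()
    {
        return new List<Foo>
        {
            new Foo { Id = 1, Name = "Sample Foo", IsActive = true }
        };
    }
}

// The run-time implementation would new up the context per call, e.g.:
// using (var ctx = new MyContext())
//     return ctx.Foos.Where(f => f.IsActive).ToList();
```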
|
# ? Oct 29, 2014 13:55 |
|
crashdome posted:So I am looking for a bit of advice because I seem to be running around in circles from over-thinking. This is only my third WPF app and I want to be simple yet I don't want forced/bastard code ruining it this time. This current app is like others I have to build... simple (6 data tables), mostly just transactional data access (assign, add, spit some output into a table), and will have about 4-5 Views/ViewModels. I'm using EF6 because it has a designer - it's a client thing for some reason - AND I like the change tracking and the transaction/rollback features.

I use the above with the mediator pattern. I leave repositories well alone; they create constantly changing interfaces and implementations. A mediator can figure out which query handler to use for each query, and each query handler is a class that exists on its own.

code:
In MVC, I use the Ayende style session and transaction UOW management (with nHibernate). I can mock data to test for each query if I feel it requires it.
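A minimal sketch of the query/handler shape described above (names invented; a real mediator would resolve the right handler out of the container instead of you newing it up):

```csharp
public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    TResult Handle(TQuery query);
}

// One self-contained class per query...
public class CustomerNameQuery : IQuery<string>
{
    public int CustomerId { get; set; }
}

// ...and one per handler. Adding a new query never touches an
// existing interface, which is the point of skipping repositories.
public class CustomerNameHandler : IQueryHandler<CustomerNameQuery, string>
{
    public string Handle(CustomerNameQuery query)
    {
        // the real version would hit the session/context here
        return "Customer #" + query.CustomerId;
    }
}
```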
|
# ? Oct 29, 2014 16:23 |
|
C#/OpenXML question. I need to parse through a docx file word-by-word. I'm re-writing someone's horrifying VBA macro that checks for certain common errors and forbidden terms in documents. Their solution used 11,000 lines of code to check for a grand total of 500 errors. It's just 500 repetitions of this:

Visual Basic .NET code:
My solution is to instead, using C#, do an algorithm like the following: code:
Anyway, my question involves the problem of actually looping over individual words. In all my googling, the deepest level at which you can actually retrieve text from the word document appears to be the paragraph. Am I stuck looping over paragraphs and then using string piecing functions to loop over their words, or is there something clever in the API that I'm not finding with my google queries? Or hell, does anyone know a better algorithm for doing what I'm trying to accomplish?
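For reference, the OpenXML markup does go below the paragraph (runs and w:t text elements), but there is no word-level element, so the splitting really is on you. The paragraph-and-split approach might look like this (a sketch, assuming the DocumentFormat.OpenXml NuGet package):

```csharp
using System;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

static class DocScanner
{
    public static void ScanWords(string path, Action<string> check)
    {
        using (var doc = WordprocessingDocument.Open(path, false))
        {
            var body = doc.MainDocumentPart.Document.Body;
            foreach (var para in body.Descendants<Paragraph>())
            {
                // InnerText concatenates all the runs in the paragraph;
                // there's no finer-grained "word" node to iterate over.
                var words = para.InnerText.Split(
                    new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
                foreach (var word in words)
                    check(word);   // e.g. look it up in a HashSet of forbidden terms
            }
        }
    }
}
```

With the 500 error checks loaded into a dictionary or HashSet, the per-word check becomes a lookup instead of 500 if-statements.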
|
# ? Oct 29, 2014 17:08 |
|
Funking Giblet posted:I use the above with the mediator pattern. I leave repositories well alone, they create constantly changing interfaces and implementations. Yes! This is what I started to explore last night before I got pulled away from it. Thank you for the suggestion. I will report back when I am back on this again.
|
# ? Oct 29, 2014 18:25 |
|
Random Entity Framework question: When doing Code First, is there any tangible drawback to not including navigation properties on one end of a relationship? For example: C# code:
By not having a virtual Foo in my Bar class, I miss out on being able to call Bar.Foo, but is that it? Keeping the properties one-sided allows for easier JSON serialization, and the way the data is set up, Bar should not be accessed directly anyway, always through Foo. Am I making a huge boo boo?
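The setup in question, roughly (my sketch of the one-way mapping; property names invented):

```csharp
using System.Collections.Generic;

public class Foo
{
    public int Id { get; set; }
    public virtual ICollection<Bar> Bars { get; set; }
}

public class Bar
{
    public int Id { get; set; }
    // Just the FK column - no "public virtual Foo Foo { get; set; }"
    // back-reference, so serializing a Foo graph can't cycle back.
    public int FooId { get; set; }
}
```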
|
# ? Oct 29, 2014 19:27 |
|
When ASP.NET WebAPI processes a request for Content-Type: text/plain, I'd like to return text using my own custom format. Do I need a MediaTypeFormatter like this? I don't see where I'm supposed to customize what my ApiController action returns when it receives a request with Content-Type: text/plain. This SO question shows how to return a text/plain content type, but I still want the ApiController action to return a JSON representation when Content-Type: application/json, an XML representation when Content-Type: application/xml, etc.
|
# ? Oct 29, 2014 19:42 |
|
aBagorn posted:Random Entity Framework question: I'm not an expert, but as far as I know, the only thing you're missing out on is being able to call Bar.Foo as you said. If you never have access to a Bar without also knowing what its Foo is already, I don't see a downside. The relationship is going to be created correctly on the database's end with just the ICollection on the Foo. You should be fine.
|
# ? Oct 29, 2014 19:45 |
|
epalm posted:When ASP.NET WebAPI processes a request for Content-Type: text/plain, I'd like to return text using my own custom format. Do I need a MediaTypeFormatter like this? I don't see where I'm supposed to customize what my ApiController action returns when it receives a request with Context-Type: text/plain.

You should use the media type formatter, then wire it up in your setup. The general idea is that your service returns an object and the formatter formats it for the wire, so your controllers stay format-agnostic.
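A skeleton of such a formatter (a sketch against the Web API 2 MediaTypeFormatter base class; the output format itself is whatever you invent):

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class PlainTextFormatter : MediaTypeFormatter
{
    public PlainTextFormatter()
    {
        SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
    }

    public override bool CanReadType(Type type) { return false; }
    public override bool CanWriteType(Type type) { return true; }

    public override Task WriteToStreamAsync(Type type, object value,
        Stream writeStream, HttpContent content, TransportContext transportContext)
    {
        // Serialize however you like - this is the "custom format" hook.
        var writer = new StreamWriter(writeStream);
        writer.Write(value == null ? "" : value.ToString());
        writer.Flush();
        return Task.FromResult<object>(null);
    }
}

// In WebApiConfig.Register: config.Formatters.Add(new PlainTextFormatter());
```

The JSON and XML formatters stay registered, so content negotiation still routes application/json and application/xml requests to them.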
|
# ? Oct 29, 2014 20:12 |
|
Che Delilas posted:I'm not an expert, but as far as I know, the only thing you're missing out on is being able to call Bar.Foo as you said. If you never have access to a Bar without also knowing what its Foo is already, I don't see a downside. The relationship is going to be created correctly on the database's end with just the ICollection on the Foo. You should be fine. This is correct. There are some places in our applications where I explicitly leave it off since joining in the opposite direction could cause performance problems.
|
# ? Oct 30, 2014 00:23 |
|
Man, why does working with Interop.Excel have to be such a loving chore? Goddamnit Microsoft, get your poo poo together.
|
# ? Oct 30, 2014 02:11 |
|
All Office Interop can go gently caress itself right to hell. I was working on something today where we were running into problems trying to automate PowerPoint running through a Windows service. I eventually found this: http://stackoverflow.com/questions/1006923/automating-office-via-windows-service-on-server-2008/1680214#1680214
|
# ? Oct 30, 2014 02:37 |
|
Mr Shiny Pants posted:Doesn't the new EF model represent a repository pattern? Making another repository abstraction over this would be redundant? The only real advantage is having a play version for tests of code that calls the repositories.
|
# ? Oct 30, 2014 03:35 |
|
Bognar posted:However, if you want some serious performance improvements while searching for primes. I'd recommend using a BitArray to store whether a particular index is a prime or not. The downside of this is that you'll have to specify an endpoint and pre-allocate the BitArray to the length you need, rather than calculating to an arbitrary number of primes while maintaining a list of found primes. The upside is *significant* benefits to locality of reference and branch prediction.

Yeah, I should have been using a Sieve of Eratosthenes for maximum performance, but trial division was the first thing that came to mind and the performance was sufficient to solve the problems given.
|
# ? Oct 30, 2014 06:02 |
|
Bognar posted:All Office Interop can go gently caress itself right to hell. I was working on something today where we were running into problems trying to automate PowerPoint but running through a Windows service. I eventually found this: Trying to automate Office applications as a service is generally considered "a bad idea" and will lead to a lot of pain. The applications just weren't made to be run without everything a user has. Nowadays you can at least generate the file formats directly using XML, or use a third-party component like Aspose.
|
# ? Oct 30, 2014 15:28 |
|
Dromio posted:Trying to automate Office applications as a service is generally considered "a bad idea" and will lead to a lot of pain. The applications just weren't made to be run without everything a user has. Nowadays you can at least generate the file formats directly using XML, or use a third-party component like Aspose.

We are well aware of how much it sucks. We were using Aspose for a while, but we found valid PowerPoint files that would be corrupted after opening and saving with Aspose, so we wrote our own tool to manually handle the XML for operations like splitting and concatenating slides. We were also using Aspose to generate thumbnails, but it basically turns to poo poo on any moderately complex graph/chart or anything with error bars. So far, the only way to get thumbnails that are definitely correct is to get them from the source - and so we automate PowerPoint.

It's also worth noting that using Aspose on a large number of slides can lead to memory leaks since they improperly use finalizers. Plus it uses like 5MB of memory per slide. Writing our own Office XML mangler was the best thing we did for the reliability of this product.

Bognar fucked around with this message at 20:27 on Oct 30, 2014 |
# ? Oct 30, 2014 20:25 |
|
So, a question about style, best practice, etc. I am writing a program in F# and it needs a config file from disk. I wrote a function that creates it, or reads it from disk if it already exists. Now in C# I would have a class with a couple of methods that would load on program start and have some other methods for passing the values around.

Now in F# the program gets evaluated during compile time and because the expressions are evaluated during program start they are in effect executed. These record types are effectively static classes (at least they behave this way) that are available everywhere, making a lot of constructor value passing redundant.

Is it good practice to just have an init function that loads the necessary record types and refer to them throughout the program, instead of passing the values around like I am used to in C#? I mean, they are immutable and it saves a lot of cruft. Don't know if my explanation is clear.
|
# ? Oct 30, 2014 21:32 |
|
Is it possible to do a redirect in IIS based on the username/login? Like, say I have a bunch of users who can perform a login to see a web page by going to http://username:password(at)mywebpage.com. However, I want to redirect certain users based on their username, so when they go to http://username:password(at)mywebpage.com, they get redirected to http://username:password(at)myOtherwebpage.com, while other users still access the original page. I've been screwing around in IIS with the rewrite rules and it seems that matches only occur after the ".com". Is it possible to do what I'm trying to do?
|
# ? Oct 30, 2014 22:51 |
|
idontcare posted:Is it possible to do a redirect in IIS based on the username/login? This is a terrible idea holy crap!
|
# ? Oct 31, 2014 01:40 |
|
Most of the badness of global variables comes from the fact that they are variables. If you want to bind an immutable value to a name and reference it from any function, there's nothing wrong with that.
|
# ? Oct 31, 2014 02:28 |
|
Mr Shiny Pants posted:Now in F# the program gets evaluated during compile time and because the expressions are evaluated during program start they are in effect executed. These record types are effectively static classes ( at least they behave this way ) that are available everywhere making a lot of constructor value passing redundant. fleshweasel answered your question, but what's this all about?
|
# ? Oct 31, 2014 02:48 |
|
Dietrich posted:This is a terrible idea holy crap! Regardless if it's a terrible idea or not, I need to do it. Is it possible?
|
# ? Oct 31, 2014 03:01 |
|
You might be able to do it with a custom HTTP handler. However, can you explain why you need to do it?
|
# ? Oct 31, 2014 04:38 |
|
GrumpyDoctor posted:fleshweasel answered your question, but what's this all about?

Reading it back, I don't know what I meant. This stuff is all new to me.

Edit: Thinking about it some more: what is different from something like C# is that methods only execute when they get called. In F#, a let binding executes all the functions that comprise it during execution, without needing to be called explicitly in program flow. I don't know, can't really explain it I guess. They do not get executed during compilation, right?

Mr Shiny Pants fucked around with this message at 08:38 on Oct 31, 2014 |
# ? Oct 31, 2014 08:29 |
|
|
Dromio posted:Trying to automate Office applications as a service is generally considered "a bad idea" and will lead to a lot of pain. The applications just weren't made to be run without everything a user has. Nowadays you can at least generate the file formats directly using XML, or use a third-party component like Aspose. Yeah but the XML schema is stupid as hell if you actually look at files.
|
# ? Oct 31, 2014 11:38 |