|
amotea posted:(See here for examples: https://blog.jetbrains.com/dotnet/2014/09/04/fighting-common-wpf-memory-leaks-with-dotmemory/) Goddamn, that's pretty terrible. I knew about the event handler leak but the rest come as a surprise!
|
# ? Jul 29, 2016 07:47 |
|
|
idontcare posted:So does .net core not support SignalR at all? I know SignalR is better than just pure WebSockets and all that, but this is an opportunity for me to post my minimalist tiniest self-contained ASP.NET Core WebSockets sample code:
|
# ? Jul 29, 2016 14:25 |
|
Is it really possible that .NET Framework's implementation of Canonical XML 1.0, implemented since .NET 1.1, is incorrect? Or am I missing something crucial here? See https://connect.microsoft.com/VisualStudio/feedback/details/3002812 and https://github.com/sandersaares/xml-c14n-whitespace-defect for a description. In short, .NET Framework appears to (incorrectly) strip whitespace.
|
# ? Aug 2, 2016 08:05 |
|
EssOEss posted:Is it really possible that .NET Framework's implementation of Canonical XML 1.0, implemented since .NET 1.1, is incorrect? Or am I missing something crucial here? I didn't look too deeply into it, but as a quick response, per the documentation from XmlDocument.Load(), emphasis mine: quote:The Load method always preserves significant white space. The PreserveWhitespace property determines whether or not insignificant white space, that is white space in element content, is preserved. The default is false; white space in element content is not preserved. If you set PreserveWhitespace to true before loading your input XML, that may fix your whitespace problem.
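To illustrate biznatchio's point, here is a minimal round-trip sketch (not EssOEss's actual canonicalization code, just the PreserveWhitespace behavior; the helper name is made up):

```csharp
using System.Xml;

static class WhitespaceDemo
{
    // Loads an XML fragment and returns the root element's serialized content.
    // PreserveWhitespace must be set *before* Load/LoadXml; otherwise the
    // insignificant whitespace is already gone by the time you look.
    public static string RoundTrip(string xml, bool preserve)
    {
        var doc = new XmlDocument();
        doc.PreserveWhitespace = preserve;
        doc.LoadXml(xml);
        return doc.DocumentElement.InnerXml;
    }
}
```

With `preserve: false` the whitespace between elements is silently dropped on load, which is exactly the kind of byte-level difference that breaks a canonicalization digest.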
|
# ? Aug 2, 2016 17:23 |
|
There is a ton of EF Migration documentation and blog/forum/q&a posts, but I can't seem to find the answer I'm looking for. The scenario is:
My software sets the EF initializer like so: C# code:
C# code:
So how do we hit the panic button and install a previous version of our software (hasn't happened yet, but it will) while still using Add-Migration/Update-Database during development? It was my false assumption that MigrateDatabaseToLatestVersion actually meant "migrate the db up or down to my latest migration". Edit: PS we're using EF 6.1.3, VS 2015, .NET 4.5 epswing fucked around with this message at 22:34 on Aug 2, 2016 |
# ? Aug 2, 2016 22:30 |
|
Because the DB is already on a higher version you cannot downgrade? Seems reasonable.
|
# ? Aug 3, 2016 06:32 |
|
biznatchio posted:I didn't look too deeply into it, but as a quick response, per the documentation from XmlDocument.Load(), emphasis mine: Thanks! That helps a lot! This changes the situation somewhat in new and exciting ways. With PreserveWhitespace + XML Schema, the transform is actually correct! Without XML Schema, behavior remains incorrect - .NET Framework encodes the 0x0D character in newlines instead of stripping it. I have a schema, so good news for me in all the scenarios I care about!
|
# ? Aug 3, 2016 07:34 |
|
Mr Shiny Pants posted:Because the DB is already on a higher version you cannot downgrade? Seems reasonable. Seems reasonable that you cannot ever downgrade the database? Then what are all these DbMigrations with Up() and Down() overrides? C# code:
|
# ? Aug 3, 2016 14:33 |
|
epalm posted:My understanding is that AutomaticMigrationsEnabled = true is for projects that don't use Add-Migration/Update-Database. My team uses Add-Migration/Update-Database all the time, and we sometimes tweak the resulting migration code to execute some extra SQL or specify database-level defaults when adding columns, etc. Version 2.1 doesn't know how to downgrade from version 2.2 because both the Up and the Down methods for the migration are stored in version 2.2. In order to make this work, you would need to add a manual option in version 2.2 to downgrade to the 2.1 version of the database. You can do this with the DbMigrator class and specifying a specific migration in the Update method.
|
# ? Aug 3, 2016 14:39 |
|
Ah right, silly me. That makes sense, only 2.2 knows about how it got there. Even with all the QA in the world, sometimes a bad version gets out there, and the fastest way to get back up and running is to revert to the previous version (industrial software, time is money). But executing DbMigrator.Update in the running application seems weird and risky, the app will basically pull the rug out from under itself. What's the Right Way to handle this? epswing fucked around with this message at 15:12 on Aug 3, 2016 |
# ? Aug 3, 2016 15:07 |
|
epalm posted:Ah right, silly me. That makes sense, only 2.2 knows about how it got there. I don't know how fast you need to do this - is it always after an update? Or is it after some time? If it's usually directly after an upgrade, it might be easier to use something like a VM snapshot? Or a DB backup and restore? There are some pretty robust ways of accomplishing what you want without using C#. Just saying.
|
# ? Aug 3, 2016 15:48 |
|
Mr Shiny Pants posted:I don't how fast you need to this, is this always after an update? Or is it after some time? Yeah, I was thinking about timeline earlier. Could happen either way, right? On one hand it could be right after installing, on the other a bug could sit there for a week before crippling the software in a way that downgrading to the previous version is better/cheaper/faster than getting a fix out to the client. We write software for an industry that sees "new" things as bad and scary, so "put it back to the way it was" is a frequent sentiment (whether misguided or not).
|
# ? Aug 3, 2016 16:07 |
|
epalm posted:Yeah, I was thinking about timeline earlier. Could happen either way, right? On one hand it could be right after installing, on the other a bug could sit there for a week before crippling the software in a way that downgrading to the previous version is better/cheaper/faster than getting a fix out to the client. We write software for an industry that sees "new" things as bad and scary, so "put it back to the way it was" is a frequent sentiment (whether misguided or not). I'm not sure how often this is the case. How much testing do you put into your downgrade scripts to ensure that they don't break things even worse in the hypothetical scenario you're considering? One thing you can do is to have two phases of each release - one release with just the new database version, and then the real release with the actual logic and code changes that depend on those database changes. This gives you an intermediate point (the old version of the program, running against the new database) that you can safely roll back to. And if it turns out you do need to roll back the database migration, you can use that intermediate point to do so without causing any issues in code that relies on the upgraded database.
|
# ? Aug 3, 2016 18:29 |
|
epalm posted:Yeah, I was thinking about timeline earlier. Could happen either way, right? On one hand it could be right after installing, on the other a bug could sit there for a week before crippling the software in a way that downgrading to the previous version is better/cheaper/faster than getting a fix out to the client. We write software for an industry that sees "new" things as bad and scary, so "put it back to the way it was" is a frequent sentiment (whether misguided or not). Not sure what the EF-sanctioned way of doing this is, but off the top of my head, what about having a downgrade stored proc that gets replaced with each new version and which reverts the db structure back to the previous version (from the sounds of it, your users don't typically skip versions? could make the downgrade proc more intelligent by giving it a target version to revert to). Then, when you run your previous version's installer, it sees that the db version is ahead of what it's expecting and executes the downgrade script. An alternative, although messier, approach would be to have your newer installer leave a downgrade.dll somewhere which the older installer loads up somehow, and then executes the Down migration via EF's mechanisms, which might be necessary to keep the EF migration tracking properly synced.
|
# ? Aug 3, 2016 19:44 |
|
epalm posted:Ah right, silly me. That makes sense, only 2.2 knows about how it got there. Off the top of my head, you could ship the migration code as a separate library that only exposes a MigrateTo(Version versionNumber) method. Then you configure your installation tool so that, if it finds a newer-but-still-compatible (use semver) version of the migration library already installed on the machine, it uses that one instead of the bundled-in version when executing the migration part of the install process. Provide an option to skip that check, just on the off-chance you ship a broken migration library.
|
# ? Aug 3, 2016 20:46 |
|
I'm a little surprised that there isn't an established way to roll forwards/backwards even though EF provides a mechanism to do so (without resorting to AutomaticMigrationsEnabled = true, which as far as I can tell makes no sense for any non-trivial well-versioned application). Thanks for all the suggestions, I'll think about it and proceed.
|
# ? Aug 3, 2016 21:41 |
|
Managing database migrations with an ORM tool scares the poo poo out of me.
|
# ? Aug 3, 2016 23:14 |
|
We borrowed this guy's code https://github.com/mrahhal/Migrator.EF6 for a command line utility we provide alongside the packages using the database. We added a couple extra bits and pieces that he didn't include out of the box, like the ability to list pending migrations as well as applied migrations, but it's otherwise pretty complete. e: we also provide some PowerShell DSC scripts that use it, but they're optional and the guys doing the installs have the ability to get update scripts out of it. This is necessary because some of our customer sites (hospitals) have the world's most anal DBAs and we're lucky if we get execute on our own stored procedures.
|
# ? Aug 3, 2016 23:28 |
|
Anyone good with web APIs that require POST requests to be called? It turns out I've literally never done this and I don't even know what the right questions are to ask, so I'm finding everything super confusing. This is the documentation I have, with the actual data removed and replaced with <name of data>:code:
code:
|
# ? Aug 4, 2016 00:04 |
|
I'd use HttpClient instead of those classes you're using. I'm not on a windows machine right now but the answers here have examples of how to post with it: http://stackoverflow.com/questions/15176538/net-httpclient-how-to-post-string-value
|
# ? Aug 4, 2016 00:28 |
|
Post data is usually like: ListId=<ListId>&Foo=Bar Try using an equals instead. Also, I think you need to set the content length header entry. Also see https://msdn.microsoft.com/en-us/library/ms144221.aspx for a one line solution. dougdrums fucked around with this message at 02:55 on Aug 4, 2016 |
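A hedged sketch of what that key=value body looks like when built with the BCL instead of by hand; ListId and Foo are just the placeholder names from this post:

```csharp
using System.Collections.Generic;
using System.Net.Http;

static class FormBody
{
    // Builds an application/x-www-form-urlencoded body the same way
    // HttpClient would send it; FormUrlEncodedContent also sets the
    // Content-Type and Content-Length headers for you.
    public static string Encode(IEnumerable<KeyValuePair<string, string>> fields)
    {
        using (var content = new FormUrlEncodedContent(fields))
        {
            return content.ReadAsStringAsync().Result;
        }
    }
}
```

Letting FormUrlEncodedContent do the encoding also covers the content-length concern above, since HttpClient computes it from the content.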
# ? Aug 4, 2016 02:52 |
|
Here is an example of a web API client from the Health Clinic example project: BaseRequest.cs and DoctorsService.cs that derives from it.
|
# ? Aug 4, 2016 03:29 |
|
I have a dashboard that shows a user their content. The content can have different versions, and a few of the different versions are shown on the user's dash (like if a piece of content is in draft, or if it's completed, etc). The way it works right now is the dashboard pings a MySQL db every few seconds to see if anything new should be shown on that user's dashboard. This puts a big load on the db obviously and needs to be changed. A coworker of mine told me to look into using SignalR and Redis, both of which are new to me, but things I have briefly looked into as solutions. What I was thinking was after a new version of a piece of content is saved into the db, that same content also saves to Redis and a SignalR hub is used to send out the update to the appropriate user's dashboard if that user is connected. If the user is not connected, when they do connect the dashboard will read from Redis rather than the MySQL db to get their data. This all seems pretty straightforward so this is probably a dumb question to be asking, but is this a good way to go about it, and are SignalR and Redis good choices for it?
|
# ? Aug 4, 2016 04:44 |
|
A check every couple of seconds should not be that hard on the DB. Have you checked? Adding Redis, SignalR and caching will open a whole lot of new stuff that can go wrong. Edit: looking at mysql it can also cache queries: http://dev.mysql.com/doc/refman/5.7/en/query-cache.html Might be easier. Mr Shiny Pants fucked around with this message at 09:30 on Aug 4, 2016 |
# ? Aug 4, 2016 09:23 |
|
I ran into something today in C# which feels like it should work, but doesn't. code:
This produces the message "Operator ?? cannot be applied to operands of type Bar and Baz". Shouldn't this work? Is there a good reason that's not occurring to me why it can't?
|
# ? Aug 4, 2016 10:23 |
|
If anyone's interested re: the memory leaks when binding to lists that don't implement INotifyCollectionChanged: After some digging in the source it turns out this is only a temporary memory leak, there's some sort of counter you can decrease by creating more bindings to non-INCC lists, and if it reaches 0 it'll release the reference to your list. It's still terrible to deal with because it's really undefined when (and if ever) the list will be released, and the mechanism of forcing a release is insanely meh. See also http://referencesource.microsoft.com/#PresentationFramework/src/Framework/MS/Internal/Data/ViewManager.cs,3127039be18ca345,references quote:Dev10 bug 452676 exposed a small flaw in this scheme. The "not shown" and https://connect.microsoft.com/VisualStudio/feedback/details/772206/wpf-combobox-itemssource-binding-possible-memory-leak quote:This is not a real leak. When the collection doesn't implement INotifyCollectionChanged, we do keep references "behind the scenes" - that's the purpose of ViewManager._inactiveViewTables (third box in your reference diagram). However, we eventually release this reference if it hasn't been used for several "cleanup passes". A cleanup pass is triggered by ViewManager activity, such as creating a new View (over some other collection).
|
# ? Aug 4, 2016 11:01 |
|
chippy posted:
You need to cast one of the operands as an IFoo: code:
The original doesn't work because the expression "bar ?? baz" isn't well-typed on its own without the compiler knowing that both of the arguments need to be treated as IFoos to produce an IFoo.
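In code, with throwaway Bar/Baz types standing in for chippy's:

```csharp
interface IFoo { }
class Bar : IFoo { }
class Baz : IFoo { }

static class CoalesceDemo
{
    // "bar ?? baz" on its own has no best common type between Bar and Baz,
    // so the compiler rejects it; up-casting one operand gives ?? a type
    // to produce, and the other operand converts implicitly.
    public static IFoo FirstNonNull(Bar bar, Baz baz) => (IFoo)bar ?? baz;
}
```

(Later compiler versions have relaxed some of this inference, but the explicit cast is the fix that works everywhere.)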
|
# ? Aug 4, 2016 11:13 |
|
I was trying to do some obtusely functional programming in C# as an exercise -- I came across this "problem": I have a class that is practically like Random<T> : IEnumerable<T>, that takes some generic type parameter T and resolves it to an iterator that spits out random values of that type. I want for it to resolve array types by calling its non-array Random<T> constructor, doing something like this (where typetable holds constructor definitions): code:
code:
|
# ? Aug 4, 2016 12:58 |
|
Asymmetrikon posted:You need to cast one of the operands as an IFoo: Cool, thanks. I presumed the compiler should be smart enough to infer that from the fact that I was assigning to an IFoo.
|
# ? Aug 4, 2016 13:57 |
|
dougdrums posted:I was trying to do some obtusely functional programming in C# as an exercise -- I came across this "problem": I believe there are two confusions amplifying each other here. The first is that I think you're misunderstanding dynamic - it doesn't mean "fill in this type at runtime." It's a specific type for which all operations are dispatched dynamically. (Or something like that.) The type for "I don't know what type this will be at compile time" is object. The second confusion is that you can't cast collections of one type into collections of a wider type, which is causing your as to return null, which is causing your exception. That being all said, I have no idea what you're actually trying to do with your method there.
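A small sketch of the object-vs-dynamic distinction, assuming nothing about dougdrums's actual types:

```csharp
static class DispatchDemo
{
    public static string Describe(int x) => "int";
    public static string Describe(string s) => "string";
    public static string Describe(object o) => "object";

    // With object, overload resolution is fixed at compile time and always
    // picks Describe(object); with dynamic, the runtime binder re-resolves
    // the call against the value's actual runtime type.
    public static string ViaObject(object value) => Describe(value);
    public static string ViaDynamic(object value) => (string)Describe((dynamic)value);
}
```

So dynamic doesn't mean "fill in this type at runtime" - it means "bind every operation on this expression at runtime".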
|
# ? Aug 4, 2016 15:36 |
|
dougdrums posted:
I think, given the statement of your problem, that you're going to run into issues you can't resolve unless you use either reflection or two different methods. Is it correct to say that you have something that can generate a T, but for generating T[] or maybe List<T> you want to construct a T[] or List<T> and then delegate to a T generator to fill the array? The problem you'll run into at compile time is that the compiler can't distinguish between a T or a T[] since T[] itself is a type. You can resolve this with two different methods, one for single types and another for list types. Optionally, you can use reflection to inspect the type of T to see if it's an array or list type and handle it as necessary.
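A rough sketch of that reflection branch; the Func<Type, object> parameter is a stand-in for whatever per-type source the real Random<T> resolves from its typetable:

```csharp
using System;

static class ArrayAware
{
    // If t is an array type, build a t[] and fill it by delegating to the
    // element generator; otherwise generate a single value directly.
    public static object Create(Type t, int length, Func<Type, object> makeElement)
    {
        if (t.IsArray)
        {
            var elemType = t.GetElementType();
            var array = Array.CreateInstance(elemType, length);
            for (int i = 0; i < length; i++)
                array.SetValue(makeElement(elemType), i);
            return array;
        }
        return makeElement(t);
    }
}
```

The Type inspection happens at runtime, which is what sidesteps the compile-time ambiguity between T and T[].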
|
# ? Aug 4, 2016 16:27 |
|
Yeah, I knew that dynamic still enforces types, but just at run-time ... I don't know what I thought it would do with the type system that using object wouldn't, thanks for the reminder. I'm trying to pipeline LINQ queries, in the grand scheme of things. I have a class Random<T> : Source<T> that is used to instantiate a source for random value types, along with a few other types. When I instantiate a new Random<T>(), the enumerator gets mapped to the source for that type. (By 'source' I mean an object that extends Source<T>, which implements ISource<out T>, which implements IEnumerable<T> ...) There is a nested class, ByteArray(int count) : IEnumerable<byte[]>, that is a simple wrapper around CNGRandomNumberGenerator. All of the value types come from yielding BitConverter.To*() on a static instance of the ByteArray(sizeof(T)) source. I'm trying to implement the constructor Random<T>(int count), where T is some single-dimension array type, and count is the length. I want it to provide a source of random arrays. The problem is that Random<byte>() is a special case, it essentially boils down to ByteArray(1).Take(1).Single()[0]. It would be even more terrible than it already is to make a generic constructor that yields arrays, since this would take one byte at a time from ByteArray(1), which is silly. So I have the one case where T = byte[], and where T : every other Source<T> in the typetable. I can't just do something like code:
So more narrowly, what I'm trying to accomplish is something like: code:
I'm a little slow to the draw: I think that's what I should do, make another constructor for Random<List<T>> or just ByteArray(int).ToList(). Lists, in a functional program ?!? Who would have thought... PS thanks for taking the time to help me with this nonsense I ended up doing this kinda gross thing: code:
code:
dougdrums fucked around with this message at 20:03 on Aug 4, 2016 |
# ? Aug 4, 2016 16:48 |
|
So I'm using Redis for the first time and going over their docs regarding implementation. In the basic usage they say: quote:The central object in StackExchange.Redis is the ConnectionMultiplexer class in the StackExchange.Redis namespace; this is the object that hides away the details of multiple servers. Because the ConnectionMultiplexer does a lot, it is designed to be shared and reused between callers. You should not create a ConnectionMultiplexer per operation. So what does the implementation of that look like? Should I be creating a singleton and accessing the db that way for all client calls, or should I have a static ConnectionMultiplexer field somewhere?
|
# ? Aug 5, 2016 16:03 |
|
idontcare posted:so what's the implementation of that look like? Should I be creating a singleton and accessing the db that way for all client calls, or should I have a static ConnectionMultiplexer field somewhere. Basically, yes, you should have one instance of your multiplexer that everything references. Scroll to the bottom of this blog post to see a decent implementation using Lazy<T>. Ignore the Task.Run stuff. http://gigi.nullneuron.net/gigilabs/setting-up-a-connection-with-stackexchange-redis/. edit: Also you can ignore the lazy config options static value, that seems unnecessary. Just inline it into the lambda for multiplexer creation. Bognar fucked around with this message at 16:22 on Aug 5, 2016 |
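The shape of that Lazy<T> pattern, with a stand-in class where StackExchange.Redis's ConnectionMultiplexer.Connect(...) call would go (connection string and names here are made up):

```csharp
using System;

sealed class FakeMultiplexer
{
    public static int ConnectCount;          // counts how many times the factory ran
    public FakeMultiplexer() { ConnectCount++; }
}

static class RedisConnection
{
    // Lazy<T> is thread-safe by default: the factory runs at most once,
    // on first access, no matter how many callers race for Instance.
    private static readonly Lazy<FakeMultiplexer> _lazy =
        new Lazy<FakeMultiplexer>(() => new FakeMultiplexer());
        // real version would be something like:
        // new Lazy<ConnectionMultiplexer>(
        //     () => ConnectionMultiplexer.Connect("localhost:6379"));

    public static FakeMultiplexer Instance => _lazy.Value;
}
```

Every caller then just reads RedisConnection.Instance, which answers both of idontcare's options at once: a static field that behaves like a singleton.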
# ? Aug 5, 2016 16:19 |
|
Alright, so today at work I ran into a bizarre method that looked something like this (forgot to copy it, but you'll get the gist):code:
Dapper.SimpleCRUD uses an anonymous type to specify the Where clauses of the SQL it generates, but I'm confused as to why. On line 162, it calls GetAllProperties, which converts the anonymously-typed object it received into an IEnumerable of Properties, which it then converts to an array. The whereProps array and the whereConditions object are then passed into the BuildWhere method on line 171. Inside it, the whereProps array is again converted into an array (because the method's signature is IEnumerable). It then enters a for loop iterating over this array (although for some reason the exit condition uses Count() instead of Length) where more weird stuff starts to happen, like using ElementAt() instead of square brackets to access an array's item, or calling GetScaffoldableProperties(sourceEntity).ToArray() even though sourceEntity isn't modified anywhere, and then iterating over that and at this point I pretty much gave up because I'm already pretty sure this is terrible code. I originally meant to ask was if there were any performance benefits to using an anonymous object instead of the library itself providing some sort of WhereClause type and receiving an IEnumerable<WhereClause> but I started reading the code while writing this post and now I'm 99% sure this library is just poo poo and this post would be better off in the coding horrors thread antpocas fucked around with this message at 21:52 on Aug 5, 2016 |
# ? Aug 5, 2016 21:50 |
|
As far as I know, an arbitrary anonymous object (as opposed to a compile-time-known one) is no more powerful and no more type-safe than a simple Dictionary<string, object>. Sure it allows you to store values of different types, but since you can only check their types at runtime through .GetProperties() -> .PropertyType(), it's gonna be the same as calling typeof() on the dictionary values barring some corner cases with interfaces / inheritance (unlikely to happen in a list of loving search criteria, which are probably just primitives).
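In code, the reflection trip from an arbitrary anonymous object back to a plain name-to-value map:

```csharp
using System.Collections.Generic;

static class AnonReader
{
    // All the runtime can do with an anonymous object it didn't see at
    // compile time is walk its properties via reflection - which lands
    // you right back at a Dictionary<string, object>.
    public static Dictionary<string, object> ToDictionary(object criteria)
    {
        var result = new Dictionary<string, object>();
        foreach (var prop in criteria.GetType().GetProperties())
            result[prop.Name] = prop.GetValue(criteria);
        return result;
    }
}
```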
|
# ? Aug 5, 2016 22:29 |
|
antpocas posted:Dapper.SimpleCRUD stuff It's definitely ugly, but I can see how he arrived at taking that anonymous object, at least. He uses the object to construct a parameterized where clause and then passes the object straight through to Dapper on the Query and Execute methods, which Dapper then uses to fill the SQL parameters. He probably wasn't sure how to construct that object otherwise? It would be fairly simple to switch this around to take in an IEnumerable<Where> or similar, though. MajorBonnet fucked around with this message at 23:17 on Aug 5, 2016 |
# ? Aug 5, 2016 23:12 |
|
NihilCredo posted:As far as I know, an arbitrary anonymous object (as opposed to a compile-time-known one) is no more powerful and no more type-safe than a simple Dictionary<string, object>. ElMudo posted:It's definitely ugly, but I can see how he arrived at taking that anonymous object, at least. Also, I feel kinda bad calling the code someone put on the Internet because they thought it might be helpful to someone else poo poo antpocas fucked around with this message at 23:49 on Aug 5, 2016 |
# ? Aug 5, 2016 23:21 |
|
EDIT: Resolved - Had to initialize the Vendors in the org model with code:
My org model code:
code:
code:
Thom ZombieForm fucked around with this message at 00:50 on Aug 7, 2016 |
# ? Aug 7, 2016 00:23 |
|
|
Anyone here ever created a Polyline with a million or more points in WPF or UWA? How'd that work out for you in regards to performance (redraws, resizing, etc.)?
|
# ? Aug 8, 2016 15:58 |