|
LOOK I AM A TURTLE posted:I know your pain. Where I used to work we had over 1000 configurable values. The configuration values were just thrown into a dynamic table in the UI, which made it way too easy to add five new ones in every drat project since there was virtually no design work involved. I and another developer had many debates with the domain experts where we argued that the configuration creep was one of the major reasons we had so many bugs. From a mathematical point of view, adding a configuration value makes your system at least twice as complex, because every configurable value can be in at least two different states (sometimes far more), and technically every configuration value can alter the behavior of the system when interacting with any one of the other configuration values. If k is the number of unique configurable values, the number of possible configuration states is at least 2^k. In practice most of them are independent of each other, but you can never know for sure when some weird combination you hadn't thought of will behave oddly.

I'm so glad not to be alone. Configuration creep is definitely something we're suffering from. Luckily I finish this job in a week.
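The 2^k growth is easy to see with a toy sketch (purely illustrative, not from the post):

```csharp
using System;

class ConfigStates
{
    static void Main()
    {
        // k boolean settings give at least 2^k distinct configurations,
        // so the testable state space doubles with every flag you add
        for (int k = 1; k <= 10; k++)
            Console.WriteLine($"{k} flags -> {1L << k} states");
        // 10 flags already give 1024 states; at the 1000 values mentioned
        // above, 2^1000 is far beyond anything you could ever test
    }
}
```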
|
# ? Jul 13, 2017 02:27 |
|
|
# ? Jun 7, 2024 17:18 |
|
B-Nasty posted:I don't know if you're using TeamCity, but it makes building/hosting Nuget packages easy. Make some changes to package, commit + build, and go to dependent project's Nuget management to update to the version+1 or whatever directly from TeamCity's feed.

That's not a good solution when trying to make anything bigger than a trivial change in Project B to directly support something in Project A. If they're both in the same solution as project references, your workflow looks like this:

1. Modify code in Project B
2. Build Project A and B
3. Run unit tests for Project A and B
4. Discover something broke
5. Repeat

Time elapsed: 10 seconds per repetition, minus time spent writing code.

Otherwise, it looks something like this:

1. Modify code in Project B
2. Build Project B
3. Run unit tests for Project B
4. Commit to source control
5. Wait for automated build to run and the package to be published
6. Open Project A
7. Update package reference for Project B
8. Build Project A
9. Run unit tests for Project A
10. Discover something broke

Time elapsed: 5 minutes? 10 minutes? Longer? Per repetition.

Even having Project B configured to automatically package/publish to a folder on your local PC, with Project A's nuget.config configured to point there, would work better, and that would still suck.

New Yorp New Yorp fucked around with this message at 04:52 on Jul 13, 2017 |
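For the local-folder fallback mentioned at the end, a nuget.config pointing at a local feed might look like this (the path and feed name are made up):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- packages dropped here by Project B's pack step get picked up like any other feed -->
    <add key="local-dev" value="C:\LocalNugetFeed" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```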
# ? Jul 13, 2017 04:49 |
|
New Yorp New Yorp posted:This is one of the major NuGet pain points. You could do something with MSBuild conditionals in the csproj file, I think. I dimly remember having done something like that once but I could be misremembering. If you've got new-style csproj you can temporarily add the upstream projects to the downstream solution file locally, and replace the <PackageReference ... /> with a <ProjectReference ... />.
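A sketch of the temporary swap described above, assuming a new-style csproj (the package name and relative path are illustrative):

```xml
<ItemGroup>
  <!-- normally the downstream project consumes the package: -->
  <!-- <PackageReference Include="MyCompany.ProjectB" Version="1.2.3" /> -->

  <!-- while hacking on both at once, point straight at the source instead: -->
  <ProjectReference Include="..\..\project-b\src\ProjectB\ProjectB.csproj" />
</ItemGroup>
```

Revert the swap before committing, since the build server only knows about the package feed.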
|
# ? Jul 13, 2017 05:20 |
|
New Yorp New Yorp posted:This is one of the major NuGet pain points. You could do something with MSBuild conditionals in the csproj file, I think. I dimly remember having done something like that once but I could be misremembering. B-Nasty posted:I don't know if you're using TeamCity, but it makes building/hosting Nuget packages easy. Make some changes to package, commit + build, and go to dependent project's Nuget management to update to the version+1 or whatever directly from TeamCity's feed.

I've got the workflow down to:

1) make a change in the dependency
2) nuget pack
3) uninstall and reinstall the dependency

It's about the same workflow you'd get in the Java ecosystem. You'd think they could do better in .NET where the build tools are standardized (like, nuget seems to understand my .csproj?). I guess I'm spoiled by the npm link workflow.

I do use teamcity. It updates the AssemblyVersion attribute with the build number and publishes the package to Nexus. I'm more concerned about development though, before pushing stuff into source control.
|
# ? Jul 13, 2017 05:38 |
|
Night Shade posted:If you've got new-style csproj you can temporarily add the upstream projects to the downstream solution file locally, and replace the <PackageReference ... /> with a <ProjectReference ... />. I'm using vs2017 but not familiar with PackageReference, does that replace the nuget config file?
|
# ? Jul 13, 2017 05:41 |
|
Sedro posted:I'm using vs2017 but not familiar with PackageReference, does that replace the nuget config file? No, they replace packages.config. If you still have one of those, you're using a classic csproj. The dotnet core project templates all create new style csproj files, and then you can just change the target runtime from netcoreapp1.x to net4xx to get a framework dll/exe.
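As a sketch of what that retargeting looks like, a new-style csproj pointed at full framework (version numbers illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- change netcoreapp1.1 to e.g. net462 to get a full-framework dll/exe -->
    <TargetFramework>net462</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- NuGet dependencies live in the csproj now; no packages.config -->
    <PackageReference Include="Newtonsoft.Json" Version="10.0.3" />
  </ItemGroup>
</Project>
```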
|
# ? Jul 13, 2017 06:22 |
|
Night Shade posted:No, they replace packages.config. If you still have one of those, you're using a classic csproj. It's worth noting that you need to "opt in" to PackageReference via a Visual Studio setting and it's only supported in VS2017 (AFAIK).
|
# ? Jul 13, 2017 17:59 |
|
I've been trying to create a dropdownlist in MVC and I think I'm going insane. I've done a bunch of googling but for some reason this very simple concept is evading me. In my view I have a dropdownlist I'm creating, passing in a SelectItem list from my Model for my values. code:
code:
|
# ? Jul 13, 2017 23:16 |
|
New Yorp New Yorp posted:It's worth noting that you need to "opt in" to PackageReference via a Visual Studio setting and it's only supported in VS2017 (AFAIK). Yeah the new csproj are only supported by vs2017 and the dotnet command-line toolchain.
|
# ? Jul 13, 2017 23:49 |
|
Where are you storing the selecteditem? With MVC 5 I always have a "SelectedThing" property in my viewmodels. Like this: code:
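The forum code block didn't survive; a minimal sketch of the "SelectedThing" pattern described above (all names are illustrative, assuming MVC 5 and System.Web.Mvc):

```csharp
using System.Collections.Generic;

// Viewmodel holding both the options and the user's choice
public class EditFooViewModel
{
    public string SelectedFoo { get; set; }     // bound to whatever the user picks
    public List<string> FooNames { get; set; }  // populates the dropdown
}
```

In the Razor view, the helper binds the selection back to the property on postback:

```csharp
// Razor, shown here as a comment for context:
// @Html.DropDownListFor(m => m.SelectedFoo,
//     new SelectList(Model.FooNames),
//     "-- pick a foo --")
```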
|
# ? Jul 14, 2017 09:22 |
|
Oh dang, I thought just having the Selected property of the SelectItem set to "True" would suffice. In your example, can Foo just be any object with a Name and Value property?
|
# ? Jul 14, 2017 18:36 |
|
100 degrees Calcium posted:Oh dang, I thought just having the Selected property of the SelectItem set to "True" would suffice. For completeness: code:
This is how I populate my list. It is just a list with the name of the foo item. So a List<String>. Now if you want to edit the item, and have the user see what is currently selected then you just put the value from your model in the SelectedFoo and it should all just work. Mr Shiny Pants fucked around with this message at 18:57 on Jul 14, 2017 |
# ? Jul 14, 2017 18:53 |
|
Thanks a ton, Mr Shiny Pants. I had to make some adjustments but you got me there! Here's how it ended up: Models code:
code:
EDIT: Goddamn I'm an idiot. You made it clear that the first argument needed to be the Value property of the select list. I'm the one that complicated it by using ID and assuming I needed a separate Foobar property. I could simplify it even further by doing what you actually said and not what my dumb brain parsed. Thank you again. 100 degrees Calcium fucked around with this message at 22:57 on Jul 14, 2017 |
# ? Jul 14, 2017 22:53 |
|
Glad you got it working.
|
# ? Jul 15, 2017 09:15 |
|
I'm having a problem that I suspect is very simple but is surprisingly resistant to Googling. It's this: System.Data.SqlClient.SqlCommand.ExecuteScalar, executed without read access (or any other permissions), doesn't throw. It just returns null. That's ok, but I can't figure out how to manually check the permissions available to the current caller either. Why is this so hard to find? Am I thinking about it completely wrong? (To clarify a little bit: I'm trying to write code that can distinguish between "the table doesn't exist" and "you don't know whether the table exists because you can't read the database" and am coming up short.) Also, Dromio posted:What about a custom Roslyn code analyzer that will fail the build when DateTime.Now is used?
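The analyzer idea is straightforward to sketch, assuming the Microsoft.CodeAnalysis.CSharp.Workspaces packages (the diagnostic id and messages here are made up):

```csharp
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

// Hypothetical analyzer that fails the build on DateTime.Now usage
[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class DateTimeNowAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "XX0001",
        title: "Avoid DateTime.Now",
        messageFormat: "Use an injected clock instead of DateTime.Now",
        category: "Usage",
        defaultSeverity: DiagnosticSeverity.Error, // Error severity fails the build
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
        => context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.SimpleMemberAccessExpression);

    private static void AnalyzeNode(SyntaxNodeAnalysisContext context)
    {
        var access = (MemberAccessExpressionSyntax)context.Node;
        if (access.Name.Identifier.Text != "Now") return;

        // Resolve the symbol so only System.DateTime.Now is flagged, not any old .Now
        var symbol = context.SemanticModel.GetSymbolInfo(access).Symbol as IPropertySymbol;
        if (symbol?.ContainingType?.ToDisplayString() == "System.DateTime")
            context.ReportDiagnostic(Diagnostic.Create(Rule, access.GetLocation()));
    }
}
```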
|
# ? Jul 17, 2017 21:31 |
|
raminasi posted:Also, Huh. I thought it was a semi-joking suggestion but apparently they're not particularly complicated to set up. Thanks for the ideas, y'all.
|
# ? Jul 17, 2017 23:37 |
|
NihilCredo posted:Huh. I thought it was a semi-joking suggestion but apparently they're not particularly complicated to set up.

They've got some annoying gotchas, though, especially around updates. Specifically:

- VS caches the things super aggressively. If you update your analyzers package, you'll likely have to restart VS to see the changes.
- If you've got multiple projects with analyzers packages that conflict with each other (like if you update one but not another) nothing will work right.
- The VSIX debugging project often fails to work for no good reason at all, probably related to one of these dumb things
- All these problems are exacerbated if you try to implement code fixes in addition to diagnostics
|
# ? Jul 17, 2017 23:57 |
|
raminasi posted:I'm having a problem that I suspect is very simple but is surprisingly resistant to Googling. It's this: System.Data.SqlClient.SqlCommand.ExecuteScalar, executed without read access (or any other permissions), doesn't throw. It just returns null. That's ok, but I can't figure out how to manually check the permissions available to the current caller either. Why is this so hard to find? Am I thinking about it completely wrong? (To clarify a little bit: I'm trying to write code that can distinguish between "the table doesn't exist" and "you don't know whether the table exists because you can't read the database" and am coming up short.) Have you considered system catalog queries or is that even questionable in your use case?
|
# ? Jul 18, 2017 16:57 |
|
Munkeymon posted:Have you considered system catalog queries or is that even questionable in in your use case? Well, that page gives me SQL. As far as I can tell, the trick isn't finding the right SQL. I need to somehow interrogate my connection information or something.
|
# ? Jul 18, 2017 20:45 |
|
raminasi posted:Well, that page gives me SQL. As far as I can tell, the trick isn't finding the right SQL. I need to somehow interrogate my connection information or something. Querying the system catalog (INFORMATION_SCHEMA is probably what you'd want to google?) is how you're supposed to figure out if a table exists, not trying to query it and seeing what happens. It might not solve your whole problem, but you can also use https://docs.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-my-permissions-transact-sql and https://docs.microsoft.com/en-us/sql/t-sql/functions/has-perms-by-name-transact-sql to check for permissions. IDK what happens if you don't have permissions to run those, though
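A sketch of calling HAS_PERMS_BY_NAME from C# (connection string and method name are illustrative; needs a live SQL Server to run):

```csharp
using System;
using System.Data.SqlClient;

static class PermissionCheck
{
    // Returns true/false for SELECT permission on the table, or null when
    // the object doesn't exist or isn't even visible to the caller
    public static bool? CanSelectFrom(string connectionString, string tableName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT HAS_PERMS_BY_NAME(@obj, 'OBJECT', 'SELECT')", conn))
        {
            cmd.Parameters.AddWithValue("@obj", tableName);
            conn.Open();
            object result = cmd.ExecuteScalar();
            // HAS_PERMS_BY_NAME yields 1, 0, or NULL
            return result is DBNull ? (bool?)null : (int)result == 1;
        }
    }
}
```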
|
# ? Jul 18, 2017 21:25 |
|
Sedro posted:My project depends on a nuget package which is built from a project in a different solution. I want to develop some changes to the dependency. How am I supposed to deal with this? Can I switch it into a project reference temporarily somehow? Bit late, but you can use paket for this with its paket.local feature: https://fsprojects.github.io/Paket/local-file.html
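Going from memory of the linked docs, a paket.local file (kept out of source control) overrides where a package comes from during local development; names and paths here are made up:

```text
// paket.local: while this file exists, paket restores MyCompany.Dependency
// from the local build output instead of the configured feed
nuget MyCompany.Dependency -> source C:\code\dependency\bin\Debug
```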
|
# ? Jul 24, 2017 17:26 |
|
Just figured out code:
You should actually use code:
|
# ? Jul 24, 2017 17:29 |
|
amotea posted:Just figured out Why does doAsync() need to be started on the UI thread at all? This seems weird, you're posting a callback onto the UI pump so it can jump straight back off of it to do something asynchronously. also, awful-flavoured markdown. edit: to directly answer the question, Task<TResult> TaskExtensions.Unwrap<TResult>(this Task<Task<TResult>> task) Night Shade fucked around with this message at 04:26 on Jul 25, 2017 |
# ? Jul 25, 2017 04:23 |
|
Night Shade posted:Why does doAsync() need to be started on the UI thread at all? This seems weird, you're posting a callback onto the UI pump so it can jump straight back off of it to do something asynchronously. I intentionally kept the example short just to illustrate the problem (should have left out the async keyword in the lambda definition to be 100% correct I guess). Just imagine some lines of code that manipulate the UI and also do some async stuff. Unwrap() doesn't work in this case because InvokeAsync returns a DispatcherOperation<Task>, which seems to be awaitable (magically?), but is not actually a Task.
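A sketch of the double-await workaround for WPF's DispatcherOperation<Task> (the UI work here is a hypothetical stand-in):

```csharp
using System.Threading.Tasks;
using System.Windows;

// InvokeAsync(async () => ...) returns DispatcherOperation<Task>, which is
// awaitable via its GetAwaiter but is not itself a Task, so Unwrap() can't help.
async Task RunOnUiThenAwaitAsync()
{
    // First await: completes when the lambda has returned its Task on the
    // UI thread. Second await: waits for the async body to actually finish.
    await await Application.Current.Dispatcher.InvokeAsync(async () =>
    {
        // UI manipulation happens here, on the dispatcher thread
        await Task.Delay(1000); // stand-in for the real async work
    });
}
```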
|
# ? Jul 25, 2017 10:02 |
|
Philosophical, architectural, design-patterny type of a question: Is something like validating the uniqueness of an entity's name a business rule, or application logic?

I'm in the process of starting up a new project at the moment and I'm trying to encourage a rich domain approach, using DbContext in the controllers directly and encapsulating all behaviour in domain objects instead of hiding everything behind service layers, repositories and other abstractions. When the user creates a new Foo, I want to enforce the uniqueness of its name. Now it's easy enough to do on the ViewModel with something like FluentValidation, or a variety of other validation mechanisms, but that's just the UI input validation, and means that the core system is not enforcing the uniqueness, only the UI. It's still possible to use the constructor of Foo to create one with a non-unique name and insert it into the context.

So, should I also not allow the controllers access to this constructor for the domain object (the UI and core are different projects within the same solution so I could just mark it internal), and instead move the creation of a Foo to a service or factory type object that can also check the uniqueness? Or should I remove direct access to the context in the controllers altogether and introduce a 'Save/Insert' type method in a service layer that checks these requirements instead? Are these things overkill?

e: Reading into it, I feel like the best way to do it is at the UI level on the ViewModel using FluentValidation (or a custom validation attribute, or something like that), and then at the core system level by extending the DbContext.ValidateEntity() method (https://msdn.microsoft.com/en-us/data/gg193959.aspx) when we actually try and save the new Foo. Does this make sense? chippy fucked around with this message at 13:40 on Jul 25, 2017 |
# ? Jul 25, 2017 13:15 |
|
chippy posted:Philosophical, architectural, design-patterny type of a question: It's both. Your UI should enforce uniqueness in a user-friendly way, and everything beneath the UI should enforce uniqueness to avoid cases where data gets past it. I've seen too many cases where a web application's front end enforces validation, so you just go to your developer console and enable the button anyway and put junk data in and it's fine with that. If you're lucky, you can even put junk data in that causes the entire application to poo poo itself because it wasn't designed to handle bad input.
|
# ? Jul 25, 2017 13:37 |
|
New Yorp New Yorp posted:It's both. Your UI should enforce uniqueness in a user-friendly way, and everything beneath the UI should enforce uniqueness to avoid cases where data gets past it. I've seen too many cases where a web application's front end enforces validation, so you just go to your developer console and enable the button anyway and put junk data in and it's fine with that. If you're lucky, you can even put junk data in that causes the entire application to poo poo itself because it wasn't designed to handle bad input. Thanks. That was exactly my feeling - that you should validate both the posted back ViewModel and the newly created domain object itself. So, have both the UI and the core system check things. So my question now is - the first part is easy, but what's the best way to validate the Foo itself? I'm looking at extending DbContext.ValidateEntity() to check these things, but that feels like it would eventually get huge and messy with lots of type checking once you'd added validation for more than a couple of entity types. Another way would be to not give a public constructor and then move Foo creation behind a service or factory, but I was hoping to avoid stuff like that. Or, is that just what I need to do? Or is there just some nice neat 'Unique' annotation I can put on my domain model class that I'm missing?
|
# ? Jul 25, 2017 13:44 |
|
chippy posted:Thanks. That was exactly my feeling - that you should validate both the posted back ViewModel, and the newly created domain object itself.So, have boththe UI and the core system check things. So my question now is - the first part is easy, but what's the best way to validate the Foo itself? I'm looking at extending the DbContext.ValidateEntity() to check these things, but that feels like it would get eventually get huge and messy with lots of type checking once you'd added validation for more than a couple of entity types. Another way would be to not give a public constructor and then move Foo creation behind a service or factory, but I was hoping to avoid stuff like that. Or, is that just what I need to do? Or is there just some nice neat 'Unique' annotation I can put on my domain model class that I'm missing? Add a unique constraint for the field in the database, which I believe you can do in EF with data annotations or in fluent mapping. The problem with checking in-memory on the server is that you're always subject to race conditions: C# code:
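The code block was lost in extraction; the race presumably looked something like this check-then-insert sketch (EF-style fragment, names illustrative):

```csharp
// Two requests running this concurrently can BOTH pass the check before
// either has saved, so both insert and you end up with a duplicate anyway
if (!db.Foos.Any(f => f.Name == name))
{
    db.Foos.Add(new Foo { Name = name });
    db.SaveChanges(); // only a database-level unique constraint catches this
}
```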
In some cases it might make sense to do the check both in the server code and in the DB, but you always have to be prepared for the database rejecting the insert. Sort of like how the server shouldn't trust validation performed on the client, the DB shouldn't trust validation performed by the server code.
|
# ? Jul 25, 2017 14:42 |
|
LOOK I AM A TURTLE posted:Add a unique constraint for the field in the database, which I believe you can do in EF with data annotations or in fluent mapping. The problem with checking in-memory on the server is that you're always subject to race conditions: OK, thanks, sounds good. The closest thing I can find to an annotation at the moment is code:
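The block above didn't survive; assuming EF6, the attribute in question is probably IndexAttribute, roughly like this:

```csharp
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Foo
{
    public int Id { get; set; }

    // EF6: generates a unique index on the column, so the database itself
    // rejects duplicate names; string columns need a bounded length to be indexable
    [Index(IsUnique = true)]
    [MaxLength(256)]
    public string Name { get; set; }
}
```

The insert still throws on violation (a DbUpdateException wrapping the SQL error), so the calling code has to be prepared to catch and translate it.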
|
# ? Jul 25, 2017 15:40 |
|
chippy posted:OK, thanks, sounds good. I think you can write your own annotations and use them on your classes. See: https://stackoverflow.com/questions/3413715/how-to-create-custom-data-annotation-validators
|
# ? Jul 25, 2017 16:13 |
|
Oh yeah, also a good idea, thanks.
|
# ? Jul 25, 2017 16:22 |
|
chippy posted:OK, thanks, sounds good. You need an index, that's how the database enforces uniqueness
|
# ? Jul 25, 2017 16:23 |
|
chippy posted:Having an Index seems a bit unnecessary. Having an index actually ensures that whatever crappy validation you implement on the UI and the backend, with or without bugs and/or race conditions, the database will complain if your software tries to insert a duplicate value. It's your last, best hope of defense, you should use it!
|
# ? Jul 25, 2017 17:08 |
|
It feels very difficult to find .NET jobs around here that aren't primarily Web Forms. I feel like I've stepped back into 2005. I probably have in most cases I guess.
|
# ? Jul 26, 2017 00:26 |
|
Iverron posted:It feels very difficult to find .NET jobs around here that aren't primarily Web Forms. I feel like I've stepped back into 2005. Where's "here?"
|
# ? Jul 26, 2017 14:14 |
|
Something Awful.
|
# ? Jul 26, 2017 14:27 |
|
raminasi posted:Where's "here?" Midwest so... chippy posted:Something Awful.
|
# ? Jul 26, 2017 18:13 |
|
code:
|
# ? Jul 28, 2017 01:12 |
|
Xeom posted:
Is this what you're asking for? code:
|
# ? Jul 28, 2017 01:17 |
|
|
|
Potassium Problems posted:Is this what you're asking for? Yes, thank you.
|
# ? Jul 28, 2017 01:36 |