|
Aren't all of the Unity weirdnesses with C#/.NET down to the fact that it relied on old Mono implementations when support was added to the original engine? Of course, they should have loving fixed that over the 10+ years and several major versions since then, which have included things like deprecating their "Boo" scripting language and their "UnityScript" modified ECMAScript, and I think some other stuff.
|
# ? Dec 25, 2017 04:13 |
|
|
redleader posted:What the gently caress? How? Why? How? Why? The fun thing is that this isn't actually the horror you think it is. You can't actually create a new UnityEngine.Object, since it's normally backed by a C++ object. They patched the == operator so that any C# Unity Object without an underlying C++ object compares equal to null: https://blogs.unity3d.com/2014/05/16/custom-operator-should-we-keep-it/ We just hit this earlier this week at work.
|
# ? Dec 25, 2017 04:20 |
|
Soricidus posted:The tweet talks about MacBooks not iPhones. Does macOS proper do autocorrect now? Because that would be pretty stupid, as the only reason autocorrect is necessary in the first place is that touchscreen keyboards are so ridiculously inaccurate. Text entered with an actual physical keyboard should not be altered without user intervention. This link suggests that autocorrect has been on by default since 10.11, and enabling it on my computer I can confirm that 'duloxetine' gets corrected to 'fluoxetine'. It also has an 'add period with double-space' thing, which makes even less sense to me since it's not like it's particularly hard to hit the period on a proper keyboard. Maybe it's for really slow/bad typists? vOv fucked around with this message at 05:15 on Dec 25, 2017 |
# ? Dec 25, 2017 05:11 |
|
As others have pointed out, part of the problem is that autocorrect in a desktop operating system where people are presumably operating with real loving keyboards is real dumb; the other half is that even if you are going to provide autocorrect functionality, it should be made really loving obvious when the machine has taken it upon itself to change what you wrote.
|
# ? Dec 25, 2017 05:23 |
|
Like, sure, providing autocorrect and never getting poo poo wrong is probably an unattainable goal. But you can at least make the problem a whole lot less bad without being perfect.
|
# ? Dec 25, 2017 05:24 |
|
There is a dashed blue underline when something gets autocorrected... but it goes away as soon as you type a character so it's basically loving useless.
|
# ? Dec 25, 2017 05:28 |
|
EssOEss posted:I hear a lot about how Unity's .NET/C# functionality is very limited compared to the real thing but I never saw an article covering it in depth with any juicy details. Is there some good reading material to understand it in depth? I want more than horrorsnippets! Most of Unity's .NET horrors come from the fact it's just a scripting layer for a native C++ engine, and the API they came up with really wants to pretend otherwise, even if it means brutalizing .NET. So everything that interacts with the native layer makes very little sense from a .NET perspective, because it's not giving you the whole picture and you don't really know what's going on under the hood. The other horror was that up to a few months ago they were stuck on an old Mono version that only supported .NET 2.0. So let's talk about the UnityEngine.Object (which I'll shorten to UnityObject for clarity):
SupSuper fucked around with this message at 06:18 on Dec 25, 2017 |
# ? Dec 25, 2017 06:13 |
|
Steve French posted:As others have pointed out, part of the problem is that autocorrect in a desktop operating system where people are presumably operating with real loving keyboards is real dumb; the other half is that even if you are going to provide autocorrect functionality, it should be made really loving obvious when the machine has taken it upon itself to change what you wrote. Right? Even 3rd-party spellcheck programs that try to check everything you type don't presume to know better and apply corrections for you; instead they have some way of notifying you of a suggested correction (like the famous red underline) that you are free to ignore. MacBooks having autocorrect turned on by default is so Apple: a feature that, implemented perfectly, would solve a minor inconvenience, but in practice is just a source of frustration.
|
# ? Dec 25, 2017 06:20 |
|
SupSuper posted:There's no in depth article as far as I know, it's all learned from experience by frustrated developers and various random blog posts and docs.
|
# ? Dec 25, 2017 06:21 |
|
You can use ReferenceEquals(obj, null) to tell the difference
|
# ? Dec 25, 2017 06:29 |
|
So did Unity recreate the C# language with their own implementation, or do they actually have something (however minuscule) in common with .NET proper? I understand the need for an easy-to-use scripting language (after all, writing a game is hard and you're positioning yourself to make it easy), and I kinda understand the choice of C# as a quite popular and relatively easy-to-use language, but the bastardization that they came up with ... that's difficult to digest.
|
# ? Dec 25, 2017 06:58 |
|
They started by forking an old version of Mono and then glued on some "conveniences" without, apparently, considering the downsides to fundamentally altering the semantics of the language.
|
# ? Dec 25, 2017 07:04 |
|
Plorkyeran posted:It's not impractical in the sense that it costs too much. It's impractical in that having every sequence of characters that someone might want to type in your dictionary simply isn't possible, and would actually be less useful in many cases than a less inclusive dictionary (because many typos are technically words that someone might want to type). Right, so they should have a better concept of confidence: not just “is this sequence of characters the user typed a plausible typo for this word that I know?”, but also “am I confident that the user intended to type the other word, and that what they typed is not in fact a word I don’t know?” Like “teh” -> “the” is reasonable, because you’re correcting a very common typo to a very common word and your model of English can probably show that “the” is a highly probable word in context. But should you really ever correct any string of letters to “fluoxetine”? No: it’s pretty unlikely anyone will type that word unless they know what they’re doing, and you should know it’s part of a very specialised and constantly growing medical vocabulary, so if they mistype it then that may well be because they’re typing a different but similar word.
|
# ? Dec 25, 2017 09:41 |
|
Also, to be clear: this is all proper .NET semantics with operator overloading, native bindings, and reflection that people could implement themselves with .NET. It's just very unintuitive conventions.
|
# ? Dec 25, 2017 10:00 |
|
Deffon posted:Also, to be clear: this is all proper .NET semantics with operator overloading, native bindings, and reflection that people could implement themselves with .NET. It's just very unintuitive conventions. Suddenly Java’s lack of operator overloading doesn’t seem so bad. Seriously, they overloaded == to report nullness for things that are not in fact null?
|
# ? Dec 25, 2017 10:02 |
|
Soricidus posted:Suddenly Java’s lack of operator overloading doesn’t seem so bad. Kotlin seems to have implemented them in a sensible way. C++ lets you overload way too much, including important things like * and & which tend to break everything in exciting ways if you don't know exactly what you're doing.
|
# ? Dec 25, 2017 14:59 |
|
Volguus posted:So Unity recreated the C# language with their own implementation or do they actually have something (minuscule even) in common with .NET proper? I understand the need for an easy to use scripting language (after all, writing a game is hard and you're positioning yourself to make it easy) and I kinda understand the choice of C# as being a quite popular language and relatively easy to use, but the bastardization that they came up with ... that's difficult to digest. If you wanted to see weird stuff, Unity doesn't just support C#. Oh no, it also supports Boo (a .NET based Python-like), and their very own UnityScript (a .NET based JavaScript-like), all running on the same API. They're on their way out now, thankfully.
|
# ? Dec 25, 2017 19:43 |
|
SupSuper posted:It's still .NET, it's just their APIs for the native stuff that break all standards. Everything that exists solely in the .NET layer behaves normally. To their credit, they are very very slowly weeding out the weirdest bits, they're just not in any rush to do so given it's a 10-year-old API at this point. They also forked MonoDevelop, and didn't always port fixes back to their version, leading to people complaining in our repo about bugs that were already fixed. With their fork supporting newer (mainline) Mono, there's also better support for Visual Studio and VS for Mac. So they are at least trying to fix more of it now.
|
# ? Dec 25, 2017 20:01 |
|
For autocorrect, a first step should be to flag some words as dangerous. Nobody would have been upset if Fluoxetine had been autocorrected into Florida or Forestry.
|
# ? Dec 25, 2017 20:52 |
|
Dr. Arbitrary posted:For autocorrect, a first step should be to flag some words as dangerous. As a bonus, if you have words that can be recognized but not autocorrected to, you make swear words act a lot less stupid.
|
# ? Dec 25, 2017 21:10 |
|
The Phlegmatist posted:Kotlin seems to have implemented them in a sensible way. But without overloading * you wouldn't be able to implement smart pointers. I agree that overloading & is stupid.
|
# ? Dec 25, 2017 21:40 |
To be clear here, we're talking about overloading the C-style unary prefix * and & used for dereferencing pointer variables and taking addresses, not the binary operators * and & used for multiplication and bitwise AND, right? Because those are important operators to be able to overload for numeric types. My C++ is pretty weak, so I'm trying to follow the conversation.
|
|
# ? Dec 25, 2017 21:51 |
|
Eela6 posted:To be clear here, we're talking about overloading the C-style unary prefix * and & used for pointer variables and dereferencing, not the binary operands * and & used for multiplication and binary AND, right? Because that's an important operator to be able to overload for numeric types. Right, yeah.
|
# ? Dec 25, 2017 21:52 |
|
vOv posted:But without overloading * you wouldn't be able to implement smart pointers. I agree that overloading & is stupid. Yeah. I mean it should be possible to overload them, but I've seen crazy abuse of unary & in scientific libraries because, unlike unary + and -, it shares the high precedence but also doesn't look confusing in arithmetic expressions. So someone will overload unary & to return the determinant of a matrix or something and make you use std::addressof to actually get the address. e: some bespoke math library written by grad students actually did this lol The Phlegmatist fucked around with this message at 22:03 on Dec 25, 2017 |
# ? Dec 25, 2017 22:00 |
|
Eigen overloads the comma operator so you can initialize a matrix with a comma-separated list of values (the comma-initializer).
|
# ? Dec 25, 2017 22:08 |
|
vOv posted:But without overloading * you wouldn't be able to implement smart pointers. Yeah it would be terrible if you had to use method call syntax instead of operators to access the referenced values in your memory management containers. Terrible.
|
# ? Dec 25, 2017 22:13 |
|
Yes, it would be pretty terrible. Syntactically privileging built-in types results in people using them even when a user-defined one would be more appropriate.
|
# ? Dec 25, 2017 22:18 |
|
http://9tabs.com/random/2017/12/23/evil-coding-incantations.html
|
# ? Dec 25, 2017 23:39 |
|
Plorkyeran posted:Yes, it would be pretty terrible. Syntactically privileging built-in types results in people using them even when a user-defined one would be more appropriate. It then becomes the responsibility of the API writer to make sure that the semantics are not too out-of-touch.
|
# ? Dec 25, 2017 23:41 |
|
The worst part is overloading || and &&, which also silently makes them non-short-circuiting.
|
# ? Dec 25, 2017 23:41 |
|
Jabor posted:The worst part is overloading || and &&, which also silently makes them non-short-circuiting. In the same vein, overloading the comma operator makes it go from evaluating expressions left to right to an unspecified evaluation order (until C++17, which finally guaranteed left-to-right for the overloaded form too).
|
# ? Dec 25, 2017 23:48 |
|
Plorkyeran posted:Yes, it would be pretty terrible. Syntactically privileging built-in types results in people using them even when a user-defined one would be more appropriate. That’s a good argument for allowing overloads of things like arithmetic operators and []. Less so for unary *, since raw pointers are still syntactically privileged at the point of definition, and using a different idiom for smart pointers and iterators would have allowed any use of pointer syntax or pointer arithmetic to become a code smell.
|
# ? Dec 26, 2017 00:07 |
|
Jabor posted:The worst part is overloading || and &&, which also silently makes them non-short-circuiting. C# does have short-circuiting via some craziness fit for this thread. You overload the true and false operators for the type as well as the bitwise AND and OR. Then x && y becomes operator_true(x) ? operator_and(x, y) : x. Scala has the best answer I've seen. There are no operators, just methods with non-alphanumeric names, and short-circuiting is trivial with call-by-name.
|
# ? Dec 26, 2017 00:15 |
|
Soricidus posted:That’s a good argument for allowing overloads of things like arithmetic operators and []. Less so for unary *, since raw pointers are still syntactically privileged at the point of definition, and using a different idiom for smart pointers and iterators would have allowed any use of pointer syntax or pointer arithmetic to become a code smell. Interestingly enough, Microsoft decided to use ^ and % to replace * and &, respectively, when referring to CLR managed memory objects, instead of either overloading or using std::shared_ptr<T>, etc.
|
# ? Dec 26, 2017 01:07 |
|
Soricidus posted:That’s a good argument for allowing overloads of things like arithmetic operators and []. Less so for unary *, since raw pointers are still syntactically privileged at the point of definition, and using a different idiom for smart pointers and iterators would have allowed any use of pointer syntax or pointer arithmetic to become a code smell. Making raw pointers valid iterators without an adapter (and all the accompanying design trade offs) would be a terrible idea if the whole thing was being designed from scratch, but it was a key part in getting people to actually use the STL algorithms in the days before compilers could optimize thin wrappers out of existence, so even in retrospect I think it was the right decision.
|
# ? Dec 26, 2017 02:10 |
|
Sedro posted:C# does have short-circuiting via some craziness fit for this thread. You overload the true and false operators for the type as well as the bitwise AND and OR. Then x && y becomes operator_true(x) ? operator_and(x, y) : x. Yeah, that one has always impressed me. They clearly had very specific use-cases in mind for coercing values to true or false. Don't implement true and false for numeric types or you'll suddenly find that 4 && 8 is false.
|
# ? Dec 26, 2017 04:07 |
|
Soricidus posted:That’s a good argument for allowing overloads of things like arithmetic operators and []. Less so for unary *, since raw pointers are still syntactically privileged at the point of definition, and using a different idiom for smart pointers and iterators would have allowed any use of pointer syntax or pointer arithmetic to become a code smell. You'd still need raw pointers if you wanted to construct a vector of things you don't own, since you can't have a vector of references (and the same goes for many other container types). I agree that pointer arithmetic is bad 99% of the time but you can't do it with smart pointers anyway.
|
# ? Dec 26, 2017 05:51 |
|
vOv posted:You'd still need raw pointers if you wanted to construct a vector of things you don't own, since you can't have a vector of references (and the same goes for many other container types). I've got good news!
|
# ? Dec 26, 2017 05:56 |
|
Absurd Alhazred posted:Interestingly enough, Microsoft decided to use ^ and % to replace * and &, respectively, when referring to CLR managed memory objects, instead of either overloading or using std::shared_ptr<T>, etc. There are some good arguments for not using shared_ptr due to the semantic differences (i.e. CLR objects can be moved), but it'd have been much better if they had used "managed_ptr<T>" or something that didn't require new syntax, which they did do for arrays.
|
# ? Dec 26, 2017 06:20 |
|
|
OneEightHundred posted:I'm still waiting for that to conflict with a standardized change in some future edition of C++ and bite them in the rear end. That would not be a very Microsoft thing to do.
|
# ? Dec 26, 2017 06:30 |