|
Zemyla posted:Does it have RankNTypes and GADTs somewhere? Because I use both of those a lot. if you're used to modern (or even like, recent-ish) Haskell, lower your expectations a lot. F# is worse than retro SML in a lot of ways, and that's quite a ways behind what you're probably wanting. It's honestly bad enough that I think it's barely worth using over C#, because the difference in tooling and library support is enough to cancel out F#'s fairly weak benefits.
|
# ? Jul 17, 2017 03:51 |
|
Harik posted:I think the pro-billion define people are thinking something along the lines of: And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000
|
# ? Jul 17, 2017 07:10 |
|
QuarkJets posted:And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000 Presumably the thing that takes the values wants nanoseconds. Doing unit conversions at every point of use (DELAY_SECONDS * NS_PER_S) solely to make a number that's hidden behind a macro smaller is dumb.
|
# ? Jul 17, 2017 07:18 |
|
Sinestro posted:if you're used to modern (or even like, recent-ish) Haskell, lower your expectations a lot. F# is worse than retro SML in a lot of ways, and that's quite a ways behind what you're probably wanting. This is nonsense, ignore it. If you already know C# then F# is a great way to get into functional programming without jumping immediately into the deep end and learning an entirely new standard library simultaneously (in other words saying "gently caress it" and not bothering because who has time for that?) It's also a great way to introduce functional code into a larger .NET codebase as a zero-overhead abstraction. I once worked on a project where using an F# JavaScript engine was faster than V8 because most API calls were into the C# engine, resulting in tons of bridge glue that made V8 much slower despite being a far superior JS runtime.
|
# ? Jul 17, 2017 07:18 |
|
Foxfire_ posted:Presumably the thing that takes the values wants nanoseconds. Doing unit conversions at every point of use (DELAY_SECONDS * NS_PER_S) solely to make a number that's hidden behind a macro smaller is dumb. Then define DELAY_SECONDS and define DELAY_NS = DELAY_SECONDS * 1000000000. This accomplishes that without having to resort to defining BILLION. You immediately know how many 0s are in that integer because you know how many nanoseconds are in a second
|
# ? Jul 17, 2017 07:21 |
|
rjmccall posted:That has not worked in a long time. I mean it will compile. -O0 it will probably still be included in the output. QuarkJets posted:Then define DELAY_SECONDS and define DELAY_NS = DELAY_SECONDS * 100000000. This accomplishes that without having to resort to defining BILLION. You immediately know how many 0s are in that integer because you know how many nanoseconds are in a second Report #571: Delay was only 1/2 second instead of 5 seconds. You're also wrong - you immediately know exactly how many 0s should be in there, and your brain will probably parse it properly. Computers, however, deal in "is", not "should be", and it only sees 8. Spreading that potential mistake over your source tree? Not a good idea. Harik fucked around with this message at 07:34 on Jul 17, 2017 |
# ? Jul 17, 2017 07:30 |
|
I think we are trying to talk about different things.
|
# ? Jul 17, 2017 07:33 |
|
rjmccall posted:I think we are trying to talk about different things. Probably, I was talking about: zergstain posted:I thought any decent compiler would emit a warning if you put a semicolon after an if like that. Actually, though, would anything be lost if they made it so that a semicolon by itself isn't a valid C statement? This, as a breaking syntax change. Old code may run now (probably faster than intended if timing loops are optimized away) but would break if empty control-bodies were a compile error. zergstain was actually proposing even more than I thought - I was thinking "flow control followed by empty statement" but he was talking about removing empty statements entirely.
|
# ? Jul 17, 2017 07:43 |
|
QuarkJets posted:And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000 Better yet, you should use the typesystem, and do this instead: C++ code:
|
# ? Jul 17, 2017 07:44 |
|
Really? Someone named a language after Sri Lanka's colonial name? Programmers have no tact at all.
|
# ? Jul 17, 2017 08:18 |
|
Sinestro posted:if you're used to modern (or even like, recent-ish) Haskell, lower your expectations a lot. F# is worse than retro SML in a lot of ways, and that's quite a ways behind what you're probably wanting. I agree with this. If you CAN do it in Haskell, chances are you should. Hybrid languages like F# are mainly interesting for development targets where more sophisticated languages lack support - mobile, GUIs, even gaming (Unity). Heck, Elm is pretty much an extremely barebones version of Haskell and it justifies its existence solely through an excellent frontend dev experience.
|
# ? Jul 17, 2017 08:34 |
|
Simulated posted:This is nonsense, ignore it. You're addressing an entirely different point than what I am addressing. I'm talking about what advantages F# has over Haskell (access to .NET stuff, at the cost of enough that it's reasonable to consider using C# instead, to take advantage of the huge amount of language-specific infrastructure). I don't think that it's worth using F# except for very, very specific cases, such as... Pretty much exactly what you listed. If you want to write classic ML-style functional code inside of a larger .NET codebase, it's not terrible, but honestly it's got enough pain points (at least as of the last time I used it) that I'd consider just writing C# in a functional style. As of C# 7, it's grown most of the features that you might want, with the upcoming C# 8 taking it even further, and C# has significantly better performance and a loving multi-pass compiler, at the cost of giving up discriminated unions (which are dog-slow and another source of Problem Times when dealing with anything outside of F#) and some niche ideas like type providers and units of measure (being able to solve over an Abelian group in your type checker is neat, but it's such a hammer in search of a nail). I'd strongly counter the recommendation of it as a stepping stone from C# to functional programming, because you're gaining so little and it lacks so much of what "modern" (for definitions of modern that resemble current Haskell idioms) big-F Functional programming has, on top of those very failings encouraging you to not actually do things fully functionally. The sorts of things that it can teach you to do properly have about as much support in C#, and when you've grasped that level of functional thinking, Haskell won't be as hard, even if the syntax is less familiar without the stop-over in F#. Honestly, a conceptual split like that is probably helpful, because the semantics are different enough that any familiarity gained would be false.
That depends on what your learning goals are. If you're just looking at learning C++ in the sense of "C with nicer syntax for putting functions into structs", learning a lot about C is helpful first. If your goal is to write modern template-hell, while a knowledge of imperative programming is required, there's not too much to be gained by specifically getting it from C. This is a contrived and somewhat inaccurate example, but it's the best comparison I can think of at 2:00 AM. Sinestro fucked around with this message at 10:06 on Jul 17, 2017 |
# ? Jul 17, 2017 09:42 |
|
Harik posted:I mean it will compile. -O0 it will probably still be included in the output. Defining BILLIONS has that exact same problem. At some point you're going to rely on an assumption that a variable holds what it claims to hold QuarkJets fucked around with this message at 10:09 on Jul 17, 2017 |
# ? Jul 17, 2017 10:04 |
|
QuarkJets posted:Defining BILLIONS has that exact same problem. You haven't actually solved anything by defining BILLIONS You're right, there are literally zero benefits to writing something out in one single place, where it can be subjected to greater scrutiny to ensure that it's correct, instead of writing it out again and again every time you're going to use it.
|
# ? Jul 17, 2017 10:11 |
|
Jabor posted:You're right, there are literally zero benefits to writing something out in one single place, where it can be subjected to greater scrutiny to ensure that it's correct, instead of writing it out again and again every time you're going to use it. I'm not saying that defining BILLIONS has no benefits, I'm saying that it has downsides and usually isn't necessary
|
# ? Jul 17, 2017 10:12 |
|
QuarkJets posted:I'm not saying that defining BILLIONS has no benefits, I'm saying that it has downsides and usually isn't necessary It clearly has benefits, since absolutely zero of you noticed that 100 million up there.
|
# ? Jul 17, 2017 10:25 |
|
QuarkJets posted:I'm not saying that defining BILLIONS has no benefits, I'm saying that it has downsides and usually isn't necessary QuarkJets posted:And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000
|
# ? Jul 17, 2017 10:28 |
|
QuarkJets posted:Sure you run the risk of some idiot forgetting to change the comment after changing the value, but you're facing just as much risk of someone coming along and changing the value of BILLION. The point is that there are many of those "idiots" working on code you care about and some of them are at your company. And I stand by what I said about believing that coders are less likely to allow a variable name to rot than they are to allow comments to rot, although I admit again that I don't have evidence for it (I would be interested in seeing any evidence that exists for or against it).
|
# ? Jul 17, 2017 10:33 |
|
Dylan16807 posted:Downsides compared to literals? Like what? Using a scale that's appropriate to your purpose, such as defining NUM_NANOSEC, is actually more unit-consistent than multiplying a NUM_SEC value by BILLIONS
|
# ? Jul 17, 2017 10:33 |
|
NB. My company's libraries include a CDateTimeSpan class that allows the programmer to handle time intervals without worrying about units up until the point of interface with other APIs that expect ints/doubles/whatever. It has a convenience method for pausing for a given amount of time. So you can do CDateTimeSpan::From<CDateTimeSpan::Units::Seconds>(5.0).Pause(); This is the real correct answer in the timing example, not the silly argument we are having regarding whether you should define BILLION or whatever.
|
# ? Jul 17, 2017 10:37 |
|
This numbers discussion is thrilling and we should definitely keep at it.
|
# ? Jul 17, 2017 11:06 |
|
b0lt posted:Better yet, you should use the typesystem, and do this instead: This, yall. If your language cannot do this, get yourself a better language
|
# ? Jul 17, 2017 12:16 |
|
I think the thread found a more thrilling bikeshed to discuss than the usual syntax discussions.
|
# ? Jul 17, 2017 12:17 |
|
Ghost of Reagan Past posted:The API is basically the worst thing I've ever seen. It's so bad that it has to have been intentionally designed to cause madness. "madness" is a pretty good euphemism for "support contracts"
|
# ? Jul 17, 2017 16:18 |
|
We can't just not paint the bikeshed.
|
# ? Jul 17, 2017 17:23 |
Bongo Bill posted:We can't just not paint the bikeshed. wrong. worse is better, am I right? what's a little structural rot?
|
|
# ? Jul 17, 2017 18:25 |
|
JawnV6 posted:I mean, that’s not totally unreasonable? What test content had you run that passed? In the event there’s no existing content or HW to check high level code against, asm inspection is fine. Perhaps I wasn't clear. There are times, when debugging, that I have to look at the asm. This is just a scenario where I developed and tested the code already. Some of the older folks wanted me to additionally look at the asm to double check the compiler as standard practice.
|
# ? Jul 17, 2017 18:30 |
|
Harik posted:Probably, I was talking about : I was really thinking of it more in terms of what if it had always been that way. I was pretty certain tons of code would stop compiling if they removed empty statements in the next version of C or something. Not that a loop isn't flow control either.
|
# ? Jul 17, 2017 19:04 |
|
CPColin posted:You wouldn't be able to write for (;;) { ... } instead of while (true) { ... }, for one. Or, I guess, any for loop where you don't have an initialization or iteration step. (Only weirdos do for (;;), BTW.) for(;;) {...} is valid in very (very) specific situations. It uses fewer clock cycles than while(true) {...} in C on certain compilers. I had to use it in the past on a performance-critical piece of code running on a microprocessor a while back. I could have written the code in assembly instead to get around it, but gently caress that.
|
# ? Jul 18, 2017 00:05 |
|
If you want F# to be Haskell on .NET, you'll be disappointed. F# is C# with a handful of really useful gadgets, worse tooling and documentation, and different syntax. (Recursive types being a pain requires a different approach to program structure, but I think that using a typechecker as a negative example is kind of cheating.) IME, people who like F# quite commonly refer to it as "fun," even when writing boring business apps, which isn't something I feel like I've heard from any other language fans - I've heard "powerful," "intuitive," "ergonomic," and "oh my god look at this," but not a lot of "fun" - so I think opinions of it are substantially aesthetic and subjective. (I don't consider this a bad thing.) It also appears to be where new innovations for C# are incubated, which is kind of neat to watch.
|
# ? Jul 18, 2017 00:42 |
|
There is, in fact, a library that lets you do .NET inter-operation in Haskell, which is pretty loving amazing as far as I'm concerned. Like a lot of modern Haskell, it sort of belongs in this thread because of the hoops involved in expressing what you want. This is like line 146 of the example program. code:
|
# ? Jul 18, 2017 00:52 |
|
Speaking of .NET inter-operation: https://github.com/tjanczuk/edge/tree/master#scripting-clr-from-nodejs
|
# ? Jul 18, 2017 00:54 |
|
So I should probably just learn C# then? What would be a good way of doing so?
|
# ? Jul 18, 2017 02:43 |
|
Zemyla posted:So I should probably just learn C# then? What would be a good way of doing so? F# is nice, I like it a lot. https://www.youtube.com/watch?v=KPa8Yw_Navk Some reasons: When you program something and it compiles, it usually just works. This is not my C# experience at all. The option type is a nice construct. The result type is a nice way of structuring your code. Async workflows beat async/await any day. It is concise. Discriminated Unions are really handy. Immutability. Record types. Pattern matching. To each their own I guess. Mr Shiny Pants fucked around with this message at 06:16 on Jul 18, 2017 |
# ? Jul 18, 2017 06:02 |
|
Taffer posted:for( ;; ) {...} is valid in very (very) specific situations. It uses fewer clock cycles than while(true) {...} in C on certain compilers. I had to use it in the past on a performance-critical piece of code running on a microprocessor a while back. It's very important to check each cycle that the value of true hasn't changed.
|
# ? Jul 18, 2017 07:35 |
|
Mr Shiny Pants posted:When you program something and it compiles, it usually just works. This is not my C# experience at all.
|
# ? Jul 18, 2017 10:39 |
|
Mr Shiny Pants posted:The option type is a nice construct. You can do all of this in C#, too (except the pattern matching is pretty weak compared to F#)
|
# ? Jul 18, 2017 11:33 |
|
john donne posted:You can do all of this in C#, too (except the pattern matching is pretty weak compared to F#) Of course, but it looks nicer in F#
|
# ? Jul 18, 2017 11:38 |
|
How come for(;;) { ... } works, anyway? Shouldn't that at least be for(;true;) { ... }? Or does an empty statement return true??
|
# ? Jul 18, 2017 11:53 |
|
Doom Mathematic posted:How come for(;;) { ... } works, anyway? Shouldn't that at least be for(;true;) { ... }? Or does an empty statement return true?? It's explicitly defined in the C standard section on the for statement. 6.8.5.3/2 has: quote:Both clause-1 and expression-3 can be omitted. An omitted expression-2 is replaced by a nonzero constant. In other words, a missing condition makes the implied while clause equivalent to while(true).
|
# ? Jul 18, 2017 13:40 |