Sinestro
Oct 31, 2010

The perfect day needs the perfect set of wheels.

Zemyla posted:

Does it have RankNTypes and GADTs somewhere? Because I use both of those a lot.

if you're used to modern (or even like, recent-ish) Haskell, lower your expectations a lot. F# is worse than retro SML in a lot of ways, and that's quite a ways behind what you're probably wanting.

It's honestly bad enough that I think it's barely worth using over C#, because the difference in support is able to balance out the fairly weak benefits of F#.


QuarkJets
Sep 8, 2008

Harik posted:

I think the pro-billion define people are thinking something along the lines of:

code:
#define BILLION 1000000000UL

#define DELAY_NANOSECONDS 5 * BILLION   // this is OK.

GTEST(BILLION_DEFINE_UNMOLESTED) {
  ASSERT_EQ(BILLION, 1000*1000*1000);
}  // if you have to do this, you should institute employment review, not code review.

for (i; i < BILLION; i++) { // instantly fail code review.
Context matters. I'd argue against re-normalizing against a larger unit, especially since it might lead to fractions and floating point math. Computers can add large numbers just fine.

And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000

Foxfire_
Nov 8, 2010

QuarkJets posted:

And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000

Presumably the thing that takes the values wants nanoseconds. Doing unit conversions at every point of use (DELAY_SECONDS * NS_PER_S) solely to make a number that's hidden behind a macro smaller is dumb.

Simulated
Sep 28, 2001
Lowtax giveth, and Lowtax taketh away.
College Slice

Sinestro posted:

if you're used to modern (or even like, recent-ish) Haskell, lower your expectations a lot. F# is worse than retro SML in a lot of ways, and that's quite a ways behind what you're probably wanting.

It's honestly bad enough that I think it's barely worth using over C#, because the difference in support is able to balance out the fairly weak benefits of F#.

This is nonsense, ignore it.

If you already know C# then F# is a great way to get into functional programming without jumping immediately into the deep end and learning an entirely new standard library simultaneously (in other words saying "gently caress it" and not bothering because who has time for that?)

It's also a great way to introduce functional code into a larger .NET codebase as a zero-overhead abstraction. I once worked on a project where using an F# JavaScript engine was faster than V8 because most API calls were into the C# engine, resulting in tons of bridge glue that made V8 much slower despite being a far superior JS runtime.

QuarkJets
Sep 8, 2008

Foxfire_ posted:

Presumably the thing that takes the values wants nanoseconds. Doing unit conversions at every point of use (DELAY_SECONDS * NS_PER_S) solely to make a number that's hidden behind a macro smaller is dumb.

Then define DELAY_SECONDS and define DELAY_NS = DELAY_SECONDS * 1000000000. This accomplishes that without having to resort to defining BILLION. You immediately know how many 0s are in that integer because you know how many nanoseconds are in a second

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

rjmccall posted:

That has not worked in a long time.

I mean it will compile. -O0 it will probably still be included in the output.

QuarkJets posted:

Then define DELAY_SECONDS and define DELAY_NS = DELAY_SECONDS * 100000000. This accomplishes that without having to resort to defining BILLION. You immediately know how many 0s are in that integer because you know how many nanoseconds are in a second

Report #571: Delay was only 1/2 second instead of 5 seconds. You're also wrong - you immediately know exactly how many 0s should be in there, and your brain will probably parse it properly. Computers, however, deal in "is", not "should be", and it only sees 8.

Spreading that potential mistake over your source tree? Not a good idea.

Harik fucked around with this message at 07:34 on Jul 17, 2017

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
I think we are trying to talk about different things.

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

rjmccall posted:

I think we are trying to talk about different things.

Probably, I was talking about :

zergstain posted:

I thought any decent compiler would emit a warning if you put a semicolon after an if like that. Actually, though, would anything be lost if they made it so that a semicolon by itself isn't a valid C statement?

This, as a breaking syntax change. Old code may run now (probably faster than intended if timing loops are optimized away) but would break if empty control-bodies were a compile error. zergstain was actually proposing even more than I thought - I was thinking "flow control followed by empty statement" but he was talking about removing empty statements entirely.

b0lt
Apr 29, 2005

QuarkJets posted:

And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000

Better yet, you should use the typesystem, and do this instead:

C++ code:
#include <chrono>

using namespace std::chrono_literals;

static constexpr auto DELAY = 5s;

Jaded Burnout
Jul 10, 2004


Really? Someone named a language after Sri Lanka's colonial name? Programmers have no tact at all.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Sinestro posted:

if you're used to modern (or even like, recent-ish) Haskell, lower your expectations a lot. F# is worse than retro SML in a lot of ways, and that's quite a ways behind what you're probably wanting.

I agree with this. If you CAN do it in Haskell, chances are you should. Hybrid languages like F# are mainly interesting for development targets where more sophisticated languages lack support - mobile, GUIs, even gaming (Unity).

Heck, Elm is pretty much an extremely barebones version of Haskell and it justifies its existence solely through an excellent frontend dev experience.

Sinestro
Oct 31, 2010

The perfect day needs the perfect set of wheels.

Simulated posted:

This is nonsense, ignore it.

If you already know C# then F# is a great way to get into functional programming without jumping immediately into the deep end and learning an entirely new standard library simultaneously (in other words saying "gently caress it" and not bothering because who has time for that?)

It's also a great way to introduce functional code into a larger .NET codebase as a zero-overhead abstraction. I once worked on a project where using an F# JavaScript engine was faster than V8 because most API calls were into the C# engine, resulting in tons of bridge glue that made V8 much slower despite being a far superior JS runtime.

You're addressing an entirely different point than what I am addressing.

I'm talking about what advantages F# has over Haskell (access to .Net stuff, at the cost of enough that it's probably not too far to consider using C# instead, to take advantage of the huge amount of language-specific infrastructure). I don't think that it's worth using F# except for very, very specific cases, such as... Pretty much exactly what you listed. If you want to write classic ML-style functional code inside of a larger .NET codebase, it's not terrible, but honestly it's got enough pain points (at least as of the last time I used it) that I'd consider just writing C# in a functional style. It's grown most of the features that you might want to use in C# 7, with the upcoming C# 8 taking it even further, and C# has significantly better performance and a loving multi-pass compiler, at the cost of giving up discriminated unions (which are dog-slow and another source of Problem Times when dealing with anything outside of F#) and some niche ideas like type providers and units of measure (being able to solve over an Abelian group in your type checker is neat, but it's such a hammer in search of a nail).

I'd strongly counter the recommendation of it as a stepping stone from C# to functional programming, because you're gaining so little and it lacks so much of what "modern" (for definitions of modern that resemble current Haskell idioms) big-F Functional programming has, on top of those very failings encouraging you to not actually do things fully functionally. The sorts of things that it can teach you to do properly have about as much support in C#, and when you've grasped that level of functional thinking, Haskell won't be as hard, even if the syntax is less familiar without the stop-over in F#. Honestly, a conceptual split like that is probably helpful, because the semantics are different enough that any familiarity gained would be false.

That depends on what your goals in terms of learning are. If you're just looking at learning C++ in the sense of "C with nicer syntax for putting functions into structs", learning a lot about C is helpful first. If your goal is to write modern template-hell, while a knowledge of imperative programming is required, there's not too much to be gained by specifically getting it from C. This is a contrived and somewhat inaccurate example, but it's the best comparison I can think of at 2:00 AM.

Sinestro fucked around with this message at 10:06 on Jul 17, 2017

QuarkJets
Sep 8, 2008

Harik posted:

I mean it will compile. -O0 it will probably still be included in the output.


Report #571: Delay was only 1/2 second instead of 5 seconds. You're also wrong - you immediately know exactly how many 0s should be in there, and your brain will probably parse it properly. Computers, however, deal in "is", not "should be", and it only sees 8.

Spreading that potential mistake over your source tree? Not a good idea.

Defining BILLIONS has that exact same problem. At some point you're going to rely on an assumption that a variable holds what it claims to hold

QuarkJets fucked around with this message at 10:09 on Jul 17, 2017

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

QuarkJets posted:

Defining BILLIONS has that exact same problem. You haven't actually solved anything by defining BILLIONS

You're right, there are literally zero benefits to writing something out in one single place, where it can be subjected to greater scrutiny to ensure that it's correct, instead of writing it out again and again every time you're going to use it.

QuarkJets
Sep 8, 2008

Jabor posted:

You're right, there are literally zero benefits to writing something out in one single place, where it can be subjected to greater scrutiny to ensure that it's correct, instead of writing it out again and again every time you're going to use it.

I'm not saying that defining BILLIONS has no benefits, I'm saying that it has downsides and usually isn't necessary

Harik
Sep 9, 2001

From the hard streets of Moscow
First dog to touch the stars


Plaster Town Cop

QuarkJets posted:

I'm not saying that defining BILLIONS has no benefits, I'm saying that it has downsides and usually isn't necessary

It clearly has benefits, since absolutely zero of you noticed that 100 million up there.

Dylan16807
May 12, 2010

QuarkJets posted:

I'm not saying that defining BILLIONS has no benefits, I'm saying that it has downsides and usually isn't necessary
Downsides compared to literals? Like what?


QuarkJets posted:

And my contention is that you shouldn't define DELAY_NANOSECONDS to be 5 billion, you should instead define DELAY_SECONDS to be 5 or at worst DELAY_MILLISECONDS to be 5000
This is just ridiculous. Having consistent units is much more important than it looking pretty.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

QuarkJets posted:

Sure you run the risk of some idiot forgetting to change the comment after changing the value, but you're facing just as much risk of someone coming along and changing the value of BILLION.

The point is that there are many of those "idiots" working on code you care about and some of them are at your company.

And I stand by what I said about believing that coders are less likely to allow a variable name to rot than they are to allow comments to rot, although I admit again that I don't have evidence for it (I would be interested in seeing any evidence that exists for or against it).

QuarkJets
Sep 8, 2008

Dylan16807 posted:

Downsides compared to literals? Like what?

This is just ridiculous. Having consistent units is much more important than it looking pretty.

Using a scale that's appropriate to your purpose, such as defining NUM_NANOSEC, is actually more unit-consistent than multiplying a NUM_SEC value by BILLIONS

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
NB. My company's libraries include a CDateTimeSpan class that allows the programmer to handle time intervals without worrying about units up until the point of interface with other APIs that expect ints/doubles/whatever. It has a convenience method for pausing for a given amount of time. So you can do

CDateTimeSpan::From<CDateTimeSpan::Units::Seconds>(5.0).Pause();

This is the real correct answer in the timing example, not the silly argument we are having regarding whether you should define BILLION or whatever.

Jaded Burnout
Jul 10, 2004


This numbers discussion is thrilling and we should definitely keep at it.

Xarn
Jun 26, 2015

b0lt posted:

Better yet, you should use the typesystem, and do this instead:

C++ code:
#include <chrono>

using namespace std::chrono_literals;

static constexpr auto DELAY = 5s;

This, yall. If your language cannot do this, get yourself a better language :shrug:

Beef
Jul 26, 2004
I think the thread found a more thrilling bikeshed to discuss than the usual syntax discussions.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Ghost of Reagan Past posted:

The API is basically the worst thing I've ever seen. It's so bad that it has to have been intentionally designed to cause madness.

"madness" is a pretty good euphemism for "support contracts" :golfclap:

Bongo Bill
Jan 17, 2012

We can't just not paint the bikeshed.

Eela6
May 25, 2007
Shredded Hen

Bongo Bill posted:

We can't just not paint the bikeshed.

wrong. worse is better, am I right? what's a little structural rot?

idiotmeat
Apr 3, 2010

JawnV6 posted:

I mean, that’s not totally unreasonable? What test content had you run that passed? In the event there’s no existing content or HW to check high level code against, asm inspection is fine.

Perhaps I wasn't clear. There are times when debugging when I have to look at the asm. This is just a scenario where I developed and tested the code already. Some of the older folks wanted me to additionally look at the asm to double check the compiler as standard practice.

zergstain
Dec 15, 2005

Harik posted:

Probably, I was talking about :


This, as a breaking syntax change. Old code may run now (probably faster than intended if timing loops are optimized away) but would break if empty control-bodies were a compile error. zergstain was actually proposing even more than I thought - I was thinking "flow control followed by empty statement" but he was talking about removing empty statements entirely.

I was really thinking of it more in terms of what if it had always been that way. I was pretty certain tons of code would stop compiling if they removed empty statements in the next version of C or something. Not that a loop isn't flow control either.

Taffer
Oct 15, 2010


CPColin posted:

You wouldn't be able to write for (;;) { ... } instead of while (true) { ... }, for one. Or, I guess, any for loop where you don't have an initialization or iteration step. (Only weirdos do for (;;), BTW.)

Languages that require braces, even for single-line blocks, have the right idea.

for(;;) {...} is valid in very (very) specific situations. It uses fewer clock cycles than while(true) {...} in C on certain compilers. I had to use it in the past on a performance-critical piece of code running on a microprocessor a while back.

I could have written the code in assembly instead to get around it, but gently caress that.

raminasi
Jan 25, 2005

a last drink with no ice
If you want F# to be Haskell on .NET, you'll be disappointed. F# is C# with a handful of really useful gadgets, worse tooling and documentation, and different syntax. (Recursive types being a pain requires a different approach to program structure, but I think that using a typechecker as a negative example is kind of cheating.) IME, people who like F# quite commonly refer to it as "fun," even when writing boring business apps, which isn't something I feel like I've heard from any other language fans - I've heard "powerful," "intuitive," "ergonomic," and "oh my god look at this," but not a lot of "fun" - so I think opinions of it are substantially aesthetic and subjective. (I don't consider this a bad thing.) It also appears to be where new innovations for C# are incubated, which is kind of neat to watch.

Sinestro
Oct 31, 2010

The perfect day needs the perfect set of wheels.
There is, in fact, a library that lets you do .NET inter-operation in Haskell, which is pretty loving amazing as far as I'm concerned. Like a lot of modern Haskell, it sort of belongs in this thread because of the hoops involved in expressing what you want. This is like line 146 of the example program.

code:
main :: IO ()
main = do
  startClr

  invokeS @"WriteLine" @"System.Console" ()                                     -- Console.WriteLine()
  invokeS @"WriteLine" @"System.Console" "Hello CLR!!!"                         -- Console.WriteLine(String)
  invokeS @"WriteLine" @"System.Console" (2 :: Int32)                           -- Console.WriteLine(Int32)
  invokeS @"WriteLine" @"System.Console" ("The year is {0}", 2017::Int64)       -- Console.WriteLine(String, Object)
  invokeS @"WriteLine" @"System.Console" ("Well {0} {1}", "This", "Is Cool")    -- Console.WriteLine(String, Object, Object)

  list <- new @'("System.Collections.Generic.List", "System.String") ()         -- generics
  invokeI @"Add" list "foo"
  invokeI @"Add" list "bar"
  let prodList = toProducer list                                                -- IEnumerable implementors can be converted to Producers (pipes package)
  runEffect $ prodList >-> stdoutLn

  d <- delegate @T_ParameterizedThreadStart onThreadStart                       -- Compatible Haskell functions can be turned into delegates

  thread <- new @T_Thread d                                                     -- And then used as you would normally
  invokeI @"Start" thread "SomeParam"
  invokeI @"Join" thread ()

  let handler = \(ex::Object T_FileNotFoundException)-> putStrLn "Woops" >> return (T.pack "")
  s <- catch someThingBad handler

  putStrLn "good thing we recovered"

  return ()

The Fool
Oct 16, 2003


Speaking of .NET inter-operation: https://github.com/tjanczuk/edge/tree/master#scripting-clr-from-nodejs

Zemyla
Aug 6, 2008

I'll take her off your hands. Pleasure doing business with you!
So I should probably just learn C# then? What would be a good way of doing so?

Mr Shiny Pants
Nov 12, 2012

Zemyla posted:

So I should probably just learn C# then? What would be a good way of doing so?

F# is nice, I like it a lot.

https://www.youtube.com/watch?v=KPa8Yw_Navk

Some reasons:

When you program something and it compiles, it usually just works. This is not my C# experience at all.
The option type is a nice construct.
The result type is a nice way of structuring your code.
Async workflows beat async/await any day.
It is concise.
Discriminated Unions are really handy.
Immutability.
Record types.
Pattern matching.


To each their own I guess.

Mr Shiny Pants fucked around with this message at 06:16 on Jul 18, 2017

gonadic io
Feb 16, 2011

>>=

Taffer posted:

for( ;; ) {...} is valid in very (very) specific situations. It uses fewer clock cycles than while(true) {...} in C on certain compilers. I had to use it in the past on a performance-critical piece of code running on a microprocessor a while back.

I could have written the code in assembly instead to get around it, but gently caress that.

It's very important to check each cycle that the value of true hasn't changed. :rolleye:

toiletbrush
May 17, 2010

Mr Shiny Pants posted:

When you program something and it compiles, it usually just works. This is not my C# experience at all.
The option type is a nice construct.
The result type is a nice way of structuring your code.
Async workflows beat async/await any day.
It is concise.
Discriminated Unions are really handy.
Immutability.
Record types.
Pattern matching.
That's all stuff I'd much rather have in C# (as someone who hasn't used a functional language on anything serious yet)

john donne
Apr 10, 2016

All suitors of all sorts themselves enthral;

So on his back lies this whale wantoning,

And in his gulf-like throat, sucks everything

That passeth near.

Mr Shiny Pants posted:

The option type is a nice construct.
The result type is a nice way of structuring your code.
Immutability.
Pattern matching.*

You can do all of this in C#, too (except the pattern matching is pretty weak compared to F#)

TheBlackVegetable
Oct 29, 2006

john donne posted:

You can do all of this in C#, too (except the pattern matching is pretty weak compared to F#)

Of course, but it looks nicer in F#

Doom Mathematic
Sep 2, 2008
How come for(;;) { ... } works, anyway? Shouldn't that at least be for(;true;) { ... }? Or does an empty statement return true??


Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

Doom Mathematic posted:

How come for(;;) { ... } works, anyway? Shouldn't that at least be for(;true;) { ... }? Or does an empty statement return true??

It's explicitly defined in the C++ standard (not the C one) in the section on for statements.
[stmt.for] 6.5.3/2 has:

quote:

Either or both of the condition and the expression can be omitted. A missing condition makes the implied while clause equivalent to while(true).

  • Reply