  • Locked thread
Malcolm XML
Aug 8, 2009

I always knew it would end like this.

EngineerJoe posted:

Super interesting thread. It's neat to see the thought that goes into designing a language that will be as important as Swift. My question is: will we be able to get the string representation of enums?

Also, can you add a deriving mechanism like Haskell's so I don't have to reimplement stuff like equality, string representation, and JSON serialization?
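Haskell's `deriving` hands you `Eq`/`Show`/JSON-ish instances for free. Swift didn't have an equivalent at the time this was asked; as a point of comparison only, here's a Python sketch of the same idea using `dataclasses`, which synthesize equality and string representation and can be dumped to JSON:

```python
import json
from dataclasses import dataclass, asdict

# @dataclass synthesizes __eq__ and __repr__, roughly what
# Haskell's `deriving (Eq, Show)` gives you for free.
@dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)
print(p)                       # Point(x=1, y=2)
print(p == Point(1, 2))        # True
print(json.dumps(asdict(p)))   # {"x": 1, "y": 2}
```

The `Point` type here is invented for illustration; the point is that one annotation replaces three hand-written methods, which is exactly what the deriving request is after.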


Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Doh004 posted:

I think he was trying to be nice by not revealing what he heard because, as rjmccall had mentioned earlier, they try to keep their unreleased names quiet?

Yeah they got sued by Carl Sagan of all people regarding a code name.

Are y'all gonna let me program kexts with this?

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

rjmccall posted:

I have spent an amusing amount of time talking about making a monad protocol recently.

I don't understand how your C# example works. It looks to me like it both (1) shouldn't type-check because you're not applying the same number of arguments consistently to your protocol and (2) erases the concrete monad type across bind.

Monad is an example of a higher-kinded protocol: the thing that conforms to the protocol is not a type, it's a type function (generally, an unapplied generic type). That's feasible to add as a feature, but seeing as, frankly, the only use I've ever seen for that even in Haskell is Monad, it is not something we are going to fall over ourselves rushing into the language.

Monad, functor, and applicative are foundational for the neat abstractions you get. Having parser combinators is pretty great.

And then you can do stuff like GHC.Generics, free monads, Uniplate, and all the stuff that makes life easier when dealing with regular old polymorphism.

F# is stuck because the CLR can't deal with higher kinds, IIRC.

It would make a lot of sense, especially if you already elaborate into a language that supports it. Mostly for library authors.
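Parser combinators are a good showcase for why the monad abstraction pays off: a parser is just a value you can sequence with bind. A minimal Python sketch (illustrative only, not any particular library, and not Swift):

```python
# A parser is a function: input string -> (value, rest) or None.

def char(c):
    """Parse exactly the character c."""
    def p(s):
        return (c, s[1:]) if s.startswith(c) else None
    return p

def bind(p, f):
    """Monadic bind: run p, feed its result to f to get the next parser."""
    def q(s):
        r = p(s)
        if r is None:
            return None
        value, rest = r
        return f(value)(rest)
    return q

def pure(v):
    """Succeed without consuming input."""
    return lambda s: (v, s)

# Parse "ab" by sequencing two char parsers through bind.
ab = bind(char("a"), lambda a: bind(char("b"), lambda b: pure(a + b)))
print(ab("abc"))  # ('ab', 'c')
print(ab("xbc"))  # None
```

This is also where the higher-kinded-types issue bites: `bind` here works for parsers, but without HKT you can't write one interface that covers parsers, optionals, and lists at once, which is rjmccall's point about `Monad` being a protocol over type functions rather than types.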

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

FWIW GHC has a totally bizarre calling convention designed around the STG; the papers on it are worth reading for anyone implementing a functional language.


LLVM doesn't support register pinning either, which is among the reasons the LLVM backend hasn't taken over.

Malcolm XML fucked around with this message at 16:14 on Jun 22, 2014

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

No poo poo it's faster; all he's doing is showing that objc_msgSend is a bottleneck.

Dump them to (C) arrays and use qsort on the values, and/or cache the comparator, and it'd be much faster.

That said, Swift does it by default.
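The "cache the comparator" advice is essentially decorate-sort-undecorate: compute each element's sort key once instead of on every comparison. The thread's benchmark was Objective-C; this Python sketch just demonstrates the technique itself:

```python
# Count how many times the expensive key computation runs.
calls = 0

def expensive_key(s):
    global calls
    calls += 1
    return s.lower()

words = ["Banana", "apple", "Cherry"]

# key= computes expensive_key exactly once per element (3 calls total),
# whereas a cmp-style comparator would invoke it on every comparison.
result = sorted(words, key=expensive_key)
print(result)  # ['apple', 'Banana', 'Cherry']
print(calls)   # 3
```

With a cmp-style comparator the key work is O(n log n) invocations; with a cached key it's O(n), which is the whole speedup being described.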

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

rjmccall posted:

Basically, three reasons. (Also, in this particular case the "you" is specific: this is primarily my design.)

Code proximity is the first. In the common case where a cleanup is mandated by some specific action, you can immediately see that the cleanup is necessary instead of having to carefully analyze the rest of the control flow from that point. Now, this advantage is in inherent tension with code sequencing, because defer makes the compiler inject implicit code along exits from the scope, whereas finally appears in its proper order if you think of the jump as passing through the finally block. However, that's a bit of a leap, and I don't think it matches the way most programmers really think about it, at least not for jumps other than fall-through (e.g. return); they just think of the finally block executing, rather than thinking specifically about control flowing forward in the program. And that's fair, because finally is still pretty magical; the only code structure that's really 100% faithful to the execution flow is crappy, error-prone code duplication.

The second is that try/finally doesn't really get the scoping right. Suppose you create something, and you want to make sure it gets destroyed. In the common acquire/release pattern, there's often a variable whose scope should exactly match the lifetime of the resource. try introduces a scope, but you can't declare the variable in it, because it's out of scope in the finally and because you don't actually want to release if there was an error during acquisition. So you declare the variable outside the scope, which breaks the property we want. You could use a C#-like using feature instead, but then the cleanup is no longer ad hoc (you have to make a disposable type) and it's not always a clean fit; that doesn't mean we for-sure will never add that feature, but it's not as core as defer.

The third is that finally doesn't really compose very well. defer actions just pile up as statements in a single scope, but finally blocks have to be nested — either that, or you have to hoist variables and conditionalize the logic in the finally, and potentially even add more finally blocks there.

So why not C++-style RAII? Too implicit? I can see an argument for both, with IDisposable/RAII being what you do unless you need something special, in which case defer.

Interesting to see the rationale.
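rjmccall's composition point — defer actions pile up flat in one scope while finally blocks have to nest — has a close Python analogue in `contextlib.ExitStack`. A sketch of the pattern (again a cross-language analogy, not Swift code):

```python
from contextlib import ExitStack

log = []

def acquire(name):
    log.append(f"acquire {name}")

def release(name):
    log.append(f"release {name}")

# Each callback registers like a Swift `defer`: flat in one scope,
# run in reverse order on exit -- no nested try/finally needed,
# and a failure partway through only unwinds what was acquired.
with ExitStack() as stack:
    acquire("a")
    stack.callback(release, "a")
    acquire("b")
    stack.callback(release, "b")
    log.append("work")

print(log)
# ['acquire a', 'acquire b', 'work', 'release b', 'release a']
```

Note how the cleanup registration sits right next to the acquisition (the code-proximity argument) and the variable scope matches the resource lifetime (the scoping argument), without any nesting.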

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

quote:

Does someone have a pet NLP project they want to incorporate into a compiler?

Are the apple script folks around?


Malcolm XML
Aug 8, 2009

I always knew it would end like this.

rjmccall posted:

It's a bug in the lexing of hex literals. The problem is that there is a floating-point hex literal format, and what follows the dot is, in fact, more hex digits; so we're slurping up the '.' and the 'a' and then failing to backtrack correctly when we see the 's'.



:psyduck: How often does a floating-point number in hex notation come up? Maybe FastInvSqrt?
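For the curious: the hex float format in question (C99's `%a` style, which Swift also lexes) writes the significand in hex and a power-of-two exponent after `p`, so digits like `a` can legally follow the dot — which is exactly why the lexer slurped them up. A quick Python illustration:

```python
# 0x1.8p3 means (1 + 8/16) * 2**3 = 1.5 * 8 = 12.0
print(float.fromhex("0x1.8p3"))  # 12.0

# The format exists because it round-trips binary floats exactly,
# unlike decimal notation:
h = (0.1).hex()
print(h)
print(float.fromhex(h) == 0.1)  # True
```

So it's rare in source code, but it's the standard way to write a float with no rounding ambiguity.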
