MononcQc
May 29, 2007



Erlang is a functional, concurrent, distributed, and fault-tolerant programming language with soft real-time capabilities. It's especially good for server software, and it supports hot code loading so that you can upgrade applications without stopping them.

It's got Prolog-like syntax, powerful pattern matching, one of the best (if not the best) ways to deal with binary data, and contrary to many functional languages, it was built to deal with real world problems before anything else.
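As a quick taste of the binary handling, here's a sketch that parses a made-up packet layout (1-byte version, 2-byte length, then payload) in a single pattern match -- the format is invented for illustration:

```erlang
-module(bin_demo).
-export([parse/1]).

%% Match a 1-byte version, a 16-bit big-endian length, exactly
%% Len bytes of payload, and whatever trails it -- all in one pattern.
parse(<<Version:8, Len:16, Payload:Len/binary, Rest/binary>>) ->
    {Version, Payload, Rest}.
```

Calling `bin_demo:parse(<<1, 0, 3, "abc", "xyz">>)` gives `{1, <<"abc">>, <<"xyz">>}` -- no manual offset arithmetic anywhere.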

It was created at Ericsson Labs in 1986, and has been used in flagship products for them since the early 90s. In 1998, Ericsson decided they didn't want to make languages anymore and abandoned the project, making it open source. A few years later, they figured out it was a mistake and have been the main maintainers of the language since then.

https://www.youtube.com/watch?v=xrIjfIjssLE

Why should I care?

There are a couple of reasons:
  • You want to get into actor-based (and message-passing-based) concurrency
  • You want to program in a functional language that can land you jobs and build products1
  • You need to build reliable and fault-tolerant server-side software
  • You need to build a low-latency application or one that requires massive concurrency
  • Hey, experience is experience, so why not?
If the idea of 'letting it crash' sounds absurd to you, Erlang might be of significant interest. Why do Erlangers embrace such principles, and what form do they take? Why is it that most Erlang developers get into the language for concurrency, but decide to stay for fault-tolerance and letting things crash? Taking a bit of time and trying the language could change how you think about building systems.

The Open Telecom Platform (OTP) framework, despite its boring name, is a decent piece of engineering shared by all Erlang apps worth their while in the wild, and it lets you organize your code in a logical way. It's so useful that most Erlang developers judge it more important than tests, documentation, or coworkers who know the code to help maintain software. It is so important that the language is officially called 'Erlang/OTP'. Learning it might be worth your time too.
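To give an idea of the shape OTP code takes, here's a minimal (hypothetical) counter written as a gen_server behaviour -- the callbacks are your whole job, and OTP supplies the message loop, supervision hooks, and hot-upgrade plumbing:

```erlang
-module(counter).
-behaviour(gen_server).
-export([start_link/0, increment/0, value/0]).
-export([init/1, handle_call/3, handle_cast/2]).

%% Client API -- the process registers itself under the module name.
start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
increment()  -> gen_server:cast(?MODULE, increment).
value()      -> gen_server:call(?MODULE, value).

%% Callbacks -- the state is just an integer.
init([]) -> {ok, 0}.
handle_call(value, _From, N) -> {reply, N, N}.
handle_cast(increment, N) -> {noreply, N+1}.
```

That's the entire pattern most Erlang server processes follow, whatever their actual job is.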

Erlang is not a silver bullet, it's not a new messiah, and it's likely you're not gonna use it at work and will dislike how it looks. It's worth a try though.

Where can I learn more?

First there are websites:
Books are the next best way to learn things, and outside of LYSE (which is the whole book online) they tend to be more complete:


Programming Erlang by Joe Armstrong (Pragmatic Programmers) is the book written by one of the inventors of Erlang. The book is good for people who want to get into the concurrent views of Erlang, stories that led to its design, and a strong focus on raw Erlang, with less OTP. The second edition is coming out soon and will have content on Maps, which should be the next addition to the language.


Erlang Programming by Francesco Cesarini and Simon Thompson (O'Reilly) is a book written by people who give Erlang classes all the time through Erlang Solutions and the University of Kent. It's similar to Joe Armstrong's book in that it doesn't go too deep into OTP, but it shows enough to get going for a good while. If Joe's book is more about why the language is the way it is, this one takes a more practical "get it done" approach and spends more time on supporting tools.


Erlang and OTP in Action by Martin Logan, Eric Merritt, and Richard Carlsson (Manning) is probably the book you want if you want to dive deep into OTP while only skimming basic Erlang. It's got a good deal of practical tips and tuning ideas, and it probably spends the most time discussing Java integration, NIFs, and other ways to communicate with other languages.


Learn You Some Erlang for Great Good! by yours truly (No Starch Press). It's possibly the most complete one if what you want is to go from basic Erlang to pretty much the entirety of OTP. Given I'm biased here, I'll let you read it from the website, as it's entirely free there, and you can judge for yourself.


Introducing Erlang by Simon St-Laurent (O'Reilly). It's the gentlest (and shortest) introduction to the language as a book, outside of the chapter in Seven Languages in Seven Weeks. It was meant that way: it has a few exercises and shows the basics. If you want an overview, this is a good one, and it could be a good match when coupled with Erlang and OTP in Action.


Études for Erlang by J. David Eisenberg (O'Reilly) is a short book with a series of exercises to help programmers learning Erlang. It was originally written to go with Introducing Erlang, but the author has added notes pointing you to the equivalent chapters of Erlang Programming, Programming Erlang, Erlang and OTP in Action, and Learn You Some Erlang For Great Good! It's also available for free online.


This book is out of print, and while the first chapter is available for free online, it's outdated. This book shines if you can find a re-print and know Erlang already. Later chapters show how to build distributed databases (commits, eventual consistency) and insights about how some of the internal libraries of Erlang/OTP are implemented.

Most of the books have extracts available at http://erlangcentral.org/books/

Jobs:

Other links

What's this thread for

Questions, discussions, code reviews, anything related to Erlang. Post away :toot:

1: Not that there are that many Erlang jobs around, but at least, there are Erlang jobs.


MononcQc
May 29, 2007

Police Academy III posted:

How is the library situation with Erlang, like if I want to do something with graphics + sound or a native-looking UI, how easy is that? Are libraries fairly idiomatic or are they mostly a bunch of half baked ffi wrappers like with a lot of other languages (looking at you common lisp). Also, what does a good Erlang development environment look like, is there an IDE or am I fine with just emacs?

For native UIs Erlang comes with wxWidgets bindings. They're not the most stable thing in the world -- people will tend to boot a slave VM just to deal with the graphics in case it crashes. Most of it appears to be a literal translation of the original wxWidgets APIs. The only tutorial I know for it is http://www.idiom.com/~turner/wxtut/wxwidgets.html

Generally speaking, Erlang is used more for server-side stuff, so people will either manage things through a CLI or create a web interface to interact with the program. Still, you can call observer:start() on a node that has Wx support and you'll see something like this:

[screenshot: the observer GUI showing node diagnostics]

Those are diagnostics of the node you started it on, and it's all written in Erlang using the Wx bindings -- you can read the source to figure things out, but it's likely not gonna be great. As Cocoa Crispies said, Erlang doesn't do super well for the desktop.

Regarding IDEs, there's ErlIDE for Eclipse, but the vast majority of Erlang programmers survive on Emacs, Vim, or Sublime.

MononcQc
May 29, 2007

netcat posted:

What are some interesting/fun projects to do while learning Erlang? I've been meaning to learn it for some time (for work actually) but I kinda need to do something tangible or I'll lose interest.

Well, what you're gonna think is 'fun' is gonna be different for everyone. In general, beginner projects include things like writing a socket or HTTP server, a chat application, an IRC bot, or stuff like that. All these components can be put together to write some kind of game to be played over a browser, say poker or Cards Against Humanity, without ever needing to involve a database.

Other people will try to go another way and do AI stuff -- neural nets and whatnot.

If you have more specific details to mention, it's probably possible to recommend better.
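For scale, a bare-bones TCP echo server -- one of the beginner projects mentioned above -- fits in a dozen lines. This sketch is sequential to stay short; a real one would spawn a process per connection:

```erlang
-module(echo).
-export([start/1]).

%% Listen on Port and echo everything back, one client at a time.
start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active,false}, {reuseaddr,true}]),
    accept(Listen).

accept(Listen) ->
    {ok, Sock} = gen_tcp:accept(Listen),
    loop(Sock),
    accept(Listen).

loop(Sock) ->
    case gen_tcp:recv(Sock, 0) of
        {ok, Data}      -> gen_tcp:send(Sock, Data), loop(Sock);
        {error, closed} -> ok
    end.
```

From there, making it concurrent (spawn a process per accepted socket) is the natural first exercise.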

Oh and I recommend installing https://github.com/ferd/erlang-history as it will allow your Erlang shell to save its history!

MononcQc
May 29, 2007

the talent deficit posted:

As someone who can't be assed to read the 100+ messages about it on the mailing list but someone who should probably be informed, could you explain the maps/frames controversy? All I have figured out is everyone is unhappy

Ah yes. The thread in question is at http://erlang.org/pipermail/erlang-questions/2013-May/073656.html and the Erlang Enhancement Proposal (EEP) draft is at https://github.com/psyeugenic/eep/blob/egil/maps/eeps/eep-0043.md

The EEP draft basically suggests a new native dictionary structure to be native to the language, coming with specific pattern matching syntax (people are often fed up of dict:lookup(Key, Dict) or some variant). The main inspiration for this draft was an EEP/Draft by Richard O'Keefe (the mailing list's "always right because he really knows everything" guy) available at http://www.cs.otago.ac.nz/staffpriv/ok/frames.pdf (PDF), about a structure called 'frames'.

This is where things get a bit confusing. The frames had the following motivation behind them:

  • Replace Erlang records for their current use case (fast, internal state of a process with pattern matching, static records, much like a C struct, addressable by keys, which should be atoms and static)
  • Can be used cross-modules without sharing definitions in .hrl files (a wart of records)
  • Keep things as efficient as current records (which are basically tuples)
  • Can be seen as its own data type (i.e. not a tuple as the underlying structure)
  • No absolute need for O(1) updates, and said updates should be functional (not in-place)

The maps proposal is inspired by frames, but some of the motivations are slightly different:

  • Replace dicts in their current use cases, and make them nicer to use with pattern matching
  • Provide a faster implementation (likely native)
  • No direct hope to replace records, but that requirement is open to be stapled on later.
  • Arbitrary keys and number of elements
  • Maps should respect global sort order even with arbitrary keys and elements -- because you want to pattern match on them, you need to be able to compare them by {Key,Value}, and they want better comparison than O(n log n).
  • It should be possible to have a way to differentiate between adding and updating a key.

This last requirement meant that the implementation needs to preserve order, which runs into the usual Erlang problem: integers and floats are two distinct data types, but they can compare equal, as with 1.0 and 1. However, they do not match the same. That is:

code:
true  = 1 == 1.0,
false = 1 =:= 1.0,
1 = 1, % works
1 = 1.0. % crashes
Ideally, this wart would have been avoided by having multiple comparison operators for floats and integers (not just == vs. =:= and /= vs. =/=, but also < and @<, for example). However, they don't exist. For this reason, maps get two assignment operators: key => val and key := val (used as SomeMap#{key => val, key2 := val2}).

=> is usable to create new keys and to update existing ones, but the update is done with comparison semantics (1.0 == 1).

:= is usable to pattern match on maps and to update existing keys only (it cannot create new ones), with the update done under match semantics (1.0 does not match 1).
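Putting the two operators together, under the draft's proposed syntax (none of this is implemented yet, so read it as a sketch of the EEP rather than working code):

```erlang
M0 = #{},               % a new, empty map
M1 = M0#{hits => 1},    % => may create the key
M2 = M1#{hits := 2},    % := only updates; crashes if 'hits' were absent
#{hits := N} = M2,      % := is also the pattern-matching form
N = 2.
```

The point of the split is that typos in update position (`:=`) fail loudly instead of silently creating a new key.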

Now, a lot of people haven't read both proposals, and many others have read neither. There were a few arguments people debated:

  • The choice of operators and their semantics is not ideal, but in the end you can't do much about it because of the difference between floats and integers. Some people found it a lot less ideal than others did.
  • There was a debate about whether we really needed frames or maps more than the other, or if one could do both at once. This is still up in the air, but the OTP team said they don't intend for Maps to replace Frames.
  • The syntax of nested updates was debated, with alternative recommendations. The most vocal people found the syntax suggested in the draft proposal good enough as is.
  • Questions were asked regarding the underlying implementation. In past conferences, the OTP team hinted at HAMTs being used, but given the need for ordering, they are not applicable there. People talked about such implementations, compared them to frames, discussed tradeoffs, etc. I do not think the OTP team has settled on any specific implementation at this point, and they're open to the idea of having different implementations depending on the map size (they should transition up/down automatically).

For the first point, I believe people tended to like the maps syntax more than the frames syntax. And syntax debates being what they are, a lot of opinions were exchanged with little progress to show for it.

Most of it was really a debate and a bit of a RFC -- it's not enacted yet, there was no prototype implementation made public, and the OTP team is likely going back to the drawing board considering some of these issues and comments.

I don't think people are generally unhappy, but everyone wants things to be right and not get another situation similar to records on their hands. Everyone seems to be relatively happy about it, although Richard O'Keefe seems to think maps are not required -- Frames are the real urgent addition.

Personally my biggest worry is that everything turns into dict-passing where all other data structures and their relative advantages are forbidden, and message passing becomes heavier because everyone just sends internal state around as a map, and APIs and encapsulation go out the window. I'm trying to trust people on this one, but it's still scaring me a bit.

MononcQc fucked around with this message at 19:50 on May 21, 2013

MononcQc
May 29, 2007

"Basically proplists" is a tricky way to put it. Proplists are not necessarily a key/value store. For example, this list of options for the file:open/2 call could be: [read, {encoding,utf8}, write]. This is a proplist that is implicitly expanded to [{read,true}, {encoding,utf8}, {write,true}]. Proplists also support, say, modifying the list by prepending [{encoding,latin1}|OldOpts], and for most use cases, the first result found will supersede the other ones.

This form of expansion and overwriting is somewhat unique to proplists and my reading of the Maps EEP doesn't lead me to believe they will support similar operations. They'll be closer to lists of key/value pairs, or orddicts, so yeah, you will be able to match anywhere, not just on heads.
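The expansion and "first match wins" behavior is easy to see with the stdlib proplists module:

```erlang
%% A prepended option shadows the one further down the list.
Opts = [{encoding,latin1}, read, {encoding,utf8}, write],
true   = proplists:get_value(read, Opts, false),  % bare 'read' expands to {read,true}
latin1 = proplists:get_value(encoding, Opts),     % first entry wins over utf8
%% unfold/1 makes the implicit expansion explicit:
[{encoding,latin1},{read,true},{encoding,utf8},{write,true}] = proplists:unfold(Opts).
```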

Maps will also add map comprehensions and generators:

code:
M0 = #{ K => V*2  || K := V <- map() },
M1 = #{ I => f(I) || I <- list() },
M2 = #{ K => V    || <<L:8,K:L/binary,V/float>> <= binary() },
B1 = << <<V:8>> || _ := V <- map() >>,
L1 = [ {K,V} || K := V <- map() ].
Which won't add much in your case.

Frames were never really on the roadmap. EEPs have this tendency to be neither rejected nor worked on for years at a time. The frames proposal hasn't been turned down, but I don't think the OTP team is announcing it as a feature or actively working on it at this time. I'm starting to think I would have preferred frames to maps myself.

MononcQc
May 29, 2007

Tolan posted:

What's your impression/opinion of Elixir?

Elixir is not adding too much to Erlang, IMO. Its biggest contributions are in macros, multiple modules per file, and the ability to have contracts, but otherwise most of its features will be a variation of something available in Erlang through the BEAM VM and Core Erlang (the intermediary language many can compile to, if they don't just generate an Erlang abstract parse tree), and its weaknesses will likely be a similar variation.

Then there's also the different syntax.

I think it's a nice attempt at a new language, and possibly the best alternative language on BEAM (though LFE is definitely nice too), but it doesn't offer much for people who already know Erlang outside of a change in a few semantics and the features I mentioned above. My hope for it is that it becomes a honeypot to Ruby fanboys who want Ruby and its do notation everywhere so they stop bothering the Erlang regulars about it, and those that really want to will be able to jump to Erlang from there.

Honeypot is a bit of a strong word, given Elixir can stand on its own and has its own tiny community that's still at the very flexible stage where they can modify the language as they go for what they like, but it's somewhat appropriate because at this point you still need to understand Erlang to be efficient with Elixir.

Their first book recently came out (http://pragprog.com/book/elixir/programming-elixir), so it might be the first sign of the language taking off. You'll probably still need to know a bit of Erlang to feel at home with Elixir, but I believe that's becoming less and less true.

MononcQc
May 29, 2007

So I've started making a list of a bunch of functionality people like to have when debugging production code. I've had a few snippets in the past already -- mostly gists about obtaining a node's memory, safe ways to read process info (without blowing up the mailbox), and so on.

I've got my own and also Geoff Cant's https://github.com/archaelus/eshellcode. My objective is to progressively add them to a library that can then be deployed with any other app to let DevOps stuff be way simpler and avoid copy/pasting poo poo all around.

If any people in here have useful functions they'd like to see in that or just recommend I add, that would be pretty neat so I can slowly inch my way towards something worthy.

MononcQc
May 29, 2007

Joe has all kinds of crazy ideas all the time. He has started work on erl2, which sits dormant on GitHub, and at some point in the past year he suggested people get rid of modules entirely and instead use only functions floating in a global namespace that could be fetched from the Internet on demand. He has also repeatedly attempted to implement SSL equivalents in JavaScript, not really understanding why certificates are useful (or not wanting to care about it).

People have described him to me as a guy who gets a thousand ideas before breakfast, and once in a while, one is really really good. What I've seen so far seems to be true. He's extremely creative, genuinely nice, and has a lot of crazy thoughts. If you give a talk about web servers or frameworks at an Erlang conference, he's the guy who'll heckle you about your poo poo being too complex and why isn't the web just a raw TCP connection with message passing protocols and calls in it anyway.

MononcQc
May 29, 2007

No explicit reason. The gist of it is that whatever is in the official OTP repository is the responsibility of the OTP team at Ericsson. I figure they don't want to have to maintain and document rebar, especially given a lot of its functionality is just a wrapper around existing tools.


In other news, there's a preliminary version of R16B01 (RC1) announced today. Here's the upcoming changelog: http://erlang.org/pipermail/erlang-questions/2013-June/074188.html

MononcQc
May 29, 2007

What does cowboy do that's even remotely better with its makefiles?

MononcQc
May 29, 2007

There's also cowboy available, and it can do some RESTful stuff too. I like it because it uses binaries by default, but otherwise everything else works too.

MononcQc
May 29, 2007

Sorry for the thread necromancy, but I was wondering if anyone in here planned to be around for the Erlang Factory Lite in September, in New York City?

I've been invited to talk there and am still working on a presentation outline, but I was wondering if I'd meet anybody from here there.

Also it seems http://thisotplife.tumblr.com is dead, which is a shame.

MononcQc
May 29, 2007

Aha yeah, named pipes require you to close stdio to disconnect (^D), not ^C, and they're 100% local. Using -remsh involves a trickier setup for shells (which I explain briefly at http://ferd.ca/yet-another-article-on-zippers.html)


Remsh is generally the safest, but you have to remember to quit with ^G then q, or ^C, and not with q() or init:stop(), as the latter two will also shut down the node entirely.
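For reference, a typical -remsh session looks like this (node names and the cookie are placeholders -- substitute your own):

```shell
# Start a throwaway local node and open a shell on the running one:
erl -sname debug -remsh app@myhost -setcookie secret

# Leave with ^G then q (or ^C). Do NOT use q(). or init:stop() --
# those shut down the remote node itself, not just your shell.
```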

MononcQc
May 29, 2007

Otto Skorzeny posted:

Dead as in you can't submit stuff to it any more or dead as in nobody has submitted anything for a while?

Dead as in stuff I had submitted nearly a month ago was still not there (and nothing else either). But the magic of complaining did its thing and early this morning updates were pushed through to make me look foolish.

MononcQc
May 29, 2007

Paul MaudDib posted:

Way back when I took a course in Prolog, and I've always wanted to pick Erlang up to have a more practical use for that stuff.

I threw 5.3 into a file and compiled it, and it always kills the interpreter after around 130-140k recursions. Am I running into some kind of memory limit or infinite recursion protection for the interpreter or something? It should be tail recursive, so I'm not sure why it's crashing.

code:
-module(loop).
-export([loop/1]).

loop(N) ->
    io:format("~w~n", [N]),
    loop(N+1).
code:
>loop:loop(0).
0
...
139983
Killed

I've just tried this snippet. It works fine (it ran well past 10 times the count where yours died) on R15 and R16 versions of the VM.

I'm pretty dumbfounded that something like that would kill your node. How did you get Erlang installed? Do you have any code being loaded in a .erlang file in your home directory?

MononcQc
May 29, 2007

I stole this entry from the PL thread in YOSPOS where I posted it yesterday, and it fits fairly well in here.

For the last couple of weeks, I've been playing a game of optimizing and hunting down all kinds of bottlenecks, memory hogs, and processes with too many messages, analyzing crash dumps and whatnot on some of our systems, so that we'd finally get rid of pretty much 99% of non-actionable alerts over some clusters.

It's the kind of poo poo you end up doing on any drat software stack every once in a while once it has been under load for a good while in production, and it's fairly unavoidable as an activity. What changes, though, is how you do it.

One thing I wanted to do, for example, is figure out which connections (out of more than 20,000) end up taking most of the bandwidth on the server at any given time, either on input or output, to characterize the extremes of the load we may receive. Doing this in many languages or systems would generally require wrapping whatever socket operations are being done in a counter of some sort, if it's not done already, and then polling or logging the results somewhere. If your application is mostly IO bound over the network, logging that data at arbitrary levels of precision can have a rather damaging effect on the quality of service.

Erlang has this nice thing where stats are automatically accumulated for all socket activity, in advance. Every Erlang socket is called a port, and they're a language construct that wraps a socket, a file descriptor, or whatever else in something that looks like an Erlang process to the rest of the language. The statistics for any of them are available by calling a function called inet:getstat/1 on any given port. Moreover, this function can be combined with some other calls to list all the ports around.

By doing this, you can get the data for all the ports in a program from the Erlang shell, ask how much data they've seen, make a list, and sort it. You could also wait for a few milliseconds, make a second list, take the difference between the two, and get a snapshot of which ports saw the most data over any interval of time. I ended up writing such functions and putting them in a small library called recon.
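The core of the absolute (non-windowed) version is tiny. Here's a rough sketch of the idea only -- recon's real implementation is more careful than this:

```erlang
-module(port_stats).
-export([top/2]).

%% Return the N ports with the highest current value for Attr
%% (e.g. recv_oct / send_oct -- bytes received and sent).
top(Attr, N) ->
    Stats = [{Port, proplists:get_value(Attr, S)}
             || Port <- erlang:ports(),
                %% non-inet ports return errors and are filtered out
                {ok, S} <- [catch inet:getstat(Port)]],
    lists:sublist(lists:reverse(lists:keysort(2, Stats)), N).
```

The windowed variant just runs this twice, sleeps in between, and subtracts the counters per port.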

I wrote both functions in a short while, loaded the module and could find the biggest culprits in a matter of seconds by calling recon:inet_window(Metric, Number, IntervalToMeasure):

code:
(node@some-ip.ec2.internal)1> recon:inet_window(oct, 10, 1000).
[{#Port<0.336707>,736420,[{recv_oct,731950},{send_oct,4470}]},
 {#Port<0.336796>,627169,[{recv_oct,624338},{send_oct,2831}]},
 {#Port<0.336783>,298342,[{recv_oct,297150},{send_oct,1192}]},
 {#Port<0.336803>,270491,[{recv_oct,268703},{send_oct,1788}]},
 {#Port<0.336819>,177074,[{recv_oct,590276},{send_oct,1341}]},
 {#Port<0.336840>,168324,[{recv_oct,165791},{send_oct,2533}]},
 {#Port<0.336782>,136014,[{recv_oct,446228},{send_oct,1341}]},
 {#Port<0.336801>,123242,[{recv_oct,744971},{send_oct,2533}]},
 {#Port<0.336748>,109884,[{recv_oct,966680},{send_oct,4023}]},
 {#Port<0.336736>,98852,[{recv_oct,208764},{send_oct,1043}]}]
For this very second, the ports above had given me the most traffic, and I can see whatever was the input or the output of each, in bytes. The search can also be done by output or input only. So we found the busiest sockets at a time. Big deal -- we have no idea who owns them. To figure that out, I can use the erlang:process_info/2 function along with some port introspection to figure out what Erlang process owns which socket. I ended up wrapping some more of that functionality in a call:

code:
(node@ip.ec2.internal)2> [{Pid,recon:info(Pid)} || {Port,_,_} <- recon:inet_window(oct, 3, 1000),
(node@ip.ec2.internal)2>                           {_,Pid} <- [erlang:port_info(Port,connected)]].
[{<0.2630.513>,
  [{meta,[{registered_name,...},
          {dictionary,[{'$ancestors', ...},
                       {'$initial_call', {tcp_proxy,init,1}}]},
          {group_leader,<0.100.0>},
          {status,runnable}]},
   {signals,[{links,[<0.30588.0>,#Port<0.33985065>]},
             {monitors,[]},
             {monitored_by,[]},
             {trap_exit,false}]},
   {location,[{initial_call, ...},
              {current_stacktrace,[...]}]},
   {memory,[{memory,109344},
            {message_queue_len,0},
            {heap_size,6772},
            {total_heap_size,13544},
            {garbage_collection,[...]}]},
   {work,[{reductions,2833819}]}]},
 {<0.14755.56>,
  [{meta,[...]},
   {signals,[...]},
   {location,[...]},
   {memory,[...]},
   {work,[...]}]},
 {<0.30316.55>,
  [{meta,[...]},
   {signals,[...]},
   {location,[...]},
   {memory,[...]},
   {work,[...]}]}]
That's an ugly dump, but I can know, for example, that one of our TCP proxies is the one having the most network IO going through it at the time, and that its pid is <0.2630.513>. I can call a function and get that process' entire internal state (if stack traces and whatnot weren't enough already):

code:
(node@ip.ec2.internal)3> recon:get_state("<0.2630.513>").
{state,#Port<0.33985065>,
        {buf,<<"618 <133>1 2013-08-12T13:58:03.6 buffer data"...>>,
             295},
        {{11,12,13,14},59900},
        {1375,868792,992970},
        {my_own_worker,{state,{re_pattern,1,0, <<69,82,67,...>>},
                               {{dict,...}}},
                                34028236692093846346337460743176821146,
                                {1375,810971,924209}}}}}
Where in this case, the dict contains info letting me know what interface that proxy is listening on (I can also inspect whatever is in the buffer, or the sharding information of the worker). I could use this information to dynamically change their buffer size, disable some call, give them a special shard, or whatever.

The fun thing about these inspection functions to make time windows and whatnot is that they're usable for anything else. So if you're looking for processes leaking memory, you can either look at them in the absolute or over a sliding time window:

code:
(node@ip.ec2.internal)4> recon:proc_count(memory, 5).
[{<0.121.0>,162095008,
  [my_stats,
   {current_function,{io,wait_io_mon_reply,2}},
   {initial_call,{proc_lib,init_p,5}}]},
 ...}]
(node@ip.ec2.internal)5> recon:proc_window(memory, 5, 5000).
[{<0.12493.0>,688584,
  [{current_function,{gen_fsm,loop,7}},
   {initial_call,{proc_lib,init_p,5}}]},
 ...}]
Which lets me see both what processes hold all the memory right now (the stats process, which holds a shitload of counters), but also what kinds of processes are allocating it the most as we speak (some FSM doing actual work), to see where the churn and work is actually going. I could search by other attributes, such as 'reductions' (CPU used), message_queue_len (mailbox sizes, to identify points of contention), their stack or heap size, etc. I can force garbage collection on a process, see attribute changes, and spot memory leaks that way, if I want to. I can even do it over the entire node and find which processes leak the most memory of a certain type in general, if at all.

Oh yeah, and all of this can be done remotely over entire clusters to find the worst everywhere:

code:
(node@ip.ec2.internal)6> recon:rpc(fun() -> recon:proc_count(memory, 1) end).
[[{<0.121.0>,144982376, [my_stats, ...]}],
 [{<8348.121.0>,162094936, [my_stats, ...]}],
 [{<8350.121.0>,208774192, [my_stats, ...]}],
 ...]
Sweet, it looks like the biggest memory consumer is the same everywhere (my_stats)! Sounds like we forgot to clear some inactive counters, and moving to a lazy scheme will let us reclaim good chunks of unused memory. It took just 2 minutes to run and find what could be the leak source.

The best thing about all of that is that it's non-blocking, read-only, and generally safe to run on any number of production nodes remotely without impacting quality of service at all.

I don't know how many other languages can give you that kind of run-time introspection, but I know I feel like poo poo every time I need to go back to "reproducing poo poo locally" or "debugging via logging or printf". poo poo is much harder and requires an ungodly amount of redeploys, compared to just digging into a running system for whatever information you need, even across a cluster to give you more data points. Of course, if you had a decent stats/graphing/reporting system in place already, and all the data you need is in there, that's even better, but you're likely not going to get the same level of granularity.

So far I'm pretty happy with Recon as a library and I'm trying to inject it into more work projects :toot:

this post brought to you by your local department of Erlang propaganda.

MononcQc
May 29, 2007

Shinku ABOOKEN posted:

Post this in your blog.

E: Pressed post too soon:

What projects do people use Erlang in? By that I mean, what happened that made you go "I need Erlang for this!"?

Maybe I'll add it to my blog, reword it and whatnot. It's been a while since I've been truly excited about using one of my libs for myself, rather than doing it then moving on.

I've used Erlang for:

  • Chat apps, because lots of users and message passing
  • Real Time Bidding software, because low-latency requirements (soft real time), massive levels of concurrency, and constant system overload (which Erlang rules at)
  • I'm currently using it at Heroku, as part of the routing team, both for HTTP routing and log routing.

Most use cases I've made of Erlang had a few common points in terms of massive concurrency, a lot of time spent over heavy load, strict time constraints, requiring to be always up (shutting down to upgrade means losing money/user data). I've been served well so far.

Cocoa Crispies posted:

From Justin Sheehy, CTO at Basho:

quote:

I had an entertaining and ironic conversation about this recently with a manager at a large database company. He explained to me that we had clearly made the wrong choice, and that we should have chosen Java (like his team) in order to expand the recruiting pool. Then, without breaking a stride, he asked if I could send any candidates his way, to fill his gaps in finding talented people.

^ this is my favorite bit of the whole thing.

E: fixed a link to the wrong blog post

MononcQc fucked around with this message at 03:38 on Aug 14, 2013

MononcQc
May 29, 2007

Otto Skorzeny posted:

As I've (slowly, mea culpa) read LYSE and discussions/papers here and in the PL thread, I've started coming around to the view that there is a lot of overlap between building classical distributed systems and building industrial embedded systems (stuff with hard reliability requirements, eg. SIFs). I'm surprised there isn't more on the web relating the two, but then again I am a crazy dude that thinks there is a fundamental equivalence between statistical signal processing and classical frequency-domain signal processing v:shobon:v

I'd like to hear about that more.

FWIW, there's work to make Erlang more available for bigger 'embedded' platforms (http://www.erlang-embedded.com/).

There's also a German guy I know who uses Erlang in hard real-time systems for the automotive industry by running it on RTEMS (a real-time OS). The gist of his approach is that he writes the hard real-time components in C, C++, or even Ada, and gives them a priority higher than Erlang's. Because they're usually smaller core components, he can then run everything that is soft real-time or lower priority on the same embedded OS, within the Erlang VM, at a lower priority.

There's been other interesting work on the topic, trying to make the Erlang VM work for hard real time, but I haven't heard about it in a long while and it's frankly outside my area of expertise.

MononcQc
May 29, 2007

The Insect Court posted:

Any opinions on Elixir? Is it performant/stable enough to be worth learning in addition to/in place of Erlang at this point?

For reference, Elixir's a new-ish language implemented on top of the Erlang VM, with a Ruby-ish syntax.

From last page:

MononcQc posted:

Elixir is not adding too much to Erlang, IMO. Its biggest contributions are in macros, multiple modules per file, and the ability to have contracts, but otherwise most of its features will be a variation of something available in Erlang through the BEAM VM and Core Erlang (the intermediate language many languages can compile to, if they don't just generate an Erlang abstract parse tree), and its weaknesses will likely be a similar variation.

Then there's also the different syntax.

I think it's a nice attempt at a new language, and possibly the best alternative language on BEAM (though LFE is definitely nice too), but it doesn't offer much for people who already know Erlang outside of a change in a few semantics and the features I mentioned above. My hope for it is that it becomes a honeypot to Ruby fanboys who want Ruby and its do notation everywhere so they stop bothering the Erlang regulars about it, and those that really want to will be able to jump to Erlang from there.

Honeypot is a bit of a strong word, given Elixir can stand on its own and has its own tiny community that's still at the very flexible stage where they can modify the language as they go for what they like, but it's somewhat appropriate because at this point you still need to understand Erlang to be efficient with Elixir.

They got their first book out very recently (http://pragprog.com/book/elixir/programming-elixir), so it might be the first sign of the language taking off. You'll probably still need to know a bit of Erlang to feel at home with Elixir, but I believe that's becoming less and less true.

I still stand by that position. I'm interested to see how Elixir grows and how the community goes about it.

I've privately held the theory for a while that Erlang is a 'different' language and that a different syntax helps people drop the baggage they usually carry around (the same way it's obvious when a C programmer writes C in C++, it's obvious when an OO programmer writes OO in Erlang). I'm very eager to see how people who adopt Elixir for its friendlier syntax will deal with the [relatively] more surprising semantics of the language.

MononcQc
May 29, 2007

Elixir is nicer than Reia. Reia was an attempt to make an entirely new language with new semantics -- object-oriented with each object being its own process.

Compared to that, Elixir keeps its semantics closer to Erlang's (not OO, and processes are used similarly to Erlang, not as Reia did it).

MononcQc
May 29, 2007

Erlang is sometimes said to be object-oriented in the original meaning of it (each process acts as an object communicating through message passing), but you'll hit a wall if that's how you approach things. Erlang's processes are meant as a way to separate individual components to provide fault-tolerance; not to compose them and have them interacting on a level as low as function calls all the time. Representing a list or a tree node as a process is useless, while they could very well be objects in any OO language.

Erlang's processes are a way to provide fault-tolerance first. This can be tolerance to some weird hardware failure, a programmer error, corrupted data, etc. That an OO-like system emerges from it is purely accidental.

We can think of it as "OO done right" if we want, but using it in practice as if it were truly OO will likely lead to a shitload of unwarranted friction that would have been avoided by using a functional style over data structures, and keeping processes as isolated small programs that can talk to each other with messages.
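To make that concrete, here's a minimal sketch (my own illustration, not from any particular library) of a tree kept as plain data with functional updates, instead of one process per node:

```erlang
%% A tree node is just data: 'leaf' or {node, Key, Smaller, Larger}.
%% No process needed -- operations are plain recursive functions
%% returning a new tree.
insert(Key, leaf) ->
    {node, Key, leaf, leaf};
insert(Key, {node, K, Smaller, Larger}) when Key < K ->
    {node, K, insert(Key, Smaller), Larger};
insert(Key, {node, K, Smaller, Larger}) when Key > K ->
    {node, K, Smaller, insert(Key, Larger)};
insert(_Key, Node) ->
    Node. % key already present
```

Save the processes for the parts of the system that need to fail (or be restarted) independently.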

That being said, Reia eventually died off and got abandoned after its author tried to get Ruby blocks into Erlang and was told 'no' by members of the community, most notably by the well-informed Richard O'Keefe[1][2][3][4][5]. This discussion, and the appearance of Elixir, prompted Tony Arcieri to declare that Erlang is a ghetto and to leave the community to work on Celluloid, which tries to bring Erlang to Ruby, rather than his former approach of bringing Ruby to Erlang.

MononcQc
May 29, 2007

Mniot posted:

I'm not a frequent poster to the forums, but I'll be there and it would be fun to say hi.

I've only just started learning Erlang, and your book was highly recommended by my coworkers.

Nice! Good to hear.

---

Oh and I can't believe I forgot to post this here, but I'm giving a free webcast for O'Reilly on Tuesday on Modern Server Application Design with Erlang for a high-level tour of building Erlang apps and how that compares to traditional things, then some general design ideals to keep in mind when using Erlang for that.

I hope it's gonna be good, although I'm still working on it as I type this.

MononcQc
May 29, 2007

Shinku ABOOKEN posted:

Does O'Reilly archive webcasts?

I can't attend this one :(

Other talks that were free seem to have been made public (there's a couple of Haskell ones), so in the best of cases, it should be archived and made available. I don't know the details though.

MononcQc
May 29, 2007

more like dICK posted:

I'm looking to parse HTML and RSS feeds. It looks like mochiweb_html is the goto html parser, but is there anything more standalone that doesn't require bringing in mochiweb? For the RSS, is there a specific RSS library out there, or should I just stick to xmerl?

I went with mochiweb_html for my HTML parsing requirements personally, but I didn't research it very hard. It's annoying that you need to fetch the whole repository, so what you can do is either just import that one module (which could create conflicts if you're writing a library), or import the whole drat thing and later fix it with releases, where you tell Reltool to bring in only a single file from that app. That assumes you're willing to build releases with Reltool or Rebar, though.

For XML, don't use xmerl. xmerl has a thing where all tags get to be transformed into Erlang atoms, which are not garbage collected, and that can be used as an attack vector to bring your nodes down. It's really dumb like that, and I'm not sure why it's still part of the standard library without a huge warning.

Go use erlsom instead. It's safer, and seems to work reasonably well.
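The underlying issue is that atoms are never garbage collected, so anything that converts untrusted input into atoms (as xmerl does with tag names) can slowly fill the atom table until the node dies. When you control the conversion yourself, binary_to_existing_atom/2 is the safe variant, since it refuses to create new atoms:

```erlang
%% 'title' already exists as an atom (it appears in this very
%% expression), so the conversion succeeds:
title = binary_to_existing_atom(<<"title">>, utf8).
%% binary_to_existing_atom(<<"attacker_chosen_tag">>, utf8)
%% would raise badarg instead of silently growing the atom table.
```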

Regarding RSS, you'll have to deal with datetime support in there too. I've used dh_date in my webcast demo, but it doesn't deal with timezones. I've recently found qdate instead, and while I haven't tried it, it seems to deal with them much better. A quick look at the code tells me high-volume requests would probably hammer its central server process a bit, but that would be somewhat easy to refactor if you ever needed it done.

MononcQc fucked around with this message at 13:08 on Sep 6, 2013

MononcQc
May 29, 2007

leper khan posted:

This is what I do, and it's worked out really well for me. I'd still be interested in something cleaner if someone finds something though.

The "cleanest" way I can think of is if you end up using OTP releases, which basically means you take your entire node and crystallize it by declaring what applications it should contain, along with a few more settings regarding the kind of runtime you want. They're the canonical way to ship an Erlang system, even though a lot of people and companies (those I worked at included) don't do it that way.

If you're using reltool, you can specify custom filters about what part of applications to include or not. I have examples in the cookbook part of my book for reltool, and just today, Riak started using this method to avoid including Mnesia's include files in its project.

I put "cleanest" in quotes because there's significant overhead to using Reltool in terms of what you need to know just to ship something. Rebar can actually wrap around it, and newer releases of Erlang should contain a self-executable to do it.

Relx is a new build tool for releases that is far easier to use, but is also far less powerful than reltool. So for your use case you might need reltool, but I'd recommend playing around with relx to get accustomed to releases and how they work.
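For reference, a minimal relx config is only a few lines. This is a sketch assuming an application named myapp, not a drop-in file:

```erlang
%% relx.config -- read as plain Erlang terms.
{release, {myapp_release, "0.1.0"},
 [myapp, sasl]}.
%% Generate an extended start script (start/stop/attach commands):
{extended_start_script, true}.
```

Run relx in the project directory and it figures out the dependency applications from the .app files.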

MononcQc
May 29, 2007

Cocoa Crispies posted:

Not Erlang per se but is anyone else going to Ricon 2013 in SF this week?

I won't be there (Toronto awaits me instead), but many of my coworkers will be there.

I need to go to there next year :(

MononcQc
May 29, 2007

So I posted this stuff in plenty of places already, but I had forgotten about this thread, where it makes the most sense. Here's the blog post: https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole

And here's the relevant stuff about how Erlang's memory works:

quote:

The amount returned by erlang:memory/0-1 is the amount of memory actively allocated, where Erlang terms are laid in memory; this amount does not represent the amount of memory that the OS has given to the virtual machine (and Linux doesn't actually reserve memory pages until they are used by the VM). To understand where memory goes, one must first understand the many allocators being used:



  1. temp_alloc: does temporary allocations for short use cases (such as data living within a single C function call).
  2. eheap_alloc: heap data, used for things such as the Erlang processes' heaps.
  3. binary_alloc: the allocator used for reference counted binaries (what their 'global heap' is).
  4. ets_alloc: ETS tables store their data in an isolated part of memory that isn't garbage collected, but allocated and deallocated as long as terms are being stored in tables.
  5. driver_alloc: used to store driver data in particular, which doesn't keep drivers that generate Erlang terms from using other allocators. The driver data allocated here contains locks/mutexes, options, Erlang ports, etc.
  6. sl_alloc: short-lived memory blocks will be stored there, and include items such as some of the VM's scheduling information or small buffers used for some data types' handling.
  7. ll_alloc: long-lived allocations will be in there. Examples include Erlang code itself and the atom table, which stay there.
  8. fix_alloc: allocator used for frequently used fixed-size blocks of memory. One example of data used there is the internal processes' C struct, used internally by the VM.
  9. std_alloc: catch-all allocator for whatever didn't fit the previous categories. The process registry for named processes lives there.


The entire list of where given data types live can be found in the source.

By default, there will be one instance of each allocator per scheduler (and you should have one scheduler per core), plus one instance to be used by linked-in drivers using async threads. This ends up giving you a structure a bit like the drawing above, but split in N parts at each leaf.

Each of these sub-allocators will request memory from mseg_alloc and sys_alloc depending on the use case, and in two possible ways. The first way is to act as a multiblock carrier (mbcs), which will fetch chunks of memory that will be used for many Erlang terms at once. For each mbc, the VM will set aside a given amount of memory (~8MB by default in our case, which can be configured by tweaking VM options), and each term allocated will be free to go look into the many multiblock carriers to find some decent space in which to reside.

Whenever the item to be allocated is greater than the single block carrier threshold (sbct), the allocator switches this allocation into a single block carrier (sbcs). A single block carrier will request memory directly from mseg_alloc for the first 'mmsbc' entries, and then switch over to sys_alloc and store the term there until it's deallocated.

So looking at something such as the binary allocator, we may end up with something similar to:



Whenever a multiblock carrier (or the first 'mmsbc' single block carriers) can be reclaimed, mseg_alloc will try to keep it in memory for a while so that the next allocation spike that hits your VM can use pre-allocated memory rather than needing to ask the system for more each time.

When we call erlang:memory(total), what we get isn't the sum of all the memory set aside for all these carriers and whatever mseg_alloc has set aside for future calls, but what actually is being used for Erlang terms (the filled blocks in the drawings above). This information, at least, explained that variations between what the OS reports and what the VM internally reports are to be expected. Now we needed to know why our nodes had such a variation, and whether it really was from a leak.

Fortunately, the Erlang VM allows us to get all of the allocator information by calling:

code:
[{{A, N}, Data} || A <- [temp_alloc, eheap_alloc, binary_alloc, ets_alloc,
                         driver_alloc, sl_alloc, ll_alloc, fix_alloc, std_alloc],
                   {instance, N, Data} <- erlang:system_info({allocator, A})]
The call isn't pretty and the data is worse. In that entire data dump, you will retrieve the data for all allocators, for all kinds of blocks, sizes, and metrics of what to use. I will not dive into the details of each part; instead, refer to the functions I have put inside the recon library that will perform the diagnostics outlined in the next sections of this article.

To figure out whether the Logplex nodes were leaking memory, I had to check that all allocated blocks of memory summed up to something roughly equal to the memory reported by the OS. The function that performs this duty in recon is recon_alloc:memory(allocated). The function will also report what is being actively used (recon_alloc:memory(used)) and the ratio between them (recon_alloc:memory(usage)).

Fortunately for Logplex (and me), the memory allocated matched the memory reported by the OS. This meant that all the memory the program made use of came from Erlang's own term allocators, and that it was unlikely the leak came directly from C code.

The next suspected culprit was memory fragmentation. To check out this idea, you can compare the amount of memory consumed by actively allocated blocks in every allocator to the amount of memory attributed to carriers, which can be done by calling recon_alloc:fragmentation(current) for the current values, and recon_alloc:fragmentation(max) for the peak usage.

By looking at the data dumps for these functions (or a similar one), Lukas figured out that binary allocators were our biggest problem. The carrier sizes were large, and their utilization was impressively low: from 3% in the worst case to 24% in the best case. In normal situations, you would expect utilization to be well above 50%. On the other hand, when he looked at the peak usage for these allocators, binary allocators were all above 90% usage.

Lukas drew a conclusion that turned out to match our memory graphs. Whenever the Logplex nodes have a huge spike in binary memory (which correlates with spikes in input, given that we deal with binary data for most of our operations), a bunch of carriers get allocated, giving something like this:



Then, when memory gets deallocated, some remnants are kept in Logplex buffers here and there, leading to a much lower rate of utilization, looking similar to this:



The result is a bunch of nearly empty blocks that cannot be freed. The Erlang VM will never do defragmentation, and that memory keeps being hogged by binary data that may take a long time to go away; the data may be buffered for hours or even days, depending on the drain. The next time there is a usage spike, the nodes might need to allocate more into ETS tables or into the eheap_alloc allocator, and most of that memory is no longer free because of all the nearly empty binary blocks.

Fixing this problem is the hard part. You need to know the kind of load your system is under and the kind of memory allocation patterns you have. For example, I knew that 99% of our binaries will be smaller or equal to 10kb, because that's a hard cap we put on line length for log messages. You then need to know the different memory allocation strategies of the Erlang virtual machine:

  1. Best fit (bf)
  2. Address order best fit (aobf)
  3. Address order first fit (aoff)
  4. Address order first fit carrier best fit (aoffcbf)
  5. Address order first fit carrier address order best fit (aoffcaobf)
  6. Good fit (gf)
  7. A fit (af)



For best fit (bf), the VM builds a balanced binary tree of all the free blocks' sizes, and will try to find the smallest one that will accommodate the piece of data and allocate it there. In the drawing above, having a piece of data that requires three blocks would likely end in area 3.

Address order best fit (aobf) will work similarly, but the tree instead is based on the addresses of the blocks. So the VM will look for the smallest block available that can accommodate the data, but if many of the same size exist, it will favor picking one that has a lower address. If I have a piece of data that requires three blocks, I'll still likely end up in area 3, but if I need two blocks, this strategy will favor the first mbcs in the diagram above with area 1 (instead of area 5). This could make the VM have a tendency to favor the same carriers for many allocations.

Address order first fit (aoff) will favor the address order for its search, and as soon as a block fits, aoff uses it. Where aobf and bf would both have picked area 3 to allocate four blocks, this one will get area 2 as a first priority given its address is lowest. In the diagram below, if we were to allocate four blocks, we'd favor block 1 over block 3 because its address is lower, whereas bf would have picked either 3 or 4, and aobf would have picked 3.



Address order first fit carrier best fit (aoffcbf) is a strategy that will first favor a carrier that can accommodate the size and then look for the best fit within that one. So if we were to allocate two blocks in the diagram above, bf and aobf would both favor block 5, aoff would pick block 1. aoffcbf would pick area 2, because the first mbcs can accommodate it fine, and area 2 fits it better than area 1.

Address order first fit carrier address order best fit (aoffcaobf) will be similar to aoffcbf, but if multiple areas within a carrier have the same size, it will favor the one with the smallest address between the two rather than leaving it unspecified.

Good fit (gf) is a different kind of allocator; it will try to work like best fit (bf), but will only search for a limited amount of time. If it doesn't find a perfect fit there and then, it will pick the best one encountered so far. The value is configurable through the mbsd VM argument.

A fit (af), finally, is an allocator behavior for temporary data that looks for a single existing memory block, and if the data can fit, af uses it. If the data can't fit, af allocates a new one.

Each of these strategies can be applied individually to every kind of allocator, so that the heap allocator and the binary allocator do not necessarily share the same strategy.
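If you want to play with these, strategies are set per-allocator through emulator flags (see the erts_alloc documentation). For example, switching the binary allocator to address order best fit might look like this -- the values are purely illustrative, not a recommendation:

```shell
# +MBas sets the binary allocator's strategy; +MBsbct tweaks its
# single block carrier threshold (in KB). Illustrative values only.
erl +MBas aobf +MBsbct 1024
```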

Hopefully someone other than me finds this stuff super interesting :toot:

(images are from my S3 account at work -- no leeching here)

MononcQc
May 29, 2007

Yeah, I'm speaking there: http://www.erlang-factory.com/conference/Toronto2013/speakers/FredHebert

MononcQc
May 29, 2007

Otto Skorzeny posted:

You pronounce your name like the hockey player, right?

Hay-Bear, more or less. No idea how announcers around your place pronounced Guy Hebert's name (I'm guessing that's the one you had in mind?) (eˈbɛʁ)

MononcQc
May 29, 2007

Yeah, Factory Lites are cool to check out a bunch of projects, get a few basics here and there, meet other devs, and do so not too expensively if you don't live far (compared to full Erlang Factories).

If it's like the ones I've been to before, lunch should also be covered and there might be drinks after, but that depends on who organizes it.

MononcQc
May 29, 2007

more like dICK posted:

Is there a reason so many functions are 1-based instead of 0-based (lists:nth/2, element/2 etc)?

It's pretty much a difference between calculating an offset and a position. An offset would be 'how far from the start are you', offset 0 being the first position. This is used for binaries ({0,1} = binary:match(<<"abc">>, <<"a">>)), the array module (which does it for familiarity), and a few others (the re module uses offsets on some matches if you ask for them).

lists:nth/2 and element/2 are about position (first, second, third), and so are 1-indexed. Anyway, that's the easiest way to think about it.

Otherwise, I'm not exactly sure what the design decision was behind it back in the 80s, never thought to ask around. I can try next time I see Robert or Joe online and report back.
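The position/offset split shows up directly in the standard library:

```erlang
%% Positions are 1-based:
a = lists:nth(1, [a, b, c]).
b = element(2, {a, b, c}).
%% Offsets are 0-based; binary:match/2 returns {Offset, Length}:
{0, 1} = binary:match(<<"abc">>, <<"a">>).
%% The array module uses 0-based indexing, for familiarity:
b = array:get(1, array:from_list([a, b, c])).
```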

MononcQc
May 29, 2007

more like dICK posted:

That makes sense. It just takes some getting used to coming from languages where I'd be calling some_list[0] to get the first element.

Another silly question. Is there any reason to ever have a process that doesn't live in an OTP supervision tree? It seems like every process I write is either an OTP behaviour, or a worker at the leaf of a supervision tree; I don't end up using many of the concurrency primitives, since so much seems to go through OTP. Is this OK, or are there legitimate uses for raw Erlang?

This is perfectly OK, and a testament to how general OTP is as a framework: it covers the higher levels of concurrency. Erlang's base mechanisms are primitives (and they're good primitives). You will use them when you profile your code and find some specific hotspot where OTP might be too slow, or when you find yourself needing a pattern not covered by OTP.

Whenever you end up feeling you need raw Erlang, go take a look at the start functions from proc_lib, as they'll give you some of the base mechanisms to have them talk to supervisors in order for you to inject your own code inside a supervision tree. If you really want to fully have your code become compliant with OTP trees (you just wanted to have a lot of raw Erlang around it), there are functions in the sys module for that.

In the last case, you may feel that raw Erlang is just better to express yourself in for a problem. In these cases, you'll want to add that bit of code to your supervision tree using supervisor_bridge, which lets you create a middle-man for your raw Erlang process. It's usually easier to use that one than reimplementing OTP with proc_lib and sys.

There are other cases where you just want out of the supervision structure, but you'll probably be able to recognize them when it happens.
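As a rough sketch of the proc_lib route (skipping sys message handling, so not fully OTP-compliant), the startup handshake looks like this:

```erlang
-module(naive_proc).
-export([start_link/0, init/1]).

%% A raw process that can still start under a supervisor. A real
%% special process would also handle system messages through the
%% sys module; this sketch only shows the startup synchronization.
start_link() ->
    proc_lib:start_link(?MODULE, init, [self()]).

init(Parent) ->
    %% Unblocks the parent's call to proc_lib:start_link/3.
    proc_lib:init_ack(Parent, {ok, self()}),
    loop().

loop() ->
    receive
        stop -> ok;
        _Other -> loop()
    end.
```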

MononcQc
May 29, 2007

Erlang Factories vary a lot. These places act like a junction point where academia and industry meet, so for Basho talks you'll get a mix of distributed systems theory (and let's be frank, Tom and Chris' talk had a lot of it, and it was rough to keep up even though I had read all the papers they mentioned beforehand) and practical implementation. There are a bunch of distributed systems papers where I contacted the authors, and a back and forth with Basho is what prompted new research (Dotted Version Vectors are part of that).

Other conferences have a different makeup. The one in New York had a great talk by Mahesh Paolini-Subramanya (he's an awesome guy; if you meet him, go say hi) about how his former telecom company rewrote their call-center Java FSMs into simpler Erlang ones, using concurrency to better express the problem domain. He has plenty of great war stories. A year or so ago, the Erlang Factory in San Francisco had a keynote about type-checking Erlang, then two talks by unrelated teams about how to use the related type annotations to generate Quickcheck (or equivalent) tests. The year before, the one in London had a track on how refactoring could be automated with Wrangler.

This year they had at least 2-3 talks on real-time bidding and software developed under soft real-time constraints (I gave one of them), and one from the guys at Boundary about how to deal with high-throughput stuff and avoid anything that blocks in the VM.

I haven't been to two of them that were very similar, although if you go to, say, the London Erlang Factory and the San Francisco Erlang Factory in the same year, then there will be some overlap from bigger players that can ship their employees to conferences worldwide.

MononcQc
May 29, 2007

Oh and Erlang Solutions Ltd. decided to bring back the Erlang Handbook, which is a kind of informal spec about the language itself.

MononcQc
May 29, 2007

1. Get to know your CAP theorem. I'd start with http://www.julianbrowne.com/article/viewer/brewers-cap-theorem and then You Can't Sacrifice Partition Tolerance. I also made my own thing on it at http://learnyousomeerlang.com/distribunomicon#my-other-cap-is-a-theorem

2. Read and try to understand Amazon's Dynamo Paper(PDF). It's a very good read and behind a shitload of systems' architecture now.

3. Read on "The fallacies of distributed computing" (I've made a write up on them vs. Erlang).

4. Then, I'd direct people to try and understand vector clocks/Lamport clocks. I suggest reading Basho's Why Vector Clocks are Easy followed by their post titled Why Vector Clocks are Hard. I then explain them very simply in my project.log. Go read the papers if you want, there's a shitload of them.

5. Check out PACELC. It's a very simple extension to CAP that basically says that PAC is “during a (P)artition, do you pick (A)vailability or (C)onsistency”, and “(E)lse, do you pick (L)atency or (C)onsistency”. For this one, http://www.slideshare.net/abadid/cap-pacelc-and-determinism and http://dbmsmusings.blogspot.ca/2010/04/problems-with-cap-and-yahoos-little.html are nice.

6. Take a look at Two-Phase Commits and Three-Phase Commit protocol. You could also look into other schemes for replication such as chain replication and whatnot.

7. Consensus algorithms. Three main ones here, in order of how easy they are to understand: Raft, ZAB, and Paxos (notoriously hard to understand). They guarantee consistency even in the presence of failed nodes.

Oh and all around that, go read Aphyr's blog, particularly "The trouble with timestamps" and the "Call me maybe" series.
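Since vector clocks (point 4) are the part people most often trip on, here's a toy sketch of the core merge operation -- my own illustration, not any specific library's API:

```erlang
%% A vector clock as an orddict of Node => Counter; merging two
%% clocks takes the pairwise max of each node's counter.
C1 = orddict:from_list([{a, 2}, {b, 1}]).
C2 = orddict:from_list([{a, 1}, {b, 3}, {c, 1}]).
Merged = orddict:merge(fun(_Node, N1, N2) -> max(N1, N2) end, C1, C2).
[{a, 2}, {b, 3}, {c, 1}] = orddict:to_list(Merged).
```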

MononcQc fucked around with this message at 21:11 on Dec 1, 2013

MononcQc
May 29, 2007

Oh yeah more texts!

- End-to-end arguments in system design

- Idempotence is not a medical condition

MononcQc
May 29, 2007

Rapsey posted:

What is it?

Val = ?L(Key, List),
Val = ?L(Key, List, Default).


IIRC I implemented something like that when I worked there -- I don't know if it's still the same meaning (mine was just called ?lookup or something). The two-argument one would raise an exception when the value isn't found and no default is provided, so that you never need to visually do the {ok, Val} = Expression thing.

The other sweet thing about it is that the implementation eventually got switched to lists:keyfind to be faster, and other formats could be supported (like looking within JSON structs) without altering the calling code due to macro goodness.
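For reference, such a macro pair could be defined like this -- my own reconstruction with made-up names, not the actual code from that codebase:

```erlang
%% ?L/2 raises on a missing key; ?L/3 falls back to a default.
-define(L(Key, List), l_lookup(Key, List)).
-define(L(Key, List, Default), l_lookup(Key, List, Default)).

l_lookup(Key, List) ->
    case lists:keyfind(Key, 1, List) of
        {Key, Val} -> Val;
        false -> error({key_not_found, Key})
    end.

l_lookup(Key, List, Default) ->
    case lists:keyfind(Key, 1, List) of
        {Key, Val} -> Val;
        false -> Default
    end.
```

Because callers only ever see ?L, the backing implementation (proplists, keyfind, JSON structs) can change without touching any call site.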

MononcQc
May 29, 2007

I'll be in there speaking, just no idea what about yet :toot:

MononcQc
May 29, 2007

If you're interested in learning more about the internals of the Erlang Runtime System, the OTP team started publishing more documentation and proposals for improvements in the github repository: https://github.com/erlang/otp/tree/master/erts/emulator/internal_doc. It's a pretty interesting read.

Although the documents are written in the future tense, that stuff is all currently implemented -- most of it by R16B01-R16B02 -- the OTP team just published documents intended for the RELEASE project (EU research on parallelism) once they were done with them.

MononcQc fucked around with this message at 14:48 on Jan 9, 2014


MononcQc
May 29, 2007

My workflow is to generally use vim and erl, and reload code by hand in there, or often use Common Test and either ct_run or rebar ct to run and re-run tests. I often end up automating some stuff, but I keep things very manual as a whole.

If you want to automate a few things, there are a few auto-reloaders available. I've used them sometimes before, but I often find myself wanting to control how I reload code.
