  • Locked thread
pepito sanchez
Apr 3, 2004
I'm not mexican

MononcQc posted:

I wrote my How I Start article in that spirit.

It's easier to recommend ideas if I know the stuff you're interested in.

that's your site or you're the dude? either way cool and thanks. some good articles here.


MononcQc
May 29, 2007

pepito sanchez posted:

that's your site or you're the dude? either way cool and thanks. some good articles here.
The site is by a coworker, but I'm the dude who wrote the article.

Corla Plankun
May 8, 2007

improve the lives of everyone
if erlang was made by ericsson why isn't it pronounced air-lang?

Notorious b.s.d.
Jan 25, 2003

by Reene

MononcQc posted:

The question of parallelizing is an interesting one. Generally what I say is that people come to Erlang for the concurrency, but they stay for the fault tolerance.

you're gonna have to be more specific

i like statically typed languages, and i restart daemons after unhandled exceptions. what is erlang gonna win me on fault tolerance?

tef
May 30, 2004

-> some l-system crap ->
fault tolerance goes with the grain of the language, and the runtime inspection tools are pretty useful

Notorious b.s.d.
Jan 25, 2003

by Reene

tef posted:

fault tolerance goes with the grain of the language, and the runtime inspection tools are pretty useful

yeah this is the opposite of specific

cowboy beepboop
Feb 24, 2001

Corla Plankun posted:

if erlang was made by ericsson why isn't it pronounced air-lang?

https://en.wikipedia.org/wiki/Agner_Krarup_Erlang
does anyone know how to pronounce erlang in danish?

my question is where is the great fault tolerant voip/sip server in erlang

Pittsburgh Fentanyl Cloud
Apr 7, 2003


my stepdads beer posted:

my question is where is the great fault tolerant voip/sip server in erlang

I just saw 3/4ths of my coworkers get dumpstered in a move to "agile", so my question is why do we recreate the wheel every four years?

Joe Law
Jun 30, 2008

most programmers (myself included) are terrible and are only capable of "solving" the same set of problems over and over and over again. if we actually had to solve unknown problems never before considered we would flounder and be unmasked as frauds that dont deserve that plush deecee six and a half figgies

Pittsburgh Fentanyl Cloud
Apr 7, 2003


Here's a dumb and retarded question: What has object oriented programming given us over the past twenty years that we wouldn't have had without OOP?

gonadic io
Feb 16, 2011

>>=

Citizen Tayne posted:

Here's a dumb and retarded question: What has object oriented programming given us over the past twenty years that we wouldn't have had without OOP?

State that's slightly more encapsulated than it would otherwise be (functional languages aside)

Funk In Shoe
Apr 20, 2008

Waiting in line, Mr. Haydon told me it is a wheel not meant for lovers but for infants, lifting people and letting them swing, putting the world on display from up high

Saw a couple of job openings asking for people familiar with various stuff including angular.js and decided to check it out, out of curiosity. Result: I've spent most of the day learning about it on codeschool because I got all enthusiastic about the challenges, and well, it's pretty neat and approachable, even for me. But I don't understand how this can be the entirety of a job, doing angular.js stuff. I guess if you had to do it every day for 8 hours you'd get pretty bored.

Brain Candy
May 18, 2006

gonadic io posted:

State that's slightly more encapsulated than it would otherwise be (functional languages aside)

if we were talking about the mythological 'good' programmers, sure

instead it meant big mutable blobs of bad design informed by hazy understanding of the world

if you ask somebody to describe a simple thing, will they give you an exhaustive formal answer? :fishmech: aside, they will say something vague like it's "a red fruit". but computers are dumber than a dog so you have to fill in the blanks on what "fruit" is. or you don't and you end up with lumpy, inconsistent descriptions for everything

everybody thinks they know what an 'object' is, but most people don't know that the objects computers can work with are the ones from philosophy, or math, or linguistics: a model that is not what people understand innately. using your everyday model of things will mostly work, it'll just bite you in the rear end when you go to maintain that code months or years later.

and there's a coupling of state with time which is natural with objects, as objects are how people think about persistence over time. from the point of emulating a single observer it makes sense that you can reach out and grab state whenever. but now is trickier when you have more than one observer. what thing happened in what order? should you really be able to see everything?

as commonly used, OO encourages a mental model that is familiar and incorrect

Brain Candy fucked around with this message at 14:43 on Oct 19, 2015

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
its really just inheritance that sucks. the core concept of OOP (bundling data together with operations on that data) is pretty good, interfaces and polymorphism are pretty good, a single ancestor as a form of code reuse is really awful.

90% of people building unmaintainable disasters is because they went overboard with inheritance and big class hierarchies.

prefect
Sep 11, 2001

No one, Woodhouse.
No one.




Dead Man’s Band

Jabor posted:

its really just inheritance that sucks. the core concept of OOP (bundling data together with operations on that data) is pretty good, interfaces and polymorphism are pretty good, a single ancestor as a form of code reuse is really awful.

90% of people building unmaintainable disasters is because they went overboard with inheritance and big class hierarchies.

what do you think about multiple inheritance?

Brain Candy
May 18, 2006

i'd even say the main benefit of FP programming isn't any of the commonly cited advantages, but that it's sufficiently abstract and alien that people don't import their bad assumptions

Shaggar
Apr 26, 2006
inheritance works in some places, like controllers in mvc/webapi. The alternative would be duplicating a bunch of plumbing every time you create a new controller.

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

Shaggar posted:

inheritance works in some places, like controllers in mvc/webapi. The alternative would be duplicating a bunch of plumbing every time you create a new controller.
No, the alternative would be to use composition and inject some webapi route registrar, and then you'd have a simple class without inheritance where you'd say
code:
class Butts
{
  Butts(WebApiRegistrar webApi)
  {
    webApi.register("/foo", req => { blah });
  }
}

Shaggar
Apr 26, 2006
yes that's the worthless plumbing im talking about.

Shaggar
Apr 26, 2006
you don't get any benefit from not doing the inheritance because there are no scenarios where you're going to write your own complete controller implementation.

pepito sanchez
Apr 3, 2004
I'm not mexican
what about java's observer/observable? i see a clear case for inheritance there and in any similar example.

pepito sanchez
Apr 3, 2004
I'm not mexican

Shaggar posted:

you don't get any benefit from not doing the inheritance because there are no scenarios where you're going to write your own complete controller implementation.

shaggar is right

Jerry Bindle
May 16, 2003
there is this inhouse framework (ugh) that was designed for non-java people to be able to easily write modules for this framework. it consists of a mountain of abstract classes. at some point you subclass one of them and somehow know which method to override, and then get to work by resetting public fields of the super classes. i tried to explain why this is not ideal but the person just thought i didn't understand what an abstract class was. not surprisingly the non-java java developers are able to easily copy and paste the examples and get those to work, but anytime they need to do something 'new' they have to run to the mastermind of this abortion and get him to do it for them, because he is the only one who knows how it works

Jerry Bindle
May 16, 2003
oh and the worst part is that their "dependency injection" is an abstract class with a bunch of public static fields. when you use the framework you subclass it and reset the public static fields to what you want.

Jerry Bindle
May 16, 2003
these same people did another testing framework. to launch it, you subclass an abstract junit test class, and set a public field of the parent class to a test script. it just so happens that the test runner ant uses doesn't choke on this design error, but it chokes as it should with maven. "maven is the problem, we've never heard of that, what is it"

Symbolic Butt
Mar 22, 2009

(_!_)
Buglord

Bloody posted:

i have a git question

i have two git repositories, firmware and software. the software runs on a pc and interacts with the firmware which runs on an embedded platform. i want to merge these into one common repository and i would like to do so in a manner that preserves their histories and branches

is this a doable thing

so I just needed to do this and I want to report how I did it

which was stolen from searching google of course :rolleyes: http://nuclearsquid.com/writings/subtree-merging-and-you/

the commands dum dum duuum:

code:
git remote add -f projectb /path/to/other/repo
git merge -s ours --no-commit projectb/master
git read-tree --prefix=vendor/projectb/ -u projectb/master
git commit -m "Merge Project B into vendor/projectb/"
the "-s ours" is some classic good git api

Symbolic Butt
Mar 22, 2009

(_!_)
Buglord
there's this weird philosophy in oop concerning code reuse, and it's not just with inheritance

or should I say it's the general notion of "planning ahead" that leads to some really awkward code

MononcQc
May 29, 2007

Notorious b.s.d. posted:

you're gonna have to be more specific

i like statically typed languages, and i restart daemons after unhandled exceptions. what is erlang gonna win me on fault tolerance?

You need to restart your whole daemon after an unhandled exception. This means that workflows unimpacted by the exception still need to abort and start over. That's fault tolerance gotcha #1. If you're serving 5,000 concurrent users, some of whom are buying stuff online, or doing whatever (like playing dumb games in their browsers over a websocket, or using their one call to their lawyer), and an unrelated exception takes out their current session, you haven't been fault tolerant: a condition unrelated to their current activity (other than sharing the same program) has taken them out.

That's brittle. But there's more to it than that.

The way I like to put it, an extensive type-checking phase, strong test suite, exhaustive model checking, code reviews, linting, and so on, are all aiming at preventing errors from being in programs in the first place. These are required in Erlang, too. The idea is to make sure, as much as possible, that the program is doing the right thing. Yet, errors still make it to programs, and everyone agrees that there are diminishing returns to weeding out all the bugs, potential and real.

It will cost you exponentially more money to weed out the trickier and trickier bugs (without introducing new ones). And so that's why truly error-free programs are gonna be written in vacuums, or on projects that use uncool tech and rules: NASA's The Power of 10: Rules for Developing Safety-Critical Code, for example, forbids using the heap altogether as too risky; they would even delay flights that would go over the new year because the code could have been buggy, and it was better to delay by 2-3 weeks from December into January than to take the risk.

Now, those are extremes, and most of the industry isn't running under these rules, doesn't run Ada code despite its proven track record, and so on. That's because most people out there (both of us included) have an addendum to "software should be correct", and it's "software should be correct (within reasonable boundaries of cost and complexity)".

So we've got all of these things to prevent errors from being in programs, but oh so very few to handle errors in running systems. The approach we generally take is hygiene without an immune system: live in a clean bubble. But most humans don't need and won't go through these costs, because, well, we have an immune system and an ability to heal injuries.

The closest mechanisms most software has are things like redundancy and crash-fast with an easy way to bring something back. But the way we generally apply them is at the architectural level: run the same software on many nodes, restart the whole software as a unit, and so on. The finer-grained mechanisms by which we could handle unexpected errors within software haven't made it there, and for good reason: an exception in a Java thread, for example, risks leaving memory inconsistent, so of course you gotta kill it all. Doing otherwise would be risking lovely rear end corruption and that would be worse. Fail-fast is good.

So what's the position of Erlang and why I'm saying it helps fault tolerance?
  • isolated immutable memory: killing or losing a process will not break the memory of others, and is of no risk.
  • preemptive scheduling: no processes, even runaway ones, can starve another one of CPU. You can however set priorities for critical tasks; this is soft-real time stuff (much like the per-process GC, which is not primarily for fault tolerance though)
  • processes dying may still affect a broader state of interdependencies. For this reason, Erlang adds links, which let you make "should-fail-with" relationships between processes explicit
  • because not all processes that depend on each other should die together, it also adds monitors, which let you detect a process' failure and react to it asynchronously on your own terms
  • pieces of your system fail and you will want redundancy. Therefore the language is made to be network transparent.
  • because the network can fail at arbitrary times (and the other pieces of hardware too), all communication needs to be made via message-passing and asynchronously
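
A tiny sketch of the link/monitor primitives (the module name and the `oops` exit reason here are mine, just for illustration): spawn_monitor watches a process and turns its death into an ordinary message, instead of the shared crash you'd get with a link.
code:
```erlang
-module(fail_demo).
-export([watch/0]).

%% spawn_link would make us die together with the child
%% ("should-fail-with"); spawn_monitor instead delivers the child's
%% death as a plain message we can react to on our own terms.
watch() ->
    {Pid, Ref} = spawn_monitor(fun() -> exit(oops) end),
    receive
        {'DOWN', Ref, process, Pid, Reason} -> Reason
    after 1000 ->
        timeout
    end.
```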

So those are the primitives. They give the right tools to make fault-tolerant systems, but are not enough yet. The big concept comes from supervisors, but more exactly from supervision trees.

When you boot your program, its various responsibilities are all encoded in the supervision tree. So if, for example, I'm building a program to count and report election results from a nationwide vote, my supervision tree could be like this:

code:
                             [root supervisor]
                              |               \
                    [tally_supervisor]       [live reporting sup]
                    /         |                 |        \
             [storage sup]    |       [session sup]      [web server sup]
            /          |      |          |                    / \ \ \
   [worker pool sup]   |      |         ...                [web requests/workers]
    / | | | |        [cache]  |
   [[workers]]]]              |
                              |
                        [district sup]
                        /  | | | | \
                    [various districts'
                     individual supervisors]
                        |               \
                    [counter_sup]       [ballot_sup]
                        |    \                  |
                    [tally] [ballot counting]  [ballot handling]
This system would have two OTP applications: a tally app and a live-reporting app. Both are under the VM's root supervisor. The overall tree is started depth-first, from left to right, synchronously. What this means is that my tally supervisor will make sure that the storage layer is up and ready, with its worker pool, before it even begins booting the subsystem in charge of district-specific handling (opening ballot boxes, reading the contents, etc.).

Once that is set up, the supervision tree will start booting that aspect, but will still make sure that the per-district tally process is in place before starting the handling of specific ballots. Only once all of this is at work and under way will the live-reporting app be allowed to boot.

The fun aspect is also that supervision trees fail from the leaves first, and gradually up to the root. This means that for tally handling to fail, I need to have enough ballot handling fail that it kills the ballot supervisor, then have that supervisor die often enough that it kills the district supervisor. Then if too many district supervisors die, only then will it bubble up. And if the tally app fails, the live-reporting app will be taken down with it.

What's interesting here is that the supervision tree's individual supervisors can be programmed with various tolerance levels and strategies. Meaning I can say that I can tolerate a supervisor dying once per hour as well as a million times a minute, or 5 times a second if I want. I can also tell them that once one of their children dies, to either restart it, let it go, or only restart if it was an unexpected shutdown. I can also tell the supervisor that when one of the children die, either restart only this one, or restart all those booted after, or all children whatsoever.
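
Those knobs are literally a supervisor's flags and child specs. A hedged sketch of what one per-district supervisor's callback module might look like (the `district_sup` and `tally` names are made up; the shape is standard OTP):
code:
```erlang
-module(district_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% one_for_one: when a child dies, restart only that child.
%% intensity/period: tolerate at most 5 restarts per 10 seconds;
%% more than that and this supervisor itself dies, escalating the
%% failure one level up the tree.
init([]) ->
    SupFlags = #{strategy => one_for_one,
                 intensity => 5,
                 period => 10},
    Tally = #{id => tally,
              start => {tally, start_link, []},  %% hypothetical worker module
              restart => permanent,              %% always restart it
              type => worker},
    {ok, {SupFlags, [Tally]}}.
```
Swapping `one_for_one` for `rest_for_one` or `one_for_all` is the single-line change between "restart only this one", "restart all those booted after", and "restart all children whatsoever".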

This would, for example, let me specify (in a single line of code) that when ballot counting goes wrong, only restart the ballot counter, but if the tally for the entire regional office is going to poo poo, start over for the entire office. This is regardless of the specific exception, with the expectation it was transient (as 99% of bugs are).

So what becomes the general strategy? All your long-term, critical, must-be-safe functionality is packed up up up in the supervision tree, closer to the beginning. If it can't run, the system can't live. All your unsafe, risk-friendly operations are moved down the tree, near the leaves, where they can be allowed to fail. If they're really risky, they can be moved to another node entirely and keep transparently talking to the current one.

You can think of your state in three broad categories: static (rarely changes, known everywhere -- like configuration data), transient and computable (a TCP connection would be transient computable state, if the IP and port to connect to are known; I can then allow myself to lose and rebuild that connection again), or transient and uncomputable (user-submitted data when the user is gone, for example).

Static state is easy (it goes in a table somewhere, anyone can fetch it, it just needs to be there at boot time). Computable state is easy -- put the rootset in the supervisor and it can pass it back to the workers, and the workers can rebuild it. The uncomputable state is tricky. In the voting system above, it would be handled at the leaves, but once we know the state to be correct, it is moved into the persistent storage, where the system isn't allowed to fail. The workers either store it locally or offshore the data elsewhere.
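
A hedged sketch of the transient-computable case (the `conn_worker` module and its details are made up): the supervisor only ever holds the rootset ({Host, Port}), and the worker rebuilds the connection from it on every (re)start.
code:
```erlang
-module(conn_worker).
-behaviour(gen_server).
-export([start_link/2, init/1, handle_call/3, handle_cast/2]).

%% The supervisor passes the rootset back to us; we recompute the
%% transient state (the socket) from it each time we are started.
start_link(Host, Port) ->
    gen_server:start_link(?MODULE, {Host, Port}, []).

init({Host, Port}) ->
    case gen_tcp:connect(Host, Port, [binary, {active, false}], 5000) of
        {ok, Sock} ->
            {ok, #{sock => Sock, root => {Host, Port}}};
        {error, Reason} ->
            %% crash; the supervisor restarts us with the same rootset
            {stop, Reason}
    end.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.
```
If the connection later dies, the worker crashes, gets restarted, and the state is recomputed; only the uncomputable stuff needs to be moved to storage first.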

Doing all of this yields very cool systems where the following is encoded in the program structure:
  • what is critical or not
  • what is allowed to fail or not
  • how software should boot according to which guarantees (what is a critical subcomponent or not)
  • how software should fail, meaning it defines the legal states of partial failures you find yourself in
  • how software is upgraded (because it can be upgraded live, based on the supervision structure)
  • how components interdepend on each other
Oh and logging of all exceptional cases, restarts, etc. is handled out of the box by supervisors (and overridable).

So this is good, as long as you say "well that's if you can be allowed failures". And I say 'of course!'. The reason this model is good is that most failures seen in the wild are transient failures.

The reason for this goes like this:



The bugs that are frequent and repeatable are easy to find in dev. Unless you ship a fundamentally broken product, you're gonna find them whether it's with types, tests, careful review, or users yelling at you on the phone.

The repeatable bugs that are in features that are infrequently used are going to be harder -- most likely less time will be spent weeding errors out of these because they're either unimportant or used by a minority of people.

Transient bugs that require thousands, millions, or billions of samples to show up are, statistically speaking, almost guaranteed to never show up in testing (which is where exhaustive proofs and models come in handy if you have life-critical software).

So what will show up in production?



Frequent usage with frequent repeatability shouldn't be there unless your system or process is fundamentally flawed in ways no tech can save it, or if you made a huge mistake and production is very different from testing.

Rare usage and repeatable faults are gonna be what you have logs for. Maybe one or two users are gonna be angry and you'll need to spend time debugging it.

Then a fuckton of bugs are going to be that lovely rear end nondeterministic transient set of issues. Thankfully, those are often handled by restarts:



Trying again resets a state or waits until a weird combination is gone, and things work this time. rare usage and repeatable faults are a '?' there because that 100% depends on the usage pattern.

In fact, systems I have deployed in production have had that level of error reporting:



That's right, 1.2 million exceptions a day for a while. Turns out they were transient and not customer-impacting. The system ran for 6 months before we had the bandwidth and took the time to address the issue. But it was running fine with no customer complaints. We got rid of the error to lower our bandwidth bill, actually.

But I'm not done yet. That's not where Erlang stops. In Programming Forth (Stephen Pelc, 2011), the author says "Debugging isn't an art, it's a science!" and provides the following (ASCII-fied) diagram:
code:
    find a problem ---> gather data ---> form hypothesis ----,
    .--------------------------------------------------------'
    '-> design experiment ---> prove hypothesis --> fix problem
Which then loops back on itself. By far the easiest bit is 'finding the problem'. The difficult ones are forming the right hypothesis that will let you design a proper experiment, and proving your fix works.

It's especially true of Heisenbugs that ruin your life by disappearing as you observe them.

So how do you go about it? GATHER MORE DATA. The more data you have, the easier it becomes to form good hypotheses. Traditionally, there are four big ways to do it in the server world:
  • Gather system metrics
  • Add logs and read them carefully
  • Try to replicate it locally
  • Get a core dump and debug that
Those are all available in Erlang, but they're often impractical:
  • System metrics are often wide and won't help with fine-grained bugs, but will help provide context
  • logs can generate lots and lots of expensive and useless data, and logging itself may cause the bug to stop happening. In fact, given transient bugs are unexpected, it's quite possible nothing will be logged about them and you'd need to go dive in, edit the code, deploy it, look at logs, and hope they show the right thing.
  • Replicating it locally without any prior information is more or less blind programming. Take shots in the dark until you figure out you've killed the right monster.
  • Core dumps are post-facto items. They often show you the bad state, but rarely how to get there.
More recently, systemtap/dtrace have come into the picture. These help a lot for some classes of bugs. However, I have not yet felt the need to run either in production. Why?

Because Erlang comes with tracing tools that trace every god drat thing for you out of the box. Process spawns? It's in. Messages sent and received? It's in. Function calls filtered with specific arguments or return values? it's in! Garbage collections? it's in. Processes that got scheduled for too long? it's in. Sockets that have their buffers full? It's in. Mailbox sizes, allocation of memory per type and process, layers of stacktraces leading to a currently running call? It's all in. Want to gather metrics about arrival rate of messages in specific workers? It's IN!

It's all out of the box. It's all available anywhere, and it's all usable in production, and can all be done safely (as long as you use a lib that forbids insane cases, like tracing the tracing system itself -- and these libs exist)
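
Under the hood, those safe libs wrap the plain erlang:trace BIFs. A toy sketch (module name is mine, for illustration) that asks the runtime to report every call to lists:seq/2 made by one specific process, then counts the reports:
code:
```erlang
-module(trace_demo).
-export([count_calls/0]).

%% Trace one process's calls to lists:seq/2; the runtime sends the
%% tracer (us, by default) one {trace, ...} message per call.
count_calls() ->
    Self = self(),
    Tracee = spawn(fun() ->
                       receive go -> ok end,
                       lists:seq(1, 3),
                       lists:seq(1, 5),
                       Self ! done
                   end),
    erlang:trace(Tracee, true, [call]),              %% trace this pid's calls
    erlang:trace_pattern({lists, seq, 2}, true, []), %% ...to this function
    Tracee ! go,
    receive done -> ok end,
    count_msgs(Tracee, 0).

count_msgs(Pid, N) ->
    receive
        {trace, Pid, call, {lists, seq, _Args}} -> count_msgs(Pid, N + 1)
    after 100 ->
        N
    end.
```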

So when I look at it all, most languages out there, they provide you the hygiene and the bubble. But all fault tolerance is handled by architecture, or through field surgery with a bonesaw to get rid of gangrene. Erlang by contrast gives you an actual immune system on top of everything else, a way to design, run, and introspect systems that are live.

I hope this is specific enough.

MononcQc fucked around with this message at 16:26 on Oct 19, 2015

Luigi Thirty
Apr 30, 2006

Emergency confection port.

but where does nsa_spy() come into the Erlang-running phone switch

MononcQc
May 29, 2007

Luigi Thirty posted:

but where does nsa_spy() come into the Erlang-running phone switch

you can probably just trace whatever. Funnily enough, you can flag individual processes (from within) as non-introspectable by operators. Call process_flag(sensitive, true) and the process will stop showing its data in traces and introspection commands. That lets you make sure that looking at the state of a running process cannot be done without programmer support, either willfully or accidentally (or just go dumping the OS' memory, that will bypass it too). This helps enforce privacy laws at the operator level for phone switches in the wild, and there are regulations about that.
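
A small sketch of that flag in action (module name is mine; the hiding behavior is as documented for the sensitive flag):
code:
```erlang
-module(sensitive_demo).
-export([peek/0]).

%% A process flags itself sensitive; introspection such as
%% process_info/2 and tracing should then hide its messages,
%% dictionary, and state from operators.
peek() ->
    Pid = spawn(fun() ->
                    process_flag(sensitive, true),
                    receive stop -> ok end
                end),
    Pid ! {secret, "ballot"},  %% sits in its mailbox...
    timer:sleep(50),
    %% ...but per the docs should not be visible from outside:
    {messages, Visible} = process_info(Pid, messages),
    Pid ! stop,
    Visible.
```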

Most poo poo people write now runs in the cloud and I'm p. sure NSA will just ask AWS to access your poo poo and you won't know about it anyway.

MononcQc fucked around with this message at 16:38 on Oct 19, 2015

jony neuemonic
Nov 13, 2009

MononcQc posted:

I hope this is specific enough.

i really wish i had a project worth using erlang on.

Opulent Ceremony
Feb 22, 2012
We just merged the master branch into dev after someone made a needed quick fix on master to be quickly pushed to production. This merge stomped on a bunch of new work in a file that master hasn't seen changes to, other than having dev merged into it. Why did this happen and what are we doing wrong with git?

pepito sanchez
Apr 3, 2004
I'm not mexican

Symbolic Butt posted:

there's this weird philosophy in oop concerning code reuse, and it's not just with inheritance

or should I say it's the general notion of "planning ahead" that leads to some really awkward code

i think this hits the nail on the head. from being super babby i would try and plan abstract classes ahead (ala car extends vehicle, of course it will!) and it's a bad habit. it's only been in the past year or so that i notice it's just something that comes out of your code naturally when you see it, like a utility class many classes use whether implementing an interface or not (though still usually implementing an interface), but you can't see the code reuse ahead of time -- i think that's good inheritance and that's where it's usually seen.

but being the terrible programmer i am i still recently did this



:doh:

lord of the files
Sep 4, 2012

Opulent Ceremony posted:

We just merged the master branch into dev after someone made a needed quick fix on master to be quickly pushed to production. This merge stomped on a bunch of new work in a file that master hasn't seen changes to, other than having dev merged into it. Why did this happen and what are we doing wrong with git?

you're not doing anything wrong, you just happened to have two developers working on the same file, it happens. when you have a conflict, I would recommend running "git mergetool" (or "git mergetool -y" for many files at once) and resolving the conflicts by hand. the size of the codebase and how many developers you have will determine how often you run into a conflict. more developers + smaller codebase = more conflicts.

Valeyard
Mar 30, 2012


Grimey Drawer
Words cannot describe how laughably bad this code base is.

As someone in here told me, i will at least learn how to work things out based solely on the source with no documentation. It's not even just the code that's bad though, all the processes and workflows are horrible. Especially deployment

Jerry Bindle
May 16, 2003

pepito sanchez posted:

i think this hits the nail on the head. from being super babby i would try and plan abstract classes ahead (ala car extends vehicle, of course it will!) and it's a bad habit. it's only been in the past year or so that i notice it's just something that comes out of your code naturally when you see it, like a utility class many classes use whether implementing an interface or not (though still usually implementing an interface), but you can't see the code reuse ahead of time -- i think that's good inheritance and that's where it's usually seen.

but being the terrible programmer i am i still recently did this



:doh:

i've done that before too -- so have other people. the important thing is that you realized the error and you didn't stand on top of it like an obstinate goat thinking you're the premier oop thought leader

VikingofRock
Aug 24, 2008




Opulent Ceremony posted:

We just merged the master branch into dev after someone made a needed quick fix on master to be quickly pushed to production. This merge stomped on a bunch of new work in a file that master hasn't seen changes to, other than having dev merged into it. Why did this happen and what are we doing wrong with git?

What flags did you run merge with? And did it say that there was a conflict, or did it just wipe out the changes without saying anything?

Opulent Ceremony
Feb 22, 2012

VikingofRock posted:

What flags did you run merge with? And did it say that there was a conflict, or did it just wipe out the changes without saying anything?

To be more specific, I'm using git from within TFS in Visual Studio, and it tries to automate away a lot of the git processes for you. As an example, their UI for merging does not appear to offer any options with it, you simply select which branch merges into which other branch. Usually it will do an automated merge commit for you, and I think during that process the file in dev was reverted to the older and less-good master one. But just that one.

The MUMPSorceress
Jan 6, 2012


^SHTPSTS

Gary’s Answer

This is incredible. Thank you for this effort post. At the risk of sounding dumb, does erlang execute in some sort of hosting runtime environment that makes all of this supervision and message passing possible? I would assume that you couldn't bolt this sort of thing onto a language like C.

To get that clean taste out of your mouths, here's your MUMPS lesson of the day. You are about to learn about one of the dumbest MUMPS idioms (yes, MUMPS has idioms, like the $O loops I talked about last time).

This idiom is called "i 1". As you recall, "i" is the abbreviation for IF in the MUMPS language. It's used, unsurprisingly, to conditionally execute code based on boolean expressions. However, it doesn't work quite how you'd expect. Now, I have no idea if MUMPS was designed on a platform that had a strangely-behaving branch-if-equals operation at the assembly level (or they were working around a limitation like only having a jump-if-equals or something), but if-else blocks behave...unconventionally in MUMPS.

Remember how MUMPS doesn't have an order of operations and instead just evaluates all expressions left-to-right unless there are parentheses? If-else chains are evaluated top-to-bottom in a similar way. "Wait, isn't that how it always works?" you're asking. Well, yeah, except that boolean expressions are evaluated using the $T global variable.

$T is a variable that holds the result of the most recent truth evaluation. So, for example, if I write the code "i 1=1" $T will contain "1".

Now, "i" simply evaluates the expression to its right then executes the subsequent code if $T is 1. But remember, $T is global. Most of you have already guessed the consequence of this. Tell me, what does the following code output?
code:

printcrap
   i 1=1 d
   . w "Taking  a crap",!
   . d makeitsloppy
   e d
   . w "it's wet and sloppy",!
   q
makeitsloppy
   i 1=0
   q

If you guessed that the output is "Taking a crap\nit's wet and sloppy", you're absolutely right.
You see, $T is a global within a process, so if you call out to a function or subroutine that does its own boolean evaluation before you reach the "e" (ELSE) in your if-else, you'll gently caress things up. "e" (in the absence of another "i" immediately after) just checks $T and executes the subsequent code if $T=0. yay! Now, at least at Epic, we've architected things so that each user has their own process on the database and background stuff is sandboxed to various background users, so you're never going to gently caress up poo poo outside of your own currently running code with this bug, but it's still stupid.

So, that's where the i 1 idiom comes in. This is the way you should always write if-else blocks in MUMPS:
code:

printcrap
   i 1=1 d i 1
   . w "Taking  a crap",!
   . d makeitsloppy
   e d
   . w "it's wet and sloppy",!
   q
makeitsloppy
   i 1=0
   q

Remember, an entire line of code is always executed as a distinct unit. In line 1 of printcrap, the "d" tells MUMPS to execute the nested code (periods can be thought of like braces around the code preceded with periods). The stuff prefixed with periods is actually executed on its own distinct stack level (which itself leads to fun horrors sometimes). So, when you reach the end of that .-block, you "return" from it after the "d" and MUMPS finishes executing the line. Since you couldn't be at the end of the line unless $T was 1 (otherwise you would have skipped to the else), you simply execute "i 1" to reset $T to 1. This ensures you don't fall into your else block.

It's considered good style to do this everywhere, even if calling into code that you know for a fact never changes $T. This is because someone could always change that code later and you don't want correctness of your program to depend on some other rear end in a top hat checking all the callers of his tag for $T issues when he makes a change.

This has been your MUMPS infection for the day.


Valeyard
Mar 30, 2012


Grimey Drawer

eschaton posted:

will you get to change the design or implementation of the code for which you're writing tests, to make it more testable?

hahaha of course you won't

loving lol, of course not

i mean it's not black box, but changing anything in this 300k line madness is going to be crazy enough
