  • Locked thread
Jerry Bindle
May 16, 2003
the problem i've faced with modeling databases with an ERD occurs after the model is implemented into a schema. at that point you have two documents that need to be kept in sync, and the ERD eventually becomes stale. mysql workbench automates this process pretty well. it lets you push the ERD to schema, and back annotate the schema to the ERD. but ofc there isn't a mac version, and its mysql specific.


Asshole Masonanie
Oct 27, 2009

by vyelkin

lmao

DimpledChad
May 14, 2002
Rigging elections since '87.

Barnyard Protein posted:

the problem i've faced with modeling databases with an ERD occurs after the model is implemented into a schema. at that point you have two documents that need to be kept in sync, and the ERD eventually becomes stale. mysql workbench automates this process pretty well. it lets you push the ERD to schema, and back annotate the schema to the ERD. but ofc there isn't a mac version, and its mysql specific.

there is a mac version

Shaggar
Apr 26, 2006

triple sulk
Sep 17, 2014



brap
Aug 23, 2004

Grimey Drawer
i guess the mongodb people were thinking of the word "humongous" when they made the name but somehow didn't realize that the rest of the world would think "mongoloid...wait wtf?"

its like if they called it niggerdb and said "but we were thinking of the word niggardly, like our database is so niggardly with your resources!"

oh no blimp issue
Feb 23, 2011

fleshweasel posted:

i guess the mongodb people were thinking of the word "humongous" when they made the name but somehow didn't realize that the rest of the world would think "mongoloid...wait wtf?"

its like if they called it niggerdb and said "but we were thinking of the word niggardly, like our database is so niggardly with your resources!"

everyone i know calls it mongDB

Valeyard
Mar 30, 2012


Grimey Drawer

I'll initiate the repo

DimpledChad
May 14, 2002
Rigging elections since '87.
hmongDB for you asians out there

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
so i did really well on my phone interview with Redhat and have an on site scheduled this week. I started the interview process thinking it would just be a good experience, but now it's seeming like something I might actually be able to make happen.

I'll be super torn if I get it.

1) It's contract. In talking to everyone at RH, it sounds like, for the most part, they only hire contract and then hire full time out of their contract pool. No idea whether or not that is bullshit.

2) I have a feeling they will pay me enough to make the lack of benefits a wash over my current position. I'd go to redhat even if I was making less money because:


3) It's Redhat. Even if they dont pick me up again after the contract expires, I feel like I'll be opening up a lot of doors. Except:

4) I really like my current company. They're good people and I like my co-workers. But our codebase is awful and I feel like I'm already kinda stagnating.

bobbilljim
May 29, 2013

this christmas feels like the very first christmas to me
:shittydog::shittydog::shittydog:

MALE SHOEGAZE posted:

so i did really well on my phone interview with Redhat and have an on site scheduled this week. I started the interview process thinking it would just be a good experience, but now it's seeming like something I might actually be able to make happen.

I'll be super torn if I get it.

1) It's contract. In talking to everyone at RH, it sounds like, for the most part, they only hire contract and then hire full time out of their contract pool. No idea whether or not that is bullshit.

2) I have a feeling they will pay me enough to make the lack of benefits a wash over my current position. I'd go to redhat even if I was making less money because:


3) It's Redhat. Even if they dont pick me up again after the contract expires, I feel like I'll be opening up a lot of doors. Except:

4) I really like my current company. They're good people and I like my co-workers. But our codebase is awful and I feel like I'm already kinda stagnating.

you don't owe your current company anything. do what works for you man

qntm
Jun 17, 2009
leaving a nice collection of coworkers behind is the thing I find hardest

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
yeah the problem isn't the company, it's that i like my coworkers and if i leave they're gonna be hosed because i have a ton of product knowledge and there is no one who can replace me.

like i know if i have an offer from RH my company is gonna offer me a big raise to compete, because the software I maintain makes way more in a month than I make in a year, and i'm the only one who can keep it running currently. but that might not be enough to keep me on because RH.

bobbilljim
May 29, 2013

this christmas feels like the very first christmas to me
:shittydog::shittydog::shittydog:

MALE SHOEGAZE posted:

yeah the problem isn't the company, it's that i like my coworkers and if i leave they're gonna be hosed because i have a ton of product knowledge and there is no one who can replace me.

like i know if i have an offer from RH my company is gonna offer me a big raise to compete, because the software I maintain makes way more in a month than I make in a year, and i'm the only one who can keep it running currently. but that might not be enough to keep me on because RH.

these are not necessarily your fault, and your coworkers probably wont be any more hosed than they already are :shrug:

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

MALE SHOEGAZE posted:

yeah the problem isn't the company, it's that i like my coworkers and if i leave they're gonna be hosed because i have a ton of product knowledge and there is no one who can replace me.

like i know if i have an offer from RH my company is gonna offer me a big raise to compete, because the software I maintain makes way more in a month than I make in a year, and i'm the only one who can keep it running currently. but that might not be enough to keep me on because RH.

they'll figure something out

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
tbh the coworker thing is not that important. two out of four already know because i told them. just a dead fly in the complex web that is my decision making process

Asshole Masonanie
Oct 27, 2009

by vyelkin
I just had to do kind of the same thing, leave a company I liked for a better career and it's rough but it's better in the long view.

Dessert Rose
May 17, 2004

awoken in control of a lucid deep dream...
i have a site on squarespace right now that is basically an infrequently updated blog. they are charging me a somewhat reasonable rate to host it and run their software.

which is neat but i remember someone mentioning just using wordpress and rendering it to static pages (because lol @ wordpress' security track record)

a bit of searching leads me to some really crappy looking plugins and a bunch of people snorting about "lol why would you ever want to do that, nothing on the web is static except aol home pages"

is there something i'm missing?

CPColin
Sep 9, 2003

Big ol' smile.
I updated our coding standards document to say that, if you have a value that's used only once and the name of the constant you would make out of it doesn't impart any new meaning to the value, it's fine to leave it as a "magic number."

It blew people's minds.
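As a sketch of the kind of case that guideline covers (this example is invented, not from the post, and `midpoint` is a made-up name):

```haskell
-- The 2 here is used exactly once, and a constant like `divisor = 2`
-- would not impart any meaning beyond the value itself, so per the
-- guideline it stays inline as a "magic number".
midpoint :: Double -> Double -> Double
midpoint a b = (a + b) / 2

-- midpoint 1 3  ==>  2.0
```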

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

MALE SHOEGAZE posted:

yeah the problem isn't the company, it's that i like my coworkers and if i leave they're gonna be hosed because i have a ton of product knowledge and there is no one who can replace me.

if you're contracted rather than employed by Red Hat you could still do work for your old company.

come up with a consulting rate for times they need your help or something, and let them know you'll be available up to 5 or 10 hours/week or something along those lines, and could use that time to train others on the system not just fix bugs. make that rate comparable to the raise they'd offer, plus a bunch because they won't have to withhold taxes or pay benefits (and you will).

and if the work for RedHat is true contract work, they can have no say in whether you do this. they might even appreciate it, because it makes you less likely to wind up reclassified as an employee for whom they should've been withholding and paying taxes and the like.

(just don't do any work for your old employer at RH or using their stuff, or vice versa.)

wolffenstein
Aug 2, 2002
 
Pork Pro
p sure he's gonna be working 60+ hours if rh contracts him. who says linux is exempt from tech industry average workweek

Valeyard
Mar 30, 2012


Grimey Drawer

MALE SHOEGAZE posted:

yeah the problem isn't the company, it's that i like my coworkers and if i leave they're gonna be hosed because i have a ton of product knowledge and there is no one who can replace me.

like i know if i have an offer from RH my company is gonna offer me a big raise to compete, because the software I maintain makes way more in a month than I make in a year, and i'm the only one who can keep it running currently. but that might not be enough to keep me on because RH.

Congrats, it sounds like you are likely gonna take the job if they offer it. Either way it's nice

Valeyard
Mar 30, 2012


Grimey Drawer
so in haskell I am used to the standard map definition being something like

code:
map                     :: (a->b) -> [a] -> [b]
map f  []               =  []
map f (x:xs)            =  f x : map f xs
but can someone explain what is going on here, in an apparent alternate definition

code:
1. map1 f lst =
2.  let
3.    maph f (x:xs) lst_ 
4.      | length xs==0 = lst_++[f x]
5.      | otherwise = maph f xs (lst_++[f x]) 
6.  in
7.    maph f lst []
first of all, are "lst_" and "ls_++" just horrible looking names or are the underscores/plus signs actually important?

in line 3, is it saying that lst_ is the output?

im obviously missing some key fundamental because I am not understanding how this works, line 7 looks like its telling it just to output an empty list constantly

MononcQc
May 29, 2007

Valeyard posted:

so in haskell I am used to the standard map definition being something like

code:
map                     :: (a->b) -> [a] -> [b]
map f  []               =  []
map f (x:xs)            =  f x : map f xs
but can someone explain what is going on here, in an apparent alternate definition

code:
1. map1 f lst =
2.  let
3.    maph f (x:xs) lst_ 
4.      | length xs==0 = lst_++[f x]
5.      | otherwise = maph f xs (lst_++[f x]) 
6.  in
7.    maph f lst []
first of all, are "lst_" and "ls_++" just horrible looking names or are the underscores/plus signs actually important?

in line 3, is it saying that lst_ is the output?

im obviously missing some key fundamental because I am not understanding how this works, line 7 looks like its telling it just to output an empty list constantly
So I'm not a Haskeller (I just read it for all the cool datastructure papers), but ++ is the appending operator. So x:xs is a list with x at the head of the list xs. a ++ b means the two lists a and b are appended together.

In line 3 what you see is the definition of the 'internal' maph function which expects 3 arguments: a function, a list of elements, and an accumulator (lst_) that it passes to itself all the time. The accumulator is initialized as [] on line 7.

What you see is a map that uses an accumulator as a third argument, so that the implementation is tail-recursive/iterative rather than just body recursive (the append operation happens before recursing, whereas in the traditional map, the recursion happens before the cons'ing with ':'). I know of sometimes decent reasons to do it in a strict language, but I'm not sure of whether these hold correctly in a lazy language like Haskell, so I'll refrain from speculating on that.

E: re lst_ being a terrible name, I'm more surprised it isn't called lst' or just acc or something.

MononcQc fucked around with this message at 16:26 on Apr 28, 2015
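To make the accumulator idea concrete, here is a hedged sketch (the names `mapAcc` and `go` are invented, not from the thread) of the prepend-then-reverse version MononcQc describes, which builds the accumulator with cheap conses and reverses once at the end instead of repeatedly appending with (++) like map1 does:

```haskell
-- Tail-recursive map: the accumulator `acc` plays the role of lst_,
-- but elements are prepended (O(1)) and the result is reversed once,
-- rather than appended (O(n)) on every step. Also handles [].
mapAcc :: (a -> b) -> [a] -> [b]
mapAcc f = go []
  where
    go acc []     = reverse acc
    go acc (x:xs) = go (f x : acc) xs

-- mapAcc (+1) [1,2,3,4]  ==>  [2,3,4,5]
```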

FamDav
Mar 29, 2008
lst_ is a variable, ++ takes two lists and concatenates them together.

Zaxxon
Feb 14, 2004

Wir Tanzen Mekanik

Valeyard posted:

code:
1. map1 f lst =
2.  let
3.    maph f (x:xs) lst_ 
4.      | length xs==0 = lst_++[f x]
5.      | otherwise = maph f xs (lst_++[f x]) 
6.  in
7.    maph f lst []

it's basically doing map as reduce, passing an empty list into the accumulator.

Zaxxon fucked around with this message at 16:31 on Apr 28, 2015
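For what it's worth, that "map as reduce" shape can be written directly as a fold (`mapFold` is a made-up name, and it keeps map1's quadratic appends; this is just to show the equivalence):

```haskell
-- Same structure as map1: thread an accumulator left to right,
-- appending f x at each step, starting from [].
mapFold :: (a -> b) -> [a] -> [b]
mapFold f = foldl (\acc x -> acc ++ [f x]) []

-- mapFold (+1) [1,2,3,4]  ==>  [2,3,4,5]
```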

gonadic io
Feb 16, 2011

>>=

Valeyard posted:

code:
1. map1 f lst =
2.  let
3.    maph f (x:xs) lst_ 
4.      | length xs==0 = lst_++[f x]
5.      | otherwise = maph f xs (lst_++[f x]) 
6.  in
7.    maph f lst []

the others have explained this just fine, MononcQc is quite right in that the accumulator, lst_, is usually called "acc" or something.

What's also worth noting is that this definition is written in a pretty horrible way. I was going to pick out exactly the problems with it, but is it a coursework or something, that you have to improve this form of it? Without changing the structure I mean, but this function can be optimised a LOT.

gonadic io
Feb 16, 2011

>>=

MononcQc posted:

I know of sometimes decent reasons to do it in a strict language, but I'm not sure of whether these hold correctly in a lazy language like Haskell, so I'll refrain from speculating on that.

tail recursion is pretty important in functional languages (at least, those with TCO) because it lets the function be compiled internally as a loop so you don't get the costs of a function call each iteration that recursion normally gives you.

Valeyard, see if you can define 'reverse' in a form that uses tail call recursion too i.e. like:

code:
reverse :: [a] -> [a]
reverse xs = reverseh xs []
    where
    reverseh :: [a] -> [a] -> [a]
    reverseh []     acc = ...
    reverseh (x:xs) acc = reverseh (...)
reverseh is called tail recursive because it calls itself with different arguments and does nothing with the result except return it. reverse is normally defined as "reverse (x:xs) = reverse xs ++ [x]" but see how it calls append (++) AFTER it recurses and so isn't tail recursive?

gonadic io fucked around with this message at 17:02 on Apr 28, 2015

Valeyard
Mar 30, 2012


Grimey Drawer

MononcQc posted:

a ++ b means the two lists a and b are appended together.

FamDav posted:

lst_ is a variable, ++ takes two lists and concatenates them together.

see that is fine, but ls_++[f x] with no spaces is really weird, so this is just the same as ls_ ++ [f x] with spaces, right?

MononcQc posted:

In line 3 what you see is the definition of the 'internal' maph function which expects 3 arguments: a function, a list of elements, and an accumulator (lst_) that it passes to itself all the time. The accumulator is initialized as [] on line 7.

What you see is a map that uses an accumulator as a third argument, so that the implementation is tail-recursive/iterative rather than just body recursive (the append operation happens before recursing, whereas in the traditional map, the recursion happens before the cons'ing with ':'). I know of sometimes decent reasons to do it in a strict language, but I'm not sure of whether these hold correctly in a lazy language like Haskell, so I'll refrain from speculating on that.

E: re lst_ being a terrible name, I'm more surprised it isn't called lst' or just acc or something.

gonadic io posted:

the others have explained this just fine, MononcQc is quite right in that the accumulator, lst_, is usually called "acc" or something.

ok the fact that that is just an accumulator does clear up a lot, and I can see what is going on now. its doing a check to see if the length of the tail is 0, if so then just add the function-applied-to-the-head into the accumulator, otherwise call the internal function on the tail and still add the function-applied-to-the-head into the accumulator

gonadic io posted:

What's also worth noting is that this definition is written in a pretty horrible way. I was going to pick out exactly the problems with it, but is it a coursework or something, that you have to improve this form of it? Without changing the structure I mean, but this function can be optimised a LOT.

no coursework, it's just revision for the exam. I dont think you were around before when i mentioned the exam (80% of the course) is a multiple choice test. This question presented 4 definitions roughly similar to the horrible one I posted and asked which was the right one

thank you to all

gonadic io
Feb 16, 2011

>>=

Valeyard posted:

see that is fine, but ls_++[f x] with no spaces is really weird, so this is just the same as ls_ ++ [f x] with spaces, right?
yes


Valeyard posted:

no coursework, it's just revision for the exam. I dont think you were around before when i mentioned the exam (80% of the course) is a multiple choice test. This question presented 4 definitions roughly similar to the horrible one I posted and asked which was the right one
URGH it's been """"obfusticated"""", that explains it

FamDav
Mar 29, 2008

gonadic io posted:

tail recursion is pretty important in functional languages (at least, those with TCO) because it lets the function be compiled internally as a loop so you don't get the costs of a function call each iteration that recursion normally gives you.

Valeyard, see if you can define 'reverse' in a form that uses tail call recursion too i.e. like:

code:
reverse :: [a] -> [a]
reverse xs = reverseh xs []
    where
    reverseh :: [a] -> [a] -> [a]
    reverseh []     acc = ...
    reverseh (x:xs) acc = reverseh (...)
reverseh is called tail recursive because it calls itself with different arguments and does nothing with the result except return it. reverse is normally defined as "reverse (x:xs) = reverse xs ++ [x]" but see how it calls append (++) AFTER it recurses and so isn't tail recursive?

everyone knows reverse = foldl (flip (:)) []
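Unrolling that one-liner, purely as illustration (`rev` here just to avoid clashing with Prelude's reverse): foldl threads the accumulator left to right and `flip (:)` prepends, so each element lands in front of everything seen before it, and it's tail recursive for free.

```haskell
rev :: [a] -> [a]
rev = foldl (flip (:)) []

-- rev [1,2,3] steps:
--   step []    1  =  1 : []     =  [1]
--   step [1]   2  =  2 : [1]    =  [2,1]
--   step [2,1] 3  =  3 : [2,1]  =  [3,2,1]
```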

Valeyard
Mar 30, 2012


Grimey Drawer

gonadic io posted:

yes

URGH it's been """"obfusticated"""", that explains it

yeah you think thats bad, heres Q1

I already know the answers to these, Q1 was a total guess between C and D since I couldnt actually tell what it was trying to parse, but the answer is C

Q2 I said the answer is A because since the patterns its matching against arent similar so the lack of a try isnt going to cause any issues with consumed input, the real answer is in fact A

gonadic io
Feb 16, 2011

>>=
It looks like there's some text missing in the definition of showParser, as there's an infix operator (<|>) without anything to the right of it.

Apart from that, I actually like that question though since it isn't stupidly obfuscated (lack of linebreaks not withstanding), asks you about sensible code and makes sure that you know how to use parsec.

Which year undergrads is it for?

The reason that that's the answer to q1 is because it's trying to parse a haskell type which starts with an uppercase letter then can have any number of upper or lower case letters as in the example given in the question. It's not D because if it were, that would allow lists and tuples in the middle of a name

MononcQc
May 29, 2007

gonadic io posted:

tail recursion is pretty important in functional languages (at least, those with TCO) because it lets the function be compiled internally as a loop so you don't get the costs of a function call each iteration that recursion normally gives you.

Right, I'm familiar with that. But to give a precise example, for the map function in a strict language (here Erlang because I'm familiar) defined in a body-recursive manner, I have:
code:
map(_, []) -> [];
map(F, [H|T]) -> [F(H) | map(F, T)].
For [1,2,3,4], this gives me an 'exploded' stack frame view a bit like:

cons(F(1), cons(F(2), cons(F(3), cons(F(4), [])))).

If I go for the tail recursive version:
code:
map(F, List) -> map(F, List, []).

map(_, [], Acc) -> lists:reverse(Acc);
map(F, [H|T], Acc) -> map(F, T, [F(H)|Acc]).

%% or alternatively
map(_, [], Acc) -> Acc;
map(F, [H|T], Acc) -> map(F, T, Acc++[F(H)]).
The latter form is not preferable because the list in `Acc' requires rewriting at each append operation, so it's better to prepend+reverse once.

In any case, rather than having a stack frame that is mostly N references to the same function and pending cons operations, the first tail-recursive version instead maintains: the original list in memory (cannot be GC'd yet), the accumulator (which is N conses), and a single reference to a function F. Then a third list has to be generated when the accumulator is reversed.

So whereas the standard case is to go for tail-recursive because you don't want a large set of stack frames to be maintained, the tail-recursive map function's reduced frame stack is mostly going to be taken back up by the accumulator you needed to build up in memory and then discard; the body recursive function might be more efficient, especially since you reduce the risk of needing to force a GC right away (or to ask the OS for more memory), and skip an entire traversal (for reversal).

This yields a case where body recursion can often micro-benchmark faster than tail recursion for functions such as map.

The question I have is whether that cost is similar in the case of a lazy language, where the cost of the '++' append operation might be amortized and done at once. In this case, you wouldn't need the cons+reverse approach, and therefore you wouldn't see a practical advantage to body recursion over tail recursion given just using 'append' could do it.

This is where a lack of experience with Haskell makes it hard for me to pronounce myself on the value (or lack thereof) of tail vs. body recursion in this case.

MononcQc fucked around with this message at 18:28 on Apr 28, 2015

brap
Aug 23, 2004

Grimey Drawer

Valeyard posted:

yeah think thats bad, heres Q1

I already know the answers to these, Q1 was a total guess between C and D since I couldnt actually tell what it was trying to parse, but the answer is C

Q2 I said the answer is A because since the patterns its matching against arent similar so the lack of a try isnt going to cause any issues with consumed input, the real answer is in fact A



wow is he completely loving the indentation on the code in order to up the challenge or something

gonadic io
Feb 16, 2011

>>=

MononcQc posted:

Right, I'm familiar with that. But to give a precise example, for the map function in a strict language (here Erlang because I'm familiar) defined in a body-recursive manner, I have:
code:
map(_, []) -> [];
map(F, [H|T]) -> [F(H) | map(F, T)].
For [1,2,3,4], this gives me an 'exploded' stack frame view a bit like:

cons(F(1), cons(F(2), cons(F(3), cons(F(4), [])))).

If I go for the tail recursive version:
code:
map(F, List) -> map(F, List, []).

map(_, [], Acc) -> lists:reverse(Acc);
map(F, [H|T], Acc) -> map(F, T, [F(H)|Acc]).

%% or alternatively
map(_, [], Acc) -> Acc;
map(F, [H|T], Acc) -> map(F, T, Acc++[F(H)]).
The latter form is not preferable because the list in `Acc' requires rewriting at each append operation, so it's better to prepend+reverse once.

In any case, rather than having a stack frame that is mostly N references to the same function and pending cons operations, the first tail-recursive version instead maintains: the original list in memory (cannot be GC'd yet), the accumulator (which is N conses), and a single reference to a function F. Then a third list has to be generated when the accumulator is reversed.

So whereas the standard case is to go for tail-recursive because you don't want a large set of stack frames to be maintained, the tail-recursive map function's reduced frame stack is mostly going to be taken back up by the accumulator you needed to build up in memory and then discard; the body recursive function might be more efficient, especially since you reduce the risk of needing to force a GC right away (or to ask the OS for more memory), and skip an entire traversal (for reversal).

This yields a case where body recursion can often micro-benchmark faster than tail recursion for functions such as map.

The question I have is whether that cost is similar in the case of a lazy language, where the cost of the '++' append operation might be amortized and done at once. In this case, you wouldn't need the cons+reverse approach, and therefore you wouldn't see a practical advantage to body recursion over tail recursion given just using 'append' could do it.

This is where a lack of experience with Haskell makes it hard for me to pronounce myself on the value (or lack thereof) of tail vs. body recursion in this case.

(++) is perhaps the best example for lazy evaluation making it hard to reason about code.
it's O(n) in its first argument and O(1) (amortised) in its second.

so if you have (xs ++ ys) ++ zs that's a fair bit slower than xs ++ (ys ++ zs) because in the former case xs will get traversed twice - once when evaluating ++ys and once when evaluating ++zs (assuming that the expression as a whole gets evaluated).

this is basically the difference between left folds and right folds. foldl is tail recursive whereas foldr is body recursive. hence reverse gets defined using foldl and map using foldr.

in fact in GHC cabal-install there was a bug causing large compilation times that folded (++) over a list from the left (like (xs ++ ys) ++ zs), which has O(n^2) behaviour over the length of all the lists, and the fix was to change it to a right-fold (like xs ++ (ys ++ zs)), which is O(n). that drastically reduced compilation times for the change of a single letter in the code.

e: i exaggerated a bit but the details are largely correct: https://github.com/nominolo/HTTP/commit/b9bd0a08fa09c6403f91422e3b23f08d339612eb

it's a reverse defined in terms of foldr (++), then it got changed to using the built-in foldl one.

gonadic io fucked around with this message at 20:02 on Apr 28, 2015
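A hedged sketch of that association difference (`leftConcat` and `rightConcat` are invented names; `rightConcat` is essentially Prelude's concat). Both produce the same list, but the left-associated fold retraverses the ever-growing prefix on every step, which is the O(n^2) behaviour described above:

```haskell
leftConcat, rightConcat :: [[a]] -> [a]
leftConcat  = foldl (++) []   -- ((([] ++ xs) ++ ys) ++ zs) ...  O(n^2)
rightConcat = foldr (++) []   -- xs ++ (ys ++ (zs ++ []))        O(n)
```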

VikingofRock
Aug 24, 2008




gonadic io posted:

(++) is perhaps the best example for lazy evaluation making it hard to reason about code.
it's O(n) in its first argument and O(1) (amortised) in its second.

so if you have (xs ++ ys) ++ zs that's a fair bit slower than xs ++ (ys ++ zs) because in the former case xs will get traversed twice - once when evaluating ++ys and once when evaluating ++zs (assuming that the expression as a whole gets evaluated).

this is basically the difference between left folds and right folds. foldl is tail recursive whereas foldr is body recursive. hence reverse gets defined using foldl and map using foldr.

in fact in GHC there was a bug causing large compilation times that folded (++) over a list from the left (like (xs ++ ys) ++ zs), which has O(n^2) behaviour over the length of all the lists, and the fix was to change it to a right-fold (like xs ++ (ys ++ zs)), which is O(n). that drastically reduced compilation times for the change of a single letter in the code.

This is pretty interesting. Thanks for the writeup! Also, I think your foldr link is broken--there was nothing there when I tried going to it.

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

Valeyard posted:

so in haskell I am used to the standard map definition being something like

code:
map                     :: (a->b) -> [a] -> [b]
map f  []               =  []
map f (x:xs)            =  f x : map f xs
but can someone explain what is going on here, in an apparent alternate definition

code:
1. map1 f lst =
2.  let
3.    maph f (x:xs) lst_ 
4.      | length xs==0 = lst_++[f x]
5.      | otherwise = maph f xs (lst_++[f x]) 
6.  in
7.    maph f lst []
first of all, are "lst_" and "ls_++" just horrible looking names or are the underscores/plus signs actually important?

in line 3, is it saying that lst_ is the output?

im obviously missing some key fundamental because I am not understanding how this works, line 7 looks like its telling it just to output an empty list constantly

i dont mean to be rude but are you sure you took a haskell class at the literal haskell place?

gonadic io
Feb 16, 2011

>>=

VikingofRock posted:

This is pretty interesting. Thanks for the writeup! Also, I think your foldr link is broken--there was nothing there when I tried going to it.

I fixed it, it works for foldr.com but not for https://www.foldr.com

also my mistake, the bug was in cabal-install rather than ghc itself


my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

gonadic io posted:

(++) is perhaps the best example for lazy evaluation making it hard to reason about code.
it's O(n) in its first argument and O(1) (amortised) in its second.

so if you have (xs ++ ys) ++ zs that's a fair bit slower than xs ++ (ys ++ zs) because in the former case xs will get traversed twice - once when evaluating ++ys and once when evaluating ++zs (assuming that the expresion as a whole gets evaluated).

this is basically the difference between left folds and right folds. foldl is tail recursive whereas foldr is body recursive. hence reverse gets defined using foldl and map using foldr.

in fact in GHC there was a bug causing large compilation times that folded (++) over a list from the left (like (xs ++ ys) ++ zs), which has O(n^2) behaviour over the length of all the lists, and the fix was to change it to a right-fold (like xs ++ (ys ++ zs)), which is O(n). that drastically reduced compilation times for the change of a single letter in the code.

This seems absolutely awful and makes me like Haskell less
