|
i do wait no i hate my life for other reasons
|
# ? Sep 6, 2013 07:28 |
|
unixbeard posted:how many people here use ocaml professionally and dont work at jane st I dunno if I wanna say anything about "professionally" in YOSPOS...
|
# ? Sep 6, 2013 07:29 |
|
mnd posted:I'm using it at work, and I don't work at Jane St. Does that count? It seems like such a niche language, I have used it a bit but the only place I know that does anything serious with it was them. What sort of stuff do you use it for?
|
# ? Sep 6, 2013 08:05 |
|
unixbeard posted:It seems like such a niche language, I have used it a bit but the only place I know that does anything serious with it was them. What sort of stuff do you use it for? I should mention that although I am using OCaml (for this particular thing), I'm pretty sure no one else here is. You could safely dump this in the "research project" bucket at the moment. Though, that would apply to this entire effort, not just the use of OCaml.
|
# ? Sep 6, 2013 08:20 |
|
i use ocaml professionally and i do not work at jane st
|
# ? Sep 6, 2013 08:46 |
|
how does it feel to be so fuckin kewl
|
# ? Sep 6, 2013 09:19 |
|
hackbunny posted:how does it feel to be so fuckin kewl
|
# ? Sep 6, 2013 09:32 |
|
car, cdr, heads and tails all sound so annoying when you've got pattern matching. I grew used to pattern matching and every time I'm in a language that tries to be somewhat functional and doesn't have it, I get very irritated. Fortunately schemers and lispers from all around the place develop macro systems for that, but yeah.
|
# ? Sep 6, 2013 12:51 |
|
serious q: what does ml have over haskell? I hear that it's faster, is that a direct consequence of its strict semantics? edit: what I mean by that is that laziness is great for actually programming but high performance haskell is basically about getting rid of the laziness whenever possible if you know a value is going to be computed eventually anyway
|
# ? Sep 6, 2013 14:02 |
|
I heard good things about their module systems and functor definitions (http://homepages.inf.ed.ac.uk/mfourman/teaching/mlCourse/notes/L11.html ) but couldn't say how Haskell compares to that specifically.
|
# ? Sep 6, 2013 14:12 |
|
gucci void main posted:yeh the rest of it was collected before it could even be finished CL has the opposite problem: they saw they had five or six featuresets across the participating vendors, and decided to include all of them. JewKiller 3000 posted:common lisp has everything and nobody uses any of it this is pretty much true. CL is such a big language it's like C++: every codebase uses its own subset of the thing
|
# ? Sep 6, 2013 14:13 |
|
MononcQc posted:car, cdr, heads and tails all sound so annoying when you've got pattern matching. I grew used to pattern matching and every time I'm in a language that tries to be somewhat functional and doesn't have it, I get very irritated. so back when i was writing CL, i didn't know what pattern-matching was. i had destructuring-bind and i was happy. in the ensuing years, lispers have written eighteen libraries to get ml-style pattern matching. eighteen. http://www.cliki.net/pattern%20matching
|
# ? Sep 6, 2013 14:15 |
|
Notorious b.s.d. posted:in the ensuing years, lispers have written eighteen libraries to get ml-style pattern matching. Olin Shivers of scheme shell fame wrote about the CL/Scheme habit of 80% solutions at length in the readme for his regex library quote:There's a problem with tool design in the free software and academic
|
# ? Sep 6, 2013 14:21 |
|
JewKiller 3000 posted:i am not qualified to argue about call/cc but oleg kiselyov certainly is: http://okmij.org/ftp/continuations/against-callcc.html this is a fun page it is good to know that certain factions of the scheme community hate call/cc as much as i do
|
# ? Sep 6, 2013 14:42 |
|
Otto Skorzeny posted:Olin Shivers of scheme shell fame wrote about the CL/Scheme habit of 80% solutions at length in the readme for his regex library

This kind of thing always makes me a bit uncomfortable with the code I write. The stuff I do for a living is developing on large server environments with certain strict behaviors to be had when dealing with overload, latency, throughput, or whatever. General 100% solutions tend to be much slower / less efficient / whatever property, because the general case will make compromises and assumptions about what is allowed or forbidden that do not necessarily apply to your particular use case, or might even be at the opposite end of the spectrum.

One example I have from the last couple of weeks is a logging library. One logging library I keep recommending all the time turned out to be too slow for our use cases, where overload situations would make IO become synchronous and lock up the node as a sequential bottleneck right in time-sensitive areas of the code. It's a case I had never encountered before with that lib (and hence why I kept recommending it), and it turned out I had to write a tiny replacement that catered to our use case by batching data received and making things asynchronous, raising throughput while making latency slightly worse for individual lines. It's a lovely logging library by all means of usability, but it eliminated all kinds of issues for us and definitely made things nicer and more predictable, without us needing to just log less data.

So I think I end up being the kind of person who releases 80% libraries here and there, and even though I try to document their narrow use cases, they're not useful for the general public a lot of the time. I'd like to release more general stuff (and I sometimes do), but it just wouldn't work the same in production for us because of what you can decide to bake in as an assumption of what you're allowed to do or not to do.

I guess I'm contributing to making things worse in the Erlang library ecosystem
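The batching idea is simple enough to sketch. This is a hypothetical Python toy, not the actual library from the post: callers enqueue already-formatted lines without blocking on IO, and a single writer thread drains the queue in batches, trading a little per-line latency for throughput.

```python
import queue
import threading

def drain_batch(q, max_batch):
    # Block for at least one line, then opportunistically grab more
    # without blocking, up to max_batch. This is the batching step:
    # many queued lines become a single IO call downstream.
    batch = [q.get()]
    while len(batch) < max_batch:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    return batch

class BatchLogger:
    # Toy sketch (made-up API): `sink` is any callable that takes a
    # list of lines, e.g. one write() to a file or socket per batch.
    def __init__(self, sink, max_batch=100):
        self.q = queue.Queue()
        threading.Thread(target=self._writer, args=(sink, max_batch),
                         daemon=True).start()

    def log(self, line):
        # Cheap and asynchronous from the caller's point of view
        self.q.put(line)

    def _writer(self, sink, max_batch):
        while True:
            sink(drain_batch(self.q, max_batch))
```

The point of the design is that callers never touch the IO path; the only shared state is the queue, and the writer amortizes IO cost over the whole batch.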
|
# ? Sep 6, 2013 14:52 |
|
yo was your talk archived?
|
# ? Sep 6, 2013 14:53 |
|
Posting Principle posted:yo was your talk archived? http://oreillynet.com/pub/e/2877 It's gonna be in the original format for a couple of weeks/months, which means you gotta subscribe and then watch it in the weird rear end GUI that does the live streaming and stuff. After that time period, they'll put it on their youtube channel, which hopefully won't be too terrible of a format.
|
# ? Sep 6, 2013 15:14 |
|
MononcQc posted:This kind of thing always makes me a bit uncomfortable with the code I write. Java logging libraries include asynchronous appenders. Code I didn't write is the best code
|
# ? Sep 6, 2013 16:08 |
|
guys i'm gonna learn Haskell because i have this problem i need to deal with where too many girls talk to me
|
# ? Sep 6, 2013 16:09 |
|
aw man almost two hockey avatar budz posts in a row
|
# ? Sep 6, 2013 16:10 |
|
AlsoD posted:serious q: what does ml have over haskell? I hear that it's faster, is that a direct consequence of its strict semantics? it depends. this is obviously true if you have some gigantic thunk that does arithmetic, because in general 'a + b' is smaller when it gets forced, for obvious reasons. but if i have some very small thunk like 'f 20' which balloons into some massive data structure when forced (because, say, f x = Node x (f (x - 1)) (f (x - 1))), you want to evaluate f 20 as late as possible so that you're not stuck with this gigantic thing in memory. i don't know much about ml but it always seemed to have a better module system to me; haskell's module system is just basic 'you can specify what you export, what you import, and the qualified name of the module if any' stuff.
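The small-thunk/big-structure point can be made concrete in a strict language by modeling thunks as closures. A rough Python sketch (with a base case added to the recursion, which the Haskell one-liner above omits):

```python
# A thunk is tiny until forced; forcing can balloon it into a huge
# structure. f(x) builds a binary tree of 2**(x+1) - 1 nodes, so
# thunk(f, 20) is just a closure while force-ing it would materialize
# about two million nodes.

def thunk(fn, *args):
    # represent an unevaluated call as a zero-argument closure
    return lambda: fn(*args)

def force(t):
    return t()

def f(x):
    # strict version of: f x = Node x (f (x - 1)) (f (x - 1)),
    # with a Leaf base case so it terminates
    if x == 0:
        return ("Leaf",)
    return ("Node", x, f(x - 1), f(x - 1))

def count(node):
    # count the nodes actually materialized in memory
    return 1 if node[0] == "Leaf" else 1 + count(node[2]) + count(node[3])

small = thunk(f, 3)   # cheap to hold: just a closure
tree = force(small)   # now it's a 15-node tree in memory
```

Which is exactly the trade-off: the strict 'a + b' case favors forcing early, the ballooning-tree case favors forcing as late as possible.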
|
# ? Sep 6, 2013 16:11 |
|
Nomnom Cookie posted:Java logging libraries include asynchronous appenders. Code I didn't write is the best code It had an asynchronous mode, but on some nodes this invariably led to overload. The problem was the number of calls without paging being built in there for some types of IO (i.e. disk IO had paging, but stdout didn't), which led to the async code progressively accumulating a larger backlog, at which point it toggled to synchronous mode automatically. Then we never really intended to drop any of our log messages -- making it switch from 'INFO' to 'WARN' would have fixed the issue, but we wanted to see if it was possible to just make it handle everything at once -- and we made it work. I'm trying to think of a way to port my stuff to the logging library everyone uses, but I'm not sure it's super usable for that.
|
# ? Sep 6, 2013 16:13 |
|
entity framework loving blows
|
# ? Sep 6, 2013 16:32 |
|
I can't believe people think orms make development faster.
|
# ? Sep 6, 2013 16:44 |
|
Jane Street seemed like the most annoying place to work: traders in the same space as engineers, all bunched together on one side of the building. On OCaml, the lack of type classes is annoying, and defining new infix operators is weird when you can't control precedence and fixity.
|
# ? Sep 6, 2013 16:46 |
|
MononcQc posted:Code had asynchronous mode, but on some nodes this invariably led to overload. The problem was the number of calls without paging being built in there for some types of IO (i.e. disk IO has paging, but stdout didn't) which led to the async code progressively accumulating a larger backlog, at which point it toggles to synchronous mode automatically. i disagree with the design of that logging library. java has this guy (Ceki Gülcü) who seems to be obsessed with logging. he wrote log4j, then slf4j, then logback. each time leveraging key learnings from past utilizations. have you considered using logback its pretty good
|
# ? Sep 6, 2013 16:51 |
|
Shaggar posted:I can't believe people think orms make development faster. it's not an orm but jdbctemplate is a straight win over writing your own jdbc code
|
# ? Sep 6, 2013 16:52 |
|
Nomnom Cookie posted:i disagree with the design of that logging library. java has this guy (Ceki Gülcü) who seems to be obsessed with logging. he wrote log4j, then slf4j, then logback. each time leveraging key learnings from past utilizations. have you considered using logback its pretty good they're all compatible with sane migration paths too
|
# ? Sep 6, 2013 16:57 |
|
Shaggar posted:I can't believe people think orms make development faster. an ORM can make development faster if and only if your application owns the schema. (using an ORM to query somebody else's database or view is usually not a great idea, for lots of reasons)
there are still reasons to consider an ORM even if it isn't a panacea. Notorious b.s.d. fucked around with this message at 17:10 on Sep 6, 2013 |
# ? Sep 6, 2013 17:07 |
i am terrible at sql and activerecord is OK. you might say it suits my needs.
|
|
# ? Sep 6, 2013 17:08 |
|
i use orms because then i only have to deal with being terrible in one language instead of two
|
# ? Sep 6, 2013 17:09 |
|
Nomnom Cookie posted:i disagree with the design of that logging library. java has this guy (Ceki Gülcü) who seems to be obsessed with logging. he wrote log4j, then slf4j, then logback. each time leveraging key learnings from past utilizations. have you considered using logback its pretty good

I'm not sure how portable to my case that architecture is. The inheritance of levels is nice, and the automation of level-checks before logging is good too. A lot of the principles are okay, but the problem I see there is that doing something like a "synchronized block" to provide thread safety when appending and formatting in that single sequential point is a lovely idea when you have >30,000 preemptively scheduled processes and multiple hundreds of log messages a second making it there.

From that point of view (and knowing we never turn off logging), it is better to format the log at the call site, and then batch them up to be sent to their final destination. This distributes work across all processes, and ensures that your sequential bottleneck is minimal, improving performance node-wide. You make a lot of barely-noticeable small pauses to format, compared to a few seconds-long pauses with the centralized approach, which suck when what you're doing is trying to set up connections or accepting requests or whatever.

Notorious b.s.d. posted:they're all compatible with sane migration paths too

The migration path for Erlang -> Java on production software isn't the sanest around.
|
# ? Sep 6, 2013 17:48 |
|
MononcQc posted:I'm not sure how portable to my case that architecture is. The inheritance of levels is nice, and the automation of level-checks before logging is good too. A lot of the principles are okay, but the problem I see there is that doing something like a "synchronized block" to provide thread safety when appending and formatting in that single sequential point is a lovely idea when you have >30,000 preemptively scheduled processes and multiple hundreds of log messages a second making it there. but aren't you synchronizing submits to the batch? how is that different?
|
# ? Sep 6, 2013 18:01 |
|
i like redbean's ORM for php because it just creates tables/columns if the table/column doesn't already exist. it seems like a great idea
|
# ? Sep 6, 2013 18:13 |
|
uG posted:i like redbeans ORM for php because it just creates tables/columns if the table/column doesnt already exists it seems like a great idea every system i've used that just kinda makes columns when they don't exist has sucked lots of rear end gucci void main posted:i am terrible at sql and activerecord is OK. you might say it suits my needs. i'm pretty great at sql and activerecord is fantastic. sql when i Care, orm when i don't
|
# ? Sep 6, 2013 18:16 |
|
a php orm that just makes poo poo up about table schema as it goes along?
|
# ? Sep 6, 2013 18:20 |
|
git clone trooper posted:a php orm that just makes poo poo up about table schema as it goes along? yeah i didn't want to say anything but really since it's php it's probably only set up for mysql which is the php of databases
|
# ? Sep 6, 2013 18:21 |
|
disclaimer: ive never written a line of php
|
# ? Sep 6, 2013 18:21 |
|
Shaggar posted:but aren't you synchronizing submits to the batch? how is that different?

Yes and no. There's a possible difference in the level of contention and the operations done in the synchronous block.

At the lowest level of the VM, process mailboxes have two queues: an inner one which is locked by the receiver process while executing, and an outer one which other processes will use, and thus not compete for a message queue lock with the executing process. When the inner queue is depleted, the receiver process will lock the outer queue and move the entire thing to the inner one. Rinse and repeat. This means that you do the append operation synchronously, but the consumption of the queue is amortized over that cost, and there is very little contention to deal with there.

At the library level, the process drains its own mailbox as fast as possible and puts it in a higher-level queue where results are appended in 'pages' of a determined size to be pushed as one unit to the final writer. The only locking you need to do is to access the tail pointer and plug your data there, to make it simple (full details in the source). The rest is done in isolation from the rest of the flow -- transferring data from the mailbox to a higher-level queue we can operate on to do filtering, segment data into appropriate page sizes, etc. There is no GC that's gonna hit that queue, no operation except appending to be done on it (and eventually transferring it), and since it's part of the core logic of the entire runtime system, you know it's gonna be one of the fastest things you can do even on multiple cores in that language.
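The two-queue scheme above is worth a sketch. This is a simplified Python toy modeled on the description, not the actual BEAM implementation: senders lock only the outer queue, the receiver consumes the inner queue with no locking at all, and takes the lock just once per refill, so lock traffic is amortized over whole batches of messages.

```python
import threading
from collections import deque

class Mailbox:
    # Toy model of the two-queue mailbox: `inner` is touched only by
    # the owning (receiver) process, `outer` is the shared append point.
    def __init__(self):
        self.inner = deque()
        self.outer = deque()
        self.lock = threading.Lock()

    def send(self, msg):
        # Called by other processes: brief lock on the outer queue only,
        # never contending with the receiver's normal consumption.
        with self.lock:
            self.outer.append(msg)

    def receive(self):
        # Called by the owning process: lock-free while inner has
        # messages; one lock per refill, not one per message.
        if not self.inner:
            with self.lock:
                self.inner, self.outer = self.outer, deque()
        return self.inner.popleft() if self.inner else None
```

The receiver pays for synchronization once per batch swap instead of once per message, which is the amortization being described.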
|
# ? Sep 6, 2013 18:31 |
|
MononcQc posted:I'm not sure how portable to my case that architecture is. The inheritance of levels is nice, and the automation of level-checks before logging is good too. A lot of the principles are okay, but the problem I see there is that doing something like a "synchronized block" to provide thread safety when appending and formatting in that single sequential point is a lovely idea when you have >30,000 preemptively scheduled processes and multiple hundreds of log messages a second making it there. is logback formatting synchronized? i don't think it is. anyway, what the async appender does is shove log events onto a queue for the writer thread to consume. pretty much what you're describing, except that it's not the default
|
# ? Sep 6, 2013 18:44 |