TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"

Zombywuf posted:

http://www.boost.org/doc/libs/1_46_1/libs/iostreams/doc/guide/pipelines.html

I know you have this really disturbing relationship with Haskell, Janin, but the comment was about syntax and expressiveness, not data storage. Take a deep breath, relax, and consider that no-one was suggesting the replacement of Haskell with Bash.
ah, so the actual syntax for your example would be something like

code:
lolboost::pipeline_var tmp_b, tmp_c;
lolboost::pipeline pipeline_1(one).push_back(two).tee(&tmp_b).push_back(three).redirect_stdout(&tmp_c);
lolboost::pipeline pipeline_2(four).arg(&tmp_b).arg(&tmp_c);
pipeline_1.run();
pipeline_2.run();
Yeah, that's so much easier to read.


Zombywuf
Mar 29, 2008

Internaut! posted:

lmax seems like a solution to a problem no one has - retail traders don't care about latency and even if they did they're probably looking at ~50ms to the lmax server minimum, so sub-microsecond order processing times don't really matter

meanwhile institutional systems need to do a lot more than just take orders, an efficient desk is constantly bumping up against constraints for risk, capital allocation, trader/trade priority etc which introduces contention and latency in communicating with other systems, in the lmax jargon the business processor quickly becomes the bottleneck erasing the benefits of their disruptor architecture

I may be misunderstanding, but isn't the point of the disruptor to reduce overhead in the business processor? My understanding is that it works quite like packet mmap in Linux, i.e. the business processor uses the data straight out of the input buffer, and there can be multiple business processors using the same input buffer. One of the features of this is that it generates very little garbage needing collection.

The problem they were supposedly solving was reducing costs for the exchange; the low transaction processing time and reduced garbage collection give them some very nice capacity planning benefits.

Then again, maybe I'm just a ring buffer fanboi.
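
Roughly the shape of what I mean by "uses the data straight out of the input buffer", as a toy sketch. This is not the actual LMAX Disruptor API; the class and method names here (ToyRingBuffer, Event, publish, consume) are all made up, and it's simplified to a single producer and a single business processor.

Java code:
// toy single-producer / single-consumer ring buffer in the disruptor style
// (the real thing coordinates multiple consumers with per-consumer sequences);
// every slot is allocated once up front, so steady-state processing creates
// no garbage and the business logic reads each event in place
final class ToyRingBuffer {
    static final class Event { long orderId; double price; }    // mutable slot, reused forever

    private final Event[] slots;
    private final int mask;                                      // size must be a power of two
    private volatile long producerSeq = -1;                      // last published sequence
    private volatile long consumerSeq = -1;                      // last consumed sequence

    ToyRingBuffer(int sizePowerOfTwo) {
        slots = new Event[sizePowerOfTwo];
        mask = sizePowerOfTwo - 1;
        for (int i = 0; i < sizePowerOfTwo; i++) slots[i] = new Event();
    }

    void publish(long orderId, double price) {
        long next = producerSeq + 1;
        while (next - consumerSeq > slots.length) { }            // spin: buffer full, back-pressure
        Event e = slots[(int) (next & mask)];
        e.orderId = orderId;                                     // write into the reused slot
        e.price = price;
        producerSeq = next;                                      // publish
    }

    void consume(java.util.function.Consumer<Event> businessLogic) {
        long next = consumerSeq + 1;
        while (producerSeq < next) { }                           // spin: nothing published yet
        businessLogic.accept(slots[(int) (next & mask)]);        // use the data in place, no copy
        consumerSeq = next;
    }
}
The zero-allocation-per-message part is where the "very little garbage" claim comes from.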


Janin posted:

ah, so the actual syntax for your example would be something like

code:
lolboost::pipeline_var tmp_b, tmp_c;
lolboost::pipeline pipeline_1(one).push_back(two).tee(&tmp_b).push_back(three).redirect_stdout(&tmp_c);
lolboost::pipeline pipeline_2(four).arg(&tmp_b).arg(&tmp_c);
pipeline_1.run();
pipeline_2.run();
Yeah, that's so much easier to read.

I wouldn't criticise other people's ability to get by in multiple languages if I were you.

TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"

Zombywuf posted:

I wouldn't criticise other people's ability to get by in multiple languages if I were you.
Then please, by all means, post the actual working C++ code that does the equivalent of that haskell and/or bash.

Zombywuf
Mar 29, 2008

Janin posted:

Then please, by all means, post the actual working C++ code that does the equivalent of that haskell and/or bash.

C++ code:
try {
  auto b = two(one(a));
  auto d = four(b, three(b));
} catch (...) {
}

TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"

Zombywuf posted:

C++ code:
try {
  auto b = two(one(a));
  auto d = four(b, three(b));
} catch (...) {
}
This is completely different, good job being mutually incompetent in three languages I guess.

Emacs Headroom
Aug 2, 2003

Janin posted:

This is completely different, good job being mutually incompetent in three languages I guess.

well the spec was written in haskell and no non-spergs can read that

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
oh good another slapfight between zombywuf and janin. you two should really just meet up and get rid of all this tension by way of a vigorous sexual encounter

TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"

Ridgely_Fan posted:

well the spec was written in haskell and no non-spergs can read that
it's not hard to read, zombywuf is just dumb as gently caress, and keeps the goalposts mounted on a backpack for maximum mobility.

Muddy Terrain
Dec 23, 2004

by Y Kant Ozma Post

Hammerite posted:

oh good another slapfight between zombywuf and janin. you two should really just meet up and get rid of all this tension by way of a vigorous sexual encounter

Janin is asexual and zombywuf is obsessed with defeating feminism, so a match made in heaven.

Muddy Terrain fucked around with this message at 17:26 on May 30, 2012

Zombywuf
Mar 29, 2008

Janin posted:

it's not hard to read, zombywuf is just dumb as gently caress, and keeps the goalposts mounted on a backpack for maximum mobility.

That's the flexibility generic programming gives you.

Hammerite posted:

oh good another slapfight between zombywuf and janin. you two should really just meet up and get rid of all this tension by way of a vigorous sexual encounter

You can't have sex in Haskell, I doubt Janin would be able to understand it.

tef
May 30, 2004

-> some l-system crap ->
my browser ate my post so I am too lazy to expand

Internaut! posted:

lmax seems like a solution to a problem no one has - retail traders don't care about latency and even if they did they're probably looking at ~50ms to the lmax server minimum, so sub-microsecond order processing times don't really matter

meanwhile institutional systems need to do a lot more than just take orders, an efficient desk is constantly bumping up against constraints for risk, capital allocation, trader/trade priority etc which introduces contention and latency in communicating with other systems, in the lmax jargon the business processor quickly becomes the bottleneck erasing the benefits of their disruptor architecture

finally the real hft guys can already process far faster than this, so at least in my industry I don't really see the point of lmax but I'm sure there's some emerging niche (retail prop hft?) where it could be useful

I enjoyed the talk as a heretical view on queues and other concurrency primitives, notably the anecdote about queues being either full or empty. Some erlang anecdotes seem to back this up.

I like bounded queues/buffers because they have a self-synchronizing property - i.e. you spin waiting to push things into queues, and this propagates the delay out of the network, rather than hoping the intermediate queue won't explode.
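
e.g. a plain bounded queue already gives you that: the producer blocks (ok, parks rather than spins, but same back-pressure idea) as soon as the consumer falls behind, so the delay propagates upstream instead of the queue growing without bound. quick sketch using java.util.concurrent, nothing lmax-specific - ArrayBlockingQueue is the real class, the BackPressureDemo wrapper and the numbers are mine:

Java code:
import java.util.concurrent.ArrayBlockingQueue;

// bounded-queue back-pressure demo: put() blocks once 16 items are outstanding,
// so a slow consumer throttles the producer instead of the queue exploding
public class BackPressureDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int item = queue.take();      // blocks when empty
                    Thread.sleep(10);             // pretend the downstream work is slow
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException stop) { }
        });
        consumer.start();

        for (int i = 0; i < 1000; i++) {
            queue.put(i);                         // blocks when full - this is the back-pressure
        }
        consumer.interrupt();                     // done producing, let the demo exit
    }
}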

Mostly I just have an allergy for message brokers and a fetish for ring buffers.

quote:

we use a number of posix ipc mechanisms including semaphores, named pipes and queues, which was an extremely efficient way to implement the system originally or so we're told

I imagine they still are :v:

(although I recall something nice about futexes)

quote:

unfortunately the pace of change in our industry indicates that a more flexible approach is preferable at this point so we're currently evaluating new ways to deliver concurrency with a lot more software flexibility, including new fangled (for us) approaches like pure functions, pi calculus, csp, higher level concurrency primitives like actors etc

I am curious what you make of rust: http://www.rust-lang.org/


quote:

eh no but once you've chosen a c/linux architecture, you're looking at either substantial compiler and/or kernel mods to recreate tandem-style dual processing and it's just not worth it for us, hot swappable commodity hardware and stateless clusters mean the uptime of individual processes isn't critical even if we couldn't reboot every 12 hours or so which we can

likely the reason we never chose tandem hardware itself (this was before my time) is that it would be like trying to make a pickup truck perform like a corvette - it's possible, but it's probably going to be easier and cheaper to just buy a corvette in the first place, especially if you don't need the towing capacity

:iiaca:

I was mostly kidding, but c/linux is a cheapass option compared to the lumbering hulks that are tandems, or so I'm told. I don't imagine the tradeoffs make sense for everyone.

I've always been a bit tandem curious. I wonder how much of this is faux nostalgia for a system i've never used.

quote:

yeah it seems simple but I'm seeing stm performance degrade in unexpected and delightful ways in some of my clojure tests

welcome to stm :q: enjoy your stay.

It's relatively immature compared to the more primitive mechanisms you've been using, but I still hear nice things about clojure.

quote:

at this point that could be equal parts clojure and equal parts ignorance on my part but ultimately there's no free lunch; if you implement transactions using the finest grained operating system primitives available, you will be able to tune, control and predict your transaction system's performance under various loads far better than you'll be able to by just wrapping code blocks in dosync or whatever, even if you have to pay for that control and predictability in developer blood

tradeoffs suck, eh?

thing is, stm is becoming a bit like gc - sooner or later it will be ubiquitous. there are still use cases for life without it, but, as you so rightly declare, there will be blood.

quote:

don't feel bad, read noted web guru martin fowler's review of lmax for profound insights from someone who has likely never been anywhere near a realtime system either

really martin in memory processes work faster than ones requiring an oo/relational->disk step do they, fascinating

i tend to avoid essays by object mentors and their ilk, for fear I may become one.

those shills who end up peddling their anecdotes from keynote to keynote.

tef
May 30, 2004

-> some l-system crap ->

Broken Dictionary posted:

Janin is asexual and zombywuf is obsessed with defeating feminism, so a match made in heaven.

it's almost as if they're having arguments with themselves, rather than talking to each other.

tef
May 30, 2004

-> some l-system crap ->

Internaut! posted:

likely the reason we never chose tandem hardware itself (this was before my time) is that it would be like trying to make a pickup truck perform like a corvette - it's possible, but it's probably going to be easier and cheaper to just buy a corvette in the first place, especially if you don't need the towing capacity

:iiaca:

I have absolutely no idea what this means, I'm allergic to jeremy clarkson

Sneaking Mission
Nov 11, 2008

brake horsepower

tef
May 30, 2004

-> some l-system crap ->
tef posts papers

http://research.google.com/pubs/pub38125.html

quote:

Many of the services that are critical to Google’s ad business have historically been backed by MySQL. We have recently migrated several of these services to F1, a new RDBMS developed at Google. F1 implements rich relational database features, including a strictly enforced schema, a powerful parallel SQL query engine, general transactions, change tracking and notification, and indexing, and is built on top of a highly distributed storage system that scales on standard hardware in Google data centers. The store is dynamically sharded, supports transactionally-consistent replication across data centers, and is able to handle data center outages without data loss.

google goes no nosql :q:

skeevy achievements
Feb 25, 2008

by merry exmarx

Zombywuf posted:

I may be misunderstanding, but isn't the point of the disruptor to reduce overhead in the business processor? My understanding is that it works quite like packet mmap in Linux, i.e. the business processor uses the data straight out of the input buffer, and there can be multiple business processors using the same input buffer. One of the features of this is that it generates very little garbage needing collection.

The problem they were supposedly solving was reducing costs for the exchange; the low transaction processing time and reduced garbage collection give them some very nice capacity planning benefits.

Then again, maybe I'm just a ring buffer fanboi.

I decided to listen to their presentation and it includes a pretty good overview of modern computer architecture, after which I was waiting for them to tell the crowd why they used Java for their project instead of C, which makes code and data structure overhead minimization/garbage minimization/cache line control/etc trivial, but no explanation was forthcoming. Regardless, I'm amazed you can write java code and accurately predict how well it will execute on hardware down to register and cache considerations (although I'd have liked to see some proof)

the talk reveals their main clients are actually some big euro hft firms, not retail, so the quest for minimum latency was warranted but I don't understand why their customers aren't using their own systems if they're that big - not my area of expertise

Zombywuf
Mar 29, 2008

Internaut! posted:

the talk reveals their main clients are actually some big euro hft firms, not retail, so the quest for minimum latency was warranted but I don't understand why their customers aren't using their own systems if they're that big - not my area of expertise

Maybe their own systems are crap? I generally assume internally developed software is crap until proven otherwise.

tef
May 30, 2004

-> some l-system crap ->

Internaut! posted:

I decided to listen to their presentation and it includes a pretty good overview of modern computer architecture, after which I was waiting for them to tell the crowd why they used Java for their project instead of C

a bad case of shaggaritis? I think they may have custom logic to integrate with/existing skills/staff. There are a bunch of tradeoffs they could be making but i'd be hesitant to name any single reason.

p.s off to the malt whisky society

tef
May 30, 2004

-> some l-system crap ->
oh and bonzoesc and I are giving a talk at the same mini conference next month :getin:

Nomnom Cookie
Aug 30, 2009



Internaut! posted:

I decided to listen to their presentation and it includes a pretty good overview of modern computer architecture, after which I was waiting for them to tell the crowd why they used Java for their project instead of C, which makes code and data structure overhead minimization/garbage minimization/cache line control/etc trivial, but no explanation was forthcoming. Regardless, I'm amazed you can write java code and accurately predict how well it will execute on hardware down to register and cache considerations (although I'd have liked to see some proof)

the talk reveals their main clients are actually some big euro hft firms, not retail, so the quest for minimum latency was warranted but I don't understand why their customers aren't using their own systems if they're that big - not my area of expertise

the spiffy optimizations modern JVMs do are mostly things like "oh i see that in 10000 invocations, the interface type at this call site has always resolved to the same concrete type so I will just inline that method" and now your invokeinterface bytecode has optimized to a (never taken) branch and a call. allocating POD types is also fast as hell. otoh this turns into an exercise in writing c in java so why bother aside from path dependence
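
for the curious, the shape of code that wins from that - my own toy example (Handler, FastHandler, MonomorphicDemo are made-up names, nothing from the lmax talk). hotspot profiles the invokeinterface site and, if only one concrete type ever shows up, typically inlines it behind a cheap type-check guard:

Java code:
// handler.handle(i) below is invokeinterface in the bytecode, but if profiling
// only ever sees FastHandler at that call site, the JIT typically inlines
// FastHandler.handle and leaves a guard that deoptimizes if another
// implementation turns up later
interface Handler { long handle(long x); }

final class FastHandler implements Handler {
    public long handle(long x) { return x * 31 + 7; }
}

public class MonomorphicDemo {
    static long drive(Handler handler, int iterations) {
        long acc = 0;
        for (int i = 0; i < iterations; i++) {
            acc += handler.handle(i);             // monomorphic call site in practice
        }
        return acc;
    }

    public static void main(String[] args) {
        // after enough warm-up invocations the hot loop is usually compiled with
        // the interface dispatch reduced to a type check plus the inlined multiply-add
        System.out.println(drive(new FastHandler(), 1_000_000));
    }
}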

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

tef posted:

p.s off to the malt whisky society

will you be around at the end of june?

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

tef posted:

oh and bonzoesc and I are giving a talk at the same mini conference next month :getin:

yessssssssss

Shaggar
Apr 26, 2006

Internaut! posted:

I decided to listen to their presentation and it includes a pretty good overview of modern computer architecture, after which I was waiting for them to tell the crowd why they used Java for their project instead of C, which makes code and data structure overhead minimization/garbage minimization/cache line control/etc trivial, but no explanation was forthcoming. Regardless, I'm amazed you can write java code and accurately predict how well it will execute on hardware down to register and cache considerations (although I'd have liked to see some proof)

the talk reveals their main clients are actually some big euro hft firms, not retail, so the quest for minimum latency was warranted but I don't understand why their customers aren't using their own systems if they're that big - not my area of expertise

watched part of this. cool smart guys using java ftw.

trex eaterofcadrs
Jun 17, 2005
My lack of understanding is only exceeded by my lack of concern.
you can most definitely tune java and understand what it does wrt register allocation, memory management and cache coherency... you just have to be really educated about the thing.
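
the cache-coherency end of that is mostly avoiding things like false sharing, e.g. padding a hot counter out onto its own cache line (the disruptor does a fancier version of this). toy sketch, the class names (PaddedCounters, PaddedLong, LhsPadding) are mine; padding via a superclass because hotspot typically keeps superclass fields ahead of subclass fields but is otherwise free to reorder:

Java code:
// false-sharing demo: two counters hammered by two different threads; the
// padding keeps each `value` on its own 64-byte cache line so the cores
// aren't invalidating each other's line on every write
public class PaddedCounters {
    static class LhsPadding { long p1, p2, p3, p4, p5, p6, p7; }   // 56 bytes of padding
    static class PaddedLong extends LhsPadding {
        volatile long value;                                        // the only hot field
        long q1, q2, q3, q4, q5, q6, q7;                            // padding on the other side too
    }

    public static void main(String[] args) throws InterruptedException {
        final PaddedLong a = new PaddedLong();
        final PaddedLong b = new PaddedLong();

        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000_000; i++) a.value++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000_000; i++) b.value++; });

        long start = System.nanoTime();
        t1.start(); t2.start(); t1.join(); t2.join();
        // without the padding, the two small objects can end up sharing a cache
        // line (they're allocated back to back), which is the thing being avoided
        System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
    }
}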

EMILY BLUNTS
Jan 1, 2005

Janin posted:

it's not hard to read, zombywuf is just dumb as gently caress, and keeps the goalposts mounted on a backpack for maximum mobility.

coming from the poster who stuck the goalposts onto a rocket

vapid cutlery
Apr 17, 2007

php:
<?
"it's george costanza" ?>

Internaut! posted:

lmax seems like a solution to a problem no one has - retail traders don't care about latency and even if they did they're probably looking at ~50ms to the lmax server minimum, so sub-microsecond order processing times don't really matter

meanwhile institutional systems need to do a lot more than just take orders, an efficient desk is constantly bumping up against constraints for risk, capital allocation, trader/trade priority etc which introduces contention and latency in communicating with other systems, in the lmax jargon the business processor quickly becomes the bottleneck erasing the benefits of their disruptor architecture

finally the real hft guys can already process far faster than this, so at least in my industry I don't really see the point of lmax but I'm sure there's some emerging niche (retail prop hft?) where it could be useful


we use a number of posix ipc mechanisms including semaphores, named pipes and queues, which was an extremely efficient way to implement the system originally or so we're told

unfortunately the pace of change in our industry indicates that a more flexible approach is preferable at this point so we're currently evaluating new ways to deliver concurrency with a lot more software flexibility, including new fangled (for us) approaches like pure functions, pi calculus, csp, higher level concurrency primitives like actors etc


eh no but once you've chosen a c/linux architecture, you're looking at either substantial compiler and/or kernel mods to recreate tandem-style dual processing and it's just not worth it for us, hot swappable commodity hardware and stateless clusters mean the uptime of individual processes isn't critical even if we couldn't reboot every 12 hours or so which we can

likely the reason we never chose tandem hardware itself (this was before my time) is that it would be like trying to make a pickup truck perform like a corvette - it's possible, but it's probably going to be easier and cheaper to just buy a corvette in the first place, especially if you don't need the towing capacity

:iiaca:


yeah it seems simple but I'm seeing stm performance degrade in unexpected and delightful ways in some of my clojure tests

at this point that could be equal parts clojure and equal parts ignorance on my part but ultimately there's no free lunch; if you implement transactions using the finest grained operating system primitives available, you will be able to tune, control and predict your transaction system's performance under various loads far better than you'll be able to by just wrapping code blocks in dosync or whatever, even if you have to pay for that control and predictability in developer blood


don't feel bad, read noted web guru martin fowler's review of lmax for profound insights from someone who has likely never been anywhere near a realtime system either

really martin in memory processes work faster than ones requiring an oo/relational->disk step do they, fascinating

nerd

vapid cutlery
Apr 17, 2007

php:
<?
"it's george costanza" ?>

Resplendent Spiral posted:

coming from the poster who stuck the goalposts onto a rocket

skeevy achievements
Feb 25, 2008

by merry exmarx

trex eaterofcadrs posted:

you can most definitely tune java and understand what it does wrt register allocation, memory management and cache coherency... you just have to be really educated about the thing.

are there books on the subject, I like the jvm a lot but there's not much I can do with it at work unless I can address some of the more egregious performance issues (which for most applications and domains are probably fine)

Opinion Haver
Apr 9, 2007

Zombywuf posted:

C++ code:
try {
  auto b = two(one(a));
  auto d = four(b, three(b));
} catch (...) {
}

lol the entire point of this was to make values that are wrapped in an Optional type play nice with functions that don't take Optional types
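
for the peanut gallery, the thing being argued about, translated into java's Optional since that's handy - not claiming this is what the original haskell looked like, and parse/twice/describe are made-up names (java.util.Optional is the real class):

Java code:
import java.util.Optional;

// the point: parse() may or may not produce a value, while twice() and describe()
// only know about plain ints/Strings - map() runs the plain functions inside the
// Optional, and an empty value just flows through without anyone unwrapping by hand
public class OptionalDemo {
    static Optional<Integer> parse(String s) {
        try { return Optional.of(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Optional.empty(); }
    }

    static int twice(int x) { return x * 2; }
    static String describe(int x) { return "got " + x; }

    public static void main(String[] args) {
        System.out.println(parse("21").map(OptionalDemo::twice).map(OptionalDemo::describe));   // Optional[got 42]
        System.out.println(parse("nope").map(OptionalDemo::twice).map(OptionalDemo::describe)); // Optional.empty
    }
}
the try/catch snippet above just applies the functions directly, which is exactly the unwrap-by-hand the wrapper type was there to avoid.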

tef
May 30, 2004

-> some l-system crap ->

BonzoESC posted:

yessssssssss

no prizes for guessing what my talk is about

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
is it about some nerd poo poo?

trex eaterofcadrs
Jun 17, 2005
My lack of understanding is only exceeded by my lack of concern.

Internaut! posted:

are there books on the subject, I like the jvm a lot but there's not much I can do with it at work unless I can address some of the more egregious performance issues (which for most applications and domains are probably fine)

i don't know of any "savage java optimization" books, mostly everything i know of (a small domain indeed) is from talks and papers :(

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

why do you advocate a terrorist language

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

that article posted:

Powerful ‘Flame’ cyberweapon tied to popular Angry Birds game

The most sophisticated and powerful cyberweapon uncovered to date was written in the LUA computer language, cyber security experts tell Fox News -- the same one used to make the incredibly popular Angry Birds game.

LUA is favored by game programmers because it’s easy to use and easy to embed. Flame is described as enormously powerful and large, containing some 250,000 lines of code, making it far larger than other such cyberweapons. Yet it was built with gamer code, said Cedric Leighton, a retired Air Force Intelligence officer who now consults in the national security arena.

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
you know who else used lua? that's right,

Gazpacho
Jun 18, 2004

by Fluffdaddy
Slippery Tilde
lua akbar

Zombywuf
Mar 29, 2008

yaoi prophet posted:

lol the entire point of this was to make values that are wrapped in an Optional type play nice with functions that don't take Optional types

I don't think you know as much about type theory as you think you do.

homercles
Feb 14, 2010

programming language thread

not the compsci undergraduate circlejerk thread

oh what am i talking about carry on

tef
May 30, 2004

-> some l-system crap ->
this is the programming sperg [[safespace]]


homercles
Feb 14, 2010

mavis beacon teaches type theory
