Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!
yeah I was gonna say I don't see any reason you couldn't use slf4j/logback and just write your own batching appender. formatting is done during appending so theoretically you could have an appender that just sends raw, unformatted events on thru to a batching process (at its outer queue) which internally handles formatting and actual writing of logs based on internal settings about what to do when getting full or w/e.

ex: in process A you do logger.info("its pizza time") and the underlying logger fires that event into the queue of process LOGGER with no formatting done. Then that process is responsible for handling its outer queue and making sure each event is formatted and spat out where it belongs. Even though the submit to the LOGGER process would be synchronized, if I understand u correctly that's what you're already doing. the heavy tasks that might result in pauses will be done by the LOGGER process which is what you want, plus you move formatting there as well, so your normal processes are faster.
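
rough sketch of the kind of appender I mean, in logback terms (class name, queue size, and the System.out sink are made up, and a real one would need actual batching/flush rules and a proper stop()):

code:
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// AppenderBase.doAppend() is synchronized, so append() stays cheap:
// it only enqueues the raw, unformatted event. A single worker thread
// does the formatting and the actual writing.
public class BatchingAppender extends AppenderBase<ILoggingEvent> {

    private final BlockingQueue<ILoggingEvent> queue = new ArrayBlockingQueue<>(10_000);
    private Thread worker;

    @Override
    public void start() {
        worker = new Thread(this::drain, "batching-appender");
        worker.setDaemon(true);
        worker.start();
        super.start();
    }

    @Override
    protected void append(ILoggingEvent event) {
        queue.offer(event); // drops the event if the queue is full
    }

    private void drain() {
        try {
            while (true) {
                ILoggingEvent event = queue.take();
                // formatting happens here, off the calling threads
                String line = event.getLevel() + " " + event.getLoggerName()
                        + " - " + event.getFormattedMessage();
                System.out.println(line); // stand-in for the real sink / batch flush
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}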

Nomnom Cookie
Aug 30, 2009



also because java doesn't use a billion processes you won't have to context switch to do logging

Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!
I love all the logging hijackers the slf4j guy wrote so you can consolidate 3rd party libs onto a single logging platform.

Opinion Haver
Apr 9, 2007

instead of logging have you considered just writing correct code???

MononcQc
May 29, 2007

Nomnom Cookie posted:

is logback formatting synchronized? i don't think it is. anyway what the async appender does is shove log events onto a queue for the writer thread to consume. pretty much what you're describing except that it's not the default

Yeah I was looking at the sequence diagram from the architecture docs, and saw the loop square and thought 'I guess this is the synchronous poo poo'. Reading this: http://logback.qos.ch/manual/appenders.html makes it look like there's filtering done as part of that call and no explicit formatting.

However, looking at specific appenders provided, such as OutputStreamAppender, reveals that there's a chain of calls, append() > subAppend() > writeOut() > this.encoder.doEncode(event), which in turn defers to whatever encoder you have defined.

At this point I'm not a java dev and I'm not used to reading it, but it looks like by default it does do all its formatting (layout?) and whatnot as part of the synchronized area of the code, although you would be free to reimplement that behaviour to not do it.
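
(for reference, the stock AsyncAppender from that quote is the off-the-shelf version of this: a config along these lines, with placeholder file name, pattern and queue size, wraps a normal FileAppender so the encoding should happen on the async worker thread instead of the calling one)

code:
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- callers only enqueue the raw event; the worker thread hands it
       to FILE, where the encoder does the formatting and writing -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>8192</queueSize>
    <appender-ref ref="FILE" />
  </appender>

  <root level="INFO">
    <appender-ref ref="ASYNC" />
  </root>
</configuration>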

Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!
When people talk about processes in erlang do they actually mean threads or do they really mean separate processes?

MononcQc
May 29, 2007

Shaggar posted:

yeah I was gonna say I don't see any reason you couldn't use slf4j/logback and just write your own batching appender. formatting is done during appending so theoretically you could have an appender that just sends raw, unformatted events on thru to a batching process (at its outer queue) which internally handles formatting and actual writing of logs based on internal settings about what to do when getting full or w/e.

ex: in process A you do logger.info("its pizza time") and the underlying logger fires that event into the queue of process LOGGER with no formatting done. Then that process is responsible for handling its outer queue and making sure each event is formatted and spat out where it belongs. Even though the submit to the LOGGER process would be synchronized, if I understand u correctly that's what you're already doing. the heavy tasks that might result in pauses will be done by the LOGGER process which is what you want, plus you move formatting there as well, so your normal processes are faster.

fyi this is pretty much what I ended up implementing, except not in Java and with formatting done at the call site instead, because I just can and it's better than overloading a single process with a lot of lovely string formatting in our case.

I'm trying to retrofit that into the standard logging library that does proper handling of all the filtering, log event level management, tracing support, etc., in a way that will be non-painful and optional for people who require it.

Shaggar posted:

When people talk about processes in erlang do they actually mean threads or do they really mean separate processes?

You get one native thread per core. Each thread runs a scheduler and a run queue. They do the equivalent of green threads, but with processes (nothing shared): each one has its own isolated memory and GC going on, and they're preemptively scheduled. There's load-balancing, work-stealing, process migration and whatnot done for you automatically to make sure you get a good distribution over all CPUs, but you can also tell the VM to respect CPU affinity as much as possible if you want.

MononcQc fucked around with this message at 19:23 on Sep 6, 2013

Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!

MononcQc posted:

Yeah I was looking at the sequence diagram from the architecture docs, and saw the loop square and thought 'I guess this is the synchronous poo poo'. Reading this: http://logback.qos.ch/manual/appenders.html makes it look like there's filtering done as part of that call and no explicit formatting.

However, looking at specific appenders provided, such as OutputStreamAppender, reveals that there's a chain of calls, append() > subAppend() > writeOut() > this.encoder.doEncode(event), which in turn defers to whatever encoder you have defined.

At this point I'm not a java dev and I'm not used to reading it, but it looks like by default it does do all its formatting (layout?) and whatnot as part of the synchronized area of the code, although you would be free to reimplement that behaviour to not do it.

90% of logback users are gonna use stuff as-is and not care about the implementation cause it won't affect them.
another 9% will decide they need to write their own appender because they have a special snowflake destination for the logs, and they will just extend something like OutputStreamAppender cause they don't need/want/care to change how encoding/formatting works
then the 1% of folks like you can just reimplement it in a way that suits their super custom needs, but still fits into the appender api

it's a really good library.

Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!

MononcQc posted:

fyi this is pretty much what I ended up implementing, except not in Java and with formatting done at the call site instead, because I just can and it's better than overloading a single process with a lot of lovely string formatting in our case.

I'm trying to retrofit that into the standard logging library that does proper handling of all the filtering, log event level management, tracing support, etc., in a way that will be non-painful and optional for people who require it.


You get one native thread per core. Each thread runs a scheduler and a run queue. They do the equivalent of green threads, but with processes (nothing shared), and they're preemptively scheduled. There's load-balancing, work-stealing, process migration and whatnot done for you automatically to make sure you get a good distribution over all CPUs, but you can also tell the VM to respect CPU affinity as much as possible if you want.

so processes are green threads that can't share memory/objects but can communicate thru defined message endpoints?

MononcQc
May 29, 2007

Shaggar posted:

so processes are green threads that can't share memory/objects but can communicate thru defined message endpoints?

more or less yes. that's the gist of it. poo poo's optimized for that so starting a new process is ~300 words in memory and takes ~10µs, message passing is equally fast, and you can do introspection on pretty much everything you want for each process natively, but otherwise that's pretty much that.
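
for the curious, the whole model fits in a few lines of shell (names and the message are made up, obviously):

code:
%% spawn an isolated process, then talk to it purely through message
%% passing; there's no shared memory anywhere in this exchange
Echo = spawn(fun() ->
    receive
        {From, Msg} -> From ! {self(), {echoed, Msg}}
    end
end),
Echo ! {self(), "its pizza time"},
receive
    {Echo, Reply} -> io:format("~p~n", [Reply])
end.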

MononcQc fucked around with this message at 19:27 on Sep 6, 2013

Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!
my autism is triggered when you refer to a process exiting cause I think u mean ur runtime and not just a thread.

MononcQc
May 29, 2007

Shaggar posted:

my autism is triggered when you refer to a process exiting cause I think u mean ur runtime and not just a thread.

yeah I can see why that could cause that kind of reaction. When you have 50,000 of these lightweight threads/processes, you don't give much of a poo poo about seeing one of them come and go in half a millisecond every 15 minutes or so, and letting them exit and handling the exit signal somewhere else is super attractive as an idea compared to doing it in the OS, where you'd be thrashing everything to oblivion.
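
the "handle the signal somewhere else" part is roughly just this (shell-pasteable sketch, crash reason made up):

code:
%% this process traps exits; when the linked worker dies, the crash
%% arrives here as a regular message instead of taking this process down
process_flag(trap_exit, true),
Worker = spawn_link(fun() -> exit(oops) end),
receive
    {'EXIT', Worker, Reason} ->
        io:format("worker ~p died: ~p~n", [Worker, Reason])
end.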

Shaggar
Apr 26, 2006
Probation
Can't post for 4 hours!
we do that in other languages we just call them by their correct name (threads)

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

MononcQc posted:

more or less yes. that's the gist of it. poo poo's optimized for that so starting a new process is ~300 words in memory and takes ~10µs, message passing is equally fast, and you can do introspection on pretty much everything you want for each process natively, but otherwise that's pretty much that.

when you're banging out a message to an erlang on another computer does the runtime put it in a local queue so it doesn't take forever?

MononcQc
May 29, 2007

Shaggar posted:

we do that in other languages we just call them by their correct name (threads)

bbbut it's already running threads

MononcQc
May 29, 2007

Cocoa Crispies posted:

when you're banging out a message to an erlang on another computer does the runtime put it in a local queue so it doesn't take forever?

Pretty much, yeah. There's a special Erlang process / piece of VM magic at both ends of the communication that picks things up where they left off on the other node and keeps things working transparently.

Nomnom Cookie
Aug 30, 2009



Shaggar posted:

we do that in other languages we just call them by their correct name (threads)

erlang has special snowflake m:n threads and calls them processes

MononcQc
May 29, 2007

:smuggo:

Nomnom Cookie
Aug 30, 2009



Shaggar posted:

90% of logback users are gonna use stuff as-is and not care about the implementation cause it won't affect them.
another 9% will decide they need to write their own appender because they have a special snowflake destination for the logs, and they will just extend something like OutputStreamAppender cause they don't need/want/care to change how encoding/formatting works
then the 1% of folks like you can just reimplement it in a way that suits their super custom needs, but still fits into the appender api

it's a really good library.

logback is so great. anytime i have to program something that's not on the jvm and i want to log something, i look at what existing code uses/what's in the stdlib and it's like ugh. this all sucks dick, i want logback

at oldjob they managed to create a logging class that was actually worse than using php's logging functions

Nomnom Cookie
Aug 30, 2009




Smug Erlang Weenie

MononcQc
May 29, 2007

I'm gonna go high-five go programmers for their goroutines

prefect
Sep 11, 2001

No one, Woodhouse.
No one.




Dead Man’s Band

Shaggar posted:

my autism is triggered when you refer to a process exiting cause I think u mean ur runtime and not just a thread.

this is why the arguments about exceptions go on extra-long

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

MononcQc posted:

I'm gonna go high-five go programmers for their goroutines

make sure you say "too slow, joe" a decade before they even move their hands

alternate director's cut ending: they'll say "too slow, joe" until jvm erlang is a thing again (heard rumors of 6-8x speed improvements over beam)

MononcQc
May 29, 2007

yeah the big improvements of Erjang were over numerical code (mostly floating point ops) and message passing (no copying), but that meant Erjang had to forego the soft realtime constraints that BEAM Erlang gives you.

The project got a little kick lately because they're making it work with Elixir to see if there's more interest from what I understand

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

MononcQc posted:

yeah the big improvements of Erjang were over numerical code (mostly floating point ops) and message passing (no copying), but that meant Erjang had to forego the soft realtime constraints that BEAM Erlang gives you.

The project got a little kick lately because they're making it work with Elixir to see if there's more interest from what I understand

yeah, it made it into our engineering scrum today

Nomnom Cookie
Aug 30, 2009



erlang on the jvm :gizz:

Max Facetime
Apr 18, 2009

Otto Skorzeny posted:

Baller sig btw

thanks, I also have some ideas for a followup: successful project delivery and wrap-up:


unixbeard
Dec 29, 2004

here are some posts about some c++ stuff i had never bothered to try to understand http://cachelatency.com/ they seem reasonable :shobon:

double sulk
Jul 2, 2010

I'm putting my faith in Ruby. It might take 10 years, but eventually the performance will resemble C's. It's basically a compile-to-C language right now as it is. There's just a whole bunch of inefficiencies in the implementation. Once they get ironed out, we'll finally be able to have our cake and eat it too. One language to rule them all.

weird
Jun 4, 2012

by zen death robot

gucci void main posted:

One language to rule them all.

quit talking about common lisp

Condiv
May 7, 2008

Sorry to undo the effort of paying a domestic abuser $10 to own this poster, but I am going to lose my dang mind if I keep seeing multiple posters who appear to be Baloogan.

With love,
a mod


gucci void main posted:

I'm putting my faith in Ruby. It might take 10 years, but eventually the performance will resemble C's. It's basically a compile-to-C language right now as it is. There's just a whole bunch of inefficiencies in the implementation. Once they get ironed out, we'll finally be able to have our cake and eat it too. One language to rule them all.

hehe this is pretty funny thanks sulk

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

unixbeard posted:

here are some posts about some c++ stuff i had never bothered to try to understand http://cachelatency.com/ they seem reasonable :shobon:

Pretty basic stuff, though there are some things he doesn't seem to understand fully, like auto, or maybe it's just the condescending tone of "some of you other programmers may need training wheels for your code, but not this guy :smug:" ("auto" is not just for lazy fuckers, dummo, it's a necessity to support some of the new language features). Or I may just be biased against idiots who use the term "syntactic sugar"

unixbeard
Dec 29, 2004

hackbunny posted:

Pretty basic stuff, though there are some things he doesn't seem to understand fully, like auto, or maybe it's just the condescending tone of "some of you other programmers may need training wheels for your code, but not this guy :smug:" ("auto" is not just for lazy fuckers, dummo, it's a necessity to support some of the new language features). Or I may just be biased against idiots who use the term "syntactic sugar"

yeah I don't really know c++ that well, especially this sort of stuff.

unixbeard
Dec 29, 2004

i would like to better understand the preconditions for use. it seems to me the new features were added for a reason like "i am doing x and y, and now I need to do z, which i can't do due to some intractable problem or only through some super hackish solution, but the new feature is designed to resolve it cleanly"

hackbunny
Jul 22, 2007

I haven't been on SA for years but the person who gave me my previous av as a joke felt guilty for doing so and decided to get me a non-shitty av

unixbeard posted:

i would like to better understand the preconditions for use. it seems to me the new features were added for a reason like "i am doing x and y, and now I need to do z, which i can't do due to some intractable problem or only through some super hackish solution, but the new feature is designed to resolve it cleanly"

Oook, I'll try. I'll be brief cos I'm on a phone

Auto is not just convenient (especially for writing foreach loops), but necessary because in C++11 some expressions may have a type that you can't even express in code. For example, lambda expressions result in automatically generated objects with automatically generated type names that you simply can't know in advance
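
e.g. (toy example, not from the blog post):

code:
#include <vector>

int main() {
    std::vector<int> xs = {1, 2, 3};

    // the closure type below is compiler-generated and has no name you
    // could spell out, so auto is the only sane way to hold it in a variable
    auto square = [](int x) { return x * x; };

    int total = 0;
    for (auto x : xs)      // auto also makes range-for loops painless
        total += square(x);

    return total == 14 ? 0 : 1;
}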

Shared_ptr is simply the closest thing to a garbage collector that you'll see in standard C++ (besides, if it's good enough for CPython...). It's not a strictly necessary thing, unlike other features, AFAIK
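
toy example of the refcounting, just to illustrate the use_count behaviour:

code:
#include <cassert>
#include <memory>

int main() {
    // reference-counted ownership, CPython-style: the int is freed when
    // the last shared_ptr pointing at it goes away
    std::shared_ptr<int> a = std::make_shared<int>(42);
    {
        std::shared_ptr<int> b = a;      // use count: 2
        assert(a.use_count() == 2);
    }                                    // b destroyed, count back to 1
    assert(a.use_count() == 1);
    return *a == 42 ? 0 : 1;
}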

Decltype solves a long, long-standing issue with macro-like functions like min/max (among other examples of doing fancy template poo poo with return types). With a macro implementation of, say, min, you could pass 10UL and 10.1 and get a return type of double thanks to the magic of operator ?:. But the macro implementation would (1) pollute the namespace with a really really common token (gently caress you <windows.h>), and (2) evaluate its arguments more than once (a couple of gcc extensions took care of this specific issue, at least for C). Until C++11 the function implementation would solve these issues, but introduce a new one: it couldn't have heterogeneous arguments (like unsigned long and double from the example above), because then what type would the min function return? Basically: historically in C++ you couldn't do the fancy poo poo with return types that you could with function argument types. Decltype closes the gap
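
a decltype-based min looks like this (function name made up; note that a real version would also strip the reference you'd get back when both arguments have the same type):

code:
#include <iostream>

// trailing return type + decltype: the return type is whatever the
// conditional operator yields for these two argument types (double for
// min_of(10UL, 10.1)), and each argument is evaluated exactly once
template <class A, class B>
auto min_of(A a, B b) -> decltype(a < b ? a : b) {
    return a < b ? a : b;
}

int main() {
    std::cout << min_of(10UL, 10.1) << "\n";  // prints 10, as a double
    return 0;
}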

The rationale behind the new RNG API is pretty well explained by that blog post. I'll add that no other language that I know of has such a fancypants built-in RNG as C++11
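
for reference, the split between engines and distributions looks like this:

code:
#include <iostream>
#include <random>

int main() {
    std::random_device rd;                         // nondeterministic seed source
    std::mt19937 gen(rd());                        // engine: Mersenne Twister
    std::uniform_int_distribution<int> die(1, 6);  // distribution: a fair d6
    for (int i = 0; i < 5; ++i)
        std::cout << die(gen) << ' ';
    std::cout << '\n';
    return 0;
}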

hackbunny fucked around with this message at 12:58 on Sep 7, 2013

Cocoa Crispies
Jul 20, 2001

Vehicular Manslaughter!

Pillbug

hackbunny posted:

The rationale behind the new RNG API is pretty well explained by that blog post. I'll add that no other language that I know of has such a fancypants built-in RNG as C++11

c++ fetishists love yak shaving and abhor yagni

Notorious b.s.d.
Jan 25, 2003

by Reene

gucci void main posted:

I'm putting my faith in Ruby. It might take 10 years, but eventually the performance will resemble C's. It's basically a compile-to-C language right now as it is. There's just a whole bunch of inefficiencies in the implementation. Once they get ironed out, we'll finally be able to have our cake and eat it too. One language to rule them all.

nope

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

topaz is as close as you'll get

Nomnom Cookie
Aug 30, 2009



hackbunny posted:

The rationale behind the new RNG API is pretty well explained by that blog post. I'll add that no other language that I know of has such a fancypants built-in RNG as C++11

java.security.SecureRandom biyatch

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

Nomnom Cookie posted:

java.security.SecureRandom biyatch

Came to post this
