|
yeah I was gonna say I don't see any reason you couldn't use slf4j/logback and just write your own batching appender. formatting is done during appending so theoretically you could have an appender that just sends raw, unformatted events on thru to a batching process (at its outer queue) which internally handles formatting and actual writing of logs based on internal settings about what to do when getting full or w/e. ex: in process A you do logger.info("its pizza time") and the underlying logger fires that event into the queue of process LOGGER with no formatting done. Then that process is responsible for handling its outer queue and making sure each event is formatted and spat out where it belongs. Even though the submit to the LOGGER process would be synchronized, if I understand u correctly that's what you're already doing. the heavy tasks that might result in pauses will be done by the LOGGER process which is what you want, plus you move formatting there as well, so your normal processes are faster.
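The pattern above, sketched in plain Java without the logback classes so it stands alone (BatchingLogger/RawEvent are invented names, not logback API; a real appender would subclass logback's AppenderBase and run drain() on a dedicated writer thread):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchingLogger {
    // Raw event: message template plus args, with no formatting done yet.
    static final class RawEvent {
        final String template;
        final Object[] args;
        RawEvent(String template, Object... args) { this.template = template; this.args = args; }
    }

    private final BlockingQueue<RawEvent> queue = new LinkedBlockingQueue<>(10_000);
    final List<String> sink = new ArrayList<>(); // stand-in for the real output destination

    // Cheap call site: just an enqueue. offer() silently drops on overflow;
    // a real appender would pick a policy (block, drop, flush) when full.
    public void info(String template, Object... args) {
        queue.offer(new RawEvent(template, args));
    }

    // In the design described above this loop lives on a dedicated writer
    // thread/process; it's a plain method here to keep the sketch deterministic.
    public void drain() {
        RawEvent e;
        while ((e = queue.poll()) != null) {
            sink.add(String.format(e.template, e.args)); // formatting deferred to here
        }
    }

    public static void main(String[] args) {
        BatchingLogger log = new BatchingLogger();
        log.info("its %s time", "pizza"); // caller only pays for the enqueue
        log.drain();                      // "writer" side formats and writes
        System.out.println(log.sink.get(0)); // its pizza time
    }
}
```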
|
# ? Sep 6, 2013 18:56 |
|
also because java doesn't use a billion processes you won't have to context switch to do logging
|
# ? Sep 6, 2013 18:57 |
|
I love all the logging hijackers the slf4j guy wrote so you can consolidate 3rd party libs onto a single logging platform.
|
# ? Sep 6, 2013 19:00 |
|
instead of logging have you considered just writing correct code???
|
# ? Sep 6, 2013 19:02 |
|
Nomnom Cookie posted:is logback formatting synchronized, i don't think it is. anyway what the async appender does is shove log events onto a queue for the writer thread to consume. pretty much what you're describing except that its not the default Yeah I was looking at the sequence diagram from the architecture docs, and saw the loop square and thought 'I guess this is the synchronous poo poo'. Reading this: http://logback.qos.ch/manual/appenders.html makes it look like there's filtering done as part of that call and no explicit formatting. However, looking at specific appenders provided such as OutputStreamAppender reveals that there's a chain of calls, append() > subAppend() > writeOut() > this.encoder.doEncode(event), which in turn defers to whatever encoder you have defined. At this point I'm not a java dev and I'm not used to reading it, but it looks like by default it does do all its formatting (layout?) and whatnot as part of the synchronized area of the code, although you would be free to reimplement that behaviour to not do it.
|
# ? Sep 6, 2013 19:13 |
|
When people talk about processes in erlang do they actually mean threads or do they really mean separate processes?
|
# ? Sep 6, 2013 19:14 |
|
Shaggar posted:yeah I was gonna say I don't see any reason you couldn't use slf4j/logback and just write your own batching appender. formatting is done during appending so theoretically you could have an appender that just sends raw, unformatted events on thru to a batching process (at its outer queue) which internally handles formatting and actual writing of logs based on internal settings about what to do when getting full or w/e. fyi this is pretty much what I ended up implementing except not in Java and formatting done at the call site instead, because I just can and it's better to do that than overload a single process with a lot of lovely string formatting in our case. I'm trying to retrofit that into the standard logging library that does proper handling of all the filtering, log event level management, tracing support, etc. in a way that will be non-painful and optional to people who require it. Shaggar posted:When people talk about processes in erlang do they actually mean threads or do they really mean separate processes? You get one native thread per core. Each thread runs a scheduler and a run queue. They do the equivalent of green threads, but with processes (nothing shared): they have their own isolated memory and GC going on, and they're preemptively scheduled. There's load-balancing and work-stealing, process migration and whatnot that's automated to make sure you get a good distribution over all CPUs done for you automatically, but you can also tell the VM to respect CPU affinity as much as possible if you want. MononcQc fucked around with this message at 19:23 on Sep 6, 2013 |
# ? Sep 6, 2013 19:20 |
|
MononcQc posted:Yeah I was looking at the sequence diagram from the architecture docs, and saw the loop square and thought 'I guess this is the synchronous poo poo'. Reading this: http://logback.qos.ch/manual/appenders.html makes it look like there's filtering done as part of that call and no explicit formatting. 90% of logback users are gonna use stuff as is and not care about the implementation cause it wont affect them. another 9% will decide they need to write their own appender because they have a special snowflake destination for the logs, and they will just extend something like outputstreamappender cause they don't need/want/care to have to change how encoding/formatting works. then the 1% of folks like you can just reimplement it in a way that suits their super custom needs, but still fits into the appender api. its a really good library.
|
# ? Sep 6, 2013 19:21 |
|
MononcQc posted:fyi this is pretty much what I ended up implementing except not in Java and formatting done at the call site instead because I just can and it's better to do that than overload a single process to be doing a lot of lovely string formatting in our case. so processes are green threads that cant share memory/objects but can communicate thru defined message endpoints?
|
# ? Sep 6, 2013 19:24 |
|
Shaggar posted:so processes are green threads that cant share memory/objects but can communicate thru defined message endpoints? more or less yes. that's the gist of it. poo poo's optimized for that so starting a new process is ~300 words in memory and takes ~10µs, message passing is equally fast, and you get to be able to do introspection on pretty much everything you want for each process natively, but otherwise that's pretty much that. MononcQc fucked around with this message at 19:27 on Sep 6, 2013 |
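For the JVM-minded, a very rough analogue of that model: each "process" owns its mailbox and private state, and the only way in is a message. (This only shows the shape, not the cost; Erlang processes are far lighter than JVM threads and are preemptively scheduled. Mailbox/roundTrip are invented names for the sketch.)

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class Mailbox {
    // Spawn an isolated "process", send it one message, await the reply.
    // All communication goes through queues; no shared mutable state.
    public static String roundTrip(String msg) {
        BlockingQueue<String> inbox = new LinkedBlockingQueue<>();
        BlockingQueue<String> reply = new LinkedBlockingQueue<>();
        Thread proc = new Thread(() -> {
            try {
                String m = inbox.take();   // receive: blocks until a message arrives
                reply.put("got:" + m);     // send: any state here stays private to this thread
            } catch (InterruptedException ignored) { /* shutdown */ }
        });
        proc.start();
        try {
            inbox.put(msg);
            String r = reply.poll(5, TimeUnit.SECONDS);
            proc.join();
            return r;
        } catch (InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello")); // got:hello
    }
}
```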
# ? Sep 6, 2013 19:25 |
|
my autism is triggered when you refer to a process exiting cause I think u mean ur runtime and not just a thread.
|
# ? Sep 6, 2013 19:28 |
|
Shaggar posted:my autism is triggered when you refer to a process exiting cause I think u mean ur runtime and not just a thread. yeah I can see why that could cause that kind of reaction. When you have 50,000 of these lightweight threads/processes, you don't give much of a poo poo to see one of them coming and going in half a millisecond every 15 minutes or so and letting them exit to handle the signal somewhere else is super attractive as an idea compared to doing it in the OS where you'd be trashing everything to oblivion.
|
# ? Sep 6, 2013 19:31 |
|
we do that in other languages we just call them by their correct name (threads)
|
# ? Sep 6, 2013 19:34 |
|
MononcQc posted:more or less yes. that's the gist of it. poo poo's optimized for that so starting a new process is ~300 words in memory and takes ~10µs, message passing is equally fast, and you get to be able to do introspection on pretty much everything you want for each process natively, but otherwise that's pretty much that. when you're banging out a message to an erlang on another computer does the runtime put it in a local queue so it doesn't take forever?
|
# ? Sep 6, 2013 19:36 |
|
Shaggar posted:we do that in other languages we just call them by their correct name (threads) bbbut it's already running threads
|
# ? Sep 6, 2013 19:38 |
|
Cocoa Crispies posted:when you're banging out a message to an erlang on another computer does the runtime put it in a local queue so it doesn't take forever? Pretty much, yeah. There's a special Erlang process / piece of VM magic at both ends of the communication that picks things up where they left on the other node and keeps things working transparently.
|
# ? Sep 6, 2013 19:49 |
|
Shaggar posted:we do that in other languages we just call them by their correct name (threads) erlang has special snowflake m:n threads and calls them processes
|
# ? Sep 6, 2013 20:09 |
|
|
Shaggar posted:90% of logback users are gonna use stuff as is and not care about the implementation cause it wont affect them. logback is so great. anytime i have to program something thats not on the jvm and i want to log something, i look at what existing code uses/whats in the stdlib and its like ugh, this all sucks dick, i want logback. at oldjob they managed to create a logging class that was actually worse than using phps logging functions
|
# ? Sep 6, 2013 20:12 |
|
Smug Erlang Weenie
|
# ? Sep 6, 2013 20:13 |
|
I'm gonna go high-five go programmers for their goroutines
|
# ? Sep 6, 2013 20:20 |
|
Shaggar posted:my autism is triggered when you refer to a process exiting cause I think u mean ur runtime and not just a thread. this is why the arguments about exceptions go on extra-long
|
# ? Sep 6, 2013 20:23 |
|
MononcQc posted:I'm gonna go high-five go programmers for their goroutines alternate director's cut ending: they'll say "too slow, joe" until jvm erlang is a thing again (heard rumors of 6-8x speed improvements over beam)
|
# ? Sep 6, 2013 20:35 |
|
yeah the big improvements of Erjang were over numerical code (mostly floating point ops) and message passing (no copying), but that meant Erjang had to forego the soft realtime constraints that BEAM Erlang gives you. The project got a little kick lately because they're making it work with Elixir to see if there's more interest from what I understand
|
# ? Sep 6, 2013 20:44 |
|
MononcQc posted:yeah the big improvements of Erjang were over numerical code (mostly floating point ops) and message passing (no copying), but that meant Erjang had to forego the soft realtime constraints that BEAM Erlang gives you. yeah, it made it into our engineering scrum today
|
# ? Sep 6, 2013 20:50 |
|
erlang on the jvm
|
# ? Sep 6, 2013 21:28 |
|
Otto Skorzeny posted:Baller sig btw thanks, I also have some ideas for a followup: successful project delivery and wrap-up:
|
# ? Sep 7, 2013 00:51 |
|
here are some posts about some c++ stuff i had never bothered to try to understand: http://cachelatency.com/ they seem reasonable
|
# ? Sep 7, 2013 06:46 |
I'm putting my faith in Ruby. It might take 10 years, but eventually the performance will resemble C's. It's basically a compile-to-C language right now as it is. There's just a whole bunch of inefficiencies in the implementation. Once they get ironed out, we'll finally be able to have our cake and eat it too. One language to rule them all.
|
|
# ? Sep 7, 2013 08:43 |
|
gucci void main posted:One language to rule them all. quit talking about common lisp
|
# ? Sep 7, 2013 09:06 |
|
gucci void main posted:I'm putting my faith in Ruby. It might take 10 years, but eventually the performance will resemble C's. It's basically a compile-to-C language right now as it is. There's just a whole bunch of inefficiencies in the implementation. Once they get ironed out, we'll finally be able to have our cake and eat it too. One language to rule them all. hehe this is pretty funny thanks sulk
|
# ? Sep 7, 2013 11:35 |
|
unixbeard posted:here are some posts about some c++ stuff i had never bothered to try understand http://cachelatency.com/ they seem reasonable Pretty basic stuff, some things he doesn't seem to understand fully though, like auto, or maybe it's just the condescending tone of "some of you other programmers may need training wheels for your code, but not this guy" ("auto" is not just for lazy fuckers, dummo, it's a necessity to support some of the new language features). Or, i may just be biased against idiots who use the term "syntactic sugar"
|
# ? Sep 7, 2013 12:18 |
|
hackbunny posted:Pretty basic stuff, some things he doesn't seem to understand fully though, like auto, or maybe it's just the condescending tone of "some of you other programmers may need training wheels for your code, but not this guy " ("auto" is not just for lazy fuckers, dummo, it's a necessity to support some of the new language features). Or, i may just be biased against idiots who use the term "syntactic sugar" yeah I dont really know c++ that well, especially this sort of stuff.
|
# ? Sep 7, 2013 12:20 |
|
i would like to better understand the preconditions for use. it seems to me the new features were added for a reason like "i am doing x and y, and now I need to do z, which i cant do due to some intractable problem or only through some super hackish solution, but new feature is designed to resolve it cleanly"
|
# ? Sep 7, 2013 12:22 |
|
unixbeard posted:i would like to better understand the preconditions for use. it seems to me the new features were added for a reason like "i am doing x and y, and now I need to do z, which i cant due to some intractable problem or only through some super hackish solution, but new feature is designed to resolve it cleanly" Oook, I'll try. I'll be brief cos Im on a phone. Auto is not just convenient (especially to write foreach loops), but necessary due to the fact that in C++11 some expressions may have a type that you can't even express in code. For example, lambda expressions result in automatically generated objects with automatically generated type names that you simply can't know in advance. Shared_ptr is simply the closest thing to a garbage collector that you'll see in standard C++ (besides, if it's good enough for CPython...). It's not a strictly necessary thing unlike other features AFAIK. Decltype solves a long, long-standing issue with macro-like functions like min/max (among other examples of doing fancy template poo poo with return types). With a macro implementation of say, min, you could pass 10UL and 10.1, and get a return type of float thanks to the magic of operator ?:. But the macro implementation would (1) pollute the namespace with a really really common token (gently caress you <windows.h>), and (2) evaluate its arguments more than once (a couple gcc extensions took care of this specific issue, at least for C). Until C++11 the function implementation would solve these issues, but introduce a new one: it couldn't have heterogeneous arguments (like unsigned long and float from the example above), because then what type would the min function return? Basically: historically in C++ you couldn't do fancy poo poo with return types that you could with function argument types. Decltype closes the gap. The rationale behind the new RNG API is pretty well explained by that blog post. I'll add that no other language that I know of has such a fancypants built-in RNG as C++11 hackbunny fucked around with this message at 12:58 on Sep 7, 2013 |
# ? Sep 7, 2013 12:55 |
|
hackbunny posted:The rationale behind the new RNG API is pretty well explained by that blog post. Ill add that no other language that I know of has such a fancypants built-in RNG as C++11 c++ fetishists love yak shaving and abhor yagni
|
# ? Sep 7, 2013 15:53 |
|
gucci void main posted:I'm putting my faith in Ruby. It might take 10 years, but eventually the performance will resemble C's. It's basically a compile-to-C language right now as it is. There's just a whole bunch of inefficiencies in the implementation. Once they get ironed out, we'll finally be able to have our cake and eat it too. One language to rule them all. nope
|
# ? Sep 7, 2013 16:22 |
|
topaz is as close as you'll get
|
# ? Sep 7, 2013 16:45 |
|
hackbunny posted:The rationale behind the new RNG API is pretty well explained by that blog post. Ill add that no other language that I know of has such a fancypants built-in RNG as C++11 java.security.SecureRandom biyatch
|
# ? Sep 7, 2013 16:53 |
|
|
Nomnom Cookie posted:java.security.SecureRandom biyatch Came to post this
|
# ? Sep 7, 2013 17:24 |