|
MononcQc posted:hi everyone sorry i dont post here as often as i should. here is a pic of me browsing yospos. 0/10, monitor not displaying yospos color theme, not cga
|
# ? Aug 6, 2013 02:32 |
|
im using elinks you scrub. also here's me before going to bed ;)
|
# ? Aug 6, 2013 02:34 |
|
find a way to shoehorn these into thisotplife imo
|
# ? Aug 6, 2013 02:49 |
|
MononcQc posted:im using elinks you scrub. wow so erlang
|
# ? Aug 6, 2013 04:55 |
|
Started reading on gossip protocols and epidemic stuff. This paper is a decent intro.
|
# ? Aug 6, 2013 06:28 |
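for the curious: the push flavor of those protocols fits in a few lines of awk. a toy simulation (numbers made up, not from the paper):

```shell
# toy push gossip: each round, every infected node contacts one random peer
# out of n; the rumor reaches everyone in O(log n) rounds w.h.p.
awk 'BEGIN {
  srand(1)
  n = 100; infected[0] = 1; count = 1
  for (rounds = 0; count < n; rounds++) {
    for (i in infected) {
      peer = int(rand() * n)
      if (!(peer in infected)) newly[peer] = 1
    }
    for (p in newly) { infected[p] = 1; count++ }
    split("", newly)    # portable way to clear an awk array
  }
  print "nodes:", n, "rounds:", rounds
}'
```

since the infected set can at most double each round, 100 nodes always need at least 7 rounds; the exact count depends on the rand() stream.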
|
read a few pages back someone was talkin poo poo about awk and my feeling is that they can go gently caress themselves with a forked stick
|
# ? Aug 6, 2013 07:39 |
|
rotor are you my dad
|
# ? Aug 6, 2013 07:42 |
|
FamDav posted:rotor are you my dad i've done a lot of things in my life i'm not proud of
|
# ? Aug 6, 2013 07:43 |
|
rotor posted:i've done a lot of things in my life i'm not proud of
|
# ? Aug 6, 2013 10:15 |
|
awk has its uses, like most good tools. sometimes a busybox shell is all you have, like in initrd images
|
# ? Aug 6, 2013 13:36 |
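case in point: the "gather some data from logs" job is usually one awk statement. made-up log lines here, with the status code taken as the second-to-last field:

```shell
# count HTTP status codes in an access log: one pass, one hash
printf '%s\n' \
  'a - - [x] "GET / HTTP/1.1" 200 512' \
  'b - - [x] "GET /x HTTP/1.1" 404 0' \
  'c - - [x] "GET / HTTP/1.1" 200 99' |
awk '{ n[$(NF-1)]++ } END { for (s in n) print s, n[s] }' | sort
```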
|
I wrote an awk once to take the output of the pep8 tool and turn it into an awk script that fixed the pep8 errors in the original file. This was the easiest way to make tef's code readable.
|
# ? Aug 6, 2013 14:47 |
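a sketch of the trick, not the actual script: only W291 (trailing whitespace) handled, and the demo.py/pep8.out files are hypothetical:

```shell
# awk writing awk: turn pep8-style output into a script that fixes the file
printf 'x = 1   \ny = 2\n' > demo.py
printf 'demo.py:1:6: W291 trailing whitespace\n' > pep8.out

awk -F'[: ]+' '
  $4 == "W291" { printf "NR == %d { sub(/[ \\t]+$/, \"\") }\n", $2 }
  END          { print "{ print }" }   # pass every (possibly fixed) line through
' pep8.out > fixes.awk

awk -f fixes.awk demo.py   # demo.py with the trailing whitespace stripped
```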
|
Cool as gently caress: http://spritesmods.com/?art=hddhack&page=1
|
# ? Aug 6, 2013 17:51 |
|
MononcQc posted:Whatever, I did find Awk pretty nice for the way it works. It's rather straight to the point, fast enough, and standard in most places so it's nice to put a specific short script together to gather data. Anything bigger we send to splunk though. i was talkin to a dude who does consulting on hbase and hadoop poo poo. apparently every single one of his customers is doing request log parsing. thousand node compute grids to read httpd logs, no joke. this may be how splunk made themselves a billion dollar company idk
|
# ? Aug 6, 2013 20:53 |
|
MononcQc posted:Started reading on gossip protocols and epidemic stuff. This paper is a decent intro. noice
|
# ? Aug 6, 2013 21:09 |
|
Notorious b.s.d. posted:i was talkin to a dude who does consulting on hbase and hadoop poo poo seriously? jesus i can eat logs and poo poo excels and csvs all day. thread that poo poo, too. TURBO STYLE. how many cores u got, bitch
|
# ? Aug 6, 2013 21:15 |
|
Notorious b.s.d. posted:i was talkin to a dude who does consulting on hbase and hadoop poo poo That and Splunk charge in scales of porsches ($100,000) and houses ($250,000). Because no-one else can scale log parsing like they can. Their secret? Custom awk, gzipped plain text log files and a few scraps of Python. Where I currently work we do with 1 db what major academic research projects do with Hadoop; I can't remember how many nodes but it's a lot more than 1.
|
# ? Aug 6, 2013 22:27 |
hi I got a promotion today so I get to keep on doing the exact same poo poo except feel more important somehow
|
|
# ? Aug 6, 2013 22:39 |
|
interactive visualization of cpu and memory speeds
|
# ? Aug 6, 2013 22:40 |
funny how easy it is to move up in programming while doing next to nothing
|
|
# ? Aug 6, 2013 22:40 |
|
OBAMA BIN LinkedIn posted:hi I got a promotion today so I get to keep on doing the exact same poo poo except feel more important somehow congratulations!
|
# ? Aug 6, 2013 22:51 |
|
Zombywuf posted:That and Splunk charge in scales of porsches ($100,000) and houses ($250,000). Because no-one else can scale log parsing like they can. Their secret? Custom awk, gzipped plain text log files and a few scraps of Python. well the parsing is the easy part, especially if you do most of it on the clients/agents. what is their magic for the full text index? that poo poo is wicked fast
|
# ? Aug 6, 2013 23:30 |
|
Jonny 290 posted:seriously? jesus i can eat logs and poo poo excels and csvs all day. thread that poo poo, too. TURBO STYLE. how many cores u got, bitch believe it or not a lot of hadoop users aren't doing anything with the cool java-based hadoop/cascading APIs, just using "streaming". hadoop "streaming" is when your hadoop job is python/perl scripts that talk on stdin/stdout. world's most complicated job control for unix pipes
|
# ? Aug 6, 2013 23:32 |
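that model is easy to fake without hadoop: sort(1) stands in for the shuffle, awk stands in for the python/perl. a toy wordcount, assuming nothing about real streaming job configs:

```shell
# hadoop streaming, minus hadoop: mapper | shuffle (sort) | reducer
printf 'to be or not to be\n' |
awk '{ for (i = 1; i <= NF; i++) print $i "\t" 1 }' |   # mapper: emit key\t1
sort |                                                  # shuffle: group keys
awk -F'\t' '
  $1 != prev { if (NR > 1) print prev, c; prev = $1; c = 0 }
  { c += $2 }
  END { if (NR) print prev, c }'                        # reducer: sum per key
```

the reducer only works because the shuffle guarantees all records for a key arrive consecutively, which is exactly the contract real streaming jobs lean on.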
|
Notorious b.s.d. posted:believe it or not a lot of hadoop users aren't doing anything with the cool java-based hadoop/cascading APIs, just using "streaming" hahaha for fucks sake i need to get to denver and start slutting it up
|
# ? Aug 6, 2013 23:33 |
|
This is cool. it needs cache line support, and Disk/SSD/Network visualizations (with time compression ofc)
|
# ? Aug 6, 2013 23:37 |
|
Notorious b.s.d. posted:what is their magic for the full text index? that poo poo is wicked fast gzip and grep (and a healthy disk cache). Seriously. Their indexes are just gzipped text files. There's no magic, think of it as a column store with one column and good compression. Think about how many instructions you can execute in the time it takes to read a single page from a disk and you'll see why gzip is a good solution.
|
# ? Aug 7, 2013 14:18 |
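no idea if that's literally splunk's on-disk layout, but the cheap version of gzip-plus-scan is easy to demo:

```shell
# "full text index": compressed text plus a linear scan. decompression
# is cheap next to the disk reads the compression saves.
printf 'GET /a 200\nGET /b 404\nGET /c 200\n' > app.log
gzip -f app.log                       # leaves app.log.gz
gzip -dc app.log.gz | grep -c ' 200'  # scan without ever storing it uncompressed
```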
|
Latest Awk program helped diagnose why a node crashed by identifying a concurrency bottleneck from a crash dump. Awk owns. E: gaddamn I gotta find a new avatar MononcQc fucked around with this message at 15:23 on Aug 7, 2013 |
# ? Aug 7, 2013 15:02 |
|
LiveScript is a fork of Coco, which is itself a fork of CoffeeScript
|
# ? Aug 7, 2013 15:28 |
|
abraham linksys posted:LiveScript is a fork of Coco, which is itself a fork of CoffeeScript lol ppl just making poo poo up now
|
# ? Aug 7, 2013 15:33 |
|
Jonny 290 posted:lol ppl just making poo poo up now now?
|
# ? Aug 7, 2013 15:41 |
|
MononcQc posted:Latest Awk program helped diagnose why a node crashed by identifying a concurrency bottleneck from a crash dump. Awk owns. in fact i may go cga that mugshot of you like right now. eh nevermind, its coming out too lovely Bloody fucked around with this message at 15:55 on Aug 7, 2013 |
# ? Aug 7, 2013 15:45 |
|
yosposting from inside moz london
|
# ? Aug 7, 2013 16:12 |
|
do they have red pandas there too?
|
# ? Aug 7, 2013 16:33 |
|
tef posted:yosposting from inside moz london how many people there speak with cockney accents?
|
# ? Aug 7, 2013 16:34 |
|
prefect posted:how many people there speak with cockney accents? None of them will speak with cockerney accents, I can tell you that.
|
# ? Aug 7, 2013 16:48 |
|
tef posted:yosposting from inside moz london But in the U.K. moz means Morrissey.
|
# ? Aug 7, 2013 17:07 |
prefect posted:how many people there speak with cockney accents? they all actually speak like this https://www.youtube.com/watch?v=HSPwqV8CNG0
|
|
# ? Aug 7, 2013 17:09 |
|
Notorious b.s.d. posted:i was talkin to a dude who does consulting on hbase and hadoop poo poo as far as i can tell the companies with thousand-plus node clusters are all doing slightly more than parsing, more like clickstream analysis, with parsing and preprocessing happening upstream. but there is a whole lot of needless or inefficient use of hadoop, probably because the teams that build and run the infrastructure have a vested interest in expanded use and increased scale, and because it's really fuckin easy to use hadoop streaming
|
# ? Aug 9, 2013 08:23 |
|
PENETRATION TESTS posted:as far as i can tell the companies with thousand-plus node clusters are all doing slightly more than parsing, more like clickstream analysis, with parsing and preprocessing happening upstream the last thing is what i'd put money on. anything that's "really fuckin easy" is hard to argue against
|
# ? Aug 9, 2013 18:00 |
|
at least in the org i work with, the most egregious offender is Hive. people do huge fuckin queries that spin up tens of thousands of mappers to do simple SQL-like queries... they end up taking ten minutes or so, so gently caress it. stuff like "give me this entry in the user table joined to her entries in this other table" -> read in the entirety of both tables, throw out all but a few rows, join them on one reducer while the other 99 get no input. they're all one-off queries so they aren't even much of a burden, but they're just so offensively inefficient given that all the same data is in a big expensive and efficient relational database
|
# ? Aug 9, 2013 21:40 |
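the shape of query being complained about, with hypothetical users/orders tables. whether Hive's planner pushes the predicate through the join varies by version, so the blunt fix is to push it by hand:

```sql
-- the offender: full scan of both tables, then a join where one reducer
-- does all the work and the rest get no input
SELECT *
FROM users u
JOIN orders o ON o.user_id = u.id
WHERE u.id = 12345;

-- same answer, but the filter runs inside each table scan so the join
-- only ever sees the handful of rows it keeps
SELECT *
FROM (SELECT * FROM users  WHERE id = 12345) u
JOIN (SELECT * FROM orders WHERE user_id = 12345) o
  ON o.user_id = u.id;
```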