|
basically tcp is a skeuomorph for the phone network
|
# ? Jun 28, 2013 05:49 |
|
JewKiller 3000 posted:you're welcome to your opinion, but loving say something, not just "HURR PYTHON GOOD LANGUAGE DURR SPAM EGGS HEIL PEP 8"

python is cool because:
- its really easy
- its fast enough for your dumb bullshit
- it has a lot of libraries for most common stuff you wanna do
- its not php

Socracheese fucked around with this message at 06:03 on Jun 28, 2013
# ? Jun 28, 2013 05:55 |
|
thats every plang that isnt php
|
# ? Jun 28, 2013 06:09 |
|
Nomnom Cookie posted:go does closures properly *and* has a really novel and useful take on OOP. you can think of it as "C done right" or alternately "javascript made by people who have a clue"

counterpoint: but then you would be using go

i like that there are two programming languages named go. there's go and there's go! thats some ruby style bullshit right there
|
# ? Jun 28, 2013 07:24 |
cURLing in the tcp stack
|
|
# ? Jun 28, 2013 09:05 |
|
uG posted:thats every plang that isnt php

cpan owns
|
# ? Jun 28, 2013 12:47 |
|
uG posted:all you python scrubs are going to be programming perl6 in 10 years (if you havent killed urself by then)

eh I'm ok with this. like it could be way worse.
|
# ? Jun 28, 2013 13:08 |
|
the thing that sucks the most about javascript for me is that it is totally ok with you writing really lovely code because of no type safety, several confusing ways of declaring functions, etc. working on your own poo poo isn't so bad. working with other developers across large projects some of which really suck at coding is where the weaknesses really stand out.
|
# ? Jun 28, 2013 14:50 |
|
JewKiller 3000 posted:you're welcome to your opinion, but loving say something, not just "HURR PYTHON GOOD LANGUAGE DURR SPAM EGGS HEIL PEP 8"

i like python because it's really well integrated into arcgis (which is what i use p much all day every day at my job) so it's super easy to automate all these dumb things that need to be done. also it was easy to learn, i'm a big dumb idiot moron (as evidenced by the code i've posted in this thread) and even i was able to pick it up really quickly
|
# ? Jun 28, 2013 15:07 |
|
Nomnom Cookie posted:it even has language support for maps (that's a computer scientist way of saying "hash table" for you p-langers) so if you're coming from a bad language you can just dive right in and poo poo out code like nobody's business

a) what language implements an associative container and calls it a hash table instead of a map?
b) all hash tables are maps, but not all maps are hash tables
c) i was curious what go's underlying implementation for map is and the spec doesnt even give algorithmic complexity for its operations. thanks rob.
|
# ? Jun 28, 2013 15:20 |
|
FamDav posted:a) what language implements an associative container and calls it a hash table instead of a map?

in perl they're officially named "associative arrays", but everybody calls them "hashes"
|
# ? Jun 28, 2013 15:22 |
|
FamDav posted:a) what language implements an associative container and calls it a hash table instead of a map?

perl, ruby. python calls it a dict, not a hash or a map.

quote:c) i was curious what go's underlying implementation for map is and the spec doesnt even give algorithmic complexity for its operations. thanks rob.

quote:hashmap.c is a multilevel hash table (a short tree so that it doesn't

http://golang.org/src/pkg/runtime/hashmap.c
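the naming argument above is all about one data structure; here it is under its three names in a minimal python sketch (the complexity comments describe CPython's dict specifically, which is a hash table under the hood):

```python
# The same associative container goes by different names: Python calls it a
# dict, Perl calls it a hash, Go calls it a map. CPython's dict is an
# open-addressing hash table, so the usual operations are average-case O(1).
complexity = {}                     # like Perl's %h or Go's map[string]string
complexity["insert"] = "O(1) average"
complexity["lookup"] = "O(1) average"
complexity["worst case"] = "O(n), if every key collides"

assert "lookup" in complexity       # membership test hits the hash table
# missing keys fall back to a default instead of blowing up:
assert complexity.get("delete", "unspecified") == "unspecified"
```

which is the point of b) above: dict/hash/map name the interface, and a hash table is just the usual way to implement it.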
|
# ? Jun 28, 2013 15:24 |
|
perlguts posted:A stash is a hash table (associative array) that contains all of the different objects that are contained within a package.

of course if you are reading perlguts im so sorry
|
# ? Jun 28, 2013 15:38 |
|
why cant they put that in their spec? i realize that the implementation of their map type is changing every day but i would appreciate it if i could just go read through the spec and have a general understanding of the performance characteristics of basic types.
|
# ? Jun 28, 2013 15:39 |
|
FamDav posted:why cant they put that in their spec? i realize that the implementation of their map type is changing every day but i would appreciate it if i could just go read through the spec and have a general understanding of the performance characteristics of basic types.

if you cared about performance at all you wouldnt be using a plang duh
|
# ? Jun 28, 2013 15:43 |
|
FamDav posted:why cant they put that in their spec? i realize that the implementation of their map type is changing every day but i would appreciate it if i could just go read through the spec and have a general understanding of the performance characteristics of basic types.

because it might be a property of the implementation, rather than the design
|
# ? Jun 28, 2013 15:55 |
|
tef posted:because it might be a property of the implementation, rather than the design
|
# ? Jun 28, 2013 17:01 |
|
qft and also god drat that movie was horrific
|
# ? Jun 28, 2013 18:51 |
|
what moobie is dat?
|
# ? Jun 28, 2013 19:10 |
|
my favorite reason to use python: it doesn't make me want to kill myself but can still get me a job
|
# ? Jun 28, 2013 19:59 |
|
MeruFM posted:my favorite reason to use python

same except hardware engineering instead of plang
|
# ? Jun 28, 2013 20:00 |
|
hardware specific C makes me want to kill myself so I still take issue with that especially if the predecessor felt the need to write in assembly
|
# ? Jun 28, 2013 20:05 |
|
I actually used Clojure for something useful today. It felt like I was hacking the matrix.
|
# ? Jun 28, 2013 20:16 |
|
Doc Block posted:what moobie is dat?

antichrist
|
# ? Jun 28, 2013 21:24 |
|
so i'm working on a thing to do loudness monitoring for broadcast TV. basically i'm just trying to implement this algorithm: http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-3-201208-I!!PDF-E.pdf

pretty sure i've got the algorithm figured out but holy christ my program takes like 10 minutes to go through a 20 minute AIFF file. it should be near instant. i've never dealt with inputs of this nature. basically i'm just using libsndfile to read all of the PCM signals into a really huge array (48000 samples/second * 2 channels * 60 seconds * 20 minutes). clearly this is not optimal. how should i go about optimizing this?
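the heart of that algorithm is mean-square energy over 400 ms blocks; here's a rough single-channel sketch, assuming float samples in [-1, 1] and skipping the spec's K-weighting pre-filter, channel weights, and gating stages (so it is not a complete BS.1770 meter, just the block-energy step):

```python
import math

def block_loudness_db(samples, rate=48000, block_seconds=0.400):
    """Mean-square level per 400 ms block, the core step of a BS.1770-style
    meter. `samples` is one channel of floats in [-1, 1]. This sketch skips
    the K-weighting pre-filter, channel weighting, and gating from the spec.
    """
    block = int(rate * block_seconds)
    levels = []
    for start in range(0, len(samples) - block + 1, block):
        chunk = samples[start:start + block]
        mean_square = sum(s * s for s in chunk) / block
        # -0.691 is the offset from the BS.1770 loudness formula;
        # max() guards log10 against digital silence
        levels.append(-0.691 + 10 * math.log10(max(mean_square, 1e-12)))
    return levels
```

a real meter also applies per-channel G weights, overlapping blocks, and the absolute/relative gating passes from the spec before averaging.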
|
# ? Jul 4, 2013 02:47 |
|
If it's 16 bits per sample then remember to multiply that number by 2, giving you about ~220MB of samples to process. If it's 24-bit multiply by 3, giving you about ~330MB of sample data.

If you can, load it as-needed instead of loading the whole thing first. Maybe have a thread that does the loading and another that processes each chunk once it's loaded (so while the loading thread is blocked waiting for the OS to load more data, the processing thread can be chewing away on the previous chunk). Or, even better, do one processing thread per core, breaking the data up into as many pieces as you have processing threads. Assuming the algorithm you're using isn't dependent on the results of the previous operation or whatever.

edit: and that's just brute-force stuff. Make sure your implementation of the algorithm is well optimized. Run it through a profiler and see where your bottlenecks are.

Doc Block fucked around with this message at 03:05 on Jul 4, 2013
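the load-in-one-thread, process-in-another idea can be sketched with a bounded queue; `read_chunk` and `handle_chunk` are made-up stand-ins for this sketch, not anything from the post (in the real program `read_chunk` would wrap a libsndfile read):

```python
import threading
import queue

def process_in_pipeline(read_chunk, handle_chunk, queue_depth=4):
    """Two-thread pipeline: one thread loads chunks, the other processes them.
    `read_chunk()` returns the next chunk, or None at end of file;
    `handle_chunk(chunk)` does the per-chunk math and returns a result.
    """
    chunks = queue.Queue(maxsize=queue_depth)  # bounded, so the loader can't race ahead
    results = []

    def loader():
        while True:
            chunk = read_chunk()
            chunks.put(chunk)                  # blocks while the queue is full
            if chunk is None:                  # sentinel: no more data
                return

    t = threading.Thread(target=loader)
    t.start()
    while True:
        chunk = chunks.get()
        if chunk is None:
            break
        results.append(handle_chunk(chunk))    # overlaps with the next read
    t.join()
    return results
```

in CPython the win here comes from overlapping blocking file I/O with computation; for CPU-bound pure-Python crunching you'd want processes instead of threads.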
# ? Jul 4, 2013 02:57 |
|
Doc Block posted:
Yeah, this was my first thought, only because i've seen it suggested elsewhere. it's pretty easy to understand why breaking it into chunks and multithreading it will make it a lot faster, but the idea that loading as-needed will help is hard for me to grasp. is there any reason to spawn more threads than I have cores?
|
# ? Jul 4, 2013 03:07 |
|
i needed to setup an admin site with some configuration stuff today, and started setting up entity framework to hook into an existing MS SQL Server instance. thought I should try something new - so I tried RavenDB in embedded mode. digging the simplicity and design of it so far.
|
# ? Jul 4, 2013 03:08 |
|
chumpchous posted:Yeah, this was my first thought, only because i've seen it suggested elsewhere. it's pretty easy to understand why breaking it into chunks and multithreading it will make it a lot faster, but the idea that loading as-needed will help is hard for me to grasp.

read my edit. when you load and convert the entire thing into raw samples before you can even start processing, your program is doing almost nothing during that time.

so you load it in chunks, with another thread that does the processing (with a processing queue). so while the loading thread is blocked, waiting for the OS to load more data off the disk (or is converting it into raw samples), the processing thread is chewing on the previous chunk, and when the loading thread has loaded another chunk it adds it to the "to be processed" queue. this is assuming many chunks (i.e. not just 3 big chunks or whatever, unless the file size is really small).

or load and convert it to raw samples all in one go, and then do the "one chunk and thread per core" thing. and if the processor supports hyperthreading then you can spawn even more threads (a 4 core CPU with hyperthreading means 8 logical cores, so you spawn 8 threads).

the reason you probably don't want more threads than cores is that, once each core is near 100% utilization, adding more threads doesn't make things go any faster. plus you have to think about cache utilization, which is partly a data structure issue. unnecessary pointer dereferences causing cache misses, etc.

right now, though, you need to figure out where your program's time is being spent so that you don't waste time blindly making useless optimizations. do that by profiling the code. the exact method depends on the language & compiler you're using. and, by the way, what language and compiler are you using?

Doc Block fucked around with this message at 03:51 on Jul 4, 2013
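the one-chunk-per-core variant might look like this; `mean_square` is a hypothetical stand-in for the real per-chunk DSP, and it uses processes rather than threads because pure-Python number crunching won't run concurrently under CPython's GIL:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def mean_square(chunk):
    # stand-in for the real per-chunk DSP work
    return sum(s * s for s in chunk) / len(chunk)

def parallel_mean_squares(samples, workers=None):
    """Split the sample array into roughly one chunk per core and process
    the chunks in parallel, one worker process per chunk."""
    workers = workers or os.cpu_count() or 1
    size = max(1, len(samples) // workers)
    chunks = [samples[i:i + size] for i in range(0, len(samples), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # pool.map keeps results in chunk order
        return list(pool.map(mean_square, chunks))
```

this only pays off when per-chunk work dwarfs the cost of shipping the chunk to a worker process; in C you'd pass pointers into one big buffer and skip that overhead entirely.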
# ? Jul 4, 2013 03:42 |
|
the slow as poo poo version was a rough out i did in ruby, just to make sure i understood the process -- which obviously accounts for a lot of the slowness. now i'm redoing it in straight C with LLVM, but i havent started yet. i did quite a bit of profiling when i was doing objective C stuff, so I'm fairly familiar with those tools. thanks for the help. i'm officially drinking now, but i'll respond in more detail tomorrow
|
# ? Jul 4, 2013 03:57 |
|
chumpchous posted:Yeah, this was my first thought, only because i've seen it suggested elsewhere. it's pretty easy to understand why breaking it into chunks and multithreading it will make it a lot faster, but the idea that loading as-needed will help is hard for me to grasp.

Basically:
- Read in X bytes at a time on an IO thread (asynchronously) into a processing channel
- Have N available computation thread(s) process chunks as available from the channel
- Challenge: making sure chunk boundary edge cases are handled properly (probably not a huge deal in your case)
- Tune X and N for your workload/computer; it's not usually as simple as N = number of cores since getting 100% utilization is rare
- Or implement this on a DSP, whatever floats your boat
- Test, Measure and Iterate

Malcolm XML fucked around with this message at 04:02 on Jul 4, 2013
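the chunk-boundary challenge is usually handled by overlapping adjacent chunks, so a windowed analysis (like 400 ms loudness blocks) never straddles a seam it can't see; a minimal sketch, with sizes purely illustrative:

```python
def chunk_with_overlap(samples, chunk_size, overlap):
    """Split `samples` into chunks that share `overlap` samples with the
    previous chunk. Each analysis window that crosses a plain chunk boundary
    fits entirely inside some overlapped chunk, as long as the window is no
    longer than `overlap` + 1 samples past the seam.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [samples[i:i + chunk_size]
            for i in range(0, max(len(samples) - overlap, 1), step)]
```

the downside is that samples in the overlap get read (and possibly processed) twice, so you keep the overlap as small as the window size allows.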
# ? Jul 4, 2013 04:00 |
|
chumpchous posted:the slow as poo poo version was a rough out i did in ruby, just to make sure i understood the process -- which obviously accounts for a lot of the slowness. now i'm redoing it in straight C with LLVM, but i havent started yet. i did quite a bit of profiling when i was doing objective C stuff, so I'm fairly familiar with those tools.

Might want to check if you can have something like MATLAB generate the code for you actually
|
# ? Jul 4, 2013 04:03 |
|
if you're doing it on a mac, you can use Instruments to help profile your code, and you can take advantage of grand central dispatch (even in C) to make the threading easier.
|
# ? Jul 4, 2013 04:04 |
|
just use boost
|
# ? Jul 4, 2013 04:20 |
|
but c++ is terrible
|
# ? Jul 4, 2013 05:02 |
|
yeah but its also just c so you can do everything in c but still use c++ libs its a beautiful abomination mlmp dsyp
|
# ? Jul 4, 2013 05:21 |
|
yes that is something a terrible programmer would think and say
|
# ? Jul 4, 2013 05:25 |
|
420 write my own string libraries every day
|
# ? Jul 4, 2013 05:28 |
|
uG posted:yes that is something a terrible programmer would think and say

let me show you my thesis code
|
# ? Jul 4, 2013 05:47 |
|
Bloody posted:let me show you my thesis code

which i now have to rewrite a significant portion of because its poorly designed and a new batch of input data is formatted rather differently than the old batch
|
# ? Jul 17, 2013 17:09 |