|
Little performance question: This runs in 5 seconds: code:
This runs in 14 seconds: code:
|
# ¿ Sep 5, 2008 19:44 |
|
|
Fenderbender posted: I don't know if this would be an ok place for this or anything, but my company is looking for web and application developers specializing in Perl. If you're interested, send me a PM and I'll give you some contact information.

I figure 3+ years on a live system should be decent experience for what you want. Mithaldu fucked around with this message at 14:59 on Sep 16, 2008
# ¿ Sep 16, 2008 14:21 |
|
Thanks and sent.
|
# ¿ Sep 16, 2008 14:59 |
|
Using ActivePerl 5.10.0 under WinXP, running this: code:
code:
# ¿ Sep 27, 2008 13:28 |
|
TiMBuS posted: It looks like OpenGL probably isn't thread-safe. No idea about Strawberry, I use ActivePerl.

Anyhow, you're pretty much right. This fixed it: code:
|
# ¿ Sep 30, 2008 14:19 |
|
In ActivePerl PPM, you can add the trouchelle repository (if you're on 5.10.0). Have you tried the GTK module from there? It fails on install when trying to download the libatk 1.0.0 dll, but that should be a trivial matter of hunting down the correct file and putting it in /bin.

Edit: Libraries here: http://trouchelle.com/ppm/scripts/dll/ ; Glib, ExtUtils-PkgConfig and Cairo need to be installed manually too. And since GTK is retarded, you'll need to copy C:\Perl\site\lib\auto\Cairo\Cairo.dll to C:\Perl\bin, as well as http://trouchelle.com/ppm/scripts/dll/libpng12-0.dll.gz. The synopsis example worked for me after that.

As for making an executable: I did it by making a bin directory in my project directory which contains the perl executable and dlls. Then I added site/lib, opened a command shell plus Sysinternals' Procmon, and tried to run my script with "bin\perl script.pl". Procmon will tell you which files perl is looking for, and if you use filters cleverly you can quickly have it weed out the ones it has already found after you copy them to "site/lib", iteratively building up a small distro for your script.

Next step is to make sure that your script is a module and sits in some appropriately named sub-directory. Then you create a .bat file in the main directory that does the above perl call, and convert it with one of the many bat2exe tools into an executable. Lastly, use http://www.perlmonks.org/?node_id=410030 to lose the command shell window. You can see an example of the whole setup in action here: http://code.google.com/p/dwarvis/source/browse/trunk/lifevis
# ¿ Oct 4, 2008 14:54 |
|
I wish any of the profilers would work in multi-threaded OpenGL applications. Either they just plain don't work, since GLUT hijacks the exit call, or they only pick up the first thread my program starts, which does nothing but check the memory use every two seconds.
|
# ¿ Oct 7, 2008 17:59 |
|
Triple Tech posted: It's a shame that Devel::NYTProf doesn't work on Windows. I wonder what it's written on top of that Windows lacks?
|
# ¿ Oct 8, 2008 02:41 |
|
Cross-platform stuff has its way of sneaking up on you. Aside from that, just from the experience of realizing my own first project at the moment, I'd say the first priority is making it work in the first place; branch into other stuff once you know it works.

Additionally, Windows programmers tend to have a good bit of knowledge about Linux simply because they're geeks. Heck, many probably know enough about Linux to have decided that its flaws outweigh Windows'. Linux programmers, however, often don't even have Windows around in the first place, never mind any knowledge about it.

What system call? Mind giving a bit more information? I don't have the time to commit to a completely new project now, but I would still like to let it roll around in my brainpan a bit. =)
|
# ¿ Oct 8, 2008 03:29 |
|
"Limited" scopes aren't hard to track at all. "use strict;" and you'll never have to spend a thought on scoping; it just comes naturally:

- variables are either:
  - passed as input at the beginning of a block, via @_
  - created at any point in a block via my
- variables are valid:
  - from start to end of a block
  - within all blocks created inside the current block

If I remember correctly, this works the same in C, doesn't it?

As for loose typing: yes, it is not very efficient. However, that is the ONLY criticism you can level against it: it uses more memory than it *needs* to. In practice it allows extremely fast prototyping. Example: I have implemented, solo, next to my usual 40 hours of work per week, a basic but fully functional 3D engine that does some seriously weird stuff in the data processing part to boot. It took me less than 25 days, and there's stuff in it like 4-dimensional arrays that mix number-based and associative indexing. Being OCD about memory use is all good and fine, but only if memory use is actually an important factor. And declarations, scoping (and in part also type, whether scalar, array or hash) happen in Perl the same as in C, as long as you follow sane programming guidelines.

I also suggest reading this: http://www.perlmonks.org/?node_id=87628 (it is a bit old, being from 2001 and aimed at 5.6, but the bulk of it is still relevant).
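A minimal sketch of the scoping rules listed above (the sub and variable names are made up for illustration):

```perl
use strict;
use warnings;

sub demo {
    my ($input) = @_;         # input arrives via @_ at the start of the block
    my $outer = $input + 1;   # lexical, valid from here to the end of demo()
    {
        my $inner = $outer * 2;   # inner blocks see enclosing lexicals
        print "$inner\n";
    }
    # $inner is gone here; with "use strict", mentioning it won't even compile
}

demo(4);    # prints 10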
# ¿ Oct 13, 2008 18:01 |
|
You may wanna try using http://search.cpan.org/~mlehmann/Coro-4.8/Coro/Socket.pm and http://search.cpan.org/~mlehmann/Coro-4.8/Coro.pm to get parallelization. Coro allows that kind of thing without having to gently caress around with multiple processes or Perl's horrible threads. Otherwise just have one main script and one data getter script.

Data getter:
1. connects to server and gets registry data
2. uses Data::Dumper or Storable to serialize data
3. dumps data into a time-stamp-tagged file in a certain directory

Controller:
1. starts and spawns x getters with exec()
2. sleeps for y seconds
3. checks directory for new files and processes them if needed
4. if number of processed files is not equal to x -> 2.
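A hedged sketch of the controller half of that scheme; the getter script name, dump directory and file extension are all placeholders, and spawning is done with system() rather than a bare exec() so the controller survives:

```perl
use strict;
use warnings;

my $getters = 4;
for my $id (1 .. $getters) {
    # launch one getter per source in the background;
    # "getter.pl" and the "start" launch are Windows-flavoured assumptions
    system(qq{start perl getter.pl $id});
}

my %seen;
my $done = 0;
while ($done < $getters) {
    sleep 5;                              # step 2: sleep for y seconds
    opendir my $dh, 'dumps' or die "dumps dir: $!";
    for my $file (grep { /\.dump\z/ && !$seen{$_} } readdir $dh) {
        $seen{$file} = 1;
        $done++;
        # ... step 3: process the time-stamp-tagged dump file here ...
    }
    closedir $dh;
}
```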
|
# ¿ Oct 22, 2008 13:06 |
|
syphon^2 posted: What am I doing wrong?

What this thing does is basically this: it keeps a list of all coroutines waiting in the script, with the main routine being a coroutine in itself. Each coroutine has a timer and a ready-state switch. When you create a coroutine it is added to the list; as soon as you switch its state to ready, the timer starts running. Now your program proceeds as normal until you either call cede() or schedule(). cede() puts the currently running coroutine into the list, switches it to ready, then selects the coroutine that has been waiting the longest and either starts running it or continues running it after its last cede(). schedule() does something similar, only it doesn't switch the current coroutine to ready.

Yes, this isn't actual multi-threading, since it pretty much only runs one thread on one CPU. However, if you have proper code design, meaning a sufficient number of schedules in your loops and a half-way decently designed idle-loop controller that switches coroutines back to ready state when they're needed, you pretty much have actual parallel processing; it just runs on one core instead of two.

The nice thing here for you is this: sockets block because they keep waiting for a response. Now take a socket implementation that calls schedule() after every loop in its response waiter. Suddenly your main program gets all the CPU it needs to keep running and launch more sockets, and all it needs to do differently is call schedule often enough for the controller to reactivate the sockets so they can check whether there was an answer. The main point: sockets don't block because they keep the CPU busy, but because they're idling until something happens. What you want is to do something during the time they spend idling. Coro::Socket makes that possible.

If you're interested, you can check here for a live example: http://dwarvis.googlecode.com/svn/trunk/lifevis/Lifevis/Viewer.pm
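A tiny sketch of the cede() behaviour described above (not from the linked code; just a minimal illustration of cooperative switching):

```perl
use strict;
use warnings;
use Coro;    # provides async and cede

my @log;

# create a coroutine; it is readied but does not run yet
async {
    push @log, 'worker: step 1';
    cede;                        # ready ourselves, run whoever waited longest
    push @log, 'worker: step 2';
};

push @log, 'main: before cede';
cede;                            # hands control to the worker's first step
push @log, 'main: after cede';
cede;                            # lets the worker finish

print "$_\n" for @log;
# the steps interleave: main and worker alternate at each cede()
```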
# ¿ Oct 22, 2008 16:51 |
|
Sartak posted: 2.07, released today, does!

Nevermind, selecting inode as mirror showed it. Also, NYTProf doesn't work with Coro at all: it throws bunches of errors, and trying to run nytprofhtml on the out file results in it reading the file, then crashing without so much as a peep. Procmon shows it accessing the file and then jumping directly to vsjitdebugger.
# ¿ Nov 1, 2008 21:49 |
|
Voltaire posted: 7 character input is a character.

Voltaire posted: if ($input = [9,8,7,6,5,4,3,2,1,0] == [0,1,2,3,4,5,6,7,8,9])

Also, they're right. All you need for this is one regexp and some ifs. Lastly, try reading the PBP (Perl Best Practices) sometime, if you're going to program in Perl. Any coworkers will be grateful for that.
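A sketch of what "one regexp and some ifs" might look like; the exact input rules of the homework exercise are guesses:

```perl
use strict;
use warnings;

my $input = '7';

if ($input =~ /\A[0-9]\z/) {          # exactly one digit, nothing else
    print "single digit: $input\n";
}
elsif ($input =~ /\A[0-9]+\z/) {      # several digits
    print "number: $input\n";
}
else {
    print "not a number: $input\n";
}
```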
|
# ¿ Nov 9, 2008 21:56 |
|
Voltaire posted: i have.

Voltaire posted: im not even sure where to start with this one without using split and join.

Oh, and if you're going to be in IT professionally, start reading this website daily, starting now: http://thedailywtf.com

On another note: I think the intent of the exercise is to check whether you're actually a Perl programmer or a C(++/#) programmer. Perl has some really powerful tools that C programmers often don't really know about and ignore, since it also has everything they're already used to. As such, C programmers trying to do Perl often do more damage than a complete Perl newbie would.
# ¿ Nov 9, 2008 22:04 |
|
Not a question, but something I just found out and thought I'd share: ever tried iterating through a hash so you could do something with each value in it? The usual solution is to do: code:
code:
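The original code blocks didn't survive the archive; a common pair of approaches looks like this (which of the two the post actually showed is an assumption):

```perl
use strict;
use warnings;

my %h = (a => 1, b => 2);

# the usual way: iterate the keys and look each value up
for my $key (keys %h) {
    $h{$key} *= 2;
}

# shorter: in a foreach loop, values() returns aliases, so
# modifying $value modifies the hash entry directly
for my $value (values %h) {
    $value *= 2;
}

print "$h{a} $h{b}\n";    # each entry doubled twice: 4 8
```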
|
# ¿ Nov 15, 2008 14:36 |
|
Am I doing something wrong, or is map really slow? code:
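The benchmark code from the post is lost; a sketch of comparing map against an equivalent for loop with the core Benchmark module would look roughly like this (the workload is made up):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @in = 1 .. 1000;

# run each version for at least 1 CPU second and compare rates
cmpthese(-1, {
    map_version => sub {
        my @out = map { $_ * 2 } @in;
    },
    for_version => sub {
        my @out;
        for my $x (@in) {
            push @out, $x * 2;
        }
    },
});
```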
|
# ¿ Nov 15, 2008 16:30 |
|
2MB at 80-character line length is about 25000 lines, or 50000 lines at a more likely average length of 40. Depending on your CPU, that can make a good bit of difference. Also important to think about: what *exactly* is it printing? How much data does it have to pull from memory per print? How many references does it have to resolve per variable? Etc.

TiMBuS posted: map performance sucks
|
# ¿ Nov 17, 2008 23:04 |
|
It's not a kneejerk reaction at all. For some reason I have serious problems wrapping my head around map, or just plain reading it, as opposed to a for loop. My brain keeps expecting a comma or some parentheses. I also, to some extent, dislike the use of "magic variables" like $_, or functions that implicitly use it. As such, I have never seen map having any readability advantage for me at all. Additionally, over the last few years I have never had any need to iterate over an array, do something simple enough to fit into one line with map, and then retain it while creating a new array for the output. I have literally been looking for advantages of it over the past two weeks, and aside from it being a different syntax (which is a disadvantage for me) I have found exactly none. It being slower than for only drives the nail into its coffin of uselessness for me. TL;DR: don't jump to conclusions and accusations so quickly, eh?
|
# ¿ Nov 17, 2008 23:57 |
|
fansipans posted: It makes code a hell of a lot more readable ...

Edit: I think my problem is that I mentally translate every piece of code I see into actual English sentences. That works very well for most constructs of Perl unless golfed. Map is impossible for me to translate into an English instruction.
# ¿ Nov 18, 2008 15:32 |
|
I'm not entirely sure what you're trying to say with that, but honestly, I'd see myself as the latter kind of guy. When I read my Perl code it is really just *reading*, like one would usually do with a book. I see the code and effortlessly know the intent of the instruction. Certain things however just gently caress with that flow, so I work around them. It's not that I can't do it, it's just that they slow me down too much. I know what it does, but that reading seems entirely backwards to my mind.
# ¿ Nov 18, 2008 16:44 |
|
I didn't say anything about functional programming, just about map in the context of Perl. In fact, i have this on my "to read" list: http://gigamonkeys.com/book/
|
# ¿ Nov 18, 2008 20:25 |
|
code:
(And in case anyone's wondering why I'm benchmarking this: this operation occurs 120000+ times in a loop that I'd optimally like to crunch into 0.1 seconds.)
|
# ¿ Nov 21, 2008 22:10 |
|
Never knew about that. Your explanation makes sense though, as perldoc only says it's *treated* as compile-time with respect to whether it affects other files or not.
|
# ¿ Nov 21, 2008 22:27 |
|
Voltaire posted:
For the love of christ. If you have the gall to ask people on the internet to solve the homework given to you by the company (what company is it, anyhow?) that gave you a Perl job despite you quite obviously just plain not knowing Perl, then at the loving least run your poo poo through perltidy before posting it. It hurts my eyes just to look at it. [ http://search.cpan.org/~shancock/Perl-Tidy-20071205/bin/perltidy ]
|
# ¿ Nov 26, 2008 16:39 |
|
Currently working on a web app that uses SQLite and does some heavy crunching on market order data grabbed from a third party. Thanks to NYTProf I managed to cut its runtime by a factor of 400, so I can only chime in on the love for it here.
|
# ¿ Dec 14, 2008 12:24 |
|
Figures that three hours after the previous post I stumble upon an actual question: I am trying to do a large number of inserts (30 chunks with ~200_000 inserts each) into an SQLite database and am looking for ways to speed that up, since doing single inserts for each is pitifully slow. So far I've got this: code:
code:
code:
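The posted code is lost; a hedged sketch of the two standard speed-ups for bulk SQLite inserts (one transaction around the whole batch, plus a reused prepared statement) would look like this. Table name, columns and sample rows are made up:

```perl
use strict;
use warnings;
use DBI;

my @rows = ([1, 9.99], [2, 4.50], [3, 1.25]);

my $dbh = DBI->connect('dbi:SQLite:dbname=orders.db', '', '',
    { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS orders (id INTEGER, price REAL)');

my $sth = $dbh->prepare('INSERT INTO orders (id, price) VALUES (?, ?)');

$dbh->begin_work;                 # one transaction instead of one per row
$sth->execute(@$_) for @rows;
$dbh->commit;
```

The transaction is where nearly all of the win comes from: without it, SQLite syncs to disk after every single INSERT.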
|
# ¿ Dec 14, 2008 15:22 |
|
The reason I'm suspecting it's an SQLite limitation is that it just plain cannot do things like this: INSERT INTO table (one,two) VALUES (1,2),(2,3). Gonna try restricting it though and see what happens.

Edit: Now I'm confused... DBD::SQLite::st execute_array failed: 1 bind values supplied but 8 expected

Edit2: Ok, looked at the documentation again. I'm a retard. I thought the binds bound data sets (rows), but they instead bind columns.
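The column-wise binding described in that Edit2 looks like this: execute_array takes one arrayref per *placeholder*, not one per row. The table and sample data are made up:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=orders.db', '', '',
    { RaiseError => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS orders (id INTEGER, price REAL)');

my @ids    = (1, 2, 3);
my @prices = (9.99, 4.50, 1.25);

my $sth = $dbh->prepare('INSERT INTO orders (id, price) VALUES (?, ?)');
$sth->execute_array(
    { ArrayTupleStatus => \my @status },   # per-row success/failure info
    \@ids, \@prices,                       # column 1 data, column 2 data
);
```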
# ¿ Dec 14, 2008 15:53 |
|
Did both of these already. Also, I just tried splitting it into chunks of 10000 and it added 30% more processing time. That's how it looks right now, and I don't think there's anything more that can be done: code:
|
# ¿ Dec 16, 2008 18:23 |
|
As far as SQLite is concerned, these are all exactly the same thing:

- AutoCommit=0 / commit
- BEGIN / END
- BEGIN / COMMIT
- begin_work / commit

I've implemented all of these before and the results and benchmarks are all identical. The only reason I am using BEGIN/END is that that's what the SQLite manual talked about, and it makes for good visual markers in the code. However, I'd like to hear why begin_work / commit is better.

Triple Tech posted: There's way too much going on there.

Triple Tech posted: Gosh, your code is all over the place.

I realize I didn't spend much time making it look pretty, but I'd like to hear what more experienced people have to say anyway.
|
# ¿ Dec 16, 2008 19:59 |
|
Triple Tech posted: Code should be written from both a semantic and aesthetic perspective. Design-wise, you should be telling the driver what to do, not having the driver just pass on you talking to the database directly. If the driver abstracts the concept of starting and stopping transactions, and that abstraction has no implementation penalty, then you should use it.

Triple Tech posted: For visual markers in your code, just use space, comments, and an editor with syntax highlighting.

Triple Tech posted: NYTProf?

Open the file index.html in any browser. Keep in mind it is a line-based profiler. The top table tells you how much real time it spent on which subroutine call in which module. The table below lists it by module. DumpGetter.pm is the main module I'm working on. If you click on any report in the lower table, it shows you the time spent in the relevant things on a new page.

Triple Tech posted: Isolation.
|
# ¿ Dec 16, 2008 20:46 |
|
satest4 posted: According to your profiling, your DELETE statements are eating up way more time than the INSERTs.

drat, you're right, I completely missed that. Guess I really should try parsing the files manually.

satest4 posted: INSERTs seem pretty quick at 0.00014 seconds per query
|
# ¿ Dec 16, 2008 21:30 |
|
Working on something related to my previous question, I stumbled upon this: is there a way to somehow "use" a bunch of modules with only one command? Preferably with said command being a subroutine imported from another file.
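One hedged way to do this: a "prelude" module whose import() loads a fixed list of modules into the caller. The package name and module list here are placeholders:

```perl
package MyPrelude;
use strict;
use warnings;

my @modules = qw(Carp POSIX);   # whatever set you want everywhere

sub import {
    my $caller = caller;
    for my $module (@modules) {
        # string eval so "use" runs with the caller as current package,
        # making each module export into the caller rather than into us
        eval "package $caller; use $module; 1" or die $@;
    }
}

1;

# elsewhere:
#   use MyPrelude;   # now carp(), croak() etc. are available
```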
|
# ¿ Jan 1, 2009 02:43 |
|
So, simply like this? code:
|
# ¿ Jan 1, 2009 03:19 |
|
drat, was hoping I did something wrong. When I do the stuff above, subs exported by "A" don't end up usable in MyModule, as follows: code:
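One sketch of a fix, assuming "A" exports via Exporter: have MyModule explicitly re-export the names it received, so they reach the final caller. The sub name is a placeholder for whatever A actually exports:

```perl
package MyModule;
use strict;
use warnings;

use A;                        # A's Exporter puts some_sub() into MyModule

use base 'Exporter';
our @EXPORT = qw(some_sub);   # ...and we pass the same name onward

1;

# user code:
#   use MyModule;   # some_sub() is now available here too
```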
|
# ¿ Jan 1, 2009 19:48 |
|
satest4, thanks for the awesome explanation. I really should've realized that I need to export all functions that I want to carry over, and I also learned something about the specifics of BEGIN blocks.

As for "use base": I honestly don't get what you're trying to say there. I DID notice that it didn't work for transporting simply exported functions, but I don't have any idea why, since I never looked into how OO internals work in Perl. However, it really IS part of the solution for my problem here, since CGI::Application plugins attach their functions to the caller when imported, and thus these functions survive the "use base" transition. I've basically made an extended version of Titanium. (And I really should've looked at its source earlier, since it's really loving simple.)

My biggest thanks however go to Ninja Rope, since that is EXACTLY what I was looking for, and then some.
|
# ¿ Jan 3, 2009 13:33 |
|
Triple Tech posted: Okay, now my real question. We've covered what. Now why. Why is this superior to traditional returns? I've not yet seen anything you couldn't do with just return.

Tree search. If you have to search through some tree recursively and are only looking for one single result, I'd gather you can do something like this:

1. Mark the point where you start searching.
2. Dive into the tree.
3. Once you've got the result, jump back to the marked point with the result.

Compared with bubbling the result back up with returns, this could probably be faster. More importantly, it should be easier to implement with less code, since you don't need to return ANYTHING at all, and can thus also completely skip any checks on return values.
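The "jump back with the result" idea, sketched with eval/die, which is one way to get a non-local exit in plain Perl (the post itself doesn't say which mechanism was meant; the tree shape is made up):

```perl
use strict;
use warnings;

sub search {
    my ($node, $want) = @_;
    # jump straight back to the marked point, carrying the result
    die { found => $node } if $node->{value} == $want;
    search($_, $want) for @{ $node->{children} || [] };
    return;   # nothing bubbles up on the normal path
}

my $tree = {
    value    => 1,
    children => [ { value => 2 }, { value => 3 } ],
};

# the eval is the "mark" where the search starts
eval { search($tree, 3) };
my $node = ref $@ ? $@->{found} : undef;
print "found value $node->{value}\n" if $node;   # found value 3
```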
# ¿ Jan 15, 2009 07:23 |
|
String comparison is done with eq or ne. Stuff like != only works on numbers. http://perldoc.perl.org/perlop.html
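A two-line illustration of the difference:

```perl
use strict;
use warnings;

my $x = "10";
my $y = "10.0";

print "numerically equal\n"    if $x == $y;   # == compares as numbers: true
print "stringwise different\n" if $x ne $y;   # ne compares as strings: true
```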
|
# ¿ Jan 15, 2009 23:48 |
|
If you want concurrency without using actual threads, you could try Coro.
|
# ¿ Jan 21, 2009 09:58 |
|
|
I'm currently working on a cross-platform GUI flashcard application that I'm thinking about adding to CPAN. Flashcard sets would be stored as individual modules. However, I'm also thinking about making it possible to link the items in the flashcard sets to audio bits, which would optimally be present in the form of audio banks available as modules on CPAN. As I'm doing this with wxPerl I have one restriction though: the audio bits need to be plain wav files and actually present as files; I can't simply feed them to Wx::Sound as a variable. Right now I'm pondering exactly how to accomplish this. The ways I can think of are as follows:

- Have the audio bank module be a hash which contains serialized (Storable, FreezeThaw or something like that) versions of the audio files. When the sound files are required, they are deserialized, dumped with File::Temp and the filename forwarded. Problem: not very elegant.
- Simply add the .wavs as files to the module package, figure out the path to the files, then forward that. Problem: I have no idea if there is a module for that, and I don't want to do it the brute-force way.

Do any of you have better ideas on how to accomplish this, or other comments? Maybe I'm thinking about it in the wrong way completely?
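For the second approach, one hedged option is to ship the .wav files as "share" data inside the distribution and resolve their on-disk path with the CPAN module File::ShareDir; the distribution and file names here are made up:

```perl
use strict;
use warnings;
use File::ShareDir qw(dist_file);

# resolves to wherever the installer put the dist's share/ directory
my $wav_path = dist_file('My-Flashcards-Audio', 'hello.wav');

# $wav_path is a real file on disk, so it can go straight to Wx::Sound
print "would play: $wav_path\n";
```

This keeps the audio bank installable like any other module while still giving Wx::Sound the plain file path it needs, without the serialize/File::Temp dance.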
|
# ¿ Mar 2, 2009 00:32 |