|
Honestly, that's an odd enough request that I'm curious as to why exactly you want to do this. I'm not saying it's wrong, I'm just curious.
|
# ? Feb 11, 2013 19:54 |
|
|
What are your opinions on Python GUI toolkits? I use Linux at home and all Macs at work, so I'd like good compatibility between those, if that's even an issue anymore; the searching I did turned up some really old results, so I don't know what to believe. There was some stuff about Tkinter (I think) looking like crap in comparison, but I think it has had an update in that respect. So: Tkinter, PyGTK, wxPython, or any others?
|
# ? Feb 11, 2013 20:07 |
|
PySide is pretty nice. Especially since (almost) all existing PyQt tutorials and docs are valid for it.
|
# ? Feb 11, 2013 20:16 |
|
QuarkJets posted:In Python 2.6 I want to take a dictionary and convert all of the keys, which are strings with various cases, to lowercase strings. Right now I am doing it like this: Why? You're going to have to create new keys for each value, and the values won't be duplicated between the old and new dicts. Reinserting the keys and deleting them will probably be slower than just creating a new dict, and more cumbersome. quote:Does this code do that, and how can I check that this is the case? You probably want to use .iteritems() rather than .items().
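For reference, the new-dict approach being discussed here is just one pass over the items; a minimal sketch with made-up sample data:

```python
# Build a fresh dict with every key lowercased. In Python 2.6, where dict
# comprehensions don't exist, the equivalent is
# dict((k.lower(), v) for k, v in d.iteritems()).
d = {"Foo": 1, "BAR": 2, "baz": 3}
lowered = {k.lower(): v for k, v in d.items()}
```

Note that the values themselves are not copied; both dicts reference the same value objects.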
|
# ? Feb 11, 2013 20:35 |
|
Jerry SanDisky posted:PySide is pretty nice. Especially since (almost) all existing PyQt tutorials and docs are valid for it. The big differences iirc are that PySide uses the new style slots, and also handles qvariants without having to mangle sip.
|
# ? Feb 11, 2013 20:36 |
|
yaoi prophet posted:Honestly, that's an odd enough request that I'm curious as to why exactly you want to do this. I'm not saying it's wrong, I'm just curious. I am taking dictionaries as input and trying to read values with specific keys, but there are sometimes inconsistencies in capitalization. Sometimes the dictionaries can be large, but I do have plenty of memory available; I just want to write effective code that doesn't waste time and memory making huge dictionary copies. So I think that what I will do is check whether a key is already lowercase, and if it isn't, I'll create a new lowercase key with the old value. That would be better than a full dictionary copy, I think.
|
# ? Feb 12, 2013 01:33 |
|
QuarkJets posted:I am taking dictionaries as input and trying to read values with specific keys, but there are sometimes inconsistencies in capitalization. Sometimes the dictionaries can be large, but I do have plenty of memory available; I just want to write effective code that doesn't waste time and memory making huge dictionary copies It would probably only be better if you have a relatively low proportion of non-lowercase keys, so that making new keys is a rare operation. You can't "change" a key, since that would also change its hash value; you can only make new keys and delete old ones. So it might end up being six of one, half a dozen of the other when choosing between the solutions. You can always profile with your data to see if there's a winner, just to make sure.
|
# ? Feb 12, 2013 01:38 |
|
Emacs Headroom posted:It would probably only be better if you have a relatively low proportion on non-lower case keys and making new keys is a rare operation. You can't "change" a key, since that would also change its hash value, you can only make new keys and delete old ones. So it might end up being like 6-of-one or half-dozen of another when choosing between the solutions. You can always profile with your data to see if there's a winner just to make sure. Yeah, having to create a lowercase key is relatively uncommon (most come lowercased already). I suppose that I could run some tests and find out for sure whether one way or the other is actually faster on average for my data QuarkJets fucked around with this message at 02:02 on Feb 12, 2013 |
# ? Feb 12, 2013 01:59 |
|
Thinking deeper, I should specify that the dictionary values are actually numpy arrays each with 1k to 1B entries. There are maybe only 100 keys in the dictionaries, really it's these arrays that are large. When I create a new key and give it the same value as the old key, then a reference gets passed and I am not actually creating any new arrays in memory, correct? So I don't really even need to worry about deleting the old keys since minimal memory is used by two keys both pointing to the same array
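That can be checked directly: assigning an existing value to a new key stores a reference, not a copy. A quick sketch, using a plain list as a stand-in for a large numpy array so the example has no dependencies:

```python
d = {"Butts": [0] * 1000}        # stand-in for a large numpy array
d["butts"] = d["Butts"]          # new key, same object: nothing is copied
assert d["butts"] is d["Butts"]  # one array in memory, two keys pointing at it
```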
|
# ? Feb 12, 2013 02:17 |
|
Yeah I believe that's correct. Assignment is naming, you're just giving these arrays a new name (the member of the dictionary indexed by 'butts' or what have you).
|
# ? Feb 12, 2013 02:19 |
|
QuarkJets posted:Thinking deeper, I should specify that the dictionary values are actually numpy arrays each with 1k to 1B entries. There are maybe only 100 keys in the dictionaries, really it's these arrays that are large. Unless you're going to be vastly increasing the number of keys, it shouldn't be an issue. That said, you can pretty easily do: code:
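The code block itself didn't survive the archive, but from the follow-up discussion (the `toBeDeleted` list, with deletions done in a separate pass) it was presumably along these lines:

```python
d = {"Foo": 1, "bar": 2}  # illustrative data

toBeDeleted = []
for k, v in list(d.items()):   # list() for Python 3; 2.6's .items() was already a list
    if k != k.lower():
        d[k.lower()] = v       # new key references the same value, no array copy
        toBeDeleted.append(k)
for k in toBeDeleted:
    del d[k]
```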
The Insect Court fucked around with this message at 03:04 on Feb 12, 2013 |
# ? Feb 12, 2013 03:00 |
|
The Insect Court posted:Unless you're going to be vastly increasing the number of keys, it shouldn't be an issue. That said, you can pretty easily do: Excellent, thanks guys!
|
# ? Feb 12, 2013 06:53 |
|
The Insect Court posted:Unless you're going to be vastly increasing the number of keys, it shouldn't be an issue. That said, you can pretty easily do: What is this "toBeDeleted"? Why do the deletions in a separate step? You can just do them as you go. Nothing is going out of scope within the loop, v is still a reference to the dictionary element.
|
# ? Feb 12, 2013 16:15 |
|
Hammerite posted:What is this "toBeDeleted"? Why do the deletions in a separate step? You can just do them as you go. Nothing is going out of scope within the loop, v is still a reference to the dictionary element. The only reason I can see for doing something like that is if you wanted an array of the deleted values before you deleted them. Even then, I agree that they should be deleted as you go.
|
# ? Feb 12, 2013 18:46 |
|
Hammerite posted:What is this "toBeDeleted"? Why do the deletions in a separate step? You can just do them as you go. Nothing is going out of scope within the loop, v is still a reference to the dictionary element. You could, but you'd need some way to make certain that the transformed key isn't the same as the original one, otherwise you could lose entries. So something like: code:
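This block was also lost; the point about the transformed key possibly equaling the original suggests something like:

```python
d = {"Foo": 1, "bar": 2}

for k in list(d.keys()):   # snapshot the keys so the dict can be mutated
    lower = k.lower()
    if lower != k:         # guard: without it, del would lose already-lowercase entries
        d[lower] = d[k]
        del d[k]           # delete as we go, no separate pass needed
```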
|
# ? Feb 12, 2013 22:58 |
|
The Insect Court posted:You could, but you'd need some way to make certain that the transformed key isn't the same as the original one, otherwise you could lose entries. So something like: This is literally what I posted already, in slightly greater generality.
|
# ? Feb 12, 2013 23:12 |
|
Thought I would share this, although it's more of an OS level thing than a Python specific thing. I was trying to make an accurate rate limiting function, albeit just a simple one, and I found that I could never get good accuracy out of it. After poking around a bit, I realized it must be due to the accuracy of time.sleep(), which I was using to limit rates by blocking code execution. So here is a nifty graph of the percent error for small sleep times using time.sleep(). Machine used was OS X 10.7.4. We now return to your regularly scheduled programming.
|
# ? Feb 13, 2013 14:30 |
|
My Rhythmic Crotch posted:Thought I would share this, although it's more of an OS level thing than a Python specific thing. Isn't that due to time.sleep() giving up the process's current timeslice, so the jitter is due to context swaps? Did you find a function that will give you a specific wall-clock delay? Something like ctypes' nanosleep() or clock_nanosleep(), the latter with time.perf_counter().
|
# ? Feb 13, 2013 18:15 |
|
My Rhythmic Crotch posted:Thought I would share this, although it's more of an OS level thing than a Python specific thing. This might be fun for us all to test. Could you post the source of the test? I can even add a quick matplotlib block so people don't have to dump it into excel to make the graph. EDIT: Nevermind, I realized how stupidly easy this is to code, I'll write one up quick and post it. JetsGuy fucked around with this message at 19:32 on Feb 13, 2013 |
# ? Feb 13, 2013 19:15 |
|
babyeatingpsychopath posted:Isn't that due to time.sleep() giving up the process's current timeslice, so the jitter is due to context swaps? Did you find a function that will give you a specific wall-clock delay? Something like ctypes' nanosleep() or clock_nanosleep(), the latter with time.perf_counter(). Most people seem to recommend just somehow wrapping or calling nanosleep() - I haven't got around to it yet but I'll try to come up with something. JetsGuy, it is easy to code, but about halfway down on this page, there is a neat implementation that uses Python's decorator syntax, which I had previously never used.
|
# ? Feb 13, 2013 21:03 |
|
My Rhythmic Crotch posted:JetsGuy, it is easy to code, but about halfway down on this page, there is a neat implementation that uses Python's decorator syntax, which I had previously never used. I didn't use this, and the only reason I haven't posted my source yet is because I'm about a third of the way done with my own test. Want to be sure it (kinda) works before I post it! If we're gonna compare, it's probably best to use the same timing methods. EDIT: The test range is 1000 test times, and for each test time, it is averaging over 1000 trials. So it takes a little while, even if the range is 1e-4 to 1e-2. EDIT 2: Can you tell it's a slow work day? JetsGuy fucked around with this message at 21:24 on Feb 13, 2013
# ? Feb 13, 2013 21:21 |
|
I tried my same rate limiting code on a different machine, and guess what, it worked much better. Quad core linux box with 2.6.something kernel with PREEMPT: 1-2% away from requested rate. Core i5 Macbook Air: 20% under requested rate. Anyway, I looked around and found this, and was able to get it to compile for Linux but not OS X. So here is a new graph of time.sleep vs nanosleep running on the Linux rig: Next I could try a kernel without PREEMPT and see if the performance goes to poo poo or not. [Edit] Yes, your way is much better. It would be cool to do a more thorough treatment of the data. I'm just kind of screwing around right now. My Rhythmic Crotch fucked around with this message at 23:17 on Feb 13, 2013 |
# ? Feb 13, 2013 23:11 |
|
Is it possible to randomly call a function by using a variable, without using a bunch of if statements? For example, I have function1, function2, function3, etc. I'd have a random integer X selected from 1-3. X=1 runs function1(), X=2 runs function2(), etc. Is there some sort of way to do this all on one line that runs functionX()?
|
# ? Feb 13, 2013 23:44 |
|
Drunk Badger posted:Is it possible to randomly call a function by using a variable without using a bunch of if statments? Yeah, you can do that: code:
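The answer's code block is missing; a sketch of the usual approach, putting the functions themselves in a list and picking one at random (function names and bodies are placeholders):

```python
import random

def function1():
    return "one"

def function2():
    return "two"

def function3():
    return "three"

funcs = [function1, function2, function3]
result = random.choice(funcs)()   # pick a function at random, then call it
```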
|
# ? Feb 14, 2013 00:06 |
|
My Rhythmic Crotch posted:[Edit] Yes, your way is much better. It would be cool to do a more thorough treatment of the data. I'm just kind of screwing around right now. It's the scientist in me, I can't help but be like "yes, but we should AVERAGE over MANY N!". Anyway, I wanna stab a bitch, because I got to the end of the test only to find out I forgot to put in a line to stop python from just closing out the graph without saving it. Also, I'm CONSTANTLY typoing python as pythong. yikes.
|
# ? Feb 14, 2013 00:18 |
|
Drunk Badger posted:Is it possible to randomly call a function by using a variable without using a bunch of if statments? code:
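This answer's code was lost as well; given the "functionX" phrasing in the question, it may have shown a dispatch table keyed by the random integer, something like:

```python
import random

def function1():
    return 1

def function2():
    return 2

def function3():
    return 3

dispatch = {1: function1, 2: function2, 3: function3}
x = random.randint(1, 3)
value = dispatch[x]()   # look the function up by number, then call it
```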
|
# ? Feb 14, 2013 00:35 |
|
I think I need to add some more information now that I have a better idea what I'm doing. Each function returns a random number itself. So what I have is a random number deciding what function gets called, which produces another random number. So function1 randomly returns its own set of random numbers, function2 randomly returns a different set of numbers, etc. While the examples do produce a result, I'm finding that every function ends up returning the same result each time it's called. function1 returns 1 the entire time on one run of the script, and always returns 2 on another run of the same script. It seems to run the function once, save that value as a permanent result, and just return that number. Instead, I want it to run the function and possibly give me something different.
|
# ? Feb 14, 2013 01:55 |
|
JetsGuy posted:Also, I CONSTANTLY I'm typoing python as pythong. yikes. http://www.pythong.org/
|
# ? Feb 14, 2013 02:14 |
|
Welcome to the Pythong Package Index.
|
# ? Feb 14, 2013 02:34 |
|
Well then.
|
# ? Feb 14, 2013 02:48 |
|
Drunk Badger posted:I think I need to add some more information now that I have a better idea what I'm doing. I'm not sure what it is you're trying to work towards, but here is something silly I put together quickly. Maybe it will help you, IDK. Python code:
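The "silly" snippet itself is gone from the archive; judging by the follow-up comment about the random seed, it was something in this spirit (all names made up):

```python
import random

def low():
    return random.randint(1, 3)

def high():
    return random.randint(7, 9)

# Store the function objects themselves (no parentheses), so each loop
# iteration makes a fresh call and can give a different number.
choices = [low, high]
for _ in range(5):
    random.seed()                    # re-seed from the OS each pass (usually unnecessary)
    print(random.choice(choices)())
```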
|
# ? Feb 14, 2013 04:10 |
|
Drunk Badger posted:I think I need to add some more information now that I have a better idea what I'm doing. Post code that's breaking. Screwing with the random seed usually shouldn't be necessary.
|
# ? Feb 14, 2013 04:41 |
|
Python code:
|
# ? Feb 14, 2013 06:52 |
|
Ok. I ran it *again* and found out something fun about the system I was running it on. The minimum sleep time on that machine was ~0.01s! So all those trials were taking entirely too long because, despite what I was telling sleep to do, it was doing a sleep time of 0.01! Anyway, I ported the code over to a Lion machine (the other machine was Snow Leopard). And here's what I got; it's very similar to My Rhythmic Crotch's. I added a quick check to catch (most) cases where your resolution of time won't be tested because of your OS's sleep limits. I'm running a much more fine resolution one now today. I'm expecting the same results, but with those spikes completely gone. And here's the source: code:
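The source didn't survive the archive; a minimal sketch of that kind of benchmark, averaging the percent error of `time.sleep()` over many trials per requested duration (the posted version swept 1000 test times with 1000 trials each, and used matplotlib for the graph):

```python
import time

def sleep_error(requested, trials=100):
    """Average percent error of time.sleep() for one requested duration."""
    total = 0.0
    for _ in range(trials):
        start = time.perf_counter()   # the 2013 original would have used time.time()
        time.sleep(requested)
        total += time.perf_counter() - start
    mean = total / trials
    return 100.0 * (mean - requested) / requested

# Sweep a range of requested sleep times and record the error for each.
times = [1e-4, 1e-3, 1e-2]
errors = [sleep_error(t, trials=10) for t in times]
```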
JetsGuy fucked around with this message at 16:24 on Feb 14, 2013 |
# ? Feb 14, 2013 16:22 |
|
JetsGuy posted:Ok. I ran it *again* and found out something fun about the system I was running it on. I think that might have to do more with the kernel sleep function than with Python. Probably the kernel doesn't want to guarantee that a thread can have control back within 10ms. Maybe it's different for real-time kernels?
|
# ? Feb 14, 2013 16:27 |
|
Emacs Headroom posted:I think that might have to do more with the kernel sleep function than with Python. Probably the kernel doesn't want to guarantee that a thread can have control back within 10ms. Maybe it's different for real-time kernels? Yeah, that's what I figured too. For reference, it turns out on my Lion machine, the minimum sleep time resolution seems to be ~1e-4. I'm somewhat surprised though that there's not much of a warning there. In any case, on my Lion machine (10.7.5), the minimum seems to be around 1e-4: code:
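The output block is gone; a sketch of how you'd probe for that floor, timing progressively smaller sleep requests until the elapsed time stops tracking the requested one:

```python
import time

def probe_floor():
    """Measure actual elapsed time for progressively smaller sleep requests."""
    results = {}
    for exp in range(2, 7):          # requests from 1e-2 down to 1e-6 seconds
        req = 10.0 ** -exp
        start = time.perf_counter()
        time.sleep(req)
        results[req] = time.perf_counter() - start
    return results

# Below the OS floor (~1e-4 on that Lion machine), the elapsed time
# stops shrinking along with the requested time.
elapsed = probe_floor()
```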
JetsGuy fucked around with this message at 16:46 on Feb 14, 2013 |
# ? Feb 14, 2013 16:43 |
|
fivre posted:Post code that's breaking. Screwing with the random seed usually shouldn't be necessary. Here's an example from a routing simulator I'm building. code:
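The snippet is missing from the archive, but from the description the bug is that the `noderoute = [...]` line calls every function once, at assignment time. A reconstruction with hypothetical function names:

```python
import random

def function1():
    return random.randint(1, 3)

def function2():
    return random.randint(4, 6)

# Bug: the () means every function runs right here, once, when the list is
# built. Indexing noderoute later just rereads those stored values.
noderoute = [function1(), function2(), function1()]
for hop in noderoute:
    print(hop)   # prints the same stored numbers on every pass
```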
Output posted:2 It seems to be running the function for each element in 'noderoute = [...', once it gets to that line of code, as I can put a print function in and it will display 6 lines of text before it starts putting out the numbers. I know I can just do a bunch of if statements, but I'm wondering if there's a shorter way to do what I want.
|
# ? Feb 14, 2013 17:21 |
|
The functions you're calling are being evaluated when you assign that list to noderoute, not when you're printing different indexes of noderoute. What you want to do is put the functions themselves in the list, and then call the functions when you iterate through the list in your loop.
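Concretely, that fix looks like this (same hypothetical function names as the question):

```python
import random

def function1():
    return random.randint(1, 3)

def function2():
    return random.randint(4, 6)

# Store the function objects themselves; call them inside the loop,
# so each iteration produces a fresh random value.
noderoute = [function1, function2, function1]
for f in noderoute:
    print(f())
```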
|
# ? Feb 14, 2013 17:37 |
|
What are those strange ; characters at the end of your lines of code?
|
# ? Feb 14, 2013 17:52 |
|
|
Lysidas posted:What are those strange ; characters at the end of your lines of code? Things I don't know if I really need because I'm new to this and have no formal training?
|
# ? Feb 14, 2013 17:59 |