deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!
Work has started on porting Django to 3k; I find it gives a nice example of porting a non-trivial app to 3k.

http://wiki.python.org/moin/PortingDjangoTo3k

This is gonna be fun.


Scaevolus
Apr 16, 2007

Python 2.6b1 was released, changelog here

Things of note:

- Issue #2831: enumerate() now has a ``start`` argument.
- Issue #2138: Add factorial() to the math module.
- Added the multiprocessing module, PEP 371.
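For example, a quick check of the first two in the new beta:
code:
>>> list(enumerate(['a', 'b', 'c'], 1))
[(1, 'a'), (2, 'b'), (3, 'c')]
>>> import math
>>> math.factorial(5)
120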

m0nk3yz
Mar 13, 2002

Behold the power of cheese!

Scaevolus posted:

Python 2.6b1 was released, changelog here

Things of note:

- Issue #2831: enumerate() now has a ``start`` argument.
- Issue #2138: Add factorial() to the math module.
- Added the multiprocessing module, PEP 371.

Yeah, just a note: I boned the patch for the multiprocessing module and forgot to add a portion to the makefile; see http://bugs.python.org/issue3150

rant: The fact that I have to add a chunk to setup.py in trunk *and* edit a makefile *and* twiddle some of the document indexes just to add a single package sucks.

ATLbeer
Sep 26, 2004
Über nerd
Did anyone else know that they could write applications for their Nokia phone in pure Python?

I sure as hell didn't

http://opensource.nokia.com/projects/pythonfors60/

I need to dig out my discarded N-Series and play with this.

bitprophet
Jul 22, 2004
Taco Defender
That's actually been known for a while, but yeah, it's definitely cool. If I weren't a Mac fag with an iPhone I'd probably own one of those Python-capable Nokias by now :)

I actually started learning Python for a course where we wrote wirelessly enabled apps for the Sharp Zaurus running a micro-Linux. Not quite as cramped an environment as a cell phone, but close. It was fun.

ATLbeer
Sep 26, 2004
Über nerd
I'm working on a project where I want to override the default set/get methods of an object. I need to do some funky stuff with them while they're being used, but I want to keep the external interface easy to use through the standard access syntax. The way I believe I'm overriding the methods doesn't seem to be working...

What am I doing wrong here?

code:
class RemoteObj(object):
	code	= "" 
	args	= []
	def __get__(self, obj, typ=None): 
		print "KIIIIIIIIKK"

	def __set__(self, obj, val): 
		print "AHHHHHHH"

	def __delete__(self, obj): 
		pass
	
k = RemoteObj()
k.code = "why is this not hitting the __set__ or __get__ methods"
print k.code
code:
$ python remote_objects.py
why is this not hitting the __set__ or __get__ methods
Python 2.4.4 (#1, Oct 18 2006, 10:34:39)
[GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin


edit: The output I would expect from the above code would be

AHHHHHHH
KIIIIIIIIKK
why is this not hitting the __set__ or __get__ methods


since "AHHHHHHH" would be printed when the __set__ method was invoked, and then __get__ would be invoked (and print) on access (maybe... I think I need some caffeine here). Well, the value I assigned to code probably wouldn't be printed, since I didn't properly replace the __set__ or __get__ methods to actually store the value, but first things first: I'm not even overriding them properly yet.


I'm a moron: it's __getattr__ and __setattr__ for this, not __get__/__set__ (those belong to the descriptor protocol).

move along

ATLbeer fucked around with this message at 20:13 on Jun 23, 2008

hey mom its 420
May 12, 2007

Yeah. Also watch that you don't fall into an infinite loop within the __getattr__ or __setattr__ methods. Manipulate the __dict__ directly.
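Something like this is the trap (a minimal illustration; the class names are made up):
code:
class Broken(object):
    def __setattr__(self, name, val):
        setattr(self, name, val)       # calls __setattr__ again: infinite recursion

class Fixed(object):
    def __setattr__(self, name, val):
        print "SETTING", name
        self.__dict__[name] = val      # plain dict store, no recursion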

ATLbeer
Sep 26, 2004
Über nerd

Bonus posted:

Yeah. Also watch that you don't fall into an infinite loop within the __getattr__ or __setattr__ methods. Manipulate the __dict__ directly.

Yeah, I learned that pretty quickly. I still can't seem to get __getattr__ to overload properly here:

code:
class RemoteObj(object):
	def __getattr__(self, obj):
		print "GETTING"
		return self.__dict__[obj]

	def __setattr__(self, obj, val): 
		print "SETTING"
		self.__dict__[obj] = val

	code	= "" 
	args	= []
	
k = RemoteObj()
k.code = "why is this not hitting the __getattr__ method"
k.boo = "yo yo"
print k.code
Output:
code:
$ python remote_objects.py
SETTING
SETTING
why is this not hitting the __getattr__ method



Latest version of this mess...

code:
class RemoteObj(object):
	def __getattribute__(self, obj):
		print "GETTING", obj
		return object.__getattribute__(self, obj)

	def __setattr__(self, obj, val): 
		print "SETTING", obj, val
		self.__dict__[obj] = val

	code	= "" 
	args	= []
	
k = RemoteObj()
k.code = "why is this not hitting the or __get__ method"
print k.code
Output:
code:
$ python remote_objects.py
SETTING code why is this not hitting the or __get__ method
GETTING __dict__
GETTING code
why is this not hitting the or __get__ method

After all of this, I've almost forgotten what I was trying to do here... :(

What is with that second invocation of the __getattribute__ method, the one for __dict__? Where is that call coming from? (Never mind... it's my self.__dict__ access. But that doesn't make sense: I'm calling object.__getattribute__, not self.__getattribute__. If I really were invoking my own method again I'd be stuck in a recursion loop. Wth?)

ATLbeer fucked around with this message at 21:05 on Jun 23, 2008

tripwire
Nov 19, 2004

        ghost flow
m0nk3yz, I know you worked on pyprocessing so perhaps you can answer a stupid question. I'm trying to parallelize a serial Python program; in the program there is a list of "chromosome" objects which are supposed to get a unique id when they are instantiated. The relevant code looks like this:
code:

class Chromosome(object):
    _id = 0

    def __init__(self, *args, **kwargs):  # actual arguments elided
        self._id = self.__get_new_id()
        # ...
    id = property(lambda self: self._id)

    @classmethod
    def __get_new_id(cls):
        cls._id += 1
        return cls._id

My question is, what is the best/most elegant way to make sure that concurrent processes don't produce id's which collide with any other process?

Since new chromosomes are going to be generated constantly in a loop, should I give up on just incrementing a counter to generate unique ids? Would it be better to rework the __get_new_id function so that it generates a pseudo-random value with a very low chance of collision? Would I be better off making sure each concurrent process only accesses a central id factory in synchronization? There is probably a really simple solution, but I'm too dumb to figure it out, and I don't want to rush into coding without knowing what I'm trying to accomplish.

latexenthusiast
Apr 22, 2008
Ok, so I didn't go through all 19 pages of this thread, but this might be a dumb question that hasn't been asked yet. I'm running Mac OS X 10.4.10, which natively has Python 2.3.5 installed, as evidenced by the following:
code:
Last login: Mon Jun 23 14:38:22 on ttyp1
Welcome to Darwin!
matlocks-powerbook:~ matlock$ python
Python 2.3.5 (#1, Mar 20 2005, 20:38:20) 
[GCC 3.3 20030304 (Apple Computer, Inc. build 1809)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> 
Anyway, I'm attempting to install Python 2.5.1, but I've run into an issue with the binary that I got from python.org. The following thumbnail is to demonstrate that I am indeed using the proper package.

[screenshot: the Python 2.5.1 installer package from python.org]
The problem comes with the installer, which won't let me select anything to install.

[screenshot: the installer's package list, with nothing selectable]
It says that I already have a newer version of each item on the checklist, and simply pressing Install or Easy Install doesn't work because it says there's nothing to install. So how on earth do I get Python 2.5.1 on this computer? I'd prefer not to have to install from source, because that always seems to take a while, and sometimes errors occur along the way and it ends up being kind of a headache. Also, I'm posting this here because I couldn't find anything on Google about this, and I'm hoping that one of you has a good idea as to what's going on. Thanks.

edit: Ok, I guess http://www.mtheory.co.uk/support/index.php?title=Installing_Python_-_iPython%2C_Numpy%2C_Scipy_and_Matplotlib_on_OS_X was able to help me out. Going through the official Python website, I found that version 2.5.2 was available, and that seemed to work. It upgraded the natively installed Python on my computer, which the site linked in this edit seems to indicate might be a bad thing, but we'll see.

latexenthusiast fucked around with this message at 01:26 on Jun 24, 2008

Allie
Jan 17, 2004

Bonus posted:

Yeah. Also watch that you don't fall into an infinite loop within the __getattr__ or __setattr__ methods. Manipulate the __dict__ directly.

I can't imagine getting an infinite loop with __getattr__ - it's only called when you try to access the value of an undefined attribute.

For setting an attribute in __setattr__ I would use super's __setattr__(). I think it'd even fix the issue with __getattribute__ being called when accessing __dict__, since __getattribute__ is called for all attribute access.
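In 2.x that would look something like this (a sketch reusing ATLbeer's class name):
code:
class RemoteObj(object):
    def __setattr__(self, name, val):
        print "SETTING", name, val
        # delegate to object's __setattr__ instead of touching self.__dict__,
        # so an overridden __getattribute__ never sees a __dict__ access
        super(RemoteObj, self).__setattr__(name, val)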

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


Is there any way to call a function that expects a number of arguments with a list of those arguments, without resorting to this:

code:
import time
import cx_Oracle as oracle
date = [int(x) for x in time.strftime('%Y,%m,%d,%H,%M,%S').split(',')]
timestamp = oracle.Timestamp(date[0], date[1], date[2], date[3], date[4], date[5])
The above works fine; it just seems really inelegant.

And yes, I am aware of the many alternate methods to get a timestamp from the DBAPI, this just came up in something I was working on and I wanted to know if there was a pretty way to call the oracle.Timestamp() function without doing "date[0],date[1],..."

deedee megadoodoo fucked around with this message at 13:23 on Jun 24, 2008

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!

HatfulOfHollow posted:

Is there any way to call a function that expects a number of arguments with a list of those arguments, without resorting to this:

code:
import time
import cx_Oracle as oracle
date = [int(x) for x in time.strftime('%Y,%m,%d,%H,%M,%S').split(',')]
timestamp = oracle.Timestamp(date[0], date[1], date[2], date[3], date[4], date[5])
The above works fine; it just seems really inelegant.

And yes, I am aware of the many alternate methods to get a timestamp from the DBAPI, this just came up in something I was working on and I wanted to know if there was a pretty way to call the oracle.Timestamp() function without doing "date[0],date[1],..."

have you tried *date?

Not sure if you have to turn it into a tuple first.

deimos fucked around with this message at 13:29 on Jun 24, 2008

hey mom its 420
May 12, 2007

Milde: Ah, yeah, right, I always confuse get/setattr with get/setattribute. Those are really awkward names.

HatfulOfHollow: try timestamp = oracle.Timestamp(*date). You can also pass keyword arguments that way by using ** with any dict-like object.

e: beaten :pwn:

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


I know I had read something about how to call a function with a list of arguments when I first started working with Python, but I couldn't for the life of me remember what it was. Thanks to both of you.

m0nk3yz
Mar 13, 2002

Behold the power of cheese!

tripwire posted:

m0nk3yz, I know you worked on pyprocessing so perhaps you can answer a stupid question. I'm trying to parallelize a serial Python program; in the program there is a list of "chromosome" objects which are supposed to get a unique id when they are instantiated. The relevant code looks like this:
code:

class Chromosome(object):
    _id = 0

    def __init__(self, *args, **kwargs):  # actual arguments elided
        self._id = self.__get_new_id()
        # ...
    id = property(lambda self: self._id)

    @classmethod
    def __get_new_id(cls):
        cls._id += 1
        return cls._id

My question is, what is the best/most elegant way to make sure that concurrent processes don't produce id's which collide with any other process?

Since new chromosomes are going to be generated constantly in a loop, should I give up on just incrementing a counter to generate unique ids? Would it be better to rework the __get_new_id function so that it generates a pseudo-random value with a very low chance of collision? Would I be better off making sure each concurrent process only accesses a central id factory in synchronization? There is probably a really simple solution, but I'm too dumb to figure it out, and I don't want to rush into coding without knowing what I'm trying to accomplish.

You have a couple of ways of doing this - you've already touched on one, which is to change __get_new_id to generate pseudo-random integers for each of the objects, basically making it a shared object with locks/etc (see processing.RLock/etc). In my code, I generate empty objects with unique IDs ahead of time (think "BlankObject.id") and pump a good amount of them into a shared queue which I then pass into the processes (processing.Queue).

The bad thing about my approach is that you could run out of blank objects, so I have to keep a producer in the background pumping in new objects so the workers generating the objects always have new blanks. In your case it does make sense to make a new shared object which essentially generates the unique IDs. In my case, I could also just fill a queue with unique objects - or subclass processing.Queue and override the get() method to generate batches of IDs if the queue is empty.

A simpler approach is to pick a seed and pass it to the child processes so they can in turn pass it to a random call - you have a pretty low chance of collisions with random, especially if you include some other attribute of the object in the ID. In another implementation, each object I spawned used a random number (generated from a seed) plus 2-3 other attributes of the object being created.

A few things to think about: when using processing.Queue, you pay a serialization and deserialization cost for things going in and out of the Queue. The same goes for the cost of lock acquisition and release; it really depends on where you want to take the hit. If you go with the "shared object generating the IDs" approach, that object is going to have to keep an ever-growing list of the IDs it has handed out so that it really does ensure there isn't a conflict.

Random thought: use a seed passed to random, plus the machine time (time.time()), to generate the IDs.
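Something like this, maybe - a rough sketch against the 2.6 multiprocessing API (id_producer and the queue size are made up for illustration):
code:
import multiprocessing

def id_producer(queue):
    # keep the shared queue topped up with fresh unique IDs
    next_id = 0
    while True:
        next_id += 1
        queue.put(next_id)            # blocks once the queue is full

if __name__ == '__main__':
    ids = multiprocessing.Queue(1000)
    producer = multiprocessing.Process(target=id_producer, args=(ids,))
    producer.daemon = True            # don't block interpreter exit
    producer.start()

    # workers just call ids.get() whenever they need a collision-free ID
    print ids.get(), ids.get(), ids.get()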

m0nk3yz fucked around with this message at 14:14 on Jun 24, 2008

Bozart
Oct 28, 2006

Give me the finger.

m0nk3yz posted:

You have a couple of ways of doing this - you've already touched on one, which is to change __get_new_id to generate pseudo-random integers for each of the objects, basically making it a shared object with locks/etc (see processing.RLock/etc). In my code, I generate empty objects with unique IDs ahead of time (think "BlankObject.id") and pump a good amount of them into a shared queue which I then pass into the processes (processing.Queue).

The bad thing about my approach is that you could run out of blank objects, so I have to keep a producer in the background pumping in new objects so the workers generating the objects always have new blanks. In your case it does make sense to make a new shared object which essentially generates the unique IDs. In my case, I could also just fill a queue with unique objects - or subclass processing.Queue and override the get() method to generate batches of IDs if the queue is empty.

A simpler approach is to pick a seed and pass it to the child processes so they can in turn pass it to a random call - you have a pretty low chance of collisions with random, especially if you include some other attribute of the object in the ID. In another implementation, each object I spawned used a random number (generated from a seed) plus 2-3 other attributes of the object being created.

A few things to think about: when using processing.Queue, you pay a serialization and deserialization cost for things going in and out of the Queue. The same goes for the cost of lock acquisition and release; it really depends on where you want to take the hit. If you go with the "shared object generating the IDs" approach, that object is going to have to keep an ever-growing list of the IDs it has handed out so that it really does ensure there isn't a conflict.

Random thought: use a seed passed to random, plus the machine time (time.time()), to generate the IDs.

Why not just use the incremented IDs and have each process remember its process ID? If you want to combine the results, you can then create unique IDs from the PID and innovation number. Also, I am not sure NEAT would work if the innovation numbers were not strictly increasing, but I could be wrong. Mainly I'd just be way too lazy to go through the code and see what I would have to change to use random IDs instead of incremented ones, but to each their own.

Zombywuf
Mar 29, 2008

tripwire posted:

My question is, what is the best/most elegant way to make sure that concurrent processes don't produce id's which collide with any other process?

If you're not running the code on Windows, grab a timestamp with time.clock() and use the least significant bits of it. If the collision probability is too high from that, combine it with the process/thread id.
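i.e. something along these lines (a hypothetical bit layout, not tested):
code:
import os, time

# low bits of the clock, with the PID mixed in to avoid cross-process collisions
uid = (os.getpid() << 32) | (int(time.clock() * 1e6) & 0xffffffff)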

ATLbeer
Sep 26, 2004
Über nerd

Bonus posted:

Milde: Ah, yeah, right, I always confuse get/setattr with get/setattribute. Those are really awkward names.

HatfulOfHollow: try timestamp = oracle.Timestamp(*date). You can also pass keyword arguments that way by using ** with any dict-like object.

e: beaten :pwn:

Can someone show me the Python docs on the *array / **array syntax? I know what it does, but I've never actually seen the docs for it. It's also hard to google "python *" :(

No Safe Word
Feb 26, 2005

ATLbeer posted:

Can someone show me the Python docs on the *array / **array syntax? I know what it does, but I've never actually seen the docs for it. It's also hard to google "python *" :(

http://python.org/doc/current/ref/calls.html

hey mom its 420
May 12, 2007

http://docs.python.org/tut/node6.html#SECTION006700000000000000000

It's really simple: when defining functions, *args has to come after any other arguments, and **kwargs has to come after any *args.

Also, quick demo:
code:
>>> def meow(arg1, *args, **kwargs):
...   print arg1
...   print args
...   print kwargs
...
>>> meow("one", "two", "three", heh="feh", meh="teh")
one
('two', 'three')
{'heh': 'feh', 'meh': 'teh'}
>>>

m0nk3yz
Mar 13, 2002

Behold the power of cheese!

Bozart posted:

Why not just use the incremented IDs and have each process remember its process ID? If you want to combine the results, you can then create unique IDs from the PID and innovation number. Also, I am not sure NEAT would work if the innovation numbers were not strictly increasing, but I could be wrong. Mainly I'd just be way too lazy to go through the code and see what I would have to change to use random IDs instead of incremented ones, but to each their own.

Good idea - also, each process and thread in Python supports get-name / get-id calls; you can even name them anything you want (say, a unique seed for each one, which lets you build unique names for each process namespace).
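For instance (2.6 multiprocessing spelling; the worker name is made up):
code:
import os, threading, multiprocessing

def work():
    pass

print os.getpid()                              # OS-level process id
print multiprocessing.current_process().name   # e.g. 'MainProcess'
print threading.currentThread().getName()      # e.g. 'MainThread'

# names are free-form, so a parent can tag each worker with a unique seed
p = multiprocessing.Process(target=work, name='worker-seed-42')
print p.name                                   # 'worker-seed-42'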

Sock on a Fish
Jul 17, 2004

What if that thing I said?
Is there a way to capture what gets printed to the screen after having Python execute a console command via os.system()? I'm trying to write a script that will start a Windows service, check for success, and then start the service again if needed. NET START returns the same result code both when the service is already started and when it just plain fails to start. However, the text it prints to the console differs depending on whether the service failed or was already started.

JoeNotCharles
Mar 3, 2005

Yet beyond each tree there are only more trees.

Sock on a Fish posted:

Is there a way to capture what gets printed to the screen after having Python execute a console command via os.system()? I'm trying to write a script that will start a Windows service, check for success, and then start the service again if needed. NET START returns the same result code both when the service is already started and when it just plain fails to start. However, the text it prints to the console differs depending on whether the service failed or was already started.

Use the subprocess module instead of system.

Sock on a Fish
Jul 17, 2004

What if that thing I said?

JoeNotCharles posted:

Use the subprocess module instead of system.

I tried that, but I can't find any way to get those error messages. stderr and stdout are both null.

king_kilr
May 25, 2007
http://docs.python.org/lib/node528.html

Are you using Popen and giving them a file object, or something else to write to?

Scaevolus
Apr 16, 2007

HatfulOfHollow posted:

Is there any way to call a function that expects a number of arguments with a list of those arguments, without resorting to this:

code:
import time
import cx_Oracle as oracle
date = [int(x) for x in time.strftime('%Y,%m,%d,%H,%M,%S').split(',')]
timestamp = oracle.Timestamp(date[0], date[1], date[2], date[3], date[4], date[5])
The above works fine; it just seems really inelegant.

And yes, I am aware of the many alternate methods to get a timestamp from the DBAPI, this just came up in something I was working on and I wanted to know if there was a pretty way to call the oracle.Timestamp() function without doing "date[0],date[1],..."
You want time.localtime() (or time.gmtime()); they return a time tuple (tm_year, tm_mon, tm_mday, tm_hour, tm_min, tm_sec, tm_wday, tm_yday, tm_isdst):
code:
>>> time.localtime()
(2008, 6, 24, 11, 16, 30, 1, 176, 1)
So, you should really be doing
code:
timestamp = oracle.Timestamp(*time.localtime()[:-3])
That should be faster than splitting a string and doing a list comprehension.

tef
May 30, 2004

-> some l-system crap ->

Sock on a Fish posted:

I tried that, but I can't find any way to get those error messages. stderr and stdout are both null.

I've not been having problems with Popen recently; maybe it's how you are calling it:

code:
import subprocess
p = subprocess.Popen(command,
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     bufsize=1)

print p.stdout.readlines()
print p.stderr.readlines()

Sock on a Fish
Jul 17, 2004

What if that thing I said?

tef posted:

I've not been having problems with Popen recently; maybe it's how you are calling it:

code:
import subprocess
p = subprocess.Popen(command,
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE,
                     bufsize=1)

print p.stdout.readlines()
print p.stderr.readlines()

Score! I thought those extra arguments were the defaults and was just specifying my command. Thanks!

tef
May 30, 2004

-> some l-system crap ->

Sock on a Fish posted:

Score! I thought those extra arguments were the defaults and was just specifying my command. Thanks!

If you use subprocess to grab something that's more than a few lines of output, you may find that it blocks when calling read() or readlines() on one of the streams while the process is writing to the other.

I.e. if you are waiting on stderr while it is writing to stdout, things can deadlock due to buffering.

The solution I used was

code:
import threading

class Buffer(threading.Thread):
    """Buffers the contents of a file object in a separate thread.
    Used to read from stdout and stderr at the same time."""
    def __init__(self, file):
        threading.Thread.__init__(self)
        self.file = file
        self.buffer = None
    def run(self):
        self.buffer = self.file.readlines()


... some call to popen later ...

buffer = Buffer(stderr)
buffer.start()

print stdout.readlines()

buffer.join()

print buffer.buffer
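For what it's worth, if you don't need to stream the output, Popen.communicate() does this dual-stream buffering for you and just hands back both:
code:
out, err = p.communicate()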

tripwire
Nov 19, 2004

        ghost flow

m0nk3yz posted:

Good idea - also, each process and thread inside of python does support get name / get id calls - you can even name them anything you want (say, a unique seed for each one which allows you to make unique names for each process namespace)

Thanks, your reply is very helpful. I think I'll try making each process generate an id with time.time() * PID as a seed.

Bozart: you are correct that there will probably be a conflict with innovation numbers. I hadn't given it much thought, but now that you mention it, I'm sure the crossover function will not work correctly if each gene's "innovation number" doesn't rise consistently with newly added genes.

In neat-python, new chromosomes are made via reproduction of previous chromosomes; either it will have two unique parents, or it will be a clone of one parent. In either case, a mutation is probabilistically applied to the child. In regular neat-python this sometimes has the effect of adding a new gene, but I've also modified the algorithm to allow mutations to remove/prune genes as well.

There are two kinds of genes: node genes, which have a unique identifier within a chromosome, and connection genes, which are identified by the two node genes they connect. Connection genes ALSO have an "innovation" number which, like the chromosome id, persists globally and is rigged to rise with every new gene. This is to track when a given gene arose, and to ensure that crossover is more likely to produce children with functional genes.

To reproduce, two chromosomes are lined up by their matching genes. The child inherits all the genes which match, and it also inherits the disjoint genes from the parent who is fitter.
However, when a connection gene is shared by both parents but differs in its innovation number (i.e. both parents have a connection between two points, but they differ in when the connection was added), the child can only inherit the gene from one parent.

The result will be that if this algorithm is run in parallel, there will be collisions in the innovation number for each gene, and breeding chromosomes from two separate processes is liable to fail.

Since a huge amount of the actual work being done by the CPU is generating and comparing these genes, is it worth it performance-wise to try and lock a global innovation number?

Am I better off ditching the current crossover function and trying to figure out a more parallel friendly version?

tripwire fucked around with this message at 00:09 on Jun 25, 2008

Zombywuf
Mar 29, 2008

tripwire posted:

:words:
Thinking about it, the most reliable way (i.e. not dependent on the OS and not likely to break on faster hardware) would be to have a global counter which is incremented for each thread, with each thread getting an id from it. Then have each thread maintain a local count to generate locally unique ids. A globally unique id can then be made by:
code:
chromosone_id = (thread_id << 16) | local_id
local_id += 1
This assumes no more than 2^16 threads and no more than 2^16 ids generated per thread. If you want fewer threads but more local ids, just change the amount shifted.

Bozart
Oct 28, 2006

Give me the finger.

tripwire posted:

Am I better off ditching the current crossover function and trying to figure out a more parallel friendly version?

Just from an evolutionary algorithm perspective (which, once again, I am painfully new at) maybe you could just run the program through (for example) 100 generations in 10 different threads, and then take random species from each thread to generate 10 new threads, and repeat. Kind of like when one species invades a new ecology?

Then again this topic lends itself to very nice analogies which don't translate to useful improvements in the algorithms.

Zombywuf
Mar 29, 2008

Bozart posted:

Just from an evolutionary algorithm perspective (which, once again, I am painfully new at) maybe you could just run the program through (for example) 100 generations in 10 different threads, and then take random species from each thread to generate 10 new threads, and repeat. Kind of like when one species invades a new ecology?

Then again this topic lends itself to very nice analogies which don't translate to useful improvements in the algorithms.

There is something like this called islanding. It can be useful to prevent your population from getting stuck in local minima. Normally you would swap a few individuals between islands every x generations.
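A sketch of the migration step, assuming each island is just a list of individuals (migrate and the ring topology are my own illustration):
code:
import random

def migrate(islands, k=2):
    # every x generations, copy k random individuals from each island
    # to the next island in a ring
    movers = [random.sample(pop, k) for pop in islands]
    for i in range(len(islands)):
        islands[(i + 1) % len(islands)].extend(movers[i])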

politicorific
Sep 15, 2007
I posted this in the Django thread but didn't get any response. Here's my issue: I have a database encoded in UTF-8 which I am trying to port to Google's App Engine. The thing is that App Engine doesn't support Unicode natively in its database-conversion tools, and the suggested patches are not working with my CSV files. So I'm exploring alternatives.

The alternative idea I've come up with is to build a program that loads a URL, fills out forms, and saves the data into the App Engine datastore.

I'm not exactly sure what the best method for doing this would be. Is there a decent tutorial on using the HTTP side of Python for this? Or a better way to get my data into App Engine?
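If you go the form-filling route, plain urllib/urllib2 from the standard library can do the HTTP part; a rough sketch (the URL and field names are made up):
code:
import urllib, urllib2

# POST one row of the database to a hypothetical App Engine form handler
row = {'title': u'caf\xe9'.encode('utf-8'), 'body': 'some text'}
response = urllib2.urlopen('http://yourapp.appspot.com/add',
                           urllib.urlencode(row))
print response.read()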

tripwire
Nov 19, 2004

        ghost flow

Zombywuf posted:

There is something like this called islanding. It can be useful to prevent your population from getting stuck in local minima. Normally you would swap a few individuals between islands every x generations.

This is kind of implicitly what I want my modification of the algorithm to do by being parallel. At certain points the population branches/forks so that identical populations can explore different mutations at the same time... Eventually, after the populations start drifting farther apart, a new unified population is created from all the information gained during concurrent evolution.

omg! stop posting!
Sep 25, 2007

by Fragmaster
If I were to buy one, and only one, python book to supplement the wealth of python literature which I can find freely on the web, which one should it be? I spend a lot of hours of my life on the train, hours that could be spent reading up on python.

Just one book, for brief spurts of reading during downtime on the train (think 45 mins - 1 hour one way), or in school/waiting rooms/work. What should it be? A reference manual sounds like a good idea, but I don't know if I'd be better off buying something like the Python Cookbook. Then again, I could also get a book that teaches Python and work my way through that.

I can use python to solve problems, but after looking at other solutions in Python Challenge, my solutions appear to be much more tedious than the elegant ones posted in the wiki. After seeing the more elegant solutions, I understand them fully and learn much from them. I am never confused by them and always appreciate them immediately. I just go "ohhhhhh, that makes much more sense than what I was doing." and then I copy their style next time. ;)

So what should I pick up?

omg! stop posting! fucked around with this message at 20:52 on Jun 26, 2008

deedee megadoodoo
Sep 28, 2000
Two roads diverged in a wood, and I, I took the one to Flavortown, and that has made all the difference.


I own Python in a Nutshell and Learning Python. I use Python in a Nutshell on an almost daily basis even though it only covers version 2.2. I just supplement with the online docs. After finishing Learning Python I never looked at it again (even though I could probably stand to brush up on some of the basics again). It sounds like you want a combination of both of these. I don't think that exists. Or if it does, I haven't found it.

The only book for any language that I can think of that really covers both of those bases is Thinking in Java/C++. If only Bruce Eckel wrote Thinking in Python (and not the electronic version available at his website, that was last updated in 2001 and isn't as comprehensive as his other works).

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!
Core Python is a great book.


oce
Dec 27, 2004

the numbers don't lie
I'm trying to write a GUI application in wxPython (on OS X, if it matters) using panels and sizers. Is there a flag or option or something to outline the borders of panels? The more I change from the rough absolute-positioned layout to the sizer one, the worse it looks, and it's hard to tell which panels are taking up which areas of the screen.

edit: Found it, or close enough anyway: when making/extending a Panel, pass style=wx.SUNKEN_BORDER to __init__()!
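i.e. something like this (sketch):
code:
import wx

class OutlinedPanel(wx.Panel):
    # a panel with a visible sunken border, so you can see its bounds
    def __init__(self, parent):
        wx.Panel.__init__(self, parent, style=wx.SUNKEN_BORDER)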

oce fucked around with this message at 16:07 on Jun 27, 2008
