good jovi
Dec 11, 2000

I'm pro-dickgirl, and I VOTE!

If you're just writing code for yourself, then fine, do whatever you want. But when you participate in a community, there is some expectation that you adhere to that community's standards. This is true at any level, and programming is no different. It's a little harder when a community's norms have been dictated by one person (ie, our BDFL), but they are what they are, and rejecting them will get you the same sort of reaction as things like refusing to wear clothing or not using silverware does in the real world.

Symbolic Butt
Mar 22, 2009

(_!_)
Buglord
I love doing list comprehensions the way Peter Norvig does:
Python code:
    return next((WATER, ZEBRA)
                for (red, green, ivory, yellow, blue) in c(orderings)
                if imright(green, ivory)
                for (Englishman, Spaniard, Ukranian, Japanese, Norwegian) in c(orderings)
                if Englishman is red
                if Norwegian is first
                if nextto(Norwegian, blue)
                for (coffee, tea, milk, oj, WATER) in c(orderings)
                if coffee is green
                if Ukranian is tea
                if milk is middle
                for (OldGold, Kools, Chesterfields, LuckyStrike, Parliaments) in c(orderings)
                if Kools is yellow
                if LuckyStrike is oj
                if Japanese is Parliaments
                for (dog, snails, fox, horse, ZEBRA) in c(orderings)
                if Spaniard is dog
                if OldGold is snails
                if nextto(Chesterfields, fox)
                if nextto(Kools, horse)
                )
(he's breaking 79 characters here but you get the idea)

Dren
Jan 5, 2001

Pillbug
I like that syntax, Symbolic Butt.

Does anyone know what the optimizations in list comprehensions are that make them faster than an equivalent for loop?

e.g.

listcomp.py:
Python code:
def listcomp():
    l = [x + 5 for x in xrange(100)]

def forloop():
    l = []
    for x in xrange(100):
        l.append(x + 5)
code:
dren@computer:~/projects/test
$ python -m timeit -s "import listcomp" "listcomp.listcomp()"
100000 loops, best of 3: 6.64 usec per loop
dren@computer:~/projects/test
$ python -m timeit -s "import listcomp" "listcomp.forloop()"
100000 loops, best of 3: 12.8 usec per loop
It's nearly double the time!

BigRedDot
Mar 6, 2008

Dren posted:

I like that syntax, Symbolic Butt.

Does anyone know what the optimizations in list comprehensions are that make them faster than an equivalent for loop?


Everything in the explicit loop is Python bytecode that goes through the interpreter, including looking up and calling l.append on every iteration. The list comprehension does its appending with a dedicated opcode, so the per-item work happens down at the C level.
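You can see the difference yourself with the dis module (a quick sketch; the exact opcodes vary by interpreter version):
Python code:
import dis

def forloop():
    l = []
    for x in xrange(100):
        l.append(x + 5)
    return l

def listcomp():
    return [x + 5 for x in xrange(100)]

# The for loop has to look up l.append (LOAD_ATTR) and make a normal
# function call on every pass; the comprehension appends each item with
# a single LIST_APPEND opcode inside the eval loop.
dis.dis(forloop)
dis.dis(listcomp)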

Dominoes
Sep 20, 2007

Do you need access to a machine running the target OS when making a binary? I've been making Windows x64 binaries using cx_Freeze, but would ideally like Mac, Linux and Win-32 ones too. I got my program to work on Ubuntu running from source - everything works (although the GUI fonts are too big for their buttons - easy fix). I looked up making a .deb file, and the instructions were complex and seemed to require being on Linux to do it. It looks like to create a Mac version, I need to be on a Mac. For a 32-bit Windows version, I think I can do it from a 64-bit OS, but need a 32-bit installation of Python. Is this accurate, or is there a way to make different binaries without jumping between different computers/OSes/versions of Python?

Dominoes fucked around with this message at 22:19 on Jun 12, 2013

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Dominoes posted:

I got my program to work on Ubuntu running from source - everything works (although the GUI fonts are too big for their buttons - easy fix). I looked up making a .deb file, and the instructions were complex and seemed to require being on Linux to do it.

Ubuntu is Linux and, what's more, a Debian (.deb(!)) based distribution, so you're about as close to the target of that format as you can get if you're actively using Ubuntu.

accipter
Sep 12, 2003

Dominoes posted:

Do you need access to a machine running the target OS when making a binary? I've been making Windows x64 binaries using cx_Freeze, but would ideally like Mac, Linux and Win-32 ones too. I got my program to work on Ubuntu running from source - everything works (although the GUI fonts are too big for their buttons - easy fix). I looked up making a .deb file, and the instructions were complex and seemed to require being on Linux to do it. It looks like to create a Mac version, I need to be on a Mac. For a 32-bit Windows version, I think I can do it from a 64-bit OS, but need a 32-bit installation of Python. Is this accurate, or is there a way to make different binaries without jumping between different computers/OSes/versions of Python?

You are talking about cross-compiling, which is a huge pain in the rear end (unless things have changed since I tried to do it). Why have you decided to distribute a compiled version of your program?

Dominoes
Sep 20, 2007

accipter posted:

Why have you decided to distribute a compiled version of your program?
So people other than myself can use it.

Dominoes fucked around with this message at 22:21 on Jun 12, 2013

Dren
Jan 5, 2001

Pillbug
If your intended result is cx_Freeze binaries packaged natively for various operating systems, you will most likely need access to those platforms. Cross-compiling is difficult. Cross-creating packages seems incredibly hard, if not impossible, since you generally need a platform-specific toolchain to do that sort of thing. Besides, you'll need access to those platforms in order to test the packages you create.

PS - Creating a .deb file is supported by setup.py but maybe not for something compiled with cx freeze. I used setup.py to create an rpm for me the other day and was pleasantly surprised by how simple and nice it was.
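For reference, the commands are just these (the .deb one goes through the third-party stdeb package rather than plain distutils, if I remember right, so the cx_Freeze caveat may still apply):
code:
$ python setup.py bdist_rpm
$ python setup.py --command-packages=stdeb.command bdist_deb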

Dominoes
Sep 20, 2007

Dren posted:

If your intended result is cx_Freeze binaries packaged natively for various operating systems, you will most likely need access to those platforms. Cross-compiling is difficult. Cross-creating packages seems incredibly hard, if not impossible, since you generally need a platform-specific toolchain to do that sort of thing. Besides, you'll need access to those platforms in order to test the packages you create.

PS - Creating a .deb file is supported by setup.py but maybe not for something compiled with cx freeze. I used setup.py to create an rpm for me the other day and was pleasantly surprised by how simple and nice it was.
Thank you. I'll try to get 32-bit Python (probably 32-bit cx_Freeze and Qt too) installed on my laptop's Windows drive, to avoid the confusion of having two versions of Python on my main computer. I'd imagine I wouldn't need to change anything other than the packaged DLLs. I followed this official Ubuntu tutorial, but ran into multiple errors. The line at the top states "Note: These instructions are only for creating packages for personal use.", but the overall packaging guide seems more geared towards best practices and isn't Python-specific.

Dominoes fucked around with this message at 22:36 on Jun 12, 2013

SurgicalOntologist
Jun 17, 2004

Anyone know how I can get partial history scrolling in the PyCharm console? You know what I mean... like you type
>> s
press the up arrow and filter your history by commands starting with 's'. It's gotta be an option somewhere, I just don't know what to look for.

I'm running ipython in the PyCharm console if that matters. If it isn't clear, I'm not very up on how all these things (environment, console, PyCharm, modules) interact.

Jose Cuervo
Aug 25, 2004
I have the following code:
Python code:
word = 'bumblebee'
versionA = [letter for letter in word].sort()
print versionA

versionB = [letter for letter in word]
versionB.sort()
print versionB
which results in the following output
code:
None
['b', 'b', 'b', 'e', 'e', 'e', 'l', 'm', 'u']
I don't understand why the first version does not work (or really returns 'None') but the second version does work.

Movac
Oct 31, 2012

Jose Cuervo posted:

I don't understand why the first version does not work (or really returns 'None') but the second version does work.

Python's list.sort() sorts the list in place and returns None. Version A builds the list, sorts it in place, then binds sort()'s return value (None) to versionA and throws the sorted list away.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Use sorted() to get a new sorted list, like sorted(['a', 'c', 'b']).
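Applied to the example above, something like:
Python code:
word = 'bumblebee'
versionA = sorted(word)   # sorted() works on any iterable and returns a new list
print versionA            # ['b', 'b', 'b', 'e', 'e', 'e', 'l', 'm', 'u']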

Dren
Jan 5, 2001

Pillbug
list.sort() is an in place operation that returns None

http://docs.python.org/2/tutorial/datastructures.html

quote:

list.sort()
Sort the items of the list, in place.

Jose Cuervo
Aug 25, 2004

Dren posted:

list.sort() is an in place operation that returns None

http://docs.python.org/2/tutorial/datastructures.html

Excellent. Thanks for the help all three of you.

Dren
Jan 5, 2001

Pillbug
Does anyone know a way to deep copy a generator?

edit: after some googling it appears that copying generators is something that just can't be done. Then I figured out some cute syntax with lambda in order to get what I really needed.

Dren fucked around with this message at 19:01 on Jun 13, 2013

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug
Your post is light on details, but itertools.tee might be useful.
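A minimal sketch of what it does, for anyone who hasn't used it:
Python code:
import itertools

gen = (x * x for x in xrange(5))
a, b = itertools.tee(gen)   # two independent iterators over the same source
print list(a)               # [0, 1, 4, 9, 16]
print list(b)               # [0, 1, 4, 9, 16]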

Innocent Bystander
May 8, 2007
Born in the LOLbarn.

Lysidas posted:

Your post is light on details, but itertools.tee might be useful.

I guess the question asker already found it, but I didn't know about this. Anybody interested in copying iterators can see it in action here:

http://codepad.org/iz747daK

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Innocent Bystander posted:

I guess the question asker already found it, but I didn't know about this. Anybody interested in copying iterators can see it in action here:

http://codepad.org/iz747daK

It's not really copying the iterator in the fullest sense, though, because you can't use the original iterator any more?

Itertools docs posted:

Once tee() has made a split, the original iterable should not be used anywhere else; otherwise, the iterable could get advanced without the tee objects being informed.

Opinion Haver
Apr 9, 2007

Hammerite posted:

It's not really copying the iterator in the fullest sense, though, because you can't use the original iterator any more?

Yeah, but it gives you two independent iterators, so just do iterCopy, iter = itertools.tee(iter) or something.

OnceIWasAnOstrich
Jul 22, 2006

Replace the original iterator with one of the copies and make it really hard to iterate it since you don't have a reference to it.

edit: vvvv Yeah, I've never used itertools.tee() anyway. It's usually easier to just turn it into a list. I guess it would be useful when you have a very large or infinite iterator you want to split and you're confident that one of your iterators won't get too far ahead of the other one; otherwise you end up with plenty of memory usage and even more extra overhead.

OnceIWasAnOstrich fucked around with this message at 03:24 on Jun 14, 2013

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
Sure, but maybe the original iterator came from somewhere else and you don't know how many references to it exist elsewhere in your program, and you can't be sure it won't be used. I can't suggest a likely scenario where this might happen but it is possible in principle.

Opinion Haver
Apr 9, 2007

Hammerite posted:

Sure, but maybe the original iterator came from somewhere else and you don't know how many references to it exist elsewhere in your program, and you can't be sure it won't be used. I can't suggest a likely scenario where this might happen but it is possible in principle.

Yeah but the only way around that is to hack on the original object's .next() method and that'd be horrifically evil.

Innocent Bystander
May 8, 2007
Born in the LOLbarn.

Hammerite posted:

Sure, but maybe the original iterator came from somewhere else and you don't know how many references to it exist elsewhere in your program, and you can't be sure it won't be used. I can't suggest a likely scenario where this might happen but it is possible in principle.

Yeah, I agree with this wholeheartedly; it seems janky at best.

tef
May 30, 2004

-> some l-system crap ->

Hammerite posted:

It's not really copying the iterator in the fullest sense, though, because you can't use the original iterator any more?

In the sense that reading mutates an iterator, you can't really copy an iterator without invalidating it.

Hammerite posted:

Sure, but maybe the original iterator came from somewhere else and you don't know how many references to it exist elsewhere in your program, and you can't be sure it won't be used. I can't suggest a likely scenario where this might happen but it is possible in principle.

In this instance, you probably wouldn't be able to use the iterator anyway.

tef
May 30, 2004

-> some l-system crap ->

Innocent Bystander posted:

Yeah I agree with this whole heartedly, it seems janky at best.

Iterators aren't really for cloning. They're like a stream: you can read from them, but there isn't really a seek option to go back. Once you've read from one, you're responsible for keeping the output around.

If you need to keep a copy of a generator's output, the easy way is to just turn it into a list, x = list(gen). You can then re-iterate over it as many times as you please, but the original is gone. itertools.tee is essentially a lazy version of the above, without reading the whole list in advance. This means that, unlike building a list upfront, it is still possible to call .next() on the original, underlying iterator (which is exactly what the docs warn about).

As mentioned, if you have a situation where neither of these two methods works, you can't really touch the original iterator anyway, as calling next() can never be undone. You are likely doing something very, very bad. The janky thing is sharing an iterator, not how itertools.tee behaves.
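To make the difference concrete (a small sketch):
Python code:
import itertools

gen = (x for x in xrange(5))
as_list = list(gen)          # reads the whole generator up front
print as_list                # [0, 1, 2, 3, 4] -- re-iterable as often as you like

gen = (x for x in xrange(5))
a, b = itertools.tee(gen)    # lazy: nothing has been read yet
print gen.next()             # 0 -- the original can still be advanced...
print list(a), list(b)       # [1, 2, 3, 4] [1, 2, 3, 4] -- ...but the tee copies never see that 0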

Dren
Jan 5, 2001

Pillbug
tee is interesting and I hadn't heard of it, but I wasn't trying to copy an iterator; I wanted to deep copy an arbitrary generator. I'll post some specifics tomorrow along with the solution I settled on, in case anyone is interested. I exercised it a bit and it's a fairly nice workaround that avoids, in this use case, the need to deep copy a generator.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Dren posted:

I exercised it a bit and it's a fairly nice workaround that avoids, in this use case, the need to deep copy a generator.

Classic XY problem.

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

tef posted:

In the sense that reading mutates an iterator, you can't really copy an iterator without invalidating it.


In this instance, you probably wouldn't be able to use the iterator anyway.

All that my posts were meant to point out is that using itertools.tee() is not truly copying an iterator in the sense that other types of objects can be copied (obtaining deep copies that are independent of the originals from the point of copying onwards). They were not meant to imply that there exists, or that there does not exist, such a way to copy generators that the other poster had been missing. Only that the suggested method was not what it was presented as. Nothing more, nothing less.

Dren
Jan 5, 2001

Pillbug
I have a personal library I use for python coroutines. Here is some sample code using it to make a processing pipeline that prints even numbers that are sent to it:
Python code:
from coroutine.utility import co_filter
from coroutine.sinks import printer

def main():
    f = lambda x : x % 2 == 0
    pipe = co_filter(f, printer())

    for i in xrange(100):
        pipe.send(i)
    pipe.close()

if __name__ == '__main__':
    main()
help for co_filter and printer:
code:
co_filter(*args, **kwargs)
    Coroutine.  Applies function 'func' to data sent to it.  If func returns
    true the data is forwarded.
    
    .send() accepts data to be filtered
    
    arguments:
    func   - filter function used to screen data
    target - coroutine to forward data to

printer(*args, **kwargs)
    Coroutine.  Prints what is sent to it.
    
    .send() accepts data to print
    
    arguments:
    None
I had implemented a coroutine called threaded that could be inserted into a pipeline such that the entire pipeline after the thread would execute within the thread. Code using this coroutine looked like this:
Python code:
from coroutine import threaded, roundrobin
from coroutine.utility import co_filter
from coroutine.sinks import locking_printer
import threading

def main():
    f = lambda x : x % 2 == 0
    lock = threading.Lock()
    num_threads = 4
    threaded_pipes = []
    for i in xrange(num_threads):
        pipe = threaded(co_filter(f, locking_printer(lock)))
        threaded_pipes.append(pipe)
    pipe = roundrobin(*threaded_pipes)

    for i in xrange(100):
        pipe.send(i)
    pipe.close()

if __name__ == '__main__':
    main()
In this example, roundrobin is a coroutine that distributes data evenly among the threads. I didn't like this approach for several reasons. Using round robin to feed the threads is inefficient. Why not have all of the threads share a Queue and pull messages off when they can? Dropping down to one Queue has another benefit: there are n-1 fewer queues sucking up resources. Something else that bothered me was having to assemble the pipeline below the thread in a for loop. It's gross syntax.

So to fix these issues I modified the threaded coroutine to use one Queue and accept the number of threads to be created as an argument. Here is some sample code:
Python code:
from coroutine import threaded
from coroutine.utility import co_filter
from coroutine.sinks import locking_printer
import threading

def main():
    f = lambda x : x % 2 == 0
    lock = threading.Lock()
    num_threads = 4
    pipe = threaded(num_threads, co_filter(f, locking_printer(lock)))

    for i in xrange(100):
        pipe.send(i)
    pipe.close()

if __name__ == '__main__':
    main()
Much cleaner! Faster too thanks to the elimination of the round robin approach. But this code doesn't work! An attempt to run this code will result in an exception:
code:
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/dren/projects/coroutine/coroutine.py", line 89, in run_target
    target.send(item)
ValueError: generator already executing
The issue is that multiple threads are trying to send to the same generator object. Generators aren't like functions; they have state, so you can't freely call them from multiple threads. So at this point I started researching (and asked on here about) a way to deep copy a generator. My thought was that if I could deep copy the pipeline that is sent as the argument to threaded, I could make a copy of it for each thread and be good to go. (I was sort of predisposed to this idea since I also implemented a multiprocessing coroutine where this syntax just works, thanks to fork making a copy of everything.)
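A stripped-down repro of the same error, independent of my library (the sleep just keeps the generator busy long enough for the second thread to collide with it):
Python code:
import threading
import time

def consumer():
    while True:
        item = (yield)
        time.sleep(0.01)   # simulate work while the generator frame is still executing

gen = consumer()
gen.next()                 # prime the coroutine

def worker():
    for i in xrange(10):
        gen.send(i)        # both threads re-enter the same generator object

threads = [threading.Thread(target=worker) for _ in xrange(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# one of the workers dies with: ValueError: generator already executing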

Turns out deep copying a generator is not supported by CPython. I found a version-specific Python 2.5.2 hack on ActiveState where someone deep copied a generator by manipulating bytecode. After I saw that, I realized I was on a path that I did not want to go down.

Turns out there is an easy answer. lambda.
Python code:
from coroutine import threaded
from coroutine.utility import co_filter
from coroutine.sinks import locking_printer
import threading

def main():
    f = lambda x : x % 2 == 0
    lock = threading.Lock()
    num_threads = 4
    pipe = threaded(num_threads, lambda : co_filter(f, locking_printer(lock)))

    for i in xrange(100):
        pipe.send(i)
    pipe.close()

if __name__ == '__main__':
    main()
The pipeline supplied to threaded is wrapped in a lambda, preventing it from being evaluated immediately. Inside threaded, after the threads are created, the lambda is evaluated to instantiate a per-thread pipeline.
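Roughly, the idea inside threaded ends up being something like this (a simplified sketch of the approach, not the actual library code):
Python code:
import threading
from Queue import Queue

_SENTINEL = object()

def threaded(num_threads, make_pipeline):
    q = Queue()

    def worker():
        target = make_pipeline()       # each thread builds its own pipeline instance
        while True:
            item = q.get()
            if item is _SENTINEL:
                target.close()
                break
            target.send(item)

    workers = [threading.Thread(target=worker) for _ in xrange(num_threads)]
    for t in workers:
        t.start()

    class _Pipe(object):
        def send(self, item):
            q.put(item)
        def close(self):
            for _ in workers:
                q.put(_SENTINEL)       # one sentinel per worker so they all shut down
            for t in workers:
                t.join()

    return _Pipe()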

So that's what was going on with my question yesterday; I hope someone finds this sort of interesting. Something I'm struggling with is figuring out whether what I've thrown together here is anything more than some of Python's functional programming stuff implemented in reverse. That is, is there anything that can be done with this approach that can't be done with forward iterators? I'm not sure there is, though the syntax is a bit nicer for certain problems. For instance, it seems easier to avoid making lots of copies of the same list than it is with existing functional tools like list comprehensions. Unfortunately, list comprehensions are so goddamn fast compared to interpreted Python code that while there may be a memory-footprint benefit to this approach, the operations themselves are much slower than if they had been done with a list comprehension.

I'd also like to play with gevent and greenlets. I don't know much about them but I'd like to see if they are applicable in the same way that I am using threads and multiprocessing.

Dren fucked around with this message at 18:24 on Jun 14, 2013

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Dren posted:

I'd also like to play with gevent and greenlets. I don't know much about them but I'd like to see if they are applicable in the same way that I am using threads and multiprocessing.

I haven't had a need for them in a couple years, but greenlets are pretty cool stuff. I think I was actually using eventlet at the time.

Dren
Jan 5, 2001

Pillbug

Thermopyle posted:

I haven't had a need for them in a couple years, but greenlets are pretty cool stuff. I think I was actually using eventlet at the time.

Yeah, what I got from reading about them a bit is that they are useful when there is blocking IO that can be made asynchronous and an event notification framework to allow switching back to operations when their IO calls are ready. That's not really something I have in any of my use cases so I'm not sure I'll be able to make any use of them.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Dren posted:

The pipeline supplied to threaded is wrapped in a lambda, preventing it from being evaluated immediately. Inside threaded, after the threads are created, the lambda is evaluated to instantiate a per-thread pipeline.

I don't think this is a very good idea. Encouraging threads to use the same object isn't really great. The threads will print in different orders depending on who schedules what (race conditions), and since it's CPU-bound they'll probably contend over the mutex a lot.

Dren
Jan 5, 2001

Pillbug

Suspicious Dish posted:

I don't think this is a very good idea. Encouraging threads to use the same object isn't really great. The threads will print in different orders depending on who schedules what (race conditions), and since it's CPU-bound they'll probably contend over the mutex a lot.

Not sure what you mean with "Encouraging threads to use the same object isn't really great." Each thread evaluates the lambda in order to get its own instance of the pipeline. The lambda is a function and functions are fine, afaik, to share between threads. As for the different orders that things will get printed in, printing the numbers in order is not a requirement. In the example shown contention over the mutex due to this process being CPU bound actually slows things down quite a bit. This was a purposefully simple example -- the thing I'm really using the threaded coroutine for is IO bound and gets a rather significant speed up from being threaded or multiprocessed.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I was thinking the locked printer was shared between threads, but then I realized that there was a new locked printer for every thread. So that means that the lock is over stdout. You could also share the locked printer object between threads, without a significant difference.

In a more complex pipeline you'll probably share more objects between threads.

Just because I'm curious, I'd like to see your original problem, if you're OK with sharing it.

Dren
Jan 5, 2001

Pillbug

Suspicious Dish posted:

I was thinking the locked printer was shared between threads, but then I realized that there was a new locked printer for every thread. So that means that the lock is over stdout. You could also share the locked printer object between threads, without a significant difference.

In a more complex pipeline you'll probably share more objects between threads.

Just because I'm curious, I'd like to see your original problem, if you're OK with sharing it.

locked_printer is a coroutine as well. If I shared it it'd have the problem with multiple threads trying to .send() to it at once. The lambda thing does a really nice job of removing all of the mess. The GIL is acquired in order to do a print so you don't actually need to lock around the print statement since python does that for you under the hood.

Here's the coroutine library (forgive the incompleteness) https://github.com/robwiss/coroutine

Here's the multithreaded version of the program: http://codepad.org/unquCIu3
Here's the multiprocessing version: http://codepad.org/KpncRiaY

The numbers of threads and processes used were tweaked to give the best results on my machine.

The program uses the audioread library to generate md5sums of the audio data in MP3s to facilitate identifying exact audio duplicates in an MP3 collection. I believe I've posted about it on here before. The guy I wrote it for believes that it fills a need for people who have giant MP3 collections so I might polish it up and release it in some fashion. The sticking point is figuring out a useful way to provide the results. I think there are some MP3 organization utilities that will read arbitrary ID3 tags so maybe I could include the hashes as a custom ID3 tag. Right now I just pickle a dictionary containing the results.

Jose Cuervo
Aug 25, 2004
I have been learning about classes from Chapters 15 and 16 of this book. I have copied the code for the Old Maid card game but am unable to get the code to work properly.

Specifically, when removing matches from the hand the code will detect a match even if there is not one. From my debugging I have found that the code snippet
Python code:
if match in self.cards:
in the remove_matches method returns True even when it should return False. However when I print out self.cards the "match" is not in there, so I am at a loss as to why it is being detected.

EDIT: Isolated the problem to the changes I made to __cmp__ in the Card class. The changes were supposed to make Aces rank higher than Kings; as soon as I remove that code, things work. Can anyone explain why the code I wrote makes things not work correctly?

To clarify, this is what I have:
Python code:
	def __cmp__(self, other):
		# Check the suits
		if self.suit > other.suit: return 1
		if self.suit < other.suit: return -1
		
		# Make aces rank higher than kings
		if self.rank == 1: self_rank= 1
		else: self_rank= 14
		if other.rank == 1: other_rank= 1
		else: other_rank= 14
		
		# Suits are the same, check the ranks
		if self_rank > other_rank: return 1
		if self_rank < other_rank: return -1

		# Suits and ranks are the same
		return 0
and this is what it was originally (that works):
Python code:
	def __cmp__(self, other):
		# Check the suits
		if self.suit > other.suit: return 1
		if self.suit < other.suit: return -1
		
		# Suits are the same, check the ranks
		if self.rank > other.rank: return 1
		if self.rank < other.rank: return -1

		# Suits and ranks are the same
		return 0

Jose Cuervo fucked around with this message at 00:18 on Jun 15, 2013

Met48
Mar 15, 2009

Jose Cuervo posted:

Python code:
		# Make aces rank higher than kings
		if self.rank == 1: self_rank= 1
		else: self_rank= 14
		if other.rank == 1: other_rank= 1
		else: other_rank= 14

These lines are resulting in any non-ace card being treated as having the highest rank. This then results in any comparison between cards with the same suit but different non-ace ranks being considered equal. You probably want something like this:

Python code:
self_rank = self.rank
if self_rank == 1:
    self_rank = 14
other_rank = other.rank
if other_rank == 1:
    other_rank = 14
Then any aces will be treated as having the highest rank, while other ranks will be compared normally.
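Folded back into the full method, that would look something like this (untested sketch):
Python code:
def __cmp__(self, other):
    # Check the suits
    if self.suit > other.suit: return 1
    if self.suit < other.suit: return -1

    # Treat aces (rank 1) as ranking above kings
    self_rank = 14 if self.rank == 1 else self.rank
    other_rank = 14 if other.rank == 1 else other.rank

    # Suits are the same, check the ranks
    if self_rank > other_rank: return 1
    if self_rank < other_rank: return -1

    # Suits and ranks are the same
    return 0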

Winkle-Daddy
Mar 10, 2007
Hey fellow pythoners. I have come across an issue, and I'm not sure if the problem is in my Python or in the game engine I'm using (pyglet). I decided to write my own text engine instead of using the built-in methods available in pyglet.

I've got this in main.py:
Python code:
if __name__ == '__main__':
	textItems = []
	game_intro = text_object(text='Game Title',xpos=50,ypos=90,batch=GLOBAL_BATCH,fontsheet=GLOBAL_FONTSHEET,group=GTG)
	game_intro.toggleVisible(1)
	game_intro.timeBomb(10)
	textItems.append(game_intro)
	b = text_object(text='Some More Text',xpos=10,ypos=10,batch=GLOBAL_BATCH,fontsheet=GLOBAL_FONTSHEET,group=GTG)
	b.toggleVisible(1)
	b.timeBomb(30)
	textItems.append(b)
	pyglet.app.run()
Update function:
Python code:
def update(dt):
	for x in textItems:
		x.update(dt)
So, I have a "text_object" class which basically creates a sprite for each letter in the "text" argument and returns it back. So, I created a "timeBomb" method so that I can delete the object after x number of frame updates, where x is the argument provided.

Here is the relevant portion of the text_object class:
Python code:
	def timeBomb(self,frames):
		self.timeBombTrue = 1
		self.explodeDelay = frames
		self.explodeCount = 0

	def update(self,dt):
		if(self.timeBombTrue == 1):
			if(self.explodeCount < self.explodeDelay):
				self.explodeCount = self.explodeCount+1
			elif(self.explodeCount >= self.explodeDelay):
				self.destroyFree()
	def destroyFree(self):
		for x in self.text_objects:
			x.delete()
So, when I try this, I get an error that a NoneType object has no attribute delete. The weird thing is, if I go back to main.py and use only a single text object, it works perfectly. When I add the second text object, the second one throws that error when it needs to be destroyed. I decided to use a try statement there instead and do:

Python code:
	def destroyFree(self):
		for x in self.text_objects:
			try:
				x.delete()
			except:
				print(x)
Which results in:

code:
<pyglet.sprite.Sprite object at 0x269a610>
<pyglet.sprite.Sprite object at 0x269a750>
<pyglet.sprite.Sprite object at 0x269a890>
<pyglet.sprite.Sprite object at 0x269a9d0>
<pyglet.sprite.Sprite object at 0x269ab10>
<pyglet.sprite.Sprite object at 0x269ac50>
<pyglet.sprite.Sprite object at 0x269ad90>
<pyglet.sprite.Sprite object at 0x269afd0>
<pyglet.sprite.Sprite object at 0x269c0d0>
<pyglet.sprite.Sprite object at 0x269c210>
<pyglet.sprite.Sprite object at 0x269c350>
Any clue why my object seems to get hosed up when I add a second one? Or what I can do to try to track the problem down further?
e: truncated some non-relevant code to prevent table breakage.

Winkle-Daddy fucked around with this message at 01:53 on Jun 15, 2013
