m0nk3yz
Mar 13, 2002

Behold the power of cheese!

Centipeed posted:

I'm thinking, since I'll start coding for my university project after Christmas, that just jumping straight into Python 3.0 is a better idea than 2.6. I've already learned some Python, but it's not ingrained in my head so much that 3.0 will be confusing, and I'd rather learn the new standard than the "old" one, especially since they're releasing it next month.

Is this a good idea? Also, is the Python.org tutorial going to be the only learning resource for 3.0 until other people start writing them? I'm not particularly fond of how the Python.org tutorial starts out, since I'm already familiar with the basics of programming, having done Java and some C++. It seems like a tutorial designed for newcomers to programming, since it starts with using Python as a calculator and whatnot.

No. Do not jump into Python 3000 - it's not going to be widely adopted for some time, so you're much better off focusing on Python 2.6 and using that to build up your skills. Python 3000 isn't such a big change from 2.6 that it will make all of your 2.6 knowledge and skills worthless.

Mashi
Aug 15, 2005

Just wanted you to know your dinner's cold and the children all agree you're a shitheel!

Centipeed posted:

Is this a good idea?

You'd be better off reading about the changes in Python 3000, because you can adopt some of them in earlier versions of Python (at least in your mind). To name but a few: don't use reduce(), always use xrange(), pretty much everything that used to return a list will return an iterator, etc.
http://docs.python.org/dev/3.0/whatsnew/3.0.html
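For example, here's a quick sketch of the kind of 2.6-friendly habits that carry over to 3.0 (the list is just a made-up stand-in):
code:
items = [1, 2, 3, 4]

# sum() with a generator expression instead of reduce()
total = sum(x * x for x in items)

# xrange() instead of range() when you only need to loop
for i in xrange(len(items)):
    print items[i]

# treat map()/zip()/dict.items() results as things you iterate over once,
# since in 3.0 they come back as iterators/views rather than lists
for index, value in enumerate(items):
    print index, value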

nonathlon
Jul 9, 2004
And yet, somehow, now it's my fault ...

m0nk3yz posted:

No. Do not jump into Python 3000 - it's not going to be widely adopted for some time, so you're much better off focusing on Python 2.6 and using that to build up your skills. Python 3000 isn't such a big change from 2.6 that it will make all of your 2.6 knowledge and skills worthless.

This. Python 3000 is cool but (a) it's unclear how long it will take to supplant current Python (several years seems to be the general consensus), (b) in many ways it's just a tidying up of the language, so you won't be wasting your time with current Python, (c) virtually all the learning resources around are devoted to current Python, and (d) it's unclear to me whether anyone is actually using it to build things "out in the wild" as yet.

Kire
Aug 25, 2006
Wow those last three posts are an eye-opener. I was holding off on buying some books, and really diving into python on my own time, until after 3.0 came out. But if the consensus is that it won't be widely used for another year, then I will go ahead and get books written for 2.5 and download the 2.6 build at home.

I love how simple the IDLE IDE is. I downloaded NetBeans because one of the screenshots made it look like it had lots of pop-up help while writing, but it sure is complex. Way more than I need.

Sock on a Fish
Jul 17, 2004

What if that thing I said?
Part of a backup script that I'm writing uses subprocess to call mysqldump. I think it's failing out because the output is too big, but I'm not too sure what to do about it. Here's the relevant part of my script:
code:
sqldump = subprocess.Popen("/usr/bin/mysqldump u=" + mysql_username + " -p" + mysql_password + " --all-databases", stdout=subprocess.PIPE)
gzfile = open(daily_mysql_dir + "/" + date_string + ".sql.tar.gz", 'wb')
gzip = subprocess.Popen("tar czf", stdin=sqldump.stdout, stdout=gzfile)
sqldump.wait()
gzip.wait()

gzfile.close()
And here's the error that gets raised:
code:
Traceback (most recent call last):
  File "./mysql-dir-backup.py", line 91, in ?
    sqldump = subprocess.Popen("/usr/bin/mysqldump --all-databases u=" + mysql_username +  " -p" + mysql_password, stdout=subprocess.PIPE)
  File "/usr/lib/python2.4/subprocess.py", line 542, in __init__
    errread, errwrite)
  File "/usr/lib/python2.4/subprocess.py", line 975, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory
I looked at the parts of subprocess.py referenced in the error, and it looks like this error gets raised if the stderr pipe is filled with more than 1MB of data. That doesn't make sense to me, since if I execute this command in my shell it's going to write a whole bunch of data to stdout but nothing to stderr.

What's going on here?

edit: Just realized that one of my mysqldump options was set wrong, but I corrected that and I still get the same error message.

Sock on a Fish fucked around with this message at 18:31 on Nov 25, 2008

TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"

Sock on a Fish posted:

mysqldump

Try:

code:
sqldump = subprocess.Popen(["/usr/bin/mysqldump", "-u=" + mysql_username, "-p" + mysql_password, "--all-databases"], stdout=subprocess.PIPE)
gzfile = open(daily_mysql_dir + "/" + date_string + ".sql.tar.gz", 'wb')
gzip = subprocess.Popen(["tar", "czf"], stdin=sqldump.stdout, stdout=gzfile)

Sock on a Fish
Jul 17, 2004

What if that thing I said?

Janin posted:

Try:

code:
sqldump = subprocess.Popen(["/usr/bin/mysqldump", "-u=" + mysql_username, "-p" + mysql_password, "--all-databases"], stdout=subprocess.PIPE)
gzfile = open(daily_mysql_dir + "/" + date_string + ".sql.tar.gz", 'wb')
gzip = subprocess.Popen(["tar", "czf"], stdin=sqldump.stdout, stdout=gzfile)

Sweet, the sql dump is kicking off but it looks like it's not making it into the pipe. tar is sassing me about it:
code:
tar: Cowardly refusing to create an empty archive

Lurchington
Jan 2, 2003

Forums Dragoon
that's a common tar error, and although I'm not expert enough to check your command syntax, it usually means you left out the second positional argument of a standard tar command:

lurchington@example:~$ tar jvcf test.tar.gz
tar: Cowardly refusing to create an empty archive
Try `tar --help' or `tar --usage' for more information.

lurchington@example:~$ tar jvcf test.tar.gz test
test
lurchington@example:~$

Sock on a Fish
Jul 17, 2004

What if that thing I said?
Welp, it looks like tar doesn't easily take input from stdin. Swapping in gzip works just fine.

The man page says I can use the f option to specify input from any file object, including stdin. I'd really like to use tar so I can just have a single function for creating archives of both directories and sqldumps, can anyone tell me what's up with tar?

JoeNotCharles
Mar 3, 2005

Yet beyond each tree there are only more trees.

Sock on a Fish posted:

Welp, it looks like tar doesn't easily take input from stdin. Swapping in gzip works just fine.

The man page says I can use the f option to specify input from any file object, including stdin. I'd really like to use tar so I can just have a single function for creating archives of both directories and sqldumps, can anyone tell me what's up with tar?

You're invoking it with "-" for the filename?

Sock on a Fish
Jul 17, 2004

What if that thing I said?

JoeNotCharles posted:

You're invoking it with "-" for the filename?

Yep.

code:
gzip = subprocess.Popen(["tar", "czf","-"], stdin=sqldump.stdout, stdout=gzfile)
It acts cowardly about it.

I tried this arrangement instead:
code:
srv_tar = tarfile.open(tar_path, mode='w:gz')

srv_tar.add(sqldump.stdout)
sqldump.wait()
srv_tar.close()
It returned this error:
code:
Traceback (most recent call last):
  File "./mysql-dir-backup.py", line 106, in ?
    srv_tar.add(sqldump.stdout)
  File "/usr/lib/python2.4/tarfile.py", line 1211, in add
    if self.name is not None \
  File "/usr/lib/python2.4/posixpath.py", line 403, in abspath
    if not isabs(path):
  File "/usr/lib/python2.4/posixpath.py", line 49, in isabs
    return s.startswith('/')
AttributeError: 'file' object has no attribute 'startswith'
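(As an aside: tarfile.add() expects a filesystem path, not a file object. To get raw data into a tar you'd have to buffer it and use addfile() with a TarInfo whose size is already known. A rough sketch, reusing sqldump, tar_path and date_string from the posts above:)
code:
import StringIO
import tarfile

# Buffer the whole dump so the TarInfo can carry its size
data = sqldump.stdout.read()
sqldump.wait()

info = tarfile.TarInfo(name=date_string + '.sql')
info.size = len(data)

srv_tar = tarfile.open(tar_path, mode='w:gz')
srv_tar.addfile(info, StringIO.StringIO(data))
srv_tar.close()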

Zombywuf
Mar 29, 2008

Sock on a Fish posted:

:words:

Why, for the love of all that is holy, are you not making a bash script?

code:
mysqldump | gzip -c > mybackup.sql.gz
Why is tar even involved in this process?

Sock on a Fish
Jul 17, 2004

What if that thing I said?

Zombywuf posted:

Why, for the love of all that is holy, are you not making a bash script?

code:
mysqldump | gzip -c > mybackup.sql.gz
Why is tar even involved in this process?

I haven't done any bash scripting, but I have worked with Python, and the script does more than just gzip a sql dump. That's pretty much it. I know that shell scripts are a lot more common for these kinds of tasks.

TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"

Sock on a Fish posted:

Yep.

code:
gzip = subprocess.Popen(["tar", "czf","-"], stdin=sqldump.stdout, stdout=gzfile)
It acts cowardly about it.

I just re-read your code, and :psyduck:

Did you try what you're trying to do on the command line, before writing your code? You're trying to make tar create an archive from a bunch of character data. That doesn't make any sense at all, and if you'd tried something like it from the command line first you'd discover that:

code:
$ ls | tar czf temp.tar.gz 
tar: Cowardly refusing to create an empty archive
Try `tar --help' or `tar --usage' for more information.
If you just want to compress one file, don't put it in an archive.

Furthermore, you're calling external applications to handle what Python has built-in libraries for. If you want to build a tar archive, use tarfile. If you want to compress data, use the gzip or bzip2 modules.
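For what it's worth, here's a rough sketch of the gzip-module route, with the variable names from the earlier posts (mysql_username, mysql_password, daily_mysql_dir, date_string) assumed to be defined:
code:
import gzip
import subprocess

sqldump = subprocess.Popen(
    ["/usr/bin/mysqldump", "-u", mysql_username, "-p" + mysql_password,
     "--all-databases"],
    stdout=subprocess.PIPE)

# stream the dump straight into a .gz file, no tar involved
gzfile = gzip.open(daily_mysql_dir + "/" + date_string + ".sql.gz", "wb")
for line in sqldump.stdout:
    gzfile.write(line)
gzfile.close()
sqldump.wait()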

tripwire
Nov 19, 2004

        ghost flow

Sock on a Fish posted:

code:
sqldump = subprocess.Popen("/usr/bin/mysqldump u=" + mysql_username + " -p" + mysql_password + " --all-databases", stdout=subprocess.PIPE)
gzfile = open(daily_mysql_dir + "/" + date_string + ".sql.tar.gz", 'wb')
gzip = subprocess.Popen("tar czf", stdin=sqldump.stdout, stdout=gzfile)
sqldump.wait()
gzip.wait()

gzfile.close()
I have no idea if this is the right or pythonic way but I might do this:
code:
import cPickle as pickle
import gzip

sqldump = subprocess.Popen("/usr/bin/mysqldump u=" + mysql_username + " -p" + mysql_password + " --all-databases", stdout=subprocess.PIPE).stdout.read()
file = gzip.open('whateverfilenameyouwant','w', compresslevel = 9)
pickle.dump(sqldump, file)
file.close()
To open it again gzip.open() it and just unpickle it.
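Sketch of that read-back step, with the same made-up filename:
code:
import cPickle as pickle
import gzip

f = gzip.open('whateverfilenameyouwant', 'r')
sqldump = pickle.load(f)
f.close()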

tripwire fucked around with this message at 05:26 on Nov 26, 2008

Sock on a Fish
Jul 17, 2004

What if that thing I said?

Janin posted:

I just re-read your code, and :psyduck:

Did you try what you're trying to do on the command line, before writing your code? You're trying to make tar create an archive from a bunch of character data. That doesn't make any sense at all, and if you'd tried something like it from the command line first you'd discover that:

code:
$ ls | tar czf temp.tar.gz 
tar: Cowardly refusing to create an empty archive
Try `tar --help' or `tar --usage' for more information.
If you just want to compress one file, don't put it in an archive.

Furthermore, you're calling external applications to handle what Python has built-in libraries for. If you want to build a tar archive, use tarfile. If you want to compress data, use the gzip or bzip2 modules.

Thanks, I didn't realize that tar didn't work that way. The documentation for tar says it'll take any file object as input, but it looks like that's not the whole story. I'll give the python libraries a shot.

tripwire
Nov 19, 2004

        ghost flow

Sock on a Fish posted:

Thanks, I didn't realize that tar didn't work that way. The documentation for tar says it'll take any file object as input, but it looks like that's not the whole story. I'll give the python libraries a shot.

Do it the way I showed! 4 lines!

functional
Feb 12, 2008

I have an XML document that looks like this

code:
<file>
 <entry>dataIwant</entry>
</file>
with some other stuff thrown in.

I am using minidom. I already have the XML file in a string. How do I extract the string "dataIwant"? I've tried minidom.parseString(xmlstring).getElementsByTagName('entry')[0] but this just gives me:

<DOM Element: entry at 0x5f1af8>

You would think this would be easier to find.

Edit:
minidom.parseString(xmlstring).getElementsByTagName('entry')[0].childNodes[0].toxml()

Such a simple operation should not be this complicated...

functional fucked around with this message at 03:33 on Nov 29, 2008

TOO SCSI FOR MY CAT
Oct 12, 2008

this is what happens when you take UI design away from engineers and give it to a bunch of hipster art student "designers"
The DOM is a Java-style API ported to Python, which explains why it's so obtuse. You need to get the entry's child node, which will be a text node, and then retrieve its data:

code:
entry = minidom.parseString(xmlstring).getElementsByTagName('entry')[0]
print entry.firstChild.data
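(If you're on 2.5 or later, xml.etree can do the same thing with a bit less ceremony; a minimal sketch assuming the same xmlstring:)
code:
from xml.etree import ElementTree

tree = ElementTree.fromstring(xmlstring)
print tree.findtext('entry')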

tbradshaw
Jan 15, 2008

First one must nail at least two overdrive phrases and activate the tilt sensor to ROCK OUT!

functional posted:

I am using minidom. I already have the XML file in a string. How do I extract the string "dataIwant"?

May I suggest checking out Genshi for your XML parsing? Its paradigm of operating on XML as streams can be really great for extracting things. (Just give it an XPath statement for what you want, and shazam!)

It has its own set of pros and cons, but I've done an HTML "scraper" with it and found it to be really delightful to work with.

tbradshaw fucked around with this message at 09:02 on Nov 27, 2008

jonypawks
Dec 1, 2003

"Why didn't you tell me you were the real Snake?" -- Ken
I will second the recommendation for Genshi; it is a really great tool. Here's how to do what you want in Genshi:

code:
>>> import genshi
>>> stream = genshi.XML("<file><entry>dataIwant</entry></file>")
>>> print stream.select('entry/text()')
dataIwant
But this example doesn't really show off Genshi's capabilities; this example in the documentation will probably do a better job of explaining how to grab a specific piece of an XML file, like you want to do.

jonypawks fucked around with this message at 08:32 on Nov 27, 2008

tef
May 30, 2004

-> some l-system crap ->
lxml also works pretty well, but I did have to wrap it to make it more pleasant to deal with (namespaces).

it supports all of xpath, and you can even get xpath with regexes working.

(elementtree api is not my friend)


In fact, here is an lxml wrapper I've been using*** - it also has a beautifulsoup compatibility method:

http://secretvolcanobase.org/~tef/lxml/



*** A very cut down version - I actually use a slightly different api, and a lot more html processing shortcuts (like extracting form values)
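If you just want a taste of plain lxml (not the wrapper above), here's a bare-bones sketch; the filename is only a placeholder:
code:
from lxml import html

doc = html.fromstring(open('page.html').read())

# plain xpath
for href in doc.xpath('//a/@href'):
    print href

# xpath with regexes, via the EXSLT extensions lxml ships with
external = doc.xpath("//a[re:test(@href, '^https?://')]/@href",
                     namespaces={'re': 'http://exslt.org/regular-expressions'})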

tef fucked around with this message at 12:10 on Nov 27, 2008

Cyne
May 30, 2007
Beauty is a rare thing.

Edit:

Nevermind, should've Googled it to begin with, found a solution.

Cyne fucked around with this message at 04:45 on Nov 28, 2008

nonathlon
Jul 9, 2004
And yet, somehow, now it's my fault ...

tef posted:

(elementtree api is not my friend)

Which parts don't you like? I ask because I've *coff* wrapped elementtree myself to compensate for some deficiencies.

tef
May 30, 2004

-> some l-system crap ->

outlier posted:

Which parts don't you like? I ask because I've *coff* wrapped elementtree myself to compensate for some deficiencies.

Hard to inherit from or extend without composition, and having to handle the namespaces by hand.

For example: at work, we get it to throw exceptions when an xpath doesn't match - dying early has helped us find a number of issues in the screen scrapers.

Juomes
Apr 8, 2008
Edit: Dumb mistake

Juomes fucked around with this message at 05:43 on Nov 28, 2008

MononcQc
May 29, 2007

I've started running into unicode errors while sending data around http calls and my database (charset in utf-8 for the server, db and tables).

code:
>>> u'éédaë'
u'\xe9\xe9da\xeb'
>>> unicode(u'éédaë')
u'\xe9\xe9da\xeb'
>>> unicode('eedae')
u'eedae'
>>> unicode('éédaë')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
So there I'd expect python to effectively convert the str to unicode for me. It seems it doesn't when special characters are there.

Then there's this:
code:
>>> import urllib
>>> urllib.urlencode({'entry':'éédaë'})
'entry=%C3%A9%C3%A9da%C3%AB'
>>> urllib.urlencode({'entry':u'éédaë'})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/urllib.py", line 1250, in urlencode
    v = quote_plus(str(v))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)
In this case, specifying unicode does nothing?

But then:
code:
>>> import simplejson
>>> simplejson.dumps({'entry':'éédaë'})
'{"entry": "\\u00e9\\u00e9da\\u00eb"}'
>>> simplejson.dumps({'entry':u'éédaë'})
'{"entry": "\\u00e9\\u00e9da\\u00eb"}'
So there, it's fine to use anything...

I googled around a bit and most of it seems to be about unicode vs byte strings. However, explanations have been a bit scarce, so I figure I may be seeing the whole thing wrong.
How is unicode string conversion supposed to work? Maybe I'm tired, but I see no consistency in how different modules and the language itself seem to implement this.

Allie
Jan 17, 2004

Is there any reason you need unicode objects at all? If you keep your data UTF-8-encoded from the start you shouldn't have any problems. If a library is giving you unicode objects, you should encode them before passing them off to other libraries that aren't Unicode-aware.

The reason unicode('éédaë') doesn't work is because Python is trying to decode your UTF-8 string as ASCII. In general, you should use str.decode() and unicode.encode() to convert between the two types.
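A minimal sketch of that round trip, assuming the byte string is UTF-8 (as in the interpreter session above):
code:
# -*- coding: utf-8 -*-
raw = 'éédaë'                  # str: raw UTF-8 bytes
text = raw.decode('utf-8')     # unicode object, u'\xe9\xe9da\xeb'
back = text.encode('utf-8')    # str again, safe for non-Unicode-aware libraries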

Habnabit
Dec 30, 2007

lift your skinny fists like
antennas in germany.

MononcQc posted:

I googled around a bit and most of it seems to be about unicode vs byte strings. However, explanations have been a bit scarce, so I figure I may be seeing the whole thing wrong.
How is unicode string conversion supposed to work? Maybe I'm tired, but I see no consistency in how different modules and the language itself seem to implement this.

Some libraries just flat out don't support unicode. Usually it's mentioned in the documentation. urlencode, though, is a binary encoding. Expecting it to work on unicode strings makes as much sense as expecting encoding a unicode string to base64 to work. There is no standardized encoding for urlencoding non-ASCII strings, though most people use utf-8.

Python 2.x does some implicit coercion between str and unicode, but (fortunately) 3.x won't even try. Using ASCII as the default encoding for this coercion is the most sensible way to do it, because most unicode encodings use the lower seven bits the same way.
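So for urlencode the usual workaround is to encode the unicode values to UTF-8 byte strings yourself first; a sketch:
code:
# -*- coding: utf-8 -*-
import urllib

params = {'entry': u'éédaë'}
byte_params = dict((k, v.encode('utf-8')) for k, v in params.iteritems())
print urllib.urlencode(byte_params)   # entry=%C3%A9%C3%A9da%C3%AB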

chemosh6969
Jul 3, 2004

code:
cat /dev/null > /etc/professionalism

I am in fact a massive asswagon.
Do not let me touch computer.
I'm interested in learning how to use python to search for and download movie info from places like imdb.com and allmovie.com.

Where's the best place to go to start learning?

functional
Feb 12, 2008

Phew, what a lot of great responses. I have to spend more time around Python people.

Lonely Wolf
Jan 20, 2003

Will hawk false idols for heaps and heaps of dough.

chemosh6969 posted:

I'm interested in learning how to use python to search for and download movie info from places like imdb.com and allmovie.com.

Where's the best place to go to start learning?

I know you mean using Python to collect the information, but http://www.imdb.com/interfaces#plain

It's free for personal use, but you have to work it out with them if you want to use it on a web site. I assume that means a lot of money.

As far as using Python goes, google "python screen scraping".

ATLbeer
Sep 26, 2004
Über nerd
So... this may belong in the Mac thread, but it's a Python-specific problem.

I just got a new Mac, and when I enter the Python shell I can't use the Up/Down/Left/Right arrows. For example, this is what happens when I push the up arrow:

code:
>>> ^[[A
Left arrow

code:
>>> ^[[D
Etc...

Help me!

Lonely Wolf
Jan 20, 2003

Will hawk false idols for heaps and heaps of dough.
I don't know anything about macs, but it looks like you don't have libreadline installed.

Allie
Jan 17, 2004

Are you using Apple's distribution of Python 2.5? It should load readline by default, which provides key bindings.

On a slightly unrelated note, I just discovered that the readline library in Apple's 2.5 distribution is actually a wrapper around libedit, and the bind syntax is completely different. To turn on completion you have to do readline.parse_and_bind('bind ^I rl_complete') instead of using 'tab: complete'. I can't believe I never knew this. :psyduck:
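The usual trick is to sniff for libedit at startup and pick the right bind syntax; a sketch (the docstring test is a common heuristic, not an official API):
code:
import readline

if readline.__doc__ and 'libedit' in readline.__doc__:
    readline.parse_and_bind('bind ^I rl_complete')   # Apple's libedit wrapper
else:
    readline.parse_and_bind('tab: complete')         # GNU readline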

tef
May 30, 2004

-> some l-system crap ->

chemosh6969 posted:

I'm interested in learning how to use python to search for and download movie info from places like imdb.com and allmovie.com.

Where's the best place to go to start learning?

:toot: python Screenscraper reporting in :toot:

Downloading stuff from internet:

urllib2 is pretty good, but pycurl is a little bit more featured (but more awkward to use). I would use the former first unless you *really* need transparent gzip decompression or https proxy support.

Parsing html:

BeautifulSoup does the job, but it is slow and it doesn't handle XML all that well either. ElementTree can do XML parsing, but I personally find the api clunky (though usable). Don't use regexes. lxml uses the elementtree api and supports xpath over html and xml; it's quite a nice package with a cruddy api.

if you know xpath already, use lxml - it's good enough. Otherwise use BeautifulSoup.
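For the original question, a bare-bones scraping sketch along those lines; the URL and the markup it expects are made up, and the usual caveats about a site's terms of use apply:
code:
import urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen('http://www.example.com/movies').read()
soup = BeautifulSoup(page)
for cell in soup.findAll('td', {'class': 'title'}):
    print cell.string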

MononcQc
May 29, 2007

I've gotten as much out of the way as possible with unicode, and most stuff now works, but I've run into another problem:

My database currently has unicode as its charset, and everything seems to be stored fine when I access it via the command line, output its content directly to the browser, or browse it with phpMyAdmin.

However, I need to pass the results through a module which seems to perform a conversion from str to unicode. The problem I've spotted is now this:

for the string 'NOËL', I get:
<type 'str'> NO\xcbL
The problem is that it should be 'NO\xc3\x8bL' for a str type. 'NO\xcbL' is the unicode encoding.

So apparently, I get a unicode-encoded string under the format str. When the lib tries to re-encode it to unicode, the errors mentioned in my last post pop up.

I'm currently using mysqldb to do my queries and it seems to always return 'str' strings. Is there anything I can do to fix that encoding problem then? Could it be a locale problem?

Allie
Jan 17, 2004

Unicode isn't an encoding; it's a more general standard that specifies many different encodings, character sets, collations, etc. Your database is returning a latin-1-encoded string, not a UTF-8 string. You should check that your database's charset is set to UTF-8, and that you're instantiating your MySQLdb connection with charset='utf8' and use_unicode=True as part of the arguments to MySQLdb.connect().
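A sketch of that connection setup (host/user/db are placeholders; note MySQL spells the charset 'utf8', without the hyphen):
code:
import MySQLdb

conn = MySQLdb.connect(host='localhost', user='someuser', passwd='secret',
                       db='somedb', charset='utf8', use_unicode=True)
cur = conn.cursor()
cur.execute("SELECT name FROM sometable")
for (name,) in cur.fetchall():
    print repr(name)   # comes back as a unicode object with use_unicode=True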

Digital Spaghetti
Jul 8, 2007
I never gave a reach-around to a spider monkey while reciting the Pledge of Alligence.
This is more an appengine question, but does anyone know how I can parse XML in Google Appengine?

I'm not 100% sure whether my problem below is an appengine issue or a Python + AppEngine on Windows issue.

I've googled and can't find any consistent help. Here is my code so far:

code:
from xml.dom import minidom
class MainHandler(webapp.RequestHandler):

  def get(self):
    userip = self.request.remote_addr
    if userip == "127.0.0.1":
        userip = "78.86.108.213"
    
    url = "http://api.hostip.info/?ip=%s" % (userip)
    result = urlfetch.fetch(url)
    if result.status_code == 200:
        location = minidom.parseString(result.content)
    
    path = os.path.join(os.path.dirname(__file__), 'templates/homepage/index.html')
    
    template_values = {
        'appName': 'jMaps Demos',
        'userip': userip,
        'location': location
    }
    
    self.response.out.write(template.render(path, template_values))
However I'm getting this traceback:

code:
Traceback (most recent call last):
  File "C:\Program Files\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 499, in __call__
    handler.get(*groups)
  File "C:\Documents and Settings\tpiper\My Documents\Aptana Studio\jmapsdemos\main.py", line 35, in get
    location = minidom.parseString(result.content)
  File "C:\Documents and Settings\tpiper\My Documents\Aptana Studio\jmapsdemos\xml\dom\minidom.py", line 1927, in parseString
    from xml.dom import expatbuilder
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1443, in load_module
    return self.FindAndLoadModule(submodule, fullname, search_path)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1351, in FindAndLoadModule
    description)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1301, in LoadModuleRestricted
    description)
  File "C:\Documents and Settings\tpiper\My Documents\Aptana Studio\jmapsdemos\xml\dom\expatbuilder.py", line 32, in <module>
    from xml.parsers import expat
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1443, in load_module
    return self.FindAndLoadModule(submodule, fullname, search_path)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1351, in FindAndLoadModule
    description)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1301, in LoadModuleRestricted
    description)
  File "C:\Documents and Settings\tpiper\My Documents\Aptana Studio\jmapsdemos\xml\parsers\expat.py", line 4, in <module>
    from pyexpat import *
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1443, in load_module
    return self.FindAndLoadModule(submodule, fullname, search_path)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1351, in FindAndLoadModule
    description)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 847, in decorate
    return func(self, *args, **kwargs)
  File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1301, in LoadModuleRestricted
    description)
ImportError: DLL load failed: The referenced assembly is not installed on your system.

chemosh6969
Jul 3, 2004

code:
cat /dev/null > /etc/professionalism

I am in fact a massive asswagon.
Do not let me touch computer.
I'm guessing xml.dom isn't installed or it's not on the path, since it's giving an import error.
