  • Locked thread
SirPablo
May 1, 2004

Pillbug

Nippashish posted:

Just do the linear regression calculations yourself instead of calling polyfit, like so:
code:
import numpy as np

# Generate some dummy data
n, m, d = 181, 360, 10
D = np.dstack([i+np.zeros((n,m)) for i in xrange(d)])
D += 0.1*np.random.standard_normal(size=D.shape)
D += 2

# solve the grid of linear regression problems
X = np.vstack([np.ones(d), np.arange(d)])
solution = np.einsum('ij,klj->kli',
    np.linalg.solve(np.dot(X, X.T), X),
    D)
slopes = solution[:,:,1]
yints = solution[:,:,0]
slopes and yints are now (n,m) arrays where slopes[i,j] and yints[i,j] are the parameters of the line fit to D[i,j,:].

Whoa! This cut the run time down from 26s to 0.5s!!! Maybe I need to revisit some linear algebra from many years ago. THANK YOU, this helps me big time.


JetsGuy
Sep 17, 2003

science + hockey
=
LASER SKATES

BigRedDot posted:

Beyond that, polyfit is non-trivial and you are doing quite a few of them.

Yeah. I worked all goddamned night on that sat poo poo I was talking about, and I've come up with a few other thoughts:

As I see it, there are two major places this is getting slowed down.
1) Opening those GRIB files. I don't use GRIB, but if it's anything like FITS, reading big files can really slow you down, especially if you write it wrong. The GRIB reader looks like it's got a specialized extraction module, but I'm betting that takes time. It may be worth writing up a short script that does a timeit on how long it takes to get that data.

2) As many of us have said, the fitter is not trivial. If you have error bars, or it's using any kind of advanced methods in the linear fit, it's going to take time.

Both these points are unavoidable, and probably the major bottlenecks in the code. Yeah, fixing the arrays and how you are entering the data into them will probably speed it up SOME, but maybe not a ton. I'm still ~very~ impressed that it's taking only 45 seconds to do over 65,000 fits. Of course, if you have to do this 10,000 times, 45s all of a sudden becomes a huge pain in the rear end.

I am thinking you'll want to thread/parallel process this if you want it to run any quicker. The threading module in python, so far, seems fairly quick to pick up. As I said above, I am literally just a few days into threading myself, so I can only really give you a place to start. The trickier part that I don't have a good feel for yet is how you would create the array on the fly in a manner that respects the threads' output, though I expect you'd need to use Queue.

:siren:AGAIN, I AM BRAND NEW TO THREADING MYSELF, FEEL FREE TO CORRECT THIS, PYTHONERS:siren:
I have yet to teach myself how to manage merging the output from all the threads (would appreciate a clearer explanation than what I'm finding on Google, guys!) But this may give you what you want.

code:
import threading

import numpy as np
import pygrib

class fit_area(threading.Thread):
    def __init__(self, data_arr, x_pos, y_pos):
        self.data_arr = data_arr
        self.x_pos = x_pos
        self.y_pos = y_pos
        threading.Thread.__init__(self)

    def fit_data(self):
        slope, inter = np.polyfit(np.arange(10), self.data_arr, 1)
        return slope, inter

    def run(self):
        slope, inter = self.fit_data()
        slopes[self.x_pos, self.y_pos] = slope
        yints[self.x_pos, self.y_pos] = inter
        # ... other stuff you want to do ...

# Big data array
D = np.zeros((10, 360, 181))

# Load data (files is a list of GRIB paths, defined elsewhere)
for x in range(10):
    D[x, :, :] = np.resize(pygrib.open(files[x]).values, (360, 181))

# Initiate arrays for linear regression
slopes = np.zeros((360, 181))
yints = np.zeros((360, 181))

# Compute regression
threads = []
for ix in range(360):
    for iy in range(181):
        thread = fit_area(D[:, ix, iy], ix, iy)
        thread.start()
        threads.append(thread)

for t in threads:
    t.join()

print "Done?"


JetsGuy fucked around with this message at 19:22 on Mar 1, 2013

JetsGuy
Sep 17, 2003

science + hockey
=
LASER SKATES

quote:

Never append to a numpy array. They are optimized for other use cases and appending will force a copy. If anyone had asked me I would have said even including an append function in the api is a mistake.

So I remember this lesson from a while ago, but for my own clarification, what constitutes "appending"? That is, if I *change* the value of a numpy array, does numpy create a copy of that array in memory like it does when it appends? Or does it edit that specific value in memory?

SirPablo posted:

Whoa! This cut the run time down from 26s to 0.5s!!! Maybe I need to revisit some linear algebra from many years ago. THANK YOU, this helps me big time.

:laugh: It figures that I spend ~60-90 minutes writing up the above and the problem was solved in the interim. It just goes to prove - everyone else's problems are more interesting than your own! :v:

Anyway, be sure to run some sanity checks that the einsums are giving you the correct results.
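For example, here's a quick way to spot-check the vectorized solve against polyfit on a small grid (just a sketch, assuming numpy is available; same layout as the earlier snippet):

```python
import numpy as np

# Small grid so the check runs fast; same layout as the einsum snippet above.
n, m, d = 4, 5, 10
rng = np.random.RandomState(0)
D = rng.standard_normal((n, m, d))

# The vectorized normal-equations solve from earlier in the thread
X = np.vstack([np.ones(d), np.arange(d)])
solution = np.einsum('ij,klj->kli', np.linalg.solve(np.dot(X, X.T), X), D)

# Spot-check one grid cell: polyfit with deg=1 returns [slope, intercept]
slope, intercept = np.polyfit(np.arange(d), D[2, 3, :], 1)
assert np.allclose(solution[2, 3, 1], slope)
assert np.allclose(solution[2, 3, 0], intercept)
```

If those asserts pass for a few random cells, the einsum version agrees with polyfit everywhere.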

JetsGuy fucked around with this message at 19:24 on Mar 1, 2013

Nippashish
Nov 2, 2005

Let me see you dance!

JetsGuy posted:

So I remember this lesson from a while ago, but for my own clarification, what constitutes "appending"? That is, if I *change* the value of a numpy array, does numpy create a copy of that array in memory like it does when it appends? Or does it edit that specific value in memory?

Appending means changing the number of elements. Numpy arrays are contiguous blocks of memory (which is essential for doing matrix operations quickly), but it means that if you add a new element then behind the scenes numpy has to re-allocate the buffer it's using and copy all the old array contents to the new buffer. Allocating some memory with np.zeros() and then overwriting the zeros with interesting numbers is okay; another alternative is to build an ordinary python list by append()-ing values (or entire arrays) to it and then converting it to a numpy type with np.asarray or np.concatenate.
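In other words, these are the two safe patterns (a tiny sketch with made-up numbers):

```python
import numpy as np

# Pattern 1: preallocate with np.zeros and overwrite in place.
out = np.zeros(5)
for i in range(5):
    out[i] = i * i

# Pattern 2: grow a plain Python list (cheap appends), convert once at the end.
vals = []
for i in range(5):
    vals.append(i * i)
out2 = np.asarray(vals)

assert np.array_equal(out, out2)

# np.append, by contrast, allocates a new buffer and copies on every call.
a = np.arange(3)
b = np.append(a, 99)
assert a.size == 3 and b.size == 4   # a is untouched; b is a fresh copy
```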

JetsGuy posted:

I have yet to teach myself how to manage merging the output from all the threads (would appreciate a clearer explanation than what I'm finding on Google, guys!) But this may give you what you want.

Doing CPU intensive work in threads doesn't work well in python because of the global interpreter lock. Maybe someone more knowledgeable than I can comment on why this is the case, but in practice this means that you can't really get more than one core's worth of CPU work from python code, even if you have multiple threads. (This applies to code written in python, but C modules can create their own threads for which this doesn't apply. That's why things like np.dot can use more than one core for big operations). I think there are other implementations of python that avoid this limitation, but that's a moot point because if you want to use numpy you have no choice but to use CPython (i.e. the standard python).

There are two ways to work around this. One is to use the multiprocessing module, which has an api more or less like the threading module but uses processes instead of threads. Processes have a bit higher startup cost than threads, so you need to have enough work for each process that this cost is worthwhile. They also have more restrictions on how memory can be shared, but as long as you just want to have a bunch of worker processes that each work independently this is rarely a problem. A good way to set this up is to create a multiprocessing.Pool and then use pool.map(function_to_call, list_of_arguments_for_each_call). The multiprocessing module uses pickle to transport objects between processes, so the arguments and return values of function_to_call need to be pickle-able.

Using the multiprocessing module requires a bit of boilerplate to set up. An easier option, which uses a multiprocessing.Pool under the hood, is the joblib module. The joblib module has tools to make executing the pattern I described above easier with their embarrassingly parallel helper. They also have some tools for memoizing expensive functions, which I haven't personally used. Their embarrassingly parallel helper is really nice though: it hides all the boilerplate you need for multiprocessing and works around some weird warts multiprocessing has, like not being able to ctrl+c when a pool is running, and it properly propagates errors in the worker processes back to the parent process so you can see what went wrong. I highly recommend joblib if you want to write embarrassingly parallel code in python.

Nippashish fucked around with this message at 21:44 on Mar 1, 2013

BigRedDot
Mar 6, 2008

SirPablo posted:

Whoa! This cut the run time down from 26s to 0.5s!!! Maybe I need to revisit some linear algebra from many years ago. THANK YOU, this helps me big time.

You'd probably have more chance of seeing Einstein summation in the context of relativity or tensors. I probably should have thought of einsum, I work with the guy who contributed it. :)

Lurchington
Jan 2, 2003

Forums Dragoon

Nippashish posted:


Doing CPU intensive work in threads doesn't work well in python because of the global interpreter lock. Maybe someone more knowledgeable than I can comment on why this is the case, but in practice this means that you can't really get more than one core's worth of CPU work from python code, even if you have multiple threads.

http://www.dabeaz.com/python/UnderstandingGIL.pdf
pg 9 or so is one of my favorite explanations

Nippashish posted:

Using the multiprocessing module requires a bit of boilerplate to set up.

haven't seen joblib, but remember that a simple find/replace for:
threading -> multiprocessing
Thread -> Process

is a 90% solution if you're "upgrading" threads to processes

SirPablo
May 1, 2004

Pillbug

JetsGuy posted:


:laugh: It figures that I spend ~60-90 minutes writing up the above and the problem was solved in the interim. It just goes to prove - everyone else's problems are more interesting than your own! :v:

Anyway, be sure to run some sanity checks that the einsums are giving you the correct results.

Yeah, I find myself sometimes working on other issues because it's a nice break. Thanks for the effort.

As far as opening the grib files, it is really efficient actually. To open about 50 of them takes 0.5 seconds.

I'll let this stuff run a bit to compare but my first look suggests the data are highly similar (though not identical, but that may be due to some other parts of the script). If you're curious, here is the tool I'm working on. http://www.wrh.noaa.gov/psr/modTrend/

evilentity
Jun 25, 2010

Wildtortilla posted:

I'm currently taking PSU's Certification in Geographic Information Systems (aka, intelligently using ArcMAP). In May I'll be starting my final course for the certificate and I have a huge array of options, but I'm leaning towards their course in Python. From looking at job postings for GIS positions, I'd wager at least 50% of postings include knowing Python as a desirable skill. However, coming from a background in geology, I have no experience with coding; would the links in the OP be a good place for me to start or should I start elsewhere since I have no experience? The first link in the tutorials "MIT Introduction to Computer Science and Programming" and the contents of this post seem like they'd be a good start for me. Any suggestions?

You could try
http://learnpythonthehardway.org/
or
https://www.edx.org/courses/MITx/6.00x/2013_Spring/about
Intro to CS in python. I did the previous one and it was pretty fun, but the difficulty ramps up quickly after a few lectures.
I'm sure people around here have other suggestions.

Python is pretty easy to get into, so with enough determination you will get it.

n0manarmy
Mar 18, 2003

EDIT: ^^^ Dive into python is a good resource I have been using to help learn python. http://www.diveintopython.net/

Any suggestions on how to best approach GUI development for Python? I'm doing very simple applications for friends and myself. I've done these programs without a GUI and they work; however, I would like to keep my development moving forward by developing a GUI for my apps as well. I'm using either Aptana or IDLE, but I prefer Aptana. I'm not averse to switching IDEs if there is one that is more geared towards general python development and good GUI support.

One example of an app that I've built is a program that prompts the user for a directory that contains their music, builds a list of all the files recursively in that directory, randomizes the list, and finally copies the files to a destination directory. The target user's car stereo uses SD cards but plays music in the order it was copied to the SD card.

Cat Plus Plus
Apr 8, 2011

:frogc00l:

n0manarmy posted:

EDIT: ^^^ Dive into python is a good resource I have been using to help learn python. http://www.diveintopython.net/

DIP is terrible (ODBC :cripes:) and outdated. It's best to forget about it.
Official Python tutorial is decent and should be enough to learn the language.

Wolfgang Pauli
Mar 26, 2008

One Three Seven
Did you try Codecademy? The individual lessons have their ups and downs (a criticism of style, not substance, as at times they show you rather than guide you through a lesson and let you do things), but it'll learn you some Python. If you have trouble following it, do the infinitely more sensible Javascript course first, then follow up with the Python one. I wanted to learn Python for the edX AI course and I blew through the Codecademy track in about a week (though I already knew C/Javascript). I'm at a point where I can actually follow the arbitrary code snippets and tutorials I pull off stackoverflow, and I could immediately jump into dicking around in Pygame and PIL.

Be aware that you should still keep the Python official tutorial handy once you're through with it, as Codecademy doesn't teach you tuples and sets and such, and won't cover much of the standard library.

aeverous
Nov 13, 2009
What do you guys use for your Python work, I'm currently using Notepad++ with the pyNPP plugin but earlier this year I used VS2010 for a C# project and gently caress if I didn't get really spoiled by the code completion. I've looked around at Python IDEs and they all look a bit crap except PyCharm which is pretty expensive. Are there any free/OSS Python IDEs with really solid code completion and a built in dark theme?

My Rhythmic Crotch
Jan 13, 2011

Right now I'm using Sublime Text 2, but it doesn't have code completion. It does have code "suggestion" (not sure if that's the right term), which is better than nothing. I believe Eclipse can be set up to do code completion with Python, but I haven't tried it.

Hed
Mar 31, 2004

Fun Shoe
After a brief stint with PyCharm after the sale last year, I found myself going back to Sublime Text 2. At work I either use vim or Eclipse when I need to get into coding, which is less and less.

mmm11105
Apr 27, 2010

aeverous posted:

What do you guys use for your Python work, I'm currently using Notepad++ with the pyNPP plugin but earlier this year I used VS2010 for a C# project and gently caress if I didn't get really spoiled by the code completion. I've looked around at Python IDEs and they all look a bit crap except PyCharm which is pretty expensive. Are there any free/OSS Python IDEs with really solid code completion and a built in dark theme?

If you like VS2010, use PTvS(Python Tools for Visual Studio). Python in Visual Studio with full IntelliSense

NOTinuyasha
Oct 17, 2006

 
The Great Twist
If you like PyCharm's coding assistance you can give PTVS a try, but personally I had lots of issues with it and went running back to PyCharm after like a day.

Edit: Beaten

Lazerbeam
Feb 4, 2011

Moving this from the Game Development Thread:

Lazerbeam posted:

After learning a bit of Python I thought I'd take a look at some game-related modules, but I can't seem to get Pyglet or Pygame to work. I'm sure I'm missing something obvious, but when trying to import pygame/pyglet I always just get the error message "no module named 'pygame/pyglet'". I'm using Python 3.3, Pyglet 1.2 alpha and pygame 1.9.2a. Any help would be appreciated, thanks.

Wolfgang Pauli
Mar 26, 2008

One Three Seven
As far as I know, Pygame doesn't support 3.x. If you're trying to do game dev with Python, you should really switch to 2.7.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug
From the project download page, Pygame's 1.9.2a0 release isn't packaged as a Windows build for 3.3. I only see pygame-1.9.2a0.win32-py3.2.msi, so you may need to install 3.2.

Did the Pygame installer find your 3.3 installation?

Lazerbeam
Feb 4, 2011

I found it here: https://bitbucket.org/pygame/pygame/downloads

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug
I just installed 3.3 in a VM, ran the pygame installer that you linked to, and was able to "import pygame" without any problems:

code:
Python 3.3.0 (v3.3.0:bd8afb90ebf2, Sep 29 2012, 10:55:48) [MSC v.1600 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> import pygame
>>> 
Pygame only supports 32-bit builds of Python; did you install a 64-bit build? I don't have a 64-bit Windows VM handy so at the moment I can't check whether the Pygame installer will do the right thing (i.e. refuse to install) if you're using 64-bit Python.

Lazerbeam
Feb 4, 2011

Yes I have 64bit Python, would I have anything to lose by switching over to 32bit? I suppose I'll have to switch over anyway though

a lovely poster
Aug 5, 2011

by Pipski
http://www.lfd.uci.edu/~gohlke/pythonlibs has 64 bit 3.3 python pygame binaries

a lovely poster fucked around with this message at 21:08 on Mar 3, 2013

Lazerbeam
Feb 4, 2011


The import option actually works now so I'll see what I can do, thank you :D

Tiax Rules All
Jul 22, 2007
You are but the grease for the wheels of his rule.
Dumb newbie question, but in Python 3.3 I'm having trouble with getting variables and strings to play nicely in the same print function. For instance,

miles = (10 * .5)
print ("10 kilometers is about", miles, "miles.")

is giving me the output:
('10 kilometers is about', 5.0, 'miles.')

When I'm trying to get:
10 kilometers is about 5.0 miles.

Can anyone tell me what I'm doing wrong?

Movac
Oct 31, 2012

Tiax Rules All posted:

Dumb newbie question, but in Python 3.3 I'm having trouble with getting variables and strings to play nicely in the same print function.

I don't think you're using Python 3.3. In Python 3, print was changed from a statement to a function, so those arguments would be passed individually to print() as you intend. In Python 2, since print was a simple statement that doesn't use parentheses, you're creating a tuple of 2 strings and a number that is then printed. Try running python as "python3" rather than "python", to be sure you run the correct version.
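A quick way to see the difference for yourself (a sketch with the same numbers as above):

```python
# In Python 3, print is a function and its arguments are joined with spaces.
miles = 10 * .5
args = ("10 kilometers is about", miles, "miles.")
print(*args)              # 10 kilometers is about 5.0 miles.

# Under Python 2, the same source line is the print STATEMENT applied to
# one 3-element tuple, so you get the tuple's repr -- parens, quotes and all.
tuple_repr = repr(args)
assert tuple_repr == "('10 kilometers is about', 5.0, 'miles.')"
```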

Emacs Headroom
Aug 2, 2003

Tiax Rules All posted:

Can anyone tell me what I'm doing wrong?

This will work in either Python2 or 3:

Python code:
print('10 kilometers is about %.1f miles.' % miles)
This is string interpolation, and it's generally better for building strings to print than concatenating arguments (for one thing, you have control over how many decimal places a float prints with).

Tiax Rules All
Jul 22, 2007
You are but the grease for the wheels of his rule.
Whoops. Looks like I forgot to replace the idle shortcut on my desktop from python 2. That's embarrassing.

String interpolation definitely looks easier and prettier to use.

Thanks!

Wolfgang Pauli
Mar 26, 2008

One Three Seven

Emacs Headroom posted:

This is string interpolation, and is generally better for making strings to print than concatenating arguments (for one thing you'll have control over how many decimals to print out with a float)
You don't have to convert to string when using a variable there? I was always paranoid and wrapped everything in str().

Emacs Headroom
Aug 2, 2003

Wolfgang Pauli posted:

You don't have to convert to string when using a variable there? I was always paranoid and wrapped everything in str().

Er, no. You can feed in several different types and some formatting information, for instance %s refers to a string, %i to an int, %f to a floating point, %.2f to a floating point with two decimal places printed, etc. I think if you feed something that's not a string to a %s, it'll try to cast it first with str().

Here's the reference
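For instance (made-up values, just to show the specifiers side by side):

```python
# %s = string, %i = int, %.2f = float rounded to two decimal places
line = "%s scored %i goals at %.2f per game" % ("Gretzky", 92, 92 / 80.0)
assert line == "Gretzky scored 92 goals at 1.15 per game"

# %s stringifies non-strings for you, as if via str()
assert "%s" % 3 == "3"
```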

fart simpson
Jul 2, 2005

DEATH TO AMERICA
:xickos:

Does anyone else use string.format() instead of the % syntax for string formatting? I find it easier to use and read, but it seems like most people don't use it?

aeverous
Nov 13, 2009
% is like C; I think that's a big part of it.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug
I exclusively use str.format. At this point the % string formatting looks strange and wrong to me. I like being able to omit format specifiers unless I want to tweak the output format -- as far as I know you can always use e.g. '%s' % 3 but I prefer to leave the type out entirely, as in '{}'.format(3).

how!!
Nov 19, 2011

by angerbot

MeramJert posted:

Does anyone else use string.format() instead of the % syntax for string formatting? I find it easier to use and read, but it seems like most people don't use it?

I use % almost exclusively. It is more concise and simpler than .format(). Why was .format() even added? What was wrong with %? I think ultimately, whether you choose one or the other is a matter of bike-shedding.

Cat Plus Plus
Apr 8, 2011

:frogc00l:

how!! posted:

I use % almost exclusively. It is more concise and simpler than .format(). Why was .format() even added? What was wrong with %? I think ultimately, whether you choose one or the other is a matter of bike-shedding.

Tidier reordering ({1} {0} vs %2$s %1$s — I'm not even sure if that's the syntax; so much for "simpler and concise"), named arguments ({foo} {bar} vs %(foo)s %(bar)s), no types in format strings, more functionality ({foo.field}, {foo!r}), and a bit cleaner API (ugh, singleton tuples). Pretty much the only drawback is having to escape { and }.
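All of which is easy to verify at the REPL (a sketch; Point is a made-up class for the attribute-access example):

```python
# Reordering by index, with no type codes in the format string
assert "{1} {0}".format("world", "hello") == "hello world"

# Named arguments
assert "{foo} {bar}".format(foo="eggs", bar="spam") == "eggs spam"

# Attribute access and explicit !r (repr) conversion
class Point(object):
    def __init__(self):
        self.field = 42

assert "{p.field}".format(p=Point()) == "42"
assert "{0!r}".format("hi") == "'hi'"

# The one drawback: literal braces must be doubled
assert "{{x}}".format() == "{x}"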

scissorman
Feb 7, 2011
Ramrod XTreme
I've been experimenting with embedding OpenGL in PyQt4/PySide with the QGLWidget.
However doing anything big is a lot of work, since this is a very low-level solution.
Are there any ready-made scene graphs or game engines I can use with PyQt instead?

I've only been able to find old implementations, which don't work because I'm using Python 3.3.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

PiotrLegnica posted:

Tidier reordering ({1} {0} vs %2$s %1$s — I'm not even sure if that's the syntax; so much for "simpler and concise"), named arguments ({foo} {bar} vs %(foo)s %(bar)s), no types in format strings, more functionality ({foo.field}, {foo!r}), and a bit cleaner API (ugh, singleton tuples). Pretty much the only drawback is having to escape { and }.

Also, as stated in PEP 3101:

PEP 3101 posted:

The '%' operator is primarily limited by the fact that it is a binary operator, and therefore can take at most two arguments. One of those arguments is already dedicated to the format string, leaving all other variables to be squeezed into the remaining argument. The current practice is to use either a dictionary or a tuple as the second argument, but as many people have commented [3], this lacks flexibility. The "all or nothing" approach (meaning that one must choose between only positional arguments, or only named arguments) is felt to be overly constraining.

Emacs Headroom
Aug 2, 2003
Maybe a templating engine like jinja should just get merged into the standard library. Any time I've needed something more powerful than the built-in string formatting I've used that (or the django templating).

JetsGuy
Sep 17, 2003

science + hockey
=
LASER SKATES

Nippashish posted:

Appending means changing the number of elements. Numpy arrays are contiguous blocks of memory (which is essential for doing matrix operations quickly), but it means that if you add a new element then behind the scenes numpy has to re-allocate the buffer it's using and copy all the old array contents to the new buffer. Allocating some memory with np.zeros() and then overwriting the zeros with interesting numbers is okay; another alternative is to build an ordinary python list by append()-ing values (or entire arrays) to it and then converting it to a numpy type with np.asarray or np.concatenate.

Ok, thanks for the clarification. As I said, I knew Numpy copies arrays every time you append, but I didn't understand how or why. :)

Nippashish posted:

Doing CPU intensive work in threads doesn't work well in python because of the global interpreter lock. Maybe someone more knowledgeable than I can comment on why this is the case, but in practice this means that you can't really get more than one core's worth of CPU work from python code, even if you have multiple threads. (This applies to code written in python, but C modules can create their own threads for which this doesn't apply. That's why things like np.dot can use more than one core for big operations). I think there are other implementations of python that avoid this limitation, but that's a moot point because if you want to use numpy you have no choice but to use CPython (i.e. the standard python).

There are two ways to work around this. One is to use the multiprocessing module, which has an api more or less like the threading module but uses processes instead of threads. Processes have a bit higher startup cost than threads, so you need to have enough work for each process that this cost is worthwhile. They also have more restrictions on how memory can be shared, but as long as you just want to have a bunch of worker processes that each work independently this is rarely a problem. A good way to set this up is to create a multiprocessing.Pool and then use pool.map(function_to_call, list_of_arguments_for_each_call). The multiprocessing module uses pickle to transport objects between processes, so the arguments and return values of function_to_call need to be pickle-able.

Using the multiprocessing module requires a bit of boilerplate to set up. An easier option, which uses a multiprocessing.Pool under the hood, is the joblib module. The joblib module has tools to make executing the pattern I described above easier with their embarrassingly parallel helper. They also have some tools for memoizing expensive functions, which I haven't personally used. Their embarrassingly parallel helper is really nice though: it hides all the boilerplate you need for multiprocessing and works around some weird warts multiprocessing has, like not being able to ctrl+c when a pool is running, and it properly propagates errors in the worker processes back to the parent process so you can see what went wrong. I highly recommend joblib if you want to write embarrassingly parallel code in python.

Thanks a bunch! Looks like I've been "teaching" myself the wrong things! :doh: Threading seemed rather cool and easy to pick up too, I guess it figures that I needed to be doing multiprocessing instead.


JetsGuy
Sep 17, 2003

science + hockey
=
LASER SKATES
So I'm trying much harder lately to keep PEP8 compliant. My latest code is clean of everything but these annoying E712s. Quick googling seems to suggest this isn't uncommon, and the suggestion is to simply suppress them. I'm still confused, however, as to why this is "bad":

code:
# Kills program if file DNE
if os.path.exists(sys.argv[1]) == False:
    sys.exit("That data file does not exist.  Please try again.\n")
E712 yells at me that:
comparison to False should be 'if cond is False:' or 'if not cond:'

:psyduck: Isn't that what I am doing?

FAKE EDIT:
Oh goddamnit, I get it, it's wanting me to do this:
code:
# Kills program if file DNE
if os.path.exists(sys.argv[1]) is False:
    sys.exit("That data file does not exist.  Please try again.\n")
Posting anyway in case someone has a brain fart similar to this.
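For the record, the other spelling E712 suggests is usually the nicer one (a sketch; the filename is hypothetical):

```python
import os.path

path = "no_such_file.dat"        # hypothetical filename for the sketch
exists = os.path.exists(path)

# E712's preferred form: no comparison at all. `not cond` also handles
# any falsy value (None, 0, ""), whereas `is False` only matches the bool.
if not exists:
    message = "That data file does not exist.  Please try again."

assert (exists == False) == (not exists)
```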
