 
  • Locked thread
BigRedDot
Mar 6, 2008

sharktamer posted:

Is this easy enough to use with flask as is? If so, would you be willing to add some use cases to the user guide?

Absolutely, you can currently serve up both static pages and pages with server-backed plots. We're in the process of adding an "HTML Fragment" session that will make this even easier. Anyway, one of my colleagues just offered to work up a simple standalone Flask example and add it to the user guide, so hopefully I can point you to something concrete in the next few days.


onionradish
Jul 6, 2006

That's spicy.
I'm writing some unit tests for a script that parses an html page. For testing, I'm using a reference HTML file that's a full WGET of the target page. What's best practice for the assert against a function that returns all tags according to a selector, like lxml's "cssselect()" or bs4's "find_all()"?

Let's say that the reference page is supposed to return 14 <a> items as a list. Is it enough to just verify the "len()" of the results, just check a few of the actual values (maybe first and last), or verify that the full result list matches my list of expected results?

The answer might be "it depends," and that's ok. Mostly I'm wondering if either or both of the first two methods would be considered insufficient or bad practice.

Dren
Jan 5, 2001

Pillbug
Why do you have it in a special function? Isn't returning all the tags matching a selector what find_all() does? Why would you stuff that behavior in a new function then try to test it?

onionradish
Jul 6, 2006

That's spicy.

Dren posted:

Why do you have it in a special function? Isn't returning all the tags matching a selector what find_all() does? Why would you stuff that behavior in a new function then try to test it?

Maybe I oversimplified the example, or maybe I'm over-testing. In actual practice, it would only be a function because it's going to be called multiple times and has conditionals. Maybe a better example would be a function that returns the urls of leeched images found on a page. (I have a couple of clients that I've been unable to break of the hotlinking habit when they make blog posts.) So a "leeched_images(url)" function might return 10 of 14 found <img> on site A, and 1 of 6 <img> on site B.

Super-hacky pseudo-code below. If the right thing to do is test the "img_is_on_host(imgsrc)" function and not test "leeched_images()" at all, that's fine. Just trying to understand where to draw the line on testing.

Python code:
# Very hacky example
import lxml.html

def img_is_on_host(imgsrc):
    # in practice, this would be more sophisticated, not hardcoded string, etc
    return imgsrc.startswith('http://clientsite_1.com')

def leeched_images(url):
    # in practice, would resolve relative urls in img src, etc.
    doc = lxml.html.parse(url).getroot()
    images = doc.cssselect('img')
    for image in images:
        imgsrc = image.get('src')
        if not img_is_on_host(imgsrc):
            yield imgsrc

for imageurl in leeched_images('http://clientsite_1.com/blog'):
    print imageurl

onionradish fucked around with this message at 23:00 on Mar 11, 2014

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Plorkyeran posted:

Build wheels for all of your packages?

Got wheels built for everything, yielded a huge improvement in how long it takes to deploy the app! Thanks!

salisbury shake
Dec 27, 2011

salisbury shake posted:

sharktamer posted:

There's also AwfulPy. I've not had much luck with it in the past though.

notInuyasha, this isn't a python question, but how did you link that highlighted section?

Hi :) what problem did you have with it? I need an excuse to work on it.

Gonna resurrect this from the bottom of the last page :)

Dren
Jan 5, 2001

Pillbug
Onionradish, I would consider what you're doing to be more integration testing than unit testing because you are running sample input through the entire routine rather than chunking up the routine into small pieces and verifying a smaller subset of behaviors.

Dren
Jan 5, 2001

Pillbug
The thing about unit testing is that you want to structure your code so that you are only testing inputs and outputs of functions. The idea being that if you can trust those inputs and outputs, everything will work. So for your code the appropriate testing would be to test leeched_images() as a whole. You'd want to see that for some input url you get the proper list of leeched images as a result. You could check the len of the list and verify that the actual results are as expected.

It sounds like you want finer-grained testing; to do that, break your code up into more functions like so:

Python code:
import lxml.html

def img_is_on_host(imgsrc, host_url):
    return imgsrc.startswith(host_url)

def get_img_elems(page_url):
    doc = lxml.html.parse(page_url).getroot()
    return doc.cssselect('img')

def get_paths(img_elems):
    paths = [x.get('src') for x in img_elems]
    # do stuff to resolve relative paths
    return paths

def leeched_images(page_url, host_url):
    img_elems = get_img_elems(page_url)
    paths = get_paths(img_elems)
    leeched_imgs = [p for p in paths if not img_is_on_host(p, host_url)]
    return leeched_imgs

for imageurl in leeched_images('http://clientsite_1.com/blog', 'http://clientsite_1.com'):
    print imageurl
With the code like this you can test smaller pieces. E.g. to test get_img_elems you could provide two test cases, one where it has to get two elements and one where it has to get none. I don't think more detailed testing than that is required because the methods used should have been thoroughly tested in their library. Something similar could be done for get_paths and img_is_on_host. Then, leeched_images can be tested by verifying its output for some page_urls and host_urls.
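To make the "test the smaller pieces" advice concrete, here's a sketch in pytest style. `filter_leeched` is a hypothetical helper pulled out of `leeched_images` so the filtering logic can be tested without a network or a parsed page; the lxml-dependent `get_img_elems` would get its own tests against a saved reference page.

```python
# Sketch of finer-grained tests for the refactored example.
# filter_leeched is an invented name for the list-comprehension step
# of leeched_images, extracted so it can be tested in isolation.

def img_is_on_host(imgsrc, host_url):
    return imgsrc.startswith(host_url)

def filter_leeched(paths, host_url):
    # the filtering step of leeched_images, pulled out for testing
    return [p for p in paths if not img_is_on_host(p, host_url)]

def test_img_is_on_host():
    assert img_is_on_host('http://clientsite_1.com/a.png', 'http://clientsite_1.com')
    assert not img_is_on_host('http://other.com/a.png', 'http://clientsite_1.com')

def test_filter_leeched():
    paths = ['http://clientsite_1.com/a.png', 'http://other.com/b.png']
    assert filter_leeched(paths, 'http://clientsite_1.com') == ['http://other.com/b.png']
```

Each test exercises exactly one function's input/output contract, which is the "trust the pieces" idea from the post above.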

onionradish
Jul 6, 2006

That's spicy.
Thanks for taking the time to write that out -- it was really helpful.

OnceIWasAnOstrich
Jul 22, 2006

Today I spent some time testing out some scripts that fit a lot of linear models. I'm using statsmodels for the formula building and fitting and pandas for the data. I started off using my generic Anaconda environment which has all of the Accelerate packages installed, like MKL numpy. When I fit my model (~250 data points, 4 predictors) it uses all 16 threads on my processor and takes about 250ms. This seemed kind of slow so I compared it to the standard numpy environment to see if my model fitting is just really really slow that way. It turns out standard numpy performs the same fit in 60% of the time using only one thread.


With Anaconda Accelerate enabled with 16 threads. Maxes out all 16 threads for the duration of the test.
Python code:
In [51]: timeit results = smf.ols('X ~ A + B + C + D', data=data).fit()
10 loops, best of 3: 253 ms per loop
With standard Numpy 1.8 on 1 thread. Maxes out 1 thread for the duration of the test.
Python code:
In [9]: timeit results = smf.ols('X ~ A + B + C + D', data=data).fit()
10 loops, best of 3: 151 ms per loop
I guess the moral of this story is beware using pre-optimized libraries?

sharktamer
Oct 30, 2011

Shark tamer ridiculous

salisbury shake posted:

Gonna resurrect this from the bottom of the last page :)

Well, quite simply it doesn't work. Your setup.py script has a reference to some 'AwfulGUI_old' package that doesn't exist. Then when I remove that to get the setup.py working, I get the following when trying to create an AwfulPy instance like in your README.md.

It's entirely possible it's me doing something stupid though, but I think this might be a rare case where it's not.

salisbury shake
Dec 27, 2011
Yea, I should delete the setup.py; it was for initializing the PyPI repo and has nothing to do with setting anything up.

Accidentally leaked py3k super() usage into py2k compatible code :downs:. You should be able to run it with python3 or ipython3 and hopefully that'll be the only issue you run into.

I'll run it with python2 and clear out any incompatible assumptions I've made, or go full py3k. Done. If you pull it now, it should work with python2 as it does on my Sid instance.
Found a couple of bugs in the search that uses the old search.php and lastread, though.

Thanks for the feedback and for taking the time to post the whole traceback :)

Nippashish
Nov 2, 2005

Let me see you dance!

OnceIWasAnOstrich posted:

Today I spent some time testing out some scripts that fit a lot of linear models. I'm using statsmodels for the formula building and fitting and pandas for the data. I started off using my generic Anaconda environment which has all of the Accelerate packages installed, like MKL numpy. When I fit my model (~250 data points, 4 predictors) it uses all 16 threads on my processor and takes about 250ms. This seemed kind of slow so I compared it to the standard numpy environment to see if my model fitting is just really really slow that way. It turns out standard numpy performs the same fit in 60% of the time using only one thread.

Your models are very small so I'm not surprised you're losing time to overhead from multiple cores. Try using the anaconda + accelerate with MKL_NUM_THREADS=1 set in your environment; this will use MKL but will tell it to only use one thread. If your models are independent then consider parallelizing over models instead of inside the optimization. I like to use joblib for this since it has a very nice and painless interface for executing task-parallel jobs using multiprocessing.

OnceIWasAnOstrich
Jul 22, 2006

Nippashish posted:

Your models are very small so I'm not surprised you're losing time to overhead from multiple cores. Try using the anaconda + accelerate with MKL_NUM_THREADS=1 set in your environment; this will use MKL but will tell it to only use one thread. If your models are independent then consider parallelizing over models instead of inside the optimization. I like to use joblib for this since it has a very nice and painless interface for executing task-parallel jobs using multiprocessing.

Thanks, it turns out there is zero speedup with MKL over normal numpy in this particular application, but at least now I don't need to create a virtualenv or whatever the conda equivalent is to prevent a 2-fold slowdown. I've been using the map function of multiprocessing Pools to parallelize the normal version, which seems a lot simpler than bringing in a whole other library to do the same thing with extra syntax and imports. I guess it might help with the ability to automatically memmap arrays but I am already using HDF5 which works great with read-only multiprocessing.
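The Pool.map approach mentioned above, sketched end to end. `fit_one` is a stand-in for the real `smf.ols(...).fit()` call; only the Pool plumbing is the point here, and the fork context is used to keep the sketch guard-free (Windows/macOS spawn would need an `if __name__ == '__main__'` guard).

```python
# Parallelizing over independent models with a stdlib multiprocessing Pool.
# fit_one is a placeholder for the actual statsmodels fit.
from multiprocessing import get_context

def fit_one(dataset):
    # stand-in for: smf.ols('X ~ A + B + C + D', data=dataset).fit()
    return sum(dataset) / len(dataset)

def fit_all(datasets, workers=4):
    # fork context keeps this sketch simple; on Windows you'd use 'spawn'
    # plus a __main__ guard
    with get_context('fork').Pool(workers) as pool:
        return pool.map(fit_one, datasets)
```

Since each fit is independent and takes ~150 ms, parallelizing across models like this scales far better than letting MKL spread one tiny fit across 16 threads.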

Nippashish
Nov 2, 2005

Let me see you dance!

OnceIWasAnOstrich posted:

I've been using the map function of multiprocessing Pools to parallelize the normal version, which seems a lot simpler than bringing in a whole other library to do the same thing with extra syntax and imports.

The embarrassingly parallel for loops thing I linked to essentially does this but with less boilerplate. If you're already set up using multiprocessing directly then there is not much point to switch to something new though. I just mention it because I like their interface a lot more than using multiprocessing directly.

scholzie
Mar 30, 2003

If I had a daughter, she'd probably be pregnant by the time she turned 12.
Thanks in advance for reading - this is kind of long and confusing...

I have a reasonably complex logging system I need to implement for an application I'm writing and I'm having a hard time finding REAL use cases that I can steal best practices from. Some help would be really appreciated...

First, it's probably useful to understand the application - apologies for verbosity, but it's required to see why the logging requirements are a little complicated. At the most basic level it's a secondary init, in that its main purpose is to start and stop other applications. The difference is that these can be installed in user space and run within their own security context, and don't require root to install or run. Each user who has scripts to run will create a special dot-directory in their $HOME, and within that directory is a manifest of all the "packages" they want to execute ('packages.ini') along with one directory per entry containing everything that is required to run (or a gzipped tarball of such a directory, called a .pip, which is how I'll refer to them from now on). Each pip has a settings.ini file which defines all sorts of things like the entry point and arguments for the script, etc. Also in this file is the log entry string format, filename (or handler type, potentially) and logging.LEVEL that the person who wrote that script wants to use.

The main application lives in myApp.py, and all the special classes (CommandBase, Package, Manifest) I've created live in lib/myApp_lib.py. CommandBase is a class from which all other application classes inherit, and it contains the master level ConfigParser crap to pass around to all the various other classes. This is important later.

I need to log the following:

  • Master application log - Logs everything the app is doing (searching for users, $HOMEs, validating certificates, etc.). This will go to /var/log/myApp and probably rotate since it'll be fairly verbose
  • pip logs at some logging.LEVEL defined in the app config file, within the Master log at /var/log/myApp
  • pip logs at the logging.LEVEL, format, and location of the package writer's choosing from the pip's settings.ini file. This could be literally any type of supported handler (email, http, file, etc.)

My confusion is stemming from the fact that each pip needs to log to two places at two (possibly) different verbosenessesseses, and the master application is also logging its own stuff. What I'm trying to figure out is how many loggers I need, where they should be instantiated, and how I should be passing log messages around from my derived classes.

Should I create the Logger object inside CommandBase? In this case every instance of any class that inherits it should have its own Logger (which means one Logger per pip, as each pip is its own object - good!). Or should I create the Logger objects inside every class that I want logging for? If so, how should they be named? I can't simply use getLogger(__name__) because then the Manifest object and Package object will have different Loggers associated and that's not what we want.

My best guess is that I should create a master Logger either in main() or in __init__() for the myApp class, then the child Logger will go into the Package class so there'll be one logfile per package. But then I'm stuck at that point on how to get the package log to write to two places at two LEVELs. I'm also not sure how to get info from the other classes to be written to these logs (especially information from libraries I didn't write) - should I just be catching Exceptions and Warnings and passing them into the logs with try/except blocks?

I know this is really confusing if you don't have access to my code, which is why I'm hoping someone can offer some guidance and point me to some public code somewhere that makes use of the standard logging class and then I can figure it out from there. If it's helpful I may be able to write out some skeleton code to describe the structure a bit clearer - let me know if that's useful. Thank you so, so, SO much

John DiFool
Aug 28, 2013

scholzie posted:

Thanks in advance for reading - this is kind of long and confusing...

I have a reasonably complex logging system I need to implement for an application I'm writing and I'm having a hard time finding REAL use cases that I can steal best practices from. Some help would be really appreciated...

If your code is heavily OO then you might look into the mixin concept. You could create a mix-in class for each discrete log file you want to have, and then inherit from the proper logging class in each class you want to do logging in.

Here's a rough sketch that I think will work but I haven't tested it:
Python code:
class BaseLog(object):
    def __init__(self, file_obj):
        # Do any necessary setup to the file_obj you need.
        self.file_obj = file_obj

    def log(self, string):
        # Write to the file_obj
        self.file_obj.write(string + '\n')

class SpecificLog(BaseLog):
    # Init this mix-in's file object here...
    log_file = open('specific.log', 'a')

    def __init__(self):
        super(SpecificLog, self).__init__(SpecificLog.log_file)

class ClassToLogWith(SomeBaseClass, SpecificLog):
    # BLAH BLAH BLAH
    def some_function(self):
        self.log("Hey, some_function was called!")
So BaseLog takes care of the basic stuff that any log file needs. SpecificLog is a really light derived class from BaseLog which opens a file object at the CLASS level (I'm a little fuzzy on if this will work) so every instantiation of SpecificLog will write to the same file object. Then you derive from SpecificLog if you want a class to log to 'specific.log', and just call your log method which you built up in BaseLog.

Hopefully this makes some sense.

BeefofAges
Jun 5, 2004

Cry 'Havoc!', and let slip the cows of war.

I'm curious, why reinvent the wheel? There are already a lot of tools out there that exist for scheduling and running other tools.

For example, just toss up a CI server and let people create their own jobs with their own logging configurations within it.

Surprise T Rex
Apr 9, 2008

Dinosaur Gum
Not sure this is the best place to ask, but there didn't seem to be a Qt framework thread.

I'm trying to build a basic file browser in PyQt, using a QTreeView for the directory list, and a separate QListView to show the appropriate image files in that folder.

Problem is, when I use setRootPath() to change the shown directory in the QListView, to the one selected in the QTreeView, it somehow messes with the filters, and the ListView will show the system drives instead of the directory contents.

I'm using QDir.Files | QDir.NoDotAndDotDot to filter the QListView, and originally, there was an issue with directories showing up despite these filters, too, but I've solved that by resetting the filters after each setRootPath(), but that hasn't helped the drives to not randomly appear.

And now that I've typed that out, I'm not sure why I didn't use QFileDialog and save myself a whole lot of work, except that I don't think it'll quite exactly follow the behaviour I'd like to achieve.

accipter
Sep 12, 2003

Surprise T Rex posted:

Not sure this is the best place to ask, but there didn't seem to be a Qt framework thread.

I'm trying to build a basic file browser in PyQt, using a QTreeView for the directory list, and a separate QListView to show the appropriate image files in that folder.

Problem is, when I use setRootPath() to change the shown directory in the QListView, to the one selected in the QTreeView, it somehow messes with the filters, and the ListView will show the system drives instead of the directory contents.

I'm using QDir.Files | QDir.NoDotAndDotDot to filter the QListView, and originally, there was an issue with directories showing up despite these filters, too, but I've solved that by resetting the filters after each setRootPath(), but that hasn't helped the drives to not randomly appear.

And now that I've typed that out, I'm not sure why I didn't use QFileDialog and save myself a whole lot of work, except that I don't think it'll quite exactly follow the behaviour I'd like to achieve.

Do you want the same filters applied to both the QTreeView and QListView?

In the past, I have used the same QFileSystemModel for both the Tree and List Views. I keep the two views' locations in sync by calling setRootIndex(fileSystemModel->index(path)) on each view.

Surprise T Rex
Apr 9, 2008

Dinosaur Gum
I need the TreeView to only show directories and the ListView to only show files, so using the same FileSystemModel seems like it wouldn't work, unless I'm missing a detail. I did try implementing a QFileDialog instead, but I'd have to mess around with it a bit anyway, since the views aren't quite how I'd like them.

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug
3.4.0 is out :toot:

sharktamer
Oct 30, 2011

Shark tamer ridiculous

Does that mean python comes with pip included now?

BigRedDot
Mar 6, 2008

Well, it looks like Python will finally be getting a new infix operator suitable for matrix multiply! https://groups.google.com/forum/#!topic/python-ideas/aHVlL6BADLY%5B51-75-false%5D
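That proposal became PEP 465, and the operator landed in Python 3.5 as `@`, backed by the `__matmul__` hook. A toy demo of the hook itself; in practice you'd just use numpy arrays, and `Mat2` here is a made-up 2x2 stand-in.

```python
# Toy 2x2 matrix showing the __matmul__ hook behind the @ operator (PEP 465).
class Mat2:
    def __init__(self, a, b, c, d):
        # row-major: [[a, b], [c, d]]
        self.m = (a, b, c, d)

    def __matmul__(self, other):
        a, b, c, d = self.m
        e, f, g, h = other.m
        # standard 2x2 matrix product
        return Mat2(a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)
```

The whole point of the PEP was that `s = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)` beats nested `dot()` calls for readability.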

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

sharktamer posted:

Does that mean python comes with pip included now?

Yes, and as an added bonus, the global pip3.4 respects any active pyvenv-3.4 virtualenv. In 3.3 on Debian and Ubuntu, I had to install pip itself in the virtualenv since the system pip would try to write to /usr/local/lib/python3.3. (The old virtualenv package automatically installs pip and setuptools into virtualenvs as they're created.)

more like dICK
Feb 15, 2010

This is inevitable.
If there are any Ontarians heading to Pycon next month, check out http://pythononrails.ca/ for a really cool way to get there.

sofokles
Feb 7, 2004

Fuck this
I'm finally trying to learn a language properly, and python it is, since I found "Learn Python the Hard Way" online, which suits my learning style well. Wanting some training material, I found 46 Simple Python Exercises (http://www.ling.gu.se/~lager/python_exercises.html), which I'm enjoying very much so far. Looking forward, I find this, under "Simple exercises including I/O":

quote:

The International Civil Aviation Organization (ICAO) alphabet assigns code words to the letters of the English alphabet acrophonically (Alfa for A, Bravo for B, etc.) so that critical combinations of letters (and numbers) can be pronounced and understood by those who transmit and receive voice messages by radio or telephone regardless of their native language, especially when the safety of navigation or persons is essential. Here is a Python dictionary covering one version of the ICAO alphabet:
d = {'a':'alfa', 'b':'bravo', 'c':'charlie', 'd':'delta', 'e':'echo', 'f':'foxtrot',
'g':'golf', 'h':'hotel', 'i':'india', 'j':'juliett', 'k':'kilo', 'l':'lima',
'm':'mike', 'n':'november', 'o':'oscar', 'p':'papa', 'q':'quebec', 'r':'romeo',
's':'sierra', 't':'tango', 'u':'uniform', 'v':'victor', 'w':'whiskey',
'x':'x-ray', 'y':'yankee', 'z':'zulu'}

Your task in this exercise is to write a procedure speak_ICAO() able to translate any text (i.e. any string) into spoken ICAO words. You need to import at least two libraries: os and time. On a mac, you have access to the system TTS (Text-To-Speech) as follows: os.system('say ' + msg), where msg is the string to be spoken. (Under UNIX/Linux and Windows, something similar might exist.) Apart from the text to be spoken, your procedure also needs to accept two additional parameters: a float indicating the length of the pause between each spoken ICAO word, and a float indicating the length of the pause between each word spoken.

Perfect. (Under UNIX/Linux and Windows, something similar might exist.)

Fun, and challenging, in the way that Python has its own runtime that isn't perfectly aligned with .NET or COM or the CLR or whatever they are called deep down in the Windows engine room that powers my internet vehicle.

Now, part of my motivation for learning Python is that I've read it's popular as a "glue" language. I had my first sniff of glue when I was thirteen, and I never did it again, because the effects weren't that hilarious, but on the other hand gluing can't do much damage.

In fact it's probably gluing I want to learn more than anything else. Or as they call it : seamless integration.

I've done some decent problem solving and model building using Excel, Matlab, Mathematica, VBA, C#, VB and whatnot (but always task oriented, fixing something urgent now) so I'm pretty sure I can solve the "speak" part with a given string in a .NET environment using one of the text-to-speech libraries. However - the initialising - the handshaking - the sending of the string, or the characters (in a stream?) (over a socket or a port?) - the confirmation etc. Making two runtimes speak and solve a problem together is what I want to learn, and why this exercise is perfect.

So, where do I start? (I consider IronPython a workaround; I'm learning the hard way.)

vikingstrike
Sep 23, 2007

whats happening, captain
Write a function in whatever language you want to do the "speak" part and then use os.system to call it. With this approach you would use python to generate the arguments to pass to your other function.
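Along those lines, the pure-Python half of the exercise separates cleanly from the TTS call, which also keeps the translation part testable. A sketch; `to_icao` and `speak_icao` are invented names, `say` is the macOS command from the exercise text, and on other platforms the os.system line is where a different TTS command or function would plug in.

```python
# Sketch of the ICAO exercise: pure translation plus a thin TTS wrapper.
import os
import time

WORDS = ('alfa bravo charlie delta echo foxtrot golf hotel india juliett '
         'kilo lima mike november oscar papa quebec romeo sierra tango '
         'uniform victor whiskey x-ray yankee zulu').split()
ICAO = dict(zip('abcdefghijklmnopqrstuvwxyz', WORDS))

def to_icao(text):
    # pure part: 'ab' -> ['alfa', 'bravo']; non-letters are skipped
    return [ICAO[ch] for ch in text.lower() if ch in ICAO]

def speak_icao(text, letter_pause=0.5, word_pause=1.0):
    for word in text.split():
        for code in to_icao(word):
            os.system('say ' + code)  # macOS TTS per the exercise text
            time.sleep(letter_pause)
        time.sleep(word_pause)
```

The glue experiment then reduces to swapping that one os.system line for whatever the other runtime exposes.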

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

For Windows, you should be able to call the Microsoft Speech API (SAPI) from Python. I saw this page: http://stackoverflow.com/questions/15879802/python-win32com-client-geteventssapi-spsharedrecocontext-returns-none

I haven't ever tried calling any COM stuff with Python, it just seemed like a possibility so I googled for it.

BeefofAges
Jun 5, 2004

Cry 'Havoc!', and let slip the cows of war.

http://code.activestate.com/recipes/578839-python-text-to-speech-with-pyttsx/ ?

or

https://gist.github.com/alexsleat/1362973

pram
Jun 10, 2001
Maybe the Python thread is a better place for this :yayclod:

I'm thinking about writing a crappy GUI for some of my Ansible playbooks, so I can have other people execute them with a button or something. Any ideas on the best way to accomplish this? I was thinking of using django and keeping some light data like last execution etc and just using os.system. Are there any libraries or frameworks out there that work well with executing stuff like this?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

pram posted:

Maybe the Python thread is a better place for this :yayclod:

I'm thinking about writing a crappy GUI for some of my Ansible playbooks, so I can have other people execute them with a button or something. Any ideas on the best way to accomplish this? I was thinking of using django and keeping some light data like last execution etc and just using os.system. Are there any libraries or frameworks out there that work well with executing stuff like this?
Any reason you'd roll your own instead of using something like RunDeck?

more like dICK
Feb 15, 2010

This is inevitable.
I may be misunderstanding what you need, but what about Ansible Tower http://www.ansible.com/tower

pram
Jun 10, 2001

Misogynist posted:

Any reason you'd roll your own instead of using something like RunDeck?

Hmm interesting, I considered Jenkins but this looks better. Thanks!


more like dICK posted:

I may be misunderstanding what you need, but what about Ansible Tower http://www.ansible.com/tower

I've seen this but the pricing is ridiculous.

namaste friends
Sep 18, 2004

by Smythe
I think I know the answer to this but I thought I'd ask you guys anyway. I've been writing some crap that makes use of the jsonselect module. I've decided that I'd like to build a curses interface that requires the npyscreen module. The problem is npyscreen, while it 'supports' python 2.7, doesn't work properly unless I'm using python 3.3. Unfortunately, jsonselect doesn't support python 3 at all.

I think the answer is, 'be a better coder and stop using all these lovely modules'. Unfortunately I'm time and intellect limited so I'm thinking of just reverting back to python 2.7 (the crux of my code is to take two big json files, find a specific leaf in each json, calculate a diff) and figuring out something else for my interface.

What would you guys do?
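If the curses UI is the only reason to want 3.3, one option is dropping jsonselect entirely, since the stated task is "find one leaf in each file and diff it", which is a few lines of stdlib. A rough sketch; `get_leaf` and its tuple-of-keys path format are invented here, not part of any library.

```python
# Stdlib-only leaf lookup and diff for two JSON documents.
import json

def get_leaf(obj, path):
    # path is a tuple of dict keys / list indexes, e.g. ('servers', 0, 'hostname')
    for key in path:
        obj = obj[key]
    return obj

def leaf_diff(json_a, json_b, path):
    # returns None if the leaves match, else the pair of differing values
    a = get_leaf(json.loads(json_a), path)
    b = get_leaf(json.loads(json_b), path)
    return None if a == b else (a, b)
```

That runs identically on 2.7 and 3.3, so npyscreen and the diff logic stop fighting over interpreter versions.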

Dren
Jan 5, 2001

Pillbug
It appears there are a ton of JSONPath projects. Maybe try jsonpath-rw instead? https://pypi.python.org/pypi/jsonpath-rw. It says it works in 2.7 and 3.3.

evensevenone
May 12, 2001
Glass is a solid.

pram posted:

Maybe the Python thread is a better place for this :yayclod:

I'm thinking about writing a crappy GUI for some of my Ansible playbooks, so I can have other people execute them with a button or something. Any ideas on the best way to accomplish this? I was thinking of using django and keeping some light data like last execution etc and just using os.system. Are there any libraries or frameworks out there that work well with executing stuff like this?

We use Jenkins for this, which is kind of nice since that way they get run off a central server that stores the logs of each run, multiple people can watch a run in progress, the build doesn't abort if you close the window, etc. You can also set up about 9,000 notification options.

Maluco Marinero
Jan 18, 2001

Damn that's a
fine elephant.
I'm setting up CI for a project, first time using Python in that context and I'm wondering what approach is best.

From my brief investigation so far it seems like tox can be fed my requirements.txt so I can spin up a virtualenv and run tests in the QA phase; after that, for deployment, should I just use tox on the deployment target? Or should I be using pex or wheels or something like that. In most cases I'll definitely be deploying to another Linux server, but not always sure whether it'll be rpm or Debian.

Any advice from people who've done it before?

good jovi
Dec 11, 2000

'm pro-dickgirl, and I VOTE!

Maluco Marinero posted:

I'm setting up CI for a project, first time using Python in that context and I'm wondering what approach is best.

From my brief investigation so far it seems like tox can be fed my requirements.txt so I can spin up a virtualenv and run tests in the QA phase; after that, for deployment, should I just use tox on the deployment target? Or should I be using pex or wheels or something like that. In most cases I'll definitely be deploying to another Linux server, but not always sure whether it'll be rpm or Debian.

Any advice from people who've done it before?

If you're deploying to multiple environments, not just using different python versions, I don't think tox is really enough for you. It sounds like you just need separate testing/integration environments for each target. Maybe a build slave for each?
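For the tox half of the question, feeding requirements.txt in as deps keeps the test env matching the app. A minimal sketch of a tox.ini along those lines; the envlist, the pytest dependency, and the tests/ path are placeholders for whatever the project actually uses.

```ini
# tox.ini - minimal sketch; envlist, deps, and paths are placeholders
[tox]
envlist = py27,py33

[testenv]
deps =
    -rrequirements.txt
    pytest
commands = pytest tests/
```

Deployment itself is then a separate concern (wheels, pex, or distro packages), with tox only owning the QA phase.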


Sir_Substance
Dec 13, 2013
I have a question about the interpreter for y'all.

I'm running a long-running task on an Arduino. The task itself is not relevant, but it periodically sends status updates over serial, which I am picking up with a python script on my raspberry pi and echoing across jabber so they hit my phone.

The problem is, the python script is killing itself after about 20-30 hours. It must be a problem with the script because I've run the program for 48 hours on my desktop using the arduino IDE serial monitor and the arduino is fine.

Does the python interpreter have some kind of "you've been running in a constant loop too long I'm going to :commissar: you" built into it?
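To answer the literal question: no, CPython has no watchdog that kills long-running loops. Twenty-plus hours in, the usual suspects are an unhandled exception (a serial glitch, a jabber disconnect) or slow memory growth. A sketch of a loop that logs instead of dying; `read_status` and `forward_to_jabber` are stand-ins for the pyserial and jabber calls, and `max_steps` exists only so the sketch can be exercised.

```python
# Keep-alive wrapper for a read-and-forward loop: log failures, keep going.
import logging
import time

logging.basicConfig(filename='monitor.log', level=logging.INFO)

def run_forever(read_status, forward_to_jabber, retry_delay=5, max_steps=None):
    steps = 0
    while max_steps is None or steps < max_steps:
        steps += 1
        try:
            msg = read_status()            # e.g. ser.readline() with pyserial
            if msg:
                forward_to_jabber(msg)
        except KeyboardInterrupt:
            raise
        except Exception:
            # the part that's probably missing now: record it, don't die
            logging.exception('loop error; retrying in %ss', retry_delay)
            time.sleep(retry_delay)
```

If the script still vanishes with nothing in the log, it was killed from outside (OOM killer, say), and `dmesg` on the pi would show that.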
