|
Pretty sure BigRedDot wrote bokeh
|
# ? Jan 28, 2015 00:35 |
|
|
# ? Jun 13, 2024 07:07 |
|
dear bigreddot please include pylint by default in anaconda installers, I have to fill out paperwork that gets signed by several people each time I want a tiny little change in my software
|
# ? Jan 28, 2015 00:38 |
|
cingulate posted:Yes, I already have the MKL and actually just compiled R to make use of it (... instead of going the comfortable route and downloading Revolution Analytics' R distribution).

As for compiling it yourself, I mean if you like tinkering or would like to learn about building stuff like this, then sure, I suppose there's no reason not to give it a shot. The NumPy site.cfg (the file you need to change before you build) actually has a section commented out with some instructions on how to build with MKL. If you want to try to build a real conda package, you can even check out the conda recipe that is used to build mkl numpy on github: https://github.com/conda/conda-recipes/tree/master/numpy-mkl OTOH you'll probably spend more than 30 bucks of your time getting it done, so if you just want it to work, now...

Blinkz0rz posted:Pretty sure BigRedDot wrote bokeh

Hah! Would that I could take credit for the whole thing. Bokeh has become a fairly large project and is absolutely a team effort. Especially happy that we seem to be getting more new contributors lately. I will toot my own horn about one thing though: I just wrote a bunch of Sphinx extensions for bokeh; in particular you can inline bokeh plots directly into sphinx .rst files. Will be available in a dev build later this week or the 0.8 release next month.

pmchem posted:dear bigreddot please include pylint by default in anaconda installers, I have to fill out paperwork that gets signed by several people each time I want a tiny little change in my software
|
# ? Jan 28, 2015 06:08 |
|
dear bigreddot please get the conda statsmodels package to the recent (0.6.1) version I need the ordinal GEE api Also I'll ask my

Edit: BigRedDot posted:I work for Continuum, and I wrote the original version of conda

I've started using Anaconda for all my Python needs and have just recommended our newest PhD students to set up their systems using Anaconda

Cingulate fucked around with this message at 10:23 on Jan 28, 2015 |
# ? Jan 28, 2015 10:20 |
|
While you're here, what's the typical lag time between NVidia releasing a new CUDA compute capability (such as the GTX 980 with CC 5.2) and NumbaPro supporting those new features?
|
# ? Jan 28, 2015 10:30 |
|
Cingulate posted:dear bigreddot please get the conda statsmodels package to the recent (0.6.1) version I need the ordinal GEE api

code:

Quarkjets, I posed the question, I'll let you know what I hear back.

BigRedDot fucked around with this message at 16:47 on Jan 28, 2015 |
# ? Jan 28, 2015 16:44 |
|
What's the best option for me to generate documentation for a module based on the docstrings? Sphinx?
|
# ? Jan 28, 2015 17:30 |
|
Quarkjets, this was the reply:

quote:A couple facets to the answer: With the next Numba and NumbaPro release, we should be forward compatible with future compute capabilities as long as the user has an up-to-date NVIDIA driver installed.
|
# ? Jan 28, 2015 18:06 |
|
EAT THE EGGS RICOLA posted:What's the best option for me to generate documentation for a module based on the docstrings? Sphinx?

If you want to document a few modules on one page, you'll often end up with .rst sources that look similar to this: code:

Edit: I have had to learn waaaaay more about sphinx and docutils internals than I ever would have thought (because their docs kind of suck... irony?), so if you have specific questions let me know.

BigRedDot fucked around with this message at 18:12 on Jan 28, 2015 |
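The example .rst didn't survive the scrape, but a "few modules on one page" source usually looks roughly like the stub below. This is a sketch assuming the stock sphinx.ext.autodoc extension; the module names are placeholders:

```rst
My Package
==========

.. automodule:: mypackage.core
   :members:
   :undoc-members:
   :show-inheritance:

.. automodule:: mypackage.utils
   :members:
```

For the directives to work, `sphinx.ext.autodoc` has to be listed in the `extensions` setting of the project's conf.py.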
# ? Jan 28, 2015 18:10 |
|
BigRedDot posted:
Cool, thanks. I've used it before a bunch of times, was more checking that I wasn't missing some new and wonderful thing for documentation.
|
# ? Jan 28, 2015 18:40 |
|
BigRedDot posted:It looks like it already is?
|
# ? Jan 28, 2015 18:52 |
|
Looking for advice on handling an incoming data stream. I'm dealing with information from a stock broker. My previous setup made GET and POST calls using requests: I'd ask the server for data, and I'd get JSON.

Now I'm dealing with a server that sends a stream of messages. The connection can assign 'watcher' functions that take a single input (a text message from the data stream) and do something with it, i.e. print it, log it, etc. Whenever a new message comes in, the function processes it automatically, in its own thread I think. What's the best way to capture this data?

Here's an implementation I made that finds data that arrives in an arbitrary order when requested, and matches it up as soon as it has the info. It's pretty messy, and I'm assuming there's a nicer or more generally applicable way. I think I need some way to make it time out if unsuccessful. Maybe a timer in a new thread that can kill the loop?

Python code:
Dominoes fucked around with this message at 21:29 on Jan 28, 2015 |
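Dominoes' actual code is gone, but the watcher-plus-timeout pattern described above can be sketched with a queue.Queue, which provides the timeout for free. All names here are hypothetical stand-ins, including the toy "key:value" message format:

```python
import queue
import threading

incoming = queue.Queue()

def watcher(message):
    """Callback the broker connection would fire for each stream message."""
    incoming.put(message)

def collect(expected_keys, timeout=5.0):
    """Pull messages until every expected key is matched, or time out."""
    matched = {}
    while set(matched) != set(expected_keys):
        try:
            msg = incoming.get(timeout=timeout)
        except queue.Empty:
            break  # timed out waiting for the rest
        key = msg.split(":", 1)[0]  # toy message format: "key:value"
        if key in expected_keys:
            matched[key] = msg
    return matched

# Simulate the broker pushing messages from other threads, out of order.
for m in ("bid:101.5", "ask:101.7"):
    threading.Thread(target=watcher, args=(m,)).start()

result = collect({"bid", "ask"}, timeout=2.0)
```

The timeout lives in the consumer rather than in a separate killer thread: if the stream goes quiet, `queue.Empty` fires and the loop exits with whatever was matched so far.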
# ? Jan 28, 2015 21:08 |
|
I would use asyncio, although that may entail making lots of other changes.

Python code:
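Thermopyle's snippet is also lost. A minimal sketch of the idea in today's async/await syntax (the thread's 2015-era Python would have used @asyncio.coroutine and yield from instead): messages land on an asyncio.Queue and a consumer reads them with a timeout.

```python
import asyncio

async def produce(q):
    # Stand-in for the broker stream pushing messages.
    for msg in ("bid:101.5", "ask:101.7"):
        await q.put(msg)

async def consume(q, n, timeout=2.0):
    """Read up to n messages, giving up if the stream goes quiet."""
    out = []
    for _ in range(n):
        try:
            out.append(await asyncio.wait_for(q.get(), timeout))
        except asyncio.TimeoutError:
            break
    return out

async def main():
    q = asyncio.Queue()
    await produce(q)
    return await consume(q, 2)

messages = asyncio.run(main())
```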
|
# ? Jan 28, 2015 21:24 |
|
Thanks; that library seems like exactly what I'm looking for. The docs are a bit daunting, but giving it a shot.
|
# ? Jan 29, 2015 17:27 |
|
Dominoes posted:Thanks; that library seems like exactly what I'm looking for. The docs are a bit daunting, but giving it a shot.

I'm lazy and barely thought about what you're doing, but look into gevent or eventlet and see if they're something that does something.
|
# ? Jan 29, 2015 18:53 |
|
Thermopyle posted:I'm lazy and barely thought about what you're doing, but look into gevent or eventlet and see if they're something that does something. Dominoes fucked around with this message at 00:12 on Jan 30, 2015 |
# ? Jan 30, 2015 00:08 |
|
Edit: Never mind, answered my own question.
PongAtari fucked around with this message at 15:08 on Jan 30, 2015 |
# ? Jan 30, 2015 15:06 |
|
Stupid question: Does upgrading from 3.3.0 to 3.4.2 mean I'll have to change any of my code?
|
# ? Jan 30, 2015 22:23 |
|
No. New versions (other than going from 2 to 3) are backwards compatible, unless you use packages marked as provisional. Not all 3.4 code will work on 3.3, but 3.3 code will work on 3.4.
|
# ? Jan 30, 2015 22:42 |
|
Lazerbeam posted:Stupid question: Does upgrading from 3.3.0 to 3.4.2 mean I'll have to change any of my code?

Here's a comparison of the major differences going from 3.3 to 3.4.2: https://www.python.org/downloads/release/python-342/ It's mostly additions. Any changes are extremely low-level, so you probably won't have to change any of your code unless you're doing something extremely weird
|
# ? Jan 30, 2015 22:44 |
|
Lazerbeam posted:Stupid question: Does upgrading from 3.3.0 to 3.4.2 mean I'll have to change any of my code?

Stuff was deprecated, but you probably didn't use any of it. There were some C API changes. So almost certainly not. https://docs.python.org/3/whatsnew/3.4.html#deprecated-3-4

E: welp, thanks awful.apk
|
# ? Jan 30, 2015 23:12 |
|
Just upgraded and my stuff seems to work. Thanks
|
# ? Jan 30, 2015 23:38 |
|
How do I add a self-signed certificate to somewhere that python will trust it for https? This is on a CentOS machine.

E.g.: My company has a self-signed certificate we use for testing stuff. I've trusted this certificate on my machine every way I know how. I can go to https://test.company.com (which serves the self-signed certificate), and I don't get a browser warning. I can curl https://test.company.com and it works. I can openssl s_client test.company.com and it works. If I use this python tool, I get loving "ERROR (SSLError): [Errno 1] _ssl.c:492: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed"

edit: had to add it to /usr/lib/python2.6/site-packages/requests/cacert.pem

Illusive Fuck Man fucked around with this message at 20:06 on Feb 2, 2015 |
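The fix in the edit boils down to appending the company CA onto the bundle requests actually reads. A shell sketch of that; temp files stand in for the real paths here so it's self-contained, but in practice BUNDLE would be the cacert.pem path from the post and CA_CERT the company's cert (filename hypothetical):

```shell
# Stand-in paths so this can run anywhere; in practice:
#   BUNDLE=/usr/lib/python2.6/site-packages/requests/cacert.pem
#   CA_CERT=company-ca.pem   (your self-signed cert)
BUNDLE=$(mktemp)
CA_CERT=$(mktemp)
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > "$CA_CERT"

# The actual fix: append the CA onto the bundle requests verifies against.
cat "$CA_CERT" >> "$BUNDLE"

grep -c 'BEGIN CERTIFICATE' "$BUNDLE"   # 1: the CA is now in the bundle
```

Editing the bundled file gets clobbered on upgrade; requests also accepts `verify='/path/to/ca-bundle.pem'` per call, which avoids touching the package at all.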
# ? Feb 2, 2015 17:39 |
|
I don't get parallel for loop syntax. At all. Like, I keep staring at the documentation for multiprocessing or whatever, and it's all Greek. I just want to do something like

parfor x in range(0, 10): my_list[x] = foo(x)

FWIW, this is on Python 2.7. IPython, that is.
|
# ? Feb 2, 2015 20:39 |
|
Cingulate posted:I don't get parallel for loop syntax. At all. Like, I keep staring at the documentation for multiprocessing or whatever, and it's all Greek.

What are you trying to accomplish? Python code:

Also, the standard warning about blocking in Python: use the non-blocking methods multiprocessing provides if you're avoiding threading because of blocking slowdowns.

salisbury shake fucked around with this message at 21:01 on Feb 2, 2015 |
# ? Feb 2, 2015 20:55 |
|
Cingulate posted:I don't get parallel for loop syntax. At all. Like, I keep staring at the documentation for multiprocessing or whatever, and it's all Greek.

This article may be helpful: http://chriskiehl.com/article/parallelism-in-one-line/ In particular, using the map() function on a multiprocessing Pool() object at the very end.
|
# ? Feb 2, 2015 20:55 |
|
e: nm
|
# ? Feb 4, 2015 00:01 |
|
I'm trying to extend my unit/functional testing and do some development that doesn't hit live URLs for my web scraping scripts. I'm struggling to understand how to monkeypatch or mock/stub the requests module. The answers I'm finding via StackOverflow and web search are incomplete or just punt the solution down the field by recommending additional libraries like responses/HTTPretty.

All I want for my basic testing/development is to override the requests.get() method to return an object with a .content attribute that I'd fill with a local HTML file/string, but I can't find a complete example. I'd like to see a solution that uses the mock module (since it's part of the standard 3.x library) so I can apply the knowledge to other objects or attributes down the road, using a common module rather than some custom fake class I can cobble together.

Python code:

EDIT: I've been able to get the desired results by making fake classes in the test module and manually overriding methods and attributes, but it doesn't seem like the best way to address the issue, especially if I ever had to work with others on the same codebase.

onionradish fucked around with this message at 20:12 on Feb 4, 2015 |
# ? Feb 4, 2015 20:07 |
|
Mocking is something like this: Python code:
The tricky part of mocking is that you mock where the imported function is used, not where it was declared. For example: Python code:
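Both code snippets were lost in the scrape; here's a self-contained sketch of the idea. It fakes a hypothetical scraper module (imagine a scraper.py that does "from requests import get" at the top); all module and function names are made up for the example. The point is that the patch targets scraper.get, the name where get is used, not requests.get:

```python
import sys
import types
from unittest import mock

# Stand-in for a hypothetical scraper.py that did "from requests import get".
scraper = types.ModuleType("scraper")

def _fetch(url):
    # In the real scraper.py this would call the imported get() name.
    resp = scraper.get(url)
    return resp.content.decode()

scraper.fetch = _fetch
sys.modules["scraper"] = scraper

# The fake response: just an object with the .content attribute the post wants.
fake_response = mock.Mock()
fake_response.content = b"<html>local test page</html>"

# Patch the name where it is *used* (scraper.get), not requests.get.
# create=True because we never actually imported requests here.
with mock.patch.object(scraper, "get", return_value=fake_response, create=True):
    result = scraper.fetch("http://example.com")
```

With a real module on disk the equivalent spelling is mock.patch("scraper.get", ...) as a decorator or context manager inside the test.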
|
# ? Feb 4, 2015 20:34 |
|
Thanks -- I'd been making it way more complicated than necessary!
|
# ? Feb 5, 2015 15:15 |
|
vikingstrike posted:This article may be helpful: http://chriskiehl.com/article/parallelism-in-one-line/ In particular, using the map() function on a multiprocessing Pool() object at the very end.

salisbury shake posted:What are you trying to accomplish?

Thank you, it actually did.
|
# ? Feb 6, 2015 01:42 |
|
Cingulate posted:Thank you, it actually did.

You can use a loop and accomplish the same thing with Pool.apply() or apply_async()
|
# ? Feb 6, 2015 05:16 |
|
I'm running 2.7.x on Windows and have an sqlite3 annoyance. The version of the sqlite3.dll that comes bundled with Python 2.7 on Windows (at least) is not current, so some of my scripts that need the new DLL fail (like access to the Firefox bookmark Sqlite database).

Up to now, I've just manually replaced the default sqlite3.dll file in C:\Python27\DLLs with a newer one manually downloaded from the sqlite site. Of course, I have to remember to do this every time there's a Python upgrade or I install from scratch. Is there a "proper" way to override the default sqlite3 DLL or handle this other than what I'm doing?

And, as a rant, why the hell hasn't the Python 2.7 Windows distro been updated to use the latest DLL in the first place? I just updated to 2.7.9 and got bit in the rear end with this yet again.
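One way to verify a DLL swap took is to ask the interpreter which SQLite it is actually linked against; the stdlib sqlite3 module exposes this directly:

```python
import sqlite3

# Version of the SQLite C library this interpreter is linked against
# (on Windows, that's the bundled sqlite3.dll in C:\Python27\DLLs).
print(sqlite3.sqlite_version)        # e.g. "3.8.3"
print(sqlite3.sqlite_version_info)   # the same, as a comparable tuple
```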
|
# ? Feb 6, 2015 18:04 |
|
onionradish posted:I'm running 2.7.x on Windows and have an sqlite3 annoyance.

I don't really know the answers to your question, but have you checked to see whether Python 3.4 has the latest DLL? That might kill two birds with one stone, since 2.7 is kind of becoming deprecated. Also, have you tried using Anaconda? It might have the latest DLL
|
# ? Feb 6, 2015 19:30 |
|
pmchem posted:dear bigreddot please include pylint by default in anaconda installers, I have to fill out paperwork that gets signed by several people each time I want a tiny little change in my software

I'm so glad I'm not working govt science anymore. This was literally the bane of my existence. I managed to cause an interdepartmental shitfight that led to a senior bureaucrat resigning in protest after I committed an urgent code change live, to fix an error in wind speed calculation, without going through UAT and all the blah blah committees and change requests and bullshit ITIL juggling, since loving firefighters were in danger by the calculation being wrong. But when you're a bureaucrat, having your triplicate-filled CRF go through the proper channels and get stamped by middle managers in all 7 circles of hell is more important than not having dead firemen. gently caress that poo poo!
|
# ? Feb 9, 2015 04:34 |
|
Okay, I think I basically got the multiprocessing thing now - my problem was that I was trying to avoid a functional style, but once I stopped trying to make everything be a for loop, it started making sense.

However, the thing I want to parallelise is already parallelised. What I mean is: I have a function that inherently utilises 10 or so of our (>100) cores. I want to run multiple instances of that function in parallel, to get closer to utilising 50 or so of our cores (and no, I can't really make the functions themselves parallelise more efficiently).

Basically, I want to apply a large decomposition to large datasets. I have 20 independent large datasets and want to process them in parallel, but the decomposition function is already mildly parallelised. When I simply do what's explained in vikingstrike's link (multiprocessing.dummy.Pool), I actually make everything much slower, because the individual sessions only utilise 1 core.

Can I somehow parallelise parallelised functions (execute multiple instances of a parallelised function in parallel)? Am I making sense?
|
# ? Feb 11, 2015 01:03 |
|
In the scrabble solver I'm working on, there's a big difference in the time it takes to compute moves when it's running by itself and when it's running as part of the Flask web app. For example, given the following game: I ran the computation 10 times, both on its own and as part of the Flask app. The Flask app is importing the same module, instantiating the same classes and calling the same methods as the standalone test. Calculating every legal move, these are the results: code:
code:
|
# ? Feb 11, 2015 17:59 |
|
This is loving awesome.
|
# ? Feb 11, 2015 18:31 |
|
I am drawing a blank on this stupid problem, so I come for your help. I'm using the pysvn module and getting an error on a unicode file name. If I run svn by hand from the command line everything works fine, but pysvn throws this error... code:
code:
|
# ? Feb 11, 2015 18:31 |
|
|
|
ArcticZombie posted:In the scrabble solver I'm working on, there's a big difference in the time it takes to compute moves when it's running by itself and when it's running as part of the Flask web app.

Could it be that every time you update the blank tile to check a new letter, this is invoking something unnecessary in Flask? It's kind of hard to be sure of anything without seeing any of your code
|
# ? Feb 11, 2015 20:16 |