pmchem
Jan 22, 2010


CarForumPoster posted:

Why not just use pandas?

Pandas' HDF5 support has pytables as a dependency, so maybe he is?


mr_package
Jun 13, 2000
Is anyone seriously taking functional concepts and writing Python using them? Even just writing functions in a more pure way, not mutating state, etc.

For example, I have some data in a dictionary, and if I want to add a new key/value pair I just do dictionary['key'] = value. This is totally standard Python. But you can also write a function that returns a new, modified dictionary, and using PEP448 it's even a one-liner:
code:
def add_to_dict(source: dict, new_key: str, new_value: int) -> dict:
    return {**source, **{new_key: new_value}}
This seems insane to me, because dictionaries are supposed to be mutable and that's how we use them. But should we? Before I drink the kool aid on this, has anyone actually started writing Python in this way and is it actually giving any of the benefits I hear about from FP land?

Was kind of inspired by this https://kite.com/blog/python/functional-programming but also been reading about it for a while.

edit: fix: need "**" in front of {new_key: new_value}

mr_package fucked around with this message at 08:31 on Apr 4, 2019
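A quick sketch of what the PEP 448 merge actually does (the names here are illustrative): the source dict is never mutated, and the inner dict literal isn't even needed, since a plain `key: value` entry can sit next to the unpacking:

```python
def add_to_dict(source: dict, new_key: str, new_value: int) -> dict:
    # PEP 448 unpacking builds a brand-new dict; source is never mutated.
    return {**source, new_key: new_value}

original = {"a": 1}
updated = add_to_dict(original, "b", 2)
assert original == {"a": 1}             # untouched
assert updated == {"a": 1, "b": 2}      # new dict with the extra pair
```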

NinpoEspiritoSanto
Oct 22, 2013




I write a lot of FP in python. Paradigms should be used for what they make sense to use them for. Sometimes you NEED mutable state, sometimes you need classes. Often, however, you can do most simple things with an FP approach. FP is easy to reason about because you can talk about your flow in terms of data structures.

Functions and fixed data is loving great for testing btw.

Don't fall into the "x pattern is the best" trap though, because that's what got the world stuck on OOP. OOP is great for a lot of things, FP is great for a lot of others, and a whole bunch of apps, if they're honest, should be making use of both if they don't already. Use what makes sense for the problem you're solving. The great thing about python is you can bolt them together and have it work. One thing I really like about python is that despite being purely OOP in the implementation, it doesn't get in your way even if OOP is the last thing you want (for the most part... it still shits about with recursion, annoyingly).

NinpoEspiritoSanto fucked around with this message at 00:57 on Apr 4, 2019

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
When she says

quote:

When possible and reasonably convenient, try to keep functions “pure”, and keep state that changes in well-thought-out, well marked places.
It's less about keeping the functions pure as such than about keeping your data consistent and not redundant (as far as python is concerned). I've been learning django for a project and they throw around the term DRY a lot, and that is the idea: replace as much of the data (the state of your program) as you can with equivalent code, that is, code that produces an equivalent result. This makes it far easier to reason about the state of your program at any given point.

She mentions "idempotent functions", but you want to consider what input from the user or system calls (impurities) will be non-idempotent, and will require a change to the program's state; and which inputs are really idempotent where the result can be produced purely using code without changing the program's state.

The primary purpose of using immutable data structures is to avoid scope errors. That is, it prevents an inner scoped function from altering the previously asserted behavior of an outer function that calls it.

If you look at the code I posted on the last page, as an example of functional programming, you should notice that there is only one piece of data that is effectively mutable: the 'guides' list variable. I could've used reduce or something but whatever. 'columnized', is only read from to generate the 'header' and 'data' lists. Everything is done from two reads of the file from 'parts' and 'other_parts', once to discover the guide sizes, and then once more to sort the data into columns.

I consider dicts to be a sort of 'backing store' for your program when used mutably, or as a sort of annotated source of data when used immutably. The only sort of use for that example that I can think of is when you want to 'tag' some data with additional data on the way out the door, but you should never remove items! Of course you would not have this guarantee in python unless you used a dict type that is explicitly immutable.

dougdrums fucked around with this message at 02:03 on Apr 4, 2019
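The scope-error point is easy to demonstrate with a runnable sketch (function names and prices are invented for illustration): a mutating helper silently changes the caller's data, while the pure version leaves it alone.

```python
def apply_discount_buggy(prices):
    # Mutates its argument: the caller's dict changes behind its back.
    for k in prices:
        prices[k] = prices[k] * 9 // 10
    return prices

def apply_discount_pure(prices):
    # Returns a fresh dict; the caller's data is untouched.
    return {k: v * 9 // 10 for k, v in prices.items()}

menu = {"coffee": 300, "tea": 200}           # prices in cents
sale = apply_discount_pure(menu)
assert menu == {"coffee": 300, "tea": 200}   # original preserved
assert sale == {"coffee": 270, "tea": 180}

apply_discount_buggy(menu)
assert menu == {"coffee": 270, "tea": 180}   # caller's dict was altered
```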

Dominoes
Sep 20, 2007

mr_package posted:

Is anyone seriously taking functional concepts and writing Python using them? Even just writing functions in a more pure way, not mutating state, etc.

For example, I have some data in a dictionary, and if I want to add a new key/value pair I just do dictionary['key'] = value. This is totally standard Python. But you can also write a function that returns a new, modified dictionary, and using PEP448 it's even a one-liner:
code:
def add_to_dict(source: dict, new_key: str, new_value: int) -> dict:
    return {**source, **{new_key: new_value}}
This seems insane to me, because dictionaries are supposed to be mutable and that's how we use them. But should we? Before I drink the kool aid on this, has anyone actually started writing Python in this way and is it actually giving any of the benefits I hear about from FP land?

Was kind of inspired by this https://kite.com/blog/python/functional-programming but also been reading about it for a while.
Don't stress it. That's nice syntax; use it if it makes your code easier to read/write. No need to be pure.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
Yeah honestly that update syntax along with the fact that you can do dictionaries in comprehensions is one of my favorite things about python.

E: It just occurred to me that you can rewrite existing keys with that syntax, but I guess it produces a separate object?

dougdrums fucked around with this message at 02:54 on Apr 4, 2019
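A minimal check answers both halves of that question (values invented): later entries in the unpacking win, and the result is a separate object every time.

```python
d = {"a": 1, "b": 2}
d2 = {**d, "b": 99}            # duplicate key: the rightmost entry wins

assert d2 == {"a": 1, "b": 99}
assert d == {"a": 1, "b": 2}   # the original dict is untouched
assert d2 is not d             # a brand-new object, not a view
```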

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I want to try more functional stuff in Python but the single-expression lambda thing turns me off. I'm willing to be beaten over the head about how infantile that is.

If we want something to play with in that regard, I recently had a list of strings that I wanted to iterate over in groups of four. Generally there's some mish-mash of syntax in itertools to do that, but the options all felt kind of wonky. It smelled like something with a functional solution (i.e. "do this to every group of four elements of this list") but expressing it was gross.
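One way to sketch that "groups of four" iteration with just the stdlib (the `chunked` name is invented; nothing with this exact shape ships in itertools itself, though similar recipes appear in the itertools docs):

```python
from itertools import islice

def chunked(iterable, n):
    """Yield successive n-sized tuples; the last one may be shorter."""
    it = iter(iterable)
    while True:
        chunk = tuple(islice(it, n))   # take up to n items off the iterator
        if not chunk:
            return
        yield chunk

groups = list(chunked("abcdefghij", 4))
assert groups == [("a", "b", "c", "d"), ("e", "f", "g", "h"), ("i", "j")]
```

For lock-step groups of exactly four, `zip(*[iter(strings)] * 4)` also works, though it silently drops a short tail.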

Furism
Feb 21, 2006

Live long and headbang
Is there a way to set up PyCharm so that I can point it at the source code of a package, and when I import that package in a module, PyCharm won't look in the installed packages?

My use case is that I work on a package, which is Project A, but I want to reference that package in Project B while still being able to add code as needed to package A. But when I share the module of Package B, I'd like people to fetch it from PyPi (because I also publish the Project A to PyPi). Am I making sense?

Basically I want to be able to say

code:
import furism_package
in my code, but on my machine it'll look into the source code, while for anybody else it'll look in the installed packages.

SurgicalOntologist
Jun 17, 2004

That's the normal behavior. However the package is installed, that's where python and pycharm will look.

Maybe what you're missing is an editable install? Install package A with
code:

pip install -e .

Rocko Bonaparte posted:

I want to try more functional stuff in Python but the single-expression lambda thing turns me off. I'm willing to be beaten over the head about how infantile that is.

If we want something to play with in that regard, I recently had a list of strings that I wanted to iterate over in groups of four. Generally there's some mish-mash of syntax in itertools to do that, but the options all felt kind of wonky. It smelled like something with a functional solution (i.e. "do this to every group of four elements of this list") but expressing it was gross.

Check out the functions in the toolz package.

SurgicalOntologist fucked around with this message at 13:59 on Apr 4, 2019

Ahz
Jun 17, 2001
PUT MY CART BACK? I'M BETTER THAN THAT AND YOU! WHERE IS MY BUTLER?!

Furism posted:

Is there a way to set up PyCharm so that I can point it at the source code of a package, and when I import that package in a module, PyCharm won't look in the installed packages?

My use case is that I work on a package, which is Project A, but I want to reference that package in Project B while still being able to add code as needed to package A. But when I share the module of Package B, I'd like people to fetch it from PyPi (because I also publish the Project A to PyPi). Am I making sense?

Basically I want to be able to say

code:
import furism_package
in my code, but on my machine it'll look into the source code, while for anybody else it'll look in the installed packages.

Yes, in Settings -> Project Structure you can add arbitrary directories as libraries/sources/resources, without installing packages the typical way via pip into site-packages.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Furism posted:

Is there a way to set up PyCharm so that I can point it at the source code of a package, and when I import that package in a module, PyCharm won't look in the installed packages?

My use case is that I work on a package, which is Project A, but I want to reference that package in Project B while still being able to add code as needed to package A. But when I share the module of Package B, I'd like people to fetch it from PyPi (because I also publish the Project A to PyPi). Am I making sense?

Basically I want to be able to say

code:
import furism_package
in my code, but on my machine it'll look into the source code, while for anybody else it'll look in the installed packages.

This is The Correct Way:

SurgicalOntologist posted:

Maybe what you're missing is an editable install? Install package A with
code:
pip install -e .


It's not pycharm specific, because library developers have always needed a way to do this.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Thermopyle posted:

I spent a dozen hours working on something that works better than pyautogui for Windows machines.

Then while Googlin' something else I found that pywinauto exists and incorporates 90% of what I wanted to do.

gently caress.

I wanted to go back to this because I was reminded of pywinauto today. This is the most troublesome module of about twenty or so that my work project drags into its stuff. A lot of that comes down to how it does or doesn't install itself in relation to pywin32.

For example, yesterday I discovered that its setup.py only declares the dependency on pywin32 if it detects that pywin32 isn't already installed. So if you are, say, combing dependency metadata in order to convert all of your Python modules to Chocolatey packages, you will miss it, and pip will try to hit the public web to get pywin32. I've also had problems with having to restart pip after installing it and things like that. For GUI automation, I'm trying to move over to bindings to Microsoft's UI Automation framework. It seems to attach to just about anything--unlike AutoIt, for example.

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

Rocko Bonaparte posted:

If we want something to play with in that regard, I recently had a list of strings that I wanted to iterate over in groups of four. Generally there's some mish-mash of syntax in itertools to do that, but the options all felt kind of wonky. It smelled like something with a functional solution (i.e. "do this to every group of four elements of this list") but expressing it was gross.

ime functional languages tend to have a built-in function that does this, I don't think python does yet - whenever I try to use it in a functional way it always feels a little half-baked to me. Probably because it was designed with stuff like comprehensions and generator expressions that encourage pure functional behaviour (take an input and iterate over it to produce a new output), so all the other things feel a bit like an afterthought to try and expand that. (I know map and filter have been in there forever, but a lot of the time it's like... why not just use a generator or comprehension?)

look at partition in the link SurgicalOntologist posted anyhow

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Rocko Bonaparte posted:

I wanted to go back to this because I was reminded of pywinauto today. This is the most troublesome module of about twenty or so that my work project drags into its stuff. A lot of that comes down to how it does or doesn't install itself in relation to pywin32.

For example, yesterday I discovered that its setup.py only declares the dependency on pywin32 if it detects that pywin32 isn't already installed. So if you are, say, combing dependency metadata in order to convert all of your Python modules to Chocolatey packages, you will miss it, and pip will try to hit the public web to get pywin32. I've also had problems with having to restart pip after installing it and things like that. For GUI automation, I'm trying to move over to bindings to Microsoft's UI Automation framework. It seems to attach to just about anything--unlike AutoIt, for example.

I actually picked my project back up because I decided pywinauto is just weird and slow, and while I liked its API at first glance, in practice I don't like it.

So that I don't have to reinvent the wheel, I'll probably leverage pywinauto in some manner. Probably vendoring it or just copy/pasting some of their code instead of making it a dependency because, like you say, their pywin32 dependency is done all lovely.

Right now, I'm just using autoitx via its COM interface so I can work on the part I'm most interested in...the API. As you say, autoit basically doesn't know about UIA so it only works with traditional window controls. I'll pull in something UIA-based at some point.


In other news, jetbrains and anaconda just announced a collaboration. Don't really know what that entails at a practical level, other than right now they just released improved conda support for PyCharm, and PyCharm for Anaconda... which doesn't sound like it's any different from regular pycharm?

TwystNeko
Dec 25, 2004

*ya~~wn*
So I've decided to learn python. My first project is going to be custom bike turn signals using an esp32 running micropython, and a max7219. So I'm a bit confused about how I'd install the max7219 lib - I did install Python 3 manually (and will be switching to Anaconda when I get home). But if I install it on my PC, that won't be on the esp32, right? This is my first foray into Python. The ultimate goal is to have an esp8266 that connects to the esp32 and relays button presses to control an LED matrix to signal left/right.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

TwystNeko posted:

So I've decided to learn python. My first project is going to be custom bike turn signals using an esp32 running micropython, and a max7219. So I'm a bit confused about how I'd install the max7219 lib - I did install Python 3 manually (and will be switching to Anaconda when I get home). But if I install it on my PC, that won't be on the esp32, right? This is my first foray into Python. The ultimate goal is to have an esp8266 that connects to the esp32 and relays button presses to control an LED matrix to signal left/right.

https://docs.micropython.org/en/latest/reference/packages.html

Does that answer your question? The packages and python installed on your PC do not have a bearing on which python and packages are on your esp32.

FWIW, there's not really any need to switch to Anaconda unless you have some specific need you're not mentioning here.

TwystNeko
Dec 25, 2004

*ya~~wn*
Oh, I was just reading the first few pages of the thread (out of date, I guess) and it was mentioned that if I'm just learning, Anaconda is better than a straight Python install.

And yea, that link helps. :D

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

Honestly, when I got a new computer I just installed straight python. I used Anaconda before, but the combination of multiple library sources to worry about (conda, pip) and the fact that their virtual environment stuff still didn't work with the standard windows terminal made me think it's just not worth the hassle.

apparently they have some hacky script to make powershell work now, that runs a copy of python every time you open a PS terminal whether you're using conda or not :wtc:

shrike82
Jun 11, 2005

As an aside, I've gotten in the habit of using Docker to isolate environments in lieu of something like pipenv.

Anyone have thoughts on best practices?

Furism
Feb 21, 2006

Live long and headbang

Ahz posted:

yes, in settings -> project structure, you can add arbitrary directories as libraries/sources/resources without having packages installed the typical way via pip->site-packages.

Awesome, thanks!

Frequent Handies
Nov 26, 2006

      :yum:

shrike82 posted:

As an aside, I've gotten in the habit of using Docker to isolate environments in lieu of something like pipenv.

Anyone have thoughts on best practices?

I don't know about best practices, but I found this quite helpful when I started doing the same and wanted to streamline things.

QuarkJets
Sep 8, 2008

Boris Galerkin posted:

Speaking of data, I'm looking to store several GB of CSV data in a single compressed HDF5 file and I was wondering what package I should use for that. Right now I've found h5py, pytables, and there's also xarray I guess. Are there any pros/cons for any of these?

It depends on what you want your data to resemble. h5py has a closer resemblance to numpy arrays, pandas/pytables has a closer resemblance to a database table. Since it's a CSV you could probably make use of the extra features in pytables, and won't necessarily benefit from h5py's features

QuarkJets
Sep 8, 2008

shrike82 posted:

As an aside, I've gotten in the habit of using Docker to isolate environments in lieu of something like pipenv.

Anyone have thoughts on best practices?

I think either of those are fine, one isolates a computing environment in general and the other just isolates a python environment specifically. Best practice is going to come down to the use case

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm wanting to create a server process that other processes can subscribe to for a stream of events. Note that these are processes all on one machine.

The thing is...I'm feeling kinda lazy and don't really want to wire up something with sockets or wtf ever. Is there anything pre-built to make this easier that I should look at? Anyone else done something like this and have any tips they want to share?
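If staying in the stdlib appeals, `multiprocessing.connection` gives you pickled message passing without touching raw sockets. A rough sketch of a local fan-out broker (every name here is invented, and it skips reconnects and dead-subscriber cleanup):

```python
import threading
from multiprocessing.connection import Client, Listener

ADDRESS = ("localhost", 6000)   # could also be an AF_UNIX path on Linux
AUTHKEY = b"not-very-secret"

def serve(listener, subscribers, lock):
    """Accept subscriber connections forever and register them."""
    while True:
        conn = listener.accept()       # blocks until a Client connects
        with lock:
            subscribers.append(conn)

def publish(subscribers, lock, event):
    """Fan one event out to every registered subscriber."""
    with lock:
        for conn in subscribers:
            conn.send(event)           # pickles arbitrary Python objects
```

A subscriber is then just `Client(ADDRESS, authkey=AUTHKEY)` calling `.recv()` in a loop.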

The Fool
Oct 16, 2003


Thermopyle posted:

I'm wanting to create a server process that other processes can subscribe to for a stream of events. Note that these are processes all on one machine.

The thing is...I'm feeling kinda lazy and don't really want to wire up something with sockets or wtf ever. Is there anything pre-built to make this easier that I should look at? Anyone else done something like this and have any tips they want to share?

Friend of mine did a similar thing using twisted, might be worth looking at.

unpacked robinhood
Feb 18, 2013

by Fluffdaddy
This zmq example doesn't really match the no-sockets part of your request, but it might help a little?

Hadlock
Nov 9, 2004

shrike82 posted:

As an aside, I've gotten in the habit of using Docker to isolate environments in lieu of something like pipenv.

Anyone have thoughts on best practices?

I also use docker for development purposes, it's been flawless as it solves all kinds of dependency hell problems. In my opinion it's the gold standard way to do Python dev.

Only best practice I can think of is to also deploy your app in production as a container if possible, as then you've got a single artifact that was developed, tested, and then deployed to prod in the exact same way at each stage.

I technically have Python installed on my local machine, but only for quickly debugging small problems periodically

P.S. check out podman, Red Hat's alternative to Docker. It's 100% compatible: you can just alias docker to podman and away you go. It went 1.0 a couple months back and is getting baked into Red Hat 8's package manager as their de facto container solution. I've been using it on my personal dev laptop now for about a month with no issues.

https://developers.redhat.com/blog/2018/11/20/buildah-podman-containers-without-daemons/

Hadlock fucked around with this message at 09:15 on Apr 6, 2019

limaCAT
Dec 22, 2007

il pistone e male
Slippery Tilde
How do you debug python code running in a docker container? Do you have an IDE that you can use there?

Data Graham
Dec 28, 2009

📈📊🍪😋



Yeah, same question. I super depend on IPython and the django CLI shell, and having to do long annoying docker-compose commands to get into the shell or install packages or run migrations or whatever is super unsatisfying compared to just using a local venv.

I'm very open to shifting my thinking more into docker land, but so far it's still an uphill battle.

susan b buffering
Nov 14, 2016

Probably using a remote debugger, I imagine.

punished milkman
Dec 5, 2018

would have won
PyCharm (pro edition) has support for remote debugging with Docker. I worked on a small Django project at work alongside docker-compose a few years ago, though it was a nightmare to set up.

TwystNeko
Dec 25, 2004

*ya~~wn*
Okay, so I've got this ESP32 working rather well with Micropython, controlling a MAX7219-based LED matrix, with capacitive touch sensing for buttons. However, I need some help with interrupts. I think.

To prevent this being an X/Y problem, my end goal is to be displaying an "idle" animation on the matrix, and when the touch sensor is triggered, display a different image in increments of 15 seconds, then going back to the idle animation.

The nice thing about the MAX7219 is that it "holds" the displayed image until told otherwise, so I can just sleep the ESP32 for however long. What's the 'best' way to do this?

This is what I've got right now:

code:
import max7219, framebuf, time
from machine import Pin, SPI, TouchPad
spi = SPI(1, baudrate=10000000, polarity=1, phase=0, sck=Pin(4), mosi=Pin(2))
ss = Pin(5, Pin.OUT)
d = max7219.Matrix8x8(spi, ss, 2)
t = TouchPad(Pin(14))

def loadbmp(bmp):
    # Skip the three text lines of the PBM header, then read the raw bitmap bytes
    with open(bmp, 'rb') as f:
        f.readline()
        f.readline()
        f.readline()
        data = bytearray(f.read())
    fbuf = framebuf.FrameBuffer(data, 8, 8, framebuf.MONO_HLSB)
    return fbuf

while True:
    if t.read() < 450:
        d.blit(loadbmp('leftarrow.pbm'),0,0)
        d.show()
        time.sleep_ms(5000)
    else:
        d.blit(loadbmp('sq1.pbm'), 0,0)
        d.show()
        time.sleep_ms(1000)
        d.blit(loadbmp('sq2.pbm'), 0,0)
        d.show()
        time.sleep_ms(1000)

It works, but if it's triggered during the animation it has to wait up to 2 seconds. TBF, it's turn signals for a bicycle, so it doesn't have to be instant.

crazysim
May 23, 2004
I AM SOOOOO GAY

punished milkman posted:

PyCharm (pro edition) has support for remote debugging with Docker. I worked on a small Django project at work alongside docker-compose a few years ago, though it was a nightmare to set up.

PyCharm Pro does have integration with docker-compose. I'm very new to this but it seems alright. It's worth giving it a try.

Foxfire_
Nov 8, 2010

TwystNeko posted:

Okay, so I've got this ESP32 working rather well with Micropython, controlling a MAX7219-based LED matrix, with capacitive touch sensing for buttons. However, I need some help with interrupts. I think.

To prevent this being an X/Y problem, my end goal is to be displaying an "idle" animation on the matrix, and when the touch sensor is triggered, display a different image in increments of 15 seconds, then going back to the idle animation.

The nice thing about the MAX7219 is that it "holds" the displayed image until told otherwise, so I can just sleep the ESP32 for however long. What's the 'best' way to do this?

This is what I've got right now:

code:
import max7219, framebuf, time
from machine import Pin, SPI, TouchPad
spi = SPI(1, baudrate=10000000, polarity=1, phase=0, sck=Pin(4), mosi=Pin(2))
ss = Pin(5, Pin.OUT)
d = max7219.Matrix8x8(spi, ss, 2)
t = TouchPad(Pin(14))

def loadbmp(bmp):
    with open(bmp, 'rb') as f:
        f.readline()
        f.readline()
        f.readline()
        data = bytearray(f.read())
    fbuf = framebuf.FrameBuffer(data, 8,8, framebuf.MONO_HLSB)
    return fbuf

while True:
    if t.read() < 450:
        d.blit(loadbmp('leftarrow.pbm'),0,0)
        d.show()
        time.sleep_ms(5000)
    else:
        d.blit(loadbmp('sq1.pbm'), 0,0)
        d.show()
        time.sleep_ms(1000)
        d.blit(loadbmp('sq2.pbm'), 0,0)
        d.show()
        time.sleep_ms(1000)

It works, but if it's triggered during the animation it has to wait up to 2 seconds. TBF, it's turn signals for a bicycle, so it doesn't have to be instant.

Set up two ISRs:
- On the GPIO activating, set a flag then return.
- On the timer expiring, set a different flag then return (not really needed since you can infer it from the first one)

Forever:
- Update the display with the picture for this state
- Load the timer with the time till the next frame and start it
- Sleep the chip till there's an interrupt
- Mask interrupts
- Disable timer
- Copy global flags to locals and clear global ones
- Unmask interrupts
- Decide new state based on whether it was timer or button that woke you up

Depending on hardware, you might need to do something fancier to debounce the GPIO
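The decision logic in that loop can be kept out of the ISRs entirely and tested on a desktop. A sketch of the state machine for the turn-signal case above (state names are invented; the frame filenames come from the earlier post): the ISRs would only set the `touched`/`timer_expired` flags, and the main loop would blit whatever frame `step()` returns, start the timer with the returned delay, and sleep.

```python
IDLE_FRAMES = ("sq1.pbm", "sq2.pbm")   # idle animation, one frame per second
SIGNAL_FRAME = "leftarrow.pbm"         # shown for 5 s after a touch

def step(state, touched, timer_expired):
    """Pure transition function: (state, flags) -> (new_state, frame, delay_ms)."""
    if touched:                         # the button beats everything else
        return ("signal", SIGNAL_FRAME, 5000)
    if state == "signal":               # signal period over, back to idle
        return ("idle0", IDLE_FRAMES[0], 1000)
    if state == "idle0":                # advance the idle animation
        return ("idle1", IDLE_FRAMES[1], 1000)
    return ("idle0", IDLE_FRAMES[0], 1000)
```

Because `step()` is pure, the whole display sequence can be unit-tested without any hardware attached.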

TwystNeko
Dec 25, 2004

*ya~~wn*
Thanks. That gives me a roadmap of sorts - I've never used ISRs, so now I can learn. I've been looking through the micropython docs, and it notes that TouchPads can wake an ESP32 from sleep. So I might have to ditch the animation, for power saving reasons - this has to be run off a battery pack.

shrike82
Jun 11, 2005

punished milkman posted:

PyCharm (pro edition) has support for remote debugging with Docker.

i use that, as well as jupyter notebooks connected to a server running within the container. honestly, the latter works for me most of the time because of the nature of my work - ML model building.

both are finicky so i'm hoping someone has a better workflow

Hadlock
Nov 9, 2004

Data Graham posted:

Yeah, same question. I super depend on iPython and the django CLI shell, and having to do long annoying docker-compose commands to get into the shell or install packages or run migrations or whatever is super unsatisfying compared to just doing a local venv.

Generally I name my container "python" and then also spin up an "app1-pg" container or something

Then it's just: docker exec -it python bash

Probably if it weren't already muscle memory I'd set up an alias for it. For me it's just like SSH-ing into a lan server with a very tightly defined firewall.

The default python container on build will read all your packages from requirements.txt and install them so that's not been an issue for me.

For my django container, I used a pattern I had been using for other containers which did some "on boot" last second config, and just spliced in the create/run migration manage.py stuff to always happen on boot

There's definitely going to be some friction if there's a "but this is the way we've always done it" mentality, as it is different, but the benefit of never ever having the wrong versions of libraries (or whatever) pave over your carefully constructed ecosystem is really quite nice.

For remote debug if you're doing it over port 1234 you might run something like

docker run -itd --name=myapp -p 8080:8080 -p 1234:1234 -e DEBUG=1 goonynamespace/myapp:latest

If you have a Django app with auto reload turned on, you could throw a -v in there to volume-mount your latest code from your desktop, something like

-v ~/GitHub/mygoonynamespace/myapp:/myapp

My Rhythmic Crotch
Jan 13, 2011

Anyone here use Fabric on windows? I use rsync_project from fabric.contrib.project to deploy stuff, and while it works great on linux, I have no idea what I need to install to satisfy the rsync requirement on windows. I've tried cygwin and it's crap and the rsync does not work at all (just hangs).

I know it must be possible because I had a coworker at one point (now gone) who was able to do it... *sigh*

edit: this works great: https://www.rsync.net/resources/howto/windows_rsync.html

Still, gently caress windows

My Rhythmic Crotch fucked around with this message at 19:07 on Apr 8, 2019

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm an extensive docker user, but I struggle to see its usefulness in regular python development.

I mean, I use docker to spin up postgres or redis or whatever when my application needs it, and I deploy my python web applications with docker, but I just don't get any real benefit out of developing code inside a container. I've already got virtualenvs to manage isolated python environments, which seems isolated enough for managing libraries.


The Fool
Oct 16, 2003


The benefit is to be able to develop in an environment as close to production as possible. With Docker you can deploy the same configurations as production without a bunch of admin overhead.

You also wouldn't develop "inside a container": you'd either push your code through version control and have a hook bring it into docker, or your dev docker configuration would have a volume pointing at your code on your workstation.
