|
So the specific use case is a monorepo with a handful of shared internal libraries. DB wrappers, loggers, API wrappers, that kind of stuff. Then a growing collection of services that get deployed with their deps bundled up, often including these internal libraries. I could copy all the folders around with docker for sure, but it would be a little less fragile if the tool managing all the other deps also managed the ones that come from a local directory. The main reason I'd like to pip install them is that a pip install of the local dep will also resolve and bundle up any dependencies that the local package has. This gets a little hairy if a local dep relies on another local dep, but I'm choosing to ignore that for now since it isn't the case. The March Hare fucked around with this message at 16:05 on Jan 31, 2022 |
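For reference, a sketch of what that looks like in practice (paths and package names are invented): pip accepts local directory paths in a requirements file, and resolving the local package's own dependencies comes for free.

```
# requirements.txt for one service -- hypothetical monorepo layout
requests
# shared internal libraries, pulled from the monorepo by relative path;
# pip also resolves whatever these declare in their own setup/pyproject
./libs/db_wrapper
./libs/logger
# or, for a live development install of one of them:
#   pip install -e ./libs/db_wrapper
```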
# ? Jan 31, 2022 16:01 |
|
|
# ? May 30, 2024 14:12 |
|
Is there a way to tie an HTML <img> to a Flask-WTF SubmitField? I want to use a pair of images as Save and Cancel buttons on a page, to dump the data in a pair of session[] variables to the DB, but I'm not turning anything useful up.
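WTForms' SubmitField always renders as a plain text `<input type="submit">`, so one workaround (a sketch, not tested against a real Flask app; all names here are made up) is to skip the SubmitField entirely, wrap each image in an ordinary HTML `<button type="submit">`, and branch on which button name arrives in the POST data:

```python
# The template would contain, instead of {{ form.save }} / {{ form.cancel }}:
#   <button type="submit" name="action" value="save"><img src="/static/save.png"></button>
#   <button type="submit" name="action" value="cancel"><img src="/static/cancel.png"></button>

def handle_roof_form(form_data: dict) -> str:
    """Branch on which image-button submitted the form.

    form_data stands in for Flask's request.form; in a real route this
    would be request.form inside a POST handler.
    """
    action = form_data.get("action")
    if action == "save":
        return "saved"      # here: dump the session[] variables to the DB
    return "cancelled"      # here: discard the session[] variables

print(handle_roof_form({"action": "save"}))  # saved
```

Only the clicked button's name/value pair is submitted, which is what makes the dispatch possible.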
|
# ? Jan 31, 2022 21:48 |
|
I am disgustingly overjoyed that I managed to make a Flask form dump data to the next page with a super-temporary session[] variable, then pull that into the PolyRoof object with minimal typos and data miscalculation. However, I did manage to forget the return in the __repr__ function for the object that's meant to produce a JSON representation for the DB to store. I've never been so happy to see [2022-02-02 15:07:20,860] INFO in routes: Poly roof data calculated: {"width": 192, "projection": 48, "header_short": 1, "header_long": 0, "header_screws": 64.0, "gutter_short": 1, "gutter_long": 0, "gutter_type": "E GUTTER", "fascia": 0, "fascia_sides": 0, "fascia_type": "E FASCIA", "caulk": 2, "cap_screws": 50, "area": 64, "perimeter": 40, "four_foot": 4.0, "two_foot": 0, "total_seams": 577.0, "poly_lags": 12.0, "headers": "1 EA @ 24'\n0 EA @ 30'", "gutters": "1 EA @ 24'\n0 EA @ 30'"} in terminal output.
|
# ? Feb 2, 2022 20:59 |
|
Can someone tell me if I’m supposed to use setup.py, setup.cfg, or pyproject.toml to maintain my pip package nowadays? I am irritated by all of these things existing. I’d like to be able to differentiate between dev dependencies and regular dependencies, if that makes a difference.
|
# ? Feb 5, 2022 00:54 |
punished milkman posted:Can someone tell if i’m supposed to use setup.py, setup.cfg, or pyproject.toml to maintain my pip package nowadays? I am irritated by all of these things existing You’re supposed to use pyproject.toml, but you may also need to have some shims or other functionality in setup.py, e.g., if your package depends on setuptools for something.
|
|
# ? Feb 5, 2022 01:02 |
|
pyproject.toml alone is what you should use; add a setup.cfg if you need it (you should not need it). You basically never need a setup.py: packaging should not rely on writing code, so you are strongly discouraged from doing that.
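A minimal sketch of that (package name and dependencies are placeholders; depending on your setuptools version you may need a different build backend such as flit or poetry). Dev dependencies go in an optional extra, installed with `pip install -e .[dev]`:

```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "my-package"          # placeholder
version = "0.1.0"
dependencies = [
    "requests",
]

[project.optional-dependencies]
dev = [
    "pytest",
    "black",
]
```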
|
# ? Feb 5, 2022 04:07 |
|
Defining dependencies: https://python-poetry.org/docs/pyproject/#dependencies-and-dev-dependencies
|
# ? Feb 5, 2022 04:12 |
|
Been learning Python for a few weeks and got lost in the weeds of managing my python environments. I can make things work, but I'm pretty sure I'm doing it wrong. I wish I could just focus on learning Python, but whatever. I'm pretty sure I have the system Python 2 and Python 3 installed on my Mac, which I barely really use. I also have one Python 2 and one Python 3 virtual environment in ~/bin/Environments/. I activate either virtual environment (using virtualenv, I think) depending on which version of Python I want to use at the time, across a bunch of different projects. My virtual environments have a ton of different packages in them that span my projects. This isn't right, is it? It feels too "global". Now I'm hearing about pipenv and how much cleaner it is. Is pipenv something I could learn to use in lieu of virtualenv, or are they normally used alongside each other? I don't want to half-rear end understand virtualenv before adding more poo poo and confuse myself even more.
|
# ? Feb 5, 2022 05:52 |
|
Sleepy Robot posted:Been learning Python for a few weeks and got lost in the weeds of managing my python environments. I can make things work, but I'm pretty sure I'm doing it wrong. I wish I could just focus on learning python, but whatever. A standard way to handle this is to define an environment for each project. You can use a requirements.txt (pip), an environment.yml (conda/mamba), or a Pipfile (pipenv) for this; they're all fine approaches, but the key to solving your problem is that you need to separate the environments for each of your projects instead of having them span projects. You can't just use pipenv alone to solve this; you have to very deliberately define environment files for each package and then employ self-discipline in switching between those environments as you move between projects. I prefer conda environments for a lot of reasons, so I install the mambaforge client, and then each of my projects has 1 or more environment.yml files (it's useful to separate packages that are only used during development and unit testing from packages that I deploy with) that I can automagically create a fresh environment out of, to either use locally or as part of CI/CD. You would use pipenv alongside virtualenv or conda/mamba. pipenv is not a replacement for environment control; it's really a packaging tool, it's for controlling dependencies. Pinning with a requirements.txt or environment.yml is fine too, sometimes preferable even.
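The per-project environment.yml mentioned above looks roughly like this (project and package names are placeholders); `mamba env create -f environment.yml` then builds a fresh environment from it:

```yaml
# environment.yml -- one per project, kept in that project's repo
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy
  - pip
  - pip:
      - some-pypi-only-package   # placeholder
```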
|
# ? Feb 5, 2022 10:16 |
|
Is there a virtualenv alternative for the fish shell? Activating a virtual environment seems to only work in bash: source ./<env_name>/bin/activate
|
# ? Feb 5, 2022 17:48 |
|
I think newer releases have an activate.fish to use instead.
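Concretely (keeping the same placeholder path as above):

```
# bash/zsh
source ./<env_name>/bin/activate
# fish
source ./<env_name>/bin/activate.fish
```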
|
# ? Feb 5, 2022 18:20 |
|
I quite like pipenv but it always feels like a bit of a waste to do pipenv lock -r > requirements.txt and have my CI testing suite pip install -r requirements.txt. It’s great for me, the human, because the commands are intuitive, the venv management is nicely abstracted, and it’s trivial to handle dev-only dependencies (pipenv lock -r -d > requirements-dev.txt). I have a question about that too. Often sub-dependencies will be included pinned at whatever the latest version is, and I’ll just manually delete them and keep the top-level dependency alone, so that when the dependency updates it’ll just auto-install the latest version of its dependencies. My intuitive reaction is that I *want* the latest version of those sub-dependencies, and these packages will often come with so many that keeping track of them is a pain. On the flip side, maybe that’s just as it should be, and gently caress you, you get to review all twenty of these every time dependabot issues a PR. I’m not sure if this is an anti-pattern or not. It sort of feels like one, but I also barely know what I’m doing sooooooooo I guess I’ll just ask: what *are* best practices for managing sub-dependencies? The Iron Rose fucked around with this message at 19:03 on Feb 5, 2022 |
# ? Feb 5, 2022 19:01 |
|
You will eventually get hosed by something that installs the latest version and the latest version has API-breaking changes. I like to keep my requirements.txt short, understandable, and in order, though, so I don’t pin all packages. No idea what industry practice is here.
|
# ? Feb 5, 2022 19:11 |
Your dependency definition will lock current version of every subdependency in the lock file. Testing and deployment environments should be installed via lock file only, ignoring dependency definitions, imo.
|
|
# ? Feb 5, 2022 19:16 |
|
CarForumPoster posted:You will eventually get hosed by something that installs the latest version and the latest version has API breaking changes. Industry standard (per the packaging PEPs) is to pin everything in an environment, but keep package dependencies (install_requires) short and sweet. The install_requires block defines the minimum set of dependencies needed to install and use your package; environment files should fully reproduce an environment.
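Side by side, that split looks roughly like this (package names and versions are just illustrative):

```
# setup.cfg -- what the *package* needs, loosely constrained:
[options]
install_requires =
    requests>=2.0

# requirements.txt -- what the *environment* pins exactly (e.g. via pip freeze):
# requests==2.27.1
# certifi==2021.10.8
```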
|
# ? Feb 5, 2022 19:17 |
|
Thanks!
|
# ? Feb 5, 2022 19:34 |
|
ugly requirements.txt it is then! I guess the prettier hierarchy is stored in my Pipfile anyways.
|
# ? Feb 6, 2022 05:50 |
|
The even better answer is "use a docker container" but I do think that publishing to pypi or a conda channel is still very useful for a lot of people
|
# ? Feb 6, 2022 06:17 |
|
I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted, so I have yet to actually stick it up on PyPI, but I'd like to try and address that. The problem I'm running into is that I feel like the steps between an sdist and a wheel are some arcane nonsense that I have no idea how to hook into in order to get the steps I want. My assumption is that I want the wheel to only include the final webpacked JS bundles (Django should just be treating them as normal static files at that point), not the pre-webpack JS files (or any of the scaffolding for that matter, e.g. package.json), but that the sdist should include these things. Is this the kind of thing I can easily do without making my own build backend? Additionally, if I could just dump the directory structure into a temp folder and jam that into a wheel I'd be content with that, but I can't seem to find an easy way to do that, since "wheel pack" seems to assume I have a wheel already that I'm trying to modify, not make one from scratch. If calling the standard "build --wheel" and then pulling the result apart and stuffing in the files that I need in a custom backend is the easiest path forward, then so be it, just want to make sure I'm not missing something obvious.
|
# ? Feb 6, 2022 11:26 |
|
UraniumAnchor posted:I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted so I have yet to actually stick it up on PyPi but I'd like to try and address that. I would definitely vendorise the pre-built JS bundle. I can't see any benefit to letting users re-bundle the JS on their own machines. I'd be looking at the package_data and data_files options for including the bundle. Maybe you would also need an entry in MANIFEST.in. Updating the manifest could even be enough on its own. It's all so vague and confusing.
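A sketch of the manifest route (the path is hypothetical); in setuptools, pairing this with `include_package_data = True` in setup.cfg is what lets MANIFEST-listed files reach the wheel as well as the sdist:

```
# MANIFEST.in
recursive-include myapp/static/js *.js
```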
|
# ? Feb 6, 2022 13:05 |
|
UraniumAnchor posted:I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted so I have yet to actually stick it up on PyPi but I'd like to try and address that. The sdist/wheel split is more about architecture-dependent binaries. You should always include the fully built JS in either one of those.
|
# ? Feb 6, 2022 15:38 |
|
What's the pythonic way of building a pandas dataframe from a nested json array? I'm retrieving quake data in json format. An example JSON can be found here: https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&starttime=2022-02-06 Each quake populates a list found at: quakes['features'] . To get the magnitude of the first quake on the list, I'd use quakes['features'][0]['properties']['mag'] I'd like to build a dataframe so that each row represents a quake. The column names are the quake properties found at: quakes['features'][X]['properties'] I hope that makes sense!
|
# ? Feb 6, 2022 16:00 |
Hughmoris posted:What's the pythonic way of building a pandas dataframe from a nested json array? I only glanced at your JSON from phone, so I’m not fully confident this works, but the way to do it here should be along the following lines: 1. Read JSONs into a list of dictionaries (or do this one by one if that’s too much) 2. Write a function that creates a dictionary in the format of {“feature_name”: feature_value} out of a quake JSON 3. Apply the function to your dataset via, e.g., a list comprehension 4. dataframe = pd.DataFrame(output_list) In other words, the trick to know here is that pandas can construct a dataframe out of a list of flat dictionaries right away. The most that you may need to do there is to provide data type definitions, if pandas doesn’t infer them correctly (and it’s good practice to just do that always anyway).
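The steps above, sketched against a hypothetical miniature of the USGS GeoJSON structure (I haven't run this against the real feed, but the `features[i]["properties"]` shape matches the post):

```python
import pandas as pd

# Tiny stand-in for the quakes JSON described above
quakes = {
    "features": [
        {"properties": {"mag": 4.2, "place": "somewhere", "time": 1644100000000}},
        {"properties": {"mag": 2.7, "place": "elsewhere", "time": 1644100100000}},
    ]
}

# Steps 2-3: flatten each feature to its flat properties dict
rows = [feature["properties"] for feature in quakes["features"]]

# Step 4: pandas builds a frame directly from a list of flat dicts,
# one row per quake, columns taken from the dict keys
df = pd.DataFrame(rows)
print(df.shape)  # (2, 3)
```

For deeper nesting, `pd.json_normalize` does the flattening step for you.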
|
|
# ? Feb 6, 2022 16:25 |
|
ynohtna posted:I would definitely vendorise the pre-build JS bundle. I can't see any benefit to letting users re-bundle the JS on their own machines. The bundle is definitely intended to be at least somewhat customizable (I don't know how often that happens in practice other than my own semi-public tweaks that don't always make it into master), so ideally it wouldn't actually build the bundle until the wheel step. In a perfect world running a pip install that's pointing to a github commit will end up with the same end result as installing the wheel, but with a bundle step in the middle. That's the part I'm unsure of how to hook up. I definitely have the 'pre-packaged data' part figured out, but this isn't pre-packaged in the sense that it's committed to git directly, and I really don't want it to be. That feels too much like committing build artifacts.
|
# ? Feb 6, 2022 18:33 |
|
UraniumAnchor posted:I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted so I have yet to actually stick it up on PyPi but I'd like to try and address that. Poetry can take care of most of this. You're using git, right (RIGHT???)? Add the files that you want to package into an sdist to your repository. Add the files that you definitely want to exclude to your .gitignore. Use poetry to initialize the project. Define dependencies and whatnot, tweak the pyproject. `poetry build` will automatically build a source distribution and then a wheel. You can publish these to PyPI with `poetry publish` (which will also automatically create the sdist and the wheel, you don't need to `build` first unless you want to). Using poetry means that you should basically delete your setup.py; you should only need a pyproject.toml and the `poetry` command.
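A rough pyproject.toml for that setup (all names and paths are placeholders; exactly how `include`/`exclude` map onto sdist vs. wheel varies by poetry version, so check the docs for the `format` option):

```toml
[tool.poetry]
name = "my-django-app"       # placeholder
version = "0.1.0"
description = "Django app with a webpack-built frontend"
authors = ["Someone <someone@example.com>"]
# ship the webpacked output (hypothetical path) with the package
include = ["myapp/static/js/bundle.js"]
# keep the pre-webpack sources and scaffolding out of the wheel
exclude = ["frontend/"]

[tool.poetry.dependencies]
python = "^3.8"
Django = "^4.0"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```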
|
# ? Feb 6, 2022 19:53 |
|
cinci zoo sniper posted:I only glanced at your JSON from phone, so I’m not fully confident this works, but the way to do it here should be along the following lines: That helped, thanks! Also, I discovered that I can wget the file in CSV format instead of JSON. The CSV file was about 1/4 of the size of the JSON for the same date ranges, didn't expect that.
|
# ? Feb 6, 2022 20:07 |
Hughmoris posted:That helped, thanks! CSV has only one header, whereas all those “feature_a”, “feature_b” object names are repeated for every record in the JSON. If you subtract their character lengths from the total, you’ll be looking at a similar difference.
|
|
# ? Feb 6, 2022 20:16 |
|
QuarkJets posted:Poetry can take care of most of this. You're using .git, right (RIGHT???)? Add the files that you want to package into an sdist to your repository. Add the packages that you definitely want to exclude to your .gitignore. Use poetry to initialize the project. Define dependencies and whatnot, tweak the pyproject. `poetry build` will automatically build a source distribution and then a wheel. You can publish these to pypy with `poetry publish` (which will also automatically create the sdist and the wheel, you don't need to `build` first unless you want to) Poetry looks like it will do what I want for the most part, I think I'll need to write an inline plugin unless there's an existing plugin that lets me run an arbitrary command right before packing the wheel. All I need to do is run a couple of yarn commands to actually build the JS bundles before packing everything.
|
# ? Feb 6, 2022 20:20 |
|
UraniumAnchor posted:Poetry looks like it will do what I want for the most part, I think I'll need to write an inline plugin unless there's an existing plugin that lets me run an arbitrary command right before packing the wheel. All I need to do is run a couple of yarn commands to actually build the JS bundles before packing everything. Sounds like something you could stick in a makefile, to me
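Something along these lines, perhaps (target and commands are guesses at the described workflow; recipe lines must be tab-indented):

```make
# Hypothetical Makefile wrapper: build the JS bundles, then the wheel
dist:
	yarn install
	yarn build        # assumed to write the bundle into the package tree
	poetry build
```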
|
# ? Feb 6, 2022 22:52 |
|
cinci zoo sniper posted:CSV has only one header, whereas all those “feature_”, “feature_b” object names will be repeated in each JSON file. If you subtract their character lengths from the total, you’ll be looking at a similar difference. That makes perfect sense, thanks. I have a barebones dev VM that I'm using to learn AWS. It's running Kubuntu, and the system version of Python is 3.8.10. Is it possible to create virtual environments (venv) using a newer version of Python? Or is the gist of venv just to give you a copy of the version used to create it? Ideally, I can create virtual environments with the newest release of Python without mangling my installed system environment.
|
# ? Feb 7, 2022 04:13 |
|
Hughmoris posted:That makes perfect sense, thanks. I am pretty sure that venv only creates new environments with the same Python version as the current environment. What you want is mamba (i.e. a newer and better conda). You can create a conda environment with whatever version of Python you want; from there you can go pure Python with pip, or continue using conda channels if you want. The mambaforge installer is light and can be installed in a user directory without administrative privileges; you see this being done in dockerfiles all the time (e.g. download mambaforge, unpack it, create a new env, install stuff into it with mamba or pip). QuarkJets fucked around with this message at 04:43 on Feb 7, 2022 |
# ? Feb 7, 2022 04:36 |
You can specify python= when you create the environment, so it’s not necessarily the same as what the current environment is, but whatever you specify does have to be installed and available under that name already.
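The three common routes, concretely (version numbers are just examples):

```
# conda/mamba: any interpreter version, fetched for you
mamba create -n py310 python=3.10

# virtualenv: any interpreter already installed on the machine
virtualenv --python=python3.10 .venv

# stdlib venv: run the module with the interpreter you want copied
python3.10 -m venv .venv
```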
|
|
# ? Feb 7, 2022 05:04 |
QuarkJets posted:I am pretty sure that venv only creates new environments with the same python version as the current environment pyenv would also do the job here.
|
|
# ? Feb 7, 2022 08:08 |
|
"Why isn't this table getting populated? The data is formed well, the vars() of it looks good, everything is populated...lemme go look at the function." Oh. That's why. No return statement to spit the prettified data back. On another note, I'm quite pleased with myself for realizing I can pass the error code and message as parameters to a boilerplate error.html page instead of a separate page for each message. EDIT: That was what else I wanted to ask. Instead of writing a prettify_data function for every possible math-performing class, I want to have one PrettyData object that can accept a generic object and basically...run down the line: if XYZ property (key in the object dict?) exists, prettify it. Is there an easy way to do this? Do I loop over the passed object's vars() call with a for key, item in object_dict and have a bunch of if calls to check if keys exist? Can you even pass generic objects in Python without knowing exactly what properties/keys the object is going to have? EDIT 2: Now I'm thinking of having the math class object (PolyRoof, PanRoof, etc.) itself prettify its own data by passing itself to a PrettyData object and having that accessible as the math class object's pretty_data property. D34THROW fucked around with this message at 17:52 on Feb 7, 2022 |
# ? Feb 7, 2022 17:32 |
|
D34THROW posted:"Why isn't this table getting populated? The data is formed well, the vars() of it looks good, everything is populated...lemme go look at the function." Use type hints. Any decent editor will warn you if a function hinted as returning a value does not return a value (and might also warn about returning an object of the wrong type, depending on how complex the definition is)
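A tiny illustration of the point (function name invented): with a return annotation in place, mypy or any decent IDE reports a missing return if the body can fall off the end, which is exactly the forgotten-return bug described above.

```python
import json

def as_json(data: dict) -> str:
    # Because the signature promises str, deleting this return makes
    # mypy/PyCharm flag the function instead of it silently yielding None.
    return json.dumps(data, sort_keys=True)

print(as_json({"width": 192, "projection": 48}))  # {"projection": 48, "width": 192}
```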
|
# ? Feb 7, 2022 17:50 |
|
D34THROW posted:"Why isn't this table getting populated? The data is formed well, the vars() of it looks good, everything is populated...lemme go look at the function." What kind of prettifying are we talking about here? I would not try to write a universal class that iterates over attributes in other classes, but you could have a class invoke a generic prettify function on data that it loads; that would probably be cleaner imo. The new class or function should just transform data passed to it, basically.
|
# ? Feb 7, 2022 17:57 |
|
QuarkJets posted:What kind of prettyfying are we talking about here? Basically I'm taking data like roof_data.header_screws, roof_data.cap_screws, roof_data.flashing_screws and making a dict of lists that are iterated over by the template in order to generate the data tables, putting out (respectively):
pre:
SMS 14 X 2 1/2     | 100 EA
TEK 10 X 3/4       | 100 EA
TEK 10 X 3/4 W/NEO | 100 EA
Maybe "prep for tableification" is a better term. Python code:
HTML code:
D34THROW fucked around with this message at 18:27 on Feb 7, 2022 |
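A rough reconstruction of that "prep for tableification" idea (variable names and structure are guessed from the description): a dict keyed by screw description whose values are rows the Jinja template loops over.

```python
# Hypothetical stand-ins for the roof_data attributes mentioned above
header_screws, cap_screws, flashing_screws = 100, 100, 100

screw_table = {
    "SMS 14 X 2 1/2": [header_screws, "EA"],
    "TEK 10 X 3/4": [cap_screws, "EA"],
    "TEK 10 X 3/4 W/NEO": [flashing_screws, "EA"],
}

# What the template renders, one table row per entry:
for desc, (qty, unit) in screw_table.items():
    print(f"{desc} | {qty} {unit}")
```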
# ? Feb 7, 2022 18:03 |
|
It's not unreasonable or uncommon in my experience to have a function in a class that outputs a specific datatype for other use (as_dict, as_list, etc). I wouldn't necessarily throw all the formatting logic in there, because I think you risk overcrowding the class, but then again I just did that myself in a reporting project. In general, it's probably good to avoid having to double-refer to data (maintaining a list of keys and values outside of the relevant class, etc.), but I suspect others in here would have better advice. Also: nthing the recommendation for type hints. They're an absolute lifesaver for saving you one more run when you forgot something simple like a return/etc, and they're not that hard to include. https://www.pythontutorial.net/python-basics/python-type-hints/ seems like an alright intro, and it makes your IDE a ton more readable. Falcon2001 fucked around with this message at 19:19 on Feb 7, 2022 |
# ? Feb 7, 2022 19:17 |
|
I too like to have a "to_dict" method in a class, maybe customizing that name if the dict has a specific purpose, like as an SQL insert. And then it's all a matter of determining where to prettify things. Maybe the answer is to prettify in to_dict. Maybe prettify is used in a bunch of properties (e.g. your class could have private raw data and a property for each that formats the raw data, or you could just store the pretty data, depending on what else the class needs to do).
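A sketch of that division of labour (class and field names are stand-ins, reusing the width/projection figures from earlier in the thread): raw attributes stay on the instance, derived values live in properties, and to_dict is the single place they get shaped for storage.

```python
class PolyRoof:
    def __init__(self, width: int, projection: int):
        self.width = width            # raw inches
        self.projection = projection  # raw inches

    @property
    def area(self) -> float:
        # derived value; 144 sq in per sq ft (assumed convention)
        return self.width * self.projection / 144

    def to_dict(self) -> dict:
        # the one spot where raw data becomes the stored/pretty form
        return {"width": self.width,
                "projection": self.projection,
                "area": self.area}

print(PolyRoof(192, 48).to_dict())  # {'width': 192, 'projection': 48, 'area': 64.0}
```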
|
# ? Feb 7, 2022 22:16 |
|
|
|
How should I handle a situation in which code wants to create an instance of a class, but shouldn't because it's being passed the wrong data? Let's say I have a class definition: code:
|
# ? Feb 8, 2022 20:03 |