The March Hare
Oct 15, 2006

Je rêve d'un
Wayne's World 3
Buglord
So the specific use case is a monorepo with a handful of shared internal libraries. DB wrappers, loggers, API wrappers, that kind of stuff. Then a growing collection of services that get deployed with their deps bundled up, often including these internal libraries.

I could copy all the folders around with docker, for sure, but it would be a little less fragile if the tool managing all the other deps also managed the ones that come from a local directory.

The main reason I'd like to pip install them is that a pip install of the local dep will also resolve and bundle up any dependencies that the local package has. This gets a little hairy if a local dep relies on another local dep, but I'm choosing to ignore that for now since it isn't the case.
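For illustration, roughly what I mean, with made-up paths (each shared library is a normal installable package declaring its own deps):

code:
# hypothetical monorepo layout
pip install ./libs/db-wrapper ./libs/logger

# pip reads each local package's metadata, then resolves and installs
# its dependencies from PyPI like any other install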

The March Hare fucked around with this message at 16:05 on Jan 31, 2022

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:
Is there a way to tie an HTML <img> to a Flask-WTF SubmitField? I want to use a pair of images as Save and Cancel buttons on a page, to dump the data in a pair of session[] variables to the DB, but I'm not turning anything useful up.

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:
I am disgustingly overjoyed that I managed to make a Flask form dump data to the next page with a super-temporary session[] variable, then pull that into the PolyRoof object with minimal typos and data miscalculation. However, I did manage to forget the return in the __repr__ function for the object that's meant to produce a JSON representation for the DB to store.

I've never been so happy to see [2022-02-02 15:07:20,860] INFO in routes: Poly roof data calculated: {"width": 192, "projection": 48, "header_short": 1, "header_long": 0, "header_screws": 64.0, "gutter_short": 1,
"gutter_long": 0, "gutter_type": "E GUTTER", "fascia": 0, "fascia_sides": 0, "fascia_type": "E FASCIA", "caulk": 2, "cap_screws": 50, "area": 64, "perimeter": 40, "four_foot": 4.0,
"two_foot": 0, "total_seams": 577.0, "poly_lags": 12.0, "headers": "1 EA @ 24'\n0 EA @ 30'", "gutters": "1 EA @ 24'\n0 EA @ 30'"}
in terminal output.

punished milkman
Dec 5, 2018

would have won
Can someone tell me if I'm supposed to use setup.py, setup.cfg, or pyproject.toml to maintain my pip package nowadays? I am irritated by all of these things existing.

I’d like to be able to differentiate between dev dependencies and regular dependencies if that makes a difference.

cinci zoo sniper
Mar 15, 2013




punished milkman posted:

Can someone tell me if I'm supposed to use setup.py, setup.cfg, or pyproject.toml to maintain my pip package nowadays? I am irritated by all of these things existing.

I’d like to be able to differentiate between dev dependencies and regular dependencies if that makes a difference.

You’re supposed to use pyproject.toml, but you may also need to have some shims or other functionality in setup.py, e.g., if your package depends on setuptools for something.
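Something like this, roughly (a sketch assuming setuptools; names are placeholders):

code:
# pyproject.toml
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"

# setup.py -- only a shim, kept for tooling that still expects it
from setuptools import setup

setup()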

QuarkJets
Sep 8, 2008

pyproject.toml alone is what you should use; add a setup.cfg if you need it (you should not need it). You basically never need a setup.py: packaging should not rely on running code, so you are strongly discouraged from doing that.

QuarkJets
Sep 8, 2008

Defining dependencies: https://python-poetry.org/docs/pyproject/#dependencies-and-dev-dependencies
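From those docs, roughly this shape (versions are placeholders):

code:
# pyproject.toml
[tool.poetry.dependencies]
python = "^3.9"
requests = "^2.27"

[tool.poetry.dev-dependencies]
pytest = "^7.0"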

Sleepy Robot
Mar 24, 2006
instant constitutional scholar, just add astonomist
Been learning Python for a few weeks and got lost in the weeds of managing my python environments. I can make things work, but I'm pretty sure I'm doing it wrong. I wish I could just focus on learning python, but whatever.

I'm pretty sure I have the system Python 2 and Python 3 installed on my Mac, which I barely use.
I also have one Python 2 and one Python 3 virtual environment in ~/bin/Environments/. I activate either virtual environment (using virtualenv, I think) depending on which version of Python I want to use at the time, across a bunch of different projects. My virtual environments have a ton of different packages in them that span my projects. This isn't right, is it? It feels too "global".

Now I'm hearing about pipenv and how much cleaner it is. Is pipenv something I could learn to use in lieu of virtualenv or are they normally used alongside each other? I don't want to half-rear end understand virtualenv before adding more poo poo and confuse myself even more.

QuarkJets
Sep 8, 2008

Sleepy Robot posted:

Been learning Python for a few weeks and got lost in the weeds of managing my python environments. I can make things work, but I'm pretty sure I'm doing it wrong. I wish I could just focus on learning python, but whatever.

I'm pretty sure I have the system Python 2 and Python 3 installed on my Mac, which I barely use.
I also have one Python 2 and one Python 3 virtual environment in ~/bin/Environments/. I activate either virtual environment (using virtualenv, I think) depending on which version of Python I want to use at the time, across a bunch of different projects. My virtual environments have a ton of different packages in them that span my projects. This isn't right, is it? It feels too "global".

Now I'm hearing about pipenv and how much cleaner it is. Is pipenv something I could learn to use in lieu of virtualenv or are they normally used alongside each other? I don't want to half-rear end understand virtualenv before adding more poo poo and confuse myself even more.

A standard way to handle this is to define an environment for each project. You can use a requirements.txt (pip), an environment.yml (conda/mamba), or a Pipfile (pipenv) for this; they're all fine approaches, but the key to solving your problem is that you need to separate the environments for each of your projects instead of having them span projects. You can't just use pipenv alone to solve this; you have to very deliberately define environment files for each project and then employ self-discipline in switching between those environments as you move between projects.

I prefer conda environments for a lot of reasons, so I install the mambaforge client, and then each of my projects has 1 or more environment.yml files (it's useful to separate packages that are only used during development and unit testing from packages that I deploy with) from which I can automagically create a fresh environment, to use either locally or as part of CI/CD.

You would use pipenv alongside virtualenv or conda/mamba. pipenv is not a replacement for environment control; it's really a packaging tool, it's for controlling dependencies. Pinning with a requirements.txt or environment.yml is fine too, sometimes even preferable.
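As a sketch, one of those per-project environment.yml files looks roughly like this (names and versions made up):

code:
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pandas
  - pip
  - pip:
      - some-pip-only-package

# create the environment from it with:
#   mamba env create -f environment.yml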

Armauk
Jun 23, 2021


Is there a virtualenv alternative for the fish shell? Activating a virtual environment seems to only work in bash: source ./<env_name>/bin/activate

necrotic
Aug 2, 2005
I owe my brother big time for this!
I think newer releases have an activate.fish to use instead.
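i.e., something like:

code:
source ./<env_name>/bin/activate.fish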

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
I quite like pipenv, but it always feels like a bit of a waste to do pipenv lock -r > requirements.txt and have my CI testing suite pip install -r requirements.txt.

It's great for me, the human, because the commands are intuitive, the venv management is nicely abstracted, and it's trivial to handle dev-only dependencies (pipenv lock -r -d > requirements-dev.txt).

I have a question about that too. Often the locked output will include sub-dependencies pinned at whatever the latest version is, and I'll just manually delete them and keep the top-level dependency alone, so that when the dependency updates it'll just auto-install the latest version of its dependencies. My intuitive reaction is that I *want* the latest version of those sub-dependencies, and these packages will often come with so many that keeping track of them is a pain. On the flip side, maybe that's just as it should be, and gently caress you, you get to review all twenty of these every time dependabot issues a PR.

I'm not sure if this is an anti-pattern or not. It sort of feels like one, but I also barely know what I'm doing sooooooooo I guess I'll just ask: what *are* best practices for managing sub-dependencies?

The Iron Rose fucked around with this message at 19:03 on Feb 5, 2022

CarForumPoster
Jun 26, 2013

⚡POWER⚡
You will eventually get hosed by something that installs the latest version of a dependency when the latest version has API-breaking changes.

I like to keep my requirements.txt short, understandable, and in order, though, so I don't pin all packages. No idea what industry practice is here.

cinci zoo sniper
Mar 15, 2013




Your dependency definition will lock the current version of every sub-dependency in the lock file. Testing and deployment environments should be installed from the lock file only, ignoring the dependency definitions, imo.
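With pipenv that's roughly (a sketch, not a full CI config):

code:
# install exactly what Pipfile.lock pins, ignoring the looser Pipfile
pipenv sync

# or, for deploys: fail if the lock file is out of date with the Pipfile
pipenv install --deploy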

QuarkJets
Sep 8, 2008

CarForumPoster posted:

You will eventually get hosed by something that installs the latest version of a dependency when the latest version has API-breaking changes.

I like to keep my requirements.txt short, understandable, and in order, though, so I don't pin all packages. No idea what industry practice is here.

Industry standard (PEP) is to pin everything in an environment but keep package dependencies (install_requires) short and sweet. The install_requires block defines the minimum set of dependencies needed to install and use your package; environment files should fully reproduce an environment.
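Roughly this split, with placeholder names and versions:

code:
# setup.cfg -- the minimum needed to install and use the package
[options]
install_requires =
    requests>=2.20

# requirements.txt -- a fully pinned, reproducible environment
requests==2.27.1
certifi==2021.10.8
urllib3==1.26.8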

CarForumPoster
Jun 26, 2013

⚡POWER⚡
Thanks!

The Iron Rose
May 12, 2012

:minnie: Cat Army :minnie:
ugly requirements.txt it is then!

I guess the prettier hierarchy is stored in my Pipfile anyways.

QuarkJets
Sep 8, 2008

The even better answer is "use a docker container" but I do think that publishing to pypi or a conda channel is still very useful for a lot of people

UraniumAnchor
May 21, 2006

Not a walrus.
I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted, so I have yet to actually stick it up on PyPI, but I'd like to try and address that.

The problem I'm running into is that I feel like the steps between an sdist and a wheel are some arcane nonsense that I have no idea how to hook into in order to get the steps I want.

My assumption is that I want the wheel to only include the final webpacked JS bundles (Django should just be treating them as normal static files at that point), not the pre-webpack JS files (or any of the scaffolding for that matter, e.g. package.json), but that the sdist should include these things.

Is this the kind of thing I can easily do without making my own build backend? Additionally, if I could just dump the directory structure into a temp folder and jam that into a wheel I'd be content with that but I can't seem to find an easy way to do that, since "wheel pack" seems to assume I have a wheel already that I'm trying to modify, not make one from scratch. If calling the standard "build --wheel" and then pulling the result apart and stuffing in the files that I need in a custom backend is the easiest path forward, then so be it, just want to make sure I'm not missing something obvious.

ynohtna
Feb 16, 2007

backwoods compatible
Illegal Hen

UraniumAnchor posted:

I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted, so I have yet to actually stick it up on PyPI, but I'd like to try and address that.

The problem I'm running into is that I feel like the steps between an sdist and a wheel are some arcane nonsense that I have no idea how to hook into in order to get the steps I want.

My assumption is that I want the wheel to only include the final webpacked JS bundles (Django should just be treating them as normal static files at that point), not the pre-webpack JS files (or any of the scaffolding for that matter, e.g. package.json), but that the sdist should include these things.

Is this the kind of thing I can easily do without making my own build backend? Additionally, if I could just dump the directory structure into a temp folder and jam that into a wheel I'd be content with that but I can't seem to find an easy way to do that, since "wheel pack" seems to assume I have a wheel already that I'm trying to modify, not make one from scratch. If calling the standard "build --wheel" and then pulling the result apart and stuffing in the files that I need in a custom backend is the easiest path forward, then so be it, just want to make sure I'm not missing something obvious.

I would definitely vendorise the pre-built JS bundle. I can't see any benefit to letting users re-bundle the JS on their own machines.

I'd be looking at the package_data and data_files options for including the bundle.

Maybe you would also need an entry in MANIFEST.in. Updating the manifest could even be enough on its own. It's all so vague and confusing.
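i.e., maybe something along these lines (paths hypothetical):

code:
# MANIFEST.in -- pull the built bundle into the sdist
recursive-include myapp/static/dist *.js *.map

# setup.cfg -- and honor that data in installs/wheels
[options]
include_package_data = True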

necrotic
Aug 2, 2005
I owe my brother big time for this!

UraniumAnchor posted:

I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted, so I have yet to actually stick it up on PyPI, but I'd like to try and address that.

The problem I'm running into is that I feel like the steps between an sdist and a wheel are some arcane nonsense that I have no idea how to hook into in order to get the steps I want.

My assumption is that I want the wheel to only include the final webpacked JS bundles (Django should just be treating them as normal static files at that point), not the pre-webpack JS files (or any of the scaffolding for that matter, e.g. package.json), but that the sdist should include these things.

Is this the kind of thing I can easily do without making my own build backend? Additionally, if I could just dump the directory structure into a temp folder and jam that into a wheel I'd be content with that but I can't seem to find an easy way to do that, since "wheel pack" seems to assume I have a wheel already that I'm trying to modify, not make one from scratch. If calling the standard "build --wheel" and then pulling the result apart and stuffing in the files that I need in a custom backend is the easiest path forward, then so be it, just want to make sure I'm not missing something obvious.

sdists and wheels differ more when architecture-dependent binaries are involved. You should always include the fully built JS in either one of those.

Hughmoris
Apr 21, 2007
Let's go to the abyss!
What's the pythonic way of building a pandas dataframe from a nested json array?

I'm retrieving quake data in json format. An example JSON can be found here: https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&starttime=2022-02-06

Each quake populates a list found at quakes['features']. To get the magnitude of the first quake on the list, I'd use quakes['features'][0]['properties']['mag'].

I'd like to build a dataframe so that each row represents a quake. The column names are the quake properties found at: quakes['features'][X]['properties']

I hope that makes sense!

cinci zoo sniper
Mar 15, 2013




Hughmoris posted:

What's the pythonic way of building a pandas dataframe from a nested json array?

I'm retrieving quake data in json format. An example JSON can be found here: https://earthquake.usgs.gov/fdsnws/event/1/query?format=geojson&starttime=2022-02-06

Each quake populates a list found at quakes['features']. To get the magnitude of the first quake on the list, I'd use quakes['features'][0]['properties']['mag'].

I'd like to build a dataframe so that each row represents a quake. The column names are the quake properties found at: quakes['features'][X]['properties']

I hope that makes sense!

I only glanced at your JSON from my phone, so I'm not fully confident this works, but the way to do it here should be along the following lines:

1. Read the JSONs into a list of dictionaries (or do this one by one if that's too much)
2. Write a function that creates a dictionary in the format of {"feature_name": feature_value} out of a quake JSON
3. Apply the function to your dataset via, e.g., a list comprehension
4. dataframe = pd.DataFrame(output_list)

In other words, the trick to know here is that pandas can construct a dataframe out of a list of flat dictionaries right away. The most you may need to do there is provide data type definitions, if pandas doesn't infer them correctly (and it's good practice to always do that anyway).
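Against the USGS feed you linked, that works out to roughly this (untested, going off the structure you described):

Python code:
import requests
import pandas as pd

url = ("https://earthquake.usgs.gov/fdsnws/event/1/query"
       "?format=geojson&starttime=2022-02-06")
quakes = requests.get(url).json()

# each feature's 'properties' object is already a flat dict of quake fields
rows = [feature["properties"] for feature in quakes["features"]]
df = pd.DataFrame(rows)

print(df[["mag", "place", "time"]].head())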

UraniumAnchor
May 21, 2006

Not a walrus.

ynohtna posted:

I would definitely vendorise the pre-build JS bundle. I can't see any benefit to letting users re-bundle the JS on their own machines.

I'd be looking at the package_data and data_files options for including the bundle.

Maybe you would also need an entry in MANIFEST.in. Updating the manifest could even be enough on its own. It's all so vague and confusing.

The bundle is definitely intended to be at least somewhat customizable (I don't know how often that happens in practice other than my own semi-public tweaks that don't always make it into master), so ideally it wouldn't actually build the bundle until the wheel step. In a perfect world, running a pip install that points to a GitHub commit would end up with the same end result as installing the wheel, but with a bundle step in the middle. That's the part I'm unsure of how to hook up. I definitely have the 'pre-packaged data' part figured out, but this isn't pre-packaged in the sense that it's committed to git directly, and I really don't want it to be. That feels too much like committing build artifacts.

QuarkJets
Sep 8, 2008

UraniumAnchor posted:

I have a Python (Django) project that also requires a JS bundle built by webpack in order to fully function. I couldn't really find a way to get a setup.py that did what I wanted so I have yet to actually stick it up on PyPi but I'd like to try and address that.

The problem I'm running into is that I feel like the steps between an sdist and a wheel are some arcane nonsense that I have no idea how to hook into in order to get the steps I want.

My assumption is that I want to the wheel to only include the final webpacked JS bundles (Django just should be treating them as normal static files at that point), not the pre-webpack JS files (or any of the scaffolding for that matter, e.g. package.json), but that the sdist should include these things.

Is this the kind of thing I can easily do without making my own build backend? Additionally, if I could just dump the directory structure into a temp folder and jam that into a wheel I'd be content with that but I can't seem to find an easy way to do that, since "wheel pack" seems to assume I have a wheel already that I'm trying to modify, not make one from scratch. If calling the standard "build --wheel" and then pulling the result apart and stuffing in the files that I need in a custom backend is the easiest path forward, then so be it, just want to make sure I'm not missing something obvious.

Poetry can take care of most of this. You're using .git, right (RIGHT???)? Add the files that you want to package into an sdist to your repository. Add the files that you definitely want to exclude to your .gitignore. Use poetry to initialize the project. Define dependencies and whatnot, tweak the pyproject. `poetry build` will automatically build a source distribution and then a wheel. You can publish these to PyPI with `poetry publish` (which will also automatically create the sdist and the wheel; you don't need to `build` first unless you want to).

Using poetry means that you should basically delete your setup.py; you should only need a pyproject.toml and the `poetry` command.
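The whole flow is just a few commands:

code:
poetry init      # interactive pyproject.toml setup
poetry build     # writes the sdist and wheel into dist/
poetry publish   # uploads to PyPI (--build to build first in one step)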

Hughmoris
Apr 21, 2007
Let's go to the abyss!

cinci zoo sniper posted:

I only glanced at your JSON from my phone, so I'm not fully confident this works, but the way to do it here should be along the following lines:

1. Read the JSONs into a list of dictionaries (or do this one by one if that's too much)
2. Write a function that creates a dictionary in the format of {"feature_name": feature_value} out of a quake JSON
3. Apply the function to your dataset via, e.g., a list comprehension
4. dataframe = pd.DataFrame(output_list)

In other words, the trick to know here is that pandas can construct a dataframe out of a list of flat dictionaries right away. The most you may need to do there is provide data type definitions, if pandas doesn't infer them correctly (and it's good practice to always do that anyway).

That helped, thanks!

Also, I discovered that I can wget the file in CSV format instead of JSON. The CSV file was about 1/4 the size of the JSON for the same date range; didn't expect that.

cinci zoo sniper
Mar 15, 2013




Hughmoris posted:

That helped, thanks!

Also, I discovered that I can wget the file in CSV format instead of JSON. The CSV file was about 1/4 the size of the JSON for the same date range; didn't expect that.

CSV has only one header, whereas all those "feature_a", "feature_b" object names are repeated in each JSON record. If you subtract their character lengths from the total, you'll be looking at a similar difference.

UraniumAnchor
May 21, 2006

Not a walrus.

QuarkJets posted:

Poetry can take care of most of this. You're using .git, right (RIGHT???)? Add the files that you want to package into an sdist to your repository. Add the files that you definitely want to exclude to your .gitignore. Use poetry to initialize the project. Define dependencies and whatnot, tweak the pyproject. `poetry build` will automatically build a source distribution and then a wheel. You can publish these to PyPI with `poetry publish` (which will also automatically create the sdist and the wheel; you don't need to `build` first unless you want to).

Using poetry means that you should basically delete your setup.py; you should only need a pyproject.toml and the `poetry` command.

Poetry looks like it will do what I want for the most part; I think I'll need to write an inline plugin unless there's an existing plugin that lets me run an arbitrary command right before packing the wheel. All I need to do is run a couple of yarn commands to actually build the JS bundles before packing everything.

QuarkJets
Sep 8, 2008

UraniumAnchor posted:

Poetry looks like it will do what I want for the most part; I think I'll need to write an inline plugin unless there's an existing plugin that lets me run an arbitrary command right before packing the wheel. All I need to do is run a couple of yarn commands to actually build the JS bundles before packing everything.

Sounds like something you could stick in a makefile, to me
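Something like this hypothetical Makefile, assuming `yarn build` is what runs webpack:

code:
# Makefile
dist: bundle
	poetry build

bundle:
	yarn install
	yarn build

.PHONY: dist bundle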

Hughmoris
Apr 21, 2007
Let's go to the abyss!

cinci zoo sniper posted:

CSV has only one header, whereas all those "feature_a", "feature_b" object names are repeated in each JSON record. If you subtract their character lengths from the total, you'll be looking at a similar difference.

That makes perfect sense, thanks.

I have a barebones dev VM that I'm using to learn AWS. It's running Kubuntu, and the system version of Python is 3.8.10.

Is it possible to create virtual environments (venv) using a newer version of Python? Or is the gist of venv just to give you a copy of the version used to create it? Ideally, I could create virtual environments with the newest release of Python without mangling my installed system environment.

QuarkJets
Sep 8, 2008

Hughmoris posted:

That makes perfect sense, thanks.

I have a barebones dev VM that I'm using to learn AWS. It's running Kubuntu, and the system version of Python is 3.8.10.

Is it possible to create virtual environments (venv) using a newer version of Python? Or is the gist of venv just to give you a copy of the version used to create it? Ideally, I could create virtual environments with the newest release of Python without mangling my installed system environment.

I am pretty sure that venv only creates new environments with the same python version as the current environment

What you want is mamba (i.e., a newer and better conda). You can create a conda environment with whatever version of Python you want, then from there you can go pure Python with pip or continue using conda channels if you want. The mambaforge installer is light and can be installed in a user directory without administrative privileges; you see this being done in dockerfiles all the time (e.g., download mambaforge, unpack it, create a new env, install stuff into it with mamba or pip).
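e.g., roughly (version number arbitrary):

code:
mamba create -n myenv python=3.10
mamba activate myenv
pip install -r requirements.txt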

QuarkJets fucked around with this message at 04:43 on Feb 7, 2022

Bad Munki
Nov 4, 2008

We're all mad here.


You can specify python= when you create the venv, so it’s not necessarily the same as what the current enviro is, but whatever you specify does have to be installed and available via the specified name already.
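With virtualenv that's the --python flag; the interpreter has to already be installed:

code:
virtualenv --python=python3.10 .venv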

cinci zoo sniper
Mar 15, 2013




QuarkJets posted:

I am pretty sure that venv only creates new environments with the same python version as the current environment

What you want is mamba (i.e., a newer and better conda). You can create a conda environment with whatever version of Python you want, then from there you can go pure Python with pip or continue using conda channels if you want. The mambaforge installer is light and can be installed in a user directory without administrative privileges; you see this being done in dockerfiles all the time (e.g., download mambaforge, unpack it, create a new env, install stuff into it with mamba or pip).

pyenv would also do the job here.
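e.g., a rough sketch:

code:
pyenv install 3.10.2    # build a newer interpreter, no sudo needed
pyenv local 3.10.2      # pin it for the current directory
python -m venv .venv    # venvs created now use 3.10.2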

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:
"Why isn't this table getting populated? The data is formed well, the vars() of it looks good, everything is populated...lemme go look at the function."

Oh.

That's why.

No return statement to spit the prettified data back :doh:


On another note, I'm quite pleased with myself for realizing I can pass the error code and message as parameters to a boilerplate error.html page instead of a separate page for each message.


EDIT: That was what else I wanted to ask. Instead of writing a prettify_data function for every possible math-performing class, I want to have one PrettyData object that can accept a generic object and basically... run down the line: if XYZ property (key in the object dict?) exists, prettify it.

Is there an easy way to do this? Do I loop over the passed object's vars() call with a for key, item in object_dict and have a bunch of if calls to check if keys exist?

Can you even pass generic objects in Python without knowing exactly what properties/keys the object is going to have?


EDIT 2: Now I'm thinking of having the math class object (PolyRoof, PanRoof, etc.) itself prettify its own data by passing itself to a PrettyData object and having that accessible as the math class object's pretty_data property.

D34THROW fucked around with this message at 17:52 on Feb 7, 2022

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

D34THROW posted:

"Why isn't this table getting populated? The data is formed well, the vars() of it looks good, everything is populated...lemme go look at the function."

Oh.

That's why.

No return statement to spit the prettified data back :doh:


On another note, I'm quite pleased with myself for realizing I can pass the error code and message as parameters to a boilerplate error.html page instead of a separate page for each message.

Use type hints. Any decent editor will warn you if a function hinted as returning a value does not return a value (and might also warn about returning an object of the wrong type, depending on how complex the definition is).
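As a tiny illustration (hypothetical function):

Python code:
def prettify(data: dict) -> dict:
    pretty = {key.upper(): value for key, value in data.items()}
    # forget this return and the editor/mypy flags the function,
    # because its annotation promises a dict but it would return None
    return pretty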

QuarkJets
Sep 8, 2008

D34THROW posted:

"Why isn't this table getting populated? The data is formed well, the vars() of it looks good, everything is populated...lemme go look at the function."

Oh.

That's why.

No return statement to spit the prettified data back :doh:


On another note, I'm quite pleased with myself for realizing I can pass the error code and message as parameters to a boilerplate error.html page instead of a separate page for each message.


EDIT: That was what else I wanted to ask. Instead of writing a prettify_data function for every possible math-performing class, I want to have one PrettyData object that can accept a generic object and basically... run down the line: if XYZ property (key in the object dict?) exists, prettify it.

Is there an easy way to do this? Do I loop over the passed object's vars() call with a for key, item in object_dict and have a bunch of if calls to check if keys exist?

Can you even pass generic objects in Python without knowing exactly what properties/keys the object is going to have?


EDIT 2: Now I'm thinking of having the math class object (PolyRoof, PanRoof, etc.) itself prettify its own data by passing itself to a PrettyData object and having that accessible as the math class object's pretty_data property.

What kind of prettifying are we talking about here?

I would not try to write a universal class that iterates over attributes in other classes, but you could have a class invoke a generic prettify function on the data that it loads; that would probably be cleaner imo. The new class or function should just transform data passed to it, basically.
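i.e., roughly this shape (names made up):

Python code:
def prettify(rows):
    """Transform (label, value) pairs into template-ready rows."""
    return [[label.upper(), str(value)] for label, value in rows]

# each class decides what data it hands over; the function only formats
pretty = prettify([("panels", 12), ("headers", "1 EA @ 24'")])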

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:

QuarkJets posted:

What kind of prettifying are we talking about here?

Basically I'm taking data like roof_data.header_screws, roof_data.cap_screws, and roof_data.flashing_screws and making a dict of lists that the template iterates over to generate the data tables, putting out (respectively) the lines below. Maybe "prep for tableification" is a better term:
pre:
SMS 14 X 2 1/2      | 100 EA
TEK 10 X 3/4        | 100 EA
TEK 10 X 3/4 W/NEO  | 100 EA
This is one of the existing prettification functions, then the HTML that generates the table of tables. If there is a better way to do this, I'm all ears - just bear in mind this is new territory for me :v:
Python code:
def prettify_poly_data(roof_data: PolyRoof):
    extrusion = [
        ['PANELS', roof_data.panels],
        ['HEADER', roof_data.headers],
        [roof_data.gutter_type, roof_data.gutters],
        [roof_data.fascia_type, roof_data.fascia],
        ['16" COIL', lf(roof_data.flashing)],
        ['DROPOUTS', ea(roof_data.dropouts)],
        ['DOWNSPOUTS', ea(roof_data.downspouts)],
        ['ELBOWS', ea(roof_data.elbows)],
    ]
    fasteners = [
        ['TEK 10 X 3/4 W/NEO', ea(roof_data.flashing_screws)],
        ['TEK 12 X 3/4', ea(roof_data.cap_screws)],
        ['TEK 14 X 4 7/8', ea(roof_data.poly_lags)],
        ['SMS 14 X 2 1/2', ea(roof_data.header_screws)],
    ]
    adhesive = [
        ['VULKEM 116', ea(roof_data.caulk)],
        ['POWERBOND', ea(roof_data.tape, "50'")],
    ]
    labor = [
        ['FLASHING', lf(roof_data.labor_flashing)],
        ['ROOF', sf(roof_data.labor_roof)],
        ['DOWNSPOUTS', ea(roof_data.labor_downs)],
    ]

    return dict(
        extrusion=extrusion,
        fasteners=fasteners,
        adhesive=adhesive,
        labor=labor,
    )
HTML code:
<table class="centered outside-container">
    <colgroup>
        <col class="inner-containers">
        <col class="inner-containers">
        <col class="inner-containers">
        <col class="inner-containers">
    </colgroup>
    <tr>
        <td colspan="4" class="reportheader">{{ report_type }} Estimates</td>
    </tr>
    <tr class="jobinfo">
        <td colspan="2" class="jobinfo"><b>Customer: </b>
            {{ job_info['cust'] }}</td>
        <td colspan="2" class="jobinfo"><b>Job Number: </b>
                    {{ job_info['job_no'] }}</td>
    </tr>
    <tr>
        <td class="middle-container">
            <table class="inside-container">
                <colgroup>
                    <col class="mater-name">
                    <col class="mater-amount">
                </colgroup>
                <tr>
                    <td colspan="2" class="inside-header">
                        <b>Extrusion and Metal</b>
                    </td>
                </tr>
                {% for row in pretty_data['extrusion'] %}
                <tr class="inner-data">
                    <td class="left"> {{row[0] }}</td>
                    <td class="right"> {{row[1] }}</td>
                </tr>
                {% endfor %}
            </table>
        </td>
        <td class="middle-container">
            <table class="inside-container">
                <colgroup>
                    <col class="mater-name">
                    <col class="mater-amount">
                </colgroup>
                <tr>
                    <td colspan="2" class="inside-header">
                        <b>Fasteners</b>
                    </td>
                </tr>
                {% for row in pretty_data['fasteners'] %}
                <tr>
                    <td class="left"> {{row[0] }}</td>
                    <td class="right"> {{row[1] }}</td>
                </tr>
                {% endfor %}
            </table>
        </td>
        <td class="middle-container">
            <table class="inside-container">
                <colgroup>
                    <col class="mater-name">
                    <col class="mater-amount">
                </colgroup>
                <tr>
                    <td colspan="2" class="inside-header">
                        <b>Adhesives and Sealants</b>
                    </td>
                </tr>
                {% for row in pretty_data['adhesive'] %}
                <tr>
                    <td class="left"> {{row[0] }}</td>
                    <td class="right"> {{row[1] }}</td>
                </tr>
                {% endfor %}
            </table>
        </td>
        <td class="middle-container">
            <table class="inside-container">
                <colgroup>
                    <col class="mater-name">
                    <col class="mater-amount">
                </colgroup>
                <tr>
                    <td colspan="2" class="inside-header">
                        <b>Labor</b>
                    </td>
                </tr>
                {% for row in pretty_data['labor'] %}
                <tr>
                    <td class="left"> {{row[0] }}</td>
                    <td class="right"> {{row[1] }}</td>
                </tr>
                {% endfor %}
            </table>
        </td>
    </tr>
    <tr>
        <td colspan="4" class="button-container">
            <br>
            <form action="" method="post" novalidate>{{ form.hidden_tag() }}
            {{ form.save_report }}  
                <div class="pseudobutton">
                    <a href="{{ url_for('calculators.print_report') }}" 
                        target="_blank" class="pseudobutton">Print Report</a>
                </div>
            </form>
        </td>
    </tr>
</table>
{% endautoescape %}{% else %}

D34THROW fucked around with this message at 18:27 on Feb 7, 2022

Falcon2001
Oct 10, 2004

Eat your hamburgers, Apollo.
Pillbug
It's not unreasonable or uncommon in my experience to have a function in a class that outputs a specific datatype for other use (as_dict, as_list, etc). I wouldn't necessarily throw all the formatting logic in there, because I think you risk overcrowding the class, but then again I just did that myself in a reporting project.

In general, it's probably good to avoid having to doubly refer to data (maintaining a list of keys and values outside of the relevant class, etc.), but I suspect others in here would have better advice.

Also: Nthing the recommendation for type hints. They're an absolute lifesaver for saving you one more run when you forget something simple like a return, and they're not that hard to include. https://www.pythontutorial.net/python-basics/python-type-hints/ seems like an alright intro, and it makes your IDE a ton more readable.

Falcon2001 fucked around with this message at 19:19 on Feb 7, 2022

QuarkJets
Sep 8, 2008

I too like to have a "to_dict" method in a class, maybe customizing that name if the dict has a specific purpose, like an SQL insert.

And then it's all a matter of determining where to prettify things. Maybe the answer is to prettify in to_dict. Maybe prettify is used in a bunch of properties (e.g. your class could have private raw data and a property for each that formats the raw data, or you could just store the pretty data, depending on what else the class needs to do).
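Sketching that last variant (class and field hypothetical):

Python code:
class PolyRoof:
    def __init__(self, header_screws):
        self._header_screws = header_screws  # private raw data

    @property
    def header_screws(self):
        # formatting lives here; the raw value stays untouched
        return f"{self._header_screws:.0f} EA"

    def to_dict(self):
        return {"header_screws": self.header_screws}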

death cob for cutie
Dec 30, 2006

dwarves won't delve no more
too much splatting down on Zot:4
How should I handle a situation in which code wants to create an instance of a class, but shouldn't because it's being passed the wrong data?

Let's say I have a class definition:

code:
class SampleClass():
	
	def __init__(self, a, b, c):
		self.a = a
		self.b = b
		self.c = c
Let's say a really needs to be an integer - if we pass it something that can't be converted into an integer, I need to make sure an instance of SampleClass isn't created. I know that when I call SampleClass(a, b, c) in my code, it calls __new__(cls, ...) which then calls __init__(self, ...) before returning the instance of the class we've created. Would I want something like a try/except in __new__ that would check to see if a can become an integer, and if not, cause __new__ to just return None instead of returning a new instance of SampleClass? Or is there a smarter way of handling this?
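To make that concrete, the two shapes I'm weighing (the int conversion is just the example check):

Python code:
class SampleClass:
    def __init__(self, a, b, c):
        # raising here means the caller gets a ValueError/TypeError
        # instead of a half-built instance
        self.a = int(a)
        self.b = b
        self.c = c

# the __new__-returns-None route described above also works mechanically,
# but then every caller has to remember to check for None:
try:
    obj = SampleClass("not-an-int", 2, 3)
except (ValueError, TypeError):
    obj = None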
