|
Sure, that's a very common use case for classmethods, though without seeing the rest of your code, I don't see a reason you need two separate classes for that example. Python code:
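The quoted snippet didn't survive, but the pattern being discussed — a classmethod as an alternate constructor, so a second class isn't needed — can be sketched like this (the class and field names are illustrative assumptions, not from the thread):

```python
class Pokemon:
    """Plain data holder; the fields are illustrative."""

    def __init__(self, name, pokedex_id):
        self.name = name
        self.pokedex_id = pokedex_id

    @classmethod
    def from_api_dict(cls, data):
        # Alternate constructor: build an instance straight from a
        # decoded API response instead of requiring a second class.
        return cls(name=data["name"], pokedex_id=data["id"])


pikachu = Pokemon.from_api_dict({"name": "pikachu", "id": 25})
```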
|
# ? Mar 25, 2022 19:22 |
|
D34THROW posted:
It apparently supports both, according to the home page. https://pokeapi.co/
|
# ? Mar 25, 2022 19:31 |
|
I get this is a fun pet project, but it seems weird to rely on an API for Pokémon data. How much of it can there really be? It looks like there are API dumps, but it's in thousands of directories of JSON files, as opposed to like a SQL dump or something else infinitely more useful.
|
# ? Mar 25, 2022 20:03 |
|
Well, after doing a very simple cache of the API and editing the code to pull from the dict, things do run notably faster. I think I'll try my hand at creating a loading light thread and then close the project out. Here is my full code as of now: https://paste.pythondiscord.com/elixanitih nullfunction posted:I've never worked with Qt so I can't speak to the offload of work in QThreads or whatever, hopefully this gets the caching idea across, and may get you thinking about how you can modify your program's structure to make your life a little easier in the future: This is indeed a template I will refer to in the future. Seems useful and organized. There is one instance in the code where I need to use the API again with a URL that isn't the typical one: code:
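For reference, the "very simple cache of the api" idea amounts to memoizing responses by URL. A minimal sketch (the injectable `fetch` hook is my addition so the function can be exercised without network access):

```python
import json
from urllib.request import urlopen

# Module-level cache mapping URL -> decoded JSON response.
_cache = {}


def get_json(url, fetch=None):
    """Return decoded JSON for url, caching the result in memory.

    fetch is injectable for testing; by default this does a real HTTP GET.
    """
    if url not in _cache:
        if fetch is None:
            with urlopen(url) as resp:
                _cache[url] = json.load(resp)
        else:
            _cache[url] = fetch(url)
    return _cache[url]
```

Repeated lookups of the same Pokémon then cost a dict lookup instead of a network round trip.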
|
# ? Mar 25, 2022 20:21 |
|
ExcessBLarg! posted:I get this is a fun pet project, but it seems weird to rely on an API for Pokémon data. How much of it can there really be? A) Pull the data as needed from the API. Local data is as up-to-date as the API data's. B) Package the data with your app and be forced to update every time the Pokedex changes. I suppose if you really wanted to get technical, you could have a persistent SQLite DB that consists of a single table populated by pulling data from the API and checking a persistently stored version number against the API version or something like that? If you're not being sarcastic, each Pokemon - of which the current Pokedex lists 898 - has a name, a height, a weight, a type, a description, a list of moves, a list of resistances, a list of weaknesses, what it evolves into, what it evolves from, what it can evolve into in some cases, et al.
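The SQLite idea in this post can be sketched roughly like so; the table layout and version key are my assumptions, not PokéAPI specifics:

```python
import json
import sqlite3


def open_cache(path=":memory:"):
    # One table of cached records plus a meta table holding the
    # version number we last synced against.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS pokemon (name TEXT PRIMARY KEY, data TEXT)")
    return db


def needs_refresh(db, api_version):
    # Compare the persistently stored version against the API's.
    row = db.execute("SELECT value FROM meta WHERE key = 'version'").fetchone()
    return row is None or row[0] != api_version


def store(db, api_version, records):
    db.executemany(
        "INSERT OR REPLACE INTO pokemon VALUES (?, ?)",
        [(r["name"], json.dumps(r)) for r in records],
    )
    db.execute("INSERT OR REPLACE INTO meta VALUES ('version', ?)", (api_version,))
    db.commit()
```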
|
# ? Mar 25, 2022 21:20 |
|
Yeah, I mean, APIs make sense when pulling data from Twitter or something, as that's a realtime service. The Pokédex updates, what, once a year? There's no way a SQL database is larger than 10 MB. It would make a lot more sense to store that locally with an auto-update service for the entire DB. When building a professional application, the biggest concern with using an external API is availability, and usually the risk of using one only makes sense if the data is highly real-time sensitive or the size of the data is unwieldy. Not that availability is a huge concern here, but the discussion has already turned towards caching the data locally and dealing with response delays. The PokéAPI folks argue the benefit to using their API is that websites can all be kept up to date quickly after new games are released, which is true. But what happens to all these fan sites when PokéAPI no longer pays their hosting bill and goes offline?
|
# ? Mar 25, 2022 23:52 |
|
ExcessBLarg! posted:Yeah, I mean, APIs make sense when pulling data from Twitter or something as that's a realtime service. The Pokédex updates, what, once a year? There's no way a SQL database is larger than 10 MB. It would make a lot more sense to store that locally with an auto update service for the entire DB. Yeah you really get the best of both worlds if you just store that data locally as a table and periodically update it with API calls - at this scale local caching of the entire dataset is very likely the right move. Surely the PokeAPI people would also prefer that, since that's fewer hits to their servers
|
# ? Mar 26, 2022 00:12 |
|
I don't know that they do. I think they prefer to have the bragging rights of "powering" a bunch of fan sites. Their data model doesn't really make sense from an efficiency standpoint. Edit: Yes, "serving over 250,000,000 API calls each month!"
|
# ? Mar 26, 2022 00:20 |
|
In their section on fair use they mention:quote:Locally cache resources whenever you request them. Which I would interpret to mean, they'd prefer requests to the API only when getting it for the first time or checking for updates, rather than being designed as a free server.

they say it's an educational service so i think using it for production-level anything is probably not intended. If your estimated usage is very light then they won't even notice it, but if there's the potential for a lot of calls (what if the alpha goes viral and 1 billion people start playing it! or, far more likely, there's some bug or other that just silently spams out infinite api calls and gets you banned) then it would be better if it were pointing at a cached copy of the data instead.

generally i don't trust my code to point at external servers during development, even if i 100% need to be getting data externally. i'll usually just set it up first to check the connection and initial data, then comment it out, mock or use a local/cached version of the data while working on everything, including error handling and try/retry/backoff limits and so forth, then re-connect it to the external api towards the end to test

e: although thinking about it, i normally work with files that are a lot bigger than what a single pokemon definition would need. might be overkill

boofhead fucked around with this message at 01:03 on Mar 26, 2022 |
# ? Mar 26, 2022 00:58 |
|
I don't know why I'm looking at this. Anyways it looks like searchable Pokédex projects go back a decade or more. One of the more popular ones has a database-dump-as-csv on GitHub, in a highly (IMO unnecessarily) normalized form (example). This was forked, converted into even more redundant JSON (example), which is now hosted as an API that encourages you to make a separate request for each berry-firmness or whatever. All of this should be a small handful of database tables, or one large JSON/YAML/whatever file. YAML even supports references. I'd say it's fine as a hobbyist engineering project just to try out these technologies but ultimately I'd say the complexity is difficult to defend given the scope of the underlying data. ExcessBLarg! fucked around with this message at 01:09 on Mar 26, 2022 |
# ? Mar 26, 2022 01:05 |
|
ExcessBLarg! posted:Yeah, I mean, APIs make sense when pulling data from Twitter or something as that's a realtime service. The Pokédex updates, what, once a year? There's no way a SQL database is larger than 10 MB. It would make a lot more sense to store that locally with an auto update service for the entire DB. It's a rolling open source api that users update. It isn't even complete and is still in beta.
|
# ? Mar 26, 2022 01:28 |
|
That may be true but it doesn't change the fact that it's overengineered, in a bad way. Look, when your JSON is 80% reference URLs by volume you're doing it wrong.
|
# ? Mar 26, 2022 02:16 |
|
I don't think the fact that it's community-run or beta really answers the "what happens if the API is no longer reachable?" question, if anything, it underscores it. It's a question you should be asking yourself each time you interface with something outside your immediate control, though, even as a hobbyist. Answering those questions will give you natural boundaries in the code you write, hints on where to break things apart into more manageable chunks. Code that works and does what you intend it to do is an achievement at any level, and if you have no ambitions past hobbyist that's fair game too. Rather than shelving it, try adding a local file cache! Even if you're just saving the JSON you got from the webserver, it means you can still use it when your internet connection is down, and a refactor would do the code you posted some good, especially if you ever want to change anything about it in the future.
|
# ? Mar 26, 2022 03:00 |
|
The nice thing about HTTP is you can also transparently cache the data if you were to set up a Squid cache on your host and proxy the requests through it, which may be a whole different project worth trying. Except, it turns out that if you make HTTP requests to pokeapi.co it force-upgrades you to HTTPS via a 301. That seems a bit much, given the nature of the data. Connection reuse is another fun one, so you don't have to burn so many TLS session negotiations.
|
# ? Mar 26, 2022 03:19 |
|
Well, you wouldn't want any bad actors intercepting your data, would you? Pokemon color is highly sensitive data and you wouldn't want any prying eyes to see that Pikachu is yellow. Wait! poo poo! I'm already leaking data!
|
# ? Mar 26, 2022 03:21 |
|
I use SSL to encrypt the video stream to my monitor, but my monitor just displays the encrypted results; I have trained my eyes to perform real-time decryption
|
# ? Mar 26, 2022 05:42 |
|
PGT order processor is functional. Now to root out weird rear end edge cases that might occur in production and compensate for them. Simply fantastic, passing an unpacked list-comprehension list to an *args function. I should really look into multiple dispatch so that I can have a version of combine_and_add_dicts that takes a list and a version that takes *args. Python code:
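The snippet didn't survive the post, but a combine_and_add_dicts taking *args might look like this; summing values on key collisions is my guess at the intended "add" behavior:

```python
from collections import Counter


def combine_and_add_dicts(*dicts):
    # Counter.update with a mapping adds values rather than replacing
    # them, so keys seen in more than one dict get summed.
    total = Counter()
    for d in dicts:
        total.update(d)
    return dict(total)


parts = [{"2x4": 3, "screws": 10}, {"2x4": 2}]
combined = combine_and_add_dicts(*parts)  # unpacked list passed to *args
```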
|
# ? Mar 28, 2022 15:48 |
|
I mentioned it in the other thread but this should also work if you want a version that works with generators without having to unpack/repack them:Python code:
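The suggested generator-friendly variant presumably takes a single iterable instead of *args, roughly:

```python
from collections import Counter


def combine_and_add_dicts(dicts):
    # Accepting one iterable of dicts means a generator can be passed
    # straight in, without unpacking it into *args and repacking.
    total = Counter()
    for d in dicts:
        total.update(d)
    return dict(total)


# A generator expression works directly:
result = combine_and_add_dicts({"a": i} for i in range(3))
```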
|
# ? Mar 28, 2022 21:20 |
|
ExcessBLarg! posted:I mentioned it in the other thread but this should also work if you want a version that works with generators without having to unpack/repack them: This probably makes more sense; occasionally I'll be passing

If I wanted to use functools and singledispatch to create a version that works with an arbitrary number of args, and a version that works with a generator expression, how would I go about that? Or rather, what is the "type" of a generator expression, so that I can create @combine_and_add_dicts.register(generator), so to speak? types.GeneratorType? EDIT: That's exactly what it was, works flawlessly, thank you!

Another question: I notice a lot of things use the Sphinx-style docstrings but I was curious what else was preferred? Sphinx-style are made to be parsed by Sphinx and aren't super clear in IntelliSense, where something like the numpy docstrings are more user-legible. Something about the Google docstrings doesn't sit right with me. EDIT: I sorta like the numpy standard, and Sphinx supports it. I at least want to have some documentation in case I'm no longer at the company and they won't pay me to maintain it. I unfortunately care about these types of pet issues that have bothered me for a long time. Refactoring my documentation is gonna take at least a day or two. How fun

D34THROW fucked around with this message at 16:04 on Mar 30, 2022 |
# ? Mar 30, 2022 14:59 |
|
D34THROW posted:If I wanted to use functools and singledispatch to create a version that works with an arbitrary number of args, and a version that works with a generator expression, how would I go about that? Or rather, what is the "type" of a generator expression so that I can create @combine_and_add_dicts.register(generator) so to speak?types.GeneratorType? EDIT: That's exactly what it was, works flawlessly, thank you! To be honest if I were writing it I'd probably do something like: Python code:
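The original snippet is missing; one plausible shape, using functools.singledispatch and types.GeneratorType as in the quoted question (singledispatch dispatches on the type of the first positional argument):

```python
import types
from collections import Counter
from functools import singledispatch


@singledispatch
def combine_and_add_dicts(first, *rest):
    raise TypeError(f"unsupported type: {type(first).__name__}")


@combine_and_add_dicts.register(dict)
def _(first, *rest):
    # *args variant: first arg is a dict, so this overload is chosen.
    total = Counter(first)
    for d in rest:
        total.update(d)
    return dict(total)


@combine_and_add_dicts.register(types.GeneratorType)
def _(gen):
    # Generator variant: consume the generator of dicts directly.
    total = Counter()
    for d in gen:
        total.update(d)
    return dict(total)
```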
|
# ? Mar 30, 2022 17:50 |
|
After a long time away, I'm back to poking at Python a bit to help the job prospects. Watching a presentation on "High Performance Python", I saw an interesting bit about identifying memory usage in pandas. The memory usage described in df.info() is an approximation, and you need df.info(memory_usage="deep") to get a more accurate picture. It may be old hat to this crowd but definitely something useful for me to tuck away. Screenshot of the difference after reading in a 62MB csv file. In this case, df.info() was off by an order of magnitude. https://www.youtube.com/watch?v=xT9SL35ilfM
|
# ? Mar 31, 2022 00:59 |
|
Hughmoris posted:After a long time away, I'm back to poking at Python a bit to help the job prospects. This happens when you deal with datatypes that are 'object', e.g. "who the gently caress knows how big each of these things is". So a cursory inspection just multiplies an arbitrary object size by the number of rows but misses all of the additional data that may be stuffed into the objects as nested dictionaries or whatever. That's where the "deep" part comes in. Personally I don't think that "pandas" and "performance" belong in the same sentence QuarkJets fucked around with this message at 01:16 on Mar 31, 2022 |
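The shallow-vs-deep distinction can be illustrated with plain Python: sys.getsizeof reports only a container's own overhead, not the objects it references, much like pandas' default (non-"deep") memory_usage for object columns. A toy deep-size helper, written for illustration and handling only a few container types:

```python
import sys


def deep_size(obj, seen=None):
    """Approximate size of obj including the objects it references."""
    seen = set() if seen is None else seen
    if id(obj) in seen:
        return 0  # count shared objects once
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_size(k, seen) + deep_size(v, seen) for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set)):
        size += sum(deep_size(x, seen) for x in obj)
    return size


row = ["x" * 1000, {"moves": ["tackle"] * 100}]
shallow = sys.getsizeof(row)  # just the list header plus pointers
deep = deep_size(row)         # includes the big string and nested dict
```

The shallow number misses the kilobyte-sized string entirely, which is the same gap `memory_usage="deep"` closes.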
# ? Mar 31, 2022 01:08 |
|
QuarkJets posted:
You're likely right but I'll hit performance bottlenecks due to lovely code long before I hit pandas' ceiling. My code will never be fast but hopefully it'll be faster. Hughmoris fucked around with this message at 01:45 on Mar 31, 2022 |
# ? Mar 31, 2022 01:35 |
|
I don't think I will ever shed that feeling of "this lovely loving spaghetti works?" when I successfully implement a new feature and eliminate all the typos. Up to calculating window/door material, storm panels, poly roofs, pan roofs, and glass walls. Next step is turning glass walls into a glass room but that's just totaling.
|
# ? Mar 31, 2022 13:03 |
|
I'm running into an error on some homework. I've got a virtual environment running, I've installed Django, but when I try to upload a json file to my database on the server, I get this error. code:
|
# ? Mar 31, 2022 13:21 |
|
I'm phone posting so I can't tell you the whole context, but where it breaks is dict["items"] then .items(). Is that a legitimate field with another dict in it? Probably you just wanted dict.items()
|
# ? Mar 31, 2022 13:27 |
|
boofhead posted:I The json file has no field "items". I've emailed my teacher, but that might take a while. I followed her instructions exactly, I've no idea why it failed. I hate using the command line. edit: all that's in the json file is code:
|
# ? Mar 31, 2022 13:30 |
The Django "loaddata" script expects a fixture JSON file to be in a certain format. How did you create your JSON file? What format are you following? The error is because a fixture JSON for Django needs to have a "fields" key in each item, i.e. JSON code:
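For reference, Django's fixture format nests the model's attributes under a "fields" key, with "model" and "pk" alongside it; the app label, model name, and field names below are illustrative:

```json
[
  {
    "model": "books.book",
    "pk": 1,
    "fields": {
      "title": "Dune",
      "author": "Frank Herbert"
    }
  }
]
```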
|
|
# ? Mar 31, 2022 13:50 |
|
Data Graham posted:The Django "loaddata" script expects a fixture JSON file to be in a certain format. How did you create your JSON file? What format are you following? The json file was made in another section of the assignment and fits the set criteria. Reformatting it doesn't seem to have fixed the problem. code:
|
# ? Mar 31, 2022 13:56 |
|
Ah, i see what i did wrong. You're right, it doesn't have the fields attribute. How do i add that?
|
# ? Mar 31, 2022 13:59 |
I mean, that code there just shows you writing out the file, it doesn't show what "books_fixture" is or whether it's in the correct format. If you use Django's "dumpdata" script it will write the fixture in the correct format. And as for reformatting it not fixing the problem, really? Is the error the same? e: If you're creating "books_fixture" manually, you need to add all the fields inside a "fields" sub-dictionary inside each item's main dictionary, not all at the same level.
|
|
# ? Mar 31, 2022 13:59 |
|
ugh, none of the documentation I was given told me what a fixture file looks like. Starting to hate this teacher. How do I turn a CSV into a fixture JSON? code:
|
# ? Mar 31, 2022 14:05 |
You can add the fields dict like this:Python code:
Python code:
Though I would also caution against modifying "book" directly, because you're using it as your iterator. I would make a new object for each book to add to the fixture, and then set the fields and other keys on that, like book_obj = {} -- otherwise you're going to be trying to add "book" to the fixture but with extra keys added to it, and "book" isn't in the right format. Best to start over fresh with each book object and only include the keys you explicitly care about. If the teacher didn't give you any clue as to what the format should be, yeah that's butt. But if you already have your Django app set up, you can use the Django admin to look at the Books model and add some data through the GUI, and then do a "dumpdata" CLI command to output the fixture in the correct format, for you to refer to. code:
Data Graham fucked around with this message at 14:22 on Mar 31, 2022 |
|
# ? Mar 31, 2022 14:19 |
|
no, part of part A of the assignment is to code the conversion manually. I'm messing with it now.
|
# ? Mar 31, 2022 14:21 |
|
ok, got that working. Now, the last part is done according to her specifications, I've copied the code exactly. But I'm getting an error. code:
|
# ? Mar 31, 2022 15:14 |
|
code:
the second file is what you're trying to import stuff from, and the first file is where you're trying to import (and use) it. The line "from books import urls" is what your code is trying to do but failing at.

boofhead fucked around with this message at 15:41 on Mar 31, 2022 |
# ? Mar 31, 2022 15:38 |
|
boofhead posted:
__init__ is empty. I wasn't told to modify it, only urls.
|
# ? Mar 31, 2022 15:40 |
|
Mycroft Holmes posted:__init__ is empty. I wasn't told to modify it, only urls. sorry I'm a bit out of it today and was thinking of javascript index files for importing. look inside the /books/ directory (it'll contain a package) and see what it's doing. The empty __init__.py is just a placeholder file that tells python to treat the directory as a package, so you can import it and do stuff with it. so either your import line is pointing at the wrong thing (i.e. it should be: from not_books import urls) or it's trying to import the wrong thing (i.e.: from books import not_urls)
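The package mechanics can be demonstrated end to end: an empty __init__.py plus a urls.py is all `from books import urls` needs, as long as the package's parent directory is importable (the directory names here mirror the assignment's):

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway package on disk:
#   <tmp>/books/__init__.py   (empty marker file -> directory is a package)
#   <tmp>/books/urls.py
root = Path(tempfile.mkdtemp())
pkg = root / "books"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "urls.py").write_text("urlpatterns = []")

# The parent directory must be on sys.path for the import to resolve.
sys.path.insert(0, str(root))
from books import urls
```

If this import fails in the real project, the name on either side of `import` doesn't match what's actually on disk.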
|
# ? Mar 31, 2022 15:43 |
|
boofhead posted:sorry I'm a bit out of it today and was thinking of javascript In the books folder, there is a urls file. It should be functioning, I've no idea why it's not. This is the code I was instructed to type.
|
# ? Mar 31, 2022 15:45 |
|
Mycroft Holmes posted:In the books folder, there is a urls file. it should be functioning, I've no idea why it's not. This is the code I was instructed to type. goddamnit, she made me make two urls files. I need to rejigger where stuff is.
|
# ? Mar 31, 2022 15:47 |