nullfunction
Jan 24, 2005

Nap Ghost
Sure, that's a very common use case for classmethods, though without seeing the rest of your code, I don't see a reason you need two separate classes for that example.

Python code:
class PGT:
    def __init__(self, **kwargs):
        ...  # your init stuff

    @classmethod
    def from_csv(cls, path):
        ...  # CSV stuff here

    @classmethod
    def from_xml(cls, path):
        ...  # XML stuff here
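To make it concrete, here's a minimal runnable version of the same pattern with a stdlib-csv from_csv filled in (the field handling is invented for illustration, not from your code):

```python
import csv

class PGT:
    def __init__(self, **kwargs):
        # Keep whatever fields the constructor was given
        self.fields = kwargs

    @classmethod
    def from_csv(cls, path):
        # Hypothetical: treat the first CSV row as the record of interest
        with open(path, newline="") as f:
            row = next(csv.DictReader(f))
        return cls(**row)
```

PGT.from_csv("data.csv") and PGT(name="x") both funnel through the same __init__, which is the whole point of classmethod constructors.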
APIs with questionable choices are everywhere. It could be worse: returning a 200 OK and a stack trace in the body is definitely something I've run into, all so the person maintaining the API can say "Oh, that API never errors, must be your code :smuggo:"


Deffon
Mar 28, 2010

D34THROW posted:


It blows my mind that the API goes by name and not number. Is it because of bullshit like Galarian Whateverthefuck and Kalosian Hoosiewhatsit?

It apparently supports both, according to the home page.

https://pokeapi.co/

ExcessBLarg!
Sep 1, 2001
I get this is a fun pet project, but it seems weird to rely on an API for Pokémon data. How much of it can there really be?

It looks like there are API dumps, but it's in thousands of directories of JSON files, as opposed to like a SQL dump or something else infinitely more useful.

punk rebel ecks
Dec 11, 2010

A shitty post? This calls for a dance of deduction.
Well, after adding a very simple cache of the API and editing the code to pull from the dict, things do run notably faster.

I think I'll try my hand at creating a light loading thread and then close the project out.

Here is my full code as of now: https://paste.pythondiscord.com/elixanitih

nullfunction posted:

I've never worked with Qt so I can't speak to the offload of work in QThreads or whatever, hopefully this gets the caching idea across, and may get you thinking about how you can modify your program's structure to make your life a little easier in the future:

This is indeed a template I will refer to in the future. Seems useful and organized.

There is one instance in the code where I need to use the API again with a different URL than the usual one:

code:
    #Summary, ID, Genus, Evolves From
    def misc_info(self):            
        
        self.national_number = Data.pokemon_dict["id"]
        Data.run_api(self,"pokemon-species", self.national_number)
        try:
            self.evolves_from = self.data["evolves_from_species"]["name"].title()
            self.evolves_from_link = self.data["evolves_from_species"]["url"]
        except TypeError:
            self.evolves_from = "Nothing"
            self.evolves_from_link = None
            
        self.genus = []
        self.summary = []

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:

ExcessBLarg! posted:

I get this is a fun pet project, but it seems weird to rely on an API for Pokémon data. How much of it can there really be?

It looks like there are API dumps, but it's in thousands of directories of JSON files, as opposed to like a SQL dump or something else infinitely more useful.

A) Pull the data as needed from the API, so local data is always as up-to-date as the API's.
B) Package the data with your app and be forced to update every time the Pokedex changes.

I suppose if you really wanted to get technical, you could have a persistent SQLite DB that consists of a single table populated by pulling data from the API and checking a persistently stored version number against the API version or something like that?
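That single-table idea can be sketched with nothing but the stdlib; the table layout and version key here are invented for illustration, not PokeAPI's actual scheme:

```python
import json
import sqlite3

def get_cached(conn, key, fetch, api_version):
    """Return the cached JSON for key, calling fetch() only when the
    stored copy is missing or was saved under a different API version."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS cache"
        " (key TEXT PRIMARY KEY, version TEXT, body TEXT)"
    )
    row = conn.execute(
        "SELECT body FROM cache WHERE key = ? AND version = ?",
        (key, api_version),
    ).fetchone()
    if row:
        return json.loads(row[0])
    data = fetch(key)  # e.g. a requests.get(...) against the API
    conn.execute(
        "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
        (key, api_version, json.dumps(data)),
    )
    return data
```

A version bump invalidates the stored row, so the next lookup refetches.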


If you're not being sarcastic, each Pokemon - of which the current Pokedex lists 898 - has a name, a height, a weight, a type, a description, a list of moves, a list of resistances, a list of weaknesses, what it evolves into, what it evolves from, what it can evolve into in some cases, et al.

ExcessBLarg!
Sep 1, 2001
Yeah, I mean, APIs make sense when pulling data from Twitter or something as that's a realtime service. The Pokédex updates, what, once a year? There's no way a SQL database is larger than 10 MB. It would make a lot more sense to store that locally with an auto update service for the entire DB.

When building a professional application, the biggest concern with using an external API is availability, and usually the risk of using one only makes sense if the data is highly real-time sensitive or the size of the data is unwieldy.

Not that availability is a huge concern here, but the discussion has already turned towards caching the data locally and dealing with response delays.

The PokéAPI folks argue the benefit to using their API is that websites can be all kept up to date quickly after new games are released, which is true. But what happens to all these fan sites when PokéAPI no longer pays their hosting bill and goes offline?

QuarkJets
Sep 8, 2008

ExcessBLarg! posted:

Yeah, I mean, APIs make sense when pulling data from Twitter or something as that's a realtime service. The Pokédex updates, what, once a year? There's no way a SQL database is larger than 10 MB. It would make a lot more sense to store that locally with an auto update service for the entire DB.

When building a professional application, the biggest concern with using an external API is availability, and usually the risk of using one only makes sense if the data is highly real-time sensitive or the size of the data is unwieldy.

Not that availability is a huge concern here, but the discussion has already turned towards caching the data locally and dealing with response delays.

The PokéAPI folks argue the benefit to using their API is that websites can be all kept up to date quickly after new games are released, which is true. But what happens to all these fan sites when PokéAPI no longer pays their hosting bill and goes offline?

Yeah you really get the best of both worlds if you just store that data locally as a table and periodically update it with API calls - at this scale local caching of the entire dataset is very likely the right move. Surely the PokeAPI people would also prefer that, since that's fewer hits to their servers

ExcessBLarg!
Sep 1, 2001
I don't know that they do. I think they prefer to have the bragging rights of "powering" a bunch of fan sites. Their data model doesn't really make sense from an efficiency standpoint.

Edit: Yes, "serving over 250,000,000 API calls each month!"

boofhead
Feb 18, 2021

In their fair use section they mention:

quote:

Locally cache resources whenever you request them.

Which I would interpret to mean, they'd prefer requests to the API only when getting it for the first time or checking for updates, rather than being designed as a free server. they say it's an educational service so i think using it for production-level anything is probably not intended

If your estimated usage is very light then they won't even notice it but if there's the potential for a lot of calls (what if the alpha goes viral and 1 billion people start playing it! or, far more likely, there's some bug or other that just silently spams out infinite api calls and gets you banned) then it would be better if it were pointing at a cached copy of the data instead

generally i don't trust my code to point at external servers during development, even if i 100% need to be getting data externally i'll usually just set it up first to check the connection and initial data, then comment it out, mock or use a local/cached version of the data while working on everything, including error handling and try/retry/backoff limits and so forth, then re-connect it to the external api towards the end to test
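A minimal version of that check-it-once-then-mock workflow (the paths and toggle here are made up; the point is a single switch between snapshot and live):

```python
import json
from pathlib import Path

def fetch_pokemon(name, snapshot_dir, live_fetch=None, use_live=False):
    """During development, serve data from a local snapshot; flip
    use_live=True near the end to go back through the real API call."""
    if use_live and live_fetch is not None:
        data = live_fetch(name)
        # Refresh the snapshot so the next offline run matches
        (Path(snapshot_dir) / f"{name}.json").write_text(json.dumps(data))
        return data
    return json.loads((Path(snapshot_dir) / f"{name}.json").read_text())
```

Everything downstream (error handling, retries, backoff) can then be exercised against the snapshot without ever touching the external server.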

e: although thinking about it, i normally work with files that are a lot bigger than what a single pokemon definition would need. might be overkill

boofhead fucked around with this message at 01:03 on Mar 26, 2022

ExcessBLarg!
Sep 1, 2001
I don't know why I'm looking at this. Anyways it looks like searchable Pokédex projects go back a decade or more. One of the more popular ones has a database-dump-as-csv on GitHub, in a highly (IMO unnecessarily) normalized form (example). This was forked, converted into even more redundant JSON (example), which is now hosted as an API that encourages you to make a separate request for each berry-firmness or whatever.

All of this should be a small handful of database tables, or one large JSON/YAML/whatever file. YAML even supports references.

It's fine as a hobbyist engineering project just to try out these technologies, but ultimately I'd say the complexity is difficult to defend given the scope of the underlying data.

ExcessBLarg! fucked around with this message at 01:09 on Mar 26, 2022

punk rebel ecks
Dec 11, 2010

A shitty post? This calls for a dance of deduction.

ExcessBLarg! posted:

Yeah, I mean, APIs make sense when pulling data from Twitter or something as that's a realtime service. The Pokédex updates, what, once a year? There's no way a SQL database is larger than 10 MB. It would make a lot more sense to store that locally with an auto update service for the entire DB.

When building a professional application, the biggest concern with using an external API is availability, and usually the risk of using one only makes sense if the data is highly real-time sensitive or the size of the data is unwieldy.

Not that availability is a huge concern here, but the discussion has already turned towards caching the data locally and dealing with response delays.

The PokéAPI folks argue the benefit to using their API is that websites can be all kept up to date quickly after new games are released, which is true. But what happens to all these fan sites when PokéAPI no longer pays their hosting bill and goes offline?

It's a rolling open source api that users update. It isn't even complete and is still in beta.

ExcessBLarg!
Sep 1, 2001
That may be true but it doesn't change the fact that it's overengineered, in a bad way.

Look, when your JSON is 80% reference URLs by volume you're doing it wrong.

nullfunction
Jan 24, 2005

Nap Ghost
I don't think the fact that it's community-run or beta really answers the "what happens if the API is no longer reachable?" question; if anything, it underscores it. It's a question you should be asking yourself each time you interface with something outside your immediate control, even as a hobbyist. Answering those questions will give you natural boundaries in the code you write, and hints on where to break things apart into more manageable chunks. Code that works and does what you intend it to do is an achievement at any level, and if you have no ambitions past hobbyist, that's fair game too.

Rather than shelving it, try adding a local file cache! Even if you're just saving the JSON you got from the webserver, it means you can still use it when your internet connection is down, and a refactor would do the code you posted some good, especially if you ever want to change anything about it in the future.
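The save-the-raw-JSON version of that cache is only a few lines; the cache directory and the fetch callback here are placeholders, not anything from the posted code:

```python
import json
from pathlib import Path

def cached_get(name, fetch, cache_dir="api_cache"):
    """Return the JSON for name, hitting fetch() (the real API call)
    only on a cache miss, and saving the response for next time."""
    cache = Path(cache_dir)
    cache.mkdir(exist_ok=True)
    path = cache / f"{name}.json"
    if path.exists():
        return json.loads(path.read_text())
    data = fetch(name)
    path.write_text(json.dumps(data))
    return data
```

Once the file exists, the program works with the internet down, and wiping the directory is the "refresh everything" button.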

ExcessBLarg!
Sep 1, 2001
The nice thing about HTTP is you can also transparently cache the data if you were to set up a Squid cache on your host and proxy the requests through it, which may be a whole different project worth trying.

Except, it turns out if you make HTTP requests to pokeapi.co it force-upgrades you to HTTPS via a 301. That seems a bit much, given the nature of the data.

Connection reuse is another fun one, so you don't have to burn so many TLS session negotiations.

Macichne Leainig
Jul 26, 2012

by VG
Well, you wouldn't want any bad actors intercepting your data, would you? Pokemon color is highly sensitive data and you wouldn't want any prying eyes to see that Pikachu is yellow.

Wait! poo poo! I'm already leaking data! :ohdear:

QuarkJets
Sep 8, 2008

I use SSL to encrypt the video stream to my monitor, but my monitor just displays the encrypted results; I have trained my eyes to perform real-time decryption

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:
PGT order processor is functional :woop: Now to root out weird rear end edge cases that might occur in production and compensate for them.

Simply fantastic passing an unpacked list-comprehension list to an *args function. I should really look into multiple dispatch so that I can have a version of combine_and_add_dicts that takes a list and a version that takes *args.

:allears:
Python code:
self.materials = combine_and_add_dicts(
    *[li.materials for li in self.line_items.values()]
)

from collections import defaultdict
from itertools import chain

def combine_and_add_dicts(*dicts) -> dict:
    """
    Combines an arbitrary number of dicts, summing values where keys
    match and adding keys where they do not exist.

    :param dicts:
        The `dict` objects to combine, passed positionally.
    :return:
        A combined dict.
    :rtype: dict
    """
    final_dict = defaultdict(int)
    for key, val in chain.from_iterable(d.items() for d in dicts):
        final_dict[key] += val
    return dict(final_dict)

ExcessBLarg!
Sep 1, 2001
I mentioned it in the other thread but this should also work if you want a version that works with generators without having to unpack/repack them:
Python code:
from collections import defaultdict
from itertools import chain

def combine_and_add_dicts(dicts) -> dict:
    final_dict = defaultdict(int)
    for key, val in chain.from_iterable(map(dict.items, dicts)):
        final_dict[key] += val
    return dict(final_dict)

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:

ExcessBLarg! posted:

I mentioned it in the other thread but this should also work if you want a version that works with generators without having to unpack/repack them:
Python code:
def combine_and_add_dicts(dicts) -> dict:
    final_dict = defaultdict(int)
    for key, val in chain.from_iterable(map(dict.items, dicts)):
        final_dict[key] += val
    return dict(final_dict)

This probably makes more sense; occasionally I'll be passing a generator anyway.

If I wanted to use functools and singledispatch to create a version that works with an arbitrary number of args, and a version that works with a generator expression, how would I go about that? Or rather, what is the "type" of a generator expression so that I can create @combine_and_add_dicts.register(generator), so to speak? types.GeneratorType? EDIT: That's exactly what it was, works flawlessly, thank you!

Another question: I notice a lot of things use Sphinx-style docstrings, but I was curious what else is preferred. Sphinx-style docstrings are made to be parsed by Sphinx and aren't super clear in IntelliSense, where something like the numpy docstrings are more user-legible. Something about the Google docstrings doesn't sit right with me.

EDIT: I sorta like the numpy standard and Sphinx supports it. I at least want to have some documentation in case I'm no longer at the company and they won't pay me to maintain it. I unfortunately care about these types of pet issues that have bothered me for a long time. Refactoring my documentation is gonna take at least a day or two. How fun :shepicide:
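For anyone following along, the register-on-GeneratorType version described above looks roughly like this (the merge bodies are trimmed to keep the focus on the dispatch mechanics):

```python
import types
from functools import singledispatch

def _merge(dicts):
    # Shared merge: sum values on key collisions
    out = {}
    for d in dicts:
        for k, v in d.items():
            out[k] = out.get(k, 0) + v
    return out

@singledispatch
def combine_and_add_dicts(*dicts):
    # Default case: the dicts arrive as positional arguments
    return _merge(dicts)

@combine_and_add_dicts.register(types.GeneratorType)
def _(gen):
    # Registered overload: a single generator expression argument
    return _merge(gen)
```

Note this dispatches on the exact generator type; registering collections.abc.Iterable instead would catch lists and generators alike in one overload.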

D34THROW fucked around with this message at 16:04 on Mar 30, 2022

ExcessBLarg!
Sep 1, 2001

D34THROW posted:

If I wanted to use functools and singledispatch to create a version that works with an arbitrary number of args, and a version that works with a generator expression, how would I go about that? Or rather, what is the "type" of a generator expression so that I can create @combine_and_add_dicts.register(generator), so to speak? types.GeneratorType? EDIT: That's exactly what it was, works flawlessly, thank you!
I personally wouldn't try to match types for this. For the single-argument version you want to match any kind of iterable, not just a list OR generator, or whatever.

To be honest if I were writing it I'd probably do something like:
Python code:
def combine_and_add_dicts(*dicts) -> dict:
    if len(dicts) == 1:
        dicts = dicts[0]
    final_dict = {}
    for key, val in chain.from_iterable(map(dict.items, dicts)):
        if key in final_dict:
            final_dict[key] += val
        else:
            final_dict[key] = val
    return final_dict
Which has the bonus of working with any summable value type, not just integers.

Hughmoris
Apr 21, 2007
Let's go to the abyss!
After a long time away, I'm back to poking at Python a bit to help the job prospects.

Watching a presentation on "High Performance Python", I saw an interesting bit about identifying memory usage in pandas. The memory usage described in df.info() is an approximation, and you need df.info(memory_usage="deep") to get a more accurate picture.

It may be old hat to this crowd but definitely something useful for me to tuck away.

Screenshot of the difference after reading in a 62MB csv file. In this case, df.info() was off by an order of magnitude.



https://www.youtube.com/watch?v=xT9SL35ilfM

QuarkJets
Sep 8, 2008

Hughmoris posted:

After a long time away, I'm back to poking at Python a bit to help the job prospects.

Watching a presentation on "High Performance Python", I saw an interesting bit about identifying memory usage in pandas. The memory usage described in df.info() is an approximation, and you need df.info(memory_usage="deep") to get a more accurate picture.

It may be old hat to this crowd but definitely something useful for me to tuck away.

Screenshot of the difference after reading in a 62MB csv file. In this case, df.info() was off by an order of magnitude.



https://www.youtube.com/watch?v=xT9SL35ilfM

This happens when you deal with datatypes that are 'object', e.g. "who the gently caress knows how big each of these things is". So a cursory inspection just multiplies an arbitrary object size by the number of rows but misses all of the additional data that may be stuffed into the objects as nested dictionaries or whatever. That's where the "deep" part comes in.

Personally I don't think that "pandas" and "performance" belong in the same sentence
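The shallow-vs-deep distinction isn't pandas-specific; plain sys.getsizeof does the same thing, counting a container's pointer slots but not the objects they point to:

```python
import sys

titles = ["Pride and Prejudice", "To Kill a Mockingbird" * 100]

shallow = sys.getsizeof(titles)  # the list object itself: just pointers
deep = shallow + sum(sys.getsizeof(t) for t in titles)  # plus the strings
```

df.info() without memory_usage="deep" is effectively the shallow number for object columns, which is why it can be off by an order of magnitude.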

QuarkJets fucked around with this message at 01:16 on Mar 31, 2022

Hughmoris
Apr 21, 2007
Let's go to the abyss!

QuarkJets posted:


Personally I don't think that "pandas" and "performance" belong in the same sentence

You're likely right but I'll hit performance bottlenecks due to lovely code long before I hit pandas' ceiling. My code will never be fast but hopefully it'll be faster.

Hughmoris fucked around with this message at 01:45 on Mar 31, 2022

D34THROW
Jan 29, 2012

RETAIL RETAIL LISTEN TO ME BITCH ABOUT RETAIL
:rant:
I don't think I will ever shed that feeling of "this lovely loving spaghetti works? :psyduck:" when I successfully implement a new feature and eliminate all the typos. Up to calculating window/door material, storm panels, poly roofs, pan roofs, and glass walls. Next step is turning glass walls into a glass room but that's just totaling.

Mycroft Holmes
Mar 26, 2010

by Azathoth
I'm running into an error on some homework. I've got a virtual environment running, I've installed Django, but when I try to upload a json file to my database on the server, I get this error.

code:
Traceback (most recent call last):
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\serializers\json.py", line 70, in Deserializer
    yield from PythonDeserializer(objects, **options)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\serializers\python.py", line 125, in Deserializer
    for (field_name, field_value) in d["fields"].items():
KeyError: 'fields'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    main()
  File "manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\__init__.py", line 446, in execute_from_command_line
    utility.execute()
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\__init__.py", line 440, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\base.py", line 414, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\base.py", line 460, in execute
    output = self.handle(*args, **options)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\commands\loaddata.py", line 102, in handle
    self.loaddata(fixture_labels)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\commands\loaddata.py", line 163, in loaddata
    self.load_label(fixture_label)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\commands\loaddata.py", line 251, in load_label
    for obj in objects:
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\serializers\json.py", line 74, in Deserializer
    raise DeserializationError() from exc
django.core.serializers.base.DeserializationError: Problem installing fixture 'C:\Users\nickt\CSC221\lab10\project-books\django_books\books\fixtures\book_import_data.json':
No idea what it means.

boofhead
Feb 18, 2021

Phone posting so I can't tell you the whole context, but where it breaks is dict["items"] then .items(), is that a legitimate field with another dict in it? Probably you just wanted dict.items()

Mycroft Holmes
Mar 26, 2010

by Azathoth

boofhead posted:

Phone posting so I can't tell you the whole context but where it breaks is dict["items"], is that a legitimate field with a list or dict in it? Probably you just wanted dict.items()

The json file has no field "items". I've emailed my teacher, but that might take a while. I followed her instructions exactly, I've no idea why it failed. I hate using the command line.

edit: all that's in the json file is
code:
[{"rank": 1, "title": "Pride and Prejudice", "author": "Jane Austen", "year": 1813, "model": "books.Book", "pk": 1}, {"rank": 2, "title": "To Kill a Mockingbird", "author": "Harper Lee", "year": 1960, "model": "books.Book", "pk": 2}, {"rank": 3, "title": "The Great Gatsby", "author": "F. Scott Fitzgerald", "year": 1925, "model": "books.Book", "pk": 3}, {"rank": 4, "title": "One Hundred Years of Solitude", "author": "Gabriel Garcia Marquez", "year": 1967, "model": "books.Book", "pk": 4}, {"rank": 5, "title": "In Cold Blood", "author": "Truman Capote", "year": 1965, "model": "books.Book", "pk": 5}, {"rank": 6, "title": "Wide Sargasso Sea", "author": "Jean Rhys", "year": 1966, "model": "books.Book", "pk": 6}, {"rank": 7, "title": "Brave New World", "author": "Aldous Huxley", "year": 1932, "model": "books.Book", "pk": 7}, {"rank": 8, "title": "I Capture The Castle", "author": "Dodie Smith", "year": 1948, "model": "books.Book", "pk": 8}, {"rank": 9, "title": "Jane Eyre", "author": "Charlotte Bronte", "year": 1847, "model": "books.Book", "pk": 9}, {"rank": 10, "title": "Crime and Punishment", "author": "Fyodor Dostoevsky", "year": 1866, "model": "books.Book", "pk": 10}]

Data Graham
Dec 28, 2009

📈📊🍪😋



The Django "loaddata" script expects a fixture JSON file to be in a certain format. How did you create your JSON file? What format are you following?

The error is because a fixture JSON for Django needs to have a "fields" key in each item, i.e.

JSON code:
[
{
    "model": "users.musicgenre",
    "pk": 1,
    "fields": {
        "name": "50's"
    }
},
...
]

Mycroft Holmes
Mar 26, 2010

by Azathoth

Data Graham posted:

The Django "loaddata" script expects a fixture JSON file to be in a certain format. How did you create your JSON file? What format are you following?

The error is because a fixture JSON for Django needs to have a "fields" key in each item, i.e.

JSON code:
[
{
    "model": "users.musicgenre",
    "pk": 1,
    "fields": {
        "name": "50's"
    }
},
...
]

The json file was made in another section of the assignment and fits the set criteria. Reformatting it doesn't seem to have fixed the problem.

code:
# Create a new book_import_data.json file formatted as a Django fixture JSON file

# INSERT CODE FOR STEP X
import json
with open('book_import_data.json', 'w') as outfile:
    json.dump(books_fixture, outfile)
outfile.close()

Mycroft Holmes
Mar 26, 2010

by Azathoth
Ah, i see what i did wrong. You're right, it doesn't have the fields attribute. How do i add that?

Data Graham
Dec 28, 2009

📈📊🍪😋



I mean, that code there just shows you writing out the file, it doesn't show what "books_fixture" is or whether it's in the correct format.

If you use Django's "dumpdata" script it will write the fixture in the correct format. And as for reformatting it not fixing the problem, really? Is the error the same?


e: If you're creating "books_fixture" manually, you need to add all the fields inside a "fields" sub-dictionary inside each item's main dictionary, not all at the same level.

Mycroft Holmes
Mar 26, 2010

by Azathoth
ugh, none of the documentation I was given told me what a fixture file looks like. Starting to hate this teacher. How do I turn a CSV into a fixture JSON?

code:
# Create a books_fixture list with all the classics_data re-formatted correctly for a fixture file
# REMEMBER: You MUST add 'model' : 'books.Book' & 'pk' : <int> to each book entry 

# INSERT CODE FOR STEPS 4-5
books_fixture = []
pk = 1
for book in classics_data:
    book['model'] = 'books.Book'
    book['pk'] = pk
    books_fixture.append(book)
    pk = pk + 1
print(books_fixture)
This is what I have from a previous step. How do I modify this code to add a fields subcategory to each entry? God, I have no idea what I'm doing.

Data Graham
Dec 28, 2009

📈📊🍪😋



You can add the fields dict like this:

Python code:
book["fields"] = {
    "rank": 1,
    "title": "Pride and Prejudice",
    ...
}
Or you can do each item all in one go like:

Python code:
book = {
    "pk": 1,
    "model": "books.Book",
    "fields": {
        "rank": 1,
        "title": "Pride and Prejudice",
        ...
    },
}
There's a lot of leeway in how you can make a dict. You can even do it with dict(pk=1, model='books.Book', ...) if you like the look of it better.

Though I would also caution against modifying "book" directly, because you're using it as your iterator. I would make a new object for each book to add to the fixture, and then set the fields and other keys on that, like book_obj = {} -- otherwise you're going to be trying to add "book" to the fixture but with extra keys added to it, and "book" isn't in the right format. Best to start over fresh with each book object and only include the keys you explicitly care about.

If the teacher didn't give you any clue as to what the format should be, yeah that's butt. But if you already have your Django app set up, you can use the Django admin to look at the Books model and add some data through the GUI, and then do a "dumpdata" CLI command to output the fixture in the correct format, for you to refer to.

code:
./manage.py dumpdata --format json --indent 4
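Putting all of that together, the manual conversion might look like this sketch (field names taken from the JSON posted earlier; classics_data is assumed to be the flat list of book dicts):

```python
def to_fixture(classics_data):
    # Build a fresh dict per book rather than mutating the loop variable
    fixture = []
    for pk, book in enumerate(classics_data, start=1):
        fixture.append({
            "model": "books.Book",
            "pk": pk,
            "fields": dict(book),  # rank/title/author/year nest under "fields"
        })
    return fixture
```

Each output item has only the three top-level keys loaddata expects, with everything else tucked under "fields".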

Data Graham fucked around with this message at 14:22 on Mar 31, 2022

Mycroft Holmes
Mar 26, 2010

by Azathoth
no, part of part A of the assignment is to code the conversion manually. I'm messing with it now.

Mycroft Holmes
Mar 26, 2010

by Azathoth
ok, got that working. Now, the last part is done according to her specifications, I've copied the code exactly. But I'm getting an error.

code:
Exception in thread django-main-thread:
Traceback (most recent call last):
  File "C:\Program Files\Python38\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\Program Files\Python38\lib\threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
    fn(*args, **kwargs)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\commands\runserver.py", line 134, in inner_run
    self.check(display_num_errors=True)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\management\base.py", line 487, in check
    all_issues = checks.run_checks(
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\checks\registry.py", line 88, in run_checks
    new_errors = check(app_configs=app_configs, databases=databases)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\checks\urls.py", line 14, in check_url_config
    return check_resolver(resolver)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\core\checks\urls.py", line 24, in check_resolver
    return check_method()
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\urls\resolvers.py", line 480, in check
    for pattern in self.url_patterns:
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\utils\functional.py", line 49, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\urls\resolvers.py", line 696, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\utils\functional.py", line 49, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "C:\Users\nickt\CSC221\lab10\project-books\books_env\lib\site-packages\django\urls\resolvers.py", line 689, in urlconf_module
    return import_module(self.urlconf_name)
  File "C:\Program Files\Python38\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\nickt\CSC221\lab10\project-books\django_books\django_books\urls.py", line 19, in <module>
    from books import urls as books_urls
ImportError: cannot import name 'urls' from 'books' (C:\Users\nickt\CSC221\lab10\project-books\django_books\books\__init__.py)
Don't know what this means.

boofhead
Feb 18, 2021

code:
File "C:\Users\nickt\CSC221\lab10\project-books\django_books\django_books\urls.py", line 19, in <module>
    from books import urls as books_urls
ImportError: cannot import name 'urls' from 'books' (C:\Users\nickt\CSC221\lab10\project-books\django_books\books\__init__.py)
take a look at the contents of these two files (the last 2 lines in your error log)

the second file is what you're trying to import stuff from, and the first file is where you're trying to import (and use) it

and the command "from books import urls" is what your code is trying to do but failing

boofhead fucked around with this message at 15:41 on Mar 31, 2022

Mycroft Holmes
Mar 26, 2010

by Azathoth

boofhead posted:

code:
File "[b]C:\Users\nickt\CSC221\lab10\project-books\django_books\django_books\urls.py[/b]", line 19, in <module>
    [b]from books import urls as books_urls[/b]
ImportError: cannot import name 'urls' from 'books' ([b]C:\Users\nickt\CSC221\lab10\project-books\django_books\books\__init__.py[/b])
take a look at the contents of these two files

the second file is what you're trying to import stuff from, and the first file is where you're trying to import (and use) it

ive also highlighted the command because that's what your code is trying to do but failing

__init__ is empty. I wasn't told to modify it, only urls.

boofhead
Feb 18, 2021

Mycroft Holmes posted:

__init__ is empty. I wasn't told to modify it, only urls.

sorry I'm a bit out of it today and was thinking of javascript index files for importing

look inside the /books/ directory (it'll contain a package) and see what it's doing. The empty __init__.py is just a placeholder file that tells python to treat the directory as a package, so you can import it and do stuff with it

so either your import line is pointing at the wrong thing (i.e. it should be: from not_books import urls) or it's trying to import the wrong thing (i.e.: from books import not_urls)

Mycroft Holmes
Mar 26, 2010

by Azathoth

boofhead posted:

sorry I'm a bit out of it today and was thinking of javascript

look inside the /books/ directory (it'll contain a package) and see what it's doing. The empty __init__.py is just a placeholder file that tells python to treat the directory as a package, so you can import it and do stuff with it

so either your import line is pointing at the wrong thing (i.e. it should be: from not_books import urls) or it's trying to import the wrong thing (i.e.: from books import not_urls)

In the books folder, there is a urls file. it should be functioning, I've no idea why it's not. This is the code I was instructed to type.


Mycroft Holmes
Mar 26, 2010

by Azathoth

Mycroft Holmes posted:

In the books folder, there is a urls file. it should be functioning, I've no idea why it's not. This is the code I was instructed to type.

goddamnit, she made me make two urls files. I need to rejigger where stuff is.
