SurgicalOntologist
Jun 17, 2004

Rocko Bonaparte posted:

I'm wondering how other people might be dealing with a situation of scratchwork classes and rigid type checking. I have some classes that are getting progressively fed information so their fields start out optional. When they are in-use, these will be filled.

Type checking hates this if I don't null check everything once the instance is in-use. One of the situations was a good candidate to use a builder because there was a lot of logic associated with generating defaults. However, another one is just some scratchwork.

So right now I'm trying a dataclass that has the fields optional while it's being internally messed around, but it basically returns a version of itself with fields not optional and not null. If I add a property, I will have to remember to put it in both places. I'm not sure about doing reflection stuff to auto-populate because I have to specify the typing information. For as much as I'm using it, that's fine, but it smells a little bit. Has anybody else worked through a problem like this before?

These containers are areas where people love to put the wrong thing in the wrong spot so I specifically want type checking to get involved for them.

I try to design classes such that there are no optional fields and a class only gets instantiated when it is truly "ready". The idea of strong typing is that it shouldn't even be possible for an invariant-breaking instance to exist. Typically that means custom constructors (classmethods that return an instance) for the various construction paths. Maybe that's what you meant by a builder, I'm not sure.

I guess your dataclass thing is kind of an extreme version of that, but if you have such a variety of paths that an alternate constructor is not enough, and you need some of the same properties before the instance is "valid" that you also use on valid instances, that's a big sign to me that you haven't picked out the right classes in the first place. For example, maybe you can group some of those fields into another class, which gets instantiated, has properties, and then becomes an attribute of an instance of the higher-level class. What you're dealing with tends to happen to me when I find myself putting everything related into one class. If I had to guess, your problem is having too few classes.
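To make that concrete, here's a minimal sketch of the classmethod-constructor idea (the class and field names are made up for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Job:  # hypothetical example class, not from the original post
    name: str
    threads: int

    @classmethod
    def from_config(cls, config: dict) -> "Job":
        # All defaulting/validation logic lives here, so an instance
        # is born valid and no field ever needs to be Optional.
        return cls(name=config.get("name", "job"),
                   threads=int(config.get("threads", 1)))

job = Job.from_config({"threads": "4"})
```

The type checker then never has to reason about a half-built `Job`, because one can't exist.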


mystes
May 31, 2006

It is pretty hard to make the builder pattern completely type safe in current mainstream languages.

mystes fucked around with this message at 01:00 on Jan 29, 2021

SurgicalOntologist
Jun 17, 2004

Haha, you didn't have to edit out calling me out. I'll cop to not knowing the builder pattern. I guess that's why I'm in the Python thread :eng99:

Presto
Nov 22, 2002

Keep calm and Harry on.

SurgicalOntologist posted:

Haha, you didn't have to edit out calling me out. I'll cop to not knowing the builder pattern. I guess that's why I'm in the Python thread :eng99:
Eh, don't worry. I've been programming professionally for almost 25 years now, and I don't know what any of these patterns the kids are talking about these days are either. :corsair:

Pie in the Sky
Apr 16, 2009

whoops here we go again



mystes posted:

It's part of the python standard now but the normal python interpreter doesn't check it. You can use it with mypy or an ide for type checking.

https://docs.python.org/3/library/typing.html

Wow, I had no idea about this. Will definitely be looking deeper into it. Thank you!

Nohearum
Nov 2, 2013
I've got a script that pulls a bunch of data from a government API that has worked for months, but suddenly stopped working a few days ago due to SSL errors in the requests module. I've tried it on 3 different machines and I get the same behavior, but I can visit the website in firefox and the certificate seems to be valid etc. Any thoughts on how to resolve this?

I also tried downloading the latest cacert file from https://curl.se/ca/cacert.pem and pointing the session object to that and still got the same error.

code:
from requests import Session
s = Session()
s.get('https://www.wcc.nrcs.usda.gov')

requests.exceptions.SSLError: HTTPSConnectionPool(host='www.wcc.nrcs.usda.gov', port=443): 
Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError
(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)')))

Da Mott Man
Aug 3, 2012


Nohearum posted:

I've got a script that pulls a bunch of data from a government API that has worked for months, but suddenly stopped working a few days ago due to SSL errors in the requests module. I've tried it on 3 different machines and I get the same behavior, but I can visit the website in firefox and the certificate seems to be valid etc. Any thoughts on how to resolve this?

I also tried downloading the latest cacert file from https://curl.se/ca/cacert.pem and pointing the session object to that and still got the same error.

code:
from requests import Session
s = Session()
s.get('https://www.wcc.nrcs.usda.gov')

requests.exceptions.SSLError: HTTPSConnectionPool(host='www.wcc.nrcs.usda.gov', port=443): 
Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError
(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)')))

You need the whole cert chain in the pem not just the root ca.

Nohearum
Nov 2, 2013

Da Mott Man posted:

You need the whole cert chain in the pem not just the root ca.

I'm not sure what's going on here. I navigated to that website in Firefox, went to View Certificates, downloaded the full chain .pem file, and am still getting the same error.
code:
from requests import Session
s = Session()
s.verify = './fullchain.pem'
s.get('https://www.wcc.nrcs.usda.gov')

Da Mott Man
Aug 3, 2012


Nohearum posted:

I'm not sure whats going on here. Navigated to that website in firefox, went to View Certificates, downloaded the full chain .pem file and am still getting the same error.
code:
from requests import Session
s = Session()
s.verify = './fullchain.pem'
s.get('https://www.wcc.nrcs.usda.gov')

Strange, this code works for me on both Windows and WSL.

code:
from requests import Session

s = Session()

s.verify = './fullchain.pem'
res = s.get('https://www.wcc.nrcs.usda.gov')
print(res.content)
Maybe try wiping your virtualenv.

Nohearum
Nov 2, 2013

Da Mott Man posted:

Strange both on windows and WSL this code works for me.

code:
from requests import Session

s = Session()

s.verify = './fullchain.pem'
res = s.get('https://www.wcc.nrcs.usda.gov')
print(res.content)
Maybe try wiping your virtualenv.

Thanks, I finally ended up getting this working. I had to pull the full chain pem from both https://www.wcc.nrcs.usda.gov/ and the API endpoint https://www.wcc.nrcs.usda.gov/awdbWebService/services?WSDL and concatenate them. For some reason those two paths return completely different certs (one signed by DigiCert, the other by Entrust) and the SOAP API wouldn't work without having both of them in the pem file.

Butter Activities
May 4, 2018

I was gonna say that the DOD's VPN DNS server poo poo the bed this weekend, but I guess it was something else.

Lumpy
Apr 26, 2002

La! La! La! Laaaa!



College Slice
Can someone explain, like I am stupid and rusty with Python to boot, how modules and imports work in Python 3? I am having to do emergency work on something, and I am finding "No module named..." and "No such file or directory" to be far more vexing than the coding bit. I have lost the mental model for imports!

I have this structure right now:

code:
file_a.py
file_b.py
file_c.py
data_a.csv
data_b.csv
data_c.csv
Each of those files reads in some data from csv files, and outputs to some other csv files. There is a lot of common code in the python, but more importantly, I need to make the files get their data from elsewhere, and output data in a different way as well. My plan is to write some code in a couple subdirectories like so:

code:
file_a.py
file_b.py
file_c.py
data_loaders/
  load_data.py
  local_load_data.py
  remote_load_data.py
data_output/
  output_data.py
  local_output.py
  remote_output.py
data_a.csv
data_b.csv
data_c.csv
Where load_data.py uses either local_load_data.py or remote_load_data.py, and all the top-level file_*.py always call the functions in data_loaders/load_data.py (and data_output/output_data.py).

However I can't figure out how to import things without lots of errors, or linting errors, or both. How do I properly import functions from files in the same directory? That seems like the worst of the issues. I think I used to know how to do this...

ArcticZombie
Sep 15, 2010
Back in Python 2 it would've worked the way you're likely trying right now, but Python 3 removed implicit relative imports. With a tree like:
code:
file_a.py
file_b.py
file_c.py
data_loaders/
  load_data.py
  local_load_data.py
  remote_load_data.py
data_output/
  output_data.py
  local_output.py
  remote_output.py
data_a.csv
data_b.csv
data_c.csv
you could do:
Python code:
# file_{a,b,c}.py
import data_loaders.load_data
Python code:
# data_loaders/load_data.py
from . import local_load_data
from . import remote_load_data
Alternatively, you could do this:
Python code:
# file_{a,b,c}.py
import data_loaders.load_data
Python code:
# data_loaders/load_data.py
import data_loaders.local_load_data
That last one may look crazy, but essentially it boils down to this: the directory of the file you're executing (file_{a,b,c}.py) is added to the search path for imports, so when data_loaders/load_data.py is being imported, data_loaders is in the search path. If you ran data_loaders/load_data.py directly, that import would fail, because data_loaders isn't in the search path. Using this method, you would be able to import stuff in data_output from within data_loaders, which you wouldn't be able to do with the first approach.
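If it helps, the search-path behavior can be reproduced end-to-end with a throwaway package (built in a temp directory here purely so the snippet is self-contained; in real life the directory on sys.path is the one holding the script you ran):

```python
import os
import sys
import tempfile

# Recreate a miniature version of the tree above
root = tempfile.mkdtemp()
pkg = os.path.join(root, "data_loaders")
os.makedirs(pkg)
with open(os.path.join(pkg, "local_load_data.py"), "w") as f:
    f.write("VALUE = 'local'\n")
with open(os.path.join(pkg, "load_data.py"), "w") as f:
    f.write("from . import local_load_data\n")

# Stand-in for "the directory of the script you ran is on the search path"
sys.path.insert(0, root)
import data_loaders.load_data

print(data_loaders.load_data.local_load_data.VALUE)  # → local
```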

accipter
Sep 12, 2003
I have the structure below. Basically, p and m are both used by an instance of calc to perform a calculation. All of those components (calc, and p and m, which are stored within calc) are then used by outputs to compute a set of metrics of the calculation. This works great for a single-threaded process, but I would like to move to dask and convert the classes into immutable calculators. Or maybe just copy the calculator and return a new instance with the new parameters (p and m).

Python code:
count = 20
outputs.reset()
for i, p in enumerate(pysra.variation.iter_varied_profiles(
    profile,
    count,
    var_velocity=var_velocity,
)):
    # Here we auto-discretize the profile for wave propagation purposes
    p = p.auto_discretize()
    for j, m in enumerate(motions):
        name = (f'p{i}', f'm{j}')
        calc(m, p, p.location('outcrop', index=-1))
        outputs(calc, name=name)
A few questions. Is there a design pattern for this? Any suggestions on how to break everything into steps and then re-assemble it?

As I write this, I feel like the answer is to return a result from calc that is passed to outputs to be processed and turned into more results, which are eventually collected.
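That last idea could look roughly like this (all names are hypothetical; the point is that each (p, m) pair becomes an independent, pure call that dask can schedule freely):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CalcResult:  # hypothetical immutable result container
    name: tuple
    peak: float

def run_one(name, motion, profile):
    # Pure function: inputs in, immutable result out, no shared state,
    # so calls can run in parallel and just be collected at the end.
    return CalcResult(name=name, peak=motion * profile)

results = [run_one((f"p{i}", f"m{j}"), float(i), float(j))
           for i in range(2) for j in range(2)]
```

The mutable `calc`/`outputs` objects disappear; `outputs` becomes a plain function over a list of `CalcResult`s.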

Tayter Swift
Nov 18, 2002

Pillbug
I have a 40 GB CSV, about 50 million records by 90-ish fields. I need to sort it by fields VIN and Date, and remove duplicated VINs by most recent Date. My machine has 64GB.

What's a sensible way to accomplish the task? After compacting categories in dask, saving it as parquet, and re-reading it into a pandas df it's about 21GB, but it still can't be easily manipulated in pandas, and I know these are not dask-friendly parallelizable operations.

9-Volt Assault
Jan 27, 2007

Better two tits in the hand than ten on the run.

Tayter Swift posted:

I have a 40 GB CSV, about 50 million records by 90-ish fields. I need to sort it by fields VIN and Date, and remove duplicated VINs by most recent Date. My machine has 64GB.

What's a sensible way to accomplish the task? After compacting categories in dask, saving it as parquet and re-reading it into a pandas df it's about 21GB, but it still cant be easily manipulated in pandas, and I know these are not dask-friendly parallelizable operations.

A database.

NinpoEspiritoSanto
Oct 22, 2013




Sqlite can probably handle that.
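For reference, the keep-most-recent-per-VIN step is a one-query job in the stdlib sqlite3 module (toy data below; the column names are assumptions, and for a 40 GB CSV you'd use an on-disk database and insert in chunks):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for the real dataset
conn.execute("CREATE TABLE records (vin TEXT, date TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?, ?)",
    [
        ("VIN1", "2021-01-01", "old"),
        ("VIN1", "2021-02-01", "new"),
        ("VIN2", "2021-01-15", "only"),
    ],
)
conn.execute("CREATE INDEX idx_vin_date ON records (vin, date)")

# SQLite's bare-column rule: with MAX(date), the non-aggregated columns
# come from the row holding the max, i.e. the most recent record per VIN
latest = conn.execute(
    "SELECT vin, MAX(date) AS date, payload FROM records GROUP BY vin ORDER BY vin"
).fetchall()
```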

Tayter Swift
Nov 18, 2002

Pillbug
Haha, fair enough. I've worked with SQL plenty but never worked with SQLite before, so maybe it's time to learn something new.

For now, I just powered through with the df.sort_values and df.drop_duplicates commands, because gently caress RAM. It only took about an hour total and my computer didn't melt, so better than I expected to be honest :shrug:

Hughmoris
Apr 21, 2007
Let's go to the abyss!
Anyone need an extra set of novice hands for an open source project? I can't think of any pet projects to create, and I'd like to take a stab at contributing to a meaningful project. It can be your project, or someone else's project that you're contributing to.

duck monster
Dec 15, 2004

Presto posted:

Eh, don't worry. I've been programming professionally for almost 25 years now, and I don't know what any of these patterns the kids are talking about these days are either. :corsair:

A lot of the big patterns popular in the Java and C++ worlds are tied very heavily to the strengths and weaknesses of the type systems and object models of those languages, and don't really make a lot of sense in dynamic languages like Python, Ruby, and JS. That's not to say people don't use them, but frankly a lot of it is ex-Java people exporting their madness into their new homes. Some patterns make plenty of sense, though, and I'd really like to see the canon rewritten for Python: some patterns make sense in Python, some don't, and there are some that wouldn't make sense in Java/C++ but would make sense in Python.

Dominoes
Sep 20, 2007

Hughmoris posted:

Anyone need an extra set of novice hands for an open source project? I can't think of any pet projects to create, and I'd like to take a stab at contributing to a meaningful project. It can be your project, or someone else's project that you're contributing to.
Python package and installation manager. I don't have time to maintain it now. Mostly works, but there are cases that cause it to fail, i.e. certain combos of OS, dependency, Py version, etc. I think the best goal is fixing edge cases so it works, or fails as gracefully as possible.

Dominoes fucked around with this message at 05:53 on Feb 21, 2021

Hughmoris
Apr 21, 2007
Let's go to the abyss!

Dominoes posted:

Python package and installation manager. I don't have time to maintain it now. Mostly works, but there are cases that cause it to fail. Ie certain combos of OS, dependency, Py version etc. I think the best goal is fixing edge cases so it works, or fails as gracefully as possible.

I'll take a look, thanks!

jaete
Jun 21, 2009


Nap Ghost

Tayter Swift posted:

I have a 40 GB CSV, about 50 million records by 90-ish fields. I need to sort it by fields VIN and Date, and remove duplicated VINs by most recent Date. My machine has 64GB.

What's a sensible way to accomplish the task? After compacting categories in dask, saving it as parquet and re-reading it into a pandas df it's about 21GB, but it still cant be easily manipulated in pandas, and I know these are not dask-friendly parallelizable operations.

You could try PySpark, especially if it's already in parquet format. While Spark is meant to run on multi-node clusters it might "scale" reasonably to one machine as well.

Bad Munki
Nov 4, 2008

We're all mad here.


Trying to figure out if I'm type hinting correctly on a thing. I have a function that can take a bunch of parameters that can be, for example, an int, a range, or a list of ints and/or ranges (non-homogeneous).

So the usage would be any of the following:
code:
thing(foo=1)
thing(foo=range(5, 10))
thing(foo=[1, 2, 5, range(7, 11)])
My def looks something like this:
code:
def thing(foo: Union[int, range, Iterable[Union[int, range]]] = None) -> dict:
    #blah blah blah
   return {}
Is that big ol' complex-looking Union[] with the nested Iterable correct? It seems clunky. PyCharm seems to almost get it right: it's happy with the above examples, and if I try any of the following, it complains:
code:
thing('a')
thing(['a'])
However, if I start my list with something valid, it doesn't complain about later invalid elements. This should NOT pass, but does:
code:
thing([1, 2, 'a'])
Is this just PyCharm not quite grokking what I'm doing, or should I be hinting differently?

Dominoes
Sep 20, 2007

I'm kind of drunk, but it looks like you're trying to overload functions in Python. Don't do that.

NinpoEspiritoSanto
Oct 22, 2013




Dominoes posted:

I'm kind of drunk, but it looks like you're trying to overload functions in Python. Don't do that.

I'm not drunk but can confirm. Either send sensible data in or have discrete functions for the different data types.

Bad Munki
Nov 4, 2008

We're all mad here.


I don't think I'm overloading anything? I've got a function that'll take a list of ints/ranges, and as a convenience, if your list is only one item long, you can just give it the item alone. Handling that is trivial within the function, step one is simply "if it's just a single int or range, make it into a list of that one thing and carry on." I'm sure it's 99% likely I hosed up the hint above, so that's what I'm trying to get right.

In this case, a real example would be that the user is searching for frames of data from a satellite, and they often have a list of frames they want, which may include, say, frames 100, 150, 200, and everything from 300-400. Another user just wants to search for a single frame, 100. Forcing them to provide a list of all 104 values for the sake of purity of data type seems silly, as does providing multi_frame_search() and single_frame_search() variants.

If it were SUPER offensive as-is and the cops are already on their way, I would consider forcing it to always be a list. But it's still gonna be a list of ints and/or ranges. That part's a requirement. Making the list-ness of the input optional just seems polite.

Bad Munki fucked around with this message at 23:11 on Feb 22, 2021
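Aside: the "wrap a bare value, then carry on" step reads roughly like this (the alias and function names are mine, not from the real wrapper):

```python
from typing import Iterable, List, Union

FrameSpec = Union[int, range, Iterable[Union[int, range]]]

def normalize_frames(foo: FrameSpec) -> List[int]:
    # Step one from the post: a lone int/range becomes a one-item list
    if isinstance(foo, (int, range)):
        foo = [foo]
    frames: List[int] = []
    for item in foo:
        if isinstance(item, range):
            frames.extend(item)  # flatten ranges into their members
        else:
            frames.append(item)
    return frames
```

So `normalize_frames(range(5, 8))` and `normalize_frames([5, 6, 7])` both give `[5, 6, 7]`.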

Foxfire_
Nov 8, 2010

Tayter Swift posted:

I have a 40 GB CSV, about 50 million records by 90-ish fields. I need to sort it by fields VIN and Date, and remove duplicated VINs by most recent Date. My machine has 64GB.

What's a sensible way to accomplish the task? After compacting categories in dask, saving it as parquet and re-reading it into a pandas df it's about 21GB, but it still cant be easily manipulated in pandas, and I know these are not dask-friendly parallelizable operations.

Turn it into a numpy record array, then do the sorting via numpy. If the de-duplication is not trivial to do via numpy interface, write it in numba so you can do a natural iterative thing instead of trying to hammer it into array operations with many temporary copies. If still too big for memory, np.memmap() is easy and won't be that much slower.

e: Type hint looks correct to me, except that it should be Optional[Union[int, range, Iterable[Union[int, range]]]] if you're allowing None like the default suggests. Not surprising that tooling doesn't understand complicated things

I read it as foo is one of:
- None
- int
- range
- Iterable where each entry is either int or range

Foxfire_ fucked around with this message at 23:10 on Feb 22, 2021
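A toy version of the structured-array route, with made-up vin/date columns (ISO date strings sort correctly as plain strings):

```python
import numpy as np

arr = np.array(
    [("VIN1", "2021-01-01"), ("VIN2", "2021-01-15"), ("VIN1", "2021-02-01")],
    dtype=[("vin", "U8"), ("date", "U10")],
)
arr.sort(order=["vin", "date"])  # in-place sort by (vin, date)

# After sorting, the most recent row per vin is the last of each run;
# keep a row only where the next row has a different vin
keep = np.ones(len(arr), dtype=bool)
keep[:-1] = arr["vin"][1:] != arr["vin"][:-1]
deduped = arr[keep]
```

For the full 40 GB case the same array could live on disk via np.memmap(), as suggested above.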

Bad Munki
Nov 4, 2008

We're all mad here.


Foxfire_ posted:

I read it as foo is one of:
- None
- int
- range
- Iterable where each entry is either int or range

Yeah, that's what I'm after. Except the None is actually a mistake in my example, it's not actually optional, but other than that, yeah.

OnceIWasAnOstrich
Jul 22, 2006

Bad Munki posted:

I don't think I'm overloading anything? I've got a function that'll take a list of ints/ranges, and as a convenience, if your list is only one item long, you can just give it the item alone. What's so weird about that? I'm sure it's 99% likely I hosed up the hint above, so that's what I'm trying to get right.

In this case, a real example would be that the user is searching for frames of data from a satellite, and they often have a list of frames they want, which may include, say, frames 100, 150, 200, and everything from 300-400. Another user just wants to search for a single frame, 100. Forcing them to provide a list of all 104 values for the sake of purity of data type seems silly, as does providing multi_frame_search() and single_frame_search() variants.

If it were SUPER offensive as-is and the cops are already on their way, I would consider forcing it to always be a list. But it's still gonna be a list of ints and/or ranges. That part's a requirement. Making the list-ness of the input optional just seems polite.

It is the heterogeneous list that is messing your type checking up as far as I can tell, not it being either an int or a list, and I think your type hints are about as good as it will get. I'm guessing (without checking) that PyCharm isn't handling covariant typing on your Iterable and is assuming an invariant type, checking the first item, and assuming the entire list is ints. What happens if your first list element is a string?

If you really care about that and wanted a cleaner function you could accept kwargs for lists with one for int-lists and one for range-lists since presumably you have code to separate the items out anyway and handle them differently in your function already.

edit: I'm having a hard time figuring out whether the PEP/mypy actually allow for covariant type lists. I think maybe the type system doesn't allow that for mutable types? I think maybe PyCharm sees list, the type system specifies lists can only be invariant, so it sees a List[int], converts to Iterable[int] and calls it a day?

OnceIWasAnOstrich fucked around with this message at 23:19 on Feb 22, 2021

Bad Munki
Nov 4, 2008

We're all mad here.


OnceIWasAnOstrich posted:

It is the heterogeneous list that is messing your type checking up as far as I can tell, not being either a int or a list and I think your type hints are about as good as it will get. I'm guessing (without checking) that PyCharm isn't handling covariant typing on your Iterable and is assuming invariant type and checking the first item and assuming the entire list is ints. What happens if your first list element is a string?
PyCharm correctly gripes in that case.

quote:

If you really care about that and wanted a cleaner function you could accept kwargs for lists with one for int-lists and one for range-lists since presumably you have code to separate the items out anyway and handle them differently in your function already.
Haha, actually, nope. This whole thing is a wrapper for a public REST API that has over 30 entirely optional parameters, and these lists getting passed in will simply get joined together as a long string that is handed directly to the parameter in question. So in GET querystring style, it'll look something like foo=100,150,200,300-400 and the REST API does what it does.

QuarkJets
Sep 8, 2008

Bad Munki posted:

I don't think I'm overloading anything? I've got a function that'll take a list of ints/ranges, and as a convenience, if your list is only one item long, you can just give it the item alone. Handling that is trivial within the function, step one is simply "if it's just a single int or range, make it into a list of that one thing and carry on." I'm sure it's 99% likely I hosed up the hint above, so that's what I'm trying to get right.

In this case, a real example would be that the user is searching for frames of data from a satellite, and they often have a list of frames they want, which may include, say, frames 100, 150, 200, and everything from 300-400. Another user just wants to search for a single frame, 100. Forcing them to provide a list of all 104 values for the sake of purity of data type seems silly, as does providing multi_frame_search() and single_frame_search() variants.

If it were SUPER offensive as-is and the cops are already on their way, I would consider forcing it to always be a list. But it's still gonna be a list of ints and/or ranges. That part's a requirement. Making the list-ness of the input optional just seems polite.

I think the proper way to handle that would be to use *args, but I think it's more standard to still require an iterable even if the iterable is only 1 item long. It just keeps things simpler and cleaner

Bad Munki
Nov 4, 2008

We're all mad here.


QuarkJets posted:

I think the proper way to handle that would be to use *args, but I think it's more standard to still require an iterable even if the iterable is only 1 item long. It just keeps things simpler and cleaner

The real case where this matters has 30 more of these arguments, instead of just a single list-ish thing. :negative:

USUALLY they aren't all used in conjunction, normally it'll be half a dozen of them. But in what combination is entirely flexible. poo poo's wild. I can post the full length definition if you wanna go blind.

But yeah, I may end up just forcing a list (of ints/ranges) and stopping there. Because this is out of control:
code:
absoluteorbit: Optional[Union[int, range, Iterable[Union[int, range]]]] = None,
Stepping down to just a list would shave a couple layers off that particular onion.

Bad Munki fucked around with this message at 23:28 on Feb 22, 2021

QuarkJets
Sep 8, 2008

Bad Munki posted:

The real case where this matters has 30 more of these arguments. :negative:

That should be fine, *args is just another iterable. So the code works for any of 1) User wants to pass a single argument, 2) User wants to pass 2 or more single arguments, 3) User wants to pass an iterable (they just need to expand it first, e.g. *input), 4) User wants to pass a combination of single arguments and iterables (each iterable needs to be expanded). If your function just processes the iterable *args, then you get all of the above combinations for free without any special handling

Don't do this if you're operating on arrays of data or something like that, but if you're just passing around a mix of single arguments and tuples that you want to process all together then *args is a good option

Personally I use the pattern of defining a function that operates on 1 element, and defining a different function that accepts an iterable of those elements that just repeatedly calls the first function. This also trivializes parallelization, when that's warranted

QuarkJets fucked around with this message at 23:35 on Feb 22, 2021
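A sketch of that pair-of-functions pattern applied to the frame-search example (all names hypothetical):

```python
def search_frame(frame: int) -> dict:
    # Hypothetical single-frame operation
    return {"frame": frame}

def search_frames(frames) -> list:
    # The iterable version just maps over the single-frame one; this
    # split is also the natural seam for parallelizing later.
    return [search_frame(f) for f in frames]

def thing(*args) -> list:
    # *args flavor: thing(100), thing(100, 150, range(300, 401)),
    # or thing(*existing_list) all work without special-casing.
    results = []
    for a in args:
        if isinstance(a, range):
            results.extend(search_frames(a))
        else:
            results.append(search_frame(a))
    return results
```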

Bad Munki
Nov 4, 2008

We're all mad here.


Perhaps of interest, it has been suggested elsewhere that I can simplify
code:
Optional[Union[int, range, Iterable[Union[int, range]]]]
to simply
code:
Optional[Union[int, Iterable]]
since range is, itself, an Iterable, and recursing this hint should maintain accuracy? I dunno, I'll have to play with it some more. That second version is about tolerable, really, if it's correct.

Foxfire_
Nov 8, 2010

OnceIWasAnOstrich posted:

edit: I'm having a hard time figuring out whether the PEP/mypy actually allow for covariant type lists. I think maybe the type system doesn't allow that for mutable types? I think maybe PyCharm sees list, the type system specifies lists can only be invariant, so it sees a List[int], converts to Iterable[int] and calls it a day?

I don't think it's thought through enough to actually have specified behavior outside of whatever some particular tool assumes. Is Optional[int] a type or a statement about two different types? No actual object will ever have a type of Optional[int]. Some names might variously refer to None or an int, should the name get a 'type' (distinct from the object type) that somehow logically gets attached to it? From CPython's point of view, none of the type hints have anything to do with actual types, so there's nothing there to guide anything.

PEP484 has
code:
T = TypeVar('T', int, float, complex)
Vector = Iterable[Tuple[T, T]]
in one of its examples, so that at least didn't think there was anything wrong with having non-homogenous lists

e: That's maybe a bad example since you could read it as requiring every tuple in the iterable to have the same number type. List[Any] does definitely show up in the PEP though

Foxfire_ fucked around with this message at 00:31 on Feb 23, 2021

first move tengen
Dec 2, 2011
Sorry, extremely basic question coming in. I'm doing Google's introductory python tutorial and it's based on Python 2 while the version I downloaded (through Anaconda) is 3.

The tutorial teaches that entering an expression prints its value in Python.

quote:

>>> a = 6 ## set a variable in this interpreter session
>>> a ## entering an expression prints its value
6

But when I try to replicate that in Spyder it doesn't print unless I use the print(a) function. Is that something that changed from Python 2 to 3?
I'm also wondering that about string methods, like s.isalpha(). If I type print(s.isalpha()) it will return True or False, while if I just type s.isalpha() and run that, I won't see anything.

QuarkJets
Sep 8, 2008

Goatse Master posted:

Sorry, extremely basic question coming in. I'm doing Google's introductory python tutorial and it's based on Python 2 while the version I downloaded (through Anaconda) is 3.

The tutorial teaches that entering an expression prints its value in Python.


But when I try to replicate that in Spyder it doesn't print unless I use the print(a) function. Is that something that changed from Python 2 to 3?
I'm also wondering that about string methods, like s.isalpha(). If I type print(s.isalpha()) it will return True or False, while if I just type s.isalpha() and run that, I won't see anything.

If you're in an interactive terminal, it'll print regardless of whether you're using 2 or 3. In Spyder this means using the prompt at the bottom-right, which is an interactive IPython terminal.

If you execute a file (such as by editing its contents with the Spyder editor and then running it) then the session is not interactive, so you don't get auto-printing or a few other things that come with interactive sessions.
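In other words, as toy script contents (nothing Spyder-specific here):

```python
# Run as a file, these two lines produce no output at all...
s = "hello"
s.isalpha()          # evaluated, result silently discarded

# ...while an explicit print works the same in scripts and in the REPL:
print(s.isalpha())   # prints: True
```

In the interactive prompt, by contrast, typing `s.isalpha()` alone echoes `True` back automatically.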

Zoracle Zed
Jul 10, 2001
a python tutorial in 2021 targeting py2 is... very odd.


QuarkJets
Sep 8, 2008

Yeah I decided to look it up... their terminal prompt has a 2008 date lol, nice
