OnceIWasAnOstrich
Jul 22, 2006

DoctorTristan posted:

Have I misunderstood pip’s version specifiers, or is it doing something weird here?

No, you are using it right. Something in the dependency resolution pip is doing seems to be calling for cryptography. Normally I would say that something else you have installed or are installing conflicts with that cryptography version range. What version of Python are you using? Maybe there are no cryptography packages for your Python in that range, or the repo/your environment is busted.

You can also try pip installing a version of the package in the appropriate version range and see what it does. Now that I think about it, if there were an actual conflicting dependency, pip would normally list it in that same output.

OnceIWasAnOstrich fucked around with this message at 15:28 on Jun 17, 2021

OnceIWasAnOstrich
Jul 22, 2006

Jose Cuervo posted:

I have written a function in Python named load_clustering. I have used this function in a number of places in other code. I now realise that a more appropriate name for this function is load_classification. Is there a way to modify the name of the function so that from now on I can use it as load_clustering, but keep the old name as well so that the older code with the old name of load_clustering does not break?

Assuming this is exactly what you want, you can rename the function to load_classification and then just do load_clustering = load_classification, and use/import either name. Functions are just objects that you can create new references to, like most other objects. I would still recommend refactoring, which could potentially be as easy as a find/replace but is made easier and more reliable with the refactoring tools that PyCharm and other IDEs have.
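A minimal sketch of the aliasing (the function body is a placeholder, not from the original question):

```python
def load_classification(path):
    """The renamed function. The body here is just an illustration."""
    return f"loaded {path}"

# The old name becomes a second reference to the same function object,
# so older code that calls load_clustering() keeps working unchanged.
load_clustering = load_classification
```

Both names now point at the exact same object, so `load_clustering is load_classification` is True.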

OnceIWasAnOstrich
Jul 22, 2006

You could also use the built-in tempfile.TemporaryDirectory, which comes pre-built as a context manager, and move the files out of the directory once they are successfully written. This also has the advantage of automatically choosing a platform-appropriate location by default.

You can also do this on a file-by-file basis with TemporaryFile context managers, but the directory seemed more appropriate for the original request. As mentioned, this is probably going to be better and handle edge cases more effectively than anything anyone in this thread could manage on their first try.
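A quick sketch of the write-then-move pattern (the destination directory and filename are made up for the example):

```python
import os
import shutil
import tempfile

final_dir = tempfile.mkdtemp()  # stand-in for your real destination directory

with tempfile.TemporaryDirectory() as tmp:
    scratch = os.path.join(tmp, "result.txt")
    with open(scratch, "w") as f:
        f.write("data")
    # Only move the file to its final home once the write succeeded
    shutil.move(scratch, os.path.join(final_dir, "result.txt"))
# leaving the with-block deletes tmp and anything still left inside it

final_path = os.path.join(final_dir, "result.txt")
```

If the write raises partway through, the half-written file dies with the temp directory and nothing broken ever lands in final_dir.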

OnceIWasAnOstrich fucked around with this message at 20:58 on Aug 2, 2021

OnceIWasAnOstrich
Jul 22, 2006

Dawncloack posted:

Is there a module or framework that would allow me to recreate a google sheet/collaborative excel in python?

My dream would be to have some excels in my raspberry and serving them in my home network. I am aware of the bottle and similar frameworks but when I try to google this I get too many results about how to make the raspberry interact with google docs.

Thanks!

What are you hoping to get out of doing this in Python? If you just want a self-hosted browser-based spreadsheet system and you don't need to hack around in it with Python, maybe use something like OnlyOffice or Collabora that you can just start up as a Docker image and start using.

OnceIWasAnOstrich
Jul 22, 2006

Bad Munki posted:

Ehh, not really an XY Problem situation since I don’t actually have anything specific in mind, just curious if there was anything fun there.

You could make your own functions that themselves call your own custom magic functions and make them part of whatever custom classes you like. You can use them as a sort of OOP single (maybe kind-of multiple?) dispatch. You can abuse/overload existing operators to do strange things with the normal magic functions on your custom objects. You can do some silly things to define something approximating custom operators. They aren't special other than that they have a bunch of underscores and some of them are called by built-in functions and operators in specific ways.
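For instance, one classic abuse is hijacking `|` via `__ror__` to fake a pipe operator (this `Pipe` class is invented for illustration, not a real library):

```python
class Pipe:
    """Wraps a function so that `value | Pipe(func)` applies func to value."""
    def __init__(self, func):
        self.func = func

    def __ror__(self, value):
        # Called when the left operand's __or__ gives up (NotImplemented),
        # which is how `"hi" | pipe_obj` ends up running our function.
        return self.func(value)

double = Pipe(lambda x: x * 2)
shout = Pipe(str.upper)
```

Now `"hi" | shout` evaluates to `"HI"` and `3 | double` to `6`, which is exactly the "something approximating custom operators" kind of silliness.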

OnceIWasAnOstrich
Jul 22, 2006

Thom ZombieForm posted:

I'd like to measure and print the memory usage (RSS) before and after each line of python code in a file, without actually calling the resource module function and print for every. single. line. I know there's decorators for functions, but are there any pointers or terms I can google to look into doing something like this?

https://pypi.org/project/memory-profiler/

Also recommended is https://pypi.org/project/line-profiler/.

edit: If you have more specific memory needs and want to use specific tools like heapy or something for measuring, you can use sys.settrace() to run a hook before each line execution.
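A rough sketch of the settrace approach; here ru_maxrss from the Unix-only stdlib resource module stands in for a real RSS probe (it reports the peak, not the current value, so a serious profiler would use psutil or /proc instead):

```python
import sys
import resource  # stdlib but Unix-only; used here as a stand-in RSS probe

def line_tracer(frame, event, arg):
    # The "line" event fires just before each line of a traced function runs.
    # ru_maxrss is peak RSS (kilobytes on Linux), so this only shows growth.
    if event == "line":
        rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"{frame.f_code.co_name}:{frame.f_lineno} peak RSS: {rss}")
    return line_tracer  # keep tracing inside this frame

def demo():
    a = [0] * 10
    b = [1] * 20
    return len(a) + len(b)

sys.settrace(line_tracer)
demo()
sys.settrace(None)  # always switch the tracer back off
```

This is essentially what memory-profiler does under the hood, just with a better measurement than ru_maxrss.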

OnceIWasAnOstrich fucked around with this message at 22:14 on Aug 23, 2021

OnceIWasAnOstrich
Jul 22, 2006

Loezi posted:

But in any case (:haw:) isn't it the same degree of "you don't have to care" as with tolower() and toupper() if you transform both sides with those as well?

Sure, if you only ever use ASCII/English.

Python code:
>>> ss = "wissen"
>>> eszet = "wißen"
>>> ss==eszet
False
>>> ss.lower()==eszet.lower()
False
>>> ss.casefold()==eszet.casefold()
True

OnceIWasAnOstrich
Jul 22, 2006

Epsilon Plus posted:

I guess I forgot to actually mention that - specifically where I bring something like this up is when we talk about binary search. We have students try to implement it, they end up slicing their input array/list, and I like to show them how that looks fine but actually has some performance issues later on. I show them a binary search implementation that wants to slice the input vs. one that just keeps track of the leftmost/rightmost index of the array it needs to look at, and have them see how for small lists the performance difference is basically nil, but that there gets to be a fairly rapid gulf between them as the list grows in size.

This works to get 1000 (or x) random numbers nicely put into a numpy array, but then the actual sorting takes a really long time.

I'm a bit confused about what you want. If you want it sorted, it is pretty not-random by definition. If you want it to be randomly generated but also sorted, you are clearly going to run into the big-O behavior you are trying to teach anyway. If you want to create numbers that are each some random amount bigger than the one before (semi-random?), you're going to have to do N operations anyway. As you've discovered, Python function calls have a bit of overhead. You'll certainly run into painful Python function-calling slowness unless you happen to find a function pre-written in a faster language (in NumPy or wherever) that does the thing you want, or can figure out some combination of such functions that gives you what you want.

Why do they need to be random and not just a sequence like you made? You could create them with a step if you want them to be non-sequential, or make, say, a thousand sub-lists with different step sizes and concatenate them so there isn't a consistent step size and you still mostly need a binary search. Or do that at whatever the maximum size you can feasibly sort: create all the sub-ranges with appropriate min/max values, then concatenate those for something approximating "random + sorted".
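The "each number randomly bigger than the last" idea is just a cumulative sum of random positive steps. A pure-Python sketch (numpy.cumsum over rng.integers would be the vectorized equivalent):

```python
import random
from itertools import accumulate

random.seed(0)  # seeded only so the example is reproducible

# Random positive steps, cumulatively summed: the values look random,
# but the list is already strictly increasing, so no O(n log n) sort.
steps = (random.randint(1, 100) for _ in range(1000))
arr = list(accumulate(steps))
```

The gaps between consecutive elements vary randomly, so a binary-search exercise still behaves like it would on "real" random sorted data.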

OnceIWasAnOstrich
Jul 22, 2006

Kivy? HTML Canvas and Javascript?

OnceIWasAnOstrich fucked around with this message at 14:36 on Nov 6, 2021

OnceIWasAnOstrich
Jul 22, 2006

Hadlock posted:

How is kivy vs like, pygame or whatever


There is a lot more to it and it is much better maintained. It's a lot more batteries-included, especially for UI stuff. Pygame is one (of several) of the backends Kivy can use for rendering. It is also genuinely not very hard to package it for various platforms and handle touch controls.

OnceIWasAnOstrich
Jul 22, 2006

I've never done any serious work with it but I did throw together a quick gimmick android app with a few buttons and a constantly-updating visual with it once. It came together extremely smoothly and quickly considering I just don't make GUIs as a rule.

Without a bunch of aesthetics work it does have A Look that makes it very obvious it isn't native anything.

OnceIWasAnOstrich
Jul 22, 2006

Yeah, I can't really tell what is happening (do all of those functions return a modified game state?) but I can definitely say that doesn't feel like Python. My guess is a more Pythonic way would involve OOP: a game-state object with methods on it that do things and manipulate the state via object attributes.

OnceIWasAnOstrich
Jul 22, 2006

ArcticZombie posted:

Can setup.pyless packages be installed editable (pip install -e .) yet? This is the only reason my packages still have a minimal setup.py with the actual stuff in setup.cfg/pyproject.toml.

PEP 660 is accepted but hasn't been implemented yet, unfortunately.

OnceIWasAnOstrich
Jul 22, 2006


I just want to say this thing is wonderful. I already had a moderately complicated Python app using Click with a very slow startup time on most commands because it needs to read and set a bunch of cloud metadata. An import and one line of code gave it a fully-featured REPL with tab completion and everything, which saves substantial amounts of time when running multiple commands sequentially.

OnceIWasAnOstrich
Jul 22, 2006

QuarkJets posted:

I have never encountered an anaconda or conda-forge package that failed to install dependencies, but I don't usually use Windows. What channel are you using? It wouldn't surprise me if this was some bit of Microsoft fuckery that requires extra steps, since you mentioned needing a Windows compiler

I only use pip as a source of last resort, for really obscure stuff that has no conda package. That step takes place after installing everything else I need with mamba. You shouldn't need pip for conda dependencies, if that's happening then something has gone deeply wrong. Oh, and if you use pip you need to pay attention to what it's doing - pip will happily try and replace packages with a different version, if it decides that's required, but that might break any of your conda packages

If you use 3rd party channels then you're in the wild west, like pypi basically. I prefer conda-forge for most things, since that's exclusively open source packages and tends to update a lot faster than the anaconda channel.

To follow up on this: I find almost all dependency problems go away when you use conda-forge (plus maybe some high-quality channels like bioconda if applicable), remove the Anaconda "defaults" channel, and set channel_priority to strict. As a side bonus, everything is nicely open source and appropriately licensed.

And yes, always do pip steps last, and if you must run conda/mamba again in an env after running pip: don't. Just delete the environment and rebuild it in the right order; this is what envs are for.

OnceIWasAnOstrich
Jul 22, 2006

samcarsten posted:

So, this is the example given by Starlette on requests:

code:
async def app(scope, receive, send):
    assert scope['type'] == 'http'
    request = Request(scope, receive)
    content = '%s %s' % (request.method, request.url.path)
    response = Response(content, media_type='text/plain')
    await response(scope, receive, send)
I get the first 2 lines, but not what the rest means.

code:
    request = Request(scope, receive)
Here we are constructing a Request object, which holds all the data that Starlette gets from the ASGI scope and receive channel. This includes things like what URL was requested, all the HTTP headers, query parameters, cookies, and the body.

code:
    content = '%s %s' % (request.method, request.url.path)
This constructs a string with the HTTP request method (GET, PUT, POST, etc.) and the URL path for this request. This function just echoes the request method and path, so this is what will be returned.

code:
    response = Response(content, media_type='text/plain')
This constructs a Response object, which is a convenient holder for all of the things that Starlette is going to send back to the browser. You don't strictly need it, but it makes a lot of more complicated things easy. This response just has the text string we want to give back to the user, and specifies that it is in fact just plain text. You can use this to respond with HTTP error codes, alternative data types, redirects, files, data streams, whatever Starlette supports.

code:
    await response(scope, receive, send)
Starlette functions like this don't actually return a meaningful value. Instead they call the Response object (which is a "callable" object), telling it to send a response on the ASGI scope/connection that this "app" function was called with. This causes the Starlette framework to send the text response back to the browser (and this function waits until that call returns, then continues on, returning None and ending the response).

OnceIWasAnOstrich
Jul 22, 2006

I don't feel good for having conceived this:

Python code:
g=(y:=y+1 for x, _ in enumerate(iter(bool, True)) if ((y:=0) if not x else 1))
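Tracing through it, the thing it so laboriously reinvents is itertools.count(1):

```python
from itertools import count

g = (y := y + 1 for x, _ in enumerate(iter(bool, True)) if ((y := 0) if not x else 1))
# iter(bool, True) calls bool() forever (always False, never the sentinel True),
# so enumerate supplies x = 0, 1, 2, ...  On the first item the filter runs
# (y := 0), which is falsy, so that item is dropped but y gets initialized.
# Every later item passes the filter and yields y := y + 1.
sane = count(1)  # the same infinite sequence, written like an adult
```

So next(g) produces 1, 2, 3, ... exactly like next(sane).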

OnceIWasAnOstrich
Jul 22, 2006

rich thick and creamy posted:

Have you given Poetry a whirl? I've started playing around with it a few months ago. It can build a venv for your project and keeps track of dependencies in a .toml file. Haven't stumbled on any huge annoyances just yet.

I second this. If you are developing a package, library, or executable that needs to live in an environment with its correct dependencies, imo Poetry is the best thing to use right now.

It keeps your dependencies nice and tidy and also generates a lock file to reproduce an exact environment, depending on your needs. Just makes it easy to do things properly the modern way.

Have separate dev dependencies? It's got you covered. Devs get those, but someone just pip installing the package doesn't.

OnceIWasAnOstrich
Jul 22, 2006

The March Hare posted:

Did Poetry ever fix their resolver being insanely slow?

Hmm, not sure. It's not a problem I've run into in a way that bothered me; it seems roughly equivalent to pip in speed. We have a few projects with a couple dozen dependencies each, and while solving isn't instant or anything, it isn't an issue I face much. We don't use open-ended version specifications for dependencies, so solving is only slow when we upgrade them, which doesn't need to be fast. If you do use open-ended (or worse, no) version restrictions, keeping them restrictive and recent goes a long way toward keeping things brisk.

If that is a big blocker though, PDM is the way to go and is another good option in general, though I still prefer Poetry.

OnceIWasAnOstrich
Jul 22, 2006

Oysters Autobio posted:

The vscode terminal is still just my local machine. So how do I connect that to the remote JupyterHub?

Hmm, this is kind of a big ask. Your setup works by connecting to the remote Jupyter kernel over HTTP when you are using a notebook in VS Code via the JupyterHub extension. You can of course get a remote terminal via Jupyter (JupyterLab does it), but it does it with (I think) xterm.js or something similar running on the host and communicating over websockets. To make this work over a Jupyter kernel link, you would have to rewrite (or re-implement in the form of an extension) the VS Code terminal to use the Jupyter kernel protocol, and possibly add software to your kernel environments to run the terminal session and transfer data over websockets or some other protocol. The extension just isn't set up to do that, as far as I know.

You can, of course, also do remote terminals (and editors, and extensions, and everything else) with VS Code, but those are implemented over SSH. To make that work you would not really be using JupyterHub; you would instead connect over SSH to the host running the Jupyter kernels and access both the terminal and the notebook kernels through that tunnel.

The latter is possible, but maybe not with your JupyterHub if you can't make SSH connections to the underlying machines. The former is possible in principle, but I don't think it is currently implemented; at least, I don't know of any implementations.

Oysters Autobio posted:

Side note, I use the "Show Contextual Help" feature in Jupyterlab a lot since it shows docstrings and info for any given object in the IDE. What's the equivalent for this in vs code?

Individual extensions for various languages provide this via language servers. It works in the normal editor interface, but with notebooks I imagine it would be the responsibility of the notebook extension. I don't use that; perhaps there is a setting you could enable? The feature might just not be available in notebooks.

--

Another thing I know is possible is to run a web version of VS Code (or code-server) remotely and access it via the Jupyter proxies, something like https://github.com/betatim/vscode-binder or https://github.com/victor-moreno/jupyterhub-deploy-docker-VM/tree/master/singleuser/srv/jupyter_codeserver_proxy. You've still got a separate interface, but at least they're both running on the same host in the same browser.

OnceIWasAnOstrich
Jul 22, 2006

Oysters Autobio posted:

So this kind of setup was what I was thinking was more doable, but I don't know what accesses I have in terms of SSH so I'll see what I can do. It doesn't help that my vscode is on a Windows machine because our workstations aren't connected to the Windows App store for me to install a WSL Linux distro. Adding PuTTY to the mix isn't appealing.

On this particular point, Windows doesn't matter. Although I do use WSL 2 some, I do most of my work on a Windows work laptop that I don't have admin or store access on. Most of that development work is done in VS Code with Remote-SSH against some remote Linux machine that I may or may not have root on. I only have PuTTY installed so I can test things for PuTTY users; I otherwise do everything with the OpenSSH client that is built into Win10 systems.

This does require SSH access to the remote systems, something admins may be trying to avoid by providing JupyterHub instead. In general, if you are using remote compute, even non-admin Windows isn't going to be the dealbreaker, or even necessarily an inconvenience.

I hesitate to suggest this, because it could be an end-run around restrictions and you should check policies, but you could absolutely run the VS Code or code-server CLI from within a Jupyter notebook and set up a tunnel for your local VS Code client that way. It is kind of a manual version of one of the links I provided.
