12 rats tied together
Sep 7, 2006

You can pip install --user, or just normal pip install, but yeah, you generally should not sudo pip install unless you intentionally set out to modify system python. It's almost always fine, until it randomly isn't, and it's just not worth the hassle of eventually having to deal with that (especially if you don't particularly care about systems administration).

I like pyenv for managing python installs. The usual workflow is that you:

1. Start a new python project by creating a new directory.
2. Place a .python-version file in the directory with the contents of the version this project should use.
3. Navigate to the directory and "pyenv install" to install whatever version of python that is.
4. Use python normally from now on.
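As a shell sketch of those steps (the version number is just an example):

```shell
mkdir myproject && cd myproject

# Pin the interpreter this project should use
echo "3.9.1" > .python-version

# pyenv reads .python-version and installs that interpreter
pyenv install

# From here on, python in this directory resolves to 3.9.1
python --version
```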

It probably doesn't work on windows, but I'd really suggest using WSL to do any sort of python development on windows anyway.


QuarkJets
Sep 8, 2008

wolrah posted:

I understand the use of virtualenvs and such for keeping different projects' dependencies isolated from the system and each other, but if I actually do want the latest version of something installed system-wide is sudo pip install actually the best way to go?

My specific use case right now is youtube-dl, but there are a few other utilities written in Python that use pip as their official package manager which I'd generally want installed system-wide, with no need to activate environments before use.

Yeah, system-wide pip is there for that reason, but you need to be very careful.

The safer way to gain the convenience you desire is to create a venv and then modify your PATH to point to the python that lives in it.
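A sketch of that setup (paths are illustrative):

```shell
# Create a dedicated venv for tools you want available everywhere
python3 -m venv ~/.venvs/tools
~/.venvs/tools/bin/pip install youtube-dl

# Put the venv's bin directory on PATH (e.g. in ~/.bashrc) so its
# scripts and its python run without activating anything
export PATH="$HOME/.venvs/tools/bin:$PATH"
```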

QuarkJets fucked around with this message at 21:41 on Dec 30, 2020

Dominoes
Sep 20, 2007

12 rats tied together posted:

I like pyenv for managing python installs. The usual workflow is that you:
...
It probably doesn't work on windows.
There's also this program I wrote that abstracts the python installation and package management.

quote:

I'd really suggest using WSL to do any sort of python development on windows anyway.
Why?

necrotic
Aug 2, 2005
I owe my brother big time for this!
pyenv is nice because I can have a distinct env for each global package I need (CLI tools, not libs to use) very easily. I guess you can do this with conda, too? But I haven't really messed with it.

Zugzwang
Jan 2, 2005

You have a kind of sick desperation in your laugh.


Ramrod XTreme
Do any of y'all have experience distributing packages with Cython? I've been Googling about this as much as I can and have also tried reverse-engineering the Cythonizing components of packages such as pandas, all to no avail.

Right now my folder structure is:
code:
setup.py
setup.cfg (didn't always have this, didn't make a difference when I did)
cy_test/
    __init__.py
    py_code.py
    cy/
        __init__.py
        cy_code.c
        cy_code.pyx
py_code.py and cy_code.pyx both contain boring functions that either print 'hello' or return 5. They work, and the Cython stuff builds fine if I do it locally through the command line, but nothing except __init__.py makes it into the installed cy/ directory when I do a local pip install. The py_code.py-related stuff installs and imports fine.

setup.py currently looks like this:
code:
from setuptools import Extension, setup, find_packages
from Cython.Build import cythonize
import os

extensions = [
	Extension(
		name='cy_test.cy',
		sources=['cy_test\\cy\\cy_code.pyx'],
		language='c',
	)
]

setup(
	name='cy_test',
	version='1.0',
	description='Cython test',
	packages=find_packages(),
	ext_modules=cythonize(extensions, compiler_directives = {"language_level": 3, "embedsignature": True}),
	install_requires=[
		'cython',
	],
	zip_safe=False
)
I've tried including and not including the cy_code.c file in sources, doesn't make a difference.

I noticed that other Cython-using packages have a variety of helper files like MANIFEST.in in the main directory, but none of the helper files seem to mention Cython, so ¯\_(ツ)_/¯

I've also tried this with the built .pyd extension in the cy/ folder, and the imports within __init__.py just fail.

At this point, I've spent way more time on this than I care to think about. Help me Python thread Kenobi, you're my only hope.
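Editor's note: one plausible culprit, offered as a guess rather than a tested fix — the extension is named cy_test.cy, which is also the name of the cy package itself, so the compiled module and the package compete for the same install path. Naming the extension after the module inside the package (and using forward slashes in sources, which work on every platform) avoids that:

```python
from setuptools import Extension

# Sketch only: extension named after the module, not the package
extensions = [
    Extension(
        name='cy_test.cy.cy_code',           # installs as cy_test/cy/cy_code.*
        sources=['cy_test/cy/cy_code.pyx'],  # forward slashes work everywhere
        language='c',
    )
]
```

The .pyx/.c sources still won't be copied into the installed package unless listed in package_data or MANIFEST.in, but the compiled module doesn't need them at runtime.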

Zugzwang fucked around with this message at 04:47 on Dec 31, 2020

Butter Activities
May 4, 2018

Bad Munki posted:

Similarly on OSX messing with system Python will wreck your poo poo, just absolutely don’t.

Huh, when I didn't know better I replaced my default python files with anaconda, and I never actually had a serious issue. Probably just lucky though, since I installed a shitload of common libraries before I edited the system files to all point to it.

Love Stole the Day
Nov 4, 2012
Please give me free quality professional advice so I can be a baby about it and insult you
The Unreal Engine now has a Python 3 in there with an unreal package, but it's treated as a built-in.

We can get PyCharm to use the appropriate Python environment that the engine is using... but obviously because this unreal package is a built-in like `time` or `shutil` (i.e. there is no `unreal.__file__` or file path we can access via introspection), PyCharm won't recognize it for IntelliSense purposes. So we can't browse its contents and whatnot for insight into what's available.

Is there a way we can get PyCharm to recognize this built-in module for its custom Python version so that we can work on scripts separately in an IDE and have the benefit of IntelliSense?

Dominoes
Sep 20, 2007

There are options for configuring the Python interpreter, including adding additional paths for libs, but I don't understand what you're attempting.

mystes
May 31, 2006

Love Stole the Day posted:

The Unreal Engine now has a Python 3 in there with an unreal package, but it's treated as a built-in.

We can get PyCharm to use the appropriate Python environment that the engine is using... but obviously because this unreal package is a built-in like `time` or `shutil` (i.e. there is no `unreal.__file__` or file path we can access via introspection), PyCharm won't recognize it for IntelliSense purposes. So we can't browse its contents and whatnot for insight into what's available.

Is there a way we can get PyCharm to recognize this built-in module for its custom Python version so that we can work on scripts separately in an IDE and have the benefit of IntelliSense?
Some pages seem to indicate that unreal engine will generate an unreal.py file you can use. E.g. https://mycgdoc.com/Stub-File-in-PyCharm-269b748c0c3942aeb7bba4df568415f8

Is this what you're looking for?

mystes fucked around with this message at 08:58 on Jan 1, 2021

Love Stole the Day
Nov 4, 2012

mystes posted:

Some pages seem to indicate that unreal engine will generate an unreal.py file you can use. E.g. https://mycgdoc.com/Stub-File-in-PyCharm-269b748c0c3942aeb7bba4df568415f8

Is this what you're looking for?

Following the backlink on that page, I found the "Stub File" section here on that backlinked page: https://mycgdoc.com/Python-Integration-22eef40f46c44f03aa25117ed939ecc6#269b748c0c3942aeb7bba4df568415f8

That one step was the missing piece! Thank you so much

Loving Africa Chaps
Dec 3, 2007


We had not left it yet, but when I would wake in the night, I would lie, listening, homesick for it already.

I'm absolutely losing my mind with a bug in python that I don't think should be possible, hence the solution has got to be super simple

the function in question is
Python code:
def complete_input(
    imputed: List, input_list: List[ProcessedPrediction], Lactate: bool = True) -> List[ProcessedPrediction]:
    """
    Takes imputed variables and adds them to inputs

    Args:
        imputed: list of imputed variables
        input_list: Prediction object or list of objects with missing variable to be filled in
        Lactate: if true variable to be filled in is lactate, else albumin will be populated

    Returns:
        List of prediction inputs with missing lactate or albumin filled in

    """

    completed = []
    for i in imputed:
        for j in input_list:
            if Lactate is True:
                j.Lactate = i
            else:
                j.Albumin = i

            print(j)
            completed.append(j)
   

    print("---")
    print(completed)
    print("---")
    return completed

which looks fine but the j that prints and the j that gets appended are different :psyduck:

example output:
code:
Age=40 ASA=3 HR=87 SBP=120 WCC=13.0 Na=135 K=7.0 Urea=2.0 Creat=20 Lactate=28.0 Albumin=40 GCS=15 Resp=2 Cardio=1 Sinus=False CT_performed=True Indication=1 Malignancy=2 Soiling=2 Lactate_missing=1 Albumin_missing=0
Age=40 ASA=3 HR=87 SBP=120 WCC=13.0 Na=135 K=7.0 Urea=2.0 Creat=20 Lactate=36.0 Albumin=40 GCS=15 Resp=2 Cardio=1 Sinus=False CT_performed=True Indication=1 Malignancy=2 Soiling=2 Lactate_missing=1 Albumin_missing=0
Age=40 ASA=3 HR=87 SBP=120 WCC=13.0 Na=135 K=7.0 Urea=2.0 Creat=20 Lactate=21.0 Albumin=40 GCS=15 Resp=2 Cardio=1 Sinus=False CT_performed=True Indication=1 Malignancy=2 Soiling=2 Lactate_missing=1 Albumin_missing=0
Age=40 ASA=3 HR=87 SBP=120 WCC=13.0 Na=135 K=7.0 Urea=2.0 Creat=20 Lactate=40.0 Albumin=40 GCS=15 Resp=2 Cardio=1 Sinus=False CT_performed=True Indication=1 Malignancy=2 Soiling=2 Lactate_missing=1 Albumin_missing=0
Age=40 ASA=3 HR=87 SBP=120 WCC=13.0 Na=135 K=7.0 Urea=2.0 Creat=20 Lactate=29.0 Albumin=40 GCS=15 Resp=2 Cardio=1 Sinus=False CT_performed=True Indication=1 Malignancy=2 Soiling=2 Lactate_missing=1 Albumin_missing=0
---
[ProcessedPrediction(Age=40, ASA=3, HR=87, SBP=120, WCC=13.0, Na=135, K=7.0, Urea=2.0, Creat=20, Lactate=29.0, Albumin=40, GCS=15, Resp=2, Cardio=1, Sinus=False, CT_performed=True, Indication=1, Malignancy=2, Soiling=2, Lactate_missing=1, Albumin_missing=0), 
ProcessedPrediction(Age=40, ASA=3, HR=87, SBP=120, WCC=13.0, Na=135, K=7.0, Urea=2.0, Creat=20, Lactate=29.0, Albumin=40, GCS=15, Resp=2, Cardio=1, Sinus=False, CT_performed=True, Indication=1, Malignancy=2, Soiling=2, Lactate_missing=1, Albumin_missing=0), 
ProcessedPrediction(Age=40, ASA=3, HR=87, SBP=120, WCC=13.0, Na=135, K=7.0, Urea=2.0, Creat=20, Lactate=29.0, Albumin=40, GCS=15, Resp=2, Cardio=1, Sinus=False, CT_performed=True, Indication=1, Malignancy=2, Soiling=2, Lactate_missing=1, Albumin_missing=0), 
ProcessedPrediction(Age=40, ASA=3, HR=87, SBP=120, WCC=13.0, Na=135, K=7.0, Urea=2.0, Creat=20, Lactate=29.0, Albumin=40, GCS=15, Resp=2, Cardio=1, Sinus=False, CT_performed=True, Indication=1, Malignancy=2, Soiling=2, Lactate_missing=1, Albumin_missing=0), 
ProcessedPrediction(Age=40, ASA=3, HR=87, SBP=120, WCC=13.0, Na=135, K=7.0, Urea=2.0, Creat=20, Lactate=29.0, Albumin=40, GCS=15, Resp=2, Cardio=1, Sinus=False, CT_performed=True, Indication=1, Malignancy=2, Soiling=2, Lactate_missing=1, Albumin_missing=0)]
---
The lactate is changing appropriately with the print statement, but when I print the list they're all the same. I have no idea how that's possible?

QuarkJets
Sep 8, 2008

Each time you append j to the list, you are actually appending a reference to an object, not a copy. The loop keeps mutating those same objects in place, so by the end every reference in completed points at an object holding the last imputed value (note how all of the elements in the output list match the final print; this is why). There are many ways to fix this

You are actually modifying the entries in the input list with this function. If that's your intention, you can just eliminate the completed list entirely and instead print input_list
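The aliasing is easy to reproduce in a few lines (Pred here is a hypothetical stand-in for ProcessedPrediction):

```python
class Pred:
    """Hypothetical stand-in for ProcessedPrediction."""
    def __init__(self, lactate=None):
        self.Lactate = lactate

inputs = [Pred()]
imputed = [28.0, 36.0, 29.0]

completed = []
for i in imputed:
    for j in inputs:
        j.Lactate = i        # mutates the one shared object in place
        completed.append(j)  # appends another reference to that same object

# All three entries are the same object, holding the last imputed value
print([p.Lactate for p in completed])                # [29.0, 29.0, 29.0]
print(completed[0] is completed[1] is completed[2])  # True
```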

Loving Africa Chaps
Dec 3, 2007



QuarkJets posted:

Each time you append j to the list, you are actually appending a reference to an object, not a copy. The loop keeps mutating those same objects in place, so by the end every reference in completed points at an object holding the last imputed value (note how all of the elements in the output list match the final print; this is why). There are many ways to fix this

You are actually modifying the entries in the input list with this function. If that's your intention, you can just eliminate the completed list entirely and instead print input_list

That suddenly makes so much sense! Thanks, I knew it would be something simple in the end. I want multiple copies of the input list, each with a value from imputed filled in, so I'll work on a fix that achieves that.

QuarkJets
Sep 8, 2008

The general rule is that Python will not create a copy unless asked. Since you expect the output list to have all of the elements of the input list, except modified, you can copy it at the start of the function and then modify the elements in place.


Python code:

completed = list(input_list) 
for entry in completed:
    # do something to entry
print(completed) 

accipter
Sep 12, 2003

QuarkJets posted:

The general rule is that Python will not create a copy unless asked. Since you expect the output list to have all of the elements of the input list, except modified, you can copy it at the start of the function and then modify the elements in place.


Python code:

completed = list(input_list) 
for entry in completed:
    # do something to entry
print(completed) 

This will just create a copy of the list instance; you might also need to copy the objects held by the list.

Dominoes
Sep 20, 2007

copy.deepcopy(j)

QuarkJets
Sep 8, 2008

accipter posted:

This will just create a copy of the list instance, you might need to also copy the objects held by the list.

Oh right, that's true. Use a little list comprehension with deepcopy instead
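A sketch of that fix, matching what Loving Africa Chaps described (one deep-copied batch of the input list per imputed value; Pred is a hypothetical stand-in for ProcessedPrediction):

```python
import copy

class Pred:
    """Hypothetical stand-in for ProcessedPrediction."""
    def __init__(self, lactate=None):
        self.Lactate = lactate

def complete_input(imputed, input_list):
    """Return one deep-copied batch of input_list per imputed value."""
    completed = []
    for value in imputed:
        batch = [copy.deepcopy(j) for j in input_list]  # fresh objects
        for j in batch:
            j.Lactate = value
        completed.extend(batch)
    return completed

result = complete_input([28.0, 36.0, 29.0], [Pred()])
print([p.Lactate for p in result])  # [28.0, 36.0, 29.0]
```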

Cyril Sneer
Aug 8, 2004

Life would be simple in the forest except for Cyril Sneer. And his life would be simple except for The Raccoons.
I've been playing around with some ideas from my questions a few pages back, including a way of dynamically determining available model prediction classes. See the following code:

code:
from abc import ABC, abstractmethod


class Animal(ABC):
    def __init__(self):
        super().__init__()

    @abstractmethod
    def speak(self):
        pass


class Cat(Animal):
    def __init__(self, y=0):
        super().__init__()
        self.y = y
        self.fixed = 'A cat'

    def speak(self):
        print('meow!')


class Dog(Animal):
    def __init__(self, x=0):
        super().__init__()
        self.x = x
        self.fixed = 'A Dog'

    def speak(self):
        print('bark!')


class Puppy(Dog):
    def __init__(self, x=0):
        super().__init__()
        self.x = x
        self.fixed = 'a puppy'


for c in Animal.__subclasses__():
    print(c.__name__)
whose output is:
code:
Cat
Dog
So, I'm able to enumerate my Cat and Dog classes, but not the derived Puppy class, and I'm not quite sure why. Is there something I can do to find it as well?

(Also, how are ya'll able to do those Python code blocks?)

necrotic
Aug 2, 2005
It's not a direct subclass of Animal. You would need to recurse through each subclass and look at their subclasses, too.

Cyril Sneer
Aug 8, 2004


necrotic posted:

It's not a direct subclass of Animal. You would need to recurse through each subclass and look at their subclasses, too.

yaaaa I get this, but it seems kind of odd that that registration trick only works with direct subclasses. Like, so long as all the classes in the inheritance tree support the original ABC interface, wouldn't it be useful to crawl through them?

necrotic
Aug 2, 2005

Cyril Sneer posted:

yaaaa I get this, but it seems kind of odd that that registration trick only works with direct subclasses. Like, so long as all the classes in the inheritance tree support the original ABC interface, wouldn't it be useful to crawl through them?

On a case-by-case basis. Having the subclass method only return direct descendants makes the most sense; it's a more common case than wanting the entire set of children and grandchildren and so on, all the way down. If you do need the whole hierarchy, implementing recursion (or a queue) is simple enough.
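That recursion is only a few lines; here's a sketch against the Cat/Dog/Puppy example from above:

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def speak(self): ...

class Cat(Animal):
    def speak(self): print('meow!')

class Dog(Animal):
    def speak(self): print('bark!')

class Puppy(Dog):
    pass

def all_subclasses(cls):
    """Yield every descendant of cls, depth-first, not just direct children."""
    for sub in cls.__subclasses__():
        yield sub
        yield from all_subclasses(sub)

print([c.__name__ for c in all_subclasses(Animal)])  # ['Cat', 'Dog', 'Puppy']
```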

Mursupitsku
Sep 12, 2011
I'm running a Monte Carlo simulation with xgboost and sklearn.calibration.CalibratedClassifierCV. It works well except it's way too slow. What are my options to make it faster?

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?
Profile it and find out what’s taking the most time, then optimise that part. Repeat until performance is acceptable.

If you want more specific guidance you’ll need to post code and profiler output.
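For the profiling itself, the standard library's cProfile is the usual starting point. A minimal sketch with a stand-in workload (since the real code isn't posted):

```python
import cProfile
import io
import pstats

def slow_part(n):
    # stand-in for whatever the expensive call actually is
    return sum(i * i for i in range(n))

def simulation():
    return [slow_part(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
simulation()
profiler.disable()

# Sort by cumulative time and show the worst offenders
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```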

Mursupitsku
Sep 12, 2011

DoctorTristan posted:

Profile it and find out what’s taking the most time, then optimise that part. Repeat until performance is acceptable.

If you want more specific guidance you’ll need to post code and profiler output.

If you could point me to some resources on how to perform profiling I'd be grateful. I think starting there would be the best course of action. My code is quite long and spaghetti so I'd rather not post it for now.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

Mursupitsku posted:

If you could point me to some resources on how to perform profiling I'd be grateful. I think starting there would be the best course of action. My code is quite long and spaghetti so I'd rather not post it for now.

Yea with a MC simulation, shaving 300ms over 5000 trials is a good win.

I'm assuming the part that is taking too long is the simulation, not the training of the models. I'd start with using Python's built-in timing in several places to figure out what is taking the most time and see if I can optimize it.

Places I look first when using sklearn are:
1) for loops, particularly on dataframes...can I apply or map a function instead? Can I use numpy instead?
2) other python libraries/packages, for example if I am doing some image transformation using Pillow, how long does that take
3) The classifier itself, how long does it take to spit out a pred? Can I decrease the number of features in my model and see a big speedup without much accuracy loss?

If you're still writing the code, don't run a bunch of trials. Code should run in 10 seconds during the initial write.

pmchem
Jan 22, 2010


CarForumPoster posted:

Yea with a MC simulation, shaving 300ms over 5000 trials is a good win.

5000 trials? Let me tell you, I once ran a MC simulation with over a trillion trials during my Ph.D. work. Fortran, not python, but still an amusing memory. One of my dissertation reviewers asked if the number was a typo. During the course of the work I found a bug in the random number generator for a commonly used Fortran compiler (which I reported to the devs and they later corrected).

Mursupitsku
Sep 12, 2011

CarForumPoster posted:

Yea with a MC simulation, shaving 300ms over 5000 trials is a good win.

I'm assuming the part that is taking too long is the simulation, not the training of the models. I'd start with using Python's built-in timing in several places to figure out what is taking the most time and see if I can optimize it.

Places I look first when using sklearn are:
1) for loops, particularly on dataframes...can I apply or map a function instead? Can I use numpy instead?
2) other python libraries/packages, for example if I am doing some image transformation using Pillow, how long does that take
3) The classifier itself, how long does it take to spit out a pred? Can I decrease the number of features in my model and see a big speedup without much accuracy loss?

If you're still writing the code, don't run a bunch of trials. Code should run in 10 seconds during the initial write.

Thank you for the tips.

It's probably the classifier. When I ran the same simulation with a regular logistic regression as the classifier it ran maybe 3 times faster.

SnatchRabbit
Feb 23, 2006

by sebmojo
I have a stupid python question. I'm working on some python Lambdas in AWS and I want to increment or decrement my autoscaling group based on events. Now, there are minimum and maximum sizes to my ASGs that I need to take into account when altering the desired numbers in the ASG, so I want to somehow build that into my ++ and -- functions. I was thinking of using range() but I'm not sure if that's the right way to go:

code:
def decrease_asg(asgName, currentMax, currentDesired):
    ### range()?
    newMax = currentMax - 1
    newDesired = currentDesired - 1
    response = client.update_auto_scaling_group(
        AutoScalingGroupName=asgName,
        MaxSize=newMax,
        DesiredCapacity=newDesired)
    print(response)


def increase_asg(asgName, currentMax, currentDesired):
    newMax = currentMax + 1
    newDesired = currentDesired + 1
    response = client.update_auto_scaling_group(
        AutoScalingGroupName=asgName,
        MaxSize=newMax,
        DesiredCapacity=newDesired)
    print(response)

12 rats tied together
Sep 7, 2006

Just a general process thing, if you can coerce your events into a time series metric, this integration is much easier to configure in AWS and has some prebuilt policies and triggers you can set: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-policy.html

More specifically for your use case, I would suggest not mutating MaxSize. If you want to decrement or increment the number of instances in your asg, consider modifying desired only. Max can be left alone, managed manually, or set by some kind of ops team / billing team. The autoscaling service is smart enough in general to not launch 400 instances when you just want to temporarily scale from 22 to 23.
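For the bounds check itself you don't need range(); clamping with min()/max() is enough. A sketch (the function and parameter names here are made up, not part of the AWS API):

```python
def clamped_desired(current, delta, min_size, max_size):
    """Return a new desired capacity, clamped to the ASG's configured bounds."""
    return max(min_size, min(max_size, current + delta))

# The result can then go into boto3's update_auto_scaling_group call as
# DesiredCapacity, leaving MaxSize alone as suggested above.
print(clamped_desired(22, +1, 1, 30))  # 23: normal scale-up
print(clamped_desired(30, +1, 1, 30))  # 30: already at max, no change
print(clamped_desired(1, -1, 1, 30))   # 1: already at min, no change
```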

Mursupitsku
Sep 12, 2011

Mursupitsku posted:

Thank you for the tips.

It's probably the classifier. When I ran the same simulation with a regular logistic regression as the classifier it ran maybe 3 times faster.

After running some tests it seems that the classifier is the only thing taking a significant amount of time to run. I haven't yet tested whether dropping features would improve performance. In any case there are only 17 features total and my gut tells me I can't really drop any of them.

Would just throwing more computing power at it work? Atm I'm running it on a semi-old laptop.

What are my options to optimize the classifier itself?

OnceIWasAnOstrich
Jul 22, 2006

Mursupitsku posted:

After running some tests it seems that the classifier is the only thing taking a significant amount of time to run. I haven't yet tested whether dropping features would improve performance. In any case there are only 17 features total and my gut tells me I can't really drop any of them.

Would just throwing more computing power at it work? Atm I'm running it on a semi-old laptop.

What are my options to optimize the classifier itself?

Part of the "eXtremeness" of XGBoost is that it does scale pretty well with more hardware and threads. XGBoost is pretty well optimized already, but perhaps you are using an excessively large model. 17 is a decent number of features and it could be worth actually doing some feature-selection testing. You may also be using default hyperparameters that make a more-complex-than-necessary model and could reduce the tree depth, maximum number of trees, or number of boosting rounds.
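A hedged sketch of the kinds of knobs worth trying; the values here are illustrative and untuned, though the parameter names are real XGBoost ones:

```python
# Settings that trade a little accuracy for speed
fast_params = {
    "max_depth": 4,         # default is 6; shallower trees predict faster
    "n_estimators": 100,    # fewer boosting rounds, fewer trees to evaluate
    "tree_method": "hist",  # histogram-based splits, typically much faster
    "n_jobs": -1,           # use every available core
}
# With xgboost installed: clf = XGBClassifier(**fast_params)
print(fast_params)
```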

Famethrowa
Oct 5, 2012

So, fair warning, this is homework related but I'm not looking for any hand-holding, just trying to understand my options and tools.

If I wanted to crack a Scytale cipher string, in the format of a single uninterrupted string of text (think: xyxyxyxyxy), what would be the best approach? Is there a function in pandas to reformat the text out of an excel file and place parts into different cells?

Ideally, once reformatted, I could then reconstruct and print the string based on a key which would instruct how to traverse the reformatted string and pluck values out of cells.

I just don't know where to start reading up on this. :shobon:

DoctorTristan
Mar 11, 2006

Do you want me to give you a hint to point you in the right direction, or solve the whole thing for you?

Assuming the former: you don’t need pandas for that. Just put the cipher text into a string and read the docs on string operations and python’s slicing syntax.

Famethrowa
Oct 5, 2012

DoctorTristan posted:

Do you want me to give you a hint to point you in the right direction, or solve the whole thing for you?

Assuming the former: you don’t need pandas for that. Just put the cipher text into a string and read the docs on string operations and python’s slicing syntax.

My code currently attempts that, but I'm having issues with traversal. I need to slice, say, every third letter, and then once I hit the end of the string, loop back to the beginning of the string to finish the count and reslice. I could perhaps just duplicate the string many times over to achieve that, but that feels clunky as hell.

My reasoning for pandas is that the manual way of decoding the cipher involves a grid, and a column/row format would serve that purpose.

e. I've been messing with while loops and am just not quite there yet.

Famethrowa fucked around with this message at 19:26 on Jan 11, 2021

Wallet
Jun 19, 2006

Famethrowa posted:

My code currently attempts that, but I'm having issues with traversal. I need to slice, say, every third letter, and then once I hit the end of the string, loop back to the beginning of the string to finish the count and reslice. I could perhaps just duplicate the string many times over to achieve that, but that feels clunky as hell.

My reasoning for pandas is that the manual way of decoding the cipher involves a grid, and a column/row format would serve that purpose.

e. I've been messing with while loops and am just not quite there yet.
If you iterate through the string and keep an index, you can use the modulo operator (%) to split the characters into however many lines you want to try as a solution.

You could also use the notation for slicing something in steps that DoctorTristan was, I assume, pointing you towards:
code:
[::3]

DoctorTristan
Mar 11, 2006


Famethrowa posted:

My code currently attempts that, but I'm having issues with traversal. I need to slice, say, every third letter, and then once I hit the end of the string, loop back to the beginning of the string to finish the count and reslice. I could perhaps just duplicate the string many times over to achieve that, but that feels clunky as hell.

My reasoning for pandas is that the manual way of decoding the cipher involves a grid, and a column/row format would serve that purpose.

e. I've been messing with while loops and am just not quite there yet.

You said you didn't want hand-holding, but you should use the slicing notation
code:
s[start:stop:step]

I'm not the one marking your work, but if I were I would deduct marks if you used pandas for this - it really isn't the right tool for the job.

I'd give a bonus point or two if you managed to solve it in one line.

QuarkJets
Sep 8, 2008

If you really want a column/row format, a 2D numpy array would be better than a pandas dataframe. But I don't think that's necessary. I feel like there may be a clever list comprehension that could give a fast, succinct solution but I'll be damned if I know how to implement it

DoctorTristan
Mar 11, 2006

You shouldn’t use numpy either.

QuarkJets posted:

I feel like there may be a clever list comprehension that could give a fast, succinct solution

Not posting it since OP wants to figure things out themselves, but there’s a one-line solution.

Wallet
Jun 19, 2006

DoctorTristan posted:

Not posting it since OP wants to figure things out themselves, but there’s a one-line solution.

I feel like whether that's relevant depends on the context of the class. If this is supposed to be a lesson about loops then solving it with a one line list comprehension isn't going to get you high marks. You definitely don't need anything outside of the standard library.

QuarkJets posted:

I feel like there may be a clever list comprehension that could give a fast, succinct solution but I'll be damned if I know how to implement it.

Using the slicing notation and iterating over a range to offset the starting character for each slice probably does it.
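Editor's note: since Famethrowa later says the goal is learning Python rather than a grade, here is that slice-and-offset idea on a toy string. The function names, and the assumption that the text length divides evenly by the key, are mine:

```python
def scytale_encrypt(plaintext, rows):
    # Read off every rows-th character, starting from each offset
    return ''.join(plaintext[i::rows] for i in range(rows))

def scytale_decrypt(ciphertext, rows):
    # When the length divides evenly, decrypting is just encrypting
    # with the transposed dimension
    cols = len(ciphertext) // rows
    return ''.join(ciphertext[i::cols] for i in range(cols))

ct = scytale_encrypt("HELLOWORLDXY", 3)
print(ct)                      # HLODEORXLWLY
print(scytale_decrypt(ct, 3))  # HELLOWORLDXY
```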


Famethrowa
Oct 5, 2012

Wallet posted:

Using the slicing notation and iterating over a range to offset the starting character for each slice probably does it.

Thank you for the feedback, everyone. I was working on some other classes so this went on the backburner. This is what I'm focusing on right now; I just need to grok how to offset in a way that will output correctly for each key/cipher length tested. Thinking perhaps I hardcode a brute-force function to test a set of possible keys rather than use an input key.

Should mention, this isn't for a programming class, my goal here is solely to solve this cipher. I could do it by hand, but I really wanted to get better at Python.

Famethrowa fucked around with this message at 16:56 on Jan 12, 2021
