QuarkJets
Sep 8, 2008

jusion posted:

Ok - I think you may be talking past each other then, because his link (http://floating-point-gui.de/basic/) is not that.

QuarkJets posted:

Is writing bad analogies required for numerical analysis, now?

You're seriously fighting for the position that says "a person new to programming has to have a solid understanding of the difficult topic of numerical analysis before writing a program that uses floats". Is that really the hill that you want to die on?

Malcolm XML posted:

well it keeps me in a job, so pragmatically no


but yes you should understand your tools before you use them, maybe read the fine manual as well

That page is "here are the basics", while Malcolm's posts say that you actually need much more than a basic understanding. I think his intention was that a beginner should read all of the pages on that site, not just the page that was linked.


QuarkJets
Sep 8, 2008

Symbolic Butt posted:

oh man, I vaguely remember hearing about this fiasco, do you have a link?

Nah that was years ago, sorry. I recall several different reddit threads about the problem appearing on "professional" exchanges though, back when everyone in the ecosystem had no more experience than "I built my gaming rig from parts I ordered on newegg"

QuarkJets
Sep 8, 2008

Malcolm XML posted:

Yes I'm going to take life advice from posting on the something awful comedy forums

(I think it's pretty clear that every post is tongue firmly jammed in cheek)

People who tell learners to ignore things that they barely understand never fail to amuse

Did anyone actually say that? I thought it was more "you need this basic level of knowledge" vs "you need this advanced level of knowledge"

QuarkJets
Sep 8, 2008

Cingulate posted:

Thanks everyone.

I think step 1 should be to make my generate_data less horribly ineffective, it's a big nested loop. Bummer - I thought this'd force me to learn something new :v:

This could be an opportunity to learn multiprocessing, if that's something you don't know already. I think your problem just calls for 2 Queues and a Process that reads requests from queue1 and places (X, y) tuples into queue2. Your main process reads (X, y) tuples from queue2 and places a limited number of requests (which can just be None or whatever you want) into queue1; the requests keep the subprocess from generating everything at once. If you don't mind all of your (X, y) tuples sitting in memory at the same time, you can skip queue1 entirely and just have your subprocess continuously pump data into queue2. There's a sketch of the two-Queue version below.

Multiprocessing sidesteps the GIL by launching multiple processes instead of multiple threads, so the best design is to generate simple data in separate processes and use more complex objects (such as the model) in your main process.
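Here's a minimal sketch of that two-Queue pattern (untested; generate_data() and train_model() are just trivial stand-ins for your own expensive code):

Python code:
import time
from multiprocessing import Process, Queue

def generate_data():
    time.sleep(0.1)             # stand-in for your slow data generation
    return [1.0, 2.0, 3.0], 0   # stand-in (X, y)

def train_model(X, y):
    time.sleep(0.1)             # stand-in for one training iteration

def producer(request_queue, data_queue):
    # build one (X, y) tuple per request; a 'stop' request ends the process
    while request_queue.get() != 'stop':
        data_queue.put(generate_data())

if __name__ == '__main__':
    request_queue, data_queue = Queue(), Queue()
    proc = Process(target=producer, args=(request_queue, data_queue))
    proc.start()

    n_iterations = 10
    request_queue.put('go')  # ask for the first (X, y) up front
    for i in range(n_iterations):
        X, y = data_queue.get()
        # request the next tuple (or tell the producer to stop) before training,
        # so data generation overlaps with the training iteration
        request_queue.put('go' if i + 1 < n_iterations else 'stop')
        train_model(X, y)
    proc.join()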

QuarkJets
Sep 8, 2008

Cingulate posted:

I know joblib and multiprocessing's pool - I use them a lot actually, because most of my problems are trivially parallelizable. This one, I fear, is not: the training happens in Theano (I think I hosed something up with my Tensorflow installation?), so it compiles CUDA code for the GPU and then it sits there. So the training always needs to happen in the main thread/kernel/session. But yes, if I can send the data generation to a secondary process, that would save me some time. So this can be done with the multiprocessing module, did I get that right?..

(Having multiple X sit in memory probably isn't viable cause they're pretty big - tens of GB - , but I guess I can make them smaller and find a few GB somewhere and it should work.)

Yes, you can do that with the multiprocessing module. In your main process you'd start by adding a request for (X, y) to an input Queue, then you'd set up a for loop that reads (X, y) from an output Queue, places a new request in the input Queue, and then runs the model. A separate Process reads from the request Queue and writes to the output Queue. That way your main process handles the model training while the other Process builds the (X, y) tuple for the next training iteration.

If you have enough memory to hold all of the (X, y) tuples simultaneously then you can do all of the above with just 1 Queue instead of 2 (you can start a Process that just fills the output Queue with as many (X, y) tuples as you want without ever checking an input Queue for requests). Or you can use an input queue and just make sure that it never has more than M requests, in case you're worried about the generation of (X, y) tuples sometimes taking longer than a model training iteration.
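A sketch of the 1-Queue version (note that multiprocessing.Queue also takes a maxsize argument, which gives you the "never more than M waiting" behavior without a second request Queue; the data here is a made-up stand-in):

Python code:
from multiprocessing import Process, Queue

def producer(data_queue, n_items):
    for _ in range(n_items):
        data_queue.put(([1.0, 2.0, 3.0], 0))  # stand-in for a real (X, y)
    data_queue.put(None)  # sentinel: tells the main process we're done

if __name__ == '__main__':
    # maxsize=2 makes the producer block once 2 tuples are waiting, so you never
    # have more than a couple of the big (X, y) arrays sitting in memory at once
    data_queue = Queue(maxsize=2)
    Process(target=producer, args=(data_queue, 10)).start()

    while True:
        item = data_queue.get()
        if item is None:
            break
        X, y = item
        # run a training iteration on (X, y) here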

QuarkJets fucked around with this message at 00:37 on Jun 16, 2017

QuarkJets
Sep 8, 2008

Mirthless posted:

I'm using classes already though maybe incorrectly?

I'm using functions for things like class/race selection, then returning the value to the dictionary when i call the function, like if you pick a Half-Elf Fighter or whatever the Fighter class gets assigned to Player['Class'] and the Half-Elf class gets assigned to Player['Race'] etc


code:
class Fighter(CClass):          
    HitDie = 10
    FighterCon = True
    HumanAge = 15 + (randint(1, 4))
    DwarfAge = 40 + (randint(5, 20))
    ElfAge = 130 + (randint(5, 30))
    GnomeAge = 60 + (randint(5, 20))
    HalfElfAge = 22 + (randint(3, 12))
    HalflingAge = 20 + (randint(3, 12))
    HalfOrcAge = 13 + (randint(1, 4))
    StatRequirements = 9,7,0,0,0,0
    WeaponProficiencies =  "Any weapon appropriate to size and race"
    PrimeReq = "Strength"
    ClassName = "Fighter"
    StartGold = randint(50, 200)
    WeaponSlots = 4
    NonWeapon = -2
    SaveThrows = [16, 17, 18, 20, 19]
    WeapProfs = ("Sword","Great Sword","Curved Sword",
                 "Axe","Hand Axe","Pole Arm","Mace","Flail","Monk",
                 "Club","Flail","Hammer","Mace",
                 "Morning Star","Staff","Dagger","Dart",
                 "Shortbow","Longbow","Crossbow","Spear",
                 "Lance","Javelin","Sling")
    ArmorProfs = ("Light", "Medium", "Heavy", "SmallShield", "BigShield")

    def __init__(self):
        pass
code:
class HalfElf(Race):
    Intelligence = (0, 4, 18)
    Dexterity = (0, 6, 18)
    Constitution = (0, 6, 18)
    
    Young = range(24, 40)
    Mature = range(41, 100)
    MiddleAged = range(101, 175)
    Old = range(176, 250)
    Venerable = range(251, 325)

    RaceName = "Half-Elf"
    Vision = "Infrared, 60'"
    PlaceHolderForSleepAndCharmBonus = 1
    ValidClass = "1234790"

    TC = (10, 0, 0, 0, 5, 0, 0, 0)
    def __init__(self):
        pass
    def ClassList(self):
        ClassChoices = (ccl.Fighter, ccl.Cleric, ccl.MagicUser, ccl.Thief, ccl.Ranger, ccl.Assassin)
        return ClassChoices
Thanks, that's heartening at least.

I knew I needed to learn and practice with classes more and this was all due for a rewrite anyway, back to the drawing board

It looks like your Fighter class defines ages for various Races. It might be better if a single Age value is defined under CClass (because all characters should have an age), and then Fighter and Race can modify it, maybe using if statements to check the race and class type and then adjusting age appropriately.
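Roughly something like this (just a sketch that reuses the base ages from your Fighter class; the structure is the point, not the numbers):

Python code:
from random import randint

class CClass:
    # every character gets an Age; base starting age per race
    BaseAge = {'Human': 15, 'Dwarf': 40, 'Elf': 130, 'Half-Elf': 22,
               'Gnome': 60, 'Halfling': 20, 'Half-Orc': 13}

    def __init__(self, race_name):
        self.Age = self.BaseAge.get(race_name, 18)

class Fighter(CClass):
    HitDie = 10

    def __init__(self, race_name):
        super().__init__(race_name)
        # class-specific adjustment, checking the race as described above
        if race_name in ('Elf', 'Half-Elf'):
            self.Age += randint(5, 30)
        else:
            self.Age += randint(1, 4)

player = Fighter('Half-Elf')
print(player.Age)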

QuarkJets
Sep 8, 2008

Look into the json and ConfigParser modules; there should be little or no modification required to the dictionary that you get from json before ConfigParser can write out the INI file that you want
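Something like this, roughly (Python 3 spelling, where the module is configparser; the filenames are made up, and the JSON is assumed to be a dict of sections, each a dict of key/value pairs):

Python code:
import json
import configparser

with open('settings.json') as f:
    data = json.load(f)      # expects {"section": {"key": "value", ...}, ...}

config = configparser.ConfigParser()
config.read_dict(data)       # accepts the dict-of-dicts directly

with open('settings.ini', 'w') as f:
    config.write(f)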

QuarkJets
Sep 8, 2008

Methanar posted:

Got it working, thanks for the suggestion.

The important bit
code:
            ini = open(self.ini_path, 'w')
            for key,val in groups.iteritems():
              ini.write('[' + key + ']')
              ini.write('\n')
              ini.write('\n'.join(val))
              ini.write('\n')
            ini.close()
I'm sure this is garbage code, but it works.

It's fine code but I'd have written it like this:

code:
with open(self.ini_path, 'w') as ini:
    for key, val in groups.iteritems():
        ini.write('[{key_name}]\n{val_names}\n'.format(
            key_name=key,
            val_names='\n'.join(val)))

QuarkJets
Sep 8, 2008

Loezi posted:

What's the easiest library to make a GUI with, given that I require hassle-free licensing (which rules out PyQt)?

Native elements are appreciated but not necessarily required.

PySide is LGPL, which is hassle-free

QuarkJets
Sep 8, 2008

No, I don't think there's a definitive answer on whether importing a GPL module is legally distinct from linking to a GPL library. I don't think it matters so much for LGPL but it's a pretty important distinction for GPL, so I understand why people developing commercial software might want to avoid GPL code

QuarkJets
Sep 8, 2008

Thermopyle posted:

Keep in mind that most answers to these type of questions will come from people who have only seriously used one or two of the options.

I've used 4 different options on pretty large applications and I'd say you'll have the easiest time with HTML/CSS/JS + some python web framework and a browser.

However, every solution has its pros and cons and its hard to say which is actually easiest when there's so many variables from your experience level to the type of application you're developing to your ability to handle context switching between paradigms.

Yeah we're all going to have our biases

My thoughts are that the HTML/CSS/JS route is abhorrent, Qt (and therefore PySide) is love, wxPython is an acceptable alternative to Qt, and tKinter is a bit ugly and has a bad API

QuarkJets
Sep 8, 2008

The Coding Horrors thread killed any mild interest I had in golang

QuarkJets
Sep 8, 2008

now that python has overtaken javascript hopefully we can enjoy the rich bounty of coding horrors associated with novices using that language

QuarkJets
Sep 8, 2008

IIRC the people who made Spyder were actively trying to mimic the Matlab IDE, as an entry point for former Matlab users who aren't already fully consumed by Stockholm syndrome

QuarkJets
Sep 8, 2008

Use github

QuarkJets
Sep 8, 2008

Boris Galerkin posted:

So update on the whole offline anaconda thing:

It's a giant pain in the rear end.

First of all, there is an "--offline" flag available when you do "conda install x" except that it doesn't actually do anything other than raise errors saying that something is trying to use the internet.

I tried copying my entire miniconda folder instead except this doesn't work either because the path miniconda is installed to is hardcoded or something, so it can't be changed.

I saw that all of the bz2 archives for all the packages are kept in minocondaroot/pkgs, so I thought I'd just copy that entire pkgs folder to the new computer and drop them in there, thinking that the conda installer would find the cached files there. Except it doesn't. No amount of "--offline" or "--use-index-cache" or "--use-local" worked and the offline flag kept raising errors like I said above.

I found some random forums post about running "conda index /path/to/local/files" so I tried to do that, but the problem is that I would have needed to sort all of my bz2 files into their respective sub folders (noarch, linux64, etc) as according to those packages metadata. I wasn't going to do this for 100+ packages so I nixed that idea.

In the end what I did was export my configured root environment to know which packages i needed to install, and then write a bash script to iterate through them with "conda install file.bz2", and then upload all the bz2 files from my internet connected computer. This of course wasn't as easy as it sounds because some of the packages need to be installed in some certain order due to them depending on things from conda-forge.

In the end the handful of packages that had to recompile locally with different option flags didn't work when copied over to the other computer and installed this way. So I'll need to either figure out why or just compile those things locally for every computer, which would be the fastest thing.

Anyway that's my complaint right now.

The shebangs at the top of all of the miniconda (or anaconda) scripts point to whatever directory you installed to. So for instance if you move the folder and then try to run Spyder, it won't launch, because it can't find Python (Spyder thinks your Python lives in a specific place, but you moved it).

You have two solutions to this dilemma:

1) Write a script that goes through all of the files and modifies all of the shebangs to whatever you want
2) Install miniconda or anaconda to whatever directory path that you want to use on your target system (so the shebangs will be set correctly once moved)

Do either of those and then move miniconda to your target system, and it will simply work.
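For option 1, a rough sketch (Python 3; the two prefix paths are made up, so point them at wherever you installed and wherever it's going to live):

Python code:
import os

old_prefix = '/home/me/miniconda3'  # where miniconda was installed (hypothetical)
new_prefix = '/opt/miniconda3'      # where it will live on the target system

bin_dir = os.path.join(old_prefix, 'bin')
for name in os.listdir(bin_dir):
    path = os.path.join(bin_dir, name)
    try:
        with open(path, 'r') as f:
            lines = f.readlines()
    except (UnicodeDecodeError, IsADirectoryError):
        continue  # skip compiled binaries and subdirectories
    if lines and lines[0].startswith('#!') and old_prefix in lines[0]:
        lines[0] = lines[0].replace(old_prefix, new_prefix)
        with open(path, 'w') as f:
            f.writelines(lines)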

QuarkJets
Sep 8, 2008

Burn the Heretic. Kill the Mutant. Purge the Unclean.

... using assertions

QuarkJets
Sep 8, 2008

It's in the Science subforum: https://forums.somethingawful.com/showthread.php?threadid=3359430

QuarkJets
Sep 8, 2008

The March Hare posted:

Been a while since I've had to do this but I need to make a GUI for a Python project. Need it to work on Python 3, absolutely needs to work on Windows (better if cross platform, but not required), absolutely need to be able to put it in the system tray, and would prefer it not make me want to die while working with it.

Any advice or am I stuck choosing between tkinter and qt or whatever?

Qt is pretty easy to use

QuarkJets
Sep 8, 2008

The March Hare posted:

I used qt a while back and I don't remember it being difficult, but I do remember the documentation for pyqt sort of sucking. I checked out the kivy docs this afternoon and it looks totally fine, probably just going to use it and see what happens. Thanks y'all~

The documentation for Qt is extensive, and pyqt is basically just a wrapper for that. Anytime that you have a question about pyqt there's a very good chance that your question is actually about qt

QuarkJets
Sep 8, 2008

Yeah, your environment probably only has 1 version of pandas installed, so your script is going to use that version. If you want a different version then you'd need to either downgrade pandas in that environment (the easier option) or create a new environment with the older pandas, and then either activate that environment whenever you want to run this script or point your script's shebang at that environment's python binary

QuarkJets
Sep 8, 2008

Do you use conda? Conda lets you downgrade packages really easily, then you can just upgrade again when you're done

QuarkJets
Sep 8, 2008

Seventh Arrow posted:

I'm trying to comprehend numpy arrays - I have an assignment and the first question is to create a random vector of size 20 and sort it in descending order. This is what I came up with:

code:
 
import numpy as np
a = np.random.random((1,20))
np.sort(-a)
I get the following result, so I think it works:

array([[-0.94139218, -0.70652483, -0.67840897, -0.67044282, -0.62539388,
-0.61770677, -0.58816414, -0.46556941, -0.44944398, -0.4487512 ,
-0.43776743, -0.41519608, -0.39534896, -0.34280607, -0.23698099,
-0.0829909 , -0.05634266, -0.05450404, -0.04979055, -0.02429839]])

I'm not sure about the parameters used for 'np,' though - in this case ((1,20)). So I think '1' means that it has one dimension, so it's a vector. The '20' seems to be the total size of the array. Then I see many arrays that have a third number, but I'm not sure what it means...the tutorials that I've seen so far refuse to stoop to my level. Can anyone elucidate?

np is the numpy module; importing it as np just gives you a shorthand way of accessing everything in numpy. The other way is to just "import numpy" and then you'd be calling numpy.random.random(). Aliasing numpy to np this way is convenient and standard practice

What you're really doing is passing an argument to the random() function of the np.random module. The argument you provided is a tuple: (1, 20). That tuple defines the shape of the output array. You could also have provided (20,) and it would still have given you a 20-element array. If you wanted a 4x10 array (40 elements), you'd give it (4, 10), i.e.:

np.random.random((4,10))
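A quick illustration of how the shape tuple works:

Python code:
import numpy as np

print(np.random.random((20,)).shape)    # (20,)   -> a 1-D vector of 20 elements
print(np.random.random((1, 20)).shape)  # (1, 20) -> 2-D, 1 row by 20 columns
print(np.random.random((4, 10)).shape)  # (4, 10) -> 4 rows of 10 columns, 40 elements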

QuarkJets fucked around with this message at 02:12 on Nov 24, 2017

QuarkJets
Sep 8, 2008

For more information on random, you can try googling for the numpy random module:

https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.random.html

You can call any of the functions defined there by invoking np.random.whatever_function_you_want() with the proper arguments. The docs will describe what the arguments are and what they're used for. For instance, you invoked numpy.random.random, which according to the documentation returns a random array of floats. But what if you want integers? That page shows there's a randint function:

https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.randint.html#numpy.random.randint

You'd call it with np.random.randint
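For example (just a quick sketch):

Python code:
import numpy as np

ints = np.random.randint(0, 10, size=(1, 20))  # 1x20 array of ints in [0, 10)
floats = np.random.random((1, 20))             # 1x20 array of floats in [0, 1)
print(ints)
print(floats)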

QuarkJets
Sep 8, 2008

Seventh Arrow posted:

Ok great, thanks! I did know about the "np" thing but thanks anyhoo :) So in the next question it says to create a 5x5 array with 6's on the borders and 0's on the inside. So I guess I would use ((5,5)) for the tuple, yes? I'll have to look into arranging the numbers in such a specific fashion though.

edit: wait, the ((5,5)) doesn't seem right

Also, is that a real quote from Duck Dunn?

random.random((5,5)) would give you a 5x5 array of random floats. You don't really want a random array of values though; you may want to use something like np.zeros or np.ones
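For instance, one way to build that 5x5 border array starting from np.zeros (just a sketch; there are other ways):

Python code:
import numpy as np

a = np.zeros((5, 5), dtype=int)  # 5x5 array of 0's
a[0, :] = 6    # top row
a[-1, :] = 6   # bottom row
a[:, 0] = 6    # left column
a[:, -1] = 6   # right column
print(a)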

QuarkJets
Sep 8, 2008

Learned today that PyCharm's debugger can display huge arrays in a separate window as a table of cells. I always thought this was something that pycharm lacked but I was just totally wrong

QuarkJets
Sep 8, 2008

creatine posted:

Does anyone have a clear and concise resource that talks about using native C code in Python?

I've got some C functions I've written that I want to interface with a Python script but everything points me to cython, which looks like a mixed language of the two

If you have actual C-style functions dealing with regular old primitive types (float, int, etc) you can just compile a shared library and load it with ctypes.

I would not recommend using Cython... for anything, really. There are better alternatives no matter what you want to do
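As a rough sketch of the ctypes route (libmylib.so and add() are made-up names here):

Python code:
# suppose mylib.c defines:  double add(double a, double b) { return a + b; }
# compiled with:            gcc -shared -fPIC -o libmylib.so mylib.c
import ctypes

lib = ctypes.CDLL('./libmylib.so')
lib.add.argtypes = [ctypes.c_double, ctypes.c_double]
lib.add.restype = ctypes.c_double

print(lib.add(1.5, 2.0))  # 3.5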

QuarkJets
Sep 8, 2008

VikingofRock posted:

Okay, so I asked a friend and he said that it's because the lambda is storing a reference to val, and so when val is updated throughout the dictionary comprehension it changes the val "pointed to" by all the lambdas. And this works:

Python code:
def over(val):
    return lambda x: x / val

letters = {'a': 1.0, 'b': 2.0, 'c': 3.0}
fns = {key: over(val) for key, val in letters.items()}
for key in fns:
    print(key, fns[key](1))
# output:
#
# a 1.0
# b 0.5
# c 0.3333333333333333
So I guess the real problem is that I don't understand variable assignment in python. Does anyone know of a good resource for understanding how that really works? In most of the languages that I'm used to, either variables are very explicitly copies or references (C, C++, Rust) or are immutable (Haskell), so the assignment model of python is pretty confusing to me.

Is there a reason that you're returning a lambda instead of just x/val?

QuarkJets
Sep 8, 2008

Cingulate posted:

What's bad about Cython? Don't a bunch of scientific packages use it a lot?

Most scientific packages lean on compiled C or Fortran. For instance, much of the linear algebra underneath Numpy is compiled Fortran (LAPACK/BLAS). It's true that there are a number of packages that use Cython (including some parts of Numpy), but that's more due to Cython having been around forever than it being the optimal choice for building extension packages today.

For compiling Python, Numba is faster and way easier to use than Cython. There's no need to write separate .pyx files that get precompiled into a huge amount of boilerplate; you simply write plain Python functions without vectorization and attach decorators to them, and boom, C-like performance with much less mess
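For example, a minimal sketch of that decorator style (assumes numba is installed):

Python code:
import numpy as np
from numba import njit

@njit
def running_sum(arr):
    # a plain Python loop; numba compiles it to machine code on the first call
    total = 0.0
    for i in range(arr.shape[0]):
        total += arr[i]
    return total

print(running_sum(np.arange(1000000, dtype=np.float64)))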

For calling small snippets of already-compiled C code from Python, ctypes is the fastest and easiest option

For compiling large amounts of basically anything to be used with Python, SWIG is the go-to option and is faster and easier to use than Cython's C-interfacing features.

QuarkJets
Sep 8, 2008

Why don't dictionaries have an extend(dict) method?

QuarkJets
Sep 8, 2008

Wallet posted:

I have very limited programming experience generally and even less experience with Python, so I'll apologize if this is a really stupid question, but I wasn't able to find much from googling:

I've got a csv file with a little over 90,000 rows that each have a key in the first column and a value in the second. I also have a list of keys that I want to retrieve the values for.

Currently, I'm using csv.reader to read the file into a dictionary and then looping through my list of keys to retrieve the value for each from the dictionary. This works, but I have a feeling that this is a really stupid/inefficient way of going about things.

The other approach that comes to mind is creating a duplicate of the list of keys that I want to retrieve values for, iterating through the rows of the file checking if that row matches any of the keys I'm after, storing the value and removing the key from my duplicate list if it does match, and continuing on until the duplicate list is empty.

Am I an idiot? Is either of these approaches appropriate? Is there a better solution?

The best approach may depend on what you want to do with the keys and values. Do you want to iterate over every key/value pair?

code:
for key, value in csv_dict.items():
    do_something(key, value)
Do you only want to access specific keys?

code:
for key in interesting_key_set:
    if key in csv_dict:
        do_something(key, csv_dict[key])
Or are you only using the dictionary to filter out duplicate keys, and really you just want to do something with all of the values?

code:
for value in csv_dict.values():
    do_something(value)

QuarkJets fucked around with this message at 06:10 on Jan 9, 2018

QuarkJets
Sep 8, 2008

ufarn posted:

Is there a canonical way to parallelize a for loop that is compatible with 2.7? I've seen stuff like multiprocessing, concurrent.futures, and all sorts of stuff, but no idea which is the preferred option.

The loop has another loop inside it, but it's over like ten elements so it's not worth optimizing that part.

There are lots of options but I've always found multiprocessing the easiest to use and most widely applicable. If you have no idea what to use and don't want to explain the problem further then maybe give that a shot
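For the simple case, something like this works on both 2.7 and 3.x (work() and the inputs are placeholders for your own loop body and data):

Python code:
from multiprocessing import Pool

def work(item):
    # stand-in for the body of your outer loop (inner loop and all)
    return item * item

if __name__ == '__main__':
    inputs = range(100)
    pool = Pool(processes=4)
    results = pool.map(work, inputs)  # runs work() over inputs in 4 processes
    pool.close()
    pool.join()
    print(results[:5])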

QuarkJets
Sep 8, 2008

Seventh Arrow posted:

Ok that does indeed make sense, thanks. But I think classes or their objects do need to be defined, right? That's what "def __init__" is for?

Actually, I think I'll just look up the python documentation and review some of this stuff since I'm not gonna get my book back any time soon.

You don't have to pre-define cl, because it's definitely being defined in your if/elif/else block. You're setting cl to a string (eg "red", "blue", etc are all instances of the string class) in those blocks of code. This effectively means that the "cl = color" line does nothing and can just be deleted. An IDE like pycharm would point this out for you as well.

If you want an instance of a color class, then you would indeed need to define a color class first. But it doesn't sound like that's what you actually want; the function you're accessing just wants a string with the name of the requested color

QuarkJets
Sep 8, 2008

Cingulate posted:

What do you want to do with the data? is it literally only sorting? Because if you want to do anything more with that, you'll probably want to use Pandas, or at least Numpy.

Once you have the data in either format, the sorting will be absolutely trivial (literally df.sort()), but it would be a bit more complicated to get the data in there due to the lone "3" in the 4th to last line.

So really, it depends a bit on what exactly you want to do.

It looks like the data is such that any line with 1 integer is just a count of the number of subsequent rows until the next single-integer line

wasey, if you wanted to use classes you could do something like this:
Python code:
# let's just make a list of all of the activities
activity_list = []
# open file
with open('my_file.txt', 'r') as fi:
    # iterate over all lines in the file
    for line in fi:
        # each line is either 1 integer or 3 integers separated by spaces
        # split the line by spaces
        line_split = line.split(' ')
        # keep only the 3-integer lines (throw away the single-integer ones)
        if len(line_split) == 3:
            new_activity = Activity(int(line_split[0]), int(line_split[1]), int(line_split[2]))
            # Maybe we put the activity in a list or something
            activity_list.append(new_activity)
I agree with what others are saying, you probably want a dataframe or something. This is just a demonstrative example of what you're currently trying to do

QuarkJets
Sep 8, 2008

vikingstrike posted:

Anybody had any issues with PyCharm skipping over breakpoints when debugging? My Google searches have failed me, and it's getting super annoying because I can't figure out how to replicate the issues.

Only when I've accidentally Run the code instead of Debugging it.

QuarkJets
Sep 8, 2008

Rocko Bonaparte posted:

Is there a good article somewhere that goes through some of the idiosyncrasies of Popen? I'm sure anybody here that has used it many times in different circumstances have found little quirks based on things like the OS, how the application handles pipes, what happens with arguments, shell=True, etc.

I'm not talking "hurr look at the subprocess/psutil documentation hurr." I'm talking about contextual, system caveats that plague using Popen and friends for spawning and monitoring other processes and their output.

I don't know of any such article, and while I'm sure one exists I will instead offer some recommendations from my own limited experience:

A) shell=True will actually launch a new shell, which you want to avoid if you can; it's almost always going to be possible to use the sequence form of Popen instead of passing in a string with shell=True, so just do that. Basically you can build your entire command with arguments and flags as a list or a tuple and then pass that in as the first argument of Popen, bada-bing bada-boom

B) the stdout and stderr variables basically work like you'd expect, and if you want to read from them from within your Python session then there's the helpful subprocess.PIPE object. If you don't set stdout or stderr to anything then they'll just do whatever your stdout / stderr normally do (e.g. print to the terminal)

C) if your Popen object goes out of scope while executing, such as when you start a long-running Popen inside of a function and then that function suddenly exits, then Python will helpfully close it for you; be sure to do something like call Popen.wait() if you actually want to keep the process running until it completes

D) I only code in *nix so who the gently caress knows what a Windows environment is going to do but the same advice is probably all still true
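To illustrate A and B together, a quick sketch (the ls command is just an example):

Python code:
import subprocess

# sequence form, no shell=True; stdout/stderr captured via PIPE
proc = subprocess.Popen(['ls', '-l', '/tmp'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()  # blocks until the process exits and reads both pipes
print(proc.returncode)
print(out.decode())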

QuarkJets
Sep 8, 2008

German Joey posted:

I'm a very long-time Python user, but most of my experience has been with the 2.x line. I've used a few versions of 3.x, but mostly kept my code to be compatible with 2.x. However! I now have a job-interview coming up at a place that uses 3.6.

I was able to pass this place's take-home coding interview just fine (or at least, fine enough to get a callback to the on-site, heh) but I'm worried about getting asked to whiteboard something related to some more recent 3.x feature that I'm less familiar with at the on-site interview. Which feature? Who knows, well not me, and that's what I'm worried about! Thus, my question: is there something I can read that has an overview of important 3.x features? Are there any important 3.x-only libraries I should know about? Thank you in advance for your advice, fellow goons.

A number of the features added to 3.x were backported to 2.7, which is good. But there are cheat sheets for the remainder.

This python.org article is a starting point:
https://docs.python.org/3/howto/pyporting.html
Especially this section:
https://docs.python.org/3/howto/pyporting.html#learn-the-differences-between-python-2-3

It links to a number of cheat sheets and other documents, and here's another:
http://sebastianraschka.com/Articles/2014_python_2_3_key_diff.html

What does your new employer do?

QuarkJets
Sep 8, 2008

Boris Galerkin posted:

About the notebooks: I guess I’m asking why would you do that? What I would do, if I really needed to do this, is just open up a new blank script in PyCharm, type in my multiplication and hit f5 or whatever the hotkey is to run the script. Like I said Jupyter Notebooks seems like a great tool but I just don’t get it. Most of the stuff I’m finding online remind me of when I didn’t get Docker: lots of people saying how great they are but nobody really “showing” how great they are.

I think if you're already a PyCharm user then the only other reason to open a notebook is if you plan to share the results (not just code) with someone else.

QuarkJets
Sep 8, 2008

What are those attributes? Strings? Floats? Other classes?


QuarkJets
Sep 8, 2008

I would use a for loop to do the thing twice.
