Work through Think Python or something
|
|
# ? Feb 23, 2021 08:36 |
|
|
|
You're right, this does seem dumb, learning with something outdated lol. Glad I hadn't gotten too far into it.
|
# ? Feb 23, 2021 17:13 |
|
I wrote a bunch of async scrapers that produce results via async generators, and I want to consume the results of the generators as soon as they're produced. Basically, I want something like itertools.chain() or asyncio.as_completed() but for async generators. Python code:
|
# ? Mar 2, 2021 01:33 |
Look at trio and the task nurseries. I've a scraper running at work that returns results as they're completed.
|
|
# ? Mar 2, 2021 13:25 |
|
What's the current standard test framework for open source projects? I'm starting to work on a project and I'm unsure whether to use tox, nose, pytest, etc.
|
# ? Mar 2, 2021 13:59 |
pytest/tox
|
|
# ? Mar 2, 2021 14:20 |
|
Bundy posted:pytest/tox I like pytest, personally, but not for any particularly good reason.
|
# ? Mar 2, 2021 14:34 |
|
echoing liking pytest, the assertions feel much more natural and I've had annoying experiences with tox but that could just be me
|
# ? Mar 2, 2021 15:50 |
|
I'm working on the Flask Tutorial where you make a little blog. I've gotten to the part where you initialize the database and I am getting an error and I have no idea what I'm missing. At the bottom of this page it says to just run the command flask init-db and move on to the next page. When I run it I get a syntax error. code:
code:
code:
code:
|
# ? Mar 2, 2021 20:10 |
|
bigperm posted:I know this might be a big ask but this is where I would traditionally give up in frustration and would appreciate if anyone could point out the tiny error or misconfiguration that might let me push through here. code:
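For anyone hitting the same thing: the tutorial's init-db command essentially feeds schema.sql to sqlite3's executescript, so a typo in the .sql file surfaces at flask init-db time as a syntax error. A standalone sketch (the schema text here is made up, not bigperm's actual file) showing where that error comes from:

```python
import sqlite3

def init_db(schema_sql: str) -> sqlite3.Connection:
    """Run a schema script roughly the way Flask's init-db does under the hood."""
    db = sqlite3.connect(":memory:")
    # a typo in schema.sql raises sqlite3.OperationalError: near "...": syntax error
    db.executescript(schema_sql)
    return db

good = "CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT UNIQUE NOT NULL);"
bad = "CREATE TALBE user (id INTEGER PRIMARY KEY);"  # note the deliberate typo
```

So if `flask init-db` spits out a syntax error, the first place to look is schema.sql itself, not the Python.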
|
# ? Mar 2, 2021 20:18 |
|
pytest fixtures are good and my friend.
|
# ? Mar 2, 2021 20:19 |
|
KICK BAMA KICK posted:From schema.sql: code:
Thank you so much.
|
# ? Mar 2, 2021 20:25 |
|
Hollow Talk posted:pytest fixtures are good and my friend. This is a good reason. The most recent thing I've been working on is an API and pytest has made it straightforward to set up fixtures to handle all of the fiddly bits so I can e.g. test refreshing an auth token like this: Python code:
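Roughly what that pattern looks like, with a hypothetical fake client and made-up token fields standing in for the real API (this is a sketch of the fixture idea, not the actual project code):

```python
import pytest

class FakeClient:
    """Stand-in for a real API client (hypothetical; swap in your own)."""
    def refresh(self, token):
        # pretend the server exchanges the refresh token for a new access token
        return {"access": "new-" + token["refresh"], "refresh": token["refresh"]}

@pytest.fixture
def auth_token():
    return {"access": "abc123", "refresh": "def456"}

@pytest.fixture
def client():
    return FakeClient()

def test_refresh_issues_new_access_token(client, auth_token):
    new = client.refresh(auth_token)
    assert new["access"] != auth_token["access"]
    assert new["refresh"] == auth_token["refresh"]
```

The fiddly setup lives in the fixtures, so each test body only states what it's actually checking.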
|
# ? Mar 2, 2021 21:18 |
|
Bundy posted:Look at trio and the task nurseries. I've a scraper running at work that returns results as they're completed. Thanks. Turns out that aiostream.stream.merge() does exactly what I wanted.
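For anyone who'd rather not add the dependency, the core of what stream.merge() does can be sketched with just asyncio: a queue, one drain task per generator, and a sentinel per exhausted source (all names here are mine, not aiostream's):

```python
import asyncio

async def merge(*agens):
    """Yield items from several async generators as soon as any produces one."""
    queue = asyncio.Queue()
    done = object()  # sentinel: one generator finished

    async def drain(agen):
        async for item in agen:
            await queue.put(item)
        await queue.put(done)

    tasks = [asyncio.create_task(drain(g)) for g in agens]
    finished = 0
    while finished < len(tasks):
        item = await queue.get()
        if item is done:
            finished += 1
        else:
            yield item

async def gen(delay, items):
    # toy stand-in for a scraper's async generator
    for i in items:
        await asyncio.sleep(delay)
        yield i

async def main():
    return [x async for x in merge(gen(0.01, "ab"), gen(0.005, "cd"))]

results = asyncio.run(main())
```

Items arrive in readiness order rather than source order, which is the whole point.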
|
# ? Mar 3, 2021 02:31 |
|
I kinda wish DocTests were more common. They really do solve two problems at once (in-depth code documentation and unit-ish testing)
duck monster fucked around with this message at 02:28 on Mar 19, 2021 |
# ? Mar 5, 2021 18:51 |
|
I have a list of triples, each containing a GroupID (str), a payload (str, can be ignored for the purposes of this discussion) and a priority (float, higher = more important). The size of the list is unknown and the order of the triples in the list is effectively random. I need to produce a new list, at most MAX_LEN in size, containing the top-MAX_LEN most important triples. They need to be ordered s.t. the triples are grouped together by GroupID, with more important items first within groups, and the groups further ordered by the maximal within-group priority. So if we assume MAX_LEN = 4, for the input code:
code:
code:
|
# ? Mar 8, 2021 13:07 |
|
I think that first line is out of place; if you immediately truncate then you've chosen a maximum number of entries in arbitrary order and then you sorted them, but the problem statement makes it sound like you want to choose a maximum number of entries from the fully sorted list. So truncating to MAX_LEN should be the last step, not the first. And are you sure that example output is right? As written, I thought that you wanted to sort by GroupID first, and then sort entries with the same GroupID by priority; but your output takes the row with the highest priority and then appends all of the rows with the same GroupID before moving on to the next highest priority, and so on. Two very different interpretations! In other words this is what I would have thought the output should be, based on how you wrote the problem: code:
|
# ? Mar 8, 2021 13:31 |
|
QuarkJets posted:I think that first line is out of place; if you immediately truncate then you've chosen a maximum number of entries in arbitrary order and then you sorted them, but the problem statement makes it sound like you want to choose a maximum number of entries from the fully sorted list. So truncating to MAX_LEN should be the last step, not the first Truncating first is the correct behaviour: the description was written by me for that post, trying to verbalize the behaviour I was showing with the examples. As another example, this code:
code:
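In code, the truncate-first behaviour described above looks something like this (function and variable names are mine, not from the original post):

```python
from operator import itemgetter

def top_grouped(triples, max_len):
    # 1) truncate first: keep only the max_len highest-priority triples
    top = sorted(triples, key=itemgetter(2), reverse=True)[:max_len]
    # 2) bucket by GroupID; items stay priority-descending within each bucket
    groups = {}
    for t in top:
        groups.setdefault(t[0], []).append(t)
    # 3) order buckets by their maximal within-group priority
    #    (first item of each bucket, since buckets are already descending)
    ordered = sorted(groups.values(), key=lambda g: g[0][2], reverse=True)
    return [t for g in ordered for t in g]
```

For example, with MAX_LEN = 4 the lowest-priority triples fall out before grouping ever happens, which is why truncating last gives a different answer.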
|
# ? Mar 8, 2021 13:42 |
|
Has anybody here gotten on to the Python machine learning train? I started diving in and was surprised. For what you hear about it, I kind of figured I'd see some more magic, but it's been kind of disappointing. I've been particularly looking at neural networks (multi-layer perceptrons) to do the classic XOR problem. With scikit-learn, the API is kind of what I would have expected at this point for consolidation. It was pretty simple. The problem was the lack of convergence. I had to find a very specific recipe on Stack Overflow to get a success rate around 96% (I'd have the whole experiment rerun 100 times). When I was hand-writing out this crap, I could count on nailing it every time. I think I should only need two neurons in the hidden layer, but the recipe that worked needed three. Either two or four-plus neurons in the hidden layer made things worse. I then looked at what this was like with pytorch and ran into 100+ LOC stuff that had to do a lot of handcrafting to even set it up. I was surprised by the amount of coding. Also, none of it actually would run with whatever version I got, and this code was all of 2 years old. So that API looks to be in a huge state of flux. Even here, I was reading posts about inconsistencies in solving for XOR. Right now I'm trying to just get a pulse on the nature of all this. It looks like there's a lot of options and magic sauce and people just kind of poke it with a stick until it appears to work--if it ever even does. I'd expect some failure in more complicated scenarios but I'm even talking about the XOR problem as understood from the 1950s and solved in the 1990s. I'm used to scientific application culture due to working with tons of electrical engineers, and I get real strong vibes of that in all of this. It smells like a lot of this code doesn't do what the people who wrote it think it does.
I guess you get away with that in AI sometimes when you can converge on a solution; your own bugs just cause it to converge slower, and you wouldn't necessarily know there's an actual problem. Edit: Right now I'm backing up and sketching an old-school multi-layer perceptron to see if I can consistently solve XOR and verify if I'm not going crazy insisting on a 100% convergence rate. I had moved on to neuroevolution when I was last looking at this stuff, and that definitely had a 100% success rate since one network in the "herd" would eventually come up with the right thing and the wrong ones would die.
|
# ? Mar 9, 2021 17:46 |
|
Most machine learning experts are treating it like an art more than a science, but there are also like a billion research papers published every year actually trying to explain why some recipes work better than others. For instance, a few years ago there were several articles explaining why dropout is a stupid technique for lazy idiots who don't know any better, but it's not like people stopped using dropout. I serve as a reviewer for journal articles and as far as I can tell the most common way to develop a successful new network is to start from one that already works well
|
# ? Mar 9, 2021 22:23 |
|
I've recently done quite a bit with ML at various levels in Python. The Scikit-Learn version is very simple, easy to use, but there is a lot buried in the many, many arguments to some methods and the defaults are very frequently going to be unhelpful to you. You might not have the same choice in optimization methods you might use otherwise and the API isn't really designed for internal-to-the-model modularity like that so it is up to whoever wrote that model to give you all the options or you need to customize the model yourself (lots of code). PyTorch is very much for creating models and methods, not really for use as a simple API for using established/implemented methods like scikit. That said, the type/size of models you want PyTorch for tend to have a lot of extra complication in setting up massive parallel training that isn't really conducive to something super-simple like the Scikit API, although stuff like pytorch-lightning gets close. I've done a lot with sequence learning using models that use the HuggingFace-style API for Transformers models implemented in either PyTorch or Tensorflow. That API is a lot closer to what you might expect for direct use although it is clearly evolving rapidly as more and more methods get churned out gradually expanding the range of stuff the coordinated API might need to do.
|
# ? Mar 9, 2021 22:51 |
|
Anyone know if weakref.finalize objects will be called when the interpreter receives signals to shut down like SIGTERM and others?
|
# ? Mar 9, 2021 22:58 |
|
I don't have much to say but I wanted to pop in and say I had to strip a 3k line excel sheet of addresses of duplicates with some weird rear end parameters and I ended up using pandas to do it. I was amazed at the power of that.
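For the simple version of that, drop_duplicates does most of the heavy lifting (the column names and data here are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "address": ["1 Main St", "1 Main St", "2 Oak Ave"],
    "city": ["Springfield", "Springfield", "Springfield"],
})
# keep the first occurrence of each (address, city) pair, drop the rest
deduped = df.drop_duplicates(subset=["address", "city"], keep="first")
```

The weirder parameters usually end up as a boolean mask you build first and then combine with the dedup.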
|
# ? Mar 9, 2021 23:00 |
|
Rocko Bonaparte posted:Has anybody here gotten on to the Python machine learning train? If you're trying to solve a practical problem, consider a decision tree.
|
# ? Mar 10, 2021 01:02 |
|
"Trap" maybe isn't the right word when machine learning experts are in massive demand and are actually solving unique problems that haven't been easily solvable by classical methods. There's definitely a lot of hype present, but it's also just a really useful tool, like learning how to use Docker or CUDA e: And I want to clarify that there are absolutely a ton of grifters that are taking advantage of the hype and trying to apply ML to everything while pretending like it's magic, but that advice is more relevant to project managers than developers QuarkJets fucked around with this message at 03:53 on Mar 10, 2021 |
# ? Mar 10, 2021 03:49 |
|
salisbury shake posted:Anyone know if weakref.finalize objects will be called when the interpreter receives signals to shut down like SIGTERM and others? Turns out they're only called upon SIGINT or during an otherwise clean shutdown.
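A quick demonstration: finalizers fire when the object is collected or at a clean interpreter exit, so the usual workaround is to install a handler that turns SIGTERM into an ordinary SystemExit (the handler wiring is an application-level assumption, not something weakref does for you):

```python
import signal
import sys
import weakref

class Resource:
    pass

events = []
res = Resource()
fin = weakref.finalize(res, events.append, "cleaned up")

# convert SIGTERM into a normal SystemExit so atexit hooks and finalizers still run
signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))

# finalizers also fire as soon as the object is collected
del res
```

Without a handler like that, SIGTERM tears the process down before any of the shutdown machinery gets a chance.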
|
# ? Mar 10, 2021 23:22 |
|
A couple of my favorite libraries implement nice _repr_html_ methods so objects print rich formatted representations in Jupyter Notebook. Whenever I've looked into their source code, there's a ton of artisanal hand-crafted html & css formatting. Anyone ever seen a library for doing that kind of thing automatically that handles composition? Like if Fart and Poop both implement _repr_html_, a Butt object that has fart and poop variables should just shove their _repr_html_s into a table or tree or something.
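I've not seen a library for it either, but a naive composing mixin is pretty short (class names borrowed from the example above; this is just a sketch, real nesting/CSS gets hairier):

```python
import html

class ComposedHTMLRepr:
    """Naive composition: render each attribute's _repr_html_ (if any) in a table."""
    def _repr_html_(self):
        rows = []
        for name, value in vars(self).items():
            if hasattr(value, "_repr_html_"):
                cell = value._repr_html_()  # delegate to the child's rich repr
            else:
                cell = html.escape(repr(value))  # fall back to plain repr
            rows.append(f"<tr><th>{html.escape(name)}</th><td>{cell}</td></tr>")
        return "<table>{}</table>".format("".join(rows))

class Fart:
    def _repr_html_(self):
        return "<b>fart</b>"

class Butt(ComposedHTMLRepr):
    def __init__(self):
        self.fart = Fart()
        self.n = 2
```

Jupyter picks up _repr_html_ automatically, so a Butt instance at the end of a cell renders as a little table with the nested fart repr inside it.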
|
# ? Mar 11, 2021 06:00 |
|
Zoracle Zed posted:A couple of my favorite libraries implement nice _repr_html_ methods so objects print rich formatted representations in Jupyter Notebook. Whenever I've looked into their source code, there's a ton of artisanal hand-crafted html & css formatting. Anyone ever seen a library for doing that kind of thing automatically that handles composition? Like if Fart and Poop both implement _repr_html_, a Butt object that has fart and poop variables should just shove their _repr_html_s into a table or tree or something. I've used Pygments for highlighting but I don't remember how much it can handle as far as layout goes. Wallet fucked around with this message at 14:13 on Mar 11, 2021 |
# ? Mar 11, 2021 14:09 |
|
QuarkJets posted:Most machine learning experts are treating it like an art more than a science but there are also like a billion research papers published every year actually trying to explain why some recipes work better than others, like for instance a few years ago there were several articles explaining why dropout is a stupid technique for lazy idiots who don't know any better but it's not like people stopped using dropout I can appreciate this in any rapidly expanding discipline where the frontier is inevitably going to be beyond what you're currently looking at from general discussions or books. I'm still perplexed by the goofiness doing basic stuff. I'm posting some screwing around I did for giggles at the end here. QuarkJets posted:I serve as a reviewer for journal articles and as far as I can tell the most common way to develop a successful new network is to start from one that already works well OnceIWasAnOstrich posted:I've recently done quite a bit with ML at various levels in Python. The Scikit-Learn version is very simple, easy to use, but there is a lot buried in the many, many arguments to some methods and the defaults are very frequently going to be unhelpful to you. I should have considered that when I was searching for the package to install and found a whole bunch of "pytorch-*" stuff. Is PyTorch basically a base framework at this point? Do I work with it through a different library? Dominoes posted:It's a hype trap. Like data science a few years back. Might be a good resume pad if you're looking for a job or VC funds. Somebody called you out on this but it doesn't mean you're wrong either. I'm doing some expeditionary stuff at work because it's politically vogue to try to hit some of our problems with some machine learning. When I heard about it, I first thought about what exactly they're trying to accomplish in even the most basic terms of inputs and outputs. Nobody really knows and that's a warning sign.
But since I was the idiot that tried to use neuroevolution for stock market stuff a decade ago, I'm wading through it myself. I suspect there are machine learning solutions to these particular problems; my general take on if it's possible is if I can model a situation and "see" a solution but it's particularly difficult to outright code the solution in a contemporary way. However, if I can code an assessment of success then I have a fitness function and "I'm halfway there" (fighting non-linearity in the model sounds like what will take the remaining 50% of effort until it expands to 99% of the effort...). Going in another direction: are you implying a decision tree would not be machine learning? Or was that just an "X instead of Y" kind of answer? I agree with the idea but I'm testing the whole ecosystem using a domain I've learned in the past. So since I'm cozy with perceptrons, I figured I'd assess the different libraries based on my previous experience with them. This is showing me how goofy this stuff is and implies that if I naively go off and use something like the decision trees that I'm going to be wading into some muck that doesn't mesh up evenly with how it's taught as theory. So I did more screwing around with scikit-learn's MLPClassifier to try to manually parameterize a perceptron to solve XOR. I did a little side project where I wrote my own network using the given weights and got a successful XOR prediction. When I extend that to the MLPClassifier, I get a different prediction. I'm not doing any kind of training here; I'm manually setting the weights and thresholds/biases/intercepts. I tend to look at neuron activation as a threshold instead of an intercept or bias so I wonder if I'm interpreting the intercept_ attribute incorrectly. The coefs_ fields really do look like regular old weights across different layers; it adjusts based on hidden_layer_sizes and the data I fit. I run the fit() method to get the initial topology and then blow it over.
No luck: code:
|
# ? Mar 11, 2021 23:33 |
|
QuarkJets posted:"Trap" maybe isn't the right word when machine learning experts are in massive demand and are actually solving unique problems that haven't been easily solvable by classical methods. There's definitely a lot of hype present, but it's also just a really useful tool, like learning how to use Docker or CUDA Rocko Bonaparte posted:Somebody called you out on this but it doesn't mean you're wrong either. I'm doing some expeditionary stuff at work because it's politically vogue to try to hit some of our problems with some machine learning. When I heard about it, I first thought about what exactly they're trying to accomplish in even the most basic terms of inputs and outputs. Nobody really knows and that's a warning sign. But since I was the idiot that tried to use neuroevolution for stock market stuff a decade ago, I'm wading through it myself. I suspect there are machine learning solutions to these particular problems; my general take on if it's possible is if I can model a situation and "see" a solution but it's particularly difficult to outright code the solution in a contemporary way. However, if I can code an assessment of success then I have a fitness function and "I'm halfway there" (fighting non-linearity in the model sounds like what will take the remaining 50% of effort until it expands to 99% of the effort...). We're on the same page. I'm optimistic ML and AI techniques will transform our world, and we're gradually getting there. In their current form, I'm not convinced ANNs, SVMs etc are good fits for many problems. There are exceptions, like image recognition. There's enough noise today that I'd guess a random mention of ML or AI (especially a press release, job description, business plan, resume etc) is full of it; I've updated my priors. You can classify a decision tree as ML if you want, or not. It's an easy-to-grasp, but powerful tool for creating complex behavior. If you'd like to get into ML, carefully consider why first.
Would this have been appealing 5 years ago? Dominoes fucked around with this message at 00:00 on Mar 12, 2021 |
# ? Mar 11, 2021 23:55 |
|
Rocko Bonaparte posted:I tend to look at neuron activation as a threshold instead of an intercept or bias so I wonder if I'm interpreting the intercept_ attribute incorrectly. The coefs_ fields really do look like regular old weights across different layers; it adjusts based on hidden_layer_sizes and the data I fit. I run the fit() method to get the initial topology and then blow it over. A multilayer perceptron works differently than the example you have. In your example, neurons output a binary 0/1 value directly. In the MLP scheme that Scikit-Learn uses, you have a non-linear activation function that maps to a specific range for each neuron. To implement the classifier, the MLPClassifier takes the output of the final layer, which I believe will be of shape [num_samples, num_classes], and uses the softmax function to normalize that output to a probability distribution over your classes for each sample. The classifier will then output the class with the highest probability. Dominoes posted:You can classify a decision tree as ML if you want, or not. It's an easy-to-grasp, but powerful tool for creating complex behavior. I'm curious, would you maybe draw a line at a non-ML decision tree being human-interpretable? I definitely agree that there is obviously a ton of hype and my personal hand-wavy boundary is that ML models are models where there is no attempt to make the model structure reflect how the modeled phenomenon actually works, and the point isn't to understand the phenomenon through the model, just to make a good [insert goal here]. Clearly decision trees in certain incarnations are ML, especially ensemble tree methods. One of the more powerful and "successful" big machine learning models isn't an ANN but is instead a complicated method of creating ensembles of decision trees. OnceIWasAnOstrich fucked around with this message at 00:09 on Mar 12, 2021 |
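To illustrate the difference: the hand-set, hard-threshold version of the network really does solve XOR, which is exactly what you won't reproduce by pasting the same weights into MLPClassifier's logistic-activation-plus-softmax pipeline. A pure-Python sketch of the step-activation network (weights and biases chosen by hand):

```python
def step(x):
    # hard threshold: fires iff the weighted sum clears the bias
    return 1 if x > 0 else 0

def xor_net(a, b):
    # hidden layer, hand-set weights/biases:
    h1 = step(a + b - 0.5)   # fires on OR
    h2 = step(a + b - 1.5)   # fires on AND
    # output: OR and not AND, i.e. XOR
    return step(h1 - h2 - 0.5)
```

Swap step() for a logistic function and these exact weights no longer land cleanly on 0/1, which is one source of the mismatch being described.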
# ? Mar 11, 2021 23:59 |
|
I don't have a reason to draw a line; categorization is a tool you can apply to a problem. Maybe you have a reason to draw a line for DTs as ML or not. In the same sense, choose a tool suitable for the problem you're working with. Maybe it's something categorized as ML. I reject choosing ML when it's the wrong tool.
|
# ? Mar 12, 2021 00:25 |
|
Dominoes posted:I don't have a reason to draw a line; categorization is a tool you can apply to a problem. Maybe you have a reason to draw a line for DTs as ML or not. Sorry, I didn't mean to make you draw a line. My point was that, personally, I would never say a type of model is or isn't ML. From my perspective ML is more of the approach or philosophy to problem solving. To me, a linear regression could be (and is) used as machine learning, but can just as easily not be. Also, Rocko, your issue with applying the MLP classifier to the XOR problem can be illustrated this way. All of the methods used for optimization of model weights are based on gradients (SGD/Adam more so than LBFGS). You can easily end up in a situation where your optimizer gets stuck in a particular region of parameter space and needs to propose much larger changes to the parameter to improve the gradient than it is capable of making. If you use SGD or Adam as your optimizer, you can visualize this: Python code:
You can see that about a quarter of these manage to converge on the correct solution, a loss value near zero. MLPClassifier uses log-loss/cross-entropy. If you leave the learning_rate_init default at 0.001, you should get a lot of warnings telling you it didn't converge with 200 iterations and a plot like this: With the default learning rate, it can't make changes to the parameters fast enough to reach the correct solution in the default maximum number of iterations. I am not sure why L-BFGS does as poorly as it does here, I get 28% properly fit, most of the models converge on similar incorrect solutions. There aren't quite enough knobs to tweak with that particular optimizer here. This is one of those issues I was referring to when I say that the defaults for the Scikit models are not always sane and definitely not always suitable. Making these stochastic optimizers converge properly and quickly by tweaking both model and optimizer hyperparameters is where much of the "art" comes into it.
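A minimal version of that experiment, with illustrative hyperparameters (the learning rate and iteration budget here are my choices, and convergence genuinely varies run to run, which is the whole point):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# the four XOR cases
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# learning_rate_init well above the 0.001 default, plus a bigger iteration budget,
# per the discussion above; random_state pins one particular initialization
clf = MLPClassifier(hidden_layer_sizes=(3,), activation="tanh", solver="adam",
                    learning_rate_init=0.1, max_iter=2000, random_state=0)
clf.fit(X, y)
```

Rerunning with different random_state values is the quick way to see the fraction of initializations that actually converge.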
|
# ? Mar 12, 2021 01:09 |
|
Hey I have a “how to do this faster” Python/pandas question. I have an 8Mx50 col data set as a DF and I want to do some multi column partial string matching/filtering. For example I need to check whether any of 20 partial strings exist in col Address1 when col City contains New York.
|
# ? Mar 13, 2021 04:45 |
|
Something like this might work ok: frame['has_match'] = frame.loc[frame.City.str.contains('New York'), 'Address'].str.contains(r'your regex string', regex=True)
|
# ? Mar 13, 2021 04:50 |
|
My take on ML is that it's just loose fitting. Classically, we use physics or basic math principles or a bunch of learned experience to create the model, but for some problems we just don't have enough information, or the interactions are too complicated to model using classical techniques. ML is great at addressing these because you don't need to fully understand all of your system fundamentals. It's a hand-wavy approach for a hand-wavy problem that steers you in the right direction. Where things will go is that we'll use our classical understanding to solve the parts we understand really well, and to limit or inform the ML, and then you let it do its thing to clean up the edges for you, producing a really exceptional result. You might even gain a performance boost because you're not necessarily solving the whole classical problem anymore, which is great. Right now one of the big challenges is how you get the "classical" information back out, with error bars. One of the problems with the ML black box is that the answers are often relative or categorized such that it's very difficult to understand the statistical confidence in that value. Yeah, it got the right answer, but what does that value mean statistically? Is that a 1 sigma or a 5 sigma result? How sure are you that the blob is a tumor? There's a big difference! To contribute: I'm a long time C++ programmer who has recently jumped into python again after a ~10 year hiatus for work reasons and I have to say...f-strings are the poo poo. I've been bitten by type shenanigans a few times, and it's going to continue to haunt me forever, but f-strings, man...drat. I didn't realize the future was already here. I hope to never touch any python2 code ever again. Phayray fucked around with this message at 06:07 on Mar 13, 2021 |
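A few of the f-string niceties for anyone else coming back from a similar hiatus (the = debug form needs Python 3.8+):

```python
value = 3.14159
total = 1234567

print(f"pi is about {value:.2f}")  # format specs inline
print(f"{value=}")                 # 3.8+ debug form: prints value=3.14159
print(f"{total:,}")                # thousands separators: 1,234,567
```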
# ? Mar 13, 2021 06:01 |
|
fstrings support in current PyCharm is great, it prepends the f and adds the closing } for you if you type { into a string.
|
# ? Mar 13, 2021 10:05 |
|
vikingstrike posted:Something like this might work ok: I was kinda drunk in the prev posting and should clarify that .str.contains is what I'm using now and is way too slow. isin would be sufficiently fast but I don't think it can find substrings. It'd work for city but not for the 20 address partial strings that I'd use in .str.contains. For example, if wanting to find things on John rd or Jane road I might use "John|Jane" in str.contains. This works obvs but is massively too slow as I wanna do this sort of operation potentially hundreds of times. Considering a SQL DB.
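Before reaching for SQL, two cheap wins worth trying: do the cheap city filter first so the regex only scans the matching subset, and precompile the 20 partials into a single alternation (column names and patterns below are placeholders):

```python
import re
import pandas as pd

patterns = ["John", "Jane"]  # stand-ins for the ~20 partial strings
rx = re.compile("|".join(map(re.escape, patterns)))

def flag(df):
    # cheap filter first: only regex-scan Address1 for rows in the matching city
    in_city = df["City"].str.contains("New York", na=False)
    hits = df.loc[in_city, "Address1"].str.contains(rx, na=False)
    # rows outside the city default to False instead of NaN
    out = pd.Series(False, index=df.index)
    out.loc[hits.index] = hits
    return out
```

If City values are exact strings, `df["City"] == "New York"` (or making the column categorical) is cheaper still than str.contains for that first step.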
|
# ? Mar 13, 2021 10:21 |
Use sqlite or something god drat pandas has become the new excel
|
|
# ? Mar 13, 2021 11:26 |
|
|
|
Wouldn't it be best to just filter by New York and then do the search on address? I would ask on steak overflow tbh, a lot of pandas obsessed weirdos on there who answer my pandas questions within ten minutes
|
# ? Mar 13, 2021 12:06 |