|
Yep. Thanks.
|
# ? Aug 20, 2015 20:22 |
|
1000101 posted:I think I'm going to go ahead with what you're suggesting. To make sure I understand you right, dump the actual json somewhere in the module path (say in a templates directory); when I need it, read it in as a dict then just set the values I need like I was setting them in a dictionary. Then return the final JSON just like before. Yeah that's what I would do. I think it'll be a little more readable and more maintainable.
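A minimal sketch of that approach, with the template inlined as a string for brevity (in the real module it would live in a file under a templates directory, as discussed; the field names here are made up):

```python
import json

# In the real app this would be read from e.g. templates/payload.json.
TEMPLATE = '{"name": null, "size": 0, "options": {"verbose": false}}'

def build_payload(name, size):
    """Load the JSON template, set the values we need like any dict,
    and return the final JSON just like before."""
    payload = json.loads(TEMPLATE)  # template becomes a plain dict
    payload["name"] = name
    payload["size"] = size
    return json.dumps(payload, sort_keys=True)
```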
|
# ? Aug 20, 2015 23:55 |
|
I'm working on a small toy app, and while I've mostly used Django, it occurred to me that this might be a great time to check out Flask (mostly API-driven, not much in the way of user-backed database stuff). In reading through their documentation, specifically in regards to Python 3, they recommend sticking to 2.x for the time being, due to some packages still needing work for 3 compatibility (Flask and Werkzeug are mentioned). I doubt this will be a big deal for the project I have in mind, but I'm just curious if this is still a thing in 2015, or if it's mostly solved and the docs just haven't been updated (they're stamped with 2013). Any experiences or ruminations are welcome!
|
# ? Aug 21, 2015 01:08 |
|
xpander posted:I'm working on a small toy app, and while I've mostly used Django, it occurred to me that this might be a great time to check out Flask (mostly API-driven, not much in the way of user-backed database stuff). In reading through their documentation, specifically in regards to Python 3, they recommend sticking to 2.x for the time being, due to some packages still needing work for 3 compatibility (Flask and Werkzeug are mentioned). I doubt this will be a big deal for the project I have in mind, but I'm just curious if this is still a thing in 2015, or if it's mostly solved and the docs just haven't been updated (they're stamped with 2013). Any experiences or ruminations are welcome!
|
# ? Aug 21, 2015 01:13 |
|
So presently an app I'm spit-polishing uses a big pile of functions to do some basic processing work; the main Flask app takes the website data, passes it through to an interface function written to unify all the other code, which then passes the data off to some functions for building the right data structures (there are three different data structures I use for different purposes) and then doing some processing, finally spitting out the result for Flask to display. I'm wondering if abstracting some of this into something more object-oriented would be fruitful, since the biggest issue is that the functions end up requiring between 4-7 arguments to do all the work and the code can be somewhat ugly when that's all floating around (the functions are as separate as they can be at this point).
There are three ways I think this could be done to clean things up: one, throw all the parameters into a class and let the functions pull what they need out. Two, throw all the data structures into a class and pass that around. Three, build an all-encompassing class and instantiate it. I don't like the third option that much, since it seems mostly pointless (everything would need to be called the exact same way, except now as an object! WOW!). The second is better, but there are three data structures I use and a single class for them seems gratuitous and dangerous. Finally, the first: I like it, but since different functions need different arguments, the only thing it would do is move the location of the ugliness. Now, there's something to be said for cleaning up the ugliness a bit, but since I don't have any real idea whether object-oriented design would be remotely useful in this case at all, I'm curious if this sounds like a good idea, or if it's just overcomplicating things.
Basically, I'm asking a more general question about designing Python applications: should I be wary of throwing around objects when it's not clear to me (an amateur) that there's a very good reason (I'm not sure slightly deuglifying code is a very good reason, but I could be wrong!)? And what are some ways to tell that there's a good reason for throwing around objects (beyond the obvious cases like representing objects and their properties)? Ghost of Reagan Past fucked around with this message at 02:17 on Aug 21, 2015 |
# ? Aug 21, 2015 02:12 |
|
Ghost of Reagan Past posted:So presently an app I'm spit-polishing uses a big pile of functions to do some basic processing work; the main Flask app takes the website data, passes it through to an interface function written to unify all the other code, which then passes the data off to some functions for building the right data structures (there are three different data structures I use for different purposes) and then doing some processing, finally spitting out the result for Flask to display. I'm wondering if abstracting some of this into something more object-oriented would be fruitful, since the biggest issue is that the functions end up requiring between 4-7 arguments to do all the work and the code can be somewhat ugly when that's all floating around (the functions are as separate as they can be at this point). If you're not sure how best to start arranging things into classes, one thing you could do is this - you said that a lot of these functions take several parameters. Look at the parameter lists of your various functions. Are there any "groups" of parameters that occur together a lot? If so, that might be a clue that those parameters form the data members of a class.
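For instance (all names here are hypothetical, not from the app being discussed), a group of parameters that keeps travelling together can become a small container class, shrinking each function's signature:

```python
class RenderContext(object):
    """Hypothetical example: three parameters that always travel together."""
    def __init__(self, source, encoding, verbose=False):
        self.source = source
        self.encoding = encoding
        self.verbose = verbose

# Before: def parse(source, encoding, verbose, extra): ...
# After: each function takes one context object plus its own specifics.
def parse(ctx, extra):
    return "%s/%s/%s" % (ctx.source, ctx.encoding, extra)

ctx = RenderContext("feed.xml", "utf-8")
```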
|
# ? Aug 21, 2015 09:09 |
|
Hammerite posted:If you're not sure how best to start arranging things into classes, one thing you could do is this - you said that a lot of these functions take several parameters. Look at the parameter lists of your various functions. Are there any "groups" of parameters that occur together a lot? If so, that might be a clue that those parameters form the data members of a class.
|
# ? Aug 21, 2015 16:32 |
|
Cingulate posted:
You can also delay your expensive computation to the last minute with a generator expression. Python code:
things, stuff = zip(*tuple_gen)
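A runnable sketch of that pattern: the generator expression does no work until `zip` consumes it, and `zip(*...)` transposes the pairs into two tuples.

```python
# Generator expression: pairs are only computed when something consumes them.
tuple_gen = ((n, n * n) for n in range(4))

# zip(*...) unpacks the pairs and transposes them into two tuples.
things, stuff = zip(*tuple_gen)
```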
|
# ? Aug 23, 2015 17:27 |
|
I realize this is a vague question but as a beginner Python hobbyist, should I be trying to learn IPython? Most of what I've read about it has been positive but I've been doing most of my coding in the default IDLE and Vim. I'm curious if I should try to add IPython to my tool set.
|
# ? Aug 24, 2015 01:56 |
|
Hughmoris posted:I realize this is a vague question but as a beginner Python hobbyist, should I be trying to learn IPython? Most of what I've read about it has been positive but I've been doing most of my coding in the default IDLE and Vim. I'm curious if I should try to add IPython to my tool set.
I typically use two different setups when I am programming Python. The first is PyCharm -- this is really an amazing IDE and has a plugin for vim movement. The other is Vim with an IPython console. I don't use all of the IPython features, but it is really handy to explore objects and access the docstrings (? and ??).
|
# ? Aug 24, 2015 02:17 |
|
Hughmoris posted:I realize this is a vague question but as a beginner Python hobbyist, should I be trying to learn IPython? Most of what I've read about it has been positive but I've been doing most of my coding in the default IDLE and Vim. I'm curious if I should try to add IPython to my tool set.
I found the ipython console to be invaluable when learning or doing much of anything in Python.
|
# ? Aug 24, 2015 09:06 |
|
The Qtconsole version of IPython is especially useful.
|
# ? Aug 24, 2015 10:14 |
|
... and if you're doing data science, you should proceed straight to the iPython Notebook.
|
# ? Aug 24, 2015 10:39 |
|
I recommend the first few videos in this Youtube series for learning Jupyter (Ipython Notebook), along with their accompanying notebooks, in the video descriptions.
|
# ? Aug 24, 2015 11:29 |
|
Cingulate posted:... and if you're doing data science, you should proceed straight to the iPython Notebook.
Jupyter replaces the iPython Notebook in the new version.
|
# ? Aug 25, 2015 08:35 |
|
Given a pandas series of length 1, I can do this: code:
For what it's worth, the series originated from a DataFrame from which I have selected one specific cell.
|
# ? Aug 25, 2015 09:01 |
|
How are you getting a series back from a single dataframe cell? Selecting a cell by index or iloc will return the value of that cell - are you doing something else?
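For reference, a small sketch of the distinction in question (sample data made up): selecting a cell with scalar row and column labels via `.loc` returns a plain value, while selecting with a list keeps a length-1 Series.

```python
import pandas as pd

df = pd.DataFrame({"price": [1.5, 2.5]}, index=["a", "b"])

scalar = df.loc["a", "price"]    # scalar labels -> the cell's value
series = df.loc[["a"], "price"]  # list of labels -> a length-1 Series
```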
|
# ? Aug 25, 2015 15:45 |
|
Viking_Helmet posted:How are you getting a series back from a single dataframe cell? Selecting a cell by index or iloc will return the value of that cell - are you doing something else?
This has come up a few times now, though. The actual code in this case has completely changed; it looks like this now and is still really bad and slow: code:
|
# ? Aug 25, 2015 16:06 |
|
When you're using a library that provides objects whose data is provided by an online bank for personal checking/savings, which of the following designs is most appealing to you?
1. Getters: The object has methods like get_foo() which do an HTTP request(s) each time you call them.
2. Lazy properties: The object has properties like obj.foo which do an HTTP request(s) the first time you access them and then always return that value.
3. Regular properties: Same as lazy properties, except they do the HTTP request(s) every time you access them.
4. Configurable caching, lazy properties: A per-object configurable time period that causes lazy properties to do HTTP requests the first time you access them, then return that value for up to X period of time, and then do an HTTP request again.
5. Something else?
--------
The HTTP requests are pretty slow. This bank's online services have always been slow as poo poo. I can pretty easily provide them all, but I usually prefer opinionated libraries. Anyway, this isn't for anything super serious, I'm mostly just experimenting with API designs. However, assume this library is written to be used far and wide.
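Option 4 could be sketched with a small descriptor like the one below. This is only an illustration, not any real bank API: the fetch is simulated, and the names (`ttl_property`, `Account`, `balance`) are all made up.

```python
import time

class ttl_property(object):
    """Lazy property that re-fetches after `ttl` seconds (option 4 above)."""
    def __init__(self, ttl):
        self.ttl = ttl

    def __call__(self, func):
        self.func = func
        self.key = "_ttl_" + func.__name__  # per-instance cache slot
        return self

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        cached = getattr(obj, self.key, None)
        if cached is None or time.time() - cached[1] > self.ttl:
            cached = (self.func(obj), time.time())  # (value, fetch time)
            setattr(obj, self.key, cached)
        return cached[0]

class Account(object):
    fetches = 0  # counts simulated HTTP requests

    @ttl_property(ttl=60)
    def balance(self):
        Account.fetches += 1  # stand-in for the slow HTTP call
        return 100.0
```

Repeated accesses within the TTL reuse the cached value, so the expensive call happens once per window.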
|
# ? Aug 26, 2015 13:29 |
|
Thermopyle posted:When you're using a library that provides objects whose data is provided by an online bank for personal checking/savings which of the following designs is most appealing to you?
Of the options you listed I think I'd like getters with configurable caching defaulting to none, but no suggestions of my own.
|
# ? Aug 26, 2015 13:43 |
|
When I write code against your library, I want it to be trivial for a maintainer to identify things like "is this operation safe for my UI thread", "when did this information come from", and "is it safe to assume that the value I got in the outer function scope is the same as the one two layers deeper". This makes me prefer that most of the data be in uninteresting (possibly immutable) data bags, and that the operations doing network activity that output the uninteresting models be distinct functions with searchable names. This also makes it less likely that I will have to do unnecessary wrapping to make my code against your model sanely testable.
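That preference might look like the sketch below: an immutable, "uninteresting" data bag plus a distinctly named fetch function (all names hypothetical, and the network call simulated).

```python
from collections import namedtuple

# Immutable, uninteresting data bag: safe to pass around and easy to test.
AccountSnapshot = namedtuple("AccountSnapshot", ["balance", "fetched_at"])

def fetch_account_snapshot(account_id):
    """Searchably named operation that (in real life) does the network
    activity, then hands back a plain value object rather than a live proxy."""
    # Simulated response; a real implementation would do the HTTP request here.
    return AccountSnapshot(balance=100.0, fetched_at="2015-08-26T16:00:00Z")
```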
|
# ? Aug 26, 2015 16:19 |
|
So this launches a window which displays the correct @ character, but the window then immediately freezes up and must be terminated. Any idea why? Windows 10, Python 2.7. code:
|
# ? Aug 26, 2015 17:43 |
|
Rime posted:So this launches a window which displays the correct @ character, but the window then immediately freezes up and must be terminated. Any idea why? Windows 10, Python 2.7.
You probably need to call libtcod.console_wait_for_keypress(True) at the end of the loop. Right now it looks like it's just drawing an @ over and over again as quickly as it can. Look at https://kooneiform.wordpress.com/2009/03/29/241/ for an example
|
# ? Aug 26, 2015 19:53 |
|
Yup, that's the ticket. Dur dur dur.
|
# ? Aug 26, 2015 19:55 |
|
I wrote a webapp that dumps out a big piece of JSON for historical queries on market data, for example Facebook (NASDAQ:FB). The predominant client app is written in R living on OpenCPU.org, and thus the data set is savagely optimized to parse as an R data frame. Today I looked at NumPy and Pandas in Python, and whilst the latter was OK-ish, the former I have no idea about. R code:
code:
Python code:
code:
Python code:
code:
My questions are:
1) Why does the data frame in Pandas look like it is transposed? I have to call df.transpose() to view it in the same form as R.
2) Is it possible, and how, to build a multi-format friendly table in NumPy without hard-coding the columns?
MrMoo fucked around with this message at 23:47 on Aug 26, 2015
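The "transposed" effect usually comes down to dict orientation: pandas treats each outer dict key as a column, so row-oriented data needs `orient="index"`. A sketch with made-up sample data:

```python
import pandas as pd

data = {"FB": {"open": 96.0, "close": 96.4},
        "AAPL": {"open": 112.0, "close": 112.9}}

# Default constructor: outer keys become columns, so the tickers
# end up as column headers and the frame looks "transposed".
by_columns = pd.DataFrame(data)

# orient="index": outer keys become the row index instead, matching
# the row-per-record layout an R data frame user expects.
by_rows = pd.DataFrame.from_dict(data, orient="index")
```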
# ? Aug 26, 2015 23:40 |
|
MrMoo posted:My questions are:
Look at the time series data. It is provided in columns, not rows. You can provide it to pandas in rows by the following: Python code:
|
# ? Aug 27, 2015 00:07 |
|
-
Dominoes fucked around with this message at 00:53 on Aug 27, 2015 |
# ? Aug 27, 2015 00:13 |
|
Lots of people posted:Learn IPython
Thanks for the advice/tips. I'll take a look at the videos posted.
|
# ? Aug 27, 2015 00:41 |
|
accipter posted:Look at the time series data. It is provided in columns, not rows. You can provide it to pandas in rows by the following:
Thanks, looks like I have the NumPy version with this: Python code:
Python code:
Python code:
MrMoo fucked around with this message at 02:16 on Aug 27, 2015 |
# ? Aug 27, 2015 01:55 |
|
If you have tables with columns of heterogeneous type, or your columns have names, or you have missing values, or really if your data is anything except a big contiguous block of floating point numbers, then you should put it in a pandas DataFrame. You can try to use numpy for things that are not big contiguous blocks of floating point numbers, and there is even kind of support for doing that (as you have found), but literally no one actually does that in real life.
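For completeness, the "kind of support" being referred to is numpy's structured arrays, which do allow named, heterogeneous columns (the fields below are made-up sample data):

```python
import numpy as np

# A structured dtype: one unicode string column and two float columns.
rows = np.array(
    [("FB", 96.0, 96.4), ("AAPL", 112.0, 112.9)],
    dtype=[("ticker", "U10"), ("open", "f8"), ("close", "f8")],
)
```

Columns are then addressed by name (`rows["open"]`), but as the post says, anything beyond this quickly becomes easier in pandas.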
|
# ? Aug 27, 2015 02:11 |
|
Pandas has more success with ISO 8601; no doubt with '15 JUN 2015' I have to fix it server side. Python code:
|
# ? Aug 27, 2015 03:09 |
|
MrMoo posted:Pandas has more success with ISO 8601, no doubt with '15 JUN 2015' I have to fix it server side.
Use pandas.to_datetime(), it's good at parsing pretty much whatever I've tried to throw at it: Python code:
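A quick sketch of that suggestion: `pandas.to_datetime` handles both the ISO form and the '15 JUN 2015' style from the post.

```python
import pandas as pd

iso = pd.to_datetime("2015-06-15")    # ISO 8601
loose = pd.to_datetime("15 JUN 2015")  # the format the server emits
```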
|
# ? Aug 27, 2015 16:55 |
|
So I just ran across the genesis of this in 2.7, and I figured I'd post something about it. If this is super well known by everyone just scroll on by, but I wasn't aware of it and it led to a careless omission in a coworker's code causing a nasty little bug. Python code:
Python code:
Python code:
Python code:
That said, do any of you know why this works this way? Is it just so you can assign defaults to positional arguments or is there another reason? Is it just me that sees implicit conversion of a positional argument to a keyword argument as kinda weird?
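The original snippets are missing, so this is a hedged guess at the shape of the gotcha: an argument with a default can be omitted silently (no error, just the default), and any argument can be passed either positionally or by keyword.

```python
def send(message, retries=3):
    return (message, retries)

# All of these are legal calls of the same function:
a = send("hi", 5)          # both positional
b = send("hi", retries=5)  # second argument passed by keyword
c = send(message="hi")     # first by keyword; retries silently defaults to 3
```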
|
# ? Aug 28, 2015 08:56 |
|
I've got an idea for a thing that involves a lot of things that I've never used before, and thought I would ask for some help. I'm working at a company that uses a CFD program that largely operates as a black box, not giving any information until the run is finished. Poking through the files that the program generates while it is running, it generates and stores a lot of data in a bunch of Notepad-readable files delimited by column, with headers in text and values for various properties of the flow. I think this is ripe for a program/script that reads each file and plots each (or a selection of) parameters as they are generated, using Matplotlib. This way we can see if the simulation is worth continuing early on instead of having to wait until the end.
1) The files are not csv files, but .in and .out. Will a module like csv happily take them anyway if they're in a clearly delimited format?
2) What else could I do to read this file?
3) I'm used to working with arrays of numbers in numpy, but these files will have a couple of header rows (column number, parameter name, units) and then a list of floats in scientific notation. Is it OK to mix text and floats in an array, or am I better off using a different way of storing these?
4) The CFD program updates these files continuously - how would I make my script/program check the files for updates? I think it's pretty simple to check the files every ten seconds or so and generate new images by wrapping the entire thing in a timed loop, but computationally, instead of reading and plotting values from (number of files) x 15 variables x 40000 iterations every ten seconds, it's probably easier to simply append the new values to the arrays (or other structure) where they are stored. Any pointers?
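On points 1 and 4: the csv module doesn't care about the file extension, only the delimiter, and incremental re-reading can be done by remembering the file offset between polls. A sketch (the space delimiter and the follower class are assumptions, since the actual file format isn't shown):

```python
import csv
import io

def read_rows(text, delimiter=" "):
    """Parse delimited text regardless of extension (.in/.out are fine)."""
    reader = csv.reader(io.StringIO(text), delimiter=delimiter,
                        skipinitialspace=True)
    return [row for row in reader if row]

class FileFollower(object):
    """Remember how far we've read, so each poll only parses new lines
    instead of re-reading 40000 iterations' worth of data."""
    def __init__(self, path):
        self.path = path
        self.offset = 0

    def new_lines(self):
        with open(self.path) as f:
            f.seek(self.offset)
            lines = f.readlines()
            self.offset = f.tell()
        return lines
```

A timed loop would then call `new_lines()` every ten seconds, parse the additions with `read_rows`, and append them to the stored arrays before re-plotting.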
|
# ? Aug 28, 2015 11:23 |
|
Zero Gravitas posted:I've got an idea for a thing that involves a lot of things that I've never used before and thought I would ask for some help.
There's also an append method.
|
# ? Aug 28, 2015 11:48 |
|
Do y'all know if PyCharm has a shortcut for unicode character entry similar to iPython's QtConsole and Jupyter? I.e., backslash + name + tab.
|
# ? Aug 28, 2015 22:59 |
|
Ghost of Reagan Past posted:I'm wondering if abstracting some of this into something more object-oriented would be fruitful, since the biggest issue is that the functions end up requiring between 4-7 arguments to do all the work and the code can be somewhat ugly when that's all floating around (the functions are as separate as they can be at this point).
If you have more than 6 you've probably missed one (Kernighan or Ritchie, probably (maybe Plauger)).
quote:There are three ways I think this could be done to clean things up: one, throw all the parameters into a class and let the functions pull what they need out. Two, throw all the data structures into a class and pass that around. Three, build an all-encompassing class, instantiate. I don't like the third option that much, since it seems mostly pointless (everything would need to be called the exact same way, except now as an object! WOW!). The second is better, but there are three data structures I use and a single class for them seems gratuitous and dangerous. Finally, the first: I like it, but since different functions need different arguments, the only thing it would do is move the location of the ugliness. Now, there's something to be said for cleaning up the ugliness a bit, but since I don't have any real idea whether object-oriented design would be remotely useful in this case at all, I'm curious if this sounds like a good idea, or if it's just overcomplicating things.
It depends how these functions compose and interact. You may find it easier to take option three: one big rear end class with the state and methods, and a simpler interface on top for what the program does, or at least a simpler API to call and interact with it. From there you may find it easier to slowly clump things together.
quote:Basically, I'm asking a more general question about designing Python applications: should I be wary of throwing around objects when it's not clear to me (an amateur) that there's a very good reason (I'm not sure slightly deuglifying code is a very good reason, but I could be wrong!)? And what are some ways to tell that there's a good reason for throwing around objects (beyond the obvious cases like representing objects and their properties)?
The reason other languages (Java, Ruby especially) use objects all the goddam time is because they have no other choice. Instead of a function do_thing we have a tyranny of ThingDoer objects with a do_thing method. Don't feel you have to use classes to get things done in Python, but there are some good reasons to use classes:
To hide a decision that you might want to change in future, or to hide a difficult mechanism (e.g. requests vs urllib2).
To spread out the moving parts, so they don't touch each other (but this requires care; it's still easy to have all the methods in different files yet touching all the same datastructures at runtime).
To pull out some feature into stand-alone code.
The code shouldn't try to be more elegant than the problem it solves. Many problems are ugly, clunky, and clumsy, and it's not unheard of for some numerical computation methods to be pages upon pages of work (see also automation scripting). Sometimes code is best left to be ugly and rudimentary over stylish with a veneer of elegance.
|
# ? Aug 29, 2015 03:11 |
|
Thermopyle posted:When you're using a library that provides objects whose data is provided by an online bank for personal checking/savings which of the following designs is most appealing to you?
A get method. If you want to implement caching, give the caller a way to clear the cache. I feel like people discover properties and then want to use them for everything. I know that's exactly what I did when I discovered them (I did the same with decorators). They're definitely neat and very useful in certain situations, but they're not a replacement for methods. In my opinion, accessing a property should not have any side effects (i.e. no web requests or cache updating) and the value returned shouldn't change unless the underlying object changes (which, I suppose, falls under the no-side-effects rule). Ideally, you should be able to get away with treating properties like attributes. If I'm using your library I don't want to be checking the docs every few minutes to make sure the attribute I'm accessing isn't actually a property that's going to go do some expensive IO under the hood. I also don't want the behavior/cost of the operation to change so significantly depending on the state of the underlying object. "Explicit is better than implicit" and all that jazz.
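That suggestion might look like the minimal sketch below: an explicit get method with a cache the caller can clear. The class and method names are invented, and the fetch is simulated rather than a real bank request.

```python
class BankAccount(object):
    def __init__(self):
        self._cache = {}

    def _fetch(self, name):
        # Stand-in for the slow HTTP request to the bank.
        return {"balance": 100.0}[name]

    def get_balance(self):
        """Explicit method: the caller knows this may do IO."""
        if "balance" not in self._cache:
            self._cache["balance"] = self._fetch("balance")
        return self._cache["balance"]

    def clear_cache(self):
        """Give the caller control over staleness."""
        self._cache.clear()
```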
|
# ? Aug 29, 2015 04:20 |
|
keanu posted:A get method. If you want to implement caching, give the caller a way to clear the cache.
Yep. The person I'm working with keeps wanting to make everything a caching, lazy property like I described. What we compromised on is a base API which implements get_* methods, and then some helper classes that allow end users to make their own caching, lazy properties if they want to use them that way.
|
# ? Aug 29, 2015 18:08 |
|
Stringent posted:
You have a misunderstanding of function arguments. There is nothing at all about a function definition that makes an argument positional or keyword. There are only arguments, some or all of which may also have default values. The positional/keyword distinction is solely, only, entirely determined by how you pass arguments when you call the function: code:
code:
Edit: FWIW I think this is probably a common confusion, given that the spelling of default arguments in function definitions is basically the same as the spelling of passing keyword arguments in function invocations. But they are two separate, different things. BigRedDot fucked around with this message at 22:16 on Aug 29, 2015 |
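A sketch of the distinction being described: the same parameter can receive its value positionally or by keyword, decided entirely at the call site. (As a caveat not covered in the post: Python 3 did later add keyword-only and positional-only markers, `*` and `/`, which constrain this at the definition.)

```python
def area(width, height=1):
    # height has a default value, but nothing here makes either
    # parameter "positional" or "keyword" - the call site decides.
    return width * height

p = area(3, 4)               # both passed positionally
k = area(width=3, height=4)  # both passed by keyword
m = area(3, height=4)        # mixed
```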
# ? Aug 29, 2015 22:03 |