|
Wait, what are you saying?
|
# ? Jul 23, 2012 23:21 |
|
|
# ? May 9, 2024 14:24 |
|
the posted:Let me just verify I'm doing this correctly. My instructions are: For starters, since this isn't strictly a programming question you may want to post it in the Scientific/Math Computing thread, as you're likely to get better answers there. Your error values are not being taken into account because you are fitting the line using only the time and velocity values. The only place the error values factor in at all is when they get drawn as error bars. EDIT: ^^^ He was saying that if you post code, use the python syntax highlighter. EDIT2: You may want to look into the "w" input parameter to numpy.polyfit if you want to weight based on certainty. You would need to use something like 1/sigma as the weight so that more certain points count more. Modern Pragmatist fucked around with this message at 00:38 on Jul 24, 2012 |
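For concreteness, a weighted polyfit might look like this; the time/velocity data and the sigma values here are made up for illustration:

```python
import numpy as np

# Made-up time/velocity data with per-point uncertainties.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v = np.array([0.1, 2.1, 3.9, 6.2, 7.9])
sigma = np.array([0.5, 0.2, 0.2, 0.2, 0.5])

# Unweighted fit: the error bars play no part at all.
coeffs_plain = np.polyfit(t, v, 1)

# Weighted fit: polyfit multiplies w into the residuals, so for
# Gaussian uncertainties the usual choice is w = 1/sigma.
coeffs_weighted = np.polyfit(t, v, 1, w=1.0 / sigma)
```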
# ? Jul 24, 2012 00:33 |
|
the posted:Is this doing the right thing? I shared it with a classmate who said, "What you did was graph the polyfit line with error bars on the graph. the line will be different when the error bars are taken into account." But it looks like that IS what I'm doing, or am I wrong? I don't know how numpy.polyfit works, but reading through the doc entry, it doesn't take into account your error bars, so your friend is right. The point being that you *must* take into account the accuracy of each measurement when fitting any data. I can bore the whole thread with a lesson in basic statistics, but I think that discussion may be best elsewhere. If you don't understand why you need the error bars, PM me and I'll be happy to explain why they are essential. I'd use: scipy.optimize.curve_fit http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html It will by default assume all error bars are equal, but you can pass an array of the error bars to it with the "sigma" argument. Here's a snippet. It's been a little while since I played with curve_fit, so you may need to play with it a bit, but here: Python code:
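A sketch of what the snippet might have looked like, with a made-up linear model and made-up data; sigma is the array of error bars:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical linear model and made-up data; curve_fit accepts any
# callable whose first argument is the independent variable.
def model(t, a, b):
    return a * t + b

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v = np.array([0.1, 2.1, 3.9, 6.2, 7.9])
sigma = np.array([0.5, 0.2, 0.2, 0.2, 0.5])  # per-point error bars

# sigma weights each point by its uncertainty; popt holds the best-fit
# parameters and pcov their covariance matrix.
popt, pcov = curve_fit(model, t, v, sigma=sigma)
```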
The covariance matrix is awesome, because it actually gives you an idea of how dependent your parameters are on each other. This is especially important for data that doesn't behave well (aka real-life data). JetsGuy fucked around with this message at 17:52 on Jul 24, 2012 |
# ? Jul 24, 2012 17:48 |
|
I've got a problem that's probably fairly simple to solve. I have two numpy arrays, call them x1 and y1. I want to search the y1 array for any 0 values and then delete that value out of the y1 array as well as the corresponding value at the same index of the x1 array. Here's how I was trying to do it: Python code:
|
# ? Jul 26, 2012 03:44 |
|
Adam Bowen posted:I've got a problem that's probably fairly simple to solve. I have two numpy arrays, call them x1 and y1. I want to search the y1 array for any 0 values and then delete that value out of the y1 array as well as the corresponding value at the same index of the x1 array. You are modifying the array while you are iterating over it. I would recommend something like this: Python code:
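One common way to do this without mutating the arrays mid-iteration is a boolean mask applied to both arrays (made-up data here); indexing both with the same mask keeps them paired up:

```python
import numpy as np

x1 = np.array([0.0, 1.0, 2.0, 3.0])
y1 = np.array([5.0, 0.0, 7.0, 0.0])

# Build a boolean mask of the entries to keep, then apply it to both
# arrays so the pairing between x1 and y1 is preserved.
keep = y1 != 0
x1, y1 = x1[keep], y1[keep]

print(x1)   # [0. 2.]
print(y1)   # [5. 7.]
```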
accipter fucked around with this message at 04:29 on Jul 26, 2012 |
# ? Jul 26, 2012 03:50 |
|
Perfect! Thanks.
|
# ? Jul 26, 2012 04:15 |
|
I'm doing a PDE problem where I have to update a time-step, and then use that to update a position-step. I know this may be a bit mathy, but I'm getting stuck in the programming portion of it. In this, y is an array of two dimensions, the first is the position and the second is the time (since it's a partial differential equation I'm dealing with). It's for calculating the velocity over time of a wave (a plucked string, for example) between points xa and xb. The plot should look like a Gaussian function when I'm done. Python code:
the fucked around with this message at 21:59 on Jul 26, 2012 |
# ? Jul 26, 2012 21:55 |
|
the posted:I'm doing a PDE problem where I have to update a time-step, and then use that to update a position-step. I know this may be a bit mathy, but I'm getting stuck in the programming portion of it. As I mentioned last time you posted, post anything that is more mathematical or scientific in nature in the Scientific/Math(s) Computing Megathread. You will get better responses there.
|
# ? Jul 27, 2012 18:30 |
|
the posted:
Any reason you don't want to do it using a "for" loop? You will need to define the size of your arrays, and then it will be something like: Python code:
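A sketch of the nested-loop scheme being suggested, using the standard explicit finite-difference update for the 1D wave equation; the grid sizes, Courant factor, and Gaussian pluck are made-up stand-ins, since only the approach is described above:

```python
import numpy as np

# Made-up grid: nx position steps, nt time steps.
nx, nt = 101, 200
C2 = 0.5                     # (c*dt/dx)**2, must be <= 1 for stability
y = np.zeros((nx, nt))       # y[position, time]

# Initial pluck: a Gaussian bump, at rest (first two time levels equal).
x = np.linspace(0.0, 1.0, nx)
y[:, 0] = np.exp(-((x - 0.5) ** 2) / 0.01)
y[:, 1] = y[:, 0]

# Explicit update: each new time level computed from the two before it,
# with the string's endpoints held fixed at zero.
for j in range(1, nt - 1):          # time index
    for i in range(1, nx - 1):      # position index
        y[i, j + 1] = (2.0 * (1.0 - C2) * y[i, j]
                       + C2 * (y[i + 1, j] + y[i - 1, j])
                       - y[i, j - 1])
```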
|
# ? Jul 27, 2012 22:00 |
|
This is what I ended up doing to get it to work:Python code:
|
# ? Jul 28, 2012 02:08 |
|
I have a quick question regarding user input. I have fashioned a nice little graphing program, but want to add LaTeX capabilities to it. The old way this grapher worked was to simply ask for user input and pass that to the axis label argument. Typically, in order to use LaTeX arguments, you must tell python that you want a string literal. For example: ax.set_xlabel(r"X$_{axis}$") However, since I want the string to be input by the user, I'm trying to figure out how to pass the input to the code. Obviously, the following won't work: ax.set_xlabel(r+user_input) Or otherwise. Essentially, I'm trying to figure out how to make python read a variable, which is a string, as a string literal. EDIT: The answer was simple. I had converted the variable passed by the user to a string via num.str(). I instead take that part out and just set the labels I need to the straight-up user input. This works. JetsGuy fucked around with this message at 21:51 on Jul 29, 2012 |
# ? Jul 29, 2012 21:39 |
|
JetsGuy posted:Typically, in order to use latex arguments, you must tell python that you want a string literal. For example: You're a bit confused about what a string literal is. String literals are a purely syntactic thing; they're not relevant during execution. The 'r' prefix makes the literal "raw", that is, it makes Python ignore escape sequences (\n and friends) inside it. It doesn't make the string special in any way: r"foo" and "foo" are exactly the same when the program is running. Runtime strings don't need this, because the escape-sequence magic happens during the compilation phase, so you should just do ax.set_xlabel(user_input).
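To see this concretely:

```python
# A raw literal only changes how the source text is parsed; the object
# you get back is an ordinary str.
a = r"X$_{axis}$"
b = "X$_{axis}$"
print(a == b)      # True

# The difference only shows up when the literal contains backslashes.
print(len("\n"))   # 1: an escape sequence, one newline character
print(len(r"\n"))  # 2: a backslash and the letter n
```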
|
# ? Jul 29, 2012 21:55 |
|
PiotrLegnica posted:You're bit confused about what string literal is. Thanks for the explanation!
|
# ? Jul 29, 2012 23:34 |
|
Aaaaaand apple continues its tradition of wiping users' python directories instead of migrating them during operating system upgrades. Hey apple, gently caress you. At least loving back that poo poo up you stupid cunts. e: Oh hey, mercurial has had its internal organs eviscerated by it too. e2: quote:bash-3.2# ./manage.py duck monster fucked around with this message at 01:37 on Jul 30, 2012 |
# ? Jul 30, 2012 01:31 |
|
Well that sucks. Guess you'll have to do a `pip freeze > pymodules.txt` before updates in the future. That sounds really weird, though. I mean, what's the point? Is it because they want to update your Python version by force? There has to be some reason behind it.
|
# ? Jul 30, 2012 01:49 |
|
gurgling blood.... pip install django South django-logdb and a few things Then ./manage syncdb code:
|
# ? Jul 30, 2012 01:56 |
|
Never use the system Python (and always use a virtualenv)
|
# ? Jul 30, 2012 14:31 |
|
Macports has been consistently OK for me, and hasn't had anything wiped after an OS update.
|
# ? Jul 30, 2012 15:25 |
|
duck monster posted:Hey apple, gently caress you. At least loving back that poo poo up you stupid cunts. I use apple because I really have to for work, but I gotta say, Apple really loves assuming its users don't know what the gently caress they're doing. Not that Win7 is any better.
|
# ? Jul 30, 2012 16:09 |
|
On the one hand this is awful, but it's also worth noting that dealing with system python on OS X has been historically awful, and using something like Macports has long been considered best practices just to avoid having to deal with this nonsense.
|
# ? Jul 30, 2012 16:18 |
|
Hubis posted:On the one hand this is awful, but it's also worth noting that dealing with system python on OS X has been historically awful, and using something like Macports has long been considered best practices just to avoid having to deal with this nonsense.
|
# ? Jul 30, 2012 16:27 |
|
On OS X, package management is kind of a bitch and using the system's default python installation can cause you trouble. Most people recommend using MacPorts or Homebrew to get a more unix-like vibe for managing your python stuff. It keeps things more portable during upgrades, and you don't have to worry about Apple steamrolling your installations. MacPorts and Homebrew both have their pluses and minuses, but that's a topic for another time.
|
# ? Jul 30, 2012 16:44 |
|
I use the system python for most things, but in a virtualenv. Upgraded to ML, reinstalled distribute, virtualenv, virtualenvwrapper, and everything's worked fine.
|
# ? Jul 30, 2012 17:02 |
|
ufarn posted:Can you expound on this? I am switching from a Windows/Ubuntu laptop to a MacBook, and have no fond memories of the pursuit of what turned out to be ActivePython in order to get anything Python to work. Python on Windows is pretty easy nowadays. Just download it, install it ... and it works. Even better, after you install Python, install virtualenv, which just works. (Granted, you'll run into issues with some packages and getting binary versions of stuff, but even then it's not terrible.) (PS Everyone should use virtualenv.)
|
# ? Jul 30, 2012 18:28 |
|
Thermopyle posted:Python on Windows is pretty easy nowadays.
|
# ? Jul 30, 2012 18:47 |
|
I typically use pythons under /Library/Frameworks, but I set up a .pydistutils.cfg to install python packages under my home directory. So no sudo required and it keeps everything else pristine.
|
# ? Jul 30, 2012 19:41 |
|
Just because the macports issue has come up again, I'd just like to say that macports sucks balls. I just download whatever packages I need, and install them manually. It's far less maddening than finding out package Z doesn't work because program Y needed an updated version of compiler X but didn't use the SPECIAL MACPORTS APPROVED VERSION and therefore the whole system crashes like Pauly Shore's career.
|
# ? Jul 30, 2012 19:55 |
|
ufarn posted:I consistently run into problem with the PATH env. variable without ActivePython. I know that everyone doesn't run into that problem, but I always do for some reason. If you want to have multiple Python versions on Windows, it's best to use the new launcher.
|
# ? Jul 30, 2012 22:48 |
|
JetsGuy posted:Just because the macports issue has come up again, I'd just like to say that macports sucks balls. I just download whatever packages I need, and install them manually. It's far less maddening than finding out package Z doesn't work because program Y needed an updated version of compiler X but didn't use the SPECIAL MACPORTS APPROVED VERSION and therefore the whole system crashes like Pauly Shore's career. There's also Fink and Homebrew!
|
# ? Jul 30, 2012 22:55 |
|
Ridgely_Fan posted:There's also Fink and Homebrew! Fink has all the same problems, and will escalate them exponentially if you foolishly try to use both. Homebrew is great if it actually has recipes for what you want.
|
# ? Jul 30, 2012 23:05 |
|
Ridgely_Fan posted:There's also Fink and Homebrew! I've been very happy with Homebrew's python, though you have to read their wiki page before using it to get access to easy_install. I don't touch the system python.
|
# ? Jul 30, 2012 23:05 |
|
duck monster posted:Aaaaaand apple continues its tradition of wiping users python directorys instead of migrating them during operating system upgrades. They keep hurting you - why do you keep coming back?
|
# ? Jul 31, 2012 15:18 |
|
So I've got a design question. I'm converting some code over to Python 3 and I've obviously run into the string/bytestring stuff. Basically, I have a container object that receives a list of mixed values. Some are numbers, others are bytestrings, and some are already decoded strings (str). I want the user to be able to specify a new encoding to basically re-decode the strings within the list. I'm not sure of the best way to design this. I guess the primary options that I've thought of are: 1) Provide Element with a decode() method and an _encoding property to store the current encoding used. The issue with this is that it seems excessive since only ~20% of the items are actually going to be strings. 2) Provide Element with a decode() method and a _bytestring property to store the original bytestring. This seems clunky and has the same problems as #1. 3) Basically do everything at the Container level, shown below. This really seems clunky and not very robust if I ever have different Element subclasses later on. Python code:
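A minimal sketch of option 3, with everything handled at the Container level; Element and the attribute names are hypothetical, since only the approach (not the snippet itself) is described above:

```python
# Hypothetical Element: keeps the decoded value plus the original
# bytestring, if the value came from one.
class Element:
    def __init__(self, value, raw=None):
        self.value = value   # decoded value (str, int, ...)
        self.raw = raw       # original bytestring, if there was one

class Container:
    def __init__(self, elements):
        self.elements = elements

    def decode(self, encoding):
        # Re-decode only the elements that came from bytestrings.
        for el in self.elements:
            if el.raw is not None:
                el.value = el.raw.decode(encoding)

c = Container([Element(42), Element('?', raw=b'\xb8')])
c.decode('iso8859-5')
print(c.elements[1].value)   # И
```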
Modern Pragmatist fucked around with this message at 18:47 on Jul 31, 2012 |
# ? Jul 31, 2012 16:04 |
|
Modern Pragmatist posted:I want the user to be able to specify a new encoding to basically re-decode the strings within the list. Er, decoded strings are Unicode. Encoding and then decoding them again doesn't give you any benefit, only a chance of data loss. At best it's a costly identity operation, and you get the input back. You should avoid having encoded strings for any reason other than storage (or sending across the network or whatever). If you do any processing, you should be operating on Unicode objects.
|
# ? Jul 31, 2012 17:57 |
|
PiotrLegnica posted:Er, decoded strings are Unicode. Encoding and then decoding them again doesn't give you any benefit, only a chance of data loss. At best it's costly identity operation, and you get the input back. One example case is if a person's name is entered into the container. By default, it will come into the program as a string in some default encoding. I want to have it so you can specify an alternate encoding to convert it into something readable by the user. To do that, you have to convert from default_encoding back into a bytestring and then decode it into the new one. EDIT: Wait. Here's what I want to do: Python code:
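Presumably the intent is a round trip along these lines; the byte values are made up, and 'iso8859-5' is Python's spelling of the ISO-IR 144 Cyrillic set:

```python
# Text that was decoded with a default codec gets pushed back to bytes
# and decoded again with the encoding the file actually declared.
raw = b'\xb8\xb2\xb0\xbd'                        # bytes as read from the file
s = raw.decode('latin-1')                        # default decode (wrong codec)
fixed = s.encode('latin-1').decode('iso8859-5')  # re-decode with the real one
print(fixed)   # ИВАН
```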
Modern Pragmatist fucked around with this message at 19:02 on Jul 31, 2012 |
# ? Jul 31, 2012 18:26 |
|
That's not correct at all. If you have a str, you already have Unicode code points somehow. All you have to do to get it into bytes in a given encoding is to use encode.
|
# ? Jul 31, 2012 19:00 |
|
Suspicious Dish posted:That's not correct at all. If you have a str, you already have Unicode codepoints somehow. All you have to do to get it into bytes in an encoding encoding is to use encode. I just edited my last post to try to be clearer. So I'm not confused on the terminology: Unicode == str? I'm given a string (str) 'foo' and I want to convert to a different encoding that yields 'baa'. To do this, don't I have to use: Python code:
I also could just be completely turned around. I'm not sure why but these strings in python 3 are getting the better of me. Modern Pragmatist fucked around with this message at 19:12 on Jul 31, 2012 |
# ? Jul 31, 2012 19:07 |
|
Modern Pragmatist posted:EDIT: Wait. Here's what I want to do: I know this is what you want to do. Scratch what I said earlier about the identity operation, though; nothing good can come out of doing this. Modern Pragmatist posted:One example case is if a person's name is entered into the container. By default, it will come into the program as a string in some default encoding. I want to have it so you can specify an alternate encoding to convert it into something readable by the user. To do that, you have to convert from default_encoding back into a bytestring and then decode it into the new one. There are two sources of strings, generally: your program and the outside world. In the case of your program, managing them is fairly simple: encode your files with something the Python implementation can recognise and use Unicode literals; you will work with Unicode objects from the start. In the case of the outside world, things get complicated. You will get your strings as pre-encoded bytestrings, and to get Unicode objects out of them, you will have to decode them first. Never decode a bytestring with a different codec than the one used to create it (except for stuff like decoding 7-bit ASCII text as UTF-8, because UTF-8 is designed to allow this; you generally still need to be sure about the source encoding to be able to decode successfully, otherwise you risk data corruption/loss). The only time you should be encoding the Unicode objects to bytestrings is to store them back into the outside world (files, network, rendering; note that some of those things have APIs that encode transparently). So, tell us what you are trying to solve with this (I don't really get what you mean by "converting to something readable by the user"), because your solution is not really correct. If you want to do encoding conversion, then the sequence is bytestring -> Unicode -> bytestring.
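That sequence, sketched end to end with made-up data: decode once at the input boundary, process as str, encode only on the way back out.

```python
raw = 'café'.encode('utf-8')   # pretend these bytes came from a file
text = raw.decode('utf-8')     # outside world -> Unicode, exactly once
text = text.upper()            # all processing happens on str
out = text.encode('utf-8')     # Unicode -> outside world, exactly once
```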
|
# ? Jul 31, 2012 19:25 |
|
Your Unicode string has a sequence of code points. The string "foo" contains three code points: U+0066 LATIN SMALL LETTER F U+006F LATIN SMALL LETTER O U+006F LATIN SMALL LETTER O If I understand you correctly, you want to translate that sequence of code points into another sequence of code points: U+0062 LATIN SMALL LETTER B U+0061 LATIN SMALL LETTER A U+0061 LATIN SMALL LETTER A This has nothing to do with bytes, or encodings. What is the mapping that you want?
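In Python 3, a codepoint-to-codepoint mapping like that is what str.translate does; this table just mirrors the foo-to-baa example above:

```python
# Map 'f' to 'b' and 'o' to 'a', purely at the code-point level.
table = str.maketrans({'f': 'b', 'o': 'a'})
print('foo'.translate(table))   # baa
```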
|
# ? Jul 31, 2012 19:25 |
|
|
|
So my data is coming from DICOM (medical imaging) files. Basically it's a very large header with all sorts of patient information plus the image data itself. One of the fields that can be defined in the header is the specific encoding that should be used to decode the text values in the header. The problem is that the file-defined encoding is one of the last things we read in. The current way this is handled is that all the header fields are read in, and anything that is text is decoded from the file's bytestring to a unicode string using iso8859. Then, if the user specifies that they want to decode all of the strings, we look to see whether a specific encoding was defined; otherwise we just stick with iso8859. I guess one way around it would be to leave everything as bytestrings until we've read in all the values, and then convert them using the specific encoding if it's provided and iso8859 otherwise. Suspicious Dish posted:This has nothing to do with bytes, or encodings. What is the mapping that you want? The mapping that I want is to go from something like iso8859 to iso_ir_144.
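A sketch of that "leave it as a bytestring until the end" idea; the field names and the helper are hypothetical, not any real DICOM library's API:

```python
# Keep text fields as bytes while reading the header, then decode
# everything with the file-declared encoding, falling back to iso8859.
def decode_header(fields, declared_encoding=None):
    encoding = declared_encoding or 'iso8859-1'
    return {
        key: value.decode(encoding) if isinstance(value, bytes) else value
        for key, value in fields.items()
    }

header = {'PatientName': b'\xb8\xb2\xb0\xbd', 'Rows': 512}
decoded = decode_header(header, declared_encoding='iso8859-5')
print(decoded['PatientName'])   # ИВАН
```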
|
# ? Jul 31, 2012 19:35 |