Dominoes
Sep 20, 2007

Eela6 posted:

I agree. Differential equations are prone to all sorts of numerical analysis problems. Don't roll your own solutions - use a known stable algorithm.
Not related to this, but the state of ODE solvers is not very good: if you need anything other than a basic solver (like those offered by scipy.integrate), you will have to roll your own or use poorly documented libs (like scikits.odes, and another that escapes me). I had to roll my own RK4 because I needed to interrupt the solver when certain conditions were met, and had multiple objects. (I think you can do either, but not both, with scipy.) There are alternatives that claim to do both, but I couldn't get them working, and they have awful APIs.
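For the curious, the interruptible fixed-step RK4 described above can be sketched in a few lines; the ODE and the stop condition here are made up purely for illustration:

```python
import numpy as np

def rk4_until(f, y0, t0, t_max, h, stop):
    """Fixed-step RK4 for dy/dt = f(t, y) that halts early once
    stop(t, y) is True -- the kind of interruption the stock scipy
    solvers of the time didn't support."""
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y.copy()]
    while t < t_max:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
        ts.append(t)
        ys.append(y.copy())
        if stop(t, y):
            break
    return np.array(ts), np.array(ys)

# Toy example: exponential decay, interrupted once y drops below 0.5
ts, ys = rk4_until(lambda t, y: -y, 1.0, 0.0, 10.0, 0.01,
                   stop=lambda t, y: y < 0.5)
```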

Dominoes fucked around with this message at 19:41 on Jun 17, 2017


Eela6
May 25, 2007
Shredded Hen
Really? I had no idea.

(I rolled my own back when I was in academia, but that was part of my numerical analysis course work. I figured there had to be good libraries out there.)

Loving Africa Chaps
Dec 3, 2007


We had not left it yet, but when I would wake in the night, I would lie, listening, homesick for it already.

So I'll type out a detailed reply and link to the repository tomorrow, because I've just had a ton of ribs and wine. But I'm not trying to solve the differential equation: I'm trying to find a solution to a separate set of equations, related to height, weight, gender, etc., that generate a set of parameters that feed into the differential equations.

SurgicalOntologist
Jun 17, 2004

The function you posted, which you determined by profiling is slowing your code down, is an implementation of the Euler method (the whole 1-second steps thing) for solving differential equations. So maybe you're also doing other things, but you absolutely are solving differential equations.

And by the way, your bigger problem is interesting to me. I'm writing my dissertation on estimating parameters of differential equations from observed timeseries (if this makes sense: it's a hierarchical method in which the diff eq parameters are regression DVs, allowing one to test hypotheses like "the experimental manipulation will significantly affect parameter x"). Anyways, it sounds like what you're doing would fit into my framework. It's not ready, so I'm not going to suggest you use it, but your problem sounds like something I could include in my intro or conclusion when I list the applications in various domains. When you get a chance, could you share, in general terms, the problem you're trying to solve and the approach you're taking? I'll probably have follow-up questions, so maybe a PM would be more appropriate so we don't clog up the thread.

Hughmoris
Apr 21, 2007
Let's go to the abyss!
I've created a simple python script that checks an RSS movie feed and performs an IMDB lookup if it finds new entries. I'd like to run this script every 15 minutes. Is it better practice to keep the script running in a loop and have it sleep for 15 minutes, or to use Windows Task Scheduler to launch it every 15 minutes? Or does it not really matter?

breaks
May 12, 2001

Use the task scheduler unless you have a good reason not to. Running it in a loop will work until your computer reboots, or it throws an exception, or some other problem in that vein; by the time you find and fix all of those, all you get for the extra work and inconvenience is probably a worse task scheduler.

Hughmoris
Apr 21, 2007
Let's go to the abyss!

breaks posted:

Use the task scheduler unless you have a good reason not to. Running it in a loop will work until your computer reboots, or it throws an exception, or some other problem in that vein; by the time you find and fix all of those, all you get for the extra work and inconvenience is probably a worse task scheduler.

Ok, I'll give task scheduler a shot. Thanks.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

breaks posted:

Use the task scheduler unless you have a good reason not to. Running it in a loop will work until your computer reboots, or it throws an exception, or some other problem in that vein; by the time you find and fix all of those, all you get for the extra work and inconvenience is probably a worse task scheduler.

Good news: python has one built in: import sched
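A minimal sched loop for the RSS use case might look like the sketch below. Note it still lives inside one long-running process, so the caveats above about reboots and exceptions apply; check_feed is a made-up stand-in for the real RSS/IMDB work:

```python
import sched
import time

INTERVAL = 15 * 60  # seconds between checks

calls = []

def check_feed():
    # Stand-in for the real work: fetch the RSS feed, look up any
    # new entries on IMDB, etc.
    calls.append(time.time())

def run_periodically(scheduler, interval, job):
    job()
    # Re-arm the timer after each run, like a crude task scheduler
    scheduler.enter(interval, 1, run_periodically,
                    (scheduler, interval, job))

s = sched.scheduler(time.time, time.sleep)
run_periodically(s, INTERVAL, check_feed)
# s.run()  # blocks forever, running check_feed every 15 minutes
```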

Loving Africa Chaps
Dec 3, 2007


We had not left it yet, but when I would wake in the night, I would lie, listening, homesick for it already.

SurgicalOntologist posted:

The function you posted, which you determined by profiling is slowing your code down, is an implementation of the Euler method (the whole 1-second steps thing) for solving differential equations. So maybe you're also doing other things, but you absolutely are solving differential equations.

And by the way, your bigger problem is interesting to me. I'm writing my dissertation on estimating parameters of differential equations from observed timeseries (if this makes sense: it's a hierarchical method in which the diff eq parameters are regression DVs, allowing one to test hypotheses like "the experimental manipulation will significantly affect parameter x"). Anyways, it sounds like what you're doing would fit into my framework. It's not ready, so I'm not going to suggest you use it, but your problem sounds like something I could include in my intro or conclusion when I list the applications in various domains. When you get a chance, could you share, in general terms, the problem you're trying to solve and the approach you're taking? I'll probably have follow-up questions, so maybe a PM would be more appropriate so we don't clog up the thread.

Ok so i'll try and explain as best as i can given a few people were interested.

I'm an anaesthetist, and one of the ways we keep people asleep is using a constant infusion of the anaesthetic drug propofol. Now, rather than just do fixed-rate infusions, people decided to come up with target-controlled infusions: you enter a patient's age, weight, height, and sex, and the pump then works out various parameters and constantly adjusts the rate of the infusion to try to maintain the plasma concentration you've set.

Here's a diagram


What happens when we stick in the patient characteristics is that the two models in use today (Marsh and Schnider) work out what V1, V2, V3, k12, etc. are for that patient (here's a link to a paper that goes over TCI pretty well and shows what the existing models do: https://academic.oup.com/bja/article/103/1/26/462196/Pharmacokinetic-models-for-propofol-defining-and). The underlying maths for the 3-compartment model is the same for both.

So now what am i trying to do?
Well, the problem with Marsh and Schnider is that they are based on very few patients (12 and 24 respectively, IIRC), which is pretty scary given how they then get extrapolated to tens of thousands of people each year who aren't similar to the people in those original studies. There have, however, been a range of other studies looking at propofol concentrations, and lots of researchers have been good enough to release their data for anyone to use on opentci.org.

So what I'm trying to do is take my combined dataset of roughly 600 patients and generate a new set of formulae that will produce V1, k10, k31, etc. for a patient, and do so more accurately than the existing models over a wider range of parameters.

Where the code comes in is that I want a function that runs a test parameter set against the whole patient set (yes, I know I need a held-out test set to avoid overfitting) and then returns an error, which I'll minimise either using one of the scipy functions for that or maybe play around with a genetic algorithm for fun.

In terms of the maths, http://www.pfim.biostat.fr/PFIM_PKPD_library.pdf is pretty detailed.
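In code, the shared three-compartment maths amounts to a small set of rate equations. A sketch using scipy's odeint, with made-up rate constants standing in for whatever the fitted covariate formulae would produce (none of these numbers are Marsh or Schnider values):

```python
import numpy as np
from scipy.integrate import odeint

def three_compartment(A, t, rate, k10, k12, k13, k21, k31):
    """Drug amounts A = [A1, A2, A3]; `rate` is the infusion into
    the central compartment A1."""
    A1, A2, A3 = A
    dA1 = rate - (k10 + k12 + k13) * A1 + k21 * A2 + k31 * A3
    dA2 = k12 * A1 - k21 * A2
    dA3 = k13 * A1 - k31 * A3
    return [dA1, dA2, dA3]

# Illustrative parameters only: a constant 1 mg/s infusion, per-second ks
args = (1.0, 0.012, 0.011, 0.004, 0.006, 0.0003)
t = np.linspace(0, 600, 601)  # ten minutes, one point per second
A = odeint(three_compartment, [0.0, 0.0, 0.0], t, args=args)

V1 = 15.0  # central volume in litres, also illustrative
Cp = A[:, 0] / V1  # the plasma concentration the pump tries to hold
```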

SurgicalOntologist
Jun 17, 2004

So just to clarify, you want to find the parameters that best reproduce the timeseries in the data?

If so, this is a real tricky problem (and yes, you are solving differential equations along the way... these are the simulations). I'd be surprised if you have any luck with the local optimization methods in scipy, and genetic algorithms are a rabbit hole I also went down, with lots of time spent for little reward. The state of the art is, AFAIK, this method. My dissertation is basically expanding it to the case where you have multiple data series and want to characterize how their parameters vary, which seems to be exactly what you want to do. Unfortunately, I don't have an open-source package ready yet for the general case.

On the other hand... I jumped straight to assuming you have a nonlinear, chaotic system, and I didn't look closely enough at the equations to see if that's the case or not. If you have a linear dynamical system you probably don't even need to solve it numerically and parameter estimation should be straightforward.

But, if you do have a nonlinear system and don't feel like implementing Abarbanel's method or tracking down his MATLAB implementation, you are left with something like what you are doing, what I call a "black box" parameter estimation method: choose a parameter set to test, run the simulations, compute the error. In which case we're right back to where we started: regardless of what kind of optimization you end up doing over it, your immediate problem is speeding up the simulations. And the answer here is what was suggested before: formulate the system as a function which takes a state vector (i.e. a numpy array) and parameters as input and outputs the rates of change of the state vector, and send it to scipy.integrate.odeint. If you can't figure it out from the scipy docs you should be able to find more examples online, or post back for help.

Once that's fast I would start by testing parameter sets (in your case, IIUC, you are testing "super-parameter" sets in that they are parameters of the equations that determine the actual diffeq parameters. Or you are testing the actual equations rather than just their parameters. Either way, the same recommendations) more or less manually. Run some examples by hand to get a sense of how the errors behave. Do some brute force testing (i.e., test every single permutation over a range of the parameters--if you have a bunch it may have to be an incredibly coarse range, but still worth it) and plot the errors in various ways. Spend some time doing this kind of thing before you start looking into optimizers, because that's a whole can of worms and optimizers often don't respond well to "black box" type problems (where you can't tell it the Jacobian of the error function). "Getting a feel for it" will get you pretty far and help you to understand what the optimizers are doing.
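The brute-force pass can be as plain as a grid over itertools.product. Here error_for is a hypothetical stand-in for "generate the diffeq parameters, run the simulations, compare to the data", with a toy error surface so the sketch runs on its own:

```python
import itertools
import numpy as np

def error_for(params):
    # Stand-in for: build the diffeq parameters from `params`, run
    # the simulations, and return e.g. RMSE against the observations.
    a, b = params
    return (a - 0.3) ** 2 + (b - 1.5) ** 2  # toy error surface

# Deliberately coarse grids -- with many parameters they may have to
# be coarser still, but it's still worth doing
a_grid = np.linspace(0.0, 1.0, 11)
b_grid = np.linspace(0.0, 3.0, 13)

errors = {p: error_for(p) for p in itertools.product(a_grid, b_grid)}
best = min(errors, key=errors.get)  # plot/inspect `errors` to get a feel
```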

And again, if you do have a linear system, or use a method like Abarbanel's to (sort of) linearize your system, then you can determine the Jacobian and a basic optimizer will do the job.

Good luck! You've picked a problem that is much harder than it first appears.

SurgicalOntologist fucked around with this message at 17:46 on Jun 19, 2017

Loving Africa Chaps
Dec 3, 2007


We had not left it yet, but when I would wake in the night, I would lie, listening, homesick for it already.

SurgicalOntologist posted:

So just to clarify, you want to find the parameters that best reproduce the timeseries in the data?

If so, this is a real tricky problem (and yes, you are solving differential equations along the way... these are the simulations). I'd be surprised if you have any luck with the local optimization methods in scipy, and genetic algorithms are a rabbit hole I also went down, with lots of time spent for little reward. The state of the art is, AFAIK, this method. My dissertation is basically expanding it to the case where you have multiple data series and want to characterize how their parameters vary, which seems to be exactly what you want to do. Unfortunately, I don't have an open-source package ready yet for the general case.

On the other hand... I jumped straight to assuming you have a nonlinear, chaotic system, and I didn't look closely enough at the equations to see if that's the case or not. If you have a linear dynamical system you probably don't even need to solve it numerically and parameter estimation should be straightforward.

But, if you do have a nonlinear system and don't feel like implementing Abarbanel's method or tracking down his MATLAB implementation, you are left with something like what you are doing, what I call a "black box" parameter estimation method: choose a parameter set to test, run the simulations, compute the error. In which case we're right back to where we started: regardless of what kind of optimization you end up doing over it, your immediate problem is speeding up the simulations. And the answer here is what was suggested before: formulate the system as a function which takes a state vector (i.e. a numpy array) and parameters as input and outputs the rates of change of the state vector, and send it to scipy.integrate.odeint. If you can't figure it out from the scipy docs you should be able to find more examples online, or post back for help.

Once that's fast I would start by testing parameter sets (in your case, IIUC, you are testing "super-parameter" sets in that they are parameters of the equations that determine the actual diffeq parameters. Or you are testing the actual equations rather than just their parameters. Either way, the same recommendations) more or less manually. Run some examples by hand to get a sense of how the errors behave. Do some brute force testing (i.e., test every single permutation over a range of the parameters--if you have a bunch it may have to be an incredibly coarse range, but still worth it) and plot the errors in various ways. Spend some time doing this kind of thing before you start looking into optimizers, because that's a whole can of worms and optimizers often don't respond well to "black box" type problems (where you can't tell it the Jacobian of the error function). "Getting a feel for it" will get you pretty far and help you to understand what the optimizers are doing.

And again, if you do have a linear system, or use a method like Abarbanel's to (sort of) linearize your system, then you can determine the Jacobian and a basic optimizer will do the job.

Good luck! You've picked a problem that is much harder than it first appears.

Thanks! Yeah, I appreciate it's going to be difficult, but it's a fun project I initially pitched at a hack day, and I'm learning a bunch while doing it, so that in itself makes it worth it, as well as it potentially being very useful if it works out!

KernelSlanders
May 27, 2013

Rogue operating systems on occasion spread lies and rumors about me.
On the plus side, two compartment models are easy.

Philip Rivers
Mar 15, 2010

What's the best way to store dynamic values in Python, do y'all think? I have objects with elements that are constantly changing and need to be updated and then checked each timestep, and I'm not really very good at optimization!

Eela6
May 25, 2007
Shredded Hen

Philip Rivers posted:

What's the best way to store dynamic values in Python, do y'all think? I have objects with elements that are constantly changing and need to be updated and then checked each timestep, and I'm not really very good at optimization!

This is a very vague question. Can you be more specific?

Philip Rivers
Mar 15, 2010

Eela6 posted:

This is a very vague question. Can you be more specific?

Okay, so I have a set of points and a set of lines defined by two endpoints. Basically I'm working on an intersection algorithm, and I need to figure out when a given point intersects one of the lines. What makes it tricky for me is that the lines exert force on the endpoints and pull them together, so the positions of the endpoints are constantly updating. Here's what it looks like in action:





I don't know what the best way to access all those endpoints quickly would be; I'm very naively self-taught and don't know much at all about data structures/efficiency.

Eela6
May 25, 2007
Shredded Hen
Depending on the size of your simulation, I wouldn't worry about it too much. Use whatever you find most readable. If you find that it's too inefficient for the calculations you're trying to do, start looking into the numpy/scipy packages.

Philip Rivers
Mar 15, 2010

Yeah, I'm currently having efficiency issues. The naive implementation is pulling the endpoint positions from a list of the line objects, but it's not fast enough. I don't know if a numpy array would be useful, because the positions are always updating, so I would need to rebuild the array every timestep.

Cingulate
Oct 23, 2012

by Fluffdaddy

Philip Rivers posted:

Okay, so I have a set of points and a set of lines defined by two endpoints. Basically I'm working on an intersection algorithm, and I need to figure out when a given point intersects one of the lines. What makes it tricky for me is that the lines exert force on the endpoints and pull them together, so the positions of the endpoints are constantly updating. Here's what it looks like in action:





I don't know what the best way to access all those endpoints quickly would be; I'm very naively self-taught and don't know much at all about data structures/efficiency.

Did you check whether one of the many graph/network analysis toolboxes works for your problem?

Eela6
May 25, 2007
Shredded Hen

Philip Rivers posted:

Yeah, I'm currently having efficiency issues. The naive implementation is pulling the endpoint positions from a list of the line objects, but it's not fast enough. I don't know if a numpy array would be useful, because the positions are always updating, so I would need to rebuild the array every timestep.

Rebuilding numpy arrays is very fast, especially if you properly vectorize.
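For scale: rebuilding the endpoint arrays from the line objects each timestep is a couple of list comprehensions, and the per-point math then happens in one vectorized shot. Line, p1, and p2 below are hypothetical names for whatever the objects actually look like:

```python
import numpy as np

class Line:
    def __init__(self, p1, p2):
        self.p1, self.p2 = p1, p2  # 2D endpoints

lines = [Line((0, 0), (1, 0)), Line((1, 1), (2, 3)), Line((5, 5), (6, 5))]

# Rebuilt fresh each timestep -- this part is cheap
starts = np.array([ln.p1 for ln in lines], dtype=float)  # shape (n, 2)
ends = np.array([ln.p2 for ln in lines], dtype=float)

# One vectorized pass: distance from a moving point to every start point
point = np.array([1.0, 0.5])
dists = np.linalg.norm(starts - point, axis=1)
nearest = int(np.argmin(dists))
```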

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
Try looking for a computational geometry algorithm that matches what you're trying to do

Philip Rivers
Mar 15, 2010

Malcolm XML posted:

Try looking for a computational geometry algorithm that matches what you're trying to do

I've tried, but a lot of it is over my head; I don't have enough grounding in algorithms to readily parse the info out there.

Eela6 posted:

Rebuilding numpy arrays is very fast, especially if you properly vectorize.

Do you have any resources on this? I'm very out of my depth with a lot of this and I'm just fumbling in the dark.

Cingulate posted:

Did you check if one of the many graph/network analysis toolboxes works for your problem?

I've tried a few different things but I'll give NetworkX a try.

Cingulate
Oct 23, 2012

by Fluffdaddy

Philip Rivers posted:

Do you have any resources on this? I'm very out of my depth with a lot of this and I'm just fumbling in the dark.
It's not perfect, but it's a start ...

https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Numpy_Python_Cheat_Sheet.pdf

Generally, Numpy is your pythonic way of calling highly optimized packages for numerical computation. As Eela6 said, try to vectorize: not much looping, but operating on vectors and arrays as a whole.

Symbolic Butt
Mar 22, 2009

(_!_)
Buglord

Philip Rivers posted:

I've tried, but a lot of it is over my head; I don't have enough grounding in algorithms to readily parse the info out there.

I'm pretty sure a basic line sweep algorithm would help you on this: https://courses.csail.mit.edu/6.006/spring11/lectures/lec24.pdf

I'll try to find an online class with a video lecture on this later. If you're already on the level of drawing all these lines on the screen then implementing line sweep should not be tricky for you.

Philip Rivers
Mar 15, 2010

Thanks you two, I'll look over those :)

Eela6
May 25, 2007
Shredded Hen

Philip Rivers posted:

Do you have any resources on this? I'm very out of my depth with a lot of this and I'm just fumbling in the dark.

This video by Jake Vanderplas is a good start. https://www.youtube.com/watch?v=EEUXKG97YRw

Philip Rivers
Mar 15, 2010

Eela6 posted:

This video by Jake Vanderplas is a good start. https://www.youtube.com/watch?v=EEUXKG97YRw

This is interesting, thank you!

e: This was super duper helpful. My new favorite thing is np.frompyfunc because holy moly it feels good to apply a method to all the elements in a list of objects and build an array out of that output in like two lines.
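For anyone curious, the pattern looks something like this; Tube and stretched are made-up stand-ins for the actual objects and method:

```python
import numpy as np

class Tube:
    def __init__(self, length):
        self.length = length

    def stretched(self):
        return self.length * 1.1

tubes = [Tube(1.0), Tube(2.0), Tube(3.0)]

# Wrap the method as a ufunc, map it over the whole object list, and
# cast the object-dtype result back to floats -- two lines, as advertised
as_ufunc = np.frompyfunc(Tube.stretched, 1, 1)
lengths = as_ufunc(tubes).astype(float)
```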

Philip Rivers fucked around with this message at 01:42 on Jun 21, 2017

huhu
Feb 24, 2006
Does anyone have a suggestion for a tutorial that walks through the basics of sockets up to putting it on a server and being able to type in "huhu.com/index.html" and get a page to load? I've got something basic running locally but would like to explore sockets, wsgi, and related topics more in depth.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

huhu posted:

Does anyone have a suggestion for a tutorial that walks through the basics of sockets up to putting it on a server and being able to type in "huhu.com/index.html" and get a page to load? I've got something basic running locally but would like to explore sockets, wsgi, and related topics more in depth.

Google "python web server from scratch". There's a bunch of decent-looking stuff on the first page.

Also you can look over the code to Python's own simple http server. https://github.com/python/cpython/blob/3.6/Lib/http/server.py
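The from-scratch version really is just a handful of socket calls. A toy one-shot server, serving a fixed body rather than reading index.html off disk, with the client side driven by hand in the same process:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def handle_one():
    """Accept a single connection and answer with a fixed HTTP response."""
    conn, _ = srv.accept()
    conn.recv(1024)  # raw request bytes, e.g. b"GET /index.html HTTP/1.1..."
    body = b"<h1>hello</h1>"
    conn.sendall(b"HTTP/1.1 200 OK\r\n"
                 b"Content-Type: text/html\r\n"
                 b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                 b"\r\n" + body)
    conn.close()
    srv.close()

server = threading.Thread(target=handle_one)
server.start()

# Play the part of the browser: open a socket and speak HTTP by hand
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET /index.html HTTP/1.1\r\nHost: localhost\r\n\r\n")
reply = client.recv(4096).decode()
client.close()
server.join()
```

Once this makes sense, the next rung on the ladder is WSGI, where the server hands your code a parsed request instead of raw bytes.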

Thermopyle fucked around with this message at 20:21 on Jun 22, 2017

VikingofRock
Aug 24, 2008




So I've been doing more Python lately, and while I feel like I have a decent grasp on the language itself, I struggle with a lot of the idioms for structuring a program. For example: how should I lay out the directory structure of my program, what is __init__.py, where do I list my dependencies, stuff like that. Does anyone have a good resource for learning that sort of stuff? Like the things that are necessary to use Python effectively, but which are outside the scope of the language itself.

edit: I often don't really know what various tools are, either, which falls into the same category. Like what is virtualenv / venv / pyvenv, etc? Is there a standard python code formatter? What other tools should I know about?

VikingofRock fucked around with this message at 03:24 on Jun 23, 2017

accipter
Sep 12, 2003

VikingofRock posted:

So I've been doing more Python lately, and while I feel like I have a decent grasp on the language itself, I struggle with a lot of the idioms for structuring a program. For example: how should I lay out the directory structure of my program, what is __init__.py, where do I list my dependencies, stuff like that. Does anyone have a good resource for learning that sort of stuff? Like the things that are necessary to use Python effectively, but which are outside the scope of the language itself.

edit: I often don't really know what various tools are, either, which falls into the same category. Like what is virtualenv / venv / pyvenv, etc? Is there a standard python code formatter? What other tools should I know about?

Here is the official documentation: https://docs.python.org/3/tutorial/modules.html#packages , and this is a pretty good guide on how to package a Python module: https://python-packaging.readthedocs.io/en/latest/index.html . I would also recommend the setuptools documentation: https://setuptools.readthedocs.io/en/latest/index.html . However, it is more focused on setuptools than on the organization of a package. How to organize a package depends on its scale. If it is small enough, you can put everything in __init__.py.

PEP8 is the recommended standard format for Python code. I use it because it is a standard, but there are people who complain about it (then again, people always find something to complain about). I have also used YAPF. When I am developing a package, I use py.test with flake8. This checks the format of all of my code, and if it violates PEP8 I fix the code by hand.

Depending on what you are working on and the OS, Python dependencies can conflict. A virtual environment is a way to create a project-specific collection of Python packages. I use miniconda to develop packages that simultaneously support Python 2.7 and 3.6: I have conda environments for 2.7 and 3.6 installed side by side, and test against each one.

VikingofRock
Aug 24, 2008




accipter posted:

Here is the official documentation: https://docs.python.org/3/tutorial/modules.html#packages , and this is a pretty good guide on how to package a Python module: https://python-packaging.readthedocs.io/en/latest/index.html . I would also recommend the setuptools documentation: https://setuptools.readthedocs.io/en/latest/index.html . However, it is more focused on setuptools than on the organization of a package. How to organize a package depends on its scale. If it is small enough, you can put everything in __init__.py.

PEP8 is the recommended standard format for Python code. I use it because it is a standard, but there are people who complain about it (then again, people always find something to complain about). I have also used YAPF. When I am developing a package, I use py.test with flake8. This checks the format of all of my code, and if it violates PEP8 I fix the code by hand.

Depending on what you are working on and the OS, Python dependencies can conflict. A virtual environment is a way to create a project-specific collection of Python packages. I use miniconda to develop packages that simultaneously support Python 2.7 and 3.6: I have conda environments for 2.7 and 3.6 installed side by side, and test against each one.

Thank you for all the information, this is very helpful. I'm worried that there are more questions like this that I don't even know to ask, though. Is there a book or something that covers these topics for python?

Edit: I'm looking at Effective Python. Does anyone have any experience with that book? If so, would you recommend it?

VikingofRock fucked around with this message at 03:59 on Jun 24, 2017

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!

Philip Rivers posted:

Yeah, I'm currently having efficiency issues. The naive implementation is pulling the endpoint positions from a list of the line objects, but it's not fast enough. I don't know if a numpy array would be useful, because the positions are always updating, so I would need to rebuild the array every timestep.

Have you actually profiled your code and determined that the slowdown is what you think it is? Since it sounds like you're new to this type of stuff, I get the impression that you haven't profiled it and are just going by gut feeling. And although I don't know what exactly you are trying to do with your code, I'm having a hard time imagining that pulling what are essentially 2D coordinate locations of the nodes would be a significant cause of slowdown, unless there's much more going on under the hood that you've not mentioned.

Anyway it sounds like a fun problem though and got me thinking:

1. You could implement/use some kind of sparse matrix storage scheme if your problem size was on the order of like hundreds of thousands.
2. You could take advantage of the fact that the lines connecting the nodes all obey the same formula y = mx + b, and build some kind of system of equations which numpy could easily solve.
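Profiling first is cheap; something like the sketch below points straight at the hot function. update_positions is a made-up stand-in for one simulation timestep:

```python
import cProfile
import io
import pstats

def update_positions(n=100_000):
    # Stand-in for one timestep of the real simulation
    total = 0.0
    for i in range(n):
        total += i * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
update_positions()
profiler.disable()

# Print the five most expensive entries by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()  # the top entries show where the time really goes
```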

Hughmoris
Apr 21, 2007
Let's go to the abyss!
Are there any recommended articles/tutorials/blogs on working with sqlite in Python? I've just started learning a little bit about SQL and I'm trying to find best practices when incorporating it into a script.

I'd like to use it in a small script that parses an RSS feed and, if it's a new entry, inserts it into the DB.

accipter
Sep 12, 2003

Hughmoris posted:

Are there any recommended articles/tutorials/blogs on working with sqlite in Python? I've just started learning a little bit about SQL and I'm trying to find best practices when incorporating it into a script.

I'd like to use it in a small script that parses an RSS feed and, if it's a new entry, inserts it into the DB.

Do you want to work with sqlite directly? Or indirectly? If you want to work with it indirectly, look at Object Relational Mappers such as peewee or SQLAlchemy. Peewee is simpler, while SQLAlchemy is the standard (?) ORM for Python.
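If you do go direct, the stdlib sqlite3 module covers the "insert only if new" case in a few lines. A sketch with a hypothetical schema: a PRIMARY KEY on the RSS entry's guid plus INSERT OR IGNORE keeps reruns idempotent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in the real script
conn.execute("""
    CREATE TABLE IF NOT EXISTS entries (
        guid TEXT PRIMARY KEY,  -- the RSS entry's unique id
        title TEXT
    )
""")

def add_if_new(conn, guid, title):
    # INSERT OR IGNORE silently skips rows whose guid already exists
    cur = conn.execute(
        "INSERT OR IGNORE INTO entries (guid, title) VALUES (?, ?)",
        (guid, title))
    conn.commit()
    return cur.rowcount == 1  # True only when the entry was actually new

first = add_if_new(conn, "movie-123", "Some Movie")
repeat = add_if_new(conn, "movie-123", "Some Movie")
```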

Hughmoris
Apr 21, 2007
Let's go to the abyss!

accipter posted:

Do you want to work with sqlite directly? Or indirectly? If you want to work with it indirectly, look at Object Relational Mappers such as peewee or SQLAlchemy. Peewee is simpler, while SQLAlchemy is the standard (?) ORM for Python.

I can't say I know enough to know which way I want to go. I'll do some reading on ORM, thanks.

Ganson
Jul 13, 2007
I know where the electrical tape is!

Cingulate posted:

Turns out there is a perfectly fine ${name} module for doing just what I did manually very badly. :downs:

This is the story of my life. You get it right or you learn!

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I want to make flow charts and it looks like GraphViz is the library I want. Does anyone have experience with the various Python bindings to it? I've seen recommendations for graphviz, pygraphviz, and pydot.

Dominoes
Sep 20, 2007

Update on the status of pip on Windows: it works pretty well! Here's my requirements.txt. You need to install numpy+mkl, scipy, and llvmlite via Christoph Gohlke's installers (numpy on its own works fine in pip, btw); the rest will pip install, assuming you have the Visual C++ 2015 build tools (the download link shows up in the error you get when they're not installed).

Python code:
# Install the following packages from Chris Gohlke's site:
# numpy-mkl
# scipy
# llvmlite (Numba prereq)

# Some of the packages below require MS VSC++ build tools; link will show
# in the terminal when attempting to install.

# Scipy stack
pandas
matplotlib
jupyter
sympy


# Science extras
pandas-datareader
scikit-learn
numba  
scikit-image
pillow
seaborn


# Misc
PyQt5
django-toolbelt
twine
wheel
requests
requests_oauthlib
pytest
h5py
toolz
cytoolz
beautifulsoup4
lxml
sqlalchemy
saturn
fplot

Dominoes fucked around with this message at 05:17 on Jul 1, 2017

Philip Rivers
Mar 15, 2010

I got the basic model I was working on functioning at a decent clip.

https://www.youtube.com/watch?v=TLCNX1JUs70

Right now the algorithm I'm using to detect intersections is pretty simple: it just checks the dot product between a tube's unit vector from end one to end two and the unit vector from end one to the open point, and connects if it's close enough to 1. There's one miss at the very end that I think I solved by changing a few conditions (one thing the algorithm checks is that the open tube is longer than its initial length, to prevent self-attachment, so if the open end crosses too close to end one of a tube it might hop over it; but it seems like I can make the initial length of the open tube arbitrarily small and it still works).

As you can see it does start to slow down a bit when the number of tubes gets high enough but my hope is that I can optimize it with array functions that check the dot product against all the tubes in the system simultaneously, but I haven't quite figured that one out yet. But the basic logic of checking the dot product seems to work pretty effectively on its own, so I'm happy with how it's progressing.
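The all-tubes-at-once version of the dot-product check is one einsum away. A sketch with made-up coordinates, where end1/end2 hold one row per tube:

```python
import numpy as np

# One row per tube: end one and end two
end1 = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
end2 = np.array([[1.0, 0.0], [2.0, 1.0], [1.0, 2.0]])
open_point = np.array([3.0, 0.0])

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

axis_hat = unit(end2 - end1)        # each tube's direction
to_point = unit(open_point - end1)  # end one -> open point, per tube

# Row-wise dot products for every tube at once
alignment = np.einsum("ij,ij->i", axis_hat, to_point)

threshold = 0.999  # "close enough to 1"
candidates = np.flatnonzero(alignment > threshold)
```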


Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Well I just had a frustrating time figuring out what was wrong with a part of my code when it turns out it was numpy being weird.

So I have a numpy array where some values are inf and some values are nan. I create a Boolean list of where these values in the array are inf/nan, and then I use these indices to do something to another array.

Like so:

code:

import numpy as np

a = np.array([1, np.sqrt(-1), np.nan, np.inf])  # [1, nan, nan, inf] (np.sqrt(-1) warns and returns nan)

print(a == np.inf)
# F, F, F, T, as expected

print(a == np.nan)
# F, F, F, F, which is wrong

print(np.isnan(a))
# F, T, T, F

Is there a reason it does this? Does np.nan have an actual numerical value or something? I would have thought it would be treated in the same way as None, where it just "is" or "isn't."
