ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender

Rocko Bonaparte posted:

I wanted to ask some more stuff about neural networks, but I wondered first if somebody could recommend a good overall theory book on them that's focused on using neural networks, and less on making something from scratch. You'll probably get what I mean from the questions I have. Assuming I am using backwards propagation to train a neural network:

I'll start with a question for you: Why are you using a neural network? They really aren't suitable for more than a few specific subfields of machine learning, and they tend to perform poorly outside those areas. What specific problem are you solving, and if you're following someone's existing work on machine learning in that area, what papers are you basing your work on?

Rocko Bonaparte posted:

1. What does it mean that the single greatest factor in training the network is the random variables with which I start? I seem to only get a few percentage points of error reduction from my starting number after 500 runs, and the success seems to depend more on the random values.

Most likely, either your training data is poor quality, the problem you are solving has too many local minima, or you implemented your algorithm incorrectly.
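
If you want to separate those causes, one cheap experiment is a handful of random restarts on the same data. Here is a minimal sketch (plain NumPy, not Encog; the network size, learning rate, and toy data are all made up for illustration): if the final errors vary wildly between seeds, initialization and local minima are dominating, and if every seed gets stuck at roughly the same bad value, suspect the data or the implementation instead.
code:
# One-hidden-layer net trained by vanilla backprop, restarted from
# several random seeds to see how much the initialization matters.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_hidden=9, lr=0.5, epochs=500, seed=0):
    rng = np.random.RandomState(seed)
    W1 = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, 1))
    for _ in range(epochs):
        h = sigmoid(np.dot(X, W1))            # hidden activations
        out = sigmoid(np.dot(h, W2))          # network output in (0, 1)
        d_out = (out - y) * out * (1 - out)   # squared-error grad through output sigmoid
        d_hid = np.dot(d_out, W2.T) * h * (1 - h)
        W2 -= lr * np.dot(h.T, d_out) / len(X)
        W1 -= lr * np.dot(X.T, d_hid) / len(X)
    out = sigmoid(np.dot(sigmoid(np.dot(X, W1)), W2))
    return np.mean((out - y) ** 2)            # final training error

# Toy stand-in for the real indicators: 6 inputs, 1 target in (0, 1).
rng = np.random.RandomState(42)
X = rng.rand(200, 6)
y = sigmoid(4 * (X[:, :1] - X[:, 1:2]))

for seed in range(5):
    print("seed", seed, "final error", train(X, y, seed=seed))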

Rocko Bonaparte posted:

2. What are some good rule-of-thumb formulas for calculating the number of neurons in my hidden layer? The first one I heard was to have 1.5x as many as the input layer, but I've heard of other formulas based on the number of different combinations I want represented in my output.

There is a certain amount of experimentation involved here. 1.5*input is a decent enough place to start for most problems.

Rocko Bonaparte posted:

3. Have there been any advances in training neural networks that I should be looking for instead? I don't mean things like pruning but more like things that make backwards propagation obsolete.

In my experience, most highly successful supervised-training neural networks are essentially trained by backpropagation, but have been modified and carefully tuned to match the model relevant for their operation and frequently use slight variants of the neural network algorithm to that end. In many cases, they are combined with other machine learning techniques in order to make a gestalt learner with the neural network portion specialized to the specific aspects of the model it is best suited for, and supplemented with Bayesian or decision-tree learners for other model aspects.

Edit: I tend to strongly discourage neural networks because even in cases where they are well-suited and the training data available is of acceptable quantity and quality, they still require tweaking and experimentation to get robust results out. Worse, if your problem isn't suitable for neural network learners, and that isn't obvious to you from your model, then the failure case neural networks deliver is terrible: they will mostly look like they just aren't well trained, complete with occasionally and unpredictably giving you high-confidence-seeming answers that appear correct. It's incredibly unhelpful, and smart people have wasted years fighting to get neural networks to solve problems that other machine learning techniques can adapt to in mere weeks of programmer time.

ShoulderDaemon fucked around with this message at 07:21 on Feb 9, 2009


tripwire
Nov 19, 2004

        ghost flow

Rocko Bonaparte posted:

I wanted to ask some more stuff about neural networks, but I wondered first if somebody could recommend a good overall theory book on them that's focused on using neural networks, and less on making something from scratch. You'll probably get what I mean from the questions I have. Assuming I am using backwards propagation to train a neural network:

1. What does it mean that the single greatest factor in training the network is the random variables with which I start? I seem to only get a few percentage points of error reduction from my starting number after 500 runs, and the success seems to depend more on the random values.
2. What are some good rule-of-thumb formulas for calculating the number of neurons in my hidden layer? The first one I heard was to have 1.5x as many as the input layer, but I've heard of other formulas based on the number of different combinations I want represented in my output.
3. Have there been any advances in training neural networks that I should be looking for instead? I don't mean things like pruning but more like things that make backwards propagation obsolete.

Feed-forward single-hidden-layer perceptrons really suck poo poo for most anything more advanced than pole balancing. Try experimenting with spiking and/or recurrent nets if possible.

I learned a bunch of stuff fooling around with NEAT and related algorithms. There is a lovely python module called neat-python which implements several different kinds of neural nets: continuous-time recurrent (AKA leaky) nets, integrate-and-fire, and a spiking neural net implemented according to Eugene M. Izhikevich's 2003 paper (also a recommended read, available here: http://vesicle.nsi.edu/users/izhikevich/publications/spikes.pdf ); a rough sketch of that spiking model is below.
neat-python is available here:
http://code.google.com/p/neat-python/
I recommend perusing some of the source for the different NN models, as they have most of them in Python and C++.
If you are concerned about performance, consider trying to implement the EANT algorithm as described here: http://www.siebel-research.de/evolutionary_learning/
The idea with EANT is to store NNs as linear genomes and activate them via a computationally fast process of popping and pushing elements from the genome.
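
For reference, here is a rough sketch of the Izhikevich (2003) neuron from the paper linked above, with the "regular spiking" parameters and plain Euler integration; neat-python's own implementation may differ in its details.
code:
# Izhikevich (2003) model: v' = 0.04v^2 + 5v + 140 - u + I and
# u' = a(bv - u), with a reset to (c, u + d) whenever v crosses 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameters
dt = 0.25                            # integration step in ms
v, u = -65.0, b * -65.0              # membrane potential / recovery variable

spike_times = []
for step in range(int(1000 / dt)):   # simulate one second
    I = 10.0 if step * dt > 100 else 0.0   # step current switched on at 100 ms
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: record it and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(len(spike_times), "spikes")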

If you are satisfied with the mechanics of your neural nets and the mechanism through which they are trained, consider working on your characterization of fitness in your solution space. This is an excellent paper on the strength of using archives, coevolution and a focus on novelty to allow the training to be far more robust:
http://www.alifexi.org/papers/ALIFExi_pp329-336.pdf

Strong Sauce
Jul 2, 2003

You know I am not really your father.





So does no one program in Ruby other than to use it for Ruby on Rails? I was kind of hoping there'd be a thread about Ruby since I just started using it and could have used some help when I was implementing a genetic algorithm class. I was hoping someone with more knowledge about the subject could actually start a decent OP.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Phew, I see a splay of answers, so I'll try to describe what I've been trying to do. I've been working on a little automated stock trading tool for a while for fun and profit (someday). I've been trying to make short-term decisions based on daily stock data, which would be very noisy since I'm basically trying to make a decision on tomorrow's picks based on, say, 7 data points. To make things more complicated, I was trying to make decisions on all stocks and find the best choice. I'm using continuous signals for that rather than boolean buy/sell, so that I can rank the picks.

Self-defense time since I've been hassled by this so drat much: I don't want to get into intraday data since I'd have to start paying a tab for the data feed and this is just a hobby right now. But having more data points would definitely help. I find a lot of stuff out there limiting, and I do optimization work for a living so I somewhat enjoy smashing things out by hand. And I'm only doing stocks right now since it's easier for me to get the data regularly from free sources, but I'm not going to insist on sticking to that.

What I'm trying first is throwing a few of my better indicators together into a neural network because I haven't been able to find a rule to use them together yet. I do this for every stock, and add in the slope and acceleration of those indicators too. Right now it's just two indicators, which turns into 6 signals with slope and acceleration between them.

For each input I calculate its standard deviation and assume a mean of 0.0 using the whole population's representation of the values, and then fit that into the range 0.0 to 1.0 using a normal distribution. 0.5 means "hold", 0.0 means "strongest sell" and 1.0 means "strongest buy." The output is based on the percent price increase of the stock from opening day to opening day, which would be my shortest holding period unless I started daytrading. That signal gets standardized too.
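
In code, that squashing step is roughly the following (a sketch using SciPy's normal CDF; the sample values are made up):
code:
# Assume mean 0, estimate sigma from the whole population, then squash each
# value into (0, 1) with the normal CDF so that 0.5 ends up meaning "hold".
import numpy as np
from scipy.stats import norm

def squash(signal):
    sigma = np.std(signal)           # population std dev, mean assumed 0.0
    return norm.cdf(signal / sigma)  # Phi(x / sigma), lands in (0, 1)

raw = np.array([-2.1, -0.3, 0.0, 0.4, 1.8])   # one indicator's raw values
print(squash(raw))                   # values near zero come out near 0.5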

I'm using Encog, some Java toolkit that seems to be "open-source, closed-information"; the guy basically charges for support through selling a book, doing classes, and somesuch. Generally I've been able to get backwards propagation to work, but I wanted to know some of the tips for tweaking it. With a decade of stock data, the best I got started out with an error of about 16.8% and moved down to 15.9% after 500 runs. I had set the momentum really low since I was saturating with my first experiments. However, I see now I was debugging an issue where occasionally a NaN or an infinite number would get thrown into the pool, which would naturally screw everything up.

BTW--I believe Encog's default model for backwards propagation assumes values ranged 0.0 to 1.0. I tried not using values in that range and it went completely bonkers.

So the first thing might be to increase the momentum again. But meanwhile I've been adding an interface for the neural net for my backtester so I can see how it really performs. The problem is that it performs way too slowly because I calculate the indicators over and over again each iteration, and I've been pulling 12-hour work days so I haven't been able to write something to precalculate them. :(

Anyways, I was hoping to find a book that was more top-down. Say, something that can explain the basics up front in a chapter and then get into some of the various technologies, so I can be more aware of what's out there. I am hoping to start with a library that I can tweak. For example, I already want to adjust the training mechanism to "punish" decisions that are in the opposite direction. Say, if it generates 0.6 (weak buy) when it should have generated 0.8 (stronger buy), that's better than if it generates 0.6 when it should have generated 0.4 (weak sell), since that would trigger different behavior. For that I don't necessarily need to know the full behavior of the neurons or write my own model from scratch--I assume.
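
Roughly what I have in mind for the "punish the wrong direction" idea, as a sketch only: the penalty factor is arbitrary, and actually wiring it into Encog would mean implementing its training interfaces, which isn't shown here.
code:
# Squared error, scaled up whenever the prediction lands on the opposite
# side of the 0.5 "hold" line from the target.
def directional_error(predicted, target, penalty=3.0):
    base = (predicted - target) ** 2
    wrong_side = (predicted - 0.5) * (target - 0.5) < 0
    return base * (penalty if wrong_side else 1.0)

print(directional_error(0.6, 0.8))   # right direction: plain squared error
print(directional_error(0.6, 0.4))   # wrong direction: scaled up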

Smugdog Millionaire
Sep 14, 2002

8) Blame Icefrog

Strong Sauce posted:

So does no one program in Ruby other than to use it for Ruby on Rails? I was kind of hoping there'd be a thread about Ruby since I just started using it and could have used some help when I was implementing a genetic algorithm class. I was hoping someone with more knowledge about the subject could actually start a decent OP.

Ruby's libraries outside of web stuff are lacking and Ruby's execution speed up until a few weeks ago was too slow to do any serious computation (it's probably still slower than comparable languages). Sorry, I'm as disappointed as you are :(

bitprophet
Jul 22, 2004
Taco Defender

Free Bees posted:

Ruby's libraries outside of web stuff are lacking and Ruby's execution speed up until a few weeks ago was too slow to do any serious computation (it's probably still slower than comparable languages). Sorry, I'm as disappointed as you are :(

I'd also wager that part of the reason is Python basically "got there first": it's extremely similar (interpreted, terse syntax compared to C++/Java/etc, powerful language constructs, etc) but has a much bigger/better built-in library plus a significantly larger pool of third-party libraries.

It's also as old or older and, probably as important, it was more widely adopted in the English-speaking world before Ruby managed to break out of Japan (which really only happened in the mid '00s when Rails came out).

So my conjecture is that Ruby basically has no niche to fill outside of people who like Ruby because they've used Rails; the "want to script/do systems engineering/do non-C embedded work/etc/etc, without using Java or C++" niche was filled by Python beforehand, due to the above factors.

dancavallaro
Sep 10, 2006
My title sucks
Anyone know of a programming language where you can redefine integer literals (i.e. 3 = 5)?

Avenging Dentist
Oct 1, 2005

oh my god is that a circular saw that does not go in my mouth aaaaagh

dancavallaro posted:

Anyone know of a programming language where you can redefine integer literals (i.e. 3 = 5)?

You know, there's this amazing thing in programming languages called variables that let you do stuff like this in a slightly more abstract fashion.

(Seriously, what would even be the point of something like this?)

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
Screw the point of it, what does it even mean? Are literals actually implemented as variables? Are numbers no longer mathematical abstractions, but instead merely figments of a fungible and expressive reality? Do we really have the power to achieve our dreams? Wake up sheeple!!

Anyway, you could do that with punchcard FORTRAN: the literal (say) 42 actually was uniqued to a particular cell, which you could then freely assign to. Supposedly it was useful, if you screwed up some earlier literal, to temporarily "edit" it to the real value, although I imagine this sort of patch very quickly got out of hand. I have also heard that some early COBOL implementations had a similar bug with by-reference parameter passing, although that was less global.

EDIT: I am not aware of any modern languages which intentionally permit this, though, and rightly so.

dancavallaro
Sep 10, 2006
My title sucks
Calm down, it's out of curiosity. I was just wondering if any language even allowed this.

Mithaldu
Sep 25, 2007

Let's cuddle. :3:
Just guessing here, but couldn't you write a source filter in Perl that would treat all numbers as variables which, when first accessed, get initialized with their respective value and could then be changed?

ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
In Haskell you can define your own type in the Num class and give it some nonsense fromInteger definition, which would have approximately the same effect.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

dancavallaro posted:

Anyone know of a programming language where you can redefine integer literals (i.e. 3 = 5)?
In all seriousness, I meet at least one CS grad student per year who thinks there's some mystical block of memory that defines integers. :suicide:

edit: dancavallaro, are you a grad student? :raise:

dancavallaro
Sep 10, 2006
My title sucks

Dijkstracula posted:

In all seriousness, I meet at least one CS grad student per year who thinks there's some mystical block of memory that defines integers. :suicide:

edit: dancavallaro, are you a grad student? :raise:

Not a grad student, just undergrad, but I'm taking a programming language design course right now and my professor said she thinks there's a programming language that lets you redefine integer literals (she seemed to think it might be Forth, although I couldn't find anything about Forth letting you do that). I realize how stupid my question must have sounded, but it was out of curiosity, not ignorance.

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe
Hmm. In addition to the bits about COBOL and FORTRAN, I just remembered having heard a persistent story about being able to alter constants "in Lisp" — which is totally meaningless, since basically every Lisp implementation has its own unique-snowflake semantics on things like this, and there are dozens if not hundreds of relatively common Lisp (:rimshot:) implementations out there. So that might be the rumor your instructor was repeating.

tef
May 30, 2004

-> some l-system crap ->
I'm pretty sure CLC-INTERCAL allows you to redefine integer literals :eng101:

http://intercal.freeshell.org/
http://esolangs.org/wiki/CLC-INTERCAL
code:
   DO .1 <- .2/#1
   DO .2 <- #3
   DO .1 <- #1

quote:

What happens here? The first assignment introduces overloading of .2, so the second assignment assigns #3 to #1. In other words, from now on every time you say 1 you really mean 3. We have changed the value of a constant, but not just that. Consider the third assignment: this assigns #3 to .3, not #1 to .1, for obvious reasons. This can be a great obfuscation tool.

tef fucked around with this message at 01:50 on Feb 10, 2009

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Well, of course, if it's horrible, INTERCAL will let you do it :v:

6174
Dec 4, 2004

rjmccall posted:

Anyway, you could do that with punchcard FORTRAN: the literal (say) 42 actually was uniqued to a particular cell, which you could then freely assign to. Supposedly it was useful, if you screwed up some earlier literal, to temporarily "edit" it to the real value, although I imagine this sort of patch very quickly got out of hand.

Do you know what version of the language allowed this? I'm certainly no expert on old Fortran, but I've never come across anything like this working with Fortran IV/66, 77, or 90/95 code. However what I have worked with hasn't been on a punchcard (thankfully).

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

6174 posted:

Do you know what version of the language allowed this? I'm certainly no expert on old Fortran, but I've never come across anything like this working with Fortran IV/66, 77, or 90/95 code. However what I have worked with hasn't been on a punchcard (thankfully).

I believe it was just a punchcard thing, and that all the real revisions of the language took that out. It's also possible that my source was thinking of some other language; I don't have direct knowledge of it, either.

baquerd
Jul 2, 2007

by FactsAreUseless

Rocko Bonaparte posted:

Phew, I see a splay of answers, so I'll try to describe what I've been trying to do. I've been working on a little automated stock trading tool for a while for fun and profit (someday). I've been trying to make short-term decisions based on daily stock data, which would be very noisy since I'm basically trying to make a decision on tomorrow's picks based on, say, 7 data points. To make things more complicated, I was trying to make decisions on all stocks and find the best choice. I'm using continuous signals for that rather than boolean buy/sell, so that I can rank the picks.

Just a heads up, this idea has been beat to death repeatedly (google neural network stock trading), and has yet to produce anything other than curiosities. I work for a hedge fund, and peripherally with our predictive algorithms. Using a neural network, while seemingly interesting and valid on the surface, just doesn't pan out. Considering we have access to direct market data everywhere at every time, I wish you luck but would give you tremendous odds against your success.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

quadreb posted:

Just a heads up, this idea has been beat to death repeatedly (google neural network stock trading), and has yet to produce anything other than curiosities. I work for a hedge fund, and peripherally with our predictive algorithms. Using a neural network, while seemingly interesting and valid on the surface, just doesn't pan out. Considering we have access to direct market data everywhere at every time, I wish you luck but would give you tremendous odds against your success.
It's worth a shot but I'm not giving it high expectations. The real problem that might just be futile is that I'm trying to use daily data to make short-term decisions. If I had intraday data over the course of a few days I could probably come up with something more robust eventually without having to try something fancy. The issue is that the datafeeds can cost quite a bit relative to the amount of money with which I could enter the market. There's OpenTick but they're not taking new people right now.

The neural networks have been pretty neat though, regardless of this current little evil scheme. There's some stuff in past work where I could have tried using one, since nobody else had a better idea. Say there was an odd problem somebody was complaining about: they could record when it happened, but they had no luck predicting it so that they could pause and capture some data about it. I could imagine training a network to try to make a prediction.

I'll take any tips you might have for other areas to look for automated trading though.

baquerd
Jul 2, 2007

by FactsAreUseless

Rocko Bonaparte posted:

I'll take any tips you might have for other areas to look for automated trading though.

Unfortunately, other than saying to look in the area of assessing your risk and exposure and basing decisions on that, I am contractually limited in what I can say.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

quadreb posted:

Unfortunately, other than saying to look in the area of assessing your risk and exposure and basing decisions on that, I am contractually limited in what I can say.
That's okay--normally what I get from people that don't do this stuff for a living is "You're a moron. Look at charts and stop coding stuff." It takes the fun and science out of it though.

Does anybody have a recommendation for a Java neural network toolkit other than Encog? It was just one I picked up Googling around and gave a shot. The amount of code I've written for working with Encog's neural network library is very short; everything else so far has been making things more efficient since it takes a very long time to test a year's worth of data with it using the old system I had. It would recalculate a bunch of numbers each run through.

Anyways I'll look up machine learning generally and see what else is out there. I'll probably come running back here before I start putting time and code into something else.

Alan Greenspan
Jun 17, 2001

The company I work for needs about 50 custom-designed icons. What would be a typical price tag for that? Where do you guys get custom icons from?

baquerd
Jul 2, 2007

by FactsAreUseless

Alan Greenspan posted:

The company I work for needs about 50 custom-designed icons. What would be a typical price tag for that? Where do you guys get custom icons from?

I recommend a bubble sort.

Are you sure you're in the right thread?

Try asking for offers and portfolios in SA-Mart

POKEMAN SAM
Jul 8, 2004

Alan Greenspan posted:

The company I work for needs about 50 custom-designed icons. What would be a typical price tag for that? Where do you guys get custom icons from?

One of the programmers I work with creates our toolbar button images; I, however, am not that creative. Though I did create an image for one of the applications I work on and it turned out alright.

raminasi
Jan 25, 2005

a last drink with no ice
I need to assemble sub-polygons into a whole polygon (e.g. take three adjacent rectangles and find the rectangle that encompasses all of them). It would be pretty simple to aggregate the lists of vertices that define them and then remove duplicates, except that I need the resulting vertex list to be properly ordered, i.e. define no edges that cross each other. I can't make any assumptions about the order of the vertices in the sub-polygons other than that they're properly ordered too. I figure this isn't an insurmountable problem, but I don't want to spend days trying to invent a wheel that's already been invented. Does anyone have any ideas or pointers towards places to start?
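
One wheel that already exists is Shapely, a Python wrapper around the GEOS geometry library; its union operation returns the merged outline with the exterior ring already in consistent order. A hedged sketch, assuming Python is an option and using three adjacent unit squares as stand-ins:
code:
# Merge adjacent polygons and read off the ordered outline vertices.
from shapely.geometry import Polygon
from shapely.ops import unary_union

rects = [
    Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
    Polygon([(1, 0), (2, 0), (2, 1), (1, 1)]),
    Polygon([(2, 0), (3, 0), (3, 1), (2, 1)]),
]
merged = unary_union(rects)              # dissolves the shared edges
print(list(merged.exterior.coords))      # ordered vertices of the outline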

tripwire
Nov 19, 2004

        ghost flow

Rocko Bonaparte posted:

It's worth a shot but I'm not giving it high expectations. The real problem that might just be futile is that I'm trying to use daily data to make short-term decisions. If I had intraday data over the course of a few days I could probably come up with something more robust eventually without having to try something fancy. The issue is that the datafeeds can cost quite a bit relative to the amount of money with which I could enter the market. There's OpenTick but they're not taking new people right now.

The neural networks have been pretty neat though, regardless of this current little evil scheme. There's some stuff in past work where I could have tried using one, since nobody else had a better idea. Say there was an odd problem somebody was complaining about: they could record when it happened, but they had no luck predicting it so that they could pause and capture some data about it. I could imagine training a network to try to make a prediction.

I'll take any tips you might have for other areas to look for automated trading though.
When I replied to you I wasn't sure exactly what you were using the neural nets for, which is why my answer was keyed more to neural nets as controllers rather than as classifiers.

Like people have said, using backprop on the stock market is nothing new; people have probably been doing that since the 60s. If you are interested in forging new ground and making some progress, you should really read up on some of the directions people are taking neural nets in nowadays; coming at the problem in a novel way might be all you need to see some success.

Geoff Hinton has a talk on Google Video, which might be a couple of years old now, where he goes over restricted Boltzmann machines as applied to all sorts of classification and categorization problems, and he shows some techniques for making them very robust across a wide variety of problems. I think he might have still been using backprop as a training algorithm, so maybe check that out.
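
For the curious, here is a minimal sketch of a single contrastive-divergence (CD-1) update for a binary RBM, which is the textbook training rule rather than anything lifted from that talk:
code:
# One CD-1 update for a binary restricted Boltzmann machine.
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(W, a, b, v0, lr=0.1):
    # positive phase: hidden probabilities and samples driven by the data
    p_h0 = sigmoid(np.dot(v0, W) + b)
    h0 = (rng.rand(*p_h0.shape) < p_h0).astype(float)
    # negative phase: one Gibbs step back to the visibles and up again
    p_v1 = sigmoid(np.dot(h0, W.T) + a)
    p_h1 = sigmoid(np.dot(p_v1, W) + b)
    # update from the gap between data and reconstruction correlations
    W += lr * (np.dot(v0.T, p_h0) - np.dot(p_v1.T, p_h1)) / v0.shape[0]
    a += lr * (v0 - p_v1).mean(axis=0)
    b += lr * (p_h0 - p_h1).mean(axis=0)
    return W, a, b

n_visible, n_hidden = 6, 8
W = 0.01 * rng.randn(n_visible, n_hidden)
a, b = np.zeros(n_visible), np.zeros(n_hidden)   # visible / hidden biases
batch = (rng.rand(20, n_visible) > 0.5).astype(float)
W, a, b = cd1_step(W, a, b, batch)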

If you are interested in trying different training methods, there are quite a few promising directions you could go that might be interesting.

There are genetic-programming-inspired approaches which have some method of expressing a genome into a net, also known as neuroevolution. NEAT and EANT fall into this category; both algorithms mutate the nets and "speciate" them by how similar they are, but they differ in the method they use to optimize/train the nets. In these approaches the nets are not only optimized by weight or bias; they are also capable of mutating topologically (i.e. gaining/losing synapses, adding or pruning neurons). EANT makes a distinction between optimizing the weights and exploring new structures and so does them in phases, which leaves an open question of how to best balance exploration versus exploitation.

One solution to this that I don't think I've seen anyone do yet is applying the upper confidence bound policies from this paper: http://adsabs.harvard.edu/abs/2008arXiv0805.3415G
In other words, these two strategies seem to be made to complement each other, and might provide you a way to work around your sketchy/short-term data with a good deal of robustness (i.e. with UCB you shouldn't have to worry about overfitting the data as badly). Sorry if all this is a little too far out there.
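
To make the UCB idea concrete, here is the standard UCB1 rule as a stand-in for the policies in that paper; each "arm" could be a candidate structure or species, and the bonus term keeps rarely-tried ones from being starved while the mean term exploits what already works.
code:
# UCB1: pick the arm maximizing mean reward + sqrt(2 ln t / pulls).
import math
import random

def ucb1_pick(means, counts, t):
    scores = [m + math.sqrt(2 * math.log(t) / n) if n > 0 else float("inf")
              for m, n in zip(means, counts)]
    return scores.index(max(scores))

payoff = [0.2, 0.5, 0.8]             # hidden payoff probability of each arm
means, counts = [0.0] * 3, [0] * 3
for t in range(1, 1001):
    i = ucb1_pick(means, counts, t)
    reward = 1.0 if random.random() < payoff[i] else 0.0
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]   # running average
print(counts)                        # most pulls should go to the best arm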

tripwire fucked around with this message at 15:05 on Feb 11, 2009

nonathlon
Jul 9, 2004
And yet, somehow, now it's my fault ...
An oddly specific question - being used to editing code on the Mac with TextMate, I'm happy using E (a TextMate look-a-like) on the PC. But on Ubuntu? Every editor is giving me the creeps.

I don't want an editor that requires I set up a "project". I don't want something where I have to choose and open every file I need individually. What I like about TextMate is being able to open a folder as a project, and have the folder structure there in a sidebar as I edit the code in multiple tabs. Is there anything like this for Linux?

tef
May 30, 2004

-> some l-system crap ->
a warning:

regardless of what question you ask, if it mentions 'text editing' and 'linux' you will get a few responses saying "vim" or "emacs"

these are probably not what you want.


(for example the thread asking for nano syntax highlighting is about 3 pages of vim people and then it breaks into vi vs emacs)

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

Offhand, I don't know of a Linux analog to TextMate. From my experience, you'll have editors fall into three camps: the IDEs such as Eclipse or NetBeans, the generic NotePad-ish editors like gedit, and then the "real man"'s editors: vim and emacs. (which I would argue could very well be what he wants)

Not to turn this question into an editor holy war, but since I'm a vimfag, I would recommend you look at vim. It supports tabs, and you can get a directory tree by typing :e . (which opens netrw's directory listing). You might need something like one of these plugins and a bit of scripting to make it really work the way you want, though. I'm sure emacs can be configured to do a similar thing, too; someone else would have to chime in on that, though.

TagUrIt
Jan 24, 2007
Freaking Awesome

outlier posted:

...able to open a folder as a project, and have the folder structure there in a sidebar as I edit the code in multiple tabs...

Depending on the language, you might be able to use Eclipse to do that. It "requires" that you set up a folder as a project, but I think you can just import an existing project.

There are also some plugins for gedit that can get what you want. Specifically, I'd look at ClassBrowser and, if that isn't what you need, maybe fileset.

There's probably a way to do the same in (g)vi(m) or (x)emacs, but you probably don't want to touch either if you don't have any experience.

narbsy
Jun 2, 2007

outlier posted:

An oddly specific question - being used to editing code on the Mac with TextMate, I'm happy using E (a TextMate look-a-like) on the PC. But on Ubuntu? Every editor is giving me the creeps.

I don't want an editor that requires I set up a "project". I don't want something where I have to choose and open every file I need individually. What I like about TextMate is being able to open a folder as a project, and have the folder structure there in a sidebar as I edit the code in multiple tabs. Is there anything like this for Linux?

You can get creative and set gedit up to do the file structure thing. I may be hallucinating, but I think I've also seen gedit do code suggestion for CSS. Unsure if it can do that for anything else.

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

So I'm taking a computational differential equations class this term, which has a project component. I'd like to do my work in something that isn't Matlab. I assumed that any real scientific computing, based on the limited set of applications that I've used, would have to be written in some unholy alliance of C and Fortran. However, I came across SciPy, and it sounds like they're trying to market Python of all things as a scientific computing language. Anybody have any thoughts on whether this is a direction things might go in, or is this nothing more than a bunch of Python dorks with Not Invented Here syndrome not wanting to link against LAPACK?

bitprophet
Jul 22, 2004
Taco Defender

Dijkstracula posted:

So I'm taking a computational differential equations class this term, which has a project component. I'd like to do my work in something that isn't Matlab. I assumed that any real scientific computing, based on the limited set of applications that I've used, would have to be written in some unholy alliance of C and Fortran. However, I came across SciPy, and it sounds like they're trying to market Python of all things as a scientific computing language. Anybody have any thoughts on whether this is a direction things might go in, or is this nothing more than a bunch of Python dorks with Not Invented Here syndrome not wanting to link against LAPACK?

Uh. SciPy has been used for years and in some areas (like bioinformatics) Python is basically the king.

Like every other language, it does have its own share of NIHS, but SciPy is definitely not that, as far as I know :)
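
For the differential-equations angle specifically, this is the sort of thing that works out of the box; a damped oscillator is used here purely as a stand-in problem.
code:
# Integrate x'' + 2*zeta*omega*x' + omega^2*x = 0 with scipy.integrate.odeint.
import numpy as np
from scipy.integrate import odeint

def damped_oscillator(state, t, zeta=0.2, omega=2.0):
    x, v = state
    return [v, -2 * zeta * omega * v - omega ** 2 * x]

t = np.linspace(0, 10, 500)
solution = odeint(damped_oscillator, [1.0, 0.0], t)   # x(0) = 1, v(0) = 0
print(solution[-1])                  # position and velocity at t = 10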

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

bitprophet posted:

Uh. SciPy has been used for years and in some areas (like bioinformatics) Python is basically the king.
No kidding, I had no idea :) Great, that's good to hear. Thanks.

bitprophet
Jul 22, 2004
Taco Defender

Dijkstracula posted:

No kidding, I had no idea :) Great, that's good to hear. Thanks.

No prob, sorry for sounding acerbic, was kind of defensive given your "golly they're using Python for this?!" tone ;)

Python seems to do very well in academia, I'm assuming because it's got a very simple/easy syntax (so people who aren't CS professors can still pick it up pretty easily), while remaining powerful (so you can express your crazy scientific algorithms/ideas without writing tons of e.g. Java boilerplate) and with a seriously large amount of built-in and third-party libraries (like SciPy, PyGame, etc etc).

EDIT: Sorry for all the Python cheerleading I seem to do in this thread and others :shobon:

DOUBLE EDIT: To sate my own curiosity I looked up SciPy's SVN repo to see just how long they've actually been around; looks like the first commit to SVN was in 02/2001, so that's pretty decent as such things go.

bitprophet fucked around with this message at 23:11 on Feb 11, 2009

Dijkstracula
Mar 18, 2003

You can't spell 'vector field' without me, Professor!

bitprophet posted:

No prob, sorry for sounding acerbic, was kind of defensive given your "golly they're using Python for this?!" tone ;)
Ha, no, it didn't bother me, especially since I (without ever having considered it) never really regarded Python as a language particularly suited to this sort of stuff, so you pretty much read my tone correctly :)

Since you're a Python guy, any idea what the speed hits are with using SciPy's routines as opposed to, say, a reasonable BLAS library?
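
One way to get a feel for it is just to time it yourself: NumPy/SciPy hand the heavy lifting to whatever BLAS they were built against, so the overhead is mostly per-call rather than per-element. A rough sketch:
code:
# Compare a BLAS-backed matrix multiply with a pure-Python equivalent.
import time
import numpy as np

n = 200
A, B = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.time()
C = np.dot(A, B)                     # dispatched to BLAS
print("numpy dot:  ", time.time() - t0)

t0 = time.time()
D = [[sum(A[i, k] * B[k, j] for k in range(n)) for j in range(n)]
     for i in range(n)]              # the same product in pure Python
print("pure python:", time.time() - t0)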

edit: oh hey there's a python thread; why don't I go post in there instead :f5:

Dijkstracula fucked around with this message at 23:26 on Feb 11, 2009

The Balance Niggy
May 11, 2001

tane wave
Ok, so this isn't really "programming" but here goes. I have a template for a letter that needs to be sent to 270 addresses. I want each addressee to have their own address in the letter, e.g.:

Samuel Fappery
2394 Baller rear end Dr.
Chicago, IL 69696

Now I just deleted my student version of the VB.NET IDE, and my friend told me roughly that it is possible through macros (he's busy at the moment and I can't get in touch with him). Each address is in a single Excel file with corresponding rows. I know how to pull the data from the cells, but I see no simple way to have it print 270 times for each different person without programming something outside of Word. Any help would be appreciated, but at this rate I'm thinking of just copying and pasting as a band-aid and programming it later, but if it can be done within any of the programs that come with Office 2007 that would be great, thanks.


ShoulderDaemon
Oct 9, 2003
support goon fund
Taco Defender
You want "mail merge", which should be documented in the help for office.
