|
Dijkstracula posted:Wait, wait, what? What the hell are you talking about? Prolog is a logic language and has nothing to do with databases. Datalog is an example to the contrary - logical programming makes it very easy to express the relationship between things.
|
# ? Jun 4, 2009 19:07 |
|
|
# ? May 30, 2024 03:13 |
|
tef posted:Datalog is an example to the contrary - logical programming makes it very easy to express the relationship between things. Dijkstracula fucked around with this message at 19:16 on Jun 4, 2009 |
# ? Jun 4, 2009 19:12 |
|
Evil Home Stereo posted:I need some ideas for a good way to go about solving this problem: Declaratively - you need to find a language in which you can express your constraints easily, and then use library code to solve it. quote:This is basically what I'm working with: You could do this in prolog but it would involve a lot of leg work, as although it is quite easy to write the solver, I'm not aware of any off the top of my head that will do enough for you. Admittedly you could write a tiny constraint solver which would be fun. Alternatively you could use a computer algebra system, which tend to cater to these sorts of programs. Wikipedia is kinda useful here and it would be worth having a poke around: http://en.wikipedia.org/wiki/Category:Computer_algebra_systems http://en.wikipedia.org/wiki/SymPy
|
# ? Jun 4, 2009 19:14 |
|
Thanks for the links, I'll check those out. I should point out that I'm an undergrad chemical engineering student so, while I have some computer science experience, it's only with things like Matlab and Java. Part of the problem is that we're trying to make a GUI with sliders for the variables, so you can change one of the values, and see how the rest move in response to that. I should also add that we're trying to make this an app that we can put on a website so from what my boss is telling me there are limitations on what languages we can use. Is there any way to do this in Java, or a language similar to the ones I have experience with? I don't really understand why it is possible to solve this type of problem in some languages and not others.
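For what it's worth, the "sliders over an equation" idea boils down to: solve the constraint for whichever variable is currently free. A minimal sketch in Python, using the ideal gas law as a hypothetical stand-in for whatever the real process equations are (the function name and the choice of equation are mine, not from the project):

```python
# Sketch: re-solve P*V = n*R*T for whichever single variable is left unknown.
# A slider callback would call this with the three known values and display
# the returned fourth. The ideal gas law here is just a placeholder.
R = 8.314  # gas constant, J/(mol*K)

def solve_ideal_gas(P=None, V=None, n=None, T=None):
    """Solve P*V = n*R*T for whichever single argument is None."""
    if T is None:
        return (P * V) / (n * R)
    if P is None:
        return (n * R * T) / V
    if V is None:
        return (n * R * T) / P
    if n is None:
        return (P * V) / (R * T)
    raise ValueError("exactly one variable must be None")
```

For a handful of variables and one closed-form relation this is all a slider GUI needs; a real constraint solver only becomes necessary when the system can't be rearranged by hand.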
|
# ? Jun 4, 2009 20:00 |
|
The problem can be solved in any language, but it's easier in some than in others. Also, some languages end up working better for web programming because they have support from a webserver (Apache and mod_php) or their own server just for webapps in a particular language (Tomcat and Java). I'd probably approach it with Javascript so that I could keep everything executing in the browser window for the sake of simplicity.
|
# ? Jun 4, 2009 20:05 |
|
royallthefourth posted:I'd probably approach it with Javascript so that I could keep everything executing in the browser window for the sake of simplicity. Yikes. To Evil Home Stereo: If you only know of matlab, then you may not realise how different things like mathematica, macsyma and similar things are - it is less about powerful calculations, but more about symbolic manipulation. I would suggest looking at something like sage, which is a web based computer algebra system written in python. You should be able to write formulas directly into it and solve them in the browser: http://en.wikipedia.org/wiki/SAGE_(computer_algebra_system)#Calculus
|
# ? Jun 4, 2009 22:41 |
|
Evil Home Stereo posted:Thanks for the links, I'll check those out. I should point out that I'm an undergrad chemical engineering student so, while I have some computer science experience, it's only with things like Matlab and Java. Part of the problem is that we're trying to make a GUI with sliders for the variables, so you can change one of the values, and see how the rest move in response to that. It's possible to solve the problem in any Turing complete language (most of them). Java applets have been around for over a decade and it's easy to add Swing components to one to make your GUI. Javascript + DHTML might also be a possibility, as mentioned. Most any language can be integrated into a GUI.
|
# ? Jun 5, 2009 05:51 |
|
Evil Home Stereo posted:I don't really understand why it is possible to solve this type of problem in some languages and not others. *algorithm isn't quite the right word; the technical term is "computable function", which has different implications, which delves into a bit more CS theory than you probably care about at this point.
|
# ? Jun 5, 2009 15:09 |
|
A while ago I was asking a lot of questions about administering neural networks, and having used them long enough, I wanted to try to write up something of my own so that I can eliminate some of the unneeded abstraction for my own stuff and hopefully increase performance. Of course I'm getting stuck on 101 kind of stuff. My big question: Do contemporary neural networks use threshold values in their neurons? I'm talking generally about anything in the past decade using a sigmoid function, rather than a boolean gate. I know thresholds were bandied around in previous generations, so I see them get thrown around all over the place online. But I also see them not mentioned at all. Currently I have a basic way of connecting neurons, and I'm trying to find an example online for the XOR problem. I wanted to check if I have the basics going before I try to write training code of my own. The problem is that a lot of it is incomplete. I thought I'd try the one at the bottom of this page: // Based on http://www.gene-expression-programming.com/GepBook/Chapter5/Section3/SS1.htm If I feed that topology in, I'm definitely not getting right answers. It's confusing. I could be doing it wrong though. If nobody else can suggest a working XOR topology I can mimic, we can start plodding through my code.
|
# ? Jun 6, 2009 17:18 |
|
Rocko Bonaparte posted:Awhile ago I was asking a lot of questions about administering neural networks, and having used them long enough, I wanted to try to write up something of my own so that I can eliminate some of the unneeded abstraction for my own stuff and hopefully increase performance. Of course I'm getting stuck on 101 kind of stuff. The sigmoid function provides its own form of continuous-valued "threshold" through the shape of its graph, so you generally won't find explicit thresholds used there. Are you looking for options to change the topology of your network using genetic algorithms or are you just interested in getting a network to be able to solve the XOR problem (which is actually fairly historically notorious due to its non-linear separability)?
|
# ? Jun 6, 2009 22:48 |
|
I'm writing a game on top of Google's App Engine for my English class (we're supposed to write to a public audience, so I figured I'd write a game and then write fake blog posts about it). Right now, the plan is to have the player submit or choose a category (we haven't decided on submit or choose yet), grab thumbnails of Google Image Search results for words within that category, and then have the player guess the word based on looking at the images (and possibly seeing the letters scrambled up). My friend (who's writing the game with me) found this website onelook.com, and figured out that if you search on there for something like "*:emotion", it'll give you words related to emotion (http://onelook.com/?w=*%3Aemotion&ls=a). So I wrote some Python to scrape related word results from onelook. The issue I'm seeing now is that using this onelook site is not the best way to find words in a category. For example, the results on onelook for words related to "animals" are here: http://onelook.com/?w=*%3Aanimals&ls=a and just from eyeballing it, it looks like at least the results don't really contain any actual species of animals, which is what we would probably want for the game. I don't blame the site for this, we miscalculated what we needed. So finally my question: are there any websites where I can find words separated into categories? Like a list of animals, emotions, foods, countries, etc.
|
# ? Jun 7, 2009 00:19 |
|
The Red Baron posted:The sigmoid function provides its own form of continuous-valued "threshold" through the shape of its graph, so you generally won't find explicit thresholds used there. Are you looking for options to change the topology of your network using genetic algorithms or are you just interested in getting a network to be able to solve the XOR problem (which is actually fairly historically notorious due to its non-linear separability)? So if somebody has something describing a topology and weights already for solving XOR with which they have some confidence, I would want to see it so I could hand specify all of it and see if my model is correct. That's all. The link I gave previously is coming up with a network that isn't solving XOR, but it's also showing a lot of irrelevant numbers, and I might not be reading the GIF image of the topology correctly.
|
# ? Jun 7, 2009 01:58 |
|
Rocko Bonaparte posted:So if somebody has something describing a topology and weights already for solving XOR with which they have some confidence, I would want to see it so I could hand specify all of it and see if my model is correct. That's all. If you're just looking for any old example that solves the XOR problem it's not too hard to come up with one by hand: Click here for the full 607x485 image. Thresholds are drawn in the nodes, while connection weights are on the edges.
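In the same spirit as that diagram, here's a hand-built threshold-unit XOR in a few lines of Python (these particular weights and thresholds are my own illustrative choice, not necessarily the ones in the image):

```python
# Classic 2-2-1 XOR with hard-threshold neurons:
# hidden unit 1 acts as OR, hidden unit 2 acts as NAND,
# and the output unit ANDs them together.
def step(x, threshold):
    """Hard threshold: fire (1) iff the summed input reaches the threshold."""
    return 1 if x >= threshold else 0

def xor_net(a, b):
    h1 = step(1.0 * a + 1.0 * b, 0.5)      # OR(a, b)
    h2 = step(-1.0 * a + -1.0 * b, -1.5)   # NAND(a, b)
    return step(1.0 * h1 + 1.0 * h2, 1.5)  # AND(h1, h2)
```

Hand-specifying a known-good net like this is a decent way to validate the forward pass before trusting any training code.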
|
# ? Jun 7, 2009 02:26 |
|
Nippashish posted:If you're just looking for any old example that solves the XOR problem it's not too hard to come up with one by hand:
|
# ? Jun 7, 2009 02:44 |
|
Rocko Bonaparte posted:Gaaah see there it is. I'll google around and in one glance I'll read everybody's using sigmoids and thresholds are out. In another glance I'll get topologies with thresholds. I'm not implementing thresholds right now so I'm not really sure what to do. Should I just normalize all the input weights to each node so that the threshold would just be 1.0? I went with thresholds since the page you linked to is also using thresholds. A threshold function can be approximated with a very steep sigmoid function so in that sense thresholds are (almost) a special case of sigmoids. Sigmoids are analytically nicer to work with since they're differentiable, but AFAIK there aren't any fundamental differences in the types of problems they can solve (I am by no means an expert though). edit: To approximate thresholds with sigmoids if you have a transfer function like: hardlim(x-b) where x is the input and b is the threshold you can approximate that with a sigmoid using: 1/(1+exp(-100(x-b))) or whatever your favorite sigmoid function is. Nippashish fucked around with this message at 03:00 on Jun 7, 2009 |
# ? Jun 7, 2009 02:51 |
|
I'll have to add thresholds in my model then. Meanwhile:Nippashish posted:I went with thresholds since the page you linked to is also using thresholds.
|
# ? Jun 7, 2009 03:30 |
|
Rocko Bonaparte posted:It does? Is that expressed in the genomes? They're not in the diagram, which is what I'm trying to follow. They talk about thresholds here: http://www.gene-expression-programming.com/GepBook/Chapter5/Section1.htm where they explain the transformation into a tree, but on closer reading it looks like they keep the thresholds at 1 and adjust only the weights/network structure (I think). They don't seem to include the weights in their encoding either though so I don't know what's going on there.
|
# ? Jun 7, 2009 03:46 |
|
So I have pretty much no programming knowledge and for the most part feel like I've hit a wall as far as I can go without delving deep into some coding tutorials. Basically, I have a batch that uses a program based in the command prompt that generates a txt file. In the text file is a 4-digit integer that is always in the same place, but changes value every time: quote:
The specific integer there in bold. The batch I created for now, will create the txt file then load it in notepad and then let me input the number by hand then finish it off. code:
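Assuming the number can be picked out by a pattern (the actual file contents aren't shown above, so the sample text here is made up), a few lines of Python will do what the notepad-and-retype step does by hand:

```python
import re

# Hypothetical sketch: grab the first standalone 4-digit number from the
# generated text file's contents. The surrounding text is an assumption.
def extract_number(text):
    m = re.search(r"\b(\d{4})\b", text)
    return m.group(1) if m else None

sample = "Process complete. ID: 4821 written to log."
```

The batch file could then call this script and capture its output instead of pausing for manual input; `for /f` in batch can do the same extraction natively, but the quoting rules are considerably more painful.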
|
# ? Jun 7, 2009 04:22 |
|
Nippashish posted:They talk about thresholds here: http://www.gene-expression-programming.com/GepBook/Chapter5/Section1.htm where they explain the transformation into a tree, but on closer reading it looks like they keep the thresholds at 1 and adjust only the weights/network structure (I think). They don't seem to include the weights in their encoding either though so I don't know what's going on there. [edit] Heck, when I do the second example at http://www.gene-expression-programming.com/GepBook/Chapter5/Section3/SS1.htm, for a, b = 0 I should get near 0. I am working out the math and getting 0.2765. Sure that's less than 0.5, but that's far from ideal. Perhaps these neurons aren't using the sigmoid function? Neuron.java code:
code:
Rocko Bonaparte fucked around with this message at 04:45 on Jun 7, 2009 |
# ? Jun 7, 2009 04:36 |
|
Rocko Bonaparte posted:Then I guess I'll post my meager source code then and maybe I can come to some understanding about why it is failing. I'm hoping I find a glaring problem right after doing this and it's all over: Yeah, thresholds really aren't being used often anymore. I'm not sure if this explains anything but one thing I notice is different in your code compared to the neat-python neural net implementation is that the output is set to plus or minus 1 if the inputs exceed plus or minus 30. You might also try out using hyperbolic tan rather than the logistic function sigmoid. Check it out and test it against yours to see where they differ. http://code.google.com/p/neat-python/source/browse/trunk/neat/nn/nn_pure.py tripwire fucked around with this message at 00:51 on Jun 8, 2009 |
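For reference, here's a sketch of the two transfer functions being compared, with a clamp at ±30 in the spirit of what the neat-python source reportedly does (the exact clamp behavior in that library is from the description above, so treat it as an assumption and check the linked source):

```python
import math

def logistic(x):
    """Logistic sigmoid, saturated outside +/-30 to dodge exp() overflow."""
    if x > 30.0:
        return 1.0
    if x < -30.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-x))

def tanh_transfer(x):
    """Hyperbolic tangent: range (-1, 1), saturates on its own."""
    return math.tanh(x)
```

The practical difference is the output range: logistic gives (0, 1) while tanh gives (-1, 1) and is centered on zero, which often trains a little better.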
# ? Jun 8, 2009 00:49 |
|
I've got a question about Makefiles. I've got a Fortran program with about two dozen subroutines, all of which are dependent on 1-4 modules. For ease of editing, all the subroutines are in their own .f90 files; for the sake of clarity, I'd like to move 12 or so of those .f90 files into a subdirectory and leave them there. Here's the issue: I can't figure out how to get the Makefile to create and keep the .o files in the subdirectory, and how to make sure the program knows the .o files are there when it runs. I've searched Google, but either I'm not using the right search terms or I don't know enough about Makefiles to understand why the results are useful to me.
|
# ? Jun 8, 2009 04:15 |
|
Grundulum posted:I've got a question about Makefiles. Explicitly pass -o subdir/file.o as an option?
|
# ? Jun 8, 2009 04:33 |
|
Right now the problem I'm having is getting the subdirectory commands to locate the modules in the primary directory. I figure one way around this might be to kludge together a script that copies over the module files as part of the make, but that seems inelegant. Is there a better way to get make commands in subdirectories to recognize dependencies in other directories?
|
# ? Jun 8, 2009 05:01 |
|
Are you talking about the INCLUDE statement in FORTRAN? (I don't know the language, though I should). If so, add -I.. to the command-line options to search the parent directory for includes.
|
# ? Jun 8, 2009 05:05 |
|
No, I was being a buffoon. Dependencies are stupidly easy to locate in other directories, but other error messages convinced me I was doing it incorrectly. The correct way is filename: relative path to dependency, e.g. foo.o: foo.f90 dir/bar.mod. In one of my previous Google searches, I came up with the string "$(@D)/$(<F) -o $(@D)/$(@F)", which seems to do the trick. I have no clue what all those variables stand for, but I'm pretty sure I could find it in the GNU page on make. Fake edit: well, poo poo. The insane errors I've been getting aren't related to my shifting object files around. I just moved everything back to the usual directory, ran make as usual, and submitted the job, and I'm getting the same errors. It's the cluster that's having the issues, not me. gently caress, that was a waste of five hours. Guess I'll post again if I have questions. Thanks for your help, Avenging Dentist.
|
# ? Jun 8, 2009 05:39 |
|
tripwire posted:Yeah, thresholds really aren't being used often anymore. I'm not sure if this explains anything but one thing I notice is different in your code compared to the neat-python neural net implementation is that the output is set to plus or minus 1 if the inputs exceed plus or minus 30. You might also try out using hyperbolic tan rather than the logistic function sigmoid.
|
# ? Jun 9, 2009 22:09 |
|
Rocko Bonaparte posted:I have a ton of dirty tricks for Java performance You should post these anyway
|
# ? Jun 10, 2009 00:17 |
|
Rocko, if you want, there are existing very good (and fast) C implementations of various kinds of neural nets. Shouldn't you be able to inject that code into your Java application if you are concerned about performance?
|
# ? Jun 10, 2009 00:45 |
|
tripwire posted:Rocko, if you want there is existing very good (and fast) c implementations of various kinds of neural nets. Shouldn't you be able to inject that code into your java application if you are concerned about performance? The code I've written that is using the neural networks is in Java, so there's Java in some form or another. If I went to C I'd have to deal with the native interface in all its glory. A microkernel would mean reducing the number of times it goes back and forth. What I'd really like would be a common way of instantiating neurons across languages. With these genetic neural networks, I don't see why one couldn't construct a network in Java as in C using the genes. So I could train the network in a highly-optimized, native C program (that is free to actually optimize double-precision numbers). Then I could dump the genes out to disk and have a Java version of the same load these genes and create a network out of it. Really, the two libraries could be completely different with different APIs, so long as they could build the network from each other's output. Even in Java you could have two different ways of representing a network. One would have the layers in dynamic structures so you can add or remove neurons and links when it's time to play with the topologies, but you could then reload the network into one that uses fixed arrays and all of that overhead goes away. I assume this would be EANT-friendly because EANT only plays with the topology in certain phases, right? One wholesale trick I was considering was to instantiate the current neat4j library I'm using in my own little wrapper, and call its main method with a small, dummy training set. Each time the methods inside the library exit, they get a chance to be recompiled to be faster. So after a few dummy runs with data that doesn't even concern me, I could hit it with the large workload that I care about, and it should run much faster. 
I believe the JVMs were getting better at being able to recompile on the fly--even in code that's actively running--so I don't know how relevant that is anymore. The biggest problem though is the floating-point arithmetic. Java won't reorder those at all. They've bandied around a revision of the Java specification for awhile now to do fast FP, but it's never gone anywhere. I think that JSR is something like a decade old. I suspect the big computational cruncher in the network is basically the multiply-add loops going into the activation functions, but I'll profile before I conclude that. In a native x86 compiler, it should be able to sort out a much better solution to calculating them (MAD instructions come to mind). Edit: Wow my back propagation neural network is really screwing up. I don't think I'll be starting a thread on AI for awhile. Edit 2: What's a good, trustworthy source for neural network 101 kind of stuff? I'm using this site right now: http://richardbowles.tripod.com/neural/neural.htm I started with another site and implemented their backwards propagation algorithm. I was modifying the weights before calculating the errors for the preceding neurons, which apparently is a big no-no. The other site didn't make any mention of this, and from the notation, I actually assumed I was supposed to do that. I'm sure I have my own stupid mistakes, but I suspect all my bugs right now are conceptually-related. Rocko Bonaparte fucked around with this message at 17:52 on Jun 10, 2009 |
# ? Jun 10, 2009 04:07 |
|
Rocko Bonaparte posted:It depends on the API to the C implementation. If the native code functions like a microkernel where I dump it a batch of data and let it loose for awhile, I could imagine it performing pretty well. If instead I had to, for example, call the native functions every time I wanted to evaluate the network, it might turn out to be worse. I would pay the cost of transforming my data to conform to the native code, then there's the overhead of the JVM handing control over to the native code and back. I know you are really set on java and the neat4j package but I highly recommend taking a look at the architecture of the neat-python package because (in my opinion) it solves these issues in a very elegant manner. I'm not sure how much of that is due to nicities of the python-c api, or how much would be applicable to java or the way you have your project set up now, but it might be worth a look over. The author's first language isn't english so some of the classes have names which might be seen as confusing: chromosome.py for example contains the definition of a Chromosome class; this is the class used at the conceptual level of the genetic algorithm, and contains a structured set of node/neuron genes and connection/synapse genes. At the conceptual level of the genetic algorithm, its really not important what specific implementation is used for neural nets or their constituent components. The actual type of neural net that is used to implement a given chromosome can be a specialized subclass like a feedforward net or a continuous time recurrent net (the python C api makes it possible and easy to drop in C implementations if one desires). Here is an example from when I was trying to use NEAT to evolve walking gaits for ragdoll models: http://pastebin.com/m1eadde5f That code is responsible for feeding input values into a neat-python neural net, and applying the output values to some muscles in a simulator. 
I import the pure python implementation of the neural net there in the second line: code:
|
# ? Jun 10, 2009 18:08 |
|
Also rocko: check this guys page out http://mypage.iu.edu/~rdbeer/
|
# ? Jun 10, 2009 18:17 |
|
I'm not so set on neat4j, but I am set on Java since the network is a small part of something I've written over a year that's in the tens of thousands of lines of code. So the final network would have to run in Java, even if I, say, trained the networks using a program in another language. Since I'm having so much trouble, I might try some basic stuff with those Python libraries to validate that I'm doing the basics correctly. I don't know much about the Python-C interface, other than hearing about it being very fast for language bridges. The Java native interface is very slow, meant more for supporting legacy stuff that you can't just rid yourself of for whatever you're doing in Java. It's very difficult to lean on it for a performance boost, since whatever native code goes under the hood will have to make up for the lost cycles the JVM consumes in relinquishing and regaining control.
|
# ? Jun 10, 2009 19:32 |
|
Here's a total retard question. Designing a site with Dreamweaver: I've created a page, and wanted to make sure that the best (and only?) way to get it to center on various resolutions is by using frames. Is that true, or is there a better way? Thanks.
|
# ? Jun 11, 2009 19:23 |
|
Cock Inspector posted:Here's a total retard question. Designing a site with Dreamweaver: <div style="width: whatever; margin: auto;"> ... </div>
|
# ? Jun 11, 2009 20:41 |
|
I sorted out my back propagation issues. For testing, I thought I was being smart by using a deterministic weight of 0.5 on every link. The problem with that is my network is also symmetrical, so all the links will end up getting adjusted the same amount during each correction. So it would end up deadlocking. When I randomized all the weights, it trained right up. I'm thinking it's time to move this stuff off into an AI thread here. Is anybody game to that? My own contributions to the thread would be pretty weak, since I'd be mostly asking questions that would get me moving more towards implementing EANT in Java. Another thing about Java optimization is that if you can apply things like the distributive property to floating-point numbers, do so manually. The JVM will issue the floating-point instructions in the order they're given in the code. I believe that's also due to precision paranoia.
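The symmetry trap is easy to demonstrate: if every weight starts at the same value, both hidden units of a 2-2-1 net compute identical activations, receive identical backprop deltas, and therefore get identical updates forever, so they can never differentiate. A minimal sketch in plain Python (weights and learning rate are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-2-1 net, every weight initialized to the same value (the failure mode).
w_ih = [[0.5, 0.5], [0.5, 0.5]]  # input -> hidden, one row per hidden unit
w_ho = [0.5, 0.5]                # hidden -> output

def train_step(x, target, lr=0.5):
    """One standard backprop step on a single (input, target) pair."""
    global w_ih, w_ho
    h = [sigmoid(sum(w * xi for w, xi in zip(w_ih[j], x))) for j in range(2)]
    o = sigmoid(sum(w * hj for w, hj in zip(w_ho, h)))
    d_o = (o - target) * o * (1 - o)
    d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
    w_ho = [w_ho[j] - lr * d_o * h[j] for j in range(2)]
    w_ih = [[w_ih[j][i] - lr * d_h[j] * x[i] for i in range(2)]
            for j in range(2)]

train_step((1.0, 0.0), 1.0)
```

After any number of steps, the two hidden units' weight vectors are still bit-for-bit identical, which is exactly the deadlock described above; random initialization is what breaks the tie.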
|
# ? Jun 12, 2009 16:59 |
|
Rocko Bonaparte posted:When I randomized all the weights, it trained right up. Sufficiently randomized starting weights are absolutely critical to any sort of evolutionary algorithm, otherwise the algorithm doesn't have a large enough search space to work with.
|
# ? Jun 14, 2009 17:20 |
|
In Terminal, I accidentally copied a directory to my /Users/dilbread directory instead of /Users/dilbread/files/ directory and now my Users/dilbread directory is swimming in a pile of poop. Is there anything I can do besides ls -lt and just deleting the newer files one by one?
|
# ? Jun 16, 2009 18:29 |
|
dilbread posted:In Terminal, I accidentally copied a directory to my /Users/dilbread directory instead of /Users/dilbread/files/ directory and now my Users/dilbread directory is swimming in a pile of poop. Is there anything I can do besides ls -lt and just deleting the newer files one by one? http://www.unix.com/unix-dummies-questions-answers/1078-moving-files-based-creation-date.html Moreover, http://www.google.com/search?q=moving+files+by+date+unix&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
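If the shell one-liners on those pages don't pan out, a small Python sketch does the same job. The one-hour cutoff is a guess: set it to roughly when the botched copy happened. Note it keys off modification time, which a plain `cp` (without `-p`) sets to the copy time, so the accidentally copied files should sort as "recent":

```python
import os
import shutil
import time

# Sketch: move every regular file in src modified within the last
# max_age_seconds into dst. Directory paths and cutoff are assumptions.
def move_recent_files(src, dst, max_age_seconds=3600):
    cutoff = time.time() - max_age_seconds
    moved = []
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if os.path.isfile(path) and os.path.getmtime(path) >= cutoff:
            shutil.move(path, os.path.join(dst, name))
            moved.append(name)
    return moved
```

Run it with dst pointing at a scratch directory first and eyeball the returned list before deleting anything.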
|
# ? Jun 16, 2009 18:41 |
|
quadreb posted:http://www.unix.com/unix-dummies-questions-answers/1078-moving-files-based-creation-date.html
|
# ? Jun 16, 2009 20:03 |
|
|
|
I'm using fread in VC++ 9 to read a binary file, and it keeps returning a bad pointer. code:
|
# ? Jun 16, 2009 20:27 |