|
OverloadUT posted:
Because you shouldn't have a whole bunch of properties and methods that are only used in a specific context and will be ignored all other times.

I don't think you've really thought this design through. D&D monsters do not (generally) persist across encounters, whereas PCs do. The monster data (in your config file) is closer to a character class than an actual character: i.e. a template to create a character. Basically, you are trying to conflate two different things. The PCs' information absolutely should persist, since any items used, etc. will be gone in the next encounter as well. What you need is something to spawn monsters for the encounter based on monster "classes". You could probably use the factory pattern for this, though I am loath to recommend design patterns (but even more loath to write more than I have, so...).
|
# ¿ Apr 8, 2010 02:45 |
|
OverloadUT posted:
I'm saying that in well-structured code you shouldn't have just one gigantic class with a billion properties for every single thing that monster might be used for. My app will deal with "monsters" outside the scope of combat where initiative, status effects, and currentHP are all concepts that do not exist. Therefore those properties should not be in the base Monster class.

This argument is tenuous at best. (Especially since I can't see why you'd store initiative with the character anyway, as opposed to the key into a sorted dictionary that you iterate over for the round.) Other status effects absolutely do exist outside of combat rounds. (Ok, this is in 3.5, but if you use 4e, then ) Also I'm pretty sure D&D monsters don't even really have non-combat stats since they're, you know, monsters.

OverloadUT posted:
Yes this is the basic idea but I agree the factory pattern is not the way to go. A MonsterInstance will be created from a "MonsterTemplate" or "MonsterPrototype" or something like that.

A rose by any other name

EDIT: here is some slightly more specific advice:
1) You are overdesigning your idea (and I'm going to presume underimplementing, since it doesn't sound like you have a proof-of-concept yet), though this is perhaps fitting given the D&D rules set's tendency to do the same.
2) There's nothing that says you have to serialize all the combat data back to the source file for the character (and you probably don't have to serialize any of it for monsters, unless they're persistent, in which case they're basically characters).
3) You're worrying about the complexity of adding a few members for HP and the like while seemingly ignoring the complexity of all sorts of other general-purpose rules necessary for characters (inventory management, XP, skills, to name a few).
4) If you're hell-bent on keeping HP out of the Character class, why not put it in the Encounter class and have characters act as controllers (and buckets of stats) with combat changes occurring in the Encounter's data?
5) I question the purpose of this in the first place, since how would the DM cheat on his dice rolls if the game were computer-assisted?

Avenging Dentist fucked around with this message at 08:31 on Apr 8, 2010 |
# ¿ Apr 8, 2010 08:07 |
|
Much as I like C++ and hate Java, if you are getting a factor of 2 performance increase in basic math, you are doing something horribly wrong in Java.
|
# ¿ Apr 13, 2010 01:23 |
|
Rocko Bonaparte posted:
I will send it through pastebin so I can attach on an example of how it hashes a year worth of dates w/o a wall of text:

The number of collisions you have in that hash is absolutely ridiculous (to the point that I saw a few before I even sorted the lines by hash). You have a total of 42 unique hashes in that entire list.

EDIT: first things first, I'm going to assume that you think a + b << c == a + (b << c). It doesn't, in general. + has higher precedence than <<.

EDITx2: Oh wow. When you take the modulo of that with just about any reasonably-sized power of 2, they all fall in the same bucket. So basically you might as well have just written "return 0;"

Avenging Dentist fucked around with this message at 01:58 on Apr 17, 2010 |
# ¿ Apr 17, 2010 01:32 |
|
Scaevolus posted:
Why not just take the seconds since the epoch plus the time of day in nanoseconds?

Unless he's actually likely to have keys whose only difference is in nanoseconds, this is unnecessary and would probably just increase the chance of collisions. Since he appears to be using 64-bit values, he has enough space to have a unique hash for every 23 milliseconds since the beginning of the universe. He could just store the number of microseconds since some minimum date and be fine. Also this method has the benefit of making sure that relatively nearby times won't be congruent modulo N for most N (i.e. they won't fall into the same buckets).

Also Rocko, if by "an additional hashing step", you're referring to the fact that you hash the integers you get from the date and time, that does literally nothing (at basically zero performance cost) since boost's hash function for integral values is the identity function.

Avenging Dentist fucked around with this message at 23:43 on Apr 17, 2010 |
# ¿ Apr 17, 2010 23:41 |
|
Rocko Bonaparte posted:
As you said boost's foreach is the devil and I'm curious if in using it I'm incurring some nasty overhead. If I can start pegging events to locations in the code I think I'll know better what's really happening.

Performance should be pretty close if not the same when you optimize, but there's a lot happening under the hood.
|
# ¿ Apr 18, 2010 00:21 |
|
TheGopher posted:
CarlH (the author of those tutorials) wrote some pretty comprehensive information about programming. Maybe you could have taken 1 minute to click on the link and look at the most recently published lessons (which were from awhile ago) to see what I've been learning.

You are the one asking for help. It is not shrughes' responsibility to try to decipher what it is you've learned.

TheGopher posted:
The fact you'll bash him just because he's a Redditor without even reading what he's written really says a lot.

I skimmed over his tutorials and, lo and behold, it looks like shrughes' skepticism was totally merited. Teaching gotos before conditionals? Check. Asserting that arrays are pointers? Check. Asserting that all non-primitives are pointers? Ch-- wait what?? For those playing the home game:

Reddit Idiot posted:
As I stated in an earlier lesson, any time you are working with any type of data more complex than a single variable of a given data type, you are working with a pointer.

Granted, this is probably just lovely phrasing, but why would you read a tutorial with such lovely phrasing to begin with?

TheGopher posted:
I know basic C concepts. Stuff like pointers, arrays, defining classes, typecasting, etc.

https://www.youtube.com/watch?v=0WhuikFY1Pg

TheGopher posted:
The resources I've looked at read like a 5th grader's explanation of fractions, or read like any unix "man" entry. That is what I meant by "too technical" or "poorly written". I want something in between, something I can follow with a reasonable amount of effort.

Go program. (This is the serious part of my post. Listen to it.)

Avenging Dentist fucked around with this message at 01:51 on Apr 19, 2010 |
# ¿ Apr 19, 2010 01:48 |
|
TheGopher posted:
If I'm learning from a lovely resource, how is doing more programming going to teach me to be a better programmer? While you're getting off on being so smart and being such a great programmer maybe you could actually help me instead of trying to troll me out of this topic for not knowing enough.

Do you want to know how I and nearly everyone in this subforum learned to program? We programmed. Period. There's no magic elixir, no spellbook that just grants you the skills derived from actual experience. Stop expecting that something like this exists. By the time you know enough to benefit from halfway decent resources you will already know where they are. Go. Figure out some stuff you want to program. Try to do it. Fail. Learn from your failures. Repeat. Programming is hard work, but the process by which you learn is not complicated.
|
# ¿ Apr 19, 2010 02:03 |
|
Secx posted:
I did some research and found an algorithm that would fit my needs. I used it in a Ruby script, but the problem is speed. Comparing one name against 155,000 others to find those with a score of > 0.8 took 70 seconds on my laptop. It would take 4 months to compare all 155,000 at that rate.

If the function is symmetric, it's two months, not four. And why would you do this in Ruby instead of a language suited for numeric processing (e.g. C)? Ruby is slow, and depending on the problem being solved (and the quality of the code), C can be up to 1000 times as fast: http://shootout.alioth.debian.org/u32/benchmark.php?test=all&lang=ruby&lang2=gcc
|
# ¿ Apr 20, 2010 23:28 |
|
You probably want exec(). Also holy god I think you are entering new, unexplored worlds of security vulnerabilities.
|
# ¿ Apr 22, 2010 16:33 |
|
gibbed posted:
Execution Operators

Really? They really had to take that one feature from Perl that only makes sense in a shell-scripting environment?
|
# ¿ Apr 23, 2010 13:44 |
|
Yes there is: it's called Luabind, and I used to work on it.
|
# ¿ May 5, 2010 19:09 |
|
Honestly, it sounds like you're pretty close to using separating axis already. I'd go with that off the top of my head. (Also you probably don't mean "object-oriented bounding box"; the term is "oriented bounding box", OBB.)
|
# ¿ May 6, 2010 20:22 |
|
Peao posted:
There MAY be Collab.net but what sort of company asks you to register before you can get to an installer? I'm staying well clear.

God who cares
|
# ¿ May 7, 2010 01:58 |
|
Don't use Makefiles. But if you have to for some reason, use autotools (which is also awful but slightly less so than Makefiles).
|
# ¿ May 10, 2010 17:02 |
|
Makefiles are awful, not very platform-independent, and a pain in the rear end to maintain. I use autotools for work because I have to. Big boys use one of SCons, CMake, Autotools, MSBuild, or Boost.Build, in rough order of coolness.
|
# ¿ May 10, 2010 17:10 |
|
mr_jim posted:
Otherwise, go with cmake, or (ugh) autotools.

I'm pretty sure CMake still requires CMake to be installed on the system if you want the ability to configure.
|
# ¿ May 10, 2010 18:15 |
|
I wonder how they handle overload resolution for that (I wonder because I want to do the same thing in my babby language).
|
# ¿ May 13, 2010 19:47 |
|
You guys are all terrible at advice. Notice where he says:

Ziir posted:
Edit: Computing power isn't that big of a deal because everything that requires a lot of processing power would be offloaded to a dedicated computer in the lab or a cluster.

If you think it's likely that you'll be programming stuff that gets put on a cluster, use whatever OS the cluster uses (or as close to it as possible). Do you really want to deal with weird compiler issues because of Apple's customized (and frequently outdated) version of GCC?
|
# ¿ May 15, 2010 18:54 |
|
Dijkstracula posted:
Yeah, I noticed this, but I've never had any issues porting code I've developed on my Mac to a Linux cluster so I didn't comment on it.

We regularly have issues making stuff work on Linux and Mac at my job. Usually because Mac does some bizarre thing like making CC point to gcc instead of g++.
|
# ¿ May 15, 2010 19:00 |
|
I don't mean the environment variables, I mean the commands. CC the command, when it exists, should be a symlink to the system C++ compiler. Though I guess this has to do with case-insensitive filesystems on Mac?
|
# ¿ May 15, 2010 22:59 |
|
rjmccall posted:
Yeah, that convention is clearly unsupportable on case-insensitive filesystems. I'll admit to never having heard of it before, but apparently it's common among the stately old commercial UNIX vendors.

Welcome to autotools.
|
# ¿ May 16, 2010 03:00 |
|
I'd imagine it's because there's still only one mpd running and it was the mpd for the other version of MPI.
|
# ¿ May 19, 2010 20:02 |
|
SiLk-2k5 posted:
C++ is very useful for file operations / quick calculations.

What?
|
# ¿ Jun 3, 2010 20:03 |
|
Do you know how to make coffee?
|
# ¿ Jun 7, 2010 06:27 |
|
Read this article: http://norvig.com/21-days.html
|
# ¿ Jun 8, 2010 07:30 |
|
In general, compilers do not fold floating-point constants (VS2010 might, due to changes in the C++ standard). If you actually care, and you shouldn't right now, check the assembly code generated by the compiler. You'd probably be better off spending your time working on other stuff, though, instead of worrying about micro-optimizations (unless you've already profiled your code and know that this is taking a large percentage of your CPU time).
|
# ¿ Jun 10, 2010 19:25 |
|
rjmccall posted:
I'm not sure where you're getting this. This is really a very common optimization; I don't have a copy of VS to test with, but both gcc and clang fold wasabimilkshake's constant at -O1.

What version of GCC? As I recall, before they started working on C++0x support, they didn't do this. Since you have to do this for C++0x, it's obviously going to be the standard going forward.
|
# ¿ Jun 10, 2010 22:12 |
|
Dijkstracula posted:
As for the makefile question, I can't think of anything that could be screwed up at the Makefile level that would cause a segfault.

Pointing to the wrong mpicc could do it.
|
# ¿ Jun 10, 2010 23:37 |
|
LittleBob posted:
It's still this:

1) Why are you calling it with sh instead of just executing it? Wouldn't that make it run in sh instead of bash?
2) Why are you even using shell scripting? Honestly, if you have a hard time with shell scripting, that's the time to load up Perl or Python instead.
|
# ¿ Jun 11, 2010 19:43 |