|
QuarkJets posted:On topic, one of the engineers at the place where I work insists that you should only use 'new' and 'delete' for classes, never for primitives. If you want an array of ints, then you should only create it with malloc. Is this just crazy talk or is there a legitimate reason for this notion? I think operator new has a requirement to default-construct the allocated objects, which in practice, for primitive types, means to zero-initialize the whole array. I'm not sure, because I think my compiler/library made an exception for primitive types and left them uninitialized, but I don't know if that's standards-compliant. However, C-style arrays are dubiously designed in the first place, and in C++ you have better alternatives (the array and vector standard classes). Worst comes to worst, you could override the allocation operators, or customize the standard ones, or define new ones using "placement" syntax. Whatever you do, never use a naked malloc in C++: operator new is not just fancier syntax.
|
# ? Apr 16, 2013 08:12 |
|
hackbunny posted:I think operator new has a requirement to default-construct the allocated objects, which in practice, for primitive types, means to zero-initialize the whole array. I'm not sure because I think my compiler/library made an exception for primitive types and left them uninitialized, but I don't know if that's standards-compliant. It is. new expressions are required to do extra zero-initialization only when followed by (). Thus, new double[1024] does not zero-initialize, and neither does new double, but new double[1024]() and new double() do. This zero-initialization is required unless the base element type (disregarding nested array types) is a class type with a user-declared (not just non-trivial; it has to be declared explicitly) default constructor. The performance problem with new[] is exactly the opposite: if you allocate an array of a type like std::string, then every object has to get individually constructed immediately, and the implementation has to spend extra storage remembering how many elements there were so that delete[] can destroy them later. That makes it mostly useless for, say, backing a data structure. There is also no way to specify how to construct each element, so you typically have to default-construct them all and then assign into them individually as opposed to constructing them in place.
|
# ? Apr 16, 2013 08:39 |
|
Java, I hate you so loving much
|
# ? Apr 16, 2013 09:43 |
|
hackbunny posted:However, C-style arrays are dubiously designed in the first place, and in C++ you have better alternatives (the array and vector standard classes) There is nothing 'dubious' about statically sized arrays. If you know how large an array or buffer you want beforehand, use them. If you want something dynamically sized, consider std::vector. std::array isn't a big pain to use in C++11, and it is sometimes nice in that it has the normal collection functions defined for it, but it hasn't made regular arrays obsolete, and even very modern Boost code uses them liberally where they make sense.
|
# ? Apr 16, 2013 13:45 |
|
dis astranagant posted:Where does S+ fit in all of this? I had to use it in a stats class once and remember it being somehow related to R. S+ and R are both descendants of the statistical language S, developed at Bell Labs in the 70s. They have some differences in implementation, and I've never actually used S+, but they're supposedly very similar. The most important difference is that S+ is a commercial language. R is the one programming language I really know, and I usually like working with it, but that article linked about the problems R gives newbies rings true. I would never recommend it to people for anything but statistics and maybe math problems. I had a colleague with more coding experience who had to work with it; he hated it like crazy. His most hated feature (besides indexing beginning at 1 in R) was that R changes the type of variables quite freely, like changing a matrix to an "R vector" with no dimensionality attributes if you select only one column or row from it.
|
# ? Apr 16, 2013 14:15 |
|
Otto Skorzeny posted:There is nothing 'dubious' about statically sized arrays. If you know large of an array or buffer you want beforehand, use them. If you want something dynamically sized, consider std::vector. std::array isn't a large pain in C++11, and it is sometimes nice in that it has the normal collection functions defined for it, but it hasn't made regular arrays obsolete, and even very modern Boost code uses them liberally where they make sense. We were talking about dynamic allocation, weren't we? Dynamic C-style arrays are indistinguishable from pointers to objects, so they play horribly with polymorphic classes, and can't be safely freed with delete; if you pass them around, and unless you use template arguments or type-safe wrappers, statically-sized arrays will degrade to dynamically-sized arrays, so they aren't 100% safe either. Arrays in C++ are a "specialist" feature that should give you pause, because they have non-trivial issues, and I'm not even sure I'm remembering all of them
|
# ? Apr 16, 2013 15:09 |
|
hackbunny posted:Java, lol, and there still isn't a loving standard Base64 class in the jre I think Java 8 will finally add it.
|
# ? Apr 16, 2013 15:16 |
|
hackbunny posted:We were talking about dynamic allocation, weren't we? Dynamic C-style arrays are indistinguishable from pointers to objects, so they play horribly with polymorphic classes, and can't be safely freed with delete; if you pass them around, and unless you use template arguments or type-safe wrappers, statically-sized arrays will degrade to dynamically-sized arrays, so they aren't 100% safe either. Arrays in C++ are a "specialist" feature that should give you pause, because they have non-trivial issues, and I'm not even sure I'm remembering all of them I was responding to the unqualified and thus general statement you made, rather than the more pointed advice that you had given before it: hackbunny posted:However, C-style arrays are dubiously designed in the first place, and in C++ you have better alternatives (the array and vector standard classes) Incidentally, some of the issues you mention are misremembered: normal arrays do not decay to "dynamically-sized arrays" when passed as function arguments, but rather they decay to a pointer to their first element. As an aside, the C++ programming language does not have true dynamically sized arrays: it has std::vector which is similar but not identical to a dynamic array, and it has a couple of ways to simulate dynamic arrays using pointers and dynamic memory allocation. C99 has runtime-sized but not resizeable monstrosities called VLAs that are bad and should never be used. This whole conversation reminds me that nobody should talk about arrays without reading the comp.lang.c FAQ (talk about vectors all you want without it), and that I am glad that I live in static allocation land at work
|
# ? Apr 16, 2013 15:24 |
|
Hard NOP Life posted:lol, and there still isn't a loving standard Base64 class in the jre I think Java 8 will finally add it. The best part is how many unique com.sun.* and sun.* entries there are Otto Skorzeny posted:As an aside, the C++ programming language does not have true dynamically sized arrays: it has std::vector which is similar but not identical to a dynamic array, and it has a couple of ways to simulate dynamic arrays using pointers and dynamic memory allocation. I meant vector was explicitly designed to replace malloc/calloc arrays. Stroustrup himself said it. They even amended the specification so that vector was mandated to wrap a single contiguous array, to make it 100% clear what vector is for. Same thing with string, which is just a C++-flavored replacement for C strings (there is no other reason it should be zero-terminated), which kind of justifies its overall crappiness as a string class Otto Skorzeny posted:C99 has runtime-sized but not resizeable monstrosities called VLAs that are bad and should never be used. I hate how easy it is to use them accidentally in gcc Otto Skorzeny posted:This whole conversation reminds me that nobody should talk about arrays without reading the comp.lang.c FAQ (talk about vectors all you want without it), and that I am glad that I live in static allocation land at work I've done my share of loving around with statically-sized byte arrays (it was a network protocol and I had absolutely no intention of fighting with packing/alignment settings, therefore byte arrays wrapped in template structs EVERYWHERE) hackbunny fucked around with this message at 16:21 on Apr 16, 2013 |
# ? Apr 16, 2013 16:18 |
|
hackbunny posted:(it was a network protocol and I had absolutely no intention of fighting with packing/alignment settings, therefore byte arrays wrapped in template structs EVERYWHERE) The RTC on my current project at work inexplicably communicates via BCD-encoded ASCII character strings
|
# ? Apr 16, 2013 16:30 |
|
That's so you can make a display circuit in hardware without lots of complex carry logic. You can just drive each digit of the display separately.
|
# ? Apr 16, 2013 16:48 |
|
abiogenesis posted:What do you guys think about R? Off the top of my head, some benefits of R are: The NA "number" for not available or missing values. Truly indispensable when you have input data with missing samples, are trying to match vectors/matrices with different dimensions, or have algorithms (e.g., a non-circular convolution filter) for which certain output values are genuinely unknown. AFAIK MATLAB doesn't have this, with some folks using NaN (which also exists in R) as a crude approximation. Most functions treat "NA" values reasonably. Any numeric computation language that lacks an equivalent concept is broken in my book. Functions are first-class. "lapply" works like "map" in Python/Ruby/functional languages. I write a lot of R code in a somewhat functional style when it's too complicated to be vectorized (it usually contains vectorized code internally), and it beats the poo poo out of for loops and iterators. And downsides/horrors/WTFs: The language is really "intended" to be used interactively in a terminal (although readline support makes this far better than MATLAB's terminal mode). R scripts are run with the "Rscript" program, which really just invokes the R runtime in a "batch" mode that's essentially equivalent to running the script in the interactive shell line by line. A consequence of this is that the way code is parsed depends on whether it's in a block scope or at the (interactive) top level. For example, if you write code in separate-line-for-brace style, e.g., this snippet: code:
R runtime posted:Error: unexpected 'else' in "else" code:
The other downside to R is that it's not awesome with regard to memory management. I'm not aware of a mechanism that allows truly-large data sets (data.frames) to be file backed, so you're limited by those that can fit in RAM. Many operations generate new data.frames/matrices instead of modifying existing ones, so sometimes you'll have to manually delete variables to free memory in the middle of a block, just to avoid running out of RAM or thrashing a lot. Related is the fact that data parsing functions (e.g., read.table) are quite slow due to their flexibility in being able to automagically parse many formats. These can be sped up quite significantly by declaring column contents and other parsing tricks. Honestly, all this is "fine" and is to be expected when dealing with larger data sets. My complaint is that R doesn't really offer a lot with regard to easy-to-use profiling tools to make these kinds of optimizations easier to implement. Now, all that said, the reason I use R over MATLAB, aside from its bias towards statistical computing, is because of (i) CRAN, (ii) the fact that it's already packaged in most Linux distributions, and (iii) the fact that, because it's free, it's trivial to instantiate a large cluster of R-running nodes without having to deal with licensing headaches. ExcessBLarg! fucked around with this message at 17:27 on Apr 16, 2013 |
# ? Apr 16, 2013 17:22 |
|
Hard NOP Life posted:lol, and there still isn't a loving standard Base64 class in the jre I think Java 8 will finally add it.
|
# ? Apr 16, 2013 18:05 |
|
ExcessBLarg! posted:The other downside to R is that it's not awesome with regard to memory management. I'm not aware of a mechanism that allows truly-large data sets (data.frames) to be file backed, so you're limited by those that can fit in RAM. Many operations generate new data.frames/matrices instead of modifying existing ones, so sometimes you'll have to manually delete variables to free memory in the middle of a block, just to avoid running out of RAM or thrashing a lot. Related is the fact that data parsing functions (e.g., read.table) are quite slow due to their flexibility in being able to automagically parse many formats. These can be sped up quite significantly by declaring column contents and other parsing tricks. Honestly, all this is "fine" and is to be expected when dealing with larger data sets. My complaint is that R doesn't really offer a lot with regard to easy-to-use profiling tools to make these kinds of optimizations easier to implement. I haven't used it, but: http://cran.r-project.org/web/packages/bigmemory/index.html ?
|
# ? Apr 16, 2013 19:23 |
|
To add to the discussion about experiment reproducibility: http://www.nextnewdeal.net/rortybomb/researchers-finally-replicated-reinhart-rogoff-and-there-are-serious-problems quote:All I can hope is that future historians note that one of the core empirical points providing the intellectual foundation for the global move to austerity in the early 2010s was based on someone accidentally not updating a row formula in Excel.
|
# ? Apr 16, 2013 20:13 |
|
No, you don't have to clone the repo to make a branch. Yes, I understand, if you make a commit and then another commit the second commit points to the first commit. Yes, in git you can create a branch that points to an arbitrary commit. Yes, this is a common workflow.
|
# ? Apr 16, 2013 22:45 |
|
Otto Skorzeny posted:C99 has runtime-sized but not resizeable monstrosities called VLAs that are bad and should never be used. Why are VLAs bad? I haven't really run into them before, so honest question.
|
# ? Apr 17, 2013 02:08 |
|
Arcsech posted:Why are VLAs bad? I haven't really run into them before, so honest question. It makes it really easy to run out of stack if you aren't careful.
|
# ? Apr 17, 2013 02:16 |
|
Volmarias posted:No, you don't have to clone the repo to make a branch. Yes, I understand, if you make a commit and then another commit the second commit points to the first commit. Yes, in git you can create a branch that points to an arbitrary commit. Yes, this is a common workflow. Isn't being able to do that kind of the whole point of git?
|
# ? Apr 17, 2013 03:30 |
|
evensevenone posted:Isn't being able to do that kind of the whole point of git? It sounds like someone taking their mental model of how e.g. Subversion works and mapping it directly to git. Which isn't entirely unfair when the operation and result in both cases is called "branching" and "a branch". On the other hand, if this is the tenth time you've tried to explain how it works...
|
# ? Apr 17, 2013 04:39 |
|
Yeah the real problem is that git commands have no particular relationship to what one would reasonably assume that they do based on their names, or what those commands do on other vcs.
|
# ? Apr 17, 2013 05:26 |
|
evensevenone posted:Yeah the real problem is that git commands have no particular relationship to what one would reasonably assume that they do based on their names, or what those commands do on other vcs. Branching makes perfect sense to me It's exactly like a branch of a tree in that they join at one point and then go separate directions, but both go back to the trunk at some point. Merging is just making those two branches into one. (Rebasing, however, has always screwed me up.)
|
# ? Apr 17, 2013 06:45 |
|
The command means exactly what the word says, but people used to Subversion/Perforce tend to forget that those systems "implement" content branching by not really implementing it at all but making you simulate it yourself with the directory tree. This is why so many people, when they want to branch content in git, approach it as "I gots to make a new directory somewheres and copy stuffs to it." Gazpacho fucked around with this message at 08:40 on Apr 17, 2013 |
# ? Apr 17, 2013 06:57 |
|
evensevenone posted:Yeah the real problem is that git commands have no particular relationship to what one would reasonably assume that they do based on their names, or what those commands do on other vcs. Git branch does exactly what I would expect it to. What do you think it should do based on its name?
|
# ? Apr 17, 2013 07:13 |
|
branch isn't too bad, except that you have to checkout the branch you just made for some reason. However, add/reset, fetch/pull, checkout and rebase are pretty badly named. Which are also the ones you use the most.
|
# ? Apr 17, 2013 07:47 |
|
evensevenone posted:branch isn't too bad, except that you have to checkout the branch you just made for some reason. Not if you use git branch newbranch.
|
# ? Apr 17, 2013 07:54 |
|
yaoi prophet posted:Not if you use git branch newbranch. I think you meant git checkout -b newbranch.
|
# ? Apr 17, 2013 08:34 |
|
evensevenone posted:branch isn't too bad, except that you have to checkout the branch you just made for some reason. Not really. Add adds files to the index. Reset resets the state of the index. Fetch fetches the remote repository. Pull pulls the remote repository in (fetch + merge). Rebase transplants a set of commits onto a new base. Checkout is checkout. You're confusing me. The commands literally could not be expressed simpler, in my mind.
|
# ? Apr 17, 2013 08:41 |
|
Doctor w-rw-rw- posted:Not really. Add adds files to the index. Reset resets the state of the index. Fetch fetches the remote repository. Pull pulls the remote repository in (fetch + merge). Rebase transplants a set of commits onto a new base. Checkout is checkout. You're confusing me. The commands literally could not be expressed simpler, in my mind. It may seem obvious partly because you've been using git enough to have internalized the meanings. I understand it all now, but when I was first learning I remember I had some trouble. For one thing, some commands do seemingly different things based on context. Checkout can revert an unstaged file as well as check out a branch. Reset is needed for unstaging files (which confused me, as I initially thought checkout would be more sensible for that), but it can also move your branch's HEAD. Also, there are some almost-synonyms like fetch and pull, which doesn't help matters. To a beginner not all of the commands have obvious meanings or are clear about which situations they should be used in, especially if they've had experience with a different system. I think part of the problem is that the distributed aspect introduces some additional terminology (fetch vs pull vs merge, push vs commit). Also, the concept of staging before you commit is different from, say, svn if I remember correctly, which creates the add/commit, reset/checkout/revert confusion. Now that I understand git's philosophy and underlying concepts better I can see how all those "problems" are actually the result of a logical and consistent design, but it's definitely not a completely obvious one.
|
# ? Apr 17, 2013 12:55 |
|
I think whether or not some of git's commands make sense depend a lot on whether the user understands the index and the reflog.
|
# ? Apr 17, 2013 13:18 |
|
Actually, speaking of git, I have been seriously considering a better version control system. My current system is the (bad) practice of just saving the old version in a "backups" folder when I make anything other than a very minor change (e.g. the wording of a prompt). I tried BZR, but because it requires the abhorrent monster known as macports, I really am turned off to using any kind of version control software that requires I use something like fink or macports. I have had to completely reinstall my system after my macports experience, and I really want to avoid that horror ever again. Anyway, after a quick look at the git website, it seems like it's a standalone version control program. Is this true? Is it any good?
|
# ? Apr 17, 2013 15:36 |
|
JetsGuy posted:Actually, speaking of git, I have been seriously considering a better version control system. My current system is the (bad) practice of just saving the old version in a "backups" folder when I make anything other than a very minor change (e.g. the wording of a prompt). Have you looked at Homebrew at all? OS X comes with git as part of the XCode toolset anyway, so you don't even have to bother with installing/building it.
|
# ? Apr 17, 2013 15:40 |
|
JetsGuy posted:Actually, speaking of git, I have been seriously considering a better version control system. My current system is the (bad) practice of just saving the old version in a "backups" folder when I make anything other than a very minor change (e.g. the wording of a prompt). git comes with the Xcode Command Line Tools. Use homebrew, not macports. Use git. Git's good.
|
# ? Apr 17, 2013 15:41 |
|
JetsGuy posted:Actually, speaking of git, I have been seriously considering a better version control system. My current system is the (bad) practice of just saving the old version in a "backups" folder when I make anything other than a very minor change (e.g. the wording of a prompt). Good god, man. Yes. I don't know exactly what you mean by "standalone", but there is a .dmg installer available, so go ahead and use that. Also, git is the best.
|
# ? Apr 17, 2013 15:41 |
|
JetsGuy posted:Anyway, after a quick look at the git website, it seems like it's a standalone version control program. Is this true? Is it any good? Condolences about your being stranded on a desert island and congrats on your rescue.
|
# ? Apr 17, 2013 15:44 |
|
I have been really avoiding looking terribly deeply into homebrew because after fink destroyed a system of mine in grad school and macports destroyed one as a post doc, I have a huge aversion to any sort of "hay we install our own poo poo" program. Again, I haven't really read terribly much about it, although the symlinks in /usr/local are at least a good sign. One of my biggest aversions is that I feel like for things like that you need to start with a clean system. Otherwise you end up with like 3 versions of F90 or something and they all have links in /usr/local, creating optimal fun. Given the quick responses, and since apparently I already have git (I need the Xcode command line tools for my work), I'll give it a whirl. Thanks!
|
# ? Apr 17, 2013 15:48 |
|
Bunny Cuddlin posted:Condolences about your being stranded on a desert island and congrats on your rescue. Haha, as I've said above, even the most modest of "good coding practices" you have to learn on your own in scientific programming circles. ...and everyone I know who does use version control uses something that has hosed my system, like BZR.
|
# ? Apr 17, 2013 15:50 |
|
JetsGuy posted:Haha, as I've said above, even the most modest of "good coding practices" you have to learn on your own in scientific programming circles. Actually scratch that don't use git. Learn the basics behind causality first.
|
# ? Apr 17, 2013 16:02 |
|
horse mans posted:Actually scratch that don't use git. Learn the basics behind causality first. Every time I think of causality I'm using it in a general relativity sense, which is where I talk about it the most. Is there some sort of computer science term here, or is this just a joke about how I was apparently terrible at handling BZR (via macports)?
|
# ? Apr 17, 2013 16:11 |
|
Gazpacho fucked around with this message at 19:49 on Apr 17, 2013 |
# ? Apr 17, 2013 16:58 |