|
Ithaqua posted:What are the horrors? All of the compiler errors, to start
|
# ¿ Mar 23, 2013 10:40 |
|
Lumpy posted:"Ace can be one *and* 11? What kind of God would allow that!" But you'd have no way of knowing that given the way in which this was coded. Each suit in this class has 14 cards, including a 1 and an A. It really is as though the coder had never seen a deck of cards before
|
# ¿ Mar 24, 2013 03:44 |
|
Suspicious Dish posted:And I'm proven once again wrong about whatever stupid argument we had last week being the dumbest one. I didn't think we'd reach D&D poo poo in here. We've at least not reached as low as GBS yet
|
# ¿ Mar 25, 2013 00:48 |
|
The Gripper posted:They do, but then you have to deal with there being no authoritative repository and all the bullshit that comes with that (mostly just verifying that the copy you choose to start from is pristine). I don't understand, why is git bad for large projects?
|
# ¿ Mar 25, 2013 20:11 |
|
JetsGuy posted:I loving despise IDL, and I hate the fact that I need to use it for my work. Usually, I can get around it by using Python, but I often have to use some of my team's programs, and astronomers LOVE IDL. You know how much physicists love loving FORTRAN? Yeah, astronomers love IDL like that. A part of it is that the astrolib library has been ingrained very deeply in the community. So, understandably, you are sometimes just stuck dealing with IDL. There's also just the fact that we all tend to like to code in the language we know, and scientists tend to be very much of the thought "eh, why bother" when it comes to learning new programming languages not named FORTRAN or IDL... and in some circumstances, Perl. FORTRAN and IDL are loved by dinosaurs; modern physicists have mostly made the switch to C++ or Python. There's a bunch of legacy poo poo written in FORTRAN, and there will always be a FORTRAN niche, but the language is quickly disappearing from actual scientific usage (as an example, everything at CERN is almost exclusively C++ and Python, with a few things still based in FORTRAN; for computational purposes there will be some FORTRAN libraries hanging around for a very long time, I think we all agree). It's exactly for the reasons that you say: the older crowd wants to keep using FORTRAN and IDL because that's what they've always used, whereas the younger crowd wants to use Python and C++ because it's what they learned in school (and a lot of other reasons)
|
# ¿ Apr 12, 2013 11:44 |
|
VikingofRock posted:This has been my experience as well, and thank god. Although ROOT is pretty annoying sometimes, and in my experience it is used a ton by the particle physics community. Having used both ROOT and MATLAB for many years now, you should count your lucky stars that you get to use ROOT. It has a whole bunch of horrible problems and bizarre ways of doing things, but MATLAB is one hundred times worse and lacks a lot of the graphical power that ROOT has (although for writing little one-off projects that just produce results, MATLAB is better; what I'm saying is that ROOT is far better for producing plots and other pretty things, whereas MATLAB is more of a results workhorse with the presentation of data thrown in as an afterthought). Also, many of the problems with ROOT disappear if you start using PyROOT. Give that a shot
|
# ¿ Apr 13, 2013 08:37 |
|
Progressive JPEG posted:The worst code I've yet encountered was produced by Astro researchers. The best theory I can come up with is that perhaps the industry is accustomed to one-off code that only needs to function until a paper gets published. This may also be why they're fine with using something like IDL in the first place. You are exactly right. Even today most graduate physics/astronomy students have maybe one computational physics course before entering grad school, and most grad programs may only offer one additional computational course. There's not even an introduction to programming in these programs; you're told about tools that are necessary for solving certain problems, but nobody explains how to actually produce good code or anything like that. My graduate-level computational class was basically just a class on Mathematica and was completely useless. All that we have is our own experience and the experience of our peers, which often isn't much to go on. Legacy code becomes untouchable because it produces the results that we expect. CERN specifically has an advantage in that there are actual computer scientists working there alongside the physicists, and there are sometimes workshops to help people learn better coding skills. Many fields don't get that; you're in a basement lab with a bunch of other grad students who are just as clueless as you are. QuarkJets fucked around with this message at 08:44 on Apr 13, 2013 |
# ¿ Apr 13, 2013 08:41 |
|
Shugojin posted:Oh, on the topic of FORTRAN people. I did some research (briefly, stuff came up in my life) for a professor in the department who used to use FORTRAN. That makes me really sad. They're not even in the same ball pit, really. FORTRAN is what you use when you need good computational code that runs incredibly fast and you don't need to write your own libraries. It's a computational programming language. MATLAB is more of a computational toolbox; it has a lot of useful algorithms that can make your job easy if you don't already have other implementations, but it's slow as gently caress, has awful syntax and even worse documentation. For an example of horrible code that makes me cry, in MATLAB all errors that occur during onCleanup are transformed into warnings. Hope you caught all possible failure points!
|
# ¿ Apr 15, 2013 05:20 |
|
JetsGuy posted:Maybe I'm too used to reading code written by scientists for scientists, but generally speaking, I don't think I'd call it "terrible". Indeed, most of the code I deal with heavily depends on the user knowing what the gently caress they are doing, and there's no "customer" where we have to worry about every little thing. I don't think that makes it "terrible" but yeah, it's intended only for one purpose, often for only one person or group. There's often not too many fail safes if you enter the wrong inputs, but part of science is knowing to "sanity check" your own results. NEVER trust a black box. Sure, I wouldn't say that all science code is terrible. You're just a lot more likely to find terrible code in science projects because so few scientists receive any formal computer science training. There are still plenty of scientific projects with well-written code for one reason or another, whether that's due to luck or due to some of the coders having exposure to real computer science training or even just exposure to examples of good code. I took a C course from the engineering department when I was a physics undergrad before I took computational physics, and it was a huge boon to the entire class to have someone who knew how to code. Everyone else was completely lost for that first week, having never coded before, and we spent some long hours together learning how to write simple C programs. It's insane that this was normal practice in physical science education less than a decade ago (and I guess still is). Fast forward to grad school, a few more years of computational programming under my belt, and it was the same story as undergrad: most of the grad students in our graduate-level computational course had either never touched code before or had only coded in their undergrad-level computational course and had barely struggled through it and hated the whole experience. JetsGuy posted:True. 
The larger point I was making was that in scientific programming the only concern is does it work, and beyond that there's not much concern about writing it in a certain "style". And that's part of the problem. By the time that I was well into grad school I had finally realized that OOP had a bunch of advantages even if the code itself wasn't going to be read by anyone else. In my third year I wrote a set of classes that trivialized all of the extra work that I was doing on a regular basis. For instance, in ROOT I would typically make stacked histogram+data plots in a very specific way, so I wrote a class in Python that would do all of this extra work for me. When I moved onto new projects that required very similar plots with minor tweaks, I used the same class. For my fourth year of grad school I spent about 40% of my waking hours drinking, versus only 10% for my third year, thanks to OOP.
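The plotting class itself isn't shown anywhere in the thread; here's a minimal pure-Python sketch of the same idea (encapsulate the repetitive stacked-histogram bookkeeping once, then reuse it across projects). The class and method names are hypothetical, not from the original code:

```python
from collections import OrderedDict

class StackedHist:
    """Accumulate several named datasets into shared bins and
    report the stacked (summed) counts per bin."""

    def __init__(self, bin_edges):
        self.bin_edges = list(bin_edges)
        self.datasets = OrderedDict()  # preserves stacking order

    def add_dataset(self, name, values):
        """Histogram one dataset into the shared bins."""
        counts = [0] * (len(self.bin_edges) - 1)
        for v in values:
            for i in range(len(counts)):
                if self.bin_edges[i] <= v < self.bin_edges[i + 1]:
                    counts[i] += 1
                    break
        self.datasets[name] = counts

    def stacked_totals(self):
        """Per-bin totals across all datasets, as a stacked plot would show."""
        totals = [0] * (len(self.bin_edges) - 1)
        for counts in self.datasets.values():
            totals = [t + c for t, c in zip(totals, counts)]
        return totals
```

In practice the real class would hand these counts to ROOT or matplotlib for drawing; the point is that the tedious per-plot setup lives in one place instead of being retyped for every analysis.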
|
# ¿ Apr 15, 2013 18:47 |
|
Is there someplace that I can read about R, comparing it to MATLAB? I know nothing about R. Generally I like to use NumPy and matplotlib, only falling back on our MATLAB site license if there's some sort of legacy code that I don't have the time to replicate. On topic, one of the engineers at the place where I work insists that you should only use 'new' and 'delete' for classes, never for primitives. If you want an array of ints, then you should only create it with malloc. Is this just crazy talk or is there a legitimate reason for this notion?
|
# ¿ Apr 16, 2013 05:59 |
|
Salvador Dalvik posted:I wouldn't worry about bad habits coming from a first or second year CS student, that's gonna happen no matter what. They'll eventually grow out of it, and probably replace that stupid habit with other stupid things like single-point-of-return. I had a coworker throw a SPOR comment at me the other day. He accused me of writing spaghetti code because I had three different spots where a function could return something. It's not as though the logic is hard to understand, but apparently using more than one return just fucks with some people's heads, I guess. e: code:
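The edited-in snippet didn't survive the archive; a reconstruction of the kind of multi-return function being described (not the original code):

```python
def classify(value):
    """Three exit points, and the logic is still easy to follow."""
    if value is None:
        return "missing"      # return point 1
    if value < 0:
        return "negative"     # return point 2
    return "non-negative"     # return point 3
```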
QuarkJets fucked around with this message at 20:13 on Apr 21, 2013 |
# ¿ Apr 21, 2013 20:09 |
|
Suspicious Dish posted:Don't use isinstance. Please. Why not?
|
# ¿ Apr 21, 2013 20:45 |
|
Lurchington posted:This seems to be the worst combination of unhelpful and rude. So if you want to use several of the interfaces that are available in dict, it's preferable to check for these one-by-one or wrap everything in a try block instead of just using isinstance? And this is because you may want to pass something that looks like a dict but doesn't actually inherit from dict? That simultaneously feels more and less pythonic. Reading through the collections module, would there be something wrong with using: code:
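The snippet was lost here; judging from the reply that follows, it was something along these lines (a guess from context — note that in modern Python the ABC lives in `collections.abc`, whereas 2013-era code wrote `collections.Mapping`):

```python
from collections.abc import Mapping  # was collections.Mapping in 2013-era Python

def print_mapping(obj):
    """Only iterate if the argument is dict-like; otherwise decline quietly."""
    if not isinstance(obj, Mapping):
        return False
    for key, value in obj.items():
        print(key, value)
    return True
```

Because `Mapping` is an abstract base class, this check also accepts dict-like objects that don't inherit from `dict` at all, as long as they register or implement the mapping interface.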
|
# ¿ Apr 21, 2013 21:20 |
|
Jonnty posted:Sorry, I'll be less snarky this time. The one-by-one interface check you're proposing is basically "calling the method" - it'll fail if any of the dict interface that you're using isn't there. If you're legitimately passing something that isn't a dict into a function that takes a dict, you probably want to change how you're doing things, although I suppose there may be valid reasons for it, so if you need to then wrap everything in a try-catch. But yeah, that's the point of duck typing - if it looks like a duck, quacks like a duck, then treat it like a duck. If you want static typing and big inheritance hierarchies, go for Java or C#. But isinstance(x,collections.Mapping) is a nice and simple way to use duck typing that also lets you prepare for non-ducks, right? If I'm concerned that some user might call my code with nonsensical arguments, then I should make sure that the code knows how to handle nonsensical arguments. Using a try/except block or an isinstance(x,collections.Mapping) check seems equivalent to me, if we're talking about dicts specifically. You've suggested that I should just assume that the argument is dict-like and let an exception get raised if it's not, but that's not always a desired outcome... and sometimes it's even sloppy. For instance, if this is part of a computational suite that takes 6 hours to run, and this particular method just prints some stuff to the terminal at the end of those 6 hours, then I probably don't want the code to stop running just because I failed to print something to the terminal. "That's why you can use try/except" you say, but an isinstance(obj,collections.Mapping) would seemingly do the job just as well If you're the only one who's ever going to use your code, then feel free to just assume that a given argument has specific attributes QuarkJets fucked around with this message at 22:37 on Apr 21, 2013 |
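The two styles being compared here, side by side — "look before you leap" with `isinstance` versus "easier to ask forgiveness" with try/except (a sketch; function names are made up):

```python
from collections.abc import Mapping

def keys_lbyl(obj):
    # Look before you leap: check the interface up front
    if isinstance(obj, Mapping):
        return sorted(obj.keys())
    return None

def keys_eafp(obj):
    # Easier to ask forgiveness: just try it, catch the failure
    try:
        return sorted(obj.keys())
    except AttributeError:
        return None
```

For a well-behaved mapping the two are interchangeable; they only diverge on objects that have a `keys` method without actually being mappings, or mappings whose methods themselves raise.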
# ¿ Apr 21, 2013 22:34 |
|
Suspicious Dish posted:This is known as "safe argument checking" and it's considered a bad practice nowadays. If somebody hands you a bad thing, it's their error, not yours. Even if you make sure that it's a mapping, it's possible for somebody to hand you an "invalid object", and an isinstance won't save you and you'll crash regardless. Making the assumption that arguments that are passed to you are valid prevents you from having to do all of that effort yourself, and lets you say "if I get bad input, I'll have undefined behavior", which is an extremely useful thing. This seems inefficient in some computational cases, such as the one that I described. While I can understand this practice being employed in general, I dislike the idea of a user wasting an afternoon of computing resources just because they fed my code some invalid input. You can't always catch invalid input, but there are some simple cases where a quick isinstance() could potentially save a lot of computing time. If they pass invalid input to the computational portions of the code then to hell with them
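A sketch of the kind of cheap up-front check being argued for: seconds of validation before hours of compute, so invalid input fails immediately instead of at hour six (function and parameter names are hypothetical):

```python
from collections.abc import Mapping

def run_simulation(params):
    # Fail fast: a few microseconds here can save an afternoon of compute time
    if not isinstance(params, Mapping):
        raise TypeError("params must be a mapping of name -> value")
    missing = {"steps", "seed"} - set(params)
    if missing:
        raise ValueError("missing parameters: %s" % sorted(missing))

    # ... the expensive six-hour part would run here ...
    return sum(range(params["steps"]))
```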
|
# ¿ Apr 21, 2013 23:01 |
|
Wheany posted:Well good thing your example is made up. Sort of. I'm loosely describing actual applications without describing their specifics. This isn't really relevant anyway, there are plenty of examples where you'd want to check that an object has certain attributes rather than just throwing an exception every time that something unexpected gets passed. You can do this with isinstance or try/except blocks. I mean duck typing isn't about assuming that an object has certain attributes, it just relates to using an object's attributes in determining semantics rather than using an object's inheritance. I can understand generally assuming that an object has specific attributes and letting the user sort out any problems, but there are specific cases where that is not what you want to do. quote:What exactly do you plan on doing with the wrong kind of input? If it doesn't have the same attributes as dict (keys, etc), then I won't bother trying to iterate over and print out the keys/values in that dict. The computational results still get saved elsewhere (because those are created independently of the method that uses the dict) and everything else is good, but the code didn't throw any exceptions because I've decided that the method in question isn't as important as the method that calls it. If the user doesn't pass a dict-like object then I won't try to iterate over it and print the values. quote:What if the wrong kind of input is wrong in a way you did not anticipate? Is it now okay to have wasted an afternoon of computing resources? If it's an object with mapping attributes, I'll treat it as an object with mapping attributes. If it's not, then I won't do anything with it. Simple as that.
|
# ¿ Apr 22, 2013 05:07 |
|
Hammerite posted:Maybe I don't understand your field, but I hope that if you had a program you wanted to run that takes 6 hours when fed "real" data, you would test it out on a small case/with a small amount of dummy data first, to check that it works. Of course, but you can't always predict user behavior
|
# ¿ Apr 22, 2013 22:45 |
|
Jabor posted:So what if it doesn't behave as you expect, and throws an exception itself when you try to access it? Does that make it somehow okay to waste those six hours of computation? If the user crafts an object that looks like a dict but throws an exception when you access it like a dict, then it would be their own problem in my implementation. You are correct in that a try block would do this better for a case where a user tried to implement a mapping type but hosed it up, I hadn't thought of that To your last question, they are functionally equivalent if you are using collections.Mapping, except in the example you described.
|
# ¿ Apr 22, 2013 22:59 |
|
HappyHippo posted:Assuming we're still talking about that code snippet you posted on the last page, the problem with it is that you're effectively hiding errors by returning false instead of throwing an exception. Basically the code is intended to tell you if the dictionary contains some element. By returning false if the argument isn't a dictionary you're effectively hiding bugs, and in a really subtle way since false is a common and expected return value from such a function. In static typing the compiler reports a type error at compile time. With dynamic typing you don't have that luxury so you have to make sure that it throws an error at run time. The best place to do this is right when it happens, as this makes debugging as easy as possible. Crashes caused by passing the wrong argument type are very easy bugs to fix. By returning false you're effectively delaying the error so that it will show up at some later time in the program. Then the person debugging has to retrace the steps to find out what went wrong until they eventually find that they called your function with the wrong arguments. That's if they even found the bug in the first place. Yes, I am doing that by design; instead of throwing an exception, I am raising a warning and proceeding because the submodule in question is really inconsequential in the grand scheme of things. I left out the print statements in the example Sometimes it's useful to give the user some leeway and print warnings instead of making GBS threads in the user's lap at the first sign of a discrepancy. QuarkJets fucked around with this message at 23:41 on Apr 22, 2013 |
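A minimal sketch of the warn-and-proceed pattern described above, assuming the summary step really is as inconsequential as stated (names are hypothetical):

```python
import warnings
from collections.abc import Mapping

def report_results(results):
    """Print a summary if possible; warn and carry on if not.

    The real computation is saved elsewhere, so a bad summary
    argument should not kill a six-hour run."""
    if not isinstance(results, Mapping):
        warnings.warn("results was not dict-like; skipping the summary")
        return False
    for name, value in results.items():
        print(name, value)
    return True
```

The trade-off HappyHippo raises still applies: the `False` return and the warning are easy to miss, so this is only defensible when the skipped step genuinely doesn't matter downstream.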
# ¿ Apr 22, 2013 23:39 |
|
Suspicious Dish posted:Do your automated tests ensure that everything is OK though? It's literally just printing the contents of a dictionary to the terminal as a convenience, and the dictionary has no influence on any other part of the code, so everything is definitely okay. ExcessBLarg! posted:Honestly the whole premise is quite strange to me. I regularly deal with long-running computations that take a day or two to finish one "cycle" of. What I meant by my comment is that I can't predict whether a user will do the smart thing and perform a limited-data test before performing a full run. I just want to protect people from doing bad things to themselves. Back when I was doing grid computing at CERN and was still very new to Python, I used try/excepts all over the place. Sometimes this would allow me to extract good results despite some simple bone-headed mistake that I or someone else had made and that didn't present itself until the code had finished running on the grid. When your jobs might be queued for half a day before they even start running, limited tests and try/except blocks together help to save a lot of time. Obviously these aren't always enough, but sometimes they do the trick, and boy does it feel good when you have data despite some piece of the code not working as expected (so perhaps plots of 90% of my quantities are available, but the functions that calculate/plot the other 10% are broken; that's no problem if I still have a ton of great plots to show people!) QuarkJets fucked around with this message at 07:26 on Apr 23, 2013 |
# ¿ Apr 23, 2013 07:23 |
|
Sedro posted:
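The quoted snippet didn't survive the archive; judging from the reply below, it was something like this C pattern (a reconstruction, not Sedro's original — names and the error conditions are invented):

```c
#include <stdbool.h>
#include <stdio.h>

/* The do { ... } while (false) body runs exactly once; each `break`
 * jumps straight past the remaining steps to the shared cleanup code,
 * giving a single point of return without goto. */
static bool do_work(int input)
{
    bool success = false;
    do {
        if (input < 0)
            break;        /* bail out early on one kind of error */
        if (input == 0)
            break;        /* another failure point */
        success = true;   /* everything went fine */
    } while (false);      /* never loops: the condition is always false */

    /* cleanup runs on every path */
    printf("cleanup\n");
    return success;
}
```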
Wow, there are some layers, here. At first I thought that if you changed while(false) to while(success) that it would fix the code, but then I realized that it just creates an infinite loop when success becomes TRUE. Then I realized that the programmer intentionally did this to make the code flow in a specific way; he wanted the code chunk in the do/while loop to run exactly once, but he also wanted to be able to break out of it and then run some other cleanup code if anything went wrong. There are better ways to do this, but breaking the do/while loop does give the desired flow. Neat e: Or what everyone above me said, that'll teach me to read further before posting. This is the kind of thing that you do when you want a single point of return and aren't comfortable with passing pointers, I guess. QuarkJets fucked around with this message at 19:32 on Apr 27, 2013 |
# ¿ Apr 27, 2013 19:28 |
|
Scaevolus posted:Coding horror: a C++ code base where the author doesn't realize delete, like free, works with nulls. He then proceeds to manage all his memory manually, with null checks around the deletes. Ehhh, not realizing that delete works with nulls isn't a horror. Maybe you could describe some of the manual memory management? Is it particularly horrible or is the null checking the part that you find horrible?
|
# ¿ Apr 30, 2013 08:59 |
|
Zaphod42 posted:As an interesting note, games are moving away from C++. The game engine is made in C++ and the bottlenecks and the rendering calls and all that, sure, but once that gets nailed down you can take a really, really long time to make tiny little improvements to the engine codebase. As an example, almost a decade ago Civilization IV used tons and tons of Python, which also made the game very easy to mod (because Python is generally easy to pick up). Major parts of the interface, map generation, and scripted events were entirely Python, although it probably ran on an engine coded in a compiled language e: Also memory management in C++ is pretty straightforward, you people are crazy QuarkJets fucked around with this message at 08:39 on May 1, 2013 |
# ¿ May 1, 2013 06:09 |
|
How do these people manage to get programming jobs without the skills that the jobs require?
|
# ¿ May 5, 2013 01:07 |
|
I'm lost, what is all of this fizz buzz poo poo?
|
# ¿ May 21, 2013 08:35 |
|
Sagacity posted:I had a coworker for a while who would refuse to use a regular text editor and instead was continually loving around with vim. Whenever he wanted to ask me something he would spend minutes just mashing keys just to open the right file, trying to get a window open, trying to get syntax highlighting to work. He wanted to be a real code hacker so bad! This suggests that he doesn't even have a nice .vimrc
|
# ¿ Jun 3, 2013 08:35 |
|
Volmarias posted:As opposed to... cat, grep, and sed
|
# ¿ Jun 4, 2013 07:20 |
|
My advisor in grad school (physics) suggested indenting as much as possible in Python in order to help with flow control. So while I was writing stuff like this:Python code:
Python code:
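Both snippets were lost here; presumably they contrasted something like a flat, early-return style with the advisor's nest-everything style (a reconstruction from context, not the original code):

```python
# What the post's author was writing (guard clauses, early returns):
def process(data):
    if data is None:
        return None
    if not data.get("valid"):
        return None
    return data["value"] * 2

# The advisor's "indent as much as possible" version of the same logic:
def process_nested(data):
    result = None
    if data is not None:
        if data.get("valid"):
            result = data["value"] * 2
    return result
```

The two are behaviorally identical; the nested form just buries the happy path two levels deep, which is exactly the habit being complained about.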
|
# ¿ Jun 13, 2013 10:39 |
|
The Gripper posted:Well you can also wrap if statements across lines if you really want: Doesn't every part of the if statement get checked when you do this? I try to stay away from those kinds of if statements And yeah, as other people have pointed out there's not really any such thing as "production" code in academia, with a few exceptions
|
# ¿ Jun 16, 2013 00:59 |
|
Zorn posted:Never program a computer. To be fair, the if statement thing isn't even my fault; I was taught that in an Intro C course offered in the computer engineering track at my university. So I assumed that was right even if it's not You're kind of being a dick, though
|
# ¿ Jun 18, 2013 11:33 |
|
KaneTW posted:You have not seen physicist code. Mathematician code is a million times worse. Here's a pseudocode example of something that I saw recently: Python code:
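The pseudocode block itself was lost; judging from the replies below, it looked something like this (a reconstruction of the flavor, not the original):

```python
# Single-letter names lifted straight from a paper, and a constructor
# that sometimes returns an int, for reasons nobody can explain.
class A:
    def __init__(self, n, e, j):
        self.n = n
        self.e = e
        self.j = j
        if n > e:
            return 4  # why return from __init__? and why 4? no one knows

# Note: in modern Python, hitting that `return 4` raises
# TypeError("__init__() should return None"), so A(5, 1, 0) now blows up.
```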
QuarkJets fucked around with this message at 21:15 on Jun 21, 2013 |
# ¿ Jun 21, 2013 21:07 |
|
Lysidas posted:Variable names like that make sense when you're in the middle of implementing an algorithm from a paper or textbook, but you rename them after you get it working It's also fun that the constructor returns an int some of the time (why does it return this int? And why 4? No one knows, not even the coder)
|
# ¿ Jun 22, 2013 01:37 |
|
Lysidas posted:I would have mentioned that too, but I figured it was a side effect of you typing pseudocode from memory. That "works" in 2.4 but not 2.6, and I don't have a 2.5 installation handy to see whether it works in that version. Would the int even be accessible in any way? I.e., if I called MyObject(args), would I get an int instance instead of a MyObject instance if that's what the constructor returns?
|
# ¿ Jun 22, 2013 03:44 |
|
Lysidas posted:No; the value returned by __init__ is lost. Remember that all Python functions return None unless you specify otherwise -- if the return value of __init__ was used as the instantiated object you'd get None when creating an instance of any class. Look at what you've done; now you are the coding horror.
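To illustrate Lysidas's point with a quick sketch: a function body that falls off the end returns None, and whatever `__init__` returns is never used as the constructed object — the instance you get back comes from `__new__`:

```python
class Widget:
    def __init__(self):
        self.ready = True
        # falls off the end: implicitly returns None, which is discarded

w = Widget()
assert isinstance(w, Widget)   # still a Widget, not None

def no_return():
    pass

assert no_return() is None     # every Python function returns None by default
```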
|
# ¿ Jun 22, 2013 09:22 |
|
Python code:
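The code block was lost here; from the replies that follow (two needlessly parenthesized if statements and a parenthesized return), it presumably looked something like this (a guess at its flavor, not the original):

```python
def check(x, y):
    if (x > 0):             # parentheses unnecessary in Python
        if (y > 0):         # likewise
            return (x + y)  # and here
    return 0
```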
|
# ¿ Jun 29, 2013 20:06 |
|
shrughes posted:I stopped reading at the weird overparenthesization. I only spot maybe 3 instances where dropping parentheses makes sense; two if statements that don't need them, and one more at the return. Although it's not like this hurts anything. There are a bunch of things wrong with this code, but using parentheses in an if statement isn't one of them
|
# ¿ Jun 29, 2013 22:30 |
|
shrughes posted:When judging somebody by their code you can just stop when you see the parenthesis style being weird. That the person who wrote this code is too mentally deficient to just let go of his way of doing things and follow the obvious Python standard is enough to categorize this as bad code. I wouldn't call it "obvious"; PEP8 suggests wrapping long if statement expressions with parentheses, so it'd be easy for a new Python programmer to see an example of that, get confused and start putting parentheses in all of their if statements. Again, this is probably the most inconsequential and minor complaint that you could possibly raise regarding that code
|
# ¿ Jun 29, 2013 23:40 |
|
zergstain posted:Everyone uses vi/vim. I'm sure if we used an IDE, people would just configure the indenting the way they like and we'd still have the same problem. And I know vim can take care of it, but good luck pushing everybody to use the same config. And now you have also revealed that your team is using CVS
|
# ¿ Jul 9, 2013 05:56 |
|
Pilsner posted:Can we at least agree that people who indent with spaces instead of tabs are objectively wrong? It fucks up diff'ing. We can agree that all tabs should be expanded to N spaces in your editor of choice so that people who use tabs and spaces get the same code
|
# ¿ Jul 9, 2013 20:21 |
|
Kilson posted:No agreement can be made about abominable practices. The abominable practice being the use of tabs in code, since it really can gently caress up any language that uses white space; expanding tabs into spaces is good, of course
|
# ¿ Jul 10, 2013 02:43 |