QuarkJets
Sep 8, 2008

Ithaqua posted:

What are the horrors?

All of the compiler errors, to start


QuarkJets
Sep 8, 2008

Lumpy posted:

"Ace can be one *and* 11? What kind of God would allow that!"

But you'd have no way of knowing that from the way this was coded. Each suit in this class has 14 cards, including both a 1 and an A. It really is as though the coder had never seen a deck of cards before.

QuarkJets
Sep 8, 2008

Suspicious Dish posted:

And I'm proven once again wrong about whatever stupid argument we had last week being the dumbest one. I didn't think we'd reach D&D shit in here.

At least we haven't sunk as low as GBS yet

QuarkJets
Sep 8, 2008

The Gripper posted:

They do, but then you have to deal with there being no authoritative repository and all the bullshit that comes with that (mostly just verifying that the copy you choose to start from is pristine).

In KDE's case everything would be recoverable no matter how broken things were (as you said, each developer has a copy), the horror is mostly that a project that large considered Git a key part of their backup plan.

I don't understand. Why is Git bad for large projects?

QuarkJets
Sep 8, 2008

JetsGuy posted:

I fucking despise IDL, and I hate the fact that I need to use it for my work. Usually I can get around it by using Python, but I often have to use some of my team's programs, and astronomers LOVE IDL. You know how much physicists love fucking FORTRAN? Yeah, astronomers love IDL like that. Part of it is that the astrolib library has been ingrained very deeply in the community, so understandably you are sometimes just stuck dealing with IDL. There's also the fact that we all tend to code in the language we know, and scientists tend to be very much of the "eh, why bother" school of thought when it comes to learning new programming languages not named FORTRAN or IDL... and in some circumstances, Perl.

I'm just so fucking happy that Python has been (quickly) taking over the Universe (literally :colbert: ). Many, many astronomy libraries are being written and released for Python. The young astronomer community is embracing it, and hopefully, by the time I'm a crusty old scientist with my own horde of grad students and post docs, IDL will be a distant memory.

This is all without bringing up how fucking outrageously expensive the IDL licenses are. Only fucking MATLAB surpasses it in terms of "hey, bend over", and at least MATLAB has enough of an audience to justify its insane license costs.

FORTRAN and IDL are loved by dinosaurs; modern physicists have mostly made the switch to C++ or Python. There's a bunch of legacy shit written in FORTRAN, and there will always be a FORTRAN niche, but the language is quickly disappearing from actual scientific usage.

(As an example, everything at CERN is almost exclusively C++ and Python, with a few things based on FORTRAN; for computational purposes there will be some FORTRAN libraries hanging around for a very long time, I think we all agree.)

It's exactly for the reasons that you say: the older crowd wants to keep using FORTRAN and IDL because that's what they've always used, whereas the younger crowd wants to use Python and C++ because it's what they learned in school (and a lot of other reasons)

QuarkJets
Sep 8, 2008

VikingofRock posted:

This has been my experience as well, and thank god. Although ROOT is pretty annoying sometimes, and in my experience it is used a ton by the particle physics community.

Having used both ROOT and MATLAB for many years now, I'd say you should count your lucky stars that you get to use ROOT. It has a whole bunch of horrible problems and bizarre ways of doing things, but MATLAB is one hundred times worse and lacks a lot of the graphical power that ROOT has

(although for writing little one-off projects that just produce results, MATLAB is better; what I'm saying is that ROOT is far better for producing plots and other pretty things, whereas MATLAB is more of a result workhorse with the presentation of data thrown in as an afterthought)

Also, many of the problems with ROOT disappear if you start using PyROOT. Give that a shot

QuarkJets
Sep 8, 2008

Progressive JPEG posted:

The worst code I've yet encountered was produced by Astro researchers. The best theory I can come up with is that perhaps the industry is accustomed to one-off code that only needs to function until a paper gets published. This may also be why they're fine with using something like IDL in the first place.

You are exactly right. Even today, most incoming physics/astronomy grad students have maybe one computational physics course under their belts, and most grad programs only offer one additional computational course. There's not even an introduction to programming in these programs; you're told about tools that are necessary for solving certain problems, but nobody explains how to actually produce good code or anything like that. My graduate-level computational class was basically just a class on Mathematica and was completely useless

All that we have is our own experience and the experience of our peers, which often isn't much to go on. Legacy code becomes untouchable because it produces the results that we expect.

CERN specifically has an advantage in that there are actual computer scientists working there alongside the physicists, and there are sometimes workshops to help people learn better coding skills. Many fields don't get that; you're in a basement lab with a bunch of other grad students who are just as clueless as you are.

QuarkJets fucked around with this message at 08:44 on Apr 13, 2013

QuarkJets
Sep 8, 2008

Shugojin posted:

Oh, on the topic of FORTRAN people. I did some research (briefly, stuff came up in my life) for a professor in the department who used to use FORTRAN.

She'd somehow been convinced to switch over to MATLAB. :smith:

Her FORTRAN code was all good; it worked wonderfully and quickly. But her MATLAB was actually written largely by some compsci grad student and was strangely awful: it took forever to run, and based on my brief time there I guess it had some poorly defined limits, because it would give output that went outside of the range of the input data.

Poor lady. I hope she got it all working because jesus.

That makes me really sad. They're not even in the same ball pit, really. FORTRAN is what you use when you need computational code that runs incredibly fast and you don't need to write your own libraries; it's a computational programming language. MATLAB is more of a computational toolbox: it has a lot of useful algorithms that can make your job easy if you don't already have other implementations, but it's slow as fuck, has awful syntax, and has even worse documentation.

For an example of horrible code that makes me cry, in MATLAB all errors that occur during onCleanup are transformed into warnings. Hope you caught all possible failure points!

QuarkJets
Sep 8, 2008

JetsGuy posted:

Maybe I'm too used to reading code written by scientists for scientists, but generally speaking, I don't think I'd call it "terrible". Indeed, most of the code I deal with heavily depends on the user knowing what the fuck they are doing, and there's no "customer" for whom we have to worry about every little thing. I don't think that makes it "terrible", but yeah, it's intended for only one purpose, often for only one person or group. There often aren't too many failsafes if you enter the wrong inputs, but part of science is knowing to "sanity check" your own results. NEVER trust a black box.

That all said, I would like to point out that you are correct that most physics and astrophysics people are NOT taught good coding practice. We have to learn ourselves, and even I only learned what I did because I follow threads here a lot and read a LOT of stack overflow when I'm looking for a function. I had only ONE computer coding class and that was a grad level computational physics class. We were thrown right into "here's the math" of how to do algorithms and expected to just know the rest (or learn it on our own).

I do know groups of guys (usually the astrophysics guys who run hardcore star simulations) who know lots of great coding techniques but that's not the norm.

In any case, yeah, physics should include more coding methods classes. I know nothing about memory management for example, and I have a rudimentary knowledge of how to OOP (because OOP is often unneeded for science stuff).

So yeah, in the sense that it's designed to get a question answered and that's about it, I guess you'd call it "terrible", but I find it more readable than most of the well written code I come across.

Sure, I wouldn't say that all science code is terrible. You're just a lot more likely to find terrible code in science projects because so few scientists receive any formal computer science training. There are still plenty of scientific projects with well-written code for one reason or another, whether that's due to luck or due to some of the coders having exposure to real computer science training or even just exposure to examples of good code.

I took a C course from the engineering department when I was a physics undergrad, before I took computational physics, and in the computational course it was a huge boon for the entire class to have someone who knew how to code. Everyone else was completely lost for that first week, having never coded before, and we spent some long hours together learning how to write simple C programs. It's insane that this was normal practice in physical science education less than a decade ago (and I guess still is). Fast forward to grad school, with a few more years of computational programming under my belt, and it was the same story as undergrad: most of the grad students in our graduate-level computational course had either never touched code before, or had only coded in their undergrad-level computational course, barely struggled through it, and hated the whole experience.

JetsGuy posted:

True. The larger point I was making was that in scientific programming the only concern is whether it works; beyond that, there's not much concern about writing it in a certain "style".

SOMETIMES there's the question of "can it work faster". That's mostly when you're dealing with heavy loads of data, and often when you're doing parallel processing to begin with (so you're already ahead of the game in terms of scientific programmers). In the end, scientists are happy when the code works the way it is intended to. No one but them and their collaborators will likely ever use or see the code, so there's no reason to do much beyond that.

I find OOP code much more readable than procedural code (and of course, much easier to fix). So that's what I do now, but a lot of the programs I run across from collaborators are written procedurally, just because that does what's required.

And that's part of the problem. By the time I was well into grad school, I had finally realized that OOP had a bunch of advantages even if the code itself wasn't going to be read by anyone else. In my third year I wrote a set of classes that trivialized all of the extra work that I was doing on a regular basis. For instance, in ROOT I would typically make stacked histogram+data plots in a very specific way, so I wrote a class in Python that would do all of that extra work for me. When I moved on to new projects that required very similar plots with minor tweaks, I used the same class. In my fourth year of grad school I spent about 40% of my waking hours drinking, versus only 10% in my third year, thanks to OOP.
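
To give a flavor of it, here's a rough sketch of that kind of PyROOT helper class (reconstructed for illustration; the names are invented, this isn't my actual code):

Python code:
import ROOT

class StackedDataPlot(object):
    # Wraps the repetitive parts of a stacked-histogram + data plot
    def __init__(self, title, x_label):
        self.stack = ROOT.THStack("stack", "%s;%s;Events" % (title, x_label))
        self.legend = ROOT.TLegend(0.65, 0.65, 0.88, 0.88)
        self.data = None

    def add_background(self, hist, label, color):
        hist.SetFillColor(color)
        self.stack.Add(hist)
        self.legend.AddEntry(hist, label, "f")

    def set_data(self, hist, label="Data"):
        hist.SetMarkerStyle(20)  # solid circles for the data points
        self.data = hist
        self.legend.AddEntry(hist, label, "p")

    def draw(self, name="c1"):
        canvas = ROOT.TCanvas(name, name, 800, 600)
        self.stack.Draw("HIST")
        if self.data is not None:
            self.data.Draw("E SAME")  # data with error bars on top of the stack
        self.legend.Draw()
        return canvas
Every plot then becomes three or four method calls instead of thirty lines of copy-pasted TCanvas fiddling.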

QuarkJets
Sep 8, 2008

Is there someplace I can read about R and how it compares to MATLAB? I know nothing about R.

Generally I like to use NumPy and matplotlib, only falling back on our MATLAB site license if there's some sort of legacy code that I don't have the time to replicate.

On topic, one of the engineers at the place where I work insists that you should only use 'new' and 'delete' for classes, never for primitives. If you want an array of ints, then you should only create it with malloc. Is this just crazy talk, or is there a legitimate reason for this notion?

QuarkJets
Sep 8, 2008

Salvador Dalvik posted:

I wouldn't worry about bad habits coming from a first or second year CS student, that's gonna happen no matter what. They'll eventually grow out of it, and probably replace that stupid habit with other stupid things like single-point-of-return.

I had a professor that was on a kick about SPOR, so I would intentionally include at least one multiple return statement in every project and defend it during presentation/code review. He was a pretty chill dude, though.

I had a coworker throw a SPOR comment at me the other day. He accused me of writing spaghetti code because I had three different spots where a function could return something. It's not as though the logic is hard to understand, but apparently using more than one return just fucks with some people's heads, I guess.

e:

code:
def my_function(some_dict):
  if isinstance(some_dict,dict):
    if "poop" in some_dict:
      return True
    elif "crap" in some_dict:
      return True
  return False
:siren: SPAGHETTI CODE :siren:
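
For comparison, here's the single-return version he presumably wanted (my guess; same logic, 2-space indents and all):

code:
def my_function(some_dict):
  result = False
  if isinstance(some_dict, dict):
    if "poop" in some_dict or "crap" in some_dict:
      result = True
  return result
I'm not convinced that's any easier to follow.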

QuarkJets fucked around with this message at 20:13 on Apr 21, 2013

QuarkJets
Sep 8, 2008

Suspicious Dish posted:

Don't use isinstance. Please.

Why not?

QuarkJets
Sep 8, 2008

Lurchington posted:

This seems to be the worst combination of unhelpful and rude.

in answer to QuarkJets:
This seems like a decently laid-out (and only slightly snarky) description of why isinstance isn't the best way to go about most things: http://www.canonical.org/~kragen/isinstance/

So if you want to use several of the interfaces that are available in dict, it's preferable to check for these one-by-one or wrap everything in a try block instead of just using isinstance? And this is because you may want to pass something that looks like a dict but doesn't actually inherit from dict? That simultaneously feels more and less pythonic.

Reading through the collections module, would there be something wrong with using:
code:
 isinstance(x,collections.Mapping)
if I wanted to guarantee that object x implements at least all of the same interfaces as dict? This implements the attribute checking that has been alluded to but in a far cleaner way, and it allows users to pass their own dict-like non-dict-inheriting objects, no? This seems like the ultimate "pythonic" solution if you're using types that are described in collections
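
Something like this is what I have in mind (toy class, invented for illustration; using the Python 2 spelling, collections.Mapping):

Python code:
import collections

class DictLike(collections.Mapping):
    # A dict-like mapping that never inherits from dict
    def __init__(self, pairs):
        self._pairs = dict(pairs)

    def __getitem__(self, key):
        return self._pairs[key]

    def __iter__(self):
        return iter(self._pairs)

    def __len__(self):
        return len(self._pairs)

d = DictLike({"poop": 1})
print isinstance(d, dict)                 # False
print isinstance(d, collections.Mapping)  # True
print "poop" in d                         # True; __contains__ comes free from the ABC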

QuarkJets
Sep 8, 2008

Jonnty posted:

Sorry, I'll be less snarky this time. The one-by-one interface check you're proposing is basically "calling the method" - it'll fail if any of the dict interface that you're using isn't there. If you're legitimately passing something that isn't a dict into a function that takes a dict, you probably want to change how you're doing things, although I suppose there may be valid reasons for it, so if you need to then wrap everything in a try-catch. But yeah, that's the point of duck typing - if it looks like a duck, quacks like a duck, then treat it like a duck. If you want static typing and big inheritance hierarchies, go for Java or C#.

But isinstance(x,collections.Mapping) is a nice and simple way to use duck typing that also lets you prepare for non-ducks, right? If I'm concerned that some user might call my code with nonsensical arguments, then I should make sure that the code knows how to handle nonsensical arguments. Using a try/except block or an isinstance(x,collections.Mapping) check seems equivalent to me, if we're talking about dicts specifically.

You've suggested that I should just assume that the argument is dict-like and let an exception get raised if it's not, but that's not always a desired outcome... and sometimes it's even sloppy. For instance, if this is part of a computational suite that takes 6 hours to run, and this particular method just prints some stuff to the terminal at the end of those 6 hours, then I probably don't want the code to stop running just because I failed to print something to the terminal. "That's why you can use try/except" you say, but an isinstance(obj,collections.Mapping) would seemingly do the job just as well

If you're the only one who's ever going to use your code, then feel free to just assume that a given argument has specific attributes

QuarkJets fucked around with this message at 22:37 on Apr 21, 2013

QuarkJets
Sep 8, 2008

Suspicious Dish posted:

This is known as "safe argument checking" and it's considered a bad practice nowadays. If somebody hands you a bad thing, it's their error, not yours. Even if you make sure that it's a mapping, it's possible for somebody to hand you an "invalid object", and an isinstance won't save you and you'll crash regardless. Making the assumption that arguments that are passed to you are valid prevents you from having to do all of that effort yourself, and lets you say "if I get bad input, I'll have undefined behavior", which is an extremely useful thing.

This seems inefficient in some computational cases, such as the one that I described. While I can understand this practice being employed in general, I dislike the idea of a user wasting an afternoon of computing resources just because they fed my code some invalid input. You can't always catch invalid input, but there are some simple cases where a quick isinstance() could potentially save a lot of computing time.

If they pass invalid input to the computational portions of the code then to hell with them
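
To be concrete, this is the kind of cheap up-front check I mean (toy example; the names and the six-hour stand-in are made up):

Python code:
import collections

def crunch(events, settings):
    # Stand-in for the part that actually takes six hours
    return [e * settings.get("scale", 1) for e in events]

def run_analysis(events, settings):
    # Fail fast: better to die now than six hours from now
    if not isinstance(settings, collections.Mapping):
        raise TypeError("settings must be dict-like, got %r" % type(settings))
    return crunch(events, settings)

print run_analysis([1, 2, 3], {"scale": 10})  # [10, 20, 30]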

QuarkJets
Sep 8, 2008

Wheany posted:

Well good thing your example is made up.

Sort of. I'm loosely describing actual applications without describing their specifics.

This isn't really relevant anyway; there are plenty of examples where you'd want to check that an object has certain attributes rather than just throwing an exception every time something unexpected gets passed. You can do this with isinstance or with try/except blocks.

I mean, duck typing isn't about assuming that an object has certain attributes; it's about determining semantics from an object's attributes rather than from its inheritance. I can understand generally assuming that an object has specific attributes and letting the user sort out any problems, but there are specific cases where that is not what you want to do.

quote:

What exactly do you plan on doing with the wrong kind of input?

What if the wrong kind of input is wrong in a way you did not anticipate? Is it now okay to have wasted an afternoon of computing resources?

If it doesn't have the same attributes as a dict (keys, etc.), then I won't bother trying to iterate over it and print out its keys/values. The computational results still get saved elsewhere (because those are created independently of the method that uses the dict) and everything else is fine; the code just doesn't throw an exception, because I've decided that the method in question isn't as important as the method that calls it.

quote:

What if the wrong kind of input is wrong in a way you did not anticipate? Is it now okay to have wasted an afternoon of computing resources?

If it's an object with mapping attributes, I'll treat it as an object with mapping attributes. If it's not, then I won't do anything with it. Simple as that.

QuarkJets
Sep 8, 2008

Hammerite posted:

Maybe I don't understand your field, but I hope that if you had a program you wanted to run that takes 6 hours when fed "real" data, you would test it out on a small case/with a small amount of dummy data first, to check that it works.

Of course, but you can't always predict user behavior

QuarkJets
Sep 8, 2008

Jabor posted:

So what if it doesn't behave as you expect, and throws an exception itself when you try to access it? Does that make it somehow okay to waste those six hours of computation?

When is using isinstance a better idea than just wrapping a try-except around the whole thing?

If the user crafts an object that looks like a dict but throws an exception when you access it like a dict, then that would be their own problem in my implementation. You are correct that a try block would handle the case where a user tried to implement a mapping type but fucked it up; I hadn't thought of that.

To your last question, they are functionally equivalent if you are using collections.Mapping, except in the example you described.

QuarkJets
Sep 8, 2008

HappyHippo posted:

Assuming we're still talking about that code snippet you posted on the last page, the problem with it is that you're effectively hiding errors by returning false instead of throwing an exception. Basically the code is intended to tell you if the dictionary contains some element. By returning false if the argument isn't a dictionary you're effectively hiding bugs, and in a really subtle way since false is a common and expected return value from such a function. In static typing the compiler reports a type error at compile time. With dynamic typing you don't have that luxury so you have to make sure that it throws an error at run time. The best place to do this is right when it happens, as this makes debugging as easy as possible. Crashes caused by passing the wrong argument type are very easy bugs to fix. By returning false you're effectively delaying the error so that it will show up at some later time in the program. Then the person debugging has to retrace the steps to find out what went wrong until they eventually find that they called your function with the wrong arguments. That's if they even found the bug in the first place.

It reminds me of a criticism made of PHP in that "fractal of bad design" rant: given a choice between throwing an error and doing something nonsensical, PHP almost always chooses the nonsensical thing.

Yes, I am doing that by design; instead of throwing an exception, I am raising a warning and proceeding, because the submodule in question is really inconsequential in the grand scheme of things. I left out the print statements in the example.

Sometimes it's useful to give the user some leeway and print warnings instead of shitting in the user's lap at the first sign of a discrepancy.
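
Stripped down, the real thing looks roughly like this (simplified sketch, not the actual code):

Python code:
import warnings

def print_summary(results):
    # Cosmetic output only: warn and carry on rather than kill the run
    if not hasattr(results, "iteritems"):
        warnings.warn("expected something dict-like, got %r; skipping printout"
                      % type(results))
        return
    for key, value in results.iteritems():
        print "%s: %s" % (key, value)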

QuarkJets fucked around with this message at 23:41 on Apr 22, 2013

QuarkJets
Sep 8, 2008

Suspicious Dish posted:

Do your automated tests ensure that everything is OK though?

It's literally just printing the contents of a dictionary to the terminal as a convenience, and the dictionary has no influence on any other part of the code, so everything is definitely okay.

ExcessBLarg! posted:

Honestly the whole premise is quite strange to me. I regularly deal with long-running computations that take a day or two to finish one "cycle" of.

1. Six hours isn't really that long. If the cycles are crazy expensive or crazy unobtainable (supercomputer) then sure, but in those cases you make sure your shit works well in advance of launching on those platforms. If something fucks up after six hours on a riff-raff machine, whatever: restart, go to bed.

2. I have difficulty coming up with a scenario where a user would provide the right kind of dict-like object when performing a limited-data test, but not when performing a full-data run. True, user behavior can't always be predicted, but generally users don't do crazy batshit stuff either, and if they do, it's certainly on them.

What I meant by my comment is that I can't predict whether a user will do the smart thing and perform a limited-data test before performing a full run. I just want to protect bad people from doing bad things to themselves :(

Back when I was doing grid computing at CERN and was still very new to Python, I used try/excepts all over the place. Sometimes this would allow me to extract good results despite some simple bone-headed mistake that I or someone else had made and that didn't present itself until the code had finished running on the grid. When your jobs might be queued for half a day before they even start running, limited tests and try/except blocks together help to save a lot of time. Obviously they aren't always enough, but sometimes they do the trick, and boy does it feel good to have data despite some piece of the code not working as expected (so perhaps plots of 90% of my quantities are available and the functions that calculate/plot the other 10% are broken; that's no problem if I still have a ton of great plots to show people!)
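
The pattern was basically this (sketch with made-up plotting functions; the point is that one broken plot doesn't eat the rest of the output):

Python code:
def plot_mass(results):
    print "mass plot:", results["mass"]

def plot_met(results):
    print "MET plot:", results["met"]   # this one will fail below

def make_all_plots(results):
    # Salvage whatever plots we can and report the rest, instead of dying
    for plot in (plot_mass, plot_met):
        try:
            plot(results)
        except Exception as e:
            print "skipping %s: %s" % (plot.__name__, e)

make_all_plots({"mass": 125.0})  # the mass plot survives the missing MET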

QuarkJets fucked around with this message at 07:26 on Apr 23, 2013

QuarkJets
Sep 8, 2008

Sedro posted:

code:
ret = malloc (sizeof (image_t)) + 8 * img_header.data_size;
Edit: but the coding horror is do ... while (false)

Wow, there are some layers here. At first I thought that changing while(false) to while(success) would fix the code, but then I realized that it just creates an infinite loop once success becomes TRUE. Then I realized that the programmer did this intentionally to make the code flow in a specific way: he wanted the chunk in the do/while loop to run exactly once, but he also wanted to be able to break out of it and then run some other cleanup code if anything went wrong. There are better ways to do this, but breaking out of the do/while loop does give the desired flow. Neat

e: Or what everyone above me said; that'll teach me to read further before posting. This is the kind of thing that you do when you want a single point of return and aren't comfortable with passing pointers, I guess.

QuarkJets fucked around with this message at 19:32 on Apr 27, 2013

QuarkJets
Sep 8, 2008

Scaevolus posted:

Coding horror: a C++ code base where the author doesn't realize that delete, like free, works with nulls. He then proceeds to manage all his memory manually, with null checks around the deletes.

Ehhh, not realizing that delete works with nulls isn't a horror. Maybe you could describe some of the manual memory management? Is it particularly horrible or is the null checking the part that you find horrible?

QuarkJets
Sep 8, 2008

Zaphod42 posted:

As an interesting note, games are moving away from C++. The game engine is made in C++, and the bottlenecks and the rendering calls and all that, sure, but once that gets nailed down you can spend a really, really long time making tiny little improvements to the engine codebase.

Meanwhile, all the game-specific logic is coded in a scripting language that runs on top and is much faster and easier to work in, although less efficient.

All the major players have been doing it this way for a while, actually.

So as usual, the answer is that no one language is superior, and all have their uses. Sometimes a hybrid system is even best.

As an example, almost a decade ago Civilization IV used tons and tons of Python, which also made the game very easy to mod (because Python is generally easy to pick up). Major parts of the interface, map generation, and scripted events were entirely Python, although it probably ran on an engine coded in a compiled language.

e: Also memory management in C++ is pretty straightforward, you people are crazy

QuarkJets fucked around with this message at 08:39 on May 1, 2013

QuarkJets
Sep 8, 2008

How do these people manage to get programming jobs without the skills that the jobs require?

QuarkJets
Sep 8, 2008

I'm lost; what is all of this FizzBuzz shit?

QuarkJets
Sep 8, 2008

Sagacity posted:

I had a coworker for a while who refused to use a regular text editor and instead was continually fucking around with vim. Whenever he wanted to ask me something he would spend minutes just mashing keys, trying to open the right file, trying to get a window open, trying to get syntax highlighting to work. He wanted to be a real code hacker so bad!

This suggests that he doesn't even have a nice .vimrc :psyduck:

QuarkJets
Sep 8, 2008

Volmarias posted:

As opposed to...

cat, grep, and sed

QuarkJets
Sep 8, 2008

My advisor in grad school (physics) suggested indenting as much as possible in Python in order to help with flow control. So while I was writing stuff like this:

Python code:
def event_is_interesting(event):
  if(len(event.Leptons) != 2):
    return False
  if(event.Leptons[0].Pt() < 20.0):
    return False
  if(event.Leptons[1].Pt() < 20.0):
    return False
  if(event.MET() < 60.0):
    return False
  ... #at least 10 other if statements
  return True
He'd suggest writing stuff like this:

Python code:
#In main loop
if(len(event.Leptons) == 2):
    if(event.Leptons[0].Pt() >= 20.0):
        if(event.Leptons[1].Pt() >= 20.0):
            #etc, then do something if everything passes
It'd easily go 20 if statements deep for a more complicated event selection process

QuarkJets
Sep 8, 2008

The Gripper posted:

Well you can also wrap if statements across lines if you really want:
Python code:
def event_is_interesting(event):
  if(len(event.Leptons) != 2 or
     event.Leptons[0].Pt() < 20.0 or
     event.Leptons[1].Pt() < 20.0 or
     event.MET() < 60.0):
    return False
  return True
Your advisor is a pretty terrible person and I'd hate to see how crappy his actual production code is. If he thinks nesting for things like that is a good idea, I can just imagine the other "me learning VB6" style choices he makes.

Doesn't every part of the if statement get checked when you do this? I try to stay away from those kinds of if statements

And yeah, as other people have pointed out, there's not really any such thing as "production" code in academia, with a few exceptions
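
e: Answering my own question: no, Python's or (and and) short-circuit, so evaluation stops at the first clause that decides the result. Quick demonstration:

Python code:
def boom():
    raise RuntimeError("never reached")

print True or boom()   # prints True; boom() is never called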

QuarkJets
Sep 8, 2008

Zorn posted:

Never program a computer.

To be fair, the if statement thing isn't even my fault; I was taught that in an Intro C course offered in the computer engineering track at my university. So I assumed it was right even if it's not.

You're kind of being a dick, though

QuarkJets
Sep 8, 2008

KaneTW posted:

You have not seen physicist code.

Mathematician code is a million times worse. Here's a pseudocode example of something that I saw recently:

Python code:
#Note: all tabs are actual tabs rather than tab-expanded white space
class myclass(object):
    def __init__(self, a, b, c, d, e, f, g, h, i): 
        self.status = "none"
        if(a == 4):
           self.status = "4"
           return 4
        #other checks.  Also, variables d-f are never used

        if(self.status == "none"): 
            #do something

QuarkJets fucked around with this message at 21:15 on Jun 21, 2013

QuarkJets
Sep 8, 2008

Lysidas posted:

Variable names like that make sense when you're in the middle of implementing an algorithm from a paper or textbook, but you rename them after you get it working :argh:

e: and you haven't needed to extend object since December 2008 :v:

It's also fun that the constructor returns an int some of the time (why does it return this int? And why 4? No one knows, not even the coder)

QuarkJets
Sep 8, 2008

Lysidas posted:

I would have mentioned that too, but I figured it was a side effect of you typing pseudocode from memory. That "works" in 2.4 but not 2.6, and I don't have a 2.5 installation handy to see whether it works in that version.

Python code:
Python 2.4.3 (#1, Jan  9 2013, 06:47:03) 
[GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class Test(object):
...   def __init__(self):
...     return 1
... 
>>> Test()
__main__:1: RuntimeWarning: __init__() should return None
<__main__.Test object at 0x2aba648c5750>
Python code:
Python 2.6.5 (r265:79063, Oct  1 2012, 22:04:36) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class Test(object):
...   def __init__(self):
...     return 1
... 
>>> Test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() should return None, not 'int'
Python code:
Python 3.3.1 (default, Apr 17 2013, 22:30:32) 
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> class Test:
...   def __init__(self):
...     return 1
... 
>>> Test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() should return None, not 'int'

Would the int even be accessible in any way? I.e., if I called MyObject(args), would I get an int instance instead of a MyObject instance if that's what the constructor returns? :psyduck:

QuarkJets
Sep 8, 2008

Lysidas posted:

No; the value returned by __init__ is lost. Remember that all Python functions return None unless you specify otherwise -- if the return value of __init__ was used as the instantiated object you'd get None when creating an instance of any class.

Python __init__ methods are better described as "initializers" than "constructors". You can define a real constructor that returns the wrong thing, if you really want to:

Python code:
>>> class Test:
...   def __new__(cls):
...     return 1
... 
>>> t = Test()
>>> t
1

Look at what you've done; now you are the coding horror.

QuarkJets
Sep 8, 2008

Python code:
def read_data(data, desired_key):
    if(isinstance(data, dict)):
        indexes = []
        i=0
        desired_val = "none"
        for key in data.iterkeys():
            indexes.append(i)
            indexes.append(key)
            indexes.append(data[key])
            i += 1
        for i in range(len(indexes)):
            if(indexes[i] == desired_key):
                return (indexes[i], indexes[i+1], indexes[i-1])
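
e: For reference, as far as I can tell all of that machinery is trying to say this (my best guess at the intent; it hands back the key, its value, and its position):

Python code:
def read_data(data, desired_key):
    # Returns (key, value, position), or None if the key isn't there
    if not isinstance(data, dict):
        return None
    for position, key in enumerate(data.iterkeys()):
        if key == desired_key:
            return (key, data[key], position)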

QuarkJets
Sep 8, 2008

shrughes posted:

I stopped reading at the weird overparenthesization.

I only spot maybe 3 instances where dropping parentheses makes sense: two if statements that don't need them, and one more at the return. Although it's not like this hurts anything. There are a bunch of things wrong with this code, but using parentheses in an if statement isn't one of them

QuarkJets
Sep 8, 2008

shrughes posted:

When judging somebody by their code you can just stop when you see the parenthesis style being weird. That the person who wrote this code is too mentally deficient to just let go of his way of doing things and follow the obvious Python standard is enough to categorize this as bad code.

I wouldn't call it "obvious"; PEP8 suggests wrapping long if statement expressions with parentheses, so it'd be easy for a new Python programmer to see an example of that, get confused and start putting parentheses in all of their if statements.

Again, this is probably the most inconsequential and minor complaint that you could possibly raise regarding that code

QuarkJets
Sep 8, 2008

zergstain posted:

Everyone uses vi/vim. I'm sure if we used an IDE, people would just configure the indenting the way they like and we'd still have the same problem. And I know vim can take care of it, but good luck pushing everybody to use the same config.

Edit: As much as I'd love to, reformatting the whole file is frowned upon because I would show up on every line on CVS annotate (primitive is right, but we're working on a Perforce migration, not sure if that will help in this case though).

And now you have also revealed that your team is using CVS

QuarkJets
Sep 8, 2008

Pilsner posted:

Can we at least agree that people who indent with spaces instead of tabs are objectively wrong? It fucks up diff'ing.

We can agree that all tabs should be expanded to N spaces in your editor of choice so that people who use tabs and spaces get the same code


QuarkJets
Sep 8, 2008

Kilson posted:

No agreement can be made about abominable practices. :mad:

The abominable practice being the use of tabs in code, since tabs really can fuck up any language that uses white space; expanding tabs into spaces is good, of course
