comedyblissoption
Mar 15, 2006

SVN is so intolerably bad at merging branches that the standard SVN workflow is to not branch. Instead the preferred strategy is to merge a bunch of half-baked and incomplete commits in with everyone else's half-baked and incomplete commits.

comedyblissoption
Mar 15, 2006

cool thing you can do in git w/ rebase -i:

  • be in the middle of working on something
  • you notice you want to refactor X
  • commit your current in-progress changes, then do the refactoring (with or without your in-progress changes applied)
  • use rebase -i to move the refactoring commit before your in-progress changes and make sure all the tests pass w/ just the refactoring
  • continue on with your in-progress changes
  • the refactoring is separated out in the commit history from your in-progress changes and is much easier to read (rough sketch of the commands below)
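
rough sketch of the commands (branch and commit messages made up; the exec line is optional but handy):
code:
git commit -am "wip: feature changes"   # snapshot the in-progress work
# ...do the refactoring of X...
git commit -am "refactor X"
git rebase -i HEAD~2                    # in the todo list, move "refactor X" above the wip commit
# you can also add an "exec <your test command>" line right after "refactor X" in the todo list
# so the rebase runs your tests against the refactoring commit by itself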

svn does not work very well for this because it encourages munging refactorings and actual functional changes together

this is only important if you care about being able to more easily code review or pick out functional changes in your commit history

comedyblissoption
Mar 15, 2006

svn generally encourages giant commits instead of more granular commits because youre forced to push if you want to save your work as a commit

git allows things like having incremental commits that you can revert back to at any time if you make a mistake and you aren't done w/ your feature yet

svn has some really stupid limitations, like not being able to get back to the functioning working copy (w/ in-progress changes) you had before a pull destroyed it. sure, you can see what changes are incoming before you pull, but if you pull and resolve conflicts you can still end up in a hosed up state that is much harder to recover from than just going back to before it was hosed up. some people will copy their entire working directory to another directory before every pull to mitigate this

comedyblissoption
Mar 15, 2006

this is a use case every time you want to make small incremental changes without being forced to mix your changes w/ everyone else's poo poo yet

in svn you just accept your fate and make giant commits that do a whole bunch of poo poo at once, and your commit gets even harder to read once you do a pull and there are conflicts

comedyblissoption
Mar 15, 2006

this is also a use case if you have a development team size of one

comedyblissoption
Mar 15, 2006

i think attempting to branch and merge in svn gives you enough ptsd that you try to avoid it as much as possible

comedyblissoption
Mar 15, 2006

also i agree that very frequent merge conflicts mean your poo poo is probably all hosed up

that doesnt mean you have to have a horrible solution for resolving merges

comedyblissoption
Mar 15, 2006

in svn you can't push your working changes and then have someone review a logical commit history of 5 different incremental commits before it gets merged into trunk

in svn you have to have someone sit at your box before you push and look at one giant commit, or you send them a patch file through e-mail or some other process outside of your version control system. you also need to write down which revision the patch was created from, and the commit represented by the patch file is subject to change if there are merge conflicts after the review. in git you can actually push a branch w/ the 5 incremental commits for review without merging it to trunk yet
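
the git version of that review flow is basically just (branch name made up):
code:
git checkout -b feature/foo            # do the work as 5 small, logical commits on this branch
git push origin feature/foo            # reviewer fetches the branch and reads the commit history
# nothing lands on master/trunk until the review is done and someone actually merges it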

alternatively, you could use svn's horrible branching model and push 5 incremental commits to this branch and then have someone review it

but svn's branch merging is so bad the general recommended practice is to not branch

it's obnoxious to get back the history of that branch in svn after you delete it

comedyblissoption fucked around with this message at 00:36 on Jan 18, 2015

comedyblissoption
Mar 15, 2006

In SVN this can happen:
  • make some changes and get something working
  • you want to commit, so you pull
  • the pull automagically pulls in a bunch of changes. you may have resolved merge conflicts. you may have looked before you jumped. doesn't matter, fuckery can happen regardless
  • suddenly a bunch of poo poo is broken and you want to go back to when it was working for whatever reason
In SVN, unless you copied your working directory to somewhere else before the pull, your poo poo is all hosed up and you have to do a bunch of manual fuckery to try to revert back to your good state before the pull

In git this is trivial and you just reset to before the merge

In git you can easily flip back to your good working state and your in-progress merge and also share this with other people
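
a minimal sketch of that recovery in git (messages made up):
code:
git commit -am "wip: good working state"   # the good state is now a commit
git pull                                   # fetch + merge everyone else's changes
# a bunch of poo poo is broken and you want your good state back:
git reset --hard ORIG_HEAD                 # back to exactly where you were before the merge
# the merge itself isn't lost either -- it's still in the reflog (git reflog)
# if you want to flip back to it or push it somewhere for someone else to look at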

comedyblissoption fucked around with this message at 01:18 on Jan 18, 2015

comedyblissoption
Mar 15, 2006

Jabor posted:

Serious question, how does code review (real code review, the kind where you have someone review and sign off on it before the code actually gets added to trunk) work with svn?

With git you have a branch with your proposed change and your reviewer looks at that and pulls it to the master repo if they think it's good. But it sounds like actually using svn branches is discouraged over on that side of things?
in svn your options are roughly:
  • You send a giant patch file annotated with the commit that the patch file applies to
  • You use a branch, which people recommend you avoid in SVN for good reason
  • You have someone look over your shoulder and say looks good before you push
  • You push and have someone review the code after it's already been merged to the trunk
All of those options are inferior to the idiomatic ways you might review in git

comedyblissoption fucked around with this message at 01:19 on Jan 18, 2015

comedyblissoption
Mar 15, 2006

suffix posted:

i have a terrible programmer question: how do you do dependency injection right?

i get the concept and how it makes unit testing easier, but on the projects i've worked on it always ends up with at least one of two problems:
- all your code ends up unnecessarily tied to a specific injection container
- your outermost layer ends up being a gigantic mess of instantiating and connecting every object in the program, maybe in xml for good measure

like 90% of the time we don't have any alternative implementations and only want to switch it out for our unit tests,
so it doesn't feel right to just pass it up until your main code has to care about a helper class to a helper class to a helper class,
but that's the logical conclusion from the introductory texts i've read
Use a convention-based Dependency Injection framework.

The entire reason for a DI framework is to create the object graph for you without you having to do it all by hand with a giant series of new Foo(new Bar(), new Baz(...)). If you have to set something up every time you add a new dependency to your object graph, your DI framework sucks.

As an example, imagine your class needs an IFoo instance in its constructor. Somewhere in your assembly is a Foo implementing IFoo. Your DI framework should be able to automatically detect this, instantiate Foo, and pass it in for the IFoo parameter. The DI framework should be able to do this recursively for all of Foo's dependencies. You should also be able to configure whether Foo is a singleton throughout the assembly, and the DI framework should handle that.
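
a rough sketch of that convention with a toy reflection-based resolver (TinyContainer and everything in it is made up for illustration; a real framework adds lifetimes, configuration overrides, caching, decent error messages, etc.):
code:
using System;
using System.Linq;
using System.Reflection;

public interface IFoo { void DoThing(); }
public class Foo : IFoo { public void DoThing() { } }     // the only IFoo in the assembly

public class NeedsAFoo
{
    private readonly IFoo foo;
    public NeedsAFoo(IFoo foo) { this.foo = foo; }        // convention: IFoo parameter -> a Foo gets passed in
}

// toy convention-based resolver
public static class TinyContainer
{
    public static T Resolve<T>() => (T)Resolve(typeof(T));

    public static object Resolve(Type type)
    {
        if (type.IsInterface)  // find the single concrete implementation in this assembly
            type = Assembly.GetExecutingAssembly().GetTypes()
                .Single(t => t.IsClass && !t.IsAbstract && type.IsAssignableFrom(t));

        var ctor = type.GetConstructors().Single();
        var args = ctor.GetParameters()
            .Select(p => Resolve(p.ParameterType))        // recurse for each dependency
            .ToArray();
        return ctor.Invoke(args);
    }
}

// var needsAFoo = TinyContainer.Resolve<NeedsAFoo>();    // Foo is constructed and injected automatically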

Be careful any time you use the new operator in an object's constructor and you're not constructing a simple value object. The entire point of DI is that you aren't newing up your dependencies inside constructors.

That said, following DI principles is kind of a boilerplate-ish patch over java-like OOP, which encourages bad practices for modularity and composition of programs.

comedyblissoption
Mar 15, 2006

Also run away from poo poo that requires you to program things in XML.

comedyblissoption
Mar 15, 2006

FamDav posted:

if java were amenable to partial application you really wouldnt need nearly as much of the plumbing that goes into, say, spring.

that being said, spring gives you the ability to do configuration at runtime p easily, whether thats at startup or even per request.
Yeah, it's basically a language flaw that proper composition requires newing up a giant object graph at the top level which requires a reflection-based framework not to be tedious.

comedyblissoption
Mar 15, 2006

Maluco Marinero posted:

I have to say I don't really get the value of DI annotations as they're usually presented in JavaScript frameworks. Typically the prototype to receive injection has string annotations, I guess I just don't see the benefit of declaring dependencies like that if you want to be able to control it from the outside. Sure it'll save you time while you're wiring it up, at least a little bit, but there's no easy place to go and see what gets injected where.

Maybe I'm the horror, but I set up something like this in a recent project, where you register classes under a string name with a class. newWith is how its gets instantiated and by default they're singletons, which was overridden in the last call to be instances instead.

JavaScript code:
world.register("comms", Comms).newWith();

world.register("planes:location", LocationPlane).newWith();
world.register("planes:form", FormPlane).newWith();
world.register("planes:user", UserPlane).newWith("comms", "actors:location", "actors:tickets");
world.register("planes:tickets", TicketsPlane).newWith("comms");

world.register("lookouts:app", AppLookout).newWith("planes:location", "planes:user").instances(); 
One big benefit in a javascript project w/ string annotations and a DI framework is that it dramatically reduces the possibility of error in creating your object graph. Javascript is dynamically typed and doesn't check the number of arguments in function calls, so this can bite you in the rear end and not be caught until you actually run the application. Javascript kind of sucks.

Even in a statically typed language, I think it can be tedious to have to remember to update a registration file whenever modifying the dependencies or adding a new module. A DI framework does automatically whatever you would do by hand. The only magic-ness should really be how you override the default convention.

Edit: I edited this post b/c I thought this was intended to be used as a service locator but it isn't. Don't use service locators instead of DI.

comedyblissoption fucked around with this message at 12:22 on Jan 21, 2015

comedyblissoption
Mar 15, 2006

I'm dumb and misinterpreted your stuff as a service locator pattern. It isn't.

A non-lovely DI framework looks like what you might expect:

Instead of rolling by hand:
code:
void main(...) {
  var herp = new Herp();
  var derp = new Derp();
  var baz = new Baz(herp, derp);
  var qux = new Qux();
  var foo = new Foo(baz, qux);
  var bar = new Bar();

  //more poo poo
  ...

  var application = new Application(foo, bar, ...);
  application.Run();
}
A non-lovely DI framework should use convention and save you typing and look roughly like:
code:
//a non-lovely framework allows customization in code and doesn't force XML
IDiConfiguration GetDiConfiguration() {
  var diConfiguration = new DiConfiguration();
  diConfiguration.MakeEverythingSingletonsByDefault();
  return diConfiguration;
}

void main(...) {
  var diConfiguration = GetDiConfiguration();
  var diContainer = new DiContainer(diConfiguration);

  var application = (Application)diContainer.Instantiate(typeof(Application));

  application.Run();
}
Angular's string annotations for dependencies are fine. They're essentially a way of feeding the DI framework the static type information it needs to understand how to wire up the dependencies. Wiring it up by hand and string annotations are equivalent in terms of coupling. The only caveat is that angular may want you to do annotations one way while another framework wants them done another way. This is really a manifestation of javascript (and maybe dynamically typed langs in general) sucking for stuff like this.

The way they make you declare the role of the dependency (service, factory, controller, etc.) is tight coupling specifically to the Angular framework, though. Other than that Angular is what you should mostly expect out of a DI framework.

Castle Windsor is okay for simple stuff in C# and hopefully you don't need all the crazy customization, although it's there if you need it. The main thing you need to configure is which objects are singletons and not.

Depending on how easy it is to mock stuff for unit tests, you'll probably want all the dependencies you inject to be interfaces.

EDIT: modified pseudocode since it broke tables

comedyblissoption fucked around with this message at 12:42 on Jan 21, 2015

comedyblissoption
Mar 15, 2006

~Coxy posted:

I dislike the fact that instead of developing the tools to make unit testing easier and more capable, we instead cargo-cult these insane programming styles
The OOP mania where everything must be an object and an object is the smallest unit of program composition has forced these insane programming styles and boilerplate.

Not to say that objects are a bad idea, but making them the smallest unit of program composition is a Bad Idea.

comedyblissoption
Mar 15, 2006

emacs supremacy

comedyblissoption
Mar 15, 2006

but honestly it's really more that everything else sucks so much people are forced to use an archaic relic with extensions in a special bespoke variant of a dead language as their primary text editor

comedyblissoption
Mar 15, 2006

and if you dont also do similar amounts of unpaid overtime you are viewed relatively as a slacker

comedyblissoption
Mar 15, 2006

qntm posted:

because it's an open-and-shut question to which python, and only python, gave the correct answer?
I think dropping filter() and map() is pretty uncontroversial

comedyblissoption
Mar 15, 2006

'agile' means a million different things and is almost a meaningless word now

but developing software iteratively and in a bunch of prioritized small steps of actually working software w/ a feedback loop is generally a better idea than a giant bike-shedding big design up front with little feedback during a long period of development

comedyblissoption
Mar 15, 2006

A better method for debug logs of expensive operations in c#:

code:
logger.Debug("The boner hash is {0}", () => calculateHash(boner));
ILogger.Debug should only log and run the calculateHash function when your application configuration enables debug logging. You need the second parameter to be a Func<string> so that the expensive operation only runs when debug logging is enabled. Since C# uses strict evaluation, not using a Func here would cause the expensive calculateHash function to be run even when youre not in debug mode.
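
a minimal sketch of what that Debug overload does internally (DebugLogger/isDebugEnabled are made-up names; a real logger writes somewhere smarter than the console):
code:
using System;
using System.Linq;

class DebugLogger
{
  readonly bool isDebugEnabled;   // would come from your app configuration

  public DebugLogger(bool isDebugEnabled) { this.isDebugEnabled = isDebugEnabled; }

  public void Debug(string formatString, params Func<string>[] lazyStrings)
  {
    if (!isDebugEnabled)
      return;   // the lambdas are never invoked, so calculateHash never runs

    var args = lazyStrings.Select(f => (object)f()).ToArray();
    Console.WriteLine(string.Format(formatString, args));
  }
}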

comedyblissoption
Mar 15, 2006

Note that the above would require something like:
code:
interface ILogger {
  void Debug(string formatString, params Func<string>[] lazyStrings);
}
A much simpler interface would probably be:
code:
interface ILogger {
  void Debug(Func<string> lazyString);
}
And you would invoke it like:
code:
logger.Debug(() => String.Format("The boner hash is {0}", calculateHash(boner)));
But if youre almost always using String.Format, that's a bunch of ugly boilerplate at every call site, much like the if conditional that people can easily forget.

comedyblissoption fucked around with this message at 23:37 on Feb 28, 2015

comedyblissoption
Mar 15, 2006

tef posted:

this is great until some gently caress puts a side effect in a log statement
If you're using a language that allows side effects for a function that returns string (which is basically everything except haskell and other pure-langs), you always have this problem. There's nothing you can do in those languages to prevent some idiot from printing paper in a CalculateTax method.

Note that since C# has overloading, you can always provide an ILogger.Debug that takes actual strings and not lazy strings.

Using a lazy-string overload of ILogger.Debug for expensive computations is a better approach than requiring everyone to always remember to wrap their logger.Debug in an if conditional.
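
i.e. something like:
code:
using System;

interface ILogger {
  void Debug(string message);             // eager: fine when the string is cheap to build
  void Debug(Func<string> lazyMessage);   // lazy: only evaluated when debug logging is on
}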

comedyblissoption
Mar 15, 2006

A common logger used in .net is NLog. It seems Good Enough for simple things. You can log at different levels of priority (e.g. logger.Debug, logger.Error, logger.Info) and configure the level of logging in your app configuration. You can trivially build the lazy strings for expensive computations w/ your own wrapper interface if need be.

This is a much better approach than commenting/uncommenting out code for ad hoc logging.

This is also better than having to remember to put if conditionals around all of your logging statements.

comedyblissoption
Mar 15, 2006

eschaton posted:

dehumanize yourself and face to bikeshed

comedyblissoption
Mar 15, 2006

MALE SHOEGAZE posted:

rails is very bad but we're going to be building a new service from scratch and id ont really know what to use. go is neat but i hate that it doesnt have generics and i dont know that i want to try to get spun up on a new language while on a deadline.
C# is really good for a statically typed imperative language borrowing a lot from functional langs if you don't mind being tied heavily to .net

comedyblissoption
Mar 15, 2006


quote:

Now, this does require one huge prerequisite: every candidate must have a side project that they wrote, all by themselves, to serve as their calling card.
I don’t think that’s unreasonable. In fact, I think you can very happily filter out anyone who doesn’t have such a calling card.
this article bemoans false negatives in the industry and then proposes replacing the current process w/ another requirement that produces a bunch of false negatives of its own

comedyblissoption
Mar 15, 2006

also the interviewer will require much more time investment per candidate

comedyblissoption
Mar 15, 2006

false negatives in the interview process are actually a problem that needs to be addressed though, I agree

coding on a whiteboard in a technical interview is really loving dumb when you could just use an actual computer instead w/ internet access

for most programming jobs doing a basic fizzbuzz screener followed by some basic straightforward collection processing is probably good enough. transform this list of stuff into that other type. group this list of stuff and then count the number of times x happens in each group. this type of poo poo is required by most programming jobs ityool 2015.
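
e.g. the caliber of thing i mean (made-up data, answer in c#):
code:
using System;
using System.Linq;

class Screener
{
  static void Main()
  {
    var requests = new[] {
      new { User = "alice", Status = 200 },
      new { User = "bob",   Status = 500 },
      new { User = "alice", Status = 500 },
    };

    // group this list of stuff, then count the number of times a 500 happens in each group
    var errorsByUser = requests
      .GroupBy(r => r.User)
      .ToDictionary(g => g.Key, g => g.Count(r => r.Status == 500));

    foreach (var pair in errorsByUser)
      Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
  }
}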

don't do technical problems that require you to have an aha moment in regards to the properties and relationships between numbers if you havent encountered that type of problem before unless this is required as part of the job

don't do technical problems that require recalling how to efficiently implement in-place array algorithms or computer science data structures unless you actually need to implement novel algorithms of that nature as part of the job

i think this would decrease the number of false negatives w/o dramatically increasing the number of false positives

false positives are not cheap in most corporate cultures for various reasons. being willing to hire people and then quickly turning around and firing them in under a month or two has its own cons.

comedyblissoption
Mar 15, 2006

the main reason technical interviews are needed is that you cannot give candidates the benefit of the doubt. technical interviews are adversarial and awkward because unfortunately you cannot give people the benefit of the doubt

you unfortunately cannot give a candidate the benefit of the doubt for certifications, technical college degrees, or claimed work experience because people lie about these and people also actually have these while being awful

in a typical job board style hiring process, the majority of applicants are the people that have trouble finding gainful employment in the industry because they are bad at it. you need some way to screen these. the people who are not bad have much shorter job search times and are not on the market for long

the only way you could make technical interviews not adversarial and awkward is to have some way to give candidates the benefit of the doubt without having to technically check them. the only thing i can think of that works at scale is professional licensing/certification as exhibited in other fields. this has its own set of large cons though and id rather programming not have this.

comedyblissoption
Mar 15, 2006

Corla Plankun posted:

i don't understand why interviewers don't just give people the benefit of a doubt and let a 30-day probation period sort out the false positives

this job isn't that fuckin hard. if you can fake your way through an interview you'll probably make a fine python janitor
how to hire is an economic question that needs to consider time and resources

you waste a lot of time hiring then firing someone in 30 days because you have to restart the hiring process all over again for that position, re-do the training, and take a ding to morale for wasting all that time on someone. even in a place w/ the lowest hiring and firing overhead imaginable that's still a lot of wasted time

a lot of applicants with seemingly ok resumes will apply for technical positions and not be able to do fizz buzz-caliber questions in an interview or workplace situation. people like this cannot be python janitors

you do at least basic screening because it's more expensive not to

comedyblissoption
Mar 15, 2006

If the data sets are not known in advance when the program runs but you have some kind of name or identifier for each data set, then use a dictionary of keys to values where the keys are the identifier or string name of each data set.

If the data sets are known in advance when the program runs, define a new data structure whose properties correspond to the data sets.
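
e.g. (made-up domain, the shape is what matters):
code:
using System.Collections.Generic;

// data sets known in advance when the program runs: give each one a named property
class SurveyResults
{
  public List<double> ControlGroup { get; set; }
  public List<double> TreatmentGroup { get; set; }
}

// data sets only known at runtime: key them by name/identifier
class SurveyResultsByName
{
  public Dictionary<string, List<double>> Groups { get; set; }
}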

comedyblissoption
Mar 15, 2006

i think it's okay to hire marginal people since people can get better and be trained if they like the work

it's poisonous for an employer to hold out for a magical unicorn developer they can underpay

you just want to avoid people where the role is so far above their head that your team would've been better off if you had no one in the position and you took the money for their compensation, dug a hole, and burned it in there during the time they were employed

which is why a place that has absolutely no technical coding screening should trigger alarm bells

comedyblissoption
Mar 15, 2006

Notorious b.s.d. posted:

lambdas are really, really useful for collections

code:
List<Student> students = persons.stream()
        .filter(p -> p.getAge() > 18)
        .map(Student::new)
        .collect(Collectors.toCollection(ArrayList::new));
i would argue that lambdas were added to the language primarily to enable the improved collections libraries. they were certainly developed together.
Elaborating on this, the vast majority of for loops in application level programming could be beneficially replaced by functions that take lambdas as parameters.

Benefits over for loops:
  • more succinct
  • reveals intent and is more readable
  • modular
  • less error-prone

Imagine the equivalent for loop(s) required for the above example. Now imagine more complicated examples with grouping, joins, counting, and whatever arbitrary collection processing.

Using lambdas w/ collections is analogous to sticking pipes together that feed into each other. This is a much better situation than wracking your brain over what i, j, and k mean right now in a deeply nested for loop.

comedyblissoption fucked around with this message at 02:54 on Mar 26, 2015

comedyblissoption
Mar 15, 2006

Do the java 8 stdlib collection processing functions w/ lambdas use lazy evaluation?

comedyblissoption
Mar 15, 2006

Luigi Thirty posted:

preference for using dashes instead of spaces in everything makes me think of the time i had to do cobol once and i involuntarily vomit all over my computer
yeah but camel-case is a worse abomination upon the lord because it is both harder to type and read than dash case

code:
someStupidLongName

some-stupid-long-name

some_stupid_long_name

comedyblissoption
Mar 15, 2006

class/instance/module-level mutable variables that could've been scoped to only be immutable function parameters are basically the same as global variables and i see this everywhere

comedyblissoption
Mar 15, 2006

St Evan Echoes posted:

never realised how much i lean on linq until i started rewriting a package from c# to java

thought it would take 5 minutes, looks like ill be here all day
java 8 supposedly has collections w/ lazily evaluating higher order functions so you can move beyond for loop janitoring

if you need linq as expression trees passed into iqueryables well youre prolly hosed then
