Volguus
Mar 3, 2009

Ephphatha posted:

Guess this is the most appropriate place to ask, there was some talk in one of the other threads about some articles detailing why over-using globals in code is an anti-pattern but I've never managed to find anything more comprehensive than a vague blog post. Anyone have any material they'd recommend? I don't expect anything to change at my workplace but it might make me feel better to know I'm not the only person who wants to actually use appropriate scoping to the extent the lovely languages I have to code in support it.

Well, there are at least a few things that happen when you use (not even abuse) global variables (singletons included):
1. You can no longer parallelize the code
Maybe you can, but you'll need so many mutexes all over the place that there'll be no advantage gained from threads.

2. Any change has side-effects
And that will slow down releases; unless you have 110% test coverage, unexpected bugs will crop up all over the place. Developers will be effectively afraid to touch anything, lest all hell break loose.

3. Developers (some at least, the good ones) will hate working on that code; they'll leave the company and only the mediocre ones will remain. Unless you have some other extraordinary perks to hire and keep good people, the company is doomed (unless they're in such a niche market that they have no competition).

And of course there are a lot of other issues, but these were the first things that came to mind.
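A minimal Java sketch of point 1 (all names invented): once state is global, every concurrent caller has to funnel through the same lock, so the thread pool mostly waits on itself.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GlobalCounter {
    // The "global": one mutable field shared by every caller.
    static int counter = 0;
    static final Object LOCK = new Object();

    static void increment() {
        // Without this lock, concurrent increments silently lose updates.
        // With it, every thread queues up on the same global, so the
        // four-thread pool below buys you almost nothing.
        synchronized (LOCK) {
            counter++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(GlobalCounter::increment);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(counter); // 10000, but only because of the lock
    }
}
```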

Volguus
Mar 3, 2009

ChickenWing posted:

My project is actually experiencing this sort of issue as well - our teams are agile, but our backend is still operating waterfall. Thus, when we say "oh we didn't anticipate this in requirements let's ask backend to update real quick" backend says "no changes without a CR and also we're not touching that feature for another three months so uh get hosed I guess?"

Luckily we don't need the backend to touch too many things (and also I have personal involvement in none of them) but from what I've overheard it's an incredibly unfun clusterfuck.

But isn't this what should happen? I mean, agile or not, you cannot just come and throw a new requirement into the backlog (this sprint's backlog, no less) and expect that it will get done when you want it done. When it's important, sure: team leads, managers, directors, whoever, meet and agree that X and Y get dropped and feature Z gets done instead.

At the end of the day, that's the entire point of agile: "Don't loving bother me with your poo poo today. Wait until the planning meeting." Isn't it?

Volguus
Mar 3, 2009

ChickenWing posted:

If I was planning on jumping from a full time to a contract job, how much should I be bumping up my expected compensation to account for the fact that I wouldn't have vacation days/benefits/etc anymore?

I have read somewhere (some blog, so take this with a grain of salt) that what one should do is decide how much they want to make per year, divide that number by 1000, and ask for that much per hour. For example: targeting $50k per year brings you to $50/hour, and at roughly 2,000 billable hours a year, working the entire year would gross you $100k. Awesome. But the question is: will you be able to work the entire year? What's your expected downtime between jobs? Will you work 40 hours per week, or less?
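The arithmetic behind that rule of thumb, sketched out (the 2,000 billable hours per year is my assumption, roughly 50 weeks of 40 hours; the class and method names are invented):

```java
public class RateRule {
    // Rule of thumb from the post: hourly rate = desired annual income / 1000.
    static long hourlyRate(long desiredAnnual) {
        return desiredAnnual / 1000;
    }

    // Gross for a fully booked year at ~2000 billable hours. The 2x margin
    // over the target is what absorbs downtime between contracts,
    // benefits, and self-employment taxes.
    static long fullYearGross(long desiredAnnual) {
        return hourlyRate(desiredAnnual) * 2000;
    }

    public static void main(String[] args) {
        System.out.println(hourlyRate(50_000));    // 50 ($/hour)
        System.out.println(fullYearGross(50_000)); // 100000
    }
}
```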

Volguus
Mar 3, 2009

Gounads posted:

If you aren't making your commitment consistently, there's no need to game the estimates.

Just commit to less.

Which means that "high-risk" item that needs to be done will never get done. Who's fool enough to commit to it?

Volguus
Mar 3, 2009

Pollyanna posted:

... I have no idea how it works and don't care to find out.

If you value your sanity, you would probably want to keep it that way.

Volguus
Mar 3, 2009

ChickenWing posted:

Funny, this is basically what my tech lead says to me every time I use reflection :v:

(okay it's not that bad but he is of the opinion that reflection is best used in extreme moderation and flinches every time I mention how easy it was to solve a particular problem with it)

He is right. There are cases where reflection is the only way (or the most elegant way) to achieve a goal, but take care: the price you pay for it is quite high.

Volguus
Mar 3, 2009
OSGi is massive and it uses reflection itself too, although it is very capable. Use it if you need its features.
At the end of the day, for plugins you are pretty much forced to use reflection, either via a library or via custom code. Most likely .NET uses reflection too. Since Java 6 one can use ServiceLoader for simple plugin support, and maybe in Java 9, with the new module system, there will be easier ways. But dynamically loading a class at runtime, instantiating an object, and calling methods on it most definitely involves reflection of some kind.
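A minimal ServiceLoader sketch, for illustration (the `Plugin` interface is invented, and real providers would need a matching `META-INF/services` entry on the classpath before anything actually loads):

```java
import java.util.ServiceLoader;

public class PluginHost {
    // The plugin contract. Implementations typically live in separate jars.
    public interface Plugin {
        String name();
        void run();
    }

    public static void main(String[] args) {
        // ServiceLoader scans the classpath for resources named
        // META-INF/services/<binary name of Plugin>, each listing
        // implementation class names, and instantiates them reflectively.
        ServiceLoader<Plugin> plugins = ServiceLoader.load(Plugin.class);
        int found = 0;
        for (Plugin p : plugins) {
            System.out.println("loaded plugin: " + p.name());
            p.run();
            found++;
        }
        System.out.println(found + " plugin(s) found");
    }
}
```

With no provider files on the classpath the loop simply finds nothing; the reflection is all hidden inside `ServiceLoader`.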

Volguus
Mar 3, 2009

revmoo posted:

Been there. I don't know how people can stand to use svn.

Because they have been using it since forever. When it came out, SVN was better than CVS and it was an almost drop-in replacement. Then git comes along, changing everything. From their point of view, the distributed nature of git only means that now they can (and have to) commit locally before pushing to a server. Which is not really a thing they care about. So here comes a new tool that forces them to re-learn how to use version control and to change their workflow, for no (obvious) benefits. No wonder they resist that kind of change.

Volguus
Mar 3, 2009

Clanpot Shake posted:

I can't speak to CVS, but having gone from svn to git, the way branching works alone is benefit enough to switch. I didn't find it hard at all to grok.

What exactly (from a user's perspective) did you hate about svn branching vs git? I used svn until 2003 or so, then moved to perforce (since the company I moved to used perforce), and have only been using git since 2008. I honestly forget what was so bad about svn branching (I do remember not having that many branches back then, though).

Volguus
Mar 3, 2009
Svn required a certain process to be followed and bit you in the rear end if you didn't. The anecdotes above confirm that.
From personal experience, git requires a process to be followed as well, if you want to be happy and headache-free. Sure, it handles certain things better than svn, but it is not without its faults. You can confuse git without too much trouble, too.
The idea is that if your organization is following the svn process and the tool itself doesn't get in the way of developers, nightly builds, etc., there really shouldn't be any reason to move to git. The worst thing is to change for the sake of change.

Volguus
Mar 3, 2009

Mniot posted:


Once I'm actually ready, I rebase into … and force-push.

What? Why? Force-push? In almost 10 years I've had to force-push once, and only because someone else force-pushed and broke everything. A force-push for no reason can do a lot of things, none of which are desirable.

Volguus
Mar 3, 2009

revmoo posted:

:goonsay:

"Guys, I deleted a file named _canary and now builds are failing with an error that says 'canary hash mismatch.' How will we ever discover the cause?!??!"

But what's the purpose of that? Just curious, I've never thought of having such a check in place.

Volguus
Mar 3, 2009

lifg posted:

Protip: have your deploy script check the company directory for your name, and abort if it's missing.

I can see a reason to do that, though. A nefarious reason, but a reason. With the canary, though... I'm missing something.

Volguus
Mar 3, 2009
So BDD tests are like integration tests, but expressed in a different language? An English-like language that isn't quite a programming language? I can't say I really understand them from that github page.

Volguus
Mar 3, 2009

Gounads posted:

As far as I can tell, the dream is having non-developers write automation tests, presumably so you can pay them less. Which turns into a nightmare of developers having to implement all the automation tests and then fix the BDD syntax as well.

If the tests are not written in a programming language, won't that make them quite limited in what they can do? And therefore won't the tests be quite useless, since they can only do very simple things? Testing that my million-dollar website doesn't go down is easy. But shouldn't I also test that the response received (when I press the red button) is the correct one? That the object/data returned by that API call actually makes sense and is the correct data?

Volguus
Mar 3, 2009

raminasi posted:

I enjoy programming a great deal but am also a little confused by people who spend the day coding and then come home and spend their evening doing more coding.

I found that I do a lot more coding at home when what I do at work is boring. When I do cool stuff at work, the home-coding time is almost non-existent.

Volguus
Mar 3, 2009

Volmarias posted:

There's a bunch of folks here that use plain text editors like Vim, Emacs, Sublime, etc. in lieu of an IDE. I have no idea how they work so well with it, but they do.

IntelliJ for me, all the way.

For some, it's what they're used to. For others, it's just showing off.

Volguus
Mar 3, 2009

ToxicSlurpee posted:

Personally I just wish that C# was more popular. I really like C# but I do Java for a living right now.

You will hate it once you see the ecosystem around it. The package manager is a joke, the build system is completely brain-dead, the libraries (as few as they are) are either written by toddlers or ask a bazillion $ if they have a grown-up in there somewhere, and Microsoft itself is doing its best to destroy the will to live of any developer crazy enough to try its platform (they're clearly winning this game against Oracle, not that Oracle isn't trying too).
Compared to Java, the .NET platform is a mess, to put it lightly. C# as a language... yes, it is fine. The rest isn't.

Volguus
Mar 3, 2009
I had the same choice to make once and I went with the MacBook. I had never used a Mac before, had heard so many great things about it, and was all excited. After 6 months I gave it back. macOS and I do not get along at all. I would have liked to install Linux on that machine, but IT didn't let me. Hardware-wise, a great machine (except the drat keyboard). Software-wise, that thing was a WTF every 5 minutes.

But hey, at least now I know to never buy one with my money.

Volguus
Mar 3, 2009

Pollyanna posted:

:psyduck: Did you never code review your contractor's work?

I highly doubt that code reviews are that popular in most companies. Yes, they should be done, but I'm thinking the vast majority don't bother.

Volguus
Mar 3, 2009
The first time I ever heard of the open-office concept was in the early 2000s, in an article I read on the net. In it, the author compared the cohesion and quality of BeOS with the mess that Windows was at the time (I think XP had just been released, but I may be wrong on that one). One important factor in this was (according to the author) the communication line that was always present and always open between the BeOS developers: kernel people sitting within spitting distance of the graphics guys, the audio people, and so on and so forth. Whenever anyone had a question or an idea there was an immediate discussion, consensus was reached, and everyone was on the same page. Contrast this with the Microsoft culture: closed offices, teams that hated each other (much less talked), you shall not knock on the door of the kernel guys or half the Windows installations in the world would get a blue screen, etc. The result was obvious: one was an elegant, capable, and reliable OS; the other was a complete mess. The left hand never had any idea what the right hand was doing at Microsoft.

The rest is history now.

(nowadays though open-office just means cheap-rear end fuckers, people who would kill puppies for breakfast if it would save them a penny. of course, productivity tanks and the company eventually folds, but hey ... we're "open").

Volguus
Mar 3, 2009

raminasi posted:

I'm not sure what lessons we're supposed to draw from the story, given that the history is Microsoft riding Windows to become one of the most successful companies ever created.

Yes, exactly. BeOS is dead; Microsoft is still going. Love or hate Windows (XP, Vista, 7, 8, or 10), as dysfunctional as it is, as complete a mess of an OS as that thing is even now, they're nowhere near going down. On the contrary.

The lesson, I guess, is that theory and practice don't always agree. On paper, the open office with ad-hoc collaboration should give you a better-designed, more cohesive product. In practice, letting engineers do their work undisturbed is what actually creates a successful product.

Volguus
Mar 3, 2009

Messyass posted:

What about pair programming? There's even a thing called mob programming, although I've never encountered it in practice.

Personally I believe that pair programming is great for mentoring (regardless of who is actually writing the code) and for those times when "I have this really tough problem, let's solve it together at the computer instead of the whiteboard". Other than that, it's a bit of a waste of time in my opinion.

Volguus
Mar 3, 2009

Carbon dioxide posted:

I have a coworker who explains the more complex frameworks we use to her three year old kid at home.

I can't help but wonder what that'll do to the kid's upbringing.

https://www.youtube.com/watch?v=RnqAXuLZlaE

Volguus
Mar 3, 2009

a foolish pianist posted:

Yeah, seriously. We do a lot of business with European ISPs, and everything just shuts down for the whole month of August. From an American standpoint, it seems insane.

Wish I could take the month off too, though.

To be honest, when I arrived in Canada and heard that the minimum required vacation was 10 days per year (while most employers were offering 15), I thought that was loving insane. How can anyone live with fewer than the 25 working days of vacation per year that normal people get? Then I heard about the US.
After 10 years I still can't comprehend it. Why people put up with this is, as far as I can tell, an impenetrable mystery.

But yeah, August is dead. So is December.

Volguus
Mar 3, 2009

MisterZimbu posted:

I'm not smart enough to figure out how to handle time/dates and handle all the conversions from/to UTC/DateTimeOffset from the browser to moment to knockout to newtonsoft json to webapi to dapper to sql and back.

Whoever invented time is an rear end in a top hat :argh:

The easiest thing to do is to not handle anything. Everything is UTC until the moment of display. When sending it back, convert to UTC in the client. Then you simply don't worry about it. Otherwise you'll go insane.
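A sketch of that approach with java.time (the timestamp and zones are made-up examples): everything is stored and passed around as a UTC `Instant`, and a time zone only appears at the display step.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class UtcEverywhere {
    // What gets stored in the DB and sent over the wire: always UTC.
    static final Instant stored = Instant.parse("2017-03-03T17:30:00Z");

    // The only place a time zone appears: the display layer.
    static String display(Instant utc, ZoneId userZone) {
        ZonedDateTime local = utc.atZone(userZone);
        return local.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm"));
    }

    public static void main(String[] args) {
        System.out.println(display(stored, ZoneId.of("America/Toronto")));  // 2017-03-03 12:30
        System.out.println(display(stored, ZoneId.of("Europe/Bucharest"))); // 2017-03-03 19:30
    }
}
```

The same stored instant renders differently per user, and nothing upstream of `display` ever has to think about zones or DST.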

Volguus
Mar 3, 2009
I have been asked for my resume by new bosses before. I just assumed they wanted to see what everyone's experience is, nothing more than that. Maybe their boss is asking them "who on your team can do X".
I never gave it more thought than that.

Volguus
Mar 3, 2009

smackfu posted:

Usually the trouble we run into is when the source data is just a date with no time. A lot of the date/time libraries only have a time stamp type and will treat that as 0:00 UTC which then might actually be the previous date when converted into someone's local time zone. Which is generally not what we want. Imagine displaying someone's birthday as the wrong date, for instance.

When you deal with "source data", you do what you have to do; there's no other choice. My advice was for when you control the full chain.

Volguus
Mar 3, 2009
Another popular approach in C code is to use goto. It does make the code more readable. Then again there are people who wouldn't use goto if their life depended on it.

Volguus
Mar 3, 2009

Phobeste posted:

Linux kernel style goto error handling is the best for those sorts of workflows in c where you have multiple function calls which depend on things the previous function call did and if any one of them fails you need to wind up all the stuff you did before it. Hell I'm not even sure how else you'd do it without RAII-style code.

There are ways:

1. Nested if/else to hell and back
2. Return checks that free whatever has been initialized up to that point (and pray you don't forget something)

Or you can choose to keep your sanity, not adopt blanket rules (such as "never use goto"), and just use the thing without behaving like a child.
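For contrast, since this thread is mostly Java: the same wind-up problem there is usually handled with nested try/finally, which releases resources in reverse order of acquisition, the same shape the kernel's goto ladder builds by hand in C. A sketch with invented names:

```java
public class StagedCleanup {
    // Records acquire/release order; real code would open files,
    // sockets, locks, etc. instead of appending to a log.
    static final StringBuilder log = new StringBuilder();

    static void acquire(String name) { log.append("+").append(name); }
    static void release(String name) { log.append("-").append(name); }

    // Each later step depends on the earlier ones; on any failure,
    // everything acquired so far is released in reverse order,
    // just like a chain of goto err_b / goto err_a labels in C.
    static void work(boolean failMidway) {
        acquire("a");
        try {
            acquire("b");
            try {
                if (failMidway) throw new RuntimeException("step c failed");
                acquire("c");
                release("c");
            } finally {
                release("b");
            }
        } finally {
            release("a");
        }
    }

    public static void main(String[] args) {
        try { work(true); } catch (RuntimeException e) { /* expected */ }
        System.out.println(log); // +a+b-b-a
    }
}
```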

Volguus
Mar 3, 2009

necrobobsledder posted:

I really, really hope Kotlin doesn't turn into Dart or Groovy where most of the software community just doesn't get behind it after doing some toy apps with it. There's too many things that remind me of what Groovy tried to do and now it's mostly relegated to Jenkins scripted jobs these days (because Grails is... not popular). We have enough dead languages and platforms on the JVM these days and it's just getting frustrating almost now.

It needs to provide something really, really, really amazing for the masses to switch. And, in my personal opinion, it doesn't. Kotlin is (at least right now) at most a big "meh".

Skandranon posted:

Eek! How do Angular 1.3 apps still exist? Hit me up on PM and I can show you how to use Angular 1.5/6 with TypeScript that will make your life so much easier.

They were written when 1.3 was da bomb. Angular 1.5 brought in a bunch of new things and broke a bunch of others. Since in the JS world the concept of "backwards compatibility" doesn't exist even for minor versions (and everyone is guilty of that, which makes me wonder if all of those developers are just toddlers playing with legos; they surely behave that way), either you work 100 hours per day just to keep things afloat, or you say gently caress it and stay put on a version that at least works. One has a product to make and ship, not 100 versions to fiddle with that break in the most unexpected and bizarre ways. And you cannot do both.

Volguus
Mar 3, 2009

Main Paineframe posted:

This is where those "oh god, we need to update this code for a new browser/OS but it has a dependency on a library that hasn't been updated in ten years" situations come from. Sooner or later, you (or your successor) are going to have to deal with upgrading. The only choice you get to make is whether to do it sooner (when you're a minor version or two behind) or later (when you're so far behind that you're basically rebuilding the whole thing).

Hey, we're talking about web/JS here, right? While in the past IE required a special branch for itself, nowadays the situation is not so dire (chrome is taking IE's place with its monopoly and deviation from standards, but it's not so bad yet), so "new browser/OS" shouldn't be a thing anymore. And the upgrade choice will always be "never" since, as I said before, they break poo poo all the time, so you either babysit it constantly or you don't. 10 years from now (hell, 10 months from now) the landscape will be completely different anyway (from libraries and frameworks to build tools and, hopefully, package managers), so... it's really not a thing to worry about: you will have to rewrite it later. Or... you don't, and the new interns hired 5 years from now (they work for pickles and they're perfect for JS development) will hate your guts. But that's on them; you're in management now and get to yell at them for being lazy pricks.

Volguus
Mar 3, 2009

Doom Mathematic posted:

Just to make sure we're on the same page here, some people define "scrum master" to be "team leader" whereas others define it to be "the person who leads the five-minute standup meeting we have each morning". On my team, the team leader is a fixed role held by a senior developer, but responsibility for actually running the scrum rotates among team members each day.

From my personal experience, the scrum master is neither the team leader nor merely the person who leads the standups. He/she has a bigger role, essentially being the bridge between the development team and everyone else (QA, project management, client representatives, etc.). The main job is to ensure that the developers have everything they need to do their work: a prepared and prioritized backlog for the planning meeting, answers to any questions that may come up, leading and preparing the demo meetings, the pretty charts and reports for the execs, etc. Leading the standups is but one (and a minor one at that) of those responsibilities.
But yes, it does happen more often than not that the team lead takes on all those responsibilities. Or... nobody does, and everything is chaotic. A good scrum master can be the difference between a successful project and a failure.

Volguus
Mar 3, 2009

pigdog posted:

Uh, a lot of that stuff there is the product owner's job. I can't remember if technically a PO may also be SM, but I'm pretty sure it's not encouraged.

Umm... no. The product owner is the client (or the client's representative). He's not the bridge to anything. He has the backlog, sure, but unless the scrum master meets with him before planning, that backlog will be a complete mess by planning time. And those meetings will, of course, not involve the developers. Now, you can come and say: in every company I've worked at, the PO is the client and the PM and the CEO and the janitor and the IT guy. And yes, there are companies that do that, but in ideal conditions those roles are filled by different people with completely different skills.

Volguus
Mar 3, 2009

Mniot posted:

I had a manager who claimed that he managed open-source projects on the weekends. He was a buffoon and a dick.

I bet they'd tell you about mentoring other CEOs.

Good project managers are invaluable (I have seen a couple in the last 20 years). I can do just fine without the run-of-the-mill ones, though.

Volguus
Mar 3, 2009

ChickenWing posted:

How do you lot decide what levels you log information at? I'm standardizing our server logs and finding it fairly difficult to come up with even a vague standard for what goes in error/warn/info/debug/trace, especially clarifying boundaries for the latter three.

So far I've settled on:

ERROR: soft/hard workflow failures. Something that was supposed to happen did not.
WARN: recoverable problems and indicators that something is wrong with external flows/internal data validity
INFO: basic "server is doing a thing", endpoint logging, SQL query logging, job health checks, aggregate stats
DEBUG: specific "server is doing a thing", specific stats, reporting non-exceptional exceptions
TRACE: method entry/exit, data dumps


obviously there's no One True Logging Standard, different strategies are applicable for different log consumers, etc - just looking for some wider feedback than the two people that provided input in my office

If SQL query logging means for you what it means for me (logging every SQL query made by the application), then I would put it at DEBUG or TRACE, not INFO. But other than that... yeah, it looks reasonable enough.

Volguus
Mar 3, 2009

Portland Sucks posted:

Would you actually write out every SQL query made by the application to a log file? I've also been trying to work out a logging scheme and questioning what is even worth keeping. Is there any point in logging all of my SQL queries if I'm going to be generating hundreds of thousands of them per day? At what point is that just noise.

It is usually just noise; that's why it goes at the debug or trace level and not any higher. But while developing, it may be useful for debugging things, sometimes. Depending on what logging framework you use, you can disable those statements even in debug mode.
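A sketch of how per-logger levels make that work, using java.util.logging (the "app.sql" logger name is invented; a real app would target its persistence framework's SQL logger): the query logging sits at FINE (roughly DEBUG), so it's dropped unless that one logger is explicitly turned up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class SqlLogLevels {
    // Collects whatever actually gets published, so we can see the filtering.
    static final List<String> captured = new ArrayList<>();

    public static void main(String[] args) {
        Logger sqlLog = Logger.getLogger("app.sql");
        sqlLog.setUseParentHandlers(false);

        Handler capture = new Handler() {
            @Override public void publish(LogRecord r) {
                if (isLoggable(r)) captured.add(r.getMessage());
            }
            @Override public void flush() {}
            @Override public void close() {}
        };
        capture.setLevel(Level.ALL);
        sqlLog.addHandler(capture);

        // At INFO, per-query logging (FINE ~ DEBUG) is simply dropped.
        sqlLog.setLevel(Level.INFO);
        sqlLog.fine("SELECT * FROM orders WHERE id = 42");
        System.out.println("at INFO: " + captured.size()); // at INFO: 0

        // Flip the single logger to FINE only when you need the noise.
        sqlLog.setLevel(Level.FINE);
        sqlLog.fine("SELECT * FROM orders WHERE id = 42");
        System.out.println("at FINE: " + captured.size()); // at FINE: 1
    }
}
```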

Volguus
Mar 3, 2009

CPColin posted:

Yeah, I'm leaning toward just saying I won't be in the office during the last week of the year and waiting until my boss knows before I tell anybody else. It's just annoying that the next sprint technically starts today, with my boss out, so my announcement can't line up with it. I absolutely do want to give notice no later than tomorrow, because our insurance is in the middle of changing and I want to give HR plenty of time to not gently caress up whatever COBRA situation happens between now and whenever I become eligible at my new job (#SinglePayerNow).

Yea, say what you will about taxes and poo poo, but not having to worry about this kind of very essential crap when changing jobs is a nice thing to have.

Volguus
Mar 3, 2009

CPColin posted:

"DoD" is "Definition of Done" and on both the Scrum teams I've been on, it's been, "The work is done!" because nobody ever feels the need to push it farther. I wish the Scrum Guide recommended an example that teams should start from, although that's not their style. I kept trying to wheedle a stronger definition out of my teammates by using the question, "How do I know when I'm allowed to commit?" over and over. It didn't really work.

The Definition of Done for feature X is specified by the product owner and is written down in the backlog. If there is no PO and no backlog, meh... whatever goes, I guess.


Volguus
Mar 3, 2009

CPColin posted:

I think it was something about how the build process was set up on TFS. It might be doing something similar to when a Java project uses Maven to call an Ant script and suddenly there's extra layers of configurations and tool dependencies. Probably didn't help that all the StackOverflow answers pointed to a complicated upgrade process.


Those might be "acceptance criteria" you're talking about. The DoD is supposed to be decided by the Dev Team alone (i.e., not the PO), according to the Scrum Guide™, and is supposed to apply regardless of what's being worked on. It's supposed to get revised during the Sprint Retrospective, but neither team I've been on has ever concerned itself with doing so.

You're right, I forgot. The broader scope is usually defined by the team and usually promptly forgotten. The "acceptance criteria" are all that remain of a DoD after some period of time.
