Messyass
Dec 23, 2003

I think Kanban is ultimately 'better' if you've got an experienced team, but a good team should certainly be able to make Scrum work as well.

Messyass
Dec 23, 2003

Yeah, TFS has a 'capacity per day' field that works very well for that purpose.

In the end though, hiring talented people is really what it's about. Remember "individuals and interactions over processes and tools" :eng101:

Messyass
Dec 23, 2003

Sarcasmatron posted:

There seems to be some conflation of Agile and Scrum.

Scrum is one of many Agile methodologies, including, but not limited to:

RAD, Scrum, Kanban, UP/RUP, and XP/Paired -- I'm just listing ones I've worked with directly, there are a lot more, including all of the *DD methodologies.

From my experience, Agile is primarily about getting people who are not good at synchronous, real-time communication to get good at synchronous, real-time communication. Secondarily, it's about the collection of data to be able to determine a development team's velocity, which informs product roadmap and/or allocation of development time among projects within a project portfolio competing for development time and budget.

Scrum is training wheels for Kanban. The training wheels come off when you have a self-motivated team that communicates effectively in real-time, whether co-located or remotely, and when that team is able to measure their velocity and report it in as close to real-time as possible. Generally speaking, this takes longer than a couple of sprints, especially if you want the velocity number to be defensible to others: stakeholders, management, etc.

The way I see it:

Agile is not a methodology. Any software development process can be called agile (lower case) if it adheres to the values and principles of the manifesto for agile software development.

Scrum is a methodology as it prescribes a specific software development process that is (pretty) agile. I agree with the 'training wheels' description in the sense that it's a "learn the rules before you break them" thing. It's very useful for that.

Kanban does not really prescribe a process but is more of a way to continuously improve your existing process.

Other practices and techniques such as continuous delivery, DDD and (A)TDD combine well with agile processes.

For me the most important part of it all is being able to deliver working (and valuable) software early and often, getting customer feedback, and incorporating that feedback.

Messyass
Dec 23, 2003

Has anybody ever actually had the luxury of a product owner who:
- knows what she's talking about
- has mandate to make decisions
- is available day to day?

In my experience two out of three has been the maximum.

Messyass
Dec 23, 2003

Fluegel posted:

I was recently hired as a Requirements Engineer at the frontend branch of a software development company. I have about as much IT knowledge and experience as your average goon and a humanities degree with a professional background in media and journalism. I cannot write a line of code to save my life. My main task will be to work with the PO and produce good user stories. My team more or less uses Scrum and they aim to follow it more closely. I'm in for one hell of a ride.

What I'm asking is, I guess, do you guys have any thoughts regarding requirements engineering? Do you have tips on what to read up on regarding the position in general and the writing of user stories in particular?

I'm officially a Requirements Engineer as well. The position is in a bit of weird place nowadays because it's not like the role exists in Scrum or anything. Ideally you'd have the developers talking to the PO and other domain experts directly anyway.

On the other hand, writing good user stories is still a skill. And I'm not talking about the 'as a ... I want ... so that ...' part, but about the acceptance criteria / tests.

Since we're recommending books:
http://www.amazon.com/Specification-Example-Successful-Deliver-Software/dp/1617290084

Basically, don't be this guy:

https://www.youtube.com/watch?v=n3Sle_o1bcs

Messyass
Dec 23, 2003

Vulture Culture posted:

The entire point of estimation in Agile, and people don't stress this enough, is that it forces developers to think about the time their dumb scope-creep idea actually takes before they go ahead and just do it

Yeah, the idea is that if you don't know enough about it to estimate it, you don't know enough about it to build it.

Messyass
Dec 23, 2003

I've found some basic heuristics that seem to work well for us at the moment during sprint planning:
- Any story should be divided up into tasks that can easily be done by one developer in one day. The exact number of hours doesn't matter that much, but if you're not able to do this, you probably don't know enough about the story to finish it.
- A story containing more than 10 tasks is suspicious and should probably be split up into two stories so that they can be tested separately.

I'm sympathetic towards the #noestimates thing to the extent that, if you can get away with doing less estimation, by all means do it.

Messyass
Dec 23, 2003

Volguus posted:

Which will mean that that "high-risk" item that needs to be done won't ever get done. Who's the fool to commit to it?

You shouldn't commit to a high-risk item. You either (roughly) know how to do it or you don't. If you don't, first plan a spike or something.

Messyass
Dec 23, 2003

As long as your projects never take longer than two weeks you'll be fine.

Messyass
Dec 23, 2003

My Rhythmic Crotch posted:

This was for a project estimated to generate 1M records per month, hah.

That's not really an argument for using SQL. The manager's argument for not using SQL is obviously worse, but still.

Messyass
Dec 23, 2003

KoRMaK posted:

That raises further questions, such as what's the point of storing data you'll never see or use again? And in what case would you pull data back out?

It's not that you'll never use it, but that you'll rarely use it or query it like you would in a relational DB. Audit logs, for example. Or document storage.

Don't get me wrong, SQL would still almost always be my default choice as well, but it's not a bad idea to at least think about the particular problem you're solving. Even within one system you might use two different solutions side by side (as in CQRS). You might make a different choice if you have 1000 commands for every query than if you have 1000 queries for every command.
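To make that command/query split concrete, here's a minimal in-process sketch (all names hypothetical) of the shape CQRS suggests for something append-heavy like an audit log:

```python
from dataclasses import dataclass

# Write side: commands append to the log, optimized for cheap inserts.
@dataclass
class RecordAuditEntry:
    user: str
    action: str

class AuditLog:
    """Append-heavy store: think ~1000 commands for every query."""

    def __init__(self):
        self._entries = []  # stand-in for an append-only document store

    def handle(self, cmd: RecordAuditEntry) -> None:
        self._entries.append({"user": cmd.user, "action": cmd.action})

    # Read side: a separate, rarely-exercised query path. In a full CQRS
    # setup this would read from its own projection, possibly a different
    # kind of store than the write side uses.
    def entries_for(self, user: str) -> list:
        return [e for e in self._entries if e["user"] == user]
```

The point isn't the toy classes but the asymmetry: once writes and reads are separate paths, nothing forces them onto the same storage technology.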

Messyass
Dec 23, 2003

Khisanth Magus posted:

This will be accompanied by a promotion and raise as soon as the new head of IT finishes with his current project of consolidating and defining the different positions in the department, as all promotions have been put on hold until that has been done.

In other words, you're going to do a whole lot of work without being paid enough for it.

Messyass
Dec 23, 2003

CPColin posted:

This is what I always try to get at when we interview people: "What was the worst bug you've encountered and how did you go about fixing it?" and "What are some of the first places you look when you encounter code you don't recognize?"

I couldn't give you a straight answer to either of those questions, tbh. I'd ask what you mean by the 'worst bug': biggest effect? Hardest to find? Largest amount of stupidity that caused it?

And what do you mean by 'code I don't recognize'? Code doesn't appear out of thin air. Do you mean how I would approach someone else's code in general?

Messyass
Dec 23, 2003

Edison was a dick posted:

It's worse when it's the CEO who holds this opinion.

No, that's to be expected. I don't expect a CEO to really grasp why unit tests help, although in the long run she should be able to see the benefits.

If a developer doesn't understand why they are good, I'd question his professionalism.

Messyass
Dec 23, 2003

For me it's quite simple: if you want to be truly agile you need some degree of continuous delivery, which means you need continuous integration, which means you need proper automated testing on all levels of the test pyramid.

The only other options are that you're a god programmer who never makes mistakes or you have an endless supply of human testers at your disposal.

Messyass
Dec 23, 2003

Painstakingly refactoring such a mess and rebuilding it from scratch are both terrible ideas. The only question is which is the least terrible option.

Messyass
Dec 23, 2003

Ithaqua posted:

Sounds like he is actually awful at SQL. And everything else related to being a developer.

Yeah I guess you become amazing at "tuning" SQL when you don't have a sane design in the first place.

Messyass
Dec 23, 2003

Pollyanna posted:

I did end up getting feedback from one of my co-workers on my performance, and the response boiled down to this:

  • tl;dr: Make sure PRs are as polished as possible before submitting.
  • tl;dr: Test and handle every single possible situation.
  • tl;dr: Refactoring is something we leave to tickets in the Tech Debt epic.
  • tl;dr: Code to the lowest common denominator on the team.


That sounds pretty reasonable. Your PRs should be polished. If you want earlier feedback, just ask someone to pair program or be your rubber duck.

Whether you should test everything really depends on how critical the thing you're making is. The "permutations of feature A * permutations of feature B * permutations of feature C" part does worry me a bit, though. That smells of too much coupling between features.

The last point I definitely agree with. Boring code > clever code. That doesn't mean you should be avoiding useful language features just because someone isn't willing to learn them, but in general it's a good rule.

Messyass
Dec 23, 2003

Iverron posted:

I'm currently employed at a 20-25 person agency doing .NET work, but I listen to the Freelancer's Show podcast semi-regularly because the content interests me and I feel like a lot of the same principles apply to a small agency.

Anyway, one of the regular people on their panel is Jonathan Stark, who is a very vocal proponent of nixing hourly billing. Hourly billing and the processes that surround it make me want to strangle myself more than just about anything else with this job, so I'm obviously interested in the subject.

He can come off kinda shill-y on his website / social media, but he seems like a smart guy. There's a series of articles that describe his philosophy on his site; the best single one is probably this: https://expensiveproblem.com/trust-fractures-how-hourly-billing-hurts-software-projects, but the rest are ok too.

Any thoughts on this? Useful? Impractical? Just a guy trying to find a niche to sell a book or two in?

He states:

quote:

When I'm ranting about hourly billing, I'm specifically referring to hourly billing within the context of a project, which I define thusly:

A collaborative enterprise that is designed to achieve a particular aim.

That's cool for a freelancer who does small, manageable projects, but it doesn't scale. I'm convinced that when it comes to developing products of strategic importance to a company, the whole idea of a "project" is an illusion. There is no project, there is no "particular aim".

https://www.infoq.com/articles/kelly-beyond-projects

In my experience the amount of arguing with the client is way way worse when you try to fix the scope / budget / time, instead of just delivering value and building trust.

Messyass
Dec 23, 2003

baquerd posted:

No, the primary objective of grooming is to understand and discuss the stories to share knowledge and planning across the team. The secondary objective is to rough-in sizes for stories to give the PO (or whoever) a rough idea of the work required to clear the backlog and to force thinking about scope and relative effort. As a tertiary point, grooming gives you the opportunity to break stories apart (or join them together) to aid in the primary and secondary objectives, but at no point should the objective be to make stories as small as possible. Stories should fit into a sprint and be well understood, anyone sperging about sizes after that is missing the point.

To give a serious answer: I think it's always worthwhile to at least ask the question "can we do less and still deliver value". This may very well result in stories being split up, and often part 2 of the story will drop down the backlog because it turns out part 1 will do for now. It's a good way to avoid gold plating.

Messyass
Dec 23, 2003

As was posted on the previous page, it's called refinement now.

Messyass
Dec 23, 2003

necrobobsledder posted:

My take is that if you have the discipline to succeed at microservices, you probably had the discipline to properly modularize your code and operate it effectively.

Exactly. Just last week I advised a company to think about modularizing their application as if they were using microservices, but to not use microservices.

I'm sure that microservices can be very useful when you're dealing with the scaling challenges that Netflix or Spotify have to deal with, but for most other companies, the best thing you can take away from it all is that shared databases are bad.
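A hypothetical sketch of that "modularize as if they were microservices" advice: modules own their data and talk through narrow interfaces, the way services would, but everything still ships as one deployable:

```python
class BillingModule:
    """Owns its own data; no other module reaches into it directly."""

    def __init__(self):
        self._invoices = {}  # private, as a service's database would be

    def create_invoice(self, order_id: str, total_cents: int) -> None:
        self._invoices[order_id] = total_cents

    def invoice_total(self, order_id: str) -> int:
        return self._invoices[order_id]

class OrderModule:
    """Depends on billing only through its public interface."""

    def __init__(self, billing: BillingModule):
        self._billing = billing  # an explicit dependency, like a service call

    def place_order(self, order_id: str, total_cents: int) -> None:
        # ... order-handling logic would go here ...
        self._billing.create_invoice(order_id, total_cents)
```

If you ever do need to extract `BillingModule` into its own service, the boundary already exists; until then you keep one deployment and zero network hops.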

Messyass
Dec 23, 2003

I know Kanban doesn't have many rules per se, so it's hard to even do it wrong, but every implementation described on these last two pages sounds like an abomination.

The idea is to visualize the entire flow of a piece of functionality and to make sure there's a shared responsibility between biz/dev/ops to pull it through as quickly as possible.

If the problem is that not enough stories are ready, then that is exactly what Kanban will make visible (you do need an agreed-upon Definition of Ready). Kanban isn't going to solve the problem for you, though. Again, that's the shared responsibility of everyone involved in the process.

Messyass
Dec 23, 2003

Just stop estimating effort, unless it is to make an honest decision whether to deliver a certain feature or not. And in that case demand that at least the same amount of energy is devoted to estimating the value of the feature.

Messyass
Dec 23, 2003

So I'm reading through this RFP...
  • The client wants "an agile way of working"
  • The deadline is fixed (about a year from now)
  • The scope is fixed
  • The requirements are obviously vague as all hell
I'm expected to come up with a serious reply to this shitshow.

Messyass
Dec 23, 2003

The way I see it you really can't win with this sort of thing. You're always going to have a conflict over what exactly was agreed upon. Best case you come out ahead but you've got a pissed off customer, worst case you're bleeding money.

Messyass
Dec 23, 2003

Chaos is a ladder :unsmigghh:

Messyass
Dec 23, 2003

Munkeymon posted:

The one time I worked with a decent scrum master he was working with ~4 product teams and constantly meeting (or at least communicating) with stakeholders to better understand what they wanted so he could prioritize the backlog for the planning meetings. It worked great and I miss it.

That's cool, but the second part (meeting with stakeholders and prioritizing the backlog) is officially the job of the product owner.

Messyass
Dec 23, 2003

Pollyanna posted:

How do people manage to extract inputs and outputs from product owners when working on apps that have both a UI and an API? Right now our app has run into untold numbers of stupid bugs because of inputs and outputs being unclearly specified (what type? fractional percentages vs whole numbers???), conflated between API inputs and UI fields ("birthday" in the UI vs. "currentAge" for the API), and just plain misunderstandings (two variables named "health care", one being input and the other output).

I'm staring at a word document with like a paragraph each for 25 inputs and it still doesn't answer my questions. Getting the information I need is like pulling teeth, especially when the person you're working with doesn't understand the difference between UI fields and API inputs. I'm starting to get sick of it. How is stuff like this usually compiled and documented?

Specification by Example, Acceptance Test Driven Development, Behavior Driven Development, whatever you want to call it. Do it.

You will need to actually talk to people though.
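A minimal sketch of what that looks like in practice, with the rule and every number invented for illustration: each ambiguity from the quote above (whole-number percent or fraction? what happens at a threshold?) becomes a concrete example the team agrees on, written down as an executable check.

```python
# Examples agreed on in a specification workshop; each row settles one
# question that would otherwise live in a vague Word paragraph.
EXAMPLES = [
    # (age, rate_percent, expected_percent, agreed wording)
    (30, 4, 4, "under 50: the base rate applies"),
    (49, 4, 4, "49 is still under the threshold"),
    (50, 4, 6, "from 50 on: 1.5x the base rate"),
]

def contribution_percent(age: int, rate_percent: int) -> int:
    """Hypothetical rule: contribution as a whole-number percent of salary."""
    return rate_percent * 3 // 2 if age >= 50 else rate_percent

# The examples double as the acceptance tests.
for age, rate, expected, wording in EXAMPLES:
    assert contribution_percent(age, rate) == expected, wording
```

The table is the artifact you produce together with the PO; the assertion loop is what keeps it honest afterwards.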

Messyass
Dec 23, 2003

For me, the only essential part of being agile is being able to release working, valuable software often.

There are a million ways to achieve that. Fixed-length iterations or story points are by no means required, and I doubt they are even helpful.

Messyass fucked around with this message at 11:23 on Apr 6, 2017

Messyass
Dec 23, 2003

Keetron posted:

I work as a QA engineer writing front-end and REST & SOAP tests for our systems. All my tests are automated scripts using a FitNesse/Java platform.
Today I had someone ask me how I could write tests if the software wasn't done yet. I answered I would give my best interpretation of the very meager user story and would guess what would be in the JSON response to test on.
"But what if your tests fail?"
Then either the software or the test is at fault, the frontend dev, backend dev and me would discuss what would need to be in there based on our perception and after some 15 minutes we would reach consensus and would all modify our code to match that consensus. Sometimes we would involve the PO or BA, if needed.

I hardly write bugs anymore.

This is by far the best testing job I have ever had, in the testing world for 13 years now.

Seriously people, if you're not using acceptance test driven development / specification by example / behavior driven development in tyool 2017... what the gently caress.

Messyass
Dec 23, 2003

Rocko Bonaparte posted:

Coming soon to Safari Bookshelf: Agile Process Patterns. The cover will be a black and white drawing of Cthulhu and it will make you crazy just looking at it.

My mind is blown that this doesn't exist yet.

Messyass
Dec 23, 2003

Mniot posted:

I did a medium-sized REST API test suite that was basically like https://github.com/grantcurrey/cucumber-rest#full-example

[...]

The claim of BDD (I think) is that your PM or (god forbid) marketing team can write their requirements into the DSL. Or maybe they're just supposed to be able to read it? I haven't had a PM or marketer who would consider doing either so I donno.

That's a horrendous use of Gherkin. Of course the PM isn't going to read that. It's a purely technical test and doesn't explain the business value at all.

If you do BDD right, you work together with the PM/analyst before implementation to come up with a set of examples that expresses the value of the story in terms the business understands but that can also be implemented as tests. It can be a challenge to find this ubiquitous language, but that's exactly what will help you in the long term. As a bonus, a suite of well-written, business-readable tests provides you with living documentation.

A well designed system will have most BDD tests implemented at the unit level / domain model, and not at the API level or UI level. That also makes it much easier to write tests that are actually functional and aren't bogged down by technical details.

Messyass
Dec 23, 2003

Steve French posted:

Your last point is totally incongruous with the first two paragraphs in your post to me. The BDD tests should not be technical in nature, and the PMs should be involved from the get go? Ok, sure. Unit tests and domain model tests, not API or UI tests? How is that not the opposite of what you just said?

This is from "50 Quick Ideas To Improve Your Tests":

quote:

Decouple coverage from purpose

Because people mix up terminology from several currently popular processes and trends in the industry, many teams confuse the purpose of a test with its area of coverage. As a result, people often write tests that are slower than they need to be, more difficult to maintain, and often report failures at a much broader level than they need to.

For example, integration tests are often equated with end-to-end testing. In order to check if a service component is talking to the database layer correctly, teams often write monstrous end-to-end tests requiring a dedicated environment, executing workflows that involve many other components. But because such tests are very broad and slow, in order to keep execution time relatively short, teams can afford to exercise only a subset of various communication scenarios between the two components they are really interested in. Instead, it would be much more effective to check the integration of the two components by writing more focused tests. Such tests would directly exercise only the communication scenarios between the two interesting areas of the system, without the rest.

Another classic example of this confusion is equating unit tests with technical checks. This leads to business-oriented checks being executed at a much broader level than they need to be. For example, a team we worked with insisted on running transaction tax calculation tests through their user interface, although the entire tax calculation functionality was localised to a single unit of code. They were misled by thinking about unit tests as developer-oriented technical tests, and tax calculation clearly fell outside of that. Given that most of the risk for wrong tax calculations was in a single Java function, decoupling coverage (unit) from purpose (business test) enabled them to realise that a business-oriented unit test would do the job much better.

A third common way of confusing coverage and purpose is thinking that acceptance tests need to be executed at a service or API layer. This is mostly driven by a misunderstanding of Mike Cohn’s test automation pyramid. In 2009, Cohn wrote an article titled The Forgotten Layer of the Test Automation Pyramid, pointing out the distinction between user interface tests, service-level and unit tests. Search for ‘test automation pyramid’ on Google Images, and you’ll find plenty of examples where the middle tier is no longer about API-level tests, but about acceptance tests (the top and bottom are still GUI and unit). Some variants introduce additional levels, such as workflow tests, further confusing the whole picture.

To add insult to injury, many teams try to clearly separate unit tests from what they call ‘functional tests’ that need different tools. This makes teams avoid unit-testing tools for functional testing, instead introducing horrible monstrosities that run slowly, require record-and-replay test design and are generally automated with bespoke scripting languages that are quite primitive compared to any modern programming tool.

In other words:
What are business people most interested in? Business logic.
Where is business logic executed? In the domain layer (if your system is properly designed that is).
Where should it be tested? At the unit/component level.

This is where Cucumber/SpecFlow shines. If you have a well designed domain model these tests are quite easy to write and they can be run in seconds.

Actually testing that your API works, that your database connection functions correctly and that your UI doesn't break is mostly a technical concern. Of course business people have opinions about the UI but that doesn't necessarily mean you write automated tests for it.
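To illustrate the book quote's tax example: when the rule lives in one function, a business-facing check can run at the unit level instead of driving the whole UI. The rule and the numbers below are made up:

```python
def transaction_tax(amount_cents: int) -> int:
    """Hypothetical rule: 10% tax, waived for transactions under 10.00."""
    return 0 if amount_cents < 1000 else amount_cents // 10

# Business-oriented purpose, unit-level coverage: these run in microseconds,
# where the same checks through the UI would need a full test environment.
assert transaction_tax(999) == 0       # under the waiver threshold
assert transaction_tax(1000) == 100    # exactly at the threshold: taxed
assert transaction_tax(2500) == 250    # 10% of 25.00
```

Decoupling coverage from purpose, as the quote puts it: the test's audience is the business, but its scope is a single function.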

Messyass fucked around with this message at 14:20 on Apr 30, 2017

Messyass
Dec 23, 2003

Pollyanna posted:

Really, the biggest advantage from all this is:

  • Someone wrote something down, and
  • This stuff only exists in one place, and
  • Anyone can go to that place and read that stuff, and
  • All tickets, stories, and pieces of work are derived from that stuff, and
  • Any changes are reflected from that stuff.

Being able to claim all this solves a fuckload of problems and prevents a lot of confusion. I wish I had this. :cry:

That's basically it.

In an ideal situation it wouldn't be "someone wrote something down", but "people actually got together and agreed on what to write down".

And the last point in your list is especially important. Imagine having functional documentation that's always guaranteed to be up-to-date. Holy poo poo.

Messyass
Dec 23, 2003

My Rhythmic Crotch posted:

SAFe is so 2016. The new hotness is CrossAgile - you do Crossfit while coding. Supposed to give you mad productivity gains. The consultant fees are baller

Develop your abs while you develop your apps!

Messyass
Dec 23, 2003

Working in complete silence doesn't make sense to me. Software development is design. It's a collaborative, creative process. How "productive" a single developer is on his/her own is hardly a concern. They may be very efficiently making the wrong thing for all you know.

I'm not saying you should sit in the same open plan office as your customer service department, but I'm all for great team rooms with space to work together, white boards on all the walls, etc.

Messyass
Dec 23, 2003

necrobobsledder posted:

I'm not saying I agree with it, but the problem with that line of thinking is presuming that more situations like that occur than what we typically complain about in here like people constantly getting distracted. And there is strong academic research showing that the best programmers do not necessarily come from better schools, better tier companies, do better in their interviews, get better peer reviews, nor even have more years of experience than others - the top 3 factors were.... 1. a quiet room 2. uninterrupted time 3. means of isolation.

How do we define "the best programmers", though? Successful software development involves so much more than being good at programming.

Messyass
Dec 23, 2003

Skandranon posted:

I think it's closer that the theory of open offices is a stupid theory. Software does not get written collaboratively. Try having 15 people collaborate over what code to write, see how far you get.

What about pair programming? There's even a thing called mob programming, although I've never encountered it in practice.

Messyass
Dec 23, 2003

geeves posted:

Has anyone broken up a monolithic application into containerized microservices?

I assume someone here probably has. I've been trying to find lessons learned from going through this process, but most are blogs that are disguised as selling cloud or consulting services that just say "Yes! Do this!"

The goal of this is to save money and help continue to (as my sys admin coworker put it) "crawl out of the rear end crack of automation". We're 90% there, but deploying the entire app for 1 bug has, for a while, been extremely excessive. It may also force our product team to focus more tightly on what they are actually asking for given the new structure.

Aside from the technical challenge, even just logically cutting up a monolith into microservices is HARD. Do you have clear bounded contexts within the monolith already?

So make sure you're doing it for the right reasons. What exactly is the problem with "deploying the entire app"? Can you fix your deployment pipeline without needing microservices?
