jesus WEP
Oct 17, 2004


a pm who hates meetings - the only thing you need for tolerable standups

Valeyard
Mar 30, 2012


Grimey Drawer
i have two different standups in the morning, thats how agile i am, like a cat

kitten emergency
Jan 13, 2008

get meow this wack-ass crystal prison
we used to have one standup in the morning that lasted like 20 minutes and people complained

so now everyone has like 3 separate standups that last like an hour or so total because no one can really get going until 10:30 or so

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
our scrum leader is amazing and we're agile kanban or something. i dont really care what it is but basically our BAs make my job really easy so i try to pay them back by making their job easy. it's by far the best management situation ive encountered and I basically have total loyalty to my managers

Phobeste
Apr 9, 2006

never, like, count out Touchdown Tom, man
we do scrum and it's fine i guess? i'm currently the tech lead and de facto project owner for a team of three people (including me) while the fourth is on paternity leave. i feel like i'm bad at running things but also that little damage can be done as long as we are good at code quality and testing, which we're not, but are getting better at

also another team in my org has been continually awful since their tech lead quit a couple years ago, they've had rotating leadership and org charts and goals and it never seems to get better. now they're trying to work on an our-company specific spin of an electron.js program that's being primarily written by another part of the company that's largely web people and it's going awfully.

the real reason i brought this up was to complain that their problems meant everybody got scrum training, again, but this time it was hours and hours of scrum training on the day after somebody's going away party that was on st patricks day. i'm a goddamn hero for making it through

Soricidus
Oct 21, 2010
freedom-hating statist shill
lol if you do any work in the mornings, or indeed at all

Valeyard
Mar 30, 2012


Grimey Drawer

Soricidus posted:

lol if you do any work in the mornings, or indeed at all

yeah, by the time my standups are done its almost 11am, and then you dont really want to start much before lunch at 12 and then its 1pm and well gently caress almost time to go home

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?
agile seems like a decent idea that was ruined when business people got ahold of it

HappyHippo
Nov 19, 2003
Do you have an Air Miles Card?

Valeyard posted:

yeah, by the time my standups are done its almost 11am, and then you dont really want to start much before lunch at 12 and then its 1pm and well gently caress almost time to go home

every friday

Arcsech
Aug 5, 2008
our standups are surprisingly decent, we're usually done in 20 minutes even when theres a huge tangent.

but our end of sprint meetings, oh boy. 3 hours of sprint demos across 3 teams in different locations, then 5 hours of sprint planning. oh and this is on friday so get ready for another 5 hours of sprint planning on monday when we rehash everything we did friday

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
our problem is that our team owns like 5 things and we should really be split up but that's an org level problem. i think we handle it well given our resources

Brain Candy
May 18, 2006

triple sulk posted:

tdd is great in theory but your most useful tests are the ones written against edge case bugs you don't come up with until they show themselves

test your tests with mutation testing

Valeyard
Mar 30, 2012


Grimey Drawer
all roads lead to boosting coverage

Deep Dish Fuckfest
Sep 6, 2006

Advanced
Computer Touching


Toilet Rascal

Arcsech posted:

but our end of sprint meetings, oh boy. 3 hours of sprint demos across 3 teams in different locations, then 5 hours of sprint planning. oh and this is on friday so get ready for another 5 hours of sprint planning on monday when we rehash everything we did friday

at my last job, the first time we had a sprint planning meeting on a friday afternoon, it went on until 6:30 or so, which did a pretty good job of convincing everyone involved that either we should speed those up, or at the very least not hold them on a friday afternoon. i suppose it was rather productive in that way

Brain Candy
May 18, 2006

Valeyard posted:

all roads lead to boosting coverage

if you use mutation testing to make your metrics you end up with less coverage because you see how useless coverage is

ErIog
Jul 11, 2001

:nsacloud:

YeOldeButchere posted:

at my last job, the first time we had a sprint planning meeting on a friday afternoon, it went on until 6:30 or so, which did a pretty good job of convincing everyone involved that either we should speed those up, or at the very least not hold them on a friday afternoon. i suppose it was rather productive in that way

The answer is that all meetings should be held at 4:30pm on a Friday afternoon to convince people to stop wasting time in meetings.

ErIog
Jul 11, 2001

:nsacloud:

Brain Candy posted:

test your tests with mutation testing

Someone needs to adapt Trusting Trust to Testing Tests.

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD

hackbunny posted:

NextStep, OS X's pre-transition deadname

isn't the real joke something about how Obj-C doesn't have namespaces

~Coxy
Dec 9, 2003

R.I.P. Inter-OS Sass - b.2000AD d.2003AD

lord funk posted:

just spent 20 minutes working on a subclass and trying to figure out why it wasn't working

OH HO HO HO turns out it wasn't used anywhere at all in the app. just some rancid dead code hanging out in the project

install R#

and then you can lol in a code review about someone who spent 2 days refactoring methods that aren't used anywhere

MononcQc
May 29, 2007

triple sulk posted:

tdd is great in theory but your most useful tests are the ones written against edge case bugs you don't come up with until they show themselves

[quote="http://research.microsoft.com/apps/mobile/news.aspx?post=/en-us/news/features/nagappan-100609.aspx"]
The study and its results were published in a paper entitled Realizing quality improvement through test driven development: results and experiences of four industrial teams, by Nagappan and research colleagues E. Michael Maximilien of the IBM Almaden Research Center; Thirumalesh Bhat, principal software-development lead at Microsoft; and Laurie Williams of North Carolina State University. What the research team found was that the TDD teams produced code that was 60 to 90 percent better in terms of defect density than non-TDD teams. They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer.

“Over a development cycle of 12 months, 35 percent is another four months, which is huge,” Nagappan says. “However, the tradeoff is that you reduce post-release maintenance costs significantly, since code quality is so much better. Again, these are decisions that managers have to make—where should they take the hit? But now, they actually have quantified data for making those decisions.”
[/quote]

Progressive JPEG
Feb 19, 2003


The entire study was performed against one 9 person team in IBM and three ~7 person teams in Microsoft. All of the teams were adding onto a legacy system that was already in production. The time increase was spitballed by just asking the managers how much time they thought the additional testing added. The study doesn't specify how that question was asked, so who knows whether they hosed up the whole study by framing the question poorly.

Even by their own admission: "the results should not be taken to be valid in the general sense, only in the context in which the case study was conducted." So lol who knows what any of this means, we just figured we'd waste everyone's time

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS
coveralls failed my last pull request because i decreased coverage by 0.009% because i deleted a bunch of code lol

fritz
Jul 26, 2003

Progressive JPEG posted:

So lol who knows what any of this means, we just figured we'd waste everyone's time


isnt that basically the case with 'all' software development studies

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

Blinkz0rz posted:

coveralls failed my last pull request because i decreased coverage by 0.009% because i deleted a bunch of code lol

yeah coveralls sucks

tef
May 30, 2004

-> some l-system crap ->
software metrics: more lines of code means more bad code

i cannot recall the study i found but a large amount of code complexity metrics didn't really make much more noise than a rough count of lines

i would say that using static analysis where available would be time better spent. A lot of metrics create busywork

JimboMaloi
Oct 10, 2007

Progressive JPEG posted:

The entire study was performed against one 9 person team in IBM and three ~7 person teams in Microsoft. All of the teams were adding onto a legacy system that was already in production. The time increase was spitballed by just asking the managers how much time they thought the additional testing added. The study doesn't specify how that question was asked, so who knows whether they hosed up the whole study by framing the question poorly.

Even by their own admission: "the results should not be taken to be valid in the general sense, only in the context in which the case study was conducted." So lol who knows what any of this means, we just figured we'd waste everyone's time

i think the point is more the measured decrease in defects than the subjectively rated increase in development time. sulk was saying tdd doesn't capture your weird edge cases, which is true but doesnt mean theres no value to it.


uncurable mlady posted:

so how do I learn how to do tdd or w/e

how is test formed

is there something I can read on it because on one level I get it but on the other level I just don't get it at all

like, let's say I have a function that gets some data from a rest api. is my test just making a blob of json that conforms to the expected output then writing a test that takes that data and plops out the expected output?

the only really effective way ive seen is pairing with someone who knows it, but theres a couple of good books. TDD By Example is the original kent beck book which shows a style that works really well if youre doing nice functional code with minimal side effects. Growing Object-Oriented Software, Guided by Tests is also really good, but is very OO focused and very mockist which is a dealbreaker for some folks.
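
to make the rest api question concrete: a rough sketch in python of the kind of test being asked about, not from either book. parse_user and the field names are made up; the idea is to keep the http call thin and test the parsing against a canned json blob.

[code]
import json
import unittest


def parse_user(payload: dict) -> dict:
    """Turn a raw API payload into the shape the rest of the app expects."""
    return {
        "id": payload["id"],
        "name": payload["name"].strip(),
        "active": payload.get("status") == "active",
    }


class ParseUserTest(unittest.TestCase):
    def test_active_user(self):
        blob = json.loads('{"id": 7, "name": " sam ", "status": "active"}')
        self.assertEqual(parse_user(blob),
                         {"id": 7, "name": "sam", "active": True})

    def test_missing_status_means_inactive(self):
        blob = {"id": 8, "name": "pat"}
        self.assertFalse(parse_user(blob)["active"])


if __name__ == "__main__":
    unittest.main()
[/code]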

tef
May 30, 2004

-> some l-system crap ->
like the only thing metrics wise that i remember is that "number of local variables is fine, add more if you need em, but if you have too many function arguments you have probably missed one"

which i have lived through time and time again with minor api changes requiring a configuration detail to be passed through and somehow i end up with a 20 arg constructor
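
the usual escape hatch, sketched in python: group the ever-growing argument list into one config object so the next "minor api change" is a new field instead of another positional arg. ClientConfig and its fields here are made up.

[code]
from dataclasses import dataclass


@dataclass
class ClientConfig:
    host: str
    port: int = 443
    timeout_s: float = 30.0
    retries: int = 3
    verify_tls: bool = True   # the next configuration detail becomes a field, not arg #21


class ApiClient:
    def __init__(self, config: ClientConfig):
        self.config = config


client = ApiClient(ClientConfig(host="api.example.com", timeout_s=5.0))
[/code]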

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

tef posted:

software metrics: more lines of code means more bad code

i cannot recall the study i found but a large amount of code complexity metrics didn't really make much more noise than a rough count of lines

i would say that using static analysis where available would be time better spent. A lot of metrics create busywork

your article went around my team and myself and another guy loved it and are actively trying to reduce the size and complexity of a bunch of our tools

MrMoo
Sep 14, 2000

tef posted:

software metrics: more lines of code means more bad code

i cannot recall the study i found but a large amount of code complexity metrics didn't really make much more noise than a rough count of lines

i would say that using static analysis where available would be time better spent. A lot of metrics create busywork

Greg Wilson covered a lot of it in this talk: https://vimeo.com/9270320

brap
Aug 23, 2004

Grimey Drawer
TDD is stupid for a lot of things but it's at least simple to actually do

step 1. write an empty function with some name that consumes some arguments and returns some type
step 2. write unit tests for the function that fail
step 3. implement the function

the trouble is that it assumes all software dev problems are "I know what the input looks like, I know what the output looks like and I just need to figure out what to do in the middle"

one of the better things about TDD probably is that it forces you toward more reasonable designs, specifically designs that make you isolate nondeterministic parts (90% of the time, this refers to your POS CRUD web app going to the DB) from deterministic parts (that elusive stuff called "logic" we are always trying to find a place for)
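
a toy python illustration of those three steps and of keeping the deterministic part separate; apply_discount and its rules are made up.

[code]
import unittest


def apply_discount(total: float, is_member: bool) -> float:
    # step 1 starts as an empty stub (e.g. raise NotImplementedError);
    # step 3 fills in the body until the tests below pass.
    # the DB/API fetch that produced `total` stays outside this pure function.
    if is_member and total >= 100:
        return round(total * 0.9, 2)
    return total


class ApplyDiscountTest(unittest.TestCase):
    # step 2: written first, failing against the empty stub
    def test_members_get_ten_percent_off_big_orders(self):
        self.assertEqual(apply_discount(200.0, is_member=True), 180.0)

    def test_small_orders_pay_full_price(self):
        self.assertEqual(apply_discount(50.0, is_member=True), 50.0)


if __name__ == "__main__":
    unittest.main()
[/code]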

Brain Candy
May 18, 2006

tef posted:

software metrics: more lines of code means more bad code

i cannot recall the study i found but a large amount of code complexity metrics didn't really make much more noise than a rough count of lines

i would say that using static analysis where available would be time better spent. A lot of metrics create busywork

this is exactly why i'm happy about mutation testing

the tooling mutates your code, by flipping conditionals, etc., and then runs your test suite
if the tests fail, congrats, you've got good tests

if they still pass, it can guide you to which tests to delete
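
a hand-rolled python sketch of the idea (real tools like mutmut or PIT do the rewriting for you); is_adult and the checks are made up. the point is that only the boundary check notices the flipped conditional.

[code]
def is_adult(age: int) -> bool:
    return age >= 18


def is_adult_mutant(age: int) -> bool:
    return age > 18           # the mutation: >= flipped to >


def run_suite(fn) -> bool:
    """Return True if every assertion in the suite holds for this implementation."""
    checks = [
        fn(30) is True,       # far from the boundary: passes for both versions
        fn(5) is False,       # also passes for both versions
        fn(18) is True,       # boundary case: this is what kills the mutant
    ]
    return all(checks)


if __name__ == "__main__":
    print("original survives the suite:", run_suite(is_adult))         # True
    print("mutant survives the suite:  ", run_suite(is_adult_mutant))  # False, i.e. good tests
[/code]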

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope

MrMoo posted:

Greg Wilson covered a lot of it in this talk: https://vimeo.com/9270320

goddamn

Wheany
Mar 17, 2006

Spinyahahahahahahahahahahahaha!

Doctor Rope
has there been any studies if people like "whoops, something went wrong, lol" -type errors more than "database error, error code 0xdeadbeef, please contact your system administrator" -type errors?

because the former aggravate the gently caress out of me. but then i'm a programmer, maybe actual people like that

gonadic io
Feb 16, 2011

>>=

Wheany posted:

has there been any studies if people like "whoops, something went wrong, lol" -type errors more than "database error, error code 0xdeadbeef, please contact your system administrator" -type errors?

because the former aggravate the gently caress out of me. but then i'm a programmer, maybe actual people like that

when writing an internal tool consumed by non-programmers i tend to write a little "please contact the tech team" and then include the exception message, as they tend to be an actual sentence while still being useful to me
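
a small python sketch of that pattern; load_report and the wording are made up.

[code]
def load_report(path: str) -> str:
    with open(path) as f:
        return f.read()


def run(path: str) -> None:
    try:
        print(load_report(path))
    except Exception as exc:
        # readable for the handful of users, still diagnostic for whoever gets pinged
        print("Something went wrong, please contact the tech team and "
              f"include this: {type(exc).__name__}: {exc}")


if __name__ == "__main__":
    run("definitely_missing.csv")
[/code]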

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder
yeah i think it really depends on the domain.

gonadic io
Feb 16, 2011

>>=

MALE SHOEGAZE posted:

yeah i think it really depends on the domain.

it helps that absolutely any problem that somebody has with the service (there's only like 3 users lol startups) i can personally deal with. there's no expectation that somebody could fix their own errors based on the message (which is when it becomes really important to include useful information while still being accessible).

The Leck
Feb 27, 2001

tef posted:

software metrics: more lines of code means more bad code

i cannot recall the study i found but a large amount of code complexity metrics didn't really make much more noise than a rough count of lines

i would say that using static analysis where available would be time better spent. A lot of metrics create busywork
this is one i read on metrics usefulness (may have come from MononcQc): https://www.mn.uio.no/ifi/personer/vit/dagsj/sjoberg.anda.mockus.esem.2012.pdf. like people said before, pretty small sample size for this sort of thing, but i found it interesting in the context of a software metrics class i took.

quote:

Conclusion: Apart from size, surrogate maintainability measures may not reflect future maintenance effort. Surrogates need to be evaluated in the contexts for which they will be used. While traditional metrics are used to identify problematic areas in the code, the improvements of the worst areas may, inadvertently, lead to more problems for the entire system. Our results suggest that local improvements should be accompanied by an evaluation at the system level.

tef
May 30, 2004

-> some l-system crap ->

Blinkz0rz posted:

your article went around my team and myself and another guy loved it and are actively trying to reduce the size and complexity of a bunch of our tools

christ

tef
May 30, 2004

-> some l-system crap ->

yep although one of the books he cites defends the 10x talk

(actually it's a real good talk)

tef
May 30, 2004

-> some l-system crap ->

Wheany posted:

has there been any studies if people like "whoops, something went wrong, lol" -type errors more than "database error, error code 0xdeadbeef, please contact your system administrator" -type errors?

because the former aggravate the gently caress out of me. but then i'm a programmer, maybe actual people like that

in before mononcqc http://www.hpl.hp.com/techreports/tandem/TR-85.7.pdf
