Steve French
Sep 8, 2003

Your last point is totally incongruous with the first two paragraphs in your post to me. The BDD tests should not be technical in nature, and the PMs should be involved from the get go? Ok, sure. Unit tests and domain model tests, not API or UI tests? How is that not the opposite of what you just said?


Volguus
Mar 3, 2009
So BDD tests are like integration tests but expressed in a different language? An English-like but not quite a programming language? I can't say I really understand them from that github page.

FireWhizzle
Apr 2, 2009

a neckbeard elemental
Grimey Drawer

Volguus posted:

So BDD tests are like integration tests but expressed in a different language? An English-like but not quite a programming language? I can't say I really understand them from that github page.

P much... it's intended to test from a user's perspective:

Given I am in this initial state
When I press the big red button
Then the 640 million dollar website should not go down

the "Given ______ " statement would map to a function that processes it, as would the when and then statements. Many repeatable actions like clicking on poo poo, filling in text, moving the mouse, hitting keys are usually already bundled into whatever BDD software you are using, and said BDD software also has an API for you to write your own behavior processors.

e: BDD in general is a very broad term much like "Agile" and "TDD" - so it's completely random which company will implement what

FireWhizzle fucked around with this message at 16:15 on Apr 28, 2017

Gounads
Mar 13, 2013

Where am I?
How did I get here?
As far as I can tell, the dream is having non-developers write automation tests, presumably so you can pay them less, which turns into a nightmare of developers having to implement all the automation tests and then fix the BDD syntax as well.

Volguus
Mar 3, 2009

Gounads posted:

As far as I can tell, the dream is having non-developers write automation tests, presumably so you can pay them less, which turns into a nightmare of developers having to implement all the automation tests and then fix the BDD syntax as well.

If the tests are not written in a programming language, won't that make them quite limited in what they can do? And therefore, won't the tests be quite useless, since they can only do very simple things? Testing that my million-dollar website doesn't go down is easy. But shouldn't I also test that the response received (when I press the red button) is the correct one? That the object/data returned by that API call actually makes sense and is the correct data?

Gounads
Mar 13, 2013

Where am I?
How did I get here?

Volguus posted:

If the tests are not written in a programming language, won't that make them quite limited in what they can do? And therefore, won't the tests be quite useless, since they can only do very simple things? Testing that my million-dollar website doesn't go down is easy. But shouldn't I also test that the response received (when I press the red button) is the correct one? That the object/data returned by that API call actually makes sense and is the correct data?

That's the problem: the tests do get written in a real language, by the developer. In the end, there is code written for "go to initial state", "press big button" and "verify site isn't down". The test-runner knows that when it encounters "Given I am in this initial state" it should run that "go to initial state" function.

The hopeful payoff is that those functions that do have to be written can be re-used in lots of tests.
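
As I understand it, one parameterised step definition can then back any number of scenario lines, roughly like this (Cucumber-style Java glue, all names made up):

code:
import io.cucumber.java.en.When;

public class ButtonSteps {

    // 'When I press the "big red" button' and 'When I press the "checkout" button'
    // in different scenarios both land on this one method; the quoted text
    // arrives as the argument.
    @When("I press the {string} button")
    public void iPressTheButton(String label) {
        // look the button up by its label and click it with whatever driver you use
        System.out.println("clicking: " + label);
    }
}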

It's more complicated than that, and there are lots of helpers to make it easier, and there are a bunch of pieces of tests that don't need to be explicitly implemented since they are simple enough for the framework to understand.

Disclaimer: I've only read about this stuff.

KoRMaK
Jul 31, 2012



BDD is behavior driven dev or business driven dev?

darthbob88
Oct 13, 2011

YOSPOS

KoRMaK posted:

BDD is behavior driven dev or business driven dev?
Behavior, IIRC. It's supposed to test the program's behavior and API from a user-facing perspective. When I click the button, I should get bacon, and when there is no bacon, I should be shown an error message.

Bongo Bill
Jan 17, 2012

The term is also sometimes used to describe the type of syntax used in some testing frameworks' DSLs, where the function names and signatures mimic that style of writing requirements.

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

Basically you rewrite the application's behaviour in actions and expected results, based on the same non-specs the devs are using.

I can do a big effort post on test automation if you guys want. It is my core job so there is some bias but only the good kind.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Keetron posted:

Basically you rewrite the application's behaviour in actions and expected results, based on the same non-specs the devs are using.

I can do a big effort post on test automation if you guys want. It is my core job so there is some bias but only the good kind.

I'm sure at least one person would benefit from reading it, please do.

Gildiss
Aug 24, 2010

Grimey Drawer
I would like to read it as well but maybe the development hell thread isn't the best venue? Do post a link though because I only exist in hell.

Count Thrashula
Jun 1, 2003

Death is nothing compared to vindication.
Buglord

Keetron posted:

Basically you rewrite the application's behaviour in actions and expected results, based on the same non-specs the devs are using.

I can do a big effort post on test automation if you guys want. It is my core job so there is some bias but only the good kind.

That'd be kinda cool

Space Kablooey
May 6, 2009


Keetron posted:

Basically you rewrite the application's behaviour in actions and expected results, based on the same non-specs the devs are using.

I can do a big effort post on test automation if you guys want. It is my core job so there is some bias but only the good kind.

nthing the testing effort post.

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

Huh, that is more response than I anticipated. First I can plug my blog a bit more: http://thenewcareeradventures.blogspot.com, where this post will also go up, because that is what we do: write re-usable stuff and use it all over the place. Everything there and here is my opinion, based on 13+ years of testing, test management and test automation; there might be some stuff that works for you, or you might consider it horse poo poo, but it works for me and the clients I work for. I use Java lingo as that is all I know.

On the how and what of test automation, by just some goon.

In all the below I will assume you know WHY we automate tests. In case you do not know why: it is so that the current state of the code is tested at the appropriate level and the tests go green to prove there is no regression. If you add a feature and tests go red, that had better be in line with your feature. We automate so we can check super fast after a build, say 10 minutes after build completion, that no regression has occurred. When this is implemented well and everyone is more or less on board, it works really, really well. If it is implemented badly and many oppose it, it is expensive, painful and pointless, like so many things in life that fit that description. Anyway, to testing.

First we establish that there are testing levels and each level has its own special purpose. To illustrate this, we use the test pyramid, so you can see that the tests at the bottom are many and the tests at the top are the 1%. We will start at the bottom and work up.

Mine will look a lot like this:

[image: the test automation pyramid]

Stolen from Uncle Bob, who pops up a lot if you read about test automation. He wrote FitNesse and moved on to Cucumber later.

Unit tests at the bottom are written by the devs. They might not enjoy it, but it is there to automatically ensure, at each build, that the behaviour of the code covered by unit tests did not change, or at least is working as expected. Unit testing should cover at least one method and at most one class. There is a lot of talk on the internet about how to unit test, and enough built-in frameworks to last you a lifetime. If you do not unit test, your code is bad and you should feel bad. These tests are automated and built into the code.
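
To be clear about the scope I mean, a unit test is nothing bigger than this (JUnit 5, invented class; the class under test is pasted in here only to keep the sketch self-contained):

code:
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class PriceCalculatorTest {

    @Test
    void addsTwentyOnePercentVat() {
        PriceCalculator calculator = new PriceCalculator();
        // 100.00 ex VAT should become 121.00 inc VAT
        assertEquals(121.00, calculator.withVat(100.00), 0.001);
    }
}

// the unit under test, normally in its own file
class PriceCalculator {
    double withVat(double exVat) {
        return exVat * 1.21;
    }
}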

Component testing is testing application components, such as the front-end, back-end or whatever you have, using stubs and drivers (or mocks, call them what you want) in an automated environment, using an automation framework that supports all this. It is crucial to know that the individual parts work before you move to the next level. These tests are ideally part of the codebase and run every build, developed in parallel with the application code. If you are lucky, you work some place where the QAs in your scrum team write these tests.
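
A back-end component test then mocks out the layer below it, something in this direction (Mockito-style sketch, every name made up, collaborators included only so it stands on its own):

code:
import org.junit.jupiter.api.Test;

import java.util.List;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class OrderServiceComponentTest {

    @Test
    void returnsOnlyOpenOrdersForCustomer() {
        // stub the repository so no real database is involved
        OrderRepository repository = mock(OrderRepository.class);
        when(repository.findByCustomer("ACME"))
                .thenReturn(List.of(new Order("A-1", true), new Order("A-2", false)));

        OrderService service = new OrderService(repository);

        assertEquals(1, service.openOrdersFor("ACME").size());
    }
}

// minimal collaborators for the sketch
interface OrderRepository {
    List<Order> findByCustomer(String customer);
}

record Order(String id, boolean open) {}

class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    List<Order> openOrdersFor(String customer) {
        return repository.findByCustomer(customer).stream().filter(Order::open).toList();
    }
}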

System testing is testing the various components together as a system, using a front-end driver (most likely the Selenium library) and backend XML stubs or a pre-defined database. In a good place this is containerised using Docker and automatically run every build. It is also possible this is deployed to a System Test Environment, but that sucks as you cannot ST your feature branches, which you really really want. Automate the poo poo out of this. Keep in mind this is not integrated with anything that is outside the scope of your codebase. DO NOT INTEGRATE this test you rear end in a top hat. We have the next level for that. Because we mock all interfacing, we skip bi-lateral testing and move straight to the next level; first, a quick sketch below of what such a Selenium-driven system test can look like.
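
Something like this, with the URL and element ids invented and the whole stack (app plus stubbed interfaces) assumed to be running locally, e.g. in Docker:

code:
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CheckoutSystemTest {

    @Test
    void checkoutShowsConfirmationPage() {
        WebDriver driver = new ChromeDriver();
        try {
            // the application and its stubbed interfaces are deployed locally for this run
            driver.get("http://localhost:8080/checkout");
            driver.findElement(By.id("big-red-button")).click();
            assertEquals("Order confirmed", driver.findElement(By.id("status")).getText());
        } finally {
            driver.quit();
        }
    }
}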

System Integration Testing. Deployed on a dedicated and fully integrated ChainTest environment (integrated across all systems, I mean), so that I can check that if I run something in the neighbouring system, it triggers the correct action in my system. You can automate this, but you will run into data issues a lot, unless of course you automate the data generation in your scripts. Some people will say it is too much effort to automate all this; I call it my job, and you would be surprised how fast something can be set up by experienced and motivated people. As an aside, I loathe manual testing, so I would rather automate anyway.

Chain Testing is where the client comes in, buys product X, and all invoices are set, emails are sent and everything is done in all the appropriate systems. Call it End to End if you want to, I don't care. Key to this is that you are no longer testing all functions; you are making sure that it is somehow possible to do the bulk of the work without the systems making GBS threads themselves and calling each other names. Don't automate this, it is not worth the effort as you only want to do some checks before the next step. Even better, make some junior QA do it "to get to know the systems".

Finally we end up at User Acceptance Testing, where the users go over the systems to see if they like them and can do their job (or their future job) properly. They should not find obvious bugs, only process stuff. If a user complains about the data, strangle him with his tie or explain that the data migration test is another day. If they keep complaining, ask their manager for a replacement as this one is obviously broken.

There you go: at least now we agree on what we call what, or you will know what I am talking about when I say System Test.

Next up, test tooling.

Basically, there is tons of tooling and most of it is poo poo, but you will still use it as it is less lovely than the alternative. Any test tool that claims that end users can use it to automate their tests is lying. At most, it allows you to create a test that can be explained to (and understood by), and possibly expanded by, the BA or PO. Worst case, they will try and gently caress up your trunk by force pushing some experimental feature branch to prod. So there we are: the best thing would be a tool that creates readable tests that hide a lot of the technical stuff, where the test input/expected output is easily modified and expanded, so you can add test cases if more come up (when talking to the BA) or change them when you are pointed at flaws in your reasoning. Keep in mind too that all these frameworks are indeed nice visual skins on top of an application you build to test the application that your colleagues build.

I mentioned FitNesse, which uses a tabular view in a wiki to display your test cases while calling a piece of Java code that executes the actual script. Easy to expand, horrible to set up. Super easy integration with Jenkins, mediocre integration with IntelliJ. Very shiny; business loves it. If you build your tests using the wiki editor you are an idiot. Test scripts are Java only.
There is Cucumber, which uses Gherkin (that horrible Given...When...Then) to run predefined test step definitions. It looks like magic until you find out that it regex matches against the full string after Given to run a certain test. I felt cheated. Smooth integration with build tools and IntelliJ, and the default option for a lot of projects these days. Kind of OK to show business as it is not very shiny. Developers like it because it is not shiny. Cucumber supports scripts in Java, JavaScript and no doubt others.
There is Protractor too, but I never worked with it.

In all cases, you build your test scripts fairly generically in Java or JavaScript or whatnot, and you can use all the possibilities of these languages. For the front end you use the Selenium library, for the backend whatever you need.
With this, you can see that a test engineer is building an application, using a framework that is a few (or a lot of) levels simpler than the System Under Test (SUT), but an application still. It should be designed to build and run integrated with your CI/CD tool of choice, so go with a framework that does this out of the box. If a test engineer proposes something that is completely home-brew, fire him/her, as there is plenty of choice in the market for something that others can use too.
So as you can tell by now, there are Testers, who mainly test manually, and there are Test Engineers, who build a system so they never have to do manual testing again. If you move into QA, take the second path: manual execution sucks, it is slow, and it is stealing from the future for reasons you can find in my blog.

Finally there is the TAO of Test. Tests should be:
Transparent, so one can see what it tests and how.
Atomic, meaning a test should only test one thing or at least have only one purpose.
Overdraagbaar, but drat that is Dutch, so can someone suggest a term that means transferable starting with an O? The guy coming after you should be able to work with your tests.

Reading this back, I know I only touched on the basics; there are many really interesting points I missed, such as dynamic stubbing, random data generation, plugins for frameworks, reporting and automatic feature building. On that last point: have your CI tool check git for new or changed feature branches and start a build/test cycle on each commit using Docker containers. Really neat to have real-time monitoring of your quality.

Space Kablooey
May 6, 2009


Thanks for posting. My only issue with the post is that the pyramid doesn't match the levels you described; I think you would be better off making an image that corresponds to your definitions, even if it is simple.

I have a question: if I have an application whose only interface is a RESTful API, and I have a suite of tests that asserts that calling a certain route, say http://localhost/api/v1/things/, returns me a list of things, which level does that fall under? Component testing? What would a System test (or above) look like in this scenario?

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!

HardDiskD posted:

Thanks for posting. My only issue with the post is that the pyramid doesn't match the levels you described; I think you would be better off making an image that corresponds to your definitions, even if it is simple.

I have a question: if I have an application whose only interface is a RESTful API, and I have a suite of tests that asserts that calling a certain route, say http://localhost/api/v1/things/, returns me a list of things, which level does that fall under? Component testing? What would a System test (or above) look like in this scenario?

I would say that if you're actually hosting the API on a webserver and it's backed by a real DB (even if the data is fake), that's a system test. If you're calling the API's backing functions rather than going over a network, and mocking the database, that's a component test.

It's all just names, though; more important is to be aware of what the purpose of the test is and to excise/mock anything that's not part of that.
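
To make that concrete for the /things/ example: the system-test flavour really goes over HTTP against the deployed app, something like this (Java 11 HttpClient, only checking the status code and the rough shape of the body):

code:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class ThingsApiSystemTest {

    @Test
    void listingThingsReturnsAJsonArray() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost/api/v1/things/")).GET().build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // the app is really deployed and really talks to its (test) database
        assertEquals(200, response.statusCode());
        assertTrue(response.body().trim().startsWith("["));
    }
}
The component-test flavour of the same check would skip the network entirely and call whatever handler function backs that route, with the database swapped for a mock.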

Steve French
Sep 8, 2003

Cucumber is pretty stupid, IMO, with the disclaimer that the only time I used it was on a team with only developers so it made especially no sense. Someone writes the test as plain text, and then someone else has to write the logic of the actual tests anyway, and then also an extra layer of regular expressions to map the text to the tests. I would dispute that "developers love it," on that basis.

The silliest thing is that it requires that each line start with one of those 5 or so words, but they have absolutely no impact on the test whatsoever; it's a totally arbitrary limitation on what text you can write.

Eggnogium
Jun 1, 2010

Never give an inch! Hnnnghhhhhh!
I would say too much time is spent defining the specific levels of the pyramid and which tests belong to which level in different applications. Application architecture and release requirements vary enough that you should have your own levels and definitions that meet your product's needs. More important is the concept of the pyramid: that you have a range of test types that generally go from fast, small-scope, and frequently run to slow, large-scope, and less frequently run. Decide on the levels, and the mocking policies for each, that are right for your project without worrying too much about how nicely they slot into a specific pyramid.

The advantage of this approach is that the long-running tests, which grant high confidence in a release, don't hinder the developer's iterative workflow, while there are also smaller tests that give high-enough confidence and can run frequently to give fast developer feedback. Some projects are small enough in scope that you may only need one layer of tests.

There is also a tendency for tests to get more unreliable as they go higher up the pyramid and encompass more interactions ("I'll just sleep 10 seconds, that should give the queue time to process my request"). Unreliability at all layers, even the top, should be ruthlessly purged and the application designed to support deterministic tests at every level. Nothing is more of a counter-productive time sink than a suite of tests that block you from shipping good code.
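
One cheap way to kill the "sleep 10 seconds" kind of flakiness is to poll for the condition with a hard deadline instead; here is a plain-Java sketch (the queue.isProcessed call in the usage comment is made up):

code:
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

final class WaitFor {

    // Polls the condition until it holds or the deadline passes, and fails with a clear
    // message instead of silently hoping that a fixed sleep was long enough.
    static void condition(String description, BooleanSupplier condition, Duration timeout)
            throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (!condition.getAsBoolean()) {
            if (Instant.now().isAfter(deadline)) {
                throw new AssertionError("Timed out waiting for: " + description);
            }
            Thread.sleep(100); // short poll interval, not a guess at total processing time
        }
    }
}

// usage inside a test:
//   WaitFor.condition("queue processed my request",
//           () -> queue.isProcessed(requestId), Duration.ofSeconds(10));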

CPColin
Sep 9, 2003

Big ol' smile.

Keetron posted:

Finally there is the TAO of Test. Tests should be:
Transparent, so one can see what it tests and how.
Atomic, meaning a test should only test one thing or at least have only one purpose.
Overdraagbaar, but drat that is Dutch, so can someone suggest a term that means transferable starting with an O? The guy coming after you should be able to work with your tests.

You could use "Transferable, Atomic, Obvious," maybe.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Steve French posted:

Cucumber is pretty stupid, IMO, with the disclaimer that the only time I used it was on a team with only developers so it made especially no sense. Someone writes the test as plain text, and then someone else has to write the logic of the actual tests anyway, and then also an extra layer of regular expressions to map the text to the tests. I would dispute that "developers love it," on that basis.

The silliest thing is that it requires that each line start with one of those 5 or so words, but they have absolutely no impact on the test whatsoever; it's a totally arbitrary limitation on what text you can write.

I have never seen an example of how Cucumber could be used that wasn't blatantly insane and useless which always makes me assume that I'm just misunderstanding it, but then I read yet another blog post that assumes that the actual implementation of the tests is just magicked into existence...

Steve French
Sep 8, 2003

Plorkyeran posted:

I have never seen an example of how Cucumber could be used that wasn't blatantly insane and useless which always makes me assume that I'm just misunderstanding it, but then I read yet another blog post that assumes that the actual implementation of the tests is just magicked into existence...

Yeah, I have trouble picturing it being worthwhile in any circumstance, but in particular it blew my mind to see it in use at a startup of like 6 people, all developers. If there is ever any value in it, it's certainly not to be found there.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
The sole case I've seen it be useful in (and I think I've talked about it before) is at a small business where there was ONE person who understood the specific business rules for a billing system. It was full of special cases, exceptions, and exceptions to exceptions, and it was impossible for a developer to test it properly because all we'd see was numbers in, numbers out.

Now, would it have been just as easy for the business user to give us a spreadsheet full of test cases with input/output? Yeah, probably. But this was a business user who was notorious for blaming others when she messed up. So having it explicit that "you wrote this test case, the test case is passing, it's not our fault" was actually important.

It's a very, very narrow set of legitimate use cases, but they are out there.

Steve French
Sep 8, 2003

So in that case it seems like the real value is in the fact that the stakeholder actually formally wrote poo poo down and both parties can be held to that. The fact that those words get passed through a regex and run as tests is at some level immaterial, at least to the stakeholder, right? Like the same thing from a development standpoint could be accomplished by writing tests to verify that same behavior in whatever way suits the devs, and then if there is a conflict it can be resolved by actually verifying the behavior. After all, the tests could be broken: why should the guy assume the tests are implemented correctly if he doesn't think the rest of it is?

Rudest Buddhist
May 26, 2005

You only lose what you cling to, bitch.
Fun Shoe
I thought the idea behind tests was so that I can brazenly make huge changes to the codebase without fear of breaking anything.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Gounads posted:

As far as I can tell, the dream is having non-developers write automation tests, presumably so you can pay them less, which turns into a nightmare of developers having to implement all the automation tests and then fix the BDD syntax as well.
The industry trend toward making developers responsible for almost everything touching a computer without giving them enough authority to reject tasks from others is what I point at when I ask why they deserve so much pay. Modern developers need to take on what used to be QA and SDET tasks of writing test suites, part-time PMs doing all the Agile BS overhead of non-work, part architecture with picking and choosing the kind of hell they want to live with in a year, security (sanitizing inputs, avoiding bad crypto practices, etc.), Internet protocols and standards, and operational knowledge to make their piles of crap installable and maintainable in production. These all turn quickly into more frequent meetings as those other roles go to dedicated people, so even less time gets spent on... development.

Steve French posted:

Yeah, I have trouble picturing it being worthwhile in any circumstance, but in particular it blew my mind to see it in use at a startup of like 6 people, all developers. If there is ever any value in it, it's certainly not to be found there.
I think technically it works ok for its scope where I am because we have a ratio of something like 4:1 QA / test to developers. At present the Cucumber feature files work out ok with our glacial pace of development, so test coverage is actually going up over time. However, for most sane places that ship more features than service technical debt and don't need a massive army of QA to perform manual regression testing, I'd say BDD is just one step higher than your integration tests and also inherits the downsides of TDD such as no guarantee of the quality of the architecture that passes those tests over time. As such, it's probably a Bad Idea unless you're already pretty good at TDD.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Rudest Buddhist posted:

I thought the idea behind tests was so that I can brazenly make huge changes to the codebase without fear of breaking anything.

Broadly yes; they're talking about a form of TDD set up to stop business people from yelling at devs for implementing what they were told to do though. And TDD is supposed to be like that but writing the tests first. In practice, it either works out well or the code is written to the test and it's a giant mess.

smackfu
Jun 7, 2004

On the topic of bad tests: mutation testing is a neat idea that I've played around with a bit. It randomly changes the logic in your code and sees if your tests actually fail. The only problem is that it's slow, so I didn't add it to our build, which inevitably means we never actually run it.
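
For anyone who hasn't seen it: the tool makes small edits to your code, e.g. flipping a >= into a >, and then checks that at least one test fails. A toy example of the kind of boundary test that "kills" such a mutant (not any particular tool's output):

code:
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class AgeCheck {
    static boolean isAdult(int age) {
        return age >= 18; // a mutation tool might flip this to 'age > 18'
    }
}

class AgeCheckTest {

    @Test
    void eighteenCountsAsAdult() {
        // this boundary case fails against the 'age > 18' mutant, so the mutant is killed
        assertTrue(AgeCheck.isAdult(18));
    }

    @Test
    void seventeenDoesNot() {
        assertFalse(AgeCheck.isAdult(17));
    }
}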

Brain Candy
May 18, 2006

smackfu posted:

On the topic of bad tests: mutation testing is a neat idea that I've played around with a bit. It randomly changes the logic in your code and sees if your tests actually fail. The only problem is that it's slow, so I didn't add it to our build, which inevitably means we never actually run it.

How slow is 'slow'? What I've done is set it up to run every night, so you get a replacement for code coverage, which means it's there to point you at the places that you need to think about.

Pollyanna
Mar 5, 2005

Milk's on them.


Rudest Buddhist posted:

I thought the idea behind tests was so that I can brazenly make huge changes to the codebase without fear of breaking anything.

There are a few advantages to testing. One is that you make sure the code does what you expect, and another is that you can change things and make sure it still does what you expect. And if it suddenly doesn't, you have to decide whether your requirements changed or your changes violated the requirements - which is more useful to have than you think.

smackfu
Jun 7, 2004

Brain Candy posted:

How slow is 'slow'? What I've done is set it up to run every night, so you get a replacement for code coverage, which means it's there to point you at the places that you need to think about.

Yeah a nightly run seems like a good idea. Slow means "add ten minutes to the build" which I will hear about from every other dev.

Messyass
Dec 23, 2003

Steve French posted:

Your last point is totally incongruous with the first two paragraphs in your post to me. The BDD tests should not be technical in nature, and the PMs should be involved from the get go? Ok, sure. Unit tests and domain model tests, not API or UI tests? How is that not the opposite of what you just said?

This is from "50 Quick Ideas To Improve Your Tests":

quote:

Decouple coverage from purpose

Because people mix up terminology from several currently popular processes and trends in the industry, many teams confuse the purpose of a test with its area of coverage. As a result, people often write tests that are slower than they need to be, more difficult to maintain, and often report failures at a much broader level than they need to.

For example, integration tests are often equated with end-to-end testing. In order to check if a service component is talking to the database layer correctly, teams often write monstrous end-to-end tests requiring a dedicated environment, executing workflows that involve many other components. But because such tests are very broad and slow, in order to keep execution time relatively short, teams can afford to exercise only a subset of various communication scenarios between the two components they are really interested in. Instead, it would be much more effective to check the integration of the two components by writing more focused tests. Such tests would directly exercise only the communication scenarios between the two interesting areas of the system, without the rest.

Another classic example of this confusion is equating unit tests with technical checks. This leads to business-oriented checks being executed at a much broader level than they need to be. For example, a team we worked with insisted on running transaction tax calculation tests through their user interface, although the entire tax calculation functionality was localised to a single unit of code. They were misled by thinking about unit tests as developer-oriented technical tests, and tax calculation clearly fell outside of that. Given that most of the risk for wrong tax calculations was in a single Java function, decoupling coverage (unit) from purpose (business test) enabled them to realise that a business-oriented unit test would do the job much better.

A third common way of confusing coverage and purpose is thinking that acceptance tests need to be executed at a service or API layer. This is mostly driven by a misunderstanding of Mike Cohn’s test automation pyramid. In 2009, Cohn wrote an article titled The Forgotten Layer of the Test Automation Pyramid, pointing out the distinction between user interface tests, service-level and unit tests. Search for ‘test automation pyramid’ on Google Images, and you’ll find plenty of examples where the middle tier is no longer about API-level tests, but about acceptance tests (the top and bottom are still GUI and unit). Some variants introduce additional levels, such as workflow tests, further confusing the whole picture.

To add insult to injury, many teams try to clearly separate unit tests from what they call ‘functional tests’ that need different tools. This makes teams avoid unit-testing tools for functional testing, instead introducing horrible monstrosities that run slowly, require record-and-replay test design and are generally automated with bespoke scripting languages that are quite primitive compared to any modern programming tool.

In other words:
What are business people most interested in? Business logic.
Where is business logic executed? In the domain layer (if your system is properly designed that is).
Where should it be tested? At the unit/component level.

This is where Cucumber/Specflow shines. If you have a well designed domain model these tests are quite easy to write and they can be run in seconds.
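
Concretely, the step definitions then just poke the domain object directly, with no browser or database anywhere near them (hypothetical TaxCalculator, cucumber-java bindings assumed; the toy domain class is included only to keep the sketch self-contained):

code:
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class TaxCalculationSteps {

    private TaxCalculator calculator;
    private double result;

    @Given("the tax rate is {double} percent")
    public void theTaxRateIs(double rate) {
        calculator = new TaxCalculator(rate);
    }

    @When("I calculate tax on a transaction of {double}")
    public void iCalculateTaxOn(double amount) {
        result = calculator.taxOn(amount);
    }

    @Then("the tax due is {double}")
    public void theTaxDueIs(double expected) {
        assertEquals(expected, result, 0.001);
    }
}

// toy domain object so the sketch stands on its own
class TaxCalculator {
    private final double ratePercent;

    TaxCalculator(double ratePercent) {
        this.ratePercent = ratePercent;
    }

    double taxOn(double amount) {
        return amount * ratePercent / 100.0;
    }
}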

Actually testing that your API works, that your database connection functions correctly and that your UI doesn't break is mostly a technical concern. Of course business people have opinions about the UI but that doesn't necessarily mean you write automated tests for it.

Messyass fucked around with this message at 14:20 on Apr 30, 2017

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
A lot of that goes out the window in some respects when an application fails operationally like "Service discovery failed to transition to a new version of the code consistently in the newest software update and connection draining didn't complete successfully due to incorrect timeout settings in an older configuration management release on old nodes" which is very, very similar to what happened to Knight Capital shortly before they lost $440MM in 30 minutes. They didn't exactly get to do a post mortem to explain how they'd prevent that failure in the future - they straight up went bankrupt.

So my point is that most test methods that people talk about suck at real-world systems even if implemented perfectly, which is what all the SREs and devops folks are trying to hammer home with better monitoring, instrumentation, etc. Business people without a strong grounding in operational systems (read: most of them, because that's "blue collar" work in perception, and the other half are operational architectural astronauts) oftentimes fail to see how much their business is impacted when they treat operations as a cost center (which, honestly, it must be to some extent, sure, but there's a minimum level of competence that should exist for companies at different levels of risk, right?). For every Knight Capital story, though, there are 100 companies with similar practices in their systems that never really make a change, because as long as money keeps rolling in, few seem to really care.

GutBomb
Jun 15, 2005

Dude?
Over the past 18 years I've been mostly a web server administrator doing light development with PHP, classic ASP, and ColdFusion (I know), mostly implementing IIS and Apache servers, content management systems, web analytics systems, etc. Over the past year I sort of shifted my focus to front-end web development, and it's like a completely different world. Instead of dreading going to work, and then once I'm there looking at YouTube and SA all day waiting for a fire to put out, I'm now excited to go to work again and learn new poo poo and solve problems and figure out how to do poo poo. The tools I'm using now are all different from what I was using previously and way more robust; things like Docker are awesome, and IntelliJ and AngularJS, gulp, browsersync... I'm in love with this poo poo again. And I was about to quit doing computers for a living and try anything else because I felt like my soul was being eaten. Now I find myself working on poo poo at home over the weekend because I'm enjoying what I do again. Anyway, that's my story. Writing code owns.

amotea
Mar 23, 2008
Grimey Drawer
All aboard the AGILE RELEASE TRAIN motherfuckers, we're leaving the ARCHITECTURAL RUNWAY!

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

I hoped to never lay my eyes on that abomination again; thank you for loving up my day.
Actually, I wanted to ask about Specification By Example and you people's experience with it, but I will wait for the shitstorm that is SAFE to blow over.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
I'm cleared for takeoff on the architectural runway!

Pollyanna
Mar 5, 2005

Milk's on them.


amotea posted:

All aboard the AGILE RELEASE TRAIN motherfuckers, we're leaving the ARCHITECTURAL RUNWAY!



This is some overengineered bullshit right here.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Steve French posted:

So in that case it seems like the real value is in the fact that the stakeholder actually formally wrote poo poo down and both parties can be held to that. The fact that those words get passed through a regex and run as tests is at some level immaterial, at least to the stakeholder, right? Like the same thing from a development standpoint could be accomplished by writing tests to verify that same behavior in whatever way suits the devs, and then if there is a conflict it can be resolved by actually verifying the behavior. After all, the tests could be broken: why should the guy assume the tests are implemented correctly if he doesn't think the rest of it is?
Someone formally wrote poo poo down and both parties can be held to that. Regardless of who writes them, the whole point of Cucumber-style declarative BDD is that non-developer stakeholders understand how to read them. This can be used to iteratively agree on a spec without a big up-front formal document. If that isn't a requirement, definitely avoid Cucumber.


Pollyanna
Mar 5, 2005

Milk's on them.


Really, the biggest advantage from all this is:

  • Someone wrote something down, and
  • This stuff only exists in one place, and
  • Anyone can go to that place and read that stuff, and
  • All tickets, stories, and pieces of work are derived from that stuff, and
  • Any changes are reflected from that stuff.

Being able to claim all this solves a fuckload of problems and prevents a lot of confusion. I wish I had this. :cry:
