this post was submitted on 29 Jan 2024
6 points (87.5% liked)

Programming

Hey everyone, I'm part of a company that's been trying to modernize. Our team has switched to Agile, moved some storage to the cloud, and is slowly trying to add automated tests to its various legacy applications. I know that normally automated tests would just be done with the user story as part of the definition of done, and going forward I want to do that with future user stories. But I still want a way to keep track of the large amount of work involved in adding automated tests to cover the huge parts of the code already written. It will be a fairly large development effort by itself, done by at least 2-3 devs/juniors, with me kind of leading this effort but pretty new at it myself lol.

We're using Azure DevOps, which organizes work from big to small as Epics, Features, User Stories, and Tasks, and we're trying to decide how to frame and track the work within that hierarchy. From what I've read, user stories aren't the best way to illustrate this since it isn't user-driven functionality, but they're the best tracking tool we've got. With that context, here are the ideas so far.

  1. One person suggested a single Automated Test Feature, stuck in the Global epic we have for miscellaneous structure and framework work. Under it there would be one user story per module covering all of that module's automated tests, a task for each individual class and page to test within the module, and the individual tests for each page/class written into the task description. I think they don't want the backlog diluted with too many of these automated test stories.

  2. Another person suggested creating an Epic for all the automated-test user stories created up to now, then a Feature for each module, then a user story for each class/page to be tested, then a task for each individual test the developer has to write. That person was me; it felt more organized, and you can see which dev is working on which piece. But I can see how it balloons the backlog with a ton more user stories for this effort, although at least it's all in one Epic folder that's easy to ignore.

  3. Our QA wanted just one user story for all the automated tests, to really prevent clutter, but was also okay with the first idea when I kind of pushed back on it. All user stories are usually tested by them, and this is kind of superfluous stuff mostly for devs at the moment rather than application functionality, so I can see why they want it as small and out of the way in the backlog as possible.

  4. Another person suggested creating a user story for each test, but instead of putting them all in one place, placing each one in the Feature category where the originating story it's testing went. I get the logic of this too, but I was afraid it would be confusing to track with everything scattered around, and with user- and system-driven functionality mixed in with tests. But then again, we also organize things by sprint, so maybe this wouldn't be as confusing as I first thought.

Anyway, if anyone has any suggestions or a better way to organize it than these, let me know!

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

Thanks! Haha ya, it may be a little misguided, but I think we were allowed to do this partly as busy work to give us something to do between releases. We're kind of in a transition period: it's something to do while my higher-ups negotiate contracts for further work and stakeholders and customers prioritize items for the next release. Admittedly, these transition periods kinda scare me, since you never know when you'll lose work or something. So even if they think it's busy work, for me it's shoring up my resume with tech and leadership experience I should already have, and that the rest of the industry will be looking for, just in case the worst happens lol. I think I've been working at this place too long and got complacent, but I've been more interested in looking around and catching up on what I've been missing as the company modernizes and we simultaneously approach the release of this version of their software.

(Funny enough, I was initially hired to work on automated testing since I had done some at my previous company, but I immediately got placed doing other dev work to catch up on our schedule. Now it's been years and I'm trying to remember how this all works lol.)

Right now we're mostly just doing happy path testing tbh. But that's a good point that we should look into our tools to see how they report code coverage and everything. That might be some reading up I have to do. I think it's a combination of MSTest (or whatever comes with Visual Studio), some Telerik JustMock and Test Studio tools our company already had licenses for, and Selenium.

You're right that a story per test is probably a bit too much Jira. I was more thinking of a story per class, but even that's probably a bit much with how big this legacy application of theirs is now. I don't want to overwhelm us all in backlog management paperwork, so now I think I'm leaning towards zooming out a bit and doing a story per module.

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago)

My area is Java, so I'm not as familiar with .NET (or whatever you're using), but look into mutation testing and see if there is a tool for it. It helps identify all the various code paths: for example, if you have a line like if ("foo".equals(object.value())) { ... }, a mutation tool will check that you have a test case where object.value() is "foo" and one where it isn't, to make sure both paths are actually tested.
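A minimal sketch of the idea in Java (the class and names here are made up just to illustrate the branch above):

```java
// Hypothetical class illustrating the branch above. A mutation tool
// such as pitest would mutate the condition (negate it, swap the
// return values) and check that at least one test fails per mutant.
public class Labeler {
    private final String value;

    public Labeler(String value) {
        this.value = value;
    }

    public String label() {
        // Needs one test where value is "foo" and one where it isn't,
        // otherwise a mutant of this line survives.
        if ("foo".equals(value)) {
            return "matched";
        }
        return "unmatched";
    }
}
```

Two tests, one asserting label() returns "matched" for "foo" and one asserting "unmatched" for anything else, would kill the obvious mutants here, whereas a coverage report only demands that each line ran.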

In Java the tool I've used for this is pitest, but I don't see that it supports the MS ecosystem. This is way, way better than plain code-coverage percentages, because I can cover a lot of lines with assert service.processObject(obj) != null without actually testing the code much at all.
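To make that concrete, here's a hedged sketch (the service and method names are invented for illustration) of a weak assertion that racks up line coverage without pinning any behavior, next to one that would actually kill mutants:

```java
// Hypothetical example; names are made up for illustration.
public class CoverageDemo {
    // Applies a 10% discount in integer cents.
    public static Integer discountedCents(int cents) {
        return cents - cents / 10;
    }

    public static void main(String[] args) {
        // Weak: gives full line coverage, but a mutant like
        // "return cents;" would still pass it.
        if (discountedCents(1000) == null) throw new AssertionError();
        // Strong: pins the actual value, so that mutant dies.
        if (discountedCents(1000) != 900) throw new AssertionError();
        System.out.println("both checks passed");
    }
}
```

A coverage tool scores both checks identically; a mutation tool is what tells them apart.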