Testing an advanced reference qualifier using ATF

Joe Stoner
Kilo Contributor

I'm creating some tests for change requests now that we are upgrading to Istanbul and I have not found a way to verify that an advanced reference qualifier is working. For example, we filter the assignment group to exclude certain groups, and would like to test which groups the user is allowed to choose from. I can try to set the value to be a group that is not allowed, but this would fail and only allow me to check one bad value, rather than loop through possible values to test. Is this possible with ATF?
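For context, the advanced reference qualifier in question is a script on the dictionary entry for `change_request.assignment_group` along these lines (the script include name, method, and field names below are made-up illustrations, not the actual implementation):

```javascript
// Dictionary entry: change_request.assignment_group
// Reference qual: javascript:new ChangeQualifiers().getAssignmentGroups()

// Hypothetical script include backing the qualifier
var ChangeQualifiers = Class.create();
ChangeQualifiers.prototype = {
    // Returns an encoded query that excludes certain groups
    getAssignmentGroups: function() {
        // 'u_excluded_from_change' is an example custom field
        return 'active=true^u_excluded_from_change=false';
    },
    type: 'ChangeQualifiers'
};
```

The goal is to verify in ATF which groups that encoded query actually allows, rather than testing a single bad value.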

I'm currently working on this for Change Management, but would like to be able to do this in other modules (incident, problem) in the future as well.


chrissquires1
Tera Guru

There's no good way I've found to accomplish this type of test using ATF. You can run your own check with a server-side script step, but it doesn't really exercise the UI. Assuming the reference qualifier calls a utility (some JavaScript function), you can call the same utility in the server-side step, perform your validation checks, and gs.log the results. That output goes to the ATF log, not the system log, so it gives you a decent visual check of your results. If needed, you can also pass the result to another step, but it's involved and usually not worth the trouble, because I haven't yet found a way to pass an object.
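As a sketch, a "Run Server Side Script" ATF step along these lines could exercise the qualifier logic directly. The script include name (`ChangeQualifiers`), its method, and the excluded group name are placeholders for whatever your own qualifier uses; the outer function wrapper is the standard ATF server-side step template:

```javascript
(function(outputs, steps, stepResult, assertEqual) {
    // Call the same utility the reference qualifier uses
    // (ChangeQualifiers.getAssignmentGroups is a hypothetical name)
    var qual = new ChangeQualifiers().getAssignmentGroups();

    // Run the encoded query and collect each group the qualifier allows
    var gr = new GlideRecord('sys_user_group');
    gr.addEncodedQuery(qual);
    gr.query();
    var allowed = [];
    while (gr.next())
        allowed.push(gr.getValue('name'));

    // Shows up in the ATF step log for a visual check
    gs.log('Groups allowed by qualifier: ' + allowed.join(', '));

    // Fail the step if a known-bad group slipped through
    if (allowed.indexOf('Excluded Group') > -1) {
        stepResult.setOutputMessage('Excluded Group should not be selectable');
        return false;
    }
    return true;
})(outputs, steps, stepResult, assertEqual);
```

This loops through every group the qualifier permits in one step, instead of setting one bad value at a time, though it still only validates the query logic, not the form's UI behavior.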


Joe Stoner
Kilo Contributor

Thanks, I had a feeling that would be the answer. Looks like ATF isn't quite ready to fully replace our current testing process, but at least it's a start.


You can always submit things you find that you need ATF to do as an enhancement request.


Keep in mind that Jakarta expands ATF to Service Catalog (and allows for scheduling of tests) and Kingston is anticipated to provide Service Portal testing capability.


That doesn't mean they can't throw us a bone on some of our other requests along the way.



I know that some people have billed ATF as their regression testing tool for releases, patches, and/or hotfixes. Personally, I don't see this as viable, given that one and only one test can run at a time, regardless of who requests it. Depending on how in-depth your testing is, you can spend upwards of 40 minutes just testing one aspect of Incident (escalation, for example). That time adds up quickly as you add other products/services. Not to mention that tests pretty much need a babysitter to prevent them from timing out. So you can have one person spend hours watching the client test runner, or a few people spending an hour each...

