Any best practices for survey/assessment results in the context of a multi-task requested item?
07-25-2018 07:35 AM
Hello friends,
I've been looking for content on this subject and am not really finding anything, so I thought I would get the discussion going.
My company currently uses surveys/assessments on incidents and requests. For incidents, it's straightforward enough. However, I would be curious to discuss how some of you are handling survey/assessment results in the context of a multi-task request.
For example, the classic complex request is onboarding a new employee, a situation that routinely involves creating and assigning many SCTASKs under that one request. What if the administrative assistant who filled out the form is unhappy because the whole thing took too long? Or perhaps she had a bad experience with one (and only one) of the support stakeholders involved. It seems unfair for a bad survey result at the RITM level to be attributed to every group and individual stakeholder involved in fulfilling the related SCTASKs; however, I'm really not seeing how this could be handled more precisely, short of a level of constant human interaction and analysis that no one wants to be doing.
The complex data structure of requests often seems to raise stakes that are not present for other, single-level ticket types, and I am struggling with how to give request survey results meaning.
Any thoughts from the community about how this information is being used in your neck of the woods? How do YOU produce a meaningful satisfaction rate for an assignment group that handles requests?
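For what it's worth, the only mechanical attribution I can picture is crediting a survey score to a group only when that group fulfilled every SCTASK under the RITM, and leaving everything else unattributed. A rough sketch of that roll-up is below; the file and column names are just placeholders for whatever you would export from your survey and task tables, not actual ServiceNow fields:

```python
import csv
from collections import defaultdict

# Hypothetical exports -- file and column names are placeholders, not real ServiceNow fields.
# survey_results.csv   : ritm, score              (one survey score per requested item)
# task_assignments.csv : ritm, assignment_group   (one row per SCTASK)

groups_per_ritm = defaultdict(set)
with open("task_assignments.csv", newline="") as f:
    for row in csv.DictReader(f):
        groups_per_ritm[row["ritm"]].add(row["assignment_group"])

scores_per_group = defaultdict(list)
unattributed = []
with open("survey_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        groups = groups_per_ritm.get(row["ritm"], set())
        if len(groups) == 1:
            # Credit the score only when a single group fulfilled every task on the RITM.
            scores_per_group[next(iter(groups))].append(float(row["score"]))
        else:
            # Multiple groups (or no tasks found): no fair way to attribute the score.
            unattributed.append(row["ritm"])

for group, scores in sorted(scores_per_group.items()):
    print(f"{group}: average {sum(scores) / len(scores):.2f} from {len(scores)} surveys")
print(f"Unattributed surveys: {len(unattributed)}")
```

The obvious blind spot is that every multi-group RITM lands in the unattributed bucket, and for onboarding-style requests that's most of them.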
Looking forward to hearing from you guys!
Labels: Request Management, Service Catalog
07-25-2018 08:07 AM
Hi Pierre
Good to start the conversation.
Surveys are the darlings of management consultants everywhere. Companies are often wedded to the idea that they can use a survey to "communicate" with their consumers.
In my experience it almost never works out that way. Response rates are routinely poor, generally averaging below 15%. I have heard of higher figures but have never seen them personally.
A lot of users see a survey as just another task standing between them and their work, so it becomes a chore.
People play with trigger conditions to reduce the frequency or target. Others work hard on reducing button clicks and the UX.
However, more people tend to use a survey to register dissatisfaction than to give praise.
Consider the airport security setting. On your way out you will see a little pillar with some coloured faces on it: green smiley and so on. Couldn't be simpler, right?
Watch folk go by. On a good day the odd one may hit green, but most just drift past. If they've had a bad experience, they make a beeline for it and mash the red button repeatedly.
So we've stripped the survey down, made it as low-impact as possible, and we're already getting a bias.
Now it's good to note that the customer is annoyed and we can use that, but consider the multi-task setting.
Does the passenger care whether check-in was good but queuing for the scanner was bad, or that the baggage claim was faulty, or that the electronic scanners wouldn't read passports and they had to get fingerprinted to enter the country?
The company asks the passenger "How was your flight?" and may get a mood stamp off them.
The company asks "How was this, how was that, how was this other thing?" and chances are they're not going to bother.
We currently go for mood-stamp surveys on requests; any that come back particularly bad get a follow-up from the service manager to understand in more detail what the issue was.
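Mechanically the triage is nothing clever; it's roughly the sketch below, where the threshold, file name, and columns are all made up for illustration rather than lifted from our instance:

```python
import csv
from collections import Counter

# Anything at or below this on a 1-5 mood stamp gets a follow-up call -- value chosen for illustration.
FOLLOW_UP_THRESHOLD = 2

# mood_stamps.csv : request, score, closed_month   (placeholder export, not real field names)
follow_ups_per_month = Counter()
with open("mood_stamps.csv", newline="") as f:
    for row in csv.DictReader(f):
        if int(row["score"]) <= FOLLOW_UP_THRESHOLD:
            # Particularly bad result: queue it for the service manager to dig into.
            follow_ups_per_month[row["closed_month"]] += 1

for month, count in sorted(follow_ups_per_month.items()):
    print(f"{month}: {count} follow-ups for the service manager")
```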
This generates quite a bit of work, and there's an argument that certain demographics, millennials for example, tend to over-complain in the expectation that they will get something out of it.
It'd be interesting to hear what other folk have experienced.
07-25-2018 08:48 AM
Hey Scott,
Great points. You're not only confirming what I was thinking, but drawing some striking real-world parallels with the general problem of rating the customer experience and turning it into actionable information. Beyond that, if you dig into incidents, which are supposed to be simpler to deal with, it's entirely possible that, whether because of the complexity of the impact or plain administrative laziness, an incident was passed along to different groups anyway, and you're back to square one with the same issue.
I wasn't expecting to just work the awareness angle, but it's certainly a conversation worth having. I'd still be interested in finding out whether other stakeholders out there are trying to deal with these issues, but the argument can certainly be made that this information is more complex than it seems. I'm always a little uncomfortable when asked to produce reports where I know there is a sizeable margin for error, but where people are willing to live with it because it's the only information they have. That feels like a road that leads to rose-coloured glasses most of the time, making the information somewhat useless, since you basically end up drawing the picture you were looking for anyway and now have "numbers" to back it up.
If anybody else is willing to contribute, please feel free. If you've tried to address these types of issues with any specific configurations or practices, I'd love to discuss them.
Thanks!
01-22-2019 08:00 AM
Hey Pierre,
My company ran into the same situation as yours: multiple SCTASKs handled by many support groups under one RITM. When just one of the SCTASKs provides a bad experience, it isn't fair for the survey result to count against the entire RITM.
I would like to know whether you have found any solution to address this situation, or have any ideas.
Thanks
01-25-2019 06:01 AM
Hey there! Can't say we've made much progress, unfortunately. This multi-level structure for requests, for all its logic, comes with some challenges. What would compound the issue further is having the survey at the REQ level, which could include several cart items. We're still wrapping our heads around the exact meaning and repercussions of these levels, trying to keep the focus on what is either on the client's side or on the support stakeholder's side. To throw an additional spanner in the works, we've introduced the possibility of adding tasks to RITM workflows manually, which can muddy the waters further. However, we recognize that this stems from the low maturity of some request types and workflows, and we will aim to fix those aspects.
The best representation you can get is for the following types of situations (see the sketch after this list):
- Requests or incidents coming into the service desk by phone and handled at first contact. Any survey results are probably targeted at one group and one analyst (unless the user just doesn't like the telephone menu system!).
- Requests with only one task, or incidents submitted through self-service and handled at the first level. That makes the survey representative of the service provided by the one and only group that handled it.
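To size up how much of your survey data actually falls into those clean buckets, something like the sketch below works; again, the export and its columns are stand-ins, not real field names:

```python
import csv
from collections import defaultdict

# Placeholder export: one row per task belonging to a surveyed request.
# surveyed_tasks.csv : request, assignment_group
tasks_per_request = defaultdict(list)
with open("surveyed_tasks.csv", newline="") as f:
    for row in csv.DictReader(f):
        tasks_per_request[row["request"]].append(row["assignment_group"])

total = len(tasks_per_request)
clean = sum(1 for groups in tasks_per_request.values() if len(set(groups)) == 1)

print(f"Cleanly attributable (one group did everything): {clean}")
print(f"Grey area (multiple groups involved):            {total - clean}")
if total:
    print(f"Share you can report on with confidence: {clean / total:.0%}")
```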
Other situations are greyer areas. It's still good to know in general how satisfied people are with certain types of requests, even if fixing the source of complaints can take a lot of data mining. We're trialling slightly longer surveys for more complex requests, but the longer a survey is, the less chance there is that clients will fill it out. I guess it all comes down to how big a sample you are looking to get...
Feel free to reply back; I'm always interested in exchanging with peers!