Applying New Risk Assessment Methodology (RAM) to Existing Entities
3 weeks ago - last edited 3 weeks ago
Hi All,
We are implementing Phase 2 of our GRC setup and have a question around risk assessment methodologies for existing entities.
In Phase 1, we configured a Risk Assessment Methodology (RAM) and it was applied across all entities.
Now in Phase 2, we have introduced a new RAM, and the client wants this new methodology to be applicable to all existing entities, similar to how it worked in Phase 1.
What I have done so far: moved the old RAM to Draft state and updated the Primary Risk Assessment Methodology on the entity class to the new RAM.
However, this does not seem sufficient. I can still see existing Risk Assessment Scopes that are linked to the older RAM.
My questions:
1. What is the recommended approach to apply a new RAM to existing entities?
2. Is there an out-of-the-box way to update existing Risk Assessment Scopes, or is a script/fix script required?
3. If scripting is required, are there any best practices or risks to be aware of?
Any guidance or experience on handling RAM changes for existing data would be greatly appreciated.
Thanks in advance!
2 weeks ago
This is a common pain point when evolving your IRM setup, and the short answer is: there is no OOTB way to bulk-migrate existing Risk Assessment Scopes to a new RAM. What you've done so far (updating the Entity Class and moving the old RAM to Draft) only governs net-new scopes going forward. Existing scope records carry a hard reference to the RAM that was active when they were created, and the platform does not retroactively update them.
To directly answer your three questions:
**1. Recommended approach to apply a new RAM to existing entities**
You have two paths depending on your situation:
- Clean break: Close or retire the existing scopes tied to the old RAM, document any historical assessment data you need to preserve, then create new scopes under the new RAM. This is the cleanest option and avoids data-integrity risk. If you're on Yokohama or Zurich, standing up new Risk Assessment Projects under the new RAM makes this approach even more manageable, as RAPs give you a structured way to onboard entities under a new methodology without it feeling like starting from scratch.
- Scripted migration: If you have a high volume of existing scopes and need continuity of those records, a background/fix script to update the methodology field on existing scope records is the practical route. More on this in Q3.
One critical thing to flag before going further: if you moved the old RAM to Draft and had any assessments in Monitor or Closed state, those scopes and assessments may have already been deleted by the platform. Moving a RAM to Draft state deletes associated scopes that have active assessments. It’s worth verifying what state your data is in before taking next steps.
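A quick audit before doing anything else helps here: group the surviving scope records by methodology and state so you can see what (if anything) the platform deleted. On an instance this would be a background script or a GlideAggregate over the scope table (`sysarm_assessment_scope`, per the tables discussed below); the plain-JavaScript tally here just illustrates the shape of the check, and the `methodology`/`state` field names are assumptions to verify against your dictionary:

```javascript
// Offline sketch of the pre-migration audit: count surviving scope records
// per (methodology, state) pair so deletions caused by moving the old RAM
// to Draft become visible. On an instance, the equivalent is a GlideAggregate
// over sysarm_assessment_scope grouped by 'methodology' and 'state'
// (field names are assumptions; check your instance's dictionary).
function tallyScopes(scopes) {
    var tally = {};
    scopes.forEach(function (s) {
        var key = s.methodology + ' / ' + s.state;
        tally[key] = (tally[key] || 0) + 1;
    });
    return tally;
}
```

Comparing this tally against your pre-Phase-2 record counts tells you immediately whether any Monitor/Closed scopes were silently dropped.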
**2. Is there an OOTB way to update existing Risk Assessment Scopes?**
No. There is no OOTB bulk-update mechanism for this. The RAM field on a scope record is not editable through the UI once assessments have been initiated. This behavior has been consistent from Washington DC through Zurich; it's a platform design decision, not a version-specific limitation. Your version does, however, determine what options are available to you (see below).
**3. Scripting best practices and risks**
If scripting is the route, here's what to know:
Tables to target:
- `sysarm_assessment_scope`: this is your primary target; update the `methodology` field to the new RAM's sys_id
- Review `sysarm_risk_assessment`: individual assessment instances are also RAM-linked and may need separate handling
- Check any factor or group-factor references specific to the old RAM; orphaned references can cause scoring errors
Best practices:
- Run in sub-prod first against a small batch of scopes before executing broadly
- Filter your GlideRecord query tightly (by entity class and old RAM sys_id) to avoid unintended mass updates
- Export/backup impacted records before running
- Do NOT update scopes that have assessments in Monitor state; this can corrupt active assessment instances
- Handle those in-progress scopes separately (close them out manually, then migrate)
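Putting the table targets and practices above together, here is a minimal sketch of what such a fix script could look like. Treat it as a sketch under assumptions, not a supported migration: the table name (`sysarm_assessment_scope`) and `methodology` field come from this thread, the `state` field and its `monitor` value are assumptions, and the small `GlideRecord` stand-in at the top exists only so the logic can be exercised outside an instance. On a real instance, drop the stub and run `migrateScopes()` in a Fix Script.

```javascript
// Minimal in-memory stand-in for ServiceNow's server-side GlideRecord so
// this sketch can run (and be sanity-checked) outside the platform.
// On an instance, delete this stub and use the real GlideRecord.
function GlideRecord(table) {
    this.rows = GlideRecord.data[table] || [];
    this.conds = [];
    this.idx = -1;
}
GlideRecord.data = {}; // seeded by the caller; stands in for the database
GlideRecord.prototype.addQuery = function (field, value) {
    this.conds.push([field, value]);
};
GlideRecord.prototype.query = function () { this.idx = -1; };
GlideRecord.prototype.next = function () {
    while (++this.idx < this.rows.length) {
        var row = this.rows[this.idx];
        var match = this.conds.every(function (c) { return row[c[0]] === c[1]; });
        if (match) { return true; }
    }
    return false;
};
GlideRecord.prototype.getValue = function (field) { return this.rows[this.idx][field]; };
GlideRecord.prototype.setValue = function (field, value) { this.rows[this.idx][field] = value; };
GlideRecord.prototype.update = function () { /* rows are mutated in place */ };

// Re-point scopes from the old RAM to the new one, skipping anything in
// Monitor state (close those out manually first, as noted above).
// Field names ('methodology', 'state') and the 'monitor' value are
// assumptions; verify them against your instance's dictionary.
function migrateScopes(oldRamSysId, newRamSysId) {
    var result = { migrated: 0, skipped: 0 };
    var gr = new GlideRecord('sysarm_assessment_scope');
    gr.addQuery('methodology', oldRamSysId); // filter tightly: old RAM only
    gr.query();
    while (gr.next()) {
        if (gr.getValue('state') === 'monitor') {
            result.skipped++; // in-flight assessments: do not touch
            continue;
        }
        gr.setValue('methodology', newRamSysId);
        gr.update();
        result.migrated++;
    }
    return result;
}
```

Run it first against a small batch in sub-prod and compare the migrated/skipped counts against expectations before widening the query; adding an entity-class condition (if such a field exists on your scope table) keeps the blast radius small.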
Risks to be aware of:
- Changing the RAM reference on a scope does not retroactively fix assessment instances already generated under the old RAM; those instances will still calculate against old RAM factors. This creates a scoring discontinuity that will affect historical trending and heatmap data.
- Make sure your client understands there will be a break in comparability between pre- and post-migration assessment scores if the new RAM has different factors or weighting logic.
**Does version matter?**
Yes, your ServiceNow version affects which path is most practical:
- Washington DC / Xanadu: Fix script is essentially your only option for bulk-migrating existing scopes. Plan carefully and test thoroughly.
- Yokohama / Zurich: You have an additional architectural option: Risk Assessment Projects. RAPs provide a structured project-based framework for running assessments across entities, and they make the "close old scopes, open new RAP under new RAM" approach significantly less painful than it was in earlier versions. If your existing scopes don't have a deep history of closed assessments worth preserving, this path may be cleaner than scripting a migration.
What version are you on? That would help narrow down the best recommendation for your specific situation.
