
01-16-2019 05:50 PM
We've been using Discovery in our environment for a while now, and get a good amount of data back, though it could use some additional tweaking.
One thing I've noticed is that we have a massive number of duplicate Product Models in our system, and I believe Discovery is creating them. I'm confident that all but a few were generated automatically by Discovery, and we now have 513,330 product models in the product model table, most of which are duplicates.
As an example, I know for a fact that we have 2 Dymo LabelWriter Print Servers on our network, but we have 311 product models, all listed as "LabelWriter Print Server". Others that I've seen that are also obvious duplicates include 17,710 for "3825" (which appears to be referring to a Cisco 3825), or 81,874 for a "Catalyst 37xxStack".
How can I consolidate these duplicates into single product models that are associated with the right CIs and Assets, and ensure that huge numbers of duplicates aren't created again?
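For anyone weighing approaches, the consolidation logic boils down to three steps: group models by name, pick one survivor per group, and re-point referencing records to the survivor before deleting the rest. A minimal sketch of that logic in plain JavaScript (this is not the Glide API; the `models`/`assets` arrays and field names are invented for illustration — on a real instance this would be GlideAggregate/GlideRecord work):

```javascript
// Sketch: consolidate duplicate product models by name and re-point
// assets to the surviving model. Plain JS for illustration only.
function consolidateModels(models, assets) {
  const survivorByName = new Map(); // name -> id of the model we keep
  const deadIds = new Set();        // ids of duplicate models to drop

  for (const m of models) {
    if (!survivorByName.has(m.name)) {
      survivorByName.set(m.name, m.sys_id); // first one wins
    } else {
      deadIds.add(m.sys_id);                // everything else is a dup
    }
  }

  // Re-point each asset from a duplicate model to the survivor.
  for (const a of assets) {
    const model = models.find(m => m.sys_id === a.model_id);
    if (model && deadIds.has(model.sys_id)) {
      a.model_id = survivorByName.get(model.name);
    }
  }

  // Return the de-duplicated model list.
  return models.filter(m => !deadIds.has(m.sys_id));
}
```

With this shape, the 311 "LabelWriter Print Server" models would collapse to one, and every asset pointing at any of the 311 would end up pointing at the survivor.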
Labels: Discovery

01-16-2019 06:51 PM
Hi, have you looked into using the Field Normalization plugin to consolidate product models? It seems like a viable option.
Does this help?

01-18-2019 01:43 PM
03-26-2019 03:08 AM
Hi, we are currently trying to normalize the company table with the Field Normalization plugin.
May I ask how you set up the configuration? Correct me if I'm wrong, but it looks to me like we have to create a normal value and a rule or alias for each record, since we want to use the 'coalesce each normal' functionality.
I have now set up a script on our dev instance that flags the duplicates with 'name' + [to be normalized], then creates a normal value that coalesces to the original record and a rule that accounts for the affix of each distinct name.
The job has now been running for 23 hours, and we are down only 800 companies out of ~15,000.
What am I doing wrong?
Thanks in advance
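One possible reason this scales so poorly: generating a normal value plus a rule per record means the engine evaluates thousands of rules against thousands of records. An alternative is to collapse variants under a computed key first, so you only need one normal value per group. A hedged plain-JavaScript sketch of that grouping idea (the key function and suffix list are invented for illustration; real matching lives in the Field Normalization configuration):

```javascript
// Sketch: derive a normalization key (lowercase, strip punctuation
// and common legal suffixes) and group raw company names under it.
// Each group then needs only ONE normal value, with the raw names
// becoming its aliases.
const LEGAL_SUFFIXES = /\b(inc|ltd|llc|gmbh|corp|co)\.?$/;

function normalKey(name) {
  return name
    .toLowerCase()
    .replace(/[.,]/g, '')        // drop punctuation
    .replace(LEGAL_SUFFIXES, '') // drop trailing legal suffix
    .trim();
}

function groupCompanies(names) {
  const groups = new Map(); // key -> list of raw names (future aliases)
  for (const n of names) {
    const k = normalKey(n);
    if (!groups.has(k)) groups.set(k, []);
    groups.get(k).push(n);
  }
  return groups;
}
```

For example, "Acme Inc.", "ACME", and "Acme, Ltd" all land under one key, so one normal value with three aliases covers them instead of three separate normal/rule pairs.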
11-04-2019 11:52 AM
Marcel,
I recently discovered (no pun intended) the same problem at my company. We have close to 400,000 hardware models, many with only one related Asset. When you used normalization, did it re-point the Assets to the surviving model and eliminate the other models? And is there any documentation available on using normalization this way?

11-04-2019 12:00 PM
Hi Jim -- see the docs on "Field normalization and transformation".

11-04-2019 01:09 PM
I found that Normalization doesn't always do the job fully. Even when I have matching criteria, many times I'd still be left with a lot of duplicates. Many times what I'd do is create a new alias or add to the rule and then change the name of the still outstanding duplicate records to match the new alias criteria and it would go through. I'm not sure why these aren't caught the first time through, but I've noticed the same behavior when normalizing company names as well.