
03-15-2022 09:18 AM
Hi all,
We are a company using Teradata's enterprise data warehouse solution as our data lake. Since we use it for all kinds of purposes (multiple business processes, financials as well as risk & compliance), we need to create a structure that's meaningful to all of our stakeholders - be it business analysts, data analysts, other business applications, compliance officers, etc. At the same time we need to be able to address alerts and events, plus incidents and changes - not to mention GDPR DPIAs and other regulatory policies.
Databases are created frequently as part of this architecture, and they need to be tagged and identified so that we have good information governance and access management.
How should we go about this in a "best practice" CSDM 4.0 way?
All recommendations are very welcome since we are already in need of tags for our newly created databases.
Cheers,
Kristine
10-05-2023 07:09 AM - edited 10-05-2023 07:13 AM
The APM product provides Data Domain on top of the Information Object that we have in CSDM. The intended use when we implemented Information Objects in APM is to understand what data is being used / stored in each Business Application. This would typically trigger a specific type of audit, such as a PCI audit when credit cards are stored and used, for example.
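For illustration, here is a minimal sketch of what recording that linkage could look like through the Table API. The Information Object table name and the relationship type below are assumptions - verify them in your own instance:

```python
# Minimal sketch: link an Information Object CI to a Business Application
# via the ServiceNow Table API, so audits (e.g. PCI) can be scoped per app.
# The Information Object table name is an ASSUMPTION - verify in your instance.
import requests

INSTANCE = "https://yourinstance.service-now.com"  # hypothetical instance URL
AUTH = ("api_user", "api_password")                # use a real credential store in practice
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

def first_sys_id(table: str, query: str):
    """Return the sys_id of the first record in `table` matching `query`."""
    r = requests.get(
        f"{INSTANCE}/api/now/table/{table}",
        params={"sysparm_query": query, "sysparm_limit": 1, "sysparm_fields": "sys_id"},
        auth=AUTH, headers=HEADERS,
    )
    r.raise_for_status()
    rows = r.json()["result"]
    return rows[0]["sys_id"] if rows else None

# Assumed table name for Information Objects; the CI names are examples only.
info_obj = first_sys_id("cmdb_ci_information_object", "name=Credit Card Data")
biz_app = first_sys_id("cmdb_ci_business_app", "name=Card Processing")
rel_type = first_sys_id("cmdb_rel_type", "parent_descriptor=Depends on")

# Relate the two CIs so the audit scope follows the Business Application.
resp = requests.post(
    f"{INSTANCE}/api/now/table/cmdb_rel_ci",
    json={"parent": biz_app, "child": info_obj, "type": rel_type},
    auth=AUTH, headers=HEADERS,
)
resp.raise_for_status()
print("Relationship created:", resp.json()["result"]["sys_id"])
```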
What you seem to be describing is "data as a product", in that data is a deliverable stream or package that one team provides for another team to consume and use in some way. If this is the "product" use case, then extending the product model types to include a Data Product Model may be an appropriate approach. This would also allow you to articulate a product catalog and catalog items through which consumers could obtain this data as a type of product.
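To make that concrete, here is a rough sketch of creating such a record once the model tables have been extended. The table u_cmdb_data_product_model and its u_-prefixed fields are purely hypothetical names for a custom extension of cmdb_model:

```python
# Rough sketch: create a record in a HYPOTHETICAL custom table
# u_cmdb_data_product_model (imagined as an extension of cmdb_model).
# All u_-prefixed field names are assumptions, not out-of-box fields.
import requests

INSTANCE = "https://yourinstance.service-now.com"  # hypothetical
AUTH = ("api_user", "api_password")
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

payload = {
    "name": "Customer Risk Scores (daily)",  # the data "product" consumers subscribe to
    "u_data_domain": "Risk & Compliance",    # hypothetical custom field
    "u_source_platform": "Teradata",         # hypothetical custom field
}

resp = requests.post(
    f"{INSTANCE}/api/now/table/u_cmdb_data_product_model",
    json=payload, auth=AUTH, headers=HEADERS,
)
resp.raise_for_status()
print("Data Product Model created:", resp.json()["result"]["sys_id"])
# A catalog item (sc_cat_item) could then reference this model so that
# consumers can request the data product through the service catalog.
```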
My main caution is that this will add a lot of extra management overhead, and I would encourage you to bring those who provide and consume data in this way to the table, to understand what level of manual labor is required to initially document this level of detail and then to maintain it going forward. It's frankly hard enough for most companies to manage their Business Applications and Services at an appropriate level, let alone data products in this way. If the providers and consumers are willing to do it, then knock yourself out. : )

04-28-2022 12:53 AM
Hi Kristine,
The use cases you bring up in the community are really interesting. Again, I have never worked with such a requirement, so I can't say for certain how you can stay compliant with CSDM here.
But in the last few weeks I noticed in the docs that ServiceNow has an information portfolio, where you can manage different database levels and connect them to business applications and hardware/software CIs. If I remember correctly, you have the APM module in your company, so maybe you can have a look into it.
Here is the picture from the ServiceNow docs showing how the data could be connected: [image from the ServiceNow docs not reproduced here]
I would be glad to hear about the solution you found.
Regards, Sebastian
05-03-2023 01:33 PM
We are also looking into data lakes as a solution to our current challenges regarding data analysis.
I'd be very interested in hearing more about this. We tried with Power BI but found it limiting.

09-14-2023 10:29 PM
Good morning all,
There are two aspects to the question, I think:
one is how to model Teradata as a solution, and the other is what it means to the data consumers.
The data consumers use applications and processing solutions to work with the Teradata data, but each of them depends on 'just' a part of that data, and only that part is a potential impact for them.
Meaning:
Financials might use a different processing solution to consume/work with the Teradata data than Risk and Compliance does.
Is it identifiable that one part of the data(bases) is used for Financials and another part is used for Risk and Compliance? I guess so, as it follows from the architecture. The operational reason to model it like that is to understand potential impact at, e.g., the database level: a specific database then supports, e.g., a solution used by Financials. If it is instead modeled as one Teradata solution supporting multiple processing/reporting solutions, then a single disruption at the DB level results in an impact for all consumers, which I believe is not the real situation.
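To illustrate this, a minimal sketch, assuming each database is modeled as its own CI and related only to the solution that actually consumes it. The CI names are made up, while cmdb_rel_ci is a standard table:

```python
# Sketch: relate each Teradata database CI only to the solution that
# actually consumes it, so a DB-level disruption maps to the right
# consumers instead of to everyone. CI names are made up.
import requests

INSTANCE = "https://yourinstance.service-now.com"  # hypothetical
AUTH = ("api_user", "api_password")
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

def create_rel(parent: str, child: str, rel_type: str) -> str:
    """Create a parent->child relationship in the standard cmdb_rel_ci table."""
    r = requests.post(
        f"{INSTANCE}/api/now/table/cmdb_rel_ci",
        json={"parent": parent, "child": child, "type": rel_type},
        auth=AUTH, headers=HEADERS,
    )
    r.raise_for_status()
    return r.json()["result"]["sys_id"]

# Fill these in from your CMDB; the pairing is the whole point:
# the Financials solution depends on FIN_DWH only, not on all of Teradata.
DEPENDS_ON = "<sys_id of the 'Depends on::Used by' relationship type>"
mappings = [
    ("<sys_id of Financial Reporting solution CI>", "<sys_id of FIN_DWH database CI>"),
    ("<sys_id of Risk & Compliance solution CI>",   "<sys_id of RISK_DWH database CI>"),
]

for solution, database in mappings:
    create_rel(solution, database, DEPENDS_ON)
```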
BR,
Barry

10-03-2023 04:03 AM
Hi all,
Since last time, we have looked into modelling the subsets of data according to where the various Information Objects are placed, as @Barry Kant and @SebastianKunzke mention. By linking IOs to DBs that we have discovered automatically, we see that it is possible to show upstream relationships to service offerings as well.
But now there is a need to follow up on Information as an Asset, as well as Information as a Service (DORA, ISO 27001, etc.), and the need to formalize the governance of Data Products and Data Product Models turns up. Is there a plan to create Data Product Models, @Mark Bodman? Or do we need to create this ourselves? We have started using all the other Product Models, but lack this one.
If we then want to demonstrate business resilience in our BCM work, we could point to valuable information assets in our alm_asset table, referencing data models that in turn (somehow) reference Information Objects and thereby classifications, ownership and data domains. If this existed, it would also be possible to create a data product tag in the GitLab repos, so that once a new DB catalog is created, it gets a matching Data Product Model ID in ServiceNow as well (I haven't quite thought through how this could be orchestrated, but we could use a YML or JSON file for this, I guess - see the sketch below).
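As a hedged sketch of that orchestration idea: a small script a GitLab CI job could run against a data-product.json committed in the repo, upserting the matching record in ServiceNow. The descriptor schema and the u_cmdb_data_product_model table are hypothetical, as above:

```python
# Sketch for a GitLab CI step: read a data-product descriptor committed
# in the repo and upsert the matching Data Product Model in ServiceNow.
# The descriptor schema and the target table are hypothetical assumptions.
import json
import os
import requests

INSTANCE = os.environ["SN_INSTANCE"]  # e.g. https://yourinstance.service-now.com
AUTH = (os.environ["SN_USER"], os.environ["SN_PASSWORD"])  # CI/CD variables
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}
TABLE = "u_cmdb_data_product_model"   # hypothetical custom table

with open("data-product.json") as f:  # committed next to the DB catalog code
    descriptor = json.load(f)          # e.g. {"name": ..., "data_domain": ..., "owner": ...}

payload = {
    "name": descriptor["name"],
    "u_data_domain": descriptor.get("data_domain", ""),  # hypothetical field
    "u_owner": descriptor.get("owner", ""),               # hypothetical field
}

# Look for an existing model with the same name (the "tag" linking repo and CMDB).
r = requests.get(
    f"{INSTANCE}/api/now/table/{TABLE}",
    params={"sysparm_query": f"name={payload['name']}", "sysparm_limit": 1},
    auth=AUTH, headers=HEADERS,
)
r.raise_for_status()
existing = r.json()["result"]

if existing:  # update the existing record in place
    sys_id = existing[0]["sys_id"]
    requests.patch(f"{INSTANCE}/api/now/table/{TABLE}/{sys_id}",
                   json=payload, auth=AUTH, headers=HEADERS).raise_for_status()
else:         # create it; the sys_id can be written back as the repo's data product tag
    resp = requests.post(f"{INSTANCE}/api/now/table/{TABLE}",
                         json=payload, auth=AUTH, headers=HEADERS)
    resp.raise_for_status()
    sys_id = resp.json()["result"]["sys_id"]

print("Data Product Model ID:", sys_id)
```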
Or is there another way we are supposed to handle Data models?
We are about to start the journey of integrating with Kong API Gateway, and would like to manage information content as robustly as the APIs that provide it. We have also started looking into an integration with Snowflake, so things are happening fast.
Best regards,
Kristine