How to automatically create a target table for a data import set?
03-06-2015 04:32 PM
I have been asked to migrate an existing (and very complex) structured knowledge repository to ServiceNow (this is my first exposure to ServiceNow, so please forgive a dumb question). I realize that I will need to create the main data tables from scratch, one column at a time, in order to have all the proper configuration (7 tables, 50-120 columns each, lots of reference fields).
BUT I have another 170 tables in Excel that I can load into import sets, as they are simpler (no reference fields, etc.). These have 10-70 columns each, all unique, and will only be referenced by records in the main data tables.
I need to transform these into a set of 170 tables (a one-time exercise). What I hope to do is automatically create each custom target table from its individual import set.
Is this possible?
Is there a better way? Can MS Access be used to create tables via an ODBC connection? (For instance, I can create these kinds of tables directly from MS Access in SharePoint in one click.)
Many Thanks
R
- Labels: Integrations
03-08-2015 01:34 AM
Hi there,
I hope when you say "170 tables" you don't actually mean 170 tables, because if so, I would really reconsider how you want to implement your knowledge base and what the end goal is.
Are you creating your own KB within SN, or looking at leveraging the KB already in SN?
I am not sure there is an automated way to create tables, though others on here might be able to advise.
Pete
03-08-2015 12:39 PM
Hello Peter,
This project was intended to be a "lift and shift" from another platform to ServiceNow, at the request of the client, who wants to integrate the existing (successful) knowledge management database from the other platform into ServiceNow. In that other database, about 70% of the fields are structured (choice or reference) and 30% are HTML (unstructured).
There is one knowledge attribute (a column in the main table) that we link to an underlying list of additional detail. In times past we stored those as Excel attachments, but that makes linking to the other systems (say, via JDBC) a problem. We eventually stored this extra detail as lists/tables, and that worked very well in all respects: it allowed linkage for automatic updates, let us add additional information to these tables without it being overwritten upon updates, and gave better performance (user experience). Given that background, if you have a better approach, please share.
My fallback is to just store the Excel file, as creating all these tables one column at a time is cost prohibitive based on the budget for the project (out of my control).
We are leaning towards a custom application vs. the existing knowledge base at this time, but will reconsider if a good case is made for using the existing one.
Thanks for your interest, and if you have any suggestions for creating new tables (with pre-existing content) more efficiently, please let me know.
03-08-2015 03:08 PM
Hi Rakin, 170 tables seems like a lot. Could you please share the schema of the main table and a couple of the other tables? I want to understand the structure you are using a little better.
If you want to create tables automatically, you can develop a script to do so. You will first need to have the table information as a JavaScript object or as JSON. There are online tools that can help you create that structure based on an Excel file. For example:
var tables = [
    { "name": "table1", "fields": [
        {"name": "field1", "type": "string"},
        {"name": "field2", "type": "string"}
    ]}
];
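As a sketch of how that structure could be generated without an online tool, here is a hypothetical, minimal helper (plain JavaScript, not ServiceNow-specific; the function name is made up) that turns a table name and a CSV header line into the same definition shape, assuming every column is imported as a string field:

```javascript
// Minimal sketch: build a table definition from a CSV header line.
// Assumes every column should be imported as a string field.
function tableDefFromCsvHeader(tableName, headerLine) {
    var fields = headerLine.split(',').map(function (col) {
        return { name: col.trim(), type: 'string' };
    });
    return { name: tableName, fields: fields };
}

// Example: the header row of one of the 170 Excel exports.
var def = tableDefFromCsvHeader('table1', 'field1, field2');
```

Running this once per exported header row would give you the `tables` array above without hand-writing 170 definitions.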
Then you can run a background script to create the tables:
for (var i = 0; i < tables.length; i++) {
    var table = new GlideRecord('sys_db_object');
    table.initialize();
    table.name = 'u_' + tables[i].name; // custom table names take the u_ prefix
    table.label = tables[i].name;
    table.insert();
    var fields = tables[i].fields;
    for (var j = 0; j < fields.length; j++) {
        var field = new GlideRecord('sys_dictionary');
        field.initialize();
        field.name = 'u_' + tables[i].name; // sys_dictionary stores the table name, not its sys_id
        field.element = 'u_' + fields[j].name; // the column name
        field.column_label = fields[j].name;
        field.internal_type = fields[j].type;
        field.insert();
    }
}
03-08-2015 03:22 PM
Thanks. I was afraid I would have to find a coder for the project, but that (JSON) does look like the best way so far. Many thanks!
Regarding the number of tables: don't think of them as managing actions or as part of any workflow. They are just part of the KM record (like an encyclopedia, where each main record is a topic and these tables simply hold additional detail that's too big for the record to display). So if the main KM data table has 1000 records, one could potentially have 1000 of these supplementary tables that do no "work"; they only display semi-static information from another system, as a reference. I am happy to have only 170 examples and not 1000, but the point is that these could even be stored as an Excel file embedded in a record if the maintenance for the manual sync weren't a nuisance.
Many thanks for the suggestion.