SEVERE *** ERROR *** Can not parse content

JJ20
Kilo Guru

Restarted the MID server (it appeared hung due to a discovery job). We are noticing the following error in agent.0.log. How do we determine what is causing this fatal error?

11/04/20 05:07:41 (622) Worker-Standard:ServiceDiscoveryProbe-3286778d1b9ce0108ef69829bc4bcba5 SEVERE *** ERROR *** Can not parse content com.snc.sw.exception.FileParsingException: Failed to parse XML file. Error: Problem with File Structure, Invalid XPath expression: "": Unexpected ''

1 ACCEPTED SOLUTION

JJ20
Kilo Guru

This is a bug in the Kubernetes pattern, to be fixed in the Paris release.

View solution in original post

13 REPLIES

Rahul Priyadars
Tera Sage

Please also look into the ECC Queue log of the Discovery. There you can see which PAYLOAD data is causing the problem.

I am assuming some special character is causing this.

The JSON translator needs to conform to valid XML standards:

  • Element names cannot start with a number
  • Element names cannot start with "xml" (in any case)
  • Element names cannot contain "-" or most special characters, such as "#", "@", etc.
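As an illustration of the naming constraints above, here is a small sketch in Python (purely illustrative; the MID server's actual JSON-to-XML translator is not shown, and the helper names `is_safe_element_name` and `sanitize` are my own) that checks whether a JSON key would survive conversion to an XML element name and rewrites it if not:

```python
import re

# Rough approximation of the XML element-name rules listed above:
# a name must start with a letter or underscore, must not start with
# "xml" (any case), and must not contain "-", "#", "@", "/", or other
# punctuation beyond "_" and ".".
_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_.]*$")

def is_safe_element_name(key: str) -> bool:
    """Return True if `key` looks like a valid XML element name."""
    if key.lower().startswith("xml"):
        return False
    return bool(_NAME_RE.match(key))

def sanitize(key: str) -> str:
    """Replace characters that would break the conversion with '_',
    and prefix a '_' when the name would otherwise start with a digit."""
    cleaned = re.sub(r"[^A-Za-z0-9_.]", "_", key)
    if cleaned[:1].isdigit():
        cleaned = "_" + cleaned
    return cleaned
```

For example, `x86-Platform` fails the check because of the hyphen and would be rewritten as `x86_Platform`, while `level_2_support` is already a legal name (digits are only forbidden in the first position).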

Regards

RP

Hello Rahul,

I think we may be experiencing the issue you are describing with special characters. We have a text file stored on a Linux server that contains data we would like to extract and use to update the CI.

For example, the text file contains variables such as:

level_2_support=UNIX/Linux Level 2
level_3_support=x86-Platform-Level 3

I am using the Parse File operation to collect the support team name values, which may contain the special characters "-" or "/".

Based on your statement above, should we avoid these special characters if we choose to collect the values via the Parse File option?
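For what it's worth, the naming rules quoted above apply to XML element names, not to text content: characters like "-" and "/" are legal inside an XML value and only "&", "<", and ">" need escaping. A minimal sketch (in Python, assuming a simple `key=value` file like the sample above; the real Parse File operation is ServiceNow's and is not reproduced here):

```python
from xml.sax.saxutils import escape

def parse_support_file(text: str) -> dict:
    """Parse key=value lines like the sample file above."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            result[key.strip()] = value.strip()
    return result

sample = """level_2_support=UNIX/Linux Level 2
level_3_support=x86-Platform-Level 3"""

data = parse_support_file(sample)

# Values only need standard XML escaping (&, <, >); "-" and "/" are
# legal inside text content, so escape() leaves them untouched.
escaped = {k: escape(v) for k, v in data.items()}
```

So values such as "UNIX/Linux Level 2" should be safe as element content; problems would be expected only if those strings ended up used as element names.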


Hello @Rahul Priyadars, I am directing my question to you because I have seen you involved in different threads about Kubernetes. I am having a similar issue where the JSON file cannot be parsed; I have inspected the contents of the JSON file and it seems to be fine. Another thing I have noticed is that small OpenShift Kubernetes clusters (around 12 to 20 nodes) get discovered without problems, but I am having trouble with clusters of over 80 nodes. Is it possible that the amount of data in the JSON file is too big for the tool to handle? I can also see the MID servers going down every time I try scanning them...

Thank you!

Sandeep132
Kilo Sage

Hi JJ,

Are you seeing this error on any pattern execution? I had a similar issue with Kubernetes discovery for OpenShift 4 and have a workaround for it.

Thanks,