Cúram on Kubernetes

MustGather data for SPM

Collecting MustGather Data for Cúram on Kubernetes

Introduction

This section details the information needed when raising a support case. Note that not all of the information detailed here is necessary when raising an issue on GitHub.

Cúram Support may request the Helm Charts and Dockerfiles developed by the engineering team as part of the support process.

Question

What information should I collect for issues with Cúram in a Kubernetes environment to assist Merative Cúram Support with their investigations?

Answer

First, a comprehensive description of the problem along with answers to the following questions:

  • Can the issue be recreated, and can steps be provided so that Cúram Support can do the same?
  • What is the impact of the issue; for instance, frequency of occurrence and pervasiveness?
  • Have any workarounds or mitigations been identified?
  • What investigation has already been performed?

Collecting MustGather Data Guidelines

Outlined below are the MustGather data guidelines that we recommend be followed:

  • Do not provide any unanonymized data or information.
  • Save all output to text files or files of appropriate formats, and include the command that created the output file (see the example after this list).
  • Logs and other artifacts can only be obtained from a Kubernetes pod if the pod is running; the two exceptions are:
    • stdout is still available via kubectl logs (see Stdout Logs from Pods for details) even if the pod is in a non-running state, such as Completed
    • logs that are stored persistently are available over the lifetime of that storage medium
  • Regarding command examples that follow:
    • Variable information in commands, such as namespaces, is represented in the example commands by placeholders in angle brackets, which must be replaced with the correct value(s).
    • Commands where additional options are possible are indicated by an ellipsis: …
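
For example, a simple way to capture a command together with its output in one text file is shown below; the output file name is illustrative:

# Record the command alongside its output in a single text file
{ echo "# uname -a"; uname -a; } > uname-output.txt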

General Cúram MustGather

Beyond the answers to these questions, general third party software information (for example, versions) and more specific diagnostic, log, and other information for the areas where the issue is seen must be collected. This general information is described in the sections that follow. For issues with the following third party software, their external MustGather links should be consulted, as appropriate:

The following sections describe how to gather information specific to Cúram running on Kubernetes and identify where the relevant logs and traces reside.

Cúram and Base Third Party Software Versions

In a Cúram environment the Ant configreport target includes the following versions in its generated zip file:

  • Cúram version - available in the Installerlogs/Installhistory.txt file within the resulting $SERVER_DIR/config_report.zip file
  • Inside the ConfigReporter*.log file of the config folder within the resulting $SERVER_DIR/config_report.zip file you will find the following versions:
    • Java version
    • Application server version
    • Database version
    • Ant version

To generate the $SERVER_DIR/config_report.zip file run the Ant configreport target:

cd $SERVER_DIR
./build.sh configreport

Please provide the config_report.zip file.

Runtime Systems

Operating Systems and Platforms

Include this information for all relevant platforms used for deployment, for example where the Azure CLI or Minikube is run.

On most systems the following command provides basic platform and operating system information:

uname -a

The following commands display the relevant versions for Docker, Helm and Kubernetes:

docker version # Provides version information for the Docker client and server
docker info # Provides general runtime information about Docker
helm version # Provides Helm version information
# Depending on the environment, obtain the appropriate Kubernetes version information:
minikube version # If applicable
oc version # If applicable (OpenShift)
kubectl version # Applies to Minikube, Kubernetes and OpenShift environments

Helm Information

For deployments, gather information relevant or appropriate to the environment and the issue faced.

For Helm, use the command appropriate to the Helm version to list the deployments in the environment, which provides the release and chart names needed in later Helm commands:

helm list -a --namespace <name_space>

Verify Helm Chart Correctness

Use the helm lint command to verify the correctness of the Helm Charts, prior to installation, for example:

cd $SPM_HOME/helm-charts/spm
helm lint .

You should receive successful output from the helm lint command like this:

==> Linting .
Lint OK
1 chart(s) linted, no failures

If any errors are encountered, correct them and then attempt to recreate your issue.

Helm Chart Rendering

Running the helm template command processes the chart and generates output similar to what would be generated during deployment. However, the helm install command (see the following examples) provides additional information, notably any overrides provided via the install. So, where possible, the output from helm install is preferable to that of helm template.

helm template <release_name> <chart_name> ...

Check the output to verify that the expected values appear as intended.
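
For example, a sketch that renders the chart and saves the output for review, using the example release name "purpleclown" used elsewhere in this document and the chart location shown earlier (adjust the release name, chart path, and namespace to your environment):

helm template purpleclown $SPM_HOME/helm-charts/spm --namespace <name_space> > /tmp/purpleclown-template.yaml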

Helm Chart Installation

If errors are encountered when installing Helm Charts, use the helm install command with the following options to generate additional information:

  • --debug - enables verbose output, including the computed values and generated manifests for the release
  • --dry-run - simulates the installation: the chart is rendered and validated, but nothing is installed

For example:

helm install <release_name> <chart_name> --debug --dry-run ...
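
To include this output in the MustGather, the command and its output can be saved to a file, for example (the file name is illustrative):

helm install <release_name> <chart_name> --debug --dry-run ... 2>&1 | tee /tmp/helm-install-dry-run.log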

Verify Helm Chart Installation

The following command provides more information about what has been deployed or is running in Kubernetes.

The helm get all command provides information equivalent to helm install --debug --dry-run. For example:

helm get all <release_name> ...
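
As with the other Helm commands, the output can be saved directly to a file for inclusion in the MustGather, for example:

helm get all <release_name> --namespace <name_space> > /tmp/helm-get-all.txt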

Kubernetes Deployment Information

Information on a Kubernetes Environment

If running in a Minikube environment use the following command to confirm its status:

minikube status

Use the following commands to get information about the running Kubernetes environment.

Display basic information about the cluster configuration:

kubectl config view --namespace <name_space> ...

Display basic information about the nodes and pods in the cluster:

kubectl get nodes -o wide --namespace <name_space> ...
kubectl get pods -o wide --namespace <name_space> ...

For each of the pods displayed by the kubectl get pods command use the pod name as input to the following command to get more detailed information about that pod, for example:

kubectl describe pod <pod_name> --namespace <name_space> ...

A convenient way to run the kubectl describe pod <pod_name> --namespace <name_space> ... command for all the pods in a namespace is as follows:

mkdir pod
pods=$(kubectl get pods -o name --namespace <name_space>)
# Names returned by -o name are prefixed with "pod/", so strip the prefix when naming the output files
for p in $pods ; do kubectl describe $p --namespace <name_space> > pod/${p#pod/}.txt ; done

The results will be in the pod folder created using the mkdir command.

MustGather to Collect Logs and Other Artifacts

The following sections identify how to obtain logs and artifacts for specific areas of Kubernetes.

Externalizing Logs and Artifacts Inside a Kubernetes Pod

Externalizing logs and artifacts from a Kubernetes pod varies by the type of pod. The sections that follow identify locations for this information. The kubectl command reference gives several examples of how to externalize files from within a pod, such as:

kubectl cp <name_space>/<pod_name>:/logs /tmp/logs
kubectl exec -n <name_space> <pod_name> -- tar cf - /logs | tar xf - -C /tmp

Note: Both of the above kubectl command formats produce the following spurious, benign error which can be ignored:

tar: Removing leading `/' from member names

Minikube Logs

Use the minikube logs command to obtain logs for the Minikube environment.
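
For example, the runtime logs can be saved to a file for inclusion in the MustGather (the file name is illustrative):

minikube logs > /tmp/minikube.log 2>&1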

The logging level can be set when starting Minikube via the --v option of the minikube start command. For instance, and to conveniently save the output at the same time:

minikube start --alsologtostderr --v=3 2>&1 | tee /tmp/minikube-start.log

Note that the log output at startup differs from the runtime log content provided by minikube logs.

See Minikube documentation for more information.

Stdout Logs from Pods

The kubectl logs command provides the stdout logs for a specified Kubernetes pod, so the log data provided is specific to the type of pod or software running within it. For instance, if a pod is running WebSphere Liberty this command provides the equivalent of the contents of the WebSphere Liberty console.log file; for other pods it provides stdout log output specific to that pod’s context, for instance MQ, Cúram XML Server, or Cúram batch.

The command has various options, such as --follow, --tail, and --since to control and display the log output. See the kubectl help logs command for option details.
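
As a sketch, the following saves a pod’s recent stdout to a file; the file name and time window are illustrative:

# Capture the last 24 hours of stdout from a pod
kubectl logs <pod_name> --namespace <name_space> --since=24h > <pod_name>-stdout.log
# For multi-container pods add -c <container_name>; for a restarted container add --previous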

Third Party Software Logs and Artifacts

Beyond basic stdout logging from a pod, there may be additional logging or artifacts needed for investigating an issue. These logs and artifacts will be available within the pod’s file system or, in the case of object storage, in the persistent storage location specified.

Further, the specific logs, artifacts, and their locations vary depending on the software running within the pod.

The following sections identify these various locations.

WebSphere Liberty Stdout Logs

WebSphere Liberty pods are identified by the pod naming pattern beginning with: <helm_release_name>-apps- or <helm_release_name>-uawebapp-.

Thus, if the Helm release was named “purpleclown”, pod names like these can be found by running a kubectl get pods command:

  • purpleclown-apps-curam-consumer-7c578b9b59-t487n
  • purpleclown-apps-curam-producer-6794fc9b94-4srxr
  • purpleclown-apps-rest-producer-6858d55bd-x6z6s
  • purpleclown-apps-rest-consumer-5567ddc4cd-qpwp7
  • purpleclown-uawebapp-54b56d5866-2mnk5

where the number of pods may vary depending on the applications deployed and the number of replicas specified.

The default log location for WebSphere Liberty is the /logs folder of the pod’s file system and, for example, can be accessed with the following command:

kubectl exec --namespace <name_space> -it <pod_name> -- ls -l /logs

which would produce output like:

-rw-r----- 1 default root 62334 Apr 14 06:01 messages.log
-rw-r--r-- 1 default root 2134 Mar 19 06:44 messages_20.04.14_06.00.47.0.log
-rw-r----- 1 default root 973 Apr 14 06:00 trace.log

When using object storage, the WebSphere Liberty logs, JVM dumps, garbage collection logs, and Cúram JMX statistics are symbolically linked to /tmp/logs in the pod’s file system and are available in the bound cloud storage.
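
For example, the /logs folder can be copied out of a pod using the kubectl cp format shown earlier:

kubectl cp <name_space>/<pod_name>:/logs /tmp/<pod_name>-logs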

WebSphere Liberty Configuration Information

WebSphere Liberty configuration information can be found in the pod’s file system in /liberty/wlp/usr/servers/defaultServer, which contains:

server.xml
jvm.options
server.env
bootstrap.properties
adc_conf/

The WebSphere Liberty server package command can be used to zip up the server; however, because the logs folder is stored separately, it must be handled directly from the /logs folder.
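
For example, the configuration files can be listed, or copied out of the pod, with:

kubectl exec --namespace <name_space> -it <pod_name> -- ls -l /liberty/wlp/usr/servers/defaultServer
kubectl cp <name_space>/<pod_name>:/liberty/wlp/usr/servers/defaultServer /tmp/defaultServer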

MQ Logs and Artifacts

MQ pods are identified by the pod naming pattern beginning with: <helm_release_name>-mqserver-.

So, if the Helm release was named “purpleclown” pod names like the following can be found by running a kubectl get pods command:

  • purpleclown-mqserver-curam-557d64c76-7sm8r
  • purpleclown-mqserver-rest-dbb994dc8-4p5cv

where the number of pods may vary depending on the Cúram applications deployed.

The location of MQ logs inside the MQ pods is as documented in the IBM MQ documentation. The various log, error, and trace locations identified in the IBM MQ documentation (and its MustGather) can be investigated by accessing the MQ pod’s shell, for example:

kubectl exec --namespace <name_space> -it <mq_pod_name> -- /bin/bash
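
As a sketch, and assuming the default MQ data path of /var/mqm (confirm the exact locations against the IBM MQ documentation for your version), the MQ error logs could be copied out of the pod with:

kubectl cp <name_space>/<mq_pod_name>:/var/mqm/errors /tmp/mq-errors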

Cúram XML Server Logs and Artifacts

Cúram XML Server pods are identified by the pod naming pattern beginning with: <helm_release_name>-xmlserver-.

So, if the Helm release was named “purpleclown” pods named like the following can be found by running a kubectl get pods command:

  • purpleclown-xmlserver-86b588c784-z76mz

where the number of pods may vary depending on the number of replicas specified.

Using the kubectl exec command, as follows, will provide access into the default XML Server directory /opt/ibm/Curam/xmlserver:

kubectl exec --namespace <name_space> -it <xmlserver_pod_name> -- /bin/bash

The log file in /opt/ibm/Curam/xmlserver/tmp/xmlserver.log is the same as the stdout produced by the kubectl logs command.
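
For example, this log file can be copied out of the pod with:

kubectl cp <name_space>/<xmlserver_pod_name>:/opt/ibm/Curam/xmlserver/tmp/xmlserver.log /tmp/xmlserver.log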

For more information about XML server tracing and logging, see Configuring the XML server in the Cúram XML Infrastructure Guide. Gathering this data is beyond the scope of this document.

Cúram PDF documentation is available to download from Cúram Support Docs.

Cúram Batch Logs and Artifacts

Cúram batch pods are identified by the pod naming pattern beginning with: <helm_release_name>-batch-.

So, if the Helm release was named “purpleclown” pods named like the following can be found by running a kubectl get pods command:

  • purpleclown-batch-1587036600-9rrnw

where the number of pods may vary depending on the number of replicas specified.

Use the kubectl logs command to view the batch pod’s stdout.

For these pods the relevant log files can be found within the /opt/ibm/Curam/release/buildlogs folder of the pod’s file system.

Using the kubectl exec command as illustrated will place you into the pod:

kubectl exec --namespace <name_space> -it <batch_pod_name> -- /bin/bash

The log file is in /opt/ibm/Curam/release/buildlogs/BatchLauncherXXXXXXXXXX.log.

If the persistence volume is enabled to store data, the logs are saved to the Azure blob storage account, in the “logs” directory within the pod’s directory. If persistence is not enabled, the logs are generated by default in the folder indicated above.

Enabling persistence activates the parameter:

-Ddir.bld.log=$MOUNT_POINT/$HOSTNAME/logs

In Batch, the -Ddir.bld.log option specifies a system property that defines the path to the directory where the build logs are stored. This option is typically used in build commands or automation scripts to configure properties dynamically without modifying configuration files, and provides a flexible way to set the log directory for Batch build processes. Configuring the logs correctly is important for debugging and maintaining the build process. $MOUNT_POINT/$HOSTNAME/logs is a folder created ad hoc to store the logs in the Azure blob storage account.
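
As a sketch, the build logs can be copied out of a running batch pod (or, when persistence is enabled, retrieved directly from the blob storage location above):

kubectl cp <name_space>/<batch_pod_name>:/opt/ibm/Curam/release/buildlogs /tmp/batch-buildlogs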

HTTP Server Logs and Artifacts

The Apache HTTP Server is only used for hosting Cúram static content. This information would only be of interest for issues relating specifically to static content.

HTTP Server pods are identified by the pod naming pattern beginning with: <helm_release_name>-web-.

So, if the Helm release was named “purpleclown” pods named like the following can be found by running a kubectl get pods command:

  • purpleclown-web-5c9cf5d868-ncdqk

where the number of pods may vary depending on the number of replicas specified.

For these pods the relevant log files can be found within the /var/log/httpd folder of the pod’s file system. For instance:

kubectl exec --namespace <name_space> -it <pod_name> -- ls -l /var/log/httpd/error_log /var/log/httpd/access_log

would show the relevant logs:

-rw-r--r-- 1 default root 7826 Jul 28 09:32 /var/log/httpd/access_log
-rw-r--r-- 1 default root 3297 Jul 28 09:25 /var/log/httpd/error_log
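
For example, the HTTP Server logs can be copied out of the pod with:

kubectl cp <name_space>/<pod_name>:/var/log/httpd /tmp/httpd-logs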