MustGather data for SPM
Collecting MustGather data for SPM on Kubernetes
- Introduction
- Collecting MustGather data guidelines
- General SPM MustGather
- Runtime systems
- Helm information
- Kubernetes deployment information
- MustGather to collect logs and other artifacts
Introduction
This section details the information needed when raising a support case. Note that not all of the information detailed here is necessary when raising an issue on GitHub.
Question
What information should I collect for issues with Merative Social Program Management (SPM) in a Kubernetes environment to assist Merative Cúram Support with their investigations?
Answer
First, a comprehensive description of the problem along with answers to the following questions:
- Can you recreate the issue and provide the steps for Merative Support to do the same?
- What is the impact of the issue; for instance, frequency of occurrence and pervasiveness?
- Have you identified any workarounds or mitigations?
- What investigation have you already performed?
Collecting MustGather data guidelines
Outlined below are the MustGather data guidelines that we recommend you follow.
- Do not provide any unanonymized data or information.
- Save all output to text files or files of appropriate formats. Include the command that created the output file (see the sketch after this list).
- Logs and other artifacts can only be obtained from a Kubernetes pod if the pod is running; the two exceptions are:
  - stdout is still available via kubectl logs (see stdout logs from pods for detail) even if the pod is in a non-running state such as Completed
  - logs that are stored persistently are available over the lifetime of that storage medium
- Regarding the command examples that follow:
  - Variable information, such as namespaces, is represented in the example commands by angle brackets (for example, <name_space>), which you must replace with your own values.
  - Commands where additional options are possible are indicated by an ellipsis: …
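For example, a minimal sketch of the guideline about saving output together with the command that created it (the file name helm-list.txt is illustrative):
# Record the command itself, then its output, in one text file
{ echo "# helm list -a --namespace <name_space>" ; helm list -a --namespace <name_space> ; } > helm-list.txt 2>&1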
General SPM MustGather
Beyond the answers to these questions, you need to collect general third party software information (for example, versions) and more specific diagnostic, log, and other information for the areas where you are seeing the issue. This general information is described in the sections that follow. For issues with third party software, consult the vendor's external MustGather documentation as appropriate.
The following sections describe how to gather information specific to SPM running on Kubernetes and identify where the logs and traces relevant to this MustGather reside.
SPM and base third party software versions
In an SPM environment the Ant configreport target includes the following versions in its generated zip file:
- SPM version - available in the Installerlogs/Installhistory.txt file within the resulting $SERVER_DIR/config_report.zip file
- Inside the ConfigReporter*.log file of the config folder within the resulting $SERVER_DIR/config_report.zip file you will find the:
  - Java version
  - Application server version
  - Database version
  - Ant version
To generate the $SERVER_DIR/config_report.zip file run the Ant configreport target:
cd $SERVER_DIR
./build.sh configreport
Please provide the config_report.zip file.
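As a quick check before sending it, you can list the archive contents to confirm the expected logs are present (assuming the unzip utility is available):
unzip -l $SERVER_DIR/config_report.zip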
Runtime systems
Operating systems and platforms
Include this information for all relevant platforms used for deployment, such as where you run the Azure CLI or Minikube.
On most systems the following command provides basic platform and operating system information:
uname -a
Kubernetes and related software versions
The following commands display the relevant versions for Docker, Helm and Kubernetes:
docker version   # Provides version information for the Docker client and server
docker info      # Provides general runtime information about Docker
helm version     # Provides Helm version information
# Depending on your environment, obtain the appropriate Kubernetes version information:
minikube version # If applicable
oc version       # If applicable (OpenShift)
kubectl version  # Applies to Minikube, Kubernetes and OpenShift environments
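As a sketch, you can capture all of the above version output in a single file for attachment to a support case (versions.txt is an illustrative name; omit any commands that do not apply to your environment):
{
  uname -a
  docker version
  docker info
  helm version
  kubectl version
} > versions.txt 2>&1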
Helm information
For deployments, gather information relevant or appropriate to your environment and the issue faced.
For Helm use the command appropriate to your Helm version to list the deployments in the environment, which provides the release and chart names needed in later Helm commands:
helm list -a --namespace <name_space>
Verify Helm Chart correctness
Use the helm lint command to verify the correctness of your Helm Charts (prior to installation), for example:
cd $SPM_HOME/helm-charts/spm
helm lint .
You should receive successful output from the helm lint command like this:
==> Linting .
Lint OK
1 chart(s) linted, no failures
If you encounter any errors, correct these, and recreate your issue.
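If you install your charts with an overrides file, it can also be useful to lint with the same overrides so the chart is verified against the values you actually deploy (my-values.yaml is an illustrative name):
helm lint . -f my-values.yaml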
Helm Chart rendering
Running the helm template command processes the chart and generates output similar to what would be generated during deployment; however, the helm install command (see the following examples) provides additional information, notably overrides provided via the install. So, where possible, the output from helm install is preferable to that of helm template.
helm template <release_name> <chart_name> ...
Check the output to verify that the expected values appear as intended.
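For example, to render the chart with your overrides and save the result for inspection or attachment to a case (my-values.yaml and rendered.yaml are illustrative names):
helm template <release_name> <chart_name> -f my-values.yaml > rendered.yaml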
Helm Chart installation
If you encounter errors when installing Helm Charts use the helm install command with the following options to generate additional information:
- --dry-run - checks the generated manifests of a release without installing the chart; the output is similar to helm template but the rendered objects are additionally validated against the cluster
- --debug - enables verbose output
For example:
helm install <release_name> <chart_name> --debug --dry-run ...
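Because the --debug output is typically written to stderr, capture both streams when saving the result (install-dry-run.txt is an illustrative name):
helm install <release_name> <chart_name> --debug --dry-run ... > install-dry-run.txt 2>&1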
Verify Helm Chart installation
The following command provides more information about what has been deployed or is running in Kubernetes.
The helm get all command provides information equivalent to helm install --debug --dry-run. For example:
helm get all <release_name> ...
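For example, to save this information per release (the output file name is illustrative):
helm get all <release_name> --namespace <name_space> > <release_name>-get-all.txt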
Kubernetes deployment information
Querying a Kubernetes environment
If running in a Minikube environment use the following command to confirm its status:
minikube status
Use the following commands to get information about the running Kubernetes environment.
Display basic information about the cluster configuration:
kubectl config view --namespace <name_space> ...
Display basic information about the nodes and pods in the cluster:
kubectl get nodes -o wide --namespace <name_space> ...
kubectl get pods -o wide --namespace <name_space> ...
For each of the pods displayed by the kubectl get pods command, use the pod name as input to the following command to get more detailed information about that pod, for example:
kubectl describe pod <pod_name> --namespace <name_space> ...
A convenient way to run the kubectl describe pod <pod_name> --namespace <name_space> ... command for all the pods in a namespace is as follows:
mkdir pod
pods=`kubectl get pods -o name --namespace <name_space>`
for p in $pods ; do kubectl describe $p --namespace <name_space> > $p.txt ; done
The results will be in the pod folder created by the mkdir command (kubectl get pods -o name prefixes each pod name with pod/).
MustGather to collect logs and other artifacts
The following sections identify how to obtain logs and artifacts for specific areas of Kubernetes.
Externalizing logs and artifacts inside a Kubernetes pod
Externalizing logs and artifacts from a Kubernetes pod varies by the type of pod. The sections that follow identify locations for this information. The kubectl command reference gives several examples of how to externalize files from within a pod, such as:
kubectl cp <name_space>/<pod_name>:/logs /tmp/logs
kubectl exec -n <name_space> <pod_name> -- tar cf - /logs | tar xf - -C /tmp
Note: Both of the above kubectl command formats produce the following spurious, benign error, which you can ignore:
tar: Removing leading `/' from member names
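The same kubectl cp format also works for individual files; for example (messages.log is an illustrative file name):
kubectl cp <name_space>/<pod_name>:/logs/messages.log /tmp/messages.log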
Minikube logs
Use the minikube logs command to obtain logs for the Minikube environment.
You can set the logging level when starting Minikube via the --v option of the minikube start command.
For instance, setting the level and conveniently saving the output at the same time:
minikube start --alsologtostderr --v=3 2>&1 | tee /tmp/minikube-start.log
Note that the log output at startup differs from the runtime log content available via minikube logs.
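Similarly, to capture the runtime logs to a file for a support case (the file name is illustrative):
minikube logs > /tmp/minikube.log 2>&1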
See the Minikube documentation for more information.
stdout logs from pods
The kubectl logs command provides the stdout logs for a specified Kubernetes pod; the log data provided is therefore specific to the type of pod or software running within it.
For instance, if a pod is running WebSphere Liberty this command provides the equivalent of the contents of the WebSphere Liberty console.log file, and for other pods it provides stdout log output specific to that pod’s context; for instance, MQ, Cúram XML Server, or Cúram batch.
The command has various options, such as --follow, --tail, and --since, to control and display the log output. See the kubectl help logs command for option details.
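For example, to capture the most recent hour of a pod's stdout, limited to the last 500 lines, in a file (the option values and file name are illustrative):
kubectl logs <pod_name> --namespace <name_space> --since=1h --tail=500 > <pod_name>-stdout.log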
Third party software logs and artifacts
Beyond basic stdout logging from a pod there may be additional logging or artifacts needed for investigating an issue. These logs and artifacts will be available within the pod’s file system or, in the case of object storage, in the specified Cloud Object Storage (COS) persistent storage location.
Further, the specific logs, artifacts, and their locations vary depending on the software running within the pod.
The following sections identify these various locations.
WebSphere Liberty stdout logs
WebSphere Liberty pods are identified by the pod naming pattern beginning with: <helm_release_name>-apps- or <helm_release_name>-uawebapp-.
Thus, if your Helm release was named “purpleclown” you would find pod names like these from a kubectl get pods command:
purpleclown-apps-curam-consumer-7c578b9b59-t487n
purpleclown-apps-curam-producer-6794fc9b94-4srxr
purpleclown-apps-rest-producer-6858d55bd-x6z6s
purpleclown-apps-rest-consumer-5567ddc4cd-qpwp7
purpleclown-uawebapp-54b56d5866-2mnk5
where the number of pods may vary depending on the applications deployed and the number of replicas specified.
The default log location for WebSphere Liberty is the /logs folder of the pod’s file system and, for example, can be accessed with the following command
kubectl exec --namespace <name_space> -it <pod_name> -- ls -l /logs
which would produce output like:
-rw-r----- 1 default root 62334 Apr 14 06:01 messages.log
-rw-r--r-- 1 default root 2134 Mar 19 06:44 messages_20.04.14_06.00.47.0.log
-rw-r----- 1 default root 973 Apr 14 06:00 trace.log
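To externalize the complete folder for a support case, the kubectl cp form shown earlier applies directly; for example:
kubectl cp <name_space>/<pod_name>:/logs /tmp/<pod_name>-logs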
When using object storage the WebSphere Liberty logs, JVM dumps and garbage collection logs, and Cúram JMX statistics are symbolically linked to the pod’s file system in /tmp/logs and available in your COS bucket.
WebSphere Liberty configuration information
WebSphere Liberty configuration information can be found in the pod’s file system in /liberty/wlp/usr/servers/defaultServer, which contains:
server.xml
jvm.options
server.env
bootstrap.properties
adc_conf/
The WebSphere Liberty server package command can be used to zip up the server; but the logs folder, being stored separately, must be handled directly from the /logs folder.
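A sketch of running server package inside the pod and copying out the result, assuming the Liberty installation root is /liberty/wlp as above (the archive name and --include value are illustrative):
# Package the server configuration (not the separately stored /logs folder)
kubectl exec --namespace <name_space> <pod_name> -- /liberty/wlp/bin/server package defaultServer --archive=/tmp/defaultServer.zip --include=usr
kubectl cp <name_space>/<pod_name>:/tmp/defaultServer.zip /tmp/defaultServer.zip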
MQ logs and artifacts
MQ pods are identified by the pod naming pattern beginning with: <helm_release_name>-mqserver-.
So, if your Helm release was named “purpleclown” you would find pod names like the following from a kubectl get pods command:
purpleclown-mqserver-curam-557d64c76-7sm8r
purpleclown-mqserver-rest-dbb994dc8-4p5cv
where the number of pods may vary depending on the SPM applications deployed.
The location of MQ logs inside the MQ pods is as documented in the IBM MQ documentation. You can interrogate the various log, error, and trace locations identified there (and in its MustGather) by accessing the MQ pod’s shell; for example:
kubectl exec --namespace <name_space> -it <mq_pod_name> -- /bin/bash
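For instance, MQ error logs typically reside under /var/mqm, so a first look might be the following (the exact paths may vary with your MQ configuration):
kubectl exec --namespace <name_space> <mq_pod_name> -- ls -l /var/mqm/errors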
Cúram XML Server logs and artifacts
Cúram XML Server pods are identified by the pod naming pattern beginning with: <helm_release_name>-xmlserver-.
So, if your Helm release was named “purpleclown” you would find pods named like the following from a kubectl get pods command:
purpleclown-xmlserver-86b588c784-z76mz
where the number of pods may vary depending on the number of replicas specified.
Using the kubectl exec command as illustrated will place you into the default XML Server directory /opt/ibm/Curam/xmlserver:
kubectl exec --namespace <name_space> -it <xmlserver_pod_name> -- /bin/bash
The log file in /opt/ibm/Curam/xmlserver/tmp/xmlserver.log is the same as the stdout produced by the kubectl logs command.
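To externalize that log file for a support case, for example:
kubectl cp <name_space>/<xmlserver_pod_name>:/opt/ibm/Curam/xmlserver/tmp/xmlserver.log /tmp/xmlserver.log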
For more information about XML server tracing and logging, see Configuring the XML server in the Social Program Management XML Infrastructure Guide. Gathering this data is beyond the scope of this document.
Cúram batch logs and artifacts
Cúram batch pods are identified by the pod naming pattern beginning with: <helm_release_name>-batch-.
So, if your Helm release was named “purpleclown” you would find pods named like the following from a kubectl get pods command:
purpleclown-batch-1587036600-9rrnw
where the number of pods may vary depending on the number of replicas specified.
Use the kubectl logs command to view the batch pod’s stdout.
For these pods the relevant log files can be found within the /opt/ibm/Curam/release/buildlogs folder of the pod’s file system.
Using the kubectl exec command as illustrated will place you into the pod:
kubectl exec --namespace <name_space> -it <batch_pod_name> -- /bin/bash
The log file is in /opt/ibm/Curam/release/buildlogs/BatchLauncher123456789.log.
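To externalize the whole buildlogs folder, for example:
kubectl cp <name_space>/<batch_pod_name>:/opt/ibm/Curam/release/buildlogs /tmp/buildlogs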
If the persistence volume is enabled to store data, the logs are saved to the Azure blob storage account, in the “logs” directory within the pod’s directory. If persistence is not enabled, the logs are generated by default in the folder indicated above.
Enabling persistence activates the parameter:
-Ddir.bld.log=$MOUNT_POINT/$HOSTNAME/logs
In batch, the -Ddir.bld.log option specifies a system property that defines the path to the directory where the build logs are stored; it is typically set by build commands or automation scripts to configure the log directory dynamically without modifying configuration files. $MOUNT_POINT/$HOSTNAME/logs is a folder created ad hoc to store the logs in the Azure blob storage account.
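As a sketch of confirming where the persisted logs landed, assuming persistence is enabled and that MOUNT_POINT and HOSTNAME are set in the batch pod’s environment:
kubectl exec --namespace <name_space> <batch_pod_name> -- sh -c 'ls -l "$MOUNT_POINT/$HOSTNAME/logs"'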
HTTP Server logs and artifacts
The Apache HTTP Server is only used for hosting SPM static content, so this information is only of interest for issues relating specifically to static content.
HTTP Server pods are identified by the pod naming pattern beginning with: <helm_release_name>-web-.
So, if your Helm release was named “purpleclown” you would find pods named like the following from a kubectl get pods command:
purpleclown-web-5c9cf5d868-ncdqk
where the number of pods may vary depending on the number of replicas specified.
For these pods the relevant log files can be found within the /var/log/httpd folder of the pod’s file system.
For instance:
kubectl exec --namespace <name_space> -it <pod_name> -- ls -l /var/log/httpd/error_log /var/log/httpd/access_log
would show the relevant logs:
-rw-r--r-- 1 default root 7826 Jul 28 09:32 /var/log/httpd/access_log
-rw-r--r-- 1 default root 3297 Jul 28 09:25 /var/log/httpd/error_log
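To externalize these logs for a support case, for example:
kubectl cp <name_space>/<pod_name>:/var/log/httpd /tmp/httpd-logs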