IBM Cúram SPM Performance Tuning

Tuning the JMS producer pods

Overview

The term JMS producer refers to the way SPM applications (for example, curam, rest, and so on) are structured when deployed in OpenShift, where the client portion of the application is isolated from its server-side functionality. That is, SPM clients produce JMS messages that perform asynchronous operations, and those messages are consumed by the corresponding JMS consumer pods. For more information, see Transaction isolation.

Benefits of this separation include:

  • Tuning that caters to the type of work done in each pod type
  • Separation of JMS/MQ put and get functionality
  • Independent scaling of the JMS producer and JMS consumer pods
  • Isolation of the Kubernetes services

SPM configuration reference overrides

Helm charts allow for flexibility in specifying tuning settings. In SPM deployments, tuning settings can be specified globally, by deployment type (e.g. producer), or by application (e.g. curam).

The list below illustrates this tuning flexibility, where <applicationID> is replaced by the lower-case EAR file basename; that is, in the case of Curam.ear use the value curam:

  • apps.tuningDefaults - global tuning
  • global.apps.config.<applicationID>.producerTuning - dictionary containing tuning values specific to the producer pods for that application
  • global.apps.config.<applicationID> - for the following keys:
    • jvm - Liberty JVM heap and other settings
    • replicaCount - the number of replicas
    • resources - varies by application
    • The various keys from the apps.tuningDefaults dictionary.

Further, apps.tuningDefaults.resources allows for fine tuning of a pod’s resources, overriding global.apps.config.<applicationID>.resources.
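
To make this concrete, the following is a minimal sketch (with illustrative values only) that sets a global thread pool default in the apps.tuningDefaults dictionary and a curam-producer-specific value for the same keys; the more specific producerTuning keys are assumed to take precedence over the global defaults, as described above:

apps:
  tuningDefaults:
    coreThreads: 4        # illustrative global default for all SPM applications
    maxThreads: 4
global:
  apps:
    config:
      curam:
        producerTuning:
          coreThreads: 8  # illustrative override for curam producer pods only
          maxThreads: 8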

Pod replica count

The purpose of specifying a replica count is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

The OpenShift and Kubernetes documentation provide more information about replicas.

The number of replicas in a deployment can be specified globally via apps.replicaCount or per application via global.apps.config.<applicationID>.replicaCount, where <applicationID> is replaced by the lower-case EAR file basename (e.g., curam, citizenportal, rest).

The default is a single replica.

For example, an override file to specify 2 curam replicas and 4 rest replicas:

global:
  apps:
    config:
      curam:
        replicaCount: 2
      rest:
        replicaCount: 4
...

The number of replicas can also be specified at a more granular level for producer and consumer deployments. For example, an override file to specify three curam producer replicas and six curam consumer replicas:

global:
  apps:
    config:
      curam:
        producerTuning:
          replicaCount: 3
        consumerTuning:
          replicaCount: 6
...

Pod requests and limits

The OpenShift and Kubernetes documentation provide more information about requests and limits.

The SPM Helm charts allow cpu and memory requests and limits to be specified. For instance, the following shows the default settings for curam application pods:

global:
  apps:
    config:
      curam:
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:

These requests and limits can also be specified at a more granular level for producer and consumer deployments.

For example:

global:
  apps:
    config:
      curam:
        producerTuning:
          resources:
            limits:
              cpu: 2
              memory: 3584Mi
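
Requests can be set in the same resources block as the limits; the following sketch pairs the limit above with illustrative request values (the numbers are examples, not recommendations):

global:
  apps:
    config:
      curam:
        producerTuning:
          resources:
            limits:
              cpu: 2
              memory: 3584Mi
            requests:
              cpu: 1              # illustrative request values only
              memory: 3584Mi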

Liberty thread pool

The total number of threads that SPM uses in a producer pod can initially be set to (requested_cpu * 2). For example, a producer pod that requests 4 CPUs would start with 8 threads. Setting the number of threads to twice the number of cores is based on experience that processing in SPM is usually split roughly equally between I/O and CPU.

The SPM Helm charts allow for overriding the WebSphere Liberty executor thread pool minimum (coreThreads) and maximum (maxThreads) settings either globally (e.g. apps.tuningDefaults.coreThreads) or per application as per the Initial Tuning Settings.

For example, a tuning specification of 8 threads for curam producer pods:

global:
  apps:
    config:
      curam:
        producerTuning:
          coreThreads: 8
          maxThreads: 8
...

The coreThreads and maxThreads values map to the pod’s WebSphere Liberty configuration in /config/server.xml and the values are populated via /config/server.env.

For example:

<server>
  ...
  <executor coreThreads="${env.EX_CORE_THREADS}" maxThreads="${env.EX_MAX_THREADS}" />
</server>

Liberty JDBC configuration tuning

Data source: jdbc/curamdb

A Social Program Management (SPM) transaction can require two JDBC connections: one for the transaction itself and another for the key server.

Therefore, size the jdbc/curamdb data source connection pool so that more connections are available than the threads that SPM uses can require, which prevents deadlocks; see the maxPoolSize formula in the JDBC configuration tuning section below.
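
For a concrete starting point, assume the 8-thread producer configuration used elsewhere in this document; the maxPoolSize formula in the JDBC configuration tuning section below then gives ( 8 * 2 ) + 1 = 17 connections. A minimal sketch (the value is illustrative):

global:
  apps:
    config:
      curam:
        producerTuning:
          curamdb_maxPoolSize: 17   # ( max_threads (8) * 2 ) + 1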

Data source: jdbc/curamtimerdb

In our application infrastructure, the EJB timer service is used by every SPM transaction, but only once per transaction, at the very start. Currently, the service is not referenced or used again after that point.

You can tune the jdbc/curamtimerdb data source connection pool to be the same size as the number of threads, which ensures that no contention can occur on the pool. However, given that the time spent using the EJB timer service is typically short compared to the duration of a transaction, a smaller pool should work well without significant contention. Our advice is therefore to start with the default size, monitor the system, and increase the size only if there is evidence of significant contention under normal conditions.
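
Continuing the same 8-thread assumption, maxThreads + 1 = 9 is a reasonable ceiling if monitoring does show contention; a minimal sketch, assuming the curamtimerdb_maxPoolSize key is honoured under producerTuning in the same way as the curamdb keys shown later:

global:
  apps:
    config:
      curam:
        producerTuning:
          curamtimerdb_maxPoolSize: 9   # max_threads (8) + 1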

Statement cache size

As a starting value for SPM, set the data source prepared statement cache size for jdbc/curamdb to 1000. Then, monitor the cache use and increase it if discards occur. In our experience, preventing discards can increase throughput by up to 20%.
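
If monitoring does show discards, the cache can be enlarged through the curamdb_statementCacheSize key described in the next section (assuming it is honoured under producerTuning in the same way as the other curamdb keys shown there); a minimal sketch with an illustrative value:

global:
  apps:
    config:
      curam:
        producerTuning:
          curamdb_statementCacheSize: 1500   # illustrative; raise above the 1000 default only if discards occur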

JDBC configuration tuning

The WebSphere Liberty JDBC configuration is tunable for each of these database definitions:

Database definition     Description
curamdb                 Used by SPM applications
curamtimerdb            Used by the SPM timer infrastructure
curamsessdb             Used for WebSphere Liberty’s HTTP session replication

The following yaml keys are provided for tuning the WebSphere Liberty JDBC configuration:

  • maxPoolSize - Maximum number of database connections; Helm chart default: 8

    • curamdb_maxPoolSize = ( ( max_threads * 2 ) + 1 )
    • curamtimerdb_maxPoolSize = ( max_threads + 1 )
    • curamsessdb_maxPoolSize = ( max_threads + 1 )
  • numConnectionsPerThreadLocal - Number of connections to the database to be cached for each thread; Helm chart default: 2

    • curamdb_numConnectionsPerThreadLocal = 2
    • curamtimerdb_numConnectionsPerThreadLocal = 2
    • curamsessdb_numConnectionsPerThreadLocal = 2
  • purgePolicy - Connections to be destroyed in the pool when a stale connection is detected; Helm chart default: EntirePool

    • curamdb_purgePolicy = EntirePool
    • curamtimerdb_purgePolicy = EntirePool
    • curamsessdb_purgePolicy = EntirePool
  • statementCacheSize - Maximum number of cached statements per connection; Helm chart default: 1000

    • curamdb_statementCacheSize = 1000
    • curamtimerdb_statementCacheSize = 1000
    • curamsessdb_statementCacheSize = 1000

The SPM Helm charts allow for overriding the JDBC configuration either globally (e.g., apps.tuningDefaults.curamdb_maxPoolSize) or per application as per the Initial Tuning Settings examples provided. Here we illustrate tuning settings for curamdb in curam producer pods:

global:
  apps:
    config:
      curam:
        producerTuning:
          # Curam Producer Database Settings
          curamdb_maxPoolSize: 17
          curamdb_numConnectionsPerThreadLocal: 2
          curamdb_purgePolicy: EntirePool
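
The same settings can instead be applied globally to every SPM application through the tuningDefaults dictionary; a minimal sketch, assuming the yaml structure follows the apps.tuningDefaults path given above:

apps:
  tuningDefaults:
    curamdb_maxPoolSize: 17
    curamdb_numConnectionsPerThreadLocal: 2
    curamdb_purgePolicy: EntirePool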

The various keys in the preceding list map to the pod’s WebSphere Liberty configuration in /config/adc_conf/server_resources_jdbc.xml and the setting values are populated via /config/server.env. For example, showing the relevant parts of the curamdb configuration:

<server>
  <dataSource id="curamdb" jndiName="jdbc/curamdb"
              statementCacheSize="${env.DS_CURAMDB_CACHE_SIZE}">
    <connectionManager
        maxPoolSize="${env.CM_CURAMDB_MAX_POOL_SIZE}"
        numConnectionsPerThreadLocal="${env.CM_CURAMDB_CONN_PER_THREAD}"
        purgePolicy="${env.CM_CURAMDB_PURGE_POLICY}"
    />
  </dataSource>
</server>

Liberty JMS configuration tuning

The WebSphere Liberty JMS configuration is tunable for the JMS connection manager settings associated with the CuramQueueConnectionFactory.

JMS connection manager tuning

The following JMS connection manager settings, associated with the CuramQueueConnectionFactory, can be tuned:

  • maxPoolSize - Specifies the maximum number of physical connections for the connection pool; yaml default derived from: max_threads + 1; application-specific yaml key: apps.<applicationID>.producerTuning.mqMaxPoolSize
  • minPoolSize - Specifies the minimum number of physical connections for the connection pool; yaml default derived from: max_threads + 1; application-specific yaml key: apps.<applicationID>.producerTuning.mqMinPoolSize
  • numConnectionsPerThreadLocal - Specifies the number of connections to cache for each executor thread; yaml default: 6; application-specific yaml key: apps.<applicationID>.producerTuning.mqNumConnectionsPerThreadLocal
  • maxConnectionsPerThread - Limits the number of open connections on each thread; yaml default: 6; application-specific yaml key: apps.<applicationID>.producerTuning.maxJMSConnectionsPerThread

The keys in the preceding list map to the pod’s WebSphere Liberty configuration in /config/adc_conf/server_resources_messaging.xml and the setting values are populated via /config/server.env, as shown in this WebSphere Liberty configuration fragment:

<server>
  ...
  <connectionManager
      id="ConMgr6"
      maxPoolSize="${env.CM_MQ_MAXPOOLSIZE}"
      minPoolSize="${env.CM_MQ_MINPOOLSIZE}"
      numConnectionsPerThreadLocal="${env.CM_JMS_NUM_CONNECTIONS_PER_THREAD_LOCAL}"
      maxConnectionsPerThread="${env.CM_JMS_MAX_CONNECTIONS_PER_THREAD}"
  />
</server>

The SPM Helm charts allow for overriding the JMS configuration either globally (e.g. apps.tuningDefaults.maxThreads) or per application as per the Initial Tuning Settings examples provided. Here we illustrate tuning settings for the CuramQueueConnectionFactory connection manager in curam producer pods:

global:
  apps:
    config:
      curam:
        producerTuning:
          mqMaxPoolSize: 9
          mqMinPoolSize: 9
          mqNumConnectionsPerThreadLocal: 6
          maxJMSConnectionsPerThread: 6

Liberty JVM heap

WebSphere Liberty JVM options are specified via a yaml array in global.apps.config.<applicationID>.jvm for all pod types of an application or for specific pod types such as producer, via the global.apps.config.<applicationID>.producerTuning dictionary of tuning values.

Start with the following settings:

  • For a given producer pod where memory requests = 3584Mi, tune the JVM heap size by using the following example:
    -Xmx = 2560M
    -Xms = 2560M
    -Xmn = 1536M

Fragment showing JVM settings for the curam JMS producer pods (as distinct from the curam JMS consumer pods):

global:
  apps:
    config:
      curam:
        producerTuning:
          jvm: ['-Xms2560M','-Xmx2560M','-Xmn1536M']

These settings are placed in the pod’s /config/jvm.options file at deployment, for instance:

-Xms2560M
-Xmx2560M
-Xmn1536M
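
If the same heap settings should apply to every pod type of the curam application rather than only the producer pods, the array can instead be set at the application level, as described earlier in this section; a sketch reusing the values from the example above:

global:
  apps:
    config:
      curam:
        jvm: ['-Xms2560M','-Xmx2560M','-Xmn1536M']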

Liberty HTTP session replication

SPM deployed in Kubernetes uses WebSphere Liberty HTTP session replication for failover. This replication is done using the database as the persistence and sharing mechanism for HTTP sessions. In our performance tests we have seen at least an order of magnitude improvement in service times from the REST producer pod by switching the write frequency of the HTTP session replication from “End of Servlet service” to “Time based”.

When using “End of Servlet service”, for each HTTP request arriving at the pod, the HTTP session is first read from the database, then the SPM code is executed, and finally the HTTP session is updated in the database before the HTTP response is sent. When using “Time based”, HTTP requests incur much less overhead because the HTTP sessions are updated in the database asynchronously to the HTTP requests.

Example in context:

<server>
  <httpSessionDatabase
      ...
      skipIndexCreation="false"
      writeFrequency="TIME_BASED_WRITE"
      writeInterval="2m"
  />
</server>