23 Oct 2013

"javax.jms.TransactionInProgressException: Cannot rollback() inside an XASession"

The following problem has come up repeatedly among ActiveMQ/Camel/Karaf users so I want to capture it here in this post.

Most of the time the use case is a Camel route deployed into Karaf consuming messages from an ActiveMQ broker using XA transactions (with Aries as the TX manager). A possible example route can be found in the Camel-WMQ-AMQ-XA-TX demo on my fuse-demos github repository. Deploying this demo to Fuse ESB Enterprise 7.1 will immediately reproduce the problem.

When Camel tries to commit the XA transaction it raises the following warning:
14:58:37,063 | WARN  | Consumer[ESB_IN] | PooledSession | 122 - org.apache.activemq.activemq-pool - 5.7.0.fuse-71-047 | 
Caught exception trying rollback() when putting session back into the pool, will invalidate. 
javax.jms.TransactionInProgressException: Cannot rollback() inside an XASession
    at org.apache.activemq.ActiveMQXASession.rollback(ActiveMQXASession.java:76)[125:org.apache.activemq.activemq-core:5.7.0.fuse-71-047]
    at org.apache.activemq.pool.PooledSession.close(PooledSession.java:120)[122:org.apache.activemq.activemq-pool:5.7.0.fuse-71-047]
    at org.springframework.jms.connection.JmsResourceHolder.closeAll(JmsResourceHolder.java:193)
    at org.springframework.jms.connection.ConnectionFactoryUtils$JmsResourceSynchronization.releaseResource(ConnectionFactoryUtils.java:412)
    at org.springframework.jms.connection.ConnectionFactoryUtils$JmsResourceSynchronization.releaseResource(ConnectionFactoryUtils.java:1)
    at org.springframework.transaction.support.ResourceHolderSynchronization.beforeCompletion(ResourceHolderSynchronization.java:72)
    at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerBeforeCompletion(TransactionSynchronizationUtils.java:106)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerBeforeCompletion(AbstractPlatformTransactionManager.java:940)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:738)
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
    at org.apache.aries.transaction.GeronimoPlatformTransactionManager.commit(GeronimoPlatformTransactionManager.java:76)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_65]
    at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_65]
    at org.apache.aries.proxy.impl.ProxyHandler$1.invoke(ProxyHandler.java:54)[13:org.apache.aries.proxy.impl:1.0.0]
    at org.apache.aries.proxy.impl.ProxyHandler.invoke(ProxyHandler.java:119)[13:org.apache.aries.proxy.impl:1.0.0]
    at com.sun.proxy.$Proxy74.commit(Unknown Source)[148:org.springframework.transaction:3.0.7.RELEASE]
    at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:257)[153:org.springframework.jms:3.0.7.RELEASE]
    at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1058)[153:org.springframework.jms:3.0.7.RELEASE]
    at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1050)[153:org.springframework.jms:3.0.7.RELEASE]
    at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:947)[153:org.springframework.jms:3.0.7.RELEASE]
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)[:1.6.0_65]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)[:1.6.0_65]
    at java.lang.Thread.run(Thread.java:695)[:1.6.0_65]

This problem is caused by a bug in activemq-pool version 5.7.0 (and most likely earlier versions). It is tracked under ENTMQ-441.

Version 5.8.0 of activemq-pool already contains the fix. So anyone running into this problem should check whether they are using ActiveMQ 5.7.0 on the client side; if so, upgrade the client to ActiveMQ 5.8.0. JBoss Fuse 6.0 ships ActiveMQ 5.8.0.

If you are running on Fuse ESB Enterprise 7.x and you cannot upgrade to JBoss Fuse 6.0, then a possible solution is to replace the activemq-pool/5.7.0 bundle with version 5.8.0. You can follow these steps:

  • osgi:stop <bundleid of your Camel route doing XA>

  • osgi:list -l -t 0 | grep activemq-pool
    Take note of the bundle id.

  • osgi:uninstall <bundleid>

  • As the 5.8.0 version of activemq-pool is not an OSGi bundle, we need to wrap it accordingly:
      osgi:install 'wrap:mvn:org.apache.activemq/activemq-pool/5.8.0$Bundle-SymbolicName=org.apache.activemq.activemq-pool&Bundle-Version=5.8.0'

  • osgi:refresh <bundleid of your Camel route doing XA>

  • osgi:start <bundleid of your Camel route doing XA>

This should rewire your Camel bundle to use the new activemq-pool jar. If you still see the exception, run packages:imports and verify whether the activemq-pool packages are wired to the new jar file; they may instead be wired to activemq-spring.
In that case I suggest changing the maven-bundle-plugin configuration to explicitly require activemq-pool in version 5.8.0 or higher, e.g. using the following Import-Package instruction

      org.apache.activemq.pool;version="[5.8.0,6)",* 

You will need to rebuild and redeploy the Camel route.
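In the pom.xml this would sit in the maven-bundle-plugin instructions. Here is a sketch (plugin version and any other instructions omitted); note the specific clause has to come before the catch-all '*', since bnd applies the first matching clause:

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- import activemq-pool packages only in version 5.8.0 or higher -->
      <Import-Package>
        org.apache.activemq.pool;version="[5.8.0,6)",
        *
      </Import-Package>
    </instructions>
  </configuration>
</plugin>
```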

2 Aug 2013

Creating ActiveMQ Broker cluster topologies using Fuse Fabric

Fuse Fabric is a very powerful open source integration platform (iPaaS) developed by Red Hat (formerly FuseSource). If you don't know what Fabric is, have a look at the project web site: http://fuse.fusesource.org/fabric/.
Introducing Fabric in this post would simply repeat the documentation on the Fabric web site, so I'll spare you that.

Instead, this blog post will take a deeper dive into the fabric:mq-create command. I hope it will be useful to anyone who uses Fabric and wants to set up highly dynamic broker networks. Let's get right into it...

The fabric:mq-create command allows you to create broker instances that run in their own OSGi containers. For help on general usage type
fabric:mq-create --help

In Fabric every broker has a broker name and is part of a broker group. The broker name is the last argument of the mq-create command and names an individual broker. The group name is specified using the --group argument; if that argument is omitted, the broker is assigned the default group name 'default'. These concepts of broker names and group names are important, as the next sections will show.
fabric:mq-create creates a new Fabric profile that contains the specified broker configuration. The profile name is the same as the broker name, which is specified as the last argument to mq-create. This profile can then easily be deployed to one or more containers.
The fabric:mq-create command is quite powerful and can also be used to create more complex broker topologies. Here is a quick walk-through of some of the most common topologies.

Single Master/Slave Broker Pair

For high availability of a broker one may want to configure a master/slave broker pair. The main idea in Fabric is that two broker instances that use the same group name and the same broker name will form a master/slave pair. Using this idea a master/slave pair can be created using
fabric:mq-create --create-container broker1,broker2 MasterSlaveBroker

This command creates a Fabric profile called MasterSlaveBroker. In this profile it configures an ActiveMQ broker with the same name MasterSlaveBroker. As no --group argument was specified, the group name will be 'default'. The option --create-container broker1,broker2 also deploys this profile to two new OSGi container instances called broker1 and broker2. As both container instances deploy the very same profile, they will instantiate a broker with the same broker name and the same group name and as such form a master/slave pair.
Note: The option --create-container is really optional. It's also possible to first create the Fabric profile using mq-create and then, in a later step, deploy this profile to different containers using the fabric:container-create-* commands.
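For example, the two-step variant of the master/slave setup above could look like this (the container names are illustrative, and the commands assume a root container named 'root'):

```text
fabric:mq-create MasterSlaveBroker
fabric:container-create-child --profile MasterSlaveBroker root broker1
fabric:container-create-child --profile MasterSlaveBroker root broker2
```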

Broker network with two interconnected Master/Slave pairs

Let's extend the previous topology to two broker pairs that are interconnected using network connectors. Each pair consists of a master and a slave broker instance. This topology can be created by invoking

    fabric:mq-create --group network-brokers1 --create-container broker1_1,broker1_2 --networks network-brokers2 --networks-password admin --networks-username admin MasterSlaveBroker1

    fabric:mq-create --group network-brokers2 --create-container broker2_1,broker2_2 --networks network-brokers1 --networks-password admin --networks-username admin MasterSlaveBroker2

These commands create two Fabric profiles. Each profile configures an ActiveMQ broker, named MasterSlaveBroker1 and MasterSlaveBroker2 respectively. Each broker configuration also sets a group name so that the broker instances become part of that group. Further, MasterSlaveBroker1, which is part of the group network-brokers1, configures a network connector to the broker instances in the group network-brokers2, and vice versa.
Using --create-container we instantiate two OSGi containers in each command, which each deploy the relevant Fabric profile and create the broker instances. The two containers in each --create-container argument will form a master/slave broker pair, as they both use the same broker name (either MasterSlaveBroker1 or MasterSlaveBroker2) and the same group name (either network-brokers1 or network-brokers2).
By default the brokers created by mq-create are secured and require authentication, so when configuring the network bridge it is necessary to supply username and password credentials in order to successfully establish it. These are supplied using the arguments --networks-password admin --networks-username admin.
Altogether these two commands create four broker instances, of which only two will be masters; the other two will be slaves. Each master/slave pair belongs to one broker group. A network bridge between the two masters is established in both directions. If one of the master brokers dies or gets shut down, its slave takes over within a few seconds and the network bridges get re-established from and to the new master.

Fully connected Broker Mesh

The above example sets up a master/slave pair for each broker group, where only one instance is active at a time. It's also possible to configure multiple active broker instances within the same group. For a broker instance to be active independently of the other instances, it simply needs a unique broker name within the group. These instances can also be networked using a full mesh topology.
When running
    fabric:mq-create --group BrokerClusterMesh --networks BrokerClusterMesh --create-container MeshBroker1 --networks-password admin --networks-username admin BrokerClusterMesh1

    fabric:mq-create --group BrokerClusterMesh --networks BrokerClusterMesh --create-container MeshBroker2 --networks-password admin --networks-username admin BrokerClusterMesh2

it will again create two broker profiles named BrokerClusterMesh1 and BrokerClusterMesh2. Each profile configures an ActiveMQ broker. Both broker configurations are part of the same group BrokerClusterMesh. Using --create-container, each profile gets deployed to exactly one OSGi container. Since both broker instances have their own broker name configured (BrokerClusterMesh1 and BrokerClusterMesh2), they will both be active broker instances within the same group BrokerClusterMesh. Using --networks BrokerClusterMesh a network bridge is configured in each broker that points to its own group name. In essence this creates a network bridge from each broker to all the other broker instances within the same group and forms a full mesh topology.
In this example only two broker instances get created, so it's a fairly small mesh. However you can easily add another broker to the group, e.g. by running
    fabric:mq-create --group BrokerClusterMesh --networks BrokerClusterMesh --create-container MeshBroker3 --networks-password admin --networks-username admin BrokerClusterMesh3

and all broker instances (the new one that is introduced as well as the two existing instances) will each reconfigure their network connectors to connect to all the other broker instances in this group. So a full mesh of 3 broker instances gets created. This mesh can be expanded with additional instances if needed. Once a new instance is introduced all broker instances reconfigure their network bridges accordingly.
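To see which broker instances have registered in a group (and which ones currently hold the master role), the Fabric shell's cluster listing can be used; the exact output format varies between Fabric versions:

```text
fabric:cluster-list
```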
Note: It's generally not advisable to create large broker meshes (e.g. more than 4 broker instances), as depending on the use case the advisory messages that these broker instances exchange can cause quite some chatter.

Full Broker Mesh with Master/Slave pair on each Broker

Combining the last two use cases, it's also possible to configure a full broker mesh where each broker consists of a master/slave pair. This is achieved using the following commands:
    fabric:mq-create --group BrokerClusterMeshWithSlave --networks BrokerClusterMeshWithSlave --create-container MeshBroker1,MeshBroker2 --networks-password admin --networks-username admin BrokerClusterMeshWithSlave1

    fabric:mq-create --group BrokerClusterMeshWithSlave --networks BrokerClusterMeshWithSlave --create-container MeshBroker3,MeshBroker4 --networks-password admin --networks-username admin BrokerClusterMeshWithSlave2

These commands differ from the previous example only in the --create-container argument. The previous example deployed each broker configuration to only one container; now we deploy each to two containers. The containers within one --create-container argument use exactly the same broker configuration (i.e. the same broker name and the same group name) and therefore form a master/slave broker pair. Each master broker creates a network bridge to all other active broker instances within the same network group.
It's of course possible to add additional master/slave pairs to this broker group if needed, and all active (i.e. master) broker instances will reconfigure their network bridges dynamically as new brokers enter or leave the network group. To add another master/slave broker pair to the mesh you can simply run
    fabric:mq-create --group BrokerClusterMeshWithSlave --networks BrokerClusterMeshWithSlave --create-container MeshBroker5,MeshBroker6 --networks-password admin --networks-username admin BrokerClusterMeshWithSlave3

Additional notes

- Each of the broker profiles created by the above mq-create examples has the profile mq-base as its parent.
- The profile mq-base contains a broker.xml configuration file which serves as the base broker configuration for all the broker instances created above. So you could adjust this broker.xml in mq-base up front (e.g. configure systemUsage limits or destination policy entries), then create your broker instances using mq-create, and they will all pick up this configuration.
- The broker configuration in mq-base does not configure any network bridges. When using mq-create, the network bridge configuration is not stored in broker.xml; instead it is stored in the configuration file org.fusesource.mq.fabric.server-.properties that gets created when running mq-create. This file stores all network connector related configuration using the 'network.' prefix, e.g. (values taken from the first networked example above):

      network=network-brokers2
      network.userName=admin
      network.password=admin

  The network bridge gets created and configured based on these properties, not based on broker.xml! This allows for fairly dynamic configuration changes at runtime.
- The network connector [configuration](http://activemq.apache.org/networks-of-brokers.html) of ActiveMQ allows for further fine-grained configuration. Such settings can be added manually to the configuration file org.fusesource.mq.fabric.server-.properties in each profile by prefixing the property name with 'network.'. E.g. in order to set the property decreaseNetworkConsumerPriority=true on the network connector, one can simply add

      network.decreaseNetworkConsumerPriority=true

  to org.fusesource.mq.fabric.server-.properties. Likewise with all the other network connector properties.
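As an example of tuning the mq-base broker.xml mentioned above, a systemUsage section could be added to the broker element; the limits shown here are purely illustrative, not recommendations:

```xml
<!-- illustrative memory/store/temp limits for the broker.xml in mq-base -->
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="64 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```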

JMS connections are leaked when stopping a Camel route that consumes from JMS using Spring's SingleConnectionFactory

I came across this interesting problem today and want to capture it.
Consider this simple Camel route, consuming from ActiveMQ and logging the message:
<camelContext id="camelContext" xmlns="http://camel.apache.org/schema/spring">
  <route id="jms-consumer4">
    <from uri="amq:queue:Test" />
    <to uri="log:JMSConsumer?level=INFO&amp;showBody=true" />
  </route>
</camelContext>

<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="amq">
  <property name="connectionFactory" ref="singleCF" />
  <property name="useSingleConnection" value="true" />
  <property name="usePooledConnection" value="false" />
  <property name="preserveMessageQos" value="true" />
</bean>

<bean class="org.springframework.jms.connection.SingleConnectionFactory" id="singleCF">
  <property name="targetConnectionFactory" ref="AMQCF" />
  <property name="reconnectOnException" value="true" />
</bean>

<bean class="org.apache.activemq.ActiveMQConnectionFactory" id="AMQCF">
  <property name="userName" value="admin" />
  <property name="password" value="admin" />
  <property name="brokerURL" value="tcp://localhost:61616" />
  <property name="copyMessageOnSend" value="false" />
  <property name="useAsyncSend" value="true" />
</bean>

After deploying this to ServiceMix (JBoss Fuse 6.0 in my test) it works just fine. The problem occurs when stopping the Camel route (via an osgi:stop of the corresponding bundle): the connection to the ActiveMQ broker does not get closed! Drilling into the JMX view of the broker, the connection is still registered. Even worse, each restart and stop of the Camel route leaks another connection.

I did a bit of root cause analysis and found:
When the Camel route is stopped it calls into SingleConnectionFactory.destroy(). This cleans up the internally held ActiveMQConnection. At this stage the connection is properly removed from the broker's JMX view, which you will probably only notice when debugging through the code.
However the Spring JMS listener used by the Camel JMS consumer is still running, detects that the connection is down, and tries to transparently reconnect. This calls into SingleConnectionFactory.createConnection() again (full stack trace below [1]). The SingleConnectionFactory happily opens a new connection to the broker, which then remains open and registered in JMX even though the route gets stopped.

So how to resolve this?

Instead of Spring's SingleConnectionFactory I recommend using ActiveMQ's PooledConnectionFactory. The above configuration then becomes

<bean class="org.apache.activemq.camel.component.ActiveMQComponent" id="amq">
  <property name="connectionFactory" ref="pooledCF" />
  <property name="useSingleConnection" value="true" />
  <property name="usePooledConnection" value="false" />
  <property name="preserveMessageQos" value="true" />
</bean>

<bean class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop" id="pooledCF" init-method="start">
  <property name="connectionFactory" ref="AMQCF" />
  <property name="maxConnections" value="1" />
</bean>

<bean class="org.apache.activemq.ActiveMQConnectionFactory" id="AMQCF">
  <property name="userName" value="admin" />
  <property name="password" value="admin" />
  <property name="brokerURL" value="tcp://localhost:61616" />
  <property name="copyMessageOnSend" value="false" />
  <property name="useAsyncSend" value="true" />
</bean>

When using ActiveMQ's PooledConnectionFactory, things behave pretty much the same, with one subtle but important difference.
Similar to above, stopping the Camel bundle calls into PooledConnectionFactory.stop(). This internally closes all ActiveMQConnections (only one in this example, but potentially more), which also unregisters the connection from the broker's JMX view. Now, Spring's JMS listener used by the Camel JMS consumer is still running, detects the connection closure, and tries to transparently reconnect. This calls into PooledConnectionFactory.createConnection(). That implementation, however, contains the following check:
if (stopped.get()) {
  LOG.debug("PooledConnectionFactory is stopped, skip create new connection.");
  return null;
}
The AtomicBoolean stopped is set to true at this point, so no new connection is established!
Spring's SingleConnectionFactory does not have this logic; it happily reopens a new connection after it got destroyed. Please note that the attributes init-method="start" and destroy-method="stop" on the PooledConnectionFactory bean definition are important, as otherwise you may also leak connections when shutting down your bundles.

[1] Stack trace of the reconnect attempt:
Daemon Thread [Camel (camelContext) thread #21 - JmsConsumer[EwdTest1]] 
(Suspended (breakpoint at line 280 in org.springframework.jms.connection.SingleConnectionFactory))
owns: java.lang.Object (id=9254)
owns: java.lang.Object (id=9255)
owns: java.lang.Object (id=9286)
org.springframework.jms.connection.SingleConnectionFactory.initConnection() line: 280
org.springframework.jms.connection.SingleConnectionFactory.createConnection() line: 225
org.apache.camel.component.jms.DefaultJmsMessageListenerContainer(org.springframework.jms.support.JmsAccessor).createConnection() line: 184
org.apache.camel.component.jms.DefaultJmsMessageListenerContainer(org.springframework.jms.listener.AbstractJmsListeningContainer).createSharedConnection() line: 404
org.apache.camel.component.jms.DefaultJmsMessageListenerContainer(org.springframework.jms.listener.AbstractJmsListeningContainer).refreshSharedConnection() line: 389
org.apache.camel.component.jms.DefaultJmsMessageListenerContainer(org.springframework.jms.listener.DefaultMessageListenerContainer).refreshConnectionUntilSuccessful() line: 869
org.apache.camel.component.jms.DefaultJmsMessageListenerContainer(org.springframework.jms.listener.DefaultMessageListenerContainer).recoverAfterListenerSetupFailure() line: 851
org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run() line: 982
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(java.lang.Runnable) line: 895
java.util.concurrent.ThreadPoolExecutor$Worker.run() line: 918
java.lang.Thread.run() line: 680

9 Apr 2013

How to change colors in the Karaf shell console?

Karaf uses colored output. This is great!
E.g. running log:tail or log:display prints the Karaf log in different colors depending on the log level of the message.

However, sometimes the default colors don't work nicely with your terminal's background.
You could change the background of the terminal, but would it not be nicer to simply reconfigure the color codes used by Karaf?

Of course this is possible.
Simply add something like the following to $KARAF_HOME/etc/org.apache.karaf.log.cfg

# ANSI Colors
fatalColor = 31;1
errorColor = 31;1
warnColor = 35
infoColor = 36
debugColor = 39
traceColor = 39

and restart your Karaf/ServiceMix/Fuse ESB Enterprise/JBoss Fuse container.
How do these values map to actual colors? See http://en.wikipedia.org/wiki/ANSI_escape_code#graphics, in particular the section on colors.

Appending a ";1" to the color value renders the text in bold as well. Other formatting options such as italic or blinking don't seem to be supported.
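To preview what a given code looks like before touching the Karaf config, you can print a sample line with the same escape sequences from any POSIX shell; the ansi helper function below is just for illustration:

```shell
# Print text wrapped in an ANSI SGR sequence: \033[<codes>m ... \033[0m.
# 31;1 = bold red (fatal/error), 35 = magenta (warn), 36 = cyan (info).
ansi() { printf '\033[%sm%s\033[0m\n' "$1" "$2"; }

ansi '31;1' 'ERROR something failed'
ansi '35'   'WARN  check this'
ansi '36'   'INFO  all good'
```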