03/14/2017
A Data Center provides the comfort and functionality of a regular Atlassian application server installation with added features.
Running several nodes increases the availability of the system and improves performance for users. Additional nodes can easily be added to the Data Center, allowing for improved scalability.
The availability of the system is maintained even if one or several nodes fail, since all information is replicated across all nodes. The system is only at risk of failing completely if all nodes fail; as long as at least one node is running, the system remains available.
A Data Center node is an instance of JIRA/Confluence on a server. Normally, only one instance is installed per server, so a node can be equated with a server.
By distributing the system across several servers the availability is increased, and by adding extra servers the system becomes more scalable.
The Data Center user administration works exactly like in the regular server version: it can be managed via Atlassian Crowd, Active Directory or similar systems.
Users do not need to know the individual node addresses; the load balancer distributes them to the respective nodes automatically. The load balancer has its own address. For example, you can make your Data Center available at www.my-data-center.com, and the load balancer forwards the user to a node with a different address without the user noticing the rerouting.
If a node fails during a session, the user is forwarded to a different node without any loss in availability.
The load balancer intercepts user requests and distributes them among the nodes in the Data Center. To the user it seems as if they were always using the same server, and individual outages do not affect performance.
The load balancer is not integrated into the Data Center and has to be installed and configured separately. Apache httpd is one possible choice, but other load balancers can be used as well. Since there are so many different load balancers and settings to choose from, the system can be designed according to your requirements. One example is designating one standard node and assigning the remaining nodes as fallback nodes for outage or overload scenarios.
The single most important requirement for the load balancer is support for sticky sessions. Sticky sessions assign a user to one node for the duration of their activity on the server. That way the user does not need to sign in again with every request and can work in the Data Center as if accessing a node directly. A new session is only started if a node fails and the user needs to be redirected to a different one.
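As a sketch, sticky sessions in Apache httpd with mod_proxy_balancer could be configured like this (the host names and route values are examples, not from the original article; each route value must match the jvmRoute configured on the corresponding Tomcat node):

```apache
# load balancer definition with two member nodes
<Proxy balancer://jiracluster>
    BalancerMember http://node1.example.com:8080 route=node1
    BalancerMember http://node2.example.com:8080 route=node2
</Proxy>
# bind the session cookie to a node so requests stay on the same member
ProxyPass        / balancer://jiracluster/ stickysession=JSESSIONID|jsessionid
ProxyPassReverse / balancer://jiracluster/
```

With this setup, a user whose session cookie carries the route suffix of node1 keeps being forwarded to node1 until that node becomes unavailable.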
There are some prerequisites for setting up the JIRA Data Center.
The individual steps of the setup are described below.
The resulting JIRA system is highly performant and easy to scale if the basic rules are followed.
If there is no system available yet, the first step in configuring a Data Center is to set up and configure a regular JIRA Server installation.
The installation can only be expanded to a Data Center if a suitable database is used. The bundled H2 database is not suitable for a Data Center.
The respective database server must be available to all nodes of the Data Center later in the configuration.
After the successful configuration of the JIRA Server the system can be reconfigured to a Data Center.
The resources must be available to all nodes. A shared home folder is created that contains all important data for all nodes and allows replication of plugins and the like.
Examples for shared access are NFS and SMB.
Subsequently, the JIRA server is shut down and the cluster.properties file is created.
cluster.properties contains all important information for the JIRA Data Center to identify the nodes and enable replication.
The file is located in the local home directory of each Data Center node, since it has to be configured for each individual node.
There are at least two entries required in the file:
jira.node.id: the identifying name of the node, for example node1
jira.shared.home: the previously assigned network resource, for example //my-Server/my-home-folder
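A minimal cluster.properties using the example values above would therefore contain (jira.node.id and jira.shared.home are JIRA Data Center's standard property names for these two entries):

```properties
# identifying name of this node
jira.node.id = node1
# shared network resource holding the shared home folder
jira.shared.home = //my-Server/my-home-folder
```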
There are some additional entries that might be necessary for the replication to work correctly:
The address of the node can be added manually if it can't be fetched automatically.
The method for identifying other nodes can also be adjusted; normally it is set to default. If there are issues finding other nodes, it can be set to automatic. In that case nodes are identified via multicast, and additional entries are required to set up multicasting.
The respective multicast address has to be added, based on the intended range of the multicast. To limit multicasting to all systems in the subnet, the all-hosts address 224.0.0.1 can be used.
The standard port is 40001 and should be adjusted if it is already used by other services or otherwise blocked.
The standard time-to-live is 32 and can be adjusted according to the size of the network, to limit unnecessary traffic or to reach all nodes.
The host name, again, is the address of the node and should be added manually if it can't be fetched automatically.
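Assuming multicast discovery, the corresponding cluster.properties entries might look like this (the ehcache.* key names follow Atlassian's JIRA Data Center documentation; the addresses are examples):

```properties
# enable automatic (multicast) peer discovery instead of the default
ehcache.peer.discovery = automatic
# address of this node, set manually if it cannot be detected
ehcache.listener.hostName = node1.example.com
# multicast address, here limited to the local subnet
ehcache.multicast.address = 224.0.0.1
# multicast port (default 40001), adjust if blocked or already in use
ehcache.multicast.port = 40001
# time-to-live, adjust to the size of the network
ehcache.multicast.timeToLive = 32
# host name of the node for multicast, set manually if needed
ehcache.multicast.hostName = node1.example.com
```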
The server can be rebooted when the file is successfully configured.
The configuration takes place automatically and the respective files are added to the home directory.
If an error occurs, it can be inspected by opening the server address in the browser.
A common error during the first start of the Data Center is a missing Data Center license. An option to add the Data Center license should be prompted; after entering it, the server has to be rebooted and should function properly on the next start.
To add new nodes later and to distribute the traffic between all nodes, the load balancer has to be configured accordingly. After installing and configuring the load balancer, its address needs to be added to JIRA as the base URL so that forwarding works if a node is disabled.
Once one node has been added successfully, further nodes can be added without much effort.
The entire process is easier if the existing JIRA installation, including the JIRA installation directory and the local JIRA home directory, is copied to the new server. That way the replica won't differ much from the original and the process is faster and easier.
The only adjustments that need to be made can be found in cluster.properties in the home directory. This file identifies the node and defines the communication with the other parts of the Data Center.
The name and the address of the node have to be changed, as well as the path to the shared network resource if it differs.
After adjusting the entries the node can be booted and added to the balancer after a successful start.
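Assuming the example values from above, the adjusted cluster.properties of a second node might look like this (the host name is a placeholder):

```properties
# unique name of this node
jira.node.id = node2
# shared network resource, identical for all nodes
jira.shared.home = //my-Server/my-home-folder
# address of this node, adjusted for the new server
ehcache.listener.hostName = node2.example.com
```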
From this point on the load balancer distributes users to the individual nodes, increasing the performance and availability of the system. Any change to one node is automatically replicated to the other nodes. This includes, for example, plugins, which are automatically distributed to all nodes and are available to all JIRA users.
Major problems in the Data Center are mostly due to problems with the load balancer, especially with sticky sessions during login. The JIRA login may, for example, fail due to errors in the sticky-session configuration.
Synchrony can be installed as a cluster to ensure availability of its functions. For example, a Synchrony node can be set up on each Confluence node.
There are some prerequisites for setting up the Confluence Data Center.
The individual steps of the setup are described below.
The resulting Confluence system is highly performant and easy to scale if the basic rules are followed.
The first step of setting up a Confluence Data Center takes place on the first server by installing Confluence. In contrast to a single-server installation, the Data Center license has to be entered at the respective step of the installation. This triggers the correct setup procedure.
First, the name for the cluster needs to be specified, as well as the shared folder that holds the important files for all nodes.
Then, the interface for the communication of the nodes in the cluster needs to be selected.
Multicast should be selected for detecting new nodes, as it makes adding nodes easier. With TCP/IP, all nodes need to be specified manually and the settings need to be changed for every added node. If multicast is selected, the multicast address is chosen automatically.
The next steps are the same as for setting up a single server. A database needs to be installed, and the data can either be imported from a backup or a new site can be created. Also, user management via Confluence or a JIRA system needs to be decided. The last step of the configuration is the creation of an admin account.
Synchrony offers the possibility to edit pages collaboratively in real time. It is a separate software component which can be installed as a cluster to improve availability.
Synchrony ships with Confluence and can be installed without further prerequisites. The local home directory contains the file synchrony-standalone.jar. This file is copied to the location it will be started from, together with the database driver found in confluence/WEB-INF/lib in the installation directory.
Now Synchrony can be booted with a start command including all parameters. Since several parameters are required, a batch/script file can be created.
The following describes all parameters and their significance for starting Synchrony. Parameters that are not described should be left at their defaults.
```shell
java -Xss2048k -Xmx2g \
  # the classpath must include synchrony-standalone.jar as well as the database driver, for example:
  -classpath c:/synchrony/synchrony-standalone.jar;c:/synchrony/postgresql-9.4.1212.jar \
  -Dsynchrony.cluster.impl=hazelcast-btf \
  # port to access Synchrony (default: 8091)
  -Dsynchrony.port= \
  # port for Hazelcast to communicate (default: 5701)
  -Dcluster.listen.port= \
  # port for the node to communicate with other nodes (default: 25500)
  -Dsynchrony.cluster.base.port= \
  # if TCP/IP is used to find other nodes, the following parameters are required:
  -Dcluster.join.type=tcpip \
  # list of nodes to communicate with via TCP/IP
  -Dcluster.join.tcpip.members= \
  # if multicast is used to find other nodes, the following parameters are required:
  -Dcluster.join.type=multicast \
  # multicast address used for communication (for example 230.0.0.1)
  -Dcluster.join.multicast.group= \
  -Dcluster.join.multicast.port=54327 \
  -Dcluster.join.multicast.ttl=32 \
  -Dsynchrony.context.path=/synchrony \
  # IP/host name of the node in the cluster (three times)
  -Dsynchrony.cluster.bind= \
  -Dsynchrony.bind= \
  -Dcluster.interfaces= \
  # URL of the Synchrony load balancer
  -Dsynchrony.service.url= \
  # keys created by Confluence (the same for all nodes); they can be found in confluence.cfg.xml (see below)
  -Djwt.private.key= \
  -Djwt.public.key= \
  # the URL of the database, for example jdbc:postgresql://mydbserver.com:5432/confluencedb
  -Dsynchrony.database.url= \
  # user name and password for accessing the database
  -Dsynchrony.database.username= \
  -Dsynchrony.database.password= \
  # the following parameters can be left at these values but have to be specified
  -Dip.whitelist=127.0.0.1,localhost \
  -Dauth.tokens=dummy \
  -Dopenid.return.uri=http://example.com \
  -Ddynamo.events.table.name=5 \
  -Ddynamo.snapshots.table.name=5 \
  -Ddynamo.secrets.table.name=5 \
  -Ddynamo.limits.table.name=5 \
  -Ddynamo.events.app.read.provisioned.default=5 \
  -Ddynamo.events.app.write.provisioned.default=5 \
  -Ddynamo.snapshots.app.read.provisioned.default=5 \
  -Ddynamo.snapshots.app.write.provisioned.default=5 \
  -Ddynamo.max.item.size=5 \
  -Ds3.synchrony.bucket.name=5 \
  -Ds3.synchrony.bucket.path=5 \
  -Ds3.synchrony.eviction.bucket.name=5 \
  -Ds3.synchrony.eviction.bucket.path=5 \
  -Ds3.app.write.provisioned.default=100 \
  -Ds3.app.read.provisioned.default=100 \
  -Dstatsd.host=localhost \
  -Dstatsd.port=8125 \
  synchrony.core sql
```
To start Synchrony, all comments and line breaks need to be removed so that the command can be issued as a single call with all parameters.
After a successful start, Confluence can be set up to use Synchrony. For this, Confluence needs a parameter with the address of the Synchrony cluster. The best way to add it is via setenv.bat/setenv.sh in the bin directory of the Confluence installation directory. An additional line is added below the line starting with set CATALINA_OPTS:
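In setenv.bat this could look as follows (the host name is a placeholder; on Linux the analogous line goes into setenv.sh):

```bat
rem address of the Synchrony cluster or its load balancer; host name is an example
set CATALINA_OPTS=-Dsynchrony.service.url=http://synchrony.example.com:8091/synchrony/v1 %CATALINA_OPTS%
```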
Now the Confluence node can be rebooted and the function can be tested with an admin-account.
The option Collaborative Editing can be found under General Configuration. If this option is enabled and working, Synchrony is configured successfully.
To add new nodes to the system the installation folder and the local home folder need to be copied to the new server. To make the copy fully functional, some adjustments need to be made.
First, confluence.cfg.xml in the Confluence home folder needs to be adjusted. The entries confluence.cluster.home, confluence.cluster.interface and hibernate.connection.url are modified:
confluence.cluster.home states the shared home folder; depending on the configuration, it needs to be modified for the new server.
confluence.cluster.interface is the interface selected for communication within the cluster; it differs for each server in most cases and has to be modified accordingly.
Atlassian offers a Java tool that lists all available interfaces with their respective descriptions.
The file can be executed with the command "java -jar Listinterfaces-v2.jar" in the terminal.
hibernate.connection.url is the database URL; depending on the cluster configuration, the address needs to be modified.
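Taken together, the adjusted entries in confluence.cfg.xml might look like this sketch (path, interface name and database address are examples, not values from the original article):

```xml
<!-- shared home folder available to all nodes (example path) -->
<property name="confluence.cluster.home">\\my-Server\my-home-folder</property>
<!-- network interface used for cluster communication (example) -->
<property name="confluence.cluster.interface">eth0</property>
<!-- database URL (example) -->
<property name="hibernate.connection.url">jdbc:postgresql://mydbserver.com:5432/confluencedb</property>
```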
The node then has to be added to the load balancer; the procedure depends on the load balancer used. The load balancer address is entered as the base URL in Confluence to complete the Data Center setup.
The individual nodes can now be booted. Atlassian suggests starting the nodes successively and only starting the next node after the previous one is available in the load balancer.
To test the functioning of the cluster, a new page can be created via one node; access and editing can then be tested via the remaining nodes.
An overview of the cluster nodes can be found in the admin settings: General settings -> Clustering shows all cluster nodes and their respective capacity utilization.
Single sign-on can be used with a Data Center and configured accordingly.
The timeout for the login session can be configured for each node individually. One node can, for example, have a short timeout duration and forward the next login attempt to a different node. Alternatively, all nodes can share the same timeout duration.
The SecSign ID 2FA is completely compatible with the JIRA and Confluence Data Center Versions. The only prerequisite is the installation of the 2FA plugin. This can only be done with an admin account.
The plugin can be found in the Atlassian Marketplace, available via the plugin manager. In a Data Center plugins are automatically replicated to all nodes, making the 2FA available to all servers immediately. The SecSign IDs are also replicated automatically without further adjustments.
The SecSign ID login uses the same session timeouts that are configured for the regular Atlassian login, which are also individually configurable for each node (see above).
Want to learn more about SecSign's innovative and highly secure solutions for protecting your user accounts and sensitive data? Use our contact form to submit your information, and a SecSign sales representative will contact you within one business day.
If you need assistance with an existing SecSign account or product installation, please see the FAQs for more information on the most common questions. Can't find the solution to your problem? Don't hesitate to contact the SecSign support team.