
Atlassian Data Center

03/14/2017

Table of contents

    1. General Information about the Data Center

    What does the Data Center offer?

    A Data Center provides the comfort and functionality of a regular Atlassian server installation with additional features.
    The use of several nodes increases the availability of the system and improves performance for the user. Additional nodes can easily be added to the Data Center, allowing for improved scalability.

    Is the Data Center outage-safe?

    The availability of the system is secured even if one or several nodes fail, since all information is replicated on all nodes. Only if all nodes fail is the system at risk of failing completely. As long as at least one node is working, the system is outage-safe.

    What is a node?

    A Data Center node is an instance of JIRA/Confluence on a server. Normally, only one instance is installed per server, so a node can be equated with a server.
    Distributing the system across several servers increases availability, and adding extra servers makes the system more scalable.

    Where is the Data Center located in the user administration?

    User administration in the Data Center works exactly like in the regular server version: it can be managed via Atlassian Crowd, Active Directory or similar systems.

    How are users distributed to the respective servers?

    Users do not need to know the individual node addresses; the load balancer assigns them to the respective node automatically. The load balancer has its own address. For example, you can make your Data Center available at www.my-data-center.com and the load balancer forwards the user to one of the nodes, each with its own address, without the user noticing the rerouting.
    If a node fails during a session, the user is forwarded to a different node without a loss in availability.

    What is the load balancer and how is it integrated in the Data Center?

    The load balancer intercepts user requests and distributes them between the nodes in the Data Center. To the user it appears as if they were always using the same server, and individual outages do not affect the performance.
    The load balancer is not part of the Data Center itself and has to be installed and configured separately. Apache httpd is one possible option, but other load balancers can be used as well. Since there are many different load balancers and settings to choose from, the system can be designed according to your requirements. One example is using one standard node and assigning the remaining nodes as fallback nodes for outage or overload scenarios.

    Which requirements does the load balancer need to fulfill and what are sticky sessions?

    The single most important requirement for the load balancer is support for sticky sessions. Sticky sessions assign a user to one session for the duration of their activity on the server. That way the user does not need to sign back in with every request and can work in the Data Center as if they were accessing a node directly. A new session is only started if a node fails and the user needs to be redirected to a different one.
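    As an illustration, a minimal sticky-session setup with Apache httpd and mod_proxy_balancer could look like the following sketch. The node addresses, the route names and the matching jvmRoute settings in each node's Tomcat configuration are assumptions and depend on your environment:

    # requires mod_proxy, mod_proxy_http, mod_proxy_balancer and a load-balancing method module
    <Proxy balancer://jiracluster>
        # one BalancerMember per Data Center node; route must match the node's Tomcat jvmRoute
        BalancerMember http://node1.example.com:8080 route=node1
        BalancerMember http://node2.example.com:8080 route=node2
        # sticky sessions via the JSESSIONID cookie, which carries the route suffix
        ProxySet stickysession=JSESSIONID|jsessionid
    </Proxy>
    ProxyPass / balancer://jiracluster/
    ProxyPassReverse / balancer://jiracluster/

    With such a configuration www.my-data-center.com would point at the Apache server, which keeps each user on "their" node until that node becomes unavailable.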

    2. How do I set up the JIRA Data Center?

    There are some prerequisites for setting up the JIRA Data Center.
    The individual steps of the setup are:

    1. Installing a standard JIRA Server
    2. Reconfigure the individual server to a Data Center
    3. Add new nodes.

    The resulting JIRA system is highly performant and easy to scale if the basic rules are followed.

    1. Installing a standard JIRA Server

    If there is no system available yet, the first step in configuring a Data Center is to set up and configure a regular JIRA Server installation.

    The installation can only be expanded to a Data Center if a suitable database is used. The bundled H2 database is not suitable for a Data Center.
    The database server must be reachable by all nodes of the Data Center later in the configuration.

    2. Reconfigure the individual server to a Data Center

    After the successful configuration of the JIRA Server the system can be reconfigured to a Data Center.

    Share network resources

    The resources must be available to all nodes. A shared home folder is created that contains all data important to all nodes; it allows plugins and similar items to be replicated.
    Examples for shared access are NFS and SMB.
    Subsequently, the JIRA server is shut down and the cluster.properties file is created.
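    As a minimal sketch, a shared home exported via NFS could look like this; the host names and paths are placeholders:

    # /etc/exports on the file server: export the shared home to both nodes
    /data/jira-sharedhome   node1.example.com(rw,sync,no_root_squash) node2.example.com(rw,sync,no_root_squash)

    # on each node: mount the share (the mount point must exist)
    mount -t nfs fileserver.example.com:/data/jira-sharedhome /data/jira-sharedhome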

    What is cluster.properties and where do I find it?

    cluster.properties contains all information the JIRA Data Center needs to identify the nodes and enable replication.
    The file is located in the local home directory of each Data Center node, since it has to be configured for each individual node.
    At least two entries are required in the file:

    • jira.node.id

    The identifying name of the node is set here, for example node1

    • jira.shared.home

    The previously shared network resource has to be entered, for example //my-server/my-home-folder
    There are some additional entries that might be necessary for the replication to work correctly:

    • ehcache.listener.hostName

    The address of the node can be added manually if it can't be fetched automatically.

    • ehcache.peer.discovery

    The method for identifying other nodes can be adjusted. Normally it is set to default. If there are issues finding other nodes it can be set to automatic. In that case nodes are identified via multicast, and additional entries are required to set up multicasting.

    • ehcache.multicast.address

    The respective multicast address has to be added, based on the intended range of the multicast. To limit the multicasting to all systems in the subnet, the address 224.0.0.1 is used.

    • ehcache.multicast.port

    The standard port is 40001 and should be adjusted if this port is already used by other services or blocked otherwise.

    • ehcache.multicast.timeToLive

    The default TimeToLive is 32 and can be adjusted according to the size of the network, either to limit unnecessary traffic or to reach all nodes.

    • ehcache.multicast.hostName

    The host name is again the address of the node and should be added manually if it can't be fetched automatically.

    The server can be restarted once the file has been configured successfully; a complete example file is shown below.
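    For orientation, a cluster.properties for the first node might look like the following sketch; the node name, path and addresses are example values:

    # identifies this node within the cluster
    jira.node.id = node1
    # shared home folder reachable by all nodes
    jira.shared.home = //my-server/my-home-folder

    # optional entries, only needed if automatic detection or discovery does not work
    ehcache.listener.hostName = 192.168.0.11
    ehcache.peer.discovery = automatic
    ehcache.multicast.address = 224.0.0.1
    ehcache.multicast.port = 40001
    ehcache.multicast.timeToLive = 32
    ehcache.multicast.hostName = 192.168.0.11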

    Start the first node

    The configuration takes place automatically and the respective files are added to the home directory.
    If an error occurs, it is displayed in the browser when the server is requested.
    A common error during the first start of the Data Center is a missing Data Center license for the server. In that case an option to add the Data Center license is prompted. The server then has to be restarted and should function properly with the next start.

    Load Balancer

    To add new nodes later and to distribute the traffic between all nodes, the load balancer has to be configured accordingly. After installing and configuring the load balancer, its address needs to be added to JIRA as the base URL so that forwarding works if a node is disabled.

    3. Add new nodes

    Once one node has been added successfully, new nodes can be added without much effort.
    The entire process is easier if the existing JIRA installation directory and the local JIRA home directory are copied to the new server. That way the new node hardly differs from the original and the process is faster and easier.

    The only adjustments that need to be made are in cluster.properties in the local home directory. This file identifies the node and defines the communication with the other parts of the Data Center.
    The name and the address of the node have to be changed, as well as the path to the shared network resource if it differs; an example of the changed entries is shown below.
    After adjusting the entries the node can be started and, after a successful start, added to the load balancer.
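    As a sketch, on a second node typically only the node-specific entries change; the values are examples:

    # cluster.properties on the second node
    jira.node.id = node2
    # the shared home stays the same for all nodes
    jira.shared.home = //my-server/my-home-folder
    # node-specific address, if it has to be set manually
    ehcache.listener.hostName = 192.168.0.12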

    From this point on the load balancer distributes users across the individual nodes and thereby increases the productivity and availability of the system. Any change on one node is automatically replicated to the other nodes. This includes, for example, plugins, which are automatically distributed to all nodes and are available to all JIRA users.

    Major problems in the Data Center are mostly due to problems with the load balancer, especially with sticky sessions during the login. The JIRA login may, for example, fail due to errors in the configuration of the sticky sessions.

    3. What is Synchrony and what can it be used for?


    Synchrony is an Atlassian software that works with Confluence. It is used for editing documents collaboratively.
    With Synchrony, edits are forwarded to all connected users and changes can be seen in real time.
    However, there are limitations regarding traceability of changes and version control. For example, individual changes cannot be traced to the respective user; only the user saving the file is assigned to it. Also, while a file is being edited it cannot be restored to an unsaved state; only previously published versions can be restored.

    Synchrony can be installed as a cluster, to ensure availability of the functions. For example, a Synchrony node can be set up on each Confluence node.

    4. How do I set up a Confluence Data Center?

    There are some prerequisites for setting up the Confluence Data Center.
    The individual steps of the setup are:

    1. Setup of the first node
    2. Optional: Synchrony setup
    3. Add new nodes.

    The resulting Confluence system is highly performant and easy to scale if the basic rules are followed.

    1. Setup of the first node

    The first step of setting up a Confluence Data Center is installing Confluence on the first server. In contrast to the installation of a single server, the Data Center license has to be entered at the respective step of the installation. This triggers the correct setup procedure.

    First the name for the cluster needs to be specified, as well as the shared folder that holds the important files for all nodes.
    Then the interface for the communication of the nodes in the cluster needs to be selected.
    Multicast should be selected for detecting new nodes, as it makes adding nodes easier. With TCP/IP, all nodes need to be specified manually and the settings need to be changed for every added node. If multicast is selected, the multicast address is chosen automatically.

    The next steps are the same as for setting up a single server. A database needs to be set up and the data can either be imported from a backup or a new site can be created. It also needs to be decided whether user management is handled via Confluence or via a JIRA system. The last step of the configuration is the creation of an admin account.

    2. Optional: Synchrony setup

    Synchrony offers the possibility to edit files as a group in real time. It is a separate software program which can be installed as a cluster to improve availability.
    Synchrony is supplied with Confluence and can be installed without further prerequisites. The local home directory contains the file synchrony-standalone.jar. This file is copied to the location it will be started from, as is the database driver located in the folder confluence -> WEB-INF -> lib in the installation directory.

    Synchrony can now be started with a start command containing all parameters. Since several parameters are required, a batch/script file can be created.

    The following describes all parameters needed to start Synchrony and their significance. Parameters that are not described should be left at their defaults.

    java
    -Xss2048k
    -Xmx2g
    # The classpath must include synchrony-standalone.jar as well as the database driver, for example:
    -classpath c:/Synchrony/synchrony-standalone.jar;c:/Synchrony/postgresql-9.4.1212.jar
    -Dsynchrony.cluster.impl=hazelcast-btf
    # Port to access Synchrony (default: 8091)
    -Dsynchrony.port=
    # Port for Hazelcast to communicate (default: 5701)
    -Dcluster.listen.port=
    # Port for the node to communicate with other nodes (default: 25500)
    -Dsynchrony.cluster.base.port=

    # If TCP/IP is used to find other nodes the following parameters are required:
    -Dcluster.join.type=tcpip
    # List of nodes to communicate with via TCP/IP
    -Dcluster.join.tcpip.members=

    # If multicast is used to find other nodes the following parameters are required:
    -Dcluster.join.type=multicast
    # Multicast address used for communication (for example 224.0.0.1)
    -Dcluster.join.multicast.group=
    -Dcluster.join.multicast.port=54327
    -Dcluster.join.multicast.ttl=32

    -Dsynchrony.context.path=/synchrony
    # IP/host name of this node within the cluster (needed three times)
    -Dsynchrony.cluster.bind=
    -Dsynchrony.bind=
    -Dcluster.interfaces=

    # URL under which Synchrony is reachable (IP/host name of the Synchrony load balancer)
    -Dsynchrony.service.url=
    # Keys created by Confluence (the same for all nodes). They can be found in confluence.cfg.xml (see below)
    -Djwt.private.key=
    -Djwt.public.key=
    # The URL of the database, for example jdbc:postgresql://mydbserver.com:5432/confluencedb
    -Dsynchrony.database.url=
    # User name and password for accessing the database
    -Dsynchrony.database.username=
    -Dsynchrony.database.password=

    # The following parameters can be left at these dummy values but have to be specified
    -Dip.whitelist=127.0.0.1,localhost
    -Dauth.tokens=dummy
    -Dopenid.return.uri=http://example.com
    -Ddynamo.events.table.name=5
    -Ddynamo.snapshots.table.name=5
    -Ddynamo.secrets.table.name=5
    -Ddynamo.limits.table.name=5
    -Ddynamo.events.app.read.provisioned.default=5
    -Ddynamo.events.app.write.provisioned.default=5
    -Ddynamo.snapshots.app.read.provisioned.default=5
    -Ddynamo.snapshots.app.write.provisioned.default=5
    -Ddynamo.max.item.size=5
    -Ds3.synchrony.bucket.name=5
    -Ds3.synchrony.bucket.path=5
    -Ds3.synchrony.eviction.bucket.name=5
    -Ds3.synchrony.eviction.bucket.path=5
    -Ds3.app.write.provisioned.default=100
    -Ds3.app.read.provisioned.default=100
    -Dstatsd.host=localhost
    -Dstatsd.port=8125
    synchrony.core
    sql

    To start Synchrony, all comments and line breaks need to be removed so that a single command with all parameters is executed.
    After a successful start, Confluence can be set up to use Synchrony. For this, Confluence needs a parameter containing the address of the Synchrony cluster. The best way to add it is via setenv.bat/setenv.sh in the bin directory of the Confluence installation directory. An additional line is added below the line starting with set CATALINA_OPTS:

    set CATALINA_OPTS=-Dsynchrony.service.url=http://<synchrony-load-balancer>/synchrony/v1 %CATALINA_OPTS%
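    On Linux, the equivalent entry in setenv.sh would look roughly like this; the load balancer host is a placeholder:

    CATALINA_OPTS="-Dsynchrony.service.url=http://<synchrony-load-balancer>/synchrony/v1 ${CATALINA_OPTS}"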

    Now the Confluence node can be restarted and the feature can be tested with an admin account.
    The option Collaborative Editing can be found under General Configuration. If this option is enabled and working, Synchrony is configured successfully.

    3. Add new nodes

    To add new nodes to the system the installation folder and the local home folder need to be copied to the new server. To make the copy fully functional, some adjustments need to be made.

    • confluence.cfg.xml

    First, confluence.cfg.xml in the Confluence home folder needs to be adjusted. The entries confluence.cluster.home, confluence.cluster.interface and hibernate.connection.url are modified:

    • confluence.cluster.home

    Specifies the shared home folder. Depending on the configuration it needs to be modified for the new server.

    • confluence.cluster.interface

    The interface selected for the communication within the cluster must be provided. In most cases it differs for each server and has to be modified accordingly.
    Atlassian offers a small Java tool that lists all available interfaces with their respective descriptions; it can be downloaded from the Atlassian documentation.
    The file can be executed with the command "java -jar Listinterfaces-v2.jar" in the terminal.

    • hibernate.connection.url

    The JDBC URL of the database. Depending on the cluster configuration the address needs to be modified. An example fragment with all three entries is shown below.
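    For orientation, the relevant fragment of confluence.cfg.xml might look like the following sketch; the path, interface name and database URL are example values:

    <property name="confluence.cluster.home">/mnt/confluence-sharedhome</property>
    <property name="confluence.cluster.interface">eth0</property>
    <property name="hibernate.connection.url">jdbc:postgresql://mydbserver.com:5432/confluencedb</property>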

    Load Balancer

    The node then has to be added to the load balancer. The procedure depends on the load balancer used. The load balancer address is set as the base URL in Confluence to allow for a successful operation of the Data Center.

    Start the nodes

    The individual nodes can now be started. Atlassian suggests starting the nodes successively and only starting the next node after the previous one is available in the load balancer.
    To test that the cluster works, a new page can be created via one node. Access to and editing of that page can then be tested via the remaining nodes.
    An overview of the cluster nodes can be found in the admin settings. General settings -> Clustering shows all cluster nodes and their respective utilization.

    5. Is Single Sign-On possible with the Data Center?

    Single Sign-On can be used with a Data Center and configured accordingly.
    The time-out for the login session can be configured for each node individually. One node can, for example, use short time-out durations and then forward the next login attempt to a different node. Alternatively, all nodes can use the same time-out duration.

    6. How do I use 2FA with SecSign with a Data Center?

    The SecSign ID 2FA is fully compatible with the JIRA and Confluence Data Center versions. The only prerequisite is the installation of the 2FA plugin, which can only be done with an admin account.
    The plugin can be found in the Atlassian Marketplace, available via the plugin manager. In a Data Center, plugins are automatically replicated to all nodes, making the 2FA available on all servers immediately. The SecSign IDs are also replicated automatically without further adjustments.
    The SecSign ID login uses the same session time-outs that are configured for the regular Atlassian login, which can also be configured individually for each node (see above).
