EC2: Java EE Cloud Deployment, Clustering, Session Replication, and Setting up Amazon Load Balancer
From Resin 4.0 Wiki
This tutorial is a continuation of:
Java EE EC2 Deployment with Resin
This covers setting up a Resin cluster in Amazon EC2. Much of the cluster setup is the same in other environments as well. Resin is the only mainstream Java EE application server with fully elastic clustering and cloud deployment built in that works in an EC2 environment. There are no add-ons, hacks, or tricks: Resin was simply designed to work well in the cloud. A few things were added specifically for EC2, and they apply equally to other Virtualization 2.0 environments like Xen and VMware.
Overview
One issue is that IP addresses are ephemeral in EC2: if you restart a server, it loses its IP address. Think of DHCP, but the lease expires instantly if the box is not using it. In a hub-and-spoke architecture, you need to know how to find the hub. The hub is like a cluster DHCP server: it knows the topology of the cluster.
Changes in the last few releases of Resin work around this issue by allowing Resin to use public IPs to find Triad members, after which the members exchange private IP addresses. The initial discovery is done through the Amazon Elastic IP, and cluster communication traffic then happens on the internal Amazon network.
Resin typically discovers the server id by looking up the address of the instance. In this case, the local box does not know its own public IP address, so you have to tell Resin the server id so it can look up the address for it. Public IP addresses created with Amazon Elastic IP are hidden from the Amazon AMI instance, i.e., you will not see them with the ifconfig command.
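Although an Elastic IP never shows up in ifconfig, the instance can still discover both of its addresses through the standard EC2 instance metadata service. This is only an illustration of the point above, not something Resin requires you to run:

```shell
# Run on the EC2 instance itself. The metadata service is a standard
# link-local EC2 endpoint; these paths are stock EC2, not Resin-specific.
curl -s http://169.254.169.254/latest/meta-data/local-ipv4    # private address
curl -s http://169.254.169.254/latest/meta-data/public-ipv4   # Elastic/public address
```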
You need to use the private IP addresses so that you do not incur the additional expense of bandwidth metering from Amazon. You need Resin clustering for session replication and session failover. You also need clustering set up for cloud deploy, where you deploy to one Triad member and the deployment gets replicated to every server in the cluster.
There are some improvements in 4.0.31 which make this configuration even easier. A fair bit of this worked as far back as release 4.0.27, but you will want to use 4.0.31 for new deployments; the directions in this guide closely match 4.0.31 and beyond.
- Create two Elastic IP addresses (assuming you are using two machines, both in a single cluster).
- Use the Amazon Console to create another instance of the server you set up in the first tutorial.
The first three static servers in a cluster make up the Triad.
Before you continue, you may want some more background on how Resin's hub-and-spoke (Triad) clustering architecture works. There are slide decks and white papers available on Resin's cloud and clustering technology, which is optimized for EC2.
If you don't have time to read a whitepaper but want the gist of how Resin deployment and clustering works, I recommend this short video: Resin Clustering and Cloud Deployment.
Clustering and Session Replication is a Resin Pro feature
You will need an evaluation license or a full license to use Resin's clustering support. To get an evaluation license, go here: Contact.
Install the license file
In Resin 4.0.31 and above, to deploy a license locally:
$ resinctl license-add --license 7777777.license
In Resin 4.0.31 and above, to deploy a license remotely:
$ resinctl license-add --license 7777777.license --address 23.21.195.83 --port 8080 --user admin --password mypassword
You can also copy the file to the machine and then move the license file to /etc/resin/licenses. The command-line tool is just a convenience for installing the license. If you have problems using it, just remember to copy the license to /etc/resin/licenses.
Set up Amazon AMI user-data with the list of Triad members
Pass the following user-data to each Amazon instance that is running Resin:
elastic_cloud_enable : true
home_cluster : app
app.https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
web_admin_enable : true
remote_admin_enable : true
web_admin_external : true
web_admin_ssl : true
app_servers : ext:23.21.106.227 ext:23.21.121.216
cluster_system_key : changeme
(New user-data is only available after a restart.) Resin's reading of user-data assumes you followed the step in the first tutorial where you set up the ec2.xml file.
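The admin_password value above is a hash, not cleartext; the placeholder hints that resinctl generates it for you, and you should use that generated value. For the curious, a sketch of the shape of such a value, assuming the common LDAP SSHA scheme of base64(SHA1(password + salt) + salt):

```shell
#!/bin/sh
# Sketch only: assumes the common LDAP {SSHA} form,
# base64(SHA1(password + salt) + salt). Use the resinctl-generated
# hash in real user-data; password and salt here are made up.
PASS='mypassword'          # hypothetical cleartext password
SALT='ab12'                # 4-byte salt; use random bytes in practice
HASH=$( { printf '%s%s' "$PASS" "$SALT" | openssl dgst -sha1 -binary
          printf '%s' "$SALT"; } | base64 )
echo "{SSHA}$HASH"
```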
Note that ext:{IPADDRESS} denotes a public IP. Resin will use the public address to ask that server what its private address is. This is where the cluster_system_key comes in: it is the passkey that lets Resin talk to the public address and get back the private address.
Pass Server Id
To use this, you must pass the server id.
For 23.21.106.227 add app-0 as follows:
home_server : app-0
https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
...
For 23.21.195.83 add app-1 as follows:
home_server : app-1
https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
...
Amazon Load Balancer
Create an Amazon Load Balancer and add the two instances to it. (Use the smallest possible health-check interval for testing.) Enable sticky cookie support with an application cookie, and set the cookie name to JSESSIONID. For more information on setting up the Amazon Load Balancer, see this Amazon Load Balancer tutorial.
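If you prefer the command line to the console, the same sticky-cookie setup can be sketched with the AWS CLI's classic ELB commands. The load balancer name, policy name, and instance ids below are all hypothetical placeholders:

```shell
# Sketch: classic ELB sticky-cookie setup; all names/ids are made up.
aws elb create-app-cookie-stickiness-policy \
    --load-balancer-name resin-lb \
    --policy-name resin-jsessionid \
    --cookie-name JSESSIONID
aws elb register-instances-with-load-balancer \
    --load-balancer-name resin-lb \
    --instances i-0123456789abcdef0 i-0fedcba9876543210
```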
Now you have a load balancer, and session replication just works.
Deploy the war file and show that it is deployed to every server
Deploying to one server in the cluster will automatically deploy to every server in the cluster.
$ resinctl deploy --address 23.21.195.83 --port 8080 --user admin --password mypassword hello.war
$ resinctl deploy-list --address 23.21.106.227 --port 8080 --user admin --password mypassword
production/webapp/default/hello
Go ahead and undeploy it, and ensure it is undeployed on every server.
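Assuming resinctl's undeploy command mirrors the deploy command above, the round trip would look something like this (run against a live cluster; not runnable standalone):

```shell
# Undeploy from one Triad member; the removal propagates cluster-wide.
resinctl undeploy --address 23.21.195.83 --port 8080 \
    --user admin --password mypassword hello
# Confirm on another member that the app no longer appears.
resinctl deploy-list --address 23.21.106.227 --port 8080 \
    --user admin --password mypassword
```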
Setting up a third Triad member
Resin allows up to three Triad members.
To add another machine, you would just duplicate the first server's virtual instance again and run another instance.
You would also want to change the Amazon AMI user-data to include the new IP address, and make sure you change /etc/init.d/resin to pass the right server id (in this case app-2).
User Data Passed to Resin instances
home_server : app-0
#home_server : app-1 for the second Triad member, app-2 for the third
elastic_cloud_enable : true
home_cluster : app
app.https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
web_admin_enable : true
remote_admin_enable : true
web_admin_external : true
web_admin_ssl : true
app_servers : ext:23.21.106.227 ext:23.21.195.83 ext:23.21.121.216
cluster_system_key : changeme
It is app-0 for 23.21.106.227 and app-1 for 23.21.195.83 and app-2 for 23.21.107.99.
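Since the three Triad members differ only in home_server, it can be handy to generate the per-member user-data from one loop. A minimal sketch, using the IP-to-id mapping above (the file names and the trimmed-down setting list are assumptions; include the full set of settings in practice):

```shell
#!/bin/sh
# Generate one user-data file per Triad member; only home_server differs.
i=0
for ip in 23.21.106.227 23.21.195.83 23.21.107.99; do
  cat > "userdata-app-$i.txt" <<EOF
# user-data for the instance at $ip
home_server : app-$i
elastic_cloud_enable : true
home_cluster : app
app.https : 8443
app_servers : ext:23.21.106.227 ext:23.21.195.83 ext:23.21.107.99
cluster_system_key : changeme
EOF
  i=$((i+1))
done
```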
You would have to restart all three servers for new user-data to be visible. This is a feature/limitation of EC2 and Xen user-data, not of Resin: Amazon AMI instances see the version of the user-data they were started with, and they do not see a new copy unless they are restarted. If you stored this configuration in S3 or on a shared disk (or NFS mount), Resin could pick up changes and automatically configure the servers. This is typically a non-issue for Resin Triad members, because you know ahead of time how many Triad members you are going to have, so user-data is perfect for this. Triad public IPs are fairly static, so it is fine to manage them with static Amazon AMI user-data.
Setting up a dynamic spoke server
Beyond the first three servers, all other servers can be dynamic. A dynamic server or spoke server talks to the hub (the Triad makes up the hub), and then it joins the cluster.
You do not need to edit the xml file; you just need to change the user-data as follows:
User Data Passed to Resin instances for 4.0.28 and above
elastic_cloud_enable : true
#home_server : app-0   ## Don't set home_server for elastic servers
home_cluster : app
https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
app_servers : ext:23.21.106.227 ext:23.21.195.83 ext:23.21.107.99
cluster_system_key : changeme890
For the Triad to accept dynamic servers, the Triad members need elastic_cloud_enable : true as well as the spoke servers.
Managing deployments
Once a spoke server joins, it contacts the Triad and asks for the apps that have been deployed, then automatically pulls those deployments. A deploy to one Triad server is a deployment to all.
I wrote this small script to demonstrate:
$ cat ./deploylist.sh
echo triad 0
resinctl deploy-list --address 23.21.106.227 --port 8080 --user admin --password mypassword
echo triad 1
resinctl deploy-list --address 23.21.121.216 --port 8080 --user admin --password mypassword
echo triad 2
resinctl deploy-list --address 23.21.195.83 --port 8080 --user admin --password mypassword
echo spoke 0
resinctl deploy-list --address 107.22.127.189 --port 8080 --user admin --password mypassword
Output
$ ./deploylist.sh
triad 0
production/webapp/default/hello
triad 1
production/webapp/default/hello
triad 2
production/webapp/default/hello
spoke 0
production/webapp/default/hello
Other Cookbooks and Tutorials
- Building a simple listing in JSP: covers model 2, Servlets, JSP intro.
- Java EE Servlet tutorial : Adding create, update and delete to the bookstore listing: covers more interactions.
- Java EE Servlet tutorial : Using JSPs to create header, footer area, formatting, and basic CSS for bookstore.
- Java EE Servlet tutorial : Adding MySQL and JDBC to bookstore example.
- Java EE Servlet tutorial : Adding validation and JSP tag files to bookstore example.
- Java EE Servlet tutorial : Adding I18N support to bookstore example.
- Java EE Servlet tutorial : Load testing and health monitoring using bookstore example.
- Java EE Servlet tutorial : Setting up clustering and session replication.
- Java EE Servlet tutorial : Setting up security for bookstore example.
- Java EE Servlet tutorial : File uploads for bookstore example.
- Java EE Servlet tutorial : Using JPA for bookstore example.
- Java EE Servlet tutorial : Using JCache for bookstore example.