EC2: Java EE Cloud Deployment, Clustering, Session Replication, and Setting up Amazon Load Balancer

This tutorial is a continuation of:

[[Java_EE_Cloud_application_deployment_with_Amazon_EC2|Java EE EC2 Deployment with Resin]]

This covers setting up a Resin cluster in Amazon EC2. Much of the cluster setup would be the same in other environments as well. Resin is the only mainstream Java EE application server with clustering and cloud deployment built in and fully elastic in an EC2 environment. There are no add-ons, hacks, or tricks; Resin was simply designed to work well in the cloud. There are some extra things added for EC2 which apply equally to other Virtualization 2.0 environments like Xen and VMware.



==Overview==

One issue is that IP addresses are ephemeral in EC2: if you restart a server, it loses its IP address. Think of DHCP, but the lease expires instantly if the box is not using it. In a spoke/hub architecture, you need to know how to find the hub. The hub is like a cluster DHCP server: it knows the topology of the cluster.

Some changes in the last few releases of Resin work around these issues by allowing Resin to use public IPs to find Triad members, after which the members exchange private IP addresses. The initial discovery is done through the Amazon Dynamic IP, and then cluster communication happens on the internal Amazon network.

Resin typically discovers the server id by looking up the address combination of the instance. In this case, the local boxes do not know any IP address, so you have to tell Resin what the server id is so it can look up the address for it. The public IP addresses created with Amazon Dynamic IP are hidden from the Amazon AMI instance, i.e., you will not see them with the ifconfig command.
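You can see this from inside an instance by comparing ifconfig with the EC2 instance metadata service. A quick check (the private address below is just a made-up example):

<pre>
# ifconfig only shows the private address
$ /sbin/ifconfig eth0 | grep 'inet addr'
          inet addr:10.201.2.15  Bcast:10.201.2.255  Mask:255.255.255.0

# the metadata service reports both private and public addresses
$ curl -s http://169.254.169.254/latest/meta-data/local-ipv4
10.201.2.15
$ curl -s http://169.254.169.254/latest/meta-data/public-ipv4
23.21.106.227
</pre>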

You need to use the private IP addresses so that you do not incur the additional expense of bandwidth metering from Amazon. You need Resin clustering to have session replication and session failover. You also need clustering set up to have cloud deploy, where you deploy to one Triad member and the deployment is replicated to every server in the cluster.

There are some improvements going into 4.0.31 which will make this configuration even easier. A fair bit of this worked as far back as release 4.0.27, but you will want to use 4.0.31 for new deployments; the directions in this guide closely match 4.0.31 and beyond.


Create two Elastic IP addresses (assuming you are using two machines, both in a single cluster). Use the Amazon Console to create another instance of the server you set up in the first tutorial.

The first three static servers in a cluster make up the Triad.

Before you continue, you may want some more background on how Resin's spoke and hub (Triad) clustering architecture works. There are slide decks and white papers available on Resin's [http://www.caucho.com/resin-application-server/3g-java-clustering-cloud/ cloud and clustering technology, which is optimized for EC2].

If you don't have time to read a whitepaper, but want to get the gist of how Resin deployment and clustering works, I recommend this short video: [http://www.youtube.com/watch?v=vDvoNwFXwdE Resin Clustering and Cloud Deployment].

==Clustering and Session Replication is a Resin Pro feature==

You will need an evaluation license or a full license to use Resin's clustering support. To get an evaluation license, go here: [http://www.caucho.com/about/contact/ Contact].

===Install the license file===

In Resin 4.0.31 and above, to deploy a license locally:

<pre>
$ resinctl license-add --license 7777777.license
</pre>

In Resin 4.0.31 and above, to deploy a license remotely:

<pre>
$ resinctl license-add --license 7777777.license --address 23.21.195.83 --port 8080 --user admin --password mypassword
</pre>

You can also copy the license file to the machine and move it to /etc/resin/licenses. The command line tool is just a convenience for installing the license. If you have problems using it, just copy the license file to /etc/resin/licenses.
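As a sketch, assuming the license file is on your local machine and you log in as ec2-user (the login user depends on your AMI):

<pre>
$ scp 7777777.license ec2-user@23.21.106.227:
$ ssh ec2-user@23.21.106.227
$ sudo mkdir -p /etc/resin/licenses
$ sudo mv ~/7777777.license /etc/resin/licenses/
</pre>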

==Setup Amazon AMI user-data passing the list of Triad members==

Pass the following user-data to each Amazon instance that is running Resin:

<pre>
elastic_cloud_enable : true
home_cluster : app
app.https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
web_admin_enable : true
remote_admin_enable : true
web_admin_external : true
web_admin_ssl : true
app_servers : ext:23.21.106.227 ext:23.21.121.216
cluster_system_key : changeme
</pre>

(New user-data is only visible to an instance after a restart.) Resin's reading of user-data assumes you followed the step in the first tutorial where you set up the ec2.xml file.
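Note that the admin_password value is an SSHA hash, not plain text. resinctl can generate it for you; the exact invocation varies by release, so check resinctl's built-in help. Something along these lines (the hash shown is illustrative only):

<pre>
$ resinctl generate-password admin mypassword
admin_password : {SSHA}Z2VuZXJhdGVkLWhhc2gtZ29lcy1oZXJl
</pre>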

Note that ext:{IPADDRESS} denotes that this is a public IP. Resin will use the public address to ask that server what its private address is. This is where the <code>cluster_system_key</code> comes in: it is the passkey that lets Resin talk to this public address and get its private address.


==Pass Server Id==

To use this, you must pass each instance its server id via home_server in the user-data.

For 23.21.106.227, add app-0 as follows:

<pre>
home_server : app-0
https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
...
</pre>


For 23.21.195.83, add app-1 as follows:

<pre>
home_server : app-1
https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
...
</pre>


==Amazon Load Balancer==

Create an Amazon Load Balancer and add the two instances to it. (Use the smallest possible health-check interval for testing.) Enable sticky sessions with an application cookie, and set the cookie name to JSESSIONID. For more information on how to set up the Amazon Load Balancer, see this [http://docs.amazonwebservices.com/ElasticLoadBalancing/latest/GettingStartedGuide/Welcome.html Amazon Load Balancer tutorial].

Now you have an LB, and session replication just works.
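You can sanity-check stickiness and failover from the command line. This is a sketch; the LB hostname and webapp path below are placeholders for your own:

<pre>
$ LB=my-lb-1234567890.us-east-1.elb.amazonaws.com
$ curl -c cookies.txt http://$LB/hello/        # first request sets JSESSIONID
$ curl -b cookies.txt http://$LB/hello/        # stickiness routes to the same server
</pre>

To test failover, stop the Resin instance the session is stuck to and repeat the second request; with session replication, the session data should survive on the other server.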

==Deploy the war file and show that it is deployed to every server==

Deploying to one server in the cluster will automatically deploy to every server in the cluster.


<pre>
$ resinctl deploy --address 23.21.195.83 --port 8080 --user admin --password mypassword hello.war
</pre>

<pre>
$ resinctl deploy-list --address 23.21.106.227 --port 8080 --user admin --password mypassword
production/webapp/default/hello
</pre>

Go ahead and undeploy it and ensure it is undeployed on every server.
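A sketch of the round trip, assuming the webapp name is hello as deployed above:

<pre>
$ resinctl undeploy --address 23.21.195.83 --port 8080 --user admin --password mypassword hello
$ resinctl deploy-list --address 23.21.106.227 --port 8080 --user admin --password mypassword
</pre>

The deploy-list against the other Triad member should come back empty once the undeploy has propagated.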

==Setting up a third Triad member==

Resin allows up to three Triad members. If you added another machine, you would just duplicate the first server virtual instance again and run another instance. You would also want to change the Amazon AMI user-data to include the new IP address, and make sure you pass the right server id (in this case app-2) via home_server in the user-data.

'''User Data Passed to Resin instances'''

<pre>
home_server : app-0 # home_server : app-1 for Triad member 1, app-2 for Triad member 2
elastic_cloud_enable : true
home_cluster : app
app.https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
web_admin_enable : true
remote_admin_enable : true
web_admin_external : true
web_admin_ssl : true
app_servers : ext:23.21.106.227 ext:23.21.195.83 ext:23.21.121.216
cluster_system_key : changeme
</pre>


It is app-0 for 23.21.106.227, app-1 for 23.21.195.83, and app-2 for 23.21.107.99.

You would have to restart all three servers for [http://docs.amazonwebservices.com/AWSEC2/2011-05-15/UserGuide/index.html?AESDG-chapter-instancedata.html user-data] to be visible. This is a feature/limitation of EC2 and Xen user-data, not of Resin. Amazon AMI instances see the version of the user-data that they were started with, and they do not see a new copy unless they are restarted. If you stored this configuration in S3 or on a shared disk (or NFS mount), Resin could pick up changes and automatically configure the servers. This is typically a non-issue for Resin Triad members because you know ahead of time how many Triad members you are going to have, so user-data is perfect for this. Triad public IPs are fairly static, so it is OK to manage them with static Amazon AMI user-data.
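From inside an instance, you can confirm which copy of the user-data it was started with:

<pre>
$ curl -s http://169.254.169.254/latest/user-data
elastic_cloud_enable : true
home_cluster : app
...
</pre>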

==Setting up a dynamic spoke server==

Beyond the first three servers, all other servers can be dynamic. A dynamic server, or spoke server, talks to the hub (the Triad makes up the hub) and then joins the cluster.


You do not need to edit the xml file; you just need to change the user-data as follows:


'''User Data Passed to Resin instances for 4.0.28 and above'''

<pre>
elastic_cloud_enable : true
#home_server : app-0 ## Don't set home_server for elastic servers
home_cluster : app
https : 8443
admin_user : admin
admin_password : {SSHA}generatethispasswordwithREsinCTL/XJCE
app_servers : ext:23.21.106.227 ext:23.21.195.83 ext:23.21.107.99
cluster_system_key : changeme
</pre>

In order for the Triad to accept dynamic servers, the Triad members need <code>elastic_cloud_enable : true</code> as well as the spoke servers, and the <code>cluster_system_key</code> must match across all servers.
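Starting a spoke should then be just a plain start with no -server argument, since it has no home_server. This is a sketch; depending on your release you may need an extra flag (check resinctl's built-in help):

<pre>
$ sudo resinctl start
</pre>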


==Managing deployments==

Once a spoke server joins, it contacts the Triad and asks for the apps that have been deployed, and then it automatically gets those deployments. A deploy to one Triad server is a deployment to all.

I wrote this small script to demonstrate:

<pre>
$ cat ./deploylist.sh
echo triad 0
resinctl deploy-list --address 23.21.106.227 --port 8080 --user admin --password mypassword
echo triad 1
resinctl deploy-list --address 23.21.121.216 --port 8080 --user admin --password mypassword
echo triad 2
resinctl deploy-list --address 23.21.195.83 --port 8080 --user admin --password mypassword
echo spoke 0
resinctl deploy-list --address 107.22.127.189 --port 8080 --user admin --password mypassword
</pre>

'''Output'''

<pre>
$ ./deploylist.sh
triad 0
production/webapp/default/hello
triad 1
production/webapp/default/hello
triad 2
production/webapp/default/hello
spoke 0
production/webapp/default/hello
</pre>


==Other Cookbooks and Tutorials==

* [[Building a simple listing in JSP]]: covers model 2, Servlets, JSP intro.
* [[Java EE Servlet tutorial : Adding create, update and delete to the bookstore listing]]: covers more interactions.
* [[Java EE Servlet tutorial : Using JSPs to create header, footer area, formatting, and basic CSS for bookstore]].
* [[Java EE Servlet tutorial : Adding MySQL and JDBC to bookstore example]].
* [[Java EE Servlet tutorial : Adding validation and JSP tag files to bookstore example]].
* [[Java EE Servlet tutorial : Adding I18N support to bookstore example]].
* [[Java EE Servlet tutorial : Load testing and health monitoring using bookstore example]].
* [[Java EE Servlet tutorial : Setting up clustering and session replication]].
* [[Java EE Servlet tutorial : Setting up security for bookstore example]].
* [[Java EE Servlet tutorial : File uploads for bookstore example]].
* [[Java EE Servlet tutorial : Using JPA for bookstore example]].
* [[Java EE Servlet tutorial : Using JCache for bookstore example]].
