Resin has the very convenient ability to import configuration from an HTTP URL. Combined with clever use of environment variables, this feature can be a huge help in avoiding the problems caused by maintaining local copies of configuration files across a large environment. In this Wiki article I set up a web server as a centralized configuration repository and show how to modify your local Resin configuration so that the same file can be used by any Resin instance in any environment.
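As a rough illustration, each server's resin.xml might pull its shared settings from such a repository with an import along these lines. This is only a sketch: the host name and file layout are placeholders for your own configuration server, and the host or path would normally be parameterized (for example through a system property or other variable in Resin's config EL) so the same file works in every environment.

```xml
<!-- resin.xml fragment (sketch): import shared settings from a central
     HTTP configuration repository instead of keeping local copies.
     config.example.com and the path are placeholders for your own setup. -->
<resin xmlns="http://caucho.com/ns/resin"
       xmlns:resin="urn:java:com.caucho.resin">

  <resin:import path="http://config.example.com/resin/common-config.xml"/>

  <!-- server- or site-specific settings can still follow locally -->

</resin>
```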
Like other lightweight servlet containers and Java EE application servers, Resin's deployment is file-system based: to deploy an application, all you need to do is copy your .war file to the Resin deployment directory. As you might also know, Resin has supported hot deployment for quite a while, which is a great fit for agile development and its frequent incremental deployments.
This deployment model is simple, effective and popular. However, file-system-based deployment has a few weaknesses that show up in environments with very stringent availability and reliability requirements. Deployment in a clustered environment is difficult because the same file must be deployed simultaneously to all servers in the cluster, which often means down-time that has to be announced beforehand. The file system provides no backup facility, so you usually have to save a copy of the old deployment somewhere yourself. File-system-based deployment also makes it very difficult to use the same server environment for different stages of development, such as QA, user acceptance testing and production, without complicated deployment procedures.
The remote deployment model introduced in Resin 4.0 goes a long way toward solving these problems by supporting clustered, versioned and staged deployment. This blog entry discusses these features in detail.
Because the main purpose of Resin 4.0 was support for dynamic servers, we needed to upgrade our distributed session management to handle the case where servers can appear and disappear frequently. Resin 3.1 session replication relies on a static set of servers to choose backup and triplicate servers. Dynamic servers change that model because a 3.1 session backup might be shut down indefinitely. So we needed a new architecture.
At the same time as we redesigned sessions, I wanted to generalize the distributed store to support standard caching and storage through the javax.cache API, while retaining the scalability and reliability of our original design. The main points of the new architecture:
- A hub of 3 fully-redundant servers for reliability (the triad)
- Other servers dynamically appear and disappear for deployment flexibility
- Updates delivered with lightweight BAM/HMTP messaging
- Cache entry ownership and leasing for performance
- Support for storage (infinite expire), caching (timed expire), and sessions (timed idle invalidation)
- Support for serialized objects (using Hessian) and binary data
The cache architecture is shaped heavily by this messaging model and by the extra complications that distribution introduces.
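To make the storage / caching / session distinction in the list above concrete, here is a minimal sketch written against the javax.cache (JSR-107) API as it was later finalized; the draft API that Resin 4.0 tracked differed in detail, and the cache names and durations are arbitrary.

```java
import java.util.concurrent.TimeUnit;

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import javax.cache.expiry.EternalExpiryPolicy;
import javax.cache.expiry.TouchedExpiryPolicy;

public class CacheLifetimes {
  public static void main(String[] args) {
    CacheManager mgr = Caching.getCachingProvider().getCacheManager();

    // "storage": entries never expire on their own
    Cache<String, String> store = mgr.createCache("store",
        new MutableConfiguration<String, String>()
            .setTypes(String.class, String.class)
            .setExpiryPolicyFactory(EternalExpiryPolicy.factoryOf()));

    // "caching": entries expire a fixed time after they are written
    Cache<String, String> cache = mgr.createCache("cache",
        new MutableConfiguration<String, String>()
            .setTypes(String.class, String.class)
            .setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 10))));

    // "session": entries are invalidated after a period of idleness
    Cache<String, String> session = mgr.createCache("session",
        new MutableConfiguration<String, String>()
            .setTypes(String.class, String.class)
            .setExpiryPolicyFactory(
                TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30))));
  }
}
```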
When you deploy an application from eclipse or ant, Resin propagates the .war updates to the cluster using the triad as the reliable store. The deployment is both incremental and transactional: only changed files need to be sent, and the updates only become visible once they are complete and verified. A partial or incomplete deployment never updates the site. The flow looks like this (a sketch of the ant side follows the steps):
- eclipse/ant sends the .war to triad server A
- Triad server A replicates the .war to triad servers B and C
- The triad updates the rest of the cluster with the new .war file
- Once all servers have received and verified the complete new version, they can restart the web-app with the new content
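On the developer side, the first step is typically driven from a build script. The fragment below is only indicative of what that might look like: the ant task class, attribute names, jar path, host and credentials are assumptions rather than values verified against the Resin 4.0 deployment documentation.

```xml
<!-- build.xml fragment (sketch only): upload a .war to a triad server.
     com.caucho.ant.ResinDeployWar, the attribute names, the jar path,
     the host and the credentials are all placeholders / assumptions. -->
<taskdef name="resin-deploy"
         classname="com.caucho.ant.ResinDeployWar"
         classpath="${resin.home}/lib/resin.jar"/>

<target name="deploy">
  <resin-deploy server="triad-a.example.com" port="8080"
                user="admin" password="secret"
                warFile="dist/hello.war"/>
</target>
```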
With Resin 4.0, we can expose our distributed caching/store capabilities to all developers through the javax.cache interface. Internally, our distributed cache builds on 10 years of work with distributed sessions, so the jcache support isn't really a new capability; it's just newly available to developers.
As an introduction, I want to show the prototypical code sample (it's an injected Map) and the configuration (one XML tag in resin-web.xml). The configuration is simple because Resin's clustered cache is designed for Resin and automatically inherits the cluster and triad configuration from the Resin deployment.
Distributed caching reduces load on the database, improves latency, and can improve reliability by providing a failover capability in a load-balancing configuration. Essentially, you store serialized objects in a Map and the cache makes the Map entries visible across the entire cluster pod automatically.
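A sketch of what that injected-Map style of usage might look like in application code. The cache name, the qualifier, and the value type are illustrative, and the single resin-web.xml tag that declares the named cache is not reproduced here; check the Resin 4.0 caching documentation for the exact element.

```java
import java.io.Serializable;

import javax.cache.Cache;
import javax.inject.Inject;
import javax.inject.Named;

public class QuoteService {
  // Injected clustered cache; "quotes" is a placeholder name and must match
  // whatever the resin-web.xml cache declaration calls it.
  @Inject @Named("quotes")
  private Cache<String, Quote> _cache;

  public Quote findQuote(String symbol) {
    // get() may be served from a locally-leased copy of the entry
    Quote quote = _cache.get(symbol);

    if (quote == null) {
      quote = loadFromDatabase(symbol);
      // put() makes the entry visible to every server in the cluster pod
      _cache.put(symbol, quote);
    }

    return quote;
  }

  private Quote loadFromDatabase(String symbol) {
    // placeholder for the real database lookup
    return new Quote(symbol, 0.0);
  }

  // values are plain Serializable beans; Resin serializes them with Hessian
  public static class Quote implements Serializable {
    private final String symbol;
    private final double price;

    public Quote(String symbol, double price) {
      this.symbol = symbol;
      this.price = price;
    }

    public String getSymbol() { return symbol; }
    public double getPrice() { return price; }
  }
}
```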