WebSphere ESB Topologies (Part 1)
WebSphere Application Server (WAS) has a variety of ways of defining servers and their relationships to each other, often called WAS topologies. Let’s revisit some of the WAS topology concepts from a WebSphere ESB perspective (although much of what you may know or learn about WAS topologies applies equally to WebSphere ESB, and vice versa, because ESB is built on top of WAS).
There is a hierarchy of objects in an ESB topology:
- Cell - This contains one or more nodes. Cells are completely independent of each other from a topology perspective.
- Node - This contains zero or more servers.
- Server - Despite the name, this is not a physical machine but a single server process (we’ll be talking primarily about application servers here). There is a 1:1 mapping between servers and OS processes - each server is a JVM (see the sketch after this list).
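To make the hierarchy concrete, here is a minimal wsadmin Jython sketch that walks it. It assumes a wsadmin session, where the AdminConfig object is predefined - something like `wsadmin.sh -lang jython -f listTopology.py`, with the script name being my own invention:

```python
# A sketch only: assumes it runs inside a wsadmin session, where
# AdminConfig is predefined. AdminConfig.list() returns the matching
# configuration IDs, one per line.
cell = AdminConfig.list('Cell')
print 'Cell: ' + AdminConfig.showAttribute(cell, 'name')

for node in AdminConfig.list('Node').splitlines():
    print '  Node: ' + AdminConfig.showAttribute(node, 'name')
    # Scoping the query to the node returns only that node's servers.
    for server in AdminConfig.list('Server', node).splitlines():
        print '    Server: ' + AdminConfig.showAttribute(server, 'name')
```

Run against a stand-alone profile this should print exactly one node containing one server; run against a deployment manager’s cell it shows every federated node (node agents, described below, show up as servers too).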
An ESB installation can have one or more profiles, each of which defines a node in a topology (in other words, there is a 1:1 mapping between profiles and nodes). A profile is one of three types:
- Stand-alone
- Deployment manager
- Custom
These profiles and the overall topology are orthogonal to ESB installations. An entire topology (with a variety of profile types) can be run from a single installation, or each profile can be part of a separate installation. To further confuse matters, a physical machine can have one or more ESB installations (although typically it only has one).
A stand-alone profile is the easiest to understand. It defines a single node, which exists in a single cell, and that node contains a single default server. If you install ESB using the ‘Complete’ option, you will get a profile of this type created for you - called ‘default’, containing a server called ‘server1’ (the node and cell names will be derived from your machine’s hostname). Administration is done through an administrative console running in that server.
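You can see those default names by pointing wsadmin at the stand-alone server. A small sketch, assuming the server’s SOAP connector is on the usual default port of 8880 (yours may differ):

```python
# Assumes a wsadmin session connected to the stand-alone server, e.g.
#   wsadmin.sh -lang jython -conntype SOAP -port 8880
# (the port varies by installation).
print AdminControl.getCell()   # e.g. somehostNode01Cell
print AdminControl.getNode()   # e.g. somehostNode01
print AdminControl.getHost()   # the machine's hostname
```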
Alternatively, you can set up a more complex configuration. If you create a deployment manager node, you can use it to manage other nodes. Typically those other nodes start out as custom nodes. When you ‘federate’ them to a deployment manager, they become part of the deployment manager’s cell, and a special type of server called a ‘node agent’ is created on the custom profile. Often this federation is done when the profile for that node is created. A cell can contain only one deployment manager.

Federation allows configuration information to be shared between nodes: the administrative console you use is now the one on the deployment manager node, and configuration information is synchronised out to the other nodes on a schedule, or on demand. Resources (for example, JDBC data sources) can be created at ‘cell’, ‘node’, or ‘server’ scope, and are visible only within that scope - a cell-scoped resource can be seen by every server in the cell, while a server-scoped one can be seen only by that server. Note also that application servers need to be created manually on custom nodes - they don’t contain any by default.
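As a sketch of those last two points - creating a server on a custom node, and synchronising on demand - here is roughly what that might look like in a wsadmin Jython session connected to the deployment manager. The node and server names are made up:

```python
# A sketch, assuming a wsadmin Jython session connected to the
# deployment manager. 'myCustomNode' and 'server2' are made-up names.

# Custom nodes contain no application servers by default, so create one
# from the standard 'default' server template.
AdminTask.createApplicationServer('myCustomNode',
                                  '[-name server2 -templateName default]')
AdminConfig.save()

# Push the updated configuration to the node on demand, via the node
# agent's NodeSync MBean.
sync = AdminControl.completeObjectName('type=NodeSync,node=myCustomNode,*')
AdminControl.invoke(sync, 'sync')
```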
I plan to write a Part 2 on this topic soon, covering clustering. Watch this space…