Operating with application knowledge
Original article: http://www.javaworld.com/javaworld/jw-10-2008/jw-10-load-balancing-2.html
Transport-level load balancing (for example, a TCP/IP-based load balancer) is sufficient for static websites, but dynamic sites call for higher-level load-balancing techniques. For instance, if the server-side application has to deal with caching or application session data, support for client affinity becomes an important consideration. This article discusses how server load balancing can be implemented at the application level to address the needs of most dynamic websites.
Intermediate server load balancers
Unlike lower-level load-balancing mechanisms, application-level server load balancing operates with knowledge of the application. As shown in Figure 1, one popular load-balancing architecture combines an application-level load balancer with a transport-level load balancer.
Figure 1. Transport-level and application-level load balancing
The application-level load balancer appears to the transport-level load balancer as just another server: incoming TCP connections are forwarded to it. Once the application-level load balancer receives an application-level request, it decides, based on application-level data, which target server should handle the request and forwards it to that server.
Listing 1 shows an application-level load balancer that uses an HTTP request parameter to decide which backend server to use. In contrast to a transport-level load balancer, it bases the routing decision on the application-level HTTP request, so the unit of forwarding is an HTTP request. Similar to the approach taken by memcached, a hash-key-based partitioning algorithm determines which server to use. Typically a user ID or session ID serves as the partition key, so that requests from the same user are always handled by the same server instance; the user's client becomes affine to (sticks to) one server.
Listing 1. Intermediate application-level load balancer
class LoadBalancerHandler implements IHttpRequestHandler, ILifeCycle {
private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
private HttpClient httpClient;
/*
* this class does not implement server monitoring or healthiness checks
*/
public LoadBalancerHandler(InetSocketAddress... srvs) {
servers.addAll(Arrays.asList(srvs));
}
public void onInit() {
httpClient = new HttpClient();
httpClient.setAutoHandleCookies(false);
}
public void onDestroy() throws IOException {
httpClient.close();
}
public void onRequest(final IHttpExchange exchange) throws IOException {
IHttpRequest request = exchange.getRequest();
// determine the business server based on the id's hashcode
Integer customerId = request.getRequiredIntParameter("id");
int idx = customerId.hashCode() % servers.size();
if (idx < 0) {
idx *= -1;
}
// retrieve the business server address and update the Request-URL of the request
InetSocketAddress server = servers.get(idx);
URL url = request.getRequestUrl();
URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
request.setRequestUrl(newUrl);
// proxy header handling (remove hop-by-hop headers, ...)
// ...
// create a response handler to forward the response to the caller
IHttpResponseHandler respHdl = new IHttpResponseHandler() {
@Execution(Execution.NONTHREADED)
public void onResponse(IHttpResponse response) throws IOException {
exchange.send(response);
}
@Execution(Execution.NONTHREADED)
public void onException(IOException ioe) throws IOException {
exchange.sendError(ioe);
}
};
// forward the request in an asynchronous way by passing over the response handler
httpClient.send(request, respHdl);
}
}
class LoadBalancer {
public static void main(String[] args) throws Exception {
InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
HttpServer loadBalancer = new HttpServer(8080, new LoadBalancerHandler(srvs));
loadBalancer.run();
}
}
In Listing 1, the LoadBalancerHandler reads the HTTP request parameter id and computes its hash. In some cases the load balancer has to read (part of) the HTTP body to satisfy the needs of the balancing algorithm. Based on the result of the modulo calculation, the request is forwarded through the HttpClient instance; for efficiency, the HttpClient reuses the connections to the servers. The response is handled asynchronously through an IHttpResponseHandler. For more about asynchronous, non-blocking HTTP programming, see "Asynchronous HTTP and Comet architectures" (http://www.javaworld.com/javaworld/jw-03-2008/jw-03-asynchhttp.html).
Another intermediate application-level server load-balancing technique is cookie injection. The load balancer checks whether the request contains a dedicated load-balancing cookie. If the cookie is not found, a server is selected by a distribution algorithm such as round-robin, and a load-balancing session cookie is added to the response. When the browser receives this session cookie, it stores it temporarily in memory; the cookie is discarded when the browser is closed. For the rest of the session the browser adds the cookie to all subsequent requests, which are again sent to the load balancer. Because the associated server is encoded in the cookie value, the load balancer can determine which server is responsible for handling the requests of a given browser session. Listing 2 shows a load balancer based on cookie injection.
Listing 2. Cookie-injection-based application-level load balancer
class CookieBasedLoadBalancerHandler implements IHttpRequestHandler, ILifeCycle {
private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
private int serverIdx = 0;
private HttpClient httpClient;
/*
* this class does not implement server monitoring or healthiness checks
*/
public CookieBasedLoadBalancerHandler(InetSocketAddress... realServers) {
servers.addAll(Arrays.asList(realServers));
}
public void onInit() {
httpClient = new HttpClient();
httpClient.setAutoHandleCookies(false);
}
public void onDestroy() throws IOException {
httpClient.close();
}
public void onRequest(final IHttpExchange exchange) throws IOException {
IHttpRequest request = exchange.getRequest();
IHttpResponseHandler respHdl = null;
InetSocketAddress serverAddr = null;
// check if the request contains the LB_SLOT cookie
cl : for (String cookieHeader : request.getHeaderList("Cookie")) {
for (String cookie : cookieHeader.split(";")) {
String[] kvp = cookie.split("=");
if (kvp[0].startsWith("LB_SLOT")) {
int slot = Integer.parseInt(kvp[1]);
serverAddr = servers.get(slot);
break cl;
}
}
}
// request does not contain the LB_SLOT -> select a server
if (serverAddr == null) {
final int slot = nextServerSlot();
serverAddr = servers.get(slot);
respHdl = new IHttpResponseHandler() {
@Execution(Execution.NONTHREADED)
public void onResponse(IHttpResponse response) throws IOException {
// set the LB_SLOT cookie
response.setHeader("Set-Cookie", "LB_SLOT=" + slot + ";Path=/");
exchange.send(response);
}
@Execution(Execution.NONTHREADED)
public void onException(IOException ioe) throws IOException {
exchange.sendError(ioe);
}
};
} else {
respHdl = new IHttpResponseHandler() {
@Execution(Execution.NONTHREADED)
public void onResponse(IHttpResponse response) throws IOException {
exchange.send(response);
}
@Execution(Execution.NONTHREADED)
public void onException(IOException ioe) throws IOException {
exchange.sendError(ioe);
}
};
}
// update the Request-URL of the request
URL url = request.getRequestUrl();
URL newUrl = new URL(url.getProtocol(), serverAddr.getHostName(), serverAddr.getPort(), url.getFile());
request.setRequestUrl(newUrl);
// proxy header handling (remove hop-by-hop headers, ...)
// ...
// forward the request
httpClient.send(request, respHdl);
}
// get the next slot by using the round-robin approach
private synchronized int nextServerSlot() {
serverIdx++;
if (serverIdx >= servers.size()) {
serverIdx = 0;
}
return serverIdx;
}
}
class LoadBalancer {
public static void main(String[] args) throws Exception {
InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
CookieBasedLoadBalancerHandler hdl = new CookieBasedLoadBalancerHandler(srvs);
HttpServer loadBalancer = new HttpServer(8080, hdl);
loadBalancer.run();
}
}
The inconvenience of the cookie-injection approach is that it only works if the browser accepts cookies. If the user deactivates cookies, client affinity is lost.
In general, the drawback of intermediate application-level load-balancer solutions is that they require an additional network hop and additional processing. Integrating a transport-level and an application-level server load balancer into a single device avoids the extra hop, but such devices are expensive and limit the flexibility of accessing application-level data.
HTTP redirect-based server load balancer
One way to avoid the extra network hop is to use the HTTP redirect directive. With a redirect, the server reroutes a client to another location: instead of returning the requested object, it returns a redirect response such as 303 See Other. The client acknowledges the new location and resends the request. Figure 2 shows this architecture.
Figure 2. HTTP redirect-based application-level load balancing
Listing 3 implements an application-level load balancer based on HTTP redirects. Instead of forwarding the request, the load balancer sends a redirect status code that carries an alternate location. According to the HTTP specification, the client then resends the request to that alternate location, and all further requests go directly to the assigned server, so no additional network hops are required.
Listing 3. HTTP redirect-based application-level load balancer
class RedirectLoadBalancerHandler implements IHttpRequestHandler {
private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
/*
* this class does not implement server monitoring or healthiness checks
*/
public RedirectLoadBalancerHandler(InetSocketAddress... realServers) {
servers.addAll(Arrays.asList(realServers));
}
@Execution(Execution.NONTHREADED)
public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
IHttpRequest request = exchange.getRequest();
// determine the business server based on the id's hashcode
Integer customerId = request.getRequiredIntParameter("id");
int idx = customerId.hashCode() % servers.size();
if (idx < 0) {
idx *= -1;
}
// create a redirect response -> status 303
HttpResponse redirectResponse = new HttpResponse(303, "text/html", "<html>....");
// ... and add the location header
InetSocketAddress server = servers.get(idx);
URL url = request.getRequestUrl();
URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
redirectResponse.setHeader("Location", newUrl.toString());
// send the redirect response
exchange.send(redirectResponse);
}
}
class Server {
public static void main(String[] args) throws Exception {
InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
RedirectLoadBalancerHandler hdl = new RedirectLoadBalancerHandler(srvs);
HttpServer loadBalancer = new HttpServer(8080, hdl);
loadBalancer.run();
}
}
The HTTP redirect approach has two weaknesses. First, the whole server infrastructure becomes visible to the client, which can be a security concern when the client is an anonymous client on the Internet; normally the server infrastructure is hidden in order to reduce the attack surface. Second, this approach does nothing to improve availability. Like DNS-based load balancing, it gives the client no way to fail over to another server when one server dies: the client has no easy way to detect the dead server and will keep retrying it. And if the client keeps making calls with the original request URL, the extra network hops are back, because every request first reaches the load balancer and is then redirected to a server.
Server-side server load balancer interceptor
Another way to avoid additional network hops is to move the application-level server load balancer logically to the server side. As Figure 3 shows, the load balancer then becomes an interceptor.
Figure 3. Server-side load balancer interceptor
Listing 4 implements a server-side application-level load-balancer interceptor. The code is almost identical to the first LoadBalancerHandler; the difference is that if the target of a request is the local server, the request is forwarded locally instead of being sent through the HttpClient.
Listing 4. Server-side application-level load-balancer interceptor
class LoadBalancerRequestInterceptor implements IHttpRequestHandler, ILifeCycle {
private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
private InetSocketAddress localServer;
private HttpClient httpClient;
/*
* this class does not implement server monitoring or healthiness checks
*/
public LoadBalancerRequestInterceptor(InetSocketAddress localeServer, InetSocketAddress... srvs) {
this.localServer = localeServer;
servers.addAll(Arrays.asList(srvs));
}
public void onInit() {
httpClient = new HttpClient();
httpClient.setAutoHandleCookies(false);
}
public void onDestroy() throws IOException {
httpClient.close();
}
public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
IHttpRequest request = exchange.getRequest();
Integer customerId = request.getRequiredIntParameter("id");
int idx = customerId.hashCode() % servers.size();
if (idx < 0) {
idx *= -1;
}
InetSocketAddress server = servers.get(idx);
// local server?
if (server.equals(localServer)) {
exchange.forward(request);
// .. no
} else {
URL url = request.getRequestUrl();
URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
request.setRequestUrl(newUrl);
// proxy header handling (remove hop-by-hop headers, ...)
// ...
IHttpResponseHandler respHdl = new IHttpResponseHandler() {
@Execution(Execution.NONTHREADED)
public void onResponse(IHttpResponse response) throws IOException {
exchange.send(response);
}
@Execution(Execution.NONTHREADED)
public void onException(IOException ioe) throws IOException {
exchange.sendError(ioe);
}
};
httpClient.send(request, respHdl);
}
}
}
class Server {
public static void main(String[] args) throws Exception {
RequestHandlerChain handlerChain = new RequestHandlerChain();
InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
handlerChain.addLast(new LoadBalancerRequestInterceptor(new InetSocketAddress("srv1", 8030), srvs));
handlerChain.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
handlerChain.addLast(new MyRequestHandler());
HttpServer httpServer = new HttpServer(8030, handlerChain);
httpServer.run();
}
}
This approach reduces the number of additional network hops. On average, however, the percentage of requests that can be handled locally equals 100 divided by the number of servers; with four servers, for example, only about 25 percent of the requests are handled locally. This means the approach is useful only for a small number of servers.
Client-side server load balancer interceptor
Load-balancing logic equivalent to that of a server-side load-balancer interceptor can also be implemented as an interceptor on the client side. In this case no transport-level load balancer is required at all. Figure 4 shows the architecture.
Figure 4. Client-side load balancer interceptor
Listing 5 adds such an interceptor to the HttpClient. Because the load-balancing code is written as an interceptor, the load-balancing mechanism stays invisible to the client application.
Listing 5. Client-side application-level load balancer interceptor
class LoadBalancerRequestInterceptor implements IHttpRequestHandler, ILifeCycle {
private final Map<String, List<InetSocketAddress>> serverClusters = new HashMap<String, List<InetSocketAddress>>();
private HttpClient httpClient;
/*
* this class does not implement server monitoring or healthiness checks
*/
public void addVirtualServer(String virtualServer, InetSocketAddress... realServers) {
serverClusters.put(virtualServer, Arrays.asList(realServers));
}
public void onInit() {
httpClient = new HttpClient();
httpClient.setAutoHandleCookies(false);
}
public void onDestroy() throws IOException {
httpClient.close();
}
public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
IHttpRequest request = exchange.getRequest();
URL requestUrl = request.getRequestUrl();
String targetServer = requestUrl.getHost() + ":" + requestUrl.getPort();
// handle a virtual address
for (Entry<String, List<InetSocketAddress>> serverCluster : serverClusters.entrySet()) {
if (targetServer.equals(serverCluster.getKey())) {
String id = request.getRequiredStringParameter("id");
int idx = id.hashCode() % serverCluster.getValue().size();
if (idx < 0) {
idx *= -1;
}
InetSocketAddress realServer = serverCluster.getValue().get(idx);
URL newUrl = new URL(requestUrl.getProtocol(), realServer.getHostName(), realServer.getPort(), requestUrl.getFile());
request.setRequestUrl(newUrl);
// proxy header handling (remove hop-by-hop headers, ...)
// ...
IHttpResponseHandler respHdl = new IHttpResponseHandler() {
@Execution(Execution.NONTHREADED)
public void onResponse(IHttpResponse response) throws IOException {
exchange.send(response);
}
@Execution(Execution.NONTHREADED)
public void onException(IOException ioe) throws IOException {
exchange.sendError(ioe);
}
};
httpClient.send(request, respHdl);
return;
}
}
// request address is not a virtual one -> forward the request for standard handling
exchange.forward(request);
}
}
class SimpleTest {
public static void main(String[] args) throws Exception {
// start the servers
RequestHandlerChain handlerChain1 = new RequestHandlerChain();
handlerChain1.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
handlerChain1.addLast(new MyRequestHandler());
HttpServer httpServer1 = new HttpServer(8040, handlerChain1);
httpServer1.start();
RequestHandlerChain handlerChain2 = new RequestHandlerChain();
handlerChain2.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
handlerChain2.addLast(new MyRequestHandler());
HttpServer httpServer2 = new HttpServer(8030, handlerChain2);
httpServer2.start();
// create the client
HttpClient httpClient = new HttpClient();
// ... and add the load balancer interceptor
LoadBalancerRequestInterceptor lbInterceptor = new LoadBalancerRequestInterceptor();
InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("localhost", 8030), new InetSocketAddress("localhost", 8030) };
lbInterceptor.addVirtualServer("customerService:8080", srvs);
httpClient.addInterceptor(lbInterceptor);
// run some tests
GetRequest request = new GetRequest("http://customerService:8080/price?id=2336&amount=5656");
IHttpResponse response = httpClient.call(request);
assert (response.getHeader("X-Cached") == null);
request = new GetRequest("http://customerService:8080/price?id=2336&amount=5656");
response = httpClient.call(request);
assert (response.getHeader("X-Cached").equals("true"));
request = new GetRequest("http://customerService:8080/price?id=2337&amount=5656");
response = httpClient.call(request);
assert (response.getHeader("X-Cached") == null);
request = new GetRequest("http://customerService:8080/price?id=2337&amount=5656");
response = httpClient.call(request);
assert (response.getHeader("X-Cached").equals("true"));
// ...
}
}
The client-side approach promises high efficiency, high availability, and good scalability. Unfortunately, it also has serious drawbacks for Internet-based clients. Like the HTTP redirect-based load balancer, it exposes the entire server infrastructure to the client. In addition, this approach often forces a client-side web application to perform cross-domain calls. For security reasons, web browsers and browser-based containers such as Flash or JavaScript block calls to different domains, which means workarounds have to be implemented on the client side (see http://www.digital-web.com/articles/client_side_load_balancing/).
Client-side load balancing is not restricted to HTTP-based applications. JBoss, for example, supports smart stubs. A stub is an object that is generated by the server and implements the business interface of a remote service; the client performs local calls against the stub. In a load-balanced environment, the server-generated stub acts as an interceptor that knows how to route calls to the appropriate server.
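As an illustration only, the following sketch shows the idea behind such a routing stub. The CustomerService interface, the class names, and the round-robin strategy are assumptions made for this example; this is not JBoss code.
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

interface CustomerService {
    String getCustomerName(int customerId);
}

class LoadBalancingStub implements CustomerService {
    // one remote proxy per backend server; how these proxies are created is out of scope here
    private final List<CustomerService> endpoints;
    private final AtomicInteger next = new AtomicInteger();

    LoadBalancingStub(List<CustomerService> endpoints) {
        this.endpoints = endpoints;
    }

    public String getCustomerName(int customerId) {
        // choose the next server in a round-robin fashion and delegate the local call to it
        int idx = Math.abs(next.getAndIncrement() % endpoints.size());
        return endpoints.get(idx).getCustomerName(customerId);
    }
}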
Application session data support
Application session data represents the state of an application session for a specific user. In classic web applications, application session data is stored on the server side, as shown in Listing 6.
Listing 6. Session-based server
class MySessionBasedRequestHandler implements IHttpRequestHandler {
@SynchronizedOn(SynchronizedOn.SESSION)
public void onRequest(IHttpExchange exchange) throws IOException {
IHttpRequest request = exchange.getRequest();
IHttpSession session = exchange.getSession(true);
//..
Integer countRequests = (Integer) session.getAttribute("count");
if (countRequests == null) {
countRequests = 1;
} else {
countRequests++;
}
session.setAttribute("count", countRequests);
// and return the response
exchange.send(new HttpResponse(200, "text/plain", "count=" + countRequests));
}
}
class Server {
public static void main(String[] args) throws Exception {
HttpServer httpServer = new HttpServer(8030, new MySessionBasedRequestHandler());
httpServer.run ();
}
}
In Listing 6, the application session data is accessed through the getSession(...) method. If true is passed as an argument and no session exists yet, a new session is created. Following the Servlet API, a cookie named JSESSIONID is then sent to the client; its value is a unique session ID that identifies the session object held on the server side. When subsequent client requests arrive, the server uses the Cookie header of the request to look up the associated session object. To support clients that do not accept cookies, URL rewriting can be used for session tracking: every local URL of a response page is dynamically rewritten to include the session ID.
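As a rough sketch of the URL-rewriting idea, a local URL can be rewritten to carry the session ID as a path parameter. The helper below is hypothetical; real containers do this through their own API, for example the Servlet API's response.encodeURL().
class UrlRewriter {
    // appends the session ID to a local URL so that the next request carries the ID
    // even if the browser does not accept cookies
    static String encodeUrl(String localUrl, String sessionId) {
        int queryIdx = localUrl.indexOf('?');
        if (queryIdx == -1) {
            return localUrl + ";jsessionid=" + sessionId;
        }
        // the path parameter has to be placed in front of the query string
        return localUrl.substring(0, queryIdx) + ";jsessionid=" + sessionId + localUrl.substring(queryIdx);
    }

    public static void main(String[] args) {
        System.out.println(encodeUrl("/shop/cart?item=42", "A3C9F0"));
        // prints: /shop/cart;jsessionid=A3C9F0?item=42
    }
}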
In contrast to cached data, application session data is not redundant: if a server crashes, the application session data it holds is lost. To prevent this, the session data must either be stored in a central location or be replicated among the servers involved.
If the data is replicated, normally all the servers involved hold the application data of all sessions. For this reason the replication approach scales only for small sets of servers: server memory is limited, and every update has to be replicated to all the other servers. To support larger installations, the servers have to be partitioned into smaller groups. The central-store approach, in contrast, keeps the session data in one place, such as a database, a file system, or in-memory session servers.
In general, handling application session data does not force clients to be affine to a server. With the replication approach, all servers normally hold the application session objects, and whenever session data is modified, the change has to be replicated to all servers. With the central-store approach, the application session data is fetched before the request is processed, and when the response is sent, changes to the session data are written back to the central store. The store therefore has to be highly available and becomes a hot spot of the overall system: if it is unavailable, the servers cannot process requests.
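A schematic sketch of this non-affine, central-store request flow follows. The SessionStore interface and the surrounding class are made up for illustration and do not belong to any particular library.
import java.util.Map;

interface SessionStore {
    Map<String, Object> load(String sessionId);            // fetch the session data from the shared store
    void save(String sessionId, Map<String, Object> data); // write the modified session data back
}

class CentralStoreRequestFlow {
    private final SessionStore store;

    CentralStoreRequestFlow(SessionStore store) {
        this.store = store;
    }

    String handle(String sessionId) {
        // 1. read the application session data before the request is processed
        Map<String, Object> session = store.load(sessionId);

        // 2. process the request against the session data (application logic goes here)
        int count = (Integer) session.getOrDefault("count", 0) + 1;
        session.put("count", count);

        // 3. write the changes back before the response is sent; if the store is
        //    unavailable, the request cannot be completed
        store.save(sessionId, session);
        return "count=" + count;
    }
}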
However, the locality gained through client affinity makes it much easier to synchronize concurrent requests that belong to the same session (for background, see http://www.ibm.com/developerworks/library/j-jtp09238.html). Moreover, if the client is affine to a server, a number of more efficient techniques become possible. For example, if a session server is used, it can be reduced to the role of a backup store. Figure 5 shows this architecture. Typically, the session ID is used as the load-balancing key in architectures of this kind.
When the response is written, modifications to the application session data are also written to the session server. In contrast to the non-affine approach, the servers read application session data from the session server only in the event of a failover.
Listing 7 implements this behavior on top of the ISessionManager interface defined by the xLightweb HTTP library.
Listing 7. Session management
class BackupBasedSessionManager implements ISessionManager {
private ISessionManager delegee = null;
private HttpClient httpClient = null;
public BackupBasedSessionManager(HttpClient httpClient, ISessionManager delegee) {
this.httpClient = httpClient;
this.delegee = delegee;
}
public boolean isEmtpy() {
return delegee.isEmtpy();
}
public String newSession(String idPrefix) throws IOException {
return delegee.newSession(idPrefix);
}
public void registerSession(HttpSession session) throws IOException {
delegee.registerSession(session);
}
public HttpSession getSession(String sessionId) throws IOException {
HttpSession session = delegee.getSession(sessionId);
// session not available? -> try to get it from the backup location
if (session == null) {
String id = URLEncoder.encode(sessionId);
IHttpResponse response = httpClient.call(new GetRequest("http://sessionservice:8080/?id=" + id));
if (response.getStatus() == 200) {
try {
byte[] serialized = response.getBlockingBody().readBytes();
ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(serialized));
session = (HttpSession) in.readObject();
registerSession(session);
} catch (ClassNotFoundException cnfe) {
throw new IOException(cnfe);
}
}
}
return session;
}
public void saveSession(String sessionId) throws IOException {
delegee.saveSession(sessionId);
HttpSession session = delegee.getSession(sessionId);
ByteArrayOutputStream bos = new ByteArrayOutputStream() ;
ObjectOutputStream out = new ObjectOutputStream(bos) ;
out.writeObject(session);
out.close();
byte[] serialized = bos.toByteArray();
String id = URLEncoder.encode(session.getId());
PostRequest storeRequest = new PostRequest("http://sessionservice:8080/?id=" + id + "&ttl=600", "application/octet-stream", serialized);
httpClient.send(storeRequest, null); // send the store request asynchronously and ignore the result
}
public void removeSession(String sessionId) throws IOException {
delegee.removeSession(sessionId);
String id = URLEncoder.encode(sessionId);
httpClient.call(new DeleteRequest("http://sessionservice:8080/?id=" + id));
}
public void close() throws IOException {
delegee.close();
}
}
class Server {
public static void main(String[] args) throws Exception {
// set the server's handler
HttpServer httpServer = new HttpServer(8030, new MySessionBasedRequestHandler());
// create a load balanced http client instance
HttpClient sessionServerHttpClient = new HttpClient();
LoadBalancerRequestInterceptor lbInterceptor = new LoadBalancerRequestInterceptor();
InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("sessionSrv1", 5010), new InetSocketAddress("sessionSrv2", 5010)};
lbInterceptor.addVirtualServer("sessionservice:8080", srvs);
sessionServerHttpClient.addInterceptor(lbInterceptor);
// wrap the local built-in session manager by backup aware session manager
ISessionManager nativeSessionManager = httpServer.getSessionManager();
BackupBasedSessionManager sessionManager = new BackupBasedSessionManager(sessionServerHttpClient, nativeSessionManager);
httpServer.setSessionManager(sessionManager);
// start the server
httpServer.start();
}
}
Figure 5. Application session data support based on a backup session server
In Listing 7, the BackupBasedSessionManager is responsible for managing the sessions on the server side. It implements the ISessionManager interface in order to intercept the container's session management. If a session is not found locally, the BackupBasedSessionManager tries to fetch it from the session server, which only happens after a server failover. If the session state changes, the BackupBasedSessionManager's saveSession() method is called so that the session is also stored on the backup session server. A client-side server load-balancing approach can be used to access the session servers.
Apache Tomcat load balancing architectures
Why has the Java Servlet API not been mentioned so far? In contrast to an HTTP library such as xLightweb, the Servlet API is designed as a purely synchronous, blocking API. The missing asynchronous, non-blocking support makes implementations based on the Servlet API inefficient for these purposes. This holds for the intermediate load-balancer approach as well as for the server-side approaches. Client-side, interceptor-based load balancing falls outside the Servlet API anyway, because it is a server-side API.
What can be done on top of the Servlet API is an HTTP-forwarding server load balancer. Tomcat 5 shipped with such a web application, called balancer (it is no longer part of the Tomcat 6 distribution).
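To give an impression of what a Servlet-based forwarding balancer could look like, here is a minimal sketch of my own; it is not the Tomcat balancer application. It handles only GET requests and blocks a worker thread for the full duration of each backend call, which is exactly the limitation discussed above.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ForwardingBalancerServlet extends HttpServlet {
    // backend servers are hard-coded for the sake of the example
    private static final String[] SERVERS = { "http://srv1:8030", "http://srv2:8030" };
    private final AtomicInteger next = new AtomicInteger();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // select the backend server in a round-robin fashion
        String server = SERVERS[Math.abs(next.getAndIncrement() % SERVERS.length)];
        String query = (req.getQueryString() != null) ? "?" + req.getQueryString() : "";
        URL target = new URL(server + req.getRequestURI() + query);

        // blocking call to the backend: the servlet thread is occupied until the response arrives
        HttpURLConnection con = (HttpURLConnection) target.openConnection();
        resp.setStatus(con.getResponseCode());
        if (con.getContentType() != null) {
            resp.setContentType(con.getContentType());
        }
        try (InputStream in = con.getInputStream(); OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[4096];
            for (int read; (read = in.read(buf)) != -1; ) {
                out.write(buf, 0, read);
            }
        }
    }
}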
A popular way of load balancing Tomcat is to use the Apache HTTP Server as the web server and dispatch requests to one of the Tomcat instances over the Apache Tomcat Connector (AJP) protocol, as shown in Figure 6.
Figure 6. A popular Apache Tomcat architecture
By using Apache's mod_proxy_balancer module, the web server acts as an application-level server load balancer. Client affinity is implemented based on the JSESSIONID cookie or path parameter.
The server's response is modified so that the routing information needed to identify the target server is appended to the JSESSIONID value. When the client sends follow-up requests, the routing information is extracted from the JSESSIONID value of the request, and the request is forwarded to the corresponding target server.
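The following sketch illustrates only the principle of this sticky-session routing. The "sessionid.route" format follows Tomcat's jvmRoute convention; the worker names and backend addresses are assumptions for the example, not mod_proxy_balancer's actual implementation.
import java.util.HashMap;
import java.util.Map;

class StickySessionRouter {
    private final Map<String, String> workers = new HashMap<String, String>(); // route name -> backend address

    void addWorker(String route, String backendAddress) {
        workers.put(route, backendAddress);
    }

    // extracts the route suffix that the backend appended to the JSESSIONID value
    String selectBackend(String jsessionId) {
        int dot = jsessionId.lastIndexOf('.');
        if (dot != -1) {
            String backend = workers.get(jsessionId.substring(dot + 1));
            if (backend != null) {
                return backend; // the client stays affine to the server that created the session
            }
        }
        return null; // no (valid) route -> let the balancer pick a new worker
    }
}

class RouterDemo {
    public static void main(String[] args) {
        StickySessionRouter router = new StickySessionRouter();
        router.addWorker("tomcat1", "ajp://srv1:8009");
        router.addWorker("tomcat2", "ajp://srv2:8009");
        System.out.println(router.selectBackend("0A1B2C3D4E.tomcat2")); // prints ajp://srv2:8009
    }
}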
To make the application session data highly available, a Tomcat cluster has to be set up. Tomcat provides two basic approaches: storing sessions in a shared file system or database, or in-memory session replication. In-memory replication is the most popular way of clustering Tomcat.
Developers can also write their own Apache modules to distribute load over the Tomcat instances at the application level, or use hardware- or software-based load-balancing solutions instead.
Conclusion
Client-side server load balancing is the simplest and most powerful technique. No intermediate server load balancer is required; the clients talk to the servers directly. Its applicability is limited, however: for Internet clients, cross-domain calls have to be supported, which adds complexity and brings certain restrictions.
Pure transport-level load-balancing architectures, by contrast, are simple, flexible, and efficient, and they put no restrictions on the client side. Such architectures are often combined with distributed caches or session servers. They become inefficient, however, if the amount of data that has to be moved to and from the cache or session servers keeps growing. By implementing client affinity with an application-level server load balancer, copying large data sets between the servers can be avoided. And application-level server load balancing can do more than that: premium customers, for example, can be routed to dedicated servers that provide a higher quality of service.
In the end, the choice of a particular server load-balancing architecture depends on the specific business requirements and constraints of your infrastructure.