A load balancing mechanism distributes service requests across cloud applications deployed in data centers spread around the world. Every cloud data center must also have its own load balancers to schedule incoming service requests towards appropriate resources.

A load-balancing technique can use various tactics for directing service requests. In its simplest form, a load balancer listens on the network ports where service requests arrive in order to identify the kind of application and the resources being requested. Then a scheduling algorithm is used to assign the request to an appropriate resource among those available.
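One common scheduling algorithm is round-robin, which rotates through the available resources in turn. The following is a minimal sketch of that idea; the balancer class and the resource names (`vm-1`, `vm-2`, `vm-3`) are hypothetical, not from the text.

```python
import itertools

# Hypothetical sketch: a load balancer that assigns each incoming
# service request to the next resource in a fixed rotation.
class RoundRobinBalancer:
    def __init__(self, resources):
        self._cycle = itertools.cycle(resources)

    def assign(self, request):
        # Pick the next resource in rotation for this request.
        return next(self._cycle)

balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
assignments = [balancer.assign(f"req-{i}") for i in range(4)]
# After vm-3, the rotation wraps back to vm-1.
```

Round-robin is attractive because it is stateless apart from the rotation pointer; more sophisticated schedulers also consider the current load on each resource.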

Depending on the type of request and the resources required to serve it, cloud computing needs different implementations of the load balancing mechanism. Among them, service load balancing, which distributes application service requests among resources, is the most critical and is vital to the success of cloud computing. Other types include the load-balanced virtual switch, the load-balanced storage mechanism, and so on.

In service load balancing, workloads are balanced among multiple instances of each cloud service implementation. The duplicate implementations are organized into a resource pool that responds to fluctuating request volumes. The load balancers can be positioned either as an external component or built into the host servers so that the servers balance the workloads themselves.

The load balancer system in a cloud implementation that directly interfaces with clients is called the front-end node. All incoming requests first arrive at this front-end node at the service provider's end. This node then distributes the requests to appropriate resources for further execution. These resources, which are actually virtual machines, are called back-end nodes.

In a cloud computing implementation, when the load balancer at the front-end node receives multiple requests for a particular service from clients, it distributes those requests among the available virtual servers based on a defined scheduling algorithm. This scheduling follows a policy designed to keep all of the virtual servers evenly loaded, which ensures maximum and efficient utilization of both physical and virtual resources. Figure 11.1 represents the function of a load balancer in a cloud computing environment.
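A policy that keeps servers evenly loaded can, for example, track the number of active requests per virtual server and always pick the least-loaded one. This is a hypothetical sketch of such a least-connections policy; the server names and load counts are illustrative.

```python
# Hypothetical sketch: a least-connections policy, one way a load
# balancer can keep virtual servers evenly loaded.
active = {"vs-1": 2, "vs-2": 0, "vs-3": 1}  # current active requests per server

def assign(loads):
    # Choose the virtual server with the fewest active requests,
    # then record the new assignment against it.
    target = min(loads, key=loads.get)
    loads[target] += 1
    return target

first = assign(active)   # vs-2, the least-loaded server
second = assign(active)  # vs-2 and vs-3 now tie at 1; min keeps insertion order
```

Unlike round-robin, this policy adapts when requests take unequal time to serve, since slow requests keep their server's count high and divert new work elsewhere.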

Here it has been assumed that all three virtual servers are equally loaded before six similar service requests appear before the system (from the same or different consumers). The incoming requests encounter the load balancer first, which uses a scheduling algorithm to distribute them among the three available virtual servers. Here, the load balancer acts as the front-end node for all of the incoming requests, and the virtual servers act as the back-end nodes.
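The scenario above can be sketched in a few lines: six requests arrive at a front-end dispatcher and are spread over three equally loaded back-end servers, so each ends up with two. The dispatch function and the names `vs-1`..`vs-3` are assumptions for illustration.

```python
from collections import Counter

# Hypothetical sketch of the figure's scenario: six similar requests
# arrive at the front-end node and are routed round-robin over three
# equally loaded back-end virtual servers.
servers = ["vs-1", "vs-2", "vs-3"]

def dispatch(requests, backends):
    # Front-end node: route request i to backend i mod len(backends).
    return {req: backends[i % len(backends)] for i, req in enumerate(requests)}

routing = dispatch([f"req-{i}" for i in range(6)], servers)
load = Counter(routing.values())
# Each virtual server ends up serving exactly two of the six requests.
```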


The use of multiple similar resources with load balancing, instead of a single powerful resource, increases the system's reliability through redundancy.
