In a traditional computing environment, once the resource capacity of a system has been enhanced manually, the system retains that status until further human intervention, even if the added resources go unutilized. This under-utilization wastes resources and increases the cost of computing, yet very little can be done about it.
The main reason behind this problem is that the infrastructure architecture of a traditional computing system is not dynamic, which prevents the implementation of dynamic scaling. Static scaling requires a system shut-down (a restart) and is therefore avoided unless it becomes absolutely essential. For this reason, in a traditional static scaling environment, although resource capacity expansion was sometimes considered a possible option, capacity contraction was beyond imagination because service disruption had to be avoided.
In contrast to its actual definition, scalability has conventionally been about supplying additional capacity to a system. Reducing the capacity of a system was uncommon, although it was always technically possible. No one considered migrating to a system of lesser capability, even when the workload fell below the average level. System designers therefore built computing systems by arranging resources to meet peak demand, wasting resources and increasing cost.
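The cost of provisioning for peak demand can be made concrete with a short sketch. The hourly workload figures below are invented for illustration; the sketch compares a statically provisioned system, sized once for the peak, against an elastic one whose capacity tracks demand:

```python
# Illustrative only: the workload numbers are made up for this sketch.
workload = [20, 25, 30, 90, 100, 95, 40, 30, 25, 20, 15, 10]  # demand per hour

# Static provisioning: capacity is fixed at the peak and held for every hour.
static_capacity = max(workload)
static_total = static_capacity * len(workload)

# Dynamic (elastic) provisioning: capacity follows demand each hour.
dynamic_total = sum(workload)

wasted = static_total - dynamic_total
print(f"Static capacity-hours:  {static_total}")
print(f"Dynamic capacity-hours: {dynamic_total}")
print(f"Idle (wasted) capacity: {wasted} "
      f"({100 * wasted / static_total:.0f}% of what was provisioned)")
```

Even in this small example, more than half of the statically provisioned capacity sits idle, which is precisely the waste the text describes.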
In the traditional static scaling approach, a computing system requires a ‘restart’ for the scaling to take effect, which causes service disruption.
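The contrast between restart-based static scaling and in-place dynamic scaling can be sketched with a toy service model. The class and method names here are invented for illustration and do not correspond to any real API:

```python
# Toy model; names are hypothetical, chosen only to illustrate the contrast.
class StaticService:
    """Static scaling: capacity changes only across a stop/restart cycle."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.running = True

    def resize(self, new_capacity):
        self.running = False           # shut down: service disruption begins
        self.capacity = new_capacity   # change takes effect while offline
        self.running = True            # restart with the new capacity

class DynamicService:
    """Dynamic scaling: capacity changes while the service keeps running."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.running = True

    def resize(self, new_capacity):
        self.capacity = new_capacity   # no shutdown, no disruption

svc = DynamicService(4)
svc.resize(8)   # scale up without disruption
svc.resize(2)   # scale down just as easily -- the case static scaling ruled out
print(svc.capacity, svc.running)
```

The point of the sketch is the `resize` bodies: the static service must pass through a non-running state to change capacity, while the dynamic one never stops serving.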