Network queueing is an important application of queueing theory. The term
'network of queues' describes a situation where the input to one queue is the
output of one or more others. This arises in many situations, from
telecommunications to a PC. Below is a description of some of the broad
applications of network queueing and how the theory applies to them, together
with a few general network queueing theories.
Computer Networks
A simple example of network queueing is the central server network. This
consists of a CPU (Central Processing Unit), storage units it can access
and input devices used to access it. The tasks the CPU performs are queued
according to different criteria, and the storage units may have their own
individual queues. Queues can be ordered in a number of ways, and jobs can be
executed either one by one on a serial basis or piece by piece by time sharing.
It is not always necessary to treat customers in a queue equally. A priority
queueing system may be used to give some jobs preferential treatment.
Time sharing is when the CPU is dedicated to one task for a fixed period of
time, after which it is switched to another task. The first task can then be
recycled, i.e. put back in the queue so that the remainder of it is executed at
another time. This is repeated until the task is complete.
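As a rough sketch (not from the original article), time sharing can be pictured
as a round-robin loop over a job queue; the quantum and job sizes below are
made up for illustration:

    from collections import deque

    # A minimal round-robin sketch: each job runs for at most one quantum, then
    # is recycled to the back of the queue until it is finished. The quantum and
    # the (name, remaining_time) values below are illustrative only.
    def time_share(jobs, quantum):
        queue = deque(jobs)                  # (name, remaining_time) pairs
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            print(f"run {name} for {run} units")
            if remaining - run > 0:
                queue.append((name, remaining - run))   # recycle unfinished job

    time_share([("A", 5), ("B", 2), ("C", 7)], quantum=3)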
A job queue can therefore be executed one by one or via time sharing, but
either way the order in which the queued jobs are executed is crucial.
First in First Out (FIFO)
A first in first out queue works on the same basis we would (hopefully) use
in real life for a cinema queue or a phone box: tasks are executed in the
order in which they arrived.
Last in First Out (LIFO)
A last in first out queue is precisely the opposite: a new job is started as
soon as it is queued. This is not as foolish as it sounds; if jobs are run on a
time sharing basis it may be sensible to start the new job as soon as possible
and then switch back to the other jobs for a while. This method may imply the
need for pre-emption, i.e. the ability to stop a job half way through in order
to start another one. Alternatively, the current job could be finished before
the new one is started, or the queue may simply never need to accept a new job
while another is running.
Smallest job first (SJF)
A smallest job first queue orders the jobs with the smallest first, which
means as many jobs as possible are completed as quickly as possible, even
though the total time taken is the same.
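The three disciplines above differ only in how the waiting jobs are ordered, as
the following sketch shows (job names and sizes are invented):

    # Each job is a (name, size) pair; the list below is illustrative only.
    jobs = [("A", 4), ("B", 1), ("C", 3)]     # in order of arrival

    fifo = list(jobs)                         # first in, first out: arrival order
    lifo = list(reversed(jobs))               # last in, first out: newest job first
    sjf  = sorted(jobs, key=lambda j: j[1])   # smallest job first

    print("FIFO:", fifo)
    print("LIFO:", lifo)
    print("SJF: ", sjf)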
An example of a priority queue is a PC. There is a queue of events received
from each input device (i.e. mouse, keyboard etc.). Imagine a system with two
queues, one for the mouse and one for the keyboard, which lead into a master
queue of input events. Mouse movement may be given priority over mouse button
presses, and any event from the mouse queue may be given priority in the master
queue over any event from the keyboard queue.
It is important in this system to have a way of dealing with a situation where
a customer of priority 1 arrives in a queue headed by a customer of priority 2
(where 1 is the higher priority). This situation can be dealt with in either a
non-pre-emptive or a pre-emptive fashion. In a non-pre-emptive system the new
entry waits for the current one to be completed before beginning. In a
pre-emptive system the queue can stop the current entry half way through its
execution to start the new one.
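One way to sketch such a priority scheme is with a heap-ordered master queue,
handled non-pre-emptively; the event names and priority numbers below (lower
number = higher priority) are assumptions for illustration:

    import heapq
    from itertools import count

    # Master queue of input events ordered by priority (lower number = higher
    # priority); the counter breaks ties so that events of equal priority are
    # handled in arrival order. The priorities here are invented.
    master = []
    order = count()

    def add_event(priority, event):
        heapq.heappush(master, (priority, next(order), event))

    add_event(3, "key 'a'")
    add_event(1, "mouse move")
    add_event(2, "mouse button down")

    while master:
        priority, _, event = heapq.heappop(master)
        print(f"handle {event} (priority {priority})")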
Network communication
There are several broad methods of network communication:
Circuit Switching
When a call is made from a source to a destination it must traverse several
nodes along the way. Which nodes it traverses is determined by the availability
of free channels along the way. Each node has a queue for calls requesting a
channel. Once a channel has been opened the call can progress to the next node
and wait for a channel there. Each channel along the route remains open until
the source or destination (once reached) closes the call.
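A minimal sketch of this behaviour, assuming made-up node names and channel
counts, might model each node as a channel pool with a FIFO queue of waiting
calls:

    from collections import deque

    # Circuit-switching sketch: each node has a fixed number of channels and a
    # FIFO queue of calls waiting for one. A call holds every channel it has
    # acquired until close() releases them all. Names and counts are invented.
    class Node:
        def __init__(self, name, channels):
            self.name = name
            self.free = channels
            self.waiting = deque()

        def request(self, call):
            if self.free > 0:
                self.free -= 1
                call.held.append(self)
                return True                  # channel opened, call moves on
            self.waiting.append(call)        # otherwise queue at this node
            return False

        def release(self):
            self.free += 1
            if self.waiting:                 # hand the freed channel to the
                self.request(self.waiting.popleft())   # oldest waiting call

    class Call:
        def __init__(self):
            self.held = []

        def close(self):
            for node in self.held:
                node.release()
            self.held = []

    route = [Node("A", 2), Node("B", 1), Node("C", 2)]
    call = Call()
    for node in route:
        if not node.request(call):
            print(f"call waiting at node {node.name}")
            break
    else:
        print("call connected end to end")
    call.close()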
Packet Switching
Messages are transmitted through intermediate stages and the route a message takes
depends entirely upon the current load on the system. The route allocation is
dynamic. Each stage requires a random amount of time reflecting the length of the
queue at that stage.
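A small sketch of one such dynamic routing decision, assuming invented queue
lengths and topology, might forward the packet to the neighbour with the
shortest queue:

    import random

    # Dynamic routing sketch: the packet is forwarded to the neighbouring node
    # with the shortest queue, and the delay at a stage grows with that queue.
    # The queue lengths and neighbours below are invented for illustration.
    queues = {"B": 4, "C": 1, "D": 3}       # current queue length per neighbour

    def next_hop(neighbours):
        return min(neighbours, key=lambda n: queues[n])

    def stage_delay(queue_length, mean_service=1.0):
        # Waiting time proportional to the queue, plus a random service time.
        return queue_length * mean_service + random.expovariate(1.0 / mean_service)

    hop = next_hop(["B", "C", "D"])
    print(f"forward packet to {hop}, delay ~{stage_delay(queues[hop]):.2f}")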
Broadcasting
Radio Communication
Considering the nodes as transmitters/receivers, each can be treated as having
a queue for its channels. Without going into great detail of the various
systems used, it is always necessary to consider the fact that to open a
channel you must check that the two adjacent channels are also free, as
interference blocks transmissions. When the channels are not free it may be
necessary to re-allocate communications that already hold channels in order to
make room.
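A minimal sketch of such channel allocation, with an invented channel count,
might look like this:

    # Radio channel allocation sketch: a channel can only be opened if it and
    # both adjacent channels are free, since adjacent-channel interference
    # blocks transmissions. The channel count is invented for illustration.
    N_CHANNELS = 8
    in_use = set()

    def can_open(c):
        return all(ch not in in_use for ch in (c - 1, c, c + 1))

    def open_channel():
        for c in range(N_CHANNELS):
            if can_open(c):
                in_use.add(c)
                return c
        return None       # nothing free: queue the request or re-allocate

    print(open_channel())    # 0
    print(open_channel())    # 2 (channel 1 would interfere with channel 0)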
Digital Communication
This is done on the basis of time slots. A given communication link can have
several or all of its slots filled without any interference taking place, which
makes allocation far simpler. The aspect of nodes with queues still applies,
however.
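A sketch of slot allocation on a single link, with an invented frame size:

    # Time-slot allocation sketch for a digital link: slots in a frame can all
    # be filled without interference, so allocation just finds the first free
    # slot. The frame size is invented for illustration.
    FRAME_SLOTS = 4
    slots = [None] * FRAME_SLOTS

    def allocate(link_id):
        for i, owner in enumerate(slots):
            if owner is None:
                slots[i] = link_id
                return i
        return None        # frame full: the request joins the node's queue

    print(allocate("call-1"))   # 0
    print(allocate("call-2"))   # 1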
Markov Chains
A stochastic process is a Markov process if the future of the process depends
only upon its present state; the present state summarises everything relevant
about its history. A Markov process is called a Markov chain if its state space
is discrete, i.e. it moves from one clearly defined state to another.
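A minimal sketch of a two-state Markov chain, where each step depends only on
the current state; the states and transition probabilities are invented:

    import random

    # Tiny discrete-state Markov chain: the next state is drawn using only the
    # current state's transition probabilities. The states and probabilities
    # below are invented for illustration.
    transitions = {
        "idle": {"idle": 0.7, "busy": 0.3},
        "busy": {"idle": 0.4, "busy": 0.6},
    }

    def step(state):
        names, probs = zip(*transitions[state].items())
        return random.choices(names, weights=probs)[0]

    state = "idle"
    for _ in range(5):
        state = step(state)
        print(state)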
The Jackson Queueing Network
Consider a FIFO queueing network.
A queueing network is closed if there are no arrivals from outside of it.
Clearly the nature of this system will be that of a Markov chain, as there are
only a finite number of states of the network and the next state depends only
on the current one. However, a network can also be open. An open queueing
network can receive customers from outside to receive service at its nodes.
Jackson was the first to consider such systems in detail.
It is assumed that customers arrive at the network according to a Poisson
process. The average service time at each node and the probability that a
customer leaving each node departs the network are known. From this Jackson
formulated a way of calculating the Markovian equilibrium state of the network,
i.e. a tuple containing the equilibrium number of customers at every node.
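A sketch of this calculation for an open Jackson network: solve the traffic
equations for the total arrival rate at each node, then treat each node as an
independent M/M/1 queue (the standard product-form result, not a formulation
taken from this article). The arrival rates, service rates and routing
probabilities below are invented for illustration:

    import numpy as np

    # Routing matrix P: P[i][j] is the probability that a customer leaving node
    # i goes to node j; the leftover probability 1 - sum(P[i]) is the chance it
    # leaves the network. gamma is the external Poisson arrival rate at each
    # node and mu the service rate. All numbers are invented.
    P = np.array([[0.0, 0.5, 0.3],
                  [0.2, 0.0, 0.4],
                  [0.1, 0.1, 0.0]])
    gamma = np.array([1.0, 0.5, 0.2])
    mu = np.array([4.0, 3.0, 2.0])

    # Traffic equations: lambda_i = gamma_i + sum_j lambda_j * P[j][i]
    lam = np.linalg.solve(np.eye(3) - P.T, gamma)

    rho = lam / mu                      # utilisation of each node (must be < 1)
    mean_customers = rho / (1 - rho)    # equilibrium mean number at each M/M/1 node

    print("arrival rates:", lam)
    print("mean customers per node:", mean_customers)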
Other Network types
The Gordon-Newell network is essentially a closed version of the Jackson
network, while the BCMP (Baskett, Chandy, Muntz and Palacios) network includes
facilities for different classes of customers. This requires indexing the
different service time requirements at each node, plus different leaving
probabilities for each class. Variations of this network are commonly used in
computer systems and networks today.
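One standard way to evaluate a closed, single-class network of the
Gordon-Newell type is mean value analysis (MVA), which is not described in this
article but is sketched here with invented visit ratios, service times and
customer population:

    # Mean value analysis (MVA) sketch for a closed single-class network with
    # single-server FIFO nodes. Visit ratios, service times and the customer
    # population below are invented for illustration.
    def mva(visits, service, n_customers):
        m = len(visits)
        queue = [0.0] * m                   # mean queue lengths with 0 customers
        for n in range(1, n_customers + 1):
            resp = [service[i] * (1 + queue[i]) for i in range(m)]
            throughput = n / sum(visits[i] * resp[i] for i in range(m))
            queue = [throughput * visits[i] * resp[i] for i in range(m)]
        return throughput, queue

    x, q = mva(visits=[1.0, 0.6, 0.4], service=[0.05, 0.1, 0.08], n_customers=10)
    print("throughput:", x, "mean queue lengths:", q)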
References
Queueing Networks and Product Forms - A Systems Approach
Nico M. Van Dijk
Queueing Networks - Exact Computational Algorithm: A Unified Theory Based on Decomposition
and Aggregation
Adrian E. Conway and Nicolas D. Georganas
Probability, Statistics, and Queueing Theory - With Computer Science Applications
Arnold O. Allen
Last Updated: 13th June 1999
Written by: Robert Kay