6 A distributed system

6.4 The Sydney Olympic Games system

IBM was responsible for the computer systems used in the 2000 Olympic Games. The system had a number of components; these included:

The statistics associated with the development were staggering:

Clearly the project was a major challenge in terms of software engineering, project management and logistics. It also posed major problems in distributed systems development.

First, very high reliability was required: the malfunction of one computer should not affect the functioning of the system as a whole. For example, if the computer tracking athletes’ timings in a race malfunctioned, the system had to substitute another computer for it on the fly, with no discernible difference to the users of the timing data.
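The substitution described above can be sketched as a simple failover service. This is a minimal illustration, not IBM's actual mechanism; the class and attribute names (`TimingSource`, `FailoverTimingService`, `healthy`) are invented for the example, and a real system would detect failures by heartbeats or timeouts rather than a flag.

```python
class TimingSource:
    """Hypothetical wrapper around one timing computer."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy  # stands in for a real health check

    def read(self):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return {"source": self.name, "time_s": 9.87}

class FailoverTimingService:
    """Serves timing data from the first healthy source, switching
    on the fly so callers never need to know which machine answered."""
    def __init__(self, sources):
        self.sources = list(sources)

    def read(self):
        for source in self.sources:
            try:
                return source.read()
            except ConnectionError:
                continue  # this replica failed; try the next one
        raise RuntimeError("all timing computers are down")

primary = TimingSource("timing-1")
standby = TimingSource("timing-2")
service = FailoverTimingService([primary, standby])

print(service.read()["source"])  # timing-1 answers
primary.healthy = False          # simulate a malfunction mid-race
print(service.read()["source"])  # timing-2 is substituted transparently
```

The key property is that the caller's interface (`service.read()`) is unchanged by the failure; the substitution is invisible to consumers of the timing data.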

Second, a large number of disparate pieces of hardware were used – both computers and output devices such as scoreboards. The system was developed as a classical client–server system so that new hardware could be added easily; this was achieved through the use of standard protocols.
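The benefit of a standard protocol is that any device which speaks it can join as a new client with no change to the server. A minimal sketch, assuming a line-oriented request–reply protocol over TCP (the event code and result text here are invented for illustration):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

def serve(server_sock):
    """Answer each client with the stored result for the event it names."""
    results = {"SAIL-470": "NZL 1st, AUS 2nd"}
    while True:
        conn, _ = server_sock.accept()
        with conn:
            event = conn.recv(1024).decode().strip()
            conn.sendall(results.get(event, "UNKNOWN EVENT").encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen()
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Any device that speaks the protocol -- a scoreboard, a press
# terminal -- can be plugged in as a new client without touching
# the server.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"SAIL-470\n")
    reply = client.recv(1024).decode()
print(reply)
```

Because clients depend only on the wire protocol, not on each other's hardware, adding a new kind of output device is a matter of implementing the same request–reply exchange.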

Third, high performance was required: for example, results from the sailing events had to reach officials, journalists and competitors within a few seconds of a race finishing. IBM carried out a large number of performance studies and used techniques such as adaptive switching and data fragmentation to ensure this target was met.
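Data fragmentation in this sense means splitting a large result message into small, independently transmitted chunks that the receiver reassembles. The sketch below illustrates the idea only; the fragment size, function names and sample message are assumptions, and real fragmentation is usually done at the network layer.

```python
FRAGMENT_SIZE = 16  # bytes per fragment; a real system tunes this to the link

def fragment(payload: bytes, size: int = FRAGMENT_SIZE):
    """Split a result message into (sequence_number, chunk) pairs."""
    return [(i, payload[i:i + size]) for i in range(0, len(payload), size)]

def reassemble(fragments):
    """Rebuild the message even if fragments arrive out of order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

message = b"Race 7 final: NZL 01:42:10, AUS 01:42:31, GBR 01:43:05"
parts = fragment(message)
parts.reverse()  # simulate out-of-order arrival on the network
assert reassemble(parts) == message
```

Because each fragment carries its own sequence number, fragments can travel over different routes or links and still be reassembled correctly at the destination.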

Fourth, scalability needed to be built into the system. IBM had made major investments in its Olympics software, and a main aim was that it should be reusable time and time again, even as the number of sports, the number of competitors and the duration of the games grew. Using a client–server architecture ensured a high probability of this happening in future games.