From the Mainframe to the Cloud, back and forth

The software architectures of choice have been evolving in cycles, like fashion, driven by two competing forces: hardware capabilities and software complexity. Basically, increases in hardware capabilities push for simpler software models, which then grow more complex again as they try to tackle new problems. It is interesting to analyse this cycle from a software architecture viewpoint to uncover the factors that drive architectural design.

A good example is the move, over the last 40 years, from the mainframe architecture to the cloud architecture.

When hardware is expensive and scarce it must be carefully managed. This was the scenario in the mainframe period, when the batch computational model ruled, driven by job scheduling policies that aimed to make the most of the CPU. Preemption of processes had to be limited because of the overhead associated with context switching. The software architecture of these systems consisted of a single component, the mainframe, with its scheduler. Descriptions of these architectures focused on the scheduling policies and their internal management of jobs. Performance was the main concern.

As the cost of hardware fell and its computational power increased, a new quality arose: usability. Writing programs on punch cards, with deferred compilation and error correction, did not help system development. Therefore a new software architecture emerged in which software developers wrote programs at terminals, a bit like typewriters, that sent the typed characters to the mainframe, which did some processing and provided feedback. In this architecture the role of the scheduler changed, in what may be seen as the advent of modern operating systems.

This was the emergence of interactive systems, where performance was traded off for usability. It had a big impact on the role of computers in business, because it allowed the development of all sorts of applications in which computational power was available, often remotely, to a large number of business activities and agents. The software architecture for this kind of system needed to take the remote connections into consideration, along with their latency and failures.

With more people using the mainframe remotely and hardware costs steadily decreasing, terminals became more “intelligent” and performed more actions without needing to interact with the mainframe. This architecture further improved usability, giving faster feedback to the end user without the communication delay, and availability, because the terminal could take part in managing the network connection with the mainframe.

The final result of this architectural evolution was a landscape of private networks where servers, some of them mainframes, provided services to dedicated clients with very rich functionality supporting highly interactive interfaces. This architecture was made possible by the computational power of the clients. From a commercial viewpoint it was the golden age of database management systems (DBMS), which supported two-tier architectures. One of the most important software architecture decisions was which DBMS to use, because it meant a relationship for life with the DBMS provider: the business application was built as a complete instantiation of the DBMS technology.

Then, there was the web!

The web had a huge impact on the software architecture of applications. On a public network where any computer can become a client, it was necessary to give up on clients executing complex logic, because the number of clients is unknown and they do not have the software required by two-tier desktop applications installed. The new architectures therefore meant a loss of usability. Browsers, the brand-new name for these old “terminal” clients, were able to render HTML pages and request services from servers using the HTTP client-server protocol. From the user interface point of view, the submit button ruled and symbolized the border between a feedback-less end-user interaction with the web page and the subsequent slow communication with a remote server. The software architecture of these systems became closer to the old mainframe architecture, though it lessened the commercial dependency on DBMS vendors thanks to three- or four-tier designs that moved the business logic from the database engine to an intermediate, non-persistent, tier. The initial attempts to have rich, functional clients by allowing code to move from the server to the browser, e.g. Java Applets, were dismissed due to security problems.

Although the loss of usability is understandable in the new context, it is interesting to observe that business applications deployed in private networks also decided to sacrifice usability and move to a browser interface. There are several reasons for the adoption of n-tier architectures in the internal business context: a seamless transition between the intranet and the internet, the cost of licenses and the dependence on vendors, and the reduction of administration costs. By using the same software architecture, companies were better positioned to expand their business to the web. On the other hand, the possibility of using different software in each of the tiers created a diversification that reduced companies' dependency on a single provider. Finally, in large companies with thousands of desktops, upgrading the client software had a non-negligible cost.

The growth of the internet and its internal structuring, a complex architecture in itself with a lot of redundancy, created a network with very low latency and wide-bandwidth data transmission. Therefore, software architectures could take advantage of the new context by moving computation to the browser, with frequent asynchronous interactions with the server, which reduces the gap between the usability of two-tier desktop architectures and n-tier browser architectures. Technologies like Ajax and JavaScript were instrumental in this change towards richer (fat) clients. These solutions resulted in today's hype around frameworks like AngularJS, which support the model-view-controller architectural pattern in the browser and have overthrown the submit button from the web interface.
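To make the contrast concrete, here is a minimal sketch of the asynchronous style in plain JavaScript, independent of any particular framework; the /api/orders endpoint and the order-count element are made up for illustration. Instead of submitting a form and waiting for the server to render a whole new page, the script requests only the data it needs and updates a fragment of the page in place.

    // Ajax-style interaction: request data asynchronously and update the page
    // without a full submit/reload round trip.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/orders?customer=42'); // hypothetical endpoint
    xhr.onload = function () {
      if (xhr.status === 200) {
        var orders = JSON.parse(xhr.responseText);
        // Only the affected fragment changes; the rest of the page stays interactive.
        document.getElementById('order-count').textContent = orders.length;
      }
    };
    xhr.send();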

And then, there is the cloud!

The current stage of software architectures for web applications lives on the edge between rich clients full of functionality, which require significant computational power, and light terminals that delegate most of the computation to services in the cloud. An interesting example of this divide can be observed in the architectures of Google Chrome and Amazon Silk. The former uses advanced techniques in the browser to boost performance and enhance the user experience, whereas the latter applies exactly the same tactics in the Amazon cloud. In Amazon Silk the browser, in this case the Kindle browser, only renders HTML, and even JavaScript is executed in a process, in the Amazon cloud, that emulates the browser behavior and sends the generated HTML to the Kindle. This is possible due to the computational power of the cloud and the low latency of the network, because every interaction with the browser results in a request to the server.

Back and forth, software architectures keep dressing in the old emperor's clothes as the context continuously changes, and in the cloud approach to web applications the sociopolitical impact of the internet as a service, moving from a public internet to a private one, is of no less importance. Surely, some of the future decisions on software architectures for web applications will be shaped by this tension.