A very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top, mainframes sit just below supercomputers. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs; a supercomputer, however, can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, and depends largely on how the manufacturer chooses to market its machines.
Most modern mainframe design is defined not so much by single-task computational speed, typically measured as a MIPS rate or, for floating-point calculation, FLOPS, as by redundant internal engineering and the resulting high reliability and security, extensive input/output facilities, strict backward compatibility with older software, and high hardware and computational utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.
Software upgrades require resetting portions of the system, and are only non-disruptive when using facilities such as IBM's z/OS and Parallel Sysplex, which support workload sharing so that one system can take over another's application while it is being refreshed. In more recent mainframe deployments, several IBM installations had delivered over a decade of uninterrupted service as of 2007.[citation needed] Mainframes are defined by high availability, one of the main reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic. The term Reliability, Availability and Serviceability (RAS) describes a defining characteristic of mainframe computers. Proper planning and implementation are required to exploit these features; implemented improperly, they may instead inhibit the benefits they are meant to provide.
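To make the availability claim concrete: "high availability" is commonly quantified as a percentage of uptime, and each additional "nine" sharply reduces the downtime permitted per year. The following is a minimal illustrative sketch (the figures and function are not from the source) converting an availability target into a yearly downtime budget:

```python
# Illustrative only: convert an availability target (percent uptime)
# into the maximum permitted downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def max_downtime_minutes(availability_percent: float) -> float:
    """Maximum downtime per year, in minutes, for a given availability target."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% availability -> "
          f"{max_downtime_minutes(target):.1f} min/year downtime")
```

At "five nines" (99.999%), the budget is only about five minutes of downtime per year, which is why mainframe installations running for a decade without interruption are notable.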
In the 1960s, most mainframes had no explicitly interactive interface. They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back-office functions, such as customer billing. Teletype devices were also common as consoles for system operators. By the early 1970s, many mainframes had acquired interactive user interfaces and operated as timesharing computers, supporting hundreds of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. By the 1980s, many mainframes supported graphical terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the mainstream during the 1990s with the rise of the personal computer. Most modern mainframes have partially or entirely phased out classic terminal access for end-users in favour of Web user interfaces, although developers and operational staff typically continue to use terminals or terminal emulators.
Historically, mainframes acquired their name in part because of their substantial size and their requirements for specialized heating, ventilation, and air conditioning (HVAC) and electrical power, essentially constituting a "main framework" of dedicated infrastructure. These infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes could reduce data center energy costs for power and cooling, and that they could reduce physical space requirements compared to server farms.