Inside the Hidden World of Legacy IT Systems



“Fix the damn unemployment system!”

This past spring, tens of millions of Americans lost their jobs due to lockdowns aimed at slowing the spread of the SARS-CoV-2 virus. And untold numbers of the newly jobless waited weeks for their unemployment benefit claims to be processed, while others anxiously watched their bank accounts for an extra US $600 weekly payment from the federal government.

Delays in processing unemployment claims in 19 states—Alaska, Arizona, Colorado, Connecticut, Hawaii, Iowa, Kansas, Kentucky, New Jersey, New York, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Texas, Vermont, Virginia, and Wisconsin—are attributed to problems with antiquated and incompatible state and federal unemployment IT systems. Most of those systems date from the 1980s, and some go back even further.

Things were so bad in New Jersey that Governor Phil Murphy pleaded in a press conference for volunteer COBOL programmers to step up to fix the state's Disability Automated Benefits System. A clearly exasperated Murphy said that when the pandemic passed, there would be a post mortem focused on the question of “how the heck did we get here when we literally needed cobalt [sic] programmers?”

Similar problems have emerged at the federal level. As part of the federal government's pandemic relief plan, eligible U.S. taxpayers were to receive $1,200 payments from the Internal Revenue Service. However, it took up to 20 weeks to send out all the payments because the IRS computer systems are even older than the states' unemployment systems, some dating back almost 60 years.

As the legendary investor Warren Buffett once said, “It's only when the tide goes out that you learn who's been swimming naked.” The pandemic has acted as a powerful outgoing tide that has exposed government's dependence on aging legacy IT systems.

But governments aren't the only ones struggling under the weight of antiquated IT. It is equally easy to find airlines, banks, insurance companies, and other commercial entities that continue to rely on old IT, contending with software or hardware that is no longer supported by the supplier or has defects that are too costly to repair. These systems are prone to outages and errors, vulnerable to cyberintrusions, and progressively more expensive and difficult to maintain.

Since 2010, corporations and governments worldwide have spent an estimated $35 trillion on IT products and services. Of this amount, about three-quarters went toward operating and maintaining existing IT systems. And at least $2.5 trillion was spent on trying to replace legacy IT systems, of which some $720 billion was wasted on failed replacement efforts.

But it's astonishing how seldom people notice these IT systems, even with companies and public institutions spending hundreds of billions of dollars every year on them. From the time we get up until we go to bed, we interact, often unknowingly, with dozens of IT systems. Our voice-activated digital assistants read the headlines to us before we hop into our cars loaded with embedded processors, some of which help us drive, others of which entertain us as we guzzle coffee brewed by our own robotic baristas. Infrastructure like wastewater treatment plants, power grids, air traffic control, telecommunications services, and government administration depends on hundreds of thousands of unseen IT systems that form another, hidden infrastructure. Commercial organizations rely on IT systems to manage payroll, order supplies, and approve cashless sales, to name but three of thousands of automated tasks necessary to the smooth functioning of a modern economy. Though these systems run practically every aspect of our lives, we don't give them a second thought because, for the most part, they function. It doesn't even occur to us that IT is something that needs constant attention to be kept in working order.

In his landmark study The Shock of the Old: Technology and Global History Since 1900 (Oxford University Press, 2007), British historian David Edgerton claims that although maintenance and repair are central to our relationship with technology, they are “matters we would rather not think about.” As a result, technology maintenance “has lived in a twilight world, hardly visible in the formal accounts societies make of themselves.”

Indeed, the very invisibility of legacy IT is a kind of testament to how successful these systems are. Except, of course, when they're not.

There's no formal definition of “legacy system,” but it's commonly understood to mean a critical system that is out of date in some way. It may be unable to support future business operations; the vendors that supplied the application, operating system, or hardware may no longer be in business or support their products; the system architecture may be fragile or complex and therefore unsuitable for upgrades or fixes; or the finer details of how the system works are no longer understood.

To modernize a computing system or not is a question that bedevils nearly every organization. Given the many problems caused by legacy IT systems, you'd think that modernization would be a no-brainer. But that decision isn't nearly as straightforward as it appears. Some legacy IT systems end up that way because they work just fine over a long period. Others stagger along because the organization either doesn't want to or can't afford to take on the cost and risk associated with modernization.

Obviously, a legacy system that's critical to day-to-day operations cannot be replaced or enhanced without major disruption. And so even though that system contributes mightily to the organization's operations, management tends to ignore it and defer modernization. On most days, nothing goes catastrophically wrong, and so the legacy system remains in place.

This “kick the can” approach is understandable. Most IT systems, whether new or modernized, are expensive affairs that go live late and over budget, assuming they don't fail partially or completely. These situations are not career-enhancing experiences, as many former chief information officers and program managers can attest. Therefore, once an IT system is finally operating reliably, there's little motivation to plan for its eventual retirement.

What management does demand, however, is for any new IT system to provide a return on investment and to cost as little as possible for as long as possible. Such demands often lead to years of underinvestment in routine maintenance. Of course, those same executives who approved the investment in the new system probably won't be with the organization a decade later, when that system has legacy status.

Similarly, the developers of the system, who understand in detail how it operates and what its limitations are, may well have moved on to other projects or organizations. For especially long-lived IT systems, most of the developers have likely retired. Over time, the system becomes part of the routine of its users' daily life, like the office elevator. So long as it works, no one pays much attention to it, and eventually it recedes into the organization's operational shadows.

Thus does an IT system quietly age into legacy status.

Millions of people every month experience the frustrations and inconveniences of decrepit legacy IT.

U.K. bank customers know this frustration only too well. According to the U.K. Financial Conduct Authority, the nation's banks reported nearly 600 IT operational and security incidents between October 2017 and September 2018, an increase of 187 percent from a year earlier. Government regulators point to the banks' reliance on decades-old IT systems as a recurring cause for the incidents.

Airline passengers are equally exasperated. Over the past several years, U.S. air carriers have experienced on average nearly one IT-related outage per month, many of them attributable to legacy IT. Some have lasted days and caused the delay or cancellation of thousands of flights.

Poorly maintained legacy IT systems are also prone to cybersecurity breaches. At the credit reporting agency Equifax, the complexity of its legacy systems contributed to a failure to patch a critical vulnerability in the company's Automated Consumer Interview System, a custom-built portal developed in the 1970s to handle consumer disputes. This failure led, in 2017, to the loss of 146 million individuals' sensitive personal information.

Aging IT systems also open the door to crippling ransomware attacks. In this type of attack, a cyberintruder hacks into an IT system and encrypts all of the system data until a ransom is paid. In the past two years, ransomware attacks have been launched against the cities of Atlanta and Baltimore as well as the Florida municipalities of Riviera Beach and Lake City. The latter two agreed to pay their attackers $600,000 and $500,000, respectively. Dozens of state and local governments, as well as school systems and hospitals, have experienced ransomware attacks.

Even if they don't suffer an embarrassing and costly failure, organizations still have to contend with the steadily climbing operational and maintenance costs of legacy IT. For instance, a recent U.S. Government Accountability Office report found that of the $90 billion the U.S. government spent on IT in fiscal year 2019, nearly 80 percent went toward operation and maintenance of existing systems. Furthermore, of the 7,000 federal IT investments the GAO examined in detail, it found that 5,233 allocated all their funding to operation and maintenance, leaving no monies to modernize. From fiscal year 2010 to 2017, the amount spent on IT modernization dropped by $7.3 billion, while operation and maintenance spending rose by 9 percent. Tony Salvaggio, founder and CEO of CAI, an international firm that specializes in supporting IT systems for government and commercial firms, notes that ever-growing IT legacy costs will continue to eat government's IT modernization “seed corn.”

While not all operational and maintenance costs can be attributed to legacy IT, the GAO noted that the rise in spending is likely due to supporting obsolete computing hardware—for example, two-thirds of the Internal Revenue Service's hardware is beyond its useful life—as well as “maintaining applications and systems that use older programming languages, since programmers knowledgeable in these older languages are becoming increasingly rare and thus more expensive.”

Take COBOL, a programming language that dates to 1959. Computer science departments stopped teaching COBOL some decades ago. And yet the U.S. Social Security Administration reportedly still runs some 60 million lines of COBOL. The IRS has nearly as much COBOL programming, along with 20 million lines of assembly code. And, according to a 2016 GAO report, the departments of Commerce, Defense, Treasury, Health and Human Services, and Veterans Affairs are still “using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago.”

Given the vast amount of outdated software that's still in use, the cost of maintaining it will likely keep climbing not only for government, but for commercial organizations, too.

The first step in fixing a massive problem is to admit you have one. At least some governments and companies are finally starting to do just that. In December 2017, for example, President Trump signed the Modernizing Government Technology Act into law. It allows federal agencies and departments to apply for funds from a $150 million Technology Modernization Fund to accelerate the modernization of their IT systems. The Congressional Budget Office originally indicated the need was closer to $1.8 billion per year, but politicians' concerns over whether the money would be well spent resulted in a significant reduction in funding.

Part of the modernization push by governments in the United States and abroad has been to provide more effective administrative controls, increase the reliability and speed of delivering benefits, and improve customer service. In the commercial sector, by contrast, IT modernization is being driven more by competitive pressures and the availability of newer computing technologies like cloud computing and machine learning.

“Everyone understands now that IT drives organization innovation,” Salvaggio told IEEE Spectrum. He believes that the capabilities these new technologies will create over the next few years are “going to blow up 30 to 40 percent of [existing] business models.” Companies saddled with legacy IT systems won't be able to compete on the expected rapid delivery of improved features or customer service, and therefore “are going to find themselves forced into a box canyon, unable to get out,” Salvaggio says.

This is already happening in the banking industry. Existing firms are having a difficult time competing with new businesses that are spending most of their IT budgets on creating new offerings instead of supporting legacy systems. For example, Starling Bank in the United Kingdom, which began operations in 2014, offers only mobile banking. It uses Amazon Web Services to host its services and spent a mere £18 million ($24 million) to create its infrastructure. In comparison, the U.K.'s TSB bank, a traditional full-service bank founded in 1810, spent £417 million ($546 million) moving to a new banking platform in 2018.

Starling maintains all its own code and does an average of one software release per day. It can do this because it doesn't have the intricate connections to myriad legacy IT systems, where every new software release carries a measurable risk of operational failure, according to the U.K.'s bank regulators. Simpler systems mean fewer and shorter IT-related outages. Starling has had only one major outage since it opened, whereas the three largest U.K. banks have each had at least a dozen over the same period.

Modernization creates its own problems. Take the migration of legacy data to a new system. When TSB moved to its new IT platform in 2018, some 1.9 million online and mobile customers discovered they were locked out of their accounts for nearly two weeks. And modernizing one legacy system often means having to upgrade other interconnecting systems, which may also be legacy. At the IRS, for instance, the original master tax file systems installed in the 1960s have become buried under layers of more modern, interconnected systems, each of which made it harder to replace the preceding system. The agency has been trying to modernize its interconnected legacy tax systems since 1968 at a cumulative cost of at least $20 billion in today's money, so far with very little success. It plans to spend up to another $2.7 billion on modernization over the next five years.

Another common issue is that legacy systems have duplicate functions. The U.S. Navy is in the process of installing its $167 million Navy Pay and Personnel system, which aims to consolidate 223 applications residing in 55 separate IT systems, including 10 that are more than 30 years old and a few that are more than 50 years old. The disparate systems used 21 programming languages running on nine operating systems across 73 data centers and networks.

Such massive duplication and data silos sound ridiculous, but they are shockingly common. Here's one way it often happens: The government issues a new mandate that includes a requirement for some type of automation, and the policy comes with fresh funding to implement it. Rather than upgrade an existing system, which would be disruptive, the department or agency finds it easier to just create a new IT system, even if some or most of the new system duplicates what the existing system is doing. The result is that different units within the same organization end up deploying IT systems with overlapping functions.

“The shortage of thinking about systems engineering,” along with the lack of coordinating IT developments to avoid duplication, has long plagued government and corporations alike, Salvaggio says.

The best way to deal with legacy IT is to never let IT become legacy. Growing recognition of legacy IT systems' many costs has sparked a rethinking of the role of software maintenance. One new approach was recently articulated in Software Is Never Done, a May 2019 report from the U.S. Defense Innovation Board. It argues that software should be viewed “as an enduring capability that must be supported and continuously improved throughout its life cycle.” This includes being able to test, integrate, and deliver improvements to software systems within short periods of time and on an ongoing basis.

Here's what that means in practice. Currently, software development, operations, and support are considered separate activities. But if you fuse those activities into a single integrated activity—employing what is called DevOps—the operational system is then always “under development,” continuously and incrementally being improved, tested, and deployed, sometimes many times a day.
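To make that concrete, here is a minimal sketch, in Python, of the shape of the pipeline DevOps implies: every change runs the tests and, only if they pass, is packaged and pushed to production in one automated motion. The particular commands and names here (pytest, a deploy.sh script, the artifact path) are hypothetical stand-ins, not a prescription; real pipelines usually live in CI/CD services rather than a single script.

```python
# A minimal sketch of the DevOps loop described above: development,
# testing, and deployment fused into one automated sequence that can
# run many times a day. The commands and paths (pytest, deploy.sh,
# dist/app-latest.tar.gz) are hypothetical placeholders.
import subprocess
import sys

def run_tests() -> bool:
    """Run the automated test suite; deployment proceeds only if it passes."""
    return subprocess.run(["pytest", "--quiet"]).returncode == 0

def build_artifact() -> str:
    """Package the current source tree into a deployable artifact."""
    subprocess.run(["python", "-m", "build"], check=True)
    return "dist/app-latest.tar.gz"  # assumed artifact path

def deploy(artifact: str) -> None:
    """Hand the artifact to an (assumed) deployment script."""
    subprocess.run(["./deploy.sh", artifact], check=True)

if __name__ == "__main__":
    if not run_tests():
        sys.exit("Tests failed; nothing was deployed.")
    deploy(build_artifact())
```

The point is less the particular commands than the structure: because the same automated gate runs on every change, the system never drifts into the untested, unmaintained state that lets legacy rot set in.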

DevOps is just one way to keep core IT systems from turning into legacy systems. The U.S. Defense Advanced Research Projects Agency has been exploring another, potentially more effective way, recognizing the longevity of IT systems once implemented.

Since 2015, DARPA has funded research aimed at making software that will be viable for more than 100 years. The Building Resource Adaptive Software Systems (BRASS) program is trying to figure out how to build “long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate,” according to program manager Sandeep Neema.

Creating such timeless systems will require a “start from scratch” approach to software design that doesn't make assumptions about how an IT system should be designed, coded, or maintained. That will entail identifying the logical resources (libraries, data formats, structures) and physical resources (processing, storage, energy) a software program needs for execution. Such analyses could use advanced AI techniques that discover and make visible an application's operations and interactions with other applications and systems. By doing so, changes to resources or interactions with other systems, which account for many system failures or inefficient operations, can be actively managed before problems occur. Developers will also need to create a capability, again possibly using AI, to monitor and repair all elements of the execution environment in which the application resides.
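The flavor of resource-adaptive behavior can be illustrated with a toy Python sketch of this author's own devising, not code from the DARPA program: at startup, the program discovers which of its logical resources (here, JSON parsing libraries) the environment actually provides and adapts rather than failing.

```python
# A toy illustration in the spirit of the BRASS goals described above;
# this is a hypothetical sketch, not the program's actual technology.
# The program probes its environment for optional accelerated JSON
# parsers and degrades gracefully to the standard library.
import importlib
import json

def load_json_backend():
    """Return the best available JSON parser rather than assuming one exists."""
    for name in ("orjson", "ujson"):  # optional third-party parsers
        try:
            return importlib.import_module(name)
        except ImportError:
            continue  # resource absent; adapt by trying the next one
    return json  # guaranteed fallback shipped with Python itself

backend = load_json_backend()
data = '{"status": "ok"}'
print(backend.__name__, "->", backend.loads(data))
```

A genuinely resource-adaptive system would extend this probing far beyond libraries, to the data formats and physical resources listed above, and would monitor them continuously during operation rather than only at startup.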

The goal is to be able to update or upgrade applications without the need for extensive intervention by a human programmer, Neema told Spectrum, thereby “buying down the cost of maintenance.”

The BRASS program has funded nine projects, each of which represents different aspects of what a resource-adaptive software system will need to do. Some of the projects involve UAVs, mobile robots, and high-performance computing. The final results of the effort are expected later this year, when the technologies will be released to open-source repositories, industry, and the Defense Department.

Neema says no one should expect BRASS to deliver “a general-purpose software repair capability.” A more realistic outcome is an approach that can work within specific data, software, and system parameters to help the maintainers who oversee those systems to become more efficient and effective. He of course hopes that private companies and other government organizations will build on the BRASS program's results.

The COVID-19 pandemic has exposed the debilitating consequences of relying on antiquated IT systems for essential services. Unfortunately, that dependence, along with legacy IT's enormous and increasing costs, will still be with us long after the pandemic has ended. For the U.S. government alone, even a concerted and well-executed effort would take decades to replace the thousands of existing legacy systems. Over that time, current IT systems will also become legacy and themselves require replacement. Given the budgetary impacts of the pandemic, even less money for legacy system modernization may be available in the future across all government sectors.

The problems associated with legacy systems will only worsen as the Internet of Things, with its billions of interconnected computing devices, matures. These devices are already being connected to legacy IT, which will make it even more difficult to replace and modernize those systems. And eventually the IoT devices will become legacy. Just as with legacy systems today, those devices likely won't be replaced as long as they continue to work, even if they are no longer supported. The potential cybersecurity risk of vast numbers of obsolete but still operating IoT devices is a huge unknown. Already, many IoT devices have been deployed without basic cybersecurity built into them, and this shortsightedness is taking a toll. Cybersecurity concerns compelled the U.S. Food and Drug Administration to recall implantable pacemakers and insulin pumps and the National Security Agency to warn about IoT-enabled smart furniture, among other things of the Internet.

Now imagine a not-too-distant future where hundreds of millions or even billions of legacy IoT devices are deeply embedded into government and commercial offices, schools, hospitals, factories, homes, and even people. Further imagine that their cybersecurity or technical flaws are not being fixed and remain connected to legacy IT systems that themselves are barely supported. In such a world, the pervasive dependence upon increasing numbers of interconnected, obsolete systems will have created something far grimmer and murkier than Edgerton's twilight world.

This article appears in the September 2020 print issue as “The Hidden World of Legacy IT.”

As a risk consultant for businesses and a slew of three-lettered U.S. government agencies, Contributing Editor Robert N. Charette has seen more than his share of languishing legacy IT systems. As a civilian, he's also been a casualty of a legacy system gone berserk. A few years ago, his bank's IT system, which he later found out was being upgraded, made an error that was most definitely not in his favor.

He'd gone to an ATM to withdraw some weekend cash. The machine told him that his account was overdrawn. Puzzled, because he knew he had sufficient funds in his account to cover the withdrawal, he had to wait until Monday to contact the bank for an explanation. When he called, the customer service representative insisted that he was indeed overdrawn. This was an understatement, considering that the size of the alleged overdraft might have caused a person less versed in software debacles to have a stroke.

“You know, you're overdrawn by [US] $1,229,200,” Charette recalls being told. “I was like, well, that's interesting, because I don't have that much money in my bank account.”

The customer service rep then acknowledged it could be an error caused by a computer glitch during a recent systems upgrade. Two days later he received a letter from his bank, apparently triggered by a check he had written for $55.80. Charette notes that it wasn't the million-dollar-plus overdraft that triggered the letter, just that last double-nickel drop in the bucket.

The bank never did send a letter apologizing for the inconvenience or explaining the problem, which he believes likely affected other customers. And like so many of the failed legacy-system upgrades that Charette has chronicled—some costing billions—it never made the news, either.

Contributing Editor Robert N. Charette is an acknowledged international authority on information technology and systems risk management. A self-described “risk ecologist,” he is interested in the intersections of business, political, technological, and societal risks. Along with being editor for IEEE Spectrum’s Risk Factor blog, Charette is an award-winning author of multiple books and numerous articles on the subjects of risk management, project and program management, innovation, and entrepreneurship. A Life Senior Member of the IEEE, Charette was a recipient of the IEEE Computer Society’s Golden Core Award in 2008.

Deutsche Telekom, Orange, and others warn the continent is falling behind the U.S. and Japan

Michael Koziol is an associate editor at IEEE Spectrum where he covers everything telecommunications. He graduated from Seattle University with bachelor's degrees in English and physics, and earned his master's degree in science journalism from New York University.

European telecom companies recently sounded an alarm that they may be falling behind the rest of the world in their efforts to develop open-interface radio access network (RAN) technologies. These technologies, collectively called Open RAN, would provide new ways to mix and match network components by “opening up” the interfaces between them, and they are widely believed to be an important opportunity to drive down the costs of network deployments and allow new players to enter a rigid market.

Five companies—Deutsche Telekom, Orange, Telecom Italia, Telefónica, and Vodafone—published a report outlining why they feel Europe as a whole is lagging behind other regions such as the U.S. and Japan in developing Open RAN. The companies point both to a lack of companies developing key components, notably silicon chips, for Open RAN technologies, and to the need to get incumbent equipment vendors Ericsson and Nokia on board with Open RAN development. And there's a deeper issue: the exact definition of Open RAN is still in flux, allowing different companies to prioritize different technologies and proclaim that they fall under the banner of Open RAN.

Briefly put, the cellular networks we use to send text messages and make calls have a few basic components. Cell towers receive analog signals via their antennas; radio units convert those signals to their corresponding digital versions; and then baseband units process the signals, correct errors, and route the digital signals to wherever they need to go. Traditionally, network operators like any of the five companies listed above purchase this equipment from a single vendor, be it Ericsson, Nokia, Samsung, or Huawei, to name the biggest players.

Vendors have an incentive to lock operators into their respective ecosystems by ensuring their components don’t work with components from their competitors. Frustrated by being in a captive market and the expenses associated with upgrading and rolling out networks, several operators created the O-RAN Alliance to force the development of standards and technologies that would lead to Open RAN implementation.

Olivier Simon, the Radio Innovation Director at Orange, says there are three aspects to Open RAN. The first is openness between the interfaces of different network components. The second is decoupling network software from hardware and moving more of a network’s operations into the cloud. And the third is increased intelligence: letting AI and machine learning techniques manage more of the network’s performance. “I think everyone agrees there are these three aspects,” says Simon, “but what becomes more tricky is that none of them are mandatory.”
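The first aspect, open interfaces, is the easiest to see in code. The Python sketch below is a hypothetical analogy, not the actual O-RAN interface specifications: it shows why an agreed-upon contract matters, because baseband software written once against that contract can drive radio units from different vendors interchangeably.

```python
# Hypothetical analogy for Open RAN's open interfaces; the class and
# method names are illustrative, not the real O-RAN definitions.
from typing import Protocol

class RadioUnit(Protocol):
    """The agreed-upon contract every vendor's radio unit must satisfy."""
    def digitize(self, analog_samples: list[float]) -> list[int]:
        ...

class VendorARadio:
    def digitize(self, analog_samples: list[float]) -> list[int]:
        return [round(s * 127) for s in analog_samples]    # 8-bit-style quantization

class VendorBRadio:
    def digitize(self, analog_samples: list[float]) -> list[int]:
        return [round(s * 32767) for s in analog_samples]  # 16-bit-style quantization

def baseband_process(radio: RadioUnit, samples: list[float]) -> list[int]:
    """Baseband code written once against the open interface, vendor-agnostic."""
    return radio.digitize(samples)

# Either vendor's unit plugs into the same baseband software:
for ru in (VendorARadio(), VendorBRadio()):
    print(type(ru).__name__, baseband_process(ru, [0.1, -0.5]))
```

In a closed ecosystem, by contrast, each vendor's baseband software talks only to its own radios, which is exactly the lock-in the operators describe.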

For example, a recent report calls out the difficulties in getting the Open RAN effort to coalesce around an established set of changes to the fundamentals of network architecture. It points out that Nokia has developed Open RAN software, but that the software runs only on Nokia's hardware. Nokia's developments do feature open component interfaces (the first issue addressed by Open RAN), but the operators authoring the report take issue with the lack of software-hardware decoupling in Nokia's developments (the second of the three issues network carriers wish to tackle).

Nokia has pushed back on the report, explaining that its components are compliant with the O-RAN Alliance’s definitions for open interfaces. But that gets back to the issue at the core of Open RAN development: Do the three aspects have equal weight, and in what ways should they be prioritized and implemented? Nokia argues that its components are Open RAN-compliant because they have open interfaces. The operators that authored the report feel that’s not enough because the Nokia components don't adequately prioritize the other emerging aspects they say are critical to the success of the effort.

There is a sense from the operators that these kinds of back-and-forth arguments about what makes a particular component compliant are stymying Open RAN development in Europe. “It’s more an ecosystem question than a deep technical question,” says Simon. The network operators consider Ericsson and Nokia to be important parts of Europe’s telecom ecosystem because of their global dominance in supplying equipment. But there's a downside to having these telecom titans around, because it can be difficult for large incumbents in an industry to react swiftly to a new technology.

That has allowed U.S.-based companies like Mavenir and Parallel Wireless, and Japanese companies like Rakuten, to set the pace for Open RAN development, according to an email from Franz Seiser, the Vice President of Access Disaggregation at Deutsche Telekom. That said, Rakuten (a Japanese network operator) has enlisted Nokia to develop and build its Open RAN network using equipment from multiple vendors, which Nokia points to as another piece of evidence that it is committed to Open RAN development. Still, it could be argued that the Rakuten-Nokia multi-vendor deployment, while prioritizing the open-interface aspect, yet again fails to satisfactorily address the mission to decouple software and hardware. Seiser echoes Nokia's sense of things, insofar as the company has demonstrated progress with its recent introduction of multi-vendor support.

Tanveer Saad, the Head of Edge Cloud Innovation and Ecosystems at Nokia, says that Nokia is taking a bigger responsibility as a “solution provider.” “We are actually building the solution with different components from our ecosystem players, so it can be a multi-vendor kind of environment.” On the one hand, Nokia has demonstrated a desire to “move quick” in response to the disruption created by Open RAN by developing multi-vendor solutions like the one for Rakuten. On the other, the company seems to be hoping that it can maintain dominance as an end-to-end network provider, albeit by offering options with multiple vendors’ components instead of just its own.

Also driving the sense that Europe is falling behind is the lack of European counterparts to companies such as Broadcom, Intel, and Qualcomm, which produce the silicon chips that will ultimately be integral to Open RAN's success. These chips will be vital for the development of AI that can manage networks. While the consensus is that this third aspect of Open RAN is the farthest in the future, the operators argue that Europe should begin investing in alternatives to existing chip manufacturers in order to shore up that weakness in the continent's network communications ecosystem.

Open RAN surprised some in the industry with how quickly it has risen to prominence. The O-RAN Alliance was founded in 2018 with just five members; it now has over 260. Cellular “generations” such as 4G and 5G typically follow a ten-year cycle of research, standardization, and commercialization. Open RAN is moving much faster, and despite the recent concerns about lagging behind—or perhaps because of them—there is still a sense in the industry that Open RAN will make a big impact in the coming years.

From the pyramids to the Hummer, more is often less

Vaclav Smil writes Numbers Don’t Lie, IEEE Spectrum's column devoted to the quantitative analysis of the material world. Smil does interdisciplinary research focused primarily on energy, technical innovation, environmental and population change, food and nutrition, and on historical aspects of these developments. He has published 40 books and nearly 500 papers on these topics. He is a distinguished professor emeritus at the University of Manitoba and a Fellow of the Royal Society of Canada (Science Academy). In 2010 he was named by Foreign Policy as one of the top 100 global thinkers, in 2013 he was appointed as a Member of the Order of Canada, and in 2015 he received an OPEC Award for research on energy. He has also worked as a consultant for many U.S., EU and international institutions, has been an invited speaker in more than 400 conferences and workshops and has lectured at many universities in North America, Europe, and Asia (particularly in Japan).

There is a fundamental difference between what can be designed and built and what makes sense. History provides a lesson in the shape of record-setting behemoths that have never since been equaled.

The Egyptian pyramids started small, and in just a few generations, some 4,500 years ago, there came Khufu’s enormous pyramid, which nobody has ever tried to surpass. Shipbuilders in ancient Greece kept on expanding the size of their oared vessels until they built, during the third century BCE, a tessarakonteres, with 4,000 oarsmen. That vessel was too heavy, too ponderous, and therefore a naval failure. And architect Filippo Brunelleschi’s vast cupola for Florence’s Cathedral of Santa Maria del Fiore, built without scaffolding and finished in 1436, was never replicated.

The modern era has no shortage of such obvious overshoots. The boom in oil consumption following the Second World War led to ever-larger oil tankers, with sizes rising from 50,000 to 100,000 and 250,000 deadweight tonnes (dwt). Seven tankers exceeded 500,000 dwt, but their lives were short, and nobody has built a million-dwt tanker. Technically, it would have been possible, but such a ship would not fit through the Suez or Panama canals, and its draft would limit its operation to just a few ports.

The economy-class-only configuration of the Airbus A380 airliner was certified to carry up to 853 passengers, but it has not been a success. In 2021, just 16 years after the plane's first flight, the last one was delivered, a very truncated lifespan. Compare it with the hardly puny Boeing 747, which will see its final delivery in 2022, 53 years after the plane's first flight, an almost human longevity. Clearly, the 747 was the right-sized record-breaker.

Of course, the most infamous overshoot of all airplane designs was Howard Hughes’s H-4 Hercules, dubbed the “Spruce Goose,” the largest plane ever made out of wood. It had a wingspan of nearly 100 meters, and it was propelled by eight reciprocating engines, but it became airborne only once, for less than a minute, on 2 November 1947, with Hughes himself at the controls.

Another right-size giant is Ford's heavy and powerful F-150, now in its 14th generation: In the United States, it has been the best-selling pickup since 1977 and the best-selling vehicle since 1981. In contrast, the Hummer, a civilian version of a military assault vehicle, had a brief career but is now being resurrected in an even heavier electric version: The largest version using an internal combustion engine, the H1, weighed nearly 3.5 tonnes; the electric Hummer weighs 4.1 tonnes. I doubt we will see 14 generations of this beast.

But these lessons of excess carry little weight with designers and promoters pursuing record sizes. Architects discuss buildings taller than a mile, cruise-ship designers have already packed nearly 7,000 people into a single vessel (Symphony of the Seas, built in 2018), and people are dreaming about much larger floating cities (perfect for spreading the next pandemic virus). There are engineers who think that we will soon have wind turbines with rotors more than 200 meters in diameter whose blades will fold, like palm fronds, in hurricanes.

Depending on where you stand, you might see all of this either as an admirable quest for new horizons (a quintessential human striving) or as irrational and wasteful overreach (a quintessential human hubris).

This article appears in the January 2022 print issue as “Extreme Designs.”

Many flying behemoths have been tried, and most have failed. Of the planes shown in the illustration, the only one that has succeeded is the Antonov An-225, designed in the Soviet Union in the 1980s. This large cargo-lifter carries up to 130 tons of heavy machinery or construction parts on chartered flights to all continents.
