How Super is Super Enough?

The Electronic Numerical Integrator and Computer was built in Philadelphia in 1946. In 1947 it was moved to the U.S. Army’s Ballistic Research Laboratory (pictured here) at Aberdeen Proving Ground, Maryland. There it operated continuously until its shutdown in 1955. Photograph from the U.S. Army archives.

THE FIRST GENERAL-PURPOSE COMPUTER, the Electronic Numerical Integrator And Computer (ENIAC), was built in 1946 and took up 1,800 square feet—roughly the size of a volleyball court.

It also drew a considerable amount of power: 175 kilowatts, roughly the combined draw of dozens of household electric ovens running at once. Rumor had it that every time the computer powered up, the lights in its home city of Philadelphia dimmed.

At the time the ENIAC was a revolutionary machine, able to perform up to five thousand calculation cycles per second. By today’s standards, with an ordinary laptop performing two billion cycles per second, that’s chump change.

However, behemoth computers are far from relics of the past. Today, they are at the forefront of computing technology.

In 2012, the Obama administration earmarked $126 million for the research and development of new supercomputers that would fill a football field and perform as many operations per second as 50 million laptops can. The previous budget set aside $24 million.

Current supercomputers perform on the scale of one quadrillion, or 10¹⁵, calculations per second: the petascale. Next-generation supercomputers would operate at least 1,000 times faster, at one quintillion, or 10¹⁸, calculations per second: the exascale.
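For readers who like to see the arithmetic, here is a quick back-of-the-envelope sketch in Python. The laptop throughput of roughly 20 billion operations per second is an assumption on my part (several cores, each performing several operations per clock cycle), chosen because it squares with the 50-million-laptop comparison above; it is not a measured specification.

```python
# Back-of-the-envelope comparison of petascale and exascale machines.
PETASCALE = 1e15   # calculations per second (today's fastest supercomputers)
EXASCALE = 1e18    # calculations per second (next-generation target)

# Assumed sustained laptop throughput: ~20 billion operations per second
# (several cores, each doing several operations per clock cycle).
LAPTOP_OPS_PER_SECOND = 20e9

print(f"Exascale vs. petascale: {EXASCALE / PETASCALE:,.0f}x faster")
print(f"Laptop-equivalents of one exascale machine: "
      f"{EXASCALE / LAPTOP_OPS_PER_SECOND:,.0f}")
```

Run as written, the sketch prints a 1,000x speedup and about 50 million laptop-equivalents.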

Why do we need such powerful machines? Researchers in many fields now produce so much data that we are having trouble storing, analyzing, sharing and visualizing all of it. This is known as big data.

Each day, people around the world create 2.5 quintillion bytes of data, enough to fill more than 156 million iPhones—and rates of data generation are only increasing. Ninety percent of the world’s data has been generated in the last two years alone. The Internet plays a large role in this accumulation through digital picture and video archives, posts to social media, records of financial transactions and search indexing, among other activities.

In science, technologies that can observe and measure larger and larger sets of data are pushing research to unprecedented frontiers. The Sloan Digital Sky Survey at Apache Point Observatory in New Mexico collected more data in its first few weeks of operation in 2000 than had previously existed in the history of astronomy. The survey continues to gather around 200 gigabytes of data, enough to fill 12 iPhones, every night.

The Large Hadron Collider, the world’s largest and highest-energy particle accelerator, produces 15 million gigabytes of data per year, enough to fill more than 937 thousand iPhones.
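For the curious, the iPhone comparisons above can be reproduced with a few lines of Python. The 16-gigabyte iPhone capacity is my assumption, not a figure from the surveys themselves, but it recovers the numbers quoted here.

```python
# Rough conversion of the data volumes above into "iPhones filled",
# assuming a 16-gigabyte iPhone (an assumption, not a cited figure).
IPHONE_BYTES = 16e9

volumes = {
    "Data created worldwide, per day": 2.5e18,      # ~2.5 quintillion bytes
    "Sloan Digital Sky Survey, per night": 200e9,   # ~200 gigabytes
    "Large Hadron Collider, per year": 15e6 * 1e9,  # ~15 million gigabytes
}

for label, nbytes in volumes.items():
    print(f"{label}: about {nbytes / IPHONE_BYTES:,.0f} iPhones")
```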

Our ability to analyze data cannot keep up with the staggering rates of production. Research efforts that regularly scrape up against data size limitations include meteorology, environmental research, gene sequencing, neuroscience mapping and complex physics simulations.

With new technologies to generate data, we need data storage and processing capabilities to keep up. When high-throughput data generation and computation are coupled, the results are impressive. While sequencing the first human genome took a decade, today a person’s genome can be sequenced in days.

Exascale supercomputers would strongly benefit computation-heavy research areas such as aerospace engineering, climate modeling, astrophysics, biology and national security efforts including cryptography and surveillance.

Supercomputers now in use have been applied to materials science, earthquake modeling, the residual ecological effects of the Gulf oil spill and other topics. Researchers have even used them to analyze whether the tone of news and social media coverage can reliably predict social conflicts, movements and revolutions.

At Argonne National Laboratory near Chicago, the fourth fastest supercomputer in the world is running the most complex simulation of the universe ever attempted. The simulation begins shortly after the big bang and runs a time lapse spanning 12 billion years. Currently underway, it uses data from high-fidelity sky surveys like Sloan to model how stars, and entire galaxies, form.

As much as supercomputing unleashes new possibilities for research, it comes with high costs. Supercomputer design and assembly cost anywhere from $100 million to $250 million. Annual energy costs amount to six to seven million dollars, on top of maintenance costs.

High performance computing (HPC) systems generate a tremendous amount of heat and therefore require a great deal of energy for cooling. At Lawrence Livermore National Laboratory, Sequoia—the second fastest supercomputer in the world—uses 3,000 gallons of water per minute to stay cool. On average, it draws six to seven million watts of electricity, enough to power 16,600 average households.

Supercomputers also have woefully short lifespans. Top supercomputers remain in their prime for only a couple of years before being succeeded by a faster model.

“It’s a useful resource for about five years,” Mike McCoy, who supervises Sequoia, told TIME. “Then, historically speaking, it makes no sense to keep them because the cost of maintenance and power is so much it makes more sense to go out and get a new system.”

Furthermore, advances in supercomputing are approaching limits imposed by the laws of physics. In the past, supercomputer developers would speed up existing designs by increasing the clock rates of their processors.

“We found that we can’t increase the frequency like we used to,” said McCoy, “simply because the amount of heat generated would melt the computer.”

For now, developers are simply adding more processors, which is why supercomputers keep growing in size. Still, there is a practical limit to size and energy consumption, and before we can achieve exascale computing, we need to consider entirely new designs and approaches for minimizing power consumption.
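As a rough sketch of why the machines keep growing, the arithmetic looks something like this; the clock rate and operations-per-cycle figures are illustrative assumptions, not specifications of any particular system.

```python
# Why exascale means more processors, not faster ones: with clock rates
# effectively capped, extra performance has to come from extra parallelism.
EXASCALE = 1e18            # target calculations per second
clock_hz = 2e9             # assumed clock rate, held at ~2 GHz
ops_per_cycle = 8          # assumed operations per core per cycle

ops_per_core = clock_hz * ops_per_cycle
print(f"Cores needed at a fixed 2 GHz clock: {EXASCALE / ops_per_core:,.0f}")
# Every further 1,000x in performance at a frozen clock rate means roughly
# 1,000x more cores, with all the space, power and cooling they demand.
```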

If we tried to build an exascale supercomputer by just scaling up current supercomputers, “it would take 1.5 gigawatts of power to run it, more than 0.1 percent of the total U.S. power grid,” wrote Peter Kogge, a professor of computer science and engineering at the University of Notre Dame, in an article for IEEE Spectrum. “You’d need a good-size nuclear power plant next door.”

Given these problems, Kogge said, “the party isn’t exactly over, but the police have arrived and the music has been turned way down.”

To achieve exascale computing by 2021, a time frame that most experts believe to be feasible, we need to invest in developing new technologies to overcome hurdles in memory, power consumption, speed and storage.

“Success in assembling such a machine will demand a coordinated cross-disciplinary effort,” wrote Kogge. “Device engineers and computer designers will have to work together to find the right combination of processing circuitry, memory structures, and communications conduits—something that can beat what are normally voracious power requirements down to manageable levels.”

The Obama administration has recognized the importance of investing in the development of exascale supercomputers. In addition to increasing the budget for supercomputing research, the White House announced in the spring of 2012 a national “Big Data Initiative,” in which six federal departments and agencies committed more than $200 million to big data research projects.

The Big Data Initiative included a National Science Foundation grant of $10 million over five years to AMPLab, a big data research institute at the University of California, Berkeley. The Department of Energy committed $25 million over five years for big data research at six national laboratories and seven universities.

The U.S. is not alone in pursuing supercomputing. In a race to gain a competitive edge in the global market for technology and innovation, China, Japan, the European Union and Russia are also pouring millions of dollars into supercomputer research.

This year, the U.S. reclaimed its position as home of the world’s fastest supercomputer after being unseated by Japan last year and by China two years ago. The U.S. has the largest number of HPC systems, with 250 of the 500 systems on the most recent list of the world’s fastest supercomputers from the TOP500 project. Asia accounts for 124 systems, and Europe accounts for 105.

China ranks second in number of HPC systems installed, with 72 supercomputers. In terms of supercomputing output, however, Japan holds the number-two position, after the U.S. and ahead of China.

Given the prohibitive power consumption and short life spans of HPC systems, are supercomputers worth all the fuss and taxpayers’ dollars? As the rate of supercomputing advances plateaus, should we be asking ourselves, “How super is super enough?”

HPC systems enable cutting-edge research during a time when businesses, academics and governments have more data than they know what to do with. As data-generating technologies improve, our data accumulation rates will only increase. Without the supercomputers to process that data, there will be a bottleneck in data analysis, management and storage.

Since their earliest days, supercomputers have been drivers of computer technology. Many components of commonplace computer architecture today were developed for supercomputers. In its time, ENIAC seemed to use up an exorbitant amount of space and energy. Today, however, we carry around technologies derived from it in our pockets—in our smartphones and tablets.

The development of technology is an iterative process. As one technological improvement builds on another, societies grow in the scope of work and access to information they can achieve. The spirit of innovation is to always believe that there is more work to be done.

In the spirit of innovation then, let’s keep striving for the exascale.

This piece was originally written for a Brown University course, “Writing Science” with Cornelia Dean.
