The Future Of Energy

Perhaps even more than exposing the instability of the worldwide economic ponzi system, 2011 has so far been most remarkable for fully demonstrating the fragility of the global energy complex. In the aftermath of the Fukushima nuclear crisis (and the moratorium on nuclear energy now in Germany, and soon in other places) and the MENA revolutions, the question has become what happens in a world in which crude is getting ever scarcer while the main legacy alternative, fission-based nuclear power, has just taken a giant step back. The limitations of conventional energy and the possibilities of alternative energy have gripped the general public's mind to such an extent that Popular Science magazine has dedicated its entire July edition to answering that very critical question. As PopSci says: "Oil’s amazing efficiency is one reason it remains in such high demand, especially for transportation, and it’s also why finding an alternative will be so difficult. But find one we must. We have already burned our way through most of the world’s easy oil. Now we’re drilling for the hard stuff: unconventional resources such as shale and heavy oil that will be more difficult and expensive to discover, extract, and refine. The environmental costs are also on the rise." So what is the existing lineup of alternatives to the current crude oil-dominated energy paradigm? Below we present the complete list.

Next-Generation Nukes

Nuclear power may have taken a major step back after the biggest nuclear catastrophe since Chernobyl, but that does not mean existing Generation III designs (the Fukushima reactors are Gen II) are not viable and safe. Below is a summary of the key aspects of these designs, which are already operating in Japan, France and Russia.

In the 30 years since regulators last approved the construction of a new nuclear plant in the U.S., engineers have improved reactor safety considerably. The newest designs, called Generation III+, are just beginning to come online. (Generation I plants were early prototypes; Generation IIs were built from the 1960s to the 1990s and include the facility at Fukushima; and Generation IIIs began operating in the late 1990s, though primarily in Japan, France and Russia.)

Unlike their predecessors, most Generation III+ reactors have layers of passive safety elements designed to stave off a meltdown, even in the event of power loss. Construction of the first Generation III+ reactors is well under way in Europe. China is also in the midst of building at least 30 new plants. In the U.S., the Southern Company recently broke ground on the nation’s first Generation III+ reactors at the Vogtle nuclear plant near Augusta, Georgia. The first of two reactors is due to come online in 2016.


A central feature of one such design, Westinghouse's AP1000, is an 800,000-gallon water tank positioned directly above the containment shell. The reservoir’s valves rely on electrical power to remain closed. When power is lost, the valves open and the water flows down over the containment shell, where it evaporates; vents passively draw air from outside and direct it over the structure, furthering the evaporative cooling.
 
Depending on the type of emergency, an additional reservoir within the containment shell can be manually released to flood the reactor. As water boils off, it rises and condenses at the top of the containment shell and streams back down to cool the reactor once more. Unlike today’s plants, most of which have enough backup power onsite to last just four to eight hours after grid power is lost, the AP1000 can safely operate for at least three days without power or human intervention.
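
As a quick sanity check on that three-day claim, here is a back-of-envelope sketch of how long an 800,000-gallon gravity-fed tank could absorb decay heat by evaporating. The reactor's ~3,400 MW thermal rating and the ~1 percent decay-heat fraction are textbook assumptions of ours, not figures from the article:

```python
# Back-of-envelope: how long can an 800,000-gallon tank absorb decay heat
# by evaporating? Assumptions (ours, not the article's): ~3,400 MW thermal
# rating, decay heat ~1% of thermal power after shutdown, water starting
# ~70 degrees C below boiling.

GALLONS = 800_000
KG_PER_GALLON = 3.785              # mass of one US gallon of water
LATENT_HEAT_J_PER_KG = 2.26e6      # energy to vaporize water at 100 C
SENSIBLE_J_PER_KG = 4186 * 70      # energy to heat the water up to boiling

decay_heat_w = 0.01 * 3.4e9        # ~1% of ~3,400 MWt (assumed)

mass_kg = GALLONS * KG_PER_GALLON
energy_j = mass_kg * (LATENT_HEAT_J_PER_KG + SENSIBLE_J_PER_KG)
days = energy_j / decay_heat_w / 86_400
print(f"Tank absorbs ~{energy_j:.1e} J -> lasts ~{days:.1f} days")
# -> roughly 2.5 days, the same order as the "at least three days" claim
```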


Regardless of the safety precautions, existing fission-based power will always carry the risk of a meltdown. Which brings us to...

Thorium-Powered Molten-Salt Reactor

Even with their significant safety improvements, Generation III+ plants can, theoretically, melt down. Some people within the nuclear industry are calling for the implementation of still newer reactor designs, collectively called Generation IV. The thorium-powered molten-salt reactor (MSR) is one such design. In an MSR, liquid thorium would replace the solid uranium fuel used in today’s plants, a change that would make meltdowns all but impossible.

MSRs were developed at Tennessee’s Oak Ridge National Laboratory in the early 1960s and ran for a total of 22,000 hours between 1965 and 1969. “These weren’t theoretical reactors or thought experiments,” says engineer John Kutsch, who heads the nonprofit Thorium Energy Alliance. “[Engineers] really built them, and they really ran.” Of the handful of Generation IV reactor designs circulating today, only the MSR has been proven outside computer models. “It was not a full system, but it showed you could successfully design and operate a molten-salt reactor,” says Oak Ridge physicist Jess Gehin, a senior program manager in the lab’s Nuclear Technology Programs office.

The MSR design has two primary safety advantages. Its liquid fuel remains at much lower pressures than the solid fuel in light-water plants. This greatly decreases the likelihood of an accident, such as the hydrogen explosions that occurred at Fukushima. Further, in the event of a power outage, a frozen salt plug within the reactor melts and the liquid fuel passively drains into tanks where it solidifies, stopping the fission reaction. “The molten-salt reactor is walk-away safe,” Kutsch says. “If you just abandoned it, it had no power, and the end of the world came--a comet hit Earth--it would cool down and solidify by itself.”

In addition to safety, thorium power provides other strategic benefits:

Without the need for large cooling towers, MSRs can be much smaller than typical light-water plants, both physically and in power capacity. Today’s average nuclear power plant generates about 1,000 megawatts. A thorium-fueled MSR might generate as little as 50 megawatts. Smaller, more numerous plants could save on transmission loss (which can be up to 30 percent on the present grid). The U.S. Army is interested in using MSRs to power individual bases, Kutsch says, and Google, which relies on steady power to keep its servers running, held a conference on thorium reactors last year. “The company would love to have a 70- or 80-megawatt reactor sitting next door to a data center,” Kutsch says.
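
The arithmetic behind the "smaller, more numerous plants" argument is easy to sketch. In the toy comparison below, the 30 percent loss figure comes from the article; the 5 percent loss assumed for reactors sited next to their loads is our illustrative assumption:

```python
# Toy comparison: generation capacity needed to deliver 1,000 MW to users.
# The 30% loss is the article's worst-case grid figure; the 5% loss for
# reactors sited next to their loads is our illustrative assumption.

def required_capacity_mw(delivered_mw: float, loss_fraction: float) -> float:
    """Capacity needed so delivered_mw survives transmission losses."""
    return delivered_mw / (1 - loss_fraction)

remote_plant = required_capacity_mw(1000, 0.30)  # big plant, lossy grid
local_msrs = required_capacity_mw(1000, 0.05)    # small MSRs near demand

print(f"Remote 1,000 MW-class plant: ~{remote_plant:.0f} MW of capacity")  # ~1,429
print(f"Co-located MSRs: ~{local_msrs:.0f} MW, i.e. ~{local_msrs/50:.0f} "
      f"units of 50 MW")                                                   # ~21 units
```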


Naturally, the transition from conventional fission plants to MSRs would involve massive costs and a huge overhaul of the existing regulatory regime. Which raises the question: instead of going the MSR route, why not focus on a totally different energy creation paradigm entirely, namely...

Fusion Power

The reason fusion power has long been the holy grail of energy production is simple: it is the most efficient form of energy generation available. After all, fusion, the process at the heart of the sun, is ultimately the source of all life on Earth.

The well-publicized failures of cold fusion may have tainted the field’s reputation, but physicists have been successfully joining nuclei with hot fusion since 1932. Today, research in hot fusion could lead to a clean energy source free from the drawbacks that dog fission power plants. Fusion power plants cannot melt down; they won’t produce long-lived, highly radioactive waste; and fusion fuel cannot be easily weaponized.

At the forefront of the effort to realize fusion-based power is ITER, an international collaboration to build the world’s largest fusion reactor. At the heart of the project is a tokamak, a doughnut-shaped vessel that contains the fusion reaction. In this vessel, magnetic fields confine a plasma composed of deuterium and tritium, two isotopes of hydrogen, while particle beams, radio waves and microwaves heat it to 270 million degrees Fahrenheit, the temperature needed to sustain the fusion reaction. During the reaction, the deuterium and tritium nuclei fuse, producing helium and a neutron. In a fusion power plant, those energetic neutrons would heat a structure, called a blanket, in the tokamak and that heat would be used to turn a turbine to produce electricity.

The ITER reactor will be the largest tokamak ever made, producing 500 megawatts of power, about the same output as a coal-fired power plant. But ITER won’t generate electricity; it’s just a gigantic physics experiment, albeit one with very high potential benefits. A mere 35 thousandths of an ounce of deuterium-tritium fuel could produce energy equivalent to 2,000 gallons of heating oil. And ITER’s process is “inherently safe,” says Richard Pitts, a senior scientific officer on the project. “It can never, ever be anything like what you see in the fission world--in Chernobyl or Fukushima--and this is why it is so attractive.”
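
That fuel-efficiency claim can be checked from first principles. A minimal sketch, assuming the textbook 17.6 MeV released per deuterium-tritium fusion, complete burn-up of the fuel, and ~138,500 BTU per gallon of heating oil (all standard values, none from the article):

```python
# Sanity check: energy in 35/1000 oz (~1 gram) of D-T fuel vs. 2,000 gallons
# of heating oil. Standard physical constants; assumes complete burn-up.

MEV_TO_J = 1.602e-13
AMU_TO_KG = 1.661e-27
E_PER_REACTION_J = 17.6 * MEV_TO_J       # D + T -> He-4 + n releases 17.6 MeV
FUEL_PER_REACTION_KG = 5 * AMU_TO_KG     # one deuteron (2 amu) + one triton (3 amu)

fuel_kg = 0.035 / 35.274                 # 35 thousandths of an ounce ~ 0.99 g
fusion_j = fuel_kg / FUEL_PER_REACTION_KG * E_PER_REACTION_J

oil_j = 2_000 * 138_500 * 1_055          # gallons * BTU/gallon * J/BTU

print(f"D-T fuel:    ~{fusion_j:.1e} J")  # ~3.4e11 J
print(f"Heating oil: ~{oil_j:.1e} J")     # ~2.9e11 J -- same order of magnitude
```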

Alas, fusion energy is at best decades away:

To fully commercialize tokamak-based fusion, developers must overcome several challenges. First is the matter of breeding the tritium; there are only about 50 pounds of it in the world at any given time because it is not naturally occurring and decays quickly. (Deuterium is not radioactive and can be distilled from water.) Although ITER may use tritium produced by nuclear power plants, a full-scale fusion plant will need to produce its own supply--neutrons from the fusion reaction could be used to convert a stash of lithium into tritium. Physicists must also determine which materials can best withstand the by-products of the fusion reaction, which will wear down the tokamak’s walls. Finally, residual radioactivity in the device will pose maintenance problems because people won’t be able to work safely within the vessel. ITER scientists must develop robots capable of replacing parts that can weigh up to 10 tons.
 
ITER will begin experiments in 2019 in France. If those are successful, the data produced by the project will aid the ITER team in the design of DEMO, a proposed 2,000- to 4,000-megawatt demonstration fusion power plant that will be built by 2040.

ITER in action:

Fuel: Engineers inject two hydrogen isotopes, deuterium and tritium, into the tokamak, a high-powered doughnut-shaped vacuum chamber.

Plasma: A strong electric current heats the deuterium and tritium gases and ionizes them, forming a ring of plasma, a glowing soup of charged particles.

Heat: Radio waves, microwaves and high-energy deuterium particle beams heat the plasma. At high temperatures, the deuterium and tritium fuse to form a helium atom and a neutron.

Containment: If the plasma touches the walls of the tokamak, it will scuttle the fusion reaction. The charged particles are confined in a magnetic field generated by 39 superconducting poloidal, toroidal and central solenoid magnets positioned around the outside of the doughnut and within its hole.

Lining: The vessel is lined with a steel blanket 1.5 feet thick to protect the tokamak walls from highly energetic neutrons.

Why the need for the above energy alternatives? One does not have to believe in peak oil to realize that crude is becoming increasingly difficult to procure. Per PopSci:

Even if we were ready to mass-produce a new generation of, say, biofueled plug-in hybrid electric cars by 2020, and even if we--in an absurdly best-case scenario--started cranking out those new cars as fast as we now make gas guzzlers (about 70 million a year, worldwide), we would still need another 15 years to swap out the fleet. In the meantime, oil consumption will continue to rise, as demand from fast-growing economies in Asia outweighs any green gains by Western nations.

David Victor, an international energy policy specialist at the University of California at San Diego, says consumption won’t even begin tapering off for another 20 years. At that point, daily consumption, now at 85 million barrels a day (mbd), will have topped 100 mbd. Realistically, says James Sweeney, director of the Precourt Energy Efficiency Center at Stanford University, cutting global oil consumption to a more economically and environmentally tolerable level (say, 30 mbd) will probably take at least four decades. Before then, he says, “we will use a lot of oil.”

How much? At the rate Victor suggests, we’ll need something like a trillion barrels of crude to get us to the peak of oil consumption sometime in the 2030s--and, in all likelihood, another trillion barrels to get us down the other side, to a point where oil is a vastly smaller part of the energy economy. Just to bridge the gap, then, we’ll have to extract about two trillion barrels of oil during the next four decades--almost double the 1.2 trillion barrels we’ve already burned through since Pennsylvania wildcatters launched the oil age in 1859.
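
Those trillion-barrel totals are just the area under a consumption curve. A minimal sketch; the linear ramp-and-decline shape is purely our assumption, with only the endpoints (85 mbd today, ~100 mbd at the peak, ~30 mbd target) taken from the text:

```python
# Cumulative oil consumption under a stylized 40-year profile: a linear rise
# from 85 to 100 mbd over 20 years, then a linear decline from 100 to 30 mbd
# over the next 20. Only the endpoints come from the text; the shape is ours.

def ramp_total_gbbl(start_mbd: float, end_mbd: float, years: int) -> float:
    """Trapezoidal total for a linear ramp, in billions of barrels."""
    return (start_mbd + end_mbd) / 2 * 1e6 * 365 * years / 1e9

up = ramp_total_gbbl(85, 100, 20)    # ~675 billion barrels to the peak
down = ramp_total_gbbl(100, 30, 20)  # ~475 billion barrels down the other side
print(f"Rise: ~{up:.0f} Gbbl, decline: ~{down:.0f} Gbbl, total: ~{up+down:.0f} Gbbl")
# A plateau at the peak or a slower decline pushes the total toward the
# ~2 trillion barrels cited above; the point is the order of magnitude.
```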

Hossein Kazemi, a professor of petroleum engineering at the Colorado School of Mines, says that about half of those final two trillion barrels have already been discovered and are waiting in “proven” reserves that can be exploited profitably using today’s technology. The other half won’t come so easily. By some estimates, the Earth contains up to eight trillion more barrels of oil, but that oil exists in many forms, some of which, such as shale oil, can be extremely expensive to extract or refine. And as we work our way through the easiest oil, we will also be confronted by increasing external costs—real costs that nonetheless aren’t accounted for at the gas pump. A desperate rush to extract oil from unstable nations can topple regimes, for instance, even as extracting it from environmentally fragile spots can do major harm to the land or the sea.

Which means that we face a series of complex choices, not just about where to extract what kind of oil, but also about when to extract it. Going after everything at once may seem wise, especially to oil entrepreneurs invested in specific resources or policymakers unconcerned about external costs. But as engineers develop new extraction and refinement techniques, oil that is expensive or environmentally harmful now may be cheaper or cleaner in the future. With that in mind, what would happen if we considered how best to extract our two trillion barrels not from the short-term perspective of a politician or a businessman, but from the longer view of a petroleum engineer? Which oil would we save for last, and which would we go for first?

Below is a list of legacy energy forms that are currently being exploited and that require far lower capital investment to generate incremental returns:

Shale

Total reserves: 3 trillion barrels of oil equivalent (BOE)

Given the political anxiety surrounding the prospect of importing oil, U.S. policymakers will be understandably tempted to reach first for the closest, richest oil resource. For many, that would suggest shale oil. The vast deposits located beneath Colorado, Utah and Wyoming alone could generate up to 800 billion barrels of oil. But policymakers should resist that urge.

Oil shale is created when kerogen, the organic precursor to oil and natural gas, accumulates in rock formations without being subjected to enough heat to be completely cooked into oil. Petroleum engineers have long known how to finish the job, by heating the kerogen until it vaporizes, distilling the resulting gas into a synthetic crude, and refining that crude into gasoline or some other fuel. But the process is expensive. The kerogen must either be strip-mined and converted aboveground or cooked, often by electrical heaters, in the ground and then pumped to the surface. Either process pushes production costs up to $90 a barrel. As all crude prices rise, though, the added expense of shale oil may come to seem reasonable--and it is likely to drop in any case if the shale oil industry, now made up of relatively small pilot operations, scales up.

The problem is that the external costs of shale oil are also very high. It is not energy-dense (a ton of rock yields just 30 gallons of pure kerogen), so companies will be removing millions of tons of material from thousands of acres of land, which can introduce dangerous amounts of heavy metals into the water system. The in-ground method, meanwhile, can also contaminate groundwater (although Shell and other companies say this can be prevented by freezing the ground). Both methods are resource-intensive. Producing a barrel of synthetic crude requires as many as three barrels of water, a major constraint in the already parched Western U.S. With in-ground, the kerogen must be kept at temperatures as high as 700°F for more than two years, and aboveground processes use a lot of heat as well. Those demands, coupled with kerogen’s low energy density, yield returns ranging from 10:1 (that is, 10 barrels of output for every one barrel of input) to an abysmal 3:1.
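
To make shale's poor energy density concrete, the short sketch below converts the figures above (30 gallons of kerogen per ton of rock, 42 gallons to the barrel) into rock moved per barrel, and shows how much of each barrel survives the EROEI (energy return on energy invested) toll:

```python
# Shale oil by the numbers, using only figures quoted in the article.

GALLONS_PER_BARREL = 42
KEROGEN_GALLONS_PER_TON = 30   # one ton of rock yields ~30 gallons of kerogen

rock_tons_per_bbl = GALLONS_PER_BARREL / KEROGEN_GALLONS_PER_TON
print(f"Rock processed per barrel of kerogen: ~{rock_tons_per_bbl:.1f} tons")  # ~1.4

# Net energy at the quoted EROEI range (10:1 best case down to 3:1):
for eroei in (10, 3):
    net_fraction = 1 - 1 / eroei   # share of gross output left after input energy
    print(f"EROEI {eroei}:1 -> {net_fraction:.0%} of each barrel is net energy")
```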

Coal

Total reserves: 1.5 trillion BOE

Coal can also be converted into a synthetic crude, as the German army, desperate for fuel, demonstrated during World War II. The method of transformation is simple: Engineers blast the coal with steam, breaking it into a gas that can then be converted, by the Fischer-Tropsch process, into gasoline and other fuels. Many energy companies are promoting various coal-to-liquid processes (CTL) as a way to replace oil, especially in the U.S. and other coal-rich nations.

The appeal is obvious. At a conversion rate of just under two barrels per ton, the world’s 847 billion tons of recoverable coal theoretically represent roughly 1.5 trillion barrels of synthetic oil, or a substantial piece of the final trillion.

Like shale oil, however, CTL has significant shortcomings. Its energy return is unimpressive; a barrel’s worth of invested energy nets just three to six barrels of CTL. Moreover, coal contains about 20 percent more carbon than oil does, and converting it to liquid raises the ratio even further. CTL fuels have a carbon footprint nearly twice as large as that of conventional oil--1,650 pounds of CO2 per barrel of CTL, versus 947 pounds per barrel of conventional.
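
A quick check of the coal arithmetic in the last two paragraphs, using only figures quoted above (the per-ton conversion rate is back-solved from the article's totals):

```python
# Coal-to-liquids arithmetic from the article's own figures.

coal_tons = 847e9          # recoverable coal worldwide, tons
bbl_per_ton = 1.77         # "just under two barrels per ton", back-solved
print(f"Synthetic crude: ~{coal_tons * bbl_per_ton / 1e12:.1f} trillion barrels")  # ~1.5

ctl_lbs_co2, conventional_lbs_co2 = 1_650, 947   # lbs CO2 per barrel
ratio = ctl_lbs_co2 / conventional_lbs_co2
print(f"CTL carbon footprint: ~{ratio:.2f}x conventional oil")  # ~1.74x, "nearly twice"
```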

Even if producers installed a vast and expensive system to capture and sequester the CO2 produced during the conversion process, says Edward Rubin, a professor of environmental engineering at Carnegie Mellon University, coal production uses so much energy that CO2 emissions from CTL fuels would still be as great as those of conventional oil. At best, making fuel from coal would get us no closer to a more climate-compatible energy system.

All of that aside, even the supply of coal is not infinite. Researchers at the Rand Corporation concluded in 2008 that replacing just 10 percent of U.S. daily transportation fuel with CTL would take 400 million tons of coal annually, which would mean expanding the American coal industry, which is already straining environmental limits, by 40 percent. Although such an undertaking might be politically feasible in China or other nations, Rubin says, “I have a hard time seeing that in this country.”

Heavy Oil

Total reserves: 1 to 2 trillion BOE

Other unconventional resources may, despite their many shortcomings, become somewhat more attractive as new extraction methods come online. One of these is “heavy oil,” which ranges from the molasses-like crude of Venezuela to the bituminous oil sands of Alberta. For decades, oil traders saw heavy oil as inferior to light crude, which is easier to extract and whose shorter-chain molecules are more readily refined. Heavy oil’s bigger molecules, in contrast, were suited mainly to low-profit products, such as ship fuel or asphalt. But new refining techniques are making heavy oil easier to render into gasoline, and new extraction methods are making it easier to get out of the ground.

At a heavy-oil field outside Bakersfield, California, for instance, Chevron deploys computer-guided steam injection to thin the oil sufficiently to pump out. Even more promising are oil-sands operations in Alberta, where companies are now separating the brittle bitumen from sand and clay and cooking it into synthetic crude. At a conversion rate of one barrel for every two tons of sand, Alberta’s oil sands alone may contain up to 315 billion barrels of crude. As refining costs have dropped, output has reached 1.5 mbd and could more than quadruple, to 6.3 mbd, by 2035.
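
The material throughput implied by that conversion rate is worth spelling out. A rough sketch using the article's figures, and treating all output as mined (an overstatement, since some production cooks the bitumen in place):

```python
# Oil sands: material throughput implied by the quoted conversion rate.

TONS_SAND_PER_BARREL = 2

for output_mbd in (1.5, 6.3):   # today's output and the projected 2035 figure
    tons_per_day = output_mbd * 1e6 * TONS_SAND_PER_BARREL
    print(f"{output_mbd} mbd -> ~{tons_per_day/1e6:.1f} million tons of sand/day")
# 1.5 mbd -> ~3.0 Mt/day; 6.3 mbd -> ~12.6 Mt/day (an upper bound, since
# in-situ production does not move the sand at all)
```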

That said, heavy-oil production also has plenty of external costs. As with the kerogen in shale, the bitumen is processed either in-ground or by strip-mining. Both processes consume up to 4.5 barrels of water for every barrel of oil they produce and yield an unimpressive EROEI of about 7:1. And because heavy oils are carbon-rich, the CO2 footprint of crude from bitumen is up to 20 percent higher than that of conventional crude—not as bad as coal, but not exactly friendly to the environment either. Carbon-capture and -sequestration techniques can only keep so much of that CO2 out of the atmosphere. Oil-sands operations are sprawling, and as a result, very little of the total CO2 emissions can be captured (one study suggests we might trap just 40 percent by 2030).

If carbon-capture techniques improve, though, heavy oil could make up a substantial share of the final two trillion barrels for a carbon penalty substantially below that of either CTL or shale oil. A further advantage (from the U.S. perspective) is that a lot of heavy oil is located in a politically stable country that’s right next door.

Ultra-Deep Offshore

Total reserves: 0.1 to 0.7 trillion BOE

The “deep” in ultra-deep refers to the depths plumbed by floating oil rigs (typically, anything beyond 5,000 feet). But the more important depth is the distance from the ocean floor to the oil itself. It’s not easy to start an excavation a mile or two underwater, much less one that continues on for several more miles underground (the current record, set in 2009 in the Gulf of Mexico, is nearly seven miles). But an ever-expanding drilling fleet is deploying new techniques in horizontal drilling, sub-sea robotics and “four-dimensional” seismology (which geologists use to track oil and natural-gas deposit conditions in real time) to rapidly expand output. Although fewer than half the world’s ultra-deep provinces have been fully explored, deepwater output in the past decade has more than tripled, to 5 mbd, and it could double again by 2015.

As the Deepwater Horizon disaster made clear last year, though, tapping this resource can involve significant external costs. The pressure in ultra-deep reservoirs can reach up to 2,000 times that at sea level. The oil within can be extremely hot (up to 400°F) and rife with corrosive compounds (including hydrogen sulfide, which when in water can dissolve steel). And the pipes that rise from the seafloor are so long and heavy that the platforms supporting them must be extraordinarily large simply to stay afloat. The biggest discovery in decades, Brazil’s “pre-salt play,” meanwhile, is defended by a 1.5-mile-thick ceiling of salt, which had the beneficial effect of absorbing surrounding heat and keeping the oil from breaking down—but which also, in doing so, congealed the oil into a paraffinic jelly that drillers must now thin with chemicals before they can extract it.

Not surprisingly, ultra-deepwater oil is some of the most expensive in the business. A single drilling platform can cost $600 million or more (especially if the deepwater is in the Arctic, where rigs must be armored to withstand Force-10 winter storms and hull-crushing ice floes), and companies can easily spend $100 million drilling a single ultra-deepwater well. The result of all this effort is a modest EROEI--from 15:1 all the way down to 3:1.

Thus, even as companies scramble to improve safety, most of the research and development in the ultra-deep will focus on saving money and energy. Remotely controlled, steerable drill heads, for example, allow companies to drill multiple bores from a single platform (thus lowering costs and the aboveground footprint) and to follow the path of narrow oil seams, greatly increasing oil output. (The record for a horizontal bore, set by Exxon near Russia’s Sakhalin Island, is also about seven miles.) To further cut drilling costs, companies will steadily boost rates of penetration with more-powerful drill motors, drill bits made of ever-harder materials and, eventually, a drilling process that uses no bits at all. Tests at Argonne National Laboratory suggest that high-powered lasers can penetrate rock faster than conventional bits, either by superheating the rock until it shatters or by melting it.

Costs will further recede as companies develop more-accurate “multi-channel” seismic prospecting techniques that will, by combining up to a million seismic signals, help them avoid the ultimate waste of drilling into empty rock. And to better measure the oil reservoirs themselves, companies are creating heat- and pressure-resistant “downhole” sensors (similar to devices NASA developed to monitor rocket engines) that communicate to surface computers via optical fiber.

As the volume of data rises, the industry will also create more-powerful tools to analyze it, from monster compression algorithms (courtesy of Hollywood animators) to entirely new computing architectures. “If we go to a million channels [of seismic data], then we need petaflop computation capability, which we currently do not have,” says Bruce Levell, Shell’s chief scientist for geology. To get that capability, oil firms are working with Intel, IBM and other hardware firms. In the future, Levell says, the oil business “is really going to drive high-performance computing.”

Natural Gas

Total reserves: 1 trillion BOE

Natural gas, or simply “gas” in industry parlance, has long been oil’s biggest potential rival as a transport fuel. Gas is cleaner than oil--it emits fewer particulates and a quarter less carbon for the same amount of energy output--yet today it powers less than 3 percent of the U.S. transportation fleet (mainly in the form of compressed natural gas, or CNG). This proportion is poised to grow, though, in part because the overall supply of gas keeps growing.

With advances in a drilling technique called hydraulic fracturing, or “fracking,” companies can now profitably extract gas from previously hard-to-reach shale formations. Worldwide reserves of shale gas currently stand at 6,662 trillion cubic feet, the energy equivalent of 827 billion barrels of oil. And that doesn’t include the gas that is routinely discovered alongside oil in oil fields and that is sure to be found in some of those yet-to-be-explored deepwater basins.
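
The cubic-feet-to-barrels conversion behind that 827-billion figure is easy to reproduce. Note that BOE conventions vary (roughly 5,800 cubic feet per barrel is a common one); the factor below is simply back-solved from the article's own numbers:

```python
# Barrels-of-oil-equivalent (BOE) conversion behind the shale gas figure.

shale_gas_cf = 6_662e12     # 6,662 trillion cubic feet (from the article)
boe = 827e9                 # the article's oil-equivalent figure

implied_cf_per_boe = shale_gas_cf / boe
print(f"Implied conversion: ~{implied_cf_per_boe:,.0f} cubic feet per BOE")  # ~8,056

# For comparison, a common industry convention is ~5,800 cf per BOE:
print(f"At 5,800 cf/BOE: ~{shale_gas_cf / 5_800 / 1e9:,.0f} billion BOE")    # ~1,149
```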

Gas is so plentiful that, in energy-equivalent terms, its price is a quarter that of oil--a bargain that is already transforming CNG from a niche fuel, used mainly in bus fleets, to a product for general consumption. The Texas refiner Valero, for instance, will soon begin selling CNG at new stations in the U.S.

A gas-powered future could still have some high external costs, though. Fracking can be extremely hazardous to the local environment. The method uses high-pressure fluids to break open deep rock formations in which gas is trapped, and these fluids often contain toxins that might contaminate groundwater supplies. But such risks, which have received substantial media coverage and are now the focus of a new White House panel, may be controllable. Gas deposits are typically thousands of feet belowground, while groundwater tables are much closer to the surface, so most contamination is thought to take place where the rising bore intersects with the water table--a risk that could be minimized by requiring drillers to more carefully seal the walls of the bore.

That said, allocating too much natural gas to transportation might have surprisingly negative consequences. First, it would most likely increase demand for natural gas so much that prices would rise, thereby undermining the current cost advantage. Second, shifting a large volume of gas to the transportation sector would mean pulling that volume away from the power sector, where it is more constructively displacing coal, whose carbon content is far higher than oil’s. But converting specific sectors of the transportation system (delivery fleets, for instance, or buses) could simultaneously cut CO2 emissions and reduce oil demand.

Enhanced Oil Recovery

Total reserves: 0.5 trillion BOE

The resource that comes with the lowest external cost might be the oil we left behind, back when energy was a lot cheaper. Drillers typically end up extracting just a third of the oil in a given field, in part because when they drain reservoirs they also decrease the pressure that pushes oil to the surface, making it more expensive to extract the remaining barrels. In the U.S., abandoned oil fields may still contain a staggering 400 billion barrels of residual oil; worldwide, the figure is probably in the trillions. Extracting all of it is economically impossible, but advances in enhanced oil recovery, or EOR, could boost extraction rates to as high as 70 percent.

EOR could add perhaps half a trillion “new” barrels worldwide. And it could also carry a substantial environmental bonus. One of the most promising EOR methods involves “flooding” oil reservoirs with CO2, which dissolves into the oil, making it both thinner and more voluminous, and thus easier to extract. Once the oil is extracted, the CO2 can be separated, re-injected into the field, and sequestered there permanently. An aggressive strategy in which CO2 is captured from single-point sources (such as power plants or refineries) and pumped into oil fields could increase U.S. oil output by as much as 3.6 mbd while sequestering nearly a billion tons of CO2. And depending on the method, EOR can have an EROEI as high as 20:1.
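
The recovery-factor arithmetic behind those numbers, sketched from the article's figures (one-third typical recovery, up to 70 percent with EOR, ~400 billion barrels of U.S. residual oil) and treating the residual as all of the unrecovered oil in place, which is a simplification:

```python
# Enhanced oil recovery: incremental barrels from raising the recovery factor.

rf_today, rf_eor = 1 / 3, 0.70
residual_us_bbl = 400e9      # residual oil in U.S. fields (article's figure)

# If one third was recovered and 400 Gbbl remain, original oil in place was:
ooip = residual_us_bbl / (1 - rf_today)      # ~600 Gbbl
incremental = ooip * (rf_eor - rf_today)     # barrels EOR could unlock
print(f"OOIP: ~{ooip/1e9:.0f} Gbbl; EOR upside: ~{incremental/1e9:.0f} Gbbl")
# -> ~220 Gbbl in the U.S. alone, consistent in scale with the article's
#    "half a trillion barrels worldwide"
```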

EOR can’t entirely bridge the gap--but in a perfect world, we would at least begin by tapping those barrels, along with the oil-equivalent barrels of natural gas. That way, we would be using the least damaging resources first and saving the worst barrels for later, when (if all goes well) future engineering innovations will let us extract and consume them more safely and efficiently.

But of course, we don’t live in a perfect world. For now, oil producers will do what they have always done, which is to extract oil as cheaply as they can. And oil consumers will follow suit, buying the cheapest energy they can. We may eventually ask the market to take the true costs of production into account, perhaps by way of a carbon tax or some kind of climate regulation. Or we may not. Energy policy has never been particularly far-sighted. There is little chance that the transition to a clean-energy economy will be entirely clean. It will require trade-offs and compromises, and the cost of those trade-offs and compromises will rise with every year that we wait to get serious about moving away from oil.

 


One thing is certain: the status quo™, which is just as entrenched in the legacy financial system as it is in the existing energy paradigm, will do nothing until it is far too late to provide for a contingency plan while such a plan is still feasible and not cost-prohibitive. After all, by the time things get so bad that there is no choice but to move on to something "else," it will be some other, far less entitled, generation's problem.

Source: Popular Science