Civilization Needs Sustainable Energy – Fusion Breeding May Be Best

Civilization requires power. For a while it can get by with the supplies we currently use: fossil fuel, nuclear fuel, hydroelectric, solar and wind. Only the last three are sustainable. The first two will run out at some point, very likely well before the end of this century, especially if the less developed parts of the world come up to OECD standards. This paper makes the case that solar and wind are not up to the job, and neither is pure fusion, at least in this century. However, using fusion to breed fissile material for current nuclear reactors could play an important role well before century's end. The requirements on a fusion device used as a breeder are considerably relaxed from those for pure fusion. It is likely that an ITER-type device could be used for fusion breeding on a large scale. Fusion breeding can supply nuclear fuel for civilization, at 30-40 terawatts (TW), at least as far into the future as the dawn of civilization was in the past.


Figure 1. Plot of energy use from BP Energy Outlook 2019
The vertical scale is in billions of barrels of oil equivalent (Btoe) per year. To switch into more familiar units, 1 Btoe per year is about one terawatt (TW). https://www.bp.com/content/dam/bp/businesssites/en/global/corporate/pdfs/energy-economics/energy-outlook/bp-energy-outlook-2019.pdf At this point the world uses about 14 TW. As we can see from the middle graph, power use is very unequal. The 1.2 billion people in the OECD countries use about 6 TW, or 5 kW per capita. In the USA, we use ~8 kW per capita. The other 6 billion people use ~8 TW, or about 1.3 kW per capita. How much longer will this be acceptable?
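The per-capita arithmetic above is easy to check. A minimal sketch, using only the figures quoted from the BP outlook (world use ~14 TW, OECD ~6 TW for 1.2 billion people, ~8 TW for the other 6 billion):

```python
# Rough per-capita power arithmetic from the BP figures quoted above.
TW = 1e12  # watts

oecd_power = 6 * TW
oecd_population = 1.2e9
oecd_per_capita = oecd_power / oecd_population  # watts per person

rest_power = 8 * TW
rest_population = 6e9
rest_per_capita = rest_power / rest_population  # watts per person

print(round(oecd_per_capita))  # 5000 W = 5 kW per capita
print(round(rest_per_capita))  # 1333 W, about 1.3 kW per capita
```

The nearly fourfold gap between the two per-capita figures is the quantitative content of the "how much longer will this be acceptable?" question.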
By midcentury, the world population is expected to level off at ~10 billion, each of whom will demand a middle-class lifestyle. Bringing the world up to OECD standards would seem to necessitate 50 TW of world power. However, energy efficiency (i.e. GDP per watt) can be expected to improve as well (typically 0.5-1% per year) [Hoffert 1998], so optimistically the number might be closer to 35-40 TW. This is an extremely necessary and desirable goal.
In 2009 I was at a science meeting where a high-ranking member of the Chinese Academy of Sciences said that in 2000 the average Chinese citizen used ~10% of the power of the average American, that by 2009 the figure was ~20%, and that they would not rest until per capita power use was roughly equal. It is now ~30%.
The developing world will use whatever fuel works best, and at this time that is very likely coal. The Chinese and Indians are rapidly and enormously increasing their coal use today. China is by far the largest CO2 emitter in the world, and it is still building coal-fired power plants at a rapid pace.
India is doing everything it can to catch up. Soon Africa, the rest of Asia, and Latin America will do the same. Nothing can stop this. Asia is hardly even paying lip service to solar and wind today. Economical solar and wind power, in the quantity necessary to support modern civilization, is almost certainly a pipe dream, as we will discuss in the next section. Despite 30 years of heavily subsidized development, solar and wind have not dented the world's use of fossil fuel. At the very least, we should be thinking seriously about sustainable alternatives.
Whether the concern is exhausting fossil fuel (at 40 TW we will exhaust it in one third the time it would last at 14 TW), worrying about CO2 in the atmosphere, or knowing that solar and wind cannot do the job (see the next section), the conclusion is the same: nuclear power must play an important role. Let us think of increasing nuclear power by about a factor of 20, to ~20 TW (i.e. ~7 TWe) worldwide by century's end; reducing fossil fuel somewhat, to ~10 TW, so it lasts at least as long as current estimates; and increasing hydro and renewables to 1-3 TW each. This would obviously require something of a crash program in expanding nuclear power. There is every reason to think this possible technically, although perhaps not politically. At least in the United States, regulations, lawsuits, protest marches, bureaucratic delays, environmental impact statements done and redone numerous times, NIMBY (not in my back yard), BANANA (build absolutely nothing anywhere near anything), and the like have all thrown sand in the gears of nuclear power for decades.
These obstacles could be the biggest problem nuclear power faces. Even when a nuclear project ultimately succeeds, typically 20 years are wasted as it strangles in bureaucratic red tape and court cases, enormously increasing the price of nuclear power. Regulatory reform is the American, and perhaps the worldwide, nuclear industry's biggest battle right now.
Yet even if the nuclear industry solves this problem, it faces a much bigger problem on the physics and technical side. Fissile 235U comprises only 0.7% of the uranium resource. Supplies of mined 235U are limited, almost certainly much smaller than the reserve of fossil fuel. One rather pessimistic estimate puts the energy resource at about 60-300 terawatt-years [Hoffert 2002]. Other estimates are higher, but no estimate is high enough that, were it correct, there would be enough uranium to sustainably supply the world's thermal nuclear reactors at 20-30 TW (i.e. ~7-10 TWe).
Hence some sort of fuel breeding is necessary. Breeding brings into play the entire uranium and thorium resource. To get an idea of the size of this resource, thermal nuclear reactors have been delivering about 300 GWe (900 GWth) for about 40 years. Since thermal reactors burn essentially only the 0.7% fissile fraction, the depleted uranium left behind holds roughly 140 times the energy already extracted. This means that in depleted uranium alone, there is sufficient fuel for ~3 TWe for ~400 years! Nuclear fuel via breeding, for all practical purposes, is inexhaustible, and sustainable in the same sense as hydro, wind or solar.
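The depleted-uranium estimate above can be sketched in a few lines. This is an order-of-magnitude check only, assuming thermal reactors extract energy only from the 0.7% fissile 235U, so that breeding unlocks the remaining 99.3%:

```python
# Past thermal-reactor output: ~300 GWe for ~40 years.
past_output_TWe_yr = 0.3 * 40              # 12 TWe-years delivered so far

# Breeding multiplier: the 99.3% depleted fraction vs the 0.7% burned.
multiplier = 99.3 / 0.7                    # ~142x

depleted_U_resource = past_output_TWe_yr * multiplier  # ~1700 TWe-years
years_at_3TWe = depleted_U_resource / 3

print(round(depleted_U_resource))  # ~1700 TWe-years in depleted uranium
print(round(years_at_3TWe))        # ~570 years at 3 TWe; ">~400" after losses
```

The ideal figure comes out somewhat above the ~400 years quoted in the text, which is consistent since any real breeding cycle has losses.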
There are certainly conventional approaches to breeding, including fast neutron reactors and thermal thorium reactors [Manheimer 2020 #1]. But the options are few. This article makes the case that not only can fusion breed, it is the best breeder. It should be getting much more attention than it has received up to now.

Some Difficulties with Solar and Wind Power
Since solar- and wind-generated electricity are the other carbon-free power sources competing with nuclear power, it is useful to take a dispassionate look at them. There are many constraints, based on fundamental science, economics, reliability, materials, and environmental matters, which show that they can never be a major source of power, at least not given present knowledge. This section is an expanded version of an earlier publication [Manheimer 2021 #2].
To see immediately how advocacy of solar and wind distorts the truth, note that most reports on solar and wind apparatus quote the 'nameplate' power. This is the maximum the device generates when conditions are exactly right. But conditions are rarely exactly right. Nameplate power is not the important number; average power is. For instance, a solar panel might produce a kilowatt at high noon on a summer day, but averaged over all conditions, it delivers more like 200 watts. A wind turbine might produce 2 megawatts (MW) when the wind is blowing at the right speed, from the right direction, but perhaps only 500 kilowatts (kW) averaged over all conditions. In most conventional power stations (coal, gas, nuclear), the average power is very nearly the peak power, so there is little confusion. However, reports on solar and wind power have a tremendous potential for confusion, as the nameplate power (usually reported) is typically a factor of 4 or 5 times the average power (the more meaningful number). It is important to keep this in mind when going over claims of delivered power. Advocates of solar and wind, unfortunately, often talk of nameplate power as if it were average power.
In this section we discuss various aspects of solar and wind power, and how they are limited by basic physics, safety, economic and environmental constraints.
Section A discusses the basic constraints in terms of the wind and solar power reaching the earth. Section B discusses the reliability, or unreliability, of wind and solar power and the need for backup power. Section C discusses the material needs of solar and wind. Section D discusses the cost of wind and solar. There are many conflicting elements of this cost, including government subsidies, which are difficult to unravel. The Washington Times [Keene] estimates that US government subsidies for constructing wind turbines between 2016 and 2020 were ~$24B. However, there is one simple way to evaluate the cost, discussed in that section. Section E discusses a tsunami of cost yet to come, namely the cost of decommissioning these monsters (a modern 4 MW nameplate wind turbine is as tall as the Washington Monument) once they reach the end of their lives, typically 25 years.
As a final indication of the world's lack of confidence in the potential of solar and wind power, consider the large international meeting held in Scotland in November 2021 to discuss the climate dilemma. World leaders, including President Biden and many European leaders, attended. However, the leaders of Brazil, Russia, China and Turkey voted with their feet and did not attend. The leader of India attended, but announced that India would not be reducing its CO2 emissions until 2070, realistically a meaningless commitment. These are large, important, technically advanced countries, containing ~40% of the world population. Their unwillingness to participate demonstrates not only their skepticism of the nearly universal western claims of a climate crisis, but also their skepticism of solar and wind power. Actually, the western countries are not all that different. Typically, some western bureaucrat declares that we have to stop or reduce the use of fossil fuel in this way or that. Occasionally the new rule is put to a vote, and it is almost always rejected by the voters. Or as Yogi Berra put it, "If people don't want to come to the ballpark, you can't stop 'em".
However, many of these countries are rapidly developing, and will likely greatly increase their use of both carbon based and nuclear power. Fusion breeding might be a very attractive alternative for them later in this century.

The Solar and Wind Power Available
When considering solar and wind power, the very first issue is: what is the available power? Solar power at mid-latitudes, at high noon on a summer day, is about 1 gigawatt (GW) per square kilometer. However, averaging over night and day (cutting it in half), solar angle and the added absorption from the longer path through the atmosphere (cutting it roughly in half again), and sun, rain, snow and clouds, it is roughly 200 MW/km^2. The maximum efficiency of a solar panel is given by the Shockley-Queisser limit [Shockley] of ~30%. Most operating solar panels are around 15-20%, so they are reasonably near the theoretical maximum. Assuming a 20% efficiency figure, a 1 GW average-power solar plant would cover about 25 km^2, and the land could not be used for anything else. While this sounds small compared to the area of, say, the United States, Russia, Brazil, or China, it would be difficult to find this much land in, say, the American northeast. The cost of rural land there is about $5000 per acre, so 25 km^2 would cost ~$25M. This is not that great a deterrent, but finding 25 available contiguous square kilometers in a place like the American northeast probably is. The 15-20% efficient solar panels cost ~$3 per nameplate watt, so the panels alone would cost ~$15B for the 1 GW average-power solar farm. Then there is the cost of installation and hookup, which requires a team of skilled workers working over every square inch of the 25 km^2; this labor likely dominates the cost of the installation.
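The land-area estimate above follows from two numbers, the average insolation and the panel efficiency. A minimal sketch using the text's assumptions (~200 MW/km^2 average insolation, 20% efficiency):

```python
# Land area for 1 GW of *average* solar power, under the text's assumptions.
insolation_avg = 200e6    # W per km^2, averaged over day/night/weather
efficiency = 0.20         # near the practical panel efficiency quoted above

electric_per_km2 = insolation_avg * efficiency  # 40 MW of electricity per km^2
area_km2 = 1e9 / electric_per_km2               # km^2 needed for 1 GW average

print(area_km2)  # 25.0 km^2 for a 1 GW average-power solar farm
```

Note how sensitive this is to the insolation assumption: a cloudier site at 150 MW/km^2 pushes the footprint past 33 km^2.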
Let us consider the peak power, average power, and cost of a large solar farm in the United States: the Topaz Solar Farm in California [Topaz], a region one would expect to be very hospitable to solar power, as opposed to, for instance, the rainy, snowy east coast or midwest. At one time Topaz was the world's largest solar facility. It covers an area of 13 square kilometers, though as Figure 2 shows, it does not fill all of the allotted space. It is billed as having a capacity of 580 MW, but looking at the small print, it delivers 1,200,000 megawatt-hours every year, meaning its average power is ~130 MW. The cost to build it was $2.5B, or roughly $20B for a 1 GW average-power plant. Even this published figure is most likely a significant underestimate of the cost when everything is taken into account. The facility has run into financial trouble and is considering bankruptcy [Leslie].
Now let us consider wind power. Only about 1-2% of the solar power impinging on earth goes into wind. Generously granting 2%, and noting the Betz limit [Betz] of 60% on the maximum efficiency of converting wind power to mechanical energy, we assume 50% conversion efficiency. Hence a 1 GW average-power wind farm would cover at least 500 km^2. Unlike a solar farm, this land could be used for some other purposes, but not many. It could be used for grazing animals, and perhaps for growing some crops not requiring much human intervention, but it is unfit for human habitation. The noise would be deafening, and in winter, in the cold regions of the country, large chunks of ice, hundreds of kilograms each, fall off the turbine blades and would kill anyone they struck. At least in the American northeast, are 500 km^2 of reasonably contiguous land without human habitation really available anywhere?
The cost of a turbine is typically ~$2 per watt of nameplate power, or ~$8 per watt of average power. If one considers 4 MW nameplate turbines (about the height of the Washington Monument), a 1 GW average-power plant needs ~1000 of them, at a cost of ~$8B. This does not account for the cost of installation; putting up 1000 structures the size of the Washington Monument cannot be cheap! And how much does 500 km^2 of contiguous land cost, especially in a place like the American northeast or west coast?
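The wind numbers can be sketched the same way. The 25% capacity factor below is an assumption implied by the text's $2-nameplate versus $8-average cost ratio, not a figure stated explicitly:

```python
# Wind under the text's assumptions: wind carries ~2% of incident solar
# power, and conversion runs at ~50% (below the ~60% Betz limit).
solar_avg = 200e6                  # W/km^2, as in the solar estimate above
wind_avg = solar_avg * 0.02 * 0.5  # ~2 MW/km^2 extractable wind power
area_km2 = 1e9 / wind_avg          # land for 1 GW of average power

# Turbine count: 4 MW nameplate at an assumed ~25% capacity factor
# gives 1 MW average per turbine.
turbines = 1e9 / (4e6 * 0.25)
cost = turbines * 4e6 * 2          # ~$2 per nameplate watt

print(area_km2)    # 500.0 km^2
print(turbines)    # 1000.0 turbines
print(cost / 1e9)  # 8.0, i.e. ~$8B, excluding installation and land
```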

The (Un)reliability of Solar and Wind Power
Recently, under adverse weather conditions, at least three places that relied heavily on solar and wind lost power for substantial periods of time. These are not poor areas of the world struggling to afford minimum power, but three of the richest places in the world: Germany, California and Texas.
Texas is usually a warm state, but being located in the great plains, every few years it experiences a frigid winter. That was its experience in February 2021, when it was snow covered and frigid (for Texas) for a long period of time.
Texas has made a large investment in solar and wind power; one quarter of the wind power of the United States is in Texas. In the winter of 2021 this failed; see Figure 3. Much of the state experienced long periods without electric power as wind turbines froze [California] and solar panels became snow covered [Craig, Burnett, Knutson]. With the failure of wind and solar, gas-powered plants rushed in to take up the slack, but were only able to partially fill in, especially with the increased demand due to the weather. Figure 4 is a graph of the power supplied by various power sources in Texas during the week of worst power loss [28]. The Wall Street Journal [California] noted that the winter was not Texas's only problem. In June 2021 there was a heat spell, certainly not unusual for Texas, and again solar and wind largely failed, with gas rushing to take up as much of the slack as it could.
Search the Texas dilemma on the internet, and everything but the reliance on wind and solar is blamed. Yet Oklahoma had about the same weather as Texas, did not rely on wind and solar to nearly the same extent, and had no problem.
Germany has also been relying very strongly on wind and solar, and the severe winter of 2020-2021 played havoc with it. The country was exceptionally cold and snow covered, and large parts of it lost electric power for a long period of time. Germany attempted to purchase power from neighboring countries, but there was none at any price to sell; the neighbors could supply only their own populations. Figure 5 shows a snow-covered solar panel in Germany, and its effect on school children as they attempted to do their homework [Foti].

Figure 5. Snow-covered solar panels in Germany, and the effect on school children

California has been converting to solar power over the last decade or two. It has decommissioned all of its coal-fired power plants, and all of its nuclear power plants except Diablo Canyon. It has some gas-fired power, but minimizes it to the extent possible. The state had a great deal of solar power available on summer afternoons, but this faded away in the late afternoon and evening, when air conditioning was most needed. In a heat wave in summer 2020, it did not have enough power, and had to institute rolling blackouts [California]. It attempted to purchase power from other states, but it already gets about one third of its power from neighboring states, and none was available.
The Wall Street Journal ended its editorial [California] with the sentence: "Pro survival tip: Buy a diesel generator - while you still can."

Solar power from photovoltaic sources can only be used when the sun is shining; wind power, only when the wind is blowing. Thus, to have reliable power, solar and wind must be backed up by another power source which runs under all conditions. Gas-powered plants are used for this purpose. This is not an unreasonable approach, but of course the cost of the gas plants, often idle, must be added to the cost of wind or solar. As the Wall Street Journal phrased it: "A big problem is that subsidies and mandates have spurred an over-development of renewables, which has resulted in gas plants operating at lower levels or even idle much of the time. Keeping standby units in top condition is hugely expensive. So, when plants are required to run all out to meet surging demand or back up renewables, problems crop up - as they did this week." [California]

To provide backup power, there is talk of a revolution in battery technology, but this seems far-fetched. The Tesla car's lithium-ion battery stores about 100 kWh. The United States uses 400 gigawatts (GW) of electric power, and if one section of the country is out of wind or sunshine, say Texas or California, the battery backup would have to provide perhaps 100 GW of that. A single Tesla battery would provide this backup for only 3.6 milliseconds. We would need ~280 batteries to provide a second's worth, and ~24 million to provide a day; a day is probably not sufficient, and the United States would probably need half a dozen to a dozen such battery stations across the country. A Tesla battery now costs around $5-10k. Even assuming the cost could be reduced to $1000, the backup system alone would cost on the order of $150-300 billion; at today's actual battery prices it would run to $1-2 trillion, with the entire stock needing replacement every decade or so, before even counting the power electronics and siting. It makes no sense!
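The backup arithmetic can be sketched directly, assuming a 100 kWh pack and a 100 GW regional shortfall:

```python
# Battery backup arithmetic: one 100 kWh pack against a 100 GW shortfall.
pack_J = 100 * 3.6e6    # 100 kWh in joules (1 kWh = 3.6 MJ)
demand_W = 100e9        # 100 GW regional backup load

seconds_per_pack = pack_J / demand_W          # how long one pack lasts
packs_per_second = demand_W / pack_J          # packs per second of backup
packs_per_day = demand_W * 86400 / pack_J     # packs per day of backup

print(seconds_per_pack)            # 0.0036 s: one pack lasts milliseconds
print(round(packs_per_second))     # 278 packs per second of backup
print(round(packs_per_day / 1e6))  # 24 (million) packs per day of backup
```

Multiplying the daily figure by a handful of regional stations, and by any realistic per-pack price, is what drives the total into the hundreds of billions of dollars and beyond.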
In addition, there is the safety issue of so much chemical and electrical energy stored in one place. The tens of millions of batteries at each station would have a stored energy roughly equivalent to that of a multi-megaton bomb.
Of course, any energy storage scheme will have that much energy stored, and will represent a potential danger. However, the danger of storing it in lithium-ion batteries is unique. The batteries have a well-known fire danger, even when not delivering power. Furthermore, once a fire starts, it is very difficult to extinguish with conventional fire suppression techniques. The danger is particularly acute on an airplane, where fires in the hold have occurred, occasionally bringing down a plane in flight. While these were cargo planes with no passengers, the crews were killed by lithium battery fires in Boeing 747 freighters near Dubai and South Korea in 2010 and 2011 [Dooley]. Figure 6 illustrates the potential danger to the aircraft.

Figure 6. A UPS large freight aircraft destroyed (fortunately on the ground) from a fire of lithium batteries in the hold
It is not only cargo planes that have had to deal with fire. China Southern Airlines Flight CZ3539 had a fire in the overhead luggage bin [Bibby]. Fortunately, it happened while the plane was boarding, not in the air, and everyone disembarked with no injuries. Had that fire started an hour later, who knows what catastrophe might have occurred. Figure 7 is a photo taken on the plane.

The Material Requirements for Solar and Wind
Sunlight and wind may be free, but the infrastructure to convert them into usable electric power, and the material needed, are very considerable. This has been studied by Mark Mills of the Manhattan Institute. As an example, Figure 8, from [Mills 2020], shows the quantities of different materials required to construct different types of power plants, per terawatt-hour. Clearly a 1 GW wind farm will use about ten times the material of a gas-powered plant, and a 1 GW solar panel farm nearly 20 times as much.

Figure 8. The material needed for solar, hydro, wind, geothermal, and natural gas plants. Clearly the wind and solar plants use tremendously more material of all kinds than do gas-fired plants.

An important element for manufacturing modern batteries is lithium. It is usually found in high desert areas, and mining it is extremely water intensive. For instance, one of the best sources of lithium in the United States is in Death Valley. Mining lithium in these areas typically needs about 2 metric tons of water for each kilogram of lithium extracted. A typical Tesla battery has about 10 kg of lithium, requiring the use of 20 metric tons of water [Hooke, Evans]. Manufacturing, and then replacing every decade or so, the battery stock needed to back up the United States grid would place an enormous new demand on water, in some of the country's driest places.
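The water arithmetic scales linearly with the number of packs, so a minimal sketch, using only the per-unit figures quoted above, makes the demand concrete without committing to any particular fleet size:

```python
# Water demand of lithium mining, on the text's figures: ~2 metric tons
# of water per kg of lithium, ~10 kg of lithium per EV-class battery pack.
water_per_kg_li = 2.0   # metric tons of water per kg Li
li_per_pack_kg = 10.0   # kg of lithium per pack

water_per_pack = water_per_kg_li * li_per_pack_kg  # tons of water per pack
water_per_million_packs = 1e6 * water_per_pack     # tons per million packs

print(water_per_pack)           # 20.0 tons of water per pack
print(water_per_million_packs)  # 20,000,000 tons per million packs built
```

Twenty million tons of water per million packs, drawn largely in desert basins, is the locally relevant number, whatever one assumes about the national total.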

The Cost for Delivered Power
It is often claimed that solar and wind electrical energy is getting cheaper, and is often much cheaper than that generated by coal, gas, oil or nuclear. Here is a typical example [Timmer]. While solar and wind have no fuel costs, as we have seen in the last section their material and labor costs are enormous compared to those of conventional power. After all, the labor cost of installing a 1 GW average-power wind farm, namely ~1000 modern 4 MW nameplate turbines, each as tall as the Washington Monument, over an area of at least 500 square kilometers, must be quite high compared to that of installing a single building housing a 1 GW gas-powered plant.
Hence, there are enormous scientific, technical, economic and environmental barriers which are, in reality, just about impossible to overcome [Shellenberger 2020, Mills 2019]. Furthermore, there are government subsidies in most countries which affect the price. These subsidies are very confusing to unravel, but in all likelihood they are significant [Keene]. The skeptical arguments, while correct, are not necessarily easy for a layman to follow. After all, who in, say, the United States notices or cares if, to build solar panels and wind turbines, we have to dig up a lot of indium, lanthanum, neodymium, europium and other rare earth elements somewhere, likely in some remote, poor African country, which will not complain about us trashing its environment and paying its citizens slave wages.
It is now possible to compare nuclear to solar and wind on a large scale. There is what this author has called 'a gigantic laboratory' in Europe [Manheimer 2018]: France and Germany. France has for years generated most of its electricity (~75-80%) with nuclear power. Germany, in about 2000, adopted a different route. It embarked on an 'Energiewende', German for energy transformation, toward solar and wind energy. Accordingly, it has decommissioned many of its coal-fired power plants, and is in the process of decommissioning what were once its 17 nuclear power reactors. At this point, it gets about 25-30% of its electrical power from wind and solar, and the rest from other sources. Some articles on the Energiewende call it a smashing success [Mathews]; others, a dismal failure [Dohman].
Where does the truth lie?
There is one thing anybody can easily figure out: despite all the claims of low-cost solar and wind, how does the cost of electricity in Germany compare with that in France? This is simple and noncontroversial. Furthermore, since the whole purpose of the Energiewende is to reduce the CO2 input into the atmosphere, how well do Germany and France do on that score? Again, simple and noncontroversial. Figure 9 shows the comparison. The graph shows that, at least up to now, after ~20 years, the German Energiewende has failed on both counts. It has not reduced the price of electricity, but rather has greatly increased it. It has not reduced per capita German CO2 emissions as compared to France, or even the United States. The impact of the high cost of electricity in Germany is such that almost five million people there were unable to pay their electric bills in 2019, and were cut off from the grid [Editorial team].
In summary, France has cheaper electricity and emits less CO2 per capita, both by about half, than does Germany. For those who say that nuclear power is too expensive and environmentally unviable, there is a simple one-word answer: France. The French have had a nuclear economy for decades, and have achieved it economically, without harming their citizens or ruining their environment. The conclusion is obvious. Sunlight and wind are themselves free, but converting them to electricity is very, very expensive.
For all of the publicity and propaganda on how cheap solar power is, proponents themselves estimate that as much as $100 trillion will be needed by 2050 to decarbonize the world's energy systems [Ruchner]. And as we have just seen, battery backup alone, for a single country, the United States, would run to trillions of dollars.

The End of the Life Cycle
There is an additional cost, and environmental damage, of solar and wind power which has hardly appeared yet: solar panels and wind turbines are only expected to last ~25 years. Since most solar panels and wind turbines are younger than this, we have only an inkling of the problem that is rapidly approaching.
Let us first consider solar panels. These panels last about 25 years, so the 250,000 tons we have to recycle this year are just a trickle compared to the deluge coming at us by 2050, when there will have been a total of 78 million tons to dispose of. These are not appropriate for landfills, as they contain hazardous and poisonous materials, such as lead and cadmium, which can leach into the soil.
However, recycling is expensive; the recycled materials cost considerably more than the raw materials. For this reason, many places, including (surprisingly!) even environmentally conscious California, are disposing of worn-out panels in landfills, which is cheap but environmentally very harmful [Solar panels]. There are also American efforts to export worn-out solar panels to landfills in underdeveloped countries, most likely in Africa. Trashing their environment by taking advantage of their loose mining restrictions is not enough; we will also trash it by sending them our own dangerous garbage, which we cannot safely dispose of in our own country [Solar panels].
Even with perfect recycling of used solar panels, there is still the environmental danger of their destruction by natural events. A tornado destroyed a solar farm in Southern California, and Hurricane Maria destroyed a large solar facility in Puerto Rico [Shellenberger 2018]. Who knows what damage was done to the local environments? Figure 10 is a photo of the Puerto Rican facility after the hurricane.
Regarding wind turbines, the main problem is the blades. Since the blades are fiberglass and last only about 10 years, we already have considerable experience with their disposal.
These blades are gigantic, and are very costly to ship and dispose of, but a landfill is a reasonable option if it is large enough. Once buried, the blades will do little if any harm to the local environment. There are just a few landfills in the United States capable of handling them. One is near Casper, Wyoming; Figure 11 is a photo of a portion of this landfill [Martin].
Considering the enormous amounts of land and material they need, solar and wind installations do great harm to the overall environment at their birth. At their death, they are even more harmful. They almost certainly pose more of an environmental crisis than any other power source.
To summarize, when reading all the adoring media on how economical, safe, reliable, and environmentally viable solar panels and wind turbines are, it pays to keep in mind Richard Feynman's famous remark from the Challenger investigation: "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."

The Fusion Program
The world has been attempting to develop economical fusion reactors for more than half a century. While great progress has been made, there is still a very long way to go. In a fusion reaction, two isotopes of hydrogen, deuterium (D) and tritium (T), join to form a helium nucleus, releasing an energetic neutron and an energetic alpha particle (i.e. a helium nucleus). Note that while deuterium occurs in nature, tritium does not; it must be bred from a reaction of a lithium nucleus with a neutron [Manheimer 2020 #1]. The energy of these particles would be absorbed by some sort of heat exchanger, called a blanket, and this would generate electric power in the standard way. Fusion is regarded as a clean energy system, one which produces no by-product that is either a proliferation risk or a pollutant. A schematic of this reaction is shown in Figure 13.

Figure 13. A schematic of the fusion DT reaction.

To react, the D and T must have energy of about 10 kiloelectron volts (keV) or more (atoms burning in a typical fire have energy of about 0.1 electron volt). The fusion products are a 14 million electron volt (MeV) neutron and a 3.5 MeV alpha particle (i.e. a helium nucleus). In a conventional fission reactor, the neutrons that sustain the chain reaction are produced at ~2 MeV, and the fission fragments at ~200 MeV. Hence the fusion reaction produces much more energetic neutrons than a fission reactor does, but it takes about 10 fusion reactions to produce the energy of one fission reaction. These two facts are the key to the great advantage of fusion breeding over fission breeding.
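The two key energy ratios, fusions per fission and neutron energy, follow directly from the particle energies just quoted. A minimal sketch:

```python
# Energy bookkeeping for the DT fusion reaction versus a fission event,
# using the figures quoted in the text.
dt_neutron_MeV = 14.0   # DT fusion neutron
dt_alpha_MeV = 3.5      # DT fusion alpha particle
dt_total_MeV = dt_neutron_MeV + dt_alpha_MeV   # 17.5 MeV per fusion

fission_MeV = 200.0     # ~200 MeV per fission, mostly in the fragments
fusions_per_fission = fission_MeV / dt_total_MeV

print(dt_total_MeV)                # 17.5 MeV per DT fusion
print(round(fusions_per_fission))  # 11: the "about 10" fusions per fission

# But the neutron energies run the other way:
fission_neutron_MeV = 2.0
print(dt_neutron_MeV / fission_neutron_MeV)  # 7.0x more energetic neutron
```

A fusion neutron thus carries about 7 times the energy of a fission neutron, while the overall reaction releases about an order of magnitude less energy, which is exactly the combination that favors using the neutrons for breeding rather than for raw power.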
A problem is that to achieve this reaction, the energies of the D and T are so high that they cannot be part of a liquid or solid, but instead form a fully ionized gas called a plasma. The fusion plasma consists of deuterium and tritium ions and free electrons. This plasma is so hot that it cannot be in contact with any material surface. Hence in the fusion effort the plasma is confined by magnetic fields, and the approach is called magnetic fusion energy (MFE). Electrons and ions have a difficult time moving across magnetic field lines, but can move freely along them. This motivates the use of a toroidal field.
The plasma device which does this is called a tokamak. The plasma in it is characterized by the electron number density n (typically around 10^20 per cubic meter) and the plasma temperature T (typically around 5-10 keV), and is confined by a toroidal magnetic field, typically 3-5 tesla. The major radius is typically 1-3 meters, and the minor radius is typically about one third of that.
The plasma carries a toroidal current I (typically millions of amps, i.e. megaamps, MA) in order that it can be in equilibrium. This current produces a smaller poloidal magnetic field perpendicular to the main toroidal field. The current is usually driven by making the plasma the secondary of a transformer, the primary being a changing magnetic field through the center of the torus. However, the transformer can supply only so many volt-seconds, after which the plasma current can no longer be driven this way. Hence some means must be found to run the tokamak at steady state, or at least at high duty cycle. The ohmic heating from the driven current also heats the plasma, but it becomes less effective as the plasma resistivity decreases with rising temperature, so additional heating power is needed, in the form of rf, microwaves, millimeter waves, or neutral beams. The ratio of the fusion power to the heating power is defined as Q. Heating the plasma by these external sources has been reasonably successful. However, driving the current this way is more difficult; see Section VIII.
Over the last five or so decades, larger and smarter tokamaks were built by various worldwide fusion labs. One measure of the success of a tokamak is the so-called triple product, nTτ, which is roughly proportional to Q [Manheimer 2014]. Here τ is the energy confinement time (basically the plasma energy contained divided by the power needed to maintain it). Over the years, the triple product has advanced considerably. In fact, until about the year 2000, it advanced about as fast as the number of transistors on a chip, as shown in Figure 14. Note that the record nTτ is 1.6×10²¹ keV·s·m⁻³, achieved by JT-60U. It was achieved over 20 years ago and still stands.

Figure 14. The increase in the tokamak triple product nTτ over about a 40 year period, compared to the number of transistors on a chip. The problem is that at every point along the red curve, the electronics industry was able to market a useful, profitable product. The tokamak will have to advance considerably along the blue curve before it earns anything. Graphs like this abound on the internet.
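The triple product is a simple multiplication, which a short sketch makes concrete. The discharge parameters below are hypothetical round numbers within the ranges quoted above, not data from any particular machine:

```python
# Computing the triple product n*T*tau for an illustrative discharge.
# These parameter values are invented round numbers for illustration only.
n   = 1.0e20   # electron density, m^-3
T   = 8.0      # plasma temperature, keV
tau = 1.0      # energy confinement time, s (stored energy / heating power)

triple_product = n * T * tau   # units: keV * s * m^-3
print(f"n*T*tau = {triple_product:.2e} keV s m^-3")

# Compare with the JT-60U record quoted in the text
jt60u_record = 1.6e21
print(f"fraction of JT-60U record: {triple_product / jt60u_record:.2f}")
```

Even this quite good hypothetical discharge sits at only half the two-decade-old record, which illustrates how demanding further progress along the blue curve is.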
The largest tokamaks so far are in Europe, called JET (Joint European Torus) [Gibson]; in Princeton, NJ, called TFTR (Tokamak Fusion Test Reactor, disassembled in about 1999) [Hawryluk]; and in Japan, called JT-60U (Japan Tokamak) [Kusama]. JET and TFTR actually ran with DT plasmas and produced copious fusion neutrons, achieving a Q of around 0.2-0.5 depending on the mode of operation. The neutron production is shown in Figure 15.

The Japanese tokamak also got some very impressive results. It was not equipped to deal with tritium, but could run deuterium plasmas. These also have fusion reactions, and by measuring the neutrons from the DD reaction, the JT-60U group could extrapolate to what the Q would have been in a DT plasma. With their W shaped divertor they managed to get an equivalent Q of 1.25 [Kusama, Manheimer 2014]. (The divertor is the region just outside the main part of the plasma which handles the exhaust of the plasma as it diffuses toward the walls of the vacuum chamber.) A plot of the equivalent Q as a function of plasma current, with and without the W shaped divertor, is shown in Figure 16.

There are two fundamental tokamak parameters we now briefly introduce. The first is the q of the plasma. It is a spatially dependent quantity, but at the edge of the plasma it is proportional to the reciprocal of the plasma current. Generally the discharge is characterized by a single parameter, q95, the q at the magnetic surface containing 95% of the current. If q95 is too low (i.e. if the current is too high), the plasma will be unstable to current driven modes, both MHD modes and resistive tearing modes. If it is too high, the plasma has a more and more difficult time finding an equilibrium (with zero current, there is no equilibrium). The second parameter is the beta (β), the pressure of the plasma divided by the magnetic pressure. Theoretically, it is more useful to deal with the normalized beta, βN.
The beta is proportional to βN times the current, or βN/q95. If βN is too high, the plasma can be unstable to what are called ballooning modes. Troyon and Gruber have worked out the theory [Troyon 1985, Troyon 1988] and found that without wall stabilization, the maximum βN is around 2.5 or 3; with wall stabilization, around 5 or 6.
However, this author takes the attitude that the wall is doing enough without also stabilizing the discharge. It is absorbing large fluxes of neutrons and radiation, and it is the initial element of a heat exchanger. Developing the wall is difficult enough without adding more tasks to its role. In fact, there is virtually no experimental evidence of long-lived discharges with βN of 5 or 6. More detail on these parameters is in the linked and cited references.
There are two fundamental problems that the tokamak project faces. The first is that the current can be driven by the transformer only for a certain time, limited by the volt-seconds stored in the transformer. The second is that the plasma discharge will often terminate, seemingly without cause; when it does, the plasma and poloidal magnetic field energy are suddenly dumped somewhere in the vacuum chamber. Shown in Figure 17 is some very important data from JT-60U [Ishida, Manheimer 2014]. It is a plot, in βN-q95 space, of a variety of discharges, some steady, others transient, i.e. likely ended in a disruption. Clearly the most desirable discharge for fusion is the one at maximum pressure and maximum current, in other words discharges around q95 > 3 and βN ≈ 2.5.

Figure 17. A scatter plot of discharges, stable and prematurely terminated, for JT-60U. τE is the energy confinement time.
Notice that JT-60U has also gotten discharges with βN as high as 5 for a q95 of 6, and with q95 as low as 2 but a βN of only about 1. However, these constitute no improvement in the actual beta, which is proportional to βN/q95. In addition, these all disrupt in less than 5 energy confinement times.
As JT-60U is ohmically driven, there is a limit to how long the discharge can be. However, there is good evidence of stable discharges with βN of ~2.5 extending out to 30 seconds, as shown in Figure 18 [Ide]. But just because a βN = 2.5 discharge can last for 30 seconds does not mean it always will; furthermore, just because it lasts for 30 seconds does not mean it will last for 30 minutes, 30 hours, or 30 days, as a discharge in a reactor must. Hence this is an important area for tokamak research.
While these tokamaks were regarded as successful, they were nowhere near being economic energy producers. In about the year 2000, the advance in nTτ stalled: as the years progressed, the tokamaks had become larger and more expensive, and no nation was individually ready, willing or able to take the next step.

The ITER Project
The success of the tokamak program motivated the world to join together to build a larger tokamak called ITER (International Thermonuclear Experimental Reactor). The international negotiations to approve ITER, and the argument over where to put it, took years. Forming and maintaining the plasma, and the plasma current needed for equilibrium, takes a considerable amount of power. ITER's hope is to achieve ten times more fusion power than is injected into the machine with neutral beams and microwaves, i.e. Q = 10, by 2040. Specifically, it hopes that with 50 megawatts (MW) of injected power (mostly microwaves and neutral beams), it will produce 500 MW of fusion power for a 400 second pulse. See Figure 19.

Figure 19. An artist's conception of the ITER tokamak, from the ITER web site. The major radius from the center of the torus to the center of the vacuum chamber is 6 meters, and the toroidal magnetic field is about 5 Tesla.

Some Difficulties with ITER
One obvious difficulty with ITER is the cost and time it takes to go from concept to reality [Manheimer 2018]. Another is that it is not at all clear that ITER has come up with a way to drive the current at steady state or high duty cycle. Unless it can do this, no matter how great ITER's achievement is, it cannot lead to economical fusion. We discuss this more fully in Section VIII.
However, what is not generally publicized is that even after a success with ITER, there are enormous obstacles to producing economic power in a pure fusion mode with what the ITER web site calls its follow-on, the DEMO. Note that electric power is produced with an efficiency of ~1/3, so 500 MW of fusion power means ~170 MW sent to the grid. However, beams and microwaves are not produced with 100% efficiency either; once again 1/3 is a typical, and in fact an optimistic, number. Thus the 50 MW of input power would take ~150 MW of wall-plug power. Hence pure fusion with an ITER-like reactor would leave virtually nothing for the grid. There have been speculations about more efficient generators, typically depending on a high speed flow of high temperature burning gas. These have been developed only on a small scale, and how well they would match a fusion source is unknown. However, relying on a special generator just to make fusion relevant does not sound like a very good argument for either fusion or the special generator. Better if fusion can fit in with existing infrastructure.
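The wall-plug arithmetic above can be made explicit. Both efficiencies are the text's rough ~1/3 figures:

```python
# Wall-plug power balance for an ITER-like pure-fusion plant, using the
# text's rough 1/3 efficiencies. All numbers are approximate.
P_fusion   = 500.0   # MW of fusion power (the ITER goal)
P_injected = 50.0    # MW of beams/microwaves into the plasma (Q = 10)

eta_electric = 1.0 / 3.0   # thermal-to-electric conversion efficiency
eta_heating  = 1.0 / 3.0   # wall-plug-to-injected-power efficiency

P_grid_gross = P_fusion * eta_electric      # ~170 MW of electric power
P_wall_plug  = P_injected / eta_heating     # ~150 MW drawn to run the heating

P_net = P_grid_gross - P_wall_plug          # what is actually left for the grid
print(f"gross electric {P_grid_gross:.0f} MW, recirculating {P_wall_plug:.0f} MW, "
      f"net {P_net:.0f} MW")
```

A Q = 10 plant on these assumptions nets only a few tens of MW, which is the quantitative content of "virtually nothing for the grid."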
To make ITER an economical machine, first the gain would have to increase by at least a factor of 3 or 4, so that the circulating power is a much smaller fraction of the total power. Second, the fusion power would have to increase by a factor of about 5 or 6 to make it comparable to current power stations. Third, the size and cost would have to be reduced substantially to make it economically competitive. But with larger power and smaller size, the loading on the fusion blanket would increase by at least an order of magnitude. These are not minor details! They would certainly take decades and tens of billions of dollars to solve, assuming they could be solved at all.
Furthermore, tokamaks have been constrained by what this author has called conservative design rules [Manheimer 2009]. These rules are not controversial; they have been in the literature for over a decade. The author has given seminars on them at many fusion labs and presented them at many scientific conferences, and has never seen them challenged in person or in print. Conservative design rules do not show that a 3 GW tokamak is impossible; they just show that one smaller and cheaper than ITER almost certainly is. Depending on the toroidal field, the major radius of a 9 T tokamak [Sorbom] would have to be ~6 m [14], and of a 5 T tokamak (like ITER [Campbell]) would have to be ~12 m. In other words, adding the minor radius and the shielding, a 5 T, 3 GW tokamak (the bare reactor) would reach from about the goal line to ~the 35-40-yard line of an American football field [14]. This is almost certainly not smaller and cheaper than ITER.
In conclusion, the ITER pathway will not lead to economical pure fusion in this century. In the next two sections we outline a different path forward, one which will build on a success by ITER, but then will proceed in a different direction.
This path could resolve issues which ITER will not and cannot.

Fusion Breeding
As we have just seen, the scientific, technical, time, political and economic constraints on a continuation of ITER toward economic pure fusion are considerable. It is extremely unlikely that ITER will lead to economic pure fusion this century, if ever. However, assuming success, a breeder is a much shorter jump away. The plasma physics of a successful ITER would match well with the requirements for breeding. The only things left to do would be to develop an ITER-like machine that runs in true steady state or at high duty cycle (not just 400 seconds), and to add the breeding of both tritium and 233U and the recovery of unburned tritium from the plasma exhaust. The Q and power expected from a successful ITER are fine for breeding.
Exploiting an ITER-like device for fusion breeding, instead of pure fusion, is an alternative likely to achieve economic fuel production not too long after midcentury. Furthermore, it would fit in with existing nuclear infrastructure.
Fusion breeding is the use of 14 MeV fusion neutrons not only to boil water with their kinetic energy, but also to breed roughly 10 times more nuclear fuel for separate nuclear reactors. The fact that the fusion neutron is an energetic 14 MeV, rather than a much less energetic 2 MeV fission neutron, is the key to the advantage of fusion breeding. Injecting the 14 MeV neutron into a material like Be, Pb or U, the first thing that happens is that the neutron produces other spallation neutrons as it slows down. Depending on the material and the blanket design, the fusion neutron might produce as many as 2-4 additional spallation neutrons. This is not a possibility with fission neutrons; their energy is too low.

Figure 20 shows the cross section for producing one or two extra neutrons in a lead target, as a function of incident neutron energy [Manheimer 2020 #1, National, Ragheb]. Clearly, for a Pb target, a spallation reaction producing only a single extra neutron needs an incident neutron of at least 7 MeV, and producing 2 extra neutrons needs 14 MeV. The number of neutrons produced as an incident 14 MeV neutron slows down in a variety of solid targets is shown in Table 1 [Moir 1982]. The first job of one of the spallation neutrons is to collide with the lithium in the blanket to breed tritium. This tritium is then inserted into the tokamak plasma to replace the tritium lost in the fusion reaction (the deuterium can easily be replaced from the environment).
The other neutrons are then used to breed nuclear fuel. There are two possible breeding routes: breeding 233U from thorium, and breeding 239Pu from 238U. Either is a fine fuel for a thermal reactor, but here we concentrate on the thorium cycle, as plutonium is something we would like to avoid to the extent possible.
One of the other neutrons can be absorbed by a 232Th (thorium) nucleus to form 233Th. However, this is unstable to beta decay; that is, a neutron in the nucleus emits an electron, and the nucleus moves up one place in the periodic table to become 233Pa (protactinium). But this is also unstable, with a half-life of about a month, to another beta decay, becoming 233U (uranium), which is effectively stable. And 233U is a perfectly good fuel for thermal nuclear reactors. The fusion and breeding processes are shown schematically in Figure 21.

Figure 21. A schematic of the decay process where a fusion neutron is absorbed by a thorium nucleus, setting into motion a decay chain which finally ends up as a 233U nucleus, a perfectly good fuel for a thermal nuclear reactor.
The actual neutronics in the blanket are computed by complex Monte Carlo simulations, which the world's major nuclear labs have and use. These start with a 14 MeV neutron impinging on some material and calculate its trajectory, its production of daughter particles and their future paths, and the energy produced or absorbed. By repeating this calculation many times, these codes get reasonable statistics [Moir 2013, Dolan, Plechaty].
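The flavor of such a calculation can be conveyed with a toy sketch, vastly simpler than the production codes cited above. Only the Pb spallation thresholds (~7 MeV for one extra neutron, ~14 MeV for two, per Figure 20) come from the text; the reaction probabilities and the energy-loss factor are invented purely for illustration:

```python
import random

random.seed(1)  # fixed seed so the toy result is reproducible

def follow_neutron(energy_mev):
    """Count secondary neutrons produced as one neutron slows down in Pb
    (toy model: invented probabilities, crude energy-loss law)."""
    produced = 0
    while energy_mev > 7.0:            # below ~7 MeV, no spallation in Pb
        if energy_mev >= 14.0 and random.random() < 0.5:
            produced += 2              # an (n,3n)-like event: two extra neutrons
        elif random.random() < 0.5:
            produced += 1              # an (n,2n)-like event: one extra neutron
        energy_mev *= 0.6              # crude average energy loss per collision
    return produced

trials = 10_000
mean_yield = sum(follow_neutron(14.0) for _ in range(trials)) / trials
print(f"mean secondary neutrons per 14 MeV neutron: {mean_yield:.2f}")
```

Real codes track full energy-dependent cross sections, geometry, and every daughter particle, but the structure — repeat a stochastic neutron history many times and average — is the same.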
For a homogeneous blanket of 7Li + 0.8% Th + 0.02 6Li, a single incident 14 MeV neutron would produce 1.1 tritium nuclei and 0.8 233U nuclei, and release 17 MeV of additional energy. The process could be improved with a 2-zone blanket, where the first zone, perhaps 238U, produces additional spallation neutrons but does not fission, because at 14 MeV the fission cross section is much less than the spallation cross section. The neutron and its daughter neutrons slow down here and enter the second zone, where the tritium and 233U are produced.
Fusion breeding could produce ~150 MeV worth of 233U from each 14 MeV neutron, effectively multiplying the energy produced in the fusion reaction by at least a factor of 10. As we will see, an ITER-like fusion breeder could fuel ~5 thermal reactors of equal power, whereas it takes 2 fission breeders at maximum breeding rate to fuel one thermal reactor. There, in a nutshell, is the great potential advantage of fusion breeding over fission breeding.
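The ~150 MeV figure follows directly from the homogeneous-blanket yield of 0.8 233U nuclei per neutron; a one-line check (taking ~190 MeV as the rough energy eventually released per 233U fission):

```python
# Energy bookkeeping for fusion breeding, using the homogeneous-blanket
# yield quoted in the text (0.8 U-233 nuclei per 14 MeV neutron).
u233_per_neutron   = 0.8     # bred fissile nuclei per incident fusion neutron
fission_energy_mev = 190.0   # rough energy later released per U-233 fission

stored_energy  = u233_per_neutron * fission_energy_mev   # ~150 MeV of fuel
multiplication = stored_energy / 14.0                    # vs the neutron's energy

print(f"~{stored_energy:.0f} MeV of fuel bred per 14 MeV neutron "
      f"(x{multiplication:.0f} energy multiplication)")
```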
While the energy of this 233U is not released in the fusion reactor, it is stored for later release in a thermal nuclear reactor. Hence this author does not see any advantage to using a fusion reactor as a component of a fission reactor so as to make the fission reactor subcritical. The world has known how to build critical thermal reactors safely, economically and in an environmentally viable way for ~70 years. A fission reactor with a fusion reactor inside as a component, just to make the total reactor subcritical, would make the fission reactor extraordinarily more complex. The world does not need it.
Hence an ITER-like device could be an end in itself as a breeder, not merely a means to the next step toward who knows what DEMO, for who knows how many more tens of billions of dollars, who knows how many more decades later. Realistically, pure fusion on the ITER pathway is a 22nd century option, assuming it is an option at all.
While there are fission-based breeding schemes, this work makes the case that fusion not only can breed, but is the best breeder. Fusion breeder designs typically calculate that a single fusion neutron can end up producing somewhere between half and one fissile 233U nucleus from fertile 232Th [Moir 1982].
Specifically, an ITER-type machine would be fine for fusion breeding, though not for pure fusion. The 500 MW of fusion neutrons from ITER, used in a breeder, could produce perhaps 5 GW of uranium fuel. The original ITER design (here we call it Large ITER), which was designed to produce 1.5 GW of neutron power, could produce 15 GW of uranium fuel, enough to fuel 5 standard 1 GWe light water reactors. Furthermore, the breeding reactions are exothermic and roughly double the neutron power [Moir 2013]. While the MFE effort has been reluctant to partner with the nuclear industry, which might not even want it, it should consider realities. Economical pure MFE on the ITER pathway may well be out of reach. Economical fusion breeding most likely is not.
While a pure fusion reactor might have either a solid or a liquid blanket, a fusion breeder almost certainly must have a flowing liquid blanket. This way, as it breeds tritium and 233U (actually protactinium), these can be separated from the flow away from the reactor. Molten salts have been proposed as the fluid, specifically a fluorine salt like FLiBe. The Be serves as a neutron multiplier, and the Li breeds the tritium. Uranium, thorium and protactinium are all soluble in molten FLiBe.
Breeding is a perfectly acceptable, and possibly even a better outcome for the fusion effort. There are fission breeding possibilities, for instance fast neutron reactors. Russian and Indian nuclear scientists and engineers are actively pursuing these, and Russia now has two operating sodium cooled fast reactors, their BN-600 and BN-800.
However, the breeding options are few, and fusion breeding should be considered. This author believes that if fuel supply were no problem, any fission reactor designer would choose a thermal over a fast neutron reactor. The thermal neutron fission cross section is a few thousand times greater than the fast neutron cross section, as shown in Figure 22, which displays the fission cross section as a function of neutron energy for fissile 235U as well as for fertile 238U [National].

Figure 22. The fission and neutron absorption cross sections in barns (1 barn is 10⁻²⁴ cm²) for 235U and 238U as a function of the energy of the incident neutron. The cross sections look about the same for all fertile and fissile nuclei, depending on whether their mass number is odd or even.
Furthermore, a thermal reactor has a wide choice of coolants, not just liquid sodium or lead [Garwin]. Fusion breeding gives this choice of coolant to the reactor designer. Also, it would provide a raw fuel with minimal proliferation risks and which fits in with current nuclear infrastructure.
Even in a best-case scenario, where pure fusion does prove to be viable for the 22nd century, fusion breeding could provide an intermediate product of real economic value, a product which might be desperately needed by midcentury as more and more countries demand a lifestyle like those in the west.
A tokamak breeder like ITER, while expensive, provides fuel for as many as 5 thermal reactors of equal power, as well as power for the grid (the breeding reactions are exothermic, so the total produced power is ~double the neutron power). A very rough estimate of the cost of fusion-bred fuel, based on the current cost of ITER, comes to about 1-3 cents per kWh [Manheimer 2014, 2021]. Mined uranium fuel, while it lasts, currently costs about 0.5-1 cent per kWh. Gasoline at $2 per gallon costs about 5 cents per kWh for the raw fuel, and about triple that if it were used to produce electric power with a typical efficiency of 1/3.
We now show the advantage fusion breeding has over fission breeding. The fission reaction directly produces 2-3 neutrons; of course, in either case there are losses, so probably somewhere between half and one neutron per reaction is available for breeding 233U from 232Th, or 239Pu from 238U. However, the fission reaction produces about 200 MeV, while the DT fusion reaction produces only about 20. Hence for reactors of equal power, a fusion reactor generates about 10 times more neutrons, and therefore breeds at least 10 times more nuclear fuel than a fission breeder does. Hence a fusion breeder can fuel about 5 light water reactors (LWRs) of equal power. (Recall that the breeding reactions are exothermic, so the fusion breeder's power is about double the neutron power.) It takes 2 fission breeders at maximum breeding rate to fuel one thermal reactor. In other words, a fusion reactor is neutron rich and energy poor, while a fission reactor is energy rich and neutron poor, a perfect match.
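The comparison reduces to a ratio of energies per reaction, which a short sketch makes explicit using the text's round numbers:

```python
# Why a fusion breeder out-breeds a fission breeder at equal power.
# All figures are the text's approximate round numbers.
E_fission = 200.0   # MeV released per fission reaction
E_fusion  = 20.0    # MeV per DT reaction (neutron + alpha + blanket reactions)

# At equal power, reactions (and hence source neutrons) scale inversely
# with the energy per reaction.
neutron_ratio = E_fission / E_fusion
print(f"fusion runs ~{neutron_ratio:.0f}x more reactions per unit power")

# Fuel balance quoted in the text, per reactor of equal power:
lwrs_per_fusion_breeder  = 5   # one fusion breeder fuels ~5 LWRs...
fission_breeders_per_lwr = 2   # ...while 2 fission breeders fuel just 1 LWR
advantage = lwrs_per_fusion_breeder * fission_breeders_per_lwr
print(f"overall breeding advantage: ~{advantage}x")
```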

The Scientific Prototype, or What to Do between Now and 2040
Over the past 20 years, this author has suggested (to no avail) that the American magnetic fusion program concentrate its resources on building a tokamak he has called the 'scientific prototype' [Manheimer 2013]. This is a tokamak about the size of JT-60U, but one which would run at steady state or high duty cycle, with a DT plasma, and have a Q ~ 1. As we have seen, this has already been achieved in a 30 second pulse in JT-60U, with its W shaped divertor, in a deuterium plasma.
The scientific prototype, in DT, would also produce its own tritium. Furthermore, since only a fraction of the tritium in the tokamak burns, most of the tritium is exhausted through the divertor; hence one would have to recover this tritium for reuse. If a first experiment of this type could not produce enough tritium to fuel itself, it would produce as much as it could; the same goes for tritium recovery. In short, no matter how successful ITER is, there are problems which must be solved that it has hardly begun to address.
The idea of the scientific prototype is to address (and hopefully solve) these problems on as small a scale as possible. The scientific prototype will be expensive, but not beyond the means of countries like the United States, China, Japan or the European Union. If we do not address these problems now, when will we? The hope would be to achieve the goals of the scientific prototype around 2040, just as ITER, hopefully, succeeds in achieving Q ~ 10. The fact that we know what has been done in JT-60U would certainly simplify the design process: we would like a tokamak that produces essentially the same plasma, but in DT instead of just deuterium. Of course, a large part of the design would be hardening the reactor to deal with the tritium and 14 MeV neutrons, as well as producing tritium and recovering unburned tritium.
Had the American fusion program built the scientific prototype 20 years ago, as proposed then [Manheimer 1999], the world would probably now have a reasonably good idea of its success or failure. Instead, the American magnetic fusion program spun its wheels proposing one ignition scheme or new plasma configuration after another, none of which got built or funded.
While the author's efforts to advocate the scientific prototype have focused on the United States, there is no reason it cannot be set up in any of the main sponsors of ITER. This author does not advocate that the scientific prototype be done by an international consortium; the international negotiations to pull this off would probably take up most of the time between now and 2040.
If both the scientific prototype and ITER are successful by ~2040, there is no reason why fusion breeding cannot begin on a large scale. On the other hand, if the world's choice is to continue with pure fusion, it would be on the next plateau in that effort.
One assumption in the argument for the scientific prototype was that the plasma current could be driven externally by microwaves, millimeter waves and/or neutral beams. Recent results from a variety of tokamaks have called this assumption into question.
The need for a steady state tokamak has hardly escaped the world's attention. In 1993, the Princeton Plasma Physics Laboratory (PPPL) put in a proposal for a steady state tokamak called the Tokamak Physics Experiment, or TPX [Schmidt]. It was to be a tokamak whose current was driven only externally, by beams and/or microwaves, as well as by a characteristic of the toroidal geometry called the bootstrap current.
TPX was designed with a 2.25 meter major radius and R/a = 4.5. It was to have superconducting magnets with a toroidal field of 4 Tesla and a plasma current of 2 MA. It was designed for 20 MW of heating and current drive power initially, and ultimately 50 MW. It was to demonstrate efficient current drive without an ohmic current, at most one disruption per 10 hours of operation, and pulses of initially 100 seconds, ultimately 1000 seconds. It hoped to achieve an average density of ~10²⁰ m⁻³, a temperature of 10-20 keV, a confinement time of ~350 ms, and a triple product of 3-5×10²⁰. The expected triple product was a factor of 3 to 5 below the record set on JT-60U, but with long pulse and external current drive, TPX would have set many records in other ways.
Unfortunately, TPX was never built. The proposal was ultimately abandoned in favor of a variety of proposals for burning plasmas, which were also never built. Instead, the Princeton lab settled for paper studies of stellarators, even though the stellarator is currently a secondary fusion configuration and the German and Japanese programs were far ahead. It also built a spherical tokamak, which this author has argued will never provide economical fusion because of its thin center post, which almost certainly cannot long stand up to the fusion neutron flux, much less remain superconducting [Manheimer 2014].
However, other laboratories have built superconducting tokamaks in the interim: Tore Supra in France [Giruzzi], KSTAR (Korea Superconducting Tokamak Advanced Research) in Korea [Jong-Gu], and EAST (Experimental Advanced Superconducting Tokamak) in China [Wan, Xiang]. These have been up and running for quite a while and have been powered by ~10 MW of microwaves and beams. There have been stories in the major media, especially regarding the latter two, of maintaining high temperatures (i.e. ~5-10 keV) for a hundred or more seconds, sustained only by external beam and microwave power [Another]. Figure 23 shows the loop voltage on a 60 second run from EAST [Wan]. However, while these tokamaks have certainly achieved impressive results, they have come nowhere near matching the hoped-for results of TPX.
Here we very briefly review some of these results. Perhaps first and foremost, the best triple product up to now has turned out to be 10¹⁹ [Xiang], at least a factor of 150 less than what was achieved on JT-60U, and about a factor of 30-50 less than what TPX hoped to achieve. One reason is that the densities and temperatures are lower than what TPX hoped to achieve (and what JT-60U did achieve). Not only that; as seen in Figure 24 [Xiang], the density and temperature profiles are rather peaked, so using the central density and temperature, rather than the radial averages, gives a rather optimistic estimate of the triple product. In fact, the stored plasma energy in both EAST and KSTAR varies from about 100 to 400 kJ [71][72][73]. This is considerably less than the nearly 10 MJ stored in JT-60U.

Figure 24. Radial plots of electron density and temperature in a typical shot of the EAST tokamak. The stored energy in EAST and KSTAR varies with shot between about 100 kJ and 400 kJ.
The rather low stored energy in EAST and KSTAR, along with the 5-10 MW of input power, means that the energy confinement time is quite small, perhaps 50 milliseconds. The combination of low stored energy and low confinement time results in a triple product much smaller than what has been achieved on many other tokamaks.
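The ~50 ms figure follows from τE ≈ W/P, using mid-range values of the stored energies and input powers quoted above:

```python
# Rough energy confinement time implied by the stored energies and input
# powers quoted in the text for EAST/KSTAR, compared with JT-60U.
W_east = 250e3   # J, mid-range of the quoted 100-400 kJ stored energy
P_east = 5e6     # W, typical injected beam/microwave power

tau_east = W_east / P_east   # tau_E ~ stored energy / heating power, ~50 ms

W_jt60 = 10e6    # J, the nearly 10 MJ stored in JT-60U
print(f"EAST/KSTAR tau_E ~ {tau_east * 1e3:.0f} ms; "
      f"stored-energy ratio vs JT-60U ~ {W_jt60 / W_east:.0f}x")
```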
Tore Supra published some data on the statistics of disruptions. Their measure is the disruption frequency, which was always well above a rate of one every 10 hours. Their plot is shown in Figure 25 [Giruzzi].

Figure 25. A scatter plot of the frequency of disruptions for a large number of shots. Note that a disruption rate of one every 10 hours would be a frequency of ~3×10⁻⁵ per second.
The best disruption frequency Tore Supra had achieved is an order of magnitude higher than what TPX aimed for.
In this author's opinion, the results from these 3 superconducting tokamaks have been rather discouraging, at least as regards the potential of external current drive. But where does this leave the scientific prototype? Some insight can be derived from [Segal], which compares a pulsed tokamak with mostly ohmic current drive against a steady state tokamak with external current drive. That work is focused on a future tokamak providing economic power, with each half cycle expected to last an hour to an hour and a half. However, the goals of the scientific prototype are much more modest.
One thing which Ref. [Segal] does not discuss is the current waveform, that is, the magnetic flux as a function of time over several cycles. Judging from the paper, the switch from clockwise to counterclockwise toroidal current seems to be assumed instantaneous. This is a switch from one MHD equilibrium to another, with an intermediate state having no MHD equilibrium (i.e. zero toroidal current) in between. Once the current is zero, the remaining unconfined plasma will splash onto the vacuum wall nearly instantaneously. Surely there must be some time for the current to make the switch, as well as to clean out the remaining residue of the unconfined plasma.
This work sees the main goal of the scientific prototype as achieving steady state or high duty cycle power, and making an initial stab at breeding tritium and recovering unburned tritium. It does not have to achieve 100% on the first try.
Accordingly, this work would modify the waveform assumed in [Segal] by assuming a temporal current profile which rises, stays constant for a long time, providing equilibrium for the plasma, and then reverses. Let us consider a current on-time t(shot) of 100 seconds and a t(relax) of 500 seconds, during which the remnants of the unconfined plasma are swept away and the system is prepared for the next equilibrium with reversed current. In other words, the initial scientific prototype would aim for a duty cycle of ~17%. The tokamak plasma would be very much like that of JT-60U, which has already demonstrated an equivalent Q = 1.25 in a 30 second shot.
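The duty-cycle arithmetic, including the swapped case discussed next, is simply:

```python
# Duty-cycle arithmetic for the scientific prototype, per the text's numbers.
t_shot  = 100.0   # s, current flat-top (plasma on)
t_relax = 500.0   # s, recovery and preparation between shots

duty = t_shot / (t_shot + t_relax)
print(f"initial duty cycle: {duty:.1%}")

# If t(shot) and t(relax) could eventually be swapped:
duty_swapped = t_relax / (t_shot + t_relax)
print(f"swapped duty cycle: {duty_swapped:.1%}")
```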
The current would be driven mostly by the central transformer, but the project could certainly continue research on external current drive, so that if there is further success, one could switch to it. Furthermore, it could work on extending t(shot) and reducing t(relax). If it could reverse the two from what is assumed above (i.e. achieve a duty cycle of ~83%), it would be just about as good as true steady state. A schematic of the time development of the central flux is shown in Figure 26, for both the reactor and the scientific prototype. Regarding the plasma, this section has emphasized the similarity with that obtained in JT-60U [Kusama, Ishida, Ide]. This similarity certainly gives confidence that the scientific prototype can be achieved, at least as regards the plasma.
However, the tokamak itself would be very different from JT-60. First of all, it would need room for a blanket, adding at least a meter to the major radius, and likely increasing the minor radius as well. Secondly, in order to breed tritium on site, it would need a flowing blanket, most likely a molten salt like FLiBe, where the flow is in and out of the vacuum system, so the tritium could be removed as it is produced. At a later stage, it might be worth adding some thorium to the flow to produce some 233U. For the first time, the fusion project would be producing something the world could actually use. Finally, it would have to capture as much unburned tritium as possible.
[Zakharov] has proposed doing this by flowing liquid lithium along the divertor plates (outside the region of confined plasma) and ultimately out of the vacuum system. This flowing lithium would absorb some or all of the tritium escaping the plasma.
To summarize, the scientific prototype would provide crucial data on running a tokamak at high duty cycle before ITER even begins to be concerned with such matters. If all goes well, it might even be able to do this work at steady state, or at a much higher duty cycle. It would provide crucial data necessary for fusion, which could not be obtained in any other way.

The Energy Park
"The Energy Park" uses the fact that a fusion breeder can breed fuel for about 5 light water reactors (LWRs) of equal power, and each year an LWR discharges about 1/5 of its fuel as plutonium and higher actinides [Garwin].
Hence one envisions an energy infrastructure where there is one fusion breeder to supply fuel, and one fast neutron reactor to burn the 'waste' actinides [Manheimer 2014, 2018, 2020 #'s 1&2, 2021]. The fast neutron reactor could be something like the Integral Fast Reactor (IFR), developed by Argonne National Laboratory. It ran successfully at 60 MW for years before it was disassembled. It could run on any actinide and could run in either a breeder or burner mode. As we see in Fig 22, at ~1-2 MeV neutron energy, fissile and fertile materials have about the same fission cross sections. Thus, the IFR can be run in a mode to simply 'burn' any actinide. Specifically, it could be used to burn all the plutonium and other higher actinides that an LWR discharges. This is unlike the French recycling program, which uses thermal reactors to partially burn waste plutonium.
The British, who have the largest plutonium 'waste' stockpile, are now seriously considering constructing a much larger version of the IFR, called PRISM, to 'treat' that stockpile. Perhaps they are taking an important step toward the ultimate development of the energy park.
A schematic of the energy park, which appeared in Refs. [Manheimer 2006, 2009, 2014, 2018, 2020 #'s 1&2, 2021 #1, Moir 2013], is shown in Figure 27. Most of the elements of the energy park are available today; only the fusion breeder needs full development.

Figure 27. The energy park: A. low security fence; B. 5 thermal 1 GWe nuclear reactors, LWRs or more advanced reactors; C. output electricity; D. manufactured fuel pipeline; E. cooling pool for storage of highly radioactive fission products for the 300-500 years necessary for them to become inert. This is a time human society can reasonably plan for, unlike the ~half million years the plutonium 'waste' would need if buried in a repository, essentially creating a plutonium mine; F. liquid or gaseous fuel factory; G. high security fence; everything with proliferation risk, during the short time before it is diluted or burned, is behind this high security fence; H. separation plant, which separates the material discharged from the reactors (B) into fission products and transuranic elements. Fission products go to storage (E); transuranic elements go to (I); I. the 1 GWe integral fast reactor (IFR) or other fast neutron reactor, where actinides like plutonium are burned; J. the fusion breeder, producing 1 GWe itself and also producing the fuel (ultimately enriched to ~4% 233U in 238U) for the 5 thermal nuclear reactors, for a total of 7 GWe produced in the energy park.
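The park's power accounting can be tallied from the figures in the text. This is an illustrative sketch; the 35 TW world-demand figure is the optimistic mid-century estimate quoted earlier in this paper:

```python
# Power tally for one energy park, using the figures from the text
# (1 fusion breeder + 5 thermal reactors + 1 fast reactor, 1 GWe each).

breeder_GWe = 1      # fusion breeder, which also fuels...
lwr_count = 5        # ...five thermal reactors of
lwr_GWe = 1          # 1 GWe each
ifr_GWe = 1          # fast reactor burning the LWRs' actinide discharge

total_GWe = breeder_GWe + lwr_count * lwr_GWe + ifr_GWe
print(total_GWe)  # 7 GWe per park

# If ~35 TW of world demand were met at this scale:
parks_needed = 35_000 / total_GWe
print(round(parks_needed))  # 5000 parks
```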
With the current plague of hacking and ransomware, the park's computers would also have to be protected by the digital analog of low and high security fences.
The world-wide use of energy parks could generate carbon free power, in an economically and environmentally viable way, and with little or no proliferation risk. They could supply tens of TW at least as far into the future as the dawn of civilization was in the past.

Conclusions
If civilization as we know it is to be sustained, a sustainable power source is necessary, one at about the same quantity and price as fossil fuel. This paper argued that wind and solar cannot fill this role, and pure magnetic fusion cannot either, at least in this century, if ever. Nuclear power can, but only if some sort of breeding is used to provide fuel. It argued that fusion breeding not only can provide this fuel, likely well before century's end, but that it is the best way to breed.
So, if not breeding, what are the options for ITER and magnetic fusion? Could one prove, with solid theoretical and experimental evidence, that CDRs are not correct or that one has a way around them? This has not happened in 50 years of tokamak research. If so, could one handle the enormous wall loading? Would a sponsor sign on to the concept of a very expensive CDR-limited DEMO with R ≥ ~12 m and B ~ 5 T, or R ≥ 6 m with B ~ 9 T? How will sponsors react when, after a success with ITER, they then learn that another half century or century of effort, for who knows how many tens of billions, will be required before magnetic fusion can provide economic power? How much patience do we think they have? It seems that a thorium or fast neutron fission breeder would be a better, and less expensive, choice. Russia and India are already champing at the bit.
While it is not impossible that something smaller and cheaper than ITER will lead directly to economic pure fusion power, it is very unlikely, based on 50 years of experience with tokamaks, and the best science at this point. Most likely the largest MFE flagship is now cruising at top speed, right toward the iceberg. However, there is still time to steer the ship. Fusion breeding seems to be both possible and necessary. The time to lay the groundwork for fusion breeding is NOW.

Acknowledgement
This work was not sponsored by any organization, public or private. The author very much appreciates correspondence with Jeffery Freidberg and Jeffery Harris. This paper is dedicated to the memory of Dan Meneley and George Stanford. I spent a week with Dan at a science meeting in Ottawa in 2006, where we spent a great deal of time together. Dan worked on both the CANDU and IFR nuclear reactors, and during this time I learned a great deal about nuclear physics and nuclear reactors from him. We kept in close touch; he helped me a great deal on [Manheimer 2014], where I acknowledged his help, and we exchanged many emails over the years. George was an expert physicist and reactor designer. He and I never met in real space, but we spent a fair amount of time together in cyberspace. He was one of the principal designers of the IFR. Although both Dan and George were nuclear scientists, both of them recognized the role plasma and fusion science could play in their field. Some of my email exchanges with both of them are excerpted in [Manheimer 2021 #1].

Appendix: Other options (or non-options) for acquiring fissile fuel
There are at least two other approaches for acquiring fissile fuel. First there is uranium from the oceans, and second there is accelerator production of uranium.
Many uranium chemicals are water soluble, and accordingly, the world's oceans have a large amount of uranium dissolved in them. A cubic meter of sea water contains about 3.2x10^-6 kg of uranium, for a mass concentration of about 3.2x10^-9, well below the concentration for which normal mining is economical. This corresponds to 1.8 MJ/m^3 of fission fuel (235U). The flow of 235U in all the world's rivers corresponds to about 2 TW [Hoffert 2002]. Even with a series of filters which extract 100% of the uranium in the river flows, it would not be enough to provide 10 TW of fissile fuel. As Hoffert puts it: "Getting 10 TW of primary power from 235U crustal ores or seawater extraction may not be impossible, but it would be a big stretch".
The Japanese, using one of their local ocean currents, have extracted uranium this way. In numerous trials, the Japanese program extracted around 100 grams of uranium per month [Guidez] (i.e. ~1 gram of 235U per month, or ~10 grams per year).
Hence the extraction rate would have to increase by ~5 orders of magnitude to fuel a 1 GWe reactor, which burns ~one metric ton of 235U per year. It would have to increase by 8 orders of magnitude to fuel 1 TW, and by 9 to fuel 10 TW. Obviously, uranium from seawater has to clear many, many hurdles. ITER has to increase its nTτ by many fewer orders of magnitude to become an economical fusion breeder.
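The orders-of-magnitude gap quoted above follows directly from the numbers in the text:

```python
# Order-of-magnitude check of the seawater-extraction gap, using the
# figures quoted in the text (Japanese trials: ~10 g of 235U per year;
# a 1 GWe reactor burns ~1 metric ton of 235U per year).
import math

extraction_g_per_yr = 10          # demonstrated 235U recovery rate
reactor_need_g_per_yr = 1e6       # ~1 metric ton of 235U per GWe-year

gap_1GWe = math.log10(reactor_need_g_per_yr / extraction_g_per_yr)
print(gap_1GWe)      # 5.0 -- five orders of magnitude for one 1 GWe reactor

# 1 TWe needs ~1000 such reactors, 10 TWe needs ~10,000:
print(gap_1GWe + 3)  # ~8 orders of magnitude for 1 TW
print(gap_1GWe + 4)  # ~9 orders of magnitude for 10 TW
```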
Regarding accelerator production, a 1 GeV proton impinging on a high Z target, for instance Pb, can generate as many as 30 spallation neutrons. These can breed 233U from thorium as described here.
But let us look at the entire process. Start with 6 GeV of chemical energy in, say, coal. This produces electricity with typically 1/3 efficiency, giving 2 GeV of wall plug electric energy, which could power the accelerator. But accelerators like this are typically 50% efficient, so this yields the 1 GeV proton, which in turn generates about 30 spallation neutrons. Say each one produces a 233U nucleus with no wasted neutrons, so we have 30 233U nuclei. If each one fissions, giving 200 MeV in fission fragments, the 30 nuclei give a total energy of 6 GeV, just the energy we started with in the coal. In other words, even with the most optimistic assumptions at every step, accelerator breeding does no better than break even.
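The break-even chain can be followed step by step with the round numbers from the text:

```python
# Energy balance for accelerator breeding, using the round numbers in
# the text. Every step is taken at its most optimistic value.

coal_GeV = 6.0
electric_GeV = coal_GeV / 3          # ~1/3 thermal-to-electric efficiency
proton_GeV = electric_GeV * 0.5      # ~50% accelerator wall-plug efficiency
neutrons = 30                        # spallation neutrons per 1 GeV proton
fission_MeV = 200                    # energy released per 233U fission

# Best case: every neutron breeds a 233U nucleus that later fissions.
yield_GeV = neutrons * fission_MeV / 1000.0

print(proton_GeV)  # 1.0 GeV proton on target
print(yield_GeV)   # 6.0 GeV recovered -- just the coal energy we started with
```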