FOREWORD

I didn't write a foreword to the original edition of A Brief History of Time. That was done by Carl Sagan. Instead, I wrote a short piece titled "Acknowledgments" in which I was advised to thank everyone. Some of the foundations that had given me support weren't too pleased to have been mentioned, however, because it led to a great increase in applications.

I don't think anyone, my publishers, my agent, or myself, expected the book to do anything like as well as it did. It was in the London Sunday Times best-seller list for 237 weeks, longer than any other book (apparently, the Bible and Shakespeare aren't counted). It has been translated into something like forty languages and has sold about one copy for every 750 men, women, and children in the world. As Nathan Myhrvold of Microsoft (a former post-doc of mine) remarked: I have sold more books on physics than Madonna has on sex.

The success of A Brief History indicates that there is widespread interest in the big questions like: Where did we come from? And why is the universe the way it is?

I have taken the opportunity to update the book and include new theoretical and observational results obtained since the book was first published (on April Fools' Day, 1988). I have included a new chapter on wormholes and time travel. Einstein's General Theory of Relativity seems to offer the possibility that we could create and maintain wormholes, little tubes that connect different regions of space-time. If so, we might be able to use them for rapid travel around the galaxy or travel back in time. Of course, we have not seen anyone from the future (or have we?) but I discuss a possible explanation for this.

I also describe the progress that has been made recently in finding "dualities" or correspondences between apparently different theories of physics. These correspondences are a strong indication that there is a complete unified theory of physics, but they also suggest that it may not be possible to express this theory in a single fundamental formulation. Instead, we may have to use different reflections of the underlying theory in different situations. It might be like our being unable to represent the surface of the earth on a single map and having to use different maps in different regions. This would be a revolution in our view of the unification of the laws of science but it would not change the most important point: that the universe is governed by a set of rational laws that we can discover and understand.

On the observational side, by far the most important development has been the measurement of fluctuations in the cosmic microwave background radiation by COBE (the Cosmic Background Explorer satellite) and other collaborations. These fluctuations are the fingerprints of creation, tiny initial irregularities in the otherwise smooth and uniform early universe that later grew into galaxies, stars, and all the structures we see around us. Their form agrees with the predictions of the proposal that the universe has no boundaries or edges in the imaginary time direction; but further observations will be necessary to distinguish this proposal from other possible explanations for the fluctuations in the background. However, within a few years we should know whether we can believe that we live in a universe that is completely self-contained and without beginning or end.

CHAPTER 1
OUR PICTURE OF THE UNIVERSE

A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy.
He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: "What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise." The scientist gave a superior smile before replying, "What is the tortoise standing on?" "You're very clever, young man, very clever," said the old lady. "But it's turtles all the way down!"

Most people would find the picture of our universe as an infinite tower of tortoises rather ridiculous, but why do we think we know better? What do we know about the universe, and how do we know it? Where did the universe come from, and where is it going? Did the universe have a beginning, and if so, what happened before then? What is the nature of time? Will it ever come to an end? Can we go back in time? Recent breakthroughs in physics, made possible in part by fantastic new technologies, suggest answers to some of these longstanding questions. Someday these answers may seem as obvious to us as the earth orbiting the sun - or perhaps as ridiculous as a tower of tortoises. Only time (whatever that may be) will tell.

As long ago as 340 BC the Greek philosopher Aristotle, in his book On the Heavens, was able to put forward two good arguments for believing that the earth was a round sphere rather than a flat plate. First, he realized that eclipses of the moon were caused by the earth coming between the sun and the moon. The earth's shadow on the moon was always round, which would be true only if the earth was spherical. If the earth had been a flat disk, the shadow would have been elongated and elliptical, unless the eclipse always occurred at a time when the sun was directly under the center of the disk. Second, the Greeks knew from their travels that the North Star appeared lower in the sky when viewed in the south than it did in more northerly regions. (Since the North Star lies over the North Pole, it appears to be directly above an observer at the North Pole, but to someone looking from the equator, it appears to lie just at the horizon.) From the difference in the apparent position of the North Star in Egypt and Greece, Aristotle even quoted an estimate that the distance around the earth was 400,000 stadia. It is not known exactly what length a stadium was, but it may have been about 200 yards, which would make Aristotle's estimate about twice the currently accepted figure. The Greeks even had a third argument that the earth must be round, for why else does one first see the sails of a ship coming over the horizon, and only later see the hull?

Aristotle thought the earth was stationary and that the sun, the moon, the planets, and the stars moved in circular orbits about the earth. He believed this because he felt, for mystical reasons, that the earth was the center of the universe, and that circular motion was the most perfect. This idea was elaborated by Ptolemy in the second century AD into a complete cosmological model. The earth stood at the center, surrounded by eight spheres that carried the moon, the sun, the stars, and the five planets known at the time, Mercury, Venus, Mars, Jupiter, and Saturn (Fig. 1.1). The planets themselves moved on smaller circles attached to their respective spheres in order to account for their rather complicated observed paths in the sky.
The outermost sphere carried the so-called fixed stars, which always stay in the same positions relative to each other but which rotate together across the sky. What lay beyond the last sphere was never made very clear, but it certainly was not part of mankind's observable universe.

Ptolemy's model provided a reasonably accurate system for predicting the positions of heavenly bodies in the sky. But in order to predict these positions correctly, Ptolemy had to make an assumption that the moon followed a path that sometimes brought it twice as close to the earth as at other times. And that meant that the moon ought sometimes to appear twice as big as at other times! Ptolemy recognized this flaw, but nevertheless his model was generally, although not universally, accepted. It was adopted by the Christian church as the picture of the universe that was in accordance with Scripture, for it had the great advantage that it left lots of room outside the sphere of fixed stars for heaven and hell.

A simpler model, however, was proposed in 1514 by a Polish priest, Nicholas Copernicus. (At first, perhaps for fear of being branded a heretic by his church, Copernicus circulated his model anonymously.) His idea was that the sun was stationary at the center and that the earth and the planets moved in circular orbits around the sun. Nearly a century passed before this idea was taken seriously. Then two astronomers - the German, Johannes Kepler, and the Italian, Galileo Galilei - started publicly to support the Copernican theory, despite the fact that the orbits it predicted did not quite match the ones observed. The death blow to the Aristotelian/Ptolemaic theory came in 1609. In that year, Galileo started observing the night sky with a telescope, which had just been invented. When he looked at the planet Jupiter, Galileo found that it was accompanied by several small satellites or moons that orbited around it. This implied that everything did not have to orbit directly around the earth, as Aristotle and Ptolemy had thought. (It was, of course, still possible to believe that the earth was stationary at the center of the universe and that the moons of Jupiter moved on extremely complicated paths around the earth, giving the appearance that they orbited Jupiter. However, Copernicus's theory was much simpler.) At the same time, Johannes Kepler had modified Copernicus's theory, suggesting that the planets moved not in circles but in ellipses (an ellipse is an elongated circle). The predictions now finally matched the observations.

As far as Kepler was concerned, elliptical orbits were merely an ad hoc hypothesis, and a rather repugnant one at that, because ellipses were clearly less perfect than circles. Having discovered almost by accident that elliptical orbits fit the observations well, he could not reconcile them with his idea that the planets were made to orbit the sun by magnetic forces. An explanation was provided only much later, in 1687, when Sir Isaac Newton published his Philosophiae Naturalis Principia Mathematica, probably the most important single work ever published in the physical sciences. In it Newton not only put forward a theory of how bodies move in space and time, but he also developed the complicated mathematics needed to analyze those motions. In addition, Newton postulated a law of universal gravitation according to which each body in the universe was attracted toward every other body by a force that was stronger the more massive the bodies and the closer they were to each other.
It was this same force that caused objects to fall to the ground. (The story that Newton was inspired by an apple hitting his head is almost certainly apocryphal. All Newton himself ever said was that the idea of gravity came to him as he sat "in a contemplative mood" and "was occasioned by the fall of an apple.") Newton went on to show that, according to his law, gravity causes the moon to move in an elliptical orbit around the earth and causes the earth and the planets to follow elliptical paths around the sun.

The Copernican model got rid of Ptolemy's celestial spheres, and with them, the idea that the universe had a natural boundary. Since "fixed stars" did not appear to change their positions apart from a rotation across the sky caused by the earth spinning on its axis, it became natural to suppose that the fixed stars were objects like our sun but very much farther away.

Newton realized that, according to his theory of gravity, the stars should attract each other, so it seemed they could not remain essentially motionless. Would they not all fall together at some point? In a letter in 1691 to Richard Bentley, another leading thinker of his day, Newton argued that this would indeed happen if there were only a finite number of stars distributed over a finite region of space. But he reasoned that if, on the other hand, there were an infinite number of stars, distributed more or less uniformly over infinite space, this would not happen, because there would not be any central point for them to fall to.

This argument is an instance of the pitfalls that you can encounter in talking about infinity. In an infinite universe, every point can be regarded as the center, because every point has an infinite number of stars on each side of it. The correct approach, it was realized only much later, is to consider the finite situation, in which the stars all fall in on each other, and then to ask how things change if one adds more stars roughly uniformly distributed outside this region. According to Newton's law, the extra stars would make no difference at all to the original ones on average, so the stars would fall in just as fast. We can add as many stars as we like, but they will still always collapse in on themselves. We now know it is impossible to have an infinite static model of the universe in which gravity is always attractive.

It is an interesting reflection on the general climate of thought before the twentieth century that no one had suggested that the universe was expanding or contracting. It was generally accepted that either the universe had existed forever in an unchanging state, or that it had been created at a finite time in the past more or less as we observe it today. In part this may have been due to people's tendency to believe in eternal truths, as well as the comfort they found in the thought that even though they may grow old and die, the universe is eternal and unchanging.

Even those who realized that Newton's theory of gravity showed that the universe could not be static did not think to suggest that it might be expanding. Instead, they attempted to modify the theory by making the gravitational force repulsive at very large distances. This did not significantly affect their predictions of the motions of the planets, but it allowed an infinite distribution of stars to remain in equilibrium - with the attractive forces between nearby stars balanced by the repulsive forces from those that were farther away.
However, we now believe such an equilibrium would be unstable: if the stars in some region got only slightly nearer each other, the attractive forces between them would become stronger and dominate over the repulsive forces so that the stars would continue to fall toward each other. On the other hand, if the stars got a bit farther away from each other, the repulsive forces would dominate and drive them farther apart.

Another objection to an infinite static universe is normally ascribed to the German philosopher Heinrich Olbers, who wrote about this theory in 1823. In fact, various contemporaries of Newton had raised the problem, and the Olbers article was not even the first to contain plausible arguments against it. It was, however, the first to be widely noted. The difficulty is that in an infinite static universe nearly every line of sight would end on the surface of a star. Thus one would expect that the whole sky would be as bright as the sun, even at night. Olbers' counter-argument was that the light from distant stars would be dimmed by absorption by intervening matter. However, if that happened the intervening matter would eventually heat up until it glowed as brightly as the stars. The only way of avoiding the conclusion that the whole of the night sky should be as bright as the surface of the sun would be to assume that the stars had not been shining forever but had turned on at some finite time in the past. In that case the absorbing matter might not have heated up yet or the light from distant stars might not yet have reached us. And that brings us to the question of what could have caused the stars to have turned on in the first place.

The beginning of the universe had, of course, been discussed long before this. According to a number of early cosmologies and the Jewish/Christian/Muslim tradition, the universe started at a finite, and not very distant, time in the past. One argument for such a beginning was the feeling that it was necessary to have a "First Cause" to explain the existence of the universe. (Within the universe, you always explained one event as being caused by some earlier event, but the existence of the universe itself could be explained in this way only if it had some beginning.) Another argument was put forward by St. Augustine in his book The City of God. He pointed out that civilization is progressing and we remember who performed this deed or developed that technique. Thus man, and so also perhaps the universe, could not have been around all that long. St. Augustine accepted a date of about 5000 BC for the Creation of the universe according to the book of Genesis. (It is interesting that this is not so far from the end of the last Ice Age, about 10,000 BC, which is when archaeologists tell us that civilization really began.)

Aristotle, and most of the other Greek philosophers, on the other hand, did not like the idea of a creation because it smacked too much of divine intervention. They believed, therefore, that the human race and the world around it had existed, and would exist, forever. The ancients had already considered the argument about progress described above, and answered it by saying that there had been periodic floods or other disasters that repeatedly set the human race right back to the beginning of civilization.

The questions of whether the universe had a beginning in time and whether it is limited in space were later extensively examined by the philosopher Immanuel Kant in his monumental (and very obscure) work Critique of Pure Reason, published in 1781.
He called these questions antinomies (that is, contradictions) of pure reason because he felt that there were equally compelling arguments for believing the thesis, that the universe had a beginning, and the antithesis, that it had existed forever. His argument for the thesis was that if the universe did not have a beginning, there would be an infinite period of time before any event, which he considered absurd. The argument for the antithesis was that if the universe had a beginning, there would be an infinite period of time before it, so why should the universe begin at any one particular time? In fact, his cases for both the thesis and the antithesis are really the same argument. They are both based on his unspoken assumption that time continues back forever, whether or not the universe had existed forever. As we shall see, the concept of time has no meaning before the beginning of the universe. This was first pointed out by St. Augustine. When asked: "What did God do before he created the universe?" Augustine didn't reply: "He was preparing Hell for people who asked such questions." Instead, he said that time was a property of the universe that God created, and that time did not exist before the beginning of the universe.

When most people believed in an essentially static and unchanging universe, the question of whether or not it had a beginning was really one of metaphysics or theology. One could account for what was observed equally well on the theory that the universe had existed forever or on the theory that it was set in motion at some finite time in such a manner as to look as though it had existed forever. But in 1929, Edwin Hubble made the landmark observation that wherever you look, distant galaxies are moving rapidly away from us. In other words, the universe is expanding. This means that at earlier times objects would have been closer together. In fact, it seemed that there was a time, about ten or twenty thousand million years ago, when they were all at exactly the same place and when, therefore, the density of the universe was infinite. This discovery finally brought the question of the beginning of the universe into the realm of science.

Hubble's observations suggested that there was a time, called the big bang, when the universe was infinitesimally small and infinitely dense. Under such conditions all the laws of science, and therefore all ability to predict the future, would break down. If there were events earlier than this time, then they could not affect what happens at the present time. Their existence can be ignored because it would have no observational consequences. One may say that time had a beginning at the big bang, in the sense that earlier times simply would not be defined. It should be emphasized that this beginning in time is very different from those that had been considered previously. In an unchanging universe a beginning in time is something that has to be imposed by some being outside the universe; there is no physical necessity for a beginning. One can imagine that God created the universe at literally any time in the past. On the other hand, if the universe is expanding, there may be physical reasons why there had to be a beginning. One could still imagine that God created the universe at the instant of the big bang, or even afterwards in just such a way as to make it look as though there had been a big bang, but it would be meaningless to suppose that it was created before the big bang.
An expanding universe does not preclude a creator, but it does place limits on when he might have carried out his job!

In order to talk about the nature of the universe and to discuss questions such as whether it has a beginning or an end, you have to be clear about what a scientific theory is. I shall take the simpleminded view that a theory is just a model of the universe, or a restricted part of it, and a set of rules that relate quantities in the model to observations that we make. It exists only in our minds and does not have any other reality (whatever that might mean). A theory is a good theory if it satisfies two requirements. It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations. For example, Aristotle believed Empedocles's theory that everything was made out of four elements, earth, air, fire, and water. This was simple enough, but did not make any definite predictions. On the other hand, Newton's theory of gravity was based on an even simpler model, in which bodies attracted each other with a force that was proportional to a quantity called their mass and inversely proportional to the square of the distance between them. Yet it predicts the motions of the sun, the moon, and the planets to a high degree of accuracy.

Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of experiments agree with some theory, you can never be sure that the next time the result will not contradict the theory. On the other hand, you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory. As philosopher of science Karl Popper has emphasized, a good theory is characterized by the fact that it makes a number of predictions that could in principle be disproved or falsified by observation. Each time new experiments are observed to agree with the predictions, the theory survives, and our confidence in it is increased; but if ever a new observation is found to disagree, we have to abandon or modify the theory. At least that is what is supposed to happen, but you can always question the competence of the person who carried out the observation.

In practice, what often happens is that a new theory is devised that is really an extension of the previous theory. For example, very accurate observations of the planet Mercury revealed a small difference between its motion and the predictions of Newton's theory of gravity. Einstein's general theory of relativity predicted a slightly different motion from Newton's theory. The fact that Einstein's predictions matched what was seen, while Newton's did not, was one of the crucial confirmations of the new theory. However, we still use Newton's theory for all practical purposes because the difference between its predictions and those of general relativity is very small in the situations that we normally deal with. (Newton's theory also has the great advantage that it is much simpler to work with than Einstein's!)

The eventual goal of science is to provide a single theory that describes the whole universe. However, the approach most scientists actually follow is to separate the problem into two parts. First, there are the laws that tell us how the universe changes with time. (If we know what the universe is like at any one time, these physical laws tell us how it will look at any later time.)
Second, there is the question of the initial state of the universe. Some people feel that science should be concerned with only the first part; they regard the question of the initial situation as a matter for metaphysics or religion. They would say that God, being omnipotent, could have started the universe off any way he wanted. That may be so, but in that case he also could have made it develop in a completely arbitrary way. Yet it appears that he chose to make it evolve in a very regular way according to certain laws. It therefore seems equally reasonable to suppose that there are also laws governing the initial state.

It turns out to be very difficult to devise a theory to describe the universe all in one go. Instead, we break the problem up into bits and invent a number of partial theories. Each of these partial theories describes and predicts a certain limited class of observations, neglecting the effects of other quantities, or representing them by simple sets of numbers. It may be that this approach is completely wrong. If everything in the universe depends on everything else in a fundamental way, it might be impossible to get close to a full solution by investigating parts of the problem in isolation. Nevertheless, it is certainly the way that we have made progress in the past. The classic example again is the Newtonian theory of gravity, which tells us that the gravitational force between two bodies depends only on one number associated with each body, its mass, but is otherwise independent of what the bodies are made of. Thus one does not need to have a theory of the structure and constitution of the sun and the planets in order to calculate their orbits.

Today scientists describe the universe in terms of two basic partial theories - the general theory of relativity and quantum mechanics. They are the great intellectual achievements of the first half of this century. The general theory of relativity describes the force of gravity and the large-scale structure of the universe, that is, the structure on scales from only a few miles to as large as a million million million million (1 with twenty-four zeros after it) miles, the size of the observable universe. Quantum mechanics, on the other hand, deals with phenomena on extremely small scales, such as a millionth of a millionth of an inch. Unfortunately, however, these two theories are known to be inconsistent with each other - they cannot both be correct. One of the major endeavors in physics today, and the major theme of this book, is the search for a new theory that will incorporate them both - a quantum theory of gravity. We do not yet have such a theory, and we may still be a long way from having one, but we do already know many of the properties that it must have. And we shall see, in later chapters, that we already know a fair amount about the predictions a quantum theory of gravity must make.

Now, if you believe that the universe is not arbitrary, but is governed by definite laws, you ultimately have to combine the partial theories into a complete unified theory that will describe everything in the universe. But there is a fundamental paradox in the search for such a complete unified theory. The ideas about scientific theories outlined above assume we are rational beings who are free to observe the universe as we want and to draw logical deductions from what we see. In such a scheme it is reasonable to suppose that we might progress ever closer toward the laws that govern our universe.
Yet if there really is a complete unified theory, it would also presumably determine our actions. And so the theory itself would determine the outcome of our search for it! And why should it determine that we come to the right conclusions from the evidence? Might it not equally well determine that we draw the wrong conclusion? Or no conclusion at all?

The only answer that I can give to this problem is based on Darwin's principle of natural selection. The idea is that in any population of self-reproducing organisms, there will be variations in the genetic material and upbringing that different individuals have. These differences will mean that some individuals are better able than others to draw the right conclusions about the world around them and to act accordingly. These individuals will be more likely to survive and reproduce and so their pattern of behavior and thought will come to dominate. It has certainly been true in the past that what we call intelligence and scientific discovery have conveyed a survival advantage. It is not so clear that this is still the case: our scientific discoveries may well destroy us all, and even if they don't, a complete unified theory may not make much difference to our chances of survival. However, provided the universe has evolved in a regular way, we might expect that the reasoning abilities that natural selection has given us would be valid also in our search for a complete unified theory, and so would not lead us to the wrong conclusions.

Because the partial theories that we already have are sufficient to make accurate predictions in all but the most extreme situations, the search for the ultimate theory of the universe seems difficult to justify on practical grounds. (It is worth noting, though, that similar arguments could have been used against both relativity and quantum mechanics, and these theories have given us both nuclear energy and the microelectronics revolution!) The discovery of a complete unified theory, therefore, may not aid the survival of our species. It may not even affect our life-style. But ever since the dawn of civilization, people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world. Today we still yearn to know why we are here and where we came from. Humanity's deepest desire for knowledge is justification enough for our continuing quest. And our goal is nothing less than a complete description of the universe we live in.

CHAPTER 2
SPACE AND TIME

Our present ideas about the motion of bodies date back to Galileo and Newton. Before them people believed Aristotle, who said that the natural state of a body was to be at rest and that it moved only if driven by a force or impulse. It followed that a heavy body should fall faster than a light one, because it would have a greater pull toward the earth.

The Aristotelian tradition also held that one could work out all the laws that govern the universe by pure thought: it was not necessary to check by observation. So no one until Galileo bothered to see whether bodies of different weight did in fact fall at different speeds. It is said that Galileo demonstrated that Aristotle's belief was false by dropping weights from the leaning tower of Pisa. The story is almost certainly untrue, but Galileo did do something equivalent: he rolled balls of different weights down a smooth slope. The situation is similar to that of heavy bodies falling vertically, but it is easier to observe because the speeds are smaller.
Galileo's measurements indicated that each body increased its speed at the same rate, no matter what its weight. For example, if you let go of a ball on a slope that drops by one meter for every ten meters you go along, the ball will be traveling down the slope at a speed of about one meter per second after one second, two meters per second after two seconds, and so on, however heavy the ball. Of course a lead weight would fall faster than a feather, but that is only because a feather is slowed down by air resistance. If one drops two bodies that don't have much air resistance, such as two different lead weights, they fall at the same rate. On the moon, where there is no air to slow things down, the astronaut David R. Scott performed the feather and lead weight experiment and found that indeed they did hit the ground at the same time.

Galileo's measurements were used by Newton as the basis of his laws of motion. In Galileo's experiments, as a body rolled down the slope it was always acted on by the same force (its weight), and the effect was to make it constantly speed up. This showed that the real effect of a force is always to change the speed of a body, rather than just to set it moving, as was previously thought. It also meant that whenever a body is not acted on by any force, it will keep on moving in a straight line at the same speed. This idea was first stated explicitly in Newton's Principia Mathematica, published in 1687, and is known as Newton's first law. What happens to a body when a force does act on it is given by Newton's second law. This states that the body will accelerate, or change its speed, at a rate that is proportional to the force. (For example, the acceleration is twice as great if the force is twice as great.) The acceleration is also smaller the greater the mass (or quantity of matter) of the body. (The same force acting on a body of twice the mass will produce half the acceleration.) A familiar example is provided by a car: the more powerful the engine, the greater the acceleration, but the heavier the car, the smaller the acceleration for the same engine.

In addition to his laws of motion, Newton discovered a law to describe the force of gravity, which states that every body attracts every other body with a force that is proportional to the mass of each body. Thus the force between two bodies would be twice as strong if one of the bodies (say, body A) had its mass doubled. This is what you might expect because one could think of the new body A as being made of two bodies with the original mass. Each would attract body B with the original force. Thus the total force between A and B would be twice the original force. And if, say, one of the bodies had twice the mass, and the other had three times the mass, then the force would be six times as strong. One can now see why all bodies fall at the same rate: a body of twice the weight will have twice the force of gravity pulling it down, but it will also have twice the mass. According to Newton's second law, these two effects will exactly cancel each other, so the acceleration will be the same in all cases.

Newton's law of gravity also tells us that the farther apart the bodies, the smaller the force. Newton's law of gravity says that the gravitational attraction of a star is exactly one quarter that of a similar star at half the distance. This law predicts the orbits of the earth, the moon, and the planets with great accuracy.
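In conventional modern notation, which the book itself does not use, the law and the cancellation described above can be sketched as follows (F, G, m1, m2, r, a, and M are the standard symbols, not quantities introduced in the text):

```latex
% Newton's law of universal gravitation, in the usual modern form
% (G is the gravitational constant, m_1 and m_2 the two masses, r their separation):
\[
  F = \frac{G\, m_1 m_2}{r^2}
\]
% Doubling one mass doubles the force; doubling one and tripling the other
% multiplies it by six, as in the text. For a body of mass m falling toward
% the earth (mass M), Newton's second law F = m a gives
\[
  a = \frac{F}{m} = \frac{G\, M}{r^2},
\]
% which is independent of m - all bodies fall at the same rate - and the
% 1/r^2 factor is the "one quarter of the force at twice the distance" rule.
```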
If the law were that the gravitational attraction of a star went down faster or increased more rapidly with distance, the orbits of the planets would not be elliptical; they would either spiral in to the sun or escape from the sun.

The big difference between the ideas of Aristotle and those of Galileo and Newton is that Aristotle believed in a preferred state of rest, which any body would take up if it were not driven by some force or impulse. In particular, he thought that the earth was at rest. But it follows from Newton's laws that there is no unique standard of rest. One could equally well say that body A was at rest and body B was moving at constant speed with respect to body A, or that body B was at rest and body A was moving. For example, if one sets aside for a moment the rotation of the earth and its orbit round the sun, one could say that the earth was at rest and that a train on it was traveling north at ninety miles per hour or that the train was at rest and the earth was moving south at ninety miles per hour. If one carried out experiments with moving bodies on the train, all Newton's laws would still hold. For instance, playing Ping-Pong on the train, one would find that the ball obeyed Newton's laws just like a ball on a table by the track. So there is no way to tell whether it is the train or the earth that is moving.

The lack of an absolute standard of rest meant that one could not determine whether two events that took place at different times occurred in the same position in space. For example, suppose our Ping-Pong ball on the train bounces straight up and down, hitting the table twice on the same spot one second apart. To someone on the track, the two bounces would seem to take place about forty meters apart, because the train would have traveled that far down the track between the bounces. The nonexistence of absolute rest therefore meant that one could not give an event an absolute position in space, as Aristotle had believed. The positions of events and the distances between them would be different for a person on the train and one on the track, and there would be no reason to prefer one person's position to the other's.

Newton was very worried by this lack of absolute position, or absolute space, as it was called, because it did not accord with his idea of an absolute God. In fact, he refused to accept lack of absolute space, even though it was implied by his laws. He was severely criticized for this irrational belief by many people, most notably by Bishop Berkeley, a philosopher who believed that all material objects and space and time are an illusion. When the famous Dr. Johnson was told of Berkeley's opinion, he cried, "I refute it thus!" and stubbed his toe on a large stone.

Both Aristotle and Newton believed in absolute time. That is, they believed that one could unambiguously measure the interval of time between two events, and that this time would be the same whoever measured it, provided they used a good clock. Time was completely separate from and independent of space. This is what most people would take to be the commonsense view. However, we have had to change our ideas about space and time. Although our apparently commonsense notions work well when dealing with things like apples, or planets that travel comparatively slowly, they don't work at all for things moving at or near the speed of light.

The fact that light travels at a finite, but very high, speed was first discovered in 1676 by the Danish astronomer Ole Christensen Roemer.
He observed that the times at which the moons of Jupiter appeared to pass behind Jupiter were not evenly spaced, as one would expect if the moons went round Jupiter at a constant rate. As the earth and Jupiter orbit around the sun, the distance between them varies. Roemer noticed that eclipses of Jupiter's moons appeared later the farther we were from Jupiter. He argued that this was because the light from the moons took longer to reach us when we were farther away. His measurements of the variations in the distance of the earth from Jupiter were, however, not very accurate, and so his value for the speed of light was 140,000 miles per second, compared to the modern value of 186,000 miles per second. Nevertheless, Roemer's achievement, in not only proving that light travels at a finite speed, but also in measuring that speed, was remarkable - coming as it did eleven years before Newton's publication of Principia Mathematica.

A proper theory of the propagation of light didn't come until 1865, when the British physicist James Clerk Maxwell succeeded in unifying the partial theories that up to then had been used to describe the forces of electricity and magnetism. Maxwell's equations predicted that there could be wavelike disturbances in the combined electromagnetic field, and that these would travel at a fixed speed, like ripples on a pond. If the wavelength of these waves (the distance between one wave crest and the next) is a meter or more, they are what we now call radio waves. Shorter wavelengths are known as microwaves (a few centimeters) or infrared (more than a ten-thousandth of a centimeter). Visible light has a wavelength of between only forty and eighty millionths of a centimeter. Even shorter wavelengths are known as ultraviolet, X rays, and gamma rays.

Maxwell's theory predicted that radio or light waves should travel at a certain fixed speed. But Newton's theory had got rid of the idea of absolute rest, so if light was supposed to travel at a fixed speed, one would have to say what that fixed speed was to be measured relative to. It was therefore suggested that there was a substance called the "ether" that was present everywhere, even in "empty" space. Light waves should travel through the ether as sound waves travel through air, and their speed should therefore be relative to the ether. Different observers, moving relative to the ether, would see light coming toward them at different speeds, but light's speed relative to the ether would remain fixed. In particular, as the earth was moving through the ether on its orbit round the sun, the speed of light measured in the direction of the earth's motion through the ether (when we were moving toward the source of the light) should be higher than the speed of light at right angles to that motion (when we are not moving toward the source). In 1887 Albert Michelson (who later became the first American to receive the Nobel Prize for physics) and Edward Morley carried out a very careful experiment at the Case School of Applied Science in Cleveland. They compared the speed of light in the direction of the earth's motion with that at right angles to the earth's motion. To their great surprise, they found they were exactly the same!

Between 1887 and 1905 there were several attempts, most notably by the Dutch physicist Hendrik Lorentz, to explain the result of the Michelson-Morley experiment in terms of objects contracting and clocks slowing down when they moved through the ether.
However, in a famous paper in 1905, a hitherto unknown clerk in the Swiss patent office, Albert Einstein, pointed out that the whole idea of an ether was unnecessary, providing one was willing to abandon the idea of absolute time. A similar point was made a few weeks later by a leading French mathematician, Henri Poincaré. Einstein's arguments were closer to physics than those of Poincaré, who regarded this problem as mathematical. Einstein is usually given the credit for the new theory, but Poincaré is remembered by having his name attached to an important part of it.

The fundamental postulate of the theory of relativity, as it was called, was that the laws of science should be the same for all freely moving observers, no matter what their speed. This was true for Newton's laws of motion, but now the idea was extended to include Maxwell's theory and the speed of light: all observers should measure the same speed of light, no matter how fast they are moving. This simple idea has some remarkable consequences. Perhaps the best known are the equivalence of mass and energy, summed up in Einstein's famous equation E = mc² (where E is energy, m is mass, and c is the speed of light), and the law that nothing may travel faster than the speed of light. Because of the equivalence of energy and mass, the energy which an object has due to its motion will add to its mass. In other words, it will make it harder to increase its speed. This effect is only really significant for objects moving at speeds close to the speed of light. For example, at 10 percent of the speed of light an object's mass is only 0.5 percent more than normal, while at 90 percent of the speed of light it would be more than twice its normal mass. As an object approaches the speed of light, its mass rises ever more quickly, so it takes more and more energy to speed it up further. It can in fact never reach the speed of light, because by then its mass would have become infinite, and by the equivalence of mass and energy, it would have taken an infinite amount of energy to get it there. For this reason, any normal object is forever confined by relativity to move at speeds slower than the speed of light. Only light, or other waves that have no intrinsic mass, can move at the speed of light.

An equally remarkable consequence of relativity is the way it has revolutionized our ideas of space and time. In Newton's theory, if a pulse of light is sent from one place to another, different observers would agree on the time that the journey took (since time is absolute), but will not always agree on how far the light traveled (since space is not absolute). Since the speed of the light is just the distance it has traveled divided by the time it has taken, different observers would measure different speeds for the light. In relativity, on the other hand, all observers must agree on how fast light travels. They still, however, do not agree on the distance the light has traveled, so they must therefore now also disagree over the time it has taken. (The time taken is the distance the light has traveled - which the observers do not agree on - divided by the light's speed - which they do agree on.) In other words, the theory of relativity put an end to the idea of absolute time! It appeared that each observer must have his own measure of time, as recorded by a clock carried with him, and that identical clocks carried by different observers would not necessarily agree.
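The mass-increase figures quoted above can be checked with the standard relativistic factor, sketched here in conventional notation (m0, v, and c are the usual symbols for rest mass, speed, and the speed of light, not quantities defined in the book):

```latex
% Relativistic increase of mass with speed, in the conventional form:
\[
  m = \frac{m_0}{\sqrt{1 - v^2/c^2}}
\]
% At v = 0.1c: 1/sqrt(1 - 0.01) is about 1.005, i.e. roughly 0.5 percent more than m_0.
% At v = 0.9c: 1/sqrt(1 - 0.81) is about 2.29, i.e. more than twice m_0.
% As v approaches c the denominator goes to zero, so m grows without limit.
```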
Each observer could use radar to say where and when an event took place by sending out a pulse of light or radio waves. Part of the pulse is reflected back at the event and the observer measures the time at which he receives the echo. The time of the event is then said to be the time halfway between when the pulse was sent and the time when the reflection was received back; the distance of the event is half the time taken for this round trip, multiplied by the speed of light. (An event, in this sense, is something that takes place at a single point in space, at a specified point in time.) This idea is shown in Fig. 2.1, which is an example of a space-time diagram. Using this procedure, observers who are moving relative to each other will assign different times and positions to the same event. No particular observer's measurements are any more correct than any other observer's, but all the measurements are related. Any observer can work out precisely what time and position any other observer will assign to an event, provided he knows the other observer's relative velocity.

Nowadays we use just this method to measure distances precisely, because we can measure time more accurately than length. In effect, the meter is defined to be the distance traveled by light in 0.000000003335640952 second, as measured by a cesium clock. (The reason for that particular number is that it corresponds to the historical definition of the meter - in terms of two marks on a particular platinum bar kept in Paris.) Equally, we can use a more convenient, new unit of length called a light-second. This is simply defined as the distance that light travels in one second. In the theory of relativity, we now define distance in terms of time and the speed of light, so it follows automatically that every observer will measure light to have the same speed (by definition, 1 meter per 0.000000003335640952 second). There is no need to introduce the idea of an ether, whose presence anyway cannot be detected, as the Michelson-Morley experiment showed. The theory of relativity does, however, force us to change fundamentally our ideas of space and time. We must accept that time is not completely separate from and independent of space, but is combined with it to form an object called space-time.

It is a matter of common experience that one can describe the position of a point in space by three numbers, or coordinates. For instance, one can say that a point in a room is seven feet from one wall, three feet from another, and five feet above the floor. Or one could specify that a point was at a certain latitude and longitude and a certain height above sea level. One is free to use any three suitable coordinates, although they have only a limited range of validity. One would not specify the position of the moon in terms of miles north and miles west of Piccadilly Circus and feet above sea level. Instead, one might describe it in terms of distance from the sun, distance from the plane of the orbits of the planets, and the angle between the line joining the moon to the sun and the line joining the sun to a nearby star such as Alpha Centauri. Even these coordinates would not be of much use in describing the position of the sun in our galaxy or the position of our galaxy in the local group of galaxies. In fact, one may describe the whole universe in terms of a collection of overlapping patches. In each patch, one can use a different set of three coordinates to specify the position of a point.

An event is something that happens at a particular point in space and at a particular time. So one can specify it by four numbers or coordinates.
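As a concrete instance of the radar prescription described above (an illustrative sketch; the emission and echo times are invented for the example, and t1, t2, c are conventional labels rather than symbols used in the book):

```latex
% Radar coordinates assigned by a single observer:
% a pulse is emitted at time t_1 and the echo is received at time t_2.
\[
  t_{\text{event}} = \frac{t_1 + t_2}{2},
  \qquad
  d_{\text{event}} = \frac{c\,(t_2 - t_1)}{2}
\]
% Example: a pulse sent at t_1 = 0 s whose echo returns at t_2 = 2 s gives
% t_event = 1 s and d_event = 1 light-second (about 300,000 kilometers).
```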
Again, the choice of coordinates is arbitrary; one can use any three well-defined spatial coordinates and any measure of time. In relativity, there is no real distinction between the space and time coordinates, just as there is no real difference between any two space coordinates. One could choose a new set of coordinates in which, say, the first space coordinate was a combination of the old first and second space coordinates. For instance, instead of measuring the position of a point on the earth in miles north of Piccadilly and miles west of Piccadilly, one could use miles northeast of Piccadilly, and miles northwest of Piccadilly. Similarly, in relativity, one could use a new time coordinate that was the old time (in seconds) plus the distance (in light-seconds) north of Piccadilly.

It is often helpful to think of the four coordinates of an event as specifying its position in a four-dimensional space called space-time. It is impossible to imagine a four-dimensional space. I personally find it hard enough to visualize three-dimensional space! However, it is easy to draw diagrams of two-dimensional spaces, such as the surface of the earth. (The surface of the earth is two-dimensional because the position of a point can be specified by two coordinates, latitude and longitude.) I shall generally use diagrams in which time increases upward and one of the spatial dimensions is shown horizontally. The other two spatial dimensions are ignored or, sometimes, one of them is indicated by perspective. (These are called space-time diagrams, like Fig. 2.1.) For example, in Fig. 2.2 time is measured upward in years and the distance along the line from the sun to Alpha Centauri is measured horizontally in miles. The paths of the sun and of Alpha Centauri through space-time are shown as the vertical lines on the left and right of the diagram. A ray of light from the sun follows the diagonal line, and takes four years to get from the sun to Alpha Centauri.

As we have seen, Maxwell's equations predicted that the speed of light should be the same whatever the speed of the source, and this has been confirmed by accurate measurements. It follows from this that if a pulse of light is emitted at a particular time at a particular point in space, then as time goes on it will spread out as a sphere of light whose size and position are independent of the speed of the source. After one millionth of a second the light will have spread out to form a sphere with a radius of 300 meters; after two millionths of a second, the radius will be 600 meters; and so on. It will be like the ripples that spread out on the surface of a pond when a stone is thrown in. The ripples spread out as a circle that gets bigger as time goes on. If one stacks snapshots of the ripples at different times one above the other, the expanding circle of ripples will mark out a cone whose tip is at the place and time at which the stone hit the water (Fig. 2.3). Similarly, the light spreading out from an event forms a (three-dimensional) cone in (the four-dimensional) space-time. This cone is called the future light cone of the event. In the same way we can draw another cone, called the past light cone, which is the set of events from which a pulse of light is able to reach the given event (Fig. 2.4).

Given an event P, one can divide the other events in the universe into three classes. Those events that can be reached from the event P by a particle or wave traveling at or below the speed of light are said to be in the future of P.
They will lie within or on the expanding sphere of light emitted from the event P. Thus they will lie within or on the future light cone of P in the space-time diagram. Only events in the future of P can be affected by what happens at P because nothing can travel faster than light.

Similarly, the past of P can be defined as the set of all events from which it is possible to reach the event P traveling at or below the speed of light. It is thus the set of events that can affect what happens at P. The events that do not lie in the future or past of P are said to lie in the elsewhere of P (Fig. 2.5). What happens at such events can neither affect nor be affected by what happens at P. For example, if the sun were to cease to shine at this very moment, it would not affect things on earth at the present time because they would be in the elsewhere of the event when the sun went out (Fig. 2.6). We would know about it only after eight minutes, the time it takes light to reach us from the sun. Only then would events on earth lie in the future light cone of the event at which the sun went out. Similarly, we do not know what is happening at the moment farther away in the universe: the light that we see from distant galaxies left them millions of years ago, and in the case of the most distant object that we have seen, the light left some eight thousand million years ago. Thus, when we look at the universe, we are seeing it as it was in the past.

If one neglects gravitational effects, as Einstein and Poincaré did in 1905, one has what is called the special theory of relativity. For every event in space-time we may construct a light cone (the set of all possible paths of light in space-time emitted at that event), and since the speed of light is the same at every event and in every direction, all the light cones will be identical and will all point in the same direction. The theory also tells us that nothing can travel faster than light. This means that the path of any object through space and time must be represented by a line that lies within the light cone at each event on it (Fig. 2.7). The special theory of relativity was very successful in explaining that the speed of light appears the same to all observers (as shown by the Michelson-Morley experiment) and in describing what happens when things move at speeds close to the speed of light. However, it was inconsistent with the Newtonian theory of gravity, which said that objects attracted each other with a force that depended on the distance between them. This meant that if one moved one of the objects, the force on the other one would change instantaneously. Or in other words, gravitational effects should travel with infinite velocity, instead of at or below the speed of light, as the special theory of relativity required. Einstein made a number of unsuccessful attempts between 1908 and 1914 to find a theory of gravity that was consistent with special relativity. Finally, in 1915, he proposed what we now call the general theory of relativity.

Einstein made the revolutionary suggestion that gravity is not a force like other forces, but is a consequence of the fact that space-time is not flat, as had been previously assumed: it is curved, or "warped," by the distribution of mass and energy in it. Bodies like the earth are not made to move on curved orbits by a force called gravity; instead, they follow the nearest thing to a straight path in a curved space, which is called a geodesic. A geodesic is the shortest (or longest) path between two nearby points.
For example, the surface of the earth is a two-dimensional curved space. A geodesic on the earth is called a great circle, and is the shortest route between two points (Fig. 2.8). As the geodesic is the shortest path between any two airports, this is the route an airline navigator will tell the pilot to fly along. In general relativity, bodies always follow straight lines in four-dimensional space-time, but they nevertheless appear to us to move along curved paths in our three-dimensional space. (This is rather like watching an airplane flying over hilly ground. Although it follows a straight line in three-dimensional space, its shadow follows a curved path on the two-dimensional ground.)

The mass of the sun curves space-time in such a way that although the earth follows a straight path in four-dimensional space-time, it appears to us to move along a circular orbit in three-dimensional space. In fact, the orbits of the planets predicted by general relativity are almost exactly the same as those predicted by the Newtonian theory of gravity. However, in the case of Mercury, which, being the nearest planet to the sun, feels the strongest gravitational effects, and has a rather elongated orbit, general relativity predicts that the long axis of the ellipse should rotate about the sun at a rate of about one degree in ten thousand years. Small though this effect is, it had been noticed before 1915 and served as one of the first confirmations of Einstein's theory. In recent years the even smaller deviations of the orbits of the other planets from the Newtonian predictions have been measured by radar and found to agree with the predictions of general relativity.

Light rays too must follow geodesics in space-time. Again, the fact that space is curved means that light no longer appears to travel in straight lines in space. So general relativity predicts that light should be bent by gravitational fields. For example, the theory predicts that the light cones of points near the sun would be slightly bent inward, on account of the mass of the sun. This means that light from a distant star that happened to pass near the sun would be deflected through a small angle, causing the star to appear in a different position to an observer on the earth (Fig. 2.9). Of course, if the light from the star always passed close to the sun, we would not be able to tell whether the light was being deflected or if instead the star was really where we see it. However, as the earth orbits around the sun, different stars appear to pass behind the sun and have their light deflected. They therefore change their apparent position relative to other stars. It is normally very difficult to see this effect, because the light from the sun makes it impossible to observe stars that appear near to the sun in the sky. However, it is possible to do so during an eclipse of the sun, when the sun's light is blocked out by the moon. Einstein's prediction of light deflection could not be tested immediately in 1915, because the First World War was in progress, and it was not until 1919 that a British expedition, observing an eclipse from West Africa, showed that light was indeed deflected by the sun, just as predicted by the theory. This proof of a German theory by British scientists was hailed as a great act of reconciliation between the two countries after the war. It is ironic, therefore, that later examination of the photographs taken on that expedition showed the errors were as great as the effect they were trying to measure.
Another prediction of general relativity is that time should appear to run slower near a massive body like the earth. This is because there is a relation between the energy of light and its frequency (that is, the number of waves of light per second): the greater the energy, the higher the frequency. As light travels upward in the earth's gravitational field, it loses energy, and so its frequency goes down. (This means that the length of time between one wave crest and the next goes up.) To someone high up, it would appear that everything down below was taking longer to happen. This prediction was tested in 1962, using a pair of very accurate clocks mounted at the top and bottom of a water tower. The clock at the bottom, which was nearer the earth, was found to run slower, in exact agreement with general relativity. The difference in the speed of clocks at different heights above the earth is now of considerable practical importance, with the advent of very accurate navigation systems based on signals from satellites. If one ignored the predictions of general relativity, the position that one calculated would be wrong by several miles!

Newton's laws of motion put an end to the idea of absolute position in space. The theory of relativity gets rid of absolute time. Consider a pair of twins. Suppose that one twin goes to live on the top of a mountain while the other stays at sea level. The first twin would age faster than the second. Thus, if they met again, one would be older than the other. In this case, the difference in ages would be very small, but it would be much larger if one of the twins went for a long trip in a spaceship at nearly the speed of light. When he returned, he would be much younger than the one who stayed on earth. This is known as the twins paradox, but it is a paradox only if one has the idea of absolute time at the back of one's mind. In the theory of relativity there is no unique absolute time, but instead each individual has his own personal measure of time that depends on where he is and how he is moving.

Before 1915, space and time were thought of as a fixed arena in which events took place, but which was not affected by what happened in it. This was true even of the special theory of relativity. Bodies moved, forces attracted and repelled, but time and space simply continued, unaffected. It was natural to think that space and time went on forever. The situation, however, is quite different in the general theory of relativity. Space and time are now dynamic quantities: when a body moves, or a force acts, it affects the curvature of space and time - and in turn the structure of space-time affects the way in which bodies move and forces act. Space and time not only affect but also are affected by everything that happens in the universe. Just as one cannot talk about events in the universe without the notions of space and time, so in general relativity it became meaningless to talk about space and time outside the limits of the universe. In the following decades this new understanding of space and time was to revolutionize our view of the universe.
The oldidea of an essentially unchanging universe that could haveexisted, and could continue to exist, forever was replaced bythe notion of a dynamic, expanding universe that seemed tohave begun a finite time ago, and that might end at a finitetime in the future. That revolution forms the subject of thenext chapter. And years later, it was also to be the startingpoint for my work in theoretical physics. Roger Penrose and Ishowed that Einstein’s general theory of relativity implied thatthe universe must have a beginning and, possibly, an end. CHAPTER 3 THE EXPANDING UNIVERSE If one looks at the sky on a clear, moonless night, thebrightest objects one sees are likely to be the planets Venus,Mars, Jupiter, and Saturn. There will also be a very largenumber of stars, which are just like our own sun but muchfarther from us. Some of these fixed stars do, in fact, appearto change very slightly their positions relative to each other asearth orbits around the sun: they are not really fixed at all! This is because they are comparatively near to us. As the earthgoes round the sun, we see them from different positionsagainst the background of more distant stars. This is fortunate,because it enables us to measure directly the distance of thesestars from us: the nearer they are, the more they appear tomove. The nearest star, called Proxima Centauri, is found to beabout four light-years away (the light from it takes about fouryears to reach earth), or about twenty-three million millionmiles. Most of the other stars that are visible to the naked eyelie within a few hundred light-years of us. Our sun, forcomparison, is a mere light-minutes away! The visible starsappear spread all over the night sky, but are particularlyconcentrated in one band, which we call the Milky Way. Aslong ago as 1750, some astronomers were suggesting that theappearance of the Milky Way could be explained if most of thevisible stars lie in a single disklike configuration, one example ofwhat we now call a spiral galaxy. Only a few decades later, theastronomer Sir William Herschel confirmed this idea bypainstakingly cataloging the positions and distances of vastnumbers of stars. Even so, the idea gained complete acceptanceonly early this century. Our modern picture of the universe dates back to only 1924,when the American astronomer Edwin Hubble demonstratedthat ours was not the only galaxy. There were in fact manyothers, with vast tracts of empty space between them. In orderto prove this, he needed to determine the distances to theseother galaxies, which are so far away that, unlike nearby stars,they really do appear fixed. Hubble was forced, therefore, touse indirect methods to measure the distances. Now, theapparent brightness of a star depends on two factors: howmuch light it radiates (its luminosity), and how far it is fromus. For nearby stars, we can measure their apparent brightnessand their distance, and so we can work out their luminosity. Conversely, if we knew the luminosity of stars in other galaxies,we could work out their distance by measuring their apparentbrightness. Hubble noted that certain types of stars always havethe same luminosity when they are near enough for us tomeasure; therefore, he argued, if we found such stars inanother galaxy, we could assume that they had the sameluminosity - and so calculate the distance to that galaxy. If wecould do this for a number of stars in the same galaxy, andour calculations always gave the same distance, we could befairly confident of our estimate. 
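The reasoning behind Hubble's method is just the inverse-square law: apparent brightness falls off as the square of the distance, so a star of known luminosity acts as a standard candle. The sketch below uses made-up numbers for the luminosity and the measured brightness, purely to illustrate the method; nothing here comes from the text itself.

    import math

    SOLAR_LUMINOSITY = 3.8e26   # watts
    LIGHT_YEAR = 9.46e15        # meters

    def distance_from_brightness(luminosity_watts, apparent_brightness_w_per_m2):
        """Invert b = L / (4 * pi * d^2) to recover the distance d."""
        return math.sqrt(luminosity_watts / (4 * math.pi * apparent_brightness_w_per_m2))

    # Hypothetical star: 10,000 times the sun's luminosity, measured flux of 1e-13 W/m^2.
    d_meters = distance_from_brightness(1e4 * SOLAR_LUMINOSITY, 1e-13)
    print(d_meters / LIGHT_YEAR)   # distance in light-years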
In this way, Edwin Hubble worked out the distances to nine different galaxies. We now know that our galaxy is only one of some hundred thousand million that can be seen using modern telescopes, each galaxy itself containing some hundred thousand million stars. Fig. 3.1 shows a picture of one spiral galaxy that is similar to what we think ours must look like to someone living in another galaxy. We live in a galaxy that is about one hundred thousand light-years across and is slowly rotating; the stars in its spiral arms orbit around its center about once every several hundred million years. Our sun is just an ordinary, average-sized, yellow star, near the inner edge of one of the spiral arms. We have certainly come a long way since Aristotle and Ptolemy, when it was thought that the earth was the center of the universe!

Stars are so far away that they appear to us to be just pinpoints of light. We cannot see their size or shape. So how can we tell different types of stars apart? For the vast majority of stars, there is only one characteristic feature that we can observe - the color of their light. Newton discovered that if light from the sun passes through a triangular-shaped piece of glass, called a prism, it breaks up into its component colors (its spectrum) as in a rainbow. By focusing a telescope on an individual star or galaxy, one can similarly observe the spectrum of the light from that star or galaxy. Different stars have different spectra, but the relative brightness of the different colors is always exactly what one would expect to find in the light emitted by an object that is glowing red hot. (In fact, the light emitted by any opaque object that is glowing red hot has a characteristic spectrum that depends only on its temperature - a thermal spectrum. This means that we can tell a star's temperature from the spectrum of its light.) Moreover, we find that certain very specific colors are missing from stars' spectra, and these missing colors may vary from star to star. Since we know that each chemical element absorbs a characteristic set of very specific colors, by matching these to those that are missing from a star's spectrum, we can determine exactly which elements are present in the star's atmosphere.

In the 1920s, when astronomers began to look at the spectra of stars in other galaxies, they found something most peculiar: there were the same characteristic sets of missing colors as for stars in our own galaxy, but they were all shifted by the same relative amount toward the red end of the spectrum. To understand the implications of this, we must first understand the Doppler effect. As we have seen, visible light consists of fluctuations, or waves, in the electromagnetic field. The wavelength (or distance from one wave crest to the next) of light is extremely small, ranging from four to seven ten-millionths of a meter. The different wavelengths of light are what the human eye sees as different colors, with the longest wavelengths appearing at the red end of the spectrum and the shortest wavelengths at the blue end. Now imagine a source of light at a constant distance from us, such as a star, emitting waves of light at a constant wavelength. Obviously the wavelength of the waves we receive will be the same as the wavelength at which they are emitted (the gravitational field of the galaxy will not be large enough to have a significant effect). Suppose now that the source starts moving toward us.
When the source emits the next wave crest it will be nearer to us, so the distance between wave crests will be smaller than when the star was stationary. This means that the wavelength of the waves we receive is shorter than when the star was stationary. Correspondingly, if the source is moving away from us, the wavelength of the waves we receive will be longer. In the case of light, therefore, this means that stars moving away from us will have their spectra shifted toward the red end of the spectrum (red-shifted) and those moving toward us will have their spectra blue-shifted. This relationship between wavelength and speed, which is called the Doppler effect, is an everyday experience. Listen to a car passing on the road: as the car is approaching, its engine sounds at a higher pitch (corresponding to a shorter wavelength and higher frequency of sound waves), and when it passes and goes away, it sounds at a lower pitch. The behavior of light or radio waves is similar. Indeed, the police make use of the Doppler effect to measure the speed of cars by measuring the wavelength of pulses of radio waves reflected off them.

In the years following his proof of the existence of other galaxies, Hubble spent his time cataloging their distances and observing their spectra. At that time most people expected the galaxies to be moving around quite randomly, and so expected to find as many blue-shifted spectra as red-shifted ones. It was quite a surprise, therefore, to find that most galaxies appeared red-shifted: nearly all were moving away from us! More surprising still was the finding that Hubble published in 1929: even the size of a galaxy's red shift is not random, but is directly proportional to the galaxy's distance from us. Or, in other words, the farther a galaxy is, the faster it is moving away! And that meant that the universe could not be static, as everyone previously had thought, but is in fact expanding; the distance between the different galaxies is growing all the time.

The discovery that the universe is expanding was one of the great intellectual revolutions of the twentieth century. With hindsight, it is easy to wonder why no one had thought of it before. Newton, and others, should have realized that a static universe would soon start to contract under the influence of gravity. But suppose instead that the universe is expanding. If it was expanding fairly slowly, the force of gravity would cause it eventually to stop expanding and then to start contracting. However, if it was expanding at more than a certain critical rate, gravity would never be strong enough to stop it, and the universe would continue to expand forever. This is a bit like what happens when one fires a rocket upward from the surface of the earth. If it has a fairly low speed, gravity will eventually stop the rocket and it will start falling back. On the other hand, if the rocket has more than a certain critical speed (about seven miles per second), gravity will not be strong enough to pull it back, so it will keep going away from the earth forever. This behavior of the universe could have been predicted from Newton's theory of gravity at any time in the nineteenth, the eighteenth, or even the late seventeenth century. Yet so strong was the belief in a static universe that it persisted into the early twentieth century. Even Einstein, when he formulated the general theory of relativity in 1915, was so sure that the universe had to be static that he modified his theory to make this possible, introducing a so-called cosmological constant into his equations.
Einstein introduced anew “antigravity” force, which, unlike other forces, did not comefrom any particular source but was built into the very fabric ofspace-time. He claimed that space-time had an inbuilt tendencyto expand, and this could be made to balance exactly theattraction of all the matter in the universe, so that a staticuniverse would result. Only one man, it seems, was willing totake general relativity at face value, and while Einstein andother physicists were looking for ways of avoiding generalrelativity’s prediction of a nonstatic universe, the Russianphysicist and mathematician Alexander Friedmann instead setabout explaining it. Friedmann made two very simple assumptions about theuniverse: that the universe looks identical in whichever directionwe look, and that this would also be true if we were observingthe universe from anywhere else. From these two ideas alone,Friedmann showed that we should not expect the universe tobe static. In fact, in 1922, several years before Edwin Hubble’sdiscovery, Friedmann predicted exactly what Hubble found! The assumption that the universe looks the same in everydirection is clearly not true in reality. For example, as we haveseen, the other stars in our galaxy form a distinct band of lightacross the night sky, called the Milky Way. But if we look atdistant galaxies, there seems to be more or less the samenumber of them. So the universe does seem to be roughly thesame in every direction, provided one views it on a large scalecompared to the distance between galaxies, and ignores thedifferences on small scales. For a long time, this was sufficientjustification for Friedmann’s assumption - as a roughapproximation to the real universe. But more recently a luckyaccident uncovered the fact that Friedmann’s assumption is infact a remarkably accurate description of our universe. In 1965 two American physicists at the Bell TelephoneLaboratories in New Jersey, Arno Penzias and Robert Wilson,were testing a very sensitive microwave detector. (Microwavesare just like light waves, but with a wavelength of around acentimeter.) Penzias and Wilson were worried when they foundthat their detector was picking up more noise than it ought to. The noise did not appear to be coming from any particulardirection. First they discovered bird droppings in their detectorand checked for other possible malfunctions, but soon ruledthese out. They knew that any noise from within theatmosphere would be stronger when the detector was notpointing straight up than when it was, because light rays travelthrough much more atmosphere when received from near thehorizon than when received from directly overhead. The extranoise was the same whichever direction the detector waspointed, so it must come from outside the atmosphere. It wasalso the same day and night and throughout the year, eventhough the earth was rotating on its axis and orbiting aroundthe sun. This showed that the radiation must come frombeyond the Solar System, and even from beyond the galaxy, asotherwise it would vary as the movement of earth pointed thedetector in different directions. In fact, we know that the radiation must have traveled to usacross most of the observable universe, and since it appears tobe the same in different directions, the universe must also bethe same in every direction, if only on a large scale. 
We nowknow that whichever direction we look, this noise never variesby more than a tiny fraction: so Penzias and Wilson hadunwittingly stumbled across a remarkably accurate confirmationof Friedmann’s first assumption. However, be-cause the universeis not exactly the same in every direction, but only on averageon a large scale, the microwaves cannot be exactly the same inevery direction either. There have to be slight variationsbetween different directions. These were first detected in 1992by the Cosmic Background Explorer satellite, or COBE, at alevel of about one part in a hundred thousand. Small thoughthese variations are, they are very important, as will beexplained in Chapter 8. At roughly the same time as Penzias and Wilson wereinvestigating noise in their detector, two American physicists atnearby Princeton University, Bob Dicke and Jim Peebles, werealso taking an interest in microwaves. They were working on asuggestion, made by George Gamow (once a student ofAlexander Friedmann), that the early universe should have beenvery hot and dense, glowing white hot. Dicke and Peeblesargued that we should still be able to see the glow of the earlyuniverse, because light from very distant parts of it would onlyjust be reaching us now. However, the expansion of theuniverse meant that this light should be so greatly red-shiftedthat it would appear to us now as microwave radiation. Dickeand Peebles were preparing to look for this radiation whenPenzias and Wilson heard about their work and realized thatthey had already found it. For this, Penzias and Wilson wereawarded the Nobel Prize in 1978 (which seems a bit hard onDicke and Peebles, not to mention Gamow!). Now at first sight, all this evidence that the universe looksthe same whichever direction we look in might seem to suggestthere is some-thing special about our place in the universe. Inparticular, it might seem that if we observe all other galaxies tobe moving away from us, then we must be at the center ofthe universe. There is, however, an alternate explanation: theuniverse might look the same in every direction as seen fromany other galaxy too. This, as we have seen, was Friedmann’ssecond assumption. We have no scientific evidence for, oragainst, this assumption. We believe it only on grounds ofmodesty: it would be most remarkable if the universe lookedthe same in every direction around us, but not around otherpoints in the universe! In Friedmann’s model, all the galaxiesare moving directly away from each other. The situation israther like a balloon with a number of spots painted on itbeing steadily blown up. As the balloon expands, the distancebetween any two spots increases, but there is no spot that canbe said to be the center of the expansion. Moreover, thefarther apart the spots are, the faster they will be movingapart. Similarly, in Friedmann’s model the speed at which anytwo galaxies are moving apart is proportional to the distancebetween them. So it predicted that the red shift of a galaxyshould be directly proportional to its distance from us, exactlyas Hubble found. Despite the success of his model and hisprediction of Hubble’s observations, Friedmann’s work remainedlargely unknown in the West until similar models werediscovered in 1935 by the American physicist HowardRobertson and the British mathematician Arthur Walker, inresponse to Hubble’s discovery of the uniform expansion of theuniverse. Although Friedmann found only one, there are in fact threedifferent kinds of models that obey Friedmann’s twofundamental assumptions. 
In the first kind (which Friedmannfound) the universe is expanding sufficiently slowly that thegravitational attraction between the different galaxies causes theexpansion to slow down and eventually to stop. The galaxiesthen start to move toward each other and the universecontracts. Fig. 3.2 shows how the distance between twoneighboring galaxies changes as time increases. It starts at zero,increases to a maximum, and then decreases to zero again. Inthe second kind of solution, the universe is expanding sorapidly that the gravitational attraction can never stop it, thoughit does slow it down a bit. Fig. 3.3 Shows the Separationbetween neighboring galaxies in this model. It starts at zero andeventually the galaxies are moving apart at a steady speed. Finally, there is a third kind of solution, in which the universeis expanding only just fast enough to avoid recollapse. In thiscase the separation, shown in Fig. 3.4, also starts at zero andincreases forever. However, the speed at which the galaxies aremoving apart gets smaller and smaller, although it never quitereaches zero. A remarkable feature of the first kind of Friedmann model isthat in it the universe is not infinite in space, but neither doesspace have any boundary. Gravity is so strong that space isbent round onto itself, making it rather like the surface of theearth. If one keeps traveling in a certain direction on thesurface of the earth, one never comes up against animpassable barrier or falls over the edge, but eventually comesback to where one started. In the first kind of Friedmann model, space is just like this,but with three dimensions instead of two for the earth’ssurface. The fourth dimension, time, is also finite in extent, butit is like a line with two ends or boundaries, a beginning andan end. We shall see later that when one combines generalrelativity with the uncertainty principle of quantum mechanics, itis possible for both space and time to be finite without anyedges or boundaries. The idea that one could go right round the universe andend up where one started makes good science fiction, but itdoesn’t have much practical significance, because it can beshown that the universe would recollapse to zero size beforeone could get round. You would need to travel faster than lightin order to end up where you started before the universecame to an end - and that is not allowed! In the first kind of Friedmann model, which expands andrecollapses, space is bent in on itself, like the surface of theearth. It is therefore finite in extent. In the second kind ofmodel, which expands forever, space is bent the other way, likethe surface of a saddle. So in this case space is infinite. Finally,in the third kind of Friedmann model, with just the critical rateof expansion, space is flat (and therefore is also infinite). But which Friedmann model describes our universe? Will theuniverse eventually stop expanding and start contracting, or willit expand forever? To answer this question we need to knowthe present rate of expansion of the universe and its presentaverage density. If the density is less than a certain criticalvalue, determined by the rate of expansion, the gravitationalattraction will be too weak to halt the expansion. If the densityis greater than the critical value, gravity will stop the expansionat some time in the future and cause the universe torecollapse. We can determine the present rate of expansion bymeasuring the velocities at which other galaxies are movingaway from us, using the Doppler effect. 
This can be done veryaccurately. However, the distances to the galaxies are not verywell known because we can only measure them indirectly. Soall we know is that the universe is expanding by between 5percent and 10 percent every thousand million years. However,our uncertainty about the present average density of theuniverse is even greater. If we add up the masses of all thestars that we can see in our galaxy and other galaxies, thetotal is less than one hundredth of the amount required to haltthe expansion of the universe, even for the lowest estimate ofthe rate of expansion. Our galaxy and other galaxies, however,must contain a large amount of “dark matter” that we cannotsee directly, but which we know must be there because of theinfluence of its gravitational attraction on the orbits of stars inthe galaxies. Moreover, most galaxies are found in clusters, andwe can similarly infer the presence of yet more dark matter inbetween the galaxies in these clusters by its effect on themotion of the galaxies. When we add up all this dark matter,we still get only about one tenth of the amount required tohalt the expansion. However, we cannot exclude the possibilitythat there might be some other form of matter, distributedalmost uniformly throughout the universe, that we have not yetdetected and that might still raise the average density of theuniverse up to the critical value needed to halt the expansion. The present evidence therefore suggests that the universe willprobably expand forever, but all we can really be sure of isthat even if the universe is going to recollapse, it won’t do sofor at least another ten thousand million years, since it hasalready been expanding for at least that long. This should notunduly worry us: by that time, unless we have colonizedbeyond the Solar System, mankind will long since have diedout, extinguished along with our sun! All of the Friedmann solutions have the feature that at sometime in the past (between ten and twenty thousand millionyears ago) the distance between neighboring galaxies must havebeen zero. At that time, which we call the big bang, the densityof the universe and the curvature of space-time would havebeen infinite. Because mathematics cannot really handle infinitenumbers, this means that the general theory of relativity (onwhich Friedmann’s solutions are based) predicts that there is apoint in the universe where the theory itself breaks down. Sucha point is an example of what mathematicians call a singularity. In fact, all our theories of science are formulated on theassumption that space-time is smooth and nearly fiat, so theybreak down at the big bang singularity, where the curvature ofspace-time is infinite. This means that even if there were eventsbefore the big bang, one could not use them to determinewhat would happen afterward, because predictability wouldbreak down at the big bang. Correspondingly, if, as is the case, we know only what hashappened since the big bang, we could not determine whathappened beforehand. As far as we are concerned, eventsbefore the big bang can have no consequences, so they shouldnot form part of a scientific model of the universe. We shouldtherefore cut them out of the model and say that time had abeginning at the big bang. Many people do not like the idea that time has a beginning,probably because it smacks of divine intervention. (The CatholicChurch, on the other hand, seized on the big bang model andin 1951officially pronounced it to be in accordance with theBible.) 
There were therefore a number of attempts to avoid theconclusion that there had been a big bang. The proposal thatgained widest support was called the steady state theory. It wassuggested in 1948 by two refugees from Nazi-occupied Austria,Hermann Bondi and Thomas Gold, together with a Briton, FredHoyle, who had worked with them on the development ofradar during the war. The idea was that as the galaxies movedaway from each other, new galaxies were continually forming inthe gaps in between, from new matter that was beingcontinually created. The universe would therefore look roughlythe same at all times as well as at all points of space. Thesteady state theory required a modification of general relativityto allow for the continual creation of matter, but the rate thatwas involved was so low (about one particle per cubic kilometerper year) that it was not in conflict with experiment. Thetheory was a good scientific theory, in the sense described inChapter 1: it was simple and it made definite predictions thatcould be tested by observation. One of these predictions wasthat the number of galaxies or similar objects in any givenvolume of space should be the same wherever and wheneverwe look in the universe. In the late 1950s and early 1960s asurvey of sources of radio waves from outer space was carriedout at Cambridge by a group of astronomers led by MartinRyle (who had also worked with Bondi, Gold, and Hoyle onradar during the war). The Cambridge group showed that mostof these radio sources must lie outside our galaxy (indeedmany of them could be identified with other galaxies) and alsothat there were many more weak sources than strong ones. They interpreted the weak sources as being the more distantones, and the stronger ones as being nearer. Then thereappeared to be less common sources per unit volume of spacefor the nearby sources than for the distant ones. This couldmean that we are at the center of a great region in theuniverse in which the sources are fewer than elsewhere. Alternatively, it could mean that the sources were morenumerous in the past, at the time that the radio waves left ontheir journey to us, than they are now. Either explanationcontradicted the predictions of the steady state theory. Moreover, the discovery of the microwave radiation by Penziasand Wilson in 1965 also indicated that the universe must havebeen much denser in the past. The steady state theorytherefore had to be abandoned. Another attempt to avoid the conclusion that there musthave been a big bang, and therefore a beginning of time, wasmade by two Russian scientists, Evgenii Lifshitz and IsaacKhalatnikov, in 1963. They suggested that the big bang mightbe a peculiarity of Friedmann’s models alone, which after allwere only approximations to the real universe. Perhaps, of allthe models that were roughly like the real universe, onlyFriedmann’s would contain a big bang singularity. InFriedmann’s models, the galaxies are all moving directly awayfrom each other - so it is not surprising that at some time inthe past they were all at the same place. In the real universe,however, the galaxies are not just moving directly away fromeach other - they also have small sideways velocities. So inreality they need never have been all at exactly the same place,only very close together. 
Perhaps then the current expanding universe resulted not from a big bang singularity, but from an earlier contracting phase; as the universe had collapsed the particles in it might not have all collided, but had flown past and then away from each other, producing the present expansion of the universe. Lifshitz and Khalatnikov therefore studied models of the universe that were roughly like Friedmann's models but took account of the irregularities and random velocities of galaxies in the real universe. They showed that such models could start with a big bang, even though the galaxies were no longer always moving directly away from each other, but they claimed that this was still only possible in certain exceptional models in which the galaxies were all moving in just the right way. They argued that since there seemed to be infinitely more Friedmann-like models without a big bang singularity than there were with one, we should conclude that there had not in reality been a big bang. They later realized, however, that there was a much more general class of Friedmann-like models that did have singularities, and in which the galaxies did not have to be moving in any special way. They therefore withdrew their claim in 1970.

The work of Lifshitz and Khalatnikov was valuable because it showed that the universe could have had a singularity, a big bang, if the general theory of relativity was correct. However, it did not resolve the crucial question: Does general relativity predict that our universe should have had a big bang, a beginning of time? The answer to this came out of a completely different approach introduced by a British mathematician and physicist, Roger Penrose, in 1965. Using the way light cones behave in general relativity, together with the fact that gravity is always attractive, he showed that a star collapsing under its own gravity is trapped in a region whose surface eventually shrinks to zero size. And, since the surface of the region shrinks to zero, so too must its volume. All the matter in the star will be compressed into a region of zero volume, so the density of matter and the curvature of space-time become infinite. In other words, one has a singularity contained within a region of space-time known as a black hole.

At first sight, Penrose's result applied only to stars; it didn't have anything to say about the question of whether the entire universe had a big bang singularity in its past. However, at the time that Penrose produced his theorem, I was a research student desperately looking for a problem with which to complete my Ph.D. thesis. Two years before, I had been diagnosed as suffering from ALS, commonly known as Lou Gehrig's disease, or motor neuron disease, and given to understand that I had only one or two more years to live. In these circumstances there had not seemed much point in working on my Ph.D. - I did not expect to survive that long. Yet two years had gone by and I was not that much worse. In fact, things were going rather well for me and I had gotten engaged to a very nice girl, Jane Wilde. But in order to get married, I needed a job, and in order to get a job, I needed a Ph.D.

In 1965 I read about Penrose's theorem that any body undergoing gravitational collapse must eventually form a singularity. I soon realized that if one reversed the direction of time in Penrose's theorem, so that the collapse became an expansion, the conditions of his theorem would still hold, provided the universe were roughly like a Friedmann model on large scales at the present time.
Penrose’s theorem had shownthat any collapsing star must end in a singularity; thetime-reversed argument showed that any Friedmann-likeexpanding universe must have begun with a singularity. Fortechnical reasons, Penrose’s theorem required that the universebe infinite in space. So I could in fact, use it to prove thatthere should be a singularity only if the universe was expandingfast enough to avoid collapsing again (since only thoseFriedmann models were infinite in space). During the next few years I developed new mathematicaltechniques to remove this and other technical conditions fromthe theorems that proved that singularities must occur. Thefinal result was a joint paper by Penrose and myself in 1970,which at last proved that there must have been a big bangsingularity provided only that general relativity is correct andthe universe contains as much matter as we observe. Therewas a lot of opposition to our work, partly from the Russiansbecause of their Marxist belief in scientific determinism, andpartly from people who felt that the whole idea of singularitieswas repugnant and spoiled the beauty of Einstein’s theory. However, one cannot really argue with a mathematical theorem. So in the end our work became generally accepted andnowadays nearly everyone assumes that the universe startedwith a big bang singularity. It is perhaps ironic that, havingchanged my mind, I am now trying to convince other physiciststhat there was in fact no singularity at the beginning of theuniverse - as we shall see later, it can disappear once quantumeffects are taken into account. We have seen in this chapter how, in less than half acentury, man’s view of the universe formed over millennia hasbeen transformed. Hubble’s discovery that the universe wasexpanding, and the realization of the insignificance of our ownplanet in the vastness of the universe, were just the startingpoint. As experimental and theoretical evidence mounted, itbecame more and more clear that the universe must have hada beginning in time, until in 1970 this was finally proved byPenrose and myself, on the basis of Einstein’s general theory ofrelativity. That proof showed that general relativity is only anincomplete theory: it cannot tell us how the universe startedoff, because it predicts that all physical theories, including itself,break down at the beginning of the universe. However, generalrelativity claims to be only a partial theory, so what thesingularity theorems really show is that there must have been atime in the very early universe when the universe was so smallthat one could no longer ignore the small-scale effects of theother great partial theory of the twentieth century, quantummechanics. At the start of the 1970s, then, we were forced toturn our search for an understanding of the universe from ourtheory of the extraordinarily vast to our theory of theextraordinarily tiny. That theory, quantum mechanics, will bedescribed next, before we turn to the efforts to combine thetwo partial theories into a single quantum theory of gravity. CHAPTER 4 THE UNCERTAINTY PRINCIPLE The success of scientific theories, particularly Newton’s theoryof gravity, led the French scientist the Marquis de Laplace atthe beginning of the nineteenth century to argue that theuniverse was completely deterministic. Laplace suggested thatthere should be a set of scientific laws that would allow us topredict everything that would happen in the universe, if only weknew the complete state of the universe at one time. 
Forexample, if we knew the positions and speeds of the sun andthe planets at one time, then we could use Newton’s laws tocalculate the state of the Solar System at any other time. Determinism seems fairly obvious in this case, but Laplace wentfurther to assume that there were similar laws governingeverything else, including human behavior. The doctrine of scientific determinism was strongly resistedby many people, who felt that it infringed God’s freedom tointervene in the world, but it remained the standard assumptionof science until the early years of this century. One of the firstindications that this belief would have to be abandoned camewhen calculations by the British scientists Lord Rayleigh and SirJames Jeans suggested that a hot object, or body, such as astar, must radiate energy at an infinite rate. According to thelaws we believed at the time, a hot body ought to give offelectromagnetic waves (such as radio waves, visible light, or Xrays) equally at all frequencies. For example, a hot body shouldradiate the same amount of energy in waves with frequenciesbetween one and two million million waves a second as inwaves with frequencies between two and three million millionwaves a second. Now since the number of waves a second isunlimited, this would mean that the total energy radiated wouldbe infinite. In order to avoid this obviously ridiculous result, the Germanscientist Max Planck suggested in 1900 that light, X rays, andother waves could not be emitted at an arbitrary rate, but onlyin certain packets that he called quanta. Moreover, eachquantum had a certain amount of energy that was greater thehigher the frequency of the waves, so at a high enoughfrequency the emission of a single quantum would require moreenergy than was available. Thus the radiation at highfrequencies would be reduced, and so the rate at which thebody lost energy would be finite. The quantum hypothesis explained the observed rate ofemission of radiation from hot bodies very well, but itsimplications for determinism were not realized until 1926, whenanother German scientist, Werner Heisenberg, formulated hisfamous uncertainty principle. In order to predict the futureposition and velocity of a particle, one has to be able tomeasure its present position and velocity accurately. Theobvious way to do this is to shine light on the particle. Someof the waves of light will be scattered by the particle and thiswill indicate its position. However, one will not be able todetermine the position of the particle more accurately than thedistance between the wave crests of light, so one needs to uselight of a short wavelength in order to measure the position ofthe particle precisely. Now, by Planck’s quantum hypothesis, onecannot use an arbitrarily small amount of light; one has to useat least one quantum. This quantum will disturb the particleand change its velocity in a way that cannot be predicted. moreover, the more accurately one measures the position, theshorter the wavelength of the light that one needs and hencethe higher the energy of a single quantum. So the velocity ofthe particle will be disturbed by a larger amount. In otherwords, the more accurately you try to measure the position ofthe particle, the less accurately you can measure its speed, andvice versa. Heisenberg showed that the uncertainty in theposition of the particle times the uncertainty in its velocity timesthe mass of the particle can never be smaller than a certainquantity, which is known as Planck’s constant. 
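To get a feel for the sizes involved, one can turn the relation just quoted into an order-of-magnitude estimate. The sketch below uses the reduced Planck constant and asks how uncertain an electron's velocity must be once its position is pinned down to roughly the size of an atom; the numerical values are standard constants, not figures given in the text.

    hbar = 1.055e-34          # reduced Planck constant, J*s
    m_electron = 9.11e-31     # electron mass, kg

    def min_velocity_uncertainty(mass_kg, position_uncertainty_m):
        """Order-of-magnitude form of Heisenberg's relation:
        (uncertainty in position) * (uncertainty in velocity) * mass >= about hbar."""
        return hbar / (mass_kg * position_uncertainty_m)

    # An electron localized to about the size of an atom (1e-10 m):
    print(min_velocity_uncertainty(m_electron, 1e-10))   # roughly a million meters per second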
Moreover, thislimit does not depend on the way in which one tries tomeasure the position or velocity of the particle, or on the typeof particle: Heisenberg’s uncertainty principle is a fundamental,inescapable property of the world. The uncertainty principle had profound implications for theway in which we view the world. Even after more than seventyyears they have not been fully appreciated by manyphilosophers, and are still the subject of much controversy. Theuncertainty principle signaled an end to Laplace’s dream of atheory of science, a model of the universe that would becompletely deterministic: one certainly cannot predict futureevents exactly if one cannot even measure the present state ofthe universe precisely! We could still imagine that there is a setof laws that determine events completely for some supernaturalbeing, who could observe the present state of the universewithout disturbing it. However, such models of the universe arenot of much interest to us ordinary mortals. It seems better toemploy the principle of economy known as Occam’s razor andcut out all the features of the theory that cannot be observed. This approach led Heisenberg, Erwin Schrodinger, and PaulDirac in the 1920s to reformulate mechanics into a new theorycalled quantum mechanics, based on the uncertainty principle. In this theory particles no longer had separate, well-definedpositions and velocities that could not be observed, Instead,they had a quantum state, which was a combination of positionand velocity. In general, quantum mechanics does not predict a singledefinite result for an observation. Instead, it predicts a numberof different possible outcomes and tells us how likely each ofthese is. That is to say, if one made the same measurementon a large number of similar systems, each of which started offin the same way, one would find that the result of themeasurement would be A in a certain number of cases, B in adifferent number, and so on. One could predict theapproximate number of times that the result would be A or B,but one could not predict the specific result of an individualmeasurement. Quantum mechanics therefore introduces anunavoidable element of unpredictability or randomness intoscience. Einstein objected to this very strongly, despite theimportant role he had played in the development of these ideas. Einstein was awarded the Nobel Prize for his contribution toquantum theory. Nevertheless, Einstein never accepted that theuniverse was governed by chance; his feelings were summedup in his famous statement “God does not play dice.” Mostother scientists, however, were willing to accept quantummechanics because it agreed perfectly with experiment. Indeed,it has been an outstandingly successful theory and underliesnearly all of modern science and technology. It governs thebehavior of transistors and integrated circuits, which are theessential components of electronic devices such as televisionsand computers, and is also the basis of modern chemistry andbiology. The only areas of physical science into which quantummechanics has not yet been properly incorporated are gravityand the large-scale structure of the universe. Although light is made up of waves, Planck’s quantumhypothesis tells us that in some ways it behaves as if it werecomposed of particles: it can be emitted or absorbed only inpackets, or quanta. 
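Planck's hypothesis can be put in one line: the energy of a single quantum is Planck's constant times the frequency of the wave, E = hf. As a rough illustration (using standard constants rather than figures from the text), here is the energy carried by one quantum of visible light:

    h = 6.626e-34    # Planck's constant, J*s
    c = 3.0e8        # speed of light, m/s
    eV = 1.602e-19   # one electron volt, in joules

    wavelength = 5e-7                 # green light, about five ten-millionths of a meter
    frequency = c / wavelength        # about 6e14 waves per second
    energy_joules = h * frequency     # E = h * f, the energy of a single quantum
    print(energy_joules / eV)         # roughly 2.5 electron volts per photon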
Equally, Heisenberg's uncertainty principle implies that particles behave in some respects like waves: they do not have a definite position but are "smeared out" with a certain probability distribution. The theory of quantum mechanics is based on an entirely new type of mathematics that no longer describes the real world in terms of particles and waves; it is only the observations of the world that may be described in those terms.

There is thus a duality between waves and particles in quantum mechanics: for some purposes it is helpful to think of particles as waves and for other purposes it is better to think of waves as particles. An important consequence of this is that one can observe what is called interference between two sets of waves or particles. That is to say, the crests of one set of waves may coincide with the troughs of the other set. The two sets of waves then cancel each other out rather than adding up to a stronger wave as one might expect (Fig. 4.1). A familiar example of interference in the case of light is the colors that are often seen in soap bubbles. These are caused by reflection of light from the two sides of the thin film of water forming the bubble. White light consists of light waves of all different wavelengths, or colors. For certain wavelengths the crests of the waves reflected from one side of the soap film coincide with the troughs reflected from the other side. The colors corresponding to these wavelengths are absent from the reflected light, which therefore appears to be colored.

Interference can also occur for particles, because of the duality introduced by quantum mechanics. A famous example is the so-called two-slit experiment (Fig. 4.2). Consider a partition with two narrow parallel slits in it. On one side of the partition one places a source of light of a particular color (that is, of a particular wavelength). Most of the light will hit the partition, but a small amount will go through the slits. Now suppose one places a screen on the far side of the partition from the light. Any point on the screen will receive waves from the two slits. However, in general, the distance the light has to travel from the source to the screen via the two slits will be different. This will mean that the waves from the slits will not be in phase with each other when they arrive at the screen: in some places the waves will cancel each other out, and in others they will reinforce each other. The result is a characteristic pattern of light and dark fringes.

The remarkable thing is that one gets exactly the same kind of fringes if one replaces the source of light by a source of particles such as electrons with a definite speed (this means that the corresponding waves have a definite length). It seems all the more peculiar because if one only has one slit, one does not get any fringes, just a uniform distribution of electrons across the screen. One might therefore think that opening another slit would just increase the number of electrons hitting each point of the screen, but, because of interference, it actually decreases it in some places. If electrons are sent through the slits one at a time, one would expect each to pass through one slit or the other, and so behave just as if the slit it passed through were the only one there - giving a uniform distribution on the screen. In reality, however, even when the electrons are sent one at a time, the fringes still appear. Each electron, therefore, must be passing through both slits at the same time!
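The light-and-dark fringes can be reproduced with a few lines of arithmetic: the brightness at a point on the screen depends only on the difference in path length from the two slits, measured in wavelengths. The sketch below uses hypothetical values for the slit spacing, wavelength, and screen distance, and ignores the finite width of each slit.

    import math

    wavelength = 5e-7       # meters (visible light) - illustrative value
    slit_spacing = 1e-5     # meters between the two slits - illustrative value
    screen_distance = 1.0   # meters from the slits to the screen

    def relative_intensity(y):
        """Brightness at height y on the screen, from adding the two waves.
        For small angles the path difference is about slit_spacing * y / screen_distance."""
        path_difference = slit_spacing * y / screen_distance
        phase = math.pi * path_difference / wavelength
        return math.cos(phase) ** 2   # 1 where crests meet crests, 0 where a crest meets a trough

    # With these numbers the bright and dark fringes alternate every 25 millimeters.
    for y_mm in range(0, 201, 25):
        print(y_mm, round(relative_intensity(y_mm / 1000.0), 2))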
The phenomenon of interference between particles has been crucial to our understanding of the structure of atoms, the basic units of chemistry and biology and the building blocks out of which we, and everything around us, are made. At the beginning of this century it was thought that atoms were rather like the planets orbiting the sun, with electrons (particles of negative electricity) orbiting around a central nucleus, which carried positive electricity. The attraction between the positive and negative electricity was supposed to keep the electrons in their orbits in the same way that the gravitational attraction between the sun and the planets keeps the planets in their orbits. The trouble with this was that the laws of mechanics and electricity, before quantum mechanics, predicted that the electrons would lose energy and so spiral inward until they collided with the nucleus. This would mean that the atom, and indeed all matter, should rapidly collapse to a state of very high density. A partial solution to this problem was found by the Danish scientist Niels Bohr in 1913. He suggested that maybe the electrons were not able to orbit at just any distance from the central nucleus but only at certain specified distances. If one also supposed that only one or two electrons could orbit at any one of these distances, this would solve the problem of the collapse of the atom, because the electrons could not spiral in any farther than to fill up the orbits with the least distances and energies.

This model explained quite well the structure of the simplest atom, hydrogen, which has only one electron orbiting around the nucleus. But it was not clear how one ought to extend it to more complicated atoms. Moreover, the idea of a limited set of allowed orbits seemed very arbitrary. The new theory of quantum mechanics resolved this difficulty. It revealed that an electron orbiting around the nucleus could be thought of as a wave, with a wavelength that depended on its velocity. For certain orbits, the length of the orbit would correspond to a whole number (as opposed to a fractional number) of wavelengths of the electron. For these orbits the wave crest would be in the same position each time round, so the waves would add up: these orbits would correspond to Bohr's allowed orbits. However, for orbits whose lengths were not a whole number of wavelengths, each wave crest would eventually be canceled out by a trough as the electrons went round; these orbits would not be allowed.

A nice way of visualizing the wave/particle duality is the so-called sum over histories introduced by the American scientist Richard Feynman. In this approach the particle is not supposed to have a single history or path in space-time, as it would in a classical, nonquantum theory. Instead it is supposed to go from A to B by every possible path. With each path there are associated a couple of numbers: one represents the size of a wave and the other represents the position in the cycle (i.e., whether it is at a crest or a trough). The probability of going from A to B is found by adding up the waves for all the paths. In general, if one compares a set of neighboring paths, the phases or positions in the cycle will differ greatly. This means that the waves associated with these paths will almost exactly cancel each other out. However, for some sets of neighboring paths the phase will not vary much between paths. The waves for these paths will not cancel out. Such paths correspond to Bohr's allowed orbits.
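The standing-wave condition described above (a whole number of electron wavelengths fitting around the orbit) can be combined with the electrical attraction of the nucleus to give the allowed orbits of hydrogen explicitly. The following sketch reproduces the standard Bohr radii and energy levels from textbook constants; none of the numerical values appear in the text itself, and real multi-electron atoms require the full quantum mechanics.

    import math

    hbar = 1.055e-34     # reduced Planck constant, J*s
    m_e = 9.109e-31      # electron mass, kg
    e = 1.602e-19        # electron charge, C
    eps0 = 8.854e-12     # permittivity of free space
    eV = 1.602e-19       # joules per electron volt

    # Requiring n wavelengths around the orbit (n * lambda = 2 * pi * r, with lambda = h / (m * v)),
    # together with the Coulomb force balance, gives the allowed radii and energies:
    def bohr_radius(n):
        return n ** 2 * 4 * math.pi * eps0 * hbar ** 2 / (m_e * e ** 2)

    def bohr_energy_eV(n):
        return -m_e * e ** 4 / (2 * (4 * math.pi * eps0) ** 2 * hbar ** 2 * n ** 2) / eV

    for n in (1, 2, 3):
        print(n, bohr_radius(n), bohr_energy_eV(n))   # n = 1 gives about 5.3e-11 m and -13.6 eV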
With these ideas, in concrete mathematical form, it wasrelatively straightforward to calculate the allowed orbits in morecomplicated atoms and even in molecules, which are made upof a number of atoms held together by electrons in orbits thatgo round more than one nucleus. Since the structure ofmolecules and their reactions with each other underlie all ofchemistry and biology, quantum mechanics allows us in principleto predict nearly everything we see around us, within the limitsset by the uncertainty principle. (In practice, however, thecalculations required for systems containing more than a fewelectrons are so complicated that we cannot do them.)Einstein’s general theory of relativity seems to govern thelarge-scale structure of the universe. It is what is called aclassical theory; that is, it does not take account of theuncertainty principle of quantum mechanics, as it should forconsistency with other theories. The reason that this does notlead to any discrepancy with observation is that all thegravitational fields that we normally experience are very weak. How-ever, the singularity theorems discussed earlier indicate thatthe gravitational field should get very strong in at least twosituations, black holes and the big bang. In such strong fieldsthe effects of quantum mechanics should be important. Thus, ina sense, classical general relativity, by predicting points ofinfinite density, predicts its own downfall, just as classical (thatis, nonquantum) mechanics predicted its downfall by suggestingthat atoms should collapse to infinite density. We do not yethave a complete consistent theory that unifies general relativityand quantum mechanics, but we do know a number of thefeatures it should have. The consequences that these wouldhave for black holes and the big bang will be described in laterchapters. For the moment, however, we shall turn to the recentattempts to bring together our understanding of the otherforces of nature into a single, unified quantum theory. CHAPTER 5 ELEMENTARY PARTICLES AND THE FORCES OFNATURE Aristotle believed that all the matter in the universe wasmade up of four basic elements - earth, air, fire, and water. These elements were acted on by two forces: gravity, thetendency for earth and water to sink, and levity, the tendencyfor air and fire to rise. This division of the contents of theuniverse into matter and forces is still used today. Aristotlebelieved that matter was continuous, that is, one could divide apiece of matter into smaller and smaller bits without any limit: one never came up against a grain of matter that could not bedivided further. A few Greeks, however, such as Democritus,held that matter was inherently grainy and that everything wasmade up of large numbers of various different kinds of atoms. (The word atom means “indivisible” in Greek.) For centuriesthe argument continued without any real evidence on eitherside, but in 1803 the British chemist and physicist John Daltonpointed out that the fact that chemical compounds alwayscombined in certain proportions could be explained by thegrouping together of atoms to form units called molecules. However, the argument between the two schools of thoughtwas not finally settled in favor of the atomists until the earlyyears of this century. One of the important pieces of physicalevidence was provided by Einstein. 
In a paper written in 1905,a few weeks before the famous paper on special relativity,Einstein pointed out that what was called Brownian motion -the irregular, random motion of small particles of dustsuspended in a liquid - could be explained as the effect ofatoms of the liquid colliding with the dust particles. By this time there were already suspicions that these atomswere not, after all, indivisible. Several years previously a fellowof Trinity College, Cambridge, J. J. Thomson, had demonstratedthe existence of a particle of matter, called the electron, thathad a mass less than one thousandth of that of the lightestatom. He used a setup rather like a modern TV picture tube: a red-hot metal filament gave off the electrons, and becausethese have a negative electric charge, an electric field could beused to accelerate them toward a phosphor-coated screen. When they hit the screen, flashes of light were generated. Soonit was realized that these electrons must be coming from withinthe atoms themselves, and in 1911 the New Zealand physicistErnest Rutherford finally showed that the atoms of matter dohave internal structure: they are made up of an extremely tiny,positively charged nucleus, around which a number of electronsorbit. He deduced this by analyzing the way in whichalpha-particles, which are positively charged particles given offby radioactive atoms, are deflected when they collide withatoms. At first it was thought that the nucleus of the atom wasmade up of electrons and different numbers of a positivelycharged particle called the proton, from the Greek wordmeaning “first,” because it was believed to be the fundamentalunit from which matter was made. However, in 1932 acolleague of Rutherford’s at Cambridge, James Chadwick,discovered that the nucleus contained another particle, called theneutron, which had almost the same mass as a proton but noelectrical charge. Chadwick received the Nobel Prize for hisdiscovery, and was elected Master of Gonville and Caius College,Cambridge (the college of which I am now a fellow). He laterresigned as Master because of disagreements with the Fellows. There had been a bitter dispute in the college ever since agroup of young Fellows returning after the war had votedmany of the old Fellows out of the college offices they had heldfor a long time. This was before my time; I joined the collegein 1965 at the tail end of the bitterness, when similardisagreements forced another Nobel Prize - winning Master, SirNevill Mott, to resign. Up to about thirty years ago, it was thought that protonsand neutrons were “elementary” particles, but experiments inwhich protons were collided with other protons or electrons athigh speeds indicated that they were in fact made up ofsmaller particles. These particles were named quarks by theCaltech physicist Murray Gell-Mann, who won the Nobel Prizein 1969 for his work on them. The origin of the name is anenigmatic quotation from James Joyce: “Three quarks forMuster Mark!” The word quark is supposed to be pronouncedlike quart, but with a k at the end instead of a t, but isusually pronounced to rhyme with lark. There are a number of different varieties of quarks: thereare six “flavors,” which we call up, down, strange, charmed,bottom, and top. The first three flavors had been known sincethe 1960s but the charmed quark was discovered only in 1974,the bottom in 1977, and the top in 1995. Each flavor comes inthree “colors,” red, green, and blue. 
(It should be emphasizedthat these terms are just labels: quarks are much smaller thanthe wavelength of visible light and so do not have any color inthe normal sense. It is just that modern physicists seem tohave more imaginative ways of naming new particles andphenomena - they no longer restrict themselves to Greek!) Aproton or neutron is made up of three quarks, one of eachcolor. A proton contains two up quarks and one down quark;a neutron contains two down and one up. We can createparticles made up of the other quarks (strange, charmed,bottom, and top), but these all have a much greater mass anddecay very rapidly into protons and neutrons. We now know that neither the atoms nor the protons andneutrons within them are indivisible. So the question is: whatare the truly elementary particles, the basic building blocks fromwhich everything is made? Since the wavelength of light ismuch larger than the size of an atom, we cannot hope to“look” at the parts of an atom in the ordinary way. We needto use something with a much smaller wave-length. As we sawin the last chapter, quantum mechanics tells us that all particlesare in fact waves, and that the higher the energy of a particle,the smaller the wavelength of the corresponding wave. So thebest answer we can give to our question depends on how higha particle energy we have at our disposal, because thisdetermines on how small a length scale we can look. Theseparticle energies are usually measured in units called electronvolts. (In Thomson’s experiments with electrons, we saw thathe used an electric field to accelerate the electrons. The energythat an electron gains from an electric field of one volt is whatis known as an electron volt.) In the nineteenth century, whenthe only particle energies that people knew how to use werethe low energies of a few electron volts generated by chemicalreactions such as burning, it was thought that atoms were thesmallest unit. In Rutherford’s experiment, the alpha-particles hadenergies of millions of electron volts. More recently, we havelearned how to use electromagnetic fields to give particlesenergies of at first millions and then thousands of millions ofelectron volts. And so we know that particles that were thoughtto be “elementary” thirty years ago are, in fact, made up ofsmaller particles. May these, as we go to still higher energies, inturn be found to be made from still smaller particles? This iscertainly possible, but we do have some theoretical reasons forbelieving that we have, or are very near to, a knowledge of theultimate building blocks of nature. Using the wave/particle duality discussed in the last chapter,every-thing in the universe, including light and gravity, can bedescribed in terms of particles. These particles have a propertycalled spin. One way of thinking of spin is to imagine theparticles as little tops spinning about an axis. However, this canbe misleading, because quantum mechanics tells us that theparticles do not have any well-defined axis. What the spin of aparticle really tells us is what the particle looks like fromdifferent directions. A particle of spin 0 is like a dot: it looksthe same from every direction (Fig. 5.1-i). On the other hand,a particle of spin 1 is like an arrow: it looks different fromdifferent directions (Fig. 5.1-ii). Only if one turns it round acomplete revolution (360 degrees) does the particle look thesame. A particle of spin 2 is like a double-headed arrow (Fig. 5.1-iii): it looks the same if one turns it round half a revolution(180 degrees). 
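The rule behind these pictures can be made concrete with a short numerical sketch (there is no calculation like this in the text; it is only an illustration, written here in Python using the standard rotation matrices of quantum mechanics). An ordinary vector, the "arrow" of spin 1, returns to itself after a turn of 360 degrees, while a spin-½ object merely changes sign at 360 degrees and needs a full 720 degrees to come back exactly - the point made in the next paragraph.

    import numpy as np

    def rotate_vector_z(theta):
        # Ordinary (spin-1) rotation of an arrow about the z-axis.
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rotate_spinor_z(theta):
        # Spin-1/2 rotation about the z-axis: exp(-i * theta * sigma_z / 2).
        return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

    full_turn = 2 * np.pi
    print(np.allclose(rotate_vector_z(full_turn), np.eye(3)))      # True: spin 1 is back after 360 degrees
    print(np.allclose(rotate_spinor_z(full_turn), -np.eye(2)))     # True: spin 1/2 has only changed sign
    print(np.allclose(rotate_spinor_z(2 * full_turn), np.eye(2)))  # True: spin 1/2 is back after 720 degrees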
Similarly, higher spin particles look the same if one turns them through smaller fractions of a complete revolution. All this seems fairly straightforward, but the remarkable fact is that there are particles that do not look the same if one turns them through just one revolution: you have to turn them through two complete revolutions! Such particles are said to have spin ½.
All the known particles in the universe can be divided into two groups: particles of spin ½, which make up the matter in the universe, and particles of spin 0, 1, and 2, which, as we shall see, give rise to forces between the matter particles. The matter particles obey what is called Pauli's exclusion principle. This was discovered in 1925 by an Austrian physicist, Wolfgang Pauli - for which he received the Nobel Prize in 1945. He was the archetypal theoretical physicist: it was said of him that even his presence in the same town would make experiments go wrong! Pauli's exclusion principle says that two similar particles cannot exist in the same state; that is, they cannot have both the same position and the same velocity, within the limits given by the uncertainty principle. The exclusion principle is crucial because it explains why matter particles do not collapse to a state of very high density under the influence of the forces produced by the particles of spin 0, 1, and 2: if the matter particles have very nearly the same positions, they must have different velocities, which means that they will not stay in the same position for long. If the world had been created without the exclusion principle, quarks would not form separate, well-defined protons and neutrons. Nor would these, together with electrons, form separate, well-defined atoms. They would all collapse to form a roughly uniform, dense "soup."
A proper understanding of the electron and other spin-½ particles did not come until 1928, when a theory was proposed by Paul Dirac, who later was elected to the Lucasian Professorship of Mathematics at Cambridge (the same professorship that Newton had once held and that I now hold). Dirac's theory was the first of its kind that was consistent with both quantum mechanics and the special theory of relativity. It explained mathematically why the electron had spin-½; that is, why it didn't look the same if you turned it through only one complete revolution, but did if you turned it through two revolutions. It also predicted that the electron should have a partner: an anti-electron, or positron. The discovery of the positron in 1932 confirmed Dirac's theory and led to his being awarded the Nobel Prize for physics in 1933. We now know that every particle has an antiparticle, with which it can annihilate. (In the case of the force-carrying particles, the antiparticles are the same as the particles themselves.) There could be whole antiworlds and antipeople made out of antiparticles. However, if you meet your antiself, don't shake hands! You would both vanish in a great flash of light. The question of why there seem to be so many more particles than antiparticles around us is extremely important, and I shall return to it later in the chapter.
In quantum mechanics, the forces or interactions between matter particles are all supposed to be carried by particles of integer spin - 0, 1, or 2. What happens is that a matter particle, such as an electron or a quark, emits a force-carrying particle. The recoil from this emission changes the velocity of the matter particle. The force-carrying particle then collides with another matter particle and is absorbed.
This collision changes the velocity of the second particle, just as if there had been a force between the two matter particles. It is an important property of the force-carrying particles that they do not obey the exclusion principle. This means that there is no limit to the number that can be exchanged, and so they can give rise to a strong force. However, if the force-carrying particles have a high mass, it will be difficult to produce and exchange them over a large distance. So the forces that they carry will have only a short range. On the other hand, if the force-carrying particles have no mass of their own, the forces will be long range. The force-carrying particles exchanged between matter particles are said to be virtual particles because, unlike "real" particles, they cannot be directly detected by a particle detector. We know they exist, however, because they do have a measurable effect: they give rise to forces between matter particles. Particles of spin 0, 1, or 2 do also exist in some circumstances as real particles, when they can be directly detected. They then appear to us as what a classical physicist would call waves, such as waves of light or gravitational waves. They may sometimes be emitted when matter particles interact with each other by exchanging virtual force-carrying particles. (For example, the electric repulsive force between two electrons is due to the exchange of virtual photons, which can never be directly detected; but if one electron moves past another, real photons may be given off, which we detect as light waves.)
Force-carrying particles can be grouped into four categories according to the strength of the force that they carry and the particles with which they interact. It should be emphasized that this division into four classes is man-made; it is convenient for the construction of partial theories, but it may not correspond to anything deeper. Ultimately, most physicists hope to find a unified theory that will explain all four forces as different aspects of a single force. Indeed, many would say this is the prime goal of physics today. Recently, successful attempts have been made to unify three of the four categories of force - and I shall describe these in this chapter. The question of the unification of the remaining category, gravity, we shall leave till later.
The first category is the gravitational force. This force is universal, that is, every particle feels the force of gravity, according to its mass or energy. Gravity is the weakest of the four forces by a long way; it is so weak that we would not notice it at all were it not for two special properties that it has: it can act over large distances, and it is always attractive. This means that the very weak gravitational forces between the individual particles in two large bodies, such as the earth and the sun, can all add up to produce a significant force. The other three forces are either short range, or are sometimes attractive and sometimes repulsive, so they tend to cancel out. In the quantum mechanical way of looking at the gravitational field, the force between two matter particles is pictured as being carried by a particle of spin 2 called the graviton. This has no mass of its own, so the force that it carries is long range. The gravitational force between the sun and the earth is ascribed to the exchange of gravitons between the particles that make up these two bodies. Although the exchanged particles are virtual, they certainly do produce a measurable effect - they make the earth orbit the sun!
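Just how much weaker gravity is can be checked with a one-line calculation. The short Python sketch below is only an illustration; the constants are standard textbook values and do not appear in the book. It compares the electric and gravitational attractions between two electrons (the separation cancels out of the ratio), and the answer, a number of order 10^42, is the figure of "1 with forty-two zeros" quoted for the electromagnetic force in the next paragraph.

    # Ratio of the electric to the gravitational force between two electrons.
    # Both forces fall off as 1/r^2, so the distance cancels in the ratio.
    G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
    e   = 1.602e-19    # electron charge, coulombs
    m_e = 9.109e-31    # electron mass, kg

    ratio = (k_e * e**2) / (G * m_e**2)
    print(f"{ratio:.1e}")   # roughly 4e42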
Real gravitons make up what classical physicists would call gravitational waves, which are very weak - and so difficult to detect that they have not yet been observed.
The next category is the electromagnetic force, which interacts with electrically charged particles like electrons and quarks, but not with uncharged particles such as gravitons. It is much stronger than the gravitational force: the electromagnetic force between two electrons is about a million million million million million million million (1 with forty-two zeros after it) times bigger than the gravitational force. However, there are two kinds of electric charge, positive and negative. The force between two positive charges is repulsive, as is the force between two negative charges, but the force is attractive between a positive and a negative charge. A large body, such as the earth or the sun, contains nearly equal numbers of positive and negative charges. Thus the attractive and repulsive forces between the individual particles nearly cancel each other out, and there is very little net electromagnetic force. However, on the small scales of atoms and molecules, electromagnetic forces dominate. The electromagnetic attraction between negatively charged electrons and positively charged protons in the nucleus causes the electrons to orbit the nucleus of the atom, just as gravitational attraction causes the earth to orbit the sun. The electromagnetic attraction is pictured as being caused by the exchange of large numbers of virtual massless particles of spin 1, called photons. Again, the photons that are exchanged are virtual particles. However, when an electron changes from one allowed orbit to another one nearer to the nucleus, energy is released and a real photon is emitted - which can be observed as visible light by the human eye, if it has the right wavelength, or by a photon detector such as photographic film. Equally, if a real photon collides with an atom, it may move an electron from an orbit nearer the nucleus to one farther away. This uses up the energy of the photon, so it is absorbed.
The third category is called the weak nuclear force, which is responsible for radioactivity and which acts on all matter particles of spin-½, but not on particles of spin 0, 1, or 2, such as photons and gravitons. The weak nuclear force was not well understood until 1967, when Abdus Salam at Imperial College, London, and Steven Weinberg at Harvard both proposed theories that unified this interaction with the electromagnetic force, just as Maxwell had unified electricity and magnetism about a hundred years earlier. They suggested that in addition to the photon, there were three other spin-1 particles, known collectively as massive vector bosons, that carried the weak force. These were called W+ (pronounced W plus), W- (pronounced W minus), and Z0 (pronounced Z naught), and each had a mass of around 100 GeV (GeV stands for gigaelectron-volt, or one thousand million electron volts).
The Weinberg-Salam theory exhibits a property known as spontaneous symmetry breaking. This means that what appear to be a number of completely different particles at low energies are in fact found to be all the same type of particle, only in different states. At high energies all these particles behave similarly. The effect is rather like the behavior of a roulette ball on a roulette wheel. At high energies (when the wheel is spun quickly) the ball behaves in essentially only one way - it rolls round and round.
But as the wheel slows, the energy of the ball decreases, and eventually the ball drops into one of the thirty-seven slots in the wheel. In other words, at low energies there are thirty-seven different states in which the ball can exist. If, for some reason, we could only observe the ball at low energies, we would then think that there were thirty-seven different types of ball!
In the Weinberg-Salam theory, at energies much greater than 100 GeV, the three new particles and the photon would all behave in a similar manner. But at the lower particle energies that occur in most normal situations, this symmetry between the particles would be broken. W+, W-, and Z0 would acquire large masses, making the forces they carry have a very short range. At the time that Salam and Weinberg proposed their theory, few people believed them, and particle accelerators were not powerful enough to reach the energies of 100 GeV required to produce real W+, W-, or Z0 particles. However, over the next ten years or so, the other predictions of the theory at lower energies agreed so well with experiment that, in 1979, Salam and Weinberg were awarded the Nobel Prize for physics, together with Sheldon Glashow, also at Harvard, who had suggested similar unified theories of the electromagnetic and weak nuclear forces. The Nobel committee was spared the embarrassment of having made a mistake by the discovery in 1983 at CERN (European Centre for Nuclear Research) of the three massive partners of the photon, with the correct predicted masses and other properties. Carlo Rubbia, who led the team of several hundred physicists that made the discovery, received the Nobel Prize in 1984, along with Simon van der Meer, the CERN engineer who developed the antimatter storage system employed. (It is very difficult to make a mark in experimental physics these days unless you are already at the top!)
The fourth category is the strong nuclear force, which holds the quarks together in the proton and neutron, and holds the protons and neutrons together in the nucleus of an atom. It is believed that this force is carried by another spin-1 particle, called the gluon, which interacts only with itself and with the quarks. The strong nuclear force has a curious property called confinement: it always binds particles together into combinations that have no color. One cannot have a single quark on its own because it would have a color (red, green, or blue). Instead, a red quark has to be joined to a green and a blue quark by a "string" of gluons (red + green + blue = white). Such a triplet constitutes a proton or a neutron. Another possibility is a pair consisting of a quark and an antiquark (red + antired, or green + antigreen, or blue + antiblue = white). Such combinations make up the particles known as mesons, which are unstable because the quark and antiquark can annihilate each other, producing electrons and other particles. Similarly, confinement prevents one having a single gluon on its own, because gluons also have color. Instead, one has to have a collection of gluons whose colors add up to white. Such a collection forms an unstable particle called a glueball.
The fact that confinement prevents one from observing an isolated quark or gluon might seem to make the whole notion of quarks and gluons as particles somewhat metaphysical. However, there is another property of the strong nuclear force, called asymptotic freedom, that makes the concept of quarks and gluons well defined. At normal energies, the strong nuclear force is indeed strong, and it binds the quarks tightly together.
However, experiments with large particle accelerators indicate that at high energies the strong force becomes much weaker, and the quarks and gluons behave almost like free particles. Fig. 5.2 shows a photograph of a collision between a high-energy proton and antiproton.
The success of the unification of the electromagnetic and weak nuclear forces led to a number of attempts to combine these two forces with the strong nuclear force into what is called a grand unified theory (or GUT). This title is rather an exaggeration: the resultant theories are not all that grand, nor are they fully unified, as they do not include gravity. Nor are they really complete theories, because they contain a number of parameters whose values cannot be predicted from the theory but have to be chosen to fit in with experiment. Nevertheless, they may be a step toward a complete, fully unified theory. The basic idea of GUTs is as follows: as was mentioned above, the strong nuclear force gets weaker at high energies. On the other hand, the electromagnetic and weak forces, which are not asymptotically free, get stronger at high energies. At some very high energy, called the grand unification energy, these three forces would all have the same strength and so could just be different aspects of a single force. The GUTs also predict that at this energy the different spin-½ matter particles, like quarks and electrons, would also all be essentially the same, thus achieving another unification.
The value of the grand unification energy is not very well known, but it would probably have to be at least a thousand million million GeV. The present generation of particle accelerators can collide particles at energies of about one hundred GeV, and machines are planned that would raise this to a few thousand GeV. But a machine that was powerful enough to accelerate particles to the grand unification energy would have to be as big as the Solar System - and would be unlikely to be funded in the present economic climate. Thus it is impossible to test grand unified theories directly in the laboratory. However, just as in the case of the electromagnetic and weak unified theory, there are low-energy consequences of the theory that can be tested.
The most interesting of these is the prediction that protons, which make up much of the mass of ordinary matter, can spontaneously decay into lighter particles such as antielectrons. The reason this is possible is that at the grand unification energy there is no essential difference between a quark and an antielectron. The three quarks inside a proton normally do not have enough energy to change into antielectrons, but very occasionally one of them may acquire sufficient energy to make the transition because the uncertainty principle means that the energy of the quarks inside the proton cannot be fixed exactly. The proton would then decay. The probability of a quark gaining sufficient energy is so low that one is likely to have to wait at least a million million million million million years (1 followed by thirty zeros). This is much longer than the time since the big bang, which is a mere ten thousand million years or so (1 followed by ten zeros). Thus one might think that the possibility of spontaneous proton decay could not be tested experimentally. However, one can increase one's chances of detecting a decay by observing a large amount of matter containing a very large number of protons.
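Watching more protons improves the odds in a very simple way. The Python sketch below is only an order-of-magnitude illustration using the figures quoted in the text: a lifetime of about 10^30 years from the simplest grand unified theory, and a sample of 1 followed by thirty-one zeros protons, the example given in the parenthesis that follows. For observation times far shorter than the lifetime, the expected number of decays is just the number of protons multiplied by the fraction of a lifetime for which they are watched.

    # Expected proton decays in a large sample watched for one year,
    # assuming the mean lifetime suggested by the simplest GUT.
    # For t much smaller than tau, N * (1 - exp(-t / tau)) is about N * t / tau.
    N_protons = 1e31   # protons under observation (the text's example)
    tau_years = 1e30   # assumed mean proton lifetime, in years
    t_years   = 1.0    # observation time, in years

    expected = N_protons * t_years / tau_years
    print(expected)    # about 10 decays expected in the year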
(If, for example,one observed a number of protons equal to 1 followed bythirty-one zeros for a period of one year, one would expect,according to the simplest GUT, to observe more than oneproton decay.)A number of such experiments have been carried out, butnone have yielded definite evidence of proton or neutron decay. One experiment used eight thousand tons of water and wasperformed in the Morton Salt Mine in Ohio (to avoid otherevents taking place, caused by cosmic rays, that might beconfused with proton decay). Since no spontaneous protondecay had been observed during the experiment, one cancalculate that the probable life of the proton must be greaterthan ten million million million million million years (1 withthirty-one zeros). This is longer than the lifetime predicted bythe simplest grand unified theory, but there are more elaboratetheories in which the predicted lifetimes are longer. Still moresensitive experiments involving even larger quantities of matterwill be needed to test them. Even though it is very difficult to observe spontaneousproton decay, it may be that our very existence is aconsequence of the reverse process, the production of protons,or more simply, of quarks, from an initial situation in whichthere were no more quarks than antiquarks, which is the mostnatural way to imagine the universe starting out. Matter on theearth is made up mainly of protons and neutrons, which inturn are made up of quarks. There are no antiprotons orantineutrons, made up from antiquarks, except for a few thatphysicists produce in large particle accelerators. We haveevidence from cosmic rays that the same is true for all thematter in our galaxy: there are no antiprotons or antineutronsapart from a small number that are produced as particle/antiparticle pairs in high-energy collisions. If there were largeregions of antimatter in our galaxy, we would expect to observelarge quantities of radiation from the borders between theregions of matter and antimatter, where many particles wouldbe colliding with their anti-particles, annihilating each other andgiving off high-energy radiation. We have no direct evidence as to whether the matter inother galaxies is made up of protons and neutrons orantiprotons and anti-neutrons, but it must be one or the other: there cannot be a mixture in a single galaxy because in thatcase we would again observe a lot of radiation fromannihilations. We therefore believe that all galaxies arecomposed of quarks rather than antiquarks; it seemsimplausible that some galaxies should be matter and someantimatter. Why should there be so many more quarks thanantiquarks? Why are there not equal numbers of each? It iscertainly fortunate for us that the numbers are unequalbecause, if they had been the same, nearly all the quarks andantiquarks would have annihilated each other in the earlyuniverse and left a universe filled with radiation but hardly anymatter. There would then have been no galaxies, stars, orplanets on which human life could have developed. Luckily,grand unified theories may provide an explanation of why theuniverse should now contain more quarks than antiquarks,even if it started out with equal numbers of each. As we haveseen, GUTs allow quarks to change into antielectrons at highenergy. They also allow the reverse processes, antiquarksturning into electrons, and electrons and antielectrons turninginto antiquarks and quarks. 
There was a time in the very earlyuniverse when it was so hot that the particle energies wouldhave been high enough for these transformations to take place. But why should that lead to more quarks than antiquarks? Thereason is that the laws of physics are not quite the same forparticles and antiparticles. Up to 1956 it was believed that the laws of physics obeyedeach of three separate symmetries called C, P, and T. Thesymmetry C means that the laws are the same for particlesand antiparticles. The symmetry P means that the laws are thesame for any situation and its mirror image (the mirror imageof a particle spinning in a right-handed direction is onespinning in a left-handed direction). The symmetry T meansthat if you reverse the direction of motion of all particles andantiparticles, the system should go back to what it was atearlier times; in other words, the laws are the same in theforward and backward directions of time. In 1956 twoAmerican physicists, Tsung-Dao Lee and Chen Ning Yang,suggested that the weak force does not in fact obey thesymmetry P. In other words, the weak force would make theuniverse develop in a different way from the way in which themirror image of the universe would develop. The same year, acolleague, Chien-Shiung Wu, proved their prediction correct. Shedid this by lining up the nuclei of radioactive atoms in amagnetic field, so that they were all spinning in the samedirection, and showed that the electrons were given off more inone direction than another. The following year, Lee and Yangreceived the Nobel Prize for their idea. It was also found thatthe weak force did not obey the symmetry C. That is, it wouldcause a universe composed of antiparticles to behave differentlyfrom our universe. Nevertheless, it seemed that the weak forcedid obey the combined symmetry CP. That is, the universewould develop in the same way as its mirror image if, inaddition, every particle was swapped with its antiparticle! However, in 1964 two more Americans, J. W. Cronin and ValFitch, discovered that even the CP symmetry was not obeyedin the decay of certain particles called K-mesons. Cronin andFitch eventually received the Nobel Prize for their work in1980. (A lot of prizes have been awarded for showing that theuniverse is not as simple as we might have thought!)There is a mathematical theorem that says that any theorythat obeys quantum mechanics and relativity must always obeythe combined symmetry CPT. In other words, the universewould have to behave the same if one replaced particles byantiparticles, took the mirror image, and also reversed thedirection of time. But Cronin and Fitch showed that if onereplaces particles by antiparticles and takes the mirror image,but does not reverse the direction of time, then the universedoes not behave the same. The laws of physics, therefore, mustchange if one reverses the direction of time - they do not obeythe symmetry T. Certainly the early universe does not obey the symmetry T: as time runs forward the universe expands - if it ranbackward, the universe would be contracting. And since thereare forces that do not obey the symmetry T, it follows that asthe universe expands, these forces could cause moreantielectrons to turn into quarks than electrons into antiquarks. Then, as the universe expanded and cooled, the antiquarkswould annihilate with the quarks, but since there would bemore quarks than antiquarks, a small excess of quarks wouldremain. It is these that make up the matter we see today andout of which we ourselves are made. 
Thus our very existencecould be regarded as a confirmation of grand unified theories,though a qualitative one only; the uncertainties are such thatone cannot predict the numbers of quarks that will be left afterthe annihilation, or even whether it would be quarks orantiquarks that would remain. (Had it been an excess ofantiquarks, however, we would simply have named antiquarksquarks, and quarks antiquarks.)Grand unified theories do not include the force of gravity. This does not matter too much, because gravity is such a weakforce that its effects can usually be neglected when we aredealing with elementary particles or atoms. However, the factthat it is both long range and always attractive means that itseffects all add up. So for a sufficiently large number of matterparticles, gravitational forces can dominate over all other forces. This is why it is gravity that determines the evolution of theuniverse. Even for objects the size of stars, the attractive forceof gravity can win over all the other forces and cause the starto collapse. My work in the 1970s focused on the black holesthat can result from such stellar collapse and the intensegravitational fields around them. It was this that led to the firsthints of how the theories of quantum mechanics and generalrelativity might affect each other - a glimpse of the shape of aquantum theory of gravity yet to come. CHAPTER 6 BLACK HOLES The term black hole is of very recent origin. It was coinedin 1969 by the American scientist John Wheeler as a graphicdescription of an idea that goes back at least two hundredyears, to a time when there were two theories about light: one,which Newton favored, was that it was composed of particles;the other was that it was made of waves. We now know thatreally both theories are correct. By the wave/particle duality ofquantum mechanics, light can be regarded as both a wave anda particle. Under the theory that light is made up of waves, itwas not clear how it would respond to gravity. But if light iscomposed of particles, one might expect them to be affected bygravity in the same way that cannonballs, rockets, and planetsare. At first people thought that particles of light traveledinfinitely fast, so gravity would not have been able to slow themdown, but the discovery by Roemer that light travels at a finitespeed meant that gravity might have an important effect. On this assumption, a Cambridge don, John Michell, wrote apaper in 1783 in the Philosophical Transactions of the RoyalSociety of London in which he pointed out that a star that wassufficiently massive and compact would have such a stronggravitational field that light could not escape: any light emittedfrom the surface of the star would be dragged back by thestar’s gravitational attraction before it could get very far. Michellsuggested that there might be a large number of stars like this. Although we would not be able to see them because the lightfrom them would not reach us, we would still feel theirgravitational attraction. Such objects are what we now call blackholes, because that is what they are: black voids in space. Asimilar suggestion was made a few years later by the Frenchscientist the Marquis de Laplace, apparently independently ofMichell. Interestingly enough, Laplace included it in only the firstand second editions of his book The System of the World, andleft it out of later editions; perhaps he decided that it was acrazy idea. 
(Also, the particle theory of light went out of favorduring the nineteenth century; it seemed that everything couldbe explained by the wave theory, and according to the wavetheory, it was not clear that light would be affected by gravityat all.)In fact, it is not really consistent to treat light likecannonballs in Newton’s theory of gravity because the speed oflight is fixed. (A cannonball fired upward from the earth will beslowed down by gravity and will eventually stop and fall back;a photon, however, must continue upward at a constant speed. How then can Newtonian grav-ity affect light?) A consistenttheory of how gravity affects light did not come along untilEinstein proposed general relativity in 1915. And even then itwas a long time before the implications of the theory formassive stars were understood. To understand how a black hole might be formed, we firstneed an understanding of the life cycle of a star. A star isformed when a large amount of gas (mostly hydrogen) startsto collapse in on itself due to its gravitational attraction. As itcontracts, the atoms of the gas collide with each other moreand more frequently and at greater and greater speeds - thegas heats up. Eventually, the gas will be so hot that when thehydrogen atoms collide they no longer bounce off each other,but instead coalesce to form helium. The heat released in thisreaction, which is like a controlled hydrogen bomb explosion, iswhat makes the star shine. This additional heat also increasesthe pressure of the gas until it is sufficient to balance thegravitational attraction, and the gas stops contracting. It is a bitlike a balloon - there is a balance between the pressure of theair inside, which is trying to make the balloon expand, and thetension in the rubber, which is trying to make the balloonsmaller. Stars will remain stable like this for a long time, withheat from the nuclear reactions balancing the gravitationalattraction. Eventually, however, the star will run out of itshydrogen and other nuclear fuels. Paradoxically, the more fuel astar starts off with, the sooner it runs out. This is because themore massive the star is, the hotter it needs to be to balanceits gravitational attraction. And the hotter it is, the faster it willuse up its fuel. Our sun has probably got enough fuel foranother five thousand million years or so, but more massivestars can use up their fuel in as little as one hundred millionyears, much less than the age of the universe. When a starruns out of fuel, it starts to cool off and so to contract. Whatmight happen to it then was first understood only at the endof the 1920s. In 1928 an Indian graduate student, SubrahmanyanChandrasekhar, set sail for England to study at Cambridge withthe British astronomer Sir Arthur Eddington, an expert ongeneral relativity. (According to some accounts, a journalist toldEddington in the early 1920s that he had heard there wereonly three people in the world who understood generalrelativity. Eddington paused, then replied, “I am trying to thinkwho the third person is.”) During his voyage from India,Chandrasekhar worked out how big a star could be and stillsupport itself against its own gravity after it had used up all itsfuel. The idea was this: when the star becomes small, thematter particles get very near each other, and so according tothe Pauli exclusion principle, they must have very differentvelocities. This makes them move away from each other andso tends to make the star expand. 
A star can therefore maintain itself at a constant radius by a balance between the attraction of gravity and the repulsion that arises from the exclusion principle, just as earlier in its life gravity was balanced by the heat.
Chandrasekhar realized, however, that there is a limit to the repulsion that the exclusion principle can provide. The theory of relativity limits the maximum difference in the velocities of the matter particles in the star to the speed of light. This means that when the star got sufficiently dense, the repulsion caused by the exclusion principle would be less than the attraction of gravity. Chandrasekhar calculated that a cold star of more than about one and a half times the mass of the sun would not be able to support itself against its own gravity. (This mass is now known as the Chandrasekhar limit.) A similar discovery was made about the same time by the Russian scientist Lev Davidovich Landau.
This had serious implications for the ultimate fate of massive stars. If a star's mass is less than the Chandrasekhar limit, it can eventually stop contracting and settle down to a possible final state as a "white dwarf" with a radius of a few thousand miles and a density of hundreds of tons per cubic inch. A white dwarf is supported by the exclusion principle repulsion between the electrons in its matter. We observe a large number of these white dwarf stars. One of the first to be discovered is a star that is orbiting around Sirius, the brightest star in the night sky.
Landau pointed out that there was another possible final state for a star, also with a limiting mass of about one or two times the mass of the sun but much smaller even than a white dwarf. These stars would be supported by the exclusion principle repulsion between neutrons and protons, rather than between electrons. They were therefore called neutron stars. They would have a radius of only ten miles or so and a density of hundreds of millions of tons per cubic inch. At the time they were first predicted, there was no way that neutron stars could be observed. They were not actually detected until much later.
Stars with masses above the Chandrasekhar limit, on the other hand, have a big problem when they come to the end of their fuel. In some cases they may explode or manage to throw off enough matter to reduce their mass below the limit and so avoid catastrophic gravitational collapse, but it was difficult to believe that this always happened, no matter how big the star. How would it know that it had to lose weight? And even if every star managed to lose enough mass to avoid collapse, what would happen if you added more mass to a white dwarf or neutron star to take it over the limit? Would it collapse to infinite density? Eddington was shocked by that implication, and he refused to believe Chandrasekhar's result. Eddington thought it was simply not possible that a star could collapse to a point. This was the view of most scientists: Einstein himself wrote a paper in which he claimed that stars would not shrink to zero size. The hostility of other scientists, particularly Eddington, his former teacher and the leading authority on the structure of stars, persuaded Chandrasekhar to abandon this line of work and turn instead to other problems in astronomy, such as the motion of star clusters. However, when he was awarded the Nobel Prize in 1983, it was, at least in part, for his early work on the limiting mass of cold stars.
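The scale of the Chandrasekhar limit can be recovered, very roughly, from fundamental constants alone. The Python sketch below is not Chandrasekhar's own calculation, only the standard order-of-magnitude estimate, and the constants are textbook values not given in the book: the limiting mass is set by the combination (hbar c / G)^(3/2) divided by the square of the mass of a hydrogen atom. The result lands within a factor of order one of the one-and-a-half-solar-mass figure quoted above; the detailed stellar-structure calculation brings it down to about 1.4 times the mass of the sun.

    # Order-of-magnitude estimate of the Chandrasekhar mass:
    # M ~ (hbar * c / G)**1.5 / m_H**2, expressed in solar masses.
    hbar  = 1.055e-34   # reduced Planck constant, J s
    c     = 2.998e8     # speed of light, m/s
    G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    m_H   = 1.674e-27   # mass of a hydrogen atom, kg
    M_sun = 1.989e30    # mass of the sun, kg

    M_limit = (hbar * c / G) ** 1.5 / m_H ** 2
    print(M_limit / M_sun)   # roughly 1.8 solar masses - the right ballpark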
Chandrasekhar had shown that the exclusion principle couldnot halt the collapse of a star more massive than theChandrasekhar limit, but the problem of understanding whatwould happen to such a star, according to general relativity,was first solved by a young American, Robert Oppenheimer, in1939. His result, however, suggested that there would be noobservational consequences that could be detected by thetelescopes of the day. Then World War II intervened andOppenheimer himself became closely involved in the atom bombproject. After the war the problem of gravitational collapse waslargely forgotten as most scientists became caught up in whathappens on the scale of the atom and its nucleus. In the1960s, however, interest in the large-scale problems ofastronomy and cosmology was revived by a great increase inthe number and range of astronomical observations broughtabout by the application of modern technology. Oppenheimer’swork was then rediscovered and extended by a number ofpeople. The picture that we now have from Oppenheimer’s work isas follows. The gravitational field of the star changes the pathsof light rays in space-time from what they would have beenhad the star not been present. The light cones, which indicatethe paths followed in space and time by flashes of light emittedfrom their tips, are bent slightly inward near the surface of thestar. This can be seen in the bending of light from distantstars observed during an eclipse of the sun. As the starcontracts, the gravitational field at its surface gets stronger andthe light cones get bent inward more. This makes it moredifficult for light from the star to escape, and the light appearsdimmer and redder to an observer at a distance. Eventually,when the star has shrunk to a certain critical radius, thegravitational field at the surface becomes so strong that thelight cones are bent inward so much that light can no longerescape (Fig. 6.1). According to the theory of relativity, nothingcan travel faster than light. Thus if light cannot escape, neithercan anything else; everything is dragged back by thegravitational field. So one has a set of events, a region ofspace-time, from which it is not possible to escape to reach adistant observer. This region is what we now call a black hole. Its boundary is called the event horizon and it coincides withthe paths of light rays that just fail to escape from the blackhole. In order to understand what you would see if you werewatching a star collapse to form a black hole, one has toremember that in the theory of relativity there is no absolutetime. Each observer has his own measure of time. The time forsomeone on a star will be different from that for someone at adistance, because of the gravitational field of the star. Supposean intrepid astronaut on the surface of the collapsing star,collapsing inward with it, sent a signal every second, accordingto his watch, to his spaceship orbiting about the star. At sometime on his watch, say 11:00, the star would shrink below thecritical radius at which the gravitational field becomes so strongnothing can escape, and his signals would no longer reach thespaceship. As 11:00 approached his companions watching fromthe spaceship would find the intervals between successive signalsfrom the astronaut getting longer and longer, but this effectwould be very small before 10:59:59. 
They would have to wait only very slightly more than a second between the astronaut's 10:59:58 signal and the one that he sent when his watch read 10:59:59, but they would have to wait forever for the 11:00 signal. The light waves emitted from the surface of the star between 10:59:59 and 11:00, by the astronaut's watch, would be spread out over an infinite period of time, as seen from the spaceship. The time interval between the arrival of successive waves at the spaceship would get longer and longer, so the light from the star would appear redder and redder and fainter and fainter. Eventually, the star would be so dim that it could no longer be seen from the spaceship: all that would be left would be a black hole in space. The star would, however, continue to exert the same gravitational force on the spaceship, which would continue to orbit the black hole. This scenario is not entirely realistic, however, because of the following problem. Gravity gets weaker the farther you are from the star, so the gravitational force on our intrepid astronaut's feet would always be greater than the force on his head. This difference in the forces would stretch our astronaut out like spaghetti or tear him apart before the star had contracted to the critical radius at which the event horizon formed! However, we believe that there are much larger objects in the universe, like the central regions of galaxies, that can also undergo gravitational collapse to produce black holes; an astronaut on one of these would not be torn apart before the black hole formed. He would not, in fact, feel anything special as he reached the critical radius, and could pass the point of no return without noticing it. However, within just a few hours, as the region continued to collapse, the difference in the gravitational forces on his head and his feet would become so strong that again it would tear him apart.
The work that Roger Penrose and I did between 1965 and 1970 showed that, according to general relativity, there must be a singularity of infinite density and space-time curvature within a black hole. This is rather like the big bang at the beginning of time, only it would be an end of time for the collapsing body and the astronaut. At this singularity the laws of science and our ability to predict the future would break down. However, any observer who remained outside the black hole would not be affected by this failure of predictability, because neither light nor any other signal could reach him from the singularity. This remarkable fact led Roger Penrose to propose the cosmic censorship hypothesis, which might be paraphrased as "God abhors a naked singularity." In other words, the singularities produced by gravitational collapse occur only in places, like black holes, where they are decently hidden from outside view by an event horizon. Strictly, this is what is known as the weak cosmic censorship hypothesis: it protects observers who remain outside the black hole from the consequences of the breakdown of predictability that occurs at the singularity, but it does nothing at all for the poor unfortunate astronaut who falls into the hole.
There are some solutions of the equations of general relativity in which it is possible for our astronaut to see a naked singularity: he may be able to avoid hitting the singularity and instead fall through a "wormhole" and come out in another region of the universe.
This would offer great possibilities for travel in space and time, but unfortunately it seems that these solutions may all be highly unstable; the least disturbance, such as the presence of an astronaut, may change them so that the astronaut could not see the singularity until he hit it and his time came to an end. In other words, the singularity would always lie in his future and never in his past. The strong version of the cosmic censorship hypothesis states that in a realistic solution, the singularities would always lie either entirely in the future (like the singularities of gravitational collapse) or entirely in the past (like the big bang). I strongly believe in cosmic censorship so I bet Kip Thorne and John Preskill of Caltech that it would always hold. I lost the bet on a technicality because examples were produced of solutions with a singularity that was visible from a long way away. So I had to pay up, which according to the terms of the bet meant I had to clothe their nakedness. But I can claim a moral victory. The naked singularities were unstable: the least disturbance would cause them either to disappear or to be hidden behind an event horizon. So they would not occur in realistic situations.
The event horizon, the boundary of the region of space-time from which it is not possible to escape, acts rather like a one-way membrane around the black hole: objects, such as unwary astronauts, can fall through the event horizon into the black hole, but nothing can ever get out of the black hole through the event horizon. (Remember that the event horizon is the path in space-time of light that is trying to escape from the black hole, and nothing can travel faster than light.) One could well say of the event horizon what the poet Dante said of the entrance to Hell: "All hope abandon, ye who enter here." Anything or anyone who falls through the event horizon will soon reach the region of infinite density and the end of time.
General relativity predicts that heavy objects that are moving will cause the emission of gravitational waves, ripples in the curvature of space that travel at the speed of light. These are similar to light waves, which are ripples of the electromagnetic field, but they are much harder to detect. They can be observed by the very slight change in separation they produce between neighboring freely moving objects. A number of detectors are being built in the United States, Europe, and Japan that will measure displacements of one part in a thousand million million million (1 with twenty-one zeros after it), or less than the nucleus of an atom over a distance of ten miles.
Like light, gravitational waves carry energy away from the objects that emit them. One would therefore expect a system of massive objects to settle down eventually to a stationary state, because the energy in any movement would be carried away by the emission of gravitational waves. (It is rather like dropping a cork into water: at first it bobs up and down a great deal, but as the ripples carry away its energy, it eventually settles down to a stationary state.) For example, the movement of the earth in its orbit round the sun produces gravitational waves. The effect of the energy loss will be to change the orbit of the earth so that gradually it gets nearer and nearer to the sun, eventually collides with it, and settles down to a stationary state. The rate of energy loss in the case of the earth and the sun is very low - about enough to run a small electric heater.
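The "small electric heater" figure can be checked against the standard formula for the gravitational-wave power radiated by two bodies in a circular orbit (the quadrupole formula, which the book does not derive). The Python sketch below is only a rough check under that assumption, using textbook values for the masses and the earth-sun distance, none of which are taken from the text.

    # Gravitational-wave power radiated by the earth-sun system, from the
    # quadrupole formula for a circular two-body orbit:
    #   P = (32/5) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8          # speed of light, m/s
    m_sun   = 1.989e30   # kg
    m_earth = 5.972e24   # kg
    r = 1.496e11         # mean earth-sun distance, m

    P = (32 / 5) * G**4 * (m_sun * m_earth)**2 * (m_sun + m_earth) / (c**5 * r**5)
    print(P)             # about 200 watts - roughly a small electric heater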
This means it will take about a thousand million million million million years for the earth to run into the sun, so there's no immediate cause for worry! The change in the orbit of the earth is too slow to be observed, but this same effect has been observed over the past few years occurring in the system called PSR 1913 + 16 (PSR stands for "pulsar," a special type of neutron star that emits regular pulses of radio waves). This system contains two neutron stars orbiting each other, and the energy they are losing by the emission of gravitational waves is causing them to spiral in toward each other. This confirmation of general relativity won J. H. Taylor and R. A. Hulse the Nobel Prize in 1993. It will take about three hundred million years for them to collide. Just before they do, they will be orbiting so fast that they will emit enough gravitational waves for detectors like LIGO to pick up.
During the gravitational collapse of a star to form a black hole, the movements would be much more rapid, so the rate at which energy is carried away would be much higher. It would therefore not be too long before it settled down to a stationary state. What would this final stage look like? One might suppose that it would depend on all the complex features of the star from which it had formed - not only its mass and rate of rotation, but also the different densities of various parts of the star, and the complicated movements of the gases within the star. And if black holes were as varied as the objects that collapsed to form them, it might be very difficult to make any predictions about black holes in general.
In 1967, however, the study of black holes was revolutionized by Werner Israel, a Canadian scientist (who was born in Berlin, brought up in South Africa, and took his doctoral degree in Ireland). Israel showed that, according to general relativity, non-rotating black holes must be very simple; they were perfectly spherical, their size depended only on their mass, and any two such black holes with the same mass were identical. They could, in fact, be described by a particular solution of Einstein's equations that had been known since 1917, found by Karl Schwarzschild shortly after the discovery of general relativity. At first many people, including Israel himself, argued that since black holes had to be perfectly spherical, a black hole could only form from the collapse of a perfectly spherical object. Any real star - which would never be perfectly spherical - could therefore only collapse to form a naked singularity.
There was, however, a different interpretation of Israel's result, which was advocated by Roger Penrose and John Wheeler in particular. They argued that the rapid movements involved in a star's collapse would mean that the gravitational waves it gave off would make it ever more spherical, and by the time it had settled down to a stationary state, it would be precisely spherical. According to this view, any non-rotating star, however complicated its shape and internal structure, would end up after gravitational collapse as a perfectly spherical black hole, whose size would depend only on its mass. Further calculations supported this view, and it soon came to be adopted generally.
Israel's result dealt with the case of black holes formed from non-rotating bodies only. In 1963, Roy Kerr, a New Zealander, found a set of solutions of the equations of general relativity that described rotating black holes. These "Kerr" black holes rotate at a constant rate, their size and shape depending only on their mass and rate of rotation.
If the rotation is zero, theblack hole is perfectly round and the solution is identical to theSchwarzschild solution. If the rotation is non-zero, the blackhole bulges outward near its equator (just as the earth or thesun bulge due to their rotation), and the faster it rotates, themore it bulges. So, to extend Israel’s result to include rotatingbodies, it was conjectured that any rotating body that collapsedto form a black hole would eventually settle down to astationary state described by the Kerr solution. In 1970 acolleague and fellow research student of mine at Cambridge,Brandon Carter, took the first step toward proving thisconjecture. He showed that, provided a stationary rotating blackhole had an axis of symmetry, like a spinning top, its size andshape would depend only on its mass and rate of rotation. Then, in 1971, I proved that any stationary rotating black holewould indeed have such an axis of symmetry. Finally, in 1973,David Robinson at Kings College, London, used Carter’s andmy results to show that the conjecture had been correct: sucha black hole had indeed to be the Kerr solution. So aftergravitational collapse a black hole must settle down into a statein which it could be rotating, but not pulsating. Moreover, itssize and shape would depend only on its mass and rate ofrotation, and not on the nature of the body that had collapsedto form it. This result became known by the maxim: “A blackhole has no hair.” The “no hair” theorem is of great practicalimportance, because it so greatly restricts the possible types ofblack holes. One can therefore make detailed models of objectsthat might contain black holes and compare the predictions ofthe models with observations. It also means that a very largeamount of information about the body that has collapsed mustbe lost when a black hole is formed, because afterward all wecan possibly measure about the body is its mass and rate ofrotation. The significance of this will be seen in the nextchapter. Black holes are one of only a fairly small number of cases inthe history of science in which a theory was developed in greatdetail as a mathematical model before there was any evidencefrom observations that it was correct. Indeed, this used to bethe main argument of opponents of black holes: how could onebelieve in objects for which the only evidence was calculationsbased on the dubious theory of general relativity? In 1963,however, Maarten Schmidt, an astronomer at the PalomarObservatory in California, measured the red shift of a faintstarlike object in the direction of the source of radio wavescalled 3C273 (that is, source number 273 in the thirdCambridge catalogue of radio sources). He found it was toolarge to be caused by a gravitational field: if it had been agravitational red shift, the object would have to be so massiveand so near to us that it would disturb the orbits of planets inthe Solar System. This suggested that the red shift was insteadcaused by the expansion of the universe, which, in turn, meantthat the object was a very long distance away. And to bevisible at such a great distance, the object must be very bright,must, in other words, be emitting a huge amount of energy. The only mechanism that people could think of that wouldproduce such large quantities of energy seemed to be thegravitational collapse not just of a star but of a whole centralregion of a galaxy. A number of other similar “quasi-stellarobjects,” or quasars, have been discovered, all with large redshifts. 
But they are all too far away and therefore too difficult to observe to provide conclusive evidence of black holes.
Further encouragement for the existence of black holes came in 1967 with the discovery by a research student at Cambridge, Jocelyn Bell-Burnell, of objects in the sky that were emitting regular pulses of radio waves. At first Bell and her supervisor, Antony Hewish, thought they might have made contact with an alien civilization in the galaxy! Indeed, at the seminar at which they announced their discovery, I remember that they called the first four sources to be found LGM 1 - 4, LGM standing for "Little Green Men." In the end, however, they and everyone else came to the less romantic conclusion that these objects, which were given the name pulsars, were in fact rotating neutron stars that were emitting pulses of radio waves because of a complicated interaction between their magnetic fields and surrounding matter. This was bad news for writers of space westerns, but very hopeful for the small number of us who believed in black holes at that time: it was the first positive evidence that neutron stars existed. A neutron star has a radius of about ten miles, only a few times the critical radius at which a star becomes a black hole. If a star could collapse to such a small size, it is not unreasonable to expect that other stars could collapse to even smaller size and become black holes.
How could we hope to detect a black hole, as by its very definition it does not emit any light? It might seem a bit like looking for a black cat in a coal cellar. Fortunately, there is a way. As John Michell pointed out in his pioneering paper in 1783, a black hole still exerts a gravitational force on nearby objects. Astronomers have observed many systems in which two stars orbit around each other, attracted toward each other by gravity. They also observe systems in which there is only one visible star that is orbiting around some unseen companion. One cannot, of course, immediately conclude that the companion is a black hole: it might merely be a star that is too faint to be seen. However, some of these systems, like the one called Cygnus X-1 (Fig. 6.2), are also strong sources of X-rays. The best explanation for this phenomenon is that matter has been blown off the surface of the visible star. As it falls toward the unseen companion, it develops a spiral motion (rather like water running out of a bath), and it gets very hot, emitting X-rays (Fig. 6.3). For this mechanism to work, the unseen object has to be very small, like a white dwarf, neutron star, or black hole. From the observed orbit of the visible star, one can determine the lowest possible mass of the unseen object. In the case of Cygnus X-1, this is about six times the mass of the sun, which, according to Chandrasekhar's result, is too great for the unseen object to be a white dwarf. It is also too large a mass to be a neutron star. It seems, therefore, that it must be a black hole.
There are other models to explain Cygnus X-1 that do not include a black hole, but they are all rather far-fetched. A black hole seems to be the only really natural explanation of the observations. Despite this, I had a bet with Kip Thorne of the California Institute of Technology that in fact Cygnus X-1 does not contain a black hole! This was a form of insurance policy for me. I have done a lot of work on black holes, and it would all be wasted if it turned out that black holes do not exist.
But in that case, I would have the consolation of winningmy bet, which would bring me four years of the magazinePrivate Eye. In fact, although the situation with Cygnus X-1 hasnot changed much since we made the bet in 1975, there isnow so much other observational evidence in favor of blackholes that I have conceded the bet. I paid the specified penalty,which was a one-year subscription to Penthouse, to the outrageof Kip’s liberated wife. We also now have evidence for several other black holes insystems like Cygnus X-1 in our galaxy and in two neighboringgalaxies called the Magellanic Clouds. The number of blackholes, however, is almost certainly very much higher; in thelong history of the universe, many stars must have burned alltheir nuclear fuel and have had to collapse. The number ofblack holes may well be greater even than the number ofvisible stars, which totals about a hundred thousand million inour galaxy alone. The extra gravitational attraction of such alarge number of black holes could explain why our galaxyrotates at the rate it does: the mass of the visible stars isinsufficient to account for this. We also have some evidencethat there is a much larger black hole, with a mass of about ahundred thousand times that of the sun, at the center of ourgalaxy. Stars in the galaxy that come too near this black holewill be torn apart by the difference in the gravitational forceson their near and far sides. Their remains and gas that isthrown off other stars, will fall toward the black hole. As in thecase of Cygnus X-l, the gas will spiral inward and will heat up,though not as much as in that case. It will not get hot enoughto emit X rays, but it could account for the very compactsource of radio waves and infrared rays that is observed at thegalactic center. It is thought that similar but even larger black holes, withmasses of about a hundred million times the mass of the sun,occur at the centers of quasars. For example, observations withthe Hubble telescope of the galaxy known as M87 reveal that itcontains a disk of gas 130 light-years across rotating about acentral object two thousand million times the mass of the sun. This can only be a black hole. Matter falling into such asupermassive black hole would provide the only source ofpower great enough to explain the enormous amounts ofenergy that these objects are emitting. As the matter spiralsinto the black hole, it would make the black hole rotate in thesame direction, causing it to develop a magnetic field rather likethat of the earth. Very high-energy particles would be generatednear the black hole by the in-falling matter. The magnetic fieldwould be so strong that it could focus these particles into jetsejected outward along the axis of rotation of the black hole,that is, in the directions of its north and south poles. Such jetsare indeed observed in a number of galaxies and quasars. Onecan also consider the possibility that there might be black holeswith masses much less than that of the sun. Such black holescould not be formed by gravitational collapse, because theirmasses are below the Chandrasekhar mass limit: stars of thislow mass can support themselves against the force of gravityeven when they have exhausted their nuclear fuel. Low-massblack holes could form only if matter was compressed toenormous densities by very large external pressures. 
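To put a rough figure on these “enormous densities” - a back-of-the-envelope sketch not in the original text, using only the standard Schwarzschild radius formula r = 2GM/c² - matter of a given mass must be squeezed inside that radius before it becomes a black hole:

```python
# A rough sketch (not from the text): how small and dense matter must become
# to form a black hole, using the standard Schwarzschild radius r = 2*G*M/c^2.
from math import pi

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius within which a mass must be squeezed to form a black hole."""
    return 2 * G * mass_kg / c**2

def required_density(mass_kg):
    """Average density of the mass when packed inside that radius."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4 / 3 * pi * r**3)

for label, mass in [("the sun (~2e30 kg)", 2e30),
                    ("a large mountain (~1e12 kg)", 1e12)]:
    print(label, schwarzschild_radius(mass), "m,", required_density(mass), "kg/m^3")
```

For the sun the radius comes out at a few kilometers; for a mass of a thousand million tons (the “large mountain” figure quoted later in this chapter) it is about a million millionth of a millimeter, at a density enormously greater even than that of an atomic nucleus - which is why only exceptional external pressures, of the kind described next, could produce such objects.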
Suchconditions could occur in a very big hydrogen bomb: thephysicist John Wheeler once calculated that if one took all theheavy water in all the oceans of the world, one could build ahydrogen bomb that would compress matter at the center somuch that a black hole would be created. (Of course, therewould be no one left to observe it!) A more practical possibilityis that such low-mass black holes might have been formed inthe high temperatures and pressures of the very early universe. Black holes would have been formed only if the early universehad not been perfectly smooth and uniform, because only asmall region that was denser than average could becompressed in this way to form a black hole. But we knowthat there must have been some irregularities, becauseotherwise the matter in the universe would still be perfectlyuniformly distributed at the present epoch, instead of beingclumped together in stars and galaxies. Whether the irregularities required to account for stars andgalaxies would have led to the formation of a significantnumber of “primordial” black holes clearly depends on thedetails of the conditions in the early universe. So if we coulddetermine how many primordial black holes there are now, wewould learn a lot about the very early stages of the universe. Primordial black holes with masses more than a thousandmillion tons (the mass of a large mountain) could be detectedonly by their gravitational influence on other, visible matter oron the expansion of the universe. However, as we shall learnin the next chapter, black holes are not really black after all: they glow like a hot body, and the smaller they are, the morethey glow. So, paradoxically, smaller black holes might actuallyturn out to be easier to detect than large ones! CHAPTER 7 BLACK HOLES AIN’T SO BLACK Before 1970, my research on general relativity hadconcentrated mainly on the question of whether or not therehad been a big bang singularity. However, one evening inNovember that year, shortly after the birth of my daughter,Lucy, I started to think about black holes as I was getting intobed. My disability makes this rather a slow process, so I hadplenty of time. At that date there was no precise definition ofwhich points in space-time lay inside a black hole and whichlay outside. I had already discussed with Roger Penrose theidea of defining a black hole as the set of events from which itwas not possible to escape to a large distance, which is nowthe generally accepted definition. It means that the boundary ofthe black hole, the event horizon, is formed by the light raysthat just fail to escape from the black hole, hovering foreverjust on the edge (Fig. 7.1). It is a bit like running away fromthe police and just managing to keep one step ahead but notbeing able to get clear away! Suddenly I realized that the paths of these light rays couldnever approach one another. If they did they must eventuallyrun into one another. It would be like meeting someone elserunning away from the police in the opposite direction - youwould both be caught! (Or, in this case, fall into a black hole.)But if these light rays were swallowed up by the black hole,then they could not have been on the boundary of the blackhole. So the paths of light rays in the event horizon hadalways to be moving parallel to, or away from, each other. Another way of seeing this is that the event horizon, theboundary of the black hole, is like the edge of a shadow - theshadow of impending doom. 
If you look at the shadow cast bya source at a great distance, such as the sun, you will see thatthe rays of light in the edge are not approaching each other. If the rays of light that form the event horizon, theboundary of the black hole, can never approach each other,the area of the event horizon might stay the same or increasewith time, but it could never decrease because that would meanthat at least some of the rays of light in the boundary wouldhave to be approaching each other. In fact, the area wouldincrease whenever matter or radiation fell into the black hole(Fig. 7.2). Or if two black holes collided and merged togetherto form a single black hole, the area of the event horizon ofthe final black hole would be greater than or equal to the sumof the areas of the event horizons of the original black holes(Fig. 7.3). This nondecreasing property of the event horizon’sarea placed an important restriction on the possible behavior ofblack holes. I was so excited with my discovery that I did notget much sleep that night. The next day I rang up RogerPenrose. He agreed with me. I think, in fact, that he had beenaware of this property of the area. However, he had beenusing a slightly different definition of a black hole. He had notrealized that the boundaries of the black hole according to thetwo definitions would be the same, and hence so would theirareas, provided the black hole had settled down to a state inwhich it was not changing with time. The nondecreasing behavior of a black hole’s area was veryreminiscent of the behavior of a physical quantity called entropy,which measures the degree of disorder of a system. It is amatter of common experience that disorder will tend to increaseif things are left to themselves. (One has only to stop makingrepairs around the house to see that!) One can create orderout of disorder (for example, one can paint the house), butthat requires expenditure of effort or energy and so decreasesthe amount of ordered energy available. A precise statement of this idea is known as the second lawof thermodynamics. It states that the entropy of an isolatedsystem always increases, and that when two systems are joinedtogether, the entropy of the combined system is greater thanthe sum of the entropies of the individual systems. Forexample, consider a system of gas molecules in a box. Themolecules can be thought of as little billiard balls continuallycolliding with each other and bouncing off the walls of the box. The higher the temperature of the gas, the faster the moleculesmove, and so the more frequently and harder they collide withthe walls of the box and the greater the outward pressure theyexert on the walls. Suppose that initially the molecules are allconfined to the left-hand side of the box by a partition. If thepartition is then removed, the molecules will tend to spread outand occupy both halves of the box. At some later time theycould, by chance, all be in the right half or back in the lefthalf, but it is overwhelmingly more probable that there will beroughly equal numbers in the two halves. Such a state is lessordered, or more disordered, than the original state in which allthe molecules were in one half. One therefore says that theentropy of the gas has gone up. Similarly, suppose one startswith two boxes, one containing oxygen molecules and the othercontaining nitrogen molecules. If one joins the boxes togetherand removes the intervening wall, the oxygen and the nitrogenmolecules will start to mix. 
At a later time the most probable state would be a fairly uniform mixture of oxygen and nitrogen molecules throughout the two boxes. This state would be less ordered, and hence have more entropy, than the initial state of two separate boxes.

The second law of thermodynamics has a rather different status than that of other laws of science, such as Newton’s law of gravity, for example, because it does not hold always, just in the vast majority of cases. The probability of all the gas molecules in our first box being found in one half of the box at a later time is many millions of millions to one, but it can happen. However, if one has a black hole around, there seems to be a rather easier way of violating the second law: just throw some matter with a lot of entropy, such as a box of gas, down the black hole. The total entropy of matter outside the black hole would go down. One could, of course, still say that the total entropy, including the entropy inside the black hole, has not gone down - but since there is no way to look inside the black hole, we cannot see how much entropy the matter inside it has. It would be nice, then, if there was some feature of the black hole by which observers outside the black hole could tell its entropy, and which would increase whenever matter carrying entropy fell into the black hole. Following the discovery, described above, that the area of the event horizon increased whenever matter fell into a black hole, a research student at Princeton named Jacob Bekenstein suggested that the area of the event horizon was a measure of the entropy of the black hole. As matter carrying entropy fell into a black hole, the area of its event horizon would go up, so that the sum of the entropy of matter outside black holes and the area of the horizons would never go down.

This suggestion seemed to prevent the second law of thermodynamics from being violated in most situations. However, there was one fatal flaw. If a black hole has entropy, then it ought to also have a temperature. But a body with a particular temperature must emit radiation at a certain rate. It is a matter of common experience that if one heats up a poker in a fire it glows red hot and emits radiation, but bodies at lower temperatures emit radiation too; one just does not normally notice it because the amount is fairly small. This radiation is required in order to prevent violation of the second law. So black holes ought to emit radiation. But by their very definition, black holes are objects that are not supposed to emit anything. It therefore seemed that the area of the event horizon of a black hole could not be regarded as its entropy. In 1972 I wrote a paper with Brandon Carter and an American colleague, Jim Bardeen, in which we pointed out that although there were many similarities between entropy and the area of the event horizon, there was this apparently fatal difficulty. I must admit that in writing this paper I was motivated partly by irritation with Bekenstein, who, I felt, had misused my discovery of the increase of the area of the event horizon. However, it turned out in the end that he was basically correct, though in a manner he had certainly not expected.

In September 1973, while I was visiting Moscow, I discussed black holes with two leading Soviet experts, Yakov Zeldovich and Alexander Starobinsky. They convinced me that, according to the quantum mechanical uncertainty principle, rotating black holes should create and emit particles.
I believed theirarguments on physical grounds, but I did not like themathematical way in which they calculated the emission. Itherefore set about devising a better mathematical treatment,which I described at an informal seminar in Oxford at the endof November 1973. At that time I had not done the calculationsto find out how much would actually be emitted. I wasexpecting to discover just the radiation that Zeldovich andStarobinsky had predicted from rotating black holes. However,when I did the calculation, I found, to my surprise andannoyance, that even non-rotating black holes should apparentlycreate and emit particles at a steady rate. At first I thoughtthat this emission indicated that one of the approximations Ihad used was not valid. I was afraid that if Bekenstein foundout about it, he would use it as a further argument to supporthis ideas about the entropy of black holes, which I still did notlike. However, the more I thought about it, the more it seemedthat the approximations really ought to hold. But what finallyconvinced me that the emission was real was that the spectrumof the emitted particles was exactly that which would be emittedby a hot body, and that the black hole was emitting particlesat exactly the correct rate to prevent violations of the secondlaw. Since then the calculations have been repeated in anumber of different forms by other people. They all confirmthat a black hole ought to emit particles and radiation as if itwere a hot body with a temperature that depends only on theblack hole’s mass: the higher the mass, the lower thetemperature. How is it possible that a black hole appears to emit particleswhen we know that nothing can escape from within its eventhorizon? The answer, quantum theory tells us, is that theparticles do not come from within the black hole, but from the“empty” space just outside the black hole’s event horizon! Wecan understand this in the following way: what we think of as“empty” space cannot be completely empty because that wouldmean that all the fields, such as the gravitational andelectromagnetic fields, would have to be exactly zero. However,the value of a field and its rate of change with time are likethe position and velocity of a particle: the uncertainty principleimplies that the more accurately one knows one of thesequantities, the less accurately one can know the other. So inempty space the field cannot be fixed at exactly zero, becausethen it would have both a precise value (zero) and a preciserate of change (also zero). There must be a certain minimumamount of uncertainty, or quantum fluctuations, in the value ofthe field. One can think of these fluctuations as pairs ofparticles of light or gravity that appear together at some time,move apart, and then come together again and annihilate eachother. These particles are virtual particles like the particles thatcarry the gravitational force of the sun: unlike real particles,they cannot be observed directly with a particle detector. However, their indirect effects, such as small changes in theenergy of electron orbits in atoms, can be measured and agreewith the theoretical predictions to a remarkable degree ofaccuracy. The uncertainty principle also predicts that there willbe similar virtual pairs of matter particles, such as electrons orquarks. In this case, however, one member of the pair will bea particle and the other an antiparticle (the antiparticles of lightand gravity are the same as the particles). 
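The text does not quote it, but the standard energy-time form of the uncertainty principle, ΔE × Δt ≈ ħ, gives a feel for how short-lived such virtual pairs must be; the sketch below (an illustration added here, not part of the original text) applies it to an electron/positron pair:

```python
# Illustrative estimate (standard physics, not from the text): the energy-time
# uncertainty relation dE * dt ~ hbar limits how long a virtual pair of total
# energy dE can exist before it must come back together and annihilate.
hbar = 1.055e-34   # reduced Planck constant, J s
eV = 1.602e-19     # one electron volt, in joules

pair_energy = 2 * 511e3 * eV   # an electron/positron pair: about a million eV
lifetime = hbar / pair_energy
print(lifetime)                # roughly 6e-22 seconds
```

That is far too brief for any particle detector to register the pair directly, which is why, as described above, only their indirect effects can be measured.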
Because energy cannot be created out of nothing, one of the partners in a particle/antiparticle pair will have positive energy, and the other partner negative energy. The one with negative energy is condemned to be a short-lived virtual particle because real particles always have positive energy in normal situations. It must therefore seek out its partner and annihilate with it. However, a real particle close to a massive body has less energy than if it were far away, because it would take energy to lift it far away against the gravitational attraction of the body. Normally, the energy of the particle is still positive, but the gravitational field inside a black hole is so strong that even a real particle can have negative energy there. It is therefore possible, if a black hole is present, for the virtual particle with negative energy to fall into the black hole and become a real particle or antiparticle. In this case it no longer has to annihilate with its partner. Its forsaken partner may fall into the black hole as well. Or, having positive energy, it might also escape from the vicinity of the black hole as a real particle or antiparticle (Fig. 7.4). To an observer at a distance, it will appear to have been emitted from the black hole. The smaller the black hole, the shorter the distance the particle with negative energy will have to go before it becomes a real particle, and thus the greater the rate of emission, and the apparent temperature, of the black hole.

The positive energy of the outgoing radiation would be balanced by a flow of negative energy particles into the black hole. By Einstein’s equation E = mc² (where E is energy, m is mass, and c is the speed of light), energy is proportional to mass. A flow of negative energy into the black hole therefore reduces its mass. As the black hole loses mass, the area of its event horizon gets smaller, but this decrease in the entropy of the black hole is more than compensated for by the entropy of the emitted radiation, so the second law is never violated.

Moreover, the lower the mass of the black hole, the higher its temperature. So as the black hole loses mass, its temperature and rate of emission increase, so it loses mass more quickly. What happens when the mass of the black hole eventually becomes extremely small is not quite clear, but the most reasonable guess is that it would disappear completely in a tremendous final burst of emission, equivalent to the explosion of millions of H-bombs.

A black hole with a mass a few times that of the sun would have a temperature of only one ten millionth of a degree above absolute zero. This is much less than the temperature of the microwave radiation that fills the universe (about 2.7° above absolute zero), so such black holes would emit even less than they absorb. If the universe is destined to go on expanding forever, the temperature of the microwave radiation will eventually decrease to less than that of such a black hole, which will then begin to lose mass. But, even then, its temperature would be so low that it would take about a million million million million million million million million million million million years (1 with sixty-six zeros after it) to evaporate completely. This is much longer than the age of the universe, which is only about ten or twenty thousand million years (1 or 2 with ten zeros after it). On the other hand, as mentioned in Chapter 6, there might be primordial black holes with a very much smaller mass that were made by the collapse of irregularities in the very early stages of the universe.
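The book gives these temperatures without a formula. For readers who want to check the numbers, the standard expression for the temperature of a black hole of mass M is T = ħc³/(8πGMk_B); the sketch below (added here as an illustration, not part of the original text) evaluates it for a black hole of a few solar masses and for the mountain-mass primordial black holes discussed next:

```python
# Illustrative check (not from the text) of the black hole temperatures quoted
# above, using the standard formula T = hbar * c^3 / (8 * pi * G * M * k_B).
from math import pi

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # mass of the sun, kg

def black_hole_temperature(mass_kg):
    """Temperature in kelvin; note that it falls as the mass rises."""
    return hbar * c**3 / (8 * pi * G * mass_kg * k_B)

print(black_hole_temperature(3 * M_sun))   # ~2e-8 K: the "ten millionth of a
                                           # degree" order of magnitude quoted above
print(black_hole_temperature(1e12))        # ~1e11 K for a mountain-mass hole
```

The inverse dependence on mass is the point to notice: halve the mass and the temperature doubles, which is why the much lighter primordial black holes described next would be far hotter and far brighter.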
Such blackholes would have a much higher temperature and would beemitting radiation at a much greater rate. A primordial blackhole with an initial mass of a thousand million tons would havea lifetime roughly equal to the age of the universe. Primordialblack holes with initial masses less than this figure wouldalready have completely evaporated, but those with slightlygreater masses would still be emitting radiation in the form ofX rays and gamma rays. These X rays and gamma rays arelike waves of light, but with a much shorter wavelength. Suchholes hardly deserve the epithet black: they really are white hotand are emitting energy at a rate of about ten thousandmegawatts. One such black hole could run ten large power stations, ifonly we could harness its power. This would be rather difficult,however: the black hole would have the mass of a mountaincompressed into less than a million millionth of an inch, thesize of the nucleus of an atom! If you had one of these blackholes on the surface of the earth, there would be no way tostop it from falling through the floor to the center of the earth. It would oscillate through the earth and back, until eventually itsettled down at the center. So the only place to put such ablack hole, in which one might use the energy that it emitted,would be in orbit around the earth - and the only way thatone could get it to orbit the earth would be to attract it thereby towing a large mass in front of it, rather like a carrot infront of a donkey. This does not sound like a very practicalproposition, at least not in the immediate future. But even if we cannot harness the emission from theseprimordial black holes, what are our chances of observingthem? We could look for the gamma rays that the primordialblack holes emit during most of their lifetime. Although theradiation from most would be very weak because they are faraway, the total from all of them might be detectable. We doobserve such a background of gamma rays: Fig. 7.5 showshow the observed intensity differs at different frequencies (thenumber of waves per second). However, this background couldhave been, and probably was, generated by processes otherthan primordial black holes. The dotted line in Fig. 7.5 showshow the intensity should vary with frequency for gamma raysgiven off by primordial black holes, if there were on average300 per cubic light-year. One can therefore say that theobservations of the gamma ray background do not provide anypositive evidence for primordial black holes, but they do tell usthat on average there cannot be more than 300 in every cubiclight-year in the universe. This limit means that primordial blackholes could make up at most one millionth of the matter in theuniverse. With primordial black holes being so scarce, it might seemunlikely that there would be one near enough for us toobserve as an individual source of gamma rays. But sincegravity would draw primordial black holes toward any matter,they should be much more common in and around galaxies. So although the gamma ray background tells us that there canbe no more than 300 primordial black holes per cubiclight-year on average, it tells us nothing about how commonthey might be in our own galaxy. If they were, say, a milliontimes more common than this, then the nearest black hole tous would probably be at a distance of about a thousand millionkilometers, or about as far away as Pluto, the farthest knownplanet. 
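A quick estimate (again an added illustration, not the book’s own arithmetic) shows why even ten thousand megawatts is faint at that distance: spread over a sphere a thousand million kilometers in radius, very little of the power crosses any one square meter of detector.

```python
# Illustrative estimate (not from the text): the power from a 10,000-megawatt
# source that arrives on one square meter at roughly the distance of Pluto.
from math import pi

power = 1e10        # ten thousand megawatts, in watts
distance = 1e12     # about a thousand million kilometers, in meters

flux = power / (4 * pi * distance**2)   # inverse-square spreading over a sphere
print(flux)   # ~8e-16 watts per square meter, roughly a million million
              # million times fainter than sunlight at the earth (~1400 W/m^2)
```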
At this distance it would still be very difficult to detectthe steady emission of a black hole, even if it was tenthousand megawatts. In order to observe a primordial blackhole one would have to detect several gamma ray quantacoming from the same direction within a reasonable space oftime, such as a week. Otherwise, they might simply be part ofthe background. But Planck’s quantum principle tells us thateach gamma ray quantum has a very high energy, becausegamma rays have a very high frequency, so it would not takemany quanta to radiate even ten thousand megawatts. And toobserve these few coming from the distance of Pluto wouldrequire a larger gamma ray detector than any that have beenconstructed so far. Moreover, the detector would have to be inspace, because gamma rays cannot penetrate the atmosphere. Of course, if a black hole as close as Pluto were to reachthe end of its life and blow up, it would be easy to detect thefinal burst of emission. But if the black hole has been emittingfor the last ten or twenty thousand million years, the chance ofit reaching the end of its life within the next few years, ratherthan several million years in the past or future, is really rathersmall! So in order to have a reasonable chance of seeing anexplosion before your research grant ran out, you would haveto find a way to detect any explosions within a distance ofabout one light-year. In fact bursts of gamma rays from spacehave been detected by satellites originally constructed to lookfor violations of the Test Ban Treaty. These seem to occurabout sixteen times a month and to be roughly uniformlydistributed in direction across the sky. This indicates that theycome from outside the Solar System since otherwise we wouldexpect them to be concentrated toward the plane of the orbitsof the planets. The uniform distribution also indicates that thesources are either fairly near to us in our galaxy or rightoutside it at cosmological distances because otherwise, again,they would be concentrated toward the plane of the galaxy. Inthe latter case, the energy required to account for the burstswould be far too high to have been produced by tiny blackholes, but if the sources were close in galactic terms, it mightbe possible that they were exploding black holes. I would verymuch like this to be the case but I have to recognize thatthere are other possible explanations for the gamma ray bursts,such as colliding neutron stars. New observations in the nextfew years, particularly by gravitational wave detectors like LIGO,should enable us to discover the origin of the gamma raybursts. Even if the search for primordial black holes proves negative,as it seems it may, it will still give us important informationabout the very early stages of the universe. If the earlyuniverse had been chaotic or irregular, or if the pressure ofmatter had been low, one would have expected it to producemany more primordial black holes than the limit already set byour observations of the gamma ray background. Only if theearly universe was very smooth and uniform, with a highpressure, can one explain the absence of observable numbersof primordial black holes. The idea of radiation from black holes was the first exampleof a prediction that depended in an essential way on both thegreat theories of this century, general relativity and quantummechanics. 
It aroused a lot of opposition initially because it upset the existing viewpoint: “How can a black hole emit anything?” When I first announced the results of my calculations at a conference at the Rutherford-Appleton Laboratory near Oxford, I was greeted with general incredulity. At the end of my talk the chairman of the session, John G. Taylor from Kings College, London, claimed it was all nonsense. He even wrote a paper to that effect. However, in the end most people, including John Taylor, have come to the conclusion that black holes must radiate like hot bodies if our other ideas about general relativity and quantum mechanics are correct. Thus, even though we have not yet managed to find a primordial black hole, there is fairly general agreement that if we did, it would have to be emitting a lot of gamma rays and X rays.

The existence of radiation from black holes seems to imply that gravitational collapse is not as final and irreversible as we once thought. If an astronaut falls into a black hole, its mass will increase, but eventually the energy equivalent of that extra mass will be returned to the universe in the form of radiation. Thus, in a sense, the astronaut will be “recycled.” It would be a poor sort of immortality, however, because any personal concept of time for the astronaut would almost certainly come to an end as he was torn apart inside the black hole! Even the types of particles that were eventually emitted by the black hole would in general be different from those that made up the astronaut: the only feature of the astronaut that would survive would be his mass or energy.

The approximations I used to derive the emission from black holes should work well when the black hole has a mass greater than a fraction of a gram. However, they will break down at the end of the black hole’s life when its mass gets very small. The most likely outcome seems to be that the black hole will just disappear, at least from our region of the universe, taking with it the astronaut and any singularity there might be inside it, if indeed there is one. This was the first indication that quantum mechanics might remove the singularities that were predicted by general relativity. However, the methods that I and other people were using in 1974 were not able to answer questions such as whether singularities would occur in quantum gravity. From 1975 onward I therefore started to develop a more powerful approach to quantum gravity based on Richard Feynman’s idea of a sum over histories. The answers that this approach suggests for the origin and fate of the universe and its contents, such as astronauts, will be described in the next two chapters. We shall see that although the uncertainty principle places limitations on the accuracy of all our predictions, it may at the same time remove the fundamental unpredictability that occurs at a space-time singularity.

CHAPTER 8
THE ORIGIN AND FATE OF THE UNIVERSE

Einstein’s general theory of relativity, on its own, predicted that space-time began at the big bang singularity and would come to an end either at the big crunch singularity (if the whole universe recollapsed), or at a singularity inside a black hole (if a local region, such as a star, were to collapse). Any matter that fell into the hole would be destroyed at the singularity, and only the gravitational effect of its mass would continue to be felt outside.
On the other hand, when quantumeffects were taken into account, it seemed that the mass orenergy of the matter would eventually be returned to the restof the universe, and that the black hole, along with anysingularity inside it, would evaporate away and finally disappear. Could quantum mechanics have an equally dramatic effect onthe big bang and big crunch singularities? What really happensduring the very early or late stages of the universe, whengravitational fields are so strong that quantum effects cannot beignored? Does the universe in fact have a beginning or anend? And if so, what are they like? Throughout the 1970s I had been mainly studying blackholes, but in 1981 my interest in questions about the origin andfate of the universe was reawakened when I attended aconference on cosmology organized by the Jesuits in theVatican. The Catholic Church had made a bad mistake withGalileo when it tried to lay down the law on a question ofscience, declaring that the sun went round the earth. Now,centuries later, it had decided to invite a number of experts toadvise it on cosmology. At the end of the conference theparticipants were granted an audience with the Pope. He toldus that it was all right to study the evolution of the universeafter the big bang, but we should not inquire into the big bangitself because that was the moment of Creation and thereforethe work of God. I was glad then that he did not know thesubject of the talk I had just given at the conference - thepossibility that space-time was finite but had no boundary,which means that it had no beginning, no moment of Creation. I had no desire to share the fate of Galileo, with whom I feela strong sense of identity, partly because of the coincidence ofhaving been born exactly 300 years after his death! In order to explain the ideas that I and other people havehad about how quantum mechanics may affect the origin andfate of the universe, it is necessary first to understand thegenerally accepted history of the universe, according to what isknown as the “hot big bang model.” This assumes that theuniverse is described by a Friedmann model, right back to thebig bang. In such models one finds that as the universeexpands, any matter or radiation in it gets cooler. (When theuniverse doubles in size, its temperature falls by half.) Sincetemperature is simply a measure of the average energy - orspeed - of the particles, this cooling of the universe would havea major effect on the matter in it. At very high temperatures,particles would be moving around so fast that they couldescape any attraction toward each other due to nuclear orelectromagnetic forces, but as they cooled off one would expectparticles that attract each other to start to clump together. Moreover, even the types of particles that exist in the universewould depend on the temperature. At high enoughtemperatures, particles have so much energy that wheneverthey collide many different particle/antiparticle pairs would beproduced - and although some of these particles wouldannihilate on hitting antiparticles, they would be produced morerap-idly than they could annihilate. At lower temperatures,however, when colliding particles have less energy,particle/antiparticle pairs would be produced less quickly - andannihilation would become faster than production. At the big bang itself the universe is thought to have hadzero size, and so to have been infinitely hot. But as theuniverse expanded, the temperature of the radiation decreased. 
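The rule stated a little earlier - that the temperature halves each time the universe doubles in size - is the same as saying the radiation temperature falls in inverse proportion to the size of the universe. A short sketch (an added illustration, assuming only that rule and the present background temperature of about 2.7 degrees mentioned elsewhere in the book) shows how hot the radiation must have been when the universe was much smaller:

```python
# Illustrative sketch (not from the text): if the radiation temperature varies
# inversely with the size of the universe, running today's 2.7-degree microwave
# background backwards gives the temperature at earlier, smaller epochs.
T_today = 2.7   # present background temperature, kelvin (approximate)

for shrink_factor in (1e3, 1e6, 1e9):
    print(f"universe 1/{shrink_factor:.0e} of its present size: "
          f"about {T_today * shrink_factor:.1e} K")
```

A few thousand degrees is roughly the temperature at which, as described later in this chapter, electrons and nuclei combine into atoms; thousands of millions of degrees takes us back toward the first seconds after the big bang, described next.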
One second after the big bang, it would have fallen to about ten thousand million degrees. This is about a thousand times the temperature at the center of the sun, but temperatures as high as this are reached in H-bomb explosions. At this time the universe would have contained mostly photons, electrons, and neutrinos (extremely light particles that are affected only by the weak force and gravity) and their antiparticles, together with some protons and neutrons. As the universe continued to expand and the temperature to drop, the rate at which electron/antielectron pairs were being produced in collisions would have fallen below the rate at which they were being destroyed by annihilation. So most of the electrons and antielectrons would have annihilated with each other to produce more photons, leaving only a few electrons left over. The neutrinos and antineutrinos, however, would not have annihilated with each other, because these particles interact with themselves and with other particles only very weakly. So they should still be around today. If we could observe them, it would provide a good test of this picture of a very hot early stage of the universe. Unfortunately, their energies nowadays would be too low for us to observe them directly. However, if neutrinos are not massless, but have a small mass of their own, as suggested by some recent experiments, we might be able to detect them indirectly: they could be a form of “dark matter,” like that mentioned earlier, with sufficient gravitational attraction to stop the expansion of the universe and cause it to collapse again.

About one hundred seconds after the big bang, the temperature would have fallen to one thousand million degrees, the temperature inside the hottest stars. At this temperature protons and neutrons would no longer have sufficient energy to escape the attraction of the strong nuclear force, and would have started to combine together to produce the nuclei of atoms of deuterium (heavy hydrogen), which contain one proton and one neutron. The deuterium nuclei would then have combined with more protons and neutrons to make helium nuclei, which contain two protons and two neutrons, and also small amounts of a couple of heavier elements, lithium and beryllium. One can calculate that in the hot big bang model about a quarter of the protons and neutrons would have been converted into helium nuclei, along with a small amount of heavy hydrogen and other elements. The remaining neutrons would have decayed into protons, which are the nuclei of ordinary hydrogen atoms.

This picture of a hot early stage of the universe was first put forward by the scientist George Gamow in a famous paper written in 1948 with a student of his, Ralph Alpher. Gamow had quite a sense of humor - he persuaded the nuclear scientist Hans Bethe to add his name to the paper to make the list of authors “Alpher, Bethe, Gamow,” like the first three letters of the Greek alphabet, alpha, beta, gamma: particularly appropriate for a paper on the beginning of the universe! In this paper they made the remarkable prediction that radiation (in the form of photons) from the very hot early stages of the universe should still be around today, but with its temperature reduced to only a few degrees above absolute zero (-273°C). It was this radiation that Penzias and Wilson found in 1965. At the time that Alpher, Bethe, and Gamow wrote their paper, not much was known about the nuclear reactions of protons and neutrons.
Predictions made for the proportions of variouselements in the early universe were therefore rather inaccurate,but these calculations have been repeated in the light of betterknowledge and now agree very well with what we observe. Itis, moreover, very difficult to explain in any other way whythere should be so much helium in the universe. We aretherefore fairly confident that we have the right picture, at leastback to about one second after the big bang. Within only a few hours of the big bang, the production ofhelium and other elements would have stopped. And after that,for the next million years or so, the universe would have justcontinued expanding, without anything much happening. Eventually, once the temperature had dropped to a fewthousand degrees, and electrons and nuclei no longer hadenough energy to overcome the electromagnetic attractionbetween them, they would have started combining to formatoms. The universe as a whole would have continuedexpanding and cooling, but in regions that were slightly denserthan average, the expansion would have been slowed down bythe extra gravitational attraction. This would eventually stopexpansion in some regions and cause them to start torecollapse. As they were collapsing, the gravitational pull ofmatter outside these regions might start them rotating slightly. As the collapsing region got smaller, it would spin faster - justas skaters spinning on ice spin faster as they draw in theirarms. Eventually, when the region got small enough, it wouldbe spinning fast enough to balance the attraction of gravity,and in this way disklike rotating galaxies were born. Otherregions, which did not happen to pick up a rotation, wouldbecome oval-shaped objects called elliptical galaxies. In these,the region would stop collapsing because individual parts of thegalaxy would be orbiting stably round its center, but the galaxywould have no overall rotation. As time went on, the hydrogen and helium gas in thegalaxies would break up into smaller clouds that would collapseunder their own gravity. As these contracted, and the atomswithin them collided with one another, the temperature of thegas would increase, until eventually it became hot enough tostart nuclear fusion reactions. These would convert thehydrogen into more helium, and the heat given off would raisethe pressure, and so stop the clouds from contracting anyfurther. They would remain stable in this state for a long timeas stars like our sun, burning hydrogen into helium andradiating the resulting energy as heat and light. More massivestars would need to be hotter to balance their strongergravitational attraction, making the nuclear fusion reactionsproceed so much more rapidly that they would use up theirhydrogen in as little as a hundred million years. They wouldthen contract slightly, and as they heated up further, wouldstart to convert helium into heavier elements like carbon oroxygen. This, however, would not release much more energy,so a crisis would occur, as was described in the chapter onblack holes. What happens next is not completely clear, but itseems likely that the central regions of the star would collapseto a very dense state, such as a neutron star or black hole. The outer regions of the star may sometimes get blown off ina tremendous explosion called a supernova, which wouldoutshine all the other stars in its galaxy. 
Some of the heavierelements produced near the end of the star’s life would beflung back into the gas in the galaxy, and would provide someof the raw material for the next generation of stars. Our ownsun contains about 2 percent of these heavier elements,because it is a second- or third-generation star, formed somefive thousand million years ago out of a cloud of rotating gascontaining the debris of earlier supernovas. Most of the gas inthat cloud went to form the sun or got blown away, but asmall amount of the heavier elements collected together to formthe bodies that now orbit the sun as planets like the earth. The earth was initially very hot and without an atmosphere. In the course of time it cooled and acquired an atmospherefrom the emission of gases from the rocks. This earlyatmosphere was not one in which we could have survived. Itcontained no oxygen, but a lot of other gases that arepoisonous to us, such as hydrogen sulfide (the gas that givesrotten eggs their smell). There are, however, other primitiveforms of life that can flourish under such conditions. It isthought that they developed in the oceans, possibly as a resultof chance combinations of atoms into large structures, calledmacromolecules, which were capable of assembling other atomsin the ocean into similar structures. They would thus havereproduced themselves and multiplied. In some cases therewould be errors in the reproduction. Mostly these errors wouldhave been such that the new macromolecule could notreproduce itself and eventually would have been destroyed. However, a few of the errors would have produced newmacromolecules that were even better at reproducingthemselves. They would have therefore had an advantage andwould have tended to replace the original macromolecules. Inthis way a process of evolution was started that led to thedevelopment of more and more complicated, self-reproducingorganisms. The first primitive forms of life consumed variousmaterials, including hydrogen sulfide, and released oxygen. Thisgradually changed the atmosphere to the composition that ithas today, and allowed the development of higher forms of lifesuch as fish, reptiles, mammals, and ultimately the human race. This picture of a universe that started off very hot andcooled as it expanded is in agreement with all the observationalevidence that we have today. Nevertheless, it leaves a numberof important questions unanswered: 1. Why was the early universe so hot? 2. Why is the universe so uniform on a large scale? Whydoes it look the same at all points of space and in alldirections? In particular, why is the temperature of themicrowave back-ground radiation so nearly the same when welook in different directions? It is a bit like asking a number ofstudents an exam question. If they all give exactly the sameanswer, you can be pretty sure they have communicated witheach other. Yet, in the model described above, there would nothave been time since the big bang for light to get from onedistant region to another, even though the regions were closetogether in the early universe. According to the theory ofrelativity, if light cannot get from one region to another, noother information can. So there would be no way in whichdifferent regions in the early universe could have come to havethe same temperature as each other, unless for someunexplained reason they happened to start out with the sametemperature. 3. 
Why did the universe start out with so nearly the criticalrate of expansion that separates models that recollapse fromthose that go on expanding forever, that even now, tenthousand million years later, it is still expanding at nearly thecritical rate? If the rate of expansion one second after the bigbang had been smaller by even one part in a hundredthousand million million, the universe would have recollapsedbefore it ever reached its present size. 4. Despite the fact that the universe is so uniform andhomogeneous on a large scale, it contains local irregularities,such as stars and galaxies. These are thought to havedeveloped from small differences in the density of the earlyuniverse from one region to another. What was the origin ofthese density fluctuations? The general theory of relativity, on its own, cannot explainthese features or answer these questions because of itsprediction that the universe started off with infinite density atthe big bang singularity. At the singularity, general relativity andall other physical laws would break down: one couldn’t predictwhat would come out of the singularity. As explained before,this means that one might as well cut the big bang, and anyevents before it, out of the theory, because they can have noeffect on what we observe. Space-time would have a boundary- a beginning at the big bang. Science seems to have uncovered a set of laws that, withinthe limits set by the uncertainty principle, tell us how theuniverse will develop with time, if we know its state at any onetime. These laws may have originally been decreed by God, butit appears that he has since left the universe to evolveaccording to them and does not now intervene in it. But howdid he choose the initial state or configuration of the universe? What were the “boundary conditions” at the beginning of time? One possible answer is to say that God chose the initialconfiguration of the universe for reasons that we cannot hopeto understand. This would certainly have been within the powerof an omnipotent being, but if he had started it off in such anincomprehensible way, why did he choose to let it evolveaccording to laws that we could understand? The whole historyof science has been the gradual realization that events do nothappen in an arbitrary manner, but that they reflect a certainunderlying order, which may or may not be divinely inspired. Itwould be only natural to suppose that this order should applynot only to the laws, but also to the conditions at theboundary of space-time that specify the initial state of theuniverse. There may be a large number of models of theuniverse with different initial conditions that all obey the laws. There ought to be some principle that picks out one initialstate, and hence one model, to represent our universe. One such possibility is what are called chaotic boundaryconditions. These implicitly assume either that the universe isspatially infinite or that there are infinitely many universes. Under chaotic boundary conditions, the probability of findingany particular region of space in any given configuration justafter the big bang is the same, in some sense, as theprobability of finding it in any other configuration: the initialstate of the universe is chosen purely randomly. This wouldmean that the early universe would have probably been verychaotic and irregular because there are many more chaotic anddisordered configurations for the universe than there aresmooth and ordered ones. 
(If each configuration is equally probable, it is likely that the universe started out in a chaotic and disordered state, simply because there are so many more of them.) It is difficult to see how such chaotic initial conditions could have given rise to a universe that is so smooth and regular on a large scale as ours is today. One would also have expected the density fluctuations in such a model to have led to the formation of many more primordial black holes than the upper limit that has been set by observations of the gamma ray background.

If the universe is indeed spatially infinite, or if there are infinitely many universes, there would probably be some large regions somewhere that started out in a smooth and uniform manner. It is a bit like the well-known horde of monkeys hammering away on typewriters - most of what they write will be garbage, but very occasionally by pure chance they will type out one of Shakespeare’s sonnets. Similarly, in the case of the universe, could it be that we are living in a region that just happens by chance to be smooth and uniform? At first sight this might seem very improbable, because such smooth regions would be heavily outnumbered by chaotic and irregular regions. However, suppose that only in the smooth regions were galaxies and stars formed and were conditions right for the development of complicated self-replicating organisms like ourselves who were capable of asking the question: why is the universe so smooth? This is an example of the application of what is known as the anthropic principle, which can be paraphrased as “We see the universe the way it is because we exist.”

There are two versions of the anthropic principle, the weak and the strong. The weak anthropic principle states that in a universe that is large or infinite in space and/or time, the conditions necessary for the development of intelligent life will be met only in certain regions that are limited in space and time. The intelligent beings in these regions should therefore not be surprised if they observe that their locality in the universe satisfies the conditions that are necessary for their existence. It is a bit like a rich person living in a wealthy neighborhood not seeing any poverty.

One example of the use of the weak anthropic principle is to “explain” why the big bang occurred about ten thousand million years ago - it takes about that long for intelligent beings to evolve. As explained above, an early generation of stars first had to form. These stars converted some of the original hydrogen and helium into elements like carbon and oxygen, out of which we are made. The stars then exploded as supernovas, and their debris went to form other stars and planets, among them those of our Solar System, which is about five thousand million years old. The first one or two thousand million years of the earth’s existence were too hot for the development of anything complicated. The remaining three thousand million years or so have been taken up by the slow process of biological evolution, which has led from the simplest organisms to beings who are capable of measuring time back to the big bang.

Few people would quarrel with the validity or utility of the weak anthropic principle. Some, however, go much further and propose a strong version of the principle. According to this theory, there are either many different universes or many different regions of a single universe, each with its own initial configuration and, perhaps, with its own set of laws of science.
In most of these universes the conditions would not be right for the development of complicated organisms; only in the few universes that are like ours would intelligent beings develop and ask the question, “Why is the universe the way we see it?” The answer is then simple: if it had been different, we would not be here!

The laws of science, as we know them at present, contain many fundamental numbers, like the size of the electric charge of the electron and the ratio of the masses of the proton and the electron. We cannot, at the moment at least, predict the values of these numbers from theory - we have to find them by observation. It may be that one day we shall discover a complete unified theory that predicts them all, but it is also possible that some or all of them vary from universe to universe or within a single universe. The remarkable fact is that the values of these numbers seem to have been very finely adjusted to make possible the development of life. For example, if the electric charge of the electron had been only slightly different, stars either would have been unable to burn hydrogen and helium, or else they would not have exploded. Of course, there might be other forms of intelligent life, not dreamed of even by writers of science fiction, that did not require the light of a star like the sun or the heavier chemical elements that are made in stars and are flung back into space when the stars explode. Nevertheless, it seems clear that there are relatively few ranges of values for the numbers that would allow the development of any form of intelligent life. Most sets of values would give rise to universes that, although they might be very beautiful, would contain no one able to wonder at that beauty. One can take this either as evidence of a divine purpose in Creation and the choice of the laws of science or as support for the strong anthropic principle.

There are a number of objections that one can raise to the strong anthropic principle as an explanation of the observed state of the universe. First, in what sense can all these different universes be said to exist? If they are really separate from each other, what happens in another universe can have no observable consequences in our own universe. We should therefore use the principle of economy and cut them out of the theory. If, on the other hand, they are just different regions of a single universe, the laws of science would have to be the same in each region, because otherwise one could not move continuously from one region to another. In this case the only difference between the regions would be their initial configurations and so the strong anthropic principle would reduce to the weak one.

A second objection to the strong anthropic principle is that it runs against the tide of the whole history of science. We have developed from the geocentric cosmologies of Ptolemy and his forebears, through the heliocentric cosmology of Copernicus and Galileo, to the modern picture in which the earth is a medium-sized planet orbiting around an average star in the outer suburbs of an ordinary spiral galaxy, which is itself only one of about a million million galaxies in the observable universe. Yet the strong anthropic principle would claim that this whole vast construction exists simply for our sake. This is very hard to believe. Our Solar System is certainly a prerequisite for our existence, and one might extend this to the whole of our galaxy to allow for an earlier generation of stars that created the heavier elements.
But there does not seem to be any need for all those other galaxies, nor for the universe to be so uniform and similar in every direction on the large scale.

One would feel happier about the anthropic principle, at least in its weak version, if one could show that quite a number of different initial configurations for the universe would have evolved to produce a universe like the one we observe. If this is the case, a universe that developed from some sort of random initial conditions should contain a number of regions that are smooth and uniform and are suitable for the evolution of intelligent life. On the other hand, if the initial state of the universe had to be chosen extremely carefully to lead to something like what we see around us, the universe would be unlikely to contain any region in which life would appear. In the hot big bang model described above, there was not enough time in the early universe for heat to have flowed from one region to another. This means that the initial state of the universe would have to have had exactly the same temperature everywhere in order to account for the fact that the microwave background has the same temperature in every direction we look. The initial rate of expansion also would have had to be chosen very precisely for the rate of expansion still to be so close to the critical rate needed to avoid recollapse. This means that the initial state of the universe must have been very carefully chosen indeed if the hot big bang model was correct right back to the beginning of time. It would be very difficult to explain why the universe should have begun in just this way, except as the act of a God who intended to create beings like us.

In an attempt to find a model of the universe in which many different initial configurations could have evolved to something like the present universe, a scientist at the Massachusetts Institute of Technology, Alan Guth, suggested that the early universe might have gone through a period of very rapid expansion. This expansion is said to be “inflationary,” meaning that the universe at one time expanded at an increasing rate rather than the decreasing rate that it does today. According to Guth, the radius of the universe increased by a million million million million million (1 with thirty zeros after it) times in only a tiny fraction of a second.

Guth suggested that the universe started out from the big bang in a very hot, but rather chaotic, state. These high temperatures would have meant that the particles in the universe would be moving very fast and would have high energies. As we discussed earlier, one would expect that at such high temperatures the strong and weak nuclear forces and the electromagnetic force would all be unified into a single force. As the universe expanded, it would cool, and particle energies would go down. Eventually there would be what is called a phase transition and the symmetry between the forces would be broken: the strong force would become different from the weak and electromagnetic forces. One common example of a phase transition is the freezing of water when you cool it down. Liquid water is symmetrical, the same at every point and in every direction. However, when ice crystals form, they will have definite positions and will be lined up in some direction. This breaks water’s symmetry.

In the case of water, if one is careful, one can “supercool” it: that is, one can reduce the temperature below the freezing point (0°C) without ice forming.
Guth suggested that theuniverse might behave in a similar way: the temperature mightdrop below the critical value without the symmetry between theforces being broken. If this happened, the universe would be inan unstable state, with more energy than if the symmetry hadbeen broken. This special extra energy can be shown to havean antigravitational effect: it would have acted just like thecosmological constant that Einstein introduced into generalrelativity when he was trying to construct a static model of theuniverse. Since the universe would already be expanding just asin the hot big bang model, the repulsive effect of thiscosmological constant would therefore have made the universeexpand at an ever-increasing rate. Even in regions where therewere more matter particles than average, the gravitationalattraction of the matter would have been outweighed by therepulsion of the effective cosmological constant. Thus theseregions would also expand in an accelerating inflationarymanner. As they expanded and the matter particles got fartherapart, one would be left with an expanding universe thatcontained hardly any particles and was still in the supercooledstate. Any irregularities in the universe would simply have beensmoothed out by the expansion, as the wrinkles in a balloonare smoothed away when you blow it up. Thus the presentsmooth and uniform state of the universe could have evolvedfrom many different non-uniform initial states. In such a universe, in which the expansion was acceleratedby a cosmological constant rather than slowed down by thegravitational attraction of matter, there would be enough timefor light to travel from one region to another in the earlyuniverse. This could provide a solution to the problem, raisedearlier, of why different regions in the early universe have thesame properties. Moreover, the rate of expansion of theuniverse would automatically become very close to the criticalrate determined by the energy density of the universe. Thiscould then explain why the rate of expansion is still so close tothe critical rate, without having to assume that the initial rate ofexpansion of the universe was very carefully chosen. The idea of inflation could also explain why there is so muchmatter in the universe. There are something like ten millionmillion million million million million million million million millionmillion million million million (1 with eighty zeros after it)particles in the region of the universe that we can observe. Where did they all come from? The answer is that, in quantumtheory, particles can be created out of energy in the form ofparticle/antiparticle pairs. But that just raises the question ofwhere the energy came from. The answer is that the totalenergy of the universe is exactly zero. The matter in theuniverse is made out of positive energy. However, the matter isall attracting itself by gravity. Two pieces of matter that areclose to each other have less energy than the same two piecesa long way apart, because you have to expend energy toseparate them against the gravitational force that is pulling themtogether. Thus, in a sense, the gravitational field has negativeenergy. In the case of a universe that is approximately uniformin space, one can show that this negative gravitational energyexactly cancels the positive energy represented by the matter. So the total energy of the universe is zero. Now twice zero is also zero. 
Thus the universe can doublethe amount of positive matter energy and also double thenegative gravitational energy without violation of the conservationof energy. This does not happen in the normal expansion ofthe universe in which the matter energy density goes down asthe universe gets bigger. It does happen, however, in theinflationary expansion because the energy density of thesupercooled state remains constant while the universe expands: when the universe doubles in size, the positive matter energyand the negative gravitational energy both double, so the totalenergy remains zero. During the inflationary phase, the universeincreases its size by a very large amount. Thus the totalamount of energy available to make particles becomes verylarge. As Guth has remarked, “It is said that there’s no suchthing as a free lunch. But the universe is the ultimate freelunch.” The universe is not expanding in an inflationary way today. Thus there has to be some mechanism that would eliminate thevery large effective cosmological constant and so change therate of expansion from an accelerated one to one that isslowed down by gravity, as we have today. In the inflationaryexpansion one might expect that eventually the symmetrybetween the forces would be broken, just as super-cooled wateralways freezes in the end. The extra energy of the unbrokensymmetry state would then be released and would reheat theuniverse to a temperature just below the critical temperaturefor symmetry between the forces. The universe would then goon to expand and cool just like the hot big bang model, butthere would now be an explanation of why the universe wasexpanding at exactly the critical rate and why different regionshad the same temperature. In Guth’s original proposal the phase transition was supposedto occur suddenly, rather like the appearance of ice crystals invery cold water. The idea was that “bubbles” of the new phaseof broken symmetry would have formed in the old phase, likebubbles of steam surrounded by boiling water. The bubbleswere supposed to expand and meet up with each other untilthe whole universe was in the new phase. The trouble was, asI and several other people pointed out, that the universe wasexpanding so fast that even if the bubbles grew at the speedof light, they would be moving away from each other and socould not join up. The universe would be left in a verynon-uniform state, with some regions still having symmetrybetween the different forces. Such a model of the universewould not correspond to what we see. In October 1981, I went to Moscow for a conference onquantum gravity. After the conference I gave a seminar on theinflationary model and its problems at the SternbergAstronomical Institute. Before this, I had got someone else togive my lectures for me, because most people could notunderstand my voice. But there was not time to prepare thisseminar, so I gave it myself, with one of my graduate studentsrepeating my words. It worked well, and gave me much morecontact with my audience. In the audience was a youngRussian, Andrei Linde, from the Lebedev Institute in Moscow. He said that the difficulty with the bubbles not joining up couldbe avoided if the bubbles were so big that our region of theuniverse is all contained inside a single bubble. In order for thisto work, the change from symmetry to broken symmetry musthave taken place very slowly inside the bubble, but this is quitepossible according to grand unified theories. 
Linde’s idea of a slow breaking of symmetry was very good, but I later realized that his bubbles would have to have been bigger than the size of the universe at the time! I showed that instead the symmetry would have broken everywhere at the same time, rather than just inside bubbles. This would lead to a uniform universe, as we observe. I was very excited by this idea and discussed it with one of my students, Ian Moss. As a friend of Linde’s, I was rather embarrassed, however, when I was later sent his paper by a scientific journal and asked whether it was suitable for publication. I replied that there was this flaw about the bubbles being bigger than the universe, but that the basic idea of a slow breaking of symmetry was very good. I recommended that the paper be published as it was because it would take Linde several months to correct it, since anything he sent to the West would have to be passed by Soviet censorship, which was neither very skillful nor very quick with scientific papers. Instead, I wrote a short paper with Ian Moss in the same journal in which we pointed out this problem with the bubble and showed how it could be resolved.

The day after I got back from Moscow I set out for Philadelphia, where I was due to receive a medal from the Franklin Institute. My secretary, Judy Fella, had used her not inconsiderable charm to persuade British Airways to give herself and me free seats on a Concorde as a publicity venture. However, I was held up on my way to the airport by heavy rain and I missed the plane. Nevertheless, I got to Philadelphia in the end and received my medal. I was then asked to give a seminar on the inflationary universe at Drexel University in Philadelphia. I gave the same seminar about the problems of the inflationary universe, just as in Moscow.

A very similar idea to Linde’s was put forth independently a few months later by Paul Steinhardt and Andreas Albrecht of the University of Pennsylvania. They are now given joint credit with Linde for what is called “the new inflationary model,” based on the idea of a slow breaking of symmetry. (The old inflationary model was Guth’s original suggestion of fast symmetry breaking with the formation of bubbles.)

The new inflationary model was a good attempt to explain why the universe is the way it is. However, I and several other people showed that, at least in its original form, it predicted much greater variations in the temperature of the microwave background radiation than are observed. Later work has also cast doubt on whether there could be a phase transition in the very early universe of the kind required. In my personal opinion, the new inflationary model is now dead as a scientific theory, although a lot of people do not seem to have heard of its demise and are still writing papers as if it were viable. A better model, called the chaotic inflationary model, was put forward by Linde in 1983. In this there is no phase transition or supercooling. Instead, there is a spin-0 field, which, because of quantum fluctuations, would have large values in some regions of the early universe. The energy of the field in those regions would behave like a cosmological constant. It would have a repulsive gravitational effect, and thus make those regions expand in an inflationary manner. As they expanded, the energy of the field in them would slowly decrease until the inflationary expansion changed to an expansion like that in the hot big bang model. One of these regions would become what we now see as the observable universe.
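The behaviour just described - a field whose energy acts like a cosmological constant while the field is large, driving accelerating expansion that tails off as the field runs down - can be sketched numerically. The Python fragment below is only a rough illustration under its own assumptions (a simple half m squared phi squared potential, units in which the reduced Planck mass is 1, and arbitrarily chosen values for the mass m and the starting field value); it is not Linde’s calculation, merely the mechanism in miniature.

```python
# A minimal numerical sketch of "chaotic" inflation driven by a spin-0 field.
# Assumptions: V = (1/2) m^2 phi^2, reduced Planck units, illustrative numbers.
import math

m, phi, dphi = 1e-6, 15.0, 0.0   # field mass, initial field value, field velocity
dt, efolds = 1e3, 0.0            # time step, accumulated e-folds of expansion

def V(p):  return 0.5 * m * m * p * p
def dV(p): return m * m * p

while phi > 0 and efolds < 70:
    rho = 0.5 * dphi * dphi + V(phi)      # energy density of the field
    H = math.sqrt(rho / 3.0)              # Friedmann equation for a flat region
    ddphi = -3.0 * H * dphi - dV(phi)     # field equation with Hubble friction
    phi += dphi * dt
    dphi += ddphi * dt
    efolds += H * dt                      # d(ln size) = H dt

print(f"e-folds of inflationary expansion before the field runs down: {efolds:.0f}")
# While phi is large, rho is nearly constant, so H is nearly constant and the
# region grows exponentially, as if there were a cosmological constant. As phi
# slowly decreases, inflation ends and ordinary expansion can take over.
```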
This model has all theadvantages of the earlier inflationary models, but it does notdepend on a dubious phase transition, and it can moreovergive a reasonable size for the fluctuations in the temperature ofthe microwave background that agrees with observation. This work on inflationary models showed that the presentstate of the universe could have arisen from quite a largenumber of different initial configurations. This is important,because it shows that the initial state of the part of theuniverse that we inhabit did not have to be chosen with greatcare. So we may, if we wish, use the weak anthropic principleto explain why the universe looks the way it does now. Itcannot be the case, however, that every initial configurationwould have led to a universe like the one we observe. One canshow this by considering a very different state for the universeat the present time, say, a very lumpy and irregular one. Onecould use the laws of science to evolve the universe back intime to determine its configuration at earlier times. According tothe singularity theorems of classical general relativity, therewould still have been a big bang singularity. If you evolve sucha universe forward in time according to the laws of science,you will end up with the lumpy and irregular state you startedwith. Thus there must have been initial configurations thatwould not have given rise to a universe like the one we seetoday. So even the inflationary model does not tell us why theinitial configuration was not such as to produce something verydifferent from what we observe. Must we turn to the anthropicprinciple for an explanation? Was it all just a lucky chance? That would seem a counsel of despair, a negation of all ourhopes of understanding the underlying order of the universe. In order to predict how the universe should have started off,one needs laws that hold at the beginning of time. If theclassical theory of general relativity was correct, the singularitytheorems that Roger Penrose and I proved show that thebeginning of time would have been a point of infinite densityand infinite curvature of space-time. All the known laws ofscience would break down at such a point. One might supposethat there were new laws that held at singularities, but it wouldbe very difficult even to formulate such laws at such badlybehaved points, and we would have no guide from observationsas to what those laws might be. However, what the singularitytheorems really indicate is that the gravitational field becomes sostrong that quantum gravitational effects become important: classical theory is no longer a good description of the universe. So one has to use a quantum theory of gravity to discuss thevery early stages of the universe. As we shall see, it is possiblein the quantum theory for the ordinary laws of science to holdeverywhere, including at the beginning of time: it is notnecessary to postulate new laws for singularities, because thereneed not be any singularities in the quantum theory. We don’t yet have a complete and consistent theory thatcombines quantum mechanics and gravity. However, we arefairly certain of some features that such a unified theory shouldhave. One is that it should incorporate Feynman’s proposal toformulate quantum theory in terms of a sum over histories. Inthis approach, a particle does not have just a single history, asit would in a classical theory. 
Instead, it is supposed to followevery possible path in space-time, and with each of thesehistories there are associated a couple of numbers, onerepresent-ing the size of a wave and the other representing itsposition in the cycle (its phase). The probability that theparticle, say, passes through some particular point is found byadding up the waves associated with every possible history thatpasses through that point. When one actually tries to performthese sums, however, one runs into severe technical problems. The only way around these is the following peculiarprescription: one must add up the waves for particle historiesthat are not in the “real” time that you and I experience buttake place in what is called imaginary time. Imaginary time maysound like science fiction but it is in fact a well-definedmathematical concept. If we take any ordinary (or “real”)number and multiply it by itself, the result is a positivenumber. (For example, 2 times 2 is 4, but so is - 2 times -2.) There are, however, special numbers (called imaginarynumbers) that give negative numbers when multiplied bythemselves. (The one called i, when multiplied by itself, gives -1, 2i multiplied by itself gives - 4, and so on.)One can picture real and imaginary numbers in the followingway: The real numbers can be represented by a line goingfrom left to right, with zero in the middle, negative numberslike - 1, - 2, etc. on the left, and positive numbers, 1, 2, etc. on the right. Then imaginary numbers are represented by aline going up and down the page, with i, 2i, etc. above themiddle, and - i, - 2i, etc. below. Thus imaginary numbers arein a sense numbers at right angles to ordinary real numbers. To avoid the technical difficulties with Feynman’s sum overhistories, one must use imaginary time. That is to say, for thepurposes of the calculation one must measure time usingimaginary numbers, rather than real ones. This has aninteresting effect on space-time: the distinction between timeand space disappears completely. A space-time in which eventshave imaginary values of the time coordinate is said to beEuclidean, after the ancient Greek Euclid, who founded thestudy of the geometry of two-dimensional surfaces. What wenow call Euclidean space-time is very similar except that it hasfour dimensions instead of two. In Euclidean space-time there isno difference between the time direction and directions inspace. On the other hand, in real space-time, in which eventsare labeled by ordinary, real values of the time coordinate, it iseasy to tell the difference - the time direction at all points lieswithin the light cone, and space directions lie outside. In anycase, as far as everyday quantum mechanics is concerned, wemay regard our use of imaginary time and Euclideanspace-time as merely a mathematical device (or trick) tocalculate answers about real space-time. A second feature that we believe must be part of anyultimate theory is Einstein’s idea that the gravitational field isrepresented by curved space-time: particles try to follow thenearest thing to a straight path in a curved space, but becausespace-time is not flat their paths appear to be bent, as if by agravitational field. When we apply Feynman’s sum over historiesto Einstein’s view of gravity, the analogue of the history of aparticle is now a complete curved space-time that representsthe history of the whole universe. To avoid the technicaldifficulties in actually performing the sum over histories, thesecurved space-times must be taken to be Euclidean. 
That is, time is imaginary and is indistinguishable from directions in space. To calculate the probability of finding a real space-time with some certain property, such as looking the same at every point and in every direction, one adds up the waves associated with all the histories that have that property.

In the classical theory of general relativity, there are many different possible curved space-times, each corresponding to a different initial state of the universe. If we knew the initial state of our universe, we would know its entire history. Similarly, in the quantum theory of gravity, there are many different possible quantum states for the universe. Again, if we knew how the Euclidean curved space-times in the sum over histories behaved at early times, we would know the quantum state of the universe.

In the classical theory of gravity, which is based on real space-time, there are only two possible ways the universe can behave: either it has existed for an infinite time, or else it had a beginning at a singularity at some finite time in the past. In the quantum theory of gravity, on the other hand, a third possibility arises. Because one is using Euclidean space-times, in which the time direction is on the same footing as directions in space, it is possible for space-time to be finite in extent and yet to have no singularities that formed a boundary or edge. Space-time would be like the surface of the earth, only with two more dimensions. The surface of the earth is finite in extent but it doesn’t have a boundary or edge: if you sail off into the sunset, you don’t fall off the edge or run into a singularity. (I know, because I have been round the world!)

If Euclidean space-time stretches back to infinite imaginary time, or else starts at a singularity in imaginary time, we have the same problem as in the classical theory of specifying the initial state of the universe: God may know how the universe began, but we cannot give any particular reason for thinking it began one way rather than another. On the other hand, the quantum theory of gravity has opened up a new possibility, in which there would be no boundary to space-time and so there would be no need to specify the behavior at the boundary. There would be no singularities at which the laws of science broke down, and no edge of space-time at which one would have to appeal to God or some new law to set the boundary conditions for space-time. One could say: “The boundary condition of the universe is that it has no boundary.” The universe would be completely self-contained and not affected by anything outside itself. It would neither be created nor destroyed. It would just BE.

It was at the conference in the Vatican mentioned earlier that I first put forward the suggestion that maybe time and space together formed a surface that was finite in size but did not have any boundary or edge. My paper was rather mathematical, however, so its implications for the role of God in the creation of the universe were not generally recognized at the time (just as well for me). At the time of the Vatican conference, I did not know how to use the “no boundary” idea to make predictions about the universe. However, I spent the following summer at the University of California, Santa Barbara. There a friend and colleague of mine, Jim Hartle, worked out with me what conditions the universe must satisfy if space-time had no boundary. When I returned to Cambridge, I continued this work with two of my research students, Julian Luttrel and Jonathan Halliwell.
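The “sum over histories” described above can be made a little more concrete with a toy computation. The Python sketch below rests on its own assumptions - a few thousand random one-dimensional paths and a crude discretised action - and is not any real quantum-gravity calculation. It only shows the two ingredients mentioned earlier: each history contributes a wave of unit size whose phase is set by its action, and switching to imaginary time turns those oscillating waves into decaying weights that are far easier to add up.

```python
# Toy illustration only: adding up "waves" over random histories.
# In real time each history contributes exp(i * S), a wave with a size and a
# phase; in imaginary (Euclidean) time it contributes exp(-S), a decaying weight.
import cmath, math, random

random.seed(0)
STEPS, PATHS = 20, 2000

def action(path):
    # Crude discretised action for a free particle: sum of squared "velocities".
    return sum((path[i + 1] - path[i]) ** 2 for i in range(len(path) - 1))

real_time_sum = 0 + 0j
euclidean_sum = 0.0

for _ in range(PATHS):
    # A random history running from x = 0 to x = 1 in STEPS steps.
    path = [0.0] + [random.uniform(0, 1) for _ in range(STEPS - 1)] + [1.0]
    S = action(path)
    real_time_sum += cmath.exp(1j * S)   # oscillating wave: size 1, phase S
    euclidean_sum += math.exp(-S)        # imaginary time: positive, decaying weight

print("size of the real-time sum of waves:", abs(real_time_sum))
print("imaginary-time (Euclidean) sum:    ", euclidean_sum)
```

The oscillating real-time waves interfere and partly cancel one another, while every Euclidean term is positive, with the smoothest histories contributing most - a caricature of why the imaginary-time prescription makes such sums tractable.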
I’d like to emphasize that this idea that time and space should be finite “without boundary” is just a proposal: it cannot be deduced from some other principle. Like any other scientific theory, it may initially be put forward for aesthetic or metaphysical reasons, but the real test is whether it makes predictions that agree with observation. This, however, is difficult to determine in the case of quantum gravity, for two reasons. First, as will be explained in Chapter 11, we are not yet sure exactly which theory successfully combines general relativity and quantum mechanics, though we know quite a lot about the form such a theory must have. Second, any model that described the whole universe in detail would be much too complicated mathematically for us to be able to calculate exact predictions. One therefore has to make simplifying assumptions and approximations - and even then, the problem of extracting predictions remains a formidable one.

Each history in the sum over histories will describe not only the space-time but everything in it as well, including any complicated organisms like human beings who can observe the history of the universe. This may provide another justification for the anthropic principle, for if all the histories are possible, then so long as we exist in one of the histories, we may use the anthropic principle to explain why the universe is found to be the way it is. Exactly what meaning can be attached to the other histories, in which we do not exist, is not clear. This view of a quantum theory of gravity would be much more satisfactory, however, if one could show that, using the sum over histories, our universe is not just one of the possible histories but one of the most probable ones. To do this, we must perform the sum over histories for all possible Euclidean space-times that have no boundary.

Under the “no boundary” proposal one learns that the chance of the universe being found to be following most of the possible histories is negligible, but there is a particular family of histories that are much more probable than the others. These histories may be pictured as being like the surface of the earth, with the distance from the North Pole representing imaginary time and the size of a circle of constant distance from the North Pole representing the spatial size of the universe. The universe starts at the North Pole as a single point. As one moves south, the circles of latitude at constant distance from the North Pole get bigger, corresponding to the universe expanding with imaginary time (Fig. 8.1). The universe would reach a maximum size at the equator and would contract with increasing imaginary time to a single point at the South Pole. Even though the universe would have zero size at the North and South Poles, these points would not be singularities, any more than the North and South Poles on the earth are singular. The laws of science will hold at them, just as they do at the North and South Poles on the earth.

The history of the universe in real time, however, would look very different. About ten or twenty thousand million years ago, it would have had a minimum size, which was equal to the maximum radius of the history in imaginary time. At later real times, the universe would expand like the chaotic inflationary model proposed by Linde (but one would not now have to assume that the universe was created somehow in the right sort of state). The universe would expand to a very large size (Fig. 8.1) and eventually it would collapse again into what looks like a singularity in real time.
Thus, in a sense, we are still alldoomed, even if we keep away from black holes. Only if wecould picture the universe in terms of imaginary time wouldthere be no singularities. If the universe really is in such a quantum state, therewould be no singularities in the history of the universe inimaginary time. It might seem therefore that my more recentwork had completely undone the results of my earlier work onsingularities. But, as indicated above, the real importance of thesingularity theorems was that they showed that the gravitationalfield must become so strong that quantum gravitational effectscould not be ignored. This in turn led to the idea that theuniverse could be finite in imaginary time but withoutboundaries or singularities. When one goes back to the realtime in which we live, however, there will still appear to besingularities. The poor astronaut who falls into a black hole willstill come to a sticky end; only if he lived in imaginary timewould he encounter no singularities. This might suggest that the so-called imaginary time is reallythe real time, and that what we call real time is just a figmentof our imaginations. In real time, the universe has a beginningand an end at singularities that form a boundary to space-timeand at which the laws of science break down. But in imaginarytime, there are no singularities or boundaries. So maybe whatwe call imaginary time is really more basic, and what we callreal is just an idea that we invent to help us describe what wethink the universe is like. But according to the approach Idescribed in Chapter 1, a scientific theory is just a mathematicalmodel we make to describe our observations: it exists only inour minds. So it is meaningless to ask: which is real, “real” or“imaginary” time? It is simply a matter of which is the moreuseful description. One can also use the sum over histories, along with the noboundary proposal, to find which properties of the universe arelikely to occur together. For example, one can calculate theprobability that the universe is expanding at nearly the samerate in all different directions at a time when the density of theuniverse has its present value. In the simplified models thathave been examined so far, this probability turns out to behigh; that is, the proposed no boundary condition leads to theprediction that it is extremely probable that the present rate ofexpansion of the universe is almost the same in each direction. This is consistent with the observations of the microwavebackground radiation, which show that it has almost exactly thesame intensity in any direction. If the universe were expandingfaster in some directions than in others, the intensity of theradiation in those directions would be reduced by an additionalred shift. Further predictions of the no boundary condition arecurrently being worked out. A particularly interesting problem isthe size of the small departures from uniform density in theearly universe that caused the formation first of the galaxies,then of stars, and finally of us. The uncertainty principle impliesthat the early universe cannot have been completely uniformbecause there must have been some uncertainties orfluctuations in the positions and velocities of the particles. Usingthe no boundary condition, we find that the universe must infact have started off with just the minimum possiblenon-uniformity allowed by the uncertainty principle. Theuniverse would have then undergone a period of rapidexpansion, as in the inflationary models. 
During this period, theinitial non-uniformities would have been amplified until theywere big enough to explain the origin of the structures weobserve around us. In 1992 the Cosmic Background Explorersatellite (COBE) first detected very slight variations in theintensity of the microwave background with direction. The waythese non-uniformities depend on direction seems to agree withthe predictions of the inflationary model and the no boundaryproposal. Thus the no boundary proposal is a good scientifictheory in the sense of Karl Popper: it could have been falsifiedby observations but instead its predictions have been confirmed. In an expanding universe in which the density of matter variedslightly from place to place, gravity would have caused thedenser regions to slow down their expansion and startcontracting. This would lead to the formation of galaxies, stars,and eventually even insignificant creatures like ourselves. Thusall the complicated structures that we see in the universe mightbe explained by the no boundary condition for the universetogether with the uncertainty principle of quantum mechanics. The idea that space and time may form a closed surfacewithout boundary also has profound implications for the role ofGod in the affairs of the universe. With the success of scientifictheories in describing events, most people have come to believethat God allows the universe to evolve according to a set oflaws and does not intervene in the universe to break theselaws. However, the laws do not tell us what the universeshould have looked like when it started - it would still be upto God to wind up the clockwork and choose how to start itoff. So long as the universe had a beginning, we could supposeit had a creator. But if the universe is really completelyself-contained, having no boundary or edge, it would haveneither beginning nor end: it would simply be. What place,then, for a creator? CHAPTER 9 THE ARROW OF TIME In previous chapters we have seen how our views of thenature of time have changed over the years. Up to thebeginning of this century people believed in an absolute time. That is, each event could be labeled by a number called “time” in a unique way, and all good clocks would agree on the timeinterval between two events. However, the discovery that thespeed of light appeared the same to every observer, no matterhow he was moving, led to the theory of relativity - and inthat one had to abandon the idea that there was a uniqueabsolute time. Instead, each observer would have his ownmeasure of time as recorded by a clock that he carried: clockscarried by different observers would not necessarily agree. Thustime became a more personal concept, relative to the observerwho measured it. When one tried to unify gravity with quantum mechanics,one had to introduce the idea of “imaginary” time. Imaginarytime is indistinguishable from directions in space. If one can gonorth, one can turn around and head south; equally, if onecan go forward in imaginary time, one ought to be able toturn round and go backward. This means that there can beno important difference between the forward and backwarddirections of imaginary time. On the other hand, when onelooks at “real” time, there’s a very big difference between theforward and backward directions, as we all know. Where doesthis difference between the past and the future come from? Why do we remember the past but not the future? The laws of science do not distinguish between the past andthe future. 
More precisely, as explained earlier, the laws ofscience are unchanged under the combination of operations (orsymmetries) known as C, P, and T. (C means changingparticles for antiparticles. P means taking the mirror image, soleft and right are interchanged. And T means reversing thedirection of motion of all particles: in effect, running the motionbackward.) The laws of science that govern the behavior ofmatter under all normal situations are unchanged under thecombination of the two operations C and P on their own. Inother words, life would be just the same for the inhabitants ofanother planet who were both mirror images of us and whowere made of antimatter, rather than matter. If the laws of science are unchanged by the combination ofoperations C and P, and also by the combination C, P, and T,they must also be unchanged under the operation T alone. Yetthere is a big difference between the forward and backwarddirections of real time in ordinary life. Imagine a cup of waterfalling off a table and breaking into pieces on the floor. If youtake a film of this, you can easily tell whether it is being runforward or backward. If you run it backward you will see thepieces suddenly gather themselves together off the floor andjump back to form a whole cup on the table. You can tell thatthe film is being run backward because this kind of behavior isnever observed in ordinary life. If it were, crockerymanufacturers would go out of business. The explanation that is usually given as to why we don’t seebroken cups gathering themselves together off the floor andjumping back onto the table is that it is forbidden by thesecond law of thermodynamics. This says that in any closedsystem disorder, or entropy, always increases with time. Inother words, it is a form of Murphy’s law: things always tendto go wrong! An intact cup on the table is a state of highorder, but a broken cup on the floor is a disordered state. One can go readily from the cup on the table in the past tothe broken cup on the floor in the future, but not the otherway round. The increase of disorder or entropy with time is oneexample of what is called an arrow of time, something thatdistinguishes the past from the future, giving a direction to time. There are at least three different arrows of time. First, there isthe thermodynamic arrow of time, the direction of time inwhich disorder or entropy increases. Then, there is thepsychological arrow of time. This is the direction in which wefeel time passes, the direction in which we remember the pastbut not the future. Finally, there is the cosmological arrow oftime. This is the direction of time in which the universe isexpanding rather than contracting. In this chapter I shall argue that the no boundary conditionfor the universe, together with the weak anthropic principle, canexplain why all three arrows point in the same direction - andmoreover, why a well-defined arrow of time should exist at all. I shall argue that the psychological arrow is determined by thethermodynamic arrow, and that these two arrows necessarilyalways point in the same direction. If one assumes the noboundary condition for the universe, we shall see that theremust be well-defined thermodynamic and cosmological arrows oftime, but they will not point in the same direction for thewhole history of the universe. 
However, I shall argue that it isonly when they do point in the same direction that conditionsare suitable for the development of intelligent beings who canask the question: why does disorder increase in the samedirection of time as that in which the universe expands? I shall discuss first the thermodynamic arrow of time. Thesecond law of thermodynamics results from the fact that thereare always many more disordered states than there areordered ones. For example, consider the pieces of a jigsaw in abox. There is one, and. only one, arrangement in which thepieces make a complete picture. On the other hand, there area very large number of arrangements in which the pieces aredisordered and don’t make a picture. Suppose a system starts out in one of the small number ofordered states. As time goes by, the system will evolveaccording to the laws of science and its state will change. At alater time, it is more probable that the system will be in adisordered state than in an ordered one because there aremore disordered states. Thus disorder will tend to increase withtime if the system obeys an initial condition of high order. Suppose the pieces of the jigsaw start off in a box in theordered arrangement in which they form a picture. If youshake the box, the pieces will take up another arrangement. This will probably be a disordered arrangement in which thepieces don’t form a proper picture, simply because there are somany more disordered arrangements. Some groups of piecesmay still form parts of the picture, but the more you shakethe box, the more likely it is that these groups will get brokenup and the pieces will be in a completely jumbled state inwhich they don’t form any sort of picture. So the disorder ofthe pieces will probably increase with time if the pieces obeythe initial condition that they start off in a condition of highorder. Suppose, however, that God decided that the universe shouldfinish up in a state of high order but that it didn’t matter whatstate it started in. At early times the universe would probablybe in a disordered state. This would mean that disorder woulddecrease with time. You would see broken cups gatheringthemselves together and jumping back onto the table. However,any human beings who were observing the cups would beliving in a universe in which disorder decreased with time. Ishall argue that such beings would have a psychological arrowof time that was backward. That is, they would rememberevents in the future, and not remember events in their past. When the cup was broken, they would remember it being onthe table, but when it was on the table, they would notremember it being on the floor. It is rather difficult to talk about human memory because wedon’t know how the brain works in detail. We do, however,know all about how computer memories work. I shall thereforediscuss the psychological arrow of time for computers. I think itis reasonable to assume that the arrow for computers is thesame as that for humans. If it were not, one could make akilling on the stock exchange by having a computer that wouldremember tomorrow’s prices! A computer memory is basically adevice containing elements that can exist in either of two states. A simple example is an abacus. In its simplest form, thisconsists of a number of wires; on each wire there are anumber of beads that can be put in one of two positions. Before an item is recorded in a computer’s memory, thememory is in a disordered state, with equal probabilities for thetwo possible states. 
(The abacus beads are scattered randomlyon the wires of the abacus.) After the memory interacts withthe system to be remembered, it will definitely be in one stateor the other, according to the state of the system. (Eachabacus bead will be at either the left or the right of the abacuswire.) So the memory has passed from a disordered state toan ordered one. However, in order to make sure that thememory is in the right state, it is necessary to use a certainamount of energy (to move the bead or to power thecomputer, for example). This energy is dissipated as heat, andincreases the amount of disorder in the universe. One canshow that this increase in disorder is always greater than theincrease in the order of the memory itself. Thus the heatexpelled by the computer’s cooling fan means that when acomputer records an item in memory, the total amount ofdisorder in the universe still goes up. The direction of time inwhich a computer remembers the past is the same as that inwhich disorder increases. Our subjective sense of the direction of time, thepsychological arrow of time, is therefore determined within ourbrain by the thermodynamic arrow of time. Just like acomputer, we must remember things in the order in whichentropy increases. This makes the second law ofthermodynamics almost trivial. Disorder increases with timebecause we measure time in the direction in which disorderincreases You can’t have a safer bet than that! But why should the thermodynamic arrow of time exist atall? Or, in other words, why should the universe be in a stateof high order at one end of time, the end that we call thepast? Why is it not in a state of complete disorder at all times? After all, this might seem more probable. And why is thedirection of time in which disorder increases the same as thatin which the universe expands? In the classical theory of general relativity one cannot predicthow the universe would have begun because all the knownlaws of science would have broken down at the big bangsingularity. The universe could have started out in a verysmooth and ordered state. This would have led to well-definedthermodynamic and cosmological arrows of time, as we observe. But it could equally well have started out in a very lumpy anddisordered state. In that case, the universe would already be ina state of complete disorder, so disorder could not increasewith time. It would either stay constant, in which case therewould be no well-defined thermodynamic arrow of time, or itwould decrease, in which case the thermodynamic arrow oftime would point in the opposite direction to the cosmologicalarrow. Neither of these possibilities agrees with what weobserve. However, as we have seen, classical general relativitypredicts its own downfall. When the curvature of space-timebecomes large, quantum gravitational effects will becomeimportant and the classical theory will cease to be a gooddescription of the universe. One has to use a quantum theoryof gravity to understand how the universe began. In a quantum theory of gravity, as we saw in the lastchapter, in order to specify the state of the universe one wouldstill have to say how the possible histories of the universewould behave at the boundary of space-time in the past. Onecould avoid this difficulty of having to describe what we do notand cannot know only if the histories satisfy the no boundarycondition: they are finite in extent but have no boundaries,edges, or singularities. 
In that case, the beginning of time wouldbe a regular, smooth point of space-time and the universewould have begun its expansion in a very smooth and orderedstate. It could not have been completely uniform, because thatwould violate the uncertainty principle of quantum theory. Therehad to be small fluctuations in the density and velocities ofparticles. The no boundary condition, however, implied thatthese fluctuations were as small as they could be, consistentwith the uncertainty principle. The universe would have started off with a period ofexponential or “inflationary” expansion in which it would haveincreased its size by a very large factor. During this expansion,the density fluctuations would have remained small at first, butlater would have started to grow. Regions in which the densitywas slightly higher than average would have had theirexpansion slowed down by the gravitational attraction of theextra mass. Eventually, such regions would stop expanding andcollapse to form galaxies, stars, and beings like us. The universewould have started in a smooth and ordered state, and wouldbecome lumpy and disordered as time went on. This wouldexplain the existence of the thermodynamic arrow of time. But what would happen if and when the universe stoppedexpanding and began to contract? Would the thermodynamicarrow reverse and disorder begin to decrease with time? Thiswould lead to all sorts of science-fiction-like possibilities forpeople who survived from the expanding to the contractingphase. Would they see broken cups gathering themselvestogether off the floor and jumping back onto the table? Wouldthey be able to remember tomorrow’s prices and make afortune on the stock market? It might seem a bit academic toworry about what will happen when the universe collapsesagain, as it will not start to contract for at least another tenthousand million years. But there is a quicker way to find outwhat will happen: jump into a black hole. The collapse of astar to form a black hole is rather like the later stages of thecollapse of the whole universe. So if disorder were to decreasein the contracting phase of the universe, one might also expectit to decrease inside a black hole. So perhaps an astronautwho fell into a black hole would be able to make money atroulette by remembering where the ball went before he placedhis bet. (Unfortunately, however, he would not have long toplay before he was turned to spaghetti. Nor would he be ableto let us know about the reversal of the thermodynamic arrow,or even bank his winnings, because he would be trappedbehind the event horizon of the black hole.)At first, I believed that disorder would decrease when theuniverse recollapsed. This was because I thought that theuniverse had to return to a smooth and ordered state when itbecame small again. This would mean that the contractingphase would be like the time reverse of the expanding phase. People in the contracting phase would live their lives backward: they would die before they were born and get younger as theuniverse contracted. This idea is attractive because it would mean a nicesymmetry between the expanding and contracting phases. However, one cannot adopt it on its own, independent of otherideas about the universe. The question is: is it implied by theno boundary condition, or is it inconsistent with that condition? As I said, I thought at first that the no boundary condition didindeed imply that disorder would decrease in the contractingphase. I was misled partly by the analogy with the surface ofthe earth. 
If one took the beginning of the universe tocorrespond to the North Pole, then the end of the universeshould be similar to the beginning, just as the South Pole issimilar to the North. However, the North and South Polescorrespond to the beginning and end of the universe inimaginary time. The beginning and end in real time can bevery different from each other. I was also misled by work Ihad done on a simple model of the universe in which thecollapsing phase looked like the time reverse of the expandingphase. However, a colleague of mine, Don Page, of Penn StateUniversity, pointed out that the no boundary condition did notrequire the contracting phase necessarily to be the time reverseof the expanding phase. Further, one of my students, RaymondLaflamme, found that in a slightly more complicated model, thecollapse of the universe was very different from the expansion. I realized that I had made a mistake: the no boundarycondition implied that disorder would in fact continue toincrease during the contraction. The thermodynamic andpsychological arrows of time would not reverse when theuniverse begins to recontract, or inside black holes. What should you do when you find you have made amistake like that? Some people never admit that they arewrong and continue to find new, and often mutuallyinconsistent, arguments to support their case - as Eddingtondid in opposing black hole theory. Others claim to have neverreally supported the incorrect view in the first place or, if theydid, it was only to show that it was inconsistent. It seems tome much better and less confusing if you admit in print thatyou were wrong. A good example of this was Einstein, whocalled the cosmological constant, which he introduced when hewas trying to make a static model of the universe, the biggestmistake of his life. To return to the arrow of time, there remains the question: why do we observe that the thermodynamic and cosmologicalarrows point in the same direction? Or in other words, whydoes disorder increase in the same direction of time as that inwhich the universe expands? If one believes that the universewill expand and then contract again, as the no boundaryproposal seems to imply, this becomes a question of why weshould be in the expanding phase rather than the contractingphase. One can answer this on the basis of the weak anthropicprinciple. Conditions in the contracting phase would not besuitable for the existence of intelligent beings who could ask thequestion: why is disorder increasing in the same direction oftime as that in which the universe is expanding? The inflationin the early stages of the universe, which the no boundaryproposal predicts, means that the universe must be expandingat very close to the critical rate at which it would just avoidrecollapse, and so will not recollapse for a very long time. Bythen all the stars will have burned out and the protons andneutrons in them will probably have decayed into light particlesand radiation. The universe would be in a state of almostcomplete disorder. There would be no strong thermodynamicarrow of time. Disorder couldn’t increase much because theuniverse would be in a state of almost complete disorderalready. However, a strong thermodynamic arrow is necessaryfor intelligent life to operate. In order to survive, human beingshave to consume food, which is an ordered form of energy,and convert it into heat, which is a disordered form of energy. Thus intelligent life could not exist in the contracting phase ofthe universe. 
This is the explanation of why we observe thatthe thermodynamic and cosmological arrows of time point inthe same direction. It is not that the expansion of the universecauses disorder to increase. Rather, it is that the no boundarycondition causes disorder to increase and the conditions to besuitable for intelligent life only in the expanding phase. To summarize, the laws of science do not distinguishbetween the forward and backward directions of time. However,there are at least three arrows of time that do distinguish thepast from the future. They are the thermodynamic arrow, thedirection of time in which disorder increases; the psychologicalarrow, the direction of time in which we remember the pastand not the future; and the cosmological arrow, the direction oftime in which the universe expands rather than contracts. Ihave shown that the psychological arrow is essentially the sameas the thermodynamic arrow, so that the two would alwayspoint in the same direction. The no boundary proposal for theuniverse predicts the existence of a well-defined thermodynamicarrow of time because the universe must start off in a smoothand ordered state. And the reason we observe thisthermodynamic arrow to agree with the cosmological arrow isthat intelligent beings can exist only in the expanding phase. The contracting phase will be unsuitable because it has nostrong thermodynamic arrow of time. The progress of the human race in understanding theuniverse has established a small corner of order in anincreasingly disordered universe. If you remember every wordin this book, your memory will have recorded about two millionpieces of information: the order in your brain will haveincreased by about two million units. However, while you havebeen reading the book, you will have converted at least athousand calories of ordered energy, in the form of food, intodisordered energy, in the form of heat that you lose to the airaround you by convection and sweat. This will increase thedisorder of the universe by about twenty million million millionmillion units - or about ten million million million times theincrease in order in your brain - and that’s if you remembereverything in this book. In the next chapter but one I will tryto increase the order in our neck of the woods a little furtherby explaining how people are trying to fit together the partialtheories I have described to form a complete unified theorythat would cover everything in the universe. CHAPTER 10 WORMHOLES AND TIME TRAVEL The last chapter discussed why we see time go forward: why disorder increases and why we remember the past butnot the future. Time was treated as if it were a straight railwayline on which one could only go one way or the other. But what if the railway line had loops and branches so thata train could keep going forward but come back to a station ithad already passed? In other words, might it be possible forsomeone to travel into the future or the past? H. G. Wells in The Time Machine explored these possibilitiesas have countless other writers of science fiction. Yet many ofthe ideas of science fiction, like submarines and travel to themoon, have become matters of science fact. So what are theprospects for time travel? The first indication that the laws of physics might really allowpeople to travel in time came in 1949 when Kurt Godeldiscovered a new space-time allowed by general relativity. 
Godelwas a mathematician who was famous for proving that it isimpossible to prove all true statements, even if you limityourself to trying to prove all the true statements in a subjectas apparently cut and dried as arithmetic. Like the uncertaintyprinciple, Godel’s incompleteness theorem may be a fundamentallimitation on our ability to understand and predict the universe,but so far at least it hasn’t seemed to be an obstacle in oursearch for a complete unified theory. Godel got to know about general relativity when he andEinstein spent their later years at the Institute for AdvancedStudy in Princeton. His space-time had the curious propertythat the whole universe was rotating. One might ask: “Rotatingwith respect to what?” The answer is that distant matter wouldbe rotating with respect to directions that little tops orgyroscopes point in. This had the side effect that it would be possible forsomeone to go off in a rocket ship and return to earth beforehe set out. This property really upset Einstein, who hadthought that general relativity wouldn’t allow time travel. However, given Einstein’s record of ill-founded opposition togravitational collapse and the uncertainty principle, maybe thiswas an encouraging sign. The solution Godel found doesn’tcorrespond to the universe we live in because we can showthat the universe is not rotating. It also had a non-zero valueof the cosmological constant that Einstein introduced when hethought the universe was unchanging. After Hubble discoveredthe expansion of the universe, there was no need for acosmological constant and it is now generally believed to bezero. However, other more reasonable space-times that areallowed by general relativity and which permit travel into thepast have since been found. One is in the interior of a rotatingblack hole. Another is a space-time that contains two cosmicstrings moving past each other at high speed. As their namesuggests, cosmic strings are objects that are like string in thatthey have length but a tiny cross section. Actually, they aremore like rubber bands because they are under enormoustension, something like a million million million million tons. Acosmic string attached to the earth could accelerate it from 0to 60 mph in 1/30th of a second. Cosmic strings may soundlike pure science fiction but there are reasons to believe theycould have formed in the early universe as a result ofsymmetry-breaking of the kind discussed in Chapter 5. Becausethey would be under enormous tension and could start in anyconfiguration, they might accelerate to very high speeds whenthey straighten out. The Godel solution and the cosmic string space-time start outso distorted that travel into the past was always possible. Godmight have created such a warped universe but we have noreason to believe he did. Observations of the microwavebackground and of the abundances of the light elementsindicate that the early universe did not have the kind ofcurvature required to allow time travel. The same conclusionfollows on theoretical grounds if the no boundary proposal iscorrect. So the question is: if the universe starts out withoutthe kind of curvature required for time travel, can wesubsequently warp local regions of space-time sufficiently toallow it? A closely related problem that is also of concern to writersof science fiction is rapid interstellar or intergalactic travel. According to relativity, nothing can travel faster than light. 
If we therefore sent a spaceship to our nearest neighboring star, Alpha Centauri, which is about four light-years away, it would take at least eight years before we could expect the travelers to return and tell us what they had found. If the expedition were to the center of our galaxy, it would be at least a hundred thousand years before it came back. The theory of relativity does allow one consolation. This is the so-called twins paradox mentioned in Chapter 2.

Because there is no unique standard of time, but rather observers each have their own time as measured by clocks that they carry with them, it is possible for the journey to seem to be much shorter for the space travelers than for those who remain on earth. But there would not be much joy in returning from a space voyage a few years older to find that everyone you had left behind was dead and gone thousands of years ago. So in order to have any human interest in their stories, science fiction writers had to suppose that we would one day discover how to travel faster than light. What most of these authors don’t seem to have realized is that if you can travel faster than light, the theory of relativity implies you can also travel back in time, as the following limerick says:

There was a young lady of Wight
Who traveled much faster than light.
She departed one day,
In a relative way,
And arrived on the previous night.

The point is that the theory of relativity says that there is no unique measure of time that all observers will agree on. Rather, each observer has his or her own measure of time. If it is possible for a rocket traveling below the speed of light to get from event A (say, the final of the 100-meter race of the Olympic Games in 2012) to event B (say, the opening of the 100,004th meeting of the Congress of Alpha Centauri), then all observers will agree that event A happened before event B according to their times. Suppose, however, that the spaceship would have to travel faster than light to carry the news of the race to the Congress. Then observers moving at different speeds can disagree about whether event A occurred before B or vice versa. According to the time of an observer who is at rest with respect to the earth, it may be that the Congress opened after the race. Thus this observer would think that a spaceship could get from A to B in time if only it could ignore the speed-of-light speed limit. However, to an observer at Alpha Centauri moving away from the earth at nearly the speed of light, it would appear that event B, the opening of the Congress, would occur before event A, the 100-meter race. The theory of relativity says that the laws of physics appear the same to observers moving at different speeds. This has been well tested by experiment and is likely to remain a feature even if we find a more advanced theory to replace relativity. Thus the moving observer would say that if faster-than-light travel is possible, it should be possible to get from event B, the opening of the Congress, to event A, the 100-meter race. If one went slightly faster, one could even get back before the race and place a bet on it in the sure knowledge that one would win.

There is a problem with breaking the speed-of-light barrier. The theory of relativity says that the rocket power needed to accelerate a spaceship gets greater and greater the nearer it gets to the speed of light. We have experimental evidence for this, not with spaceships but with elementary particles in particle accelerators like those at Fermilab or CERN (European Centre for Nuclear Research).
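The growth in the energy required can be put in rough numbers. The short Python sketch below is only an illustration, using the standard relativistic factor gamma (the quantitative form of the statement above, not something spelled out in the text itself); the speeds chosen are arbitrary. The total energy of a particle of rest mass m moving at speed v is gamma times its rest-mass energy, and gamma grows without limit as v approaches the speed of light c.

```python
# Illustration of the speed-of-light barrier: the relativistic factor gamma,
# and with it the energy gamma * m * c**2 of a particle of rest mass m,
# grows without limit as the speed v approaches the speed of light c.
import math

def gamma(v_over_c):
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

for v in (0.5, 0.9, 0.99, 0.9999, 0.999999):
    print(f"v = {v:<9} c   energy = {gamma(v):10.1f} x rest-mass energy")
# However much energy is supplied, v stays below 1: no finite amount of power
# pushes a massive particle (or a spaceship) past the barrier.
```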
We can accelerate particles to 99.99 percent of the speed of light, but however much power we feed in, we can’t get them beyond the speed-of-light barrier. Similarly with spaceships: no matter how much rocket power they have, they can’t accelerate beyond the speed of light. That might seem to rule out both rapid space travel and travel back in time. However, there is a possible way out. It might be that one could warp space-time so that there was a shortcut between A and B. One way of doing this would be to create a wormhole between A and B. As its name suggests, a wormhole is a thin tube of space-time which can connect two nearly flat regions far apart. There need be no relation between the distance through the wormhole and the separation of its ends in the nearly flat background. Thus one could imagine that one could create or find a wormhole that would lead from the vicinity of the Solar System to Alpha Centauri. The distance through the wormhole might be only a few million miles even though earth and Alpha Centauri are twenty million million miles apart in ordinary space. This would allow news of the 100-meter race to reach the opening of the Congress. But then an observer moving toward the earth should also be able to find another wormhole that would enable him to get from the opening of the Congress on Alpha Centauri back to earth before the start of the race. So wormholes, like any other possible form of travel faster than light, would allow one to travel into the past.

The idea of wormholes between different regions of space-time was not an invention of science fiction writers but came from a very respectable source. In 1935, Einstein and Nathan Rosen wrote a paper in which they showed that general relativity allowed what they called “bridges,” but which are now known as wormholes. The Einstein-Rosen bridges didn’t last long enough for a spaceship to get through: the ship would run into a singularity as the wormhole pinched off. However, it has been suggested that it might be possible for an advanced civilization to keep a wormhole open. To do this, or to warp space-time in any other way so as to permit time travel, one can show that one needs a region of space-time with negative curvature, like the surface of a saddle. Ordinary matter, which has a positive energy density, gives space-time a positive curvature, like the surface of a sphere. So what one needs, in order to warp space-time in a way that will allow travel into the past, is matter with negative energy density.

Energy is a bit like money: if you have a positive balance, you can distribute it in various ways, but according to the classical laws that were believed at the beginning of the century, you weren’t allowed to be overdrawn. So these classical laws would have ruled out any possibility of time travel. However, as has been described in earlier chapters, the classical laws were superseded by quantum laws based on the uncertainty principle. The quantum laws are more liberal and allow you to be overdrawn on one or two accounts provided the total balance is positive. In other words, quantum theory allows the energy density to be negative in some places, provided that this is made up for by positive energy densities in other places, so that the total energy remains positive. An example of how quantum theory can allow negative energy densities is provided by what is called the Casimir effect.
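To give a rough quantitative sense of the effect described next (this is the standard textbook result for two ideal conducting plates a distance d apart; the numbers are not quoted in the text), the attractive pressure between the plates is

\[
P = \frac{\pi^{2} \hbar c}{240\, d^{4}},
\]

which for plates one micrometre apart comes to only about a thousandth of a newton per square metre - tiny, but measurable.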
As we saw in Chapter 7,even what we think of as “empty” space is filled with pairs ofvirtual particles and antiparticles that appear together, moveapart, and come back together and annihilate each other. Now,suppose one has two parallel metal plates a short distanceapart. The plates will act like mirrors for the virtual photons orparticles of light. In fact they will form a cavity between them,a bit like an organ pipe that will resonate only at certain notes. This means that virtual photons can occur in the spacebetween the plates only if their wavelengths (the distancebetween the crest of one wave and the next) fit a wholenumber of times into the gap between the plates. If the widthof a cavity is a whole number of wavelengths plus a fraction ofa wave-length, then after some reflections backward andforward between the plates, the crests of one wave will coincidewith the troughs of another and the waves will cancel out. Because the virtual photons between the plates can haveonly the resonant wavelengths, there will be slightly fewer ofthem than in the region outside the plates where virtualphotons can have any wavelength. Thus there will be slightlyfewer virtual photons hitting the inside surfaces of the platesthan the outside surfaces. One would therefore expect a forceon the plates, pushing them toward each other. This force hasactually been detected and has the predicted value. Thus wehave experimental evidence that virtual particles exist and havereal effects. The fact that there are fewer virtual photons between theplates means that their energy density will be less thanelsewhere. But the total energy density in “empty” space faraway from the plates must be zero, because otherwise theenergy density would warp the space and it would not bealmost flat. So, if the energy density between the plates is lessthan the energy density far away, it must be negative. We thus have experimental evidence both that space-timecan be warped (from the bending of light during eclipses) andthat it can be curved in the way necessary to allow time travel(from the Casimir effect). One might hope therefore that as weadvance in science and technology, we would eventually manageto build a time machine. But if so, why hasn’t anyone comeback from the future and told us how to do it? There mightbe good reasons why it would be unwise to give us the secretof time travel at our present primitive state of development, butunless human nature changes radically, it is difficult to believethat some visitor from the future wouldn’t spill the beans. Ofcourse, some people would claim that sightings of UFOs areevidence that we are being visited either by aliens or by peoplefrom the future. (If the aliens were to get here in reasonabletime, they would need faster-than-light travel, so the twopossibilities may be equivalent.)However, I think that any visit by aliens or people from thefuture would be much more obvious and, probably, much moreunpleasant. If they are going to reveal themselves at all, whydo so only to those who are not regarded as reliablewitnesses? If they are trying to warn us of some great danger,they are not being very effective. A possible way to explain the absence of visitors from thefuture would be to say that the past is fixed because we haveobserved it and seen that it does not have the kind of warpingneeded to allow travel back from the future. On the otherhand, the future is unknown and open, so it might well havethe curvature required. 
This would mean that any time travelwould be confined to the future. There would be no chance ofCaptain Kirk and the Starship Enterprise turning up at thepresent time. This might explain why we have not yet been overrun bytourists from the future, but it would not avoid the problemsthat would arise if one were able to go back and changehistory. Suppose, for example, you went back and killed yourgreat-great-grandfather while he was still a child. There aremany versions of this paradox but they are essentiallyequivalent: one would get contradictions if one were free tochange the past. There seem to be two possible resolutions to the paradoxesposed by time travel. One I shall call the consistent historiesapproach. It says that even if space-time is warped so that itwould be possible to travel into the past, what happens inspace-time must be a consistent solution of the laws of physics. According to this viewpoint, you could not go back in timeunless history showed that you had already arrived in the pastand, while there, had not killed your great-great-grandfather orcommitted any other acts that would conflict with your currentsituation in the present. Moreover, when you did go back, youwouldn’t be able to change recorded history. That means youwouldn’t have free will to do what you wanted. Of course, onecould say that free will is an illusion anyway. If there really is acomplete unified theory that governs everything, it presumablyalso determines your actions. But it does so in a way that isimpossible to calculate for an organism that is as complicatedas a human being. The reason we say that humans have freewill is because we can’t predict what they will do. However, ifthe human then goes off in a rocket ship and comes backbefore he or she set off, we will be able to predict what he orshe will do because it will be part of recorded history. Thus, inthat situation, the time traveler would have no free will. The other possible way to resolve the paradoxes of timetravel might be called the alternative histories hypothesis. Theidea here is that when time travelers go back to the past, theyenter alternative histories which differ from recorded history. Thus they can act freely, without the constraint of consistencywith their previous history. Steven Spiel-berg had fun with thisnotion in the Back to the Future films: Marty McFly was ableto go back and change his parents’ courtship to a moresatisfactory history. The alternative histories hypothesis sounds rather like RichardFeynman’s way of expressing quantum theory as a sum overhistories, which was described in Chapters 4 and 8. This saidthat the universe didn’t just have a single history: rather it hadevery possible history, each with its own probability. However,there seems to be an important difference between Feynman’sproposal and alternative histories. In Feynman’s sum, eachhistory comprises a complete space-time and everything in it. The space-time may be so warped that it is possible to travelin a rocket into the past. But the rocket would remain in thesame space-time and therefore the same history, which wouldhave to be consistent. Thus Feynman’s sum over historiesproposal seems to support the consistent histories hypothesisrather than the alternative histories. The Feynman sum over histories does allow travel into thepast on a microscopic scale. In Chapter 9 we saw that thelaws of science are unchanged by combinations of theoperations C, P, and T. 
This means that an antiparticle spinningin the anticlockwise direction and moving from A to B can alsobe viewed as an ordinary particle spinning clockwise andmoving backward in time from B to A. Similarly, an ordinaryparticle moving forward in time is equivalent to an antiparticlemoving backward in time. As has been discussed in thischapter and Chapter 7, “empty” space is filled with pairs ofvirtual particles and antiparticles that appear together, moveapart, and then come back together and annihilate each other. So, one can regard the pair of particles as a single particlemoving on a closed loop in space-time. When the pair ismoving forward in time (from the event at which it appears tothat at which it annihilates), it is called a particle. But when theparticle is traveling back in time (from the event at which thepair annihilates to that at which it appears), it is said to be anantiparticle traveling forward in time. The explanation of how black holes can emit particles andradiation (given in Chapter 7) was that one member of avirtual particle/ antiparticle pair (say, the antiparticle) might fallinto the black hole, leaving the other member without a partnerwith which to annihilate. The forsaken particle might fall intothe hole as well, but it might also escape from the vicinity ofthe black hole. If so, to an observer at a distance it wouldappear to be a particle emitted by the black hole. One can, however, have a different but equivalent intuitivepicture of the mechanism for emission from black holes. Onecan regard the member of the virtual pair that fell into theblack hole (say, the antiparticle) as a particle traveling backwardin time out of the hole. When it gets to the point at which thevirtual particle/antiparticle pair appeared together, it is scatteredby the gravitational field into a particle traveling forward in timeand escaping from the black hole. If, instead, it were theparticle member of the virtual pair that fell into the hole, onecould regard it as an antiparticle traveling back in time andcoming out of the black hole. Thus the radiation by black holesshows that quantum theory allows travel back in time on amicroscopic scale and that such time travel can produceobservable effects. One can therefore ask: does quantum theory allow timetravel on a macroscopic scale, which people could use? At firstsight, it seems it should. The Feynman sum over historiesproposal is supposed to be over all histories. Thus it shouldinclude histories in which space-time is so warped that it ispossible to travel into the past. Why then aren’t we in troublewith history? Suppose, for example, someone had gone backand given the Nazis the secret of the atom bomb? One would avoid these problems if what I call thechronology protection conjecture holds. This says that the lawsof physics conspire to prevent macroscopic bodies from carryinginformation into the past. Like the cosmic censorship conjecture,it has not been proved but there are reasons to believe it istrue. The reason to believe that chronology protection operates isthat when space-time is warped enough to make travel into thepast possible, virtual particles moving on closed loops inspace-time can become real particles traveling forward in timeat or below the speed of light. As these particles can go roundthe loop any number of times, they pass each point on theirroute many times. Thus their energy is counted over and overagain and the energy density will become very large. 
This couldgive space-time a positive curvature that would not allow travelinto the past. It is not yet clear whether these particles wouldcause positive or negative curvature or whether the curvatureproduced by some kinds of virtual particles might cancel thatproduced by other kinds. Thus the possibility of time travelremains open. But I’m not going to bet on it. My opponentmight have the unfair advantage of knowing the future. CHAPTER 11 THE UNIFICATION OF PHYSICS As was explained in the first chapter, it would be verydifficult to construct a complete unified theory of everything inthe universe all at one go. So instead we have made progressby finding partial theories that describe a limited range ofhappenings and by neglecting other effects or approximatingthem by certain numbers. (Chemistry, for example, allows us tocalculate the interactions of atoms, without knowing the internalstructure of an atom’s nucleus.) Ultimately, however, one wouldhope to find a complete, consistent, unified theory that wouldinclude all these partial theories as approximations, and that didnot need to be adjusted to fit the facts by picking the valuesof certain arbitrary numbers in the theory. The quest for sucha theory is known as “the unification of physics.” Einstein spentmost of his later years unsuccessfully searching for a unifiedtheory, but the time was not ripe: there were partial theoriesfor gravity and the electromagnetic force, but very little wasknown about the nuclear forces. Moreover, Einstein refused tobelieve in the reality of quantum mechanics, despite theimportant role he had played in its development. Yet it seemsthat the uncertainty principle is a fundamental feature of theuniverse we live in. A successful unified theory must, therefore,necessarily incorporate this principle. As I shall describe, the prospects for finding such a theoryseem to be much better now because we know so much moreabout the universe. But we must beware of overconfidence -we have had false dawns before! At the beginning of thiscentury, for example, it was thought that everything could beexplained in terms of the properties of continuous matter, suchas elasticity and heat conduction. The discovery of atomicstructure and the uncertainty principle put an emphatic end tothat. Then again, in 1928, physicist and Nobel Prize winnerMax Born told a group of visitors to Gottingen University,“Physics, as we know it, will be over in six months.” Hisconfidence was based on the recent discovery by Dirac of theequation that governed the electron. It was thought that asimilar equation would govern the proton, which was the onlyother particle known at the time, and that would be the end oftheoretical physics. However, the discovery of the neutron andof nuclear forces knocked that one on the head too. Havingsaid this, I still believe there are grounds for cautious optimismthat we may now be near the end of the search for theultimate laws of nature. In previous chapters I have described general relativity, thepartial theory of gravity, and the partial theories that governthe weak, the strong, and the electromagnetic forces. The lastthree may be combined in so-called grand unified theories, orGUTs, which are not very satisfactory because they do notinclude gravity and because they contain a number ofquantities, like the relative masses of different particles, thatcannot be predicted from the theory but have to be chosen tofit observations. 
The main difficulty in finding a theory that unifies gravity with the other forces is that general relativity is a “classical” theory; that is, it does not incorporate the uncertainty principle of quantum mechanics. On the other hand, the other partial theories depend on quantum mechanics in an essential way. A necessary first step, therefore, is to combine general relativity with the uncertainty principle. As we have seen, this can produce some remarkable consequences, such as black holes not being black, and the universe not having any singularities but being completely self-contained and without a boundary. The trouble is, as explained in Chapter 7, that the uncertainty principle means that even “empty” space is filled with pairs of virtual particles and antiparticles. These pairs would have an infinite amount of energy and, therefore, by Einstein’s famous equation E = mc², they would have an infinite amount of mass. Their gravitational attraction would thus curve up the universe to infinitely small size.

Rather similar, seemingly absurd infinities occur in the other partial theories, but in all these cases the infinities can be canceled out by a process called renormalization. This involves canceling the infinities by introducing other infinities. Although this technique is rather dubious mathematically, it does seem to work in practice, and has been used with these theories to make predictions that agree with observations to an extraordinary degree of accuracy. Renormalization, however, does have a serious drawback from the point of view of trying to find a complete theory, because it means that the actual values of the masses and the strengths of the forces cannot be predicted from the theory, but have to be chosen to fit the observations.

In attempting to incorporate the uncertainty principle into general relativity, one has only two quantities that can be adjusted: the strength of gravity and the value of the cosmological constant. But adjusting these is not sufficient to remove all the infinities. One therefore has a theory that seems to predict that certain quantities, such as the curvature of space-time, are really infinite, yet these quantities can be observed and measured to be perfectly finite! This problem in combining general relativity and the uncertainty principle had been suspected for some time, but was finally confirmed by detailed calculations in 1972. Four years later, a possible solution, called “supergravity,” was suggested. The idea was to combine the spin-2 particle called the graviton, which carries the gravitational force, with certain other particles of spin 3/2, 1, 1/2, and 0. In a sense, all these particles could then be regarded as different aspects of the same “superparticle,” thus unifying the matter particles with spin 1/2 and 3/2 with the force-carrying particles of spin 0, 1, and 2. The virtual particle/antiparticle pairs of spin 1/2 and 3/2 would have negative energy, and so would tend to cancel out the positive energy of the spin 2, 1, and 0 virtual pairs. This would cause many of the possible infinities to cancel out, but it was suspected that some infinities might still remain. However, the calculations required to find out whether or not there were any infinities left uncanceled were so long and difficult that no one was prepared to undertake them. Even with a computer it was reckoned it would take at least four years, and the chances were very high that one would make at least one mistake, probably more.
So one would know one had the right answeronly if someone else repeated the calculation and got the sameanswer, and that did not seem very likely! Despite these problems, and the fact that the particles in thesuper-gravity theories did not seem to match the observedparticles, most scientists believed that supergravity was probablythe right answer to the problem of the unification of physics. Itseemed the best way of unifying gravity with the other forces. However, in 1984 there was a remarkable change of opinion infavor of what are called string theories. In these theories thebasic objects are not particles, which occupy a single point ofspace, but things that have a length but no other dimension,like an infinitely thin piece of string. These strings may haveends (the so-called open strings) or they may be joined upwith themselves in closed loops (closed strings) (Fig. 11.1 andFig. 11.2). A particle occupies one point of space at each instantof time. Thus its history can be represented by a line inspace-time (the “world-line”). A string, on the other hand,occupies a line in space at each moment of time. So its historyin space-time is a two-dimensional surface called theworld-sheet. (Any point on such a world-sheet can bedescribed by two numbers, one specifying the time and theother the position of the point on the string.) The world-sheetof an open string is a strip: its edges represent the pathsthrough space-time of the ends of the string (Fig. 11.1). Theworld-sheet of a closed string is a cylinder or tube (Fig. 11.2): a slice through the tube is a circle, which represents theposition of the string at one particular time. Two pieces of string can join together to form a singlestring; in the case of open strings they simply join at the ends(Fig. 11.3), while in the case of closed strings it is like the twolegs joining on a pair of trousers (Fig. 11.4). Similarly, a singlepiece of string can divide into two strings. In string theories,what were previously thought of as particles are now picturedas waves traveling down the string, like waves on a vibratingkite string. The emission or absorption of one particle byanother corresponds to the dividing or joining together ofstrings. For example, the gravitational force of the sun on theearth was pictured in particle theories as being caused by theemission of a graviton by a particle in the sun and itsabsorption by a particle in the earth (Fig. 11.5). In stringtheory, this process corresponds to an H-shaped tube or pipe(Fig. 11.6) (string theory is rather like plumbing, in a way). Thetwo vertical sides of the H correspond to the particles in thesun and the earth, and the horizontal crossbar corresponds tothe graviton that travels between them. String theory has a curious history. It was originally inventedin the late 1960s in an attempt to find a theory to describe thestrong force. The idea was that particles like the proton andthe neutron could be regarded as waves on a string. Thestrong forces between the particles would correspond to piecesof string that went between other bits of string, as in a spider’sweb. For this theory to give the observed value of the strongforce between particles, the strings had to be like rubber bandswith a pull of about ten tons. 
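As a rough check of that figure (my own order-of-magnitude arithmetic, using the conventional hadronic string tension of roughly 1 GeV per femtometre, which the text does not quote):

\[
T \approx \frac{1\ \mathrm{GeV}}{1\ \mathrm{fm}} \approx \frac{1.6\times 10^{-10}\ \mathrm{J}}{10^{-15}\ \mathrm{m}} \approx 1.6\times 10^{5}\ \mathrm{N},
\]

which is the weight of roughly sixteen tonnes - the same order as the ten tons mentioned above.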
In 1974 Joel Scherk from Paris and John Schwarz from theCalifornia Institute of Technology published a paper in whichthey showed that string theory could describe the gravitationalforce, but only if the tension in the string were very muchhigher, about a thousand million million million million millionmillion tons (1 with thirty-nine zeros after it). The predictions ofthe string theory would be just the same as those of generalrelativity on normal length scales, but they would differ at verysmall distances, less than a thousand million million millionmillion millionth of a centimeter (a centimeter divided by 1 withthirty-three zeros after it). Their work did not receive muchattention, however, because at just about that time most peopleabandoned the original string theory of the strong force infavor of the theory based on quarks and gluons, which seemedto fit much better with observations. Scherk died in tragiccircumstances (he suffered from diabetes and went into a comawhen no one was around to give him an injection of insulin). So Schwarz was left alone as almost the only supporter ofstring theory, but now with the much higher pro-posed valueof the string tension. In 1984 interest in strings suddenly revived, apparently fortwo reasons. One was that people were not really makingmuch progress toward showing that supergravity was finite orthat it could explain the kinds of particles that we observe. Theother was the publication of a paper by John Schwarz andMike Green of Queen Mary College, London, that showed thatstring theory might be able to explain the existence of particlesthat have a built-in left-handedness, like some of the particlesthat we observe. Whatever the reasons, a large number ofpeople soon began to work on string theory and a newversion was developed, the so-called heterotic string, whichseemed as if it might be able to explain the types of particlesthat we observe. String theories also lead to infinities, but it is thought theywill all cancel out in versions like the heterotic string (thoughthis is not yet known for certain). String theories, however,have a bigger problem: they seem to be consistent only ifspace-time has either ten or twenty-six dimensions, instead ofthe usual four! Of course, extra space-time dimensions are acommonplace of science fiction indeed, they provide an idealway of overcoming the normal restriction of general relativitythat one cannot travel faster than light or back in time (seeChapter 10). The idea is to take a shortcut through the extradimensions. One can picture this in the following way. Imaginethat the space we live in has only two dimensions and iscurved like the surface of an anchor ring or torus (Fig. 11.7). Ifyou were on one side of the inside edge of the ring and youwanted to get to a point on the other side, you would have togo round the inner edge of the ring. However, if you wereable to travel in the third dimension, you could cut straightacross. Why don’t we notice all these extra dimensions, if they arereally there? Why do we see only three space dimensions andone time dimension? The suggestion is that the otherdimensions are curved up into a space of very small size,something like a million million million million millionth of aninch. This is so small that we just don’t notice it: we see onlyone time dimension and three space dimensions, in whichspace-time is fairly flat. It is like the surface of a straw. 
If you look at it closely, you see it is two-dimensional (the position of a point on the straw is described by two numbers, the length along the straw and the distance round the circular direction). But if you look at it from a distance, you don’t see the thickness of the straw and it looks one-dimensional (the position of a point is specified only by the length along the straw). So it is with space-time: on a very small scale it is ten-dimensional and highly curved, but on bigger scales you don’t see the curvature or the extra dimensions. If this picture is correct, it spells bad news for would-be space travelers: the extra dimensions would be far too small to allow a spaceship through. However, it raises another major problem. Why should some, but not all, of the dimensions be curled up into a small ball? Presumably, in the very early universe all the dimensions would have been very curved. Why did one time dimension and three space dimensions flatten out, while the other dimensions remain tightly curled up?

One possible answer is the anthropic principle. Two space dimensions do not seem to be enough to allow for the development of complicated beings like us. For example, two-dimensional animals living on a one-dimensional earth would have to climb over each other in order to get past each other. If a two-dimensional creature ate something it could not digest completely, it would have to bring up the remains the same way it swallowed them, because if there were a passage right through its body, it would divide the creature into two separate halves: our two-dimensional being would fall apart (Fig. 11.8). Similarly, it is difficult to see how there could be any circulation of the blood in a two-dimensional creature.

There would also be problems with more than three space dimensions. The gravitational force between two bodies would decrease more rapidly with distance than it does in three dimensions. (In three dimensions, the gravitational force drops to 1/4 if one doubles the distance. In four dimensions it would drop to 1/8, in five dimensions to 1/16, and so on.) The significance of this is that the orbits of planets, like the earth, around the sun would be unstable: the least disturbance from a circular orbit (such as would be caused by the gravitational attraction of other planets) would result in the earth spiraling away from or into the sun. We would either freeze or be burned up. In fact, the same behavior of gravity with distance in more than three space dimensions means that the sun would not be able to exist in a stable state with pressure balancing gravity. It would either fall apart or it would collapse to form a black hole. In either case, it would not be of much use as a source of heat and light for life on earth. On a smaller scale, the electrical forces that cause the electrons to orbit round the nucleus in an atom would behave in the same way as gravitational forces. Thus the electrons would either escape from the atom altogether or would spiral into the nucleus. In either case, one could not have atoms as we know them. It seems clear then that life, at least as we know it, can exist only in regions of space-time in which one time dimension and three space dimensions are not curled up small. This would mean that one could appeal to the weak anthropic principle, provided one could show that string theory does at least allow there to be such regions of the universe - and it seems that indeed string theory does.
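The scaling behind the planetary-orbit argument given above can be made explicit (a standard Newtonian estimate, not spelled out in the text). In D space dimensions the gravitational force between two bodies falls off as

\[
F \propto \frac{1}{r^{\,D-1}},
\]

so doubling the separation reduces the force by a factor of 2^(D-1): to 1/4 for D = 3, 1/8 for D = 4, 1/16 for D = 5. For D of four or more the fall-off is steep enough that circular orbits are unstable, which is the instability described in the passage.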
There may well be otherregions of the universe, or other universes (whatever that maymean), in which all the dimensions are curled up small or inwhich more than four dimensions are nearly flat, but therewould be no intelligent beings in such regions to observe thedifferent number of effective dimensions. Another problem is that there are at least four differentstring theories (open strings and three different closed stringtheories) and millions of ways in which the extra dimensionspredicted by string theory could be curled up. Why should justone string theory and one kind of curling up be picked out? For a time there seemed no answer, and progress got boggeddown. Then, from about 1994, people started discovering whatare called dualities: different string theories and different waysof curling up the extra dimensions could lead to the sameresults in four dimensions. Moreover, as well as particles, whichoccupy a single point of space, and strings, which are lines,there were found to be other objects called p-branes, whichoccupied two-dimensional or higher-dimensional volumes inspace. (A particle can be regarded as a 0-brane and a stringas a 1-brane but there were also p-branes for p=2 to p=9.)What this seems to indicate is that there is a sort ofdemocracy among supergravity, string, and p-brane theories: they seem to fit together but none can be said to be morefundamental than the others. They appear to be differentapproximations to some fundamental theory that are valid indifferent situations. People have searched for this underlying theory, but withoutany success so far. However, I believe there may not be anysingle formulation of the fundamental theory any more than, asGodel showed, one could formulate arithmetic in terms of asingle set of axioms. Instead it may be like maps - you can’tuse a single map to describe the surface of the earth or ananchor ring: you need at least two maps in the case of theearth and four for the anchor ring to cover every point. Eachmap is valid only in a limited region, but different maps willhave a region of overlap. The collection of maps provides acomplete description of the surface. Similarly, in physics it maybe necessary to use different formulations in different situations,but two different formulations would agree in situations wherethey can both be applied. The whole collection of differentformulations could be regarded as a complete unified theory,though one that could not be expressed in terms of a singleset of postulates. But can there really be such a unified theory? Or are weperhaps just chasing a mirage? There seem to be threepossibilities: 1. There really is a complete unified theory (or a collection ofoverlapping formulations), which we will someday discover if weare smart enough. 2. There is no ultimate theory of the universe, just aninfinite sequence of theories that describe the universe moreand more accurately. 3. There is no theory of the universe: events cannot bepredicted beyond a certain extent but occur in a random andarbitrary manner. Some would argue for the third possibility on the groundsthat if there were a complete set of laws, that would infringeGod’s freedom to change his mind and intervene in the world. It’s a bit like the old paradox: can God make a stone soheavy that he can’t lift it? But the idea that God might want tochange his mind is an example of the fallacy, pointed out bySt. Augustine, of imagining God as a being existing in time: time is a property only of the universe that God created. 
Presumably, he knew what he intended when he set it up! With the advent of quantum mechanics, we have come torecognize that events cannot be predicted with completeaccuracy but that there is always a degree of uncertainty. Ifone likes, one could ascribe this randomness to the interventionof God, but it would be a very strange kind of intervention: there is no evidence that it is directed toward any purpose. Indeed, if it were, it would by definition not be random. Inmodern times, we have effectively removed the third possibilityabove by redefining the goal of science: our aim is to formulatea set of laws that enables us to predict events only up to thelimit set by the uncertainty principle. The second possibility, that there is an infinite sequence ofmore and more refined theories, is in agreement with all ourexperience so far. On many occasions we have increased thesensitivity of our measurements or made a new class ofobservations, only to discover new phenomena that were notpredicted by the existing theory, and to account for these wehave had to develop a more advanced theory. It wouldtherefore not be very surprising if the present generation ofgrand unified theories was wrong in claiming that nothingessentially new will happen between the electroweak unificationenergy of about 100 GeV and the grand unification energy ofabout a thousand million million GeV. We might indeed expectto find several new layers of structure more basic than thequarks and electrons that we now regard as “elementary” particles. However, it seems that gravity may provide a limit to thissequence of “boxes within boxes.” If one had a particle with anenergy above what is called the Planck energy, ten millionmillion million GeV (1 followed by nineteen zeros), its masswould be so concentrated that it would cut itself off from therest of the universe and form a little black hole. Thus it doesseem that the sequence of more and more refined theoriesshould have some limit as we go to higher and higherenergies, so that there should be some ultimate theory of theuniverse. Of course, the Planck energy is a very long way fromthe energies of around a hundred GeV, which are the mostthat we can produce in the laboratory at the present time. Weshall not bridge that gap with particle accelerators in theforeseeable future! The very early stages of the universe,however, are an arena where such energies must haveoccurred. I think that there is a good chance that the study ofthe early universe and the requirements of mathematicalconsistency will lead us to a complete unified theory within thelifetime of some of us who are around today, always presumingwe don’t blow ourselves up first. What would it mean if we actually did discover the ultimatetheory of the universe? As was explained in Chapter 1, wecould never be quite sure that we had indeed found thecorrect theory, since theories can’t be proved. But if the theorywas mathematically consistent and always gave predictions thatagreed with observations, we could be reasonably confident thatit was the right one. It would bring to an end a long andglorious chapter in the history of humanity’s intellectual struggleto understand the universe. But it would also revolutionize theordinary person’s understanding of the laws that govern theuniverse. In Newton’s time it was possible for an educatedperson to have a grasp of the whole of human knowledge, atleast in outline. But since then, the pace of the development ofscience has made this impossible. 
Because theories are alwaysbeing changed to account for new observations, they are neverproperly digested or simplified so that ordinary people canunderstand them. You have to be a specialist, and even thenyou can only hope to have a proper grasp of a smallproportion of the scientific theories. Further, the rate ofprogress is so rapid that what one learns at school oruniversity is always a bit out of date. Only a few people cankeep up with the rapidly advancing frontier of knowledge, andthey have to devote their whole time to it and specialize in asmall area. The rest of the population has little idea of theadvances that are being made or the excitement they aregenerating. Seventy years ago, if Eddington is to be believed,only two people understood the general theory of relativity. Nowadays tens of thousands of university graduates do, andmany millions of people are at least familiar with the idea. If acomplete unified theory was discovered, it would only be amatter of time before it was digested and simplified in thesame way and taught in schools, at least in outline. We wouldthen all be able to have some understanding of the laws thatgovern the universe and are responsible for our existence. Even if we do discover a complete unified theory, it wouldnot mean that we would be able to predict events in general,for two reasons. The first is the limitation that the uncertaintyprinciple of quantum mechanics sets on our powers ofprediction. There is nothing we can do to get around that. Inpractice, however, this first limitation is less restrictive than thesecond one. It arises from the fact that we could not solve theequations of the theory exactly, except in very simple situations. (We cannot even solve exactly for the motion of three bodiesin Newton’s theory of gravity, and the difficulty increases withthe number of bodies and the complexity of the theory.) Wealready know the laws that govern the behavior of matterunder all but the most extreme conditions. In particular, weknow the basic laws that underlie all of chemistry and biology. Yet we have certainly not reduced these subjects to the statusof solved problems: we have, as yet, had little success inpredicting human behavior from mathematical equations! Soeven if we do find a complete set of basic laws, there will stillbe in the years ahead the intellectually challenging task ofdeveloping better approximation methods, so that we can makeuseful predictions of the probable outcomes in complicated andrealistic situations. A complete, consistent, unified theory is onlythe first step: our goal is a complete understanding of theevents around us, and of our own existence. CHAPTER 12 CONCLUSION We find ourselves in a bewildering world. We want to makesense of what we see around us and to ask: What is thenature of the universe? What is our place in it and where didit and we come from? Why is it the way it is? To try to answer these questions we adopt some “worldpicture.” Just as an infinite tower of tortoises supporting the fiatearth is such a picture, so is the theory of superstrings. Bothare theories of the universe, though the latter is much moremathematical and precise than the former. Both theories lackobservational evidence: no one has ever seen a giant tortoisewith the earth on its back, but then, no one has seen asuperstring either. However, the tortoise theory fails to be agood scientific theory because it predicts that people should beable to fall off the edge of the world. 
This has not been foundto agree with experience, unless that turns out to be theexplanation for the people who are supposed to havedisappeared in the Bermuda Triangle! The earliest theoretical attempts to describe and explain theuniverse involved the idea that events and natural phenomenawere controlled by spirits with human emotions who acted in avery humanlike and unpredictable manner. These spiritsinhabited natural objects, like rivers and mountains, includingcelestial bodies, like the sun and moon. They had to beplacated and their favor sought in order to ensure the fertilityof the soil and the rotation of the seasons. Gradually, however,it must have been noticed that there were certain regularities: the sun always rose in the east and set in the west, whetheror not a sacrifice had been made to the sun god. Further, thesun, the moon, and the planets followed precise paths acrossthe sky that could be predicted in advance with considerableaccuracy. The sun and the moon might still be gods, but theywere gods who obeyed strict laws, apparently without anyexceptions, if one discounts stories like that of the sun stoppingfor Joshua. At first, these regularities and laws were obvious only inastronomy and a few other situations. However, as civilizationdeveloped, and particularly in the last 300 years, more andmore regularities and laws were discovered. The success ofthese laws led Laplace at the beginning of the nineteenthcentury to postulate scientific determinism; that is, he suggestedthat there would be a set of laws that would determine theevolution of the universe precisely, given its configuration at onetime. Laplace’s determinism was incomplete in two ways. It did notsay how the laws should be chosen and it did not specify theinitial configuration of the universe. These were left to God. God would choose how the universe began and what laws itobeyed, but he would not intervene in the universe once it hadstarted. In effect, God was confined to the areas thatnineteenth-century science did not under-stand. We now know that Laplace’s hopes of determinism cannotbe realized, at least in the terms he had in mind. Theuncertainty principle of quantum mechanics implies that certainpairs of quantities, such as the position and velocity of aparticle, cannot both be predicted with complete accuracy. Quantum mechanics deals with this situation via a class ofquantum theories in which particles don’t have well-definedpositions and velocities but are represented by a wave. Thesequantum theories are deterministic in the sense that they givelaws for the evolution of the wave with time. Thus if oneknows the wave at one time, one can calculate it at any othertime. The unpredictable, random element comes in only whenwe try to interpret the wave in terms of the positions andvelocities of particles. But maybe that is our mistake: maybethere are no particle positions and velocities, but only waves. Itis just that we try to fit the waves to our preconceived ideasof positions and velocities. The resulting mismatch is the causeof the apparent unpredictability. In effect, we have redefined the task of science to be thediscovery of laws that will enable us to predict events up to thelimits set by the uncertainty principle. The question remains,however: how or why were the laws and the initial state of theuniverse chosen? 
In this book I have given special prominence to the lawsthat govern gravity, because it is gravity that shapes thelarge-scale structure of the universe, even though it is theweakest of the four categories of forces. The laws of gravitywere incompatible with the view held until quite recently thatthe universe is unchanging in time: the fact that gravity isalways attractive implies that the universe must be eitherexpanding or contracting. According to the general theory ofrelativity, there must have been a state of infinite density in thepast, the big bang, which would have been an effectivebeginning of time. Similarly, if the whole universe recollapsed,there must be another state of infinite density in the future, thebig crunch, which would be an end of time. Even if the wholeuniverse did not recollapse, there would be singularities in anylocalized regions that collapsed to form black holes. Thesesingularities would be an end of time for anyone who fell intothe black hole. At the big bang and other singularities, all thelaws would have broken down, so God would still have hadcomplete freedom to choose what happened and how theuniverse began. When we combine quantum mechanics with general relativity,there seems to be a new possibility that did not arise before: that space and time together might form a finite,four-dimensional space without singularities or boundaries, likethe surface of the earth but with more dimensions. It seemsthat this idea could explain many of the observed features ofthe universe, such as its large-scale uniformity and also thesmaller-scale departures from homogeneity, like galaxies, stars,and even human beings. It could even account for the arrowof time that we observe. But if the universe is completelyself-contained, with no singularities or boundaries, andcompletely described by a unified theory, that has profoundimplications for the role of God as Creator. Einstein once asked the question: “How much choice didGod have in constructing the universe?” If the no boundaryproposal is correct, he had no freedom at all to choose initialconditions. He would, of course, still have had the freedom tochoose the laws that the universe obeyed. This, however, maynot really have been all that much of a choice; there may wellbe only one, or a small number, of complete unified theories,such as the heterotic string theory, that are self-consistent andallow the existence of structures as complicated as humanbeings who can investigate the laws of the universe and askabout the nature of God. Even if there is only one possible unified theory, it is just aset of rules and equations. What is it that breathes fire into theequations and makes a universe for them to describe? Theusual approach of science of constructing a mathematical modelcannot answer the questions of why there should be a universefor the model to describe. Why does the universe go to all thebother of existing? Is the unified theory so compelling that itbrings about its own existence? Or does it need a creator, and,if so, does he have any other effect on the universe? And whocreated him? Up to now, most scientists have been too occupied with thedevelopment of new theories that describe what the universe isto ask the question why. On the other hand, the people whosebusiness it is to ask why, the philosophers, have not been ableto keep up with the advance of scientific theories. 
In theeighteenth century, philosophers considered the whole of humanknowledge, including science, to be their field and discussedquestions such as: did the universe have a beginning? However, in the nineteenth and twentieth centuries, sciencebecame too technical and mathematical for the philosophers, oranyone else except a few specialists. Philosophers reduced thescope of their inquiries so much that Wittgenstein, the mostfamous philosopher of this century, said, “The sole remainingtask for philosophy is the analysis of language.” What acomedown from the great tradition of philosophy from Aristotleto Kant! However, if we do discover a complete theory, it should intime be understandable in broad principle by everyone, not justa few scientists. Then we shall all, philosophers, scientists, andjust ordinary people, be able to take part in the discussion ofthe question of why it is that we and the universe exist. If wefind the answer to that, it would be the ultimate triumph ofhuman reason - for then we would know the mind of God. ALBERT EINSTEINEinstein’s connection with the politics of the nuclear bomb iswell known: he signed the famous letter to President FranklinRoosevelt that persuaded the United States to take the ideaseriously, and he engaged in postwar efforts to prevent nuclearwar. But these were not just the isolated actions of a scientistdragged into the world of politics. Einstein’s life was, in fact, touse his own words, “divided between politics and equations.” Einstein’s earliest political activity came during the First WorldWar, when he was a professor in Berlin. Sickened by what hesaw as the waste of human lives, he became involved inantiwar demonstrations. His advocacy of civil disobedience andpublic encouragement of people to refuse conscription did littleto endear him to his colleagues. Then, following the war, hedirected his efforts toward reconciliation and improvinginternational relations. This too did not make him popular, andsoon his politics were making it difficult for him to visit theUnited States, even to give lectures. Einstein’s second great cause was Zionism. Although he wasJewish by descent, Einstein rejected the biblical idea of God. However, a growing awareness of anti-Semitism, both beforeand during the First World War, led him gradually to identifywith the Jewish community, and later to become an outspokensupporter of Zionism. Once more unpopularity did not stop himfrom speaking his mind. His theories came under attack; ananti-Einstein organization was even set up. One man wasconvicted of inciting others to murder Einstein (and fined amere six dollars). But Einstein was phlegmatic. When a bookwas published entitled 100 Authors Against Einstein, heretorted, “If I were wrong, then one would have been enough!” In 1933, Hitler came to power. Einstein was in America, anddeclared he would not return to Germany. Then, while Nazimilitia raided his house and confiscated his bank account, aBerlin newspaper displayed the headline “Good News fromEinstein - He’s Not Coming Back.” In the face of the Nazithreat, Einstein renounced pacifism, and eventually, fearing thatGerman scientists would build a nuclear bomb, proposed thatthe United States should develop its own. But even before thefirst atomic bomb had been detonated, he was publicly warningof the dangers of nuclear war and proposing internationalcontrol of nuclear weaponry. Throughout his life, Einstein’s efforts toward peace probablyachieved little that would last - and certainly won him fewfriends. 
His vocal support of the Zionist cause, however, wasduly recognized in 1952, when he was offered the presidency ofIsrael. He declined, saying he thought he was too naive inpolitics. But perhaps his real reason was different: to quote himagain, “Equations are more important to me, because politics isfor the present, but an equation is something for eternity.” GALILEO GALILEIGalileo, perhaps more than any other single person, wasresponsible for the birth of modern science. His renownedconflict with the Catholic Church was central to his philosophy,for Galileo was one of the first to argue that man could hopeto understand how the world works, and, moreover, that wecould do this by observing the real world. Galileo had believed Copernican theory (that the planetsorbited the sun) since early on, but it was only when he foundthe evidence needed to support the idea that he started topublicly support it. He wrote about Copernicus’s theory inItalian (not the usual academic Latin), and soon his viewsbecame widely supported outside the universities. This annoyedthe Aristotelian professors, who united against him seeking topersuade the Catholic Church to ban Copernicanism. Galileo, worried by this, traveled to Rome to speak to theecclesiastical authorities. He argued that the Bible was notintended to tell us anything about scientific theories, and that itwas usual to assume that, where the Bible conflicted withcommon sense, it was being allegorical. But the Church wasafraid of a scandal that might undermine its fight againstProtestantism, and so took repressive measures. It declaredCopernicanism “false and erroneous” in 1616, and commandedGalileo never again to “defend or hold” the doctrine. Galileoacquiesced. In 1623, a longtime friend of Galileo’s became the Pope. Immediately Galileo tried to get the 1616 decree revoked. Hefailed, but he did manage to get permission to write a bookdiscussing both Aristotelian and Copernican theories, on twoconditions: he would not take sides and would come to theconclusion that man could in any case not determine how theworld worked because God could bring about the same effectsin ways unimagined by man, who could not place restrictionson God’s omnipotence. The book, Dialogue Concerning the Two Chief WorldSystems, was completed and published in 1632, with the fullbacking of the censors - and was immediately greetedthroughout Europe as a literary and philosophical masterpiece. Soon the Pope, realizing that people were seeing the book as aconvincing argument in favor of Copernicanism, regretted havingallowed its publication. The Pope argued that although the bookhad the official blessing of the censors, Galileo had neverthelesscontravened the 1616 decree. He brought Galileo before theInquisition, who sentenced him to house arrest for life andcommanded him to publicly renounce Copernicanism. For asecond time, Galileo acquiesced. Galileo remained a faithful Catholic, but his belief in theindependence of science had not been crushed. Four yearsbefore his death in 1642, while he was still under house arrest,the manuscript of his second major book was smuggled to apublisher in Holland. It was this work, referred to as Two NewSciences, even more than his support for Copernicus, that wasto be the genesis of modern physics. ISAAC NEWTONIsaac Newton was not a pleasant man. His relations withother academics were notorious, with most of his later life spentembroiled in heated disputes. 
Following publication of PrincipiaMathematica - surely the most influential book ever written inphysics - Newton had risen rapidly into public prominence. Hewas appointed president of the Royal Society and became thefirst scientist ever to be knighted. Newton soon clashed with the Astronomer Royal, JohnFlamsteed, who had earlier provided Newton with much-neededdata for Principia, but was now withholding information thatNewton wanted. New-ton would not take no for an answer: hehad himself appointed to the governing body of the RoyalObservatory and then tried to force immediate publication ofthe data. Eventually he arranged for Flamsteed’s work to beseized and prepared for publication by Flamsteed’s mortalenemy, Edmond Halley. But Flamsteed took the case to courtand, in the nick of time, won a court order preventingdistribution of the stolen work. Newton was incensed andsought his revenge by systematically deleting all references toFlamsteed in later editions of Principia. A more serious dispute arose with the German philosopherGottfried Leibniz. Both Leibniz and Newton had independentlydeveloped a branch of mathematics called calculus, whichunderlies most of modern physics. Although we now know thatNewton discovered calculus years before Leibniz, he publishedhis work much later. A major row ensued over who had beenfirst, with scientists vigorously defending both contenders. It isremarkable, however, that most of the articles appearing indefense of Newton were originally written by his own hand -and only published in the name of friends! As the row grew,Leibniz made the mistake of appealing to the Royal Society toresolve the dispute. Newton, as president, appointed an“impartial” committee to investigate, coincidentally consistingentirely of Newton’s friends! But that was not all: Newton thenwrote the committee’s report himself and had the Royal Societypublish it, officially accusing Leibniz of plagiarism. Still unsatisfied,he then wrote an anonymous review of the report in the RoyalSociety’s own periodical. Following the death of Leibniz, Newtonis reported to have declared that he had taken greatsatisfaction in “breaking Leibniz’s heart.” During the period of these two disputes, Newton had alreadyleft Cambridge and academe. He had been active inanti-Catholic politics at Cambridge, and later in Parliament, andwas rewarded eventually with the lucrative post of Warden ofthe Royal Mint. Here he used his talents for deviousness andvitriol in a more socially acceptable way, successfully conductinga major campaign against counterfeiting, even sending severalmen to their death on the gallows. GLOSSARYAbsolute zero: The lowest possible temperature, at whichsubstances contain no heat energy. Acceleration: The rate at which the speed of an object ischanging. Anthropic principle: We see the universe the way it isbecause if it were different we would not be here to observe it. Antiparticle: Each type of matter particle has a correspondingantiparticle. When a particle collides with its antiparticle, theyannihilate, leaving only energy. Atom: The basic unit of ordinary matter, made up of a tinynucleus (consisting of protons and neutrons) surrounded byorbiting electrons. Big bang: The singularity at the beginning of the universe. Big crunch: The singularity at the end of the universe. Black hole: A region of space-time from which nothing, noteven light, can escape, because gravity is so strong. 
Casimir effect: The attractive pressure between two flat, parallel metal plates placed very near to each other in a vacuum. The pressure is due to a reduction in the usual number of virtual particles in the space between the plates.
Chandrasekhar limit: The maximum possible mass of a stable cold star, above which it must collapse into a black hole.
Conservation of energy: The law of science that states that energy (or its equivalent in mass) can neither be created nor destroyed.
Coordinates: Numbers that specify the position of a point in space and time.
Cosmological constant: A mathematical device used by Einstein to give space-time an inbuilt tendency to expand.
Cosmology: The study of the universe as a whole.
Dark matter: Matter in galaxies, clusters, and possibly between clusters, that cannot be observed directly but can be detected by its gravitational effect. As much as 90 percent of the mass of the universe may be in the form of dark matter.
Duality: A correspondence between apparently different theories that lead to the same physical results.
Einstein-Rosen bridge: A thin tube of space-time linking two black holes. Also see Wormhole.
Electric charge: A property of a particle by which it may repel (or attract) other particles that have a charge of similar (or opposite) sign.
Electromagnetic force: The force that arises between particles with electric charge; the second strongest of the four fundamental forces.
Electron: A particle with negative electric charge that orbits the nucleus of an atom.
Electroweak unification energy: The energy (around 100 GeV) above which the distinction between the electromagnetic force and the weak force disappears.
Elementary particle: A particle that, it is believed, cannot be subdivided.
Event: A point in space-time, specified by its time and place.
Event horizon: The boundary of a black hole.
Exclusion principle: The idea that two identical spin-1/2 particles cannot have (within the limits set by the uncertainty principle) both the same position and the same velocity.
Field: Something that exists throughout space and time, as opposed to a particle that exists at only one point at a time.
Frequency: For a wave, the number of complete cycles per second.
Gamma rays: Electromagnetic rays of very short wavelength, produced in radioactive decay or by collisions of elementary particles.
General relativity: Einstein’s theory based on the idea that the laws of science should be the same for all observers, no matter how they are moving. It explains the force of gravity in terms of the curvature of a four-dimensional space-time.
Geodesic: The shortest (or longest) path between two points.
Grand unification energy: The energy above which, it is believed, the electromagnetic force, weak force, and strong force become indistinguishable from each other.
Grand unified theory (GUT): A theory which unifies the electromagnetic, strong, and weak forces.
Imaginary time: Time measured using imaginary numbers.
Light cone: A surface in space-time that marks out the possible directions for light rays passing through a given event.
Light-second (light-year): The distance traveled by light in one second (year).
Magnetic field: The field responsible for magnetic forces, now incorporated along with the electric field into the electromagnetic field.
Mass: The quantity of matter in a body; its inertia, or resistance to acceleration.
Microwave background radiation: The radiation from the glowing of the hot early universe, now so greatly red-shifted that it appears not as light but as microwaves (radio waves with a wavelength of a few centimeters). Also see COBE.
Naked singularity: A space-time singularity not surrounded by a black hole.
Neutrino: An extremely light (possibly massless) particle that is affected only by the weak force and gravity.
Neutron: An uncharged particle, very similar to the proton, which accounts for roughly half the particles in an atomic nucleus.
Neutron star: A cold star, supported by the exclusion principle repulsion between neutrons.
No boundary condition: The idea that the universe is finite but has no boundary (in imaginary time).
Nuclear fusion: The process by which two nuclei collide and coalesce to form a single, heavier nucleus.
Nucleus: The central part of an atom, consisting only of protons and neutrons, held together by the strong force.
Particle accelerator: A machine that, using electromagnets, can accelerate moving charged particles, giving them more energy.
Phase: For a wave, the position in its cycle at a specified time: a measure of whether it is at a crest, a trough, or somewhere in between.
Photon: A quantum of light.
Planck’s quantum principle: The idea that light (or any other classical waves) can be emitted or absorbed only in discrete quanta, whose energy is proportional to their frequency.
Positron: The (positively charged) antiparticle of the electron.
Primordial black hole: A black hole created in the very early universe.
Proportional: ‘X is proportional to Y’ means that when Y is multiplied by any number, so is X. ‘X is inversely proportional to Y’ means that when Y is multiplied by any number, X is divided by that number.
Proton: A positively charged particle, very similar to the neutron, that accounts for roughly half the particles in the nucleus of most atoms.
Pulsar: A rotating neutron star that emits regular pulses of radio waves.
Quantum: The indivisible unit in which waves may be emitted or absorbed.
Quantum chromodynamics (QCD): The theory that describes the interactions of quarks and gluons.
Quantum mechanics: The theory developed from Planck’s quantum principle and Heisenberg’s uncertainty principle.
Quark: A (charged) elementary particle that feels the strong force. Protons and neutrons are each composed of three quarks.
Radar: A system using pulsed radio waves to detect the position of objects by measuring the time it takes a single pulse to reach the object and be reflected back.
Radioactivity: The spontaneous breakdown of one type of atomic nucleus into another.
Red shift: The reddening of light from a star that is moving away from us, due to the Doppler effect.
Singularity: A point in space-time at which the space-time curvature becomes infinite.
Singularity theorem: A theorem that shows that a singularity must exist under certain circumstances - in particular, that the universe must have started with a singularity.
Space-time: The four-dimensional space whose points are events.
Spatial dimension: Any of the three dimensions that are spacelike - that is, any except the time dimension.
Special relativity: Einstein’s theory based on the idea that the laws of science should be the same for all observers, no matter how they are moving, in the absence of gravitational phenomena.
Spectrum: The component frequencies that make up a wave. The visible part of the sun’s spectrum can be seen in a rainbow.
Spin: An internal property of elementary particles, related to, but not identical to, the everyday concept of spin.
Stationary state: One that is not changing with time: a sphere spinning at a constant rate is stationary because it looks identical at any given instant.
String theory: A theory of physics in which particles are described as waves on strings. Strings have length but no other dimension.
Strong force: The strongest of the four fundamental forces, with the shortest range of all. It holds the quarks together within protons and neutrons, and holds the protons and neutrons together to form atomic nuclei.
Uncertainty principle: The principle, formulated by Heisenberg, that one can never be exactly sure of both the position and the velocity of a particle; the more accurately one knows the one, the less accurately one can know the other.
Virtual particle: In quantum mechanics, a particle that can never be directly detected, but whose existence does have measurable effects.
Wave/particle duality: The concept in quantum mechanics that there is no distinction between waves and particles; particles may sometimes behave like waves, and waves like particles.
Wavelength: For a wave, the distance between two adjacent troughs or two adjacent crests.
Weak force: The second weakest of the four fundamental forces, with a very short range. It affects all matter particles, but not force-carrying particles.
Weight: The force exerted on a body by a gravitational field. It is proportional to, but not the same as, its mass.
White dwarf: A stable cold star, supported by the exclusion principle repulsion between electrons.
Wormhole: A thin tube of space-time connecting distant regions of the universe. Wormholes might also link to parallel or baby universes and could provide the possibility of time travel.

ACKNOWLEDGEMENTS

Many people have helped me in writing this book. My scientific colleagues have without exception been inspiring. Over the years my principal associates and collaborators were Roger Penrose, Robert Geroch, Brandon Carter, George Ellis, Gary Gibbons, Don Page, and Jim Hartle. I owe a lot to them, and to my research students, who have always given me help when needed.

One of my students, Brian Whitt, gave me a lot of help writing the first edition of this book. My editor at Bantam Books, Peter Guzzardi, made innumerable comments which improved the book considerably. In addition, for this edition, I would like to thank Andrew Dunn, who helped me revise the text.

I could not have written this book without my communication system. The software, called Equalizer, was donated by Walt Woltosz of Words Plus Inc., in Lancaster, California. My speech synthesizer was donated by Speech Plus, of Sunnyvale, California. The synthesizer and laptop computer were mounted on my wheelchair by David Mason, of Cambridge Adaptive Communication Ltd. With this system I can communicate better now than before I lost my voice.

I have had a number of secretaries and assistants over the years in which I wrote and revised this book. On the secretarial side, I’m very grateful to Judy Fella, Ann Ralph, Laura Gentry, Cheryl Billington, and Sue Masey. My assistants have been Colin Williams, David Thomas, Raymond Laflamme, Nick Phillips, Andrew Dunn, Stuart Jamieson, Jonathan Brenchley, Tim Hunt, Simon Gill, Jon Rogers, and Tom Kendall. They, my nurses, colleagues, friends, and family have enabled me to live a very full life and to pursue my research despite my disability.

The End