Understanding the Universe
Will the universe expand forever or eventually stop expanding and collapse in on itself? Jay M. Pasachoff, professor of astronomy at ...
The petawatt laser was able to attain this vast power by delivering a very short pulse onto the target. After all, the planet did not experience a blackout when the laser was switched on; rather, the laser boosts the power available by squeezing a modest amount of energy into an extremely short pulse and focusing it onto a microscopic volume. Vulcan blasted its target with the one-petawatt beam for a mere picosecond (a millionth of a millionth of a second). This may seem minuscule, but that fleeting burst was enough to heat the target material to about 10 million kelvin.
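The figures in this paragraph make the point by simple arithmetic: a petawatt sustained for only a picosecond amounts to a modest total energy. A minimal sketch, using the nominal figures above (the actual pulse energy of Vulcan is not stated here):

    # A 1-petawatt pulse lasting 1 picosecond: enormous power, modest energy.
    power_W = 1e15            # 1 petawatt = 10^15 watts
    duration_s = 1e-12        # 1 picosecond = 10^-12 seconds

    energy_J = power_W * duration_s
    print(energy_J)           # 1000.0 joules, i.e. about 1 kJ
    # Roughly the energy a domestic kettle draws in well under a second, but
    # concentrated into a microscopic spot for an instant, which is why the
    # lights stay on when the laser fires.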
These tests not only allow scientists to study what happens when matter is heated to such extremes; they also pave the way for more powerful lasers capable of fusing the nuclei of hydrogen isotopes such as deuterium and tritium. Self-sustaining nuclear fusion may then be possible, unlocking a gateway to a huge source of energy. It is conceivable that a future fusion reactor will use a powerful, focused laser to trigger fusion events, with the energy produced by each reaction helping to drive the next; this is what would make the process self-sustaining.
"This is an exciting development - we now have a new tool with which to study really hot, dense matter" - Prof. Peter Norreys, STFC funded researcher and Vulcan scientist.
Vulcan has some stiff competition, though. In the US, the Texas Petawatt laser broke the record for the most powerful laser a few days ago, reaching powers in excess of one petawatt. A planned, larger UK laser, HiPER (High Power laser Energy Research), is intended to be more powerful still and to investigate fusion power.
In their article "From the Fundamental Theorem of Algebra to Astrophysics: A `Harmonious' Path", which appears in the Notices of the AMS, mathematicians Dmitry Khavinson (University of South Florida) and Genevra Neumann (University of Northern Iowa) describe the mathematical work that surprisingly led them to questions in astrophysics.
The Fundamental Theorem of Algebra (FTA), proofs of which go back to the 18th century, is a bedrock mathematical truth, elegant in its simplicity: Every complex polynomial of degree n has n roots in the complex numbers. In the 1990s, Terry Sheil-Small and Alan Wilmshurst explored the question of extending the FTA to harmonic polynomials. In a surprising twist in 2001, Khavinson, together with G. Swiatek, applied methods from complex dynamics to settle one of the cases of Wilmshurst's conjecture, showing that for a certain class of harmonic polynomials, the number of zeros is at most 3n - 2, where n is the degree of the polynomial.
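In symbols, the two statements line up as follows. This is a sketch: the article does not spell out which class of harmonic polynomials Khavinson and Swiatek treated, but their result is usually stated for functions of the form p(z) - \bar{z}:

    % Fundamental Theorem of Algebra: a degree-n complex polynomial
    % p(z) = a_n z^n + \cdots + a_0 (with a_n \neq 0) has exactly n roots
    % in \mathbb{C}, counted with multiplicity.
    %
    % Harmonic analogue (the case settled by Khavinson and Swiatek), n = \deg p \ge 2:
    \#\{\, z \in \mathbb{C} \;:\; p(z) - \bar{z} = 0 \,\} \;\le\; 3n - 2 .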
When she was a postdoc at Kansas State University, Neumann mentioned the 3n-2 result in a talk, and Pietro Poggi-Corradini wondered whether Khavinson and Swiatek's complex dynamics approach could be extended to counting the zeros of rational harmonic functions. (A rational function is a quotient of polynomials, and a rational harmonic function is the sum of a rational function and the complex conjugate of a rational function.) She later asked Khavinson about this possibility. "We didn't have any idea what the answer would be," she said. And they certainly had no idea that an astrophysicist had already conjectured the answer.
"We were slightly surprised that the number came out different, 5n - 5 vs. 3n - 2," recalled Khavinson. They also wondered whether the bound of 5n - 5 was "sharp"---that is, whether it could be pushed any lower. "After checking and re-checking it, we posted a preprint on the arXiv and then returned to our respective business," Khavinson said. "Literally, a week later we received a congratulatory e-mail from Jeffrey Rabin of UCSD kindly telling us that our theorem resolves a conjecture of Sun Hong Rhie in astrophysics." Khavinson and Neumann had no idea that anyone outside of mathematics would be interested in this result.
Rhie has been studying the problem of gravitational lensing, a phenomenon in which light from a celestial source, such as a star or galaxy, is deflected by a massive object (or objects) between the light source and the observer. Because of the deflection, the observer sees multiple images of the same light source. The phenomenon was first predicted in the early 19th century, using Newtonian mechanics. A more accurate prediction was made by Einstein in 1915 using his theory of general relativity, and early observational support came in 1919 during a solar eclipse. The first gravitational lensing system was discovered in 1979.
It turns out that at least in some idealized situations one can count the number of images of the light source seen in a gravitational lensing system by counting the number of zeros of a rational harmonic function---exactly the kind of function Khavinson and Neumann had been studying. While investigating the possible number of images produced by a gravitational lens that has n point masses deflecting the light, Rhie had conjectured the bound of 5n - 5 that so surprised Khavinson and Neumann. Rhie also came up with an ingenious way of constructing an example of a rational harmonic function with exactly 5n - 5 zeros. Together with the result of Khavinson and Neumann, this example establishes that their 5n - 5 bound is sharp.
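Schematically, and under the usual idealizations (the article does not give the lens equation; the point-mass form below, with masses sigma_j at positions z_j and the source at w, is the standard one):

    % Khavinson-Neumann bound: for a rational function r of degree n \ge 2,
    \#\{\, z \in \mathbb{C} \;:\; r(z) - \bar{z} = 0 \,\} \;\le\; 5n - 5 .
    %
    % Point-mass lens equation: the images of a source at w are the solutions z of
    \bar{z} \;=\; \bar{w} \;+\; \sum_{j=1}^{n} \frac{\sigma_j}{z - z_j} ,
    % which has exactly the form r(z) - \bar{z} = 0 with \deg r = n, so a lens of
    % n point masses produces at most 5n - 5 images (for n \ge 2); Rhie's
    % configurations show that this maximum is actually attained.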
After hearing about Rhie's work, Khavinson and Neumann contacted other mathematicians and astrophysicists who worked on similar problems and received feedback they then used to revise their paper (it has since appeared in Proceedings of the AMS). These interactions led Khavinson into fruitful collaborations with astrophysicists on related questions. Some of the new results from this work are mentioned in the Notices article.
"I find this kind of interdisciplinary collaboration extremely exciting and stimulating," said Khavinson. "I just hope that I will be able to continue these collaborations. It is one of the most exciting experiences I have had in my life." Neumann is just as enthusiastic, and is grateful to Kansas State physicist Larry Weaver, who helped her to understand the physics of gravitational lensing, and to Rabin, who acted as the link between mathematics and astrophysics. "Professor Rabin's generous email introduced both Dmitry and me to an entirely new world," she said.
Chemistry
Introduction
Chemistry is the science that deals with the properties, composition, and structure of substances (defined as elements and compounds), the transformations they undergo, and the energy that is released or absorbed during these processes. Every substance, whether naturally occurring or artificially produced, consists of one or more of the hundred-odd species of atoms that have been identified as elements. Although these atoms, in turn, are composed of more elementary particles, they are the basic building blocks of chemical substances; there is no quantity of oxygen, mercury, or gold, for example, smaller than an atom of that substance. Chemistry, therefore, is concerned not with the subatomic domain but with the properties of atoms and the laws governing their combinations, and with how knowledge of these properties can be used to achieve specific purposes.
The great challenge in chemistry is the development of a coherent explanation of the complex behaviour of materials, why they appear as they do, what gives them their enduring properties, and how interactions among different substances can bring about the formation of new substances and the destruction of old ones. From the earliest attempts to understand the material world in rational terms, chemists have struggled to develop theories of matter that satisfactorily explain both permanence and change. The ordered assembly of indestructible atoms into small and large molecules, or extended networks of intermingled atoms, is generally accepted as the basis of permanence, while the reorganization of atoms or molecules into different arrangements lies behind theories of change. Thus chemistry involves the study of the atomic composition and structural architecture of substances, as well as the varied interactions among substances that can lead to sudden, often violent reactions.
Chemistry also is concerned with the utilization of natural substances and the creation of artificial ones. Cooking, fermentation, glass making, and metallurgy are all chemical processes that date from the beginnings of civilization. Today, vinyl, Teflon, liquid crystals, semiconductors, and superconductors represent the fruits of chemical technology. The 20th century has seen dramatic advances in the comprehension of the marvelous and complex chemistry of living organisms, and a molecular interpretation of health and disease holds great promise. Modern chemistry, aided by increasingly sophisticated instruments, studies materials as small as single atoms and as large and complex as DNA (deoxyribonucleic acid), which contains millions of atoms. New substances can even be designed to bear desired characteristics and then synthesized. The rate at which chemical knowledge continues to accumulate is remarkable. Over time more than 8,000,000 different chemical substances, both natural and artificial, have been characterized and produced. The number was less than 500,000 as recently as 1965.
Intimately interconnected with the intellectual challenges of chemistry are those associated with industry. In the mid-19th century the German chemist Justus von Liebig commented that the wealth of a nation could be gauged by the amount of sulfuric acid it produced. This acid, essential to many manufacturing processes, remains today the leading chemical product of industrialized countries. As Liebig recognized, a country that produces large amounts of sulfuric acid is one with a strong chemical industry and a strong economy as a whole. The production, distribution, and utilization of a wide range of chemical products is common to all highly developed nations. In fact, one can say that the “iron age” of civilization is being replaced by a “polymer age,” for in some countries the total volume of polymers now produced exceeds that of iron.
The scope of chemistry
The days are long past when one person could hope to have a detailed knowledge of all areas of chemistry. Those pursuing their interests into specific areas of chemistry communicate with others who share the same interests. Over time a group of chemists with specialized research interests becomes the founding core of a new area of specialization. The areas of specialization that emerged early in the history of chemistry, such as organic, inorganic, physical, analytical, and industrial chemistry, along with biochemistry, remain of greatest general interest. There has been, however, much growth in the areas of polymer, environmental, and medicinal chemistry during the 20th century. Moreover, new specialties continue to appear, as, for example, pesticide, forensic, and computer chemistry.
Most of the materials that occur on Earth, such as wood, coal, minerals, or air, are mixtures of many different and distinct chemical substances. Each pure chemical substance (e.g., oxygen, iron, or water) has a characteristic set of properties that gives it its chemical identity. Iron, for example, is a common silver-white metal that melts at 1,535° C, is very malleable, and readily combines with oxygen to form the common substances hematite and magnetite. The detection of iron in a mixture of metals, or in a compound such as magnetite, is a branch of analytical chemistry called qualitative analysis. Measurement of the actual amount of a certain substance in a compound or mixture is termed quantitative analysis. Quantitative analytic measurement has determined, for instance, that iron makes up 72.3 percent, by mass, of magnetite, the mineral commonly seen as black sand along beaches and stream banks. Over the years, chemists have discovered chemical reactions that indicate the presence of such elemental substances by the production of easily visible and identifiable products. Iron can be detected by chemical means if it is present in a sample to an amount of 1 part per million or greater. Some very simple qualitative tests reveal the presence of specific chemical elements in even smaller amounts. The yellow colour imparted to a flame by sodium is visible if the sample being ignited has as little as one-billionth of a gram of sodium. Such analytic tests have allowed chemists to identify the types and amounts of impurities in various substances and to determine the properties of very pure materials. Substances used in common laboratory experiments generally have impurity levels of less than 0.1 percent. For special applications, one can purchase chemicals that have impurities totaling less than 0.001 percent. The identification of pure substances and the analysis of chemical mixtures enable all other chemical disciplines to flourish.
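The 72.3 percent figure for magnetite can be recovered from standard atomic masses (values the text does not quote); a minimal check:

    # Mass fraction of iron in magnetite, Fe3O4, from standard atomic masses.
    mass_Fe = 55.845                        # grams per mole
    mass_O = 15.999                         # grams per mole
    molar_mass = 3 * mass_Fe + 4 * mass_O   # about 231.5 g/mol
    iron_fraction = 3 * mass_Fe / molar_mass
    print(round(100 * iron_fraction, 1))    # about 72.4, in line with the ~72.3 percent quoted above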
The importance of analytical chemistry has never been greater than it is today. The demand in modern societies for a variety of safe foods, affordable consumer goods, abundant energy, and labour-saving technologies places a great burden on the environment. All chemical manufacturing produces waste products in addition to the desired substances, and waste disposal has not always been carried out carefully. Disruption of the environment has occurred since the dawn of civilization, and pollution problems have increased with the growth of global population. The techniques of analytical chemistry are relied on heavily to maintain a benign environment. The undesirable substances in water, air, soil, and food must be identified, their point of origin fixed, and safe, economical methods for their removal or neutralization developed. Once the amount of a pollutant deemed to be hazardous has been assessed, it becomes important to detect harmful substances at concentrations well below the danger level. Analytical chemists seek to develop increasingly accurate and sensitive techniques and instruments.
Sophisticated analytic instruments, often coupled with computers, have improved the accuracy with which chemists can identify substances and have lowered detection limits. An analytic technique in general use is gas chromatography, which separates the different components of a gaseous mixture by passing the mixture through a long, narrow column of absorbent but porous material. The different gases interact differently with this absorbent material and pass through the column at different rates. As the separate gases flow out of the column, they can be passed into another analytic instrument called a mass spectrometer, which separates substances according to the mass of their constituent ions. A combined gas chromatograph–mass spectrometer can rapidly identify the individual components of a chemical mixture whose concentrations may be no greater than a few parts per billion. Similar or even greater sensitivities can be obtained under favourable conditions using techniques such as atomic absorption, polarography, and neutron activation. The rate of instrumental innovation is such that analytic instruments often become obsolete within 10 years of their introduction. Newer instruments are more accurate and faster and are employed widely in the areas of environmental and medicinal chemistry.
Modern chemistry, which dates more or less from the acceptance of the law of conservation of mass in the late 18th century, focused initially on those substances that were not associated with living organisms. Study of such substances, which normally have little or no carbon, constitutes the discipline of inorganic chemistry. Early work sought to identify the simple substances—namely, the elements—that are the constituents of all more complex substances. Some elements, such as gold and carbon, have been known since antiquity, and many others were discovered and studied throughout the 19th and early 20th centuries. Today, more than 100 are known. The study of such simple inorganic compounds as sodium chloride (common salt) has led to some of the fundamental concepts of modern chemistry, the law of definite proportions providing one notable example. This law states that for most pure chemical substances the constituent elements are always present in fixed proportions by mass (e.g., every 100 grams of salt contains 39.3 grams of sodium and 60.7 grams of chlorine). The crystalline form of salt, known as halite, consists of intermingled sodium and chlorine atoms, one sodium atom for each one of chlorine. Such a compound, formed solely by the combination of two elements, is known as a binary compound. Binary compounds are very common in inorganic chemistry, and they exhibit little structural variety. For this reason, the number of inorganic compounds is limited in spite of the large number of elements that may react with each other. If three or more elements are combined in a substance, the structural possibilities become greater.
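Those fixed proportions follow directly from the atomic masses of sodium and chlorine (standard values of roughly 22.99 and 35.45, which the text does not quote):

    \frac{22.99}{22.99 + 35.45} \approx 0.393
    \qquad\text{and}\qquad
    \frac{35.45}{22.99 + 35.45} \approx 0.607 ,

in agreement with the 39.3 grams of sodium and 60.7 grams of chlorine per 100 grams of salt cited above.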
After a period of quiescence in the early part of the 20th century, inorganic chemistry has again become an exciting area of research. Compounds of boron and hydrogen, known as boranes, have unique structural features that forced a change in thinking about the architecture of inorganic molecules. Some inorganic substances have structural features long believed to occur only in carbon compounds, and a few inorganic polymers have even been produced. Ceramics are materials composed of inorganic elements combined with oxygen. For centuries ceramic objects have been made by strongly heating a vessel formed from a paste of powdered minerals. Although ceramics are quite hard and stable at very high temperatures, they are usually brittle. Currently, new ceramics strong enough to be used as turbine blades in jet engines are being manufactured. There is hope that ceramics will one day replace steel in components of internal-combustion engines. In 1987 a ceramic containing yttrium, barium, copper, and oxygen, with the approximate formula YBa2Cu3O7, was found to be a superconductor at a temperature of about 100 K. A superconductor offers no resistance to the passage of an electrical current, and this new type of ceramic could very well find wide use in electrical and magnetic applications. A superconducting ceramic is so simple to make that it can be prepared in a high school laboratory. Its discovery illustrates the unpredictability of chemistry, for fundamental discoveries can still be made with simple equipment and inexpensive materials.
Many of the most interesting developments in inorganic chemistry bridge the gap with other disciplines. Organometallic chemistry investigates compounds that contain inorganic elements combined with carbon-rich units. Many organometallic compounds play an important role in industrial chemistry as catalysts, which are substances that are able to accelerate the rate of a reaction even when present in only very small amounts. Some success has been achieved in the use of such catalysts for converting natural gas to related but more useful chemical substances. Chemists also have created large inorganic molecules that contain a core of metal atoms, such as platinum, surrounded by a shell of different chemical units. Some of these compounds, referred to as metal clusters, have characteristics of metals, while others react in ways similar to biologic systems. Trace amounts of metals in biologic systems are essential for processes such as respiration, nerve function, and cell metabolism. Processes of this kind form the object of study of bioinorganic chemistry. Although organic molecules were once thought to be the distinguishing chemical feature of living creatures, it is now known that inorganic chemistry plays a vital role as well.
Organic compounds are based on the chemistry of carbon. Carbon is unique in the variety and extent of structures that can result from the three-dimensional connections of its atoms. The process of photosynthesis converts carbon dioxide and water to oxygen and compounds known as carbohydrates. Both cellulose, the substance that gives structural rigidity to plants, and starch, the energy storage product of plants, are polymeric carbohydrates. Simple carbohydrates produced by photosynthesis form the raw material for the myriad organic compounds found in the plant and animal kingdoms. When combined with variable amounts of hydrogen, oxygen, nitrogen, sulfur, phosphorus, and other elements, the structural possibilities of carbon compounds become limitless, and their number far exceeds the total of all nonorganic compounds. A major focus of organic chemistry is the isolation, purification, and structural study of these naturally occurring substances. Many natural products are simple molecules. Examples include formic acid (HCO2H) in ants, ethyl alcohol (C2H5OH) in fermenting fruit, and oxalic acid (C2H2O4) in rhubarb leaves. Other natural products, such as penicillin, vitamin B12, proteins, and nucleic acids, are exceedingly complex. The isolation of pure natural products from their host organism is made difficult by the low concentrations in which they may be present. Once they are isolated in pure form, however, modern instrumental techniques can reveal structural details for amounts weighing as little as one-millionth of a gram. The correlation of the physical and chemical properties of compounds with their structural features is the domain of physical organic chemistry. Once the properties endowed upon a substance by specific structural units termed functional groups are known, it becomes possible to design novel molecules that may exhibit desired properties. The preparation, under controlled laboratory conditions, of specific compounds is known as synthetic chemistry. Some products are easier to synthesize than to collect and purify from their natural sources. Tons of vitamin C, for example, are synthesized annually. Many synthetic substances have novel properties that make them especially useful. Plastics are a prime example, as are many drugs and agricultural chemicals. A continuing challenge for synthetic chemists is the structural complexity of most organic substances. To synthesize a desired substance, the atoms must be pieced together in the correct order and with the proper three-dimensional relationships. Just as a given pile of lumber and bricks can be assembled in many ways to build houses of several different designs, so too can a fixed number of atoms be connected together in various ways to give different molecules. Only one structural arrangement out of the many possibilities will be identical with a naturally occurring molecule. The antibiotic erythromycin, for example, contains 37 carbon, 67 hydrogen, and 13 oxygen atoms, along with one nitrogen atom. Even when joined together in the proper order, these 118 atoms can give rise to 262,144 different structures, only one of which has the characteristics of natural erythromycin. The great abundance of organic compounds, their fundamental role in the chemistry of life, and their structural diversity have made their study especially challenging and exciting. Organic chemistry is the largest area of specialization among the various fields of chemistry.
As understanding of inanimate chemistry grew during the 19th century, attempts to interpret the physiological processes of living organisms in terms of molecular structure and reactivity gave rise to the discipline of biochemistry. Biochemists employ the techniques and theories of chemistry to probe the molecular basis of life. An organism is investigated on the premise that its physiological processes are the consequence of many thousands of chemical reactions occurring in a highly integrated manner. Biochemists have established, among other things, the principles that underlie energy transfer in cells, the chemical structure of cell membranes, the coding and transmission of hereditary information, muscular and nerve function, and biosynthetic pathways. In fact, related biomolecules have been found to fulfill similar roles in organisms as different as bacteria and human beings. The study of biomolecules, however, presents many difficulties. Such molecules are often very large and exhibit great structural complexity; moreover, the chemical reactions they undergo are usually exceedingly fast. The separation of the two strands of DNA, for instance, occurs in one-millionth of a second. Such rapid rates of reaction are possible only through the intermediary action of biomolecules called enzymes. Enzymes are proteins that owe their remarkable rate-accelerating abilities to their three-dimensional chemical structure. Not surprisingly, biochemical discoveries have had a great impact on the understanding and treatment of disease. Many ailments due to inborn errors of metabolism have been traced to specific genetic defects. Other diseases result from disruptions in normal biochemical pathways.
Frequently, symptoms can be alleviated by drugs, and the discovery, mode of action, and degradation of therapeutic agents constitute another major area of study in biochemistry. Bacterial infections can be treated with sulfonamides, penicillins, and tetracyclines, and research into viral infections has revealed the effectiveness of acyclovir against the herpes virus. There is much current interest in the details of carcinogenesis and cancer chemotherapy. It is known, for example, that cancer can result when cancer-causing molecules, or carcinogens as they are called, react with nucleic acids and proteins and interfere with their normal modes of action. Researchers have developed tests that can identify molecules likely to be carcinogenic. The hope, of course, is that progress in the prevention and treatment of cancer will accelerate once the biochemical basis of the disease is more fully understood.
The molecular basis of biologic processes is an essential feature of the fast-growing disciplines of molecular biology and biotechnology. Chemistry has developed methods for rapidly and accurately determining the structure of proteins and DNA. In addition, efficient laboratory methods for the synthesis of genes are being devised. Ultimately, the correction of genetic diseases by replacement of defective genes with normal ones may become possible.
Polymer chemistry
The simple substance ethylene is a gas composed of molecules with the formula CH2CH2. Under certain conditions, many ethylene molecules will join together to form a long chain called polyethylene, with the formula (CH2CH2)n, where n is a variable but large number. Polyethylene is a tough, durable solid material quite different from ethylene. It is an example of a polymer, which is a large molecule made up of many smaller molecules (monomers), usually joined together in a linear fashion. Many naturally occurring substances, including cellulose, starch, cotton, wool, rubber, leather, proteins, and DNA, are polymers. Polyethylene, nylon, and acrylics are examples of synthetic polymers. The study of such materials lies within the domain of polymer chemistry, a specialty that has flourished in the 20th century. The investigation of natural polymers overlaps considerably with biochemistry, but the synthesis of new polymers, the investigation of polymerization processes, and the characterization of the structure and properties of polymeric materials all pose unique problems for polymer chemists.
Polymer chemists have designed and synthesized polymers that vary in hardness, flexibility, softening temperature, solubility in water, and biodegradability. They have produced polymeric materials that are as strong as steel yet lighter and more resistant to corrosion. Oil, natural gas, and water pipelines are now routinely constructed of plastic pipe. In recent years, automakers have increased their use of plastic components to build lighter vehicles that consume less fuel. Other industries such as those involved in the manufacture of textiles, rubber, paper, and packaging materials are built upon polymer chemistry.
Besides producing new kinds of polymeric materials, researchers are concerned with developing special catalysts that are required by the large-scale industrial synthesis of commercial polymers. Without such catalysts, the polymerization process would be very slow in certain cases.
Many chemical disciplines, such as those already discussed, focus on certain classes of materials that share common structural and chemical features. Other specialties may be centred not on a class of substances but rather on their interactions and transformations. The oldest of these fields is physical chemistry, which seeks to measure, correlate, and explain the quantitative aspects of chemical processes. The Anglo-Irish chemist Robert Boyle, for example, discovered in the 17th century that at room temperature the volume of a fixed quantity of gas decreases proportionally as the pressure on it increases. Thus, for a gas at constant temperature, the product of its volume V and pressure P equals a constant number—i.e., PV = constant. Such a simple arithmetic relationship is valid for nearly all gases at room temperature and at pressures equal to or less than one atmosphere. Subsequent work has shown that the relationship loses its validity at higher pressures, but more complicated expressions that more accurately match experimental results can be derived. The discovery and investigation of such chemical regularities, often called laws of nature, lie within the realm of physical chemistry. For much of the 18th century the source of mathematical regularity in chemical systems was assumed to be the continuum of forces and fields that surround the atoms making up chemical elements and compounds. Developments in the 20th century, however, have shown that chemical behaviour is best interpreted by a quantum mechanical model of atomic and molecular structure. The branch of physical chemistry that is largely devoted to this subject is theoretical chemistry. Theoretical chemists make extensive use of computers to help them solve complicated mathematical equations. Other branches of physical chemistry include chemical thermodynamics, which deals with the relationship between heat and other forms of chemical energy, and chemical kinetics, which seeks to measure and understand the rates of chemical reactions. Electrochemistry investigates the interrelationship of electric current and chemical change. The passage of an electric current through a chemical solution causes changes in the constituent substances that are often reversible—i.e., under different conditions the altered substances themselves will yield an electric current. Common batteries contain chemical substances that, when placed in contact with each other by closing an electrical circuit, will deliver current at a constant voltage until the substances are consumed. At present there is much interest in devices that can use the energy in sunlight to drive chemical reactions whose products are capable of storing the energy. The discovery of such devices would make possible the widespread utilization of solar energy.
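A minimal numerical sketch of Boyle's relation (the starting pressure and volume below are illustrative values, not figures from the text):

    # Boyle's law at constant temperature: the product P * V stays constant.
    p1, v1 = 1.0, 24.0                 # 1 atm and 24 litres (illustrative)
    pv_constant = p1 * v1

    for p2 in (0.5, 1.0, 2.0, 4.0):    # pressures in atmospheres
        v2 = pv_constant / p2          # volume predicted by PV = constant
        print(f"P = {p2:.1f} atm  ->  V = {v2:.1f} L")
    # Doubling the pressure halves the volume; real gases deviate from this
    # simple law at pressures well above one atmosphere, as noted above.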
There are many other disciplines within physical chemistry that are concerned more with the general properties of substances and the interactions among substances than with the substances themselves. Photochemistry is a specialty that investigates the interaction of light with matter. Chemical reactions initiated by the absorption of light can be very different from those that occur by other means. Vitamin D, for example, is formed in the human body when the steroid ergosterol absorbs solar radiation; ergosterol does not change to vitamin D in the dark.
A rapidly developing subdiscipline of physical chemistry is surface chemistry. It examines the properties of chemical surfaces, relying heavily on instruments that can provide a chemical profile of such surfaces. Whenever a solid is exposed to a liquid or a gas, a reaction occurs initially on the surface of the solid, and its properties can change dramatically as a result. Aluminum is a case in point: it is resistant to corrosion precisely because the surface of the pure metal reacts with oxygen to form a layer of aluminum oxide, which serves to protect the interior of the metal from further oxidation. Numerous reaction catalysts perform their function by providing a reactive surface on which substances can react.
Industrial chemistry
The manufacture, sale, and distribution of chemical products is one of the cornerstones of a developed country. Chemists play an important role in the manufacture, inspection, and safe handling of chemical products, as well as in product development and general management. The manufacture of basic chemicals such as oxygen, chlorine, ammonia, and sulfuric acid provides the raw materials for industries producing textiles, agricultural products, metals, paints, and pulp and paper. Specialty chemicals are produced in smaller amounts for industries involved with such products as pharmaceuticals, foodstuffs, packaging, detergents, flavours, and fragrances. To a large extent, the chemical industry takes the products and reactions common to “bench-top” chemical processes and scales them up to industrial quantities.
The monitoring and control of bulk chemical processes, especially with regard to heat transfer, pose problems usually tackled by chemists and chemical engineers. The disposal of by-products also is a major problem for bulk chemical producers. These and other challenges of industrial chemistry set it apart from the more purely intellectual disciplines of chemistry discussed above. Yet, within the chemical industry, there is a considerable amount of fundamental research undertaken within traditional specialties. Most large chemical companies have research-and-development capability. Pharmaceutical firms, for example, operate large research laboratories in which chemists test molecules for pharmacological activity. The new products and processes that are discovered in such laboratories are often patented and become a source of profit for the company funding the research. A great deal of the research conducted in the chemical industry can be termed applied research because its goals are closely tied to the products and processes of the company concerned. New technologies often require much chemical expertise. The fabrication of, say, electronic microcircuits involves close to 100 separate chemical steps from start to finish. Thus, the chemical industry evolves with the technological advances of the modern world and at the same time often contributes to the rate of progress.
The methodology of chemistry
Chemistry is to a large extent a cumulative science. Over time the number and extent of observations and phenomena studied increase. Not all hypotheses and discoveries endure unchallenged, however. Some of them are discarded as new observations or more satisfying explanations appear. Nonetheless, chemistry has a broad spectrum of explanatory models for chemical phenomena that have endured and been extended over time. These now have the status of theories, interconnected sets of explanatory devices that correlate well with observed phenomena. As new discoveries are made, they are incorporated into existing theory whenever possible. However, as the discovery of high-temperature superconductors in 1986 illustrates, accepted theory is never sufficient to predict the course of future discovery. Serendipity, or chance discovery, will continue to play as much a role in the future as will theoretical sophistication.
Studies of molecular structure
The chemical properties of a substance are a function of its structure, and the techniques of X-ray crystallography now enable chemists to determine the precise atomic arrangement of complex molecules. A molecule is an ordered assembly of atoms. Each atom in a molecule is connected to one or more neighbouring atoms by a chemical bond. The length of bonds and the angles between adjacent bonds are all important in describing molecular structure, and a comprehensive theory of chemical bonding is one of the major achievements of modern chemistry. Fundamental to bonding theory is the atomic–molecular concept.
Atoms and elements
As far as general chemistry is concerned, atoms are composed of the three fundamental particles: the proton, the neutron, and the electron. Although the proton and the neutron are themselves composed of smaller units, their substructure has little impact on chemical transformation. As was explained in an earlier section, the proton carries a charge of +1, and the number of protons in an atomic nucleus distinguishes one type of chemical atom from another. The simplest atom of all, hydrogen, has a nucleus composed of a single proton. The neutron has very nearly the same mass as the proton, but it has no charge. Neutrons are contained with protons in the nucleus of all atoms other than hydrogen. The atom with one proton and one neutron in its nucleus is called deuterium. Because it has only one proton, deuterium exhibits the same chemical properties as hydrogen but has a different mass. Hydrogen and deuterium are examples of related atoms called isotopes. The third atomic particle, the electron, has a charge of -1, but its mass is 1,836 times smaller than that of a proton. The electron occupies a region of space outside the nucleus termed an orbital. Some orbitals are spherical with the nucleus at the centre. Because electrons have so little mass and move about at speeds close to half that of light, they exhibit the same wave–particle duality as photons of light. This means that some of the properties of an electron are best described by considering the electron to be a particle, while other properties are consistent with the behaviour of a standing wave. The energy of a standing wave, such as a vibrating string, is distributed over the region of space defined by the two fixed ends and the up-and-down extremes of vibration. Such a wave does not exist in a fixed region of space as does a particle. Early models of atomic structure envisioned the electron as a particle orbiting the nucleus, but electron orbitals are now interpreted as the regions of space occupied by standing waves called wave functions. These wave functions represent the regions of space around the nucleus in which the probability of finding an electron is high. They play an important role in bonding theory, as will be discussed later.
Each proton in an atomic nucleus requires an electron for electrical neutrality. Thus, as the number of protons in a nucleus increases, so too does the number of electrons. The electrons, alone or in pairs, occupy orbitals increasingly distant from the nucleus. Electrons farther from the nucleus are attracted less strongly by the protons in the nucleus, and they can be removed more easily from the atom. The energy required to move an electron from one orbital to another, or from one orbital to free space, gives a measure of the energy level of the orbitals. These energies have been found to have distinct, fixed values; they are said to be quantized. The energy differences between orbitals give rise to the characteristic patterns of light absorption or emission that are unique to each chemical atom.
A new chemical atom—that is, an element—results each time another proton is added to an atomic nucleus. Consecutive addition of protons generates the whole range of elements known to exist in the universe. Compounds are formed when two or more different elements combine through atomic bonding. Such bond formation is a consequence of electron pairing and constitutes the foundation of all structural chemistry.
Ionic and covalent bonding
When two different atoms approach each other, the electrons in their outer orbitals can respond in two distinct ways. An electron in the outermost atomic orbital of atom A may move completely to an outer but stabler orbital of atom B. The charged atoms that result, A+ and B-, are called ions, and the electrostatic force of attraction between them gives rise to what is termed an ionic bond. Most elements can form ionic bonds, and the substances that result commonly exist as three-dimensional arrays of positive and negative ions. Ionic compounds are frequently crystalline solids that have high melting points (e.g., table salt).
The second way in which the two outer electrons of atoms A and B can respond to the approach of A and B is to pair up to form a covalent bond. In the simple view known as the valence-bond model, in which electrons are treated strictly as particles, the two paired electrons are assumed to lie between the two nuclei and are shared equally by atoms A and B, resulting in a covalent bond. Atoms joined together by one or more covalent bonds constitute molecules. Hydrogen gas is composed of hydrogen molecules, which consist in turn of two hydrogen atoms linked by a covalent bond. The notation H2 for hydrogen gas is referred to as a molecular formula. Molecular formulas indicate the number and type of atoms that make up a molecule. The molecule H2 is responsible for the properties generally associated with hydrogen gas. Most substances on Earth have covalently bonded molecules as their fundamental chemical unit, and their molecular properties are completely different from those of the constituent elements. The physical and chemical properties of carbon dioxide, for example, are quite distinct from those of pure carbon and pure oxygen.
The interpretation of a covalent bond as a localized electron pair is an oversimplification of the bonding situation. A more comprehensive description of bonding that considers the wave properties of electrons is the molecular-orbital theory. According to this theory, electrons in a molecule, rather than being localized between atoms, are distributed over all the atoms in the molecule in a spatial distribution described by a molecular orbital. Such orbitals result when the atomic orbitals of bonded atoms combine with each other. The total number of molecular orbitals present in a molecule is equal to the sum of all atomic orbitals in the constituent atoms prior to bonding. Thus, for the simple combination of atoms A and B to form the molecule AB, two atomic orbitals combine to generate two molecular orbitals. One of these, the so-called bonding molecular orbital, represents a region of space enveloping both the A and B atoms, while the other, the anti-bonding molecular orbital, has two lobes, neither of which occupies the space between the two atoms. The bonding molecular orbital is at a lower energy level than are the two atomic orbitals, while the anti-bonding orbital is at a higher energy level. The two paired electrons that constitute the covalent bond between A and B occupy the bonding molecular orbital. For this reason, there is a high probability of finding the electrons between A and B, but they can be found elsewhere in the orbital as well. Because only two electrons are involved in bond formation and both can be accommodated in the lower energy orbital, the anti-bonding orbital remains unpopulated. This theory of bonding predicts that bonding between A and B will occur because the energy of the paired electrons after bonding is less than that of the two electrons in their atomic orbitals prior to bonding. The formation of a covalent bond is thus energetically favoured. The system goes from a state of higher energy to one of lower energy.
Another feature of this bonding picture is that it is able to predict the energy required to move an electron from the bonding molecular orbital to the anti-bonding one. The energy required for such an electronic excitation can be provided by visible light, for example, and the wavelength of the light absorbed determines the colour displayed by the absorbing molecule (e.g., violets are blue because the pigments in the flower absorb the red rays of natural light and reflect more of the blue). As the number of atoms in a molecule increases, so too does the number of molecular orbitals. Calculation of molecular orbitals for large molecules is mathematically difficult, but computers have made it possible to determine the wave equations for several large molecules. Molecular properties predicted by such calculations correlate well with experimental results.
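The link between the bonding/anti-bonding energy gap and the colour observed is the Planck relation (standard physics, not stated in the text): a photon is absorbed when its energy matches the gap,

    \Delta E \;=\; h\nu \;=\; \frac{hc}{\lambda} ,

so, for example, a gap of about 3 × 10^-19 joule corresponds to light of wavelength near 660 nanometres, in the red region of the spectrum that the violet's pigments absorb.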
Many elements can form two or more covalent bonds, but only a few are able to form extended chains of covalent bonds. The outstanding example is carbon, which can form as many as four covalent bonds and can bond to itself indefinitely. Carbon has six electrons in total, two of which are paired in an atomic orbital closest to the nucleus. The remaining four are farther from the nucleus and are available for covalent bonding. When there is sufficient hydrogen present, carbon will react to form methane, CH4. When all four electron pairs occupy the four molecular orbitals of lowest energy, the molecule assumes the shape of a tetrahedron, with carbon at the centre and the four hydrogen atoms at the apexes. The C–H bond length is 110 picometres (1 picometre = 10^-12 metre), and the angle between adjacent C–H bonds is close to 110°. Such tetrahedral symmetry is common to many carbon compounds and results in interesting structural possibilities. If two carbon atoms are joined together, with three hydrogen atoms bonded to each carbon atom, the molecule ethane is obtained. When four carbon atoms are joined together, two different structures are possible: a linear structure designated n-butane and a branched structure called iso-butane. These two structures have the same molecular formula, C4H10, but a different order of attachment of their constituent atoms. The two molecules are termed structural isomers. Each of them has unique chemical and physical properties, and they are different compounds. The number of possible isomers increases rapidly as the number of carbon atoms increases. There are five isomers for C6H14, 75 for C10H22, and 6.2 × 10^13 for C40H82. When carbon forms bonds to atoms other than hydrogen, such as oxygen, nitrogen, and sulfur, the structural possibilities become even greater. It is this great potential for structural diversity that makes carbon compounds essential to living organisms.
Even when the bonding sequence of carbon compounds is fixed, further structural variation is still possible. When two carbon atoms are joined together by two bonding pairs of electrons, a double bond is formed. A double bond forces the two carbon atoms and attached groups into a rigid, planar structure. As a result, a molecule such as CHCl=CHCl can exist in two nonidentical forms called geometric isomers. Structural rigidity also occurs in ring structures, and attached groups can be on the same side of a ring or on different sides. Yet another opportunity for isomerism arises when a carbon atom is bonded to four different groups. These can be attached in two different ways, one of which is the mirror image of the other. This type of isomerism is called optical isomerism, because the two isomers affect plane-polarized light differently. Two optical isomers are possible for every carbon atom that is bonded to four different groups. For a molecule bearing 10 such carbon atoms, the total number of possible isomers will be 2^10 = 1,024. Large biomolecules often have 10 or more carbon atoms for which such optical isomers are possible. Only one of all the possible isomers will be identical to the natural molecule. For this reason, the laboratory synthesis of large organic molecules is exceedingly difficult. Only in the last few decades of the 20th century have chemists succeeded in developing reagents and processes that yield specific optical isomers. They expect that new synthetic methods will make possible the synthesis of ever more complex natural products.
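The arithmetic of optical isomer counts is a simple power of two. The short check below also reproduces the 262,144 figure quoted earlier for erythromycin, which corresponds to 18 such stereocentres (a count implied by the figure rather than stated in the text):

    # Every carbon bonded to four different groups doubles the number of
    # possible spatial arrangements (optical isomers).
    def optical_isomer_count(stereocentres: int) -> int:
        return 2 ** stereocentres

    print(optical_isomer_count(10))   # 1024, the 2^10 figure given above
    print(optical_isomer_count(18))   # 262144, the erythromycin figure quoted earlier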
Investigations of chemical transformations
Basic factors
The structure of ionic substances and covalently bonded molecules largely determines their function. As noted above, the properties of a substance depend on the number and type of atoms it contains and on the bonding patterns present. Its bulk properties also depend, however, on the interactions among individual atoms, ions, or molecules. The force of attraction between the fundamental units of a substance dictates whether, at a given temperature and pressure, that substance will exist in the solid, liquid, or gas phase. At room temperature and pressure, for example, the strong forces of attraction between the positive ions of sodium (Na+) and the negative ions of chlorine (Cl−) draw them into a compact solid structure. The weaker forces of attraction among neighbouring water molecules allow the looser packing characteristic of a liquid. Finally, the very weak attractive forces acting among adjacent oxygen molecules are exceeded by the dispersive forces of heat; oxygen, consequently, is a gas. Interparticle forces thus affect the chemical and physical behaviour of substances, but they also determine to a large extent how a particle will respond to the approach of a different particle. If the two particles react with each other to form new particles, a chemical reaction has occurred. Notwithstanding the unlimited structural diversity allowed by molecular bonding, the world would be devoid of life if substances were incapable of change. The study of chemical transformation, which complements the study of molecular structure, is built on the concepts of energy and entropy.
Energy and the first law of thermodynamics
The concept of energy is a fundamental and familiar one in all the sciences. In simple terms, the energy of a body represents its ability to do work, and work itself is a force acting over a distance.
Chemical systems can have both kinetic energy (energy of motion) and potential energy (stored energy). The kinetic energy possessed by any collection of molecules in a solid, liquid, or gas is known as its thermal energy. Since liquids expand when they have more thermal energy, a liquid column of mercury, for example, will rise higher in an evacuated tube as it becomes warmer. In this way a thermometer can be used to measure the thermal energy, or temperature, of a system. The temperature at which all molecular motion comes to a halt is known as absolute zero.
Energy also may be stored in atoms or molecules as potential energy. When protons and neutrons combine to form the nucleus of a certain element, the reduction in potential energy is matched by the production of a huge quantity of kinetic energy. Consider, for instance, the formation of the deuterium nucleus from one proton and one neutron. The fundamental mass unit of the chemist is the mole, which represents the mass, in grams, of 6.02 × 10^23 individual particles, whether they be atoms or molecules. One mole of protons has a mass of 1.007825 grams and one mole of neutrons has a mass of 1.008665 grams. By simple addition the mass of one mole of deuterium atoms (ignoring the negligible mass of one mole of electrons) should be 2.016490 grams. The measured mass is 0.00239 gram less than this. The missing mass is known as the binding energy of the nucleus and represents the mass equivalent of the energy released by nucleus formation. By using Einstein's formula for the conversion of mass to energy (E = mc^2), one can calculate the energy equivalent of 0.00239 gram as 2.15 × 10^8 kilojoules. This is approximately 240,000 times greater than the energy released by the combustion of one mole of methane. Such studies of the energetics of atom formation and interconversion are part of a specialty known as nuclear chemistry.
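The 2.15 × 10^8 kilojoule figure is a one-line application of Einstein's formula; a minimal check (the speed of light is the only value used that is not given in the text):

    # Energy equivalent of the missing 0.00239 gram, via E = m * c^2.
    missing_mass_kg = 0.00239 / 1000     # 0.00239 gram expressed in kilograms
    c = 2.998e8                          # speed of light, metres per second

    energy_kJ = missing_mass_kg * c**2 / 1000
    print(f"{energy_kJ:.3g} kJ")         # about 2.15e+08 kJ, as quoted above

    # Compared with the roughly 900 kJ released by burning a mole of methane
    # (see the next paragraph), this is indeed about 240,000 times greater:
    print(round(energy_kJ / 900))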
The energy released by the combustion of methane is about 900 kilojoules per mole. Although much less than the energy released by nuclear reactions, the energy given off by a chemical process such as combustion is great enough to be perceived as heat and light. Energy is released in so-called exothermic reactions because the chemical bonds in the product molecules, carbon dioxide and water, are stronger and stabler than those in the reactant molecules, methane and oxygen. The chemical potential energy of the system has decreased, and most of the released energy appears as heat, while some appears as radiant energy, or light. The heat produced by such a combustion reaction will raise the temperature of the surrounding air and, at constant pressure, increase its volume. This expansion of air results in work being done. In the cylinder of an internal-combustion engine, for example, the combustion of gasoline results in hot gases that expand against a moving piston. The motion of the piston turns a crankshaft, which then propels the vehicle. In this case, chemical potential energy has been converted to thermal energy, some of which produces useful work. This process illustrates a statement of the conservation of energy known as the first law of thermodynamics. This law states that, for an exothermic reaction, the energy released by the chemical system is equal to the heat gained by the surroundings plus the work performed. By measuring the heat and work quantities that accompany chemical reactions, it is possible to ascertain the energy differences between the reactants and the products of various reactions. In this manner, the potential energy stored in a variety of molecules can be determined, and the energy changes that accompany chemical reactions can be calculated.
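In symbols (a standard bookkeeping form; the notation is mine, not the text's): for an exothermic process the energy given up by the chemical system equals the heat passed to the surroundings plus the work done on them,

    -\Delta E_{\text{system}} \;=\; q_{\text{surroundings}} \;+\; w .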
Entropy and the second law of thermodynamics
Some chemical processes occur even though there is no net energy change. Consider a vessel containing a gas, connected to an evacuated vessel via a channel wherein a barrier obstructs passage of the gas. If the barrier is removed, the gas will expand into the evacuated vessel. This expansion is consistent with the observation that a gas always expands to fill the volume available. When the temperature of both vessels is the same, the energy of the gas before and after the expansion is the same. The reverse process does not occur, however. The spontaneous process is the one that yields a state of greater disorder. In the expanded volume, the individual gas molecules have greater freedom of movement and thus are more disordered. The measure of the disorder of a system is a quantity termed entropy. At a temperature of absolute zero, all movement of atoms and molecules ceases, and the disorder—and entropy—of such perfectly ordered crystalline substances is zero. (Zero entropy at zero temperature is in accord with the third law of thermodynamics.) All substances above absolute zero will have a positive entropy value that increases with temperature. When a hot body cools down, the thermal energy it loses passes to the surrounding air, which is at a lower temperature. As the entropy of the cooling body decreases, the entropy of the surrounding air increases. In fact, the increase in entropy of the air is greater than the decrease in entropy of the cooling body. This is consistent with the second law, which states that the total entropy of a system and its surroundings always increases in a spontaneous process. Thus the first and second laws of thermodynamics indicate that, for all processes of chemical change throughout the universe, energy is conserved but entropy increases.
Application of the laws of thermodynamics to chemical systems allows chemists to predict the behaviour of chemical reactions. When energy and entropy considerations favour the formation of product molecules, reagent molecules will act to form products until an equilibrium is established between products and reagents. The ratio of products to reagents is specified by a quantity known as an equilibrium constant, which is a function of the energy and entropy differences between the two. What thermodynamics cannot predict, however, is the rate at which chemical reactions occur. For fast reactions an equilibrium mixture of products and reagents can be established in one millisecond or less; for slow reactions the time required could be hundreds of years.
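In standard thermodynamic notation (not spelled out in the text), the "energy and entropy differences" combine into the Gibbs free energy change, which fixes the equilibrium constant:

    \Delta G \;=\; \Delta H \;-\; T\,\Delta S ,
    \qquad
    K \;=\; e^{-\Delta G / (RT)} ,

so a reaction that releases energy and increases entropy has a large negative \Delta G and hence a large K, strongly favouring products; as noted, thermodynamics fixes K but says nothing about how quickly equilibrium is reached.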
Rates of reaction
When the specific rates of chemical reactions are measured experimentally, they are found to be dependent on the concentrations of reacting species, temperature, and a quantity called activation energy. Chemists explain this phenomenon by recourse to the collision theory of reaction rates. This theory builds on the premise that a reaction between two or more chemicals requires, at the molecular level, a collision between two rapidly moving molecules. If the two molecules collide in the right way and with enough kinetic energy, one of the molecules may acquire enough energy to initiate the bond-breaking process. As this occurs, new bonds may begin to form, and ultimately reagent molecules are converted into product molecules. The point of highest energy during bond breaking and bond formation is called the transition state of the molecular process. The difference between the energy of the transition state and that of the reacting molecules is the activation energy that must be exceeded for a reaction to occur. Reaction rates increase with temperature because the colliding molecules have greater energies, and more of them will have energies that exceed the activation energy of reaction. The modern study of the molecular basis of chemical change has been greatly aided by lasers and computers. It is now possible to study short-lived collision products and to better determine the molecular mechanisms that fix the rate of chemical reactions. This knowledge is useful in designing new catalysts that can accelerate the rate of reaction by lowering the activation energy. Catalysts are important for many biochemical and industrial processes because they speed up reactions that ordinarily occur too slowly to be useful. Moreover, they often do so with increased control over the structural features of the product molecules. A rhodium phosphine catalyst, for example, has enabled chemists to obtain 96 percent of the correct optical isomer in a key step in the synthesis of L-dopa, a drug used for treating Parkinson's disease.
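The dependence on temperature and activation energy described here is conventionally summarized by the Arrhenius equation (standard, but not named in the text):

    k \;=\; A\, e^{-E_a / (RT)} ,

where k is the rate constant, A reflects the frequency of suitably oriented collisions, E_a is the activation energy, R is the gas constant, and T is the absolute temperature. Raising T, or lowering E_a as a catalyst does, increases k.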
Chemistry and society
For the first two-thirds of the 20th century, chemistry was seen by many as the science of the future. The potential of chemical products for enriching society appeared to be unlimited. Increasingly, however, and especially in the public mind, the negative aspects of chemistry have come to the fore. Disposal of chemical by-products at waste-disposal sites of limited capacity has resulted in environmental and health problems of enormous concern. The legitimate use of drugs for the medically supervised treatment of diseases has been tainted by the growing misuse of mood-altering drugs. The very word chemicals has come to be used all too frequently in a pejorative sense. There is, as a result, a danger that the pursuit and application of chemical knowledge may be seen as bearing risks that outweigh the benefits.
It is easy to underestimate the central role of chemistry in modern society, but chemical products are essential if the world's population is to be clothed, housed, and fed. The world's reserves of fossil fuels (e.g., oil, natural gas, and coal) will eventually be exhausted, some as soon as the 21st century, and new chemical processes and materials will provide a crucial alternative energy source. The conversion of solar energy to more concentrated, useful forms, for example, will rely heavily on discoveries in chemistry. Long-term, environmentally acceptable solutions to pollution problems are not attainable without chemical knowledge. There is much truth in the aphorism that “chemical problems require chemical solutions.” Chemical inquiry will lead to a better understanding of the behaviour of both natural and synthetic materials and to the discovery of new substances that will help future generations better supply their needs and deal with their problems.
Progress in chemistry can no longer be measured only in terms of economics and utility. The discovery and manufacture of new chemical goods must continue to be economically feasible but must be environmentally acceptable as well. The impact of new substances on the environment can now be assessed before large-scale production begins, and environmental compatibility has become a valued property of new materials. For example, compounds consisting of carbon fully bonded to chlorine and fluorine, called chlorofluorocarbons (or Freons), were believed to be ideal for their intended use when they were first discovered. They are nontoxic, nonflammable gases and volatile liquids that are very stable. These properties led to their widespread use as solvents, refrigerants, and propellants in aerosol containers. Time has shown, however, that these compounds decompose in the upper regions of the atmosphere and that the decomposition products act to destroy stratospheric ozone. Limits have now been placed on the use of chlorofluorocarbons, but it is impossible to recover the amounts already dispersed into the atmosphere.
The chlorofluorocarbon problem illustrates how difficult it is to anticipate the overall impact that new materials can have on the environment. Chemists are working to develop methods of assessment, and prevailing chemical theory provides the working tools. Once a substance has been identified as hazardous to the existing ecological balance, it is the responsibility of chemists to locate that substance and neutralize it, limiting the damage it can do or removing it from the environment entirely. The last years of the 20th century will see many new, exciting discoveries in the processes and products of chemistry. Inevitably, the harmful effects of some substances will outweigh their benefits, and their use will have to be limited. Yet, the positive impact of chemistry on society as a whole seems beyond doubt.
Additional reading
Historical developments in chemistry through the 17th century are explored in Robert P. Multhauf, The Origins of Chemistry (1966). Cecil J. Schneer, Mind and Matter: Man's Changing Concepts of the Material World (1970, reprinted 1988), gives an interesting account of the history of chemistry in relation to the structure of matter; Aaron J. Ihde, The Development of Modern Chemistry (1964, reprinted 1984), is a comprehensive history covering the 18th to the middle of the 20th century; and Frederic Lawrence Holmes, Lavoisier and the Chemistry of Life: An Exploration of Scientific Creativity (1985), is a study of chemical experimentation.
Concise explanations of chemical terms can be found in Douglas M. Considine and Glenn D. Considine (eds.), Van Nostrand Reinhold Encyclopedia of Chemistry, 4th ed. (1984), a specialized reference work. Comprehensive treatment of chemical theories and reactivity is presented in Donald A. McQuarrie and Peter A. Rock, General Chemistry, 2nd ed. (1987); and in John C. Kotz and Keith F. Purcell, Chemistry & Chemical Reactivity (1987). Studies of common applications of chemistry, intended for the general reader, include William R. Stine et al., Applied Chemistry, 2nd ed. (1981); and John W. Hill, Chemistry for Changing Times, 5th ed. (1988). An overview of modern chemistry and discussion of its prospects is found in George C. Pimentel and Janice A. Coonrod, Opportunities in Chemistry: Today and Tomorrow (1987). P.W. Atkins, Molecules (1987), is a pictorial examination of chemical structure. Lionel Salem, Marvels of the Molecule, trans. from French (1987), presents the molecular orbital theory of chemical bonding in simple terms. The extent to which molecular structure has become central to biochemistry is demonstrated in the articles in The Molecules of Life: Readings from Scientific American (1985). The fundamental principles governing chemical change and the laws of thermodynamics are presented, with a minimum of mathematics, in P.W. Atkins, The Second Law (1984); and John B. Fenn, Engines, Energy, and Entropy: A Thermodynamics Primer (1982). Social aspects of developments in chemistry, especially the environmental costs of use of chemical products, are studied in Luciano Caglioti, The Two Faces of Chemistry, trans. from Italian (1983).
basketball
Introduction
game played between two teams of five players each on a rectangular court, usually indoors. Each team tries to score by tossing the ball through the opponent's goal, an elevated horizontal hoop and net called a basket.
The only major sport strictly of U.S. origin, basketball was invented by James Naismith (1861–1939) on or about December 1, 1891, at the International Young Men's Christian Association (YMCA) Training School (now Springfield College), Springfield, Massachusetts, where Naismith was an instructor in physical education.
For that first game of basketball in 1891, Naismith used as goals two half-bushel peach baskets, which gave the sport its name. The students were enthusiastic. After much running and shooting, William R. Chase made a midcourt shot—the only score in that historic contest. Word spread about the newly invented game, and numerous associations wrote Naismith for a copy of the rules, which were published in the January 15, 1892, issue of the Triangle, the
While basketball is competitively a winter sport, it is played on a 12-month basis—on summer playgrounds, in municipal, industrial, and church halls, in schoolyards and family driveways, and in summer camps—often on an informal basis between two or more contestants. Many grammar schools, youth groups, municipal recreation centres, churches, and other organizations conduct basketball programs for youngsters of less than high school age. Jay Archer, of
History
The early years
In the early years the number of players on a team varied according to the number in the class and the size of the playing area. In 1894 teams began to play with five on a side when the playing area was less than 1,800 square feet (167.2 square metres); the number rose to seven when the gymnasium measured from 1,800 to 3,600 square feet (334.5 square metres) and up to nine when the playing area exceeded that. In 1895 the number was occasionally set at five by mutual consent; the rules stipulated five players two years later, and this number has remained ever since.
Since Naismith and five of his original players were Canadians, it is not surprising that
While basketball helped swell the membership of YMCAs because of the availability of their gyms, within five years the game was outlawed by various associations because gyms that had been occupied by classes of 50 or 60 members were now monopolized by only 10 to 18 players. The banishment of the game induced many members to terminate their YMCA membership and to hire halls to play the game, thus paving the way to the professionalization of the sport.
Originally, players wore one of three styles of uniforms: knee-length football trousers; jersey tights, as commonly worn by wrestlers; or short padded pants, forerunners of today's uniforms, plus knee guards. The courts often were of irregular shape with occasional obstructions such as pillars, stairways, or offices that interfered with play. In 1903 it was ruled that all boundary lines must be straight. In 1893 the Narragansett Machinery Co. of Providence,
Baskets were frequently attached to balconies, making it easy for spectators behind a basket to lean over the railings and deflect the ball to favour one side and hinder the other; in 1895 teams were urged to provide a 4-by-6-foot (1.2-by-1.8-metre) screen for the purpose of eliminating interference. Soon after, wooden backboards proved more suitable. Glass backboards were legalized by the professionals in 1908–09 and by colleges in 1909–10. In 1920–21 the backboards were moved 2 feet (0.6 metre) in from the end lines, and in 1939–40 they were moved in 4 feet (1.2 metres), to reduce frequent stepping out-of-bounds. Fan-shaped backboards were made legal in 1940–41.
A soccer ball (football) was used for the first two years. In 1894 the first basketball was marketed. It was laced, measured close to 32 inches (81 cm), or about 4 inches (10 cm) larger than the soccer ball, in circumference, and weighed less than 20 ounces (567 grams). By 1948–49, when the laceless molded ball was made official, the size had been set at 30 inches (76 cm).
The first college to play the game was either
The colleges formed their own rules committee in 1905, and by 1913 there were at least five sets of rules: collegiate, YMCA–Amateur Athletic Union, those used by state militia groups, and two varieties of professional rules. Teams often agreed to play under a different set for each half of a game. To establish some measure of uniformity, the colleges, Amateur Athletic Union, and YMCA formed the Joint Rules Committee in 1915. This group was renamed the National Basketball Committee (NBC) of the
Growth of the game
Basketball grew steadily but slowly in popularity and importance in the
Basketball at the high school and college levels developed from a structured, rigid game in the early days to one that is often fast-paced and high-scoring. Individual skills improved markedly, and, although basketball continued to be regarded as the ultimate team game, individualistic, one-on-one performers came to be not only accepted but used as an effective means of winning games.
In the early years games were frequently won with point totals of less than 30, and the game, from the spectator's viewpoint, was slow. Once a team acquired a modest lead, the popular tactic was to stall the game by passing the ball without trying to score, in an attempt to run out the clock. The NBC, seeing the need to discourage such slowdown tactics, instituted a number of rule changes. In 1932–33 a line was drawn at midcourt, and the offensive team was required to advance the ball past it within 10 seconds or lose possession. Five years later, in 1937–38, the centre jump following each field goal or free throw was eliminated. Instead, the defending team was permitted to inbound the ball from the out-of-bounds line underneath the basket. Decades passed before another alteration of like magnitude was made in the college game. After experimentation the NCAA Rules Committee installed a 45-second shot clock in 1985, restricting the time a team could control the ball before shooting, and one year later it implemented a three-point shot rule for baskets made beyond a distance of 19.75 feet (6.0 metres).
More noticeable alteration in the game came at both the playing and coaching levels. Stanford University's Hank Luisetti was the first to use and popularize the one-hand shot in the late 1930s. Until then the only outside attempts were two-handed push shots. In the 1950s and '60s a shooting style evolved from Luisetti's push-off one hander to a jump shot, which is released at the top of the jump.
Coaching strategy changed appreciably over the years. Frank W. Keaney, coach at Rhode Island University from 1921 to 1948, is credited with introducing the concept of “fast break” basketball, in which the offensive team rushes the ball upcourt hoping to get a good shot before the defense can get set. Another man who contributed to a quicker pace of play, particularly through the use of the pressure defense, was Adolph Rupp, who became the University of Kentucky's coach in 1931 and turned its program into one of the most storied in basketball history.
Defensive coaching philosophy, similarly, has undergone change. Whereas pioneer coaches such as Henry Iba of Oklahoma A&M University (now Oklahoma State University) or Long Island University's Clair Bee taught strictly a man-to-man defense, the zone defense, developed by Cam Henderson of Marshall University in West Virginia, later became an integral part of the game (see below Play of the game).
Over the years one of the rules makers' chief concerns was to neutralize the advantage of taller players. At 6 feet 5 inches (1.96 metres) Joe Lapchick was considered very tall when he played for the Original Celtics in the 1920s, but, as even taller players appeared, rules were changed in response. To prevent tall players from stationing themselves near the basket, a rule was instituted in 1932–33 prohibiting the player with the ball from standing inside the foul lane with his back to the basket for more than three seconds; the three-second rule later applied to any attacking player in the foul lane. In 1937–38 a new rule forbade any player from touching the ball when it was in the basket or on its rim (basket interference), and in 1944–45 it became illegal for any defending player to touch the ball on its downward flight toward the basket (goaltending).
Nevertheless, with each passing decade, the teams with the tallest players tended to dominate. Bob Kurland (7 feet [2.13 metres]) led Oklahoma A&M to two NCAA championships in the 1940s and led the nation in scoring in 1945–46. In the same era George Mikan (6 feet 10 inches [2.08 metres]) scored more than 550 points in each of his final two seasons at DePaul University before going on to play nine professional seasons in which he scored more than 11,000 points. Mikan was an outstanding player, not only because of his size but because of his ability to shoot sweeping hook shots with both hands.
In the 1950s Bill Russell (6 feet 9 inches [2.06 metres]) led the University of San Francisco to two NCAA championships before going on to become one of the greatest centres in professional basketball history. Wilt Chamberlain (7 feet 1 inch [2.16 metres]) played at the
So, too, have the small- and medium-size players affected the game's development. Bob Cousy, playing at
Nothing influenced the college game's growth more than television, however. The NCAA championship games were televised nationally from 1963, and by the 1980s all three major television networks were telecasting intersectional college games during the November-to-March season. Rights fees for these games soared from a few million dollars to well over $50 million by the late 1980s. As for broadcasting the NCAA finals, a television contract that began in 2003 gave the NCAA an average of $545 million per year for the television rights; this exponential growth in broadcast fees reflected the importance of these games to both networks and advertisers.
Profits such as these inevitably attract gamblers, and in the evolution of college basketball the darkest hours have been related to gambling scandals. But, as the game began to draw more attention and generate more income, the pressure to win intensified, resulting in an outbreak of rules violations, especially with regard to recruitment of star players.
The most identifiable phase of college basketball in
New York City basketball writers organized the first National Invitation Tournament (NIT) in 1938, but a year later the New York City colleges took control of the event. Until the early 1950s the NIT was considered the most prestigious American tournament, but, with the growth of the college-run NCAA championship, the NIT became a consolation event for teams that failed to make the NCAA selections.
The first NCAA tournament was played in 1939, and its growth took place in three stages. The first era ran through 1964, when it was essentially a tournament for champions of various conferences. There were just eight teams in the 1939 field, and by 1963 it had been expanded to 25 teams, all champions of their respective conferences, plus several successful independent teams. The most outstanding teams of the 1940s and '50s participated in both the NCAA and NIT tournaments, but, after the gambling scandals that followed the 1950 NIT championship, a rule was passed prohibiting a team from playing in both. Afterward the NCAA tournament progressively outgrew the NIT.
In 1964 the second era dawned as the UCLA Bruins, coached by John Wooden, began a period of domination over the NCAA field. From that season until 1975 Wooden led his teams to 10 NCAA championships. Only championships won by
The third growth stage came with the end of UCLA's dominance. Champions began to emerge from all sections of the country. From the field of 25 in 1974, the NCAA tournament expanded to 64 participants, including not only conference championship teams but other outstanding teams from the same conferences as well. Three weeks of play culminate with the Final Four weekend, an event now comparable in general public interest and media attention to the Super Bowl and World Series. Championships at the Division II, Division III, and NAIA levels also continued to grow in interest, reaping some of the fallout from the popularity of Division I.
About 17,000 high schools in the
Professional basketball
The professional game first prospered largely in the Middle Atlantic and
A group of basketball stylists who never received the acclaim they deserved (because in their heyday they played for various towns) consisted of Edward and Lew Wachter, Jimmy Williamson, Jack Inglis, and Bill Hardman. They introduced the bounce pass and long pass as offensive weapons and championed the rule (adopted 1923–24) that made each player, when fouled, shoot his own free throw.
Before World War II the most widely heralded professional team was the Original Celtics, which started out in 1915 as a group of youngsters from
Another formidable aggregation was the New York Renaissance (the Rens), organized by Robert Douglas in 1923 and regarded as the strongest all-black team of all time. During the 1925–26 campaign they split a six-game series with the Original Celtics. During the 1932–33 season the Rens won 88 consecutive games. In 1939 they defeated the Harlem Globetrotters and the Oshkosh All Stars in the world championship pro tournament in
The first professional league was the National Basketball League (NBL), formed in 1898. Its game differed from the college game in that a chicken-wire cage typically surrounded the court, separating players from often hostile fans. (Basketball players were long referred to as cagers.) The chicken wire was soon replaced with a rope netting, off which the players bounced like prizefighters in a boxing ring. The cage also kept the ball from going out-of-bounds, thus quickening the pace of play. In these early days players were also permitted to resume dribbling after halting. Despite the lively action of the game, the NBL and other early leagues were short-lived, mostly because of the frequent movement of players, who sold their services on a per-game basis. With players performing for several cities or clubs within the same season, the leagues suffered games of unreliable quality and many financially unstable franchises.
The Great Depression of the 1930s hurt professional basketball, and a new NBL was organized in 1937 in and around the upper
To help equalize the strength of the teams, the NBA established an annual college draft permitting each club to select a college senior in inverse order to the final standings in the previous year's competition, thus enabling the lower-standing clubs to select the more talented collegians. In addition, the game was altered through three radical rule changes in the 1954–55 season:
After a struggle to survive, including some large financial losses and several short-lived franchises, the NBA took its place as the major professional basketball league in the
The NBA grew increasingly popular through the 1980s. Attendance records were broken in that decade by most of the franchises, a growth pattern stimulated at least in part by the increased coverage by cable television. The NBA has a total of 30 teams organized into Eastern and Western conferences and further divided into six divisions. In the Eastern Conference the Atlantic Division comprises the Boston Celtics, the New Jersey Nets (in East Rutherford), the New York Knicks (in New York City), the Philadelphia 76ers, and the Toronto Raptors; the Central Division is made up of the Chicago Bulls, the Cleveland (Ohio) Cavaliers, the Detroit (Michigan) Pistons, the Indiana Pacers (in Indianapolis), and the Milwaukee (Wisconsin) Bucks; the Southeast Division comprises the Atlanta (Georgia) Hawks, the Charlotte (North Carolina) Bobcats, the Miami (Florida) Heat, the Orlando (Florida) Magic, and the Washington (D.C.) Wizards. In the Western Conference the Southwest Division comprises the Texas-based Dallas Mavericks, Houston Rockets, and San Antonio Spurs, the Memphis (Tennessee) Grizzlies, and the New Orleans (Louisiana) Hornets; the Northwest Division is made up of the Denver (Colorado) Nuggets, the Minnesota Timberwolves (in Minneapolis), the Portland (Oregon) Trail Blazers, the Seattle (Washington) SuperSonics, and the Utah Jazz (in Salt Lake City); the Pacific Division comprises the Phoenix (Arizona) Suns and the California-based Golden State Warriors (in Oakland), Los Angeles Clippers, Los Angeles Lakers, and Sacramento Kings. The play-offs follow the traditional 82-game schedule, involving 16 teams and beginning in late April. Played as a best-of-seven series, the final pairings stretch into late June.
Although basketball is traditionally a winter game, the NBA still fills its arenas and attracts a national television audience in late spring and early summer. As the popularity of the league grew, player salaries rose to an annual average of $3.7 million in 2003. Some superstars earned nearly $30 million yearly for the 2003–04 season. The NBA has a salary cap that limits (at least theoretically, as loopholes allow many teams to exceed the cap) the total amount a team can spend on salaries in any given season. For 2003–04 the team salary cap was $43.84 million.
In 2001 the NBA launched the National Basketball Development League (NBDL). The league served as a kind of “farm system” for the NBA. Through its first 50 years the NBA did not have an official system of player development or a true minor league system for bringing up young and inexperienced players such as exists in major league baseball. College basketball has been the area from which the NBA did the vast majority of its recruiting. By 2000 this had begun to change somewhat, as players began to be drafted straight out of high school. At the turn of the 21st century the NBA was reviewing whether to continue this policy or to set a minimum age limit for players entering the league.
Clara Baer, who introduced basketball at the H. Sophie Newcomb College for Women in
Women's rules over the years frequently have been modified. Until 1971 there were six players on a team, and the court was so divided that the three forwards played in the frontcourt and did all the scoring while the three guards covered the backcourt. Senda Berenson staged the first women's college basketball game in 1893 when her freshman and sophomore Smith College women played against one another. In April 1895 the women of the
In the early 1980s control of the women's college game was shifted from the Association for Intercollegiate Athletics for Women (AIAW) to the NCAA, a move that not only streamlined the operation and made it more efficient but also added to the visibility of women's basketball. The women's NCAA championship tournament runs concurrently with the men's, and many of the games are nationally televised. Women's basketball became an Olympic sport in 1976.
Individual women stars have been heavily recruited by colleges, but the players frequently found that there was no opportunity for them to play beyond the college level. Leagues were occasionally formed, such as the Women's Professional Basketball League (WPBL). Begun in 1978, the league lasted only three years. Eventually filling the void was the Women's National Basketball Association (WNBA). Aligned with the powerful NBA, the WNBA held its inaugural season in 1997 with eight teams. By 2004 the league had grown to 13 teams. The Eastern Conference consisted of the Charlotte Sting, Connecticut Sun (in Uncasville), Detroit Shock, Indiana Fever (in Indianapolis), New York Liberty (in New York City), and Washington (D.C.) Mystics. The Western Conference comprised the Houston Comets, Los Angeles Sparks, Minnesota Lynx (in
International competition
The success of international basketball was greatly advanced by Forrest C. (“Phog”) Allen, a Naismith disciple and a former coach at the University of Kansas, who led the movement for the inclusion of basketball in the Olympic Games in 1936 and thereafter. Basketball has also been played in the Pan-American Games since their inauguration in 1951. The international game is governed by the Fédération Internationale de Basketball Amateur (FIBA). World championships began in 1950 for men and in 1953 for women. Under international rules the court differs in that there is no frontcourt or backcourt, and the free throw lanes form a modified wedge shape. There are some differences in rules, including those governing substitutions, technical and personal fouls, free throws, intermissions, and time-outs. Outside the
Basketball has caught on particularly well in
Play of the game
Court and equipment
The standard American basketball court is in the shape of a rectangle 50 feet (15.2 metres) by 94 feet (28.7 metres); high school courts may be slightly smaller. There are various markings on the court, including a centre circle, free throw lanes, and a three-point line, that help regulate play. A goal, or basket, 18 inches (46 cm) in diameter is suspended from a backboard at each end of the court. The metal rim of the basket is 10 feet (3.0 metres) above the floor. In the professional game the backboard is a rectangle, 6 feet (1.8 metres) wide and 3.5 feet (1.1 metres) high, made of a transparent material, usually glass; it may be 4 feet (1.2 metres) high in college. The international court varies somewhat in size and markings. The spherical inflated ball measures 29.5 to 30 inches (74.9 to 76 cm) in circumference and weighs 20 to 22 ounces (567 to 624 grams). Its covering is leather or composition.
Rules
The rules governing play of the game are based on Naismith's five principles requiring a large, light ball, handled with the hands; no running with the ball; no player being restricted from getting the ball when it is in play; no personal contact; and a horizontal, elevated goal. The rules are spelled out in specific detail by the governing bodies of the several branches of the sport and cover the playing court and equipment, officials, players, scoring and timing, fouls, violations, and other matters. The officials include a referee and two umpires in college play (two referees and a crew chief in NBA play), two timers, and two scorekeepers. One player on each team acts as captain and speaks for the team on all matters involving the officials, such as interpretation of rules. Professional and high school games are divided into four periods, college games into two.
Since the 1895–96 season, a field goal has scored two points and a free throw one point. When the
Basketball is a rough sport, although it is officially a noncontact game. A player may pass or bounce (dribble) the ball to a position whereby he or a teammate may try for a basket. A foul is committed whenever a player makes such contact with an opponent as to put him at a disadvantage; for the 2001–02 season the NBA approved a rule change that eliminated touch fouls, meaning brief contact initiated by a defensive player is allowable if it does not impede the progress of the offensive player. If a player is fouled while shooting and the shot is good, the basket counts and he is awarded one free throw (an unhindered throw for a goal from behind the free throw, or foul, line, which is 15 feet [4.6 metres] from the backboard); if the shot misses, he gets a second free throw. If a foul is committed against a player who is not shooting, then his team is awarded either the possession of the ball or a free throw if the other team is in a penalty situation. A team is in a penalty situation when it has been called for a set number of fouls in one period (four in professional and international play and six in the college game). Infractions such as unsportsmanlike conduct or grasping the rim are technical fouls, which award to the opposition a free throw and possession of the ball. Overly violent fouls are called flagrant fouls and also result in free throws and possession for the opposition. Players are allowed a set number of personal fouls per game (six in the NBA, five in most other competitions) and are removed from the game when the foul limit is reached.
Other common infractions occur when a player (with the ball) takes an excessive number of steps or slides; fails to advance the ball within five seconds while being “closely guarded”; causes the ball to go out-of-bounds; steps over the foul line while shooting a free throw; steps over the end line or sideline while tossing the ball in to a teammate, or fails to pass the ball in within five seconds; runs with, kicks, or strikes the ball with his fist; dribbles a second time after having once concluded his dribble (double dribble); remains more than three seconds in his free throw lane while he or his team has the ball; causes the ball to go into the backcourt; retains the ball in the backcourt more than 10 seconds, changed in the NBA to 8 seconds for 2001–02; or fails to shoot within the time allotted by the shot clock (24 seconds in the NBA and international play, 30 in the WNBA, and 35 in college). The penalty is loss of the ball—opponents throw the ball in from the side.
Common terms used in basketball include the following:
Blocking
Any illegal personal contact that impedes the progress of an opponent who does not have the ball.
Dribble
Ball movement by bouncing the ball. A dribble ends when a player touches the ball with both hands simultaneously or does not continue his dribble.
Held ball
Called when two opponents have one or two hands so firmly upon the ball that neither can gain possession without undue roughness. It also is called when a player in the frontcourt is so closely guarded that he cannot pass or try for a goal or is obviously withholding the ball from play.
Jump ball
A method of putting the ball into play. The referee tosses the ball up between two opponents who try to tap it to a teammate. The jump ball is used to begin games and, in the professional game, when the ball is possessed by two opposing players at the same time.
Pass
Throwing, batting, or rolling the ball to another player. The main types are (1) the chest pass, in which the ball is released from a position in front of the chest, (2) the bounce pass, in which the ball is bounced on the floor to get it past a defensive opponent, (3) the roll pass on the floor, (4) the hook pass (side or overhead), and (5) the baseball pass, in which the ball is thrown a longer distance with one hand in a manner similar to a baseball throw.
Pivot
A movement in which a player with the ball steps once or more in any direction with the same foot while the other foot (pivot foot) is kept at its point of contact with the floor.
Pivot player
Another term for centre; also called a post player. He may begin the offensive set from a position just above the free throw line.
Rebound
Both teams attempting to gain possession of the ball after any try for a basket that is unsuccessful but stays in play rather than going out-of-bounds.
Screen
Legal action of a player who, without causing more than incidental contact, delays or prevents an opponent from reaching his desired position.
Shots from the field
One of the main field shots is the layup, in which the shooter, while close to the basket, jumps and lays the ball against the backboard so it will rebound into the basket or just lays it over the rim. Away from the basket, players use a one-hand push shot from a stride, jump, or standing position and a hook shot, which is overhead. Some players can dunk or slam-dunk the ball, jamming the ball down into the basket.
Traveling (walking with the ball)
Progressing in any direction in excess of the prescribed limits, normally two steps, while holding the ball.
Turnover
Loss of possession of the ball by a team through error or a rule violation.
Other special terms are discussed below.
Principles of play
Each team of five players consists of two forwards, two guards, and a centre, usually the tallest man on the team. At the beginning of the first period of a game, the ball is put into play by a jump ball at centre court; i.e., the referee tosses the ball up between the opposing centres, higher than either can jump, and when it descends each tries to tap it to one of his teammates, who must remain outside the centre circle until the ball is tapped. Subsequent periods of professional and college games begin with a throw in from out-of-bounds. Jump balls are also signaled by the officials when opposing players share possession of the ball (held ball) or simultaneously cause it to go out-of-bounds. In
A player who takes possession of the ball must pass or shoot before taking two steps or must start dribbling before taking his second step. When the dribble stops, the player must stop his movement and pass or shoot the ball. The ball may be tapped or batted with the hands, passed, bounced, or rolled in any direction.
As basketball has progressed, various coaches and players have devised intricate plays and offensive maneuvers. Some systems emphasize speed, deft ball handling, and high scoring; others stress ball control, slower patterned movement, and lower scoring. A strategy based on speed is called the fast break. When fast-break players recover possession of the ball in their backcourt, as by getting the rebound from an opponent's missed shot, they race upcourt using a combination of speed and passing and try to make a field goal before the opponents have time to set up a defense.
Some teams, either following an overall game plan or as an alternative when they do not have the opportunity for a fast break, employ a more deliberate style of offense. The guards carefully bring the ball down the court toward the basket and maintain possession of the ball in the frontcourt by passing and dribbling and by screening opponents in an effort to set up a play that will free a player for an open shot. Set patterns of offense generally use one or two pivot, or post, players who play near the free throw area at the low post positions (between the free throw line and the end line) or at high post positions (between the free throw line and the basket). The pivot players are usually the taller players on the team and are in position to receive passes, pass to teammates, shoot, screen for teammates, and tip in or rebound (recover) missed shots. All the players on the team are constantly on the move, executing the patterns designed to give one player a favourable shot—and at the same time place one or more teammates in a good position to tip in or rebound if that player misses.
Systems of defense also have developed over the years. One of the major strategies is known as man-to-man or man-for-man. In this system each player guards a specific opponent, except when “switching” with a teammate when he is screened or in order to guard another player in a more threatening scoring position. Another major strategy is the zone, or five-man, defense. In this system each player has a specific area to guard irrespective of which opponent plays in that area. The zone is designed to keep the offense from driving in to the basket and to force the offense into taking long shots.
A great many variations and combinations have been devised to employ the several aspects of both man-to-man and zone defensive strategies. The press, which can be either man-to-man or zone, is used by a team to guard its opponent so thoroughly that the opposition is forced to hurry its movements and especially to commit errors that result in turnovers. A full-court press applies this pressure defense from the moment the opposition takes possession of the ball at one end of the court. Well-coached teams are able to modify both their offensive and defensive strategies according to the shifting circumstances of the game and in response to their opponents' particular strengths and weaknesses and styles of play.
William George Mokray, Robert G. Logan, Larry W. Donald
Additional reading
Histories of the game of basketball include Basketball: Its Origin and Development (1996), by the game's inventor, James Naismith; Bernice Larson Webb, The Basketball Man: James Naismith, rev. ed. (1994), a comprehensive biography of Naismith; and Joan S. Holt and Marianna Trekell (eds.), A Century of Women's Basketball: From Frailty to Final Four (1991), a comprehensive treatment of the women's game. Other histories include Stanley Cohen, The Game They Played (1977), a view of the gambling scandals that rocked college basketball in the 1950s; Joe Gergen, The Final Four (1987), a history of the NCAA; and Robert W. Peterson, Cages to Jump Shots: Pro Basketball's Early Years (1990), a history of the developing professional game with an emphasis on the contributions of African and Jewish Americans. The NBA's Official Encyclopedia of Pro Basketball, 3rd ed. (2000), has general information and statistics.
For further records and statistics, see The Official NBA Guide (annual), NBA Register (annual), and The Official WNBA Guide and Register (annual), all published by The Sporting News, which give the records for the preceding year and the career surveys of all players in that year; and The Official National Collegiate Athletic Association Basketball Guide (annual), with U.S. college records, schedules, and statistics.
""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
In honor of this historic event, the International Astronomical Union and the United Nations have proclaimed 2009 as the International Year of Astronomy.
The purpose of IYA is to spread awareness of astronomy's contributions to society and culture, stimulate young people's interest in science, portray astronomy as a global peaceful endeavor and to nourish a scientific outlook in society.
NASA invites you to join in the celebration of IYA 2009 as a part of the overall U.S. IYA effort. To commemorate this event, NASA has launched a new Web site that will serve as a portal to NASA resources, events, and opportunities for involvement. A program of regional and national IYA activities for students, teachers, and the public is currently being planned.
To learn more about IYA 2009 and to find news and information about the events being planned, visit http://astronomy2009.nasa.gov
The internet could soon be made obsolete. The scientists who pioneered it have now built a lightning-fast replacement capable of downloading entire feature films within seconds.
At speeds about 10,000 times faster than a typical broadband connection, “the grid” will be able to send the entire Rolling Stones back catalogue from Britain to Japan in less than two seconds.
http://cph-theory.persiangig.com/2150-superfastinternet.htm
Large Hadron Collider could unlock secrets of the Big Bang
As the world's largest and most expensive science experiment, the new particle accelerator buried 300 ft beneath the Alpine foothills along the Swiss-French border is 17 miles long and up to 12 stories high. It is designed to generate temperatures of more than a trillion degrees centigrade.
http://cph-theory.persiangig.com/2149-largehadroncollider.htm
Acid rain
is rain or any other form of precipitation that is unusually acidic. It has harmful effects on plants, aquatic animals and buildings. Acid rain is mostly caused by human emissions of sulfur and nitrogen compounds which react in the atmosphere to produce acids. In recent years, many governments have introduced laws to reduce these emissions.
The term "acid rain" is commonly used to mean the deposition of acidic components in rain, snow, fog, dew, or dry particles. The more accurate term is "acid precipitation." Distilled water, which contains no carbon dioxide, has a neutral pH of 7. Liquids with a pH less than 7 are acidic, and those with a pH greater than 7 are bases. "Clean" or unpolluted rain is slightly acidic, its pH being about 5.6, because carbon dioxide and water in the air react together to form carbonic acid, a weak acid.
Carbonic acid then can ionize in water, forming low concentrations of hydronium and bicarbonate ions: H2CO3 + H2O ⇌ HCO3- + H3O+.
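The pH of about 5.6 quoted above for unpolluted rain can be reproduced from this equilibrium. The sketch below is illustrative only: the CO2 partial pressure, Henry's-law constant, and first ionization constant of carbonic acid are assumed round values (published figures vary slightly), and a simple weak-acid approximation is used.

import math

# Assumed illustrative constants (not given in the text):
p_co2 = 385e-6    # atmospheric CO2 partial pressure, atm (about 385 ppm)
k_h = 3.3e-2      # Henry's-law constant for CO2 in water, mol/(L*atm)
ka1 = 4.45e-7     # first ionization constant of carbonic acid, mol/L

c_acid = k_h * p_co2              # dissolved CO2/carbonic acid, mol/L
h3o = math.sqrt(ka1 * c_acid)     # weak-acid approximation for [H3O+]
ph = -math.log10(h3o)

print(f"[H3O+] = {h3o:.2e} mol/L, pH = {ph:.2f}")   # comes out near 5.6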
The extra acidity in rain comes from the reaction of primary air pollutants, primarily sulfur oxides and nitrogen oxides, with water in the air to form strong acids (like sulfuric and nitric acid). The main sources of these pollutants are vehicles and industrial and power-generating plants.
Since the Industrial Revolution, emissions of sulfur dioxide and nitrogen oxides to the atmosphere have increased.[1] Acid rain was first identified in Manchester, England, where in 1852 Robert Angus Smith found the relationship between acid rain and atmospheric pollution.[2] Though acid rain was discovered in 1852, it wasn't until the late 1960s that scientists began widely observing and studying the phenomenon. Canadian Harold Harvey was among the first to research a "dead" lake. Public awareness of acid rain in the U.S. increased in the 1990s after the New York Times published reports from the Hubbard Brook Experimental Forest in New Hampshire on the myriad deleterious environmental effects demonstrated to result from it.
Occasional pH readings of well below 2.4 (the acidity of vinegar) have been reported in industrialized areas. Industrial acid rain is a substantial problem in China, Eastern Europe, Russia and areas down-wind from them. These areas all burn sulfur-containing coal to generate heat and electricity. The problem of acid rain not only has increased with population and industrial growth, but has become more widespread. The use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain by releasing gases into regional atmospheric circulation. Often deposition occurs a considerable distance downwind of the emissions, with mountainous regions tending to receive the most (simply because of their higher rainfall). An example of this effect is the low pH of rain (compared to the local emissions) which falls in Scandinavia.[6]
The most important gas that leads to acidification is sulfur dioxide. Emissions of nitrogen oxides, which are oxidized to form nitric acid, are of increasing importance because of stricter controls on emissions of sulfur-containing compounds. About 70 Tg(S) per year in the form of SO2 comes from fossil fuel combustion and industry, 2.8 Tg(S) per year from wildfires, and 7-8 Tg(S) per year from volcanoes.
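Taking the figures in the paragraph above at face value, the relative contributions are easy to tally; this small sketch represents the 7-8 Tg(S) volcanic range by its midpoint, an assumption made only for the calculation.

# Sulfur emissions quoted above, in Tg(S) per year
sources = {
    "fossil fuel combustion and industry": 70.0,
    "wildfires": 2.8,
    "volcanoes": 7.5,   # assumed midpoint of the 7-8 Tg(S) range
}

total = sum(sources.values())
for name, value in sources.items():
    print(f"{name:38s} {value:5.1f} Tg(S)/yr  ({100 * value / total:4.1f}%)")
print(f"{'total':38s} {total:5.1f} Tg(S)/yr")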
The principal natural phenomena that contribute acid-producing gases to the atmosphere are emissions from volcanoes and those from biological processes that occur on the land, in wetlands, and in the oceans. The major biological source of sulfur-containing compounds is dimethyl sulfide.
The history of nuclear weapons is the subject of voluminous literature. Richard Rhodes, The Making of the Atomic Bomb (1986), is an excellent work on the
The British project is discussed in the official histories of the U.K. Atomic Energy Authority: Margaret Gowing, Britain and Atomic Energy, 1939–1945 (1964), and Independence and Deterrence: Britain and Atomic Energy, 1945–1952, 2 vol. (1974). Little has been published about the program of the former U.S.S.R., but see David Holloway, The Soviet Union and the Arms Race, 2nd ed. (1984); and Thomas B. Cochran, Soviet Nuclear Weapons (1989). No official history is available for the French project. Bertrand Goldschmidt, Les Rivalités atomiques, 1939–1966 (1967), is a semiofficial account by a participant. The Chinese project is covered in John Wilson Lewis and
Atomic Energy Commission (AEC), U.S. federal civilian agency established by the Atomic Energy Act, which was signed into law by President Harry S. Truman on Aug. 1, 1946, to control the development and production of nuclear weapons and to direct the research and development of peaceful uses of nuclear energy. On Dec. 31, 1946, the AEC succeeded the Manhattan Engineer District of the U.S. Army Corps of Engineers (which had developed the atomic bomb during World War II) and thus officially took control of the nation's nuclear program.
A nuclear explosion releases energy in a variety of forms, including blast, heat, and radiation (X rays, gamma rays, and neutrons). By varying a weapon's design, these effects could be tailored for a specific military purpose. In an enhanced-radiation weapon, more commonly called a neutron bomb, the objective was to minimize the blast by reducing the fission yield and to enhance the neutron radiation. Such a weapon would prove lethal to invading troops without, it was hoped, destroying the defending country's towns and countryside. It was actually a small (on the order of one kiloton), two-stage thermonuclear weapon that utilized deuterium and tritium, rather than lithium deuteride, to maximize the release of fast neutrons. The first
Though it had virtually created the American nuclear-power industry, the AEC also had to regulate that industry to ensure public health and safety and to safeguard national security. Because these dual roles often conflicted with each other, the U.S. government under the Energy Reorganization Act of 1974 disbanded the AEC and divided its functions between two new agencies: the Nuclear Regulatory Commission (q.v.), which regulates the nuclear-power industry; and the Energy Research and Development Administration, which was disbanded in 1977 when the Department of Energy was created.
"""""autonomous intergovernmental organization dedicated to increasing the contribution of atomic energy to the world's peace and well-being and ensuring that agency assistance is not used for military purposes. The IAEA and its director general, Mohamed ElBaradei, won the Nobel Prize for Peace in 2005.
The agency was established by representatives of more than 80 countries in October 1956, nearly three years after U.S. President Dwight D. Eisenhower's “Atoms for Peace” speech to the United Nations General Assembly, in which Eisenhower called for the creation of an international organization for monitoring the diffusion of nuclear resources and technology. The IAEA's statute officially came into force on July 29, 1957. Its activities include research on the applications of atomic energy to medicine, agriculture, water resources, and industry; the operation of conferences, training programs, fellowships, and publications to promote the exchange of technical information and skills; the provision of technical assistance, especially to less-developed countries; and the establishment and administration of radiation safeguards. As part of the Treaty on the Non-Proliferation of Nuclear Weapons (1968), all non-nuclear powers are required to negotiate a safeguards agreement with the IAEA; as part of that agreement, the IAEA is given authority to monitor nuclear programs and to inspect nuclear facilities.
The General Conference, consisting of all members (in the early 21st century some 135 countries were members), meets annually to approve the budget and programs and to debate the IAEA's general policies; it also is responsible for approving the appointment of a director general and admitting new members. The Board of Governors, which consists of 35 members who meet about five times per year, is charged with carrying out the agency's statutory functions, approving safeguards agreements, and appointing the director general. The day-to-day affairs of the IAEA are run by the Secretariat, which is headed by the director general, who is assisted by six deputies; the Secretariat's departments include nuclear energy, nuclear safety, nuclear sciences and application, safeguards, and technical cooperation. Headquarters are in
"""'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
nuclear weapon, also called atomic weapon or thermonuclear weapon, bomb or other warhead that derives its force from either the fission or the fusion of atomic nuclei and is delivered by an aircraft, missile, Earth satellite, or other strategic delivery system.
Nuclear weapons have enormous explosive force. Their significance may best be appreciated by the coining of the words kiloton (1,000 tons) and megaton (one million tons) to describe their blast effect in equivalent weights of TNT. For example, the first nuclear fission bomb, the one dropped on
The first nuclear weapons were bombs delivered by aircraft; warheads for strategic ballistic missiles, however, have become by far the most important nuclear weapons. There are also smaller tactical nuclear weapons that include artillery projectiles, demolition munitions (land mines), antisubmarine depth bombs, torpedoes, and short-range ballistic and cruise missiles. The
The basic principle of nuclear fission weapons (also called atomic bombs) involves the assembly of a sufficient amount of fissile material (e.g., the uranium isotope uranium-235 or the plutonium isotope plutonium-239) to “go supercritical”—that is, for neutrons (which cause fission and are in turn released during fission) to be produced at a much faster rate than they can escape from the assembly. There are two ways in which a subcritical assembly of fissionable material can be rendered supercritical and made to explode. The subcritical assembly may consist of two parts, each of which is too small to have a positive multiplication rate; the two parts can be shot together by a gun-type device. Alternatively, a subcritical assembly surrounded by a chemical high explosive may be compressed into a supercritical one by detonating the explosive.
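The phrase "produced at a much faster rate than they can escape" can be made concrete with a toy generation-by-generation model. The sketch below is deliberately schematic: the multiplication factors, starting count, and number of generations are arbitrary illustrations, not weapon parameters.

def fissions_after(generations, k, start=1.0):
    """Approximate cumulative fissions after a number of neutron generations,
    where k is the average number of follow-on fissions caused by each fission."""
    total, current = 0.0, start
    for _ in range(generations):
        total += current
        current *= k        # each generation multiplies the count by k
    return total

# k < 1 (subcritical): the chain dies out.
# k > 1 (supercritical): the count grows geometrically from one generation to the next.
for k in (0.9, 1.0, 2.0):
    print(f"k = {k}: ~{fissions_after(30, k):.3g} fissions after 30 generations")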
The basic principle of the fusion weapon (also called the thermonuclear or hydrogen bomb) is to produce ignition conditions in a thermonuclear fuel such as deuterium, an isotope of hydrogen with double the weight of normal hydrogen, or lithium deuteride. The Sun may be considered a thermonuclear device; its main fuel is hydrogen, which it consumes in its core at temperatures of 18,000,000° to 36,000,000° F (10,000,000° to 20,000,000° C). To achieve comparable temperatures in a weapon, a fission triggering device is used.
Following the discovery of artificial radioactivity in the 1930s, the Italian physicist Enrico Fermi performed a series of experiments in which he exposed many elements to low-velocity neutrons. When he exposed thorium and uranium, chemically different radioactive products resulted, indicating that new elements had been formed, rather than merely isotopes of the original elements. Fermi concluded that he had produced elements beyond uranium (element 92), then the last element in the periodic table; he called them transuranic elements and named two of them ausonium (element 93) and hesperium (element 94). During the autumn of 1938, however, when Fermi was receiving the Nobel Prize for his work, Otto Hahn and Fritz Strassmann of Germany discovered that one of the “new” elements was actually barium (element 56).
The Danish scientist Niels Bohr visited the United States in January 1939, carrying with him an explanation, devised by the Austrian refugee scientist Lise Meitner and her nephew Otto Frisch, of the process behind Hahn's surprising data. Low-velocity neutrons caused the uranium nucleus to fission, or break apart, into two smaller pieces; the combined atomic numbers of the two pieces—for example, barium and krypton—equalled that of the uranium nucleus. Much energy was released in the process. This news set off experiments at many laboratories. Bohr worked with John Wheeler at Princeton; they postulated that the uranium isotope uranium-235 was the one undergoing fission; the other isotope, uranium-238, merely absorbed the neutrons. It was discovered that neutrons were produced during the fission process; on the average, each fissioning atom produced more than two neutrons. If the proper amount of material were assembled, these free neutrons might create a chain reaction. Under special conditions, a very fast chain reaction might produce a very large release of energy; in short, a weapon of fantastic power might be feasible.
The possibility that such a weapon might first be developed by Nazi Germany alarmed many scientists and was drawn to the attention of President Franklin D. Roosevelt by Albert Einstein, then living in the
During the summer of 1940, Edwin McMillan and Philip Abelson of the University of California at Berkeley discovered element 93, named neptunium; they inferred that this element would decay into element 94. The Bohr and Wheeler fission theory suggested that one of the isotopes, mass number 239, of this new element might also fission under low-velocity neutron bombardment. The cyclotron at the University of California at Berkeley was put to work to make enough element 94 for experiments; by mid-1941, element 94 had been firmly identified and named plutonium, and its fission characteristics had been established. Low-velocity neutrons did indeed cause it to undergo fission, and at a rate much higher than that of uranium-235. The
In May 1941 a review committee reported that a nuclear explosive probably could not be available before 1945, that a chain reaction in natural uranium was probably 18 months off, and that it would take at least an additional year to produce enough plutonium for a bomb and three to five years to separate enough uranium-235. Further, it was held that all of these estimates were optimistic. In late June 1941 President Roosevelt established the Office of Scientific Research and Development under the direction of the scientist Vannevar Bush.
In the fall of 1941 the
The U.S. entry into World War II in December 1941 was decisive in providing funds for a massive research and production effort for obtaining fissionable materials, and in May 1942 the momentous decision was made to proceed simultaneously on all promising production methods. Bush decided that the army should be brought into the production plant construction activities. The Corps of Engineers opened an office in
Meantime, as part of the June 1942 reorganization, J. Robert Oppenheimer became, in October, the director of Project Y, the group that was to design the actual weapon. This effort was spread over several locations. On November 16
The emphasis during the summer and fall of 1943 was on the gun method of assembly, in which the projectile, a subcritical piece of uranium-235 (or plutonium-239), would be placed in a gun barrel and fired into the target, another subcritical piece of uranium-235. After the mass was joined (and now supercritical), a neutron source would be used to start the chain reaction. A problem developed with applying the gun method to plutonium, however. In manufacturing plutonium-239 from uranium-238 in a reactor, some of the plutonium-239 absorbs a neutron and becomes plutonium-240. This material undergoes spontaneous fission, producing neutrons. Some neutrons will always be present in a plutonium assembly and cause it to begin multiplying as soon as it goes critical, before it reaches supercriticality; it will then explode prematurely and produce comparatively little energy. The gun designers tried to beat this problem by achieving higher projectile speeds, but they lost out in the end to a better idea—the implosion method.
In April 1943 a Project Y physicist, Seth Neddermeyer, proposed to assemble a supercritical mass from many directions, instead of just two as in the gun. In particular, a number of shaped charges placed on the surface of a sphere would fire many subcritical pieces into one common ball at the centre of the sphere. John von Neumann, a mathematician who had had experience in shaped-charge, armour-piercing work, supported the implosion method enthusiastically and pointed out that the greater speed of assembly might solve the plutonium-240 problem. The physicist Edward Teller suggested that the converging material might also become compressed, offering the possibility that less material would be needed. By late 1943 the implosion method was being given an increasingly higher priority; by July 1944 it had become clear that the plutonium gun could not be built. The only way to use plutonium in a weapon was by the implosion method.
By 1944 the Manhattan Project was spending money at a rate of more than $1 billion per year. The situation was likened to a nightmarish horse race; no one could say which of the horses (the calutron plant, the diffusion plant, or the plutonium reactors) was likely to win or whether any of them would even finish the race. In July 1944 the first Y-12 calutrons had been running for three months but were operating at less than 50 percent efficiency; the main problem was in recovering the large amounts of material that reached neither the uranium-235 nor uranium-238 boxes and, thus, had to be rerun through the system. The gaseous diffusion plant was far from completion, the production of satisfactory barriers remaining the major problem. And the first plutonium reactor at Hanford, Washington, had yet to begin operating.
Within 24 hours of Roosevelt's death on April 12, 1945, President Harry S. Truman was told briefly about the atomic bomb by Secretary of War Henry Stimson. On April 25 Stimson, with Groves, gave the new president a detailed briefing on the status of the project and the power expected of the weapon.
The test of the plutonium weapon was named Trinity; it was fired at 5:29:45 AM (local time) on July 16, 1945, at the Alamogordo Bombing Range in south-central New Mexico and released energy equivalent to roughly 20 kilotons of TNT.
A single B-29 bomber, named the Enola Gay, flew over Hiroshima, Japan, on the morning of Aug. 6, 1945, and dropped the untested uranium-235 gun-assembly bomb known as Little Boy; three days later a plutonium implosion bomb, Fat Man, was dropped on Nagasaki.
Scientists in several countries performed experiments in connection with nuclear reactors and fission weapons during World War II, but no country other than the United States carried its program through to the production of a usable weapon during the war.
By the time the war began on Sept. 1, 1939, Germany had a special office for the military application of nuclear fission; chain-reaction experiments with uranium and carbon were being planned, and ways of separating the uranium isotopes were under study. Some measurements on carbon, later shown to be in error, led the physicist Werner Heisenberg to recommend that heavy water be used, instead, for the moderator. This dependence on scarce heavy water was a major reason the German experiments never reached a successful conclusion. The isotope separation studies were oriented toward low enrichments (about 1 percent uranium-235) for the chain reaction experiments; they never got past the laboratory apparatus stage, and several times these prototypes were destroyed in bombing attacks. As for the fission weapon itself, it was a rather distant goal, and practically nothing but “back-of-the-envelope” studies were done on it.
Like their counterparts elsewhere, Japanese scientists initiated research on an atomic bomb. In December 1940,
The British weapon project started informally, as in the United States, among university physicists.
The formal postwar decision to manufacture a British atomic bomb was made by Prime Minister Clement Attlee's government during a meeting of the Defence Subcommittee of the Cabinet in early January 1947. The construction of a first reactor to produce fissile material and associated facilities had gotten under way the year before. William Penney, a member of the British team at Los Alamos during the war, was placed in charge of fabricating and testing the bomb, which was to be of a plutonium type similar to the one dropped on Nagasaki.
In the decade before the war, Soviet physicists were actively engaged in nuclear and atomic research. By 1939 they had established that, once uranium has been fissioned, each nucleus emits neutrons and can therefore, at least in theory, begin a chain reaction. The following year, physicists concluded that such a chain reaction could be ignited in either natural uranium or its isotope, uranium-235, and that this reaction could be sustained and controlled with a moderator such as heavy water. In June 1940 the Soviet Academy of Sciences established the Uranium Commission to study the “uranium problem.”
In February 1939, news had reached Soviet physicists of the discovery of nuclear fission in the West. The military implications of such a discovery were immediately apparent, but Soviet research was brought to a halt by the German invasion in June 1941. In early 1942 the physicist Georgy N. Flerov noticed that articles on nuclear fission were no longer appearing in western journals; this indicated that research on the subject had become secret. In response, Flerov wrote to, among others, Premier Joseph Stalin, insisting that “we must build the uranium bomb without delay.” In 1943 Stalin ordered the commencement of a research project under the supervision of Igor V. Kurchatov, who had been director of the nuclear physics laboratory at the Physico-Technical Institute in Leningrad.
By the end of 1944, 100 scientists were working under Kurchatov, and by the time of the Potsdam Conference, which brought the Allied leaders together the day after the Trinity test, the project on the atomic bomb was seriously under way. During one session at the conference, Truman remarked to Stalin that the United States had a new weapon of unusual destructive force; Stalin replied simply that he hoped it would be put to good use against Japan.
Upon his return from Potsdam, Stalin ordered the Soviet atomic project accelerated, and after the bombing of Hiroshima he placed it under the administrative direction of Lavrenty Beria.
French scientists, such as Henri Becquerel, Marie and Pierre Curie, and Frédéric and Irène Joliot-Curie, made important contributions to 20th-century atomic physics. During World War II several French scientists participated in an Anglo-Canadian project in Canada, where a heavy-water reactor was being developed.
On Oct. 18, 1945, the Atomic Energy Commission (Commissariat à l'Énergie Atomique; CEA) was established by General Charles de Gaulle with the objective of exploiting the scientific, industrial, and military potential of atomic energy. The military application of atomic energy did not begin until 1951. In July 1952 the National Assembly adopted a five-year plan, a primary goal of which was to build plutonium production reactors. Work began on a reactor at Marcoule in the summer of 1954 and on a plutonium separating plant the following year.
On Dec. 26, 1954, the issue of proceeding with a French bomb was raised at Cabinet level. The outcome was that Prime Minister Pierre Mendès-France launched a secret program to develop a bomb. On Nov. 30, 1956, a protocol was signed specifying tasks the CEA and the Defense Ministry would perform. These included providing the plutonium, assembling a device, and preparing a test site. On July 22, 1958, de Gaulle, who had resumed power as prime minister, set the date for the first atomic explosion to occur within the first three months of 1960. On Feb. 13, 1960, the French detonated their first atomic bomb from a 330-foot tower in the Sahara, at Reggane in what was then French Algeria.
On Jan. 15, 1955, Mao Zedong (Mao Tse-tung) and the Chinese leadership decided to obtain their own nuclear arsenal. From 1955 to 1958 the Chinese were partially dependent upon the Soviet Union for technical assistance.
Unlike the initial tests by the other nuclear powers, China's first detonation, on Oct. 16, 1964, used uranium-235 rather than plutonium.
On May 18, 1974, India detonated a nuclear device in the Rājasthān desert near Pokaran with a reported yield of 15 kilotons.
Several other countries were believed to have built nuclear weapons or to have acquired the capability of assembling them on short notice. Israel was believed to have built an arsenal of more than 200 weapons, including thermonuclear bombs. In August 1988 the South African foreign minister said that South Africa had “the capability to [produce a nuclear bomb] should we want to.” Argentina, Brazil, South Korea, and Taiwan also had the scientific and industrial base to develop and produce nuclear weapons, but they did not seem to have active programs.
U.S. research on thermonuclear weapons started from a conversation in September 1941 between Fermi and Teller. Fermi wondered if the explosion of a fission weapon could ignite a mass of deuterium sufficiently to begin thermonuclear fusion. (Deuterium, an isotope of hydrogen with one proton and one neutron in the nucleus—i.e., twice the normal weight—makes up 0.015 percent of natural hydrogen and can be separated in quantity by electrolysis and distillation. It exists in liquid form only below about −417° F, or −250° C.) Teller undertook to analyze the thermonuclear processes in some detail and presented his findings to a group of theoretical physicists convened by Oppenheimer in Berkeley, California, in the summer of 1942.
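As a brief aside before returning to the narrative, the abundance figure quoted above can be turned into a back-of-the-envelope estimate of how much deuterium ordinary water contains. The sketch below assumes only standard molar masses and the 0.015 percent atom fraction; the per-litre and per-kilogram results are illustrative estimates, not values from the source.

```python
# Rough arithmetic on the natural abundance of deuterium (~0.015% of hydrogen atoms).
WATER_MOL_PER_LITRE = 1000.0 / 18.015   # ~55.5 mol of H2O in a litre of water
D_ATOM_FRACTION = 1.5e-4                # 0.015 percent of hydrogen atoms
D_MOLAR_MASS_G = 2.014                  # grams per mole of deuterium

def deuterium_grams_per_litre():
    """Grams of deuterium contained in one litre of ordinary water."""
    hydrogen_mol = 2 * WATER_MOL_PER_LITRE      # two hydrogen atoms per water molecule
    return hydrogen_mol * D_ATOM_FRACTION * D_MOLAR_MASS_G

grams = deuterium_grams_per_litre()
print(f"~{grams:.3f} g of deuterium per litre of water")
print(f"~{1000.0 / grams:,.0f} litres of water per kilogram of deuterium")
```

Under these assumptions a litre of water holds only a few hundredths of a gram of deuterium, which is why separating it in quantity requires processing very large volumes of water.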
As a result of these discussions the participants concluded that a weapon based on thermonuclear fusion was possible. When the Los Alamos laboratory was organized the following year, however, the fission bomb took priority, and thermonuclear work continued only as a small theoretical effort throughout the war.
In the fall of 1945, after the success of the atomic bomb and the end of World War II, the future of the Manhattan Project, including Los Alamos, was uncertain, and most of the wartime scientific staff returned to their universities.
From April 18 to 20, 1946, a conference led by Teller at Los Alamos reviewed the wartime work on the Super and concluded, overoptimistically as it turned out, that a thermonuclear weapon based on that design was feasible.
One of the two central design problems was how to ignite the thermonuclear fuel. It was recognized early on that a mixture of deuterium and tritium theoretically could be ignited at lower temperatures and would have a faster reaction time than deuterium alone, but the question of how to achieve ignition remained unresolved. The other problem, equally difficult, was whether and under what conditions burning might proceed in thermonuclear fuel once ignition had taken place. An exploding thermonuclear weapon involves many extremely complicated, interacting physical and nuclear processes. The speeds of the exploding materials can be up to millions of feet per second, temperatures and pressures are greater than those at the centre of the Sun, and time scales are billionths of a second. To resolve whether the “classical Super” or any other design would work required accurate numerical models of these processes—a formidable task, since the computers that would be needed to perform the calculations were still under development. Also, the requisite fission triggers were not yet ready, and the limited resources of Los Alamos in the immediate postwar years could not support a broad program of thermonuclear research.
On Sept. 23, 1949, Truman announced that “we have evidence that within recent weeks an atomic explosion occurred in the U.S.S.R.” This first Soviet test stimulated an intense, four-month, secret debate about whether to proceed with the hydrogen bomb project. One of the strongest statements of opposition against proceeding with a hydrogen bomb program came from the General Advisory Committee (GAC) of the AEC, chaired by Oppenheimer. In their report of Oct. 30, 1949, the majority recommended “strongly against” initiating an all-out effort, believing “that extreme dangers to mankind inherent in the proposal wholly outweigh any military advantages that could come from this development.” “A super bomb,” they went on to say, “might become a weapon of genocide.” They believed that “a super bomb should never be produced.” Nevertheless, the Joint Chiefs of Staff, the State and Defense departments, the Joint Committee on Atomic Energy, and a special subcommittee of the National Security Council all recommended proceeding with the hydrogen bomb. Truman announced on Jan. 31, 1950, that he had directed the AEC to continue its work on all forms of atomic weapons, including hydrogen bombs. In March, after the arrest of the wartime Los Alamos spy Klaus Fuchs had underscored the urgency, Truman approved a greatly expanded effort.
In the months that followed Truman's decision, the prospect of actually being able to build a hydrogen bomb became less and less likely. The mathematician Stanislaw M. Ulam, with the assistance of Cornelius J. Everett, had undertaken calculations of the amount of tritium that would be needed for ignition of the classical Super. Their results were spectacular and, to Teller, discouraging: the amount needed was estimated to be enormous. In the summer of 1950 more detailed and thorough calculations by other members of the Los Alamos Theoretical Division confirmed Ulam's estimates. This meant that the cost of the Super program would be prohibitive.
Also in the summer of 1950, Fermi and Ulam calculated that liquid deuterium probably would not burn—that is, there would probably be no self-sustaining and propagating reaction. Barring surprises, therefore, the theoretical work to 1950 indicated that every important assumption regarding the viability of the classical Super was wrong. If success was to come, it would have to be accomplished by other means.
The other means became apparent between February and April 1951, following breakthroughs achieved at Los Alamos.
The major figures in these breakthroughs were Ulam and Teller. In December 1950 Ulam had proposed a new fission weapon design, using the mechanical shock of an ordinary fission bomb to compress to a very high density a second fissile core. (This two-stage fission device was conceived entirely independently of the thermonuclear program, its aim being to use fissionable materials more economically.) Early in 1951 Ulam went to see Teller and proposed that the two-stage approach be used to compress and ignite a thermonuclear secondary. Teller suggested radiation implosion, rather than mechanical shock, as the mechanism for compressing the thermonuclear fuel in the second stage. On March 9, 1951, Teller and Ulam presented a report containing both alternatives, entitled “On Heterocatalytic Detonations I. Hydrodynamic Lenses and Radiation Mirrors.” A second report, dated April 4, by Teller, included some extensive calculations by Frederic de Hoffmann and elaborated on how a thermonuclear bomb could be constructed. The two-stage radiation implosion design proposed by these reports, which led to the modern concept of thermonuclear weapons, became known as the Teller–Ulam configuration.
It was immediately clear to all scientists concerned that these new ideas—achieving a high density in the thermonuclear fuel by compression using a fission primary—provided for the first time a firm basis for a fusion weapon. Without hesitation, Los Alamos adopted the new approach, and in June 1951 a conference convened at the Institute for Advanced Study in Princeton, New Jersey, endorsed it as the basis for the weapon program.
Just prior to the conference, on May 8 at Enewetak atoll in the western Pacific, a test explosion called George had successfully used a fission bomb to ignite a small quantity of deuterium and tritium. The original purpose of George had been to confirm the burning of these thermonuclear fuels (about which there had never been any doubt), but with the new conceptual understanding contributed by Teller and Ulam, the test provided the bonus of successfully demonstrating radiation implosion.
In September 1951, Los Alamos turned to the design of a full-scale test of the new configuration; the resulting device, fired at Enewetak on Nov. 1, 1952, in the Mike shot of Operation Ivy, yielded the equivalent of roughly 10 megatons of TNT.
With the Teller–Ulam configuration proved, deliverable thermonuclear weapons were designed and initially tested during Operation Castle, a series of tests conducted principally at Bikini atoll in the spring of 1954.
With completion of Castle, the feasibility of lightweight, solid-fuel thermonuclear weapons was proved. Vast quantities of tritium would not be needed after all. New possibilities for adaptation of thermonuclear weapons to various kinds of missiles began to be explored.
In 1948 Kurchatov organized a theoretical group, under the supervision of physicist Igor Y. Tamm, to begin work on a fusion bomb. (This group included Andrey Sakharov, who, after contributing several important ideas to the effort, later became known as the “father of the Soviet H-bomb.”) In general, the Soviet program was two to three years behind that of the United States; a thermonuclear device of a layered design was tested on Aug. 12, 1953, and a true two-stage weapon on Nov. 22, 1955.
Minister of Defence Harold Macmillan announced in his Statement of Defence, on Feb. 17, 1955, that the United Kingdom planned to develop and produce thermonuclear weapons.
It remained unclear exactly when the first British thermonuclear test occurred. Three high-yield tests in May and June 1957 near Malden Island in the central Pacific were publicly presented as hydrogen bomb tests, though their designs and yields were never officially disclosed.
Well before their first atomic test, the French assumed they would eventually have to become a thermonuclear power as well. The first French thermonuclear test was conducted on Aug. 24, 1968.
Plans to proceed toward a Chinese hydrogen bomb were begun in 1960, with the formation of a group by the Institute of Atomic Energy to study thermonuclear theory; China's first thermonuclear test followed on June 17, 1967, only about 32 months after its first fission explosion.
From the late 1940s, American designers concentrated on making fission weapons smaller, more efficient, and more economical in their use of fissile material.
The first advances came through the test series Operation Sandstone, conducted in the spring of 1948. These three tests used implosion designs of a second generation, which incorporated composite and levitated cores. A composite core consisted of concentric shells of both uranium-235 and plutonium-239, permitting more efficient use of these fissile materials. Higher compression of the fissile material was achieved by levitating the core—that is, introducing an air gap into the weapon to obtain a higher yield for the same amount of fissile material.
Tests during Operation Ranger in early 1951 included implosion devices with cores containing a fraction of a critical mass—a concept originated in 1944 during the Manhattan Project. Unlike the original Fat Man design, these “fractional crit” weapons relied on compressing the fissile core to a higher density in order to achieve a supercritical mass. These designs could achieve appreciable yields with less material.
One technique for enhancing the yield of a fission explosion was called “boosting.” Boosting referred to a process whereby thermonuclear reactions were used as a source of neutrons for inducing fissions at a much higher rate than could be achieved with neutrons from fission chain reactions alone. The concept was invented by Teller by the middle of 1943. By incorporating deuterium and tritium into the core of the fissile material, a higher yield could be obtained from a given quantity of fissile material—or, alternatively, the same yield could be achieved with a smaller amount. The fourth test of Operation Greenhouse, on May 24, 1951, was the first proof test of a booster design. In subsequent decades approximately 90 percent of nuclear weapons in the U.S. stockpile incorporated boosting.
Refinements of the basic two-stage Teller–Ulam configuration resulted in thermonuclear weapons with a wide variety of characteristics and applications. Some high-yield deliverable weapons incorporated additional thermonuclear fuel (lithium deuteride) and fissionable material (uranium-235 and uranium-238) in a third stage. While there was no theoretical limit to the yield that could be achieved from a thermonuclear bomb (for example, by adding more stages), there were practical limits on the size and weight of weapons that could be carried by aircraft or missiles. The largest explosion ever produced, a Soviet test conducted on Oct. 30, 1961, yielded the equivalent of roughly 50 megatons of TNT.
The AEC was headed by a five-member board of commissioners, one of whom served as chairman. During the late 1940s and early '50s, the AEC devoted most of its resources to developing and producing nuclear weapons, though it also built several small-scale nuclear-power plants for research purposes. In 1954 the Atomic Energy Act was revised to permit private industry to build nuclear reactors (for electric power), and in 1956 the AEC authorized construction of the world's first two large, privately owned atomic-power plants. Under the chairmanship (1961–71) of Glenn T. Seaborg, the AEC worked with private industry to develop nuclear fission reactors that were economically competitive with thermal generating plants, and the 1970s witnessed an ever-increasing commercial utilization of nuclear power in the United States.
Have you ever wondered how soccer (football) came into our lives? When and where did the game originate? Why are so many people around the world crazy about it? Almost every culture makes some reference to the history of soccer.
The origin of football/soccer can be found in every corner of geography and history. The Chinese, for example, played a ball-kicking game, cuju, more than two thousand years ago.
But it was in England that the game was first organized and codified into the sport we recognize today.
In October 1863, eleven London clubs and schools sent representatives to the Freemasons' Tavern, where they founded The Football Association and set about agreeing on a common set of rules.
Only eight years after its foundation, The Football Association already had 50 member clubs. The first football competition in the world was started in the same year - the FA Cup, which preceded the League Championship by 17 years.
International matches were being staged in Great Britain as early as 1872, when England and Scotland met in Glasgow in the first official international.
After the English Football Association, the next oldest are the Scottish FA (1873), the FA of Wales (1875) and the Irish FA (1880). Strictly speaking, at the time of the first international match, England had no other national association against which to play.
The spread of football outside of Great Britain, due mainly to British influence abroad, started slowly, but it soon gathered momentum and reached all parts of the world.