
MATERIALS OF THE FUTURE.
A chapter for the UNESCO Encyclopaedia of Life Support Systems, 2001
Philip Ball – Consultant Editor, Nature, London, UK

Contents

1. Introduction
2. Synthesis and Processing
3. Biomedical Materials
4. Smart Materials
5. Biomimetics and Self-assembly
6. Nanoscale Materials and Assembly
7. Future Information Technologies
8. Display Technology
9. Ultrastrong Fibers
10. Materials Made To Measure

Glossary

Catalyst: A substance that accelerates the rate of a chemical reaction.
Ceramic: An inorganic substance composed of two or more elements, which is typically hard and brittle. Most ceramics are compounds of metals with non-metals; many are metal oxides.
Composite material: A substance composed of more than one kind of material, such as graphite fibers embedded in an organic resin.
Covalent bond: The chemical bond that normally holds atoms together in molecules.
DNA: Deoxyribonucleic acid, the polymer that encodes an organism’s genetic information.
Inorganic: Composed primarily of elements other than carbon, e.g., metals. Rocks and ceramic materials are considered to be inorganic.
Materials science: The discipline concerned with developing new materials for technology, industry and medicine, and for understanding the factors that control the properties (mechanical, electronic, magnetic, chemical, etc.) of materials.
Micrometer: A thousandth of a millimeter (µm): roughly the size of a typical human cell or a bacterium.
Nanometer: A millionth of a millimeter (nm): roughly the size of a typical medium-sized molecule.
Nanotechnology: Engineering of components and devices of nanometer dimensions.
Optoelectronics: A combination of electronics with information processing based on light signals: for example, the use of optical fibers to carry signals in long-distance telecommunications.
Organic: Composed primarily of carbon-based molecules, often (but not necessarily) related to those that appear in living organisms.
Piezoelectricity: The generation of an electric field by the pressure applied to a material. Conversely, an applied electric field will deform a piezoelectric material.
Polymer: A substance whose molecules consist of many small molecules linked together by chemical bonds. Most plastics are linear polymers, whose component parts are linked into long molecular chains.
Protein: A biological polymer made by linking together amino acids. Proteins range in size from chains of a few dozen to several thousand amino acids.
Semiconductor: A material with a small electrical conductivity that increases with temperature. Typical semiconductors like silicon have a conductivity intermediate between that of metals and non-conducting (insulating) materials.
Superconductor: A material that conducts electricity with zero electrical resistance. In theory, the tiniest voltage will generate an infinite current in such a material, and a current will circulate around a superconducting loop forever.
Surfactant: Usually a soap-like organic molecule with both water-soluble and oil-soluble parts.
Transistor: The “workhorse” of electronic and computer circuitry. The transistor is basically a switch that can be opened or closed, to control the flow of an electrical current, by applying a voltage to one of its terminals.

Summary

The development of new techniques for seeing and manipulating matter from the atomic scale upwards has enabled an increasing element of rational design to be incorporated into materials innovation, enabling materials to be tailored to particular tasks. In particular, these technical developments are shrinking the size scales at which engineering can be conducted. Concomitantly, materials science has experienced a trend away from structural materials towards functional ones: from materials that perform some passive structural role (generally supporting a heavy load) to ones that perform some active function, such as generating an electrical current or closing a valve. This change makes materials increasingly important for a wide range of technologies, notably medicine and information technology. The materials of the future will therefore arise from collaborative efforts between scientists ranging from electronic engineers to chemists to cell biologists.

1. Introduction

Ages defined by their prevalent materials seem now to be passing with disconcerting speed. The Stone Age can be stretched, depending on one’s terms of reference, over 20 millennia; the Bronze Age lasted for over 20 centuries. But one could argue that the Plastic Age began and ended during the twentieth century, and the Silicon Age could be over within the lifetime of many of those who saw its dawning. If the current time represents the Age of Materials, as some have suggested, that is surely because the appearance and demise of new materials are happening at a phenomenal rate. All of this leaves us decidedly myopic when trying to gaze into the crystal ball of the future.

Discussions of future technologies-indeed, of the future in general-must either be prosaic extrapolations of current capabilities, or science fiction. This article will err on the side of the former, while acknowledging that for each ensuing decade the probability that it will overlook something of great importance is at least doubled.

This can be illustrated with reference to the case of carbon nanotubes (described in more detail in Section 9). Before the 1990s, no one had even postulated that these super-strong filaments of graphite-like carbon might exist. That in itself is worth remarking on, since it is now commonplace for theorists to speculate about materials that might exist and to calculate their putative properties in inordinate detail. But even though the soccer-ball-shaped carbon molecules called buckminsterfullerene (now one of a family of hollow “fullerenes”) were predicted in the 1970s and discovered in 1985, no one extrapolated to tubular versions. Yet in the 8 years that have now passed since the discovery of carbon nanotubes, they have furnished an entirely new arena of research, have eclipsed the fullerenes as the most promising and interesting form of “nanostructured” carbon (that is, a form of pure carbon sculpted at the nm scale), and are now discussed in the context of molecular wires for microelectronics, single-molecule transistors, ultra-strong carbon fibers, proboscides for the most powerful microscopes, high-capacity storage cylinders for hydrogen fuel, and more. Had it been written ten years ago, in 1990, this article would have missed them entirely.

Specific materials systems aside, one can be a little more confident in making some prognostications about possible future trends in materials science and engineering. First-and this is not a semantic detail-the discipline will need a new name. Already there is concern that this stolid, post-Second World War label does scant justice to a field that embraces (amongst others) cell biology, computer science, geophysics, organic chemistry and mechanical engineering. It is truly now a discipline that surveys the behavior of matter in all its guises, short of the extreme energy scales that remain the domain of particle physics. Two of the prime reasons for this expansion of breadth are associated with scale and function.

The materials devised and created today have important structural features on all length scales between the atomic and the macroscopic (the “everyday” scale discernible to the human eye). Sometimes just a single scale tends to dominate: for the nm-scale crystals that act as “quantum dots” for optoelectronic information-processing devices (which employ both light and electricity as their input and output), it is the size of the crystals that sets the wavelength of light absorbed or emitted. In other cases, the material might acquire important properties from structures over a range of scales. This is typical, for example, of natural materials such as bone, wood and shell-all of them sophisticated composite materials whose superior properties make them attractive models for synthetic products.

One of the key concepts under the umbrella of scale is hierarchical structure, which implies that the structure is defined over a multiplicity of length scales. The architectural prototype is the Eiffel Tower, which gains a high strength-to-density ratio by a repeated application of the triangulated crossbeam principle (which an engineer knows as a Warren’s truss) over four distinct length scales. (Here, as so often, nature points the way: Warren’s truss is seen in the metacarpal bones of a vulture’s wing.) But a hierarchical structure need not simply repeat the same motif at increasing magnification: in bone, for example, each level of structural organization bears no resemblance to the one before.

Traditional methods of materials synthesis will not generally allow for simultaneous control of structure over several length scales. At scales below ~1 mm, this sort of control requires various kinds of chemical expertise. Making nm-scale particles of some inorganic material might involve finesse in colloid science. Organic chemists might then be able to advise on how to attach molecular linkers to the surface of the particles so as to join them together. In any event, this sort of microscopic manipulation is a far cry from the hot-pressing techniques with which a ceramics technologist might be familiar.

The second consideration is function, and this forces us to re-evaluate the whole meaning of the term “material.” Colloquially, it conjures up some substance that performs a structural role: a steel girder, a cement bridge, a cotton sheet. But increasingly, materials are acting like machines: they do things. Materials that emit light, that swell and contract when prompted, that stimulate bone growth or release drugs-all are active substances, qualitatively unlike the passive materials that have traditionally formed the core of the discipline. It then becomes a matter of opinion where “material” ends and “device” begins, or what distinguishes a materials researcher from a computer engineer or a biomedical scientist.

Many of the advanced materials in development today betray a desire to be invisible-not to muscle in on daily life like a garish plastic chair or a chrome-plated staircase, but to carry out their role as unobtrusively as possible. Vibration sensors and switches made from “smart” materials that respond to stimuli in their environment will make aircraft cabins quieter without anyone knowing they are there. Fractured bones will be held in place not by metal plates that will forever trigger airport alarms, but by sutures that slowly dissolve as the bone regrows. Cumbersome and eye-fatiguing televisual screens might give way to electronic ink: microstructured particles that redistribute themselves on a flat white field, looking almost indistinguishable from ink on paper. The world of advanced materials has its Herculean aspects, such as ceramics that withstand awesome temperatures; but it is more often a Lilliputian and modest neighborhood where the job gets done more cleanly and quietly.

2. Synthesis and Processing

How will the materials of the future be discovered and made? We are faced with the curious situation that two current trends in materials synthesis are taking the process of innovation in opposite directions. On the one hand, there is a greater element of design than ever before-materials are planned as if at the drawing board. For example, one can now rationally design and then fabricate organic or inorganic materials perforated with microscopic pores of a well-defined shape and size. This design process may sometimes permit of a modular approach, in the same way that an electronic circuit consists of modules such as amplifiers or pulse generators. A polymer for use in optoelectronic technology might be given some side chains that absorb light at a certain frequency, others that release charged particles when stimulated by the absorbed energy, still others that transport or trap these charges, and so forth.

On the other hand, one of the major innovations of the 1990s in the arena of materials synthesis and discovery was the development of combinatorial methods. These entail the creation of a huge library of materials whose compositions are blends of several different components or substances, mixed at random or in gradually modulated steps. These libraries are analogous to (and sometimes visibly resemble) a color chart for commercial paints, in which each unique hue is composed of a mixture of several different pigments.

Figure 1. A Combinatorial Array (Library) of Ceramic Materials Prepared
by Mixing Four Elements in Different Ratios
These are all potential superconductors, and each block will be tested
individually to discover its properties

By blending chemical elements in different ratios, or assembling molecular units at random into new molecules, one samples different materials over a whole region of “composition space.” The challenge is then to find an effective, rapid and sensitive way of screening these candidate materials in the hope of discovering one that performs in the manner desired-a superior phosphor, say, or a catalyst, dielectric or superconductor. Combinatorial synthesis may be intellectually less satisfying than rational design-it amounts to little more than trial and error writ large by automated technology-but the accurate, quantitative prediction of materials properties from theory alone remains severely challenging, and so the combinatorial approach may be the most pragmatic for many situations.
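By way of illustration only (this sketch is not from the original text; the four-component grid and the screening step are hypothetical placeholders), the logic of enumerating and screening such a library can be expressed in a few lines of Python:

from itertools import product

def compositions(step=0.25, components=("A", "B", "C", "D")):
    # Enumerate every mixture whose mole fractions are multiples of `step`
    # and sum to 1: a coarse grid over the whole composition space.
    n = int(round(1.0 / step))
    for parts in product(range(n + 1), repeat=len(components) - 1):
        if sum(parts) <= n:
            fractions = [p * step for p in parts]
            fractions.append(round(1.0 - sum(fractions), 10))
            yield dict(zip(components, fractions))

def screen(sample):
    # Placeholder for the rapid, automated property measurement
    # (phosphorescence, catalytic activity, dielectric constant, ...).
    raise NotImplementedError

library = list(compositions())
print(len(library), "candidate compositions")
# best = max(library, key=screen)   # once screen() is implemented, pick the best performer

The real difficulty, as noted above, lies in the screening step: the measurement must be fast and sensitive enough to be repeated over thousands of such samples.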

It is also important to appreciate that serendipity is as important to the materials scientist as ever it was. One of the most important new materials of the 1990s is the mesoporous form of silica known as MCM-41, created by researchers at Mobil’s research laboratories in the early part of the decade. “Mesoporous” denotes the fact that the silica is laced with long cylindrical channels ~10-100 nm across; and the critical feature is that these are uniform in size, and arranged in an orderly fashion, packed hexagonally as if in a honeycomb.

Figure 2. MCM-41, A Porous Form of Silica Permeated
by Cylindrical Channels of Uniform Size
These can range from ~10-100 nm in width, depending
on the method of preparation

This makes MCM-41 a scaled-up version of the aluminosilicate zeolites that are widely used in the chemical industry as highly selective catalysts and “molecular sieves”-their smaller pores are the width of small molecules. MCM-41 is now simply a member of a much broader family of ordered mesoporous inorganic materials: others have slit-like pores or three-dimensional connectivity between pores, and their walls can be made of oxides other than silica. An understanding of the formation process, which involves templating by spontaneously assembling clusters of surfactant molecules, has now led to a strong element of rational design in the creation of these materials. But the initial discovery was unexpected, and stemmed from studies directed at the creation of new types of (small-pore) zeolites.

Rational design of materials has emphasized the importance of the interface with chemistry, in particular because developments in the field of supramolecular chemistry have a great deal to offer the materials scientist seeking control of structure on the scale of nanometers or less. Supramolecular chemistry is the “chemistry beyond the molecule.” Whereas the traditional synthetic chemist strives to assemble atoms into a specific molecular geometry, the supramolecular chemist typically uses whole molecules as the fundamental building blocks, and devises ways of assembling them into organized structures and arrays, generally using non-covalent interactions, such as hydrogen bonding or metal-ion coordination chemistry. This makes synthesis possible at a scale extending upwards to meet that at which the engineer can carve structures from monolithic materials. Thus, to make the kinds of microstructures required for, say, electronic circuitry or micromechanical engineering, one now has the choice of adopting either a top-down or a bottom-up approach. Examples later in this section should serve to illustrate the possibilities this presents. One consequence is that the very meaning of the word “material” becomes ill-defined. Supramolecular and colloid chemistry can afford assemblies of diverse components, some of which are single molecules. The term “integrated chemical systems” has been proposed for heterogeneous molecular assemblies of this type that are designed for a particular function. Perhaps the central point is that “materials” are no longer to “chemistry” as “bulk” is to “molecular”: much of the action is taking place somewhere in between, at the mesoscopic scale of nanometers to micrometers-the dimensions typical of many of the structures in living cells. Many of the advanced materials of the future will surely be engineered at this size scale.

How will they be put together? Several considerations are leading to a shift away from energy-intensive, “harsh” conditions of synthesis and towards what one might call “soft processing” technologies: solution-phase chemistry (often with water as the solvent), low temperatures, ambient pressure. In part, this trend is driven by environmental considerations: a reduction in energy consumption and in the use of hazardous solvents. In part, it is dictated by the kinds of synthetic procedures pertinent to supramolecular chemistry, which employs gentle inter-particle forces, rather than the strong (and therefore somewhat intransigent) bonds that maintain the integrity of individual molecules.

It is certainly true that toxic organic solvents, high temperatures and ultrahigh vacuums have been widely used in the synthesis of advanced materials-in semiconductor processing, for example. But the recognition that the solvation properties of benign fluids such as water and carbon dioxide are markedly different in the supercritical state has led to their introduction as solvents for several industrial chemical processes that might otherwise utilize organics. Water’s critical point, where the distinction between gas and liquid disappears, is at 374 °C and 218 atmospheres. Substances that are sparingly soluble in liquid water may become appreciably so in the supercritical fluid, and vice versa. Reaction rates also differ significantly: many organic compounds, for example, can be efficiently oxidized in supercritical water. Carbon dioxide, with its lower critical temperature of 31 °C, is a more amenable supercritical solvent, and has been used, e.g., in the fabrication of nanoscale particles of various materials, including drugs.

Liquid water, too, is finding increasing use as a solvent. Electrochemical processing from aqueous solution can deliver thin films of technologically useful oxide ceramics such as barium, strontium and lead titanate, and lithium niobate. Electrodeposition has been used to make complex many-layered “superlattices” of metals and ceramic materials, which are more conventionally fabricated using high-temperature vapor deposition methods. And hydrothermal methods, which employ moderate temperatures and aqueous solutions comparable to the conditions of some geological processes, are used in zeolite synthesis, amongst other things, and have been proposed for the fabrication of diamond films.

3. Biomedical Materials

One place where the new face of a functional, nano-engineered materials science should be felt most keenly is in the hospital. We have, since time immemorial, been able to do little better in effecting mechanical repairs to the body than if it were an inanimate machine. Thus, prostheses of wood or metal have evolved into robotic limbs of immeasurable benefit to the recipient, but they nonetheless represent a tacit submission to the traditional idea that the ways and materials of engineering have little or no overlap with those of biology.

This disjunction becomes all the more stark in the case of artificial organs, and it is perhaps remarkable that a heart imitated by a plastic air-blown pump, or a kidney by a plastic membrane filter, does so well! Such synthetic devices are by no means crude in engineering terms: the Jarvik-7 artificial heart, for example, has complex laminated polymer walls designed to minimize an inflammatory response and to incur low friction and wear as it inflates and deflates. This device has been used to sustain many patients awaiting urgent cardiac transplants. But nothing of the kind will suffice for long-term use.

In biomedicine, “minimal visibility” of materials was long equated with inertness: as long as a material in contact with the bloodstream did not provoke an allergic, toxic or inflammatory response, it was deemed adequate. But to an organism, utter passivity is not at all the same as invisibility. An inert material is typically treated by the body as a wound, triggering the creation of scar tissue at the interface. To truly look like a biological material, an implant must be active, capable of a kind of communication with the surrounding cells. That is why, for example, artificial blood vessels made from polymers such as polytetrafluoroethylene (in a porous form comparable to the Gore-Tex fabric) may be given a lining of the protein heparin, which prevents blood clotting. The endothelial cells of real blood vessels release heparin to combat thrombosis, and the same end is served by immobilizing heparin at the surface of synthetic vessels.

The same broad principle is being employed in materials that simulate bone. Here, one finds a tidy illustration of the many, sometimes conflicting, demands placed on a biomedical material. Clearly a bone replacement must be strong and lightweight. Metals such as stainless steel and titanium have good fracture-resistance, but are considerably denser than real bone. The material must also be corrosion-resistant, and if, as in hip joints, it is liable to be subjected to movement against another hard surface, wear resistance is crucial. Flexibility similar to that of real bone is also an important attribute. And if the material takes up too much of the load-bearing stress, it can induce dissolution and weakening in the surrounding bone.

Small wonder, then, that the best bone replacement materials are composites that derive the right combination of properties from several distinct substances: e.g., fiber-reinforced polymers. A porous microstructure is important both mechanically, and to allow ingrowth of new tissue, e.g., so that the implant can develop an intimate interface with freshly grown bone. Natural coral has been used as a master for making molds with the required porosity.

Yet even this falls short of providing a material that can bind smoothly and securely to real bone, as an implant is commonly required to do. Again, inertness is in this respect a hindrance, not a help. The formation of fresh bone demands a surface that is conducive to the bone-depositing osteoblast cells. Bioactive ceramics are materials that actively encourage this regrowth. These “bioglasses” are typically mixtures of silica with sodium, calcium and phosphorus oxides, which appear to be capable of mimicking the behavior of the calcium phosphate (hydroxyapatite) component of real bone. Osteoblasts will create crystallites of a carbonate-containing variant of hydroxyapatite, a precursor to the deposition of true bone, on the surface of the bioactive ceramic, and this helps to weld together the new bone and the implant. Composites of bioactive ceramics with metals or polymers can achieve an attractive combination of strength, flexibility and compatibility with natural tissue.

If even apparently “mechanical” biological structures like blood vessels and bone display this complexity of interaction with the surrounding tissue and fluid, what are the prospects for developing materials systems that have a more active biological role, like that of the liver, the cornea, the spinal cord or the brain? While some organs have a function that one might hope to mimic crudely in purely artificial devices, there are others whose job can, at present, be conducted only by the living cells themselves. We simply do not know how to make an artificial nerve cell or neuron that interfaces seamlessly with the real thing.

Therefore, the ultimate in biomedical materials engineering is to find ways of growing the biological tissue itself: to grow new organs in culture, seeded by the cells of the intended recipient. This is called tissue engineering.

The process, as yet still hypothetical, is something like the following. Rather than having to await a suitable donor and then run the risk of transplant rejection, a person suffering from kidney failure has a smattering of cells removed from her kidney. These are scattered within a porous polymer material shaped like a kidney, and the cells are stimulated into growth by the right amounts of nutrients. Slowly the cells multiply and colonize the scaffold, which is gradually dissolved away as the tissue forms around it. The cells are provided with the hormones that promote the formation of a network of blood vessels, ensuring that the nutrient-bearing “blood” gets distributed throughout. The final product is a fully grown kidney, complete with blood supply, ready to be implanted in the patient and fully compatible with her immune system.

Some researchers even speculate that, given an appropriately shaped scaffold seeded with cells of the requisite tissue types, an entire artificial arm could be grown in culture to replace one lost in an industrial accident. Once grown, it is simply stitched into place.

Figure 3. Graftskin, Artificial Skin Grown from Cultured Cells
on a Scaffold of Biodegradable Polymer

The key to these advances is a suitable polymer scaffold: the material should be broken down slowly by the cells into non-toxic by-products. A copolymer of lactic and glycolic acid is a favorite material (approved by the US Food and Drug Administration), which is degraded to carbon dioxide. Liver cells have been cultured in such supports. The formation of blood vessels (angiogenesis) can be encouraged by doping the scaffold with the protein called angiogenic growth factor, which does just what the name implies. Similarly, bone growth in a biodegradable scaffold may be stimulated using bone morphogenetic protein.

Many organs are not, however, just a mass of cells. They might contain several different tissue types interwoven in a complex geometry that is essential to proper cell-to-cell communication. For example, the hepatocyte cells of the liver are mixed in with tissue-forming fibroblasts. Hepatocytes cultured in isolation do not function so efficiently. The right blend might be achieved by growing the two cell types on surfaces treated with a thin layer of cell-adhesion molecules. Techniques of microlithography, discussed in Section 6, may be used to imprint a microscopically patterned film of adhesion molecules on the surface, and hepatocytes will then stick only to the patterned areas. Once growth is underway, the intervening spaces can be made adhesive too, and fibroblasts deposited in these interstices. Studies of this kind have shown that there is an optimal patterning scale at which the hepatocytes function best.

Biomedical materials are therefore moving in two related directions. The first is towards greater biological integrity: synthetic materials are being developed for their ability to interact with cells in beneficial ways, so that the interface of the natural and the artificial is as unobtrusive as possible. The ultimate extension of this approach is then to remove the interface altogether: to grow real biological materials, assisted by supports and scaffolding that direct and stimulate the growth before, in the ideal case, being destroyed by the very tissues whose formation they have promoted.

4. Smart Materials

Wholly synthetic materials that mimic biological functions, such as artificial muscles, might eventually be displaced in a biomedical context by tissue engineering, but they are nevertheless likely to play an important role in prosthetics for many years to come. And their field of application is by no means exclusively biomedical. Materials that impart mechanical force in response to some external stimulus have a wealth of applications, ranging from robotics to adaptive optics (astronomical telescope mirrors that change shape to cancel out the distorting effects of atmospheric turbulence), from vibration control and earthquake protection to loudspeaker technology and noise reduction.

Artificial muscles are examples of so-called smart materials, which furnish the perfect illustration of materials fulfilling active functions, rather than serving as passive fabrics. They are, in a sense, materials that act as machines: a robot arm flexes because of a material that changes its properties, rather than because a system of hydraulic pistons or electrically powered cogs and gears is set in motion. Smart materials that effect some change, whether it be mechanical motion, closing or opening a valve, or throwing an electrical switch, are called actuators. A second broad classification is that of sensor materials, which sense and signal some crucial change in their ambient environment. Sensors coupled to actuators make a formidable technological marriage.

For example, there are efforts to reduce cabin noise in aircraft by coupling acoustic sensors to actuators that vibrate in perfect antiphase with the noise, broadcasting “anti-noise” that cancels out the droning sound imparted by the vibrating aircraft superstructure. The coupling of sensors to actuators virtually defines robotics, where sophisticated information-processing and pattern-recognition software is needed to make sense of the input data from sensors before it can be translated into some appropriate action mediated by actuators. Here there is a move towards making more use of materials properties, rather than information processing, to achieve the required ends. It should in principle be simpler and less demanding on the control algorithms to take advantage of, say, the inherent elasticity and resilience of an artificial muscle in determining limb movement than to equip the robot with many unidirectional “stiff” actuators to achieve the same end.

Most of the artificial muscles (i.e., materials which convert a signal, typically electrical, to mechanical motion) in use today are “hard materials,” such as piezoelectric ceramics: ferroelectrics such as barium titanate and lead zirconate-titanate (PZT). As actuators they are used, e.g., in ink-jet printers. Because the conversion works also in reverse-mechanical motion (pressure) excites an electric current-they also serve as sensors for vibration, sound or sonar systems. High-force mechanical displacement, as might be required in the aerospace industry, can be effected by magnetostrictive metal alloys such as Terfenol-D, a blend of terbium, dysprosium and iron that contracts when a magnetic field is applied by an electromagnet.
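The two-way conversion that makes piezoelectric ceramics useful both as actuators and as sensors is conventionally summarized by the linear constitutive relations of piezoelectricity (a standard textbook form, added here for clarity rather than taken from the original):

\[
S = s^{E}\,T + d\,E, \qquad D = d\,T + \varepsilon^{T}\,E ,
\]

where S is the strain, T the stress, E the applied electric field and D the electric displacement; s^E is the elastic compliance at constant field and ε^T the permittivity at constant stress. The same piezoelectric coefficient d governs both directions, so a field produces a strain (actuation) and a stress produces a charge (sensing).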

Figure 4. Terfenol-D is a Magnetostrictive Alloy: it
Contracts when Placed in a Magnetic Field
This is a smart material used to make a kind of artificial muscle

Shape-memory alloys such as Nitinol (an alloy of nickel and titanium), on the other hand, change shape in response to a change in temperature. Nitinol wires have been used in robotic hands that achieve a lightness of touch not easily afforded by electromechanical or hydraulic actuators.

But there is a great deal of interest in making “soft” artificial muscles too. Part of the motivation here is biomedical: actuators made from compliant polymers may be more compatible with soft tissues, and there is also greater scope for engineering organic materials to ensure biocompatibility with the body’s chemistry. But polymer-based smart materials could also be lighter and cheaper to produce-benefits for any kind of engineering.

Many soft smart materials are polymer hydrogels: gels of crosslinked polymers that will swell and shrink reversibly in water. These “volume transitions” can be very abrupt, like freezing or melting transitions, and can be induced in some gels by changes in environmental conditions: temperature, pH, electric fields, light, or the presence of some chemical substance. For example, swelling might be induced entropically through conformational changes of the “free” parts of the polymer chains in the gel, and so is triggered at some temperature threshold. Or the volume change could be of electrostatic origin due to ionization of acidic side-groups on the chains, and so induced by a change in hydrogen-ion osmotic pressure, caused by application of an electric field.

Conformational changes in a hydrogel have been used to create a shape-memory polymer, which can be deformed at room temperature, but will regain its original shape on heating to 50 °C. But the mechanical force generated by volume changes in hydrogels is rather modest because of their very softness, and much of the interest in their “smart” behavior stems instead from the changes in permeability of the gel network that these transitions bring about. Drug molecules entrapped inside the gel when it is cross-linked will typically remain there in the collapsed state of the gel but are able to escape when the gel expands. This offers a mechanism for the controlled release of drugs inside the body, stimulated perhaps by non-invasive methods, such as gentle warming or light-induced processes. The drugs could be carried into the body in microscopic particles of the gel, which can be administered orally. Transitions induced by pH changes might be particularly useful in this context, as they could be triggered by the passage of the gel medium into or out of the acidic environment of the stomach. And swelling transitions triggered by a certain biochemical substance could make the release pattern sensitive to pertinent chemical changes in the body. For example, a prototype insulin release medium for diabetes sufferers has been prepared from a hydrogel loaded with both insulin and the enzyme glucose oxidase. The reaction of the enzyme with (high concentrations of) glucose brings about a change in the pH of the solution-and the gel swells, releasing its load of insulin.

Some electrically conducting polymers have also shown potential for use as actuators. The electrochemical insertion or removal of dopant molecules can bring about conformational changes in conducting polypyrrole that are translated into changes of shape and thus mechanical force. And the piezoelectric properties of polyvinylidene fluoride recommend it as a cheap pressure-sensitive medium for computer keyboards.

The key question for the future is: how smart can materials get? In some contexts, this is a question about the magnitude of their responsiveness. There is still a need for mechanical actuators that can combine large force generation with large displacements-a difficult combination, at present often obtained by switching many actuators in parallel. There is still no “artificial muscle” that compares in these respects with real muscle, which achieves large (and rapid) displacements by the interdigitation of the fibrous protein filaments of the sarcomere assembly. For sensor technologies, the question may be one of sensitivity: how to maximize the ratio of output to input signal. Here again biology excels, for example, with the single-photon sensitivity of photoreceptors in the rod cells of the retina. One area of critical technological importance here (and it is one not commonly included explicitly under the banner of smart materials) is the development of highly sensitive read-out heads for magnetic data storage. Higher sensitivity would permit greater read-out speeds and greater storage densities (that is, a smaller area per bit). Such improvements are already being attained by replacing the traditional read-out heads, working by electromagnetic induction, with smart materials that alter their resistance in an applied magnetic field. A strong “magnetoresistive” response of this sort is obtained in so-called magnetic multilayers, stacked thin films of magnetically coupled iron or cobalt alternating with thin films of a non-magnetic metal such as chromium or copper. But significant further improvements are promised by a recently discovered class of oxide ceramics, such as lanthanum strontium manganite, which exhibit a much larger magnetoresistive response, graphically dubbed colossal magnetoresistance.
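For orientation (this is a standard figure of merit, not one quoted in the original), the size of a magnetoresistive response is usually expressed as the fractional change of resistance in an applied magnetic field H:

\[
\mathrm{MR} = \frac{R(H) - R(0)}{R(0)} .
\]

For the “giant” magnetoresistance of magnetic multilayers this amounts to changes of the order of tens of percent, whereas the manganite ceramics can show resistance changes of several orders of magnitude in strong fields, hence “colossal.”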

5. Biomimetics and Self-assembly

As some of these examples illustrate, nature’s materials continue to be the envy of the materials scientist. But we are increasingly able to do far more than just stand back and admire. As more understanding emerges about how these materials are structured and put together, so the materials scientist acquires inspiration and guidance for making comparable fabrics by taking tips from nature. The buzzword here is “biomimetics” (sometimes called “bionics” in continental Europe): the mimicry of biological systems in engineering contexts. For the materials scientist, this is largely about making composite materials with superior properties-high toughness, for example-using cheap, readily available substances assembled into intricate microstructures through low-energy pathways.

Studies of biological materials have heightened an appreciation that, while optimizing a particular material parameter tends to involve the fine-tuning of a specific feature of the structure, the combination of several desirable properties is typically a matter of controlling structure and organization across several different length scales. In other words, natural materials display hierarchical structures. The strength of bone is more than a matter of marrying organic and inorganic materials (the protein collagen and the mineral hydroxyapatite); there are distinct types of organization at scales ranging from the primary structure of the collagen helices, through the placement of the crystals along the fibrils at the 100-nm scale, to the arrangement of osteons at sub-millimeter scales and the macroporosity of the bulk substance.

Figure 5. The Hierarchical Structure of Bone Extends over at least
Four Orders of Magnitude in Size Scales

A comparable hierarchy of structure is evident in most of nature’s structural materials, notably wood, tendon, cartilage and silk. In many of these instances the mechanical function of the components of the hierarchy can be understood according to familiar engineering principles.

Related to the idea of hierarchy is the use of modular structure, which in this case generally means building up materials through the assembly of identical smaller units. Wood, for example, with its compartmentalized cellular structure, illustrates the mechanical advantages of a material composed of adjoining, closed cells. It has a high strength-to-weight ratio, and the modular architecture can help to localize damage: rupture of a few cells need not lead to a propagating crack, as it does in brittle materials. The stepwise unfolding of the titin protein, which holds the fibrous proteins together in muscle, is due to a kind of modular domain structure and gives it a stress-strain curve very different from that of a purely elastic filament. A similar sawtooth-like stress-strain curve has been measured for the polymeric filaments that are pulled out from the organic binder between the aragonite plates of nacre (Mother of Pearl) as the plates are separated. Because the area under the curve, equal to the work of fracture, is greater in this case than for either a stiff, strong filament or an elastic one, this modular extension behavior makes for a tough adhesive, and may prove to be a more general mechanism in biology.
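The “area under the curve” invoked here is the work of fracture per unit volume, the mechanical energy a filament absorbs before it breaks (a standard definition, added for clarity):

\[
W = \int_{0}^{\epsilon_{f}} \sigma \, d\epsilon ,
\]

where σ is the stress, ε the strain and ε_f the strain at failure. A sawtooth curve, in which the stress repeatedly climbs and falls as successive modules unfold, encloses more area than either a stiff, strong filament (which breaks at small strain) or a compliant elastic one (which never sustains much stress), and so dissipates more energy.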

One of the predominant leitmotifs of nature’s structural materials is orientational control of fiber growth. In bone, the oriented packing of the collagen fibrils increases the elastic modulus, the work of fracture and the breaking strain of the hydroxyapatite. The high fracture toughness of wood-greater than that predicted for a simple fibrous composite of the fiber and matrix components-seems to be due largely to the helical winding of cellulose fibers in the multilayered cell walls. The orientation of the protein fibers in silk helps to give it a fracture strength greater than that of steel.

Many hard biomaterials are manufactured by controlled nucleation and growth of crystals. The archetype is nacre, a layered sandwich of mineral platelets and protein sheets. Bone is a much more intimate marriage of mineral and polymer, while tooth enamel has a woven texture that reveals control of crystal growth in all three dimensions. The layered structure of nacre lends itself most obviously to mimetic synthesis, since materials engineers already have considerable experience in preparing layered composites. The basic strengthening mechanism derives from the presence of weak interfaces between the mineral platelets. The energy of a crack is dissipated in pulling the platelets apart, and its forward progress is deflected laterally as this happens. The same principle has been exploited in a synthetic layered composite in which slabs of silicon carbide are separated by a thin film of graphite at the interfaces. The resulting material is tougher than a monolithic piece of the ceramic.

Figure 6. A Ceramic Composite made from Layers of Silicon Carbide
Coated with Graphite is much Tougher than a Hard,
but Brittle Block of the Pure Ceramic
This microstructure mimics that of Mother-of-Pearl (nacre)

Natural materials are, almost by definition, self-assembling-which generally also implies active control systems to guarantee self-repair, self-reinforcement and disassembly when needed. Biomimetic synthetic materials cannot yet achieve all of this, but the basic idea of self-assembly is one that has been enthusiastically imported from supramolecular chemistry. The essential feature of self-assembling chemical systems is that they are “pre-programmed” with the information needed for creation of the superstructure. That is to say, the component molecules are capable of interactions, often highly directional in space, that guide the assembly process into the required architecture.

Nowhere is this more elegantly illustrated than in the use of DNA as a fabric for materials synthesis. DNA is the “programmed” molecule par excellence-in chromosomes, it holds the basic information needed to create a replicating organism. The important point for the materials chemist is that this information is embodied in highly specific intermolecular interactions, which enable both complementary DNA and messenger RNA to be assembled piece by piece on single-stranded DNA. In short, the two strands recognize one another. This enables the use of DNA as a programmed molecular girder, in which the ends will link up in very well specified ways, determined by the terminal sequence of base pairs. Short strands of double-helical DNA can be given “sticky ends”-short single-stranded sections-that will recognize and bind to other ends with the complementary sequence. Moreover, the cell provides ready-made enzymes for making these sticky unions permanent through covalent linkage (ligation enzymes), or cleaving the resulting framework at locations precisely defined by sequence (restriction enzymes).

Using these principles and this molecular machinery, synthetic strands of DNA have been fashioned into topologically complex architectures such as a cube and other polyhedra, and into extended, ordered sheet-like arrays of DNA loops, rather like a kind of molecular chain mail.

Figure 7. A Molecule in the Shape of a Truncated Icosahedron,
made by the “Programmed” Self-assembly of Strands of DNA

One aim is to extend this approach to the formation of three-dimensional networks, like a kind of DNA zeolite. Suitably functionalized, these might serve as selective catalysts; or they might act as templates for mineralization or metallization, giving more robust porous frameworks. Perhaps the electrically conductive properties of DNA might even be brought to bear to good effect in these networks.

One of the most useful tricks deployed for natural self-assembly is the use of templates. Organic tissues imprint shape and pattern on biominerals such as bone and the exoskeletons of marine organisms such as radiolarians and diatoms. Some of the latter have the most exquisite designs at the microscopic scale, seemingly cast in the mineral phase around a mould of organic vesicles. This same idea of using (self-assembling) organic structures as a mould for making patterned inorganic materials is exemplified in the synthesis of the mesoporous silica MCM-41 and its relatives, where the organic structures are micelle-like aggregates of surfactants. The same principle has been scaled up still further by using bubble-like surfactant vesicles to imprint complex patterns on the surface of an aluminophosphate.

Figure 8. An Inorganic Material (an Aluminophosphate) Patterned by
Bubble-like Structures Self-assembled from Surfactant Molecules

And “colloidal crystals”-orderly stacks of microspheres made from silica or polymers-have been used as casts for making porous solids with voids an order of magnitude bigger than those of MCM-41.

Figure 9. An Ordered Porous Material made by Casting Silicon around
a “Colloidal Crystal,” in which µm-sized Spheres are Packed in Regular Arrays
The spheres are subsequently broken down to create voids

Related to templating is the idea of compartmentalization: conducting materials synthesis in compartments that delimit the extent of growth, as well as providing a microenvironment in which parameters such as the supersaturation of a precipitating phase can be delicately controlled by active transport of ions. This is how much biomineralization takes place, perhaps most dramatically in the formation of exquisitely patterned coccolith plates in the soft tissues of the sea creatures called coccolithophores. The natural iron-storage protein ferritin has been used as an enclosed compartment for the formation of monodisperse iron oxide nanoparticles, while similar small particles of inorganic and polymeric materials have been cast inside the empty protein coats (capsids) of viruses.

6. Nanoscale Materials and Assembly

There is a great deal of interest in the production of nanometer-scale particles. Matter divided up on such small length scales can behave quite differently from the bulk material. For example, polycrystalline metals become harder as the size of the individual crystal grains is reduced, a phenomenon known as the Hall-Petch effect. This is considered to arise from the obstruction of the movement of dislocations-the main deformation mechanism-by the proliferation of grain boundaries. When the grains are just ~100 nm across, however, a new strengthening mechanism may come into operation: the grains are too small even to allow dislocations to be nucleated. Nanoscale grain size in ceramics, meanwhile, can induce the opposite effect of enhanced plasticity, leading even to “superplastic” deformation, where the ceramic deforms like a plastic. This property, potentially useful for the forming of ceramic components, is thought to arise from sliding at grain boundaries, lubricated by the formation of fluids at the interface.
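The Hall-Petch effect mentioned above is commonly summarized by an empirical relation (quoted here for orientation; it does not appear in the original text):

\[
\sigma_{y} = \sigma_{0} + \frac{k_{y}}{\sqrt{d}} ,
\]

where σ_y is the yield stress, d the grain diameter, and σ_0 and k_y constants characteristic of the material. Taken at face value the relation promises ever-harder metal as d shrinks, but it ceases to apply once the grains become so small, around 100 nm and below, that dislocations can no longer be nucleated within them.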

Ultrafine particles of metals can serve as highly active catalysts, and there is some indication that when the number of atoms in each particle is less than 100, the selectivity of the catalyst can become highly dependent on the particle size. So such catalysts might be tailored to order by control of the particle size.

Nanoscale particles of titania have been used in a new solar cell that is efficient and cheaper than conventional cells made from silicon. One can regard these devices as a genuine “integrated chemical system,” assembled from molecular, supramolecular and nanoscale components. The titania nanoparticles, deposited as a thin film on the back electrode, are coated with dye molecules which release an electron when they absorb a photon. The semiconducting titania conveys the electron to the electrode. In this way, the tasks of generation and conduction of charged particles are performed by separate entities (unlike the case of silicon solar cells), reducing the chance that oppositely charged particles will recombine and re-emit the absorbed energy. Meanwhile, the high surface area of the nanoparticle film enhances the light-harvesting efficiency. The circuit is completed by an electrolyte carrying an electron donor to replenish the oxidized dye molecules.

Much of the excitement about nanoscale particles stems from the way that they interact with light. Semiconductors will absorb light at wavelengths corresponding to their bandgap-the difference in energy between the conduction and valence electronic bands. They may emit light of the same wavelength when negative and positive charge carriers (electrons and holes respectively) are injected into the material; these recombine across the bandgap with the release of a photon. So light-emitting semiconductors can mediate between light and electricity, and thus serve as the fabric of optoelectronic technologies, which transmit and process information using a combination of electricity and photons. The attraction of nanoscale engineering here is that the “color” of the semiconductors-the wavelength at which they absorb and emit-can be tuned by altering the particle size.

This is a quantum-mechanical effect: the classic “particle-in-a-box” system, in which the energy states (of, e.g., the conduction and valence electrons) are determined by the dimensions of the box. For this reason, the particles are sometimes called quantum dots or Q-particles. One illustration of the potential of nanoscale particles in this area is the creation of a light-emitting diode in which the emitting material is a thin film of cadmium selenide particles just a few nanometers across.
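A crude “particle-in-a-sphere” estimate shows how strong this size dependence is. The sketch below is my own illustration (the cadmium selenide parameter values are approximate, and the model neglects the electron-hole attraction, so it exaggerates the shift); its point is only the 1/R² scaling of the confinement energy:

import math

HBAR = 1.054571817e-34      # reduced Planck constant, J s
M0   = 9.10938e-31          # free-electron mass, kg
EV   = 1.602177e-19         # joules per electron-volt

E_GAP_BULK = 1.74           # approximate bulk band gap of CdSe, eV
M_E, M_H = 0.13 * M0, 0.45 * M0   # approximate effective masses of electron and hole

def optical_gap_eV(radius_nm):
    # Bulk gap plus the lowest confinement energies of the electron and the hole
    # in an infinitely deep spherical well of radius R: (hbar*pi)^2 / (2*m*R^2).
    r = radius_nm * 1e-9
    confinement = (HBAR * math.pi) ** 2 / (2 * r ** 2) * (1 / M_E + 1 / M_H)
    return E_GAP_BULK + confinement / EV

for radius in (1.5, 2.0, 3.0, 5.0):
    gap = optical_gap_eV(radius)
    print(f"R = {radius} nm: gap ~ {gap:.2f} eV, emission ~ {1239.8 / gap:.0f} nm")

Halving the radius roughly quadruples the confinement energy, which is why the emission color of a quantum dot can be tuned across the visible spectrum simply by controlling the particle size.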

There are now several methods available for making nanoscale metal and semiconductor particles with well-defined sizes. One of the most common utilizes the “compartmentalization” principle mentioned in the discussion of biomimetics. The particles are precipitated from a saturated aqueous solution of salts inside organic aggregates called reverse micelles. These are roughly spherical structures formed from surfactants in a non-polar solvent, in which they will cluster with their water-soluble heads pointing inwards and the hydrophobic tails directed outwards like the spines of a porcupine. The interior of the reverse micelles can accommodate a small reservoir of water, and it is in this that the crystalline material precipitates. The size of the resulting particles is then determined by the size of the micelles.

Some prospective applications of these optically active nanoparticles require that they be organized into ordered arrays. Two-dimensional arrays of quantum dots might, for example, act as memory elements, optically addressable by laser beam. Under the right conditions, the nanoparticles can condense spontaneously onto a surface with an ordered hexagonal packing, if they are all the same size. Even more strikingly, a mixture of two particle sizes can form single-layer films, in which the two types of particle alternate regularly along the rows-a demonstration that interparticle forces and packing constraints can do a lot of hard work for us.

Figure 10. Quantum Dots-Nanometer-scale Crystals of (in this case)
Metals-of two Different Sizes will Self-organize into Lattices in
which the Two Types of Particle Alternate

This kind of self-organization of quantum dots is also manifested in a quite different synthetic approach, in which the dots are formed by deposition of the constituent elements from the vapor phase. The synthesis of atomically thin films by vapor deposition is a standard technique in semiconductor technology, and generally it results in the formation, atomic layer by layer, of smooth films. But if the deposited atoms can move about on the surface of the substrate, they can, in some cases, congregate into small clusters or islands. The effects of the strain induced by the mismatch between the lattice spacing of the crystalline substrate and that of the islands can then give rise to an effective repulsion between islands, resulting in their forming with more or less constant size and separation: as an ordered array of dots or stripes (quantum wires).

Figure 11. An Array of Self-organized Quantum Dots Formed by
Chemical Vapor Deposition onto a Surface

But what if more complicated arrangements of the dots are needed, e.g., if they must be grouped in threes, or if dots of different composition need to be adjacent to one another? Techniques for linking nanoscale particles of metals or semiconductors into clusters of well-defined shape or size make use of small molecules that serve as bridging groups. Specificity in the linking process may be achieved through molecular recognition: in effect, onto one particle is grafted a lock, and onto another, a key. Silver nanoparticles have been assembled into linked arrays by grafting onto their surface the protein molecule streptavidin and the small molecule called biotin, which binds very tightly to streptavidin. The two appendages then snap together. More flexibility can be achieved by using as the linking unit “sticky-ended” strands of DNA. In this way, a particular type of nanoparticle can be programmed to adhere to one particular kind of partner and no other. Here, we see the use of the informational aspect of biological interactions as a constructional tool for nanoscale materials synthesis.
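The lock-and-key character of this programming can be made concrete with a toy example (my own sketch; the sequences are invented, and real designs must also guard against partial and self-complementary pairings): a particle carrying one sticky end will link only to a particle whose end is its reverse complement.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    # Watson-Crick partner of a single-stranded sequence, read in the opposite direction.
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def will_pair(end_a, end_b):
    # Two dangling single strands hybridize when one is the reverse complement of the other.
    return end_b == reverse_complement(end_a)

particle_1 = "ATGGCT"
particle_2 = "AGCCAT"   # reverse complement of particle_1: these two link up
particle_3 = "ATGGCT"   # same sequence as particle_1: these two do not

print(will_pair(particle_1, particle_2))   # True
print(will_pair(particle_1, particle_3))   # False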

It seems likely that self-assembly of one sort or another will be the only practical option for bringing together nanoscale components into materials and devices. The alternative-to push the parts together using some means of microscopic manipulation-has been demonstrated, but is typically slow and cumbersome. One tool for this kind of manipulation is the atomic force microscope, an instrument originally devised for measuring surface topography by the force it exerts on a needle-like tip attached to a delicate cantilever arm, like a phonograph stylus. The tip may serve as a tiny mechanical finger, as well as a probe. Another option is to use “optical tweezers” to conduct microscale assembly processes. Here an object is held at the confluence of laser beams by their intense electromagnetic field.

Self-assembly is a versatile method of creating ultrathin organic films. The process devised in the 1910s by Irving Langmuir and Katharine Blodgett is of this ilk. Langmuir-Blodgett films are made by allowing a layer of surfactant molecules to spread at the surface of water in a trough; they will do so, oriented with their hydrophobic tails in the air. A moveable barrier ushers the surfactants into a densely packed layer, where they will eventually adopt a regular packing, essentially a two-dimensional crystal. This film can then be transferred onto a plate dipped carefully through the trough, creating a surface monolayer with a very homogeneous surface energy. If the plate is of glass, the molecules stick to it head first, and the protruding tails then give the glass a smooth, water-repelling coat. Varying the nature of the tail groups allows one to tailor the surface properties (such as hydrophobicity). And repeated dips build up a stacked sequence of oriented layers, each of which can be composed of different surfactants. So this is a cheap and easy “wet-chemical” approach to the kind of thin-film engineering that, in microelectronics, usually requires high-temperature, high-vacuum vapor deposition techniques.

Similar principles are used to make more robust organic thin films called self-assembled monolayers (SAMs). The component molecules in these films are again amphiphilic, composed of a polar head group and a fatty tail. But they become attached to the surface via covalent bonds, since the head groups are selected for their propensity to react with the material of the substrate. There are two main combinations used for the synthesis of SAMs: alkylthiols on gold, and alkylsilanes on silica glass. The thiol groups consist of a sulfur and a hydrogen atom; sulfur’s affinity for gold leads it to cast off the hydrogen and bond to the surface gold atoms. SAMs can be deposited from solution under mild conditions, and their molecules organize themselves into oriented layers with a roughly equal spacing: again, the films are roughly crystalline in two dimensions, ensuring uniform surface properties.

Figure 12. A Self-assembled Monolayer of Alkylthiols on a Gold Surface
The molecular chains become aligned at a well-defined angle of tilt

Patterned SAMs can easily be made by using a stamp to imprint an “ink” of alkylthiols onto a gold surface. The monolayer then protects the underlying metal against etching, and so this simple printing process can be used to define wiring patterns in microelectronic circuitry. Conventionally, this has been done using photolithography, in which a protective polymer “resist” is patterned on the surface of the substrate using a photochemical process spatially confined with a mask. The SAM process can be scaled down to write pattern features just 0.2 micrometers wide, which is at the limit of what can be achieved with current lithographic methods.

Figure 13. Microcontact Printing is a Technique that “Stamps” a
Patterned Self-assembled Monolayer onto a Surface
This film can then act as a resist against an etching agent, enabling
the creation of relief patterns on the surface

One very attractive application of SAMs shows how ideas of “recognition” can be scaled up to enable self-assembly at scales much larger than the molecular. In water, two hydrophobic surfaces will tend to stick together: this is one of the factors that allow protein molecules to retain the exquisite folded conformations of their chains. So by applying different SAMs to different surfaces of small objects, their surface energies can be adjusted to promote their assembly in water into specific arrangements. This approach has been used to enable objects of micrometer or millimeter size to come together in ordered arrays, and it could in principle be used to program the components to assemble into any arbitrary configuration, such as might be required in a microscopic mechanical device or in an array of electronic components in a circuit.

Figure 14. An Orderly Array of Micrometer-sized Components Self-assembled
by Coating the Relevant Parts of their Surfaces with Mutually Attractive
Self-assembled Monolayer Films

The bond between like surfaces can be made more permanent by including in the film molecular components that will link together if “cured,” like an epoxy adhesive, by exposure to light or heat. It may not be too far-fetched to imagine some miniature machines of the future being assembled not in a dry assembly-line process, but by priming the surfaces of the component parts, throwing them into water, and leaving them to sort out the right conjunctions.

All these approaches to the assembly of nanostructured materials draw heavily on the concepts prevalent in supramolecular chemistry, such as the molecular recognition of molecules with complementary binding sites, or the collective organization of molecular arrays such as liquid crystals. But there is another way that takes its lead directly from nature. Cells are alive with the trafficking of molecules from one point to another, and they must on occasion organize the construction of rather complicated structures, such as the mitotic spindle. When a cell divides, the chromosomes are copied and arranged on protein-based rods (microtubules) organized into a kind of spindle with two pinched ends (asters). Each chromosome is then split down the middle, and the fragments are pulled to the two poles of the spindle by protein molecules called kinesin, a molecular motor, which ratchets along the microtubules. It is a prodigious feat of organization; but nonetheless the assembly of microtubules into the aster can be mimicked using a remarkably simplified system of microtubules and kinesin molecules. Artificial molecular motors made by binding four kinesin molecules together will spontaneously arrange microtubules into the aster bundles. This shows that natural molecular motors can effect organization of nanoscale components in a cell-free environment.

In a more technologically explicit demonstration of this, microtubules have been shunted by kinesin in a well-defined direction across surfaces. When the kinesin is bound to shear-oriented films of polytetrafluoroethylene at low concentrations, it sticks preferentially along the striated grooves and edges of the polymer film. This sets up immobilized motors in linear tracks, which will propel adsorbed microtubules like a row of workers passing a pole down the line. There are exciting possibilities for using this kind of directed mechanical motion for assembly processes at the molecular scale. The challenge of making wholly artificial molecular motors is a formidable one (although some chemists are taking steps in that direction). So it may be prudent in the immediate future to adapt nature’s existing machinery for this kind of nanotechnological feat.

A step in this direction was the development of peptides (small protein-like molecules) that can bind selectively to different kinds of semiconductor. The use of semiconductor nanoparticles (Section 6) in new kinds of electronic device is likely to require their positioning in specific arrangements, just as the fabrication of conventional silicon-based circuitry demands the microscale patterning of different semiconducting and insulating materials. By shuffling the amino-acid sequence of peptides at random and then searching the library of products for those able to bind to a particular semiconductor surface, it has been possible to generate peptide molecules that will, for example, recognize and stick to gallium arsenide but not to silicon or cadmium sulfide. It might be possible to attach these peptide appendages to motor proteins by genetic engineering, enabling them to pick up particular kinds of semiconductor nanoparticle and transport them to designated locations.
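
The selection logic can be caricatured in a few lines of code. The sketch below is purely illustrative: the “binding score” is an invented stand-in for a laboratory assay (in practice the selection is done experimentally, for instance by phage display), and the favored residues are an assumption, not chemistry taken from this article.

```python
import random

# Toy model of library screening: generate random peptides, score each with a
# made-up affinity function, and keep the tightest binders.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # one-letter codes for the 20 amino acids
TARGET_MOTIF = set("HKRSTY")           # hypothetical residues assumed to favor binding

def random_peptide(length=12):
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def binding_score(peptide):
    # Pretend affinity: the fraction of residues drawn from the favored set.
    return sum(res in TARGET_MOTIF for res in peptide) / len(peptide)

def screen(library_size=100_000, keep=5):
    library = (random_peptide() for _ in range(library_size))
    return sorted(library, key=binding_score, reverse=True)[:keep]

if __name__ == "__main__":
    for peptide in screen():
        print(peptide, f"score={binding_score(peptide):.2f}")
```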

7. Future Information Technologies

Much of the impetus for nanotechnology comes from the fact that the scale of engineering is shrinking on many fronts. Entire laboratories may soon be constructed on a single silicon chip, in which microscopic amounts of chemical reagents are driven down channels, mixed together and analyzed. Lithographic methods can already carve out gears and motors too small for the eye to see, which might power flying craft no bigger than insects, to be used in vast swarms for space exploration or, one has to recognize, abused for military purposes. But this relentless miniaturization has always been most keenly felt in information technology, which is sure to be one of the most socially transforming technologies of the coming century.

In 1965, Gordon Moore, one of the co-founders of Intel, pointed out that the number of components that could be packed onto an integrated circuit was doubling at a steady rate, roughly every 18 months. This rate of change, colloquially dubbed Moore’s Law, has since been followed with remarkable fidelity. The increase in computing speed goes hand in hand with a decrease in the scale of the electronic components, so that more processing power can be packed onto a single silicon chip. By 1998, more components could be packed onto a silicon wafer 8 inches in diameter than there are people in the world.

This has so far been a revolution written in silicon. But there is no guarantee that silicon will continue as the bedrock of information technology in the twenty-first century. If Moore’s Law persists, the smallest dimensions in microelectronic devices such as transistors will reach the size of small molecules by around 2012. At that stage, their traditional function can no longer be sustained. For example, the layer of silicon dioxide that insulates the “gate electrode” of a conventional transistor (a metal-oxide-semiconductor field-effect transistor, or MOSFET) will be just five or so atoms thick. At this point it is no longer a perfect insulator: the device becomes leaky.
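
The projection rests on simple arithmetic. The sketch below assumes, for illustration only, that component counts double every 18 months at constant chip area (so the minimum linear feature shrinks by a factor of the square root of two per doubling), a starting feature size of about 180 nm around the year 2000, and roughly 0.25 nm per atomic layer of silicon dioxide; none of these figures is taken from the text.

```python
# Back-of-the-envelope extrapolation of feature sizes under continued
# Moore's-Law scaling, plus a conversion of a ~1.2 nm gate oxide into
# atomic layers. All starting values are assumptions for illustration.

DOUBLING_PERIOD_YEARS = 1.5
START_YEAR, START_FEATURE_NM = 2000, 180.0
OXIDE_LAYER_NM = 0.25

year, feature = START_YEAR, START_FEATURE_NM
while year <= 2012:
    print(f"{year:.1f}: minimum feature ~{feature:.0f} nm")
    year += DOUBLING_PERIOD_YEARS
    feature /= 2 ** 0.5          # halve the area per component, shrink lengths by sqrt(2)

# The gate oxide is far thinner than the minimum feature; a ~1.2 nm oxide
# corresponds to only a handful of atomic layers:
print(f"1.2 nm oxide ~= {1.2 / OXIDE_LAYER_NM:.0f} atomic layers")
```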

Alternative materials, such as diamond doped to a semiconducting state, might possibly stave off the crisis a little longer (although it looms too imminently for the industry to be likely to effect so rapid a change to a new materials basis). But sooner rather than later, a completely new approach to computer hardware will be required if the information industry is to sustain Moore’s Law. Some suggest that this might come about through new types of device: replacing the workhorse MOSFET with some other kind of electronic switch, perhaps one exploiting the quantum phenomena that appear at very small dimensions. Others feel that the answer might be algorithmic: using quantum algorithms instead of classical ones to vastly expand the parallelism of computers. But some very dramatic leaps in technical capability will be needed to transform the present modest laboratory demonstrations of few-bit quantum computation into anything useful.

Another alternative is to abandon reliance on electronics altogether, and to use a different medium to carry, store and process information: light. To a limited extent, light-based information technology is here already: long- and medium-distance telecommunications are now generally conducted by photonic means, with pulsed light signals passed along optical fibers. The capacity of a fiber-optic cable is far greater than that of a copper wire of the same width; and the full potential of optical transmission, barely realized as yet, is awesome. Moreover, information is now commonly stored and read out by optical means from compact disk (CD) systems or, less commonly, from magneto-optic memories. Conceptually, it would seem to make sense to do everything with light (processing as well as transmission and storage), instead of laboriously converting the signal from electronic to photonic form and back again.

But the all-optical computer is still an uncertain prospect: the technologies (including the materials) to realize it are still lacking, and it is fair to say that there is no consensus as to whether the payoffs would justify the effort. It seems clear that for the immediate future, a hybrid of electronic and photonic technologies, optoelectronics, will be pursued with more vigor.

The two aspects tend, for the time being, to be kept separate: the devices for translating electrical pulses to light pulses, and the converse, are housed separately from the electronic components. The light sources are generally miniaturized laser diodes, relatively bulky devices as big as a chip themselves: several hundreds of micrometers in length. They are layered structures in which the light-emitting (lasing) medium is sandwiched between semiconducting materials that inject charge carriers. When these recombine, a photon is emitted; and the bouncing of photons back and forth between the reflective ends of the cavity induces the stimulated emission characteristic of laser action. Lasers used for optical telecommunications and CD players have generally been made from so-called III-V semiconductors such as gallium arsenide and gallium aluminum arsenide, mixtures of elements from groups III and V of the Periodic Table. They emit light in the infrared and red parts of the spectrum: near-infrared radiation is used for signal transmission in current fiber-optic networks.

But there are several good reasons to extend the wavelength range of these semiconductor laser diodes to shorter wavelengths. A wider range allows for wavelength multiplexing: using light of different colors to carry many signals simultaneously down a fiber, just as many distinct radio signals can be broadcast simultaneously at different frequencies. And because the size of a focused laser beam depends on its wavelength, shorter-wavelength laser light could be used to read and write smaller bit sizes into optical storage media, permitting a greater density of information. A CD read with blue light could have about four times the capacity of one read with the current generation of near-infrared lasers.
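
The “four times” figure follows from the way a focused spot scales with wavelength: at fixed numerical aperture the spot diameter is proportional to the wavelength, so the area occupied by each stored bit scales as the wavelength squared. The wavelengths below are representative values assumed for illustration.

```python
# Capacity gain from a shorter-wavelength read/write laser, assuming the
# bit area scales as the square of the wavelength at fixed numerical aperture.

lambda_near_ir_nm = 780.0   # near-infrared laser typical of early CD players
lambda_blue_nm    = 405.0   # blue/violet laser diode

capacity_gain = (lambda_near_ir_nm / lambda_blue_nm) ** 2
print(f"capacity gain at fixed numerical aperture: ~{capacity_gain:.1f}x")
# Roughly 3.7x, consistent with the "about four times" quoted above.
```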

This need to extend the wavelength range of laser diodes has led to much interest in materials that alter the frequency of light transmitted through them. A material that transmits light to a degree that is not simply proportional to the intensity of the illumination is said to have nonlinear optical properties; frequency-doubling and -tripling materials are examples of these. The first commercialized blue-light laser diode used the well-known frequency doubler potassium niobate to “upgrade” near-infrared light. But such devices are now rendered redundant by laser diodes that genuinely emit blue light.

The emission wavelength from a semiconductor laser is set by the bandgap of the material: crudely speaking, the energy difference between a mobile (conduction) electron and a bound (valence) one. This determines how much energy is given up (as a photon) when a mobile electron falls back to the valence band. The larger the bandgap, the larger the photon energy and so the shorter the wavelength. The search for large-bandgap materials led at first to so-called II-VI compounds such as zinc selenide, but these were never very efficient. In 1996, however, a bright blue/ultraviolet laser was reported in which the light-emitting medium was a “new” III-V material: gallium nitride.
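
The relationship is simply that between a photon’s energy and its wavelength: the emission wavelength in nanometers is roughly 1240 divided by the bandgap in electronvolts. The bandgap values below are standard room-temperature figures, included only to illustrate the trend.

```python
# Convert a bandgap (in eV) to an approximate emission wavelength (in nm)
# using lambda = hc/E, with hc ~ 1240 eV*nm. Bandgaps are standard values.

def emission_wavelength_nm(bandgap_eV):
    return 1239.8 / bandgap_eV

for material, eg in [("gallium arsenide", 1.42),
                     ("zinc selenide",    2.7),
                     ("gallium nitride",  3.4)]:
    print(f"{material:18s} Eg = {eg:.2f} eV  ->  ~{emission_wavelength_nm(eg):.0f} nm")
# Gallium arsenide emits in the near infrared (~870 nm), zinc selenide in the
# blue-green (~460 nm) and gallium nitride in the near ultraviolet (~360 nm);
# alloying shifts these values into the desired part of the spectrum.
```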

Figure 15. A Blue-light Laser Fabricated from Gallium Nitride

Gallium nitride was long known as a large-bandgap material, but its use in a laser diode seemed unlikely because of the difficulty of growing gallium nitride films on a silicon substrate. The problem faced by all semiconductors destined for optoelectronic technology is that the industry’s dependence on silicon devices means that silicon is still the universal substrate: everything has to be grown on silicon. Yet for most crystalline semiconductors the lattice spacing of the component atoms is very different from that between two silicon atoms. This means that, if a thin film of the material is deposited on silicon, the atoms must either move away from their equilibrium positions, or suffer occasional discontinuities of regular order, if they are to marry up with the silicon atoms to which they must bond. In other words, at least the first few layers of the film are strained, and this can give rise to imperfections that propagate into the growing film like a kind of crack. Defects like this can severely disrupt the electrical conductivity of the material, hampering the progress of mobile charge carriers. Gallium arsenide has a lattice spacing not far from that of silicon, which is one reason why it has been so favored as a photonic material.
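
The degree of strain is usually expressed as the lattice misfit, the fractional difference between the natural atomic spacings of film and substrate. The sketch below computes it for gallium arsenide on silicon, using standard room-temperature lattice constants; the numbers are assumed for illustration rather than taken from the text.

```python
# Lattice misfit between an epitaxial film and its substrate: the fractional
# mismatch in atomic spacing that decides how strained (and how defective)
# the deposited layers will be. Lattice constants are standard values.

def misfit(a_film_angstrom, a_substrate_angstrom):
    return (a_film_angstrom - a_substrate_angstrom) / a_substrate_angstrom

A_SILICON = 5.431            # silicon lattice constant, angstroms
A_GALLIUM_ARSENIDE = 5.653   # gallium arsenide lattice constant, angstroms

print(f"GaAs on Si misfit: {misfit(A_GALLIUM_ARSENIDE, A_SILICON):+.1%}")
# About +4%: even this is enough to strain the first layers of a film or seed
# dislocations; gallium nitride, whose spacing and crystal structure differ
# far more from silicon's, is harder still to grow as a regular film.
```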

Gallium nitride, however, has a very different lattice spacing, and so it seemed most unlikely that sufficiently regular films could be deposited on silicon to be operative in semiconductor laser diodes. The technical problems were solved, however, by the persistent effort of researchers at Nichia Chemical Industries in Japan, who created the first gallium nitride laser (Figure 15). Since then, several other companies worldwide have introduced their own versions of these blue and violet lasers. Future CD systems are sure to take advantage of the benefits in data storage capacity offered by these laser systems.

The need for compatibility of photonic semiconductors with silicon would be obviated if silicon itself could be made to emit light by injection of charge. As it stands, it is a very poor emitter of infrared light: the recombination process that expels a photon is very inefficient in relation to various processes that consume the injected electrons and holes without creating a photon. Yet there are still some hopes for silicon-based optoelectronics. If silicon is electrochemically etched in acid to dissolve most of it away and leave only a tenuous, sponge-like network of tiny “wires,” the highly porous material can emit visible light efficiently, owing to quantum effects that become significant in the nanometer-wide wires. And silicate compounds of cerium, which can be synthesized on silicon substrates, are also reasonably good emitters of blue light.

An increase in component density of optoelectronic circuitry would be made possible if the various photonic and electronic devices could sit side by side on a single silicon chip-an optoelectronic integrated circuit. This poses some big technical challenges, however, such as sufficient miniaturization of the various devices and the need to lay down areas of quite different semiconducting materials on different areas of the chip. A step further would be the development of photonic integrated circuits, in which all the processing is done with light-based signals. Such signals can be carried down waveguides, which are really nothing more than optical fibers etched directly onto a chip. The right materials are needed for the switching and logic processes currently performed by transistors-for combining two or more input signals into a particular output, for example. Nonlinear optical effects may be used to achieve switching (e.g., rerouting) of optical signals in fiber-based devices, but it remains to be seen whether the same functions can be managed in miniature on a chip.

The present means of getting light from one place to another in optoelectronic and photonic technologies is rather different from the way that electrical signals are conveyed. The latter tend to be confined within conducting materials by insulating surroundings: plastic sheathing on copper cable, silicon dioxide on silicon. But light is held within an optical fiber or a waveguide by reflection: it travels through a transparent medium surrounded by a different substance with a different refractive index, and bounces back from the boundary where the two meet. In silica glass fibers, the change in refractive index is typically achieved by doping either the core or the cladding layer (both basically silica) with different elements: germanium, phosphorus, or fluorine. Thus the cladding material does not have to be opaque (and in general it is not); there must simply be a large enough change in refractive index at the interface. The effect is much the same as sunlight reflecting off water.
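
Two standard quantities describe how well such an index step confines light: the critical angle for total internal reflection, and the numerical aperture, which measures the cone of rays the fiber will accept. The index values below are typical of a doped-silica core in a silica cladding and are assumed purely for illustration.

```python
import math

# Confinement of light by a small refractive-index step: critical angle for
# total internal reflection and numerical aperture. Index values are typical,
# assumed figures for a doped-silica core in a silica cladding.

n_core, n_cladding = 1.468, 1.463

critical_angle_deg = math.degrees(math.asin(n_cladding / n_core))
numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)

print(f"critical angle at the boundary: {critical_angle_deg:.1f} degrees")
print(f"numerical aperture:             {numerical_aperture:.3f}")
# Even an index step of a few parts per thousand guides light, but only for
# rays travelling nearly parallel to the fiber axis.
```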

This is generally an adequate way of confining light, but it has its drawbacks. For one thing, it can be leaky: the reflection is not perfect. Long-distance fiber-optic transmission cables are therefore punctuated with little amplifiers to compensate for these losses. And it is not easy to keep light on track around very sharp bends by internal reflection: like racing cars, light signals tend to shoot off the track at the corners. This poses a potential problem for on-chip photonic circuitry.
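
The scale of those losses is easy to estimate, since fiber attenuation is quoted in decibels per kilometer. The loss figure below is a typical value for silica fiber at telecommunications wavelengths, assumed here for illustration.

```python
# Convert a fiber loss budget (in dB) into the fraction of optical power that
# survives a span, to show why long-haul links need periodic amplification.
# The attenuation value is a typical, assumed figure for silica fiber.

loss_db_per_km = 0.2
span_km = 100.0

surviving_fraction = 10 ** (-(loss_db_per_km * span_km) / 10)
print(f"power remaining after {span_km:.0f} km: {surviving_fraction:.1%}")
# About 1%: after a hundred kilometers or so the signal must be re-amplified.
```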

So a better way of manipulating light signals might be to use the same principle as for electricity: to surround a light conductor with a light insulator. There is no shortage of transparent materials, but a light insulator is an unusual concept. It is hard to create a mirror that does not absorb some light at the same time as it reflects the rest. But in 1987, physicists realized that photonic insulators could be fashioned from microstructured materials containing periodic, crystal-like arrays of particles about the same size and separation as the wavelength of light: hundreds of nanometers or so.

The particle lattices in these so-called photonic crystals scatter light in the same way as the tiny fat globules in milk. To scatter strongly, the particles must have a fairly large difference in refractive index from the material in the spaces between them; and to act as a “perfect” mirror, they must be arranged regularly. Then, light of a particular range of wavelengths, set by the size and spacing of the obstacles, simply cannot penetrate into the forest of particles. It is said to have a photonic bandgap, just as the electronic bandgap of an insulator excludes conduction electrons.
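
A rough idea of where this forbidden band lies can be had from a Bragg-like reflection condition for the close-packed planes of the particle lattice. The sphere size, packing fraction and refractive indices below are illustrative assumptions; locating a genuine three-dimensional bandgap requires far more detailed calculation.

```python
import math

# Estimate of the stop band of a colloidal photonic crystal from the
# Bragg-like condition lambda ~ 2 * d * n_eff at normal incidence.
# All input values are assumed, illustrative figures.

sphere_diameter_nm = 300.0
n_spheres, n_background = 1.45, 1.00      # silica spheres in air
filling_fraction = 0.74                   # close packing of identical spheres

plane_spacing_nm = math.sqrt(2.0 / 3.0) * sphere_diameter_nm   # (111) spacing of a close-packed lattice
n_effective = math.sqrt(filling_fraction * n_spheres**2 +
                        (1 - filling_fraction) * n_background**2)

print(f"stop band centred near {2 * plane_spacing_nm * n_effective:.0f} nm")
# Around 660 nm for these numbers: red light is strongly reflected, which is
# why artificial opals of this sphere size show vivid structural color.
```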

One way to make a photonic crystal is to drill a regular lattice of holes through a solid block of a dielectric material (that is to say, an insulator like silica). The photonic bandgap exists only in directions in which there is periodicity. A flat film perforated with a two-dimensional field of holes, like a sieve, will have a bandgap in the plane of the film, but will not reflect light travelling perpendicular to the plane. In-plane light could be guided through such a material by omitting a line or two of holes-these defects in the two-dimensional photonic crystal will support a propagating light field, confined by the “light insulator” to either side. Light can be guided through sharp angles in such a material. Two-dimensional photonic crystals like this have been carved in thin films of semiconducting materials using electron beams: this enables the holes to be small and close enough to set up a bandgap in the near-infrared region used for current optical telecommunications.

However, making a three-dimensional photonic crystal at this scale is more challenging. In principle one can do it by drilling a network of holes in three dimensions; but that is technically difficult. Basically what one needs is a scaled-up crystal, in which the particles are not atoms but objects of micrometer size, separated by air gaps. Such materials do in fact exist naturally: opal is a “crystalline” array of microscopic spheres of silica. A suspension of such spheres can, if they are all of identical size, spontaneously settle into this kind of ordered array, called a colloidal crystal. There is now a great deal of interest in making photonic crystals in this way.

The existence of three-dimensional photonic bandgaps in the near infrared has now been demonstrated in colloidal crystals made, for example, from synthetic silica microspheres. The precise nature of the bandgap depends on the stacking arrangement of the spheres, and on the nature of the refractive-index change between the spheres and their surrounding medium. The first of these can be controlled to some extent by employing principles used in conventional vapor-phase crystal growth for semiconductor technology. Here the atomic ordering in an overlayer may be governed by that in the surface layer of the substrate on which it is grown, which acts as a kind of template. By using a holey template carved from a thin film, a colloidal crystal can be guided into a particular arrangement of spheres, rather as a layer of eggs fits into an egg box.

The difference in refractive index can be adjusted by filling the spaces between the microspheres with some other material-by precipitating an inorganic substance in the interstices, for example. Indeed, one way to make a photonic crystal with rather different bandgap characteristics from the original stacking of spheres is to fill up the spaces between them and then dissolve away the original spheres to leave a highly porous material with a highly uniform pore size and shape (Figure 9).

This templating approach to making “inverse” photonic crystals offers a great deal of control over the photonic properties: silica and polymeric microspheres with very uniform sizes are available commercially, but are not necessarily the best media for photonic applications. For one thing, the fabric of the photonic crystal must not absorb light strongly in the wavelength band of interest; and a full photonic bandgap (“perfect insulation”) generally requires a large refractive-index difference between the array elements and their surroundings. Selenium glass fulfills these requirements, and it can be infiltrated into a silica colloidal crystal that is subsequently dissolved away to create a near-infrared photonic crystal.

Optical fibers too can use photonic bandgaps to confine light. Silica-glass “photonic crystal fibers” have been made by heating and drawing out a bundle of glass capillaries with a solid glass rod at its center. The drawing process shrinks the cross-section of the bundle by many orders of magnitude while preserving, in cross-section, the pattern of holes that run along the length of the fiber; this array of holes acts as a photonic crystal with a bandgap in the appropriate range (infrared or visible). The absence of a hole in the central part of the fiber provides a “conducting” channel for the light signal.

Figure 16. Photonic-crystal Fibers Confine Light in their Core
by Surrounding it with a Periodic Lattice of Holes, which
Acts as a Photonic Crystal Impermeable to Light
These fibers can be made by heating and drawing out
bundles of glass capillaries

These fibers act as “single-mode” conduits, supporting only a single optical mode (since the others would leak away through the photonic-crystal cladding) and so avoiding the smearing of pulsed optical data that can occur with conventional multimode fibers. They can support very high single-mode optical power densities, meaning that for telecommunications applications, fewer in-line amplifiers are needed en route. And photonic-crystal fibers could also channel intense laser light for microsurgery or precision engineering.

8. Display Technology

In its broadest sense, information technology is not just about processing, conveying and storing information, but displaying it. The most immediate and perhaps the most important interface in desktop computer systems is that between the machine and the eye, mediated by the computer screen. Display technology-the conversion of electronic data to a visual display-is a vast industrial concern, of which computer screens are but one aspect. Television screens are of course basically the same devices; but traffic signals are a very different kind of visual display, and increasingly electronic media are replacing ink and paper as the vehicle for all kinds of written information.

Standard-sized television screens today employ the same technological principles as the earliest of cathode-ray tubes, dating from before the discovery of the electron. A beam of electrons is scanned rapidly over a pixellated array of phosphor dots, which glow in a particular color when irradiated. Separate red, green and blue phosphors at each pixel suffice to generate the colors of most of the visible spectrum.

This is a cumbersome system, since it requires an electron gun placed some distance behind the screen. The monitor is therefore the bulkiest part of many personal computers. Laptop computers use a different display medium, which is more expensive to produce but less wasteful of space and less hungry for power. These flat screens make use of liquid-crystal light shutters, which can be switched electronically between a transparent and an opaque black state. In the transparent state, a liquid-crystal pixel element lets through light from a background source, which also passes through a color filter to generate the three primaries of each pixel. Some of the challenges for liquid-crystal display technology include faster switching speeds and the development of display systems that permit a wide viewing angle, so that the picture does not vanish or become bizarrely colored when not seen face-on.

But better still than this shutter-and-filter method would be a display in which each element is an intrinsic light emitter that is electronically switchable, robust, flat and cheap to produce. Most efforts in this direction are focused on the fabrication of banks of light-emitting diodes (LEDs); the challenge is to make them cheap, reliable and bright enough that the flat screen becomes an economically viable product. Light-emitting diodes based on the same kinds of inorganic semiconducting materials used in laser diodes have long been in use: gallium arsenide doped with phosphorus, for example, offers light emission in the visible range. But whereas such materials cover the spectrum from red to green, efficient blue LEDs were not available until the advent of gallium nitride. LEDs made from this material were in fact marketed several years before the corresponding blue-light lasers, bringing full-color bright LED displays and TV screens within reach for the first time. A version of gallium nitride alloyed with indium is an efficient emitter of green light, and is used in LED-based traffic lights, which are not only brighter than those that use incandescent bulbs but also have much lower power consumption and longer lifetimes. Applications like these seem set to secure gallium nitride as a major technological material in the next several decades.

Figure 17. A Traffic Light that uses Red, Amber and Green Light-emitting
Diodes Rather than Incandescent Bulbs

But for flat-screen color displays, inorganic semiconductors now face stiff competition from organic materials: light-emitting polymers. These work on much the same principles: the emitter is a semiconductor, which luminesces when charge is injected (although the mechanisms of charge transport and recombination are quite different). The polymers acquire an electrical conductivity from the presence of delocalized electron orbitals along their chains. The first polymer LED, fashioned in 1990, was made from the hydrocarbon polymer poly(p-phenylene vinylene), which emits in the yellow part of the spectrum. Conducting polymers have since been devised that glow right across the visible range, so that full-color displays are now possible in principle from polymer LEDs. The advantages are that these materials are very lightweight; flexible (a polymer LED can be rolled up like a sheet of paper); easy to process and to fabricate into patterns (this can be done using a kind of printing process); and amenable to fine-tuning of the emission wavelength by chemical modification of the polymer chain. The drawbacks are that the emission efficiencies (power out relative to power in) can be very low, and that the polymers may be susceptible to chemical degradation after many hours of use. These shortcomings are being overcome, however, and a full-color polymer LED display is already commercially available.

Figure 18. A Display Device based on a Polymeric (“Plastic”) Light-emitting Diode

But polymers are not the only class of organic materials capable of providing electroluminescent devices. These have also been fabricated from thin films of small molecules, notably the complex of aluminum with three 8-hydroxyquinoline molecules (tris(8-hydroxyquinoline)aluminum), which emits light in the green region of the spectrum. Both this material and electroluminescent polymers have also been used as the active medium in “organic” thin-film lasers.

A completely different approach to full-color flat-screen displays is to miniaturize the old cathode-ray tube technology in so-called field-emission devices. Here, as in tube-based screens, a stream of electrons excites a colored phosphor; but the electron beam is generated very close to the phosphor dot by using an intense electric field to pull the electrons from a microfabricated needle-like tip, a “field emitter.” The emitter is charged to a negative potential with respect to a plate above it; at the very apex of the tip, the field is then strong enough to pull electrons out into the empty space, where they are accelerated towards the plate but pass through a hole in it (just as the electrons in a cathode-ray tube fly past the positively charged accelerator plates) to strike the phosphor.

In a cathode-ray tube, emission of electrons from the cathode is promoted by heating it to a high temperature. But in the miniaturized vacuum tubes, making the cathode from a material in which “free” (conduction) electrons already have a higher energy than they would in the vacuum (that is, materials with a negative electron affinity) means that emission can take place even when the cathode is cold. Diamond doped with elements that provide “excess” electrons has this property, and so diamond thin films, shaped into arrays of pyramid-like tips, are being investigated as potential cold-cathode displays.

Figure 19. A Diamond-based Field-emission Device for Display Technology
The diamond acts as a “cold cathode” emitting electrons
even at room temperature

The centuries-old “display technology” of ink on paper is still popular today. Most people still find documents easier to read on paper than on screen, no matter how prettily colored the latter can be. Paper is also a much more convenient and portable medium for information. But a book can weigh as much as a laptop computer capable of holding an entire library’s worth of data. To combine the advantages of both, researchers are developing “electronic ink”: materials that, in the form of thin films, can resemble the stark black-on-white of ink on paper and yet can be reconfigured electronically into new pages. One version, called E-Ink, consists of micrometer-sized clear plastic capsules containing black and white pigment particles, which can be rearranged by applying an electric field across the capsule. If the black particles are drawn to the top, the capsule appears dark when seen from above; but switching the field brings the white particles to the top instead.

Figure 20. E-Ink is an Electronically Switchable Ink Comprised of Clear Plastic Microspheres Containing Black and White Pigment Particles
Which of them is displayed at the upper face of the capsule is determined by an electric field

One can regard this microstructured device as a kind of smart composite material. A layer of E-Ink laminated between two sheets of clear, electrically conducting material patterned into tiny pixels provides a page whose text can be electronically controlled. In this way, the computer might not so much replace the book as reinvent it in a new, lightweight form.

9. Ultrastrong Fibers

All of this must seem a far cry from the days when new materials meant bronze, stainless steel, Bakelite casings or cement. But this survey will conclude by returning to such traditional structural roles of materials. We will always need to fabricate structures that execute some passive function without snapping, cracking, crumbling, wearing away, or collapsing. Technological advances pose new challenges to the strength, toughness, and resilience of materials, nowhere more evidently so than in space engineering. These demands may be usefully illustrated by taking a leap into the future: one that admittedly may never materialize, but which at least has a good pedigree, as it comes from the fertile mind of Arthur C. Clarke.

In his novel The Fountains of Paradise, Clarke posited the Space Elevator: a platform positioned in geostationary orbit around the Earth, tethered to the ground by a long, superstrong cable. Space hardware is shuttled up to the platform via an elevator, from where it can be launched into space with a fraction of the fuel requirements needed to escape the Earth’s gravity from ground level. As most of a rocket’s mass consists of the engines and fuel needed for this ascent, the savings imparted by the Space Elevator are substantial.
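
The altitude of that platform is fixed by orbital mechanics: a geostationary orbit is the one whose period matches the Earth’s rotation. The short calculation below uses standard values for the Earth’s gravitational parameter, rotation period and radius.

```python
import math

# Geostationary orbital radius from Kepler's third law: the orbit whose
# period equals one sidereal day. Constants are standard textbook values.

MU_EARTH = 3.986e14          # Earth's gravitational parameter GM, m^3/s^2
SIDEREAL_DAY_S = 86164.0     # Earth's rotation period, seconds
EARTH_RADIUS_KM = 6378.0     # equatorial radius, kilometers

orbital_radius_m = (MU_EARTH * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = orbital_radius_m / 1000.0 - EARTH_RADIUS_KM

print(f"geostationary altitude: ~{altitude_km:,.0f} km above the equator")
# Roughly 35,800 km: the tether of Clarke's elevator would have to span this
# distance, which is why its strength requirements are so extreme.
```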

But what manner of thread could be relied upon to tie a platform, possibly manned, to the Earth’s surface? Today’s materials science provides two candidates, and both are forms of pure carbon.

The chemical bond between two carbon atoms is one of the strongest and stiffest known: it is ultimately responsible for the tremendous hardness of diamond (although the relationship between bond strength and hardness is by no means simple, or even fully understood). Diamond’s sibling allotrope graphite, in contrast, has a reputation for weakness: its flaky layers can be rubbed off simply by the passage of a pencil over paper, and for this reason graphite makes a good lubricant. But the weakness is all in one direction. Graphite consists of sheets in which carbon atoms are linked into adjoining hexagons, like chicken wire. The bonding between the sheets is weak, since there are no bonds “left over” on each carbon atom; so the sheets slide easily over one another. But the bonding within the sheets is very strong. It is just that we do not get to see this, because the stacks of sheets form tiny crystallites with little cohesion between them. The potential strength of graphite-like (graphitic) carbon is evident, however, in carbon fibers, in which the sheets are all aligned and are linked by a few strong bonds.

In 1991, the ultimate carbon fiber was discovered: sheets of graphite-like carbon rolled up on themselves into tubes just a few nanometers in diameter. The first of these “carbon nanotubes” were many-layered: tubes inside tubes, like Russian dolls, each one separated from its neighbors by the same distance that divides the flat sheets in graphite. But nanotubes have since been made that have only a single layer, and it is now possible to exercise at least a little control over the width of the tubes.

Figure 21. Carbon Nanotubes are Hollow Cylindrical Structures of Pure Carbon, Linked into the Hexagonal Sheets Characteristic of Graphite

The carbon atoms in nanotubes are linked into hexagons, which are arrayed around the side of the tube in a spiral fashion. The properties of the tubes can depend very sensitively on this spiral arrangement: on its pitch, for example. Certain spirals confer electrical conductivity, while others give semiconducting nanotubes. This opens up the possibility of using carbon nanotubes as “molecular wires,” thinner than the thinnest wires that can be carved lithographically into metal or semiconductor films. And theory predicts that the electrons, confined essentially to one dimension in these tiny wires, will exhibit unusual quantum-mechanical behavior as a result of this reduced dimensionality. Moreover, if the nanotubes can be modified at certain locations, perhaps by introducing kinks or dopant molecules into the framework, the electronic properties might be altered in ways that become useful for electronic engineering. Transistor-like and rectifying behavior has already been seen in carbon nanotubes.
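
The “spiral arrangement” is conventionally labelled by a pair of integers (n, m) describing how the graphite sheet is rolled up, and a well-known rule of thumb then predicts the electronic character: metallic when (n - m) is divisible by three, semiconducting otherwise. The sketch below applies that rule; the lattice constant is the standard graphite value.

```python
import math

# Classify a carbon nanotube from its (n, m) roll-up indices: estimate its
# diameter and apply the standard metallic/semiconducting rule of thumb.

A_LATTICE_NM = 0.246   # graphene sheet lattice constant, nanometers

def describe(n, m):
    diameter_nm = A_LATTICE_NM * math.sqrt(n*n + n*m + m*m) / math.pi
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return f"({n},{m}): diameter ~{diameter_nm:.2f} nm, {kind}"

for n, m in [(10, 10), (17, 0), (12, 8)]:
    print(describe(n, m))
# The (10,10) "armchair" tube conducts like a metal, while the (17,0) and
# (12,8) tubes are semiconducting: electronic character follows directly
# from the geometry of the roll-up.
```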

There are many other possible applications of nanotubes that exploit their small size, hollow nature and electrical conductivity. But perhaps the most dramatic application could take advantage of the fact that, as an essentially crystalline form of pure graphitic carbon, they should be extremely strong, and also very stiff. Experiments have provided some justification for the belief that nanotubes are at least as stiff as diamond, and several times more so than steel. Lacking the imperfections that exist in conventional carbon fibers (which are orders of magnitude larger), carbon nanotubes may be the strongest of all human-made fibers.

One of the challenges in putting these exceptional properties to use, however, is that of growing nanotubes to great lengths-typically, they are no longer than a micrometer or so, and capped with a hemisphere or polygon of carbon hexagons and pentagons. The dream is to understand the formation process well enough that nanotubes could be grown to any length, like spaghetti feeding continuously from an extruder.

Yet nanotubes are not the only contender for the Space Elevator’s tether, for strong fibers can also be fabricated from carbon’s other form, diamond. Synthetic diamond has been known since the 1950s, and most of that currently used as an industrial abrasive and in cutting tools is created by exposing carbon-rich material to very high temperatures and pressures. Yet diamond can also be grown at low pressure from a carbon-containing gas, typically methane fragmented into free radicals by heat or radio waves. This is a form of chemical vapor deposition (CVD), and it may be used to deposit thin films of diamond on a substrate. Diamond films grown by CVD are generally polycrystalline: a mass of tiny grains fused together. Diamond coatings of this sort promise to confer wear resistance and good frictional characteristics on machine parts.

Diamond wires, meanwhile, can be made simply by depositing CVD diamond onto metal wires. Iron is unsuitable as a substrate, because carbon dissolves into it to form a carbide; but other metals, such as titanium and molybdenum, serve well.

Figure 22. A Diamond-coated Metal Wire Represents a Very Strong Fiber,
which could be Used to make Ultra-tough Fiber-reinforced Composites
If the wire is coiled, diamond coating produces a hollow diamond tube

Used in fiber composites, these diamond fibers can engender a much greater stiffness than that offered by conventional “stiff” fibers such as silicon carbide. A diamond-fiber/titanium alloy has even been proposed as the fabric of future spacecraft by one of the scientific advisers to the Star Trek series, blurring the lines between fact and fiction. And diamond tubes, made by coating coiled wire with a diamond film, could contain an air-curing adhesive that would provide a self-healing capability to damaged fibers. Whether even this will keep a Space Elevator in place remains to be seen.

10. Materials Made To Measure

Until the end of the twentieth century, the discovery of new materials was a haphazard and empirical process. We do not know how silk and paper were invented in ancient China, but we can be certain that no one understood the first thing about why they have their particular and attractive properties. Copper was perhaps first smelted in the Middle East as a by-product of pigment manufacture. Even the earliest synthetic polymers and plastics (cellulose nitrate, vulcanized rubber, Bakelite) were chance discoveries, whose discoverers knew next to nothing of their material’s composition.

As we enter the twenty-first century, things are fundamentally different: we have an entirely different attitude to materials discovery. Serendipity will never become obsolete, for science has always depended on an element of luck coupled to a prepared mind. But materials are being not so much discovered as invented: designed for the job, their components rationally selected and assembled for specific functions. Even steels have become highly designed materials, with carefully blended compositions to suit different roles. A report by the US National Academy of Sciences in 1997 put it like this: “Our knowledge now gives us unprecedented control over the structure and properties of materials.”

Several factors have made this possible. Materials scientists now have at their disposal a vast array of techniques for probing the most intimate structural features of materials: new microscopes that can provide images at atomic resolution, scattering methods for deducing the crystal structures of the tiniest samples, spectroscopic probes that reveal the subtleties of chemical bonding. Fabrication methods permit the control of structure over a wide range of length scales. The ability to design molecules that interact and assemble in highly specific and predictable ways has had a great impact on the synthesis of molecular materials. Increases in computer power enable theorists to predict many properties of a hypothetical material (electronic, mechanical, optical) based on a knowledge of nothing more than how the atoms are arranged. A greater understanding of the mechanisms of cell biology guides the design of new materials sympathetic to the processes of life. These developments provide many handles for manipulating the material world.

At the same time (and for much the same reasons), materials science has emerged as an expanding interface between many diverse disciplines, at which there are rich seams of fundamental science to be mined. And so the discipline has been transformed from a branch of engineering to one of the mainstreams of fundamental and applied science, attracting fruitful collaboration between scientists of all persuasions.

Regardless of whether this or that material mentioned in this article proves to be a winner in the marketplace, the impact of these changes will be profound, not only in science but in daily life. The future of information technology, energy production, transportation, space technology, medical science and chemical engineering all depend to a considerable degree on the invention of new materials. These will surely be the products of exquisite planning and execution, fabrics tailored to perform feats unimaginable in traditional materials. With such capabilities at our fingertips, society will be confronted more strongly than ever with the responsibility to make wise choices about the technologies it creates.


Biographical Sketch

Philip Ball is a science writer and a consultant editor for Nature, the international journal of science. He was an editor for physical sciences with Nature for over 10 years. He is also Science Writer in Residence at University College, London. Philip Ball’s books include Made To Measure: New Materials for the 21st Century (Princeton University Press, 1997) and H2O: A Biography of Water (Weidenfeld and Nicolson, 1999). He writes on all areas of science for the international press and in the scientific literature. Philip holds a degree in Chemistry from the University of Oxford, and a Ph.D. in physics from the University of Bristol, UK.
