Nanotechnology was speculated on by futurists such as K. Eric Drexler, Marvin Minsky, and Robert Freitas Jr.; it was portrayed in science fiction as far back as the 1960s, and most recently in the Marvel Cinematic Universe as a key component in the suits of Black Panther, Iron Man, and Spider-Man. The popular conception of nanotechnology consists of swarms of self-replicating, nanometer-scale robots working in concert under the direction of their creators. As is often the case with emerging technologies, this popular vision of the field is starkly at odds with experimental reality.
The global nanotechnology market was estimated at over $1 billion in 2018 and is projected to more than double by 2025—and yet not only are nanorobots utterly absent from the lineup of current nanotech-enabled commercial products, they’re not even under development in research laboratories anywhere in the world. Understanding this disconnect is critical for anyone wishing to realistically forecast the future of the field.
What do researchers in the field mean by “nanotechnology”?
A good place to start in exploring this disconnect is with the definition of nanotechnology given by Mihail Roco, an architect of the United States federal government’s National Nanotechnology Initiative:
Nanotechnology is the creation of functional materials, devices, and systems through the control of matter on the nanometer length scale, and the exploitation of novel phenomena and properties developed at that scale.*
To gain a foothold within this definition, let’s start with “novel phenomena and properties”. The behavior of matter that is micrometer-sized and above is governed by classical physics. Hold a pencil in your hand and then drop it, and it falls according to Newton’s laws of motion. We have gravity, air pressure, drag, elastic and inelastic deformations upon contact with the floor, and other familiar forces governing the pencil’s behavior. While there are numerous forces involved in this everyday form of physics, the general behavior is nonetheless well understood and in many cases quite intuitive. After all, we evolved and learned what to expect from dropped objects in an environment where these behaviors dominate.
At the atomic level, however, quantum mechanics rules the behavior of matter. Objects exhibit particle-like and wave-like behavior simultaneously. They can tunnel through barriers that they “shouldn’t” have enough energy to overcome. They can become entangled with other objects, such that measuring one affects the other despite the two not being connected in any traditional sense. Quantum mechanics is, if anything, counterintuitive, and yet the laws that govern quantum mechanical behavior have been understood by physicists quite well for nearly a century.
There isn’t, however, a sharp size cutoff between classical behavior and quantum behavior. Imagine you have a bar of gold. It is a reflective, yellow metal. Cut the bar in half, and it is still a reflective, yellow metal. Keep doing this repeatedly, and eventually the color starts to shift from yellow to red, and then even further into the infrared, all well before we get to the point of individual atoms. What we are observing in this process is a gradual transition from classical physics being the best description, to quantum mechanics being the best description. Exactly how big the particles are determines where on that transition we find ourselves.
Now consider that property with the mindset of an engineer. Suppose your application requires a specific color. If you can control the size of your particles very precisely, you can control their color very precisely. You now have an engineering knob you can turn to give yourself access to exactly the property you are interested in. Generally, when a property exhibits this kind of size-dependence, the transition occurs somewhere between 1 and 100 nanometers.
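The textbook particle-in-a-box model is far too crude to describe gold nanoparticles quantitatively (their color is a plasmonic effect), but it illustrates why optical properties become size-dependent at all:

```latex
% Energy levels of a particle of mass m confined to a box of width L:
E_n = \frac{n^2 h^2}{8 m L^2}
% Lowest optical transition energy, which sets the absorbed wavelength:
\Delta E = E_2 - E_1 = \frac{3 h^2}{8 m L^2} \propto \frac{1}{L^2}
```

Halving the confinement length quadruples the transition energy. At macroscopic sizes the levels are so closely spaced that this size-dependence is invisible, which is why the gold bar looks the same after every early cut; only near the nanometer scale does the particle size become an engineering knob.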
Getting specific: Nanotech-enabled cancer therapy
Let’s make this exercise concrete with a specific example: cancer therapy. With a solid tumor, surgery can often remove the main mass, but the determining factor in long-term survival is whether you are able to suppress or destroy the small number of cancerous cells that lie beyond the surgically removed tumor. If you succeed, the patient is cancer-free. If a few cells are missed, they blossom into new tumors. The purpose of most chemotherapy, radiotherapy, or other post-surgery techniques is to kill those few remaining cancerous cells. The problem with such therapies is that, in general, the materials are toxic by design, and you are simply betting that they will kill more cancerous cells than healthy cells. This is why chemotherapy and radiotherapy have such extreme side effects: They are deliberately killing lots of cells, and many of those cells are healthy ones.
This is why biotechnology researchers are pursuing “targeted” chemotherapies. Researchers identify a feature that is overexpressed on the cell membrane of a cancerous cell, such as certain oncoproteins, and attach a synthetic antibody for that feature to the chemotherapeutic agent. While the therapeutic agent passes throughout the entire body via the circulatory system, it sticks to the cancerous cells, and so causes the most damage there. It’s still not perfect, because the agents nonetheless cause damage elsewhere, and once the cancerous cell is dead the body has to flush out the still-toxic substance, allowing more damage to happen on the way out. But it is a step in the right direction.
What would take such a therapy to the next level would be an ability to turn on the toxicity of the agent externally. Imagine this scenario: We attach the oncoprotein antibody to the therapeutic agent while the agent is in a completely harmless state. The tagged agent is injected into the bloodstream and permeates the body, but only sticks to the cancerous cells. The agent that hasn’t attached to cells clears the body by any of the standard biological routes (kidney, liver, etc.). At this point the cancer has the agent on it, and none of the other cells do. Now, with an external signal of some kind, we turn on the toxicity of the agent, killing the cancerous cells. We then turn it off again, allowing the remains of the dead cells and the formerly toxic therapeutic agent to clear the body without causing any further problems to the patient.
What kinds of external signals could be used? Many such possibilities have been pursued by researchers, including magnetic fields and RF signals, each of whose implementations employ nanotechnology by Dr. Roco’s definition. But the one that I will describe in detail here uses my earlier example of gold. Remember that the smaller you make gold nanoparticles, the further toward the red the color shifts. A pair of researchers, Jennifer West and Naomi Halas, have fine-tuned this process even further by coating nanometer-scale silica (i.e., glass) spheres with thin shells of gold, allowing them to tune the wavelength of light their gold nanoshells absorb very precisely down into the near-infrared region of the electromagnetic spectrum. Not very many materials absorb this particular kind of invisible-to-us light, which means that a near-infrared laser can penetrate very deeply into the soft tissues of the body without any meaningful loss of intensity, and with no effect on the tissues it passes through. But if these gold nanoshells are within those tissues, they will absorb the light very efficiently. That absorbed light energy has to be converted into some other kind of energy, and so the nanoshells heat up, effectively cooking the neighboring cells to death.
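The photothermal mechanism can be sanity-checked with a toy energy balance. The sketch below ignores blood perfusion and heat conduction, and every number in it is a hypothetical illustration rather than data from the actual therapy:

```python
def temperature_rise(absorbed_power_w: float, duration_s: float,
                     volume_m3: float, density_kg_m3: float = 1000.0,
                     specific_heat_j_kg_k: float = 4186.0) -> float:
    """Adiabatic temperature rise of a tissue volume absorbing laser power.

    Toy model: all absorbed energy heats a water-like tissue volume;
    conduction and perfusion losses are ignored.
    """
    mass_kg = density_kg_m3 * volume_m3
    return (absorbed_power_w * duration_s) / (mass_kg * specific_heat_j_kg_k)

# Hypothetical numbers: nanoshells in a 1 mm^3 pocket of tissue absorbing
# 1 mW of near-infrared laser light for 60 seconds.
delta_t = temperature_rise(absorbed_power_w=1e-3, duration_s=60.0, volume_m3=1e-9)
print(f"Temperature rise: {delta_t:.1f} K")  # ~14 K with these toy numbers
```

A sustained rise of 10–20 K above body temperature is in the range used for thermal ablation; the point of the sketch is only that mere milliwatts of absorbed power, concentrated by the nanoshells into a tiny volume, heat that volume dramatically.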
We now have the tools to describe a startlingly effective cancer therapy. Gold and silica are particularly biocompatible, so side effects should be minimal. We attach to the gold nanoshells an antibody for an oncoprotein overexpressed on the particular cancer cells being targeted. We surgically remove as much of the tumor as possible, and then inject a dose of the antibody-tagged nanoshells into the patient and allow them to circulate throughout the body. They stick to the remaining cancerous cells, and the unattached nanoshells flush out. Then the region surrounding the removed tumor is exposed to a near-infrared laser, cooking the remaining, too-isolated-to-find cancerous cells to death. The laser is turned off, and the nanoshells and the dead cancer cell remnants are cleared by the body. There is little damage to healthy tissues, and even the smallest pockets of cancerous cells are susceptible to the treatment.
This technology, dubbed AuroLase Therapy, has been in development for two decades, and is now in pilot studies in humans with early results showing remarkable success. Two forecast questions and their resolutions are meant to help us gauge such progress; I predict them as 70% and 95% likely, respectively.
Generalizing: Nanotechnology is nanomaterials (and some nanodevices)
When you look at other commercial nanotechnologies, the pattern this example points to becomes clear:
- Nanowhiskers embedded in textiles to render your clothing or furniture immune to spilled wine
- Self-cleaning windows where embedded nanoparticles of titania photocatalyze the degradation of dirt into smaller molecules that can be easily washed away by the next rain
- Sunscreens with titania or zinc oxide nanoparticles that block ultraviolet radiation effectively and safely
- “Nano-glue” adhesives that get stronger at high temperatures, unlike traditional adhesives
- Nanoparticle additives to sports equipment that make tennis racquets stronger and help tennis balls retain their bounce longer
Nanotechnology is, today and for the foreseeable future, dominated by nanomaterials applications—not nanodevices, and certainly not nanorobots. It’s true that within the electronics industry there are examples of nanodevices: Field emission displays, photonic crystals, and field-effect transistors are all device types that depend heavily on the properties of nanostructured material components. But examples like field-effect transistors in fact predate the formal field of nanotechnology, and so there has been a natural resistance to subsuming them into the “new” category known as nanotech.
A similar issue has arisen in nanomaterials. Colloids are nanostructured materials that have been used because of their nano-specific properties since the Middle Ages, as anyone who has looked at a stained-glass window can attest. The vibrant reds in such windows come from colloidal gold particles around 25 nm in diameter, and they have that color because of their size. Some argue that essentially all of biotechnology is nanotechnology, since the active biomolecules in biotechnology are in the right size range, and their critical properties are at least in part due to their nano-scale structuring. There’s a reason nanotech pioneer Richard Smalley famously declared that biology was the only working nanotechnology.
So, depending on how expansive you are willing to make your definition of nanotechnology, the true economic impact of the field may be several orders of magnitude higher than the $1–2 billion figure cited earlier. There are pros and cons to such expansive definitions; for the purposes of this article, we will remain agnostic on the question.
Nanosystems: The unapproached target
All that said, the popular conception of nanotechnology remains the nanorobot. Generally these are self-replicating. Otherwise, how would one get enough such nanorobots to do anything useful? Generally they operate on deterministic principles akin to modern computers. Otherwise, how could they be trusted to do exactly what we design them to do? The two titans of the field who most famously debated the feasibility of such “molecular nanotechnology” were Eric Drexler and Richard Smalley, who engaged in a series of articles and open letters in Scientific American and Chemical & Engineering News from 2001 to 2003, with Drexler taking the pro-nanobot position and Smalley the anti-nanobot position.
Drexler had spawned the modern fascination with nanobots with his books Engines of Creation and Nanosystems: Molecular Machinery, Manufacturing, and Computation. His vision of self-replicating nanobots constructed from diamond-inspired, carbon-based structures prompted calls for funding agencies to support research that would lead to the ability to grow a “battleship in a beaker” in the not-too-distant future. Smalley, winner of the 1996 Nobel Prize in Chemistry, disagreed vehemently, citing what he termed the “fat fingers problem”: atomic-scale manipulators are themselves too large to position other atomic-scale components with atomic accuracy. He also cited the “sticky fingers problem”: components bind to the manipulators that carry them, making their release at precisely the intended position a severe energetic issue. The exchange is worth reading even twenty years later, but in terms of evaluating the experience behind each interlocutor’s position it is worth noting that Smalley was an experimental scientist, while Drexler was a theoretician whose Ph.D. came from the MIT Media Lab after the department of electrical engineering and computer science refused to approve his plan of study.
Smalley’s two primary objections, the “fat” and “sticky” fingers problems, remain unresolved. The inherently probabilistic nature of quantum mechanics poses a possibly insurmountable problem for the atomically precise assembly required by Drexler’s vision of molecular nanotechnology—as do the laws of thermodynamics, which guarantee entropic defects in any crystal formed above absolute zero. Most of the proposals for molecular nanotechnology systems are scaled-down versions of devices that operate, at their original larger scales, according to classical physics. Such behaviors simply don’t, in general, scale down to quantum regimes.
To that end, it is instructive to further consider Smalley’s observation that biology is the only working nanotechnology. Biology, unlike Drexlerian molecular nanotechnology, is anything but atomically precise. Biology consists of the construction of components that self-assemble into only roughly defined structures. No two bacteria have exactly the same number of lipid molecules in their cell membranes. No two have exactly the same mutual spatial arrangement of their trans-membrane proteins. The translation of proteins from RNA is good, but not 100% accurate. The replication of DNA during reproduction is good, but not 100% accurate. The system as a whole is fundamentally, structurally impossible to map to modern computer architectures, which depend intrinsically on deterministic behavior of all components. If a circuit misfires in a modern computer, it is a bug. If a molecule doesn’t move exactly as expected in a bacterium, it isn’t even noticed, because the architecture doesn’t depend at all on that kind of precision. It depends instead on the statistical behavior of many molecules within a loosely defined structure.
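The contrast between the two architectures can be made vivid with a toy simulation: a system that fails if any single step misfires, versus one that only needs a majority of its components to work. All the probabilities and sizes below are illustrative, not measurements of any real system:

```python
import random

def deterministic_system(steps: int, p_fail: float) -> bool:
    """Computer-architecture model: fails if any single step misfires."""
    return all(random.random() > p_fail for _ in range(steps))

def statistical_system(n_components: int, p_fail: float,
                       threshold: float = 0.5) -> bool:
    """Biology model: succeeds if a majority of independent components work."""
    working = sum(random.random() > p_fail for _ in range(n_components))
    return working / n_components > threshold

random.seed(0)
trials = 10_000
# 1,000 steps that must each work, each 99.9% reliable...
det = sum(deterministic_system(1000, 0.001) for _ in range(trials)) / trials
# ...versus 1,000 components, each only 90% reliable, needing a bare majority.
stat = sum(statistical_system(1000, 0.1) for _ in range(trials)) / trials
print(f"deterministic: {det:.2f}, statistical: {stat:.2f}")
```

With each step 99.9% reliable, a chain of 1,000 consecutive perfect steps succeeds only about a third of the time (0.999^1000 ≈ 0.37), while the aggregate of 1,000 far-less-reliable components essentially never fails. That robustness to individual misbehavior is what biological architectures buy by giving up determinism.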
So, are nanosystems, perhaps even nanobots, possible? Yes, absolutely. Biology is evidence of that. But the popular vision of nanobots—with atomically precise positioning of all components operating in a deterministic fashion designed by the architect—is not being pursued by any serious researchers, and for very good reason. It may in fact be fundamentally impossible. Biologically-inspired nanosystems that depend on aggregate behavior of numerous, independent active components in a statistical sense, however, are almost certainly possible. But understanding such structures well enough to design them will require significant advances in non-deterministic architectures. There is progress in that area, but it is a long way from producing a working nanosystem.
And in the meantime, there is so much societally important low-hanging fruit like AuroLase Therapy that doesn’t require full nanosystems. With the exception of software engineering, all engineering consists of constructing physical products to solve real-world problems. The solutions we can construct are all too frequently limited by the properties of the materials we can use. The greatest frontier in the development of new materials today is in nanostructured materials. It seems obvious, then, that the vast majority of the research effort in nanotechnology for the foreseeable future will go into nanostructured materials. That doesn’t mean that nothing will be developed in the area of nanosystems, but until there is a realistic roadmap to a specific, societally valued outcome that requires a nanosystem, such advances will be slow.