# When Will We See Real Nanotechnology?

Nanotechnology was long speculated on by futurists such as K. Eric Drexler, Marvin Minsky, and Robert Freitas Jr. It was portrayed in science fiction as far back as the 1960s, and most recently in the Marvel Cinematic Universe as a key component of the suits of Black Panther, Iron Man, and Spider-Man. The popular conception of nanotechnology consists of swarms of self-replicating, nanometer-scale robots working in concert under the direction of their creators. As is often the case with emerging technologies, this popular vision of the field is starkly at odds with experimental reality.

## Nanosystems: The unapproached target

All that said, the popular conception of nanotechnology remains the nanorobot. Generally these are self-replicating: otherwise, how would one get enough of them to do anything useful? Generally they operate on deterministic principles akin to modern computers: otherwise, how could they be trusted to do exactly what we design them to do? The two titans of the field who most famously debated the feasibility of such “molecular nanotechnology” were Eric Drexler and Richard Smalley, who engaged in a series of articles and open letters in Scientific American and Chemical & Engineering News from 2001 to 2003, with Drexler taking the pro-nanobot position and Smalley the anti-nanobot position.

Drexler had spawned the modern fascination with nanobots with his books *Engines of Creation* and *Nanosystems: Molecular Machinery, Manufacturing, and Computation*. His vision of self-replicating nanobots constructed from diamond-inspired, carbon-based structures prompted calls for funding agencies to support research that would lead to the ability to grow a “battleship in a beaker” in the not-too-distant future. Smalley, winner of the 1996 Nobel Prize in Chemistry, disagreed vehemently, citing what he termed the “fat fingers problem”: atomic-scale manipulators cannot position other atomic-scale components with atomic accuracy. He also cited the “sticky fingers problem”: the binding and un-binding of components as they are transferred into position by those “fat fingers” poses a severe energetic issue. The exchange is worth reading even twenty years later, but in evaluating the experience behind each interlocutor’s position it is worth noting that Smalley was an experimental scientist, while Drexler was a theoretician whose Ph.D. came from the MIT Media Lab after the Department of Electrical Engineering and Computer Science declined to approve his plan of study.

Smalley’s two primary objections, the “fat” and “sticky” fingers problems, remain unresolved. The inherently probabilistic nature of quantum mechanics poses a possibly insurmountable obstacle to the atomically precise assembly required by Drexler’s vision of molecular nanotechnology, as do the laws of thermodynamics with respect to the entropy of crystals formed above absolute zero. Most proposals for molecular nanotechnology systems are scaled-down versions of devices that operate, at their original larger scales, according to classical physics. Such behaviors simply do not, in general, scale down to quantum regimes.

To that end, it is instructive to consider further Smalley’s observation that biology is the only working nanotechnology. Biology, unlike Drexlerian molecular nanotechnology, is anything but atomically precise. It consists of components that self-assemble into only roughly defined structures. No two bacteria have exactly the same number of lipid molecules in their cell membranes. No two have exactly the same mutual spatial arrangement of their trans-membrane proteins. The translation of proteins from messenger RNA is good, but not 100% accurate. The replication of DNA during reproduction is good, but not 100% accurate. The system as a whole is fundamentally, structurally impossible to map onto modern computer architectures, which depend intrinsically on deterministic behavior of every component. If a circuit misfires in a modern computer, it is a bug. If a molecule fails to move exactly as expected in a bacterium, the deviation isn’t even noticed, because the architecture doesn’t depend at all on that kind of precision. It depends instead on the statistical behavior of many molecules within a loosely defined structure.
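The contrast between deterministic and statistical architectures can be made concrete with a toy simulation (the numbers and names here are purely illustrative, not drawn from any real biological system): a single unreliable "molecular" event fails often, but a response thresholded over a large ensemble of such events is, for all practical purposes, deterministic.

```python
import random

def aggregate_response(n_molecules: int, p_success: float,
                       threshold: float, rng: random.Random) -> bool:
    """Fire n_molecules independent, unreliable 'molecular' events and
    trigger a response only if the successful fraction clears a threshold."""
    successes = sum(rng.random() < p_success for _ in range(n_molecules))
    return successes / n_molecules >= threshold

rng = random.Random(0)

# A "circuit" built from one molecule fails roughly 20% of the time.
single = [aggregate_response(1, 0.8, 0.5, rng) for _ in range(1000)]
print(sum(single) / len(single))  # close to 0.8: unreliable on its own

# An ensemble of 10,000 molecules is effectively deterministic: the
# success fraction concentrates tightly around 0.8, far above 0.5.
ensemble = [aggregate_response(10_000, 0.8, 0.5, rng) for _ in range(100)]
print(all(ensemble))  # the thresholded response never misfires
```

No individual event can be trusted, yet the aggregate behavior is as dependable as a logic gate; this is the sense in which a loosely defined structure can be reliable without being precise.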

So, are nanosystems, perhaps even nanobots, possible? Yes, absolutely. Biology is evidence of that. But the popular vision of nanobots—with atomically precise positioning of all components operating in a deterministic fashion designed by the architect—is not being pursued by any serious researchers, and for very good reason. It may in fact be fundamentally impossible. Biologically-inspired nanosystems that depend on aggregate behavior of numerous, independent active components in a statistical sense, however, are almost certainly possible. But understanding such structures well enough to design them will require significant advances in non-deterministic architectures. There is progress in that area, but it is a long way from producing a working nanosystem.

And in the meantime, there is plenty of societally important low-hanging fruit, such as AuroLase Therapy, that doesn’t require full nanosystems. With the exception of software engineering, all engineering consists of constructing physical products to solve real-world problems. The solutions we can construct are all too frequently limited by the properties of the materials we can use. The greatest frontier in the development of new materials today is nanostructured materials. It seems obvious, then, that the vast majority of research effort in nanotechnology for the foreseeable future will go into nanostructured materials. That doesn’t mean that nothing will be developed in the area of nanosystems, but until there is a realistic roadmap to a specific, societally valued outcome that requires a nanosystem, such advances will be slow.
