Approximately Engineers


The entire field of biological “engineering” is built on a pretty self-delusional (though useful) foundation - that we can engineer biological systems. For the most part, we can’t do reliable genetic engineering; instead, we do what I like to call “approximate engineering”. While engineering is built on the premise of reliable parts that can be composed to achieve a goal, approximate engineering is built on the premise that there exists some combination of parts that can achieve a goal. The former lets you reliably and predictably build components up into more complex devices, while the latter lets you build simple devices, but rapidly explodes in complexity.

A classic example of “approximate engineering” is combinatorial genetic circuit design, where you build a library of thousands of different genetic circuits and then select the one that works best. Right now, this is essentially how every industrial biological process is designed, since combinatorial genetic design really does increase the efficiency of your reaction. Although approximate engineering works, it quickly becomes complicated: the number of designs explodes combinatorially as you grow the number of parts you want functioning together. If we want the ability to do genetic engineering so we can build full organisms rapidly, we need to leave the domain of approximate engineering and become real engineers.
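
To make that explosion concrete, here is a minimal Python sketch - the part names and the scoring function are invented placeholders, not real registry parts or a real assay - of what building and screening a combinatorial circuit library amounts to:

```python
# Hypothetical sketch: enumerate every combination of parts, then pick the
# empirical winner. Part names and the "screen" are placeholders.
from itertools import product
import random

promoters    = [f"promoter_{i}" for i in range(10)]
rbs_variants = [f"rbs_{i}" for i in range(10)]
cds_variants = [f"cds_{i}" for i in range(5)]
terminators  = [f"terminator_{i}" for i in range(4)]

# One circuit = one choice from each category; the library is the full product.
library = list(product(promoters, rbs_variants, cds_variants, terminators))
print(len(library))  # 10 * 10 * 5 * 4 = 2000 designs to build and measure

def screen(design):
    """Stand-in for a wet-lab measurement (titer, fluorescence, growth...)."""
    return random.random()

best = max(library, key=screen)  # "engineering" by exhaustive empirical search
print(best)
```

Adding one more part category, or a few more variants per category, multiplies the library size - the complexity explosion is right there in a single line of arithmetic.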

How do we do that?

Let’s first make a subtle mutation to Conway’s Law - the law that “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure”. In computer engineering, the communication structure within a computer is one where each process runs separately, built up from basic logic gates. Inherent separation as a core communication structure gave rise to a system (UNIX) that copied the idea of inherent separation. It follows that from UNIX, we start seeing the development of containers for unix-like operating systems, and orchestration systems for those containers. Each layer of the stack is heavily influenced by the core communication structure of computers: processes are separate.
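
As a small illustration of that separation (my own toy example, not anything from the original argument): two OS processes do not share memory, so mutating state in one has no effect on the other.

```python
# Toy demonstration that processes are separate: each child gets its own
# copy of `counter`, and the parent's copy is never touched.
from multiprocessing import Process, Queue

def worker(name, counter, results):
    counter += 1                    # changes only this process's copy
    results.put((name, counter))

if __name__ == "__main__":
    counter = 0
    results = Queue()
    children = [Process(target=worker, args=(f"p{i}", counter, results))
                for i in range(3)]
    for p in children:
        p.start()
    for p in children:
        p.join()
    while not results.empty():
        print(results.get())        # each child reports counter == 1
    print("parent:", counter)       # still 0 - no cross-process interference
```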

[Image: Conway's Law]

In biological engineering, we have the opposite of computer engineering - the communication structure of biology is not separated, it is interconnected. Systems designed within the constraints of biological systems will not gain separative power; rather, they will replicate the communication structure that we find in biology, which is one of interconnectedness. The system that will be developed to bring us from approximate engineering to engineering will be interconnected.

Instead of “abstraction” we have at most bounded contexts (genetic context, host context, environmental context, etc). Each level can be analyzed independently, but they are all connected with each other - yeast isn’t going to like it when it is placed in a 65°C environment. However, you might be able to predict how yeast will act when it is placed in a 65°C environment (not happy). There is something important in the word “predict”, though - it feels much more like approximate engineering than engineering.
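
One way to picture those bounded contexts - this is my own illustrative sketch, with invented class names, fields, and a toy stand-in for a real predictor:

```python
# Illustrative only: each context can be described on its own, but a
# prediction has to consume all of them together.
from dataclasses import dataclass

@dataclass
class GeneticContext:
    promoter_strength: float      # relative units

@dataclass
class HostContext:
    organism: str
    max_growth_temp_c: float      # above this, the host stops cooperating

@dataclass
class EnvironmentalContext:
    temperature_c: float

def predict_expression(genetic, host, env):
    """Toy model: expression collapses when the environment exceeds what the
    host tolerates, no matter how good the genetic design is."""
    if env.temperature_c > host.max_growth_temp_c:
        return 0.0                # yeast at 65°C: not happy
    return genetic.promoter_strength

yeast = HostContext("S. cerevisiae", max_growth_temp_c=42.0)
print(predict_expression(GeneticContext(1.0), yeast, EnvironmentalContext(30.0)))  # 1.0
print(predict_expression(GeneticContext(1.0), yeast, EnvironmentalContext(65.0)))  # 0.0
```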

And that is because our concepts of engineering are based on environments that can be robustly separated - this screw doesn’t affect that screw, that program doesn’t affect this program. But in biology, we don’t have that inherent separation. In other engineering environments we want reliability, but what does that actually mean? I believe that “reliability” is a false prophet, especially in synthetic biology. We do not want reliability, we want predictability, and predictability manifests itself differently depending on the core communication structure of a discipline.

In computer science, we gain predictability by separating everything out, because that is how a computer works. In the biological sciences, I believe we will gain predictability by connecting everything together, because that is how a cell works. From the genetic design, to the host organism, to the lab, to the network - an interconnected system will be able to build biology better than any independent system.

Practically, this means we should not focus on the DNA parts themselves, but on the connections between parts and the system they reside in. Computer simulation and prediction should matter more than the DNA parts, and effort should go into making those prediction systems better and more connected, rather than into improving individual parts.
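
A hedged sketch of what “modeling the connections rather than the parts” could look like in code - the part names, edges, and weights below are invented for illustration, not a real dataset or model:

```python
# Illustrative only: represent a design as a graph of interactions, where the
# edges (part-to-part and context-to-part effects) carry the information a
# prediction system would actually learn from.
interactions = {
    ("promoter_A", "reporter_cds"):   ("drives",     +0.9),
    ("reporter_cds", "host_burden"):  ("loads",      -0.3),
    ("host_burden", "promoter_A"):    ("attenuates", -0.2),
    ("temperature", "host_burden"):   ("modulates",  +0.4),
}

def predicted_activity(node, interactions, baseline=1.0):
    """Naive toy predictor: sum the weighted influences flowing into a node."""
    incoming = (weight for (src, dst), (_, weight) in interactions.items()
                if dst == node)
    return baseline + sum(incoming)

# The same "part" behaves differently once its connections are accounted for.
print(predicted_activity("promoter_A", interactions))   # 0.8, not 1.0
```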


Keoni Gandall

(PS: I also thought about this in terms of the core of computer science being math and the core of biology being evolution, and therefore combinatorial approaches working better because they replicate how biology works. While true, that doesn’t actually help very much in getting to the end goal of man-made, lineage-agnostic, truly-synthetic life.) (PSPS: Yes, I know that traditional engineering isn’t completely predictable either, and that empirical tests are needed. In biology, a LOT are necessary, which is why I try to draw a distinction between the two.)