Winning synthetic biology


I wrote a pretty long email recently that I think is worth publishing. I did not edit it much; this is very much a rough draft. In the end, it was part of a larger article.

How to Win in Synthetic Biology

In retrospect, it does reveal how I think about business: that it is a means to an end.

The prompt

[From Maxx Chatsko]

Keoni,

Hope all is well my dude. I'm writing an article series on business models in synthetic biology, but more so about how to succeed in the long run. I've been reaching out to various people building the field to gather insights and quotes that will be woven into the final article.

But... how boring would it be if I didn't include the DIYbio perspective? I've been asking everyone a purposely open-ended question:

How do you win in synthetic biology?

There are no wrong answers! If you'd like to participate, then send me a response here or at {}

Answer

Howdy Maxx,

Sorry this email is so long. Got busy, and didn't have time to make it short.

First, let me define "winning" in the field of synthetic biology. For me, winning would be increasing humanity's ability to create with biology. Synthetic biology, as originally defined by Drew back in 2003, basically made the claim that we could make biology more engineerable through standardization of components (inspired by mechanical engineering), substrate and component abstraction (physics/electrical engineering), and decoupling of fabrication and design (VLSI electronics). Biohacking, on the other hand, aimed at making bioengineering more effective by focusing on accessible education and reagents.

Synthetic biology, as originally defined, has largely failed: it quickly became obvious that standardization of components didn't really allow for component and substrate abstractions. It's gotten great marketing and hype outside of that, though. If the Human Genome Project could be summed up with "Bought the book; hard to read", synthetic biology could be summed up with "Easy to write; nothing to say" (you can put together words, but not sentences). On the other hand, biohacking quickly hits a cliff for users: after initial small experimental steps, like a GFP transformation, it becomes nearly impossible to do any more research without joining an organized biology lab. Biohacking only handles the easy ways of making biotech easier to do, while not tackling the harder problems in the field. While both ideologies have made a splash with their branding, neither has dramatically impacted how bioengineering is done.

Synthetic biology did not bring us synthetic life: the Venter Institute did. Parts did not lead us to better engineering: high-throughput screening did. Accessibility did not let anyone engineer biology: it only enabled more people to join existing organizations.

So, how to increase humanity's ability to create with biology? Here's how I'd win, first from the technical side and then from the social side:

full generic automation

As a disclaimer, I've founded a startup with two friends of mine called "Trilobio" which is specifically working on generic full automation of biotech protocols. So I'm a bit biased here :)

The basic procedures of doing biotech are just too damn expensive in training time and skilled manual labor time. There might be people like Sebastian Cocioba who can train themselves and get things working on their own, but that definitely doesn't scale. The fact that PhDs are doing what essentially equates to manual labor is insanity. This also massively hurts the replicability of the field - each protocol requires training and skill to learn and execute. The field's low biological reproducibility is a testament to the fact that this human endeavour isn't scaling.

While things on the perimeter have been improving (sequencing, synthesis), the core workflows of synthetic biology have not (liquid handling, centrifugation, incubation, etc). Opentrons might be helping part of the core workflows (liquid handling), but as I learned from building out FreeGenes - it's not good enough. You need an entire custom software system surrounding each piece of equipment, and then integration becomes a nightmare, to the point where most folks just have lab technicians do the "easy stuff" (like moving plates between equipment), but then you suddenly need a whole system for handling lab technicians.

On the hardware side, there are companies like High-Res who do build full automation systems, but they aren't generic in the way microprocessors were - they are custom-built circuits for specific applications. Sometimes that is definitely needed, but I think the world of semiconductors teaches a valuable lesson in scaling technology. We see a similar story on the software side of the space - companies like Artificial or Radix are trying, but honestly aren't good enough and aren't focusing on the right things, like no humans, generic protocols, or bottom-up adoption.

To be specific on what I mean by "generic" - I mean a protocol that can be built without regard to the particulars of the lab it is executed in. Practically, this is an objective protocol that, at a high level, describes actions to be taken on a sample, like "Make this mixture" instead of "Aspirate 100uL from A1 into B2". The software behind a generic protocol then figures out the exact aspirate and dispense steps needed to accomplish the goal. Once written, these high-level protocols can be executed in a hundred-robot lab or a single-robot lab with (hopefully) identical results. In addition to the higher-level commands, the protocol must also be "objective" - that means describing each step in terms of exact chemical or biological composition, instead of using fuzzy things like "M9 media". Higher level + objective = ability to be executed anywhere efficiently with no additional information.
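To make that concrete, here's a minimal sketch in Python of what a generic, objective protocol could look like (all names and numbers are illustrative, not Trilobio's actual software): the scientist declares the mixture to make, and a planner compiles that into concrete aspirate/dispense steps for whatever deck a particular lab happens to have.

    # Hypothetical sketch, not a real API: a "generic" protocol states the goal,
    # and a planner compiles it into concrete transfers for this lab's deck.
    from dataclasses import dataclass

    @dataclass
    class Component:
        name: str          # exact chemical identity, not fuzzy ("glucose", not "M9 media")
        final_conc: float  # desired final concentration (e.g. g/L)

    @dataclass
    class MakeMixture:
        components: list   # high-level, objective step: what to make, not how to pipette it
        volume_ul: float

    def compile_to_actions(step, deck):
        """Turn the objective step into aspirate/dispense actions for one lab's layout."""
        actions, dest = [], deck["next_empty_well"]
        for c in step.components:
            stock = deck["stocks"][c.name]                      # where this lab keeps it
            vol = step.volume_ul * c.final_conc / stock["conc"]
            actions += [("aspirate", vol, stock["well"]), ("dispense", vol, dest)]
        return actions

    # The same protocol object compiles for a one-robot lab or a hundred-robot
    # cluster; only `deck` (the lab-specific layout) changes.
    step = MakeMixture([Component("glucose", 2.0), Component("NH4Cl", 1.0)], 100.0)
    deck = {"next_empty_well": "B2",
            "stocks": {"glucose": {"well": "A1", "conc": 20.0},
                       "NH4Cl":   {"well": "A2", "conc": 10.0}}}
    print(compile_to_actions(step, deck))

The point is that the protocol itself never mentions wells or labware; only the compiler does, so the same protocol runs unchanged anywhere.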

What we need is affordable hardware that executes generic protocols (same exact code runs on 1 robot or 100) with zero humans (no tacit knowledge) in the loop. This can scale the basic operations of doing biology to where we need it to be - the only limitation of scientists should be their ability to think up new projects, not their ability to physically execute them. Anyone should be able to access and use the robots with no knowledge of how to physically do biology, and protocols should be able to be run locally or on a cluster of thousands of robots with identical results. Hopefully this will enable people to just run 10x to 100x the amount of experiments they run right now for the same $$$. [this is what we're building, so obviously I believe in it quite a bit]

biological devices

I have a suspicion that Conway's law applies to fields in general - the law that any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure. In computer science, there is an emphasis on separation and the binary - i.e., because computers are built on logic systems that are 1 or 0, this separation of concerns builds its way up the stack, so we end up with things like Unix and containers. In biotech, the inherent communication structure is absolutely intertwined, and my suspicion is that embracing this communication structure (stop trying to make everything separated) will lead to more effective engineering, in contrast to the likes of the old Biofab or other similar standardization projects.

Back in 2019 I wrote a little bit about how I think about this problem space in an essay called "Declarative Bioengineering" - the idea being that to engineer biotechnology better, we need biological devices to be defined independently from their implementation, with the ability to automatically approach a complete implementation from the final device through sub-device optimization. The idea of declarative bioengineering extends the above idea of Conway's law: things will be intertwined, so implementation should be done with the assumption of that intertwined-ness.
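As a toy illustration of that idea (purely hypothetical code, not a real tool): the device is declared by the behavior it should have, and candidate implementations are scored against that declaration as whole systems, since the sub-parts interact and can't be optimized in isolation.

    # Purely illustrative: declare the target behavior, then search implementations
    # against it, scoring each candidate as a whole system (the sub-parts interact,
    # so they are never scored in isolation).
    import itertools, random

    TARGET_EXPRESSION = 1000.0   # the declared spec, in arbitrary output units

    PROMOTERS = {"pTac": 400.0, "pT7": 900.0, "pLac": 250.0}   # toy part "strengths"
    RBS = {"strong": 1.6, "medium": 1.0, "weak": 0.5}

    def measured_output(promoter, rbs):
        """Toy model of an intertwined system: context-dependent interaction plus
        noise, so parts don't simply multiply."""
        interaction = 0.8 if (promoter == "pT7" and rbs == "strong") else 1.0
        return PROMOTERS[promoter] * RBS[rbs] * interaction * random.uniform(0.9, 1.1)

    def distance_from_spec(design):
        return abs(measured_output(*design) - TARGET_EXPRESSION)

    # "Approach the implementation from the final device": pick whichever
    # implementation best satisfies the declaration, rather than composing
    # supposedly-independent standard parts.
    best = min(itertools.product(PROMOTERS, RBS), key=distance_from_spec)
    print("chosen implementation:", best)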

I think we actually can get to the place where biological devices act reliably and to the place where they work together reliably, but not how we're currently going about it.

Overall, I think that so long as our solution is high-throughput stupidity rather than a thoughtful exploration of fast learning, we won't be able to continuously build more complicated biological circuits. This seems minor compared to my other points here, but I think it'll look much more dramatic once it starts working. It'll just be real fucking hard to get there.

prices should drop

In semiconductors, we had Moore's law: the belief that transistor counts should double approximately every two years. We have nothing like that in biotech. There is no belief that the cost of doing biotechnology should be getting cheaper, even though the inherent units of doing most biotech are dirt cheap (as in, literal dirt). NEB has not changed their prices significantly in how many years? And the fact that nobody thinks this is a problem is the problem. People don't actually believe that the cost of executing biotechnology procedures should drop and, like someone adrift in a river, are simply happy when the current brings us to a place with "yay! lower sequencing costs!" or the like.

If things are to improve in synthetic biology, people need to believe that things should improve. Capex costs, reagent cost, training time, human labor, troubleshooting effort, etc should all improve over time continuously, and there should be an expectation of such. It should not be acceptable that Opentrons has increased the price of their robots, nor that Twist hasn't decreased prices of synthesis over the last 3 years.

We simply can't do enough experiments right now to make synthetic biology cost effective for anyone to try out crazy ideas. Those experiments cost too much - and with the direction of the field right now, it'll be quite a few decades until they are cheap enough for the fun to start.
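To put rough numbers behind "quite a few decades" (my own back-of-the-envelope assumptions, not data from anywhere): here's how long a 100x cost drop takes at a few assumed annual decline rates.

    # Back-of-the-envelope: years until experiments get 100x cheaper, at a few
    # assumed annual cost-decline rates. Illustrative numbers only.
    import math

    TARGET_FACTOR = 100.0  # "cheap enough for the fun to start"

    for annual_decline in (0.03, 0.10, 0.30):   # 3%, 10%, 30% cheaper per year
        years = math.log(TARGET_FACTOR) / -math.log(1.0 - annual_decline)
        print(f"{annual_decline:.0%}/yr decline -> ~{years:.0f} years to 100x cheaper")

At today's roughly flat reagent prices the answer is effectively never; at a Moore's-law-ish 30% per year it's closer to a decade.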

organization building

Twist has no incentive to lower prices. Nor does Ginkgo (who needs to make a more effective biofoundry when you can just do more business-things?). Zymergen and High-Res both don't really care about bringing full automation to the masses, and Emerald Cloud's $40k/month pricing clearly isn't trying to make cloud labs more accessible. Transcriptic certainly tried and failed to make cloud labs accessible. So long as you sell the majority of your stuff to pharma, you're going to be drawn into the incentives of larger but fewer sales.

"Commoditization your complement" is a pretty good framework to build an organization that actually helps improve the field, rather than just skimming value off the top. TL;DR is that the demand of one product increases as the price of its complements decrease. Personally, I try to pick projects where my complements are the goal, and thusly have a valid market excuse to go decrease the price of those complements that I care for to increase the value of what I'm working on. In the example of Trilobio, we are building and selling full automation systems. If we decrease the price of, say, DNA assembly, on our robots, we can use that to sell more robotic systems.

In the long term, an organization or institution built to solve these problems actually has to make money, and significant amounts of it, to continue towards the overall goal. It is very easy to believe in something if it makes you money. Ginkgo's altruistic goal of making biology more accessible failed because they were too direct about it, IMO: suddenly they had to make money, which made them swallow the enterprise pill, and then you're an IBM-style consultancy. Ginkgo doesn't make money from synthetic biology being more accessible, so it's very hard to hold onto that altruistic goal. This should be thought of far ahead of time.

Whew! Took a while to write. Hope this helps,

Keoni