I Told You So, Prometheus
Doomers' Guide to Tech Attribution
One summer during my undergrad I had the opportunity to work on enzyme evolution with Dr. Greg Petsko. In a talk about how to give scientific talks, Greg pulled up a reference from his background in Greek philosophy, noting that Aristotle’s favorite poet was Homer, not Hesiod, because Hesiod started every story with an exhausting “in the beginning” whereas Homer cut right to the chase and threw you into the odyssey you came to see. The Greeks, according to Greg, had already covered everything, and there’s no better place to contextualize Western thought than the pages of Greek philosophy.
It’s no surprise, then, that our current moment has me reading about Prometheus, the titan who stole fire from the Olympian gods and gave it to mankind, and Pandora’s Box, the mysterious container conceived in a poem by Hesiod that, when opened, curses mankind. While Hesiod may be the lesser poet in the eyes of Aristotle, Pandora’s Box is not a lesser allegory for the perils of scientific discovery. In Prometheus and Pandora’s Box, the Greeks who ventured to explain the workings of the universe and founded our Western philosophical traditions also gave us warnings that knowledge sometimes carries risks.
Doom. The ill-defined ill fate that we all suffer when it all goes sour. From biotech to AI, the cold breath of doom feels as if it’s on the back of our necks. Every incremental advance in biotechnology enabling the enhancement of pathogens stops the breath of people who fear biodoom. Every update of an LLM is met with stress-testing to evaluate whether or not the LLM enables mischief, serving as an anarchist’s cookbook for cyber, bio, and beyond. At the interface of AI and bio, we have AI biodesign tools that blend the pestilential doom of synthetic biology with the computational doom of AI probing humanity’s weaknesses on an NVIDIA GPU. Doom, so many kinds of doom, just doom as far as the eye can see.
On one hand, with plenty of types of doom to choose from, I am legitimately afraid.
As a biologist, I know the risks of biology. I know how the biologists favoring wise risk management lost the debate prior to COVID-19, and how many academics now seem unwilling to acknowledge the probable lab origin of SARS-CoV-2. Ironically, by failing to attribute COVID-19 to a lab origin, these scientists seeking to protect the reputation of biologists worsen the threat of biological agents by suggesting plausible deniability exists even for the most obvious cases. We have a grant proposing to make a virus just like SARS-CoV-2, written and proposed two years prior to emergence, funded the year of emergence, with the work conducted in the same city from which the biological agent emerged. If we can’t attribute a lab origin for SARS-CoV-2 (spoiler alert: we can, even if those entrenched in the wet-market theory deny it), even without knowing who held the pipette, we undermine our ability to deter biological threats and thereby exacerbate the threat of synthetic biology.
As an information scientist, mathematician, and statistician following AI, I also see the perils of AI. AI is an assembly line for digital things. Whether digital content like ads, digital communications like emails or customer service, digital services like coding or design (including biodesign), or digital actions like researching options and executing a transaction based on available information, AI enables us to use compute instead of human time to perform digital tasks. The assembly line and the broader industrial revolution allowed us to substitute energy and machinery for human labor, enabling the mass production of goods that incentivized the mass distribution of goods and reshaped industrial capitalism as we knew it. A lot of people lost their jobs, but eventually they found new work. Similarly, AI enables the mass production of digital goods and services, threatening any job making digital products or providing digital services that can be well approximated by available data. Any repetitive digital task that can be autocompleted to the customer’s satisfaction now runs the risk of being performed by compute at a much lower hourly wage than humans require. Doom-ish… serious economic consequences for sure, but I still see this as the margarine of doom: I Can’t Believe It’s Not Doom.
I put less stock in the theory of doom from AGI learning things humanity couldn’t otherwise conceive. “AI” is a hype concept for a family of mathematical tools implemented with cool new computational methods: universal function approximators with linear or stepwise-like interpolations owing to the sigmoidal or piecewise-linear activation functions of the neurons in multi-layer perceptrons (MLPs), trained on GPUs thanks to slick software packages enabling massive parallelization of back-propagation and the other operations computed during training and inference. These MLPs are good at approximating a dataset, any dataset sampling any function in the universe, from the language in our social media rants to the videos of our YouTube shenanigans, but any ‘intelligence’ of interpolation between points or extrapolation beyond the training data is either a linear extrapolation or some un-fitted stepwise guess, and rarely are scientific theories overturned by linear extrapolations in the language we use for science. Even different architectures like Kolmogorov-Arnold networks (KANs) and their wiggly functions force us to ask: will this wiggly extrapolation be better? Just look at how a Fourier series - a function approximation built from wiggly sines and cosines - extrapolates outside the window to which it’s fit for my intuition on how KANs will extrapolate, or at how random forests (step-function approximations built on decision trees) extrapolate for my intuition on MLPs. People are trying to train models to reason - power to them, that’s so cool, I’m rooting for them to do cool things! - but chain-of-thought may, at best, approximate the dataset of thoughts in a chain (nudged by reinforcement learning toward more acceptable thoughts).
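For what it’s worth, that Fourier intuition is easy to poke at yourself. Below is a minimal sketch of mine (plain numpy, a made-up linear ramp as the “truth”, and an arbitrary ten-harmonic basis are all my own illustrative assumptions): fit a truncated Fourier series inside a window, then ask it to extrapolate beyond the window.

```python
import numpy as np

def fourier_design(x, n_terms, period):
    """Design matrix: a constant column plus sine/cosine pairs up to n_terms."""
    cols = [np.ones_like(x)]
    for k in range(1, n_terms + 1):
        cols.append(np.sin(2 * np.pi * k * x / period))
        cols.append(np.cos(2 * np.pi * k * x / period))
    return np.column_stack(cols)

true_fn = lambda x: 0.5 * x              # a plain, non-periodic "ground truth"
x_train = np.linspace(0.0, 10.0, 400)    # the window the series is fit to
x_beyond = np.linspace(10.0, 20.0, 400)  # the region it must extrapolate into
period = 10.0                            # period matched to the training window

# Least-squares fit of the Fourier coefficients inside the training window.
A_train = fourier_design(x_train, 10, period)
coeffs, *_ = np.linalg.lstsq(A_train, true_fn(x_train), rcond=None)

fit_inside = A_train @ coeffs
fit_beyond = fourier_design(x_beyond, 10, period) @ coeffs

print("mean |error| inside window :", np.mean(np.abs(fit_inside - true_fn(x_train))))
print("mean |error| outside window:", np.mean(np.abs(fit_beyond - true_fn(x_beyond))))
```

Inside the window the fit tracks the ramp (apart from some ringing at the edges); outside, the periodic basis simply repeats the window instead of continuing the line, and the error balloons. Swap in a step-function fit for the random-forest flavor, or a piecewise-linear fit for the MLP flavor, and you get the same failure in different clothing.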
While meaningfully different, and while more is yet to be developed, the underlying math will extrapolate between unknown thoughts using crude approaches - is a line or a wiggly function the most reliable way to extrapolate into the world of unknown thoughts? Would AI trained on language up to the moment immediately prior to Darwin’s Origin of Species be able to impute the theory of evolution? Would AI trained on a world of language about phlogiston be able to impute or extrapolate to atomic theory? The language changed, the chain of thoughts changed dramatically; paradigm shifts in science are so much more than linguistic autocompletes. They are conceptual webs that tie together distant pieces of evidence others may not have paid attention to or cared much about, and that explain how those pieces fit together with entirely new yet intuitive linguistic structures (“evolution by natural selection”), mindful of the series of challenges they need to overcome to be adopted (e.g., Darwin had to contend with the age of the Earth, the shifting of continents, the fossil record, the beaks of finches, inheritance, and more).
Perhaps more hilarious or depressing, depending on whether you like to laugh or cry, suppose AGI comes up with something humanity has never before thought. Suppose it’s right. Suppose AGI comes up with a theory of everything or some profound truth nobody has yet been smart enough to learn. Talk to any human who has discovered something, from atomic theory to the origin of SARS-CoV-2, and ask them how adoption of their ideas went. I will bet a lot of money you’ll find stubborn professors trying to delegitimize the AI as “not an expert”, scoffing and laughing: “who are you going to believe? Some random computer, or the world expert in ____?” Will investors buy the argument “AI said it was a great idea, so far-out that nobody can understand it. We’re doing it at the screaming deal of $50M for 1% in our pre-seed round!”?
As I chew on the nuanced cud of doom and read the recent Atlantic article on “The AI Doomers Are Getting Doomier”, I can’t help but draw an analogy between the social, scientific, political, and historical processes at play in AI and bio doom.
In 2011, biologists in the Netherlands serially passaged an avian influenza virus in ferrets, breeding a mammalian-transmissible bird flu that could kill millions, if not billions, if the virus got out. Best of all for the Homers and Hesiods among us writing the tragedy of how it all went down, the protocol they used was unspeakably simple, as if Pandora’s Box and the curse upon humanity were locked with the password “password”.
It’s one thing if the risks of disruptive technologies are so unreachably esoteric that only a handful of people in the world could possibly follow the protocols to replicate disaster, if the line between happy living and certain global death could only be crossed by individuals after a lifelong journey to the treacherous and unreachable edges of human knowledge. It’s another thing entirely if that line is a hop and a skip away from most people.
In the former scenario, the best way to maximize the expected time until disaster strikes may be to regulate the activities of researchers, to focus on those esoteric few venturing quite far along the path and make sure they never cross that distant line. In the latter scenario, the best way to save society may be to counter the proliferation of technological know-how… even reminding people of Fouchier’s ferret work scares me, because it points them to a body of work I wish never existed, work whose mere existence in the public domain introduces threats to global health security.
One important flavor of AI doom is that AI may make it easier for any dumbass to make that long trek to distant, dangerous biology. In a sense, AI may be a modern anarchist cookbook for bio and many other technological threats (for those unfamiliar, the Anarchist Cookbook is a 1971 book whose recipes circulated as text files in the early days of the internet, where mischievous and curious kids like myself could learn how to make everything from napalm to fertilizer bombs - I can confirm the napalm recipe was legit and quite fun).
The AI doomers are getting doomier, and so are the bio doomers. Technological changes are rapidly advancing and we have failed to erect societal controls to prevent worst-case-scenario outcomes from these technologies. If you know just a little bit about these technologies and have a modicum of mischievousness in your brain to think adversarial thoughts, you can appreciate the many ways these technologies, through accidents, adversarial uses, and even just widespread adoption, can cause harm.
If bio has taught us anything, it’s that incumbent forces with institutional momentum will proceed with business as usual. For bio, those forces were investments in vaccine development predicated on predicting the next pandemic pathogen, as well as the major players in health science funding who normalized such work. For AI, the incumbent force comes in the form of billions of dollars seeking ROI on AGI, pharmaceutical design, unmanned vehicles, and more. Once a horrible accident happens, some will slowly back away… others will say it can’t plausibly have been an accident (as with the COVID-19 zoonotic origin crowd), or they will say it was only an accident because other people did it wrong (e.g. China’s sub-par biosafety was the problem) and not because the technology has inherent risks.
All this doom has me feeling a little nostalgic for the days when life was so simple. Remember when nukes were the only technology threatening our world? AI and Bio are so unlike nuclear physics. On the bright side, let’s actually talk about nuclear fission, as few fields have embraced the awesome responsibility of managing their technology quite like the nuclear physics community. From Chernobyl and Three Mile Island to Fukushima and beyond, there is no cave of plausible deniability in which nuclear physicists can hide when radionuclides fill the air - in this way, nuclear accidents are attributable. Every severe nuclear accident, being so easily attributable, has been followed by intensive, often global, investigations of what went wrong and how that risk can be mitigated going forward.
While physicists might like to pretend they’re just better (as they usually do), what nuclear physics reveals is not the superiority of physicists (sorry, physicists) but the critical role of attribution in holding tech doom accountable. AI will, I hope, be much more attributable for any harm caused, whether systemic harms like the displacement of workers or specific harms from nefarious uses of compute. AI requires compute, compute requires servers, servers have logs, and logs enable attribution. Bio is, I believe, far more attributable than many people realize, and I’ve spoken up to this effect on COVID, but the methods are more sophisticated than reading server logs. We may have to wait for the zoonotic-origin proponents to retire, or for new funding in bioattribution to emerge, before we can advance this discussion… unless somebody somewhere suddenly decides to invest in this area.
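To make the “servers have logs, and logs enable attribution” point a bit more concrete, here is a hypothetical sketch of an append-only, hash-chained inference log. Every field name and the whole InferenceLog class are assumptions of mine for illustration, not any real provider’s schema or API; the idea is simply that each record commits to the previous one, so an investigator can later verify that the sequence was not quietly rewritten and can tie a specific request to a specific account and time.

```python
import hashlib
import json
import time

def digest(record: dict) -> str:
    """Stable SHA-256 digest of a log record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class InferenceLog:
    """Toy append-only log where each record hashes its predecessor."""

    def __init__(self):
        self.records = []

    def append(self, account_id: str, model: str, prompt_hash: str) -> dict:
        record = {
            "timestamp": time.time(),
            "account_id": account_id,
            "model": model,
            "prompt_hash": prompt_hash,  # hash only; no raw content retained
            "prev": digest(self.records[-1]) if self.records else "genesis",
        }
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """True only if no record's link to its predecessor has been altered."""
        return all(
            curr["prev"] == digest(prev)
            for prev, curr in zip(self.records, self.records[1:])
        )

log = InferenceLog()
log.append("acct-42", "some-model-v1", hashlib.sha256(b"first request").hexdigest())
log.append("acct-42", "some-model-v1", hashlib.sha256(b"second request").hexdigest())
print("chain intact:", log.verify())  # True unless a past record was tampered with
```

The design choice doing the work is the hash chain: tamper with any earlier record and every later link breaks, which is exactly the kind of evidentiary backbone attribution needs.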
On one hand, as I said, I’m legitimately afraid. On the other hand, I harbor an enduring ember of hope for humanity, if not for our ability to prevent disaster then for our ability to learn from our mistakes.
Some people can learn from reading. Some of us can imagine things going horribly wrong without even reading about it. The wisdom of the crowds, meanwhile, may not be that wise. Collectively, it seems we have to figuratively pee on the electric fence in order to learn what some can anticipate. I’d hoped the 1977 pandemic, or perhaps the Sverdlovsk anthrax outbreak, or perhaps COVID-19 could’ve been the catalyzing event, the electric fence we peed on to learn from our mistakes, but sadly we’re not there yet. The voltage will rise, we’ll improve our attribution capabilities, and we’ll pee again.
Don’t mistake my optimism and hope for a concrete assurance that we’ll be okay. Humanity is better at learning from mistakes than preventing them. It’s not clear how big a mistake we need to make before we learn how best to incentivize technological innovation while preventing Pandora’s Box from being opened. My optimism stems from our instinct for self-preservation and our ability to mold our culture once lessons are burned into our collective psyche. I’m encouraged to see, for example, the President’s executive order on dangerous gain-of-function research - this executive order is the next step in bipartisan efforts to build biosafety policies that protect our health security from the myriad threats we face. While, collectively, we may not be 100% aligned on the probable lab origin of SARS-CoV-2, and while not everyone may agree with the specifics of this or that policy, at least we’re having these discussions. At least policy is taking the threat seriously, and we’re moving in the right direction.
Long before Homer and Hesiod, humans evolved the wonderful power to use language, symbols, and stories. The enduring power of a poet is to help us learn from the experiences - real and imagined - of others. We already know this general story, from Prometheus to Pandora’s Box, and it’s as old as western philosophy. Since the dawn of science we’ve made risky technologies and slowly used our language, symbols, and stories to tell the tales that build our safety culture. AI and Bio are specific, modern instantiations of the old stories, and telling the story of the origin of SARS-CoV-2 in a way everybody can understand may have been the best way to help us learn our lessons. Dr. Petsko was right that everything we talk about today can be contextualized in Greek philosophy, but the specifics and the stories matter as we devise policy and adapt to changes introduced by modern technology.
Now, more than ever, we need scientific poets ready to tell the stories of tech gone wrong, balanced by due recognition of the tech that has gone right. It’s clear that “no more tech” is not an answer people seem willing to adopt, but we’re all very open to discussing better ways to contain the fires we innovate.
To all the doomers out there, I encourage you to focus on attribution. Set yourself up to become the poet who said “I told you so”. We fear harm caused by technologies, so use your expertise in a tech area to focus on ways to anticipate harms and trace any harms caused back to their source with evidence nobody can deny. What evidence would AI doomers need to prove they’re right? Lobby for laws that help us collect that evidence, and use whatever business-intelligence or DRASTIC-like forensic sleuthing methods you can to collect it yourself, legally. Accept that some harms you anticipate may still be caused. Find solace in knowing you are helping humanity learn from its inevitable mistakes. For harms caused by actions against which we currently have no laws, we can try to tell the story of what happened, how cause led to the effect that left someone injured, and trust the process by which humanity learns hard lessons and adapts to new technologies.
For companies investing in disruptive technologies, consider cooperating with law enforcement officials should there be an interest in attributing malicious uses of your tech. Protecting your intellectual property is already in your interest, so invest in lawyers and scientists who can build the case that somebody else used your tech, and share these capabilities with law enforcement officials to ensure they, too, can monitor, attribute, and deter malicious uses of that tech.
Hopefully, we can hone our skills at tech attribution, using less harmful incidents to learn lessons and protect our world from “The Big One”. Having thought about this problem of tech attribution for over 15 years now (long story; I won’t give you the Hesiod version), I’m not convinced we can stop Prometheus from stealing fire or stop every ambitious fool from opening Pandora’s Box, but we can prepare an epic “I told you so” by attributing harms caused to mistakes made.

