What is differential technological development?
Many technologies carry risks that may seriously harm society. These include technologies that already exist, like coal power generation, as well as new, emerging technologies and research, such as work in biotechnology.
As such, it is important to ensure emerging research and technologies are being developed and directed with risk reduction in mind. Such deliberate development is called Differential Technological Development (DTD), and can be implemented before or after a risky technology has proliferated. DTD can be thought of as a new way to look at practices commonly exercised in research and development.
Engaging with DTD may entail developing:
Safety Technologies: modifications to existing technologies that make them safer.
Defensive Technologies: technologies that counteract risk-increasing technologies.
Substitute Technologies: replacements for risky technologies that pose less risk.
Predicting the impacts of various technologies can be difficult and requires deep domain expertise. Some exercises and strategies that may help generate DTD-aligned technologies include:
Scientific roadmapping: Working retroactively from a specific goal to map efforts in a field, and using that map to understand the highest-leverage points to pursue.
Ordering: Prioritizing specific interventions over others given their risk-reducing impact.
Gradualism: Iterating or developing a field slowly to uncover unknown risks.
Distancing: Delaying milestones within a field that may be risky through something like moratoriums.
This piece touches on all of the above and concludes with a toolkit of actionable steps toward risk reduction, targeted at everyone from institutions like the NIH to individuals interested in the space.
Please note that I use many examples in this piece, and do not necessarily endorse them as the best things to do in any case.
Differential Technological Development was originally defined as an innovation framework that calls for preferentially advancing risk-reducing technologies over alternatives (source). The principles of DTD can be applied in two ways: before or after the proliferation of a risky technology.
The values and ideas at the core of DTD are not new; rather, DTD can be thought of as a useful framework that groups together existing and new actions taken toward risk reduction. One example where risk-reducing actions were implemented after the proliferation of a risky technology comes from the energy portfolio of the province of Ontario. In 2019, Ontario converted what was once the largest coal power station in the world into a solar power station. Simultaneously, the Ontario government subsidized nuclear energy and renewables through a concerted effort with private companies. As of 2019, 92% of Ontario’s energy originated from zero-carbon sources, while the province was still able to export a net 13.2 TWh that same year.
Proactive examples include post-quantum cryptography (PQC). Since quantum computers have been shown, in theory, to be capable of breaking current encryption methods, efforts like PQC aim to create encryption methods that are more resistant to attacks by quantum technologies (source). DTD of this form comes with its own challenges, like anticipation, which I will discuss in a later section.
Traditional approaches to innovation taken by, for example, a venture capital firm tend to focus mainly on the commercial viability of an idea, with little consideration of its risk-reducing potential. If a project happens to be risk-reducing, that is seen as good, but it is usually neither a prerequisite nor a goal.
An approach to innovation grounded in DTD places risk reduction as an explicit goal. Since not all such efforts may yield profits, it sometimes becomes necessary to engage actors who do not require commercial viability to help realise a project (e.g. philanthropists). However, creating commercial viability itself could also be used to incentivize risk-reducing tech (example).
This is not to say that existing actors are falling short. Many of the institutions I will be mentioning have made great strides in supporting risk-reducing technologies — several of these examples would not exist otherwise! DTD builds upon much of that previous work and aims to combine it into a new tool that such stakeholders can utilize.
DTD is also relevant for potentially dual-use technologies, which are technologies that may be used in both civilian and military applications (sources 1 and 2). An example of dual-use is the development of improved methods of drug delivery to the lungs — this advancement could provide better care for people with asthma, but may also allow for more efficient delivery of anthrax (source). However, it is important to bear in mind that biasing such technologies in one direction can be difficult in practice.
What parties may partake in DTD?
My guess is there are two distinct stakeholder groups, which I will refer to as Regulators and Operators.
Regulators: institutions or groups (e.g., government agencies like the NIH, international groups like the UN, and philanthropies)
Operators: people who build technology or contribute to research and development on a technical basis (e.g., engineers and researchers)
Coordination between these groups is key to best implementing DTD (e.g. not funding risky research only works if everyone decides not to do it) but is not necessary in all cases.
DTD also requires significant domain expertise. Understanding what may be risky in drug development vs. transportation vs. AI vs. energy requires vastly different knowledge, let alone implementing the solutions. For this reason, the best Regulators are likely to be former Operators themselves, or to be deeply informed by them. A good example is Kevin Esvelt, a professor at MIT who works on risk-reducing solutions specific to his domain, but also produces papers like these that could be relevant to regulators.
What does risk-reducing tech look like?
Operators and Regulators have historically taken many steps to decrease risks and advance risk-reducing technologies. However, balancing trade-offs and identifying concrete deliverables can be hard to do. Facing these tough questions through a DTD lens could be helpful and serve as a starting point (Sandbrink et al.).
Safety Technologies: these mitigate negative societal impacts by modifying existing risk-increasing technologies. Some examples: permissive action links (PALs) for nuclear weapons, catalytic converters in car exhausts.
Defensive Technologies: these aim to directly decrease risk propagated by risk-increasing technologies (e.g. Far-UVC light sterilisation).
Substitute Technologies: alternatives to risk-increasing technologies that produce less risk with a similar benefit (e.g. nuclear power plants as an alternative to coal).
Risk reduction may also necessitate governance, protocols, conventions, and even workshops. More on what non-technical solutions may look like can be found in the DTD Toolkit.
On Anticipation and Windows of Vulnerability
One way I like to think of DTD involves a focus on decreasing or completely eliminating windows of vulnerability. In other words, DTD is about “getting ahead of the curve” and setting the path straight. In some cases it is easy to know what to do (e.g. stockpiling mRNA vaccines that can be adapted to new viruses quickly), but in others it is difficult (e.g. space governance).
Such a challenge can come in two forms. The first involves risks that are well understood. Climate change is a good example: here, DTD calls for the prioritization of proportional interventions, and implementation usually involves providing incentives (e.g. carbon taxes, public procurement). This relates back to DTD being implemented after the proliferation of a technology.
The second concerns unknown risks, or anticipating otherwise unseen risks. It is inherently difficult to come up with an example here, but foresight, anticipation techniques, and forecasting can help. This sort of DTD is implemented before a technology proliferates. However, incorrect predictions could stall progress and stunt other technologies: talent or funding may be diverted from fields that turn out to have no negative effects, or that even have positive long-term impacts. Fields do not need to be explicitly defunded for this to happen; increasing funding for one field can implicitly divert funding from another domain that has no negative impacts of its own, or that has more positive impacts than the field receiving the increase.
I tend to believe employing differential treatment of technologies is worth doing despite these potential shortcomings of prediction, especially in cases of well-defined risk (e.g. climate change). This comes down to how much larger an effect these catastrophes could have compared to the cost of attempts to mitigate them. This, though, is not a reason to avoid exploring ways near-term prediction could be done better, especially within the context of scientific breakthroughs. Below are some thoughts on exercises that could make this sort of “field strategizing” better.
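The expected-value reasoning behind this judgment can be made concrete with a toy calculation. This is a minimal sketch; every number below is an illustrative assumption, not an estimate.

```python
# Toy expected-value comparison: even a small probability of catastrophe can
# justify mitigation costs. All numbers below are illustrative assumptions.

p_catastrophe = 0.01          # assumed chance of the catastrophe occurring
catastrophe_cost = 1_000_000  # harm if it occurs, in arbitrary units
mitigation_cost = 1_000       # cost of the risk-reducing effort
risk_reduction = 0.5          # assumed fraction of the risk the effort removes

# Expected harm avoided by the mitigation effort.
expected_harm_avoided = p_catastrophe * risk_reduction * catastrophe_cost

# The effort is worthwhile (in expectation) if the harm avoided exceeds its cost.
worthwhile = expected_harm_avoided > mitigation_cost
print(expected_harm_avoided, worthwhile)  # 5000.0 True
```

The point is not the specific numbers but the asymmetry: the harm term scales with the size of the catastrophe, so mitigation can dominate even when the probability is low.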
Retroactive Scientific Roadmapping:
One way to develop concrete ideas of what interventions might have promise is to think retroactively from a specific scenario. For example: “An engineered pandemic killed around 1 billion people sometime in the next 10 years. What went wrong?” Then work backwards from there using a predetermined taxonomy that is mutually exclusive and collectively exhaustive (MECE).
In doing this retroactive exercise, the goal should be to:
Identify sources of risk from a field
Identify questions of who is working on what, and what is not being worked on in a field (e.g. is anyone working on methods or tools that could make engineering pandemics easier or harder?)
Create a map of constraints or bottlenecks within a field and the implications of unblocking them (e.g. why would or wouldn’t someone work on method x, which is likely to reduce the likelihood of accidental pandemics?)
Work backwards to identify the research directions needed to address each bottleneck
Combine insights and tools from multiple domains for potential novel approaches (e.g. what if we try this method, used in field x for reason y, instead?)
Determine whether the timing is right for such a goal, or if it is premature (e.g. is it clear this does reduce risks?)
Identify the team needed for such a project and what existing work may inform it or is complementary (e.g. reports like this one could be incredibly helpful)
Determine the best funding mechanisms and/or institutions, existing or new, that could support this effort (e.g. is this defensive technology better developed within DARPA or as a Focused Research Organisation?)
Sources for these points can be found here and here.
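As a thought experiment, the backward-chaining at the heart of this exercise can be sketched in code: model the milestones that enable a risk scenario as a dependency graph, then walk backwards from the scenario to its leaves, which are the bottlenecks interventions could target. All milestone names below are hypothetical illustrations, not real assessments.

```python
# Sketch of retroactive roadmapping as a backward walk over a dependency graph.
# Hypothetical map: each milestone -> the prerequisite milestones enabling it.
prerequisites = {
    "engineered pandemic": ["synthesis access", "pathogen design"],
    "pathogen design": ["open gain-of-function methods"],
    "synthesis access": ["unscreened DNA synthesis"],
    "open gain-of-function methods": [],
    "unscreened DNA synthesis": [],
}

def trace_back(scenario, prereqs):
    """Return (milestones on paths to `scenario`, leaves first; bottlenecks).

    Leaves -- milestones with no prerequisites of their own -- are the
    bottlenecks: hardening or blocking them cuts every path to the scenario.
    """
    ordered, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in prereqs.get(node, []):
            visit(dep)
        ordered.append(node)

    visit(scenario)
    bottlenecks = [n for n in ordered if not prereqs.get(n)]
    return ordered, bottlenecks

path, bottlenecks = trace_back("engineered pandemic", prerequisites)
```

A real roadmap would of course carry far more structure (timelines, actors, uncertainty), but even this toy version makes the bottleneck-finding step of the exercise explicit.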
Roadmapping may enable the identification of several risk-reducing projects and can be crucial in informing which projects to support, scale back, or start if they are not already being worked on. This is an example of scientific roadmapping, though without an explicit risk-reducing purpose; note the funding opportunities section. I think this exercise is useful, informative, and has great ROI.
Types of Differential Tech Development
As noted, it is sometimes difficult to explicitly generate action items that will lead to advancing risk-reducing technology. Scientific roadmapping within a DTD context is one way to generate them, but is not always possible. Below are some strategies that could be seen as ways to generate DTD-aligned action items. These strategies were originally outlined here; I will attempt to summarise and build on them.
Roughly speaking, Ordering takes a select set of initiatives or developments we already have in mind and ranks them by factors like urgency or importance. It is the most basic form of DTD and can act as a solid starting point. An example of Ordering is a philanthropic organization making it a priority to fund non-dual-use defensive technologies first, especially those with little associated risk, like Far-UVC light sterilization. In doing so, the organization signals that it prefers this development over other specific developments.
By placing some of the most beneficial developments first on a “prioritization list”, and some of the least beneficial ones last, Ordering can help us come up with better questions or decisions without relying on explicit forecasting predictions.
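A prioritization list of this sort can be sketched as a simple scoring-and-sorting exercise. The interventions and 1–5 scores below are purely illustrative assumptions; real Ordering would rest on domain expertise rather than two ad-hoc numbers.

```python
# Sketch of Ordering: rank candidate interventions by a rough composite of
# estimated risk-reducing benefit and downside risk. All entries are invented.
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    risk_reduction: int  # 1 (low) .. 5 (high): estimated risk-reducing benefit
    downside_risk: int   # 1 (low) .. 5 (high): e.g. dual-use potential

    @property
    def priority(self) -> int:
        # Prefer high benefit and low downside; the weighting is a judgment call.
        return self.risk_reduction - self.downside_risk

candidates = [
    Intervention("Far-UVC light sterilization", risk_reduction=4, downside_risk=1),
    Intervention("Broad pathogen surveillance", risk_reduction=5, downside_risk=2),
    Intervention("Novel aerosol delivery methods", risk_reduction=2, downside_risk=4),
]

# Most beneficial developments first on the "prioritization list".
ordered = sorted(candidates, key=lambda i: i.priority, reverse=True)
```

Even a crude score like this forces the trade-off between benefit and downside into the open, which is most of the value of the exercise.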
Outside of non-dual-use defensive technologies, Ordering is more difficult and, in practice, does necessitate some level of forecasting or anticipation. I think Categorising (discussed below) has the potential to provide robust frameworks for Ordering purposes. This difficulty also highlights how DTD could be relevant for informing trade-offs in everyday decision-making at relevant organizations, not just for catastrophic risks.
Gradualism assumes it is beneficial to develop a technology gradually when the technology is potentially beneficial but the effects of its exponential growth may be uncontrollable.
Roughly speaking, I think this approach lends itself well to developing Substitute and Safety Technologies. The Collingridge dilemma illustrates the utility of a gradual approach well: as a technology increases in adoption, our knowledge about it increases, but our ability to act on that knowledge decreases; it is difficult to shape a technology once it is already widely adopted. Conversely, when a technology is new, the ability to act on available knowledge is high, but often little is known about it. The figure below demonstrates this inverse relationship between control over a technology and knowledge about it as adoption grows over time, also known as “the Collingridge curve”.
A gradual approach intensifies efforts to increase knowledge around a technology before it is well adopted within society. This, in theory, may provide additional knowledge or clarity. This additional clarity may allow regulators to take action in guiding the technology toward less risky paths.
This clarity may also enable better Ordering, and, for example, could showcase that risks may come on quickly and risk-reducing efforts should be implemented more urgently.
Competitive pressures may disincentivize Gradualism, as they push companies to reach a specific end goal as fast as possible (e.g. advanced AI systems). Roughly speaking, I think this argument sometimes overlooks the benefits of safe deployment, which may help counteract competitive pressures. Releasing unsafe systems may lead to repercussions that harm not only the developer but the entire industry. Continuing with the case of AI, one of the following events may occur (source):
Reduction of revenue due to a decrease in consumer trust.
Regulatory fines or penalties, or development of tighter AI regulation overall.
Litigation from customers.
However, a company will not necessarily think those repercussions are worth the opportunity cost of taking a gradual approach. Customers may also lack access to accurate information about the safety of products and, as a result, maintain a high level of trust. One tool that could be of use is informed government regulation that removes incentives for, or increases the cost of, neglecting safety concerns, though that usually takes significant time and effort to implement.
I think red teaming (e.g. that of DALLE-2) is an example of taking a gradual approach to the release of a powerful AI system and is also a case where DTD is useful in preventing non-existential risks.
Technology development may also be “distanced” for reasons other than Gradualism or Ordering. This could mean “buying time”: delaying when a development might emerge by stopping research from continuing, due to a lack of understanding of potential risk or due to imminent danger.
Distancing applies to both technological and non-technological developments, like limiting others from working on something with a track record of being risky. This could take the form of additional regulatory oversight where warranted, or moratoriums on certain developments. These interventions should obviously have a high bar to clear. One case where this would be difficult involves state actors unwilling to cooperate with regulations they deem too stringent, or unwilling to enforce them in practice.
In the case of something like moratoriums, DTD has implications relating to academic freedom and access to technology, and I think it is worth exploring these aspects of the framework further.
A strategy that may help operationalise some of the above interventions is Categorising. There already exist ways to categorize individual technologies. One example is the Technology Readiness Level (TRL), which is used to gauge the maturity of emerging technologies in the absence of perfect information. A key use case for this framework is within space programs, where it informs procurement, funding, and R&D efforts. This sort of categorization could be beneficial for Ordering purposes, in addition to providing a rough understanding of where a technology falls on the Collingridge curve.
TRL itself is not perfect, and some efforts to improve it have been made. However, it could enable a certain level of common language around the maturity of technologies from different domains (source). It could also help set realistic expectations about how advanced some technologies are by filtering away optimism or pessimism that may arise from external circumstances (e.g. optimism from a hype cycle) (ibid). I think improving TRL, or developing another categorising framework with DTD in mind, would be useful, especially for developing more robust methods of ordering dual-use technology.
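As a sketch of how a categorisation like TRL could feed into this: the nine levels below paraphrase the standard TRL 1–9 scale, while the mapping from level to a rough position on the Collingridge curve is my own illustrative assumption, not part of the TRL framework.

```python
# Sketch linking a maturity categorisation (TRL) to the Collingridge curve.
from enum import IntEnum

class TRL(IntEnum):
    BASIC_PRINCIPLES = 1           # basic principles observed
    TECHNOLOGY_CONCEPT = 2         # technology concept formulated
    PROOF_OF_CONCEPT = 3           # experimental proof of concept
    LAB_VALIDATION = 4             # validated in a lab environment
    RELEVANT_ENV_VALIDATION = 5    # validated in a relevant environment
    PROTOTYPE_DEMONSTRATION = 6    # prototype demonstrated in a relevant environment
    OPERATIONAL_DEMONSTRATION = 7  # demonstrated in an operational environment
    SYSTEM_COMPLETE = 8            # system complete and qualified
    PROVEN_IN_OPERATION = 9        # proven through successful operation

def collingridge_position(trl: TRL) -> str:
    """Rough position on the Collingridge curve (thresholds are illustrative)."""
    if trl <= TRL.PROOF_OF_CONCEPT:
        return "high control, low knowledge"
    if trl <= TRL.PROTOTYPE_DEMONSTRATION:
        return "moderate control, growing knowledge"
    return "low control, high knowledge"
```

A shared scale of this sort is what would let Regulators compare the maturity (and hence the remaining shapeability) of technologies from very different domains.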
Here is my guess at what a DTD toolkit could look like, informing interventions at both the Operator and Regulator levels.
Nothing here is listed in any particular order. Please let me know if I missed anything.
Build defensive tech yourself or get involved with such projects
Join the Sculpting Evolution lab
Found your own DTD-aligned FRO.
Generate knowledge gap databases that aid in accelerating risk-reducing ideas
Frontier Climate’s interactive database of research and innovation gaps relating to carbon removal technologies
Preferentially funding specific technologies over others
Preferential funding toward Far-UVC light research, which is a defensive technology.
Cultural norms that discourage risky research
Shifting cultures in science is not impossible, and some past efforts have gained traction.
Moratoriums to halt the use of risky technologies, or to force them to go through additional approval
I think this house bill is a good example of what I am thinking of here
Create institutions that engage in DTD-aligned research (I think creating institutions or investing talent in good institutions is one of the most worthwhile “tools”):
Create institutions that engage in DTD-aligned advocacy or development of the concept
Incentivising DTD (e.g. Some efforts at iGem)
Regulations that disincentivize specific technologies
Coordinated international bans on risky technology
Montreal Protocol (Treaty to protect the Ozone layer)
Third-party auditing to uncover potential risks or aid in identifying “unknown unknowns”.
Informing potential operators on the likelihood of catastrophic risks (did you know the chance of existential risk comes close to a roll of the dice?) to direct them toward DTD-aligned institutions.
Prizes, competitions, and Advanced Market Commitments (AMC)
DARPA’s Grand Challenge advanced the field of autonomous vehicles significantly
Frontier Climate is an “AMC dedicated to buying $925M of permanent carbon removal between 2022 and 2030”.
Advocating that the NIH, for example, assess second-order effects in their grants. This is needed and currently not being done, as the NIH only assesses bio risks and risks to study patients (source).
Aids in directing funding toward risk-reducing technology
Writing DTD-aligned briefings like this one from the Institute for Progress
Run workshops or create working groups with an explicit DTD focus
Coordinate what information is shared between researchers, or limit access to authorized personnel
This can help build trust between communities and can help mitigate transfer risks. (Especially relevant for dual-use technologies, as mentioned above).
“Knowledge spillover is a big deal” and should be taken very seriously
Responsible access to genetic sequencing is an example of limiting access
Authoring Cause Area Explorations that examine DTD efforts within a specific space, and recommending fundable opportunities to philanthropists
Such an exercise is, in my view, neglected and could have an impact beyond just philanthropists
Thank you to Jacob Swett, Jonas Sandbrink, Santi Ruiz, Emily Nobes, Vishal Maini, Anna Wang, Milan Cvitkovic for feedback on this post.
Is DTD mainly relevant to catastrophic risk reduction?
While DTD can inform catastrophic risk reduction, it is also relevant to many topics beyond that and could influence many day-to-day decisions at philanthropies, funding agencies, and research-heavy companies. Here, it helps to remember that many of the ideas within DTD are not new and have been common among these groups for some time; DTD provides a new way to frame and implement them. These problems are difficult and complicated (e.g. the trade-offs of dual-use technology), and I think DTD is a useful framework that may provide some insight into such thorny problems.
As an example:
Gradualism is relevant to image-generating AIs like DALLE, for example in making them produce less gruesome images or in curbing biases.
Distancing, which may include moratoriums (in exceptional circumstances), has considerable impacts on academic freedom and access to technology
Will this not cause unnecessary red tape and slow down innovation?
Not necessarily. Some risk-reducing efforts could be designed not to interfere with legitimate research.
In the case of bans or protocols, I think they should be limited in scope and temporary, as in this case with gain-of-function research. These limited bans should not interfere with other R&D efforts, so as not to impede them unnecessarily, but that is highly dependent on the specific case and may not always be possible.
However, taking the example of the NIH assessing first- and second-order risks of a research project, that may add more time to processing applications or, worse, require more time of applicants during their grant applications.
The answer in this case is less clear, and I would be interested in proposals that investigate this more deeply.
If we take the limit of risk-reducing tech, will we not end up with boring applications?
Not necessarily. Shifting climate tech to cleaner energy is not boring for example.
If some research is feasible, will it not be developed anyway? Is it not a futile attempt to try to affect it through some policy decisions?
It may well be developed. However, timing is very important. Technology can also be path dependent, and shaping that path is important.
DTD is also incentivised from a purely selfish point of view. For example, if a nation has strong defensive infrastructure against pandemics and is able to detect and defend against them quickly, it would be unaffected by another country’s mishap resulting from a lack of research oversight.
Where does DTD not work?
These are some initial guesses (this is not meant to be a complete list):
Where vulnerabilities exist regardless of timing (e.g. a massive solar flare) (source).
Where defensive technology is not possible, or is extremely hard to develop
Where risk-increasing technology retains an advantage even with defensive technology at maturity
Where worldwide cooperation or agreement is not achieved and defensive technology is not useful
Published: February 22nd, 2023. Last Updated: February 22nd, 2023.