Rarely do we enter a conversation today about software security where the topic of threat modeling does not emerge. It is clearly an important part of developing secure software. What may surprise many people is that threat modeling is not a new concept.
Its modern roots reach back at least to the late 1950s, when the context was national defense and military initiatives. Having said that, it would serve us well to take a step back and examine where we are today compared to the past. We want to keep learning, act wisely, heed the warnings of those who came before us, and avoid repeating their mistakes. As such, this is a call to pause and reflect on threat modeling today.
The evolution of threat modeling
While there are many books, papers, and articles in modern history that talk about threat modeling, we will highlight an illustrative example using a 1959 paper titled "Design of Threat Models" by Harold Tombach. You can also explore other material from the same era published by IEEE and ACM, among others.
Tombach’s paper described a layered approach to threat modeling:
- Start by examining all possible threats to the system, assuming you know nothing about the threat actors.
- Refine the list to consider all threats that we (not the threat actors) can imagine now or in the future.
- Refine further to the current state of the art based on today's knowledge (so that the analysis does not require extensive intelligence gathering).
- Determine the cost of the identified threats.
- Determine all threats which have not yet been considered that we (and therefore threat actors) could execute if additional intelligence were available.
At first glance, this approach appears quite reasonable. We want an approach that is repeatable and will produce actionable security guidance. Having a step-by-step approach eliminates at least some variables related to quality and consistency.
Where is threat modeling today?
Our industry has standardized (using the term loosely) on an approach that uses data flow analysis for much of our threat modeling. It is useful when we consider the flow of information across our systems. Data flow diagrams, therefore, are generated as a means of communicating this analysis to the outside world (those who are not threat modelers).
The original intent of a data flow approach is consistent with the lessons expressed in Tombach’s paper. Giving due consideration to both common-knowledge threats and more esoteric ones helps us recommend appropriate security mitigations.
Trouble rears its ugly head when we consider modern challenges in secure software development. We now have teams constructing and integrating various code bases. The flow of data across these complex, distributed applications is less clear. Even if we figure it out, the speed of agile development makes the previous analysis outdated very quickly. Compounding the problem, very few people understand the application in its entirety, and a migratory workforce makes it difficult to retain that knowledge organizationally. The elephant in the room, then, is how to do threat modeling at the pace of agile development.
What options do we have to improve threat modeling?
There are many possible responses to this problem. Let us consider five:
- Keep doing what we have always done and hire more people.
- Reduce the scope of threat modeling.
- Slow down to ensure threat modeling has time to keep up.
- Reduce time by reusing previous threat models as templates for future analysis.
- Automate the creation of data flow diagrams.
Keep doing what we have always done and hire more people
The line of thought here is that we have built up a certain level of expertise in our organizations. Why not expand that same capability with more people?
The underlying assumptions behind this strategy are that we have enough people interested, the output will be consistent across teams, and we can invest the time to train people. These assumptions become questionable when we consider the dynamic software development workforce, the relative maturity of different individuals, and the lack of security professionals to conduct training at scale.
Reduce the scope of threat modeling
The approach here is to reduce the analysis to a subset of our entire system. After all, there are specialists in specific technologies, and they would know best how to address potential threats in their areas.
The flawed assumption in this argument is that threat actors will, likewise, limit their activities to a subset of our system. Furthermore, if we collect all the mitigations from the various threat modeling teams, we have to conduct further analysis to determine appropriate priorities and cross-dependencies. So, while we intend to move faster by reducing the scope, we end up moving slower.
Slow down to ensure threat modeling has time to keep up
Since threat modeling is a very important activity, we should allow time for the model to develop. This, of course, assumes that the cost of slowing down is acceptable to the organization. The business context, however, seems to contradict this. We are seeing a greater push for speed through digital transformation and digital delivery programs.
Reduce time by reusing previous threat models as templates for future analysis
If we can take advantage of a repeatable process through reusable templates, that should allow us to keep up with the need for speed. Templates are intended, after all, to save us from building a threat model from the ground up each time.
There are several problems that emerge with this line of thinking. First, any template carries with it assumptions. Those assumptions may be business- or technology-related. For example, if a threat model examines the threats around a mission-critical system, then the recommendations will reasonably be quite extensive. However, blindly using this threat model as a template in another context will lead to mitigations that are unnecessary. Not only that, but we face the problem of governing the threat modeling templates and determining the impact of any change. The effort of maintaining these templates will only continue to grow.
Automate the creation of data flow diagrams
To address the challenge of creating diagrams quickly, why not automate this? By examining the implied flows through our system, we can construct data flow graphs of every possible combination in a fraction of the time it would take a person. While this might shorten diagram creation, it amplifies the pain of figuring out which data flows are reasonable. We end up with data flows without context, which manifests itself as false positives that need to be removed.
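To see why brute-force generation produces noise, consider a minimal sketch (the component names and "actual" flows are hypothetical, chosen only for illustration) that enumerates every possible directed flow between components. The number of candidate edges grows quadratically with the number of components, and nothing in the enumeration itself tells us which flows actually carry data or matter to an attacker.

```python
from itertools import permutations

# Hypothetical components discovered by scanning configuration or code.
components = ["browser", "api_gateway", "orders_service", "billing_service",
              "orders_db", "billing_db", "message_queue"]

# Naive automation: every ordered pair of components is a candidate data flow.
candidate_flows = list(permutations(components, 2))
print(f"{len(candidate_flows)} candidate flows for {len(components)} components")
# 42 candidate flows for only 7 components.

# In this illustrative system, only a handful of flows actually exist; the rest
# are context-free noise (false positives) a human still has to review and discard.
actual_flows = {
    ("browser", "api_gateway"),
    ("api_gateway", "orders_service"),
    ("orders_service", "orders_db"),
    ("orders_service", "message_queue"),
    ("message_queue", "billing_service"),
    ("billing_service", "billing_db"),
}
false_positives = [f for f in candidate_flows if f not in actual_flows]
print(f"{len(false_positives)} generated flows would need to be reviewed and removed")
```

Even in this toy example, the review burden (36 spurious flows) dwarfs the handful of flows that matter, which is exactly the context problem described above.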
Can we ever win?
We have reached a tipping point where we must rethink our reliance on a single way of performing threat modeling. A diagrammatic approach with data flows is not the only way to derive mitigations, particularly in today’s software development context.
We can also consider misuse cases, well-known security checklists (OWASP and SANS come to mind), and even standards (which represent a minimum threshold for security). As a community, we have learned a significant amount that can help us avoid wasting effort re-deriving the same conclusions.
That is not to say that data flow diagrams are not useful. Quite the contrary: we must focus our data flow analysis where new or novel threat analysis is required. We must consider, as Harold Tombach suggested, a range of possible threats based on additional intelligence work. This may imply the need to invest more in intelligence tools or training, but it is far more scalable than attempting to conduct threat models with limited resources in a fast-moving environment.
Software security is more important than ever now
Security is not going away any time soon. In fact, as recent breaches attest, it is more important than ever. Evolving a threat modeling program to the next stage involves some key steps:
- Identify the Top 10 lists, hardening guides (such as CIS), frameworks (for example, those put out by NIST), or standards (like ISO) that reflect the common use cases for your systems.
- Map the criteria above to architectural components, that is, specific servers or software development frameworks.
- Determine the mitigations required to bring those components into conformance with the criteria identified in the first step (a minimal sketch of these first three steps follows this list).
- Conduct a financial analysis to determine what is feasible.
- Use a data flow threat model analysis, with the aid of intelligence tools, to determine additional areas that are outside the boundary of common knowledge.
- Present the combined list of mitigations from the two preceding steps to leadership for a decision.
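As a minimal sketch of the first three steps, the example below maps hardening-guide criteria to architectural components and reports which mitigations are still required. The criteria, component names, and current-state flags are hypothetical placeholders; in practice they would come from your chosen checklist and your actual architecture.

```python
# Hypothetical criteria drawn from a Top 10 list or hardening guide (step 1).
criteria = {
    "enforce_tls": "All network listeners must use TLS 1.2 or higher",
    "parameterized_queries": "Database access must use parameterized queries",
    "centralized_authn": "Authentication must go through the identity provider",
}

# Map the criteria to architectural components (step 2), with a flag indicating
# whether the component currently satisfies each applicable criterion.
components = {
    "api_gateway":    {"enforce_tls": True,  "centralized_authn": False},
    "orders_service": {"enforce_tls": True,  "parameterized_queries": False},
    "orders_db":      {"enforce_tls": False},
}

# Determine the mitigations required to close the gaps (step 3).
def required_mitigations(criteria, components):
    gaps = []
    for component, status in components.items():
        for criterion, satisfied in status.items():
            if not satisfied:
                gaps.append((component, criterion, criteria[criterion]))
    return gaps

for component, criterion, description in required_mitigations(criteria, components):
    print(f"{component}: {criterion} -> {description}")
```

The output is a component-by-component gap list that feeds directly into the financial analysis and leadership decision described in the remaining steps.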
Enable security across teams
Using this type of approach extends the reach of threat modeling beyond the security team. Other teams, not familiar with data flow diagrams and threat modeling, can still contribute toward mitigating potential threats by leveraging broader industry experience. In this way, threat modeling will continue to provide its intended value of identifying meaningful threats and proposing mitigations without getting in the way of a fast-moving business.
If you want a deep dive into how you can evolve threat modeling for agility and business value, read our whitepaper.
About Security Compass
Security Compass, a leading provider of cybersecurity solutions, enables organizations to shift left and build secure applications by design, integrated directly with existing DevSecOps tools and workflows. Its flagship product, SD Elements, allows organizations to balance the need to accelerate software time-to-market while managing risk by automating significant portions of proactive manual processes for security and compliance. SD Elements is the world’s first Balanced Development Automation platform. Security Compass is the trusted solution provider to leading financial and technology organizations, the U.S. Department of Defense, government agencies, and renowned global brands across multiple industries. The company is headquartered in Toronto, with offices in the U.S. and India. For more information, please visit https://www.securitycompass.com/