All software has defects, especially today's complex code, with thousands of lines that have to be worded just so. The Institute of Electrical and Electronics Engineers (IEEE) is aware of this dilemma. In 2014, the IEEE launched a new initiative: the Computer Society Center for Secure Design (CSD). Its mission? To provide guidance on recognizing software systems that are vulnerable to compromise, and on designing and building software systems with strong, identifiable security properties.
One might say that’s been done before. That is true. However, CSD intends to take a different approach by shifting some of the focus in security from finding bugs to identifying common design flaws in the hope that software architects can learn from each other.
To get that kind of information, the CSD sought the help of veterans in the field of software security, more or less those who have either made the aforementioned mistakes or had a hand in fixing them. After much deliberation, the group gathered its thoughts in a paper, "Avoiding the Top 10 Software Security Design Flaws." The IEEE noted that many of the flaws on the list have been well known for decades, yet they continue to be a problem. Here we'll take a look at those flaws, and at how to fix them.
What Is Secure Design?
It might be helpful to first define what CSD considers a secure design: "The goal of a secure design is to enable a system that supports and enforces the necessary authentication, authorization, confidentiality, data integrity, accountability, availability and non-repudiation requirements."
One more piece of business needs to be taken care of before getting to the list. Remember that the CSD shifted its focus from bugs to flaws? It's important that everyone be on the same page with how the CSD defines bugs and flaws. Both are software defects, albeit different ones. So how do they differ, exactly? Bugs are implementation-level problems: they may exist in the code, yet never be executed. Flaws sit at a deeper level; they are more subtle, and while they may eventually surface in code, they originate earlier. Bottom line: flaws are the result of mistakes or oversights at the design level.
10 Tips for Avoiding Key Software Flaws
With the definitions in place, let’s look at the most common flaws that appear in software today.
Earn or give – but never assume – trust
Software systems often need information from other software packages or users; think of server and client applications, for example. A system may be secure on its own, but if any component it relies on is insecure, the system inherits that insecurity. CSD stressed the importance of inventorying every component that communicates with the software in question, then devising methods to validate each discovered component before trusting it.
Use an authentication mechanism that cannot be bypassed or tampered with
The whole purpose of authentication is defeated if vetted users or software can bypass or tamper with the process. CSD offered three suggestions: design the authentication system to act as a choke point, give credentials a limited life and use multi-factor authentication.
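One way to read the choke-point and limited-lifetime advice is a single verification function that every request must pass through, issuing credentials that expire. The sketch below is illustrative, not the CSD's own design: the `issue_token`/`check_token` names, the token format, and the 15-minute lifetime are all assumptions made for the example.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SECRET = secrets.token_bytes(32)   # server-side signing key (illustrative)
TOKEN_TTL = 900                    # limited-life credential: 15 minutes

def issue_token(user, now=None):
    """Issue a signed, expiring credential after authentication succeeds."""
    exp = (now if now is not None else time.time()) + TOKEN_TTL
    payload = json.dumps({"user": user, "exp": exp}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode() + "."
            + base64.urlsafe_b64encode(sig).decode())

def check_token(token, now=None):
    """The single choke point: every request passes through here."""
    try:
        p64, s64 = token.split(".")
        payload = base64.urlsafe_b64decode(p64)
        sig = base64.urlsafe_b64decode(s64)
    except Exception:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None                    # signature mismatch: tampering
    claims = json.loads(payload)
    if (now if now is not None else time.time()) > claims["exp"]:
        return None                    # expired: credentials have a limited life
    return claims["user"]
```

Because every caller funnels through `check_token`, a forged payload fails the signature check and a stale token fails the expiry check; there is no second path around the mechanism to bypass.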
Authorize after you authenticate
Authorization is different from authentication. For instance, a debit card and PIN authenticate an ATM user, but that authenticated user is only authorized to withdraw money from his or her own accounts. CSD believes authorization should be an explicit check performed by a common infrastructure (a system library or back end) that defines the privileges and services allowed.
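The ATM example can be sketched as an explicit, deny-by-default check against one shared policy store rather than ad hoc checks scattered through the code. The `PERMISSIONS` table, `authorize`, and `withdraw` names here are hypothetical, chosen only to mirror the example above.

```python
# Hypothetical central policy store: the "common infrastructure"
# that defines which (action, resource) pairs each user may perform.
PERMISSIONS = {
    "alice": {("withdraw", "acct-alice")},
    "bob":   {("withdraw", "acct-bob")},
}

def authorize(user, action, resource):
    """Explicit authorization check; unknown users and actions are denied."""
    return (action, resource) in PERMISSIONS.get(user, set())

def withdraw(user, account, amount):
    # Authentication (card + PIN) is assumed to have already happened;
    # authorization is still checked explicitly, per operation.
    if not authorize(user, "withdraw", account):
        raise PermissionError(f"{user} may not withdraw from {account}")
    return f"dispensed {amount} from {account}"
```

Alice can withdraw from her own account, but the same authenticated Alice is refused on Bob's account, which is exactly the authenticate-versus-authorize distinction.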
Strictly separate data and control instructions, and never process control instructions received from untrusted sources
To prevent injection vulnerabilities, it is important to avoid co-mingling data and control instructions. Ignoring the principle of separation decreases security by undermining low-level security mechanisms. CSD offers the following advice:
- Design compilers, parsers and related pieces of infrastructure to check for control-flow integrity and segregation of control and untrusted data.
- Create APIs that avoid exposing methods or endpoints that consume language strings.
- Develop applications to avoid APIs that mix data and control information in their parameters.
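The classic concrete case of this separation is a parameterized database query: the SQL statement is fixed control, and untrusted input travels separately as bound data, so it can never be reinterpreted as SQL. A minimal sketch with Python's standard `sqlite3` module (the table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # Control (the SQL text) is constant; untrusted data is bound to the
    # "?" placeholder and is never concatenated into the statement.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchone()
```

A classic injection payload such as `"alice' OR '1'='1"` is simply treated as a (nonexistent) username, because the data channel cannot alter the control channel.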
Define an approach that ensures data is explicitly validated
Software systems and components make assumptions about the data they operate on, and it is up to the designer to ensure those assumptions hold; vulnerabilities arise when they don't. CSD recommends:
- The design or use of centralized validation mechanisms.
- The transformation of data into a canonical form.
- The use of common libraries of validation primitives.
- Designing the implementation’s input validation to be state-aware.
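The first three recommendations can be combined in a small sketch: transform input to a canonical form first, then run it through one centralized, allowlist-style validation primitive. The function names and the specific username rules (3-32 ASCII alphanumerics) are assumptions for illustration, not rules from the CSD paper.

```python
import unicodedata

def canonicalize(text):
    """Transform input into canonical form before any checks run."""
    return unicodedata.normalize("NFC", text).strip().lower()

def validate_username(raw):
    """Centralized validation primitive: allowlist what is acceptable,
    rather than trying to blocklist everything dangerous."""
    name = canonicalize(raw)
    if not (3 <= len(name) <= 32) or not name.isascii() or not name.isalnum():
        raise ValueError(f"invalid username: {raw!r}")
    return name
```

Canonicalizing before validating matters: checking the raw string and using the normalized one (or vice versa) is a common way for "validated" data to still smuggle in characters the check never saw.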
Use cryptography correctly
CSD admits that getting cryptography right is difficult. Common ways of getting it wrong include:
- Misuse of libraries and algorithms
- Poor key management
- Randomness that is not random
- Failure to centralize cryptography
- Failure to allow for algorithm adaptation and evolution
Because of this difficulty, CSD advocates consulting an expert, noting that the relevant field of expertise is applied cryptography.
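Short of hiring a cryptographer, the practical advice is to lean on vetted, high-level primitives rather than inventing anything. The sketch below, using only Python's standard library, touches three of the listed failures: it draws randomness from a CSPRNG (`secrets`, not `random`), hashes passwords with an established memory-hard KDF (`hashlib.scrypt`) instead of a homemade scheme, and compares digests in constant time. The function names and scrypt cost parameters are illustrative choices.

```python
import hashlib
import hmac
import secrets

def new_session_id():
    # CSPRNG, not random.random(): "randomness that is not random"
    # is one of the listed failures.
    return secrets.token_urlsafe(32)

def hash_password(password, salt=None):
    # Established memory-hard KDF from the standard library,
    # with a fresh random salt per password.
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Note what the code does not do: no custom cipher, no `==` on secrets, no reused salt. Centralizing these helpers in one module also addresses the "failure to centralize cryptography" item.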
Handle identity-sensitive data with care
Two things commonly go wrong: designers fail to identify data as sensitive, or they fail to determine all the ways the data can be exposed or manipulated. CSD proposes creating a data-sensitivity policy that addresses all company-specific factors. The policy should include:
- Information about government and industry regulations that apply to the organization
- Whether or not data confidentiality applies
- How data is handled when it's in transit
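A data-sensitivity policy is only useful if the code consults it. One small, concrete enforcement point is redacting classified fields before a record can leave the system in a log line. The `SENSITIVE_FIELDS` set and `redact` helper below are hypothetical stand-ins for whatever classification the policy defines.

```python
# Hypothetical classification from the data-sensitivity policy:
# fields that must never appear in logs or error messages.
SENSITIVE_FIELDS = {"ssn", "card_number", "password"}

def redact(record):
    """Return a copy safe for logging: sensitive values are masked,
    and the masking is visible rather than silently dropped."""
    return {
        key: ("***REDACTED***" if key in SENSITIVE_FIELDS else value)
        for key, value in record.items()
    }
```

Routing every log call through a helper like this means a single policy change (adding a field to the set) takes effect everywhere, instead of relying on each developer to remember which fields are sensitive.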
Always consider the users
Developers’ failure to consider users can lead to many issues, including:
- Privilege escalation
- Users disclosing sensitive information
- Complicated security procedures that increase the odds of incorrect use, or of no use at all
CSD is aware that security versus convenience is an issue. Using a risk assessment approach (making trade-offs) may not be perfect, but it’s better than ignoring users altogether.
Understand how integrating external components changes your attack surface
Similar to the first design flaw, a software system also inherits security weaknesses from any external components integrated into it. CSD advises:
- Scheduling enough time to study external component security.
- Not trusting external components until appropriate security controls have been applied.
- Documenting everything. For example, explain why default settings were changed.
Be flexible when considering future changes to objects and actors
Assuming that a software program's code is static is asking for trouble; any software of note is under constant revision. Knowing that, the important question becomes: how do software changes affect security? To that end, CSD suggests keeping the following in mind. Design for:
- Security properties changing over time, such as when code is updated
- The ability to isolate or toggle functionality
- Changes to objects intended to be kept secret
- Changes in the security properties of components beyond your control
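The "isolate or toggle functionality" point is often realized as a kill switch: risky functionality sits behind a flag that can be flipped off when its security properties change, without a redeploy. The registry below is a deliberately minimal, hypothetical sketch; real systems typically read such flags from configuration or a feature-flag service.

```python
# Hypothetical feature registry; in practice this would be loaded
# from configuration so it can change at runtime.
FLAGS = {"legacy_export": True}

def feature_enabled(name):
    # Unknown features default to off: fail closed, not open.
    return FLAGS.get(name, False)

def export_report(rows):
    if not feature_enabled("legacy_export"):
        raise RuntimeError("legacy_export is disabled")
    return ",".join(rows)
```

If a vulnerability is found in the legacy export path, setting `FLAGS["legacy_export"] = False` isolates it immediately while a fix is prepared.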
An Interesting Take on Why Software Is Still Insecure
On the Cigital blog, Principal Consultant Jim Delgrosso offered reasons why even the pros may design software with one or more of the above flaws:
- Some of these flaws are just hard to get right all the time.
- We are human beings, and human beings make mistakes.
- Design flaws can be hard to find.
The IEEE's Center for Secure Design has moved the bar from bugs to flaws. Only time will tell whether its suggestions will have an impact.