3 Best Practices for Secure Software Development
Software security risks are everywhere. And in an era of cyberattacks, they can affect everyone: individuals, corporations, and governments alike.
Cyberattacks make headlines. Duqu and Stuxnet had everyone talking in 2010 and 2011. And cyberattacks have only gotten worse since then. WannaCry hit important systems in 2017, including Britain’s National Health Service. And GitHub was hit by a denial of service attack in early 2018.
Embedded Systems Aren’t Immune, Either
This is a huge problem.
Cyberattacks against embedded systems could lead to wide-scale damage to:
- Critical infrastructure, including power generation and oil and gas refining.
- Water and waste control systems.
Several factors compound the risk:
- Interdependent systems make software the weakest link.
- Software size and complexity complicate testing.
- An outsourced software supply chain increases risk exposure.
- Sophisticated attacks uncover more vulnerabilities.
- Legacy software is reused, carrying old flaws forward.
Cybersecurity Impacts Development, Too
More organizations are investing in cybersecurity technologies. And many advances have been made in cybersecurity coverage. But much of the effort has been focused on adding security after the fact and improving threat detection.
Many are now realizing that software development needs to be more secure.
It’s not enough to apply new security technologies. The software itself needs to close risk gaps. Putting stronger locks on your front door is no use if the windows are left open.
So, you need to secure the software development process to improve security.
Why Developing Secure Software Is Difficult
Security Isn’t a Big Enough Priority
Security isn’t a big enough priority for most developers.
There’s an old saying that you need to:
- Get to market fast.
- Include all features planned.
- Maintain a high level of quality.
But you can only have two out of the three. So, while quality is part of the conversation, security is often left behind.
Features and deadlines drive development checklists. And security usually isn’t a feature or a requirement. So, it’s rarely addressed.
Quality Doesn’t Equal Security
Quality doesn’t guarantee security.
Improving software quality can reduce security flaws that result from defects. But QA usually doesn’t take hacking into consideration.
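To illustrate (a hypothetical sketch, not from any particular project): a string-copy helper can pass every functional QA test for typical inputs and still be exploitable, because QA rarely feeds it hostile input.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical example: this function passes functional QA tests for
 * ordinary names, but strcpy() performs no bounds check, so an
 * attacker-supplied long name overflows dst (CWE-120). */
void copy_name_unsafe(char *dst, const char *src) {
    strcpy(dst, src);   /* buffer copy without checking size of input */
}

/* Safer variant: bound the copy and report truncation to the caller.
 * snprintf() always NUL-terminates within dstlen. */
int copy_name_safe(char *dst, size_t dstlen, const char *src) {
    if (dstlen == 0)
        return -1;
    int n = snprintf(dst, dstlen, "%s", src);
    return (n < 0 || (size_t)n >= dstlen) ? -1 : 0;  /* -1 = truncated */
}
```

Both versions behave identically on the "happy path" a functional test exercises; only hostile input separates them, which is exactly the gap between quality and security.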
Too Many Moving Parts in Embedded Development
Embedded systems are big and complex.
There’s new and legacy code — and connectivity components. And embedded systems run on a variety of operating systems.
Multiple development teams work on software. And they’re often spread around the world.
And it’s difficult enough to ensure that the software functions properly. It can be even more difficult to ensure security.
Not Enough Training
Unfortunately, many people involved in software development don’t know how to recognize security problems. This includes the security implications of certain software requirements — or lack thereof.
And they don’t know how security impacts the way software is prepared for distribution and deployment.
So, developers may not design secure software. Security requirements may be lacking. And developers might not understand how a mistake turns into a security vulnerability.
No One Owns Security
Most embedded development teams don’t have someone tasked with software security. Instead, they rely on a variety of roles — from product management to development to QA — to make software secure. And that doesn’t always work.
3 Best Practices for Secure Software Development
Cyberattacks happen all the time — and attackers are getting smarter.
But many teams are overwhelmed when it comes to secure development. It’s a challenge to figure out which threats and vulnerabilities pose the greatest risk. And most developers don’t know how to protect against and respond to those risks.
If you’re looking to ensure secure development processes, follow these best practices.
Start With Requirements
You can address and eliminate many security weaknesses at the requirements stage. Start by defining security requirements alongside your functional software requirements.
Best practices include:
- Constraints on process behaviors and input handling.
- Resistance to (and tolerance of) intentional failures.
- Secure multicore designs that prevent unexpected interactions between threads and processes.
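As a sketch of what a requirements-driven input constraint might look like in code, assume a hypothetical protocol where a message header declares a payload length and the requirements spec caps payloads at 256 bytes (the limit and function name are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_PAYLOAD 256  /* hypothetical limit taken from the requirements spec */

/* Requirements-driven input constraint: the declared length field is
 * validated against both the specified maximum and the number of bytes
 * actually received before any copy or parse takes place. */
int payload_length_ok(uint32_t declared_len, size_t bytes_received) {
    if (declared_len == 0 || declared_len > MAX_PAYLOAD)
        return 0;   /* violates the stated constraint */
    if ((size_t)declared_len > bytes_received)
        return 0;   /* header claims more data than was received */
    return 1;
}
```

Because the constraint is written down as a requirement first, a reviewer can check the code against it, and a tester can write cases for each rejection path.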
Develop Based on Standards
Developing with compliance standards in mind can also improve security. Compliance standards — such as ISO 26262 — require coding standards. And coding standards give developers a way to identify and avoid risks.
Among the most prominent security initiatives related to software development are the Common Weakness Enumeration (CWE) database project and the CERT C coding standard. Other coding standards, such as MISRA, can also be used to ensure security.
It’s a best practice to use coding standards to ensure:
- Code is consistent.
- Code written by any developer is readable and easy to understand.
- Code reviews and downstream maintenance are efficient.
Using coding standards helps you prevent, detect, and eliminate security weaknesses.
Test Early and Test Often
Finding security weaknesses early in development reduces costs and speeds up release cycles.
It’s important to test code as soon as it’s written — and to test any code being reused from a prior project. And it’s important to test often throughout the development process.
So, it’s a best practice to test code regularly. Static code analysis should be part of that routine.
Static Code Analysis Tools for Secure Development
Roughly half of all security defects are introduced at the source code level. So, finding and fixing bugs as soon as code is written is critical.
But many developers lack security training. And identifying security problems during a code review can be difficult, if not impossible. Security mistakes can be subtle and easy to overlook even for trained developers.
Static code analysis tools can bridge that knowledge gap. They flag security vulnerabilities and accelerate code reviews.
Using static analysis, developers can identify errors, including:
- Memory leaks
- Access violations
- Arithmetic errors
- Array and string overruns
This maximizes code quality and minimizes the impact of errors on the finished product — and project timeline.
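A hypothetical snippet showing where findings like these hide; each comment notes the defect an analyzer would report if the guard next to it were removed:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative function (not from any real codebase) annotated with
 * the analyzer findings that common mistakes here would trigger. */
int parse_record(const char *input) {
    char *copy = malloc(strlen(input) + 1);
    if (copy == NULL)               /* without this check: NULL dereference */
        return -1;
    memcpy(copy, input, strlen(input) + 1);

    int values[4];
    for (int i = 0; i < 4; i++)     /* '<= 4' here would be an array overrun */
        values[i] = 0;

    free(copy);                     /* returning early without free(): memory leak */
    return values[0];
}
```

Each of these defect classes is mechanical to detect with static analysis but easy to miss in a manual review, especially across early-return paths.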
Plus, static code analysis tools — such as Helix QAC — can be used to comply with CERT C (or MISRA) coding rules. And they can identify CWE coding errors faster.
Learn more about using static analysis to secure embedded systems.