The Broken Link Between Software Quality Management and Code Analysis
As organizations work to mature their Agile and DevOps processes, they must reduce inefficiencies, noise, and backlogs, and, most importantly, automate as many of their activities as possible throughout their entire pipelines. All of this requires the support of developers, test engineers, ops, and many others. By working together, they can fix the broken link between software quality management and code analysis.
While each industry vertical may face different challenges and be bound to different quality requirements, they all aim for maximum code coverage and the highest functionality in their software.
The Broken Software Quality Management Link
Many software developers and testers run into the term “shift-left” at least once a week. The reason is obvious: shift-left supports automation within the sprint and build cycle, which better positions the entire team to release a high-quality application on time. The problem is that it is not always clear what falls into the bucket of software quality management and software test coverage.
Software quality is not only determined by metrics, functional and non-functional testing results, integrations, and APIs, but also code quality compliance, safety assurance, and other industry-specific standards, such as MISRA and IEC 61508.
This is important to understand when addressing errors, weaknesses, and bugs that could impact software quality, as any vulnerability can disrupt the SDLC and demand the entire team’s attention. Many of these vulnerabilities may even put the entire product at risk, damaging both customer trust and the brand itself.
Why Is the Software Quality Management Link Broken?
Within the software development community, there is a common understanding that the main responsibility of software development engineers in test (SDETs) is to conduct test automation, while unit testing, API testing, and static/dynamic code analysis are the responsibility of the developers for each code commit.
While that understanding is correct in most cases, the team sprint should not be dictated by persona alone. This is supported by the Agile Manifesto, which reads:
Manifesto for Agile Software Development
We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.
That is, while there is value in the items on the right, we value the items on the left more.
Static code analysis executions within the CI process often reveal coding errors and weaknesses, such as memory leaks, safety issues, and security vulnerabilities that violate OWASP guidelines. All of these are failures that may delay the release of your application due to standards violations, security concerns, or other general issues.
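To make this concrete, here is a minimal, hypothetical C example of the kind of defect such an analysis catches: a memory leak (CWE-401) on an early-return path, together with the corrected version. The function names and buffer size are invented for illustration.

```c
#include <stdlib.h>
#include <string.h>

/* Defect a static analyzer would flag: the early return below
 * abandons the buffer allocated above it (memory leak, CWE-401). */
char *format_leaky(const char *name) {
    char *buf = malloc(64);
    if (buf == NULL) return NULL;
    if (strlen(name) >= 64) {
        return NULL;            /* flagged: 'buf' leaks on this path */
    }
    strcpy(buf, name);
    return buf;
}

/* The fix the analyzer's report points toward: release the buffer
 * on every exit path. */
char *format_fixed(const char *name) {
    char *buf = malloc(64);
    if (buf == NULL) return NULL;
    if (strlen(name) >= 64) {
        free(buf);              /* free before bailing out */
        return NULL;
    }
    strcpy(buf, name);
    return buf;
}
```

Catching this in CI, per commit, is far cheaper than finding the leak later in a long-running system test.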
If not identified early, these quality and security vulnerabilities will delay releases and have a negative impact on the business.
Connecting the Dots Between SAST, Continuous Testing, and CI/CD
To make a clear connection between all of the software quality activities mentioned above, there needs to be a synchronized pipeline with proper quality gates that can accommodate all of those activities.
In the example pipeline below, a developer has submitted a PR (pull request). This triggers a code analysis task that examines the differences between the previous code branch and the current one. Based on the analysis, the code analyzer recommends whether to merge the changes or fix the code first.
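A diff-scoped check like this might be sketched as a short gate script. Everything here is illustrative: the `analyzer` command is a placeholder for whatever static analysis tool the team uses, and the branch name and file patterns are assumptions.

```shell
#!/bin/sh
# Hypothetical quality-gate sketch for the PR flow described above.
set -eu

BASE_BRANCH="${1:-main}"

# Analyze only the files this pull request actually changes.
CHANGED_FILES=$(git diff --name-only "origin/${BASE_BRANCH}...HEAD" -- '*.c' '*.h')

if [ -z "${CHANGED_FILES}" ]; then
    echo "gate: no source changes to analyze"
    exit 0
fi

# A non-zero exit from the analyzer blocks the merge recommendation.
if analyzer --check ${CHANGED_FILES}; then
    echo "gate: PASS, changes are safe to merge"
else
    echo "gate: FAIL, fix the reported violations before merging"
    exit 1
fi
```

Scoping the analysis to the diff keeps the gate fast enough to run on every pull request.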
Potential findings could include secure coding issues or major compliance violations. By enforcing secure coding standards, such as CWE and CERT, or functional safety standards, such as MISRA and IEC 61508, those violations can be dealt with effectively.
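As a sketch of what such an enforced fix can look like, the hypothetical C function below replaces an unchecked implicit narrowing conversion, a pattern MISRA C guidelines warn about, with an explicit, range-checked cast. The function name and value range are invented for illustration.

```c
#include <stdint.h>

/* Before the fix, callers relied on an implicit int32_t -> uint8_t
 * conversion, which silently wraps out-of-range values. The checked
 * version clamps instead, and makes the narrowing cast explicit. */
uint8_t scale_reading(int32_t raw) {
    uint8_t out;
    if (raw < 0) {
        out = 0U;               /* clamp negatives instead of wrapping */
    } else if (raw > 255) {
        out = 255U;             /* clamp overflow to the type's maximum */
    } else {
        out = (uint8_t)raw;     /* explicit, range-checked conversion */
    }
    return out;
}
```

An analyzer configured with a MISRA-style ruleset would flag the implicit version at commit time, so the fix lands before the change ever reaches a test environment.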
As the above diagram illustrates, a developer can submit their code changes with greater confidence to the next testing phase, whether that is functional, regression, BAT (build acceptance testing), or whatever their process may be, and show a code analysis report/audit as a quality gate checkmark. Without such a step in the middle, the “gate” is broken and code errors are deployed into the main branches. Such errors may result in a broken build, critical bugs later in the SDLC, and costly rework.
In a “connected” process, however, the code analysis that is triggered upon each code change produces a confident continuous testing cycle, as the illustration below shows:
As the above pipeline example shows, the testing team is already aware that the underlying code has undergone automated analysis that eliminated code quality risks. This makes the build ready for smoke testing and, later on, a major regression cycle across various targets and environments.
By following the above process, the team is able to connect code analysis with continuous testing, which is agnostic to any platform or development language being tested, and any Agile development methodology.
In addition, it is important to note that both practices happen inside the IDE (Eclipse, IntelliJ, Visual Studio Code, etc.), which connects the processes to the practitioners and makes collaboration and feedback sharing quite easy.
Agile development and DevOps have given teams the opportunity to connect best practices from different domains and personas into a single working process. Adding an extra code analysis step as a quality gate for each code change, then handing the build to the test managers to start their testing activities, is simple and straightforward. Unfortunately, in many organizations this remains a disconnected practice: the broken link between developers and test team members.