"Security is a process, not a product." -Bruce Schneier, Cybersecurity expert

According to studies led by Dr. Gary McGraw at Cigital, roughly half of all vulnerabilities are due to coding errors (bugs). The other half are design flaws, which can only be discovered and fixed through threat modeling or an architectural risk analysis process, a topic for another blog post.

Just as with any bug, the cost to fix a vulnerability grows exponentially the later in the development cycle it is found and addressed. Unlike a simple bug, though, a successfully exploited vulnerability has the potential to cost far more, especially once it reaches a production environment. And the cost is not just monetary; it can also be reputational and legal.

A secure software development lifecycle (SSDLC) is a development methodology: a defined set of security tasks spanning every phase of the software development lifecycle, including several secure coding components.

"Security needs to be done by people who don’t have “security” in their job title. Doing what you do the right way is 90% of getting things secure". -Paco Hope, Secure coding author

Not all developers need to be security experts, but all developers need to be aware of their responsibility for writing code that reduces the chance of exploitable vulnerabilities showing up in applications. Training in secure design principles, in common attacks against specific coding languages, and in the use of tools such as static code analysis to find and remediate vulnerabilities is needed across the field, as these topics are rarely covered in computer science or coding classes in school.

Half of the OWASP Top 10 web application vulnerabilities and more than half of the MITRE/SANS Top 25 programming errors stem from insufficient validation of user-controlled data. Particular attention therefore needs to be paid to the tools and libraries available to validate and sanitize this data before it is used. Developers (and testers) need to develop a mindset of "What can an attacker do with this?" anytime user-controlled input is requested or used. Fuzzing (also called random mutation testing) of input files or protocols is a good way to find many input validation issues, and it is eminently automatable.
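As a minimal sketch of both ideas, the snippet below pairs an allowlist validator for a user-controlled value with a tiny random-mutation fuzz loop that hammers it. The field name, pattern, and length limits are illustrative assumptions, not from the original post; the point is that validation rejects everything outside a known-good set, and fuzzing confirms malformed inputs fail safely.

```python
import random
import re

# Illustrative allowlist: usernames of 3-20 ASCII letters, digits, underscores.
# (The specific field and limits are assumptions for this example.)
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    """Accept only values matching the allowlist; reject everything else."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def mutate(seed: str, rng: random.Random) -> str:
    """Minimal random mutation: overwrite one byte with a random byte."""
    data = bytearray(seed, "ascii")
    data[rng.randrange(len(data))] = rng.randrange(256)
    return data.decode("latin-1")

def fuzz(seed: str, iterations: int = 1000) -> None:
    """Feed mutated inputs to the validator; it must accept or raise cleanly."""
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(iterations):
        try:
            validate_username(mutate(seed, rng))
        except ValueError:
            pass  # rejection is the safe, expected outcome for bad input
```

Running `fuzz("alice_99")` exercises the validator with a thousand corrupted variants; any exception other than the deliberate `ValueError` would signal an input-handling bug worth investigating.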

Static code analysis tools are now built into most integrated development environments (IDEs) for the major coding languages and should be enabled, configured, and used religiously by all developers before they check any code into their code management systems (CMS). These tools are effective at keeping known vulnerability attack vectors out of submitted code, and they also yield higher-quality code. Code quality is directly correlated with application security, so good, clean code will be inherently more secure.
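To make this concrete, here is a pattern that common Python static analyzers flag: `eval()` on untrusted input (for example, Bandit's B307 check warns on it). The safer alternative shown, `ast.literal_eval`, parses only Python literals and raises on anything executable. The `parse_config_value` helper is a hypothetical name for this sketch.

```python
import ast

# Flagged pattern (do not do this): eval(raw) executes arbitrary code
# supplied by the user, which static analyzers such as Bandit warn about.

def parse_config_value(raw: str):
    """Parse a user-supplied literal (list, dict, number, string) safely.

    ast.literal_eval accepts only Python literal syntax and raises
    ValueError/SyntaxError for expressions, calls, or imports.
    """
    return ast.literal_eval(raw)
```

The analyzer's warning is cheap to heed here: the safe version is a one-line substitution, which is why running these checks before every commit pays off.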

Code review from a security point of view is also an important aspect of secure coding, as the Heartbleed vulnerability so clearly demonstrated. No static analysis program was able to find this seemingly simple input validation error prior to the publication of the vulnerability. However, an additional validation step of code review for security issues by multiple reviewers, rather than a lone reviewer, could have made the difference.
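Heartbleed's core mistake, trusting a client-declared length instead of the actual payload size, can be sketched in a few lines. This is an illustrative Python miniature, not the OpenSSL C code; the function names and the toy `memory` buffer are assumptions for the example.

```python
# Heartbleed in miniature: the TLS heartbeat echoed back `claimed_len`
# bytes while trusting the client's stated length, not the real one.

def heartbeat_vulnerable(payload: bytes, claimed_len: int, memory: bytes) -> bytes:
    """Echo claimed_len bytes starting at the payload's location in memory.

    When claimed_len exceeds len(payload), adjacent memory leaks out.
    """
    start = memory.index(payload)
    return memory[start:start + claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    """The fix: reject any length claim larger than the actual payload."""
    if claimed_len > len(payload):
        raise ValueError("claimed length exceeds payload")
    return payload[:claimed_len]
```

A reviewer asking "what if the claimed length is bigger than the payload?" is exactly the attacker-mindset question that static tools missed and a second pair of eyes might have caught.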

Even with a fully implemented SSDLC there will be times when a previously unknown (or unfixed) bug becomes tomorrow’s vulnerability du jour. Developers will still need to quickly react to quantify impact, determine a remediation or develop a patch, and perform a root cause analysis so similar weaknesses can be found and corrected quickly. Attackers will often look for additional related vulnerabilities, in case the responsible developers only fix the specific vulnerability exploited.

The goal is that with a well-implemented SSDLC we can reduce the frequency of these resource-diverting events and reduce risk to both ourselves and our customers. Learn more about how CenturyLink Cloud can provide the enterprise-level security, speed, performance, and availability to meet your business demands.