October is Cybersecurity Awareness Month and, in that spirit, I’d like to shed some light on a cybersecurity topic that is both increasingly important and frequently misunderstood.
In his proclamation declaring October National Cybersecurity Awareness Month, President Obama said, “Keeping cyberspace secure is a matter of national security, and in order to ensure we can reap the benefits and utility of technology while minimizing the dangers and threats it presents, we must continue to make cybersecurity a top priority.”
I agree that cybersecurity is key to taking advantage of the incredible innovations technology brings to our lives. I also feel strongly that software is at the core of most of these innovations. Yet the systemic risk to our economy and national security brought on by vulnerabilities in software code does not get the attention it deserves.
The fact is Verizon’s Data Breach Investigations Report shows that web application attacks are now the most frequent pattern in confirmed breaches. So why isn’t every organization talking about application security? Part of the reason is that there are a lot of misconceptions, so I’d like to set the record straight and debunk a few of the more common application security “fallacies.”
Fallacy: Implementing an application security program is expensive.
Reality: Not so much anymore, especially when you consider the downstream cost savings.
Cloud-based application security solutions changed the cost game. You no longer need to purchase or maintain expensive equipment or hire specialized staff. And the cost of a breach can be staggering. Juniper Research recently predicted that the cost of data breaches will increase to $2.1 trillion globally by 2019. The risk of an expensive breach, not to mention the long-term fallout that often follows, far outweighs the cost of implementing application security.
Fallacy: Firewalls, antivirus, and network security cover applications.
Reality: These technologies may protect you in some ways, but they do not protect against attacks targeting your applications.
One of the reasons that cyberattackers have turned their attention to web-facing applications is that most enterprises are proficient at hardening traditional perimeters with next-generation firewalls, IDS/IPS systems and endpoint security solutions, making applications a more attractive target.
For instance, firewalls were designed to handle network events, such as finding and blocking botnets and remote access exploits, but they don’t get as granular as the application level. Some network security solutions do address certain application-level events, but they require significant effort to configure and monitor, leading to security inefficiencies. Ultimately, it’s like trying to eat spaghetti with a spoon. You can ... but it’s not the way to do it.
Fallacy: One single technology can secure all applications.
Reality: There is no solution to rule them all. In my work at Veracode, I have found significant differences in the types of vulnerabilities commonly discovered by dynamic testing, which examines applications while they are running, versus static testing, which analyzes the raw code. Static testing can help identify vulnerabilities inherent in the code, while dynamic testing provides a valuable outside perspective. By combining both techniques across the life of the application, organizations can find and address more kinds of vulnerabilities and drive down application risk.
In other words, only using one kind of application security testing during the software development lifecycle means missing many vulnerabilities.
Fallacy: Covering only business-critical applications is enough for success.
Reality: Some of the recent major breaches stemmed from applications considered “non-critical.” For instance, JPMorgan was recently breached after a third-party website for its annual charity road race was compromised, exposing employee credentials that were leveraged to log into a misconfigured JPMorgan server. The site was hardly a business-critical application and was not even under the direct control of JPMorgan, but hackers found a vulnerability in the third-party website and used it to their advantage.
Cyberattackers look for the path of least resistance into an organization, and that path is often through less-critical and third-party applications.
Fallacy: Developers won’t change their agile processes to incorporate application security.
Reality: The data shows they do, and increasingly, they won’t need to.
Application security is in the midst of a major transformation: it is becoming less driven by security professionals and more driven by frontline developers. With the rise of DevOps and continuous deployment/release models, the old ways of inserting security into software development after the fact are no longer viable. In today’s development environment, developers can’t be held back waiting for the security team’s review. As such, security professionals must adapt to the new developer processes, not the other way around. Application security solutions are increasingly designed to fit the way developers work and to integrate seamlessly and automatically into developers’ workflows. Data from our customers, and third-party research from TechTarget, show that DevOps teams are coming to see security as part of their responsibility.
Ultimately, cybersecurity awareness should include software. The reality is that the world is increasingly powered by software, and ignoring application security is a dangerous oversight. Make sure you know how application security really works so you can make the right decisions to keep your business safe.
This article is published as part of the IDG Contributor Network.