Attackers have long targeted application vulnerabilities in order to breach systems and steal data, but recently they've been skipping a step and going directly after the tools developers use to actually build those applications.
Consider the news that broke earlier this year about how the CIA allegedly attempted to compromise Apple's development software, Xcode. Such a breach could mean that every app built with the development environment would, in turn, contain malware enabling its creators to spy and snoop on the people who installed those apps, as The Intercept reported in the story The CIA Campaign To Steal Secrets: "The security researchers also claimed they had created a modified version of Apple's proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool."
To be sure, the tools developers use to build the apps they ultimately ship make a juicy target for attackers, and compromising those tools poses a dangerous and significant threat to enterprises. Consider the brute-force attacks that targeted the popular source code repository GitHub in 2013: after numerous accounts were compromised, GitHub banned what it considered weak passwords and implemented rate limiting for login attempts.
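The rate limiting GitHub applied to login attempts can be illustrated with a minimal sliding-window sketch. This is a hypothetical example, not GitHub's actual implementation; the class name, thresholds, and keying by source are all assumptions made for illustration.

```python
import time
from collections import defaultdict, deque


class LoginRateLimiter:
    """Allow at most max_attempts login attempts per source within a sliding window.

    A hypothetical sketch of login rate limiting; real services layer this
    with lockouts, CAPTCHAs, and global limits.
    """

    def __init__(self, max_attempts=5, window=60.0):
        self.max_attempts = max_attempts
        self.window = window  # seconds
        self.attempts = defaultdict(deque)  # source -> timestamps of recent attempts

    def allow(self, source, now=None):
        """Return True if this attempt is permitted, recording it if so."""
        now = time.monotonic() if now is None else now
        q = self.attempts[source]
        # Discard attempts that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # too many recent attempts from this source
        q.append(now)
        return True
```

A caller would check `allow(client_ip)` before processing each credential submission and reject the attempt (or force a delay) when it returns `False`.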
That GitHub attack and the attack on Xcode aren't isolated incidents. Just last week Apple acknowledged that its App Store endured a significant breach involving thousands of apps. The compromise was made possible when Chinese developers downloaded counterfeit copies of Xcode that were tainted with malware dubbed XcodeGhost. XcodeGhost compromises the Xcode integrated development environment in such a way that any app built with the tainted version is itself compromised. While Apple removed the infected apps, an estimated 4,000-plus tainted apps are believed to have made it into the App Store. Also, in 2013, Apple's Dev Center was taken down for an extended period, with many developers reporting that Apple forced them to reset their passwords.
J. Wolfgang Goerlich, strategist with IT risk management firm CBI, explains why the recent spate of attacks on Apple's development tools is notable. "The number of OS X computers continues to rise in the enterprise environment. Few organizations are considering Macs [from a security perspective], as the numbers have long been small and most [security] controls are Windows-based," he says.
"These types of attacks -- infecting the compiler -- used to be considered a potential threat by high-security government organizations. You would be considered paranoid to present such a scenario as something that could impact the general public. And yet here we are," says Yossi Naar, co-founder of Cybereason, a provider of breach detection software.
If these types of two-stage attacks are no longer threats only to the paranoid, and enterprise development environments are being targeted, what does this mean for enterprises trying to ensure they develop and deploy secure applications?
"From a development perspective, the best practices in continuous integration and deployment would have prevented the attack [against Apple's App Store]," says Goerlich.
Chris Camejo, director of threat and vulnerability analysis for NTT Com Security, would agree. "This should be obvious, but developers (and anyone else for that matter) should only use software from trusted sources like a vendor's website or official app store, or verify that software packages they've downloaded haven't been tampered with by verifying the software's digital signatures when available," says Camejo.
Sri Ramanathan, CTO of mobile app development platform Kony, says the same holds true for open source software. "To protect developers, enterprises need to ensure that any software used has been vetted and certified as safe for use. Vigilance must be maintained on open source software modules in particular," he says. When it comes to Kony's development environment, Ramanathan says that Kony developers working on a product cannot use open source software unless it's specifically approved, and that every piece of software is statically and dynamically scanned before and after being approved for use.
"We also use a battery of internal and external pen tests to periodically certify all our runtimes. And we ensure that any open source software we use originates from a vibrant, trusted community, is actively supported, does not have too many known security issues (known issues can and should be mitigated) and is well documented," Ramanathan explains.
For enterprises, it's important that developers and the software development chain be protected like any other users and assets, perhaps more so in many instances. "For other tool chains, particularly open-source, it is important to verify the authenticity of the software before you use it. Most open-source projects provide cryptographic hashes that you can use to verify the authenticity of downloaded software," says Bobby Kuzma, CISSP, systems engineer at Core Security. "Treating build servers as secure systems, with advanced security controls, similar to what should be used when dealing with sensitive cryptographic materials will help gain control against this type of threat," Kuzma adds.
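The hash check Kuzma describes can be sketched in a few lines of Python using the standard library: compute the SHA-256 digest of the downloaded file and compare it against the checksum the project publishes. The function names and the idea of a "published hash" variable are illustrative assumptions; projects publish checksums in varying formats.

```python
import hashlib
import hmac


def sha256_of_file(path, chunk_size=8192):
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_download(path, published_hash):
    """Return True if the file's digest matches the project's published checksum."""
    actual = sha256_of_file(path)
    # Constant-time comparison; not strictly needed offline, but idiomatic.
    return hmac.compare_digest(actual, published_hash.strip().lower())
```

A mismatch means the download was corrupted or tampered with and should be discarded. Note that a hash fetched over the same insecure channel as the download proves little; signed checksums or hashes served over a trusted channel are stronger.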
Good advice for any development team. Enterprises also need to make certain developers work in a clean environment, using systems for development that are separate from those used to build apps, adds Goerlich. "The build machine is then kept in a secure hardened state, with the compiling automated. Even if the developers download malicious code such as XcodeGhost on their computers, the build computer is kept clean and what is submitted to the App Store is protected," he says.
"For enterprises, a strong network security management program that monitors for malware connecting out to command-and-control computers is the first line of defense when identifying attacks like XcodeGhost," Goerlich adds.
This story, "Developers find themselves in hackers' crosshairs" was originally published by CSO.