Wednesday, November 15, 2017

Essential (and free) security tools for Docker

Docker makes it easy for developers to package up and push out application changes, and spin up run-time environments on their own. Maybe too easy.

With Docker, developers can make their own decisions on how to configure and package applications. But this also means that they can make simple but dangerous mistakes that will leave the system unsafe without anyone noticing until it is too late.

Fortunately, there are some good tools that can catch many of these problems early, as part of your build pipelines and run-time configuration checks. Toni de la Fuente maintains a helpful list of Docker security and auditing tools here.

Unfortunately, many of the open source projects in this list have been shelved or orphaned. So I’ve put together a short list of the essential open source tools available today to help you secure your Docker environment.

Check your container configuration settings

As part of your build process and continuous run-time checks, it is important that you enforce safe and consistent configuration defaults for containers and the hosts that they run on.

The definitive guide for setting up Docker safely is the CIS Docker Benchmark, which lists over 100 recommendations and best practices for hardening the host configuration and the Docker daemon configuration (including Swarm configuration settings), file permission rules, container images and build file management, container runtime settings, and operations practices.

The Docker security team has provided a free tool, Docker Bench for Security, that checks Docker containers against this hardening guide (although the tests are organized a bit differently – the Swarm checks are all run together in a separate section for example). Docker Bench is updated for each release of the CIS benchmark guide, which is updated with each release of Docker, although there tends to be a brief lag.

Docker Bench ships as a small container which runs with high privilege, and executes a set of tests against all containers that it can find. Tests return PASS or WARN (clear fail) status, or INFO (for findings that need to be manually reviewed to see if they match expected results). NOTEs are printed for manual checks that need to be done separately.

After you run Docker Bench, you will need to work through fussy detailed findings and decide what makes sense for your environment. Docker Bench is an auditing tool, designed to be run and reviewed manually. Docker Bench Test shows how you can run Docker Bench in an automated test pipeline, by wrapping it inside the Bats test framework, although unfortunately it hasn’t been updated for a couple of years.
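
If you want to gate a build on these results yourself, the audit output is easy to post-process. Here is a minimal sketch (in Python) that runs the audit script and fails the pipeline if there are any clear failures. It assumes that the docker-bench-security repo is cloned onto the build host, that the script runs with the privileges it needs (normally root), and that findings are still prefixed with [WARN]:

    import subprocess
    import sys

    # Run the Docker Bench audit script. Assumes the docker-bench-security
    # repo is checked out locally and that this runs with enough privilege
    # to talk to the Docker daemon and read its config files.
    result = subprocess.run(
        ["sh", "docker-bench-security.sh"],
        cwd="docker-bench-security",
        capture_output=True,
        text=True,
    )

    # Docker Bench prefixes each finding with [PASS], [WARN], [INFO] or [NOTE].
    warnings = [line for line in result.stdout.splitlines() if "[WARN]" in line]
    for line in warnings:
        print(line)

    # Treat any clear failure as a broken build.
    if warnings:
        print(f"{len(warnings)} WARN findings - failing the build")
        sys.exit(1)

In practice you would whitelist the WARN findings that you have already reviewed and accepted, rather than failing on all of them.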

Another free auditing tool from the Docker security team is Actuary. According to Diogo Monica at Docker, Actuary checks the same rules as Docker Bench (for now), but runs across all nodes in a Docker Swarm. Actuary is positioned as a future replacement for Docker Bench: it is written in Go (instead of Bash scripts) and is more extensible, using configurable templates for checking and testing.

Image scanning and policy enforcement

In addition to making sure that your container run-time is configured correctly, you need to ensure that all of the image layers in a container are free from known vulnerabilities. This is done by static scanning of “cold images” in repos, or before they are pushed to a repo, as part of your image build process.

Commercial Docker customers can take advantage of Docker Security Scanning (DSS) (fka Nautilus) to automatically and continuously check images in private registries on Docker Hub or Docker Cloud for known vulnerabilities. DSS is also used to scan Official Repositories on Docker Hub.

If you’re using open source Docker, you’ll need to do your own checking. There are a few good open source tools available, all of which work basically the same way:

  • Scan the image (generally a binary scan), pull apart the layers, and build a detailed manifest or bill of materials of the contents
  • Take a snapshot of OS and software package vulnerability data
  • Compare the contents of the image manifest against the list of known vulnerabilities and report any matches
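
The matching step at the heart of all of these tools is conceptually simple. Here is a rough sketch of the idea, with made-up package and feed data (real scanners use richer metadata and proper version range comparisons):

    # Conceptual sketch only: compare the packages recorded in an image
    # manifest against a snapshot of a vulnerability feed. The data
    # structures here are invented for illustration.

    manifest = {
        "openssl": "1.0.1e",   # package name -> installed version
        "bash": "4.3-7",
    }

    vulnerability_feed = [
        {"package": "openssl", "affected_version": "1.0.1e", "cve": "CVE-2014-0160"},
        {"package": "glibc", "affected_version": "2.17", "cve": "CVE-2015-0235"},
    ]

    findings = [
        vuln for vuln in vulnerability_feed
        if manifest.get(vuln["package"]) == vuln["affected_version"]
    ]

    for finding in findings:
        print(f"{finding['cve']}: {finding['package']} {finding['affected_version']}")

Most of the hard work in a real scanner is in the other steps: accurately identifying what is actually installed in each layer, and keeping the vulnerability data fresh and correctly mapped to each distro's package names and version schemes.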

The effectiveness of these security scanning tools depends on:

  1. Depth and completeness of static analysis – the scanner’s ability to see inside image layers and the contents of those layers (packages and files)
  2. Quality of vulnerability feeds – coverage, and how up to date the vulnerability lists are
  3. How results are presented – is it clear what the problem is, where to find it, and what to do about it
  4. De-duplication and whitelisting capabilities to reduce noise
  5. Scanning speed

First, there is Clair from CoreOS, the scanning engine used in the Quay.io public container registry (an alternative to Docker Hub). Clair is a static analysis tool for Docker and appc containers, which scans an image and compares the vulnerabilities found against a whitelist to see if they have already been reviewed and accepted. It can be controlled through a JSON API or CLI.
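
As a rough illustration of what driving Clair looks like, here is a hedged sketch against Clair's v1 JSON API: you submit each layer for indexing, then read back the features and vulnerabilities that were found. The endpoint paths, payload fields, and the layer URL below are assumptions to check against the Clair docs for the version you are running:

    import requests

    CLAIR = "http://localhost:6060"   # assumed Clair API address
    LAYER = "sha256:deadbeef"         # layer digest (placeholder)

    # Ask Clair to index a layer. The Path must be a URL that Clair itself
    # can fetch the layer tarball from (for example, a registry blob URL).
    requests.post(f"{CLAIR}/v1/layers", json={
        "Layer": {
            "Name": LAYER,
            "Path": f"http://registry:5000/v2/myapp/blobs/{LAYER}",
            "Format": "Docker",
        },
    }).raise_for_status()

    # Read back the detected features and any known vulnerabilities.
    layer = requests.get(
        f"{CLAIR}/v1/layers/{LAYER}",
        params={"features": "", "vulnerabilities": ""},
    ).json()["Layer"]

    for feature in layer.get("Features", []):
        for vuln in feature.get("Vulnerabilities", []):
            print(feature["Name"], feature["Version"], vuln["Name"], vuln["Severity"])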

If you’re using OpenSCAP, there is the oscap-docker utility, which can be used to scan Docker images and running containers for CVEs and for compliance violations against SCAP policy guides.

Anchore is a powerful and flexible automated scanning and policy enforcement engine that is easy to integrate into your CI/CD build pipelines to check for CVEs – and much more – in Docker images. You can create whitelists (to suppress findings that you’ve determined are not exploitable) and blacklists (for required packages or banned packages, and prohibited content such as source code or secrets), as well as custom checks on container or application configuration rules, etc.

Anchore is available as a free SaaS online Navigator for public registries, and as an open source engine for on-premises scanning. The scanning engine can be wired into your CI/CD pipelines using the CLI, the REST API, or a Jenkins plug-in, to automatically analyze images as changes are checked in, and to fail the build if checks don’t pass. A nice overview of running Anchore can be found here.
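
A build step along these lines is not much code. Here is a minimal sketch that adds an image, waits for the analysis to finish, and fails the build if the policy evaluation does not pass. It assumes the anchore-cli client and its image add / image wait / evaluate check commands, with the engine's URL and credentials supplied through the usual ANCHORE_CLI_* environment variables; adjust for the version you are running:

    import subprocess
    import sys

    IMAGE = "docker.io/myorg/myapp:latest"   # image to check (placeholder)

    def anchore(*args):
        # anchore-cli reads ANCHORE_CLI_URL / ANCHORE_CLI_USER / ANCHORE_CLI_PASS
        # from the environment, so nothing is configured here.
        return subprocess.run(["anchore-cli", *args])

    anchore("image", "add", IMAGE)    # submit the image for analysis
    anchore("image", "wait", IMAGE)   # block until the analysis is complete

    # 'evaluate check' exits non-zero if the image fails the active policy bundle.
    result = anchore("evaluate", "check", IMAGE, "--detail")
    if result.returncode != 0:
        print("Anchore policy evaluation failed - failing the build")
        sys.exit(1)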

Anchore comes with a built-in set of security and compliance policies, analysis functions and decision gates. You can write your own analysis modules and policies, reports and certification workflows in a high-level language, or extend the analysis engine with custom plugins.

You can also integrate the Anchore scanning engine with Anchore Navigator, so that you can define policies and whitelists using Navigator’s graphical editor. Anchore will subscribe to updates so that you will be automatically notified of new CVEs, or updates to images in public registries.

Anchore (the company) offers premium support subscriptions, and enterprise solutions to discover, explore and analyze images, with additional analysis modules and policies, data feeds, tooling, and workflow integration options.

Another new and ambitious open source container scanner is Dagda. Dagda builds a consolidated vulnerability database, taking snapshots of CVE information from NIST’s NVD, publicly-reported security bugs in the SecurityFocus Bugtraq database, and known exploits from the Offensive Security database. On top of that, it uses OWASP Dependency Check and Retire.JS to analyze application dependencies, so that it can identify known security vulnerabilities in Docker images. Dagda can be controlled through the command line or its REST API, and keeps a history of all checks for auditing and trend analysis.

It also runs ClamAV against Docker images to check for trojans and other malware, and integrates with Sysdig’s powerful (and free) Falco run-time anomaly checker to monitor containers on Linux hosts. Falco is installed as an agent on each host, which taps into kernel syscalls and filters against rules in a signature database to identify suspicious activity and catch attacks or operational problems on the host and inside containers.
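
Driving Dagda from a pipeline looks roughly like this. This is a hedged sketch that assumes Dagda's dagda.py command-line client and its vuln --init and check --docker_image options; the exact flags and the shape of the JSON responses are things to verify against the project README:

    import json
    import subprocess

    IMAGE = "myorg/myapp:latest"   # image to check (placeholder)

    # Periodic step: pull fresh CVE, exploit and bug data into Dagda's
    # local vulnerability database.
    subprocess.run(["python3", "dagda.py", "vuln", "--init"], check=True)

    # Kick off a static analysis of the image. Dagda analyzes images
    # asynchronously: the response includes an id that can be used later
    # (dagda.py history <image> --id <id>) to fetch the full report.
    result = subprocess.run(
        ["python3", "dagda.py", "check", "--docker_image", IMAGE],
        capture_output=True, text=True, check=True,
    )
    print(json.dumps(json.loads(result.stdout), indent=2))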

Dagda throws everything but the kitchen sink at container security. It is a lot of work to set this up and keep all of it working, but it shows you how far you can go without having to roll out a commercial container protection solution like Twistlock or AquaSec.

Don’t leave container security up to chance

What makes Docker so compelling is also what makes it dangerous: it takes work and decisions out of ops’ hands, and gives them to developers who may not understand (or care about) the details or why they are important. Using Docker moves responsibility for packaging and configuring application run-times from ops (who are responsible for making sure that this is done carefully and safely) to developers (who want to get it done quickly and simply).

This is why it is so important to add checks that can be run continuously to catch mistakes and known vulnerabilities in dependencies, and to enforce security and compliance policies when changes are made. The tools listed here can help you to reduce operational risks, without getting in the way of teams getting valuable work done.

Friday, September 29, 2017

Agile Application Security book

This is the first post in a while. I've been busy working on a bunch of projects. One of them is now finally complete: a book on Agile Application Security for O'Reilly, with Laura Bell, Michael Brunton-Spall, and Rich Smith.

In this book we try to build bridges between the security community and Agile teams, by taking advantage of our different experiences and viewpoints:

  • Rich's extensive experience as a pen tester, and running the security team at Etsy.
  • Michael's experience in hyperdrive Agile development, DevOps and security at The Guardian and the UK Digital Service.
  • Laura's work as a software developer and application security cat herder with large and small organizations in many different stages on their journeys to Agile adoption.
  • My work in development and operations in enterprise financial technology.

This is a unique book that looks at Agile from a security perspective, and security from an Agile perspective.

We explain the driving ideas and key problems in security, and the core enabling practices in Agile that help teams succeed, and how security programs can leverage Agile ideas and practices. How to deal with important risks and problems, and how to scale.

We look in detail at security practices and tools in an Agile context: threat and risk management, how to think about security in requirements, secure coding and code reviews, security testing in Continuous Integration and Continuous Deployment, what scanning can and cannot do for you, building hardened infrastructure and running secure systems, and putting all of this together into automated pipelines and feedback loops.

We also step through regulatory compliance and how to achieve continuous compliance; and how to get value from working with outsiders, including auditors, pen testers and bug bounty programs. We end with how to build an agile security culture and how to break down walls between engineers and security.

It was a unique opportunity to work with experts around the world: Michael in the UK, Laura in New Zealand, Rich in the US. Challenging, exhausting, and a great learning experience.

Our hope is that it offers value to developers who work in Agile environments and are new to security; to people in the security community who want to understand how security can keep up with high-velocity Agile and DevOps teams; and even to people who are expert in both.

Tuesday, July 19, 2016

Why you Should Attack Your Systems - Before "They" Do

You can't hack and patch your way to a secure system.

You will never be able to find all of the security vulnerabilities and weaknesses in your code and network through scanning, or by paying outsiders to try to hack their way in.

The only way to be secure is to design and build security in from the beginning:

  1. threat modeling and risk assessment when designing apps and networks
  2. understanding and using the security features of your languages and frameworks, and filling in any gaps with secure libraries like Apache Shiro or KeyCzar…
  3. hardening the run-time using guidelines like the CIS benchmarks and tools like Chef and Puppet and UpGuard
  4. carefully reviewing every change that you make to code and configuration before putting them into production
  5. training everybody involved so that they know what to do, and what not to do

This is hard work, and it is unavoidable.

So what's the point of penetration testing? Why do organizations like Intuit and Microsoft have Red Teams attacking their production systems? And why are Facebook and Google and even the US Department of Defense running bug bounty programs, paying outsiders to hack into their system and report bugs?

Because once you've done everything you know how to do - or everything that you think you need to do - to secure your system, the only way to find out whether you've done a good enough job is to attack your systems - before the bad guys do.

Attacking your system can show you where you are strong, and where you are weak: what you missed, where you made mistakes. It will uncover misunderstandings and highlight gaps in your design, in your defensive controls, and in your logging and monitoring. Watching your system under attack - watching what attackers do and how they do it, understanding what to look for and why, how to identify attacks and how to respond to them - will help change the way that you think and the way that you design and code and set up and run systems.

Let's look at different ways of attacking your system, and what you can learn from them:

Pen Testing

Pen testing - hiring an ethical hacker to scan and explore your application or network to find vulnerabilities and see what they can do with them - is usually done as part of due diligence, before a new system or a major change is rolled out, or once a year to satisfy some kind of regulatory obligation.

Pen testers will scan and test for common vulnerabilities and common mistakes in network and system configuration, missing patches, and unsafe default settings. They'll find mistakes in authentication and user setup logic, session management, and access control schemes. They'll look at logs and error messages to find information leaks and bugs in error handling, and they will test for mistakes in some business logic (at least for well-understood workflows like online shopping or online banking), trying to work around approval steps or limit checks.

Pen tests should act as a reality check. If they found problems, a bad guy could too - or already has.

Pen testers won't usually have enough time, or understand your system well enough, to find subtle mistakes, even if they have access to documentation and source code. But anything that they do find in a few days or a few weeks of testing should be taken seriously. These are real, actionable insights into weaknesses in your system – and weaknesses in how you built it. Why didn't you find these problems yourself? How did they get there in the first place? What do you need to change in order to prevent problems like this from happening again?

Some organizations will try to narrow the scope of the pen tests as much as possible, in order to increase their chance of getting a "passing grade" and move on. But this defeats the real point of pen testing. You've gone to the trouble and expense of hiring somebody smart to check your system security. You should take advantage of what they know to find as many problems as possible – and learn as much as you can from them. A good pen tester will explain what they found, how they found it, why it is serious, and what you need to do to fix it.

But pen testing is expensive and doesn't scale. It takes time to find a good pen tester, time to set up and run the test, and time to review, understand and triage the results before you can work on addressing them. In an Agile or DevOps world, where changes are being rolled out every few days or maybe several times a day, a pen test once or twice a year won't cut it.

Red Teaming

If you can afford to have your own pen testing skills in house, you can take another step closer to what it’s like dealing with real world attacks, by running Red Team exercises. Organizations like Microsoft, Intuit and Salesforce have standing Red Teams who continuously attack their systems – live, in production.

Red Teaming is based on military Capture the Flag exercises. The Red Team - a small group of attackers - tries to break into the system (without breaking the system), while a Blue Team (developers and operations) tries to catch them and stop them.

The Blue Team may know that an attack is scheduled and what systems will be targeted, but they won't know the details of the attack scenarios. While the Red Team’s success is measured by how many serious problems they find and how fast they can exploit them, the Blue Team is measured by MTTD and MTTR: how fast they detect and identify the attack, and how quickly they stop it, contain it, and recover from it.

Like pen testers, the Red Team's job is to find important vulnerabilities, prove that they can be exploited, and help the Blue Team to understand how they found the vulnerabilities, why they are important, and how to fix them properly.

The point of Red Teaming isn't just to find bugs - although you will find good bugs this way, bugs that definitely need to be fixed. The real value of Red Teaming is that you can observe how your system and your Ops team behave and respond under attack: you learn what an attack looks like, you train your team to recognize and respond to attacks, and, by exercising regularly, you get better at this.

Over time, as the Blue Team gains experience and improves, as they learn to respond to - and prevent - attacks, the Red Team will be forced to work harder, to look deeper for problems, to be more subtle and creative. As this competition escalates, as both teams push each other, your system - and your security capability - will benefit.

Intuit, for example, runs Red Team exercises the first day of every week (they call this “Red Team Mondays”). The Red Team identifies target systems and builds up their attack plans throughout the week, and publishes their targets internally each Friday. The Blue Teams for those systems will often work over the weekend to prepare, and to find and fix vulnerabilities on their own, to make the Red Team’s job harder. After the Red Team Monday exercises are over, the teams get together to debrief, review the results, and build action plans. And then it starts again.

Bug Bounties

Bug Bounty programs take one more step closer to real world attacks, by enlisting outsiders to hack into your system.

Outside researchers and white hat hackers might not have the insight and familiarity with the system that your own Red Team will. But Bug Bounties will give you access to a large community of people with unique skills, creativity, and time and energy that you can't afford on your own. This is why even organizations like Facebook and Google, who already hire the best engineers available and run strong internal security programs, have had so much success with their Bug Bounty programs.

Like Red Teaming, the rewards and recognition given to researchers drives competition. And like Red Teaming, you need to carefully establish - and enforce - ground rules of conduct. What systems and functions can be attacked, and what can't be. How far testers are allowed to go, where they need to stop, and what evidence they need to provide in order to win their bounties.

You can try to set up and run your own program, following guidelines like the ones that Google has published, or you can use a platform like Bugcrowd (https://bugcrowd.com/) or HackerOne (https://hackerone.com/) to manage outside testers.

Automated Attacks

But you don't have to wait until outsiders - or even your own Red Team - attack your system to find security problems. Why not attack the system yourself, every day, or every time that you make a change?

Tools like Gauntlt and BDD-Security can be used to run automated security tests and checks on online applications in Continuous Integration or Continuous Delivery, every time that code is checked in and every time that the system configuration is changed.

Gauntlt (http://gauntlt.org/) is an open source testing framework that makes it easy to write security tests in a high-level, English-like language. Because it uses Cucumber under the covers, you can express tests in Gherkin's familiar Given {precondition} When {execute test steps} Then {results should/not be} syntax.

Gauntlt comes with attack adaptors that wrap the details of using security and pen testing tools, and sample attack files for checking your SSL configuration using sslyze, testing for SQL injection vulnerabilities using sqlmap, checking the network configuration using nmap, running simple web app attacks using curl, scanning for common vulnerabilities using arachni, dirb and garmr, and checking for serious vulnerabilities like Heartbleed.

BDD-Security (https://github.com/continuumsecurity/bdd-security) is another open source security testing framework, also based on Cucumber. It includes SSL checking (again using sslyze), scanning for run-time vulnerabilities using Nessus, and it integrates nicely with Selenium, so that you can add automated tests for authentication and access control, and run web app scans using OWASP ZAP as part of your automated functional testing.

All of these tests can be plugged in to your CI/CD pipelines so that they run automatically, every time that you make a change, as a security smoke test.
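
The wiring itself is simple. Here is a minimal sketch of a pipeline gate that runs Gauntlt; it assumes gauntlt is installed on the build agent, that your attack files live in a (made-up) security-tests/ directory, and that gauntlt exits with a non-zero status when an attack fails:

    import subprocess
    import sys

    # Run the Gauntlt attack files as a security smoke test. By default
    # gauntlt picks up the .attack files under the working directory and
    # exits non-zero if any attack step fails - exactly what a CI gate needs.
    result = subprocess.run(["gauntlt"], cwd="security-tests")

    if result.returncode != 0:
        print("Security smoke test failed - blocking the release")
        sys.exit(result.returncode)

    print("Security smoke test passed")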

You can take a similar approach to attack your network.

Startups such as

provide automated attack platforms which simulate how adversaries probe and penetrate your systems, and report on any weaknesses that they find.

You can automatically schedule and run pre-defined attacks and validation scenarios (or execute your own custom attacks) as often as you want, against all or parts of your network. These platforms scale easily, and provide you with an attacker's view into your systems and their weaknesses. You can see what attacks were tried, what worked, and why. You can use these tools for regular scanning and testing, to see if changes have left your systems vulnerable, to evaluate the effectiveness of a security defense tool, or, like Red Teaming, to exercise your incident response capabilities.

Running automated tests or attack simulations isn't the same as hiring a pen tester or running a Bug Bounty program or having a real Red Team. These tests have to be structured and limited in scope, so that they can be run often and provide consistent results.

But these tools can catch common and serious mistakes quickly - before anybody else does. They will give you confidence as you make changes. And they can be run continuously, so that you can maintain a secure baseline.

Why you need to Attack Yourself

There is a lot to be gained by attacking your systems. You'll find real and important bugs and mistakes - bugs that you know have to be fixed.

You can use the results to measure the effectiveness of your security programs, to see where you need to improve, and whether you are getting better.

And you will learn. You'll learn how to think like an attacker, and how your systems look from an attacker's perspective. You'll learn what to watch for, how to identify an attack, how to respond to attacks and how to contain them. You'll learn how long it takes to do this, and how to do it faster and easier.

You'll end up with a more secure system - and a stronger team.

Thursday, June 16, 2016

Dev-Sec.io Automated Hardening Framework

Automated configuration management tools like Ansible, Chef and Puppet are changing the way that organizations provision and manage their IT infrastructure. These tools allow engineers to programmatically define how systems are set up, and automatically install and configure software packages. System provisioning and configuration becomes testable, auditable, efficient, scalable and consistent, from tens to hundreds or thousands of hosts.

These tools also change the way that system hardening is done. Instead of following a checklist or a guidebook like one of the CIS Benchmarks, and manually applying or scripting changes, you can automatically enforce hardening policies or audit system configurations against recognized best practices, using pre-defined hardening rules programmed into code.

An excellent resource for automated hardening is a set of open source templates originally developed at Deutsche Telekom, under the project name "Hardening.io". The authors have recently had to rename this hardening framework to Dev-Sec.io.

It includes Chef recipes and Puppet manifests for hardening base Linux, as well as for SSH, MySQL and PostgreSQL, Apache and Nginx. Ansible support at this time is limited to playbooks for base Linux and SSH. Dev-Sec.io works on Ubuntu, Debian, RHEL, CentOS and Oracle Linux distros.

For container security, the project team have just added an InSpec profile for Chef Compliance against the CIS Docker 1.11.0 benchmark.

Dev-Sec.io is comprehensive and at the same time accessible. And it’s open, actively maintained, and free. You can review the rules, adopt them wholesale, or cherry pick or customize them if needed. It’s definitely worth your time to check it out on GitHub: https://github.com/dev-sec
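
If you want to run these checks from a pipeline rather than by hand, here is a minimal sketch of a gate step. It assumes the InSpec CLI is installed on the node and can fetch the dev-sec linux-baseline profile straight from GitHub, and that InSpec exits with a non-zero status when controls fail:

    import subprocess
    import sys

    # Audit the local host against the dev-sec Linux baseline. InSpec can
    # fetch and execute a profile directly from a GitHub URL.
    result = subprocess.run(
        ["inspec", "exec", "https://github.com/dev-sec/linux-baseline"]
    )

    # A non-zero exit code means one or more hardening controls failed,
    # so this can act as a hard gate in a provisioning or build pipeline.
    if result.returncode != 0:
        print("Hardening baseline checks failed")
        sys.exit(result.returncode)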

Thursday, June 2, 2016

DevOpsSec: Using DevOps to Secure DevOps

I finished writing an e-book for O'Reilly on DevOpsSec: Securing Software through Continuous Delivery. It explains how to wire security into Continuous Delivery, and how to use Continuous Delivery and programmable Infrastructure as Code and other DevOps practices to build and operate more secure systems. It is based on approaches followed by organizations like Etsy, Netflix, LMAX, Amazon, Intuit, Google, and others, including my own firm.

The e-book is available for free download at: http://www.oreilly.com/webops-perf/free/devopssec.csp. I'd appreciate feedback and corrections.

Monday, April 18, 2016

DevOpsDays: Empathy, Scaling, Docker, Dependencies and Secrets

Last week I attended DevOpsDays 2016 in Vancouver. I was impressed to see how much the DevOps community has grown since I attended my first DevOpsDays event in Mountain View in 2012. There were more than 350 attendees, all of them doing interesting and important work.

Here are the main themes that I followed at this conference:

Empathy – Humanizing Engineering and Ops

There was a strong thread running through the conference on the importance of the human side of engineering and operations, understanding and empathizing with people across the organization. There were two presentations specifically on empathy: one from an engineering perspective by Joyent’s Matthew Smillie, and another excellent presentation on the neuroscience of empathy by Dave Mangot at Librato, which explained how we are all built for empathy and that it is core to our survival. There was also a presentation on gender issues, and several breakout sessions on dealing with people issues and bringing new people into DevOps.

Another side to this was how we use tools to collaborate and build connections between people. More people are depending more on – and doing more with – chat systems like HipChat and Slack to do ChatOps: using chat as a general interface to other tools, and leveraging bots like Hubot to automatically trigger and guide actions, such as tracking releases and handling incidents.

In some organizations, standups are being replaced with Chatups, as people continue to find new ways to engage and connect with other people working remotely and inside and outside of teams.

Scaling DevOps

All kinds of organizations are dealing with scaling problems in DevOps.

Scaling their organizations. Dealing with DevOps at the extremes: making it work in really large organizations, and figuring out how to do DevOps effectively in small teams.

Scaling Continuous Delivery. Everyone is trying to push out more changes, faster and more often in order to reduce risk (by reducing the batch size of changes), increase engagement (for users and developers), and improve the quality of feedback. Some organizations are already reaching the point where they need to manage hundreds or thousands of pipelines, or optimize single pipelines shared by hundreds of engineers, building and shipping out changes (or newly baked containers) several times a day to many different environments.

A common story for CD as organizations scale up goes something like this:

  1. Start out building a CD capability in an ad hoc way, using Jenkins and adding some plugins and writing custom scripts. Keep going until it can’t keep up.
  2. Then buy and install a commercial enterprise CD toolset, transition over and run until it can’t keep up.
  3. Finally, build your own custom CD server and move your build and test fleet to the cloud and keep going until your finance department shouts at you.

Scaling testing. Coming up with effective strategies for test automation where it adds most value – in unit testing (at the bottom of the test pyramid), and end-to-end system testing (at the top of the pyramid). Deciding where to invest your time. Understanding the tools and how to use them. What kind of tests are worth writing, and worth maintaining.

Scaling architecture. Which means more and more experiments with microservices.

Docker, Docker, Docker

Docker is everywhere. In pilots. In development environments. In test environments especially. And more often now, in production. Working with Docker, problems with Docker, and questions about Docker came up in many presentations, break outs and hallway discussions.

Docker is creating new problems at the start and end of the CD pipeline.

First, it moves configuration management upfront into the build step. Every change to the application, or to the stack that it is built on and runs on, requires you to “bake a new cake” (Diogenes Rettori at OpenShift) and build up and ship out a new container. This places heavy demands on your build environment. You need to find effective and efficient ways to manage all of the layers in your containers, caching dependencies and images to make builds run fast.

Docker is also presenting new challenges at the production end. How do you track and manage and monitor clusters of containers as the application scales out? Kubernetes seems to be the tool of choice here.

Depending on Dependencies

More attention is turning to builds and dependency management, managing third party and open source dependencies. Identifying, streamlining and securing these dependencies.

Not just your applications and their direct dependencies – but all of the nested dependencies in all of the layers below (the software that your software depends on, and the software that this software depends on, and so on and so on). Especially for teams working with heavy stacks like Java.

There was a lot of discussion on the importance of tracking dependencies and managing your own dependency repositories, using tools like Archiva, Artifactory or Nexus, and private Docker registries. And stripping back unnecessary dependencies to reduce the attack surface and run-time footprint of VMs and containers. One organization does this by continuously cutting down build dependencies and spinning up test environments in Vagrant until things break.

Docker introduces some new challenges, by making dependency management seem simpler and more convenient, and giving developers more control over application dependencies – which is good for them, but not always good for security:

  • Containers are too fat by default - they include generic platform dependencies that you don’t need and - if you leave this up to developers - developer tools that you don’t want to have in production.
  • Containers are shipped with all of the dependencies baked in. Which means that as containers are put together and shipped around, you need to keep track of what versions of what images were built with what versions of what dependencies and when, where they have been shipped to, and what vulnerabilities need to be fixed.
  • Docker makes it easy to pull down pre-built images from public registries. Which means it is also easy to pull images that are out of date or that could contain malware.

You need to find a way to manage these risks without getting in the way and slowing down delivery. Container security tools like Twistlock can scan for vulnerabilities, provide visibility into run-time security risks, and enforce policies.

Keeping Secrets Secret

Docker, CD tooling, automated configuration management tools like Chef, Puppet and Ansible, and other automated tooling create another set of challenges for ops and security: how to keep the credentials, keys and other secrets that these tools need safe - out of code and scripts, out of configuration files, and out of environment variables.

This needs to be handled through code reviews, access control, encryption, auditing, frequent key rotation, and by using a secrets manager like HashiCorp’s Vault.
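
For example, instead of baking a database password into a config file or an environment variable, a deployment script can fetch it from Vault at run time. Here is a minimal sketch against Vault's HTTP API; the secret/myapp path and the db_password key are placeholders, and it assumes a key/value secrets backend mounted at secret/:

    import os
    import requests

    VAULT_ADDR = os.environ["VAULT_ADDR"]     # e.g. https://vault.internal:8200
    VAULT_TOKEN = os.environ["VAULT_TOKEN"]   # ideally a short-lived, scoped token

    # Read the secret from Vault instead of keeping it in code, config files
    # or long-lived environment variables.
    response = requests.get(
        f"{VAULT_ADDR}/v1/secret/myapp",
        headers={"X-Vault-Token": VAULT_TOKEN},
    )
    response.raise_for_status()

    db_password = response.json()["data"]["db_password"]
    # Hand the credential to the process that needs it - and don't log it.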

Passion, Patterns and Problems

I met a lot of interesting, smart people at this conference. I experienced a lot of sincere commitment and passion, excitement and energy. I learned about some cool ideas, new tools to use and patterns to follow (or to avoid).

And new problems that need to be solved.

Wednesday, December 23, 2015

DZone's 2015 Guide to Application Security

DZone recently published a Guide to Application Security. It provides a good overview of effective appsec tools and practices, including my article 10 Steps to Secure Software, which looks at the latest release of OWASP's Proactive Controls project.