We are excited to welcome Richard Seiersen to the Wallarm advisory team. Richard brings a wealth of security experience from both start-ups and global companies, along with a unique perspective on making the impact of security measurable.
We have asked Richard to share some of his thoughts on what’s important in cyber security.
Based on your experience, what are some of the key things to remember when your responsibility is to implement security for a large-scale, API-first service?
From a product capabilities perspective, which includes APIs, here are the things I consider:
- “Think more, do less.” In product security this is best expressed in the design phase of the security development lifecycle (SDL). A simplified view of the SDL process is: Design, Development, and Deploy. The main practice in secure design is threat modeling, and if there is only one thing you’re going to do within the SDL, make it threat modeling. This was made abundantly clear to me while at GE. I hired a number of SDL leaders from Microsoft. Their threat modeling skills were unparalleled, and the value materialized in testing. It’s at this phase that security principles are infused into product architectures. Principles include high-level concepts like least privilege, fail-safe defaults, separation of privileges, etc. We then translate the principles into specific controls that address concerns like authentication/authorization, the handling of key material/tokens/secrets, encryption, etc.
- The next critical phase in the SDL is development. It’s here that we layer on static and dynamic code review and penetration testing. The key is ensuring integration into the development pipeline with a variety of quality gates and minimal false positives and negatives. Your threat model should also inform bespoke automated tests that you write to validate key controls and quality criteria.
- Normally “deploy” would come next in the SDL, but I think measurement, or metrics, must be called out in conjunction with development. Even with a well-executed SDL, some vulnerabilities will inevitably escape through the quality gates. They escape into various pre-production and production phases. There is a rate at which they do this, as well as a time to live (ttl). If your SDL is effective, you should ideally see both the rate and the ttl decrease even as the rate of software release increases. That is a mathematically unambiguous expression of “attack surface” and its reduction. I am writing and speaking much on this topic, as I believe it’s a gap in the security body of knowledge. A lack of quantitative measurement (think decision science as opposed to data science) is a void that prevents security from being the rigorous discipline it could be, and, I believe, wants to be.
- Despite all the process work above, both first- and third-party vulnerabilities inevitably escape into deployment. This is where, I think, we need more robots. Robots, AI, machine learning, and the like will be most effective when they learn from an effective SDL. Your robots should be integrated into the development pipeline so they can learn before deployment, particularly since you will (I hope) be security testing/attacking during development. This is what excites me about what Wallarm is doing.
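The “quality gates” mentioned above can be sketched as a simple pipeline check: scan findings come in, and the build fails if anything at or above a severity threshold is present. This is a minimal illustration; the field names, severity levels, and threshold are assumptions, not any specific SDL or Wallarm interface.

```python
# Illustrative CI quality gate: block the release when findings at or above
# the configured severity are present. Names and thresholds are assumptions.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, fail_at="high"):
    """Return (passed, blocking), where blocking lists findings at or above
    the fail_at severity. In CI, passed == False would fail the build."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

# Example: static-analysis output reduced to (id, severity) records
findings = [
    {"id": "SQLI-01", "severity": "critical"},
    {"id": "INFO-LEAK-07", "severity": "low"},
]
passed, blocking = gate(findings)
print(passed)                          # False: the critical finding blocks the release
print([f["id"] for f in blocking])    # ['SQLI-01']
```

The same shape works for dynamic scans and bespoke threat-model-driven tests; only the source of the findings changes.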
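The escape rate and ttl described above are simple to compute once you track when each escaped vulnerability was found and fixed. A minimal sketch, assuming a flat list of dated records (the data shape is an assumption for illustration):

```python
# Sketch of the two metrics: the rate at which vulnerabilities escape past the
# SDL quality gates (per release), and their time-to-live (ttl) once escaped.

from datetime import date

escaped = [  # vulnerabilities discovered after release
    {"found": date(2018, 1, 3), "fixed": date(2018, 1, 10)},
    {"found": date(2018, 2, 1), "fixed": date(2018, 2, 15)},
]
releases = 10  # releases shipped over the same period

escape_rate = len(escaped) / releases  # escapes per release
mean_ttl = sum((v["fixed"] - v["found"]).days for v in escaped) / len(escaped)

print(escape_rate)  # 0.2 escapes per release
print(mean_ttl)     # 10.5 days from discovery to fix
```

An effective SDL should show both numbers trending down over time even as the denominator (release count) grows.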
How did the application security tooling/approaches change with the adoption of DevOps practices and CI/CD, now that hundreds of developers commit their code to production every day?
DevOps is distinguished by continuous integration and continuous deployment. I do hear folks talk about SecDevOps, but I am still unclear on its definition. Continuous is a rate term: it’s something that is always running, always on. Always on in security means “secure by default.” A huge part of approaching “continuous” is increasing the number of defaults. Defaults also include producing secure libraries for key security use cases and language sets. Short of “always on,” we move to “PDQ” security, i.e. very short SLAs. For example, if there is a third-party vulnerability with a patch, automated regression kicks in and the software is promoted to production upon passing.
Do you believe AI is an applicable real-world technology for security solutions or just a marketing buzzword?
It certainly is a marketing buzzword. I prefer to think in terms of analytics. There is an analytics progression I observe in security: descriptive, heuristic, behavioral, predictive, and prescriptive analytics. There may be more, but this will do. I don’t think most folks in security really know what any of these terms mean. Even as a security leader, I spend a lot of time on these topics and I still get confused. But I have come to the conclusion that, as defenders, we absolutely compete with the bad guys on analytics. AI, or robots, falls into the domain of prescriptive analytics if you hold to the above taxonomy. I think the lion’s share of the cost of robots is the cost of feeding them. That, of course, is a very information theory/decision analytics-centric perspective, i.e. the cost of increasingly perfect information. This is why having robots that can feed off your SDL, CI/CD, and such is so key. There be treasure there (said in a pirate-sounding voice). I think it’s the key to making well-informed and lower-cost robots. Nobody talks about this; I guess Wallarm does now.
Richard is a security executive, author, and advisor with ~20 years of experience ranging from start-ups to global organizations. Richard currently holds the position of Chief Information Security Officer and VP of Trust at Twilio, where he is responsible for the company’s global security strategy. Prior to Twilio he was the GM/VP of Cyber Security and Privacy for GE Healthcare.
Richard has an extensive background in Information Security, Digital Risk Management, and Product Development, with an analytical bent.
He recently co-authored a decision analysis book called “How To Measure Anything In Cybersecurity Risk” (Wiley, 2016). As of 2018, “The Green Book” is required reading for the Society of Actuaries exam prep and is finding its way into numerous university cybersecurity programs.