Cybersecurity Best Practices to Adopt for Your Organization

Brett Stewart

This post is Part 2 of the Cybersecurity for Software as a Medical Device blog series, featuring an interview with Bruce Parr, a DevSecOps leader and innovator at Paylocity. The following are links to each part of this blog series:

Bruce’s career evolution and Paylocity’s enterprise evolution into DevSecOps

Brett: 

Alright, so we’ve been talking about the user perspective on security. Now can you tell us about your role at Paylocity?

 

Bruce: 

I’m the manager of the DevSecOps team. Before that, I was a senior DevSecOps engineer.

 

Brett: 

Awesome! 

 

Randy: 

I suspect Brett knows a lot more about this than I do, so humor me: just what exactly is DevSecOps?

 

Bruce:

DevSecOps is short for development, security and operations.[1] We’re the guys overseeing the integration of security all throughout the process of software development. At Paylocity, the DevSecOps team serves several roles within the organization, but it boils down to two key functions.

The first function is to serve as cybersecurity advisors to all of the engineering teams in the organization. We meet with our product and technology teams on a regular basis. They host an annual TechCon technology conference, which our InfoSec teams attend. They also have a monthly Day of Learning, as well as Product Briefings and Developer and Test Communities of Practice meetings. Our DevSecOps team members attend as many of these as we can and try to present relevant InfoSec knowledge.

Any time we have a question about something that they’re doing technology-wise – maybe they’re rolling out a new technology – we’ll ask for an advisory meeting, so we can go over it and understand it with them. And if they’re doing something where they don’t understand the security implications, they’ll reach out to us and we’ll advise them from a DevSecOps perspective.

The second function, which is really exciting to have been a part of, is leading the charge in automating everything we can. We automate our static application security testing (SAST) scans, and we’re building all the orchestration out for that. I built a custom Software Composition Analysis (SCA) scan[2] for all of our third-party dependencies. That software was built in such a way that once we get the permissions for our continuous integration/continuous delivery (CI/CD)[3] pipeline, any team will be able to go into a copy of the project, change the three parameters that they need, and they’re good to go.
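As a rough illustration of that copy-and-configure idea, here is a minimal Python sketch; the scanner command, parameter names, and URLs are hypothetical stand-ins rather than Paylocity’s actual tooling.

```python
#!/usr/bin/env python3
"""Minimal sketch of a parameterized SCA scan wrapper (hypothetical).
A team copies the project, edits the three PARAMS values, and the
CI/CD pipeline runs main() on every build."""

import json
import subprocess
import sys

# The three team-specific parameters Bruce alludes to; names are invented.
PARAMS = {
    "repo_url": "https://git.example.com/team-a/service.git",
    "manifest_path": "src/service/packages.config",  # third-party dependency list
    "report_channel": "team-a-security-alerts",      # where findings get posted
}


def run_sca_scan(params: dict) -> dict:
    """Invoke a (hypothetical) SCA scanner CLI and return its JSON findings."""
    result = subprocess.run(
        ["sca-scanner", "--repo", params["repo_url"],
         "--manifest", params["manifest_path"], "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def main() -> int:
    findings = run_sca_scan(PARAMS)
    highs = [f for f in findings.get("vulnerabilities", [])
             if f.get("severity") == "High"]
    print(f"{len(highs)} High-severity findings -> {PARAMS['report_channel']}")
    # Failing the pipeline step only on High findings is a policy choice,
    # meant to keep teams from being blocked by noise.
    return 1 if highs else 0


if __name__ == "__main__":
    sys.exit(main())
```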

 

Brett: 

So, your application security experts aren’t running security scans on your software products? Instead, the scans are automatically part of the development and testing process to check that the code is secure? It sounds like your application teams own their apps’ security on a day-to-day level, and that it’s not a separate audit function like in a more traditional model. Is that right?

 

Bruce: 

Yes. In the past, many organizations’ product development or engineering teams would create software and hand it over to application security believing it was ready to deploy. By the time application security got to start penetration testing it, they would find the vulnerabilities, and the product team would wind up with a stack of security debt that had to get fixed before the product could be released.

 

Brett: 

And today?


Bruce: 

That LinkedIn post you saw basically highlights our process and culture. We have enough people and enough teams that they can identify their own vulnerabilities and fix them before our DevSecOps team even has a chance to catch the security holes or advise on them. In essence, at a steady clip we’ve been maturing our security automation and incorporating security into the development process. And as we’ve matured more and more into an automation model, it’s slowly eroding the need, in certain areas, for my team to serve in an advisory role to our application developers. The advisory role will never go away completely, but the time we used to spend advising developers will shift toward the truly high-value, tough new security challenges rather than the known-knowns of security issues.


Evolving into DevSecOps

Brett: 

So how did you get here from there? What did it take to make that happen? I mean, software, software engineering organizations, culture, technology… There are a lot of things you have to get in sync to make that kind of huge leap forward.

 

Bruce: 

It’s a really interesting story. A few years ago, at the point we were talking about earlier, Paylocity was probably hovering around 2,000 to 2,500 employees.

People in the organization were looking around our industry – and every industry – and thinking more about cybersecurity. We were all reading the same front-page headlines, and we were all seeing major, high-profile, damaging breaches left, right and center. Other companies in our sector had been hit with something big.

I know that our leaders were worrying (for good reason) more and more about how we could get ahead of this before a breach. I’d assume that customers were also starting to ask more probing questions, looking for assurance that we could be trusted stewards of their HR and payroll data. That’s trust we’ve earned: trust that we can run your company’s HR and payroll.

At the same time, a bunch of our software engineers, including me, started looking around and saying, “We’re really kind of interested in security, and we want to start getting involved.” We’re always looking for the next big professional challenge, and this seemed like a really interesting one in terms of engineering – and an important one to boot. A small group of us engineers started talking about it and trying to figure out how we could get involved. Then we started talking with our Application Security (AppSec) team.

The first idea we came up with was to hire a bunch of engineers with deep application security expertise and embed them into every single product and tech team in the company. Well, that got shot down quickly (but very respectfully, I will say). Looking back, I’m sure it was seen as a harebrained solution thought up by a bunch of techies who didn’t understand the realities of running a business that manufactures software to support a business function.

And it’s a good thing that nobody bought into the approach, because it wasn’t the right solution to the problem. It was too much like the old-school approach of saying, “Let’s hire a bunch of security people and spread them everywhere.” Looking back on it, I can clearly see that it would have been both far more expensive and far less effective than where we’ve landed today.

Interestingly, our AppSec team took a modified version of our idea, one that was a lot more modern in its approach. Instead of hiring security engineers onto every team, we identified and nurtured select engineers who were already working successfully on each team (in terms of everything but security) and helped them learn security, the same way that someone might learn a new Agile team technique or software development framework. We created a concept we called Ninjas and Champions. “Champions” are software engineers who receive broader security training. “Ninjas” are their counterparts on the test engineering side. Both get certified by the application security team.

We first trialed the idea with a couple of friendly teams. I was actually one of the very first champions when I was on a software team. I went through all the training and I started doing penetration tests on my team’s software. I even found a couple of things and fixed them. 

Once we champions found our footing, we started providing guidance to all the other teams. So it wasn’t the application security team doing all this guidance on how to build secure software right from the start. I and the other champions were actually making videos to teach other software engineers how to build secure software, how to identify issues and how to remediate them.

What the application security team realized was that the teams doing this had fewer and fewer vulnerabilities being reported. So they expanded it to the next level and added a bunch more teams.

The key was, I think, that our senior management in IT, and people who had profit and loss responsibility, understood that security was now part and parcel of quality. We had to find a way to make it a priority and still churn out great software at scale and with a high velocity.

Another thing that came out of this was a lot more organizational visibility around security issues in the code. Nobody wants the responsibility for a breach on their watch. All of the group directors and category directors would get notified of any critical vulnerability that we discovered, and their teams were held accountable.

As a result, HR got involved last year. Now, in order to make the move from a software engineer up to senior software engineer or from test engineer to senior test engineer, you have to be security certified. That’s now part of the career development process. We have full buy-in from Management, Product, Engineering, and HR. All of us.

When you work here, you can see it. Several times a week there are announcements of a new security champion or a new security ninja. That’s just awesome and so exciting. Engineers always want to improve a process, and a home run like this after so much work is gratifying to see – especially in an area as important as cybersecurity.

 

Brett: 

The things that you’re describing are much more cultural and workflow related than they are a tool. I know tons of people in a lot of different organizations that have similar security tools. But what sounds different in your case is the commitment, from the management level on down, to say, “This is something that is part of quality. We are reprioritizing security from dealing with it at the end of the road to making it a first concern.”

 

Bruce:

Absolutely. Realistically, you could have an application with the best user interface (UI) in the world. You can look at that and say, “Our UI/UX team has developed this unbelievable UI that conforms to everything that the client has asked for.” From a software quality perspective, that UI is gold, right?  

But if the UI is leaking information, is that quality? If it doesn’t conform with the CIA triad – confidentiality, integrity and availability – then it’s not quality software. That’s the cultural aspect of it, from the executive buy-in on down. I can think of a great conversation where an executive basically said, “If we don’t have security, then our software is low quality, no matter how great the features.”

 

Randy:

So Bruce, what else was part of this secret sauce for DevSecOps?

 

Bruce: 

The other part of what makes things really tick is that everyone in our small DevSecOps group that grew out of this pilot, including me, is a software engineer. We’ve all built software from the ground up; we’ve supported it, we’ve enhanced it, we’ve debugged it and we’ve tested it.

Because everyone on our DevSecOps team comes from a hands-on software development background, we all understand how software actually gets built. We understand that it’s not just about having the right tools. Anyone can learn security tools, run scans, and come back with results. But understanding how software teams build software… that’s the cultural piece of it that is so crucial to our effectiveness.

What our DevSecOps team brings to the table is an understanding that the solutions we promote, and the tools we use to develop them, have to work with how software teams actually build software. If what we come up with keeps people from doing their job, if it becomes such a drag to ensure security that nothing happens, then we’ve been reduced to producing very low volumes of high-quality software and creating a lot of misery and discontent among everyone (including the executives footing the bill for the team).

 

Brett: 

It’s like you’ve made security into a feature, rather than a chore.

 

Bruce: 

Yeah, exactly!

 

Randy: 

As the least technical person in this conversation, let me make sure I’ve got this right. Basically, you didn’t say every single engineer in your organization has to be a security engineer. But every team must have somebody who has a certification or a “minor” in security, on the development side and the testing side.  

 

Bruce:

Yep.

 

Randy: 

Nice!

Orthogonal SaMD White Paper CTA Banner

Fostering Good Security Practices Through Gamification

Bruce: 

Another key thing we did to make security a part of our company culture is gamifying the whole process. For example, our application security team came up with a maturity model. When it started out, each team got a maturity model point for running a SAST scan, for running a dynamic application security testing (DAST) scan, or for being onboarded for repeated DAST or SAST scans. Then we added more points, so for every ninja or champion that the team has, they get another maturity model point. It quickly became a competition across the teams.
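Here is a toy sketch of how a scoreboard like that might be tallied; the team names, activities and one-point-per-event values are illustrative, not Paylocity’s actual model.

```python
# Toy sketch of a security maturity scoreboard; the events and the
# one-point-each scoring are invented for illustration.

from collections import Counter

# One point per qualifying activity or certified person, per team.
POINT_EVENTS = [
    ("team-alpha", "sast_scan_onboarded"),
    ("team-alpha", "dast_scan_onboarded"),
    ("team-alpha", "champion_certified"),  # security-trained software engineer
    ("team-beta", "sast_scan_onboarded"),
    ("team-beta", "ninja_certified"),      # security-trained test engineer
    ("team-alpha", "ninja_certified"),
]


def scoreboard(events):
    """Tally maturity points per team and rank teams for the competition."""
    points = Counter(team for team, _event in events)
    return points.most_common()


for team, score in scoreboard(POINT_EVENTS):
    print(f"{team}: {score} maturity points")
```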

Originally, it was impressive when a team had two or three maturity points. Then all of a sudden, a team jumps forward and their director finds out that they have five, six, even seven maturity points. The director is giving them high fives, and to the other directors they work with, it looks like their team is killing it. It’s a competition to be at the top.

 

Randy:

Is there also a spotlight on those at the bottom?

 

Bruce: 

I’m sure attention gets paid to that, but tough conversations aren’t held in front of others here. It’s not about shaming. It’s done privately, because we’re here to work together for success, not to publicly point fingers as a reaction to issues. Instead, people are helped out of the negative spotlight so that they can catch up and get back in the game.

I assume there are key performance indicators built into management performance around this, and that it’s baked into their annual goals and things like that. The leadership certainly behaves as if they’re incentivized to help their teams be competitive on these security maturity scales.

 

On Fostering Security While Being Invisible

Bruce: 

Another key part of the story of our success with DevSecOps is our understanding of how software teams operate in general, and how our software teams at Paylocity operate specifically. The DevSecOps mantra is, “Get your security in place and stay the hell out of everyone’s way.”  

The way that we do it on our team is to consciously try as hard as we can not to impede delivery. This means that anything we do, we try to make happen as a part of the normal course of software development. What we put in place can’t require our software engineers to change what they do to accommodate the need for secure code. While they know that shipping secure code is a core part of their job, they aren’t thinking of security as a separate “thing”. Security just sort of happens because it’s naturally in the air. At the end of the day, our goal is to be secure and invisible… sort of like when the Secret Service is guarding the college-aged child of a U.S. President living in the dorms.

If we can do security scans invisibly, then we’re doing our job. If we screw up someone’s build, well, we’re going to get unhappy questions and have to answer them. We’ll also have to do a root cause analysis to figure out why it happened in the first place, learn from that, and make sure that we don’t repeat that same error again. And that’s just as it should be!

 

Letting your application engineers work on the parts of security that they can do best

Bruce: 

One of the key aspects of this “blending into the work and making ourselves invisible” idea is the scanning tools that we’ve built. We have great technical talent here, and any of them can learn to use those tools; running them is not too hard. But running the tools is only one-tenth of the battle. The other nine-tenths of the challenge is interpreting the results that these scanning tools spit out, and understanding those results in context. It’s knowing how to identify false positives and quickly move on, especially when you are doing static application scanning.

 

Brett: 

So something that’s a false positive in one context – in one application – might not be a false positive in another situation?

 

Bruce: 

Exactly. Imagine somebody who’s a professional in information security discovering a vulnerability out in the wild, let’s say a remote code execution issue in software package “X.” The next thing the professional does (hopefully) is document that vulnerability in one of the public security databases: where it lives, how it works, anything else important they’ve figured out. In this hypothetical case, they’ve found that software package X has a remote code execution issue, and it’s rated as a High security risk.

We run our scans and we find that vulnerability that has been ranked as a High. Now we have a new problem. It isn’t that we now know we have a high-severity security risk in our software. It’s that, depending on how we are using that software, we might have a high-severity issue, but we might instead have a Low, or even a false positive.

That’s because the professional who discovered this vulnerability has no idea how we actually use package X. It may be that we’re using the package, but we’re not calling the method that causes the remote code execution. We would look at it and say, “Okay, it’s not a high-ranked issue – it’s more of a Medium.” And that interpretation of a high-severity issue being a lower-severity issue in the context of how we are using it changes the urgency and the timeframe in which our teams have to take action.

 

Randy: 

Right, because if you go through and call everything critical and everything a High, then the software teams are going to spend all their time treating everything as super-dangerous, and never actually have time to do what they are supposed to do, which is ship software.

 

Bruce: 

Yeah. The key is having people on staff, like our software engineers, who can look at this, do the code spelunking, and say, “Yes, I see that this High in this case is actually a High,” or, “I see that this potential High is just a Medium in this case.” Say, in this hypothetical case, the only way to get to this exploit is through remote code execution, and the only way to reach the vulnerability is to call a particular method in the code. But we’re not calling that method, and the package is actually behind authentication and authorization; you have to be on the network to get to it in the first place. So I, the developer, working in concert with our DevSecOps partners, make the decision to officially downgrade it for our context to a Medium severity.

The thing is, to make that kind of determination correctly and with any kind of speed, you have to understand the code base that forms the context this potential vulnerability sits in. That’s where the software engineering discipline and our software engineers come into play. Because no one, and I mean no one, knows the code and how that code works better than the team that wrote it in the first place. This vital task is being done in the most efficient and effective way by the people who are in the best position to do it.
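To make that triage logic concrete, here is a toy Python sketch; the fields, the downgrade rules and the package name are invented for illustration, since in practice this is a human judgment call made by the team that owns the code, not a fixed formula.

```python
# Toy sketch of the contextual re-rating Bruce walks through. The fields
# and downgrade rules are invented; real triage is a judgment call.

from dataclasses import dataclass

SEVERITIES = ["Low", "Medium", "High", "Critical"]


@dataclass
class Finding:
    package: str
    published_severity: str        # rating from the public vulnerability database
    calls_vulnerable_method: bool  # does our code reach the vulnerable path?
    behind_auth: bool              # gated by authentication/authorization?


def downgrade(severity: str, steps: int = 1) -> str:
    """Drop a severity by N levels, bottoming out at Low."""
    return SEVERITIES[max(SEVERITIES.index(severity) - steps, 0)]


def contextual_severity(f: Finding) -> str:
    """Re-rate a published severity for our own usage context."""
    severity = f.published_severity
    if not f.calls_vulnerable_method:
        severity = downgrade(severity)  # vulnerable code path never executed
    if f.behind_auth:
        severity = downgrade(severity)  # attacker must already be authenticated
    return severity


# Bruce's example: a published High that our context reduces to a Medium.
pkg_x = Finding("package-x", "High",
                calls_vulnerable_method=False, behind_auth=False)
print(contextual_severity(pkg_x))  # -> Medium
```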

Letting your DevSecOps engineers work on the parts of security that they can do best

Bruce: 

The key to success is more than just letting your application engineers focus on security for the code they know. It’s also about fostering the skills among your DevSecOps engineers so they focus on the code that they know best.

For example, take development tools. A lot of the tools out on the market, commercial or open source, are sound. The challenge is that they’re all developed by former developers, in the context of what those developers know. Which means a lot of these products are developed from the perspective of, “I’m going to use this tool to scan a single repository.” That’s great. But what happens when you have 5,000 repositories? How are you going to use the tool in that context?

Sometimes, when we put on our software engineering hats in DevSecOps, what we wind up doing is building custom pieces that use those scanners, or building the orchestration so that we can use those tools designed for one repository to scan multiple repositories. I could be writing that custom adapter and the orchestration that wraps around it in PowerShell, or Python, or C#. It depends on the context and the tool we are working with. But these are all languages that we work in and have to be able to automate around.
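As a minimal sketch of that pattern, the following Python wraps a hypothetical single-repository scanner in a loop over many repositories; the scanner name and repository list are stand-ins.

```python
# Minimal sketch of multi-repository orchestration around a scanner that
# only understands one repository at a time. 'sast-scanner' and the repo
# URLs are hypothetical stand-ins.

import subprocess
import tempfile
from pathlib import Path

REPOS = [  # in reality this list might come from the Git server's API
    "https://git.example.com/org/repo-a.git",
    "https://git.example.com/org/repo-b.git",
]


def scan_repo(repo_url: str, workdir: Path) -> int:
    """Clone one repository and run the single-repo scanner against it."""
    dest = workdir / repo_url.rstrip("/").rsplit("/", 1)[-1]
    subprocess.run(["git", "clone", "--depth", "1", repo_url, str(dest)],
                   check=True)
    result = subprocess.run(["sast-scanner", "--path", str(dest)])
    return result.returncode


def main() -> None:
    with tempfile.TemporaryDirectory() as tmp:
        for repo in REPOS:
            code = scan_repo(repo, Path(tmp))
            print(f"{repo}: {'clean' if code == 0 else 'findings reported'}")


if __name__ == "__main__":
    main()
```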

And frankly, as a software engineer, I love that I have a job where I get to work on integration across such a broad spectrum of technologies. Certain software engineers see this kind of integration, adaptation and extension work as a hard and fun challenge.

That’s why we look for software engineers who are interested in security, not just security people who can learn software, because that is a much more difficult road. If you don’t know how to build software well in the first place, you have to learn that, plus how to work across all these languages and how to analyze all of these software repositories through the lens of application security. Only then can you quickly draw some pretty sophisticated conclusions about the nature and severity of the security risks we are talking about, or determine that there really aren’t any risks and we can move on to the next challenge.

 


References:

1. IBM Cloud Education. What is DevSecOps? IBM. https://www.ibm.com/cloud/learn/devsecops. Published 2020. Accessed January 19, 2022.

2. Revenera. What is Software Composition Analysis? Revenera Blog. https://www.revenera.com/blog/what-is-software-composition-analysis/. Accessed January 19, 2022.

3. What is a CI/CD pipeline? Red Hat. https://www.redhat.com/en/topics/devops/what-cicd-pipeline. Published 2019. Accessed January 19, 2022.
