Host: Raman Kalyan – Director, Microsoft
Host: Talhah Mir - Principal Program Manager, Microsoft
Guest: Dawn Cappelli – VP of Global Security & CISO, Rockwell Automation
The following conversation is adapted from transcripts of Episode 5 of the Uncovering Hidden Risks podcast. There may be slight edits in order to make this conversation easier for readers to follow along. You can view the full transcripts of this episode at: https://aka.ms/uncoveringhiddenrisks
In this podcast we explore steps to take to set up and run an insider risk management program. We talk about specific organizations to collaborate with, and top risks to address first. We hear directly from an expert with three decades of experience setting up impactful insider risk management programs in government and private sector.
RAMAN: Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team.
TALHAH: And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team.
RAMAN: We have more time with Dawn Cappelli, CISO of Rockwell Automation. We're going to talk to her about how to set up an effective insider risk management program in your organization.
TALHAH: That's right. Getting a holistic view of what it takes to properly identify and manage that risk, and do it in a way that's aligned with your corporate culture and your corporate privacy and legal requirements. Raman and I talk to a lot of customers now and it's humbling to see how front and center insider risk, insider threat management, has become, but at the same time, customers are still asking, "How do I get started?"
Dawn, what do you tell those customers, those peers of yours in the industry today, with the kind of landscape and the kind of technologies and processes and understanding we have, in terms of how to get started building out an effective program?
DAWN: First of all, you need to get HR on board. I mean, that's essential. We have insider risk training that is specifically for HR, and they have to take it every single year. We have our security awareness training that every employee in the company has to take every year; HR in addition has to take specific insider risk training. So, in that way we know that globally we're covered. So that's where I started, by training HR, so that the serious behavioral issues get reported. I mean, IP theft is easier to detect, but sabotage is a serious issue, and it does happen.
I'm not going to say it happens in every company, but when you read about an insider cyber sabotage case, it's really scary, because this is where you have your very technical users who are very upset about something, they are angry with the company, and they have what the psychologists called personal predispositions that make them prone to take action. Because most people, no matter how angry you are, most people are not going to actually try to cause harm, it's just not in our human nature.
But like I said, I worked with psychologists from day one, and they said, "The people that commit sabotage, they have these personal predispositions. They don't get along with people well, they feel like they're above the rules, they don't take criticism well, you kind of feel like you have to walk on eggshells around them." And so I think a good place to start is by educating HR, so that if they see someone who has that personality and is very angry, very upset, and whose behaviors are bad enough that someone came to HR to report it, HR needs to contact your IT security team, even if you don't have an insider risk team, and get legal involved, because you could have a serious issue on your hands. And so, I think educating HR is a good place to start.
Of course, technical controls are a good place to start. Think about how you can prevent insider threats. That's the best thing to do is lock things down so that, first of all, people can only access what they need to, and secondly, they can only move it where they need to be able to move information. So really think about those proactive technical controls.
And then third, take that look back, like we talked about, Talhah, take that look back. Pick out just some key people, go to your key business segments and say, "Hey, who's left in the past?" I mean, as far back as your logs go, if they go back six months, you can go back six months. Just give me the name of someone who's left who had access to the crown jewels, and take a look in all those logs and see what you see. And you might be surprised.
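For readers who want to try the look-back Dawn describes, it could be sketched roughly like this. The log schema, field names, and retention window here are illustrative assumptions, not any real product's format:

```python
# A minimal look-back audit sketch: flag data transfers by departed users
# that touched sensitive ("crown jewel") paths within the retention window.
# The event dict shape and the path-prefix check are made-up assumptions.
from datetime import datetime, timedelta

def lookback_audit(transfer_logs, departed_users, crown_jewel_paths,
                   window_days=180):
    """Return transfer events by departed users involving sensitive paths."""
    cutoff = datetime.now() - timedelta(days=window_days)
    findings = []
    for event in transfer_logs:
        if (event["user"] in departed_users
                and event["timestamp"] >= cutoff
                and any(event["path"].startswith(p)
                        for p in crown_jewel_paths)):
            findings.append(event)
    return findings
```

The window is bounded by how long your logs are retained, which is exactly Dawn's point: if they go back six months, you can look back six months.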
TALHAH: Dawn, we're actually hearing that from our customers quite a bit. The way they frame it is that, "Why don't you look through some of the logs I already have in the system, parse through that, to give me an insider risk profile, if you will, of what's happening, what looks like potential shenanigans in the environment, so I can get a better sense of where I need to focus and what kind of a case I need to make to my executive sponsor so I can get started." So that's definitely something we're thinking about quite deeply and hearing consistently from our customers as well.
DAWN: Yeah, because the interesting thing we found in CERT, we expected that we would find very sophisticated ways of exfiltrating information, but what we found was these are insiders, they don't have to do anything fancy. If they can use a USB drive, they're going to use a USB drive, especially if you don't have an insider risk program, and so they think they can get away with it. If it's a small amount of information, they'll email it to their personal email account. Or if you're an Office 365 shop, they'll just go and download the information onto a personal computer, or, if they can, move it to a cloud site.
We found there weren't a whole lot of really sophisticated theft of IP cases, and maybe that's because those people weren't caught. But if you can get to the point where you have a mature insider risk program that's analytics based, then you have time to look at the more sophisticated ways of exfiltrating information.
RAMAN: I had a conversation with a customer about a week and a half ago. And you talked about people who are sometimes doing things maliciously, they are also doing other things. Have you looked at things like sentiment analysis? This customer was talking to me about communications, like people actually saying things in their communications that they shouldn't be saying, maybe harassing people, and then that leading to other types of behaviors, to your point around sabotage. Would love for you to speak to that, whether it's something you have implemented yourself, or something you've heard about as part of the broader OSIT group, around communications, harassment, and all that kind of stuff.
DAWN: Yeah, we did look at that when I was in CERT. Back then we found that the technologies just weren't mature enough, so we did not have any luck with it back then. And I don't know what Dan Costa said to you as far as what they're doing now, but in my experience, I have not found anything that really was effective.
I tried a little experiment at Rockwell, with legal approval, and just kind of looked for words like kill, and die, you know, those kinds of words, and it came back like... IT uses those words all the time. Like, "The system died, I killed the process." It was like, oh, this just isn't working at all. And the other thing that made it really hard, just with sentiment analysis, people were very casual in their communications. So it was the informal communications that made it really difficult to really tell the sentiment. So yeah, I'd love to hear if the tools mature to that point, that would be great.
RAMAN: One of the things that we've been looking at is using Azure Cognitive Services to really start to think about natural language, to distinguish between "That product is killer" and "I'm going to kill you." To your point, initially it would be looking at keywords, and then you get overloaded with a ton of different alerts. Now if you can distinguish the context in which the word kill was used, then you can start to highlight, again in a risk score type of way, that this communication could be riskier than that one. That allows you to really prioritize and filter through it.
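The false-positive problem Dawn and Raman describe can be illustrated with a toy example. The word lists and scores below are made-up assumptions for illustration, not a real classifier, and real systems would use trained language models rather than neighbor lookups:

```python
# Toy illustration of why bare keyword matching fails for sentiment:
# the words *around* "kill"/"die" carry the risk signal.
# All word lists and score values are illustrative assumptions.
ALERT_WORDS = {"kill", "die", "killed", "died"}
BENIGN_CONTEXT = {"process", "server", "system", "job", "thread", "build"}
THREAT_CONTEXT = {"you", "him", "her", "them", "everyone"}

def risk_score(message):
    """Score a message: 0 = routine IT talk, higher = riskier."""
    words = message.lower().replace(",", " ").replace(".", " ").split()
    score = 0
    for i, word in enumerate(words):
        if word in ALERT_WORDS:
            neighbors = set(words[max(0, i - 2):i + 3])
            if neighbors & THREAT_CONTEXT:
                score += 2  # alert word aimed at a person: high risk
            elif neighbors & BENIGN_CONTEXT:
                score += 0  # "I killed the process": routine IT usage
            else:
                score += 1  # ambiguous: queue for human review
    return score
```

A plain keyword search would flag "The system died, I killed the process" just as loudly as a genuine threat; scoring on context is what makes the alert queue manageable.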
DAWN: Hmm, interesting. Do you know if anyone has gone to the European Works Councils about that kind of technology?
RAMAN: One of the things that we have been working on, is that we have customers in Europe using some of our solutions to start to look at communications, and they have been working with the various worker councils to start to think about, for example, pseudonymization. You want to anonymize the user before you go down the path of really investigating them. If you're just highlighting this could be a possible violation, you want to do that in a way that doesn't really invite bias or discrimination.
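The pseudonymization Raman mentions could be sketched as below. The key handling is an illustrative assumption; a real deployment would use a managed secret store and an audited, approval-gated reveal step:

```python
# Minimal pseudonymization sketch: reviewers see a stable opaque token
# instead of the user's identity until an investigation is approved.
# The hardcoded key is an illustrative assumption; use a secret store.
import hashlib
import hmac

SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(user_id: str) -> str:
    """Map a user identity to a stable, non-reversible review token."""
    digest = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256)
    return "user-" + digest.hexdigest()[:12]
```

Because the token is keyed and stable, reviewers can correlate alerts for the same person over time without learning who the person is, which is what reduces the opportunity for bias or discrimination during triage.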
And if you can do that upfront, then that would allow you to say, "Hey, okay, this might be something that's a challenge." And one of the things that we've seen recently, especially with COVID and all the different stressors that people are under, is that some customers are actually using machine learning classifiers that we have for threats, and really looking, not at me trying to threaten somebody else, but me maybe threatening myself. So suicide-type things, people under a lot of pressure, and we've seen a lot of organizations start to take that route. And also in education, where you have a lot of young folks who might be sharing things inappropriately, like imagery or bullying, that kind of stuff. That's another area where we're seeing some activity around this.
DAWN: Hmm. That's interesting. Yeah, I bring up the works councils just because when you're talking insider risk, it's a really important topic, that if anybody is watching this that doesn't know what the works councils are and does business in Europe, you need to find out what they are because basically they're there to protect the privacy of the employees in the company. And some of them have a lot of power, like in Germany, they can just block you from using a new technology, and in other countries you simply have to inform them, but they can't stop you.
And we're very careful about our works councils, and we have taken the approach that that's our bar. If we can't get something through the works councils, then we don't do it, because we feel like they're protecting the privacy of their employees, and all of our employees are entitled to that degree of privacy. So that's kind of how we approach it, and so it's kind of an all or nothing approach for us. But that's each company's decision to make, and it probably depends on how much business you do where and how global you really are, but it's something that everybody should look into who's working in insider threat.
TALHAH: With COVID-19, it's been sort of a punch in the gut, the whole roles, having to react, personal lives, professional lives. And clearly, we're starting to see from our customers this insider risk becoming more heightened in terms of awareness of it, and a need to manage it. Because you have work from home, and data's being moved all over the place. What have you seen work in this environment, with your experience, how have you adjusted to this COVID reality? Have you done things differently with your program? What kind of advice would you give to your peers in the industry and how to deal with it?
DAWN: So, we were fortunate. I know a lot of companies, from what I've been reading, a lot of companies, their employees use desktops at their office. And when COVID struck, suddenly you have employees at home working on their personal computers. Fortunately, we didn't have that. We've been using all laptops since I went to Rockwell in 2013, so it was easier for us because our employees are just working at home now. They're off our network, but they're using their same computer they always have, with the same controls that we've always had. But we are seeing a big uptick in them downloading, and again, this is not malicious, but downloading a game that has malware in it, downloading pirated copies of software, things like that.
Because they're at home, and they're sitting at their desk and I guess they figure, "Hey, I have my Rockwell computer here, I guess I'll play my games on here and not fight with the kids, because now they're home, they're trying to do schoolwork, they're trying to play games, they're trying to watch movies. And I'm not going to compete for that computer, I'm going to use my Rockwell computer." So, we're catching a lot of those things. And that's what I meant when I said that by using the analytics to give us more time, we're not doing all those manual audits.
Now we have time. The C-CERT used to catch those things and would just kind of say, "Hey, you're not allowed to do that, get that off of there," or just block it. But now they come to us, because sometimes you see someone downloading malware. We had an employee who downloaded malicious hacking tools, and our C-CERT contacted the insider risk team and said, "Hey, this is someone who's a developer and downloaded a hacking tool, so we're going to hand it to you to investigate." And we talked to their manager because we thought, oh, well, maybe this is a pen tester and he needed the hacking tools.
Well, there was no reason that he needed the hacking tools, and the manager was very concerned. Like, "What is that guy doing?" And he was sophisticated. We have a secure development environment that's protected with additional controls, and he downloaded the tool to his Rockwell computer and then tried to move it over into the secure development environment, so we saw what he was doing. He had no good reason. But this is where we didn't rely on the human social behaviors to trigger the investigation; we were able to catch it quickly because of that technical indicator, and because of the partnership with the C-CERT.
It's interesting just to see, as you talk about the evolution of technology for insider threat over the years, it's now to the point where we're not just looking at theft of IP, we're looking at those technical indicators that might indicate sabotage. We're not so reliant on human behavior because, look at COVID. People are working at home, so are we really going to know when we have an employee who's really angry and really upset and getting worse and worse? I don't know. I don't know if we're going to be able to rely on those human behaviors so much. If you're in the office all day, people can see that, but if you're on a phone call here and there, you might not pick that up.
TALHAH: That's right. And this could lead to sabotage-type scenarios. That's why the ability we've built for our customers to detect technical indicators, like somebody downloading unwanted or malicious software, or somebody trying to tamper with security controls, is so important. Similar to behavioral indicators, these technical indicators could be leading indicators of an oncoming sabotage risk.
DAWN: We had a very interesting case, but I hate to talk about that one, because the actual individual told me, "I don't want you to go out and talk about me in your conferences, Dawn, don't ever do that." So I'm not going to talk about that one. I'll talk about a different one, though. We had a test engineer team that was under intense deadlines, really working long hours and weekends. And one day two of the employees on that team had a big, huge verbal argument. Just yelling at each other, not physical, but a very, very verbal argument. So bad that someone had to go get a manager to come in and break it up. So, he broke it up. Next day the whole test environment goes down, and that's really bad. It took three days to rebuild the environment.
When you're working nights and weekends to make a deadline, and now you've lost three days, that's a huge deal. And the manager said, "When that first happened, I was thinking, 'Well, it went down, let's just get it back up and not worry about why until later.'" But then, he said, "I thought about Dawn's insider risk presentations," because I communicate as widely as I can around the company to everyone, not just HR, about insider risk. And he said, "I thought about Dawn's presentation and the concerning behaviors, and I thought, hmm, I wonder if one of those two could have deliberately sabotaged the test environment." So, he contacted us, we got legal approval to investigate, and sure enough, when we looked, one of those guys had brought down the entire environment.
And when we talked to him, he said, "Oh, well, I had in my goals and objectives an objective that I had to write some automated scripts to maintain the environment. So, I was testing it, and it just accidentally brought everything down." And we're like, "Wait a second," this was in like April, and his objective was due like September 30th, the end of our fiscal year. We just didn't buy it, and when we looked, we could see he actually was executing these commands. He didn't write a script; he was executing the commands manually and brought down the test environment.
But it was a really good case where, if that manager hadn't thought to contact us, who knows what he would've done next. So, I thought that was a really good case. Even though we did not avert the sabotage completely, he did commit the sabotage, with three days of impact, which was a big deal, it could have been much, much worse, because he could have done much worse, and that could have been the next step. Just another story, Talhah.
TALHAH: Love it.
RAMAN: Yeah, I mean, that's just crazy. This has been a great conversation, Dawn. The stories that you've told have just been captivating, and the last thing that you mentioned is really key: to have a successful insider risk program within an organization, you really need to educate all levels of the organization, all the different teams, so people can be on the lookout for these types of things. Not only to identify the risks, but also to help support people who might be under intense pressure.
DAWN: Yeah, and first of all, deterrence is huge. We talk very widely; we have an insider risk blog that we put out internally for employees. We talk about cases, we talk about what we find, because deterrence is a big thing, and I think that's why we're not catching as much malicious activity as we used to. Now we're finding, almost everything we're finding is unintentional, it's not malicious. Because I think word has gotten around, "Hey, if you try to do that, we're going to catch you." We tell people that all the time, don't even try, we're going to catch you.
To learn more about this episode of the Uncovering Hidden Risks podcast, visit https://aka.ms/uncoveringhiddenrisks.
For more on Microsoft Compliance and Risk Management solutions, click here.
To follow Microsoft’s Insider Risk blog, click here.
To subscribe to the Microsoft Security YouTube channel, click here.
Keep in touch with Raman on LinkedIn.
Keep in touch with Talhah on LinkedIn.
Keep in touch with Dawn on LinkedIn.