Challenge Accepted Podcast: Hot Take Coffee Break on Cybersecurity Incident Transparency


Challenge Accepted is a podcast from Arctic Wolf featuring informative and insightful discussions around the real-world challenges organizations face on their security journey.

Hosts Ian McShane, Arctic Wolf’s VP of Strategy, and Adam Marrè, Chief Information Security Officer (CISO), draw upon their years of security operations experience to share their thoughts and opinions on issues facing today’s security leaders.

In this episode, our two hosts try something new and take a “hot take coffee break” to discuss challenges organizations face when dealing with cybersecurity incidents, including the lack of standardized reporting requirements and the issue of accountability.  

 

You can subscribe to Challenge Accepted via Apple, Spotify, Google, RSS, and most other major podcast platforms. 

Episode Transcript 

Adam Marrè  0:00   

Do we want to just hand this to him and then say, hey, we’ll jump on record? 

Ian McShane  0:04   

Yes, let’s do that. Yeah, let’s do that. 

Adam Marrè  0:15   

Yeah, that’s the past little intro of the challenge. 

Ian McShane  0:20   

Let’s just do it now. Alex likes editing things.

Adam Marrè  0:24   

Should we do an intro or should we just start talking? 

Ian McShane  0:27   

Let’s just start talking. 

Show Theme Song plays  

Ian McShane   

Hi everyone, my name is Ian McShane, VP of Strategy here at Arctic Wolf, and welcome to the Challenge Accepted podcast “Hot Take Coffee Break” edition, where we talk about all things security. 

Adam Marrè  0:50   

Hi, I’m Adam Marrè, CISO of Arctic Wolf. So, whether you’re a beginner or an experienced security professional, we hope you’ll join us in our “hot take coffee break” edition as we talk about some of the latest news and things that we’re reacting to in the security world. So I’d just like to set the table on this: it’s really interesting that the Surgeon General of the United States came out with a report saying that social media may harm children and adolescents.

Ian McShane  1:21   

I’m shocked. I’m shocked.  

Adam Marrè  1:23   

Shocked, right? Yeah, I mean, this is a reflection of research that’s been going on for years and years now. Hundreds of different studies across different data sets are really starting to show this, and the data is really interesting to look at. You go and look, and you’ll see a hockey stick of self-reporting of mental health issues by young people. And of course, where does the hockey stick start? It starts right around 2010-2012, right when iPhones and other smartphones blew up.

Ian McShane  2:07   

Hyper connection. Everyone’s connected all the time, right? 

Adam Marrè  2:10   

Absolutely. And that was the time when I really think a lot of the social media applications started to dial in their algorithms, so you’re getting that effect as well. And those two things combined, along with all the other psychological techniques that these apps use to keep people with their eyes glued to the app, have created a huge change. And it’s something that a lot of researchers have noticed.  

And of course, now they’ve done study after study that show this, and not just the correlative stuff, the timing. It’s also not just that our kids are more comfortable talking about mental health, which they are. There are objective measures they can look at, and some of that is in the report: hospitalizations, self-harm, actual suicide attempts, things like this. These objective measures support the concern that unleashing this social media technology on our young people and our vulnerable people, especially kids going through puberty and forming their self-image, their self-identity, is maybe something we want to think about more deeply than we’re doing right now.

Ian McShane  3:26   

Does that report mention anything about what the actual underlying cause is? Is it the fact that more information about it is available, so people are talking about, to your point, self-harm, and that’s introducing the idea of self-harm into young people’s train of thought? Or is it more the pressures of bullying, that kind of thing, over social media that’s leading to it?

Adam Marrè  3:49   

So the report, and I’ll admit I haven’t read the entire thing, although I’ve read a lot of the studies that underlie some of the conclusions it’s drawing, I don’t think it goes into that in depth. That’s definitely something that’s in the conversation, the copycat issue. But again, that just reinforces the problems with social media: the copycat issue is one of them, but there are others.

One of the most intriguing to me is the displacement issue, which is where you’re simply displacing good or important activities with time glued to the screen, doom-scrolling, whatever, right?

And that’s everything from sleep, which we all know is really important for mental health, to building the skills to make friends and deal with uncomfortable situations, all the kinds of skill building you get by going out and having social interaction. A lot of that’s limited now, because you’re getting that interaction from a different medium, which maybe isn’t helping you build the same skills and mechanisms that most people built.

Ian McShane  5:00   

The hard part is that the operating systems, the device manufacturers, the tools themselves, they don’t make it easy for people with parental responsibility to be able to control or manage or have any oversight of the stuff that young people are doing with those things.

I’ve gone through iterations of family controls on Microsoft products like Xbox and Windows, even on iOS and PlayStation, and the stuff they put in for family controls just doesn’t seem to be designed by anyone that’s ever had a family or ever needed to put these controls into practice.

They don’t make sense to me, as someone who prides themself on being a nerd and being able to figure out what all this stuff means. And so that kind of brings it back around to some of the topics we’ve talked about before, about how important it is for user experience to be taken into consideration around security controls and risk controls. Right?

Adam Marrè  5:57   

Yeah, absolutely. I mean, to bring this back to more of the security mindset, not just the health and well-being of folks: those controls are really important. We have data security issues, right, which is usually what I would talk about with social media, not necessarily the mental health effects. But trying to oversee the actions of minors under your care, your kids or what have you, and trying to help them be in that online world safely? A very difficult thing to do.

I mean, obviously, this is a hotbed for scams, and there are things that affect kids. You have kids taking their parents’ debit cards and making huge payments. Or, to your point, there was a situation here in the state where I live, in Utah, where a young teenager was taken from his home. Well, he voluntarily left with an older man, and they had an inappropriate relationship, and they were found in Kansas or somewhere in the middle of the country a few days later. And you would think, “Oh, maybe this was a kid who was in the foster system, or underprivileged, or from a broken home, or whatever.” None of those things were true.

This kid had great parents: very involved, like you, knew the technology. They were watching everything. But it just came down to the fact that he had a headset, I don’t know if it was an Oculus Rift or something like that, and he was playing, I believe, Roblox in the headset. And that is where this person was able to find a place where he could talk to and groom this young boy, this teenager, without the parents knowing about it. Because they checked everywhere else. They read DMs and texts on phones, they did all that, but they just didn’t realize that there was the ability to text inside this headset.

Ian McShane  7:55   

It’s hard. Yeah, you need that constant vigilance. It’s almost, dare I say, like enterprise security, right? You need that constant vigilance of, what is the risk? What’s the risk going on here? And, frankly, we see it day to day: employees at security companies don’t always understand the risk. It’s the same with parents; they don’t understand the full risk everywhere. And it’s almost like, ‘oh, it won’t happen to us, or it won’t happen to me.’

Adam Marrè  8:22   

Absolutely. And I mean, there are certain places in this world where you wouldn’t just let your kids walk around in public. You wouldn’t just say, ‘hey, go to Times Square’ or some other very public place, ‘just walk around and talk to anybody you want to.’ And that’s kind of what we’re doing with social media in a lot of ways, and some parents just kind of let their kids do whatever they want on there for as long as they can.

But I think there are many more who, like you said, want to facilitate this activity in a safe way. But the tools they’re given are really just not up to the task. And then there’s all the training and the vigilance and time that it requires. It makes it a really losing scenario for somebody trying to do the right thing.

Ian McShane  9:08   

It really does. On the topic of ‘it’ll never happen to me,’ that used to be one of the stories I’d hear a lot, especially working in antivirus 10-15 years ago. You’d get organizations saying, ‘oh, you know, I don’t need to invest too much in this because no one’s gonna attack me. No one’s gonna want anything from me.’

And then, are you familiar with the band Smashing Pumpkins from the ’90s? Just checking if you’re cool or not, right. I read, I guess it was an article based on a podcast, that Smashing Pumpkins’ newest album came out this year, and they had some kind of cyber attack where some of the songs were stolen.

And essentially, it seems like Billy Corgan was being held to ransom, actually having to pay out of his own pocket to stop someone releasing them early, which is a pretty interesting state of affairs when you think about it from the point of view of ‘it will never happen to me.’ I can’t remember the last time I heard of a music act that was impacted by cybersecurity, other than, well, I don’t know that Ticketmaster’s nonsense is really cybersecurity, but that’s the story I’m used to hearing when things go up in arms.

Adam Marrè  10:19   

I would love to have been the duty agent at the FBI the day that Billy Corgan called to say, ‘hey, somebody stole my songs.’ That would have just been a very interesting conversation for whoever picked up the phone at the FBI that day.

Ian McShane  10:37   

I don’t know if I could have kept it professional. I would have been, ‘Hey, can I get a quick photo? And while we’re on, any chance you can play me these songs real quick?’

Adam Marrè  10:47   

But it really does go to show that in today’s cybersecurity threat landscape, there really isn’t anyone that is too small to be attacked or just not a target. If you’re online, people have a way to monetize something that you have access to, whether it be your information, family photos on your home computer that get locked up, or your life savings that someone tries to catfish you out of through social engineering.

Or it could be they want access to what you do in your work life, or what your parents do, or what your spouse does. And they’re going to attack you through personal accounts to try to leapfrog and pivot into that executive at that company or someone who has access that they want.

Now, with cryptocurrency coming out a number of years ago and making some of this monetization easier for the attackers, there just really isn’t anyone for whom cybersecurity doesn’t matter. Unless they’re truly a Luddite who is offline altogether. And if they’re doing that, more power to them, I salute them. That’s great. For the rest of us, there’s a basic level of cybersecurity knowledge that everyone needs to have. And I don’t think we’re doing a good enough job getting everyone educated on it.

Ian McShane  12:15   

One of the things that’s interesting is that traditional journalists, maybe not in cyber, but journalists in general, aren’t necessarily the most trusted of people. But it almost feels like in today’s threat landscape, to use a phrase that I hate, I can’t believe I said it out loud, it’s more important than ever that security incidents are reported in the news and that transparency should rule. Not only does it help people learn about what’s happening, and not only does it keep other organizations from falling for the same traps, but it keeps people honest about what’s happening with their business, with the data that they’re holding, right. I’m thinking specifically about an incident in the UK that happened to an outsourcing company called Capita.

They deal with a lot of different things, from background checks for the police, these kinds of things. So they’ve got sensitive data. And there’s been this ongoing ransomware incident they’ve been dealing with, it must be a couple of months now, where, like all good incidents, the first thing they said was, ‘yeah, we don’t think there’s any data access. We don’t think there’s anything that’s been leaked.’ And since then everything has fallen apart; the onion has well and truly been opened.

So, for someone coming from the FBI side of things: what is the law enforcement guidance given to organizations that have these kinds of breaches? Are they advised not to say anything publicly? Is there a playbook they should be following? Or does it come down to legal counsel? Is it all about insurance and risk?

Adam Marrè  13:51   

Yeah, this is such a difficult question. There’s no easy answer. And in a lot of ways, law enforcement has to stay out of it. Unless there’s a regulatory or legal requirement that someone has to report, they have to kind of stay out of the conversation, because they don’t want to create liability for their agency by telling somebody to do something that they shouldn’t. They can offer an opinion like, ‘hey, maybe this would work,’ but they have to be really careful unless there are actual obligations to inform people or inform the public about what happened. And a lot of times when the FBI specifically is investigating, but other agencies too, they will say, ‘don’t say anything now, because we’re still investigating. We don’t want the bad guys to know.’

But the larger question you’re bringing up, what level of transparency organizations should have when they have an incident, is really murky. I always err on the side of transparency, because ironically, oftentimes being more transparent will build trust between you and the public: this is what’s really going on.

But the problem we have, as sort of a global cyber society online, is that we’re just too into blaming the victim right now. And some of that’s well founded, because sometimes someone is being absolutely derelict in their duty to try to protect the data and things like that.

And that’s definitely a problem that we want to solve: making sure that everyone has a baseline of security and security practice in their organization. But ironically, the naming and shaming that happens when companies aren’t doing the right things is the very thing that is cooling the ability for companies to come out and say, ‘Hey, we had an event.’ Because immediately, you know, there’s the hit to the company’s value. So are we doing the right thing for shareholders and for the company itself? And then they could lose business over it. There are all of these things to consider when making that decision.

Again, I like to be transparent, I love to be transparent, but there are some real-world consequences for transparency, especially with stuff where you’ve got to admit that you didn’t have the right thing in place, or somebody did the wrong thing. It does kill me, though, the companies that are willing to throw the intern under the bus, like, they didn’t change the password or whatever. But I think that’s the difficulty when people are trying to figure out what the right thing to do here is.

Ian McShane  16:25   

Yeah, I completely agree. As an outsider, I’ve never had to deal with the law enforcement side of things. But as a consumer of this kind of stuff, I’m kind of leaning towards some kind of governance: there should be rules and regulations about what has to be disclosed and when, so that you can at least follow a timeline and there’s less guessing. Because at the moment, like, over the weekend, Heathrow Airport’s eGates, the automated customs thing, all went down, and everyone’s first thought is, ‘oh, it’s gotta be ransomware.’ Like, ‘oh, the whole thing is broken.’ And, you know, it caused chaos and took a couple of days to come back up. And there’s been, as far as I’m aware, no root cause analysis, at least nothing discussed.

But as soon as there’s any incident that takes anything down, whether it’s a website, a service, a TV station, whatever, the immediate thought is not, ‘oh, something’s broken.’ It’s, ‘oh, there’s a cyber attack.’ And, you know, as much as I don’t want government oversight into everything, and I very much believe in keeping them at arm’s length when possible, there has to be some way of at least controlling or forcing organizations to follow the same playbook when it comes to communicating what’s going on. Do you think that makes sense?

Adam Marrè  17:42   

No, absolutely, it makes sense. But the balance that you’re struggling with there is the balance everyone is struggling with. We don’t want so much regulation that we’re, like, stifling innovation and the ability for businesses to do business.

And on the other hand, we’re definitely not doing enough. We’ve probably erred too far on that side. I mean, just look at right now: there is a pretty significant concern among senior executives, CISOs especially, certainly across the United States but also across the world, about the responsibility and the accountability for incidents.

You look at the former Uber CISO, who was just sentenced recently in the United States, and there are some particularities to that that are unique to the situation Uber was in at the time. But the effect is still the same, in that it has everybody thinking, ‘What is my responsibility?’ If I make decisions, and I’m guided by legal counsel, and the CEO signs off on it, all of these things, is the CISO still the one that’s holding the bag? And they don’t traditionally have the same kind of corporate legal protections that other executives have.

And so regulations that would clearly outline when something needs to be reported and when it doesn’t would also help protect them, making it so that they can be more clear on what exactly their responsibility is when a breach or some sort of security event happens. And right now that murkiness is causing people to, I think, conflate a very specific issue with Joe Sullivan in particular with every other breach that happens and the decisions CISOs are making. I don’t think any of these situations are cut and dried. And it makes when to report something and how to report it a really difficult decision to make.

Ian McShane  20:00   

Yeah, it doesn’t help when you see organizations that do a really fast and good job of being open about the security incidents they’ve had. Like the Dragos one a couple of weeks ago, where I think it was less than 24 hours later they had a blog post explaining exactly what happened. Obviously, those guys and girls do DFIR for a living, so it’s a little bit more familiar to them. But they had the entirety of the attack chain, all the way from the root cause, which was a compromised new employee having credentials sent to them. And they had everything very well documented, very, very fast, to show exactly what happened and what didn’t happen.

And so when you have those kinds of benchmarks to stand up against or compare to, it makes almost everyone else look bad when an incident happens. And everyone is going to have a security incident at some point, right?

Adam Marrè  20:48   

Yeah. And I mean, what a great blog post. So clearly written, with so much shown about the attack, and also highlighting an attack vector where somebody compromises an employee on their personal computer before they’ve even joined the job, because you’re almost always using a personal device for that first initial signup. Using that as a vector, I mean, brilliant on the attacker side, right? Really, really good way to get in.

I think it was maybe a little bit of a humblebrag, in that they did catch it within hours and were able to remediate before anything really bad happened. Dragos is an incredible company, they’re great people, so I’m not casting any shade on them. I’m just saying, for any company, even one willing to be as transparent as they are with this, and as highly skilled as they are, if something really bad had happened, would it have been the same response? Right?

Yeah, I mean, this is really a victory story. It’s really, ‘this bad thing happened, but we caught it, and we prevented anything worse from happening.’ I think the really hard decisions come when the bad guys actually get in. They get the data. And now, what do you do at that point? That’s the question.

Ian McShane  22:06   

That was going to be my point as well. I think they did a great job of communicating what happened and opening eyes to an attack vector that I bet a lot of organizations hadn’t really thought of. And it reminded me of when I joined my last three companies: everything, DocuSign, email credentials, everything was sent to my personal email address, which could have been compromised by anyone, who could have replied and said, ‘Oh, can you actually ship the device?’

In fact, at one of the jobs I joined, I won’t give it a timeframe, but I was traveling on my first day, and so they had to ship my laptop to a hotel that I was staying at. I’d met them on Zoom and spoken to them in person on Zoom, so they knew it was me requesting this, but shipping a corporate device to a vacation hotel is probably not within the T’s and C’s of IT’s acceptable use, right?

Adam Marrè  23:00   

Yeah, probably not the greatest idea. But you know, like you said, if you get double/triple verified, then maybe it’s okay.  

Ian McShane  23:09   

They could have done that as well. The attacker could have done that as well.  

Adam Marrè  23:16   

But to your overarching point on this: I do think that making it more clear what is reportable and what should be reported, across all the different regulations, would help. And if we can get closer to almost a global standard on this, so that we understand this is when people report, maybe we can get to a point where we’re not so quick to immediately blame the company for incompetence or lack of security and all of that.

I mean, security is hard, we know it. I think we could get to a place where there would be more transparency, more information sharing between companies, more open information sharing, like in the Dragos example. And that would help us. It’s easy to say that, though; getting to a framework that’s workable in the details, that’s where the devil is. It’s just really difficult. It’s the same thing as privacy: a really difficult thing to get right for every scenario out there.

Ian McShane  24:11   

This is the thing where everything’s difficult, but both you and I are on the same page around victim blaming: accidents happen, security’s gonna security, adversaries are gonna adverse, whatever happens. But the thing is, someone has to be accountable somewhere, right? There has to be some kind of accountability. And so I’m not going to ask you to tell me who should be accountable.

But that’s what I’ve been thinking about recently. When you think of all of these things, the Joe Sullivan stuff, Capita, the government breaches and things that have happened, who ultimately is accountable? Is it the person that makes a mistake, a genuine mistake, where there was no malice intended, it was just something that happened? Is it the person that signed off on it, who could be six or seven rungs, so to speak, up the ladder with no direct oversight or control over what happens? Where does the accountability stop? Or is it, ‘hey, we need some kind of accountability insurance that takes care of this for us?’

Adam Marrè  25:08   

Yeah, I think part of the problem we have there is that the level of complexity is such that it’s difficult for us to understand what true due care and due diligence on something is. It’s much easier in the physical security realm, right?

If someone breaks into a company, smashing glass and getting in, everybody’s not rushing to say, ‘well, who is the physical security manager, and did they drop the ball? Did he not have the right locks on the doors?’ Because there’s an understanding of what general common practice is. Now, if it comes out in the story that all the doors were left open, and after hours someone was letting them in, well, then maybe it looks like, okay, that was negligent on the part of the management, and they would get in trouble. But we don’t have a good sense for that in cybersecurity: what is a reasonable level of due care and due diligence?

I think we in the industry know; we have a good idea. But as a society, there isn’t that sort of culture built around it, where it’s almost instinctual and people understand it. And so what happens is they just go, ‘well, that person obviously must not be doing their job.’ They don’t understand how the attack happened. They don’t understand what was exploited, whether it was something super sophisticated that required nation-state-level support, or just an easy thing. Those details don’t come out, but we’re just automatically blaming the company along with everything else. And I think we need to get to a better place where we understand and can make better decisions as a public as we look at these events. But I think we’re a long way off from that.

Ian McShane  26:54   

It sounds like cybersecurity’s in its teenage years of ‘mom, you just don’t understand.’ Yeah, I think that’s the title for the podcast today.

Adam Marrè  27:03   

Yeah, right. Parents just don’t understand, or whatever. But, I mean, I’m opining on these things. I think we’re a long way off. I think you’re right; we do need to get it right. But until we do, it’s going to be one of the reasons why organizations are more reluctant to be transparent and open about what’s happening to them.

Ian McShane  27:27   

Yeah, that sucks. That’s another podcast title for you ‘Yeah, that sucks.’  

Adam Marrè  27:32   

Yeah, that’s awesome. 
