Making Sense of Cybersecurity – Part 1: Seeing Through Complexity
During the Black Hat Europe conference in December, I sat down with one of our leading security analysts, Paul Stringfellow. In this first part of our conversation, we discuss the complexity of navigating cybersecurity tooling and how to define relevant metrics for measuring return on investment and risk.
Jon: Paul, how does an end-user organization make sense of everything that is happening? We are here at Black Hat, and there is a multitude of technologies, options, topics, and categories. In our research, there are 30 to 50 different security categories: posture management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. From the end-user organization's point of view, though, they do not want to think about 40 or 50 different things. They want to think about 10, 5, or maybe even 3. Your role is deploying these technologies. How do they want to think about it, and how do you help them translate the complexity we see here into the simplicity they are looking for?
Paul: I attend events like this because the challenge is so complex and evolves so quickly. I do not think you can be a CIO or a modern security leader without spending time with your vendors and the wider industry. Not necessarily at Black Hat Europe, but you have to engage with your vendors to do your job.
To come back to your point about 40 or 50 vendors, you are right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which research you refer to. So how do you keep track of it all? When I come to events like this, I like to do two things, and I have added a third since I started working with Gigaom. One is meeting vendors, because people have asked me to. Two is attending a few presentations. Three is walking the expo floor talking to vendors, especially those I have never met, to see what they are doing.
I sat in a session yesterday, and what caught my attention was the title: "How to identify the cybersecurity metrics that will offer you value." It caught my eye from an analyst's point of view, because part of what we do at Gigaom is create metrics to measure the effectiveness of a solution in a given category. But if you deploy technology as part of SecOps or IT operations, you gather a lot of metrics to try to make decisions. One of the things they talked about in the session was the problem of creating so many metrics, because we have so many tools, that there is so much noise. How do you start to find the value?
The long answer to your question is that they suggested something I thought was a really intelligent approach: step back and think as an organization about which metrics matter. What do you need to know as a business? That lets you reduce the noise and potentially reduce the number of tools you use to deliver those metrics. If you decide a certain metric has no value, why keep the tool that provides it? If it does nothing but give you that metric, remove it. I thought it was a really interesting approach. It is almost like: "We have done it all. Now let's think about what still matters."
It is an evolving space, and the way we treat it must evolve too. You cannot just assume that because you bought something five years ago, it still has value. You probably have three other tools that do the same thing now. The way we approach threats has changed, and the way we approach security has changed. We have to go back to some of these tools and ask, "Do we really need this?"
Jon: So we measure our success against that and, in turn, we change.
Paul: Yes, and I think it is extremely important. I was recently talking to someone about the importance of automation. If we are going to invest in automation, are we better off now than we were 12 months ago, after implementing it? We have spent money on automation tools, and none of them come for free. We were sold on the idea that these tools would solve our problems. One thing I do in my role as a CTO, apart from my work with Gigaom, is take vendors' dreams and visions and turn them into reality for what customers actually ask for.
Vendors have aspirations that their products will change the world for you, but reality is what the customer needs at the other end. It is that kind of consolidation and understanding: being able to measure what happened before implementing something and what happened afterwards. Can we show improvement, and has this investment delivered real value?
Jon: In the end, here is my hypothesis: risk is the only metric that counts. You can break it down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise data and, in doing so, damage your business? Or are you going to expose data and upset your customers, who could come down on you like a ton of bricks? But there is the other side as well: will you spend far more money than you need to in order to mitigate those risks?
So you get into cost, efficiency, and so on, but is that how organizations actually think about it? Because that is my way of seeing it, the old way. Maybe it has moved on.
Paul: I think you are on the right track. As an industry, we live in a small echo chamber. So when I say "industry," I mean the little piece I see, which is only a small part of the whole. But in that part, I think we are seeing a change. In customer conversations, there are many more discussions about risk. They are beginning to understand the balance between spending and risk, trying to work out the level of risk they are comfortable with. You will never eliminate all risk. No matter how many security tools you implement, there is always the risk that someone will do something stupid that exposes the company to vulnerabilities. And that is before we even get into AI agents trying to befriend other AI agents to do malicious things; that is a whole different conversation.
Jon: Like social engineering?
Paul: Yeah, exactly. That is a completely different story. But understanding risk is becoming more and more common. The people I talk to are beginning to realize that it is risk management. You cannot eliminate every security risk, and you cannot treat every incident. You have to focus on identifying where the real risks are for your business. For example, one criticism of CVE scores is that people look at a CVE with a score of 9.8 and assume it is a massive risk, but there is no context around it. They do not consider whether the CVE has been seen in the wild. If it has not, what is the risk of being the first to encounter it? And if the exploit is so complicated that it has not been seen in the wild, how realistic is it that anyone will use it?
It may be such a complicated thing to exploit that no one will use it. Yet it has a 9.8, and it shows up on your vulnerability scanner saying, "You really have to deal with this." The reality is that we are already seeing a shift toward applying that context: whether we have actually seen it in the wild.
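Paul's point about context can be sketched as a simple triage rule. The following is a minimal, illustrative Python example, not any particular scanner's API: the field names, discount factor, and asset multiplier are all invented for illustration. It ranks findings by combining a CVSS base score with evidence of in-the-wild exploitation and asset criticality, rather than by raw score alone.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float      # 0.0-10.0 CVSS base severity score
    seen_in_wild: bool    # is there evidence of real-world exploitation?
    asset_critical: bool  # e.g., a customer-facing system

def priority(f: Finding) -> float:
    """Toy contextual score: severity is discounted when there is no
    evidence of exploitation, and boosted on critical assets."""
    score = f.cvss_base
    if not f.seen_in_wild:
        score *= 0.3      # hypothetical discount for no in-the-wild activity
    if f.asset_critical:
        score *= 1.5      # hypothetical boost for business-critical assets
    return min(score, 10.0)

findings = [
    Finding("CVE-A", 9.8, seen_in_wild=False, asset_critical=False),
    Finding("CVE-B", 7.5, seen_in_wild=True, asset_critical=True),
]

# The "9.8 with no context" finding drops below the exploited 7.5:
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 2))
```

With these made-up weights, the actively exploited 7.5 on a critical system outranks the unexploited 9.8, which is exactly the reordering Paul describes.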
Jon: Risk equals probability multiplied by impact. So you are talking about probability, and whether it will have an impact on your business. Does it affect a system used for maintenance once every six months, or your customer-facing website? But I am curious, because in the '90s, when we did this in practice, we went through a wave of risk avoidance, then moved to "we have to stop everything," which is what you are talking about, then to risk mitigation, risk prioritization, and so on.
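Jon's formula can be made concrete with a toy expected-loss calculation. The probabilities and costs below are invented purely for illustration, contrasting his two examples of a rarely used maintenance system and a customer-facing website.

```python
def annualized_risk(probability_per_year: float, impact_cost: float) -> float:
    """Expected annual loss: risk = probability x impact."""
    return probability_per_year * impact_cost

# Illustrative numbers only: a maintenance box is more likely to be hit
# but cheap to lose; the customer-facing site is rarely hit but costly.
maintenance = annualized_risk(0.40, 5_000)
customer_site = annualized_risk(0.05, 2_000_000)

print(maintenance, customer_site)
```

Even with a much lower probability, the customer-facing site dominates the expected loss, which is why impact context matters as much as likelihood when prioritizing.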
But with the advance of the cloud and the rise of new cultures like agile in the digital world, it feels as though we went back to "Well, you should prevent this from happening, lock all the doors, and implement zero trust." And now we are seeing the wave of "Maybe we need to think a little smarter."
Paul: It's a very good point, and actually, it's an interesting parallel you raise. Let's have a small argument while we record this. Do you mind if I debate with you? I am going to question your definition of zero trust for a moment. Zero trust is often seen as something that tries to stop everything. That is probably not true of zero trust. Zero trust is more of an approach, and technology can help underpin that approach. Anyway, it is a personal debate with myself. But, zero trust...
Now I am just going to reframe here and debate with myself. So, zero trust... if you take it as an example, fine. What we used to do was implicit trust: you log in, I accept your username and password, and everything you did after that, inside the secure bubble, was considered valid, with no malicious activity assumed. The problem is that when your account is compromised, the login might be the only non-malicious thing you do. Once logged in, everything your compromised account tries to do is malicious. If we trust implicitly, we are not being very smart.
Jon: So the opposite of that would be to block access completely?
Paul: That is not realistic. We cannot just stop people from logging in. Zero trust lets us allow you to log in, but without blindly trusting everything you do. We trust you for now, and we continually assess your actions. If you do something that means we no longer trust you, we act on it. It is about continuously evaluating whether your activities are appropriate or potentially malicious, and then acting accordingly.
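The shift Paul describes, from trusting a session once at login to evaluating every action, can be sketched as a per-request policy check. This is an illustrative sketch, not any specific zero-trust product's API; the signals and rules are assumptions chosen to show the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    action: str
    device_healthy: bool   # endpoint posture signal
    location_usual: bool   # behavioral signal

def allow(req: Request) -> bool:
    """Evaluate trust on every request, not just at login.
    (An implicit-trust model would return True for any
    already-authenticated user.)"""
    if not req.device_healthy:
        return False  # compromised or unmanaged device
    if req.action == "export_all_data" and not req.location_usual:
        return False  # risky action from an unusual location
    return True

# The same authenticated user is re-evaluated per action:
print(allow(Request("alice", "read_mail", True, True)))         # True
print(allow(Request("alice", "export_all_data", True, False)))  # False
```

The design point is that `allow` runs on every action, so trust granted at login can be withdrawn the moment the signals change.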
Jon: This is going to be a very disappointing argument, because I agree with everything you say. You have argued with yourself more than I can, but I think, as you said, it is the castle defense model: once you are in, you are in.
I am mixing two things there, but the idea is that once you are inside the castle, you can do whatever you want. That has changed.
So what do we do about it? Read Part 2 for a cost-effective answer.