Webinar Series - On Demand
State of Browser Attacks Series:
Yes, You've Been Pwned
Learn how attackers exploit weak, breached, and reused passwords — and what you can do about it.


Got questions for Troy or Mark?
Transcript: Yes, You've Been Pwned
Mark: Well, welcome everyone to the second in Push Security's series on the state of browser attacks. I'm Mark Orlando, Field CTO at Push, and for this session I'm joined by someone who probably needs no introduction, but I'm going to do it anyway. He's a Microsoft Regional Director and MVP, speaker, trainer, founder, and CEO of Have I Been Pwned — Troy Hunt. Welcome. Thanks for joining me.
Troy: Hey Mark. Thanks for having me.
Mark: Absolutely. Lots of ground to cover today. Really excited to kind of hear your insights on some of these problems and some of the work that you've been doing now for — over a decade, right? Coming up on 13 years?
Troy: Yeah, 13 years — wild, isn't it? I have children who are only that age. It's a long time.
Mark: Great. Well, I wanted to start off with a broad question. You've collected more of this credential breach data, arguably, than anyone in history — I think I saw you're into billions of email addresses now, so the dataset is obviously quite large. If you had to summarize what these 13 years of work have taught you about why these breaches keep happening and why these attacks still work, what would you say?
Troy: Yeah, it's a fascinating question and I think there are many different reasons for it. Part of the question about "why" is what motivates attackers — what makes attackers go after this data? Some of the classic motives are things like money. Obviously people are seeking it for money. I do feel like we've seen a shift in recent times where we are seeing a lot more data breaches and "pay or leak" style scams, where people are grabbing data and then trying to shake the company down for money. I'm sure we'll go more into that shortly as well.
There's the old sort of curiosity and adventure as well. A lot of the data breaches we have are from kids, usually. Legally kids, or — yeah, kids in that they're a lot younger than us — but it's almost always someone very young who just has this curiosity, and they probe and they prod, and they eventually find a way in and pull data out. I don't think any of those motives are changing. All that's really happened over time is the emergence of ubiquitous cloud computing, where it's easier than ever to spin stuff up. Now of course with AI — I don't think we're necessarily making more mistakes because of AI, but it's that much easier to create more things that create bigger footprints. We've just got more systems online and more people looking at them. So I just can't see any reason why this would slow down at all. I've been saying this for a very long time, and here we are with things definitely accelerating.
Mark: Yeah, it's interesting. And I'm kind of wondering at this point — you've been doing this for quite some time. I've been in cyber defense for coming up on a couple of decades now, and it seems to me that, as you said, this stuff isn't really slowing down. And I guess I'm wondering at this point, from your perspective: do you see this as an inflection point at all? Are we seeing a huge spike? Is this just continuing to trend in the direction it has been for quite some time? What are your thoughts?
Troy: Look, it depends on how you measure it, but I honestly can't think of an empirical measurement that would show that we've suddenly hit some inflection point where things are going through the roof. I do think we have phases where particularly certain threat actors or certain exploits gain prominence. At the moment — at the time of recording — it's the Shiny Hunters, who are very good at social engineering via voice phishing and getting into Salesforce instances. We're seeing a lot of stuff dumped by that mechanism. But organizations will get better at securing that, Salesforce will probably have more secure-by-default profiles, these guys will get arrested, and then we'll move on and there'll be something else. But I think it's just one continuous growth if you look at it over a longer period of time.
Mark: Do you think that — it seems like with this credential theft ecosystem, and the commoditization happening here — and I'm thinking more about the attack infrastructure and some of these kits and rentable infrastructure that can be had for, in some cases, a few hundred bucks. You get like 10 days of access, or maybe a one-time fee for permanent access. From our side at Push, we're seeing that this has kind of a flywheel effect. And you see this especially with attacks like ClickFix, which you can launch with some of this rentable infrastructure — where it's like ClickFix to info stealer to account takeover, which results in maybe more ad account takeover, so you distribute more ClickFix, and it just has this compounding effect. Are you seeing that as well in some of these credential breaches, or is this just kind of another iteration of things we've seen before — just the latest pivot?
Troy: I think the thing you touched on there — and to use a bit of an overloaded term — there's almost like the democratization of hacking tools. When you get all of these things as a service... it used to be like DDoS as a service. People would sell stressors and botnets and all this sort of thing, and in more recent times we've seen things like phishing as a service. So you don't need to be particularly technically sophisticated if you can go and pay someone else for access to their infrastructure. Of course, we've seen a lot of ransomware as a service. We've seen all sorts of turnkey platforms built by criminals for other criminals, making it so much more accessible for people with less technical skill. And maybe that's analogous to life in general in terms of access to technology. Now we've got AI that makes it so much more accessible for anybody to build an app. Before that it was platforms as a service, which made it so much easier for anyone to get access. And if you look at this through the lens of the moral neutrality of technology — that rising tide lifts all boats — and some of those boats are criminals who can now do things easier than before.
Mark: Yeah. And I wonder if that gets back to what we were saying about this sort of talent pipeline that seems to include younger and younger adversaries. I know you and I spoke about that before — this pipeline from online gaming communities and things like that into some of these nefarious groups. And I believe you've done some work or advising around that problem specifically.
Troy: Yeah, it's a really fascinating one — it's almost like the youth pipeline to cybercrime. I think it's fascinating for many reasons. The average age of people being arrested for a lot of these data breach-style activities is around 19. So — okay, adult, but only just. And they started years earlier, so you've got child hackers. It's fascinating when you look at the size of the organizations that are falling victim to children and go, "that is just amazing leverage" — where you've got, in some cases, Fortune 100 companies being breached by a kid in his bedroom. How are you defending a company with that many resources and that much money when some kid is managing to take over their systems? And the fascinating thing as well is they're not necessarily technically sophisticated. A lot of the attacks lately have been social engineering attacks. Kids are great at social engineering — if you've got kids, you know how good they are at that. And they can do that to big organizations too. It's, again, to use that term, a bit of a democratization, where everybody gets to have a shot because the bar is not necessarily that high.
Mark: Absolutely. And we're both parents — we've both seen some of the scary side of some of those skills, even if in different contexts. But I definitely think there's something there. As a defender myself, I think I took some solace in the fact that even a sophisticated attacker — maybe 15 or 20 years ago we might have said an APT, a state-sponsored attacker, or a cyber criminal actor — at the end of the day, I think when you're conceptualizing those kinds of attacks at that time, it was, "well, there's going to be this multi-stage attack chain that's going to happen, there's going to be some logical objective at the end of that chain." Even if I don't observe what that is or can't necessarily infer it, most likely there's somebody on the other end who has been tasked with achieving an objective, or there's a financial incentive in play. So I think on the defensive side, we've all internalized those models and said, "there's going to be some logic to it, some predictability — if we can just get enough visibility, if we can get enough data, if we can slam in enough security tools, we're going to be able to observe those attack chains and disrupt them."
And it kind of seems like those models aren't really the same today with the types of adversaries we're talking about — this loosely affiliated comm group, Shiny Hunters, and whatever comes next in six months or a year. So I guess my question for you would be: do you think that is kind of a sea change — that we've moved from multi-stage attacks to now it's all identity, all SaaS ecosystem, and the first part of the attack — the info stealer — might not even have happened in your environment? You're just going to see the tail end of that attack chain. It seems like these attacks look somewhat different than what we designed for, even five years ago when we were building out security infrastructure. Is that what you've seen as well?
Troy: I think what you've described is very reflective of now having a lot more external dependencies. When we think about attacks against identity, we're seeing attacks against the likes of Okta, because Okta holds identity — that's very valuable. Salesforce. A couple of years ago it was things like Snowflake, where these external dependencies have so many different entry points into them, because you've got X number of different apps that have been authorized to go in and get certain data to do their certain things. I'm very sympathetic to organizations where, if you put all of this up on a board — crime fighter style — and you draw the lines between everything, it's just an absolute spider web of interdependencies and access rights. And that's enormously difficult, because once a group manages to find a reproducible pattern to gain access to these things, the same pattern is used by so many different organizations. And again, this is why we see so many attacks of the same style against massive organizations that, at the point of recording, have been going on for a couple of months as it relates to the Shiny Hunters group. So clearly it's got legs, and it just keeps continuing.
Mark: Yeah. Well, I think that kind of covers the problem, so to speak, and I want to spend some time also talking about solutions — or maybe how security teams can start to level the playing field a little bit. And again, just from your perspective, managing all of the data that you manage — obviously there are lots of different ways that an organization or even individuals can interact with and benefit from the data you're maintaining. Can you talk about some of the ways that security teams are operationalizing the Have I Been Pwned data? Is it a direct data feed going into detection engineering pipelines? Is it more of a threat intelligence style consumption model where it's just context? What does that look like in your experience?
Troy: Yeah, it's a bit of a mix, and every now and then I find a really surprising use case someone's found as well — I'll give you an example of that in a moment. When there's a data breach, we take email addresses and put those email addresses into the online service. Nothing else goes in there, other than if we have a corpus of plain text passwords — which fortunately doesn't happen much these days. But stealer logs, credential stuffing lists, and things like this we do have in the clear. So one of the services we have is called Pwned Passwords. There are about a billion passwords from previous known data breaches in there. Each one has a prevalence count against it. We have an API with an anonymity model where you can query it on demand or download it all. It's all open source — both code and data. And that API endpoint we see hit 18 billion times a month at the moment.
So one really easy win here is to try and block known bad passwords. We know credential reuse is massive. We know attackers get credentials from one data breach and then go along and try them on all sorts of different services — now you've got one data breach leading to multiple account takeovers. So blocking known bad passwords is an easy win.
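The anonymity model Troy mentions works by hashing the candidate password and sending only the first five characters of the SHA-1 digest to the public Pwned Passwords range endpoint, which returns every breached hash suffix sharing that prefix along with its prevalence count. A minimal sketch (function names are my own; the endpoint itself is the documented public one):

```python
import hashlib
import urllib.request

def hash_split(password: str) -> tuple[str, str]:
    """SHA-1 the password and split the hex digest into the 5-char
    k-anonymity prefix and the remaining 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def match_count(range_body: str, suffix: str) -> int:
    """Scan a range response ('SUFFIX:COUNT' per line) for our suffix;
    return its breach prevalence count, or 0 if absent."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    """How many times this password appears in known breach corpora.
    Only the 5-character hash prefix ever leaves the machine."""
    prefix, suffix = hash_split(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return match_count(resp.read().decode("utf-8"), suffix)
```

A signup or password-change flow could then reject any password where `pwned_count(candidate) > 0` — the "block known bad passwords" win described above — without the plaintext or even the full hash ever being sent over the wire.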
Lots of organizations are also very interested in the exposure of their own people, because it turns out people reuse their passwords and also leave a lot of personal information — often on their corporate email address. So we have a domain search feature where any organization can go in and say "show me all the email addresses at example.com" and see which breaches they've been in, and then we'll send notifications when there are new breaches.
Imagine an organization that discovers they have executives in the Ashley Madison data breach — that's probably going to be an HR discussion. But what if they find someone in the Dropbox data breach or whatever the next cloud-based equivalent is? Suddenly you're going, "hang on, there are people probably behind the corporate firewall with a potentially non-sanctioned cloud storage service here — I wonder what that means for us." So being able to see where that organizational risk is is very useful for many companies.
And then the example of the sorts of things we don't expect: I had a company recently who was trying to do some sort of identity verification — getting an idea of whether accounts are legitimate or have possibly just been stood up to gain access to their services. They said, "We've got this thesis that people are so breached that if you're not in a data breach, you may not be real." So if they search Have I Been Pwned for an email address and it comes back with no prior breaches, that doesn't mean they're not human — it's kind of ambiguous. But conversely, if they search for someone and they're in a dozen data breaches going back the last 10 years — alright, this person has been around for a while, and it's almost certainly a legitimate email address. So that was just one of those cool use cases I hadn't thought of before.
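That legitimacy heuristic could be sketched against Have I Been Pwned's v3 `breachedaccount` endpoint, which requires a paid `hibp-api-key` and returns HTTP 404 for addresses with no known breaches. The function names and the threshold of three breaches below are my own illustrative choices, not anything from the transcript:

```python
import json
import urllib.error
import urllib.parse
import urllib.request

API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breach_count(email: str, api_key: str) -> int:
    """Number of known breaches the address appears in.
    The endpoint returns HTTP 404 when an address has never been seen."""
    req = urllib.request.Request(
        API + urllib.parse.quote(email),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-sketch"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return len(json.load(resp))  # JSON array of breach records
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return 0  # clean history: ambiguous, not proof of a fake account
        raise

def likely_established(count: int, threshold: int = 3) -> bool:
    """The transcript's heuristic: a long breach history suggests a
    long-lived, probably real address; zero hits is merely ambiguous."""
    return count >= threshold
```

As Troy notes, only the positive signal is meaningful: a dozen hits going back a decade strongly suggests a real, long-used address, while zero hits tells you nothing either way.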
Mark: That's fascinating. I wouldn't have guessed that, but when you say it, it makes total sense. And there's so much in there that I want to dig into, but I think certainly this idea that your users are being issued a corporate identity — a username and password — and they're going out and self-selecting big parts of your infrastructure. They're signing up for apps and services, very likely reusing their favorite passwords or an easily guessable password, or maybe a compromised password they may not even be aware of. That's something we see quite a bit of when we deploy.
One of the things I've noticed — not only at Push, but in my prior life as an enterprise defender — is that getting a notification that you have an account or email address that has shown up in a breach list can be very helpful context, but can also be a recipe for spending some time only to find out that maybe that person left two years ago, they're no longer an active user. Or maybe it's not actually a legitimate email address in your organization, even though your domain might be in there. And I know you've kind of written about this — the fake email problem. A lot of it comes down to how organizations leverage this type of data. In our world, for example, we take that kind of intelligence and match it with a credential that somebody is actively using today. So maybe we flag it with "hey, this has shown up in a breach list, and also somebody is logging in using this same account right now in your environment." Can you talk about why, at the scale you're operating, there are inevitably going to be some anomalies in the data — some things that you can't necessarily just shove into your detection pipeline without doing some validation and checking?
Troy: Well, I think the real macro question here is: for an organization — particularly a larger one — when they discover they've got people in a data breach, what next? The first obvious case is email addresses belonging to employees who are no longer there, or email addresses that might have even been fabricated — every now and then we just see a load of stuff like "sales@" at every single domain you could imagine — and those are usually pretty easy to discard. In most cases, it's not going to make much sense to take any further action on those.
And one of the interesting tangential observations here is that data never really dies. You know, people leave the organization — I was at Pfizer for 14 years. I left there 11 years ago. I'm sure my pfizer.com email address still sits in all of these different online services because I legitimately used them in the course of my job. If they have data breaches later on, that's probably not going to matter to the organization, but yeah, they're going to get an alert, and from our perspective, we don't know who's a current employee or not — we just know it is a valid email address pattern. You could easily discard that.
Then of course there's the question of: what do you do when you do have a hit against an active employee? I'd suggest that's going to depend very much on the nature of the data breach, the nature of the organization, and possibly the nature of the individual too. At the very least, I'd like to think that every time someone in an organization is in a data breach, there's an education opportunity. One of the things we hear most from law enforcement agencies is: "We use your service a lot for things like community outreach and to educate people" — not necessarily to say "you have to go and respond to these incidents right now," but just awareness that everywhere you leave your data may one day appear somewhere publicly. So at the very least, you can say to people: "Just so you know, you've been in X, Y, Z data breach — you probably need to be conscious that your information is floating around."
Then of course, depending on the nature of the service, there might be different action involved. There are a surprisingly large number of corporate, government, and military email addresses in adult website data breaches. I'd argue the discussion you have with someone then might be different to the discussion you have around, say, them being in some online e-commerce service.
The other problem I'm seeing is almost like a second order of criminality that often happens around data breaches. I loaded an education sector one just last week, and very shortly after that, I saw people saying, "I'm getting extortion emails." People were getting emails saying, "You have been in this data breach, we know you're in this data breach" — and now the recipient goes, "Oh wow, yeah, I was. This is a little bit of social engineering — they're telling me something that's true." And then they go on to make claims about having malware on the computer and catching them doing things they wouldn't want to be caught doing on video camera — "please send Bitcoin now." And that's a discussion which might also be important if a corporate email address is now being used to extort someone for money. That's something most organizations like to know about.
Mark: Yeah, that's interesting. And I would guess there might even be some overlap there — we know there are some adversary groups, state-sponsored and otherwise, that try to recruit insiders and flip them to provide sensitive data or access. I wonder if there's any overlap there as well — not just financial extortion.
Troy: Well, that's possibly the gateway, right? That's possibly the entry point. If you can start to build a picture about someone and create a rich profile of them for someone who's trying to recruit an insider, the more information you've got, the more valuable it's going to be.
Mark: Absolutely. Well, on that scary note — there's one more area I wanted to get into here. Now that you've been doing this for a while, as we've established, you've built up this phenomenal dataset. I know you work with a lot of law enforcement agencies and many different organizations. Just a best guess: how many breach notifications do you think you've sent since the project started?
Troy: Every now and then I do a count on this. We send many millions every year. We've got about six and a half million individuals we send breach notifications to. We've got 400,000 domains being monitored on the service, which includes more than half the Fortune 500. There was a data breach I loaded this morning, and one of my test domains — it's a real organization that I use to validate some of our processes — there were more than a thousand people in this one breach. The numbers are just staggering, and as we said earlier, there's just nothing slowing it down either.
Mark: That is staggering. And I'm going to ask you another best-guess question: out of all those notifications — many millions a year — if you had to guess the percentage of those that result in some kind of meaningful action, and I know you may not always see the other side of that, but if you had to take a guess — what kind of impact do you think that's having? Even if it's just internal education, not necessarily full-blown incident response. What would you put that percentage at?
Troy: That's a great question. I wish I knew how to measure that. I think to give you a half answer, because I don't have an exact one — I would like to think there are multiple different outcomes. In some cases, I know it's resulted in organizations speaking to employees about the exposure of their data. I know in many cases it's helped individuals who've said, "Ah, now I have a potential answer as to why I keep getting all this spam." And I'd also like to think that in many cases it has led to people proactively changing their behavior before they do have some sort of nasty data breach — maybe they've gone and gotten a password manager and made all their passwords strong and unique, and when they are involved in an incident later on, it wouldn't have been as bad as it would've been otherwise. We have hundreds of thousands of people a day use the service, hundreds of thousands of organizations using it. So there must have been some good come out of it, but I have no idea how to measure it.
Mark: The numbers kind of speak for themselves, I suppose. But I think you touched on something I want to really emphasize — taking that kind of awareness and then acting on it to put in more guardrails around how people access their accounts, identity hygiene, and things like that. That seems like a positive impact even if you can't quantify it. Whenever I see that in the wild — "we've implemented MFA," or "we've gone to stronger, phishing-resistant MFA" — I know you've had some personal experience with those kinds of attacks yourself. It seems like that's a win, even if you can't necessarily quantify the impact. Would you say that's fair?
Troy: Yeah, look — I think it's about positive behavioral changes. And what you politely alluded to just there is that I got phished myself, about a year ago.
Mark: Happens to the best of us.
Troy: My password got phished out of 1Password. My OTP got phished out of 1Password, because it was a phishable form of two-factor authentication. And as a result, my mailing list got exposed. So I had to put my own mailing list into Have I Been Pwned and then email all my subscribers — which was, to be honest, slightly embarrassing. But to come back to your point — where it was really valuable is that it got a lot of press around how even I can be phished, the vulnerability of phishable two-factor authentication, and the importance of things like passkeys and non-phishable 2FA mechanisms. So maybe that was a bit of an example — like Have I Been Pwned itself — where people can use that experience to hopefully make things better for everybody else.
Mark: Yeah, absolutely. And I guess by that interesting metric we talked about earlier, that now means that you are, in fact, a real person — you're showing up in a breach, right?
Troy: I was already in like 36 data breaches — I mean, literally have been pwned several dozen times already. And it just comes out of nowhere. My wife the other day suddenly got an email from Have I Been Pwned because she'd bought Canada Goose, and then Canada Goose — because they're one of the victims of Shiny Hunters — yeah.
Mark: So that made it easy to verify when your own data's in there. But when you talk about strong authentication — you still got phished. And I think that's a decent case study: some MFA is better than none, and FIDO2 tokens and similar mechanisms are the strongest — but even then, on the defensive side, we're seeing a lot of post-authentication attacks. As good as those controls are, MFA and phishing-resistant MFA aren't a panacea any more than any other security control. You can still be targeted, some of this stuff can still work, and in fact some of the things in the Shiny Hunters' playbook still work even if you have really strong MFA in place.
Troy: Yeah, and that was an example of one of those platforms-as-a-service or software-as-a-service products. Genesis Market got taken down a couple of years ago, and that was literally cookie material — post-auth. So you can have the world's best non-phishable 2FA, but a stolen cookie gets you in because it replays the session — and it had browser fingerprints and things in it as well — so you've still got a problem. It also reinforces the need for technical controls that are separate from and complementary to the human controls. Obviously in my own case, getting phished, the human controls broke down, and unfortunately there weren't sufficient technical controls to save me from myself. So we really need both.
Mark: Very well said. And the last thing I want to touch on here, Troy — and I'm sure this will come as a shock to you — artificial intelligence. Apparently a big deal. Who knew? I know you've done some writing about this. I was hoping you could talk a little bit about Bruce and some of your work in this area — what you've learned applying that toolset in your own work, what you've seen, what works, what doesn't work as well as you might have thought.
Troy: I think to start with a very objective observation: everyone's trying to figure out where the value is — what's the stuff it actually does well, versus the stuff where it creates images of people with funny teeth and fingers. You're trying to work out, "how do I use this in a way that's productive for me as a business and hopefully in the best interest of the world as well?" We're trying to figure out that sweet spot.
So the experiment we've been running at the moment is trying to respond to our support tickets for Have I Been Pwned more efficiently, by using an instance of Claude that interacts with our customers on Zendesk, under a persona we call Bruce — because we just picked the most Australian name we could find. The way it works now is we treat him like a junior employee. We do refer to him as a person. So it is Bruce the bot. Bruce the bot has his own account in Zendesk. He doesn't have access to my account or to anyone else's account. He has all these locked-down rights. He's got access to a couple of different services under his own identity, just like we'd give a junior employee. So if he goes completely rogue or does something stupid, he's pretty sandboxed in terms of his access.
And where we're finding the sweet spot is not to make him a fully autonomous bot that's completely detached from humans, but to have him actually augment the work that we do. So Bruce pops up and says, "Hey, someone's logged a ticket. They're asking how to cancel their subscription. This is the response I want to give them — does that sound okay?" And we can either say "yes, go ahead" or we can give him a prompt. Particularly when it gets more complex — we had someone a couple of days ago who kept asking the same question over and over again, and I just told Bruce, "This person's obviously just being difficult. Just summarize everything that they've sent and give it back to them." So Bruce went through it: "At this date and time you said this, then I said this, then I said this." It was almost like playing defense for the humans — keeping us from, in this case, a rather belligerent customer.
It's almost like a human-augmented AI. I think if we still have the human touch — he doesn't do anything without our okay — we're able to do things much faster and more efficiently. And then as we find particular paths where we can really reliably and confidently give the right answer — someone says "how do I cancel my subscription?" — there's a very clear answer for that, and we'll give Bruce a little bit more leash. Just like you would with a junior employee: you invest time upfront, but you want them to be able to act more autonomously over time. And I think for us at the moment, that seems to be the best path forward.
Mark: That's fantastic. And I really like what you said there about treating Bruce as a junior employee. From our side, that's something we've seen work quite well too — using AI as augmentation to do what our research team could do, but maybe would take them longer to do manually. Really great as an assistive technology. And back to your point about the junior employee: it's fascinating to me how many of these AI problems are actually problems we've dealt with before — just at a slightly larger scale. Vibe-coded software, bugs and vulnerabilities — that's not exactly a new problem. Over-authorizing a new junior employee — not a particularly new problem either. So I'm encouraged to hear that the approach is "keep it on a leash, put those guardrails in place, don't over-authorize the agent to do things you don't want it doing" — just as you would with a junior employee you're bringing in.
Troy: Yeah, it makes a lot of sense. It's almost like the same human challenges we had before, just carried into a new era. Even to the point where, over the weekend, I was writing a "robophobia policy" — for people who refuse to speak to the robot. They say, "I'll only speak to a human." And I'm a little bit cautious about how I word this, but it kind of feels like other areas of discrimination in life, where someone says, "I will not have anything to do with you because of your makeup — I'll only talk to someone else." Now, for us as well, we are literally instructing the robot. So when someone says, "I don't want to talk to this person because it's not a sentient being" — maybe there's a little bit of humor involved here as well — but I don't want people to be scared of the robots. I want them to embrace Bruce and all his silicon-based brethren.
Mark: Right. Very well said. Well, I think that's about it for our time. Troy, I want to be respectful of yours as well. Before we break — for those of you watching who have questions for Troy or myself, please feel free to leave them in the comments. We'll do our best to consolidate those, summarize them, and maybe we can speak to some of those questions. We'll make sure we mention you, Troy, if you want to jump in, or maybe we can task Bruce to answer some of them — we can work something out.
But, Troy, any other projects, trainings, talks, or initiatives you want to call out before we end the session?
Troy: Any events or talks I have coming up are on my events page, listed on the front of troyhunt.com. I've got a little bit more international travel coming up this year — mostly European stuff, some online things. Otherwise everything is on troyhunt.com and all the social things are linked off there.
Mark: Okay, well there you have it. And if you're not reading Troy's blog, I don't know why — make sure you do that. You can also check out the Push Security blog at our website, where we write about a lot of the types of threats we've talked about today and share a lot of our research. If you want to book a demo of Push Security, you can also do that at our website. And with that — Troy, thank you again very much for joining me. Really appreciate it. Hope to talk to you again soon.
Troy: Thanks, Mark. Cheers.
Agenda: Why the Browser is the New Battleground
The browser is the new endpoint, and it's under attack. Attacks are happening entirely inside the browser sandbox, targeting applications directly over the internet, and blending in with legitimate web and network traffic, application access, and user activity. This is a significant challenge for security teams. Existing security tools can't get visibility into what's happening inside the browser. Attackers know this, and are ruthlessly exploiting the browser blind spot. This is fuelling a lot of attacker innovation, with new tools and techniques constantly emerging. Push Security VP R&D Luke Jennings is joined by John Hammond, Senior Principal Security Researcher at Huntress, to demonstrate the latest browser-based attack techniques. Ride along with Luke and John as they analyse real-world attacks, covering:
- ConsentFix, the browser-native ClickFix attack linked to Russian APTs
- Session-stealing, MFA-bypassing phishing campaigns targeting enterprises over LinkedIn and Google Ads
- The latest social engineering tradecraft and detection evasion techniques
- What the future of browser-based attacks looks like and what security teams can do about it