The prize is sending phishing e-mails that are indistinguishable from authentic e-mails. E-mails where every check and signature says they really come from the university's employee pensions team, or the IT accounts team, or the legal team.
Or perhaps sending spam was just a ploy to divert attention from something happening on a different server while people were trying to stop the flow of emails?
Not great indeed. Is this organization not under any compliance requirements? Unauthenticated SMTP is not going to pass even the laziest of security scans, although neither is VPN access without MFA.
Most of them are said to be quick about exploiting Gmail or other systems they know, but slow with unknown software (hours rather than seconds).
If your system is on-premises, you may reasonably assume that the attacker will need to read the man page, like a new employee, see? But these guys didn't need to read the man page.
You're right, it's not clever at all, the attacker just happened to find a completely zero authentication internal service. They might have even done so via an automated tool like some kind of script kiddie network scanning program.
This is the kind of dumb stuff we were doing 30 years ago: making the assumption that being physically on the network implies authentication.
There's zero excuse to have a no-auth SMTP server, or anything else for that matter.
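For illustration, here is a rough sketch (in Python, with placeholder host and addresses) of the kind of probe an auditor, or an attacker, can run against a relay to see whether it accepts mail with no SMTP AUTH at all; it only tests the envelope and never sends a message:

```python
import smtplib

def accepts_unauthenticated_mail(host: str, port: int = 25) -> bool:
    """Does this relay accept an envelope without any SMTP AUTH?"""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):
            smtp.starttls()
            smtp.ehlo()
        code, _ = smtp.mail("probe@example.org")       # MAIL FROM
        if code != 250:
            return False
        code, _ = smtp.rcpt("postmaster@example.org")  # RCPT TO
        smtp.rset()                                    # never send DATA
        return code in (250, 251)

# e.g. accepts_unauthenticated_mail("internal-relay.example.edu")
```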
I see a lot of posts, articles, etc... stating that people are surprised by the complexity of a cyber attack or scam. It seems that most people haven't yet learnt that this is a full blown industry targeting countless businesses, institutions and individuals 24/7, not just some script kiddies in their bedroom. There are office blocks full of trained professionals with sophisticated tools working to compromise digital security and manipulate human nature to gain access to accounts, data and funds. Everyone needs to be adopting a form of zero trust or trust but verify to every digital interaction and every use of technology.
As a passive hotel owner and active programmer, I can confirm it's always been the case. In the hotel business, getting customer requests, invoices and refund requests seemingly out of nowhere isn't too uncommon. Receptionists, who have the authority to handle customer cancellations and refunds, but who also receive packages and documents for guests, frequently fall for the slightly more laborious scams, in spite of the safeguards in place.
The phishing emails we get at my software dev job for security certification and pen testing pale in comparison to the actual effort being put in by scammers, who coordinate bookings with parcels and random invoices so that they tell a story, always targeting different shifts (almost never the same).
As the other commenter has posted, refunds for nonexistent bookings or refunds for someone else's booking are pretty frequent.
Other simple stuff includes overdue payments for fictitious deliveries such as soap, toilet paper, cleaning bills or even outsourced work.
The more complex scams involve making bookings and sending packages with fees and totals paid by the recipient: they try to convince the receptionists that their package needs to be accepted, and an actual delivery of random stuff, using a real delivery company, completes the scam. They don't always mention that there's payment required on delivery.
Other scams involve claiming lost luggage, wallets, electronics without them being the owners, and trying to convince the receptionist to send the item internationally. We're a hotel next to the airport, so international travellers are the norm, plus we have a room full of lost stuff. They make a booking with a fictional name, then cancel it or no show, and then ask for their black luggage, black wallet, tablet, gold bracelet, etc.
While zero trust is great, humans have experimentally established that it is more or less impossible for all people to maintain in all cases all the time. Eventually someone will fail, and it can even be a security professional. "Trustless" is a tokenbro buzzword, and it's not a viable path for users in general. We need some good trusted core software from which we can extend authentication to other, less reliable apps or machines.
> Everyone needs to be adopting a form of zero trust or trust but verify to every digital interaction and every use of technology.
I'd be interested in hearing how folks find working with "zero trust"; my employer's adoption of a zero trust VPN has been pretty bad, but I don't know if it's normal.
In my company, it's made it much harder to give decent support to users; previously, a user knew if they were on the VPN or not, and if they were on the VPN but couldn't reach our service, that was a very rare event and it led to a P1 outage getting an immediate response from a senior engineer.
Now, users don't know if they've passed the device posture checks or not - user plugs in their phone to charge it? Unauthorised external storage device, silently reduce their network access. So now if a user knows they're on the VPN but can't reach our service, that's very common; it's a P4 issue and within 4 hours an intern will tell them to reboot their PC and try again.
Apparently users can't be told when they've failed the device posture check or why, for 'security'.
Needless to say, the engineers hate the much larger support burden, and the users hate the much slower and less helpful responses.
No, and this isn't the concept of Zero Trust's fault. This is inexperience and/or a lack of competency from your security people and your support people. Although, more likely, given that two "silos" are impacted, it's a systemic organizational issue that isn't going to go away.
But isn't the whole point of Zero Trust to move away from a binary "fully trusted (allowed on the VPN) or not" and towards nuanced, dynamic, semi-trusted states?
i.e. isn't the fact you can be on the VPN yet blocked from accessing the service the goal of Zero Trust?
Weren't there protocols in place for devices connected to the VPN that would guard against the most common sources of posture-check failure? I imagine most problems are quite trivial, like the phone you mentioned, especially if treated as P4 (there might even already be a document with the required advice that the interns use when telling people to reboot).
THIS. People who are trained by common stereotypes (generally from the entertainment industry) don't have a clue.
I wonder how it might work out, if Hollywood produced a "Breaking Bad"-style series, about an ambitious young cybercriminal moving up into the really big leagues.
> if Hollywood produced a "Breaking Bad"-style series, about an ambitious young cybercriminal moving up into the really big leagues.
I'm waiting for the biopic of Ross Ulbricht. It's got all of the bits that Hollywood loves with the young protagonist breaking bad, FBI agents also breaking bad, and now comes with a guilty conviction turning into a full blown pardon.
The hard thing about this is that a montage of "complicated chemistry in an RV in your underwear" is way more interesting to watch than a montage of "typing on a computer in your underwear".
No, there are global cyber espionage programs going on.
War is something entirely different. The belligerents are not trying to disable agricultural systems or power grids; an actual war is a horse of a different color, and would likely be regarded as a proper escalation in the physical realm.
> The belligerents are not trying to disable agricultural systems or power grids; an actual war is a horse of a different color, and would likely be regarded as a proper escalation in the physical realm.
There have been any number of attacks on physical infrastructure and civil institutions that fit that description. Sandworm (a group in the Russian military) alone has successfully brought down power grids multiple times.
I think that there's a difference between state-sponsored hacking and a government turning a blind eye to illegal activities that happen to fulfill the same effect without their hands being dirtied. Incentives, such as potential future employment or the good graces of (more or less corrupt) local authorities when it comes to other illegal activities, can make a significant impact on an adversary's overall cyber readiness.
It's not unrestricted cyberwar, but I also don't think we have a complete picture of the scale of the cyber conflict, nor do we have a complete picture of the attacks and defences being mounted.
"According to a report on grid security compiled by a power industry cyber clearinghouse, obtained by POLITICO, a total of 1,665 security incidents involving the U.S. and Canadian power grids occurred last year. That count included 60 incidents that led to outages, 71 percent more than in 2021." [0]
Uh, no. There have been a massive number of attacks attempting to take down the power grid. It's just that the protections in place are currently working most of the time.
Unfortunately, the official stats tend to combine both physical and cyber attacks, so there's no clear sense of which is dominant... But, frankly, there isn't a need to separate them. The attacks are happening.
"As for information on our VPN setup (and our mail sending setups), it's on our support site (for obvious reasons) so we assume the attacker read it in advance."
That really changes the level of complexity for the attacker here
> As far as we can tell [...] the phish spam attacker used the main password they'd just stolen to register the person for our VPN and obtain a VPN password...
Requiring admin approval for VPN accounts would have prevented the phisher from getting VPN access to begin with.
This is a university. I expect they have a higher than normal proportion of attackers who know the system and exactly how they'd escalate having gained some access, and have the free time to prepare a customized attack.
On the other hand, those attackers are probably less malicious than the average Russian ransomware group.
I thought our unauthenticated SMTP got shut off after switching to Office 365. I looped over the SMTP servers in the hop list in the headers of an email, and one of the Microsoft domains accepted unauthenticated requests from within the network.
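(For anyone wanting to repeat that experiment: the hop list is just the stack of Received: headers. A rough standard-library sketch of pulling out each relaying host; the regex is a simplification of the real trace syntax, and the file path is a placeholder. Each host can then be probed from inside the network to see whether it relays without authentication.)

```python
import email
import re
from email import policy

def relay_hosts(eml_path: str) -> list[str]:
    """Return the 'by <host>' part of each Received: header, newest first."""
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)
    hosts = []
    for received in msg.get_all("Received") or []:
        match = re.search(r"\bby\s+([A-Za-z0-9.-]+)", received)
        if match:
            hosts.append(match.group(1))
    return hosts

# e.g. relay_hosts("suspicious-message.eml")
```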
> It seems extremely likely that the attacker had already researched our mail and VPN environment before they sent their initial phish spam, since they knew exactly where to go and what to do.
As someone else said, I would increasingly suspect that apparently targeted or seemingly highly-invested hacking behaviour is just a new breed of scripts that are puppeteered by phishing AI multi-agent systems (maybe backed by DeepSeek now).
Just like self-driving cars that will never make the same mistake twice, these things will likely keep a catalog of successful tactics, and so always be learning obscure new tricks.
Or you can ask a GPT, that has already indexed your publicly available support docs, to prioritize potential places for a user looking to keep access as a backup.
AI is available to everyone, and we’re not prepared.
Sounds like it is possibly an automated script or agent doing most of this work accessing the VPN and SMTP server. Really shouldn't have any open mail servers anywhere.
I have concluded that I will eventually fall for a scam and pay a medical bill for some service I never received. All the bills look like scams, and for one service there are often 3 separate bills from different areas so it would be easy for someone to tack on one more and get some cash from me...
I feel bad for cks, but I probably would have handled it a little differently...
- shut their accounts off network-wide
- drop all related network connections
- forcibly reset their password and make them choose a new one in person. They may have changed it earlier, but do it again
- increase logging to catch any potential reoccurrences against the same user or other users
- inspect ACLs and reduce access for all users if possible
- prevent users from connecting from areas outside of their usual network sphere
- let the user back on, and ask them to be more careful in the future
- better mail filtering would be nice, but they'll always find a way to beat the spam filter
- (I hate this option the most, but...) send fake scam emails internally to see if anyone else takes the bait
This is of course ignoring 2FA, but 2FA isn't perfect either, with SIM swapping... I personally don't think changing the password is enough for an event like this.
I've gotten emails with links "From" my parents' names, but checking the addresses, they were accounts on random .edu domains. The fact that separate emails came from two different names I knew really made me feel targeted.
Not necessarily. We have employees sent the usual phishing emails claiming to be from the CEO - but sent to their personal email addresses, since the attackers know targeting a corporate domain won't work because of the "similar-name-but-external-domain" warnings.
I figure these kinds of relationships are determinable from LinkedIn etc., but they're still automated. Using family members seems like an extension of this technique, sending phishing from someone you probably know.
TFA describes a perfect example of unwarranted implicit trust. Any tunnel-in should terminate in an environment not unlike being outside a regular perimeter, with internal per-host access control (perhaps by RADIUS or some coordinated ACL, which would also have fixed the parallel account problem)... especially to an unsecured Internet mail server. I wish the misnomer "Zero Trust" were better crafted and understood as a broad philosophy. I think it's psychologically difficult to do the role-play and imagine "what if I couldn't trust myself?"
Cool, but hey, get this... I'm interviewing some guys working in API security next week, and the discussion notes so far are terrifying. People do all this work to build secure networks and OSes, and then someone says "Hmm, we need an API for <fashionable reason>", and the next thing you know a junior dev has exposed all the top-level functions of a program running with high privileges as URL handlers.

So maybe even _within_ your app it's not too paranoid to think "what if someone got an entry point into this function?" and at least put a "NEVER EXPOSE" comment there :)
ZTNA can help here. A VPN gives access to the entire network, with all the rubbish services sysadmins have lying around, like unauthenticated SMTP servers (sounds awful, but it happens everywhere: something needs to send email notifications from all sorts of systems and nobody wants to manage those accounts... sad story).

ZTNA will, for users, feel much like a VPN, but it only gives access to the services the user is allowed to reach, not the entire network. It helps a lot against these types of scenarios.

Also, how fast is fast? You can scan an internal network on a single port in the blink of an eye, so if you don't have good network IDS/IPS internally, you won't really see the scan, and it looks like someone "knew the network in advance" because they scanned it in a couple of seconds and automatically ran scripts based on the results. It doesn't need to be knowledge gained in advance.
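To put a number on "the blink of an eye", here is a toy single-port connect scan in Python (subnet and port are placeholders); a /24 finishes in a second or two on a LAN:

```python
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

def port_open(host: str, port: int = 25, timeout: float = 0.5) -> bool:
    """TCP connect check: is anything listening on this port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_subnet(cidr: str = "10.0.0.0/24", port: int = 25) -> list[str]:
    """Return the hosts in the subnet with the port open."""
    hosts = [str(h) for h in ip_network(cidr).hosts()]
    with ThreadPoolExecutor(max_workers=128) as pool:
        results = pool.map(lambda h: (h, port_open(h, port)), hosts)
    return [h for h, is_open in results if is_open]
```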
- Monitor the internal network properly, as if it were an external network.
- Use ZTNA if you can afford such a solution.
- Do regular audits for things like unauthenticated services, and use incidents like this one to educate sysadmins about the risks of such services in a friendly manner. They will usually understand, especially after an incident, as long as you bring it up with a good explanation rather than a demanding attitude.
- Use a lot of mail filtering... more is better. It can be a bit tedious; at my company we have more than 4 solutions scanning all email and attachments, and stuff still slips through, but not a lot.
- Also scan outbound or "local" email (BEC fraud etc.).
- Do good post-incident reviews and apply the learnings each time something happens. Sounds obvious, but this is often omitted: the learnings are only kept within the sec teams, or turned into one-off remediations rather than process changes.

Edit: oh, and also monitor for logon anomalies. A lot of solutions support this, e.g. a user logs in from a new IP: alert on it, or even block it. The right action depends a bit on what's normal, so ML-based solutions are great here, but basic statistical analysis can also help if you can't buy or build an ML solution. (It's not too hard to build, really; basic models will suffice.)
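As a toy illustration of the "basic statistical analysis" option (the usernames, IPs, and /24 granularity here are all made up), even just flagging the first login from a network a user has never been seen on goes a long way:

```python
from collections import defaultdict

# user -> set of /24 prefixes this user has previously logged in from
seen_prefixes: dict[str, set[str]] = defaultdict(set)

def prefix24(ip: str) -> str:
    """Collapse an IPv4 address to its /24 prefix, e.g. 192.0.2.10 -> 192.0.2."""
    return ".".join(ip.split(".")[:3])

def check_login(user: str, ip: str) -> bool:
    """Return True if this login looks anomalous (new /24 for this user)."""
    prefix = prefix24(ip)
    is_new = prefix not in seen_prefixes[user]
    seen_prefixes[user].add(prefix)
    return is_new

# The first login from a new network triggers an alert; later logins
# from the same /24 do not.
assert check_login("alice", "192.0.2.10") is True
assert check_login("alice", "192.0.2.77") is False
```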
After the takedown of APT28, I continued to receive spam from IP ranges that were associated with APT29's malware campaigns.
Turns out there are a lot of fake shell companies that act either as hosting companies specifically for malware campaigns from Russia and China, or as outfits that try to defraud people, e.g. their CEO being on the FBI most wanted list or the company being sanctioned by the UN.
I'm currently creating some sort of cyber map of these spam/phish/malware campaign overlaps, as part of my antispam [1] effort.
I got tired of LLM-based targeted spam, where they have a system in place that is trained on my social media profiles, because those messages are very hard to identify as spam.
Blocking specific domains is a useless effort because they keep on spawning new fake company domains that are either copies of legit ones or are generated fake profiles. They are so automated that they also create staff members and fake profiles on LinkedIn, specifically for that spam effort. Nobody at LinkedIn gives a shit about those fake avatars, I reported hundreds by now and they did absolutely nothing.
Anyways, long story short, here's the blocklist of those ASNs and companies. I'm working on the map at the moment and don't wanna publish it until I can prove its correctness:
[1] https://github.com/cookiengineer/antispam
> they are very hard to identify as being spam
For every account I create on the internet I create a new mail inbox; this way I can just compare the email's subject and sender with the inbox it was sent to. So, when I receive a notice from my bank on my GitHub email, I know what happened. This has genuinely saved me a few times already.
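As a toy version of that check (the aliases and the mapping are made up), the whole trick is just comparing who a message claims to be from against what the receiving alias was created for; and since a From header can be spoofed, the deeper signal is simply that an alias receives mail it was never given out for:

```python
# Hypothetical mapping: which sender domains each alias was handed out to.
EXPECTED_SENDERS = {
    "github@example.org": {"github.com"},
    "bank@example.org": {"mybank.example"},
}

def looks_suspicious(alias: str, sender_domain: str) -> bool:
    """A 'bank notice' arriving at the GitHub alias is the red flag."""
    allowed = EXPECTED_SENDERS.get(alias.lower())
    return allowed is not None and sender_domain.lower() not in allowed

assert looks_suspicious("github@example.org", "mybank.example") is True
assert looks_suspicious("github@example.org", "github.com") is False
```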
Same, but I use a catch-all, not separate inboxes.
I've caught a couple of hacked or sold email lists, but nothing that drastic yet.
One organization posted the email address I gave them on a public contact list webpage, so I get spam/phishing at that one.
Using a catch-all is the easiest way to do this, and I highly recommend it for other people.
May I ask what E-mail provider you use? Or do you run your own mail server?
(not GP)
Most providers let you run a catch-all address on your own domain. You usually set it up with just a "catch-all" checkbox and where to send all the mail, or you write the username as "*" in an alias.
Fastmail
This feature is available for no extra cost from panix.com with the "+" (dcoder+anytext@panix.com) technique, and I can use filters on the address.
Since many sites can't believe that an email address can have a "+", I can also use "anytext@dcoder.users.panix.com" at most sites instead of dcoder@panix.com. ("anytext" typically, for me, being the name of the company or organization that I'm dealing with. Also, my Panix account is not really "dcoder".)
Spammers know of the "+" trick, which was popularized by GMail. If they have any level of sophistication at all, they'll /+.*@/@/
Plus-tag stripping sounds like a no-brainer for spammers to do, but I've never come across this behavior yet.
I always give out myname+tag@... to places that ask for email, and have an incoming message rule that puts bare myname@ straight into spam folder.
So far, the only messages to bare address were service updates from my email provider itself.
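For concreteness, a toy version of both sides of this (addresses are placeholders): the substitution a list-cleaning spammer would apply, and the routing rule described above.

```python
import re

def strip_plus_tag(addr: str) -> str:
    """What a mildly sophisticated spammer does: foo+tag@x -> foo@x."""
    return re.sub(r"\+[^@]*@", "@", addr)

def route(rcpt: str) -> str:
    """The rule described above: mail to the bare address goes to spam."""
    local_part = rcpt.split("@", 1)[0]
    return "inbox" if "+" in local_part else "spam"

assert strip_plus_tag("myname+shop@example.com") == "myname@example.com"
assert route("myname+shop@example.com") == "inbox"
assert route("myname@example.com") == "spam"
```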
Sounds like it's time for someone to set up an email service that offers the same functionality without the +. There would be some headaches and it would limit the degrees of freedom users have with base email addresses, but I'd use it!
Fastmail already provides this feature if you have a custom domain.
If you have 'domain.com' you can receive emails either on 'foo@domain.com' or 'bar@foo.domain.com' without problems.
Apple and Proton Mail both let you do this without the plus sign. You just have to generate the alias through their password apps.
https://relay.firefox.com/ may be a tool for that. I never used it, as I use a catch-all, but may use it in the future.
Fastmail lets you make as many redirects as you want, no + in them.
You can even get an api key for it and plug that into bitwarden, so that when you sign up for whatever, you click bitwarden, generate password, generate email, sign in and it's all set. So smooth. (I sound like an ad, but internet pinky promise no affiliation)
It sounds like panix already did.
Tried signing up with Samsung using username+samsung@gmail.com and was told my email contained an illegal word.
I don't know what they are thinking. Isn't it a real family name in Korea?
As far as I know Samsung isn't a Korean family name, it's just a brand.
That said, are you sure it wasn't the + that caused the problem? I've run into that a few times, presumably when someone tried to roll their own email validation.
It's probably a specific policy of Samsung which doesn't allow the word samsung in recipient addresses. I had the same issue, but with samsung@private-domain.tld
gnusmas@private-domain.tld worked just fine..
Sometimes you can sign up with the +, but when you try to log in, either on the homepage or in an app, the login is invalid because of that + sign. Different validations. I stopped doing it that way after getting locked out of accounts 6 months later...
My usual "smasung" typo worked fine when I registered with them. I use a service for disposable addresses redirected to my main mailbox for potentially spammy registrations which I don't really care about, instead of just creating new accounts which is way too inconvenient to manage.
I'll make these intentional letter swaps every time just to avoid regexes and automatic filters.
You spelled smasnug wrong :)
I'm curious - is there a benefit to doing so versus using Apple's Hide My Email (or a similar service) or appending +service to a gmail email address? Completely ignorant on the topic so apologies if this is a silly question.
I'm not really sure how Apple's Hide My Email works, but my impression is it works by creating a proxy email for you. If that is the case, it should be a good solution for protecting your privacy. The problem is you become hostage to Apple, because now if you lose access to your Apple account you also (potentially) lose access to ALL your accounts. It's probably on the same level as using a password manager like Bitwarden.
I've just explained the problem with the gmail tagging in another comment.
Yup. Got my own domain(s) and use a different address for all my services (like with Gmail where you could append +service to your email, but with a completely distinct email per service, like paypal@mydomain.com). Helped me several times to identify spam & phishing without even having to check the email itself.
My guess is that you probably know what I'm going to write, but a lot of people don't realize this 'Gmail trick' doesn't really work.
The problem is that foo+bar@gmail.com and foo@gmail.com are delivered to the same inbox, so if you are trying to scam someone it is safe to remove anything after the + in a gmail address.
And having a custom domain on gmail doesn't improve your situation, because with just a simple 'dig mx' you can know if the domain is hosted on gmail and apply the same regex to remove all labels.
So, to be less inflammatory: the feature works as expected, but it only protects you if the bad actor is really dumb/lazy, or honest.
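Here is a rough sketch of that 'dig mx' check using the third-party dnspython package (the substring match is a simplification; Google-hosted domains typically have MX hosts under google.com):

```python
import dns.resolver  # third-party package: dnspython

def hosted_on_google(domain: str) -> bool:
    """Do the domain's MX records point at Google's mail servers?"""
    try:
        answers = dns.resolver.resolve(domain, "MX")
    except Exception:
        return False
    return any(
        "google.com" in str(record.exchange).rstrip(".").lower()
        for record in answers
    )

# A list cleaner could then strip '+tags' from any address whose domain
# passes this check, exactly as described above.
```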
The other thing Gmail does is ignore `.` in the local part. So, one other trick would be to use particular dot patterns for specific accounts.
I have seen spam messages with random distributions of '.' sent to my Gmail for years.
If everything goes to a + address, then any email sent to your base address is invalid and can be trashed.
Some people really love putting dumb validation rules for emails in forms... You would be surprised to know how many systems in the real world will just refuse anything that is not a letter or a number in your email.
And the 'fuck them, I won't do business with them' attitude doesn't really work if the system that won't accept your email is the local gas company.
And there is another problem: some systems will just remove any label without informing you. I've had this problem logging in to some random websites. My account was created with foo+bar@gmail.com, but to log in I had to use foo@gmail.com.
Not surprised at all, I've been using the Internet and writing software for a couple decades now. Heck, I might've written one of the validators you're complaining about. But they are typically written to avoid +, for the exact reasons you described.
For those sites, you can add a dot in your username. Then you can ignore any emails sent to an address without the presence of a dot or a plus.
I'm sure there are sites that don't accept dots either, but I've never run into one. So you have to make an exception? Oh well.
I agree that it's easiest to do with service@domain.tld, like the grandparent suggested.
I do the same as the person you're responding to. There is no '+' in my email, I just create random strings @mydomain. It's impossible for a scammer to know they all go to one inbox.
I have a feeling spammers don't "dig" anything before removing labels, if they remove them at all.
I use a similar approach, since I have the luxury of owning a domain.
The problem, however, is that most companies still rely on crappy enterprise services like Microsoft Office. For most people, managing identities like this is impossible, due either to a lack of user-friendly options or to the amount of IT knowledge required.
I mean, we are speaking about having to configure Dovecot and Postfix and similar tools, and I fuck that up regularly. And we are also assuming that they have to be unguessable (you have github@? maybe I should target linkedin@, too, then!) which implies that they have to be random-looking which means they will likely be blocked by registration filters.
Newer projects like Maddy [1] kind of go towards that direction, but are still targeted at developers or sysadmins.
[1] https://github.com/foxcpp/maddy
> configure Dovecot and Postfix
'Creating a new inbox' was an exaggeration on my part. What I have is a catch-all on my Fastmail account. But when I talk about creating inboxes, it seems to make it easier for normal people to understand what I'm doing and the benefits it brings.
> we are also assuming that they have to be unguessable
That would be nice, but I don't have a nice way of doing it. I've tried to use something like rot13 to make it less obvious, but it is a pain to manage. It would be nice if there existed a cipher that was pretty easy to do in my head, but I never found anything like that.
> you have github@? maybe I should target linkedin@, too, then!
Yes, this is a problem. For a targeted attack this may become a weak point in my defense. But this is a calculated risk I'm willing to accept for now.
Microsoft used to let you create 500 free email addresses under any domain you added, for years. I miss those years. It had the nice benefit of putting you into Microsoft's ecosystem. I was able to make emails for different sites too.
I also recently came across some of these fake company campaigns with attached employee profiles. It's very hard to distinguish them from legitimate companies.
It's especially hard now that many legitimate companies use a lot of generic-sounding AI-generated content, which seems to be the same approach the spam/phish/malware teams are using.
IMO we need some kind of zero-knowledge proof system that can be checked to verify if a message sender is a US citizen, employed by who they say they are employed by etc.
I don't see how we can trust anything in a post-generative AI world any other way.
>IMO we need some kind of zero-knowledge proof system that can be checked to verify if a message sender is a US citizen, employed by who they say they are employed by etc.
I think this could be a great opportunity for Google. Lots of organizations already make use of Google Workspace/Gmail. Imagine if Google Workspace offered the equivalent of a "Twitter blue check", where you pay extra, and anyone who views your email in Gmail sees a little check mark next to it, that shows Google verified you are who you say you are, and Google thinks you're not malicious. Salespeople sending cold emails would love it.
I don't think you can solve this problem purely cryptographically. An attacker could always bribe a US citizen to set up a shell corp or whatever. Most objectively verifiable indicators can be gamed. There has to be an organization that's good at security, like Google, which is in the business of continuously keeping up with adversaries. Actually Google might not be the best because they kinda suck at tailored customer service, but anyway.
> I reported hundreds by now and they did absolutely nothing.
Ha. Same here, but for reporting a job ad that targets the Dublin area when it's really for Bangkok. I hate it.
Thank you for doing this important work!
When I was in the US Navy, I learned that, most of the time, the weak points in security were people. Attackers know this and exploit it. And it usually wasn't movie-plot-style "do this or your wife gets it" exploits. Those seemed to get blown up easily. It was mundane things. Distracting a watch stander with something that was actually stupid. Making someone late for duty. Putting something really gross in the garbage hoping the inspector would skip that bag. So many little lapses in human judgement. Most completely innocent. This was with vigilant, uniformed people subject to military discipline, and those things happened.
So you have to focus on process and systems. Some easy stuff:
* Never ask customers/employees for a password. If someone does it's a scam.
* Refund money only to the payment method used to pay for the product/service.
* 2FA is your friend no matter how much the VP of Sales whines about it.
* Have a way to expire tokens and force resets of passwords.
That's why it's easy. People think "I'm not important enough" to be targeted, or "My job isn't that important", but that's what adversaries are counting on. Their "unimportant" job or whatever is just a stepping stone.
What's the threat scenario where forcing a password reset increases security? I'm genuinely curious, because I feel it's often the case that password expirations might introduce more threats than they mitigate.
> What's the threat scenario where forcing a password reset increases security? I'm genuinely curious, because I feel it's often the case that password expirations might introduce more threats than they mitigate.
Not every reset is due to expiration... e.g. if you know a user reused, on your service, a password from a different service that got hacked, you should probably make them reset it...
When you know that account / those credentials have already been compromised.
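One concrete version of "you know the credentials are compromised" is checking a password (at the next login, or in a breach-monitoring job) against the Pwned Passwords range API; this particular endpoint needs no API key and only ever sees the first five characters of the SHA-1 hash:

```python
import hashlib
import urllib.request

def password_is_pwned(password: str) -> bool:
    """k-anonymity lookup against api.pwnedpasswords.com/range/."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(
        f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10
    ) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<hash suffix>:<breach count>".
    return any(line.split(":")[0] == suffix for line in body.splitlines())

# A True result for a user's current password is a clear-cut reason to force
# a reset; scheduled expiry for its own sake is a separate debate.
```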
> When I was in the US Navy, I learned most of the time, the weak points in security were usually people.
Good example:
> Navy chiefs conspired to get themselves illegal warship Wi-Fi [0]
[0] https://www.navytimes.com/news/your-navy/2024/09/03/how-navy...
[0] https://news.ycombinator.com/item?id=41441486
People… and frankly even just accounting at many places is surprisingly informal.
> People connecting through our VPN have access to an internal-only SMTP gateway machine that doesn't require SMTP authentication.
Time to clean that up while you're at it.
Layered defence. These days, why do you have anything unauthenticated anywhere? Every system should be authenticated and authorized. Unless information is fully public.
Because of that one cobbled-together system, or that old network MFP that sends scans to email, which is needed for a whole bunch of stuff, can't authenticate, and which someone decided is too expensive to replace for such a small attack surface. Until the problem costs more than the solution, large organizations don't move, by design. That is usually the official benchmark: what costs more.
In my experience, there's a random server that nobody knows who maintains it offhand, including the person who maintains it. People ask, and it just doesn't go anywhere. Until something like this happens. It's nothing to do with costs, it's just an oversight.
> there’s a random server that nobody knows who maintains it offhand, including the person who maintains it
It makes no sense that you'd keep an insecure service because you forgot someone needs it. You turn it off and the reminder will promptly come to you. After this it's a decision, not oversight.
> It’s nothing to do with costs, its just an oversight
The article suggests that their internal unauthenticated SMTP was there by design, not oversight, together with an authenticated (presumably external) one. Some assessment deemed addressing the risk from the unauthenticated internal one not worth the cost and effort.
> People connecting through our VPN have access to an internal-only SMTP gateway machine that doesn't require SMTP authentication [...] previous phish spammers have exploited some combination of webmail and authenticated SMTP.
> It makes no sense that you'd keep an insecure service because you forgot someone needs it. You turn it off and the reminder will promptly come to you. After this it's a decision, not oversight.
You're assuming an org that had a policy in place for this which was followed all along, and not that it's a piecemeal service barely held together by dreams and prayers. My experience with university IT departments is there's an _incredible_ amount of "dunno who that belongs to but don't touch it because it might be important" going on.
> there’s an _incredible_ amount of “dunno who that belongs to but don’t touch it because it might be important”
Right, so not an oversight, but a decision not to touch the obscure system. Decisions with a bad outcome aren't oversights, unless you want to downplay them when justifying yourself.
Your SMTP gateway is never "that" system that nobody knows about. You must know who owns and manages it, and you know you have to secure it (minimal measures like... authentication) so you don't get unceremoniously penetrated. And when you do, you may or may not realize that something will fail because of the extra security.
If you know that "one cobbled together system, or old network MFP" I was mentioning earlier will fail when you enforce authenticated SMTP, because it's too old and replacing it is $$$, or too arcane and bringing an expert is $$$ then you will take an informed decision whether to proceed with your security hardening or not.
If you have no idea whether something will fail when you enforce authenticated SMTP (you didn't catch it in the dry runs), you just do it, and if someone comes in a frenzy to tell you that the old and arcane system is down, you revert the change. From then on you're in the informed-decision scenario from above.
This is not a minor omission. Leaving a glaring insecurity like this open by oversight isn't what the article suggests happened, and it's almost never the case. It's not something that "just happens"; it's something that people meet to discuss and decide to ignore, maybe for reasons that look good at the time. This is the essence of risk taking. But it's a decision nonetheless.
(opportunity) cost to dig into ownership
1. Carefully establish the one critical data flow the whole business depends on. It may cost some time, but this one you have to protect by all means, so stakeholders won't mind.
2. As for the rest, take them down one by one and see what breaks. Got a call to internal support hotline? "Ooops, sorry, we will turn it on and let's chat about it soon."
There can be a few announcements in advance to shift the blame before (2): "Declare yourself or face consequences" (ChatGPT will write a nicer email). If you are on good terms with the CFO, the noise won't matter. In fact, many people will thank you when their weird stuff is taken into IT's care.
By that logic everything is a cost or an opportunity cost. I think the idea behind an opportunity cost is lost if you over apply it
Unfortunately, basically every business I support has some horrible app that takes an IP address and expects to be able to send unauthenticated email to it. If it's not payroll software, it's HR or legal software.
Imagine having to type your password for every single shell command you execute. Not just for the line you typed, but per statement. A subshell counts as at least one; so does each segment of a pipe. Then, if any of those statements run a shell script, this applies recursively. Then, for any actual program that runs, you have to confirm every syscall with a password too.
That's what "every system should be authenticated and authorized" feels like in the limit. So in practice, it always boils down to how deep you go before the overhead starts to eclipse any benefit you get from running the system.
An old MFP that can't speak modern SSL/TLS.
An old Java-based application that doesn't respect all email flags and will often just close the connection, even mid successful auth.
A new server that lives in the cloud but doesn't match up with the right protocols to send email out of Azure and into 365, so it's punted down to on-prem and back up to 365 just so Microsoft can sleep better at night.
These are the most common reasons I have seen.
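If "can't speak modern TLS" is the suspicion, you can probe the relevant SMTP endpoint before hardening anything. A minimal sketch using Python's ssl and smtplib; the hostname and port are placeholders, and note that a local OpenSSL built without legacy protocols will also make the old versions fail on the client side, so treat TLS 1.0 results with care:

```python
# Sketch: check which TLS versions an SMTP submission endpoint will negotiate,
# e.g. before forcing old MFPs or Java apps onto an authenticated, TLS-only
# port. HOST/PORT are placeholders.
import smtplib
import ssl

HOST, PORT = "mailgw.internal.example", 587  # placeholders

def negotiates(version: ssl.TLSVersion) -> bool:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we only care about the handshake here
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with smtplib.SMTP(HOST, PORT, timeout=10) as smtp:
            smtp.starttls(context=ctx)   # raises if this version is refused
        return True
    except Exception:
        return False

for v in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_2, ssl.TLSVersion.TLSv1_3):
    print(v.name, "OK" if negotiates(v) else "refused")
```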
There is this, and then there is the real world of (usually large) companies.
Switching to a modern stack is not just a matter of choosing the solution. That part is easy.
You then must know what you have. Still manageable, somehow.
Then the processes: maybe the company as a whole knows all of them (maybe), but that knowledge is dispersed among plenty of staff.
Then you have dependencies. You close a door and a building collapses 10 km away.
Finally, there is everything you do not know about, added over time by who knows how many someones.
Don't get me wrong: I work in cybersecurity. But I know how complicated things are.
Because there are two unstoppable laws of the universe:
Physics.
Laziness.
Forget authentication, I know some people who leave their car key in their car and their front door unlocked because they can't be arsed.
Cars are insured and the consequences of a stolen car are not very high. Big difference from losing your retirement account or the proprietary IP of a business.
Consequences of a stolen car are directly obvious and noticeable, and we can't get some people to apply security there. How do you think that's going to work in a system with so many levels of abstraction and typically huge numbers of people involved?
Cars are not stolen very often. Sure, you can look up national statistics and find large numbers, but for the average person (who generally isn't in a "bad neighborhood") it's something that only happens in movies.
That's for the full car disappearing. Getting your car broken into and something stolen happens much, much more often.
That seems pretty relative. The consequences are probably high enough for most people. You have the hassle of not having a car, then getting a rental, dealing with your insurance, and eventually getting a check for what similar used cars go for, which may be in worse condition and who-knows-what mechanical shape. So if you want a car that you can (pretty much) trust, you have to buy new, which is an unplanned expense. If you're not in a good position financially, it can f things up for you. Especially if you have personal belongings in the car which are not replaceable, and which you probably won't get full value for even if you think to report them to your insurance: sunglasses, USB cable, emergency seat-belt cutter, first aid kit, tire iron (if you don't want to use the crap one that came with the car), floor liners, seat covers, etc.
OK. You’re gonna lose $100-$200 worth of crap in your car and possibly pay out a $1000 deductible. How does it remotely compare to losing your retirement account or say having all your nude photos sent to your contact list?
For some people, maybe they can't afford to lose $1000-$1500 and the time, all of which might be a bigger deal than some nudes getting out.
For myself, my personal and professional contact lists wouldn't be pleased, and I would apologize to them, but it's not going to cause me to lose any money. I'm certainly not going to be embarrassed about nudity or body shape/condition. Everyone is nude under their clothes, and other people have the same body shape or maybe even medical conditions. (Poor hygiene is something else though, and that would be unpleasant.)
Also, I don't let anyone take nudes of me, nor do so myself, because it's just easier to assume that anything digital might be hacked one day.
Whataboutism.
They don't have to compare, they are both problems.
Seems like a lot of effort to just send spam. Almost feels like their preparation outweighs their imagination by a large margin.
I'd have thought there would be a lot more that could be done with VPN access than immediately burn it by sending spam.
The prize isn't sending spam.
The prize is sending phishing e-mails that are indistinguishable from authentic e-mails. E-mails where every check and signature says they really come from the university's employee pensions team, or the IT accounts team, or the legal team.
Or perhaps sending spam was just a ploy to divert attention from something happening on a different server while people were trying to stop the flow of emails?
Distraction. Like a magician.
> People connecting through our VPN have access to an internal-only SMTP gateway machine that doesn't require SMTP authentication.
This part sounds... not great. Even a bad actor within the org could send messages as someone else: president to payroll, etc.
not great indeed. is this organization not under any compliance requirements? unauthenticated SMTP is not going to pass even the laziest of security scans. although neither is VPN access without MFA
I don't know where the tech-ability bar for spammers is, but this doesn't strike me as unusually clever or well prepared.
Most of them are said to be quick about exploiting Gmail or other systems they know, but slow with unknown software (hours rather than seconds).
If your system is on-premises, you may reasonably assume that the attacker will need to read the man page, like a new employee, see? But these guys didn't need to read the man page.
For the typical spammer, this was pretty good.
For the typical hacker or foreign service this went as expected. Just that they detected it very soon, so not much harm done. Only VPN
You're right, it's not clever at all, the attacker just happened to find a completely zero authentication internal service. They might have even done so via an automated tool like some kind of script kiddie network scanning program.
This is the kind of dumb stuff we were doing 30 years ago: making the assumption that being physically on the network implies authentication.
There's zero excuse to have a no-auth SMTP server, or anything else for that matter.
No auth smtp server sounds like a very bad idea and the real culprit here. Security by obscurity (VPN in this case) never works.
I see a lot of posts, articles, etc... stating that people are surprised by the complexity of a cyber attack or scam. It seems that most people haven't yet learnt that this is a full blown industry targeting countless businesses, institutions and individuals 24/7, not just some script kiddies in their bedroom. There are office blocks full of trained professionals with sophisticated tools working to compromise digital security and manipulate human nature to gain access to accounts, data and funds. Everyone needs to be adopting a form of zero trust or trust but verify to every digital interaction and every use of technology.
As a passive hotel owner and active programmer, I can confirm it's always been the case. In the hotel business, getting customer requests, invoices and refund requests seemingly out of nowhere isn't too uncommon. Receptionists, who have the authority to handle customer cancellations and refunds, but also receive packages and documents on guests' behalf, frequently fall for the slightly more laborious scams, in spite of the safeguards in place.
The phishing emails we get at my software dev job for security certification and pen testing pale in comparison to the actual effort being put in by scammers, who coordinate bookings with parcels and random invoices so that they tell a story, always targeting different shifts (almost never the same).
What are these scammers looking for? Presumably not to just get a refund on their vacation or package delivery.
As the other commenter has posted, refunds for nonexistent bookings or refunds for someone else's booking are pretty frequent.
Other simple stuff is overdue payments for fictitious deliveries such as soaps, toilet paper, cleaning bills or even outsourced work.
The more complex scams involve making bookings and sending packages with fees and totals paid by the recipient: they try to convince the receptionists that their package needs to be delivered, and an actual delivery of random stuff happens using a real delivery company to complete the scam. They don't always mention that there's payment required on delivery.
Other scams involve claiming lost luggage, wallets, electronics without them being the owners, and trying to convince the receptionist to send the item internationally. We're a hotel next to the airport, so international travellers are the norm, plus we have a room full of lost stuff. They make a booking with a fictional name, then cancel it or no show, and then ask for their black luggage, black wallet, tablet, gold bracelet, etc.
They're looking for a refund for a vacation or package delivery that never happened -- or did happen, but not for them.
While zero trust is great, humans have experimentally established that it is more or less impossible for all people to maintain in all cases, all the time. Eventually someone will fail, and it can even be a security professional. Trustless is a tokenbro buzzword and it's not a viable path for users in general. We need some good trusted core software from which we can move further to auth other less reliable apps or machines.
> Everyone needs to be adopting a form of zero trust or trust but verify to every digital interaction and every use of technology.
I'd be interested in hearing how folks find working with "zero trust"; my employer's adoption of a zero trust VPN has been pretty bad, but I don't know if it's normal.
In my company, it's made it much harder to give decent support to users; previously, a user knew if they were on the VPN or not, and if they were on the VPN but they couldn't reach our service, that was a very rare event and it lead to a P1 outage getting an immediate response from a senior engineer.
Now, users don't know if they've passed the device posture checks or not. User plugs in their phone to charge it? Unauthorised external storage device, so their network access is silently reduced. So now if a user knows they're on the VPN but can't reach our service, that's very common; it's a P4 issue and within 4 hours an intern will tell them to reboot their PC and try again.
Apparently users can't be told when they've failed the device posture check or why, for 'security'.
Needless to say, the engineers hate the much larger support burden, and the users hate the much slower and less helpful responses.
Is it supposed to suck this much?
> Is it supposed to suck this much?
No, and this isn't the concept of Zero Trust's fault. This is inexperience and/or a lack of competency from your security people and your support people. Although, more likely, given that two "silos" are impacted, it's systemic organizational issues that aren't going to go away.
But isn't the whole point of Zero Trust to move away from a binary "fully trusted (allowed on the VPN) or not" and towards nuanced, dynamic, semi-trusted states?
i.e. isn't the fact you can be on the VPN yet blocked from accessing the service the goal of Zero Trust?
Absolutely baseless take:
Weren't there protocols in place for using devices connected to the VPN that would guard against the most common sources of posture-check failure? I imagine most problems are quite trivial, like the phone you mentioned, especially if treated as P4 (there might even already be a document with the required advice that the interns use when telling people to reboot).
There was a movie about that from last year - The Beekeeper. https://en.wikipedia.org/wiki/The_Beekeeper_(2024_film)
THIS. People who are trained by common stereotypes (generally from the entertainment industry) don't have a clue.
I wonder how it might work out, if Hollywood produced a "Breaking Bad"-style series, about an ambitious young cybercriminal moving up into the really big leagues.
> if Hollywood produced a "Breaking Bad"-style series, about an ambitious young cybercriminal moving up into the really big leagues.
I'm waiting for the biopic of Ross Ulbricht. It's got all of the bits that Hollywood loves with the young protagonist breaking bad, FBI agents also breaking bad, and now comes with a guilty conviction turning into a full blown pardon.
The hard thing about this is that a montage of "complicated chemistry in an RV in your underwear" is way more interesting to watch than a montage of "typing on a computer in your underwear".
It’s not just an industry, it is also state sponsored.
There is a global cyber war going on.
No, there are global cyber espionage programs going on.
War is something entirely different. The belligerents are not trying to disable agricultural systems or power grids; an actual war is a horse of a different color, and would likely be regarded as a proper escalation in the physical realm.
> The belligerents are not trying to disable agricultural systems or power grids; an actual war is a horse of a different color, and would likely be regarded as a proper escalation in the physical realm.
There have been any number of attacks on physical infra. and civil institutions that fit that description. Sandworm (a group in the Russian military) alone has successfully brought down power grids multiple times.
I think that there's a difference between state-sponsored hacking and a government turning a blind eye to illegal activities that happen to fulfill the same effect without their hands being dirtied. Incentives, such as potential future employment or the good graces of (more or less corrupt) local authorities when it comes to other illegal activities, can significantly influence an adversary's overall cyber readiness.
Nope, it's a war, it's just not evenly distributed across the globe.
Probably the best example is the current conflict between Russia and Ukraine. It's a global cyberwar, with real (life and death) consequences.
https://blogs.microsoft.com/on-the-issues/2022/04/27/hybrid-...
https://blogs.microsoft.com/on-the-issues/2022/06/22/defendi...
It's not unrestricted cyberwar, but I also don't think we have a complete picture of the scale of the cyber conflict, nor do we have a complete picture of the attacks and defences being mounted.
"According to a report on grid security compiled by a power industry cyber clearinghouse, obtained by POLITICO, a total of 1,665 security incidents involving the U.S. and Canadian power grids occurred last year. That count included 60 incidents that led to outages, 71 percent more than in 2021." [0]
Uh, no. There have been a massive number of attacks attempting to take down the power grid. It's just that the protections in place are currently working most of the time.
Unfortunately, the official stats tend to combine both physical and cyber attacks, so there's no clear sense of which is dominant... But, frankly, there isn't a need to separate them. The attacks are happening.
[0] https://www.politico.com/news/2023/09/10/power-grid-attacks-...
In the comments, the author mentions this:
"As for information on our VPN setup (and our mail sending setups), it's on our support site (for obvious reasons) so we assume the attacker read it in advance."
That really changes the level of complexity for the attacker here
It's just full of bad practices. No 2FA? Why is the VPN its own username and password? Why not use a product that can use SSO?
The invisible lesson here is to just use 2FA everywhere or accept the risk of this happening to you.
That's why we have now 2FA enabled on most external access, VPN included.
Amazing they don't have 2FA on VPN, even if you don't go for yubikey/phone app you could at least require a cert.
> As far as we can tell [...] the phish spam attacker used the main password they'd just stolen to register the person for our VPN and obtain a VPN password...
Requiring admin approval for VPN accounts would have prevented the phisher from getting VPN access to begin with.
This is a university. I expect they have a higher than normal proportion of attackers who know the system and exactly how they'd escalate having gained some access, and have the free time to prepare a customized attack.
On the other hand, those attackers are probably less malicious than the average Russian ransomware group.
I thought our unauthenticated SMTP got shut off after switching to Office 365. I looped over the SMTP servers in the hop list in the headers of an email, and one of the Microsoft domains accepted unauthenticated requests from within the network.
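For anyone who wants to repeat that check, a minimal sketch: pull the relay hosts out of an email's Received: headers and see which of them accept MAIL FROM/RCPT TO without authentication, quitting before DATA so nothing is actually sent. The .eml path and addresses are placeholders, and only probe servers you're authorized to test:

```python
# Sketch of the check described above: extract "by <host>" relays from the
# Received: headers of a saved message and test each for unauthenticated
# MAIL FROM / RCPT TO. We never send DATA, so no mail goes out.
import re
import smtplib
from email import policy
from email.parser import BytesParser

EML_PATH = "sample.eml"                    # placeholder
MAIL_FROM = "probe@your-domain.example"    # placeholder
RCPT_TO = "yourself@your-domain.example"   # placeholder

with open(EML_PATH, "rb") as fh:
    msg = BytesParser(policy=policy.default).parse(fh)

# "Received: from A ... by B ..." -- B is the server that accepted the hop.
hops = []
for received in msg.get_all("Received", []):
    m = re.search(r"\bby\s+([A-Za-z0-9.-]+)", str(received))
    if m:
        hops.append(m.group(1))

for host in dict.fromkeys(hops):  # de-duplicate, keep order
    try:
        with smtplib.SMTP(host, 25, timeout=10) as smtp:
            smtp.ehlo()
            code_from, _ = smtp.mail(MAIL_FROM)
            code_rcpt, _ = smtp.rcpt(RCPT_TO)
            relayed = code_from == 250 and code_rcpt in (250, 251)
            print(host, "accepts unauthenticated mail" if relayed else "refused")
    except Exception as exc:
        print(host, f"unreachable ({exc})")
```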
> It seems extremely likely that the attacker had already researched our mail and VPN environment before they sent their initial phish spam, since they knew exactly where to go and what to do.
As someone else said, I would increasingly suspect that apparently targeted or seemingly highly-invested hacking behaviour is just a new breed of scripts that are puppeteered by phishing AI multi-agent systems (maybe backed by deepseek now).
Just like self-driving cars that will never make the same mistake twice, these things will likely keep a catalog of successful tactics, and so always be learning obscure new tricks.
Or you can ask a GPT, that has already indexed your publicly available support docs, to prioritize potential places for a user looking to keep access as a backup.
AI is available to everyone, and we’re not prepared.
Sounds like it is possibly an automated script or agent doing most of this work of accessing the VPN and SMTP server. You really shouldn't have any open mail servers anywhere.
You got hit by a former employee
Or maybe an upset former student.
I have concluded that I will eventually fall for a scam and pay a medical bill for some service I never received. All the bills look like scams, and for one service there are often 3 separate bills from different areas so it would be easy for someone to tack on one more and get some cash from me...
I feel bad for cks, but I probably would have handled it a little differently...
- shut their accounts off network-wide
- drop all related network connections
- forcibly reset their password and make them choose a new one in person. They may have changed it earlier, but do it again
- increase logging to catch any potential reoccurrences against the same user or other users
- inspect ACLs and reduce access for all users if possible
- prevent users from connecting from areas outside of their usual network sphere
- let the user back on, and ask them to be more careful in the future
- better mail filtering would be nice, but they'll always find a way to beat the spam filter
- (I hate this option the most, but...) send fake scam emails internally to see if anyone else takes the bait
This is of course ignoring 2FA, but 2FA isn't perfect either, with SIM swapping... still, I personally don't think changing the password is enough for an event like this.
I've gotten emails with links "From" my parents names, but checking the addresses it was accounts on random .edu domains. The fact that separate emails came from two different names I knew really made me feel targeted.
Not necessarily. We have employees being sent the usual phishing emails claiming to be from the CEO - but sent to their personal email addresses, since the attackers know targeting a corporate domain won't work because of the "similar-name-but-external-domain" warnings.
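That warning is simple enough to sketch: flag inbound mail whose display name matches someone in the directory but whose From domain is external. A hedged example; the domain and the directory contents are placeholders:

```python
# Sketch of a "similar-name-but-external-domain" warning: flag mail whose
# display name matches an employee but whose From domain is not ours.
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.edu"            # placeholder
EMPLOYEES = {"jane doe", "john smith"}     # placeholder directory dump

def looks_like_impersonation(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EMPLOYEES and domain != INTERNAL_DOMAIN

print(looks_like_impersonation('"Jane Doe" <jane.doe@example.edu>'))  # False
print(looks_like_impersonation('"Jane Doe" <ceo.urgent@gmail.com>'))  # True
```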
I figure these kinds of relationships are determinable from linkedin etc., but they're still automated. Using family members seems like an extension of this technique, sending phishing from someone you probably know.
TFA describes a perfect example of unwarranted implicit trust. Any tunnel-in should terminate in an environment not unlike being outside a regular perimeter, with internal per-host access control (perhaps by RADIUS or some coordinated ACL - which would also have fixed the parallel account problem)... especially to an unsecured Internet mail server. I wish the misnomer "Zero Trust" were better crafted and understood as a broad philosophy. I think it's psychologically difficult to do the role-play and imagine "what if I couldn't trust myself?"
The only place I accept implicit trust is inside a single program. Even on the same machine, one should ask whether trusting other processes is warranted.
> inside a single program
Cool, but hey, get this... next week I'm interviewing some guys working in API security, and the discussion notes so far are terrifying.
People do all this work to build secure networks and OS, and then someone says "Hmm we need an API for <fashionable reason>", and next thing a junior dev exposes all the top level functions of a program running with high privileges as URL handlers.
So maybe even _within_ your app it's not too paranoid to think "what if someone got an entry point into this function?" and at least put a "NEVER EXPOSE" comment there :)
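A minimal sketch of making that "NEVER EXPOSE" comment machine-checkable instead of advisory; the decorator, route registry, and function names here are all made up for illustration:

```python
# Sketch: privileged internals get tagged, and the route-registration helper
# refuses to wire them up as HTTP handlers. Framework-agnostic, names invented.
from typing import Callable

def internal_only(func: Callable) -> Callable:
    """Mark a function as never to be exposed over the network."""
    func.__internal_only__ = True
    return func

@internal_only
def rotate_all_credentials() -> None:
    """High-privilege operation; callable from code, never from a URL."""

ROUTES: dict[str, Callable] = {}

def register_route(path: str, handler: Callable) -> None:
    if getattr(handler, "__internal_only__", False):
        raise RuntimeError(f"refusing to expose {handler.__name__} at {path}")
    ROUTES[path] = handler

# register_route("/admin/rotate", rotate_all_credentials)  # raises at startup
```

The point is that the guard fires at startup, before a junior dev's "expose everything" commit ever serves a request.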
ZTNA+ can help here. A VPN gives access to the entire network, with all the rubbish services sysadmins have lying around, like unauthenticated SMTP servers (sounds like total shit, but it happens everywhere... things need to send email notifications from all sorts of shit and no one wants to manage those accounts... sad story). To users, ZTNA+ will kind of seem like a VPN, good protection, but it only gives access to the services the user is allowed to access, not the entire VPN network.
It helps a lot against these types of scenarios.
Also, how fast is fast? You can scan an internal network on a single port in the blink of an eye, so if you don't have good network IDS/IPS internally, you will not really see the scan, and it seems like someone 'knows the network in advance' because they scan it in like 2 seconds and, based on the results, automatically run scripts etc. It doesn't need to be knowledge gained in advance.
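As a rough illustration of how fast that is, here's a minimal sketch that sweeps a /24 on one port with plain TCP connects and a thread pool; the subnet is a placeholder, and only run it against networks you are authorized to scan:

```python
# Sketch: sweep an internal /24 on a single port. With short timeouts and a
# thread pool this finishes in seconds, which is all an attacker needs.
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network

SUBNET = ip_network("10.0.0.0/24")  # placeholder internal range
PORT = 25

def check(host: str) -> str | None:
    try:
        with socket.create_connection((host, PORT), timeout=0.5):
            return host
    except OSError:
        return None

with ThreadPoolExecutor(max_workers=128) as pool:
    open_hosts = [h for h in pool.map(check, map(str, SUBNET.hosts())) if h]

print("hosts with port", PORT, "open:", open_hosts)
```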
- Monitor the internal network properly, as if it were an external network.
- Use ZTNA+ if you can afford such a solution.
- Do regular audits for things like unauthenticated services, and use these kinds of incidents to educate sysadmins in a friendly manner about the risks of such services. They will usually understand, especially after an incident, as long as you bring it up in a friendly way with a good explanation, not with a demanding attitude.
- Use a lot of mail filtering... more is better. It can be a bit tedious; at my company we have more than 4 solutions scanning all email and attachments etc., and stuff still slips through, but not a lot...
- Also scan outbound or 'local' email (BEC fraud etc.).
- Do good post-incident reviews and apply the learnings each time something happens (sounds obvious, but this is often omitted; the learnings are kept only within sec teams, or turned into one-off remediations rather than process changes).
Edit: oh, and also monitor for logon anomalies. A lot of solutions support this, e.g. a user logs in from a new IP: alert on it, or even block it. That action depends a bit on what's normal, so here ML and similar solutions are actually great, but basic statistical analysis can also help if you can't pay for or create an ML solution. (It's not too hard to create, really; basic models will suffice.)
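For the basic-statistics route, a minimal sketch; the login-event format and thresholds below are invented for illustration only:

```python
# Sketch: flag logins from an IP the user has never used before, or at an
# hour that is rare for them. In practice the events come from VPN/auth logs.
from collections import defaultdict
from datetime import datetime

# (user, source_ip, timestamp) -- invented sample data
events = [
    ("alice", "192.0.2.10", datetime(2024, 5, 1, 9, 15)),
    ("alice", "192.0.2.10", datetime(2024, 5, 2, 9, 5)),
    ("alice", "198.51.100.7", datetime(2024, 5, 3, 3, 40)),  # new IP, odd hour
]

seen_ips: dict[str, set] = defaultdict(set)
hour_counts: dict[str, list] = defaultdict(lambda: [0] * 24)

for user, ip, ts in events:
    alerts = []
    if seen_ips[user] and ip not in seen_ips[user]:
        alerts.append(f"new source IP {ip}")
    total = sum(hour_counts[user])
    # Only judge "unusual hour" once we have enough history for this user.
    if total >= 10 and hour_counts[user][ts.hour] / total < 0.02:
        alerts.append(f"unusual hour {ts.hour:02d}:00")
    if alerts:
        print(f"ALERT {user} @ {ts}: " + "; ".join(alerts))
    seen_ips[user].add(ip)
    hour_counts[user][ts.hour] += 1
```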
Maybe we just stop using email.
[dead]