
[–]toy71camaro 188 points189 points  (89 children)

Ouch... ok, so what do you recommend to do to "lock your ESXi management access"?

New to ESXi here.

TIA

[–]KillingRyukSysadmin 151 points152 points  (85 children)

Turn ssh off and lock down access by IP for a start.
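For anyone who wants to script that first step, here's a minimal PowerCLI sketch (assuming PowerCLI is installed; "vcenter.example.local" is a placeholder for your vCenter):

    # Stop the SSH (TSM-SSH) service on every host and keep it from starting with the host
    Connect-VIServer -Server vcenter.example.local
    Get-VMHost | Get-VMHostService | Where-Object { $_.Key -eq 'TSM-SSH' } | ForEach-Object {
        Stop-VMHostService -HostService $_ -Confirm:$false
        Set-VMHostService -HostService $_ -Policy Off   # don't auto-start SSH with the host
    }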

[–]lost_signal 57 points58 points  (2 children)

Turn ssh off and lock down access by IP for a start.

SSH is off by *default*.

I'm willing to bet that windows box had direct access to VMFS (they were doing SAN mode backups, or it was mounting the NFS share the VMs were on).

[–]gallopsdidnothingwrg 36 points37 points  (13 children)

Are IP restrictions possible on the host somehow? All our hosts and VMs are on the same subnet interface, so firewall rules won't work here.

[–]AfroStorms 47 points48 points  (7 children)

Use the built in firewall to ESXi. You can lock down almost every service by IP.
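If you want to script it, here's a hedged PowerCLI/esxcli sketch (ruleset IDs and argument names can vary a bit by ESXi version; 'esx01.example.local' and 10.10.10.0/24 are placeholders for your host and management subnet):

    # Restrict a few management rulesets to the management subnet instead of "all IPs"
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esx01.example.local') -V2
    foreach ($ruleset in 'sshServer', 'webAccess', 'vSphereClient') {
        $set = $esxcli.network.firewall.ruleset.set.CreateArgs()
        $set.rulesetid  = $ruleset
        $set.allowedall = $false                      # stop allowing every source IP
        $esxcli.network.firewall.ruleset.set.Invoke($set)

        $add = $esxcli.network.firewall.ruleset.allowedip.add.CreateArgs()
        $add.rulesetid = $ruleset
        $add.ipaddress = '10.10.10.0/24'              # management subnet only
        $esxcli.network.firewall.ruleset.allowedip.add.Invoke($add)
    }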

[–]gallopsdidnothingwrg 11 points12 points  (6 children)

I don't see that option in the ESXi web interface on 6.5. I don't use vCenter since we only have a couple hosts.

Anyone know where it is?

[–]losthoughtIT Director 36 points37 points  (3 children)

You might want to consider building a management VLAN that is restricted to only certain workstations. Then move management interfaces to that (and not just for ESC/vcenter).

[–]forkwhilef0rkNetadmin 7 points8 points  (1 child)

Not sure why you were downvoted - this is the correct approach.

[–]clubertiCat herder 13 points14 points  (0 children)

Because implementing security isn't necessarily easy or quick in most cases, and most people in the industry will take the quicker/easier approach over the secure one? I'm sure people will downvote this too, because it is more true than most want to admit.

I get it, there are 40 hours in a work week and 80 hours of work to be done, but the reality is taking on technical debt and then never paying it back is what leads to stuff like this.

[–]VexingRaven 21 points22 points  (29 children)

Turn ssh off

What? Why? It's no less secure than any other means of accessing it.

[–]Grass-tastes_bad 21 points22 points  (4 children)

It’s a VMware best practice recommendation.

[–]OathOfFeanor 6 points7 points  (3 children)

Yeah, because if it is disabled and you can't access the host, VMware doesn't care; they'll just tell you to use the console and put your ticket on hold until you figure out the logistics of that. Too bad your customer didn't pay for the out-of-band management license, and they don't have a KVM. That's not VMware's problem.

Instead of disabling useful functionality people should properly fence off their management networks behind firewalls.

It is very wrong to think you have "secured" things by disabling SSH when I can still land packets on the host from a random infected workstation.

[–]Grass-tastes_bad 3 points4 points  (1 child)

I mean, to be fair, none of that is a VMware problem, it's the customer's?

If a customer isn't going to pay for out-of-band management, they aren't really going to care if things are properly segregated or not.

You have improved its security, though; completely disabling SSH is a layer of security. That prevents any SSH vulns being used against that host. Sure, other devices can reach it on other ports, but it's now more secure than it was when SSH was enabled.

[–]reddwombatSr. Sysadmin 2 points3 points  (0 children)

Dell does that across their product lines, not just with VMware. Want to lock down this WebUI? The hardening guide says to disable the WebUI service.

WTF!? That's not hardening, when you just disable it! Forcing strong crypto, locking down to certain IPs, that's hardening.

[–]KillingRyukSysadmin 25 points26 points  (0 children)

No use in leaving it on if it is not being used.

[–]ptrwiv 17 points18 points  (0 children)

I only turn it on when needed per host (rarely).

[–]maskedvarchar 5 points6 points  (1 child)

For extra security, unplug all network cables.

[–]phord 1 point2 points  (0 children)

Be sure to unplug both ends for extra security.

[–]lost_signal 2 points3 points  (0 children)

It's disabled by default FWIW.

[–]IntrovertedecstasySysadmin 2 points3 points  (3 children)

I don't believe SSH has the same password lockout rules that the SSO login portal does, which would make it more vulnerable to brute force and other attacks.

[–]VexingRaven 1 point2 points  (2 children)

That's pretty shitty if that's the case, but yeah in that case turn it off.

[–]phord 1 point2 points  (1 child)

ssh, the protocol, has no way to do this. Implementations vary, but most don't have built-in abuse detection. fail2ban is your friend.
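For ESXi specifically, recent versions (6.0 and later, as far as I recall) do have built-in lockout settings that cover SSH and API logins, though not the DCUI. A hedged PowerCLI sketch (setting names from memory of the advanced options, so verify on your build):

    # Tighten ESXi's account lockout advanced settings
    foreach ($vmhost in Get-VMHost) {
        Get-AdvancedSetting -Entity $vmhost -Name 'Security.AccountLockFailures' |
            Set-AdvancedSetting -Value 3 -Confirm:$false     # lock after 3 failed attempts
        Get-AdvancedSetting -Entity $vmhost -Name 'Security.AccountUnlockTime' |
            Set-AdvancedSetting -Value 900 -Confirm:$false   # stay locked for 15 minutes
    }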

[–]disclosure5 1 point2 points  (0 children)

It's off by default. There are very few troubleshooting scenarios in which you should temporarily enable it. This recommendation shouldn't need to be made.

[–]DR_Nova_KaneWindows Admin 8 points9 points  (0 children)

I would make sure you don't enable SSO, so you have a different username and password and it's not linked to AD. Lock down your volumes with IP groups and CHAP. If you're not in the IP group, you can't mount the volume. If you don't have the CHAP password, you can't mount the volume.

Your backup box should not be in the domain, so it has a different password.

Don't open 3389 on the firewall for remote management.
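For the CHAP part, a hedged PowerCLI sketch (this only applies to iSCSI; with FC you'd lean on fabric zoning and LUN masking instead - the target IP, names, and secret below are placeholders):

    # Register the array's discovery target with CHAP required, so a host without the secret can't log in
    $hba = Get-VMHost 'esx01.example.local' | Get-VMHostHba -Type iScsi
    New-IScsiHbaTarget -IScsiHba $hba -Address '10.20.30.40' -Type Send `
        -ChapType Required -ChapName 'esx01-initiator' -ChapPassword 'use-a-long-unique-secret'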

[–][deleted] 2 points3 points  (0 children)

I would start by removing AD integration until security issues are resolved on Microsoft domains... use complex passwords and heavily use RBAC for services (read only for monitoring, for example)

[–]The_PacketeerSales Engineer 0 points1 point  (0 children)

Lockdown mode. You should ONLY admin ESXi from the vCenter level.
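A hedged PowerCLI sketch of turning that on (this uses the classic EnterLockdownMode() API call, which enables "normal" lockdown; newer releases also offer a strict mode through the host's access manager):

    # Enable lockdown mode so the hosts only accept management through vCenter
    foreach ($vmhost in Get-VMHost) {
        ($vmhost | Get-View).EnterLockdownMode()
    }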

[–]006work 79 points80 points  (27 children)

You'd need two things. A compromised windows host, and a compromised account with admin level vCenter access. From there you can do what you like.

If that host happens to be your backup server with direct access to the VMWare SAN volumes then all the better.

[–]NetInfused[S] 36 points37 points  (26 children)

Yeah in this case it wasn't the backup server, just a bare metal machine with other applications.

[–]006work 24 points25 points  (24 children)

So you must have had both then, an admin account compromised and the server compromised.

[–]jkeegan123 21 points22 points  (5 children)

Sounds like... Windows authentication was enabled for vCenter and admin creds of some kind were compromised.

[–][deleted] 14 points15 points  (4 children)

Or they had administrator@vsphere.local password set to Welcome123.

[–]jmhalder 21 points22 points  (0 children)

Who told you my password?

[–]mustang__1onsite monster 11 points12 points  (0 children)

I just see asterisks

[–]user49459505950 11 points12 points  (0 children)

Could also be that when the Server 2012 got compromised, the password hashes got dumped, then cracked. Then they used one of the cracked passwords to log in to the ESXi box. Password reuse is all too common.

[–]remag75 1 point2 points  (16 children)

Why do you say that’s? I’m missing the connection server.

[–]billy_teats 5 points6 points  (15 children)

If the server gets malware on it, it's contained to that machine. An attacker could see VMware and vCenter, but would still need a username/password to get in there.

[–]huxley00 138 points139 points  (4 children)

Look at the bright side. If your security department wanted encryption at rest in your VMWare environment, you just got your project done for the year.

[–]NetInfused[S] 22 points23 points  (1 child)

LOL 😂😂😂😂😂😂

[–]huxley00 10 points11 points  (0 children)

Got to laugh or cry right : )

[–]UltraEngine60 63 points64 points  (21 children)

Also, calvin is not a good iDRAC password.

[–]pjcace 30 points31 points  (5 children)

I tend to change mine to hunter2.

[–]UltraEngine60 23 points24 points  (1 child)

sigh ******* is a horrible password. Not enough entropy.

[–]pjcace 10 points11 points  (0 children)

Fortunately, no one can see it!

[–]thelanguyRebel without a clue 5 points6 points  (0 children)

That's funny. I just change mine to Hobbes....

[–]WelcomingRapier 4 points5 points  (5 children)

Hah. Thank goodness Dell stopped doing that. Finding an iDRAC still having that on it among hundreds of production servers kills you a little bit inside... every time.

[–]vmwareguy69 1 point2 points  (3 children)

Believe me, they haven't. Every new piece of Dell hardware I get is using calvin.

[–]WelcomingRapier 2 points3 points  (1 child)

Strange. Every new Dell server we get has the unique default password printed next to the server tag label pull out tab on the server front. Maybe it varies depending on model or how they are ordered (with iDrac licensing or not)?

[–]the_squeaky_cheese 5 points6 points  (0 children)

MSP server monkey here - it's a config option in the ordering interface. I believe it's called something along the lines of "Secure Default iDRAC Configuration."

[–]sarbuk 7 points8 points  (0 children)

I changed all of ours to "hobbes"

[–]SOMDH0ckey87 1 point2 points  (7 children)

HAHAHAHA

jeesus... is it still this?

[–]pbrutsche 4 points5 points  (6 children)

No, new Dell servers ship with a randomized default password for the iDRAC

[–]Goofology 3 points4 points  (5 children)

If you request it*

[–]qupada42 2 points3 points  (4 children)

It's the other way for everything I've ordered recently; randomised is the default, root:calvin is the option.

[–]Goofology 1 point2 points  (3 children)

Good to know. How recently?

[–]Significant_Parking 119 points120 points  (45 children)

I work in incident response and recovery, and I have witnessed this more and more in the last 6 months. The way this works is: the attacker gets domain-level rights, checks the network to see how much they can encrypt, and also tries the same domain admin creds in vCenter. If they get in, they will enable SSH and return later. They will then encrypt all virtual Windows guest systems, shut them down, then move to VMware and encrypt the datastores, effectively encrypting the environment twice. It isn't fun, but our new recommendations are to have separate (non-domain) credentials for vCenter and Veeam (if applicable).

[–]dicknardsSales Engineer 39 points40 points  (3 children)

Yep. Chances are they have gained access to someone's email as well, and have already exfiltrated data.

Encryption is the last step in these types of attacks. You have to have robust monitoring of all areas of your environment. If you are finding out through the encryption of data, who knows how long they have had access to your environment.

[–]sarbuk 5 points6 points  (2 children)

What datapoints in the environment would you suggest be monitored to spot an attack like this before it gets to the encryption stage?

[–]dicknardsSales Engineer 11 points12 points  (0 children)

You will want at a minimum to be monitoring things like:

  • Active Directory (brute forcing, kerberoasting, priv escalation)
  • Exchange/Exchange Online (activity and permissions)
  • Access activity (auditing of on prem, and cloud data access)
  • DNS activity (c&c communication, dns recon, etc...)
  • Data Exfiltration (dns exfil, website uploads, etc...)

But what you do with all that data is the hard part...
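As a crude example of just one of those signals (a real SIEM rule would be richer), this flags Kerberos service-ticket requests that used RC4 (0x17), a common kerberoasting tell. Run it on a domain controller; the event ID and field text are assumed from the standard 4769 audit event:

    # Look for RC4-encrypted TGS requests in the last 24 hours (possible kerberoasting)
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4769; StartTime = (Get-Date).AddHours(-24) } |
        Where-Object { $_.Message -match 'Ticket Encryption Type:\s+0x17' } |
        Select-Object TimeCreated, MachineName |
        Format-Table -AutoSize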

[–]gallopsdidnothingwrg 13 points14 points  (14 children)

Veeam defaulted to windows credentials when we set it up. Is it possible to change that now?

[–]Significant_Parking 33 points34 points  (8 children)

Take your Veeam server off your Active Directory domain and put Duo MFA on it as well. Use local accounts and have completely different passwords.

[–]unsigned1138 6 points7 points  (3 children)

The way we do it is:

VBR servers not part of the domain. 2 servers, with different local creds on each.

VBR management console installed in a domain-joined VM. Duo MFA enabled. Veeam credentials used to log in to the Veeam management console.

I hadn't considered taking vCenter off the domain.

[–]autisticpig 3 points4 points  (2 children)

We have our ESXi hosts, vCenter, vROps, etc. all isolated from the domain. Duo MFA for Veeam is not something we have done though. I will look into that. Thanks!

[–]DrGrinch 6 points7 points  (0 children)

I work IR as well and this was the complete saving grace for a client that got wrecked with Ransomware. Their Veeam was sitting on the same network, but not attached to the domain and used local creds only. Attacker either didn't notice it or couldn't get into it and just dropped their payload and bounced hoping the backups wouldn't help. Customer was able to get back up in about 5 days using the bare metal recovery.

[–]Duckbutter_cream 5 points6 points  (4 children)

Always have your backup systems on a separate set of creds and auth. That way they stay safe.

[–]thelanguyRebel without a clue 6 points7 points  (0 children)

Guess who just shot SSO in the face?

[–]unknamed 6 points7 points  (0 children)

It isn't fun, but our new recommendations are to have separate (non-domain) credentials for vCenter and Veeam (if applicable).

I learned this the hard way about a year ago. Compromised domain admin credentials led to every domain computer connected to the network having its files encrypted by a ransomware attack.

Our Veeam server was domain joined, but luckily it also housed a giant repository of files that the encryption mechanism was chewing its way through when I realized what was going on and prevented it from encrypting our Veeam backups. We did lose some of the stuff on the file repository, but it wasn't anything critical (procrastinating on cleaning up old files finally paid off).

It took rebuilding the Veeam server and restoring every single VM, but we were back up and running on Monday (the attack started early Sunday morning).

[–]IntentionalTexanIT Manager 8 points9 points  (20 children)

What about us chumps using Hyper-V?

[–]Significant_Parking 16 points17 points  (12 children)

Even worse. Don't use Hyper-V. Ha, I kid, I know that's a lift to move to a different platform. Have your Hyper-V management on a different VLAN with proper segmentation. If you have a small enough environment, remove your Hyper-V hosts from the domain (multiple reasons to do this), install Duo MFA on them, and turn off administrative shares.

[–]MrJacks0n 6 points7 points  (9 children)

Is it still a nightmare to not have Hyper-V on a domain?

[–]aprimeproblem 4 points5 points  (6 children)

It is... unfortunately, and it's not going to get better any time soon. The best management you can get is either PowerShell remoting or the Windows Admin Center.
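For anyone stuck managing a workgroup Hyper-V host, a hedged sketch of the PowerShell remoting route ("hv01.example.local" is a placeholder; trusting hosts by name weakens mutual auth, so prefer HTTPS/certificate-based WinRM where you can):

    # Allow the admin workstation to talk WinRM to the non-domain host, then run Hyper-V cmdlets remotely
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'hv01.example.local' -Concatenate -Force
    $cred = Get-Credential -Message 'Local admin on hv01'
    Invoke-Command -ComputerName 'hv01.example.local' -Credential $cred -ScriptBlock { Get-VM }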

[–]corrigun 1 point2 points  (5 children)

What in the world are you talking about?

[–]Adobe_Flesh 1 point2 points  (1 child)

DUO MFA is an executable that supersedes LDAP or?

[–]sidneydancoff 0 points1 point  (0 children)

Do you have any resources that you supply to clients regarding best practices/recommendations on network segmentation for VLANs?

[–]OperationMobocracy 0 points1 point  (0 children)

Are you advising just disabling SSO integration with AD for vCenter? It's probably a good idea, although I can see where it complicates management for larger shops who rely on it for access control without a lot of extra account creation. Maybe you could establish an entirely separate AD forest for admins only, perhaps with a one-way trust of the user domain; I'm sure this has weaknesses too, but presumably a smaller attack surface.

I work for an MSP and have been strongly suggesting that Veeam backup servers not be domain joined at all and share no administrative credentials with other systems, with the idea that it ups the odds that the Veeam server will survive the attack. Mostly because I've seen so many ransomware attacks duly encrypt domain-joined Veeam servers, rendering all those backups useless. Of course offline backups are also helpful here, but I see a lot fewer of them. It's also nice, if you have SAN mode configured for backups, to make the SAN LUN access from the Veeam server read-only.

[–]irrisionJack of All Trades 0 points1 point  (0 children)

This sounds like shitty advice to be honest. People will just reuse the same password as their DA account regardless, and you have no way to audit or prevent it, but you do get crappy decentralized authentication audit logs as a result of this approach.

[–]Kaladis 52 points53 points  (3 children)

Is this really an ESXi issue? I could be wrong, but I would lean more towards the attacker having direct access to the internal network and being able to freely pivot from one server to another using elevated credentials.

[–]the_andshrew 12 points13 points  (1 child)

Exactly. If someone has the keys to the kingdom and nobody has noticed, it shouldn't be surprising that they can do whatever they want.

[–]zebediah49 3 points4 points  (0 children)

The solution is to have multiple sets of keys. I do understand that many organizations aren't large enough to have a separate backup team, but you can at least have different [local] accounts with different credentials.

[–]gordonv 3 points4 points  (0 children)

This. But now that I think of it, this could work with AWS CLI also.

[–]ThirstyOneComputer Janitor 14 points15 points  (0 children)

On today's edition of "What fresh hell is this?"

[–][deleted] 26 points27 points  (12 children)

Reading these kinds of posts is bad for my anxiety. The company I'm contracted to has everything in the same /16 network and uses the same flimsy password for everything.

I really need to find a new gig lmao.

[–]gallopsdidnothingwrg 24 points25 points  (2 children)

Just keep moving the ball forward. Make the place a little better every day. When you go home, remember that you are not your job, and you are not your company's problems.

[–]adjacentkeyturkey 7 points8 points  (1 child)

Thanks. This one helps me out right now.

[–]gallopsdidnothingwrg 9 points10 points  (0 children)

...and to be clear, that doesn't mean you shouldn't give a shit when you're at work - you should - you should care about the mess. Don't become another cynical asshole.

...just don't take that shit home. ...and realize that you can't fix the whole world in a day at work.

[–]dtmpower 2 points3 points  (5 children)

Desktops, servers, management, all in the same subnet?

[–]90Carat 1 point2 points  (0 children)

Oh! I worked at an MSP/hosting provider that did that! I begged management to change passwords right after I started. "Nah, it's fine". So one night, about a month and a half after I started, a disgruntled employee pulls up to the building. Hops on the corporate wifi with the well-known credentials that were never changed. Navigates to the fiber switches that sat between the ESXi hosts and all the storage. Wipes all the config on said switches, saves the empty configs, and reboots them. The network guy was camping and didn't get phone reception for the next 12 hours. Finally, he comes back, rummages through his desk, and finds the backup of the configs. A couple dozen clients were totally down for about 24 hours. All passwords were changed immediately.

[–]NetInfused[S] 1 point2 points  (0 children)

Well, the wake up call is here :) start changing that flat subnet, bro.

[–]captain_bowltonJack of All Trades 10 points11 points  (1 child)

Man you straight up scared the shit out of me. A lot of our settings have been at their defaults since our environment was installed a couple of years ago. I just disabled SSH access, changed root passwords, and removed domain accounts from administrators groups - local vSphere accounts for administrative stuff from now on.

Might not be 100% perfect but more secure than it was a couple of hours ago. Good to have all of this stuff set up the right way anyways, we should never be complacent or feel like we're above any type of bullshit like ransomware.

[–]wutangdangie 14 points15 points  (11 children)

So attacker gets into a management server, dumps memory, finds VMware admin creds, and is able to have admin access to vCenter? Is that right? From there, starts powering off VMs and encrypting datastores?

If that's what happened, then I'm not sure how you avoid that by "segregate ESXi management from the VMs."

[–]heapsp 7 points8 points  (3 children)

I'm surprised they can pass Windows authentication from a 'pass the hash' attack directly to ESXi's Active Directory authentication. That's some next-level shit... more likely they used a keylogger to grab the domain admin's creds somewhere.

These targeted attacks are scary as shit. If they lay dormant for a month or two and keep an entry point, the attack could just happen again and again, even after recovering everything from offline unencrypted backups, until you torch everything and start over.

[–]BenaiahChronicles 2 points3 points  (1 child)

Why is it next level shit? If it's on the domain, and especially if service accounts, admin accounts, and password policies aren't locked down, then it's almost trivial.

[–]AccurateCandidate Intune 2003 R2 for Workgroups NT Datacenter for Legacy PCs 2 points3 points  (0 children)

You can just check "Use windows authentication" on the vSphere login page from IE and it'll pass the currently logged on user's credentials. Seems easy enough.

[–]NetInfused[S] 6 points7 points  (6 children)

There was no such thing as a management server, that's the point.

All production VMs could see ESXi management/vCenter.

[–]boli99 6 points7 points  (2 children)

because all files were encrypted with the company name on the extension of the crypted files

Company name could easily have been available as part of some software registration info or license key somewhere.

[–]zebediah49 2 points3 points  (1 child)

Presumably the company name is in a bunch of URLs, and probably in the AD domain name.

However, generic software usually isn't clever enough to have an automated detection and naming scheme. It's far more likely that this was customized for this particular "customer".

[–]boli99 2 points3 points  (0 children)

It's far more likely that this was customized for this particular "customer".

It really isn't.

[–]pc_jangkrik 8 points9 points  (0 children)

F*ck this sh!t, I'm back to farming.

On a serious note, detaching all creds from AD seems like a good option now. Fuck all that SSO thingy. I'll have my 16-character password printed and kept in my wallet.

[–]evs9000 5 points6 points  (15 children)

Please share some more details: was a user account compromised? Was AD involved? Was the password saved in a browser? This is very, very bad.

[–]NetInfused[S] 5 points6 points  (13 children)

We can't know now... The whole thing was crypted.

The whole environment was torched.

CSIRT is still trying to pinpoint where it all started.

[–]lost_signal 4 points5 points  (11 children)

We can't know now... The whole thing was crypted.

What was the storage platform used for ESXi?

[–]rabidphilbrick 1 point2 points  (9 children)

Asking the real questions. Some backend storage allows exporting the same volume/folder over NFS and sharing it over CIFS at the same time. The Windows server may have simply accessed the datastore root(s) via CIFS. Heck, this is how I rapidly clone VMs in my ESXi home lab using PowerShell.

Edit: I see in a lower comment it’s SAN. Interesting.
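To illustrate the point (home-lab style, all names are placeholders): once the datastore folder is reachable over CIFS, a "clone" - or an encryption pass - is just ordinary file operations from Windows, with only the final registration going through vCenter:

    # Copy a VM's folder over the CIFS export of the datastore, then register the copied .vmx
    Copy-Item '\\nas01\datastore1\template-vm' '\\nas01\datastore1\cloned-vm' -Recurse
    New-VM -VMHost (Get-VMHost 'esx01.example.local') -VMFilePath '[datastore1] cloned-vm/template-vm.vmx'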

[–]lost_signal 2 points3 points  (8 children)

He's said it was FC (which means it was either VMFS, RDMs or vVols). Now, they might have dual-pathed that over iSCSI.

I work for the VMware storage and availability product team, which is why I ask...

[–]NetInfused[S] 1 point2 points  (0 children)

My NDA won't allow that to be disclosed.

[–]Reddit_Mobile_1 0 points1 point  (0 children)

Do we know what version of ESXi/vCenter? Were the servers on their inside network or were they exposed to the internet? Were the datastores shared somehow?

[–]CryptoSin 8 points9 points  (23 children)

How did it hit the ESXi/vCenter? Outside world access? Or was it on the LAN and someone compromised the network?

[–]NetInfused[S] 9 points10 points  (21 children)

It was on the LAN. Somehow they found their way in. The windows server had no external access.

[–]CryptoSin 15 points16 points  (15 children)

We need more details.

[–]xchadrickxSysadmin 17 points18 points  (14 children)

Yeah, we're missing a lot of information here. I don't know that OP is intentionally leaving it out, but I think there's more to this. Like the username was root and the password was RootPassword or something like that. I've not seen any news of ESXi datastores getting hit like this unless whatever is hosting the datastores was compromised in the first place (like being on a SAN with known exploits).

[–]the_drew 4 points5 points  (0 children)

I was thinking compromised admin account. Hoping OP responds with more info.

[–]CryptoSin 4 points5 points  (5 children)

Exactly, there is some hidden information. Unless it's something stupid like "root, root", most datastores in VMware are secure and not a part of the shares. Datastores wouldn't be wide open to the LAN.

[–]Significant_Parking 2 points3 points  (0 children)

In my experience with this, the attacker ran the encryption via SSH directly from one of the hosts using a .py script that performed the encryption and renaming of the files.

[–]SoMundayn 1 point2 points  (2 children)

Once you have admin vCenter creds, you can do anything to any of the connected ESXi hosts.

[–]notauthorised 0 points1 point  (0 children)

Also interested to know how this happened. And how does one recover from it. I am new to ESXi.

[–]starmizzleS-1-5-420-512 3 points4 points  (0 children)

If correctly described/assessed it sounds like an account had administrative access to the ESXi host and was used to cycle through the VMs and shut each one down and encrypt the contents of each folder. Yikes.

Logical protections for attacks like this aside, having shared storage with frequent snapshots would make cleanup from something like this pretty quick. It's yet another reason to boot your hosts from shared storage (in case a rootkit makes its way into the ESXi OS).

[–]jocke92 2 points3 points  (0 children)

I think a lot of companies have a bad password on their ESXi hosts if they have vCenter, and then a good password on vCenter.

This is interesting. Do you have vCenter single sign-on or internal accounts? Was it an automated attack or a human?

The password for vCenter is not stored in the Windows Credential Manager. It must have been stored in a text file or in the browser. Also, the script they used must have support for ESXi.

[–]aprimeproblem 2 points3 points  (0 children)

Managing admin access and good credential hygiene is applicable not only to the hypervisor but to all layers. For Windows admin access, take a look at the "Windows 10 credential theft mitigation guide V2". The concepts in that document can be applied at every layer of your organization.

Hope it helps!

[–]cantab314 12 points13 points  (35 children)

If you missed when it popped up on the sub recently, know that depending on where you are, paying the ransom may be unlawful with both the company and the individual workers involved at risk of civil and even criminal penalties.

[–]NetInfused[S] 14 points15 points  (1 child)

I saw it, but thankfully we don't need to pay the ransom as the backups were not compromised.

[–]cantab314 1 point2 points  (0 children)

Good to hear it.

[–]dicknardsSales Engineer 11 points12 points  (3 children)

People focus a lot on the encryption and ransom (for good reason), but a lot of new ransomware attacks have already exfiltrated the important data they found before they even encrypted it. If the encryption is happening, they've probably already been in your environment for a while.

[–]cantab314 2 points3 points  (2 children)

Indeed. In a situation like this you can fairly assume there's also been a significant data breach, unless you have very good reason not to.

[–]gallopsdidnothingwrg 9 points10 points  (28 children)

This is a childish over-reaction. The gov't indicated that they had the ability to go after OFAC violators.

That is a far cry from actually punishing a company hit by ransomware.

There has been ZERO enforcement of this rule at all, and it's not likely at all to be enforced against victims - more likely against the middleman companies that are cropping up to facilitate payment and might be collaborating with the bad guys.

This is like telling people not to give your wallet to a mugger with a gun pointed at you because you might be in violation of Federal racketeering laws.

There is almost no motive for the gov't to come after victims, EVEN IF they could demonstrate that the bitcoin address paid was associated with an OFAC entity (which is borderline impossible).

[–]isoaclue 8 points9 points  (25 children)

I don't know, the more entities that pay these bastards, the more they keep doing it. If you work in a regulated industry, I wouldn't be at all surprised if they start coming at you heavy, because you're already supposed to have controls in place to mitigate the attack.

[–]gallopsdidnothingwrg 1 point2 points  (24 children)

the more entities that pay these bastards the more they keep doing it.

That's an easy argument to make when it's not YOUR business on the line, and the livelihoods of dozens/hundreds of employees.

I wouldn't be at all surprised if they don't start coming at you heavy because you're already supposed to have controls in place

This seems true regardless of whether you pay or not.

In general, the people who throw up their hands and say "too bad, you should have known better, f u" are just callous spiteful folks.

[–]ksandbergfl 2 points3 points  (1 child)

The US government can "come at you heavy" for not following "best practices" to secure the stuff you manage for them. In the past few years, the government has expanded the Federal Acquisition Regulations (FARs) to include compliance with various industry/government cyber security directives/specifications... with these rules in place, if you win a contract and later on are found "in violation" of these rules (i.e. you are breached due to negligence or incompetence), you can lose the contract... it is indeed "F U, you should've done better, sorry Charlie".

[–]gallopsdidnothingwrg 1 point2 points  (0 children)

Show me literally any example of them "coming at you heavy" against a victim of ransomware for the act of paying the ransom.

[–]isoaclue 4 points5 points  (21 children)

It's not that I lack sympathy but it's an extremely known threat at this point. If you aren't taking significant steps to mitigate against it you're not doing your job. I think that absent any extenuating circumstances, a successful ransomware attack that results in a payment should be an automatic trigger for replacing everyone in the IT decision making chain.

[–]gallopsdidnothingwrg 1 point2 points  (9 children)

You don't think victims are doing their job? You don't think it's possible for a company who's doing a reasonable job at security to still get nailed?

[–]Pie-Otherwise 3 points4 points  (1 child)

You sound like the MSP owner who told me that his biggest client, a chain of doctor's offices, didn't need to be HIPAA compliant because there was no enforcement body going out there knocking on doors doing HIPAA audits at random, and thus little chance they would be caught.

I mean, it's possible to operate that way, but it's also opening up the door to HUGE liability. Bitcoin forensics isn't super hard to do, so once someone like the Department of the Treasury creates a task force and they ID an Iranian or Russian group's Bitcoin wallet, you might well find yourself on the wrong end of an overzealous Assistant US Attorney who wants to make an example.

It's like running an IT services business without insurance. It's fine till it isn't, and then you get to see what the inside of a bankruptcy court looks like.

[–]Gummyrabbit 2 points3 points  (3 children)

Anyone have their vCenter set up with 2FA (e.g. RSA)? I think this would prevent this from happening.

[–]mike-foley 3 points4 points  (0 children)

There is no one simple thing to do here. Will it help? Sure. Is it all you need? Not at all. Security is about layers. Defense in Depth.

If you have your VMs on the same network as your management, as I've been saying for a decade, you're asking for trouble.

Looks like trouble is here now, unfortunately.

[–]irrisionJack of All Trades 2 points3 points  (1 child)

Last I checked there was no officially supported way to implement 2FA for vCenter, at least prior to v7.

[–]ArizonaGeekSr. Sysadmin 2 points3 points  (2 children)

I would be curious whether ESXi was actually compromised or the underlying storage was. If I have access to the NFS or iSCSI network information, I, as a bad actor, could encrypt the storage without ever touching ESXi or vCenter and still kill every VM. Don't just lock down your servers, lock down your network too.

[–]NetInfused[S] 1 point2 points  (1 child)

It's FC storage. Only accessible thru ESXi or vCenter.

[–]ArizonaGeekSr. Sysadmin 2 points3 points  (0 children)

I think that makes it more scary.

[–]GoodMoGo 2 points3 points  (9 children)

Not sure if asked already: how much did they want? What are the HR consequences?

[–]NetInfused[S] 7 points8 points  (8 children)

1 million. They also said they knew we could pay that much.

No HR consequences. Mgmt knew their lack of funding would lead to this. They've been warned over and over.

[–]GoodMoGo 4 points5 points  (4 children)

Mgmt knew their lack of funding would lead to this

Glad no one got thrown under the bus.

[–]NetInfused[S] 1 point2 points  (0 children)

More important than doing it is letting them know that if it can't be done, they're taking the risk, not IT.

It was their call.

But it is fundamentally important to make them aware of the risks they're running beforehand.

[–]k0derkid 2 points3 points  (2 children)

Just had this exact attack happen at my organization a few weeks ago. Almost 700 VMs and all datastore files encrypted exactly as described in OP. Ransom note didn't mention any $$ amounts, just left an email address to start negotiations.

I was the on-call admin that got to discover the 'You'veBeenPwned.txt' file in every folder on every datastore. Still get that gut-punch feeling when I think about it.

We're mostly back up and running. The bright side is the IT/cyber-security budget is finally getting the funds it deserves, and the cleanup of legacy systems we should've done a long time ago is a lot easier now. We're just not restoring those systems unless we absolutely have to.

It didn't touch any backups or physical servers (that we know of), but I will repeat one piece of advice I see posted here weekly, and that is to make sure you practice restoring your environment on a regular basis. If we'd had all the kinks in our restore process ironed out before a major incident like this, we would've been back to almost 100% within a few days. Instead, I would estimate we're at about 80-85% and still getting a good amount of overtime to get us back to 100%.

Anyways, a lot of lessons learned and tbh, even though it was terrible for the organization, it's been nice to do 'actual' sysadmin work, instead of explaining to the helpdesk how to use google to troubleshoot printer problems.

[–]Z_BabbleBlox 4 points5 points  (0 children)

Reminder: Keycloak + 2FA takes about 10 minutes to set up, is free, and makes this type of attack much, much harder to pull off.

Also reminder: Poor administration hygiene is still, and always will be, the number one way that attackers gain access to systems.

[–]atlantauser 4 points5 points  (0 children)

Full disclosure, I work for HyTrust.

hytrust.com/cloudcontrol-overview/

We specialize in security for the management layers of ESXi, vCenter, & NSX Manager. VMware is a major investor in HyTrust since 2013.

We enforce things like two-person rules via Secondary Approval policies, as well as 2FA support for things like RADIUS, RSA, & SAML. On top of that we also have configuration assessment and remediation capabilities. The combination of these controls is both proactive prevention and reactive cleanup. Together they make a rock-solid solution to secure the hypervisor and its management tools.

Hindsight is 20/20, but this is one major reason why a jumpbox shouldn't be used, and instead you should use a purpose-built, application-aware proxy.

EDIT: Here's a write up on how HyTrust could avoid this issue altogether!

https://www.linkedin.com/feed/update/urn:li:activity:6722256278162296832/

[–]r5aboom.ninjutsu 1 point2 points  (0 children)

Any ideas on how they got to the management host (your 2012 R2 server)?

As anyone knows, if they can get to your vCenter (which is what I assume you mean by the ESXi management URL), then you're toast. I wonder how they got the admin creds for that as well. Maybe logged on using SSO?

[–]yParticle 1 point2 points  (1 child)

This would have been a nightmare just five years ago when we were heavily into VMware and security was job twelve. Hopefully with AWS and better security awareness we're better positioned to handle attacks now, but it's a chilling reminder that no infrastructure is automatically safe.

[–]irrisionJack of All Trades 0 points1 point  (0 children)

AWS instances get hacked all the time. It's not the infrastructure platform that makes an environment secure, it's your systems and management hygiene, which you can screw up just as easily in the cloud as in a self-hosted environment.

[–][deleted] 1 point2 points  (1 child)

What storage protocol were they using? Block or NFS?

[–]NetInfused[S] 1 point2 points  (0 children)

Block. Fibre channel storage.

[–]N01kyz 1 point2 points  (2 children)

For a noob like me, how would one work towards preventing this?

Should management access be granted only via a specific VLAN that only IT can access?

[–]NetInfused[S] 2 points3 points  (1 child)

Grant it only to a specific host, which we call a "jump server"; only this host can access the mgmt interfaces.

IT needs to reboot a VM? They RDP into the jump server, then access VMware.

Leave IT workstations with no access to mgmt resources.

Also be sure to lock the jump server with 2FA (Cisco Duo).

[–]Same_Bat_Channel 1 point2 points  (2 children)

Was the ESXi hypervisor ransomed, or just the files on the FC datastore?

You restored pretty quickly... back onto the same ESXi hypervisor? I would be concerned with the integrity of the entire AD domain at this point. The attacker likely has password hashes from every system/server that a user/domain admin has logged into. Just make sure you are restoring safely and closing back doors/rootkits..

[–]NetInfused[S] 0 points1 point  (0 children)

Of course not.... We didn't restore onto the same hosts.

We didn't restore quickly. This mess has been going on for days.

All were wiped and reinstalled before restores took place.

All active directory accounts were reset. It's a major mess.

We have deployed a SIEM to make sure they aren't talking back to a C&C, and all virtual machine images are being scanned before going live.

[–]the_drew 0 points1 point  (0 children)

and closing back doors/rootkits..

I'd be scanning for SSH keys also. We routinely run an SSH & SSL discovery scan on client sites and they have no idea what the bulk of the keys are for.

One client, a bank in Iceland, had 13,000 active SSH keys; they only have 1,100 employees!

[–]anibis 1 point2 points  (2 children)

I set up SSO for vCenter and added our IT security group. This group has permissions to do most of the day-to-day work that needs to be done; however, I have severely limited this group when it comes to deleting things and datastore access. There is a sole local vCenter admin account that we use for that. We are small, so we're not deleting VMs on a daily basis or anything.

Good post, it just shows that you need to think about security with everything. I'm sure it'd take no time at all to encrypt all the VMs since there are so few files. A lot of people (us included) don't have hypervisor-level AV, so there would be nothing to stop them once access is gained.
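A hedged PowerCLI sketch of that split (the privilege IDs below are just examples - pull the exact ones you want from Get-VIPrivilege - and 'EXAMPLE\IT-Security' is a placeholder group):

    # Build a day-to-day role without destructive privileges and grant it to the AD group at the root
    $privIds = 'VirtualMachine.Interact.PowerOn',
               'VirtualMachine.Interact.PowerOff',
               'VirtualMachine.Interact.ConsoleInteract'
    $role = New-VIRole -Name 'DayToDayOps' -Privilege (Get-VIPrivilege -Id $privIds)
    New-VIPermission -Entity (Get-Folder -NoRecursion) -Principal 'EXAMPLE\IT-Security' -Role $role -Propagate:$true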

[–]ang3l12 1 point2 points  (1 child)

This just scares me enough to consider working as a waiter again. Godspeed

[–]sidneydancoff 1 point2 points  (6 children)

Also, was vCenter on Windows or the virtual appliance? I've dealt with a lot of Windows ransomware, but an ESXi incident is my nightmare.

I've found this community to be less and less helpful and more "oMg I cAnT bElIevE YoU'd Do tHaT" so take criticism in the comments with a grain of salt. It's easy to backseat quarterback in this community.

[–]NetInfused[S] 4 points5 points  (5 children)

It was a Virtual appliance running on Linux.

[–]blackfireburn[🍰] 2 points3 points  (2 children)

Do I downvote because that scares me, or upvote so people see this and lock their shit down?

[–]NetInfused[S] 13 points14 points  (0 children)

Upvote to help fellow sysadmins.

[–]sarbuk 12 points13 points  (0 children)

Up and down voting isn't a Facebook like vs an angry face. You're not voting to express emotion, you're voting to ensure that the good quality content rises to the top and the crap sinks to the bottom.

[–]DonutHand 1 point2 points  (1 child)

Would anyone be surprised if the compromised server had a text doc on the desktop with the ESXi credentials in it?

[–]Kaladis 5 points6 points  (0 children)

They wouldn't even need local credentials to ESXi if a Domain Admins account was compromised and ESXi is joined to the domain. They could just as easily drop the compromised account into the default "ESX Admins" group in AD (if it wasn't there already) to gain full privs.
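Which is one reason it's worth changing that default. A hedged PowerCLI sketch (the advanced setting name is from memory of the ESXi AD integration options, so verify it on your build):

    # Point the auto-admin AD group at something other than the well-known "ESX Admins"
    foreach ($vmhost in Get-VMHost) {
        Get-AdvancedSetting -Entity $vmhost -Name 'Config.HostAgent.plugins.hostsvc.esxAdminsGroup' |
            Set-AdvancedSetting -Value 'Some-Obscure-vSphere-Admins' -Confirm:$false
    }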

[–]sidneydancoff 0 points1 point  (0 children)

Do you know how they deployed the ransomware to the datastore? Was there more to it than Linux ransomware?

[–]itsnotthenetwork 0 points1 point  (1 child)

Does this thing have a name?

[–]NetInfused[S] 0 points1 point  (0 children)

Nope. It was directed.

[–]nodnarb501 0 points1 point  (2 children)

Mind-blowing! Question: what versions of vCenter and ESXi were in use, and were those patched up completely? Also, were unique root credentials set for vCenter and the ESXi hosts, or was everything using the same passwords?

[–]poshftwmaster of none 1 point2 points  (1 child)

were unique root credentials set

Hint: you don't need to know the ESXi root password if you have vCenter credentials with the highest privileges.

[–]Unique-Job-1373 0 points1 point  (0 children)

Any further info on this? Especially as to how it happened. Looking over the comments, it seems to be lazy administrators.

[–]Luxily 0 points1 point  (0 children)

Well, for starters, we don't have our SSH server open to the WAN, and it is restricted to one workstation.

[–]K3___ 0 points1 point  (0 children)

Is there any AV in place? Any EDR? Or FW?

[–]EnterpriseGuy52840Kernel-based Virtual Machine 0 points1 point  (9 children)

Can endlessh be set up on ESXi? Also, shut off SSH completely. For me, ESXi really needs to have VM management that is easy at the DCUI level.

EDIT: Sorry for resurrecting.

[–]FlipnGenius 0 points1 point  (0 children)

2FA and policies by identity, process and FQDN, with software-based segmentation, should help a lot.

[–]Knersus_ZAJack of All Trades 0 points1 point  (1 child)

https://www.theregister.com/2020/11/06/brazil_court_ransomware/

Right at the end of the article, it mentions this Reddit post - and that VMware experts tried to pooh-pooh this possibility.

Anything is possible these days, once you manage to get hold of a good set of working administrative credentials.

This is why I don't save any administrative credentials in web browsers.

[–]Whole_Daikon 0 points1 point  (0 children)

So many questions about this. Did they gain root access to the vSphere hosts to run the encryption? Did they use the VM-level encryption and datastore encryption that's built into vSphere (and toss the decryption keys), or did they somehow use other binary cryptoware to encrypt the data? Were they standard datastores connected to local or fabric (iSCSI/FC) storage, or were they NFS shares (including vSAN and arrays like NetApp FAS) that could have been accessible outside of the vSphere hosts?

- If they gained root access without password hacks, that's serious and indicates they exploited a pretty critical vSphere vulnerability.

- If they used a non-vSphere binary, that's also pretty huge, since the vSphere hypervisor is pretty locked down and doesn't run like a normal Linux/Unix system.

- If the datastores were externally accessible (via NFS, for instance), it would be pretty trivial for someone with compromised credentials to access them and encrypt the filesystems. vSAN includes NFS capabilities, as do many array vendors' products (NetApp being the biggest).

I have seen cases where a built-in encryption platform (BitLocker, to be precise) was weaponized to encrypt an entire company's workstations and servers. Since it's built-in, no AV software would have flagged it as abnormal activity, even ones with cryptoware detection enabled.