Critical vCenter Server Vulnerability – Patch Immediately!

On May 25, a critical vulnerability was reported that affects vCenter Server 6.5, 6.7, and 7.0 and VMware Cloud Foundation 3.x and 4.x. With network access to port 443 of vCenter Server, an attacker may exploit this issue to execute commands with unrestricted privileges on the operating system that hosts vCenter Server. The issue arises from a lack of input validation in the vSAN Health Check plug-in.
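As a quick sanity check before planning the patch, the vCenter appliance REST API can report the version and build currently running. Below is a minimal sketch, assuming Python with the requests library and a hypothetical vcenter.example.com hostname and credentials; the session and version endpoints shown are the ones I expect to exist on 6.7/7.0, so verify them against your vCenter's API documentation and compare the returned build against the fixed builds listed in the VMware advisory.

```python
import requests

# Hypothetical values for illustration only
VCENTER = "https://vcenter.example.com"
USER, PASSWORD = "administrator@vsphere.local", "changeme"

# Open an API session (verify=False only because lab vCenters often use self-signed certs)
s = requests.Session()
s.verify = False
resp = s.post(f"{VCENTER}/rest/com/vmware/cis/session", auth=(USER, PASSWORD))
resp.raise_for_status()
s.headers["vmware-api-session-id"] = resp.json()["value"]

# Read the appliance version/build to compare against the patched builds in the advisory
version = s.get(f"{VCENTER}/rest/appliance/system/version").json()["value"]
print(version.get("version"), version.get("build"))
```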

Continue reading “Critical vCenter Server Vulnerability – Patch Immediately!”

NSX-T Distributed Firewall – Part 1

Before jumping into the NSX-T Distributed Firewall (DFW) concepts and rule creation, I want to point out why this solution is important and which security issues it can address. Building a zero-trust security model has been one of the biggest concerns of network and security teams. In traditional data centers, high-level segmentation prevents different tiers of workloads from communicating with each other. The main challenge of this legacy security model, however, is the lack of lateral traffic control between workloads within a tier. In other words, traffic can traverse freely inside a network segment and reach critical information, because it is only inspected and dropped once it hits the physical firewall. In addition, implementing extra layers of security and firewalls adds complexity and cost.

NSX-T Distributed Firewall (DFW) is a hypervisor kernel-based firewall that monitors all East-West traffic. It can be applied to individual workloads, such as a single VM, to enforce a zero-trust security model. Micro-segmentation logically divides a department or a set of applications into security segments and distributes firewalling down to each VM.
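To make the micro-segmentation idea concrete, here is a minimal sketch of what a DFW rule could look like when pushed through the NSX-T Policy API with Python and requests. The manager address, credentials, group names, and policy ID are placeholders, and the exact JSON fields should be checked against the NSX-T API guide for your version; this is an illustration of the concept, not the configuration from the full post.

```python
import requests

NSX = "https://nsxmgr.example.com"   # hypothetical NSX-T Manager
AUTH = ("admin", "changeme")         # hypothetical credentials

# A simple East-West policy: only allow the web tier to reach the app tier over HTTPS.
# The group paths assume "web-tier" and "app-tier" groups already exist in the default domain.
policy = {
    "display_name": "web-to-app",
    "category": "Application",
    "rules": [
        {
            "display_name": "allow-web-to-app-https",
            "source_groups": ["/infra/domains/default/groups/web-tier"],
            "destination_groups": ["/infra/domains/default/groups/app-tier"],
            "services": ["/infra/services/HTTPS"],   # predefined service path (verify the name)
            "action": "ALLOW",
        }
    ],
}

r = requests.patch(
    f"{NSX}/policy/api/v1/infra/domains/default/security-policies/web-to-app",
    json=policy, auth=AUTH, verify=False,
)
r.raise_for_status()
```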

Continue reading “NSX-T Distributed Firewall – Part 1”

vCenter Server 7.0 HTML5 UI error “no healthy upstream”

After upgrading to vCenter Server 7.0 Update 1, when I tried to browse the vCenter HTML5 UI I was greeted with a “no healthy upstream” error. I could access the vCenter Management Interface (VAMI) at https://vCenter-IPaddress:5480 without any issues. I could also connect to vCenter Server through SSH, but I noticed that a couple of vCenter Server services could not start.
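Since the symptom comes down to stopped services, a quick way to see their state is the service-control utility on the appliance. Below is a small sketch that wraps it with Python's subprocess module; it assumes the script runs directly in the VCSA shell (reached over SSH), and it is only a convenience wrapper around the commands, not the actual fix described in the full post.

```python
import subprocess

def run(cmd):
    """Run a VCSA shell command and return its output as text."""
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

# List which vCenter services are running and which are stopped.
print(run(["service-control", "--status", "--all"]))

# Attempt to start everything that is not running.
print(run(["service-control", "--start", "--all"]))
```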

Continue reading “vCenter Server 7.0 HTML5 UI error “no healthy upstream””

VxRail 2-Node Implementation Considerations (VxRail 7.0.100)

Starting with version 4.7.100, VxRail supports vSAN 2-Node for small and Remote-Office/Branch-Office (ROBO) deployments. This solution works best for environments that need hyperconverged compute and storage with a minimal configuration. A VxRail 2-Node cluster consists of two VxRail E560 nodes and a vSAN Witness Appliance. It is recommended to deploy the Witness Appliance at a separate site, but if no second site is available it can be deployed at the same site as the 2-Node cluster.

There are some considerations and requirements that you need to have in place before starting the VxRail 2-Node implementation.

Continue reading “VxRail 2-Node Implementation Considerations (VxRail 7.0.100)”

vSphere 7.0 Update 1 is now Globally Available!

vSphere 7.0 was introduced by VMware in March 2020 and reached general availability (GA) in April 2020, bringing many new features such as DRS and vMotion improvements as well as Lifecycle Manager. Half a year later, VMware introduced the first major update to vSphere 7, and today this release went GA. It is now publicly available; you can download it from VMware and take advantage of this latest and greatest release. In this blog post I will go through the new features and capabilities.

3 Pillars of vSphere 7 Update 1
Continue reading “vSphere 7.0 Update 1 is now Globally Available!”

Configure NSX-T 3.0 RBAC with Native Active Directory Integration

One of the new features added in NSX-T 3.0 is support for RBAC with native Active Directory integration. In previous versions of NSX-T, we had to use VMware Identity Manager (vIDM) to add users and groups from Active Directory for RBAC purposes. In a set of earlier posts I have already described how to install and configure vIDM with NSX-T. I still believe configuring RBAC through vIDM has some added value, such as Multi-Factor Authentication (MFA).

To set up NSX-T Role-Based Access Control (RBAC), it is better to create groups in Active Directory and add users to those groups, for two reasons. First, it is easier to assign a role to a single group with several members than to assign roles to many individual users in NSX-T. Second, with the help of Group Policy you can define a “Restricted Group” that locks down membership to that group, which provides an additional layer of security.
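Once the AD identity source is configured in NSX-T, the role assignment for such a group can also be done through the API rather than the UI. Below is a minimal sketch with Python and requests; the group name, role, and manager address are placeholders, and the /api/v1/aaa/role-bindings endpoint and its exact body fields should be validated against the NSX-T 3.0 API reference before use.

```python
import requests

NSX = "https://nsxmgr.example.com"   # hypothetical NSX-T Manager
AUTH = ("admin", "changeme")         # hypothetical credentials

# Bind an AD group (the "Restricted Group" created in the domain) to an NSX-T role.
binding = {
    "name": "NSX-Admins",                       # AD group name (placeholder)
    "type": "remote_group",                     # group-based binding rather than per-user
    "identity_source_type": "LDAP",             # native AD/LDAP integration (verify field name)
    "roles": [{"role": "enterprise_admin"}],    # role identifier per the API reference
}

r = requests.post(f"{NSX}/api/v1/aaa/role-bindings",
                  json=binding, auth=AUTH, verify=False)
r.raise_for_status()
print(r.json())
```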

Continue reading “Configure NSX-T 3.0 RBAC with Native Active Directory Integration”

Configure Virtual IP for NSX-T Management Cluster

Now that we have finished deploying the three managers in the NSX-T management cluster, we can go ahead and configure a Virtual IP (VIP) on it. We can either use the NSX-T internal mechanism to set an IP address on the cluster or set up an external load balancer in front of the NSX-T managers. Configuring a VIP, which is the option recommended by VMware, is simpler, while using a load balancer would actually distribute traffic among the NSX-T managers. This is a design question and should be decided based on requirements and customer needs.

Please keep in mind that if you choose this approach, all NSX-T managers must be on the same subnet. In this case, the managers are attached to the SDDC Management network. To configure the Virtual IP, log in to the NSX-T Manager UI, choose System, select Appliances in the left panel, and then click the SET VIRTUAL IP option.
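For reference, the same VIP can be set through the NSX-T REST API instead of the UI. Here is a minimal sketch with Python and requests, using a placeholder manager address, credentials, and VIP; the api-virtual-ip call is the documented way to do this, but double-check it against the API guide for your NSX-T version.

```python
import requests

NSX = "https://nsxmgr-01.example.com"   # any one of the managers (placeholder)
AUTH = ("admin", "changeme")            # placeholder credentials
VIP = "192.168.10.50"                   # desired cluster VIP, same subnet as the managers

# Assign the virtual IP to the management cluster.
r = requests.post(
    f"{NSX}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": VIP},
    auth=AUTH, verify=False,
)
r.raise_for_status()

# Read the VIP back to confirm it was applied.
print(requests.get(f"{NSX}/api/v1/cluster/api-virtual-ip",
                   auth=AUTH, verify=False).json())
```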

Continue reading “Configure Virtual IP for NSX-T Management Cluster”

Finalizing NSX-T Management Cluster Deployment

In the previous articles, we deployed the first NSX-T Manager and then added vCenter Server as a Compute Manager in the NSX-T Web UI. In this post we are going to finalize the NSX-T Management cluster. In production environments, for high availability and performance reasons, it is recommended to have three NSX-T Managers in the cluster. The second and third NSX-T Managers should be added from the NSX-T Web UI: go to the System menu, choose Appliances, and click “ADD NSX APPLIANCE”.
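After the second and third appliances have been added, it is worth confirming that the cluster has formed and is stable before moving on. Below is a minimal sketch of such a check with Python and requests, using a placeholder manager address and credentials; GET /api/v1/cluster/status is the call I would expect to use here, so verify the field names against your NSX-T API documentation.

```python
import requests

NSX = "https://nsxmgr-01.example.com"   # placeholder NSX-T Manager address
AUTH = ("admin", "changeme")            # placeholder credentials

# Overall cluster identity plus the management- and control-plane group health.
status = requests.get(f"{NSX}/api/v1/cluster/status", auth=AUTH, verify=False).json()
print("Cluster ID:        ", status.get("cluster_id"))
print("Management status: ", status.get("mgmt_cluster_status", {}).get("status"))
print("Control status:    ", status.get("control_cluster_status", {}).get("status"))
```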

Continue reading “Finalizing NSX-T Management Cluster Deployment”

Add Compute Manager to NSX-T 3.0

In the previous blog post we started the NSX-T implementation by deploying the first NSX-T Manager. Before deploying the other two NSX-T Managers, we need to add a Compute Manager. As defined by VMware, “A compute manager is an application that manages resources such as hosts and VMs. One example is vCenter Server.” We do this because the remaining NSX-T Managers will be deployed through the Web UI with the help of vCenter Server. Up to 16 vCenter Servers can be added to an NSX-T Management cluster.

To add a compute manager in NSX-T, it is recommended to create a service account and a customized vSphere role instead of using the default vCenter administrator account. Defining a specific role with only the required privileges is a security best practice. As you can see in the screenshot below, I created a vSphere role called “NSX-T Compute Manager” with the required privileges and used it to assign permissions to the service account on vCenter Server.
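The same registration can also be scripted against the NSX-T API using that service account. Below is a minimal sketch with Python and requests; the manager and vCenter addresses, the service account name, and the certificate thumbprint are placeholders, and the POST /api/v1/fabric/compute-managers body should be verified against the NSX-T 3.0 API reference.

```python
import requests

NSX = "https://nsxmgr-01.example.com"      # placeholder NSX-T Manager
AUTH = ("admin", "changeme")               # placeholder NSX-T credentials

# Register vCenter Server as a compute manager, using the dedicated service account.
compute_manager = {
    "display_name": "vcenter.example.com",
    "server": "vcenter.example.com",                  # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "svc-nsxt@vsphere.local",         # placeholder service account
        "password": "changeme",
        "thumbprint": "AA:BB:...:ZZ",                 # vCenter SHA-256 thumbprint (placeholder)
    },
}

r = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                  json=compute_manager, auth=AUTH, verify=False)
r.raise_for_status()
print("Compute manager ID:", r.json()["id"])
```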

Continue reading “Add Compute Manager to NSX-T 3.0”

Deploying NSX-T Management Cluster

In a previous blog post, the NSX-T architecture was explained, and now we can start the implementation of NSX-T. The deployment process of NSX-T Data Center begins with the deployment of the NSX-T Management cluster. In NSX-T 3.0, the management cluster consists of three NSX-T Managers that host both the management and control planes. The management plane provides the Web UI, the REST API, and the interface to other management platforms such as vCenter Server, vCloud Director, and vRealize Automation. The control plane is responsible for computing and distributing the network runtime state.

NSX-T Managers can be deployed on the ESXi or KVM hypervisor. If you are planning to use the ESXi platform to host the NSX-T Managers, an OVA file should be used; for the KVM platform, a QCOW2 image is used instead. It is important to note that mixed deployments of managers on both ESXi and KVM are not supported. Based on the type of deployment and the size of the environment, an NSX-T Manager node size should be selected. The four different configuration options and their requirements follow below.
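As a side note, when deploying the first manager on ESXi the OVA can be rolled out with ovftool instead of the vSphere Client wizard. Below is a rough sketch that shells out to ovftool from Python; the target vCenter path, network, datastore, and especially the OVA property names (nsx_hostname, nsx_ip_0, and so on) are assumptions from memory, so probe the actual OVA first (running ovftool against the OVA prints its deployment options and properties) and adjust before use.

```python
import subprocess

OVA = "nsx-unified-appliance-3.0.0.ova"   # local path to the NSX-T OVA (placeholder)
# Placeholder vCenter locator: datacenter DC1, cluster Cluster1
TARGET = "vi://administrator@vsphere.local@vcenter.example.com/DC1/host/Cluster1"

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--allowExtraConfig",
    "--deploymentOption=medium",                  # node size, per the sizing table
    "--name=nsxmgr-01",
    "--datastore=vsanDatastore",                  # placeholder datastore
    "--net:Network 1=SDDC-Management",            # placeholder port group mapping
    "--prop:nsx_hostname=nsxmgr-01.example.com",  # property names: verify against the OVA
    "--prop:nsx_ip_0=192.168.10.51",
    "--prop:nsx_netmask_0=255.255.255.0",
    "--prop:nsx_gateway_0=192.168.10.1",
    "--prop:nsx_passwd_0=VMware1!VMware1!",       # placeholder passwords
    "--prop:nsx_cli_passwd_0=VMware1!VMware1!",
    OVA,
    TARGET,
]
subprocess.run(cmd, check=True)
```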

Continue reading “Deploying NSX-T Management Cluster”