Deploying NSX-T Management Cluster

In a previous blog post, we explained the NSX-T architecture; now we can start the implementation of NSX-T. The deployment process of NSX-T Data Center begins with the deployment of the NSX-T Management cluster. In NSX-T 3.0, the management cluster consists of three NSX-T Manager nodes, each of which includes both the management and control planes. The management plane provides the Web UI, the REST API, and interfaces to other management platforms such as vCenter Server, vCloud Director, and vRealize Automation. The control plane is responsible for computing and distributing the network runtime state.
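
As a quick illustration of that REST API, the sketch below polls a manager node for the overall cluster status. This is a minimal example, assuming the `/api/v1/cluster/status` endpoint with basic authentication; the manager FQDN and credentials are placeholders, so adjust them (and the certificate handling) for your own environment.

```python
import requests

NSX_MANAGER = "nsx-mgr-01.lab.local"   # hypothetical manager FQDN
USERNAME = "admin"
PASSWORD = "<admin password>"          # placeholder credentials

# Query the overall status of the management cluster.
# verify=False is only acceptable in a lab with self-signed certificates.
resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/cluster/status",
    auth=(USERNAME, PASSWORD),
    verify=False,
)
resp.raise_for_status()
status = resp.json()

print("Cluster ID:", status.get("cluster_id"))
print("Management cluster:", status.get("mgmt_cluster_status", {}).get("status"))
print("Control cluster:", status.get("control_cluster_status", {}).get("status"))
```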

NSX-T Manager nodes can be deployed on ESXi or KVM hypervisors. If you are planning to host the NSX-T Managers on ESXi, an OVA file should be used; for KVM, a QCOW2 image is used instead. It is important to note that mixed deployments, with managers on both ESXi and KVM, are not supported. Based on the type of deployment and the size of the environment, an NSX-T Manager node size should be selected. The following are the four configuration options and their requirements.

Continue reading “Deploying NSX-T Management Cluster”

NSX-T 3.0 Deep Dive

In this series of blog posts, we are going to walk through the steps to set up an NSX-T Data Center infrastructure. If you are new to NSX-T, please first read the Introduction to VMware NSX. For more insight into the NSX-T architecture, you can continue with the NSX-T Architecture and Components post. Because we are using NSX-T 3.0 for this implementation deep dive, you can also review the What’s New in NSX-T 3.0 blog post.


The following are the required steps to build a solid NSX-T Data Center foundation. Please follow each step; we will update and complete this list regularly.

Continue reading “NSX-T 3.0 Deep Dive”

VMware vSAN 7.0 Witness Appliance Deployment

As part of a vSAN Stretched or 2-Node cluster configuration, a witness appliance must be deployed and configured. This witness appliance hosts the witness components that are used in split-brain failure scenarios. The witness component acts as a tie-breaker and helps the vSAN cluster satisfy its quorum requirements. The witness can be installed as a dedicated physical ESXi host, or a specialized virtual witness appliance can be used instead. The main reason for running the witness as a virtual appliance is that it does not require an extra vSphere license, which saves cost, especially for smaller implementations such as ROBO. The other reason for using a virtual appliance is multi-cluster environments, such as a VCF stretched cluster implementation: because each vSAN cluster needs its own witness, you can consolidate all of the witness appliances on one physical host at a third site.
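
To make the tie-breaker role concrete, here is a small sketch of the quorum rule: an object stays accessible only while a strict majority of its votes are reachable, and the witness holds the vote that breaks a 50/50 split between the two data sites. This is an illustrative model only, not vSAN’s actual implementation (real vSAN objects can carry more than one vote per site).

```python
def has_quorum(votes_reachable: int, votes_total: int) -> bool:
    """An object stays accessible only with a strict majority of votes."""
    return votes_reachable > votes_total / 2

# Toy example: one vote per data site plus one witness vote (3 total).
votes = {"site_a": 1, "site_b": 1, "witness": 1}
total = sum(votes.values())

# Site B fails: site A plus the witness still hold 2 of 3 votes.
print(has_quorum(votes["site_a"] + votes["witness"], total))  # True

# Inter-site link fails: whichever data site still reaches the witness
# keeps quorum; the isolated site loses it and stops serving the object.
print(has_quorum(votes["site_a"], total))                     # False
```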

Continue reading “VMware vSAN 7.0 Witness Appliance Deployment”

What’s New in NSX-T 3.0

On April 7th, 2020, VMware introduced the next major release of its network virtualization and security solution. NSX-T 3.0 introduces a variety of new features that enhance the adoption of software-defined networking in private, public, and hybrid-cloud environments.

The following are some of the new features and enhancements available in NSX-T 3.0 Data Center:

Continue reading “What’s New in NSX-T 3.0”

Deploying & Configuring VMware Identity Manager (vIDM) – Part 2

Following the first blog post about the deployment of vIDM, this post covers how to configure vIDM and implement NSX-T Role-Based Access Control (RBAC) with its help. As you might have noticed, in NSX-T 2.5 and earlier releases, RBAC cannot be enabled without vIDM.

When you log in to the administration page with vIDM’s admin user account, the dashboard is the first page you land on. The dashboard contains login information, the applications used by users, and analytics.

To start the vIDM configuration, click Identity & Access Management. Here you can join vIDM to an Active Directory domain, add a directory to sync with vIDM, and define the user attributes that get synchronized from the directory service to vIDM.
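
Once vIDM itself is configured, the NSX-T side of the integration can also be driven through the REST API rather than the UI. The sketch below is a minimal example, assuming the `/api/v1/node/aaa/providers/vidm` endpoint; the hostnames, thumbprint, and OAuth client values are placeholders that you would take from your own vIDM appliance.

```python
import requests

NSX_MANAGER = "nsx-mgr-01.lab.local"   # hypothetical manager FQDN

# Placeholder values -- take the real ones from your vIDM appliance
# (remote-app client ID/secret, certificate thumbprint).
payload = {
    "vidm_enable": True,
    "lb_enable": False,
    "host_name": "vidm.lab.local",                        # hypothetical vIDM FQDN
    "thumbprint": "<vIDM certificate SHA-256 thumbprint>",
    "client_id": "nsx-client",
    "client_secret": "<OAuth client secret>",
    "node_host_name": NSX_MANAGER,
}

resp = requests.put(
    f"https://{NSX_MANAGER}/api/v1/node/aaa/providers/vidm",
    json=payload,
    auth=("admin", "<admin password>"),
    verify=False,  # lab only: self-signed certificates
)
resp.raise_for_status()
print(resp.json())
```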

Continue reading “Deploying & Configuring VMware Identity Manager (vIDM) – Part 2”

Cloud Journey with AWS!

Since the beginning of 2020, we have been on a cloud computing journey, actively practicing and studying Amazon Web Services (AWS) public cloud services. We deliberately chose AWS because of its tight integration with VMware’s private cloud and SDDC offering, as well as its broad adoption and wide service coverage.


AWS was founded in 2006 to provide IT infrastructure as a service, which is now commonly known as cloud computing. AWS initially launched with Simple Storage Service (S3), Elastic Compute Cloud (EC2), and Simple Queue Service (SQS). Since then, AWS has experienced rapid growth in its number of customers, its service portfolio, and its profitability, and it has maintained its position as the leader in the cloud computing market. Interestingly, AWS has surpassed its giant parent company, Amazon, in terms of profitability!

In a series of blog posts, we will cover AWS’s wide range of services as well as AWS architectural principles.

What’s new in VCF 4.0?


On March 10th, 2020, VMware released VMware Cloud Foundation (VCF) 4.0 alongside a refresh of the rest of its SDDC portfolio, including vSphere 7.0, vSAN 7.0, and the latest release of vRealize Suite 2019. By deploying VCF 4.0, you can take advantage of all the components included in the package, and some features are only available with VCF 4.0. For example, the Kubernetes capabilities of vSphere 7 are only included as part of VCF 4.0 with Tanzu. Below you can find the Bill of Materials (BoM) for VCF 4.0.

One of the new capabilities added in VCF 4.0 is the option to use NSX-T in the management workload domain. Before VCF 4.0, the management workload domain had to use NSX-V as its network and security virtualization solution. NSX-T is now the de facto networking and security solution for both VM and container workloads. With NSX-T, we also have the option to bring up one NSX-T Management cluster that serves many workload domains.

VCF 4.0 also supports the latest update of vRealize Suite 2019, which includes:

  • vRealize Automation 8.1
  • vRealize Operations 8.1
  • vRealize Log Insight 8.1

All of the above products can operate on container workloads in addition to normal VM workloads. VCF SDDC Manager 4.0, together with vRealize Suite Lifecycle Manager 8.1, automates lifecycle management for both the VCF core components and the vRealize Suite components.

NSX-T Architecture & Components

As mentioned in Introduction to VMware NSX, NSX-T Data Center is built on three integrated layers of components: the management plane, the control plane, and the data plane. This architecture and separation of key roles enable scalability without impacting workloads.

The NSX-T Management cluster is built from three NSX-T Manager nodes, with the management plane and control plane converged on each node. The NSX Managers provide a web GUI and a REST API for management purposes; this is one of the architectural differences compared to NSX-V, which had to integrate into the vSphere Client and vCenter Server. NSX Manager can also be consumed by a Cloud Management Platform (CMP) such as vRealize Automation to integrate SDN into cloud automation platforms, and it can connect to vSphere infrastructure through integration with vCenter Server (a compute manager).
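
As an illustration of that last point, registering a compute manager can be scripted against the REST API. This is a minimal sketch, assuming the `/api/v1/fabric/compute-managers` endpoint; the hostnames, credentials, and thumbprint are placeholders, and the same registration is normally done from the NSX-T UI under System > Fabric > Compute Managers.

```python
import requests

NSX_MANAGER = "nsx-mgr-01.lab.local"   # hypothetical manager FQDN

# Register a vCenter Server as a compute manager.
payload = {
    "display_name": "vcenter-01",
    "server": "vcenter-01.lab.local",  # hypothetical vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "<vCenter password>",
        "thumbprint": "<vCenter certificate SHA-256 thumbprint>",
    },
}

resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/fabric/compute-managers",
    json=payload,
    auth=("admin", "<admin password>"),
    verify=False,  # lab only: self-signed certificates
)
resp.raise_for_status()
print("Compute manager ID:", resp.json().get("id"))
```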

Continue reading “NSX-T Architecture & Components”

VMware’s New per-CPU Licensing Model

VMware has announced an update to its per-CPU licensing model. Don’t panic: VMware is not bringing back the vRAM licensing model, but it has added a core-based limit to the per-CPU license. Effective April 2nd, 2020, a server with a processor that has more than 32 cores needs additional licenses. According to VMware’s website, “Under the new model, one CPU license covers up to 32 cores in a single CPU”. This means an additional license must be purchased for every 32 physical cores per CPU. So for a single-CPU server with up to 32 physical cores, as before, one license should be purchased; but for a single-CPU server with 64 cores, two licenses are needed, because each license covers a single CPU with up to 32 cores.
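
The arithmetic is simple enough to capture in a few lines: per CPU socket, you need one license for every started block of 32 cores. A small sketch, using a hypothetical helper name:

```python
import math

def licenses_required(cores_per_cpu: int, cpu_count: int = 1,
                      cores_per_license: int = 32) -> int:
    """One license per started block of 32 cores, per CPU socket."""
    return cpu_count * math.ceil(cores_per_cpu / cores_per_license)

print(licenses_required(32))               # 1  (single 32-core CPU, as before)
print(licenses_required(64))               # 2  (single 64-core CPU)
print(licenses_required(48, cpu_count=2))  # 4  (two 48-core CPUs)
```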

Fortunately, for those who buy servers and VMware licenses before April 30th, 2020, there is a “free per-CPU licensing” program. According to VMware’s website, “Any existing customers who purchase VMware software licenses, to be deployed on a physical server with more than 32-cores per CPU, prior to April 30, 2020 will be eligible for additional free per-CPU licenses to cover the CPUs on that server”.

You can also get more information from VMware’s website!