Monday, September 16, 2013

VMware DirectPath I/O

Introduction 

DirectPath I/O is available from vSphere 4.0 onwards and leverages the Intel VT-d and AMD-Vi (IOMMU) CPU hardware features. With DirectPath I/O a VM can access a physical network card directly, bypassing both the emulated NICs (E1000, Vlance) and the paravirtualized NICs (VMXNET). A VM using DirectPath I/O can sustain bandwidth beyond 10 Gbps while also saving CPU cycles. VMware recommends using it only when a VM has a very high I/O load and the CPU savings benefit the overall infrastructure.

Paravirtualized NICs can provide throughput of 9 Gbps and more, but the vSphere host handles all the network-related tasks: it services physical NIC interrupts, processes packets, determines the recipient of each packet and copies it into the destination VM if needed. The vSphere host also mediates packet transmissions over the physical NIC. All of this consumes a lot of CPU.

DirectPath I/O bypasses the virtualized network layer and saves those CPU cycles, but it trades away virtualization features such as physical NIC sharing, vMotion and Network I/O Control. The VM also needs its memory fully reserved to avoid memory swapping while the physical NIC is doing DMA into guest memory.


vSphere Features 

Features that are not available for VMs configured with DirectPath I/O:

  • Hot adding and removing of virtual devices
  • Suspend and resume
  • Record and replay
  • Fault tolerance
  • High availability
  • DRS (limited availability. The virtual machine can be part of a cluster, but cannot migrate across hosts)
  • Snapshots
Cisco UCS and DirectPath I/O

The following features remain available when using DirectPath I/O with Cisco UCS:

  • vMotion
  • Hot Adding and removing of virtual hardware
  • Suspend and resume
  • High availability
  • DRS
  • Snapshots
Configure Passthrough Devices on a Host 

vSphere Client
  1. Select a host from the inventory panel of the vSphere Client.
  2. On the Configuration tab, under Hardware, click Advanced Settings.
  3. The Passthrough Configuration page appears, listing all available passthrough devices. A green icon indicates that a device is enabled and active. An orange icon indicates that the state of the device has changed and the host must be rebooted before the device can be used.
  4. Click Edit, select the devices to be used for passthrough, and click OK.
Web Client 

  1. Browse to a host in the vSphere Web Client navigator.
  2. Click the Manage tab and click Settings.
  3. In the Hardware section, click PCI Devices.
  4. To add a PCI device to the host, click Edit.
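
To cross-check which PCI devices the host sees before enabling passthrough, the list can also be pulled from the ESXi shell. A minimal sketch (the grep filter is only an example; device class names vary):

  esxcli hardware pci list                      # full details for every PCI device on the host
  esxcli hardware pci list | grep -i ethernet   # narrow the output down to network devices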


    Key Points : 

    1. An adapter can only be used by a single virtual machine when using DirectPath I/O.
    2. Only two devices can be passed through to a virtual machine.
    3. Virtual machine hardware version 7 must be used.
    4. Host requires a restart once the device has been added for passthrough.
    5. Check the VMware HCL (Hardware Compatibility Guide) to make sure the device is supported.
    6. It is typically used on virtual machines that have very high I/O requirements such as database servers that need direct access to a storage HBA (host bus adapter).
    7. It relies on Intel VT-d (Virtualization Technology for Directed I/O) or the AMD IOMMU (I/O Memory Management Unit), although the latter is experimental. Remember to enable this option in the BIOS!


    Friday, September 13, 2013

    HA configuration Parameters

    das.maskCleanShutdownEnabled: This is enabled by default from vSphere 5.0 onwards, i.e. HA will assume a virtual machine needs to be restarted when it is powered off but its config files could not be updated. (The config files normally record the shutdown state: was it an admin-initiated shutdown?)

    disk.terminateVMOnPDLDefault: This is not enabled by default.
    If this setting is not explicitly enabled, the virtual machine will not be killed and HA won't be able to take action. In other words, if your storage admin changes the presentation of your LUNs and accidentally removes a host, the virtual machine will just sit there without access to its disk. The OS might fail at some point and your application will definitely not be happy, but that is it.
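
    A rough sketch of where these options typically live (the file path and exact value format are assumptions based on the Yellow Bricks write-ups; verify against your vSphere version):

      # On each ESXi host, append to /etc/vmware/settings:
      echo 'disk.terminateVMOnPDLDefault = "TRUE"' >> /etc/vmware/settings

      # On the cluster, as a vSphere HA advanced option:
      das.maskCleanShutdownEnabled = true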


    Note : Big thanks to Yellow Bricks .. 

    Wednesday, September 11, 2013

    What's new in vSphere 5.5

    The vSphere 5.5 beta has already been released. Following are some of the new features introduced in the 5.5 release.


    1. Support for Reliable Memory Technology – a CPU hardware feature that reports a region of memory as reliable. Using Reliable Memory Technology, the kernel and other critical ESXi components run in this reliable memory, which provides greater resiliency and protects against memory errors.
    2. Host-Level Configuration Maximums –   The maximum number of logical CPUs has doubled from 160 to 320, the number of NUMA nodes doubled from 8 to 16, the number of virtual CPUs has doubled from 2048 to 4096, and the amount of RAM has also doubled from 2TB to 4TB. There is virtually no workload that is too big for vSphere 5.5!
    3. Hot-pluggable PCIe SSD Devices – vSphere 5.5 provides the ability to perform hot-add and remove of SSD devices to/from a vSphere 5.5 host.  With the increased adoption of SSD, having the ability to perform both orderly as well as unplanned SSD hot-add/remove operations is essential to protecting against downtime and improving host resiliency.
    4. Improved Power Management – ESXi 5.5 provides additional power savings by leveraging deep processor power states (C-states). By leveraging the deeper CPU sleep states, ESXi can minimize the amount of power consumed by idle CPUs during periods of inactivity. Along with the improved power savings comes an additional performance boost on Intel chipsets, as turbo mode frequencies can be reached more quickly when CPU cores are in a deep C-state.
    5. Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware 10) – ESXi 5.5 provides a new virtual machine compatibility level that includes support for a new virtual SATA Advanced Host Controller Interface (AHCI) controller with support for up to 120 virtual disk and CD-ROM devices per virtual machine (previous versions support up to 60). This new controller is of particular benefit when virtualizing Mac OS X, as it allows you to present a SCSI-based CD-ROM device to the guest.
    6. VM Latency Sensitivity – included with the new virtual machine compatibility level comes a new “Latency Sensitivity” setting that can be tuned to help reduce virtual machine latency.  When the Latency sensitivity is set to high the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.
    7. Expanded vGPU Support – vSphere 5.5 extends VMware's hardware-accelerated virtual 3D graphics support (vSGA) to include GPUs from AMD. The multi-vendor approach provides customers with more flexibility in the data center for Horizon View virtual desktop workloads. In addition, 5.5 enhances the "Automatic" rendering mode by enabling the migration of virtual machines with 3D graphics enabled between hosts running GPUs from different hardware vendors, as well as between hosts that are limited to software-backed graphics rendering. This can be done only via the Web Client.
    8. Graphics Acceleration for Linux Guests – vSphere 5.5 also provides out-of-the-box graphics acceleration for modern GNU/Linux distributions that include VMware's guest driver stack, which was developed by VMware and made available to all Linux vendors at no additional cost. Ubuntu 12.04 and later, Fedora 17 and later, and RHEL 7 are supported.
    9. vCenter Single Sign-On (SSO) – in vSphere 5.5, SSO comes with many improvements. There is no longer an external database required for the SSO server, which together with the vastly improved installation experience helps to simplify the deployment of SSO for both new installations and upgrades from earlier versions. This latest release of SSO provides enhanced Active Directory integration, including support for multiple forests as well as one-way and two-way trusts. In addition, a new multi-master architecture provides built-in availability that not only improves resiliency for the authentication service but also helps to simplify the overall SSO architecture.
    10. vSphere Web Client – the Web Client in vSphere 5.5 also comes with several notable enhancements. The Web Client is now supported on Mac OS X, including the ability to access virtual machine consoles, attach client devices and deploy OVF templates. In addition, there have been several usability improvements, including support for drag-and-drop operations, improved filters to help refine search criteria and make it easier to find objects, and the introduction of a new "Recent Items" icon that makes it easier to navigate between commonly used views. The Web Client is now the primary client; the C# client is kept for backward compatibility. For instance, a 62 TB VMDK file can be created only via the Web Client.
    11. vCenter Server Appliance – with vSphere 5.5 the vCenter Server Appliance (VCSA) now uses a reengineered, embedded vPostgres database that offers improved scalability.  I wasn’t able to officially confirm the max number of hosts and VMs that will be supported with the embedded DB.  They are targeting 100 hosts and 3,000 VMs but we’ll need to wait until 5.5 releases to confirm these numbers.  However, regardless what the final numbers are, with this improved scalability the VCSA is a very attractive alternative for folks who may be looking to move away from a Windows based vCenter.
    12. vSphere App HA – App HA brings application awareness to vSphere HA, helping to further improve application uptime. vSphere App HA works together with VMware vFabric Hyperic Server to monitor application services running inside the virtual machine and, when issues are detected, perform restart actions as defined by the administrator in the vSphere App HA policy. This is done through special APIs.
    13. vSphere HA Compatibility with DRS Anti-Affinity Rules – vSphere HA will now honor DRS anti-affinity rules when restarting virtual machines. If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, VMware HA will now honor those rules when restarting virtual machines following a host failure. In previous versions, HA would restart the VM and DRS (if fully automated) would then place (vMotion) the VM according to the anti-affinity rules. With this new feature the VM is restarted on a compliant host without a subsequent migration.
    14. vSphere Big Data Extensions (BDE) – Big Data Extensions is a new addition to the VMware vSphere Enterprise and Enterprise Plus editions. BDE is a vSphere plug-in that enables administrators to deploy and manage Hadoop clusters on vSphere using the vSphere Web Client.
    15. Support for 62TB VMDK – vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note the maximum VMFS volume size is 64TB, whereas the max VMDK file size is 62TB). The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB. The previous limit was 2TB minus 512 bytes.
    16. Microsoft Cluster Service (MSCS) Updates – MSCS clusters running on vSphere 5.5 now support Microsoft Windows 2012, the round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
    17. 16Gb End-to-End Support – in vSphere 5.5, 16Gb end-to-end FC support is now available. Both the HBAs and array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
    18. Auto Remove of Devices on PDL – This feature automatically removes a device from a host when it enters a Permanent Device Loss (PDL) state.  Each vSphere host is limited to 255 disk devices, removing devices that are in a PDL state prevents failed devices from occupying a device slot.
    19. VAAI UNMAP Improvements – vSphere 5.5 provides a new "esxcli storage vmfs unmap" command with the ability to specify the reclaim size in blocks, as opposed to just a percentage, along with the ability to reclaim space in increments rather than all at once (a usage sketch follows this list).
    20. VMFS Heap Improvements – vSphere 5.5 introduces a much improved heap eviction process, which eliminates the need for large heap sizes.  With vSphere 5.5 a maximum of 256MB of heap is needed to enable vSphere hosts to access the entire address space of a 64TB VMFS.
    21. vSphere Flash Read Cache – a new flash-based storage solution that enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource, which can be used to enhance virtual machine performance by accelerating read-intensive workloads.
    22. Link Aggregation Control Protocol (LACP) Enhancements – with the vSphere Distributed Switch in vSphere 5.5 LACP now supports 22 new hashing algorithms, support for up to 64 Link Aggregation Groups (LAGs), and new workflows to help configure LACP across large numbers of hosts.
    23. Traffic Filtering Enhancements – the vSphere Distributed Switch now supports packet classification and filtering based on MAC SA and DA qualifiers, traffic type qualifiers (i.e. vMotion, Management, FT), and IP qualifiers (i.e. protocol, IP SA, IP DA, and port number).
    24. Quality of Service Tagging – vSphere 5.5 adds support for Differentiated Services Code Point (DSCP) marking. DSCP marking support enables users to insert tags in the IP header, which helps in layer 3 environments where physical routers function better with an IP header tag than with an Ethernet header tag.
    25. Single-Root I/O Virtualization (SR-IOV) Enhancements – vSphere 5.5 provides improved workflows for configuring SR-IOV as well as the ability to propagate port group properties to the virtual functions.
    26. Enhanced Host-Level Packet Capture – vSphere 5.5 provides an enhanced host-level packet capture tool that is equivalent to the command-line tcpdump tool available on the Linux platform.
    27. 40Gb Bandwidth  Support – vSphere 5.5 provides support for 40Gb bandwidth.  In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.
    28. vSphere Data Protection (VDP) – VDP has also been updated in 5.5 with several great improvements, including the ability to replicate backup data to EMC Avamar, direct-to-host emergency restore, the ability to back up and restore individual .vmdk files, more granular scheduling for backup and replication jobs, and the ability to mount existing VDP backup data partitions when deploying a new VDP appliance.
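
    A rough usage sketch of the reclaim command mentioned in item 19 (the datastore label and reclaim unit below are placeholders; run it from the ESXi shell of a host that mounts the datastore):

      esxcli storage vmfs unmap -l Datastore01 -n 200    # reclaim free space in units of 200 VMFS blocks per pass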

    Monday, August 19, 2013

    VLAN Tagging

    VLAN tagging in ESX can be done in one of three ways:
    • VST - Virtual Switch Tagging 
    • EST - External Switch Tagging 
    • VGT - Virtual Guest Tagging 

    VST - Virtual Switch Tagging 

    • VLAN tagging for all packets is performed by the virtual switch before they leave the ESX/ESXi host.
    • Port groups on the virtual switch of the ESX host should be configured with a VLAN ID (1-4094); a command-line example follows this list.
      Note : VLAN ID 0 (zero) disables VLAN tagging on the port group (EST mode)
      VLAN ID 4095 enables trunking on the port group (VGT mode)
    • Reduces the number of physical NICs on the ESX host by running all the VLANs over one physical NIC.
    • The physical switch port connecting the uplink from the ESX host should be configured as a trunk port, and all the VLANs defined on the vSwitch need to be allowed.
    • A virtual machine's network packet is delivered to the vSwitch, and before it is sent to the physical switch the packet is tagged with the VLAN ID according to the port group membership of the originating virtual machine.
    • Set the NIC teaming policy to Route based on originating virtual port ID (this is the default).
    • Physical switch port configuration:
      The switch port needs to be set to TRUNK mode.
      dot1q encapsulation should be enabled.
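
    A minimal sketch of setting the port group VLAN ID from the ESXi shell (the port group name and VLAN ID are placeholders):

      esxcli network vswitch standard portgroup set -p "VM Network" -v 100   # assign VLAN 100 to the port group
      esxcli network vswitch standard portgroup list                         # verify the VLAN ID column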
    EST- External Switch Tagging 
    • The ESX host doesn't see the VLAN tags; tagging is done by the physical switch.
    • Number of physical NICs = number of VLANs.
    • Port groups on the virtual switch of the ESX host need not be configured with a VLAN number, or can be configured with VLAN ID 0 (if it is not the native VLAN).
    • Physical switch port configuration:
      The port needs to be configured as an access port.
    VGT - Virtual Guest Tagging


    • Install an 802.1Q VLAN trunking driver inside the virtual machine guest operating system (a guest-side example follows this list).
    • All VLAN tagging is performed by the virtual machine using the trunking driver in the guest. VLAN tags are carried between the virtual machine and the external switch when frames are passed to/from the virtual switch.
    • The virtual switch is not involved in or aware of this operation; it only forwards the packets from the virtual machine to the physical switch and does not modify them.
    • The port group of the virtual machine should be configured with VLAN ID 4095.
    • Physical switch port configuration:
      The switch port needs to be set to TRUNK mode.
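
    As a guest-side illustration of VGT, a tagged sub-interface can be created inside a modern Linux guest with the 8021q module (the interface name, VLAN ID and address are placeholders):

      modprobe 8021q                                            # load the 802.1Q tagging module
      ip link add link eth0 name eth0.100 type vlan id 100      # create a sub-interface tagged with VLAN 100
      ip addr add 192.168.100.10/24 dev eth0.100                # assign an address on that VLAN
      ip link set dev eth0.100 up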

    Tuesday, June 4, 2013

    Restart management Agents

    ESXi management agents can be restarted in a couple of ways.

    DCUI


    • Connect to the ESXi host console.
    • Press F2 and provide the credentials (log in as root).
    • Go to Troubleshooting Options and navigate to Restart Management Agents.

    Local Console or ssh 

    Method 1 : no downtime for VMs

    • /sbin/services.sh restart

    will restart all the management agents: hostd, ntpd, sfcbd, slpd, wsman, vobd

    Method 2 :

    Run following commands,

    • /etc/init.d/hostd restart
    • /etc/init.d/vpxa restart


    Method 3 : 

    • service mgmt-vmware restart
    • service vmware-vpxa restart

    If automatic startup/shutdown is enabled for VMs, virtual machines may restart.


    Friday, May 31, 2013

    vCenter Roles and Privileges

    Role and Privileges

    vCenter privileges are handled quite differently from Active Directory's discretionary access control; vCenter uses role-based access control (RBAC).

    There are three types of roles:

    • System
    • Sample
    • Custom 


    System Roles

    There are three system roles; these are defaults and cannot be changed:

    • No Access - the user cannot see the object
    • Read Only - the user can see the object but right-click options are grayed out
    • Administrator - the user has all privileges on the object


    Sample Roles : the default sample roles are


    • Virtual machine power on 
    • Datastore consumer
    • Network consumer
    • Virtual Machine User
    • Resource Pool administrator
    • VMware Consolidated Backup user
    Note : It is advised not to change the sample roles; it is better to clone a role and apply the clone to the object.


    Custom Roles:

    Additional roles that you create in vCenter are called custom roles.

    How are permissions applied and inherited?

    • A permission applied directly on an object supersedes a permission that is inherited.
    • A permission applied to a user supersedes a permission inherited from group membership.
    Examples :

    • User_A has admin access on the datacenter and No Access on VM1.
      Result : User_A can see and modify all the objects under the datacenter but cannot see VM1.
    • Group_A - Power on VM
      Group_B - Take snapshot
      User_A - member of Group_A and Group_B
      User_B - member of Group_A
      User_C - member of Group_B
      Result : User_A can power on and take snapshots of all the VMs; User_B can power on VMs but cannot take snapshots; User_C can take snapshots but cannot power on VMs.
    • Group_A : Administrator
      Group_B: Read only VM2
      User_A : Group_A , Group_B
      User_B : Group_A
      User_C: Group_B
      Result : User_A can see and perform admin activities on all the objects except VM2 (on VM2 he is read-only)
      User_B has administrative privileges on all the objects, including VM2
      User_C can see only VM2, no other objects in the datacenter
    • Group_A - Power on VM
      Group_B - Take snapshot
      User_A - Read Only on the datacenter, and a member of both Group_A and Group_B
      Result : Even though the user is part of both Group_A and Group_B, the user will only be able to see the objects; all the options will be grayed out.

    Thursday, May 16, 2013

    Multiple Page files - single volume

    Using the GUI, you can define only one page file per volume. It is possible to create multiple page files on a single volume by modifying the registry.


    To create multiple paging files on one volume 

    • On the drive or volume you want to hold the paging files, create folders for the number of paging files you want to create on the volume. For example, C:\Pagefile1, C:\Pagefile2, and C:\Pagefile3.
    • Click Start, Click Run, type regedit in the Open box, and then click OK.
    • In the left pane, locate and click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    • Find the PagingFiles value, and then double-click it to open it.
    • Remove any existing values, and add the following values:
      c:\pagefile1\pagefile.sys 3000 4000
      c:\pagefile2\pagefile.sys 3000 4000
      c:\pagefile3\Pagefile.sys 3000 4000
    • Click OK, and then quit Registry Editor.
    • Restart the computer to cause the changes to take effect.

    VMware Standalone Converter

    VMware Standalone Converter - this VMware product is used to:
    1. Convert physical to virtual machines
    2. Convert virtual to virtual machines
    3. Import virtual machines hosted on VMware Workstation or Hyper-V
    4. Import third-party backup images so they can be managed by vCenter
    This is a free product which can be downloaded from the VMware website. As an alternative, PlateSpin can be used for physical-to-virtual conversion, which is of course a paid product. The latest version of Standalone Converter is 5.1.

    Components
    1. Converter Standalone server : consists of two services, the Converter Standalone server and the Converter Standalone worker.
    2. Converter Standalone agent : installed on the source physical machine that is being imported to a virtual machine. You can choose to uninstall the agent from the physical machine once the import is complete.
    3. Converter Standalone client : works with the Converter Standalone server and provides the user interface, giving access to the conversion and configuration wizards.

    How the converter works : 

    • Standalone Converter uses cloning and system reconfiguration steps to create and configure the destination virtual machine so that it works successfully in the vCenter environment. The migration process is non-destructive for the source machine, so you can continue using the source machine after the conversion completes.
    • Cloning is the process of copying the physical source volumes to the destination virtual machine. It involves copying the data from the source machine's hard disk and transferring it to the destination virtual machine. The destination disks can have a different geometry, size, file layout and other characteristics.
    • System reconfiguration adjusts the migrated operating system so that it can function on virtual hardware.
     NOTE : If you want both the source and destination machine to co-exist, you have to change the IP address and the computer name of one of them.

    HOT CLONING OF PHYSICAL MACHINE

    Using the converter you can perform hot cloning, i.e. converting the physical machine while it is running. This allows you to convert the machine without shutting down the source.

    How it works

    Because the process converts a running machine, the resulting virtual machine is not an exact copy of the physical machine. While converting a Windows physical machine, you can set the converter to synchronize the destination virtual machine after hot cloning: synchronization copies from source to destination the blocks that were changed during the initial cloning period. To avoid loss of data, Standalone Converter shuts down certain services so that no critical changes are made on the source machine.

    Standalone Converter can shut down the source machine once the conversion/import is completed. When combined with synchronization, the virtual machine can take over from the source with the least possible downtime.


    NOTE : When you hot clone dual-boot systems, you can clone only the default operating system, the one that the boot.ini file points to. To clone the non-default OS you need to change the boot.ini file. For a Linux second OS, you can boot into it and then clone it using Standalone Converter.


    REMOTE HOT CLONING OF MACHINES RUNNING WINDOWS

    • Standalone Converter installs the agent on the source machine, and the agent takes a snapshot of the volumes (the VSS snapshot feature of Windows is used).
    • Standalone Converter creates the destination machine and copies the volumes from the source machine to the destination machine.
    • The agent installs the required drivers to allow the operating system to boot in a virtual machine and personalizes the virtual machine (e.g. changes the IP address).
    • Optionally, the agent is uninstalled from the physical source machine.
    Pre-requisites :

    • Turn off Simple File Sharing on the source machine
    • Ensure file and print sharing is not blocked by the firewall
    • Sysprep should be available on the Standalone Converter server - it is used for guest customization. If the guest OS is Windows 2003, Sysprep should be copied to %ALLUSERSPROFILE%\Application Data\VMware\VMware vCenter Converter Standalone\sysprep\svr2003.


    HOT CLONING OF LINUX MACHINE 

    Unlike Windows, no agent is installed on the source machine; instead a helper virtual machine is created at the destination, i.e. on the ESX/ESXi host.

    How it works

    • The converter creates an empty helper virtual machine at the destination, which is used as the container during the migration. The helper machine boots from an ISO file that is located on the Converter Standalone server.
    • The helper machine connects to the source using SSH and starts retrieving data from the source; you can select which source volumes need to be copied.
    • Once the copy is complete, the destination virtual machine is reconfigured so it can boot as a virtual machine.
    • Converter stops the helper machine once the conversion is complete.

    DATA CLONING 

    Volume Based - 
    • Volumes are copied from the source to the destination.
    • All dynamic disks are read but converted to basic disks at the destination.
    • There are two types: file-system level and block level. File-system level is used when the destination disk is smaller than the original disk or when a FAT volume is resized. Block level is used when 'preserve size' or a larger volume size is selected for an NTFS source volume.
    • Supported for importing existing virtual machines and for hot cloning.

    DISK Based

    • Supported for import of existing virtual machines.
    • Transfers all sectors from all disks and preserves the volume metadata; disk properties are the same as on the source.
    • Supports both basic and dynamic disks
    Linked Clone 

    This is the fastest method of cloning.


    SYSTEM SETTINGS AFTER CONVERSION

    The following source computer settings are not changed:
    • Operating system settings i.e computer name , security ID , user accounts , profiles.
    • Application data and data files
    • Volume serial numbers for each disk partition.

    Changes after conversion 
    • CPU Model and serial number 
    • Ethernet adapters
    • Graphic Cards
    • Disks and Partitions
    • Primary disk controllers


    PORT REQUIREMENT



    P2V of Linux machine

    V2V of Windows machine

    P2V of Windows machine

    LIMITATIONS

    • Converter Standalone cannot detect any source volumes and file systems that are located on physical disks larger than 2TB.
    • Hybrid disk cloning is not supported.
    • Synchronization is supported only for volume-based cloning at the block level and Scheduling synchronization is supported only for managed destinations that are ESX 4.0 or later.
    • When you convert a virtual machine with snapshots, the snapshots are not transferred to the destination virtual machine.
    KEDB

    • Physical to virtual machine conversion fails at 1 %
      This error is usually caused by bad sectors on the source machine or by a VSS error. Analyse and defragment the disks on the source machine and try again; if that doesn't work, contact the hardware vendor.

    Tuesday, April 23, 2013

    Performance Monitoring

    Commonly used performance monitoring tool :

    => vSphere performance charts - accessible via both the vSphere Client and the Web Client.
    => esxtop or resxtop : per-host ESX monitoring tools.
    => guest monitoring tools : Perfmon, IOmeter

    On a virtual platform two layers of monitoring are always required, i.e. host level and guest level, and both should be observed over a period of time.

    Note :
    * resxtop in batch mode cannot currently be used in vMA because of a bug:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2008122
    * Avoid using guest monitoring tools which depend on time synchronization.
    * When the vSphere Client is directly connected to an ESX host, only real-time performance data is available, and this data is stored in flat files on the ESXi host.

    VMware performance data is available in two chart views, i.e. overview and advanced.

    Overview :
    Depending on the object you select, a set of predefined performance charts is displayed in the overview pane.

    Advanced :
    Displays the statistical data of an ESXi host or any object in vCenter, such as a datastore, cluster, resource pool, VM or vApp. The available chart options are
    # CPU
    # Memory 
    # Disk 
    # Management Agent
    # Network and 
    # System

    Chart Types : 
    # Line Graph
    # Stacked Graph 
    # Stacked Graph per VM  (Only on  ESX)
    # Bar chart (Storage Metrics)
    # Pie Chart (Storage Metrics)

    Counter :
    Depending on the chart option you can select the object. For example, if you select CPU as the chart type you can see counters such as CPU usage, CPU ready time, etc. Under each counter you can see its description, rollup, unit and internal name.

    Statistics at different granularity:

    These are predefined values and cannot be changed.



    Statistic Type :

    # Rate : value over current interval.  ex - Cpu usage 
    # Delta : changed from previous interval. ex - cpu ready time 
    # absolute : independent of interval .  ex- Memory Active 

    Rollup : the function used to consolidate data points between statistics intervals
    # average  : avg of data points : CPU usage 
    # sum : sum of data points : CPU ready time 
    # latest : latest data point  : Uptime


    Note : 
    * For real-time data, the values shown are the current max and current min
    * For historical data, they are the average max and average min

    You can save the performance chart in JPEG, BMP, PNG, GIF and Excel format.

    resxtop :

    It can  be run in following mode 
    # Interactive (Default): real time 
    # Batch  : output is redirected to a file 
    # Replay : data is collected with the vm-support command and replayed using resxtop

    Usage : Log in to a system which has vCLI installed; the user should have an administrative role on the ESXi host.

    Interactive Mode : 

    # Run: resxtop --server esxhost --username root (this will prompt for the password).
    # If you are connected through vCenter, use the following:
    resxtop --server vc --username userid --password vcpwd --vihost esxhost

    # type the following to change the behavior 
    c =>CPU 
    m=>Memory
    d =>disk adapter
    u => disk device
    f => add or remove column 
    v =>virtual machine view
    n => network view
    h => help
    q => quit 

    If you type d, resxtop displays ADAPTR, PATH, NPTH, CMDS/s, READS/s, WRITES/s, MBREAD/s, MBWRTN/s, DAVG/cmd, KAVG/cmd, GAVG/cmd and QAVG/cmd.

    When you press f, the column names are displayed. A name which begins with * indicates that the column is included in the output.

    Note : 
    * Options are case sensitive .
    * The space bar is used to refresh the screen.
    * W is used to save the configuration.

    Batch Mode 
    resxtop -b -a > file.csv

    -a : export all counters
    -b : run in batch mode

    Note : 
    * Always start the VM before starting batch mode, because counters for a VM added after batch mode has started will not be stored.
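
    A sketch of a bounded capture (the host name is a placeholder; -d sets the delay between samples in seconds and -n the number of iterations, so the run below covers roughly an hour):

      resxtop --server esxhost -b -a -d 10 -n 360 > esxtop_output.csv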

    Replay mode :
    vm-support -s -d 300 -i 30

    resxtop -R <vm-support output directory>  (replay)

    -s : restrict the collection of diagnostic data
    -d : duration of logging, here 300 sec i.e. 5 min
    -i : sampling interval, i.e. each sample is collected every 30 sec


    You can use perfmon to view the resxtop output file.

    Guest Monitoring 
    From version 5.1, VMware has started shipping additional performance DLLs which are installed in the guest OS by VMware Tools. These counters are disabled by default; you need to enable them to use them in Perfmon inside the guest OS. The extra counter groups that are added are VM Processor and VM Memory.

    To enable them, set tools.guestlib.enableHostInfo = "true" in the .vmx file.

    Tuesday, April 16, 2013

    Configure ESXi

    NTP Configuration on ESXi Host


    vSphere Client :




    Web client




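    A rough command-line alternative for NTP from the ESXi shell (a sketch; the NTP server name is a placeholder and the firewall ruleset name should be verified for your ESXi version):

      echo "server pool.ntp.org" >> /etc/ntp.conf                                  # add an NTP server
      esxcli network firewall ruleset set --ruleset-id ntpClient --enabled true    # allow outbound NTP
      /etc/init.d/ntpd restart                                                     # restart the NTP daemon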

    DNS and Routing Config


    Using ESXi DCUI console :






     VC client




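    A rough equivalent for DNS and routing from the ESXi shell (a sketch; all addresses and names below are placeholders):

      esxcli network ip dns server add --server 192.168.1.10          # add a DNS server
      esxcli network ip dns search add --domain example.local         # add a DNS search domain
      esxcli system hostname set --host esx01 --domain example.local  # set host name and domain
      esxcfg-route 192.168.1.1                                        # set the VMkernel default gateway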

    Hyper Threading 

    Hyper-threading is an advanced CPU feature available on the Intel Nehalem CPU series, which allows a core to run two threads simultaneously. It helps improve CPU performance by 0-40% on supported systems.


    The OS can recognize whether cores are physical or hyper-threaded, but applications cannot distinguish between them.


    ESX  Behavior:


    The ESXi scheduler can distinguish between physical cores and hyper-threaded logical cores. The scheduler allocates physical cores first, until all physical cores are loaded. If there are additional vCPUs requesting CPU resources, they are then assigned to the additional logical cores. In this way HT has no impact on performance until more vCPUs are concurrently executing than there are physical cores.




    To use hyper-threading, it first needs to be enabled in the BIOS of the host.

    Configuration using vSphere client:

    Hyperthreading - web client





    Hyperthread config for VM 



    VM level sharing  :

    1. Any (default) - the virtual machine's vCPUs can share hyper-threaded cores with other VMs. This is set by default.
    2. None - the virtual machine gets exclusive access to a physical core whenever it is scheduled; the core's other logical thread is left idle.
    3. Internal - vCPUs of the same VM may share a physical core with each other, but not with other VMs. (See the .vmx sketch below.)
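
    A minimal sketch of how this preference can be set as a VM advanced configuration parameter (the option name sched.cpu.htsharing is taken from the vSphere resource management documentation; verify it for your version):

      sched.cpu.htsharing = "internal"    # valid values: any | none | internal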


    Memory compression cache
     ESX uses memory compression (i.e. gzip-style compression) to extend the use of RAM, which reduces the swapping of memory pages to disk. Memory compression is enabled by default and improves performance when memory is over-committed by limiting swapping to disk. The technique divides the VM memory being swapped into 4KB pages and tries to compress each to 2KB. If compression succeeds, 50% of the space is saved; if it fails, the VM still swaps the original 4KB page to physical disk.



    Enabling/Disabling Memory Compression

    Open the Virtual Center and select Hosts and Clusters
    Select a host and click the Configuration tab
    Select Software | Advanced Settings
    Select Mem.MemZipEnable, 1 for on, 0 for off


    Configuring Memory Compression
    Open the Virtual Center and select Hosts and Clusters
    Select a host and click the Configuration tab
    Select Software | Advanced Settings
    Select Mem.MemZipMaxPct. 
    This defaults to 10% of the VM's memory for the compression cache and can be altered from 5% to 100%. (An esxcli alternative is sketched below.)
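
    The same advanced options can also be set from the ESXi shell; a minimal sketch (the values shown are only examples):

      esxcli system settings advanced set -o /Mem/MemZipEnable -i 1     # 1 = enabled, 0 = disabled
      esxcli system settings advanced set -o /Mem/MemZipMaxPct -i 10    # max % of a VM's memory used for the cache
      esxcli system settings advanced list -o /Mem/MemZipEnable         # verify the current value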


    Note : The compression process uses 2-3% of host CPU and a compression takes about 20 ms. The compressed memory is placed in the VM's memory, not in host memory.




    ESXi Licensing

    You can add any number of licenses to the vSphere environment using vSphere license keys. ESXi licensing is socket based; there is no longer any restriction on memory (the vRAM entitlement has been removed).

    Home => Administration => Licensing => Manage vSphere Licenses => Add License Key











    vCenter installation


    Steps

    1. The DNS server needs to be reachable from the new vCenter server; DNS is one of the main dependencies of vCenter.
    2. Install the Windows 2008 R2 operating system (from vSphere 4.1 onwards, vCenter is supported only on a 64-bit OS).
    3. Enable the Application Server role to install .NET Framework 3.5.1, which is a pre-requisite for SQL 2008 Standard Edition.

    4. Install SQL Server 2008 on the vCenter server.
    Note : Don't install Reporting Services on the vCenter server, as it uses ports 80 and 443.



    5. Disable the dynamic TCP port for the SQL instance.






    6. Mount the vCenter ISO.

    7. Browse to I:\Single Sign On\DBScripts\SSOServer\schema\mssql and run the scripts in SQL Server 2008 Management Studio:
       
    CREATE LOGIN RSA_DBA WITH PASSWORD = '<CHANGE DBA PASSWORD>', DEFAULT_DATABASE = RSA

    CREATE LOGIN RSA_USER WITH PASSWORD = '<CHANGE USER PASSWORD>', DEFAULT_DATABASE = RSA

    Change the passwords and make a note of them; they are required for the later installation steps. The first script creates the database called RSA, which is used for Single Sign-On.

    8. Create new databases for vCenter and Update Manager – VCDB and VCUPDATEDB.
    Set the recovery model to Simple for ease of management.

    9. Create a System DSN.
     
     
    10. Now the pre-requisites for vCenter 5.1 are ready; we are good to install vCenter 5.1.
    11. vCenter Single Sign-On installation
      Change the permissions of RSA_DBA and RSA_USER on the RSA database:
      Database User : RSA_DBA
      Database User : RSA_User
      Database Instance: vcdb





    Access rights on the Single Sign-On databases








    Inventory Services


    User ID created while installing Single Sign-On




    vCenter Server Installation





     
     
     
     
     This completes the installation of vCenter 5.1.