Category: Virtualization


In most cases you will be adding VMFS volumes (LUNs) to your vSphere environment, because most of the time your environment will grow. Adding a LUN is very simple: just configure the masking correctly on your SAN, carry out a rescan on your ESXi host (or the entire cluster), and your LUN/volume is available.

But what about removing a LUN from your environment? This is a bit trickier; you have to make sure that:

  • No virtual machines are using the LUN you want to remove, and the LUN is not used as an RDM;
  • The LUN is not part of a datastore cluster and Storage DRS is not active on it;
  • Storage I/O Control is disabled for the LUN;
  • The LUN is not used as an HA heartbeat datastore;
  • And the LUN is not used as a persistent scratch location.

On top of that, you cannot simply remove the LUN, even when it is no longer used by any virtual machines. It is very important to first unmount the datastore and then detach the LUN from your ESXi hosts.

VMware described a clear procedure in the following two KB articles:

  • Removing a LUN containing a datastore from VMware ESXi/ESX 4.x – KB 1029786
  • Unpresenting a LUN in ESXi 5 – KB 2004605
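As a rough illustration of what those KB articles describe, here is a PowerCLI sketch of the unmount-then-detach sequence on a single host; the datastore and host names are placeholders, and the UnmountVmfsVolume/DetachScsiLun methods are part of the vSphere 5.x storage API:

# Placeholders: adjust to your own datastore and host names
$ds  = Get-Datastore -Name "Datastore01"
$esx = Get-VMHost -Name "esxi01.lab.local"
$storSys = Get-View $esx.ExtensionData.ConfigManager.StorageSystem

# Step 1: unmount the VMFS volume from the host
$storSys.UnmountVmfsVolume($ds.ExtensionData.Info.Vmfs.Uuid)

# Step 2: detach the backing device (LUN) before unpresenting it on the array
$canonicalName = $ds.ExtensionData.Info.Vmfs.Extent[0].DiskName
$lun = Get-ScsiLun -VmHost $esx -CanonicalName $canonicalName
$storSys.DetachScsiLun($lun.ExtensionData.Uuid)

Repeat the unmount and detach for every host in the cluster that sees the datastore before removing the LUN on the array.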

 

When a virtual machine is provisioned to a datastore cluster, the Storage DRS algorithm runs to determine the best placement for the virtual machine. The interesting part of this process is how Storage DRS determines the free space of a datastore, or, to be more precise, the improvement made in vSphere 5.1 to the free space calculation and to the method of finding the optimal destination datastore.

vSphere 5.0 Storage DRS behavior
Storage DRS is designed to balance the utilization of the datastore cluster: it selects the datastore with the highest free space value in order to balance space utilization across the datastores in the datastore cluster and to avoid out-of-space situations.

During the deployment of a virtual machine, Storage DRS runs a simulation to generate an initial placement recommendation. This simulation is an isolated process and retrieves the current free space values of the datastores. However, the space usage of a datastore is only updated once the virtual machine deployment has completed and the virtual machine is ready to power on. This means that the initial placement process is unaware of other ongoing initial placement recommendations and their pending storage space allocations. For example, if two large virtual machines are deployed at almost the same time, both simulations see the same free space value, may select the same datastore, and can leave it far more utilized than either simulation expected.

 

More information can be found in the full article on frankdenneman.nl.

  1. Download the vCenter 5.1.0a ISO and mount it on the vCenter server. If you have split the components across several servers, use the ISO on the corresponding server.
  2. First of all, SSO. Open a command prompt, change into the “Single Sign On” folder of the DVD, and run:
    VMware-SSO-Server.exe /S /v" /L*v \"%temp%\vim-sso-msi.log\" /qn"

    You can then open the %temp% folder in Windows Explorer and monitor the unattended upgrade from there. You will find the log file and a temporary sub-directory; when that directory disappears, the installation is complete. Check the log to be sure the upgrade completed successfully (see the log-check sketch after this list).

  3. After SSO, proceed with the Inventory Service. You can run the installer from the autorun interface of the DVD. It will detect that the Inventory Service is already in place and offer to upgrade it. Obviously choose YES and wait for the upgrade to finish; there will be no interactive screens to deal with.
  4. Third component: the Web Client. I usually like to install it even before vCenter Server, since it is the only graphical interface to SSO; if something is not working, you can use it to manage SSO. As before, the installer will automatically detect the previous version and upgrade it.
  5. Fourth: vCenter Server. Here too, the installer will automatically detect the previous version and upgrade it.
  6. Last one: Update Manager, if you have it installed.
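To keep an eye on the silent SSO upgrade from step 2, you can follow the verbose installer log from a PowerShell prompt; a minimal sketch that assumes the log name used in the command above and the standard Windows Installer success wording:

# Follow the installer log while the silent upgrade runs
Get-Content "$env:TEMP\vim-sso-msi.log" -Wait

# Afterwards, check for the standard Windows Installer success line
Select-String -Path "$env:TEMP\vim-sso-msi.log" -Pattern "Installation success or error status: 0"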

 

Configuring CA signed certificates is a challenge with vSphere, as it is with any complex enterprise environment. Securing an environment is a requirement in many large organizations. You need either public certificates (such as Verisign or Globaltrust), Microsoft CA certificates, or OpenSSL CA certificates to ensure secure communication.
This article provides the steps to configure these certificates on the vSphere components in an environment. The article assumes that all components are already installed and running with self-signed certificates.
Please validate each step below. Each step provides instructions or a link to a document with information on configuring the certificates in your environment.
  1. Generate certificate requests and certificates for each of the vCenter Server components. For more information, see Creating certificate requests and certificates for the vCenter Server 5.1 components (2037432).

  2. Replace the vCenter SSO certificates. For more information, see Configuring CA signed SSL certificates for vCenter SSO in vCenter Server 5.1 (2035011).

  3. Replace the Inventory Service certificates. For more information on this, see Configuring CA signed SSL certificates for the Inventory service in vCenter Server 5.1 (2035009).

  4. Replace the vCenter Server 5.1 certificates. For more information, see Configuring CA Signed Certificates for vCenter Server 5.1 (2035005).

  5. Replace the vSphere Update Manager certificates. For more information, see Configuring CA signed SSL certificates for VMware Update Manager in vSphere 5.1 (2037581).

  6. Replace ESXi 5.x host certificates. For more information, see Configuring CA signed SSL certificates with ESXi 5.x hosts (2015499).

If your issue persists even after trying these steps, refer to the source KB article below.

 

Source: VMwareKB

Intro to VMware vVol

For a while now, HP Storage has been working with VMware as a design partner to define and develop a VM-granular storage architecture that could eventually replace vSphere’s VMFS/datastore model. This new model is called VMware Virtual Volumes (vVols). Virtual Volumes introduces a 1:1 mapping of VMs (more specifically VMDKs, or VM LUNs) to storage volumes; in other words, each VM will be associated with its own, unique storage volume. With vVols we could finally have the VMDK representation in vSphere match the representation on storage.

 

As a result, the storage system would now be able to operate at the same level of granularity as vSphere, which means that vSphere could better leverage the native strengths and capabilities of modern, intelligent storage arrays such as HP 3PAR.

Why vVol?

I think the big thing VMware and storage partners like HP want to overcome is the inefficiency and the challenges that exist today as a result of working at the LUN or volume level in vSphere. Despite all the advances that have been made, when the VM and its VMDKs are the unit of data management, a LUN is too coarse a container to deliver the efficiency and flexibility customers need. The granularity mismatch between vSphere and storage systems needs to be resolved. Enter vVols.

 

More information on this can be found on the HP Blogs.

 

We can now manage multiple hypervisors with VMware vCenter, using the vCenter Multi-Hypervisor Manager add-on.

 

 

 

ESXi 5.1 comes with many improvements, and one of them is a set of new namespaces and commands in esxcli.

Those new commands enable a system administrator to perform a shutdown, a reboot or a maintenance mode operation on a host.

Under the system namespace, the new commands are the equivalents of the classic vicfg-hostops/esxcfg-hostops, which until now were the only way to perform this kind of operation with the vCLI; they are also accessible locally in the ESXi Shell.


Maintenance mode operations

Getting the basic usage of the command is as simple as always. You can perform two operations.

  • Get the state of the host
  • Put the host in or out of Maintenance Mode
~ # esxcli system maintenanceMode 
Usage: esxcli system maintenanceMode {cmd} [cmd options]
Available Commands: 
  get                   Get the maintenance mode state of the system. 
  set                   Enable or disable the maintenance mode of the system. 
~ #
  • Get the state of the host
~ # esxcli system maintenanceMode get 
Disabled 
~ #
  • Put the host in Maintenance Mode
~ # esxcli system maintenanceMode set -e true -t 0 
~ # 
~ # esxcli system maintenanceMode get 
Enabled 
~ #

Power operations

With the shutdown command the host can be either rebooted or powered off. If the ESXi server is not in maintenance mode, the operation will not be allowed.

~ # esxcli system shutdown 
Usage: esxcli system shutdown {cmd} [cmd options]
Available Commands: 
  poweroff              Power off the system. The host must be in maintenance mode. 
  reboot                Reboot the system. The host must be in maintenance mode. 
~ #

For both tasks a reason must be provided; the help output marks only the reason parameter as required, and a delay interval in seconds can also be specified.

~ # esxcli system shutdown poweroff 
Error: Missing required parameter -r|--reason
Usage: esxcli system shutdown poweroff [cmd options]
Description: 
  poweroff              Power off the system. The host must be in maintenance mode.
Cmd options: 
  -d|--delay=<long>     Delay interval in seconds 
  -r|--reason=<str>     Reason for performing the operation (required) 
~ #
  • Power off the host
~ # esxcli system shutdown poweroff --delay=10 --reason="Hardware maintenance"
  • Reboot the host
~ # esxcli system shutdown reboot -d 10 -r "Patches applied"
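If you prefer to perform the same operations remotely, PowerCLI has equivalent cmdlets; a minimal sketch in which the host name is a placeholder:

$esx = Get-VMHost -Name "esxi01.lab.local"   # placeholder host name

# Equivalent of: esxcli system maintenanceMode set -e true
Set-VMHost -VMHost $esx -State Maintenance

# Equivalent of the reboot/poweroff commands above
Restart-VMHost -VMHost $esx -Confirm:$false
# Stop-VMHost -VMHost $esx -Confirm:$false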

Cool script: the following PowerCLI snippet reports the host name, cluster name, memory size (GB), CPU sockets and CPU cores for every host in every cluster.

$myCol = @()
ForEach ($Cluster in Get-Cluster)
{
    ForEach ($VMHost in ($Cluster | Get-VMHost))
    {
        # Get the underlying HostSystem view for the hardware details
        $VMView = $VMHost | Get-View
        $VMSummary = "" | Select-Object HostName, ClusterName, MemorySizeGB, CPUSockets, CPUCores
        $VMSummary.HostName     = $VMHost.Name
        $VMSummary.ClusterName  = $Cluster.Name
        $VMSummary.MemorySizeGB = $VMView.Hardware.MemorySize / 1GB
        $VMSummary.CPUSockets   = $VMView.Hardware.CpuInfo.NumCpuPackages
        $VMSummary.CPUCores     = $VMView.Hardware.CpuInfo.NumCpuCores
        $myCol += $VMSummary
    }
}
$myCol #| Out-GridView
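If you want to keep the output, the collection can be written to a CSV file instead of the console; for example:

# Export the host inventory to a CSV file (path is illustrative)
$myCol | Export-Csv -Path "C:\temp\host-inventory.csv" -NoTypeInformation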

Part 1. Present the new LUNs

Because the SQL servers are virtual machines using RDMs, I needed to create 3 new LUNs on the new SAN and present them to the VMware hosts. These three LUNs would be used for: 1. the cluster quorum disk, 2. the MSDTC disk, 3. the SQL data disk. I won’t dive deep into this step as it would be different for each SAN vendor, but in summary: create your new LUNs as needed and add them to the storage group that is presented to your VMware hosts; after that, rescan all of your VMware HBAs and verify that the VMware hosts can see the LUNs.
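For reference, the rescan can be done in one line with PowerCLI; a minimal sketch in which the cluster name is a placeholder:

# Rescan all HBAs and VMFS volumes on every host in the cluster
Get-Cluster "SQL-Cluster" | Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs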

Part 2. Add New RDMs to the Primary Cluster Node

Next we will add each of the new RDM disks to our primary cluster node. Technically we would not have to mount them to the primary node first, but I’m doing it that way just to keep things organized. Here are the steps for this section (a PowerCLI sketch of the same operation follows the list):

  1. Open Edit Settings of Node 1
  2. Click Add, then Select Disk
  3. Pick Raw Device Map as the new disk type
  4. Select the Raw LUN that you want to use
  5. Tell it to store the information about the RDM with the VM
  6. Select Physical Compatibility Mode
  7. Select a Virtual SCSI Node device that is unused (And is on a controller that is in physical mode)
  8. Complete the Wizard
  9. Repeat steps 2 – 8 to add the number of new RDMs you will need
  10. Now click OK on the edit settings box to commit the changes
  11. After committing, go back into Edit Settings of node 1 and look to see what the file name is for the RDMs (mine were SQL1_6.vmdk and SQL1_7.vmdk; we will need these to configure node 2)
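As mentioned above, here is a PowerCLI sketch of the same operation; the VM name and device name are placeholders, and the SCSI controller selection from step 7 is left to the defaults:

# Add a physical-mode RDM to node 1; use the naa ID of the LUN presented in Part 1
$vm  = Get-VM "SQL-Node1"
$lun = "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"   # placeholder canonical name
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun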

Part 3. Add the Existing RDMs to the Secondary Cluster Node

  1. Open Edit Settings of Node 2
  2. Click Add, then Select Disk
  3. Pick Existing Virtual Disk as the disk type
  4. Browse to where the config files of Node 1 are on the SAN and select the VMDK file that you made note of in step 11 of Part 2
  5. Select a Virtual SCSI Node device that is unused (and is on a controller that is in physical mode; it should probably be the same as on the first node)
  6. Complete the Wizard
  7. Repeat steps 2 – 6 for the remaining RDMs that you need to add to the second node
  8. Now click ok on the edit settings box to commit the changes
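The same step can be scripted with PowerCLI; a minimal sketch where the datastore path is illustrative (use the descriptor file noted in step 11 of Part 2):

# Attach the existing RDM pointer VMDK from node 1 to node 2
$vm2 = Get-VM "SQL-Node2"
New-HardDisk -VM $vm2 -DiskPath "[Datastore01] SQL1/SQL1_6.vmdk"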

Part 4. Prepare the New RDMs in Windows

Note: these steps are performed only on node 1.

  1. Open Disk Management and Rescan the server for new disks
  2. Right click on the first new drive and select “Online”
  3. Right click again on the first new disk and select “Initialize”
  4. Now right click in the right area of the first new disk and pick “Create Volume”
  5. Complete the new volume wizard and assign a temporary drive letter
  6. Repeat Step 2 – 5 for each new drive

Part 5. Add the new drives to the cluster

  1. Open “Failover Cluster Manager”
  2. Expand out the cluster you are working on and select the Storage item in the left tree.
  3. On the right click Add a Disk
  4. Make sure there are check marks beside all of the new drives you wish to add as a cluster disk
  5. Click OK
  6. Verify that the new disks now appear under Available Storage in the middle column
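On Windows Server 2008 R2 and later, the same step can be done with the FailoverClusters PowerShell module; a minimal sketch:

# Add every disk that the cluster can see but that is not yet a cluster disk
Import-Module FailoverClusters
Get-ClusterAvailableDisk | Add-ClusterDisk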

Part 6. Move the Cluster Quorum Disk

  1. Open “Failover Cluster Manager” if you don’t still have it open
  2. Right click the cluster you want to modify and select “More actions -> Configure Quorum Settings”
  3. Select “Node and Disk Majority” (or whatever you already have selected)
  4. Select the new disk that you want to use from the list (it should say “Available Storage” in the right column)
  5. Click next on the confirmation page
  6. Click Finish on the final step after the wizard has completed
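The quorum change can also be made from PowerShell; a small sketch in which the cluster disk resource name is a placeholder:

# Switch the quorum witness to the new cluster disk
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"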

Part 7. Move the SQL Data Disk

  1. Open “Failover Cluster Manager”
  2. Expand out the cluster you’re working on and select “SQL Server” under Services and applications
  3. Select “Add storage” from the menu on the right
  4. Select the new drive from the list, and click OK
  5. In the middle column right click “Name: YourClusterNameHere” and select “Take this resource offline”
  6. Confirm that you want to take SQL offline
  7. Verify that SQL Server and SQL Server Agent are offline
  8. Open Windows Explorer and copy the SQL data from the old drive to the new drive
  9. Back in Failover Cluster Manager right click on the old disk in the middle column and select “Change drive letter”
  10. Give the old drive a temporary drive letter other than what it currently is, then click OK
  11. Confirm that you want to change the drive letter
  12. Next right click the new drive and select change drive letter, set the new drive’s letter to what the old drive was
  13. Again, confirm you want to change the drive letter
  14. Right click on SQL Server and select “Bring this resource online”, do the same for SQL Server Agent
  15. Right Click “Name: YourClusterNameHere” and select “Bring this resource online” in the middle column
  16. Verify that SQL starts and is accessible
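The offline/online steps above can also be driven from PowerShell; a sketch with resource names that vary per installation (check yours first with Get-ClusterResource):

# Take SQL offline before copying the data and swapping drive letters
Stop-ClusterResource "SQL Server"
Stop-ClusterResource "SQL Server Agent"

# ...copy the data and change the drive letters as described above...

# Bring SQL back online afterwards
Start-ClusterResource "SQL Server"
Start-ClusterResource "SQL Server Agent"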

Part 8. Moving MS DTC Witness Disk

From what I have read, the MSDTC witness disk cannot be moved the way the SQL data can. Instead, you simply delete the DTC instance and then recreate it using the disk that you want to use.

  1. Make sure SQL is shutdown
  2. Next, take the DTC instance offline
  3. Make sure to note the IP address of the DTC and the name
  4. Right click and delete the DTC instance
  5. Now right click on “Services and Applications” and select add new
  6. Pick DTC from the list and click next
  7. Fill in the information that you noted from the old instance, but select the new disk this time.
  8. Finish the wizard and make sure that the new instance is online

Part 9. Verify Operational Status

  1. Verify that SQL Server and SQL Agent are online
  2. Verify that MSDTC is online
  3. Login to SQL using a client application and verify functionality

This part is just to make sure that everything is still working. At this point you need to make sure that SQL is back online and that the client applications that it serves are working properly before we remove our old drives.

Part 10. Remove old disks from Cluster

  1. Open Failover Cluster Manager
  2. Select Storage
  3. Verify that the disks under “Available Storage” are the old drives
  4. Right click each old drive and select “Delete”
  5. Confirm that you wish to delete the cluster disk

Part 11. Remove Old Disks from VM settings

This part would seem simple, but you must make sure you remove the correct RDMs, otherwise you will have problems. The best way that I found to make absolutely sure was to make a note of how big the RDMs were that I would be removing. Then we can browse the datastore of the primary node and see which VMDK descriptor files show that size. Of course this only works if they are different sizes; otherwise you will have to go by the order they are in Windows and the order of the SCSI bus numbers in the VM settings.

After determining which disks need to be removed (that is, which VMDK files they are):

  1. On the secondary node go into Edit Settings and find which RDM drives have the same file name as the ones identified earlier
  2. Select the Remove button at the top of the hardware information page.
  3. Leave it set to “remove from vm” and don’t select delete from datastore
  4. Click OK to commit the changes
  5. Now go to the primary node’s Edit Settings dialog box
  6. Repeat steps 2 – 4, but this time tell it to delete the files from disk, as we no longer need the descriptor VMDK files for those RDMs
  7. Now that nothing else should be using those RDMs, you can delete them from your old SAN or unmask those LUNs from your VMware hosts.
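A PowerCLI sketch of the same clean-up, with illustrative VM names and VMDK file name; note it keeps the descriptor when removing from the secondary node and deletes it only when removing from the primary node:

# Remove the RDM pointer from node 2, keeping the descriptor file
Get-HardDisk -VM "SQL-Node2" | Where-Object { $_.Filename -like "*SQL1_6.vmdk" } |
    Remove-HardDisk -Confirm:$false

# Remove it from node 1 and delete the descriptor VMDK as well
Get-HardDisk -VM "SQL-Node1" | Where-Object { $_.Filename -like "*SQL1_6.vmdk" } |
    Remove-HardDisk -DeletePermanently -Confirm:$false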

Source: Justin’s IT

HP has created a vSphere Installation Bundle (VIB) depot for all sorts of important HP driver bundles for VMware vSphere 5 and higher. The VIB depot contains:

  • HP ESXi 5.0 Management Providers bundle – includes the latest HP Common Information Model (CIM) Providers, HP Integrated Lights-Out (iLO) driver, HP Compaq ROM Utility (CRU) driver, and the new HP Agentless Management Service (AMS).
  • HP ESXi 5.0 Utilities bundle – ESXCLI utilities such as HPBOOTCFG (boot order configuration), HPONCFG (remote iLO configuration) and HPACUCLI (Smart Array reporting and configuration)
  • HP ESXi 5.0 NMI bundle – Non Maskable Interrupt (NMI) driver used to write VMware® errors to the Insight Management Log (IML)
  • HP Agentless Management Service Offline Bundle – a service that provides support for Agentless Management and Active Health. Agentless Management Service provides a wider range of server information (e.g. OS type and version, installed applications, IP addresses) allowing customers to complement hardware management with OS information and alerting. Agentless Management provides Integrated Lights Out (iLO) based robust management without the complexity of OS-based agents. Active Health provides 24×7 mission control for servers, delivering maximum uptime through automated monitoring, diagnostics and alerting.
  • Device Drivers as used in the HP Customized VMware images
  • Latest ProLiant Server and option firmware and driver version recipe

The VIB depot can be used by the following tools:

  • VMware Update Manager (VUM)
  • ESXCLI
  • ImageBuilder
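For the ImageBuilder route listed above, a minimal PowerCLI sketch; it assumes the same depot URL also works for Image Builder, and the vendor filter string is only an example:

# Point Image Builder at the HP depot and list the HP packages it offers
Add-EsxSoftwareDepot http://vibsdepot.hp.com/index.xml
Get-EsxSoftwarePackage | Where-Object { $_.Vendor -like "Hewlett*" }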

With the HP VIB depot integrated in VUM, for example, there is no need to download and install the bundles manually. In this blog post I explain how to add the HP VIB depot to VMware Update Manager (VUM) using the following steps:

Add the VIB Depot to VUM

  • Open the vSphere Client
  • In the main screen of the vSphere Client, under Solutions and Applications, select Update Manager
  • Select the Configuration tab
  • Under Download Sources, select Add Download Source
  • As the Source URL, use http://vibsdepot.hp.com/index.xml
  • Press OK
  • Select Download Now to download the new patch and extension metadata


Create a new Baseline

  • Select the Baselines and Groups tab
  • Create
  • Baseline Name  – HP Updates
  • Baseline Type – Host Extension
  • Extensions to Add – Select the extensions you need. In this example I selected the following extensions:
    • hpnmi for ESXi 5.0 v1.3
    • HP ESXi 5.0 Complete Bundle Update 1.3
    • HP ESXi 5.0 Management Bundle 1.2-26
    • HP Utility Bundle ESXi 5.0 v 1.2


  • Finish

Attach the baseline and remediate

  • Attach the baseline to the cluster


  • Select the baseline and scan for patches and extensions
  • The HP bundles that are missing are listed


  • Use Remediate to install the HP bundles

When the hosts are rebooted and the remediation is finished, the Host Compliance overview is 100%.


Using the HP VIB depot is a great way to keep your HP servers up to date with the latest HP bundles.

More information can be found on the HP VIB depot site.

Source: ivobeerens.nl