Category Archives: vCloud

VCAP-CID Study Notes: Objective 3.1

Welcome to the VCAP-CID Study Notes. This is Objective 3.1 in the VCAP-CID blueprint Guide 2.8. The rest of the sections/objectives can be found here.

Bold items have higher importance, and copied text is in italic.

Knowledge

  • Identify network isolation technologies available for a vCloud design.
    • For internal networks
      • VXLAN (dynamically created vSphere port groups)
        • Virtual eXtensible LAN (VXLAN) network pools use a Layer 2 over Layer 3 MAC in UDP encapsulation to provide scalable, standards-based traffic isolation across Layer 3 boundaries (requires distributed switch).
      • vCloud Network Isolation-backed (dynamically created vSphere port groups)
        • vCloud Director Network Isolation-backed (VCD-NI) network pools are backed by vCloud isolated networks. A vCloud isolated network is an overlay network uniquely identified by a fence ID that is implemented through encapsulation techniques that span hosts and provides traffic isolation from other networks (requires distributed switch).
      • VLAN-backed (dynamically created vSphere port groups)
        • VLAN-backed network pools are backed by a range of preprovisioned VLAN IDs. For this arrangement, all specified VLANs are trunked into the vCloud environment (requires distributed switch).
      • vSphere port group-backed (manually created vSphere port groups)
        • vSphere port group-backed network pools are backed by preprovisioned port groups, distributed port groups, or third-party distributed switch port groups.

VCAP CID 3-1-1

 

Skills and Abilities

  • Based on a given logical design, determine appropriate network isolation technologies for a physical vCloud design
    • You will need to base that on the features of each isolation technology.
    • Based on the requirements (and constraints) you will get a logical design for the network, covering both internal and external connections. This needs to be translated into a choice of isolation method, and you could end up using all of them if the use cases require it.
  • Based on a given logical design, determine network service communication requirements (DNS, LDAP, IPv6 and NTP) for a physical vCloud design
  • Analyze communication requirements for a given application.
    • This is based either on a new internal (multi-tier) vCloud application, or on single applications that need external access from a routed network.
      • Multi-tier applications are workloads consisting of multiple separate virtual machines, each with a role within the application: for example, a web service with a web front end, an application server to process requests from the front end, and a database server to store the data processed by the application server.
        • Each of these servers has communication requirements that must be met so the application works and is secure (a small example of such requirements follows after this list).
      • Single virtual machines running in a vCloud can also have various communication requirements, like inbound HTTP access, LDAP access for authentication, file-level access to a file server, and so on. The list is really anything you can think of, as all applications need to communicate with other applications at some point.
    • This can also apply to workloads that will be migrated into a vCloud instance. In that case a dependency list of applications and their servers is needed. Unfortunately most organizations don't have software that maps this, but VMware has several options for customers if they need to map out their current workload dependencies.
      • VMware Application Dependency Planner
        • A tool for VMware Partners to use to map dependencies for both virtual and physical systems
          • It is agentless and uses the port mirroring features available in vSphere vSwitches to create a dependency map.
      • vCenter Operations Manager Infrastructure Navigator
        • A tool to map out dependencies in virtual environments. Part of the vCOps Advanced packages.
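To make the communication-requirements idea concrete, here is a minimal Python sketch of a communication matrix for a hypothetical three-tier vApp. All names, ports and flows are illustrative assumptions, not taken from any particular design or VMware document.

    # Hypothetical communication matrix for a three-tier vApp (web / app / db).
    # Every source, destination and port below is an assumption for illustration only.
    required_flows = [
        # (source,       destination,    protocol, port, purpose)
        ("external",     "web-frontend", "TCP",    443,  "inbound HTTPS from the routed org vDC network"),
        ("web-frontend", "app-server",   "TCP",    8080, "application requests"),
        ("app-server",   "db-server",    "TCP",    1433, "database queries"),
        ("app-server",   "ldap-server",  "TCP",    389,  "authentication lookups"),
        ("all-tiers",    "dns-server",   "UDP",    53,   "name resolution (NTP on UDP 123 as well)"),
    ]

    for src, dst, proto, port, why in required_flows:
        print(f"{src:>13} -> {dst:<13} {proto}/{port:<5} {why}")

A matrix like this translates directly into vApp network layout and edge firewall rules later in the design.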
  • Given an application security profile, determine the required vShield edge services (static routing, IPSEC VPN, IP masquerading, NAT, DHCP, etc.).
    • The vShield Edge services below are only available on routed networks, with the exception of DHCP, which can also be used on internal organization networks.
      • Static Routing
        • You can configure an edge gateway to provide static routing services. After you enable static routing on an edge gateway, you can add static routes to allow traffic between vApp networks routed to organization vDC networks backed by the edge gateway.
      • IPSEC VPN
        • You can create VPN tunnels between organization vDC networks on the same organization, between organization vDC networks on different organizations, and between an organization vDC network and an external network
      • IP masquerading
        • You can configure certain vApp networks to provide IP masquerade services. Enable IP masquerading on a vApp network to hide the internal IP addresses of virtual machines from the organization vDC network.
        • When you enable IP masquerade, vCloud Director translates a virtual machine’s private, internal IP address to a public IP address for outbound traffic.
      • NAT (SNAT and DNAT)
        • A source NAT rule translates the source IP address of outgoing packets on an organization vDC that are being sent to another organization vDC network or an external network.
        • A destination NAT rule translates the IP address and port of packets received by an organization vDC network coming from another organization vDC network or an external network.
      • DHCP
      • Load Balancer
        • Edge gateways provide load balancing for TCP, HTTP, and HTTPS traffic. You map an external, or public, IP address to a set of internal servers for load balancing. The load balancer accepts TCP, HTTP, or HTTPS requests on the external IP address and decides which internal server to use. Port 809 is the default listening port for TCP, port 80 is the default port for HTTP, and port 443 is the default port for HTTPS.
      •  Firewall
        • Firewall rules are enforced in the order in which they appear in the firewall list. You can change the order of the rules in the list. When you add a new firewall rule to an edge gateway, it appears at the bottom of the firewall rule list. To enforce the new rule before an existing rule, reorder the rules.
    • Application security profiles can be very different, and it's best to know what these services can do so you can determine which ones to configure. A small sketch of how SNAT and DNAT rewrite traffic follows below.
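As a side note on the NAT bullets above, here is a minimal Python sketch of how an edge gateway's SNAT and DNAT rules rewrite address information. The internal and external addresses and the port mapping are made up; this illustrates the concept, not the vCloud Director API.

    # Minimal sketch of SNAT/DNAT behaviour. All addresses, ports and mappings are
    # made-up assumptions used only to illustrate the translations described above.
    def snat(packet, internal_prefix="192.168.10.", external_ip="203.0.113.10"):
        """Outbound: translate a private source IP to the edge's external IP."""
        if packet["src"].startswith(internal_prefix):
            packet = dict(packet, src=external_ip)
        return packet

    def dnat(packet, external_ip="203.0.113.10", port_map=None):
        """Inbound: translate the external IP/port to an internal server IP/port."""
        port_map = port_map or {443: ("192.168.10.5", 443)}
        if packet["dst"] == external_ip and packet["dport"] in port_map:
            new_dst, new_port = port_map[packet["dport"]]
            packet = dict(packet, dst=new_dst, dport=new_port)
        return packet

    outbound = {"src": "192.168.10.5", "dst": "198.51.100.20", "dport": 80}
    inbound  = {"src": "198.51.100.20", "dst": "203.0.113.10", "dport": 443}
    print(snat(outbound))  # source becomes 203.0.113.10
    print(dnat(inbound))   # destination becomes 192.168.10.5, port 443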
  • Given security requirements, determine firewall configuration.
    • This involves configuring the Edge firewall to fulfill the security requirements of an application.
    • Just keep in mind that the firewall rules are enforced in the order in which they appear in the firewall list, so make sure the order is correct 🙂 (don't allow all first and then deny some); see the sketch below.
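To illustrate why rule order matters, here is a minimal Python sketch of first-match rule evaluation, assuming the edge firewall stops at the first matching rule as described above. The rules and addresses are hypothetical.

    # First-match firewall evaluation: the order of the rules decides the outcome.
    # Rules and addresses are hypothetical examples.
    rules = [
        {"action": "allow", "src": "any",      "dst": "10.0.0.5", "port": 443},
        {"action": "deny",  "src": "10.0.1.7", "dst": "any",      "port": "any"},
    ]

    def matches(rule, packet):
        return all(rule[k] in ("any", packet[k]) for k in ("src", "dst", "port"))

    def evaluate(packet, rules):
        for rule in rules:              # checked top-down, first match wins
            if matches(rule, packet):
                return rule["action"]
        return "deny"                   # default action if nothing matches

    pkt = {"src": "10.0.1.7", "dst": "10.0.0.5", "port": 443}
    print(evaluate(pkt, rules))                  # "allow" - the allow rule is hit first
    print(evaluate(pkt, list(reversed(rules))))  # "deny"  - same rules, different order

The same logic shows why an allow-all rule at the top of the list makes every rule below it dead weight.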
  • Given compliance, application and security requirements, create a vApp network design.
    • Instead of having a great time drawing all available configurations of vApp designs, I'll tell you to read pages 56 to 59 in the vCAT Architecting a VMware vCloud document, where most of the network designs are explained (with pictures).
  • Given compliance, application and security requirements, create a private vCloud network design.
    • This really can't be explained any better than on pages 65-66 in the vCAT Architecting a VMware vCloud document.
    • The thing is, you really need to know how you can use the different networking features of vCloud (direct, routed, and vApp networks) to be able to create a vCloud network design.
    • Also the Private VMware vCloud Implementation Example is a great reference document for vCloud designs.
    • And as with most physical designs, it works as an expansion of the logical design, adding information on physical layout and attributes.
  • Given compliance, application and security requirements, create a public vCloud network design.
    • Same goes for this one: pages 64-65 in the vCAT Architecting a VMware vCloud document.
    • Like in the previous bullet the Public VMware vCloud Implementation Example is a great reference.

 

VCAP-CID Study Notes: Objective 2.5

Welcome to the VCAP-CID Study Notes. This is Objective 2.5 in the VCAP-CID blueprint Guide 2.8. The rest of the sections/objectives can be found here.

Bold items have higher importance, and copied text is in italic.

Knowledge

  • Compare and contrast vCloud allocation models.
  • Identify storage constraints for an Organization Virtual Datacenter (VDC)
    • When configuring storage for an Organization Virtual Datacenter you can set a storage limit in GB. This is the storage used by VMs and catalog items in the organization vDC.
    • When using Fast Provisioning there are constraints regarding the usage of Shadow VM’s and their Linked Clones.
      • When a VM is created with Fast Provisioning a Shadow VM is created. The VM is then created as a linked clone from that Shadow VM.
      • This table (from the vCAT Architecting a VMware vCloud document) shows how placement of Linked Clones behaves:

VCAP CID  2-5-1

 

 

Skills and Abilities

  • Determine applicable resource pool, CPU and memory reservations/limits for a vCloud logical design.
    • The allocation pool types do most of that configuration automatically, and it should not be changed using the vSphere Client.
    • But you could create sub-resource pools for each type of allocation model.
  • Determine the impact of allocation model performance to a vCloud logical design.
    • This is based on the allocation model used:
      • Reservation Pool
      • Allocation Pool
        • Since the Allocation Pool is based on a resource pool reservation, the same concepts apply here as to the Reservation Pool. The only difference is that users can't change the limits, reservations and shares at the VM level.
        • Here you don't have to worry that much, as the VMs will use the configured CPU/memory capacity, with a percentage of it reserved as VMs are powered on.
      • Pay-as-you-Go
        • A default resource pool with no configuration is created, but the VMs have limits based on the configured vCPU speed and reservations based on the % setting. So a VM with 2 vCPUs would have a limit of twice the configured vCPU speed (a small worked sketch follows after this list).
        • CPU limits at a VM level can result in high CPU ready times. Esxtop shows this as %RDY and %MLMTD; %MLMTD is the percentage of time the VM was ready to run but was not scheduled because that would violate the CPU limit setting. %MLMTD should be 0%.
        • Memory reservation at a VM level is covered in this blog post from Frank Denneman: http://frankdenneman.nl/2009/12/08/impact-of-memory-reservation/
        • Here the performance penalty is mostly based on the vCPU speed configured at the creation of the Organization vDC.
          • Too small and you'll have a lot of slow VMs. You could fix this by adding more vCPUs, thereby increasing the limit, but then you might end up with VMs with too many vCPUs (creating a vCPU scheduling war).
          • If the vCPU speed is set to the MHz rating of the actual hosts used, all the VMs will basically behave like they don't have a limit. But that means you will need to give the Organization a huge quota to work with to be able to power on all the VMs, or leave it unlimited.
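Here is a minimal Python sketch of the Pay-as-you-Go per-VM math as I read the bullets above. The vCPU speed and guarantee percentages are example inputs (not vCloud Director defaults), and the function is purely illustrative, not any VMware API.

    # Sketch of Pay-as-you-Go per-VM limits/reservations, assuming:
    #   CPU limit          = vCPUs x configured vCPU speed
    #   CPU reservation    = CPU limit x CPU % guarantee
    #   memory limit       = configured memory
    #   memory reservation = configured memory x memory % guarantee
    def payg_vm_settings(n_vcpus, vm_memory_mb,
                         vcpu_speed_mhz=1000,   # "vCPU speed" set on the org vDC (example)
                         cpu_guarantee=0.20,    # CPU % guarantee (example)
                         mem_guarantee=0.20):   # memory % guarantee (example)
        cpu_limit_mhz = n_vcpus * vcpu_speed_mhz
        cpu_reservation_mhz = cpu_limit_mhz * cpu_guarantee
        mem_limit_mb = vm_memory_mb
        mem_reservation_mb = vm_memory_mb * mem_guarantee
        return cpu_limit_mhz, cpu_reservation_mhz, mem_limit_mb, mem_reservation_mb

    # A 2 vCPU / 4 GB VM with a 1000 MHz vCPU speed gets a 2000 MHz CPU limit:
    print(payg_vm_settings(2, 4096))   # (2000, 400.0, 4096, 819.2)

If the vCPU speed is set low, the CPU limit above is what throttles the VM and drives up %RDY/%MLMTD.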
  • Determine the impact to a given billing policy based on a selected allocation model.
    • There is no need for me to write about this as this subject has been covered in this excellent post from Eiad Al-Aqqad.
  • Given service level requirements, determine appropriate allocation model(s).
    • First we need to point out that service levels can be different based on what they should cover:
      • Availability – Are the systems running? Based on the uptime of the systems in question.
      • Backups – RTO & RPO
      • Serviceability – Initial response time – initial resolution time
      • Performance SLA – Need a certain amount of performance – 15K disks, SSD disks, etc.
      • Compliance – Logging, ensuring the infrastructure is compliant with standards (PCI-DSS, etc.)
      • Operations – Time when users can be added (if manual)
      • Billing – How long billing information is kept; depends on local law.
    • Each allocation model has its caveats regarding service levels:
      • Reservation Pool
        • DRS affinity rules cannot be set by users in the default vCloud UI and would need to be "spliced" in, perhaps as part of a custom UI (Objective 2.4 has a link to a great example: http://vniklas.djungeln.se/2012/06/21/vm-affinity-when-using-vcloud-director-and-vapps/)
        • If the service level concerns the amount of resources available, the Reservation Pool has all of its resources reserved.
        • The availability SLA is the same for each cluster of ESXi hosts and does not differ between allocation models.
      • Allocation Pool
        • If you are using elastic mode, your VMs might be running in two separate DRS clusters, so you will need to keep that in mind.
        • If the service level concerns the amount of resources available, the Allocation Pool has part of its resources reserved.
      • Pay-as-you-Go
        • If the service level concerns the amount of resources available, PAYG has part of its resources reserved, but it is the most likely to have performance problems regarding CPU.
  • Given customer requirements, determine an appropriate storage provisioning model (thin/fast).
    • Thin provisioning is just that: VM disks use less space on the datastores. So if space efficiency is a requirement, great.
    • Fast provisioning, as I mentioned before, uses linked clone technology to create clones that read from a single shadow VM. Each VM reads from that master disk but writes to its own delta disk. Great for creating multiple VMs at once, as they don't need to clone the whole disk of the template.
  • Given a desired customer performance level, determine a resource allocation configuration.
    • Resource allocation configuration means the allocation models and how they are configured.
      • Reservation Pool
        • Customer gets resources reserved and can control how those resources are divided between workloads in that pool.
      • Allocation Pool
        • Customer gets a part of the resource reserved with a chance to burst to a certain amount.
      • Pay-as-you-Go
        • Customer gets a VM based reservation and limited vCPU power. Great for Test/Dev or dynamic workloads (meaning created and then deleted after a short period of time)
    • Performance levels can also mean using different performance tiers (perhaps with different levels of service)
      • You can create different Provider vDCs with different CPU speeds and HA configurations, and perhaps add an SSD caching solution, to create different tiers.
      • You can also offer storage tiers, which could be based on different kinds of spinning disks, e.g. 10K, 15K and 7.2K, all depending on the storage array and protocol used.
    • If we create an example:
      • 3 Tiers of Provider vDC’s
        • Gold: Reservation Pool – E7 Intel processors – High speed Memory – HA at N+2 – SSD caching enabled.
        • Silver: Allocation Pool- E5 Intel processors – High speed memory – HA at N+1
        • Bronze: PAYG – E5 Intel Processors – High speed memory – HA turned off
      • Storage
        • Gold: 15K HDD + SSD caching in the storage array
        • Silver: 10K HDD
        • Bronze: 7.2K HDD

VCAP-CID Study Notes: Objective 2.4

Welcome to the VCAP-CID Study Notes. This is Objective 2.4 in the VCAP-CID blueprint Guide 2.8. The rest of the sections/objectives can be found here.

Bold items have higher importance, and copied text is in italic. Please note that this post is one of the larger ones in this series.

Knowledge

  • Identify constraints of vSphere cluster sizing
    • Each vSphere cluster can only have 32 hosts, but Provider vDCs can be elastic, so Organizations can use multiple clusters.
    • This is from ESXi 5.1 Maximums:

VCAP CID 2-4-1

  • Identify constraints of Provider/Organization Virtual Datacenter relationships
    • You can only create 32 Resource Pools (Organization vDCs) for the same Organization. An Organization can have more than one Organization vDC backed by a single Provider vDC.
    • Elastic mode allows organizations to use multiple clusters (each of which is a resource pool)
    • This piece of advice is always good to know, but it might be too strict if you have a good estimate of the growth of the environment or use elastic Provider vDCs:
      • As the number of hosts backing a provider virtual data center approaches the halfway mark of cluster limits, implement controls to preserve headroom and avoid reaching the cluster limits. For example, restrict the creation of additional tenants for this virtual data center and add hosts to accommodate increased resource demand for the existing tenants.
  • Identify capabilities of allocation models
  • Explain vSphere/vCloud storage features, functionality, and constraints.
    • vSphere storage features include (among others):
      • Storage IO control
        • Supported in vCloud environment, as this is a feature of the vSphere environment and doesn’t have a constraint on a vCloud design.
        • I will not be explaining these vSphere features at length, as I assume people know how they work and how they might impact a design.
      • Storage DRS (Storage vMotion)
        • Supported in vCloud Director 5.1
      • Storage Clusters
        • Supported in vCloud Director 5.1. You can add a Storage Cluster in the vCloud director administrative page.
        • I recommend setting the same Storage Policy (Profile in 5.1) for each Storage Cluster
        • Each Cluster can only contain 32 Datastores, but a Storage Policy (Profile in 5.1)  can include multiple datastores from multiple Storage Clusters.
          • So VM’s for the same Organization could reside in two different Datastore Clusters.
      • vSphere Flash Read Cache (vSphere 5.5)
      • vSphere Profile Driven Storage
      • All of these features are supported in vCloud environments, except vFRC in 5.1; vFRC is supported in vCloud Director 5.5.
      • Please note that only 64 ESXi hosts can access the same datastore at any given time, so in a large environment you might run into that constraint.
      • If you are not familiar with vSphere storage features, please make sure to catch up on that subject.
    • vCloud storage features include:
      • The only real storage features used in vCloud Director are:
        • Thin-provisioning
        • Fast-provisioning
        • Snapshots
          • A vSphere feature, but it's capped in vCloud Director to only one snapshot per VM.
          • Other capabilities include:
            • One snapshot per virtual machine is permitted.
            • NIC settings are marked read-only after a snapshot is taken.
            • Editing of NIC settings is disabled through the API after a snapshot is taken.
            • To take a snapshot, the user must have the vAPP_Clone user right.
            • Snapshot storage allocation is added to Chargeback.
            • vCloud Director performs a storage quota check.
            • REST API support is provided to perform snapshots.
            • Virtual machine memory can be included in the snapshot.
            • Full clone is forced on copy or move of the virtual machine, resulting in the deletion of the snapshot (shadow VMDK).
  • Explain the relationship between allocation models, vCloud workloads and resource groups.
    • First I want to recommend everyone read this document:
    • You should know VMware changed how CPU allocation works in Allocation Pools between 5.1 RTM (810718) and 5.1.2 (1068441).
      • In 810718 (5.1 RTM)
        • When configuring an organization, neither the limit nor the reservation of the created RP was set at creation time.
        • 20 GHz capacity and 50% guarantee, with 1 vCPU = 2 GHz: when 1 VM is powered on, the RP limit is set to 2 GHz and the reservation to 1 GHz. So you only have 10 vCPUs available in your environment before you hit that 20 GHz cap and further VMs can't be powered on.
        • And when using a lower number for the vCPU speed, let's say 400 MHz, you get VMs that are limited in available CPU at first, as the RP limit is incremented in 400 MHz steps: the first VM has 400 MHz, 2 VMs have 800 MHz, 3 VMs have 1200 MHz, and so on.
      • In 868405 (5.1.1)
        • When configuring an organization, the limit of the created RP was set at creation, so with, say, 20 GHz capacity and 50% guarantee the resource pool would have a 20 GHz limit. No reservation was set on the RP; that was done when a VM was powered on.
        • 20 GHz capacity and 50% guarantee, with 1 vCPU = 2 GHz: when 1 VM is powered on, the RP reservation is set to 1 GHz. You still only have 10 vCPUs available.
        • But now if you used lower vCPU numbers you could create VMs as you wanted: 400 MHz per vCPU would allow you to create 50 vCPUs for a 20 GHz capacity RP.
        • The first VMs created now have the whole CPU capacity to use, so they are not as constrained.
        • But this means that if the Allocation Pool was elastic (spanning multiple clusters), each RP in each cluster would have the limit set to the initial capacity, allowing organizations to use more resources than were initially configured.
        • Massimo Re Ferre’ has a great post on what changed between 5.1 and 5.1.1: http://it20.info/2012/10/vcloud-director-5-1-1-changes-in-resource-entitlements/
      • In 1068441 (5.1.2)
    • Allocation Pool (as it works in 5.1.3)
      • What kind of resource pool does this pool create?
        • A sub-resource pool under the Provider vDC resource pool. The pool is configured with the configured CPU capacity as its limit, leaving the CPU reservation unchanged. The memory limit and reservation are also left unchanged (with Expandable Reservation and Unlimited selected as well).
      • What happens when a virtual machine is turned on?
        • When a VM is turned on, the sub-resource pool's memory limit is left unchanged (with Expandable Reservation and Unlimited selected), and its memory reservation is increased by the VM's configured memory size times the percentage guarantee for that organization vDC.
          • Please note that even though the limit is not set on the resource pool, vCloud Director will not power on VMs that would break the memory capacity configured for the pool.
        • The CPU reservation is increased by the number of vCPUs configured for the virtual machine, times the vCPU speed specified at the organization vDC level, times the percentage guarantee factor for CPU set at the organization vDC level. The virtual machine itself is reconfigured with its memory and CPU reservations set to zero and is then placed (a worked sketch of this math follows after this list).
      • Does this allocation model have any special features?
        • Elasticity: Can span multiple Provider Resource pools.
    • Pay-as-you-Go (EDITed)
      • What kind of resource pool does this pool create?
        • A sub-resource pool is created with zero reservation and an unlimited limit.
      • What happens when a virtual machine is turned on?
        • When a VM is turned on, the VM's memory limit is increased by the VM's configured memory size, and its reservation is increased by the VM's configured memory size times the percentage guarantee for that organization vDC. The resource pool reservation is also increased by the same amount plus the VM overhead.
        • The CPU limit on the VM is increased by the number of vCPUs the virtual machine is configured with, times the vCPU frequency specified at the organization vDC level, and the CPU reservation is increased by the number of vCPUs configured for the virtual machine, times the vCPU speed specified at the organization vDC level, times the percentage guarantee factor for CPU set at the organization vDC level. The resource pool reservation is also increased by the same amount.
      • Does this allocation model have any special features?
        • No resources are reserved ahead of time, so VMs might fail to power on if there aren't enough resources.
    • Reservation Pool
      • What kind of resource pool does this pool create?
        • A sub-resource pool is created with the limit and reservation set to the values configured at the organization vDC level.
      • What happens when a virtual machine is turned on?
        • Reservation and limit are not modified. The organization can change these settings at a per-VM level with this allocation model.
      • Does this allocation model have any special features?
        • Cannot be elastic across multiple Provider resource pools.
        • Will fail to create if the resources in the Provider resource pool are insufficient.
        • Users can set shares, limits and reservations on virtual machines.
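To tie the Allocation Pool description above to numbers, here is a minimal Python sketch of how the sub-resource pool reservation grows as VMs power on, following the memory and CPU formulas quoted above (the Pay-as-you-Go equivalent is sketched earlier in the Objective 2.5 notes). The org vDC settings and VM sizes are example values, and the function name is mine, not part of any VMware API.

    # Allocation Pool: reservation added to the org vDC sub-resource pool per powered-on VM,
    # assuming:
    #   CPU reservation    += vCPUs x org vDC vCPU speed x CPU % guarantee
    #   memory reservation += VM configured memory x memory % guarantee
    def allocation_pool_increment(n_vcpus, vm_memory_mb, vcpu_speed_mhz,
                                  cpu_guarantee, mem_guarantee):
        cpu_reservation_mhz = n_vcpus * vcpu_speed_mhz * cpu_guarantee
        mem_reservation_mb = vm_memory_mb * mem_guarantee
        return cpu_reservation_mhz, mem_reservation_mb

    # Example org vDC: 1 GHz vCPU speed, 50% CPU guarantee, 75% memory guarantee.
    # Powering on a 2 vCPU / 8 GB VM adds 1000 MHz and 6144 MB to the pool reservation:
    print(allocation_pool_increment(2, 8192, 1000, 0.50, 0.75))   # (1000.0, 6144.0)

The pool's CPU limit stays at the configured capacity, which is why VMs eventually fail to power on once that configured capacity is consumed.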

 

Skills and Abilities

  • Given a set of vCloud workloads, determine appropriate DRS/HA configuration for resource clusters.
    • These are the available HA configurations:
      • Host failures tolerated
      • % of Cluster resources reserved
      • Specify Failover Host
    • This really depends on the allocation model type that will be used with the cluster; let's say a whole cluster is using the same allocation type (for simplicity's sake)
      • Reservation Pool
        • The resource pools will have reservations and limits. The VM reservations and shares will be controlled by the users of that organization, so I recommend using the % based HA mode. That will make HA take the reservation of each VM into account when calculating the Current Failover Capacity.
        • If you used Host Failures Tolerated at its default setting, it would use the VM with the largest reserved CPU and memory as the slot size. You would need to know how many resources the users are going to reserve to be able to manually set those slot sizes with advanced settings, so it's not a very flexible option.
      • Allocation Pool
        • The resource pools will have reservations and limits based on the configuration of the Organization. The VMs will not be configured with a limit, so the slot size will be the default of 32 MHz for CPU and 0 MB + overhead for memory.
        • Allocation Pool VMs can also be very different in size, and again you will need a good (or rather, great) idea of the sizes of the VMs that will be running there to be able to use advanced settings for Host Failures Tolerated.
        • A % based HA policy is probably the better choice of the two.
      • Pay-as-you-Go
        • In PAYG the VMs have a limit set to the configured vCPU size, but the memory reservation depends on the % guaranteed setting at the creation of the organization vDC. So you will have a very predictable CPU limit (or slot size), but the memory slot size will depend on the size of the largest VM in the cluster.
        • So if you have large VMs with, say, 32 GB of memory and any percentage guaranteed, let's say 20% (the default), the slot size will be 1 GHz and 6.4 GB. That will not be a good slot size, as most VMs will be much smaller.
        • One way of making Host Failures Tolerated acceptable is to use no % guarantee, so every VM must fight for the memory in the resource pool.
        • But it is best to use % of Cluster Resources, since it is the de facto standard in most HA clusters because it delivers the most flexibility of the three options.
          • It takes all available resources in the cluster and adds them up, then subtracts the % configured in the HA settings. HA then adds up all reserved resources in use (powered-on VMs only), using a 32 MHz default for each VM without a CPU reservation; memory is reservation + overhead. These values are then used to calculate the failover capacity (a small worked sketch follows below).
          • It's best to let the experts explain this, as they wrote a book on it, so I recommend reading the vSphere 5.1 Clustering Deepdive book.
          • Or read their blog on HA. This blog post from Duncan Epping explains the % based HA very well: http://www.yellow-bricks.com/vmware-high-availability-deepdiv/
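Here is a rough Python sketch of the % of Cluster Resources arithmetic as described in the bullets above; the host counts, reservations and overheads are made-up example numbers, and the real calculation is covered properly in the Clustering Deepdive links.

    # "% of cluster resources reserved" admission control, as described above:
    # subtract the configured failover percentage from the cluster totals, then add up
    # the reservations of powered-on VMs (32 MHz minimum per VM for CPU,
    # reservation + overhead for memory). All numbers below are illustrative.
    def remaining_failover_capacity(total_cpu_mhz, total_mem_mb, reserved_pct,
                                    cpu_reservations_mhz, mem_reservations_mb,
                                    mem_overheads_mb):
        avail_cpu = total_cpu_mhz * (1 - reserved_pct)
        avail_mem = total_mem_mb * (1 - reserved_pct)
        used_cpu = sum(max(r, 32) for r in cpu_reservations_mhz)
        used_mem = sum(r + o for r, o in zip(mem_reservations_mb, mem_overheads_mb))
        return avail_cpu - used_cpu, avail_mem - used_mem

    # 4 hosts of 20 GHz / 128 GB each, 25% reserved for failover, three powered-on VMs:
    print(remaining_failover_capacity(
        80000, 4 * 131072, 0.25,
        cpu_reservations_mhz=[0, 1000, 500],
        mem_reservations_mb=[0, 2048, 4096],
        mem_overheads_mb=[100, 150, 200]))   # spare MHz and MB left for new reservations

With % based admission control it is each VM's reservation that matters, which is why the chosen allocation model has such an impact on failover capacity.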
    • To be able to use affinity/anti-affinity rules you will need to use PowerCLI or other automation features to make sure certain vApps are deployed to different (or the same) hosts.
    • DRS configuration
      • First I must mention that you HAVE to enable DRS on the clusters backing a vCloud Provider vDC, as that is the only way resource pools can be created.
      • And it's best practice to use a different Provider vDC for each type of allocation model
        • A Pay-as-you-Go cluster, an Allocation Pool cluster and a Reservation Pool cluster
        • But let's be realistic, that will not be the case for many installations. Not everybody has the budget to create 3 different clusters. You can create sub-resource pools for each type, but it complicates DRS scheduling immensely, as Frank Denneman explains in this blog post:
      • DRS moves VMs around the ESXi hosts in a cluster based on the resources used in the cluster. That is a very simplified description of DRS, as it uses a lot of different metrics and algorithms to calculate if and when a VM should be moved.
      • I’m not going to explain in detail how DRS works as you can read up on that in various books and documentation.
      • As you might expect, using different resource pools can affect DRS calculations. You might have a Reservation Pool using half of the resources and an Allocation Pool or PAYG pool for the rest.
      • When configuring DRS settings it's best in most cases to set the mode to fully automated and just let DRS do its thing.
      • Read this to get a good idea on how DRS works, and of course if you really want to know more you should pick up Duncan’s and Frank’s book.
  • Given a set of vCloud workloads, determine appropriate host logical design.
    • So this means creating a logical design for the hosts that are a part of the Resource cluster.
    • Here you have the configuration of the ESXi hosts for the resource clusters, and it depends on the projected number of workloads, their resource usage, and the availability requirements.
      • Chapter 4.4 in the vCAT Architecting a VMware vCloud document describes this process.
    • This table from a VMware Partner SET document is a good overview of what you would like to have as part of a host logical design:

VCAP CID 2-4-2

  • Given a set of vCloud workloads, determine appropriate vCloud logical network design.
    • This section will require details on networking and networking security configuration for the ESXi hosts supporting the resource clusters.
      • This includes details on MTU sizes when using VCD-NI or VXLAN.
      • Increasing the number of ports on a vDS from 128 to 4096 to allow vCD to dynamically create port groups.
      • An overview of vSS and vDS usage on the ESXi hosts for each cluster (if different)
      • vSS configuration: # of ports, # network adapters, Security settings, Port Groups with VLANs and security settings.
      • vDS configuration: # of ports, # network adapters, Security settings, Port Groups with VLANs and security settings and Bindings
    • Chapter 4.4 in the vCAT Architecting a VMware vCloud document helps with sizing the environment to be able to create the logical design.
  • Given a set of vCloud workloads, determine appropriate vCloud logical storage design.
    • Chapter 4.4 in the vCAT Architecting a VMware vCloud document helps with sizing the environment to be able to create the logical design.
    • Here you need to state how much storage is needed by the projected workloads, and if different tiers of storage will be offered, these tiers need to be explained.
    • If available, create a list of VMs with their configured disk sizes, memory sizes, safety margins, and the average number and size of snapshots. I know this is almost impossible for public clouds, but it's better to be able to make a decision on something tangible rather than say "oh, because I always used 2TB datastores".
      • Let's say the storage vendor being used says it only wants 36 VMs per datastore (I couldn't imagine why).
      • You have created an estimate of the number of VMs that will be deployed/migrated into the environment and their disk sizes.
      • From those numbers you can size the datastores (a small sizing sketch follows below).
    • If you will be using Datastore Clusters, they should be quite proficient at moving workloads around, so you only need to make sure you have enough resources in the cluster. The 32-datastores-per-cluster restriction and the shadow VMs used for Fast Provisioning might affect that design, though. And plan ahead for projected growth as well.
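Following the sizing example above, here is a quick Python sketch assuming the hypothetical vendor cap of 36 VMs per datastore; the VM count, average disk size and safety margin are made-up inputs you would replace with your own estimates.

    # Rough datastore count/size estimate from an estimated VM inventory.
    # All inputs are illustrative assumptions.
    import math

    vm_count = 500                  # estimated VMs to be deployed/migrated
    avg_vm_disk_gb = 60             # average configured disk size per VM
    safety_margin = 0.20            # headroom for snapshots, swap and growth
    max_vms_per_datastore = 36      # the (hypothetical) vendor recommendation above

    datastores_needed = math.ceil(vm_count / max_vms_per_datastore)
    capacity_per_datastore_gb = math.ceil(
        vm_count * avg_vm_disk_gb * (1 + safety_margin) / datastores_needed)

    print(datastores_needed, capacity_per_datastore_gb)   # 14 datastores of roughly 2572 GB

That gives you datastore sizes based on something tangible instead of a habit like "2TB datastores".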
  • Given a set of vCloud workloads, determine appropriate vCloud logical design.
    • The vCloud logical design is an overview of the whole environment. The vCAT Architecting a VMware vCloud document has a great picture that shows just that:

VCAP CID 2-4-3