MDC-B353: Comparing Windows Server Hyper-V and VMware vSphere

[Figure: Hyper-V release timeline]
• June 2008 – Windows Server 2008 Hyper-V
• October 2008 – Microsoft Hyper-V Server 2008
• October 2009 – Windows Server 2008 R2: Live Migration, Cluster Shared Volumes, Processor Compatibility, Hot-Add Storage
• February 2011 – 2008 R2 SP1 Improvements: Dynamic Memory, RemoteFX
• September 2012 – Windows Server 2012: Huge Scalability, Storage Spaces, Metering & QoS, Migration Enhancements, Extensibility, Performance & Hardware Offloading, Network Virtualization, Replication
Scalability & Performance
Security & Multitenancy
Flexible Infrastructure
High Availability & Resiliency
| | System Resource | Hyper-V (2008 R2) | Hyper-V (2012 R2) | Improvement Factor |
| Host | Logical Processors | 64 | 320 | 5× |
| Host | Physical Memory | 1TB | 4TB | 4× |
| Host | Virtual CPUs per Host | 512 | 2,048 | 4× |
| Host | Active VMs per Host | 384 | 1,024 | 2.7× |
| VM | Virtual CPUs per VM | 4 | 64 | 16× |
| VM | Memory per VM | 64GB | 1TB | 16× |
| VM | Guest NUMA | No | Yes | – |
| Cluster | Maximum Nodes | 16 | 64 | 4× |
| Cluster | Maximum VMs | 1,000 | 8,000 | 8× |
| | System Resource | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Host | Logical Processors | 320 | 160 | 160 |
| Host | Physical Memory | 4TB | 32GB¹ | 2TB |
| Host | Virtual CPUs per Host | 2,048 | 2,048 | 2,048 |
| Host | Active VMs per Host | 1,024 | 512 | 512 |
| VM | Virtual CPUs per VM | 64 | 8 | 64² |
| VM | Memory per VM | 1TB | 32GB¹ | 1TB |
| VM | Guest NUMA | Yes | Yes | Yes |
| Cluster | Maximum Nodes | 64 | N/A³ | 32 |
| Cluster | Maximum VMs | 8,000 | N/A³ | 4,000 |

1 Host physical memory is capped at 32GB, so maximum VM memory is also restricted to 32GB.
2 vSphere 5.1 Enterprise Plus is the only vSphere edition that supports 64 vCPUs. Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
3 For clustering/high availability, customers must purchase vSphere.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf, https://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Platform-TechnicalWhitepaper.pdf and http://www.vmware.com/products/vsphere-hypervisor/faq.html
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Virtual Fibre Channel | Yes | Yes | Yes |
| 3rd Party Multipathing (MPIO) | Yes | No | Yes (VAMP)¹ |
| Native 4-KB Disk Support | Yes | No | No |
| Maximum Virtual Disk Size | 64TB VHDX | 2TB VMDK | 2TB VMDK |
| Online Virtual Disk Resize | Yes | Grow Only | Grow Only |
| Maximum Pass-Through Disk Size | 256TB+² | 64TB | 64TB |
| Offloaded Data Transfer | Yes | No | Yes (VAAI)³ |
| Boot from USB | Yes | Yes | Yes |
| Tiered Storage Pooling | Yes | No | No |

1 vStorage API for Multipathing (VAMP) is only available in the Enterprise & Enterprise Plus editions of vSphere 5.1.
2 The maximum size of a physical disk attached to a virtual machine is determined by the guest operating system and the chosen file system within the guest. More recent Windows Server operating systems support disks in excess of 256TB in size.
3 vStorage API for Array Integration (VAAI) is only available in the Enterprise & Enterprise Plus editions of vSphere 5.1.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf and http://www.vmware.com/products/vsphere/buy/editions_comparison.html
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Dynamic Memory | Yes | Yes | Yes |
| Resource Metering | Yes | Yes¹ | Yes |
| Network QoS | Yes | No² | Yes² |
| Storage QoS | Yes | No² | Yes² |

1 Without vCenter, Resource Metering in the vSphere Hypervisor is only available on an individual host-by-host basis.
2 Quality of Service (QoS) is only available in the Enterprise Plus edition of vSphere 5.1.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf and http://www.vmware.com/products/vsphere/buy/editions_comparison.html
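To illustrate the Hyper-V side of this table, the minimal sketch below configures Dynamic Memory, resource metering, and network/storage QoS with the in-box Hyper-V cmdlets. The VM name and all values are placeholders, not part of the original deck.

    # Enable Dynamic Memory on a VM ("VM01" and all values are placeholders)
    Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
        -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 8GB

    # Resource metering: start collecting, then report per-VM usage
    Enable-VMResourceMetering -VMName "VM01"
    Measure-VM -VMName "VM01"

    # Network QoS: cap the vNIC's bandwidth (value is in bits per second)
    Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 100000000

    # Storage QoS (2012 R2): cap the virtual disk at 500 normalized IOPS
    Get-VMHardDiskDrive -VMName "VM01" | Set-VMHardDiskDrive -MaximumIOPS 500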
Layer-2 Network Switch for Virtual Machine Connectivity

Granular In-box Capabilities
• ARP/ND Poisoning (spoofing) protection
• DHCP Guard protection
• Virtual Port ACLs
• Trunk Mode to VMs
• Network Traffic Monitoring
• Isolated (Private) VLANs (PVLANs)
• PowerShell & WMI Interfaces for extensibility (a configuration sketch follows below)

[Figure: a Hyper-V host running the Hyper-V Extensible Switch – each virtual machine's network application connects through a virtual network adapter to the switch, which connects through the physical network adapter to the physical switch]
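A minimal sketch of the in-box capabilities above, using Hyper-V cmdlets that ship with Windows Server 2012 R2; the VM name, addresses, and VLAN IDs are illustrative placeholders.

    # DHCP Guard (and router advertisement guard) on a VM's vNIC
    Set-VMNetworkAdapter -VMName "VM01" -DhcpGuard On -RouterGuard On

    # Virtual port ACL: drop inbound traffic from one remote address
    Add-VMNetworkAdapterAcl -VMName "VM01" -RemoteIPAddress 192.168.1.50 `
        -Direction Inbound -Action Deny

    # Trunk mode to the VM: pass VLANs 10-20 through, native VLAN 10
    Set-VMNetworkAdapterVlan -VMName "VM01" -Trunk `
        -AllowedVlanIdList "10-20" -NativeVlanId 10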
Build Extensions for Capturing, Filtering & Forwarding

Many Key Features
• Extension monitoring & uniqueness
• Extensions that learn VM life cycle
• Extensions that can veto state changes
• Multiple extensions on same switch

Several Partner Solutions Available
• Cisco – Nexus 1000V & UCS-VMFEX
• NEC – ProgrammableFlow PF1000
• 5nine – Security Manager
• InMon – sFlow

[Figure: Hyper-V Extensible Switch architecture – VM NICs and the parent partition's host NIC attach to the virtual switch; between the extension protocol and extension miniport layers sit capture extensions, filtering extensions, and forwarding extensions, with the physical NIC below]
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Extensible vSwitch | Yes | No | Replaceable¹ |
| Confirmed Partner Extensions | 5 | No | 2 |
| Private Virtual LAN (PVLAN) | Yes | No | Yes¹ |
| ARP Spoofing Protection | Yes | No | vCNS/Partner² |
| DHCP Snooping Protection | Yes | No | vCNS/Partner² |
| Virtual Port ACLs | Yes | No | vCNS/Partner² |
| Trunk Mode to Virtual Machines | Yes | No | Yes³ |
| Port Monitoring | Yes | Per Port Group | Yes³ |
| Port Mirroring | Yes | Per Port Group | Yes³ |

1 The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.1 and is replaceable (by partners such as Cisco/IBM) rather than extensible.
2 ARP Spoofing, DHCP Snooping Protection & Virtual Port ACLs require the App component of the VMware vCloud Networking & Security (vCNS) product or a partner solution, all of which are additional purchases.
3 Trunking VLANs to individual vNICs, and Port Monitoring and Mirroring at a granular level, require the vSphere Distributed Switch, which is available in the Enterprise Plus edition of vSphere 5.1.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technicalresources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf, http://www.vmware.com/products/vshieldapp/features.html and http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html
Dynamic VMQ – dynamically span multiple CPUs when processing virtual machine network traffic
IPsec Task Offload – offload IPsec processing from within the virtual machine to the physical network adapter, enhancing performance
Virtual Receive Side Scaling – scale a VM's send & receive side traffic across multiple virtual processors, increasing performance whilst reducing bottlenecks
SR-IOV Support – map a virtual function of an SR-IOV capable physical network adapter directly to a virtual machine
Integrated with NIC hardware for increased performance
• Standard that allows PCI Express devices to be shared by multiple VMs
• More direct hardware path for I/O
• Reduces network latency and CPU utilization for processing traffic, and increases throughput
• SR-IOV capable physical NICs contain virtual functions that are securely mapped to the VM
• This bypasses the Hyper-V Extensible Switch
• Full support for Live Migration

[Figure: SR-IOV traffic flow – the VM network stack can send traffic through the synthetic NIC and the Hyper-V Extensible Switch, or directly through a virtual function (VF) mapped from the SR-IOV NIC]
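A minimal sketch of enabling these offloads on the Hyper-V side; the switch, adapter, and VM names are placeholders.

    # SR-IOV must be enabled when the external switch is created
    New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "Ethernet 1" -EnableIov $true

    # Prefer a virtual function for this VM's adapter; Hyper-V falls back
    # to the synthetic path automatically, e.g. during Live Migration
    Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100

    # Dynamic VMQ weighting and IPsec task offload budget on the same adapter
    Set-VMNetworkAdapter -VMName "VM01" -VmqWeight 100 `
        -IPsecOffloadMaximumSecurityAssociation 512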
In-box Disk Encryption to Protect Sensitive Data

Data Protection, built in
• Supports Used Disk Space Only Encryption (see the sketch below)
• Integrates with TPM chip
• Network Unlock & AD Integration

Multiple Disk Type Support
• Direct Attached Storage (DAS)
• Traditional SAN LUN
• Cluster Shared Volumes
• Windows Server 2012 File Server Share

[Figure: VHDX placement on BitLocker-protected volumes – VHDX on a traditional LUN (E:\VM2), on DAS (F:\VM1), on Cluster Shared Volumes (C:\ClusterStorage\Volume1\VM4), and on a file server share (\\FileServer\VM3)]
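The encryption described here is standard BitLocker applied to the volumes holding the VHDX files. A minimal sketch with the in-box BitLocker cmdlets, assuming the E: volume from the diagram above:

    # Encrypt only used disk space on the volume holding VM storage;
    # the E: mount point matches the VHDX-on-LUN example above
    Enable-BitLocker -MountPoint "E:" -UsedSpaceOnly -RecoveryPasswordProtector

    # Check encryption progress and protection status
    Get-BitLockerVolume -MountPoint "E:"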
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Dynamic Virtual Machine Queue | Yes | NetQueue¹ | NetQueue¹ |
| IPsec Task Offload | Yes | No | No |
| Virtual Receive Side Scaling | Yes | Yes (VMXNet3) | Yes (VMXNet3) |
| SR-IOV with Live Migration | Yes | No² | No² |
| Storage Encryption | Yes | No | No |

1 VMware vSphere and the vSphere Hypervisor support VMq only (NetQueue).
2 VMware's SR-IOV implementation does not support vMotion, HA or Fault Tolerance. DirectPath I/O, whilst not identical to SR-IOV, aims to provide virtual machines with more direct access to hardware devices, network cards being a good example. Whilst on the surface this will boost VM networking performance and reduce the burden on host CPU cycles, in reality there are a number of caveats to using DirectPath I/O:
• Small Hardware Compatibility List
• No Memory Overcommit | No vMotion (unless running certain configurations of Cisco UCS) | No Fault Tolerance
• No Network I/O Control | No VM Snapshots (unless running certain configurations of Cisco UCS)
• No Suspend/Resume (unless running certain configurations of Cisco UCS) | No VMsafe/Endpoint Security support
SR-IOV also requires the vSphere Distributed Switch, meaning customers have to upgrade to the highest vSphere edition to take advantage of this capability. No such restrictions are imposed when using SR-IOV in Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile infrastructure.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf
Comprehensive feature support for virtualized Linux

Significant Improvements in Interoperability
• Multiple supported Linux distributions and versions on Hyper-V
• Includes Red Hat, SUSE, OpenSUSE, CentOS, and Ubuntu

Comprehensive Feature Support
• 64 vCPU SMP
• Virtual SCSI, Hot-Add & Online Resize
• Full Dynamic Memory Support
• Live Backup
• Deeper Integration Services Support

[Figure: Hyper-V architecture – the parent partition hosts the management service, WMI provider, worker processes and configuration store; the Windows kernel provides the virtual service provider and independent hardware vendor drivers, running above Hyper-V and the server hardware]
Duplication of a Virtual Machine whilst Running

Export a clone of a running VM
• Point-in-time image of a running VM exported to an alternate location (see the sketch after the steps below)
• Useful for troubleshooting a VM without downtime for the primary VM

Export from an existing checkpoint
• Export a full cloned virtual machine from a point-in-time, existing checkpoint of a virtual machine
• Checkpoints automatically merged into a single virtual disk

[Figure: live export workflow, VM1/VM2]
1. User initiates an export of a running VM
2. Hyper-V performs a live, point-in-time export of the VM, which remains running, creating the new files in the target location
3. Admin imports the new, powered-off VM on the target host, finalizes configuration and starts the VM
4. With Virtual Machine Manager, the Admin can select a host as part of the clone wizard
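A minimal PowerShell sketch of both export paths; the VM name, checkpoint name, and paths are placeholders:

    # Live export of a running VM (2012 R2); the VM keeps running
    Export-VM -Name "VM01" -Path "D:\Exports"

    # Export a full clone from an existing checkpoint; checkpoints are
    # merged into a single virtual disk in the exported copy
    Export-VMSnapshot -VMName "VM01" -Name "Pre-Patch" -Path "D:\Exports"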
Simplified upgrade process from 2012 to 2012 R2
• Customers can upgrade from Windows Server 2012 Hyper-V to Windows Server 2012 R2 Hyper-V with no VM downtime
• Supports Shared Nothing Live Migration for migration when changing storage locations
• If using an SMB share, migration transfers only the VM running state for faster completion
• Automated with PowerShell (sketched below)
• One-way Migration Only

[Figure: Hyper-V cluster upgrade without downtime – VMs move from 2012 cluster nodes to 2012 R2 cluster nodes]
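The PowerShell automation amounts to a cross-version live migration with Move-VM; a minimal sketch, with hypothetical host and path names:

    # Shared-nothing: move a running VM and its storage to a 2012 R2 host
    Move-VM -Name "VM01" -DestinationHost "HV-R2-Node1" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"

    # If the VM already lives on an SMB share reachable by both hosts,
    # omit -IncludeStorage and only the running state is transferred
    Move-VM -Name "VM02" -DestinationHost "HV-R2-Node1"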
Network Isolation & Flexibility without VLAN Complexity
• Secure Isolation for traffic segregation, without VLANs
• VM migration flexibility & Seamless Integration

Key Concepts
• Provider Address – unique IP addresses routable on the physical network
• VM Networks – boundary of isolation between different sets of VMs
• Customer Address – VM guest OS IP addresses within the VM Networks
• Policy Table – maintains the relationship between the different addresses & networks (a sketch follows the table below)

[Figure: Blue and Red VM networks share the same customer address space (10.10.10.10–10.10.10.12) on hosts with provider addresses 192.168.2.10–192.168.2.14]

| Network/VSID | Provider Address | Customer Address |
| Blue (5001) | 192.168.2.10 | 10.10.10.10 |
| Blue (5001) | 192.168.2.10 | 10.10.10.11 |
| Blue (5001) | 192.168.2.12 | 10.10.10.12 |
| Red (6001) | 192.168.2.13 | 10.10.10.10 |
| Red (6001) | 192.168.2.14 | 10.10.10.11 |
| Red (6001) | 192.168.2.12 | 10.10.10.12 |
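On Windows Server 2012 R2 these policy-table entries can be created with the in-box network virtualization cmdlets; a sketch using the Blue (VSID 5001) example above, with a hypothetical MAC address and VM name:

    # Map a customer address to a provider address for VSID 5001
    New-NetVirtualizationLookupRecord -CustomerAddress "10.10.10.10" `
        -ProviderAddress "192.168.2.10" -VirtualSubnetID 5001 `
        -MACAddress "101010101105" -Rule "TranslationMethodEncap"

    # Place the VM's vNIC into the Blue VM network
    Get-VMNetworkAdapter -VMName "VM01" |
        Set-VMNetworkAdapter -VirtualSubnetId 5001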
Network Isolation & Flexibility without VLAN Complexity
• Network Virtualization using Generic Routing Encapsulation (NVGRE) uses encapsulation & tunneling
• Standard proposed by Microsoft, Intel, Arista Networks, HP, Dell & Emulex
• VM traffic within the same VSID is routable over different physical subnets
• The VM's packet is encapsulated for transmission over the physical network
• Network Virtualization is part of the Hyper-V Switch

[Figure: NVGRE encapsulation – the customer packet (10.10.10.10 -> 10.10.10.11, same customer network & VSID) is wrapped with a GRE key (5001) and an outer MAC/IP header (192.168.2.10 -> 192.168.5.12) so it can cross different physical subnets]

[Figure: Network Virtualization packet flow, Blue1 sending to Blue2 – in each host's Hyper-V Switch, VSID ACL enforcement is applied, then the Network Virtualization layer performs IP virtualization, policy enforcement and routing. The sender resolves 10.10.10.11 via its ARP table (34:29:af:c7:d9:12), builds the inner frame MACB1 -> MACB2 / 10.10.10.10 -> 10.10.10.11, and encapsulates it with VSID 5001 in an outer frame MACP1 -> MACP2 / 192.168.2.10 -> 192.168.5.12]
Bridge Between VM Networks & Physical Networks
• Multi-tenant VPN gateway in Windows Server 2012 R2
• Integral multitenant edge gateway for seamless connectivity
• Guest clustering for high availability
• BGP for dynamic route updates
• Encapsulates & de-encapsulates NVGRE packets
• Multitenant-aware NAT for Internet access
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| VM Live Migration | Yes | No¹ | Yes² |
| VM Live Migration with Compression | Yes | N/A | No |
| VM Live Migration over RDMA | Yes | N/A | No³ |
| Simultaneous Live Migrations (1GbE) | Unlimited⁴ | N/A | 4 |
| Simultaneous Live Migrations (10GbE) | Unlimited⁴ | N/A | 8 |
| Automated Live Migrations (Load) | Yes (VMM) | N/A | Yes (DRS) |
| Automated Live Migrations (Power) | Yes (VMM) | N/A | Yes (DPM) |
| Live Storage Migration | Yes | No⁵ | Yes⁶ |
| Shared Nothing Live Migration | Yes | No | Yes² |

1 Live Migration (vMotion) is unavailable in the vSphere Hypervisor – vSphere 5.1 is required.
2 Live Migration (vMotion) and Shared Nothing Live Migration (Enhanced vMotion) are available in Essentials Plus & higher editions of vSphere 5.1.
3 Unsupported at this time.
4 Within the technical capabilities of the networking hardware.
5 Live Storage Migration (Storage vMotion) is unavailable in the vSphere Hypervisor.
6 Live Storage Migration (Storage vMotion) is available in the Standard, Enterprise & Enterprise Plus editions of vSphere 5.1.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://www.vmware.com/files/pdf/products/vcns/vCloud-Networking-and-Security-Overview-Whitepaper.pdf
http://www.vmware.com/products/datacenter-virtualization/vcloud-network-security/features.html#vxlan, http://cto.vmware.com/wp-content/uploads/2012/09/RDMAonvSphere.pdf
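On the Hyper-V side, the live migration behaviour compared above is host-level configuration; a minimal sketch:

    # Raise the simultaneous live migration limit (bounded only by hardware)
    Set-VMHost -MaximumVirtualMachineMigrations 8

    # Choose the transport: Compression (the 2012 R2 default), or SMB,
    # which can take advantage of RDMA-capable NICs (SMB Direct)
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB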
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Live VM Cloning | Yes | No | Yes¹ |
| Live Migration Upgrades | Yes | N/A | Yes |
| Network Virtualization | Yes | No | vCNS² |
| Network Virtualization Gateway | Yes | No | vCNS² |

1 VM Cloning requires vCenter.
2 VXLAN & Edge Gateway are features of the vCloud Networking & Security product, which is available at additional cost to vSphere 5.1. In addition, it requires the vSphere Distributed Switch, only available in vSphere 5.1 Enterprise Plus.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://www.vmware.com/files/pdf/products/vcns/vCloud-Networking-and-Security-Overview-Whitepaper.pdf
http://www.vmware.com/products/datacenter-virtualization/vcloud-network-security/features.html#vxlan
Integrated Solution for Resilient Virtual Machines
• Massive scalability with support for 64 physical nodes & 8,000 VMs
• VMs automatically failover & restart on physical host outage
• Enhanced Cluster Shared Volumes
• Cluster VMs on SMB 3.0 Storage
• Dynamic Quorum & Witness
• Reduced AD dependencies
• Drain Roles – Maintenance Mode (sketched below)
• VM Drain on Shutdown
• VM Network Health Detection
• Enhanced Cluster Dashboard

[Figure: cluster Dynamic Quorum configuration]
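Drain Roles from the list above maps directly onto the failover clustering cmdlets; a minimal sketch with a hypothetical node name:

    # Drain all roles from a node for maintenance (live-migrates VMs away)
    Suspend-ClusterNode -Name "HV-Node1" -Drain

    # ...perform maintenance, then bring the node back and fail roles back
    Resume-ClusterNode -Name "HV-Node1" -Failback Immediate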
Complete Flexibility for Deploying App-Level HA
• Full support for running clustered workloads on a Hyper-V host cluster
• Guest Clusters that require shared storage can utilize software iSCSI, Virtual FC or SMB
• Full support for Live Migration of Guest Cluster Nodes
• Full support for Dynamic Memory of Guest Cluster Nodes
• Restart Priority, Possible & Preferred Ownership, & AntiAffinityClassNames help ensure optimal operation

[Figure: a guest cluster running on a physical Hyper-V cluster – on a host failure the guest cluster node restarts, and guest cluster nodes are supported with Live Migration]
Guest Clustering No Longer Bound to Storage Topology
• VHDX files can be presented to multiple VMs simultaneously, as shared storage (see the sketch below)
• VM sees a shared virtual SAS disk
• Unrestricted number of VMs can connect to a shared VHDX file
• Utilizes SCSI persistent reservations
• VHDX can reside on a Cluster Shared Volume on block storage, or on file-based storage
• Supports both Dynamic and Fixed VHDX

[Figure: flexible choices for placement of a Shared VHDX]
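A minimal sketch of attaching one shared VHDX to two guest-cluster nodes; VM names and the path are placeholders:

    # The shared VHDX must reside on a CSV or on file-based (SMB 3.0) storage
    Add-VMHardDiskDrive -VMName "GuestNode1" `
        -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName "GuestNode2" `
        -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations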
Ensure Optimal VM Placement and Restart Operations
• Failover Priority ensures certain VMs start before others on the cluster
• Affinity rules allow VMs to reside on certain hosts in the cluster
• AntiAffinityClassNames helps to keep virtual machines apart on separate physical cluster nodes (sketched below)
• AntiAffinityClassNames exposed through VMM as Availability Set

[Figure: a Hyper-V cluster with VMs on each node – Anti-Affinity keeps related VMs apart; upon failover, VMs restart in priority order]
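Failover priority and AntiAffinityClassNames are set on the cluster groups; a sketch with hypothetical VM and class names:

    # Failover priority: 3000 = High, 2000 = Medium, 1000 = Low
    (Get-ClusterGroup -Name "VM01").Priority = 3000

    # Keep two related VMs apart on separate physical cluster nodes
    (Get-ClusterGroup -Name "SQL-VM1").AntiAffinityClassNames = "SQLGuestCluster"
    (Get-ClusterGroup -Name "SQL-VM2").AntiAffinityClassNames = "SQLGuestCluster"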
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Integrated High Availability | Yes | No¹ | Yes² |
| Failover Prioritization | Yes | N/A | Yes⁴ |
| Affinity Rules | Yes | N/A | Yes⁴ |
| NIC Teaming | Yes | Yes | Yes |
| Guest OS Application Monitoring | Yes | N/A | No³ |
| Cluster-Aware Updating | Yes | N/A | Yes⁴ |

1 vSphere Hypervisor has no high availability features built in – vSphere 5.1 is required.
2 VMware HA is built in to Essentials Plus and higher vSphere 5.1 editions.
3 VMware have made APIs publicly available, but actual application monitoring is not included.
4 Features available in all editions that have High Availability enabled.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html and http://www.yellow-bricks.com/2011/08/11/vsphere-5-0-ha-application-monitoring-intro/
| Capability | Hyper-V (2012) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Nodes per Cluster | 64 | N/A¹ | 32 |
| VMs per Cluster | 8,000 | N/A¹ | 4,000 |
| Max Size Guest Cluster (iSCSI) | 64 Nodes | 64 Nodes² | 64 Nodes² |
| Max Size Guest Cluster (Fibre) | 64 Nodes | 5 Nodes | 5 Nodes |
| Max Size Guest Cluster (File Based) | 64 Nodes | 0 Nodes³ | 0 Nodes³ |
| Guest Clustering with Live Migration | Yes | N/A¹ | No⁴ |
| Guest Clustering with Dynamic Memory | Yes | No⁵ | No⁵ |
| Guest Cluster with Shared Virtual Disk | Yes | Yes⁶ | Yes⁶ |

1 High Availability/vMotion/Clustering is unavailable in the standalone vSphere Hypervisor.
2 Guest clusters can be created on vSphere 5.1 using the in-guest iSCSI initiator to connect to the SAN, the same as would be configured in a physical cluster. Support of guest operating systems up to Windows Server 2008 R2 means 16-node clusters are the maximum size on vSphere 5.1.
3 VMware does not support VM Guest Clustering using file-based storage, i.e. NFS.
4 VMware does not support vMotion and Storage vMotion of a VM that is part of a Guest Cluster.
5 VMware does not support the use of Memory Overcommit with a VM that is part of a Guest Cluster.
6 VMware supports Shared VMDK with EagerZeroedThick and doesn't support vMotion/Memory Overcommit of VMs in a Guest Cluster.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf, http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-mscsguide.pdf, http://kb.vmware.com/kb/1037959
Replicate Hyper-V VMs from a Primary to a Replica site
• Affordable in-box business continuity and disaster recovery
• Configurable replication frequencies of 30 seconds, 5 minutes and 15 minutes
• Secure replication across the network
• Agnostic of hardware on either site
• No need for other virtual machine replication technologies
• Automatic handling of live migration
• Simple configuration and management (sketched below)

[Figure: once Hyper-V Replica is enabled, VMs begin replication; once replicated, changes are replicated on the chosen frequency; upon site failure, VMs can be started on the secondary site]
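A minimal sketch of enabling replication for one VM; the server and VM names are placeholders:

    # Replicate to the secondary site every 30 seconds
    Enable-VMReplication -VMName "VM01" -ReplicaServerName "Replica.contoso.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 30

    # Send the initial copy over the network
    Start-VMInitialReplication -VMName "VM01"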
Replicate to a 3rd Location for an Extra Level of Resiliency
• Once a VM has been successfully replicated to the replica site, the replica can be replicated to a 3rd location
• Chained Replication
• Extended Replica contents match the original replication contents
• Extended Replica replication frequencies can differ from the original replica
• Useful for scenarios such as SMB -> Service Provider -> Service Provider DR Site

[Figure: replication configured from the primary to the secondary site; replication can then be enabled on the 1st replica to a 3rd site]
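Extended (chained) replication is configured on the replica server against the replica copy of the VM; a sketch, assuming the same cmdlet surface and a hypothetical third-site host (extended replicas allow the 5- and 15-minute frequencies):

    # Run on the replica server to extend replication to a third site
    Enable-VMReplication -VMName "VM01" -ReplicaServerName "DR.provider.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300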
| Capability | Hyper-V (2012 R2) | vSphere Hypervisor | vSphere 5.1 Ent+ |
| Incremental Backup | Yes | No¹ | Yes¹ |
| Inbox VM Replication | Yes | No¹ | Yes¹ |

| Replication Capability | Hyper-V Replica (R2) | vSphere Replication |
| Architecture | Inbox with Hypervisor | Virtual Appliance |
| Replication Type | Asynchronous | Asynchronous |
| RPO (Replication Frequency) | 30 secs, 5 mins, 15 mins | 15 minutes |
| Replication | Tertiary | Secondary |
| Planned Failover | Yes | No |
| Unplanned Failover | Yes | Yes |
| Test Failover | Yes | No |
| Simple Failback Process | Yes | No |
| Automatic Re-IP Address | Yes | No |
| Point in Time Recovery | Yes, 15 points | No |
| Orchestration | Yes, PowerShell, HVRM | No, SRM |
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/vsphere/buy/editions_comparison.html, http://www.vmware.com/products/datacenter-virtualization/vsphere/compare-kits.html
Scalability & Performance
Security & Multitenancy
Flexible Infrastructure
High Availability & Resiliency
http://aka.ms/WS2012R2
http://aka.ms/SC2012R2
http://channel9.msdn.com/Events/TechEd
www.microsoft.com/learning
http://microsoft.com/technet
http://microsoft.com/msdn