Monday, March 22, 2010

How does a SAN connect to my server?

A question I was asked the other day: does a SAN connect to a PC using something like external SCSI (whether fibre or not), or external SATA? Does it require a custom PCI expansion card to make the connection?

OK, that invites lots of speculation. For those who don't have their own SAN, and might be interested:

A Storage Area Network, or SAN, is a specialized chassis that holds a disk write controller and a bunch of hard drives. Much as a blade server requires a special chassis to hold the blades and provide back-plane communications between them, the SAN chassis holds 12-16 hard drives (usually), the write controllers (usually two, for redundancy), power supplies (again, usually two, for redundancy), and provides a back-plane for communications between them.

The first piece of a SAN you purchase will be the most expensive. The initial chassis, with the write controllers, has most of the logic, the on-board software, licensing, etc. Often, some number of the initial hard drives in the first chassis are reserved for the on-board operating system. Sometimes you can use a portion of those disks for your own storage, but it's usually not recommended.

This is where the near-infinite configuration options come in. Most vendors offer either SAS or SATA drives. SAS drives are the current equivalent of the SCSI drives we used to order for servers, while SATA drives are what IDE drives used to be for workstations. SAS drives are fast, "enterprise class", and dual-ported. SATA drives are slower and single-ported, but offer vastly more storage. For primary file storage, you'd opt for SAS drives. For secondary disk-to-disk backups, or terabytes of video, order SATA drives.

If your first chassis can't hold enough disks to meet your anticipated storage needs, you can piggyback another chassis onto it. Say you ordered the first one with SAS drives. You might order the second one with SATA drives, for expanded storage or second-tier storage. You may decide to order four SAS chassis and three SATA chassis. The chassis themselves typically connect with 2-4 Gbps FibreChannel connectors, depending on how current your SAN unit is. Again, the chassis back-plane is responsible for providing enough I/O for all this traffic. Each expansion chassis will also have redundant write controllers, but without the full O/S software that came in the first chassis, so they're usually about half the cost of the initial unit.

This is what makes a SAN so tantalizing to server admins. How do you add more storage to a SAN? Order a new chassis, stuffed with disks, and snap it on. Voila - more room. Go into the management software, and either add those disks to an existing server's assigned storage pool, or tell the SAN to give those disks to a different server that needed more storage.

The other thing a SAN can do is automatically swap in a hot spare from anywhere in the SAN for any failed disk. Have 150 disks in 40 different RAID groups? One hot spare can be put into any of those RAID groups, with no need for manual swapping until you get around to it.

How do you add more storage to a traditional server? Shut the server down, open the case, pray there are some free power connectors and data cables, add some more hard drives, power back on, double check the BIOS to make sure they're recognized, initialize the disks, format them, set up ACL permissions, shares, etc. With the SAN, you never have to crack your server, it stays running, the ACL stays in place, the shares are still there, the total capacity simply got bigger.

OK, back to the original question - how do you connect a SAN to a computer? There are currently two methods - FibreChannel or iSCSI.

FibreChannel uses a different card setup than most folks are used to. Again, FibreChannel offers 2, 4, 8 and now 20 Gbps speeds. Usually you have to order two cards per server, in addition to the Ethernet NICs they already have. These then connect to a fibre switch to form a storage network.

With iSCSI, everything is done over regular TCP/IP Ethernet. iSCSI uses TCP/IP (typically TCP ports 860 and 3260). In essence, iSCSI simply allows two hosts to negotiate and then exchange SCSI commands over IP networks. By doing this, iSCSI takes a popular high-performance local storage bus and emulates it over wide-area networks, creating a storage area network (SAN). Unlike some SAN protocols, iSCSI requires no dedicated cabling; it can be run over existing switching and IP infrastructure. However, the performance of an iSCSI SAN deployment can be severely degraded if it is not operated on a dedicated network or subnet (LAN or VLAN). As a result, iSCSI is often seen as a low-cost alternative to Fibre Channel, which requires dedicated infrastructure. (However, Fibre Channel over Ethernet, or FCoE, does not require dedicated infrastructure.)
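As a rough illustration, here's a minimal Python sketch that checks whether those well-known iSCSI ports are reachable on a target. The address is a hypothetical example, and the check only proves TCP reachability, not that a working iSCSI target is listening:

    import socket

    def port_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    TARGET = "192.168.101.20"  # hypothetical SAN controller port on the iSCSI VLAN
    for port in (3260, 860):   # the well-known iSCSI ports mentioned above
        state = "reachable" if port_open(TARGET, port) else "unreachable"
        print(f"{TARGET}:{port} {state}")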

Although iSCSI can communicate with arbitrary types of SCSI devices, system administrators almost always use it to allow server computers (such as database servers) to access disk volumes on storage arrays. iSCSI SANs often have one of two objectives:

Storage consolidation
Organizations move disparate storage resources from servers around their network to central locations, often in data centers; this allows for more efficiency in the allocation of storage. In a SAN environment, a server can be allocated a new disk volume without any change to hardware or cabling.
Disaster recovery
Organizations mirror storage resources from one data center to a remote data center, which can serve as a hot standby in the event of a prolonged outage.
Ideally, you'd have four NICs per server - two for "public" communications, and two for your iSCSI data connection. You can get by with legacy servers that only have two network cards: one for the public network, one for iSCSI. With a Gigabit switch, you set up a couple of Virtual LANs in the switch.

In both FibreChannel and iSCSI, it is vitally important that you reduce collisions between your public traffic and your SAN traffic, and between the two write controllers in your SAN. So - for my iSCSI solution:

Public NIC goes into a port on the switch that talks to the rest of my LAN. All client workstations talk to the server through that port. All clients access data shares over that port, that IP. That NIC is assigned a typical IP in our corporate addressing scheme.

Server 1's iSCSI NIC goes into a port on the switch that talks to one set of ports on my SAN. Server 2's iSCSI NIC goes into a switch port on a different VLAN that talks to two different ports on my SAN. That way they avoid contention, and I can expand this indefinitely. Each has a primary port on one write controller, and a secondary port on the other write controller. The iSCSI NICs have manually assigned IP addresses that are non-routable, do not match the corporate standard, and, due to the Virtual LANs on the switch, should never, ever be visible to anyone else on the corporate network.
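To make that addressing scheme concrete, here's a small Python sketch; every subnet and address below is a made-up example, not my actual scheme. It verifies that the iSCSI NICs sit on private, non-routable ranges outside the corporate subnet:

    import ipaddress

    # Hypothetical subnets: one corporate range, one isolated range per iSCSI VLAN.
    corporate = ipaddress.ip_network("10.1.0.0/16")
    iscsi_vlans = [ipaddress.ip_network("192.168.101.0/24"),
                   ipaddress.ip_network("192.168.102.0/24")]

    iscsi_nics = ["192.168.101.11", "192.168.102.11"]

    for addr in iscsi_nics:
        ip = ipaddress.ip_address(addr)
        in_corp = ip in corporate
        in_vlan = any(ip in net for net in iscsi_vlans)
        print(f"{addr}: non-routable={ip.is_private}, "
              f"outside corporate range={not in_corp}, on an iSCSI VLAN={in_vlan}")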

Most choose to leave the operating system boot drives in the servers. It makes it a lot easier to troubleshoot things when they go wrong, and makes the SAN an optional extension of the servers. There are two versions of the Microsoft iSCSI initiator you can download - one is for booting off of an iSCSI SAN, one is for "normal" use.

You install the Microsoft iSCSI initiator on your Microsoft server. Depending on the vendor, you may have to install their drivers or management software as well. On your SAN, you carve up your available disk space, decide how you want the RAID to work (RAID 1, 0, 10, 3, 5, 6, etc.), and assign it to the servers. Once it's assigned to the servers, you go into the server, pull up Disk Management, scan for new drives, and go from there. At that point it looks like an internal hard drive to the server. The first time, you have to format it and do all the NTFS ACL/sharing work, but from there, you're done with internal server work.
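For the "decide how you want the RAID to work" step, a quick back-of-the-envelope capacity estimate helps. Here's a Python sketch for the RAID levels listed above; the 12-disk, 1 TB shelf is just an example, and real arrays lose a bit more to formatting and hot spares:

    def usable_tb(raid_level, disks, disk_tb):
        """Rough usable capacity for common RAID levels (ignores formatting overhead)."""
        if raid_level == "0":
            data_disks = disks          # striping, no redundancy
        elif raid_level in ("1", "10"):
            data_disks = disks / 2      # mirrored pairs
        elif raid_level in ("3", "5"):
            data_disks = disks - 1      # one disk's worth of parity
        elif raid_level == "6":
            data_disks = disks - 2      # two disks' worth of parity
        else:
            raise ValueError(f"unsupported RAID level: {raid_level}")
        return data_disks * disk_tb

    # Example: a 12-disk shelf of 1 TB drives.
    for level in ("0", "1", "10", "3", "5", "6"):
        print(f"RAID {level:>2}: {usable_tb(level, 12, 1.0):5.1f} TB usable")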

And there you have it!!!

Thursday, March 4, 2010

Best Practices for Deploying Hosted Virtual Desktops


I thought this was a great article by Brian Gammage, Gartner RAS Core Research.




Overview



Enterprises evaluating, testing or deploying hosted virtual desktops (HVDs) can learn a lot from the experiences of other organizations that have already employed the approach. This research summarizes current best practices.

Key Findings
  • Through mid-2010, structured task workers are the user group that can be most viably addressed with HVDs.
  • Persistent personalization of HVD images will expand the range of addressable users to include desk-based knowledge workers. It will also be a critical enabling technology for the eventual support of mobile users from 2011.
  • Confusion over the roles and responsibilities of desktop and data center IT staff is a common issue for companies that deploy HVDs.
  • Dedicated HVD images, used for secure remote access, will be rendered obsolete by persistent personalization in 2012.
Recommendations
  • Be realistic in planning which users will be supported through an HVD. Start with desk-based structured task workers and plan to expand to desk-based knowledge workers in 2010.
  • Cost-justify any request for dedicated HVD images on a case-by-case basis, even if intended to support secure remote access.
  • Define the responsibilities of desktop and data center staff before beginning HVD deployments.
  • Ensure that full production requirements for server, storage and network infrastructure are factored into pilot deployments of HVDs.
  • Aggressive adopters of HVDs should plan for major updates every 12 months through 2012.



Analysis



Interest in HVDs continues to grow. Since first writing about this architectural approach in 2005, Gartner has talked with over 500 organizations that are evaluating, testing or deploying HVDs. Based on our discussions with those that have deployed broadly or partially across their organization, this research summarizes current best practices for HVD deployment and use.





Identify Target Users and Applications

HVDs are not suitable for every user requirement or application. Limitations in the performance of centralized applications when accessed over local-area or wide-area networks must be considered when determining who is a viable HVD candidate. Technical improvements in HVD products (primarily in the performance of connection protocols) will alleviate the impact of these limitations through 2009 and 2010. However, these will reduce rather than eliminate latency issues for users.

An HVD implementation separates the user from his or her computing environment, introducing two factors that add latency to application performance: network throughput and protocol-connection constraints. Only the latter can be addressed through improvements in HVD products. Even if the protocol imposed no performance constraints, network latency would still be an issue. For enterprises, this means the user and application requirements that can be viably addressed with an HVD will expand as products improve, but are unlikely to ever encompass the full user population.
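As a back-of-the-envelope illustration of that point, consider this small Python calculation; the figures are purely hypothetical:

    # Hypothetical figures, purely to illustrate the point above.
    rtt_ms = 40        # round-trip time between user and data center
    round_trips = 4    # protocol exchanges needed per user interaction

    print(f"Delay per interaction: {rtt_ms * round_trips} ms")
    # Even a perfect protocol needing a single round trip still pays
    # the 40 ms network floor on every interaction.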

The requirements of mobile and intermittently connected users must also be planned for. Although HVD vendors are beginning to describe approaches that will permit the downloading of full or partial HVD images, these are unlikely to be available before 2011, at the earliest, and will require other infrastructure changes before they become viable for broad deployment and use. The changes in HVD image structure that will eventually help support offline users will also be critical in expanding the addressable audience of non-mobile users. By 2011, support for "persistent personalization" of images (allowing changes made to HVD images to be retained between active sessions) and user-installed applications will expand the number of users that can be viably addressed with an HVD to include most desk-based knowledge workers.

With these constraints and HVD development expectations in mind, Gartner recommends the following approach to identifying which workers can viably use an HVD now and through 2011:

  • Focus on structured task workers using desktop PCs for transaction-based and personal productivity applications first. Begin HVD deployments with users that do not personalize their images (that is, through the adjustment of settings and other features). In most cases, these users will be equipped with a well-managed and locked-down desktop PC.
  • Do not implement HVDs for users of graphic-intensive or streaming media applications before mid-2010, at the earliest. Even then, only deploy HVDs for users accessing over local-area networks after thorough testing.
  • Do not plan to extend HVD deployments to desk-based knowledge workers before 2010. Initially, only those workers that do not need to install applications will be addressable, but this will require the use of a third-party "point solution" product to support persistent personalization of the user's image. By 2011, desk-based knowledge workers that need to install applications will also be addressable.
  • Do not plan to support mobile users with HVDs before mid-2011, at the earliest.




Consider Printing Requirements

Most HVD implementations support local printing using a Universal Serial Bus (USB) connection to the accessing device. However, not every type of printer hardware is supported: HVD images provide generic ("universal") printer drivers that offer adequate functionality with most printers, but do not support some device-specific functions. Organizations with complex remote printing requirements, or that need to support specific printer hardware, should budget to deploy an additional printing utility. Such utilities are available from a number of third-party vendors. Organizations that may need to support high volumes of concurrent remote print requests (such as ticketing agencies or bank branches) should also consider the potential for network bottlenecks.





Restrict Use for Secure Remote Access

Through mid-2009, the majority of HVD deployments targeted two user groups/requirements:

  • Structured task workers using desktop PCs for transaction-based and personal productivity applications
  • Secure remote access from specific devices outside the corporate security perimeter

These two groups typically use different HVD deployment models. The former uses pooled deployments designed to optimize resource utilization, but this imposes restrictions on how much the user image can be "personalized". The second group typically uses dedicated HVD images, which support personalization but are significantly less flexible and cost more to operate than pooled images.

Changes in the way HVD images are provisioned and managed will facilitate the personalization of pooled images, but this capability is unlikely to be integrated into HVD products before late 2010. Enterprises that use HVDs to support secure remote access requirements (where we assume the user is prepared to accept latency in return for access from his or her remote location) should plan to migrate from the dedicated to the pooled approach at that point. Until then, use for secure remote access requirements is likely to be an expensive option that creates additional obstacles for the expansion of pooled deployments to other workers. In most cases, enterprises should only deploy dedicated HVD images for users that can demonstrate a genuine business requirement that cannot be met through more-traditional means (such as a notebook).





Redefine Roles and Responsibilities

One of the HVD issues most frequently described to Gartner is not technical; rather, it relates to confusion in roles and responsibilities caused by the HVD architecture.

An HVD moves a "thick client" PC image from a remote location to the data center, where it becomes a server workload. The functions of desktop image management and support will move with the image, so the IT staff responsible for desktops will naturally assume they are still fully responsible for the image in its new location. However, the IT staff responsible for the data center and servers is unlikely to see it that way. By moving the image, new scope for confusion is created, and this must be addressed through explicit definitions of the boundaries of responsibilities for the personnel involved. Unless this occurs, productivity will be reduced, and service levels for users may be compromised.

In most cases, the boundary between the responsibilities of desktop and data center IT staff will be the virtual machine "bubble" of the HVD image. Desktop staff should continue to take responsibility for what happens inside this bubble, but responsibility for how and where the bubble resides will move to data center staff (the desktop becomes a server workload). Enterprises should plan to review these responsibilities as and when HVD image provisioning technologies evolve to support persistent personalization and offline use.





Train and Communicate

Changing the location or the performance of desktop applications may disrupt the routines of some workers. IT staff may also need time to adjust to where and how their responsibilities must be fulfilled. Don't assume that either group will automatically understand the implications of a shift from a distributed thick-client environment to HVDs. Communication of what has changed and why will be essential, especially if there is a need to "sell" the new architecture to less-enthusiastic users. Training will also be critical in helping avoid disruption and lost productivity.





Plan for Scalability

Although many organizations raise doubts about the scalability of HVD deployments, there is no obvious architectural or technical foundation for such doubts. A number of organizations have already deployed HVDs to around 5,000 to 10,000 users, and a handful have deployed HVDs to more than 10,000 users. However, scalability issues can be introduced through incomplete planning. These issues typically fall into three categories:

  • Network — Organizations fail to evaluate the effect of increased network traffic as users interact with their hosted desktop images. Unless the topology of network requirements is realistically evaluated, new bottlenecks can occur.
  • Servers — Many organizations overestimate the number of HVD images a server will typically support. Despite frequent claims from vendors that eight HVDs can run on each server processor core, we continue to recommend a limit of five. Above that level, shared access to storage and other server resources can create performance bottlenecks within the server.
  • Storage — When not in use, HVD images reside in network storage, and this can also create bottlenecks. If many users log on at the same time, then the result will be delayed boot times for all.

All of these issues can be addressed through appropriate provisioning and careful planning. However, most HVD deployments begin small, with a pilot project for a few hundred users that does not strain network, server or storage performance. It's only as the HVD deployment is moved into production for more users that these problems appear. Our recommendation for enterprises is to factor the eventual production requirements into the pilot-phase planning process.
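As a sizing illustration of the per-core guidance above, here's a small Python sketch; the eight-core server is a hypothetical configuration, and real capacity also depends on workload, storage and network:

    import math

    def servers_needed(users, cores_per_server, hvds_per_core=5):
        """Servers required under the per-core HVD limit discussed above."""
        capacity = cores_per_server * hvds_per_core
        return math.ceil(users / capacity)

    # Pilot vs. production scale under hypothetical two-socket quad-core servers.
    for users in (300, 5000, 10000):
        print(f"{users:>6} users -> {servers_needed(users, cores_per_server=8)} servers")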





Plan for Rapid Update and Demand Product Road Maps

Although growing in viability, HVD products and technologies are still maturing. Organizations that want to address a significant part of their user populations with HVDs will need to embrace new developments rapidly to overcome existing limitations and to expand the addressable user population. This implies regular updates and refreshes to HVD components. For example:

  • Improvements in HVD connection protocols will require some changes to brokering and session management software.
  • Persistent personalization point products will be rendered obsolete as the capabilities of HVD management tools improve.
  • Changes in the way HVD images are deployed and used will require changes to brokering software and instrumentation. The coverage and function of PC life cycle management tools may also be affected.

More-aggressive adopters of HVDs should plan for major updates every 12 months through 2012. Support from vendors with realistic and detailed product road maps will be essential for plotting the lowest-cost and most-efficient upgrade path. Enterprises should press vendors to disclose future product plans, even where these are not yet fully confirmed.





Rationalize HVD Images

HVD images are complex and large — typically anywhere between 5GB and 15GB per user (including user data). In most current HVD deployments, images are managed as integral objects, which drives high storage requirements. Recent developments in HVD products partly address this by deduplicating the largest single element in each HVD image: the Microsoft Windows operating system (OS). Citrix's Provisioning Server does this for XenDesktop and VMware's View Composer delivers the same functionality for View (previously VDI) deployments.
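To see why deduplicating the OS matters, here's an illustrative Python calculation. The per-user image size is taken from the 5GB-15GB range above, while the shared OS footprint is an assumption for the sake of the example:

    # Illustrative figures: avg_image_gb falls in the 5-15 GB range above;
    # os_gb is an assumed size for the shared Windows OS portion.
    users = 1000
    avg_image_gb = 10
    os_gb = 6

    integral = users * avg_image_gb            # each image stored whole
    deduped = os_gb + users * (avg_image_gb - os_gb)  # one shared OS copy
    print(f"Integral images:  {integral / 1000:.1f} TB")
    print(f"OS deduplicated:  {deduped / 1000:.1f} TB")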

These changes will help, but much of the complexity in HVD images is driven by the integration of applications (both with the OS and with each other). This can be addressed independently of developments in HVD technologies through a range of approaches, including application virtualization, streaming or server-based computing. Where an application is common to a high number of users, it should be removed from the stored image (whether or not the OS is deduplicated) and delivered to the OS, either through streaming (at boot time) or through a server-based approach.

The most successful HVD deployments, to date, have typically combined the shift of the PC image to a data center with an image rationalization initiative. In most cases, the rationalization project was run separately, after the HVD deployment began.





Ensure Licensing Compliance

For most applications, a shift from a traditional desktop deployment to an HVD carries no licensing implications, but some applications may be affected. Licensing of the Windows client OS is a special case: a Windows Vista Enterprise Centralized Desktop (VECD) license will always be required. There are two types of VECD licenses: those for named PCs and those used for thin-client terminals or unnamed PCs. Prior to any HVD deployment, a full review should be carried out to avoid any potential for noncompliance with current license agreements.



© 2009 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. Reproduction and distribution of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.