Tuesday, November 9, 2010

Top 7 Considerations for Your Wireless Network

Introduction

It’s a wireless world outside, with cell phones, Blackberries, netbooks, and more relying on various wireless data networks to connect and communicate. Adding or upgrading a WLAN (Wireless Local Area Network) inside the business adds flexibility and convenience and keeps data available everywhere in your company. Wireless networks come at a cost, however, both in money and management time, and security concerns jump when you add wireless components to your network. So here are the Top 7 Considerations when adding or upgrading a WLAN for your business.

Considerations

1. Site Surveys and Wireless Signal Obstacles

Wireless networks aren’t magic, they’re radio. Just as your car radio signal drops because of distance or obstacles like buildings, mountains, and tunnels, your wireless network signal has limitations. In fact, a WLAN signal is much less robust than a radio station’s because of the frequency used. While it takes a mountain to block a radio station, a file cabinet might block your network connection.

Avoid placing access points close to windows, because the signal goes through glass as easily as it goes through air. Broadcasting your network to the world invites security issues and wastes bandwidth your users need. The most common wireless network types, 802.11b and 802.11g, are “two wall” technologies: the signal can only go through two normal walls before it becomes too degraded for use. Extra-thick walls, or plaster walls with a steel mesh inside, will degrade or stop the signal more quickly. Floors and ceilings count as walls, too, so learn to think in three dimensions while placing access points.

Placing access points intelligently will support the most users with the fewest access points. Start by placing access points in the middle of the office and check the signal levels. If you have only a few wireless clients to support, you may get by using a laptop with a good signal strength meter in the wireless client utility (check your results with a second and third laptop). Larger companies should invest in wireless testing tools (some software tools are free or darn cheap) to speed the process; search for “wireless network survey tools” for a quick list of thousands of options. Larger companies will also need a site survey, which can be expensive but speeds deployment and reduces the number of access points by locating them correctly. Smaller companies can usually get by without a survey if their physical location is limited. An extra access point or two goes a long way toward user satisfaction, so pad your budget a bit to ensure happier users.
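If you just want a quick spot check rather than a full survey tool, a short script can read the signal level your laptop already reports. Here is a minimal sketch for a Windows laptop, assuming the built-in netsh utility is available; walk the office and re-run it at each candidate access point location.

    # Minimal sketch: read the current Wi-Fi signal strength on a Windows laptop
    # by parsing "netsh wlan show interfaces". Assumes a single wireless adapter.
    import re
    import subprocess

    def wifi_signal_percent():
        output = subprocess.run(
            ["netsh", "wlan", "show", "interfaces"],
            capture_output=True, text=True, check=True
        ).stdout
        match = re.search(r"Signal\s*:\s*(\d+)%", output)
        return int(match.group(1)) if match else None

    if __name__ == "__main__":
        signal = wifi_signal_percent()
        if signal is None:
            print("No wireless connection found.")
        elif signal < 50:
            print(f"Signal {signal}% - consider another access point location.")
        else:
            print(f"Signal {signal}% - acceptable at this spot.")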

2. Changes in Network Infrastructure

Adding wireless to your network requires more than just a couple of access points plugged into your existing router. In fact, wireless access points are one of the major reasons companies invest in switches with PoE (Power over Ethernet). Placing access points on the ceiling is much faster and less expensive when you don’t need to run electrical power through conduit to each location. Small companies may be able to use a single wireless access point built into their main router as their only wireless infrastructure, but you know what they say about “best laid plans.” The flexibility of an extra access point or two is worth the expense.

When planning for user capacity, take into consideration more than just laptops and some wireless-enabled desktops. Will iPhone users start surfing via their WiFi interface? iPad users certainly will. Check with your phone service manager, because wireless desk phone handsets can eat up a fair amount of wireless bandwidth. Your network hardware, software, and management processes will change more when you add wireless networking than you expect. Use the addition or expansion of a WLAN to examine and update your existing infrastructure. Bolting a new, high-speed wireless network onto an outdated and overworked router will only lead to complaints.

3. Router Upgrade

Your router, the connection point between your internal networks and the outside world, may not be suitable for a WLAN. Even routers that don’t include wireless support need to accommodate different network configurations to support a WLAN. A wireless network will have a different network address range than your wired network, and your router must support at least two network ranges. Companies with visitors often provide a “guest network” login in the lobby or throughout the building. This requires another network address range that should be separated from all your internal network resources. After all, a guest should see your Internet connection, but not your internal auditing files.

If your router does support WLAN connections and you’ve had it more than three years, upgrading is recommended for security reasons alone. Wireless authentication protocols have changed drastically in the last few years. Older routers are less secure, and often don’t work at all with the newer security protocols included on the most recent laptops and other devices. Include the cost of a new router in your wireless budget. You may not need it, but better to be prepared than insecure.
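Before touching the router, it helps to sanity-check that your wired, wireless, and guest ranges really are separate. Here is a minimal sketch using Python's standard ipaddress module; the subnets shown are placeholders, not a recommendation for your addressing plan.

    # Minimal sketch: confirm that example internal, wireless, and guest address
    # ranges are distinct before configuring the router. The subnets shown are
    # placeholders; substitute your own addressing plan.
    from ipaddress import ip_network
    from itertools import combinations

    subnets = {
        "wired internal": ip_network("192.168.10.0/24"),
        "wireless staff": ip_network("192.168.20.0/24"),
        "wireless guest": ip_network("192.168.99.0/24"),
    }

    for (name_a, net_a), (name_b, net_b) in combinations(subnets.items(), 2):
        if net_a.overlaps(net_b):
            print(f"WARNING: {name_a} overlaps {name_b}")
        else:
            print(f"OK: {name_a} and {name_b} are separate ranges")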

4. Rethink Security

Wired networks have one great security edge: hackers have to be inside your building to connect to your network. Wireless networks, especially when configured incorrectly, broadcast to the world. Security must be ratcheted up a couple of notches when you add wireless.
Every wireless access point sends an SSID (Service Set IDentifier), a unique identifier attached to wireless data packets to differentiate that WLAN from others. Do not confuse this with a security measure: changing your SSID away from the default setting and turning SSID broadcast off only slows down hackers by about sixty seconds. This is a network identifier, not a security tool. Change it from the default for easier internal management, but don’t think it blocks anyone. Real security comes from tools like WPA (WiFi Protected Access) and WPA2 for authentication. These supersede the earlier WEP (Wired Equivalent Privacy), which, unfortunately, wasn’t nearly as equivalent as the industry hoped. In fact, if your company handles customer credit card information, PCI (Payment Card Industry) audits demand you use at least WPA for wireless security, or you fail the audit. Wireless client authentication dives into far too many details for this discussion. Just be aware that adding a WLAN to your network requires a complete security approach, not just some piecemeal kludge to get a few laptops connected.

5. Clamp Down on Unauthorized Access Loopholes

A “rogue” access point is one that users set up for themselves, usually by going to an electronics superstore and buying a consumer router with wireless support for $30. No security, no authentication, and no management, but they blow a giant hole in your security wall.
The second way users either purposefully or accidentally destroy your security is by turning on Ad Hoc mode in their wireless client software. Early on, when Internet connections were limited, a laptop with an Ad Hoc connection helped others get to the Internet. Today it just helps hackers. Use regular sweeps with wireless monitoring tools to find and quickly close both of these loopholes. Discourage such experimentation by ensuring everyone who wants wireless access has it, and by offering to solve wireless problems for users immediately. Users unhappy with IT are the most likely to “help” IT by creating their own wireless networks.
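A dedicated monitoring tool does this far better, but even a short script can flag unfamiliar networks during a walk-through. The sketch below assumes a Windows laptop with netsh and a list of authorized SSID names (the names shown are hypothetical); it simply prints anything it doesn't recognize for follow-up.

    # Minimal sketch of a rogue access point sweep on a Windows laptop: list the
    # SSIDs visible to the wireless adapter via "netsh wlan show networks" and
    # flag anything that is not on the authorized list.
    import re
    import subprocess

    AUTHORIZED_SSIDS = {"CorpWLAN", "CorpGuest"}  # hypothetical example names

    def visible_ssids():
        output = subprocess.run(
            ["netsh", "wlan", "show", "networks"],
            capture_output=True, text=True, check=True
        ).stdout
        return set(re.findall(r"^\s*SSID \d+\s*:\s*(.+)$", output, re.MULTILINE))

    for ssid in sorted(visible_ssids()):
        ssid = ssid.strip()
        if ssid and ssid not in AUTHORIZED_SSIDS:
            print(f"Investigate: unrecognized network '{ssid}'")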

6. Plan for Upgrades

You may find that older laptops and wireless client cards don’t support WPA2, or even WPA. That is one example of an upgrade to plan for, but not the only one. Security protocols change regularly, and updated implementations of popular security tools offer much better protection than older hardware and software. This may mean updating firmware on your wireless access points, or replacing an older router that can’t be updated. Your wireless budget needs don’t stop when you turn on the network.
The most critical area to plan for is upgrading your WLAN hardware to support 802.11n, the latest wireless protocol approved by the standards committee. Speeds in 802.11n are many times faster than 802.11b and 802.11g, and the signals go farther with higher quality. The speed and increased user count supported by 802.11n equipment are well worth the upgrade when you get to it. Beyond that, always plan for security upgrades. Test for security leaks, like rogue access points, regularly; that may mean buying tools as the wireless user base increases. Keep your software up to date on clients, wireless access points, and routers. Most of the time, a firmware upgrade will be enough. Be prepared for older equipment to reach a point where it must be replaced, and that point will usually be decided by a needed security upgrade.

7. Invest in a WLAN Controller

Small companies can get by managing wireless clients the way they manage wired network clients: manually. This method is popular because it’s cheap, not because it’s good, and more than a dozen or so users is usually the point where the manual method becomes painful. Unfortunately, small companies tend to ignore management needs rather than upgrade to automated tools.
Larger companies, because they can amortize costs over more users, rely on automated tools. One that’s critical for companies with more than a couple of wireless access points is a WLAN controller. These tools use less intelligent wireless access points but manage, configure, and secure them more completely than so-called “fat” access points do. In addition, they provide a single management interface for all wireless access points and users. A WLAN controller is highly recommended as a management upgrade that saves time and increases security.

Conclusion
As in other areas of life, doing things right takes a bit more time, effort, and often money, while doing a wireless network on the cheap can cost you a fortune. One of the largest and most expensive data breaches of customer information ever, at T.J. Maxx, occurred through an unsecured wireless network at a retail store; the cybercriminals did their work from the comfort of their own car in the parking lot. Done well, a wireless network offers user freedoms not possible any other way. Building a proper wireless network will be much easier if you follow the seven considerations presented here.

Think security first, and the rest will fall into place easily.

And if you need help, or are interested in how MicroAge can guide you through this process, please feel free to contact me.

Creinhard@microage.com
480-366-2091

Monday, July 12, 2010

Windows 7 user accounts and groups management

DESKTOP OPERATING SYSTEMS - By Ed Tittel



There are three types of basic Windows 7 user accounts for solutions providers to work with: one for administrators and equivalents; one for standard, everyday users; and another for a guest account (turned off by default in Windows 7). All of these types are shown in Figure 1, along with an administrator account. To access the Windows 7 User Accounts item in the Control Panel, type user into the Start menu search box, then click User Accounts in the results.

Figure 1 – There are three types of basic Windows 7 user accounts.

With administrator accounts, solutions providers can install software, make configuration changes, add or delete files in most directories and so forth. Standard users can manage their own files inside the %SystemDrive%\Users\ directory tree, and they can only make limited changes to their machines. Guests can look at system files, but only in certain directories, and they can't do much to the Windows machines they have access to. The User Accounts control is vital for creating user IDs and associating passwords and images with accounts. But when it comes to managing user rights and permissions, the real action lies elsewhere in Windows 7.
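A quick way to see which side of that divide a given session falls on is to check for administrator rights before attempting a change. The following is one hedged sketch using the Windows shell32 IsUserAnAdmin() call through Python's ctypes; it reports on the current session only.

    # Minimal sketch: check whether the current Windows account is running with
    # administrator rights before attempting an install or configuration change.
    import ctypes

    def running_as_admin() -> bool:
        try:
            return bool(ctypes.windll.shell32.IsUserAnAdmin())
        except AttributeError:
            # Not on Windows, or the call is unavailable.
            return False

    if __name__ == "__main__":
        if running_as_admin():
            print("Administrator rights available - installs and config changes allowed.")
        else:
            print("Standard (or guest) account - expect UAC prompts or access denied errors.")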

More resources about Windows 7 user accounts
Why User Account Control in Windows is necessary

The best Windows 7 user accounts control comes via group management

Ask any experienced Windows solutions provider, and he or she will tell you that the best way to manage rights and permissions -- the controls that establish which applications or services a customer may run and which files or other system resources they can access -- is by establishing groups related to specific kinds of roles or activities.

A quick look at Windows 7's default group names and descriptions (Figure 2) helps illustrate this principle, while also listing the roles and activities that Microsoft finds most useful on Windows 7 systems.

Figure 2 – Windows 7 default group names and descriptions in the Local Users and Groups management console.

Notice the kinds of groups that appear by default, which include backup operators (those who can back up or restore systems), event log readers (those who can access and view event log contents to seek out and diagnose system issues), network configuration operators (those who can manage network configuration items and elements), remote desktop users (those who are allowed to log in from across the network or the Internet) and so on. The idea is to break various types of functionality into distinct areas (or roles), each of which is associated with a group, and then to use group membership to grant users the corresponding rights and permissions. For example, a system with PhotoShop installed might have a PhotoShop users group, and only those who belong to that group can run PhotoShop on a specific computer.

To access this capability, solutions providers must be logged in using the Administrator account or another account with administrator privileges (like the Ed account in Figure 1). Then, you can simply type lusrmgr.msc into the Start menu search box to open the Local Users and Groups management console snap-in depicted in Figure 2. The word "Local" is important because the control applies only to one Windows 7 (or other Windows) machine at a time.
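For scripted or repeatable setups, the same group work can be done from the command line with the built-in net localgroup command. The sketch below mirrors the PhotoShop example above; it assumes an elevated prompt, and the group and user names are placeholders.

    # Minimal sketch: create a local group and add a user to it using the
    # built-in "net localgroup" command. Run from an elevated (admin) prompt.
    import subprocess

    GROUP = "PhotoShop Users"   # hypothetical group name
    USER = "ed"                 # hypothetical local account

    def run(args):
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    # Create the group (fails if it already exists), then add the member.
    run(["net", "localgroup", GROUP, "/add"])
    run(["net", "localgroup", GROUP, USER, "/add"])

    # Review the resulting membership.
    run(["net", "localgroup", GROUP])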

For network users, Active Directory and Group Policy hold the keys to the kingdom

The principles of managing Windows 7 user accounts are slightly different on Windows server networks, where Active Directory servers typically house user account and group information and definitions as well as the policies that go with them. Though you can manage groups, accounts and Group Policies locally from individual Windows machines on production networks, the process is too time-consuming to be worth the effort.

Most solutions providers use the Microsoft Management Console (mmc.exe) with plug-ins to support users, groups and Group Policy management. You can use the Active Directory (AD) Users and Computers tool to set up AD users and groups, and you can use a Group Policy management tool (the Group Policy Management Console, aka gpmc.msc) to set up and manage group policy settings. Group policy settings are used to control desktop appearance, application access, file system rights and permissions and lots more.
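Where the AD command-line tools (dsquery, dsmod) are installed on a domain-joined admin workstation, routine group population can also be scripted. The following is a rough sketch rather than a recommended production process; the OU and group distinguished names are placeholders.

    # Minimal sketch: find every user in a hypothetical Sales OU and add them
    # to a hypothetical Sales-Share group using dsquery/dsmod.
    import subprocess

    OU_DN = "OU=Sales,DC=example,DC=com"
    GROUP_DN = "CN=Sales-Share,OU=Groups,DC=example,DC=com"

    # dsquery prints one quoted distinguished name per line.
    result = subprocess.run(
        ["dsquery", "user", OU_DN, "-limit", "0"],
        capture_output=True, text=True, check=True
    )
    user_dns = [line.strip().strip('"') for line in result.stdout.splitlines() if line.strip()]

    for dn in user_dns:
        subprocess.run(["dsmod", "group", GROUP_DN, "-addmbr", dn], check=True)
        print("Added", dn)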

Tuesday, June 29, 2010

Top Ten Reasons for a Server Refresh!

#1 Power Savings - New servers come in Energy Star certified families. Right-sized power supplies are 90%+ energy efficient and draw less power.

#2 Cooling Savings - The latest server offerings are designed for greater venting and airflow. The latest generation of HP, IBM and Dell servers uses less than 60% of the fan power of previous-generation servers.

#3 Improved Performance - With the latest Intel processors, new servers provide up to 180% better performance per watt than older-generation servers.

#4 MicroAge's Services - We maximize the value of new technology while empowering you to operate and maintain the solutions.

#5 Consolidation and Virtualization - Find a “hidden data center” by consolidating several physical servers into one physical server running nine virtual machines.

#6 Simplified Management - Many new systems have embedded management, which speeds deployment by eliminating the need for CDs. New management consoles provide simplified tools to deploy, manage, and update systems.

#7 Commonality - Image commonality across platforms, clean and consistent placement of interface ports, and obvious, clear component organization.

#8 Purposeful Design - Customer-inspired design using professional industrial materials, including improved chassis, rails, cable management arms, hard drive carriers and latching.

#9 Improved Reliability - All-steel cable management arms eliminate creep, new metal hard drive carriers, and quick-release rack latching for easy deployment.

#10 Our Financial Services - Simplify server acquisition with flexible financing options.

Contact me today to learn how MicroAge can save you time and money and outperform the competition with A+++ service.
creinhard@microage.com
480-366-2091

Wednesday, June 9, 2010

Don't let your need to meet PCI compliance bust your budget.

In today's budget-conscious world, many organizations find themselves struggling with the challenge of meeting PCI compliance requirements while controlling IT and network security-related costs.

MicroAge can offer your company several solutions that will both maintain your PCI compliance while securely protecting your credit card information for a reasonable cost.

Recently, we were contacted by a small- to medium-sized non-profit organization that was having difficulty adding an Intrusion Detection Prevention (IDP) solution to its network to protect cardholder information.

The organization's existing firewall would not allow for the addition of any IDP software, so I suggested that the company move its firewall from its external-facing position to an internal PCI-segmented position and then add a Juniper SRX210 Services Gateway for IDP on its external-facing network. We provided this solution for less than $2,000.

Since all organizations that accept, transmit or store any cardholder data MUST be PCI compliant, regardless of size or number of transactions, MicroAge offers a variety of products and services to help you securely maintain compliance without breaking your budget.

For small- to medium-sized businesses, MicroAge will provide general PCI consulting to help you complete your Self-Assessment Questionnaire and submit your Attestation of Compliance (AOC) to your acquiring bank.

Larger organizations may need a comprehensive security assessment of their data security standards to complete a documented Report on Compliance, while companies with a large number of widely-dispersed points of sale locations may need assistance in completing SAQs for each location.

For more information, please contact me at 480-366-2091. Thanks, Chris.

Wednesday, May 12, 2010

Infrastructure Architecture


Why Infrastructure Architecture Matters

Infrastructure architecture is a new kid on the architecture block. Traditionally, a large amount of IT-architecture attention has been devoted to information and application architecture. However, several developments have fostered a growing desire for infrastructure architecture. Not only will an organization's infrastructure provisions benefit from the application of this new architectural discipline; IT architecture as a whole will mature. Because infrastructure architecture is in its infancy, a lot of work has to be done to stimulate and create infrastructure-architecture methods, models, and tools. This article describes a number of first steps in this new architecture discipline.

The Importance of "Trust" in an Automated World

For ages, "trust" has been the basis of our economic system. In our economic transactions, we rely on "trust"—confident, as we are, that things are carried out properly. Our confidence is based on our experience with—and the reputation of—the companies, governments, and individuals with whom we interact. Many of the services that we use are virtualized. For example, the amount of money in a bank account is no more than a record in the bank's database system. Contracts, bills, and receipts are produced to underpin our activities; however, in an increasingly automated world, even these documents tend to be virtualized. How many companies urge their clients to accept e-bills, e-contracts and e-accounts these days, instead of paper copies? Many! As long as they can be accessed by these clients, there remains some sort of cogent evidence that the system is still running. I started to wonder if these companies understood the important role of their infrastructures, because:
  • Business drives everything.
  • Information and communications technology (ICT) enables business.
  • There is no ICT without infrastructure.

    And, therefore:

  • There is no business without infrastructure.

A Solid Infrastructure: Essential to Business Continuity and Agility

Of course, it is not infrastructure services alone that support automation. Software applications contain most of the (complex) logic that drives automation. Therefore, it is not a surprise that a quick survey of the IT-architecture field shows that information and application architecture receive the greatest amount of attention.

Most methodologies and frameworks focus on application architecture. When a methodology or framework does pay some attention to infrastructure, it is remarkable that the level of abstraction is significantly lower when dealing with infrastructure services. This can be understood from a historical point of view. In most cases, infrastructure services have been "simple" during the first decades of IT development. While applications advanced in functionality and complexity, hardware only got "faster." However, the turning point came during the Internet hype. Infrastructure vendors innovated like never before.

Infrastructure started to become "smart," together with a massive growth of connectivity solutions. This coincided with the rapid development and deployment of new application types (such as e-marketing, e-commerce, ERP, and data warehousing), which demanded new infrastructure services.

Within the infrastructure field of work, a silent revolution took place. Many new and complex types of infrastructure services have been added to the field, while existing services gained a lot of functionality. Traditionally separated domains (such as telephony and video) are being integrated within the infrastructure domain, while generalized, standardized applications (such as mail, calendar services, and collaboration applications) are being added to this infrastructure domain. This results in complex infrastructure landscapes that are hard to manage and expand. Most current infrastructure landscapes are the result of a history of application-implementation projects that brought their own specific piece of hardware into being. Mergers and acquisitions make things even worse—leaving many companies with different sets of the same services that are hard to connect to each other, let alone integrate and consolidate.

Why Infrastructure Architecture Is Decisive

When organizations (out of necessity) pay attention to business-continuity management or want to save on costly administrative staff, they should invest in infrastructure architecture to rationalize, standardize, and structure their infrastructure landscapes. Organizations also benefit from infrastructure architecture when they want to be flexible and agile, because a solid and naturally scalable, modular infrastructure provides a firm foundation for quick adaptations at higher levels. The coming market, which is full of digital natives (forming "markets of one"), asks for a degree of flexibility that can no longer be supported by infrastructures that are inconsistent and hard to expand. These markets need infrastructures that are constructed with standardized, modular components.

Of course, proper project management, skilled design, construction, and operation are essential to implement and maintain reliable infrastructure services. But to make infrastructures consistent and fitting with business needs, architecture is indispensable.

However, not only infrastructure landscapes benefit from proper infrastructure architecture. To be able to translate business, information, and application architectures into solutions that really work in a real world, the supporting infrastructure services should be in line. The result would make architecture stronger as a whole and enable architecture to deliver solutions that are consistent from beginning to end. To enhance the effectiveness of architecture, we must pay attention to infrastructure architecture to complete the whole picture.

A nice incentive is that it directly pays off to invest in infrastructure architecture. Blessings that are delivered by a mature use of it include:

  • Greater insight into and overview of existing complex infrastructure services by preparing a transparent and structured taxonomy.
  • Development of a structured, standardized, and consolidated set of infrastructure services that optimally support business processes and applications. This prevents overlapping and diversity of services, and thus reduces the complexity of managed services and life-cycle management. Standardization produces greater flexibility bottom-up, because it makes it easier to carry out expansions, changes, and replacements.
  • A balanced examination of the possibilities that are offered by new technologies and a concrete path towards solutions to the challenges that occur in business operations. Specialized expertise is used to dispel hype, but without missing opportunities. Architecture thus strengthens the demand side in an area that is frequently dominated by the supply side (that is, manufacturers and suppliers).
  • Transparent and complete input—both technical and functional—for engineering, building, and testing activities. Architecture avoids a one-sided, technical approach to projects for building infrastructure services, and it also safeguards the alignment of delivered products with the predefined requirements for functionality and quality.
  • Improved alignment with operational services, because architecture enables engineering that is driven by service-level agreements (SLAs)/operating-level agreements (OLAs). Service-level management and operational services play a role at an early stage of creating new infrastructure services. This results in better and more effectively supported SLAs and OLAs. In combination with standardization and consolidation, this reduces the complexity of service-level management and operational services, too, because there is less diversity in the SLAs and OLAs.

First Steps

Infrastructure architecture is a young and immature discipline. Available literature is scarce, and it is very hard to find schools and universities that include some of it in their curricula. Much of what is called "infrastructure architecture" can actually be considered as "design." However, this is quite natural for a discipline that needs to develop more abstract methodologies and models.

Structuring and rationalizing design is a first step. Architecture methodologies should be developed by elaborating design practices, because this is the only way in which they stay in touch with reality. The border between architecture and design should remain diffuse, because drawing a hard line between the two creates a painful gap. Architecture misses its goal when architects are not able to transform their abstract constructs and artifacts into real solutions, because designers, specialists, and engineers cannot understand the directions that the architects provide. If this happens, engineers tend to start building their own solutions based on their own interpretation of the architect's high-level descriptions.

Friday, April 16, 2010

Virtualization is a game-changer

Unlike many technologies that only address specific pain points in IT, virtualization is a platform that is changing how data centers are built and how IT resources are provisioned. Furthermore, virtualization is a key driver for emerging IT initiatives such as VDI (virtual desktop infrastructure) and cloud computing.

For years, organizations have been virtualizing applications successfully with our products. Many of these organizations already have begun server virtualization projects and are looking at desktop virtualization projects on the horizon. However, server virtualization exists only within pockets of most organizations as the majority of data center workloads are still running on physical servers. As a result, the significant long-term TCO and ROI benefits of server virtualization remain largely untapped.

After an initial implementation phase, organizations face many challenges when trying to scale out their server virtualization platform. For starters, IT shops often target “lowest hanging fruit” servers when beginning virtualization projects. It is easy to virtualize those old Windows NT servers or other underutilized servers, which, for business reasons, must remain as a separate workload. While virtualizing these largely underutilized servers does provide some cost-savings, the long-term benefit is relatively low. The reason is simple: low-risk servers yield a low return because they do not represent a significant portion of the organization’s IT infrastructure. To fully realize all the ROI, TCO and technical benefits of server virtualization, IT departments need to virtualize key workloads and infrastructure servers, which cost the most to power, cool and maintain.

However, organizations are struggling to virtualize critical servers, often due to performance issues, availability requirements or capital expenditure costs. For example, IT shops may deem a critical workload as too risky to migrate due to high SLA uptime requirements. Performance might be another issue as IT shops might find that a virtualized server does not perform as well or support as many users as its physical counterpart. Lastly, the initial hardware and storage costs required to virtualize critical workloads might be too high.

How can IT shops overcome these barriers and increase virtualization’s penetration within their organization? Organizations should make virtualization training and certification a priority. Training and education help IT departments:

  • Properly scope and scale out the virtualization platform according to best practices
  • Achieve dramatically lower TCO through reduced power, cooling and data center costs
  • Avoid the common pitfalls that plague implementations and keep virtualization stuck in a niche
  • Extract the most performance from host servers to yield both higher consolidation ratios and user density

In addition to giving IT managers peace of mind, virtualization training and certifications help IT professionals stand out among their peers while arming them with in-demand skills. The percentage of infrastructure that is being virtualized is growing every day, and as that percentage grows, so will the need for IT professionals trained with the skills to design, implement and maintain virtualization solutions. This trend is apparent already: a visit to any top job site will show several openings for virtualization architects, engineers and consultants. We understand the impact virtualization is having on the IT landscape and job market and have therefore made the technology a cornerstone of current discussions with IT departments. With courses and certifications in virtualization ranging from the administrator level up through the architect level, MicroAge makes it easy for IT professionals to develop a training plan that fits any role or skill level.

Virtualization does not end with server virtualization. Organizations often start with server virtualization and consolidation projects, but then begin extending those projects to support emerging initiatives such as VDI. Furthermore, virtualization also goes hand-in-hand with cloud computing, as it is a core technology for infrastructure-as-a-service solutions, such as Amazon’s EC2. As organizations explore VDI and public and private cloud computing models, IT professionals must remember that these initiatives ultimately leverage the underlying server virtualization platform, making training and education crucial to the success of these projects.

Some IT professionals who have specialized in a particular aspect of IT (such as security, server engineering or networking) might question whether virtualization training and certifications are relevant to them. The answer is a resounding yes! Virtualization training and certifications are relevant for everyone in the industry, regardless of role specialization. As a platform, virtualization is strategic in nature: each specialization within IT will be expected to integrate its own resources within the virtualization platform. Hiring managers will begin seeking IT professionals who already have this experience or training.

Whether your organization already has plunged into virtualization or is just now beginning virtualization projects, one thing is clear: virtualization is here to stay. The technology is a key driver for current server consolidation projects as well as emerging initiatives such as VDI and cloud computing. As a result, comprehensive virtualization training should be a top training priority for IT. For IT managers, a trained staff will help their organization realize the full ROI benefits of their virtualization investment. For IT professionals looking to spruce up their resume, the virtualization skills learned today will prepare them for the data centers of tomorrow.

For Information on training or products offered please feel free to contact me any time at 480-366-2091


Monday, March 22, 2010

How does a SAN connect to my server?

A question I was asked the other day: does a SAN connect to a PC using something like external SCSI (whether fibre or not), or external SATA? Does it require a custom PCI expansion card to make the connection?

OK, there's lots of speculation out there. For those who don't have their own SAN and might be interested:

A Storage Area Network, or SAN, is a specialized chassis that holds disk write controllers and a bunch of hard drives. Much as a blade server requires a special chassis to hold the blades and provide back-plane communications between them, the SAN chassis holds 12-16 hard drives (usually), the write controllers (usually two for redundancy), and power supplies (usually two for redundancy), and provides a back-plane for communications between them.

The first piece of a SAN you purchase will be the most expensive piece. The initial chassis, with the write controllers, has most of the logic, the on board software, licensing, etc. Often, some number of the initial hard drives in the first chassis are reserved for the on board operating system. Sometimes you can use a portion of those disks for your own storage, but it's usually not recommended.

This is where the near-infinite configuration options come in. Most vendors offer either SAS or SATA drives. SAS drives are the current equivalent of the SCSI drives we used to order for servers, while SATA drives are what IDE drives used to be for workstations. SAS drives are fast, "enterprise class," and dual-ported. SATA drives are slower and single-ported, but offer vastly more storage. For primary file storage, you'd opt for SAS drives. For secondary disk-to-disk backups, or terabytes of video, order SATA drives.

If your first chassis can't hold enough disks to meet your anticipated storage needs, you can piggyback another chassis. Say you ordered the first one with SAS drives; you might order the second one with SATA drives for expanded, second-tier storage. You may decide to order four SAS chassis and three SATA chassis. The chassis themselves typically connect with 2-4 Gbps FibreChannel connections, depending on how current your SAN unit is. Again, the chassis back-plane is responsible for providing enough I/O for all this traffic. Each expansion chassis will also have redundant write controllers, but without the full O/S software that came in the first chassis, so they're usually about half the cost of the initial unit.

This is what makes a SAN so tantalizing to server admins. How do you add more storage to a SAN? Order a new chassis, stuffed with disks, and snap it on. Voila - more room. Go into the management software, and either add those disks to an existing server's assigned storage pool, or tell the SAN to give those disks to a different server that needed more storage.

The other thing a SAN can do is automatically swap in a hot spare from anywhere in the SAN for any failed disk. Have 150 disks in 40 different RAID groups? One hot spare can be put into any one of those RAID groups, with no need for manual swapping until you get to it.

How do you add more storage to a traditional server? Shut the server down, open the case, pray there are some free power connectors and data cables, add some more hard drives, power back on, double check the BIOS to make sure they're recognized, initialize the disks, format them, set up ACL permissions, shares, etc. With the SAN, you never have to crack your server, it stays running, the ACL stays in place, the shares are still there, the total capacity simply got bigger.

OK, back to the original question - how do you connect a SAN to a computer? There are currently two methods - FibreChannel or iSCSI.

FibreChannel uses a different card setup than most folks are used to. Again, FibreChannel offers 2, 4, 8 and now 20 Gbps speeds. Usually you have to order two cards per server, in addition to the Ethernet NICs they already have. These then connect to a fibre switch as a storage network.

With iSCSI, everything is done over regular TCP/IP Ethernet. iSCSI uses TCP/IP (typically TCP ports 860 and 3260). In essence, iSCSI simply allows two hosts to negotiate and then exchange SCSI commands over IP networks. By doing this, iSCSI takes a popular high-performance local storage bus and emulates it over wide-area networks, creating a storage area network (SAN). Unlike some SAN protocols, iSCSI requires no dedicated cabling; it can be run over existing switching and IP infrastructure. However, the performance of an iSCSI SAN deployment can be severely degraded if it is not operated on a dedicated network or subnet (LAN or VLAN). As a result, iSCSI is often seen as a low-cost alternative to Fibre Channel, which requires dedicated infrastructure (although Fibre Channel over Ethernet, or FCoE, does not).
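Because iSCSI is just TCP/IP, the first troubleshooting step is the same as for any network service: confirm the target portal answers on its port. A minimal sketch, with a placeholder portal address:

    # Minimal sketch: check whether an iSCSI target answers on TCP port 3260
    # before digging into initiator configuration.
    import socket

    TARGET_PORTAL = "192.168.50.10"  # hypothetical iSCSI portal address
    ISCSI_PORT = 3260

    def portal_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        if portal_reachable(TARGET_PORTAL):
            print(f"{TARGET_PORTAL}:{ISCSI_PORT} is reachable - the portal is listening.")
        else:
            print(f"No answer on {TARGET_PORTAL}:{ISCSI_PORT} - check VLANs, cabling, and SAN config.")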

Although iSCSI can communicate with arbitrary types of SCSI devices, system administrators almost always use it to allow server computers (such as database servers) to access disk volumes on storage arrays. The most common objective of an iSCSI SAN is storage consolidation:

Storage consolidation
Organizations move disparate storage resources from servers around their network to central locations, often in data centers; this allows for more efficiency in the allocation of storage. In a SAN environment, a server can be allocated a new disk volume without any change to hardware or cabling.
Ideally, you'd have four NICs per server: two for "public" communications and two for your iSCSI data connection. You can get by with legacy servers that only have two network cards, one for the public network and one for iSCSI. With a gigabit switch, you set up a couple of virtual LANs (VLANs) in the switch.

In both FibreChannel and iSCSI, it is vitally important that you reduce collisions between your public traffic and your SAN traffic, and between the two write controllers in your SAN. So - for my iSCSI solution:

Public NIC goes into a port on the switch that talks to the rest of my LAN. All client workstations talk to the server through that port. All clients access data shares over that port, that IP. That NIC is assigned a typical IP in our corporate addressing scheme.

The iSCSI NIC on Server 1 goes into a switch port that talks to one set of ports on my SAN. Server 2's iSCSI NIC goes into a port on a different VLAN that talks to two different ports on my SAN. That way they avoid contention, and I can expand this indefinitely. Each server has a primary port on one write controller and a secondary port on the other write controller. The iSCSI NICs have manually assigned IP addresses that are non-routable and don't match the corporate standard, and thanks to the VLANs on the switch, they should never, ever be visible to anyone else on the corporate network.

Most choose to leave the operating system boot drives in the servers. It makes it a lot easier to troubleshoot things when they go wrong, and it makes the SAN an optional extension of the servers. There are two versions of the Microsoft iSCSI initiator you can download: one is for booting off of an iSCSI SAN, and one is for "normal" use.

You install the Microsoft iSCSI initiator on your Microsoft server. Depending on the vendor, you may have to install their drivers or management software as well. On your SAN, you carve up your available disk space, decide how you want the RAID to work (RAID 1, 0, 10, 3, 5, 6, etc.), and assign it to the servers. Once it's assigned, you go into the server, pull up Disk Management, scan for new drives, and go from there. At that point it looks like an internal hard drive to the server. The first time, you have to format it and do all the NTFS ACL/sharing work, but from there, you're done with internal server work.
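If you prefer to script the initiator side, Windows also ships a command-line front end (iscsicli) alongside the graphical initiator. The sketch below is a rough outline of the discovery-and-login steps, with placeholder portal and target names; your SAN vendor's tools may wrap these steps for you.

    # Minimal sketch, assuming a Windows server with the Microsoft iSCSI
    # initiator installed: point it at the SAN portal, list targets, log in.
    # Portal IP and target IQN are placeholders; run from an elevated prompt.
    import subprocess

    PORTAL_IP = "192.168.50.10"                      # hypothetical iSCSI portal
    TARGET_IQN = "iqn.2010-03.com.example:server1"   # hypothetical target name

    def iscsicli(*args):
        cmd = ["iscsicli", *args]
        print(">", " ".join(cmd))
        subprocess.run(cmd, check=True)

    iscsicli("QAddTargetPortal", PORTAL_IP)   # discover targets on the SAN portal
    iscsicli("ListTargets")                   # show what the SAN is offering
    iscsicli("QLoginTarget", TARGET_IQN)      # connect; the LUN then appears in Disk Management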

And there you have it!

Thursday, March 4, 2010

Best Practices for Deploying Hosted Virtual Desktops


I thought this was a great article by Brian Gammage, Gartner RAS Core Research.

Overview



Enterprises evaluating, testing or deploying hosted virtual desktops (HVDs) can learn a lot from the experiences of other organizations that have already employed the approach. This research summarizes current best practices.

Key Findings
  • Through mid-2010, structured task workers are the user group that can be most viably addressed with HVDs.
  • Persistent personalization of HVD images will expand the range of addressable users to include desk-based knowledge workers. It will also be a critical enabling technology for the eventual support of mobile users from 2011.
  • Confusion over the roles and responsibilities of desktop and data center IT staff is a common issue for companies that deploy HVDs.
  • Dedicated HVD images, used for secure remote access, will be rendered obsolete by persistent personalization in 2012.
Recommendations
  • Be realistic in planning which users will be supported through an HVD. Start with desk-based structured task workers and plan to expand to desk-based knowledge workers in 2010.
  • Cost-justify any request for dedicated HVD images on a case-by-case basis, even if intended to support secure remote access.
  • Define the responsibilities of desktop and data center staff before beginning HVD deployments.
  • Ensure that full production requirements for server, storage and network infrastructure are factored into pilot deployments of HVDs.
  • Aggressive adopters of HVDs should plan for major updates every 12 months through 2012.



Analysis



Interest in HVDs continues to grow. Since first writing about this architectural approach in 2005, Gartner has talked with over 500 organizations that are evaluating, testing or deploying HVDs. Based on our discussions with those that have deployed broadly or partially across their organization, this research summarizes current best practices for HVD deployment and use.





Identify Target Users and Applications

HVDs are not suitable for every user requirement or application. Limitations in the performance of centralized applications when accessed over local-area or wide-area networks must be considered when determining who is a viable HVD candidate. Technical improvements in HVD products (primarily in the performance of connection protocols) will alleviate the impact of these limitations through 2009 and 2010. However, these will reduce rather than eliminate latency issues for users.

An HVD implementation separates the user from his or her computing environment, introducing two factors that add latency to application performance: network throughput and protocol-connection constraints. Only the latter can be addressed through improvements in HVD products. Even if the protocol imposed no performance constraints, network latency would still be an issue. For enterprises, this means the user and application requirements that can be viably addressed with an HVD will expand as products improve, but are unlikely to ever encompass the full user population.

The requirements of mobile and intermittently connected users must also be planned for. Although HVD vendors are beginning to describe approaches that will permit the downloading of full or partial HVD images, these are unlikely to be available before 2011, at the earliest, and will require other infrastructure changes before they become viable for broad deployment and use. The changes in HVD image structure that will eventually help support offline users will also be critical in expanding the addressable audience of non-mobile users. By 2011, support for "persistent personalization" of images (allowing changes made to HVD images to be retained between active sessions) and user-installed applications will expand the number of users that can be viably addressed with an HVD to include most desk-based knowledge workers.

With these constraints and HVD development expectations in mind, Gartner recommends the following approach to identifying which workers can viably use an HVD now and through 2011:

  • Focus on structured task workers using desktop PCs for transaction-based and personal productivity applications first. Begin HVD deployments with users that do not personalize their images (that is, through the adjustment of settings and other features). In most cases, these users will be equipped with a well-managed and locked-down desktop PC.
  • Do not implement HVDs for users of graphic-intensive or streaming media applications before mid-2010, at the earliest. Even then, only deploy HVDs for users accessing over local-area networks after thorough testing.
  • Do not plan to extend HVD deployments to desk-based knowledge workers before 2010. Initially, only those workers that do not need to install applications will be addressable, but this will require the use of a third-party "point solution" product to support persistent personalization of the user's image. By 2011, desk-based knowledge workers that need to install applications will also be addressable.
  • Do not plan to support mobile users with HVDs before mid-2011, at the earliest.




Consider Printing Requirements

Most HVD implementations support local printing using a Universal Serial Bus (USB) connection to the accessing device. However, not every type of printer hardware is supported: HVD images provide generic ("universal") printer drivers that offer adequate functionality with most printers, but do not support some device-specific functions. Organizations with complex remote printing requirements, or that need to support specific printer hardware, should budget to deploy an additional printing utility. Such utilities are available from a number of third-party vendors. Organizations that may need to support high volumes of concurrent remote print requests (such as ticketing agencies or bank branches) should also consider the potential for network bottlenecks.





Restrict Use for Secure Remote Access

Through mid-2009, the majority of HVD deployments targeted two user groups/requirements:

  • Structured task workers using desktop PCs for transaction-based and personal productivity applications
  • Secure remote access from specific devices outside the corporate security perimeter

These two groups typically use different HVD deployment models. The former uses pooled deployments designed to optimize resource utilization, but this imposes restrictions on how much the user image can be "personalized." The second group typically uses dedicated HVD images, which support personalization but are significantly less flexible and cost more to operate than pooled images.

Changes in the way HVD images are provisioned and managed will facilitate the personalization of pooled images, but this capability is unlikely to be integrated into HVD products before late 2010. Enterprises that use HVDs to support secure remote access requirements (where we assume the user is prepared to accept latency in return for access from his or her remote location) should plan to migrate from the dedicated to the pooled approach at that point. Until then, use for secure remote access requirements is likely to be an expensive option that creates additional obstacles for the expansion of pooled deployments to other workers. In most cases, enterprises should only deploy dedicated HVD images for users that can demonstrate a genuine business requirement that cannot be met through more-traditional means (such as a notebook).





Redefine Roles and Responsibilities

One of the HVD issues most frequently described to Gartner is not technical; rather, it relates to confusion in roles and responsibilities caused by the HVD architecture.

An HVD moves a "thick client" PC image from a remote location to the data center, where it becomes a server workload. The functions of desktop image management and support will move with the image, so the IT staff responsible for desktops will naturally assume they are still fully responsible for the image in its new location. However, the IT staff responsible for the data center and servers is unlikely to see it that way. By moving the image, new scope for confusion is created, and this must be addressed through explicit definitions of the boundaries of responsibilities for the personnel involved. Unless this occurs, productivity will be reduced, and service levels for users may be compromised.

In most cases, the boundary between the responsibilities of desktop and data center IT staff will be the virtual machine "bubble" of the HVD image. Desktop staff should continue to take responsibility for what happens inside this bubble, but responsibility for how and where the bubble resides will move to data center staff (the desktop becomes a server workload). Enterprises should plan to review these responsibilities as and when HVD image provisioning technologies evolve to support persistent personalization and offline use.





Train and Communicate

Changing the location or the performance of desktop applications may disrupt the routines of some workers. IT staff may also need time to adjust to where and how their responsibilities must be fulfilled. Don't assume that either group will automatically understand the implications of a shift from a distributed thick-client environment to HVDs. Communication of what has changed and why will be essential, especially if there is a need to "sell" the new architecture to less-enthusiastic users. Training will also be critical in helping avoid disruption and lost productivity.





Plan for Scalability

Although many organizations raise doubts about the scalability of HVD deployments, there is no obvious architectural or technical foundation for such doubts. A number of organizations have already deployed HVDs to around 5,000 to 10,000 users, and a handful have deployed HVDs to more than 10,000 users. However, scalability issues can be introduced through incomplete planning. These issues typically fall into three categories:

  • Network — Organizations fail to evaluate the effect of increased network traffic as users interact with their hosted desktop images. Unless the topology of network requirements is realistically evaluated, new bottlenecks can occur.
  • Servers — Many organizations overestimate the number of HVD images a server will typically support. Despite frequent claims from vendors that eight HVDs can run on each server processor core, we continue to recommend a limit of five. Above that level, shared access to storage and other server resources can create performance bottlenecks within the server.
  • Storage — When not in use, HVD images reside in network storage, and this can also create bottlenecks. If many users log on at the same time, then the result will be delayed boot times for all.

All of these issues can be addressed through appropriate provisioning and careful planning. However, most HVD deployments begin small, with a pilot project for a few hundred users that does not strain network, server or storage performance. It's only as the HVD deployment is moved into production for more users that these problems appear. Our recommendation for enterprises is to factor the eventual production requirements into the pilot-phase planning process.
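As a back-of-the-envelope aid, the guideline of five HVDs per processor core translates directly into server counts. The sketch below uses hypothetical core counts and user numbers purely to show how quickly a pilot-sized estimate diverges from production scale.

    # Minimal sketch of the pilot-to-production sizing math discussed above,
    # using the guideline of five HVD images per server processor core.
    HVDS_PER_CORE = 5          # conservative guideline cited in the text
    CORES_PER_SERVER = 8       # hypothetical: dual-socket, quad-core hosts
    USERS = 2500               # hypothetical production target

    hvds_per_server = HVDS_PER_CORE * CORES_PER_SERVER
    servers_needed = -(-USERS // hvds_per_server)   # ceiling division

    print(f"{hvds_per_server} HVDs per server -> {servers_needed} servers for {USERS} users")
    # A 300-user pilot hides this: it fits on about 8 such servers and barely
    # touches the network and storage load that 2,500 users will generate.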





Plan for Rapid Update and Demand Product Road Maps

Although growing in viability, HVD products and technologies are still maturing. Organizations that want to address a significant part of their user populations with HVDs will need to embrace new developments rapidly to overcome existing limitations and to expand the addressable user population. This implies regular updates and refreshes to HVD components. For example:

  • Improvements in HVD connection protocols will require some changes to brokering and session management software.
  • Persistent personalization point products will be rendered obsolete as the capabilities of HVD management tools improve.
  • Changes in the way HVD images are deployed and used will require changes to brokering software and instrumentation. The coverage and function of PC life cycle management tools may also be affected.

More-aggressive adopters of HVDs should plan for major updates every 12 months through 2012. Support from vendors with realistic and detailed product road maps will be essential for plotting the lowest-cost and most-efficient upgrade path. Enterprises should press vendors to disclose future product plans, even where these are not yet fully confirmed.





Rationalize HVD Images

HVD images are complex and large — typically anywhere between 5GB and 15GB per user (including user data). In most current HVD deployments, images are managed as integral objects, which drives high storage requirements. Recent developments in HVD products partly address this by deduplicating the largest single element in each HVD image: the Microsoft Windows operating system (OS). Citrix's Provisioning Server does this for XenDesktop and VMware's View Composer delivers the same functionality for View (previously VDI) deployments.
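The storage effect of pulling the common OS out of each image is easy to estimate. The sketch below uses the 5GB-15GB per-image range above, with a hypothetical user count and an assumed OS footprint, purely to illustrate the arithmetic.

    # Minimal sketch of the storage arithmetic behind deduplicating the Windows
    # OS out of each image. User count and OS footprint are assumptions, not
    # vendor figures; the 5GB-15GB per-image range comes from the text above.
    USERS = 2000        # hypothetical deployment size
    OS_GB = 2.5         # assumed size of the common OS portion of each image

    for image_gb in (5, 15):                      # per-user range from the text
        raw_tb = USERS * image_gb / 1024
        # One shared OS copy plus the user-specific remainder of every image.
        dedup_tb = (OS_GB + USERS * (image_gb - OS_GB)) / 1024
        print(f"{image_gb}GB images: ~{raw_tb:.1f}TB stored as whole images, "
              f"~{dedup_tb:.1f}TB with the OS deduplicated")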

These changes will help, but much of the complexity in HVD images is driven by the integration of applications (both with the OS and with each other). This can be addressed independently of developments in HVD technologies through a range of approaches, including application virtualization, streaming or server-based computing. Where an application is common to a high number of users, it should be removed from the stored image (whether or not the OS is deduplicated) and delivered to the OS, either through streaming (at boot time) or through a server-based approach.

The most successful HVD deployments, to date, have typically combined the shift of the PC image to a data center with an image rationalization initiative. In most cases, the rationalization project was run separately, after the HVD deployment began.





Ensure Licensing Compliance

For most applications, a shift from a traditional desktop deployment to an HVD carries no licensing implications, but some applications may be affected. Licensing of the Windows client OS is a special case: a Windows Vista Enterprise Centralized Desktop (VECD) license will always be required. There are two types of VECD licenses: those for named PCs and those used for thin-client terminals or unnamed PCs. Prior to any HVD deployment, a full review should be carried out to avoid any potential for noncompliance with current license agreements.



© 2009 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. Reproduction and distribution of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.