For what it's worth, this is totally unsupported, so don't even think about trying this on a production system.  Anyway, if you want to run Microsoft Security Essentials (MSE) on a Server 2012 box in a lab environment, follow this easy process:

1) Right click on the installer and choose Properties.

2) In the resulting pane, select Compatibility.

3) Select the checkbox beside "Run this program in compatibility mode for" and choose Windows 7.  Hit OK.

4) Open a command prompt as an administrator.

5) Execute the file from the command prompt with the /disableoslimit switch (e.g. C:\mseinstall.exe /disableoslimit)

Again, this is completely unsupported, so use this at your own risk.
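
If you find yourself repeating this in the lab, the same steps can be scripted.  Below is a minimal sketch, assuming the installer lives at C:\mseinstall.exe (swap in your own path): it sets the Windows 7 compatibility flag via the per-user AppCompatFlags registry key and then launches the installer with the /disableoslimit switch.  Treat it as illustrative, not a supported method.

# Hypothetical helper: apply Windows 7 compatibility mode to the MSE installer
# and launch it with /disableoslimit. Run from an elevated prompt. Lab use only.
import subprocess
import winreg

INSTALLER = r"C:\mseinstall.exe"  # assumption: adjust to your installer's path

# Per-user compatibility flags live under AppCompatFlags\Layers; the value name
# is the full path to the executable and the data is the compatibility layer.
key_path = r"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    winreg.SetValueEx(key, INSTALLER, 0, winreg.REG_SZ, "WIN7RTM")

# Kick off the installer with the OS version check disabled.
subprocess.run([INSTALLER, "/disableoslimit"], check=True)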

Please see part 1 and part 2.

So, here's what it comes down to: performance!  I knew the crappy Marvell 9128 controller on my motherboard would be a limiting factor, but I was still hoping for decent performance.  The Bobulator is now two years old, and SATA 6Gb/s was a brand new technology in December 2010.  It wasn't supported by Intel's ICH10R chipset, so Asus added SATA 6Gb/s to their high-end motherboards using a third-party chip.  These "tack-on" solutions always suck, but it's what I've got.  I'll build a new Bobulator in about 12 months, so I'll be able to unlock the full performance of this SSD when that happens.

I used a few different tools to test performance.  Most of the testing was done with a tool called IOMeter.  IOMeter is free and it's the industry-accepted tool for storage performance benchmarking.  It's the same tool I use for performance testing on multimillion-dollar storage appliances, and it works just as well for standalone disks on gaming/work PCs!


Random IO
The first test was for random IO.  Random IO occurs when you have a whole bunch of files being accessed simultaneously, causing the drive to hop all over the place grabbing chunks of data.  A good example is launching an operating system, which requires a bunch of files to be accessed at the same time.  SSDs are insanely good at random IO because they can access non-sequential sectors just as quickly as sequential sectors.  It takes the same amount of time to access sectors 1, 324, and 179 as sectors 1, 2, and 3.  This isn't the case with spinning disks, which can only read or write a given sector once the correct portion of the platter is aligned underneath the head.

My existing hard drive was able to provide 380 IOPS (input/output operations per second; one IOP is a single read or write operation) for random reads and 366 IOPS for random writes.  That's actually not bad for a 7,200RPM spinning disk.  They max out at approximately 80 IOPS natively and the rest is all caching, hence the value of a large cache (64MB or higher) on a spinning disk.  The SSD absolutely blew it out of the water, though, with nearly 57,000 IOPS for random reads and just over 46,000 IOPS for random writes.  Those work out to mind-boggling improvements of 14,945% and 12,593%, respectively.
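
For anyone who wants to reproduce the math, the improvement figures are just the relative change between the old and new measurements.  A quick sketch (using the rounded IOPS numbers above, so the results land slightly off the exact figures I quoted from the raw IOMeter output):

# Percentage improvement = (new - old) / old * 100
def improvement(old, new):
    return (new - old) / old * 100.0

print(f"Random read:  {improvement(380, 57000):,.0f}%")   # ~14,900%
print(f"Random write: {improvement(366, 46000):,.0f}%")   # ~12,470%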

...




Sequential IO
IOPS can be misleading, though, as the number of operations you can perform is meaningless if those operations don't reflect real-world usage.  Sequential IO is a much better test of true performance.  Unlike random IO, sequential IO is a measure of performance when sectors are read in order (e.g. 1/2/3 instead of 17/5/32).  A good example of sequential IO is a straight read of a large file, like an HD movie.  Large numbers of sectors are read in ascending numerical order to allow the media player to stream your movie from the disk drive.
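
IOMeter is the right tool for serious benchmarking, but if you just want to see the random-versus-sequential gap for yourself, a crude sketch like the one below gets the idea across.  It reads 4KB chunks from a large existing file (adjust TEST_FILE to point at something on the disk you care about), first sequentially and then at random offsets, and reports rough throughput and IOPS.  It deliberately ignores things IOMeter handles properly, like defeating the OS cache and controlling queue depth, so treat the numbers as illustrative only.

# Crude random vs. sequential read comparison. Not a substitute for IOMeter:
# the OS file cache will flatter both numbers, especially on repeat runs.
import os, random, time

TEST_FILE = r"D:\some\large\file.bin"   # assumption: any multi-GB file on the disk under test
BLOCK = 4 * 1024                         # 4KB reads, a typical random IO size
COUNT = 20000                            # number of reads per test

def run(path, sequential):
    size = os.path.getsize(path)
    max_block = size // BLOCK
    with open(path, "rb", buffering=0) as f:
        start = time.perf_counter()
        for i in range(COUNT):
            block = i % max_block if sequential else random.randrange(max_block)
            f.seek(block * BLOCK)
            f.read(BLOCK)
        elapsed = time.perf_counter() - start
    mbps = (COUNT * BLOCK) / elapsed / (1024 * 1024)
    iops = COUNT / elapsed
    return mbps, iops

for label, seq in (("Sequential", True), ("Random", False)):
    mbps, iops = run(TEST_FILE, seq)
    print(f"{label:>10}: {mbps:8.1f} MBps, {iops:10.0f} IOPS")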

My existing drive provided 46MBps for sequential reads and 59MBps for sequential writes.  These numbers are a bit backward, as read operations should always outpace write operations.  I chalk this up to an overtaxed drive.  I had three volumes on a single physical disk (C: for OS, D: for applications, and E: for mass storage), so I was splitting the paltry IO across three different volumes.  For instance, any IO used by the OS on drive C: would be unavailable for applications on drive D:.  By moving the C: drive to the SSD, I was not only accelerating the C: drive substantially but also reducing the IO constraints on the D: and E: drives. 

The SSD provided 383MBps for sequential reads and 238MBps for sequential writes.  That's 832.61% and 403.39% of the old drive's throughput, respectively.  Sadly, this is where the weakness of the Marvell chip really shows up.  I should be seeing another 140-160MBps for reads and nearly 280MBps for sequential writes, but the Marvell 9128 just can't handle it.  I'm considering purchasing a high-end RAID adapter just to get the performance I'm paying for, but that money can probably be put to better use by saving up for the next Bobulator.

...




Launch Times

For me, launch time isn't a huge concern as I very rarely reboot my system.  Like most of you, my PC runs 24/7.  Screw the environment, right?  With that said, I understand that many people use OS launch times as their benchmark for SSDs.  Keep in mind that I have a very "heavy" OS that loads a lot of extremely resource-intensive applications (e.g. Visual Studio, SQL Server), so my load times won't be anywhere near as low as those of a system that is specifically engineered for fast boots.  For those interested, the OS is Windows 7 Ultimate SP1 64-bit.

Since my system is on an Active Directory domain, I have to provide credentials to log into the system.  As such, I'm breaking my results into two categories.  The first, Windows Launch, measures the time from hitting the power button to seeing the "Press Ctrl+Alt+Delete to Log On" prompt.  The second, Desktop Ready, measures the time from hitting Enter after entering my credentials to the point where everything is completely loaded and my primary disk backs down from 100% utilization.

The launch time for Windows was 65 seconds with the spinning disk.  This dropped to a mere 18 seconds after migrating to the SSD, making the boot roughly 3.6 times faster.  The desktop ready time was a very substantial 386 seconds on the spinning disk, or nearly six and a half minutes.  This is why you should never, ever put your applications and operating system on the same spinning disk!  The queue lengths were absolutely astronomical for that poor disk.  By improving the IO of the operating system and removing that burden from the disk containing my applications, I now have a ridiculously fast desktop ready time of 15 seconds.  That's roughly 25 times faster!

...




Final Thoughts

Overall, I'm incredibly satisfied with the Samsung 840 Pro series, and I would recommend it to any enthusiast that's looking to improve the performance of their system.  Honestly, I can't think of a single component I've ever replaced that had even half the impact of this SSD.  Just look at these numbers!

...

Please see part 1.

I'm a bit old school and I had planned on doing a clean Windows install, along with a reinstall of all of my applications and a migration of my data.  Disk cloning tools used to be a notorious pain in the ass, and it was always a crapshoot when cloning a system disk, particularly when the source and target disks were not the same size.  However, I've heard good things about the Samsung Data Migration tool, so I figured I'd give it a shot and clone my old spinning disk onto the new SSD.  I've got constant bit-level backups running to my server and to the cloud, so if all else fails I could always reinstall.  Besides, this is a copy operation, so if it blew up on me I could simply boot using the old partition on the original spinning disk and try Ghost or another professional utility.

My worries were overblown, though, because the Samsung Data Migration tool worked like a champ!  The tool is very simple to use: you tell it which volume you're cloning from and which disk you want that volume to end up on (the software only supports Samsung SSDs as targets), and it does the rest.  I was able to migrate all 113GB from my C: drive in 16 minutes at a surprisingly brisk 113MBps.  Considering that I had three volumes (C:, D:, and E:) all living on one 7,200RPM disk, that wasn't bad at all.

...



Once the cloning was done, all I had to do was reboot the computer and change the boot order in BIOS to boot off the shiny new SSD instead of the HDD.  Once that was done, the system instantly booted into Windows from the SSD.  No fuss, no muss!


Optimization

There are a couple of changes you'll want to make to optimize the performance of the SSD.  First, and most importantly, you'll want to enable AHCI mode in BIOS if it isn't already set.  Many motherboards (including mine) default to IDE mode for compatibility with legacy SATA disks.  AHCI mode will improve performance and is a requirement for TRIM support.  TRIM tells the SSD which blocks are no longer in use, allowing its garbage collection to maintain performance over time.  Without TRIM, the SSD's performance will decrease substantially as time goes on.

AHCI also enables native command queuing, or NCQ, on your disks (if they support it).  NCQ is a technology that re-orders your read/write operations as they arrive at the disk.  On a spinning disk without NCQ, your disk will need to make three passes (i.e. the platter spins three times) if it gets requests for sectors 5, 3, and 1.  With NCQ, the requests would be re-ordered as 1, 3, and 5, allowing the disk to read all three sectors in a single pass.  Obviously, this is a moot point on an SSD, but you'll likely see some decent performance gains on random reads and writes on any spinning disks you have that support NCQ.
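
To make the NCQ example concrete, here's a toy sketch that counts how many platter rotations a spinning disk would need to service a small request queue, assuming a new rotation is needed whenever the next requested sector is behind the head:

# Toy model of NCQ: count platter passes needed to service a request queue.
# A new pass is needed whenever the next sector is behind the head position.
def passes_needed(requests):
    passes, head = 1, -1
    for sector in requests:
        if sector < head:       # sector already went by; wait for another rotation
            passes += 1
        head = sector
    return passes

queue = [5, 3, 1]
print("Without NCQ:", passes_needed(queue))          # 3 passes
print("With NCQ:   ", passes_needed(sorted(queue)))  # 1 pass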

In Windows, you'll also want to ensure a couple features are turned off.  First, make sure the Indexing Service is either disabled entirely or at least turned off for the SSD.  Indexing is used to build a catalog of files and their contents, thus speeding up searches.  This isn't necessary for an SSD as it should be able to do a raw search very quickly, and the constant disk thrashing caused by indexing can reduce the lifespan of the SSD.

Next, you'll definitely want to disable disk defragmentation for the SSD.  You can either disable the Disk Defragmenter service entirely or disable scheduled defragmentation for the SSD.  Since I still have a couple spinning disks in my system, I left Disk Defragmenter enabled on those disks and disabled it for the SSD.

Finally, you'll want to disable the SuperFetch service.  This is a service that caches commonly-read files in RAM to improve access times.  Since SSDs have insanely fast read speeds, this isn't necessary.  Disabling it also keeps more RAM available to handle system needs, thus reducing the reliance on the paging file.  This, in turn, reduces wear on the SSD.
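
If you'd rather check all of this from a script than click through services.msc, the sketch below wraps the relevant built-in commands.  fsutil reports whether TRIM is enabled (DisableDeleteNotify = 0 means TRIM is on), and sc query reports the state of the services discussed above.  The service names are the Windows 7 defaults as I understand them (WSearch for indexing/search, defragsvc for Disk Defragmenter, SysMain for SuperFetch); adjust if your system differs, and run it from an elevated prompt.

# Quick status check for the SSD-related Windows settings discussed above.
# Service names (Windows 7): WSearch = indexing/search, defragsvc = Disk
# Defragmenter, SysMain = SuperFetch. Run from an elevated command prompt.
import subprocess

def run(cmd):
    out = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    return out.stdout.strip()

# TRIM: DisableDeleteNotify = 0 means TRIM commands are being issued.
print(run("fsutil behavior query DisableDeleteNotify"))

for service in ("WSearch", "defragsvc", "SysMain"):
    print(f"--- {service} ---")
    print(run(f"sc query {service}"))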

Please see part 1 and part 3.

Note:  This article was originally written for another site back in December 2012.  Although this page is oriented toward enterprise storage technology, I thought somebody might find it beneficial.

 

Over the past two years, my job has migrated from a wide-ranging network engineering and systems administration role into a much more focused storage engineering role.  I've had the opportunity to work with some of the largest and fastest commercial storage devices on the planet and I'm constantly barraged by amazing new storage and archiving technologies.  Some of the SSD-centric storage systems are hitting over 1 million IOPS, albeit with questionable benchmarking. 

As you guys may have noticed, my home network tends to reflect my professional occupation, and it's time I started catching up on storage.  Over time, I've become increasingly aware of how ridiculously dated the storage system is in my primary gaming/work/school/etc PC, affectionately called The Bobulator.  Although SSDs have been commercially available at the consumer/prosumer level since 2007, I've been avoiding them like the plague for a variety of reasons.  The initial models were ridiculously expensive, had incredibly low capacity (32GB was the largest size available), and had substantial issues with poor lifespans due to a lack of TRIM support.  I could save up money for a good SSD if the performance warranted it, but I can't handle a lack of capacity and I absolutely refuse to jeopardize the safety of my data with an unreliable drive.  Rule 1 of storage administration is never lose data!

As always happens with hardware, the situation has changed substantially over the past couple years.  SSD prices have dropped by 66% over two years from roughly $3.00/GB to roughly $1.00/GB for high-quality drives.  Some decent manufacturers are as low as $0.85/GB now.  Capacity has also improved substantially.  64GB SSDs were the largest commonly available drives two years ago, and now quite a few 512GB SSDs are on the market (albeit still very pricy).

It took a while to wade through the massive number of SSDs on the market to find the right one for the Bobulator.  Whereas hard drives require very strict tolerances to manufacture (thus requiring very expensive factories to build them), SSDs are a lot more forgiving and can be built in factories that were designed for other solid state components such as motherboards, video cards, etc.  As such, there is a veritable crapload of vendors hawking SSDs.  You are limited to a handful of companies like Western Digital, Seagate, and Hitachi for a spinning disk, but there are easily 30+ vendors selling SSDs.

Here were my requirements for my first SSD for home use:

  • At least 200GB capacity, with 256GB desired.
  • High reliability ratings from consumers.
  • Blazing fast throughput.  At least 400MBps for sequential reads and writes, and 40,000+ IOPS for random IO.



I finally settled on this bad boy:

...



256GB is the sweet spot for me.  This is only replacing my OS drive (C:).  I keep all of my applications on a second volume (D:), mass storage on a third (E:), and a giant collection of Steam games on a fourth (F:).  I am currently consuming ~100GB on my C: drive, much of which is in my user profile.  Some day I'll move my user profile to another drive, but that is time consuming and I'm lazy.

The Samsung 840 Pro series claims up to 540MBps for sequential reads, 520MBps for sequential writes, and up to 100,000 IOPS for random reads, which is ridiculously fast.  Real-world performance testing from several different reviewers showed an average of 515MBps for sequential reads and 475MBps for sequential writes, which is insanely fast for a consumer-grade SSD.  Most of the other comparable SSDs in the consumer price range offer about half that performance.  Sadly, the SATA 6Gb/s ports in my computer are attached to a crappy Marvell controller, so I'll likely only see about 2/3 of that.  However, I'm planning on rebuilding the Bobulator in about 12 months and I'll move this SSD to the new rig, so it's wise to plan ahead.

 

Please see part 2 and part 3

CrashPlan Service Fails Repeatedly

I've been using CrashPlan to back up my home PCs and servers for about three months now and I'm generally very happy with the service.  I back up everything locally to my file server as well as to the CrashPlan cloud service.  There have been a couple of hiccups, such as systems getting disconnected from the cloud and being unable to reconnect without restarting the service, but I ran into a problem this week that took a bit of sleuthing to figure out.

My main file server has about 2.5TB of data that is slowly but surely being uploaded to the cloud.  I originally had it limited to 512Kbps of upload bandwidth and had to pony up the cash a couple weeks ago to upgrade my cable Internet service.  It now runs at 3Mbps, but I've still got about a month to go before the initial backup is completed.  At the moment, I've got 927.8GB uploaded.  Until a couple days ago, the upload process was relatively flawless, but then the service started randomly failing and restarting about once every 60-75 seconds.  It would log event ID 7036 ("The CrashPlan Backup Service service entered the stopped state") in the System log, then the service would immediately restart and log the same event ID 7036 ("The CrashPlan Backup Service service entered the running state").

CrashPlanEventID.jpg

It took a fair amount of web sleuthing to find the culprit.  For some odd reason, CrashPlan utilizes an *.ini file that specifies a hard minimum and maximum memory allocation.  By default, the *.ini file specifies a minimum of 16MB of RAM and a maximum of 512MB of RAM.  I'm sure this is sufficient for the majority of CrashPlan home users that are backing up a gigabyte or two, but 512MB apparently becomes insufficient between 800-900GB of uploaded data.  I'm not sure why memory requirements increase based on the amount of data that has already been uploaded, but I'm assuming the deduplication process is keeping track of the hashes associated with the uploaded data, and more data means more hashes.

You can increase the memory limitation for CrashPlanService.exe by modifying CrashPlanService.ini.  By default, this file is located in C:\Program Files\CrashPlan on Windows Vista/2008 and up.  You must stop the service called CrashPlan Backup Service before you can modify this file!  Once you've opened the file, look for the phrase -Xmx512M, which is on the fourth line of the document by default with CrashPlan 3.5.2.  Modify 512 to reflect the amount of RAM you'd like CrashPlan to be able to consume.  Keep in mind this is the maximum and CrashPlan will only consume it if necessary!  The screenshot below shows my CrashPlanService.ini file after increasing the memory maximum to 4,096MB (4GB), with the relevant bit highlighted:

CrashPlanServiceIni.jpg
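
If you'd rather not hand-edit the file, something like the sketch below does the same substitution.  It assumes the default install path and the stock -Xmx512M entry; stop the CrashPlan Backup Service first, run it from an elevated prompt, then restart the service.

# Bump the CrashPlan service's maximum heap size by rewriting the -Xmx entry.
# Stop the "CrashPlan Backup Service" before running this; restart it after.
import re

INI_PATH = r"C:\Program Files\CrashPlan\CrashPlanService.ini"  # default location
NEW_MAX_MB = 4096                                              # 4GB ceiling

with open(INI_PATH, "r") as f:
    contents = f.read()

# Replace whatever -Xmx value is present (e.g. -Xmx512M) with the new maximum.
contents = re.sub(r"-Xmx\d+[MmGg]", f"-Xmx{NEW_MAX_MB}M", contents)

with open(INI_PATH, "w") as f:
    f.write(contents)

print(f"Maximum heap set to {NEW_MAX_MB}MB in {INI_PATH}")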

If you run into permissions issues when you try to save the file, save it to the desktop instead and move it to the CrashPlan folder.  This will provide the UAC prompt needed to overwrite the file.  Once the file has been modified and saved, restart the service called CrashPlan Backup Service.  For what it's worth, CrashPlan consumed 634MB of RAM once I increased the maximum memory ceiling to 4GB.  I'll update this post at some point in the future with the amount of RAM consumed once the full 2.5TB has been uploaded.

CrashPlanMemory.jpg

Edit:  I've also seen some oddities on a Linux VM that runs CrashPlan.  After some more web sleuthing, I've found the configuration file for CrashPlan is located at /usr/local/crashplan/bin/run.conf on a BackTrack v5.2 box.  I'd assume most Linux distros would place it in the same spot.

NetApp - Forcing a Controller Failover

If you're testing the configuration of a new NetApp storage appliance, preparing to perform maintenance on a controller (e.g. adding a new PCIe card), or preparing to perform an upgrade, you may need to perform a manual failover from one controller to another.  These are done primarily with the cf suite of commands.

To check on the status of services on each controller, issue the following command:

cf status


When you're prepared to begin the failover process, log on to the controller that is taking over for the other controller and issue this command:

cf takeover

 

This will cause LUNs and services to be moved from the opposing controller to the controller on which you executed the command.  Once services have been passed over, the opposing controller will reboot.  In my experience, the migration of services happens very quickly.  It takes 3-5 minutes for the opposing controller to fully reboot, although your mileage may vary on older systems.  Once you've completed your testing/maintenance activities, you can restore services by executing the following command on the controller that currently owns the services:

cf giveback
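
If you do this often enough, the same sequence can be driven over SSH.  Here's a rough sketch using the paramiko library; the hostname and credentials are placeholders for your environment, and it only runs cf status so you can sanity-check the pair before manually issuing cf takeover.

# Check controller failover status over SSH before doing a takeover.
# Requires the paramiko library (pip install paramiko). Host and credentials
# below are placeholders.
import paramiko

HOST = "netapp-controller-a"   # the controller that will take over
USER = "root"
PASSWORD = "changeme"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# Confirm the pair is healthy before touching anything.
stdin, stdout, stderr = client.exec_command("cf status")
print(stdout.read().decode())

# Once you're satisfied, the takeover itself would be:
#   stdin, stdout, stderr = client.exec_command("cf takeover")

client.close()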

There are two quick and painless ways to pull the serial number from a NetApp controller.  If you only want the serial number, here's my preferred method:

 rdfile /etc/serialnum

If you want more information about the configuration of the system, you can get everything but the kitchen sink using the following command.  The serial number is located near the top of this wall of text.

sysconfig -a

When trying to resize a LUN, you may receive the following error:

Data ONTAP API Failed: New size exceeds this LUN's initial geometry.


This occurs when you try to increase a LUN size to more than 10x its original size, e.g. increasing a 50GB LUN to 1TB.  The 10x limitation is a hard limit and, to the best of my knowledge, cannot be exceeded.  You'll need to trash the LUN and recreate it to exceed the 10x limitation.
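
The limit is easy to sanity-check before you attempt a resize; a trivial sketch:

# A LUN can only grow to 10x the size it was created with.
def can_resize(initial_gb, requested_gb):
    return requested_gb <= 10 * initial_gb

print(can_resize(50, 500))    # True  - right at the 10x ceiling
print(can_resize(50, 1024))   # False - a 50GB LUN can't become 1TB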

I often get inquiries from IT folks (usually systems or network administrators) that are interested in breaking into storage management.  They generally want to know what kind of work the job entails, how it meshes with other IT disciplines, and what kind of skills they need to acquire to have a shot at a storage analyst position.  In this post, I'm going to take a stab at the last question and address the skills and experience I'm looking for when it comes to hiring.  Obviously, this is a topic that can be highly subjective and I'm limited to my own anecdotal experience, but hopefully this will be beneficial to those of you that are looking at moving into (or moving up within) a storage role.
 
It goes without saying that soft skills are important in IT (or any job, for that matter).  Although we rarely interact with end users, we spend a substantial portion of our time interacting with our peers in storage and other IT disciplines, vendors, contractors, etc.  As such, the manner in which you present yourself via your résumé, during your interview, and through other methods of communication will play a major role in your success.  Polish your résumé and have someone else look it over.  Prepare for your interview ahead of time and make sure you're knowledgeable about the core subject areas involved.  If you communicate via e-mail or another "informal" method with your interviewers, make sure you use the same level of discretion used on your cover letter.
 
When it comes to technical skills, I always look for people that have a solid understanding of centralized storage concepts.  I don't really expect storage-specific certifications unless I'm hiring an engineer or architect, but I expect storage analyst candidates to know the basics of enterprise storage.  I've provided a handful of questions below that I routinely ask during interviews.  They're all very basic questions that are intended to meet two purposes.  First, what is the candidate's knowledge level with basic storage concepts?  Second, and perhaps more importantly, is the candidate comfortable enough with these technologies to come up with a reasonable application of those technologies when faced with a real-world scenario?
 
Here are a couple of questions I routinely ask, along with sample answers:
 

1) Describe the difference between block-level (SAN) and file-level (NAS) storage. How does each type of storage operate? Compare and contrast the two storage archetypes and give me examples of scenarios in which you would choose one over the other.

Block-level storage utilizes the storage appliance as a simple data repository.  The host (e.g. an ESXi host) connected to that storage appliance is responsible for managing the file system (e.g. VMFS).  Since the host is managing the file system, the storage appliance has very little insight into what has been written.  This can be a very efficient storage method and has a lower impact on the storage appliance's resources, but it also limits the ability of the storage appliance to interact with the data.  For instance, since the storage appliance has no idea where one file starts and the next begins, there is no way to do file-level snapshots at the storage appliance.  You also can't have end-to-end deduplication with these technologies.  To continue with the ESXi example, you can deduplicate datastores at the VMFS level and you can deduplicate again at the LUN level on the storage appliance, but there is no way to perform end-to-end deduplication.  Fibre Channel (FC), iSCSI, and Fibre Channel over Ethernet (FCoE) are good examples of centralized block-level storage.

File-level storage is just the opposite.  The storage appliance not only holds the data but is also responsible for maintaining the file system.  Since the storage appliance owns the file system, it knows where files begin and end.  This permits file-level snapshots, meaning you can restore a single file or directory from a snapshot instead of restoring the entire volume.  This can be a huge time-saver.  Moreover, you can do end-to-end deduplication, meaning you can perform a single deduplication operation on the storage appliance and realize your entire space savings at the volume level.  CIFS (Windows) and NFS (UNIX) are the two predominant file-level storage standards.

 

2) Give me an overview of a Fibre Channel storage environment. Tell me about the components of a Fibre Channel infrastructure and tell me what each component does.

Fibre Channel has long been the dominant centralized enterprise storage medium.  Newer, cheaper technologies like iSCSI and FCoE have stolen a good portion of the market, but Fibre Channel remains the dominant standard for datacenter deployments for the moment.  Fibre Channel networks generally consist of at least one storage appliance, one Fibre Channel switch, and one or more host bus adapters (HBAs, effectively a NIC and a SCSI adapter in one).  Of course, most large-scale deployments utilize multiple FC switches and storage appliances in a mesh configuration.

 

3) Tell me about the pros and cons of server virtualization. What are some ideal candidates for virtualization? What types of systems would you avoid virtualizing?

Server virtualization is pretty much a standard approach to managing servers these days, and desktop virtualization is also making great strides.  Virtualization is a method of hosting multiple autonomous systems on a single piece of hardware, with a "hypervisor" providing a common platform on top of that hardware.  VMware is by far the biggest player in the market, with Microsoft and Citrix also having sizable shares.  Any production environment is going to have a minimum of two host servers, with an N+1 methodology being a common starting point.  Essentially, if you're putting some or all of your eggs in one basket, you want to make sure you've got a very reliable basket.  You want to make sure you've got enough overhead to permit at least one server to fail or be taken down for maintenance/patching without running out of capacity to support your guests.

One of the most obvious benefits to virtualization is the abstraction of the guest OS from the physical hardware.  With the appropriate licensing, you have the ability to seamlessly move a VM from host to host without impacting end users.  The VMware-branded name for this technology is vMotion.  This can be done manually if you need to shut down a host for maintenance, or it can be done automatically to meet a variety of needs.  VMware offers a technology known as Distributed Resource Scheduler, or DRS, that will automatically balance your virtual machines across a cluster of hosts based on factors such as CPU consumption, memory utilization, network IO, etc.  A newer technology known as Storage DRS performs a similar function for datastores, automatically migrating VMs from datastore to datastore with a goal of balancing datastore capacity and storage IO.  Finally, a technology called High Availability, or HA, will monitor your host servers for failures.  If a failure is detected, your VMs are automatically booted up on other hosts within the cluster.

Choosing ideal virtualization candidates can be somewhat tricky.  When moving existing physical servers to a virtual environment (known as physical-to-virtual, or P2V), it's always easiest to start with the low-hanging fruit.  Choose the one-off application servers that aren't heavily taxed.  Systems running on legacy hardware are another good target, as you can eliminate risks to your infrastructure by getting rid of crufty systems.  Save your more complex systems for last, including systems with heavy resource utilization and IO loads (e.g. SQL servers).  Some of these systems may not be ideal candidates for virtualization.  Also, watch out for systems that require special hardware such as modems, USB license dongles, and sound cards.  Some of these will need special care when virtualizing, and others may not be convertible at all.

In short, the goal of this question is to get an idea of the candidate's familiarity with virtualization technologies and VMware in particular.  Depending upon the candidate's response, I may dig deeper into storage-specific questions in a VMware environment.  For senior-level candidates, I might ask about raw device mapping (RDM), Storage IO Control (SIOC), multipathing in ESX, or other similar concepts.  Again, the goal is to determine just how familiar the candidate is with enterprise storage in a virtualized environment.

 

4) Our environment is incredibly heterogeneous. We have a wide variety of storage appliances from virtually every manufacturer under the sun fulfilling nearly every imaginable role. How would you consolidate the management and monitoring of these disparate devices?

For this question, I'm really looking for answers that either enumerate the candidate's knowledge of the field, or creative answers that show me a candidate can think quickly on their feet.  For senior-level storage analysts, engineers, and architects, I would expect a response about specific products the candidate has utilized in the past.  If they're coming from a heterogeneous environment, I would expect vendor-specific answers such as EMC Unisphere Server, NetApp OnCommand Unified Operations Manager, etc.  However, I would press the candidate for a vendor-agnostic solution such as SolarWinds Storage Manager or EMC's upcoming Project Bourne.  We have a fair amount of legacy hardware along with some oddball products that can't be centrally managed by much of anything, so I might follow up by asking the candidate to describe how they would manage the hardware that doesn't fit into their overarching management application.

 

5) One of our major goals moving forward is to pull IT assets out of our 40+ facilities and consolidate them into dedicated, centralized data centers. Tell me how you would manage this effort. Which assets would you consolidate first, and how would you accomplish it? (Looking for commentary on virtualization, centralized file systems/stubbing, centralized backup/archiving, etc).

This is a big one.  This isn't really a common interview question so it tends to catch candidates off-guard, which is really ideal for me as an interviewer.  It gives me an opportunity to see if the candidate can think and react quickly.  This is intentionally a very open-ended question and I've received some very thought-provoking answers.  There are several different angles the candidate could take.

Virtualization is an obvious route.  Internal candidates are aware of our big push toward virtualization at all of our facilities, so they may discuss the forward-looking plan to move as many VMs as possible out of the facilities and into our centralized datacenters.  I would expect a candidate to discuss the improved security and reliability of the infrastructure at those facilities, as well as the simplified management that would come from this form of consolidation.  In short, it's a cheap and relatively simple way to move critical infrastructure out of the facilities.

Many candidates also discuss migration of file shares.  We're in the process of replacing traditional Windows-based file servers (some clustered and some not, some DAS and some shared storage, etc.) at the facilities with CIFS-based shares hosted directly on dedicated storage appliances.  Many candidates discuss distributed file system (DFS) and similar technologies that can be implemented seamlessly today and leveraged later to migrate users to remotely hosted shares.  Many candidates also discuss the implementation of storage-level replication technologies to migrate this data.  Finally, some candidates discuss stubbing technologies that allow infrequently-accessed data to be moved to a remote repository and replaced with a file stub (think "shortcut") that points at the new location.  All of these are excellent strategies, and I always appreciate hearing a candidate suggest a combination of them to come up with a plan that will meet the diverse needs of our customers.

A coworker was asking about caching and LUN sizing practices for a Windows disk served on VNX storage.  Here is a list of best practices for Windows LUNs, with a couple special notes for Exchange.

Use Storage System RAID - This seems like a no-brainer, but I often see people utilizing host-based RAID (particularly software RAID) on virtual machines to stripe across two or more VMDKs to get logical disks larger than 2TB.  This can have a substantial impact on CPU utilization.  You paid for that expensive storage appliance, so use it!

Use Basic Disks in Windows - This reduces CPU overhead on the host.  If you don't have a reason to use dynamic disks, don't do it!  Your storage appliance can take care of growing or shrinking the LUN, so this eliminates the most common reasoning I hear for dynamic disks.

Use the Default 8KB Cache Page Size - This is ideal for a typical Windows environment.

Use the Default 64KB Element Size - Again, the default configuration is ideal for most Windows boxes.

Align the LUN with a 1,024KB Boundary - LUN alignment is a big deal, particularly with heavy-IO hosts.  Use a utility like DiskPart to accomplish this (see the sketch at the end of this post).

Choose the Correct Page Size for Exchange - A 4KB page size is ideal for Exchange servers up to 2003.  With Exchange 2007, choose an 8KB page size.  For Exchange 2010, this doubles again to a 16KB page size.
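
On the alignment point above, here's a rough sketch of how I'd script it.  It writes a temporary DiskPart script that creates a partition aligned on a 1,024KB boundary and feeds it to diskpart /s.  The disk number is a placeholder; double-check it against DiskPart's "list disk" output before running anything, because selecting the wrong disk is destructive.

# Create an aligned partition on a freshly presented LUN via DiskPart.
# DISK_NUMBER is a placeholder - verify it with "list disk" first!
import subprocess, tempfile, os

DISK_NUMBER = 2          # assumption: the new LUN as seen by Windows
ALIGN_KB = 1024          # 1,024KB alignment boundary

script = f"""select disk {DISK_NUMBER}
create partition primary align={ALIGN_KB}
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)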