Friday, January 22, 2010

Microsoft Licensing for Schools Lock-In

Like many large IT companies, Microsoft offers discounts for educational institutions. But what most people don't realize is that these licensing agreements make it financially undesirable for schools to run anything other than Windows and Office on their computers. Since schools participate in volume licensing, some sort of counting is needed to determine how much a school should pay; after all, a fixed cost regardless of the number of computers or users would be pretty unfair. But it is this counting itself that produces the lock-in. The bundle typically includes a Windows upgrade, Office, and Client Access Licenses (CALs) for Windows Server and Exchange. Note that this means any PC purchased must already have some version of Windows on it in order to qualify; a school can't buy a PC with no OS license and install Windows under this program alone. Upgrades and extended patch support are included as long as the school maintains the agreement.

K-12 schools can use the Microsoft School Agreement. This is what Fairfax County Public Schools uses. FCPS spent $2.9 million on its School Agreement in FY2009 for roughly 98,000 desktops and laptops. That comes out to about $30 per PC, which, considering the retail cost of Windows Professional and Office Professional, is a really good deal. However, from the School Agreement information booklet:
School Agreement requires an institution-wide commitment for any application, system, and Client Access License (CAL) products selected. To that end, you must include all of the eligible PCs in the participating school(s) or district. Eligible PCs include all of the Pentium III, iMac G3, or equivalent or better machines*. You must also include any additional machines within your institution on which any of the software will be run.
*Includes machines with similar processors, such as Intel Celeron and AMD Athlon.

In other words, if a school has PCs that are Pentium III or iMac G3 class or better, it has to count them even if those PCs won't run some, or even any, of the products covered under the agreement. Put another way, if a school runs Windows on only half of its computers (the other half could be Macs, Linux machines, etc.), and all of those computers are less than five years old, the school pays Microsoft no less than if it ran Windows on every computer. Older computers only have to be counted if they are going to run software from the agreement; otherwise they can be left out.

Higher education can utilize a similar program, the Microsoft Campus Agreement. The lock-in here is potentially worse:
Campus Agreement pricing for any application, system, and CAL products you select is based on a count of your total faculty/staff FTE employees and requires organization- or department-wide coverage. To that end, you must include all FTE employees in the participating institution(s) or department(s) (including student employees) using the calculation below.

Non-knowledge workers, such as maintenance, grounds keeping, and cafeteria staff may be excluded from the faculty/staff FTE employee count if they do not use institutional computers.

So colleges and universities don't even pay based on how many computers they have, but on how many people they employ. Part-time faculty and staff are counted as fractions of an FTE.

From a financial perspective, then, it makes the most sense to run Windows on every computer.

So what about thin clients?
I found an interesting article from Computerworld, written about three years ago. The Software Assurance mentioned is included with both School and Campus Agreement options. Microsoft has a Vista Enterprise Centralized Desktop program for thin clients, it seems. However, it's unclear exactly what the interaction between that and a School or Campus Agreement would be. In any event, from the information document:
Thin client license. For thin clients, a single annual subscription purchase is required. With this subscription, companies can install unlimited copies of Windows Vista Enterprise or earlier operating systems, such as Windows XP Professional or Microsoft Windows 2000 Professional, on any number of physical servers, as long as the VMs are accessed only by licensed client devices. Users can access up to four running VM instances on up to four servers per subscription license. In addition, the annual subscription has Software Assurance built-in and provides for earlier versions, as well as upgrades that are made available within the license time frame.
To use desktop applications (for example, Microsoft Office Professional 2007) from the licensed device, each accessing device must be licensed for the application. Windows Vista Enterprise Centralized Desktop does not include application licenses.

It sounds like the OS would be licensed on a concurrent-user basis (assuming the thin client disconnects from a VM when it isn't in use), while Office would need to be licensed per thin client that might potentially use the application. It's also unclear whether licensing under VECD would be more or less expensive than licensing under a School or Campus Agreement. At the least, though, VECD seems less restrictive than the traditional licensing agreements, but perhaps only by necessity. However, it's also possible that Microsoft assumes the use of a Microsoft-based thin client, and would demand a more stringent contract in the presence of, say, a Sun thin client that runs no Microsoft OS on-board.

So yes, maybe a school district or university stands to save by using alternative platforms such as Linux or Solaris, or would like to do graphics work on Macs. But unless they completely dump Microsoft, or opt for a different and probably more expensive (unit cost-wise) licensing agreement, this is at best difficult to do, especially in economically hard times. That said, I'm not saying institutions shouldn't try, and VECD may be a good way to go in and of itself if I've understood it correctly.

UPDATE 1/24/2010: I want to clarify that the volume licensing mentioned above is on a subscription basis, not a perpetual one. Customers have to recount and repay for every eligible system on a regular basis (either an annual or a three-year contract).

Sunday, January 10, 2010

Virtual Memory Allocation

Systems administration is definitely something that involves lifelong learning, and every so often something interesting comes up, especially when it involves comparing multiple operating systems in their default behaviors.

Since analogies can be fun, imagine for a minute that you live in a simplified world that includes your bank, its customers, and merchants. The bank has a limited quantity of cash on hand. The bank also has assets that are less liquid but can be turned into cash as needed. In order for its customers to use the money that the bank has, interest-free, deadline-free loans are issued via cashier's checks. When a check is redeemed, the bank wires the cash directly to the merchant in real time. So naturally, customers would rather the bank have enough cash on hand for their purchases, since waiting for the bank to convert its non-liquid assets would mean customers may end up waiting a long time at the checkout line for a transaction to clear.

And here is where banks differentiate: how much in cashier's checks they issue and what they do when the cash and other assets backing the checks runs out.

For the purposes of this discussion, let's say there are four banks. Three of the four are more profit-driven and will issue as many cashier's checks as they have paper to print on, counting on the high probability that a number of those checks will never be redeemed in full and that they will always have enough assets on hand to cover all the checks redeemed at any given point in time. Of course, that cash reserve is replenished when customers repay their loans.

The fourth bank, however, will never issue more in checks than it has in assets of any type. This bank does not need to worry about what to do if it runs out of cash, because if it does, it knows that there are no further outstanding IOUs.

The other three banks, however, have to decide what to do. Two of them know how to contact all of their customers at all times; if one of these banks runs low on assets, it sends out an emergency alert asking anyone who can to repay their loans. 99% of the time this is sufficient, and enough people repay their loans that the bank always has enough assets. The third bank has no way to contact its customers. In either case, if a bank runs completely out of assets and another customer tries to redeem one of the checks already issued, the bank can't break its promise. So it goes out and kills a customer, reclaiming the value of the loan made to that person.

Okay, let's come back to the real world. Obviously, killing customers is illegal and would never fly. But this was an analogy. The banks are actually operating systems, cash is RAM, the non-liquid assets are the page file (sometimes called swap: part of your hard drive used as RAM when all of RAM is used up, which you normally want to avoid because hard drives are much slower), and customers are programs. Issuing more checks than the bank has assets is called "memory overcommit." The emergency alert is the message that pops up telling you that your computer is low on memory, and repaying a loan is closing a program so that the memory it was using is freed. A cashier's check is issued when a program requests memory but before the program actually uses it (for instance, when an array is declared but before data is stored in it).

Windows, Mac, and Linux will kill programs when they run out of memory (remember, they "promised" more than is available). Windows and Mac have standard user interfaces, so they have a reliable way to notify you that you should close some programs. Linux has more choice of user interface, but that also means there isn't a standard way to tell you that you are low on memory. You can monitor it manually, or try to notice when your computer slows down a lot (you are out of RAM but still have page file space), but sometimes you don't notice because the OS is doing a good job of managing memory. In addition, Windows (and maybe Mac) can dynamically expand the page file as long as your hard drive still has free space, and although there is a performance penalty for doing so, sometimes that is better than killing programs. Linux won't expand your page file for a number of reasons (there are cases when you don't want this to happen automatically, which is why Windows also lets you set a static size manually).

Solaris is the conservative one. It will only promise as much memory as it actually has. As a result, there is a higher chance that Solaris will start saying "no" to requests for more memory before physical memory is actually exhausted.

In the case of Windows and Mac, this makes sense, since they are primarily personal operating systems, where you want programs to keep running as well as possible, and occasionally killing a program is not the end of the world. Plus, most users will close some programs if warned to do so. For Windows, Mac, and Linux, this allocation model also generally enables higher utilization of the RAM you have, especially in the face of lazy programming (i.e. asking for more memory than you ever plan to use). An example of lazy programming might be declaring an array that is far bigger than needed, or declaring a double variable when you only ever intend to store an integer.

For Solaris, which has a heritage of being a very stable and reliable server operating system, it's important that your mission critical program stays running and has no chance of being killed off automatically. As a side effect, actual memory utilization will probably be lower, resulting in higher costs (more RAM costs more money, after all).

But what this means is that while it is fine to run with a very small or even no page file on Windows, Mac, and Linux (since your system isn't really running well once it starts paging; it'll be crawling), on Solaris the page file should be much larger in order to ensure that you can make full use of your memory. Determining the optimal page file size is difficult and depends heavily on the applications being run, but the age-old rule of thumb of one to two times RAM doesn't change much even at the larger memory sizes of modern servers, sometimes 16 GB and up. Yes, you may be reserving a large part of your hard drive that you never use, but it guarantees the safe behavior.

Essentially the different approaches to memory allocation are attempts to address the same set of problems, with different consequences. Linux certainly has ways to change the behavior of its memory allocator and out-of-memory (OOM) killer, but as I mentioned earlier, what I discussed was its default setting.

Monday, January 4, 2010

ext2/ext3 on Windows

UPDATE 10/8/2011: It appears that ext2fsd 0.51, released 7/9/2011, has fixed the corruption issue according to the changelog on its website: "1. FIXME: Data corruption issue, especially for multiple-thread writing on XP system." The version tested in the original blog post below was 0.48.

ext3 has long been one of the most commonly used filesystems on Linux. While it is sometimes displaced by competitor ReiserFS or newer players such as ext4 and Btrfs, ext3 is still around, at least on my computer, because it can be mounted on Windows, while the others cannot (although I think ReiserFS might also be mountable on Windows). ext3 is the same as ext2, with the addition of journaling, so mounting an ext3 filesystem as ext2 is akin to mounting without journaling.

NTFS has been the filesystem used on Windows for long enough that every Windows computer out there probably has it for the C: drive. Linux has the ntfs-3g driver, which allows read/write access to NTFS partitions and is generally considered stable. I believe ntfs-3g also does not emulate journaling. I also use NTFS on my external backup hard drive, primarily since it has the highest chance of being mountable on any computer (Windows supports it natively and most Linux systems today have ntfs-3g).

On Windows, there are currently two primary methods of mounting an ext2/3 filesystem as a drive letter. There is Ext2 IFS and Ext2Fsd. Both drivers offer the same basic functionality. For the longest time, I used Ext2 IFS and was happy with it. But I recently decided to try out using Unison for backups to my external hard drive, and discovered that symlinks on my Ext3 partition were not handled when read with the IFS driver; any operation would simply return access denied. So I decided to try out Ext2Fsd. As it turns out, although Ext2Fsd has a number of nice features in it, Ext2 IFS is still the stabler product and I have switched back to it. I've decided that backing up directories with symlinks will just have to be done from within Linux, since the ntfs-3g driver is able to store symlinks (they are just represented as binary files when viewed under Windows).

However, I did want to offer my comparison chart of the differences in the two ext2 drivers on Windows, as I attempted to search for one myself but could not find one. Note that this chart applies to Ext2 IFS 1.11a and Ext2Fsd 0.48, and I have not listed all of the differences, just the ones that were the most obvious. My system runs Windows XP SP3 32-bit.

Feature | Ext2 IFS | Ext2Fsd
inode size | up to 128 | up to 256 (the default on recent Linux systems)
ext3 journal present (unclean unmount from Linux) | refuses to mount | rolls the journal and mounts
symlinks | returns access denied | follows valid symlinks (though they look like regular files or directories on Windows)
Explorer shell features | standard folder icon only, no Recycle Bin | desktop.ini processed (folders can have non-default icons, etc.), Recycle Bin used for deleted files
stability | all files okay | some files (large Outlook PSTs and some small Office documents) errored during Unison backup read operations

UPDATE 3:20 PM: It would appear that two of the PST files I attempted to back up are actually now corrupt. I haven't yet run Outlook's repair utility, but if it fails, at least it was only IMAP-downloaded mail data (the mail is still on the server).