Systems administration definitely involves lifelong learning, and every so often something interesting comes up, especially when comparing the default behaviors of multiple operating systems.
Since analogies can be fun, imagine for a minute that you live in a simplified world that includes your bank, its customers, and merchants. The bank has a limited quantity of cash on hand. It also has assets that are less liquid but can be turned into cash as needed. In order for its customers to use the money that the bank has, interest-free, deadline-free loans are issued via cashier's checks. When the checks are redeemed, the bank wires the cash directly to the merchant in real time. So naturally, customers would rather the bank have enough cash on hand for their purchases, since waiting for the bank to convert its less liquid assets could leave them standing at the checkout line for a long time while the transaction clears.
And here is where banks differentiate themselves: how much they issue in cashier's checks, and what they do when the cash and other assets backing those checks run out.
For the purposes of this discussion, let's say there are four banks. Three of the four are more profit-driven and will issue as many cashier's checks as they have paper to print on, counting on the high probability that a number of those checks will never be redeemed in full and that they will always have enough assets on hand to cover whatever checks are redeemed at any given point in time. Of course, that cash reserve is replenished whenever a customer repays a loan.
The fourth bank, however, will never issue more in checks than it holds in assets of any type. This bank does not need to worry about what to do if it runs out of cash, because every check it has issued is already backed by some asset it can convert.
The other three banks, however, have to decide what to do. Two of them know how to contact all of their customers at all times, and if they run low on assets, they send out an emergency alert asking anyone who can to repay their loans. 99% of the time this is sufficient: enough people repay their loans that the bank always has enough assets. The third bank has no way to contact its customers. In either case, if one of these banks runs completely out of assets and another customer tries to redeem a check that has already been issued, the bank can't break its promise. So it goes out and kills a customer, reclaiming the value of the loan made to that person.
Okay, let's come back to the real world. Obviously, killing customers is illegal and would never fly. But this was an analogy. The banks are actually operating systems, cash is RAM, the less liquid assets are the page file (sometimes called swap; it's basically a part of your hard drive used as if it were RAM when all of your RAM is used up, which isn't something you normally want to rely on because hard drives are much slower), and customers are programs. Issuing more checks than the bank has assets is called "memory overcommit." The emergency alert is the message that pops up telling you that your computer is low on memory, and repaying a loan is closing a program so the memory it was using is freed. A cashier's check is issued when a program requests memory but before the program actually uses it (for instance, when an array is declared but before any data is stored in it).
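To make that last point concrete, here is a minimal C sketch of my own (the 4 GiB figure is arbitrary and it assumes a 64-bit system). On an overcommitting OS, the malloc() call, i.e. the cashier's check, will typically succeed right away, and physical memory is only consumed as the loop actually writes to the buffer:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Ask for 4 GiB up front (arbitrary size, assumes a 64-bit system). */
        size_t size = (size_t)4096 * 1024 * 1024;

        /* The "cashier's check" is issued here.  On an overcommitting OS
         * this usually succeeds even if 4 GiB isn't actually available. */
        char *buf = malloc(size);
        if (buf == NULL) {
            /* A strict OS may refuse right away instead. */
            perror("malloc");
            return 1;
        }

        /* Each write "redeems" part of the check: physical memory is only
         * consumed one page (4096 bytes here) at a time as it is touched. */
        for (size_t i = 0; i < size; i += 4096) {
            buf[i] = 1;
        }

        free(buf);
        return 0;
    }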
Windows, Mac, and Linux will kill programs when they run out of memory (remember, they "promised" more than is available). Windows and Mac have standard user interfaces, so they have a reliable way to notify you that you should close some programs. Linux offers more choice of user interface, but that also means there isn't a standard way to tell you that you are low on memory. You can monitor it manually, or try to notice when your computer slows down a lot (you are out of RAM but still have page file space), but sometimes you don't notice because the OS is doing a good job of managing memory. In addition, Windows (and maybe Mac) can dynamically expand the page file as long as your hard drive still has free space, and although there is a performance penalty for doing so, sometimes that is better than killing programs. Linux won't expand your page file automatically, for a number of reasons (there are cases where you don't want this to happen, which is why Windows also lets you manually set a static size).
Solaris is the conservative one. It will only promise as much memory as it actually has. As a result, there is a higher chance that Solaris will start saying "no" to requests for more memory before it is truly out of it.
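In practice, that up-front "no" is something a program can react to cleanly. Here is a hypothetical sketch (the alloc_with_fallback helper and the sizes are made up for illustration, again assuming a 64-bit system) of handling a refused allocation by asking for progressively less memory instead of being killed later:

    #include <stdio.h>
    #include <stdlib.h>

    /* Try the requested size first, then halve it until the OS says yes
     * or we drop below the smallest size we can work with. */
    static void *alloc_with_fallback(size_t want, size_t minimum, size_t *got)
    {
        for (size_t n = want; n >= minimum; n /= 2) {
            void *p = malloc(n);
            if (p != NULL) {
                *got = n;
                return p;
            }
        }
        *got = 0;
        return NULL;
    }

    int main(void)
    {
        size_t got = 0;
        void *buf = alloc_with_fallback((size_t)4096 * 1024 * 1024, /* want 4 GiB */
                                        (size_t)1024 * 1024,        /* settle for 1 MiB */
                                        &got);
        if (buf == NULL) {
            fprintf(stderr, "could not reserve even the minimum buffer\n");
            return 1;
        }
        printf("reserved %zu bytes\n", got);
        free(buf);
        return 0;
    }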
In the case of Windows and Mac, this makes sense, since they are primarily personal operating systems: you want programs to keep running as well as possible, and occasionally killing a program is not the end of the world. Plus, most users will close some programs if they are warned to do so. For Windows, Mac, and Linux, this allocation model also generally enables higher utilization of the RAM you have, especially in the face of lazy programming (i.e., asking for more memory than you ever plan to use). An example of lazy programming might be declaring an array that is far bigger than you need, or declaring a double variable when you only ever intend to store an integer.
For Solaris, which has a heritage as a very stable and reliable server operating system, it's important that your mission-critical program stays running and has no chance of being killed off automatically. As a side effect, actual memory utilization will probably be lower, resulting in higher costs (more RAM costs more money, after all).
BUT what this means is that while it is fine to run with very little or even no page file on Windows, Mac, and Linux (since your system isn't really running well once it starts paging; it'll be crawling), on Solaris the page file should be much larger in order to ensure that you can make full use of your memory. Determining the optimum page file size is difficult and very dependent on the applications being run, but it means the age-old rule of thumb of one to two times RAM doesn't change much even as you get to the larger amounts of RAM in modern servers, sometimes 16 GB and up (so a 16 GB server would still get a 16 to 32 GB page file). Yes, you may be reserving a large part of your hard drive that you never use, but it guarantees the safe behavior.
Essentially the different approaches to memory allocation are attempts to address the same set of problems, with different consequences. Linux certainly has ways to change the behavior of its memory allocator and out-of-memory (OOM) killer, but as I mentioned earlier, what I discussed was its default setting.
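For the curious, the main Linux knob here is the vm.overcommit_memory sysctl, where 0 is the heuristic default, 1 always overcommits, and 2 disables overcommit (closer to the Solaris behavior). As a small sketch, assuming a Linux system, the current setting can be read straight from /proc:

    #include <stdio.h>

    int main(void)
    {
        /* 0 = heuristic overcommit (the default), 1 = always overcommit,
         * 2 = don't overcommit (closer to the Solaris behavior). */
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }

        int mode;
        if (fscanf(f, "%d", &mode) != 1) {
            fclose(f);
            fprintf(stderr, "unexpected contents\n");
            return 1;
        }
        fclose(f);

        printf("vm.overcommit_memory = %d\n", mode);
        return 0;
    }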
Sunday, January 10, 2010