Server hardware and Hyper-V are confusing to me. I think I'm going to take a few minutes today to try to understand server hardware requirements as they specifically relate to Hyper-V. This isn't in the best interest of passing the 70-410, but it's something I'm a little confused about, and since I don't actually know that much about server hardware, I suppose it could be good to understand. Questions like: does a virtual core in Hyper-V actually correspond to a physical core on a server? I'm under the assumption that it does, because if it doesn't, then what exactly does it correspond to? If I'm given the opportunity to make money again, I would like to own a server and client setup to continually learn these sorts of things. I'm not sure why I hadn't done this in the past, but studying Server has really gotten my brain going down the road of possibilities. It's easy enough to build my own PC and virtualize Server for a while to run labs, but if I'm lucky enough to actually buy a server and have a small physical network, that would be interesting as well. It also brings up questions about the various versions of Server: if I have one Standard key, can I virtualize on my main box, or do I have to virtualize on my server? In reality this shouldn't be all that different, since it's easy enough to establish some form of RDP connection and view your virtualized machines from the server, but it would be nice to be able to virtualize locally on a client machine. This setup would also allow me to install the RSAT tools and Server Manager on a Windows 10 install to see how that ran. Overall a very exciting prospect. I could also possibly, finally, do something with that copy of SQL Server 08 that's been sitting around. OK, first things first: let's take a look at some articles on virtualization and server hardware, then move on to the question I feel like I should know the answer to by now concerning the various editions of Server.
This seems fairly interesting, and apparently I'm not the only one with these questions, which seems obvious.
CPU in your guests does not correspond to CPU in your physical host. The recommendation is that you assign no more than 8 vCPUs per physical core for guests prior to 2008 R2/Windows 7 and no more than 12 vCPUs per physical core for later guests. However, these numbers are difficult to understand in practice. You might have a VDI scenario where there are 100 2-vCPU desktops on a single host but only around 20 of those are ever likely to be active at a time. Strictly by the numbers, that would appear to need a 16-core system. In reality, that 16-core system is going to be nearly idle most of the time. On the other hand, you might be considering virtualizing a real-time communications system, such as a Lync mediation server. That's pretty much going to need to be designed at a 1-to-1 ratio and possibly with more vCPU than a physical deployment would ask for.
The takeaway is that there is some math to vCPU allotments but it’s really not going to be found in a generic 1-to-x statement. When a virtual machine wants to execute a thread, it’s first going to see if it has enough vCPUs (a virtual machine, like a physical machine, can only run one thread per logical processor). If it does, it will attempt to run that thread. Since it’s virtualized, Hyper-V will attempt to schedule that thread on behalf of the virtual machine. If there’s a core available, the thread will run. Otherwise, it will wait. Threads will be given time slices just as they would in a non-virtualized world. That means that they get a certain amount of time to complete and then, if there is any contention, they are suspended while another thread operates. All that said, the 1-to-8 and 1-to-12 numbers weren’t simply invented out of thin air. If you aren’t sure, they are likely to serve you well.
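To make the 1-to-8 and 1-to-12 rules of thumb from the quote concrete, here's a rough back-of-the-envelope sketch. The helper function and workload categories are my own invention; the ratios are just the guideline values from the quote, not a sizing formula:

```python
import math

# Rule-of-thumb vCPU-to-physical-core ratios from the quote above:
# guests older than 2008 R2 / Windows 7 -> up to 8 vCPUs per core,
# later guests -> up to 12 vCPUs per core. Latency-sensitive workloads
# (e.g. a Lync mediation server) should stay at 1:1.
RATIOS = {"legacy": 8, "modern": 12, "realtime": 1}

def cores_needed(vms):
    """Estimate physical cores for a list of (vcpus, workload_type) VMs."""
    demand = {}
    for vcpus, kind in vms:
        demand[kind] = demand.get(kind, 0) + vcpus
    # Each workload class is divided by its allowed ratio, then summed.
    return sum(math.ceil(total / RATIOS[kind]) for kind, total in demand.items())

# The VDI example from the quote: 100 desktops with 2 vCPUs each.
print(cores_needed([(2, "modern")] * 100))   # 200 vCPUs / 12 -> 17 cores
# Adding a 1:1 real-time VM contributes its full vCPU count.
print(cores_needed([(2, "modern")] * 100 + [(4, "realtime")]))  # 17 + 4 = 21
```

Note that the rounding here gives 17 cores for the VDI example where the quote loosely says 16, and, as the quote points out, a host sized strictly by these ratios may still sit nearly idle; actual demand matters more than the arithmetic.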
I'm taking away from this quote that processor cores and speed are kind of a trial-and-error thing, with common sense and experience being large factors in understanding the deployment requirements for various types of virtualization. For example, I'm assuming that if you have a virtualized instance of SQL Server or Oracle that sees regular use, a 1:1 ratio of physical to virtual cores would be a good idea. However, it doesn't really answer my question of what exactly is going on here. Is Hyper-V Manager just making up a number of virtual cores that you can assign to specific machines, which in no way correspond to your physical hardware? I'm assuming RAM functions the same way? These are all questions that maybe I could have sought the answers to before writing the earlier article about how Hyper-V seemed inefficient in terms of physical processing power. Assuming that we are all on a less-than-shoestring budget, maybe it isn't all that bad, but at this point in time it does still appear to be somewhat of a luxury item, which could lead to better designs in the future with the necessary investments. To be honest, if you consider the capabilities of computers before the mid-90s, they all seemed like luxury items unless you really needed a fancy calculator for some scientific project.
Moving on to questions of efficiency: given that it's not really possible to get more physical RAM than you actually have simply through virtualization, I find it hard to believe that I could magically get more RAM through clever software tricks. So while I appreciate these numbers being displayed, I'm a little concerned about how efficiently the software numbers correspond to the physical parent host.
If high-density is your aim, there's a bit of planning to be done for the virtual machines. Up to the first gigabyte of guest memory, Hyper-V will use 32 MB. For each gigabyte after that, Hyper-V has an overhead of 8 MB. If the guest is using Dynamic Memory and isn't at its maximum, a buffer will also be in place. By default, this is 10% of the guest's current memory demand. You can modify this buffer size, and Hyper-V can also opt to use less than the designated amount. Since Hyper-V does not overcommit memory, it is safe to squeeze in what you can. However, if you squeeze too tightly, some virtual machines will not be allowed what they need to perform well.
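The overhead figures in that quote are easy to turn into arithmetic. A quick sketch (the helper functions are mine, applying just the 32 MB first-GB, 8 MB-per-additional-GB, and default 10% buffer numbers from the quote; real worker-process overhead varies by version and configuration):

```python
def hyperv_overhead_mb(guest_gb):
    """Approximate per-guest Hyper-V overhead, per the quote:
    32 MB covers the first GB of guest RAM, 8 MB each additional GB."""
    if guest_gb <= 0:
        return 0
    return 32 + 8 * (guest_gb - 1)

def host_ram_needed_gb(guests_gb, buffer_pct=10):
    """Guest RAM + per-guest overhead + the default 10% Dynamic Memory
    buffer (applied here to the full allocation, as a worst case)."""
    total_mb = 0
    for gb in guests_gb:
        total_mb += gb * 1024 * (1 + buffer_pct / 100) + hyperv_overhead_mb(gb)
    return total_mb / 1024

print(hyperv_overhead_mb(4))                    # 32 + 3*8 = 56 MB
print(round(host_ram_needed_gb([4, 4, 8]), 1))  # three guests, worst case
```

The takeaway from running numbers like these: since Hyper-V does not overcommit RAM, the host needs physical memory for the sum of all of this, plus whatever the management OS itself uses.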
With this I'm possibly coming to understand that a resting server (meaning not under load) uses less than its minimum requirements, and therefore we can actually squeeze more out of the machine while it's idle. This is now starting to make sense. I'm not sure how I jumped to that conclusion from that bit about RAM, but it does make sense to me. From this I'll conclude that it's generally a good idea to correlate 1:1, but given idle time and layers of assurance we could possibly do less than that. I'm slowly becoming a fan of this technology now that I'm kind of grasping the concepts. I can just have saved-state versions of my machine for not a whole lot of extra cost or overhead, which would drastically reduce downtime, and it gets more use out of my existing hardware by layering efficient, lightweight operating systems on top of it. I like this idea.
Now on to the next question: can I virtualize, on a client machine, using a license that I purchased to run on a physical server? Google sometimes has the answer, and in this case it indeed does:
There are no technical restrictions on the number of VMs that Windows Server 2012 Standard can host. What you are referring to is the number of included or “free” instances of Windows Server 2012 that may be installed without providing an additional Windows license.
For Windows 2012 Standard (there is no enterprise version of 2012), you are allowed 2 “free” instances of VMs with a Windows Server OS on that host. You are free to have as many virtual machines as you want, but you will need to provide the appropriate licenses for each Windows based VM you install. For example, if you install Windows Server 2012 Standard, you can install 2 more Windows Server VMs without needing to purchase a server license. If you want to install a third Windows Server 2012, you would need to purchase or provide another license, as well as the fourth, fifth, etc. For Windows 2012 Datacenter, you are allowed unlimited “free” instances of VMs with a Windows Server OS on that host. Windows 8 offers no “free” OS instances. For each Windows OS based VM you install, you need to provide and assign that Windows license to that particular VM. The only real “limit” is the amount of RAM for running VMs and/or disk space for installing OSes.
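Based on that answer, the license counting is simple enough to sketch. This assumes additional Standard licenses can be stacked on the same host with two Windows Server VM rights apiece (which is how I read "another license, as well as the fourth, fifth, etc."); the function name is my own:

```python
import math

def standard_licenses_needed(server_vms):
    """Windows Server 2012 Standard licenses required on one host,
    assuming each license grants rights for 2 Windows Server VMs."""
    return max(1, math.ceil(server_vms / 2))

# 2 VMs fit under one Standard license; a 3rd needs a second license.
print(standard_licenses_needed(2))  # 1
print(standard_licenses_needed(3))  # 2
print(standard_licenses_needed(7))  # 4
```

Past some VM count, Datacenter (unlimited Windows Server guests on the licensed host) becomes cheaper than stacking Standard licenses; where the break-even sits depends on the prices you actually pay.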
OK, now we have hit an expenditure roadblock, making Hyper-V a much more expensive proposition for number-crunching CTOs and decision makers.
Or maybe not. I may actually be able to get two servers out of one with a $500 Standard license, plus an added layer of protection improving my overall performance in terms of yearly system downtime numbers. Wait, is this more expensive, or is it actually a real value proposition? Well, that's going to depend on your accounting numbers. Just like an organization can seem very profitable depending on which type of profit you're talking about, say EBITDA vs. gross numbers. Tricky, still very tricky. The only way to really work out a cost-to-benefit ratio, which the more research I do appears to land on the benefit side, is to wait and establish a proven set of numbers for an organization. Just as the accounting team has a hopefully proven methodology for discovering profit margins that go beyond what the stock numbers are saying, the IT team within an organization should be able to do the same for overall system design efficiency. However, I'm going to trust that MSFT knows what they're doing here, and that this is a profitable product that makes people happy by increasing uptime SLAs, which translates directly into profit margins. With that, I'll just leave this guy here.
Oh yeah, so to answer my basic question: I would have to use remote desktop tools and not virtualize locally, is what I think I'm looking at here.