Sorry, I had to put this in here. Hope it loads consistently, and if it doesn't, here is the link, because you for sure want to experience this awesomeness.
Cluster Aware Updating. I really like this term, even though it doesn't have much to do with what this post is about. The concept of "cluster aware" anything suggests teamwork playing a more important role in the overall success of a thing than the individual. It rolls right off the tongue, like the idea of a group of computers functioning at the level of a band that knows how to play off each other. What a neat concept for VMs, or for physical host machines for that matter, but we're going to talk about VMs because I like talking about VMs, I suppose, and theoretically, if I were tested on the matter, it shouldn't be any different. Anyway, since VMs are interesting these days, and the concept of architecting a data center or enterprise network around them is a fascinating concept/experiment, I have to ask myself: what is the need for so many virtualized machines? Well, being that I'm out of work, I should probably take the "more labor-intensive work equals more jobs" position, but I'm of the persuasion that the industry as a whole is more important than I am as an individual. Perhaps there is a correlation or relationship between the two, but we're going to talk about the technology, since we're pretty sure bad union politics won't be on an exam. Since I've never actually built a data center or configured Hyper-V machines within an instance of Server, I truly have no idea. Coming from a background of business support/analytics roles, I have to weigh my limited personal experience against theory and try to visualize the way that data would flow in terms of dollars spent. As I go through this process, I keep coming back to the same idea: supporting a failover cluster with scalability and availability based on the obvious factors.
If you think about it logically, VMs in theory shouldn't reduce the footprint of actual physical hardware, because now you just have more powerful servers running more copies of an OS, with both hard and soft networking to support. So why couldn't physical instances of Windows support all of the services and programs that you need to run/host? Hosting more virtualized instances of an operating system should actually reduce efficiency in terms of actual hard disk storage space. From the standpoint of an actual failure and having to restore from backups, comparing server state backups to Hyper-V replicas, the Hyper-V replicas could be more efficient in terms of the time it takes to get them running, because a physical machine isn't down hard in that scenario. Then again, if a solid failover cluster of parent machines is in place, a failure should be transparent to end users in a production environment either way. Is virtualization more efficient in terms of power consumption or physical space? I wouldn't assume either. It seems like if you host each major service or program on a separate physical machine, you can take them down at separate times for a major software patch, whether for the service/program you're supporting or for a Microsoft patch, and that has been standard operating procedure for many years. I suppose it seems easier to restart a VM with just a few clicks when updating, but you still have to deal with the parent or host machine. So again, why not just have separate physical servers instead of one giant server with 4 million cores and 90 billion gigs of RAM or whatever? Real estate cost could play a factor in overcoming utility costs in extremely large-scale deployments. In these terms, finding efficiency is as much an art as it is an exact science.
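The power and space question above can at least be reasoned about with toy numbers. Here is a minimal sketch; every figure in it is an assumption I made up for illustration (typical one-service-per-box setups run at low utilization, so consolidation can win on power even though the big host draws more per box), not measured data:

```python
# Toy consolidation model: ten lightly loaded physical servers versus one
# larger host running the same workloads as VMs. Every number here is an
# assumption made up to reason about the trade-off, not measured data.

n_physical = 10
watts_per_physical = 350      # assumed draw of a mostly idle 1U server
avg_utilization = 0.15        # assumed load of a one-service-per-box setup

big_host_watts = 1200         # assumed draw of one beefier virtualization host
vms_on_host = 10              # consolidation ratio: the same ten workloads

physical_power = n_physical * watts_per_physical   # total draw, separate boxes
virtual_power = big_host_watts                     # total draw, consolidated

print(f"{n_physical} physical boxes at ~{avg_utilization:.0%} busy: {physical_power} W")
print(f"1 host running {vms_on_host} VMs: {virtual_power} W")
```

The intuition the sketch captures: ten boxes that are each only ~15% busy still burn close to full idle power, while one consolidated host runs the same workloads on far fewer watts. Whether that holds in a real deployment depends entirely on the actual hardware and loads.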
So, if I understand anything about how real-world business works, and decision makers really do study these sorts of cost-efficiency measures, I'm betting there are fairly easy-to-find resources on the matter. Turns out the R&D info provided by a 3rd-party virtualization vendor says I'm wrong, and that virtualization is indeed at least somewhat more efficient:
Results of this study: The customers profiled in this study reduced their server TCO by 74% on average and realized an ROI of over 300% within the first six months of deploying VMware virtualization software. Although the sample size in this study is too small to make significant generalizations of TCO savings by industry or across types of businesses, the findings from the three customers studied in this paper are consistent with VMware experiences with other customers.
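To make those headline numbers concrete, here is a quick back-of-the-envelope sketch. The $1M baseline is a figure I invented purely for illustration; only the 74% TCO reduction and 300% ROI come from the study quoted above:

```python
# Sanity-checking the study's headline numbers. The $1M baseline TCO is an
# invented figure for illustration; only the 74% TCO reduction and the
# 300% ROI come from the VMware study quoted above.

baseline_tco = 1_000_000              # hypothetical 3-year TCO, all-physical
tco_reduction = 0.74                  # 74% average reduction, per the study

virtualized_tco = baseline_tco * (1 - tco_reduction)
savings = baseline_tco - virtualized_tco

roi = 3.00                            # 300% ROI within six months, per the study
# ROI = (savings - investment) / investment, so investment = savings / (roi + 1)
implied_investment = savings / (roi + 1)

print(f"virtualized TCO:          ${virtualized_tco:,.0f}")
print(f"savings:                  ${savings:,.0f}")
print(f"implied max project cost: ${implied_investment:,.0f}")
```

In other words, on a made-up $1M baseline, the study's figures would imply roughly $740K in savings and a virtualization project costing no more than about $185K, which is the kind of arithmetic a budget owner would actually run.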
However, this is a pretty small study, and I still feel like, at the current level of technology, finding a balance is possible, and that it is more art than science due to individual network/business personalities. These types of business efficiency questions are certainly not going to show up on CBT Nuggets, but understanding the history of a product, the reasons for its implementation, and the best uses of a technology may help us uncover the truth of why things work the way they do. These are also the sorts of real-world questions you may have to answer when running an IT department and forecasting yearly budget plans, and Microsoft is not exactly able to test on them. In addition to real-world cost knowledge, understanding the philosophy of a thing is to understand the thing, and that knowledge is true clout, my friends. I think virtualization is absolutely phenomenal for developers and students who are constantly breaking things and having to rebuild them, because the rebuild time is, at least in theory, significantly reduced with virtualization, though it comes at the price of increased storage space. However, if you're running stable production code, your environment shouldn't demand anything more than updates and launch patches that have been tested in a dev environment. It seems like there was a reason we moved away from the concept of terminal computing, and now we are, in theory, returning to it by hosting a million instances of Server. So, given that virtualization seems most suitable for dev environments and limited production use on non-critical systems, why the push to learn so much Hyper-V? Not really sure, to be honest; it's like training construction workers to build highly effective sandboxes. At least from my current observations and previous experience, but don't trust me, because I don't study to the test. So this won't help me get that high-paying admin job I've been after in terms of on-paper qualifications. Bummer.
However, these are the questions I will continue to ask of any system, whether it's an operating system or a business practice, regardless of whether it's in my personal best interest, because global efficiencies play a factor and are more important in these sorts of things. Maybe that's kind of a cheesy Three Musketeers type of philosophy, but at the end of the day, placing strictly capitalist economic theories on these sorts of backend technologies would most likely prove very foolish. When it comes to creating front-end, consumer-facing technologies such as cell phones, and the need to create jobs on a large scale, I think we may find that our robotic desire for physical efficiency leaves us in a state of economic failure.
I know I'm kind of repeating myself here, so I guess I could start a new thought process and ask my next question: is Hyper-V preparing the world for a fully cloud-based server solution provided by Microsoft and their backend hardware vendor of choice? Given that this is really leaning into Apple's business-philosophy territory, and that I doubt it would be a feasible solution for large-scale environments, I kind of doubt it. It does, however, have the potential to be a phenomenal à la carte type of product for smaller business environments with fewer than 200 users, even though it's a line of business outside Microsoft's usual role as a technology vendor. That said, even in this scenario there is the very obvious caveat of data transfer/latency, as well as actual profitability concerns for Microsoft. If Microsoft's small-business clients are paying for Google Fiber or some other high-end data transfer service, will the speed be enough to reduce latency to an acceptable level? That's a fairly large question that would need to be answered, because it has the potential for massive failure, leading us back to square one with at least one onsite physical DC connected to Azure, or whatever they decide to call it. It does, however, kind of cut out the middlemen and lead to standard per-user pricing for the little guys, similar to what we see with consumer cell phone plans today, and cell phone companies seem to be profitable in a very respectable fashion, though they are not exactly targeting a niche market.
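The latency caveat above is worth a quick sketch, because raw bandwidth and latency are different problems: a fat pipe doesn't shorten the round trip, and chatty client/server protocols pay that round trip on every exchange. All the numbers below are illustrative assumptions, not measurements of any real link:

```python
# Why raw bandwidth (Google Fiber or otherwise) doesn't settle the latency
# question on its own: chatty protocols pay the round-trip time on every
# request/response exchange. All numbers are illustrative assumptions.

rtt_lan_ms = 0.5      # assumed round trip to an on-site DC
rtt_wan_ms = 30.0     # assumed round trip to a cloud-hosted DC
round_trips = 200     # assumed chatty operation, e.g. a legacy logon sequence

lan_total_s = rtt_lan_ms * round_trips / 1000   # total wait, on-site
wan_total_s = rtt_wan_ms * round_trips / 1000   # total wait, cloud

print(f"on-site DC: {lan_total_s:.1f} s   cloud DC: {wan_total_s:.1f} s")
```

Under these made-up numbers the same 200-exchange operation goes from a tenth of a second on-site to several seconds over the WAN, which is exactly the kind of gap that would push a small shop back toward keeping one physical DC onsite.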
So what about the networking benefits and the layers of virtualized networking and switches? Is this going to increase efficiency or provide one more layer of potential failure? I feel certain I'm not the first person to think about this stuff. BRB, gonna see what's up on Google. TechTarget will not do; how about this Tom's Hardware situation? I like these dudes, and once again we see the sandbox analogy in place, but we don't see anything about reliability. I want to know if these things are Hondas, vintage Jaguars, or somewhere in the middle, like a Volkswagen. Turns out there are lots of articles on the topic, and it depends on which one you're using. I guess that makes sense. Overall, though, it doesn't really speak to the reliability of widespread implementation of virtualization.
OK, so I've drawn some conclusions here, and I'm finding myself wondering: why on earth did I write something about economic theories, when there are probably a whole lot of people who are like a bazillion times smarter than me working in marketing departments and sitting on boards? Because there are also a whole lot of people out there a whole lot smarter than me writing tech blogs. Also, if there's only one person writing, ideas don't get passed around, and that is boring and stale, and maybe writing/considering these sorts of things is like jogging for your brain.