TCP/IP Fundamentals This was kind of buried in a post but worth sharing. Mastering PowerShell is an older book on PowerShell with some outdated info but lots of good stuff. A classic, I suppose.
Now, after hashing through the past and considering the future, let's check out some "practical" things and come up with a scenario where we use virtual machines and have to update them and drain roles, so that we can understand TCP/IP traffic flow and all those sorts of good things. Maybe this will lead to a functioning level of understanding of DNS and TCP/IP traffic, which still seems like nothing short of wizard magic at this point in my understanding of computer architecture. However, that may be changing thanks to this newly (to me) unearthed document from Microsoft explaining TCP/IP. It's only 500 pages long, so it should be a quick read. Hopefully I'll learn more about the alchemical science of internet traffic wizardry.
At least that was my initial intent; however, the real world of studying technology, or any subject for that matter, never works out that way. So somehow this allegedly organized thought process turned into a less-than-organized stream-of-consciousness piece where everything should connect nicely but doesn't all point the right direction or align just perfectly, but I guess that's learning. So maybe this blog is more of a conceptual learning blog instead of something helpful? Maybe that's just this post, or maybe I'm over-analyzing; I know I'm writing this thing for myself, not really as a reference guide for others. I should get an editor or something. Who knew that learning was so non-linear and imperfect? Personal experience aside, this also appears to be a problem for textbook authors. So anyway, here are a few random topics and some explanations continued from the previous post, and again my structure went something like this: read materials, list bullet points, regurgitate information from the internet.
Drain roles w/ TCP/IP explanation and types of network adapters
So I don't understand why we have two network adapters. What's the technological barrier with the "modern" adapter that won't let it PXE boot? Is this just fluff for fun? What's the true benefit of PXE booting, and if I do PXE boot, can I switch over to a modern adapter once the machine is built? Isn't it just as easy to launch a Hyper-V machine from an ISO file? Well, here are some instructions, and I'm guessing it's got more to do with RM stuff than anything, which is still possibly illogical. I mean, that's the only reason the legacy adapter is there from what I can tell. It doesn't make logical sense to have an option for a legacy and a regular adapter if the only difference is in the initial boot sequence, especially if the "modern" adapter is higher performing. (Best I can tell, the legacy adapter emulates an actual physical NIC with a boot ROM the VM's BIOS can use before any drivers are loaded, while the synthetic adapter depends on integration services that aren't running yet at boot time, which is why Generation 1 VMs can only PXE boot from the legacy adapter.) What's up with this? The internet isn't talking about it. Maybe Pirate Bay has some info?
Drain roles is rad though. I mean, I can just seamlessly pull stuff off one server and have another pick up the traffic from the application or service in a manner that's transparent to the end users? That's amazing. The other amazing thing is that I sort of understand the philosophy behind what's happening, not like when I turn a light switch on in my house and the light bulbs glow because they have suddenly become infused with the blood of Thomas Edison or something. Anyway, more info on that situation here. It tells you how to flush the proverbial toilets known as servers in case you need to intentionally restart them.
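Draining in PowerShell looks something like this — a minimal sketch using the FailoverClusters cmdlets, where the node name is a made-up placeholder for your own environment:

```powershell
# Drain all clustered roles off a node before maintenance
# ("HV-NODE1" is a hypothetical node name)
Suspend-ClusterNode -Name "HV-NODE1" -Drain

# ...patch/reboot the node here...

# Bring the node back into service and pull its roles home
Resume-ClusterNode -Name "HV-NODE1" -Failback Immediate
```

The -Drain switch is what makes the roles move gracefully instead of just pausing the node out from under them.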
That aside there are still some more bullet points to cover with switches.
External Switch
External virtual networks are used where you want to allow communications between
Virtual machine to virtual machine on the same physical server
Virtual machine to parent partition (and vice versa)
Virtual machine to externally located servers (and vice versa)
(Optional) Parent partition to externally located servers (and vice versa)
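The scenarios above can be sketched with one cmdlet — a hedged example, since the switch name is invented and "Ethernet" may not match your physical NIC (check Get-NetAdapter first):

```powershell
# Create an external virtual switch bound to a physical NIC.
# -AllowManagementOS $true covers the "(Optional)" bullet: it lets the
# parent partition share the NIC and talk to external servers too.
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```

Set -AllowManagementOS to $false if you want the NIC dedicated to VM traffic only.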
Hyper-V and creating replica Hyper-V server backups
So this is a question I have about what is more efficient in terms of backups: live cluster nodes or Hyper-V machines. I feel kind of like a high school kid asking questions about safe sex and googling stupid things like "should I wear two condoms." I did come across this in my sails across the Google seas — I used a slightly more appropriate and work-safe search term, though — and it seems to have some good information. This PowerShell cmdlet is interesting:
So I guess this is how I build a new cluster in PowerShell? Sorry, I'm new here; I still don't understand all this stuff. I did figure out what quorum was, though. So that's good, but it took me a while to understand. My current emotional state has quickly progressed: I'm now feeling like a freshly divorced middle-aged dude trying to date and having no clue how to talk to women. Awkward. That happened in my actual life since the last time we chatted, Microsoft certs.
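For the record, building the cluster from scratch goes something like this — node names and the IP address are illustrative assumptions, not anything from a real lab:

```powershell
# Validate the candidate nodes first (required for a supported config)
Test-Cluster -Node "HV-NODE1","HV-NODE2"

# Form the cluster with a name and a static management IP
New-Cluster -Name "LabCluster" -Node "HV-NODE1","HV-NODE2" -StaticAddress 192.168.1.50

# And the thing that took a while to understand: check the quorum config
Get-ClusterQuorum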
Anyway, with either option you can literally store backups anywhere, like a thumb drive, but that isn't exactly a secure option. There are benefits and restrictions to both. OK, so here is some info on conversion if you're the evangelical type when it comes to virtualization. There are some options here, so I guess that's good. The links surrounding the first one seem a bit hokey.
So with these tools it's really easy to back up your physical boxes to a virtual machine and return them to the same state they were running in. Now this is a real benefit of virtualization. Going the other way — converting the image back and reinstalling on a physical box — may be slightly more complicated.
So how do server state backups compare? Obviously it's a little more complicated in that you can't have services in the middle of running, because it might mess up your SQL groove, or your SQL groove might mess up your return-state groove. I'm assuming that's the same for VM snapshots, which are basically the same thing, but the goal was more to figure out what's more efficient, because that's going to save you proverbial horsepower, and I like fast cars. My dad got me into that as a young man.
OK, so now that we have read up a little on both topics and understand that WBadmin is not exactly the same thing as a Hyper-V replica, we have established that this is a less-than-perfect comparison. So there isn't really an answer to the efficiency question here. But at least we know how to create a Hyper-V replica and use WBadmin.
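To make the apples-to-oranges comparison concrete, here's what each side looks like on the command line — a sketch with hypothetical VM/server names and drive letters, and the replica server would need replication enabled on its end first:

```powershell
# Windows Server Backup: a one-time backup of C: to another drive
wbadmin start backup -backupTarget:E: -include:C: -quiet

# Hyper-V Replica: continuously replicate a VM to a second host
Enable-VMReplication -VMName "AppVM" -ReplicaServerName "HV-REPLICA.contoso.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "AppVM"
```

One is a point-in-time copy; the other is an ongoing warm standby, which is exactly why the efficiency question doesn't have a clean answer.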
Dynamic memory and live adjustments
One good thing about Hyper-V machines is that you can do cool things with them while they are running. So if they are looking like this:
and feeling all resource-starved/hungry for RAM, then maybe you can get them at least looking like this slightly unenthused gorilla if you throw in some more RAM.
I wonder if we can add additional logical cores to a running VM so that we could get our running environment beefed up and looking like this guy.
So obviously this is going to require a search to see what else is actually “dynamic” in this case meaning a parameter that you can adjust live.
OK, so we learned that dynamic processor adjustment does not exist at this point, but we did get a slew of new links about logical processors — or whatever you would like to call them, the process remains the same. So I guess it is cool that you can actually add RAM without shutting off or restarting the machine, given that you have extra RAM in the machine. And if that's the case, well… you would have had free RAM if it was just one instance and not bogged down by multiple copies of the same instance, so whatever. I guess this is also another one of those dev benefits: if you're testing out software it could prove really helpful, but with a thought-out, implemented plan in a production environment it's hard for me to envision a need for this. But again, I'm not exactly a professional.
Cluster aware updating
OK, so these two posts were initially just going to be about cluster-aware updating, then we ended up hunting rabbits and I fell into some kind of pit, and my short story turned into a rambling novel or diary entry or something. So does this work with physical machines as well as virtualized environments? It seems like it should, but I'm not really sure. I get the concept that the machines are basically like, "hey guys, let's run some patches," and they kind of take turns. That's if I understand it right, and there's a very strong possibility that I don't. But that's basically what this says, right?
CAU is an automated feature that enables you to update clustered servers with little or no loss of availability during the update process. During an Updating Run, CAU transparently performs the following tasks:
Puts each node of the cluster into node maintenance mode
Moves the clustered roles off the node
Installs the updates and any dependent updates
Performs a restart if necessary
Brings the node out of maintenance mode
Restores the clustered roles on the node
Moves to update the next node
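And to answer the question above: CAU operates on failover cluster nodes, whether those nodes are physical boxes or VMs. The whole sequence of steps can be kicked off from PowerShell — a sketch, with the cluster name carried over as an assumption and the ClusterAwareUpdating module required:

```powershell
# Run one Updating Run on demand against the cluster,
# using the built-in Windows Update plug-in
Invoke-CauRun -ClusterName "LabCluster" -CauPluginName "Microsoft.WindowsUpdatePlugin" -Force

# Or install CAU as a clustered role so the cluster patches
# itself on a schedule (here: 2nd Sunday of the month)
Add-CauClusterRole -ClusterName "LabCluster" -DaysOfWeek Sunday -WeeksOfMonth 2 -Force
```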
The terminology used is a little confusing because "node" is a term that I typically associate with virtualization; however, if you keep reading, it would appear that it's talking about a physical machine. So maybe it applies to both, and it's basically like when the armed forces run those training exercises for what happens in case of failure?
Live migration benefits
OK, so this is actually just concerning virtualized machines. Got it. Now, wtf is it and why do I care about it? Clearly we are going to need more links! Uhhaaaa, stop saying this because it confuses me: "This consolidation is a key focus of virtualization." How on earth is virtualization consolidation? It's not at all in most cases! Also, the images in this article are fascinating. Anyway, I guess it's cool to be able to move a "live" machine from one box to the next. But is it actually still running while you're host-swapping? That doesn't seem possible; I just picture that Max Headroom guy talking to me and then getting sucked down into a computer toilet or something. So logically I'm going to say that the machine can't actually be turned on unless you're migrating a replica VM. Let's verify that assumption with TechNet!
“Live migration allows you to transparently move running virtual machines from one node of the failover cluster to another node in the same cluster without a dropped network connection or perceived downtime.”
Ohhhhh, OK, so it's just got to be part of a failover cluster and you can actually live migrate without the whole Max Headroom scenario. Got it.
"Hyper-V in Windows Server 2012 makes it possible to move virtual machine storage while a virtual machine is running." OK, this is believable since it's coming from MSFT. Basically it's just moving the drive a VHD/X is stored on while the VHD/X is active.
“In Windows Server 2008 R2, you can move a running instance of a virtual machine using live migration, but you are not able to move the virtual machine’s storage while the virtual machine is running.”
Another important note: I'm curious about the PowerShell commands to move a storage device; this sounds complicated. https://channel9.msdn.com/events/TechEd/Australia/2012/VIR314
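It turns out to be less complicated than it sounds — storage migration of a running VM boils down to one cmdlet. A sketch, with the VM name and destination path invented for illustration:

```powershell
# Move all of a running VM's storage (VHD/X, config, snapshots)
# to a new location without stopping the VM (Server 2012+)
Move-VMStorage -VMName "AppVM" -DestinationStoragePath "D:\VMs\AppVM"
```

Pair it with Move-VM (like the pipeline shown later in this post) when you want to move the VM and its storage to a different host in one shot.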
Virtual switch manager
I don't know where this came from; I wasn't sure it was a real tool (turns out Virtual Switch Manager is the switch-management pane inside Hyper-V Manager). But we could talk again about networking benefits and the layers of virtualized networking and switches. I just don't think there are any benefits to this situation outside of dev shit. Thanks, brill.
Mega tyte video about this: how to make a switch
and here is the supporting MSFT documentation
This is real long but kind of kewl, maybe? Early on he discusses the benefits of virtualization and this is my reaction:
So my take on this is it's sort of like a hybrid car? Virtualization is like trying to build a hybrid car that's actually more efficient than a regular petrol car, which hasn't really happened yet when you run the full numbers on cost-to-benefit ratios. Nor do I think the battery technology is available to make it more efficient, and what about the pollutants caused by power plants and the manufacturing and disposal of batteries? OK, maybe I'm reading too much into it, because he's not exactly "directly" saying these things; it just seemed implied. So thanks for validating my concerns. At this point I'm taking mass transit, which isn't exactly cost effective, but either way I'm out of the hybrid car debate.
Favorite part, 59:20-ish: Get-NetVirtualizationLookupRecord -VirtualSubnetID (insert ID) sets up a virtual domain? Seems like there's more to this as well. Going to have to do some more digging on this topic because I'm interested, but that might be for another day once I have a little more knowledge of PowerShell and Hyper-V structure. Starting around 52 minutes, bro sets up a DC and creates a subnet, then pulls all the info for what's on that subnet — if I understood it correctly — but there's for sure a need to recreate the scenario to get the concept of it actually functioning. Most of this was over my head, because I'm not really sure what he's doing or what he's showing. It's like he's setting up a network and then testing things that he did before the network was configured? Either this is designed to confuse nubs or I'm not very perceptive when it comes to networking in general. Current status:
So towards the end of this he mentions something called Message Analyzer? Is this like a parket snuffer? Thanks, bro, that's just what we needed. So now that we know it's a packet sniffer, what does this have to do with building virtual machines other than pwning MSFT tech, using MSFT tech, so you can figure out what's going across between machines? So, uh, we're encouraging black-hat dev type stuff to build more efficient topologies/protocols within our networks and systems? That actually makes sense, because if you want something to be efficient, you need to know how it works in a complete sense. You should understand it well enough to almost be able to rebuild it to suit your specific needs, and in the case of building servers and server traffic, that stuff gets complicated, so it's not exactly for the faint of heart. The last thing mentioned that I took note of was the concept of an SDN, or software-defined network, which is really kind of working backwards, as this should have been discussed before we got into packet sniffers. I'm not really clear on what he's talking about, other than the idea of a virtualized network and the layers where it can break when sending traffic to another virtualized environment across physical hardware. Man, it's like that Dune quote where that one worm guy or whatever is all like "I see plans within plans." Is this what we are talking about:
Anyway, it turned out to be a pretty badass video (the one about the computer stuff; Dune is obviously "a badass video") with layers of intricacy that were above my head, but it for sure helped me put together a more complete picture of virtualization, and Dune is my favorite film, so any time I can conjure up a Dune reference I'm pumped.
So this is cool technology that has nothing to do with the relative usefulness of mass implementation of virtualization; however, it does enable you to be swift on your feet with the demands of a dynamic environment. So if you are for some reason getting a whole lot of traffic on one machine, or you're doing something more resource intensive, you can adjust the machine's RAM while it's live instead of having to do a restart. In the GUI you just punch in some numbers in Hyper-V Manager, and in PowerShell you use the command:
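(The original screenshot of the command didn't survive, but the cmdlet in question is presumably Set-VMMemory. A sketch, with the VM name and sizes invented for illustration:)

```powershell
# Raise the dynamic memory ceiling on a running VM.
# MaximumBytes (and lowering MinimumBytes) can change live;
# StartupBytes only takes effect after a restart.
Set-VMMemory -VMName "AppVM" -MaximumBytes 4GB
```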
For more detailed info with screenshots and more complicated scenarios, such as using the pipeline (|) and setting multiple machines, check out these links.
Obviously there are some limitations with this: adjusting startup RAM isn't going to affect performance unless you restart the machine, and you can't assign more RAM than your machine currently has. You can only allot a certain amount of physical RAM.
Get-VM -ComputerName SOURCE_HOST | Out-GridView -Title "Select one or more VMs to Live Migrate" -PassThru | Move-VM -DestinationHost DEST_HOST -DestinationStoragePath DRIVE:FOLDER
Sorry, had to put this in here. Hope it loads consistently, and if it doesn't, here is the link, because you for sure want to experience this awesomeness.
Cluster-aware updating: I really like this term, though it doesn't have anything to do with what this post is about. Just the concept of "cluster-aware" anything indicates teamwork playing a more important role in the overall success of a thing than the concept of the individual. It rolls right off the tongue, like the idea of a group of computers functioning at the level of a band that understands how to play off each other. What a neat concept for VMs, or physical host machines for that matter, but we are going to talk about VMs because I like talking about VMs, I suppose, and theoretically, if I were tested on the matter, it shouldn't be any different. Anyway, since VMs are interesting these days and the concept of architecting a data center or enterprise network around them is a fascinating concept/experiment, I have to ask myself: what would be the need for so many virtualized machines? Well, being that I'm out of work, I should probably take the "more labor-intensive work equals more jobs" position, but I'm of the persuasion that the industry as a whole is more important than I am as an individual. Perhaps there is a correlation or relationship between the two things, but we are going to talk about the technology, since we are pretty sure bad unionized politics won't be on an exam. Since I've never actually built a data center or configured Hyper-V machines within an instance of Server, I truly have no idea. Coming from a background of working in business support/analytics roles, I have to weigh my limited personal experience against theory and try to visualize the way that data would flow in terms of dollars spent. As I go through this process, I come back to the same ideas of supporting a failover cluster with scalability and availability based on the obvious factors.
If you think about it logically, VMs in theory shouldn't reduce the footprint of actual physical hardware, because now you just have more powerful servers running more copies of an OS with both hard and soft networking to support. So why couldn't physical instances of Windows support all of the services and programs that you need to run/host? Hosting more virtualized instances of an operating system should actually reduce efficiency in terms of actual hard disk storage space. From the standpoint of actual failure and having to use backups, comparing server state backups to Hyper-V replicas, the Hyper-V replicas could be more efficient in terms of the time it takes to get them running, because a physical machine isn't down hard in this instance. However, if a solid failover cluster of parent machines is in place, it should be transparent to end users in a production environment. Is it more efficient in terms of power consumption or physical space? I wouldn't assume either. It seems like if you host each major service or program on a separate physical machine, you would take them down at separate times in case of a major software patch, either for the service/program that you're supporting or for a Microsoft patch. This has been the standard operating procedure for many years. I suppose it seems easier to deal with restarting a VM with just a few clicks when updating, but you still have to deal with the parent or host machine. So again, why not just have separate physical servers instead of one giant server with 4 million cores and 90 billion gigs of RAM or whatever? Real estate cost could play a factor in overcoming utility costs in extremely large-scale deployments. I suppose in these terms finding efficiency is as much of an art as it is an exact science.
So if I understand anything about how real-world business application works, and given that decision makers study these sorts of cost efficiency measures, I'm betting that there are fairly easy-to-find resources on the matter. Turns out the R&D info provided by a 3rd-party virtualization vendor says I'm wrong, and that it is indeed at least somewhat more efficient:
Results of this study: The customers profiled in this study reduced their server TCO by 74% on average and realized an ROI of over 300% within the first six months of deploying VMware virtualization software. Although the sample size in this study is too small to make significant generalizations of TCO savings by industry or across types of businesses, the findings from the three customers studied in this paper are consistent with VMware experiences with other customers.
However, this is a pretty small study, and I still feel like at the current level of technology finding a balance is possible, and it is more art than science due to individual network/business personalities. These types of business efficiency questions are certainly not going to be covered by CBT Nuggets, but understanding the history of a product, the reasons for its implementation, and the best uses of a technology product may help us uncover the truth of why things work the way they do. These are also the sort of real-world questions you may have to answer when running an IT department while forecasting yearly budget plans, which Microsoft is not exactly able to test on. In addition to real-world working cost knowledge, understanding the philosophy of a thing is to understand the thing, and that knowledge is true clout, my friends. I think virtualization is absolutely phenomenal for developers and students who are constantly breaking things and having to rebuild them, because the rebuild time is, at least in theory, significantly reduced with virtualization, though it comes at the price of increased storage space. However, if you're running stable production code, your environment shouldn't demand anything more than updates and launch patches that have been tested in a dev environment. It seems like there was a reason that we moved away from the concept of terminal computing, and now we are in theory returning to it by hosting a million instances of Server. So given that virtualization seems most suitable for dev environments and limited production use for non-critical systems, why the push to learn so much Hyper-V? Not really sure, to be honest; it's like training construction workers to build highly effective sandboxes. At least from my current observations and previous experience, but don't trust me, because I don't study to tests. So this won't help me get that high-paying admin job I've been after in terms of on-paper qualifications. Bummer.
However, these are the questions I will continue to ask of any system, whether it's an operating system or a business practice, regardless of it being in my personal best interest, because global efficiencies play a factor and are more important in these sorts of things. Maybe that's kind of a cheesy Three Musketeers type of philosophy, but at the end of the day, applying strictly capitalist economic theories to these sorts of backend technologies would most likely prove very foolish. When it comes to creating front-end consumer-facing technologies such as cell phones, and the need to create jobs on a large scale, I think we may find our robotic desire for physical efficiency leaves us in a state of economic failure.
I know I'm kind of repeating myself here, so I guess I could start a new thought process and ask my next question: is Hyper-V preparing the world for a fully cloud-based server solution provided by Microsoft and their backend hardware vendor of choice? Given that this is really leaning into Apple's business-philosophy territory, and that I doubt it would be a feasible solution for large-scale environments, I kind of doubt it. It does, however, have the potential to be a phenomenal à la carte type of product for smaller business environments with fewer than 200 users, given that it's a line of business outside of being a technology vendor of some sort. That said, even with this scenario there is the very obvious caveat of data transfer/latency, as well as actual profitability concerns for Microsoft. If Microsoft's small business clients are paying for Google Fiber or some other high-end data transfer service, will the speed be effective enough to reduce latency to an acceptable level? That's a fairly large question that would need to be answered, because it has the potential for massive failure, leading us back to square one with having at least one onsite physical DC connected to Azure or whatever they decide to call it. However, it does kind of cut out the middlemen and lead to standard per-user pricing for the little guys, similar to what we see with consumer cellphone usage today, and cell phone companies seem to be profitable in a very respectable fashion; however, they are not exactly targeting their product at a niche market.
So what about the networking benefits and the layers of virtualized networking and switches? Is this going to increase efficiency or provide one more layer of potential failure? I feel certain that I'm not the first person to think about this stuff. BRB, gonna see what's up on Google. TechTarget will not do; how about this Tom's Hardware situation? I like these dudes, and once again we see the sandbox analogy in place, but we don't see anything about reliability. I want to know if these things are Hondas, vintage Jaguars, or somewhere in the middle like a Volkswagen. Turns out there are lots of articles on the topic, and it depends on which one you're using. I guess that makes sense. Overall it doesn't really speak to the reliability of widespread implementation of virtualization.
OK, so I've drawn some conclusions here, and I'm finding myself wondering why on earth I wrote something about economic theories, because there are probably a whole lot of people that are like a bazillion times smarter than me that work in marketing departments and sit on boards. But there are also a whole lot of people out there a whole lot smarter than me who are writing tech blogs. Also, if there's only one person writing things, ideas don't get passed around, and that is boring and stale, and maybe writing/considering these sorts of things is like jogging for your brain.
This is a pretty straightforward activity that we can find a lot of info about online. I feel like at this point we basically understand how PowerShell works, and the lab environs are not exactly helpful, so we may come back to this and add screenshots after we get back into a working test environment at CED Solutions — fingers crossed that actually happens. Also, instead of listing the full syntax for the cmdlets, I've simply posted the links to the TechNet articles (which I did less of last time), so you can check that out for yourself!
So the first thing when trying to answer any question is clearly to do a search for the answer. So I searched for "create a group PowerShell" and came up with an interesting post. This is lifted from a blog, and it's pretty basic, straightforward, helpful info for someone such as my nubbins self that's trying to learn this stuff: Create an Active Directory Group with PowerShell
In Windows Server 2012 R2 or Windows Server 2008 R2, use the New-ADGroup cmdlet.

To create a new global group in the default Users folder of Active Directory called "Finance":

New-ADGroup -Name "Finance" -GroupScope Global

If it needs to exist in a different path in Active Directory, specify the path by its distinguished name:

New-ADGroup -Name "Finance" -GroupScope Global -Path "OU=Offices,DC=Contoso,DC=local"
However, there's more on this topic than I'm aware of, so I'll add links to the TechNet articles for some of the cmdlets listed in case we want more info on their full syntax. Using these commands in context and in order should provide a more complete understanding of the given topic in PowerShell as well. Hard to tell if these will work without testing, though.
At this point I'm starting to wonder if there's a shorter version/alias or abbreviations for some of these. There doesn't appear to be any reference article pointing to that, but there is a great thread about this if you don't mind cussing 🙂 Help on Add-Groupmember
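The membership side of the group story, continuing from the New-ADGroup example above — a sketch, with the SamAccountNames made up:

```powershell
# Put a couple of users into the "Finance" group
Add-ADGroupMember -Identity "Finance" -Members "jdoe","asmith"

# Confirm who ended up in it
Get-ADGroupMember -Identity "Finance" | Select-Object Name, SamAccountName
```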
This is basically the same as creating user accounts like we did in the previous post, so the screenshot there is still applicable, but it's robots instead of actual human users. However, sometimes computers do use user accounts for services and so forth, so whatevs, you get the idea. Also, some of these have potential credential prompts in the syntax, but I'm not good enough at reading the TechNet articles to know if they're required without actually trying it; if they are, I'm assuming it will look exactly like the credential prompt shown in the screenshot in the last post.
So we aren't using this one in this instance, but it does exist and seems to be in use. Not sure what job it could accomplish that the standard Set-ADAccountPassword we used previously couldn't.
So this is the basic creation-of-accounts sort of thing. What about managing and viewing computer accounts? What if we want to see all the accounts listed in our directory and then pipe them to a webpage? Well, we would use Get-ADComputer and spell it out from there. Note that this could be kind of harmful if you were to run it in an enterprise environment with lots of computer accounts, because — duh — there are lots of them, and it's not a prepopulated CSV or database; you're actually querying against a live directory. So what I'm saying is, probably don't do this during regular business hours unless you're playing Chaos Monkey.
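The pipe-it-to-a-webpage idea sketched out — the output path is arbitrary, and -Filter * hits the whole directory, hence the off-hours warning above:

```powershell
# Pull every computer account and dump it to a simple HTML page
Get-ADComputer -Filter * -Properties OperatingSystem |
    Select-Object Name, OperatingSystem, Enabled |
    ConvertTo-Html -Title "Domain Computers" |
    Out-File C:\Reports\computers.html
```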
OK, so that's that. We should now have a locally hosted webpage that shows all of the computers on our network, which if needed could be placed in a shared file for network users to view or actually hosted in proper format for web viewing. Also, I'm pretty sure that syntax should work, but you know, testing probs. So let's dig around and see what all is out there as far as information on viewing computer info, besides what we already know from the videos mentioned previously. I came across this little ditty on a hardware vendor's website/forums, and I found it quite confusing, as Get-QADComputer doesn't seem to be a built-in PowerShell command — it's actually from Quest's ActiveRoles add-on snap-in, which is probably why it keeps coming up on vendor forums. Interesting, but not important, because it's not on the test, right? Here's an example of the complete syntax displayed on the site:
Get-QADComputer | Get-Member
Remove-ADComputer
Create or manage organizational units and containers
OK cool, so wtf is an OU and how is it different from a group? Well, according to Someolddude1's internet blog, it's something like this:
Groups have SIDs, can be placed on access control lists, and can contain other groups (even the same type of group referred to as group nesting). Organizational units do not have SIDs, can’t be placed on an access control list, and cannot be placed into a group. Instead, organizational units are used to organize users, groups, and computers within Active Directory. This organization is used to grant delegation and deploy configuration and security settings through group policy. Moving forward it is ideal to use the best practice for group nesting, as it is easiest to manage and provides the best security environment for Active Directory. Of course organizational units can be nested into other organizational units and often are. Just remember the two main reasons for organizational units and the design and deployment of them will be clear.
Still doesn't make sense to me. Why do I care if something has a SID (security identifier), what's an access control list, and why would I not want something placed into a group — or are groups inside of OUs? And can I put a group into an OU so it acquires these things? I'm lost at this point, and I'm not going to lie about it, because it's better to ask questions and figure it out rather than try to be the cool pretentious kid that doesn't figure it out because he's too busy pretending to know everything.
Well there’s also this TechNet article, and after reading it I think what I’m understanding here is that OUs are created to organize sites or different lines of business, and then users, computers, and groups are placed inside of them. And it turns out I had the group policy part backwards at first: GPOs actually link to the OUs themselves (and to sites and domains), not to groups. Groups only come into it for security filtering, i.e. narrowing down which members of an OU a linked GPO applies to. We could ask the question on TechNet forums but someone has already done that too, and that more helpful info seems to confirm it.
This also brings up another interesting question. What about the default Users container that’s built into AD that you can’t attach GPOs to? What’s that called, and is it an OU, a group, or neither? Best I can tell it’s literally just called a container (CN=Users), which is neither an OU nor a group, and since GPOs only link to sites, domains, and OUs, that would be why you can’t link one to it. I feel like this is really basic stuff that I should know by now. I asked someone that passed the 70-410 test in class and they didn’t really seem to know either. It was in the middle of trying out some stuff listed on a Toms Hardware article about PowerShell, which is a fantastic reference by the way. Someone should really consider creating a table that shows an AD tree and has names boxed in with arrows pointing to the folders in the tree so you can get a better idea of WTF is going on with all that. As soon as I figure it out I’ll let you know. : )
Connect to one or several domains or domain controllers in the same instance of ADAC
This is actually really easy to do using PowerShell, and we are going to dip into some things we learned in the PowerShell tutorials from Microsoft on this one as well as the next one. However the book’s descriptor is kind of vague, so we are going to explore a couple of options as to how we might do this. The GUI method is fairly straightforward: you simply right-click in the management console and go. You can also open a remote PowerShell session, which drops you into an interactive prompt on another machine as if you were sitting at it locally, using the command:
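The screenshot with the actual command didn’t survive, but it would have been something like this (assuming the target machine is named RODC and PowerShell remoting/WinRM is enabled on it):

```powershell
# Interactive remote session; requires PS remoting enabled on the target
Enter-PSSession -ComputerName RODC

# ...everything you type now runs on RODC...
hostname    # should return the remote machine's name

# Drop back to your local prompt
Exit-PSSession
```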
This takes us directly to whatever machine we named RODC, and if you type hostname at the prompt you should see the name of the computer you connected to returned.
Or if you want to run code on your machine and send it to another computer, you use the -ComputerName switch, if it’s available on the cmdlet you’re using. For more info on this switch check out this article about the ComputerName switch.
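For cmdlets that don’t have their own -ComputerName parameter, Invoke-Command is the general-purpose version. A sketch (the machine name is made up):

```powershell
# Some cmdlets take -ComputerName directly:
Get-Service -Name bits -ComputerName RODC

# For everything else, Invoke-Command runs a scriptblock remotely over WinRM:
Invoke-Command -ComputerName RODC -ScriptBlock {
    Get-Process | Select-Object -First 5
}
```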
You could also use these to query any computer or targeted computers on your domain
Get-ADComputer. The syntax below should get you any computer on the network running bits, or if you target specific machines it will do that as well. Again, use caution running this against every computer on an enterprise domain. Also throwing in a new command here with the Get-Service cmdlet. The -Filter switch in the case below is going to search all computers; if you used the -Identity switch you could target specific computers. So the compound piped structure works like this: you get all the computers, and once you have that data it searches each of those computers for the service named bits, or in the second case you get a really huge list of every service running on every AD computer, which you could then sort by name and status and output to html if you wanted. #epic haha
# note: Get-Service can’t bind the AD computer object directly, hence the ForEach-Object
Get-ADComputer -Filter * | ForEach-Object { Get-Service -Name bits -ComputerName $_.Name }
Get-ADComputer -Filter * | ForEach-Object { Get-Service -ComputerName $_.Name } | Select-Object -Property MachineName, Name, Status
Filter active directory data
The most obvious source of “active directory data” (kind of a vague term, as Active Directory is nothing but data) is the event log. If you’ve ever worked in support or development, walked through an IT department, or pushed a computer and then expected it to work, you’re probably familiar with this thing called an event log that tells you where shit went wrong. After you know what went wrong then you can figure out how to fix it. Yay! This really is a pretty critical part of an operating system as far as anyone in the field is concerned. Obviously a standard computer user has no need to dive into an event log, but we are not average users are we? Cool. Now that that’s established.
Get-EventLog -LogName System -Newest 5 | ConvertTo-Html | Out-File c:\users\administrator\desktop\booyatribsorgserverprobs.htm
Ahhh sukisuki, now we got a webpage called booyatribsorgserverprobs with our recent event log errors. Hopefully we can take a look at those and get our stuff together.
We could also use Get-GPO to output some or all of our GPOs, since this is also “active directory data” that is obviously filterable, and you can do whatever you want in terms of piping this data to a location or file type as previously discussed. While it’s not really applicable to this section, I suppose you could also write a “what if” script and see what would happen if you applied certain GPOs to users/computers and then send that to a website... but that’s outside of our scope? So maybe we should stick with something basic that pulls all GPOs.
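Something basic might look like this (a sketch; Get-GPO lives in the GroupPolicy module, and the output path is just an example):

```powershell
Import-Module GroupPolicy

# Pull every GPO in the domain and dump the interesting columns to html
Get-GPO -All |
    Select-Object -Property DisplayName, GpoStatus, CreationTime, ModificationTime |
    ConvertTo-Html |
    Out-File c:\users\administrator\desktop\allgpos.htm
```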
Ok so this is from the book, and it tells you how to do it using the GUI starting on page 126, but that’s boring old hat stuff that frankly anyone going out for this MCSA situation should know anyway. So we’re going to see what we can cook up by learning how to do this in PowerShell again, cause we’re overachievers that actually want to figure out what we are doing.
Reset user password
Ok, this one is easy enough and amazingly easy using the server GUI: Users and Computers, find the account, right-click, reset password. Well, let’s see what we have to do in PowerShell. So Google gives us the following info from this TechNet article: https://technet.microsoft.com/en-us/library/Ee617261.aspx

Ok, this is working but it’s asking me for her current password and I don’t know what it is. She forgot it. Dang it Tammy. Why you wana go and make me learn. And this isn’t in the book at all. 😦
Google is no more help. Can this be done in PowerShell? It’s looking like unless you’ve just mass-created a bunch of users, or just created an account through PowerShell that has no password associated, that’s a no. You could try enabling a locked account, pressing enter, and not putting anything in the password field, but that doesn’t seem very secure or like something Microsoft would have overlooked since it’s so amazingly basic. I guess some things are better left to GUIs.
Well, it’s really not that much, it’s just all the fields that you can populate to describe a user. We don’t need to do all that atm so we’re going to keep it basic: New-ADUser -Name RODC -DisplayName RODC -GivenName Br0no -Surname Tosaurus

Hey, it worked! We now have an RODC account with a funny user name. Let’s try to set this password, because it shouldn’t have one since we didn’t use the switch to give it a password. Let’s try that command from the first bullet.
So in this instance Set-ADAccountPassword rodc works, but we still need to enable the account. Again, no idea, so let’s hit TechNet up again.
Let’s verify this; not sure how. Well, if I Google “verifying enabled ad account powershell” it takes me to some long page about a script, but I have a feeling I can use the basic cmdlet from the first line in the script, which is Get-ADUser. So let’s try that.
There’s a long string in here but I think we just need the basics, which looks like: Get-ADUser -Identity rodc
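The default output of Get-ADUser happens to include an Enabled property, so a sketch of the verification would be:

```powershell
# Enabled is in the default property set, so no -Properties switch needed
Get-ADUser -Identity rodc | Select-Object -Property Name, Enabled
```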
As to why some cmdlets require the -Identity switch and why some don’t is beyond me, but whatever, it worked. Now this whole string looks like this; ignore the part where I was renaming a server. I was attempting to create an RODC in PowerShell, but we will get to that later, just not on these test environs.
So I watched these MSFT videos, here https://www.microsoftvirtualacademy.com/en-us/training-courses/getting-started-with-powershell-3-0-jump-start-8276 , that were so helpful in understanding PowerShell. Much better than most reference books that I’ve used, and if you pair some of this info with the Virtual Academy lab environments you might just learn a very small amount of PowerShell without actually working on a real working server environment. The first time I watched the help video I was like omg holy shit this is amazing, and took absolutely no notes whatsoever. A few days later I opened a PowerShell prompt in the MSFT lab environment and just sort of stared at it like “woah scope that arrow >” and couldn’t remember anything these dudes were talking about. So I decided to watch it again and write down everything they were saying. Turns out this info is generally more helpful than any book I’ve seen so far. So maybe this exists out there somewhere in internet land, but here’s some PowerShell notes for idiots like me.
Note: I do apologize for any inconvenience my typos may cause; I was drinking beer and listening to a style of hip-hop known as krunk. Some examples of krunk are the Ying Yang Twins, known for their hit single “Salt Shaker”, and the artist Trick Daddy. You may recall Trick Daddy’s 1998 hit single “Nann” from his album www.thug.com, which has an image on the cover of a website created around the time of its release. I’m sure that the website was hosted on a Windows NT 4.0 server running IIS, however there is a possibility that it was a Unix box hosting it, powered by Apache. Either way it was a real website; currently it appears to have some sort of alias record directing you to some label page. Boring.
Update-Help -Force : this command will download the latest help file info.
Using the up arrow will let you scroll through the history of commands you have typed.
The tab key is also extremely useful in that it will let you scroll through possible commands. In this screenshot I just typed get-help and then pressed tab and space a few times to see what the results would be. I don’t think this would display a functional output but you get the idea.
The typical copy and paste Ctrl+C / Ctrl+V does not work; however, highlighting text very carefully, then right-clicking, then scrolling down to the next input space and right-clicking again does work.
Get-Help “command” (or cmdlet, a little confused on terminology here) : basic PowerShell help parameter. An example of a “command” would be Add-WindowsFeature; as you can see, it doesn’t like install-service. This looks like:

Help “command” : shows more information than simply using Get-Help. The output of this command looks like this, and as you can see it does not require a dash:
Man “command” : also a more verbose version of the Get-Help command. The output of this command looks like this, and as you can see, with this being done on the MSFT free test lab environment, we run into a few issues as the Update-Help command doesn’t seem to work:
Get-Help *service* : in this example these dudes are using what amounts to a search parameter to search for anything that has the word service in the name. That looks like this:
Get-Help g*service* : this will narrow the list and pull any commands that have a g and service, for example Get-Service. The output of this command looks like this:
Get-Verb : this will show all the verbs used in PowerShell. Instead of listing all the verbs, I’ve shown that the location of the asterisk character matters: if you do a search with *R it will display everything that ends with the letter R, and if you use R* it will show anything that starts with the letter R.
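A quick sketch of the wildcard position thing:

```powershell
Get-Verb          # every approved verb
Get-Verb R*       # verbs starting with R: Read, Receive, Remove, Rename, ...
Get-Verb *r       # verbs ending in r: Clear, Enter, ...
```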
Get-Verb | measure : this will give a count of the returned options. This is the first time we have seen the | (pipe) show up, but it’s a very powerful tool that you can use to chain PowerShell commands together. More on this whole | in later posts. Now it feels like we’re getting somewhere and learning how to organize and display information in PowerShell!
-Detailed : this switch lists all the help for the command. It’s fairly extensive, and at this point it’s worth noting what all the [ ] and < > things in the syntax mean. At first I was mega confused by this because I’m not a coder, but basically it goes like this: anything wrapped in square brackets is optional. If the parameter name itself is in brackets, like [-Name], you can skip typing the name and just give the value positionally; if it isn’t bracketed you have to spell it out. And a type written like <String[]> (with the [] inside the angle brackets) means the parameter accepts multiple values. Hopefully that makes sense, and it looks something like this:
-Full : this switch is basically the same as using the -Detailed switch, however there is some more info about additional parameters, and I’m not sure that I fully understand that yet, so after I get some more info I may discuss this more. Also it’s worth noting here that I picked the Add-DnsServerConditionalForwarderZone cmdlet because DNS is somewhat confusing to me in a global sense, and I just used the Tab key to find it:
Get-Help Get-Service -Online : the online switch takes you to the TechNet article on the requested topic. Also you can see that we start running into problems with using the free labs again. I’m assuming they don’t have an outbound internet connection, which would make sense because I could see people using these as a proxy server of sorts being a problem.
Get-Help Get-Service -Examples : this examples switch is where they keep the good stuff. Plain Get-Help is close to useless unless you understand the syntax and all the brackets and all that stuff that’s obviously super confusing. The -Examples param displays an exact line that you can type to get what you’re looking for. And as you can see in the previous example, we are a little limited here as well.
Get-Help Get-Service -ShowWindow : this is amazing and it works great in the video. The -ShowWindow switch displays the help file that was just pulled in a separate pop-up window. Like omg, a GUI in a DOS-type environment. My favorite part of this, as if that weren’t enough to send your command-line-clueless brain into a spin: you can also select check boxes to filter and drill down to specifics so you can figure out exactly how to talk to this thing. However it doesn’t work in the test environment.
There’s also an interesting bit in the video about finding things out by using bad switches/parameters after cmdlets in hopes of getting some information in the returned error. I didn’t exactly find that helpful, but it’s displayed in the next image anyway.
The event log search and pull tool, however, is amazing. Everyone that’s ever had to search through an event log to figure out what was going on knows how awful it is. PowerShell just makes this a non-issue. You can target specific machines, types of errors, whatever you want, and then output it to an html file and have a nice little browser display of exactly what you’re looking for. Here’s a basic example of that; obviously there’s not a lot of event log info on freshly created test environments.
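A sketch of the kind of thing I mean (the computer name and output path are made up):

```powershell
# Last 10 error entries from the System log on a remote machine, as html
Get-EventLog -LogName System -EntryType Error -Newest 10 -ComputerName SERVER01 |
    Select-Object -Property TimeGenerated, Source, EventID, Message |
    ConvertTo-Html |
    Out-File c:\users\administrator\desktop\server01errors.htm
```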
So hopefully this helps a little with a basic understanding of PowerShell, like the time that someone explained how a mouse operated when you first sat down at a computer.

Also here are some Tumblrs that have scripts on them:
Failover Clustering has been a major part of designing and supporting an effective, architecturally sound high availability environment for a long time, and from what I understand it’s not a large part of the MCSA testing. However that fact is somewhat irrelevant to me, being a seeker of knowledge and skills rather than simply obtaining certificates. Thus the concept of learning to design and implement technology that is a large part of a real-world application of Windows Server 2012 is very appealing to me. I realize this may seem silly as I’m unemployed and hoping to possibly get a job at some point, and certifications certainly do improve the odds of that. But whatever, I’m a scholar yo.
If you have never heard of Failover Clustering you may be wondering what the basic premise of the technology is. A failover cluster is a group of independent computers (known as nodes for our purposes) that work together to increase availability and scalability of clustered roles (https://technet.microsoft.com/en-us/library/Hh831579.aspx). We (implying both IT professionals and desktop users in corporate environments, well really even Google users) rely on FoC for high availability of almost any critical application, such as Exchange Server and SQL Server, that requires connections to non-local information systems (meaning not stored on the local machine’s hard disk). In the past we used multiple physical servers, usually connected to a single storage unit that was also disk fault tolerant using a RAID array and SCSI-connected hard disks. There have not been many updates to this basic premise, however the technology is now easier to use than ever thanks to virtualization, branded by Microsoft as Hyper-V. Now we have physical hard disks configured in fault tolerant arrays hosting virtual hard disks, known as VHD or VHDX files, that are also set up in a fault tolerant array. This provides two layers of failover support: if a physical hard disk crashes we have a physical backup of the data, and if a virtual disk becomes corrupt we also have a failover copy of that information as well. This allows administrators to provide uptimes approaching 99.99% for critical applications in order to meet the high standards of today’s business needs.
Clustered nodes can be connected using physical hardware or virtualized hardware. A basic example (fig. 1) would include three computers, each with 3 NIC cards: one talking to the other nodes in the cluster, one to the shared database (known as a cluster shared volume, or CSV for short, or the quorum resource) containing the information about the cluster configuration, and one taking incoming traffic from the network. One downside to this model was that if the quorum disk failed, so did the cluster. A legacy two-node cluster could not function without it, so if just the disk failed but both nodes remained, the cluster would cease to function. The data on the quorum resource (CSV) includes a set of cluster configuration information plus records (sometimes called checkpoints) of the most recent changes made to that configuration. A node coming online after an outage can use the quorum resource as the definitive source for recent changes in the configuration. It is also possible to set up failover nodes in a configuration using multiple local volumes and skipping the CSV (fig. 2). This also has benefits but requires more replication across servers to ensure that every node has a similar database. The point of all this being that in case one of the nodes fails for some reason, one of the other two nodes would notice a problem with the faulty node and seamlessly pick up the role that node was hosting (which machine picks it up is determined by using something called quorum votes, more on this later).
later). This will obviously cause an
increase in network traffic to the node picking up the role which is certainly something
to consider when designing hardware specifications to ensure a functional level
of NLB (Network Load Balancing). However the node may or may not have been a
node that was previously hosting that role for the rest of the network and in
that case the hardware impact would be less critical. Clustered nodes should be
heavily monitored in a proactive fashion to verify that they are working and
general best practice is considered to be using a Microsoft product known as
System Center that alerts network administrators to any potential issues that
may occur resulting in a node fail over situation. However this product costs
as well so budget restraints could be a factor. If you are using System Center
and a node fails for some reason an administrator is automatically notified of
the failure while System Center attempts to resolve the issue (service is hung,
the machine freezes, ect.). If System Center fails to resolve the issue the
administrator can then machine can be restart, rebuild or take whatever action
is necessary to repair the node and as mentioned previously, the role will be
shifted to another node as long as the cluster is properly configured.
All of this sounds very confusing for several reasons, a primary one being that there are two layers of technology involved: a virtualized layer, known as a guest cluster, that’s set up almost exactly like a physical layer, sitting inside a server install that’s on a physical server. If you’re like me you may need a more relatable explanation or visualization of this. So here’s a picture (in case you haven’t seen it) of something some genius programmer created: you can play the video game Doom on a laptop while actually inside the video game. So it’s like playing Doom in Doom. Maybe that helps? If you’re playing the game it’s really obvious which layer of the game you’re interacting with, like sitting at a server interacting with Hyper-V machines that are essentially set up the same way you would set up a physical machine.
So we’re kind of left with more than a few questions here, but me being a part of the omfg wtf r u doing here nubsauce train to fail town users group and basically taking educated guesses as to how this technology works only enables me to talk about a few things. Besides, entire technical manuals could be written on the subject, not to mention the countless TechNet articles and YouTube videos. Maybe in the future I’ll add addendums/updates to this post, but for now we will ramble on as we can. One of the obvious questions is how the servers know that they are functioning. The most basic way that the servers know the other servers are still online is through the use of something called a “heartbeat”. The way that I understand this technology is fairly basic: a server pings the other server on their private network and says hey, you still there, and the server responds with something like “yeah bro im still here stop buggin me bro”, and this happens every second. If this fails then the process of quorum voting comes into play. This seems like a very mysterious process that involves a bunch of math, and I’m not exactly sure how the servers are self-aware (see HAL) enough to assume that they have the extra processing power, or to know that another node would have enough processing power, but apparently they are able to do this without much trouble (aside from programmer and technological explanation headaches). There is a default setting that Microsoft has configured in Failover Cluster Manager, as well as a few custom options, however the default is obviously recommended unless you’re a mathematician or something, because I’m convinced that the process involved in quorum voting is nothing short of wizard magic, same for DNS resolution.
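For the record, you can at least peek at (and change) the wizard magic from PowerShell with the FailoverClusters module. A sketch, run from a cluster node (the witness disk name is made up):

```powershell
Import-Module FailoverClusters

# See the current quorum configuration and each node's vote
Get-ClusterQuorum
Get-ClusterNode | Select-Object -Property Name, State, NodeWeight

# Switch to a node-and-disk-majority quorum using a specific witness disk
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
```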
So if your computational status is anything like my nubsauce w/ x-tra Polynesian self, and you’re convinced that computers are full of wizard magic and mystery math, then you’ll probably get really excited by the notion of the appropriately named High Availability Wizard. This marvelous device will help you set up and configure failover clustering as such:
In the High Availability Wizard, you can choose from the generic options described in the previous note, or you can choose from the following services and applications:
DFS Namespace Server: Provides a virtual view of shared folders in an organization. When a user views the namespace, the folders appear to reside on a single hard disk. Users can navigate the namespace without needing to know the server names or shared folders that are hosting the data.

DHCP Server: Automatically provides client computers and other TCP/IP-based network devices with valid IP addresses.

Distributed Transaction Coordinator (DTC): Supports distributed applications that perform transactions. A transaction is a set of related tasks, such as updates to databases, that either succeed or fail as a unit.

File Server: Provides a central location on your network where you can store and share files with users.

Internet Storage Name Service (iSNS) Server: Provides a directory of iSCSI targets.

Message Queuing: Enables distributed applications that are running at different times to communicate across heterogeneous networks and with computers that may be offline.

Other Server: Provides a client access point and storage only. Add an application after completing the wizard.

Print Server: Manages a queue of print jobs for a shared printer.

Remote Desktop Connection Broker (formerly TS Session Broker): Supports session load balancing and session reconnection in a load-balanced remote desktop server farm. RD Connection Broker is also used to provide users access to RemoteApp programs and virtual desktops through RemoteApp and Desktop Connection.

Virtual Machine: Runs on a physical computer as a virtualized computer system. Multiple virtual machines can run on one computer.

WINS Server: Enables users to access resources by a NetBIOS name instead of requiring them to use IP addresses that are difficult to recognize and remember.
There are also a few YouTube videos that walk through this wizard, but some of them aren’t in English. If interested, Google is ur friend. But here’s a few that I like anyway.
https://www.youtube.com/watch?v=eiEA9kBubDQ
– homie sounds like Pastor Rod Parsley and talks to the beat of Ghetto D, so if you’re into that and wana go to choych, watch this. Also, in a more serious sense, it was very helpful for understanding quorum voting.
These guys really know what they are talking about, and they have a useful way of speaking, meaning it’s actually understandable.
update 2: for more info on the failover cluster wizard, or to check out some PowerShell commands regarding failover clustering, check out this page... and this one for a great basic definition.
Update 3: the more flashcards I make, the more info I come across! Good times anyway. This seems like some basic info from Microsoft with lots of info on failover clustering. So far it doesn’t seem as useful in a practical sense as the PowerShell videos, but probably worth watching nonetheless: Server 2012 Jumpstart