A set of seemingly unrelated questions with some interesting finds

I'm going to hit the next 10 questions that I was struggling with in this post. I'm still unsure about the DNS IPv6 question I posted in the last section; I understand it, but not completely. All I can do is hope and trust that the understanding comes naturally in time.
Still feeling a time crunch, and a fear of what happens when I'm done with all this studying and have nothing left to study. Keep hoping for thrift-store books to add to the list of books I've read with no proof of understanding? That doesn't sound very good. Dunkin' Donuts, plz hire me this summer so I can serve again, thanx.

Moving on from my personal struggles, what if we got back on topic with technology? Specifically, technology related to Windows Server 2012.

 photo 2016-03-24_zpsyzuve4my.png

This is confusing, like are you kidding me? This is one of a million other things that I need to know right now that I could probably just Google in a real-world scenario. Out of the options I was able to realize that an ACL was at least a good start, but I had no clue about the switch vs. the adapter. Honestly, at my current level of experience and knowledge I'm not exactly sure of the specific differences between a switch and a router (hence why I bought Network+ study materials that will hopefully still be relevant after I'm done with Server 2012).

 photo 2016-03-24 1_zpsebwwebdf.png

So now I know that these ACLs apply to adapters. I'm still not exactly sure why, but I just hope that understanding will come with the backlog of Network+ material.
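
Since I'll probably forget this, here's roughly what applying that kind of ACL looks like in PowerShell on 2012. This is only a sketch; the VM name and address range are made up, not from the question.

```powershell
# Port ACLs attach to a VM's network adapter, not to the virtual switch itself.
# Hypothetical VM name and subnet, just to show the shape of the command.
Add-VMNetworkAdapterAcl -VMName 'VM1' -RemoteIPAddress '10.0.0.0/24' `
    -Direction Both -Action Deny

# Check what ACL entries the adapter now carries
Get-VMNetworkAdapterAcl -VMName 'VM1'
```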


This one is kind of obvious, and I feel like if I had read the whole question I would have gotten it right, because it's fairly straightforward with the exception of the wording. "Still work" is a little vague. Do you want to block the application, or do you want to let the .exe run? I think the point remains the same: it's in a location, it's not signed, and therefore by definition it's getting at the idea of a hash rule. If it were signed and we were approached with the same scenario, we could assume a publisher rule, as hash rules seem to be a little higher maintenance.

 photo 2016-03-21 21_zpsobqp3ctb.png

Ok, this is confusing: a path rule deals with a location and a hash rule deals with the specifics of a file, so I'm kind of confused as to the intent of this answer. Regardless of the confusing wording, I think the intent remains the same, as it's very clearly outlining a hash rule.
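
For my own future reference, here's a rough sketch of building that hash rule with the AppLocker cmdlets. The file path and policy file are hypothetical; I'm just trying to capture the hash-vs-publisher distinction in command form.

```powershell
# Hash rule for an unsigned executable (path is made up). If the file were signed,
# swapping -RuleType Hash for Publisher would give the lower-maintenance publisher rule.
Get-AppLockerFileInformation -Path 'C:\Apps\LegacyTool.exe' |
    New-AppLockerPolicy -RuleType Hash -User Everyone -Xml |
    Out-File 'C:\Policies\HashRule.xml'

# Merge the generated rule into the effective local AppLocker policy
Set-AppLockerPolicy -XmlPolicy 'C:\Policies\HashRule.xml' -Merge
```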


This is kind of far out, as I didn't realize that Hyper-V machines used a paging file. Wouldn't that reduce speed, and how does it correlate to the RAM they are assigned? I have so many questions and speculations about this scenario, but I'm not truly informed enough on it to make a strong decision. However, I'm standing by my position that high-volume, big-dollar companies would be better off investing in virtualization than smaller companies that don't have as harsh SLAs to meet. Even in that scenario I'm assuming some extensive reliability testing would be called for, since in the modern era I'm never comfortable with the concept of a paging file: it usually means running out of RAM and using a hard disk (or SSD) as RAM.

 photo 2016-03-23 3_zpsmiaj7p2i.png

 photo 2016-03-23 4_zpsxsgcgwtn.png

Well, now I suppose we know that we can perform this action by using the Move-VMStorage and Set-VM cmdlets with some deep-running switches, though I still haven't read any literature about why a paging file is needed if the machine can directly use physical RAM. It would be nice to find out the full command string, though. This answers a variety of questions I had concerning these issues. First of all, it makes sense from an efficiency standpoint given the idea of physical memory overcommitment. It still feels like a stopgap, though, and I'm coming to the conclusion that a 1:1 physical-to-virtual ratio for hardware requirements is a best practice. There are workarounds if you really want to push your system, but I've personally never been a fan of things like overclocking as long-term solutions. It's fun to mess around with this sort of stuff, but it's certainly counterproductive from the standpoint of Hyper-V being a good way to meet SLAs. I also enjoy that the author of that last post is a PowerShell specialist but doesn't give a string for actually producing the final desired result, and Google seems to be lacking there as well. We may never see the full command line. Thanks, MeasureUp and the internet. Gosh!
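
As I understand it, the smart paging file only comes into play while a VM is restarting and the host can't cover its startup memory, so it isn't general-purpose paging. Here's my guess at the full command strings the answer was hinting at; the VM name and path are made up.

```powershell
# Relocate a VM's smart paging file (hypothetical VM name and path).
# Move-VMStorage can repoint it as part of a storage move:
Move-VMStorage -VMName 'VM1' -SmartPagingFilePath 'D:\SmartPaging\VM1'

# Or just change the setting directly (with the VM shut down):
Set-VM -VMName 'VM1' -SmartPagingFilePath 'D:\SmartPaging\VM1'
```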


This one isn't super hard, but I could see how it could be missed by confusing setting a link with inheritance.

 photo 2016-03-21 13_zpskonb327s.png

 photo 2016-03-21 12_zpsx7cf33uj.png

Not a whole lot to discuss here; it's just important to know when and where to set a link and when inheritance of Group Policy comes into play.
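
A tiny sketch of the distinction, since it's easy to mix up; the GPO and OU names are hypothetical.

```powershell
# Linking is done per GPO and per container:
New-GPLink -Name 'Desktop Lockdown' -Target 'OU=Sales,DC=contoso,DC=com'

# Inheritance (blocking it, in this case) is a property of the container, not the link:
Set-GPInheritance -Target 'OU=Sales,DC=contoso,DC=com' -IsBlocked Yes
```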


Anything to do with subnetting I've found to be particularly hard, especially when it comes to DHCP or any kind of routing question such as this.

 photo 2016-03-22 10_zpsolzngyx5.png

 photo 2016-03-22 11_zpszj2ttzuv.png

Hopefully the exhibit isn't too small to see. I probably should have cropped the screen grab for higher resolution, but then again I don't feel a whole lot of responsibility, as this blog doesn't see much actual traffic. I honestly don't even know where to start with this question. It's seriously one of those questions that you could spend an entire month studying for and still be iffy about the answer. I find it very hard to study for these.

 photo 2016-03-22 12_zpswnowr5oj.png

 

I find it interesting and slightly implausible that it would end up with a 192 address in the real world, given the little that I do know about routing, but the 240 was honestly a complete guess. I'm sure that I will miss these on the test; I saw a whole slew of these kinds of questions and they were not like this, so even if I were to study subnetting to this level I would have missed them. This is the hardest stuff on the test, in my opinion.
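
Since the exhibit is hard to read, here's a generic worked example of the mechanic these questions are built on: AND the address with the mask to get the network ID. The numbers below are mine, not from the question.

```powershell
# Find the network ID for 192.168.1.77 with a 255.255.255.192 (/26) mask
$ip   = [System.Net.IPAddress]::Parse('192.168.1.77').GetAddressBytes()
$mask = [System.Net.IPAddress]::Parse('255.255.255.192').GetAddressBytes()
$net  = for ($i = 0; $i -lt 4; $i++) { $ip[$i] -band $mask[$i] }
$net -join '.'   # 192.168.1.64 -> usable hosts .65-.126, broadcast .127
```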


This next one is a little confusing, as the rabbit hole always goes deeper than your elementary level of understanding, as in this case. I'm familiar with enhanced session mode for accessing things connected to a machine, such as physical drives (USB and optical), however printing, it would appear, is a little different.

 photo 2016-03-22 13_zpsmorotxrb.png

So now we have a thing called VMConnect, and wow, that is a good article with useful screen grabs and everything. Although, I'm not actually seeing something called VMConnect being enabled. So in theory this is a real thing, and without having access to the actual product I'm limited in my ability to display proof of this technology's existence, but I'm going to assume it to be true whether or not MeasureUp knows it's correct or not.
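
Since I can't demo it, here's my best guess at what the setting looks like from PowerShell (enhanced session mode is a 2012 R2-era feature, enabled at the host level); the VM name is made up, and I believe VMConnect itself is just the console client.

```powershell
# Allow enhanced session mode connections on the host
Set-VMHost -EnableEnhancedSessionMode $true

# VMConnect is the console client; it can be launched against a host and VM by name
vmconnect.exe localhost 'VM1'
```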

 photo 2016-03-22 14_zpsfcqguhzt.png


This question is a much more straightforward definitional type of question where we have a description of an idea that relates to a specific technology so again this one is almost just memorization.

 photo 2016-03-21 9_zpserlmrjb9.png

The key here is the support of live migration and how it relates to SR-IOV.

 photo 2016-03-21 10_zps2yhfnxnz.png

Naturally I assumed that having a tech enabled would hinder live migration (as in the case of PXE boot); however, in this case I was wrong, and SR-IOV should actually be enabled for live migration to be supported. Then again, I could be completely wrong in that assumption, and perhaps I should switch the logic train to the CPU-cycles part. Maybe checking with TechNet would be good in this case?

It would seem that the page concerning live migration says nothing about this. That leads us to the understanding that SR-IOV reducing CPU clock cycles is the most important thing in this case.
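
For reference, the SR-IOV plumbing looks roughly like this; switch, NIC and VM names are hypothetical, and the hardware has to support it.

```powershell
# The virtual switch has to be created with IOV enabled
New-VMSwitch -Name 'IovSwitch' -NetAdapterName 'NIC1' -EnableIov $true

# Then the VM's adapter gets an IOV weight greater than zero (0 disables SR-IOV)
Set-VMNetworkAdapter -VMName 'VM1' -IovWeight 50
```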


That’s all for tonight, had a few others but in a flurry of screen grabs and mass uploading to photobucket some questions are in a slightly disorganized  condition so ill revisit those that im aware of a desire to further discuss as I work through the questions again.

 

DNS/IP configuration questions

I’ve come across a few questions that I need to resolve for myself concerning DNS, ipconfig or firewall rules, i’m going to try to group these together in a sensical manor but given that they are kind of all different the flow might not be perfect.

Diving into the first question I have on my list:

 photo 2016-03-22 4_zpsk7xy8e7f.png

This is a very good and logical question that I, as a person responsible for maintaining a network, should know. Unfortunately I don't. I would be less dismayed by the fact that I didn't know this if it were just a one-off question about an obscure firewall rule, but that's not the case here. As you can see, the answer is to configure Windows Firewall.

 photo 2016-03-22 5_zpsnyj1mpih.png

The next step here is to look up some more information on this problem and see if we can figure out how to set this up. In theory it sounds very similar to secure DNS updates, but this is something completely different. Unfortunately Google is no help, so we may have to chalk this one up to experience.
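
My guess at what "configure Windows Firewall" looks like in practice is scoping the inbound DNS port to the clients that should be allowed to query; the rule name and subnet here are made up.

```powershell
# Allow DNS queries only from an internal subnet (hypothetical addresses)
New-NetFirewallRule -DisplayName 'DNS queries (internal clients only)' `
    -Direction Inbound -Protocol UDP -LocalPort 53 `
    -RemoteAddress '10.10.0.0/16' -Action Allow
# A matching rule for TCP 53 would cover zone transfers and large responses
```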


Another question that’s really simple if you have enough experience to know exactly what is going wrong, this sort of break/fix logic isnt normally found in training material:

 photo 2016-03-21 15_zps1ibs32uj.png

 photo 2016-03-21 14_zpssbjoecxe.png

Apparently we are supposed to understand that there are stale records and we simply need to flush the cache. I found this video, and as you can see the user also goes through ipconfig /flushdns, so maybe I wasn't horribly off in my line of thought.
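
For the record, the fix from the video plus what I believe are the PowerShell equivalents that ship with 2012/Windows 8:

```powershell
ipconfig /flushdns          # the classic client-side flush
Clear-DnsClientCache        # same thing via the DnsClient module
Clear-DnsServerCache -Force # server-side cache, if the stale record lives on the DNS server
```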


Personally, I don't find IPv4-to-IPv6 questions (or vice versa) to be overly complicated, as it's kind of just a matter of what goes where rather than needing a pool of experience to work through break/fix scenarios or something of that nature. I decided to include them here for good measure, I suppose:

 photo 2016-03-21 6_zps3xjbytpv.png

This is really straightforward to me, but I'm going to put a training video here for understanding Teredo as well as the other transition technologies.
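
In lieu of the video, a few commands I believe are handy for poking at Teredo while working through the transition technologies; the -Type value is just an example.

```powershell
Get-NetTeredoConfiguration                        # current Teredo settings
Set-NetTeredoConfiguration -Type EnterpriseClient # e.g. allow Teredo on a managed network
netsh interface teredo show state                 # the older netsh view of the same thing
```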


These sorts of questions I do find hard because I feel like there’s possibly a measure of experience coupled with a deeper understanding of DNS than I currently possess.

 photo 2016-03-21 25_zpsbi8bwukk.png

I'm honestly not sure where to start to find information on this; it's too complicated for the basic websites that tell you what does what with DNS record types. I think the logic in the answer explains itself, which in this case suggests that we should better understand PTR record uses and forward lookup zones. In thinking that the basic definitional sites may not be enough, I could be faulting myself for not completely grasping the whole concept. I'm sure it's just a matter of time, though. Lots to learn!
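
To at least pin down the PTR piece, here's a sketch; the network, zone and host names are hypothetical. The reverse lookup zone has to exist before the PTR record can land in it.

```powershell
# Create a reverse lookup zone for 192.168.1.0/24, then add a PTR record to it
Add-DnsServerPrimaryZone -NetworkId '192.168.1.0/24' -ReplicationScope 'Forest'
Add-DnsServerResourceRecordPtr -ZoneName '1.168.192.in-addr.arpa' `
    -Name '25' -PtrDomainName 'host25.contoso.com'
```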

 photo 2016-03-21 26_zpsiqseuick.png

At first I was absolutely convinced that DNS was crafted by wizards that live in castles using eyes of newts and things like that. I think I've moved on (only slightly) from this concept as I've grown to kind of understand it. I found this lecture series that I'm currently watching, and I've also found this website to be extremely helpful.

 

Server hardware and Hyper-V

Server hardware and Hyper-V are confusing to me. I think I'm going to take a couple of minutes today to try and understand server hardware requirements as they specifically relate to Hyper-V. This is not in the best interest of passing the 70-410, however it's something I'm a little confused about, and I actually don't know that much about server hardware, so I suppose it could be good to understand it. Questions like: does a virtual core in Hyper-V actually correspond to a physical core on a server? I'm under the assumption that it does, because if it doesn't, then what exactly does it correspond to?

If I'm given the opportunity to make money again I would like to own a server and client setup to continually learn these sorts of things. Not sure why I hadn't done this in the past, but studying Server has really gotten my brain going down the road of possibilities. It's easy enough to build my own PC and virtualize Server for a while to run labs, but if I'm lucky enough to actually buy a server and have a small physical network, that would be interesting as well. It also brings up questions about the various versions of Server, like: if I have one Standard key, can I virtualize on my main box or do I have to virtualize on my server? In reality this shouldn't be that much different, as it's easy enough to establish some form of RDP connection and view your virtualized machines from the server, however it would be nice to be able to locally virtualize on a client machine. This setup would also allow me to install the RSAT tools and Server Manager on a Windows 10 install to see how that ran. Overall a very exciting prospect. I could also possibly finally do something with that copy of SQL Server 08 that's been sitting around too. Ok, first things first: let's take a look at some articles on virtualization and server hardware, then move on to the question that I feel like I should know by now concerning the various editions of Server.

This seems fairly interesting, and apparently I'm not the only one with these questions, which seems obvious:

CPU in your guests does not correspond to CPU in your physical host. The recommendation is that you assign no more than 8 vCPUs per physical core with guests prior to 2008 R2/Windows 7 and no more than 12 vCPUs per physical core on later guests. However, these numbers are difficult to understand in practice. You might have a VDI scenario where there are 100 2-vCPU desktops on a single host but only around 20 of those are ever likely to be active at a time. Strictly by the numbers, that would appear to need a 16-core system. In reality, that 16-core system is going to be nearly idle most of the time. On the other hand, you might be considering virtualizing a real-time communications system, such as a Lync mediation server. That's pretty much going to need to be designed at a 1-to-1 ratio and possibly with more vCPU than a physical deployment would ask for.

The takeaway is that there is some math to vCPU allotments but it’s really not going to be found in a generic 1-to-x statement. When a virtual machine wants to execute a thread, it’s first going to see if it has enough vCPUs (a virtual machine, like a physical machine, can only run one thread per logical processor). If it does, it will attempt to run that thread. Since it’s virtualized, Hyper-V will attempt to schedule that thread on behalf of the virtual machine. If there’s a core available, the thread will run. Otherwise, it will wait. Threads will be given time slices just as they would in a non-virtualized world. That means that they get a certain amount of time to complete and then, if there is any contention, they are suspended while another thread operates. All that said, the 1-to-8 and 1-to-12 numbers weren’t simply invented out of thin air. If you aren’t sure, they are likely to serve you well.

I am taking away from this quote that processor cores and speed are kind of a trial-and-error type of thing, with common sense and experience being a large factor in understanding the deployment requirements for various types of virtualization. For example, I'm assuming that if you have a virtualized instance of SQL or Oracle running that sees regular use, a 1:1 ratio of physical to virtual would be a good idea. However, it doesn't really answer my question of what exactly is going on here. Is Hyper-V Manager just making up a number of virtual cores that you can assign to specific machines, which in no way correspond to your physical hardware? I'm assuming RAM functions the same way? These are all questions that maybe I could have sought the answers to before writing the earlier article about how Hyper-V seemed inefficient in terms of physical processing power. Assuming that we are all on a less-than-shoestring budget, maybe it isn't all that bad, but it does still appear, at this point in time, to be somewhat of a luxury item, which could lead to better designs in the future with the necessary investments. To be honest, if you consider the capabilities of computers before the mid 90s, they all seemed like luxury items unless you really needed a fancy calculator for some scientific project.
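
For reference, these are roughly the knobs that discussion maps to in Hyper-V; the VM name and values are made up. vCPUs are scheduled onto the host's logical processors, and reserve/limit/weight manage contention rather than pinning cores.

```powershell
# Give a VM 4 vCPUs, reserve 25% of their capacity, cap at 100%, and weight it
# above the default when the host is contended (all values illustrative)
Set-VMProcessor -VMName 'SQLVM' -Count 4 -Reserve 25 -Maximum 100 -RelativeWeight 200
```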

Moving on with questions of efficiency: given that it's not really possible to get more physical RAM than you actually have simply by virtualizing, I'm finding it hard to believe that I could magically get more RAM through the use of clever software tricks. So, while I appreciate these numbers being displayed, I'm a little concerned about the efficiency of the actual correspondence between the software and the physical or parent host.

If high density is your aim, there's a bit of planning to be done for the virtual machines. Up to the first gigabyte of guest memory, Hyper-V will use 32 MB. For each gigabyte after that, Hyper-V has an overhead of 8 MB. If the guest is using Dynamic Memory and isn't at its maximum, a buffer will also be in place. By default, this is 10% of the guest's current memory demand. You can modify this buffer size, and Hyper-V can also opt to use less than the designated amount. Since Hyper-V does not overcommit memory, it is safe to squeeze in what you can. However, if you squeeze too tightly, some virtual machines will not be allowed what they need to perform well.

With this I'm possibly coming to understand that a resting server (meaning not under load) sits below its minimum requirements, and therefore we can actually squeeze more out of the machine while it's idle. This is now starting to make sense. I'm not sure how I jumped to that conclusion from that bit about RAM, but it does make sense to me. From this I will conclude that it's generally a good idea to correlate 1:1, but given idle time and layers of assurance we could possibly do less than that. I'm slowly becoming a fan of this technology now that I'm kind of grasping the concepts. I can just have saved-state versions of my machines for not a whole lot of extra cost or overhead, which would drastically reduce downtime, and it gets more use out of my existing hardware by just layering efficiently lightweight operating systems on top of it. I like this idea.
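
These appear to be the Dynamic Memory settings the quote is describing, sketched with made-up values:

```powershell
Set-VMMemory -VMName 'VM1' -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB `
    -Buffer 20 -Priority 80   # Buffer is the percentage cushion above current demand
```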


Now on to the next question: can I virtualize, on a client machine, using a license that I purchased to run on a physical server? Google sometimes has the answer, and in this case it indeed does:

There are no technical restrictions on the number of VMs that Windows Server 2012 Standard can host. What you are referring to is the number of included or “free” instances of Windows Server 2012 that may be installed without providing an additional Windows license.
For Windows 2012 Standard (there is no enterprise version of 2012), you are allowed 2 “free” instances of VMs with a Windows Server OS on that host. You are free to have as many virtual machines as you want, but you will need to provide the appropriate licenses for each Windows based VM you install. For example, if you install Windows Server 2012 Standard, you can install 2 more Windows Server VMs without needing to purchase a server license. If you want to install a third Windows Server 2012, you would need to purchase or provide another license, as well as the fourth, fifth, etc. For Windows 2012 Datacenter, you are allowed unlimited “free” instances of VMs with a Windows Server OS on that host. Windows 8 offers no “free” OS instances. For each Windows OS based VM you install, you need to provide and assign that Windows license to that particular VM. The only real “limit” is the amount of RAM for running VMs and/or disk space for installing OSes.

Ok, now we have hit an expenditure roadblock, making Hyper-V a much more expensive proposition for number-crunching CTOs and decision makers. Or maybe not. I may actually be able to get two servers out of one with a $500 Standard license and have the added layer of protection, improving my overall performance in terms of yearly system downtime numbers. Wait, is this more expensive, or is it actually a real value proposition? Well, that's going to depend on your accounting numbers, just like an organization can seem very profitable depending on which type of profit you're talking about, such as EBITDA numbers vs. gross product numbers. Tricky, still very tricky. The only way to really work out a cost-to-benefit ratio (which, the more research I do, appears to come out as a benefit) is to wait and really establish a proven set of numbers for an organization. Just like the accounting team's hopefully proven methodology of discovering profit margins that go beyond what the stock numbers are saying, the IT team within an organization should be able to figure out the same in terms of overall system design efficiency. However, I'm going to trust that MSFT knows what they are doing in this case and that this is a profitable product that makes people happy by increasing uptime SLAs, which translates directly into profit margins. With that, I'll just leave this guy here.

Oh yeah, so to answer my basic question: I would have to use remote desktop tools and not virtualize locally is what I think I'm looking at here.

 

Office 365 Exchange deployments and Azure

So I got a call about a job involving Office 365, possibly troubleshooting for domain users from recently acquired companies transitioning to a new domain, or possibly more business-admin, number-crunching type scenarios; the person wasn't exactly sure. Either way, I figured it would be a good idea to start to understand Office 365 and modern deployments of Outlook, as it would appear the days of the Exchange server being a physical box deployed on location are numbered. It turns out this was helpful for studying Server regardless of any situation involving accrual of capital that might occur (which may be needed at some point this year if I'm to continue studying technology, however unfortunately I may view the responsibilities of being a person that's working (working obviously meaning performing a task while incurring some sort of monetary benefit, clearly not a person of free will doing something because they really enjoy what they do)), due to the number of questions I've seen in testing scenarios involving Azure and the lack of discussion in any documentation I've come across. Specifically, I've noticed this with the trusts required involving ADFS; with Server you're obviously clueless as to the actual end-user support scenarios you may need to handle. We just know how to configure a lockdown policy on your browser and desktop, and possibly set you up with some shared folders, right?

For starters, I found this training video to be tremendously helpful and informative on a lot of levels. It's quite long, but worth a view if you're into these sorts of things.

He gets into the ADFS situations, and when considering Azure deployments I guess this is more practical than considering blanket deployments that look more like virtualization rather than something currently in practice, such as Exchange Server cloud deployments, which appears to be what 365 is doing in some cases, particularly with small businesses, and I'm assuming there are several examples of enterprise deployments of this solution. As to why a small business wouldn't just use Gmail, Exchange and Google Drive is beyond me, but I'm sure there are people that feel the premium hosting service is worth it to have the @company.com, or whatever their reason is.

Maybe we could get on with attempting to understand ADFS and how it works, since it seems like we are seeing a lot of technical jargon and not much in the way of a high-level overview of what a modern Exchange deployment might look like in an enterprise environment.

Interesting: no mention of Outlook or Exchange Server in any of those. What's going on here, guys? Is it too easy to use to be true? I think some people are worried that may be the case. However, I'm still confused about the end-user experience, because I've watched several videos about this on YouTube and the end-user experience seems to be awful. That is both a good thing and a bad thing; however, the video below makes it seem as if you can't deploy an email solution through Group Policy (I have my doubts about this, because good admins should be able to figure out how to deploy install packages).

Perhaps we could get back to understanding how to set up ADFS/DirSync, and I may have found something solid here, at least as a place to start understanding this tech.

That is actually turning out to be a really solid series, and it very clearly demonstrates the knowledge gaps between client support roles and server support roles. In a client support role most things are black and white, but if you start to follow this you quickly realize that for a server test to pose questions about how to set up a cloud environment, the questions could get really complex really fast, and there are a lot of other topics to cover that could possibly be more relevant, even though this is important to have a really solid foundational understanding of. At least I think that's what's being said?

That’s all for now, we learned a little about future cloud deployments, how testing works and gained a very confused understanding of
modern exchange deployments.

 

Nick Barnes Resume

Nick Barnes

MCSA on Server 2012 R2, MCP on Hyper-V

MCITP/MCTS on Windows Vista

5377 Rockmoor Dr.| Stone Mountain, GA 30088 | home 770-469-0783 | cell 770-845-7693 | Nickrbarnes@gmail.com

IT Product Professional
PRODUCT DEVELOPMENT | ENTERPRISE IMPLEMENTATIONS | TECH SUPPORT

Achievement Highlights:

 

    • Experienced IT product professional with 5 years of success leading all phases of diverse technology projects. Experienced product strategist, assisting with the design and implementation of technology products.
    • Business strategist: plans and manages multimillion-dollar projects, aligning business goals with technology solutions to drive process improvements and competitive advantage and to ensure client/business partnering success.
    • Excellent communicator: leverages technical and business acumen to communicate effectively with clients, executives and associated teams, driving and developing KPIs.
    • Diverse business skill set: able to manage and communicate diverse business needs and expectations, managing large project items and ensuring high-quality deliverables that meet or exceed timeline/budgetary targets.
    • Defined processes and tools best suited to each project. Moved between agile and waterfall approaches depending on project specifics and client goals, creating detailed project road maps, plans, schedules and work breakdown structures.
    • Awarded the CEO Innovation Achievement trophy for 2011 by Asurion Executive Officer Sue Nokes for stellar team achievements while successfully launching the Premier Support Solutions line of business. Participated in business model design, client product demonstrations and managing operational teams. Returned substantial revenue profit against a forecast of cost. Also developed KPIs as directed by business decision makers and worked with a dev team to turn these into actionable items for call logging in a CRM system that were dashboarded daily.

 

  • Tech blogger/student: after attending classes at CED for an MCSA on Server 2012, I am blogging my continued learning journey and creating technical content. My blog is available at www.xafterhoursx.com (named after a Scorsese film). I am also doing short-term temp work for Compucom/Robert Half.

 

Skills Summary

  • Certifications: MCTS/MCITP on Vista, MCSA on Server 2012 and MCP on Hyper-V
  • Platforms: Windows 10, 8, 7, Vista, XP, Server 2008, 2012
  • Networking: LAN / WAN Administration, VPN, TCP/IP, DHCP, CIDR/Subnetting  
  • Tools: Microsoft Office Suite, Microsoft Dynamics

Professional Experience

4/2017                                                                                CareerBuilder                                                                                   Client Support Specialist

  • Offer top notch technical support to CareerBuilder clients and problem resolution for internal customers
  • Track client issues using Salesforce
  • Work with cross platform teams to resolve client issues

 

2/2016 -8/2016                                                                    Contracts through Robert Half                                                  Desktop Support Technician

  • Inventoried approximately 2,000 machines in the SunTrust Plaza and garden buildings
  • Found and returned approximately 100-200 devices that were not currently in use and processed them through decom
  • Imaged approximately 20 Dell AIO devices at two Home Depot locations using Windows 10 builds
  • Used PXE boot utilizing both UEFI and legacy boot technologies
  • Used extensive troubleshooting skills, detailing environmental issues and engaging Cisco and tier 3 in-house Home Depot support
  • Boot issues sometimes required pulling error logs via PowerShell and managing build loads via BIOS SCCM options
  • Personally installed approximately 80 non-domain-joined laptop computers and configured Wi-Fi
  • Installed several wireless access points for the computers
  • Offered basic network connectivity and application support for approximately 200 machines

1/2016 Univar, contract through Intellapro

  • Replaced 5 user workstations, going from Win XP to Win 7 on new Dell machines connected to a domain
  • Migrated PST files from Outlook 1.1.1 to 1.1.3
  • Installed network printers; set up a printer in a public space that had not worked in over a year due to an IP configuration issue
  • Ensured the migration of all user files with no on-site admin

Artistic endeavors/Blogging and school work

8/13-1/16

  • Working with a Google+ group, featuring mostly international members, that is focused on helping people study for and obtain the Server 2012 MCSA certifications. We share our studies and discoveries and get together once a week to discuss findings and a weekly topic. These sessions are saved to our leader's YouTube channel; if you would like to view the group or videos, please find the links on my blog or simply ask and I can point you in the right direction.
  • Some of this time was not terribly professionally active; however, I spent it studying computer technology and pursuing personal study goals, mostly in the areas of humanities and fine arts. I also did some landscaping during this time.

6/2010-7/2013                                                       Asurion- Mobile Tech Support/Premier Support Solutions                                   Business Analyst

  • Worked with a team of 4 people reporting to the director of operations to develop and design every functional aspect of a new product for Asurion known as Mobile Tech Support
  • This team grew the product from a testing/survey phase, sold the product to AT&T, and designed systems, training, call scripts and everything else needed to roll out this product for a 100+ seat call center taking highly technical calls centered around answering any and every question a customer might have with a cellular device
  • Gave product demos to VPs from companies such as AT&T, Verizon, Bell Canada, and Directv  
  • Partnered with the software development team to design and implement call logging software utilizing Microsoft CRM and SQL backend.  
  • Participated in scrum meetings pertaining to the development of our Microsoft Dynamics products and the evolution of how our CSRs would validate customers, being product owners. At first it was a very manual process using AT&T's native software that required an FTP connection of sorts, and getting appropriate network and user access proved to be complicated. We eventually integrated the IVR into a solution that would auto-populate AT&T SOC codes into our CRM interface, negating the CSR's responsibility for owning another login and providing faster customer service to the end user.
  • This often included late night software roll outs with testing that went into the morning hours ensuring that tools would be available for our agents to use the next day.
  • Responsible for updating client interfacing website with new devices.
  • Developed knowledge base/troubleshooting articles on everything dealing with iPhone technology
  • Provided daily reporting of complete call data analysis based on data pulled from in-house developed CRM solutions using SQL reports fed into an Excel spreadsheet, ensuring that every call was broken down and viewable every day in a dashboard format to leaders from the board of directors to my manager. This information was used by leadership to make integral business decisions.

8/2007-12/2009                                                         Juris/LexisNexis (Contract through Vaco)                                           Lead Software Technician

  • Responsible for ensuring a smooth release of Lexis Insight, a new benchmarking tool for midsize law firms built from .NET 3.0 technology and utilizing SQL database structures
  • Managed the installation, setup and troubleshooting of Lexis Insight providing stellar customer service and prompt resolution for any problems that occurred with the software
  • Participated in scrum meetings with .NET/SQL/HTML developers
  • Updated and maintained client workstations and servers on a variety of platforms including Server 2000, Server 2003 and XP
  • Maintained a working client relationship with 200+ international and domestic law firms including making cold sales calls to invite clients to participate in using the Insight product
  • Worked closely with development to ensure the highest possible product quality and customer satisfaction   
  • Functioned as assistant systems admin providing internal PC support, including working with Windows Vista, XP and Server 2000, troubleshooting networking issues, and supporting various internal applications as needed; assisted with the manual migration of 200+ machines to a new domain

 

6/2003-6/2006                                                                 Alliance Data                                                                                                         Helpdesk Lead                                                                   

    • Awarded the Tennessee Award for leadership excellence
    • Functioned as a front line supervisor for  90+ associates
    • Developed a team of 12+ Helpdesk Agents, motivating, building trust and instilling confidence
    • Significantly and consistently increased agent productivity and work quality, optimizing agent performance through the use of customer service metrics as well as strong leadership skills
    • Purposefully communicated client expectations and organized helpdesk efforts to ensure client satisfaction                                                                                  
    • Coordinated a variety of internal support personnel with vendor support throughout service outage conditions, ensuring prompt resolution     
    • Handled a wide variety of escalated calls while providing top tier customer service to clients                                                                                                     

 

  • Functioned as a resource to Helpdesk Agents to assist in problem resolution

 

Education & Credentials

Dec   2006                      Milligan College                                                                                            Johnson City, TN                                                                                              

                       Bachelor of Fine Arts, Emphasis in Photography         

                                      GPA 3.0, Dean’s list 2006                                                                                                                                                                                

Sept 2008                       MCITP: Desktop Support Technician                                                            CED Solutions

June 2015       MCSA: Windows Server 2012            CED Solutions

 

5377 Rockmoor Dr. | Stone Mountain, GA 30088 | 770-469-0783 | Nickrbarnes@gmail.com

More questions and answers that have no names

This post is going to be a series of questions, as I'm kind of working from the assumption that it's a more efficient way to do things. The first question is one that I was completely confused by. I'm not sure if it just wasn't really covered in the material I've read or if it wasn't covered enough, but it was certainly a head-scratcher for me. Not exactly sure what else to do but attempt to write enough about the topic so that if I see something like this again I can be prepared. I also learned a new HTML tag that allows me to do separations (line breaks) between questions; hopefully it will work on WordPress. Really enjoying this site, also really enjoying Server, and hoping to become employed at some point so that I can financially afford to switch back to a client OS, get some certs on Win 10 and work through this Network+. A new computer and a copy of Server would be amazing for both, but let's not get ahead of ourselves. I did some work outside today and found it to be amazingly relaxing, which is a nice change. In some ways manual labor really is easier; I spent the past couple of summers doing lawn work and I've worked at oil change places before, so I guess I have some room to say that. Anyway, full steam ahead in hopes of finding some sense of reasonable employment!


Ok so on to this first question that I was so clueless over.

 photo 2016-03-22 6_zpsmc1bt4cr.png

So we have a VM connecting to a SAN and something about a LUN. I understand the idea, in that it's some sort of storage device, but other than that I'm essentially clueless, so we should probably check YouTube.

This is still slightly confusing up front with the conversations about iSCSI, but other than that it all seems somewhat familiar, so I'm not entirely sure why it seems so horribly confusing. However, that video is a little thin; let's try an official MSFT video.

So this is a great video with lots of awesome introductory stuff about JBOD (something else I didn't understand), double parity and an array of other topics, but nothing about this specific topic. We may have to resort to plain TechNet articles. Bummer.

Ok, so this one is decent, but I'm still kind of confused as to having a "baseline" understanding of the question being asked. At this point I've spent half a day on this one question. Again, bummer.

Oh great, here is a playlist that claims you will know everything you need to know to master Hyper-V after watching all of them. Woof. There is actually a video in here by our faithful narrator from before that turned out to be not so bad. Ok, I'm giving up on YouTubers and coming to the realization that, again, the only confusing part of this is the nomenclature; the phrase "connect to a LUN" means absolutely nothing to me. This could be a result of the fact that I barely know what a SAN is or how it works. The worst part of this obscure line of questioning is that nothing in this TechNet article leads me to a better understanding of the concept. I'm sure I'll understand all this at some point, but it may not be today, so let's just move on to memorizing the answer.

 photo 2016-03-22 7_zpsp3tm71yz.png

Well, that was far less painful than assumed: add an HBA, as previously discussed (while neglecting the bit about a LUN), and create a Fibre Channel adapter. Not nearly as bad as a tooth extraction; again, just overly complicated verbiage.
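
My reading of that answer as commands, with hypothetical SAN and VM names. The virtual SAN itself gets defined first (Virtual SAN Manager in Hyper-V Manager, or New-VMSan), then the VM gets a virtual Fibre Channel adapter tied to it:

```powershell
# Attach a virtual Fibre Channel adapter to the VM, pointed at an existing virtual SAN
Add-VMFibreChannelHba -VMName 'VM1' -SanName 'ProductionSAN'

# Confirm the WWNs the VM will present to the SAN (these get zoned/masked to the LUN)
Get-VMFibreChannelHba -VMName 'VM1'
```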


Ok, with this next one I'm just going to jump in and show the question and answer.

 photo 2016-03-22 2_zpsyskcohwj.png

 photo 2016-03-22 3_zpscvmfygbq.png

So it seems fairly confusing, right? Sorry you can't see both answers highlighted, but it's Hashing mode & Switch independent. The explanation given seems to cover most of it, but I don't understand this hashing mode stuff; never heard of her. This is going a little deeper into the sea of Hyper-V networking than I've gone before, and I'm scared of getting the bends. Well, let's see what kind of useless information we can dig up on this obscure byte of technology. This seems ok, but it kind of makes it seem like this is one of those $100 gaming NICs that doesn't actually do much for real-world performance. A forum piece also exists, so that's helpful. Maybe this is the answer I'm looking for:

Address Hash is how you configure your team to load balance network traffic between the NICs in the team

Or maybe not. Either way, one thing is for certain: I think once you get used to blogging, HTML is a superior format to Word for document creation. I also think this one requires some degree of memorization as opposed to rationalization between the ideas.
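
As best I can tell, the two highlighted answers come out to something like this as a command; team and NIC names are made up, and "Address Hash" in the GUI maps to the hashing load-balancing modes in PowerShell.

```powershell
# Switch-independent team using an address-hash load balancing mode
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```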


On to the next idea that's hopefully more than random memorization. We could get a little more in depth with the NIC teaming situation, but in the interest of time I think we will skip it. How about some basic share permissions questions? Cool.

 photo 2016-03-22 8_zpsg7lsrclz.png

 photo 2016-03-22 9_zpsg3hhaoqf.png

I find these to be endlessly confusing, because supposedly if you have an explicit deny NTFS permission it will win, thus ensuring no access. I realize that isn't the case here, but why you wouldn't adjust the permissions is slightly confusing to me. I can understand that the read + write will essentially be nullified in this case, leaving you with the share permissions to modify to ensure change access. This can be very confusing though, and surely I'm not the only person that struggles with it.
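
A sketch of adjusting the share-level permission to Change while leaving NTFS alone; the share and group names are hypothetical.

```powershell
# Grant Change at the share level; effective access is still the more restrictive
# combination of share and NTFS permissions
Grant-SmbShareAccess -Name 'Reports' -AccountName 'CONTOSO\Finance' `
    -AccessRight Change -Force
Get-SmbShareAccess -Name 'Reports'
```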


So, I know literally nothing about this next one; outside of the idea of secure DNS updates and Active Directory-integrated zones I'm literally clueless. It seems to have something to do with DNSSEC (I'm not even really sure what that is, so I went with that).

 photo 2016-03-22 4_zps8rcucbkm.png

 photo 2016-03-22 5_zpsms9eusnk.png

Oh, thanks for mentioning in passing that I should disable one of the other millions of firewall rules. Does this technology have a name? I honestly don't even know where to start. I realize this is so far from helpful, but what am I supposed to Google here to try to figure this one out? Is god real? Why is the sky blue? If I Google the question I will at best come up with a similar question and answer. So I guess we chalk this one up to memorizing the answer, or to being very experienced with actually configuring Windows Firewall rules. I suppose if I had Server I could demo this, but I don't, so sorry guys.
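
If I did have a box to poke at, this is roughly where I'd start: list the DNS-related rules and disable whichever one the answer calls out. The filter and rule name below are guesses, not taken from the question.

```powershell
# See which DNS-related firewall rules exist and whether they're enabled
Get-NetFirewallRule | Where-Object { $_.DisplayName -like '*DNS*' } |
    Select-Object DisplayName, Enabled, Direction, Action

# Disable a specific rule by display name (hypothetical name)
Set-NetFirewallRule -DisplayName 'DNS (UDP, Incoming)' -Enabled False
```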

 

Starter GPOs and setting links

This question is mostly straightforward. The kind-of-obvious part, where you have to decide whether the two GPOs have already been created or you are still in the process of creating them from the starter GPO, is a little confusing, but ultimately it seems clear that they exist prior to the effort you are currently undertaking, for better or worse.

 photo 2016-03-21 18_zps0bjzy1w5.png

I tried to find a YouTube video that demonstrated this way of linking GPOs, but I found it surprisingly difficult. I'm not saying it doesn't exist, but I gave it a good 30-45 minutes of searching and found nothing. So we are just going to trust that this method actually works in practice.

 photo 2016-03-21 19_zpspwluqgsx.png

I suppose if I had access to Server I could make a video for this; unfortunately I don't, and according to the answer my question of linking starter GPOs may be irrelevant, because they serve as templates, not actual GPOs, if my understanding is correct. Maybe I do understand this, so the process goes:

  1. Create a starter GPO.
  2. Create an actual GPO from the starter GPO template.
  3. Link that GPO to an OU, which you cannot do from the starter GPO settings.

This would be much easier if I could actually test this on the software.
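
In lieu of testing it, here's what I believe the three steps above look like as commands; the names are hypothetical, and the starter GPO has to exist already (created in GPMC or with New-GPStarterGPO).

```powershell
# Create a real GPO from a starter GPO template, then link it to an OU in one pipeline
New-GPO -Name 'Branch Firewall Policy' -StarterGPOName 'Branch Baseline' |
    New-GPLink -Target 'OU=Branch,DC=contoso,DC=com'
```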

 

Parity? Three-way mirror?

I chose this question due to the number of answers, the possible complications, and questions concerning labeling again. Does mirroring use parity data? I'm honestly not sure, but I think it's a safe assumption that if you are using a space-saving RAID configuration, some semblance of parity data must be in use. Thus it's either the right answer or just listed as an option to throw nubs off.

Anyway, here is the question/answer set:

 photo 2016-03-21 4_zpslwugzcsg.png

 photo 2016-03-21 3_zpsypvezfkv.png

So, as you can see, parity is there, but I think the understanding is that it's an incomplete answer. Which begs the question: what exactly is a three-way mirror? You have 3 sets of disks and all three have the same set of data on them? I'm unfamiliar with this idea in the context of RAID configurations. I guess it does say "if two disks fail at the same time," so that actually makes sense and explains why it's the answer here, but it still doesn't explain why you would use 5 disks in this situation. If you're using fewer disks than a 1:1 ratio (or more, in this case), what's going on with the extra disks in this RAID configuration?

 photo 2016-03-21 5_zpsu8myw95f.png

As you can read in the linked MSFT article, the word parity is even mentioned, and the answer doesn't provide much in the way of clarity. Not much to do here but assume that super obscure ideas are fair game.
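
For what it's worth, a three-way mirror in Storage Spaces terms looks roughly like this (pool name and size made up). Keeping three copies of the data is what lets it survive two simultaneous disk failures, and as far as I can tell that layout needs at least five physical disks in the pool, which would explain the five disks in the question.

```powershell
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'CriticalData' `
    -ResiliencySettingName Mirror -NumberOfDataCopies 3 -Size 500GB `
    -ProvisioningType Thin
```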

Anyway, if I were feeling more spunky or had more time I would link to articles on each of the answers, but that isn't exactly what I'm going for with this set, or what I have the time to do at this point.

 

A stripe-set also increases performance?

I know I said I was going to keep the logistical questions to a minimum, but this is another great example of ideas clashing. In the previous question we are told to take a volume offline to create a pass-through disk to ensure high performance. So let's check out the next question:

 photo 2016-03-21 16_zps7wccbfmd.png

There’s a whole lot of potential answers in the right hand section and two separate free space partitions listed in the example. While I understand that as pass-through disk increases performance this seems to be unimportant here despite the emphasis on performance in the question. Obviously these tests are not impossible as people do pass them but the logic is not for the feint of heart.

Any way here’s what “the book” says:

 photo 2016-03-21 17_zpsqcvroxt1.png

It almost seems to be a parody of the idea that you can take a test without experience, use it as leverage in an argument, and expect some sort of promotion. Surprisingly, in spite of it being bad for me personally, I kind of support this notion. I feel that the cream will probably rise and that things will be as they should with hard work in that respect.

Anyway, apparently a “stripe-set” logically translates to a global group as far as the lay people are concerned.
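
This isn't the Disk Management wizard the question is using, but the Storage Spaces version of the same striping idea, for comparison: a Simple (no resiliency) space spread across two columns for speed. Pool name and size are made up.

```powershell
# Striped, non-resilient space across two disks' worth of columns (illustrative values)
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'ScratchStripe' `
    -ResiliencySettingName Simple -NumberOfColumns 2 -Size 200GB
```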

 

Pass-through disks

I got through about 40 questions today and it went ok. This is an example of "this logic is somewhat full of fail because of confusing or missing words," and there are lots of examples of this; however, I kind of understood where this was going.

 photo 2016-03-21 8_zpshoegasub.png

Ok, so I like to think that I'm a reasonably intelligent person; however, there are lots and lots of people out there that are vastly more intelligent than me and biologically superior. It's just a fact. Regardless, on to the topic at hand. This is talking about I/O being important, so I'm starting to think about this. Just kidding, I really meant this, because we are trained that pass-through disks are superior for performance.

However, nowhere in the question does it state whether there is free space on the disk or whether there is a volume created on the disk that would require deletion. I've found these types of questions to be accurate in terms of the kind of logic required for testing.
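
Since the questions keep leaning on pass-through disks, here's a sketch of the actual wiring; the disk number and VM name are hypothetical. The physical disk has to be offline on the host before Hyper-V will hand it to a VM, which is the "take the volume offline" step these questions keep referencing.

```powershell
# Take the physical disk offline on the host, then attach it to the VM directly
Set-Disk -Number 2 -IsOffline $true
Add-VMHardDiskDrive -VMName 'VM1' -ControllerType SCSI -DiskNumber 2
```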
