Learning so much! I try to do blog posts that cover 5 questions, but I'm struggling with the amount of research that goes into them being more voluminous than what I'm used to. The very first question required almost 1k words and I had to split the 5-question lot into 2 posts. Thus the confusion on the post numbering, because you know, I like to stick with an established method of doing things. Very traditional haha… anyway, here is the first question:
I have some confusion about data storage terms and now might be a good time to clear those up. The other part of the question is what kind of data the message file is. Like, is it CSV? I would assume so, as that would most likely be the most efficient way. I'm not sure if I'll get an answer to that, but maybe I'll learn something by searching for one. Regardless, it seems that would plausibly go into a table if that were the case, but I can see two problems with that: one, why use SQL for everything, and two, it might end up being fragmented data based on how the conversations are handled. Data lake and blob seem the same to me, so let's figure out what that means: Azure Data Lake vs Azure Blob Storage in Data Warehousing
I think blob storage is good at non-text based files – database backups, photos, videos and audio files. Whereas data lake I feel is a bit better at large volumes of text data. More often than not, personally, I would choose Data Lake Store if I’m using text file data to be loaded into my data warehouse. Of course, you can use blob storage, but I feel that is for those non-text data that I mentioned above.
Welp, that's helpful. It seems more than 'opinion based', but let's find out what MSFT has to say: Comparing Azure Data Lake Storage Gen1 and Azure Blob Storage
Based on this, I like the idea that data lake is for text files, and worst case scenario we can safely assume that chat message logs are text based. But I'm not sure how they are indexed, outside of using SQL, to find key words and so forth, and MSFT isn't really saying. It can't be that hard to figure out if you actually have one or look at someone's chat data.
The next interesting thing in this is 'a file share in an Azure storage account'. Is this blob storage? Haha, now stop me if I'm wrong about the difference between the hierarchical and folder storage models, neither of which are 'containers'.
Regardless, there are more storage types than blob and data lake. The confusing part is that no one exactly lays out data lake in articles. I read through several and found this one helpful: Microsoft Azure Storage Overview
Azure blob storage: It is optimized to store huge unstructured data. Storage is in terms of binary large objects (BLOBs).
Azure table storage: It has now become a part of Azure Cosmos DB. Azure table stores structured NoSQL data.
Azure file storage: It is a fully managed file sharing service in the cloud or on-premises via the Server Message Block (SMB) protocol.
Azure queue storage: It is a storage service that stores messages that can be accessed through HTTP or HTTPS from any part of the globe.
Disk storage: It is a virtual hard disk (VHD) which is of two types: managed and unmanaged.
Which matches up with what MSFT says in this Introduction to the core Azure Storage services
The Azure Storage platform includes the following data services:
Azure Blobs: A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage Gen2.
Azure Files: Managed file shares for cloud or on-premises deployments.
Azure Queues: A messaging store for reliable messaging between application components.
Azure Tables: A NoSQL store for schemaless storage of structured data.
Azure Disks: Block-level storage volumes for Azure VMs.
As you can see it doesn’t mention Data Lake storage but there is a separate article for that Introduction to Azure Data Lake Storage Gen2
Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. Data Lake Storage Gen2 is the result of converging the capabilities of our two existing storage services, Azure Blob storage and Azure Data Lake Storage Gen1. Features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and scale are combined with low-cost, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage.
And this leads me to wonder if there will be variations between Gen1 and Gen2 noted on the test. I guess we will get to that when it shows up. Anyway, it's for big data that you run analytics against. Somehow. Anyway, based on this information I think that the Data Lake answer makes sense.
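Since Gen2 is "built on Azure Blob storage" but adds "file system semantics", here's a toy sketch in plain Python (no Azure SDK; the blob names are made up) of why that hierarchical namespace matters. In a flat blob namespace, "/" is just a character in the name, so renaming a "directory" means rewriting every blob under that prefix:

```python
# Toy illustration: blob storage is a flat key space, so renaming a
# "directory" touches every blob under the prefix. A hierarchical
# namespace (Data Lake Gen2) renames one directory node instead.

flat_blobs = {
    "logs/2020/01/chat.txt": b"...",
    "logs/2020/02/chat.txt": b"...",
    "images/cat.png": b"...",
}

def flat_rename_dir(blobs, old_prefix, new_prefix):
    """Flat namespace: one rewrite operation per blob under the prefix."""
    ops = 0
    for name in list(blobs):
        if name.startswith(old_prefix):
            blobs[new_prefix + name[len(old_prefix):]] = blobs.pop(name)
            ops += 1
    return ops

ops = flat_rename_dir(flat_blobs, "logs/", "archive/")
print(ops)                  # 2 blobs had to be rewritten for one "rename"
print(sorted(flat_blobs))
```

With a true hierarchical namespace, that same rename is a single directory operation, which matters a lot for analytics jobs that shuffle directories around.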
I'm not sure what these roles are, so let's find that out: What is role-based access control (RBAC) for Azure resources?
Owner – Has full access to all resources including the right to delegate access to others.
Contributor – Can create and manage all types of Azure resources but can’t grant access to others.
Reader – Can view existing Azure resources.
User Access Administrator – Lets you manage user access to Azure resources.
Ok, so this one is pretty straightforward.
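To make the split between those four roles concrete, here's a minimal sketch in plain Python (my own toy model, not how Azure actually evaluates RBAC) treating each built-in role as a set of permissions:

```python
# Toy model of the four built-in RBAC roles above as permission sets.
# Not Azure's implementation; just the conceptual split.

ROLES = {
    "Owner":                     {"read", "manage_resources", "manage_access"},
    "Contributor":               {"read", "manage_resources"},
    "Reader":                    {"read"},
    "User Access Administrator": {"read", "manage_access"},
}

def allowed(role, action):
    return action in ROLES[role]

print(allowed("Contributor", "manage_resources"))  # True
print(allowed("Contributor", "manage_access"))     # False
print(allowed("Owner", "manage_access"))           # True
```

The thing to notice is that Contributor and User Access Administrator are complements: one manages resources but can't grant access, the other grants access but doesn't manage resources, and Owner does both.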
Haha, this is great, I get to quote myself! And no, I didn't do this on purpose.
Quickly realizing that you gave an answer to a question in the last blog post, then looking at the question and somehow not knowing right away that the answer is what you said in the last blog post, regardless of what you were thinking when going through the questions, is a good sign that this may take longer than expected. Sigh. I've also realized that keeping text from articles in a standardized format is more annoying with this method of blogging than using a text editor, so I'll go back and clean that one up. This next one has a lot of screenshots and does not appear to be answerable, but I would like to cover the material nonetheless.
And if you let them sit a few days, haha, magic… anyway, here is the list of articles possibly related to this topic, thus assuring me there is an overwhelming amount of info covered on this bad boy.
Anyway, this is the one we are covering in this question: Configure a VNet-to-VNet VPN gateway connection by using the Azure portal
Wow, that is a long article but the following is what they are talking about with the gateway:
To create a virtual network gateway
- From the Azure portal menu, select Create a resource.
- In the Search the Marketplace field, type ‘Virtual Network Gateway’. Locate Virtual network gateway in the search return and select the entry. On the Virtual network gateway page, select Create. This opens the Create virtual network gateway page.
- On the Basics tab, fill in the values for your virtual network gateway. Project details:
- Subscription: Select the subscription you want to use from the dropdown.
- Resource Group: This setting is autofilled when you select your virtual network on this page.
- Name: Name your gateway. Naming your gateway is not the same as naming a gateway subnet. It's the name of the gateway object you are creating.
- Region: Select the region in which you want to create this resource. The region for the gateway must be the same as the virtual network.
- Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
- VPN type: Select the VPN type that is specified for your configuration. Most configurations require a Route-based VPN type.
- SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN type you select. For more information about gateway SKUs, see Gateway SKUs.
- Generation: For information about VPN Gateway Generation, see Gateway SKUs.
- Virtual network: From the dropdown, select the virtual network to which you want to add this gateway.
- Gateway subnet address range: This field only appears if your VNet doesn’t have a gateway subnet. If possible, make the range /27 or larger (/26,/25 etc.). We don’t recommend creating a range any smaller than /28. If you already have a gateway subnet, you can view GatewaySubnet details by navigating to your virtual network. Click Subnets to view the range. If you want to change the range, you can delete and recreate the GatewaySubnet.
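The /27-or-larger advice is easy to sanity-check with Python's stdlib ipaddress module (the 10.0.255.0 range here is just an example, not from the article): each step down in prefix length doubles the addresses available to the gateway.

```python
# Why /27 or larger: compare address counts per prefix length.
import ipaddress

for prefix in (28, 27, 26, 25):
    net = ipaddress.ip_network(f"10.0.255.0/{prefix}")
    print(net.with_prefixlen, net.num_addresses)
# 10.0.255.0/28 16
# 10.0.255.0/27 32
# 10.0.255.0/26 64
# 10.0.255.0/25 128
```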
The gateway subnet range seems confusing until you assume they are using NAT, and on the other side of the gateway it wants to know what range of addresses are going through that gateway on the VLAN. Why? Honestly, no clue, which adds to the confusion of 'am I thinking correctly about why it asks for this'. Normally it should ask for a gateway when going between VLANs. Assuming that's what a VNet is? Maybe I should verify that too: Azure Virtual Network frequently asked questions (FAQ)
What is an Azure Virtual Network (VNet)?
An Azure Virtual Network (VNet) is a representation of your own network in the cloud. It is a logical isolation of the Azure cloud dedicated to your subscription. You can use VNets to provision and manage virtual private networks (VPNs) in Azure and, optionally, link the VNets with other VNets in Azure, or with your on-premises IT infrastructure to create hybrid or cross-premises solutions. Each VNet you create has its own CIDR block and can be linked to other VNets and on-premises networks as long as the CIDR blocks do not overlap. You also have control of DNS server settings for VNets, and segmentation of the VNet into subnets.
Use VNets to:
Create a dedicated private cloud-only VNet. Sometimes you don’t require a cross-premises configuration for your solution. When you create a VNet, your services and VMs within your VNet can communicate directly and securely with each other in the cloud. You can still configure endpoint connections for the VMs and services that require Internet communication, as part of your solution.
Securely extend your data center. With VNets, you can build traditional site-to-site (S2S) VPNs to securely scale your datacenter capacity. S2S VPNs use IPSEC to provide a secure connection between your corporate VPN gateway and Azure.
Enable hybrid cloud scenarios. VNets give you the flexibility to support a range of hybrid cloud scenarios. You can securely connect cloud-based applications to any type of on-premises system such as mainframes and Unix systems
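That "as long as the CIDR blocks do not overlap" requirement from the FAQ can be checked directly with Python's stdlib ipaddress module (the address ranges below are made-up examples):

```python
# Checking the non-overlapping CIDR requirement for linking VNets.
import ipaddress

vnet_a = ipaddress.ip_network("10.1.0.0/16")
vnet_b = ipaddress.ip_network("10.2.0.0/16")
vnet_c = ipaddress.ip_network("10.1.128.0/17")   # falls inside vnet_a

print(vnet_a.overlaps(vnet_b))   # False -> these two could be linked
print(vnet_a.overlaps(vnet_c))   # True  -> these two could not
```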
So basically it's the Azure version of a VLAN. Ok, so this block editor is super buggy, so bear with me if there are some formatting issues. Using the block editor is a nightmare if you bring over text with both a header and an ordered list. It splits them into separate blocks that it won't merge; one will allow you to edit the text into a quote and the other won't. It's kind of a pain in the ass. You have to switch it to HTML and then delete the second block, and be careful you select the right block, because I accidentally deleted this paragraph and then it wouldn't let me use the back button to restore it. So I'm learning as I go with this, as opposed to stressing over formatting too much. Seems like a waste of time to become a quick expert.
Now I've got this block quote text that I can't get to go away, lol, fun. Anyway, I feel like I covered the topics at hand and will try to go back and adjust formatting on the previous post at some point. Hmm, I think I resolved that by changing the blockquote HTML that auto-populates into a paragraph, and then it didn't like that, so I switched it over to a classic block and removed it. That seems to have worked… thanks for all your help in illuminating these issues. Good work.
Anyway, there was an additional question in this lot, but it's basically another post-unto-itself type of question and I'll get back to it later. That's all for now!
Now that I kind of know how to use this interface and have done 1 question, it's time to start into the next set and hopefully get 5-10 knocked out.
Well, I was on the right track here, but having no background in Azure it's kind of a shot in the dark, even being familiar with MSFT stuff. I mean, I'm sort of familiar with Azure, but given my shock at what it does in the last post, it's clear that I have a lot to learn. I would assume that you would add the rule and then auto-scaling, but that could go either way. However, it does seem you would want to add the rule and then say the rule auto-applies? I mean, ok, you apply the auto-scaling with no logic haha. Anyway, splitting hairs. I would also assume that the rule contained the condition, but again, wrong.
When walking through the UI it makes sense, but it doesn't talk about Azure App Service tiers, so I should probably look at that. What's online isn't super clear, but it seems like it will scale out to an additional instance, so it's possible that comparing the cost of pushing another instance vs setting it to a higher tier would be an issue for pricing configuration optimization. Maybe there is a video on YouTube about this… well, there's a page with a video that I found helpful: How and When to Scale Up/Out Using Azure Analysis Services
Let’s start with when to scale up your queries. You need to scale up when your reports are slow, so you’re reporting out of Power BI and the throughput isn’t working for your needs. What you’re doing with scaling up is adding more resources. The QPU is a combination of your CPU, memory and other factors like the number of users.
Memory checks are straightforward. You run the metrics in the Azure portal and you can see what your memory usage is, if your memory limited or memory hard settings are being saturated. If so, you need to either upgrade your tier or adjust the level within your current tier.
CPU bottlenecks are a bit tougher to figure out. You can get an idea by starting to watch your QPUs to see if you're saturating those using those metrics and looking at the logs within the Azure portal. Then you want to watch your processing pool job queue length and your processing pool busy, non-IO threads. This should give you an idea of how it's performing.
For the most part, you’re going to want to scale up when the processing engine is taking too long to process the data to build your models.
Next up, scaling out. You’ll want to scale out if you’re having problems with responsiveness with reporting because the reporting requirements are saturating what you currently have available. Typically, in cases with a large number of users, you can fix this by scaling out and adding more nodes.
You can add up to 7 additional query replicas; these are Read-only replicas that you can report off, but the processing is handled on the initial instance of Azure Analysis Services and subsequent queries are being handled as part of those query replicas. Hence, any processing is not affecting the responsiveness of the reports.
After it separates the model processing from query engine, then you can measure the performance by watching the log analytics and query processing units and see how they’re performing. If you’re still saturating those, you’ll need to re-evaluate whether you need additional QPUs or to upgrade your tiers.
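Here's my reading of that up-vs-out logic boiled down to a rough decision sketch in plain Python (the function, its inputs, and the replica cap handling are my own framing of the video's advice, not anything official):

```python
# Rough up-vs-out decision sketch: memory/processing pressure points at
# scaling UP (bigger tier / more QPUs); lots of concurrent report users
# saturating queries points at scaling OUT (read-only query replicas).

MAX_QUERY_REPLICAS = 7   # per the article: up to 7 additional replicas

def scale_advice(memory_saturated, processing_slow, query_users_saturated, replicas):
    if memory_saturated or processing_slow:
        return "scale up: move to a higher tier / more QPUs"
    if query_users_saturated and replicas < MAX_QUERY_REPLICAS:
        return "scale out: add a read-only query replica"
    return "no change (or re-evaluate tiers)"

print(scale_advice(True, False, False, 0))   # memory pressure -> scale up
print(scale_advice(False, False, True, 2))   # user saturation -> scale out
print(scale_advice(False, False, True, 7))   # replica cap hit -> re-evaluate
```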
The thing about this is that it's not mentioning tiers beyond standard, and these are the current plans, but it says up or out. Up being to a 'better machine' and out being to create a replica machine at the same price point, as I understand it. Anyway, these are the current tiers:
Honestly, it's a fairly basic concept, but to calculate the cost you'll probably need some kind of Azure pricing calculator. Anyway, I liked this link too: Horizontal vs Vertical scaling – Azure Autoscaling … moving on
First of all, I have no idea what QnA Maker is, at all, but I did get this right, except the terms were backwards. Which means it wasn't right. Not sure if this was legible. Anyway, here is the base article and not the 'how to': QnA Maker
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your knowledge base—automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.
I mean, I could have figured that out based on the name, but let's find a docs article with some descriptors: Quickstart: Create, train, and publish your QnA Maker knowledge base. Ok, so I was confused by this one based on the question, but it starts with this:
Create your first QnA Maker knowledge base
Sign in to the QnAMaker.ai portal with your Azure credentials.
In the QnA Maker portal, select Create a knowledge base.
On the Create page, skip Step 1 if you already have your QnA Maker resource. If you haven't created the resource yet, select Create a QnA service. You are directed to the Azure portal to set up a QnA Maker service in your subscription. Remember your Azure Active Directory ID, Subscription, and the QnA resource name you selected when you created the resource. When you are done creating the resource in the Azure portal, return to the QnA Maker portal, refresh the browser page, and continue to Step 2.
Which there is an entire article about this that explains the QnA Maker management service: Manage QnA Maker resources
That one walks through using and creating things with it, and it's basically the engine that makes the API function. So basically anything having to do with the actual interaction uses that. The terms seem to be a little hazy, but that is what they are talking about.
Now, as for the runtime, this is a little confusing if you're thinking it's insights into how people interact with your data. One would assume that 'runtime' would be associated with hardware performance, but again, terms can be confusing at times. This is also from the setup article:
The QnAMaker runtime is part of the Azure App Service instance that’s deployed when you create a QnAMaker service in the Azure portal. Updates are made periodically to the runtime. The QnA Maker App Service instance is in auto-update mode after the April 2019 site extension release (version 5+). This update is designed to take care of ZERO downtime during upgrades.
You can check your current version at https://www.qnamaker.ai/UserSettings. If your version is older than version 5.x, you must restart App Service to apply the latest updates:
So, you can sort of see info on how the process is running here, and I'm assuming that also has processor and RAM usage info? Unclear, but I'm sure this is part of a standard format that I'll figure out as we move along.
Now, if you want to understand how people are interacting with your data, this is the information you're looking for: Get analytics on your knowledge base
QnA Maker stores all chat logs and other telemetry, if you have enabled App Insights during the creation of your QnA Maker service. Run the sample queries to get your chat logs from App Insights.
So, these terms are not backwards at all, and I'm not really sure what the hell I was thinking when I answered it the way that I did, but now I have a new level of clarity on all sorts of things. Moving on…
Ok so… this is a whole-ass set of stuff to get into. The answer seemed obvious to me, but what does the Azure AD Connect wizard do? The last time I looked into Azure and on-prem connectivity was 2012, and I'm assuming a lot has changed, so let's start there… holy shit… ok: What is hybrid identity with Azure Active Directory?
Alright… let's start with What is Azure AD Connect? And I'll have the cheesecake, yes, the entire thing, thanks. haha… anyway.
Azure AD Connect is the Microsoft tool designed to meet and accomplish your hybrid identity goals. It provides the following features:
Password hash synchronization – A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.
Pass-through authentication – A sign-in method that allows users to use the same password on-premises and in the cloud, but doesn’t require the additional infrastructure of a federated environment.
Federation integration – Federation is an optional part of Azure AD Connect and can be used to configure a hybrid environment using an on-premises AD FS infrastructure. It also provides AD FS management capabilities such as certificate renewal and additional AD FS server deployments.
Synchronization – Responsible for creating users, groups, and other objects, as well as making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes.
Health Monitoring – Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.
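The "hash of a hash" idea behind password hash synchronization can be sketched in Python. This is a deliberately simplified toy: the salt-and-re-hash shape follows my reading of the docs, but the real protocol and parameters are MSFT's implementation details, and sha256 here is just a stand-in for the on-prem AD hash.

```python
# Toy sketch of password hash synchronization: the on-prem password hash is
# itself re-hashed with a per-user salt before anything leaves for the cloud,
# so the original on-prem hash is never stored in Azure AD.
import hashlib
import os

def prepare_for_sync(onprem_password_hash):
    """Re-hash the on-prem hash with a fresh salt before syncing."""
    salt = os.urandom(16)
    synced = hashlib.pbkdf2_hmac("sha256", onprem_password_hash, salt, 1000)
    return salt, synced

def cloud_verify(candidate_hash, salt, synced):
    """The cloud side only ever compares re-hashed values."""
    return hashlib.pbkdf2_hmac("sha256", candidate_hash, salt, 1000) == synced

onprem = hashlib.sha256(b"hunter2").digest()   # stand-in for the on-prem AD hash
salt, synced = prepare_for_sync(onprem)
print(cloud_verify(onprem, salt, synced))                              # True
print(cloud_verify(hashlib.sha256(b"wrong").digest(), salt, synced))   # False
```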
So it seems like using a wizard is bad, but you know, we've been through the server thing before and it's a great idea to know, like, most things: Azure AD Connect sync: Understand and customize synchronization
The Azure Active Directory Connect synchronization services (Azure AD Connect sync) is a main component of Azure AD Connect. It takes care of all the operations that are related to synchronizing identity data between your on-premises environment and Azure AD. Azure AD Connect sync is the successor of DirSync, Azure AD Sync, and Forefront Identity Manager with the Azure Active Directory Connector configured.
This is a huge can of worms, haha, wow, this is exciting. Anyway, so here's this: Introduction to the Azure AD Connect Synchronization Service Manager UI
The Synchronization Service Manager UI is used to configure more advanced aspects of the sync engine and to see the operational aspects of the service.
You start the Synchronization Service Manager UI from the start menu. It is named Synchronization Service and can be found in the Azure AD Connect group.
Again, this is as far into this as I'm going, but yeah, I love this crap haha… time to move on.
I don't know what any of these things are, obviously, so let's define them:
- Azure Service Bus – Microsoft Azure Service Bus is a fully managed enterprise integration message broker. Service Bus can decouple applications and services. Service Bus offers a reliable and secure platform for asynchronous transfer of data and state. Data is transferred between different applications and services using messages. A message is in binary format and can contain JSON, XML, or just text. For more information, see Integration Services.
- Azure Relay – The Azure Relay service enables you to securely expose services that run in your corporate network to the public cloud. You can do so without opening a port on your firewall, or making intrusive changes to your corporate network infrastructure.
- Azure Event Grid – Azure Event Grid allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to send the event to. Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has support for your own events, using custom topics.
- Azure Event Hubs – Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
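The common thread in all four is decoupling via messages. Here's a toy sketch in plain Python (a stdlib queue standing in for a Service Bus queue, no Azure SDK, and the order fields are made up) of why that matters for something like the shopping cart: the producer and consumer never call each other, they only agree on a message format.

```python
# Toy decoupling sketch: producer and consumer only share a message format
# (JSON here); the broker holds messages until the consumer is ready.
import json
import queue

broker = queue.Queue()   # stands in for a Service Bus queue

def producer(order_id, items):
    broker.put(json.dumps({"order_id": order_id, "items": items}))

def consumer():
    msg = json.loads(broker.get())
    return f"processing order {msg['order_id']} ({len(msg['items'])} items)"

producer(42, ["cart-item-a", "cart-item-b"])
print(consumer())   # processing order 42 (2 items)
```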
I'm now understanding them, and the 'Restaurant Telemetry' one seems like it would be Event Hub. The 'Inventory' answer makes sense, but to be honest I'm not sure about the first one, 'Shopping Cart'; however, I think it's also Service Bus.
Anyway, that's about all for now, and I've accomplished all the things I wanted to do yesterday, this morning. So that's pretty cool. I am so fucking excited about Azure! I'm happy I got the Sec+ and Net+, but this type of stuff is so much fun to learn!
So, I decided to go the Azure route and I'm starting into these questions. So far it's a 50% blog rate with a lower rate of correct answers. I'm finding that for the things I understand conceptually, the terms that I'm assuming they are using are backwards. This is fine though; at least I understand what they are talking about. The first time I started studying for the MCSA for Server 2012, I was absolutely clueless. I'm finding the material to be about hosting web apps, and that's not super surprising, but I would be interested to compare business cases for using Azure over AWS for eCommerce and so forth. Regardless of specifics such as that, there does seem to be a lot of networking and inter-connectivity to discuss. This excites me. What worries me is the amount of hands-on, click-through type questions I'm seeing. I should be able to create a trial Azure sub and work through those, though. Lots of maybes in that one haha.

I've also decided to start using the WordPress native text editor and host images on WordPress, foregoing Photobucket. I was very used to the way I was doing things, but to be honest, this saves time. Photobucket does want 8 dollars a month to remove watermarks, and I keep trying to pay that and they say I'm picking an invalid plan lol. Regardless, I'll miss CoffeeCup. It's kind of fun to edit HTML and it's a familiar way of doing things. I have to say, typing away and drag-and-drop for images is much easier. Links and so forth seem to be easier to do as well. Bulleted lists? Well, I'll get to that. Anyway, let's dig into these questions and see how long it takes to find myself in a spot where I feel good about spending money on taking the test. Rough guess: 1-2 months. Regardless, I really enjoy reading MSFT articles and love the feel of the new docs page, so I'm happy to be here!
Ok, so right out of the gate something I should maybe have an idea on that I got wrong. I’m assuming that scale set is the same as configuration script but I need to look that up as well as generally find something that has this process outlined. As you can see they have put a link here with an article thats probably helpful but for starters I want to make sure I know what a configuration script is and what a VM scale set is
So, right away I'm seeing that you can't throw in a link on this editor without typing something in first? This is fine, but kind of odd. I'll get used to it. Anyway, https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/overview. Ok, so I'm missing something, or there isn't a way to change the label of the URL to a name rather than the link? Thankfully, I can switch to HTML to fix that: Virtual Machine Scale Sets. Ok, so maybe I opened my text editor, created the link, realized I could switch the entire paragraph to HTML, and then copied and pasted the link haha. Anyway…
Azure virtual machine scale sets let you create and manage a group of identical, load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update a large number of VMs. With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and container workloads.
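The "automatically increase or decrease in response to demand" part can be simulated in a few lines of Python. The 70%/30% CPU thresholds and the 2-10 instance bounds here are made-up numbers for illustration, not Azure defaults:

```python
# Simulating a scale set's instance count responding to a CPU metric,
# bounded by made-up minimum/maximum instance counts.

def autoscale(instances, cpu_percent, minimum=2, maximum=10):
    if cpu_percent > 70 and instances < maximum:
        return instances + 1      # scale out
    if cpu_percent < 30 and instances > minimum:
        return instances - 1      # scale in
    return instances

count = 2
for cpu in (85, 90, 75, 20, 15, 50):
    count = autoscale(count, cpu)
    print(cpu, "->", count)
# 85 -> 3, 90 -> 4, 75 -> 5, 20 -> 4, 15 -> 3, 50 -> 3
```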
So basically it's a set of rules that govern an entire set of VMs, as opposed to having to define rules for each one. Now, this seems like a huge benefit over using containers, because I don't think a container tech similar to that exists. Let's find out real quick!… Short answer: no. People are apparently running containers inside of machine scale set VMs, though. So that's not really surprising, and I understand the use case for having varying instances of databases or JBoss or, like, Apache. So now let's look at a configuration script.
A configure script is an executable script designed to aid in developing a program to be run on a wide number of different computers. It matches the libraries on the user's computer with those required by the program before compiling it from its source code. As a common practice, all configure scripts are named configure. Usually, configure scripts are written for the Bourne shell, but they may be written for execution in any desired shell.
Well, that's cool. If you copy from a website, somehow it comes over as HTML and keeps the hyperlinks. Anyway, that's not at all what I thought it was. I get lost when starting to consider code, but basically it seems like it's the bit of code that configures file mapping and allows executables to run with dependencies? I was thinking it would hold information on how to set up VMs for some reason…
Welp, I think I may only get one question done today because I want to cover everything I don't know about, which is a lot. So let's start into the next thing:
Azure Resource Manager templates. Ok, so I learned that if you type out text you can highlight it, then click the link button and put the URL in that. So that's exciting. Anyway,
With the move to the cloud, many teams have adopted agile development methods. These teams iterate quickly. They need to repeatedly deploy their solutions to the cloud, and know their infrastructure is in a reliable state. As infrastructure has become part of the iterative process, the division between operations and development has disappeared. Teams need to manage infrastructure and application code through a unified process.
To meet these challenges, you can automate deployments and use the practice of infrastructure as code. In code, you define the infrastructure that needs to be deployed. The infrastructure code becomes part of your project. Just like application code, you store the infrastructure code in a source repository and version it. Any one on your team can run the code and deploy similar environments.
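To make "the infrastructure code becomes part of your project" concrete: an ARM template is just a JSON document you can version like source code. This skeleton shows only the standard top-level sections (a real template would declare things under "resources"), built with Python's json module for readability:

```python
# Minimal ARM template skeleton: the whole deployment is a versionable
# JSON document. Only the standard top-level sections are shown here.
import json

template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [],   # resource declarations would go here
    "outputs": {},
}

print(json.dumps(template, indent=2))
```

Because it's plain JSON, the same file can live in a source repository next to the application code, which is exactly the unified-process point the quote is making.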
Oh, look at that, transform to quote. Anyway, regarding "Teams need to manage infrastructure and application code through a unified process": while this is not wrong, it shuffles server admin roles to the dev team. How do I feel about this? Doesn't matter, because it's cloud based and it's correct, and shops where the high-and-mighty admin is the sole proprietor of a bit of hardware are kind of annoying. If it was a localized issue with, like, a physical NLB or a firewall, I could understand concern. However, getting that into the hands of the people that are managing the applications is best practice. Not that anyone asked me. Which leads me to wonder what it is I'm studying, or why. Tip: if you know how to manage AD servers, you can spin up instances of those, and then you're the person that needs to be managing those.
So we may as well cover these other two things on this. I was hoping to get about 10 slides done today, but as you can see there's a rather vast skill set that I need to cover, and I find it better to write the book and do my own research rather than sitting down with the latest edition of Azure Lighthouse haha… so, Automation account! What's that?
Azure Automation delivers a cloud-based automation and configuration service that supports consistent management across your Azure and non-Azure environments. It comprises process automation, configuration management, update management, shared capabilities, and heterogeneous features. Automation gives you complete control during deployment, operations, and decommissioning of workloads and resources.
Basically desired state configuration, cross-platform between on-prem and off-prem Linux and Windows systems. No mention of 'account', but this sounds expensive when you throw that in, as if it doesn't come with the service normally. But holy shit is that cool.
Azure Policy helps to enforce organizational standards and to assess compliance at-scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill-down to the per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
Common use cases for Azure Policy include implementing governance for resource consistency, regulatory compliance, security, cost, and management. Policy definitions for these common use cases are already available in your Azure environment as built-ins to help you get started.
Alright, not super interesting, but the amount of fine-grained detail you can get over an environment with Azure is insane. I really like this product and I’m only one question into this study! So much adventure to be had! There is obviously a lot more detail to go into on each of these topics, but figuring out that they exist is a good place to start. Anyway, that’s all for today. Oh wait, we didn’t cover the link in the question, did we? Haha, I guess we should do that! Shit, it looks like my saved test crapped out. That’s never happened before. Sometimes it randomizes when you restart, which is a total pain in the ass, but whatever. I’ve got about 10 questions to cover.
When you define a virtual machine scale set with an Azure template, the Microsoft.Compute/virtualMachineScaleSets resource provider can include a section on extensions. The extensionsProfile details what is applied to the VM instances in a scale set. To use the Custom Script Extension, you specify a publisher of Microsoft.Azure.Extensions and a type of CustomScript.
The fileUris property is used to define the source install scripts or packages. To start the install process, the required scripts are defined in commandToExecute. The following example defines a sample script from GitHub that installs and configures the NGINX web server.
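Putting those pieces together, the extension section of the scale set template looks roughly like this. Note this is a sketch from the doc’s description, not the doc’s actual sample: the GitHub URL is a placeholder, and the typeHandlerVersion is just a plausible value, so check the current docs before using it:

```json
"extensionProfile": {
  "extensions": [
    {
      "name": "AppInstall",
      "properties": {
        "publisher": "Microsoft.Azure.Extensions",
        "type": "CustomScript",
        "typeHandlerVersion": "2.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "fileUris": [
            "https://raw.githubusercontent.com/<account>/<repo>/master/install_nginx.sh"
          ],
          "commandToExecute": "bash install_nginx.sh"
        }
      }
    }
  ]
}
```

The publisher/type pair (Microsoft.Azure.Extensions / CustomScript) is what tells Azure which extension to run on each VM instance in the set.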
So basically, you create the scale set (probably helpful to know what that is first) and point to a GitHub repo (or wherever your install files are; assuming you can use Docker commands in this too, like get-docker?). Man, also a super cool thing to look at that actually excites me for some reason. This is so much more fun than Security+, let me tell you! This stuff is like new, functional tech that you can build things with! One thing to note: it says to edit the extension profile before creating the Azure scale set.
Ok, for real this time. Now that we have things arranged logically rather than randomly and have a picture of what’s covered in this one slide, I think we are done for the moment. I’ll be blogging later about the other 10 slides. We may even cover a few of them twice, depending on what happens, but I’ll get through this. Looking like it may take more than 1-2 months, but who knows! I want to actually learn this stuff, and I like reading MSFT docs like a big fucking weirdo haha
So my goal this year was to do these two Azure exams, which have a combined total of fewer questions than the Security+. So, I may go for all 3 of these this year. Make it a MSFT year haha
The desktop admin one I feel like I could have done in about a month, but it’s over 300 dollars in test fees, and not working kind of puts a damper on shelling out that much cash; for some reason I think it would be fun, though. However, it seems exactly like the cert I got on Vista. Which sucks, because that one is probably the most helpful in job placement at this point, and no one cares if you got a cert on Vista. 285 questions isn’t much when the Security+ alone was almost 700.
The 365 one is for administering all the Office products that are out these days, like Teams and really anything 365-based. Azure is the cloud platform where your AD servers are hosted off-prem, or maybe you’re using it for web hosting, which would be unusual. I’m not really sure how hosting works in 365, because I’m assuming it’s all hosted by MSFT, but I might be wrong there. That one seems the most appealing to start the ‘fiscal year’ with, as I have questions haha.
Azure is going to be a bunch of authentication specifics, I’m pretty sure. Unless they go into actually using the OSes you’re hosting. Which could be 365 products. I’m not really sure what else there is to know about it. I’m not writing code against it. The second one says design, but one would assume that would be typical AD domain design, though the interface is different.
I might download the VCE for the desktop platform and go through it without actually taking the test, and start into the 365 stuff. Not a ton of people are looking for Azure folks yet. Humm… I could be wrong on that though. Maybe start back to front? I was pretty stern in saying that the 300 and 301 were what I wanted to do next. I’m going to go through job postings on LinkedIn and decide. The salary recs for Azure people are higher per Google, but I’m pretty sure most people in IT are aware that can be misleading info in the real world.
More and more, people don’t care about expert views. That’s according to Tom Nichols, author of “The Death of Expertise,” who says Americans have become insufferable know-it-alls, locked in constant conflict and debate with others over topics they actually know almost nothing about. Nichols shares his humble opinion on how we got here.
— Read on www.pbs.org/newshour/amp/show/problem-thinking-know-experts
Test review: very hard, and pretty much none of the available test prep material questions are directly on the test; however, the questions are modeled in the same fashion, so be prepared for that! Pretty sure this was the toughest cert test that I’ve taken to date.
Very nervous about what’s going to happen shortly. I feel like I’m right on the money with understanding the material, but somehow, when there are answers that both mean the same thing, or one is needed to have the other, and sometimes it’s the more specific answer and sometimes it’s the less specific answer, you get a little nervous. Anyway, I noticed that Photobucket has watermarked my images and requested that I pay them 8 dollars a month, so I started to wonder if it was worth it, and then began to look into my stats to see if anyone actually looks at this blog. I was quite surprised, so I thought I would throw some stats on here: I’ve gotten over 14k views and have written over 51k words! Amazing! So I guess people do look at this, even if it is surprising to me, as I never actually checked the stats for the page, given that employers don’t normally mention it. I also started a SoundCloud that is somehow doing ok. It’s mostly beats and slanted commentary on things. Down at the bottom, if you have opinions on pressing urban issues, obvious…
Anyway, here are the stats for my little blog
Trying to fix the issue with Photobucket in hopes that I don’t have to port everything over to another storage solution and can keep using an HTML editor, but it’s starting to look like the WordPress editor, with WordPress hosting, will work fine. I don’t know, I’ll see how I feel, but it is a hot mess…
I keep getting this error when trying to give them the requested 8 dollars, but regardless, this editor feels clunky to me.
That’s all for now… praying I don’t waste 300 dollars on this test this morning. And I may start posting tech articles to this blog; on the off chance that you read it, I would be happy to take feedback on whether posting non-study materials here is valid.
Almost through the questions I missed the first go-round, again. I think I’m in the 70-80 percent range, but it’s possible it’s worse. Hoping not, but I’m going to keep going through this lot of 250 till I’m in the mid 90s before going through everything again, and then hope I’m in the mid 90s there, but that’s probably not going to happen. Anyway, I’m getting much better at explaining answers and understanding what’s going on, rather than being like “uhaaa, I think it’s that one but I can’t really tell you why.” So that’s good. Anyway, got some more questions tonight.
I’ll be real: I’m not the best with things like port numbers in practice. Never mind; anyway, what I was saying was that everyone knows SSL uses 443, and that was the wrong answer, but I don’t know what ports some of these use, and you know, I won’t remember them most likely, but whatever. A for effort!
- STelnet – this is actual STelnet, which is secure Telnet running over SSH, so it uses port 22 (plain Telnet wrapped in TLS, by contrast, uses port 992)
- SCP – 22
- SNMP – SNMP uses UDP as its transport protocol. The well known UDP ports for SNMP traffic are 161 (SNMP) and 162 (SNMPTRAP)
- FTPS – FTP/S commonly runs on port 990 and sometimes on port 21, the primary difference being that port 990 is Implicit FTP/S and port 21 is Explicit FTP/S. If a client connects to an FTP/S server on port 990, the assumption is that the client intends to perform SSL from the start
- SSL – By default, HTTPS connections utilize TCP port 443, whereas the (non-secure) HTTP connections utilize port 80
- SFTP – SFTP (SSH File Transfer Protocol), not to be confused with FTPS (Secure FTP), runs on top of the SSH (Secure Shell) protocol and by default uses port 22 for communications
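Since these are pure memorization, here’s the list above condensed into a tiny Python lookup for flash-card style drilling. Note I’ve listed STelnet on 22, since it runs over SSH like SCP and SFTP do:

```python
# Quick-reference map of default ports for the protocols covered above.
# SSH-based protocols (STelnet, SCP, SFTP) all share port 22; SNMP uses UDP.
DEFAULT_PORTS = {
    "STelnet": 22,        # secure telnet over SSH
    "SCP": 22,
    "SFTP": 22,
    "SNMP": 161,          # agent queries, UDP
    "SNMPTRAP": 162,      # trap messages, UDP
    "FTPS_implicit": 990, # SSL assumed from the start
    "FTPS_explicit": 21,  # upgrades a plain FTP connection
    "HTTPS": 443,
    "HTTP": 80,
}

def port_for(protocol: str) -> int:
    """Look up the default port for a protocol name, case-insensitively."""
    canonical = {name.lower(): name for name in DEFAULT_PORTS}
    return DEFAULT_PORTS[canonical[protocol.lower()]]

print(port_for("sftp"))   # 22
print(port_for("HTTPS"))  # 443
```

Nothing fancy, but running through `port_for` a few times beats rereading the bullet list.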
Welp, that’s that one. SFTP and SCP, which seem like they should be the same thing but somehow are not: both run over SSH on port 22, but SCP just copies files, while SFTP is a full remote file-management protocol.
Jeez oh pete, how the fuck, that’s a long list of things. I took a stab at certificate pinning as I wasn’t sure if mutual authentication was common, but really, I’m not sure what that even is. So this is, like, work.
Let’s start by defining stuff:
- key rotation – generally speaking, this is the generation of new encryption keys to replace old ones; it’s a manual process unless you use a third-party vendor, from what I understand
- mutual authentication – basically, anything that’s SSL/TLS can provide mutual authentication, which includes PEAP: “The difference is: PEAP is a SSL wrapper around EAP carrying EAP. TTLS is a SSL wrapper around diameter TLVs (Type Length Values) carrying RADIUS authentication attributes.” So it fits the bill here, and you can basically look up whether something is SSL/TLS to find out if it works under these conditions
- secure hashing – SHA has nothing to do with this
- certificate pinning – there are lots of sites on this, and it seems like it’s mostly used by mobile apps to associate with a specific X.509 host certificate, so it’s not applicable in this case
Screwed this up again, so let’s go through it one more time!
- Rule-based access control – Under Rules Based Access Control, access is allowed or denied to resource objects based on a set of rules defined by a system administrator. As with Discretionary Access Control, access properties are stored in Access Control Lists (ACL) associated with each resource object. When a particular account or group attempts to access a resource, the operating system checks the rules contained in the ACL for that object.
- Role-based access control – Essentially, RBAC assigns permissions to particular roles in an organization. Users are then assigned to that particular role. For example, an accountant in a company will be assigned to the Accountant role, gaining access to all the resources permitted for all accountants on the system. Similarly, a software engineer might be assigned to the developer role.
- Mandatory access control – MAC takes a hierarchical approach to controlling access to resources. Under a MAC enforced environment access to all resource objects (such as data files) is controlled by settings defined by the system administrator. As such, all access to resource objects is strictly controlled by the operating system based on system administrator configured settings. It is not possible under MAC enforcement for users to change the access control of a resource.
- Discretionary access control- under DAC a user can only set access permissions for resources which they already own. A hypothetical User A cannot, therefore, change the access control for a file that is owned by User B. User A can, however, set access permissions on a file that she owns. Under some operating systems it is also possible for the system or network administrator to dictate which permissions users are allowed to set in the ACLs of their resources.
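The role-based one clicks best for me as code, so here’s a toy sketch of the idea: permissions attach to roles, users get a role, and an access check walks user → role → permissions. The role and user names are made up for illustration:

```python
# Toy RBAC model: permissions are granted to roles, never directly to users.
ROLE_PERMISSIONS = {
    "accountant": {"read_ledger", "write_ledger"},
    "developer": {"read_repo", "push_code"},
}

# Each user is assigned a role, like the accountant example in the definition.
USER_ROLES = {
    "alice": "accountant",
    "bob": "developer",
}

def has_access(user: str, permission: str) -> bool:
    """Check whether the user's assigned role grants the requested permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

print(has_access("alice", "read_ledger"))  # True: accountants can read the ledger
print(has_access("alice", "push_code"))    # False: that's a developer permission
```

Contrast with DAC, where alice herself could grant bob access to her files; here only whoever administers the role tables can change what anyone gets.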
Maybe I understand it now.
Is it just me, or does it seem that Kerberos is more likely to use a PKI? It’s just me? Ok then. But, like, is SFTP not left over from dial-up? Oh, you said that the specific internal function that Kerberos provides has nothing to do with encryption and that’s a dumb answer? Shit. Ok, well then. SFTP was first started in 1997? Humm, well then I guess it varies by version (SFTP Public Key Authentication). SAML is off the table, and SIP is like basic VoIP shit. IPSec doesn’t really have anything to do with this either. So, ok.
It seems like firewall logs would have more info than a system that attaches to a firewall to monitor traffic, but maybe that’s just me. There’s also no website to point to that proves this, but ok.
What even the fuck. Any of this! Do they mean SQL injection? Doesn’t seem super likely, but ok. Anyway: how is SQL injection done through FTP?
Welp, I think that’s all for now.