Azure, Pt. 4

Maybe this won't turn into several blog posts, but I've quickly learned that this is like starting from scratch on infrastructure: you may think you know how something works, but this is cloud, so we do it a little bit differently. Which is fine, but you know, definitional stuff and concepts. For example, this first one. Clearly it's always encrypted, but what are we calling that?

Anyway, I don't know what the rest of this shit is, so let's get to lookin'! Normally I would have a good system for designing this, but since we are on this new block editor, we will see how this goes (you know, the old bulleted list with hyperlinks and a short blurb to the side).

  • Advanced data security for Azure SQL Database – Advanced data security is a unified package for advanced SQL security capabilities. It includes functionality for discovering and classifying sensitive data, surfacing and mitigating potential database vulnerabilities, and detecting anomalous activities that could indicate a threat to your database. It provides a single go-to location for enabling and managing these capabilities. – really a product that does assessment and remediation
  • Always Encrypted – Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data and can view it, and those who manage the data but should have no access. By ensuring on-premises database administrators, cloud database operators, or other high-privileged unauthorized users can't access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to store their data in Azure, and enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
  • Elastic pools – SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
  • Transparent data encryption for SQL Database and Azure Synapse – Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Synapse SQL in Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application. By default, TDE is enabled for all newly deployed Azure SQL databases and needs to be manually enabled for older databases of Azure SQL Database, Azure SQL Managed Instance, or Azure Synapse. – I find this confusing as it seems to indicate this is only for data in an 'at rest' state, but ok (quick CLI check after this list)
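
Since TDE is on by default only for newer databases, checking and enabling it is worth knowing. A minimal sketch with the Azure CLI, using made-up resource names:

# Check the current TDE status of a database
az sql db tde show --resource-group MyRG --server myserver --database mydb

# Enable TDE on an older database where it isn't on by default
az sql db tde set --resource-group MyRG --server myserver --database mydb --status Enabled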

So I took some time off after writing one question and basically remodeled a bathroom for some reason. It started with cleaning the grout. Which turned into re-grouting because grout is cheap, which turned into painting the walls, and naturally you have to replace all the hardware and paint the cabinets while applying a distressed finish covered with a shiny lacquer. Anyway, the current episode of 'This Old House' is finished. Thankfully I didn't spend too much money on it, but it does look great! Anyway, let's uha, get back to work. Sound good? Great. So the first part of this, after learning that Always Encrypted (not "AlwaysOn," which is the availability group feature) is for NPPI data, which seems obvious, the next few things I have no idea about, so let's get into that:

  • Encrypt a Column of Data – One would think, based on this name, that it would apply to this scenario; however, it does not. This is a normal encryption scenario where you generate keys to decrypt the data. Anyway, what we are looking for is instructions for setting up Always Encrypted, which apparently is here: Query columns using Always Encrypted with SQL Server Management Studio, and now we have a bullet with a quote, bear with me here: “To enable Always Encrypted, type Column Encryption Setting = Enabled. To disable Always Encrypted, specify Column Encryption Setting = Disabled or remove the setting of Column Encryption Setting from the Additional Properties tab (its default value is Disabled).” So that has to be enabled for Always Encrypted to work
  • Public Database Role – Every database user belongs to the public database role. When a user has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. Database users cannot be removed from the public role. – basically it's kind of extra and doesn't matter
  • The encryption keys bit, per the second article linked in the first bullet: “Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine” (hypothetical connection string below)
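
So the switch lives entirely on the client side. A hypothetical ADO.NET-style connection string with the setting enabled would look something like this (server, database, and user are made up):

Server=tcp:myserver.database.windows.net,1433;Database=mydb;User ID=myuser;Password=<password>;Column Encryption Setting=Enabled;

With that set, the client driver fetches and unwraps the column encryption keys and does the encrypt/decrypt locally, so the Database Engine only ever sees ciphertext.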

So here we are, realizing that this Azure cert is application dev, SQL, Windows Server, networking, virtualization and containers. Damn, we are really getting into some shit now, boys. And girls. Maybe it's only one or the other that reads this. Who knows. Anyway, type 1 for Burt Reynolds and 2 for Jim Croce. If you didn't pick it up, this is a mustache contest. Actually the photos are backwards because it's clear that Croce is number 1, but type 2 for Croce. Unless you really think the guy running blocker for Coors and banging Sally Field is A number 1. I don't even think he was really bangin' her, but I could be wrong.

The Story Behind The Song: I Got A Name by Jim Croce | Louder
Burt Reynolds: A Star With the Pedal to the Metal - The New York Times

Anyway,

You know, at first I found this confusing because it doesn't state that the appliance router is functioning as a subnet gateway, and I don't think it gave the IP of the appliance either, but that appears to be the situation: traffic hits the soft router that's functioning as a gateway and then goes from there. If you don't know what an appliance is, see Virtual Appliance.
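
The way you actually steer a subnet's traffic through an appliance like that is a user-defined route. A minimal sketch with the Azure CLI, assuming a hypothetical appliance NIC at 10.0.2.4:

# Route table with a route that sends 10.2.0.0/16 through the appliance
az network route-table create --resource-group MyRG --name MyRouteTable
az network route-table route create --resource-group MyRG --route-table-name MyRouteTable \
  --name ToAppliance --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.2.4

# Attach the route table to the subnet whose traffic should be steered
az network vnet subnet update --resource-group MyRG --vnet-name MyVNet \
  --name MySubnet --route-table MyRouteTable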

Anyway, here’s Jim Croce

Autoscale is obvious; managed disks offer a lot of benefits, but I can't find anything directly stating that you have to use them in this scenario. So I'm still a little confused, but I can say that managed disks offer a ton of benefits. No idea what the extra cost is, but given how they work I would assume you have to use them with autoscale, though I don't see any indicator of that. Anyway, here is some more info on 'Managed Disks'
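
For what it's worth, here is roughly what wiring up autoscale looks like from the CLI for a VM scale set. A sketch with made-up names, not the exact scenario from the question:

# Baseline autoscale profile: keep between 2 and 5 instances
az monitor autoscale create --resource-group MyRG --name MyAutoscale \
  --resource MyScaleSet --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --min-count 2 --max-count 5 --count 2

# Scale out by one instance when average CPU runs hot for 5 minutes
az monitor autoscale rule create --resource-group MyRG --autoscale-name MyAutoscale \
  --condition "Percentage CPU > 70 avg 5m" --scale out 1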

This one has a doc associated, so I'm going to start there: Use Azure Import/Export service to import data to Azure Files

Well, this simply mentions that you need them, and not really in specific language; it's one of many recommendations, along with having a FedEx account for some reason. But you do have to have them. It doesn't go into why or anything like that, but here is info on the creation of the two documents

  • Preparing hard drives for an Import Job – The WAImportExport tool is the drive preparation and repair tool that you can use with the Microsoft Azure Import/Export service. You can use this tool to copy data to the hard drives you are going to ship to an Azure datacenter. After an import job has completed, you can use this tool to repair any blobs that were corrupted, were missing, or conflicted with other blobs. After you receive the drives from a completed export job, you can use this tool to repair any files that were corrupted or missing on the drives. In this article, we go over the use of this tool.

What is a dataset CSV?

The dataset CSV file, the value of the /dataset flag, is a CSV file that contains a list of directories and/or a list of files to be copied to target drives. The first step to creating an import job is to determine which directories and files you are going to import. This can be a list of directories, a list of unique files, or a combination of those two. When a directory is included, all files in the directory and its subdirectories will be part of the import job.

For each directory or file to be imported, you must identify a destination virtual directory or blob in the Azure Blob service. You will use these targets as inputs to the WAImportExport tool. Directories should be delimited with the forward slash character “/”.
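
To make that concrete, a dataset CSV for an Azure Files import looks something like this. Paths and share names are invented, and I've trimmed it to the core columns, so check the doc for the full header:

BasePath,DstItemPathOrPrefix,ItemType
"F:\UploadToAzure\","myfileshare/",AzureFile
"F:\Standalone\report.txt","myfileshare/docs/report.txt",AzureFile

The first row copies a whole directory tree into the share; the second maps a single file to a destination path, with forward slashes on the Azure side like the doc says.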

What is a driveset CSV?

The value of the /InitialDriveSet or /AdditionalDriveSet flag is a CSV file that contains the list of disks to which the drive letters are mapped so that the tool can correctly pick the list of disks to be prepared. If the data size is greater than a single disk size, the WAImportExport tool will distribute the data across multiple disks enlisted in this CSV file in an optimized way.

There is no limit on the number of disks the data can be written to in a single session. The tool will distribute data based on disk size and folder size. It will select the disk that is most optimized for the object-size. The data when uploaded to the storage account will be converged back to the directory structure that was specified in dataset file. In order to create a driveset CSV, follow the steps below.
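
And a driveset CSV is just drive letters plus format/encryption choices. Something like this, assuming two fresh disks the tool should format and BitLocker-encrypt (column names per the WAImportExport doc):

DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
G,Format,SilentMode,Encrypt,
H,Format,SilentMode,Encrypt,

If a disk were already formatted and encrypted, you'd use AlreadyFormatted / AlreadyEncrypted and supply the existing BitLocker key in the last column.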

Anyway, back to the new format being more or less confusing, but I think I sort of understand this. Man, when I thought Server had a lot to learn, I was a little off. Not really, but I'm saying there is a ton of stuff to learn with this, and it's going to take a while to get familiar with. I'm not really sure how much more or different stuff they could put in the new exams, but I'll be working on this until changes are made that indicate that new material is the way to go or the only option.

How to Reduce the Costs of your Azure IaaS VMs – Thomas Maurer

I hadn’t considered this but yeah those licenses are expensive.

If you already have existing Windows Server and SQL Server on-premises licenses with Software Assurance, you can use them for Azure virtual machines (VMs). This will allow you to save the Pay-as-you-go cost for Windows Server and SQL Server licenses. The Azure Hybrid Benefit applies not only to Azure VMs but also on Azure SQL Database PaaS services and the Azure Dedicated Host. If you want to know more about how to take advantage of the Azure Hybrid Benefit, check out the Microsoft Azure Docs page.
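
Claiming the benefit is a one-liner per VM. A sketch with hypothetical names:

# Apply the Azure Hybrid Benefit to an existing Windows VM
az vm update --resource-group MyRG --name MyVM --set licenseType=Windows_Server

# Or claim it at creation time
az vm create --resource-group MyRG --name MyVM --image Win2019Datacenter \
  --license-type Windows_Server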

Azure 3.3, I'll be on this for a while

Welp, yesterday went fairly well. Ended up going for brunch and leaving the house for the first time in like 2 or 3 months to actually do something. Anyway, I think this is going to be a short post unless I go through some more questions.

Wow, a slightly tricky question where they actually expound upon the answer so I don't have to look around confused-like. Having never used Azure for storage, or really at all: root is not C:\ but /. Good note. I don't think this would work the same way on a local install, but I could be wrong. Well, per this (which does state that ADD can be used to copy files into a build), the source paths are relative to the build context, the directory you build from, and you don't annotate the drive. Interesting note, if I'm reading it correctly.

The Docker documentation doesn't use a drive anywhere either. I'm not sure why I didn't notice that haha

My first instinct in this was to select the two correct answers, but then I considered the A answer and went with that and C. Why I thought forwarded traffic was more important than remote gateways was that it seemed redundant for some reason. It seems like if you use gateway transit it has to go through a gateway and into the other one, right? I don't know about this one; I need to read the article. Probably a good place to start: Virtual network peering

Virtual network peering enables you to seamlessly connect networks in Azure Virtual Network. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines uses the Microsoft backbone infrastructure. Like traffic between virtual machines in the same network, traffic is routed through Microsoft’s private network only.

Azure supports the following types of peering:

  • Virtual network peering: Connect virtual networks within the same Azure region.
  • Global virtual network peering: Connecting virtual networks across Azure regions.

The benefits of using virtual network peering, whether local or global, include:

  • A low-latency, high-bandwidth connection between resources in different virtual networks.
  • The ability for resources in one virtual network to communicate with resources in a different virtual network.
  • The ability to transfer data between virtual networks across Azure subscriptions, Azure Active Directory tenants, deployment models, and Azure regions.
  • The ability to peer virtual networks created through the Azure Resource Manager.
  • The ability to peer a virtual network created through Resource Manager to one created through the classic deployment model. To learn more about Azure deployment models, see Understand Azure deployment models.
  • No downtime to resources in either virtual network when creating the peering, or after the peering is created.

Network traffic between peered virtual networks is private. Traffic between the virtual networks is kept on the Microsoft backbone network. No public Internet, gateways, or encryption is required in the communication between the virtual networks.

So that's helpful, but it's not the specifics we are looking for. This one has more info: Create, change, or delete a virtual network peering

So it turns out this is a fairly specific scenario, as it doesn't indicate that you're using a VPN; it says hub and spoke. Which apparently works the same way as using a VPN. Forwarded traffic is also covered in that article, which explains why you would want to do that and gives scenario examples. I'll let you click through to the article if you're interested, but I'm sure you're excited about my highlighted notes in this bad boy. Anyway, that's all for this one.
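
For reference, those peering options map straight onto CLI flags. A hub-and-spoke sketch with invented VNet names; the hub side allows gateway transit and the spoke side uses the remote gateway:

# Hub side: let the spoke use the hub's gateway, accept forwarded traffic
az network vnet peering create --resource-group MyRG --name HubToSpoke1 \
  --vnet-name HubVNet --remote-vnet Spoke1VNet \
  --allow-vnet-access --allow-forwarded-traffic --allow-gateway-transit

# Spoke side: point at the hub and use its gateway
az network vnet peering create --resource-group MyRG --name Spoke1ToHub \
  --vnet-name Spoke1VNet --remote-vnet HubVNet \
  --allow-vnet-access --allow-forwarded-traffic --use-remote-gateways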

Azure! Part 3.2… Or Network Watcher, NSGs and more!

I'm unemployed at the moment and doing lots of interviews, but with this COVID-19 stuff not a lot is going on. Unemployment is also kind of tough, as my employer has filed a claim but they are still sitting on it. My bills are paid up for this month, but I'm pretty sure I'll have to cash out my small 401k, as it doesn't look like unemployment is coming through any time soon. I complain, but there are people in much worse positions. Also, I paid a company to redo my resume (I don't think I mentioned that) and sent them a list of information about my blog and what's covered on various certs. Excited to get it back, because I'm not really certain how to organize some of that stuff, and based on emails it looked like they could tell I was a highly skilled, hard-working professional. But who knows, they are corporate linguistic experts. Anyway, let's get to work.

I found it kind of surprising with this one that they didn't give an idea of why the setup is with VM3, but I have no idea what NSGs are, so I think we need to start there: Network security groups

You can use an Azure network security group to filter network traffic to and from Azure resources in an Azure virtual network. A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol. This article describes properties of a network security group rule, the default security rules that are applied, and the rule properties that you can modify to create an augmented security rule.
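
The rules themselves look like this from the CLI. A minimal sketch, names invented:

# Create an NSG and allow inbound HTTPS from the internet
az network nsg create --resource-group MyRG --name MyNsg
az network nsg rule create --resource-group MyRG --nsg-name MyNsg \
  --name AllowHttpsInbound --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes Internet --destination-port-ranges 443

# Associate it with a subnet (it can also go on a NIC)
az network vnet subnet update --resource-group MyRG --vnet-name MyVNet \
  --name MySubnet --network-security-group MyNsg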

This is kind of basic stuff, but there is a specific flavor to it, and I'm starting to realize I might want to watch one of those hour or two hour long videos on Azure networking basics. However, the idea is that NSGs basically function as a rule set, as if traffic was going through a configured switch. At least that's my understanding so far. If it functions like Hyper-V, it may prove to provide too many non-useful granular detail settings, but hopefully that isn't the case. I mean, that's my experience using Hyper-V in 2019, but maybe you had a different experience. Anyway, where were we? Oh yeah, all right, let's get into Azure Network Watcher. Also, a static diagram seems like a good idea, as you could see which NSG was applied where (right, I couldn't figure that out) and view conflicting rules, at least until you see what Azure Network Watcher is. In the below video you can see the tool in use, and from the starting point up to about 5 minutes in they are talking about this scenario. They also get into diagramming uptime and so forth, but what I would like to see is whether the tool shows in real time if a connection is broken and offers a reason as to why. That doesn't seem that hard, but I could be wrong haha

At around 6 minutes, you can see that if you run through some things it will tell you more information, but I'm wondering about a heads-up display with, like, real-time diagram situations. Anyway, you can view and change network diagrams from Network Watcher.
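
The piece of Network Watcher that answers the "which rule is breaking VM3" kind of question is IP flow verify. A sketch with placeholder addresses:

# Ask whether inbound traffic to the VM would be allowed, and by which rule
az network watcher test-ip-flow --resource-group MyRG --vm VM3 \
  --direction Inbound --protocol TCP \
  --local 10.0.0.4:80 --remote 203.0.113.5:60000

The output says Allow or Deny and names the security rule that made the decision, which is pretty much the "why is this broken" answer I was hoping the diagrams would surface.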

You know, since I've found that being more thorough is helpful when studying for certification tests, let's just make this a big long post where we learn about Azure networking. So let's look at these other answers. We can start with Azure Monitor. Since these are tools, I think videos might be more helpful, and I found this Azure Monitor video to be the most helpful. There is another video that looks much slicker, but to be honest this one has the best description and tool use cases

So I searched YouTube for videos to come up with this one and discovered it was on a page. This image shows that you can get network insights using Azure Monitor, but I'm not seeing it on the page and he doesn't go into it in the video, so I'm going to assume it's more performance related than being a super useful tool to diagnose network issues, as that's probably what Network Watcher is for. Anyway, Azure Monitor overview

Azure Monitor can collect data from a variety of sources. You can think of monitoring data for your applications in tiers ranging from your application, any operating system and services it relies on, down to the platform itself. Azure Monitor collects data from each of the following tiers:

  • Application monitoring data: Data about the performance and functionality of the code you have written, regardless of its platform.
  • Guest OS monitoring data: Data about the operating system on which your application is running. This could be running in Azure, another cloud, or on-premises.
  • Azure resource monitoring data: Data about the operation of an Azure resource.
  • Azure subscription monitoring data: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
  • Azure tenant monitoring data: Data about the operation of tenant-level Azure services, such as Azure Active Directory.

Basically it seems like a place to sort logs pertaining to machine and app performance.

Ok, so what's a Traffic Manager Profile? Well, let's start here: What is Traffic Manager?

Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions, while providing high availability and responsiveness.

Traffic Manager uses DNS to direct client requests to the most appropriate service endpoint based on a traffic-routing method and the health of the endpoints. An endpoint is any Internet-facing service hosted inside or outside of Azure. Traffic Manager provides a range of traffic-routing methods and endpoint monitoring options to suit different application needs and automatic failover models. Traffic Manager is resilient to failure, including the failure of an entire Azure region.
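
Creating one of these profiles is quick. A sketch with made-up names; the endpoint's resource ID is a placeholder:

# DNS-based profile that routes clients to the lowest-latency endpoint
az network traffic-manager profile create --resource-group MyRG \
  --name MyTmProfile --routing-method Performance --unique-dns-name mytmdemo12345

# Add an Azure endpoint (a public IP or app) to the profile
az network traffic-manager endpoint create --resource-group MyRG \
  --profile-name MyTmProfile --name eastus-endpoint --type azureEndpoints \
  --target-resource-id "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.Network/publicIPAddresses/MyPip"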

Oh man, I keep hearing Azure regions mentioned, but I haven't gotten into that yet. Might as well grab that while we are thinking about it:

A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.

With more global regions than any other cloud provider, Azure gives customers the flexibility to deploy applications where they need to. Azure is generally available in 53 regions around the world, with plans announced for 5 additional regions.

Ok, that's straightforward and interesting, but let's get back to load balancing with hybrid cloud options, I mean Azure Traffic Manager… anyway, yeah, it's a powerful load balancer, and MSFT has some really great documentation about how to set it up and use it, like this profile for low latency that even goes into actually creating VMs, installing IIS and all that, and then finally gets into creating the profile that actually directs traffic: Tutorial: Improve website response using Traffic Manager. Anyway, I think that's all for this question. I'm going to do one more, watch an Azure networking video that I probably won't link (but you can find it if you can use Google), and then maybe go downtown for a nice long run.

I got the first two right in this question, but I have no idea what they are talking about with a probe. The other two are basic networking questions. I mean, maybe not basic as in home router but at… literally nothing I can say at this point won't sound pretentious as hell haha. Anyway, let's figure out what a probe is. The naming convention here is a little wonky, but I can read through the idea to understand what it is: Application Gateway health monitoring overview

An application gateway automatically configures a default health probe when you don’t set up any custom probe configuration. The monitoring behavior works by making an HTTP request to the IP addresses configured for the back-end pool. For default probes if the backend http settings are configured for HTTPS, the probe uses HTTPS as well to test health of the backends.

For example: You configure your application gateway to use back-end servers A, B, and C to receive HTTP network traffic on port 80. The default health monitoring tests the three servers every 30 seconds for a healthy HTTP response. A healthy HTTP response has a status code between 200 and 399.

If the default probe check fails for server A, the application gateway removes it from its back-end pool, and network traffic stops flowing to this server. The default probe still continues to check for server A every 30 seconds. When server A responds successfully to one request from a default health probe, it’s added back as healthy to the back-end pool, and traffic starts flowing to the server again.

So a probe is basically a heartbeat, and the naming conventions for that concept are usually changed and everyone calls it something different. It's one server saying "hey, are you up?" to another server, but perhaps this is a little more in-depth, as heartbeats don't usually have rule sets identified with them; this is for larger-scale infrastructure.
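
If the default 30-second probe doesn't fit, you can define a custom one. A sketch, assuming an existing gateway and hypothetical names:

# Custom health probe: check /health every 30s, unhealthy after 3 failures
az network application-gateway probe create --resource-group MyRG \
  --gateway-name MyAppGateway --name MyHealthProbe \
  --protocol Http --host myapp.contoso.com --path /health \
  --interval 30 --timeout 30 --threshold 3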

Honestly, the next two questions in this are not as expansive, so I may try to figure out some more stuff. Who knows. Anyway, thank you to whoever actually reads this blog! I appreciate your viewership of this thing that I put time and money into haha

Azure Pt. 3.1, Container image hosting…

Well, I paid a company to redo my resume. Not a lot of stuff out there at the moment, but my current resume format seems a bit crowded, and it's probably good to have someone who deals with resumes day in and day out take a look at it and figure out how to organize things and what to highlight. So much Linux experience with NAPA, and I'm attempting to target a Windows Server Admin role, but we will see how that goes haha. Anyway, back to Azure and realizing that this two-test cert could take most of the year. This is fine with me. If I have extra time maybe I'll start into the 70-744, but they are no longer offering that in January of next year, and to be honest a Network+ and a Security+ with two MCSAs sounds better than a Core Infrastructure MCSE to me, as it's vendor diverse and implies the same thing with less confusion as to meaning. That's not to say that I don't want the Core Infrastructure MCSE, but I'm not sure I have the time/value for it, unfortunately; it would feel awesome to pass the 70-744, though. It's also becoming very apparent that cloud computing is the future, so here we are. Anyway, this is the part where I start throwing in questions and trying to figure out what everything is.

One would assume that the data has to go into some type of storage for a container, assuming we are using Docker, as that's what I learned about on the 2016 MCSA, but who knows. Let's take a look at how containers work in Azure. This may take a while or it may not. Who knows. Let's start with the link in the question: Deploy an Azure Web App Container

I don't know what YAML is; I've heard the term thrown around, but I'm not super familiar with it, so let's sort through that. You can see in the screenshot that it's pointing to a container registry. Earlier they ran a command to pull your Docker image from a GitHub repo that has sample container images, or you would assume it's sample images, but really it's just code pointing to default Docker test images, as seen below

# This Dockerfile is for a test container to demonstrate use of docker-compose with Azure Pipelines
# See http://docs.microsoft.com/azure/devops/pipelines/languages/docker for more information

FROM ubuntu:trusty
RUN apt-get update && apt-get install -yq curl && apt-get clean
WORKDIR /app
ADD docs/test.sh /app/test.sh
CMD ["bash", "test.sh"]

Kind of confused by this at first, as it's running an update script; but the FROM line is what points to the image (the ubuntu:trusty base), and the RUN update happens at build time, so it isn't going to break apps already running in a container haha. Anyway, if you're interested, install Docker on your machine and then download something like this: Couchbase, and you've got a small VM-like container running on your machine. Now let's figure out what YAML is

YAML (a recursive acronym for "YAML Ain't Markup Language") is a human-readable data-serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted. YAML targets many of the same communications applications as Extensible Markup Language (XML) but has a minimal syntax which intentionally differs from SGML. It uses both Python-style indentation to indicate nesting and a more compact format that uses [] for lists and {} for maps, making YAML 1.2 a superset of JSON.

Funny thing about programming languages, I've learned that everything in 2020 is basically Java or XML haha. I've also found it very helpful to occasionally poke around in Kali and walk through some basic stuff on VulnHub, as it promotes familiarity with Linux, and unless you want to sit around and build web apps at home or something it's sort of like Leapfrog learning. It also comes with free super cool sunglasses and a hoodie (follow @viss on Twitter for more info haha)


Edit: now with hackerman HD images courtesy of Viss that I wasn’t sure where I had saved

Anyway, back to Docker on Azure. The point being, this is DevOps, and they have linked an article that is fairly specific and slightly confusing for infrastructure people with no background in containers. For the sake of "this is the article they linked," I'm going to start with Pipelines and then dig into the container process, since that's what the linked repo concerns: Azure Pipelines

There is a lot going on with Pipelines lol, but this little graphic sums it up in common-folk talk the best. As an added bonus, be sure and note the underlying hackerman joke of 'deploy to target.' Also, have you read AWS documentation? Can you read technical documentation well haha, JFC! I think we have a little comedy club going for those … you know what, never mind, it's better this way. Ok, here we go: Build An Image

This one makes sense: you throw in your Dockerfile, and this is a template that has images and isn't just piping code in. I don't know, I could be fucking this up, as I haven't used Docker super extensively, but I sort of get the basics. All right, well, it's kind of clear on that one. Again, not really in DevOps, so I would have to do some more research and testing, but I have found that 'hacker man' stuff is a great way to figure some of this out. Again, I don't recommend it for cool points, but local Defcon groups can be great fun. Anyway, Container Registry? (normally I don't link the sales pages, but I found this helpful)

Wow! Pipelines for patching! Not… building images haha


And now we are back on Docker with the technical documentation: Introduction to private Docker container registries in Azure (why not just use Docker Hub, pull your image over, and pipe code to the container? Who knows… I'm not in sales or DevOps haha)

Azure Container Registry is a managed, private Docker registry service based on the open-source Docker Registry 2.0. Create and maintain Azure container registries to store and manage your private Docker container images and related artifacts.

Use Azure container registries with your existing container development and deployment pipelines, or use Azure Container Registry Tasks to build container images in Azure. Build on demand, or fully automate builds with triggers such as source code commits and base image updates.

All right, this is kind of what I was looking for Quickstart: Create a private container registry using the Azure portal

An Azure container registry is a private Docker registry in Azure where you can store and manage private Docker container images and related artifacts. In this quickstart, you create a container registry with the Azure portal. Then, use Docker commands to push a container image into the registry, and finally pull and run the image from your registry.

To log in to the registry to work with container images, this quickstart requires that you are running the Azure CLI (version 2.0.55 or later recommended). Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

You must also have Docker installed locally. Docker provides packages that easily configure Docker on any macOS, Windows, or Linux system.
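
The whole quickstart boils down to about five commands. A sketch with a made-up registry name (they have to be globally unique):

# Create a private registry and log in to it
az acr create --resource-group MyRG --name myregistry12345 --sku Basic
az acr login --name myregistry12345

# Tag a local image with the registry's login server and push it
docker pull mcr.microsoft.com/hello-world
docker tag mcr.microsoft.com/hello-world myregistry12345.azurecr.io/hello-world:v1
docker push myregistry12345.azurecr.io/hello-world:v1

# Run it back from the registry to prove the round trip
docker run myregistry12345.azurecr.io/hello-world:v1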

Alright, so you have to be running Win10 Pro to get Docker, because virtualization is locked on the Home version, and as such I can't run any hypervisor on my current main machine. I have two older machines that work fine for that; I use this one for blogging and FL Studio mostly. I may upgrade at some point, but jesus christ is it a pain in the ass to get a large SSD and Windows 10 if you go through the Dell site to order a PC. Not to mention, I would like to simply put in my volume license key that I bought off of eBay for a quarter of the price of what Dell charges for Windows 10 and have it mysteriously work so I can fuck with Docker when I feel like it. Sorry for cussing. Back on track: it is looking like you may not be able to use public image repositories per the MSFT-suggested method, but I'm sure there are ways around that. Maybe? Regardless, now we know where Docker images are hosted. Honestly, I think that's a good place to stop for now, as this turned into a wall of text fairly quickly.

Azure…Part 2! …where I quote myself

Learning so much! I try to do blog posts that cover 5 questions I'm struggling with, but the amount of research that goes into them is more voluminous than what I'm used to. The very first question required almost 1k words, and I had to split the 5-question lot into 2 posts. Thus the confusion on the post numbering, because you know, I like to stick with an established method of doing things. Very traditional haha… anyway, here is the first question:

I have some confusion about data storage terms, and now might be a good time to clear those up. The other part of the question is what kind of data the message file is. Like, is it CSV? I would assume it to be, as that would most likely be the most efficient way. I'm not sure if I'll get an answer to that, but maybe I'll learn something by searching for one. Regardless, it seems that would plausibly go into a table if that were the case, but I can see two problems with that: for one, why use SQL for everything, and two, it might end up being fragmented data based on how the conversations are handled. Data lake and blob seem the same to me, so let's figure out what that means: Azure Data Lake vs Azure Blob Storage in Data Warehousing

I think blob storage is good at non-text based files – database backups, photos, videos and audio files. Whereas data lake I feel is a bit better at large volumes of text data. More often than not, personally, I would choose Data Lake Store if I’m using text file data to be loaded into my data warehouse. Of course, you can use blob storage, but I feel that is for those non-text data that I mentioned above.

Welp, that's helpful. It seems more than 'opinion based,' but let's find out what MSFT has to say: Comparing Azure Data Lake Storage Gen1 and Azure Blob Storage

Based on this, I like the idea that Data Lake is for text files, and worst case scenario we can for sure assume that chat message logs are text based, but I'm not sure how they are indexed, outside of using SQL, to find keywords and so forth, and MSFT isn't really saying. It can't be that hard to figure out if you actually have one or look at someone's chat data.

The next interesting thing in this is 'a file share in an Azure storage account': is this blob storage? haha. Now stop me if I'm wrong on the differences between hierarchical and folder storage models not being 'containers'

Regardless, there are more storage types than blob and data lake. The confusing part is that no one exactly lays out data lake in articles. I read through several and found this one helpful: Microsoft Azure Storage Overview

Azure blob storage: It is optimized to store huge unstructured data. Storage is in terms of binary large objects (BLOBs).

Azure table storage: It has now become a part of Azure Cosmo DB. Azure table stores structured NoSQL data.

Azure file storage: It is a fully managed file sharing service in the cloud or on-premise via the Server Message Block (SMB) protocol.

Azure queue storage: It is a storage service that stores messages that can be accessed through HTTP or HTTPS from any part of the globe.

Disk storage: It is a virtual hard disk (VHD) which is of two types: managed and unmanaged.

Which matches up with what MSFT says in this Introduction to the core Azure Storage services

The Azure Storage platform includes the following data services:

Azure Blobs: A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage Gen2.

Azure Files: Managed file shares for cloud or on-premises deployments.

Azure Queues: A messaging store for reliable messaging between application components.

Azure Tables: A NoSQL store for schemaless storage of structured data.

Azure Disks: Block-level storage volumes for Azure VMs.

As you can see it doesn’t mention Data Lake storage but there is a separate article for that Introduction to Azure Data Lake Storage Gen2

Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob storage. Data Lake Storage Gen2 is the result of converging the capabilities of our two existing storage services, Azure Blob storage and Azure Data Lake Storage Gen1. Features from Azure Data Lake Storage Gen1, such as file system semantics, directory, and file level security and scale are combined with low-cost, tiered storage, high availability/disaster recovery capabilities from Azure Blob storage.

And this leads me to wonder if there will be variations between Gen1 and Gen2 noted on the test. I guess we will get to that when it shows up. Anyway, it's for big data that you run analytics against. Somehow. Anyway, based on this information, I think that the Data Lake answer makes sense.
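
Since Gen2 is "built on Azure Blob storage," creating one is really just creating a storage account with the hierarchical namespace turned on. A sketch with invented names:

# A Data Lake Storage Gen2 account is a StorageV2 account with
# the hierarchical namespace enabled
az storage account create --resource-group MyRG --name mydatalake12345 \
  --location eastus --sku Standard_LRS --kind StorageV2 \
  --hierarchical-namespace true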

I'm not sure what these roles are, so let's find that out: What is role-based access control (RBAC) for Azure resources?

Owner – Has full access to all resources including the right to delegate access to others.

Contributor – Can create and manage all types of Azure resources but can’t grant access to others.

Reader – Can view existing Azure resources.

User Access Administrator – Lets you manage user access to Azure resources.

Ok, so this one is pretty straightforward.
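
Assigning one of those roles from the CLI looks like this; assignee and scope are placeholders:

# Give a user read-only access to everything in one resource group
az role assignment create --assignee someone@contoso.com --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/MyRG"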

Haha, this is great, I get to quote myself! And no, I didn't do this on purpose.

Quickly realizing that you gave an answer to a question in the last blog post, and then looking at the question and somehow not knowing right away that the answer is what you said in the last blog post, regardless of what you were thinking when going through the questions, is a good sign that this may take longer than expected. Sigh. I've also realized that keeping text from articles in a standardized format is more annoying with this method of blogging than using a text editor, so I'll go back and clean that one up. This next one has a lot of screenshots and does not appear to be answerable, but I would like to cover the material nonetheless.

And if you let them sit a few days haha magic… anyway, here is the list of articles possibly related to this topic, thus assuring me there is an overwhelming amount of info covered on this bad boy

Anyway, this is the one we are covering in this question: Configure a VNet-to-VNet VPN gateway connection by using the Azure portal

Wow, that is a long article, but the following is what they are talking about with the gateway (I'll put a CLI version of the same thing after the steps):

To create a virtual network gateway

  1. From the Azure portal menu, select Create a resource.
  2. In the Search the Marketplace field, type ‘Virtual Network Gateway’. Locate Virtual network gateway in the search return and select the entry. On the Virtual network gateway page, select Create. This opens the Create virtual network gateway page.
  3. On the Basics tab, fill in the values for your virtual network gateway.
    Project details
    • Subscription: Select the subscription you want to use from the dropdown.
    • Resource Group: This setting is autofilled when you select your virtual network on this page.
    Instance details
    • Name: Name your gateway. Naming your gateway is not the same as naming a gateway subnet. It's the name of the gateway object you are creating.
    • Region: Select the region in which you want to create this resource. The region for the gateway must be the same as the virtual network.
    • Gateway type: Select VPN. VPN gateways use the virtual network gateway type VPN.
    • VPN type: Select the VPN type that is specified for your configuration. Most configurations require a Route-based VPN type.
    • SKU: Select the gateway SKU from the dropdown. The SKUs listed in the dropdown depend on the VPN type you select. For more information about gateway SKUs, see Gateway SKUs.
    • Generation: For information about VPN Gateway Generation, see Gateway SKUs.
    • Virtual network: From the dropdown, select the virtual network to which you want to add this gateway.
    • Gateway subnet address range: This field only appears if your VNet doesn’t have a gateway subnet. If possible, make the range /27 or larger (/26,/25 etc.). We don’t recommend creating a range any smaller than /28. If you already have a gateway subnet, you can view GatewaySubnet details by navigating to your virtual network. Click Subnets to view the range. If you want to change the range, you can delete and recreate the GatewaySubnet.
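
And here's the promised CLI version of those portal steps. A sketch, assuming VNet1 already has a GatewaySubnet, with everything named hypothetically (gateways take a while to provision, hence --no-wait):

# Public IP for the gateway, then the gateway itself
az network public-ip create --resource-group MyRG --name VNet1GwPip \
  --allocation-method Dynamic
az network vnet-gateway create --resource-group MyRG --name VNet1Gw \
  --vnet VNet1 --public-ip-address VNet1GwPip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait

# Once a gateway exists on both VNets, connect them with a shared key
az network vpn-connection create --resource-group MyRG --name VNet1ToVNet2 \
  --vnet-gateway1 VNet1Gw --vnet-gateway2 VNet2Gw --shared-key "MySecretKey123"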

The gateway subnet range seems confusing until you assume they are using NAT, and on the other side of the gateway it wants to know what range of addresses is going through that gateway on the VLAN. Why? Honestly, no clue, which adds to the confusion of 'am I thinking correctly about why it asks for this'; normally it would ask for a gateway when going between VLANs. (It turns out the GatewaySubnet is where the gateway's own instances get their IP addresses, which is why it wants a dedicated range.) Assuming that's what a VNet is? Maybe I should verify that too: Azure Virtual Network frequently asked questions (FAQ)

What is an Azure Virtual Network (VNet)?

An Azure Virtual Network (VNet) is a representation of your own network in the cloud. It is a logical isolation of the Azure cloud dedicated to your subscription. You can use VNets to provision and manage virtual private networks (VPNs) in Azure and, optionally, link the VNets with other VNets in Azure, or with your on-premises IT infrastructure to create hybrid or cross-premises solutions. Each VNet you create has its own CIDR block and can be linked to other VNets and on-premises networks as long as the CIDR blocks do not overlap. You also have control of DNS server settings for VNets, and segmentation of the VNet into subnets.

Use VNets to:

Create a dedicated private cloud-only VNet. Sometimes you don’t require a cross-premises configuration for your solution. When you create a VNet, your services and VMs within your VNet can communicate directly and securely with each other in the cloud. You can still configure endpoint connections for the VMs and services that require Internet communication, as part of your solution.

Securely extend your data center. With VNets, you can build traditional site-to-site (S2S) VPNs to securely scale your datacenter capacity. S2S VPNs use IPSEC to provide a secure connection between your corporate VPN gateway and Azure.

Enable hybrid cloud scenarios. VNets give you the flexibility to support a range of hybrid cloud scenarios. You can securely connect cloud-based applications to any type of on-premises system such as mainframes and Unix systems.

So basically, it's the Azure version of a VLAN. Ok, so this block editor is super buggy; bear with me if there are some formatting issues. Using the block editor is a nightmare if you bring over text with both a header and an ordered list. It splits them into separate blocks that it won't merge; one will allow you to edit the text into a quote and the other won't. It's kind of a pain in the ass. You have to switch it to HTML and then delete the second block, and be careful you select the right block, because I accidentally deleted this paragraph and then it wouldn't let me use the back button to restore it. So I'm learning as I go with this, as opposed to stressing over formatting too much. Seems like a waste of time to become a quick expert.

Now I've got this blockquote text that I can't get to go away lol, fun. Anyway, I feel like I covered the topics at hand and will try to go back and adjust formatting on the previous post at some point. Hmm, I think I resolved that by changing the blockquote HTML that auto-populates into a paragraph, and then it didn't like that, so I switched it over to a classic block and removed it. That seems to have worked… thanks for all your help in illuminating these issues. Good work

Anyway, there was an additional question in this lot, but it's basically another post-unto-itself type of question and I'll get back to it later. That's all for now!

Azure Part uha… more questions

Now that I kind of know how to use this interface and have done 1 question, it's time to start into the next set and hopefully get 5-10 knocked out.

Well, I was on the right track here, but having no background in Azure it's kind of a shot in the dark even being familiar with MSFT stuff. I mean, I'm sort of familiar with Azure, but given my shock at what it does in the last post, it's clear that I have a lot to learn. I would assume that you would add the rule and then auto-scaling, but that could go either way. However, it does seem you would want to add the rule and then say the rule auto-applies? I mean, ok, you apply the auto-scaling with no logic haha. Anyway, splitting hairs. I would also assume that the rule contained the condition, but also, wrong.

Get started with Autoscale in Azure

When walking through the UI it makes sense, but it doesn't talk about Azure App Service tiers, so I should probably look at that. What's online isn't super clear, but it seems like it will scale out to an additional instance, so it's possible that figuring out the pricing of pushing another instance vs. setting it to a higher tier would be the configuration optimization issue. Maybe there is a video on YouTube about this… well, there's a page with a video that I found helpful: How and When to Scale Up/Out Using Azure Analysis Services

Let’s start with when to scale up your queries. You need to scale up when your reports are slow, so you’re reporting out of Power BI and the throughput isn’t working for your needs. What you’re doing with scaling up is adding more resources. The QPU is a combination of your CPU, memory and other factors like the number of users.

Memory checks are straightforward. You run the metrics in the Azure portal and you can see what your memory usage is, if your memory limited or memory hard settings are being saturated. If so, you need to either upgrade your tier or adjust the level within your current tier.

CPU bottlenecks are a bit tougher to figure out. You can get an idea by starting to watch your QPUs to see if you're saturating those, using those metrics and looking at the logs within the Azure portal. Then you want to watch your processing pool job queue length and your processing pool busy non-I/O threads. This should give you an idea of how it's performing.

For the most part, you’re going to want to scale up when the processing engine is taking too long to process the data to build your models.

Next up, scaling out. You’ll want to scale out if you’re having problems with responsiveness with reporting because the reporting requirements are saturating what you currently have available. Typically, in cases with a large number of users, you can fix this by scaling out and adding more nodes.

You can add up to 7 additional query replicas; these are Read-only replicas that you can report off, but the processing is handled on the initial instance of Azure Analysis Services and subsequent queries are being handled as part of those query replicas. Hence, any processing is not affecting the responsiveness of the reports.

After it separates the model processing from query engine, then you can measure the performance by watching the log analytics and query processing units and see how they’re performing. If you’re still saturating those, you’ll need to re-evaluate whether you need additional QPUs or to upgrade your tiers.

The thing about this is that it's not mentioning tiers beyond standard, and these are the current plans, but it's saying up or out. Up being to a 'better machine' and out being to create a replica machine at the same price point, as I understand it. Anyway, these are the current tiers:

Honestly, it's a fairly basic concept, but to calculate out the cost you'll probably need some kind of Azure Pricing Calculator. Anyway, I liked this link too: Horizontal vs Vertical scaling – Azure Autoscaling… moving on
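
In App Service terms, up vs. out is literally two different flags on the same command. A sketch with invented names:

# Scale UP: move the plan to a bigger tier (more CPU/memory per instance)
az appservice plan update --resource-group MyRG --name MyPlan --sku S2

# Scale OUT: keep the tier, add instances
az appservice plan update --resource-group MyRG --name MyPlan --number-of-workers 3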

First of all, I have no idea what QnA Maker is, at all, but I did get this right, except the terms were backwards. Which means it wasn't right. Not sure if this was legible. Anyway, here is the base article and not the 'how to': QnA Maker

QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your knowledge base—automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.

I mean, I could have figured that out based on the name, but let's find a docs article with, like, descriptors: Quickstart: Create, train, and publish your QnA Maker knowledge base. Ok, so I was confused by this one based on the question, but it starts with this:

Create your first QnA Maker knowledge base

Sign in to the QnAMaker.ai portal with your Azure credentials.

In the QnA Maker portal, select Create a knowledge base.

On the Create page, skip Step 1 if you already have your QnA Maker resource. If you haven't created the resource yet, select Create a QnA service. You are directed to the Azure portal to set up a QnA Maker service in your subscription. Remember your Azure Active Directory ID, Subscription, and the QnA resource name you selected when you created the resource. When you are done creating the resource in the Azure portal, return to the QnA Maker portal, refresh the browser page, and continue to Step 2.

There is an entire article about this that explains the QnA Maker management service: Manage QnA Maker resources

That one walks through using and creating things with it, and it's basically the engine that makes the API function. So basically anything having to do with the actual interaction uses that. Terms seem to be a little hazy, but that is what they are talking about.

Now, as for the runtime, this is a little confusing if you're thinking it's insights into how people interact with your data. One would assume that runtime would be associated with performance for hardware, but again, terms can be confusing at times. This is also from the setup article:

The QnAMaker runtime is part of the Azure App Service instance that’s deployed when you create a QnAMaker service in the Azure portal. Updates are made periodically to the runtime. The QnA Maker App Service instance is in auto-update mode after the April 2019 site extension release (version 5+). This update is designed to take care of ZERO downtime during upgrades.

You can check your current version at https://www.qnamaker.ai/UserSettings. If your version is older than version 5.x, you must restart App Service to apply the latest updates:

So, you can sort of see info on how the process is running here, and I'm assuming that also has processor and RAM usage info? Unclear, but I'm sure this is part of a standard format that I'll figure out as we move along.

Now, if you want to understand how people are interacting with your data, this is the information you're looking for: Get analytics on your knowledge base

QnA Maker stores all chat logs and other telemetry, if you have enabled App Insights during the creation of your QnA Maker service. Run the sample queries to get your chat logs from App Insights.

So, these terms are not backwards at all and I’m not really sure what the hell I was thinking when I answered it the way that I did but now I have a new level of clarity on all sorts of things. Moving on…

Ok so… this is a whole-ass set of stuff to get into. The answer seemed obvious to me, but what does the Azure AD Connect wizard do? The last time I looked into Azure and on-prem connect was 2012, and I'm assuming a lot has changed, so let's start there… holy shit… ok: What is hybrid identity with Azure Active Directory?

Alright… let's start with the What is Azure AD Connect? and I'll have the cheesecake, yes, the entire thing, thanks. haha… anyway. uha

Azure AD Connect is the Microsoft tool designed to meet and accomplish your hybrid identity goals. It provides the following features:

Password hash synchronization – A sign-in method that synchronizes a hash of a user's on-premises AD password with Azure AD.

Pass-through authentication – A sign-in method that allows users to use the same password on-premises and in the cloud, but doesn’t require the additional infrastructure of a federated environment.

Federation integration – Federation is an optional part of Azure AD Connect and can be used to configure a hybrid environment using an on-premises AD FS infrastructure. It also provides AD FS management capabilities such as certificate renewal and additional AD FS server deployments.

Synchronization – Responsible for creating users, groups, and other objects, as well as making sure identity information for your on-premises users and groups matches the cloud. This synchronization also includes password hashes.

Health Monitoring – Azure AD Connect Health can provide robust monitoring and provide a central location in the Azure portal to view this activity.

So it seems like using a wizard is bad, but you know, we've been through the Server thing before and it's a great idea to know, like, most things: Azure AD Connect sync: Understand and customize synchronization

The Azure Active Directory Connect synchronization services (Azure AD Connect sync) is a main component of Azure AD Connect. It takes care of all the operations that are related to synchronize identity data between your on-premises environment and Azure AD. Azure AD Connect sync is the successor of DirSync, Azure AD Sync, and Forefront Identity Manager with the Azure Active Directory Connector configured.

This is a huge can of worms haha, wow, this is exciting. Anyway, so here's this: Introduction to the Azure AD Connect Synchronization Service Manager UI

The Synchronization Service Manager UI is used to configure more advanced aspects of the sync engine and to see the operational aspects of the service.

You start the Synchronization Service Manager UI from the start menu. It is named Synchronization Service and can be found in the Azure AD Connect group.

Again, this is as far into this as I'm going, but yeah, I love this crap haha… time to move on

I don't know what any of these things are, obviously, so let's define them (quick CLI sketch after the list):

  • Azure Service Bus – Microsoft Azure Service Bus is a fully managed enterprise integration message broker. Service Bus can decouple applications and services. Service Bus offers a reliable and secure platform for asynchronous transfer of data and state. Data is transferred between different applications and services using messages. A message is in binary format and can contain JSON, XML, or just text. For more information, see Integration Services.
  • Azure Relay – The Azure Relay service enables you to securely expose services that run in your corporate network to the public cloud. You can do so without opening a port on your firewall, or making intrusive changes to your corporate network infrastructure.
  • Azure Event Grid – Azure Event Grid allows you to easily build applications with event-based architectures. First, select the Azure resource you would like to subscribe to, and then give the event handler or WebHook endpoint to send the event to. Event Grid has built-in support for events coming from Azure services, like storage blobs and resource groups. Event Grid also has support for your own events, using custom topics.
  • Azure Event Hubs – Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
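
And the promised sketch: standing up the two services this scenario seems to call for, with invented names (Event Hubs for the telemetry firehose, Service Bus for the order-style messages):

# Event Hubs: high-volume event ingestion (restaurant telemetry)
az eventhubs namespace create --resource-group MyRG --name mytelemetryns --sku Standard
az eventhubs eventhub create --resource-group MyRG --namespace-name mytelemetryns \
  --name restaurant-telemetry --partition-count 4

# Service Bus: brokered, reliable messaging (shopping cart / inventory)
az servicebus namespace create --resource-group MyRG --name myordersns --sku Standard
az servicebus queue create --resource-group MyRG --namespace-name myordersns \
  --name shopping-cart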

I'm now understanding them, and the 'Restaurant Telemetry' one seems like it would be Event Hub; the 'Inventory' answer makes sense, but to be honest I'm not sure about the first one, 'Shopping Cart'; however, I think it's also Service Bus.

Anyway, that's about all for now, and I've accomplished all the things I wanted to do yesterday, this morning. So that's pretty cool. I am so fucking excited about Azure! I'm happy I got the Sec+ and Net+, but this type of stuff is so much fun to learn!
